Search results for: high resolution satellite image
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22298

22118 The Impact of Varying the Detector and Modulation Types on Inter Satellite Link (ISL) Realizing the Allowable High Data Rate

Authors: Asmaa Zaki M., Ahmed Abd El Aziz, Heba A. Fayed, Moustafa H. Aly

Abstract:

ISLs are the most popular choice for deep space communications because these links are attractive alternatives to present-day microwave links. This paper explores the allowable high data rate over this link for different orbits, which is affected by variations in the modulation scheme and the detector type. Moreover, the objective of this paper is to optimize and analyze the performance of the ISL in terms of Q-factor and Minimum Bit Error Rate (Min-BER) for different detector types and their associated parameters.
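
For orientation, the Q-factor and minimum BER used in such link analyses are commonly related, under a Gaussian-noise assumption, by BER ≈ ½·erfc(Q/√2). The short sketch below evaluates only this textbook relation; it is not the paper's simulation model, and the Q values are arbitrary examples.

import math

def ber_from_q(q: float) -> float:
    """Approximate minimum BER for a given Q-factor, assuming Gaussian noise."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# Example: a Q-factor of about 6 corresponds to a BER near 1e-9.
for q in (3, 4, 5, 6, 7):
    print(f"Q = {q}:  BER ~ {ber_from_q(q):.2e}")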

Keywords: free space optics (FSO), field of view (FOV), inter satellite link (ISL), optical wireless communication (OWC)

Procedia PDF Downloads 367
22117 Potassium-Phosphorus-Nitrogen Detection and Spectral Segmentation Analysis Using Polarized Hyperspectral Imagery and Machine Learning

Authors: Nicholas V. Scott, Jack McCarthy

Abstract:

Military, law enforcement, and counter terrorism organizations are often tasked with target detection and image characterization of scenes containing explosive materials in various types of environments where light scattering intensity is high. Mitigation of this photonic noise using classical digital filtration and signal processing can be difficult. This is partially due to the lack of robust image processing methods for photonic noise removal, which strongly influence high resolution target detection and machine learning-based pattern recognition. Such analysis is crucial to the delivery of reliable intelligence. Polarization filters are a possible method for ambient glare reduction by allowing only certain modes of the electromagnetic field to be captured, providing strong scene contrast. An experiment was carried out utilizing a polarization lens attached to a hyperspectral imagery camera for the purpose of exploring the degree to which an imaged polarized scene of potassium, phosphorus, and nitrogen mixture allows for improved target detection and image segmentation. Preliminary imagery results based on the application of machine learning algorithms, including competitive leaky learning and distance metric analysis, to polarized hyperspectral imagery, suggest that polarization filters provide a slight advantage in image segmentation. The results of this work have implications for understanding the presence of explosive material in dry, desert areas where reflective glare is a significant impediment to scene characterization.

Keywords: explosive material, hyperspectral imagery, image segmentation, machine learning, polarization

Procedia PDF Downloads 113
22116 Landslide Vulnerability Assessment in the Context of the Indian Himalaya

Authors: Neha Gupta

Abstract:

Landslide vulnerability is considered a crucial parameter for the assessment of landslide risk. The term vulnerability is defined as the damage or degree of loss of the elements at risk across different dimensions, i.e., physical, social, economic, and environmental. The Himalaya region is very prone to multi-hazards such as floods, forest fires, earthquakes, and landslides. The increase in fatality rates and the loss of infrastructure and economy due to landslides in the Himalaya region motivate the assessment of vulnerability. This study presents a methodology to measure a combination of vulnerability dimensions, i.e., social vulnerability, physical vulnerability, and environmental vulnerability, in one framework. A combined assessment of these vulnerabilities has rarely been carried out, and no such approach has been applied in the Indian scenario. The methodology was applied in an area of east Sikkim Himalaya, India. The physical vulnerability comprises a building footprint layer extracted from remote sensing data and Google Earth imagery. The social vulnerability was assessed using population density based on land use. The land use map was derived from a high-resolution satellite image, and for the environmental vulnerability assessment, NDVI, forest, agricultural land, and distance from the river were assessed from remote sensing data and a DEM. The classes of social vulnerability, physical vulnerability, and environmental vulnerability were normalized on a scale of 0 (no loss) to 1 (loss) to obtain a homogeneous dataset. Multi-Criteria Analysis (MCA) was then used to assign individual weights to each dimension and integrate them into one frame. The final vulnerability was further classified into four classes from very low to very high.
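
The integration step described above amounts to a weighted linear combination of the normalized dimensions followed by reclassification. A minimal sketch is given below; the weights, layer values, and class breaks are assumed for illustration and are not those derived in the study.

import numpy as np

# Hypothetical normalized vulnerability layers (0 = no loss, 1 = loss), one value per cell.
physical = np.array([0.2, 0.8, 0.5])
social = np.array([0.1, 0.6, 0.9])
environmental = np.array([0.3, 0.4, 0.7])

# Illustrative MCA weights (assumed, not the study's); they must sum to 1.
weights = {"physical": 0.4, "social": 0.35, "environmental": 0.25}

combined = (weights["physical"] * physical
            + weights["social"] * social
            + weights["environmental"] * environmental)

# Classify into four classes from very low to very high (assumed equal-interval breaks).
classes = np.digitize(combined, bins=[0.25, 0.5, 0.75])
labels = np.array(["very low", "low", "high", "very high"])[classes]
print(combined, labels)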

Keywords: landslide, multi-criteria analysis, MCA, physical vulnerability, social vulnerability

Procedia PDF Downloads 281
22115 Improving Image Summarization Using Image Processing and the Particle Swarm Optimization Algorithm

Authors: Hooman Torabifard

Abstract:

In the last few years, with the progress of technology and computers and the entry of artificial intelligence into all kinds of scientific and industrial fields, human lifestyles have changed, and in general, the way humans live on earth has seen many changes and developments. Some of these changes have occurred in the context of digital images and image processing and still continue. However, besides all the benefits, there have been disadvantages. One of these disadvantages is the multiplicity of images with high volume and data; the focus of this paper is on improving and developing a method for summarizing these images and enhancing their productivity. The general method used for this purpose consists of a set of techniques based on data obtained from image processing and on the Particle Swarm Optimization (PSO) algorithm. In the remainder of this paper, the method used is elaborated in detail.
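
As a rough illustration of how PSO can be coupled with image-processing measurements, the sketch below uses a particle swarm to pick a global threshold that maximizes Otsu's between-class variance on a histogram. This is only one plausible reading of the abstract; the fitness function, swarm parameters, and histogram are assumptions, not taken from the paper.

import numpy as np

def between_class_variance(hist, t):
    """Otsu's between-class variance for an integer threshold t on a 256-bin histogram."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = (np.arange(t) * p[:t]).sum() / w0
    mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def pso_threshold(hist, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle swarm search for the threshold that maximizes between-class variance."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(1, 255, n_particles)          # particle positions (candidate thresholds)
    v = np.zeros(n_particles)                     # particle velocities
    pbest = x.copy()
    pbest_val = np.array([between_class_variance(hist, int(t)) for t in x])
    gbest = pbest[pbest_val.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 1, 255)
        vals = np.array([between_class_variance(hist, int(t)) for t in x])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmax()]
    return int(gbest)

# Usage on a synthetic bimodal histogram (two peaks around gray levels 60 and 190):
bins = np.arange(256)
hist = 1000 * np.exp(-(bins - 60) ** 2 / 200) + 800 * np.exp(-(bins - 190) ** 2 / 300)
print("PSO-selected threshold:", pso_threshold(hist))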

Keywords: image summarization, particle swarm optimization, image threshold, image processing

Procedia PDF Downloads 106
22114 A Method for Semantic Image Auto-Annotation

Authors: Lin Huo, Xianwei Liu, Jingxiong Zhou

Abstract:

Recently, due to the semantic gap between image visual features and human concepts, semantic image auto-annotation has become an important topic. Auto-annotation by search is a popular method: low-level visual features are extracted from the image and mapped, by a corresponding Hash method, into Hash codes, which are eventually transformed into binary strings and stored. We use this idea to design and implement a method of semantic image auto-annotation. Finally, tests based on the Corel image set show that this method is effective.

Keywords: image auto-annotation, color correlograms, Hash code, image retrieval

Procedia PDF Downloads 462
22113 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. A bi-temporal change detection method is unable to indicate continuous change occurring over a long period of time. To achieve this objective, the approach used here estimates a statistical model from a series of multispectral image data over a long period of time, assuming there is no considerable change during that time period, and then compares it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for estimating pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect a changed pixel from the probabilistically estimated model of the corresponding pixel. The changed pixel is detected assuming that the images have been co-registered prior to estimation. To minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are different challenges in this method. The first and foremost challenge is to get quite a large number of datasets for multivariate distribution modelling. A large number of images are always discarded due to cloud coverage. Due to imperfect modelling, there will be a high probability of false alarm. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.
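
To make the pixel-wise test concrete, the sketch below estimates a per-pixel Gaussian model from a change-free image stack and flags pixels in a later image whose GLRT-style statistic exceeds a threshold. It is deliberately simplified: a diagonal covariance is assumed, the 8-neighborhood refinement mentioned above is omitted, and the data are synthetic rather than Sentinel-2/Landsat-8.

import numpy as np

def glrt_change_map(stack, new_image, alpha=3.0):
    """
    stack:      (T, H, W, B) multispectral time series assumed change-free
    new_image:  (H, W, B) later acquisition, co-registered to the stack
    Returns a boolean change map. A diagonal-covariance Gaussian model is assumed
    per pixel; the test statistic is the squared Mahalanobis distance.
    """
    mu = stack.mean(axis=0)                           # per-pixel, per-band mean
    var = stack.var(axis=0) + 1e-6                    # per-pixel, per-band variance
    d2 = ((new_image - mu) ** 2 / var).sum(axis=-1)   # GLRT-style statistic under H0
    threshold = alpha * stack.shape[-1]               # crude threshold ~ alpha * number of bands
    return d2 > threshold

# Synthetic example: 10 dates, 64x64 pixels, 4 bands, with a changed patch.
rng = np.random.default_rng(1)
series = rng.normal(100, 5, (10, 64, 64, 4))
later = rng.normal(100, 5, (64, 64, 4))
later[20:30, 20:30, :] += 40                          # simulated infrastructure change
print(glrt_change_map(series, later).sum(), "changed pixels detected")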

Keywords: co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection

Procedia PDF Downloads 108
22112 Digital Twin Platform for BDS-3 Satellite Navigation Using Digital Twin Intelligent Visualization Technology

Authors: Rundong Li, Peng Wu, Junfeng Zhang, Zhipeng Ren, Chen Yang, Jiahui Gan, Lu Feng, Haibo Tong, Xuemei Xiao, Yuying Chen

Abstract:

Research on Beidou-3 satellite navigation is on the rise, but in actual work, satellite data insecurity, inefficient research and development, and the inability to deal with failures in advance are hard to avoid. Digital twin technology has obvious advantages in the simulation of life cycle models of aerospace satellite navigation products. In order to meet the increasing demand, this paper builds a Beidou-3 satellite navigation digital twin platform (BDSDTP). The basic establishment of BDSDTP was completed by establishing a digital twin counterpart, a comprehensive Beidou-3 digital twin design, a predictive maintenance (PdM) mathematical model, and a visual interaction design. Finally, this paper provides a time application case of the platform, which provides a reference for the application of BDSDTP in various fields of navigation and offers obvious help in extending the full life cycle of Beidou-3 satellite navigation.

Keywords: BDS-3, digital twin, visualization, PdM

Procedia PDF Downloads 85
22111 MSG Image Encryption Based on AES and RSA Algorithms "MSG Image Security"

Authors: Boukhatem Mohammed Belkaid, Lahdir Mourad

Abstract:

In this paper, we propose a new encryption system for securing meteorological images from Meteosat Second Generation (MSG), which generates 12 images every 15 minutes. The hybrid encryption scheme is based on the AES and RSA algorithms to provide three security services: authentication, integrity, and confidentiality. Confidentiality is ensured by AES, and authenticity is ensured by the RSA algorithm. Integrity is assured by a basic function of the correlation between adjacent pixels. Our system generates a unique password every 15 minutes that is used to encrypt each frame of the MSG meteorological data to strengthen and ensure its security. Several metrics have been used for the various tests of our analysis. For the integrity test, we observed the effectiveness of our system and how the cryptographic fingerprint changes at reception if a change affects the image in the transmission channel.
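
A minimal sketch of such a hybrid scheme is shown below, assuming the PyCryptodome library: each frame is encrypted with a fresh AES session key, and the session key itself is protected with RSA. The paper derives integrity from inter-pixel correlation; for brevity this sketch uses AES-EAX authenticated encryption instead, so it illustrates the key-handling pattern rather than the authors' exact construction.

from Crypto.Cipher import AES, PKCS1_OAEP
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes

# One-time RSA key pair for the receiving ground segment (assumed setup).
rsa_key = RSA.generate(2048)
public_key, private_key = rsa_key.publickey(), rsa_key

def encrypt_frame(image_bytes: bytes):
    """Encrypt one MSG image frame: AES for the pixels, RSA for the session key."""
    session_key = get_random_bytes(16)                  # fresh key, e.g. every 15 minutes
    cipher_aes = AES.new(session_key, AES.MODE_EAX)
    ciphertext, tag = cipher_aes.encrypt_and_digest(image_bytes)
    enc_session_key = PKCS1_OAEP.new(public_key).encrypt(session_key)
    return enc_session_key, cipher_aes.nonce, tag, ciphertext

def decrypt_frame(enc_session_key, nonce, tag, ciphertext):
    session_key = PKCS1_OAEP.new(private_key).decrypt(enc_session_key)
    cipher_aes = AES.new(session_key, AES.MODE_EAX, nonce)
    return cipher_aes.decrypt_and_verify(ciphertext, tag)  # raises if integrity check fails

frame = bytes(range(256)) * 16                           # stand-in for MSG pixel data
print(decrypt_frame(*encrypt_frame(frame)) == frame)     # True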

Keywords: AES, RSA, integrity, confidentiality, authentication, satellite MSG, encryption, decryption, key, correlation

Procedia PDF Downloads 353
22110 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques

Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang

Abstract:

Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy to overcome this problem is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast spoiled gradient-echo (FSPGR) T1-Weighted Imaging (T1WI) is often used to identify fat, calcification and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of the FIESTA image is related to T1/T2 and is different from that of SSFSE. The advantages and disadvantages of these two scanning sequences for fetal imaging have not been clearly demonstrated yet. This study aimed to compare these three rapid MRI techniques (SSFSE, FIESTA, and FSPGR) for fetal MRI examinations. The image qualities and influencing factors among these three techniques were explored. A 1.5T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used in this study. Twenty-five pregnant women were recruited to undergo fetal MRI examinations with SSFSE, FIESTA and FSPGR scanning. Multi-oriented and multi-slice images were acquired. Afterwards, the MR images were interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA can provide good image quality among these three rapid imaging techniques. Vessel signals on FIESTA images are higher than those on SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but it is prone to cause banding artifacts. FSPGR-T1WI renders a lower Signal-to-Noise Ratio (SNR) because it severely suffers from the impact of maternal and fetal movements. The scan times for these three scanning sequences were 25 sec (T2W-SSFSE), 20 sec (FIESTA) and 18 sec (FSPGR). In conclusion, all three rapid MR scanning sequences can produce high contrast and high spatial resolution images. The scan time can be shortened by incorporating parallel imaging techniques so that the motion artifacts caused by fetal movements can be reduced. Having a good understanding of the characteristics of these three rapid MRI techniques is helpful for technologists to obtain reproducible fetal anatomy images with high quality for prenatal diagnosis.

Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE

Procedia PDF Downloads 490
22109 Improved Performance in Content-Based Image Retrieval Using Machine Learning Approach

Authors: B. Ramesh Naik, T. Venugopal

Abstract:

This paper presents a novel approach that improves the high-level semantics of images based on a machine learning approach. Contemporary approaches for image retrieval and object recognition include Fourier transforms, Wavelets, SIFT, and HoG. Though these descriptors are helpful in a wide range of applications, they exploit zero-order statistics, and this lacks high descriptiveness of image features. These descriptors usually take advantage of primitive visual features such as shape, color, texture, and spatial location to describe images. These features are not adequate to describe the high-level semantics of the images. This leads to a semantic gap that causes unacceptable performance in image retrieval systems. A novel method referred to as discriminative learning, derived from a machine learning approach, has been proposed that efficiently discriminates image features. The analysis and results of the proposed approach were validated thoroughly on the WANG and Caltech-101 databases. The results proved that this approach is very competitive in content-based image retrieval.

Keywords: CBIR, discriminative learning, region weight learning, scale invariant feature transforms

Procedia PDF Downloads 148
22108 Next-Generation Laser-Based Transponder and 3D Switch for Free Space Optics in Nanosatellite

Authors: Nadir Atayev, Mehman Hasanov

Abstract:

Future spacecraft will require a structural change in the way data is transmitted due to the increase in the volume of data required for space communication. Current radio frequency communication systems are already facing a bottleneck in the volume of data sent to the ground segment due to their technological and regulatory characteristics. To overcome these issues, free space optics communication plays an important role in the integrated terrestrial space network due to its advantages, such as a significantly improved data rate compared to traditional RF technology, low cost, improved security, and inter-satellite free space communication; it uses a laser beam as an optical signal carrier to establish satellite-to-ground and ground-to-satellite links. In this approach, there is a need for high-speed and energy-efficient systems as a base platform for sending high-volume video and audio data. Nanosatellites and their CubeSat branch have more technical functionality than large satellites, whereas they cover an important part of the space sector with their Low-Earth-Orbit application area, low-cost design, and technical functionality for building networks using different communication topologies. Along the research theme developed in this regard, the output parameters of the FSO optical communication transceiver subsystem on existing CubeSat platforms have been studied, together with, in the direction of improving those parameters, a 3D optical switch and a laser-beam-controlled optical transponder with 2U CubeSat structural subsystems, their application in a Low Earth Orbit satellite network topology, and their functional performance and structural parameters.

Keywords: cubesat, free space optics, nano satellite, optical laser communication

Procedia PDF Downloads 58
22107 Embedded Digital Image System

Authors: Dawei Li, Cheng Liu, Yiteng Liu

Abstract:

This paper introduces an embedded digital image system for a Chinese space environment vertical exploration sounding rocket. In order to record the flight status of the sounding rocket as well as the payloads, an onboard embedded image processing system based on the ADV212, a JPEG2000 compression chip, is designed in this paper. Since the sounding rocket is not designed to be recovered, all image data should be transmitted to the ground station before re-entry, while the downlink band used for the image transmission is only about 600 kbps. At the same compression ratio, the JPEG2000 standard algorithm can achieve better image quality than other algorithms, so JPEG2000 image compression is applied under this condition of a limited downlink data band. This embedded image system supports lossless to 200:1 real-time compression, with two cameras to monitor nose ejection and motor separation, and two cameras to monitor boom deployment. The encoder, an ADV7182, receives the PAL signal from the camera and then outputs the ITU-R BT.656 signal to the ADV212. The ADV7182 switches between four input video channels according to the program sequence. Two SRAMs are used for ping-pong operation and one 512 Mb SDRAM for buffering high frame-rate images. The whole image system has the characteristics of low power dissipation, low cost, small size and high reliability, which is rather suitable for this sounding rocket application.

Keywords: ADV212, image system, JPEG2000, sounding rocket

Procedia PDF Downloads 396
22106 Aerosol Radiative Forcing Over Indian Subcontinent for 2000-2021 Using Satellite Observations

Authors: Shreya Srivastava, Sushovan Ghosh, Sagnik Dey

Abstract:

Aerosols directly affect Earth's radiation budget by scattering and absorbing incoming solar radiation and outgoing terrestrial radiation. While the uncertainty in aerosol radiative forcing (ARF) has decreased over the years, it is still higher than that of greenhouse gas forcing, particularly in the South Asian region, due to high heterogeneity in aerosol chemical properties. Understanding the spatio-temporal heterogeneity of aerosol composition is critical in improving climate prediction. Studies using satellite data, in-situ and aircraft measurements, and models have investigated the spatio-temporal variability of aerosol characteristics. In this study, we have taken aerosol data from the Multi-angle Imaging SpectroRadiometer (MISR) level-2 version 23 aerosol product retrieved at 4.4 km and radiation data from the Clouds and the Earth's Radiant Energy System (CERES, spatial resolution = 1° x 1°) for 21 years (2000-2021) over the Indian subcontinent. The MISR aerosol product includes size- and shape-segregated aerosol optical depth (AOD), Angstrom exponent (AE), and single scattering albedo (SSA). Additionally, 74 aerosol mixtures are included in the version 23 data, which are used for aerosol speciation. We have seasonally mapped aerosol optical and microphysical properties from MISR for India at a quarter-degree resolution. Results show strong spatio-temporal variability, with consistently higher values of AOD over the Indo-Gangetic Plain (IGP). The contribution of small-size particles is higher throughout the year, especially during the winter months. SSA is found to be overestimated where absorbing particles are present. The climatological map of shortwave (SW) ARF at the top of the atmosphere (TOA) shows strong cooling except in only a few places (values ranging from +2.5 to -22.5). Cooling due to aerosols is higher in the absence of clouds. Higher negative values of ARF are found over the IGP region, given the high aerosol concentration above the region. Surface ARF values are everywhere negative for our study domain, with higher values in clear conditions. The results strongly correlate with AOD from MISR and ARF from CERES.

Keywords: aerosol Radiative forcing (ARF), aerosol composition, single scattering albedo (SSA), CERES

Procedia PDF Downloads 25
22105 Processing Studies and Challenges Faced in Development of High-Pressure Titanium Alloy Cryogenic Gas Bottles

Authors: Bhanu Pant, Sanjay H. Upadhyay

Abstract:

Frequently, the upper stage of high-performance launch vehicles utilizes cryogenic-tank-submerged pressurization gas bottles with high volume-to-weight efficiency to achieve a direct gain in the satellite payload. Titanium alloys, owing to their high specific strength coupled with excellent compatibility with various fluids, are the materials of choice for these applications. Among the titanium alloys, two alloys are suitable for cryogenic applications, namely Ti6Al4V-ELI and Ti5Al2.5Sn-ELI. The two-phase alpha-beta alloy Ti6Al4V-ELI is usable down to the LOX temperature of 90 K, while the single-phase alpha alloy Ti5Al2.5Sn-ELI can be used down to the LHe temperature of 4 K. High-pressure gas bottles submerged in LH2 (20 K) can store a larger amount of gas than bottles of the same volume submerged in LOX (90 K). Thus, the use of these alpha alloy gas bottles stored at 20 K gives a distinct advantage, since fewer gas bottles are needed to store the same amount of high-pressure gas, which in turn leads to a one-to-one gain in the satellite payload. A cost advantage to the tune of 15,000 $/kg of weight saved in the upper stages, and thereby a satellite payload gain, is expected from this change. However, the processing of alpha Ti5Al2.5Sn-ELI alloy gas bottles poses challenges due to the lower forgeability of the alloy and the mode of qualification for the critical and severe application environment. The present paper describes the processing and challenges/solutions during the development of these advanced gas bottles for LH2 (20 K) applications.

Keywords: titanium alloys, cryogenic gas bottles, alpha titanium alloy, alpha-beta titanium alloy

Procedia PDF Downloads 17
22104 Non-Destructive Testing of Sandstones from Unconventional Reservoir in Poland with Use of Ultrasonic Pulse Velocity Technique and X-Ray Computed Microtomography

Authors: Michał Maksimczuk, Łukasz Kaczmarek, Tomasz Wejrzanowski

Abstract:

This study concerns high-resolution X-ray computed microtomography (µCT) and ultrasonic pulse analysis of Cambrian sandstones from a borehole located on the Baltic Sea coast of northern Poland. µCT and the ultrasonic technique are non-destructive methods commonly used to determine the internal structure of reservoir rock samples. The spatial resolution of the µCT images obtained was 27 µm, which enabled the authors to create accurate 3-D visualizations of the structure geometry and to calculate the ratio of pore volume to the total sample volume. A copper X-ray source filter was used to reduce image artifacts. Furthermore, the samples' Young's modulus and Poisson ratio were obtained with the use of the ultrasonic pulse technique. µCT and the ultrasonic pulse technique provide complex information which can be used for the exploration and characterization of reservoir rocks.
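
For context, the dynamic Young's modulus and Poisson ratio mentioned above are typically derived from the measured P- and S-wave pulse velocities and the bulk density via the standard isotropic-elasticity relations. The sketch below shows this calculation with assumed velocities and density, not the values measured in the study.

def dynamic_moduli(vp, vs, rho):
    """
    Dynamic elastic parameters from ultrasonic P- and S-wave velocities.
    vp, vs in m/s, rho in kg/m^3. Returns (Young's modulus in Pa, Poisson ratio).
    """
    nu = (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))
    E = rho * vs**2 * (3 * vp**2 - 4 * vs**2) / (vp**2 - vs**2)
    return E, nu

# Illustrative (assumed) values for a tight sandstone sample:
E, nu = dynamic_moduli(vp=4500.0, vs=2700.0, rho=2500.0)
print(f"E = {E / 1e9:.1f} GPa, Poisson ratio = {nu:.2f}")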

Keywords: elastic parameters, linear absorption coefficient, northern Poland, tight gas

Procedia PDF Downloads 220
22103 Preparation and Structural Analysis of Nano-Ciprofloxacin by Fourier Transform Infra-Red Spectroscopy, X-Ray Diffraction, and Scanning Electron Microscopy (SEM)

Authors: Shahriar Ghammamy, Mehrnoosh Saboony

Abstract:

Purpose: To evaluate the spectral specifications (IR, XRD, and SEM) of nano-ciprofloxacin prepared by the top-down method (satellite mill). Methods: the ciprofloxacin was reduced to the nano-scale with a satellite mill, and its characteristics were evaluated by infrared spectroscopy, XRD diffraction, and scanning electron microscopy (SEM). The expectation is to enhance the antibacterial property of nano-ciprofloxacin in comparison to ciprofloxacin. The IR spectrum of nano-ciprofloxacin was compared with the spectrum of ciprofloxacin, and the two were in close agreement, with one difference: the peaks in the spectrum of nano-ciprofloxacin were sharper than the peaks in the spectrum of ciprofloxacin. X-ray powder diffraction analysis of nano-ciprofloxacin shows a particle diameter equal to 90.9 nm (on the basis of the Scherrer equation). The SEM image shows a globular shape for the nano-ciprofloxacin.
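
The Scherrer estimate quoted above follows from D = Kλ / (β cos θ), where λ is the X-ray wavelength, β the peak width (FWHM, in radians), θ the Bragg angle, and K a shape factor near 0.9. The sketch below evaluates this relation with assumed Cu Kα inputs that happen to give a value near the reported 90.9 nm; they are not the measured peak parameters.

import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), with beta in radians."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative (assumed) values: Cu K-alpha radiation, a peak at 2-theta = 26 deg with 0.09 deg FWHM.
print(f"D ~ {scherrer_size(0.15406, 0.09, 26.0):.1f} nm")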

Keywords: antibiotic, ciprofloxacin, nano, IR, XRD, SEM

Procedia PDF Downloads 486
22102 Preparation and Structural Analysis of Nano Ciprofloxacin by Fourier Transform Infra-Red Spectroscopy, X-Ray Diffraction and Scanning Electron Microscopy (SEM)

Authors: Shahriar Ghammamy, Mehrnoosh Saboony

Abstract:

Purpose: to evaluate the spectral specifications (IR, XRD and SEM) of nano-ciprofloxacin prepared by the top-down method (satellite mill). Methods: the ciprofloxacin was reduced to the nano-scale with a satellite mill, and its characteristics were evaluated by infrared spectroscopy, XRD diffraction and scanning electron microscopy (SEM). Expectation: to enhance the antibacterial property of nano-ciprofloxacin in comparison to ciprofloxacin. The IR spectrum of nano-ciprofloxacin was compared with the spectrum of ciprofloxacin, and the two were in close agreement, with one difference: the peaks in the spectrum of nano-ciprofloxacin were sharper than the peaks in the spectrum of ciprofloxacin. X-ray powder diffraction analysis of nano-ciprofloxacin shows a particle diameter equal to 90.9 nm (on the basis of the Scherrer equation). The SEM image shows a globular shape for the nano-ciprofloxacin.

Keywords: antibiotic, ciprofloxacin, nano, IR, XRD, SEM

Procedia PDF Downloads 381
22101 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France

Authors: Bensaid A., Mostephaoui T., Nedjai R.

Abstract:

Nowadays, wetlands are the subject of contradictory debates opposing scientific, political, and administrative meanings. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant, and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures, and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves; depending on their age, composition, and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced, since the beginning of the 1990s, a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies, and management experiments. The geographical and physical characteristics of the wetlands of the central region conceal a large number of natural habitats that harbour a great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) in a certain way to be able to take action. Wetlands are no exception to this rule, even if it seems a difficult exercise to delimit a type of environment whose main characteristic is often to occupy the transition between aquatic and terrestrial environments. However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European database Corine Land Cover, which allows quantifying and characterizing, for each place, the characteristic wetland types. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (1 hectare). To address this limitation, it is important to note that these wetlands generally represent spatially complex features. Indeed, the use of very high spatial resolution images (>3m) is necessary to map small and large areas. However, recent artificial intelligence (AI) and deep learning methods for satellite image processing have shown much better performance compared to traditional processing based only on pixel structures. Our research work is also based on spectral and textural analysis of very high resolution (VHR) images (SPOT and IRC orthoimages) using two object-oriented approaches, the nearest neighbour approach (k-NN) and the Support Vector Machine approach (SVM). The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetlands, water body edges, mountain wetlands, river edges, and brackish marshes) with a kappa index higher than 85%.
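
To illustrate the two object-oriented classifiers named above, the sketch below trains a k-NN and an RBF SVM on hypothetical per-segment features and reports Cohen's kappa, the agreement measure quoted in the abstract. The feature values, class labels, and use of scikit-learn are assumptions for illustration, not the study's actual training data or toolchain.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

# Hypothetical object features (spectral means + texture statistics per segment)
# and labels (0 = wetland class, 1 = other), standing in for the image objects.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.3, 0.05, (200, 6)), rng.normal(0.6, 0.05, (200, 6))])
y = np.array([0] * 200 + [1] * 200)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_train, y_train)
    kappa = cohen_kappa_score(y_test, clf.predict(X_test))
    print(f"{name}: kappa = {kappa:.2f}")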

Keywords: land development, GIS, sand dunes, segmentation, remote sensing

Procedia PDF Downloads 25
22100 Deployment of Matrix Transpose in Digital Image Encryption

Authors: Okike Benjamin, Garba E J. D.

Abstract:

Encryption is used to conceal information from prying eyes. Presently, information and data encryption are common due to the volume of data and information in transit across the globe on a daily basis. Image encryption has yet to receive the attention from researchers that it deserves; as a result, video and multimedia documents are exposed to unauthorized access. The authors propose image encryption using matrix transpose. An algorithm that allows image encryption is developed. In this proposed image encryption technique, the image to be encrypted is split into parts based on the image size. Each part is encrypted separately using matrix transpose. The actual encryption is applied to the picture elements (pixels) that make up the image. After encrypting each part of the image, the positions of the encrypted parts are swapped before transmission of the image can take place. Swapping the positions of the parts is carried out to make the encrypted image more robust against decryption by any cryptanalyst.
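
A minimal sketch of this split-transpose-swap idea is given below, assuming a square grayscale image, a fixed block size, and a seeded permutation acting as the key; these choices are illustrative and not necessarily those of the proposed algorithm.

import numpy as np

def encrypt(image, block=64, seed=7):
    """Split a square grayscale image into blocks, transpose each block,
    then swap the block positions according to a keyed permutation."""
    h, w = image.shape
    cols = w // block                                # blocks per row
    blocks = [image[r:r+block, c:c+block].T          # transpose each part
              for r in range(0, h, block) for c in range(0, w, block)]
    perm = np.random.default_rng(seed).permutation(len(blocks))
    out = np.zeros_like(image)
    for i, j in enumerate(perm):                     # place original block j at slot i
        r, c = divmod(i, cols)
        out[r*block:(r+1)*block, c*block:(c+1)*block] = blocks[j]
    return out, perm

def decrypt(cipher, perm, block=64):
    h, w = cipher.shape
    cols = w // block
    out = np.zeros_like(cipher)
    for i, j in enumerate(perm):                     # slot i holds original block j
        r, c = divmod(i, cols)
        rj, cj = divmod(j, cols)
        out[rj*block:(rj+1)*block, cj*block:(cj+1)*block] = \
            cipher[r*block:(r+1)*block, c*block:(c+1)*block].T
    return out

img = np.arange(256 * 256, dtype=np.uint8).reshape(256, 256)
enc, perm = encrypt(img)
print(np.array_equal(decrypt(enc, perm), img))       # True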

Keywords: image encryption, matrices, pixel, matrix transpose

Procedia PDF Downloads 387
22099 Using Non-Negative Matrix Factorization Based on Satellite Imagery for the Collection of Agricultural Statistics

Authors: Benyelles Zakaria, Yousfi Djaafar, Karoui Moussa Sofiane

Abstract:

Agriculture is fundamental and remains an important sector of the Algerian economy; based on traditional techniques and structures, it generally serves consumption purposes. The collection of agricultural statistics in Algeria is done using traditional methods, which consist of investigating land use through surveys and field visits. These statistics suffer from problems such as poor data quality, the long delay between collection and final availability, and a high cost compared to their limited use. The objective of this work is to develop a processing chain for a reliable inventory of agricultural land by developing and implementing a new method of extracting information. Indeed, this methodology allowed us to combine remote sensing data and field data to collect statistics on the areas of different land types. The contribution of remote sensing to the improvement of agricultural statistics, in terms of area, has been studied in the wilaya of Sidi Bel Abbes. It is in this context that we applied a method for extracting information from satellite images. This method is called non-negative matrix factorization (NMF); it does not consider the pixel as a single entity but looks for the components within the pixel itself. The results obtained by the application of NMF were compared with field data and with the results obtained by the maximum likelihood method. We observed close agreement between the most important NMF results and the field data. We believe that this method of extracting information from satellite data leads to interesting results for different types of land use.
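
The sketch below illustrates the sub-pixel idea with scikit-learn's NMF on a synthetic hyperspectral cube: each pixel spectrum is decomposed into non-negative abundances of a few components, and per-component areas are then summed. The endmembers, abundances, pixel size, and use of scikit-learn are all assumptions for illustration, not the study's data or implementation.

import numpy as np
from sklearn.decomposition import NMF

# Hypothetical hyperspectral cube: 100 x 100 pixels, 50 bands, flattened to (pixels, bands).
rng = np.random.default_rng(0)
endmembers = rng.uniform(0.1, 1.0, (4, 50))              # 4 unknown material spectra
abundances = rng.dirichlet(np.ones(4), 100 * 100)        # per-pixel mixing fractions
cube = abundances @ endmembers + rng.normal(0, 0.01, (100 * 100, 50))
cube = np.clip(cube, 0, None)

# NMF decomposes each pixel into non-negative components rather than a single class label.
model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(cube)                            # estimated abundances (pixels x 4)
H = model.components_                                    # estimated endmember spectra (4 x bands)

# Crop-area statistics can then be derived from the abundance maps.
pixel_area_ha = 0.04                                     # assumed 20 m pixels (0.04 ha each)
A = W / (W.sum(axis=1, keepdims=True) + 1e-12)           # normalize abundances per pixel
print("Estimated area per component (ha):", A.sum(axis=0) * pixel_area_ha)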

Keywords: blind source separation, hyper-spectral image, non-negative matrix factorization, remote sensing

Procedia PDF Downloads 392
22098 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE

Authors: Parimalah Velo, Ahmad Zakaria

Abstract:

Introduction: Monte Carlo simulations of preclinical imaging systems provide an opportunity to enable new research that could range from designing hardware up to the discovery of new imaging applications. A simulation system that can accurately model an imaging modality provides a platform for imaging developments that might be inconvenient in physical experimental systems due to expense, unnecessary radiation exposure, and technological difficulties. The aim of the present study is to validate the Monte Carlo simulation of thyroid phantom imaging using Geant4-GATE for the Siemens e-cam single head gamma camera. Upon validation of the gamma camera simulation model by comparing physical characteristics such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging is carried out. Methods: A thyroid phantom is defined geometrically, comprising 2 lobes of 80 mm in diameter, 1 hot spot, and 3 cold spots. This geometry accurately resembles the actual dimensions of the thyroid phantom. A planar image of 500k counts with a 128x128 matrix size was acquired using the simulation model and in the actual experimental setup. Upon image acquisition, quantitative image analysis was performed by investigating the total number of counts in the image, the contrast of the image, the radioactivity distribution in the image, and the dimension of the hot spot. The algorithm for each quantification is described in detail. The difference between estimated and actual values for both the simulation and the experimental setup is analyzed for the radioactivity distribution and the dimension of the hot spot. Results: The results show that the difference between the contrast level of the simulation image and the experimental image is within 2%. The difference in the total count between the simulation and the actual study is 0.4%. The results of activity estimation show that the relative difference between estimated and actual activity for the experiment and the simulation is 4.62% and 3.03%, respectively. The deviation in the estimated diameter of the hot spot is similar for both the simulation and the experimental study, at 0.5 pixel. In conclusion, the comparisons show good agreement between the simulation and experimental data.

Keywords: gamma camera, Geant4 application of tomographic emission (GATE), Monte Carlo, thyroid imaging

Procedia PDF Downloads 249
22097 Mean Shift-Based Preprocessing Methodology for Improved 3D Buildings Reconstruction

Authors: Nikolaos Vassilas, Theocharis Tsenoglou, Djamchid Ghazanfarpour

Abstract:

In this work we explore the capability of the mean shift algorithm as a powerful preprocessing tool for improving the quality of spatial data, acquired from airborne scanners, from densely built urban areas. On one hand, high resolution image data corrupted by noise caused by lossy compression techniques are appropriately smoothed while at the same time preserving the optical edges and, on the other, low resolution LiDAR data in the form of normalized Digital Surface Map (nDSM) is upsampled through the joint mean shift algorithm. Experiments on both the edge-preserving smoothing and upsampling capabilities using synthetic RGB-z data show that the mean shift algorithm is superior to bilateral filtering as well as to other classical smoothing and upsampling algorithms. Application of the proposed methodology for 3D reconstruction of buildings of a pilot region of Athens, Greece results in a significant visual improvement of the 3D building block model.
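
As a small illustration of the edge-preserving smoothing step, the sketch below applies OpenCV's pyramid mean shift filter to a synthetic noisy RGB patch. This is only a stand-in for the joint mean shift on RGB-z data described in the paper, and the spatial and range window radii are assumed values.

import numpy as np
import cv2

# Synthetic noisy RGB tile standing in for a compressed aerial image patch.
rng = np.random.default_rng(0)
img = np.full((128, 128, 3), 80, np.uint8)
img[:, 64:] = 200                                        # a sharp "roof edge"
noisy = np.clip(img + rng.normal(0, 15, img.shape), 0, 255).astype(np.uint8)

# Mean shift filtering: the second argument is the spatial window radius (sp),
# the third the color (range) window radius (sr). Homogeneous regions are
# smoothed while strong edges are preserved.
smoothed = cv2.pyrMeanShiftFiltering(noisy, 10, 25)

print("noise std before:", noisy[:, :60].std(), "after:", smoothed[:, :60].std())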

Keywords: 3D buildings reconstruction, data fusion, data upsampling, mean shift

Procedia PDF Downloads 292
22096 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic to the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. The relevant length scale is taken to be half of the window size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data that was constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher and the slope and aspect much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
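
A compact sketch of the aggregation-and-selection idea follows: sufficient statistics for a per-window plane fit are summed over 2x2 blocks at each doubling of the window size, and each location keeps the slope from the window size with minimal residual variance. The array handling, the selection rule, and the synthetic DEM are simplifications assumed for illustration, not the paper's implementation.

import numpy as np

def scale_adaptive_slope(dem, cell=1.0, max_level=4):
    """Fit a plane z = a + b*x + c*y in 2^k x 2^k windows (k = 1..max_level) using
    additively aggregated sums, and report, per level-1 window, the slope at the
    window size that minimizes the residual variance. DEM dimensions are assumed
    divisible by 2**max_level."""
    h, w = dem.shape
    y, x = np.mgrid[0:h, 0:w] * cell
    s = {"n": np.ones_like(dem, float), "sx": x, "sy": y, "sz": dem.astype(float),
         "sxx": x*x, "syy": y*y, "sxy": x*y, "sxz": x*dem, "syz": y*dem, "szz": dem*dem}
    best_slope = best_var = None
    for level in range(1, max_level + 1):
        # Each doubling of window size is one pass: sum 2x2 blocks of the previous sums.
        s = {k: v.reshape(v.shape[0]//2, 2, v.shape[1]//2, 2).sum(axis=(1, 3))
             for k, v in s.items()}
        A = np.stack([np.stack([s["n"], s["sx"], s["sy"]], -1),
                      np.stack([s["sx"], s["sxx"], s["sxy"]], -1),
                      np.stack([s["sy"], s["sxy"], s["syy"]], -1)], -2)
        rhs = np.stack([s["sz"], s["sxz"], s["syz"]], -1)
        a0, bx, by = np.moveaxis(np.linalg.solve(A, rhs), -1, 0)   # plane coefficients
        slope = np.hypot(bx, by)                                   # slope magnitude (rise/run)
        var = (s["szz"] - a0*s["sz"] - bx*s["sxz"] - by*s["syz"]) / s["n"]
        f = 2 ** (level - 1)                                       # upsample to the level-1 grid
        slope, var = [np.repeat(np.repeat(a, f, 0), f, 1) for a in (slope, var)]
        if best_var is None:
            best_slope, best_var = slope, var
        else:
            keep = var < best_var
            best_slope, best_var = np.where(keep, slope, best_slope), np.where(keep, var, best_var)
    return best_slope, best_var

# Synthetic DEM: a smooth ramp plus noise.
rng = np.random.default_rng(0)
dem = np.add.outer(np.linspace(0, 10, 64), np.linspace(0, 5, 64)) + rng.normal(0, 0.1, (64, 64))
slope, var = scale_adaptive_slope(dem)
print(slope.mean(), var.mean())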

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 100
22095 Disaggregation the Daily Rainfall Dataset into Sub-Daily Resolution in the Temperate Oceanic Climate Region

Authors: Mohammad Bakhshi, Firas Al Janabi

Abstract:

High resolution rain data are very important to fulfill the input requirements of hydrological models. Among models of high-resolution rainfall data generation, temporal disaggregation was chosen for this study. The paper attempts to generate rainfall at three different resolutions (4-hourly, hourly, and 10-minute) from daily data for a record period of around 20 years. The process was done with the DiMoN tool, which is based on the random cascade model and the method of fragments. Differences between the observed and simulated rain datasets are evaluated with a variety of statistical and empirical methods: the Kolmogorov-Smirnov test (K-S), usual statistics, and exceedance probability. The tool worked well at preserving the daily rainfall values on wet days; however, the generated rainfall is concentrated in shorter time periods, producing stronger storms. It is demonstrated that the difference between the generated and observed cumulative distribution function curves of the 4-hourly datasets passes the K-S test criteria, while for the hourly and 10-minute datasets the P-value should be employed to show that their differences are reasonable. The results are encouraging considering the overestimation of generated high-resolution rainfall data.
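
For reference, the K-S comparison between observed and disaggregated depths can be reproduced with SciPy's two-sample test, as sketched below on synthetic gamma-distributed depths; the distributions and parameters are placeholders, not the study's rainfall data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
observed = rng.gamma(shape=0.6, scale=4.0, size=2000)    # stand-in for observed 4-hourly depths
simulated = rng.gamma(shape=0.55, scale=4.3, size=2000)  # stand-in for disaggregated depths

# Two-sample Kolmogorov-Smirnov test: D is the maximum CDF distance, and the p-value
# indicates whether the difference between the two distributions is reasonable.
d_stat, p_value = stats.ks_2samp(observed, simulated)
print(f"D = {d_stat:.3f}, p = {p_value:.3f}")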

Keywords: DiMoN Tool, disaggregation, exceedance probability, Kolmogorov-Smirnov test, rainfall

Procedia PDF Downloads 182
22094 Color Image Enhancement Using Multiscale Retinex and Image Fusion Techniques

Authors: Chang-Hsing Lee, Cheng-Chang Lien, Chin-Chuan Han

Abstract:

In this paper, an edge-strength guided multiscale retinex (EGMSR) approach is proposed for color image contrast enhancement. In EGMSR, the pixel-dependent weight associated with each pixel in the single-scale retinex output image is computed according to the edge strength around this pixel in order to prevent over-enhancing the noise contained in the smooth dark/bright regions. Further, by fusing together the enhanced results of EGMSR and adaptive multiscale retinex (AMSR), we can get a natural fused image having high contrast and proper tonal rendition. Experimental results on several low-contrast images have shown that our proposed approach can produce natural and appealing enhanced images.
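
For orientation, a plain (unweighted) multiscale retinex on one channel is sketched below: the log of the image minus the log of Gaussian-blurred surrounds, averaged over several scales. The edge-strength weighting and the fusion with AMSR that define EGMSR are not reproduced, and the scales and test image are assumed.

import numpy as np
import cv2

def multiscale_retinex(intensity, sigmas=(15, 80, 250)):
    """Plain MSR on a single channel: average of log(I) - log(Gaussian-blurred I)
    over several scales."""
    img = intensity.astype(np.float32) + 1.0              # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)    # surround estimate at this scale
        msr += np.log(img) - np.log(blurred)
    msr /= len(sigmas)
    # Stretch the result back to 0-255 for display.
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-8) * 255.0
    return msr.astype(np.uint8)

# Usage on a synthetic low-contrast gradient image:
low_contrast = np.tile(np.linspace(40, 90, 256, dtype=np.uint8), (256, 1))
enhanced = multiscale_retinex(low_contrast)
print(low_contrast.std(), enhanced.std())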

Keywords: image enhancement, multiscale retinex, image fusion, EGMSR

Procedia PDF Downloads 430
22093 Examination of 12-14 Years Old Volleyball Players’ Body Image Levels

Authors: Dilek Yalız Solmaz, Gülsün Güven

Abstract:

The aim of this study is to examine the body image levels of 12-14 year old girls who play volleyball. The research group consists of 113 girls who played volleyball in Sakarya during the fall season of 2015-2016. Data were collected by means of the 'Body Image Questionnaire', which was originally developed by Secord and Jourard. The reliability of the scale, determined through repeated analysis, was '.96'. This study employed statistical calculations such as the mean, standard deviation, and t-test. According to the results of this study, the mean score of the volleyball players is 158.5 ± 25.1 (minimum = 40; maximum = 200), and it can be said that the volleyball players' body image levels are high. There is a significant difference between the underweight (167.4 ± 20.7) and normal weight (151.4 ± 26.2) groups according to their Body Mass Index. Body image levels of the underweight group were found to be higher than those of the normal weight group.

Keywords: volleyball, players, body image, body image levels

Procedia PDF Downloads 185
22092 Empirical Research on Preference for Conflict Resolution Styles of Owners and Contractors in China

Authors: Junqi Zhao, Yongqiang Chen

Abstract:

The preference for different conflict resolution styles is influenced by the cultural background and power distance of the two parties involved in a conflict. This research put forward 7 hypotheses and tested the differences in preference for the five conflict resolution styles between Chinese owners and contractors, as well as the differences concerning the same style between the two parties. The research sample includes 202 practitioners from construction enterprises in mainland China. The results found that theories concerning conflict resolution styles can be applied in the Chinese construction industry. Some results of this research were not in line with former research, and this research explains the differences in terms of the characteristics of construction projects. Based on the findings, certain suggestions are made to serve as guidance for managers to choose appropriate conflict resolution styles for better handling of conflict.

Keywords: Chinese owner and contractor, conflict, construction project, conflict resolution styles

Procedia PDF Downloads 490
22091 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine

Authors: D. Madhushanka, Y. Liu, H. C. Fernando

Abstract:

Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes, and atmospheric effects that affect people's lives and properties. Generally, the severity of a fire is calculated from the Normalized Burn Ratio (NBR) index. This is performed manually by comparing pre-fire and post-fire images. Then, using the bitemporal difference of the preprocessed satellite images, the dNBR is calculated. The burnt area is then classified as either unburnt (dNBR<0.1) or burnt (dNBR>=0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt areas and unburnt areas using classification levels proposed by USGS and comprises seven classes. This procedure generates a burn severity report for the area chosen manually by the user. This study is carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), which is a free cloud-computing platform for satellite data processing, with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI is chosen to obtain regular coverage for burnt area severity mapping using a medium spatial resolution sensor (15m). This tool uses machine learning classification techniques to identify burnt areas using NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns when fire severity mapping is performed. In WWSAT, based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool. This tool includes a Graphical User Interface (GUI) to make it user-friendly. The advantage of this tool is the ability to obtain burn area severity over a large extent and more extended temporal periods. Two case studies were carried out to demonstrate the performance of this tool. The Blue Mountains National Park forest affected by the Australian fire season between 2019 and 2020 is used to describe the workflow of the WWSAT. More than 7809 km2 of burnt area was detected at this site using Sentinel-2 data, giving an error below 6.5% when compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests, Colorado, USA, which were affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km2 had burned out, comprising high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These spots can also be detected through the visual inspection made possible by the cloud-free images generated by WWSAT. This tool is cost-effective in calculating the burnt area since satellite images are free and the cost of field surveys is avoided.
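
The core NBR/dNBR computation can be expressed in a few lines of the Earth Engine Python API, as sketched below. The area of interest, dates, cloud threshold, and band choices are placeholder assumptions for illustration, not WWSAT's actual configuration or GUI workflow.

import ee
ee.Initialize()

# Area and dates are placeholders; WWSAT would take these from the user interface.
aoi = ee.Geometry.Rectangle([150.0, -34.0, 150.8, -33.4])
pre_fire = ('2019-10-01', '2019-10-31')
post_fire = ('2020-01-15', '2020-02-15')

def cloud_free_composite(start, end):
    """Median composite of Sentinel-2 SR images with low cloud cover over the AOI."""
    return (ee.ImageCollection('COPERNICUS/S2_SR')
            .filterBounds(aoi)
            .filterDate(start, end)
            .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
            .median())

def nbr(image):
    # NBR = (NIR - SWIR) / (NIR + SWIR), here Sentinel-2 bands B8 and B12.
    return image.normalizedDifference(['B8', 'B12']).rename('NBR')

dnbr = nbr(cloud_free_composite(*pre_fire)).subtract(nbr(cloud_free_composite(*post_fire)))
burnt = dnbr.gte(0.1)                                    # burnt / unburnt split at dNBR = 0.1

# Burnt area in km^2 over the AOI (20 m scale assumed for B12).
area_km2 = (burnt.multiply(ee.Image.pixelArea())
            .reduceRegion(reducer=ee.Reducer.sum(), geometry=aoi, scale=20, maxPixels=1e10)
            .getNumber('NBR').divide(1e6))
print(area_km2.getInfo())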

Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2

Procedia PDF Downloads 199
22090 Evaluation of High Temperature Wear Performance of As-Cladded and TIG Re-Melted Stellite 6 Overlay on AISI-304L Using SMAW Process

Authors: Manjit Singha, Sandeep Singh Sandhu, A. S. Shahi

Abstract:

Stellite 6 is a cobalt-based superalloy used for protective coatings. It is used to improve the wear performance of stainless steel engineering components subjected to harsh environmental conditions. This paper reports the high temperature wear analysis of Stellite 6 cladded on an AISI 304L substrate using the SMAW process. A bead-on-plate experiment was carried out by varying the current and electrode manipulation techniques to optimize dilution and hardness. A current of 80 A and the weaving technique were found to be the optimum set of parameters for overlaying, which were further used for multipass, multilayer cladding on two plates of AISI 304L substrate. On the first plate, seven layers in seven passes of Stellite 6 were overlaid and used in the as-cladded form, and the second plate was overlaid with five layers in five passes of Stellite 6 with further TIG remelting. The wear performance was examined for a normal temperature environmental condition and a harsh temperature environmental condition. The Stellite 6 coating with TIG remelting was found to be better in both conditions, even with less metal deposition, due to its finer grain structure.

Keywords: surfacing, stellite 6, dilution, overlay, SMAW, high-temperature frictional wear, micro-structure, micro-hardness

Procedia PDF Downloads 264
22089 Imaging Based On Bi-Static SAR Using GPS L5 Signal

Authors: Tahir Saleem, Mohammad Usman, Nadeem Khan

Abstract:

GPS signals are used for navigation and positioning purposes by a diverse set of users. However, this project intends to utilize reflected GPS L5 signals for locating targets in a region of interest by generating an image that highlights the positions of targets in the area of interest. The principle of bi-static radar is used to detect the targets or any movement or changes. The idea is confirmed by the results obtained during MATLAB simulations. A matched filter based technique is employed in the signal processing to improve the system resolution. The simulation is carried out under different conditions with a moving receiver and targets. Noise and attenuation are also induced, and atmospheric conditions that affect the direct and reflected GPS signals have been simulated to generate a more practical scenario. A realistic GPS L5 signal has been simulated; the simulation results verify that the detection and imaging of targets is possible by employing reflected GPS L5 signals and a matched filter processing technique with acceptable spatial resolution.
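
The matched-filter step amounts to correlating the received signal with a replica of the known ranging code and locating the peak, which maps to a bistatic delay and, across receiver positions, to an image pixel. A minimal sketch follows, using a random +/-1 sequence as a stand-in for the actual GPS L5 code and arbitrary delay, amplitude, and noise levels.

import numpy as np

rng = np.random.default_rng(0)
code = np.sign(rng.standard_normal(10230))               # stand-in for a GPS L5 ranging code

# Build a noisy received signal containing the reflected code at an unknown delay.
delay = 3170
received = np.zeros(40_000)
received[delay:delay + code.size] += 0.2 * code           # weak target reflection
received += rng.standard_normal(received.size)            # noise and clutter

# Matched filtering = correlation with the known replica; the peak marks the delay.
correlation = np.correlate(received, code, mode='valid')
estimated_delay = int(np.argmax(np.abs(correlation)))
print("true delay:", delay, "estimated:", estimated_delay)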

Keywords: GPS, L5 Signal, SAR, spatial resolution

Procedia PDF Downloads 507