Search results for: single image super resolution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8662

7882 A Gradient Orientation Based Efficient Linear Interpolation Method

Authors: S. Khan, A. Khan, Abdul R. Soomrani, Raja F. Zafar, A. Waqas, G. Akbar

Abstract:

This paper proposes a low-complexity image interpolation method. Image interpolation is used to convert a low-resolution video/image to a high-resolution video/image. The objective of a good interpolation method is to upscale an image in such a way that it provides better edge preservation at the cost of very low complexity, so that real-time processing of video frames becomes possible. However, low-complexity methods tend to provide real-time interpolation at the cost of blurring, jagging and other artifacts due to errors in slope calculation. Non-linear methods, on the other hand, provide better edge preservation, but at the cost of high complexity, and hence they are far from achieving real-time interpolation. The proposed method is a linear method that uses gradient orientation for slope calculation, unlike conventional linear methods that use the contrast of nearby pixels. Prewitt edge detection is applied to separate uniform regions from edges. Simple line averaging is applied to unknown pixels in uniform regions, whereas unknown edge pixels are interpolated after calculating slopes from the gradient orientations of neighboring known edge pixels. As a post-processing step, a bilateral filter is applied to the interpolated edge regions in order to enhance the interpolated edges.
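
The pipeline described above can be sketched in a few lines of Python; the code below is an illustration with assumed parameters (threshold, filter settings) rather than the authors' implementation, and the slope-tracing step is simplified to a bilinear fill.

```python
import numpy as np
import cv2
from scipy import ndimage

def upscale_2x(img, edge_thresh=30.0):
    # Prewitt edge/uniform separation, simple filling of unknown pixels, and a
    # bilateral post-filter restricted to interpolated edge regions.
    img = img.astype(np.float32)
    h, w = img.shape

    gx = ndimage.prewitt(img, axis=1)              # Prewitt gradients
    gy = ndimage.prewitt(img, axis=0)
    edge = np.hypot(gx, gy) > edge_thresh          # edge vs. uniform regions
    orientation = np.arctan2(gy, gx)               # gradient orientation (slope source)

    up = cv2.resize(img, (2 * w, 2 * h), interpolation=cv2.INTER_LINEAR)
    up[::2, ::2] = img                             # keep the known (original) pixels

    edge_hr = cv2.resize(edge.astype(np.uint8), (2 * w, 2 * h),
                         interpolation=cv2.INTER_NEAREST).astype(bool)
    filtered = cv2.bilateralFilter(up, d=5, sigmaColor=25, sigmaSpace=5)
    up[edge_hr] = filtered[edge_hr]                # enhance the interpolated edges only
    return up, orientation

low = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical low-resolution frame
if low is not None:
    high, _ = upscale_2x(low)
```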

Keywords: edge detection, gradient orientation, image upscaling, linear interpolation, slope tracing

Procedia PDF Downloads 263
7881 Deleterious SNPs Detection Using Machine Learning

Authors: Hamza Zidoum

Abstract:

This paper investigates the impact of human genetic variation on the function of human proteins using machine-learning algorithms. Single-nucleotide polymorphisms represent the most common form of human genome variation. We focus on single amino-acid polymorphisms located in the coding region, as they can affect protein function and lead to pathologic phenotypic change. We use several supervised machine learning methods to identify structural properties correlated with an increased risk of a missense mutation being damaging. SVM combined with Principal Component Analysis gives the best performance.
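
A minimal sketch of the reported best-performing combination (PCA followed by an SVM) is shown below; the feature matrix and labels are synthetic placeholders for the structural properties and damaging/neutral annotations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))          # placeholder structural features per missense variant
y = rng.integers(0, 2, size=500)        # placeholder labels: 1 = damaging, 0 = neutral

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("Cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```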

Keywords: single-nucleotide polymorphism, machine learning, feature selection, SVM

Procedia PDF Downloads 383
7880 The Theory of Domination as the Bane of Conflict Resolution and Peace Building Processes in Cameroon

Authors: Nkatow Mafany Christian

Abstract:

According to UNHCR’s annual database, humanitarian crises have been on the increase globally since the beginning of the 21st century, especially in the Middle East and in Sub-Saharan Africa. Cameroon is one of the countries that has suffered tremendously from humanitarian challenges in recent years, especially with crises in the Far North, the East and its two English-speaking regions. These have resulted from failed mechanisms of conflict resolution and peacebuilding by the government. The paper draws from this basic premise to argue that the failure to reach a consensus to curb internal conflicts has largely been due to the government’s attachment to a domineering attitude, which emphasizes the imposition of peace terms by a superordinate agency (the government) on subordinate (aggrieved) entities. This has stalled the peace efforts that have so far been engaged to address the dreaded armed conflicts in the North West and South West Regions, leading to the persistence of the armed conflict. The paper exploits written, oral and online sources to sustain its argument. It suggests that an eclectic approach to resolving conflicts, which emphasizes open and frank dialogue as well as a review of the root causes, can go a long way not only to build trust but also to address the Anglophone problem in Cameroon.

Keywords: conflict, conflict resolution, peace building, humanitarian crisis

Procedia PDF Downloads 74
7879 A Study on an Evacuation Test to Measure Delay Time in Using an Evacuation Elevator

Authors: Kyungsuk Cho, Seungun Chae, Jihun Choi

Abstract:

Elevators are examined as one of the evacuation methods in super-tall buildings. However, data on the use of elevators for evacuation in a fire are extremely scarce. Therefore, a test was conducted to measure the delay time in using an evacuation elevator. In the test, the time taken to get on and off an elevator was measured, and cases in which people gave up boarding because the capacity of the elevator was exceeded were also taken into consideration. 170 men and women participated in the test, 130 of whom were young people (20-50 years old) and 40 were senior citizens (over 60 years old). The capacity of the elevator was 25 people and it travelled between the 2nd and 4th floors. A video recording device was used to analyze the test. An elevator in an ordinary building, not a super-tall building, was used to measure the delay time in getting on and off. In order to minimize interference from other elements, the elevator platforms on the 2nd and 4th floors were partitioned off. The elevator travelled between the 2nd and 4th floors, where people got on and off. If fewer than 20 people got on an empty elevator, the data were excluded. If the elevator stopped while carrying 10 passengers and fewer than 10 new passengers got on, the data were also excluded. Boarding of an empty elevator was observed 49 times. The average number of passengers was 23.7; it took 14.98 seconds for the passengers to board the empty elevator, and the load factor was 1.67 N/s. It took the passengers, whose average number was 23.7, 10.84 seconds to get off the elevator, and the unload factor was 2.33 N/s. When an elevator’s capacity is exceeded, the excess passengers must get off; the time taken for this and the probability of its occurrence were measured in the test. The capacity was exceeded in 37% of the boarding events. As the number of people who gave up boarding increased, the load factor of the ride decreased. When 1 person gave up boarding, the load factor was 1.55 N/s; this case was observed 10 times, which was 12.7% of the total. When 2 people gave up boarding, the load factor was 1.15 N/s; this case was observed 7 times, which was 8.9% of the total. When 3 people gave up boarding, the load factor was 1.26 N/s; this case was observed 4 times, which was 5.1% of the total. When 4 people gave up boarding, the load factor was 1.03 N/s; this case was observed 5 times, which was 6.3% of the total. Getting-on and getting-off time data for people who can walk freely were obtained from the test. In addition, quantitative results were obtained on the relation between the number of people giving up boarding and the time taken for boarding. This work was supported by the National Research Council of Science & Technology (NST) grant by the Korea government (MSIP) (No. CRC-16-02-KICT).

Keywords: evacuation elevator, super tall buildings, evacuees, delay time

Procedia PDF Downloads 178
7878 Automatic Target Recognition in SAR Images Based on Sparse Representation Technique

Authors: Ahmet Karagoz, Irfan Karagoz

Abstract:

Synthetic Aperture Radar (SAR) is a radar mechanism that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, regardless of day and night. In this study, SAR images of military vehicles with different azimuth and depression angles are pre-processed in the first stage. The main purpose here is to reduce the high speckle noise found in SAR images. For this, the Wiener adaptive filter, the mean filter, and the median filter are used to reduce the amount of speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are ordered so that the target vehicle region is separated from other regions containing unnecessary information. The target image is binarized by setting the brightest 20% of pixel values to 255 and the other pixel values to 0. In addition, a segmentation comparison is performed using appropriate parameters of the statistical region merging algorithm. In the feature extraction step, the feature vectors belonging to the vehicles are obtained by using Gabor filters with different orientation, frequency and angle values. A bank of Gabor filters is created by changing the orientation, frequency and angle parameters in order to extract important features of the images that form the distinctive parts. Finally, images are classified by the sparse representation method, using l₁-norm analysis. A joint database of the feature vectors generated from the target images of the military vehicle types is built by stacking them side by side, and this database is transformed into matrix form. To classify the vehicles, the test image of each vehicle is converted to vector form and l₁-norm analysis of the sparse representation method is applied against the existing database matrix. As a result, correct recognition is performed by matching the target images of military vehicles with the test images by means of the sparse representation method. A classification accuracy of 97% is obtained for SAR images of different military vehicle types.
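
The l₁-norm sparse representation classification step can be sketched as follows; the dictionary of Gabor feature vectors is replaced by synthetic data, and the Lasso formulation is one common way to approximate the l₁ problem, not necessarily the solver used by the authors.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_classes, per_class, dim = 3, 20, 120
D = rng.normal(size=(dim, n_classes * per_class))   # training feature vectors, stacked column-wise
D /= np.linalg.norm(D, axis=0)                      # l2-normalize the dictionary atoms
labels = np.repeat(np.arange(n_classes), per_class)

def src_classify(y, D, labels, alpha=0.01):
    # Solve min ||x||_1 s.t. Dx ~ y (Lasso form), then pick the class whose own
    # coefficients give the smallest reconstruction residual.
    lasso = Lasso(alpha=alpha, max_iter=10000)
    lasso.fit(D, y)
    x = lasso.coef_
    residuals = [np.linalg.norm(y - D @ np.where(labels == c, x, 0.0))
                 for c in np.unique(labels)]
    return int(np.argmin(residuals))

test_vec = D[:, 5] + 0.05 * rng.normal(size=dim)     # noisy sample drawn from class 0
print("Predicted class:", src_classify(test_vec, D, labels))
```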

Keywords: automatic target recognition, sparse representation, image classification, SAR images

Procedia PDF Downloads 369
7877 The Impact of the European Single Market on the Austrian Economy

Authors: Reinhard Neck, Guido Schäfer

Abstract:

In this paper, we explore the macroeconomic effects of the European Single Market on Austria by simulating the McKibbin-Sachs Global Model. Global interdependence and the impact of long-run effects on short-run adjustments are taken into account. We study the sensitivity of the results with respect to different assumptions concerning monetary and fiscal policies for the countries and regions of the world economy. The consequences of different assumptions about budgetary policies in Austria are also investigated. The simulation results are contrasted with ex-post evaluations of the actual impact of Austria’s membership in the Single Market. As a result, it can be concluded that the Austrian participation in the European Single Market entails considerable long-run gains for the Austrian economy with nearly no adverse side-effects on any macroeconomic target variable.

Keywords: macroeconomics, European Union, simulation, sensitivity analysis

Procedia PDF Downloads 279
7876 Optimized Weight Selection of Control Data Based on Quotient Space of Multi-Geometric Features

Authors: Bo Wang

Abstract:

The geometric processing of multi-source remote sensing data using control data of different scales and accuracies is an important research direction for multi-platform Earth observation systems. In existing block bundle adjustment methods, the controlling information in the adjustment system is treated as having a single observation scale and precision; such an approach is unable to screen the control information and assign reasonable, effective weights, which reduces the convergence and the reliability of the adjustment results. Drawing on the theory and techniques of quotient spaces, this project researches several topics. A multi-layer quotient space of multi-geometric features is constructed to describe and filter control data. A normalized granularity merging mechanism for multi-layer control information is studied, and, based on the normalized scale factor, a strategy is developed to optimize the weight selection of control data that are less relevant to the adjustment system. At the same time, geometric positioning experiments are conducted using multi-source remote sensing data, aerial images, and multiple classes of control data to verify the theoretical results. This research is expected to move beyond the convention of single-scale, single-accuracy control data in the adjustment process and to extend the theory and technology of photogrammetry, so that the problem of processing multi-source remote sensing data can be solved both theoretically and practically.

Keywords: multi-source image geometric process, high precision geometric positioning, quotient space of multi-geometric features, optimized weight selection

Procedia PDF Downloads 289
7875 The Predictors of Head and Neck Cancer-Related Lymphedema in Patients with Resected Advanced Head and Neck Cancer

Authors: Shu-Ching Chen, Li-Yun Lee

Abstract:

The purpose of the study was to identify the factors associated with head and neck cancer-related lymphedema (HNCRL)-related symptoms, body image, and HNCRL-related functional outcomes among patients with resected advanced head and neck cancer. A cross-sectional correlational design was used to examine the predictors of HNCRL-related functional outcomes in patients with resected advanced head and neck cancer. Eligible patients were recruited from a single medical center in northern Taiwan. Consecutive patients were approached and recruited from the Radiation Head and Neck Outpatient Department of this medical center. Eligible subjects were assessed with the Symptom Distress Scale–Modified for Head and Neck Cancer (SDS-mhnc), the Brief International Classification of Functioning, Disability and Health (ICF) Core Set for Head and Neck Cancer (BCSQ-H&N), the Body Image Scale–Modified (BIS-m), the MD Anderson Head and Neck Lymphedema Rating Scale (MDAHNLRS), Foldi’s Stages of Lymphedema (Foldi’s Scale), Patterson’s Scale, the UCLA Shoulder Rating Scale (UCLA SRS), and Karnofsky’s Performance Status Index (KPS). The results showed that the worst problems were in the body domain of HNCRL functional outcomes. Patients’ HNCRL symptom distress and performance status were robust predictors of overall HNCRL functional outcomes, of problems in the body domain, and of activity and social functioning HNCRL functional outcomes. Based on the results of this research program, we will develop a Cancer Rehabilitation and Lymphedema Care Program (CRLCP) for use in the care of patients with resected advanced head and neck cancer.

Keywords: head and neck cancer, resected, lymphedema, symptom, body image, functional outcome

Procedia PDF Downloads 264
7874 LEGO Bricks and Creativity: A Comparison between Classic and Single Sets

Authors: Maheen Zia

Abstract:

Around the beginning of the twenty-first century, LEGO decided to diversify its product range, which resulted in more specific, single-outcome sets occupying store shelves than classic kits containing fairly all-purpose bricks. Earlier, LEGO sets came with more bricks and fewer instructions. Today, more single kits are produced and sold, and they come with a strictly defined set of guidelines. If one set is used to make a car, the same bricks cannot be put together to produce any other article. Earlier, multiple bricks gave children a chance to be imaginative, to think of new items and to construct them (by simply putting the same pieces together differently). The new products are less open-ended and offer limited possibilities for players in both designing and realizing those designs. The article reviews, in the light of existing research, how classic LEGO sets can enhance a child’s creativity in comparison with single sets, which allow a player to interact with (rather than experiment with) the bricks.

Keywords: constructive play, creativity, LEGO, play-based learning

Procedia PDF Downloads 190
7873 A Three-modal Authentication Method for Industrial Robots

Authors: Luo Jiaoyang, Yu Hongyang

Abstract:

In this paper, we explore a method that can be used in the working environment of intelligent industrial robots to confirm the identity of operators, so as to ensure that the robot executes instructions in a sufficiently safe setting. The approach uses three information modalities, namely visible light, depth, and sound. We explored a variety of fusion modes for the three modalities and finally adopted a joint feature learning method, which improves the performance of the model under noise compared with the single-modal case: even at the maximum noise level used in the experiment, it maintains an accuracy rate of more than 90%.
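
A minimal sketch of feature-level fusion of the three modalities is given below; the feature extractors, dimensions, and classifier are placeholders, not the joint feature learning architecture of the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
rgb_feat = rng.normal(size=(n, 128))     # e.g., face-embedding features from visible light
depth_feat = rng.normal(size=(n, 64))    # e.g., shape features from the depth map
audio_feat = rng.normal(size=(n, 40))    # e.g., MFCC voice features

y = rng.integers(0, 5, size=n)           # operator identities (placeholder)
fused = np.concatenate([rgb_feat, depth_feat, audio_feat], axis=1)   # joint feature vector

clf = make_pipeline(StandardScaler(), SVC())
print("Fusion accuracy:", cross_val_score(clf, fused, y, cv=5).mean())
```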

Keywords: multimodal, kinect, machine learning, distance image

Procedia PDF Downloads 82
7872 Microstructural and Optical Characterization of Heterostructures of ZnS/CdS and CdS/ZnS Synthesized by Chemical Bath Deposition Method

Authors: Temesgen Geremew

Abstract:

ZnS/glass and CdS/glass single layers and ZnS/CdS and CdS/ZnS heterojunction thin films were deposited by the chemical bath deposition method using zinc acetate and cadmium acetate as the metal ion sources and thioacetamide as a nonmetallic ion source in acidic medium. Na2EDTA was used as a complexing agent to control the free cation concentration. The single layer and heterojunction thin films were characterized with X-ray diffraction (XRD), a scanning electron microscope (SEM), energy dispersive X-ray (EDX), and a UV-VIS spectrometer. The XRD patterns of the CdS/glass thin film deposited on the soda lime glass substrate crystalized in the cubic structure with a single peak along the (111) plane. The ZnS/CdS heterojunction and ZnS/glass single layer thin films were crystalized in the hexagonal ZnS structure. The CdS/ZnS heterojunction thin film is nearly amorphous. The optical analysis results confirmed single band gap values of 2.75 eV and 2.5 eV for ZnS/CdS and CdS/ZnS heterojunction thin films, respectively. The CdS/glass and CdS/ZnS thin films have more imaginary dielectric components than the real part. The optical conductivity of the single layer and heterojunction films is in the order of 10¹⁵ s⁻¹. The optical study also confirmed refractive index values between 2 and 2.7 for ZnS/glass, ZnS/CdS, and CdS/ZnS thin films for incident photon energies between 1.2 eV and 3.8 eV. The surface morphology studies revealed compacted spherical grains covering the substrate surfaces with few cracks on ZnS/glass, ZnS/CdS, and CdS/glass and voids on CdS/ZnS thin films. The EDX result confirmed a nearly 1:1 metallic to nonmetallic ion ratio in the single-layered thin films and the dominance of Zn ions over Cd ions in both ZnS/CdS and CdS/ZnS heterojunction thin films.

Keywords: SERS, sensor, Hg2+, water detection, polythiophene

Procedia PDF Downloads 70
7871 Maximum-likelihood Inference of Multi-Finger Movements Using Neural Activities

Authors: Kyung-Jin You, Kiwon Rhee, Marc H. Schieber, Nitish V. Thakor, Hyun-Chool Shin

Abstract:

It remains unknown whether M1 neurons encode multi-finger movements independently or as a certain neural network of single finger movements, although multi-finger movements are physically a combination of single finger movements. We present evidence of correlation between single and multi-finger movements and also attempt the challenging task of semi-blind decoding of neural data with minimal training of the neural decoder. Data were collected from 115 task-related neurons in M1 of a trained rhesus monkey performing flexion and extension of each finger and the wrist (12 single-finger and 6 two-finger movements). By exploiting the correlation of temporal firing patterns between movements, we found that the correlation coefficient for physically related movement pairs is greater than for other pairs; neurons tuned to single finger movements increased their firing rate when multi-finger commands were instructed. Based on this knowledge, neural semi-blind decoding is performed by choosing the greatest and the second greatest likelihood among canonical candidates. We achieved a decoding accuracy of about 60% for multi-finger movements without a corresponding training data set. These results suggest that neural activities recorded during single finger movements alone can be exploited to control dexterous multi-fingered neuroprosthetics.
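
The likelihood-based selection of candidate movements can be sketched with a Poisson spike-count model; the tuning rates and counts below are synthetic placeholders, and the Poisson assumption is an illustration rather than the authors' exact decoder.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
n_neurons, n_movements = 115, 18
rates = rng.uniform(1.0, 20.0, size=(n_movements, n_neurons))   # mean spike counts per movement

def ml_decode(counts, rates, top_k=2):
    # Log-likelihood of the observed counts under each candidate movement;
    # return the greatest and second-greatest likelihood candidates.
    loglik = poisson.logpmf(counts[None, :], rates).sum(axis=1)
    return np.argsort(loglik)[::-1][:top_k]

true_movement = 7
counts = rng.poisson(rates[true_movement])
print("Top candidates:", ml_decode(counts, rates))
```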

Keywords: finger movement, neural activity, blind decoding, M1

Procedia PDF Downloads 327
7870 Classification of Hyperspectral Image Using Mathematical Morphological Operator-Based Distance Metric

Authors: Geetika Barman, B. S. Daya Sagar

Abstract:

In this article, we propose a pixel-wise classification of hyperspectral images using mathematical morphology operator-based distance metrics called “dilation distance” and “erosion distance”. The method involves measuring the spatial distance between the spectral features of a hyperspectral image across the bands. The key concept of the proposed approach is that the “dilation distance” is the maximum distance a pixel can be moved without changing its classification, whereas the “erosion distance” is the maximum distance that a pixel can be moved before changing its classification. The spectral signature of the hyperspectral image carries unique class information and shape for each class. This article demonstrates how easily the dilation and erosion distances can measure spatial distance compared to other approaches. This property is used to calculate the spatial distance between hyperspectral image feature vectors across the bands. A dissimilarity matrix is then constructed using both measures extracted from the feature spaces. The measured distance metric is used to distinguish between the spectral features of the various classes and to discriminate each class precisely. This is illustrated using both toy data and real datasets. Furthermore, we investigated the role of flat vs. non-flat structuring elements in capturing the spatial features of each class in the hyperspectral image. To validate the approach, we compared it to other existing methods and demonstrated empirically that the mathematical morphology operator-based distance metric classification provided competitive results and outperformed some of them.
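
For binary sets, a dilation distance of the general kind discussed can be computed by counting how many unit dilations of one set are needed to cover the other; the sketch below reduces the hyperspectral feature vectors to 2-D binary masks and uses a flat structuring element.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilation_distance(A, B, structure=None, max_iter=1000):
    # Number of unit dilations of A required until it covers B.
    A = A.copy()
    for k in range(max_iter + 1):
        if np.all(B <= A):
            return k
        A = binary_dilation(A, structure=structure)   # flat structuring element by default
    return np.inf

A = np.zeros((64, 64), dtype=bool); A[30:34, 30:34] = True
B = np.zeros((64, 64), dtype=bool); B[20:24, 40:44] = True
print("d(A -> B):", dilation_distance(A, B))
print("d(B -> A):", dilation_distance(B, A))
```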

Keywords: dilation distance, erosion distance, hyperspectral image classification, mathematical morphology

Procedia PDF Downloads 91
7869 Multiple Images Stitching Based on Gradually Changing Matrix

Authors: Shangdong Zhu, Yunzhou Zhang, Jie Zhang, Hang Hu, Yazhou Zhang

Abstract:

Image stitching is a very important branch of computer vision, especially for panoramic maps. In order to eliminate shape distortion, a novel stitching method based on a gradually changing matrix is proposed for images captured horizontally. For such images, this paper assumes that only a translational operation is involved in image stitching. By analyzing each parameter of the homography matrix, the global homography matrix is gradually transformed into a translation matrix so as to eliminate the effects of scaling, rotation, etc. in the image transformation. This paper adopts matrix approximation to obtain the minimum value of the energy function so that the shape distortion in the regions governed by the homography can be minimized. The proposed method can avoid failures in stitching multiple horizontal images caused by accumulated shape distortion. At the same time, it can be combined with the As-Projective-As-Possible algorithm to ensure precise alignment of the overlapping area.
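
The core idea of blending the global homography toward a pure translation can be sketched as below; the matrices are illustrative, the blend is applied to the inverse mapping as a simplification, and in practice the homography would come from feature matching.

```python
import numpy as np
import cv2

def gradually_changing_warp(img, H, T, out_size):
    # Blend the inverse mapping from H (left edge) to the translation T (right edge)
    # and resample with cv2.remap; a simplified stand-in for the per-parameter
    # transition described in the abstract.
    W, Hgt = out_size
    xs, ys = np.meshgrid(np.arange(W, dtype=np.float32),
                         np.arange(Hgt, dtype=np.float32))
    t = (xs / max(W - 1, 1))[..., None, None]                 # 0 at left, 1 at right
    M = (1.0 - t) * np.linalg.inv(H) + t * np.linalg.inv(T)   # gradually changing matrix
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1)[..., None]
    src = (M @ pts)[..., 0]
    map_x = (src[..., 0] / src[..., 2]).astype(np.float32)
    map_y = (src[..., 1] / src[..., 2]).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

H = np.array([[1.02, 0.01, -40.0], [0.005, 1.01, 3.0], [1e-5, 0.0, 1.0]])  # e.g., from RANSAC
T = np.array([[1.00, 0.00, -40.0], [0.000, 1.00, 3.0], [0.0, 0.0, 1.0]])   # its translation part
img = np.full((600, 800, 3), 127, dtype=np.uint8)                          # placeholder frame
warped = gradually_changing_warp(img, H, T, out_size=(800, 600))
```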

Keywords: image stitching, gradually changing matrix, horizontal direction, matrix approximation, homography matrix

Procedia PDF Downloads 322
7868 Algorithm for Path Recognition in-between Tree Rows for Agricultural Wheeled-Mobile Robots

Authors: Anderson Rocha, Pedro Miguel de Figueiredo Dinis Oliveira Gaspar

Abstract:

Machine vision has been widely used in recent years in agriculture, as a tool to promote the automation of processes and increase the levels of productivity. The aim of this work is the development of a path recognition algorithm based on image processing to guide a terrestrial robot in-between tree rows. The proposed algorithm was developed using the software MATLAB, and it uses several image processing operations, such as threshold detection, morphological erosion, histogram equalization and the Hough transform, to find edge lines along tree rows on an image and to create a path to be followed by a mobile robot. To develop the algorithm, a set of images of different types of orchards was used, which made possible the construction of a method capable of identifying paths between trees of different heights and aspects. The algorithm was evaluated using several images with different characteristics of quality and the results showed that the proposed method can successfully detect a path in different types of environments.
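
An illustrative version of this pipeline, written with Python/OpenCV rather than MATLAB and with assumed parameter values, is given below.

```python
import numpy as np
import cv2

def find_row_lines(image_bgr):
    gray = cv2.equalizeHist(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY))   # histogram equalization
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.erode(mask, np.ones((5, 5), np.uint8), iterations=2)        # morphological erosion
    edges = cv2.Canny(mask, 50, 150)
    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                           minLineLength=100, maxLineGap=20)               # edge lines along rows

def path_midline(lines, image_width):
    # Split the detected segments into left/right of the image centre and average
    # their x-positions to obtain a rough centre line for the robot to follow.
    if lines is None:
        return None
    left = [l[0] for l in lines if (l[0][0] + l[0][2]) / 2 < image_width / 2]
    right = [l[0] for l in lines if (l[0][0] + l[0][2]) / 2 >= image_width / 2]
    if not left or not right:
        return None
    lx = np.mean([(l[0] + l[2]) / 2 for l in left])
    rx = np.mean([(l[0] + l[2]) / 2 for l in right])
    return (lx + rx) / 2.0

img = np.full((480, 640, 3), 90, dtype=np.uint8)   # placeholder orchard image
print(path_midline(find_row_lines(img), img.shape[1]))
```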

Keywords: agricultural mobile robot, image processing, path recognition, hough transform

Procedia PDF Downloads 149
7867 Video Stabilization Using Feature Point Matching

Authors: Shamsundar Kulkarni

Abstract:

Video captured by non-professionals often suffers from unanticipated effects such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos. A stable output video is attained without the jitter caused by the shaking of a handheld camera during recording. First, salient points in each frame of the input video are identified and processed, followed by an optimization step that stabilizes the video. The optimization determines the quality of the video stabilization. The method has shown good results in terms of stabilization, and it removes distortion from output videos recorded under different circumstances.
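
A condensed OpenCV sketch of this kind of pipeline follows: track salient points between consecutive frames, accumulate the camera trajectory, smooth it, and derive per-frame corrections. The file name, smoothing radius and detector settings are assumptions, and the frames are assumed to contain trackable texture.

```python
import numpy as np
import cv2

def stabilization_corrections(path="shaky.mp4", smooth_radius=15):
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    if not ok:
        return []
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    transforms = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=30)    # salient points
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good_prev = pts[status.flatten() == 1]
        good_next = nxt[status.flatten() == 1]
        m, _ = cv2.estimateAffinePartial2D(good_prev, good_next)            # frame-to-frame motion
        transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
        prev_gray = gray
    if not transforms:
        return []
    traj = np.cumsum(transforms, axis=0)                                    # camera trajectory
    kernel = np.ones(2 * smooth_radius + 1) / (2 * smooth_radius + 1)
    smooth = np.stack([np.convolve(traj[:, i], kernel, mode="same")
                       for i in range(3)], axis=1)
    return np.array(transforms) + (smooth - traj)   # apply each with cv2.warpAffine to stabilize
```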

Keywords: video stabilization, point feature matching, salient points, image quality measurement

Procedia PDF Downloads 315
7866 Best-Performing Color Space for Land-Sea Segmentation Using Wavelet Transform Color-Texture Features and Fusion of over Segmentation

Authors: Seynabou Toure, Oumar Diop, Kidiyo Kpalma, Amadou S. Maiga

Abstract:

Color and texture are the two most determinant elements for the perception and recognition of objects in an image. For this reason, color and texture analysis find a large field of application, for example in image classification and segmentation. However, the pioneering work in texture analysis was conducted on grayscale images, thus discarding color information. Many grey-level texture descriptors have been proposed and successfully used in numerous domains for image classification: face recognition, industrial inspection, food science, and medical imaging, among others. Taking color into account in the definition of these descriptors makes it possible to characterize images better. Color texture is thus the subject of recent work, and the analysis of color texture images is increasingly attracting interest in the scientific community. In optical remote sensing systems, sensors measure different parts of the electromagnetic spectrum separately: the visible bands and even those that are invisible to the human eye. The amounts of light reflected by the Earth in these spectral bands are then transformed into grayscale images. The primary natural colors Red (R), Green (G) and Blue (B) are then assigned to mixtures of different spectral bands in order to produce RGB images. Thus, good color texture discrimination can be achieved using RGB under controlled illumination conditions. Some previous works investigate the effect of using different color spaces on color texture classification. However, the selection of the best-performing color space for land-sea segmentation is an open question. Its resolution may bring considerable improvements to certain applications like coastline detection, where the detection result depends strongly on the performance of the land-sea segmentation. The aim of this paper is to present the results of a study conducted on different color spaces in order to identify the best-performing color space for land-sea segmentation. To this end, an experimental analysis is carried out using five different color spaces (RGB, XYZ, Lab, HSV, YCbCr). For each color space, the Haar wavelet decomposition is used to extract different color texture features. These color texture features are then used for Fusion of Over Segmentation (FOOS) based classification, which allows segmentation of the land part from the sea part. By analyzing the results of this study, the HSV color space is found to give the best classification performance when using color and texture features, which is perfectly coherent with the results presented in the literature.
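
The per-block feature extraction can be sketched as follows: convert the block to the candidate color space and take subband energies of a one-level Haar decomposition per channel. The block size and the energy-based feature definition are illustrative assumptions.

```python
import numpy as np
import cv2
import pywt

def haar_color_texture_features(block_bgr, space=cv2.COLOR_BGR2HSV):
    block = cv2.cvtColor(block_bgr, space)
    feats = []
    for c in range(block.shape[2]):                              # per color channel
        cA, (cH, cV, cD) = pywt.dwt2(block[:, :, c].astype(float), "haar")
        feats += [np.mean(np.abs(s)) for s in (cA, cH, cV, cD)]  # subband energies
    return np.array(feats)

block = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)   # placeholder sea/land block
print(haar_color_texture_features(block).shape)                   # 3 channels x 4 subbands
```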

Keywords: classification, coastline, color, sea-land segmentation

Procedia PDF Downloads 253
7865 Nanofluidic Cell for Resolution Improvement of Liquid Transmission Electron Microscopy

Authors: Deybith Venegas-Rojas, Sercan Keskin, Svenja Riekeberg, Sana Azim, Stephanie Manz, R. J. Dwayne Miller, Hoc Khiem Trieu

Abstract:

Liquid Transmission Electron Microscopy (TEM) is a growing area with a broad range of applications from physics and chemistry to materials engineering and biology, in which previously unseen phenomena can be imaged in situ. For this, a nanofluidic device is used to bring the nanoflow with the sample inside the microscope while keeping the liquid encapsulated against the high vacuum. In recent years, Si3N4 windows have been widely used because of their mechanical stability and low imaging contrast. Nevertheless, the pressure difference between the fluid inside and the vacuum outside in the TEM causes the windows to bulge. This increases the imaged fluid volume, which decreases the signal-to-noise ratio (SNR) and limits the achievable spatial resolution. In the proposed device, the membrane is reinforced with a microstructure capable of withstanding higher pressure differences, almost completely eliminating the bulging. A theoretical study is presented with Finite Element Method (FEM) simulations, which provide a deep understanding of the mechanical conditions of the membrane and prove the effectiveness of this novel concept. Bulging and von Mises stress were studied for different membrane dimensions, geometries, materials, and thicknesses. The device was microfabricated from a thin wafer coated with thin layers of SiO2 and Si3N4. After the lithography process, these layers were etched (by reactive ion etching and buffered oxide etch (BOE), respectively). After that, the microstructure was etched (deep reactive ion etching). Then the back-side SiO2 was etched (BOE) and the array of free-standing micro-windows was obtained. Additionally, a Pyrex wafer was patterned with windows and inlets/outlets, and bonded (anodic bonding) to the Si side to facilitate handling of the thin wafer. Later, a thin spacer is sputtered and patterned with microchannels and trenches to guide the nanoflow with the samples. This approach considerably reduces the common bulging problem of the window, improving the SNR, contrast and spatial resolution, substantially increasing the mechanical stability of the windows, and allowing a larger viewing area. These developments lead to a wider range of applications of liquid TEM, expanding the spectrum of possible experiments in the field.
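
As background to the bulging problem, the classical Kirchhoff thin-plate relation (standard theory, not the paper's own derivation) links the deflection w of a window of thickness t to the pressure difference p:

```latex
\[
  D\,\nabla^{4} w(x,y) = p,
  \qquad
  D = \frac{E\,t^{3}}{12\,\left(1-\nu^{2}\right)},
\]
% w: deflection (bulge), t: membrane thickness, E: Young's modulus, \nu: Poisson's ratio.
% The t^3 dependence of the flexural rigidity D indicates why a stiffening
% microstructure can suppress bulging so effectively.
```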

Keywords: liquid cell, liquid transmission electron microscopy, nanofluidics, nanofluidic cell, thin films

Procedia PDF Downloads 257
7864 Deep Learning for SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR Systems are often considered to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the properties of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore take advantage of full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are, however, mostly based upon model assumptions (especially the assumption of reflectance symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. From the experiments, the reconstruction performance of the proposed framework is superior to conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
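
A minimal PyTorch sketch of the augmentation idea is given below: a CNN maps the hybrid polarimetric channels to pseudo fully polarimetric channels, trained with a composite loss. The architecture, channel counts and loss terms are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class PolAugNet(nn.Module):
    def __init__(self, in_ch=2, out_ch=6):        # e.g., 2 hybrid channels -> 6 full-pol channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, alpha=0.1):
    # Pixel-wise reconstruction term plus a term on total backscattered power,
    # standing in for the scattering-property terms described in the abstract.
    recon = nn.functional.l1_loss(pred, target)
    span = nn.functional.l1_loss(pred.sum(dim=1), target.sum(dim=1))
    return recon + alpha * span

model = PolAugNet()
x = torch.randn(4, 2, 64, 64)                  # hybrid-pol patches (placeholder data)
y = torch.randn(4, 6, 64, 64)                  # fully polarimetric reference patches
loss = composite_loss(model(x), y)
loss.backward()
```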

Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network

Procedia PDF Downloads 74
7863 Ambiguity Resolution for Ground-based Pulse Doppler Radars Using Multiple Medium Pulse Repetition Frequency

Authors: Khue Nguyen Dinh, Loi Nguyen Van, Thanh Nguyen Nhu

Abstract:

In this paper, we propose an adaptive method to resolve ambiguities, together with a ghost-target removal process, for extracting targets detected by a ground-based pulse-Doppler radar using medium pulse repetition frequency (PRF) waveforms. The ambiguity resolution method is an adaptive implementation of the coincidence algorithm, applied on a two-dimensional (2D) range-velocity matrix to resolve range and velocity ambiguities simultaneously, with a proposed clustering filter to enhance the robustness of the system against errors. Here we consider the scenario of multiple-target environments. The ghost-target removal process, which is based on the power after Doppler processing, is proposed to mitigate ghost detections and enhance the performance of ground-based radars using a short PRF schedule in multiple-target environments. Simulation results for a ground-based pulsed Doppler radar model are presented to show the effectiveness of the proposed approach.
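
The coincidence idea for range unfolding can be sketched as below: every ambiguous measurement is unfolded by all plausible ambiguity numbers, and the true range is where the unfolded values from the different PRFs coincide. The PRF values, tolerance and the omission of the velocity dimension are simplifying assumptions.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def resolve_range(ambiguous_ranges_m, prfs_hz, r_max=150e3, tol=150.0):
    candidates = []
    for r_amb, prf in zip(ambiguous_ranges_m, prfs_hz):
        r_unamb = C / (2.0 * prf)                               # unambiguous range of this PRF
        n_max = int(r_max // r_unamb) + 1
        candidates.append(r_amb + np.arange(n_max) * r_unamb)   # all unfolded hypotheses
    best, best_spread = None, np.inf
    for r0 in candidates[0]:
        picks = [c[np.argmin(np.abs(c - r0))] for c in candidates]   # nearest hypothesis per PRF
        spread = max(picks) - min(picks)
        if spread < tol and spread < best_spread:                    # coincidence test
            best, best_spread = float(np.mean(picks)), spread
    return best

true_range = 87_300.0
prfs = [9.0e3, 11.0e3, 13.0e3]
amb = [true_range % (C / (2 * p)) for p in prfs]   # what each PRF actually measures
print(resolve_range(amb, prfs))                     # ~ true_range
```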

Keywords: ambiguity resolution, coincidence algorithm, medium PRF, ghosting removal

Procedia PDF Downloads 157
7862 Irradion: Portable Small Animal Imaging and Irradiation Unit

Authors: Josef Uher, Jana Boháčová, Richard Kadeřábek

Abstract:

In this paper, we present a multi-robot imaging and irradiation research platform referred to as Irradion, with full capabilities of portable arbitrary-path computed tomography (CT). Irradion is an imaging and irradiation unit entirely based on robotic arms for research on cancer treatment with ion beams on small animals (mice or rats). The platform comprises two subsystems that combine several imaging modalities, such as 2D X-ray imaging, CT, and particle tracking, with precise positioning of a small animal for imaging and irradiation. Computed Tomography: The CT subsystem of the Irradion platform is equipped with two 6-joint robotic arms that position a photon counting detector and an X-ray tube independently and freely around the scanned specimen and allow image acquisition utilizing computed tomography. Irradion covers nearly all conventional 2D and 3D trajectories of X-ray imaging with precisely calibrated and repeatable geometrical accuracy, leading to a spatial resolution of up to 50 µm. In addition, the photon counting detectors allow X-ray photon energy discrimination, which can suppress scattered radiation, thus improving image contrast. The system can also measure absorption spectra and recognize different material (tissue) types. X-ray video recording and real-time imaging options can be applied to studies of dynamic processes, including in vivo specimens. Moreover, Irradion opens the door to exploring new 2D and 3D X-ray imaging approaches. We demonstrate in this publication various novel scan trajectories and their benefits. Proton Imaging and Particle Tracking: The Irradion platform allows several imaging modules to be combined with any required number of robots. The proton tracking module comprises another two robots, each holding particle tracking detectors with position-, energy-, and time-sensitive Timepix3 sensors. Timepix3 detectors can track particles entering and exiting the specimen and allow accurate guiding of photon/ion beams for irradiation. In addition, quantifying the energy losses before and after the specimen provides essential information for precise irradiation planning and verification. Work on the small-animal research platform Irradion involved advanced software and hardware development that will offer researchers a novel way to investigate new approaches in (i) radiotherapy, (ii) spectral CT, (iii) arbitrary-path CT, and (iv) particle tracking. The robotic platform for imaging and radiation research developed in the project is an entirely new product on the market. Preclinical research systems combining precision robotic irradiation with photon/ion beams and multimodality high-resolution imaging do not currently exist. The researched technology can potentially represent a significant leap forward compared to current first-generation devices.

Keywords: arbitrary path CT, robotic CT, modular, multi-robot, small animal imaging

Procedia PDF Downloads 94
7861 Experimental Characterization of Composite Material with Non Contacting Methods

Authors: Nikolaos Papadakis, Constantinos Condaxakis, Konstantinos Savvakis

Abstract:

The aim of this paper is to determine the elastic properties (elastic modulus and Poisson ratio) of a composite material using non-contacting imaging methods. More specifically, the significantly reduced cost of digital cameras has opened the way to highly reliable, low-cost strain measurement. The open-source platform Ncorr, which implements digital image correlation (DIC), is used in this paper. Strain measurement by digital image correlation involves random speckle preparation on the surface of the gauge area, image acquisition, and post-processing of the image correlation to obtain the displacement and strain fields on the surface under study. Technical issues relating to the quality of the results obtained are discussed. [0]₈ fabric glass/epoxy composite specimens were prepared and tested at different orientations (0°, 30°, 45°, 60°, 90°). Each test was recorded with the camera at a constant frame rate and under constant lighting conditions. The recorded images were processed with the image processing software. The parameters of the tests are reported. The strain map output obtained through strain measurement with Ncorr is validated by (a) comparing the elastic properties with the values expected from classical laminate theory and (b) finite element analysis.
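
Once the DIC strain maps are available, the elastic constants for the on-axis test can be reduced as sketched below; the strain arrays and the applied stress are placeholders.

```python
import numpy as np

def elastic_constants(eps_axial_map, eps_trans_map, stress_pa):
    eps_a = float(np.mean(eps_axial_map))      # mean axial strain over the gauge area
    eps_t = float(np.mean(eps_trans_map))      # mean transverse strain
    E = stress_pa / eps_a                      # elastic modulus
    nu = -eps_t / eps_a                        # Poisson ratio
    return E, nu

eps_axial = np.full((50, 50), 2.0e-3) + 1e-5 * np.random.randn(50, 50)    # placeholder map
eps_trans = np.full((50, 50), -6.0e-4) + 1e-5 * np.random.randn(50, 50)   # placeholder map
E, nu = elastic_constants(eps_axial, eps_trans, stress_pa=50e6)
print(f"E = {E / 1e9:.1f} GPa, nu = {nu:.2f}")
```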

Keywords: composites, Ncorr, strain map, videoextensometry

Procedia PDF Downloads 148
7860 An Automatic Large Classroom Attendance Conceptual Model Using Face Counting

Authors: Sirajdin Olagoke Adeshina, Haidi Ibrahim, Akeem Salawu

Abstract:

Large lecture theatres cannot be covered by a single camera but rather require a multi-camera setup because of their size, shape, and seating arrangements, although an ordinary classroom can be captured with a single camera. Therefore, the design and implementation of a multi-camera setup for a large lecture hall were considered. Researchers have emphasized the impact of class attendance on the academic performance of students. However, the traditional method of taking attendance is below standard, especially for large lecture theatres, because of the student population, the time required, the sophistication and exhaustiveness involved, and the possibility of manipulation. An automated large-classroom attendance system is, therefore, imperative. The common approach to such systems is face detection and recognition, where known student faces are captured and stored for recognition purposes. This approach requires constant face database updates due to constant changes in facial features. Alternatively, face counting can be performed by cropping the localized faces in the video or image into a folder and then counting them. This research aims to develop a face-localization-based approach to detect student faces in classroom images captured using a multi-camera setup. A selected Haar-like feature cascade face detector, trained with an asymmetric goal of minimizing the False Rejection Rate (FRR) relative to the False Acceptance Rate (FAR), was applied on a Raspberry Pi 4B. A relationship between the two factors (FRR and FAR) was established using a constant (λ) as a trade-off between them for automatic adjustment during training. An evaluation of the proposed approach and the conventional AdaBoost on classroom datasets shows an improvement of 8% in TPR (a consequence of the low FRR) and a 7% reduction in the FRR. The average learning speed of the proposed approach was also improved, with an execution time of 1.19 s per image compared to 2.38 s for the improved AdaBoost. Consequently, the proposed approach achieved 97% TPR with an overhead constraint time of 22.9 s, compared to 46.7 s for the improved AdaBoost, when evaluated on images obtained from a large lecture hall (DK5) at USM.
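
The face-counting step itself can be sketched with OpenCV as below; the bundled frontal-face cascade and the detection parameters stand in for the asymmetrically trained cascade described above, and the image file names are hypothetical.

```python
import cv2

def count_faces(image_path, scale=1.1, neighbors=5):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.imread(image_path)
    if img is None:
        return 0
    gray = cv2.equalizeHist(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))   # helps with uneven lighting
    faces = cascade.detectMultiScale(gray, scaleFactor=scale, minNeighbors=neighbors)
    return len(faces)

# Sum the counts over the images from the multi-camera setup covering the hall:
total = sum(count_faces(p) for p in ["cam1.jpg", "cam2.jpg", "cam3.jpg"])
print("Estimated attendance:", total)
```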

Keywords: automatic attendance, face detection, haar-like cascade, manual attendance

Procedia PDF Downloads 74
7859 Torque Magnetometry of Low Anisotropic CaCo2As2 Single Crystals

Authors: Kashif Nadeem, W. Zhang, X. G. Qiu

Abstract:

The role of Co spins in a CaCo2As2 single crystal is systematically studied by using dc magnetization and magnetic torque measurements. A spin-flop transition in the antiferromagnetic (AFM) CaCo2As2 single crystal is studied by using dc magnetization and magnetic torque. Field-dependent and angle-dependent torque magnetometry confirmed the existence of a spin-flop transition in this compound, in agreement with the dc magnetization studies. A detailed comparison of dc magnetization and torque magnetometry measurements for the CaCo2As2 single crystal is presented. In conclusion, torque magnetometry can be a useful tool, analogous to dc magnetization studies, to study the spin-flop transition in compounds with low anisotropy.

Keywords: spin flop transition, torque magnetometry, magnetization, anisotropic

Procedia PDF Downloads 550
7858 Large Neural Networks Learning From Scratch With Very Few Data and Without Explicit Regularization

Authors: Christoph Linse, Thomas Martinetz

Abstract:

Recent findings have shown that neural networks generalize even in over-parametrized regimes with zero training error. This is surprising, since it runs completely against traditional machine learning wisdom. In our empirical study, we reinforce these findings in the domain of fine-grained image classification. We show that very large Convolutional Neural Networks with millions of weights do learn with only a handful of training samples and without image augmentation, explicit regularization or pretraining. We train the architectures ResNet18, ResNet101 and VGG19 on subsets of the difficult benchmark datasets Caltech101, CUB_200_2011, FGVCAircraft, Flowers102 and StanfordCars with 100 or more classes, perform a comprehensive comparative study, and draw implications for the practical application of CNNs. Finally, we show that VGG19 with 140 million weights learns to distinguish airplanes and motorbikes with up to 95% accuracy using only 20 training samples per class.
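
The experimental setup can be sketched as below: a randomly initialized CNN trained on a small per-class subset without augmentation and with weight decay set to zero; the dataset path and hyperparameters are placeholders, not the paper's exact settings.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("path/to/dataset", transform=tfm)   # e.g., a Flowers102-style folder

# Keep only the first 20 samples per class (no augmentation).
per_class, seen, keep = 20, {}, []
for idx, (_, label) in enumerate(data.samples):
    if seen.get(label, 0) < per_class:
        keep.append(idx)
        seen[label] = seen.get(label, 0) + 1
loader = DataLoader(Subset(data, keep), batch_size=16, shuffle=True)

model = models.resnet18(weights=None, num_classes=len(data.classes))   # training from scratch
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(100):
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```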

Keywords: convolutional neural networks, fine-grained image classification, generalization, image recognition, over-parameterized, small data sets

Procedia PDF Downloads 92
7857 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer

Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved

Abstract:

Background: Skin cancer is now a pressing concern in the field of medical science, and the growing incidence of skin lesions is drastically affecting health and well-being across the global village. Methods: The extracted image of a skin tumor cannot be used directly for diagnosis, since the stored image contains irregularities such as those around the lesion center. The approach therefore first locates the region of interest in the extracted skin image, and image segmentation models are applied to remove the disturbances in the picture. Results: After segmentation, feature extraction is performed using a genetic algorithm (GA), and finally classification is carried out between the training and test data to evaluate the image and help doctors make the right prediction. To improve on the existing system, we set our objectives through an analysis; the efficiency of the natural selection process and of histogram enrichment is essential in that respect. The GA is applied to reduce the false-positive rate of the output while maintaining accuracy. Conclusions: The objective of this work is to improve effectiveness, and the GA accomplishes this by bringing down the false-positive rate. The work combines deep learning and medical image processing, which provides superior accuracy, and the modular design of the processing allows reuse without errors.
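
A generic sketch of genetic-algorithm feature selection of the kind referred to above is shown below: binary masks over candidate features evolve by selection, crossover and mutation, scored by a classifier's cross-validated accuracy. All data, the classifier and the GA settings are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))       # placeholder lesion features
y = rng.integers(0, 2, size=200)     # placeholder benign / malignant labels

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))            # initial population of feature masks
for gen in range(10):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]                # selection of the fittest masks
    cut = X.shape[1] // 2
    children = np.concatenate([parents[:, :cut],
                               parents[::-1, cut:]], axis=1)   # one-point crossover
    mutate = rng.random(children.shape) < 0.05                 # random mutation
    children[mutate] ^= 1
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("Selected features:", np.flatnonzero(best))
```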

Keywords: computer-aided system, detection, image segmentation, morphology

Procedia PDF Downloads 155
7856 Pressure-Controlled Dynamic Equations of the PFC Model: A Mathematical Formulation

Authors: Jatupon Em-Udom, Nirand Pisutha-Arnond

Abstract:

The phase-field-crystal (PFC) approach is a density-functional-type material model with atomic resolution on a diffusive timescale. Spatially, the model incorporates the periodic nature of crystal lattices and can naturally exhibit elasticity, plasticity and crystal defects such as grain boundaries and dislocations. Temporally, the model operates on a diffusive timescale, which bypasses the need to resolve prohibitively small atomic-vibration time steps. The PFC model has been used to study many material phenomena such as grain growth, elastic and plastic deformation and solid-solid phase transformations. In this study, the pressure-controlled dynamic equations for the PFC model were developed to simulate a single-component system under externally applied pressure; these coupled equations are important for studies of deformable systems such as those under constant pressure. The formulation is based on non-equilibrium thermodynamics and the thermodynamics of crystalline solids. To obtain the equations, the entropy variation around the equilibrium point was derived. The resulting driving forces and fluxes around equilibrium were then obtained and rewritten as conventional thermodynamic quantities. These dynamic equations differ from recently proposed equations; the equations in this study should provide a more rigorous description of the system dynamics under externally applied pressure.
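
For reference, the standard constant-volume PFC free energy and its conserved dynamics, which the pressure-controlled formulation extends, can be written as follows (the paper's own pressure-coupled equations are not reproduced here):

```latex
\begin{align}
  F[\phi] &= \int \left[ \frac{\phi}{2}\left( r + \left(1+\nabla^{2}\right)^{2} \right)\phi
             + \frac{\phi^{4}}{4} \right] \mathrm{d}\mathbf{x}, \\
  \frac{\partial \phi}{\partial t} &= \nabla^{2}\,\frac{\delta F}{\delta \phi},
\end{align}
% \phi is the rescaled atomic density field and r a temperature-like parameter.
% The pressure-controlled equations couple this conserved dynamics to an evolution
% law for the system volume derived from the entropy variation and Onsager's relations.
```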

Keywords: driving forces and flux, evolution equation, non equilibrium thermodynamics, Onsager’s reciprocal relation, phase field crystal model, thermodynamics of single-component solid

Procedia PDF Downloads 310
7855 Prosperous Digital Image Watermarking Approach by Using DCT-DWT

Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar

Abstract:

Every day, tons of data are embedded in digital media or distributed over the internet. The data are distributed in such a way that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information, known as a watermark, into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges have motivated researchers to carry out intense research in the field of watermarking. A watermark is a form, image or text impressed onto paper that provides evidence of its authenticity; digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this work, we concentrate on embedding watermarks in images. The main consideration for any watermarking scheme is its robustness to various attacks.
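
A minimal sketch of a DWT-DCT embedding of the general kind named in the title is shown below: take a Haar DWT of the host image, apply a DCT to the LL subband, and additively embed the watermark bits in selected coefficients. The embedding strength and coefficient positions are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed(host, wm_bits, alpha=8.0):
    LL, (LH, HL, HH) = pywt.dwt2(host.astype(float), "haar")   # DWT stage
    C = dctn(LL, norm="ortho")                                  # DCT of the LL subband
    rows, cols = np.unravel_index(np.arange(len(wm_bits)) + 50, C.shape)  # assumed slots
    C[rows, cols] += alpha * (2 * np.asarray(wm_bits) - 1)      # +/- alpha per watermark bit
    LL_marked = idctn(C, norm="ortho")
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

host = np.random.randint(0, 256, (256, 256)).astype(np.uint8)   # placeholder host image
bits = np.random.randint(0, 2, 64)
marked = embed(host, bits)
print("Mean absolute change:", np.abs(marked - host).mean())
```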

Keywords: watermarking, digital, DCT-DWT, security

Procedia PDF Downloads 426
7854 High Resolution Sandstone Connectivity Modelling: Implications for Outcrop Geological and Its Analog Studies

Authors: Numair Ahmed Siddiqui, Abdul Hadi bin Abd Rahman, Chow Weng Sum, Wan Ismail Wan Yousif, Asif Zameer, Joel Ben-Awal

Abstract:

Advances in data capture for outcrop studies have made possible the acquisition of high-resolution digital data, offering improved and economical reservoir modelling methods. Terrestrial laser scanning utilizing LiDAR (Light Detection and Ranging) provides a new way to build outcrop-based reservoir models, which supply crucial information for understanding heterogeneities in sandstone facies through high-resolution images and data sets. This study presents a detailed application of an outcrop-based sandstone facies connectivity model, built by combining information gathered from traditional fieldwork with detailed digital point-cloud data from LiDAR, to develop an intermediate, small-scale reservoir sandstone facies model of the Miocene Sandakan Formation, Sabah, East Malaysia. The software RiScan Pro (v1.8.0) was used for digital data collection and post-processing, with an accuracy of 0.01 m and a point acquisition rate of up to 10,000 points per second. We provide an accurate and descriptive workflow to triangulate point clouds of different sets of sandstone facies with well-marked top and bottom boundaries, in conjunction with field sedimentology. This provides a highly accurate qualitative sandstone facies connectivity model, which is challenging to obtain from subsurface datasets (i.e., seismic and well data). Finally, by applying this workflow, we can build an outcrop-based static connectivity model, which can serve as an analogue for subsurface reservoir studies.
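
The triangulation step for a single facies surface can be sketched as below: project the picked LiDAR points onto the map plane and build a triangulated surface with a 2-D Delaunay triangulation. The point cloud here is synthetic, and in practice the points would come from the processed RiScan Pro export.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 2000)                      # distance along the outcrop (m)
y = rng.uniform(0, 10, 2000)                       # distance into the outcrop (m)
z = 25 + 0.05 * x + 0.2 * np.sin(x / 5)            # gently dipping facies surface (synthetic)
pts = np.column_stack([x, y, z])

tri = Delaunay(pts[:, :2])                         # triangulate in map view
print("Triangles:", len(tri.simplices))
# tri.simplices indexes pts, giving a surface mesh; two such meshes (top and bottom
# boundaries) bound a facies body for the connectivity model.
```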

Keywords: LiDAR, outcrop, high resolution, sandstone facies, connectivity model

Procedia PDF Downloads 233
7853 Optical Flow Technique for Supersonic Jet Measurements

Authors: Haoxiang Desmond Lim, Jie Wu, Tze How Daniel New, Shengxian Shi

Abstract:

This paper outlines the development of a novel experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding-particle problems frequently associated with particle image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique is to use high-speed cameras to capture Schlieren images of the supersonic jet shear layers, which are then processed by an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point measurements or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical-flow-based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 8.2 bar and an exit velocity of Mach 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As the application of optical flow techniques to supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be further improved for robustness and accuracy. Details of the methodology employed and the challenges faced will be further elaborated in the final conference paper should the abstract be accepted. Despite these challenges, however, this novel supersonic flow measurement technique may potentially offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
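
A compact Horn-Schunck implementation of the kind adapted in such studies is sketched below, applied to two successive Schlieren-like frames; the smoothness weight and iteration count are illustrative, and the frames are synthetic placeholders.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def horn_schunck(im1, im2, alpha=15.0, n_iter=200):
    im1, im2 = im1.astype(float), im2.astype(float)
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25            # simplified derivative stencils
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0   # neighbourhood average
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):                              # iterative Horn-Schunck updates
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

frame1 = gaussian_filter(np.random.rand(128, 128), sigma=3)   # smooth synthetic pattern
frame2 = np.roll(frame1, shift=1, axis=1)                     # 1-pixel horizontal shift
u, v = horn_schunck(frame1, frame2)
print("Recovered mean u:", u.mean())
```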

Keywords: Schlieren, optical flow, supersonic jets, shock shear layer

Procedia PDF Downloads 314