Search results for: deep vein imaging
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3306

2646 Research on Design Methods for Riverside Spaces of Deep-cut Rivers in Mountainous Cities: A Case Study of Qingshuixi River in Chongqing City

Authors: Luojie Tang

Abstract:

Riverside space is an important public space and ecological corridor in urban areas, but mountainous urban rivers are often overlooked due to their deep valleys and poor accessibility. This article takes the Qing Shui Xi River in Chongqing as an example, and through long-term field inspections, measurements, interviews, and online surveys, summarizes the problems of poor accessibility, limited space for renovation, lack of waterfront facilities, excessive artificial intervention, low average runoff, severe river water pollution, and difficulty in integrated watershed management in riverside space. Based on the current situation and drawing on relevant experiences, this article summarizes the design methods for riverside space in deep valley rivers in mountainous urban areas. Regarding spatial design techniques, the article emphasizes the importance of integrating waterfront spaces into the urban public space system and vertical linkages. Furthermore, the article suggests different design methods and improvement strategies for the already developed areas and new development areas. Specifically, the article proposes a planning and design strategy of "protection" and "empowerment" for new development areas and an updating and transformation strategy of "improvement" and "revitalization" for already developed areas. In terms of ecological restoration methods, the article suggests three focus points: increasing the runoff of urban rivers, raising the landscape water level during dry seasons, and restoring vegetation and wetlands in the riverbank buffer zone while protecting the overall pattern of the watershed. Additionally, the article presents specific design details of the Qingshuixi River to illustrate the proposed design and restoration techniques.

Keywords: deep-cut river, design method, mountainous city, Qingshuixi river in Chongqing, waterfront space design

Procedia PDF Downloads 82
2645 An Activatable Theranostic for Targeted Cancer Therapy and Imaging

Authors: Sankarprasad Bhuniya, Sukhendu Maiti, Eun-Joong Kim, Hyunseung Lee, Jonathan L. Sessler, Kwan Soo Hong, Jong Seung Kim

Abstract:

A new theranostic strategy is described. It is based on the use of an “all in one” prodrug, namely the biotinylated piperazine-rhodol conjugate 4a. This conjugate, which incorporates the anticancer drug SN-38, undergoes self-immolative cleavage when exposed to biological thiols. This leads to the tumor-targeted release of the active SN-38 payload along with fluorophore 1a. This release is made selective as the result of the biotin functionality. Fluorophore 1a is 32-fold more fluorescent than prodrug 4a. It permits the delivery and release of the SN-38 payload to be monitored easily in vitro and in vivo, as inferred from cell studies and ex vivo analyses of mouse xenografts derived from HeLa cells, respectively. Prodrug 4a also displays anticancer activity in the HeLa cell murine xenograft tumor model. On the basis of these findings, we suggest that the present strategy, which combines within a single agent the key functions of targeting, release, imaging, and treatment, may have a role to play in cancer diagnosis and therapy.

Keywords: theranostic, prodrug, cancer therapy, fluorescence

Procedia PDF Downloads 521
2644 Deep Reinforcement Learning for Advanced Pressure Management in Water Distribution Networks

Authors: Ahmed Negm, George Aggidis, Xiandong Ma

Abstract:

With the diverse nature of urban cities, customer demand patterns, landscape topologies or even seasonal weather trends, managing our water distribution networks (WDNs) has proved a complex task. These unpredictable circumstances manifest as pipe failures, intermittent supply and burst events, thus adding to water loss, energy waste and increased carbon emissions. Whilst these events are unavoidable, advanced pressure management has proved an effective tool to control and mitigate them. Consequently, water utilities have struggled with developing a real-time control method that is resilient when confronting the challenges of water distribution. In this paper, we use deep reinforcement learning (DRL) algorithms as a novel pressure control strategy to minimise pressure violations and leakage under both burst and background leakage conditions. Agents based on asynchronous actor critic (A2C) and recurrent proximal policy optimisation (Recurrent PPO) were trained and compared to benchmarked optimisation algorithms (differential evolution, particle swarm optimisation). A2C manages to minimise leakage by 32.48% under burst conditions and 67.17% under background conditions, which was the highest performance among the DRL algorithms. A2C and Recurrent PPO performed well in comparison to the benchmarks, with higher processing speed and lower computational effort.
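
The control formulation described can be sketched by framing pressure management as a reinforcement learning environment and training an A2C agent on it. The Gymnasium and Stable-Baselines3 interfaces below are standard, but the toy hydraulic response, leakage model and reward weights are assumptions rather than the authors' network simulator.

```python
# Minimal sketch: pressure control as an RL environment (illustrative only).
# The linear "hydraulics" below is a stand-in for a real WDN simulator (e.g. EPANET).
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import A2C

class PressureControlEnv(gym.Env):
    """Toy WDN: actions are PRV settings in [0, 1]; observations are node pressures (m)."""
    def __init__(self, n_nodes=8, n_valves=2, p_min=20.0, p_max=60.0):
        super().__init__()
        self.n_nodes, self.n_valves = n_nodes, n_valves
        self.p_min, self.p_max = p_min, p_max
        self.observation_space = spaces.Box(0.0, 120.0, shape=(n_nodes,), dtype=np.float32)
        self.action_space = spaces.Box(0.0, 1.0, shape=(n_valves,), dtype=np.float32)
        self.rng = np.random.default_rng(0)

    def _pressures(self, action):
        demand = 10.0 * self.rng.random(self.n_nodes)       # stochastic nodal demand
        base = 80.0 - demand                                 # pressure without throttling
        return base - 30.0 * action.mean()                   # valves reduce head uniformly

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        obs = self._pressures(np.zeros(self.n_valves, dtype=np.float32))
        return obs.astype(np.float32), {}

    def step(self, action):
        p = self._pressures(np.asarray(action, dtype=np.float32))
        violations = np.sum(np.maximum(self.p_min - p, 0) + np.maximum(p - self.p_max, 0))
        leakage = np.sum(np.maximum(p, 0) ** 1.18)           # leakage grows with excess pressure
        reward = -(violations + 1e-3 * leakage)               # penalty weights are illustrative
        return p.astype(np.float32), float(reward), False, False, {}

if __name__ == "__main__":
    model = A2C("MlpPolicy", PressureControlEnv(), verbose=0)
    model.learn(total_timesteps=10_000)
```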

Keywords: deep reinforcement learning, pressure management, water distribution networks, leakage management

Procedia PDF Downloads 60
2643 Multiple Freezing/Thawing Cycles Influence the Internal Structure and Mechanical Properties of the Achilles Tendon

Authors: Martyna Ekiert, Natalia Grzechnik, Joanna Karbowniczek, Urszula Stachewicz, Andrzej Mlyniec

Abstract:

Tendon grafting is a common procedure performed to treat tendon rupture. Before the surgical procedure, tissues intended for grafts (i.e., Achilles tendon) are stored at ultra-low temperatures for a long time and may also be subjected to unfavorable conditions, such as repetitive freezing (F) and thawing (T). Such storage protocols may strongly influence the graft's mechanical properties, decrease its functionality and thus increase the risk of complications during the transplant procedure. Literature reports on the influence of multiple F/T cycles on the internal structure and mechanical properties of tendons remain inconclusive, simultaneously confirming and denying the negative influence of multiple F/T cycles. Inconsistent research methodology and the lack of a clear limit on the number of F/T cycles that disqualifies tissue for surgical graft purposes encouraged us to investigate the issue of multiple F/T cycles by means of biomechanical tensile tests supported with Scanning Electron Microscope (SEM) imaging. The study was conducted on male bovine Achilles tendons obtained from a local abattoir. Fresh tendons were cleaned of excessive membranes and then sectioned to obtain fascicle bundles. Collected samples were randomly assigned to 6 groups subjected to 1, 2, 4, 6, 8 and 12 cycles of freezing-thawing (F/T), respectively. Each F/T cycle included deep freezing at -80°C, followed by thawing at room temperature. After the final thawing, thin slices of the side part of samples subjected to 1, 4, 8 and 12 F/T cycles were collected for SEM imaging. Then, the width and thickness of all samples were measured to calculate the cross-sectional area. Biomechanical tests were performed on a universal testing machine (model Instron 8872, INSTRON®, Norwood, Massachusetts, USA) with a load cell with a maximum capacity of 250 kN under standard atmospheric conditions. Both ends of each fascicle bundle were manually clamped in grasping clamps using abrasive paper and wet cellulose wadding swabs to prevent tissue slipping while clamping and testing. Samples were subjected to a testing procedure including pre-loading, pre-cycling, loading, holding and unloading steps to obtain stress-strain curves representing tendon stretching and relaxation. The stiffness of AT fascicle bundle samples was evaluated in terms of the modulus of elasticity (Young’s modulus), calculated from the slope of the linear region of the stress-strain curves. SEM imaging was preceded by chemical sample preparation including 24 h fixation in 3% glutaraldehyde buffered with 0.1 M phosphate buffer, washing with 0.1 M phosphate buffer solution and dehydration in a graded ethanol series. SEM images (Merlin Gemini II microscope, ZEISS®) were taken at 30,000× magnification, which allowed measuring the diameter of collagen fibrils. The results confirm a decrease in the fascicle bundles' Young’s modulus as well as a decrease in the diameter of collagen fibrils. These results confirm the negative influence of multiple F/T cycles on the mechanical properties of tendon tissue.
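
As an illustration of the modulus calculation described, the sketch below estimates Young's modulus as the least-squares slope of the linear region of a stress-strain curve; the strain window and the synthetic curve are assumptions, not the study's data.

```python
# Sketch: Young's modulus as the slope of the linear (post-toe) region of a
# stress-strain curve. The strain window is an assumed example, not the one
# used in the study.
import numpy as np

def youngs_modulus(strain, stress, window=(0.02, 0.04)):
    """Least-squares slope of stress [MPa] vs. strain [-] inside the chosen strain window."""
    strain, stress = np.asarray(strain), np.asarray(stress)
    mask = (strain >= window[0]) & (strain <= window[1])
    slope, _intercept = np.polyfit(strain[mask], stress[mask], deg=1)
    return slope  # MPa, since stress is in MPa and strain is dimensionless

# Example with synthetic data: a toe region followed by a ~600 MPa linear region.
strain = np.linspace(0, 0.05, 200)
stress = 600 * np.maximum(strain - 0.01, 0)   # crude bilinear curve
print(f"E ≈ {youngs_modulus(strain, stress):.0f} MPa")
```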

Keywords: biomechanics, collagen, fascicle bundles, soft tissue

Procedia PDF Downloads 110
2642 Particle Size Effect on Shear Strength of Granular Materials in Direct Shear Test

Authors: R. Alias, A. Kasa, M. R. Taha

Abstract:

The effect of particle size on the shear strength of granular materials is investigated using direct shear tests. Small direct shear tests (60 mm by 60 mm by 24 mm deep) were conducted on particles passing the sieve with an opening size of 2.36 mm. Meanwhile, particles passing the standard 20 mm sieve were tested using large direct shear tests (300 mm by 300 mm by 200 mm deep). The large and small direct shear tests were carried out using the same shearing rate of 0.09 mm/min and the same normal stresses of 100, 200, and 300 kPa. The results show that the peak and residual shear strengths decrease as particle size increases.
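
The shear strength parameters implied by such tests are commonly obtained by fitting the Mohr-Coulomb envelope to the peak shear stresses measured at each normal stress. A minimal sketch, assuming placeholder stress values rather than the study's measurements:

```python
# Sketch: fitting the Mohr-Coulomb envelope tau = c + sigma_n * tan(phi) to the
# peak shear stresses measured at the three normal stress levels. The stress
# values below are hypothetical placeholders.
import numpy as np

normal_stress = np.array([100.0, 200.0, 300.0])   # kPa, as applied in the tests
peak_shear    = np.array([ 85.0, 160.0, 240.0])   # kPa, hypothetical peak values

tan_phi, cohesion = np.polyfit(normal_stress, peak_shear, deg=1)
phi_deg = np.degrees(np.arctan(tan_phi))

print(f"cohesion c ≈ {cohesion:.1f} kPa, friction angle phi ≈ {phi_deg:.1f} deg")
```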

Keywords: particle size, shear strength, granular material, direct shear test

Procedia PDF Downloads 467
2641 Comparison of Machine Learning and Deep Learning Algorithms for Automatic Classification of 80 Different Pollen Species

Authors: Endrick Barnacin, Jean-Luc Henry, Jimmy Nagau, Jack Molinie

Abstract:

Palynology is a field of interest in many disciplines due to its multiple applications: chronological dating, climatology, allergy treatment, and honey characterization. Unfortunately, the analysis of a pollen slide is a complicated and time-consuming task that requires the intervention of experts in the field, who are becoming increasingly rare due to economic and social conditions. That is why the need for automation of this task is urgent. Many studies have investigated the subject using different standard image processing descriptors and sometimes hand-crafted ones. In this work, we make a comparative study between classical feature extraction methods (Shape, GLCM, LBP, and others) and Deep Learning (CNN, Autoencoders, Transfer Learning) to perform a recognition task over 80 regional pollen species. It has been found that the use of Transfer Learning seems to be more precise than the other approaches.
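
As an illustration of the transfer learning route the abstract favours, the sketch below builds a frozen ImageNet backbone with a new 80-way classification head in Keras; the choice of ResNet50, the image size and the directory layout are assumptions.

```python
# Sketch of the transfer-learning route: a frozen ImageNet backbone with a new
# 80-way softmax head. Backbone, image size and data layout are assumptions.
import tensorflow as tf

NUM_CLASSES = 80
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # keep pretrained features, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumed layout: one sub-directory of pollen images per species.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "pollen/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "pollen/val", image_size=(224, 224), batch_size=32)
model.fit(train_ds, validation_data=val_ds, epochs=10)
```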

Keywords: pollens identification, features extraction, pollens classification, automated palynology

Procedia PDF Downloads 116
2640 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model

Authors: Anshika Kankane, Dongshik Kang

Abstract:

Marine debris has a significant influence on coastal environments, damaging biodiversity and causing loss and damage to the marine and ocean sectors. A functional, cost-effective and automatic approach has been adopted to address this problem. Computer vision combined with a deep learning-based model is proposed to identify and categorize marine debris of seven kinds at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested model architecture that is utilized as a feature extractor for debris categorization. The model is proposed to detect seven categories of litter using a manually constructed debris dataset, with the help of Mask R-CNN for instance segmentation and a shape matching network called HOGShape, so that litter can be cleaned up in time by clean-up organizations using the system's warning notifications. The manually constructed dataset for this system is created by annotating the images taken by a fixed KaKaXi camera using the CVAT annotation tool with seven category labels. A pre-trained HOG feature extractor on LIBSVM is used along with multiple template matching on HOG maps of images and HOG maps of templates to improve the predicted masked images obtained via Mask R-CNN training. This system intends to alert cleanup organizations in a timely manner with warning notifications using live recorded beach debris data. The suggested network improves the misclassified debris masks of debris objects with different illuminations, shapes and viewpoints, as well as occluded litter with vague visibility.

Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching

Procedia PDF Downloads 85
2639 Impact of Integrated Signals for Human Activity Recognition Using Deep Learning Models

Authors: Milagros Jaén-Vargas, Javier García Martínez, Karla Miriam Reyes Leiva, María Fernanda Trujillo-Guerrero, Francisco Fernandes, Sérgio Barroso Gonçalves, Miguel Tavares Silva, Daniel Simões Lopes, José Javier Serrano Olmedo

Abstract:

Human Activity Recognition (HAR) is having a growing impact in creating new applications and is responsible for emerging new technologies. Also, the use of wearable sensors is an important key to exploring the human body's behavior when performing activities, since these devices are less invasive and more comfortable for the person. In this study, a database that includes three activities is used. The activities were acquired from inertial measurement unit (IMU) sensors and motion capture (MOCAP) systems. The main objective is to differentiate the performance of four Deep Learning (DL) models: Deep Neural Network (DNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) and the hybrid model Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM), when considering acceleration, velocity and position, and to evaluate whether integrating the IMU acceleration to obtain velocity and position improves performance when used as input to the DL models. The results are also compared with the same type of data provided by the MOCAP system. Although the acceleration data are cleaned when integrating, results show only a minimal increase in accuracy for the integrated signals.
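
The integration step being evaluated can be illustrated with a minimal sketch that converts IMU acceleration to velocity and position using the trapezoidal rule; the sampling rate, bias removal and de-drifting choices are assumptions rather than the study's exact pipeline.

```python
# Sketch: integrating IMU acceleration to velocity and position with the
# trapezoidal rule. Sampling rate and the simple detrending used to limit
# integration drift are assumptions.
import numpy as np

def integrate_imu(acc, fs=100.0):
    """acc: (N, 3) acceleration in m/s^2 sampled at fs Hz -> (velocity, position)."""
    dt = 1.0 / fs
    acc = acc - acc.mean(axis=0)                                   # crude bias removal
    vel = np.zeros_like(acc)
    vel[1:] = np.cumsum(0.5 * (acc[1:] + acc[:-1]) * dt, axis=0)   # trapezoidal integration
    vel = vel - np.linspace(0, 1, len(vel))[:, None] * vel[-1]     # linear de-drift
    pos = np.zeros_like(vel)
    pos[1:] = np.cumsum(0.5 * (vel[1:] + vel[:-1]) * dt, axis=0)
    return vel, pos

acc = np.random.randn(1000, 3) * 0.1     # stand-in for a 10 s IMU recording
velocity, position = integrate_imu(acc)
print(velocity.shape, position.shape)    # both (1000, 3), ready to stack as DL inputs
```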

Keywords: HAR, IMU, MOCAP, acceleration, velocity, position, feature maps

Procedia PDF Downloads 78
2638 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy

Authors: Chhabi Nigam, S. Ramakrishnan

Abstract:

This paper presents an analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of the Stripmap mode of data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side-looking), a shift in the Doppler centroid is inevitable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image mis-registration. The effect of the Doppler centroid is analyzed in this paper using multiple sets of data collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, which impact the appropriate choice of PRF. The effect of aircraft attitudes (roll, pitch and yaw) on the Doppler centroid is also analyzed with the collected data sets. Various stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode, namely range compression, Doppler centroid estimation, azimuth compression and range cell migration correction, are analyzed to find the performance limits and the dependence of the imaging geometry on the final image. The ability of Doppler centroid estimation to enhance the imaging accuracy for registration is also illustrated in this paper. The paper also discusses the processing of low-squint SAR data, the challenges, and the performance limits imposed by the imaging geometry and the platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water and bright scatterers, is also presented.
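
A common way to estimate the fractional Doppler centroid from range-compressed data is the pulse-pair (average cross-correlation) method; the sketch below assumes an (azimuth, range) data layout and a particular sign convention, and uses a toy signal in place of real airborne data.

```python
# Sketch of a standard fractional Doppler-centroid estimator (pulse-pair /
# average cross-correlation method). Sign convention and the (azimuth, range)
# data layout are assumptions; real range-compressed data would replace the toy signal.
import numpy as np

def doppler_centroid(data, prf):
    """data: complex array (n_azimuth_pulses, n_range_bins); returns f_dc in Hz."""
    accc = np.sum(np.conj(data[:-1, :]) * data[1:, :])   # pulse-to-pulse correlation
    return (prf / (2.0 * np.pi)) * np.angle(accc)

# Toy check: a signal with a known 150 Hz centroid sampled at PRF = 1500 Hz.
prf, f_dc_true = 1500.0, 150.0
t = np.arange(512) / prf
toy = np.exp(2j * np.pi * f_dc_true * t)[:, None] * np.ones((1, 64))
print(f"estimated f_dc ≈ {doppler_centroid(toy, prf):.1f} Hz")
```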

Keywords: ambiguous target, Doppler Centroid, image registration, Airborne SAR

Procedia PDF Downloads 199
2637 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour

Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling

Abstract:

Digital Twin (DT) technology is a new technology that appeared in the early 21st century. The DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to be able to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms and data acquisition systems, taking into consideration model uncertainties. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage with small severity.
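
The offline reduced-basis construction can be illustrated with a proper orthogonal decomposition of a snapshot matrix of full-order FE solutions; the snapshot data, truncation tolerance and the trivial online operator below are placeholders, not the paper's truss model.

```python
# Sketch of the offline reduced-basis (POD) step: an SVD of a snapshot matrix of
# full-order FE solutions gives a low-dimensional basis reused online. Snapshots,
# tolerance and the reduced operator are placeholders.
import numpy as np

def build_reduced_basis(snapshots, energy_tol=1e-4):
    """snapshots: (n_dof, n_snapshots) full-order solutions -> basis (n_dof, r)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - energy_tol)) + 1   # keep modes up to the energy target
    return U[:, :r]

n_dof, n_snap = 2000, 60
snapshots = np.random.rand(n_dof, 5) @ np.random.rand(5, n_snap)   # rank-5 toy data
V = build_reduced_basis(snapshots)
print("reduced dimension:", V.shape[1])                             # ~5 for this toy case

# Online stage (illustrative): project a full-order operator K and load f,
# solve the small reduced system, then lift the result back to full dimension.
K = np.eye(n_dof); f = np.ones(n_dof)
u_r = np.linalg.solve(V.T @ K @ V, V.T @ f)
u_approx = V @ u_r
```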

Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model

Procedia PDF Downloads 79
2636 Water-Controlled Fracturing with Fuzzy-Ball Fluid in Tight Gas Reservoirs of Deep Coal Measures in Sulige

Authors: Xiangchun Wang, Lihui Zheng, Maozong Gan, Peng Zhang, Tong Wu, An Chang

Abstract:

The deep coal measure tight gas reservoir in Sulige is usually stimulated by fracturing. Because the reservoir thickness is small, the water layers can easily be connected during fracturing, which leads to water production and lower gas output of the wells. Therefore, it is necessary to control water during fracturing in the deep coal measure tight gas reservoir. Water-controlled fracturing with fuzzy-ball fluid can not only increase the gas output but also reduce the water output. The fuzzy-ball fluid was prepared indoors to carry out evaluation experiments. The fuzzy-ball fluid was mixed in equal volume with the pad fluid and formation water to test its compatibility. A core displacement device was used to test gas and water breakthrough in matrix and fractured cores plugged by the fuzzy-ball fluid; the breakthrough pressure of the plunger tests its water blocking performance. The experimental results show that there is no precipitation after the fuzzy-ball fluid is mixed with the pad fluid and the formation water, respectively. The breakthrough pressure gradients of gas and water after the fuzzy-ball fluid plugged the fractures were 0.02 MPa/cm and 0.04 MPa/cm, respectively, and the breakthrough pressure gradients of gas and water after the matrix was plugged were 0.03 MPa/cm and 0.2 MPa/cm, respectively, which meet the requirements of field operation. Two wells, A and B, in the Sulige Gas Field were used on site to implement water-controlled fracturing. After the pad fluid was injected into the two wells, 50 m³ of fuzzy-ball fluid was pumped to plug the water. The operation went smoothly. After water control and fracturing, the average daily output over 161 days was increased by 13.71% and 6.99% compared with that of adjacent wells in the same layer. The adjacent wells were foamed 3 times and 63 times, respectively, while there was no liquid accumulation in wells A and B. The results show that fuzzy-ball fluid is a water plugging material suitable for water-controlled fracturing in tight gas wells, and its water control mechanism can also provide a new idea for the development of water control fracturing materials.

Keywords: coal seam, deep layer, fracking, fuzzy-ball fluid, reservoir reconstruction

Procedia PDF Downloads 202
2635 Keyframe Extraction Using Face Quality Assessment and Convolution Neural Network

Authors: Rahma Abed, Sahbi Bahroun, Ezzeddine Zagrouba

Abstract:

Due to the huge amount of data in videos, extracting the relevant frames has become a necessity and an essential step prior to performing face recognition. In this context, we propose a method for extracting keyframes from videos based on face quality and deep learning for a face recognition task. This method has two steps. We start by generating face quality scores for each face image based on the use of three face feature extractors, including Gabor, LBP, and HOG. The second step consists of training a Deep Convolutional Neural Network in a supervised manner in order to select the frames that have the best face quality. The obtained results show the effectiveness of the proposed method compared to state-of-the-art methods.

Keywords: keyframe extraction, face quality assessment, face in video recognition, convolution neural network

Procedia PDF Downloads 205
2634 Investigation of the Catalytic Role of Surfactants on Carbon Dioxide Hydrate Formation in Sediments

Authors: Ehsan Heidaryan

Abstract:

Gas hydrate sediments are ice-like permafrost found in the deep sea and oceans. Methane production in the sequestration process and the reduction of atmospheric carbon dioxide, a main source of greenhouse gas, have been accentuated recently. One focus is the capture, separation, and sequestration of industrial carbon dioxide. As a hydrate former, carbon dioxide forms hydrates at moderate temperatures and pressures. This phenomenon could be utilized to capture and separate carbon dioxide from flue gases, and also has the potential to sequester carbon dioxide in the deep seabeds. This research investigated the effect of synthetic surfactants on carbon dioxide hydrate formation, catalysis and, consequently, methane production from hydrate permafrosts in sediments. It investigated the sequestration potential of carbon dioxide hydrates in ocean sediments. Also, the catalytic effect of biosurfactants in these processes was investigated.

Keywords: carbon dioxide, hydrate, sequestration, surfactant

Procedia PDF Downloads 415
2633 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration has been widely used in the petroleum exploration industry to reveal subsurface images and to detect rock and fluid properties since the early 1980s. The seismic technology involves the construction of a velocity model through interpretive model construction, seismic tomography, or full waveform inversion, and the application of reverse-time propagation to the acquired seismic data and the original wavelet used in the acquisition. The methodology has matured from 2D, simple media to present-day full 3D imaging in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the "bent-ray" method, also known as migration. Its seismic counterpart is called Kirchhoff depth migration, in which the source of reflective energy is traced by ray tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration (RTM), on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structure. It has the capability to fully reconstruct the image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to real-time imaging as normally required in day-to-day medical operations. However, with the improvement of computing technology, such a computational bottleneck may not present a challenge in the near future. Present-day RTM algorithms are typically implemented from a flat datum for the seismic industry. They can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. Such flexibility of RTM can be conveniently exploited for USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data are collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging to produce an accurate 3D acoustic image, based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, with PSPI wavefield extrapolation and a piece-wise constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images will produce a full image of the organ, with a much-reduced noise level compared with individual partial images.
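
The building block of PSPI extrapolation is a constant-velocity phase-shift step applied in the frequency-wavenumber domain; PSPI repeats this step for several reference velocities and interpolates the results. A minimal sketch of that single step, with grid spacing, velocity and the toy wavefield assumed:

```python
# Sketch of one constant-velocity phase-shift extrapolation step, the building
# block that PSPI repeats for several reference velocities and then interpolates.
# Grid spacing, velocity and the toy wavefield are placeholders.
import numpy as np

def phase_shift_step(wavefield_fx, freqs, dx, dz, velocity):
    """wavefield_fx: (n_freq, n_x) monochromatic wavefield slices P(f, x) at depth z.
    Returns the wavefield extrapolated to depth z + dz for a constant velocity."""
    n_x = wavefield_fx.shape[1]
    kx = 2.0 * np.pi * np.fft.fftfreq(n_x, d=dx)          # horizontal wavenumbers (rad/m)
    out = np.empty_like(wavefield_fx)
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        kz2 = (w / velocity) ** 2 - kx ** 2
        prop = kz2 > 0                                     # propagating components only
        kz = np.sqrt(np.where(prop, kz2, 0.0))
        spec = np.fft.fft(wavefield_fx[i, :])
        spec = np.where(prop, spec * np.exp(1j * kz * dz), 0.0)   # drop evanescent energy
        out[i, :] = np.fft.ifft(spec)
    return out

freqs = np.linspace(0.2e6, 1.0e6, 40)                      # 0.2-1 MHz ultrasound band
field = np.random.randn(40, 256) + 1j * np.random.randn(40, 256)
next_depth = phase_shift_step(field, freqs, dx=0.5e-3, dz=0.5e-3, velocity=1500.0)
```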

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 56
2632 Empirical Evaluation of Gradient-Based Training Algorithms for Ordinary Differential Equation Networks

Authors: Martin K. Steiger, Lukas Heisler, Hans-Georg Brachtendorf

Abstract:

Deep neural networks and their variants form the backbone of many AI applications. Based on the so-called residual networks, a continuous formulation of such models as ordinary differential equations (ODEs) has proven advantageous, since different techniques may be applied that significantly increase the learning speed and at the same time enable controlled trade-offs with the resulting error. For the evaluation of such models, high-performance numerical differential equation solvers are used, which also provide the gradients required for training. However, whether classical gradient-based methods are even applicable, or which one yields the best results, has not been discussed yet. This paper aims to remedy this situation by providing empirical results for different applications.
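
The kind of empirical comparison described can be illustrated by training the same small network with several classical gradient-based optimizers and recording the validation result; the stand-in model, synthetic data and hyperparameters below are placeholders, and the ODE-network formulation itself is not reproduced here.

```python
# Sketch of an optimizer comparison on a small stand-in network. Model, synthetic
# data and hyperparameters are placeholders; the ODE-network formulation evaluated
# in the paper is not reproduced.
import numpy as np
import tensorflow as tf

x = np.random.rand(2000, 16).astype("float32")
y = (x.sum(axis=1) > 8.0).astype("float32")        # synthetic binary target

def make_model():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(16,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

for name in ["sgd", "rmsprop", "adam"]:            # classical gradient-based methods
    tf.keras.utils.set_random_seed(0)              # same initialisation for a fair comparison
    model = make_model()
    model.compile(optimizer=name, loss="binary_crossentropy", metrics=["accuracy"])
    hist = model.fit(x, y, epochs=5, batch_size=64, verbose=0, validation_split=0.2)
    print(name, "val_acc =", round(hist.history["val_accuracy"][-1], 3))
```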

Keywords: deep neural networks, gradient-based learning, image processing, ordinary differential equation networks

Procedia PDF Downloads 142
2631 Probing Neuron Mechanics with a Micropipette Force Sensor

Authors: Madeleine Anthonisen, M. Hussain Sangji, G. Monserratt Lopez-Ayon, Margaret Magdesian, Peter Grutter

Abstract:

Advances in micromanipulation techniques and real-time particle tracking with nanometer resolution have enabled biological force measurements at scales relevant to neuron mechanics. An approach to precisely control and maneuver neurite-tethered polystyrene beads is presented. Analogous to an Atomic Force Microscope (AFM), this multi-purpose platform is a force sensor with image acquisition and manipulation capabilities. A mechanical probe composed of a micropipette with its tip fixed to a functionalized bead is used to incite the formation of a neurite in a sample of rat hippocampal neurons while simultaneously measuring the tension in said neurite as the sample is pulled away from the beaded tip. With optical imaging methods, a force resolution of 12 pN is achieved. Moreover, the advantages of this technique over alternatives such as AFM, namely the ease of manipulation, which ultimately allows higher-throughput investigation of the mechanical properties of neurons, are demonstrated.

Keywords: axonal growth, axonal guidance, force probe, pipette micromanipulation, neurite tension, neuron mechanics

Procedia PDF Downloads 346
2630 MIMIC: A Multi Input Micro-Influencers Classifier

Authors: Simone Leonardi, Luca Ardito

Abstract:

Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts from Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold standard dataset and to compare the performance of our approach against baseline classifiers. We prove the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model with different configurations and architectures. We obtained an accuracy of 0.91 with our best performing model.
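
The final fusion stage can be sketched as deep features from unstructured content concatenated with structured account statistics and passed to an XGBoost classifier; the feature dimensions, labels and hyperparameters below are synthetic placeholders.

```python
# Sketch of the fusion stage: deep features extracted from posts are concatenated
# with structured account statistics and classified with XGBoost. Feature
# dimensions, labels and hyperparameters are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from xgboost import XGBClassifier

n_accounts = 1000
deep_features = np.random.rand(n_accounts, 128)    # e.g. image/text embeddings per account
account_stats = np.random.rand(n_accounts, 12)     # followers, posts per week, engagement rate...
X = np.hstack([deep_features, account_stats])
y = np.random.randint(0, 2, n_accounts)            # 1 = micro-influencer in the gold standard

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "f1:", f1_score(y_te, pred))
```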

Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media

Procedia PDF Downloads 155
2629 Design and Manufacture Detection System for Patient's Unwanted Movements during Radiology and CT Scan

Authors: Anita Yaghobi, Homayoun Ebrahimian

Abstract:

One of the important tools that can help orthopedic doctors diagnose diseases is the imaging scan. Imaging techniques can help physicians see different parts of the body, including the bones, muscles, tendons, nerves, and cartilage. During a CT scan, a patient must remain in the same position from the start to the end of the scan. Patient movements are usually monitored by the technologists through closed-circuit television (CCTV) during the scan. If the patient makes a small movement, it is difficult for them to notice. In the present work, a simple patient movement monitoring device is fabricated to monitor patient movement. It uses an electronic sensing device and continuously monitors the patient’s position while the CT scan is in process. The device has been retrospectively tested on 51 patients whose movement and distance were measured. The results show that 25 patients moved 1 cm to 2.5 cm from their initial position during the CT scan. Hence, the device can potentially be used to control and monitor patient movement during CT scans and radiography. In addition, an audible alarm situated at the control panel of the control room is provided with this device to alert the technologists. It is an inexpensive, compact device which can be used with any CT scan machine.

Keywords: CT scan, radiology, X Ray, unwanted movement

Procedia PDF Downloads 442
2628 Fabrication of Poly(Ethylene Oxide)/Chitosan/Indocyanine Green Nanoprobe by Co-Axial Electrospinning Method for Early Detection

Authors: Zeynep R. Ege, Aydin Akan, Faik N. Oktar, Betul Karademir, Oguzhan Gunduz

Abstract:

Early detection of cancer by advanced biomedical imaging techniques could save lives and preserve quality of life in insidious cases. Designing a targeted detection system is necessary in order to protect healthy cells. Electrospun nanofibers are efficient and targetable nanocarriers with important properties such as nanometric diameter, mechanical strength, elasticity, porosity and surface-area-to-volume ratio. In the present study, the organic dye indocyanine green (ICG) was stabilized and encapsulated in a polymer matrix of polyethylene oxide (PEO) and chitosan (CHI) multilayer nanofibers via the co-axial electrospinning method in one step. The co-axial electrospun nanofibers were characterized morphologically (SEM) and molecularly (FT-IR), and the entrapment efficiency of indocyanine green (ICG) was assessed by confocal imaging. The controlled release profile of the PEO/CHI/ICG nanofiber was also evaluated over up to 40 hours.

Keywords: chitosan, coaxial electrospinning, controlled releasing, drug delivery, indocyanine green, polyethylene oxide

Procedia PDF Downloads 152
2627 Clinical Impact of Ultra-Deep Versus Sanger Sequencing Detection of Minority Mutations on the HIV-1 Drug Resistance Genotype Interpretations after Virological Failure

Authors: S. Mohamed, D. Gonzalez, C. Sayada, P. Halfon

Abstract:

Drug resistance mutations are routinely detected using standard Sanger sequencing, which does not detect minor variants with a frequency below 20%. The impact of detecting minor variants generated by ultra-deep sequencing (UDS) on HIV drug-resistance (DR) interpretations has not yet been studied. Fifty HIV-1 patients who experienced virological failure were included in this retrospective study. The HIV-1 UDS protocol allowed the detection and quantification of HIV-1 protease and reverse transcriptase variants related to genotypes A, B, C, E, F, and G. DeepChek®-HIV simplified DR interpretation software was used to compare Sanger sequencing and UDS. The total time required for the UDS protocol was found to be approximately three times longer than Sanger sequencing with equivalent reagent costs. UDS detected all of the mutations found by population sequencing and identified additional resistance variants in all patients. An analysis of DR revealed a total of 643 and 224 clinically relevant mutations by UDS and Sanger sequencing, respectively. Three resistance mutations with > 20% prevalence were detected solely by UDS: A98S (23%), E138A (21%) and V179I (25%). A significant difference in the DR interpretations for 19 antiretroviral drugs was observed between the UDS and Sanger sequencing methods. Y181C and T215Y were the most frequent mutations associated with interpretation differences. A combination of UDS and DeepChek® software for the interpretation of DR results would help clinicians provide suitable treatments. A cut-off of 1% allowed a better characterisation of the viral population by identifying additional resistance mutations and improving the DR interpretation.

Keywords: HIV-1, ultra-deep sequencing, Sanger sequencing, drug resistance

Procedia PDF Downloads 313
2626 Measuring Human Perception and Negative Elements of Public Space Quality Using Deep Learning: A Case Study of Area within the Inner Road of Tianjin City

Authors: Jiaxin Shi, Kaifeng Hao, Qingfan An, Zeng Peng

Abstract:

Due to a lack of data sources and data processing techniques, it has always been difficult to quantify public space quality, which includes urban construction quality and how it is perceived by people, especially in large urban areas. This study proposes a quantitative research method based on the consideration of emotional health and physical health of the built environment. It highlights the low quality of public areas in Tianjin, China, where there are many negative elements. Deep learning technology is then used to measure how effectively people perceive urban areas. First, this work suggests a deep learning model that might simulate how people can perceive the quality of urban construction. Second, we perform semantic segmentation on street images to identify visual elements influencing scene perception. Finally, this study correlated the scene perception score with the proportion of visual elements to determine the surrounding environmental elements that influence scene perception. Using a small-scale labeled Tianjin street view data set based on transfer learning, this study trains five negative spatial discriminant models in order to explore the negative space distribution and quality improvement of urban streets. Then it uses all Tianjin street-level imagery to make predictions and calculate the proportion of negative space. Visualizing the spatial distribution of negative space along the Tianjin Inner Ring Road reveals that the negative elements are mainly found close to the five key districts. The map of Tianjin was combined with the experimental data to perform the visual analysis. Based on the emotional assessment, the distribution of negative materials, and the direction of street guidelines, we suggest guidance content and design strategy points of the negative phenomena in Tianjin street space in the two dimensions of perception and substance. This work demonstrates the utilization of deep learning techniques to understand how people appreciate high-quality urban construction, and it complements both theory and practice in urban planning. It illustrates the connection between human perception and the actual physical public space environment, allowing researchers to make urban interventions.

Keywords: human perception, public space quality, deep learning, negative elements, street images

Procedia PDF Downloads 89
2625 Application of Compressed Sensing and Different Sampling Trajectories for Data Reduction of Small Animal Magnetic Resonance Image

Authors: Matheus Madureira Matos, Alexandre Rodrigues Farias

Abstract:

Magnetic Resonance Imaging (MRI) is a vital imaging technique used in both clinical and pre-clinical areas to obtain detailed anatomical and functional information. However, MRI scans can be expensive, time-consuming, and often require the use of anesthetics to keep animals still during the imaging process. Anesthetics are commonly administered to animals undergoing MRI scans to ensure they remain still during the imaging process. However, prolonged or repeated exposure to anesthetics can have adverse effects on animals, including physiological alterations and potential toxicity. Minimizing the duration and frequency of anesthesia is, therefore, crucial for the well-being of research animals. In recent years, various sampling trajectories have been investigated to reduce the number of MRI measurements, leading to shorter scanning times and minimizing the duration of animal exposure to the effects of anesthetics. Compressed sensing (CS) and sampling trajectories such as Cartesian, spiral, and radial have emerged as powerful tools to reduce MRI data while preserving diagnostic quality. This work aims to apply CS with Cartesian, spiral, and radial sampling trajectories for the reconstruction of MRI of the abdomen of mice sub-sampled at levels below that defined by the Nyquist theorem. The methodology of this work consists of using a fully sampled reference MRI of a female C57BL/6 model mouse acquired experimentally in a 4.7 Tesla MRI scanner for small animals using spin echo pulse sequences. The image is down-sampled by Cartesian, radial, and spiral sampling paths and then reconstructed by CS. The quality of the reconstructed images is objectively assessed by three quality assessment techniques: RMSE (root mean square error), PSNR (peak signal-to-noise ratio), and SSIM (structural similarity index measure). The utilization of optimized sampling trajectories and the CS technique has demonstrated the potential for a significant reduction, of up to 70%, in image data acquisition. This result translates into shorter scan times, minimizing the duration and frequency of anesthesia administration and reducing the potential risks associated with it.
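
The evaluation loop can be sketched as retrospective undersampling of k-space with a mask, reconstruction, and scoring against the fully sampled reference with RMSE, PSNR and SSIM; a random Cartesian mask and a zero-filled reconstruction stand in here for the actual sampling trajectories and CS solver.

```python
# Sketch of the evaluation loop: retrospectively undersample k-space, reconstruct,
# and score against the fully sampled reference with RMSE, PSNR and SSIM. A random
# Cartesian mask and a zero-filled reconstruction stand in for the CS solver.
import numpy as np
from skimage import data
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = data.camera().astype(np.float64) / 255.0          # stand-in for the mouse MRI
kspace = np.fft.fftshift(np.fft.fft2(reference))

rng = np.random.default_rng(0)
keep = 0.30                                                    # keep ~30% of phase-encode lines
mask = rng.random(kspace.shape[0]) < keep
mask[kspace.shape[0] // 2 - 16: kspace.shape[0] // 2 + 16] = True   # always keep k-space centre
undersampled = kspace * mask[:, None]

recon = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))   # zero-filled reconstruction
rmse = np.sqrt(np.mean((recon - reference) ** 2))
psnr = peak_signal_noise_ratio(reference, recon, data_range=1.0)
ssim = structural_similarity(reference, recon, data_range=1.0)
print(f"RMSE={rmse:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.3f}")
```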

Keywords: compressed sensing, magnetic resonance, sampling trajectories, small animals

Procedia PDF Downloads 50
2624 Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition

Authors: A. Shoiynbek, K. Kozhakhmet, P. Menezes, D. Kuanyshbay, D. Bayazitov

Abstract:

Speech emotion recognition has received increasing research interest in recent years. Most research work has used emotional speech collected under controlled conditions, recorded by actors imitating and artificially producing emotions in front of a microphone. There are four issues related to that approach: (1) the emotions are not natural, which means that machines are learning to recognize fake emotions; (2) the emotions are very limited in quantity and poor in their variety of speaking; (3) speech emotion recognition (SER) is language-dependent; and (4) consequently, each time researchers want to start working on SER, they need to find a good emotional database in their language. In this paper, we propose an approach to create an automatic tool for speech emotion extraction based on facial emotion recognition and describe the sequence of actions of the proposed approach. One of the first objectives of this sequence of actions is the speech detection issue. The paper gives a detailed description of the speech detection model based on a fully connected deep neural network for the Kazakh and Russian languages. Despite the high results in speech detection for Kazakh and Russian, the described process is suitable for any language. To illustrate the working capacity of the developed model, we have performed an analysis of speech detection and extraction on real tasks.
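
The speech detection front end can be sketched as MFCC features per audio frame fed to a small fully connected network with a speech/non-speech output; frame parameters, network size and the synthetic training data below are placeholders rather than the paper's configuration.

```python
# Sketch: MFCC features per audio frame fed to a small fully connected network
# with a speech / non-speech output. Frame parameters, network size and the
# synthetic labels are placeholders.
import numpy as np
import librosa
import tensorflow as tf

def mfcc_frames(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, n_frames)
    return mfcc.T                                            # one feature vector per frame

model = tf.keras.Sequential([
    tf.keras.Input(shape=(13,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),          # P(frame contains speech)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training would use frames from labelled speech and non-speech recordings, e.g.:
#   X = np.vstack([mfcc_frames(p) for p in wav_paths]); y = frame_labels
X = np.random.randn(5000, 13).astype("float32")              # synthetic stand-in
y = np.random.randint(0, 2, 5000).astype("float32")
model.fit(X, y, epochs=3, batch_size=128, verbose=0)
```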

Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset

Procedia PDF Downloads 80
2623 Face Recognition Using Body-Worn Camera: Dataset and Baseline Algorithms

Authors: Ali Almadan, Anoop Krishnan, Ajita Rattani

Abstract:

Facial recognition is a widely adopted technology in surveillance, border control, healthcare, banking services, and lately, in mobile user authentication, with Apple introducing the “Face ID” moniker with the iPhone X. A lot of research has been conducted in the area of face recognition on datasets captured by surveillance cameras, DSLRs, and mobile devices. Recently, face recognition technology has also been deployed on body-worn cameras to keep officers safe, enabling situational awareness and providing evidence for trial. However, limited academic research has been conducted on this topic so far, and no publicly available datasets with a sufficient sample size exist. This paper aims to advance research in the area of face recognition using body-worn cameras. To this aim, the contribution of this work is two-fold: (1) collection of a dataset consisting of a total of 136,939 facial images of 102 subjects captured using body-worn cameras in indoor and daylight conditions and (2) evaluation of various deep-learning architectures for face identification on the collected dataset. Experimental results suggest a maximum True Positive Rate (TPR) of 99.86% at a False Positive Rate (FPR) of 0.000, obtained by a SphereFace-based deep learning architecture in the daylight condition. The collected dataset and the baseline algorithms will promote further research and development. A downloadable link to the dataset and the algorithms is available by contacting the authors.
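
Operating points such as TPR at a fixed FPR are typically read off the ROC curve computed from genuine and impostor comparison scores; the sketch below uses synthetic score distributions as stand-ins for real verification scores.

```python
# Sketch: computing TPR at a fixed FPR operating point from verification scores.
# The score arrays are synthetic stand-ins for genuine/impostor comparisons.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 2000)       # similarity scores for same-identity pairs
impostor = rng.normal(0.3, 0.1, 20000)     # similarity scores for different-identity pairs

scores = np.concatenate([genuine, impostor])
labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])
fpr, tpr, _ = roc_curve(labels, scores)

for target in (1e-3, 1e-4, 0.0):
    idx = np.searchsorted(fpr, target, side="right") - 1   # largest index with FPR <= target
    print(f"TPR at FPR<={target}: {tpr[max(idx, 0)]:.4f}")
```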

Keywords: face recognition, body-worn cameras, deep learning, person identification

Procedia PDF Downloads 149
2622 Instance Segmentation of Wildfire Smoke Plumes using Mask-RCNN

Authors: Jamison Duckworth, Shankarachary Ragi

Abstract:

Detection and segmentation of wildfire smoke plumes from remote sensing imagery are being pursued as a solution for early fire detection and response. Smoke plume detection can be automated and made robust by the application of artificial intelligence methods. Specifically, in this study, the deep learning approach Mask Region-based Convolutional Neural Network (Mask R-CNN) is proposed to learn smoke patterns across different spectral bands. This method is proposed to separate the smoke regions from the background and return masks placed over the smoke plumes. Multispectral data were acquired from NASA's Earthdata and WorldView services and satellite imagery. Due to the use of multispectral bands along with the three visual bands, we show that Mask R-CNN can be applied to distinguish smoke plumes from clouds and other landscape features that resemble smoke.

Keywords: deep learning, mask-RCNN, smoke plumes, spectral bands

Procedia PDF Downloads 100
2621 New Ethanol Method for Soft Tissue Imaging in Micro-CT

Authors: Matej Patzelt, Jan Dudak, Frantisek Krejci, Jan Zemlicka, Vladimir Musil, Jitka Riedlova, Viktor Sykora, Jana Mrzilkova, Petr Zach

Abstract:

Introduction: Micro-CT is well used for examination of bone structures and teeth. On the other hand visualization of the soft tissues is still limited. The goal of our study was to create a new fixation method for soft tissue imaging in micro-CT. Methodology: We used organs of 18 mice - heart, lungs, kidneys, liver and brain, which we fixated in ethanol after meticulous preparation. We fixated organs in different concentrations of ethanol and for different period of time. We used three types of ethanol concentration - 97%, 50% and ascending ethanol concentration (25%, 50%, 75%, 97% each for 12 hours). Fixated organs were scanned after 72 hours, 168 hours and 336 hours period of fixation. We scanned all specimens in micro-CT MARS (Medipix All Resolution System). Results: Ethanol method provided contrast enhancement in all studied organs in all used types of fixation. Fixation in 97% ethanol provided very fast fixation and the contrast among the tissues was visible already after 72 hours of fixation. Fixation for the period of 168 and 336 hours gave better details, especially in lung tissue, where alveoli were visualized. On the other hand, this type of fixation caused organs to petrify. Fixation in 50% ethanol provided best results in 336 hours fixation, details were visualized better than in 97% ethanol and samples were not as hard as in fixation in 97% ethanol. Best results were obtained in fixation in ascending ethanol concentration. All organs were visualized in great details, best-visualized organ was heart, where trabeculae and valves were visible. In this type of fixation, organs stayed soft for whole time. Conclusion: New ethanol method is a great option for soft tissue fixation as well as the method for enhancing contrast among tissues in organs. The best results were obtained with fixation of the organs in ascending ethanol concentration, the best visualized organ was the heart.

Keywords: x-ray imaging, small animals, ethanol, ex-vivo

Procedia PDF Downloads 304
2620 Searching the Relationship among Components that Contribute to Interactive Plight and Educational Execution

Authors: Shri Krishna Mishra

Abstract:

In an educational context, technology can prompt interactive plight only when it is used in conjunction with interactive plight methods. This study, therefore, examines the relationships among components that contribute to higher levels of interactive plight and execution, such as interactive Plight methods, technology, intrinsic motivation and deep learning. 526 students participated in this study. With structural equation modelling, the authors test the conceptual model and identify satisfactory model fit. The results indicate that interactive Plight methods, technology and intrinsic motivation have significant relationship with interactive Plight; deep learning mediates the relationships of the other variables with Execution.

Keywords: searching the relationship among components, contribute to interactive plight, educational execution, intrinsic motivation

Procedia PDF Downloads 435
2619 Progress in Combining Image Captioning and Visual Question Answering Tasks

Authors: Prathiksha Kamath, Pratibha Jamkhandi, Prateek Ghanti, Priyanshu Gupta, M. Lakshmi Neelima

Abstract:

Combining Image Captioning and Visual Question Answering (VQA) tasks have emerged as a new and exciting research area. The image captioning task involves generating a textual description that summarizes the content of the image. VQA aims to answer a natural language question about the image. Both these tasks include computer vision and natural language processing (NLP) and require a deep understanding of the content of the image and semantic relationship within the image and the ability to generate a response in natural language. There has been remarkable growth in both these tasks with rapid advancement in deep learning. In this paper, we present a comprehensive review of recent progress in combining image captioning and visual question-answering (VQA) tasks. We first discuss both image captioning and VQA tasks individually and then the various ways in which both these tasks can be integrated. We also analyze the challenges associated with these tasks and ways to overcome them. We finally discuss the various datasets and evaluation metrics used in these tasks. This paper concludes with the need for generating captions based on the context and captions that are able to answer the most likely asked questions about the image so as to aid the VQA task. Overall, this review highlights the significant progress made in combining image captioning and VQA, as well as the ongoing challenges and opportunities for further research in this exciting and rapidly evolving field, which has the potential to improve the performance of real-world applications such as autonomous vehicles, robotics, and image search.

Keywords: image captioning, visual question answering, deep learning, natural language processing

Procedia PDF Downloads 56
2618 A Deep Learning Approach to Online Social Network Account Compromisation

Authors: Edward K. Boahen, Brunel E. Bouya-Moko, Changda Wang

Abstract:

The major threat to online social network (OSN) users is account compromisation. Spammers now spread malicious messages by exploiting the trust relationship established between account owners and their friends. The challenge for service providers in detecting a compromised account is validating the trusted relationship established between the account owners, their friends, and the spammers. Another challenge is the increase in human interaction required for feature selection. Available research on supervised learning (machine learning) has limitations with feature selection and with accounts that cannot be profiled, like application programming interfaces (APIs). Therefore, this paper discusses the various behaviours of OSN users and the current approaches to detecting a compromised OSN account, emphasizing their limitations and challenges. We propose a deep learning approach that addresses and resolves the constraints faced by the previous schemes. We detail our proposed optimized nonsymmetric deep auto-encoder (OPT_NDAE) for unsupervised feature learning, which reduces the level of human interaction required in the selection and extraction of features. We evaluated our proposed classifier using the NSL-KDD and KDDCUP'99 datasets in a graphical-user-interface-enabled Weka application. The results obtained indicate that our proposed approach outperformed most of the traditional schemes in OSN compromised account detection, with an accuracy rate of 99.86%.
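
The two-stage idea can be sketched as an autoencoder with a deeper (nonsymmetric) encoder that learns features without labels, followed by a conventional classifier trained on the encoded representation; the layer sizes and data are placeholders, and this is not the authors' exact OPT_NDAE.

```python
# Sketch of the two-stage idea: an autoencoder with a deeper (nonsymmetric)
# encoder learns features without labels, then a conventional classifier is
# trained on the encoded representation. Sizes and data are placeholders.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

n_features = 41                                    # e.g. an NSL-KDD style feature count
X = np.random.rand(5000, n_features).astype("float32")
y = np.random.randint(0, 2, 5000)                  # 1 = compromised / attack traffic

inputs = tf.keras.Input(shape=(n_features,))
h = tf.keras.layers.Dense(32, activation="relu")(inputs)                 # encoder: 3 layers
h = tf.keras.layers.Dense(16, activation="relu")(h)
code = tf.keras.layers.Dense(8, activation="relu")(h)
decoded = tf.keras.layers.Dense(n_features, activation="sigmoid")(code)  # decoder: 1 layer

autoencoder = tf.keras.Model(inputs, decoded)
encoder = tf.keras.Model(inputs, code)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=128, verbose=0)    # unsupervised feature learning

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(encoder.predict(X, verbose=0), y)                     # supervised stage on learned codes
print("train accuracy:", accuracy_score(y, clf.predict(encoder.predict(X, verbose=0))))
```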

Keywords: computer security, network security, online social network, account compromisation

Procedia PDF Downloads 99
2617 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

One of the most important challenges in the field of software code auditing is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. C and C++ open-source code is now available in order to create a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure the performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as features, but require higher execution time, as the word embedding algorithm adds a bit of complexity to the overall system.
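
One of the listed pipelines can be sketched as tokenised source functions mapped to integer sequences, an embedding layer standing in for pre-trained GloVe/fastText vectors, and an LSTM with a sigmoid output for vulnerable versus benign functions; the vocabulary size, sequence length and toy data below are placeholders.

```python
# Sketch of one listed pipeline: tokenised source functions as integer sequences,
# an embedding layer (standing in for pre-trained GloVe/fastText vectors) and an
# LSTM with a sigmoid output for vulnerable vs. benign. Vocabulary size, sequence
# length and the toy data are placeholders.
import numpy as np
import tensorflow as tf

VOCAB_SIZE, MAX_LEN = 10_000, 300                   # token ids per function, padded/truncated

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,), dtype="int32"),
    tf.keras.layers.Embedding(VOCAB_SIZE, 100),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(function contains a buffer overflow)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])

# Toy stand-in for tokenised functions; a real run would pad lexer output instead.
X = np.random.randint(1, VOCAB_SIZE, size=(2000, MAX_LEN))
y = np.random.randint(0, 2, size=(2000,))
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```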

Keywords: cyber security, vulnerability detection, neural networks, feature extraction

Procedia PDF Downloads 68