Search results for: deep excavations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2141

1991 DNpro: A Deep Learning Network Approach to Predicting Protein Stability Changes Induced by Single-Site Mutations

Authors: Xiao Zhou, Jianlin Cheng

Abstract:

A single amino acid mutation can have a significant impact on the stability of a protein structure. Thus, predicting the protein stability changes induced by single-site mutations is critical and useful for studying protein function and structure. Here, we present a deep learning network with the dropout technique for predicting protein stability changes upon single amino acid substitution. While using only the protein sequence as input, the overall prediction accuracy of the method on a standard benchmark is >85%, which is higher than that of existing sequence-based methods and comparable to methods that use not only the protein sequence but also the tertiary structure, pH value, and temperature. The results demonstrate that deep learning is a promising technique for protein stability prediction. The good performance of this sequence-based method makes it a valuable tool for predicting the impact of mutations on most proteins whose experimental structures are not available. Both the downloadable software package and the user-friendly web server (DNpro) that implement the method for predicting protein stability changes induced by amino acid mutations are freely available for the community to use.
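The abstract does not specify the network layout, so the following is only a minimal sketch of a dropout-regularized feed-forward network of the kind described, written in PyTorch; the feature encoding of the mutation site and all layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

class StabilityNet(nn.Module):
    # Predicts a stabilizing/destabilizing label from sequence-derived features.
    def __init__(self, n_features=400, hidden=128, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 2),
        )

    def forward(self, x):
        return self.net(x)

model = StabilityNet()
features = torch.randn(8, 400)   # a batch of encoded single-site mutations
logits = model(features)         # training would minimize cross-entropy on these logits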

Keywords: bioinformatics, deep learning, protein stability prediction, biological data mining

Procedia PDF Downloads 463
1990 Enhancing Single Channel Minimum Quantity Lubrication through Bypass Controlled Design for Deep Hole Drilling with Small Diameter Tool

Authors: Yongrong Li, Ralf Domroes

Abstract:

Due to significant energy savings, enablement of higher machining speeds, and environmentally friendly features, Minimum Quantity Lubrication (MQL) has been used efficiently for many machining processes. However, deep hole drilling with a small tool diameter (D < 5 mm) and a long tool (length L > 25xD) has always been a bottleneck for a single channel MQL system. A single channel MQL system based on the Venturi principle cannot supply enough oil because the pressure difference drops during the deep hole drilling process. In this paper, a system concept based on a bypass design is explored for its ability to dynamically reach the required pressure difference between the air inlet and the inside of the aerosol generator, so that the oil volume demanded by deep hole drilling can be generated and delivered to the tool tips. The system concept has been investigated in static and dynamic laboratory testing. In the static test, the oil volume with and without bypass control was measured, showing a potential oil quantity increase of up to 1000%. A spray pattern test demonstrated the differences in aerosol particle size, aerosol distribution, and reaction time between the single channel and bypass-controlled single channel MQL systems. A dynamic trial machining test of deep hole drilling (drill tool D = 4.5 mm, L = 40xD) was carried out with the proposed system on AlSi7Mg, a difficult-to-machine material. The tool wear along 100 meters of drilling was tracked and analyzed. The results show that a single channel MQL system with bypass control can overcome the limitation and enhance deep hole drilling with a small tool. The optimized combination of inlet air pressure and bypass control results in high-quality oil delivery to the tool tips with a uniform and continuous aerosol flow.

Keywords: deep hole drilling, green production, Minimum Quantity Lubrication (MQL), near dry machining

Procedia PDF Downloads 203
1989 An Accurate Brain Tumor Segmentation for High Graded Glioma Using Deep Learning

Authors: Sajeeha Ansar, Asad Ali Safi, Sheikh Ziauddin, Ahmad R. Shahid, Faraz Ahsan

Abstract:

Gliomas are the most challenging and aggressive type of tumors; they appear in different sizes and locations and have scattered boundaries. The CNN is an efficient deep learning approach with an outstanding capability for solving image analysis problems. A fully automatic deep learning based 2D-CNN model for brain tumor segmentation is presented in this paper. We used small convolution filters (3 x 3) to make the architecture deeper and increased the number of convolutional layers for efficient learning of complex features from a large dataset. We achieved better results by pushing the convolutional layers up to 16 layers for the HGG model, and we obtained reliable and accurate results through fine-tuning of the dataset and hyper-parameters. Pre-processing for this model includes generation of the brain pipeline, intensity normalization, bias correction, and data augmentation. We used the BRATS-2015 dataset, and the Dice Similarity Coefficient (DSC) is used as the performance measure for the evaluation of the proposed method. Our method achieved DSC scores of 0.81 for the complete, 0.79 for the core, and 0.80 for the enhanced tumor regions. These results are comparable with methods that have already implemented 2D CNN architectures.
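For reference, the Dice Similarity Coefficient reported above can be computed from binary masks as in this short NumPy sketch (predicted vs. ground-truth tumor region); the toy masks are made-up values.

import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # 2 * |A intersect B| / (|A| + |B|) over boolean masks
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, target))   # about 0.67 for this toy example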

Keywords: brain tumor segmentation, convolutional neural networks, deep learning, HGG

Procedia PDF Downloads 254
1988 Automated Feature Extraction and Object-Based Detection from High-Resolution Aerial Photos Based on Machine Learning and Artificial Intelligence

Authors: Mohammed Al Sulaimani, Hamad Al Manhi

Abstract:

With the development of Remote Sensing technology, the resolution of optical Remote Sensing images has greatly improved, and images have become widely available. Numerous detectors have been developed for detecting different types of objects. In the past few years, Remote Sensing has benefited greatly from deep learning, particularly Deep Convolutional Neural Networks (CNNs). Deep learning holds great promise for fulfilling the challenging needs of Remote Sensing and solving various problems within different fields and applications. Unmanned Aerial Systems have become the preferred way for most organizations to acquire aerial photos in support of their activities because of the high resolution and accuracy of these photos, which make the identification and detection of very small features much easier than with satellite images. This has opened a new era for deep learning in different applications, not only in feature extraction and prediction but also in analysis. This work addresses the capacity of Machine Learning and Deep Learning to detect and extract oil leaks from onshore flowlines using high-resolution aerial photos acquired by a UAS fitted with an RGB sensor, to support early detection of these leaks and protect the company from the losses they cause and, most importantly, from environmental damage. Two different approaches using different deep learning methods are demonstrated. The first approach detects the oil leaks from the raw (unprocessed) aerial photos using a deep learning model called the Single Shot Detector (SSD). The model draws bounding boxes around the leaks, and the results were extremely good. The second approach detects the oil leaks from the ortho-mosaiced (georeferenced) images by developing three deep learning models (MaskRCNN, U-Net, and a PSP-Net classifier). Post-processing is then performed to combine the results of these three models to achieve better detection and improved accuracy. Although a relatively small amount of data was available for training, the trained models have shown good results in extracting the extent of the oil leaks and producing accurate detections.
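The abstract does not describe how the three model outputs are combined in post-processing; a simple pixel-wise majority vote over the three binary leak masks is one plausible fusion rule, sketched here as an assumption rather than the authors' actual procedure.

import numpy as np

def fuse_masks(mask_a, mask_b, mask_c):
    # A pixel is flagged as a leak if at least two of the three models agree.
    votes = mask_a.astype(int) + mask_b.astype(int) + mask_c.astype(int)
    return votes >= 2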

Keywords: GIS, remote sensing, oil leak detection, machine learning, aerial photos, unmanned aerial systems

Procedia PDF Downloads 31
1987 Analysis of Public Space Usage Characteristics Based on Computer Vision Technology - Taking Shaping Park as an Example

Authors: Guantao Bai

Abstract:

Public space is an indispensable and important component of the urban built environment. How to more accurately evaluate the usage characteristics of public space can help improve its spatial quality. Compared to traditional survey methods, computer vision technology based on deep learning has advantages such as dynamic observation and low cost. This study takes the public space of Shaping Park as an example and, based on deep learning computer vision technology, processes and analyzes the image data of the public space to obtain the spatial usage characteristics and spatiotemporal characteristics of the public space. Research has found that the spontaneous activity time in public spaces is relatively random with a relatively short average activity time, while social activities have a relatively stable activity time with a longer average activity time. Computer vision technology based on deep learning can effectively describe the spatial usage characteristics of the research area, making up for the shortcomings of traditional research methods and providing relevant support for creating a good public space.

Keywords: computer vision, deep learning, public spaces, using features

Procedia PDF Downloads 69
1986 A Different Approach to Smart Phone-Based Wheat Disease Detection System Using Deep Learning for Ethiopia

Authors: Nathenal Thomas Lambamo

Abstract:

More than 85% of the labor force and 90% of export earnings in Ethiopia come from agriculture, which can be said to be the backbone of the country's overall socio-economic activities. Among the cereal crops that the agricultural sector provides, wheat ranks third after teff and maize. Today, wheat is in higher demand because of the expansion of industries that use it as the main ingredient for their products. The local supply covers only 35 to 40% of these companies' demand, and the remaining 60 to 65% is imported, exhausting the country's foreign currency reserves. These facts show that the need for this crop is very high while its productivity remains low. Wheat disease is the most devastating factor contributing to this imbalance between demand and supply; it reduces both the yield and quality of the crop by 27% on average and by up to 37% in severe cases. This study aims to detect the most frequent and damaging wheat diseases, Septoria and leaf rust, using deep learning, the most widely used subset of machine learning. As a state-of-the-art approach, a Convolutional Neural Network (CNN) classification technique has been used to detect the diseases, and an accuracy of 99.01% is achieved.

Keywords: septoria, leaf rust, deep learning, CNN

Procedia PDF Downloads 74
1985 Automatic Measurement of Garment Sizes Using Deep Learning

Authors: Maulik Parmar, Sumeet Sandhu

Abstract:

The online fashion industry experiences high product return rates. Many returns are because of size/fit mismatches: the size scale on labels can vary across brands, the size parameters may not capture all fit measurements, or the product may have manufacturing defects. Warehouse quality checks of garment sizes can be semi-automated to improve speed and accuracy. This paper presents an approach for automatically measuring garment sizes from a single image of the garment, using Deep Learning to learn garment keypoints. The paper focuses on the waist size measurement of jeans and can be easily extended to other garment types and measurements. Experimental results show that this approach can greatly improve the speed and accuracy of today's manual measurement process.
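As an illustration of the keypoint-based measurement idea, the sketch below converts the pixel distance between two predicted waist keypoints into centimetres using a known image scale; the keypoint coordinates and the scale factor are made-up values, and the keypoint model itself is omitted.

import math

def waist_width_cm(left_kp, right_kp, cm_per_pixel):
    # Euclidean pixel distance between the two waist keypoints, scaled to cm.
    dx = right_kp[0] - left_kp[0]
    dy = right_kp[1] - left_kp[1]
    return math.hypot(dx, dy) * cm_per_pixel

print(waist_width_cm((412, 630), (958, 644), cm_per_pixel=0.071))   # about 38.8 cm here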

Keywords: convolutional neural networks, deep learning, distortion, garment measurements, image warping, keypoints

Procedia PDF Downloads 307
1984 Electroencephalogram Based Alzheimer Disease Classification using Machine and Deep Learning Methods

Authors: Carlos Roncero-Parra, Alfonso Parreño-Torres, Jorge Mateo Sotos, Alejandro L. Borja

Abstract:

In this research, different methods based on machine/deep learning algorithms are presented for the classification and diagnosis of patients with mental disorders such as Alzheimer's disease. For this purpose, the signals obtained from 32 unipolar electrodes by non-invasive EEG were examined, and their basic properties were obtained. More specifically, several well-known machine learning based classifiers have been used, i.e., support vector machine (SVM), Bayesian linear discriminant analysis (BLDA), decision tree (DT), Gaussian Naïve Bayes (GNB), K-nearest neighbor (KNN), and Convolutional Neural Network (CNN). A total of 668 patients from five different hospitals were studied in the period from 2011 to 2021. The best accuracy obtained was around 93% in both ADM and ADA classifications. It can be concluded that such a classification will enable the training of algorithms that can be used to identify and classify different mental disorders with high accuracy.
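A condensed sketch of how such a classifier comparison can be run with scikit-learn is given below; the EEG feature matrix and labels are random placeholders, and only a subset of the classifiers named above is shown.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X = np.random.randn(200, 32 * 4)        # placeholder features from 32 electrodes
y = np.random.randint(0, 2, size=200)   # placeholder diagnosis labels

classifiers = {
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(),
    "GNB": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(name, round(scores.mean(), 3))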

Keywords: alzheimer, machine learning, deep learning, EEG

Procedia PDF Downloads 125
1983 Obsessive-Compulsive Disorder: Development of Demand-Controlled Deep Brain Stimulation with Methods from Stochastic Phase Resetting

Authors: Mahdi Akhbardeh

Abstract:

Synchronization of neuronal firing is a hallmark of several neurological diseases. Recently, stimulation techniques have been developed which make it possible to desynchronize oscillatory neuronal activity in a mild and effective way, without suppressing the neurons' firing. As yet, these techniques are being used to establish demand-controlled deep brain stimulation (DBS) techniques for the therapy of movement disorders like severe Parkinson's disease or essential tremor. We here present a first conceptualization suggesting that the nucleus accumbens is a promising target for the standard, that is, permanent high-frequency, DBS in patients with severe and chronic obsessive-compulsive disorder (OCD). In addition, we explain how demand-controlled DBS techniques may be applied to the therapy of OCD in those cases that are refractory to behavioral therapies and pharmacological treatment.

Keywords: stereotactic neurosurgery, deep brain stimulation, obsessive-compulsive disorder, phase resetting

Procedia PDF Downloads 511
1982 Enabling Non-invasive Diagnosis of Thyroid Nodules with High Specificity and Sensitivity

Authors: Sai Maniveer Adapa, Sai Guptha Perla, Adithya Reddy P.

Abstract:

Thyroid nodules can often be diagnosed with ultrasound imaging, although differentiating between benign and malignant nodules can be challenging for medical professionals. This work suggests a novel approach to increase the precision of thyroid nodule identification by combining machine learning and deep learning. The new approach first extracts information from the ultrasound pictures using a deep learning method known as a convolutional autoencoder. A support vector machine, a type of machine learning model, is then trained using these features. With an accuracy of 92.52%, the support vector machine can differentiate between benign and malignant nodules. This innovative technique may decrease the need for pointless biopsies and increase the accuracy of thyroid nodule detection.
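A minimal sketch of the two-stage pipeline (convolutional encoder features followed by an SVM) is shown below in PyTorch and scikit-learn; the architecture sizes are illustrative assumptions, and the autoencoder's reconstruction training is omitted.

import torch
import torch.nn as nn
from sklearn.svm import SVC

class Encoder(nn.Module):
    # The encoder half of a convolutional autoencoder; it compresses an image into a vector.
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):                  # x: (N, 1, H, W) ultrasound images
        return self.conv(x).flatten(1)     # (N, 32) feature vectors

encoder = Encoder()                        # in practice trained to reconstruct its input
with torch.no_grad():
    feats = encoder(torch.randn(10, 1, 128, 128)).numpy()
labels = [0, 1] * 5                        # placeholder benign/malignant labels
svm = SVC().fit(feats, labels)             # the SVM performs the final classification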

Keywords: thyroid tumor diagnosis, ultrasound images, deep learning, machine learning, convolutional auto-encoder, support vector machine

Procedia PDF Downloads 56
1981 A Deep-Learning Based Prediction of Pancreatic Adenocarcinoma with Electronic Health Records from the State of Maine

Authors: Xiaodong Li, Peng Gao, Chao-Jung Huang, Shiying Hao, Xuefeng B. Ling, Yongxia Han, Yaqi Zhang, Le Zheng, Chengyin Ye, Modi Liu, Minjie Xia, Changlin Fu, Bo Jin, Karl G. Sylvester, Eric Widen

Abstract:

Predicting the risk of Pancreatic Adenocarcinoma (PA) in advance can benefit the quality of care and potentially reduce population mortality and morbidity. The aim of this study was to develop and prospectively validate a risk prediction model to identify patients at risk of new incident PA as early as 3 months before the onset of PA in a statewide, general population in Maine. The PA prediction model was developed using Deep Neural Networks, a deep learning algorithm, with a 2-year electronic-health-record (EHR) cohort. Prospective results showed that our model identified 54.35% of all inpatient episodes of PA, and 91.20% of all PA that required subsequent chemoradiotherapy, with a lead time of up to 3 months and a true alert rate of 67.62%. The risk assessment tool has attained an improved discriminative ability. It can be immediately deployed in the health system to provide automatic early warnings to adults at risk of PA. It also has the potential to identify personalized risk factors to facilitate customized PA interventions.

Keywords: cancer prediction, deep learning, electronic health records, pancreatic adenocarcinoma

Procedia PDF Downloads 155
1980 Investigating the Factors Affecting Generalization of Deep Learning Models for Plant Disease Detection

Authors: Praveen S. Muthukumarana, Achala C. Aponso

Abstract:

A large percentage of the global crop harvest is lost to crop diseases. Timely identification and treatment of crop diseases is difficult in many developing nations due to a shortage of trained professionals in the field of agriculture. Many crop diseases can be accurately diagnosed from visual symptoms. In the past decade, deep learning has been successfully utilized in domains such as healthcare, but its adoption in agriculture for plant disease detection remains rare. The literature shows that models trained with popular datasets such as PlantVillage do not generalize well to real-world images. This paper attempts to find out how to build plant disease identification models that generalize well to real-world images.

Keywords: agriculture, convolutional neural network, deep learning, plant disease classification, plant disease detection, plant disease diagnosis

Procedia PDF Downloads 142
1979 Research on Key Technologies on Initial Installation of Ultra-Deep-Water Dynamic Umbilical

Authors: Weiwei Xie, Yichao Li

Abstract:

The initial installation of the umbilical can affect the subsequent installation process and the final installation. Meanwhile, the design of both ends of an ultra-deep-water dynamic umbilical (UDWDU), as well as the design of the surface unit and the subsea production system it connects, varies between oil and gas fields. To optimize the installation process of the UDWDU, based on a summary and analysis of the surface-end and subsea-end designs of the UDWDU and the mainstream construction resources, the methods of initial installation from either the surface unit side or the subsea production system side are studied, and the difficulties that may be encountered with each initial installation method are pointed out.

Keywords: dynamic umbilical, ultra-deep-water, initial installation, installation process

Procedia PDF Downloads 152
1978 Identification of Breast Anomalies Based on Deep Convolutional Neural Networks and K-Nearest Neighbors

Authors: Ayyaz Hussain, Tariq Sadad

Abstract:

Breast cancer (BC) is one of the most widespread ailments among females globally. Early prognosis of BC can decrease the mortality rate, and accurate identification of benign tumors can avoid unnecessary biopsies and further treatment of patients under investigation. However, due to variations in the images, it is a tough job to isolate cancerous cases from normal and benign ones. Machine learning techniques are widely employed in BC pattern classification and prognosis. In this research, a deep convolutional neural network (DCNN), the AlexNet architecture, is employed to obtain more discriminative features from breast tissues. To achieve higher accuracy, K-nearest neighbor (KNN) classifiers are employed as a substitute for the softmax layer in deep learning. The proposed model is tested on a widely used breast image database, the MIAS dataset, and achieved 99% accuracy.
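The sketch below illustrates the idea of swapping the softmax layer for a KNN classifier: AlexNet's convolutional stack serves as a fixed feature extractor and a KNN is fitted on the pooled features. Loading and training on the MIAS mammograms are omitted, and the batch shown is a random placeholder.

import torch
from torchvision import models
from sklearn.neighbors import KNeighborsClassifier

alexnet = models.alexnet()                 # weights would normally be trained or pre-trained
alexnet.eval()

def extract(images):                       # images: (N, 3, 224, 224) tensor
    with torch.no_grad():
        x = alexnet.avgpool(alexnet.features(images))
        return torch.flatten(x, 1).numpy()

feats = extract(torch.randn(6, 3, 224, 224))
knn = KNeighborsClassifier(n_neighbors=3).fit(feats, [0, 1, 0, 1, 0, 1])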

Keywords: breast cancer, DCNN, KNN, mammography

Procedia PDF Downloads 135
1977 A Conv-Long Short-term Memory Deep Learning Model for Traffic Flow Prediction

Authors: Ali Reza Sattarzadeh, Ronny J. Kutadinata, Pubudu N. Pathirana, Van Thanh Huynh

Abstract:

Traffic congestion has become a severe worldwide problem, affecting everyday life, fuel consumption, time, and air pollution. The primary causes of these issues are inadequate transportation infrastructure, poor traffic signal management, and a rising population. Traffic flow forecasting is one of the essential and effective methods in urban congestion and traffic management, and it has attracted the attention of researchers. With the development of technology, undeniable progress has been achieved by existing methods. However, there is still room for improvement in the extraction of the temporal and spatial features that determine the importance of traffic flow sequences. In the proposed model, we implement convolutional neural network (CNN) and long short-term memory (LSTM) deep learning models for mining nonlinear correlations and evaluate their effectiveness in increasing the accuracy of traffic flow prediction on a real dataset. According to the experiments, the results indicate that implementing Conv-LSTM networks increases the productivity and accuracy of deep learning models for traffic flow prediction.
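A minimal PyTorch sketch of a Conv-LSTM style predictor of this kind is shown below: a 1-D convolution extracts local patterns from the recent flow sequence and an LSTM models the temporal dependence. The layer sizes and input layout are illustrative assumptions rather than the paper's configuration.

import torch
import torch.nn as nn

class ConvLSTMPredictor(nn.Module):
    def __init__(self, n_channels=1, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, 16, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)    # next-interval traffic flow

    def forward(self, x):                   # x: (batch, n_channels, time_steps)
        h = torch.relu(self.conv(x))        # local (spatial) patterns
        out, _ = self.lstm(h.transpose(1, 2))
        return self.head(out[:, -1])        # prediction from the last time step

model = ConvLSTMPredictor()
pred = model(torch.randn(4, 1, 12))         # 4 samples, 12 past flow readings each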

Keywords: deep learning algorithms, intelligent transportation systems, spatiotemporal features, traffic flow prediction

Procedia PDF Downloads 171
1976 Radar Fault Diagnosis Strategy Based on Deep Learning

Authors: Bin Feng, Zhulin Zong

Abstract:

Radar systems are critical in modern military, aviation, and maritime operations, and their proper functioning is essential for the success of these operations. However, due to the complexity and sensitivity of radar systems, they are susceptible to various faults that can significantly affect their performance. Traditional radar fault diagnosis strategies rely on expert knowledge and rule-based approaches, which are often limited in effectiveness and require a lot of time and resources. Deep learning has recently emerged as a promising approach for fault diagnosis due to its ability to learn features and patterns from large amounts of data automatically. In this paper, we propose a radar fault diagnosis strategy based on deep learning that can accurately identify and classify faults in radar systems. Our approach uses convolutional neural networks (CNNs) to extract features from radar signals and to classify faults from those features. The proposed strategy is trained and validated on a dataset of measured radar signals with various types of faults, and the results show that it achieves high accuracy in fault diagnosis. To further evaluate the effectiveness of the proposed strategy, we compare it with traditional rule-based approaches and other machine learning-based methods, including decision trees, support vector machines (SVMs), and random forests. The results demonstrate that our deep learning-based approach outperforms the traditional approaches in terms of accuracy and efficiency. Finally, we discuss the potential applications and limitations of the proposed strategy, as well as future research directions. Our study highlights the importance and potential of deep learning for radar fault diagnosis and suggests that it can be a valuable tool for improving the performance and reliability of radar systems. In summary, this paper presents a radar fault diagnosis strategy based on deep learning that achieves high accuracy and efficiency in identifying and classifying faults in radar systems. The proposed strategy has significant potential for practical applications and can pave the way for further research.

Keywords: radar system, fault diagnosis, deep learning, radar fault

Procedia PDF Downloads 90
1975 Novel Synthesis of Metal Oxide Nanoparticles from Type IV Deep Eutectic Solvents

Authors: Lorenzo Gontrani, Marilena Carbone, Domenica Tommasa Donia, Elvira Maria Bauer, Pietro Tagliatesta

Abstract:

One of the fields where DESs show remarkable added value is the synthesis of inorganic materials, in particular nanoparticles. In this field, the inherent and highly tunable nano-homogeneities of the DES structure give rise to a marked templating effect, a precious role that has led to the recent bloom of a vast number of studies exploiting these new synthesis media to prepare nanomaterials and composite structures of various kinds. In this contribution, the most recent developments in the field will be reviewed, and some exciting examples of novel metal oxide nanoparticle syntheses using non-toxic type-IV Deep Eutectic Solvents will be described. The prepared materials possess nanometric dimensions and show flower-like shapes. The use of the prepared nanoparticles as fluorescent materials for the detection of various contaminants is under development.

Keywords: metal deep eutectic solvents, nanoparticles, inorganic synthesis, type IV DES, lamellar

Procedia PDF Downloads 133
1974 Facial Emotion Recognition with Convolutional Neural Network Based Architecture

Authors: Koray U. Erbas

Abstract:

Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increases, it is possible to represent more complex relationships with automatically extracted features. Nowadays, Deep Neural Networks (DNNs) are widely used in Computer Vision problems such as classification, object detection, segmentation, and image editing. In this work, the Facial Emotion Recognition task is performed with a proposed Convolutional Neural Network (CNN)-based DNN architecture using the FER2013 dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size, and network size) are investigated, and ablation study results for the Pooling Layer, Dropout, and Batch Normalization are presented.

Keywords: convolutional neural network, deep learning, deep learning based FER, facial emotion recognition

Procedia PDF Downloads 272
1973 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning

Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar

Abstract:

As the quantity and complexity of computing in large-scale software systems increase, distributed computing becomes increasingly important. A distributed system achieves high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, however, distributed computing can cause resource waste and high costs. Resource scheduling is usually an NP-hard problem, so no general solution exists, although optimization algorithms such as genetic algorithms and ant colony optimization can be applied. The large scale of distributed systems makes these traditional optimization algorithms difficult to apply, so heuristic and machine learning algorithms are usually used to ease the computing load. We therefore review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that combines the perceptual ability of neural networks with the decision-making ability of reinforcement learning. Using this machine learning method, we try to find the important factors that influence the performance of distributed computing and help the distributed system schedule its computing resources efficiently. This paper surveys the application of deep reinforcement learning to distributed computing resource scheduling, proposes a deep reinforcement learning method that uses a recurrent neural network to optimize the resource scheduling, and discusses the challenges and improvement directions for DRL-based resource scheduling algorithms.
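As a very rough sketch of the direction described (a recurrent policy trained with reinforcement learning to assign tasks to machines), the snippet below uses a GRU policy and a REINFORCE-style update; the state encoding, reward, and sizes are all illustrative assumptions and not the surveyed method.

import torch
import torch.nn as nn

class SchedulerPolicy(nn.Module):
    def __init__(self, task_dim=4, hidden=32, n_machines=3):
        super().__init__()
        self.rnn = nn.GRU(task_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_machines)

    def forward(self, tasks):               # tasks: (1, n_tasks, task_dim)
        h, _ = self.rnn(tasks)
        return torch.distributions.Categorical(logits=self.head(h))

policy = SchedulerPolicy()
tasks = torch.randn(1, 10, 4)               # ten pending tasks, four features each
dist = policy(tasks)
assignment = dist.sample()                  # machine index chosen for each task
reward = -1.0                               # e.g. negative makespan from a simulator
loss = -reward * dist.log_prob(assignment).sum()   # REINFORCE gradient estimator
loss.backward()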

Keywords: resource scheduling, deep reinforcement learning, distributed system, artificial intelligence

Procedia PDF Downloads 110
1972 Effect of Depth on the Distribution of Zooplankton in Wushishi Lake Minna, Niger State, Nigeria

Authors: Adamu Zubairu Mohammed, Fransis Oforum Arimoro, Salihu Maikudi Ibrahim, Y. I. Auta, T. I. Arowosegbe, Y. Abdullahi

Abstract:

The present study was conducted to evaluate the effect of depth on the distribution of zooplankton and some physicochemical parameters in Tungan Kawo Lake (Wushishi dam). Water and zooplankton samples were collected from the surface, from 3.0 meters deep, and from 6.0 meters deep over a period of 24 hours for six months. Standard procedures were adopted for the determination of the physicochemical parameters. The results show significant differences in pH, DO, BOD, hardness, Na, and Mg. A total of 1764 zooplankton were recorded, comprising 35 species: 18 species of cladocera (58%), 14 species of copepoda (41%), and 3 species of diptera (1.0%). More zooplankton were recorded at the 3.0-meter depth than at the two other depths, and a significant difference was observed in the distribution of Ceriodaphnia dubia, Daphnia laevis, and Leptodiaptomus coloradensis. Although the most abundant zooplankton were recorded at 3.0 meters, Leptodiaptomus coloradensis was the most numerous species observed at 6.0 meters, followed by Daphnia laevis. Canonical correspondence analysis between the physicochemical parameters and the zooplankton indicated good relationships in the lake: Ceriodaphnia dubia was found to be strongly associated with oxygen, sodium, and potassium, while Daphnia laevis and Leptodiaptomus coloradensis were associated with magnesium and phosphorus. Overall, depth was observed to have little influence on the distribution of zooplankton in Wushishi Lake.

Keywords: zooplankton, standard procedures, canonical correspondence analysis, Wushishi, canonical, physicochemical parameter

Procedia PDF Downloads 88
1971 Development of Web-Based Iceberg Detection Using Deep Learning

Authors: A. Kavya Sri, K. Sai Vineela, R. Vanitha, S. Rohith

Abstract:

Large pieces of ice that break off from glaciers are known as icebergs. The threat that icebergs pose to navigation, offshore oil and gas production, and underwater pipelines makes their detection crucial. In this project, an automated iceberg tracking method using deep learning techniques and satellite images of icebergs is developed. With a temporal resolution of 12 days and a spatial resolution of 20 m, Sentinel-1 (SAR) images can be used to track iceberg drift over the Southern Ocean. In contrast to multispectral images, SAR images can be analyzed regardless of meteorological conditions. This project develops a web-based graphical user interface to detect and track icebergs using Sentinel-1 images. The movement of the icebergs is tracked across temporal images based on their latitude and longitude values and by comparing the center and area of all detected icebergs. Accuracy is assessed with precision and recall measures.
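The tracking rule described above (comparing the center and area of detected icebergs across dates) can be sketched as a simple greedy matching; the distance and area thresholds below are assumptions for illustration only.

import math

def match_icebergs(prev, curr, max_km=50.0, max_area_ratio=0.3):
    # prev, curr: lists of dicts like {"id": ..., "lat": ..., "lon": ..., "area_km2": ...}
    matches = []
    for p in prev:
        best, best_d = None, max_km
        for c in curr:
            d = math.hypot((c["lat"] - p["lat"]) * 111.0,
                           (c["lon"] - p["lon"]) * 111.0 * math.cos(math.radians(p["lat"])))
            area_ok = abs(c["area_km2"] - p["area_km2"]) / p["area_km2"] <= max_area_ratio
            if d < best_d and area_ok:
                best, best_d = c, d
        if best is not None:
            matches.append((p["id"], best["id"]))
    return matches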

Keywords: synthetic aperture radar (SAR), icebergs, deep learning, spatial resolution, temporal resolution

Procedia PDF Downloads 90
1970 Deep Supervision Based-Unet to Detect Buildings Changes from VHR Aerial Imagery

Authors: Shimaa Holail, Tamer Saleh, Xiongwu Xiao

Abstract:

Building change detection (BCD) from satellite imagery is an essential topic in urbanization monitoring, agricultural land management, and the updating of geospatial databases. Recently, methods for detecting changes based on deep learning have made significant progress and produced impressive results. However, these methods can be insensitive to changes in buildings with complex spectral differences, and the features they extract are not discriminative enough, resulting in incomplete buildings and irregular boundaries. To overcome these problems, we propose a dual Siamese network based on the Unet model with the addition of a deep supervision (DS) strategy. This network consists of a backbone (encoder) based on ImageNet pre-training, a fusion block, and feature pyramid networks (FPN) to enhance the step-by-step information of the changing regions and obtain a more accurate BCD map. To train the proposed method, we created a new dataset (EGY-BCD) of high-resolution and multi-temporal aerial images captured over New Cairo in Egypt to detect building changes. The experimental results showed that the proposed method is effective and performs well on the EGY-BCD dataset in terms of overall accuracy, F1-score, and mIoU, which were 91.6%, 80.1%, and 73.5%, respectively.
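For reference, the F1-score and intersection-over-union reported above can be computed for a binary change mask as in this short NumPy sketch:

import numpy as np

def f1_and_iou(pred, target):
    # Pixel-wise true positives, false positives and false negatives over boolean masks.
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    return f1, iou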

Keywords: building change detection, deep supervision, semantic segmentation, EGY-BCD dataset

Procedia PDF Downloads 118
1969 Computer Aided Analysis of Breast Based Diagnostic Problems from Mammograms Using Image Processing and Deep Learning Methods

Authors: Ali Berkan Ural

Abstract:

This paper presents the analysis, evaluation, and pre-diagnosis of early-stage breast diagnostic problems (breast cancer, nodules or lumps) by a Computer Aided Diagnosis (CAD) system using mammogram radiological images. According to the statistics, the time factor is crucial for discovering the disease in the patient (especially in women) as early and as quickly as possible. In this study, a new algorithm is developed using advanced image processing and deep learning methods to detect and classify the problem at an early stage with more accuracy. The system first applies image processing methods (image acquisition, noise removal, region growing segmentation, morphological operations, breast border extraction, advanced segmentation, obtaining regions of interest (ROIs), etc.) to segment the area of interest of the breast, and then analyzes these partly obtained areas for cancer detection/lumps in order to diagnose the disease. After segmentation, using the spectrogram images, five different deep learning based methods (the Convolutional Neural Network (CNN) based AlexNet, ResNet50, VGG16, DenseNet, and Xception) are applied to classify the breast problems.

Keywords: computer aided diagnosis, breast cancer, region growing, segmentation, deep learning

Procedia PDF Downloads 93
1968 Nonparametric Sieve Estimation with Dependent Data: Application to Deep Neural Networks

Authors: Chad Brown

Abstract:

This paper establishes general conditions for the convergence rates of nonparametric sieve estimators with dependent data. We present two key results: one for nonstationary data and another for stationary mixing data. Previous theoretical results often lack practical applicability to deep neural networks (DNNs). Using these conditions, we derive convergence rates for DNN sieve estimators in nonparametric regression settings with both nonstationary and stationary mixing data. The DNN architectures considered adhere to current industry standards, featuring fully connected feedforward networks with rectified linear unit activation functions, unbounded weights, and a width and depth that grow with the sample size.

Keywords: sieve extremum estimates, nonparametric estimation, deep learning, neural networks, rectified linear unit, nonstationary processes

Procedia PDF Downloads 40
1967 Comparison Study between Deep Mixed Columns and Encased Sand Column for Soft Clay Soil in Egypt

Authors: Walid El Kamash

Abstract:

Sand columns (or granular piles) can be employed to strengthen the soil beneath flexible constructions such as road embankments and oil storage tanks, in addition to multistory structures. The challenge of embedding sand columns in soft soil is that the surrounding soft soil cannot provide enough confinement stress to maintain the shape of the sand column. Therefore, sand columns installed in such soil lose their ability to provide the needed load-bearing capacity. The encasement, besides increasing the strength and stiffness of the sand column, prevents the lateral squeezing of the sand when the column is installed even in extremely soft soils, thus enabling quicker and more economical installation. This paper investigates the improvement in the load capacity of the sand column due to encasement through a comprehensive parametric study using 3-D finite difference analysis for the soft clay soil of Egypt. Moreover, the study was extended to include a comparison between encased sand columns and deep mixed (DM) columns. The study showed that confining the sand with geosynthetic resulted in an increment of shear strength. This result favors the use of encased sand columns rather than deep mixed columns, due to the relatively high permeability of the former material.

Keywords: encased sand column, Deep mixed column, numerical analysis, improving soft soil

Procedia PDF Downloads 377
1966 Utilizing Federated Learning for Accurate Prediction of COVID-19 from CT Scan Images

Authors: Jinil Patel, Sarthak Patel, Sarthak Thakkar, Deepti Saraswat

Abstract:

Recently, the COVID-19 outbreak has spread across the world, leading the World Health Organization to classify it as a global pandemic. To save patients' lives, COVID-19 symptoms have to be identified, but using an AI (Artificial Intelligence) model to identify COVID-19 within the allotted time is challenging. The RT-PCR test has been found to be inadequate for determining the COVID status of a patient; a Computed Tomography (CT) scan of the patient is a better alternative for determining whether the patient has COVID-19. However, compiling and storing all the data from various hospitals on a central server is challenging, and federated learning helps to resolve this problem. Certain deep learning models can help to classify COVID-19. This paper presents detailed work with deep learning models such as VGG19, ResNet50, MobileNetv2, and Deep Learning Aggregation (DLA), while maintaining privacy with encryption.

Keywords: federated learning, COVID-19, CT-scan, homomorphic encryption, ResNet50, VGG-19, MobileNetv2, DLA

Procedia PDF Downloads 71
1965 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction

Authors: Joy Cao, Min Zhou

Abstract:

Purpose: Acute Type A aortic dissection is a well-known cause of an extremely high mortality rate. A highly accurate and cost-effective non-invasive predictor is critically needed so that patients can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Tedious pre-processing and demanding calibration requirements further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better Type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate the biomarkers aortic blood pressure, WSS, and OSI, which are used to predict potential Type A aortic dissection and avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model has generated aortic blood pressure, WSS, and OSI results matching the expected patient health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential to create a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise. Further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic, with more clinical samples, to further improve the model's clinical applicability.

Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence.

Procedia PDF Downloads 88
1964 A Deep Learning Based Integrated Model For Spatial Flood Prediction

Authors: Vinayaka Gude Divya Sampath

Abstract:

The research introduces an integrated prediction model to assess the susceptibility of roads in a future flooding event. The model consists of a deep learning algorithm for forecasting gauge height data and the Flood Inundation Mapper (FIM) for spatial flooding. An optimal architecture for a Long Short-Term Memory (LSTM) network was identified for the gauge located on the Tangipahoa River at Robert, LA. Dropout was applied to the model to evaluate the uncertainty associated with the predictions. The estimates are then used along with FIM to identify the spatial flooding. Further geoprocessing in ArcGIS provides the susceptibility values for different roads. The model was validated based on the devastating flood of August 2016. The paper discusses the challenges of generalizing the methodology to other locations and to various types of flooding. The developed model can be used by transportation departments and other emergency response organizations for effective disaster management.
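One common way to read the dropout-based uncertainty estimate mentioned above is Monte Carlo dropout: dropout is kept active at prediction time, so repeated forward passes over the same gauge-height window yield a distribution of forecasts. The PyTorch sketch below illustrates this reading; the network sizes and window length are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class GaugeLSTM(nn.Module):
    def __init__(self, hidden=32, p_drop=0.2):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.drop = nn.Dropout(p_drop)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                   # x: (batch, time_steps, 1) gauge heights
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))

model = GaugeLSTM()
model.train()                               # keep dropout active for MC sampling
window = torch.randn(1, 48, 1)              # placeholder: 48 past gauge readings
samples = torch.stack([model(window) for _ in range(100)])
mean, std = samples.mean(), samples.std()   # forecast and its uncertainty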

Keywords: deep learning, disaster management, flood prediction, urban flooding

Procedia PDF Downloads 145
1963 Single-Camera Basketball Tracker through Pose and Semantic Feature Fusion

Authors: Adrià Arbués-Sangüesa, Coloma Ballester, Gloria Haro

Abstract:

Tracking sports players is a widely challenging scenario, especially in single-feed videos recorded on tight courts, where cluttering and occlusions cannot be avoided. This paper presents an analysis of several geometric and semantic visual features to detect and track basketball players. An ablation study is carried out and then used to show that a robust tracker can be built with Deep Learning features, without the need to extract contextual ones, such as proximity or color similarity, nor to apply camera stabilization techniques. The presented tracker consists of: (1) a detection step, which uses a pretrained deep learning model to estimate the players' poses, followed by (2) a tracking step, which leverages pose and semantic information from the output of a convolutional layer in a VGG network. Its performance is analyzed in terms of MOTA over a basketball dataset with more than 10k instances.
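A bare-bones sketch of the association step is given below: each detection carries a feature vector taken from a VGG convolutional layer, and detections in consecutive frames are matched by cosine similarity (pose distance could be folded into the same score). The similarity threshold is an assumption, and the feature extraction itself is omitted.

import numpy as np

def associate(prev_feats, curr_feats, min_sim=0.6):
    # prev_feats, curr_feats: (n_players, feat_dim) arrays, assumed L2-normalised.
    sim = prev_feats @ curr_feats.T          # cosine similarity matrix
    matches = []
    for i in range(sim.shape[0]):
        j = int(np.argmax(sim[i]))
        if sim[i, j] >= min_sim:
            matches.append((i, j))
    return matches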

Keywords: basketball, deep learning, feature extraction, single-camera, tracking

Procedia PDF Downloads 136
1962 DLtrace: Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps

Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li

Abstract:

With the widespread popularity of mobile devices and the development of artificial intelligence (AI), deep learning (DL) has been extensively applied in Android apps. Compared with traditional Android apps (traditional apps), deep learning based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly used to detect DL-based apps, as they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Moreover, using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in the Android static analyzer. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace has a more robust performance than FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.

Keywords: mobile computing, deep learning apps, sensitive information, static analysis

Procedia PDF Downloads 175