Search results for: deep seated gravitational slope deformation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3680

2810 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model

Authors: Anshika Kankane, Dongshik Kang

Abstract:

Marine debris has a significant impact on coastal environments, damaging biodiversity and causing losses to the marine and ocean sector. A functional, cost-effective and automatic approach is proposed to address this problem: computer vision combined with a deep learning-based model to identify and categorize seven kinds of marine debris at different beach locations in Japan. This research compares state-of-the-art deep learning models with a proposed architecture that is used as a feature extractor for debris categorization. The model detects seven categories of litter using a manually constructed debris dataset, with Mask R-CNN for instance segmentation and a shape matching network called HOGShape, so that the detected debris can be cleaned up promptly by clean-up organizations notified through the system's warnings. The dataset is created by annotating images taken by a fixed KaKaXi camera with seven category labels using the CVAT annotation tool. A HOG feature extractor pre-trained with LIBSVM is used, together with multiple-template matching between HOG maps of images and HOG maps of templates, to refine the predicted masks obtained from Mask R-CNN training. The system is intended to alert clean-up organizations in a timely manner using live recorded beach debris data. The suggested network improves misclassified debris masks for objects with different illuminations, shapes and viewpoints, and for litter with occlusions or vague visibility.
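
A minimal sketch of the HOG-map template-matching idea described above, not the authors' code: the synthetic frame, the cropped template and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np
from skimage.feature import hog

def hog_map(gray, cell=8):
    # visualize=True returns a HOG "image" that can be matched against templates
    _, hog_img = hog(gray, orientations=9, pixels_per_cell=(cell, cell),
                     cells_per_block=(2, 2), visualize=True)
    return hog_img.astype(np.float32)

# Placeholder fixed-camera frame and a debris "template" cropped from it
frame = np.random.rand(240, 320)
template = frame[100:164, 150:214]

frame_hog = hog_map(frame)
tmpl_hog = hog_map(template)

# Multiple-template matching would repeat this correlation for each stored template
result = cv2.matchTemplate(frame_hog, tmpl_hog, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)
print(f"best match score = {score:.2f} at (x, y) = {top_left}")
```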

Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching

Procedia PDF Downloads 106
2809 Impact of Integrated Signals for Doing Human Activity Recognition Using Deep Learning Models

Authors: Milagros Jaén-Vargas, Javier García Martínez, Karla Miriam Reyes Leiva, María Fernanda Trujillo-Guerrero, Francisco Fernandes, Sérgio Barroso Gonçalves, Miguel Tavares Silva, Daniel Simões Lopes, José Javier Serrano Olmedo

Abstract:

Human Activity Recognition (HAR) is having a growing impact on the creation of new applications and is driving emerging technologies. Wearable sensors are a key tool for exploring the behaviour of the human body during activities, since such devices are minimally invasive and comfortable for the wearer. In this study, a database that includes three activities is used. The activities were acquired from inertial measurement unit (IMU) sensors and motion capture (MOCAP) systems. The main objective is to compare the performance of four Deep Learning (DL) models: Deep Neural Network (DNN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN) and a hybrid Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) model, when considering acceleration, velocity and position, and to evaluate whether integrating the IMU acceleration to obtain velocity and position increases performance when these signals are used as input to the DL models. The integrated signals are also compared with the same type of data provided by the MOCAP system. Despite the acceleration data being cleaned before integration, the results show only a minimal increase in accuracy for the integrated signals.
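
A minimal sketch of the integration step mentioned above (an assumption, not the authors' pipeline): acceleration is integrated twice with the cumulative trapezoidal rule to obtain velocity and position. The 100 Hz sampling rate, the synthetic signal and the mean-removal "cleaning" are illustrative.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

fs = 100.0                         # assumed sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)       # 10 s of data
acc = np.sin(2 * np.pi * 1.0 * t)  # placeholder acceleration signal [m/s^2]

# Remove the mean (simple drift "cleaning") before integrating
acc = acc - acc.mean()

vel = cumulative_trapezoid(acc, t, initial=0.0)  # velocity [m/s]
pos = cumulative_trapezoid(vel, t, initial=0.0)  # position [m]

# Stack the three signals as one input window for a DL model
window = np.stack([acc, vel, pos], axis=-1)      # shape (samples, 3)
print(window.shape)
```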

Keywords: HAR, IMU, MOCAP, acceleration, velocity, position, feature maps

Procedia PDF Downloads 98
2808 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour

Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling

Abstract:

Digital Twin (DT) technology is a new technology that appeared in the early 21st century. A DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept so that abnormal changes in structural behaviour can be detected in real time using advanced modelling techniques, deep learning algorithms and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space that speeds up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were then used to identify the location of damage once it appears during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model constructed offline speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while the deep learning algorithms were found to be useful for estimating the location of damage of small severity.
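
A minimal sketch of an offline/online reduced-basis workflow of the kind described above (an assumption, not the authors' formulation): a reduced basis is built from FE snapshot solutions via SVD, then a new solve is projected onto the low-dimensional space. The snapshot matrix and stiffness stand-in are synthetic placeholders.

```python
import numpy as np

n_dof, n_snapshots = 2000, 50
snapshots = np.random.rand(n_dof, n_snapshots)        # offline FE solutions (placeholder)

# Offline stage: SVD of the snapshot matrix, keep the modes capturing ~99.99% energy
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999) + 1
V = U[:, :r]                                          # reduced basis, shape (n_dof, r)

# Online stage: solve the small r x r system instead of the full n_dof system
K = np.random.rand(n_dof, n_dof)
K = K + K.T + n_dof * np.eye(n_dof)                   # SPD stand-in for a stiffness matrix
f = np.random.rand(n_dof)
K_r = V.T @ K @ V
f_r = V.T @ f
u = V @ np.linalg.solve(K_r, f_r)                     # approximate full-order response
print(r, u.shape)
```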

Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model

Procedia PDF Downloads 99
2807 Creep Compliance Characteristics of Cement Dust Asphalt Concrete Mixtures

Authors: Ayman Othman, Tallat Abd el Wahed

Abstract:

The current research is directed towards studying the creep compliance characteristics of asphalt concrete mixtures modified with cement dust. This study can aid in assessing the permanent deformation potential of asphalt concrete mixtures. Cement dust was added to the mixture as mineral filler and compared with regular limestone filler. A power law model was used to characterize the creep compliance behavior of the studied mixtures. Creep testing results revealed that the creep compliance power law parameters have a strong relationship with mixture type. Testing results of the studied mixtures, as indicated by the creep compliance parameters, revealed an enhancement in creep resistance, Marshall stability, indirect tensile strength and compressive strength for cement dust mixtures compared to mixtures with traditional limestone filler. It is concluded that cement dust can be successfully used to decrease the potential of asphalt concrete mixtures for permanent deformation and to improve their mechanical properties, in addition to the environmental benefits that can be gained by using cement dust in asphalt paving technology.
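
A minimal sketch of fitting the power-law creep compliance model mentioned above, D(t) = D1·t^m (an assumption, not the authors' analysis); the compliance data points are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def creep_compliance(t, d1, m):
    return d1 * t**m

t = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)          # loading time [s]
D = 2.5e-4 * t**0.35 * (1 + 0.02 * np.random.randn(t.size))    # compliance [1/MPa], placeholder

(d1, m), _ = curve_fit(creep_compliance, t, D, p0=(1e-4, 0.5))
print(f"D1 = {d1:.3e} 1/MPa, m = {m:.3f}")
```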

Keywords: cement dust, asphalt concrete mixtures, creep compliance, Marshall stability, indirect tensile strength, compressive strength

Procedia PDF Downloads 427
2806 Water-Controlled Fracturing with Fuzzy-Ball Fluid in Tight Gas Reservoirs of Deep Coal Measures in Sulige

Authors: Xiangchun Wang, Lihui Zheng, Maozong Gan, Peng Zhang, Tong Wu, An Chang

Abstract:

The deep coal measure tight gas reservoirs in Sulige are usually stimulated by fracturing. Because the reservoir thickness is small, water layers can easily be connected during fracturing, which leads to water production and lower gas output from the wells. Therefore, it is necessary to control water during fracturing in deep coal measure tight gas reservoirs. Using fuzzy-ball fluid for water-control fracturing can increase gas output while reducing water output. The fuzzy-ball fluid was prepared in the laboratory for evaluation experiments. It was mixed in equal volumes with the pad fluid and with formation water to test its compatibility. A core displacement device was used to test gas and water breakthrough in matrix and fractured cores plugged by fuzzy-ball fluid, and the breakthrough pressure of the plug was used to assess its water-blocking performance. The experimental results show that there is no precipitation after the fuzzy-ball fluid is mixed with the pad fluid or with the formation water. The breakthrough pressure gradients of gas and water after the fuzzy-ball fluid plugged the fractures were 0.02 MPa/cm and 0.04 MPa/cm, respectively, and the breakthrough pressure gradients of gas and water after the matrix was plugged were 0.03 MPa/cm and 0.2 MPa/cm, respectively, which meet the requirements of field operation. Water-control fracturing was implemented on site in two wells, A and B, of the Sulige Gas Field. After the pad fluid was injected into the two wells, 50 m³ of fuzzy-ball fluid was pumped to plug the water. The operations went smoothly. After water-control fracturing, the average daily output over 161 days was increased by 13.71% and 6.99% compared with adjacent wells in the same layer. The adjacent wells were bubbled 3 times and 63 times, respectively, while there was no effusion in construction wells A and B. The results show that fuzzy-ball fluid is a water-plugging material suitable for water-control fracturing in tight gas wells, and its water-control mechanism can also provide a new idea for the development of water-control fracturing materials.

Keywords: coal seam, deep layer, fracking, fuzzy-ball fluid, reservoir reconstruction

Procedia PDF Downloads 227
2805 Keyframe Extraction Using Face Quality Assessment and Convolution Neural Network

Authors: Rahma Abed, Sahbi Bahroun, Ezzeddine Zagrouba

Abstract:

Due to the huge amount of data in videos, extracting the relevant frames has become a necessity and an essential step prior to performing face recognition. In this context, we propose a method for extracting keyframes from videos based on face quality and deep learning for a face recognition task. This method has two steps. We start by generating face quality scores for each face image based on three face feature extractors: Gabor, LBP, and HOG. The second step consists of training a Deep Convolutional Neural Network in a supervised manner in order to select the frames that have the best face quality. The obtained results show the effectiveness of the proposed method compared to state-of-the-art methods.

Keywords: keyframe extraction, face quality assessment, face in video recognition, convolution neural network

Procedia PDF Downloads 233
2804 Investigation of the Catalytic Role of Surfactants on Carbon Dioxide Hydrate Formation in Sediments

Authors: Ehsan Heidaryan

Abstract:

Gas hydrate sediments are ice-like permafrost in deep seas and oceans. Methane production through the sequestration process, together with the reduction of atmospheric carbon dioxide, a major greenhouse gas, has received increased attention recently. One focus is the capture, separation, and sequestration of industrial carbon dioxide. As a hydrate former, carbon dioxide forms hydrates at moderate temperatures and pressures. This phenomenon could be utilized to capture and separate carbon dioxide from flue gases, and also has the potential to sequester carbon dioxide in deep seabeds. This research investigated the effect of synthetic surfactants on carbon dioxide hydrate formation and catalysis and, consequently, on methane production from hydrate permafrost in sediments. It also investigated the sequestration potential of carbon dioxide hydrates in ocean sediments, as well as the catalytic effect of biosurfactants in these processes.

Keywords: carbon dioxide, hydrate, sequestration, surfactant

Procedia PDF Downloads 437
2803 Characterization and Analysis of Airless Tire in Mountain Cycle

Authors: Sadia Rafiq, Md. Ashab Siddique Zaki, Ananya Roy

Abstract:

Mountain cycling is a type of off-road bicycle racing that typically takes place on rocky, arid, or other challenging terrain on specially made mountain cycles. Professional cyclists race while attempting to stay on their bikes in a variety of locations across the world. As a safety measure in mountain cycling, where there is a high chance of injury in case of a tire puncture, it is preferable to use an airless tire instead of a pneumatic tire. Since an airless tire does not tend to go flat, it needs to be replaced less frequently. The airless tire replaces the pneumatic tire, wheel, and tire system with a single unit. It consists of a stiff hub connected to a shear band by flexible, pliable spokes made of a poly-composite, and a tread band, all of which work together as a single unit to replace all of the components of a normal radial tire. In this paper, an analysis of airless tires for mountain cycles is presented along with a study of their structure and materials. We take honeycomb and diamond spoke structures and compare the deformation in both cases to choose the preferable structure. Since the tread and spokes deform with surface roughness and impact, the tread thickness and the spoke design control how much the tire can distort. Through simulation, we conclude that the diamond structure deforms less than the honeycomb structure, so the diamond structure is preferable.

Keywords: airless tire, diamond structure, honeycomb structure, deformation

Procedia PDF Downloads 82
2802 Empirical Evaluation of Gradient-Based Training Algorithms for Ordinary Differential Equation Networks

Authors: Martin K. Steiger, Lukas Heisler, Hans-Georg Brachtendorf

Abstract:

Deep neural networks and their variants form the backbone of many AI applications. Based on the so-called residual networks, a continuous formulation of such models as ordinary differential equations (ODEs) has proven advantageous, since different techniques may be applied that significantly increase the learning speed and, at the same time, enable controlled trade-offs with the resulting error. For the evaluation of such models, high-performance numerical differential equation solvers are used, which also provide the gradients required for training. However, whether classical gradient-based methods are even applicable, or which one yields the best results, has not been discussed yet. This paper aims to remedy this situation by providing empirical results for different applications.
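
A minimal sketch of an ODE-formulated residual block trained with a standard gradient-based optimizer (an assumption, not the paper's setup): the dynamics dz/dt = f(z, t) are integrated with a fixed-step RK4 solver and gradients flow by backpropagating through the steps. Dimensions, the classification head and the data are placeholders.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, z):
        return self.net(z)

def rk4_integrate(func, z0, t0=0.0, t1=1.0, steps=10):
    """Classic fixed-step Runge-Kutta 4 integration of dz/dt = func(t, z)."""
    h = (t1 - t0) / steps
    z, t = z0, t0
    for _ in range(steps):
        k1 = func(t, z)
        k2 = func(t + h / 2, z + h / 2 * k1)
        k3 = func(t + h / 2, z + h / 2 * k2)
        k4 = func(t + h, z + h * k3)
        z = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + h
    return z

func = ODEFunc()
head = nn.Linear(32, 10)                          # assumed 10-class classification head
opt = torch.optim.Adam(list(func.parameters()) + list(head.parameters()), lr=1e-3)

x = torch.randn(8, 32)                            # placeholder features
y = torch.randint(0, 10, (8,))
logits = head(rk4_integrate(func, x))
loss = nn.functional.cross_entropy(logits, y)
loss.backward()                                   # gradients through the solver steps
opt.step()
print(float(loss))
```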

Keywords: deep neural networks, gradient-based learning, image processing, ordinary differential equation networks

Procedia PDF Downloads 168
2801 Archaic Ontologies Nowadays: Music of Rituals

Authors: Luminiţa Duţică, Gheorghe Duţică

Abstract:

Many of the interrogations or dilemmas of the contemporary world have found an answer in what has generically been called the appeal to the matrix. This genuine spiritual exercise of re-connecting the present to its origins, to the primary source, revealed the ontological condition of timeless, ahistorical, immutable (epi)phenomena, of those pure essences concentrated in the archetypal-referential layer of human existence. Musical creation was no exception to this trend: the impasse generated by the deterministic excesses of integral serialism or, conversely, by some questionable results of the extreme indeterminism proper to the avant-garde movements stimulated many composers to rediscover a universal grammar, as an emanation of a new ‘collective’ order (the reverse of utopian individualism). In this context, the music of oral tradition, and therefore the world of the ancient modes, represented a true revelation for the composers of the twentieth century, who suddenly stood before unsuspected (re)sources with a major impact on all levels of the edification of the musical work: morphology, syntax, timbrality, semantics, etc. For contemporary Romanian creators, the music of rituals existing in the local archaic culture opened unsuspected perspectives for what was meant to be a synthetic, inclusive and recuperative vision, where the primary (archetypal) genuine elements merge with the latest achievements of language of European composers. Thus, anchored in a strong and genuine modal source, the compositions analysed in this paper evoke, in a manner as modern as possible, the atmosphere of ancestral rituals such as the invocation of rain during drought (Paparudele, Scaloianul), the funeral ceremony (Bocetul), and traditions specific to the winter holidays and the New Year (Colinda, Cântecul de stea, Sorcova, traditional folklore dances). The reactivation of these rituals in the sound context of the twentieth century meant potentiating or reshaping the archaic spirit of the primordial symbolic entities, in terms of complexity levels generated by the techniques of harmonies of chordal layers, of complex aggregates (gravitational or non-gravitational, geometric), of mixture polyphonies and polyphonies with global effect (group, mass), and by the techniques of heterophony, texture and cluster, leading to the implementation of processes of collective improvisation and instrumental theatre.

Keywords: archetype, improvisation, polyphony, ritual, instrumental theatre

Procedia PDF Downloads 304
2800 Subsurface Structures Related to the Hydrocarbon Migration and Accumulation in the Afghan Tajik Basin, Northern Afghanistan: Insights from Seismic Attribute Analysis

Authors: Samim Khair Mohammad, Takeshi Tsuji, Chanmaly Chhun

Abstract:

The Afghan Tajik (foreland) basin, located in the depression zone between mountain axes, has been under compression and deformation during the collision of India with the Eurasian plate. The southern part of the Afghan Tajik basin, in the northern part of Afghanistan, has not been well studied and explored, but is considered to have significant potential for oil and gas resources. The depositional environments of the Afghan Tajik basin (< 8 km deep) resulted from mixed terrestrial and marine systems, which hold potential prospects in Jurassic (deep) and Tertiary (shallow) petroleum systems. We used 2D regional seismic profiles with a total length of 674.8 km (over an area of 2500 km²) in the southern part of the basin. To characterize the hydrocarbon systems and structures in this study area, we applied advanced seismic attributes such as spectral decomposition (10-60 Hz) based on time-frequency analysis with the continuous wavelet transform. The spectral decomposition results yield spectral amplitude anomalies (averaged over the 20-30 Hz group). Based on these anomalies and on seismic and structural interpretation, potential hydrocarbon accumulations were inferred around the main thrust folds in the Tertiary (Paleogene + Neogene) petroleum systems, which appear to be concentrated around the central study area. Furthermore, it seems that hydrocarbons dominantly migrated along the main thrusts and then accumulated around anticline fold systems, which could be sealed by mudstone/carbonate rocks.
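
A minimal sketch of CWT-based spectral decomposition of a single seismic trace with averaging over the 20-30 Hz band (an assumption, not the authors' workflow); the trace, sample interval and wavelet choice are illustrative.

```python
import numpy as np
import pywt

dt = 0.002                                   # 2 ms sample interval (assumed)
t = np.arange(0, 2, dt)
trace = np.sin(2 * np.pi * 25 * t) * np.exp(-((t - 1.0) ** 2) / 0.01)  # placeholder reflection

# Choose scales so that the Morlet centre frequencies span roughly 10-60 Hz
freqs_target = np.arange(10, 61, 1.0)
scales = pywt.central_frequency("morl") / (freqs_target * dt)

coeffs, freqs = pywt.cwt(trace, scales, "morl", sampling_period=dt)
amplitude = np.abs(coeffs)                   # time-frequency amplitude spectrum

band = (freqs >= 20) & (freqs <= 30)
attribute = amplitude[band].mean(axis=0)     # 20-30 Hz average amplitude along the trace
print(attribute.shape, float(attribute.max()))
```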

Keywords: The Afghan Tajik basin, seismic lines, spectral decomposition, thrust folds, hydrocarbon reservoirs

Procedia PDF Downloads 112
2799 Evaluating the Effectiveness of Combined Psychiatric and Psychotherapeutic Care versus Psychotherapy Alone in the Treatment of Depression and Anxiety in Cancer Patients

Authors: Nathen A. Spitz, Dennis Martin Kivlighan III, Arwa Aburizik

Abstract:

Background and Purpose: Presently, there is a paucity of naturalistic studies that directly compare the effectiveness of psychotherapy versus concurrent psychotherapy and psychiatric care for the treatment of depression and anxiety in cancer patients. Informed by previous clinical trials examining the efficacy of concurrent approaches, this study sought to test the hypothesis that a combined approach would result in the greatest reduction of depression and anxiety symptoms. Methods: Data for this study consisted of 433 adult cancer patients, with 252 receiving only psychotherapy and 181 receiving concurrent psychotherapy and psychiatric care at the University of Iowa Hospitals and Clinics. Longitudinal PHQ9 and GAD7 data were analyzed between both groups using latent growth curve analyses. Results: After controlling for treatment length and provider effects, results indicated that concurrent care was more effective than psychotherapy alone for depressive symptoms (γ₁₂ = -0.12, p = .037). Specifically, the simple slope for concurrent care was -0.25 (p = .022), and the simple slope for psychotherapy alone was -0.13 (p = .006), suggesting that patients receiving concurrent care experienced a greater reduction in depressive symptoms compared to patients receiving psychotherapy alone. In contrast, there were no significant differences between psychotherapy alone and concurrent psychotherapy and psychiatric care in the reduction of anxious symptoms. Conclusions: Overall, as both psychotherapy and psychiatric care may address unique aspects of mental health conditions, in addition to potentially providing synergetic support to each other, a combinatorial approach to mental healthcare for cancer patients may improve outcomes.

Keywords: psychiatry, psychology, psycho-oncology, combined care, psychotherapy, behavioral psychology

Procedia PDF Downloads 118
2798 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation

Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy

Abstract:

The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis and visualisation which allows the presentation of large and intricate datasets in a simple map-interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well-informed. This paper is a successful case study of how GIS spatial analysis techniques were applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment has numerous obstacles, whether they be topographical, geological, engineering or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes absolutely paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most-favourable pipeline route crossing of a challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative, subjective and is liable to bias towards the discipline and expertise that is involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites, the need for an automated, multi-criteria, and quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area. Geocost is defined as a numerical penalty score representing hazard posed by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows) to the pipeline. All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex with a series of incised, potentially active, canyons carved into a steep escarpment, with evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly-placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending an escarpment but the vulnerability of periodic failure of these spurs is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and of course the gas export pipeline operator guided the analyses and assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.
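
A minimal sketch of the geocost / least-cost-routing idea described above (an assumption, not the project's GIS workflow): rasterised constraints are combined into a composite geocost surface and the least-geocost path between two terminals is traced. The weights and synthetic rasters are illustrative.

```python
import numpy as np
from skimage.graph import route_through_array

rows, cols = 200, 300
slope = np.random.rand(rows, cols) * 30          # slope angle raster [deg], placeholder
rugosity = np.random.rand(rows, cols)            # rugosity raster, placeholder
debris_risk = np.random.rand(rows, cols)         # debris-flow vulnerability, placeholder

# Composite geocost: weighted sum of normalised constraint penalties (weights assumed)
geocost = 1.0 + 2.0 * (slope / slope.max()) + 1.0 * rugosity + 3.0 * debris_risk

start = (5, 5)                                   # terminal A (row, col)
end = (190, 290)                                 # terminal B (row, col)

path, total_cost = route_through_array(geocost, start, end,
                                        fully_connected=True, geometric=True)
print(f"route cells: {len(path)}, total geocost: {total_cost:.1f}")
```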

Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis

Procedia PDF Downloads 406
2797 Coupled Flexural-Lateral-Torsional of Shear Deformable Thin-Walled Beams with Asymmetric Cross-Section–Closed Form Exact Solution

Authors: Mohammed Ali Hjaji, Magdi Mohareb

Abstract:

This paper develops the exact solutions for coupled flexural-lateral-torsional static response of thin-walled asymmetric open members subjected to general loading. Using the principle of stationary total potential energy, the governing differential equations of equilibrium are formulated as well as the associated boundary conditions. The formulation is based on a generalized Timoshenko-Vlasov beam theory and accounts for the effects of shear deformation due to bending and warping, and captures the effects of flexural–torsional coupling due to cross-section asymmetry. Closed-form solutions are developed for cantilever and simply supported beams under various forces. In order to demonstrate the validity and the accuracy of this solution, numerical examples are presented and compared with well-established ABAQUS finite element solutions and other numerical results available in the literature. In addition, the results are compared against non-shear deformable beam theories in order to demonstrate the shear deformation effects.

Keywords: asymmetric cross-section, flexural-lateral-torsional response, Vlasov-Timoshenko beam theory, closed form solution

Procedia PDF Downloads 470
2796 MIMIC: A Multi Input Micro-Influencers Classifier

Authors: Simone Leonardi, Luca Ardito

Abstract:

Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts from Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold-standard dataset and to compare the performance of our approach against baseline classifiers. We prove the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model with different configurations and architectures. We obtained an accuracy of 0.91 with our best-performing model.
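
A minimal sketch of the fusion idea described above (an assumption, not the MIMIC pipeline): deep features extracted from unstructured posts are concatenated with structured account statistics and fed to an XGBoost classifier. All feature values and labels are placeholders.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

n_users = 500
deep_feats = np.random.rand(n_users, 128)      # e.g. text/image embeddings (placeholder)
struct_feats = np.random.rand(n_users, 10)     # e.g. follower count, post rate (placeholder)
X = np.hstack([deep_feats, struct_feats])
y = np.random.randint(0, 2, n_users)           # 1 = micro-influencer, 0 = other (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("f1:", f1_score(y_te, clf.predict(X_te)))
```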

Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media

Procedia PDF Downloads 183
2795 Cost Benefit Analysis: Evaluation among the Millimetre Wavebands and SHF Bands of Small Cell 5G Networks

Authors: Emanuel Teixeira, Anderson Ramos, Marisa Lourenço, Fernando J. Velez, Jon M. Peha

Abstract:

This article discusses benefit-cost analysis aspects of the millimetre wavebands (mmWaves) and the Super High Frequency (SHF) band. The decay of the carrier-to-noise-plus-interference ratio with coverage distance is assessed by considering two different path loss models: the two-slope urban micro Line-of-Sight (UMiLoS) model for the SHF band and the modified Friis propagation model for frequencies above 24 GHz. The equivalent supported throughput is estimated at the 5.62, 28, 38, 60 and 73 GHz frequency bands, and the influence of the carrier-to-noise-plus-interference ratio on the radio and network optimization process is explored. Mostly owing to the attenuation behaviour of the two-slope propagation model for the SHF band, the supported throughput at this band is higher than at the millimetre wavebands only for the longest cell lengths. The benefit-cost analysis of these pico-cellular networks was performed for regular cellular topologies, considering unlicensed spectrum. For the shortest distances, an optimum of the revenue in percentage terms can be identified at cell lengths of R ≈ 10 m for the millimetre wavebands, and for the longest distances an optimum of the revenue can be observed at R ≈ 550 m for 5.62 GHz. For the 5.62 GHz band, the profit is slightly lower than for the millimetre wavebands at the shortest values of R, and starts to increase for cell lengths approximately equal to the ratio between the break-point distance and the co-channel reuse factor, reaching a maximum for values of R approximately equal to 550 m.
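
A minimal sketch comparing free-space (Friis) path loss at a mmWave carrier with a generic two-slope path loss for the SHF band (an assumption, not the authors' exact models); the break-point distance and the post-break exponent are illustrative values.

```python
import numpy as np

c = 3e8  # speed of light [m/s]

def friis_path_loss_db(d_m, f_hz):
    """Free-space path loss in dB: 20log10(d) + 20log10(f) + 20log10(4*pi/c)."""
    return 20 * np.log10(d_m) + 20 * np.log10(f_hz) + 20 * np.log10(4 * np.pi / c)

def two_slope_path_loss_db(d_m, f_hz, d_break=80.0, n2=4.0):
    """Two-slope model: free-space slope before the break point, exponent n2 after (assumed)."""
    pl_break = friis_path_loss_db(d_break, f_hz)
    d = np.asarray(d_m, dtype=float)
    return np.where(d <= d_break,
                    friis_path_loss_db(np.maximum(d, 1.0), f_hz),
                    pl_break + 10 * n2 * np.log10(d / d_break))

d = np.array([10, 50, 100, 300, 550])  # cell lengths [m]
print("73 GHz Friis [dB]:", np.round(friis_path_loss_db(d, 73e9), 1))
print("5.62 GHz two-slope [dB]:", np.round(two_slope_path_loss_db(d, 5.62e9), 1))
```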

Keywords: millimetre wavebands, SHF band, SINR, cost benefit analysis, 5G

Procedia PDF Downloads 141
2794 Clinical Impact of Ultra-Deep Versus Sanger Sequencing Detection of Minority Mutations on the HIV-1 Drug Resistance Genotype Interpretations after Virological Failure

Authors: S. Mohamed, D. Gonzalez, C. Sayada, P. Halfon

Abstract:

Drug resistance mutations are routinely detected using standard Sanger sequencing, which does not detect minor variants with a frequency below 20%. The impact of detecting minor variants generated by ultra-deep sequencing (UDS) on HIV drug-resistance (DR) interpretations has not yet been studied. Fifty HIV-1 patients who experienced virological failure were included in this retrospective study. The HIV-1 UDS protocol allowed the detection and quantification of HIV-1 protease and reverse transcriptase variants related to genotypes A, B, C, E, F, and G. DeepChek®-HIV simplified DR interpretation software was used to compare Sanger sequencing and UDS. The total time required for the UDS protocol was found to be approximately three times longer than Sanger sequencing with equivalent reagent costs. UDS detected all of the mutations found by population sequencing and identified additional resistance variants in all patients. An analysis of DR revealed a total of 643 and 224 clinically relevant mutations by UDS and Sanger sequencing, respectively. Three resistance mutations with > 20% prevalence were detected solely by UDS: A98S (23%), E138A (21%) and V179I (25%). A significant difference in the DR interpretations for 19 antiretroviral drugs was observed between the UDS and Sanger sequencing methods. Y181C and T215Y were the most frequent mutations associated with interpretation differences. A combination of UDS and DeepChek® software for the interpretation of DR results would help clinicians provide suitable treatments. A cut-off of 1% allowed a better characterisation of the viral population by identifying additional resistance mutations and improving the DR interpretation.

Keywords: HIV-1, ultra-deep sequencing, Sanger sequencing, drug resistance

Procedia PDF Downloads 335
2793 Measuring Human Perception and Negative Elements of Public Space Quality Using Deep Learning: A Case Study of Area within the Inner Road of Tianjin City

Authors: Jiaxin Shi, Kaifeng Hao, Qingfan An, Zeng Peng

Abstract:

Due to a lack of data sources and data processing techniques, it has always been difficult to quantify public space quality, which includes both urban construction quality and how it is perceived by people, especially in large urban areas. This study proposes a quantitative research method based on consideration of the emotional and physical health aspects of the built environment. It highlights the low quality of public areas in Tianjin, China, where there are many negative elements. Deep learning technology is then used to measure how people perceive urban areas. First, this work proposes a deep learning model that simulates how people perceive the quality of urban construction. Second, we perform semantic segmentation on street images to identify the visual elements influencing scene perception. Finally, this study correlates the scene perception score with the proportion of visual elements to determine which surrounding environmental elements influence scene perception. Using a small-scale labeled Tianjin street view dataset and transfer learning, this study trains five negative-space discriminant models in order to explore the distribution of negative space and the quality improvement of urban streets. It then uses all Tianjin street-level imagery to make predictions and calculate the proportion of negative space. Visualizing the spatial distribution of negative space along the Tianjin Inner Ring Road reveals that the negative elements are mainly found close to the five key districts. The map of Tianjin was combined with the experimental data to perform the visual analysis. Based on the emotional assessment, the distribution of negative elements, and the direction of street guidelines, we propose guidance content and design strategies for addressing the negative phenomena in Tianjin street space along the two dimensions of perception and substance. This work demonstrates the use of deep learning techniques to understand how people appreciate high-quality urban construction, and it complements both theory and practice in urban planning. It illustrates the connection between human perception and the actual physical public space environment, allowing researchers to make urban interventions.

Keywords: human perception, public space quality, deep learning, negative elements, street images

Procedia PDF Downloads 114
2792 Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition

Authors: A. Shoiynbek, K. Kozhakhmet, P. Menezes, D. Kuanyshbay, D. Bayazitov

Abstract:

Speech emotion recognition (SER) has received increasing research interest in recent years. In most research work, emotional speech collected under controlled conditions has been used: actors imitating and artificially producing emotions in front of a microphone recorded these data. There are four issues related to that approach: (1) the emotions are not natural, which means that machines learn to recognize fake emotions; (2) the emotions are very limited in quantity and poor in their variety of speaking; (3) there is language dependency in SER; (4) consequently, each time researchers want to start working on SER, they need to find a good emotional database in their language. In this paper, we propose an approach to creating an automatic tool for speech emotion extraction based on facial emotion recognition and describe the sequence of actions of the proposed approach. One of the first objectives of this sequence is the speech detection issue. The paper gives a detailed description of a speech detection model based on a fully connected deep neural network for the Kazakh and Russian languages. Despite the high results in speech detection for Kazakh and Russian, the described process is suitable for any language. To illustrate the working capacity of the developed model, we performed an analysis of speech detection and extraction on real tasks.
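
A minimal sketch of frame-level speech/non-speech detection with MFCC features and a small fully connected network (an assumption, not the authors' model); the audio, labels and hyperparameters are placeholders.

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

sr = 16000
audio = np.random.randn(sr * 3).astype(np.float32)              # placeholder 3 s waveform
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).T        # (frames, 13)

X = torch.tensor(mfcc, dtype=torch.float32)
y = torch.randint(0, 2, (X.shape[0],))                          # 1 = speech frame (placeholder)

model = nn.Sequential(
    nn.Linear(13, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(5):                                              # tiny illustrative training loop
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X), y)
    loss.backward()
    opt.step()

speech_frames = model(X).argmax(dim=1)
print("speech frame ratio:", float(speech_frames.float().mean()))
```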

Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset

Procedia PDF Downloads 101
2791 Geometrically Nonlinear Analysis of Initially Stressed Hybrid Laminated Composite Structures

Authors: Moumita Sit, Chaitali Ray

Abstract:

The present article deals with the free vibration analysis of hybrid laminated composite structures with initial stresses developed in the laminates. Generally initial stresses may be developed in the laminates by temperature and moisture effect. In this study, an eight noded isoparametric plate bending element has been used for the finite element analysis of composite plates. A numerical model has been developed to assess the geometric nonlinear response of composite plates based on higher order shear deformation theory (HSDT) considering the Green–Lagrange type nonlinearity. A computer code based on finite element method (FEM) has also been developed in MATLAB to perform the numerical calculations. To validate the accuracy of the proposed numerical model, the results obtained from the present study are compared with those available in published literature. Effects of the side to thickness ratio, different boundary conditions and initial stresses on the natural frequency of composite plates have been studied. The free vibration analysis of a hollow stiffened hybrid laminated panel has also been carried out considering initial stresses and presented as case study.

Keywords: geometric nonlinearity, higher order shear deformation theory (HSDT), hybrid composite laminate, the initial stress

Procedia PDF Downloads 150
2790 Face Recognition Using Body-Worn Camera: Dataset and Baseline Algorithms

Authors: Ali Almadan, Anoop Krishnan, Ajita Rattani

Abstract:

Facial recognition is a widely adopted technology in surveillance, border control, healthcare, banking services, and, lately, mobile user authentication, with Apple introducing the “Face ID” moniker with the iPhone X. A lot of research has been conducted in the area of face recognition on datasets captured by surveillance cameras, DSLRs, and mobile devices. Recently, face recognition technology has also been deployed on body-worn cameras to keep officers safe, enabling situational awareness and providing evidence for trial. However, limited academic research has been conducted on this topic so far, and no publicly available datasets with a sufficient sample size exist. This paper aims to advance research in the area of face recognition using body-worn cameras. To this aim, the contribution of this work is two-fold: (1) the collection of a dataset consisting of a total of 136,939 facial images of 102 subjects captured using body-worn cameras in indoor and daylight conditions, and (2) the evaluation of various deep-learning architectures for face identification on the collected dataset. Experimental results suggest a maximum True Positive Rate (TPR) of 99.86% at a False Positive Rate (FPR) of 0.000, obtained by a SphereFace-based deep learning architecture in the daylight condition. The collected dataset and the baseline algorithms will promote further research and development. A downloadable link to the dataset and the algorithms is available by contacting the authors.
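
A minimal sketch of computing the TPR at a fixed FPR from verification similarity scores (an assumption, not the paper's evaluation code); the genuine/impostor score distributions are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)            # same-identity comparison scores (placeholder)
impostor = rng.normal(0.3, 0.1, 5000)           # different-identity comparison scores (placeholder)

scores = np.concatenate([genuine, impostor])
labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])

fpr, tpr, _ = roc_curve(labels, scores)

def tpr_at_fpr(target_fpr):
    # largest TPR whose operating point has FPR <= target
    return tpr[np.searchsorted(fpr, target_fpr, side="right") - 1]

print(f"TPR at FPR=0.001: {tpr_at_fpr(0.001):.4f}")
print(f"TPR at FPR=0.000: {tpr[fpr == 0].max():.4f}")
```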

Keywords: face recognition, body-worn cameras, deep learning, person identification

Procedia PDF Downloads 163
2789 Cladding Technology for Metal-Hybrid Composites with Network-Structure

Authors: Ha-Guk Jeong, Jong-Beom Lee

Abstract:

The cladding process is a typical technology for manufacturing composite materials by hydrostatic extrusion. Because there is no friction between the metal and the container, a uniform flow can easily be obtained during deformation. The general manufacturing process for a metal-matrix composite in the solid state consists of mixing metal powders and ceramic powders in a suitable volume ratio before compressing or extruding them in a can under cold or hot conditions. Since dispersing and physically mixing materials with very different characteristics requires a plurality of unit processing steps, the process is complicated and leads to non-uniform dispersion of the ceramics. It is hard to reach uniform, ideal properties because of coherence problems at the interface between the metal and the ceramic reinforcements. The metal hybrid composites presented in this report are manufactured through traditional plastic deformation processes such as hydrostatic extrusion, caliber rolling, and drawing. With these processes, the realization of a uniform macro- and microstructure is surely possible. In this study, aluminum, copper, and titanium were used as constituent materials; by adjusting the component ratio, it was possible to produce a metal hybrid composite that maximises the excellent characteristics of each material. MgB₂ superconductor wire was also fabricated via the same process. Their unique electrical and thermal characteristics will be introduced.

Keywords: cladding process, metal-hybrid composites, hydrostatic extrusion, electronic/thermal characteristics

Procedia PDF Downloads 179
2788 Instance Segmentation of Wildfire Smoke Plumes using Mask-RCNN

Authors: Jamison Duckworth, Shankarachary Ragi

Abstract:

Detection and segmentation of wildfire smoke plumes from remote sensing imagery are being pursued as a solution for early fire detection and response. Smoke plume detection can be automated and made robust through the application of artificial intelligence methods. Specifically, in this study the deep learning approach Mask Region-based Convolutional Neural Network (Mask R-CNN) is proposed to learn smoke patterns across different spectral bands. The method separates the smoke regions from the background and returns masks placed over the smoke plumes. Multispectral data were acquired from satellite imagery using NASA's Earthdata and WorldView services. Due to the use of multispectral bands along with the three visual bands, we show that Mask R-CNN can distinguish smoke plumes from clouds and other landscape features that resemble smoke.
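
A minimal sketch of adapting torchvision's Mask R-CNN to a background + smoke problem and running one training step (an assumption, not the authors' training code); the image, target box/mask and class count are placeholders.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2                                        # background + smoke plume
model = maskrcnn_resnet50_fpn(weights=None)            # weights="DEFAULT" would load COCO weights

# Replace the box and mask heads for the new number of classes
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, num_classes)

images = [torch.rand(3, 256, 256)]                     # placeholder 3-band composite
targets = [{
    "boxes": torch.tensor([[30.0, 30.0, 120.0, 120.0]]),
    "labels": torch.tensor([1]),
    "masks": torch.zeros(1, 256, 256, dtype=torch.uint8),
}]
targets[0]["masks"][0, 30:120, 30:120] = 1             # placeholder plume mask

model.train()
losses = model(images, targets)                        # dict of detection and mask losses
total = sum(losses.values())
total.backward()
print({k: float(v) for k, v in losses.items()})
```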

Keywords: deep learning, mask-RCNN, smoke plumes, spectral bands

Procedia PDF Downloads 127
2787 Characterization of Articular Cartilage Based on the Response of Cartilage Surface to Loading/Unloading

Authors: Z. Arabshahi, I. Afara, A. Oloyede, H. Moody, J. Kashani, T. Klein

Abstract:

Articular cartilage is a fluid-swollen tissue of synovial joints that functions by providing a lubricated surface for articulation and by facilitating load transmission. The biomechanical function of this tissue is highly dependent on the integrity of its ultrastructural matrix. Any alteration of the articular cartilage matrix, whether by injury or by degenerative conditions such as osteoarthritis (OA), compromises its functional behaviour. Assessment of articular cartilage is therefore important in the early stages of the degenerative process to prevent or reduce further joint damage and the associated socio-economic impact, and there has been increasing research interest in the functional assessment of articular cartilage. This study developed a characterization parameter for articular cartilage assessment based on the response of the cartilage surface to loading/unloading. This is because the response of articular cartilage to compressive loading is significantly depth-dependent, with the superficial zone and the underlying matrix responding differently to deformation. In addition, the alteration of the cartilage matrix in the early stages of degeneration is often characterized by PG loss in the superficial layer. In this study, it is hypothesized that the response of the superficial layer differs between normal and proteoglycan-depleted tissue. To establish the viability of this hypothesis, samples of visually intact and artificially proteoglycan-depleted bovine cartilage were subjected to compression at a constant rate to 30 percent strain using a ring-shaped indenter with an integrated ultrasound probe, and then unloaded. The response of the articular surface, which was indirectly loaded, was monitored using ultrasound during loading/unloading (deformation/recovery). It was observed that the rate of the cartilage surface response to loading/unloading differed between normal and PG-depleted cartilage samples. Principal Component Analysis was performed to assess the capability of the cartilage surface response to loading/unloading to distinguish between normal and artificially degenerated cartilage samples. The classification analysis of this parameter showed an overlap between normal and degenerated samples during loading, while there was a clear distinction between normal and degenerated samples during unloading. This study showed that the cartilage surface response to loading/unloading has the potential to be used as a parameter for cartilage assessment.
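
A minimal sketch of applying PCA to surface-response curves and inspecting how the two groups separate (an assumption, not the study's analysis); the recovery curves and group sizes are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)                               # normalised recovery time

# Placeholder recovery curves: degenerated samples recover at a different rate
normal = np.exp(-3.0 * t) + 0.02 * rng.standard_normal((20, t.size))
depleted = np.exp(-1.5 * t) + 0.02 * rng.standard_normal((20, t.size))

X = np.vstack([normal, depleted])
labels = np.array([0] * 20 + [1] * 20)                   # 0 = normal, 1 = PG-depleted

scores = PCA(n_components=2).fit_transform(X)
for group, name in [(0, "normal"), (1, "PG-depleted")]:
    print(f"{name}: mean PC1 = {scores[labels == group, 0].mean():.3f}")
```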

Keywords: cartilage integrity parameter, cartilage deformation/recovery, cartilage functional assessment, ultrasound

Procedia PDF Downloads 192
2786 Searching the Relationship among Components that Contribute to Interactive Plight and Educational Execution

Authors: Shri Krishna Mishra

Abstract:

In an educational context, technology can prompt interactive plight only when it is used in conjunction with interactive plight methods. This study, therefore, examines the relationships among components that contribute to higher levels of interactive plight and execution, such as interactive plight methods, technology, intrinsic motivation and deep learning. A total of 526 students participated in this study. Using structural equation modelling, the authors test the conceptual model and identify a satisfactory model fit. The results indicate that interactive plight methods, technology and intrinsic motivation have significant relationships with interactive plight; deep learning mediates the relationships of the other variables with execution.

Keywords: searching the relationship among components, contribute to interactive plight, educational execution, intrinsic motivation

Procedia PDF Downloads 454
2785 Progress in Combining Image Captioning and Visual Question Answering Tasks

Authors: Prathiksha Kamath, Pratibha Jamkhandi, Prateek Ghanti, Priyanshu Gupta, M. Lakshmi Neelima

Abstract:

Combining Image Captioning and Visual Question Answering (VQA) tasks has emerged as a new and exciting research area. The image captioning task involves generating a textual description that summarizes the content of the image. VQA aims to answer a natural language question about the image. Both tasks involve computer vision and natural language processing (NLP) and require a deep understanding of the content of the image and the semantic relationships within it, as well as the ability to generate a response in natural language. There has been remarkable growth in both tasks with the rapid advancement of deep learning. In this paper, we present a comprehensive review of recent progress in combining image captioning and visual question answering (VQA) tasks. We first discuss the image captioning and VQA tasks individually and then the various ways in which the two can be integrated. We also analyze the challenges associated with these tasks and ways to overcome them. We finally discuss the various datasets and evaluation metrics used in these tasks. The paper concludes with the need for generating captions based on context, and captions that are able to answer the most likely questions asked about the image, so as to aid the VQA task. Overall, this review highlights the significant progress made in combining image captioning and VQA, as well as the ongoing challenges and opportunities for further research in this exciting and rapidly evolving field, which has the potential to improve the performance of real-world applications such as autonomous vehicles, robotics, and image search.

Keywords: image captioning, visual question answering, deep learning, natural language processing

Procedia PDF Downloads 73
2784 A Deep Learning Approach to Online Social Network Account Compromisation

Authors: Edward K. Boahen, Brunel E. Bouya-Moko, Changda Wang

Abstract:

The major threat to online social network (OSN) users is account compromisation. Spammers now spread malicious messages by exploiting the trust relationship established between account owners and their friends. The challenge for service providers in detecting a compromised account is validating the trust relationships established between the account owners, their friends, and the spammers. Another challenge is the increase in the human interaction required for feature selection. The available research on supervised (machine) learning has limitations with feature selection and with accounts that cannot be profiled, such as application programming interfaces (APIs). Therefore, this paper discusses the various behaviours of OSN users and the current approaches to detecting a compromised OSN account, emphasizing their limitations and challenges. We propose a deep learning approach that addresses and resolves the constraints faced by the previous schemes. We detail our proposed optimized nonsymmetric deep auto-encoder (OPT_NDAE) for unsupervised feature learning, which reduces the level of human interaction required in the selection and extraction of features. We evaluated our proposed classifier using the NSL-KDD and KDDCUP'99 datasets in a graphical-user-interface-enabled Weka application. The results obtained indicate that our proposed approach outperformed most of the traditional schemes in OSN compromised account detection, with an accuracy rate of 99.86%.

Keywords: computer security, network security, online social network, account compromisation

Procedia PDF Downloads 119
2783 Study on the Effects of Grassroots Characteristics on Reinforced Soil Performance by Direct Shear Test

Authors: Zhanbo Cheng, Xueyu Geng

Abstract:

The vegetation slope protection technique is economical, aesthetic and practical. Herbs are widely used in practice because of their rapid growth, strong erosion resistance, obvious slope protection effect and simple planting method, and the root system of the grass plays a very important role. In this paper, by varying the grassroots quantity, grassroots diameter, grassroots length and the number of grassroots-reinforced layers, direct shear tests were carried out to examine the change in the shear strength indexes of grassroots-reinforced soil under different reinforcement conditions and to analyse the effects of grassroots characteristics on reinforced soil performance. The laboratory test results show that: (1) for a given grassroots diameter, grassroots length and number of reinforced layers, the values of shear strength and cohesion first increase and then decrease with increasing grassroots quantity; (2) for a given grassroots quantity, grassroots length and number of reinforced layers, the values of shear strength and cohesion increase with increasing grassroots diameter; (3) for a given grassroots diameter and number of reinforced layers, the values of shear strength and cohesion increase with increasing grassroots length within a certain range of grassroots quantity, while they first increase and then decline with increasing grassroots length once the grassroots quantity reaches a certain value; (4) for a given grassroots quantity, diameter and length, the values of shear strength and cohesion first increase and then decline with an increasing number of reinforced layers; (5) the change in internal friction angle is small for different grassroots parameters. The research results are important for understanding the mechanism of vegetation protection of slopes and for determining grass planting parameters.
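
A minimal sketch of estimating the shear strength indexes (cohesion c and internal friction angle φ) from direct shear results by fitting the Mohr-Coulomb envelope τ = c + σₙ·tan(φ) (an assumption, not the study's data processing); the test values are placeholders.

```python
import numpy as np

sigma_n = np.array([50.0, 100.0, 200.0, 300.0])     # normal stress [kPa]
tau = np.array([45.0, 78.0, 140.0, 205.0])          # peak shear stress [kPa], placeholder

slope, intercept = np.polyfit(sigma_n, tau, 1)      # linear Mohr-Coulomb envelope
cohesion = intercept                                # c [kPa]
phi = np.degrees(np.arctan(slope))                  # internal friction angle [deg]
print(f"c = {cohesion:.1f} kPa, phi = {phi:.1f} deg")
```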

Keywords: direct shear test, reinforced soil, grassroots characteristics, shear strength indexes

Procedia PDF Downloads 179
2782 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and lead to system compromise, data leakage, or denial of service. C and C++ open-source code is now available for creating a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure the performance. We made a comparative analysis between results derived from features containing a minimal text representation and features containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but they require longer execution time because the word embedding algorithm adds complexity to the overall system.
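
A minimal sketch of classifying a tokenised function as vulnerable or safe with an embedding layer followed by a bidirectional LSTM (an assumption, not the paper's models); vocabulary size, sequence length and labels are placeholders.

```python
import torch
import torch.nn as nn

vocab_size, seq_len, batch = 5000, 200, 16

class VulnLSTM(nn.Module):
    def __init__(self, vocab, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 2)            # vulnerable / not vulnerable

    def forward(self, tokens):
        x = self.emb(tokens)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])                 # last time step of both directions

tokens = torch.randint(1, vocab_size, (batch, seq_len))   # token ids of the minimal IR (placeholder)
labels = torch.randint(0, 2, (batch,))

model = VulnLSTM(vocab_size)
loss = nn.functional.cross_entropy(model(tokens), labels)
loss.backward()
print(float(loss))
```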

Keywords: cyber security, vulnerability detection, neural networks, feature extraction

Procedia PDF Downloads 89
2781 Understanding and Improving Neural Network Weight Initialization

Authors: Diego Aguirre, Olac Fuentes

Abstract:

In this paper, we present a taxonomy of weight initialization schemes used in deep learning. We survey the most representative techniques in each class and compare them in terms of overhead cost, convergence rate, and applicability. We also introduce a new weight initialization scheme. In this technique, we perform an initial feedforward pass through the network using an initialization mini-batch. Using statistics obtained from this pass, we initialize the weights of the network so that the following properties are met: 1) weight matrices are orthogonal; 2) ReLU layers produce a predetermined number of non-zero activations; 3) the output produced by each internal layer has unit variance; 4) weights in the last layer are chosen to minimize the error on the initial mini-batch. We evaluate our method on three popular architectures, and faster convergence rates are achieved on the MNIST, CIFAR-10/100, and ImageNet datasets when compared to state-of-the-art initialization techniques.
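
A minimal sketch of data-dependent initialization in the spirit of the scheme described above (an assumption, not the authors' exact procedure): a linear layer starts from an orthogonal weight matrix and each output unit is rescaled so its pre-activation has unit variance on an initialization mini-batch.

```python
import torch
import torch.nn as nn

layer = nn.Linear(256, 128, bias=True)
nn.init.orthogonal_(layer.weight)                 # orthogonal weight matrix (property 1)
nn.init.zeros_(layer.bias)

init_batch = torch.randn(512, 256)                # placeholder initialization mini-batch

with torch.no_grad():
    pre_act = layer(init_batch)                   # (512, 128)
    std = pre_act.std(dim=0, keepdim=True)        # per-unit standard deviation on the mini-batch
    layer.weight.div_(std.t())                    # rescale rows for unit output variance (property 3)
    layer.bias.div_(std.squeeze())

print(layer(init_batch).std(dim=0).mean())        # close to 1.0 after rescaling
```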

Keywords: deep learning, image classification, supervised learning, weight initialization

Procedia PDF Downloads 135