Search results for: deep layer
3738 Effects of the Ambient Temperature and the Defect Density on the Performance of the Solar Cell (HIT)
Authors: Bouzaki Mohammed Moustafa, Benyoucef Boumediene, Benouaz Tayeb, Benhamou Amina
Abstract:
The ambient temperature and the defect density in Heterojunction with Intrinsic Thin layer (HIT) solar cells strongly influence their performance. In the first part, we presented the band diagram of the front/back of the simulated solar cell based on a-Si:H/c-Si(p)/a-Si:H. In the second part, we modeled the following layer structure: ZnO/a-Si:H(n)/a-Si:H(i)/c-Si(p)/a-Si:H(p)/Ag, where we studied the effect of the ambient temperature and of the defect density in the gap of the crystalline silicon layer on the performance of the heterojunction solar cell with intrinsic layer (HIT).
Keywords: heterojunction solar cell, solar cell performance, band diagram, ambient temperature, defect density
Procedia PDF Downloads 508
3737 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation
Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong
Abstract:
Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently more prone to artefacts due to their image formation process, in which a large number of independent detectors are involved and are assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on the deep neural network framework, where denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme using residual-driven dropout is determined based on the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves the readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of the PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation
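For illustration, a minimal sketch of one denoising auto-encoder block of the kind stacked in the approach above; the layer sizes, noise level, optimizer settings, and dummy patches are assumptions for illustration and are not taken from the paper.

```python
# Minimal denoising auto-encoder sketch (illustrative assumptions: dimensions,
# noise level, learning rate, and dummy TV-decomposed patches).
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    def __init__(self, in_dim=4096, hidden=1024):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, in_dim)   # reconstruction has the input size

    def forward(self, x):
        noisy = x + 0.1 * torch.randn_like(x)      # corrupt the input
        return self.decoder(self.encoder(noisy))

model = DenoisingAutoEncoder()
optim = torch.optim.SGD(model.parameters(), lr=1e-3)   # classical SGD + backprop
loss_fn = nn.MSELoss()                                  # squared reconstruction error

intrinsic_patches = torch.rand(32, 4096)  # stand-in for TV-decomposed intrinsic patches
for _ in range(10):
    optim.zero_grad()
    recon = model(intrinsic_patches)
    loss_fn(recon, intrinsic_patches).backward()
    optim.step()
```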
Procedia PDF Downloads 190
3736 Deterioration Prediction of Pavement Load Bearing Capacity from FWD Data
Authors: Kotaro Sasai, Daijiro Mizutani, Kiyoyuki Kaito
Abstract:
Expressways in Japan have been built in an accelerating manner since the 1960s with the aid of rapid economic growth. About 40 percent of the length of expressways in Japan is now 30 years old or older and has become superannuated. Time-related deterioration has therefore reached a degree at which administrators, from the standpoint of operation and maintenance, are forced to take prompt, large-scale measures aimed at repairing inner damage deep in pavements. Such measures have already been implemented for bridge management in Japan and are also expected to be adopted for pavement management. Thus, planning methods for these measures are increasingly in demand. Deterioration of the layers near the road surface, such as the surface course and binder course, occurs at the early stages of the whole pavement deterioration process, around 10 to 30 years after construction. These layers have been repaired primarily because inner damage usually becomes significant after outer damage, and because surveys for measuring inner damage, such as the Falling Weight Deflectometer (FWD) survey and the open-cut survey, are costly and time-consuming, which has made it difficult for administrators to focus on inner damage as much as they should. As expressways today suffer serious time-related deterioration deriving from the long time span since they entered service, it is obvious that repairing layers deep in pavements, such as the base course and subgrade, must be taken into consideration when planning large-scale maintenance. This sort of maintenance requires precisely predicting degrees of deterioration as well as grasping the present condition of pavements. Methods for predicting deterioration are either mechanical or statistical. While few mechanical models have been presented, as far as the authors know, previous studies have presented statistical methods for predicting deterioration in pavements: one describes the deterioration process by estimating a Markov deterioration hazard model, while another does so by estimating a proportional deterioration hazard model. Both studies analyze deflection data obtained from FWD surveys and present statistical methods for predicting the deterioration process of the layers near the road surface. However, the base course and subgrade layers remain unanalyzed. In this study, data collected from FWD surveys are analyzed to predict the deterioration process of layers deep in pavements, in addition to surface layers, by means of estimating a deterioration hazard model using continuous indexes. This model avoids the loss of information that occurs when rating categories are set in a Markov deterioration hazard model to evaluate degrees of deterioration in roadbeds and subgrades. By portraying continuous indexes, the model can predict deterioration in each layer of the pavement and evaluate it quantitatively. Additionally, as the model can depict the probability distribution of the indexes at an arbitrary point and establish an arbitrary risk control level, this study is expected to provide knowledge, such as life cycle cost, that informs decisions on where and when to perform maintenance.
Keywords: deterioration hazard model, falling weight deflectometer, inner damage, load bearing capacity, pavement
Procedia PDF Downloads 390
3735 Stock Price Prediction Using Time Series Algorithms
Authors: Sumit Sen, Sohan Khedekar, Umang Shinde, Shivam Bhargava
Abstract:
This study has been undertaken to investigate whether deep learning models are able to predict future stock prices by training the model with historical stock price data. Since this work requires time series analysis, various models are available today for this purpose, such as the Recurrent Neural Network LSTM, ARIMA, and Facebook Prophet. Applying these models, the movement of stock prices is predicted, and a forecast of the future price of a stock is also provided. The final product is a stock price prediction web application developed to provide the user with easy analysis of stocks as well as the predicted stock price for the next seven days.
Keywords: Autoregressive Integrated Moving Average, Deep Learning, Long Short Term Memory, Time-series
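As an illustration of the LSTM-based time-series approach mentioned above, here is a minimal sketch; the window length, network width, synthetic price series, and training settings are assumptions, not the authors' configuration.

```python
# Next-day price prediction from 30-day sliding windows (all settings illustrative).
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next closing price

prices = torch.cumsum(torch.randn(500), dim=0)          # synthetic price series
windows = prices.unfold(0, 30, 1)[:-1].unsqueeze(-1)    # 30-day sliding windows
targets = prices[30:].unsqueeze(-1)

model, loss_fn = PriceLSTM(), nn.MSELoss()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):
    optim.zero_grad()
    loss_fn(model(windows), targets).backward()
    optim.step()
```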
Procedia PDF Downloads 142
3734 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these attacks aim to alter the results of deep neural networks by modifying the inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. An adversarial attack slightly alters the image to move it over the decision boundary, causing the DNN to misclassify the image. The FGSM attack obtains the gradient with respect to the image and updates the image once, based on the gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the targeted attack, which is designed to make the machine classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training. Specifically, instead of training the neural network with clean examples only, we can explicitly let the neural network learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use the stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that both FGSM and PGD training do not affect the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defenses are overall significantly more effective than FGSM methods.
Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
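A minimal sketch of the FGSM step described above (a single gradient-sign update of the input); the toy model, epsilon value, and random data are assumptions for illustration, not the paper's setup.

```python
# One-step FGSM perturbation of a batch of images.
import torch
import torch.nn as nn

def fgsm_attack(model, images, labels, epsilon=0.1):
    """Take one gradient-sign step that pushes the images toward misclassification."""
    images = images.clone().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()   # single big step along the gradient sign
    return adv.clamp(0, 1).detach()

# toy usage on random MNIST-like data
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
x_adv = fgsm_attack(model, x, y)
```

PGD would repeat a smaller version of this step several times, projecting back into an epsilon-ball around the original image after each step, and adversarial (FGSM/PGD) training simply mixes such perturbed images into the training batches.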
Procedia PDF Downloads 195
3733 The Effects of Different Types of Cement on the Permeability of Deep Mixing Columns
Authors: Mojebullah Wahidy, Murat Olgun
Abstract:
In this study, four different types of cement are used to investigate the permeability of DMCs (Deep Mixing Columns) in clay. The clay used in this research is in the kaolin group, and the types of cement are CEM I 42.5 R normal Portland cement, CEM II/A-M (P-L) pozzolan-doped cement, CEM III/A 42.5 N blast furnace slag cement, and DMFC-800 fine-grained Portland cement. Firstly, rheological tests were done on every cement, and a 0.9 water/cement ratio was selected as the appropriate ratio. This ratio was used to prepare small-scale DMCs for all types of cement with 6%, 9%, 12%, and 15% cement, expressed as percentages of the dry weight of the clay. For all types of cement, three samples were prepared at every percentage and were cured for 7, 14, and 28 days before permeability testing. Based on the permeability tests on the small-scale DMCs, 12% was selected for the big-scale DMCs. A total of five big-scale DMCs were prepared using 12% cement and were cured for 28 days before permeability testing. The results of the permeability tests show that increasing the cement percentage and curing time of all DMCs decreases the permeability coefficient (k). Despite variable results at different cement ratios and curing times, in general, the samples treated with DMFC-800 fine-grained cement have the lowest permeability coefficient. Samples treated with the CEM II and CEM I cement types were the second and third lowest permeable samples. The highest permeability coefficient belongs to the samples treated with the CEM III cement type.
Keywords: deep mixing column, rheological test, DMFC-800, permeability test
Procedia PDF Downloads 78
3732 Frequency Modulation Continuous Wave Radar Human Fall Detection Based on Time-Varying Range-Doppler Features
Authors: Xiang Yu, Chuntao Feng, Lu Yang, Meiyang Song, Wenhao Zhou
Abstract:
The existing two-dimensional micro-Doppler feature extraction ignores the correlation between the spatial and temporal dimension features. For the range-Doppler map, the time dimension is introduced, and a frequency modulation continuous wave (FMCW) radar human fall detection algorithm based on time-varying range-Doppler features is proposed. Firstly, the range-Doppler sequence maps are generated from the echo signals of the continuous motion of the human body collected by the radar. Then the three-dimensional data cube composed of multiple frames of range-Doppler maps is input into a three-dimensional Convolutional Neural Network (3D CNN). The spatial and temporal features of the time-varying range-Doppler maps are extracted simultaneously by the convolution and pooling layers. Finally, the extracted spatial and temporal features are input into the fully connected layer for classification. The experimental results show that the proposed fall detection algorithm has a detection accuracy of 95.66%.
Keywords: FMCW radar, fall detection, 3D CNN, time-varying range-Doppler features
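A minimal sketch of a 3D CNN over a cube of stacked range-Doppler frames, as described above; the channel counts, kernel sizes, cube dimensions, and two-class output are illustrative assumptions rather than the authors' architecture.

```python
# Joint spatial-temporal feature extraction from a range-Doppler data cube.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),   # convolves over time, range, Doppler at once
    nn.ReLU(),
    nn.MaxPool3d(2),                             # pools over all three axes
    nn.Conv3d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                            # fall / non-fall logits
)

cube = torch.rand(4, 1, 16, 64, 64)              # 16 consecutive 64x64 range-Doppler maps
logits = model(cube)                             # shape (4, 2)
```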
Procedia PDF Downloads 123
3731 Atomic Layer Deposition of MoO₃ on Mesoporous γ-Al₂O₃ Prepared by Sol-Gel Method as Efficient Catalyst for Oxidative Desulfurization of Refractory Dibenzothiophene Compound
Authors: S. Said, Asmaa A. Abdulrahman
Abstract:
MoOₓ/Al₂O₃-based catalysts have long been widely used as active catalysts in the oxidative desulfurization reaction due to their high stability under severe reaction conditions and high resistance to sulfur poisoning. In this context, 4 and 9 wt.% MoO₃ grafted on mesoporous γ-Al₂O₃ have been synthesized using a modified atomic layer deposition (ALD) method. For comparison, another MoO₃/Al₂O₃ sample was prepared by the conventional wetness impregnation (IM) method. The effect of the preparation method on the metal-support interaction was evaluated using different characterization techniques, including X-ray diffraction, X-ray photoelectron spectroscopy (XPS), N₂ physisorption, transmission electron microscopy (TEM), H₂ temperature-programmed reduction, and FT-IR. The oxidative desulfurization (ODS) reaction of a model fuel oil was used as a probe reaction to examine the catalytic efficiency of the prepared catalysts. The ALD method led to samples with much better physicochemical properties than those prepared via the impregnation method. In particular, the 9 wt.% MoO₃/Al₂O₃ (ALD) catalyst shows enhanced catalytic performance of ~90% in the ODS reaction of the model fuel oil, which has been attributed to its higher Mo⁶⁺ surface concentration relative to Al³⁺, together with its large pore diameter and surface area. The kinetic study shows that the ODS of DBT follows pseudo-first-order kinetics.
Keywords: mesoporous Al₂O₃, xMoO₃/Al₂O₃, atomic layer deposition, wetness impregnation, ODS, DBT
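For reference, the pseudo-first-order rate law invoked in the kinetic study has the standard textbook form below; the symbols are generic, and the apparent rate constant k is not a value taken from this abstract.

```latex
\frac{dC_{\mathrm{DBT}}}{dt} = -k\,C_{\mathrm{DBT}}
\quad\Longrightarrow\quad
\ln\!\left(\frac{C_{0}}{C_{t}}\right) = k\,t
```

Here C₀ and C_t are the DBT concentrations at time 0 and time t; a plot of ln(C₀/C_t) versus t yields k as the slope.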
Procedia PDF Downloads 105
3730 Synthesis of 5-Substituted 1H-Tetrazoles in Deep Eutectic Solvent
Authors: Swapnil A. Padvi, Dipak S. Dalal
Abstract:
The chemistry of tetrazoles has grown tremendously in the past few years because tetrazoles are an important and useful class of heterocyclic compounds with widespread applications in medicinal chemistry, such as anticancer, antimicrobial, analgesic, antibacterial, antifungal, antihypertensive, and anti-allergic drugs. Furthermore, tetrazoles have applications in materials science as explosives, rocket propellants, and in information recording systems. In addition to this, they have a wide range of applications in coordination chemistry as ligands. Deep eutectic solvents (DES) have emerged over the current decade as a novel class of green reaction media and have been applied in various fields of science because of their unique physical and chemical properties, similar to those of ionic liquids, such as low vapor pressure, non-volatility, high thermal stability, and recyclability. In addition, the components of DES are cheaply available, low in toxicity, and biodegradable, which makes them particularly suitable for effective large-scale industrial applications. Herein we report that the [2+3] cycloaddition reaction of organic nitriles with sodium azide affords the corresponding 5-substituted 1H-tetrazoles in six different types of choline chloride-based deep eutectic solvents under mild reaction conditions. Choline chloride:ZnCl₂ (1:2) showed the best results for the synthesis of 5-substituted 1H-tetrazoles. This method avoids disadvantages such as the use of toxic metals and expensive reagents, drastic reaction conditions, and the presence of dangerous hydrazoic acid. The environmentally friendly approach, short reaction times, good to excellent yields, safe process, and simple workup make this method an attractive and useful contribution to the present green organic synthesis of 5-substituted 1H-tetrazoles. All synthesized compounds were characterized by IR, ¹H NMR, ¹³C NMR, and mass spectroscopy. The DES can be recovered and reused three times with very little loss in activity.
Keywords: click chemistry, choline chloride, green chemistry, deep eutectic solvent, tetrazoles
Procedia PDF Downloads 231
3729 Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering
Authors: R. Nandhini, Gaurab Mudbhari
Abstract:
From the field of health care to self-driving cars, machine learning and deep learning algorithms have revolutionized many fields through the proper utilization of images and visual-oriented data. Segmentation, regression, classification, clustering, dimensionality reduction, etc., are some of the machine learning tasks that have helped machine learning and deep learning models become state-of-the-art for fields where images are the key datasets. Among these tasks, classification and clustering are essential but difficult because of the intricate and high-dimensional characteristics of image data. This study examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Due to the distinctive structural attributes present in images, conventional methods often fail to effectively capture spatial patterns, which has led to the development of models that utilize more advanced architectures and attention mechanisms. In image classification, we investigated both CNNs and ViTs. One of the most promising models, well known for its ability to detect spatial hierarchies, is the CNN, and it serves as a core model in our study. The ViT, another core model, reflects a modern classification method that uses a self-attention mechanism, which makes it more robust by allowing it to learn global dependencies in images without relying on convolutional layers. This paper evaluates the performance of these two architectures based on accuracy, precision, recall, and F1-score across different image datasets, analyzing their appropriateness for various categories of images. In the domain of clustering, we assess DEC, Variational Autoencoders (VAEs), and conventional clustering techniques such as k-means applied to embeddings derived from CNN models. DEC, a prominent model in the field of clustering, has gained the attention of many ML engineers because of its ability to combine feature learning and clustering into a single framework; its main goal is to improve clustering quality through better feature representation. VAEs, on the other hand, are well known for using latent embeddings to group similar images without requiring prior labels, by utilizing a probabilistic clustering method.
Keywords: machine learning, deep learning, image classification, image clustering
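As a sketch of the "k-means on CNN embeddings" baseline mentioned above, the snippet below extracts features with a generic backbone and clusters them; the backbone choice, dummy images, and number of clusters are assumptions for illustration.

```python
# Cluster images by running k-means on CNN feature vectors.
import torch
from torchvision.models import resnet18
from sklearn.cluster import KMeans

backbone = resnet18(weights=None)
backbone.fc = torch.nn.Identity()          # strip the classifier to expose 512-d embeddings

images = torch.rand(64, 3, 224, 224)       # stand-in for an image dataset
with torch.no_grad():
    embeddings = backbone(images).numpy()  # (64, 512) feature vectors

clusters = KMeans(n_clusters=5, n_init=10).fit_predict(embeddings)
```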
Procedia PDF Downloads 12
3728 Preventive Effects of Silymarin in Retinal Intoxication with Methanol in Rat: Transmission Electron Microscope Study
Authors: A. Zarenezhad, A. Esfandiari, E. Zarenezhad, M. Mardkhoshnood
Abstract:
The aim of this study was to investigate the ultrastructure of the photoreceptor layer of male rats under the effect of methanol intoxication and the protective effect of silymarin against methanol toxicity. Fifteen adult male rats were divided into three groups: a control group; experimental group I, which received 4 g/kg methanol by intraperitoneal injection for five days; and experimental group II, which received 4 g/kg methanol by intraperitoneal injection for five days and 250 mg/kg silymarin orally for three months. At the end of the experiment, the eyes were removed; the retina was separated near the optic disc and studied by transmission electron microscopy. The results showed that the retina in experimental group I exhibited loss of outer segments and disorganization of the inner segments. Increased extracellular space, disappearance of the outer limiting membrane, and pyknotic nuclei were seen in this group. In contrast, normal outer segments, organized inner segments, and a normal outer limiting membrane were evident after treatment with silymarin in experimental group II. These findings show that methanol damages the photoreceptor layer of the rat retina and that silymarin can protect the retina against methanol intoxication.
Keywords: ultra-structure, photoreceptor layer, methanol intoxication, silymarin, rat
Procedia PDF Downloads 292
3727 Improving Lane Detection for Autonomous Vehicles Using Deep Transfer Learning
Authors: Richard O’Riordan, Saritha Unnikrishnan
Abstract:
Autonomous Vehicles (AVs) are incorporating an increasing number of ADAS features, including automated lane-keeping systems. In recent years, many research papers on lane detection algorithms have been published, ranging from computer vision techniques to deep learning methods. The transition from the lower levels of autonomy defined in the SAE framework to higher autonomy levels requires increasingly complex models and algorithms that must be highly reliable in their operation and functionality. Furthermore, these algorithms have no room for error when operating at high levels of autonomy. The current research details existing computer vision and deep learning algorithms, their methodologies and individual results, as well as the challenges faced by the algorithms, the resources needed to operate them, and the shortcomings experienced during their detection of lanes in certain weather and lighting conditions. This paper explores these shortcomings and attempts to implement a lane detection algorithm that could be used to achieve improvements in AV lane detection systems. The paper uses a pre-trained LaneNet model to detect lane and non-lane pixels using binary segmentation as the base detection method, first on the existing BDD100K dataset and then on a custom dataset generated locally. The selected roads will be modern, well-laid roads with up-to-date infrastructure and lane markings, while the second road network will be an older road with infrastructure and lane markings reflecting the road network's age. The performance of the proposed method will be evaluated on the custom dataset and compared with its performance on the BDD100K dataset. In summary, this paper uses transfer learning to provide a fast and robust lane detection algorithm that can handle various road conditions and provide accurate lane detection.
Keywords: ADAS, autonomous vehicles, deep learning, LaneNet, lane detection
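A transfer-learning sketch for binary lane/non-lane segmentation in the spirit described above; a generic pretrained segmentation model stands in for LaneNet here, and the head setup, input size, freezing policy, and dummy tensors are assumptions for illustration.

```python
# Fine-tune only the segmentation head on lane/non-lane masks.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(weights=None, num_classes=1)    # 1 logit per pixel: lane vs non-lane
for p in model.backbone.parameters():
    p.requires_grad = False                          # freeze the encoder (ImageNet-pretrained by default)

images = torch.rand(2, 3, 256, 512)                  # road-scene frames (dummy)
masks = torch.randint(0, 2, (2, 1, 256, 512)).float()

optim = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
loss = nn.BCEWithLogitsLoss()(model(images)["out"], masks)
loss.backward()
optim.step()
```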
Procedia PDF Downloads 104
3726 Optimizing Perennial Plants Image Classification by Fine-Tuning Deep Neural Networks
Authors: Khairani Binti Supyan, Fatimah Khalid, Mas Rina Mustaffa, Azreen Bin Azman, Amirul Azuani Romle
Abstract:
Perennial plant classification plays a significant role in various agricultural and environmental applications, assisting in plant identification, disease detection, and biodiversity monitoring. Nevertheless, attaining high accuracy in perennial plant image classification remains challenging due to the complex variations in plant appearance, the diverse range of environmental conditions under which images are captured, and the inherent variability in image quality stemming from factors such as lighting conditions, camera settings, and focus. This paper proposes an adaptation approach to optimize perennial plant image classification by fine-tuning pre-trained DNN models. It explores the efficacy of fine-tuning prevalent architectures, namely VGG16, ResNet50, and InceptionV3, leveraging transfer learning to tailor the models to the specific characteristics of perennial plant datasets. A subset of the MYLPHerbs dataset, consisting of 6 perennial plant species and 13,481 images captured under various environmental conditions, was used in the experiments. Different strategies for fine-tuning, including adjusting learning rates, training set sizes, data augmentation, and architectural modifications, were investigated. The experimental outcomes underscore the effectiveness of fine-tuning deep neural networks for perennial plant image classification, with ResNet50 showcasing the highest accuracy of 99.78%. Despite ResNet50's superior performance, both VGG16 and InceptionV3 achieved commendable accuracies of 99.67% and 99.37%, respectively. The overall outcomes reaffirm the robustness of the fine-tuning approach across different deep neural network architectures, offering insights into strategies for optimizing model performance in the domain of perennial plant image classification.
Keywords: perennial plants, image classification, deep neural networks, fine-tuning, transfer learning, VGG16, ResNet50, InceptionV3
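A minimal fine-tuning sketch in the spirit of the approach described above; the frozen-backbone policy, learning rate, and dummy tensors are assumptions, and only the 6-class head reflects the 6 species in the dataset.

```python
# Replace the classifier head of a pretrained ResNet50 and train only that head.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT)     # ImageNet-pretrained backbone
for p in model.parameters():
    p.requires_grad = False                            # freeze transferred layers
model.fc = nn.Linear(model.fc.in_features, 6)          # new head for 6 perennial species

optim = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
x, y = torch.rand(8, 3, 224, 224), torch.randint(0, 6, (8,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
optim.step()
```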
Procedia PDF Downloads 66
3725 Structured Access Control Mechanism for Mesh-based P2P Live Streaming Systems
Authors: Chuan-Ching Sue, Kai-Chun Chuang
Abstract:
Peer-to-Peer (P2P) live streaming systems still face a challenge when thousands of new peers want to join the system in a short time, a situation called a flash crowd, in which most new peers suffer a long start-up delay. Recent studies have proposed a slot-based user access control mechanism, which periodically determines a certain number of new peers to enter the system, and a user batch join mechanism, which divides new peers into several tree structures with a fixed tree size. However, with the slot-based user access control mechanism it is difficult to accurately determine the optimal time slot length, and with the user batch join mechanism it is hard to determine the optimal tree size. In this paper, we propose a structured access control (SAC) mechanism, which organizes new peers into a multi-layer mesh structure. The SAC mechanism constructs new peer connections layer by layer to replace periodical access control, and determines the number of peers in each layer according to the system's remaining upload bandwidth and average video rate. Furthermore, we propose an analytical model to represent the growth behavior of the system when the upload bandwidth is utilized efficiently. The analytical results show a trend in system growth similar to that of the SAC mechanism. Additionally, extensive simulations are conducted to show that the SAC mechanism outperforms two previously proposed methods in terms of system growth and start-up delay.
Keywords: peer-to-peer, live video streaming system, flash crowd, start-up delay, access control
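As a rough illustration of the layer-sizing rule described above (admitting into each layer only as many peers as the spare upload bandwidth can feed at the average video rate), here is a back-of-the-envelope sketch; the function, its parameters, and all numbers are invented for illustration and are not the paper's model.

```python
# Each layer admits what the current spare upload bandwidth can serve; admitted
# peers then contribute their own upload capacity to later layers (assumed rule).
def sac_layer_sizes(total_upload_mbps, video_rate_mbps, peer_upload_mbps, layers=4):
    sizes, spare = [], total_upload_mbps
    for _ in range(layers):
        admitted = int(spare // video_rate_mbps)   # peers this layer can support
        if admitted == 0:
            break
        sizes.append(admitted)
        spare += admitted * peer_upload_mbps       # admitted peers add upload capacity
        spare -= admitted * video_rate_mbps        # ...but each also consumes one stream
    return sizes

print(sac_layer_sizes(total_upload_mbps=100, video_rate_mbps=2, peer_upload_mbps=3))
```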
Procedia PDF Downloads 318
3724 Graphene/ZnO/Polymer Nanocomposite Thin Film for Separation of Oil-Water Mixture
Authors: Suboohi Shervani, Jingjing Ling, Jiabin Liu, Tahir Husain
Abstract:
Offshore oil spills have become one of the most pressing environmental problems in the world. In the current paper, a graphene/ZnO/polymer nanocomposite thin film is coated on a stainless steel mesh via a layer-by-layer deposition method. The structural characterization of the materials is determined by Scanning Electron Microscopy (SEM) and X-ray diffraction (XRD). The total petroleum hydrocarbons (TPHs) and the separation efficiency are measured via gas chromatography with flame ionization detection (GC-FID). The TPHs are reduced to 2 ppm, and the separation efficiency of the nanocomposite-coated mesh reaches ≥ 99% for the final sample. The nanocomposite-coated mesh is thus a promising candidate for the separation of oil-water mixtures.
Keywords: oil spill, graphene, oil-water separation, nanocomposite
Procedia PDF Downloads 174
3723 Experimental Study of Geotextile Effect on Improving Soil Bearing Capacity in Aggregate Surfaced Roads
Authors: Mahdi Taghipour Masoumi, Ali Abdi Kordani, Mahmoud Nazirizad
Abstract:
The use of geosynthetics plays an important role in the construction of highways without additional layers, such as asphalt concrete or cement concrete, or in a subgrade layer, where it affects the bearing capacity of unbound layers. This laboratory experimental study was carried out to evaluate changes in the load-bearing capacity of soil reinforced with these materials in highway roadbeds with regard to geotextile properties. California Bearing Ratio (CBR) test samples were prepared with two types of soil, clayey and sandy, in both non-reinforced and reinforced conditions. The samples comprised three types of geotextiles with different characteristics (150, 200, 300 g/m²) placed at different depths (H = 5, 10, 20, 30, 50, 100 mm), and were grouped into two forms, one-layered and two-layered, based on the sample materials in order to perform the defined tests. The results showed that the soil bearing characteristics increased when one layer of geotextile was used in the clayey and sandy samples. However, a geotextile layer placed at a depth of more than 30 mm had no remarkable effect on the bearing capacity of the soil. Furthermore, although applying the two-layered geotextile arrangement increased the soil resistance, the results showed that adding more layers or heavier geotextiles changed the natural composition of the soil, making the results unreliable.
Keywords: reinforced soil, geosynthetics, geotextile, transportation capacity, CBR experiments
Procedia PDF Downloads 298
3722 Magnetohydrodynamic 3D Maxwell Fluid Flow Towards a Horizontal Stretched Surface with Convective Boundary Conditions
Authors: M. Y. Malika, Farzana, Abdul Rehman
Abstract:
The study deals with the steady, 3D MHD boundary layer flow of a non-Newtonian Maxwell fluid due to a horizontal surface stretched exponentially in two lateral directions. The temperature at the boundary is assumed to be distributed exponentially and to possess convective boundary conditions. The governing nonlinear system of partial differential equations, along with the associated boundary conditions, is simplified using a suitable transformation, and the resulting set of ordinary differential equations is solved through numerical techniques. The effects of the important parameters associated with the fluid flow and heat flux are shown through graphs.
Keywords: boundary layer flow, exponentially stretched surface, Maxwell fluid, numerical solution
Procedia PDF Downloads 589
3721 Rapid Degradation of High-Concentration Methylene Blue in the Combined System of Plasma-Enhanced Photocatalysis Using TiO₂-Carbon
Authors: Teguh Endah Saraswati, Kusumandari Kusumandari, Candra Purnawan, Annisa Dinan Ghaisani, Aufara Mahayum
Abstract:
The present study aims to investigate the degradation of methylene blue (MB) using a TiO₂-carbon (TiO₂-C) photocatalyst combined with dielectric barrier discharge (DBD) plasma. The carbon materials used in the photocatalyst were activated carbon and graphite. The TiO₂-C photocatalyst thin layer was prepared by a ball milling method and then deposited on a plastic sheet. The characteristics of the TiO₂-C thin layer were analyzed using X-ray diffraction (XRD), scanning electron microscopy (SEM) with energy dispersive X-ray (EDX) spectroscopy, and UV-Vis diffuse reflectance spectrophotometry. The XRD patterns of the TiO₂-G thin layers at weight compositions of 50:1, 50:3, and 50:5 show 2θ peaks around 25° and 27°, which are the main characteristics of TiO₂ and carbon. SEM analysis shows a spherical and regular morphology of the photocatalyst. Analysis using UV-Vis diffuse reflectance shows that TiO₂-C has a narrower band gap energy. The DBD plasma was generated using two electrodes, Cu tape connected to a stainless steel mesh and an Fe wire, separated by a glass dielectric insulator and supplied with a high voltage of 5 kV at an air flow rate of 1 L/min. The optimal weight composition of the TiO₂-C thin layer was determined based on the highest reduction of the MB concentration achieved, examined by UV-Vis spectrophotometry. The changes in the pH values and color of the MB indicated the success of MB degradation. Moreover, the degradation efficiency of MB was also studied at higher concentrations of 50, 100, 200, and 300 ppm treated for 0, 2, 4, 6, 8, and 10 min. The degradation efficiency of MB treated in the combined system of photocatalysis and DBD plasma reached more than 99% in 6 min, although the greater the methylene blue dye concentration, the lower the degradation rate achieved.
Keywords: activated carbon, DBD plasma, graphite, methylene blue, photocatalysis
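For reference, the degradation efficiency quoted above is conventionally computed from the initial and residual dye concentrations; the expression below is the standard definition, not a formula given in the abstract.

```latex
\eta\,(\%) = \frac{C_{0} - C_{t}}{C_{0}} \times 100
```

Here C₀ is the initial MB concentration and C_t the concentration after treatment time t.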
Procedia PDF Downloads 124
3720 Detecting Memory-Related Gene Modules in sc/snRNA-seq Data by Deep-Learning
Authors: Yong Chen
Abstract:
Understanding the detailed molecular mechanisms of memory formation in engram cells is one of the most fundamental questions in neuroscience. Recent single-cell RNA-seq (scRNA-seq) and single-nucleus RNA-seq (snRNA-seq) techniques have allowed us to explore the sparsely activated engram ensembles, enabling access to the molecular mechanisms that underlie experience-dependent memory formation and consolidation. However, the absence of specific and powerful computational methods to detect memory-related genes (modules) and their regulatory relationships in sc/snRNA-seq datasets has strictly limited the analysis of the underlying mechanisms and memory coding principles in mammalian brains. Here, we present a deep-learning method named SCENTBOX to detect memory-related gene modules and causal regulatory relationships among them from sc/snRNA-seq datasets. SCENTBOX first constructs a co-differential expression gene network (CEGN) from case versus control sc/snRNA-seq datasets. It then detects highly correlated modules of differentially expressed genes (DEGs) in the CEGN. Deep network embedding and attention-based convolutional neural network strategies are employed to precisely detect regulatory relationships among the DEGs in a module. We applied the method to scRNA-seq datasets of TRAP;Ai14 mouse neurons with fear memory and detected not only known memory-related genes but also their modules and potential causal regulations. Our results revealed novel regulations within an interesting module, including Arc, Bdnf, Creb, Dusp1, Rgs4, and Btg2. Overall, our method provides a general computational tool for processing sc/snRNA-seq data from case versus control studies and for a systematic investigation of fear-memory-related gene modules.
Keywords: sc/snRNA-seq, memory formation, deep learning, gene module, causal inference
Procedia PDF Downloads 120
3719 Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application
Authors: Thiago Spilborghs Bueno Meyer, Plinio Thomaz Aquino Junior
Abstract:
Through speech, which privileges the functional and interactive nature of the text, it is possible to ascertain the spatiotemporal circumstances, the conditions of production and reception of the discourse, and explicit purposes such as informing, explaining, convincing, etc. These conditions allow bringing human-robot interaction closer to the interaction between humans, making it natural and sensitive to information. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. The validity of using neural networks for feature selection and emotion recognition was verified. For this purpose, the use of neural networks and a comparison of models, such as recurrent neural networks and deep neural networks, are proposed in order to classify emotions from speech signals and verify the quality of recognition. This is expected to enable the deployment of robots in a domestic environment, such as the HERA robot from the RoboFEI@Home team, which focuses on autonomous service robots for the domestic environment. Tests were performed using only the Mel-Frequency Cepstral Coefficients (MFCC), as well as tests with several features: Delta-MFCC, spectral contrast, and the Mel spectrogram. To carry out the training, validation, and testing of the neural networks, the eNTERFACE'05 database was used, which has 42 speakers from 14 different nationalities speaking the English language. The data in the chosen database are videos that, for use in the neural networks, were converted into audio. As a result, a classification accuracy of 51.969% was found when using the deep neural network, while the recurrent neural network achieved an accuracy of 44.09%. The results are more accurate when only the Mel-Frequency Cepstral Coefficients are used for classification with the deep neural network, and in only one case is a greater accuracy observed for the recurrent neural network, namely when the various features are used with a batch size of 73 and 100 training epochs.
Keywords: emotion recognition, speech, deep learning, human-robot interaction, neural networks
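A minimal sketch of the single-feature (MFCC) pipeline described above; the network width, synthetic audio, and six-class output (matching the six archetypal emotions of eNTERFACE'05) are assumptions for illustration.

```python
# Extract an utterance-level MFCC vector and classify it with a small DNN.
import numpy as np
import librosa
import torch
import torch.nn as nn

sr = 16000
audio = np.random.randn(sr * 2).astype(np.float32)          # 2 s of dummy audio
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=40)       # (40, frames)
features = torch.tensor(mfcc.mean(axis=1)).unsqueeze(0)      # average over time -> (1, 40)

dnn = nn.Sequential(nn.Linear(40, 128), nn.ReLU(),
                    nn.Linear(128, 6))                       # 6 emotion classes
logits = dnn(features)
```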
Procedia PDF Downloads 170
3718 Machine Learning and Deep Learning Approach for People Recognition and Tracking in Crowd for Safety Monitoring
Authors: A. Degale Desta, Cheng Jian
Abstract:
Deep learning applications in computer vision are rapidly advancing, giving systems the ability to monitor the public and quickly identify potentially anomalous behaviour in crowd scenes. Therefore, the purpose of the current work is to improve the safety of people in crowd events involving panic behaviour by introducing the innovative idea of Aggregation of Ensembles (AOE), which makes use of pre-trained ConvNets and a pool of classifiers to find anomalies in video data of packed scenes. Since algorithms and architectures such as K-means, KNN, CNN, SVD, Faster R-CNN, and YOLOv5 learn different levels of semantic representation from crowd videos, the proposed approach leverages an ensemble of various fine-tuned convolutional neural networks (CNNs), allowing for the extraction of enriched feature sets. In addition to the above algorithms, a long short-term memory neural network is used to forecast future feature values, together with a handcrafted feature that takes the peculiarities of the crowd into consideration in order to understand human behavior. Experiments are run on well-known datasets of panic situations to assess the effectiveness and precision of the suggested method. The results reveal that, compared to state-of-the-art methodologies, the system produces better and more promising results in terms of accuracy and processing speed.
Keywords: action recognition, computer vision, crowd detecting and tracking, deep learning
Procedia PDF Downloads 162
3717 Stiffness and Modulus of Subgrade Reaction of the Soft Soil Improved by Stone Columns
Authors: Sudheer Kumar J., Sudhanshu Sharma
Abstract:
Stone columns are extensively used as a constructive and environmentally sustainable improvement method for improving the stiffness, modulus of subgrade reaction, and maximum lateral displacement of multilayer soil systems. The advantage of using stone columns as a ground reinforcement element to improve single-layer soft soil for supporting various structures at shallow depth is well researched, but the understanding of strengthening multilayer soil systems at deeper levels requires further study. In this paper, a series of cases has been analysed to study the behaviour of ordinary stone columns (OSC) and geosynthetic-encased stone columns (GESC) under various objectives for strengthening a multilayer soil system to a deep level. Finite element analyses were carried out using the software package PLAXIS to further study and correlate the results. The study aims to find the stiffness of the composite soil and the modulus of subgrade reaction, which are generally required for the design of various foundations, and also discusses the location of the maximum horizontal displacement, which is the major failure criterion observed after the installation of stone columns.
Keywords: stone columns, geotextile, finite element method, stiffness, modulus of subgrade reaction, maximum lateral displacement point
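For context, the modulus of subgrade reaction referred to above is conventionally defined as the ratio of the applied contact pressure to the resulting settlement; the expression below is the standard textbook definition, not a value or formula taken from this study.

```latex
k_{s} = \frac{p}{\delta}
```

Here p is the contact pressure on the loaded area and δ is the corresponding settlement.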
Procedia PDF Downloads 136
3716 Decolonising Postgraduate Research Curricula and Its Impact on a Sustainable Protein Supply in Rural-Based Communities
Authors: Fabian Nde Fon
Abstract:
Decolonisation is one of the hottest topics in most African universities; this is because many researchers focus on research that does not speak to their immediate community. This research looked at postgraduate research projects that can take students into the community to apply the knowledge they have learned, in an attempt to transform their community. In this regard, an honours project was designed to provide a cheaper and continuous source of protein (eggs) using amber-link layers and to investigate the potential of the project to promote postgraduate student development and entrepreneurship. Two ban layer production systems were created: (1) production system one on a hill (PS-I) and (2) production system two in a valley, closer to a dam (PS-II), at Nqutshini, Gingindlovu, KwaZulu-Natal Province. Forty point-of-lay (18 weeks old) amber links were bought from Inverness Rearers and divided into PS-I (20) and PS-II (20), and each of the production systems was further divided by random selection into two groups of ten (PS-I-1 and PS-II-1, partially supplemented; PS-I-2 and PS-II-2, supplemented with layer mash). Birds' weights were balanced in each group to avoid bias. The two groups in each production system were caged separately (1.5 × 1.5 m for ten birds) and in close proximity. Partially supplemented birds received 0.6 kg of layer mash (60 g/bird/day) and kitchen leftovers daily, and supplemented birds were fed 1.2 kg of layer mash (120 g/bird/day). Eggs were collected daily after feeding in the morning, while water was given ad libitum. The eggs were weighed and then assessed for internal and external quality before recording. Egg production from fully supplemented birds (PS-I-2 and PS-II-2) was generally higher (P<0.05) than that of PS-I-1 and PS-II-1. The difference in production was only 6% in the valley, while on the hill it was only 3%. However, some of the birds in the valley showed signs of respiratory infections, which were not observed in those on the hill. No differences were found in the internal and external qualities of the eggs (yolk colour and eggshell). This implies that both systems are sustainable. It was suggested that community members living in the valley or on the hill can use these hardy layers as a cheaper source of protein, preferably in the partially supplemented system because it is relatively cheaper. The smallholder farmers are still pursuing the project long after the students graduated; hence the benefit of the project is reciprocal for both the university and the community (entrepreneurship).
Keywords: animal nutrition, ban layer, production, postgraduate curricula, entrepreneurship
Procedia PDF Downloads 114
3715 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire
Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan
Abstract:
Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for the fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, which is the residual cross-section of uncharred timber additionally reduced by the so-called zero strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value for the zero strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations on the zero strength layer are given. Thus designers often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined based on numerical simulations performed with the recently developed advanced two-step computational model. The first step comprises a hygro-thermal model which predicts the temperature, moisture, and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear, and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved charring rates and new thicknesses of the zero strength layer in the case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer
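For reference, the standard-fire rule in EN 1995-1-2 that the abstract questions for parametric exposure defines the effective charring depth as the notional charring depth plus the zero strength layer; the expression below paraphrases that rule and is not a result of this study.

```latex
d_{\mathrm{ef}} = d_{\mathrm{char},n} + k_{0}\,d_{0}, \qquad d_{0} = 7\ \mathrm{mm}
```

Here d_char,n is the notional charring depth and k₀ increases linearly from 0 to 1 over the first 20 minutes of fire exposure for initially unprotected surfaces; the effective cross-section is the original section reduced by d_ef on each fire-exposed side.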
Procedia PDF Downloads 168
3714 Improvement of Soft Clay Soil with Biopolymer
Authors: Majid Bagherinia
Abstract:
Lime and cement are frequently used as binders in the Deep Mixing Method (DMM) to improve soft clay soils. The most significant disadvantages of these materials are carbon dioxide emissions and the consumption of natural resources. In this study, three different biopolymers, guar gum, locust bean gum, and sodium alginate, were investigated for the improvement of soft clay using DMM. In the experimental study, the effects of the additive ratio and curing time on the Unconfined Compressive Strength (UCS) of stabilized specimens were investigated. According to the results, the UCS values of the specimens increased as the additive ratio and curing time increased. The most effective additive was sodium alginate, and the highest strength was obtained after 28 days.
Keywords: deep mixing method, soft clays, ground improvement, biopolymers, unconfined compressive strength
Procedia PDF Downloads 80
3713 Road Condition Monitoring Using Built-in Vehicle Technology Data, Drones, and Deep Learning
Authors: Judith Mwakalonge, Geophrey Mbatta, Saidi Siuhi, Gurcan Comert, Cuthbert Ruseruka
Abstract:
Transportation agencies worldwide continuously monitor the condition of their roads to minimize road maintenance costs and maintain public safety and rideability. Existing methods for carrying out road condition surveys involve manual observation of roads using standard survey forms, done by qualified road condition surveyors or engineers either on foot or by vehicle. Automated road condition survey vehicles exist; however, they are very expensive since they require special vehicles equipped with sensors for data collection together with data processing and computing devices. The manual methods are expensive, time-consuming, infrequent, and can hardly provide real-time information on road conditions. This study contributes to this arena by utilizing built-in vehicle technologies, drones, and deep learning to automate road condition surveys while using low-cost technology. A single model is trained to capture flexible pavement distresses (potholes, rutting, cracking, and raveling), thereby providing a more cost-effective and efficient road condition monitoring approach that can also provide real-time road condition information. Additionally, data fusion is employed to enhance the road condition assessment with data from vehicles and drones.
Keywords: road conditions, built-in vehicle technology, deep learning, drones
Procedia PDF Downloads 124
3712 Deep Learning in Chest Computed Tomography to Differentiate COVID-19 from Influenza
Authors: Hongmei Wang, Ziyun Xiang, Ying liu, Li Yu, Dongsheng Yue
Abstract:
Intro: COVID-19 (Coronavirus Disease 2019) has greatly changed the global economic, political, and financial ecology. The mutation of the coronavirus in the UK in December 2020 has brought new panic to the world. Deep learning was applied to chest computed tomography (CT) images of COVID-19 and influenza to describe their characteristics. The predominant feature of COVID-19 pneumonia was ground-glass opacification, followed by consolidation. Lesion density: most lesions appear as ground-glass shadows, and some lesions coexist with solid lesions. Lesion distribution: the lesions are found mainly on the dorsal side of the periphery of the lung, concentrated in the lower lobes, and are often close to the pleura. Other features are grid-like shadows within ground-glass lesions, thickening of diseased vessels, air bronchogram signs, and halo signs. Severe disease involves both lungs entirely, showing a white-lung appearance; air bronchograms can be seen, and there can be a small amount of pleural effusion in both pleural cavities. At the same time, this year's flu season could be near its peak after surging throughout the United States for months. Chest CT of influenza infection is characterized by focal ground-glass shadows in the lungs, with or without patchy consolidation, with bronchiolar air bronchograms visible within the consolidation. There are patchy ground-glass shadows, consolidation, air bronchogram signs, mosaic lung perfusion, etc. The lesions are mostly confluent and prominent near the hila of both lungs. Grid-like shadows and small patchy ground-glass shadows are visible. Deep neural networks have great potential in image analysis and diagnosis that traditional machine learning algorithms do not. Method: Targeting the two major infectious diseases currently circulating in the world, COVID-19 and influenza, the chest CT scans of patients with the two infectious diseases are classified and diagnosed using deep learning algorithms. The residual network is proposed to solve the problem of network degradation when there are too many hidden layers in a deep neural network (DNN). The deep residual network (ResNet) is a milestone in the history of convolutional neural networks (CNNs) for images, as it solves the problem of training very deep CNN models. Many visual tasks can achieve excellent results by fine-tuning ResNet. The pre-trained convolutional neural network ResNet is introduced as a feature extractor, eliminating the need to design complex models and perform time-consuming training. Fastai is based on PyTorch and packages best practices for deep learning strategies, helping to find the best way to handle the diagnosis task. Based on the one-cycle approach of the fastai library, the classification and diagnosis of lung CT for the two infectious diseases is realized, and a higher recognition rate is obtained. Results: A deep learning model was developed to efficiently identify the differences between COVID-19 and influenza using chest CT.
Keywords: COVID-19, Fastai, influenza, transfer network
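A sketch of ResNet transfer learning with a one-cycle learning-rate schedule, in the spirit of the fastai workflow described above; the backbone depth, learning rates, step count, and dummy CT-like tensors are assumptions for illustration.

```python
# ResNet head replacement plus a one-cycle LR policy over a short training loop.
import torch
import torch.nn as nn
from torchvision.models import resnet34

model = resnet34(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)      # COVID-19 vs influenza logits

optim = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
steps = 10
sched = torch.optim.lr_scheduler.OneCycleLR(optim, max_lr=1e-2, total_steps=steps)

x, y = torch.rand(4, 3, 224, 224), torch.randint(0, 2, (4,))   # dummy CT slices and labels
for _ in range(steps):
    optim.zero_grad()
    nn.CrossEntropyLoss()(model(x), y).backward()
    optim.step()
    sched.step()                                   # LR rises then anneals over the cycle
```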
Procedia PDF Downloads 143
3711 Deep Injection Wells for Flood Prevention and Groundwater Management
Authors: Mohammad R. Jafari, Francois G. Bernardeau
Abstract:
With its arid climate, Qatar experiences low annual rainfall, intense storms, and high evaporation rates. However, the fast pace of infrastructure development in the capital city of Doha has led to recurring instances of surface water flooding as well as rising groundwater levels. The Public Works Authority (PWA/ASHGHAL) has implemented an approach to collect and discharge the flood water into a) positive gravity systems; b) Emergency Flooding Areas (EFA) for evaporation, infiltration, or off-site storage using tankers; and c) deep injection wells. As part of the flood prevention scheme, 21 deep injection wells have been constructed to discharge the collected surface water and groundwater in Doha city. These injection wells function as an alternative in localities that do not possess either positive gravity systems or downstream networks that can accommodate additional loads. The injection wells are 400 m deep and are constructed in complex karstic subsurface conditions with large cavities. The injection well system discharges collected groundwater and storm surface runoff into the permeable Umm Er Radhuma Formation, an aquifer present throughout the Persian Gulf region. The Umm Er Radhuma Formation contains saline water that is not being used for water supply. The injection zone is separated by an impervious gypsum formation, which acts as a barrier between the upper and lower aquifers. State-of-the-art drilling, grouting, and geophysical techniques have been implemented in the construction of the wells to ensure that the shallow aquifer is not contaminated or impacted by the injected water. Injection and pumping tests were performed to evaluate injection well functionality (injectability). The results of these tests indicated that the majority of the wells can accept an injection rate of 200 to 300 m³/h (56 to 83 l/s) under gravity, with an average value of 250 m³/h (70 l/s), compared to the design value of 50 l/s. This paper presents the design and construction process and issues associated with these injection wells, the injection/pumping tests performed to determine the capacity and effectiveness of the wells, the detailed design of the collection and conveyance system into the injection wells, and the operation and maintenance process. The system is now complete and in operation, demonstrating that the construction of injection wells is an effective option for flood control.
Keywords: deep injection well, flood prevention scheme, geophysical tests, pumping and injection tests, wellhead assembly
Procedia PDF Downloads 119
3710 Research on Design Methods for Riverside Spaces of Deep-cut Rivers in Mountainous Cities: A Case Study of Qingshuixi River in Chongqing City
Authors: Luojie Tang
Abstract:
Riverside space is an important public space and ecological corridor in urban areas, but mountainous urban rivers are often overlooked due to their deep valleys and poor accessibility. This article takes the Qingshuixi River in Chongqing as an example and, through long-term field inspections, measurements, interviews, and online surveys, summarizes the problems of its riverside space: poor accessibility, limited space for renovation, lack of waterfront facilities, excessive artificial intervention, low average runoff, severe river water pollution, and difficulty in integrated watershed management. Based on the current situation and drawing on relevant experience, this article summarizes design methods for the riverside spaces of deep-cut rivers in mountainous urban areas. Regarding spatial design techniques, the article emphasizes the importance of integrating waterfront spaces into the urban public space system and of vertical linkages. Furthermore, the article suggests different design methods and improvement strategies for already developed areas and new development areas. Specifically, the article proposes a planning and design strategy of "protection" and "empowerment" for new development areas and an updating and transformation strategy of "improvement" and "revitalization" for already developed areas. In terms of ecological restoration methods, the article suggests three focus points: increasing the runoff of urban rivers, raising the landscape water level during dry seasons, and restoring vegetation and wetlands in the riverbank buffer zone while protecting the overall pattern of the watershed. Additionally, the article presents specific design details of the Qingshuixi River to illustrate the proposed design and restoration techniques.
Keywords: deep-cut river, design method, mountainous city, Qingshuixi river in Chongqing, waterfront space design
Procedia PDF Downloads 109
3709 The Problems of Current Earth Coordinate System for Earthquake Forecasting Using Single Layer Hierarchical Graph Neuron
Authors: Benny Benyamin Nasution, Rahmat Widia Sembiring, Abdul Rahman Dalimunthe, Nursiah Mustari, Nisfan Bahri, Berta br Ginting, Riadil Akhir Lubis, Rita Tavip Megawati, Indri Dithisari
Abstract:
The Earth coordinate system is an important part of any attempt at earthquake forecasting, such as the one using the Single Layer Hierarchical Graph Neuron (SLHGN). However, there are a number of problems that need to be worked out before the coordinate system can be utilized by the forecaster. One example is that SLHGN requires the focus area of an earthquake to be constructed in a grid-like form. In fact, within the current Earth coordinate system, the same longitude difference produces different distances; this can be observed by comparing the distance along the Equator with the distance near the poles. To deal with this problem, a coordinate system has been developed to support the ongoing earthquake forecasting using SLHGN. Two important features have been developed in this system: 1) each location is represented not by two values (longitude and latitude) but by a single value, and 2) the conversion of the Earth coordinate system to the x-y Cartesian system requires no angular formulas and is therefore fast. The accuracy and performance have not been measured yet, since earthquake data are difficult to obtain. However, the characteristics of the SLHGN results are very promising.
Keywords: hierarchical graph neuron, multidimensional hierarchical graph neuron, single layer hierarchical graph neuron, natural disaster forecasting, earthquake forecasting, earth coordinate system
Procedia PDF Downloads 216