Search results for: deep machine learning

9017 Deep Routing Strategy: Deep Learning Based Intelligent Routing in Software Defined Internet of Things

Authors: Zabeehullah, Fahim Arif, Yawar Abbas

Abstract:

Software Defined Network (SDN) is a next-generation networking model that simplifies traditional network complexities and improves the utilization of constrained resources. Currently, most SDN-based Internet of Things (IoT) environments use traditional routing strategies that work on the basis of a maximum or minimum metric value. However, IoT network heterogeneity, dynamic traffic flow, and complexity demand intelligent and self-adaptive routing algorithms, because traditional routing algorithms lack self-adaptation, intelligence, and efficient utilization of resources. To some extent, SDN, due to its flexibility and centralized control, has managed IoT complexity and heterogeneity, but Software Defined IoT (SDIoT) still lacks intelligence. To address this challenge, we propose a model called Deep Routing Strategy (DRS), which uses a deep learning algorithm to perform routing in SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. Results demonstrate that the proposed model achieves high accuracy and a low packet loss rate during path selection, and outperforms the benchmark routing algorithm (OSPF). Moreover, the proposed model provides encouraging results under highly dynamic traffic flow.

Keywords: SDN, IoT, DL, ML, DRS

Procedia PDF Downloads 90
9016 Systematic Review and Meta-Analysis of Navigation in Oral and Maxillofacial Trauma and the Impact of Machine Learning and AI in Management

Authors: Shohreh Ghasemi

Abstract:

Introduction: Managing oral and maxillofacial trauma is a multifaceted challenge, as it can have life-threatening consequences and significant functional and aesthetic impact. Navigation techniques have been introduced to improve surgical precision to meet this challenge. A machine learning algorithm was also developed to support clinical decision-making regarding the treatment of oral and maxillofacial trauma. Given these advances, this systematic meta-analysis aims to assess the efficacy of navigational techniques in treating oral and maxillofacial trauma and to explore the impact of machine learning on their management. Methods: A detailed and comprehensive analysis of studies published between January 2010 and September 2021 was conducted through a systematic meta-analysis. This included a thorough search of the Web of Science, Embase, and PubMed databases to identify studies evaluating the efficacy of navigational techniques and the impact of machine learning in managing oral and maxillofacial trauma. Studies that did not meet the established entry criteria were excluded. In addition, the overall quality of the included studies was evaluated using the Cochrane risk-of-bias tool and the Newcastle-Ottawa scale. Results: A total of 12 studies, including 869 patients with oral and maxillofacial trauma, met the inclusion criteria. The analysis revealed that navigation techniques effectively improve surgical accuracy and minimize the risk of complications. Additionally, machine learning algorithms have proven effective in predicting treatment outcomes and identifying patients at high risk for complications. Conclusion: The introduction of navigational technology has great potential to improve surgical precision in oral and maxillofacial trauma treatment. Furthermore, developing machine learning algorithms offers opportunities to improve clinical decision-making and patient outcomes. Still, further studies are necessary to corroborate these results and to establish the optimal use of these technologies in managing oral and maxillofacial trauma.

Keywords: trauma, machine learning, navigation, maxillofacial, management

Procedia PDF Downloads 41
9015 Improving Machine Learning Translation of Hausa Using Named Entity Recognition

Authors: Aishatu Ibrahim Birma, Aminu Tukur, Abdulkarim Abbass Gora

Abstract:

Machine translation plays a vital role in the field of Natural Language Processing (NLP), breaking down language barriers and enabling communication across diverse communities. In the context of Hausa, a widely spoken language in West Africa, mainly in Nigeria, effective translation systems are essential for enabling seamless communication and promoting cultural exchange. However, due to the unique linguistic characteristics of Hausa, accurate translation remains a challenging task. This research proposes an approach to improving machine translation of Hausa by integrating Named Entity Recognition (NER) techniques. Named entities, such as person names, locations, organizations, and dates, are critical components of a language's structure and meaning. Incorporating NER into the translation process can enhance the quality and accuracy of translations by preserving the integrity of named entities, maintaining consistency when translating entities (e.g., proper names), and addressing cultural references specific to Hausa. NER will be incorporated into Neural Machine Translation (NMT) for Hausa-to-English translation.
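
The abstract leaves the integration mechanism open, but one common way to combine NER with NMT is a mask-translate-restore pipeline, sketched below in Python. Everything here is illustrative: recognize_entities stands in for a trained Hausa NER model and nmt_translate for the underlying translation model; neither is taken from the paper.

    def recognize_entities(text):
        """Stand-in for a trained Hausa NER model (hypothetical).
        Returns (entity, label) pairs found in the text."""
        known = {"Aliyu": "PER", "Kano": "LOC"}
        return [(tok, known[tok]) for tok in text.split() if tok in known]

    def nmt_translate(text):
        """Stand-in for the Hausa-to-English NMT model (hypothetical)."""
        lexicon = {"yana": "is", "zaune": "living", "a": "in"}
        return " ".join(lexicon.get(tok, tok) for tok in text.split())

    def translate_preserving_entities(text):
        # 1. Mask named entities so the NMT model cannot distort them.
        entities = recognize_entities(text)
        masked = text
        for i, (ent, _label) in enumerate(entities):
            masked = masked.replace(ent, f"__ENT{i}__")
        # 2. Translate the masked sentence.
        translated = nmt_translate(masked)
        # 3. Restore the original entities verbatim.
        for i, (ent, _label) in enumerate(entities):
            translated = translated.replace(f"__ENT{i}__", ent)
        return translated

    print(translate_preserving_entities("Aliyu yana zaune a Kano"))
    # -> "Aliyu is living in Kano"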

Keywords: machine translation, natural language processing (NLP), named entity recognition (NER), neural machine translation (NMT)

Procedia PDF Downloads 19
9014 Machine Learning Approach for Lateralization of Temporal Lobe Epilepsy

Authors: Samira-Sadat JamaliDinan, Haidar Almohri, Mohammad-Reza Nazem-Zadeh

Abstract:

Lateralization of temporal lobe epilepsy (TLE) is very important for positive surgical outcomes. We propose a machine learning framework to identify the epileptogenic hemisphere in TLE cases using magnetoencephalography (MEG) coherence source imaging (CSI) and diffusion tensor imaging (DTI). Unlike most studies, which use classification algorithms, we propose an effective clustering approach to distinguish between normal and TLE cases. We apply the well-known Minkowski weighted K-Means (MWK-Means) technique as the clustering framework. To overcome the problem of poor initialization in K-Means, we use particle swarm optimization (PSO) to effectively select the initial cluster centroids prior to applying MWK-Means. We demonstrate that, compared to K-Means and MWK-Means applied independently, this approach improves results on a benchmark data set.
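
To illustrate the initialization step, here is a minimal Python sketch in which a small particle swarm searches for good starting centroids before k-means refines them. Plain k-means from scikit-learn stands in for MWK-Means (which additionally learns Minkowski feature weights), and all PSO hyperparameters are assumptions, not the authors' settings.

    import numpy as np
    from sklearn.cluster import KMeans

    def pso_init_centroids(X, k, n_particles=20, iters=50, seed=0):
        """Pick initial centroids with a minimal particle swarm that minimizes
        total within-cluster squared distance. A sketch, not the authors' PSO."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        lo, hi = X.min(0), X.max(0)
        pos = rng.uniform(lo, hi, (n_particles, k, d))   # candidate centroid sets
        vel = np.zeros_like(pos)

        def cost(c):  # sum of distances from each point to its nearest centroid
            return np.min(((X[:, None, :] - c) ** 2).sum(-1), axis=1).sum()

        pbest, pbest_cost = pos.copy(), np.array([cost(c) for c in pos])
        gbest = pbest[pbest_cost.argmin()]
        for _ in range(iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos += vel
            costs = np.array([cost(c) for c in pos])
            improved = costs < pbest_cost
            pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
            gbest = pbest[pbest_cost.argmin()]
        return gbest

    # Toy 2-D data with three clusters; k-means starts from the PSO centroids.
    X = np.vstack([np.random.default_rng(1).normal(m, 0.5, (50, 2)) for m in (0, 3, 6)])
    centroids = pso_init_centroids(X, k=3)
    labels = KMeans(n_clusters=3, init=centroids, n_init=1).fit_predict(X)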

Keywords: temporal lobe epilepsy, machine learning, clustering, magnetoencephalography

Procedia PDF Downloads 129
9013 Image Processing-Based Maize Disease Detection Using Mobile Application

Authors: Nathenal Thomas

Abstract:

Corn, also known as maize (scientific name Zea mays subsp.), is a widely produced agricultural product with an important place in the food chain and in many other agricultural products. Corn is highly adaptable: it comes in many different types, is employed in many different industrial processes, and tolerates a wide range of agro-climatic conditions. In Ethiopia, maize is among the most widely grown crops, and small-scale corn farming may be a household's only source of food. The country's requirement for this crop is therefore very high while, conversely, its productivity is low for a variety of reasons. Corn diseases are the most damaging factor contributing to this imbalance between the crop's supply and demand. The failure to diagnose diseases in maize plants until it is too late is one of the most important factors limiting crop output in Ethiopia. This study will aid in the early detection of such diseases and support farmers during the cultivation process, directly affecting the amount of maize produced. Diseases in maize plants, such as northern leaf blight and cercospora leaf spot, have distinct, visible symptoms. This study aims to detect the most frequent and damaging maize diseases using deep learning, an efficient and widely used subset of machine learning, applied to image processing. Deep learning uses networks that can be trained from unlabeled data without supervision and simulates processes the human brain goes through when digesting data. Its applications include speech recognition, language translation, object classification, and decision-making. The Convolutional Neural Network (CNN), also known as a ConvNet, is a deep learning class that is widely used for image classification, image detection, face recognition, and other problems. This research uses this algorithm as the state of the art to detect maize diseases from photographs of maize leaves taken with a mobile phone.
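
As an illustration of the kind of model the abstract describes, the following is a minimal Keras CNN for three leaf classes. The input size, layer depths, and class count are assumptions for the sketch, not the study's final architecture.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Minimal CNN sketch for three classes: healthy, northern leaf blight,
    # and cercospora leaf spot (illustrative class set).
    model = models.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(3, activation="softmax"),  # one unit per leaf class
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Leaf photos taken with a mobile phone would be organized in
    # class-named folders and loaded like this:
    # train_ds = tf.keras.utils.image_dataset_from_directory(
    #     "maize_leaves/", image_size=(128, 128))
    # model.fit(train_ds, epochs=10)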

Keywords: CNN, zea mays subsp, leaf blight, cercospora leaf spot

Procedia PDF Downloads 55
9012 An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia

Authors: Carol Anne Hargreaves

Abstract:

A key issue in stock investment is how to select representative features for stock selection. The first objective of this paper is to determine whether an automated stock investment system, using machine learning techniques, can identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock's price is likely to go up, and to quantify the most important factors' contributions to that prediction. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks likely to go up in price (portfolio 1). Next, the principal component analysis technique was used to select stocks rated high on component one and component two (portfolio 2). Thirdly, a supervised machine learning technique, the logistic regression method, was used to select stocks with a high probability of their price going up (portfolio 3). The predictive models were validated with metrics such as sensitivity (recall), specificity, and overall accuracy; all accuracy measures were above 70%. All portfolios outperformed the market by more than eight times. The top three stocks were selected for each of the three stock portfolios and traded in the market for one month, after which the return for each portfolio was computed and compared with the stock market index return. The returns were 23.87% for the principal component analysis portfolio, 11.65% for the logistic regression portfolio, and 8.88% for the K-Means cluster portfolio, while the stock market returned 0.38%. This study confirms that an automated stock investment system using machine learning techniques can identify top-performing stock portfolios that outperform the stock market.
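
Portfolio 3's logic, estimating each stock's probability of going up with logistic regression and keeping the highest-probability names, can be sketched as follows. The feature names and the synthetic data are placeholders, not the paper's actual factor set.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Toy frame of technical features per stock (illustrative names only).
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "momentum": rng.normal(size=500),
        "volatility": rng.normal(size=500),
        "volume_change": rng.normal(size=500),
    })
    df["price_up"] = (df["momentum"] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        df.drop(columns="price_up"), df["price_up"], test_size=0.3, random_state=0)

    clf = LogisticRegression().fit(X_train, y_train)
    print("accuracy:", clf.score(X_test, y_test))
    # Coefficients indicate each factor's contribution to the up-probability.
    print(dict(zip(X_train.columns, clf.coef_[0])))

    # Portfolio 3: rank stocks by predicted probability of going up, take top 3.
    proba_up = clf.predict_proba(X_test)[:, 1]
    top3 = np.argsort(proba_up)[-3:]
    print("selected stocks (row indices):", top3)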

Keywords: machine learning, stock market trading, logistic regression, cluster analysis, factor analysis, decision trees, neural networks, automated stock investment system

Procedia PDF Downloads 135
9011 A System to Detect Inappropriate Messages in Online Social Networks

Authors: Shivani Singh, Shantanu Nakhare, Kalyani Nair, Rohan Shetty

Abstract:

As social networking grows at a rapid pace today, it is vital that we work on improving its management. Research has shown that the content present in online social networks may have significant influence on impressionable minds, and if such platforms are misused, it will lead to negative consequences. Detecting insults or inappropriate messages continues to be one of the most challenging aspects of Online Social Networks (OSNs) today. We address this problem through a machine-learning-based soft text classifier using the Support Vector Machine algorithm. The proposed system acts as a screening mechanism that alerts the user about such messages. Messages are classified according to their subject matter, and each comment is labeled for the presence of profanity and insults.
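
A minimal version of such a classifier, TF-IDF features plus a linear SVM in scikit-learn, can be sketched as follows; the four training messages are invented stand-ins for a real labeled corpus.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Tiny illustrative training set; a real system would use a large
    # labeled corpus of OSN comments.
    messages = ["you are a wonderful person", "have a great day",
                "you are an idiot", "nobody likes you, loser"]
    labels = [0, 0, 1, 1]   # 1 = insult/profanity present

    classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    classifier.fit(messages, labels)

    def screen(message):
        """Alert the user when a message is classified as inappropriate."""
        if classifier.predict([message])[0] == 1:
            return "ALERT: possibly inappropriate message"
        return "ok"

    print(screen("you are such an idiot"))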

Keywords: machine learning, online social networks, soft text classifier, support vector machine

Procedia PDF Downloads 480
9010 A System for Detecting Fake Profiles on Online Social Networks Using Machine Learning and Bio-Inspired Algorithms

Authors: Sekkal Nawel, Mahammed Nadir

Abstract:

The proliferation of online activities on Online Social Networks (OSNs) has captured significant user attention. However, this growth has been hindered by the emergence of fraudulent accounts that do not represent real individuals and that violate privacy regulations within social network communities. Consequently, it is imperative to identify and remove these profiles to enhance the security of OSN users. In recent years, researchers have turned to machine learning (ML) to develop strategies and methods to tackle this issue, and numerous studies have compared various ML-based techniques. However, the existing literature still lacks a comprehensive examination, especially one that considers different OSN platforms; additionally, the utilization of bio-inspired algorithms has been largely overlooked. Our study conducts an extensive comparative analysis of fake profile detection techniques in online social networks. The results indicate that both supervised and unsupervised machine learning models are effective for detecting fake profiles in social media. To achieve optimal results, we have incorporated six bio-inspired algorithms to enhance the performance of fake profile identification.

Keywords: machine learning, bio-inspired algorithm, detection, fake profile, system, social network

Procedia PDF Downloads 44
9009 Exploring Deep Neural Network Compression: An Overview

Authors: Ghorab Sara, Meziani Lila, Rubin Harvey Stuart

Abstract:

The rapid growth of deep learning has led to intricate and resource-intensive deep neural networks widely used in computer vision tasks. However, their complexity results in high computational demands and memory usage, hindering real-time applications. To address this, research has focused on model compression techniques. This paper provides an overview of recent advancements in compressing neural networks and categorizes the various methods into four main approaches: network pruning, quantization, network decomposition, and knowledge distillation. It aims to give a comprehensive outline of both the advantages and limitations of each method.
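
Two of the four approaches, pruning and quantization, are easy to demonstrate on a single weight matrix. The NumPy sketch below applies magnitude pruning followed by symmetric int8 quantization; the sparsity level and layer size are arbitrary illustrations.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(256, 256)).astype(np.float32)   # one dense layer's weights

    # Network pruning: zero out the weights with the smallest magnitudes.
    sparsity = 0.9
    threshold = np.quantile(np.abs(W), sparsity)
    W_pruned = np.where(np.abs(W) < threshold, 0.0, W)

    # Quantization: map float32 weights to int8 with a single scale factor.
    scale = np.abs(W_pruned).max() / 127.0
    W_int8 = np.round(W_pruned / scale).astype(np.int8)    # stored weights
    W_dequant = W_int8.astype(np.float32) * scale          # used at inference

    print("nonzero weights kept:", np.mean(W_pruned != 0))
    print("max quantization error:", np.abs(W_pruned - W_dequant).max())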

Keywords: model compression, deep neural network, pruning, knowledge distillation, quantization, low-rank decomposition

Procedia PDF Downloads 19
9008 Predicting the Frequencies of Tropical Cyclone-Induced Rainfall Events in the US Using a Machine-Learning Model

Authors: Elham Sharifineyestani, Mohammad Farshchin

Abstract:

Tropical cyclones are among the most expensive and deadliest natural disasters. They cause heavy rainfall and serious flash flooding that result in billions of dollars of damage and considerable mortality each year in the United States. Predicting the frequency of tropical cyclone-induced rainfall events can be helpful in emergency planning and flood risk management. In this study, we have developed a machine-learning model to predict the exceedance frequencies of tropical cyclone-induced rainfall events in the United States. Model results show satisfactory agreement with available observations. To examine the effectiveness of our approach, we have also compared our predictions with exceedance frequencies predicted by a physics-based rainfall model by Feldmann.

Keywords: flash flooding, tropical cyclones, frequencies, machine learning, risk management

Procedia PDF Downloads 223
9007 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a feature vector of some dimension from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, such algorithms generally reduce image detail by pooling, an operation that overlooks the very details forensic experts focus on. In our experiment, we adopted a variety of deep-learning-based face recognition algorithms and compared a large number of naturally collected face images against known frontal ID photos of the same individuals. Downscaling and manual handling were performed on the testing images. The results showed that the deep-learning-based facial recognition algorithms detected structural and morphological information and rarely focused on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were examined to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled at distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level; at the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of the objective method of facial comparison and provides a novel method for human-machine collaboration in this field.
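
The score-level fusion step can be sketched with Gaussian-modeled likelihood ratios: the system's comparison score is converted to an LR against genuine and impostor calibration distributions and combined with the examiner's LR. The distributions and numbers below are invented for illustration; the paper does not publish its fusion formula.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    # Calibration scores from the automated system (illustrative numbers).
    genuine = rng.normal(0.8, 0.1, 1000)    # same-person comparisons
    impostor = rng.normal(0.3, 0.1, 1000)   # different-person comparisons

    def likelihood_ratio(score, gen, imp):
        """LR = P(score | same person) / P(score | different person),
        with both densities modeled as Gaussians fitted to calibration data."""
        return norm.pdf(score, gen.mean(), gen.std()) / \
               norm.pdf(score, imp.mean(), imp.std())

    # Score-level fusion: combine the algorithm's LR with the examiner's LR
    # by adding log-LRs, assuming conditional independence of the two sources.
    lr_system = likelihood_ratio(0.75, genuine, impostor)
    lr_examiner = 4.0                        # expert's stated likelihood ratio
    fused_log_lr = np.log(lr_system) + np.log(lr_examiner)
    print("fused log-LR:", fused_log_lr,
          "-> same person" if fused_log_lr > 0 else "-> different person")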

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 104
9006 Investigation on Behavior of Fixed-Ended Reinforced Concrete Deep Beams

Authors: Y. Heyrani Birak, R. Hizaji, J. Shahkarami

Abstract:

Reinforced Concrete (RC) deep beams are special structural elements because of their geometry and behavior under loads; for example, the assumption of a linear strain-stress distribution in the cross section is not valid. These beams may have simple supports or fixed supports. Much research has been conducted on simply supported deep beams, but little work has examined the behavior of fixed-ended RC deep beams, even though the use of fixed-ended deep beams in structures has recently increased widely. In this study, the behavior of fixed-ended deep beams is investigated, and the important parameters governing the capacity of this type of beam are identified.

Keywords: deep beam, capacity, reinforced concrete, fixed-ended

Procedia PDF Downloads 314
9005 Embedded Hybrid Intuition: A Deep Learning and Fuzzy Logic Approach to Collective Creation and Computational Assisted Narratives

Authors: Roberto Cabezas H

Abstract:

This work presents the methodology developed to create narrative lighting spaces for the multimedia performance piece 'cluster: the vanished paradise.' This empirical research explores unconventional roles for machines in subjective creative processes by delving into the semantics of data and machine intelligence algorithms in hybrid technological and creative contexts, expanding epistemic domains through human-machine cooperation. The creative process in the scenic and performing arts is guided mostly by intuition; from that idea, we developed an approach to embed collective intuition in computational creative systems by joining the properties of Generative Adversarial Networks (GANs) and fuzzy clustering in a semi-supervised data creation and analysis pipeline. The model uses GANs to learn from phenomenological data (data generated from experience with lighting scenography) and algorithmic design data (data augmented by procedural design methods). Fuzzy logic clustering is then applied to the artificially created data from the GANs to define narrative transitions based on a membership index; this process allowed for the creation of simple and complex spaces with expressive capabilities, with position and light intensity as the parameters guiding the narrative. Hybridization comes not only from the human-machine symbiosis but also from the integration of different techniques in the implementation of the aided design system. Machine intelligence tools as proposed in this work are well suited to redefining collaborative creation by learning to express and expand a conglomerate of ideas and a wide range of opinions for the creation of sensory experiences. We found GANs and fuzzy logic to be ideal tools for developing new computational models based on interaction, learning, emotion, and imagination that expand the traditional algorithmic model of computation.

Keywords: fuzzy clustering, generative adversarial networks, human-machine cooperation, hybrid collective data, multimedia performance

Procedia PDF Downloads 119
9004 Comprehensive Study of Data Science

Authors: Asifa Amara, Prachi Singh, Kanishka, Debargho Pathak, Akshat Kumar, Jayakumar Eravelly

Abstract:

Today's generation is heavily dependent on technology that uses data as its fuel. The present study surveys innovations and developments in data science and gives an idea of how to use the available data efficiently. It will help readers understand the core concepts of data science, which comprises business understanding, data analysis, ethical concerns, programming languages, the various fields and sources of data, required skills, and more. The concept of artificial intelligence, introduced by Alan Turing, centers on creating an artificial system that can run independently of human-given programs and function by analyzing data to understand users' requirements. The usage of data science has evolved over the years. In this review article, we cover one part of data science: machine learning. Machine learning builds on data science; machines learn through experience, which helps them do work more efficiently. This article includes a comparative illustration of human understanding versus machine understanding, along with the advantages, applications, and real-time examples of machine learning. Data science has been an important game changer in human life: since its advent, it has led to a better understanding of people and their individual needs, and it has improved business strategies and services, forecasting, and the ability to pursue sustainable development. This study also focuses on a better understanding of data science, which can help us create a better world.

Keywords: data science, machine learning, data analytics, artificial intelligence

Procedia PDF Downloads 53
9003 Integration of Big Data to Predict Transportation for Smart Cities

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

Intelligent transportation systems are essential to building smarter cities, and machine-learning-based transportation prediction is a promising approach because it makes invisible aspects of the network visible. In this context, this research aims to build a prototype model that predicts the transportation network by using big data and machine learning technology. Among urban transportation systems, this research focuses on the bus system. The research problem is that existing headway models cannot respond to dynamic transportation conditions, so bus delays often occur. To overcome this problem, a prediction model is presented that finds patterns of bus delay by applying machine learning to the following data sets: traffic, weather, and bus status. This research presents a flexible headway model to predict bus delay and analyzes the results. The prototype model is built from real-time bus data gathered through public data portals and real-time government Application Program Interfaces (APIs). These data are the fundamental resources for organizing interval pattern models of bus operations together with traffic environment factors (road speeds, station conditions, weather, and real-time bus operation information). The prototype model was designed with a machine learning tool (RapidMiner Studio) and tested for bus delay prediction. This research presents experiments that increase the prediction accuracy of bus headway by analyzing urban big data; big data analysis is important for predicting the future and finding correlations by processing huge amounts of data. Based on this analysis method, the research represents an effective use of machine learning and urban big data to understand urban dynamics.

Keywords: big data, machine learning, smart city, social cost, transportation network

Procedia PDF Downloads 236
9002 Failure Mechanism in Fixed-Ended Reinforced Concrete Deep Beams under Cyclic Load

Authors: A. Aarabzadeh, R. Hizaji

Abstract:

Reinforced Concrete (RC) deep beams are a special type of beam due to their geometry, boundary conditions, and behavior compared to ordinary shallow beams. For example, the assumption of a linear strain-stress distribution in the cross section is not valid. Few studies have been dedicated to fixed-ended RC deep beams, and most experimental studies are carried out on simply supported deep beams. Given the recent tendency toward application of deep beams, the possibility of using fixed-ended deep beams in structures has increased widely. Therefore, it seems necessary to investigate this structural element in more detail. In addition to an experimental investigation of a concrete deep beam under cyclic load, different failure mechanisms of fixed-ended deep beams under this type of loading have been evaluated in the present study. The results show that the failure mechanisms of deep beams under cyclic loads are quite different from those under monotonic loads.

Keywords: deep beam, cyclic load, reinforced concrete, fixed-ended

Procedia PDF Downloads 332
9001 Time Series Forecasting (TSF) Using Various Deep Learning Models

Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan

Abstract:

Time Series Forecasting (TSF) is used to predict target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The dataset we used is the hourly Beijing Air Quality Dataset from the UCI website, a multivariate time series of many factors measured hourly for a period of 5 years (2010-14). For each model, we also report on the relationship between performance and the look-back window size and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best look-back window size for predicting 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.
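
The fixed-length look-back window described above translates directly into a sliding-window dataset. Below is a minimal Keras sketch with an LSTM (one of the four compared methods); random data stands in for the Beijing series, and look_back=24 mirrors the one-day window the paper found best for 1-hour-ahead prediction.

    import numpy as np
    import tensorflow as tf

    def make_windows(series, look_back, horizon):
        """Turn a (time, features) array into supervised pairs:
        X = look_back past steps, y = value 'horizon' steps ahead."""
        X, y = [], []
        for t in range(len(series) - look_back - horizon + 1):
            X.append(series[t:t + look_back])
            y.append(series[t + look_back + horizon - 1, 0])  # target = first feature
        return np.array(X), np.array(y)

    # Stand-in for the hourly multivariate air-quality series (random here).
    data = np.random.default_rng(0).normal(size=(2000, 8)).astype("float32")
    look_back, horizon = 24, 1          # one day in, one hour ahead
    X, y = make_windows(data, look_back, horizon)

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(look_back, data.shape[1])),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mae")   # MAE, as reported in the paper
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)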

Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window

Procedia PDF Downloads 137
9000 Loan Repayment Prediction Using Machine Learning: Model Development, Django Web Integration and Cloud Deployment

Authors: Seun Mayowa Sunday

Abstract:

Loan prediction is one of the most significant and recognised fields of research in the banking, insurance, and financial security industries. Some prediction systems on the market are built as static software; however, because static software operates only under strictly regulated rules, it cannot aid customers beyond those limitations. Machine learning (ML) techniques are therefore required for loan prediction. Four separate machine learning models are used to create the loan prediction model: random forest (RF), decision tree (DT), k-nearest neighbour (KNN), and logistic regression. Using the Anaconda Navigator and the required ML libraries, the models are created and evaluated with appropriate measurement metrics. The random forest performed best, with an accuracy of 80.17%, and was implemented in the Django framework. For real-time testing, the web application is deployed on Alibaba Cloud, one of the four largest cloud computing providers. Hence, to the best of our knowledge, this is the first academic paper to combine model development and the Django framework with deployment to Alibaba Cloud.
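
The train-then-deploy flow can be sketched as follows: fit the random forest, persist it with joblib, and load it inside a Django view. The synthetic data, file name, and field names are assumptions; the paper does not publish its schema.

    import joblib
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the loan dataset (the paper's schema is not given).
    rng = np.random.default_rng(42)
    X = rng.normal(size=(1000, 6))          # e.g., income, loan amount, tenure...
    y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                        random_state=42)
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    print("accuracy:", model.score(X_test, y_test))

    # Persist the model so a Django view can load it once and serve predictions.
    joblib.dump(model, "loan_rf.joblib")

    # Sketch of the Django side (views.py); FIELD_NAMES is hypothetical:
    # model = joblib.load("loan_rf.joblib")
    # def predict(request):
    #     features = [[float(request.POST[k]) for k in FIELD_NAMES]]
    #     return JsonResponse({"repaid": int(model.predict(features)[0])})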

Keywords: k-nearest neighbor, random forest, logistic regression, decision tree, django, cloud computing, alibaba cloud

Procedia PDF Downloads 106
8999 Breast Cancer Diagnosing Based on Online Sequential Extreme Learning Machine Approach

Authors: Musatafa Abbas Abbood Albadr, Masri Ayob, Sabrina Tiun, Fahad Taha Al-Dhief, Mohammad Kamrul Hasan

Abstract:

Breast cancer (BC) is considered one of the most frequent causes of cancer death in women between the ages of 40 and 55. BC is diagnosed using digital images of fine needle aspirates (FNA) of both benign and malignant breast mass tumors. This work proposes the Online Sequential Extreme Learning Machine (OSELM) algorithm for diagnosing BC using the tumor features of the breast mass. The current work used the Wisconsin Diagnosis Breast Cancer (WDBC) dataset, which contains 569 samples (357 benign and 212 malignant). Numerous assessment metrics were used to evaluate the proposed OSELM algorithm: specificity, precision, F-measure, accuracy, G-mean, MCC, and recall. The proposed OSELM achieved 97.66% accuracy, 98.39% recall, 95.31% precision, 97.25% specificity, 96.83% F-measure, 95.00% MCC, and 96.84% G-mean. The proposed OSELM algorithm demonstrates promising results in diagnosing BC, and its classification performance was superior to all comparative methods.
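
OS-ELM fixes random hidden-layer weights and updates only the output weights with a recursive least-squares rule as data chunks arrive. A compact NumPy sketch (not the authors' code) on the same 569-sample WDBC data shipped with scikit-learn:

    import numpy as np
    from sklearn.datasets import load_breast_cancer

    class OSELM:
        """Minimal online sequential extreme learning machine (sketch).
        Hidden weights are random and fixed; output weights are updated
        recursively as new chunks of data arrive."""
        def __init__(self, n_inputs, n_hidden, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.normal(size=(n_inputs, n_hidden))
            self.b = rng.normal(size=n_hidden)
        def _hidden(self, X):
            return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid
        def fit_initial(self, X, y):
            H = self._hidden(X)
            self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
            self.beta = self.P @ H.T @ y
        def fit_sequential(self, X, y):
            H = self._hidden(X)
            K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
            self.P -= self.P @ H.T @ K @ H @ self.P
            self.beta += self.P @ H.T @ (y - H @ self.beta)
        def predict(self, X):
            return (self._hidden(X) @ self.beta > 0.5).astype(int)

    X, y = load_breast_cancer(return_X_y=True)
    X = (X - X.mean(0)) / X.std(0)          # standardize features
    elm = OSELM(X.shape[1], n_hidden=100)
    elm.fit_initial(X[:200], y[:200])
    for i in range(200, 569, 100):          # remaining data arrives in chunks
        elm.fit_sequential(X[i:i + 100], y[i:i + 100])
    print("training accuracy:", (elm.predict(X) == y).mean())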

Keywords: breast cancer, machine learning, online sequential extreme learning machine, artificial intelligence

Procedia PDF Downloads 93
8998 Facial Emotion Recognition with Convolutional Neural Network Based Architecture

Authors: Koray U. Erbas

Abstract:

Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increases, it becomes possible to represent more complex relationships with automatically extracted features. Nowadays, Deep Neural Networks (DNNs) are widely used in computer vision problems such as classification, object detection, segmentation, and image editing. In this work, the facial emotion recognition task is performed by the proposed Convolutional Neural Network (CNN)-based DNN architecture using the FER2013 dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size, and network size) are investigated, and ablation study results for the pooling layer, dropout, and batch normalization are presented.

Keywords: convolutional neural network, deep learning, deep learning based FER, facial emotion recognition

Procedia PDF Downloads 235
8997 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2

Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk

Abstract:

Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluating the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands' characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models may be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots at three sites across Germany. These sites represent a North-South gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors used are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r2 of 0.5), predictions of biodiversity indices were poor (r2 of 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, reducing dimensionality. This study demonstrates the necessity of training biomass and biodiversity models on a broad range of environmental conditions and of ensuring spatial independence, so that models are realistic and transferable and plot-level information can be upscaled to the landscape scale.

Keywords: ecosystem services, grassland management, machine learning, remote sensing

Procedia PDF Downloads 191
8996 Human Digital Twin for Personal Conversation Automation Using Supervised Machine Learning Approaches

Authors: Aya Salama

Abstract:

The digital twin is an emerging research topic that has attracted researchers in the last decade. It is used in many fields, such as smart manufacturing and smart healthcare, because it saves time and money, and it is usually related to other technologies such as data mining, artificial intelligence, and machine learning. The human digital twin (HDT), in particular, is still a novel idea that needs to prove its feasibility. HDT extends the idea of the digital twin to human beings, who, as living beings, differ from inanimate physical entities. The goal of this research was to create a human digital twin responsible for automating real-time human replies by simulating human behavior. To this end, clustering, supervised classification, topic extraction, and sentiment analysis were studied in this paper. The feasibility of the HDT for generating personal replies in social messaging applications was demonstrated in this work. The overall accuracy of the proposed approach was 63%, a very promising result that can open the way for researchers to expand the idea of HDT. This was achieved by using Random Forest for clustering the question database and matching new questions; K-nearest neighbor was also applied for sentiment analysis.

Keywords: human digital twin, sentiment analysis, topic extraction, supervised machine learning, unsupervised machine learning, classification, clustering

Procedia PDF Downloads 66
8995 A Machine Learning-Based Model to Screen Antituberculosis Compound Targeted against LprG Lipoprotein of Mycobacterium tuberculosis

Authors: Syed Asif Hassan, Syed Atif Hassan

Abstract:

Multidrug-resistant tuberculosis (MDR-TB) is an infection caused by resistant strains of Mycobacterium tuberculosis that do not respond to either isoniazid or rifampicin, the most important anti-TB drugs. The increasing occurrence of drug-resistant strains of MTB calls for an intensive search for novel target-based therapeutics. In this context, LprG (Rv1411c), a lipoprotein from MTB, plays a pivotal role in the immune evasion of MTB, leading to the survival and propagation of the bacterium within the host cell. Therefore, a machine learning method will be developed to generate a computational model that can predict the potential anti-LprG activity of novel antituberculosis compounds. The present study will utilize a dataset from the PubChem database maintained by the National Center for Biotechnology Information (NCBI). The dataset comprises compounds screened against MTB, categorized as active or inactive based on the PubChem activity score. PowerMV, a molecular descriptor generation and visualization tool, will be used to generate 2D molecular descriptors for the active and inactive compounds in the dataset. These 2D molecular descriptors will be used as features and fed into three different classifiers, namely a random forest, a deep neural network, and a recurrent neural network, to build separate predictive models, with the best-performing model chosen based on its accuracy in predicting novel antituberculosis compounds with anti-LprG activity. Additionally, the predicted active compounds will be screened using a SMARTS filter to choose molecules with drug-like features.
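
The descriptor-plus-classifier pipeline can be sketched in Python. RDKit is used below purely to illustrate the descriptor-generation step that PowerMV performs in the study, and the SMILES strings and activity labels are invented.

    from rdkit import Chem
    from rdkit.Chem import Descriptors
    from sklearn.ensemble import RandomForestClassifier

    # Invented SMILES and labels; a real run would use the PubChem MTB screen.
    smiles = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "CCN(CC)CC"]
    active = [0, 0, 1, 1]   # 1 = active in the screen

    def featurize(smi):
        """A handful of 2D descriptors standing in for the PowerMV feature set."""
        mol = Chem.MolFromSmiles(smi)
        return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
                Descriptors.TPSA(mol), Descriptors.NumHDonors(mol),
                Descriptors.NumHAcceptors(mol)]

    X = [featurize(s) for s in smiles]
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, active)

    candidate = featurize("CC(=O)Oc1ccccc1C(=O)O")   # aspirin, as a stand-in
    print("predicted anti-LprG activity:", clf.predict([candidate])[0])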

Keywords: antituberculosis drug, classifier, machine learning, molecular descriptors, prediction

Procedia PDF Downloads 366
8994 Complex Learning Tasks and Their Impact on Cognitive Engagement for Undergraduate Engineering Students

Authors: Anastassis Kozanitis, Diane Leduc, Alain Stockless

Abstract:

This paper presents preliminary results from a two-year funded research program that analyzes the relationship between high cognitive engagement, the higher-order cognitive processes employed in complex learning tasks, and the use of active learning pedagogies in undergraduate engineering programs. A mixed-method approach was used to gauge student engagement and students' cognitive processes when accomplishing complex tasks. Quantitative data collected with a self-report cognitive engagement scale show that a deep approach to learning is positively correlated with high levels of complex learning tasks and with the level of student engagement in the context of classroom active learning pedagogies. Qualitative analyses of in-depth face-to-face interviews reveal insights into the mechanisms influencing students' cognitive processes when confronted with open-ended problem resolution. Findings also support evidence that students adjust their level of cognitive engagement according to the specific didactic environment.

Keywords: cognitive engagement, deep and shallow strategies, engineering programs, higher order cognitive processes

Procedia PDF Downloads 301
8993 Defect Identification in Partial Discharge Patterns of Gas Insulated Switchgear and Straight Cable Joint

Authors: Chien-Kuo Chang, Yu-Hsiang Lin, Yi-Yun Tang, Min-Chiu Wu

Abstract:

With the trend of technological advancement, the harm caused by power outages is substantial, and outages are mostly due to problems in the power grid, which highlights the necessity of further improving the reliability of the power system. Within the power system, gas-insulated switchgear (GIS) and power cables play a crucial role. Long-term operation under high voltage can cause the insulation materials in this equipment to crack, potentially leading to partial discharges. If these partial discharges (PD) can be analyzed, preventative maintenance and replacement of equipment can be carried out, thereby improving the reliability of the power grid. This research diagnoses defects by identifying three different defect types in GIS and three in straight cable joints, for a total of six types. The measured partial discharge data are converted through phase analysis diagrams and pulse sequence analysis, and discharge features are extracted using convolutional image processing. Three different deep learning models, CNN, ResNet18, and MobileNet, are used for training and evaluation, with each model achieving an accuracy rate of over 95%. Class Activation Mapping is utilized to interpret the black-box behavior of the deep learning models. Lastly, overall model performance is enhanced through an ensemble learning voting method.
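
The final step, soft voting across the three trained networks, can be sketched in PyTorch as follows. The models below are untrained placeholders with six output classes matching the six defect types; in the study each would first be trained on the PD pattern images.

    import torch
    import torchvision.models as models

    # Three independent classifiers over six defect classes; soft voting
    # averages their softmax outputs and takes the argmax.
    n_classes = 6
    resnet = models.resnet18(num_classes=n_classes)
    mobilenet = models.mobilenet_v2(num_classes=n_classes)
    cnn = torch.nn.Sequential(                       # small custom CNN
        torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
        torch.nn.Linear(16, n_classes),
    )

    def ensemble_predict(x):
        with torch.no_grad():
            probs = [torch.softmax(m(x), dim=1) for m in (resnet, mobilenet, cnn)]
        return torch.stack(probs).mean(0).argmax(1)   # soft-voted class per image

    x = torch.randn(4, 3, 224, 224)    # a batch of PD-pattern images (random here)
    print(ensemble_predict(x))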

Keywords: partial discharge, gas-insulated switches, straight cable joint, defect identification, deep learning, ensemble learning

Procedia PDF Downloads 52
8992 Quantum Statistical Machine Learning and Quantum Time Series

Authors: Omar Alzeley, Sergey Utev

Abstract:

Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization, and this optimization is central to learning theory. Time series analysis is one approach to complex systems in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool for detecting deterministic chaos, other approaches are emerging; the quantum probabilistic technique is used here to motivate the construction of our QTS model, which resembles the quantum dynamic model that has been applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyze the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo are used to support our investigations. The proposed model has been examined using real and simulated data, and we establish the relation between quantum statistical machines and quantum time series via random matrix theory. It is interesting to note that the primary application of QTS in the field of quantum chaos was to find a model that explains chaotic behaviour; this model may reveal further insights into quantum chaos.

Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series

Procedia PDF Downloads 442
8991 3D Plant Growth Measurement System Using Deep Learning Technology

Authors: Kazuaki Shiraishi, Narumitsu Asai, Tsukasa Kitahara, Sosuke Mieno, Takaharu Kameoka

Abstract:

The purpose of this research is to facilitate productivity advances in agriculture. To accomplish this, we developed an automatic three-dimensional (3D) recording system for the growth of field crops that consists of a number of inexpensive modules: a very low-cost stereo camera, a pair of ZigBee wireless modules, a Raspberry Pi single-board computer, and a third-generation (3G) wireless communication module. Our system uses an inexpensive web stereo camera in order to keep total costs low; however, inexpensive video cameras record low-resolution images that are very noisy. To resolve this problem, we adopted a deep learning method. Based on the results of an extended operation test conducted without an external power supply, we found that by using the Super-Resolution Convolutional Neural Network (SRCNN) method, our system could balance the competing goals of low cost and superior performance. Our experimental results showed the effectiveness of the system.
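
SRCNN is a three-layer convolutional network. The Keras sketch below follows the classic 9-1-5 kernel layout with 64 and 32 filters from the original SRCNN paper; the authors' exact configuration is not given in the abstract.

    import tensorflow as tf
    from tensorflow.keras import layers

    # Classic SRCNN layout, used here to sharpen noisy low-resolution frames
    # from a cheap stereo camera.
    srcnn = tf.keras.Sequential([
        layers.Conv2D(64, 9, padding="same", activation="relu",
                      input_shape=(None, None, 3)),            # patch extraction
        layers.Conv2D(32, 1, padding="same", activation="relu"),  # non-linear mapping
        layers.Conv2D(3, 5, padding="same"),                    # reconstruction
    ])
    srcnn.compile(optimizer="adam", loss="mse")

    # Training pairs: bicubically upscaled low-res crops vs. the original
    # high-res crops.
    # srcnn.fit(lowres_upscaled, highres, epochs=50, batch_size=16)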

Keywords: 3D plant data, automatic recording, stereo camera, deep learning, image processing

Procedia PDF Downloads 252
8990 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices

Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu

Abstract:

Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing complications of chronic kidney disease (CKD). This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic values for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory, laboratory, and metabolic indices data. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk for developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory-based data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
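
The two-stage design, univariate Cox screening followed by a tree-ensemble classifier, can be sketched as follows with the lifelines and scikit-learn libraries. The synthetic cohort, column names, and p-value cutoff are assumptions for illustration, not the study's actual data or thresholds.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic stand-in for the cohort; variables mirror the paper's
    # non-laboratory category (age, sex, BMI, waist circumference).
    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "age": rng.uniform(20, 80, n),
        "sex": rng.integers(0, 2, n),
        "bmi": rng.normal(25, 4, n),
        "waist": rng.normal(85, 10, n),
    })
    risk = 0.03 * df["age"] + 0.05 * (df["bmi"] - 25)
    df["followup_years"] = rng.exponential(10 / np.exp(risk - risk.mean()))
    df["ckd_event"] = (df["followup_years"] < 15).astype(int)

    # Step 1: univariate Cox screening; keep variables with p < 0.05.
    selected = []
    for var in ["age", "sex", "bmi", "waist"]:
        cph = CoxPHFitter()
        cph.fit(df[[var, "followup_years", "ckd_event"]],
                duration_col="followup_years", event_col="ckd_event")
        if cph.summary.loc[var, "p"] < 0.05:
            selected.append(var)
    print("variables passing screening:", selected)

    # Step 2: train a classifier on the selected non-laboratory variables.
    rf = RandomForestClassifier(n_estimators=300, random_state=0)
    rf.fit(df[selected], df["ckd_event"])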

Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction

Procedia PDF Downloads 78
8989 Development of a Decision-Making Method by Using Machine Learning Algorithms in the Early Stage of School Building Design

Authors: Pegah Eshraghi, Zahra Sadat Zomorodian, Mohammad Tahsildoost

Abstract:

Over the past decade, energy consumption in educational buildings has steadily increased. The purpose of this research is to provide a method for quickly predicting the energy consumption of buildings at the early design stage by evaluating zones separately and decomposing the building to eliminate geometric complexity. To produce this framework, machine learning algorithms, namely support vector regression (SVR) and artificial neural networks (ANN), are used to predict energy consumption and thermal comfort metrics for a school as a case study. The database consists of more than 55,000 samples across three climates of Iran. Cross-validation and unseen data were used for validation. For one specific label, cooling energy, prediction accuracy is at least 84% for SVR and 89% for ANN. Overall, the results show that the SVR performed much better than the ANN.
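
A minimal scikit-learn sketch of the SVR-versus-ANN comparison with cross-validated r2 follows; the synthetic features and all hyperparameters are placeholders, not the study's simulation database or tuned settings.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Stand-in for the simulation database: zone-level design features
    # (e.g., window ratio, orientation, insulation) vs. cooling energy.
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(1000, 6))
    y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=1000)

    svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    ann = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                                     random_state=0))

    # Cross-validated r2, mirroring the paper's cross-validation step.
    print("SVR r2:", cross_val_score(svr, X, y, cv=5, scoring="r2").mean())
    print("ANN r2:", cross_val_score(ann, X, y, cv=5, scoring="r2").mean())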

Keywords: early stage of design, energy, thermal comfort, validation, machine learning

Procedia PDF Downloads 63
8988 Deep Learning Prediction of Residential Radon Health Risk in Canada and Sweden to Prevent Lung Cancer Among Non-Smokers

Authors: Selim M. Khan, Aaron A. Goodarzi, Joshua M. Taron, Tryggve Rönnqvist

Abstract:

Indoor air quality, a prime determinant of health, is strongly influenced by the presence of hazardous radon gas within the built environment. Dangerously high indoor radon emerged as a health issue within the 20th century, becoming the second leading cause of lung cancer. Twenty-first-century building practices and human behaviors have captured, contained, and concentrated radon to yet higher and more hazardous levels, and the issue is rapidly worsening in Canada. It is established that Canadians in the Prairies are the second most radon-exposed population in the world, with 1 in 6 residences experiencing 0.2-6.5 millisieverts (mSv) of radiation per week, whereas the Canadian Nuclear Safety Commission sets the maximum 5-year occupational limit for atomic workplace exposure at only 20 mSv. The situation is also deteriorating over time in newer housing stocks, which contain higher levels of radon. Deep machine learning (LSTM) algorithms were applied to analyze multiple quantitative and qualitative features, determine the most important contributory factors, and predict radon levels in the known past (1990-2020) and the projected future (2021-2050). The findings showed a gradual downward pattern in Sweden, whereas levels in Canada would continue to rise over time. The contributory factors were found to be basement porosity, roof insulation depth, R-factor, and the air dynamics of the indoor environment related to occupants' window-opening behaviour. Building codes must consider these factors to ensure adequate indoor ventilation and healthy living that can prevent lung cancer in non-smokers.

Keywords: radon, building metrics, deep learning, LSTM prediction model, lung cancer, canada, sweden

Procedia PDF Downloads 94