Search results for: deep foundations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2355

1905 Slurry Erosion Behaviour of Cryotreated SS316L Impeller Steel Used for Irrigation Pumps

Authors: Jagtar Singh, Kulwinder Singh

Abstract:

Slurry erosion is a type of erosion wherein material is removed from the target surface due to the impingement of solid particles entrained in a liquid medium. The slurry erosion performance of deep cryogenic treatment on impeller steel SS 316L has been investigated. Slurry collected from an actual irrigation pump was used as the abrasive medium in an erosion test rig. An attempt has been made to study the effect of fluid velocity and impingement angle, at constant concentration (ppm), on the slurry erosion behavior of these cryotreated steels under different experimental conditions. Slurry erosion wear analysis of the cryotreated and untreated steels was carried out. The slurry erosion performance of cryotreated SS 316L impeller steel has been found to be superior to that of untreated steel. Metallurgical investigation, hardness, and the percentage of carbide in both types of steel were also examined.

Keywords: deep cryogenic treatment, impeller, irrigation pumps, SS316L, slurry erosion

Procedia PDF Downloads 375
1904 Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening

Authors: Ksheeraj Sai Vepuri, Nada Attar

Abstract:

We as humans use words with accompanying visual and facial cues to communicate effectively. Classifying facial emotion using computer vision methodologies has been an active research area. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset, which contains static images. Instead of using histogram equalization to preprocess the dataset, we used an Unsharp Mask filter to emphasize texture and details and to sharpen the edges. We also used ImageDataGenerator from the Keras library for data augmentation. We then used a Convolutional Neural Network (CNN) model to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that image preprocessing such as this sharpening technique can improve the performance of a CNN model, even when the CNN model is relatively simple.
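
A minimal sketch of the preprocessing and augmentation steps described above, under assumed parameter values (kernel size, sharpening amount, and augmentation ranges are illustrative, not the authors' exact settings):

```python
import numpy as np
import cv2
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def unsharp_mask(img: np.ndarray, ksize=(5, 5), sigma=1.0, amount=1.5) -> np.ndarray:
    """Sharpen by adding back the difference between the image and its blurred copy."""
    blurred = cv2.GaussianBlur(img, ksize, sigma)
    return cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0)

# Example on a dummy 48x48 grayscale frame (the FER-2013 image size)
sharpened = unsharp_mask(np.random.randint(0, 256, (48, 48), dtype=np.uint8))

# Data augmentation applied to the sharpened images
datagen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
                             height_shift_range=0.1, horizontal_flip=True,
                             rescale=1.0 / 255)
```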

Keywords: facial expression recognition, image preprocessing, deep learning, CNN

Procedia PDF Downloads 110
1903 Structure of Consciousness According to Deep Systemic Constellations

Authors: Dmitry Ustinov, Olga Lobareva

Abstract:

The method of Deep Systemic Constellations is based on a phenomenological approach. Using the phenomenon of substitutive perception, it was established that human consciousness has a hierarchical structure, where deeper levels govern more superficial ones (the reactive level, the energy or ancestral level, the spiritual level, the magical level, and deeper levels of consciousness). Every human possesses a depth of consciousness down to the spiritual level; however, deeper levels of consciousness are not found in every person. It was found that the spiritual level of consciousness is not homogeneous and has its own internal hierarchy of sublevels (the level of formation of spiritual values, the level of the 'inner observer', the level of the 'path', the level of 'God', etc.). The depth of the spiritual level of a person defines the paradigm of all his internal processes and the main motives of his movement through life. Disturbances can occur at any level of consciousness. Disturbances at a deeper level cause disturbances at more superficial levels and are manifested in the daily life of a person in feelings, behavioral patterns, psychosomatics, etc. Without removing the deepest source of a disturbance, it is impossible to completely correct its manifestation in the present moment. Thus, a destructive pattern of feeling and behavior in the present moment can exist because of a disturbance at, for example, the spiritual level of a person (although in most cases the source is at the energy level). Psychological work with superficial levels without removing the source of a disturbance cannot fully solve the problem. The method of Deep Systemic Constellations allows one to work effectively with the source of the problem located at any depth. The methodology has confirmed its effectiveness in work with more than a thousand people.

Keywords: constellations, spiritual psychology, structure of consciousness, transpersonal psychology

Procedia PDF Downloads 218
1902 Kinetic Study on Extracting Lignin from Black Liquor Using Deep Eutectic Solvents

Authors: Fatemeh Saadat Ghareh Bagh, Srimanta Ray, Jerald Lalman

Abstract:

Lignin, the largest inventory of organic carbon with a high caloric energy value, is a major component of woody and non-woody biomass. In pulping mills, a large amount of the lignin is burned for energy. At the same time, the phenolic structure of lignin enables it to be converted to value-added compounds. This study has focused on extracting lignin from black liquor using deep eutectic solvents (DESs). Three choline chloride (ChCl) DESs, paired with lactic acid (LA) (1:11), oxalic acid dihydrate (OX) (1:4), and malic acid (MA) (1:3), were synthesized at 90 °C and atmospheric pressure. The kinetics of lignin recovery from black liquor using DESs was investigated at three moderate temperatures (338, 353, and 368 K) at time intervals from 30 to 210 min. The extracted lignin (acid-soluble lignin plus Klason lignin) was characterized by Fourier transform infrared spectroscopy (FTIR). The FTIR studies included comparing the extracted lignin with a model Kraft lignin. The extracted lignin was characterized spectrophotometrically to determine the acid-soluble lignin (ASL) fraction [TAPPI UM 250], and the Klason lignin was determined gravimetrically using TAPPI T 222 om-02. The lignin extraction reaction using DESs was modeled with first-order reaction kinetics, and the activation energy of the process was determined. Lignin recovery with the ChCl:LA DES was 79.7±2.1% at 368 K and a DES:BL ratio of 4:1 (v/v). The quantity of lignin extracted with the control solvent, [emim][OAc], was 77.5±2.2%. The activation energy measured for the LA-DES system was 22.7 kJ·mol⁻¹, while the activation energies for the OX-DES and MA-DES systems were 7.16 kJ·mol⁻¹ and 8.66 kJ·mol⁻¹, with total lignin recoveries of 75.4±0.9% and 62.4±1.4%, respectively.
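
In generic form, a first-order extraction model and the Arrhenius relation used to obtain such activation energies can be written as follows (a standard statement of the kinetics; the exact rate expression fitted by the authors is not given in the abstract):

```latex
\ln\!\left(\frac{C_{\infty}}{C_{\infty}-C_t}\right) = k\,t,
\qquad
k = A\,\exp\!\left(-\frac{E_a}{RT}\right)
\;\;\Longrightarrow\;\;
\ln k = \ln A - \frac{E_a}{R}\cdot\frac{1}{T}
```

where C_t is the lignin extracted at time t, C_∞ the total extractable lignin, k the rate constant at temperature T, and E_a the activation energy obtained from the slope of ln k versus 1/T.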

Keywords: black liquor, deep eutectic solvents, kinetics, lignin

Procedia PDF Downloads 120
1901 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
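
A minimal sketch of the iterative select-then-label loop described above, under stated assumptions: the U-Net-style novelty scorer is represented only as a callable returning a score per image, and the human annotation step is a stub supplied by the caller.

```python
from typing import Callable, Dict, List

def active_labeling_rounds(unlabeled: List[str],
                           novelty_score: Callable[[str], float],
                           label_fn: Callable[[str], dict],
                           batch_size: int = 50,
                           rounds: int = 5) -> Dict[str, dict]:
    """Iteratively pick the most novel images for manual labeling."""
    labeled: Dict[str, dict] = {}
    remaining = list(unlabeled)
    for _ in range(rounds):
        if not remaining:
            break
        # Rank remaining images by how much unseen data the scorer reports
        remaining.sort(key=novelty_score, reverse=True)
        batch, remaining = remaining[:batch_size], remaining[batch_size:]
        for image_id in batch:
            labeled[image_id] = label_fn(image_id)  # human annotation step
        # In the full system, the novelty scorer would be refit on `labeled` here
    return labeled
```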

Keywords: computer vision, deep learning, object detection, semiconductor

Procedia PDF Downloads 112
1900 A General Framework to Successfully Operate the Digital Transformation Process in the Post-COVID Era

Authors: Driss Kettani

Abstract:

In this paper, we shed light on “Digital Divide 2.0,” which we see as COVID-19’s version of the Digital Divide. We believe that fighting Digital Divide 2.0 requires a country to be seriously advanced in the global digital transformation, which is, naturally, a complex, delicate, costly, and long-term process. We build an argument supporting our assumption and, from there, we present the foundations of a computational framework to guide and streamline digital transformation at all levels.

Keywords: digital divide 2.0, digital transformation, ICTs for development, computational outcomes assessment

Procedia PDF Downloads 142
1899 Using Deep Learning for the Detection of Faulty RJ45 Connectors on a Radio Base Station

Authors: Djamel Fawzi Hadj Sadok, Marrone Silvério Melo Dantas, Pedro Henrique Dreyer, Gabriel Fonseca Reis de Souza, Daniel Bezerra, Ricardo Souza, Silvia Lins, Judith Kelner

Abstract:

A radio base station (RBS), part of the radio access network, is a particular type of equipment that supports the connection between a wide range of cellular user devices and an operator network access infrastructure. Nowadays, most RBS maintenance is carried out manually, resulting in a time-consuming and costly task. A suitable candidate for RBS maintenance automation is repairing faulty links between devices caused by missing or unplugged connectors. This paper proposes and compares two deep learning solutions to identify attached RJ45 connectors on network ports. We refer to the solution based on object detection as connector detection, and to the one based on object classification as connector classification. With connector detection, we obtained an accuracy of 0.934 and a mean average precision of 0.903. Connector classification reached a maximum accuracy of 0.981 and an AUC of 0.989. Although connector detection was outperformed in this study, this should not be viewed as an overall result, as connector detection is more flexible in scenarios where there is no precise information about the environment and the possible devices, while connector classification requires that information to be well defined.

Keywords: radio base station, maintenance, classification, detection, deep learning, automation

Procedia PDF Downloads 173
1898 A Heart Arrhythmia Prediction Using Machine Learning’s Classification Approach and the Concept of Data Mining

Authors: Roshani S. Golhar, Neerajkumar S. Sathawane, Snehal Dongre

Abstract:

Background and objectives: Cardiovascular illnesses are increasing and have become a leading cause of mortality worldwide, killing a large number of people each year. Arrhythmia is a type of cardiac illness characterized by a change in the regularity of the heartbeat. The goal of this study is to develop novel deep learning algorithms for successfully interpreting arrhythmia from a single-second segment. Because the ECG signal reflects unique electrical heart activity across time, considerable changes between time intervals are detected. Such variances, as well as the limited amount of learning data available for each arrhythmia, make standard learning methods difficult and so impede their generalization. Conclusions: The proposed method was able to outperform several state-of-the-art methods. The proposed technique is also an effective and convenient deep learning approach to heartbeat interpretation that could be used in real-time healthcare monitoring systems.

Keywords: electrocardiogram, ECG classification, neural networks, convolutional neural networks, portable document format

Procedia PDF Downloads 48
1897 A Survey of Response Generation of Dialogue Systems

Authors: Yifan Fan, Xudong Luo, Pingping Lin

Abstract:

An essential task in the field of artificial intelligence is to allow computers to interact with people through natural language. Therefore, research on virtual assistants and dialogue systems has received widespread attention from industry and academia. Response generation plays a crucial role in dialogue systems, so to push forward research on this topic, this paper surveys various methods for response generation. We sort these methods into three categories. The first includes finite state machine methods, framework methods, and instance methods. The second contains full-text indexing methods, ontology methods, vast knowledge base methods, and some other methods. The third covers retrieval methods and generative methods. We also discuss some hybrid methods based on knowledge and deep learning. We compare their advantages and disadvantages and point out in which ways these studies can be improved further. Our discussion covers studies published in leading conferences such as IJCAI and AAAI in recent years.

Keywords: deep learning, generative, knowledge, response generation, retrieval

Procedia PDF Downloads 108
1896 Effect of Punch Diameter on Optimal Loading Profiles in Hydromechanical Deep Drawing Process

Authors: Mehmet Halkaci, Ekrem Öztürk, Mevlüt Türköz, H. Selçuk Halkacı

Abstract:

Hydromechanical deep drawing (HMD) is an advanced manufacturing process used to form deep parts in only one forming step. In this process, the sheet metal blank can be drawn deeper by means of fluid pressure acting on the sheet surface in the direction opposite to the punch movement. A high limiting drawing ratio, good surface quality, less springback, and high dimensional accuracy are some of the advantages of this process. The performance of the HMD process is affected by various process parameters such as fluid pressure, blank holder force, punch and die radii, pre-bulging pressure and height, punch diameter, and friction between the sheet and die and between the sheet and punch. The fluid pressure and blank holder force are the main loading parameters and significantly affect the formability of the HMD process. The punch diameter also influences the limiting drawing ratio (the ratio of initial sheet diameter to punch diameter) of the sheet metal blank. In this research, optimal loading (fluid pressure and blank holder force) profiles were determined for AA 5754-O sheet material through a fuzzy control algorithm developed in a previous study using LS-DYNA finite element analysis (FEA) software. In the preceding study, the fuzzy control algorithm was developed using geometrical criteria such as thinning and wrinkling. In order to obtain the final desired part with the developed algorithm for any requested punch diameter, the effect of punch diameter, one of the process parameters, on the loading profiles was investigated separately using a blank thickness of 1 mm. Thus, the practicality of the previously developed fuzzy control algorithm with different punch diameters was clarified. In addition, the thickness distributions of the sheet metal blank along a curvilinear distance were compared for the FEA runs in which different punch diameters were used. Consequently, it was found that the use of different punch diameters did not significantly affect the optimal loading profiles.

Keywords: Finite Element Analysis (FEA), fuzzy control, hydromechanical deep drawing, optimal loading profiles, punch diameter

Procedia PDF Downloads 402
1895 Factors Affecting Weld Line Movement in Tailor Welded Blank

Authors: Sanjay Patil, Shakil A. Kagzi, Harit K. Raval

Abstract:

Tailor Welded Blanks (TWBs) are widely utilized in automotive industries because of their advantages of weight and cost reduction while maintaining the required strength and structural integrity. A TWB consists of two or more sheets of similar or dissimilar material and thickness, welded together into a single sheet before it is formed to the desired shape. Forming of the tailor welded blank is affected by the thickness ratio of the blanks, the ratio of their strengths, etc., mainly due to the inhomogeneity of the material. In the present work, the relative effect of these parameters on weld line movement is studied during deep drawing of TWBs using FE simulation in HYPERWORKS. The simulation is validated against results from the literature. Simulations were then performed based on a Taguchi orthogonal array, followed by ANOVA analysis to determine the significance of these parameters on the forming of TWBs.

Keywords: ANOVA, deep drawing, Tailor Welded Blank (TWB), weld line movement

Procedia PDF Downloads 290
1894 Mainland China and Taiwan’s Strategies for Overcoming the Middle/High Income Trap: Domestic Consensus-Building and the Foundations of Cross-Strait Interactions

Authors: Mingke Ma

Abstract:

The recent discovery of the High-Income Trap phenomenon and the established Middle-Income Trap literature have identified the similarity of the structural challenges that both Mainland China and Taiwan have been facing since their simultaneous growth slowdown from the 2000s. Mainland China and Taiwan’s ineffectiveness in productivity growth weakened their overall competitiveness in Global Value Chains. With the subsequent decline of industrial profitability, social compression from late development persists and jeopardises social cohesion. From Ma Ying-jeou’s ‘633’ promise and Tsai Ing-wen’s ‘5+2’ industrial framework to Mainland China’s 11th to 14th Five-Year Plans, leaderships across the Strait have been striving to constitute new models for inclusive and sustainable development through policy responses. This study argues that the social consensuses constructed by domestic political processes define the feasibility of the reform strategies, which further construct the conditions for Cross-Strait interactions. Based on the existing literature on New Institutional Economics, the Middle/High Income Trap, and Compressed Development, this study adopts a Historical Institutionalist analytical framework to identify how historical path-dependency contributes to the contemporary growth constraints in both economies and to the political difficulty of navigating institutional and organisational change. It continues by tracing the political process of economic reform to examine the sustainability and resilience of the manifested social consensus that empowered the proposed policy frameworks. Afterwards, it examines how the political outcomes of such a simultaneous process shared by both Mainland China and Taiwan construct the social, economic, institutional, and political foundations of contemporary Cross-Strait engagement.

Keywords: historical institutionalism, political economy, cross-strait relations, high/middle income trap

Procedia PDF Downloads 172
1893 Modern Machine Learning Conniptions for Automatic Speech Recognition

Authors: S. Jagadeesh Kumar

Abstract:

This paper presents a clear overview of recent machine learning practices as employed in modern automatic speech recognition schemes and as pertinent to prospective ones. The aspiration is to promote additional cross-fertilization between the machine learning and automatic speech recognition communities beyond what has transpired in the past. The manuscript is structured according to the chief machine learning paradigms that are either already in common use or have the potential to make significant contributions to automatic speech recognition technology. The paradigms presented and discussed in this article include adaptive and multi-task learning, active learning, Bayesian learning, discriminative learning, generative learning, and supervised and unsupervised learning. These learning paradigms are motivated and discussed in the context of automatic speech recognition tools and functions. The manuscript also surveys recent advances in deep learning and learning with sparse representations; further emphasis is placed on their continuing significance in the evolution of automatic speech recognition.

Keywords: automatic speech recognition, deep learning methods, machine learning archetypes, Bayesian learning, supervised and unsupervised learning

Procedia PDF Downloads 418
1892 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to the multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veteran Affairs Information and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, based on the impact of the presence/value of clinical conditions on the predicted outcome. Like the (log) odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman’s rho = 0.74), which helped validate our explanation.
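
A brief sketch of the validation step mentioned above, correlating aggregated impact scores against logistic regression log odds ratios; the two arrays are placeholders for the per-condition values, which are not given in the abstract:

```python
import numpy as np
from scipy.stats import spearmanr

aggregated_impact_scores = np.array([0.8, -0.2, 1.5, 0.1, 0.9])  # placeholder values
log_odds_ratios = np.array([0.6, -0.1, 1.2, 0.3, 0.7])           # placeholder values

rho, p_value = spearmanr(aggregated_impact_scores, log_odds_ratios)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3g}")
```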

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 135
1891 Data Augmentation for Early-Stage Lung Nodules Using Deep Image Prior and Pix2pix

Authors: Qasim Munye, Juned Islam, Haseeb Qureshi, Syed Jung

Abstract:

Lung nodules are commonly identified in computed tomography (CT) scans by experienced radiologists at a relatively late stage. Early diagnosis can greatly increase survival. We propose using a pix2pix conditional generative adversarial network to generate realistic images simulating early-stage lung nodule growth. We applied deep image prior to 2341 slices from 895 computed tomography (CT) scans from the Lung Image Database Consortium (LIDC) dataset to generate pseudo-healthy medical images. From these images, 819 were chosen to train a pix2pix network. We observed that for most of the images, the pix2pix network was able to generate images in which the nodule increased in size and intensity across epochs. To evaluate the images, 400 generated images were chosen at random and shown to a medical student beside their corresponding original images. Of these 400 generated images, 384 were judged satisfactory, meaning they resembled a nodule and were visually similar to the corresponding image. We believe that this generated dataset could be used as training data for neural networks to detect lung nodules at an early stage or to improve the accuracy of such networks. This is particularly significant as datasets containing the growth of early-stage nodules are scarce. This project shows that the combination of deep image prior and generative models could potentially open the door to creating larger datasets than currently possible and has the potential to increase the accuracy of medical classification tasks.

Keywords: medical technology, artificial intelligence, radiology, lung cancer

Procedia PDF Downloads 44
1890 Graphical User Interface Testing by Using Deep Learning

Authors: Akshat Mathur, Sunil Kumar Khatri

Abstract:

This paper presents a brief account of how the use of artificial intelligence in GUI testing can reduce workload through a DL-fueled method. It also discusses how graphical user interface and event-driven software testing can derive benefits from the use of AI techniques. The use of AI techniques not only reduces the task and workload but also helps in getting better output than manual testing. Although the results are the same, the use of artificial intelligence techniques for GUI testing has proven to provide ideal results. The DL-fueled framework helped us find imperfections across the entire webpage and provides the test result as a score between 0 and 1, which signifies whether the test meets its quality criteria or not. This paper proposes a DL-fueled method that helps find genuine GUI bugs and defects and also helps scale the existing labour-intensive and skill-intensive methodologies.

Keywords: graphical user interface, GUI, artificial intelligence, deep learning, ML technology

Procedia PDF Downloads 142
1889 Collaboration-Based Islamic Financial Services: Case Study of Islamic Fintech in Indonesia

Authors: Erika Takidah, Salina Kassim

Abstract:

Digital transformation has accelerated in the new millennium. It is reshaping the financial services industry from a traditional system to financial technology. Moreover, the financial inclusion rate in Indonesia is less than 60%, and an innovative model is needed to address this national problem. On the other hand, the Islamic financial services industry and financial technology are growing fast as a new aspiration in economic development. Islamic banks, takaful, Islamic microfinance, Islamic financial technology, and Islamic social finance institutions could collaborate to raise the financial inclusion rate in Indonesia. The primary motive of this paper is to examine the strategy of collaboration-based Islamic financial services to enhance financial inclusion in Indonesia, particularly in the digital era. The fundamental findings concern the foundations and key ecosystem aspects involved in the development of collaboration-based Islamic financial services. Using the Interpretive Structural Model (ISM) approach, the core problems faced in the development of the model are the lack of policy instruments governing collaboration-based Islamic financial services with fintech work processes and the limited availability of human resources for fintech. The core strategy, or foundation, needed in the framework of collaboration-based Islamic financial services is the ability to manage and analyze data in the big data era. As for the ecosystem, or the actors involved in the development of this model, the important actors are the government or regulator, educational institutions, and the existing industries (Islamic financial services). The outcome of the study indicates that a collaboration strategy of Islamic financial services institutions should be supported by robust technology, a legal and regulatory commitment from the regulators and policymakers of the Islamic financial institutions, and extensive public awareness of financial inclusion in Indonesia. The study limits itself to realizing financial inclusion, particularly through Islamic finance development in Indonesia. The study has implications for the concerned professional bodies, regulators, policymakers, stakeholders, and practitioners of Islamic financial service institutions.

Keywords: collaboration, financial inclusion, Islamic financial services, Islamic fintech

Procedia PDF Downloads 112
1888 Deep Reinforcement Learning with Leonard-Ornstein Processes Based Recommender System

Authors: Khalil Bachiri, Ali Yahyaouy, Nicoleta Rogovschi

Abstract:

Improved user experience is a goal of contemporary recommender systems. Recommender systems are starting to incorporate reinforcement learning since it easily satisfies this goal of increasing a user’s reward every session. In this paper, we examine the most effective reinforcement learning agent tactics on the MovieLens (1M) dataset, balancing precision and variety of recommendations. The absence of variability in final predictions makes simplistic techniques, although able to optimize ranking quality criteria, worthless for consumers of the recommendation system. Utilizing the stochasticity of Ornstein-Uhlenbeck processes, our suggested strategy encourages the agent to explore its surroundings. Our research demonstrates that raising the NDCG (Normalized Discounted Cumulative Gain) and HR (Hit Rate) criteria without lowering the Ornstein-Uhlenbeck process drift coefficient enhances the diversity of suggestions.
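
A minimal sketch of Ornstein-Uhlenbeck exploration noise as commonly added to DDPG-style actions; the parameter values here are illustrative defaults, not the paper's settings:

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1)."""

    def __init__(self, size: int, mu: float = 0.0, theta: float = 0.15,
                 sigma: float = 0.2, dt: float = 1e-2):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.state = np.full(size, mu, dtype=np.float64)

    def sample(self) -> np.ndarray:
        drift = self.theta * (self.mu - self.state) * self.dt
        diffusion = self.sigma * np.sqrt(self.dt) * np.random.randn(*self.state.shape)
        self.state = self.state + drift + diffusion
        return self.state

# noisy_action = policy_action + OUNoise(action_dim).sample()  # exploration during training
```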

Keywords: recommender systems, reinforcement learning, deep learning, DDPG, Leonard-Ornstein process

Procedia PDF Downloads 112
1887 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving

Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian

Abstract:

In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has certain drawbacks in that the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that we can model different and complex road scenarios in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy, and this learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides a strong foundation to test our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results. When using the PPO algorithm, the reward is greater, and the acceleration, steering angle, and braking are more stable compared to the other algorithms, which means that the agent learns to drive in a better and more efficient way in this case. Additionally, we have compiled a dataset taken from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form: (all input values, acceleration, steering angle, brake, loss, reward). This study can serve as a base for further complex road scenarios. Furthermore, it can be extended into the field of computer vision, using the images to find the best policy.

Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning

Procedia PDF Downloads 120
1886 IoT and Deep Learning approach for Growth Stage Segregation and Harvest Time Prediction of Aquaponic and Vermiponic Swiss Chards

Authors: Praveen Chandramenon, Andrew Gascoyne, Fideline Tchuenbou-Magaia

Abstract:

Aquaponics offers a simple, conclusive solution to the food and environmental crises of the world. This approach combines the idea of Aquaculture (growing fish) with Hydroponics (growing vegetables and plants in a soilless method). Smart Aquaponics explores the use of smart technology, including artificial intelligence and IoT, to assist farmers with better decision-making and online monitoring and control of the system. Identifying the different growth stages of Swiss chard plants and predicting their harvest time are important in Aquaponic yield management. This paper presents a comparative analysis of a standard Aquaponic system and a Vermiponic system (Aquaponics with worms), both grown in a controlled environment, by implementing IoT and deep learning-based growth stage segregation and harvest time prediction of Swiss chards before and after applying an optimal freshwater replenishment. Data collection, growth stage classification, and harvest time prediction were performed with and without water replenishment. The paper discusses the experimental design, the IoT and sensor communication architecture, the data collection process, image segmentation, the various regression and classification models, and the error estimation used in the project. The paper concludes with a comparison of results, including the best-performing models for growth stage segregation and harvest time prediction on the Aquaponic and Vermiponic testbeds with and without freshwater replenishment.

Keywords: aquaponics, deep learning, internet of things, vermiponics

Procedia PDF Downloads 45
1885 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicability in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four Chinese EMR datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and the LSV-SDG-CNN model improves the disease classification accuracy by a clear margin.
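
A small sketch of the feature enrichment step described above: for each medical term, its Lexical Semantic Vector is concatenated with its word2vec embedding before entering the CNN. The lookup tables and dimensions below are stand-ins, not the paper's actual values.

```python
import numpy as np

word2vec = {"fever": np.random.rand(100)}   # assumed 100-dimensional word2vec embeddings
lsv_table = {"fever": np.random.rand(20)}   # assumed 20-dimensional lexical semantic vectors

def enriched_vector(term: str) -> np.ndarray:
    """Concatenate the word2vec embedding with the LSV for a term."""
    return np.concatenate([word2vec[term], lsv_table[term]])

print(enriched_vector("fever").shape)  # (120,)
```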

Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision

Procedia PDF Downloads 111
1884 Network Conditioning and Transfer Learning for Peripheral Nerve Segmentation in Ultrasound Images

Authors: Harold Mauricio Díaz-Vargas, Cristian Alfonso Jimenez-Castaño, David Augusto Cárdenas-Peña, Guillermo Alberto Ortiz-Gómez, Alvaro Angel Orozco-Gutierrez

Abstract:

Precise identification of the nerves is a crucial task performed by anesthesiologists for effective Peripheral Nerve Blocking (PNB). Anesthesiologists currently use ultrasound imaging equipment to guide the PNB and detect nervous structures. However, visual identification of the nerves from ultrasound images is difficult, even for trained specialists, due to artifacts and low contrast. Recent advances in deep learning make neural networks a potential tool for accurate nerve segmentation systems, addressing the above issues from raw data. The widely used U-Net network yields pixel-by-pixel segmentation by encoding the input image and decoding the attained feature vector into a semantic image. This work proposes a conditioning approach and encoder pre-training to enhance the nerve segmentation of traditional U-Nets. Conditioning is achieved by a one-hot encoding of the kind of target nerve at the network input, while the pre-training considers five well-known deep networks for image classification. The proposed approach is tested on a collection of 619 ultrasound images, where the best C-UNet architecture yields an 81% Dice coefficient, outperforming the 74% of the best traditional U-Net. The results prove that pre-trained models with the conditional approach outperform their equivalent baselines by supporting the learning of new features and enriching the discriminant capability of the tested networks.
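
A minimal sketch of the conditioning idea described above: the target-nerve label is one-hot encoded and broadcast as extra input channels alongside the ultrasound image. The shapes and channel-concatenation layout are assumptions for illustration.

```python
import numpy as np

def condition_input(image: np.ndarray, nerve_id: int, num_nerve_types: int) -> np.ndarray:
    """image: (H, W, 1) ultrasound frame; nerve_id: integer class of the target nerve."""
    h, w, _ = image.shape
    onehot = np.zeros(num_nerve_types, dtype=image.dtype)
    onehot[nerve_id] = 1.0
    cond = np.broadcast_to(onehot, (h, w, num_nerve_types))  # tile the label spatially
    return np.concatenate([image, cond], axis=-1)            # (H, W, 1 + num_nerve_types)

conditioned = condition_input(np.random.rand(256, 256, 1), nerve_id=2, num_nerve_types=4)
print(conditioned.shape)  # (256, 256, 5)
```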

Keywords: nerve segmentation, U-Net, deep learning, ultrasound imaging, peripheral nerve blocking

Procedia PDF Downloads 80
1883 A U-Net Based Architecture for Fast and Accurate Diagram Extraction

Authors: Revoti Prasad Bora, Saurabh Yadav, Nikita Katyal

Abstract:

In the context of educational data mining, the use case of extracting information from images containing both text and diagrams is of high importance. Document analysis therefore requires extracting the diagrams from such images and processing the text and diagrams separately. To the authors' best knowledge, none of the many approaches for extracting tables, figures, etc., satisfies the need for real-time processing with high accuracy, as required in multiple applications. In the education domain, diagrams can have varied characteristics, viz. line-based diagrams such as geometric diagrams, chemical bonds, mathematical formulas, etc. There are two broad categories of approaches that try to solve similar problems, viz. traditional computer vision based approaches and deep learning approaches. The traditional computer vision based approaches mainly leverage connected components and distance transform based processing and hence perform well only in very limited scenarios. The existing deep learning approaches leverage either YOLO or Faster-RCNN architectures, and they suffer from a performance-accuracy tradeoff. This paper proposes a U-Net based architecture that formulates diagram extraction as a segmentation problem. The proposed method provides similar accuracy with a much faster extraction time compared to the mentioned state-of-the-art approaches. Further, the segmentation mask in this approach allows the extraction of diagrams of irregular shapes.

Keywords: computer vision, deep-learning, educational data mining, faster-RCNN, figure extraction, image segmentation, real-time document analysis, text extraction, U-Net, YOLO

Procedia PDF Downloads 107
1882 Artificial Intelligence in Bioscience: The Next Frontier

Authors: Parthiban Srinivasan

Abstract:

With recent advances in computational power and access to enough data in biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We utilize an automated process to generate an array of numerical descriptors (features) for each molecule. Redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms, and we found the Random Forest algorithm to be a better choice for this analysis. We captured the non-linear relationships in the data and formed a prediction model with reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties. We expect the DNN algorithm to give better results and improve the accuracy of the prediction. This presentation will review these prominent machine learning and deep learning methods, our implementation protocols, and discuss the usefulness of these techniques in biomedical and health informatics.
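
An illustrative sketch, not the authors' pipeline: a few RDKit descriptors are computed from SMILES strings and a Random Forest is fitted on an assumed activity value per compound. The molecules and activity values below are placeholders.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestRegressor

train_smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]  # placeholder molecules
train_activity = [0.3, 0.7, 0.5]                             # placeholder activity values

def featurize(smiles: str) -> list:
    """Turn a SMILES string into a short numerical descriptor vector."""
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumRotatableBonds(mol)]

X = [featurize(s) for s in train_smiles]
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, train_activity)
```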

Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction

Procedia PDF Downloads 333
1881 Dynamic Analysis of Turbine Foundation

Authors: Mogens Saberi

Abstract:

This paper presents different design approaches for turbine foundations. In the design process, several unknown factors must be considered, such as the soil stiffness at the site. The main static and dynamic loads are presented, and the results of a dynamic simulation are shown for a turbine foundation that is currently being built. A turbine foundation is an important part of a power plant, since non-optimal behavior of the foundation can damage the turbine itself and thereby stop power production, with large consequences.

Keywords: dynamic turbine design, harmonic response analysis, practical turbine design experience, concrete foundation

Procedia PDF Downloads 285
1880 In Vitro Anthelmintic Effects of Citrullus colocynthis Fruit Extract on Fasciola gigantica of Domestic Buffalo (Bubalus bubalis) in Udaipur, India

Authors: Rajnarayan Damor, Gayatri Swarnakar

Abstract:

Fasciola gigantica are present in the biliary ducts of the liver and the gall bladder of domestic buffaloes. They are very harmful and cause significant losses to livestock owners on account of poor growth and lower productivity of domestic buffaloes. Synthetic veterinary drugs have been used to eliminate parasites from cattle, but these drugs are unaffordable and inaccessible for poor cattle farmers. The in vitro anthelmintic effects of Citrullus colocynthis fruit extract against Fasciola gigantica parasites were observed by light and scanning electron microscopy. Fruit extracts of C. colocynthis exhibited the highest mortality, 100% at 50 mg/ml, in the 15th hour of exposure. The oral and ventral suckers appeared slightly more swollen than in the control and the synthetic drug albendazole. The tegument showed spines submerged by the swollen tegument around them. The tegument of the middle region showed deep furrows, folding, and submerged spines, which either lay very flat against the surface or had become submerged in the tegument by the swelling around them, leaving deep furrows. The posterior region showed deep folding in the tegument, complete disappearance of spines, and swelling of the tegument leading to completely submerged spines, leaving only spine sockets. The present study revealed that fruit extracts of Citrullus colocynthis are potential sources of novel anthelmintics and justify their ethno-veterinary use.

Keywords: anthelmintic, buffalo, Citrullus colocynthis, Fasciola gigantica, mortality, tegument

Procedia PDF Downloads 212
1879 Predicting Shortage of Hospital Beds during COVID-19 Pandemic in United States

Authors: Saba Ebrahimi, Saeed Ahmadian, Hedie Ashrafi

Abstract:

The worldwide spread of coronavirus has raised concern about planning for the excess demand for hospital services in response to the COVID-19 pandemic. A surge in hospital services demand beyond the current capacity leads to a shortage of ICU beds and ventilators in some parts of the US. In this study, we forecast the required number of hospital beds and the possible shortage of beds in the US during the COVID-19 pandemic, to be used in the planning and hospitalization of new cases. We used data on COVID-19 deaths and patient hospitalizations, along with data on hospital capacities and utilization in the US, from publicly available sources and national government websites. We used a novel ensemble model of deep learning networks, based on stacking different linear and non-linear layers, to predict the shortage in hospital beds. The results showed that our proposed approach can predict the excess hospital bed demand very well, and this can be helpful in developing strategies and plans to mitigate this gap.
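
An illustrative stacking sketch under assumed inputs. The paper stacks linear and non-linear deep learning layers; here a generic stacked ensemble stands in for that idea, with a feature matrix X (e.g., case counts, capacity, utilization) and target y (beds needed), both assumed.

```python
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor

stack = StackingRegressor(
    estimators=[("linear", Ridge(alpha=1.0)),
                ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000))],
    final_estimator=RandomForestRegressor(n_estimators=200, random_state=0))

# stack.fit(X_train, y_train)
# predicted_beds = stack.predict(X_future)
```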

Keywords: COVID-19, deep learning, ensembled models, hospital capacity planning

Procedia PDF Downloads 133
1878 Feasibility of Washing/Extraction Treatment for the Remediation of Deep-Sea Mining Tailings

Authors: Kyoungrean Kim

Abstract:

The importance of deep-sea mineral resources is dramatically increasing due to the depletion of land mineral resources corresponding to increasing human economic activity. Korea has acquired exclusive exploration licenses in four areas: the Clarion-Clipperton Fracture Zone in the Pacific Ocean (2002), Tonga (2008), Fiji (2011), and the Indian Ocean (2014). Preparation for commercial mining by Nautilus Minerals (Canada) and Lockheed Martin (USA) is expected by 2020. The London Protocol 1996 (LP) under the International Maritime Organization (IMO) and the International Seabed Authority (ISA) will set environmental guidelines for deep-sea mining by 2020 to protect the marine environment. In this research, the applicability of washing/extraction treatment for the remediation of deep-sea mining tailings was evaluated in order to provide preliminary data for developing practical remediation technology in the near future. Polymetallic nodule samples were collected at the Clarion-Clipperton Fracture Zone in the Pacific Ocean and then stored at room temperature. Samples were pulverized using a jaw crusher and a ball mill and then classified into three particle sizes (> 63 µm, 63-20 µm, < 20 µm) using vibratory sieve shakers (Analysette 3 Pro, Fritsch, Germany) with 63 µm and 20 µm sieves. Only the 63-20 µm fraction was used for the investigation, considering the lower limit of the ore dressing process, which is tens of µm to 100 µm. Rhamnolipid and sodium alginate, as biosurfactants, and aluminum sulfate, which is mainly used as a flocculant, were used as environmentally friendly additives. Samples were adjusted to 2% liquid with deionized water and then mixed with various concentrations of additives. The mixture was stirred with a magnetic bar for specific reaction times, and then the liquid phase was separated by a centrifugal separator (Thermo Fisher Scientific, USA) at 4,000 rpm for 1 h. The separated liquid was filtered with a syringe and an acrylic-based filter (0.45 µm). The extracted heavy metals in the filtered liquid were then determined using a UV-Vis spectrometer (DR-5000, Hach, USA) and a heat block (DBR 200, Hach, USA) following US EPA methods (8506, 8009, 10217, and 10220). The polymetallic nodules were mainly composed of manganese (27%), iron (8%), nickel (1.4%), copper (1.3%), cobalt (1.3%), and molybdenum (0.04%). Based on the remediation standards of various countries, nickel (Ni), copper (Cu), cadmium (Cd), and zinc (Zn) were selected as the primary target materials. Throughout this research, the use of rhamnolipid was shown to be an effective approach for removing heavy metals from samples originating from manganese nodules. Sodium alginate might also be an effective additive for the remediation of deep-sea mining tailings such as polymetallic nodules. Compared to rhamnolipid and sodium alginate, aluminum sulfate was a more effective additive at short reaction times, within 4 h. Based on these results, sequential particle separation, selective extraction/washing, advanced filtration of the liquid phase, water treatment without dewatering, and solidification/stabilization may be considered candidate technologies for the remediation of deep-sea mining tailings.

Keywords: deep-sea mining tailings, heavy metals, remediation, extraction, additives

Procedia PDF Downloads 137
1877 Evaluating the Use of Manned and Unmanned Aerial Vehicles in Strategic Offensive Tasks

Authors: Yildiray Korkmaz, Mehmet Aksoy

Abstract:

In today's operations, countries want to reach their aims in the shortest way due to economic, political, and humanitarian considerations. The most effective way of achieving this goal is to be able to penetrate strategic targets. Strategic targets are generally located deep inside a country and are defended by modern and efficient surface-to-air missile (SAM) platforms, which are operated in integration with Intelligence, Surveillance and Reconnaissance (ISR) systems. Moreover, these high-value targets are buried deep underground and hardened with strong materials against attacks. Therefore, penetrating these targets requires very detailed intelligence. This intelligence process should cover a wide range, from weaponry to threat assessment. Accordingly, the framework of the attack package will be determined. This mission package has to execute missions in a high-threat environment. The way to minimize the risk of loss of life is to use packages formed of UAVs. However, some limitations arising from the characteristics of UAVs restrict the performance of a mission package consisting only of UAVs. Therefore, the mission package should be formed with UAVs under the leadership of a fifth-generation manned aircraft. Thus, we can minimize the limitations, easily penetrate deep inside enemy territory with minimum risk, make decisions according to ever-changing conditions, and finally destroy the strategic targets. In this article, the strengths and weaknesses of UAVs are examined by SWOT analysis. The article also reveals the features of a mission package and presents, as an example, what kind of mission package should be formed in order to obtain marginal benefit and penetrate strategic targets with the development of autonomous mission execution capability in the near future.

Keywords: UAV, autonomy, mission package, strategic attack, mission planning

Procedia PDF Downloads 521
1876 Emotional Labor Strategies and Intentions to Quit among Nurses in Pakistan

Authors: Maham Malik, Amjad Ali, Muhammad Asif

Abstract:

The current study aims to examine the relationship of emotional labor strategies - deep acting and surface acting - with employees' job satisfaction, organizational commitment, and intentions to quit. The study also examines the mediating role of job satisfaction and organizational commitment in the relationship between emotional labor strategies and intentions to quit. Data were collected through convenience sampling from 307 nurses using a self-administered questionnaire. Linear regression was applied to examine the relationships between the variables. Mediation was checked through the Baron and Kenny model and the Sobel test. The results show partial mediation by job satisfaction between emotional labor strategies and quitting intentions. The study recommends that deep acting should be promoted because it is positively associated with quality of work life, work engagement, and organizational citizenship behavior of employees.

Keywords: emotional labor strategies, intentions to quit, job satisfaction, organizational commitment, nursing

Procedia PDF Downloads 119