Search results for: quantification accuracy
3172 Artificial Intelligence in Bioscience: The Next Frontier
Authors: Parthiban Srinivasan
Abstract:
With recent advances in computational power and access to enough data in biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced, statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We utilize an automated process to generate an array of numerical descriptors (features) for each molecule. Redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms, among which we found the Random Forest algorithm to be the better choice for this analysis. We captured the non-linear relationships in the data and formed a prediction model with reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties; we expect the DNN to give better results and improve the accuracy of the predictions. This presentation will review these prominent machine learning and deep learning methods and our implementation protocols, and discuss the usefulness of these techniques in biomedical and health informatics.
Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction
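The averaging step this abstract describes (many randomized decision trees voting on an activity/toxicity label) can be sketched in a few lines. This is an illustrative stand-in, not the authors' pipeline: the descriptor matrix and activity labels below are synthetic, and scikit-learn's `RandomForestClassifier` is assumed as the implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical stand-in for computed molecular descriptors: 500 "molecules",
# 32 numerical features each (the real study generates these from SMILES)
X = rng.normal(size=(500, 32))
# Toy active/inactive label driven non-linearly by a few descriptors
y = ((X[:, 0] * X[:, 1] + X[:, 2] ** 2) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Averaging across many randomized trees captures the non-linear rule
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
print(round(acc, 3))
```

Because the label depends on feature interactions and squares, a single linear model would struggle here, while the tree ensemble recovers the rule from data alone.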
Procedia PDF Downloads 357
3171 Improvement plan for Integrity of Intensive Care Unit Patients Withdrawn from Life-Sustaining Medical Care
Authors: Shang-Sin Shiu, Shu-I Chin, Hsiu-Ju Chen, Ru-Yu Lien
Abstract:
The Hospice and Palliative Care Act has undergone three revisions, making it less challenging for terminal patients to withdraw life support systems. However, the adequacy of care before withdrawal is a crucial factor in end-of-life medical treatment. The authors observed that intensive care unit (ICU) nursing staff often rely on simple flowcharts or word of mouth, leading to inadequate preparation and failure to meet patient needs before withdrawal. This results in confusion or hesitation among those executing the process. Therefore, there is a motivation to improve withdrawal care processes, establish standardized procedures, ensure the accuracy of removal execution, enhance nursing staff's end-of-life care self-efficacy, and improve the overall quality of care. The investigation identified key issues: the lack of applicable guidelines for ICU care during withdrawal from life-sustaining treatment, insufficient education and training on withdrawal and end-of-life care, scattered locations of withdrawal-related tools, and inadequate self-efficacy in withdrawal care. Proposed solutions include revising withdrawal care processes and guidelines, integrating tools and locations, conducting educational courses, and forming support groups. After the project implementation, the accuracy of removal cognition improved from 78% to 96.5%, self-efficacy in end-of-life care after removal increased from 54.7% to 93.1%, and the correctness of care behavior progressed from 27.7% to 97.8%. It is recommended to regularly conduct courses on life support system removal care and grief consolation to enhance the quality of end-of-life care.
Keywords: intensive care unit (ICU) patients, nursing staff, withdrawal of life support systems, self-efficacy
Procedia PDF Downloads 51
3170 Demographic Dividend Explained by Infrastructure Costs of Population Growth Rate, Distinct from Age Dependency
Authors: Jane N. O'Sullivan
Abstract:
Although it is widely believed that fertility decline has benefitted economic advancement, particularly in East and South-East Asian countries, the causal mechanisms for this stimulus are contested. Since the turn of this century, demographic dividend theory has been increasingly recognised, hypothesising that higher proportions of working-age people can contribute to economic expansion if conditions are met to employ them productively. Population growth rate, as a systemic condition distinct from age composition, has not received similar attention since the 1970s and has lacked a methodology for quantitative assessment. This paper explores conceptual and empirical quantification of the burden of expanding physical capital to accommodate a growing population. In proof-of-concept analyses of Australia and the United Kingdom, actual expenditure on gross fixed capital formation was compiled over four decades and apportioned either to maintenance/turnover or to expansion to accommodate population growth, based on the lifespan of capital assets and the population growth rate. In both countries, capital expansion was estimated to cost 6.5-7.0% of GDP per 1% population growth rate. This opportunity cost impedes the improvement of per capita capacity needed to realise the potential of the working-age population. Economic modelling of demographic scenarios has to date omitted this channel of influence; the implications of its inclusion are discussed.
Keywords: age dependency, demographic dividend, infrastructure, population growth rate
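The core apportionment idea, that enlarging the capital stock just to hold per-capita capital constant absorbs a share of GDP proportional to the growth rate, can be written as a one-line calculation. This is a hedged sketch, not the paper's method: the capital-to-GDP ratio of 6.75 is an assumed illustrative value, chosen only so the toy number lands inside the 6.5-7.0% range the abstract reports.

```python
def expansion_cost_share(capital_output_ratio, pop_growth_rate):
    """Share of GDP spent enlarging the capital stock to keep
    per-capita capital constant under population growth."""
    return capital_output_ratio * pop_growth_rate

# Illustrative only: assumed capital-to-GDP ratio of 6.75, 1% annual growth
share = expansion_cost_share(6.75, 0.01)
print(round(share * 100, 2))  # percent of GDP
```

Maintenance/turnover spending would be a separate term (roughly capital stock divided by asset lifespan); the sketch isolates only the expansion component discussed in the abstract.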
Procedia PDF Downloads 143
3169 Technology of Gyro Orientation Measurement Unit (Gyro Omu) for Underground Utility Mapping Practice
Authors: Mohd Ruzlin Mohd Mokhtar
Abstract:
At present, most operators working on projects for utilities such as power, water, oil, gas, telecommunication and sewerage use technologies such as the total station, Global Positioning System (GPS), Electromagnetic Locator (EML) and Ground Penetrating Radar (GPR) to perform underground utility mapping. With the increasing popularity of the Horizontal Directional Drilling (HDD) method among local authorities and asset owners, most newly installed underground utilities use the HDD method. HDD is seen as simple and causes little disturbance to the public and traffic; thus, it is the preferred utility installation method in most areas, especially urban ones. HDD utilities are installed much deeper than existing utilities (some reports say HDD averages 5 meters in depth). However, this impacts the accuracy or ability of existing underground utility mapping technologies. In most Malaysian underground soil conditions, those technologies are limited to a maximum depth of 3 meters, so utilities installed deeper than that cannot be detected by existing detection tools. The accuracy and reliability of existing underground utility mapping technologies and work procedures are therefore in doubt, and a mitigation action plan is required. While installing a new utility using the HDD method, more accurate underground utility mapping can be achieved by using the Gyro OMU than with existing practice using, e.g., EML and GPR. The Gyro OMU is a method to accurately identify the location of the HDD bore; this mapping can be used or referred to in order to avoid the cost of breakdowns caused by future HDD works relying on inaccurate underground utility maps.
Keywords: Gyro Orientation Measurement Unit (Gyro OMU), Horizontal Directional Drilling (HDD), Ground Penetrating Radar (GPR), Electromagnetic Locator (EML)
Procedia PDF Downloads 140
3168 The Characteristics of Quantity Operation for 2nd and 3rd Grade Mathematics Slow Learners
Authors: Pi-Hsia Hung
Abstract:
The development of mathematical competency has individual benefits as well as benefits to the wider society. Children who begin school behind their peers in their understanding of number, counting, and simple arithmetic are at high risk of staying behind throughout their schooling. The development of effective strategies for improving the educational trajectory of these individuals will be contingent on identifying areas of early quantitative knowledge that influence later mathematics achievement. A computer-based quantity assessment was developed in this study to investigate the characteristics of 2nd and 3rd grade slow learners in quantity. The concept of quantification involves understanding measurements, counts, magnitudes, units, indicators, relative size, and numerical trends and patterns. Fifty-five tasks of quantitative reasoning, such as number sense, mental calculation, estimation and assessment of the reasonableness of results, are included as quantity problem solving. Thus, quantity is defined in this study as applying knowledge of number and number operations in a wide variety of authentic settings. Around 1,000 students were tested and categorized into four different performance levels. Students' quantity ability correlated more strongly with their school mathematics grades than with grades in other subjects. Around 20% of students are below the basic level. The intervention design implications of the preliminary item map constructed are discussed.
Keywords: mathematics assessment, mathematical cognition, quantity, number sense, validity
Procedia PDF Downloads 247
3167 Quantification of Polychlorinated Biphenyls (PCBs) in Soil Samples of Electrical Power Substations from Different Cities in Nigeria
Authors: Omasan Urhie Urhie, Adenipekun C. O, Eke W., Ogwu K., Erinle K. O
Abstract:
Polychlorinated biphenyls (PCBs) are persistent organic pollutants (POPs) that are very toxic; they possess the ability to accumulate in soil and in human tissues, resulting in health issues such as birth defects, reproductive disorders and cancer. The air is polluted by PCBs through volatilization and dispersion; they also contaminate soil and sediments and are not easily degraded. Soil samples were collected from a depth of 0-15 cm at three substations (Warri, Ughelli and Ibadan) of the Power Holding Company of Nigeria (PHCN) where old transformers were dumped. Extraction and cleanup of the soil samples were conducted using Accelerated Solvent Extraction (ASE) with pressurized liquid extraction (PLE). The concentration of PCBs was determined using gas chromatography/mass spectrometry (GC/MS). Mean total PCB concentrations in the soil samples increased in the order Ughelli ˂ Ibadan ˂ Warri: 2.457757 ppm at the Ughelli substation, 4.198926 ppm at the Ibadan substation and 14.05065 ppm at the Warri substation. In the Warri samples, PCB-167 was the most abundant at about 30% (4.28086 ppm) of the total PCB concentration (14.05065 ppm), followed by PCB-157 at about 20% (2.77871 ppm). Of the total PCBs in the Ughelli and Ibadan samples, PCB-156 was the most abundant at about 44% and 40%, respectively. This study provides a baseline report on the presence of PCBs in the vicinity of abandoned electrical power facilities in different cities in Nigeria.
Keywords: polychlorinated biphenyls, persistent organic pollutants, soil, transformer
Procedia PDF Downloads 139
3166 Beam Spatio-Temporal Multiplexing Approach for Improving Control Accuracy of High Contrast Pulse
Authors: Ping Li, Bing Feng, Junpu Zhao, Xudong Xie, Dangpeng Xu, Kuixing Zheng, Qihua Zhu, Xiaofeng Wei
Abstract:
In laser-driven inertial confinement fusion (ICF), control of the temporal shape of the laser pulse is a key point in ensuring an optimal laser-target interaction. One of the main difficulties in controlling the temporal shape is the control accuracy of the foot part of a high contrast pulse. Based on an analysis of pulse perturbation during amplification and frequency conversion in high power lasers, an approach of beam spatio-temporal multiplexing is proposed to improve the control precision of high contrast pulses. In this approach, the foot and peak parts of the high contrast pulse are controlled independently: they propagate separately in the near field and combine in the far field to form the required pulse shape. For a high contrast pulse, the beam area ratio of the two parts is optimized, so the beam fluence and intensity of the foot part are increased, which greatly eases control of the pulse. Meanwhile, the near field distribution of the two parts is also carefully designed so that their F-numbers are the same, which is another important parameter for laser-target interaction. Integrated calculation results show that for a pulse with a contrast of up to 500, the deviation of the foot part can be improved from 20% to 5% by using the beam spatio-temporal multiplexing approach with a beam area ratio of 1/20, which is almost the same as that of the peak part. These results are expected to bring a breakthrough in the power balance of high power laser facilities.
Keywords: inertial confinement fusion, laser pulse control, beam spatio-temporal multiplexing, power balance
Procedia PDF Downloads 147
3165 Unsupervised Echocardiogram View Detection via Autoencoder-Based Representation Learning
Authors: Andrea Treviño Gavito, Diego Klabjan, Sanjiv J. Shah
Abstract:
Echocardiograms serve as pivotal resources for clinicians in diagnosing cardiac conditions, offering non-invasive insights into a heart’s structure and function. When echocardiographic studies are conducted, no standardized labeling of the acquired views is performed. Employing machine learning algorithms for automated echocardiogram view detection has emerged as a promising solution to enhance efficiency in echocardiogram use for diagnosis. However, existing approaches predominantly rely on supervised learning, necessitating labor-intensive expert labeling. In this paper, we introduce a fully unsupervised echocardiographic view detection framework that leverages convolutional autoencoders to obtain lower dimensional representations and the K-means algorithm for clustering them into view-related groups. Our approach focuses on discriminative patches from echocardiographic frames. Additionally, we propose a trainable inverse average layer to optimize decoding of average operations. By integrating both public and proprietary datasets, we obtain a marked improvement in model performance when compared to utilizing a proprietary dataset alone. Our experiments show boosts of 15.5% in accuracy and 9.0% in the F-1 score for frame-based clustering, and 25.9% in accuracy and 19.8% in the F-1 score for view-based clustering. Our research highlights the potential of unsupervised learning methodologies and the utilization of open-sourced data in addressing the complexities of echocardiogram interpretation, paving the way for more accurate and efficient cardiac diagnoses.
Keywords: artificial intelligence, echocardiographic view detection, echocardiography, machine learning, self-supervised representation learning, unsupervised learning
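The clustering stage this abstract describes, K-means applied to lower-dimensional representations, can be sketched under stated assumptions: the embeddings below are synthetic Gaussian blobs standing in for autoencoder bottleneck vectors (three hypothetical "views"), and scikit-learn's `KMeans` is assumed as the clustering implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for autoencoder embeddings: three well-separated
# "view" clusters of 100 frames each in a 16-dimensional latent space
centers = rng.normal(scale=5.0, size=(3, 16))
embeddings = np.vstack([c + rng.normal(size=(100, 16)) for c in centers])

# Cluster the latent vectors into view-related groups, no labels needed
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embeddings)
n_groups = len(set(km.labels_))
print(n_groups)  # → 3
```

In the unsupervised setting, each discovered cluster is then inspected (or matched against a small labeled sample) to name the echocardiographic view it corresponds to.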
Procedia PDF Downloads 32
3164 Integrating Knowledge Distillation of Multiple Strategies
Authors: Min Jindong, Wang Mingxia
Abstract:
With the widespread use of artificial intelligence in everyday life, computer vision, and especially deep convolutional neural network models, has developed rapidly. With the increasing complexity of real visual target detection tasks and the improvement of recognition accuracy, target detection network models have also become very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, and the knowledge contained in the complex network model is comprehensively transferred to another lightweight network model. Different from traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for target detection, the soft target output of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the teacher network's hidden layers are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to make up for the large gap between them. Finally, this paper adds an exploration module to the traditional knowledge distillation teacher-student network model, so that the student network model not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics.
Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in both speed and accuracy.
Keywords: object detection, knowledge distillation, convolutional network, model compression
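The soft-target component of knowledge distillation mentioned above can be written down compactly. A minimal NumPy sketch, assuming the classic temperature-softened KL-divergence formulation rather than the paper's full multi-faceted M-KD loss:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a logit vector."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def soft_target_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher || student) on temperature-softened outputs:
    the soft-target term that transfers the teacher's 'dark knowledge'."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

t = [2.0, 1.0, 0.1]
loss_match = soft_target_loss(t, t)               # identical outputs -> ~0
loss_mismatch = soft_target_loss(t, [0.0, 0.0, 3.0])  # disagreement -> positive
print(loss_match, loss_mismatch)
```

A high temperature (T well above 1) flattens the teacher's distribution so the student also learns the relative probabilities of the wrong classes, not just the argmax.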
Procedia PDF Downloads 278
3163 Quantification of Size Segregated Particulate Matter Deposition in Human Respiratory Tract and Health Risk to Residents of Glass City
Authors: Kalpana Rajouriya, Ajay Taneja
Abstract:
The objective of the present study is to investigate the regional and lobar deposition of size-segregated PM in the human respiratory tract. PM in different size fractions was monitored using the Grimm portable environmental dust monitor during the winter season in Firozabad, a glass city of India. The PM10 concentration (200.817 µg/m³) was 4.46 and 2.0 times higher than the limits prescribed by the WHO (45 µg/m³) and NAAQS (100 µg/m³), respectively. The PM2.5 concentration (83.538 µg/m³) was 5.56 and 1.39 times higher than the WHO (15 µg/m³) and NAAQS (60 µg/m³) limits. The results inferred that PM10 and PM2.5 deposition was highest in the head region (0.3477-0.5622 and 0.366-0.4704, respectively), followed by the pulmonary region, especially in 9-21-year-old persons. The variation in deposition percentage in our study is mainly due to airway geometry, PM size, and the deposition mechanisms involved. The coarse fraction, due to its large size, cannot follow the airway path and mostly gets deposited by inertial impaction in the head region and its bifurcations. The results also inferred that coarse and fine PM deposition was highest in the 9-year (8.456×10⁻⁴ g, 2.911×10⁻⁴ g) and 3-month (1.496×10⁻⁴ g, 8.593×10⁻⁵ g) age categories. Thus, 9-year-old children and 3-month-old infants face the highest level of health risk.
Keywords: particulate matter, MPPD model, regional deposition, lobar deposition, health risk
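The exceedance factors quoted in this abstract follow directly from dividing the measured winter means by the guideline values. A small sketch reproducing them (guideline values as cited in the abstract):

```python
# Guideline limits cited in the abstract, in µg/m³
WHO_PM10, NAAQS_PM10 = 45.0, 100.0
WHO_PM25, NAAQS_PM25 = 15.0, 60.0

# Measured winter mean concentrations from the abstract, in µg/m³
pm10, pm25 = 200.817, 83.538

for label, conc, (who, naaqs) in [("PM10", pm10, (WHO_PM10, NAAQS_PM10)),
                                  ("PM2.5", pm25, (WHO_PM25, NAAQS_PM25))]:
    # Exceedance factor = measured concentration / guideline limit
    print(label, round(conc / who, 2), round(conc / naaqs, 2))
```

The printed factors match the 4.46x/2.0x (PM10) and roughly 5.57x/1.39x (PM2.5) exceedances reported above.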
Procedia PDF Downloads 61
3162 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline
Authors: Kenan Morani, Esra Kaya Ayana
Abstract:
This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with the addition of optional slice-removal techniques following the segmentation part. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which is then used to build the full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score than other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make the final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on this dataset.
Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation
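The dice score used above to compare segmentation methods can be illustrated on toy masks. A minimal sketch, assuming the standard definition (twice the intersection over the sum of the mask sizes); the 4x4 masks below are hypothetical:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks; 1.0 means perfect
    overlap, 0.0 means no overlap. eps guards against empty masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), int); a[1:3, 1:3] = 1  # 4-pixel ground-truth "lung" mask
b = np.zeros((4, 4), int); b[1:3, 1:4] = 1  # 6-pixel predicted mask
score = float(dice(b, a))
print(round(score, 2))  # → 0.8
```

Here the prediction covers all 4 true pixels plus 2 spurious ones, giving 2*4/(6+4) = 0.8; the segmentation comparison in the study ranks methods by exactly this kind of overlap score.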
Procedia PDF Downloads 131
3161 Improving Diagnostic Accuracy of Ankle Syndesmosis Injuries: A Comparison of Traditional Radiographic Measurements and Computed Tomography-Based Measurements
Authors: Yasar Samet Gokceoglu, Ayse Nur Incesu, Furkan Okatar, Berk Nimetoglu, Serkan Bayram, Turgut Akgul
Abstract:
Ankle syndesmosis injuries pose a significant challenge in orthopedic practice due to their potential for prolonged recovery and chronic ankle dysfunction. Accurate diagnosis and management of these injuries are essential for achieving optimal patient outcomes. The use of radiological methods, such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), plays a vital role in the accurate diagnosis of syndesmosis injuries in the context of ankle fractures. Treatment options for ankle syndesmosis injuries vary, with surgical interventions such as screw fixation and suture-button implantation being commonly employed. The choice of treatment is influenced by the severity of the injury and the presence of associated fractures. Additionally, the mechanism of injury, such as pure syndesmosis injury or specific fracture types, can impact the stability and management of syndesmosis injuries. Ankle fractures with syndesmosis injury present a complex clinical scenario, requiring accurate diagnosis, appropriate reduction, and tailored management strategies. The interplay between the mechanism of injury, associated fractures, and treatment modalities significantly influences the outcomes of these challenging injuries. The long-term outcomes and patient satisfaction following ankle fractures with syndesmosis injury are crucial considerations in the field of orthopedics. Patient-reported outcome measures, such as the Foot and Ankle Outcome Score (FAOS), provide essential information about functional recovery and quality of life after these injuries. When diagnosing syndesmosis injuries, standard measurements, such as the medial clear space, tibiofibular overlap, tibiofibular clear space, anterior tibiofibular ratio (ATFR), and the anterior-posterior tibiofibular ratio (APTF), are assessed through radiographs and computed tomography (CT) scans. 
These parameters are critical in evaluating the presence and severity of syndesmosis injuries, enabling clinicians to choose the most appropriate treatment approach. Despite advancements in diagnostic imaging, challenges remain in accurately diagnosing and treating ankle syndesmosis injuries. Traditional diagnostic parameters, while beneficial, may not capture the full extent of the injury or provide sufficient information to guide therapeutic decisions. This gap highlights the need for exploring additional diagnostic parameters that could enhance the accuracy of syndesmosis injury diagnoses and inform treatment strategies more effectively. The primary goal of this research is to evaluate the usefulness of traditional radiographic measurements in comparison to new CT-based measurements for diagnosing ankle syndesmosis injuries. Specifically, this study aims to assess the accuracy of conventional parameters, including medial clear space, tibiofibular overlap, tibiofibular clear space, ATFR, and APTF, in contrast with the recently proposed CT-based measurements such as the delta and gamma angles. Moreover, the study intends to explore the relationship between these diagnostic parameters and functional outcomes, as measured by the Foot and Ankle Outcome Score (FAOS). Establishing a correlation between specific diagnostic measurements and FAOS scores will enable us to identify the most reliable predictors of functional recovery following syndesmosis injuries. This comparative analysis will provide valuable insights into the accuracy and dependability of CT-based measurements in diagnosing ankle syndesmosis injuries and their potential impact on predicting patient outcomes. 
The results of this study could greatly influence clinical practices by refining diagnostic criteria and optimizing treatment planning for patients with ankle syndesmosis injuries.
Keywords: ankle syndesmosis injury, diagnostic accuracy, computed tomography, radiographic measurements, tibiofibular syndesmosis distance
Procedia PDF Downloads 73
3160 Supervised Machine Learning Approach for Studying the Effect of Different Joint Sets on Stability of Mine Pit Slopes Under the Presence of Different External Factors
Authors: Sudhir Kumar Singh, Debashish Chakravarty
Abstract:
Slope stability analysis is an important aspect of geotechnical engineering. It is also important from a safety and economic point of view, as any slope failure leads to loss of valuable lives and damage to property worth millions. This paper aims at mitigating the risk of slope failure by studying the effect of different joint sets on the stability of mine pit slopes under the influence of various external factors, namely degree of saturation, rainfall intensity, and seismic coefficients. A supervised machine learning approach has been utilized for making accurate and reliable predictions regarding the stability of slopes based on the value of the factor of safety. Numerous cases were studied by analyzing the stability of slopes using the popular finite element method, and the data thus obtained were used as training data for the supervised machine learning models. The input data were trained on different supervised machine learning models, namely Random Forest, Decision Tree, Support Vector Machine, and XGBoost. Distinct test data not present in the training data were used to measure the performance and accuracy of the different models. Although all models performed well on the test dataset, Random Forest stands out from the others due to its high accuracy of greater than 95%, providing a valuable tool at our disposal that is neither computationally expensive nor time-consuming and is in good accordance with the numerical analysis results.
Keywords: finite element method, geotechnical engineering, machine learning, slope stability
Procedia PDF Downloads 101
3159 Potentials of Additive Manufacturing: An Approach to Increase the Flexibility of Production Systems
Authors: A. Luft, S. Bremen, N. Balc
Abstract:
The task of flexibility planning and design, just like factory planning, for example, is to create the long-term systemic framework that constitutes the restriction for short-term operational management. This is a strategic challenge since, due to the decision defect character of the underlying flexibility problem, multiple types of flexibility need to be considered over the course of various scenarios, production programs, and production system configurations. In this context, an evaluation model has been developed that integrates both conventional and additive resources on a basic task level and allows the quantification of flexibility enhancement in terms of mix and volume flexibility, complexity reduction, and machine capacity. The model helps companies to decide in early decision-making processes about the potential gains of implementing additive manufacturing technologies on a strategic level. For companies, it is essential to consider both additive and conventional manufacturing beyond pure unit costs. It is necessary to achieve an integrative view of manufacturing that incorporates both additive and conventional manufacturing resources and quantifies their potential with regard to flexibility and manufacturing complexity. This also requires a structured process for the strategic production systems design that spans the design of various scenarios and allows for multi-dimensional and comparative analysis. A respective guideline for the planning of additive resources on a strategic level is laid out in this paper.
Keywords: additive manufacturing, production system design, flexibility enhancement, strategic guideline
Procedia PDF Downloads 124
3158 Classifying Students for E-Learning in Information Technology Course Using ANN
Authors: Sirilak Areerachakul, Nat Ployong, Supayothin Na Songkla
Abstract:
This research's objective is to select the most accurate model, using the artificial neural network technique, for filtering potential students who enroll in the IT course delivered by electronic learning at Suan Suanadha Rajabhat University. It is designed to help students select the appropriate courses by themselves. The results showed that the most accurate model used 100-fold cross-validation, with an accuracy of 73.58%.
Keywords: artificial neural network, classification, students, e-learning
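The model-selection procedure described above, scoring a neural network by k-fold cross-validation, can be sketched as follows. An illustrative stand-in, not the study's setup: the Iris dataset and 10 folds replace the study's enrollment data and 100 folds, and scikit-learn's `MLPClassifier` is assumed as the ANN implementation.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# Small feed-forward ANN; each of the k folds trains on k-1 parts
# and is scored on the held-out part
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)
mean_acc = scores.mean()
print(len(scores), round(mean_acc, 4))
```

The mean of the per-fold accuracies is the figure used to compare candidate models; the study's 73.58% is the analogous number for its 100-fold configuration.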
Procedia PDF Downloads 426
3157 Device-integrated Micro-thermocouples for Reliable Temperature Measurement of GaN HEMTs
Authors: Hassan Irshad Bhatti, Saravanan Yuvaraja, Xiaohang Li
Abstract:
GaN-based devices, such as high electron mobility transistors (HEMTs), offer superior characteristics for high-power, high-frequency, and high-temperature applications [1]. However, this exceptional electrical performance is compromised by undesirable self-heating effects under high-power operation [2, 3]. Some of the issues caused by self-heating are current collapse, thermal runaway and performance degradation [4, 5]. Therefore, accurate and reliable methods for measuring the temperature of individual devices on a chip are needed to monitor and control the thermal behavior of GaN-based devices [6]. Temperature measurement at the micro/nanoscale is a challenging task that requires specialized techniques such as infrared microscopy, Raman thermometry, and thermoreflectance. Recently, micro-thermocouples (MTCs) have attracted considerable attention due to their advantages of simplicity, low cost, high sensitivity, and compatibility with standard fabrication processes [7, 8]. A micro-thermocouple is a junction of two different metal thin films, which generates a Seebeck voltage related to the temperature difference between a hot and a cold zone. Integrating an MTC in a device allows the local temperature to be measured with high sensitivity and accuracy [9]. This work involves the fabrication and integration of micro-thermocouples to measure the channel temperature of a GaN HEMT. Our fabricated MTC (platinum-chromium junction) has shown a sensitivity of 16.98 µV/K and can measure device channel temperature with high precision and accuracy. The temperature information obtained using this sensor can help improve GaN-based devices and provide thermal engineers with useful insights for optimizing their designs.
Keywords: electrical engineering, thermal engineering, power devices, semiconductors
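Converting a measured Seebeck voltage to a channel temperature with the reported 16.98 µV/K sensitivity is a one-line calculation. A minimal sketch; the 849 µV reading and 25 °C cold-junction reference below are hypothetical values, and a linear response over the measurement range is assumed:

```python
SEEBECK_UV_PER_K = 16.98  # reported sensitivity of the Pt-Cr junction, µV/K

def junction_temperature(v_measured_uv, t_cold_c):
    """Hot-junction (channel) temperature from the thermocouple voltage:
    the temperature rise is dT = V / S, added to the cold-reference value."""
    return t_cold_c + v_measured_uv / SEEBECK_UV_PER_K

# Hypothetical example: 849 µV above a 25 °C reference
t_channel = junction_temperature(849.0, 25.0)
print(round(t_channel, 1))  # → 75.0
```

In practice, the cold-junction temperature is itself measured (or held at a known reference), and the sensitivity S is obtained from a calibration run like the one described in the abstract.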
Procedia PDF Downloads 19
3156 Improvement of Microscopic Detection of Acid-Fast Bacilli for Tuberculosis by Artificial Intelligence-Assisted Microscopic Platform and Medical Image Recognition System
Authors: Hsiao-Chuan Huang, King-Lung Kuo, Mei-Hsin Lo, Hsiao-Yun Chou, Yusen Lin
Abstract:
The most robust and economical method for the laboratory diagnosis of TB is to identify mycobacterial bacilli (AFB) under acid-fast staining, despite its disadvantages of low sensitivity and labor-intensiveness. Though digital pathology has become popular in medicine, an automated microscopic system for microbiology is still not available. A new AI-assisted automated microscopic system, consisting of a microscopic scanner and a recognition program powered by big data and deep learning, may significantly increase the sensitivity of TB smear microscopy. Thus, the objective is to evaluate such an automatic system for the identification of AFB. A total of 5,930 smears were enrolled in this study. An intelligent microscope system (TB-Scan, Wellgen Medical, Taiwan) was used for microscopic image scanning and AFB detection. 272 AFB smears were used for transfer learning to increase the accuracy. Referee medical technicians served as the gold standard for result discrepancies. The results showed that, on a total of 1,726 AFB smears, the automated system's accuracy, sensitivity and specificity were 95.6% (1,650/1,726), 87.7% (57/65), and 95.9% (1,593/1,661), respectively. Compared to culture, the sensitivity for human technicians was only 33.8% (38/142); however, the automated system achieved 74.6% (106/142), which is significantly higher, and this is the first such automated microscope system for TB smear testing in a controlled trial. This automated system could achieve higher TB smear sensitivity and laboratory efficiency and may complement molecular methods (e.g., GeneXpert) to reduce the total cost of TB control. Furthermore, such an automated system is capable of remote access via the internet and can be deployed in areas with limited medical resources.
Keywords: TB smears, automated microscope, artificial intelligence, medical imaging
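The reported accuracy, sensitivity and specificity can be reproduced from the confusion counts given in the abstract (57 of 65 positives and 1,593 of 1,661 negatives correctly called on 1,726 smears); the false-negative and false-positive counts below are derived by subtraction:

```python
def screening_metrics(tp, fn, tn, fp):
    """Standard screening-test metrics from a 2x2 confusion table."""
    total = tp + fn + tn + fp
    return {
        "accuracy": (tp + tn) / total,      # all correct calls / all smears
        "sensitivity": tp / (tp + fn),      # correct positives / true positives
        "specificity": tn / (tn + fp),      # correct negatives / true negatives
    }

# Counts from the abstract: 57/65 positives, 1,593/1,661 negatives
m = screening_metrics(tp=57, fn=65 - 57, tn=1593, fp=1661 - 1593)
print({k: round(v * 100, 1) for k, v in m.items()})
```

The printed percentages match the abstract's 95.6% accuracy, 87.7% sensitivity and 95.9% specificity.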
Procedia PDF Downloads 229
3155 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping
Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) is a technology that gathers information about the environment for self-positioning and mapping. It is widely used in computer vision, robotics, and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. In practice, however, the constant-velocity assumption is often violated, which may produce a large deviation between the predicted initial pose and the true value and lead to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration, which can be applied to most SLAM systems. In order to better describe the acceleration of the camera pose, we decoupled the pose transformation matrix and computed the rotation matrix and the translation vector separately, representing the rotation matrix by a rotation vector. We assume that, over a short period of time, the changes in angular velocity and translational velocity remain constant. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model was analyzed theoretically. Finally, we applied the proposed approach to the ORB-SLAM3 system and evaluated it on two sequences of the TUM dataset. The results showed that our method produces a more accurate initial pose estimate and improves the accuracy of the ORB-SLAM3 system by 6.61% and 6.46%, respectively, on the two test sequences.
Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM
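The constant-acceleration prediction described above can be sketched in a few lines: keep the last two frame-to-frame increments (rotation as a rotation vector, translation as a vector), assume their change stays constant, and extrapolate. This is a simplified stand-in for the paper's decoupled formulation, not ORB-SLAM3's actual implementation; the function names and the increment parameterization are assumptions:

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def predict_pose(R_curr, t_curr, r_inc_prev, r_inc_curr, t_inc_prev, t_inc_curr):
    """Constant-acceleration extrapolation of the next camera pose.

    The *change* of each frame-to-frame increment is assumed constant,
    so the next increment is 2 * current - previous.
    """
    r_inc_next = 2.0 * r_inc_curr - r_inc_prev   # extrapolated rotation increment
    t_inc_next = 2.0 * t_inc_curr - t_inc_prev   # extrapolated translation increment
    R_next = R_curr @ rodrigues(r_inc_next)
    t_next = t_curr + t_inc_next
    return R_next, t_next
```

When the two increments are equal, the prediction reduces to the usual constant-velocity model, so the sketch is a strict generalization of it.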
Procedia PDF Downloads 94
3154 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes
Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo
Abstract:
Doctors’ notes record their impressions, attitudes, clinical sense, and opinions about patients’ conditions and progress, as well as other information essential for daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community. Automatically extracting information from unstructured text is known to be more difficult than working with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors’ decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes such as survival or mortality can be useful in influencing the judgement of clinical sentiment in ICU clinical notes. This paper makes two contributions. First, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of notes not previously seen by clinicalBERT or Bio_Discharge_Summary_BERT. The model, which is based on clinicalBERT, achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, and precision 0.96). Second, we perform data augmentation using clinical contextual word embeddings based on a pre-trained clinical model to balance the samples in each class of the data (survived vs. deceased patients). Data augmentation improves the accuracy of prediction slightly (Acc 96.67%, AUC 0.98, and precision 0.92).
Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation
Procedia PDF Downloads 206
3153 Quantification of Glucosinolates in Turnip Greens and Turnip Tops by Near-Infrared Spectroscopy
Authors: S. Obregon-Cano, R. Moreno-Rojas, E. Cartea-Gonzalez, A. De Haro-Bailon
Abstract:
The potential of near-infrared spectroscopy (NIRS) for screening the total glucosinolate (tGSL) content, as well as the aliphatic glucosinolates gluconapin (GNA), progoitrin (PRO), and glucobrassicanapin (GBN), in turnip greens and turnip tops was assessed. This crop is grown for edible leaves and stems for human consumption. The reference glucosinolate values, obtained by high-performance liquid chromatography on the vegetable samples, were regressed against different spectral transformations by modified partial least-squares (MPLS) regression (calibration set, n = 350). The resulting models were satisfactory, with calibration coefficients ranging from 0.72 (GBN) to 0.98 (tGSL). The predictive ability of the equations obtained was tested on a set of samples (n = 70) independent of the calibration set. The determination coefficients and prediction errors (SEP) obtained in the external validation were: GNA = 0.94 (SEP = 3.49); PRO = 0.41 (SEP = 1.08); GBN = 0.55 (SEP = 0.60); tGSL = 0.96 (SEP = 3.28). These results show that the equations developed for total glucosinolates, as well as for gluconapin, can be used for screening these compounds in the leaves and stems of this species, and that the calibration equations are accurate enough for a fast, non-destructive, and reliable analysis of GNA and tGSL content directly from NIR spectra. The progoitrin and glucobrassicanapin equations can be employed to identify samples with high, medium, and low contents.
Keywords: Brassica rapa, glucosinolates, gluconapin, NIRS, turnip greens
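The external-validation statistics quoted above (determination coefficient and SEP) follow standard chemometric conventions. A hedged sketch, assuming SEP is defined as the standard deviation of the bias-corrected residuals (one common convention, which may differ in detail from the authors'):

```python
import math

def external_validation(reference, predicted):
    """Determination coefficient (r^2) and standard error of prediction (SEP)
    for a NIRS calibration tested on an independent sample set."""
    n = len(reference)
    mean_r = sum(reference) / n
    mean_p = sum(predicted) / n
    # squared Pearson correlation between reference and predicted values
    cov = sum((r - mean_r) * (p - mean_p) for r, p in zip(reference, predicted))
    var_r = sum((r - mean_r) ** 2 for r in reference)
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    r2 = cov ** 2 / (var_r * var_p)
    # SEP: standard deviation of the bias-corrected residuals
    residuals = [p - r for r, p in zip(reference, predicted)]
    bias = sum(residuals) / n
    sep = math.sqrt(sum((e - bias) ** 2 for e in residuals) / (n - 1))
    return r2, sep
```

A perfect calibration gives r² = 1 and SEP = 0; a constant offset inflates the bias but, under this convention, not the SEP.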
Procedia PDF Downloads 144
3152 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil
Authors: M. Seguini, D. Nedjar
Abstract:
An accurate nonlinear analysis of a deep beam resting on an elastic perfectly plastic soil is carried out in this study. Specifically, a nonlinear finite element model for the large deflection and moderate rotation of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometrically nonlinear analysis of the beam is based on von Kármán theory, and the Newton-Raphson incremental-iterative method is implemented in a Matlab code to solve the nonlinear equations of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are proposed to verify the accuracy and efficiency of the proposed model, where local average theory based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the length of the beam, the coefficient of subgrade reaction of the soil, the Young’s modulus of the beam, and the coefficient of variation and correlation length of the soil’s coefficient of subgrade reaction. A comparison between the beam resting on the linear and nonlinear soil models is presented for different beam lengths and external loads. Numerical results were obtained for the combination of the geometric nonlinearity of the beam and the material nonlinearity of the random soil. This comparison highlighted the need to include the material nonlinearity and spatial variability of the soil in the geometrically nonlinear analysis when the beam undergoes large deflections.
Keywords: finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability
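The Newton-Raphson iteration at the core of such a solver can be illustrated on a drastically reduced single-degree-of-freedom analogue: a deflection w resisted by a linear-plus-cubic soil reaction. The full soil-beam system replaces these scalars with displacement vectors and a tangent stiffness matrix; the reaction law here is purely illustrative:

```python
def newton_raphson_deflection(P, k, c, w0=0.0, tol=1e-10, max_iter=50):
    """Solve the 1-DOF nonlinear soil reaction k*w + c*w**3 = P for deflection w."""
    w = w0
    for _ in range(max_iter):
        residual = k * w + c * w ** 3 - P    # out-of-balance force
        if abs(residual) < tol:
            return w
        tangent = k + 3.0 * c * w ** 2       # tangent stiffness (derivative of reaction)
        w -= residual / tangent              # Newton-Raphson correction
    raise RuntimeError("Newton-Raphson did not converge")
```

Setting c = 0 recovers the linear-soil solution w = P/k, which is a convenient sanity check on the iteration.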
Procedia PDF Downloads 414
3151 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques
Authors: Gizem Eser Erdek
Abstract:
This study investigates predicting the remaining life of industrial cutting tools used in the production process with deep learning methods. As cutting tools wear out, they damage the raw material they are processing. The aim is therefore to predict the remaining life of a cutting tool from the damage it causes to the raw material. To this end, hole photos were collected from a hole-drilling machine for 8 months. The photos were labeled into 5 classes according to hole quality, transforming the problem into a classification task. Using the prepared data set, a model was created with convolutional neural networks, a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the data set. A hybrid model using convolutional neural networks and support vector machines was also used for comparison. When all models were compared, the convolutional neural network model gave the best results, with a 74% accuracy rate. In preliminary studies, the data set was reduced to only the best and worst classes, and the resulting binary classification model reached ~93% accuracy. The results of this study show that the remaining life of cutting tools can be predicted by deep learning methods based on the damage to the raw material, and the experiments demonstrate that deep learning methods can be used as an alternative for cutting tool life estimation.
Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VGGNet
Procedia PDF Downloads 77
3150 Quantification of Peptides (linusorbs) in Gluten-free Flaxseed Fortified Bakery Products
Authors: Youn Young Shim, Ji Hye Kim, Jae Youl Cho, Martin JT Reaney
Abstract:
Flaxseed (Linum usitatissimum L.) is gaining popularity in the food industry as a superfood due to its health-promoting properties. Linusorbs (LOs, a.k.a. cyclolinopeptides) are bioactive compounds in flaxseed with potential health effects. This study focused on the effects of processing and storage on the stability of flaxseed-derived LOs added to bakery products. Flaxseed-meal-fortified gluten-free (GF) bakery bread was prepared, and the changes in LOs during the bread-making process (meal, fortified flour, dough, and bread) and storage (0, 1, 2, and 4 weeks) at different temperatures (−18 °C, 4 °C, and 22−23 °C) were analyzed by high-performance liquid chromatography with diode array detection. The total oxidized LOs and LO1OB2 remained largely stable in flaxseed meal at storage temperatures of 22−23 °C, −18 °C, and 4 °C for up to four weeks. Processing steps during GF bread production resulted in the oxidation of LOs. Interestingly, no LOs were detected in the dough sample; however, LOs appeared when the dough was stored at −18 °C for one week, suggesting that freezing disrupted the sticky structure of the dough and released the LOs. The final product, flaxseed-meal-fortified bread, could be stored for up to four weeks at −18 °C and 4 °C, and for one week at 22−23 °C. These results suggest that LOs change during processing and storage and that flaxseed-flour-fortified bread should be stored at low temperatures to preserve the active LO components.
Keywords: Linum usitatissimum L., flaxseed, linusorb, stability, gluten-free, peptides, cyclolinopeptide
Procedia PDF Downloads 179
3149 Advantages of Computer Navigation in Knee Arthroplasty
Authors: Mohammad Ali Al Qatawneh, Bespalchuk Pavel Ivanovich
Abstract:
Computer navigation has been introduced in total knee arthroplasty to improve the accuracy of the procedure. It improves the accuracy of bone resection in the coronal and sagittal planes, normalizes the rotational alignment of the femoral component, and allows full assessment and balancing of soft-tissue deformation in the coronal plane. This work examines the advantages of computer navigation technology in total knee arthroplasty in 62 patients (11 men and 51 women) suffering from gonarthrosis, aged 51 to 83 years, operated on using a computer navigation system and followed up to 3 years after surgery. During the examination, the deformity variant was determined and radiometric parameters of the knee joints were measured using the Knee Society Score (KSS), Functional Knee Society Score (FKSS), and Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scales. Functional stress tests were also performed to assess the stability of the knee joint in the frontal plane, together with functional measures of the range of motion. After surgery, improvement was observed on all scales: first, WOMAC values decreased 5.90-fold, with the median value falling to 11 points (p < 0.001); second, KSS increased 3.91-fold, reaching 86 points (p < 0.001); and third, FKSS increased 2.08-fold, reaching 94 points (p < 0.001). After TKA, a lower-limb axis deviation of more than 3 degrees was observed in 4 patients (6.5%), and frontal instability of the knee joint in only 2 cases (3.2%). The incidence of sagittal instability of the knee joint after the operation was 9.6%. The range of motion increased 1.25-fold, averaging 125 degrees (p < 0.001).
Computer navigation increases the accuracy of the spatial orientation of the endoprosthesis components in all planes, keeps the variability of the lower-limb axis within ±3°, allows the best results of surgical intervention to be achieved, and can be used for most basic tasks, yielding excellent and good outcomes in 100% of cases according to the WOMAC scale. In diaphyseal deformities of the femur and/or tibia, as well as with obstruction of their medullary canal, computer navigation is the method of choice. Its use prevents flexion contracture and hyperextension of the knee joint during the distal femoral cut. The navigation system achieves high-precision implantation of the endoprosthesis components and an adequate ligament balance, which contributes to joint stability, reduces pain, and leads to a good functional result of the treatment.
Keywords: knee joint, arthroplasty, computer navigation, advantages
Procedia PDF Downloads 90
3148 Computer-Aided Diagnosis System Based on Multiple Quantitative Magnetic Resonance Imaging Features in the Classification of Brain Tumor
Authors: Chih Jou Hsiao, Chung Ming Lo, Li Chun Hsieh
Abstract:
Brain tumors do not have a high incidence rate, but their high mortality and poor prognosis make them a serious concern. In clinical examination, the grading of brain tumors depends on pathological features. However, histopathological analysis has weaknesses that can lead to misgrading: interpretations can vary in the absence of well-established definitions, and the heterogeneity of malignant tumors makes it challenging to extract representative tissue under surgical biopsy. With the development of magnetic resonance imaging (MRI), tumor grading can be accomplished by a noninvasive procedure. To further improve diagnostic accuracy, this study proposed a computer-aided diagnosis (CAD) system based on MRI features to provide suggestions for tumor grading. Gliomas are the most common type of malignant brain tumor (about 70%). This study collected 34 glioblastomas (GBMs) and 73 lower-grade gliomas (LGGs) from The Cancer Imaging Archive. After defining the regions of interest in the MRI images, multiple quantitative morphological features, such as region perimeter, region area, compactness, the mean and standard deviation of the normalized radial length, and moment features, were extracted from the tumors for classification. As a result, two of the five morphological features and three of the four image moment features achieved p values < 0.001, and the remaining moment feature had a p value < 0.05. The CAD system using the combination of all features achieved an accuracy of 83.18% in classifying the gliomas into LGG and GBM, with a sensitivity of 70.59% and a specificity of 89.04%. The proposed system can serve as a second reader in clinical examinations for radiologists.
Keywords: brain tumor, computer-aided diagnosis, gliomas, magnetic resonance imaging
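The morphological descriptors named above (area, perimeter, compactness, normalized radial length statistics) can all be computed from a binary tumor mask. The sketch below uses common textbook definitions, which may differ in detail from the authors' implementation:

```python
import numpy as np

def morphological_features(mask):
    """Morphological descriptors of a boolean tumor mask.

    Definitions are standard ones (compactness = perimeter^2 / (4*pi*area);
    normalized radial length = boundary-to-centroid distance / max distance)
    and are illustrative, not the paper's exact implementation.
    """
    ys, xs = np.nonzero(mask)
    area = float(len(xs))
    # boundary pixels = mask pixels with at least one 4-neighbor outside the mask
    padded = np.pad(mask, 1)
    interior = (padded[1:-1, 2:] & padded[1:-1, :-2] &
                padded[2:, 1:-1] & padded[:-2, 1:-1]) & mask
    perimeter = float(area - interior.sum())
    compactness = perimeter ** 2 / (4.0 * np.pi * area)
    # normalized radial length of boundary pixels about the centroid
    cy, cx = ys.mean(), xs.mean()
    by, bx = np.nonzero(mask & ~interior)
    radial = np.hypot(by - cy, bx - cx)
    nrl = radial / radial.max()
    return {"area": area, "perimeter": perimeter, "compactness": compactness,
            "nrl_mean": float(nrl.mean()), "nrl_std": float(nrl.std())}
```

Compactness is minimal for a disk and grows for irregular shapes, which is why it helps separate infiltrative from well-circumscribed tumors.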
Procedia PDF Downloads 260
3147 Long-Term Subcentimeter-Accuracy Landslide Monitoring Using a Cost-Effective Global Navigation Satellite System Rover Network: Case Study
Authors: Vincent Schlageter, Maroua Mestiri, Florian Denzinger, Hugo Raetzo, Michel Demierre
Abstract:
Precise landslide monitoring with differential global navigation satellite systems (GNSS) is well established, but technical or economic reasons limit its adoption by geotechnical companies. This study demonstrates the reliability and usefulness of Geomon (Infrasurvey Sàrl, Switzerland), a stand-alone and cost-effective rover network. The system permits deploying up to 15 rovers plus one reference station for differential GNSS. A dedicated radio link connects all modules to a base station, where an embedded computer automatically computes all relative positions (L1 phase, open-source RTKLib software) and populates an internet server. Each measurement also contains information from an internal inclinometer, the battery level, and position quality indices. Unlike standard GNSS survey systems, which suffer from a limited number of beacons that must be placed in areas with good GSM signal, Geomon offers greater flexibility and provides a true overview of the whole landslide with good spatial resolution. Each module is powered by solar panels, ensuring autonomous long-term recording. In this study, we tested the system on several sites in the Swiss mountains, setting up to 7 rovers per site for an 18-month survey. The aim was to assess the robustness and accuracy of the system under different environmental conditions. In one case, we ran forced blind tests (vertical movements of a given amplitude) and compared various session parameters (durations from 10 to 90 minutes); the other cases were surveys of real landslide sites using fixed, optimized parameters. Sub-centimeter accuracy with few outliers was obtained using the best parameters (session duration of 60 minutes, baseline of 1 km or less), with the noise level on the horizontal component half that of the vertical one. Performance (percentage of aborted solutions, outliers) degraded with sessions shorter than 30 minutes.
The environment also had a strong influence on the percentage of aborted solutions (ambiguity search problem), due to multipath reflections or satellites obstructed by trees and mountains. The baseline length (reference-rover distance, single-baseline processing) reduced the accuracy above 1 km but had no significant effect below this limit. In critical weather conditions, the system’s robustness was limited: snow, avalanches, and frost covered some rovers, including the antennas and vertically oriented solar panels, leading to data interruptions, and strong wind damaged a reference station. The ability to change session parameters remotely was very useful. In conclusion, the tested rover network provided the expected sub-centimeter accuracy while delivering a landslide survey with dense spatial resolution. The ease of implementation and the fully automatic long-term survey were time-saving. Performance strongly depends on the surrounding conditions, but short pre-measurements should allow a rover to be moved to a better final placement. The system offers a promising hazard-mitigation technique. Improvements could include data post-processing for alerts and automatic adjustment of session duration and count based on battery level and rover displacement velocity.
Keywords: GNSS, GSM, landslide, long-term, network, solar, spatial resolution, sub-centimeter
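The sub-centimeter accuracy over short baselines rests on differential processing: errors common to the reference station and the rover (atmosphere, satellite clocks) cancel when the two solutions are differenced. A one-dimensional toy simulation of that cancellation (all error magnitudes invented for illustration, not measured values):

```python
import random

random.seed(0)

true_reference = 0.0     # known reference-station coordinate (1-D toy model)
true_rover = 1000.0      # rover 1 km away along the same axis

abs_errors, diff_errors = [], []
for _ in range(1000):
    common = random.gauss(0.0, 2.0)       # atmosphere/clock error, shared over a short baseline
    ref_noise = random.gauss(0.0, 0.005)  # receiver noise, a few millimetres
    rov_noise = random.gauss(0.0, 0.005)
    ref_meas = true_reference + common + ref_noise
    rov_meas = true_rover + common + rov_noise
    # single difference: the common-mode error cancels
    baseline = rov_meas - ref_meas
    abs_errors.append(abs(rov_meas - true_rover))
    diff_errors.append(abs(baseline - (true_rover - true_reference)))

mean_abs = sum(abs_errors) / len(abs_errors)
mean_diff = sum(diff_errors) / len(diff_errors)
```

The differential error is set only by the independent receiver noise, which is why it sits orders of magnitude below the stand-alone error, and why accuracy degrades once the baseline grows and the "common" errors stop being common.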
Procedia PDF Downloads 111
3146 Low-Cost Parking Lot Mapping and Localization for Home Zone Parking Pilot
Authors: Hongbo Zhang, Xinlu Tang, Jiangwei Li, Chi Yan
Abstract:
Home zone parking pilot (HPP) is a fast-growing segment among low-speed autonomous driving applications. It requires the car to automatically cruise around a parking lot and park itself within a range of up to 100 meters inside a recurrent home/office parking lot, which demands a precise parking lot mapping and localization solution. Although lidar is ideal for SLAM, car OEMs favor a low-cost, fish-eye-camera-based visual SLAM approach. Recent approaches have employed segmentation models to extract semantic features and improve mapping accuracy, but these AI models are memory-hungry and computationally expensive, making them difficult to deploy on embedded ADAS systems. To address this issue, we propose a new method that utilizes object detection models to extract robust and accurate parking lot features. The proposed method reduces computational costs while maintaining high accuracy. Combined with the vehicle’s wheel-pulse information, the system can construct maps and localize the vehicle in real time. This article discusses in detail (1) fish-eye-based Around View Monitoring (AVM) with transparent-chassis images as the input, (2) an object detection (OD) based feature point extraction algorithm to generate a point cloud, (3) a low-computation parking lot mapping algorithm, and (4) the real-time localization algorithm. Finally, we demonstrate experimental results with an embedded ADAS system installed on a real car in an underground parking lot.
Keywords: ADAS, home zone parking pilot, object detection, visual SLAM
Procedia PDF Downloads 67
3145 Experimental Quantification and Modeling of Dissolved Gas during Hydrate Crystallization: CO₂ Hydrate Case
Authors: Amokrane Boufares, Elise Provost, Veronique Osswald, Pascal Clain, Anthony Delahaye, Laurence Fournaison, Didier Dalmazzone
Abstract:
Gas hydrates have long been considered problematic for flow assurance in natural gas and oil transportation. On the other hand, they are now seen as promising future materials for various applications (e.g., desalination of seawater, natural gas and hydrogen storage, gas sequestration, gas combustion separation, and cold storage and transport). Nonetheless, a better understanding of the crystallization mechanism of gas hydrates and of their formation kinetics is still needed to comprehend and control the process. To that end, the real-time evolution of the dissolved gas concentration in the aqueous phase during hydrate formation must be measured. In this work, CO₂ hydrates were formed in a stirred reactor equipped with an Attenuated Total Reflection (ATR) probe coupled to a Fourier Transform InfraRed (FTIR) spectroscopy analyzer. A method was first developed to continuously measure the CO₂ concentration in the liquid phase in situ during the solubilization, supersaturation, hydrate crystallization, and dissociation steps. The measured concentrations were then compared with equilibrium concentrations. It was observed that equilibrium is reached almost instantly in the liquid phase because the dissolved gas is rapidly consumed by hydrate crystallization. Consequently, the hydrate crystallization kinetics is limited by gas transfer at the gas-liquid interface. Finally, we observed that the liquid-hydrate equilibrium during hydrate crystallization is governed by the experimental temperature under the tested conditions.
Keywords: gas hydrate, dissolved gas, crystallization, infrared spectroscopy
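The comparison of measured and equilibrium dissolved-gas concentrations hinges on gas solubility. For rough orientation, the equilibrium concentration of dissolved CO₂ in water can be estimated from Henry's law with a van't Hoff temperature correction; the constants below are approximate textbook values, not the paper's data:

```python
import math

# Approximate Henry's-law solubility of CO2 in water (order-of-magnitude
# values from standard compilations; treat as assumptions).
KH_298 = 0.034      # mol / (L·atm) at 298.15 K
VANT_HOFF = 2400.0  # K, temperature-dependence coefficient d(ln kH)/d(1/T)

def dissolved_co2(pressure_atm, temp_k):
    """Equilibrium dissolved CO2 concentration (mol/L) via Henry's law."""
    kh = KH_298 * math.exp(VANT_HOFF * (1.0 / temp_k - 1.0 / 298.15))
    return kh * pressure_atm
```

Solubility rises as the temperature drops toward hydrate-forming conditions, which is consistent with the supersaturation step observed before crystallization.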
Procedia PDF Downloads 282
3144 Forecasting Residential Water Consumption in Hamilton, New Zealand
Authors: Farnaz Farhangi
Abstract:
Many people in New Zealand believe that access to water is inexhaustible, a belief rooted in a history of virtually unrestricted access. For a region like Hamilton, one of New Zealand’s fastest-growing cities, it is crucial for policy makers to understand future water consumption and to consider rules and regulations such as universal water metering. Hamilton residents use water freely and have little idea how much water they use. Hence, one objective of this research is to forecast water consumption using different methods. Residential water consumption time series exhibit seasonal and trend variations. Seasonality is the pattern caused by repeating events such as weather conditions in summer and winter, public holidays, etc. The problem with this seasonal fluctuation is that it dominates the other time series components and makes it difficult to identify other variations (such as the effects of educational campaigns, regulation, etc.). Besides seasonality, a stochastic trend is combined with the seasonal pattern and affects the forecasting results in different ways. According to the forecasting literature, preprocessing (de-trending and de-seasonalization) is essential for better forecasting performance, while other researchers argue that seasonally non-adjusted data should be used. Hence, this study addresses the question: is preprocessing essential? A wide range of forecasting methods exists, each with its pros and cons. In this research, double seasonal ARIMA and Artificial Neural Networks (ANN) are applied, accounting for diverse elements such as seasonality and calendar effects (public and school holidays), and their results are combined to find the best predicted values. The hypothesis is tested by comparing the accuracy and robustness of the combined method (hybrid model) against the individual methods. In order to use ARIMA, the data should be stationary.
ANN also has successful forecasting applications for seasonal and trend time series. Using a hybrid model is a way to improve the accuracy of the individual methods. Because water demand is dominated by different seasonal patterns, different methods are combined to capture their sensitivity to weather conditions, calendar effects, and other seasonal patterns. The advantage of this combination is the reduction of errors through averaging across the individual models. It is also useful when the accuracy of each forecasting model is uncertain, and it eases the problem of model selection. Using daily residential water consumption data from January 2000 to July 2015 in Hamilton, this study shows how predictions vary across methods. ANN produces more accurate forecasts than the other methods, and preprocessing is essential when using seasonal time series. The hybrid model reduces average forecasting errors and increases performance.
Keywords: artificial neural network (ANN), double seasonal ARIMA, forecasting, hybrid model
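The error-averaging benefit of a hybrid model is easy to demonstrate: when the component models are biased in opposite directions, their average cancels much of the error. A toy sketch with invented numbers (not the Hamilton data):

```python
def hybrid_forecast(pred_a, pred_b):
    """Combine two model forecasts by simple (equal-weight) averaging."""
    return [(a + b) / 2.0 for a, b in zip(pred_a, pred_b)]

def mae(actual, predicted):
    """Mean absolute error of a forecast."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual = [100.0, 110.0, 120.0, 130.0]
arima_pred = [104.0, 114.0, 124.0, 134.0]  # systematically over-forecasts (+4)
ann_pred   = [97.0, 107.0, 117.0, 127.0]   # systematically under-forecasts (-3)
hybrid = hybrid_forecast(arima_pred, ann_pred)
```

Here the hybrid's MAE (0.5) is well below either component's (4 and 3); in practice the gain depends on how uncorrelated the component errors are.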
Procedia PDF Downloads 337
3143 An Intelligent Prediction Method for Annular Pressure Driven by Mechanism and Data
Authors: Zhaopeng Zhu, Xianzhi Song, Gensheng Li, Shuo Zhu, Shiming Duan, Xuezhe Yao
Abstract:
Accurate calculation of wellbore pressure is of great significance for preventing wellbore risk during drilling. The traditional mechanism model requires many iterative solving procedures, which reduces calculation efficiency and makes it difficult to meet the demands of dynamic wellbore pressure control. In recent years, many scholars have introduced artificial intelligence algorithms into wellbore pressure calculation, significantly improving both the efficiency and the accuracy of the computation. However, due to the ‘black box’ nature of intelligent algorithms, existing intelligent wellbore pressure models perform poorly outside the range of their training data and overreact to data noise, often producing abnormal results. In this study, the multiphase flow mechanism is embedded into the objective function of a neural network model as a constraint, and an intelligent prediction model of wellbore pressure under this constraint is established based on more than 400,000 sets of pressure measurement while drilling (MPD) data. The multiphase flow constraint makes the predictions of the neural network model more consistent with the physical distribution of wellbore pressure, overcoming the black-box attribute of the neural network to some extent. In particular, accuracy on the independent test data set is further improved, and abnormal calculated values essentially disappear. This method, driven jointly by MPD data and the multiphase flow mechanism, is a key route to predicting wellbore pressure accurately and efficiently in the future.
Keywords: multiphase flow mechanism, pressure while drilling data, wellbore pressure, mechanism constraints, combined drive
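The idea of embedding a mechanism into the objective can be sketched as a penalized loss: a data term plus a physics term that punishes predictions violating a known relation. The sketch below reduces the multiphase-flow mechanism to a simple hydrostatic lower bound purely for illustration; the paper's actual constraint is the full multiphase-flow relation, and all names and numbers here are assumptions:

```python
import numpy as np

def constrained_loss(pressure_pred, pressure_meas, depth_m, rho, g=9.81, lam=0.1):
    """Data loss plus a physics penalty (mechanism-constrained objective sketch).

    The 'mechanism' here is only a hydrostatic check: predicted pressure
    should not fall below the static column rho*g*h.
    """
    data_loss = np.mean((pressure_pred - pressure_meas) ** 2)
    hydrostatic = rho * g * depth_m                       # Pa, static-column bound
    violation = np.maximum(hydrostatic - pressure_pred, 0.0)  # penalize only violations
    physics_loss = np.mean(violation ** 2)
    return data_loss + lam * physics_loss
```

During training, the physics term steers the network away from physically impossible outputs even in regions the measured data never covered, which is the mechanism-plus-data "combined drive" the abstract describes.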
Procedia PDF Downloads 174