Search results for: defect prediction
529 A Hybrid Simulation Approach to Evaluate Cooling Energy Consumption for Public Housings of Subtropics
Authors: Kwok W. Mui, Ling T. Wong, Chi T. Cheung
Abstract:
Cooling energy consumption in the residential sector, unlike that of shopping malls, offices, or commercial buildings, is significantly subject to occupant decisions, yet in-depth investigations of this influence remain limited. Energy consumption also appears to be associated with housing type. Surveys were conducted in existing Hong Kong public housing to understand housing characteristics, apartment electricity demands, occupants’ thermal expectations, and air-conditioning usage patterns for further cooling energy-saving assessments. The aim of this study is to develop a hybrid cooling energy prediction model that integrates EnergyPlus (EP) and an artificial neural network (ANN) to estimate cooling energy consumption in the public residential sector. Sensitivity tests are conducted to quantify the energy impacts of changing building parameters: external wall and window material selection, window size reduction, shading extension, building orientation, and apartment size control. Assessments are performed to investigate the relationships between cooling demands and occupant behavior in terms of thermal environment criteria and air-conditioning operation patterns. The results are summarized into a cooling energy calculator for lay use, to raise awareness of cooling energy saving in occupants’ own living environments. The findings can serve as a framework for future cooling energy evaluation in residential buildings, with a particular focus on occupant air-conditioning operation behavior and criteria for energy-saving incentives.
Keywords: artificial neural network, cooling energy, occupant behavior, residential buildings, thermal environment
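The hybrid model described above couples EnergyPlus simulation with an ANN regressor. As a purely illustrative sketch of the ANN side (the abstract publishes no architecture or weights; every value below, including the feature set, is hypothetical), a one-hidden-layer forward pass can be written as:

```python
import math

def predict_cooling_energy(features, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer ANN with tanh activation."""
    hidden = [math.tanh(sum(w * x for w, x in zip(ws, features)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Hypothetical weights; inputs stand in for normalized features such as
# set-point temperature, window-to-wall ratio, and apartment size.
w_hidden = [[0.5, -0.2, 0.1], [0.3, 0.4, -0.1]]
b_hidden = [0.0, 0.1]
w_out, b_out = [1.2, 0.8], 2.0

energy = predict_cooling_energy([0.6, 0.3, 0.5], w_hidden, b_hidden, w_out, b_out)
```

A real model would be trained on the survey and EnergyPlus data described in the abstract; here the fixed weights merely demonstrate the computation.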
Procedia PDF Downloads 168
528 The Influence of Caregivers’ Preparedness and Role Burden on Quality of Life among Stroke Patients
Authors: Yeaji Seok, Myung Kyung Lee
Abstract:
Background: Even when patients survive a stroke, they may experience disabilities in mobility, sensation, cognition, and speech and language. Stroke patients require rehabilitation for functional recovery and daily living over a considerable time, and during rehabilitation the role of caregivers is important. However, stroke patients’ quality of life may deteriorate because of family caregivers’ lack of preparedness and increased role burden. Purpose: To investigate how caregivers’ preparedness and role burden predict stroke patients’ quality of life. Methods: The target population was stroke patients hospitalized for rehabilitation and their family caregivers. A total of 153 patient–family caregiver dyads were recruited from June to August 2021. Data were collected from self-reported questionnaires and analyzed using descriptive statistics, t-tests, the chi-squared test, one-way analysis of variance, Pearson’s correlation coefficients, and multiple regression with SPSS Statistics 28. Results: Family caregivers’ preparedness affected stroke patients’ mobility (β = .20, p < 0.05), character (β = -.084, p < 0.05), and production activities (β = -.197, p < 0.05) in quality of life. The role burden of family caregivers affected language skills (β = .310, p < 0.05), visual functions (β = -.357, p < 0.05), thinking skills (β = 0.443, p = 0.05), mood conditions (β = 0.565, p < 0.001), family roles (β = -0.361, p < 0.001), and social roles (β = -0.304, p < 0.001), while the caregivers’ burden of performing self-protection negatively affected patients’ social roles (β = .180, p = .048). In addition, caregivers’ role burden of personal life sacrifice affected patients’ mobility (β = .311, p < 0.05), self-care (β = .232, p < 0.05), and energy (β = .239, p < 0.05). Conclusion: This study indicated that family caregivers' preparedness and role burden affected stroke patients’ quality of life. The results suggest that interventions to improve family caregivers’ preparedness and reduce their role burden are required to support quality of life in stroke patients.
Keywords: quality of life, preparedness, role burden, caregivers, stroke
Procedia PDF Downloads 210
527 Prediction of Antibacterial Peptides against Propionibacterium acnes from the Peptidomes of Achatina fulica Mucus Fractions
Authors: Suwapitch Chalongkulasak, Teerasak E-Kobon, Pramote Chumnanpuen
Abstract:
Acne vulgaris is a common skin disease mainly caused by the Gram-positive pathogenic bacterium Propionibacterium acnes, which stimulates the inflammation process in human sebaceous glands. The giant African snail (Achatina fulica) is an alien species that reproduces rapidly and seriously damages agricultural products in Thailand, and several studies have reported medical and pharmaceutical benefits of peptides and proteins from its mucus. This study aimed to predict, in silico, multifunctional bioactive peptides from the A. fulica mucus peptidome using several bioinformatic tools for the determination of antimicrobial (iAMPpred), anti-biofilm (dPABBs), cytotoxic (Toxinpred), cell-membrane-penetrating (CPPpred), and anti-quorum-sensing (QSPpred) peptides. Three candidate peptides with the highest predictive scores were selected and re-designed/modified to improve the required activities. Structural and physicochemical properties of the six anti-P. acnes (APA) peptide candidates were characterized with the PEP-FOLD3 program and the five aforementioned tools. All candidates had a random-coil structure and were named APA1-ori, APA2-ori, APA3-ori, APA1-mod, APA2-mod, and APA3-mod. To validate APA activity, the peptide candidates were synthesized and tested against six isolates of P. acnes. The modified APA peptides showed high APA activity on some isolates. Our biomimetic mucus peptides could therefore be useful for preventing acne vulgaris and can be further examined for other activities relevant to medical and pharmaceutical applications.
Keywords: Propionibacterium acnes, Achatina fulica, peptidomes, antibacterial peptides, snail mucus
Procedia PDF Downloads 133
526 Fire and Explosion Consequence Modeling Using Fire Dynamic Simulator: A Case Study
Authors: Iftekhar Hassan, Sayedil Morsalin, Easir A Khan
Abstract:
Fire accidents have occurred frequently in recent times, and their causes show so much variety that intervention methods and risk assessment strategies must be tailored to each case. On September 4, 2020, a fire and explosion occurred in a confined space, caused by a methane gas leak from an underground pipeline, in the Baitus Salat Jame mosque during night (Esha) prayer in Narayanganj District, Bangladesh, killing 34 people. In this research, this incident is simulated using the Fire Dynamics Simulator (FDS) software to analyze and understand the nature of the accident and the associated consequences. FDS is an advanced computational fluid dynamics (CFD) code for fire-driven fluid flow that numerically solves a large-eddy-simulation form of the Navier–Stokes equations to simulate fire and smoke spread and to predict thermal radiation, toxic-substance concentrations, and other relevant fire parameters. This study focuses on understanding the nature of the fire and evaluating the consequences of the thermal radiation caused by the vapor cloud explosion. An evacuation model was constructed to visualize the effect of evacuation time and the fractional effective dose (FED) for different types of agents. The results are presented as 3D animations, slice views, and graphs to illustrate the fire hazards caused by thermal radiation and smoke due to the vapor cloud explosion. This study will help to design and develop appropriate response strategies for preventing similar accidents.
Keywords: consequence modeling, fire and explosion, fire dynamics simulation (FDS), thermal radiation
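The evacuation analysis above relies on the fractional effective dose (FED). As a hedged illustration of the dose-ratio idea only (FDS implements a full multi-gas incapacitation model; the single-species form, the CO history, and the threshold below are all hypothetical):

```python
def fractional_effective_dose(concentrations_ppm, dt_s, incapacitating_dose_ppm_min):
    """Simplified FED: accumulated exposure (concentration x time) divided by
    the exposure dose expected to incapacitate. FED >= 1 marks incapacitation.
    Generic single-species form, not FDS's full multi-gas model."""
    dose_ppm_min = sum(c * dt_s / 60.0 for c in concentrations_ppm)
    return dose_ppm_min / incapacitating_dose_ppm_min

# Hypothetical CO concentration history sampled every 10 s over 2 minutes
co_ppm = [500, 1500, 3000, 6000, 9000, 12000] * 2
fed = fractional_effective_dose(co_ppm, dt_s=10, incapacitating_dose_ppm_min=45000)
```

An FED below 1 indicates the occupant is predicted to escape before incapacitation for this exposure; the evacuation model compares such values against evacuation times.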
Procedia PDF Downloads 226
525 Ontology-Driven Knowledge Discovery and Validation from Admission Databases: A Structural Causal Model Approach for Polytechnic Education in Nigeria
Authors: Bernard Igoche Igoche, Olumuyiwa Matthew, Peter Bednar, Alexander Gegov
Abstract:
This study presents an ontology-driven approach for knowledge discovery and validation from admission databases in Nigerian polytechnic institutions. The research aims to address the challenges of extracting meaningful insights from vast amounts of admission data and utilizing them for decision-making and process improvement. The proposed methodology combines the knowledge discovery in databases (KDD) process with a structural causal model (SCM) ontological framework. The admission database of Benue State Polytechnic Ugbokolo (Benpoly) is used as a case study. The KDD process is employed to mine and distill knowledge from the database, while the SCM ontology is designed to identify and validate the important features of the admission process. The SCM validation is performed using the conditional independence test (CIT) criteria, and an algorithm is developed to implement the validation process. The identified features are then used for machine learning (ML) modeling and prediction of admission status. The results demonstrate the adequacy of the SCM ontological framework in representing the admission process and the high predictive accuracies achieved by the ML models, with k-nearest neighbors (KNN) and support vector machine (SVM) achieving 92% accuracy. The study concludes that the proposed ontology-driven approach contributes to the advancement of educational data mining and provides a foundation for future research in this domain.
Keywords: admission databases, educational data mining, machine learning, ontology-driven knowledge discovery, polytechnic education, structural causal model
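For the classifier side, a k-nearest-neighbors prediction of admission status can be sketched in a few lines (the feature pairs and labels below are invented for illustration; the study's actual features come from the SCM-validated admission database):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote of its k nearest training points
    (squared Euclidean distance). Illustrative sketch, not the study's pipeline."""
    neighbors = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]

# Hypothetical feature vectors: (exam score, age) -> admission status
train = [((65, 18), "admitted"), ((70, 19), "admitted"),
         ((40, 22), "rejected"), ((35, 20), "rejected"), ((68, 21), "admitted")]
status = knn_predict(train, (66, 19), k=3)   # nearest three are all "admitted"
```

In practice the features would first pass the CIT-based SCM validation described above, so only causally relevant attributes reach the classifier.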
Procedia PDF Downloads 64
524 Modified Clusterwise Regression for Pavement Management
Authors: Mukesh Khadka, Alexander Paz, Hanns de la Fuente-Mella
Abstract:
Typically, pavement performance models are developed in two steps: (i) pavement segments with similar characteristics are grouped together to form a cluster, and (ii) the corresponding performance models are developed using statistical techniques. A challenge is to select the characteristics that define the clusters and the segments associated with them. If inappropriate characteristics are used, clusters may include homogeneous segments with different performance behavior or heterogeneous segments with similar performance behavior. The prediction accuracy of performance models can be improved by grouping the pavement segments into more uniform clusters based on both characteristics and a performance measure. This grouping is not always possible due to limited information, and it is impractical to include all the potentially significant factors because some of them are unobserved or difficult to measure. The historical performance of pavement segments can be used as a proxy to incorporate the effect of these missing factors in the clustering process. The current state of the art proposes Clusterwise Linear Regression (CLR) to determine the pavement clusters and the associated performance models simultaneously; CLR incorporates the effect of significant factors as well as a performance measure. In this study, a mathematical program was formulated for CLR models including multiple explanatory variables. Pavement data collected recently over the entire state of Nevada were used. The International Roughness Index (IRI) was used as the pavement performance measure because it serves as a unified standard that is widely accepted for evaluating pavement performance, especially in terms of riding quality. The results illustrate the advantage of using CLR. Previous studies have used CLR with experimental data; this study uses actual field data collected across a variety of environmental, traffic, design, and construction and maintenance conditions.
Keywords: clusterwise regression, pavement management system, performance model, optimization
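The cluster-and-regress idea behind CLR can be illustrated with a toy alternating procedure (the study formulates CLR as a single mathematical program; this sketch, with invented data standing in for segment condition vs. IRI, only shows why joint clustering and regression recovers distinct performance models):

```python
def fit_line(pts):
    """Ordinary least squares for y = a + b*x."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - b * sx) / n, b

def clusterwise_regression(pts, k=2, iters=10):
    """Alternate between assigning each point to its best-fitting line and
    refitting the lines. Assumes no cluster ever empties (true for this toy)."""
    groups = [pts[i::k] for i in range(k)]              # crude initial split
    for _ in range(iters):
        lines = [fit_line(g) for g in groups]
        groups = [[] for _ in range(k)]
        for x, y in pts:
            j = min(range(k), key=lambda i: (y - lines[i][0] - lines[i][1] * x) ** 2)
            groups[j].append((x, y))
    return [fit_line(g) for g in groups]

# Two invented segment populations with opposite deterioration trends
pts = [(0, 0), (0, 10), (1, 2), (1, 9), (2, 4), (2, 8), (3, 6), (3, 7)]
lines = clusterwise_regression(pts)     # ≈ [(0.0, 2.0), (10.0, -1.0)]
```

A single pooled regression over these eight points would fit neither trend; the clusterwise fit recovers both lines exactly.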
Procedia PDF Downloads 251
523 A Cephalometric Superimposition of a Skeletal Class III Orthognathic Patient on Nasion-Sella Line
Authors: Albert Suryaprawira
Abstract:
The Nasion-Sella Line (NSL) has been used for many years as a reference line in longitudinal growth studies. It is considered stable and is therefore suitable not only for evaluating treatment outcome and predicting the possibility of relapse but also for managing prognosis. This is a radiographic superimposition of an adult male, aged 19 years, who complained of difficulty with aesthetics, talking, and chewing. The patient has a midface hypoplasia (concave) profile. He was diagnosed with a severe skeletal Class III pattern with Class III malocclusion, increased lower vertical height, and an anterior open bite. A pre-treatment cephalometric radiograph was taken to analyse the skeletal problem and to measure the amount of bone movement and the predicted soft tissue response. A panoramic radiograph was also taken to analyse bone quality, bone abnormalities, third molar impaction, etc. Before the surgery, a pre-surgical cephalometric radiograph was taken to re-evaluate the plan and to finalize the amount of bone to be cut. After the surgery, a post-surgical cephalometric radiograph was taken to compare the result with the plan. Superimposition of these radiographs using NSL as the reference line was performed to analyse the outcome. It is important to describe the amount of hard and soft tissue movement and to predict the possibility of relapse after surgery. The patient also needs to understand the surgical plan, the expected outcome, and relapse prevention. Surgical management included maxillary impaction and advancement with a Le Fort I osteotomy. Evaluation using NSL as a reference proved a very useful method for determining outcome and prognosis.
Keywords: Nasion-Sella Line, midface hypoplasia, Le Fort I, maxillary advancement
Procedia PDF Downloads 142
522 External Store Safe Separation Evaluation Process Implementing CFD and MIL-HDBK-1763
Authors: Thien Bach Nguyen, Nhu-Van Nguyen, Phi-Minh Nguyen, Minh Hien Dao
Abstract:
An external store safe separation evaluation process implementing CFD and MIL-HDBK-1763 is proposed to support the evaluation of, and compliance with, external store safe separation requirements through extensive use of CFD and the criteria from MIL-HDBK-1763. The safe separation criteria are investigated across various standards and handbooks, such as MIL-HDBK-1763, MIL-HDBK-244A, AGARD-AG-202, and AGARD-AG-300, to acquire appropriate, tailored values and limits for typical applications of external carriages and fighter aircraft. CFD and 6DOF simulations are used extensively in ANSYS 2023 R1 software; the moving unstructured meshes and solvers are verified and validated by calibrating the position, aerodynamic forces, and moments of existing air-to-ground missile models. The verified CFD and 6DOF separation simulation process is then applied to the investigation of typical munition separation phenomena and to compliance with the tailored requirements of MIL-HDBK-1763. The prediction of munition trajectory parameters after separation, under aircraft aerodynamic interference and for the specified rack unit, is provided and checked against the tailored requirements to support the safe separation evaluation of improved and new external store munitions before flight tests are performed. The proposed process demonstrates effectiveness and reliability in providing an understanding of complicated store separation and in reducing the number of flight test sorties during improved and new munition development projects, by using CFD extensively and tailoring existing standards.
Keywords: external store separation, MIL-HDBK-1763, CFD, moving meshes, flight test data, munition
Procedia PDF Downloads 24
521 Development of an Implicit Physical Influence Upwind Scheme for Cell-Centered Finite Volume Method
Authors: Shidvash Vakilipour, Masoud Mohammadi, Rouzbeh Riazi, Scott Ormiston, Kimia Amiri, Sahar Barati
Abstract:
An essential component of a finite volume method (FVM) is the advection scheme that estimates values on the cell faces based on the calculated values at the nodes or cell centers. The most widely used advection schemes are upwind schemes, which have been developed for FVM on various kinds of structured and unstructured grids. In this research, the physical influence scheme (PIS) is developed for a cell-centered FVM that uses an implicit coupled solver. Results are compared with the exponential differencing scheme (EDS) and the skew upwind differencing scheme (SUDS). The accuracy of these schemes is evaluated for lid-driven cavity flow at Re = 1000, 3200, and 5000 and for backward-facing step flow at Re = 800. Simulations show considerable differences between the results of the EDS scheme and the benchmarks, especially for the lid-driven cavity flow at high Reynolds numbers; these differences arise from false diffusion. Comparing the SUDS and PIS schemes shows relatively close results for the backward-facing step flow but different results for the lid-driven cavity flow. The poor results of SUDS in the lid-driven cavity flow can be attributed to its lack of sensitivity to the pressure difference between the cell face and upwind points, which is critical for the prediction of such vortex-dominated flows.
Keywords: cell-centered finite volume method, coupled solver, exponential differencing scheme (EDS), physical influence scheme (PIS), pressure weighted interpolation method (PWIM), skew upwind differencing scheme (SUDS)
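For readers unfamiliar with upwinding, the simplest member of the family, first-order upwind for 1D linear advection, shows how face values are taken from the upwind side (this generic scheme is not the paper's PIS, EDS, or SUDS):

```python
def upwind_step(u, c, dx, dt):
    """One explicit time step of 1D linear advection u_t + c*u_x = 0 with the
    first-order upwind scheme and periodic boundaries."""
    n = len(u)
    nu = c * dt / dx                        # Courant number; stability needs |nu| <= 1
    if c >= 0:
        # upwind neighbor is to the left (u[-1] wraps around, giving periodicity)
        return [u[i] - nu * (u[i] - u[i - 1]) for i in range(n)]
    # upwind neighbor is to the right
    return [u[i] - nu * (u[(i + 1) % n] - u[i]) for i in range(n)]

# A unit pulse advected 4 cells to the right at Courant number exactly 1
u = [0.0] * 10
u[2] = 1.0
for _ in range(4):
    u = upwind_step(u, c=1.0, dx=1.0, dt=1.0)
```

At a Courant number of exactly 1 the scheme shifts the profile without smearing; smaller values introduce numerical diffusion, the one-dimensional analogue of the false diffusion discussed above.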
Procedia PDF Downloads 284
520 Peculiarities of Absorption near the Edge of the Fundamental Band of Irradiated InAs-InP Solid Solutions
Authors: Nodar Kekelidze, David Kekelidze, Elza Khutsishvili, Bela Kvirkvelia
Abstract:
Semiconductor devices are irreplaceable elements in space research (artificial Earth satellites, interplanetary spacecraft, probes, rockets), in elementary-particle research at accelerators, and in atomic power stations, nuclear reactors, and robots operating on heavily radiation-contaminated territories (Chernobyl, Fukushima). Unfortunately, the most important parameters of semiconductors worsen dramatically under irradiation, so the creation of radiation-resistant semiconductor materials for opto- and microelectronic devices is a pressing problem, as is the investigation of the complicated processes that develop in irradiated solids. Homogeneous single crystals of InP-InAs solid solutions were grown by the zone-melting method, and the dependence of the optical absorption coefficient on photon energy near the fundamental absorption edge was studied. This dependence changes dramatically with irradiation. The experiments were performed on InP, InAs, and InP-InAs solid solutions before and after irradiation with electrons and fast neutrons. The optical properties were investigated on an infrared spectrophotometer over the temperature range 10 K to 300 K and the spectral region 1 μm to 50 μm. The radiation fluence was 2·10¹⁸ neutrons/cm² for fast neutrons, and up to 6·10¹⁷ electrons/cm² for electrons with energies of 3 MeV and 50 MeV. Under irradiation, an exponential dependence of the optical absorption coefficient on photon energy, with an energy deficiency, was revealed. This phenomenon takes place at both high and low temperatures, at different impurity concentrations, and in practically all cases of irradiation by electrons of various energies and by fast neutrons. We have developed a common mechanism for this phenomenon in unirradiated materials and carried out quantitative calculations of the distinctive parameter, which are in satisfactory agreement with the experimental data. For irradiated crystals the picture becomes more complicated, and the corresponding analysis is carried out in this work. It is shown that in the case of InP irradiated with electrons (Φ = 1·10¹⁷ el/cm²), the optical absorption curve shifts to lower energies. This is caused by the appearance of tails in the density of states in the forbidden band due to local fluctuations of the ionized impurity (defect) concentration. The situation is more complicated for InAs and for solid solutions with compositions close to InAs, where, besides the phenomenon noted above, the Burstein effect occurs, caused by the increase of electron concentration as a result of irradiation. We have shown that under certain conditions the Burstein effect can prevail, which produces the opposite result: a shift of the optical absorption edge to higher energies. Thus, in these solid solutions two opposite processes take place. By selecting the composition of the solid solution and the doping impurity, we obtained an InP-InAs solid solution in which the displacements of the optical absorption curves mutually compensate under irradiation. This result makes it possible to create radiation-resistant optical materials based on InP-InAs solid solutions. Conclusion: the nature of the optical absorption near the fundamental edge in these semiconductor materials was established, and a radiation-resistant optical material was created.
Keywords: InAs-InP, electron concentration, irradiation, solid solutions
Procedia PDF Downloads 201
519 Statistical Model of Water Quality in Estero El Macho, Machala-El Oro
Authors: Rafael Zhindon Almeida
Abstract:
Surface water quality is an important concern, and its evaluation and prediction matter for managing water quality conditions. The objective of this study is to develop a statistical model that can accurately predict the water quality of the El Macho estuary in the city of Machala, El Oro province. The methodology is basic research involving a thorough review of theoretical foundations to improve the understanding of statistical modeling for water quality analysis. The research design is correlational, using a multivariate statistical model involving multiple linear regression and principal component analysis. The results indicate that water quality parameters such as fecal coliforms, biochemical oxygen demand, chemical oxygen demand, iron, and dissolved oxygen exceed the allowable limits, so the water of the El Macho estuary falls below the required water quality criteria. The multiple linear regression model, based on chemical oxygen demand and total dissolved solids, explains 99.9% of the variance of the dependent variable. In addition, principal component analysis shows that the model has an explanatory power of 86.242%. The study thus developed a statistical model to evaluate the water quality of the El Macho estuary; the estuary did not meet the water quality criteria, with several parameters exceeding the allowable limits. The multiple linear regression model and principal component analysis provide valuable information on the relationships between the various water quality parameters. The findings emphasize the need for immediate action to improve the water quality of the El Macho estuary to ensure the preservation and protection of this valuable natural resource.
Keywords: statistical modeling, water quality, multiple linear regression, principal components, statistical models
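The regression step can be reproduced in miniature with ordinary least squares via the normal equations (the measurements below are invented; only COD and TDS are used as predictors, mirroring the two-predictor model in the abstract):

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_mlr(X, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 via the normal equations."""
    Z = [[1.0] + list(row) for row in X]          # prepend intercept column
    XtX = [[sum(a[i] * a[j] for a in Z) for j in range(3)] for i in range(3)]
    Xty = [sum(a[i] * yi for a, yi in zip(Z, y)) for i in range(3)]
    return solve(XtX, Xty)

# Hypothetical measurements: [COD, TDS]; response generated as 2 + 0.5*COD - 0.1*TDS
X = [[10, 100], [20, 150], [30, 120], [40, 200], [50, 180]]
y = [2 + 0.5 * cod - 0.1 * tds for cod, tds in X]
b0, b1, b2 = fit_mlr(X, y)
```

Because the response is generated exactly from the model, the fit recovers the coefficients 2, 0.5, and -0.1; real water-quality data would also leave a residual term.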
Procedia PDF Downloads 98
518 Avoidance of Brittle Fracture in Bridge Bearings: Brittle Fracture Tests and Initial Crack Size
Authors: Natalie Hoyer
Abstract:
Bridges in both roadway and railway systems depend on bearings to ensure an extended service life and functionality. These bearings enable proper load distribution from the superstructure to the substructure while permitting controlled movement of the superstructure. The design of bridge bearings according to Eurocode DIN EN 1337 and the relevant sections of DIN EN 1993 increasingly requires the use of thick plates, especially for long-span bridges; however, these plate thicknesses exceed the limits specified in the national annex of DIN EN 1993-2. Furthermore, compliance with the DIN EN 1993-1-10 regulations on material toughness and through-thickness properties necessitates further modifications. Consequently, these standards cannot be applied directly to the selection of bearing materials without supplementary guidance and design rules. In this context, a recommendation was developed in 2011 to regulate the selection of appropriate steel grades for bearing components. Prior to the research project underlying this contribution, this recommendation had only been available as a technical bulletin; since July 2023, it has been integrated into guideline 804 of the German railway. However, recent findings indicate that certain bridge-bearing components are exposed to high fatigue loads, which must be considered in structural design, material selection, and calculations. The German Centre for Rail Traffic Research therefore commissioned a research project with the objective of proposing an extension of the current standards so that steel materials for bridge bearings can be chosen to avoid brittle fracture, even for thick plates and components subjected to specific fatigue loads. The results obtained from theoretical considerations, such as finite element simulations and analytical calculations, are validated through large-scale component tests. Additionally, the experimental observations are used to calibrate the calculation models and modify the input parameters of the design concept. Within the large-scale component tests, a brittle failure is artificially induced in a bearing component. For this purpose, an artificially generated initial defect is introduced by spark erosion at a previously defined hotspot in the specimen. A dynamic load is then applied until crack initiation occurs, to achieve realistic conditions in the form of a sharp notch similar to a fatigue crack; this initiation process continues until the crack reaches a predetermined length. Afterward, the actual test begins: the specimen is cooled with liquid nitrogen until a temperature is reached at which brittle fracture is expected, and the component is then subjected to a quasi-static tensile test until brittle fracture occurs. The paper presents the latest research findings, including the results of the component tests and the derived definition of the initial crack size in bridge bearings.
Keywords: bridge bearings, brittle fracture, fatigue, initial crack size, large-scale tests
Procedia PDF Downloads 44
517 Numerical Investigation of the Transverse Instability in Radiation Pressure Acceleration
Authors: F. Q. Shao, W. Q. Wang, Y. Yin, T. P. Yu, D. B. Zou, J. M. Ouyang
Abstract:
The Radiation Pressure Acceleration (RPA) mechanism is very promising for laser-driven ion acceleration because of its high laser-to-ion energy conversion efficiency. Although some experiments have shown the characteristics of RPA, the ion energy obtained is quite limited: only several MeV/u, much lower than theoretical predictions. One possible limiting factor is the transverse instability excited in the RPA process. The transverse instability is basically considered a Rayleigh-Taylor (RT) instability, a kind of interfacial instability that occurs when a light fluid pushes against a heavy fluid. Multi-dimensional particle-in-cell (PIC) simulations show that the onset of the transverse instability destroys the acceleration process and broadens the energy spectrum of fast ions during RPA-dominated ion acceleration. Evidence of the RT instability driven by radiation pressure has been observed in a laser-foil interaction experiment in a typical RPA regime, where the dominant scale of the RT instability is close to the laser wavelength. Here, the development of the transverse instability in RPA-dominated laser-foil interaction is examined numerically by two-dimensional particle-in-cell simulations. When a laser interacts with a foil with a modulated surface, the instability is quickly excited and develops. The linear growth and saturation of the transverse instability are observed, and the growth rate is diagnosed numerically. To optimize the interaction parameters, a method based on information entropy is put forward to describe the degree of chaos of the transverse instability. With moderate modulation, the transverse instability shows a low degree of chaos, and a quasi-monoenergetic proton beam is produced.
Keywords: information entropy, radiation pressure acceleration, Rayleigh-Taylor instability, transverse instability
Procedia PDF Downloads 345
516 Localization of Pyrolysis and Burning of Ground Forest Fires
Authors: Pavel A. Strizhak, Geniy V. Kuznetsov, Ivan S. Voytkov, Dmitri V. Antonov
Abstract:
This paper presents the results of experiments carried out at a specialized test site to establish the macroscopic patterns of heat and mass transfer processes when localizing model combustion sources of ground forest fires with barrier lines, i.e., a wetted layer of material in front of the zone of flame burning and thermal decomposition. The experiments were performed using needles, leaves, twigs, and mixtures thereof. The dimensions of the model combustion source and the ranges of heat release correspond well to the real conditions of ground forest fires. The main attention is paid to a comprehensive analysis of the effect of the dispersion of the water aerosol (droplet concentration and size) used to form the barrier line. It is shown that effective conditions for the localization and subsequent suppression of flame combustion and thermal decomposition of forest fuel can be achieved by creating a group of barrier lines with different wetting widths and depths of the material. Relative indicators of the effectiveness of single and combined barrier lines were established, taking into account all the main characteristics of the processes of suppressing the burning and thermal decomposition of forest combustible materials. We predicted the necessary and sufficient parameters of the barrier lines (water volume, width and depth of the wetted layer of material, specific irrigation density) for combustion sources of different dimensions, corresponding to real fire-extinguishing practice.
Keywords: forest fire, barrier water lines, pyrolysis front, flame front
Procedia PDF Downloads 133
515 A Deep Learning Approach to Calculate Cardiothoracic Ratio From Chest Radiographs
Authors: Pranav Ajmera, Amit Kharat, Tanveer Gupte, Richa Pant, Viraj Kulkarni, Vinay Duddalwar, Purnachandra Lamghare
Abstract:
The cardiothoracic ratio (CTR) is the ratio of the diameter of the heart to the diameter of the thorax. An abnormal CTR, that is, a value greater than 0.55, is often an indicator of an underlying pathological condition, so the accurate prediction of an abnormal CTR from chest X-rays (CXRs) aids in the early diagnosis of clinical conditions. We propose a deep learning-based model for automatic CTR calculation that can assist the radiologist with the diagnosis of cardiomegaly and optimize the radiology workflow. The study population included 1012 posteroanterior (PA) CXRs from a single institution. The Attention U-Net deep learning (DL) architecture was used for the automatic calculation of the CTR, with a CTR of 0.55 as the cut-off for categorizing cardiomegaly as present or absent. An observer performance test was conducted to assess the radiologist's performance in diagnosing cardiomegaly with and without artificial intelligence (AI) assistance. The Attention U-Net model was highly specific in calculating the CTR, exhibiting a sensitivity of 0.80 [95% CI: 0.75, 0.85], a precision of 0.99 [95% CI: 0.98, 1], and an F1 score of 0.88 [95% CI: 0.85, 0.91]. During the analysis, 51 out of 1012 samples were misclassified by the model when compared with annotations made by the expert radiologist. We further observed that the sensitivity of the reviewing radiologist in identifying cardiomegaly increased from 40.50% to 88.4% when aided by the AI-generated CTR. Our segmentation-based AI model demonstrated high specificity and sensitivity for CTR calculation, and the performance of the radiologist on the observer performance test improved significantly with AI assistance. A DL-based segmentation model for rapid quantification of the CTR therefore has significant potential for use in clinical workflows.
Keywords: cardiomegaly, deep learning, chest radiograph, artificial intelligence, cardiothoracic ratio
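Once the heart and thorax are segmented, the CTR itself is a simple post-processing step. A sketch under the assumption that the network outputs binary masks (the tiny masks below are synthetic):

```python
def max_width(mask):
    """Widest run of foreground pixels across the rows of a binary mask."""
    best = 0
    for row in mask:
        cols = [j for j, v in enumerate(row) if v]
        if cols:
            best = max(best, cols[-1] - cols[0] + 1)
    return best

def cardiothoracic_ratio(heart_mask, thorax_mask):
    """CTR = maximal transverse heart width / maximal transverse thorax width."""
    return max_width(heart_mask) / max_width(thorax_mask)

# Synthetic 4x10 masks: thorax spans the full 10-pixel width, heart spans 6 pixels
thorax = [[1] * 10 for _ in range(4)]
heart = [[0] * 10 for _ in range(4)]
for row in heart[1:3]:
    row[2:8] = [1] * 6

ratio = cardiothoracic_ratio(heart, thorax)   # 6 / 10 = 0.6 -> above the 0.55 cut-off
```

With the 0.55 cut-off from the study, this example would be flagged as cardiomegaly; the real pipeline measures the same widths on Attention U-Net masks of full-resolution CXRs.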
Procedia PDF Downloads 98
514 Automated Fact-Checking by Incorporating Contextual Knowledge and Multi-Faceted Search
Authors: Wenbo Wang, Yi-Fang Brook Wu
Abstract:
The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, the state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study introduces a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context using general-purpose, comprehensive, and authoritative data; 2) developing a search function to automatically select relevant, new, and credible references; 3) focusing on the parts of the representations of a claim and its reference that are most relevant to the fact-checking task. The experimental results demonstrate that 1) augmenting the representations of claims and references through the use of a knowledge base, combined with the multi-head attention technique, contributes to improved fact-checking performance; 2) SAC with auto-selected references outperforms existing fact-checking approaches with manually selected references. Future directions of this study include I) exploring knowledge graphs in Wikidata to dynamically augment the representations of claims and references without introducing too much noise, and II) exploring semantic relations in claims and references to further enhance fact-checking.
Keywords: fact checking, claim verification, deep learning, natural language processing
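The attention mechanism the abstract leans on weights the parts of a claim/reference representation by relevance. A minimal single-head, scaled dot-product sketch (plain lists standing in for tensors; this illustrates the mechanism, not the SAC framework itself):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def scaled_dot_product_attention(queries, keys, values):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V — the building block
    of multi-head attention used to weight claim/reference parts."""
    d_k = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

A query aligned with one key pulls the output toward that key's value; a zero query weights all values equally.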
Procedia PDF Downloads 62
513 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network
Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin
Abstract:
The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that LSTM can predict the water level of Nokoué Lake up to a forecast horizon of t+10 days. Performance metrics such as Root Mean Square Error (RMSE), coefficient of correlation (R²), Nash-Sutcliffe Efficiency (NSE), and Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days. The values of these metrics remain stable for forecast horizons of t+1 days, t+2 days, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on the evaluation indices used to assess the model's performance for the appropriate forecast horizon of water level in the Nokoué Lake basin, the forecast horizon of t+3 days is chosen for predicting future daily water levels.
Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake
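The skill scores named above (RMSE, MAE, NSE) have standard definitions that can be sketched directly; an NSE of 1 is a perfect fit, and 0 means the model is no better than predicting the observed mean:

```python
import math

def rmse(obs, sim):
    """Root Mean Square Error."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mae(obs, sim):
    """Mean Absolute Error."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance-about-mean of observations."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den
```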
Procedia PDF Downloads 64
512 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes
Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo
Abstract:
Doctors’ notes reflect their impressions, attitudes, clinical sense, and opinions about patients’ conditions and progress, as well as other information that is essential for doctors’ daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community. Automatically extracting information from unstructured text data is known to be a difficult task, as opposed to dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors’ decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes like survival or mortality can be useful in influencing the judgement of clinical sentiment in ICU clinical notes. This paper introduces two contributions: first, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of note not previously seen by clinicalBERT and Bio_Discharge_Summary_BERT. The model, which is based on clinicalBERT, achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, and precision 0.96). Second, we perform data augmentation using clinical contextual word embedding based on a pre-trained clinical model to balance the samples in each class of the data (survived vs. deceased patients). Data augmentation improves the accuracy of prediction slightly (Acc 96.67%, AUC 0.98, and precision 0.92).
Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation
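The class-balancing step can be sketched in its simplest form. The paper balances classes with contextual-embedding augmentation; plain random duplication of minority-class samples, shown below, is only a stand-in for that step:

```python
import random
from collections import Counter

def oversample_minority(texts, labels, seed=42):
    """Duplicate random minority-class samples until both classes
    (e.g. survived vs. deceased) are the same size."""
    rng = random.Random(seed)
    counts = Counter(labels)
    (maj_label, n_maj), (min_label, n_min) = counts.most_common()
    pool = [t for t, l in zip(texts, labels) if l == min_label]
    out_texts, out_labels = list(texts), list(labels)
    for _ in range(n_maj - n_min):
        out_texts.append(rng.choice(pool))
        out_labels.append(min_label)
    return out_texts, out_labels
```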
Procedia PDF Downloads 207
511 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques
Authors: Gizem Eser Erdek
Abstract:
This study investigates predicting the remaining life of industrial cutting tools used in the industrial production process with deep learning methods. When the life of cutting tools decreases, they damage the raw material they are processing. The aim is to predict the remaining life of the cutting tool based on the damage the cutting tools cause to the raw material. For this, hole photos were collected from the hole-drilling machine for 8 months. Photos were labeled in 5 classes according to hole quality. In this way, the problem was transformed into a classification problem. Using the prepared data set, a model was created with convolutional neural networks, which is a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the data set. A hybrid model using convolutional neural networks and support vector machines was also used for comparison. When all models are compared, the model using convolutional neural networks gives the best results, with a 74% accuracy rate. In preliminary studies, the data set was arranged to include only the best and worst classes, and the study gave ~93% accuracy when the binary classification model was applied. The results of this study showed that the remaining life of cutting tools can be predicted by deep learning methods based on the damage to the raw material. Experiments have proven that deep learning methods can be used as an alternative for cutting tool life estimation.
Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VGGNet
Procedia PDF Downloads 77
510 Peculiarities of Internal Friction and Shear Modulus in 60Co γ-Rays Irradiated Monocrystalline SiGe Alloys
Authors: I. Kurashvili, G. Darsavelidze, T. Kimeridze, G. Chubinidze, I. Tabatadze
Abstract:
At present, a number of modern semiconductor devices based on SiGe alloys have been created in which the latest achievements of high technologies are used. These devices might cause significant changes to networking, computing, and space technology. In the near future, new materials based on SiGe will be able to compete with A3B5 and Si technologies and firmly establish themselves in medium-frequency electronics. Effective realization of these prospects requires predicting and controlling the structural state and the dynamic physical-mechanical properties of new SiGe materials. Under these circumstances, a complex investigation of structural defects and structure-sensitive dynamic mechanical characteristics of SiGe alloys under different external impacts (deformation, radiation, thermal cycling) acquires great importance. The temperature and amplitude dependences of internal friction (IF) and shear modulus of monocrystalline boron-doped Si1-xGex (x ≤ 0.05) alloys grown by the Czochralski technique are studied in the initial and 60Co gamma-irradiated states. In the initial samples, a set of relaxation processes of dislocation origin and accompanying modulus defects are revealed in the temperature interval of 400-800 °C. It is shown that after gamma-irradiation the intensity of relaxation internal friction in the vicinity of 280 °C increases, and simultaneously the activation parameters of the high-temperature relaxation processes rise clearly. It is proposed that these changes in the dynamic mechanical characteristics might be caused by a decrease in dislocation mobility in the Cottrell atmosphere enriched by radiation defects.
Keywords: internal friction, shear modulus, gamma-irradiation, SiGe alloys
Procedia PDF Downloads 143
509 Application Difference between Cox and Logistic Regression Models
Authors: Idrissa Kayijuka
Abstract:
The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model time-to-event data in the presence of censored cases, whereas the logistic regression model applies when the independent variables consist of numerical as well as nominal values and the outcome variable is binary (dichotomous). Arguments and findings of many researchers have focused on the overview of the Cox and logistic regression models and their different applications in different areas. In this work, the analysis is done on secondary data from an SPSS exercise data set on breast cancer with a sample size of 1121 women; the main objective is to show the application difference between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some of the analysis (e.g., on lymph node status) was done manually, and SPSS software was used to analyze the rest of the data. This study found that there is an application difference between the Cox and logistic regression models: the Cox regression model is used to analyze data that include follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio and the odds ratio for the Cox and logistic regression models, respectively. A similarity between the two models is that both are applicable to predicting the outcome of a categorical variable, i.e., a variable that can accommodate only a restricted number of categories.
In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models are suitable methods for analyzing data and can be applied in many other studies, but the Cox regression model is the more recommended of the two.
Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio
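The two measures of association contrasted above can be sketched from first principles: the odds ratio from a 2x2 table (logistic regression's measure) and a crude rate ratio that, like the Cox hazard ratio, compares event rates per unit of follow-up time (a simplification, not the fitted-model estimates):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    return (a * d) / (b * c)

def rate_ratio(events_1, person_time_1, events_0, person_time_0):
    """Crude rate ratio: events per person-time in one group over the other;
    the Cox model's hazard ratio is the adjusted analogue of this rate-based
    comparison, which is why follow-up time is required."""
    return (events_1 / person_time_1) / (events_0 / person_time_0)
```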
Procedia PDF Downloads 455
508 Comparison of Wake Oscillator Models to Predict Vortex-Induced Vibration of Tall Chimneys
Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta
Abstract:
The present study compares the semi-empirical wake-oscillator models used to predict vortex-induced vibration of structures, including those proposed by Facchinetti, by Farshidian and Dolatabadi, and by Skop and Griffin. These models combine a wake oscillator resembling the Van der Pol oscillator with a single-degree-of-freedom oscillation model. In order to use these models for estimating the top displacement of chimneys, only the first vibration mode of the chimney is considered. The modal equation of the chimney constitutes the single-degree-of-freedom (SDOF) model. The equations of the wake oscillator and the SDOF model are solved simultaneously using an iterative procedure. The empirical parameters used in the wake-oscillator models are estimated using a newly developed approach, and the response is compared with experimental data, with which it agrees reasonably well. The iterative solution is carried out with the ODE solver of MATLAB. For the comparative study, a tall concrete chimney of height 210 m has been chosen, with a base diameter of 28 m, a top diameter of 20 m, and a thickness of 0.3 m. The responses of the chimney are also determined using the linear model proposed by E. Simiu and the deterministic model given in the Eurocode. It is observed from the comparative study that the responses predicted by the Facchinetti model and the model proposed by Skop and Griffin are nearly the same, while the model proposed by Farshidian and Dolatabadi predicts a higher response. The linear model, which does not consider the aero-elastic phenomenon, provides a smaller response than the non-linear models. Further, for large damping, the response predicted by the Eurocode compares relatively well with those of the non-linear models.
Keywords: chimney, deterministic model, Van der Pol, vortex-induced vibration
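The self-limiting behaviour that makes the Van der Pol equation a good wake model can be shown with a minimal RK4 integration of the uncoupled oscillator q'' + ε(q² − 1)q' + q = 0 (the papers above couple this to the chimney's modal SDOF equation; parameter values here are illustrative):

```python
def van_der_pol_limit_cycle(eps=0.2, q0=0.1, dq0=0.0, dt=0.01, t_end=100.0):
    """Integrate the Van der Pol oscillator with classical RK4 and return the
    peak |q| over the final quarter of the run: the limit-cycle amplitude,
    which self-limits near 2 regardless of the initial conditions."""
    def f(q, dq):
        return dq, -eps * (q * q - 1.0) * dq - q

    q, dq = q0, dq0
    n = int(t_end / dt)
    settled = []
    for i in range(n):
        k1 = f(q, dq)
        k2 = f(q + 0.5 * dt * k1[0], dq + 0.5 * dt * k1[1])
        k3 = f(q + 0.5 * dt * k2[0], dq + 0.5 * dt * k2[1])
        k4 = f(q + dt * k3[0], dq + dt * k3[1])
        q += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dq += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        if i > 3 * n // 4:          # keep only the settled portion
            settled.append(abs(q))
    return max(settled)
```

Starting from a small disturbance (q0 = 0.1), the oscillation grows and then saturates; that saturation is what bounds the predicted vortex-induced response.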
Procedia PDF Downloads 221
507 Next Generation Sequencing Analysis of Circulating MiRNAs in Rheumatoid Arthritis and Osteoarthritis
Authors: Khalda Amr, Noha Eltaweel, Sherif Ismail, Hala Raslan
Abstract:
Introduction: Osteoarthritis is the most common form of arthritis and involves the wearing away of the cartilage that caps the bones in the joints, while rheumatoid arthritis is an autoimmune disease in which the immune system attacks the joints, beginning with their lining. In this study, we aimed to identify the top deregulated miRNAs that might contribute to the pathogenesis of both diseases. Methods: Eight cases were recruited: 4 rheumatoid arthritis (RA) patients, 2 osteoarthritis (OA) patients, and 2 healthy controls. Total RNA was isolated from plasma and subjected to miRNA profiling by NGS. Sequencing libraries were constructed using the NEBNext® Ultra™ Small RNA Sample Prep Kit for Illumina® (NEB, USA), according to the manufacturer’s instructions. The quality of the samples was checked using FastQC and MultiQC. Results were compared for RA vs. controls and OA vs. controls. Target gene prediction and functional annotation of the deregulated miRNAs were done using MIENTURNET. The top deregulated miRNAs in each disease were selected for further validation using qRT-PCR. Results: The average number of sequencing reads per sample exceeded 2.2 million, of which approximately 57% were mapped to the human reference genome. The top differentially expressed miRNAs (DEMs) in RA vs. controls were miR-6724-5p, miR-1469, and miR-194-3p (upregulated) and miR-1468-5p and miR-486-3p (downregulated). In comparison, the top DEMs in OA vs. controls were miR-1908-3p, miR-122b-3p, and miR-3960 (upregulated) and miR-1468-5p and miR-15b-3p (downregulated). Functional enrichment of the selected top deregulated miRNAs revealed the highly enriched KEGG pathways and GO terms. Six of the deregulated miRNAs (miR-15b, -128, -194, -328, -542, and -3180) had multiple target genes in the RA pathway, so they are more likely to affect RA pathogenesis. Conclusion: Six of the studied deregulated miRNAs (miR-15b, -128, -194, -328, -542, and -3180) might be highly involved in the disease pathogenesis. Further functional studies are crucial to assess their functions and actual target genes.
Keywords: next generation sequencing, miRNAs, rheumatoid arthritis, osteoarthritis
Procedia PDF Downloads 97
506 Admission C-Reactive Protein Serum Levels and In-Hospital Mortality in the Elderly Admitted to the Acute Geriatrics Department
Authors: Anjelika Kremer, Irina Nachimov, Dan Justo
Abstract:
Background: C-reactive protein (CRP) serum levels are commonly measured in hospitalized patients. The association between elevated admission CRP serum levels and in-hospital mortality has seldom been studied in the general population of elderly patients admitted to the acute Geriatrics department. Methods: A retrospective cross-sectional study was conducted at a tertiary medical center. Included were all elderly patients (aged 65 years or more) admitted to a single acute Geriatrics department from the emergency room between April 2014 and January 2015. CRP serum levels were measured routinely in all patients within the first 24 hours of admission. A logistic regression analysis was used to study whether admission CRP serum levels were associated with in-hospital mortality independent of age, gender, functional status, and co-morbidities. Results: Overall, 498 elderly patients were included in the analysis: 306 (61.4%) female patients and 192 (38.6%) male patients. The mean age was 84.8±7.0 years (median: 85 years; IQR: 80-90 years). The mean admission CRP serum level was 43.2±67.1 mg/l (median: 13.1 mg/l; IQR: 2.8-51.7 mg/l). Overall, 33 (6.6%) elderly patients died during the hospitalization. A logistic regression analysis showed that in-hospital mortality was independently associated with history of stroke (p < 0.0001), heart failure (p < 0.0001), and admission CRP serum levels (p < 0.0001), and to a lesser extent with age (p = 0.042), collagen vascular disease (p = 0.011), and recent venous thromboembolism (p = 0.037). A receiver operating characteristic (ROC) curve showed that admission CRP serum levels predict in-hospital mortality fairly well, with an area under the curve (AUC) of 0.694 (p < 0.0001). The cut-off value with maximal sensitivity and specificity was 19.7 mg/l. Conclusions: Admission CRP serum levels may be used to predict in-hospital mortality in the general population of elderly patients admitted to the acute Geriatrics department.
Keywords: c-reactive protein, elderly, mortality, prediction
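The "cut-off value with maximal sensitivity and specificity" is typically found by maximizing Youden's J along the ROC curve. A minimal sketch of that search (toy data, not the study's 498-patient cohort):

```python
def youden_cutoff(values, labels):
    """Pick the threshold (predict positive when value >= t) that maximizes
    Youden's J = sensitivity + specificity - 1, scanning observed values."""
    pos = [v for v, l in zip(values, labels) if l == 1]
    neg = [v for v, l in zip(values, labels) if l == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        sens = sum(v >= t for v in pos) / len(pos)
        spec = sum(v < t for v in neg) / len(neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```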
Procedia PDF Downloads 239
505 Changing Roles and Skills of Urban Planners in the Turkish Planning System
Authors: Fatih Eren
Abstract:
This research aims to answer the question of what knowledge and skills Turkish urban planners need in their professional practice. Understanding change in cities, making predictions, making urban decisions and putting them into practice, working with actors from different organizations and various academic disciplines, persuading people to accept something, and developing good personal and professional relationships have become very complex and difficult in today’s world. The truth is that urban planners work in many institutions, under various positions that differ by field of activity, and all planners are forced to develop certain knowledge and skills to succeed in their work in Turkey. This study aims to explore what urban planners do in the global information age. The study is the product of a comprehensive nation-wide research project. In-depth interviews were conducted with 174 experienced urban planners, who work in different public institutions and private companies under varied positions in the Turkish Planning System, to find out the knowledge and skills needed by next-generation urban planners. The main characteristics of next-generation urban planners are defined, and the skills planners need today are explored in this paper. Findings show that the positivist (traditional) planning approach has given way to anti-positivist planning approaches in the Turkish Planning System, so next-generation urban planners who seek success and want to carve out a niche for themselves in professional life have to equip themselves with innovative skills. The results section also includes useful and instructive findings for planners about what it means to be an urban planner and what the ideal content and context of planning education at universities should be in the global age.
Keywords: global information age, Turkish Planning System, the institutional approach, urban planners, roles, skills, values
Procedia PDF Downloads 285
504 An Integrated Experimental and Numerical Approach to Develop an Electronic Instrument to Study Apple Bruise Damage
Authors: Paula Pascoal-Faria, Rúben Pereira, Elodie Pinto, Miguel Belbut, Ana Rosa, Inês Sousa, Nuno Alves
Abstract:
Apple bruise damage from harvesting, handling, transporting and sorting is considered to be the major source of reduced fruit quality, resulting in loss of profits for the entire fruit industry. The three factors that can physically cause fruit bruising are vibration, compression load and impact, the latter being the most common source of bruise damage. Therefore, prediction of the level of damage, stress distribution and deformation of fruits under external force has become a very important challenge. In this study, experimental and numerical methods were used to better understand the impact caused when an apple is dropped from different heights onto a plastic surface and a conveyor belt. Results showed that the extent of fruit damage is significantly higher for the plastic surface and depends on the drop height. In order to support the development of a biomimetic electronic device for the determination of fruit damage, the mechanical properties of the apple fruit were determined using mechanical tests. Preliminary results showed different values of the Young’s modulus according to the zone of the apple tested. Along with the mechanical characterization of the apple fruit, the development of the first two prototypes is discussed, and the integration of the results obtained to construct the finite element model of the apple is presented. This work will help to significantly reduce the bruise damage of fruits or vegetables during processing, which will allow the introduction of new export destinations and consequently an increase in economic profits in this sector.
Keywords: apple, fruit damage, impact during crop and post-crop, mechanical characterization of the apple, numerical evaluation of fruit damage, electronic device
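The drop-height dependence reported above follows from basic mechanics: the energy available to bruise the fruit at impact is the potential energy E = m·g·h. A minimal sketch (illustrative values, not the study's measurements):

```python
def impact_energy_joules(mass_kg, drop_height_m, g=9.81):
    """Potential energy converted at impact, E = m*g*h; bruise severity
    scales with the energy absorbed by the fruit on a given surface."""
    return mass_kg * g * drop_height_m
```

Doubling the drop height doubles the impact energy, which is why the extent of damage was found to depend on height.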
Procedia PDF Downloads 305
503 Diagnostic and Prognostic Use of Kinetics of Microrna and Cardiac Biomarker in Acute Myocardial Infarction
Authors: V. Kuzhandai Velu, R. Ramesh
Abstract:
Background and objectives: Acute myocardial infarction (AMI) is the most common cause of mortality and morbidity. Over the last decade, microRNAs (miRs) have emerged as a potential marker for detecting AMI. The current study evaluates the kinetics and importance of miRs in the differential diagnosis of ST-segment elevation MI (STEMI) and non-STEMI (NSTEMI), their correlation with conventional biomarkers, and their ability to predict the immediate outcome of AMI in terms of arrhythmias and left ventricular (LV) dysfunction. Materials and Methods: A total of 100 AMI patients were recruited for the study. Routine cardiac biomarkers and miRNA levels were measured at diagnosis and serially at admission, 6, 12, 24, and 72 hours. The baseline biochemical parameters were analyzed. The expression of miRs was compared between STEMI and NSTEMI at different time intervals. The diagnostic utility of miR-1, miR-133, miR-208, and miR-499 levels was analyzed using RT-PCR and various diagnostic statistical tools such as ROC curves, odds ratios, and likelihood ratios. Results: miR-1, miR-133, and miR-499 showed peak concentrations at 6 hours, whereas miR-208 showed highly significant differences at all time intervals. miR-133 demonstrated the maximum area under the curve at different time intervals in the differential diagnosis of STEMI and NSTEMI, followed by miR-499 and miR-208. Evaluation of miRs for predicting arrhythmia and LV dysfunction using the admission sample demonstrated that miR-1 (OR = 8.64; LR = 1.76) and miR-208 (OR = 26.25; LR = 5.96) showed the maximum odds ratio and likelihood ratio, respectively. Conclusion: Circulating miRNAs showed highly significant differences between STEMI and NSTEMI in AMI patients, and their peak was much earlier than that of the conventional biomarkers. miR-133, miR-208, and miR-499 can be used in the differential diagnosis of STEMI and NSTEMI, whereas miR-1 and miR-208 could be used in the prediction of arrhythmia and LV dysfunction, respectively.
Keywords: myocardial infarction, cardiac biomarkers, microRNA, arrhythmia, left ventricular dysfunction
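The likelihood-ratio statistics quoted above have simple closed forms in terms of sensitivity and specificity; a minimal sketch (generic formulas, not the study's fitted values):

```python
def positive_likelihood_ratio(sensitivity, specificity):
    """LR+ = sensitivity / (1 - specificity): how much a positive test
    multiplies the pre-test odds of the outcome."""
    return sensitivity / (1.0 - specificity)

def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (sens / (1 - sens)) / ((1 - spec) / spec), a single summary
    of discriminative ability combining both error rates."""
    return (sensitivity / (1.0 - sensitivity)) / ((1.0 - specificity) / specificity)
```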
Procedia PDF Downloads 128
502 Assessment of the Impacts of Climate Change on Climatic Zones over the Korean Peninsula for Natural Disaster Management Information
Authors: Sejin Jung, Dongho Kang, Byungsik Kim
Abstract:
Assessing the impact of climate change requires the use of a multi-model ensemble (MME) to quantify uncertainties between scenarios and produce downscaled outlines for simulation of climate under the influence of different factors, including topography. This study downscales climate change scenarios from 13 global climate models (GCMs) to assess the impacts of future climate change. Unlike South Korea, North Korea lacks studies using climate change scenarios of the Coupled Model Intercomparison Project (CMIP5), and only recently did the country start the projection of extreme precipitation episodes. One of the main purposes of this study is to predict changes in the average climatic conditions of North Korea in the future. The result of comparing downscaled climate change scenarios with observation data for a reference period indicates high applicability of the multi-model ensemble (MME). Furthermore, the study classifies climatic zones by applying the Köppen-Geiger climate classification system to the MME, which is validated for future precipitation and temperature. The result suggests that the continental climate (D) that covers the inland area for the reference climate is expected to shift into the temperate climate (C). The coefficient of variation (CV) in the temperature ensemble is particularly low for the southern coast of the Korean Peninsula, and accordingly, a high possibility of a shifting climatic zone along the coast is predicted. This research was supported by a grant (MOIS-DP-2015-05) of the Disaster Prediction and Mitigation Technology Development Program funded by the Ministry of Interior and Safety (MOIS, Korea).
Keywords: MME, North Korea, Köppen-Geiger, climatic zones, coefficient of variation, CV
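The D-to-C shift described above hinges on the Köppen-Geiger boundary between continental and temperate groups, which is set by the coldest-month temperature. A deliberately simplified sketch (the aridity (B) test needs precipitation and is omitted; the −3 °C variant of the C/D boundary is assumed here):

```python
def koppen_main_group(monthly_temps_c):
    """Assign a main Köppen-Geiger group from 12 monthly mean temperatures (°C).
    Simplified: ignores the precipitation-based arid (B) group entirely."""
    coldest, warmest = min(monthly_temps_c), max(monthly_temps_c)
    if warmest < 10:
        return "E"   # polar: no month reaches 10 °C
    if coldest >= 18:
        return "A"   # tropical: coldest month still warm
    if coldest > -3:
        return "C"   # temperate
    return "D"       # continental: coldest month at or below -3 °C
```

Under warming, a site whose coldest month rises from below −3 °C to above it moves from D to C, which is the shift projected for the inland area.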
Procedia PDF Downloads 111
501 Coupled Hydro-Geomechanical Modeling of Oil Reservoir Considering Non-Newtonian Fluid through a Fracture
Authors: Juan Huang, Hugo Ninanya
Abstract:
Oil has been used as a source of energy and as a supply for making materials, such as asphalt or rubber, for many years. This is the reason why new technologies have been implemented through time. However, research still needs to continue due to the new challenges engineers face every day, such as unconventional reservoirs. Various numerical methodologies have been applied in petroleum engineering as tools to optimize the production of reservoirs before drilling a wellbore, although not all of these have the same efficiency when studying fracture propagation. Analytical methods like those based on linear elastic fracture mechanics fail to give a reasonable prediction when simulating fracture propagation in ductile materials, whereas numerical methods based on the cohesive zone method (CZM) allow representing the elastoplastic behavior in a reservoir based on a constitutive model; therefore, predictions in terms of displacements and pressure will be more reliable. In this work, a hydro-geomechanical coupled model of horizontal wells in fractured rock was developed using ABAQUS; both the extended finite element method and cohesive elements were used to represent predefined fractures in a 2-D model. A power law for representing the rheological behavior of the fluid (shear-thinning, power index < 1) through fractures, with leak-off permeating to the matrix, was considered. Results are shown in terms of aperture and length of the fracture, pressure within the fracture, and fluid loss. A high infiltration rate to the matrix was observed as the power index decreases. A sensitivity analysis was finally performed to identify the most influential factor on fluid loss.
Keywords: fracture, hydro-geomechanical model, non-Newtonian fluid, numerical analysis, sensitivity analysis
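The power-law (Ostwald-de Waele) rheology referenced above reduces to a one-line relation for apparent viscosity; a minimal sketch with illustrative parameter values:

```python
def apparent_viscosity(k_consistency, n_index, shear_rate):
    """Power-law fluid: mu_app = K * gamma_dot**(n - 1).
    n < 1 gives shear-thinning behaviour, as assumed for the fracture fluid;
    n = 1 recovers a Newtonian fluid with constant viscosity K."""
    return k_consistency * shear_rate ** (n_index - 1.0)
```

For n < 1 the apparent viscosity falls as the shear rate rises, so the fluid thins inside the fast-flowing fracture while leaking off more readily as n decreases.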
Procedia PDF Downloads 206
500 Correlation between Fetal Umbilical Cord pH and the Day, the Time and the Team Hand over Times: An Analysis of 6929 Deliveries of the Ulm University Hospital
Authors: Sabine Pau, Sophia Volz, Emanuel Bauer, Amelie De Gregorio, Frank Reister, Wolfgang Janni, Florian Ebner
Abstract:
Purpose: The umbilical cord pH is a well-evaluated contributor to the prediction of neonatal outcome. This study correlates neonatal umbilical cord pH with the weekday of delivery, the time of birth, and the staff handover times (midwives and doctors). Material and Methods: This retrospective study included all deliveries over a 20-year period (1994-2014) at our primary obstetric center. All deliveries with a newborn cord pH below 7.20 were included in this analysis (6929 of 48974 deliveries, 14.4%). Further subgroups were formed according to pH (< 7.05; 7.05-7.09; 7.10-7.14; 7.15-7.19). The data were then separated into daytime and nighttime (8 am-8 pm / 8 pm-8 am) for a first analysis. Finally, handover times were defined as 6-6.30 am, 2-2.30 pm, and 10-10.30 pm for the midwives, and for the doctors as 8-8.30 am and 4-4.30 pm (Monday-Thursday), 2-2.30 pm (Friday), and 9-9.30 am (weekend). Routinely, a shift consists of at least three doctors as well as three midwives. Results: During the last 20 years, 6929 neonates were born with an umbilical cord pH < 7.20 (< 7.05: 7.1%; 7.05-7.09: 10.9%; 7.10-7.14: 30.2%; 7.15-7.19: 51.8%). There was no significant difference between night and day delivery (p = 0.408), delivery on different weekdays (p = 0.253), delivery between Monday to Thursday, Friday, and the weekend (p = 0.496), or delivery during the handover times of the doctors or the midwives (p = 0.221). Even the standard deviation showed no differences between the groups. Conclusion: Despite an increased workload over the last 20 years, the standard of care remains high, even during handover times and night shifts. This applies to midwives and doctors. As neonatal outcome depends on various factors, further studies are necessary to take more factors influencing fetal outcome into consideration. In order to maintain this high standard of care, an adaptation to workload and changing conditions is necessary.
Keywords: delivery, fetal umbilical cord pH, day time, hand over times
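The four pH subgroups used in the analysis above can be expressed as a small bucketing function (a sketch of the grouping rule only, assuming the inclusion criterion pH < 7.20):

```python
def ph_subgroup(cord_ph):
    """Bucket an umbilical cord pH into the study's four subgroups;
    only deliveries with pH < 7.20 were included in the analysis."""
    if cord_ph >= 7.20:
        raise ValueError("only cord pH < 7.20 was included in the study")
    if cord_ph < 7.05:
        return "<7.05"
    if cord_ph < 7.10:
        return "7.05-7.09"
    if cord_ph < 7.15:
        return "7.10-7.14"
    return "7.15-7.19"
```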
Procedia PDF Downloads 316