Search results for: predictive modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2663

2153 Spirometric Reference Values in 236,606 Healthy, Non-Smoking Chinese Aged 4–90 Years

Authors: Jiashu Shen

Abstract:

Objectives: Spirometry is a basic reference for health evaluation that is widely used in clinical practice. Previous spirometric reference values are no longer applicable because of drastic changes in the social and natural circumstances in China, so new reference values for the spirometry of the Chinese population are urgently needed. Method: Spirometric reference values were established using the statistical modelling method Generalized Additive Models for Location, Scale and Shape (GAMLSS) for forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC), FEV1/FVC, and maximal mid-expiratory flow (MMEF). Results: Data from 236,606 healthy non-smokers aged 4–90 years were collected from the MJ Health Check database. Spirometry equations for FEV1, FVC, MMEF, and FEV1/FVC were established, including the predicted values and lower limits of normal (LLNs) by sex. The predictive equations developed for the spirometric results describe the relationship between spirometry and age and eliminate the effects of height as a variable. Most previous predictive equations for Chinese spirometry significantly overestimated (with mean differences of 22.21% in FEV1 and 31.39% in FVC for males, and 26.93% in FEV1 and 35.76% in FVC for females) or underestimated (with mean differences of -5.81% in MMEF and -14.56% in FEV1/FVC for males, and -14.54% in FEV1/FVC for females) the lung function measurements found in this study. Through cross-validation, our equations were shown to have a good fit, and the means of the measured and estimated values were in good agreement. Conclusions: Our study updates the spirometric reference equations for Chinese people of all ages and provides comprehensive values for both physical examination and clinical diagnosis.
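
As a hedged illustration (not the authors' code), the short Python sketch below shows how a predicted value and lower limit of normal (LLN) are typically obtained from fitted GAMLSS/LMS-type parameters (mu, sigma, nu); the parameter values used here are placeholders, not coefficients from the study.

```python
import numpy as np
from scipy.stats import norm

def lms_percentile(mu, sigma, nu, percentile):
    """Value of a Box-Cox Cole-Green (LMS) distribution at a given percentile."""
    z = norm.ppf(percentile)
    if abs(nu) > 1e-9:
        return mu * (1.0 + nu * sigma * z) ** (1.0 / nu)
    return mu * np.exp(sigma * z)

# Placeholder fitted parameters for FEV1 at a given sex/age/height
# (illustrative numbers only, not the study's coefficients).
mu, sigma, nu = 3.2, 0.12, 0.9

predicted = mu                              # median predicted FEV1 (L)
lln = lms_percentile(mu, sigma, nu, 0.05)   # lower limit of normal (5th percentile)
print(f"Predicted FEV1: {predicted:.2f} L, LLN: {lln:.2f} L")
```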

Keywords: Chinese, GAMLSS model, reference values, spirometry

Procedia PDF Downloads 113
2152 A Comprehensive Review of Artificial Intelligence Applications in Sustainable Building

Authors: Yazan Al-Kofahi, Jamal Alqawasmi

Abstract:

In this study, a systematic literature review (SLR) was conducted with the main goal of assessing the existing literature on how artificial intelligence (AI), machine learning (ML) and deep learning (DL) models are used in sustainable architecture applications and issues, including thermal comfort satisfaction, energy efficiency, cost prediction and many other issues. The search strategy was initiated using different databases, including Scopus, Springer and Google Scholar. The inclusion criteria were applied using two search strings related to DL, ML and sustainable architecture. Moreover, the timeframe for the inclusion of papers was open, even though most of the included papers were published in the previous four years. As a paper filtration strategy, conferences and books were excluded from the database search results. Using these inclusion and exclusion criteria, the search was conducted, and a sample of 59 papers was selected as the final set included in the analysis. The data extraction phase extracted the needed data from these papers, which were then analyzed and correlated. The results of this SLR showed that there are many applications of ML and DL in sustainable buildings and that this topic is currently an active research trend. It was found that most of the papers focused their discussions on addressing environmental sustainability issues and factors using machine-learning predictive models, with a particular emphasis on the use of Decision Tree algorithms. Moreover, it was found that the Random Forest regressor demonstrates strong performance across all feature selection groups in terms of building cost prediction as a machine-learning predictive model.
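
To illustrate the kind of machine-learning predictive model highlighted in the review (a Random Forest regressor for building cost prediction), the following scikit-learn sketch uses hypothetical building features and synthetic data; it is not drawn from any of the reviewed papers.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical building features: floor area (m2), window-to-wall ratio,
# insulation thickness (cm), HVAC capacity (kW) -> construction cost.
X = rng.uniform([50, 0.1, 2, 5], [500, 0.6, 20, 100], size=(200, 4))
y = 800 * X[:, 0] + 5e4 * X[:, 1] + 300 * X[:, 2] + 1e3 * X[:, 3] + rng.normal(0, 1e4, 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
```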

Keywords: machine learning, deep learning, artificial intelligence, sustainable building

Procedia PDF Downloads 45
2151 LORA: A Learning Outcome Modelling Approach for Higher Education

Authors: Aqeel Zeid, Hasna Anees, Mohamed Adheeb, Mohamed Rifan, Kalpani Manathunga

Abstract:

To achieve constructive alignment in a higher education program, a clear set of learning outcomes must be defined. Traditional learning outcome definition techniques such as Bloom’s taxonomy are not written to be utilized by the student. This can be disadvantageous in student-centric learning settings, where students are expected to formulate their own learning strategies. To solve this problem, we propose the learning outcome relation and aggregation (LORA) model. To achieve alignment, we developed learning outcome, assessment, and resource authoring tools that help teachers tag learning outcomes during creation. A pilot study was conducted with an expert panel consisting of experienced professionals in the education domain to evaluate whether the LORA model and tools present an improvement over traditional methods. The panel unanimously agreed that the model and tools are beneficial and effective and that they helped them model learning outcomes in a more student-centric and descriptive way.

Keywords: learning design, constructive alignment, Bloom’s taxonomy, learning outcome modelling

Procedia PDF Downloads 168
2150 Development of a Practical Screening Measure for the Prediction of Low Birth Weight and Neonatal Mortality in Upper Egypt

Authors: Prof. Ammal Mokhtar Metwally, Samia M. Sami, Nihad A. Ibrahim, Fatma A. Shaaban, Iman I. Salama

Abstract:

Objectives: Reducing neonatal mortality by 2030 is still a challenging goal in developing countries. Low birth weight (LBW) is a significant contributor to this, especially where weighing newborns is not routinely possible. The present study aimed to determine simple, easy, reliable anthropometric measure(s) that can predict LBW and neonatal mortality. Methods: A prospective cohort study of 570 babies born in districts of El Menia governorate, Egypt (where most deliveries occurred at home) was examined at birth. Newborn weight, length, head, chest, mid-arm, and thigh circumferences were measured. Follow-up of the examined neonates took place during their first four weeks of life to record any mortality. The most predictive anthropometric measures were determined using the SPSS statistical package, and multiple logistic regression analysis was performed. Results: Head and chest circumferences with cut-off points < 33 cm and ≤ 31.5 cm, respectively, were the significant predictors for LBW. They carried the best combination of the highest sensitivity (89.8% and 86.4%) and the lowest false negative predictive value (1.4% and 1.7%). Chest circumference with a cut-off point ≤ 31.5 cm was the significant predictor for neonatal mortality, with 83.3% sensitivity and a 0.43% false negative predictive value. Conclusion: Using chest circumference with a cut-off point ≤ 31.5 cm is recommended as a single, simple anthropometric measurement for the prediction of both LBW and neonatal mortality. This measure could act as a substitute for weighing newborns in communities where scales are not routinely available.
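
For readers who want to see how such anthropometric cut-off points can be derived, the sketch below applies ROC analysis to synthetic chest-circumference data; the data, threshold logic and Youden-index selection are illustrative assumptions, not the study's actual analysis.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)

# Synthetic example: chest circumference (cm) for normal-weight vs LBW newborns.
chest_normal = rng.normal(33.5, 1.2, 400)
chest_lbw = rng.normal(30.5, 1.2, 60)
circumference = np.concatenate([chest_normal, chest_lbw])
is_lbw = np.concatenate([np.zeros(400), np.ones(60)])

# Smaller circumference predicts LBW, so use score = -circumference.
fpr, tpr, thresholds = roc_curve(is_lbw, -circumference)
youden = tpr - fpr
best = np.argmax(youden)
print(f"Suggested cut-off: <= {-thresholds[best]:.1f} cm "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```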

Keywords: low birth weight, neonatal mortality, anthropometric measures, practical screening

Procedia PDF Downloads 74
2149 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning

Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan

Abstract:

The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of different machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily being within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behavior of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
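
As a hedged sketch of the ensemble-regression idea described above (not the authors' pipeline or data), the following example fits a gradient-boosting model to synthetic composition-temperature-viscosity data and reports cross-validated accuracy.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Hypothetical inputs: oxide fractions (SiO2, B2O3, Na2O, waste loading) and temperature (K).
X = rng.uniform([0.4, 0.1, 0.05, 0.1, 1100], [0.6, 0.25, 0.2, 0.3, 1500], size=(300, 5))
# Synthetic Arrhenius-like log-viscosity target, for illustration only.
log_eta = 8000.0 / X[:, 4] + 4.0 * X[:, 0] - 3.0 * X[:, 2] + rng.normal(0, 0.05, 300)

model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, log_eta, cv=5, scoring="r2")
print("Cross-validated R^2:", scores.mean().round(3))
```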

Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass

Procedia PDF Downloads 96
2148 Oral Microbiota as a Novel Predictive Biomarker of Response To Immune Checkpoint Inhibitors in Advanced Non-small Cell Lung Cancer Patients

Authors: Francesco Pantano, Marta Fogolari, Michele Iuliani, Sonia Simonetti, Silvia Cavaliere, Marco Russano, Fabrizio Citarella, Bruno Vincenzi, Silvia Angeletti, Giuseppe Tonini

Abstract:

Background: Although immune checkpoint inhibitors (ICIs) have changed the treatment paradigm of non–small cell lung cancer (NSCLC), these drugs fail to elicit durable responses in the majority of NSCLC patients. The gut microbiota, able to regulate immune responsiveness, is emerging as a promising, modifiable target to improve ICI response rates. Since the oral microbiome has been demonstrated to be the primary source of bacterial microbiota in the lungs, we investigated its composition as a potential predictive biomarker to identify and select patients who could benefit from immunotherapy. Methods: Thirty-five patients with stage IV squamous and non-squamous cell NSCLC eligible for an anti-PD-1/PD-L1 as monotherapy were enrolled. Saliva samples were collected from patients prior to the start of treatment, bacterial DNA was extracted using the QIAamp® DNA Microbiome Kit (QIAGEN), and the 16S rRNA gene was sequenced on a MiSeq sequencing instrument (Illumina). Results: NSCLC patients were dichotomized as “Responders” (partial or complete response) and “Non-Responders” (progressive disease) after 12 weeks of treatment, based on RECIST criteria. A higher prevalence of the phylum Candidatus Saccharibacteria was found in the 10 responders compared to non-responders (abundance 5% vs 1%, respectively; p-value = 1.46 × 10⁻⁷; False Discovery Rate (FDR) = 1.02 × 10⁻⁶). Moreover, a higher prevalence of the Saccharibacteria Genera Incertae Sedis genus (belonging to the Candidatus Saccharibacteria phylum) was observed in responders (p-value = 6.01 × 10⁻⁷ and FDR = 2.46 × 10⁻⁵). Finally, the patients who benefited from immunotherapy showed a significant abundance of the TM7 Phylum Sp Oral Clone FR058 strain, a member of the Saccharibacteria Genera Incertae Sedis genus (p-value = 6.13 × 10⁻⁷ and FDR = 7.66 × 10⁻⁵). Conclusions: These preliminary results show a significant association between the oral microbiota and ICI response in NSCLC patients. In particular, the higher prevalence of the Candidatus Saccharibacteria phylum and the TM7 Phylum Sp Oral Clone FR058 strain in responders suggests their potential immunomodulatory role. The study is still ongoing, and updated data will be presented at the congress.
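
A minimal sketch of the kind of per-taxon differential-abundance testing with false discovery rate control described above is given below; the taxa, abundances and test choice (Mann-Whitney with Benjamini-Hochberg correction) are illustrative assumptions, not the authors' actual bioinformatic pipeline.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)

# Hypothetical relative abundances (rows = taxa, columns = patients).
taxa = ["Candidatus_Saccharibacteria", "Streptococcus", "Prevotella", "Veillonella"]
responders = rng.dirichlet([5, 30, 30, 35], size=10).T      # 10 responders
non_responders = rng.dirichlet([1, 34, 30, 35], size=25).T  # 25 non-responders

pvals = [mannwhitneyu(r, n, alternative="two-sided").pvalue
         for r, n in zip(responders, non_responders)]
reject, fdr, _, _ = multipletests(pvals, method="fdr_bh")

for taxon, p, q in zip(taxa, pvals, fdr):
    print(f"{taxon:28s} p = {p:.3g}  FDR = {q:.3g}")
```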

Keywords: oral microbiota, immune checkpoint inhibitors, non-small cell lung cancer, predictive biomarker

Procedia PDF Downloads 70
2147 A New Study on Mathematical Modelling of COVID-19 with Caputo Fractional Derivative

Authors: Sadia Arshad

Abstract:

The new coronavirus disease, COVID-19, still poses an alarming situation around the world. Modeling based on derivatives of fractional order is particularly useful for capturing real-world problems and analyzing the realistic behavior of the proposed model. We propose a mathematical model for the investigation of COVID-19 dynamics in a generalized fractional framework. The new model is formulated in the Caputo sense and employs a nonlinear, time-varying transmission rate. The existence and uniqueness of solutions of the fractional-order model have been studied using fixed-point theory. The associated dynamical behaviors are discussed in terms of equilibria, stability, and the basic reproduction number. For the purpose of numerical implementation, an efficient approximation scheme is also employed to solve the fractional COVID-19 model. Numerical simulations are reported for various fractional orders, and simulation results are compared with a real case of the COVID-19 pandemic. According to the comparative results with real data, we find the best value of the fractional order and justify the use of the fractional concept in mathematical modelling, as the new fractional model simulates reality more accurately than the classical frameworks.
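
For context, the Caputo fractional derivative of order α used in such formulations is commonly defined as (standard definition quoted for reference, not reproduced from the paper):

\[
{}^{C}\!D^{\alpha}_{t} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau, \qquad 0 < \alpha < 1,
\]

which recovers the classical first derivative in the limit α → 1.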

Keywords: fractional calculus, modeling, stability, numerical solution

Procedia PDF Downloads 87
2146 A Study of High Viscosity Oil-Gas Slug Flow Using Gamma Densitometer

Authors: Y. Baba, A. Archibong-Eso, H. Yeung

Abstract:

Experimental studies of high viscosity oil-gas flows in horizontal pipelines published in the literature have indicated that hydrodynamic slug flow is the dominant flow pattern observed. Investigations have shown that hydrodynamic slugging brings about high instabilities in pressure that can damage production facilities, making it essential to study the high-viscosity slug flow regime so as to improve the understanding of its flow dynamics. Most slug flow models used in the petroleum industry for the design of pipelines, together with their closure relationships, were formulated based on observations of low viscosity liquid-gas flows. New experimental investigations and data are therefore required to validate these models. In cases where these models underperform, improving upon or building new predictive models and correlations will also depend on the new experimental dataset and further understanding of the flow dynamics in high-viscosity oil-gas flows. In this study, conducted at the Flow Laboratory, Oil and Gas Engineering Centre of Cranfield University, slug flow variables such as pressure gradient, mean liquid holdup, frequency and slug length for oil viscosities ranging from 1.0–5.5 Pa·s are experimentally investigated and analysed. The study was carried out in a 0.076 m ID pipe; two fast-sampling gamma densitometers and pressure transducers (differential and point) were used to obtain experimental measurements. Comparison of the measured slug flow parameters with the existing slug flow prediction models available in the literature showed disagreement with the high-viscosity experimental data, thus highlighting the importance of building new predictive models and correlations.
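
As a hedged sketch of how such measurements are commonly processed (not the authors' code or calibration), the example below converts a synthetic gamma-densitometer count signal to liquid holdup via a two-point Beer-Lambert-style calibration and estimates slug frequency from threshold crossings.

```python
import numpy as np

def liquid_holdup(counts, counts_gas, counts_liquid):
    """Two-point (gas-full / liquid-full) calibration for a gamma densitometer."""
    return np.log(counts_gas / counts) / np.log(counts_gas / counts_liquid)

rng = np.random.default_rng(4)
fs = 250.0                                   # sampling frequency, Hz
t = np.arange(0, 60, 1 / fs)                 # 60 s record
# Synthetic count signal alternating between liquid slugs and the gas-dominated film region.
slugs = (np.sin(2 * np.pi * 0.5 * t) > 0.3).astype(float)   # ~0.5 Hz slugging
counts = 9000 - 4000 * slugs + rng.normal(0, 100, t.size)

holdup = liquid_holdup(counts, counts_gas=9000.0, counts_liquid=5000.0)
mean_holdup = holdup.mean()

# Slug frequency from upward crossings of a holdup threshold.
above = holdup > 0.7
crossings = np.count_nonzero(np.diff(above.astype(int)) == 1)
slug_frequency = crossings / t[-1]
print(f"Mean liquid holdup: {mean_holdup:.2f}, slug frequency: {slug_frequency:.2f} Hz")
```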

Keywords: gamma densitometer, mean liquid holdup, pressure gradient, slug frequency and slug length

Procedia PDF Downloads 308
2145 Kinematic Hardening Parameters Identification with Respect to Objective Function

Authors: Marina Franulovic, Robert Basan, Bozidar Krizan

Abstract:

Constitutive modelling of material behaviour is becoming increasingly important in the prediction of possible failures in highly loaded engineering components and, consequently, in the optimization of their design. In order to account for the large number of phenomena that occur in the material during operation, such as the kinematic hardening effect in the low cycle fatigue behaviour of steels, complex nonlinear material models are used ever more frequently, despite the complexity of determining their parameters. As a method for the determination of these parameters, a genetic algorithm is a good choice because of its capability to provide a very good approximation of the solution in systems with a large number of unknown variables. For the application of a genetic algorithm to parameter identification, the inverse analysis must first be defined. It is used as a tool to fine-tune calculated stress-strain values against experimental ones. In order to choose a proper objective function for the inverse analysis among existing and newly developed functions, this research investigates its influence on material behaviour modelling.
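
The sketch below illustrates the inverse-analysis idea with a least-squares objective minimized by an evolutionary optimizer (scipy's differential_evolution, used here as a stand-in for a genetic algorithm); the Voce-type hardening law and all parameter values are placeholders, not the authors' material model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical "experimental" stress-strain data (placeholder, not real measurements).
strain = np.linspace(0.0, 0.02, 50)
true_params = (250.0, 180.0, 400.0)          # sigma_y, Q, b for a Voce-type law

def voce(strain, sigma_y, Q, b):
    return sigma_y + Q * (1.0 - np.exp(-b * strain))

stress_exp = voce(strain, *true_params) + np.random.default_rng(5).normal(0, 2.0, strain.size)

def objective(params):
    """Least-squares objective used in the inverse analysis."""
    return np.sum((voce(strain, *params) - stress_exp) ** 2)

bounds = [(100.0, 400.0), (50.0, 300.0), (100.0, 800.0)]
result = differential_evolution(objective, bounds, seed=0)
print("Identified parameters:", np.round(result.x, 1))
```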

Keywords: genetic algorithm, kinematic hardening, material model, objective function

Procedia PDF Downloads 305
2144 Modelling of Damage as Hinges in Segmented Tunnels

Authors: Gelacio Juárez-Luna, Daniel Enrique González-Ramírez, Enrique Tenorio-Montero

Abstract:

Frame elements coupled with spring elements are used for modelling the development of hinges in segmented tunnels; the spring elements model the rotational, transversal and axial failure. These spring elements are equipped with constitutive models to include independently the moment, shear force and axial force, respectively. These constitutive models are formulated based on damage mechanics and experimental tests reported in the literature. The mesh of the segmented tunnels was discretized in the software GiD, and the nonlinear analyses were carried out in the finite element software ANSYS. These analyses provide the capacity curve of the primary and secondary lining of a segmented tunnel. Two numerical examples of segmented tunnels show the capability of the spring elements to release energy through the development of hinges. The first example is a segmental concrete lining discretized with frame elements and loaded until hinges occurred in the lining. The second example is a tunnel with primary and secondary lining, discretized with a double-ring frame model. The outer ring simulates the segmental concrete lining and the inner ring simulates the secondary cast-in-place concrete lining. Spring elements also model the joints between the segments in the circumferential direction and the ring joints, which connect parallel adjacent rings. The computed load vs displacement curves are consistent with numerical and experimental results reported in the literature. It is shown that modelling a tunnel with primary and secondary lining with frame elements and springs provides reasonable results and saves computational cost compared with 2D or 3D models equipped with smeared crack models.

Keywords: damage, hinges, lining, tunnel

Procedia PDF Downloads 372
2143 Treatment of Healthcare Wastewater Using The Peroxi-Photoelectrocoagulation Process: Predictive Models for Chemical Oxygen Demand, Color Removal, and Electrical Energy Consumption

Authors: Samuel Fekadu A., Esayas Alemayehu B., Bultum Oljira D., Seid Tiku D., Dessalegn Dadi D., Bart Van Der Bruggen A.

Abstract:

The peroxi-photoelectrocoagulation process was evaluated for the removal of chemical oxygen demand (COD) and color from healthcare wastewater. A 2-level full factorial design with center points was created to investigate the effect of the process parameters, i.e., initial COD, H₂O₂, pH, reaction time and current density. Furthermore, the total energy consumption and average current efficiency of the system were evaluated. Predictive models for % COD removal, % color removal and energy consumption were obtained. The initial COD and pH were found to be the most significant variables in the reduction of COD and color in the peroxi-photoelectrocoagulation process. Hydrogen peroxide only has a significant effect on the treated wastewater when combined with other input variables in the process, such as pH, reaction time and current density. In the peroxi-photoelectrocoagulation process, current density appears not as a single effect but rather as an interaction effect with H₂O₂ in reducing COD and color. Lower energy expenditure was observed at higher initial COD, shorter reaction time and lower current density. The average current efficiency was found to be as low as 13% and as high as 777%. Overall, the study showed that hybrid electrochemical oxidation can be applied effectively and efficiently for the removal of pollutants from healthcare wastewater.
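
To illustrate how a 2-level full factorial model with interaction terms can be fitted (a hedged sketch, not the study's data or code), the example below uses statsmodels with coded factor levels and a synthetic COD-removal response.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)

# Coded factor levels (-1 / +1) for a 2-level factorial design with replication.
n = 64
df = pd.DataFrame({
    "COD0": rng.choice([-1, 1], n),      # initial COD
    "H2O2": rng.choice([-1, 1], n),
    "pH": rng.choice([-1, 1], n),
    "time": rng.choice([-1, 1], n),
    "current": rng.choice([-1, 1], n),
})
# Synthetic COD-removal response with COD0 and pH main effects and an H2O2:current interaction.
df["COD_removal"] = (70 - 8 * df.COD0 - 6 * df.pH + 4 * df.H2O2 * df.current
                     + rng.normal(0, 2, n))

# Main effects plus all two-factor interactions.
model = smf.ols("COD_removal ~ (COD0 + H2O2 + pH + time + current) ** 2", data=df).fit()
print(model.summary().tables[1])
```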

Keywords: electrochemical oxidation, UV, healthcare pollutants removals, factorial design

Procedia PDF Downloads 57
2142 Influence of Intelligence and Failure Mindsets on Parent's Failure Feedback

Authors: Sarah Kalaouze, Maxine Iannucelli, Kristen Dunfield

Abstract:

Children’s implicit beliefs regarding intelligence (i.e., intelligence mindsets) influence their motivation, perseverance, and success. Previous research suggests that the way parents perceive failure influences the development of their child’s intelligence mindsets. We invited 151 child-parent dyads (age = 5–6 years) to complete a series of difficult puzzles over Zoom. We assessed parents’ intelligence and failure mindsets using questionnaires and recorded parents’ person/performance-oriented (e.g., “you are smart” or “you were almost able to complete that one”) and process-oriented (e.g., “you are trying really hard” or “maybe if you place the bigger pieces first”) failure feedback. We were interested in observing the relation between parental mindsets and the type of feedback provided. We found that parents’ intelligence mindsets were not predictive of the feedback they provided to children. Failure mindsets, on the other hand, were predictive of failure feedback. Parents who view failure as debilitating provided more person-oriented feedback, focusing on performance and personal ability, whereas parents who view failure as enhancing provided process-oriented feedback, focusing on effort and strategies. Taken together, our results indicate that although parents might already hold a growth intelligence mindset, they do not necessarily hold a failure-as-enhancing mindset. Parents adopting a failure-as-enhancing mindset would influence their children to view failure as a learning opportunity, further promoting practice, effort, and perseverance during challenging tasks. The focus placed on a child’s learning, rather than their performance, encourages them to perceive intelligence as malleable (growth mindset) rather than fixed (fixed mindset). This implies that parents should not only hold a growth mindset but thoroughly understand their role in the transmission of intelligence beliefs.

Keywords: mindset(s), failure, intelligence, parental feedback, parents

Procedia PDF Downloads 120
2141 Effect of Genuine Missing Data Imputation on Prediction of Urinary Incontinence

Authors: Suzan Arslanturk, Mohammad-Reza Siadat, Theophilus Ogunyemi, Ananias Diokno

Abstract:

Missing data is a common challenge in statistical analyses of most clinical survey datasets. A variety of methods have been developed to enable the analysis of survey data with missing values, and imputation is the most commonly used among them. However, in order to minimize the bias introduced by imputation, one must choose the right imputation technique and apply it to the correct type of missing data. In this paper, we have identified different types of missing values: missing data due to skip pattern (SPMD), undetermined missing data (UMD), and genuine missing data (GMD), and applied rough set imputation on only the GMD portion of the missing data. We have used rough set imputation to evaluate the effect of such imputation on prediction by generating several simulation datasets based on an existing epidemiological dataset (MESA). To measure how well each dataset lends itself to the prediction model (logistic regression), we have used p-values from the Wald test. To evaluate the accuracy of the prediction, we have considered the width of the 95% confidence interval for the probability of incontinence. Both imputed and non-imputed simulation datasets were fit to the prediction model, and both turned out to be significant (p-value < 0.05). However, the Wald score shows a better fit for the imputed compared to the non-imputed datasets (28.7 vs. 23.4). The average confidence interval width decreased by 10.4% when the imputed dataset was used, meaning higher precision. The results show that using the rough set method for missing data imputation on GMD improves the predictive capability of the logistic regression. Further studies are required to generalize this conclusion to other clinical survey datasets.

Keywords: rough set, imputation, clinical survey data simulation, genuine missing data, predictive index

Procedia PDF Downloads 145
2140 Digital Transformation in Developing Countries, A Study into Building Information Modelling Adoption in Thai Design and Engineering Small- and Medium-Sizes Enterprises

Authors: Prompt Udomdech, Eleni Papadonikolaki, Andrew Davies

Abstract:

Building information modelling (BIM) is the major technological trend amongst built environment organisations. By digitalising businesses and operations, BIM brings forth a digital transformation in any built environment industry. The adoption of BIM presents challenges for organisations, especially small- and medium-sized enterprises (SMEs). The main problem for built environment SMEs is the lack of project actors with adequate BIM competences. The research highlights learning in projects as the key and explores the learning of BIM in projects by designers and engineers within Thai design and engineering SMEs. The study uncovers three impeding attributes, which are: a) lack of English proficiency; b) unfamiliarity with digital technologies; and c) absence of public standards. This research expands on the literature on BIM competences and adoption.

Keywords: BIM competences and adoption, digital transformation, learning in projects, SMEs, and developing built environment industry

Procedia PDF Downloads 119
2139 Protection of Cultural Heritage against the Effects of Climate Change Using Autonomous Aerial Systems Combined with Automated Decision Support

Authors: Artur Krukowski, Emmanouela Vogiatzaki

Abstract:

The article presents ongoing work in research projects such as SCAN4RECO and ARCH, both funded by the European Commission under the Horizon 2020 program. The former concerns multimodal and multispectral scanning of cultural heritage assets for their digitization and conservation via spatiotemporal reconstruction and 3D printing, while the latter aims to better preserve areas of cultural heritage from hazards and risks. It co-creates tools that help pilot cities save cultural heritage from the effects of climate change, and it develops a disaster risk management framework for assessing and improving the resilience of historic areas to climate change and natural hazards. Tools and methodologies are designed for local authorities and practitioners, the urban population, as well as national and international expert communities, aiding authorities in knowledge-aware decision making. In this article, we focus on 3D modelling of object geometry, primarily using photogrammetric methods, to achieve very high model accuracy with consumer-grade devices, attractive to professionals and hobbyists alike.

Keywords: 3D modelling, UAS, cultural heritage, preservation

Procedia PDF Downloads 106
2138 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG which is not masquerading as an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially with the presence of PCR artefacts such as stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and significantly enhance the use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to focus on modelling stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is reflected by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used as Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
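
As a hedged aside, the short sketch below samples a random partition from a Chinese restaurant process, the Dirichlet-process representation mentioned above; it is a generic illustration, not the authors' implementation.

```python
import numpy as np

def chinese_restaurant_process(n_customers, alpha, rng):
    """Sample a random partition of n_customers under a CRP with concentration alpha."""
    tables = []                      # number of customers seated at each table
    assignments = []
    for _ in range(n_customers):
        probs = np.array(tables + [alpha], dtype=float)
        probs /= probs.sum()         # P(existing table) ∝ occupancy, P(new table) ∝ alpha
        choice = rng.choice(len(probs), p=probs)
        if choice == len(tables):
            tables.append(1)         # open a new table (new mixture component)
        else:
            tables[choice] += 1
        assignments.append(choice)
    return assignments, tables

rng = np.random.default_rng(7)
assignments, tables = chinese_restaurant_process(100, alpha=1.0, rng=rng)
print("Number of components:", len(tables), "occupancies:", tables)
```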

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 308
2137 The Value of Serum Procalcitonin in Patients with Acute Musculoskeletal Infections

Authors: Mustafa Al-Yaseen, Haider Mohammed Mahdi, Haider Ali Al–Zahid, Nazar S. Haddad

Abstract:

Background: Early diagnosis of musculoskeletal infections is of vital importance to avoid devastating complications. There is no single laboratory marker that is both sensitive and specific in diagnosing these infections accurately. White blood cell count, erythrocyte sedimentation rate, and C-reactive protein are not specific, as they can also be elevated in conditions other than bacterial infections, and culture and sensitivity testing is not a true gold standard due to its varied positivity rates. Serum procalcitonin (PCT) is one of the newer laboratory markers for pyogenic infections. The objective of this study is to assess the value of PCT in the diagnosis of soft tissue, bone, and joint infections. Patients and Methods: Patients of all age groups (seventy-four patients) with a diagnosis of musculoskeletal infection were prospectively included in this study. All patients underwent white blood cell count, erythrocyte sedimentation rate, C-reactive protein, and serum procalcitonin measurements. A healthy, non-infected outpatient group (twenty-two patients) was taken as a control group and underwent the same evaluation steps as the study group. Results: The study group showed a mean procalcitonin level of 1.3 ng/ml. Procalcitonin, at 0.5 ng/ml, was 42.6% sensitive and 95.5% specific in diagnosing musculoskeletal infections, with a positive predictive value of 87.5%, a negative predictive value of 48.3%, a positive likelihood ratio of 9.3 and a negative likelihood ratio of 0.6. Conclusion: Serum procalcitonin, at a cut-off of 0.5 ng/ml, is a specific but not sensitive marker in the diagnosis of musculoskeletal infections, and it can be used effectively to rule in the diagnosis of infection but not to rule it out.
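
For reference, the sketch below computes the standard 2x2-table diagnostic indices reported in such studies (sensitivity, specificity, predictive values and likelihood ratios); the counts are arbitrary illustrative numbers, not the study's raw data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2-table diagnostic indices for a binary test."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    return sens, spec, ppv, npv, lr_pos, lr_neg

# Illustrative counts only (hypothetical, not the study's data).
sens, spec, ppv, npv, lr_pos, lr_neg = diagnostic_metrics(tp=40, fp=5, fn=20, tn=85)
print(f"Sensitivity {sens:.1%}, specificity {spec:.1%}, "
      f"PPV {ppv:.1%}, NPV {npv:.1%}, LR+ {lr_pos:.1f}, LR- {lr_neg:.2f}")
```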

Keywords: procalcitonin, infection, laboratory markers, musculoskeletal

Procedia PDF Downloads 147
2136 Advancements in Laser Welding Process: A Comprehensive Model for Predictive Geometrical, Metallurgical, and Mechanical Characteristics

Authors: Seyedeh Fatemeh Nabavi, Hamid Dalir, Anooshiravan Farshidianfar

Abstract:

Laser welding is pivotal in modern manufacturing, offering unmatched precision, speed, and efficiency. Its versatility in minimizing heat-affected zones, seamlessly joining dissimilar materials, and working with various metals makes it indispensable for crafting intricate automotive components. Integration into automated systems ensures the consistent delivery of high-quality welds, thereby enhancing overall production efficiency. Noteworthy are the safety benefits of laser welding, including reduced fumes and consumable materials, which align with industry standards and environmental sustainability goals. As the automotive sector increasingly demands advanced materials and stringent safety and quality standards, laser welding emerges as a cornerstone technology. A comprehensive model encompassing thermal-dynamic and characteristic models accurately predicts the geometrical, metallurgical, and mechanical aspects of the laser beam welding process. Notably, Model 2 showcases exceptional accuracy, achieving remarkably low error rates in predicting primary and secondary dendrite arm spacing (PDAS and SDAS). These findings underscore the model's reliability and effectiveness, providing invaluable insights and predictive capabilities crucial for optimizing welding processes and ensuring superior productivity, efficiency, and quality in the automotive industry.

Keywords: laser welding process, geometrical characteristics, mechanical characteristics, metallurgical characteristics, comprehensive model, thermal dynamic

Procedia PDF Downloads 31
2135 Modelling and Detecting the Demagnetization Fault in the Permanent Magnet Synchronous Machine Using the Current Signature Analysis

Authors: Yassa Nacera, Badji Abderrezak, Saidoune Abdelmalek, Houassine Hamza

Abstract:

Several kinds of faults can occur in permanent magnet synchronous machine (PMSM) systems: bearing faults, electrical short/open faults, eccentricity faults, and demagnetization faults. A demagnetization fault means that the strength of the permanent magnets (PM) in the PMSM decreases, and it causes low output torque, which is undesirable for electric vehicles (EVs). The fault is caused by physical damage, high-temperature stress, inverse magnetic fields, and aging. Motor current signature analysis (MCSA) is a conventional motor fault detection method based on the extraction of signal features from the stator current. A simulation model of the PMSM under partial and uniform demagnetization faults was established, and different degrees of demagnetization fault were simulated. The harmonic analyses using the Fast Fourier Transform (FFT) show that the fault diagnosis method based on harmonic analysis is only suitable for partial demagnetization faults of the PMSM and does not apply to uniform demagnetization faults.
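
A minimal sketch of the MCSA idea follows: the FFT of a stator current signal is inspected for fault-related harmonic components. The signal, the sideband frequency and its amplitude are synthetic and purely illustrative, not taken from the simulation described above.

```python
import numpy as np

fs = 10_000.0                       # sampling frequency, Hz
t = np.arange(0, 2.0, 1 / fs)       # 2 s of stator current
f_supply = 50.0

# Synthetic stator current: fundamental plus a small fault-related sideband
# (purely illustrative of a demagnetization signature).
current = (10.0 * np.sin(2 * np.pi * f_supply * t)
           + 0.3 * np.sin(2 * np.pi * (1.5 * f_supply) * t)
           + np.random.default_rng(8).normal(0, 0.05, t.size))

spectrum = np.abs(np.fft.rfft(current)) / t.size * 2.0
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Report the largest spectral components below 200 Hz.
mask = freqs < 200
top = np.argsort(spectrum[mask])[-3:][::-1]
for idx in top:
    print(f"{freqs[mask][idx]:6.1f} Hz  amplitude {spectrum[mask][idx]:.3f} A")
```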

Keywords: permanent magnet, diagnosis, demagnetization, modelling

Procedia PDF Downloads 40
2134 Finite Element Modelling of Log Wall Corner Joints

Authors: Reza Kalantari, Ghazanfarah Hafeez

Abstract:

The paper presents outcomes of the numerical research performed on standard and dovetail corner joints under lateral loads. An overview of the past research on log shear walls is also presented. To the authors’ best knowledge, currently, there are no specific design guidelines available in the code for the design of log shear walls, implying the need to investigate the performance of log shear walls. This research explores the performance of the log shear wall corner joint system of standard joint and dovetail types using numerical methods based on research available in the literature. A parametric study is performed to study the effect of gap size provided between two orthogonal logs and the presence of wood and steel dowels provided as joinery between log courses on the performance of such a structural system. The research outcomes are the force-displacement curves. 8% variability is seen in the reaction forces with the change of gap size for the case of the standard joint, while a variation of 10% is observed in the reaction forces for the dovetail joint system.

Keywords: dovetail joint, finite element modelling, log shear walls, standard joint

Procedia PDF Downloads 194
2133 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issues of dealing with the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models is examined exhaustively by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying or adapting the predefined profiles, and the design results are affected positively or negatively without any warning about it. Data such as occupancy schedules, internal loads and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized and conventional schedules, with important consequences for the prediction of energy consumption. The problem is certainly difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC system and thus the occupant cannot interact with it. More in detail, starting from the adopted schedules, created according to questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: first, the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request; then, the different consumption entries are analyzed, and for the most interesting cases the calibration indexes are also compared. Moreover, the same simulations are carried out for the optimal refurbishment solution, and the variation in the predicted energy saving and global cost reduction is highlighted. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.

Keywords: energy simulation, modelling calibration, occupant behavior, university building

Procedia PDF Downloads 122
2132 Hourly Solar Radiations Predictions for Anticipatory Control of Electrically Heated Floor: Use of Online Weather Conditions Forecast

Authors: Helene Thieblemont, Fariborz Haghighat

Abstract:

Energy storage systems play a crucial role in decreasing building energy consumption during peak periods and expanding the use of renewable energies in buildings. To provide high building thermal performance, the energy storage system has to be properly controlled to ensure good energy performance while maintaining satisfactory thermal comfort for the building’s occupants. In the case of passive discharge storage, the required amount of energy must be defined in advance to avoid overheating the building. Consequently, anticipatory supervisory control strategies have been developed that forecast future energy demand and production in order to coordinate systems. Anticipatory supervisory control strategies are based on predictions, mainly of the weather forecast. However, while the forecast hourly outdoor temperature can be found online with high accuracy, solar radiation predictions are most of the time not available online. To estimate them, this paper proposes an advanced approach based on the forecast of weather conditions. Several methods for correlating forecast hourly weather conditions with real hourly solar radiation are compared. Results show that using the weather conditions forecast allows the solar radiation of the next day to be estimated with acceptable accuracy. Moreover, this technique yields hourly data that may be used in building models. As a result, this solar radiation prediction model may help to implement model-based controllers such as Model Predictive Control.

Keywords: anticipatory control, model predictive control, solar radiation forecast, thermal storage

Procedia PDF Downloads 253
2131 3-D Modeling of Particle Size Reduction from Micro to Nano Scale Using Finite Difference Method

Authors: Himanshu Singh, Rishi Kant, Shantanu Bhattacharya

Abstract:

This paper adopts a top-down approach to mathematical modeling to predict size reduction from the micro to the nano scale through persistent etching. The process is simulated using a finite difference approach. Previously, various researchers have simulated the etching process for 1-D and 2-D substrates. The process consists of two parts: 1) convection-diffusion in the etchant domain; 2) chemical reaction at the surface of the particle. Since the process requires analysis along a moving boundary, the partial differential equations involved cannot be solved using conventional methods. In 1-D, this problem is very similar to Stefan's problem of the moving ice-water boundary. A fixed-grid method using the finite volume method is very popular for modelling etching on one- and two-dimensional substrates; other popular approaches include the moving grid method and the level set method. In this work, the finite difference method was used to discretize the spherical diffusion equation. Due to the symmetrical distribution of the etchant, the angular terms in the equation can be neglected. The concentration is assumed to be constant at the outer boundary. At the particle boundary, the concentration of the etchant is assumed to be zero, since the rate of reaction is much faster than the rate of diffusion. The rate of reaction is proportional to the velocity of the moving boundary of the particle. Modelling of the above reaction was carried out using Matlab. The initial particle size was taken to be 50 microns. The density, molecular weight and diffusion coefficient of the substrate were taken as 2.1 g/cm³, 60 and 10⁻⁵ cm²/s, respectively. The etch rate was found to decline initially and gradually became constant at 0.02 µm/s (1.2 µm/min). The concentration profile was plotted along the radial coordinate at different time intervals. Initially, a sudden drop is observed at the particle boundary due to the high etch rate; this change becomes more gradual with time due to the decline of the etch rate.
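
The sketch below is a simplified, quasi-steady rendering of the moving-boundary (Stefan-type) problem described above, with the surface recession rate set by the diffusive flux and C = 0 at the particle surface; the bulk etchant concentration is an assumed value, and in the full model a transient finite-difference sweep of the concentration field would replace the analytical gradient step.

```python
# Physical parameters from the abstract (CGS units); the bulk etchant
# concentration C_bulk is an assumed value, not given in the abstract.
D = 1e-5        # diffusion coefficient, cm^2/s
rho = 2.1       # substrate density, g/cm^3
M = 60.0        # molecular weight, g/mol
C_bulk = 1e-4   # bulk etchant concentration, mol/cm^3 (hypothetical)

R0 = 25e-4      # initial particle radius, cm (50-micron diameter)
r_out = 10 * R0 # outer boundary where C = C_bulk is imposed
dt = 0.01       # time step, s

R, t = R0, 0.0
while R > 0.1 * R0:
    # Quasi-steady spherical diffusion through the shell R..r_out:
    # C(r) = C_bulk * (1 - R/r) / (1 - R/r_out), giving the surface gradient below.
    dCdr_surface = (C_bulk / R) / (1.0 - R / r_out)
    # Stefan condition: the surface recedes at (M/rho) * D * dC/dr at r = R (C = 0 there).
    v = (M / rho) * D * dCdr_surface
    R -= v * dt
    t += dt

print(f"Radius reduced from {R0 * 1e4:.1f} to {R * 1e4:.1f} microns in {t:.0f} s")
```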

Keywords: particle size reduction, micromixer, FDM modelling, wet etching

Procedia PDF Downloads 409
2130 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one pre-transplant AI-ECG screen for AF above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. When the prediction for POAF was made using the first postoperative ECG in the subgroup without an elevated pre-transplant screen (n=1084, on account of 20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08 the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that the incidence of POAF is low, and a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility in prioritizing monitoring and evaluation of transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of ECG-based screening.

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 105
2129 Simulation of Wind Generator with Fixed Wind Turbine under Matlab-Simulink

Authors: Mahdi Motahari, Mojtaba Farzaneh, Armin Parsian Nejad

Abstract:

The rapidly growing wind industry is clearly expressing the need for education and training worldwide, particularly at the system level. Modelling and simulating a wind generator system using Matlab-Simulink provides expert help in understanding wind systems engineering and system design. Working under Matlab-Simulink, we present the integration of the developed WECS model with the public electrical grid. A test of the calculated power and Cp against the equivalent experimental data, using statistical analysis, is performed. The statistical indicators of accuracy show better results for the presented method, with RMSE of 21% and 22%, MBE of 0.77% and 0.12%, and MAE of 3% and 4%. On the other hand, we study its behavior when integrated into the whole power system. Three levels of wind speed have been chosen: low, with 5 m/s as the mean value; medium, with 8 m/s as the mean value; and high, with 12 m/s as the mean value. These allowed predicting and supervising the active power produced by the system, characterized respectively by mean powers of -150 kW, -250 kW and -480 kW, which will be injected directly into the public electrical grid, and the reactive power, characterized respectively by mean powers of 60 kW, 180 kW and 320 kW, which will be consumed by the wind generator.

Keywords: modelling, simulation, wind generator, fixed speed wind turbine, Matlab-Simulink

Procedia PDF Downloads 600
2128 Formulation of Optimal Shifting Sequence for Multi-Speed Automatic Transmission

Authors: Sireesha Tamada, Debraj Bhattacharjee, Pranab K. Dan, Prabha Bhola

Abstract:

The most important component in an automotive transmission system is the gearbox, which controls the speed of the vehicle. In an automatic transmission, the right positioning of actuators ensures an efficient transmission mechanism embodiment, wherein the challenge lies in formulating the number of actuators associated with modelling a gearbox. Data on actuation and gear shifting sequences have been retrieved from the available literature, including patent documents, and have been used in this proposed heuristics-based methodology for modelling the actuation sequence in a gearbox. This paper presents a methodological approach to designing a gearbox for the purpose of obtaining an optimal shifting sequence. The computational model considers the number of stages and gear teeth as input parameters, since these two are the determinants of the gear ratios in an epicyclic gear train. The proposed transmission schematic, or stick diagram, aids in developing the gearbox layout design. The number of iterations and the development time required to design a gearbox layout are reduced by using this approach.

Keywords: automatic transmission, gear-shifting, multi-stage planetary gearbox, rank ordered clustering

Procedia PDF Downloads 300
2127 Sedimentary Response to Coastal Defense Works in São Vicente Bay, São Paulo

Authors: L. C. Ansanelli, P. Alfredini

Abstract:

The article presents an evaluation of the effectiveness of two groins located at Gonzaguinha and Milionários Beaches, situated on the southeast coast of Brazil. The effectiveness of these coastal defense structures is evaluated in terms of sedimentary dynamics, which is one of the most important environmental processes to be assessed in coastal engineering studies. The applied method is based on the implementation of the Delft3D numerical model system tools: the Delft3D-WAVE module was used for wave modelling, Delft3D-FLOW for hydrodynamic modelling and Delft3D-SED for sediment transport modelling. The models were calibrated so that the simulations adequately represent the region studied, with improvements in the model elements evaluated through statistical comparisons between the model results and the wave, current and tide data recorded in the study area. An analysis of the maximum wave heights was carried out to select the months with the highest accumulated energy and to implement these conditions in the engineering scenarios. The engineering studies were performed for two scenarios: 1) numerical simulation of the area considering only the two existing groins; 2) conception of breakwaters coupled to the ends of the existing groins, resulting in two “T”-shaped structures. The sediment model showed that, for the simulated period, the area is affected by erosive processes and that the existing groins have little effectiveness in defending the coast in question. The implemented T structures showed some effectiveness in protecting the beaches against erosion and provided recovery of the portion of Milionários Beach directly sheltered by them. To complement this study, the conception of further engineering scenarios that might recover other areas of the studied region is suggested.

Keywords: coastal engineering, coastal erosion, Sao Vicente bay, Delft3D, coastal engineering works

Procedia PDF Downloads 108
2126 Real Time Data Communication with FlightGear Using Simulink Over a UDP Protocol

Authors: Adil Loya, Ali Haider, Arslan A. Ghaffor, Abubaker Siddique

Abstract:

Simulation and modelling of Unmanned Aerial Vehicles (UAVs) has gained wide popularity within the aerospace community. The demand for designing and modelling optimized control systems for UAVs has increased tenfold over the last decade, because next-generation warfare is dependent on unmanned technologies. Therefore, this research focuses on the simulation of nonlinear UAV dynamics in Simulink and its integration with FlightGear. There has been a lot of research on implementing optimized control using Simulink; however, there are fewer known techniques for simulating these dynamics over FlightGear, and the tedious task of acquiring data from it is tackled in this research. Sending data to FlightGear is easy, but receiving it back in Simulink is not as straightforward, i.e., normally only control data can be received at the output. In this research, however, we have managed to get the data out of FlightGear by implementing a level-2 S-function block within Simulink. The results captured from FlightGear over a User Datagram Protocol (UDP) communication are then compared with the attitude signals that were sent previously. This provides useful information regarding the difference between the outputs obtained from Simulink and FlightGear. It was found that the values received in Simulink were in high agreement with the FlightGear output. The complete study has been conducted in a discrete manner.

Keywords: aerospace, flight control, flightgear, communication, Simulink

Procedia PDF Downloads 249
2125 The Use of Fractional Brownian Motion in the Generation of Bed Topography for Bodies of Water Coupled with the Lattice Boltzmann Method

Authors: Elysia Barker, Jian Guo Zhou, Ling Qian, Steve Decent

Abstract:

A method of modelling topography used in the simulation of riverbeds is proposed in this paper, which removes the need for datapoints and measurements of physical terrain. While complex scans of the contours of a surface can be achieved with other methods, this requires specialised tools, which the proposed method overcomes by using fractional Brownian motion (FBM) as a basis to estimate the real surface within a 15% margin of error while attempting to optimise algorithmic efficiency. This removes the need for complex, expensive equipment and reduces resources spent modelling bed topography. This method also accounts for the change in topography over time due to erosion, sediment transport, and other external factors which could affect the topography of the ground by updating its parameters and generating a new bed. The lattice Boltzmann method (LBM) is used to simulate both stationary and steady flow cases in a side-by-side comparison over the generated bed topography using the proposed method and a test case taken from an external source. The method, if successful, will be incorporated into the current LBM program used in the testing phase, which will allow an automatic generation of topography for the given situation in future research, removing the need for bed data to be specified.
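
A hedged sketch of how a 1-D fBm bed profile can be synthesized spectrally (power spectral density proportional to f^-(2H+1)) is given below; the Hurst exponent, amplitude scaling and grid size are assumptions for illustration, not the parameters used in the study.

```python
import numpy as np

def fbm_profile(n, hurst, rng, amplitude=1.0):
    """1-D fractional Brownian motion bed profile via spectral synthesis."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    spectrum = np.zeros(freqs.size, dtype=complex)
    # Power spectral density of fBm scales as f^-(2H+1); assign random phases.
    nonzero = freqs > 0
    magnitude = freqs[nonzero] ** (-(2 * hurst + 1) / 2.0)
    phases = rng.uniform(0, 2 * np.pi, magnitude.size)
    spectrum[nonzero] = magnitude * np.exp(1j * phases)
    profile = np.fft.irfft(spectrum, n=n)
    profile -= profile.mean()
    return amplitude * profile / profile.std()

rng = np.random.default_rng(9)
bed = fbm_profile(n=1024, hurst=0.7, rng=rng, amplitude=0.05)   # metres, assumed scale
print("Bed elevation range:", bed.min().round(3), "to", bed.max().round(3), "m")
```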

Keywords: bed topography, FBM, LBM, shallow water, simulations

Procedia PDF Downloads 75
2124 Hybrid Direct Numerical Simulation and Large Eddy Simulating Wall Models Approach for the Analysis of Turbulence Entropy

Authors: Samuel Ahamefula

Abstract:

Turbulent motion is a highly nonlinear and complex phenomenon, and its modelling is still very challenging. In this study, we developed a hybrid computational approach to accurately simulate fluid turbulence phenomena. The focus is on coupling and transitioning between Direct Numerical Simulation (DNS) and Large Eddy Simulating Wall Models (LES-WM) regions. In the framework, high-order, high-fidelity fluid dynamics methods are utilized to simulate the unsteady compressible Navier-Stokes equations in the Eulerian format on unstructured moving grids. The coupling and transitioning of DNS and LES-WM are conducted through a linearly staggered Dirichlet-Neumann coupling scheme. The high-fidelity framework is verified and validated based on, respectively, the ability of DNS to capture the full range of turbulent scales with accurate results and the efficiency of LES-WM in simulating the near-wall turbulent boundary layer using wall models.

Keywords: computational methods, turbulence modelling, turbulence entropy, Navier-Stokes equations

Procedia PDF Downloads 77