Search results for: predictive microbiology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1110

750 Modelling Fluidization by Data-Based Recurrence Computational Fluid Dynamics

Authors: Varun Dongre, Stefan Pirker, Stefan Heinrich

Abstract:

Over the last decades, the numerical modelling of fluidized bed processes has become feasible even for industrial processes. Commonly, continuous two-fluid models are applied to describe large-scale fluidization. In order to allow for coarse grids, novel two-fluid models account for unresolved sub-grid heterogeneities. However, computational efforts remain high, in the order of several hours of compute time for a few seconds of real time, thus preventing the representation of long-term phenomena such as heating or particle conversion processes. In order to overcome this limitation, data-based recurrence computational fluid dynamics (rCFD) has been put forward in recent years. rCFD can be regarded as a data-based method that relies on the numerical predictions of a conventional short-term simulation. These data are stored in a database and then used by rCFD to efficiently time-extrapolate the flow behavior in high spatial resolution. This study compares the numerical predictions of rCFD simulations with those of corresponding full CFD reference simulations for lab-scale and pilot-scale fluidized beds. In assessing the predictive capabilities of rCFD simulations, we focus on solid mixing and secondary gas holdup. We observed that predictions made by rCFD simulations are highly sensitive to numerical parameters such as the diffusivity associated with face swaps. We achieved a computational speed-up of four orders of magnitude (10,000 times faster than classical TFM simulation), eventually allowing for real-time simulations of fluidized beds. In the next step, we apply the checkerboarding technique by introducing gas tracers subjected to convection and diffusion. We then analyze the concentration profiles, observing the mixing and transport of the gas tracers, gaining insights into their convective and diffusive patterns, and moving further towards heat and mass transfer methods. Finally, we run rCFD simulations and calibrate their numerical and physical parameters against conventional two-fluid model (full CFD) simulations. As a result, this study gives a clear indication of the applicability, predictive capabilities, and existing limitations of rCFD in the realm of fluidization modelling.
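
For readers new to the recurrence idea, the following is a minimal Python sketch of how a stored snapshot database can be time-extrapolated: a dissimilarity matrix over the stored fields is computed, and a long trajectory is synthesised by replaying short database segments and jumping to a similar state at each break. The data, segment length, and jump rule are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(42)
# Hypothetical database: 100 stored snapshots of a flattened solids-fraction field.
snapshots = rng.random((100, 2000))

# Recurrence (dissimilarity) matrix between all stored flow states.
dist = cdist(snapshots, snapshots)

def recurrence_trajectory(dist, n_steps, seg_len=10):
    """Replay short segments of the database, jumping to a similar state at
    each segment break, to synthesise an arbitrarily long index sequence."""
    n = dist.shape[0]
    idx = int(rng.integers(n))
    traj = []
    while len(traj) < n_steps:
        end = min(idx + seg_len, n - 1)
        traj.extend(range(idx, end))
        # jump to one of the five states most similar to the segment's end state
        idx = int(rng.choice(np.argsort(dist[end])[1:6]))
    return traj[:n_steps]

long_run = recurrence_trajectory(dist, n_steps=1000)
print(len(long_run), long_run[:12])
```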

Keywords: multiphase flow, recurrence CFD, two-fluid model, industrial processes

Procedia PDF Downloads 73
749 Predictive Factors of Prognosis in Acute Stroke Patients Receiving Traditional Chinese Medicine Therapy: A Retrospective Study

Authors: Shaoyi Lu

Abstract:

Background: Traditional Chinese medicine has been used to treat stroke, which is a major cause of morbidity and mortality. There is, however, no clear agreement about the optimal timing, population, efficacy, and predictive prognostic factors of traditional Chinese medicine supplemental therapy. Method: In this study, we performed a retrospective analysis of data collected from stroke patients in the Stroke Registry In Chang Gung Healthcare System (SRICHS). Stroke patients who received traditional Chinese medicine consultation in the neurology ward of Keelung Chang Gung Memorial Hospital from Jan 2010 to Dec 2014 were enrolled. Clinical profiles including neurologic deficit, activities of daily living and other basic characteristics were analyzed. Through propensity score matching, we compared the NIHSS and Barthel index before and after hospitalization, applied subgroup analysis, and adjusted by a multivariate regression method. Results: In total, 115 stroke patients were enrolled, with 23 in the experimental group and 92 in the control group. The most important factors for prognosis prediction were the scores on the National Institutes of Health Stroke Scale and the Barthel index right before hospitalization. Traditional Chinese medicine intervention had no statistically significant influence on the neurological deficit of acute stroke patients, and a mildly negative influence on the daily activity performance of acute hemorrhagic stroke patients. Conclusion: The efficacy of traditional Chinese medicine as a supplemental therapy for acute stroke patients remains controversial. The reasons for this might be complex and require more research to comprehend.
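
A rough illustration of the matching step described above: the sketch estimates propensity scores with a logistic model and performs greedy 1:1 nearest-neighbour matching without replacement. All column names and data are synthetic stand-ins (the study itself matched roughly 1:4, 23 treated vs. 92 controls).

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": rng.normal(70, 10, 500),
    "baseline_nihss": rng.integers(1, 25, 500),
    "baseline_barthel": rng.integers(0, 100, 500),
    "tcm": (rng.random(500) < 0.2).astype(int),   # hypothetical treatment flag
})

# 1. Estimate propensity scores P(treated | covariates).
X = df[["age", "baseline_nihss", "baseline_barthel"]]
df["ps"] = LogisticRegression().fit(X, df["tcm"]).predict_proba(X)[:, 1]

# 2. Greedy nearest-neighbour matching on the propensity score.
treated = df[df.tcm == 1]
controls = df[df.tcm == 0].copy()
matched = []
for _, row in treated.iterrows():
    j = (controls.ps - row.ps).abs().idxmin()
    matched.append(j)
    controls = controls.drop(j)   # match without replacement
print(f"{len(treated)} treated matched to {len(matched)} controls")
```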

Keywords: traditional Chinese medicine, complementary and alternative medicine, stroke, acupuncture

Procedia PDF Downloads 359
748 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings

Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian

Abstract:

Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automate building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate our platform's capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms on the Oak Ridge National Laboratory (ORNL) main campus. Our platform is developed using an adaptive and flexible architecture design, rendering it generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.

Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM

Procedia PDF Downloads 107
747 Prediction of Anticancer Potential of Curcumin Nanoparticles by Means of Quasi-QSAR Analysis Using Monte Carlo Method

Authors: Ruchika Goyal, Ashwani Kumar, Sandeep Jain

Abstract:

The experimental data for the anticancer potential of curcumin nanoparticles were compiled from eclectic data sources. The optimal descriptors were examined using the Monte Carlo method-based CORAL SEA software. The statistical quality of the model is as follows: n = 14, R² = 0.6809, Q² = 0.5943, s = 0.175, MAE = 0.114, F = 26 (sub-training set); n = 5, R² = 0.9529, Q² = 0.7982, s = 0.086, MAE = 0.068, F = 61, Av R²m = 0.7601, ∆R²m = 0.0840, k = 0.9856 and kk = 1.0146 (test set); and n = 5, R² = 0.6075 (validation set). These data can be used to build predictive QSAR models for anticancer activity.
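
The reported fit statistics follow standard definitions; as a sanity-check sketch, R² and MAE for any observed/predicted pair can be computed as below (the activity values here are hypothetical, not the study's data).

```python
import numpy as np

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

def mae(y, yhat):
    """Mean absolute error."""
    return np.mean(np.abs(y - yhat))

y_obs = np.array([5.1, 5.8, 6.2, 4.9, 5.5])    # hypothetical observed activities
y_pred = np.array([5.0, 5.9, 6.0, 5.2, 5.4])   # hypothetical model predictions
print(f"R2 = {r2(y_obs, y_pred):.4f}, MAE = {mae(y_obs, y_pred):.3f}")
```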

Keywords: anticancer potential, curcumin, model, nanoparticles, optimal descriptors, QSAR

Procedia PDF Downloads 315
746 Impact of Diabetes Mellitus Type 2 on Clinical In-Stent Restenosis in First Elective Percutaneous Coronary Intervention Patients

Authors: Leonard Simoni, Ilir Alimehmeti, Ervina Shirka, Endri Hasimi, Ndricim Kallashi, Verona Beka, Suerta Kabili, Artan Goda

Abstract:

Background: Diabetes mellitus type 2, small vessel calibre, stented vessel length, complex lesion morphology, and prior bypass surgery have been reported as risk factors for in-stent restenosis (ISR). However, there are some contradictory results about body mass index (BMI) as a risk factor for ISR. Purpose: We aimed to identify clinical, lesional and procedural factors that can predict clinical ISR in our patients. Methods: We enrolled 759 patients who underwent first-time elective PCI with bare metal stents (BMS) from September 2011 to December 2013 in our Department of Cardiology and followed them for at least 1.5 years, with a median of 862 days (2 years and 4 months). Only patients re-admitted with ischemic heart disease underwent control coronary angiography; no routine angiographic control was performed. Patients were categorized into ISR and non-ISR groups and compared. Multivariate analysis (binary logistic regression, forward conditional method) was used to identify independent predictive risk factors. P values < 0.05 were considered statistically significant. Results: ISR individuals, compared to non-ISR individuals, had a significantly lower BMI (25.7±3.3 vs. 26.9±3.7, p=0.004), higher-risk anatomy (LM + 3-vessel CAD) (23% vs. 14%, p=0.03), a higher number of stents per person (2.1±1.1 vs. 1.75±0.96, p=0.004), a greater total stent length per person (39.3±21.6 vs. 33.3±18.5, p=0.01), and lower use of clopidogrel and ASA together (95% vs. 99%, p=0.012). They also had a higher, although not statistically significant, prevalence of diabetes mellitus (42% vs. 32%, p=0.072) and a greater number of treated vessels (1.36±0.5 vs. 1.26±0.5, p=0.08). In the multivariate analysis, diabetes mellitus type 2 and multiple stents were independent predictive risk factors for in-stent restenosis, OR 1.66 [1.03-2.68], p=0.039, and OR 1.44 [1.16-1.78], p=0.001, respectively. Conversely, higher BMI and the use of clopidogrel and ASA together emerged as protective factors, OR 0.88 [0.81-0.95], p=0.001 and OR 0.2 [0.06-0.72], p=0.013, respectively. Conclusion: Diabetes mellitus and multiple stents are strong predictive risk factors, whereas the use of clopidogrel and ASA together is protective against clinical in-stent restenosis. Paradoxically, a higher BMI is protective against in-stent restenosis, probably related to a larger vessel diameter and consequently a larger diameter of the stents implanted in these patients. Further studies are needed to clarify this finding.
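
The study used a stepwise (forward conditional) binary logistic regression; as a minimal sketch of how such odds ratios with 95% confidence intervals are obtained, the following fits a plain logistic model with statsmodels on synthetic data with hypothetical column names and effect sizes.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 759
df = pd.DataFrame({
    "diabetes": rng.integers(0, 2, n),
    "n_stents": rng.integers(1, 6, n),
    "bmi": rng.normal(26.5, 3.6, n),
    "dual_antiplatelet": rng.integers(0, 2, n),
})
# Synthetic outcome loosely shaped like the reported associations.
logit = 0.5 * df.diabetes + 0.35 * df.n_stents - 0.12 * (df.bmi - 26.5) - 1.6
df["isr"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["diabetes", "n_stents", "bmi", "dual_antiplatelet"]])
fit = sm.Logit(df["isr"], X).fit(disp=0)
odds_ratios = np.exp(fit.params)       # OR per unit change of each predictor
conf_int = np.exp(fit.conf_int())      # 95% CI on the OR scale
print(pd.concat([odds_ratios, conf_int], axis=1))
```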

Keywords: body mass index, diabetes mellitus, in-stent restenosis, percutaneous coronary intervention

Procedia PDF Downloads 209
745 Outcomes of Combined Penetrating Keratoplasty and Vitreo-Retinal Surgery in Management of Endophthalmitis with Obscured Corneal Clarity

Authors: Abhishek Dave, Manisha Singh

Abstract:

Purpose: The study aims to evaluate the outcomes of combined penetrating keratoplasty (PKP) and vitreo-retinal (VR) surgery in patients having endophthalmitis with poor corneal clarity. Methods: PKP with VR surgery was performed in 43 eyes. This is a retrospective analysis of their preoperative, intraoperative and microbiological characteristics and of anatomical and functional outcomes. Results: Corneal opacification was due to corneal ulcer in 30 eyes (69.7%), graft infection in 8 (18.6%), bullous keratopathy in 4 and corneal scar in 1 eye. Postoperative visual acuity improved in 20 eyes (46.5%), was unchanged in 14 (32.5%) and deteriorated in 9 (20.9%). Poor anatomic outcomes were seen in 15 (34.88%) eyes (9 phthisis bulbi, 6 eviscerated). The graft remained clear in 24 eyes at 1 year. Microbiology revealed bacteria in 26 eyes, fungus in 14 and no growth in 3. Six of 11 patients who also had poor vision in the fellow eye achieved functional success. Conclusion: PKP with VR surgery is a complex but globe-salvaging procedure for poor-prognosis eyes, which would otherwise need evisceration.

Keywords: penetrating keratoplasty, VR surgery, endophthalmitis, corneal ulcer

Procedia PDF Downloads 48
744 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School

Authors: Martín Pratto Burgos

Abstract:

The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. This course has been designed to assist students in preparing for the math courses that are essential for engineering degrees, referred to in this research as Math1, Math2, and Math3. The research proposes to build a model that can accurately predict a student's activity and academic progress based on their performance in the three essential mathematical courses. Additionally, there is a need for a model that can forecast the effect of the Introductory Mathematical Course on approval of the three essential courses during the first academic year. The techniques used are principal component analysis and predictive modelling using the generalised linear model. The dataset includes information from 5135 engineering students and 12 different characteristics based on activity and course performance. Two models are created for binomially distributed outcomes using the R programming language. Model 1 retains variables whose p-values are less than 0.05, and Model 2 uses the stepAIC function to remove variables and reach the lowest AIC score. After principal component analysis, the main components represented on the y-axis are approval of the Introductory Mathematical Course, while the x-axis represents approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. Model 2, which considered student activity, performed the best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, a student's engagement in school activities will continue for three years after approval of the Introductory Mathematical Course if they successfully complete the Math1 and Math2 courses; passing the Math3 course has no effect on the student's activity. Concerning academic progress, the best fit is Model 1, with an AUC of 0.56 and an accuracy rate of 91%. The model indicates that if students pass the three first-year courses, they will progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect student activity and academic progress. The best model to explain the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy. The model shows that passing the Introductory Mathematical Course helps students to pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Matching the three predictive models: if students pass the Math1 and Math2 courses, they will stay active for three years after taking the Introductory Mathematical Course and will continue following the recommended engineering curriculum; additionally, the Introductory Mathematical Course helps students to pass Math1 and Math2 when they start Engineering School. The models obtained in the research do not consider the time students took to pass the three math courses, but they can successfully assess courses in the university curriculum.
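
The study used R's stepAIC; a hedged Python analogue of the same idea, backward elimination driven by the AIC of a binomial model, is sketched below on synthetic data with hypothetical variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
df = pd.DataFrame(rng.random((500, 4)),
                  columns=["intro_pass", "math1", "math2", "noise"])
df["active_3yr"] = (rng.random(500) < 0.3 + 0.4 * df.math1).astype(int)

def backward_aic(df, outcome, candidates):
    """Drop one variable at a time for as long as the AIC keeps improving."""
    kept = list(candidates)
    best = sm.Logit(df[outcome], sm.add_constant(df[kept])).fit(disp=0).aic
    improved = True
    while improved and len(kept) > 1:
        improved = False
        for var in list(kept):
            trial = [v for v in kept if v != var]
            aic = sm.Logit(df[outcome], sm.add_constant(df[trial])).fit(disp=0).aic
            if aic < best:
                best, kept, improved = aic, trial, True
    return kept, best

kept, aic = backward_aic(df, "active_3yr", ["intro_pass", "math1", "math2", "noise"])
print(kept, round(aic, 1))
```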

Keywords: machine-learning, engineering, university, education, computational models

Procedia PDF Downloads 93
743 Utilization of Standard Paediatric Observation Chart to Evaluate Infants under Six Months Presenting with Non-Specific Complaints

Authors: Michael Zhang, Nicholas Marriage, Valerie Astle, Marie-Louise Ratican, Jonathan Ash, Haddijatou Hughes

Abstract:

Objective: Young infants are often brought to the Emergency Department (ED) with a variety of complaints, some of which are non-specific and present a diagnostic challenge to the attending clinician. Whilst invasive investigations such as blood tests and lumbar puncture are necessary in some cases to exclude serious infections, basic clinical tools, in addition to a thorough clinical history, can be useful to assess the risk of serious conditions in these young infants. This study aimed to examine the utilization of one such clinical tool. Methods: This retrospective observational study examined the medical records of infants under 6 months presenting to a mixed urban ED between January 2013 and December 2014. Infants deemed to have non-specific complaints or diagnoses by the emergency clinicians were selected for analysis; those with clear systemic diagnoses were excluded. Among all relevant clinical information and investigation results, utilization of the Standard Paediatric Observation Chart (SPOC) was particularly scrutinized in these medical records. This chart was developed by expert clinicians in the local health department. It categorizes important clinical signs into colour-coded zones as a visual cue for the serious implications of some abnormalities. An infant is regarded as SPOC positive when fulfilling one red-zone or two yellow-zone criteria, prompting the attending clinician to investigate and treat for potentially serious conditions accordingly. Results: Eight hundred and thirty-five infants met the inclusion criteria for this project. Those admitted to the hospital for further management were more likely to be SPOC positive than the discharged infants (odds ratio: 12.26, 95% CI: 8.04 – 18.69). Similarly, the sepsis alert criteria on the SPOC were positive in a higher percentage of patients with serious infections (56.52%) in comparison to those with mild conditions (15.89%) (p < 0.001). The SPOC sepsis criteria had a sensitivity of 56.5% (95% CI: 47.0% - 65.7%) and a moderate specificity of 84.1% (95% CI: 80.8% - 87.0%) for identifying serious infections. Applied to this infant population, with a 17.4% prevalence of serious infection, the positive predictive value was only 42.8% (95% CI: 36.9% - 49.0%); however, the negative predictive value was high at 90.2% (95% CI: 88.1% - 91.9%). Conclusions: The Standard Paediatric Observation Chart is a useful clinical tool in practice to help identify and manage young sick infants in the ED effectively.
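
The reported predictive values follow from sensitivity, specificity and prevalence via Bayes' rule, as this short check using the abstract's own figures shows.

```python
def ppv_npv(sens, spec, prev):
    """Convert sensitivity/specificity to predictive values via Bayes' rule."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

ppv, npv = ppv_npv(sens=0.565, spec=0.841, prev=0.174)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # ≈ 42.8% and 90.2%, matching the abstract
```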

Keywords: clinical tool, infants, non-specific complaints, Standard Paediatric Observation Chart

Procedia PDF Downloads 252
742 A Study of Area-Level Mosquito Abundance Prediction Using a Supervised Machine Learning Point-Level Predictor

Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes

Abstract:

In the literature, data-driven approaches to mosquito abundance prediction rely on supervised machine learning models trained with historical in-situ measurements. The drawback of this approach is that once a model is trained on point-level (specific x, y coordinates) measurements, its predictions again refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early-warning and mitigation applications need predictions at an area level, such as a municipality or village. In this study, we apply a data-driven predictive model, which relies on public open satellite Earth Observation and geospatial data and is trained with historical point-level in-situ measurements of mosquito abundance. We then propose a methodology to extend a point-level predictive model to a broader area-level prediction. Our methodology relies on random spatial sampling of the area of interest (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making a point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze it in order to understand which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level prediction, and to provide qualitative insights regarding the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) for two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent: the mean mosquito abundance of a given area can be estimated with accuracy similar to the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on performance, in contrast to the raw number of sampling points, which is not informative regarding performance without the size of the area. Additionally, we saw that the distance between the sampling points and the real in-situ measurements used for training did not strongly affect performance.
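
A minimal sketch of the proposed aggregation, assuming a hypothetical point-level predictor and a simple rejection rule in the spirit of a hard-core process; the predictor, area bounds and minimum distance are placeholders.

```python
import numpy as np

def point_predictor(x, y):
    """Stand-in for a trained point-level model fed EO/geomorphological features."""
    return 50 + 30 * np.sin(x / 10.0) + 20 * np.cos(y / 15.0)

def area_prediction(x_min, x_max, y_min, y_max, n_samples, min_dist, rng):
    """Randomly sample the area, rejecting points closer than min_dist to an
    already accepted point, then average the point-wise predictions."""
    pts = []
    while len(pts) < n_samples:
        p = rng.uniform([x_min, y_min], [x_max, y_max])
        if all(np.linalg.norm(p - q) >= min_dist for q in pts):
            pts.append(p)
    return np.mean([point_predictor(x, y) for x, y in pts])

rng = np.random.default_rng(0)
print(f"area-level abundance ≈ {area_prediction(0, 100, 0, 100, 50, 5.0, rng):.1f}")
```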

Keywords: mosquito abundance, supervised machine learning, culex pipiens, spatial sampling, west nile virus, earth observation data

Procedia PDF Downloads 147
741 Is School Misbehavior a Decision: Implications for School Guidance

Authors: Rachel C. F. Sun

Abstract:

This study examined the predictive effects of moral competence, prosocial norms and positive behavior recognition on school misbehavior among Chinese junior secondary school students. Results of a multiple regression analysis showed that students were more likely to misbehave in school when they had lower levels of moral competence and prosocial norms, and when they perceived their positive behavior as less likely to be recognized. Practical implications were discussed regarding how to guide students to make the right choices and behave appropriately in school. Implications for future research were also discussed.

Keywords: moral competence, positive behavior recognition, prosocial norms, school misbehavior

Procedia PDF Downloads 383
740 Thermal Effects in Power Electronics for InAlN/GaN HEMT Devices

Authors: Zakarya Kourdi, Mohammed Khaouani, Benyounes Bouazza, Ahlam Guen-Bouazza, Amine Boursali

Abstract:

In this paper, we evaluate thermal effects in high-performance InAlN/GaN high electron mobility transistors (HEMTs) with a 30 nm gate length. We also analyze and simulate these devices and show how they can be used in different applications. The Silvaco TCAD simulator was used to obtain predictive results for the DC, AC and RF characteristics. The devices offer a maximum drain current of 0.67 A, a transconductance of 720 mS/mm, a unilateral power gain of 180 dB, a cutoff frequency of 385 GHz and a maximum oscillation frequency of 810 GHz. These results confirm the feasibility of using InAlN/GaN HEMTs in high-power amplifiers, including thermally demanding environments.

Keywords: HEMT, Thermal Effect, Silvaco, InAlN/GaN

Procedia PDF Downloads 465
739 Expression of uPA, tPA, and PAI-1 in Calcified Aortic Valves

Authors: Abdullah M. Alzahrani

Abstract:

Our physiopathological assumption is that u-PA, t-PA, and PAI-1 are released by calcified aortic valves and play a role in the calcification of these valves. Sixty-five calcified aortic valves were collected from patients suffering from aortic stenosis. Each valve was incubated for 24 hours in culture medium. The supernatants were used to measure u-PA, t-PA, and PAI-1 concentrations; valve calcification was evaluated using biphotonic absorptiometry. Aortic stenosis valves expressed normal plasminogen activator concentrations and overexpressed PAI-1 (mean u-PA, t-PA, and PAI-1 concentrations were, respectively, 1.69 ng/mL ± 0.80, 2.76 ng/mL ± 1.33, and 53.27 ng/mL ± 36.39). There was no correlation between u-PA and PAI-1 (r = 0.3), but t-PA and PAI-1 were strongly correlated with each other (r = 0.6). Overexpression of PAI-1 was proportional to the calcium content of the AS valves. Our results demonstrate a consistent increase of PAI-1 proportional to the calcification. The overexpression of PAI-1 may be useful as a predictive indicator in patients with aortic stenosis.

Keywords: aortic valve, PAI-1, tPA gene, uPA gene

Procedia PDF Downloads 473
738 Reducing the Risk of Alcohol Relapse after Liver Transplantation

Authors: Rebeca V. Tholen, Elaine Bundy

Abstract:

Background: Liver transplantation (LT) is considered the only curative treatment for end-stage liver disease (ESLD). The effects of alcoholism can cause irreversible liver damage, cirrhosis and subsequent liver failure. Alcohol relapse after transplant occurs in 20-50% of patients and increases the risk for recurrent cirrhosis, organ rejection, and graft failure. Alcohol relapse after transplant has been identified as a problem among liver transplant recipients at a large urban academic transplant center in the United States. Transplantation will reverse the complications of ESLD, but it does not treat underlying alcoholism or reduce the risk of relapse after transplant. The purpose of this quality improvement project is to implement and evaluate the effectiveness of the High-Risk Alcoholism Relapse (HRAR) Scale to screen and identify patients at high risk for alcohol relapse after receiving an LT. Methods: The HRAR Scale is a predictive tool designed to determine the severity of alcoholism and the risk of relapse after transplant. The scale consists of three variables identified as having the highest predictive power for early relapse: daily number of drinks, history of previous inpatient treatment for alcoholism, and number of years of heavy drinking. All adult liver transplant recipients at a large urban transplant center were screened with the HRAR Scale prior to hospital discharge. An ordinal score of zero to two is assigned for each variable, and the total score ranges from zero to six; scores of three to six are considered high risk. Results: Descriptive statistics revealed that 25 patients were newly transplanted and discharged from the hospital during an 8-week period. 40% of patients (n=10) were identified as high risk for relapse and 60% (n=15) as low risk. The daily number of drinks was determined by alcohol content (1 drink = 15 g of ethanol) and the number of drinks per day. 60% of patients reported drinking 9-17 drinks per day, and 40% reported ≤ 9 drinks. 50% of high-risk patients reported drinking for ≥ 25 years, 40% for 11-25 years, and 10% for ≤ 11 years. For the number of inpatient treatments for alcoholism, 50% received inpatient treatment once, 20% more than once, and 30% reported never receiving inpatient treatment. The findings reveal the importance and value of a validated screening tool as a more efficient method than other screening methods alone. Integration of a structured clinical tool helps guide the drinking-history portion of the psychosocial assessment, and targeted interventions can be implemented for all high-risk patients. Conclusions: Our findings validate the effectiveness of utilizing the HRAR Scale to screen and identify patients at high risk for alcohol relapse post-LT. Recommendations to help maintain post-transplant sobriety include starting a transplant support group within the organization for all high-risk patients.
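
A minimal sketch of the scale's arithmetic, with bin cut-points matching the categories reported in this abstract (the published HRAR thresholds should be checked against the original instrument):

```python
def hrar_score(drinks_per_day, years_heavy_drinking, prior_inpatient_treatments):
    """Each variable contributes 0-2 points; totals of 3-6 are high risk."""
    score = 0
    score += 0 if drinks_per_day < 9 else (1 if drinks_per_day <= 17 else 2)
    score += 0 if years_heavy_drinking < 11 else (1 if years_heavy_drinking <= 25 else 2)
    score += min(prior_inpatient_treatments, 2)   # none=0, one=1, more than one=2
    return score, "high risk" if score >= 3 else "low risk"

print(hrar_score(drinks_per_day=12, years_heavy_drinking=20, prior_inpatient_treatments=1))
# -> (3, 'high risk')
```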

Keywords: alcoholism, liver transplant, quality improvement, substance abuse

Procedia PDF Downloads 115
737 Agreement between Basal Metabolic Rate Measured by Bioelectrical Impedance Analysis and Estimated by Prediction Equations in Obese Groups

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Basal metabolic rate (BMR) is a widely used and accepted measure of energy expenditure. Its principal determinant is body mass; however, this parameter is also correlated with a variety of other factors. The objective of this study is to measure BMR and compare it with the values obtained from predictive equations in adults classified according to their body mass index (BMI) values. 276 adults were included in the scope of this study. Their age, height and weight values were recorded. Five groups were designed based on their BMI values. The first group (n = 85) comprised individuals with BMI values between 18.5 and 24.9 kg/m2. Those with BMI values from 25.0 to 29.9 kg/m2 constituted Group 2 (n = 90). Individuals with 30.0-34.9 kg/m2, 35.0-39.9 kg/m2 and > 40.0 kg/m2 were included in Groups 3 (n = 53), 4 (n = 28) and 5 (n = 20), respectively. The most commonly used equations were selected for comparison with the measured BMR values. For this purpose, BMR values were calculated using four prediction equations, namely those introduced by the Food and Agriculture Organization (FAO)/World Health Organization (WHO)/United Nations University (UNU), Harris and Benedict, Owen, and Mifflin. Descriptive statistics, ANOVA, post hoc Tukey and Pearson's correlation tests were performed with a statistical program designed for Windows (SPSS, version 16.0). p values smaller than 0.05 were accepted as statistically significant. Mean ± SD values of groups 1, 2, 3, 4 and 5 for measured BMR in kcal were 1440.3 ± 210.0, 1618.8 ± 268.6, 1741.1 ± 345.2, 1853.1 ± 351.2 and 2028.0 ± 412.1, respectively. Upon comparison of means among groups, differences were highly significant between Group 1 and each of the remaining four groups. The values increased from Group 2 to Group 5; however, the differences between Groups 2 and 3, Groups 3 and 4, and Groups 4 and 5 were not statistically significant. These non-significant differences disappeared in the predictive equations proposed by Harris and Benedict, FAO/WHO/UNU and Owen; for Mifflin, the non-significance was limited to Groups 4 and 5. Upon evaluation of the correlations between measured BMR and the estimates computed from the prediction equations, the lowest correlations were observed among individuals within the normal BMI range, and the highest among individuals with BMI values between 30.0 and 34.9 kg/m2. Correlations between measured BMR values and the BMR values calculated by FAO/WHO/UNU and by Owen were the same and the highest. In all groups, the highest correlations were observed between BMR values calculated from the Mifflin and the Harris and Benedict equations, which use age as an additional parameter. In conclusion, the close resemblance of the FAO/WHO/UNU and Owen equations was pointed out; however, the mean values obtained from FAO/WHO/UNU were much closer to the measured BMR values. Besides, the highest correlations were found between BMR calculated from FAO/WHO/UNU and measured BMR. These findings suggest that FAO/WHO/UNU is the most reliable equation and may be used when measured BMR values are not available.
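
For reference, commonly cited forms of two of the compared equations (male versions; the constants differ for women), with W = weight in kg, H = height in cm and A = age in years, are:

Harris-Benedict: BMR = 66.5 + 13.75 W + 5.003 H - 6.755 A
Mifflin-St Jeor: BMR = 10 W + 6.25 H - 5 A + 5

The FAO/WHO/UNU and Owen equations, by contrast, are driven primarily by body weight, which may explain why their estimates behaved almost identically in this study.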

Keywords: adult, basal metabolic rate, fao/who/unu, obesity, prediction equations

Procedia PDF Downloads 131
736 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case

Authors: Lukas Reznak, Maria Reznakova

Abstract:

A recession has a profound negative effect on all stakeholders involved. It follows that the timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is unsuitable for two reasons: the standard continuous models are proving obsolete, and the macroeconomic data are unreliable, often being revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting theory and verify the findings on real data for the Czech Republic and Germany. In the paper, the authors present a family of discrete-choice probit models with parameters estimated by the method of maximum likelihood. In its basic form, the probit models a univariate series of recessions and expansions in the economic cycle for a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict developments in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating the correlation of error terms and thus modelling the dependencies between the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as the information is available in advance and does not undergo retroactive revisions. As importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic and financial-market investor trends which influence the economic cycle. These theoretical approaches are applied to real data for the Czech Republic and Germany. Two models were identified for each country: one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, and three contained a dynamic component.
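
In outline, the dynamic probit described here specifies

P(y_t = 1 | Ω_{t-1}) = Φ(x_t'β + δ y_{t-1}),

where y_t is the recession indicator, x_t stacks the leading indicators (yield-curve and stock-index variables), Φ is the standard normal CDF, and δ captures the autoregressive dynamics. The bivariate extension estimates one such equation per country with jointly normal error terms, corr(ε_1t, ε_2t) = ρ, so that the two countries' states are predicted as a pair. This is a schematic form consistent with the abstract, not the authors' exact specification.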

Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany

Procedia PDF Downloads 246
735 Application of Granular Computing Paradigm in Knowledge Induction

Authors: Iftikhar U. Sikder

Abstract:

This paper illustrates an application of the granular computing approach, namely rough set theory, in data mining. The paper outlines the formalism of granular computing and elucidates the mathematical underpinnings of rough set theory, which has been widely used by the data mining and machine learning communities. A real-world application is illustrated, and the classification performance is compared with other contending machine learning algorithms. The predictive performance of the rough set rule induction model shows comparable success with respect to the other contending algorithms.
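
For concreteness, the core rough set constructs referenced here are the approximations of a concept: given an indiscernibility relation R with equivalence classes [x]_R over a universe U, a target set X ⊆ U is bracketed by

lower approximation: {x ∈ U : [x]_R ⊆ X}
upper approximation: {x ∈ U : [x]_R ∩ X ≠ ∅}

and rules are induced from these approximations and from reducts, the minimal attribute subsets that preserve discernibility.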

Keywords: concept approximation, granular computing, reducts, rough set theory, rule induction

Procedia PDF Downloads 529
734 Assessment of Bisphenol A and 17 α-Ethinyl Estradiol Bioavailability in Soils Treated with Biosolids

Authors: I. Ahumada, L. Ascar, C. Pedraza, J. Montecino

Abstract:

The addition of biosolids to soil has been found to be beneficial to soil health, enriching the soil with essential nutrient elements. Although this sludge has properties that allow for the improvement of the physical features and productivity of agricultural and forest soils and the recovery of degraded soils, it also contains trace elements, trace organic compounds and pathogens that can cause damage to the environment. Application of these biosolids to land without full treatment, as well as the use of treated wastewater, can transfer these compounds into terrestrial and aquatic environments, giving rise to potential accumulation in plants. The general aim of this study was to evaluate the bioavailability of bisphenol A (BPA) and 17 α-ethinyl estradiol (EE2) in a soil-biosolid system using wheat (Triticum aestivum) plant assays, and to determine whether a predictive extraction method using a solution of hydroxypropyl-β-cyclodextrin (HPCD) is a reliable surrogate for this bioassay. Two soils were obtained from the central region of Chile (Lo Prado and Chicauma). Biosolids were obtained from a regional wastewater treatment plant. The soils were amended with biosolids at 90 Mg ha-1. Soils treated with biosolids and spiked with 10 mg kg-1 of EE2 or with 15 mg kg-1 and 30 mg kg-1 of BPA were also included. The BPA and EE2 concentrations were determined in biosolids, soils and plant samples through ultrasound-assisted extraction, solid phase extraction (SPE) and gas chromatography coupled to mass spectrometry (GC/MS). The bioavailable fraction found in each soil cultivated with wheat plants was compared with the results obtained through the cyclodextrin biosimulator method. The total concentrations found in biosolids from a treatment plant were 0.150 ± 0.064 mg kg-1 and 12.8 ± 2.9 mg kg-1 for EE2 and BPA, respectively. BPA and EE2 bioavailability is affected by the organic matter content and the physical and chemical properties of the soil, and the bioavailability response of both compounds in the two soils varied with the EE2 and BPA concentration. For EE2, the wheat plants contained higher concentrations in the roots than in the shoots, and the concentration of EE2 increased with increasing biosolid rates. For BPA, on the other hand, a higher concentration was found in the shoots than in the roots. The predictive capability of the HPCD extraction was assessed using a simple linear correlation test for both compounds in wheat plants. The correlation coefficient for EE2 between the HPCD extraction and the wheat plants was r = 0.99 (p ≤ 0.05); for BPA, on the other hand, no correlation was found. Therefore, the methodology was validated with respect to wheat plant bioassays only for EE2. Acknowledgments: The authors thank FONDECYT 1150502.

Keywords: emerging compounds, bioavailability, biosolids, endocrine disruptors

Procedia PDF Downloads 144
733 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The first dataset poses a protein regression problem, while the second poses a variety classification problem. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and the results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested, including centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to form the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
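
Two of the chemometric preprocessing steps mentioned (SNV and Savitzky-Golay filtering) can be stated precisely; below is a minimal sketch on synthetic stand-in spectra, with shapes that are illustrative rather than those of the datasets described.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(5)
# Hypothetical stand-in for NIR-HSI pixel spectra: 1000 spectra x 224 bands.
spectra = rng.random((1000, 224)) + np.linspace(0, 1, 224)

def snv(X):
    """Standard normal variate: centre and scale each spectrum individually."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Savitzky-Golay smoothing and first derivative along the spectral axis.
smoothed = savgol_filter(snv(spectra), window_length=11, polyorder=2, axis=1)
first_deriv = savgol_filter(snv(spectra), window_length=11, polyorder=2,
                            deriv=1, axis=1)
print(smoothed.shape, first_deriv.shape)
```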

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 98
732 Fair Value Accounting and Evolution of the Ohlson Model

Authors: Mohamed Zaher Bouaziz

Abstract:

Our study examines the Ohlson model, which links a company's market value to its equity and net earnings, in the context of the evolution of the Canadian accounting model, characterized by more extensive use of fair value and a broader measure of performance after IFRS adoption. Our hypothesis is that if equity is reported at fair value, this valuation is closely linked to market capitalization, so the weight of earnings weakens or even disappears in the Ohlson model. Drawing on Canada's adoption of International Financial Reporting Standards (IFRS), our results support our hypothesis: equity appears to include most of the information relevant to investors, while earnings have become less important. However, the predictive power of earnings does not disappear entirely.
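
For orientation, a common empirical (price-levels) form of the Ohlson model relates market value per share P to book value of equity per share BV and earnings per share E:

P_it = α0 + α1 BV_it + α2 E_it + ε_it

Under the hypothesis above, fair-value reporting of equity would drive the earnings coefficient α2 toward zero while book value absorbs most of the value relevance.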

Keywords: fair value accounting, Ohlson model, IFRS adoption, value-relevance of equity and earnings

Procedia PDF Downloads 187
731 Predicting the Success of Bank Telemarketing Using Artificial Neural Network

Authors: Mokrane Selma

Abstract:

The shift towards decision making (DM) based on artificial intelligence (AI) techniques will change the way consumer markets and our societies function. Through AI, predictive analytics is being used by businesses to identify patterns and major trends with the objective of improving DM and influencing future business outcomes. This paper proposes an artificial neural network (ANN) approach to predict the success of telemarketing calls for selling bank long-term deposits. To validate the proposed model, we use bank marketing data comprising 41,188 phone calls. The ANN attains 98.93% accuracy, which outperforms other conventional classifiers and confirms that it is a credible and valuable approach for telemarketing campaign managers.
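
The abstract does not specify the network architecture, so the following is only a generic illustration of the approach: a small scikit-learn MLP trained on a synthetic, class-imbalanced stand-in shaped like the public bank-marketing data (41,188 calls, 20 attributes).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: roughly 89% of calls end without a subscription.
X, y = make_classification(n_samples=41188, n_features=20,
                           weights=[0.89], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16),
                                    max_iter=300, random_state=0))
model.fit(X_tr, y_tr)
print(f"accuracy = {model.score(X_te, y_te):.2%}")
```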

Keywords: bank telemarketing, prediction, decision making, artificial intelligence, artificial neural network

Procedia PDF Downloads 158
730 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

In recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model was developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases that cannot be achieved with traditional approaches. The proposed Hybrid Model was tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The analysis shows that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern over the difficulty of explaining the models for regulatory purposes.
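
The WoE construct at the heart of the approach is compact; the sketch below computes per-bin WoE as ln(%good / %bad) over hypothetical score bands on synthetic data. The Hybrid Model's distribution-matching estimate of WoE is not described in enough detail to reproduce here.

```python
import numpy as np
import pandas as pd

def woe_table(bins, bad):
    """Weight of Evidence per bin: ln(share of goods / share of bads)."""
    df = pd.DataFrame({"bin": bins, "bad": bad})
    grp = df.groupby("bin")["bad"].agg(["count", "sum"])
    good_share = (grp["count"] - grp["sum"]) / (len(df) - df["bad"].sum())
    bad_share = grp["sum"] / df["bad"].sum()
    return np.log(good_share / bad_share)

rng = np.random.default_rng(11)
scores = rng.normal(600, 50, 5000)                 # hypothetical ML risk scores
bad = (rng.random(5000) < 1 / (1 + np.exp((scores - 560) / 25))).astype(int)
bands = pd.qcut(scores, q=5, labels=False)         # quintile score bands
print(woe_table(bands, bad))
```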

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 132
729 A Research on Tourism Market Forecast and Its Evaluation

Authors: Min Wei

Abstract:

Traditional methods for forecasting the tourism market pay more attention to the accuracy of the forecasts while ignoring the feasibility and operability of the forecasting process, which makes the results difficult to test scientifically. Applying a linear regression model, this paper attempts to construct a scientific evaluation system for predictive values, both to ensure the accuracy and stability of the predicted values and to ensure the feasibility of forecasting and the operability of the results. The findings show that a scientific evaluation system can implement the scientific concept of development and the harmonious, coordinated development of man and nature.

Keywords: linear regression model, tourism market, forecast, tourism economics

Procedia PDF Downloads 330
728 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing prices and have largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five different metropolitan areas representing different market trends and compared three time-lag settings: no lag, a 6-month lag, and a 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model real estate prices using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, the FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly; thus, two methods of imputing missing values, backfilling and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF's inherent limitations. Both ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to impute missing values and the implementation of time lags can have a significant influence on model performance and require further investigation. The best performing models varied for each area, but the backfilled 12-month-lag LR models and the interpolated no-lag ANN models showed the best stable performance overall, with accuracies > 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies in identifying the optimal time lag and data imputation methods for establishing accurate predictive models.
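
As an illustration of the time-lag construction, the sketch below builds a 12-month-lag linear regression on a synthetic monthly frame; the column names and values are placeholders for the Case-Shiller index and macro features.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical monthly frame for one metro area.
df = pd.DataFrame({
    "hpi": [180 + i + (i % 7) for i in range(60)],   # home price index stand-in
    "gdp": [100 + 0.5 * i for i in range(60)],
    "income": [50 + 0.3 * i for i in range(60)],
}, index=pd.date_range("2005-03-01", periods=60, freq="MS"))

lag = 12  # predict the index from features observed 12 months earlier
X = df[["gdp", "income"]].shift(lag).dropna()
y = df["hpi"].loc[X.index]

model = LinearRegression().fit(X, y)
print(f"in-sample R^2 of the {lag}-month-lag model: {model.score(X, y):.3f}")
```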

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 102
727 Attention Problems among Adolescents: Examining Educational Environments

Authors: Zhidong Zhang, Zhi-Chao Zhang, Georgianna Duarte

Abstract:

This study investigated attention problems using the Achenbach System of Empirically Based Assessment (ASEBA). Two thousand eight hundred and ninety-four adolescents were surveyed using a stratified sampling method. We examined the relationships between relevant background variables and attention problems. Multiple regression models were applied to analyze the data. Relevant variables such as sports activities, hobbies, age, grade and the number of close friends were included in this study as predictive variables. The analysis results indicated that educational environments and extracurricular activities are important factors influencing students' attention problems.

Keywords: adolescents, ASEBA, attention problems, educational environments, stratified sampling

Procedia PDF Downloads 281
726 A Prospective Neurosurgical Registry Evaluating the Clinical Care of Traumatic Brain Injury Patients Presenting to Mulago National Referral Hospital in Uganda

Authors: Benjamin J. Kuo, Silvia D. Vaca, Joao Ricardo Nickenig Vissoci, Catherine A. Staton, Linda Xu, Michael Muhumuza, Hussein Ssenyonjo, John Mukasa, Joel Kiryabwire, Lydia Nanjula, Christine Muhumuza, Henry E. Rice, Gerald A. Grant, Michael M. Haglund

Abstract:

Background: Traumatic brain injury (TBI) is disproportionately concentrated in low- and middle-income countries (LMICs), with the odds of dying from TBI in Uganda more than 4 times higher than in high-income countries (HICs). The disparities in injury incidence and outcome between LMICs and resource-rich settings have led to increased health outcomes research on TBIs and their associated risk factors in LMICs. While there have been increasing numbers of TBI studies in LMICs over the last decade, there is still a need for more robust prospective registries. In Uganda, a trauma registry implemented in 2004 at the Mulago National Referral Hospital (MNRH) showed that road traffic injury (RTI) is the major contributor (60%) to overall mortality in the casualty department. While the prior registry provides information on injury incidence and burden, it is limited in scope: it does not follow patients longitudinally throughout their hospital stay, nor does it focus specifically on TBIs. Although these retrospective analyses are helpful for benchmarking TBI outcomes, they make it hard to identify specific quality improvement initiatives. The relationships among epidemiology, patient risk factors, clinical care, and TBI outcomes are still relatively unknown at MNRH. Objective: The objectives of this study are to describe the processes of care and determine risk factors predictive of poor outcomes for TBI patients presenting to a single tertiary hospital in Uganda. Methods: Prospective data were collected for 563 TBI patients presenting to a tertiary hospital in Kampala from 1 June to 30 November 2016. Research Electronic Data Capture (REDCap) was used to systematically collect variables spanning 8 categories. Univariate and multivariate analyses were conducted to determine significant predictors of mortality. Results: Of the 563 TBI patients enrolled, 102 (18%) received surgery, 29 (5.1%) were intended for surgery but failed to receive it, and 251 (45%) received non-operative management. Overall mortality was 9.6%, ranging from 4.7% for mild and moderate TBI to 55% for severe TBI patients with GCS 3-5. Within each TBI severity category, mortality differed by management pathway. Variables predictive of mortality were TBI severity, more than one intracranial bleed, failure to receive surgery, high-dependency unit admission, ventilator support outside of surgery, and hospital arrival delayed by more than 4 hours. Conclusions: The overall mortality rate of 9.6% for TBI in Uganda is high and likely underestimates the true TBI mortality. Furthermore, the wide-ranging mortality (3-82%), high ICU fatality, and the negative impact of care delays suggest shortcomings in current triaging practices. Lack of surgical intervention when needed was highly predictive of mortality in TBI patients. Further research into the determinants of surgical intervention, the quality of step-up care, and prolonged care delays is needed to better understand the complex interplay of variables that affect patient outcome. These insights will guide the development of future interventions and resource allocation to improve patient outcomes.

Keywords: care continuum, global neurosurgery, Kampala Uganda, LMIC, Mulago, prospective registry, traumatic brain injury

Procedia PDF Downloads 234
725 Using Mining Methods of WEKA to Predict Quran Verb Tense and Aspect in Translations from Arabic to English: Experimental Results and Analysis

Authors: Jawharah Alasmari

Abstract:

In verb inflection, tense marks past/present/future action, while aspect marks progressive/continuous versus perfect/completed actions. The usage and meaning of tense and aspect differ between Arabic and English. In this research, we applied data mining methods to test the predictive power of candidate features, using our dataset of Arabic verbs in context and their seven English translations. Weka machine learning classifiers are used in this experiment to examine the key features that can guide a translator toward an appropriate English rendering of the Arabic verb's tense and aspect.
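The study itself used Weka, a Java toolkit; as an analogous sketch only, the same feature-testing idea could be expressed in Python with scikit-learn, where every feature name and example below is invented:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical candidate features for each Arabic verb in context.
train_X = [
    {"verb_form": "perfect",   "particle": "qad",  "subject": "3sg"},
    {"verb_form": "imperfect", "particle": "sa",   "subject": "3sg"},
    {"verb_form": "imperfect", "particle": "none", "subject": "3pl"},
]
# English tense/aspect labels taken from reference translations.
train_y = ["past_simple", "future_simple", "present_progressive"]

# One-hot encode the categorical features, then fit a naive Bayes
# classifier (one of the standard classifier families Weka also offers).
clf = make_pipeline(DictVectorizer(), MultinomialNB())
clf.fit(train_X, train_y)
print(clf.predict([{"verb_form": "perfect", "particle": "none", "subject": "3sg"}]))
```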

Keywords: Arabic verb, English translations, mining methods, Weka software

Procedia PDF Downloads 271
724 Designing AI-Enabled Smart Maintenance Scheduler: Enhancing Object Reliability through Automated Management

Authors: Arun Prasad Jaganathan

Abstract:

In today's rapidly evolving technological landscape, the need for efficient and proactive maintenance management solutions has become increasingly evident across industries. Traditional approaches are often reactive, leading to unplanned downtime, increased costs, and decreased operational efficiency. In response to these challenges, this paper proposes an AI-enabled approach to object-based maintenance management aimed at enhancing reliability and efficiency. The paper contributes to the growing body of research on AI-driven maintenance management systems, highlighting the transformative impact of intelligent technologies on object reliability and operational efficiency.

Keywords: AI, machine learning, predictive maintenance, object-based maintenance, expert team scheduling

Procedia PDF Downloads 57
723 Predicting Resistance of Commonly Used Antimicrobials in Urinary Tract Infections: A Decision Tree Analysis

Authors: Meera Tandan, Mohan Timilsina, Martin Cormican, Akke Vellinga

Abstract:

Background: In general practice, many infections are treated empirically without microbiological confirmation. Understanding antimicrobial susceptibility during empirical prescribing can help reduce inappropriate prescribing. This study aims to apply a prediction model using a decision tree approach to predict antimicrobial resistance (AMR) in urinary tract infections (UTI) based on non-clinical features of patients over 65 years. Decision tree models offer a novel approach to predicting AMR at this initial stage. Method: Data were extracted from the database of the microbiological laboratory of the University Hospitals Galway on all antimicrobial susceptibility testing (AST) of urine specimens from patients over the age of 65 from January 2011 to December 2014. The primary endpoint was resistance to common antimicrobials used to treat UTI (nitrofurantoin, trimethoprim, ciprofloxacin, co-amoxiclav, and amoxicillin). A classification and regression tree (CART) model was generated with the outcome 'resistant infection'. The importance of each predictor (the number of previous samples, age, gender, location (nursing home, hospital, community), and causative agent) for antimicrobial resistance was estimated. Sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV) were used to evaluate model performance. Seventy-five percent of the data were used as a training set, and the model was validated on the remaining 25% of the dataset. Results: A total of 9805 UTI patients over 65 years had a urine sample submitted for AST at least once over the four years. E. coli, Klebsiella, and Proteus species were the most commonly identified pathogens among UTI patients without a catheter, whereas Serratia, Staphylococcus aureus, and Enterobacter were common in catheterized patients. The validated CART model shows only slight differences in sensitivity, specificity, PPV, and NPV between the models with and without the causative organism. The sensitivity, specificity, PPV, and NPV for the model with non-clinical predictors ranged between 74% and 88%, depending on the antimicrobial. Conclusion: The CART models developed using non-clinical predictors perform well when predicting antimicrobial resistance. These models predict which antimicrobial may be most appropriate based on non-clinical factors. Additional CART models, prospective data collection and validation, and a larger number of non-clinical factors should further improve model performance. The presented model provides an alternative approach to decision-making on antimicrobial prescribing for UTIs in older patients.
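The paper does not publish its CART implementation; a minimal sketch of the described workflow (non-clinical predictors, 75/25 train/validation split, sensitivity/specificity/PPV/NPV) in Python with scikit-learn, using invented data, might look like this:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score

# Hypothetical non-clinical predictors and resistance outcome.
df = pd.DataFrame({
    "prev_samples": [0, 3, 1, 5, 0, 2, 4, 1],
    "age":          [70, 82, 68, 90, 75, 79, 85, 66],
    "male":         [1, 0, 0, 1, 1, 0, 1, 0],
    "nursing_home": [0, 1, 0, 1, 0, 0, 1, 0],
    "resistant":    [0, 1, 0, 1, 0, 1, 1, 0],
})

X, y = df.drop(columns="resistant"), df["resistant"]
# 75% training / 25% validation, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
pred = tree.predict(X_te)

print("sensitivity:", recall_score(y_te, pred, zero_division=0))
print("specificity:", recall_score(y_te, pred, pos_label=0, zero_division=0))
print("PPV:", precision_score(y_te, pred, zero_division=0))
print("NPV:", precision_score(y_te, pred, pos_label=0, zero_division=0))
# Estimated importance of each predictor, as reported in the study.
print(dict(zip(X.columns, tree.feature_importances_)))
```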

Keywords: antimicrobial resistance, urinary tract infection, prediction, decision tree

Procedia PDF Downloads 253
722 Islamic Extremist Groups' Usage of Populism in Social Media to Radicalize Muslim Migrants in Europe

Authors: Muhammad Irfan

Abstract:

The rise of radicalization within Islam has spawned a new era of global terror. The battlefield successes of ISIS and the Taliban are fuelled by an ideological war waged, largely and successfully, in the media arena. This research will examine how Islamic extremist groups use media modalities and populist narratives to push migrant Muslim populations in Europe towards extremism. In 2014, ISIS shocked the world by exporting horrifically graphic forms of violence on social media. Their Muslim support base was largely disgusted and repulsed. In response, they reconfigured their narrative by introducing populist 'hooks', astutely portraying the Muslim populace as oppressed and exploited by unjust, corrupt autocratic regimes and Western power structures. Within this crucible of real and perceived oppression, hundreds of thousands of the most desperate, vulnerable, and abused migrants left their homelands, risking their lives in the hope of finding peace, justice, and prosperity in Europe. Instead, many encountered social stigmatization, detention, and/or discrimination for being illegal migrants, for lacking resources, and for simply being Muslim. This research will examine how Islamic extremist groups exploit the disenfranchisement of these migrant populations and use populist messaging on social media to influence them towards violent extremism. ISIS, in particular, formulates specifically encoded messages for newly arriving Muslims in Europe, preying upon their vulnerability. Violence is posited as a populist response to the tyranny of European oppression. This research will analyze the factors and indicators which propel Muslim migrants along the spectrum from resilience to violent extremism. Expected outcomes include identification of factors which influence vulnerability to violent extremism; an early-warning detection framework; predictive analysis models; and de-radicalization frameworks. This research will provide valuable tools (at both practical and policy levels) for European governments, security stakeholders, communities, policy-makers, and educators; it is anticipated to contribute to a de-escalation of Islamic extremism globally.

Keywords: populism, radicalization, de-radicalization, social media, ISIS, Taliban, shariah, jihad, Islam, Europe, political communication, terrorism, migrants, refugees, extremism, global terror, predictive analysis, early warning detection, models, strategic communication, populist narratives, Islamic extremism

Procedia PDF Downloads 118
721 QoS-CBMG: A Model for e-Commerce Customer Behavior

Authors: Hoda Ghavamipoor, S. Alireza Hashemi Golpayegani

Abstract:

An approach to modelling customer interaction with e-commerce websites is presented. Considering the service quality level as a predictive feature, we offer an improved method based on the Customer Behavior Model Graph (CBMG), a state-transition graph model. To derive the Quality-of-Service-sensitive CBMG (QoS-CBMG) model, process-mining techniques are applied to pre-processed website server logs whose sessions are categorized as 'buy' or 'visit'. Experimental results on data from an e-commerce website confirmed that the proposed method outperforms the CBMG-based method.
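The core of a CBMG is a state-transition graph estimated from session logs; as a minimal illustration of that underlying idea in Python (the states and sessions below are invented, and the QoS conditioning that distinguishes QoS-CBMG is omitted):

```python
from collections import Counter, defaultdict

# Hypothetical pre-processed sessions from web server logs.
sessions = [
    ["entry", "browse", "search", "browse", "exit"],
    ["entry", "search", "add_to_cart", "pay", "exit"],
    ["entry", "browse", "add_to_cart", "exit"],
]

# Count observed transitions between consecutive page states.
counts = defaultdict(Counter)
for session in sessions:
    for src, dst in zip(session, session[1:]):
        counts[src][dst] += 1

# Normalize counts into transition probabilities P(dst | src).
transitions = {
    src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
    for src, dsts in counts.items()
}
print(transitions["entry"])  # e.g. {'browse': 0.67, 'search': 0.33}
```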

Keywords: customer behavior model, electronic commerce, quality of service, customer behavior model graph, process mining

Procedia PDF Downloads 414