Search results for: predictive Model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17298

17148 The Effect of Acute Rejection and Delayed Graft Function on Renal Transplant Fibrosis in Live Donor Renal Transplantation

Authors: Wisam Ismail, Sarah Hosgood, Michael Nicholson

Abstract:

The research hypothesis is that early post-transplant allograft fibrosis will be linked to donor factors and that acute rejection and/or delayed graft function in the recipient will be independent risk factors for the development of fibrosis. The hypothesis is explored by asking whether acute rejection/delayed graft function has an effect on renal transplant fibrosis within the first year after live donor kidney transplantation between 1998 and 2009. Methods: The study was designed to identify five time points for renal transplant biopsies [0 (pre-transplant), 1 month, 3 months, 6 months and 12 months] in 300 live donor renal transplant patients over a 12-year period between March 1997 and August 2009. Paraffin-fixed slides were collected from Leicester General Hospital and Leicester Royal Infirmary. These were routinely sectioned at a thickness of 4 micrometres for standardisation. Conclusions: Fibrosis at 1 month after transplant was found to be significantly associated with baseline fibrosis (p<0.001) and hypertension (HTN) in the transplant recipient (p<0.001). Dialysis after the transplant showed a weak association with fibrosis at 1 month (p=0.07). The negative coefficient for HTN (-0.05) suggests a reduction in fibrosis in the absence of HTN. Fibrosis at 1 month was significantly associated with fibrosis at baseline (p=0.01, 95% CI 0.11 to 0.67). Fibrosis at 3, 6 or 12 months was not found to be associated with fibrosis at baseline (p=0.70, 0.65 and 0.50, respectively). The amount of fibrosis at 1 month was significantly associated with graft survival (p=0.01, 95% CI 0.02 to 0.14). Rejection and severity of rejection were not found to be associated with fibrosis at 1 month. The amount of fibrosis at 1 month remained significantly associated with graft survival (p=0.02) after adjusting for baseline fibrosis (p=0.01); both baseline fibrosis and graft survival were significant predictive factors. The amount of fibrosis at 1 month was not significantly associated with rejection (p=0.64) after adjusting for baseline fibrosis (p=0.01), nor with rejection severity (p=0.29) after adjusting for baseline fibrosis (p=0.04). Fibrosis at baseline and HTN in the recipient were found to be predictive factors of fibrosis at 1 month (p=0.02 and p<0.001, respectively). Age of the donor, their relation to the patient, pre-operative creatinine, artery, kidney weight and warm time were not significantly associated with fibrosis at 1 month. In this more complex model, baseline fibrosis, HTN in the recipient and cold time were found to be predictive factors of fibrosis at 1 month (p=0.01, <0.001 and 0.03, respectively). The above analysis was repeated at 3, 6 and 12 months; no associations were detected between fibrosis and any of the explanatory variables, with the exception of donor age, which was found to be a predictive factor of fibrosis at 6 months.
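
As a concrete illustration of the covariate-adjusted regressions reported above, the following minimal Python sketch (not the authors' code) regresses fibrosis at 1 month on rejection while adjusting for baseline fibrosis; the data and variable names are synthetic placeholders.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "fibrosis_baseline": rng.uniform(0, 2, n),
    "rejection": rng.integers(0, 2, n),
})
df["fibrosis_1m"] = 0.4 * df["fibrosis_baseline"] + 0.05 * df["rejection"] + rng.normal(0, 0.2, n)

# Fit fibrosis at 1 month on rejection, adjusted for baseline fibrosis
model = smf.ols("fibrosis_1m ~ rejection + fibrosis_baseline", data=df).fit()
print(model.summary())  # p-values and 95% CIs of the kind reported above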

Keywords: fibrosis, transplant, renal, rejection

Procedia PDF Downloads 229
17147 Character Development Outcomes: A Predictive Model for Behaviour Analysis in Tertiary Institutions

Authors: Rhoda N. Kayongo

Abstract:

As behavior analysts in education continue to debate how higher education institutions can benefit from their social and academic programs, higher education is facing challenges in the area of character development. This is manifested in college completion rates and in rates of teen pregnancy, drug abuse, sexual abuse, suicide, plagiarism, lack of academic integrity, and violence among students. Attending college is a perceived opportunity to positively influence the actions and behaviors of the next generation; thus colleges and universities have to provide opportunities to develop students’ values and behaviors. Prior studies were mainly conducted in private institutions, and more so in developed countries. However, given the increasingly complex nature of today’s student body, a multidimensional approach combining the multiple factors that enhance character development outcomes is needed to suit changing trends. The main purpose of this study was to identify such opportunities in colleges and to develop a model for predicting character development outcomes. A survey questionnaire composed of seven scales, with in-classroom interaction, out-of-classroom interaction, school climate, personal lifestyle, home environment, and peer influence as independent variables and character development outcomes as the dependent variable, was administered to a total of five hundred and one third- and fourth-year students in selected public colleges and universities in the Philippines and Rwanda. Using structural equation modelling, a predictive model explained 57% of the variance in character development outcomes. Findings showed that in-classroom interactions have a substantial direct influence on students’ character development outcomes (r = .75, p < .05), while out-of-classroom interaction, school climate, and home environment contributed to character development outcomes indirectly. The study concluded that the classroom offers many opportunities for teachers to teach, model and integrate character development among their students. Public colleges and universities are therefore encouraged to deliberately implement experiences that cultivate character within the classroom; these may contribute substantially to students' character development outcomes and hence render effective models of behaviour analysis in higher education.
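
The direct/indirect decomposition reported above can be illustrated with a much-simplified path analysis; the full study used structural equation modelling, whereas this Python sketch estimates a single mediated path (in-classroom interaction -> school climate -> outcomes) on synthetic data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 501
in_class = rng.normal(size=n)
school_climate = 0.5 * in_class + rng.normal(size=n)          # mediator
outcome = 0.75 * in_class + 0.3 * school_climate + rng.normal(size=n)

# Path a: predictor -> mediator
a = sm.OLS(school_climate, sm.add_constant(in_class)).fit().params[1]
# Paths b (mediator) and c' (direct), estimated jointly
X = sm.add_constant(np.column_stack([in_class, school_climate]))
fit = sm.OLS(outcome, X).fit()
c_direct, b = fit.params[1], fit.params[2]

print(f"direct effect: {c_direct:.2f}, indirect effect: {a * b:.2f}")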

Keywords: character development, tertiary institutions, predictive model, behavior analysis

Procedia PDF Downloads 134
17146 Multinomial Dirichlet Gaussian Process Model for Classification of Multidimensional Data

Authors: Wanhyun Cho, Soonja Kang, Sanggoon Kim, Soonyoung Park

Abstract:

We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. We consider an efficient computational method that can be used to obtain the approximate posteriors for the latent variables and parameters needed to define the multiclass Gaussian process classification model. We first investigate the process of inducing a posterior distribution over the various parameters and the latent function by using variational Bayesian approximations and an importance sampling method, and then derive the predictive distribution of the latent function needed to classify new samples. The proposed model is applied to a synthetic multivariate dataset in order to verify its performance. Experimental results show that our model is more accurate than the other approximation methods.
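
As a rough stand-in for the model described (scikit-learn's Gaussian process classifier uses a Laplace approximation rather than the variational/importance-sampling scheme above), the following Python sketch shows multiclass GP classification with predictive class probabilities on synthetic multidimensional data.

from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(1.0), random_state=0)
gpc.fit(X_tr, y_tr)
print("accuracy:", gpc.score(X_te, y_te))
print("class probabilities:", gpc.predict_proba(X_te[:3]))  # predictive distribution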

Keywords: multinomial Dirichlet classification model, Gaussian process priors, variational Bayesian approximation, importance sampling, approximate posterior distribution, marginal likelihood evidence

Procedia PDF Downloads 441
17145 Data Science-Based Key Factor Analysis and Risk Prediction of Diabetes

Authors: Fei Gao, Rodolfo C. Raga Jr.

Abstract:

This research proposal will ascertain the major risk factors for diabetes and design a predictive model for risk assessment. The project aims to improve early detection and management of diabetes by utilizing data science techniques, which may improve patient outcomes and healthcare efficiency. Using the Diabetes Health Indicators Dataset from Kaggle as the research data, the phase relation values of each attribute were used to analyze and choose the attributes that might influence the subject's outcome. We compare and evaluate eight machine learning algorithms. Our investigation begins with comprehensive data preprocessing, including feature engineering and dimensionality reduction, aimed at enhancing data quality. The dataset, comprising health indicators and medical data, serves as a foundation for training and testing these algorithms. A rigorous cross-validation process is applied, and performance is assessed using five key metrics: accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC). After analyzing the data characteristics, we investigate their impact on the likelihood of diabetes and develop corresponding risk indicators.
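
A hedged Python sketch of the evaluation loop described above follows; since the Kaggle dataset is not reproduced here, synthetic data and a subset of the eight algorithms stand in, scored with the five stated metrics under cross-validation.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
scoring = ["accuracy", "precision", "recall", "f1", "roc_auc"]

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(random_state=0),
    "gb": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    cv = cross_validate(model, X, y, cv=5, scoring=scoring)
    print(name, {m: round(cv[f"test_{m}"].mean(), 3) for m in scoring})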

Keywords: diabetes, risk factors, predictive model, risk assessment, data science techniques, early detection, data analysis, Kaggle

Procedia PDF Downloads 73
17144 Machine Learning Model Applied for SCM Processes to Efficiently Determine Its Impacts on the Environment

Authors: Elena Puica

Abstract:

This paper aims to investigate the impact of Supply Chain Management (SCM) on the environment by applying a Machine Learning model, while pointing out the efficiency of the technology used. The Machine Learning model was used to derive the efficiency and optimization of the technology used in SCM and the environmental impact of SCM processes. The model applied is a predictive classification model and was trained, first, to determine which stage of the SCM has more outputs and, second, to demonstrate the efficiency of using advanced technology in SCM instead of resorting to traditional SCM. The outputs are the emissions generated in the environment, the consumption from different steps in the life cycle, the resulting pollutants/wastes emitted, and all releases to air, land, and water. This manuscript presents an innovative approach to applying advanced technology in SCM and simultaneously studies the efficiency of the technology and SCM's impact on the environment. Identifying the conceptual relationships between SCM practices and their impact on the environment is a new contribution of this research. By applying technology, the authors take a step forward in developing recent studies on SCM and its effects on the environment.

Keywords: machine-learning model in SCM, SCM processes, SCM and the environmental impact, technology in SCM

Procedia PDF Downloads 115
17143 Probability Sampling in Matched Case-Control Study in Drug Abuse

Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell

Abstract:

Background: Although random sampling is generally considered to be the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified “friend controls” and the other using a random sample of non-drug users (controls) who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using a bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed a wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when the model was fitted to either data set (0.93 for random-sample data vs. 0.91 for snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be superior from a statistical perspective to snowball sampling and may represent a viable alternative to snowball sampling.
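
The bootstrap-stability check described above can be sketched as follows in Python; unconditional logistic regression is used here as a simplified stand-in for the matched-set conditional logistic regression of the study.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=5, random_state=0)
rng = np.random.default_rng(0)

coefs = []
for _ in range(100):  # 100 bootstrap resamples, as in the study
    idx = rng.integers(0, len(y), len(y))
    coefs.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]).coef_[0])

coefs = np.array(coefs)
print("bootstrap SE of each beta:", coefs.std(axis=0).round(3))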

Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling

Procedia PDF Downloads 492
17142 Predictive Analysis of Stock Market Price Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace that is a cornerstone of economics; it defines whether companies are successful or in a downward spiral. A thorough understanding of it is important: many companies have whole divisions dedicated to analysis of both their own stock and that of rival companies. Linking the world of finance with artificial intelligence (AI), especially in the stock market, has been a relatively recent development. Predicting how stocks will perform, considering all external factors and previous data, has traditionally been a human task. With the help of AI, however, machine learning models can help us make more complete predictions about financial trends. Looking at the stock market specifically, predicting the open, closing, high, and low prices for the next day is very hard to do, and machine learning makes this task much easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. Used effectively, it can open new doors in the business and finance world and help companies make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural-network-based approaches, among others. It provides a detailed analysis of these techniques and also explores the challenges in predictive analysis. Comparing the testing-set accuracy of four different models (linear regression, a neural network, a decision tree, and naïve Bayes) on the stocks Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, J.P. Morgan & Chase, and Johnson & Johnson, the naïve Bayes and linear regression models worked best. On the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set produced similar results, except that the decision tree model made perfectly accurate predictions on it, which suggests that the decision tree model overfitted the training set.
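
A minimal Python sketch of the four-model comparison follows, framed as next-day up/down classification on synthetic features; logistic regression stands in for the linear model under this classification framing, and the perfect training score of the decision tree mirrors the overfitting noted above.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)  # keep time order

models = {
    "linear": LogisticRegression(max_iter=1000),
    "neural net": MLPClassifier(max_iter=2000, random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(f"{name}: train={m.score(X_tr, y_tr):.2f} test={m.score(X_te, y_te):.2f}")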

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 94
17141 Predictive Machine Learning Model for Assessing the Impact of Untreated Teeth Grinding on Gingival Recession and Jaw Pain

Authors: Joseph Salim

Abstract:

This paper proposes the development of a supervised machine learning system to predict the consequences of untreated bruxism (teeth grinding) on gingival (gum) recession and jaw pain (most often bilateral jaw pain, with possible headaches and a limited ability to open the mouth). As a general dentist in a multi-specialty practice, the author has encountered many patients suffering from these issues due to uncontrolled bruxism at night. The most effective treatment for managing this problem involves wearing a nightguard during sleep and receiving therapeutic Botox injections to relax the muscle responsible for grinding (the masseter). However, some patients choose to postpone these treatments, leading to potentially irreversible and costlier consequences in the future. The proposed machine learning model aims to track patients who forgo the recommended treatments and assess the percentage of individuals who will experience worsening jaw pain, gingival recession, or both within a 3-to-5-year timeframe. By accurately predicting these outcomes, the model seeks to motivate patients to address the root cause proactively, ultimately saving time, sparing pain, and improving quality of life while avoiding much costlier treatments, such as full-mouth rehabilitation to recover the vertical dimension of occlusion lost to bruxism-shortened clinical crowns, gingival grafts, etc.

Keywords: artificial intelligence, machine learning, predictive insights, bruxism, teeth grinding, therapeutic botox, nightguard, gingival recession, gum recession, jaw pain

Procedia PDF Downloads 92
17140 Predictive Analytics in Oil and Gas Industry

Authors: Suchitra Chnadrashekhar

Abstract:

Information technology, earlier viewed as a support function within an organization, has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data that would have been unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. IT has given the oil and gas industry the leverage to store, manage and process data in the most efficient way possible, thus deriving economic value from day-to-day operations. Proper synchronization between operational data systems and information technology systems is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity, and security, and by increasing equipment utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic, systems approach towards asset optimization and thus have functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analytics in the oil and gas industry is redefining the dynamics of this sector. The paper is also supported by real-time data and an evaluation of the data for a given oil production asset using an application tool, SAS. The reason for using SAS for our analysis is that SAS provides an analytics-based framework to improve uptimes, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, maintenance problems can be predicted before they happen and root causes determined in order to update processes for future prevention.

Keywords: hydrocarbon, information technology, SAS, predictive analytics

Procedia PDF Downloads 359
17139 Data-Driven Crop Advisory – A Use Case on Grapes

Authors: Shailaja Grover, Purvi Tiwari, Vigneshwaran S. R., U. Dinesh Kumar

Abstract:

In India, grapes are one of the most important horticultural crops. Grapes are most vulnerable to downy mildew, which is one of the most devastating diseases. In the absence of a precise weather-based advisory system, farmers spray pesticides on their crops extensively. There are two main challenges associated with these pesticides. First, most of the sprays are panic sprays, which could have been avoided. Second, farmers use the more expensive "Preventive and Eradicate" chemicals rather than "Systemic, Curative and Anti-sporulate" chemicals. When these chemicals are used indiscriminately, they can enter the fruit and cause health problems such as cancer. This paper utilizes decision trees and predictive modeling techniques to provide grape farmers with customized advice on grape disease management. The model is expected to reduce the overall use of chemicals by approximately 50% and the cost by around 70%. Most of the grapes produced will have relatively low pesticide residue levels, i.e., below the permissible level.
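
A hedged Python sketch of a decision-tree spray advisory of the kind described follows; the weather features, thresholds, and labels are illustrative inventions, not the paper's calibrated rules.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.uniform(10, 35, n),   # temperature (deg C)
    rng.uniform(40, 100, n),  # relative humidity (%)
    rng.uniform(0, 12, n),    # leaf wetness (hours)
])
# Synthetic rule: downy mildew risk when warm, humid and leaves stay wet
y = ((X[:, 0] > 20) & (X[:, 1] > 80) & (X[:, 2] > 6)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["temp", "humidity", "leaf_wetness"]))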

Keywords: analytics in agriculture, downy mildew, weather-based advisory, decision tree, predictive modelling

Procedia PDF Downloads 72
17138 Agriculture Yield Prediction Using Predictive Analytic Techniques

Authors: Nagini Sabbineni, Rajini T. V. Kanth, B. V. Kiranmayee

Abstract:

India’s economy primarily depends on agricultural yield growth and allied agro-industry products. Agricultural yield prediction is among the toughest tasks for agricultural departments across the globe. Agricultural yield depends on various factors; in countries like India in particular, the majority of agricultural growth depends on rainwater, which is highly unpredictable. Agricultural growth depends on different parameters, namely water, nitrogen, weather, soil characteristics, crop rotation, soil moisture, surface temperature and rainwater. In our paper, extensive exploratory data analysis is carried out and various predictive models are designed. Further, various regression models (linear, multiple linear and non-linear) are tested for effective prediction, or forecasting, of agricultural yield for various crops in the states of Andhra Pradesh and Telangana.
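
The regression comparison described above can be sketched in Python as follows, fitting simple linear, multiple linear, and non-linear (polynomial) models to synthetic rainfall/yield data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
rain = rng.uniform(300, 1200, 200)
soil_moisture = rng.uniform(10, 40, 200)
yield_t = 0.004 * rain - 2e-6 * rain**2 + 0.05 * soil_moisture + rng.normal(0, 0.1, 200)

X_simple = rain.reshape(-1, 1)
X_multi = np.column_stack([rain, soil_moisture])

print("simple linear R^2:", LinearRegression().fit(X_simple, yield_t).score(X_simple, yield_t))
print("multiple linear R^2:", LinearRegression().fit(X_multi, yield_t).score(X_multi, yield_t))
poly = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X_multi, yield_t)
print("non-linear (quadratic) R^2:", poly.score(X_multi, yield_t))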

Keywords: agriculture yield growth, agriculture yield prediction, exploratory data analysis, predictive models, regression models

Procedia PDF Downloads 311
17137 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School

Authors: Martín Pratto Burgos

Abstract:

The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. This course has been designed to help students prepare for the math courses that are essential for engineering degrees, referred to in this research as Math1, Math2, and Math3. The research proposes to build a model that can accurately predict students' activity and academic progress based on their performance in the three essential mathematical courses. Additionally, there is a need for a model that can forecast the influence of the Introductory Mathematical Course on approval of the three essential courses during the first academic year. The techniques used are Principal Component Analysis and predictive modelling using the Generalised Linear Model. The dataset includes information on 5135 engineering students and 12 different characteristics based on activity and course performance. Two models were created in the R programming language for data that follow a binomial distribution. Model 1 retains variables whose p-values are less than 0.05, and Model 2 uses the stepAIC function to remove variables until the lowest AIC score is reached. After Principal Component Analysis, the main component represented on the y-axis is approval of the Introductory Mathematical Course, and the x-axis represents approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. Model 2, which considered student activity, performed best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, students' engagement in school activities continues for three years after approval of the Introductory Mathematical Course, because they have successfully completed the Math1 and Math2 courses; passing the Math3 course has no effect on student activity. Concerning academic progress, the best fit is Model 1, with an AUC of 0.56 and an accuracy of 91%. This model indicates that if students pass the three first-year courses, they will progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect student activity and academic progress. The best model to explain the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy. It shows that passing the Introductory Mathematical Course helps students pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Taken together, the three predictive models indicate that if students pass the Math1 and Math2 courses, they will stay active for three years after taking the Introductory Mathematical Course and will continue to follow the recommended engineering curriculum; additionally, the Introductory Mathematical Course helps students pass Math1 and Math2 when they start Engineering School. The models obtained in this research do not consider the time students took to pass the three math courses, but they can successfully assess courses in the university curriculum.
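
The study's stepAIC step is an R facility; the following Python sketch approximates it with a backward elimination on AIC using a binomial GLM, on synthetic pass/fail data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame(rng.normal(size=(n, 4)), columns=["x1", "x2", "x3", "x4"])
p = 1 / (1 + np.exp(-(0.8 * df.x1 - 0.5 * df.x2)))
y = rng.binomial(1, p)

features = list(df.columns)
while True:
    base = sm.GLM(y, sm.add_constant(df[features]), family=sm.families.Binomial()).fit()
    # Try dropping each variable; keep the drop that lowers AIC the most
    trials = {f: sm.GLM(y, sm.add_constant(df[[g for g in features if g != f]]),
                        family=sm.families.Binomial()).fit().aic for f in features}
    best = min(trials, key=trials.get)
    if trials[best] >= base.aic:
        break
    features.remove(best)

print("selected variables:", features, "AIC:", round(base.aic, 1))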

Keywords: machine-learning, engineering, university, education, computational models

Procedia PDF Downloads 93
17136 Meteosat Second Generation Image Compression Based on the Radon Transform and Linear Predictive Coding: Comparison and Performance

Authors: Cherifi Mehdi, Lahdir Mourad, Ameur Soltane

Abstract:

Image compression is used to reduce the number of bits required to represent an image. The Meteosat Second Generation (MSG) satellite allows the acquisition of 12 image files every 15 minutes, which results in large database sizes. The transform selected for image compression should contribute to reducing the amount of data representing the images. The Radon transform retrieves the Radon points, which represent the sums of the pixels along a given angle for each direction. Linear predictive coding (LPC) with filtering provides a good decorrelation of the Radon points, using a predictor constituted by the Symmetric Nearest Neighbour (SNN) filter coefficients, which results in losses during decompression. Finally, run length coding (RLC) gives a high and fixed compression ratio regardless of the input image. In this paper, a novel image compression method based on the Radon transform and linear predictive coding for MSG images is proposed. MSG image compression based on the Radon transform and LPC provides a good compromise between compression and quality of reconstruction. Our method is compared with three other methods, two based on the DCT and one on bi-orthogonal DWT filtering, both to show the robustness of the Radon transform against quantization noise and to evaluate the performance of our method. Evaluation criteria such as PSNR and the compression ratio demonstrate the efficiency of our compression method.
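
A compact Python sketch of the proposed pipeline follows: Radon transform, a first-order linear predictor over the projections, quantisation, and run-length coding. For brevity, a plain previous-sample predictor replaces the SNN-filter predictor of the paper.

import numpy as np
from skimage.data import camera
from skimage.transform import radon

image = camera().astype(float)
theta = np.linspace(0., 180., 180, endpoint=False)
sinogram = radon(image, theta=theta, circle=False)  # Radon points per angle

# First-order LPC: predict each sample from its predecessor, keep residuals
flat = sinogram.ravel()
residual = np.diff(flat, prepend=flat[0])
quantised = np.round(residual / 16).astype(int)     # lossy step (decompression loss)

def run_length_encode(a):
    """Return (value, count) pairs for consecutive runs."""
    change = np.flatnonzero(np.diff(a)) + 1
    starts = np.concatenate(([0], change))
    counts = np.diff(np.concatenate((starts, [len(a)])))
    return list(zip(a[starts], counts))

rle = run_length_encode(quantised)
print("samples:", flat.size, "-> RLE pairs:", len(rle))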

Keywords: image compression, Radon transform, linear predictive coding (LPC), run length coding (RLC), Meteosat Second Generation (MSG)

Procedia PDF Downloads 418
17135 A Comparative Study of Optimization Techniques and Models for Forecasting Dengue Fever

Authors: Sudha T., Naveen C.

Abstract:

Dengue is a serious public health issue that imposes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of control efforts for sudden disease outbreaks. This study uses environmental data from two U.S. federal government agencies: the National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention. Based on environmental data describing changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step is data preparation, which includes handling outliers and missing values so that the data are ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, model performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The goal is to select the optimization strategy with the fewest errors, lowest cost, greatest productivity, or maximum potential results. Optimization is widely employed in a variety of fields, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on harmony search and an integrated genetic algorithm is introduced for input feature selection, and it shows a significant improvement in the model's predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.
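
The phase-three model comparison can be sketched in Python as follows, scoring the named regressors with MSE, MAE and RMSE on synthetic weekly case counts (the feature-selection optimizers are omitted for brevity).

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("Huber", HuberRegressor()), ("SVR", SVR()),
                    ("GBR", GradientBoostingRegressor(random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    print(f"{name}: MSE={mse:.1f} MAE={mean_absolute_error(y_te, pred):.1f} "
          f"RMSE={np.sqrt(mse):.1f}")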

Keywords: deep learning model, dengue fever, prediction, optimization

Procedia PDF Downloads 64
17134 Enhancing Predictive Accuracy in Pharmaceutical Sales through an Ensemble Kernel Gaussian Process Regression Approach

Authors: Shahin Mirshekari, Mohammadreza Moradi, Hossein Jafari, Mehdi Jafari, Mohammad Ensaf

Abstract:

This research employs Gaussian Process Regression (GPR) with an ensemble kernel, integrating Exponential Squared, Revised Matern, and Rational Quadratic kernels to analyze pharmaceutical sales data. Bayesian optimization was used to identify optimal kernel weights: 0.76 for Exponential Squared, 0.21 for Revised Matern, and 0.13 for Rational Quadratic. The ensemble kernel demonstrated superior performance in predictive accuracy, achieving an R² score near 1.0, and significantly lower values in MSE, MAE, and RMSE. These findings highlight the efficacy of ensemble kernels in GPR for predictive analytics in complex pharmaceutical sales datasets.
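
A minimal Python reconstruction of the ensemble-kernel idea follows, fixing the kernel weights at the reported values rather than re-running the Bayesian optimisation; scikit-learn's RBF stands in for the "Exponential Squared" kernel and its Matern for the "Revised Matern" kernel.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

kernel = (0.76 * RBF(length_scale=1.0)
          + 0.21 * Matern(length_scale=1.0, nu=1.5)
          + 0.13 * RationalQuadratic(length_scale=1.0))

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 80).reshape(-1, 1)          # e.g. weeks of sales history
y = np.sin(X).ravel() + 0.1 * rng.normal(size=80)  # synthetic sales signal

gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
mean, std = gpr.predict(X, return_std=True)        # predictive mean and uncertainty
print("R^2 on training data:", round(gpr.score(X, y), 3))
print("mean predictive std:", round(float(std.mean()), 3))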

Keywords: Gaussian process regression, ensemble kernels, Bayesian optimization, pharmaceutical sales analysis, time series forecasting, data analysis

Procedia PDF Downloads 68
17133 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities

Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun

Abstract:

With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues, with the aim of improving the accuracy and efficiency of reliability evaluations in smart cities. The goal of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities by integrating RFM analysis, K-means clustering, and LSTM networks. RFM analysis, traditionally used in customer value assessment, is used to categorize and analyze electrical components based on their failure recency, frequency, and monetary impact. K-means clustering is employed to segment electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks are used to capture the temporal dependencies and patterns in the data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities, and investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. By significantly improving prediction accuracy and reliability over traditional methods, the proposed approach contributes to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities, and demonstrates the potential of advanced machine learning techniques to transform the management of electrical systems within smart cities.
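
A short Python sketch of the RFM and K-means stages follows; components are scored on failure recency, frequency, and monetary impact, then clustered. The LSTM stage for temporal patterns is omitted for brevity, and all figures are synthetic.

import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
rfm = pd.DataFrame({
    "recency_days": rng.integers(1, 365, 200),    # days since last failure
    "frequency": rng.integers(0, 20, 200),        # failures per year
    "monetary": rng.uniform(100, 10000, 200),     # repair/outage cost
})

X = StandardScaler().fit_transform(rfm)
rfm["segment"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(rfm.groupby("segment").mean().round(1))     # profile of each risk group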

Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning

Procedia PDF Downloads 55
17132 Determining the Performance of Data Mining Algorithms in Determining the Influential Factors and Prediction of Ischemic Stroke: A Comparative Study in the Southeast of Iran

Authors: Y. Mehdipour, S. Ebrahimi, A. Jahanpour, F. Seyedzaei, B. Sabayan, A. Karimi, H. Amirifard

Abstract:

Ischemic stroke is one of the common causes of disability and mortality; it is the fourth leading cause of death in the world, and the third according to some sources. Only one third of patients with ischemic stroke fully recover, one third are left with permanent disability, and one third die. The use of predictive models therefore has a vital role in reducing the complications and costs related to this disease, and the aim of this study was to specify the effective factors and predict ischemic stroke with the help of data mining (DM) methods. The present study was descriptive-analytic. The population comprised 213 cases from among patients referred to Ali ibn Abi Talib (AS) Hospital in Zahedan. The data collection tool was a checklist whose validity and reliability were confirmed. The study used decision tree DM algorithms for modeling. Data analysis was performed using SPSS-19 and SPSS Modeler 14.2. Comparison of the algorithms showed that the CHAID algorithm, with 95.7% accuracy, has the best performance. Moreover, based on the model created, factors such as anemia, diabetes mellitus, hyperlipidemia, transient ischemic attacks, coronary artery disease, and atherosclerosis are the most influential factors in stroke. Decision tree algorithms, especially the CHAID algorithm, have acceptable precision and predictive ability for determining the factors affecting ischemic stroke. Thus, predictive models created with this algorithm can play a significant role in decreasing the mortality and disability caused by ischemic stroke.
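
scikit-learn does not ship a CHAID implementation, so the following Python sketch uses a CART decision tree as a stand-in for the modelling step; the features follow the risk factors named above, and the data are synthetic.

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 213
X = pd.DataFrame({c: rng.integers(0, 2, n) for c in
                  ["anemia", "diabetes", "hyperlipidemia", "tia", "cad", "atherosclerosis"]})
risk = X.sum(axis=1) + rng.normal(0, 1, n)
y = (risk > risk.median()).astype(int)            # 1 = stroke, synthetic label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("accuracy:", round(tree.score(X_te, y_te), 3))
print(dict(zip(X.columns, tree.feature_importances_.round(2))))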

Keywords: data mining, ischemic stroke, decision tree, Bayesian network

Procedia PDF Downloads 172
17131 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach

Authors: James Ladzekpo

Abstract:

Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We have implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17% and outstanding metrics such as area under the receiver operating characteristic curve (AUC) with a median of 68%, alongside median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend future investigations to incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
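
A condensed Python sketch of the stated pipeline follows: recursive feature elimination with cross-validation on a linear SVM, then a Gradient Boosting Classifier scored on the selected features, with simulated inputs standing in for the epigenetic data.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=100, n_informative=15,
                           random_state=0)  # stand-in for methylation features

selector = RFECV(SVC(kernel="linear"), step=5, cv=5).fit(X, y)
X_sel = selector.transform(X)
print("features kept:", selector.n_features_)

scores = cross_val_score(GradientBoostingClassifier(random_state=0),
                         X_sel, y, cv=5, scoring="recall")
print("median recall:", round(float(np.median(scores)), 3))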

Keywords: diabetes, machine learning, prediction, biomarkers

Procedia PDF Downloads 53
17130 Insulin Resistance in Children and Adolescents in Relation to Body Mass Index, Waist Circumference and Body Fat Weight

Authors: E. Vlachopapadopoulou, E. Dikaiakou, E. Anagnostou, I. Panagiotopoulos, E. Kaloumenou, M. Kafetzi, A. Fotinou, S. Michalacos

Abstract:

Aim: To investigate the relation and impact of Body Mass Index (BMI), Waist Circumference (WC) and Body Fat Weight (BFW) on insulin resistance (Matsuda index < 2.5) in children and adolescents. Methods: Data from 95 overweight and obese children (47 boys and 48 girls) with a mean age of 10.7 ± 2.2 years were analyzed. ROC analysis was used to investigate the predictive ability of BMI, WC and BFW for insulin resistance and to find the optimal cut-offs, with overall performance quantified by the area under the curve (AUC). Results: ROC curve analysis indicated that the optimal cut-off of WC for the prediction of insulin resistance was 97 cm, with sensitivity equal to 75% and specificity equal to 73.1%. The AUC was 0.78 (95% CI: 0.63-0.92, p=0.001). The sensitivity and specificity of obesity for discriminating participants with insulin resistance from those without were 58.3% and 75%, respectively (AUC=0.67). BFW had a borderline predictive ability for insulin resistance (AUC=0.58, 95% CI: 0.43-0.74, p=0.101). The predictive ability of WC was equivalent to the corresponding predictive ability of BMI (p=0.891). Obese subjects had 4.2 times greater odds of having insulin resistance (95% CI: 1.71-10.30, p < 0.001), while subjects with a WC greater than 97 cm had 8.1 times greater odds of having insulin resistance (95% CI: 2.14-30.86, p=0.002). Conclusion: BMI and WC are important clinical factors that show a significant clinical relation with insulin resistance in children and adolescents. The cut-off of 97 cm for WC can identify children with a greater likelihood of insulin resistance.
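
The ROC cut-off analysis above can be sketched in Python as follows, taking the optimal threshold where Youden's J (sensitivity + specificity - 1) peaks; the waist-circumference values and labels are synthetic.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Synthetic stand-ins: waist circumference (cm) and insulin-resistance labels
wc = np.concatenate([rng.normal(90, 8, 60), rng.normal(101, 8, 35)])
ir = np.concatenate([np.zeros(60), np.ones(35)])

fpr, tpr, thresholds = roc_curve(ir, wc)
j = tpr - fpr                                  # Youden's J at each threshold
best = np.argmax(j)
print("AUC:", round(roc_auc_score(ir, wc), 2))
print(f"optimal cut-off: {thresholds[best]:.1f} cm, "
      f"sensitivity={tpr[best]:.1%}, specificity={1 - fpr[best]:.1%}")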

Keywords: body fat weight, body mass index, insulin resistance, obese children, waist circumference

Procedia PDF Downloads 318
17129 Synchronization of a Perturbed Satellite Attitude Motion

Authors: Sadaoui Djaouida

Abstract:

In this paper, a predictive control method is proposed to synchronize the attitude motion of two perturbed satellites. Combining delayed feedback control of continuous-time systems with the prediction-based method of discrete-time systems, this approach needs only a single controller to realize synchronization, which is of considerable significance in reducing the cost and complexity of controller implementation.

Keywords: predictive control, synchronization, satellite attitude, control engineering

Procedia PDF Downloads 554
17128 Distributed Coordination of Connected and Automated Vehicles at Multiple Interconnected Intersections

Authors: Zhiyuan Du, Baisravan Hom Chaudhuri, Pierluigi Pisu

Abstract:

In connected vehicle systems, where wireless communication is available among the involved vehicles and intersection controllers, it is possible to design an intersection coordination strategy that lets connected and automated vehicles (CAVs) travel through road intersections without conventional traffic light control. In this paper, we present a distributed coordination strategy for CAVs at multiple interconnected intersections that aims at improving system fuel efficiency and system mobility. We present a distributed control solution in which, at the higher level, the intersection controllers calculate the desired average road velocity and optimally assign reference velocities to each vehicle. At the lower level, every vehicle uses model predictive control (MPC) to track the reference velocity obtained from the higher-level controller. The proposed method has been implemented on a simulated network of two interconnected intersections. Additionally, the effects of mixed vehicle types on the coordination strategy have been explored. Simulation results indicate that the proposed method improves vehicle fuel efficiency and traffic mobility.
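
A toy Python sketch of the lower-level controller follows: a single vehicle tracks the reference velocity from the intersection controller with a short-horizon MPC that penalises tracking error and acceleration (a crude fuel-use proxy). The horizon, limits, and weights are illustrative assumptions.

import cvxpy as cp
import numpy as np

T, dt = 10, 0.5            # horizon steps and step length (s)
v_ref = 12.0               # reference velocity from higher-level controller (m/s)
v0 = 8.0                   # current velocity (m/s)

v = cp.Variable(T + 1)     # velocity trajectory
a = cp.Variable(T)         # acceleration inputs

cost = cp.sum_squares(v[1:] - v_ref) + 0.5 * cp.sum_squares(a)
constraints = [v[0] == v0, v[1:] == v[:-1] + dt * a,
               cp.abs(a) <= 3.0, v >= 0, v <= 20.0]
cp.Problem(cp.Minimize(cost), constraints).solve()

print("first acceleration command:", round(float(a.value[0]), 2), "m/s^2")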

Keywords: connected vehicles, automated vehicles, intersection coordination systems, multiple interconnected intersections, model predictive control

Procedia PDF Downloads 355
17127 Redefining Infrastructure as Code Orchestration Using AI

Authors: Georges Bou Ghantous

Abstract:

This research delves into the transformative impact of Artificial Intelligence (AI) on Infrastructure as Code (IaaC) practices, specifically focusing on the redefinition of infrastructure orchestration. By harnessing AI technologies such as machine learning algorithms and predictive analytics, organizations can achieve unprecedented levels of efficiency and optimization in managing their infrastructure resources. AI-driven IaaC introduces proactive decision-making through predictive insights, enabling organizations to anticipate and address potential issues before they arise. Dynamic resource scaling, facilitated by AI, ensures that infrastructure resources can seamlessly adapt to fluctuating workloads and changing business requirements. Through case studies and best practices, this paper sheds light on the tangible benefits and challenges associated with AI-driven IaaC transformation, providing valuable insights for organizations navigating the evolving landscape of digital infrastructure management.

Keywords: artificial intelligence, infrastructure as code, efficiency optimization, predictive insights, dynamic resource scaling, proactive decision-making

Procedia PDF Downloads 32
17126 A Quadratic Model to Early Predict the Blastocyst Stage with a Time Lapse Incubator

Authors: Cecile Edel, Sandrine Giscard D'Estaing, Elsa Labrune, Jacqueline Lornage, Mehdi Benchaib

Abstract:

Introduction: The use of incubators equipped with time-lapse technology in artificial reproductive technology (ART) allows continuous surveillance. With morphokinetic parameters, algorithms are available to predict the potential outcome of an embryo. However, the proposed time-lapse algorithms do not take missing data into account, so some embryos cannot be classified. The aim of this work is to construct a predictive model that works even in the case of missing data. Materials and methods: Patients: A retrospective study was performed in the reproductive biology laboratory of the ‘Femme Mère Enfant’ hospital (Lyon, France) between 1 May 2013 and 30 April 2015. Embryos (n=557) obtained from couples (n=108) were cultured in a time-lapse incubator (Embryoscope®, Vitrolife, Goteborg, Sweden). Time-lapse incubator: The morphokinetic parameters obtained during the first three days of embryo life were used to build the predictive model. Predictive model: A quadratic regression was performed between the number of cells and time: N = a·T² + b·T + c, where N is the number of cells at time T (in hours). The regression coefficients were calculated with Excel software (Microsoft, Redmond, WA, USA); a program in Visual Basic for Applications (VBA) (Microsoft) was written for this purpose. The quadratic equation was used to find a value that allows prediction of blastocyst formation: the synthetize value. The area under the curve (AUC) obtained from the ROC curve was used to assess the performance of the regression coefficients and the synthetize value. A cut-off value was calculated for each regression coefficient and for the synthetize value to obtain two groups between which the difference in blastocyst formation rate was maximal. The data were analyzed with SPSS (IBM, Chicago, IL, USA). Results: Among the 557 embryos, 79.7% reached the blastocyst stage. The synthetize value corresponds to the value calculated at a time equal to 99 hours, for which the highest AUC was obtained. The AUC was 0.648 (p < 0.001) for regression coefficient ‘a’, 0.363 (p < 0.001) for regression coefficient ‘b’, 0.633 (p < 0.001) for regression coefficient ‘c’, and 0.659 (p < 0.001) for the synthetize value. The results are presented as the blastocyst formation rate below the cut-off value versus the blastocyst formation rate above the cut-off value. For regression coefficient ‘a’ the optimum cut-off value was -1.14×10⁻³ (61.3% versus 84.3%, p < 0.001); it was 0.26 for regression coefficient ‘b’ (83.9% versus 63.1%, p < 0.001), -4.4 for regression coefficient ‘c’ (62.2% versus 83.1%, p < 0.001), and 8.89 for the synthetize value (58.6% versus 85.0%, p < 0.001). Conclusion: This quadratic regression allows the outcome of an embryo to be predicted even in the case of missing data. The three regression coefficients and the synthetize value could represent the identity card of an embryo. The ‘a’ coefficient represents the acceleration of cell division, and the ‘b’ coefficient the speed of cell division. We hypothesize that the ‘c’ coefficient could represent the intrinsic potential of an embryo, which could depend on the oocyte from which the embryo originates. These hypotheses should be confirmed by studies analyzing the relationship between the regression coefficients and ART parameters.
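
The model lends itself to a short Python sketch: fit N = a·T² + b·T + c per embryo from its (time, cell count) track, evaluate the fit at T = 99 h to obtain the synthetize value, and score it against blastocyst outcome with an AUC; all tracks below are simulated.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_embryos = 100
outcomes, synth = [], []
for _ in range(n_embryos):
    good = rng.random() < 0.8                       # ~80% reach blastocyst
    t = np.linspace(0, 72, 12)                      # first 3 days of culture
    rate = 0.0012 if good else 0.0008
    cells = rate * t**2 + 0.02 * t + 1 + rng.normal(0, 0.3, t.size)
    a, b, c = np.polyfit(t, cells, 2)               # quadratic coefficients
    synth.append(a * 99**2 + b * 99 + c)            # value at T = 99 h
    outcomes.append(int(good))

print("AUC of synthetize value:", round(roc_auc_score(outcomes, synth), 2))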

Keywords: ART procedure, blastocyst formation, time-lapse incubator, quadratic model

Procedia PDF Downloads 305
17125 On-Line Data-Driven Multivariate Statistical Prediction Approach to Production Monitoring

Authors: Hyun-Woo Cho

Abstract:

Detection of incipient abnormal events in production processes is important to improve the safety and reliability of manufacturing operations and to reduce losses caused by failures. The construction of calibration models for predicting faulty conditions is quite essential in making decisions on when to perform preventive maintenance. This paper presents a multivariate calibration monitoring approach based on the statistical analysis of process measurement data, in which the calibration model is used to predict faulty conditions from historical reference data. The approach utilizes variable selection techniques, and the predictive performance of several prediction methods is evaluated using real data. The results show that the calibration model based on a supervised probabilistic model yielded the best performance in this work. By adopting a proper variable selection scheme in calibration models, prediction performance can be improved by excluding non-informative variables from the model building steps.
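
A hedged Python sketch of the described scheme follows: a variable selection step feeding a supervised probabilistic classifier that predicts faulty versus normal conditions. The specific selector and classifier here are illustrative choices, not necessarily those of the study.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)  # process variables, fault labels

model = make_pipeline(SelectKBest(f_classif, k=8), GaussianNB())
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))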

Keywords: calibration model, monitoring, quality improvement, feature selection

Procedia PDF Downloads 354
17124 Predictive Value of the Modified Sick Neonatal Score (MSNS) on the Outcome of Critically Ill Neonates Treated in the Neonatal Intensive Care Unit (NICU)

Authors: Oktavian Prasetia Wardana, Martono Tri Utomo, Risa Etika, Kartika Darma Handayani, Dina Angelika, Wurry Ayuningtyas

Abstract:

Background: Critically ill neonates are newborn babies with high-risk factors that can potentially cause disability and/or death. Scoring systems for determining the severity of disease have been widely developed, including some designed for use in neonates. The SNAPPE-II method, which has been used as a mortality-predictor scoring system in several referral centers, was found to be slow in assessing the outcome of critically ill neonates in the Neonatal Intensive Care Unit (NICU). Objective: To analyze the predictive value of the MSNS on the outcome of critically ill neonates at the time of arrival and up to 24 hours after admission to the NICU. Methods: A longitudinal observational analytic study based on medical record data was conducted from January to August 2022. For each subject, medical record data were recorded, including gestational age, mode of delivery, APGAR score at birth, resuscitation measures at birth, duration of resuscitation, post-resuscitation ventilation, physical examination at birth (including vital signs and any congenital abnormalities), the results of routine laboratory examinations, and the neonatal outcome. Results: This study involved 105 critically ill neonates admitted to the NICU; 50 (47.6%) died and 55 (52.4%) survived. There were more males than females (61% vs. 39%). The mean gestational age of the subjects was 33.8 ± 4.28 weeks, and the mean birth weight was 1820.31 ± 33.18 g. The mean MSNS score of neonates with a fatal outcome was lower than that of those who survived. The ROC curve with an MSNS cut-off score of <10.5 gave an AUC of 93.5% (95% CI: 88.3-98.6), with a sensitivity of 84% (95% CI: 80.5-94.9), specificity of 80% (95% CI: 88.3-98.6), Positive Predictive Value (PPV) of 79.2%, Negative Predictive Value (NPV) of 84.6%, and Risk Ratio (RR) of 5.14, with Hosmer & Lemeshow test results of p>0.05. Conclusion: The MSNS score has a good predictive value and good calibration for the outcomes of critically ill neonates admitted to the NICU.
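
The reported operating characteristics follow from a 2x2 table at the cut-off, as the short Python sketch below shows; the counts used are illustrative, not the study's raw table.

tp, fn, fp, tn = 42, 8, 11, 44   # died/survived vs. score below/above cut-off

sens = tp / (tp + fn)
spec = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%}")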

Keywords: critically ill neonate, outcome, MSNS, NICU, predictive value

Procedia PDF Downloads 69
17123 Outcome of Using Penpat Pinyowattanasilp Equation for Prediction of 24-Hour Uptake, First and Second Therapeutic Doses Calculation in Graves’ Disease Patient

Authors: Piyarat Parklug, Busaba Supawattanaobodee, Penpat Pinyowattanasilp

Abstract:

The radioactive iodine thyroid uptake (RAIU) test has been widely used to differentiate the causes of thyrotoxicosis and to guide treatment. Twenty-four-hour RAIU is routinely used to calculate the dose of radioactive iodine (RAI) therapy; however, a 2-day protocol is required. This study aims to evaluate a modified application of the Penpat Pinyowattanasilp equation, in which outlier data (3-hour RAIU of less than 20% or more than 80%) are excluded to improve prediction of the 24-hour uptake. The equation is: predicted 24-hour RAIU (P24RAIU) = 32.5 + 0.702 × (3-hour RAIU). Separate first and second therapeutic doses were then calculated for Graves’ disease patients. Methods: This was a retrospective study at the Faculty of Medicine, Vajira Hospital, Bangkok, Thailand. Included were Graves’ disease patients who visited the RAI clinic between January 2014 and March 2019. We divided subjects into two groups according to first and second therapeutic doses. Results: Our study comprised a total of 151 patients; 115 patients received a first RAI dose and 36 patients a second RAI dose. The P24RAIU values are highly correlated with the actual 24-hour RAIU for the first and second therapeutic doses (r = 0.913, 95% CI = 0.876 to 0.939, and r = 0.806, 95% CI = 0.649 to 0.897). Bland-Altman plots show that the mean differences between predicted and actual 24-hour RAIU for the first and second doses were 2.14% (95% CI 0.83-3.46) and 1.37% (95% CI -1.41-4.14). The mean first actual and predicted therapeutic doses were 8.33 ± 4.93 and 7.38 ± 3.43 millicuries (mCi), respectively. The mean second actual and predicted therapeutic doses were 6.51 ± 3.96 and 6.01 ± 3.11 mCi, respectively. The predicted therapeutic doses are highly correlated with the actual doses for the first and second therapeutic doses (r = 0.907, 95% CI = 0.868 to 0.935, and r = 0.953, 95% CI = 0.909 to 0.976). Bland-Altman plots show that the mean differences between the predicted and actual doses for the first and second doses were less than 1 mCi (-0.94 and -0.5 mCi). This modified equation is simple to use in clinical practice, especially for patients with a 3-hour RAIU in the range of 20-80%, in a Thai population. Before this equation is used in other populations, the correlation should be tested.
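
The prediction equation and the Bland-Altman summary can be transcribed directly into Python; the uptake values below are simulated within the stated 20-80% range.

import numpy as np

def p24raiu(raiu_3h):
    """Predicted 24-hour RAIU (%) from the 3-hour RAIU (%)."""
    return 32.5 + 0.702 * raiu_3h

rng = np.random.default_rng(0)
raiu_3h = rng.uniform(20, 80, 115)                    # within the 20-80% range
actual_24h = p24raiu(raiu_3h) + rng.normal(0, 4, 115) # synthetic actual uptake

diff = p24raiu(raiu_3h) - actual_24h
mean_diff = diff.mean()
loa = 1.96 * diff.std(ddof=1)                         # limits of agreement
print(f"mean difference: {mean_diff:.2f}% "
      f"(95% LoA: {mean_diff - loa:.2f} to {mean_diff + loa:.2f})")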

Keywords: equation, Graves’ disease, prediction, 24-hour uptake

Procedia PDF Downloads 137
17122 Hydro-Gravimetric ANN Model for Prediction of Groundwater Level

Authors: Jayanta Kumar Ghosh, Swastik Sunil Goriwale, Himangshu Sarkar

Abstract:

Groundwater is one of the most valuable natural resources that society consumes for its domestic, industrial, and agricultural water supply. Its bulk and indiscriminate consumption affects the groundwater resource, and the groundwater recharge rate is often found to be much lower than the demand. Thus, to maintain water and food security, it is necessary to monitor and manage groundwater storage. However, it is challenging to estimate groundwater storage (GWS) using existing hydrological models. To overcome these difficulties, machine learning (ML) models are being introduced for the evaluation of groundwater level (GWL). Thus, the objective of this research work is to develop an ML-based model for the prediction of GWL. This objective has been realized through the development of an artificial neural network (ANN) model based on hydro-gravimetry. The model has been developed using training samples from field observations spread over 8 months, and has been tested for the prediction of GWL in an observation well. The root mean square error (RMSE) for the test samples was found to be 0.390 meters. Thus, it can be concluded that the hydro-gravimetric ANN model can be used for the prediction of GWL. However, to improve the accuracy, more hydro-gravimetric parameters may be considered and tested in the future.
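
A minimal Python sketch of a hydro-gravimetric ANN of the kind described follows: gravity observations (plus a placeholder auxiliary input) in, groundwater level out, with RMSE on held-out samples. All values are synthetic.

import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 240                                        # ~8 months of observations
gravity = rng.normal(0, 10, n)                 # residual gravity (microGal)
rainfall = rng.uniform(0, 30, n)               # illustrative auxiliary input
gwl = 0.05 * gravity + 0.01 * rainfall + rng.normal(0, 0.3, n)

X = np.column_stack([gravity, rainfall])
X_tr, X_te, y_tr, y_te = train_test_split(X, gwl, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, ann.predict(X_te)))
print(f"test RMSE: {rmse:.3f} m")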

Keywords: machine learning, hydro-gravimetry, groundwater level, predictive model

Procedia PDF Downloads 125
17121 Predicting Financial Distress in South Africa

Authors: Nikki Berrange, Gizelle Willows

Abstract:

Business rescue has become increasingly popular since its inclusion in the Companies Act of South Africa in May 2011, and the Alternate Exchange (AltX) of the Johannesburg Stock Exchange has experienced a marked increase in the number of companies entering business rescue. This study sampled twenty companies listed on the AltX to determine whether Altman’s Z-score model for emerging markets (ZEM) or Taffler’s Z-score model is the more accurate model for predicting financial distress in small to medium-sized companies in South Africa. The study was performed over three different time horizons (one, two and three years prior to the event of financial distress) in order to determine how many companies each model predicted would be unlikely to succeed, as well as the predictive ability and accuracy of the respective models. The study found that Taffler’s Z-score model had the greater ability to predict financial distress across all three time horizons.
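
For reference, a hedged Python helper for the commonly cited emerging-markets form of Altman's Z''-score follows; the coefficients are as usually published and should be checked against the study's exact specification before reuse.

def altman_zem(working_capital, retained_earnings, ebit,
               book_equity, total_assets, total_liabilities):
    # Commonly published emerging-markets Z''-score coefficients (assumption)
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = book_equity / total_liabilities
    return 3.25 + 6.56 * x1 + 3.26 * x2 + 6.72 * x3 + 1.05 * x4

# Illustrative figures only (millions): a low score flags likely distress
print(round(altman_zem(-20, -50, -10, 40, 300, 260), 2))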

Keywords: Altman’s ZEM-score, Altman’s Z-score, AltX, business rescue, Taffler’s Z-score

Procedia PDF Downloads 370
17120 Predicting the Human Impact of Natural Onset Disasters Using Pattern Recognition Techniques and Rule Based Clustering

Authors: Sara Hasani

Abstract:

This research focuses on natural sudden onset disasters, characterised as ‘occurring with little or no warning and often cause excessive injuries far surpassing the national response capacities’. Based on panel analysis of the historic record of 4,252 natural onset disasters between 1980 and 2015, a method was developed to predict the human impact of a disaster (fatalities, injuries, homelessness) with less than 3% error. The geographical dispersion of the disasters includes every country for which data were available and could be cross-examined across various humanitarian sources. The records were then filtered to the 4,252 disasters for which the five predictive variables (disaster type, HDI, DRI, population, and population density) were clearly stated. The procedure was designed as a combination of pattern recognition techniques and rule-based clustering for prediction, with discrimination analysis used to validate the results further. The results indicate that there is a relationship between a disaster's human impact and the five socio-economic characteristics of the affected country mentioned above. As a result, a framework was put forward that can predict a disaster’s human impact based on its severity rank in the early hours of a disaster strike. The predictions in this model are outlined in two worst- and best-case scenarios, which respectively inform the lower and higher range of the prediction. The necessity of developing such a predictive framework is highlighted by the fact that, despite existing research in the literature, a framework for predicting the human impact and estimating needs at the time of a disaster had yet to be developed. This can further be used to allocate resources in the response phase of a disaster, when data are scarce.
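
The two-scenario idea can be sketched in Python with quantile gradient boosting, which yields lower- and upper-range predictions of fatalities from the five stated predictors; all data below are synthetic.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 4252
X = np.column_stack([
    rng.integers(0, 6, n),        # disaster type (encoded)
    rng.uniform(0.3, 0.95, n),    # HDI
    rng.uniform(0, 10, n),        # DRI
    rng.uniform(1e5, 1e8, n),     # population
    rng.uniform(1, 1000, n),      # population density
])
y = rng.gamma(2, 50, n) * (1.2 - X[:, 1])   # synthetic fatalities

lo = GradientBoostingRegressor(loss="quantile", alpha=0.1, random_state=0).fit(X, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.9, random_state=0).fit(X, y)
print("lower range:", lo.predict(X[:1]).round(0), "upper range:", hi.predict(X[:1]).round(0))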

Keywords: disaster management, natural disaster, pattern recognition, prediction

Procedia PDF Downloads 153
17119 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines

Authors: P. Byrnes, F. A. DiazDelaO

Abstract:

The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions, referred to as support vectors. Despite its popularity amongst practitioners, SVM has some limitations, the most significant being that it generates point predictions as opposed to predictive distributions. Stemming from this issue, a probabilistic model, namely Probabilistic Classification Vector Machines (PCVM), has been proposed, which respects the original functional form of SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks involving industrial applications consist of more than two classes. Consequently, this research proposes a framework which allows for the extension of PCVM to a multi-class setting. Additionally, the original PCVM framework relies on type II maximum likelihood to provide estimates for both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective due to poor scaling as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers through synthetic and real-life implementations.
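
A toy Python illustration of the MCMC idea follows: instead of type II maximum likelihood, a random-walk Metropolis sampler draws a posterior over the weights of a binary kernel classifier, a much-simplified stand-in for full multi-class PCVM inference.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel

X, y = make_classification(n_samples=100, n_features=5, random_state=0)
K = rbf_kernel(X, X, gamma=0.1)                 # kernel design matrix

def log_post(w):
    logits = K @ w
    log_lik = np.sum(y * logits - np.log1p(np.exp(logits)))
    return log_lik - 0.5 * np.sum(w**2)         # Gaussian prior on weights

rng = np.random.default_rng(0)
w = np.zeros(len(y))
samples = []
lp = log_post(w)
for step in range(5000):
    prop = w + 0.02 * rng.normal(size=w.size)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject
        w, lp = prop, lp_prop
    if step > 2500 and step % 10 == 0:
        samples.append(w.copy())

probs = 1 / (1 + np.exp(-K @ np.mean(samples, axis=0)))
print("training accuracy of posterior-mean classifier:",
      np.mean((probs > 0.5) == y))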

Keywords: probabilistic classification vector machines, multi-class classification, MCMC, support vector machines

Procedia PDF Downloads 220