Search results for: predictive biomarker
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1253

1013 Association of MiR-196a Expression in Esophageal Tissue with Barrett's Esophagus and Esophageal Adenocarcinoma

Authors: Petra Borilova Linhartova, Michaela Ruckova, Sabina Sevcikova, Natalie Mlcuchova, Jan Bohm, Katerina Zukalova, Monika Vlachova, Jiri Dolina, Lumir Kunovsky, Radek Kroupa, Zdenek Pavlovsky, Zdenek Danek, Tereza Deissova, Lydie Izakovicova Holla, Ondrej Slaby, Zdenek Kala

Abstract:

Esophageal adenocarcinoma (EAC) is a highly aggressive malignancy that frequently develops from Barrett's esophagus (BE), a premalignant pathologic change occurring in the lower end of the esophagus. Specific microRNAs (miRNAs), small non-coding RNAs that function as posttranscriptional regulators of gene expression, have repeatedly been shown to play key roles in the pathogenesis of these diseases. This pilot study aimed to analyze four selected miRNAs in esophageal tissues from healthy controls (HC) and patients with reflux esophagitis (RE)/BE/EAC, as well as to compare expression at the site of Barrett's mucosa/adenocarcinoma with that in healthy esophageal tissue outside the area of the main pathology in patients with BE/EAC. In this pilot study, 22 individuals (3 HC, 8 RE, 5 BE, 6 EAC) were included and endoscopically examined. RNA was isolated from fresh-frozen esophageal tissue (stored in RNAlater™ Stabilization Solution at −70°C) using the AllPrep DNA/RNA/miRNA Universal Kit. Subsequent RT-qPCR analysis was performed using selected TaqMan MicroRNA Assays for miR-21, miR-34a, miR-196a, miR-196b, and an endogenous control (RNU44). While the expression of miR-21 in the esophageal tissue with the main pathology was decreased in BE and EAC patients in comparison to the group of HC and RE patients (p=0.01), the expression of miR-196a was increased in the BE and EAC patients (p<0.01). Correlations between the tissue expression of these miRNAs and the severity of the diagnosis were observed (p<0.05). In addition, miR-196a was significantly more expressed at the site with the main pathology than in paired adjacent esophageal tissue in BE and EAC patients (p<0.01). In conclusion, our pilot results showed that miR-196a, which regulates proliferation, invasion, and migration (and was previously associated with esophageal squamous cell carcinoma and marked as a potential therapeutic target), could also serve as a diagnostic tissue biomarker for BE and EAC.
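
The relative quantification behind results like these is typically done with the 2^-ΔΔCt method, normalizing each target miRNA to the endogenous control (RNU44 here) and then to the control group. As a hedged illustration only (the abstract does not state its exact quantification pipeline, and all Ct values below are hypothetical), a minimal sketch in Python:

```python
# Minimal 2^-ddCt sketch, assuming Ct values from the TaqMan assays are
# normalized to the RNU44 endogenous control; all numbers are hypothetical.

def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target miRNA in a case sample vs. a control group."""
    d_case = ct_target_case - ct_ref_case    # dCt in the BE/EAC sample
    d_ctrl = ct_target_ctrl - ct_ref_ctrl    # dCt in the healthy control
    return 2.0 ** -(d_case - d_ctrl)         # 2^-ddCt

# Hypothetical Ct values for miR-196a (lower Ct = more abundant):
print(fold_change(24.1, 21.0, 27.3, 21.2))   # ~8: overexpressed in the case
```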

Keywords: microRNA, Barrett's esophagus, esophageal adenocarcinoma, biomarker

Procedia PDF Downloads 112
1012 Development of a Predictive Model to Prevent Financial Crisis

Authors: Tengqin Han

Abstract:

Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit cards and mortgages, it played a crucial role in causing the most recent financial crisis in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt, and it may thus cause a loss of property in the end. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economic entity, national and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% over the past decades. However, potential risks exist behind the appearance of prosperity. Among these risks, the credit system is the most significant one. Because mortgages involve long terms and large balances, it is critical to monitor the risk during the performance period. In this project, about 300,000 mortgage account records are analyzed in order to develop a predictive model for the probability of delinquency. Through univariate analysis, the data are cleaned up, and through bivariate analysis, the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 analysis data are split into two parts: 60% for model development and 40% for in-time model validation. The KS of model development is 31, and the KS for in-time validation is 31, indicating the model is stable. In addition, the model is further validated by out-of-time validation, which uses 40% of the 2006 data, and KS is 33. This indicates the model is still stable and robust. In the second part, the model is improved by the addition of macroeconomic indexes, including GDP, consumer price index, unemployment rate, inflation rate, etc. The data from 2005 to 2010 are used for model development and validation. Compared with the base model (without macroeconomic variables), KS is increased from 41 to 44, indicating that the macroeconomic variables can be used to improve the separation power of the model and make the prediction more accurate.
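
The KS statistic quoted above measures the maximum separation between the cumulative score distributions of delinquent and non-delinquent accounts. A hedged sketch of that computation on synthetic scores (the actual mortgage data and model scores are not public):

```python
# Hedged sketch of the Kolmogorov-Smirnov (KS) separation statistic used to
# validate the model above. Scores are synthetic; the actual mortgage data
# and model scores are not public.
import numpy as np

def ks_statistic(scores_bad, scores_good):
    """KS = max gap between the cumulative score distributions of
    delinquent ('bad') and non-delinquent ('good') accounts, on a 0-100 scale."""
    grid = np.sort(np.concatenate([scores_bad, scores_good]))
    cdf_bad = np.searchsorted(np.sort(scores_bad), grid, side="right") / len(scores_bad)
    cdf_good = np.searchsorted(np.sort(scores_good), grid, side="right") / len(scores_good)
    return 100 * np.max(np.abs(cdf_bad - cdf_good))

rng = np.random.default_rng(0)
bad = rng.normal(0.48, 0.15, 3000)    # hypothetical scores of delinquent accounts
good = rng.normal(0.60, 0.15, 27000)  # hypothetical scores of current accounts
print(f"KS = {ks_statistic(bad, good):.0f}")  # ~31 for these synthetic distributions
```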

Keywords: delinquency, mortgage, model development, model validation

Procedia PDF Downloads 228
1011 Online Learning for Modern Business Models: Theoretical Considerations and Algorithms

Authors: Marian Sorin Ionescu, Olivia Negoita, Cosmin Dobrin

Abstract:

This scientific communication reports and discusses learning models adaptable to modern business problems and to models specific to digital concepts and paradigms. In the PAC (probably approximately correct) learning model approach, the learning process begins by receiving a batch of learning examples; this set is used to acquire a hypothesis, and once the learning process is complete, the hypothesis is used to predict new operational examples. For complex business models, many models should be introduced and evaluated to estimate the induced results, so that the totality of the results is used to develop a predictive rule that anticipates the choice of new models. In contrast, for online learning-type processes, there is no separation between the learning (training) and predictive phases. Every time a business model is approached, a test example is considered, from the start of the process until the prediction of a model considered correct from the point of view of the business decision. After a part of the business model is chosen, the label with the logical value "true" becomes known. Some of the business models are used as learning (training) examples, which helps to improve the prediction mechanisms for future business models.
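
The predict-then-observe protocol described above can be made concrete with a minimal online learner. The perceptron below is only a stand-in for the business-model learner, and the instances and hidden concept are synthetic placeholders:

```python
# Minimal sketch of the online protocol described above: predict first, then
# observe the label and update. The perceptron is a stand-in learner; the
# instances and the hidden concept are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)                                # hypothesis maintained across rounds
target = np.array([1.0, -2.0, 0.5])            # hidden concept, unknown to the learner
mistakes = 0
for t in range(1000):
    x = rng.normal(size=3)                     # a new instance arrives
    y_pred = 1 if w @ x >= 0 else -1           # predict BEFORE the label is revealed
    y_true = 1 if target @ x > 0 else -1       # the "true" label becomes known
    if y_pred != y_true:
        mistakes += 1
        w += y_true * x                        # learn from the example
print(f"online mistakes over 1000 rounds: {mistakes}")
```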

Keywords: machine learning, business models, convex analysis, online learning

Procedia PDF Downloads 142
1010 Green Wave Control Strategy for Optimal Energy Consumption by Model Predictive Control in Electric Vehicles

Authors: Furkan Ozkan, M. Selcuk Arslan, Hatice Mercan

Abstract:

Electric vehicles are becoming increasingly popular as a sustainable alternative to traditional combustion engine vehicles. However, to fully realize the potential of EVs in reducing environmental impact and energy consumption, efficient control strategies are essential. This study explores the application of green wave control using model predictive control (MPC) for electric vehicles, coupled with energy consumption modeling using neural networks. The use of MPC allows for real-time optimization of the vehicle's energy consumption while considering dynamic traffic conditions. By leveraging neural networks for energy consumption modeling, the EV's performance can be further enhanced through accurate predictions and adaptive control. The integration of these advanced control and modeling techniques aims to maximize energy efficiency and range while navigating urban traffic scenarios. The findings of this research offer valuable insights into the potential of green wave control for electric vehicles and demonstrate the significance of integrating MPC and neural network modeling for optimizing energy consumption. This work contributes to the advancement of sustainable transportation systems and the widespread adoption of electric vehicles. To evaluate the effectiveness of the green wave control strategy in real-world urban environments, extensive simulations were conducted using a high-fidelity vehicle model and realistic traffic scenarios. The results indicate that the integration of model predictive control and energy consumption modeling with neural networks had a significant impact on the energy efficiency and range of electric vehicles. Through the use of MPC, the electric vehicle was able to adapt its speed and acceleration profile in real time to optimize energy consumption while maintaining travel time objectives. The neural network-based energy consumption modeling provided accurate predictions, enabling the vehicle to anticipate and respond to variations in traffic flow, further enhancing energy efficiency and range. Furthermore, the study revealed that the green wave control strategy not only reduced energy consumption but also improved the overall driving experience by minimizing abrupt acceleration and deceleration, leading to a smoother and more comfortable ride for passengers. These results demonstrate the potential for green wave control to revolutionize urban transportation by enhancing the performance of electric vehicles and contributing to a more sustainable and efficient mobility ecosystem.
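
To make the receding-horizon idea concrete, the sketch below enumerates short acceleration plans and picks the cheapest one that does not arrive at the signal before its green window opens. The energy model, signal timing, and speed limits are all invented placeholders; the paper's neural-network energy model and traffic scenarios are not reproduced here:

```python
# Illustrative receding-horizon planner in the spirit of the MPC above: pick
# the cheapest short acceleration plan that does not reach the signal before
# its green window opens. The energy model, signal timing, and limits are
# invented; the paper's neural-network energy model is not reproduced here.
import itertools

DT, HORIZON = 1.0, 5                 # 1 s steps, 5-step lookahead
ACCELS = [-1.0, 0.0, 1.0]            # candidate accelerations (m/s^2)

def energy(v, a):                    # crude stand-in for the NN energy model
    return 0.05 * v**2 + 0.8 * max(a, 0.0) * v + 0.1

def plan(v0, dist_to_light, green_opens):
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(ACCELS, repeat=HORIZON):
        v, x, t, cost, feasible = v0, 0.0, 0.0, 0.0, True
        for a in seq:
            v = max(0.0, min(v + a * DT, 16.7))      # clamp to 0..60 km/h
            x += v * DT; t += DT; cost += energy(v, a) * DT
            if x >= dist_to_light and t < green_opens:
                feasible = False; break              # would arrive on red
        if feasible and cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost                       # apply best_seq[0], then re-plan

seq, cost = plan(v0=10.0, dist_to_light=60.0, green_opens=4.0)
print("first control move:", seq[0], "plan cost:", round(cost, 2))
```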

Keywords: electric vehicles, energy efficiency, green wave control, model predictive control, neural networks

Procedia PDF Downloads 55
1009 Comparative Analysis of Predictive Models for Customer Churn Prediction in the Telecommunication Industry

Authors: Deepika Christopher, Garima Anand

Abstract:

To determine the best model for churn prediction in the telecom industry, this paper compares 11 machine learning algorithms, namely Logistic Regression, Support Vector Machine, Random Forest, Decision Tree, XGBoost, LightGBM, CatBoost, AdaBoost, Extra Trees, Deep Neural Network, and a Hybrid Model (MLPClassifier). It also aims to pinpoint the top three factors that lead to customer churn and conducts customer segmentation to identify vulnerable groups. According to the data, the Logistic Regression model performs the best, with an F1 score of 0.6215, 81.76% accuracy, 68.95% precision, and 56.57% recall. The top three attributes that cause churn are found to be tenure, Internet Service Fiber optic, and Internet Service DSL; conversely, the top three models in this article that perform the best are Logistic Regression, Deep Neural Network, and AdaBoost. The K-means algorithm is applied to establish and analyze four different customer clusters. This study effectively identifies customers that are at risk of churn, and its results may be utilized to develop and execute strategies that lower customer attrition.
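
As a sketch of the evaluation protocol reported above, the snippet below trains scikit-learn's logistic regression on a synthetic, churn-like imbalanced dataset and reports the same four metrics; the telecom data themselves are not bundled here:

```python
# Sketch of the evaluation above: scikit-learn logistic regression on a
# synthetic, churn-like imbalanced dataset, reporting the same four metrics.
# The telecom dataset itself is not bundled here.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.73],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy  {accuracy_score(y_te, pred):.4f}")
print(f"precision {precision_score(y_te, pred):.4f}")
print(f"recall    {recall_score(y_te, pred):.4f}")
print(f"F1        {f1_score(y_te, pred):.4f}")
```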

Keywords: attrition, retention, predictive modeling, customer segmentation, telecommunications

Procedia PDF Downloads 58
1008 Coronin 1C and miR-128a as Potential Diagnostic Biomarkers for Glioblastoma Multiforme

Authors: Denis Mustafov, Emmanouil Karteris, Maria Braoudaki

Abstract:

Glioblastoma multiforme (GBM) is a heterogeneous primary brain tumour that kills most affected patients. To the authors' best knowledge, despite all research efforts, there is no early diagnostic biomarker for GBM. MicroRNAs (miRNAs) are short non-coding RNA molecules which are deregulated in many cancers. The aim of this research was to determine miRNAs with a diagnostic impact and to potentially identify promising therapeutic targets for glioblastoma multiforme. In silico analysis was performed to identify deregulated miRNAs with diagnostic relevance for glioblastoma. The expression profiles of the chosen miRNAs were then validated in vitro in the human glioblastoma cell lines A172 and U-87MG. Briefly, RNA extraction was carried out using the Trizol method, whilst miRNA extraction was performed using the mirVANA miRNA isolation kit. Quantitative real-time polymerase chain reaction was performed to verify their expression. The presence of five target proteins within the A172 cell line was evaluated by Western blotting. The expression of the CORO1C protein within 32 GBM cases was examined via immunohistochemistry. The miRNAs identified in silico included miR-21-5p, miR-34a, and miR-128a. These miRNAs were shown to target deregulated GBM genes, such as CDK6, E2F3, BMI1, JAG1, and CORO1C. miR-34a and miR-128a showed low expression profiles in comparison to the RNU44 control in both GBM cell lines, suggesting tumour suppressor properties. In contrast, miR-21-5p demonstrated greater expression, indicating that it could potentially function as an oncomiR. Western blotting revealed expression of all five proteins within the A172 cell line. In silico analysis also suggested that CORO1C is a target of miR-128a and miR-34a. Immunohistochemistry demonstrated that 75% of the GBM cases showed moderate to high expression of the CORO1C protein. A greater understanding of the deregulated expression of miR-128a and the upregulation of CORO1C in GBM could potentially lead to the identification of a promising diagnostic biomarker signature for glioblastomas.

Keywords: non-coding RNAs, gene expression, brain tumours, immunohistochemistry

Procedia PDF Downloads 91
1007 Air Pollutants Exposure and Blood High Sensitivity C-Reactive Protein Concentrations in Healthy Pregnant Women

Authors: Gwo-Hwa Wan, Tai-Ho Hung, Fen-Fang Chung, Wan-Ying Lee, Hui-Ching Yang

Abstract:

Air pollutant exposure results in elevated concentrations of oxidative stress and inflammatory biomarkers in general populations. Increased concentrations of inflammatory biomarkers in pregnant women are associated with preterm labor and low birth weight. To the best of our knowledge, the associations between air pollutant exposure and inflammation in pregnant women and fetuses are unknown, as are their effects on fetal growth. This study aimed to evaluate the influence of outdoor air pollutants in northern Taiwan on the concentration of an inflammatory biomarker (high-sensitivity C-reactive protein, hs-CRP) in the blood of healthy pregnant women, and how the biomarker impacts fetal growth. In this study, 38 healthy pregnant women who were in their first trimester and lived in the northern Taiwan area were recruited from the Taipei Chang Gung Memorial Hospital. Personal characteristics and prenatal examination data (e.g., blood pressure) were obtained from the recruited subjects. The concentrations of the inflammatory mediator hs-CRP in the blood of the healthy pregnant women were analyzed. Additionally, hourly air pollutant (PM10, SO2, NO2, O3, CO) concentration data were obtained from air quality monitoring stations in the Taipei area, established by the Taiwan Environmental Protection Administration. Lag 0 and lag 01 are defined as the exposure to air pollutants on the day of blood withdrawal, and the average exposure to air pollutants one day before and on the day of blood withdrawal, respectively. The statistical analyses were conducted using SPSS software version 22.0 (SPSS, Inc., Chicago, IL, USA). The healthy pregnant women were aged between 28 and 42 years. The body mass index before pregnancy averaged 21.51 (sd = 2.51) kg/m2. Around 90% of the pregnant women had never smoked, and 28.95% of them had allergic diseases. Approximately 84% and 5.26% of the pregnant women worked in indoor and outdoor environments, respectively. The mean hematocrit level of the pregnant women was 37.10%, and the hemoglobin levels ranged between 10.1 and 14.7 g/dL with a mean value of 12.47 g/dL. The blood hs-CRP concentrations of the healthy pregnant women in the first trimester ranged between 0.32 and 32.5 mg/L with a mean value of 2.83 (sd = 5.69) mg/L. The blood hs-CRP concentrations were positively associated with ozone concentrations at lag 0-14 (r = 0.481, p = 0.017) in the healthy pregnant women. Significant lag effects were identified for ozone at lag 0-14, with a positive excess concentration of blood hs-CRP.
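
The lag analysis described above amounts to correlating each subject's hs-CRP with pollutant exposure averaged over the blood-draw day and the preceding day(s). A hedged sketch with hypothetical column names and synthetic data:

```python
# Hedged sketch of the lag analysis above: correlate each subject's hs-CRP
# with ozone averaged over the draw day and the previous day ("lag 01").
# Column names and all data are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 38
df = pd.DataFrame({
    "hs_crp": rng.lognormal(0.5, 0.8, n),        # mg/L
    "o3_lag0": rng.normal(35, 8, n),             # ppb on the blood-draw day
    "o3_prev_day": rng.normal(35, 8, n),         # ppb the day before
})
df["o3_lag01"] = df[["o3_lag0", "o3_prev_day"]].mean(axis=1)

for col in ["o3_lag0", "o3_lag01"]:
    r, p = pearsonr(df[col], df["hs_crp"])
    print(f"{col}: r = {r:+.3f}, p = {p:.3f}")
```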

Keywords: air pollutant, hs-CRP, pregnant woman, ozone, first trimester

Procedia PDF Downloads 257
1006 Detection of Autistic Children's Voice Based on Artificial Neural Network

Authors: Royan Dawud Aldian, Endah Purwanti, Soegianto Soelistiono

Abstract:

In this research, we developed an automatic system to classify children's voices as normal or autistic using modern computational technology based on an artificial neural network, whose strength lies in its capability for processing and storing data. Digital voice features are obtained from the coefficients of linear predictive coding (computed with the autocorrelation method) and transformed into the frequency domain using the fast Fourier transform; these are used as the input of an artificial neural network trained with the back-propagation method, so that the difference between normal and autistic children is detected automatically. The results of the back-propagation method show a successful classification rate of 100% for the normal children's voice experiment data and likewise 100% for the autistic children's voice experiment data. The success rate of the back-propagation classification system on the entire test data set is 100%.
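
The feature pipeline described above, LPC coefficients from the autocorrelation (Levinson-Durbin) method followed by an FFT, can be sketched as follows; the frame below is synthetic, where a real pipeline would use windowed frames of a child's recording:

```python
# Sketch of the described front end: LPC coefficients via the autocorrelation
# (Levinson-Durbin) method, then an FFT of the coefficients as the ANN input.
# The frame is synthetic; real input would be windowed frames of a recording.
import numpy as np

def lpc_autocorrelation(frame, order=10):
    """LPC coefficients a[0..order] (a[0] = 1) via Levinson-Durbin recursion."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1); a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                        # reflection coefficient
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a, err = new_a, err * (1.0 - k * k)   # prediction error shrinks per order
    return a

rng = np.random.default_rng(4)
frame = np.sin(2 * np.pi * 180 * np.arange(400) / 16000) + 0.01 * rng.normal(size=400)
coeffs = lpc_autocorrelation(frame, order=10)
features = np.abs(np.fft.rfft(coeffs, n=64))  # frequency-domain features for the ANN
print(np.round(features[:5], 3))
```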

Keywords: autism, artificial neural network, backpropagation, linear predictive coding, fast Fourier transform

Procedia PDF Downloads 461
1005 Predictive Modelling Approach to Identify Spare Parts Inventory Obsolescence

Authors: Madhu Babu Cherukuri, Tamoghna Ghosh

Abstract:

Factory supply chain management spends billions of dollars every year to procure and manage equipment spare parts. Due to technology and process changes, some of these spares become obsolete/dead inventory. Factories have huge dead inventories worth millions of dollars accumulating over time. This is due to the lack of a scientific methodology to identify them and send the inventory back to the suppliers on a timely basis. The standard approach followed across industries is: if a part is not used for a set pre-defined period of time, it is declared dead. This leads to an accumulation of dead parts over time, and these parts cannot be sold back to the suppliers, as it is too late under the contract agreement. Our main idea is that the time period for identifying a part as dead cannot be a fixed pre-defined duration across all parts. Rather, it should depend on various properties of the part, such as its historical consumption pattern, the type of part, how many machines it is used in, whether it is a preventive maintenance part, etc. We have designed a predictive algorithm which predicts part obsolescence well in advance with reasonable accuracy and which can help save millions.

Keywords: obsolete inventory, machine learning, big data, supply chain analytics, dead inventory

Procedia PDF Downloads 319
1004 DeepLig: A de-novo Computational Drug Design Approach to Generate Multi-Targeted Drugs

Authors: Anika Chebrolu

Abstract:

Mono-targeted drugs can be of limited efficacy against complex diseases. Recently, multi-target drug design has been approached as a promising tool to fight against these challenging diseases. However, the scope of current computational approaches for multi-target drug design is limited. DeepLig presents a de novo drug discovery platform that uses reinforcement learning to generate and optimize novel, potent, and multi-targeted drug candidates against protein targets. DeepLig's model consists of two networks in interplay: a generative network and a predictive network. The generative network, a Stack-Augmented Recurrent Neural Network, utilizes a stack memory unit to remember and recognize molecular patterns when generating novel ligands from scratch. The generative network passes each newly created ligand to the predictive network, which then uses multiple Graph Attention Networks simultaneously to forecast the average binding affinity of the generated ligand towards multiple target proteins. With each iteration, given feedback from the predictive network, the generative network learns to optimize itself to create molecules with a higher average binding affinity towards multiple proteins. DeepLig was evaluated based on its ability to generate multi-target ligands against two distinct proteins, multi-target ligands against three distinct proteins, and multi-target ligands against two distinct binding pockets on the same protein. In each test case, DeepLig was able to create a library of valid, synthetically accessible, and novel molecules with optimal and equipotent binding energies. We propose that DeepLig provides an effective approach to designing multi-targeted drug therapies that can potentially show higher success rates during in vitro trials.

Keywords: drug design, multitargeticity, de-novo, reinforcement learning

Procedia PDF Downloads 99
1003 Determination of Community Based Reference Interval of Aspartate Aminotransferase to Platelet Ratio Index (APRI) among Healthy Populations in Mekelle City Tigray, Northern Ethiopia

Authors: Getachew Belay Kassahun

Abstract:

Background: The aspartate aminotransferase to platelet ratio index (APRI) has recently become a biomarker for screening for liver fibrosis, since the liver biopsy procedure is invasive and subject to variation in pathological interpretation. The Clinical and Laboratory Standards Institute recommends establishing age-, sex- and environment-specific reference intervals for biomarkers in a homogeneous population. The current study aimed to derive a community-based reference interval for APRI among people aged between 12 and 60 years in Mekelle city, Tigray, Northern Ethiopia. Method: Six hundred eighty-eight study participants were recruited from three districts in Mekelle city. The 3 districts were selected through a random sampling technique, and the sample size was distributed to kebelles (small administrative units) in proportion to the number of households in each district. A lottery method was used at the household level when more than two eligible study participants were found for an age partition. In this community-based cross-sectional study, a total of 534 study participants, 264 males and 270 females, were included in the final laboratory and data analysis; around 154 study participants were excluded based on the exclusion criteria. Aspartate aminotransferase was analyzed on a Biosystem chemistry analyzer, and a Sysmex machine was used to analyze platelets. The Mann-Whitney U test, a non-parametric statistical tool, was used to assess differences between genders after excluding outliers using box-and-whisker plots. Result: The study found a statistically significant difference between genders for the APRI reference interval. The combined, male, and female reference intervals in the current study were 0.098-0.390, 0.133-0.428, and 0.090-0.319, respectively. The upper and lower reference limits of males were higher than those of females in all age partitions, and there was no statistically significant difference between age partitions. Conclusion: The current study showed that using sex-specific reference intervals for the APRI biomarker is important in clinical practice for result interpretation.
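
For reference, APRI is conventionally computed as (AST / AST upper limit of normal) x 100 / platelet count, and a nonparametric reference interval is commonly taken as the central 95% (2.5th-97.5th percentiles) of the healthy sample. The sketch below uses synthetic values and assumes an AST upper limit of normal of 40 U/L:

```python
# Sketch of the APRI computation and a nonparametric reference interval
# (2.5th-97.5th percentiles of the healthy sample). Values are synthetic and
# the AST upper limit of normal of 40 U/L is an assumption.
import numpy as np

def apri(ast_u_l, platelets_10e9_l, ast_uln=40.0):
    """APRI = (AST / AST upper limit of normal) * 100 / platelet count."""
    return (ast_u_l / ast_uln) * 100.0 / platelets_10e9_l

rng = np.random.default_rng(7)
ast = rng.normal(24, 6, 534).clip(8, 45)        # hypothetical healthy AST (U/L)
plt = rng.normal(280, 60, 534).clip(150, 450)   # hypothetical platelets (10^9/L)
values = apri(ast, plt)

lo, hi = np.percentile(values, [2.5, 97.5])     # nonparametric reference limits
print(f"APRI reference interval: {lo:.3f} - {hi:.3f}")
```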

Keywords: reference interval, aspartate aminotransferase to platelet ratio index, Ethiopia, Tigray

Procedia PDF Downloads 116
1002 Comparison of Techniques for Detection and Diagnosis of Eccentricity in the Air-Gap Fault in Induction Motors

Authors: Abrahão S. Fontes, Carlos A. V. Cardoso, Levi P. B. Oliveira

Abstract:

Induction motors are used worldwide in various industries. Several maintenance techniques are applied to increase the operating time and lifespan of these motors. Among these, predictive maintenance techniques such as Motor Current Signature Analysis (MCSA), Motor Square Current Signature Analysis (MSCSA), Park's Vector Approach (PVA) and Park's Vector Square Modulus (PVSM) are used to detect and diagnose faults in electric motors, characterized by patterns in the stator current frequency spectrum. In this article, these techniques are applied and compared on a real motor that has an air-gap eccentricity fault. A theoretical model of a fault-free induction motor was used to assist the comparison between the stator current frequency spectrum patterns with and without faults. Metrics were proposed and applied to evaluate the fault detection sensitivity of each technique. The results presented here show that the above techniques are suitable for detecting eccentricity in the air gap, and their comparison showed the suitability of each one.
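
Of the techniques named above, Park's Vector Approach is the most compact to illustrate: the three stator currents are projected onto the (id, iq) plane, where a healthy machine traces a circle and faults such as air-gap eccentricity distort it. The sketch below uses simulated currents with an invented eccentricity modulation:

```python
# Sketch of the Park's Vector Approach: project the three stator currents onto
# the (id, iq) plane. A healthy, balanced machine gives a constant square
# modulus; the invented eccentricity modulation below adds fault ripple.
import numpy as np

t = np.arange(0, 1, 1 / 5000)                     # 1 s at 5 kHz
w = 2 * np.pi * 50                                # 50 Hz supply
ia = np.cos(w * t)
ib = np.cos(w * t - 2 * np.pi / 3)
ic = np.cos(w * t + 2 * np.pi / 3)
ecc = 1 + 0.05 * np.cos(2 * np.pi * 12.5 * t)     # hypothetical eccentricity effect
ia, ib, ic = ecc * ia, ecc * ib, ecc * ic

i_d = np.sqrt(2 / 3) * ia - ib / np.sqrt(6) - ic / np.sqrt(6)
i_q = (ib - ic) / np.sqrt(2)
pvsm = i_d**2 + i_q**2                            # Park's Vector Square Modulus

spectrum = np.abs(np.fft.rfft(pvsm - pvsm.mean()))
print("dominant ripple frequency (Hz):", spectrum.argmax())  # ~ the modulation rate
```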

Keywords: eccentricity in the air-gap, fault diagnosis, induction motors, predictive maintenance

Procedia PDF Downloads 351
1001 Optimal Tamping for Railway Tracks, Reducing Railway Maintenance Expenditures by the Use of Integer Programming

Authors: Rui Li, Min Wen, Kim Bang Salling

Abstract:

For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euros per kilometer per year. In order to reduce such maintenance expenditures, this paper presents a mixed 0-1 linear mathematical model designed to optimize predictive railway tamping activities for ballast track over a planning horizon of three to four years. The objective function minimizes the actual costs of the tamping machine. The approach uses a simple dynamic model of the condition-based tamping process and a solution method for finding the optimal condition-based tamping schedule. Seven technical and practical aspects are taken into account in scheduling tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality recovery on the track quality after the tamping operation; (5) tamping machine operation practices; (6) tamping budgets; and (7) differentiating the open track from the station sections. The proposed maintenance model is applied to a 42.6 km Danish railway track between Odense and Fredericia for time periods of three and four years. The generated tamping schedule is reasonable and robust. Based on the result from the Danish railway corridor, the total costs can be reduced significantly (by 50%) compared with the previous model, which was based on optimizing the number of tampings. Different maintenance strategies are discussed in the paper. The analysis of the results obtained from the model also shows that a longer period of predictive tamping planning yields a more optimal scheduling of maintenance actions than continuous short-term preventive maintenance, namely yearly condition-based planning.
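
A toy version of such a 0-1 tamping model is sketched below with PuLP. The degradation rates, recovery effect, costs, and quality threshold are invented, and the recovery term is linearized, so this shows only the structure (minimize tamping cost subject to keeping the longitudinal-level standard deviation under its limit):

```python
# Toy 0-1 tamping model in PuLP showing only the structure: minimize tamping
# cost while keeping each segment's longitudinal-level standard deviation
# under its threshold. Degradation rates, recovery, costs, and the limit are
# invented, and the recovery term is linearized for the sketch.
import pulp

periods, segments = range(4), range(3)      # 4 planning periods, 3 track segments
sigma0 = [1.0, 1.4, 1.2]                    # initial std dev of longitudinal level (mm)
rate = [0.25, 0.35, 0.30]                   # degradation per period (mm)
limit, recovery, cost = 2.0, 0.8, 1.0       # quality limit, tamping effect, unit cost

prob = pulp.LpProblem("tamping", pulp.LpMinimize)
x = {(s, t): pulp.LpVariable(f"x_{s}_{t}", cat="Binary")
     for s in segments for t in periods}
prob += pulp.lpSum(cost * x[s, t] for s in segments for t in periods)

for s in segments:
    for t in periods:
        degraded = sigma0[s] + rate[s] * (t + 1)
        recovered = recovery * pulp.lpSum(x[s, u] for u in range(t + 1))
        prob += degraded - recovered <= limit   # stay under the quality threshold

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(s, t) for (s, t) in x if x[s, t].value() > 0.5])  # scheduled tampings
```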

Keywords: integer programming, railway tamping, predictive maintenance model, preventive condition-based maintenance

Procedia PDF Downloads 446
1000 Establishing a Surrogate Approach to Assess the Exposure Concentrations during Coating Process

Authors: Shan-Hong Ying, Ying-Fang Wang

Abstract:

A surrogate approach was deployed for assessing exposures to multiple chemicals at the selected working area of coating processes, and applied to assess the exposure concentrations of similar exposure groups using the same chemicals but different formula ratios. For the selected area, 6 to 12 portable photoionization detectors (PIDs) were placed uniformly in the workplace to measure total VOC concentrations (CT-VOCs) for 6 randomly selected workshifts. Simultaneously, one sampling train was placed beside one of these portable PIDs, and the collected air sample was analyzed for the individual concentrations (CVOCi) of 5 VOCs (xylene, butanone, toluene, butyl acetate, and dimethylformamide). Predictive models were established by relating the CT-VOCs to the CVOCi of each individual compound via simple regression analysis. The established predictive models were employed to predict each CVOCi based on the measured CT-VOC for each similar working area using the same portable PID. Results show that the predictive models obtained from simple linear regression analyses had an R2 of 0.83~0.99, indicating that CT-VOCs were adequate for predicting CVOCi. In order to verify the validity of the exposure prediction model, sampling analysis of the above chemical substances was further carried out, and the correlation between the measured value (Cm) and the predicted value (Cp) was analyzed. A good correlation was found between the predicted and measured values of each measured chemical substance (R2 = 0.83~0.98). Therefore, the surrogate approach can be used to assess the exposure concentrations of similar exposure groups using the same chemicals but different formula ratios. However, it is recommended to establish the prediction model between the chemical substances belonging to each coater and the direct-reading PID, which is more representative of the real exposure situation and estimates the long-term exposure concentration of operators more accurately.
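
The core of the surrogate approach is the per-compound simple linear regression from the PID's total VOC reading to the compound's concentration. A sketch with synthetic readings standing in for the workshift data:

```python
# Sketch of the per-compound calibration above: simple linear regression from
# the PID's total VOC reading (CT-VOC) to one compound's concentration
# (CVOCi). The readings are synthetic stand-ins for the workshift data.
import numpy as np

ct_voc = np.array([12.0, 18.5, 25.1, 31.0, 38.4, 44.9])  # ppm, portable PID
c_voc_i = np.array([2.1, 3.4, 4.8, 5.9, 7.5, 8.6])       # ppm, e.g. toluene

slope, intercept = np.polyfit(ct_voc, c_voc_i, 1)
pred = slope * ct_voc + intercept
r2 = 1 - np.sum((c_voc_i - pred) ** 2) / np.sum((c_voc_i - c_voc_i.mean()) ** 2)
print(f"CVOCi = {slope:.3f} * CT-VOC + {intercept:.3f}, R^2 = {r2:.3f}")

# Reuse on a similar working area: a new PID reading of 28 ppm predicts
print(f"predicted CVOCi: {slope * 28 + intercept:.2f} ppm")
```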

Keywords: exposure assessment, exposure prediction model, surrogate approach, TVOC

Procedia PDF Downloads 152
999 Assessment of Predictive Confounders for the Prevalence of Breast Cancer among Iraqi Population: A Retrospective Study from Baghdad, Iraq

Authors: Nadia H. Mohammed, Anmar Al-Taie, Fadia H. Al-Sultany

Abstract:

Although breast cancer prevalence continues to increase, mortality has been decreasing as a result of early detection and improvements in adjuvant systemic therapy. Nevertheless, this disease requires further efforts to understand and identify the associated potential risk factors that could play a role in the prevalence of this malignancy among Iraqi women. The objective of this study was to assess the influence of certain predictive risk factors on the prevalence of breast cancer types among a sample of Iraqi women diagnosed with breast cancer. This was a retrospective observational study carried out at the National Cancer Research Center, College of Medicine, Baghdad University, from November 2017 to January 2018. Data from 100 patients with breast cancer whose biopsies were examined in the National Cancer Research Center were included in this study. Data were collected to structure a detailed assessment of the patients' demographic, medical and cancer records. The majority of study participants (94%) suffered from ductal breast cancer, with a mean age of 49.57 years. Among those women, 48.9% were obese with a body mass index (BMI) of 35 kg/m2, 68.1% had a positive family history of breast cancer, and 66% had low parity. 40.4% had stage II ductal breast cancer, followed by 25.5% with stage III. It was found that 59.6% and 68.1% had positive oestrogen receptor sensitivity and positive human epidermal growth factor (HER2/neu) receptor sensitivity, respectively. Regarding the predictive impact of certain variables on the incidence of ductal breast cancer, a positive family history of breast cancer (P < 0.0001), low parity (P < 0.0001), stage I and II breast cancer (P = 0.02) and positive HER2/neu status (P < 0.0001) were significant predictive factors among the study participants. The results of this study provide relevant evidence for a significant and potential positive association between certain risk factors and the prevalence of breast cancer among Iraqi women.

Keywords: ductal breast cancer, hormone sensitivity, Iraq, risk factors

Procedia PDF Downloads 128
998 Application of Global Predictive Real Time Control Strategy to Improve Flooding Prevention Performance of Urban Stormwater Basins

Authors: Shadab Shishegar, Sophie Duchesne, Genevieve Pelletier

Abstract:

Sustainability, one of the key elements of smart cities, can be realized by employing real-time control strategies for a city's infrastructure. Nowadays, stormwater management systems play an important role in mitigating the impacts of urbanization on the natural hydrological cycle. These systems can be managed in such a way that they meet smart city standards. In fact, there is huge potential for sustainable management of urban stormwater, and for its adaptability to global challenges like climate change. Hence, a dynamically managed system that can adapt itself to unstable environmental conditions is desirable. A global predictive real-time control approach is proposed in this paper to optimize the performance of stormwater management basins in terms of flooding prevention. To do so, a mathematical optimization model is developed and then solved using a genetic algorithm (GA). Results show improved performance at the system level for the stormwater basins in comparison to a static strategy.
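
A compact genetic-algorithm sketch of the optimization layer is given below. The decision vector is a binary hourly gate schedule and the objective is a placeholder overflow penalty, since the paper's hydrological model and mathematical formulation are not reproduced here:

```python
# Compact genetic-algorithm sketch of the optimization layer. The decision
# vector is a binary hourly gate schedule; the objective is a placeholder
# overflow penalty, not the paper's hydrological model.
import numpy as np

rng = np.random.default_rng(11)
INFLOW = rng.gamma(2.0, 5.0, 24)          # hypothetical storm inflow (m^3/step)
CAP, RELEASE = 120.0, 15.0                # basin capacity, gate release per step

def objective(schedule):
    level, spill = 0.0, 0.0
    for t in range(24):
        level = max(level + INFLOW[t] - RELEASE * schedule[t], 0.0)
        if level > CAP:
            spill += level - CAP
            level = CAP
    return spill + 0.1 * schedule.sum()   # small penalty on gate operation

pop = rng.integers(0, 2, (40, 24))
for gen in range(100):
    fitness = np.array([objective(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]            # keep the best half
    kids = parents.copy()
    cut = rng.integers(1, 23)                          # one-point crossover
    kids[:, cut:] = parents[rng.permutation(20), cut:]
    kids[rng.random(kids.shape) < 0.02] ^= 1           # bit-flip mutation
    pop = np.vstack([parents, kids])

best = min(pop, key=objective)
print("best spill + operation cost:", round(objective(best), 2))
```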

Keywords: environmental sustainability, optimization, real time control, storm water management

Procedia PDF Downloads 178
997 Reinforcement Learning for Quality-Oriented Production Process Parameter Optimization Based on Predictive Models

Authors: Akshay Paranjape, Nils Plettenberg, Robert Schmitt

Abstract:

Producing faulty products can be costly for manufacturing companies and wastes resources. To reduce scrap rates in manufacturing, process parameters can be optimized using machine learning. Thus far, research has mainly focused on optimizing specific processes using traditional algorithms. To develop a framework that enables real-time optimization based on a predictive model for an arbitrary production process, this study explores the application of reinforcement learning (RL) in this field. Based on a thorough review of the literature on RL and process parameter optimization, a model based on maximum a posteriori policy optimization that can handle both numerical and categorical parameters is proposed. A case study compares the model to state-of-the-art traditional algorithms and shows that RL can find optima of similar quality while requiring significantly less time. These results are confirmed in a large-scale validation study on data sets from both production and other fields. Finally, multiple ways to improve the model are discussed.

Keywords: reinforcement learning, production process optimization, evolutionary algorithms, policy optimization, actor critic approach

Procedia PDF Downloads 98
996 Integrating Machine Learning and Rule-Based Decision Models for Enhanced B2B Sales Forecasting and Customer Prioritization

Authors: Wenqi Liu, Reginald Bailey

Abstract:

This study proposes a comprehensive and effective approach to business-to-business (B2B) sales forecasting by integrating advanced machine learning models with a rule-based decision-making framework. The methodology addresses the critical challenge of optimizing sales pipeline performance and improving conversion rates through predictive analytics and actionable insights. The first component involves developing a classification model to predict the likelihood of conversion, aiming to outperform traditional methods such as logistic regression in terms of accuracy, precision, recall, and F1 score. Feature importance analysis highlights key predictive factors, such as client revenue size and sales velocity, providing valuable insights into conversion dynamics. The second component focuses on forecasting sales value using a regression model, designed to achieve superior performance compared to linear regression by minimizing mean absolute error (MAE), mean squared error (MSE), and maximizing R-squared metrics. The regression analysis identifies primary drivers of sales value, further informing data-driven strategies. To bridge the gap between predictive modeling and actionable outcomes, a rule-based decision framework is introduced. This model categorizes leads into high, medium, and low priorities based on thresholds for conversion probability and predicted sales value. By combining classification and regression outputs, this framework enables sales teams to allocate resources effectively, focus on high-value opportunities, and streamline lead management processes. The integrated approach significantly enhances lead prioritization, increases conversion rates, and drives revenue generation, offering a robust solution to the declining pipeline conversion rates faced by many B2B organizations. Our findings demonstrate the practical benefits of blending machine learning with decision-making frameworks, providing a scalable, data-driven solution for strategic sales optimization. This study underscores the potential of predictive analytics to transform B2B sales operations, enabling more informed decision-making and improved organizational outcomes in competitive markets.
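
The rule-based layer that combines the two model outputs can be stated in a few lines; the 0.6/0.3 probability cut-offs and the $50k value cut-off below are invented thresholds for illustration:

```python
# Minimal sketch of the rule-based layer above, combining the two model
# outputs. The 0.6/0.3 probability cut-offs and the $50k value cut-off are
# invented thresholds for illustration.
from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    p_convert: float     # output of the classification model
    pred_value: float    # output of the regression model (USD)

def priority(lead: Lead) -> str:
    if lead.p_convert >= 0.6 and lead.pred_value >= 50_000:
        return "high"
    if lead.p_convert >= 0.3 or lead.pred_value >= 50_000:
        return "medium"
    return "low"

pipeline = [Lead("Acme", 0.72, 120_000), Lead("Globex", 0.41, 20_000),
            Lead("Initech", 0.18, 8_000)]
for lead in sorted(pipeline, key=lambda l: l.p_convert * l.pred_value, reverse=True):
    print(f"{lead.name:8s} -> {priority(lead)}")
```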

Keywords: machine learning, XGBoost, regression, decision making framework, system engineering

Procedia PDF Downloads 21
995 Identification of Candidate Congenital Heart Defects Biomarkers by Applying a Random Forest Approach on DNA Methylation Data

Authors: Kan Yu, Khui Hung Lee, Eben Afrifa-Yamoah, Jing Guo, Katrina Harrison, Jack Goldblatt, Nicholas Pachter, Jitian Xiao, Guicheng Brad Zhang

Abstract:

Background and Significance of the Study: Congenital heart defects (CHDs) are the most common malformations at birth and one of the leading causes of infant death. Although the exact etiology remains a significant challenge, epigenetic modifications, such as DNA methylation, are thought to contribute to the pathogenesis of congenital heart defects. At present, no existing DNA methylation biomarkers are used for early detection of CHDs. The existing CHD diagnostic techniques are time-consuming and costly and can only be used to diagnose CHDs after an infant is born. The present study employed a machine learning technique to analyse genome-wide methylation data in children with and without CHDs, with the aim of finding methylation biomarkers for CHDs. Methods: The Illumina Human Methylation EPIC BeadChip was used to screen the genome-wide DNA methylation profiles of 24 infants diagnosed with congenital heart defects and 24 healthy infants without congenital heart defects. Primary pre-processing was conducted using the RnBeads and limma packages. The methylation levels of the top 600 genes with the lowest p-values were selected and further investigated using a random forest approach. ROC curves were used to analyse the sensitivity and specificity of each biomarker in both the training and test sample sets. The functionalities of selected genes with high sensitivity and specificity were then assessed in molecular processes. Major Findings of the Study: Three genes (MIR663, FGF3, and FAM64A) were identified from both the training and validation data by random forests, with an average sensitivity and specificity of 85% and 95%. GO analyses of the top 600 genes showed that these putative differentially methylated genes were primarily associated with the regulation of lipid metabolic processes, protein-containing complex localization, and the Notch signalling pathway. The present findings highlight that aberrant DNA methylation may play a significant role in the pathogenesis of congenital heart defects.

Keywords: biomarker, congenital heart defects, DNA methylation, random forest

Procedia PDF Downloads 159
994 Inclusion of Students with Disabilities (SWD) in Higher Education Institutions (HEIs): Self-Advocacy and Engagement as Central

Authors: Tadesse Abera

Abstract:

This study aimed to investigate the contribution of self-advocacy and engagement to the inclusion of SWDs in HEIs. A convergent parallel mixed methods design was employed; this article reports the quantitative strand. A total of 246 SWDs were selected through a stratified proportionate random sampling technique from five public HEIs in Ethiopia. Data were collected through a self-advocacy questionnaire, a student engagement scale, and a college student experience questionnaire, and analyzed through frequencies, percentages, means, standard deviations, correlation, one-sample t-tests and multiple regression. Both self-advocacy and engagement were found to have predictive power on the inclusion of respondents in the HEIs, with engagement being the stronger predictor. Among the components of self-advocacy, knowledge of self and leadership, and among the engagement dimensions, sense of belonging, cognitive engagement, and valuing, in their respective orders, were found to have the strongest predictive power on the inclusion of respondents in the institutions. Based on the findings, it was concluded that if students with disabilities work hard to be self-determined, strive to realize social justice, exert quality effort and seek active involvement, their inclusion in the institutions will be ensured.

Keywords: self-advocacy, engagement, inclusion, students with disabilities, higher education institution

Procedia PDF Downloads 78
993 DeepNic, a Method to Transform Each Variable into an Image for Deep Learning

Authors: Nguyen J. M., Lucas G., Brunner M., Ruan S., Antonioli D.

Abstract:

Deep learning based on convolutional neural networks (CNNs) is a very powerful technique for classifying information from an image. We propose a new method, DeepNic, to transform each variable of a tabular dataset into an image where each pixel represents a set of conditions that allow the variable to make an error-free prediction. The contrast of each pixel is proportional to its prediction performance, and the color of each pixel corresponds to a sub-family of NICs. NICs are probabilities that depend on the number of inputs to each neuron and the range of coefficients of the inputs. Each variable can therefore be expressed as a function of a matrix of 2 vectors corresponding to an image whose pixels express predictive capabilities. Our objective is to transform each variable of tabular data into an image that can be analysed by CNNs, unlike other methods which use all the variables to construct an image. We analyse the NIC information of each variable and express it as a function of the number of neurons and the range of coefficients used. The predictive value and the category of the NIC are expressed by the contrast and the color of the pixel. We have developed a pipeline to implement this technology and have successfully applied it to genomic expressions on an Affymetrix chip.

Keywords: tabular data, deep learning, perfect trees, NICs

Procedia PDF Downloads 91
992 Implementing a Neural Network on a Low-Power and Mobile Cluster to Aid Drivers with Predictive AI for Traffic Behavior

Authors: Christopher Lama, Alix Rieser, Aleksandra Molchanova, Charles Thangaraj

Abstract:

New technologies like Tesla's Dojo have made high-performance embedded computing more available. Although automobile computing has developed and benefited enormously from these more recent technologies, the costs are still high, prohibitively so in some cases for broader adoption, particularly in the after-market and enthusiast markets. This project aims to implement a Raspberry Pi-based, low-power (under one hundred Watts), highly mobile computing cluster for a neural network. The computing cluster, built from off-the-shelf components, is more affordable and therefore makes wider adoption possible. The paper describes the design of the neural network, the Raspberry Pi-based cluster, and the applications the cluster will run. The neural network will use input data from sensors and cameras to project a live view of the road state as the user drives. The neural network will be trained to predict traffic behavior and generate warnings when potentially dangerous situations are predicted. The significant outcomes of this study will be twofold: first, to implement and test the low-cost cluster, and second, to ascertain the effectiveness of the predictive AI implemented on the cluster.

Keywords: CS pedagogy, student research, cluster computing, machine learning

Procedia PDF Downloads 103
991 Carbohydrate Intake Estimation in Type I Diabetic Patients Described by UVA/Padova Model

Authors: David A. Padilla, Rodolfo Villamizar

Abstract:

In recent years, closed-loop control strategies have been developed in order to establish a healthy glucose profile in type 1 diabetes mellitus (T1DM) patients. However, the controller itself is unable to define a suitable reference trajectory for glucose. In this paper, a control strategy is proposed in which the shape of the reference trajectory is generated based on the amount of carbohydrates present during the digestive process, reflecting the effect of carbohydrate intake. Since no sensor exists to measure the amount of carbohydrates consumed, an estimator is proposed. Thus, this paper presents the entire process of designing a carbohydrate estimator, which allows the disturbance to be estimated for a model predictive controller (MPC) in a T1DM patient; the estimate is used to establish a reference profile and improve the response of the controller by providing the estimated information on ingested carbohydrates. The dynamics of the diabetic model used are given by the equations of the UVA/Padova model from the T1DMS simulator; the system was developed and simulated in Simulink, taking into account the noise and limitations of the glucose control system actuators.

Keywords: estimation, glucose control, predictive controller, MPC, UVA/Padova

Procedia PDF Downloads 262
990 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network

Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We consider a database that records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. While models learned from these data to predict future traffic speeds would be beneficial for applications such as car navigation systems, building predictive models for every link becomes a nontrivial job if the number of links in a given network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building. Instead, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale behind this is that it may suffice to look only at recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, which are the current and past traffic speeds of the target link and the neighboring links up- and down-stream of it. The performances of these models are compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
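
The speed-up idea above, restricting the k-NN search to a recent window of the speed history, can be sketched as follows on synthetic five-minute speeds, with the last three readings of the target link as the query pattern:

```python
# Sketch of the recent-window speed-up: k-NN prediction of the next five-minute
# speed from the last three readings, searching either the whole history or
# only a recent window. Speeds are synthetic (a daily rush-hour dip plus noise).
import numpy as np

rng = np.random.default_rng(5)
daily = 60 - 25 * np.exp(-((np.arange(288) - 100) ** 2) / 200.0)  # 288 = 5-min slots/day
speeds = np.tile(daily, 60) + rng.normal(0, 2, 288 * 60)          # 60 days of history

def knn_predict(history, k=5, window=None):
    if window:
        history = history[-window:]            # search only the recent window
    W = np.lib.stride_tricks.sliding_window_view(history, 3)
    X, y = W[:-1], history[3:]                 # 3-step patterns and their successors
    q = history[-3:]                           # the current pattern
    idx = np.argsort(np.linalg.norm(X - q, axis=1))[:k]
    return y[idx].mean()

print(f"full search:    {knn_predict(speeds):.1f} km/h")
print(f"last week only: {knn_predict(speeds, window=288 * 7):.1f} km/h")
```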

Keywords: big data, k-NN, machine learning, traffic speed prediction

Procedia PDF Downloads 364
989 A Configurational Approach to Understand the Effect of Organizational Structure on Absorptive Capacity: Results from PLS and fsQCA

Authors: Murad Ali, Anderson Konan Seny Kan, Khalid A. Maimani

Abstract:

Based on the theory of organizational design and the theory of knowledge, this study uses complexity theory to explain and better understand the causal impacts of various patterns of organizational structural factors stimulating absorptive capacity (ACAP). Organizational structure can be thought of as heterogeneous configurations in which various components are often intertwined. This study argues that the impact of the traditional variables that define a firm's organizational structure (centralization, formalization, complexity and integration) on ACAP is better understood in terms of set-theoretic relations rather than correlations. The study uses a data sample of 347 from multiple industrial sectors in South Korea. The results from PLS-SEM support all the hypothesized relationships among the variables. However, the fsQCA results suggest possible configurations of centralization, formalization, complexity, integration, age, size, industry and revenue factors that contribute to a high level of ACAP. The results from fsQCA demonstrate the usefulness of configurational approaches in helping to understand equifinality in the field of knowledge management. A recent fsQCA procedure based on a modeling subsample and a holdout subsample is used in this study to assess the predictive validity of the model under investigation. The same type of predictive analysis is also performed through PLS-SEM. These analyses reveal a good relevance of the causal solutions leading to a high level of ACAP. Overall, the results obtained from combining PLS-SEM and fsQCA are very insightful. In particular, they could help managers link internal organizational structure with ACAP. In other words, managers may understand in fine detail how different components of organizational structure can increase the level of ACAP. The configurational approach may trigger new insights that could help managers prioritize selection criteria and understand the interactions between organizational structure and ACAP. The paper also discusses the theoretical and managerial implications arising from these findings.

Keywords: absorptive capacity, organizational structure, PLS-SEM, fsQCA, predictive analysis, modeling subsample, holdout subsample

Procedia PDF Downloads 332
988 Prediction of Terrorist Activities in Nigeria using Bayesian Neural Network with Heterogeneous Transfer Functions

Authors: Tayo P. Ogundunmade, Adedayo A. Adepoju

Abstract:

Terrorist attacks in liberal democracies bring about several negative results, for example, undermining public support for the governments they target, disturbing the peace of a protected environment underwritten by the state, and limiting individuals from contributing to the advancement of the country, among others. Hence, seeking techniques to understand the different factors involved in terrorism, and how to deal with those factors in order to completely stop or reduce terrorist activities, is the topmost priority of the government in every country. The aim of this research is to develop an efficient deep learning-based predictive model for the prediction of future terrorist activities in Nigeria, addressing the low prediction accuracy associated with existing solution methods. The proposed predictive AI-based model, as a counterterrorism tool, will be useful to governments and law enforcement agencies to protect the lives of individuals in society and to improve the quality of life in general. A Heterogeneous Bayesian Neural Network (HETBNN) model was derived with a Gaussian error normal distribution. Three primary transfer functions (HOTTFs), as well as two derived transfer functions (HETTFs) arising from the convolution of the HOTTFs, are used, namely: the symmetric saturated linear transfer function (SATLINS), the hyperbolic tangent transfer function (TANH), the hyperbolic tangent sigmoid transfer function (TANSIG), the symmetric saturated linear and hyperbolic tangent transfer function (SATLINS-TANH), and the symmetric saturated linear and hyperbolic tangent sigmoid transfer function (SATLINS-TANSIG). Data on terrorist activities in Nigeria, gathered through questionnaires for the purpose of this study, were used. Mean squared error (MSE), mean absolute error (MAE) and test error are the forecast prediction criteria. The results showed that the HETTFs performed better in terms of prediction, and the factors associated with terrorist activities in Nigeria were determined. The proposed predictive deep learning-based model will be useful to governments and law enforcement agencies as an effective counterterrorism mechanism to understand the parameters of terrorism and to design strategies to deal with terrorism before an incident actually happens and potentially causes the loss of precious lives. The proposed predictive AI-based model will reduce the chances of terrorist activities and is particularly helpful for security agencies to predict future terrorist activities.
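
For readers unfamiliar with the primary transfer functions named above, the usual MATLAB-style definitions are sketched below; how the paper convolves them into the hybrid HETTFs is not specified, so only the primaries are shown, and note that tansig, in its common sigmoid form, coincides numerically with tanh:

```python
# MATLAB-style definitions of the primary transfer functions named above; the
# paper's hybrid (convolved) HETTFs are not specified, so only the primaries
# are sketched.
import numpy as np

def satlins(x):
    """Symmetric saturating linear: -1 below -1, x in between, +1 above +1."""
    return np.clip(x, -1.0, 1.0)

def tanh_tf(x):
    """Hyperbolic tangent transfer function."""
    return np.tanh(x)

def tansig(x):
    """Hyperbolic tangent sigmoid: 2 / (1 + exp(-2x)) - 1."""
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

x = np.linspace(-3, 3, 7)
print(np.round(satlins(x), 3))
print(np.round(tansig(x) - tanh_tf(x), 12))   # ~0 everywhere: the two coincide
```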

Keywords: activation functions, Bayesian neural network, mean square error, test error, terrorism

Procedia PDF Downloads 166
987 The Predictive Role of Attachment and Adjustment in the Decision-Making Process in Infertility

Authors: A. Luli, A. Santona

Abstract:

It is rare for individuals in a relationship to think about the possibility of having procreation problems, in the present or in the future. However, infertility is a condition that affects millions of people all around the world. Often, infertile individuals have to deal with psychological, relational and social problems. In these cases, they have to review their choices and, if necessary, take new ones into consideration. Different studies have examined the various decisions that infertile individuals have to go through in dealing with infertility and its treatment, but none of them has focused on the decision-making style used by infertile individuals to solve their problem, or on the factors that influence it. The aim of this paper is to define the decision-making style used by infertile persons to give a solution to the 'problem', and the potential predictive role of attachment and of dyadic adjustment. The total sample is composed of 251 participants, divided into two groups: the experimental group, composed of 114 participants, 62 males and 52 females, aged between 25 and 59 years, and the control group, composed of 137 participants, 65 males and 72 females, aged between 22 and 49 years. The battery of instruments used comprises: the General Decision Making Style (GDMS), the Experiences in Close Relationships Questionnaire Revised (ECR-R), the Dyadic Adjustment Scale (DAS), and the Symptom Checklist-90-R (SCL-90-R). The results from the analysis of the samples showed a prevalence of the rational decision-making style for both males and females. No statistically significant difference was found between the experimental and control groups. The analyses also showed a statistically significant relationship between decision-making styles and adult attachment styles for both males and females; in this case, only for males was there a statistically significant difference between the experimental and the control group. Another statistically significant relationship was found between decision-making styles and the adjustment scales for both males and females. Also in this case, the difference between the two groups was found to be significant only for males. These results contribute to enriching the literature on decision-making styles in infertile individuals, and also show the predictive role of attachment styles and adjustment, confirming the few existing results in the literature.

Keywords: adjustment, attachment, decision-making style, infertility

Procedia PDF Downloads 333
986 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques

Authors: Soheila Sadeghi

Abstract:

In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success.

Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes

Procedia PDF Downloads 43
985 Evaluation of Feature Extraction Algorithms for a Real-Time Isolated Word Recognition System

Authors: Tomyslav Sledevič, Artūras Serackis, Gintautas Tamulevičius, Dalius Navakauskas

Abstract:

This paper presents a comparative evaluation of feature extraction algorithms for a real-time isolated word recognition system based on an FPGA. The Mel-frequency cepstral, linear frequency cepstral, linear predictive, and linear predictive cepstral coefficients were implemented in a hardware/software design. The proposed system was investigated in the speaker-dependent mode for 100 different Lithuanian words. The robustness of the feature extraction algorithms was tested by recognizing speech records at different signal-to-noise ratios. The experiments on clean records show the highest accuracy for the Mel-frequency cepstral and linear frequency cepstral coefficients. For records with a 15 dB signal-to-noise ratio, the linear predictive cepstral coefficients give the best result. The hardware and software parts of the system are clocked at 50 MHz and 100 MHz, respectively. For the classification purpose, a pipelined dynamic time warping core was implemented. The proposed word recognition system satisfies the real-time requirements and is suitable for applications in embedded systems.
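
The classification stage rests on dynamic time warping, which aligns a test feature sequence to each stored word template and scores the match by cumulative distance. A minimal software sketch (the paper implements a pipelined hardware core; the feature vectors below are random placeholders for MFCC/LPCC frames):

```python
# Minimal dynamic time warping (DTW) sketch of the classification stage:
# align a test feature sequence to a reference template and return the
# cumulative alignment distance.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic O(n*m) DTW over sequences of feature vectors."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(9)
template = rng.normal(size=(40, 13))     # e.g. 40 frames of 13 MFCCs
utterance = rng.normal(size=(46, 13))    # test word, slightly longer
print(f"DTW distance: {dtw_distance(utterance, template):.2f}")
# Recognition = pick the word template (out of 100) with minimum DTW distance.
```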

Keywords: isolated word recognition, features extraction, MFCC, LFCC, LPCC, LPC, FPGA, DTW

Procedia PDF Downloads 496
984 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare

Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl

Abstract:

Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
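
Concretely, with a Beta(a, b) posterior for the CI rate, the number of events x in the next n cases follows the beta-binomial posterior predictive distribution, and HPD-style chart limits are the smallest set of most-probable counts reaching the desired coverage. A sketch with illustrative posterior parameters:

```python
# Sketch of the beta-binomial posterior predictive (BBPP) pmf behind the chart:
# with a Beta(a, b) posterior for the CI rate, the event count x in the next n
# cases is beta-binomial. The prior and observed counts below are illustrative.
import numpy as np
from scipy.special import betaln, gammaln

def bbpp_pmf(x, n, a, b):
    """Beta-binomial pmf: C(n, x) * B(x + a, n - x + b) / B(a, b)."""
    log_c = gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)
    return np.exp(log_c + betaln(x + a, n - x + b) - betaln(a, b))

a, b = 1 + 18, 1 + 600 - 18          # Beta(1,1) prior + 18 events in 600 cases
n = 100                              # size of the next monitoring sample
x = np.arange(n + 1)
pmf = bbpp_pmf(x, n, a, b)

# HPD-style limits: smallest set of most-probable counts reaching 99% coverage
order = np.argsort(pmf)[::-1]
k = np.searchsorted(np.cumsum(pmf[order]), 0.99) + 1
hpd = np.sort(order[:k])
print(f"in-control event counts: {hpd.min()} .. {hpd.max()}")
```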

Keywords: average run length (ARL), Bernoulli CUSUM (BC) chart, beta-binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval

Procedia PDF Downloads 202