Search results for: prediction analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29166

28836 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy

Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu

Abstract:

Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using whole-grain spectra had moderate accuracy, and it has been suggested that improved predictions are achievable by using the spectra of whole grains compared with spectra collected from flour samples. However, the feasibility of determining the critical biochemical components related to the classification of grains for food, feed, and fuel products has not been adequately investigated. Objectives: To evaluate the feasibility of using NIRS and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours) on the prediction of chemical components, in order to improve grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine eight biochemical components in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 sorghum hybrids were selected from two locations in China. Using the NIR spectra together with the wet-chemically measured biochemical data, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data of grain flours generally improved the prediction compared with the use of NIR data of whole grains. Nevertheless, the spectra of whole grains still enabled comparable predictions and are recommended when a non-destructive and rapid analysis is required. Compared with the hulled grain flours, hull-less grain flours allowed improved predictions for tannin, cellulose, and hemicellulose using NIR data. Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate a large number of samples by predicting the required biochemical components in sorghum grains without destruction.
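
As an illustration only (the abstract does not publish code), the PLSR calibration step described above can be sketched with scikit-learn; the spectra and reference values below are hypothetical placeholders, not the study's data:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import r2_score

    # Hypothetical data: 40 samples, absorbance at 1,000 NIR wavelengths
    X = np.random.rand(40, 1000)      # NIR spectra (one row per sample)
    y = np.random.rand(40)            # wet-chemistry reference, e.g. tannin content

    pls = PLSRegression(n_components=10)         # latent variables: a tuning choice
    y_cv = cross_val_predict(pls, X, y, cv=5)    # cross-validated predictions
    print("R2 (cross-validation):", r2_score(y, y_cv))

    pls.fit(X, y)                     # final calibration on all samples
    y_new = pls.predict(X[:5])        # predict biochemicals for new spectra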

Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR

Procedia PDF Downloads 69
28835 Development of Terrorist Threat Prediction Model in Indonesia by Using Bayesian Network

Authors: Hilya Mudrika Arini, Nur Aini Masruroh, Budi Hartono

Abstract:

There were more than 20 terrorist threats in Indonesia between 2002 and 2012. Despite this fact, no comprehensive preventive study in the field of national security has been conducted in Indonesia. This study aims to provide a preventive solution by developing a prediction model of terrorist threats in Indonesia using a Bayesian network. The model is built in eight stages, starting from a literature review and proceeding through building and verifying the Bayesian belief network up to what-if scenario analysis. Four experts from different perspectives were consulted to build the model. The study yields several significant findings. First, the news and the readiness of the terrorist group are the most influential factors. Second, according to several scenarios on the proportion of news coverage, it can be concluded that the higher the proportion of positive news, the higher the probability that a terrorist threat will occur. Therefore, based on the model, the preventive solution for reducing terrorist threats in Indonesia is to keep the proportion of positive news to a maximum of 38%.
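
As a purely illustrative sketch (not the authors' four-expert network), the what-if reasoning over a news variable can be expressed with a toy two-node Bayesian calculation using made-up conditional probabilities:

    # Toy two-node network: NewsPositive -> Threat, with hypothetical CPT entries.
    # P(Threat) is obtained by marginalising over the news node.
    def p_threat(p_positive_news,
                 p_threat_given_positive=0.7,    # assumed value
                 p_threat_given_negative=0.3):   # assumed value
        return (p_positive_news * p_threat_given_positive +
                (1.0 - p_positive_news) * p_threat_given_negative)

    # What-if scenarios over the positive-news proportion
    for share in (0.2, 0.38, 0.6, 0.8):
        print(f"positive news = {share:.2f} -> P(threat) = {p_threat(share):.2f}")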

Keywords: Bayesian network, decision analysis, national security system, text mining

Procedia PDF Downloads 392
28834 Prediction of the Performance of a Bar-Type Piezoelectric Vibration Actuator Depending on the Frequency Using an Equivalent Circuit Analysis

Authors: J. H. Kim, J. H. Kwon, J. S. Park, K. J. Lim

Abstract:

This paper investigates a technique that predicts the frequency-dependent performance of a bar-type unimorph piezoelectric vibration actuator. An equivalent circuit that can be easily analyzed is proposed for the bar-type unimorph piezoelectric vibration actuator. In the dynamic analysis, the rigidity and the resonance frequency, which are important mechanical elements, were derived using basic beam theory. In the equivalent circuit analysis, the displacement and bandwidth of the piezoelectric vibration actuator as functions of frequency were predicted. In addition, to check the reliability of the derived equations, the predicted performance for different shapes was compared with the results of a finite element analysis program.
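
By way of illustration, a frequency sweep of a piezoelectric element is commonly computed from a Butterworth-Van Dyke style equivalent circuit (a motional R-L-C branch in parallel with a clamped capacitance). The sketch below uses assumed component values, not those of the actuator studied here:

    import numpy as np

    # Assumed equivalent-circuit values (Butterworth-Van Dyke form)
    R, L, C = 50.0, 20e-3, 2e-9      # motional resistance, inductance, capacitance
    C0 = 10e-9                       # clamped (parallel) capacitance

    f = np.linspace(10e3, 60e3, 2000)                 # frequency sweep [Hz]
    w = 2 * np.pi * f
    z_motional = R + 1j * w * L + 1 / (1j * w * C)    # series R-L-C branch
    z_c0 = 1 / (1j * w * C0)                          # clamped capacitance branch
    z_total = 1 / (1 / z_motional + 1 / z_c0)         # parallel combination

    f_res = f[np.argmin(np.abs(z_total))]             # resonance: impedance minimum
    f_antires = f[np.argmax(np.abs(z_total))]         # anti-resonance: impedance maximum
    print(f"resonance ~ {f_res/1e3:.1f} kHz, anti-resonance ~ {f_antires/1e3:.1f} kHz")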

Keywords: actuator, piezoelectric, performance, unimorph

Procedia PDF Downloads 464
28833 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury; it is associated with a three-fold risk of poor outcome and is more amenable to corrective interventions following early identification and management. Multiple definitions for stratifying patients' risk of early acute coagulopathy have been proposed, with considerable variation in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison with recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was performed to establish cut-offs for the conventional coagulation assays for identifying patients with acute traumatic coagulopathy. Prospective data of 100 adult trauma patients were then collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted; 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19, prothrombin time ≥ 15.5 s, and activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition. The overall Prediction of Acute Coagulopathy of Trauma score was 118.7±58.5 and the Trauma-Induced Coagulopathy Clinical Score was 3 (0-8). Both scores were higher in coagulopathic than in non-coagulopathic patients (Prediction of Acute Coagulopathy of Trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; Trauma-Induced Coagulopathy Clinical Score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but the differences were not statistically significant. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than in non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high Prediction of Acute Coagulopathy of Trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the Trauma-Induced Coagulopathy Clinical Score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison with the prehospital parameter-based scoring systems. The Prediction of Acute Coagulopathy of Trauma score may be more suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas conventional coagulation tests give highly specific results.
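
For illustration, the cut-off derivation described above (ROC analysis on a conventional coagulation assay) can be sketched with scikit-learn; the INR values and outcome labels below are hypothetical:

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Hypothetical data: INR values and coagulopathy labels (1 = coagulopathic)
    inr = np.array([1.05, 1.10, 1.22, 1.35, 1.15, 1.40, 1.08, 1.30, 1.25, 1.12])
    coagulopathy = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])

    fpr, tpr, thresholds = roc_curve(coagulopathy, inr)
    youden_j = tpr - fpr                            # Youden index at each threshold
    best_cutoff = thresholds[np.argmax(youden_j)]   # maximises sensitivity + specificity - 1

    print("AUC:", roc_auc_score(coagulopathy, inr))
    print("INR cut-off:", best_cutoff)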

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 176
28832 Deformation Severity Prediction in Sewer Pipelines

Authors: Khalid Kaddoura, Ahmed Assad, Tarek Zayed

Abstract:

Sewer pipelines are prone to deterioration over time, and their deterioration does not follow a fixed downward pattern because of the defects that propagate through their service life. Sewer pipeline defects are categorized into distinct groups, the two main ones being structural and operational defects. By definition, structural defects influence the structural integrity of the sewer pipelines and include deformation, cracks, fractures, holes, etc., whereas operational defects affect the flow of the sewer medium in the pipelines and include roots, debris, attached deposits, infiltration, etc. The process by which each defect emerges follows a cause-and-effect relationship. Deformation, the change of the sewer pipeline geometry, is one influencing defect found in many sewer pipelines due to many surrounding factors, and it could lead to collapse if the deformation percentage exceeds 15%. It is therefore essential to predict the deformation percentage before confronting such a situation. Accordingly, this study predicts the percentage of the deformation defect in sewer pipelines by adopting multiple regression analysis. Several factors expected to influence the severity of the deformation defect are considered in establishing the model. In addition, the study constructs a time-based curve to understand how the defect evolves over time. The study is thus expected to be an asset for decision-makers, as it provides informative conclusions about deformation defect severity; as a result, inspections, and hence budgets, can be minimized.
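
As a sketch only (the factors named below are hypothetical placeholders, not the study's final variables), the multiple regression step can be set up in Python as:

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical records: pipe age [years], diameter [mm], burial depth [m], soil index
    X = np.array([[12, 300, 2.0, 0.4],
                  [25, 450, 3.5, 0.7],
                  [40, 300, 1.5, 0.9],
                  [8,  600, 4.0, 0.2],
                  [33, 450, 2.5, 0.6],
                  [18, 600, 2.0, 0.3]])
    deformation_pct = np.array([3.0, 8.5, 14.0, 1.5, 10.0, 4.0])   # observed deformation [%]

    model = sm.OLS(deformation_pct, sm.add_constant(X)).fit()      # multiple linear regression
    print(model.params)                                            # fitted coefficients

    new_pipe = sm.add_constant(np.array([[20, 450, 3.0, 0.5]]), has_constant='add')
    print("predicted deformation [%]:", model.predict(new_pipe))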

Keywords: deformation, prediction, regression analysis, sewer pipelines

Procedia PDF Downloads 189
28831 Multiclass Support Vector Machines with Simultaneous Multi-Factors Optimization for Corporate Credit Ratings

Authors: Hyunchul Ahn, William X. S. Wong

Abstract:

Corporate credit rating prediction is one of the most important topics studied by researchers in the last decade. Over that period, researchers have pushed the limits of the accuracy of corporate credit rating prediction models by applying several data-driven tools, including statistical and artificial intelligence methods. Among them, the multiclass support vector machine (MSVM) has been widely applied due to its good predictability. However, the reliance on heuristics, for example for choosing the kernel function parameters and the appropriate feature and instance subsets, has become the main criticism of MSVM, as these choices dictate the MSVM architectural variables. This study presents a hybrid MSVM model intended to optimize all of these design factors, namely feature selection, instance selection, and the kernel parameters. Our model adopts a genetic algorithm (GA) to simultaneously optimize the multiple heterogeneous design factors of MSVM.

Keywords: corporate credit rating prediction, feature selection, genetic algorithms, instance selection, multiclass support vector machines

Procedia PDF Downloads 294
28830 Reliability-Simulation of Composite Tubular Structure under Pressure by Finite Elements Methods

Authors: Abdelkader Hocine, Abdelhakim Maizia

Abstract:

The exponential growth in the use of fiber-reinforced composite materials has prompted researchers to step up their work on the prediction of their reliability. Owing to differences between the properties of the constituent materials, the manufacturing processes, the load combinations, and the types of environment, predicting the reliability of composite materials has become a primary task. Using the Tsai-Wu and maximum stress failure criteria, the reliability of multilayer tubular structures under pressure is the subject of this paper, where the failure probability is estimated by the Monte Carlo method.
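
A minimal illustration of Monte Carlo estimation of a failure probability under a strength criterion, with made-up strength statistics and a simple maximum stress check (not the layered tube model of the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000                                  # Monte Carlo samples

    # Hypothetical ply strengths [MPa], normally distributed (mean, std)
    Xt = rng.normal(1500, 100, n)   # longitudinal tensile strength
    Yt = rng.normal(40, 4, n)       # transverse tensile strength
    S = rng.normal(70, 5, n)        # in-plane shear strength

    # Assumed ply stresses induced by the internal pressure [MPa]
    s1, s2, t12 = 1300.0, 32.0, 55.0

    # Maximum stress criterion: failure when any stress exceeds its strength
    failed = (s1 > Xt) | (s2 > Yt) | (np.abs(t12) > S)
    print("estimated failure probability:", failed.mean())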

Keywords: composite, design, Monte Carlo, tubular structure, reliability

Procedia PDF Downloads 464
28829 Analysis of Ferroresonant Overvoltages in Cable-fed Transformers

Authors: George Eduful, Ebenezer A. Jackson, Kingsford A. Atanga

Abstract:

This paper investigates the impact of cable length and transformer capacity on ferroresonant overvoltages in cable-fed transformers. The study was conducted by simulation using EMTP RV. Results show that ferroresonance can cause dangerous overvoltages ranging from 2 to 5 per unit. These overvoltages impose stress on the insulation of transformers and cables and can subsequently result in system failures. By undertaking Basic Multiple Regression Analysis (BMR) on the results obtained, a statistical model was derived in terms of cable length and transformer capacity. The model is useful for ferroresonance prediction and control in cable-fed transformers.

Keywords: ferroresonance, cable-fed transformers, EMTP RV, regression analysis

Procedia PDF Downloads 533
28828 An Interpretable Data-Driven Approach for the Stratification of the Cardiorespiratory Fitness

Authors: D. Mendes, J. Henriques, P. Carvalho, T. Rocha, S. Paredes, R. Cabiddu, R. Trimer, R. Mendes, A. Borghi-Silva, L. Kaminsky, E. Ashley, R. Arena, J. Myers

Abstract:

The exploration of clinically relevant predictive models continues to be an important pursuit. Cardiorespiratory fitness (CRF) carries vital clinical information, and its accurate prediction is therefore of high importance. The aim of the current study was to develop a data-driven model, based on computational intelligence techniques and, in particular, clustering approaches, to predict CRF. Two prediction models were implemented and compared: 1) the traditional Wasserman/Hansen equations; and 2) an interpretable clustering approach. Data used for this analysis were from the 'FRIEND - Fitness Registry and the Importance of Exercise: The National Data Base'; in the present study, a subset of 10,690 apparently healthy individuals was utilized. The accuracy of the models was assessed through the computation of sensitivity, specificity, and geometric mean values. The results show the superiority of the clustering approach in the accurate estimation of CRF (i.e., maximal oxygen consumption).
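
As an illustration of the clustering idea only (not the authors' FRIEND-registry model), a k-means stratification followed by a per-cluster CRF estimate might look like the following, with hypothetical subject attributes:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    # Hypothetical subjects: age [years], weight [kg], height [cm], sex (0/1)
    X = np.column_stack([rng.uniform(20, 75, 500),
                         rng.uniform(50, 110, 500),
                         rng.uniform(150, 195, 500),
                         rng.integers(0, 2, 500)])
    vo2max = rng.uniform(15, 55, 500)            # measured CRF [ml/kg/min]

    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
    labels = kmeans.labels_

    # Interpretable rule: predict the mean CRF of the cluster a new subject falls into
    cluster_means = np.array([vo2max[labels == k].mean() for k in range(5)])
    new_subject = np.array([[45, 80, 175, 1]])
    print("predicted CRF:", cluster_means[kmeans.predict(new_subject)[0]])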

Keywords: cardiorespiratory fitness, data-driven models, knowledge extraction, machine learning

Procedia PDF Downloads 286
28827 Model Averaging in a Multiplicative Heteroscedastic Model

Authors: Alan Wan

Abstract:

In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.

Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk

Procedia PDF Downloads 385
28826 Thermal and Starvation Effects on Lubricated Elliptical Contacts at High Rolling/Sliding Speeds

Authors: Vinod Kumar, Surjit Angra

Abstract:

The objective of this theoretical study is to develop simple design formulas for the prediction of minimum film thickness and maximum mean film temperature rise in lightly loaded high-speed rolling/sliding lubricated elliptical contacts incorporating starvation effect. Herein, the reported numerical analysis focuses on thermoelastohydrodynamically lubricated rolling/sliding elliptical contacts, considering the Newtonian rheology of lubricant for wide range of operating parameters, namely load characterized by Hertzian pressure (PH = 0.01 GPa to 0.10 GPa), rolling speed (>10 m/s), slip parameter (S varies up to 1.0), and ellipticity ratio (k = 1 to 5). Starvation is simulated by systematically reducing the inlet supply. This analysis reveals that influences of load, rolling speed, and level of starvation are significant on the minimum film thickness. However, the maximum mean film temperature rise is strongly influenced by slip in addition to load, rolling speed, and level of starvation. In the presence of starvation, reduction in minimum film thickness and increase in maximum mean film temperature are observed. Based on the results of this study, empirical relations are developed for the prediction of dimensionless minimum film thickness and dimensionless maximum mean film temperature rise at the contacts in terms of various operating parameters.

Keywords: starvation, lubrication, elliptical contact, traction, minimum film thickness

Procedia PDF Downloads 392
28825 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no less than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population has increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend in the homeless population is crucial for helping states and cities make affordable housing plans and other community service plans ahead of time to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on the homeless population and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and Recurrent Neural Network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned on the dataset from New York City, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the Coefficient of Determination (R2) from -11.73 to 0.88 and reducing MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak error of 14.5% between the actual and the predicted count. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and the predicted homeless population, with a peak error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model the time series of homeless-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Moreover, this prediction can serve as a reference to policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
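
HP-RNN itself is not published here; as a generic sketch under stated assumptions, fitting a small recurrent network to a monthly count series with PyTorch (hypothetical data, 12-month input window) could look like:

    import torch
    import torch.nn as nn

    # Hypothetical monthly sheltered-homeless counts, scaled to [0, 1]
    series = torch.rand(120)                       # 10 years of monthly values
    window = 12                                    # use the previous 12 months as input
    X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]

    class CountRNN(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)
        def forward(self, x):                      # x: (batch, window)
            out, _ = self.lstm(x.unsqueeze(-1))    # (batch, window, hidden)
            return self.head(out[:, -1, :]).squeeze(-1)

    model = CountRNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(200):                       # full-batch training loop
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    print("final MSE:", loss.item())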

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 121
28824 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis

Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab

Abstract:

Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodology: 2,323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, and because of differences in walking time and foot size, we normalized the samples by time and foot size. Selected force plate variables were used as input to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (classification of yes or no). We compared the DNN and the SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach is more accurate for prediction, enabling applications for foot disorder diagnosis. The detection accuracy was 71% for the deep learning algorithm and 78% for the SVM algorithm. Moreover, models based on the peak plantar pressure distribution were more accurate than those based on the center of pressure dataset. Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed.

Keywords: deep neural network, foot disorder, plantar pressure, support vector machine

Procedia PDF Downloads 358
28823 Drug-Drug Interaction Prediction in Diabetes Mellitus

Authors: Rashini Maduka, C. R. Wijesinghe, A. R. Weerasinghe

Abstract:

Drug-drug interactions (DDIs) can happen when two or more drugs are taken together. Today, DDIs have become a serious health issue due to adverse drug effects. In vivo and in vitro methods for identifying DDIs are time-consuming and costly; therefore, in-silico approaches are preferred in DDI identification. Most machine learning models for DDI prediction use chemical and biological drug properties as features. However, some drug features are not available and are costly to extract, so automatic feature engineering is preferable. Furthermore, people who have diabetes often suffer from other diseases and take more than one medicine together; adverse drug effects may then occur in diabetic patients and cause unpleasant reactions in the body. In this study, we present a model with a graph convolutional autoencoder and a graph decoder using a dataset from DrugBank version 5.1.3. The main objective of the model is to identify unknown interactions between antidiabetic drugs and the drugs taken by diabetic patients for other diseases. We considered automatic feature engineering and used known DDIs only as the input for the model. Our model achieved 0.86 in AUC and 0.86 in AP.

Keywords: drug-drug interaction prediction, graph embedding, graph convolutional networks, adverse drug effects

Procedia PDF Downloads 100
28822 Prediction of Compressive Strength Using Artificial Neural Network

Authors: Vijay Pal Singh, Yogesh Chandra Kotiyal

Abstract:

Structures are a combination of various load-carrying members which transfer the loads from the superstructure to the foundation safely. At the design stage, the loading of the structure is defined and appropriate material choices are made based on their properties, mainly related to strength. The strength of materials keeps reducing with time because of many factors, such as environmental exposure and deformation caused by unpredictable external loads. Hence, various techniques are used to predict the strength of materials used in structures. Among these techniques, Non-Destructive Techniques (NDT) are the ones that can predict the strength without damaging the structure. In the present study, the compressive strength of concrete has been predicted using an Artificial Neural Network (ANN). The predicted strength was compared with the experimentally obtained compressive strength of concrete, and equations were developed for different models. A good correlation was obtained between the strength predicted by these models and the experimental values. Further, correlations were developed by regression analysis using two NDT techniques for the prediction of strength. It was found that the percentage error between predicted and actual strength was reduced when the two techniques were combined rather than used individually.

Keywords: rebound, ultra-sonic pulse, penetration, ANN, NDT, regression

Procedia PDF Downloads 428
28821 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. The results showed promising accuracy rates for predicting diseases using symptoms, with the ensemble learning techniques significantly improving the accuracy of disease prediction. The study's findings indicate that the use of machine learning algorithms can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 155
28820 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal water level caused by a storm. Accurate prediction of a storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques to combine several individual forecasts to produce an overall, presumably better, forecast. There exist some simple ensemble modeling techniques in the literature. For instance, Model Output Statistics (MOS) and running mean-bias removal are widely used techniques in the storm surge prediction domain. However, these methods have some drawbacks. For instance, MOS is based on multiple linear regression and it needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed some advanced methods. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting. This application creates a better forecast of sea level using a combination of several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether the ensemble models perform better than any single forecast. Therefore, we need to identify the single best forecast. We present a methodology based on a simple Bayesian selection method to select the best single forecast. Second, we present several new and simple ways to construct ensemble models. We use correlation and standard deviation as weights in combining different forecast models. Third, we use these ensembles and compare them with several existing models in the literature to forecast storm surge level. We then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use a simple average (one of the simplest and most widely used ensemble models) as a benchmark. Predicting the peak level of surge during a storm, as well as the precise time at which this peak level takes place, is crucial; thus, we develop a statistical platform to compare the performance of various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual time and peak. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them as a single contiguous hurricane event. The data set used for this study was generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally have better performance compared to the simple average ensemble technique.
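
A small illustration of the correlation-based and standard-deviation-based weighting described above, with made-up member forecasts rather than the NYHOPS data:

    import numpy as np

    rng = np.random.default_rng(2)
    observed = rng.normal(1.0, 0.4, 200)                       # past surge levels [m]
    # Three hypothetical member models: observed signal plus different error levels
    members = np.stack([observed + rng.normal(0, s, 200) for s in (0.10, 0.25, 0.40)])

    # Weight each member by its correlation with observations (normalised to sum to 1)
    corr_w = np.array([np.corrcoef(m, observed)[0, 1] for m in members])
    corr_w /= corr_w.sum()

    # Alternative: weight by inverse error standard deviation
    std_w = 1.0 / np.array([np.std(m - observed) for m in members])
    std_w /= std_w.sum()

    simple_avg = members.mean(axis=0)
    corr_ensemble = corr_w @ members
    print("RMSE simple average:", np.sqrt(np.mean((simple_avg - observed) ** 2)))
    print("RMSE correlation-weighted:", np.sqrt(np.mean((corr_ensemble - observed) ** 2)))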

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 309
28819 Quantitative Structure-Property Relationship Study of Base Dissociation Constants of Some Benzimidazoles

Authors: Sanja O. Podunavac-Kuzmanović, Lidija R. Jevrić, Strahinja Z. Kovačević

Abstract:

Benzimidazoles are a group of compounds with significant antibacterial, antifungal and anticancer activity. The studied compounds consist of the main benzimidazole structure with different combinations of substituents. This study is based on the two-dimensional and three-dimensional molecular modeling and calculation of molecular descriptors (physicochemical and lipophilicity descriptors) of structurally diverse benzimidazoles. Molecular modeling was carried out by using ChemBio3D Ultra version 14.0 software. The obtained 3D models were subjected to energy minimization using the molecular mechanics force field method (MM2). The cutoff for structure optimization was set at a gradient of 0.1 kcal/Åmol. The obtained set of molecular descriptors was used in principal component analysis (PCA) of possible similarities and dissimilarities among the studied derivatives. After the molecular modeling, quantitative structure-property relationship (QSPR) analysis was applied in order to obtain the mathematical models which can be used in the prediction of pKb values of structurally similar benzimidazoles. The obtained models are based on statistically valid multiple linear regression (MLR) equations. The calculated cross-validation parameters indicate the high prediction ability of the established QSPR models. This study is financially supported by COST action CM1306 and the project No. 114-451-347/2015-02, financially supported by the Provincial Secretariat for Science and Technological Development of Vojvodina.

Keywords: benzimidazoles, chemometrics, molecular modeling, molecular descriptors, QSPR

Procedia PDF Downloads 289
28818 Electromagnetic Assessment of Submarine Power Cable Degradation Using Finite Element Method and Sensitivity Analysis

Authors: N. Boutra, N. Ravot, J. Benoit, O. Picon

Abstract:

Submarine power cables used for electric energy distribution and transmission from offshore wind farms are subject to numerous threats. Some of the risks are associated with transport, installation, and operation in the harsh marine environment. This paper describes the feasibility of a low-frequency electromagnetic sensing technique for submarine power cable failure prediction. The impact of the shape of a structural damage and of material variability on the induced electric field is evaluated. The analysis is performed by modeling the cable using the finite element method, and sensitivity analysis is used to identify the main damage characteristics affecting the electric field variation. Lastly, we discuss the results obtained.

Keywords: electromagnetism, finite element method, sensitivity analysis, submarine power cables

Procedia PDF Downloads 356
28817 A Prediction Model Using the Price Cyclicality Function Optimized for Algorithmic Trading in Financial Market

Authors: Cristian Păuna

Abstract:

After the widespread adoption of electronic trading, automated trading systems have become a significant part of the business intelligence systems of modern financial investment companies. An important share of trades is now made completely automatically by computers using mathematical algorithms: trading decisions are taken almost instantly by logical models and the orders are sent by low-latency automatic systems. This paper presents a real-time price prediction methodology designed especially for algorithmic trading. Based on the price cyclicality function, the methodology generates price cyclicality bands to predict the optimal levels for entries and exits. In order to automate the trading decisions, the cyclicality bands generate automated trading signals. We have found that the model can be used with good results to predict changes in market behavior. Using these predictions, the model can automatically adapt the trading signals in real time to maximize the trading results. The paper reveals the methodology to optimize and implement this model in automated trading systems. Tests prove that this methodology can be applied with good efficiency in different timeframes. Real trading results are also displayed and analyzed in order to qualify the methodology and to compare it with other models. In conclusion, the price prediction model using the price cyclicality function was found to be a reliable trading methodology for algorithmic trading in the financial market.

Keywords: algorithmic trading, automated trading systems, financial markets, high-frequency trading, price prediction

Procedia PDF Downloads 184
28816 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications to predicting the full set of latencies among the various locations in the freeway system.
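
A minimal sketch of the XGBoost regression step described above, with hypothetical feature and target arrays standing in for the real inputs (lagged median latencies, accumulations, fluxes, and weekday profiles):

    import numpy as np
    from xgboost import XGBRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(3)
    # Hypothetical features: lagged median latencies, total accumulation, weekday profile value
    X = rng.random((5000, 8))
    y = 10 + 30 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 1, 5000)   # 5-min median latency [s]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
    model.fit(X_tr, y_tr)

    print("test MSE:", mean_squared_error(y_te, model.predict(X_te)))
    print("feature importance scores:", model.feature_importances_)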

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 169
28815 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death

Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar

Abstract:

In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease is not confined to any geographic, gender, or socioeconomic group, detecting cardiac irregularities at an early stage followed by quick and correct treatment is very important. The electrocardiogram is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) is used to measure the naturally occurring oscillations between consecutive cardiac cycles. Analysis of this variability is carried out using time domain, frequency domain, and non-linear parameters. This paper presents an HRV analysis of online datasets for normal sinus rhythm (taken as healthy subjects) and sudden cardiac death (SCD subjects) using all three methods, computing values for parameters such as the standard deviation of node-to-node intervals (SDNN), the square root of the mean of the squared differences between adjacent RR intervals (RMSSD), and the mean of R-to-R intervals (mean RR) in the time domain; very low frequency (VLF), low frequency (LF), high frequency (HF), and the ratio of low to high frequency (LF/HF ratio) in the frequency domain; and the Poincare plot for the non-linear analysis. To differentiate the HRV of healthy subjects from that of subjects who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values for all stated parameters in SCD subjects compared to healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it is verified with an accuracy of 95% that the proposed algorithm can identify the mortality risk of a patient one hour before death. Identifying a patient's mortality risk at such an early stage may prevent sudden death if timely and correct treatment is given by the doctor.
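
The time-domain HRV measures named above follow directly from the RR-interval series; an illustrative computation with a short, hypothetical interval series:

    import numpy as np

    # Hypothetical RR intervals in milliseconds (normally extracted from the ECG R peaks)
    rr = np.array([812, 798, 825, 840, 810, 795, 830, 820, 805, 815], dtype=float)

    mean_rr = rr.mean()                               # mean of R-to-R intervals [ms]
    sdnn = rr.std(ddof=1)                             # standard deviation of NN intervals [ms]
    diff = np.diff(rr)
    rmssd = np.sqrt(np.mean(diff ** 2))               # RMS of successive differences [ms]

    print(f"mean RR = {mean_rr:.1f} ms, SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")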

Keywords: early stage prediction, heart rate variability, linear and non-linear analysis, sudden cardiac death

Procedia PDF Downloads 342
28814 Prediction of the Regioselectivity of 1,3-Dipolar Cycloaddition Reactions of Nitrile Oxides with 2(5H)-Furanones Using Recent Theoretical Reactivity Indices

Authors: Imad Eddine Charif, Wafaa Benchouk, Sidi Mohamed Mekelleche

Abstract:

The regioselectivity of a series of 16 1,3-dipolar cycloaddition reactions of nitrile oxides with 2(5H)-furanones has been analysed by means of global and local electrophilic and nucleophilic reactivity indices using density functional theory at the B3LYP level together with the 6-31G(d) basis set. The local electrophilicity and nucleophilicity indices, based on Fukui and Parr functions, have been calculated for the terminal sites, namely the C1 and O3 atoms of the 1,3-dipole and the C4 and C5 atoms of the dipolarophile. These local indices were calculated using both Mulliken and natural charges and spin densities. The results obtained show that the C5 atom of the 2(5H)-furanones is the most electrophilic site whereas the O3 atom of the nitrile oxides is the most nucleophilic centre. It turns out that the experimental regioselectivity is correctly reproduced, indicating that both Fukui- and Parr-based indices are efficient tools for the prediction of the regiochemistry of the studied reactions and could be used for the prediction of newly designed reactions of the same kind.

Keywords: 1,3-dipolar cycloaddition, density functional theory, nitrile oxides, regioselectivity, reactivity indices

Procedia PDF Downloads 166
28813 Machine Learning Approaches to Water Usage Prediction in Kocaeli: A Comparative Study

Authors: Kasim Görenekli, Ali Gülbağ

Abstract:

This study presents a comprehensive analysis of water consumption patterns in Kocaeli province, Turkey, utilizing various machine learning approaches. We analyzed data from 5,000 water subscribers across residential, commercial, and official categories over an 80-month period from January 2016 to August 2022, resulting in a total of 400,000 records. The dataset encompasses water consumption records, weather information, weekends and holidays, previous months' consumption, and the influence of the COVID-19 pandemic. We implemented and compared several machine learning models, including Linear Regression, Random Forest, Support Vector Regression (SVR), XGBoost, Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). Particle Swarm Optimization (PSO) was applied to optimize hyperparameters for all models. Our results demonstrate varying performance across subscriber types and models. For official subscribers, Random Forest achieved the highest R² of 0.699 with PSO optimization. For commercial subscribers, Linear Regression performed best with an R² of 0.730 with PSO. Residential water usage proved more challenging to predict, with XGBoost achieving the highest R² of 0.572 with PSO. The study identified key factors influencing water consumption, with previous months' consumption, meter diameter, and weather conditions being among the most significant predictors. The impact of the COVID-19 pandemic on consumption patterns was also observed, particularly in residential usage. This research provides valuable insights for effective water resource management in Kocaeli and similar regions, considering Turkey's high water loss rate and below-average per capita water supply. The comparative analysis of different machine learning approaches offers a comprehensive framework for selecting appropriate models for water consumption prediction in urban settings.

Keywords: machine learning, water consumption prediction, particle swarm optimization, COVID-19, water resource management

Procedia PDF Downloads 16
28812 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection

Authors: Muhammad Ali

Abstract:

Cyber-attacks and anomaly detection on Internet of Things (IoT) infrastructure are an emerging concern in the domain of data-driven intrusion. The rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data type probing, malicious operation, DDoS, scan, spying, and wrong setup are attacks and anomalies that can cause an IoT system failure. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction. IoT devices expose a wide variety of new cyber security attack vectors in network traffic. For further IoT development, and mainly for smart and IoT applications, there is a necessity for intelligent processing and analysis of data, and securing these systems is the aim of our approach. We train and compare several machine learning models for accurately predicting attacks and anomalies on IoT systems, considering IoT applications and using ANOVA-based feature selection with fewer predictors to evaluate network traffic and help protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, D.T., and R.F., evaluated for satisfactory test accuracy and fast detection. The evaluation of ML metrics includes precision, recall, F1 score, FPR, NPV, G.M., MCC, and AUC & ROC. The Random Forest algorithm achieved the best results, with less prediction time and an accuracy of 99.98%.
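
A small sketch of the ANOVA-based feature selection step feeding a Random Forest classifier, with a synthetic placeholder dataset in place of the IoT traffic records:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Placeholder traffic dataset: 30 features, binary label (attack vs. normal)
    X, y = make_classification(n_samples=5000, n_features=30, n_informative=8, random_state=0)

    # Keep the 10 features with the highest ANOVA F-statistic
    selector = SelectKBest(score_func=f_classif, k=10)
    X_sel = selector.fit_transform(X, y)

    X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))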

Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection

Procedia PDF Downloads 125
28811 Clinical Prediction Score for Ruptured Appendicitis In ED

Authors: Thidathit Prachanukool, Chaiyaporn Yuksen, Welawat Tienpratarn, Sorravit Savatmongkorngul, Panvilai Tangkulpanich, Chetsadakon Jenpanitpong, Yuranan Phootothum, Malivan Phontabtim, Promphet Nuanprom

Abstract:

Background: Ruptured appendicitis has high morbidity and mortality and requires immediate surgery. The Alvarado Score is used as a tool to predict the risk of acute appendicitis, but there is no such score for predicting rupture. This study aimed to develop a prediction score to determine the likelihood of ruptured appendicitis in an Asian population. Methods: This was a diagnostic, retrospective, cross-sectional, exploratory model study at the Emergency Medicine Department of Ramathibodi Hospital between March 2016 and March 2018. The inclusion criteria were age > 15 years and an available pathology report after appendectomy. Clinical factors included gender, age > 60 years, right lower quadrant pain, migratory pain, nausea and/or vomiting, diarrhea, anorexia, fever > 37.3°C, rebound tenderness, guarding, white blood cell count, polymorphonuclear white blood cells (PMN) > 75%, and the duration of pain before presentation. The predictive model and prediction score for ruptured appendicitis were developed by multivariable logistic regression analysis. Results: During the study period, 480 patients met the inclusion criteria; of these, 77 (16%) had ruptured appendicitis. Five independent factors were predictive of rupture: age > 60 years, fever > 37.3°C, guarding, PMN > 75%, and duration of pain > 24 hours before presentation. A score > 6 increased the likelihood ratio of ruptured appendicitis by 3.88 times. Conclusion: Using the Ramathibodi Welawat Ruptured Appendicitis Score (RAMA WeRA Score) developed in this study, a score > 6 was associated with ruptured appendicitis.
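
Expressed procedurally, a risk score of this kind sums points over the five predictors and flags patients above the threshold. The point values below are placeholders, since the abstract reports the predictors and the cut-off (score > 6) but not the individual weights:

    # Placeholder point weights for the five predictors (illustrative only;
    # the abstract gives the cut-off, not the individual weights)
    WEIGHTS = {
        "age_over_60": 2,
        "fever_over_37_3": 2,
        "guarding": 2,
        "pmn_over_75": 2,
        "pain_over_24h": 2,
    }
    CUTOFF = 6   # score > 6 -> increased likelihood of ruptured appendicitis

    def rupture_score(patient: dict) -> int:
        # Sum the points of every predictor present in the patient record
        return sum(points for factor, points in WEIGHTS.items() if patient.get(factor))

    patient = {"age_over_60": False, "fever_over_37_3": True,
               "guarding": True, "pmn_over_75": True, "pain_over_24h": True}
    score = rupture_score(patient)
    print(score, "-> high risk of rupture" if score > CUTOFF else "-> lower risk")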

Keywords: predictive model, risk score, ruptured appendicitis, emergency room

Procedia PDF Downloads 165
28810 A Model of Foam Density Prediction for Expanded Perlite Composites

Authors: M. Arifuzzaman, H. S. Kim

Abstract:

Multiple sets of variables associated with expanded perlite particle consolidation in foam manufacturing were analyzed to develop a model for predicting perlite foam density. The consolidation of perlite particles based on the flotation method and compaction involves numerous variables leading to the final perlite foam density. The variables include binder content, compaction ratio, perlite particle size, various perlite particle densities and porosities, and the volumes of perlite at different stages of the process. The developed model was found to be useful not only for prediction of foam density but also for optimization between compaction ratio and binder content to achieve a desired density. Experimental verification was conducted using a range of foam densities (0.15–0.5 g/cm3) produced with a range of compaction ratios (1.5–3.5), a range of sodium silicate contents (0.05–0.35 g/ml) in dilution, a range of expanded perlite particle sizes (1–4 mm), and various perlite densities (such as skeletal, material, bulk, and envelope densities). A close agreement between predictions and experimental results was found.

Keywords: expanded perlite, flotation method, foam density, model, prediction, sodium silicate

Procedia PDF Downloads 408
28809 Application of Artificial Neural Network for Prediction of Load-Haul-Dump Machine Performance Characteristics

Authors: J. Balaraju, M. Govinda Raj, C. S. N. Murthy

Abstract:

Every industry constantly looks for ways to enhance its day-to-day production and productivity. This is possible only by maintaining the men and machinery at an adequate level. Prediction of performance characteristics plays an important role in the performance evaluation of equipment. Analytical and statistical approaches take more time to solve complex problems such as performance estimation compared with software-based approaches. Keeping this in view, the present study deals with Artificial Neural Network (ANN) modelling of a Load-Haul-Dump (LHD) machine to predict performance characteristics such as reliability, availability, and preventive maintenance (PM). A feed-forward back-propagation ANN trained with the Levenberg-Marquardt (LM) algorithm has been used for the modelling. The performance characteristics were computed using Isograph Reliability Workbench 13.0 software, and these computed values were validated against the predicted output responses of the ANN models. Further, recommendations are given to the industry based on the analysis performed, for improvement of equipment performance.

Keywords: load-haul-dump, LHD, artificial neural network, ANN, performance, reliability, availability, preventive maintenance

Procedia PDF Downloads 150
28808 Analysis of Residents’ Travel Characteristics and Policy Improving Strategies

Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong

Abstract:

To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit Model (MNL) is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes, and travel characteristics on the choice of travel mode, and identify the significant factors; suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. This paper selects Futian Street in Futian District, Shenzhen City, for investigation and research. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance, and number of trips all have a significant influence on residents' choice of travel mode. Based on the above results, two policy improvement suggestions, aimed at reducing public transportation and non-motorized travel times, are put forward, and the policy effect is evaluated. Before the evaluation, the prediction performance of the MNL, SVM, and MLP models was assessed. After parameter optimization, the prediction accuracies of the three models were 72.80%, 71.42%, and 76.42%, respectively. The MLP model, with the highest prediction accuracy, was selected to evaluate the effect of policy improvement. The results show that after the implementation of the policy, the proportion of public transportation in plan 1 and plan 2 increased by 14.04% and 9.86%, respectively, while the proportion of private cars decreased by 3.47% and 2.54%, respectively. The proportion of car trips decreased noticeably, while the proportion of public transport trips increased. The measures can be considered to have a positive effect on promoting green travel and improving the satisfaction of urban residents, and they can provide a reference for relevant departments when formulating transportation policies.
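
A compact illustration of the MNL step (travel-mode choice as a multinomial logistic regression), with placeholder traveller attributes instead of the Futian survey data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(4)
    # Placeholder attributes: age, income level, cars owned, travel distance [km]
    X = np.column_stack([rng.integers(18, 70, 2000),
                         rng.integers(1, 6, 2000),
                         rng.integers(0, 3, 2000),
                         rng.uniform(0.5, 30, 2000)])
    # Placeholder modes: 0 = walk/bike, 1 = public transport, 2 = private car
    y = rng.integers(0, 3, 2000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    # With the default lbfgs solver, the multi-class problem is fitted as a multinomial logit
    mnl = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    print("mode-choice accuracy:", accuracy_score(y_te, mnl.predict(X_te)))
    print("class probabilities for one traveller:", mnl.predict_proba(X_te[:1]))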

Keywords: neural network, travel characteristics analysis, transportation choice, travel sharing rate, traffic resource allocation

Procedia PDF Downloads 138
28807 Satellite Statistical Data Approach for Upwelling Identification and Prediction in South of East Java and Bali Sea

Authors: Hary Aprianto Wijaya Siahaan, Bayu Edo Pratama

Abstract:

Sea fisheries have the potential to become one of the nation's assets and contribute greatly to Indonesia's economy. This fishery potential cannot be separated from the availability of chlorophyll in the territorial waters of Indonesia. The research was conducted using three methods, namely statistical, comparative, and analytical methods. The data used include MODIS sea surface temperature imagery from the Aqua satellite at a resolution of 4 km for 2002-2015, MODIS chlorophyll-a imagery from the Aqua satellite at a resolution of 4 km for 2002-2015, and ASCAT imagery from the MetOp and NOAA satellites at a resolution of 27 km for 2002-2015. The results of the data processing show that upwelling events in the sea south of East Java begin to occur in June, identified by below-normal sea surface temperature anomalies, air masses moving from east to west, and high chlorophyll-a concentrations. In July, the upwelling region expands further westward and reaches its peak in August. Chlorophyll-a concentration prediction using multiple linear regression equations gives excellent results for 2002 to 2015, with a correlation of the predicted chlorophyll-a concentration of 0.8 and an RMSE of 0.3. The chlorophyll-a concentration prediction for 2016 also shows good results despite a decline in the correlation to 0.6, with an improved RMSE of 0.2.

Keywords: satellite, sea surface temperature, upwelling, wind stress

Procedia PDF Downloads 158