Search results for: time prediction algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20529

20469 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are goals of many drilling operators. Historically, stuck pipe incidents have been a major contributor to the costs associated with non-productive time (NPT). Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs; however, the real key to savings and success lies in predicting stuck pipe incidents and avoiding the conditions that lead to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the live Wellsite Information Transfer Standard Markup Language (WITSML) surface data are fed into a matrix and aggregated at the same frequency as the pre-determined signature. The matrix is then correlated, in real time, with the pre-determined stuck-pipe signature for the field. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a real-time probability curve for stuck pipe incidents. Once this probability exceeds a fixed threshold defined by the user, the second component, cause analysis, alerts the user to the expected incident based on the pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures had been created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. Predicting the stuck pipe problem requires a method that captures geological, geophysical, and drilling data and recognizes the indicators of this issue at the field and geological-formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in producing such signatures and predicting this NPT event.
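As a rough illustration of the real-time correlation step described above, the sketch below correlates a window of live surface-data channels against a pre-determined stuck-pipe signature and turns the mean correlation into a probability checked against a user-defined threshold. All names, array shapes, and the threshold value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def stuck_pipe_probability(live_window: np.ndarray, signature: np.ndarray) -> float:
    """Correlate a live surface-data window with a pre-determined signature.

    Both arrays have shape (n_samples, n_features); the result is mapped to
    [0, 1] and read as a rough probability of an imminent stuck-pipe event.
    """
    corrs = []
    for j in range(signature.shape[1]):
        live, ref = live_window[:, j], signature[:, j]
        if live.std() == 0 or ref.std() == 0:      # skip flat channels
            continue
        corrs.append(np.corrcoef(live, ref)[0, 1])
    if not corrs:
        return 0.0
    return float(np.clip(np.mean(corrs), 0.0, 1.0))

# Illustrative use: alert when the probability passes a user-defined threshold.
rng = np.random.default_rng(0)
signature = rng.normal(size=(60, 4))               # stacked/flattened offset-well signature
live_window = signature + rng.normal(scale=0.3, size=signature.shape)
if stuck_pipe_probability(live_window, signature) > 0.7:
    print("Stuck-pipe risk: review recommendations for this formation")
```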

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 213
20468 A Review on Comparative Analysis of Path Planning and Collision Avoidance Algorithms

Authors: Divya Agarwal, Pushpendra S. Bharti

Abstract:

Autonomous mobile robots (AMRs) are expected to serve as smart tools for operations in every automation industry. Path planning and obstacle avoidance are the backbone of an AMR, as the robot has to reach its goal location while avoiding obstacles and traversing a path optimized according to criteria such as distance, time, or energy. Path planning can be classified into global and local path planning, where environmental information is known and unknown/partially known, respectively. A number of sensors are used for data collection. Algorithms such as the artificial potential field (APF), rapidly exploring random trees (RRT), bidirectional RRT, fuzzy approaches, Pure Pursuit, the A* algorithm, the vector field histogram (VFH), and modified local path planning algorithms have been used over the last three decades for path planning and obstacle avoidance in AMRs. This paper reviews some of the path planning and obstacle avoidance algorithms used in the field of AMRs. The review includes a comparative analysis of simulations and mathematical computations of path planning and obstacle avoidance algorithms using MATLAB 2018a. From the review, it can be concluded that different algorithms may complete the same task (i.e., with different sets of instructions) in more or less time, space, or effort.
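As a concrete point of reference for one of the reviewed planners, the sketch below is a minimal A* search on a 4-connected occupancy grid; the grid, heuristic, and unit cost model are illustrative choices, not taken from the paper's MATLAB comparison.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).

    Returns a list of cells from start to goal, or None if no path exists.
    Manhattan distance is an admissible heuristic for 4-connected motion.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start, None)]
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue
        came_from[cell] = parent
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (cell[0] + dr, cell[1] + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g + 1
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng
                    heapq.heappush(open_set, (ng + h(nb), ng, nb, cell))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # e.g. [(0,0), (0,1), (0,2), (1,2), (2,2), (2,1), (2,0)]
```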

Keywords: path planning, obstacle avoidance, autonomous mobile robots, algorithms

Procedia PDF Downloads 224
20467 Grey Wolf Optimization Technique for Predictive Analysis of Products in E-Commerce: An Adaptive Approach

Authors: Shital Suresh Borse, Vijayalaxmi Kadroli

Abstract:

E-commerce industries nowadays implement the latest AI and ML techniques to improve their performance and prediction accuracy, which helps them gain large profits from the online market. Ant Colony Optimization, Genetic Algorithms, Particle Swarm Optimization, Neural Networks, and GWO help many e-commerce industries upgrade their predictive performance. These algorithms provide optimal results in various applications, such as stock price prediction, prediction of drug-target interactions, and user ratings of similar products on e-commerce sites. In this study, customer reviews play an important role in the prediction analysis: people show much interest in buying services and products suggested by other customers, which ultimately increases net profit. In this work, a convolutional neural network (CNN) is proposed to improve the prediction accuracy for an e-commerce website, with its hyperparameters tuned by the GWO algorithm using an appropriate coding scheme. The results are verified by comparing them against a counterpart whose hyperparameters have been optimized by PSO on Amazon's customer review dataset. The experimental outcome shows that the proposed system using the GWO algorithm achieves superior performance in terms of accuracy, precision, recall, etc., in the prediction analysis compared to existing systems.
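A minimal Grey Wolf Optimizer loop is sketched below as a generic minimizer; the two-dimensional objective standing in for a CNN's validation error (e.g. over a learning rate and a dropout rate) is purely an assumption for illustration, not the paper's setup.

```python
import numpy as np

def grey_wolf_optimize(objective, bounds, n_wolves=12, n_iters=50, seed=0):
    """Minimal Grey Wolf Optimizer: the alpha, beta, and delta wolves guide the pack.

    `bounds` is a list of (low, high) pairs per dimension; returns the best
    position found and its objective value.
    """
    rng = np.random.default_rng(seed)
    low, high = np.array(bounds).T
    wolves = rng.uniform(low, high, size=(n_wolves, len(bounds)))
    for it in range(n_iters):
        scores = np.array([objective(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(scores)[:3]]
        a = 2 - 2 * it / n_iters                     # linearly decreasing coefficient
        for i in range(n_wolves):
            new_pos = np.zeros(len(bounds))
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(bounds)), rng.random(len(bounds))
                A, C = 2 * a * r1 - a, 2 * r2
                new_pos += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new_pos / 3, low, high)
    scores = np.array([objective(w) for w in wolves])
    return wolves[np.argmin(scores)], scores.min()

# Illustrative objective standing in for CNN validation error over two hyperparameters.
objective = lambda x: (x[0] - 0.01) ** 2 + (x[1] - 0.3) ** 2
best, err = grey_wolf_optimize(objective, bounds=[(1e-4, 0.1), (0.0, 0.5)])
print(best, err)
```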

Keywords: prediction analysis, e-commerce, machine learning, grey wolf optimization, particle swarm optimization, CNN

Procedia PDF Downloads 104
20466 Improved Particle Swarm Optimization with Cellular Automata and Fuzzy Cellular Automata

Authors: Ramin Javadzadeh

Abstract:

Particle swarm optimization is a metaheuristic optimization method that is used abundantly in clustering and pattern recognition applications. In multimodal optimization problems, these algorithms are more efficient than genetic algorithms. A major drawback of these algorithms is their slow convergence to the global optimum and the weak stability observed across different runs. In this paper, an improved particle swarm optimization is introduced to overcome these problems; fuzzy cellular automata are used to improve the algorithm efficiently. The credibility of the proposed approach is evaluated by simulations, and it is shown that the proposed approach achieves better results than standard particle swarm optimization algorithms.
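For reference, the sketch below shows the plain PSO velocity/position update that the proposed variant builds on; the fuzzy cellular automata layer described in the abstract is not reproduced here, and the test function and parameter values are illustrative.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization (minimization).

    Each particle keeps a personal best; the swarm shares a global best that
    pulls all velocities, per the standard velocity/position update rule.
    """
    rng = np.random.default_rng(seed)
    low, high = np.array(bounds).T
    pos = rng.uniform(low, high, size=(n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, low, high)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

best, val = pso(lambda x: np.sum(x ** 2), bounds=[(-5, 5)] * 3)
print(best, val)   # converges near the origin
```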

Keywords: cellular automata, cellular learning automata, local search, optimization, particle swarm optimization

Procedia PDF Downloads 596
20465 Comparative Study of Deep Reinforcement Learning Algorithm Against Evolutionary Algorithms for Finding the Optimal Values in a Simulated Environment Space

Authors: Akshay Paranjape, Nils Plettenberg, Robert Schmitt

Abstract:

Traditional optimization methods like evolutionary algorithms are widely used in production processes to find an optimal or near-optimal solution for control parameters based on the simulated environment space of a process. These algorithms are computationally intensive and therefore do not allow real-time optimization. This paper utilizes the Deep Reinforcement Learning (DRL) framework to find an optimal or near-optimal solution for the control parameters. A model based on maximum a posteriori policy optimization (Hybrid-MPO) that can handle both numerical and categorical parameters is used as a benchmark for comparison. A comparative study shows that DRL can find optimal solutions of similar quality to those of evolutionary algorithms while requiring significantly less time, making it preferable for real-time optimization. The results are confirmed in a large-scale validation study on datasets from production and other fields. A trained XGBoost model is used as a surrogate for the process simulation. Finally, multiple ways to improve the model are discussed.

Keywords: reinforcement learning, evolutionary algorithms, production process optimization, real-time optimization, hybrid-MPO

Procedia PDF Downloads 104
20464 Comparison of Machine Learning Models for the Prediction of System Marginal Price of Greek Energy Market

Authors: Ioannis P. Panapakidis, Marios N. Moschakis

Abstract:

The Greek energy market is structured as a mandatory pool where producers submit their bid offers on a day-ahead basis. The System Operator solves an optimization routine aiming at the minimization of the cost of produced electricity. The solution of the optimization problem leads to the calculation of the System Marginal Price (SMP). Accurate forecasts of the SMP can lead to increased profits and more efficient portfolio management from the producer's perspective. The aim of this study is to provide a comparative analysis of various machine learning models, such as artificial neural networks and neuro-fuzzy models, for the prediction of the SMP of the Greek market. Machine learning algorithms are favored in prediction problems since they can capture and simulate the volatility of complex time series.

Keywords: deregulated energy market, forecasting, machine learning, system marginal price

Procedia PDF Downloads 205
20463 Wait-Optimized Scheduler Algorithm for Efficient Process Scheduling in Computer Systems

Authors: Md Habibur Rahman, Jaeho Kim

Abstract:

Efficient process scheduling is a crucial factor in ensuring optimal system performance and resource utilization in computer systems. While various algorithms have been proposed over the years, there are still limitations to their effectiveness. This paper introduces a new Wait-Optimized Scheduler (WOS) algorithm that aims to minimize process waiting time by dividing processes into two layers and considering both process time and waiting time. The WOS algorithm is non-preemptive and prioritizes processes with the shortest WOS value. In the first layer, each process runs for a predetermined duration, and any unfinished process is subsequently moved to the second layer, resulting in a decrease in response time. Whenever the first layer is free or the number of processes in the second layer is twice that of the first layer, the algorithm sorts all the processes in the second layer based on their remaining time minus waiting time and sends one process to the first layer to run. This ensures that all processes eventually run, optimizing waiting time. To evaluate the WOS algorithm, we conducted experiments comparing it with traditional scheduling algorithms such as First-Come-First-Serve (FCFS) and Shortest-Job-First (SJF). The results showed that the WOS algorithm outperformed the traditional algorithms in reducing the waiting time of processes, particularly in scenarios with a large number of short tasks with long wait times. Our study highlights the effectiveness of the WOS algorithm in improving process scheduling efficiency in computer systems. By reducing process waiting time, the WOS algorithm can improve system performance and resource utilization. The findings of this study provide valuable insights for researchers and practitioners in developing and implementing efficient process scheduling algorithms.
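The sketch below is a heavily simplified reading of the two-layer idea described above: layer 1 gives each process one quantum, unfinished work falls into layer 2, and promotion back to layer 1 is ordered by remaining time minus waiting time. The quantum length, promotion trigger, and tie-breaking are assumptions for illustration, not the paper's exact rules.

```python
from collections import deque

def wos_schedule(processes, quantum=4):
    """Simplified sketch of the two-layer Wait-Optimized Scheduler idea.

    `processes` is a list of (name, burst_time). Layer 1 runs each process
    for one quantum; unfinished work drops to layer 2, which is re-sorted by
    (remaining time - waiting time) before one process is promoted back.
    Returns a trace of (process, time at end of its slice).
    """
    clock, trace = 0, []
    layer1 = deque((name, burst, 0) for name, burst in processes)   # (name, remaining, wait-start)
    layer2 = []
    while layer1 or layer2:
        if not layer1 and layer2:
            # promote the process with the smallest (remaining - waiting) score
            layer2.sort(key=lambda p: p[1] - (clock - p[2]))
            layer1.append(layer2.pop(0))
        name, remaining, _ = layer1.popleft()
        run = min(quantum, remaining)
        clock += run
        trace.append((name, clock))
        if remaining > run:
            layer2.append((name, remaining - run, clock))
    return trace

print(wos_schedule([("A", 10), ("B", 3), ("C", 7)]))
```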

Keywords: process scheduling, wait-optimized scheduler, response time, non-preemptive, waiting time, traditional scheduling algorithms, first-come-first-serve, shortest-job-first, system performance, resource utilization

Procedia PDF Downloads 81
20462 Solution Approaches for Some Scheduling Problems with Learning Effect and Job Dependent Delivery Times

Authors: M. Duran Toksari, Berrin Ucarkus

Abstract:

In this paper, we propose two algorithms to optimally solve makespan and total completion time scheduling problems with learning effects and job-dependent delivery times in a single-machine environment. The delivery time is the extra time needed to eliminate the adverse effect between the main processing and delivery to the customer. We introduce job-dependent delivery times for single-machine scheduling problems with a position-dependent learning effect, with makespan and total completion time as the objectives. The results of the two algorithms proposed for each problem are compared with LINGO solutions for 50-job, 100-job, and 150-job instances. The proposed algorithms find the same results in shorter time.

Keywords: delivery times, learning effect, makespan, scheduling, total completion time

Procedia PDF Downloads 463
20461 Automatic Queuing Model Applications

Authors: Fahad Suleiman

Abstract:

Queuing in a medical system is the process of moving patients in a specific sequence to a specific service according to the nature of the patients' illness. The term scheduling stands for the process of computing a schedule, which may be done by a queue-based scheduler. This paper focuses on the medical consultancy system, the different queuing algorithms used in healthcare systems to serve patients, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing the medical queue that can analyse the queue status and decide which patient to serve. The new queuing architecture model can switch between different scheduling algorithms according to testing results and the average waiting time factor. The main innovation of this work concerns modeling the average waiting time within the processing, together with the process of switching to the scheduling algorithm that gives the best average waiting time.

Keywords: queuing systems, queuing system models, scheduling algorithms, patients

Procedia PDF Downloads 340
20460 Housing Price Prediction Using Machine Learning Algorithms: The Case of Melbourne City, Australia

Authors: The Danh Phan

Abstract:

House price forecasting is a major topic in real estate market research. Effective house price prediction models not only allow home buyers and real estate agents to make better data-driven decisions but may also benefit the property policymaking process. This study investigates the housing market by using machine learning techniques to analyze real historical house sale transactions in Australia. It seeks useful models that could be deployed as an application for house buyers and sellers. Data analytics show a high discrepancy between house prices in the most expensive and the most affordable suburbs in the city of Melbourne. In addition, experiments demonstrate that the combination of Stepwise and Support Vector Machine (SVM), based on the Mean Squared Error (MSE) measurement, consistently outperforms other models in terms of prediction accuracy.

Keywords: house price prediction, regression trees, neural network, support vector machine, stepwise

Procedia PDF Downloads 214
20459 A Deep Learning Approach to Real Time and Robust Vehicular Traffic Prediction

Authors: Bikis Muhammed, Sehra Sedigh Sarvestani, Ali R. Hurson, Lasanthi Gamage

Abstract:

Vehicular traffic events have overly complex spatial correlations and temporal interdependencies and are also influenced by environmental events such as weather conditions. To capture these spatial and temporal interdependencies and make more realistic vehicular traffic predictions, graph neural network (GNN) based traffic prediction models have been extensively utilized due to their capability to capture non-Euclidean spatial correlation very effectively. However, most existing GNN-based traffic prediction models have limitations in learning complex and dynamic spatial and temporal patterns due to the following missing factors. First, most GNN-based traffic prediction models have used static distance or sometimes haversine distance mechanisms between spatially separated traffic observations to estimate spatial correlation. Second, most GNN-based traffic prediction models have not incorporated environmental events that have a major impact on normal traffic states. Finally, most GNN-based models did not use an attention mechanism to focus on only the important traffic observations. The objective of this paper is to study and make real-time vehicular traffic predictions while incorporating the effect of weather conditions. To fill the previously mentioned gaps, our prediction model uses the real-time driving distance between sensors to build a distance matrix (spatial adjacency matrix) and capture spatial correlation. In addition, our prediction model considers the effect of six types of weather conditions and has an attention mechanism in both spatial and temporal data aggregation. Our prediction model efficiently captures the spatial and temporal correlation between traffic events; it relies on graph attention network (GAT) and bidirectional long short-term memory (Bi-LSTM) plus attention layers and is called GAT-BILSTMA.

Keywords: deep learning, real time prediction, GAT, Bi-LSTM, attention

Procedia PDF Downloads 63
20458 Monthly River Flow Prediction Using a Nonlinear Prediction Method

Authors: N. H. Adenan, M. S. M. Noorani

Abstract:

River flow prediction is essential to ensure that water resources are managed properly and water is distributed optimally to consumers. This study presents an analysis and prediction using a nonlinear prediction method on monthly river flow data from Tanjung Tualang for 1976 to 2006. The nonlinear prediction method involves phase space reconstruction and a local linear approximation approach. The phase space reconstruction embeds the one-dimensional series (the 287 observed months of data) in a multidimensional phase space to reveal the dynamics of the system. The reconstructed phase space is then used to predict the next 72 months. Prediction performance based on the correlation coefficient (CC) and root mean square error (RMSE) has been employed to compare the nonlinear prediction method, ARIMA, and SVM. The comparisons show that the results using the nonlinear prediction method are better than those of ARIMA and SVM. Therefore, the results of this study could be used to develop an efficient water management system to optimize the allocation of water resources.
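A minimal sketch of the two ingredients named above is given below: a delay-coordinate (phase space) reconstruction and a simple nearest-neighbour predictor standing in for the local linear approximation. The embedding dimension, delay, neighbour count, and the synthetic seasonal series are assumptions, not the Tanjung Tualang data or the paper's settings.

```python
import numpy as np

def delay_embed(series, dim=3, tau=1):
    """Reconstruct a phase space from a scalar series (delay-coordinate embedding)."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

def local_predict(series, dim=3, tau=1, k=5):
    """One-step-ahead prediction by averaging the successors of the k nearest
    neighbours of the current state -- a zeroth-order stand-in for the local
    linear approximation used in the paper."""
    series = np.asarray(series, dtype=float)
    emb = delay_embed(series, dim, tau)
    current, history = emb[-1], emb[:-1]
    dists = np.linalg.norm(history - current, axis=1)
    nearest = np.argsort(dists)[:k]
    successors = [series[i + (dim - 1) * tau + 1] for i in nearest]
    return float(np.mean(successors))

# Illustrative monthly series with a seasonal cycle plus noise (not the Tanjung Tualang data).
rng = np.random.default_rng(1)
t = np.arange(287)
flow = 100 + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=5, size=t.size)
print(local_predict(flow, dim=4, tau=1, k=5))
```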

Keywords: river flow, nonlinear prediction method, phase space, local linear approximation

Procedia PDF Downloads 402
20457 Task Scheduling on Parallel System Using Genetic Algorithm

Authors: Jasbir Singh Gill, Baljit Singh

Abstract:

Scheduling and mapping an application task graph onto multiprocessor parallel systems is considered one of the most crucial and critical NP-complete problems. Many genetic algorithms have been proposed to solve such problems. In this paper, two genetic-approach-based algorithms have been designed and developed, with and without task duplication. The proposed algorithms work on two fitness functions. The first fitness function, task fitness, is used to minimize the total finish time of the schedule (schedule length), while the second, process fitness, is concerned with allocating tasks to the most efficient processor from the list of available processors (load balance). The proposed genetic-based algorithms have been experimentally implemented and evaluated against other state-of-the-art, popular, and widely used algorithms.

Keywords: parallel computing, task scheduling, task duplication, genetic algorithm

Procedia PDF Downloads 337
20456 Understanding Health-Related Properties of Grapes by Pharmacokinetic Modelling of Intestinal Absorption

Authors: Sophie N. Selby-Pham, Yudie Wang, Louise Bennett

Abstract:

Consumption of grapes promotes health and reduces the risk of chronic diseases due to the action of grape phytochemicals in the regulation of Oxidative Stress and Inflammation (OSI). The bioefficacy of phytochemicals depends on their absorption in the human body. The time required for phytochemicals to achieve maximal plasma concentration (Tₘₐₓ) after oral intake reflects the time window of their maximal bioefficacy, with Tₘₐₓ dependent on the physicochemical properties of the phytochemicals. This research collated physicochemical properties of phytochemicals from white and red grapes to predict their Tₘₐₓ using pharmacokinetic modelling. The predicted values of Tₘₐₓ were then compared with measured Tₘₐₓ values collected from clinical studies to determine the accuracy of prediction. In both liquid and solid intake forms, white grapes exhibit a shorter Tₘₐₓ range (0.5-2.5 h) than red grapes (1.5-5 h). The total error of prediction of Tₘₐₓ for grape phytochemicals was 33.3% relative to the mean, indicating high prediction accuracy. Pharmacokinetic modelling allows prediction of Tₘₐₓ without costly clinical trials, informing the dosing frequency needed to sustain the presence of phytochemicals in the body and optimize their health benefits.

Keywords: absorption kinetics, phytochemical, phytochemical absorption prediction model, Vitis vinifera

Procedia PDF Downloads 140
20455 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem

Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee

Abstract:

Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both static and dynamic environments (denoted SWTA and DWTA, respectively). Because the problem must be solved within a relevant computational time, WTA has suffered from poor solution efficiency; as a result, SWTA and DWTA problems have been solved only for limited battlefield situations. In this paper, the general situation under continuous time is considered as the Time-based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy. Although the TWTA optimization model works inefficiently for large instances, the decomposed opt-opt algorithm, based on linearization and decomposition, extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long for the optimization model, several greedy-based algorithms are proposed; these show lower performance than the decomposed opt-opt algorithm but need very little computation time. Hence, this paper proposes an improved method by applying decomposition to TWTA, so that more practical and effective methods can be developed for using TWTA on the battlefield.

Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research

Procedia PDF Downloads 328
20454 A Prediction Method for Large-Size Event Occurrences in the Sandpile Model

Authors: S. Channgam, A. Sae-Tang, T. Termsaithong

Abstract:

In this research, the occurrences of large-size events in various system sizes of the Bak-Tang-Wiesenfeld sandpile model are considered. The system sizes (square lattices) considered here are 25×25, 50×50, 75×75 and 100×100. The cross-correlation between the time series of the ratio of sites containing 3 grains and the large-size event time series is analyzed for these four system sizes. Moreover, a prediction method for large-size events in the 50×50 system is introduced. Lastly, it is shown that this prediction method provides slightly higher efficiency than random predictions.
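For orientation, the sketch below drives a small Bak-Tang-Wiesenfeld sandpile, records avalanche sizes, and tracks the fraction of sites holding 3 grains, the quantity whose cross-correlation with large events is analyzed above. The lattice size, drive length, and the 95th-percentile definition of a "large" event are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def btw_avalanches(size=25, steps=5000, seed=0):
    """Drive a Bak-Tang-Wiesenfeld sandpile and record, per added grain,
    the avalanche size (number of topplings) and the fraction of sites
    holding 3 grains just before the grain is added."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=int)
    avalanche_sizes, ratio_of_3 = [], []
    for _ in range(steps):
        ratio_of_3.append(np.mean(grid == 3))
        i, j = rng.integers(size), rng.integers(size)
        grid[i, j] += 1
        topples = 0
        while np.any(grid >= 4):
            for x, y in np.argwhere(grid >= 4):
                grid[x, y] -= 4
                topples += 1
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < size and 0 <= ny < size:
                        grid[nx, ny] += 1   # grains toppled over the boundary are lost
        avalanche_sizes.append(topples)
    return np.array(avalanche_sizes), np.array(ratio_of_3)

sizes, ratio = btw_avalanches()
large = sizes > np.percentile(sizes, 95)               # define "large-size events"
print(np.corrcoef(ratio, large.astype(float))[0, 1])   # crude precursor correlation
```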

Keywords: Bak-Tang-Wiesenfeld sandpile model, cross-correlation, avalanches, prediction method

Procedia PDF Downloads 372
20453 Project Time Prediction Model: A Case Study of Construction Projects in Sindh, Pakistan

Authors: Tauha Hussain Ali, Shabir Hussain Khahro, Nafees Ahmed Memon

Abstract:

Accurate prediction of project time at the planning and bid preparation stage should yield realistic dates. Constructors use their experience to estimate the duration of new projects, which is based on intuition. Analyzing the accurate prediction of project duration at the bid preparation stage has been a constant concern for both researchers and constructors. In Pakistan, studies of the time-cost relationship for predicting the duration performance of construction projects have been lacking. This study explores the time-cost relationship and concludes with a mathematical model to predict the duration of drainage rehabilitation projects in the province of Sindh, Pakistan. The data have been collected from National Engineering Services Pakistan (NESPAK), and regression analysis has been carried out on the results. A significant relationship has been found between the time and cost of construction projects in Sindh, and the resulting mathematical model can be used by constructors to predict project duration for upcoming projects of the same nature. The study also provides professionals with the requisite knowledge to make decisions regarding project duration, which is significantly important for winning projects at the bid stage.
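As an illustration of the kind of time-cost regression referred to above (a Bromilow-style BTC model, per the keywords), the sketch below fits T = K·C^B by least squares on log-transformed data; the cost and duration values are synthetic placeholders, not the NESPAK drainage-project data.

```python
import numpy as np

# Bromilow-style time-cost (BTC) model: T = K * C**B, fitted by ordinary least
# squares on log-transformed data. The pairs below are synthetic placeholders.
cost = np.array([12.0, 25.0, 40.0, 55.0, 80.0, 120.0])      # e.g. project cost (million PKR)
duration = np.array([7.0, 9.5, 12.0, 13.5, 16.0, 20.0])     # project duration (months)

B, logK = np.polyfit(np.log(cost), np.log(duration), deg=1)
K = np.exp(logK)
print(f"T = {K:.2f} * C^{B:.2f}")

# Predicted duration for a new project costing 60 (same cost units)
print(f"Predicted duration: {K * 60.0 ** B:.1f} months")
```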

Keywords: BTC Model, project time, relationship of time cost, regression

Procedia PDF Downloads 374
20452 Fault Diagnosis of Manufacturing Systems Using AntTreeStoch with Parameter Optimization by ACO

Authors: Ouahab Kadri, Leila Hayet Mouss

Abstract:

In this paper, we present three diagnostic modules for complex and dynamic systems. These modules are based on three ant colony algorithms: AntTreeStoch, Lumer & Faieta, and the binary ant colony. We chose these algorithms for their simplicity and their wide application range. However, we cannot use these algorithms in their basic forms, as they have several limitations. To use them in a diagnostic system, we have proposed three variants. We have tested these algorithms on datasets issued from two industrial systems: a clinkering system and a pasteurization system.

Keywords: ant colony algorithms, complex and dynamic systems, diagnosis, classification, optimization

Procedia PDF Downloads 294
20451 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling

Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed

Abstract:

The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) single-hidden-layer and (2) double-hidden-layer feedforward back-propagation networks. Results revealed that the GDM optimisation algorithm, with its adaptive learning capability, generally used a relatively shorter time in both the training and validation phases than the LM and Br algorithms, though learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions for 1-day and 5-day ahead forecasts. In specific statistical terms, average model performance efficiency using the coefficient of efficiency (CE) statistic was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, respectively, for the training and validation phases. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, the adoption of ANNs for real-time forecasting should employ training algorithms that do not have the computational overhead of LM, which requires the computation of the Hessian matrix, protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure, quality of the forecast, and mitigation of network overfitting. On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions on the overall network forecast performance.

Keywords: streamflow, neural network, optimisation, algorithm

Procedia PDF Downloads 144
20450 pscmsForecasting: A Python Web Service for Time Series Forecasting

Authors: Ioannis Andrianakis, Vasileios Gkatas, Nikos Eleftheriadis, Alexios Ellinidis, Ermioni Avramidou

Abstract:

pscmsForecasting is an open-source web service that implements a variety of time series forecasting algorithms and exposes them to the user via the ubiquitous HTTP protocol. It allows developers to enhance their applications by adding time series forecasting functionality through an intuitive and easy-to-use interface. This paper provides some background on time series forecasting and gives details about the implemented algorithms, aiming to enhance the end user's understanding of the underlying methods before incorporating them into their applications. A detailed description of the web service's interface and its various parameterizations is also provided. Being an open-source project, pscmsForecasting can also be easily modified and tailored to the specific needs of each application.
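A hypothetical client call is sketched below to show the general shape of using a forecasting service over HTTP; the endpoint URL, payload fields, and response format are assumptions for illustration and are not documented parts of pscmsForecasting's actual interface.

```python
# Illustrative client call to a forecasting web service over HTTP. The endpoint
# path, payload fields, and response shape are assumptions for the sake of the
# example -- they are not taken from pscmsForecasting's documentation.
import requests

payload = {
    "series": [112, 118, 132, 129, 121, 135, 148, 148, 136, 119],  # historical values
    "horizon": 3,                                                  # steps to forecast
    "method": "exponential_smoothing",                             # assumed parameter name
}
response = requests.post("http://localhost:8000/forecast", json=payload, timeout=10)
response.raise_for_status()
print(response.json())   # e.g. {"forecast": [...], "method": "exponential_smoothing"}
```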

Keywords: time series, forecasting, web service, open source

Procedia PDF Downloads 74
20449 Prediction of All-Beta Protein Secondary Structure Using Garnier-Osguthorpe-Robson Method

Authors: K. Tejasri, K. Suvarna Vani, S. Prathyusha, S. Ramya

Abstract:

Proteins are chained sequences of amino acids held together by peptide bonds. Many formations of the chains are possible due to the multiple combinations of amino acids and rotation at numerous positions along the chain. Protein structure prediction is one of the crucial goals pursued by researchers in bioinformatics and theoretical chemistry. Among the four structural levels of proteins, we emphasize mainly the secondary structure, which basically comprises alpha-helices and beta-sheets. The multi-class classification of such disparate data is truly a challenge to overcome and has to be addressed for the beta-strands: an imbalanced data distribution means that a couple of the classes have very limited training samples compared with the other classes. The secondary structure data are extracted from the protein primary sequence, and the beta-strands are predicted using suitable machine learning algorithms.

Keywords: proteins, secondary structure elements, beta-sheets, beta-strands, alpha-helices, machine learning algorithms

Procedia PDF Downloads 88
20448 Traffic Prediction with Raw Data Utilization and Context Building

Authors: Zhou Yang, Heli Sun, Jianbin Huang, Jizhong Zhao, Shaojie Qiao

Abstract:

Traffic prediction is essential in a multitude of ways in modern urban life. Earlier work in this domain has chiefly focused on two aspects: (1) accurate forecasting of future values in multiple time series and (2) knowledge extraction from spatial-temporal correlations. However, two key considerations for traffic prediction are often missed: the completeness of the raw data and the full context of the prediction timestamp. Concentrating on these two drawbacks of earlier work, we devise an approach that addresses both issues in a two-phase framework. First, we utilize the raw trajectories to a greater extent by building a VLA table and applying data compression. We obtain intra-trajectory features with graph-based encoding and inter-trajectory features with a grid-based model and a back-projection technique that restores their surrounding high-resolution spatial-temporal environment. To the best of our knowledge, we are the first to study direct feature extraction from raw trajectories for traffic prediction and to attempt the use of raw data with the least degree of reduction. In the prediction phase, we provide a broader context for the prediction timestamp by taking into account the information around it in the training dataset. Extensive experiments on several well-known datasets have verified the effectiveness of our solution, which combines the strength of raw trajectory data and prediction context. In terms of performance, our approach surpasses several state-of-the-art methods for traffic prediction.

Keywords: traffic prediction, raw data utilization, context building, data reduction

Procedia PDF Downloads 116
20447 Using Combination of Sets of Features of Molecules for Aqueous Solubility Prediction: A Random Forest Model

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Generally, absorption and bioavailability increase as solubility increases; therefore, it is crucial to predict solubility in drug discovery applications. Molecular descriptors and molecular properties are traditionally used for the prediction of water solubility. Various key descriptors are used for this purpose, namely Dragon descriptors, Morgan descriptors, MACCS keys, etc., and each has different prediction capabilities, with success varying between data sets. Structural features are another common source of inputs for solubility prediction. However, there are few studies that combine three or more sets of properties or descriptors to produce a more powerful prediction model. Unlike the available models, we used a combination of these feature sets in a random forest machine learning model for improved solubility prediction and, therefore, to contribute to drug discovery systems.
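A minimal sketch of combining two of the descriptor sets named above (Morgan fingerprints and MACCS keys) as input to a random forest is shown below, assuming RDKit and scikit-learn are available; the SMILES strings and solubility values are placeholders, not the study's data set.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, MACCSkeys
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles: str) -> np.ndarray:
    """Concatenate a Morgan fingerprint and MACCS keys into one feature vector."""
    mol = Chem.MolFromSmiles(smiles)
    morgan = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
    maccs = MACCSkeys.GenMACCSKeys(mol)
    return np.concatenate([np.array(morgan), np.array(maccs)])

# Placeholder molecules and illustrative aqueous solubility (logS) values.
smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCCCCC", "OCC(O)CO"]
logS = [0.0, -1.6, -1.7, -3.8, 1.1]

X = np.vstack([featurize(s) for s in smiles])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, logS)
print(model.predict([featurize("CCN")]))      # predict for a new molecule
```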

Keywords: solubility, random forest, molecular descriptors, maccs keys

Procedia PDF Downloads 30
20446 Support Vector Regression Combined with Different Optimization Algorithms to Predict Global Solar Radiation on Horizontal Surfaces in Algeria

Authors: Laidi Maamar, Achwak Madani, Abdellah El Ahdj Abdellah

Abstract:

The aim of this work is to use Support Vector Regression (SVR) combined with the dragonfly, firefly, bee colony, and particle swarm optimization algorithms to predict global solar radiation on horizontal surfaces in some cities in Algeria. Combining these optimization algorithms with SVR aims principally to enhance accuracy by fine-tuning the parameters, speeding up the convergence of the SVR model, and exploring a larger search space efficiently; these parameters are the regularization parameter (C), the kernel parameters, and the epsilon parameter. By doing so, the aim is to improve the generalization and predictive accuracy of the SVR model. Overall, the goal is to leverage the strengths of both SVR and the optimization algorithms to create a more powerful and effective regression model for various cities and under different climate conditions. Results demonstrate close agreement between predicted and measured data in terms of different metrics. In summary, SVR has proven to be a valuable tool in modeling global solar radiation, offering accurate predictions and demonstrating versatility when combined with other algorithms or used in hybrid forecasting models.

Keywords: support vector regression (SVR), optimization algorithms, global solar radiation prediction, hybrid forecasting models

Procedia PDF Downloads 22
20445 Stock Market Prediction Using Convolutional Neural Network That Learns from a Graph

Authors: Mo-Se Lee, Cheol-Hwi Ahn, Kee-Young Kwahk, Hyunchul Ahn

Abstract:

Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images, has been popularly applied to classification and prediction problems in various fields. In this study, we apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. Specifically, we propose to use CNN as a binary classifier that predicts the stock market direction (up or down) by using a graph as its input. That is, our proposal is to build a machine learning algorithm that mimics a person who looks at a graph and predicts whether the trend will go up or down. Our proposed model consists of four steps. In the first step, it divides the dataset into intervals of 5, 10, 15, and 20 days. It then creates graphs for each interval in step 2. In the next step, CNN classifiers are trained using the graphs generated in the previous step. In step 4, it optimizes the hyperparameters of the trained model by using the validation dataset. To validate our model, we apply it to the prediction of KOSPI200 over 1,986 days in eight years (from 2009 to 2016). The experimental dataset includes 14 technical indicators, such as CCI, Momentum, and ROC, and the daily closing price of KOSPI200 of the Korean stock market.

Keywords: convolutional neural network, deep learning, Korean stock market, stock market prediction

Procedia PDF Downloads 421
20444 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule their appointments and manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction; on the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study has been obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. Also, publicly available information on doctors' characteristics, such as gender and experience, has been extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance. The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the current approach of experience-based appointment duration estimation adopted by the clinics resulted in a mean absolute percentage error of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forests (14.71%). Besides, this research also identified the critical variables affecting consultation duration to be patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights are obtained based on the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
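A minimal sketch of one of the compared approaches, a gradient boosting regressor evaluated with the mean absolute percentage error (MAPE) reported above, is given below; the feature names and values are invented placeholders rather than the clinics' EMR data.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

# Invented placeholder records: features loosely mirroring those named in the
# abstract (patient type, doctor experience, appointment day) and the target
# consultation duration in minutes.
data = pd.DataFrame({
    "patient_type":      [0, 1, 0, 1, 0, 1, 0, 1],   # 0 = established, 1 = new
    "doctor_experience": [3, 3, 10, 10, 7, 7, 15, 15],
    "appointment_day":   [1, 2, 3, 4, 5, 1, 2, 3],
    "duration_min":      [12, 25, 10, 20, 14, 28, 9, 18],
})
X, y = data.drop(columns="duration_min"), data["duration_min"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
mape = mean_absolute_percentage_error(y_test, model.predict(X_test))
print(f"MAPE: {mape:.1%}")
```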

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 115
20443 A Comparative Analysis of Asymmetric Encryption Schemes on Android Messaging Service

Authors: Mabrouka Algherinai, Fatma Karkouri

Abstract:

Today, the Short Message Service (SMS) is an important means of communication. SMS is used not only in informal environments for communication and transactions but also in formal environments such as institutions, organizations, companies, and the business world as a tool for communication and transactions. Therefore, there is a need to secure the information transmitted through this medium, both in transit and at rest, and encryption has been identified as a means of providing this security. Several past studies have proposed and developed encryption algorithms for SMS and information security. This research compares the performance of common asymmetric encryption algorithms for SMS security, employing three algorithms: RSA, McEliece, and RABIN. Several experiments were performed on SMS messages of various sizes on an Android mobile device. The experimental results show that each of the three techniques has different key generation, encryption, and decryption times. The efficiency of an algorithm is determined by the time it takes for encryption, decryption, and key generation, and the best algorithm can be chosen based on the least time required for encryption. The obtained results show the least encryption time when McEliece with size 4096 is used, whereas RABIN with size 4096 takes the most time for encryption and so is the least effective algorithm in that respect. The research also shows that McEliece with size 2048 has the least key generation time and is hence the best algorithm with respect to key generation. The results also show that RSA with size 1024 is the most preferable algorithm in terms of decryption, as it gives the least decryption time.
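For the RSA leg of the comparison, the sketch below times key generation, encryption, and decryption of a short SMS-sized message with the Python `cryptography` package; McEliece and RABIN are not provided by that library, so they are not reproduced here, and the key sizes simply mirror those discussed above.

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# A short SMS-sized plaintext and a standard OAEP padding configuration.
message = "Meeting moved to 3pm, room 204.".encode()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

for key_size in (1024, 2048, 4096):
    t0 = time.perf_counter()
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=key_size)
    t1 = time.perf_counter()
    ciphertext = private_key.public_key().encrypt(message, oaep)
    t2 = time.perf_counter()
    private_key.decrypt(ciphertext, oaep)
    t3 = time.perf_counter()
    print(f"RSA-{key_size}: keygen {t1 - t0:.3f}s, "
          f"encrypt {(t2 - t1) * 1000:.2f}ms, decrypt {(t3 - t2) * 1000:.2f}ms")
```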

Keywords: SMS, RSA, McEliece, RABIN

Procedia PDF Downloads 155
20442 On Improving Breast Cancer Prediction Using GRNN-CP

Authors: Kefaya Qaddoum

Abstract:

The aim of this study is to predict breast cancer and to construct a supportive model that provides more reliable predictions, a factor that is fundamental for public health. In this study, we utilize general regression neural networks (GRNN) and replace point predictions with prediction regions in order to achieve a reasonable level of confidence. The mechanism employed here uses a machine learning framework called conformal prediction (CP) to assign consistent confidence measures to predictions, combined with GRNN. We apply the resulting algorithm to the problem of breast cancer diagnosis. The results show that the predictions constructed by this method are reasonable and could be useful in practice.
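A split (inductive) conformal prediction sketch is given below to illustrate how a calibration set turns point predictions into intervals at a chosen confidence level; a k-nearest-neighbour regressor stands in for the GRNN (which scikit-learn does not provide), and the data are synthetic rather than the breast cancer data set.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=300)

# Split into a proper training set and a calibration set.
X_train, y_train = X[:150], y[:150]
X_cal, y_cal = X[150:], y[150:]

model = KNeighborsRegressor(n_neighbors=10).fit(X_train, y_train)

# Nonconformity scores on the calibration set are absolute residuals; the
# interval half-width is their (1 - alpha)-adjusted empirical quantile.
scores = np.abs(y_cal - model.predict(X_cal))
alpha = 0.1                                           # 90% confidence
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

x_new = np.array([[0.5]])
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - q:.2f}, {pred + q:.2f}]")
```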

Keywords: neural network, conformal prediction, cancer classification, regression

Procedia PDF Downloads 275
20441 Principal Component Analysis Applied to the Electric Power Systems – Practical Guide; Practical Guide for Algorithms

Authors: John Morales, Eduardo Orduña

Abstract:

The Principal Component Analysis (PCA) theory is currently used to develop algorithms for Electric Power Systems (EPS). In this context, this paper presents a practical tutorial on this technique, detailing its concept and its on-line and off-line mathematical foundations, which are necessary and desirable in EPS algorithms. Features of the eigenvectors that are very useful for real-time processing are explained, showing how these parameters can be selected through direct optimization. In addition, in order to show the application of PCA to off-line and on-line signals, a step-by-step example using Matlab commands is presented. Finally, a list of different approaches using PCA is given, together with some works that could be analyzed using this tutorial.
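Although the paper's step-by-step example uses Matlab commands, the Python sketch below mirrors the same off-line steps: centre the signal matrix, eigendecompose its covariance, and project onto the leading eigenvectors; the synthetic signal matrix is only an illustration, not an EPS measurement set.

```python
import numpy as np

# Synthetic (samples, channels) signal matrix standing in for measured EPS signals.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
signals = np.column_stack([np.sin(2 * np.pi * 50 * t),
                           0.5 * np.sin(2 * np.pi * 50 * t + 0.3),
                           rng.normal(scale=0.1, size=t.size)])

centered = signals - signals.mean(axis=0)              # remove per-channel mean
cov = np.cov(centered, rowvar=False)                    # channel covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)                  # returned in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
print("Explained variance ratio:", np.round(explained, 3))

scores = centered @ eigvecs[:, :2]                      # project onto first two PCs
print("Projected shape:", scores.shape)
```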

Keywords: practical guide, on-line, off-line, algorithms, faults

Procedia PDF Downloads 552
20440 Real Time Detection, Prediction and Reconstitution of Rain Drops

Authors: R. Burahee, B. Chassinat, T. de Laclos, A. Dépée, A. Sastim

Abstract:

The purpose of this paper is to propose a solution to detect, predict, and reconstitute rain drops in real time, during the night, using an embedded system with an infrared camera. To keep the hardware requirements low, simple models are used in an efficient image-processing algorithm that considerably reduces computation time in OpenCV. Drops are matched across two consecutive pictures, implementing a tracking system, and the computed trajectory of each drop provides the information needed to predict its future location; thanks to this technique, the processing load can be reduced. The hardware system, built around a Raspberry Pi, is optimized to run this code efficiently in real time.
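A rough sketch of the detection step is given below: frame differencing between two consecutive frames, thresholding, and contour extraction with OpenCV to locate drop candidates; the frame source, threshold, and minimum area are assumptions rather than the paper's tuned values.

```python
import cv2
import numpy as np

def detect_drops(prev_frame: np.ndarray, frame: np.ndarray, thresh=25, min_area=3):
    """Return bounding boxes (x, y, w, h) of spots that changed between frames."""
    diff = cv2.absdiff(frame, prev_frame)                          # frame differencing
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)  # keep strong changes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# Two synthetic grayscale frames with a "drop" that moved down by 5 pixels.
prev_frame = np.zeros((120, 160), dtype=np.uint8)
frame = prev_frame.copy()
cv2.circle(prev_frame, (80, 40), 2, 255, -1)
cv2.circle(frame, (80, 45), 2, 255, -1)
print(detect_drops(prev_frame, frame))
```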

Keywords: reconstitution, prediction, detection, rain drop, real time, raspberry, infrared

Procedia PDF Downloads 406