Search results for: spatio-temporal prediction
1998 Estimating Cyclone Intensity Using INSAT-3D IR Images Based on Convolution Neural Network Model
Authors: Divvela Vishnu Sai Kumar, Deepak Arora, Sheenu Rizvi
Abstract:
Forecasting a cyclone from satellite images involves estimating the intensity of the cyclone and predicting it before the cyclone arrives. This research work can help people take safety measures before a cyclone strikes. Predicting the intensity of a cyclone is very important to save lives and minimize the damage caused by cyclones, which are among the costliest natural disasters and cause extensive damage globally through their associated hazards. The authors have proposed five different CNN (Convolutional Neural Network) models that estimate the intensity of cyclones from INSAT-3D IR images. Among the many techniques used to estimate intensity, the best model proposed by the authors estimates intensity with a root mean squared error (RMSE) of 10.02 kts.
Keywords: estimating cyclone intensity, deep learning, convolution neural network, prediction models
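As a rough illustration of this kind of intensity-regression CNN (the authors' exact architecture, image size, and data pipeline are not given above, so the layer widths and the loading helper below are assumptions), a minimal Keras sketch could look like this:

# Illustrative sketch only; not the authors' exact model. Image shape, layer sizes,
# and the data-loading helper are assumptions.
import numpy as np
from tensorflow.keras import layers, models

def build_intensity_cnn(input_shape=(128, 128, 1)):
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),  # single regression output: intensity in knots
    ])
    model.compile(optimizer="adam", loss="mse")  # RMSE is the square root of the evaluated MSE
    return model

# Hypothetical usage with pre-extracted IR image arrays and wind-speed labels (kts):
# X_train, y_train, X_test, y_test = load_insat3d_ir_patches()   # assumed helper, not shown
# model = build_intensity_cnn()
# model.fit(X_train, y_train, epochs=50, validation_split=0.2)
# rmse = np.sqrt(model.evaluate(X_test, y_test))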
Procedia PDF Downloads 131
1997 Application of EEG Wavelet Power to Prediction of Antidepressant Treatment Response
Authors: Dorota Witkowska, Paweł Gosek, Lukasz Swiecicki, Wojciech Jernajczyk, Bruce J. West, Miroslaw Latka
Abstract:
In clinical practice, the selection of an antidepressant often comes down to lengthy trial and error. In this work, we employ the normalized wavelet power of alpha waves as a biomarker of antidepressant treatment response. This novel EEG metric takes into account both the non-stationarity and the intersubject variability of alpha waves. We recorded resting, 19-channel EEG (closed eyes) in 22 inpatients suffering from unipolar (UD, n=10) or bipolar (BD, n=12) depression. The EEG measurement was done at the end of the short washout period which followed previously unsuccessful pharmacotherapy. The normalized alpha wavelet power of 11 responders was markedly different from that of 11 nonresponders at several, mostly temporoparietal, sites. Using the prediction of treatment response based on the normalized alpha wavelet power, we achieved 81.8% sensitivity and 81.8% specificity for channel T4.
Keywords: alpha waves, antidepressant, treatment outcome, wavelet
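A hedged sketch of one way such a normalized alpha-band wavelet power could be computed for a single channel; the wavelet, scale range, and normalization choice below are assumptions and may differ from the paper's exact definition:

import numpy as np
import pywt

def normalized_alpha_wavelet_power(eeg, fs=250.0):
    # Continuous wavelet transform of one EEG channel (Morlet wavelet assumed)
    scales = np.arange(1, 128)
    coeffs, freqs = pywt.cwt(eeg, scales, "morl", sampling_period=1.0 / fs)
    power = np.abs(coeffs) ** 2                   # time-frequency power
    alpha = (freqs >= 8.0) & (freqs <= 13.0)      # alpha band, 8-13 Hz
    # Normalizing alpha power by total power makes the metric comparable across subjects
    return power[alpha].sum() / power.sum()

# Hypothetical usage on one channel (e.g., T4), sampled at 250 Hz:
# score = normalized_alpha_wavelet_power(channel_t4_samples, fs=250.0)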
Procedia PDF Downloads 316
1996 Reliability Assessment of Various Empirical Formulas for Prediction of Scour Hole Depth (Plunge Pool) Using a Comprehensive Physical Model
Authors: Majid Galoie, Khodadad Safavi, Abdolreza Karami Nejad, Reza Roshan
Abstract:
In this study, a comprehensive scouring model was developed in order to evaluate the accuracy of various empirical relationships suggested by Martins, Mason, Chian and Veronese for the prediction of scour hole depth in plunge pools. For this purpose, scour hole depths caused by free-falling jets from a flip bucket into a plunge pool were investigated, considering various discharges, angles, scouring times, etc. The final results demonstrated that all the mentioned empirical formulas, except the Mason formula, were in reasonable agreement with the experimental data.
Keywords: scour hole depth, plunge pool, physical model, reliability assessment
Procedia PDF Downloads 537
1995 Neural Network Based Path Loss Prediction for Global System for Mobile Communication in an Urban Environment
Authors: Danladi Ali
Abstract:
In this paper, we measured GSM signal strength in the city of Dnepropetrovsk in order to predict path loss in the study area using nonlinear autoregressive neural network prediction, and we also used neural network clustering to determine the average GSM signal strength received in the study area. The nonlinear autoregressive neural network predicted that the GSM signal is attenuated with a mean square error (MSE) of 2.6748 dB; this attenuation value is used to modify the COST 231 Hata and the Okumura-Hata models. The neural network clustering revealed that signal strengths of -75 dB to -95 dB are received most frequently. This means that the signal strength received in the study area is mostly weak.
Keywords: one-dimensional multilevel wavelets, path loss, GSM signal strength, propagation, urban environment and model
Procedia PDF Downloads 382
1994 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information
Authors: Haifeng Wang, Haili Zhang
Abstract:
Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stock suitable movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers’ demographic, behavioral and social information to predict their movie genre preference. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik–Chervonenkis (VC) dimensions in the machine learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use a machine learning approach to predict customers’ preferences with a small data set and to design prediction tools for these enterprises.
Keywords: computational social science, movie preference, machine learning, SVM
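A minimal sketch of the two classifiers compared above, assuming scikit-learn; the synthetic stand-in data, split, and hyperparameters are assumptions, not the paper's setup:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Stand-in for customer demographic/behavioral/social features and genre labels
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)   # Gaussian (RBF) kernel SVM
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("SVM out-of-sample accuracy:", accuracy_score(y_test, svm.predict(X_test)))
print("Logistic regression out-of-sample accuracy:", accuracy_score(y_test, logreg.predict(X_test)))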
Procedia PDF Downloads 260
1993 Hybrid Renewable Energy System Development Towards Autonomous Operation: The Deployment Potential in Greece
Authors: Afroditi Zamanidou, Dionysios Giannakopoulos, Konstantinos Manolitsis
Abstract:
A notable amount of electrical energy demand in many countries worldwide is used to cover public energy demand for road, square and other public spaces’ lighting. Renewable energy can contribute in a significant way to the electrical energy demand coverage for public lighting. This paper focuses on the sizing and design of a hybrid energy system (HES) exploiting the solar-wind energy potential to meet the electrical energy needs of lighting roads, squares and other public spaces. Moreover, the proposed HES provides coverage of the electrical energy demand for a Wi-Fi hotspot and a charging hotspot for the end-users. Alongside the sizing of the energy production system of the proposed HES, in order to ensure a reliable supply without interruptions, a storage system is added and sized. Multiple scenarios of energy consumption are assumed and applied in order to optimize the sizing of the energy production system and the energy storage system. A database with meteorological prediction data for 51 areas in Greece is developed in order to assess the possible deployment of the proposed HES. Since there are detailed meteorological prediction data for all 51 areas under investigation, the use of these data is evaluated, comparing them to real meteorological data. The meteorological prediction data are exploited to form three hourly production profiles for each area for every month of the year: minimum, average and maximum energy production. The energy production profiles are combined with the energy consumption scenarios, and the sizing results of the energy production system and the energy storage system are extracted and presented for every area. Finally, the economic performance of the proposed HES in terms of levelized cost of energy is estimated by calculating and assessing construction, operation and maintenance costs.
Keywords: energy production system sizing, Greece’s deployment potential, meteorological prediction data, wind-solar hybrid energy system, levelized cost of energy
Procedia PDF Downloads 156
1992 Prediction Factor of Recurrence Supraventricular Tachycardia After Adenosine Treatment in the Emergency Department
Authors: Chaiyaporn Yuksen
Abstract:
Background: Supraventricular tachycardia (SVT) is an abnormally fast atrial tachycardia characterized by narrow (≤ 120 ms) and constant QRS complexes. Adenosine is the drug of choice; the first dose is 6 mg, and it can be repeated with second and third doses of 12 mg, with greater than 90% success. Previous work found that patients observed for 4 hours after restoration of normal sinus rhythm had no recurrence within 24 hours. The objective of this study was to investigate the factors that influence the recurrence of SVT after adenosine in the emergency department (ED). Method: This was a retrospective, exploratory prognostic study conducted at the Emergency Department (ED) of the Faculty of Medicine, Ramathibodi Hospital, a university-affiliated super tertiary care hospital in Bangkok, Thailand, over a ten-year period between 2010 and 2020. The inclusion criteria were age > 15 years, visiting the ED with SVT, and treatment with adenosine. Recurrence of SVT in the ED was recorded for these patients. A multivariable logistic regression model was used to develop the predictive model and prediction score for recurrent PSVT. Result: 264 patients met the study criteria. Of those, 24 patients (10%) had recurrent PSVT. Independent factors predictive of recurrent PSVT included age > 65 years, heart rate (after adenosine) > 100 per min, structural heart disease, and dose of adenosine. The clinical risk score developed to predict recurrent PSVT had an accuracy of 74.41%. A score of > 6 was associated with a likelihood ratio for recurrent PSVT of 5.71. Conclusion: A clinical prediction score of > 6 was associated with recurrent PSVT in the ED.
Keywords: clinical prediction score, SVT, recurrence, emergency department
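One common way to turn a multivariable logistic regression into an integer clinical risk score is to scale and round the coefficients; the sketch below illustrates that idea only, with hypothetical predictor names, synthetic data, and an assumed rounding scheme that may not match the authors' derivation:

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical binary predictors per patient; outcome 1 = recurrent SVT (synthetic data)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age_over_65":       rng.integers(0, 2, 300),
    "hr_over_100_post":  rng.integers(0, 2, 300),
    "structural_heart":  rng.integers(0, 2, 300),
    "repeat_adenosine":  rng.integers(0, 2, 300),
    "recurrence":        rng.integers(0, 2, 300),
})
X, y = df.drop(columns="recurrence"), df["recurrence"]

model = LogisticRegression().fit(X, y)
# Scale log-odds coefficients to small integer points (one common scoring heuristic)
points = np.round(3 * model.coef_[0] / np.abs(model.coef_[0]).max()).astype(int)
print(dict(zip(X.columns, points)))   # a patient's score = sum of points for positive factors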
Procedia PDF Downloads 155
1991 An Experimental Study on Service Life Prediction of Self-Compacting Concrete Using Sorptivity as a Durability Index
Abstract:
Permeation properties have been widely used to quantify the durability characteristics of concrete for assessing long-term performance and sustainability. The processes of deterioration in concrete are mediated largely by water, so there is strong interest in finding a better way of assessing the material properties of concrete in terms of durability. Water sorptivity is a single material property that can serve as a measure of durability in service life planning and prediction, especially in severe environmental conditions. This paper presents the results of a comparative study of the sorptivity of Self-Compacting Concrete (SCC) and conventionally vibrated concrete. SCC is a new, special type of concrete mixture, characterized by high resistance to segregation, that can flow through intricate geometrical configurations in the presence of reinforcement, under its own mass, without vibration and compaction. SCC mixes were developed for paste contents of 0.38, 0.41 and 0.43 with fly ash as the filler for different cement contents ranging from 300 to 450 kg/m³. The study shows better performance by SCC in terms of capillary absorption. The sorptivity value decreased as the volume of paste increased. The use of higher paste content in SCC can make the concrete robust, with better densification of the micro-structure, improving the durability and making the concrete more sustainable with improved long-term performance. The sorptivity based on secondary absorption can be effectively used as a durability index to predict the time required for ingressing water to penetrate the concrete, which has practical significance.
Keywords: self-compacting concrete, service life prediction, sorptivity, volume of paste
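Sorptivity is conventionally obtained as the slope of cumulative water absorption per unit area against the square root of elapsed time (i = S·√t). A minimal sketch of that calculation, with illustrative (assumed) sample readings:

import numpy as np

# Assumed example readings from a capillary absorption test
t_minutes = np.array([1, 5, 10, 20, 30, 60, 120, 240, 360])               # elapsed time
i_mm = np.array([0.10, 0.22, 0.31, 0.44, 0.54, 0.76, 1.07, 1.50, 1.83])   # cumulative absorption per unit area

sqrt_t = np.sqrt(t_minutes)
S, intercept = np.polyfit(sqrt_t, i_mm, 1)   # linear fit: i = S*sqrt(t) + c
print(f"sorptivity S = {S:.4f} mm/min^0.5")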
Procedia PDF Downloads 322
1990 'CardioCare': A Cutting-Edge Fusion of IoT and Machine Learning to Bridge the Gap in Cardiovascular Risk Management
Authors: Arpit Patil, Atharav Bhagwat, Rajas Bhope, Pramod Bide
Abstract:
This research integrates IoT and ML to predict heart failure risks, utilizing the Framingham dataset. IoT devices gather real-time physiological data, focusing on heart rate dynamics, while ML, specifically Random Forest, predicts heart failure. Rigorous feature selection enhances accuracy, achieving a prediction rate of over 90%. This amalgamation marks a transformative step in proactive healthcare, highlighting early detection's critical role in cardiovascular risk mitigation. Challenges persist, necessitating continual refinement for improved predictive capabilities.
Keywords: cardiovascular diseases, internet of things, machine learning, cardiac risk assessment, heart failure prediction, early detection, cardio data analysis
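A hedged sketch of the Random Forest classification step with simple feature selection; the CSV path, target column name, and parameters are assumptions about how the Framingham data might be prepared, not the authors' pipeline:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("framingham.csv").dropna()                 # assumed local copy of the dataset
X, y = df.drop(columns="TenYearCHD"), df["TenYearCHD"]      # assumed target column name

# Simple univariate feature selection before fitting, mirroring the emphasis on feature selection
X_sel = SelectKBest(f_classif, k=8).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)
print("hold-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))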
Procedia PDF Downloads 14
1989 Learning to Recommend with Negative Ratings Based on Factorization Machine
Authors: Caihong Sun, Xizi Zhang
Abstract:
Rating prediction is an important problem for recommender systems; the task is to predict the rating a user would give to an item. Most of the existing algorithms for this task ignore the effect of negative ratings given by users, although negative ratings have a significant impact on users’ purchasing decisions in practice. In this paper, we present a rating prediction algorithm based on factorization machines that considers the effect of negative ratings, inspired by loss aversion theory. The aim of this paper is to develop a concave and a convex negative disgust function to evaluate negative ratings. Experiments are conducted on the MovieLens dataset. The experimental results demonstrate the effectiveness of the proposed methods by comparison with four other state-of-the-art approaches. Negative ratings proved highly important for the accuracy of rating prediction.
Keywords: factorization machines, feature engineering, negative ratings, recommendation systems
Procedia PDF Downloads 243
1988 Prediction of Cutting Tool Life in Drilling of Reinforced Aluminum Alloy Composite Using a Fuzzy Method
Authors: Mohammed T. Hayajneh
Abstract:
Machining of Metal Matrix Composites (MMCs) is a very significant process and has been a central problem drawing many researchers to investigate the characteristics of MMCs during different machining processes. The poor machinability of hard-particle-reinforced MMCs makes drilling a rather challenging task. Unlike drilling of conventional materials, many problems can be encountered during drilling of MMCs, such as tool wear and high cutting forces. Cutting tool wear is a very significant concern in industry: it not only influences the quality of the drilled hole but also affects the cutting tool life. Predicting the cutting tool life during drilling is essential for optimizing the cutting conditions. However, the relationship between tool life and cutting conditions, tool geometrical factors and workpiece material properties has not yet been established by any machining theory. In this research work, a fuzzy subtractive clustering system has been used to model the cutting tool life in drilling of an Al2O3 particle reinforced aluminum alloy composite in order to investigate the effect of cutting conditions on cutting tool life. This investigation can help in controlling and optimizing the cutting conditions when the process parameters are adjusted. The model built for predicting tool life uses drill diameter, cutting speed, and feed rate as input data. The validity of the model was confirmed by examinations under various cutting conditions. Experimental results have shown the efficiency of the model in predicting cutting tool life.
Keywords: composite, fuzzy, tool life, wear
Procedia PDF Downloads 297
1987 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization
Authors: Soheila Sadeghi
Abstract:
Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction
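To make the PSO-plus-ANN idea concrete, here is a simplified, self-contained sketch in which a basic global-best PSO loop tunes two hyperparameters (hidden units, learning rate) of an MLP regressor and evaluates RMSE; the swarm settings, search ranges, and synthetic data are assumptions and not the study's actual configuration:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.datasets import make_regression

# Synthetic stand-in for project cost features and actual costs
X, y = make_regression(n_samples=400, n_features=10, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def rmse_for(params):
    units, lr = int(round(params[0])), float(params[1])
    model = MLPRegressor(hidden_layer_sizes=(max(units, 2),),
                         learning_rate_init=max(lr, 1e-4),
                         max_iter=500, random_state=0).fit(X_tr, y_tr)
    return np.sqrt(mean_squared_error(y_te, model.predict(X_te)))

# Basic global-best PSO over (hidden units, learning rate)
rng = np.random.default_rng(0)
n_particles, n_iter = 8, 10
lo, hi = np.array([4.0, 1e-4]), np.array([64.0, 1e-1])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([rmse_for(p) for p in pos])
gbest = pbest[pbest_cost.argmin()]

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)  # inertia + cognitive + social terms
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([rmse_for(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()]

print("best (hidden units, learning rate):", gbest, " RMSE:", pbest_cost.min())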
Procedia PDF Downloads 61
1986 Real Time Detection, Prediction and Reconstitution of Rain Drops
Authors: R. Burahee, B. Chassinat, T. de Laclos, A. Dépée, A. Sastim
Abstract:
The purpose of this paper is to propose a solution to detect, predict and reconstitute rain drops in real time, during the night, using an embedded system with an infrared camera. To prevent the system from needing excessive hardware resources, simple models are used in a powerful image processing algorithm, considerably reducing calculation time in the OpenCV software. Using a smart model, drops are matched through a process running over two consecutive pictures, implementing a sophisticated tracking system. With this system, the computed drop trajectories provide information for predicting their future locations, so the processing load can be reduced. The hardware system, composed of a Raspberry Pi, is optimized to host this code efficiently for real-time execution.
Keywords: reconstitution, prediction, detection, rain drop, real time, raspberry, infrared
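A rough sketch of the detection step only, thresholding the difference between two consecutive infrared frames and keeping small contours as drop candidates; the threshold, blob-size limit, and matching logic are assumptions, and the full tracking/prediction stage is omitted:

import cv2

def detect_drops(prev_frame, curr_frame, thresh=25, max_area=60):
    # prev_frame and curr_frame are consecutive grayscale infrared frames
    diff = cv2.absdiff(curr_frame, prev_frame)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    drops = []
    for c in contours:
        if cv2.contourArea(c) <= max_area:          # rain drops remain small blobs
            x, y, w, h = cv2.boundingRect(c)
            drops.append((x + w // 2, y + h // 2))  # drop centroid
    return drops

# Hypothetical usage: matching centroids between successive frames yields per-drop
# velocity, from which the next position can be extrapolated.
# centers_t = detect_drops(frame_t_minus_1, frame_t)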
Procedia PDF Downloads 420
1985 Performance Analysis of Artificial Neural Network with Decision Tree in Prediction of Diabetes Mellitus
Authors: J. K. Alhassan, B. Attah, S. Misra
Abstract:
Human beings have the ability to make logical decisions. Although human decision-making is often optimal, it is insufficient when a huge amount of data is to be classified. Medical datasets are a vital ingredient in predicting patients' health conditions, and obtaining the best prediction calls for the most suitable machine learning algorithms. This work compared the performance of Artificial Neural Network (ANN) and Decision Tree Algorithm (DTA) models on several performance metrics using diabetes data. The evaluation was done using the Weka software, and it was found that the DTA models performed better than the ANN models. Multilayer Perceptron (MLP) and Radial Basis Function (RBF) were the two algorithms used for ANN, while REPTree and LADTree were the DTA models used. The Root Mean Squared Error (RMSE) of MLP is 0.3913, that of RBF is 0.3625, that of REPTree is 0.3174 and that of LADTree is 0.3206.
Keywords: artificial neural network, classification, decision tree algorithms, diabetes mellitus
Procedia PDF Downloads 410
1984 Comparison of Machine Learning Models for the Prediction of System Marginal Price of Greek Energy Market
Authors: Ioannis P. Panapakidis, Marios N. Moschakis
Abstract:
The Greek Energy Market is structured as a mandatory pool where producers submit their bid offers on a day-ahead basis. The System Operator solves an optimization routine aiming at the minimization of the cost of produced electricity; the solution of the optimization problem leads to the calculation of the System Marginal Price (SMP). Accurate forecasts of the SMP can lead to increased profits and more efficient portfolio management from the producer's perspective. The aim of this study is to provide a comparative analysis of various machine learning models, such as artificial neural networks and neuro-fuzzy models, for the prediction of the SMP of the Greek market. Machine learning algorithms are favored in prediction problems since they can capture and simulate the volatilities of complex time series.
Keywords: deregulated energy market, forecasting, machine learning, system marginal price
Procedia PDF Downloads 216
1983 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy
Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay
Abstract:
Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury, is associated with a three-fold risk of poor outcome, and is more amenable to corrective interventions subsequent to early identification and management. Multiple definitions for stratification of patients' risk of early acute coagulopathy have been proposed, with considerable variations in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was done to establish cut-offs for conventional coagulation assays for identification of patients with acute traumatic coagulopathy. Prospectively, data of 100 adult trauma patients were collected, the cohort was stratified by the established definition into "coagulopathic" or "non-coagulopathic" patients, and these groups were correlated with the prediction of acute coagulopathy of trauma score and the trauma-induced coagulopathy clinical score for identifying trauma coagulopathy and subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition. The overall prediction of acute coagulopathy of trauma score was 118.7±58.5 and the trauma-induced coagulopathy clinical score was 3 (0-8). Both scores were higher in coagulopathic than non-coagulopathic patients (prediction of acute coagulopathy of trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; trauma-induced coagulopathy clinical score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but the differences were not statistically significant. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high prediction of acute coagulopathy of trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the trauma-induced coagulopathy clinical score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality, in comparison to the prehospital parameter-based scoring systems. The prediction of acute coagulopathy of trauma score may be more suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests will give highly specific results.
Keywords: trauma, coagulopathy, prediction, model
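Assay cut-offs of this kind are often chosen from the ROC curve by maximizing Youden's J statistic; a minimal sketch of that step, with assumed variable names and omitted data loading:

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_cutoff(y_true, marker_values):
    # Returns the threshold maximizing sensitivity + specificity - 1
    fpr, tpr, thresholds = roc_curve(y_true, marker_values)
    return thresholds[np.argmax(tpr - fpr)]

# Hypothetical usage: y = 1 if the patient was adjudicated coagulopathic, else 0
# inr_cutoff  = youden_cutoff(y, inr_values)     # e.g., ~1.19 in this cohort
# pt_cutoff   = youden_cutoff(y, pt_seconds)     # e.g., ~15.5 s
# aptt_cutoff = youden_cutoff(y, aptt_seconds)   # e.g., ~29 s
# print("AUC (INR):", roc_auc_score(y, inr_values))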
Procedia PDF Downloads 176
1982 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education
Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue
Abstract:
In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in Educational Data Mining techniques to find new hidden information in students' learning behavior, particularly to uncover early signs of at-risk students. On the other hand, data with noise, outliers, and irrelevant information may lead to incorrect conclusions. By identifying features of students' data that have the potential to improve performance prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable student performance prediction model for higher education institutions. Data was gathered from two different systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (such as cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two types of Boosting techniques, AdaBoost and XGBoost, are the ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are the supervised learning techniques. Hyperparameters of the ensemble learning systems were fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques.
Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education
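A compact sketch of a majority-vote ensemble over base learners of the kinds named above, assuming scikit-learn; the synthetic stand-in data and hyperparameters are assumptions, not the study's tuned configuration:

from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Stand-in for merged e-learning + student-information-system features
X, y = make_classification(n_samples=2000, n_features=25, random_state=1)

ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=6)),
        ("svm", SVC()),
        ("ann", MLPClassifier(max_iter=500)),
        ("rf", RandomForestClassifier(n_estimators=200)),
    ],
    voting="hard",   # majority vote on predicted class labels
)
print("cross-validated accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())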
Procedia PDF Downloads 109
1981 Combining the Deep Neural Network with the K-Means for Traffic Accident Prediction
Authors: Celso L. Fernando, Toshio Yoshii, Takahiro Tsubota
Abstract:
Understanding the causes of road accidents and predicting their occurrence is key to preventing deaths and serious injuries from road accident events. Traditional statistical methods such as Poisson and logistic regressions have been used to find the association of traffic environmental factors with accident occurrence; recently, the artificial neural network (ANN), a computational technique that learns from historical data to make more accurate predictions, has emerged. Despite its ability to make accurate predictions, the ANN has difficulty dealing with a highly unbalanced distribution of attribute patterns in the training dataset; in such circumstances, the ANN treats the minority group as noise. However, in real-world data, the minority group is often the group of interest; e.g., in road traffic accident data, accident events are the group of interest. This study proposes a combination of k-means with the ANN to improve the predictive ability of the neural network model by alleviating the effect of the unbalanced distribution of attribute patterns in the training dataset. The results show that the proposed method improves the ability of the neural network to make predictions on a dataset with a highly unbalanced distribution of attribute patterns; however, on an evenly distributed dataset, the proposed method performs almost like a standard neural network.
Keywords: accident risks estimation, artificial neural network, deep learning, k-mean, road safety
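One common way to combine k-means with a neural network for imbalanced data is to compress the majority (non-accident) class to its cluster centroids before training; the sketch below illustrates that generic idea only, and whether it matches the authors' exact scheme is an assumption:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

# Synthetic, highly imbalanced stand-in data (5% "accident" events)
X, y = make_classification(n_samples=5000, n_features=12, weights=[0.95, 0.05], random_state=0)

X_major, X_minor = X[y == 0], X[y == 1]
k = len(X_minor)                                  # shrink majority class to roughly minority size
centroids = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_major).cluster_centers_

X_bal = np.vstack([centroids, X_minor])           # balanced training set: centroids + minority samples
y_bal = np.concatenate([np.zeros(k), np.ones(len(X_minor))])

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=600, random_state=0).fit(X_bal, y_bal)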
Procedia PDF Downloads 164
1980 Applying Artificial Neural Networks to Predict Speed Skater Impact Concussion Risk
Authors: Yilin Liao, Hewen Li, Paula McConvey
Abstract:
Speed skaters often face a risk of concussion when they fall on the ice and impact crash mats during practices and competitive races. Several variables, including those related to the skater, the crash mat, and the impact position (body side/head/feet impact), are believed to influence the severity of the skater's concussion. While computer simulation modeling can be employed to analyze these accidents, the simulation process is time-consuming and does not provide rapid information for coaches and teams to assess the skater's injury risk in competitive events. This research paper explores the feasibility of using AI techniques to evaluate a skater's potential concussion severity and develops a fast concussion prediction tool using artificial neural networks to reduce the risk of treatment delays for injured skaters. The primary data is collected through virtual tests and physical experiments designed to simulate skater-mat impact. It is then analyzed to identify patterns and correlations; finally, it is used to train and fine-tune the artificial neural networks for accurate prediction. The development of the prediction tool through machine learning strategies contributes to the application of AI methods in sports science and has theoretical implications for using AI techniques in predicting and preventing sports-related injuries.
Keywords: artificial neural networks, concussion, machine learning, impact, speed skater
Procedia PDF Downloads 110
1979 Wildland Fire in Terai Arc Landscape of Lesser Himalayas Threatening the Tiger Habitat
Authors: Amit Kumar Verma
Abstract:
The present study deals with a fire prediction model in the Terai Arc Landscape (TAL), one of the most dramatic ecosystems in Asia, where large, wide-ranging species such as tiger, rhino, and elephant thrive while bringing economic benefits to the local people. Forest fires cause huge economic and ecological losses, release considerable quantities of carbon into the air, and are an important factor inflating the global burden of carbon emissions. Forest fire is also an important factor in the behavioral and ecological habits of tigers in the wild. Post-fire changes to the micro- and macro-habitat directly affect tiger habitat, and vulnerability to fire is reflected in changes to the microhabitat (humus, soil profile, litter, vegetation, grassland ecosystem). Soil fauna and microorganisms such as spiders, annelids and other arthropods are directly affected by forest fire, and indirectly all these organisms contribute to the development of tiger (Panthera tigris) habitat. On the other hand, fire brings depletion of prey species and movement of tigers from the wild into human-dominated areas, which may lead to conflict that is dangerous for both tigers and human beings. Early forest fire prediction through mapping of risk zones can help minimize fire frequency and manage forest fires, thereby minimizing losses. Satellite data play a vital role in identifying and mapping forest fire and recording the frequency with which different vegetation types are affected. Thematic hazard maps were generated using the IDW technique, and a prediction model for fire occurrence was developed for TAL. Fire occurrence records were collected from the state forest department for 2000 to 2014. A discriminant function model was used for developing the prediction model for forest fires in TAL, and random points for non-occurrence of fire were generated. Based on the attributes of points of occurrence and non-occurrence, the model predicts fire occurrence. The map of predicted probabilities classified the study area into five classes based upon the intensity of hazard: very high (12.94%), high (23.63%), moderate (25.87%), low (27.46%) and no fire (10.1%). The model is able to classify 78.73 percent of points correctly and hence can be used for this purpose with confidence; overall, the model also classifies almost 69% of points correctly. This study exemplifies the usefulness of a forest fire prediction model and offers a more effective way of managing forest fire. Overall, this study provides a model for the conservation of the tiger's natural habitat and of the forest, which is beneficial for wildlife and human beings alike.
Keywords: fire prediction model, forest fire hazard, GIS, landsat, MODIS, TAL
Procedia PDF Downloads 352
1978 Allometric Models for Biomass Estimation in Savanna Woodland Area, Niger State, Nigeria
Authors: Abdullahi Jibrin, Aishetu Abdulkadir
Abstract:
The development of allometric models is crucial to accurate forest biomass/carbon stock assessment. The aim of this study was to develop a set of biomass prediction models that enable the determination of total tree aboveground biomass for a savannah woodland area in Niger State, Nigeria. Based on data collected through biometric measurements of 1816 trees and destructive sampling of 36 trees, five species-specific models and one site-specific model were developed. The sample size was distributed equally among the five most dominant species in the study site (Vitellaria paradoxa, Irvingia gabonensis, Parkia biglobosa, Anogeissus leiocarpus, Pterocarpus erinaceus). Firstly, equations were developed for the five individual species; secondly, these five species were pooled and used to develop an allometric equation for mixed species. Overall, there was a strong positive relationship between total tree biomass and stem diameter. Coefficients of determination (R2 values) ranging from 0.93 to 0.99 (p < 0.001) were realised for the models, with considerably low standard errors of the estimates (SEE), which confirms that total tree aboveground biomass has a significant relationship with dbh. The F-test values for the biomass prediction models were also significant at p < 0.001, which indicates that the models are valid. This study recommends that, for improved biomass estimates in the study site, the site-specific biomass models should preferably be used instead of generic models.
Keywords: allometry, biomass, carbon stock, model, regression equation, woodland, inventory
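A minimal sketch of fitting an allometric equation of the common log-log form AGB = a · DBH^b by ordinary least squares; the sample measurements below are assumptions for illustration, not the study's field data:

import numpy as np

# Assumed example pairs of stem diameter (cm) and destructively sampled aboveground biomass (kg)
dbh_cm = np.array([8.2, 11.5, 14.0, 18.3, 22.7, 27.1, 33.4, 40.2])
agb_kg = np.array([14.0, 32.5, 55.0, 110.0, 190.0, 300.0, 510.0, 820.0])

b, ln_a = np.polyfit(np.log(dbh_cm), np.log(agb_kg), 1)   # ln(AGB) = ln(a) + b*ln(DBH)
a = np.exp(ln_a)
pred = a * dbh_cm ** b
r2 = 1 - np.sum((agb_kg - pred) ** 2) / np.sum((agb_kg - agb_kg.mean()) ** 2)
print(f"AGB = {a:.3f} * DBH^{b:.3f},  R^2 = {r2:.3f}")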
Procedia PDF Downloads 448
1977 Rocket Launch Simulation for a Multi-Mode Failure Prediction Analysis
Authors: Mennatallah M. Hussein, Olivier de Weck
Abstract:
The advancement of space exploration demands a robust space launch services program capable of reliably propelling payloads into orbit. Despite rigorous testing and quality assurance, launch failures still occur, leading to significant financial losses and jeopardizing mission objectives. Traditional failure prediction methods often lack the sophistication to account for multi-mode failure scenarios, as well as the predictive capability needed in complex dynamic systems, and they rely on expert judgment, leading to variability in risk prioritization and mitigation strategies. Hence, there is a pressing need for robust approaches that enhance launch vehicle reliability from lift-off until the vehicle reaches its parking orbit through comprehensive simulation techniques. In this study, the developed model provides a multi-mode launch vehicle simulation framework for predicting failure scenarios when incorporating new technologies, such as new propulsion systems or advanced staging separation mechanisms, into the launch system. To this end, the model combines 6-DOF system dynamics with comprehensive data analysis to simulate multiple failure modes impacting launch performance. The simulator utilizes high-fidelity physics-based simulations to capture the complex interactions between different subsystems and environmental conditions.
Keywords: launch vehicle, failure prediction, propulsion anomalies, rocket launch simulation, rocket dynamics
Procedia PDF Downloads 34
1976 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models
Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti
Abstract:
In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for both cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an impressive accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.
Keywords: indian premier league (IPL), cricket, score prediction, machine learning, support vector machines (SVM), xgboost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics
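A hedged sketch of the multiple-regression baseline and the "within +/- 10 runs" accuracy measure described above; the file name and engineered feature names are assumptions, not the paper's actual dataset:

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("ipl_matches.csv")                         # assumed historical match file
features = ["overs", "wickets", "runs_last_5", "run_rate"]  # assumed engineered features
X, y = df[features], df["final_score"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)
model = LinearRegression().fit(X_tr, y_tr)

pred = model.predict(X_te)
within_10 = np.mean(np.abs(pred - y_te) <= 10)              # fraction predicted within 10 runs
print(f"share of scores predicted within +/-10 runs: {within_10:.2%}")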
Procedia PDF Downloads 55
1975 Reconstructability Analysis for Landslide Prediction
Authors: David Percy
Abstract:
Landslides are a geologic phenomenon that affects a large number of inhabited places and is constantly monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data that works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications like landslide prediction. Since RA works exclusively with discrete data, such as soil classification or bedrock type, continuous data, such as porosity, must be binned for inclusion in the model. RA constructs models of the data which pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as the primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot encoding schemes, RA works directly with the data as it is encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy but with the advantage of a completely transparent model. The results of an RA session with a data set are a report on every combination of variables and their probability of landslide occurrence; in this way, every informative combination of states can be examined.
Keywords: reconstructability analysis, machine learning, landslides, raster analysis
Procedia PDF Downloads 68
1974 Resale Housing Development Board Price Prediction Considering Covid-19 through Sentiment Analysis
Authors: Srinaath Anbu Durai, Wang Zhaoxia
Abstract:
Twitter sentiment has been used as a predictor of price values or trends in both the stock market and the housing market. The pioneering works in this stream of research drew upon works in behavioural economics to show that sentiment or emotions impact economic decisions, while the latest works in this stream focus on the algorithm used as opposed to the data used. A literature review of works in this stream through the lens of data used shows a paucity of work that considers the impact of sentiment caused by an external factor on either the stock or the housing market, despite an abundance of works in behavioural economics showing that sentiment or emotions caused by an external factor impact economic decisions. To address this gap, this research studies the impact of Twitter sentiment pertaining to the Covid-19 pandemic on resale Housing Development Board (HDB) apartment prices in Singapore. It leverages SNSCRAPE to collect tweets pertaining to Covid-19 for sentiment analysis; the lexicon-based tools VADER and TextBlob are used for sentiment analysis; Granger causality is used to examine the relationship between Covid-19 cases and the sentiment score; and neural networks are leveraged as prediction models. Twitter sentiment pertaining to Covid-19 as a predictor of HDB prices in Singapore is studied in comparison with the traditional predictors of housing prices, i.e., structural and neighbourhood characteristics. The results indicate that using Twitter sentiment pertaining to Covid-19 leads to better prediction than using only the traditional predictors and that it performs better than two of the traditional predictors. Hence, Twitter sentiment pertaining to an external factor should be considered as important as traditional predictors. This paper demonstrates the real-world economic applications of sentiment analysis of Twitter data.
Keywords: sentiment analysis, Covid-19, housing price prediction, tweets, social media, Singapore HDB, behavioral economics, neural networks
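An illustrative sketch of two of the analysis steps named above, VADER sentiment scoring of Covid-19 tweets and a Granger-causality check against case counts; the file names, column names, and aggregation scheme are assumptions:

import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from statsmodels.tsa.stattools import grangercausalitytests

tweets = pd.read_csv("covid_tweets.csv", parse_dates=["date"])       # assumed scraped tweets
analyzer = SentimentIntensityAnalyzer()
tweets["compound"] = tweets["text"].apply(lambda t: analyzer.polarity_scores(t)["compound"])
daily_sentiment = tweets.groupby(tweets["date"].dt.date)["compound"].mean()

cases = pd.read_csv("sg_daily_cases.csv", index_col="date")["new_cases"]  # assumed daily case counts
panel = pd.concat([daily_sentiment.rename("sentiment"), cases], axis=1).dropna()

# Test whether case counts Granger-cause the sentiment series (column order: effect, then cause)
grangercausalitytests(panel[["sentiment", "new_cases"]], maxlag=7)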
Procedia PDF Downloads 118
1973 Combined Effect of Heat Stimulation and Delay Addition of Superplasticizer with Slag on Fresh and Hardened Property of Mortar
Authors: Antoni Wibowo, Harry Pujianto, Dewi Retno Sari Saputro
Abstract:
The stock market can provide huge profits in a relatively short time in the financial sector; however, it also carries a high risk for investors and traders if they are not careful about the factors that affect the stock market. Therefore, they should pay attention to the dynamic fluctuations and movements of the stock market to optimize the profits from their investments. In this paper, we present a nonlinear autoregressive exogenous (NARX) model to predict the movements of the stock market, in particular the movements of the closing price index. As a case study, we consider predicting the movement of the closing price of the Indonesia composite index (IHSG) and choose the best NARX structures for IHSG’s prediction.
Keywords: NARX (Nonlinear Autoregressive Exogenous Model), prediction, stock market, time series
Procedia PDF Downloads 244
1972 Prediction of the Behavior of 304L Stainless Steel under Uniaxial and Biaxial Cyclic Loading
Authors: Aboussalih Amira, Zarza Tahar, Fedaoui Kamel, Hammoudi Saleh
Abstract:
This work focuses on the simulation and prediction of the behaviour of austenitic stainless steel (SS) 304L under complex loading under stress-controlled and imposed-strain conditions. The Chaboche model is able to describe the response of the material through the combination of isotropic and nonlinear kinematic hardening; the model is implemented in the ZébuLon computer code. First, we represent the evolution of the axial stress as a function of the plastic strain through hysteresis loops, revealing hardening behaviour caused by the progressive increase in stress in the tension/compression direction. In a second step, the study of the ratcheting phenomenon takes a key place in this work, through the appearance of a mean stress, in addition to loading the material biaxially in tension/torsion.
Keywords: damage, 304L, ratcheting, plastic strain
Procedia PDF Downloads 94
1971 Prediction of Conducted EMI Noise in a Converter
Abstract:
Due to higher switching frequencies, conducted electromagnetic interference (EMI) noise is generated in a converter and degrades its performance, so mitigating the EMI noise of a high-performance converter is an essential requirement. Conducted EMI includes two types of emission: common mode (CM) and differential mode (DM) noise. CM noise is due to parasitic capacitances present in a converter, and DM noise is caused by the switching current. There is therefore a dire need to understand the main causes of EMI noise. Hence, we propose a novel method to predict the conducted EMI noise of different converter topologies at an early design stage. This paper also presents a comparison of the conducted EMI noise produced by different SMPS topologies. We also make an attempt to develop an EMI noise model for a converter which allows detailed performance analysis. The proposed method is applied to different converters as examples, and experimental results verify the novel prediction technique.
Keywords: EMI, electromagnetic interference, SMPS, switch-mode power supply, common mode, CM, differential mode, DM, noise
Procedia PDF Downloads 1211
1970 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning
Authors: Shayla He
Abstract:
Background and Purpose: According to Chamie (2017), it is estimated that no less than 150 million people, or about 2 percent of the world’s population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population has increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend in the homeless population is crucial to helping states and cities make affordable housing plans and other community service plans ahead of time to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned on the dataset from New York City, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the Coefficient of Determination (R2) from -11.73 to 0.88 and reducing MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak %error of 14.5% between the actual and the predicted count. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and the predicted homeless population, with a peak %error of less than 8.6%. Conclusions and Implications: This work is the first to apply an RNN to model the time series of homeless-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Moreover, this prediction can serve as a reference to policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
Keywords: homeless, prediction, model, RNN
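A simplified sketch of an RNN regressor on a monthly count series, in the spirit of HP-RNN; the window length, network size, and synthetic stand-in data are assumptions and do not reproduce the study's actual features or architecture:

import numpy as np
from tensorflow.keras import layers, models

def make_windows(series, lookback=12):
    # Turn a 1-D series into (samples, lookback, 1) windows and next-step targets
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)

# Synthetic stand-in for a monthly sheltered-homeless count series
months = np.arange(240)
counts = 20000 + 150 * months + 2000 * np.sin(months / 6) + np.random.normal(0, 500, 240)

X, y = make_windows(counts, lookback=12)
model = models.Sequential([
    layers.LSTM(32, input_shape=(12, 1)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, verbose=0)
print("next-month prediction:", float(model.predict(X[-1:], verbose=0).squeeze()))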
Procedia PDF Downloads 121
1969 Performance Prediction Methodology of Slow Aging Assets
Authors: M. Ben Slimene, M.-S. Ouali
Abstract:
Asset management of urban infrastructures faces a multitude of challenges that need to be overcome to obtain a reliable measurement of performance. Predicting the performance of slowly aging systems is one of those challenges; it helps the asset manager investigate specific failure modes and undertake the appropriate maintenance and rehabilitation interventions to avoid catastrophic failures as well as to optimize maintenance costs. This article presents a methodology for modeling the deterioration of slowly degrading assets based on an operating history. It consists of extracting degradation profiles by grouping together assets that exhibit similar degradation sequences using an unsupervised classification technique derived from artificial intelligence. The obtained clusters are used to build the performance prediction models. This methodology is applied to a sample from a stormwater drainage culvert dataset.
Keywords: artificial intelligence, clustering, culvert, regression model, slow degradation
Procedia PDF Downloads 112