Search results for: setting prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4055


3635 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury; it is associated with a three-fold risk of poor outcome and is more amenable to corrective intervention when identified and managed early. Multiple definitions for stratification of patients' risk for early acute coagulopathy have been proposed, with considerable variation in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presenting to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was performed to establish cut-offs for conventional coagulation assays for the identification of patients with acute traumatic coagulopathy. Prospectively, data of 100 adult trauma patients were collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the Prediction of Acute Coagulopathy of Trauma score and the Trauma-Induced Coagulopathy Clinical Score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition. The overall prediction of acute coagulopathy of trauma score was 118.7±58.5 and the trauma-induced coagulopathy clinical score was 3 (0-8). Both scores were higher in coagulopathic than non-coagulopathic patients (prediction of acute coagulopathy of trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; trauma-induced coagulopathy clinical score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but the differences were not statistically significant. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high prediction of acute coagulopathy of trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the trauma-induced coagulopathy clinical score did not differ between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison to the prehospital parameter-based scoring systems. The prediction of acute coagulopathy of trauma score may be more suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests give highly specific results.
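
As a rough illustration of the cut-off derivation described above, the following sketch shows how a receiver operating characteristic analysis with the Youden index can suggest an assay threshold such as the reported INR cut-off. This is not the authors' code; the file and column names are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' code): deriving a laboratory cut-off
# from retrospective data with ROC analysis and the Youden index.
# Column names ("inr", "coagulopathy") and the CSV file are hypothetical.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve, roc_auc_score

df = pd.read_csv("retrospective_trauma_cohort.csv")   # hypothetical file
y = df["coagulopathy"].values                          # 1 = coagulopathic outcome
scores = df["inr"].values                              # candidate assay, e.g. INR

fpr, tpr, thresholds = roc_curve(y, scores)
youden_j = tpr - fpr                                   # sensitivity + specificity - 1
best = np.argmax(youden_j)

print(f"AUC = {roc_auc_score(y, scores):.2f}")
print(f"Suggested cut-off: INR >= {thresholds[best]:.2f} "
      f"(sens {tpr[best]:.2f}, spec {1 - fpr[best]:.2f})")
```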

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 176
3634 Memory Consolidation: Application of Retrieval Strategies in the Classroom

Authors: Eric Tardif, Nicolas Meylan

Abstract:

Recent studies suggest that the consolidation of episodic memory is better achieved through repeated retrieval than with the use of concept mapping or repeated study. Although such laboratory results highly appeal to educationalists, it remains to be shown whether they can be directly used in a classroom setting. Forty-five college students (42 girls; mean age 16.1 y/o) were asked to remember pairs of biology-related words (e.g. mitochondria-energy) in two configurations. The first configuration consisted of a three-minute study of pairs of words followed by a final one-minute test in which the first word of a pair was shown and the subject asked to write down the second associated word. This procedure was repeated three times. The second configuration consisted of a one-minute study of a list of pairs of words, which was immediately followed by a one-minute test. This procedure was repeated 6 times. Subjects filled out a small questionnaire assessing their general mood, level of fatigue, stress and motivation to do the exercise. One week later, subjects were given a final test using the same words. A total of 8 lists of words were studied and tested during the semester. Results showed that subjects recalled more correct words when using the second configuration, both within the study period and one week later, confirming laboratory findings. However, the general performance (mean items recalled) as well as the motivation to do the exercise gradually decreased during the semester. Motivation was positively correlated with performance (r=0.77, p<0.05). The results suggest that laboratory findings may provide some applications in education but other variables inherent to the classroom setting must also be considered.

Keywords: long-term, episodic memory, consolidation, retrieval, school setting

Procedia PDF Downloads 339
3633 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education

Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue

Abstract:

In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in Educational Data Mining techniques to find new hidden information in students' learning behavior, particularly to uncover early symptoms of at-risk students. On the other hand, data with noise, outliers, and irrelevant information may lead to incorrect conclusions. By identifying features of students' data that have the potential to improve performance prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable students' performance prediction model for Higher Education Institutions. Data were gathered from two different systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (such as cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two types of Boosting techniques, AdaBoost and XGBoost, are ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are supervised learning techniques. Hyperparameters of the ensemble learning systems were fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques.
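
The following sketch illustrates the kind of majority-vote ensemble the abstract describes, built from the named base learners with scikit-learn's VotingClassifier. It is a minimal illustration, not the authors' pipeline; the data file, feature set, and target column are hypothetical, and hyperparameter tuning is omitted.

```python
# Minimal sketch of a majority-vote ensemble over the base learners named above.
# The CSV file and target column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

df = pd.read_csv("student_records.csv")               # merged SIS + e-learning data
X, y = df.drop(columns=["at_risk"]), df["at_risk"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

vote = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=6)),
        ("svm", make_pipeline(StandardScaler(), SVC())),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("ann", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000))),
    ],
    voting="hard",                                     # simple majority vote
)
vote.fit(X_train, y_train)
print("Hold-out accuracy:", vote.score(X_test, y_test))
```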

Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education

Procedia PDF Downloads 107
3632 Combining the Deep Neural Network with the K-Means for Traffic Accident Prediction

Authors: Celso L. Fernando, Toshio Yoshii, Takahiro Tsubota

Abstract:

Understanding the causes of road accidents and predicting their occurrence is key to preventing deaths and serious injuries from road accident events. Traditional statistical methods such as Poisson and logistic regressions have been used to find the association of traffic and environmental factors with accident occurrence; recently, the artificial neural network (ANN), a computational technique that learns from historical data to make more accurate predictions, has emerged. Despite its ability to make accurate predictions, the ANN has difficulty dealing with a highly unbalanced distribution of attribute patterns in the training dataset; in such circumstances, the ANN treats the minority group as noise. However, in real-world data, the minority group is often the group of interest; e.g., in road traffic accident data, the accident events are the group of interest. This study proposes a combination of k-means with the ANN to improve the predictive ability of the neural network model by alleviating the effect of the unbalanced distribution of attribute patterns in the training dataset. The results show that the proposed method improves the ability of the neural network to make predictions on a dataset with a highly unbalanced distribution of attribute patterns; however, on an evenly distributed dataset, the proposed method performs almost like a standard neural network.
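
One plausible way to realize the k-means/ANN combination sketched above is to compress the over-represented non-accident class to its cluster centroids before training the network. The snippet below illustrates that idea only; it is not the authors' exact procedure, and the synthetic data stand in for real accident records.

```python
# A rough sketch of one way to pair k-means with a neural network on imbalanced
# accident data: cluster the majority (non-accident) class and keep the cluster
# centroids as its representatives before training. Illustration only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in for accident records: ~3% of samples are accident events
X, y = make_classification(n_samples=5000, weights=[0.97, 0.03], random_state=0)

X_min, X_maj = X[y == 1], X[y == 0]
k = len(X_min)                                  # shrink majority class to minority size
centroids = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_maj).cluster_centers_

X_bal = np.vstack([centroids, X_min])
y_bal = np.hstack([np.zeros(k), np.ones(len(X_min))])

ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
ann.fit(X_bal, y_bal)
print("Predicted accident rate on full data:", ann.predict(X).mean())
```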

Keywords: accident risk estimation, artificial neural network, deep learning, k-means, road safety

Procedia PDF Downloads 163
3631 Applying Artificial Neural Networks to Predict Speed Skater Impact Concussion Risk

Authors: Yilin Liao, Hewen Li, Paula McConvey

Abstract:

Speed skaters often face a risk of concussion when they fall on the ice and impact crash mats during practices and competitive races. Several variables, including those related to the skater, the crash mat, and the impact position (body side/head/feet impact), are believed to influence the severity of the skater's concussion. While computer simulation modeling can be employed to analyze these accidents, the simulation process is time-consuming and does not provide rapid information for coaches and teams to assess the skater's injury risk in competitive events. This research explores the feasibility of using AI techniques to evaluate a skater's potential concussion severity and develops a fast concussion prediction tool using artificial neural networks to reduce the risk of treatment delays for injured skaters. The primary data are collected through virtual tests and physical experiments designed to simulate skater-mat impact. They are then analyzed to identify patterns and correlations; finally, they are used to train and fine-tune the artificial neural networks for accurate prediction. The development of the prediction tool by employing machine learning strategies contributes to the application of AI methods in sports science and has theoretical implications for using AI techniques in predicting and preventing sports-related injuries.
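
A minimal sketch of the kind of feed-forward network described above, mapping impact-related features to a severity measure, is given below. The feature names, target variable, and data file are hypothetical assumptions, not the study's actual variables.

```python
# Minimal sketch (not the authors' model) of a feed-forward network mapping
# impact features to a concussion-severity proxy such as peak head acceleration.
# The feature names and data file are hypothetical.
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

df = pd.read_csv("skater_mat_impacts.csv")       # virtual tests + physical experiments
X = df[["impact_speed", "body_mass", "mat_stiffness", "impact_angle"]]
y = df["peak_head_acceleration"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000))
model.fit(X_tr, y_tr)
print("R^2 on held-out impacts:", model.score(X_te, y_te))
```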

Keywords: artificial neural networks, concussion, machine learning, impact, speed skater

Procedia PDF Downloads 109
3630 Wildland Fire in Terai Arc Landscape of Lesser Himalayas Threatening the Tiger Habitat

Authors: Amit Kumar Verma

Abstract:

The present study deals with a fire prediction model for the Terai Arc Landscape (TAL), one of the most dramatic ecosystems in Asia, where large, wide-ranging species such as the tiger, rhino, and elephant thrive while bringing economic benefits to the local people. Forest fires cause huge economic and ecological losses, release considerable quantities of carbon into the air, and are an important factor inflating the global burden of carbon emissions. Forest fire is also an important factor in the behavioural and ecological habits of the tiger in the wild. Post-fire changes in micro- and macro-habitat directly affect tiger habitat. Vulnerability to fire is reflected in changes in the microhabitat (humus, soil profile, litter, vegetation, grassland ecosystem). Soil organisms such as spiders, annelids, and arthropods, as well as other beneficial microorganisms, are directly affected by forest fire, and indirectly all of these organisms contribute to the development of tiger (Panthera tigris) habitat. On the other hand, fire brings depletion of prey species and movement of tigers from the wild to human-dominated areas, which may lead to conflict that is dangerous for both tigers and human beings. Early forest fire prediction through mapping of risk zones can help minimize fire frequency and manage forest fires, thereby minimizing losses. Satellite data play a vital role in identifying and mapping forest fire and recording the frequency with which different vegetation types are affected. Thematic hazard maps have been generated using the IDW technique. A prediction model for fire occurrence was developed for TAL. Fire occurrence records were collected from the state forest department for 2000 to 2014. A discriminant function model was used to develop the prediction model for forest fires in TAL, and random points for non-occurrence of fire were generated. Based on the attributes of the occurrence and non-occurrence points, the model predicts fire occurrence. The map of predicted probabilities classified the study area into five classes: very high (12.94%), high (23.63%), moderate (25.87%), low (27.46%) and no fire (10.1%), based upon the intensity of the hazard. The model classifies 78.73 percent of occurrence points correctly and hence can be used for the purpose with confidence; overall, the model classifies almost 69% of points correctly. This study exemplifies the usefulness of a forest fire prediction model and offers a more effective way to manage forest fires. Overall, the study presents a model for the conservation of the tiger's natural habitat and of the forest, which is beneficial for both wildlife and human beings.
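
The discriminant-function step described above can be sketched with scikit-learn's LinearDiscriminantAnalysis, fitting occurrence versus random non-occurrence points and binning the predicted probabilities into hazard classes. The attribute names and probability breakpoints below are hypothetical; the study's actual predictors and class thresholds may differ.

```python
# Illustrative sketch of the discriminant-function step: fit a linear discriminant
# model on fire-occurrence vs. random non-occurrence points and bin the predicted
# probability into hazard classes. Attribute names are hypothetical.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

pts = pd.read_csv("fire_points_2000_2014.csv")   # occurrence (1) and random (0) points
X = pts[["ndvi", "slope", "dist_to_road", "dist_to_settlement", "temperature"]]
y = pts["fire"]

lda = LinearDiscriminantAnalysis().fit(X, y)
pts["p_fire"] = lda.predict_proba(X)[:, 1]

# Five hazard classes from the predicted probabilities (breakpoints are illustrative)
pts["hazard"] = pd.cut(pts["p_fire"], bins=[0, 0.2, 0.4, 0.6, 0.8, 1.0],
                       labels=["no fire", "low", "moderate", "high", "very high"],
                       include_lowest=True)
print("Overall classification accuracy:", lda.score(X, y))
```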

Keywords: fire prediction model, forest fire hazard, GIS, landsat, MODIS, TAL

Procedia PDF Downloads 352
3629 Panganay-bunso Syndrome: A Contextualized Filipino Concept of Seniority in an Industrial Setting

Authors: Anne Camille P. Balabag, Via B. Cabarda, Ruffa Mae Lomboy, Aira Joyce Nesus

Abstract:

Nowadays, Filipinos seem to dominate the outsourcing industry; one factor that affects quality of service is organizational mismanagement. Traditionally, Filipino promotions are based on tenure rather than competence. Seniority refers to the superior rank that an employee holds in an industrial setting based on the position held in a company. Yet seniority also underpins the paradigm of Filipino family structures. With this, the researchers believe that Filipinos have a deeper take on seniority, which became the motivation for this study. The researchers wanted to contextualize the Filipino concept of seniority, the perception and reactions of Filipino employees to its existence, and their relevant experiences within the industrial setting. Following a phenomenological research design, data collected from ten (10) participants with various demographic backgrounds, chosen through purposive sampling, interviewed using a semi-structured guide, and analyzed through thematic analysis revealed seven primary themes: (1) Reality of Tenureship and Competence, (2) Disparity in Age Influence, (3) Special Power of Seniority, (4) Seniority is Necessity, (5) The Filipino Organizational Values, (6) Art of Seniority in Human Resource, and (7) Confessions of the Inner Child. The findings suggest that seniority exists based on a ranking system created through human resource management and mirrored from traditional Filipino values. Also, the researchers identified three Filipino industrial values: respect, pakikipagkapwa-tao (treating others as a fellow human being), and utang na loob (debt of gratitude). Lastly, birth order was found to have direct and indirect effects on employees' conduct in an industrial context.

Keywords: organizational psychology, human resource management, Filipino psychology, industrial values

Procedia PDF Downloads 122
3628 Allometric Models for Biomass Estimation in Savanna Woodland Area, Niger State, Nigeria

Authors: Abdullahi Jibrin, Aishetu Abdulkadir

Abstract:

The development of allometric models is crucial for accurate forest biomass/carbon stock assessment. The aim of this study was to develop a set of biomass prediction models that enable the determination of total tree aboveground biomass for a savannah woodland area in Niger State, Nigeria. Based on data collected through biometric measurements of 1816 trees and destructive sampling of 36 trees, five species-specific models and one site-specific model were developed. The sample size was distributed equally among the five most dominant species in the study site (Vitellaria paradoxa, Irvingia gabonensis, Parkia biglobosa, Anogeissus leiocarpus, Pterocarpus erinaceous). Firstly, equations were developed for the five individual species; secondly, these five species were pooled to develop a mixed-species allometric equation. Overall, there was a strong positive relationship between total tree biomass and stem diameter. Coefficients of determination (R² values) ranging from 0.93 to 0.99 (P < 0.001) were realised for the models, with considerably low standard errors of the estimate (SEE), which confirms that total tree aboveground biomass has a significant relationship with dbh. The F-test values for the biomass prediction models were also significant at p < 0.001, which indicates that the models are valid. The study recommends that, for improved biomass estimates in the study site, the site-specific biomass models should preferably be used instead of generic models.
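
A typical allometric fit of the kind described above is a power-law model B = a·DBH^b estimated as a linear regression in log-log space; the sketch below assumes that form, which may differ from the published equations, and uses hypothetical file and column names.

```python
# Sketch of a typical allometric fit: a power-law model B = a * DBH^b estimated as
# a linear regression in log-log space. Data files and column names are hypothetical;
# the published models may use a different functional form.
import numpy as np
import pandas as pd
from scipy import stats

trees = pd.read_csv("destructive_sample_36_trees.csv")   # dbh (cm), biomass (kg)
ln_dbh = np.log(trees["dbh"])
ln_b = np.log(trees["biomass"])

fit = stats.linregress(ln_dbh, ln_b)
a, b = np.exp(fit.intercept), fit.slope
print(f"Biomass ~ {a:.3f} * DBH^{b:.3f},  R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3g}")

# Apply the fitted model to the inventoried trees
inventory = pd.read_csv("inventory_1816_trees.csv")
inventory["agb_kg"] = a * inventory["dbh"] ** b
```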

Keywords: allometry, biomass, carbon stock, model, regression equation, woodland, inventory

Procedia PDF Downloads 448
3627 Rocket Launch Simulation for a Multi-Mode Failure Prediction Analysis

Authors: Mennatallah M. Hussein, Olivier de Weck

Abstract:

The advancement of space exploration demands a robust space launch services program capable of reliably propelling payloads into orbit. Despite rigorous testing and quality assurance, launch failures still occur, leading to significant financial losses and jeopardizing mission objectives. Traditional failure prediction methods often lack the sophistication to account for multi-mode failure scenarios, as well as the predictive capability needed in complex dynamic systems. Traditional approaches also rely on expert judgment, leading to variability in risk prioritization and mitigation strategies. Hence, there is a pressing need for robust approaches that enhance launch vehicle reliability from lift-off until the vehicle reaches its parking orbit through comprehensive simulation techniques. In this study, the developed model provides a multi-mode launch vehicle simulation framework for predicting failure scenarios when incorporating new technologies, such as new propulsion systems or advanced staging separation mechanisms, in the launch system. To this end, the model combines 6-DOF system dynamics with comprehensive data analysis to simulate multiple failure modes impacting launch performance. The simulator utilizes high-fidelity physics-based simulations to capture the complex interactions between different subsystems and environmental conditions.

Keywords: launch vehicle, failure prediction, propulsion anomalies, rocket launch simulation, rocket dynamics

Procedia PDF Downloads 31
3626 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models

Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti

Abstract:

In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for both cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an impressive accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.
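
A compact sketch of the model-comparison step is shown below: several regressors are cross-validated on historical match features and scored by the share of predictions falling within ±10 runs, mirroring the threshold reported above. The feature set and data file are hypothetical, and the authors' full preprocessing and tuning are omitted.

```python
# Sketch of the model comparison: cross-validate several regressors on historical
# match features and score them by the share of predictions within +/- 10 runs
# of the true innings total. Feature names and the data file are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

df = pd.read_csv("ipl_matches.csv")
X = df[["runs_so_far", "wickets", "overs", "run_rate_last_5", "venue_avg_score"]]
y = df["final_score"]

models = {
    "linear": LinearRegression(),
    "knn": KNeighborsRegressor(n_neighbors=7),
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=5)
    within_10 = np.mean(np.abs(pred - y) <= 10)
    print(f"{name:14s} within +/-10 runs: {within_10:.1%}")
```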

Keywords: indian premier league (IPL), cricket, score prediction, machine learning, support vector machines (SVM), xgboost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics

Procedia PDF Downloads 53
3625 Reconstructability Analysis for Landslide Prediction

Authors: David Percy

Abstract:

Landslides are a geologic phenomenon that affects a large number of inhabited places and are constantly being monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data that works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications like landslide prediction. Since RA works exclusively with discrete data, such as soil classification or bedrock type, working with continuous data, such as porosity, requires that these data be binned for inclusion in the model. RA constructs models of the data that pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as the primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot encoding schemes, RA works directly with the data as encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy but with the advantage of a completely transparent model. The results of an RA session with a data set are a report on every combination of variables and their probability of landslide occurrence. In this way, every informative combination of states can be examined.
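
The snippet below illustrates two of the preliminaries described above, binning a continuous layer and tabulating landslide probability for every combination of discrete states; it is a simplified stand-in for, not an implementation of, reconstructability analysis, and the layer names are hypothetical.

```python
# Simplified illustration of two RA preliminaries: binning a continuous layer
# (porosity) and tabulating the landslide probability for each combination of
# discrete states. Not a full RA implementation; layer names are hypothetical.
import pandas as pd

cells = pd.read_csv("raster_cells.csv")     # one row per raster cell
cells["porosity_bin"] = pd.qcut(cells["porosity"], q=4,
                                labels=["low", "med-low", "med-high", "high"])

table = (cells
         .groupby(["soil_class", "bedrock_type", "porosity_bin"], observed=True)
         ["landslide"]                      # 0/1 dependent variable
         .agg(n="count", p_landslide="mean")
         .reset_index()
         .sort_values("p_landslide", ascending=False))
print(table.head(10))
```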

Keywords: reconstructability analysis, machine learning, landslides, raster analysis

Procedia PDF Downloads 65
3624 Resale Housing Development Board Price Prediction Considering Covid-19 through Sentiment Analysis

Authors: Srinaath Anbu Durai, Wang Zhaoxia

Abstract:

Twitter sentiment has been used as a predictor of price values or trends in both the stock market and the housing market. The pioneering works in this stream of research drew upon behavioural economics to show that sentiment or emotions impact economic decisions. The latest works in this stream focus on the algorithm used as opposed to the data used. A literature review of works in this stream through the lens of the data used shows that there is a paucity of work considering the impact of sentiment caused by an external factor on either the stock or the housing market. This is despite an abundance of works in behavioural economics showing that sentiment or emotions caused by an external factor impact economic decisions. To address this gap, this research studies the impact of Twitter sentiment pertaining to the Covid-19 pandemic on resale Housing Development Board (HDB) apartment prices in Singapore. It leverages SNSCRAPE to collect tweets pertaining to Covid-19; lexicon-based tools VADER and TextBlob are used for sentiment analysis; Granger causality is used to examine the relationship between Covid-19 cases and the sentiment score; and neural networks are leveraged as prediction models. Twitter sentiment pertaining to Covid-19 as a predictor of HDB prices in Singapore is studied in comparison with the traditional predictors of housing prices, i.e., the structural and neighbourhood characteristics. The results indicate that using Twitter sentiment pertaining to Covid-19 leads to better prediction than using only the traditional predictors, and it performs better as a predictor than two of the traditional predictors. Hence, Twitter sentiment pertaining to an external factor should be considered as important as traditional predictors. This paper demonstrates the real-world economic applications of sentiment analysis of Twitter data.
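
Two of the steps described above, scoring tweets with VADER and testing Granger causality between daily cases and daily sentiment, can be sketched as follows. The sketch assumes the vaderSentiment and statsmodels packages; the input files and column names are hypothetical, and the neural-network prediction stage is not shown.

```python
# Sketch of two steps described above: scoring Covid-19 tweets with VADER and
# testing Granger causality between daily case counts and daily sentiment.
# Assumes the vaderSentiment and statsmodels packages; file names are hypothetical.
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from statsmodels.tsa.stattools import grangercausalitytests

tweets = pd.read_csv("covid19_tweets_sg.csv", parse_dates=["date"])
analyzer = SentimentIntensityAnalyzer()
tweets["compound"] = tweets["text"].apply(lambda t: analyzer.polarity_scores(t)["compound"])
daily_sentiment = tweets.groupby(tweets["date"].dt.normalize())["compound"].mean()

cases = pd.read_csv("covid19_daily_cases_sg.csv", parse_dates=["date"]).set_index("date")["cases"]
panel = pd.concat([daily_sentiment.rename("sentiment"), cases], axis=1).dropna()

# Test whether daily case counts Granger-cause daily sentiment (lags up to 7 days)
grangercausalitytests(panel[["sentiment", "cases"]], maxlag=7)
```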

Keywords: sentiment analysis, Covid-19, housing price prediction, tweets, social media, Singapore HDB, behavioral economics, neural networks

Procedia PDF Downloads 115
3623 Combined Effect of Heat Stimulation and Delay Addition of Superplasticizer with Slag on Fresh and Hardened Property of Mortar

Authors: Antoni Wibowo, Harry Pujianto, Dewi Retno Sari Saputro

Abstract:

The stock market can provide huge profits in a relatively short time in the financial sector; however, it also carries high risk for investors and traders if they do not carefully consider the factors that affect the stock market. Therefore, they should pay attention to the dynamic fluctuations and movements of the stock market to optimize profits from their investment. In this paper, we present a nonlinear autoregressive exogenous model (NARX) to predict the movements of the stock market, in particular the movements of the closing price index. As a case study, we predict the movement of the closing price of the Indonesia Composite Index (IHSG) and choose the best NARX structures for IHSG prediction.
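
A much-simplified sketch of the NARX idea follows: the next closing price is predicted from its own lags and the lags of an exogenous series using a small neural network. Proper NARX structure selection (lag orders, hidden units) is not shown, and the data columns are hypothetical.

```python
# Much-simplified sketch of the NARX idea: predict the next closing price from
# its own lags and lags of an exogenous series with a small neural network.
# Real NARX structure selection is not shown; data columns are hypothetical.
import pandas as pd
from sklearn.neural_network import MLPRegressor

df = pd.read_csv("ihsg_daily.csv")            # columns: close, exog (e.g. exchange rate)
lags = 5
features = {}
for k in range(1, lags + 1):
    features[f"close_lag{k}"] = df["close"].shift(k)
    features[f"exog_lag{k}"] = df["exog"].shift(k)
X = pd.DataFrame(features).dropna()
y = df["close"].loc[X.index]

split = int(len(X) * 0.8)                     # chronological train/test split
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X.iloc[:split], y.iloc[:split])
print("Out-of-sample R^2:", model.score(X.iloc[split:], y.iloc[split:]))
```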

Keywords: NARX (Nonlinear Autoregressive Exogenous Model), prediction, stock market, time series

Procedia PDF Downloads 244
3622 Prediction of the Behavior of 304L Stainless Steel under Uniaxial and Biaxial Cyclic Loading

Authors: Aboussalih Amira, Zarza Tahar, Fedaoui Kamel, Hammoudi Saleh

Abstract:

This work focuses on the simulation and prediction of the behaviour of austenitic stainless steel (SS) 304L under complex loading with imposed stress and imposed strain. The Chaboche model is able to describe the response of the material by combining isotropic and nonlinear kinematic work hardening; the model is implemented in the ZébuLon computer code. First, we represent the evolution of the axial stress as a function of the plastic strain through hysteresis loops, revealing a hardening behaviour caused by the progressive increase in stress in the tension/compression direction. In a second step, the study of the ratcheting phenomenon, which appears in the presence of a mean stress, takes a key place in this work, in addition to loading of the material in the biaxial tension/torsion direction.

Keywords: damage, 304L, ratcheting, plastic strain

Procedia PDF Downloads 94
3621 Prediction of Conducted EMI Noise in a Converter

Authors: Jon Cobb, Nasir

Abstract:

Due to high switching frequencies, conducted electromagnetic interference (EMI) noise is generated in a converter and degrades the performance of the switching converter. Therefore, mitigating the EMI noise of a high-performance converter is an essential requirement. Conducted EMI includes two types of emission: common mode (CM) and differential mode (DM) noise. CM noise is due to parasitic capacitances present in a converter, and DM noise is caused by the switching current. However, there is a dire need to understand the main cause of EMI noise. Hence, we propose a novel method to predict the conducted EMI noise of different converter topologies at an early design stage. This paper also presents a comparison of the conducted EMI noise produced by different SMPS topologies. We also make an attempt to develop an EMI noise model for a converter, which allows detailed performance analysis. The proposed method is applied to different converters as examples, and experimental results verify the novel prediction technique.

Keywords: EMI, electromagnetic interference, SMPS, switch-mode power supply, common mode, CM, differential mode, DM, noise

Procedia PDF Downloads 1208
3620 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no fewer than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population has increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial in helping states and cities make affordable housing plans and other community service plans ahead of time to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and Recurrent Neural Network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned on the dataset from New York City, with its accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using the data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al (2019), HP-RNN significantly improved the prediction metrics, raising the Coefficient of Determination (R2) from -11.73 to 0.88 and reducing MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak %error of 14.5% between the actual and the predicted count. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and the predicted homeless population, with a peak %error of less than 8.6%. Conclusions and Implications: This work is the first to apply an RNN to model the time series of homeless-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Moreover, this prediction can serve as a reference to policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
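
A minimal recurrent-network sketch in the spirit of HP-RNN is given below: it learns to predict the next month's sheltered count from a sliding window of previous months. This is not the authors' HP-RNN; it assumes TensorFlow/Keras, and the data file is hypothetical.

```python
# Minimal recurrent-network sketch (not the authors' HP-RNN): predict next month's
# sheltered homeless count from a sliding window of previous months.
# Assumes TensorFlow/Keras; the data file is hypothetical.
import numpy as np
import pandas as pd
import tensorflow as tf

series = pd.read_csv("nyc_monthly_sheltered_count.csv")["count"].values.astype("float32")
series = series / series.max()                     # simple scaling

window = 12
X = np.array([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=200, verbose=0)
print("Training MSE:", model.evaluate(X, y, verbose=0))
```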

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 120
3619 Performance Prediction Methodology of Slow Aging Assets

Authors: M. Ben Slimene, M.-S. Ouali

Abstract:

Asset management of urban infrastructures faces a multitude of challenges that need to be overcome to obtain a reliable measurement of performances. Predicting the performance of slowly aging systems is one of those challenges, which helps the asset manager to investigate specific failure modes and to undertake the appropriate maintenance and rehabilitation interventions to avoid catastrophic failures as well as to optimize the maintenance costs. This article presents a methodology for modeling the deterioration of slowly degrading assets based on an operating history. It consists of extracting degradation profiles by grouping together assets that exhibit similar degradation sequences using an unsupervised classification technique derived from artificial intelligence. The obtained clusters are used to build the performance prediction models. This methodology is applied to a sample of a stormwater drainage culvert dataset.
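
The two steps of the methodology can be sketched as below: assets are grouped by their condition-score sequences with an unsupervised algorithm (here k-means), and a simple degradation curve is fitted per cluster. The culvert dataset layout, the number of clusters, and the linear profile are illustrative assumptions only.

```python
# Sketch of the methodology's two steps: cluster assets by their condition-score
# sequences (unsupervised), then fit a simple degradation curve per cluster.
# The culvert dataset and its column layout are hypothetical.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# One row per asset; columns cond_0 ... cond_9 are condition scores at ages 0..9
hist = pd.read_csv("culvert_condition_history.csv")
seq_cols = [f"cond_{t}" for t in range(10)]
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(hist[seq_cols])

ages = np.arange(10).reshape(-1, 1)
for c in range(4):
    mean_profile = hist.loc[labels == c, seq_cols].mean().values
    slope = LinearRegression().fit(ages, mean_profile).coef_[0]
    print(f"Cluster {c}: mean degradation rate {slope:.3f} condition units/year")
```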

Keywords: artificial intelligence, clustering, culvert, regression model, slow degradation

Procedia PDF Downloads 110
3618 The Use of Venous Glucose, Serum Lactate and Base Deficit as Biochemical Predictors of Mortality in Polytraumatized Patients: A Comparison with the Trauma and Injury Severity Score and Acute Physiology and Chronic Health Evaluation IV

Authors: Osama Moustafa Zayed

Abstract:

Aim of the work: To evaluate the effectiveness of venous glucose, serum lactate levels and base deficit in polytraumatized patients as simple parameters to predict mortality in these patients, compared to the predictive value of the Trauma and Injury Severity Score (TRISS) and Acute Physiology and Chronic Health Evaluation IV (APACHE IV). Introduction: Trauma is a serious global health problem, accounting for approximately one in 10 deaths worldwide and for 5 million deaths per year. Prediction of mortality in trauma patients is an important part of trauma care. Several trauma scores have been devised to predict injury severity and risk of mortality; the Trauma and Injury Severity Score (TRISS) is the most commonly used. Regardless of their accuracy, such trauma scores are based on an anatomical description of every injury and cannot be assigned to patients until a full diagnostic procedure has been performed. We therefore hypothesized that alterations in admission glucose, lactate levels and base deficit would be early and easily obtained predictors of mortality. Patients and Methods: A comparative cross-sectional study. 282 polytraumatized patients who attended the Emergency Department (ED) of the Suez Canal University Hospital during the period from 1/1/2012 to 1/4/2013 constituted the study population. Results: We found that the best cut-off value of the TRISS probability of survival score for prediction of mortality among polytraumatized patients is 90, with 77% sensitivity and 89% specificity, using the area under the ROC curve (0.89) at 95% CI. APACHE IV demonstrated 67% sensitivity and 95% specificity at 95% CI at a cut-off point of 99. The best cut-off value of random blood sugar (RBS) for prediction of mortality was >140 mg/dl, with 89% sensitivity and 49% specificity. The best cut-off value of base deficit for prediction of mortality was less than -5.6, with 64% sensitivity and 93% specificity. The best cut-off point of lactate for prediction of mortality was >2.6 mmol/L, with 92% sensitivity and 42% specificity. Conclusion: According to our results for all evaluated predictors of mortality (laboratory parameters and scores), based on the cut-off values estimated from ROC curve analysis, the highest risk of mortality was found using a cut-off value of 90 for the TRISS score, while among the laboratory parameters the highest risk of mortality was with serum lactate >2.6. All three laboratory parameters were accurate in predicting mortality in polytraumatized patients and close to one another, with areas under the curve of 0.82 for serum lactate, 0.79 for BD and 0.77 for RBS.

Keywords: APACHE IV, emergency department, polytraumatized patients, serum lactate

Procedia PDF Downloads 294
3617 Prediction of Oxygen Transfer and Gas Hold-Up in Pneumatic Bioreactors Containing Viscous Newtonian Fluids

Authors: Caroline E. Mendes, Alberto C. Badino

Abstract:

Pneumatic reactors have been widely employed in various sectors of the chemical industry, especially where high heat and mass transfer rates are required. This study aimed to obtain correlations that allow the prediction of gas hold-up (Ԑ) and the volumetric oxygen transfer coefficient (kLa), and to compare these values for three models of pneumatic reactors on two scales utilizing Newtonian fluids. Values of kLa were obtained using the dynamic pressure-step method, while Ԑ was determined using a newly proposed measurement. Comparing the three reactor models studied, mass transfer was superior in the draft-tube airlift, reaching Ԑ of 0.173 and kLa of 0.00904 s⁻¹. All correlations showed good fit to the experimental data (R² ≥ 94%), and comparisons with correlations from the literature demonstrate the need for further similar studies due to the shortage of available data, mainly for airlift reactors and high-viscosity fluids.
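
For the kLa estimation, the dynamic method is commonly reduced to the first-order model dC/dt = kLa·(C* − C), so that ln((C* − C)/(C* − C0)) = −kLa·t and kLa follows from a linear fit; the sketch below assumes that formulation, with a hypothetical data file and saturation value.

```python
# Illustrative estimation of kLa from a dissolved-oxygen step response, assuming
# the usual first-order model dC/dt = kLa*(C* - C), so that
# ln((C* - C)/(C* - C0)) = -kLa*t. The data file and saturation value are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

run = pd.read_csv("do_step_response.csv")        # columns: time_s, DO (mg/L)
c_star = 7.8                                     # assumed saturation concentration
c0 = run["DO"].iloc[0]

z = np.log((c_star - run["DO"]) / (c_star - c0))
fit = stats.linregress(run["time_s"], z)
print(f"kLa = {-fit.slope:.5f} 1/s  (R^2 = {fit.rvalue**2:.3f})")
```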

Keywords: bubble column, internal loop airlift, gas hold-up, kLa

Procedia PDF Downloads 274
3616 Calibration of Site Effect Parameters in the GMPM BSSA 14 for the Region of Spain

Authors: Gonzalez Carlos, Martinez Fransisco

Abstract:

The creation of a seismic prediction model that considers all regional variations and perfectly adjusts its results to the response spectra is very complicated. To achieve statistically acceptable results, it is necessary to process a sufficiently robust data set, and even if high efficiencies are achieved, such a model will only work properly in its own region. When it is used in other regions, differences are found due to parameters that have not been calibrated for those regions, such as the site effect. It is well known that impedance contrasts, as well as other factors belonging to the site, have a great influence on the local response; this is why this work, using the residual method, is intended to establish a regional calibration of the corresponding site-effect parameters for the region of Spain in the global GMPM BSSA 14.

Keywords: GMPM, seismic prediction equations, residual method, response spectra, impedance contrast

Procedia PDF Downloads 84
3615 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles

Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi

Abstract:

Fuel consumption (FC) is one of the key factors in determining the expense of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gearbox, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses vehicle-specific and operational environmental conditions information for around 40,000 vehicles, such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data are used to investigate the accuracy of the machine learning algorithms Linear Regression (LR), K-nearest neighbors (KNN) and Artificial Neural Networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements. The performance of the algorithms is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data and 4.2% on operational data.
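
A condensed sketch of the evaluation protocol follows: the three learners are cross-validated on the same folds and their fold-wise errors compared with a Friedman test. The feature names and data file are hypothetical, and the inner hyperparameter-tuning loop of the nested procedure is omitted for brevity.

```python
# Condensed sketch of the evaluation: cross-validate the three learners on the
# same folds and compare their fold-wise errors with a Friedman test.
# Feature names and the data file are hypothetical; nested tuning is omitted.
import pandas as pd
from scipy.stats import friedmanchisquare
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold, cross_val_score

df = pd.read_csv("hdv_fleet.csv")
X = df[["engine_power", "gross_weight", "gearbox_ratio", "mean_road_slope", "mean_speed"]]
y = df["fuel_l_per_100km"]

cv = KFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "LR": LinearRegression(),
    "KNN": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=10)),
    "ANN": make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(64, 32),
                                                        max_iter=2000, random_state=0)),
}
errors = {name: -cross_val_score(m, X, y, cv=cv, scoring="neg_mean_absolute_error")
          for name, m in models.items()}
for name, e in errors.items():
    print(f"{name}: MAE {e.mean():.2f} +/- {e.std():.2f}")
print("Friedman test:", friedmanchisquare(errors["LR"], errors["KNN"], errors["ANN"]))
```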

Keywords: artificial neural networks, fuel consumption, Friedman test, machine learning, statistical hypothesis testing

Procedia PDF Downloads 178
3614 Parametric Appraisal of Robotic Arc Welding of Mild Steel Material by Principal Component Analysis-Fuzzy with Taguchi Technique

Authors: Amruta Rout, Golak Bihari Mahanta, Gunji Bala Murali, Bibhuti Bhusan Biswal, B. B. V. L. Deepak

Abstract:

The use of industrial robots for performing welding operations is one of the chief signs of contemporary welding. Modeling the weld joint parameters and weld process parameters is one of the most crucial aspects of robotic welding. As weld process parameters affect the weld joint parameters differently, a multi-objective optimization technique has to be utilized to obtain optimal settings of the weld process parameters. In this paper, a hybrid optimization technique, i.e., Principal Component Analysis (PCA) combined with fuzzy logic, has been proposed to obtain optimal settings of weld process parameters like wire feed rate, welding current, gas flow rate, welding speed and nozzle tip-to-plate distance. The weld joint parameters considered for optimization are the depth of penetration, yield strength, and ultimate strength. PCA is a very efficient multi-objective technique for converting correlated and dependent parameters, like the weld joint parameters, into uncorrelated and independent variables. Also, in this approach there is no need to check the correlation among responses, as no individual weights are assigned to the responses. A fuzzy inference engine can efficiently incorporate these aspects into its internal hierarchy, thereby overcoming various limitations of existing optimization approaches. Finally, the Taguchi method is used to obtain the optimal setting of weld process parameters. Therefore, it is concluded that the hybrid technique has its own advantages, which can be used for quality improvement in industrial applications.

Keywords: robotic arc welding, weld process parameters, weld joint parameters, principal component analysis, fuzzy logic, Taguchi method

Procedia PDF Downloads 179
3613 Cooperative Coevolution for Neuro-Evolution of Feed Forward Networks for Time Series Prediction Using Hidden Neuron Connections

Authors: Ravneil Nand

Abstract:

Cooperative coevolution uses problem decomposition methods to solve a larger problem. Problem decomposition deals with breaking the larger problem down into a number of smaller sub-problems, depending on the method. Different problem decomposition methods have their own strengths and limitations depending on the neural network used and the application problem. In this paper we introduce a new problem decomposition method known as Hidden-Neuron Level decomposition (HNL). The HNL method is compared with established problem decomposition methods in time series prediction. The results show that the proposed approach improves the results on some benchmark data sets when compared to the standalone method and gives competitive results when compared to methods from the literature.

Keywords: cooperative coevolution, feed forward network, problem decomposition, neuron, synapse

Procedia PDF Downloads 335
3612 Numerical Prediction of Entropy Generation in Heat Exchangers

Authors: Nadia Allouache

Abstract:

The second law concept is important for optimizing energy losses in heat exchangers. The present study is devoted to the numerical prediction of entropy generation due to heat transfer and friction in a double-tube heat exchanger partly or fully filled with a porous medium. The goal of this work is to find the optimal conditions that minimize entropy generation. For this purpose, numerical modeling based on the control volume method is used to describe the flow and heat transfer phenomena in the fluid and the porous medium. The effects of the porous layer thickness, its permeability, and the effective thermal conductivity have been investigated. Unexpectedly, the fully porous heat exchanger yields lower entropy generation than the partly porous case or the fluid-only case, even though friction increases the entropy generation.

Keywords: heat exchangers, porous medium, second law approach, turbulent flow

Procedia PDF Downloads 300
3611 The Economics of Justice as Fairness

Authors: Antonio Abatemarco, Francesca Stroffolini

Abstract:

In the economic literature, Rawls' Theory of Justice is usually interpreted in a two-stage setting, where a priority for the worst-off individual is imposed as a distributive value judgment. In this paper, instead, we model Rawls' Theory in a three-stage setting, that is, a separating line is drawn between the original position, the educational stage, and working life. Hence, in this paper, we challenge the common interpretation of Rawls' Theory of Justice as Fairness by showing that this Theory goes well beyond the definition of a distributive value judgment, in such a way as to embrace efficiency issues as well. In our model, inequalities are shown to be permitted insofar as they stimulate greater effort in education in the population, and so economic growth. To our knowledge, this is the only possibility for the inequality to be 'bought' by both the most-advantaged and, above all, the least-advantaged individual, as suggested by the Difference Principle. Finally, by recalling the old tradition of 'universal ex-post efficiency', we show that a unique optimal social contract does not exist behind the veil of ignorance; more precisely, only the set of potentially Rawls-optimal social contracts can be identified a priori, and partial justice orderings derived accordingly.

Keywords: justice, Rawls, inequality, social contract

Procedia PDF Downloads 222
3610 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka

Authors: Sakshi Dhumale, Madhushree C., Amba Shetty

Abstract:

The effect of station density on the geostatistical prediction of groundwater levels is of critical importance to ensure accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings of this study demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions for groundwater levels. The increased number of monitoring stations enables improved interpolation accuracy and captures finer-scale variations in groundwater levels. These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained from this study have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
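
A single density scenario of the kind described above can be sketched as follows: groundwater levels are kriged from a subset of wells and the prediction error is checked at the held-out wells. The sketch assumes the pykrige package; the file, column names, and variogram model are hypothetical choices, not the study's configuration.

```python
# Sketch of one density scenario: krige groundwater levels from a subset of wells
# and check prediction error at the held-out wells. Assumes the pykrige package;
# file and column names are hypothetical.
import numpy as np
import pandas as pd
from pykrige.ok import OrdinaryKriging

wells = pd.read_csv("berambadi_gwl_month.csv")        # columns: x, y, gwl (one month)
subset = wells.sample(frac=0.30, random_state=0)      # e.g. a 30% station density
held_out = wells.drop(subset.index)

ok = OrdinaryKriging(subset["x"].values, subset["y"].values, subset["gwl"].values,
                     variogram_model="spherical")
pred, variance = ok.execute("points", held_out["x"].values, held_out["y"].values)

rmse = np.sqrt(np.mean((np.asarray(pred) - held_out["gwl"].values) ** 2))
print(f"RMSE at held-out wells (30% density): {rmse:.2f} m")
```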

Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability

Procedia PDF Downloads 57
3609 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement

Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti

Abstract:

Data centers have been growing in size and demand continuously in the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning. Data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted, with little thought, is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories. Then we establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
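
One way to connect quantile regression to SLA-aware provisioning is to predict an upper quantile of demand and allocate to that level; the sketch below uses a gradient-boosting model with a quantile loss. The feature and column names are hypothetical, and this is an illustration rather than the authors' framework.

```python
# Minimal sketch of quantile-based provisioning: predict the 95th percentile of
# CPU demand so allocation meets the SLA most of the time without blanket
# over-provisioning. Feature/column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("cluster_usage.csv")
X = df[["hour", "day_of_week", "requests_last_hour", "cpu_last_hour"]]
y = df["cpu_next_hour"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)   # keep time order
q95 = GradientBoostingRegressor(loss="quantile", alpha=0.95, n_estimators=300)
q95.fit(X_tr, y_tr)

pred = q95.predict(X_te)
coverage = (y_te <= pred).mean()            # fraction of hours the allocation suffices
print(f"SLA coverage of the 95th-percentile allocation: {coverage:.1%}")
```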

Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing

Procedia PDF Downloads 108
3608 Big Data: Appearance and Disappearance

Authors: James Moir

Abstract:

The mainstay of Big Data is prediction, in that it allows practitioners, researchers, and policy analysts to predict trends based upon the analysis of large and varied sources of data. These can range from changing social and political opinions to patterns in crime and consumer behaviour. Big Data has therefore shifted the criterion of success in science from causal explanation to predictive modelling and simulation. Nineteenth-century science sought to capture phenomena and to show their appearance through causal mechanisms, while 20th-century science attempted to save the appearances and relinquish causal explanations. Now 21st-century science, in the form of Big Data, is concerned with the prediction of appearances and nothing more. However, this pulls social science back in the direction of a more rule- or law-governed reality model of science and away from a consideration of the internal nature of rules in relation to various practices. In effect, Big Data offers us no more than a world of surface appearance, and in doing so it makes any context-specific conceptual sensitivity disappear.

Keywords: big data, appearance, disappearance, surface, epistemology

Procedia PDF Downloads 420
3607 Influence of Magnetized Water on the Split Tensile Strength of Concrete

Authors: Justine Cyril E. Nunag, Nestor B. Sabado Jr., Jienne Chester M. Tolosa

Abstract:

Concrete has high compressive strength but low tensile strength. The small tensile strength of concrete is regarded as its primary weakness, which is why it is typically reinforced with steel, a material that is resistant to tension. Even with steel, however, cracking can occur. In strengthening concrete, only a few researchers have modified the water to be used in a concrete mix. This study aims to compare the split tensile strength of normal structural concrete to that of concrete prepared with magnetic water and a quick-setting admixture. In this context, magnetic water is defined as tap water that has undergone a magnetic process to become magnetized water. To test the hypothesis that magnetized concrete leads to higher split tensile strength, twenty concrete specimens were made. There were five groups, each with five samples, differentiated by the number of cycles (0, 50, 100, and 150). The split tensile strength data from the Universal Testing Machine were then analyzed using various statistical models and tests to determine the significant effect of magnetized water. The result showed a moderate (+0.579) but still significant degree of correlation. The researchers also discovered that using magnetic water for 50 cycles did not result in a significant increase in the concrete's split tensile strength, which influenced the analysis of variance. These results suggest that a concrete mix containing magnetic water and a quick-setting admixture alters the typical split tensile strength of normal concrete. Magnetic water has a significant impact on concrete tensile strength; its hardness property influenced the split tensile strength of the concrete. In addition, a higher number of cycles results in stronger water magnetism. The laboratory test results show that a higher number of cycles translates to a higher tensile strength.

Keywords: hardness property, magnetic water, quick-setting admixture, split tensile strength, universal testing machine

Procedia PDF Downloads 146
3606 Prediction of Childbearing Orientations According to Couples' Sexual Review Component

Authors: Razieh Rezaeekalantari

Abstract:

Objective: The purpose of this study was to investigate the prediction of parenting orientations in terms of the components of couples' sexual review. Methods: This was a descriptive correlational study. The population consisted of 500 couples referred to the Sari Health Center. Two hundred and fifteen (215) people were selected randomly using the Krejcie-Morgan sample size table. For data collection, the childbearing orientations scale and the Multidimensional Sexual Self-Concept Questionnaire were used. Result: For data analysis, the mean and standard deviation were used, and regression, correlation and inferential statistics were used to analyze the research hypothesis. Conclusion: The findings indicate that there is not a significant relationship between the tendency toward childbearing and the predictive value of sexual review (r = 0.84) at the reported significance level (sig = 219.19) (P < 0.05). So, with 95% confidence, we conclude that there is not a meaningful relationship between sexual orientation and the tendency toward child-rearing.

Keywords: couples referring, health center, sexual review component, parenting orientations

Procedia PDF Downloads 219