Search results for: random intercepts model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18221

18041 A Study of Behavioral Phenomena Using an Artificial Neural Network

Authors: Yudhajit Datta

Abstract:

Will is a phenomenon that has puzzled humanity for a long time. It is a common belief that the will power of an individual affects the success that individual achieves in life: a person endowed with great will power is thought to be able to overcome even the most crippling setbacks of life, while a person with a weak will cannot make the most of even the greatest assets. Behavioral aspects of the human experience such as will are rarely subjected to quantitative study owing to the numerous uncontrollable parameters involved. This work is an attempt to subject the phenomenon of will to the test of an artificial neural network. The claim being tested is that the will power of an individual largely determines the success achieved in life. In the study, an attempt is made to incorporate the behavioral phenomenon of will into a computational model using data on the success of individuals obtained from an experiment. A neural network is trained using data based on part of the model and subsequently used to make predictions of will corresponding to data points of success. If the prediction agrees with the model values, the model is retained as a candidate. Ultimately, the best-fit model among the many candidates is selected and used to study the correlation between success and will.

Keywords: will power, will, success, apathy factor, random factor, characteristic function, life story

Procedia PDF Downloads 379
18040 A Genetic Based Algorithm to Generate Random Simple Polygons Using a New Polygon Merge Algorithm

Authors: Ali Nourollah, Mohsen Movahedinejad

Abstract:

In this paper, a new algorithm to generate random simple polygons from a given set of points in a two-dimensional plane is designed. The proposed algorithm uses a genetic algorithm to generate polygons with few vertices. A new merge algorithm is presented which converts any two polygons into a simple polygon: it first merges the two polygons into a polygonal chain and then converts the polygonal chain into a simple polygon. The conversion of a polygonal chain into a simple polygon is based on the removal of intersecting edges. The merge algorithm has time complexity O((r+s)*l), where r and s are the sizes of the merging polygons and l is the number of intersecting edges removed from the polygonal chain. It will be shown that 1 < l < r+s. The experimental results show that the proposed algorithm can generate a great number of different simple polygons and performs better than well-known algorithms such as space partitioning and steady growth.
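
The paper's merge algorithm itself is not reproduced here, but the core idea of its final step, turning a self-intersecting closed chain into a simple polygon by removing edge crossings, can be sketched with a standard 2-opt untangling pass (a hedged illustration; the reversal strategy is an assumption, not the authors' exact procedure):

```python
import random

def ccw(a, b, c):
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def cross(p1, p2, p3, p4):
    """True if segments p1p2 and p3p4 properly intersect."""
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def untangle(chain):
    """Remove crossings by reversing the sub-chain between two crossing
    edges (a 2-opt move); each move shortens the tour, so it terminates."""
    n, changed = len(chain), True
    while changed:
        changed = False
        for i in range(n):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue                    # these edges share a vertex
                if cross(chain[i], chain[(i+1) % n],
                         chain[j], chain[(j+1) % n]):
                    chain[i+1:j+1] = chain[i+1:j+1][::-1]
                    changed = True
    return chain

pts = [(random.random(), random.random()) for _ in range(20)]
polygon = untangle(pts)   # a simple polygon on the 20 random points
```

For points in general position, every reversal strictly shortens the closed tour, so the pass terminates with a crossing-free (simple) polygon.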

Keywords: divide and conquer, genetic algorithm, merge polygons, random simple polygon generation

Procedia PDF Downloads 533
18039 Exact Solutions for Steady Response of Nonlinear Systems under Non-White Excitation

Authors: Yaping Zhao

Abstract:

In the present study, the exact solutions for the steady-state response of quasi-linear systems under non-white wide-band random excitation are considered by means of the stochastic averaging method. The nonlinearity of the systems contains power-law damping and the cross-product term of the power-law damping and displacement. The drift and diffusion coefficients of the Fokker-Planck-Kolmogorov (FPK) equation after averaging are obtained by a succinct approach. After solving the averaged FPK equation, the joint probability density function and the marginal probability density function in steady state are obtained. In the solution process, the eigenvalue problem of the ordinary differential equation is handled by an integral equation method. Some new results are obtained, and a novel method for dealing with problems in nonlinear random vibration is proposed.
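
As background for the averaging step, the generic one-dimensional form of the problem can be stated as follows (a standard textbook form, not the paper's specific system):

```latex
% Averaged Ito equation:  dA = m(A) dt + sigma(A) dW(t)
% Stationary FPK equation:
%   0 = -d/da [ m(a) p(a) ] + (1/2) d^2/da^2 [ sigma^2(a) p(a) ]
% Zero-probability-flux stationary solution:
p(a) \;=\; \frac{C}{\sigma^{2}(a)}
\exp\!\left( \int^{a} \frac{2\, m(u)}{\sigma^{2}(u)}\, du \right),
\qquad C \text{ a normalizing constant.}
```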

Keywords: random vibration, stochastic averaging method, FPK equation, transition probability density

Procedia PDF Downloads 503
18038 Classification for Obstructive Sleep Apnea Syndrome Based on Random Forest

Authors: Cheng-Yu Tsai, Wen-Te Liu, Shin-Mei Hsu, Yin-Tzu Lin, Chi Wu

Abstract:

Background: Obstructive sleep apnea syndrome (OSAS) is a common respiratory disorder during sleep, and body parameters have been identified as highly predictive of OSAS severity. However, the effects of body parameters on OSAS severity remain unclear. Objective: The objective of this study is to establish a prediction model for OSAS using body parameters and to investigate their effects on OSAS. Methodologies: Severity was quantified by polysomnography as the mean hourly number of dips in oxygen saturation greater than 3% during examination in a hospital in New Taipei City (Taiwan). Four levels of OSAS severity were classified by the apnea and hypopnea index (AHI) according to the American Academy of Sleep Medicine (AASM) guideline. Body parameters, including neck circumference, waist size, and body mass index (BMI), were obtained from a questionnaire. The subjects were then divided into two groups: a training group used to build the random forest (RF) classifier and a testing group used to evaluate classification accuracy. Results: A total of 3330 subjects who had undergone polysomnography for OSAS severity evaluation were recruited in this study. An RF of 1000 trees correctly classified 79.94% of test cases. When further evaluated on the test cohort, the RF identified waist size and BMI as highly important factors in OSAS. Conclusion: It is possible to prescreen patients by body parameters, which can pre-evaluate health risks.
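
A minimal sketch of the classification pipeline described above, using scikit-learn (the data frame df and its column names are assumed for illustration; the paper's exact preprocessing is not shown):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# df is an assumed table: one row per subject, with the three body
# parameters and a four-level AHI-based severity class.
X = df[["neck_circumference", "waist_size", "bmi"]]
y = df["severity"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestClassifier(n_estimators=1000, random_state=0).fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, rf.predict(X_te)))
print(dict(zip(X.columns, rf.feature_importances_)))  # waist/BMI importance
```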

Keywords: apnea and hypopnea index, body parameters, obstructive sleep apnea syndrome, random forest

Procedia PDF Downloads 153
18037 Micromechanical Modeling of Fiber-Matrix Debonding in Unidirectional Composites

Authors: M. Palizvan, M. T. Abadi, M. H. Sadr

Abstract:

Due to variations in damage mechanisms at the microscale, the behavior of fiber-reinforced composites is nonlinear and difficult to model. To exploit computational advantages, the homogenization method is applied to the micro-scale model in order to minimize cost at the expense of detail in local microscale phenomena. In this paper, the effective stiffness is calculated by homogenizing the nonlinear behavior of a composite representative volume element (RVE) containing fiber-matrix debonding. The damage modes of the RVE are captured using cohesive elements and contact conditions for the cohesive behavior of the fiber-matrix interface. To predict more realistic responses of composite materials, different random distributions of fibers are proposed besides square and hexagonal arrays. It is shown that, in some cases, different fiber distributions produce quite different damage behavior. A comprehensive comparison of the resulting responses is presented.
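
The random fiber distributions mentioned above can be generated, for example, by random sequential adsorption; a minimal sketch (the paper does not state its placement algorithm, so RSA here is an assumption):

```python
import numpy as np

def rsa_fibers(n_fibers, radius, rve_size, min_gap=0.0, seed=0):
    """Random sequential adsorption: draw fibre centres uniformly and keep
    each candidate only if it does not overlap previously placed fibres.
    Note: may stall near the RSA jamming limit at high volume fractions."""
    rng = np.random.default_rng(seed)
    centres = []
    while len(centres) < n_fibers:
        c = rng.uniform(radius, rve_size - radius, size=2)
        if all(np.linalg.norm(c - p) >= 2 * radius + min_gap for p in centres):
            centres.append(c)
    return np.array(centres)

centres = rsa_fibers(n_fibers=30, radius=3.5, rve_size=100.0, min_gap=0.5)
```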

Keywords: homogenization, cohesive zone model, fiber-matrix debonding, RVE

Procedia PDF Downloads 167
18036 Heart Attack Prediction Using Several Machine Learning Methods

Authors: Suzan Anwar, Utkarsh Goyal

Abstract:

Heart rate (HR) is a predictor of cardiovascular, cerebrovascular, and all-cause mortality in the general population, as well as in patients with cardiovascular and cerebrovascular diseases. Machine learning (ML) significantly improves the accuracy of cardiovascular risk prediction, increasing the number of patients identified who could benefit from preventive treatment while avoiding unnecessary treatment of others. This research examines the relationship between an individual's heart health inputs, such as age, sex, cp, trestbps, thalach, oldpeak, etc., and the likelihood of developing heart disease. Machine learning techniques such as logistic regression and decision trees are used, implemented in Python. The results of testing and evaluating the model on the Heart Failure Prediction Dataset show the chance of a person having heart disease with varying accuracy. Logistic regression yielded an accuracy of 80.48% without data handling. With data handling (normalization with StandardScaler), logistic regression achieved an improved accuracy of 87.80%, decision tree 100%, random forest 100%, and SVM 100%.
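
A minimal sketch of the scaled logistic-regression variant (X and y are assumed to hold the dataset's features and heart-disease labels):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# X, y assumed: features (age, sex, cp, trestbps, thalach, oldpeak, ...)
# and the heart-disease label from the Heart Failure Prediction Dataset.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```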

Keywords: heart rate, machine learning, SVM, decision tree, logistic regression, random forest

Procedia PDF Downloads 138
18035 Survival Data with Incomplete Missing Categorical Covariates

Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar

Abstract:

Censored survival data with incomplete covariates are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The survival outcome is modeled within the class of generalized linear models, and this method requires estimation of the parameters of the distribution of the covariates. In this paper, we consider clinical trial data with five covariates, four of which have some missing values, in the presence of censoring.
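
For reference, the E-step of the EM by the method of weights takes the standard form below, with x^(j) ranging over the possible patterns of the categorical covariates (a textbook statement, not the paper's specific model):

```latex
w_{ij} \;=\; P\!\left(x_i = x^{(j)} \mid t_i, \delta_i; \theta\right)
\;=\; \frac{ f\!\left(t_i, \delta_i \mid x^{(j)}; \beta\right)\,
             p\!\left(x^{(j)}; \alpha\right) }
           { \sum_{k} f\!\left(t_i, \delta_i \mid x^{(k)}; \beta\right)\,
             p\!\left(x^{(k)}; \alpha\right) }
```

The M-step then maximizes the w_ij-weighted complete-data log-likelihood, here with f a censored Weibull survival density.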

Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull distribution

Procedia PDF Downloads 405
18034 Deterministic and Stochastic Modeling of a Micro-Grid Management for Optimal Power Self-Consumption

Authors: D. Calogine, O. Chau, S. Dotti, O. Ramiarinjanahary, P. Rasoavonjy, F. Tovondahiniriko

Abstract:

Mafate is a natural circus in the north-western part of Reunion Island, without an electrical grid or road network. A micro-grid concept is being experimented with in this area, composed of photovoltaic production combined with electrochemical batteries, in order to meet the local population's demand for self-consumption of electricity. This work develops a discrete model as well as a stochastic model in order to reach an optimal equilibrium between production and consumption for a cluster of houses. The management of the energy leads to a large linearized programming system, where the time interval of interest is 24 hours. The experimental data are the solar production, the stored energy, and the parameters of the different electrical devices and batteries. The unknown variables to evaluate are the consumption of the various electrical services, the energy drawn from and stored in the batteries, and the inhabitants’ planning wishes. The objective is to fit the solar production to the electrical consumption of the inhabitants, with an optimal use of the energy in the batteries, while satisfying the users' planning requirements as widely as possible. In the discrete model, the parameters and solutions of the linear programming system are deterministic scalars, whereas in the stochastic approach, the data parameters and the linear programming solutions become random variables, whose distributions can be imposed or estimated from samples of real observations or from samples of optimal discrete equilibrium solutions.
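
A toy version of the self-consumption program can be written as a small linear program; the sketch below uses PuLP with invented hourly data and a single aggregate battery (all numbers are assumptions, and the real system is far larger and mixed-integer):

```python
import pulp

T = range(24)
solar = [0,0,0,0,0,1,3,5,7,8,9,9,9,8,7,5,3,1,0,0,0,0,0,0]  # kWh/h (assumed)
demand = [2.0] * 24                                         # kWh/h (assumed)
cap, eff = 20.0, 0.9                           # battery size, charge efficiency

prob = pulp.LpProblem("self_consumption", pulp.LpMinimize)
c = pulp.LpVariable.dicts("charge", T, 0)      # kWh charged in hour t
d = pulp.LpVariable.dicts("discharge", T, 0)   # kWh discharged in hour t
u = pulp.LpVariable.dicts("unserved", T, 0)    # unmet demand in hour t
s = pulp.LpVariable.dicts("soc", T, 0, cap)    # battery state of charge

prob += pulp.lpSum(u[t] for t in T)            # objective: minimise unmet demand
for t in T:
    prob += solar[t] + d[t] - c[t] >= demand[t] - u[t]  # hourly energy balance
    prob += s[t] == (s[23] if t == 0 else s[t-1]) + eff*c[t] - d[t]  # cyclic SOC

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("unserved energy:", pulp.value(prob.objective))
```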

Keywords: photovoltaic production, power consumption, battery storage resources, random variables, stochastic modeling, estimations of probability distributions, mixed integer linear programming, smart micro-grid, self-consumption of electricity

Procedia PDF Downloads 110
18033 The Predictive Utility of Subjective Cognitive Decline Using Item Level Data from the Everyday Cognition (ECog) Scales

Authors: J. Fox, J. Randhawa, M. Chan, L. Campbell, A. Weakely, D. J. Harvey, S. Tomaszewski Farias

Abstract:

Early identification of individuals at risk for conversion to dementia provides an opportunity for preventative treatment. Many older adults (30-60%) report specific subjective cognitive decline (SCD); however, previous research is inconsistent in terms of what types of complaints predict future cognitive decline. The purpose of this study is to identify which specific complaints from the Everyday Cognition (ECog) scales, a measure of self-reported concerns about everyday abilities across six cognitive domains, are associated with: 1) conversion from a clinical diagnosis of normal to either MCI or dementia (categorical variable) and 2) progressive cognitive decline in memory and executive function (continuous variables). A total of 415 cognitively normal older adults were monitored annually for an average of 5 years. Cox proportional hazards models were used to assess associations between self-reported ECog items and progression to impairment (MCI or dementia). A total of 114 individuals progressed to impairment; the mean time to progression was 4.9 years (SD=3.4 years, range=0.8-13.8). Follow-up models were run controlling for depression. A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. For those individuals, mixed effects models with random intercepts and slopes were used to assess associations between ECog items and change in neuropsychological measures of episodic memory or executive function. Prior to controlling for depression, subjective concerns on five of the eight Everyday Memory items, three of the nine Everyday Language items, one of the seven Everyday Visuospatial items, two of the five Everyday Planning items, and one of the six Everyday Organization items were associated with subsequent diagnostic conversion (HR=1.25 to 1.59, p=0.003 to 0.03). However, after controlling for depression, only two specific complaints, remembering appointments, meetings, and engagements and understanding spoken directions and instructions, remained associated with subsequent diagnostic conversion. Episodic memory in individuals reporting no concern on ECog items did not significantly change over time (p>0.4). More complaints on seven of the eight Everyday Memory items, three of the nine Everyday Language items, and three of the seven Everyday Visuospatial items were associated with a decline in episodic memory (interaction estimate=-0.055 to 0.001, p=0.003 to 0.04). Executive function in those reporting no concern on ECog items declined slightly (p<0.001 to 0.06). More complaints on three of the eight Everyday Memory items and three of the nine Everyday Language items were associated with a decline in executive function (interaction estimate=-0.021 to -0.012, p=0.002 to 0.04). These findings suggest that specific complaints across several cognitive domains are associated with diagnostic conversion. Specific complaints in the domains of Everyday Memory and Language are associated with decline in both episodic memory and executive function. Increased monitoring and treatment of individuals with these specific subjective complaints may be warranted.
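
The longitudinal part of the analysis, mixed effects models with random intercepts and slopes, can be sketched with statsmodels (column names are assumed; the study's actual covariates and item coding are not reproduced):

```python
import statsmodels.formula.api as smf

# df is an assumed long-format table: one row per visit, with columns
# id, years (time since baseline), memory (episodic memory score),
# ecog_item (baseline complaint rating), depression.
model = smf.mixedlm("memory ~ years * ecog_item + depression",
                    data=df, groups="id", re_formula="~years")
result = model.fit()
# the years:ecog_item interaction is the "extra decline per unit complaint"
print(result.summary())
```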

Keywords: Alzheimer’s disease, dementia, memory complaints, mild cognitive impairment, risk factors, subjective cognitive decline

Procedia PDF Downloads 80
18032 Interaction between Space Syntax and Agent-Based Approaches for Vehicle Volume Modelling

Authors: Chuan Yang, Jing Bie, Panagiotis Psimoulis, Zhong Wang

Abstract:

Modelling and understanding vehicle volume distribution over the urban network are essential for urban design and transport planning. The space syntax approach has been widely applied as the main conceptual and methodological framework for contemporary vehicle volume models, with the help of the statistical method of multiple regression analysis (MRA). However, the MRA model with space syntax variables is limited in predicting vehicle volume, as it cannot account for the crossed effects of urban configurational characteristics and socio-economic factors. The aim of this paper is to construct models that capture the combined impact of street network structure and socio-economic factors. We present a multilevel linear (ML) and an agent-based (AB) vehicle volume model at the urban scale, built on the space syntax theoretical framework. The ML model allows random effects of urban configurational characteristics across different urban contexts, and the AB model incorporates transformed space syntax components of the MRA models into the agents’ spatial behaviour. The three models were implemented in the same urban environment. The ML model exhibits superiority over the original MRA model in identifying the relative impacts of the configurational characteristics and macro-scale socio-economic factors that shape vehicle movement distribution over the city. Compared with the ML model, the suggested AB model demonstrates the ability to estimate vehicle volume in the urban network considering the combined effects of configurational characteristics and land-use patterns at the street segment level.
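
The AB component can be illustrated with a toy random walk on a street-segment graph, where agents pick the next segment with probability proportional to a space-syntax weight (a hedged sketch; the node attribute 'syntax' and the trip rules are assumptions, not the authors' model):

```python
import random
import networkx as nx  # G is an assumed street-segment graph

def run_agents(G, n_agents=1000, trip_len=20, seed=0):
    """Toy flow model: each agent walks trip_len steps, choosing the next
    segment with probability proportional to the (assumed) node attribute
    'syntax' holding a space-syntax weight. Returns per-segment volumes."""
    rng = random.Random(seed)
    volume = {v: 0 for v in G}
    for _ in range(n_agents):
        node = rng.choice(list(G))
        for _ in range(trip_len):
            nbrs = list(G[node])                    # assumes a connected graph
            weights = [G.nodes[n]["syntax"] for n in nbrs]
            node = rng.choices(nbrs, weights=weights)[0]
            volume[node] += 1
    return volume
```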

Keywords: space syntax, vehicle volume modeling, multilevel model, agent-based model

Procedia PDF Downloads 145
18031 The Impact of a New Inpatient Boarding Policy on Emergency Department Overcrowding: A Discrete Event Simulation Study

Authors: Wheyming Tina Song, Chi-Hao Hong

Abstract:

In this study, we investigate the effect of a new boarding policy, short stay, on overcrowding in the emergency department (ED). The decision variable is the number of short-stay beds for the lowest-acuity ED patients. The performance measures used are the national emergency department overcrowding score (NEDOCS) and the ED retention rate (the percentage of patients staying in the ED for more than 48 hours in one month). Discrete event simulation (DES) is used as the analysis tool to evaluate the strategy, and the common random numbers (CRN) technique is applied to enhance simulation precision. The DES model was based on a census of six months of patients treated in the ED of the National Taiwan University Hospital Yunlin Branch. Our results show that the new short-stay boarding policy significantly impacts both the NEDOCS and the ED retention rate when the number of short-stay beds is more than three.
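
The variance-reduction idea behind CRN can be sketched independently of the full ED model: both policies are simulated with the same random number streams, so their difference is estimated far more precisely (everything below is an invented toy stand-in, not the hospital model):

```python
import numpy as np

def simulate_retention(n_short_stay_beds, seed):
    """Invented toy stand-in for the ED model: returns a pseudo retention
    rate (share of patients staying > 48 h)."""
    rng = np.random.default_rng(seed)           # the seed IS the CRN stream
    stays = rng.lognormal(3.0, 1.0, size=1000)  # ED stay in hours (assumed)
    relief = 1 - np.exp(-0.15 * n_short_stay_beds)  # assumed bed effect
    return np.mean(stays * (1 - relief) > 48)

# Common random numbers: evaluate both policies on the SAME streams, so
# the policy difference is not masked by sampling noise.
diffs = [simulate_retention(4, s) - simulate_retention(0, s) for s in range(100)]
print("mean effect:", np.mean(diffs), "+/-", np.std(diffs))
```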

Keywords: emergency department (ED), common random number (CRN), national emergency department overcrowding score (NEDOCS), discrete event simulation (DES)

Procedia PDF Downloads 348
18030 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation

Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro

Abstract:

This study aimed to evaluate the implications of the block size and testing order on the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of treatment means (or effects) estimates. The experiment was originally designed to test 16 treatments as a series of eight 16x16 Sudoku designs, four randomized independently and four others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to their partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a mixed linear model analysis was assumed, with random tester and treatment effects and a fixed testing order effect. Analysis with a cumulative random-effects probit link model was very similar, with essentially the same conclusions; for simplicity, we present the results under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check Bayesian analyses of threshold models and cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating the acceptance. However, providing a large number of samples can help to improve sample discrimination.

Keywords: acceptance, block size, mixed linear model, testing order

Procedia PDF Downloads 321
18029 Supervised-Component-Based Generalised Linear Regression with Multiple Explanatory Blocks: THEME-SCGLR

Authors: Bry X., Trottier C., Mortier F., Cornu G., Verron T.

Abstract:

We address component-based regularization of a Multivariate Generalized Linear Model (MGLM). A set of random responses Y is assumed to depend, through a GLM, on a set X of explanatory variables, as well as on a set T of additional covariates. X is partitioned into R conceptually homogeneous blocks X1, ..., XR, viewed as explanatory themes. Variables in each Xr are assumed to be numerous and redundant; thus, Generalised Linear Regression (GLR) demands regularization with respect to each Xr. By contrast, variables in T are assumed to have been selected so as to demand no regularization. Regularization is performed by searching each Xr for an appropriate number of orthogonal components that both contribute to modelling Y and capture relevant structural information in Xr. We propose a very general criterion to measure the structural relevance (SR) of a component in a block and show how to take SR into account within a Fisher-scoring-type algorithm in order to estimate the model. We show how to deal with mixed-type explanatory variables. The method, named THEME-SCGLR, is tested on simulated data.
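
The Fisher-scoring backbone referred to above is, for a plain GLM, just iteratively reweighted least squares; a minimal Poisson/log-link sketch (THEME-SCGLR adds component extraction on top of this step, which is not shown):

```python
import numpy as np

def irls_poisson(X, y, n_iter=25):
    """Fisher scoring / IRLS for a Poisson GLM with log link; X is assumed
    to include a column of ones for the intercept."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                 # inverse link
        W = mu                           # Fisher weights for Poisson/log
        z = eta + (y - mu) / mu          # working response
        beta = np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (W * z))
    return beta
```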

Keywords: component model, Fisher scoring algorithm, GLM, PLS regression, SCGLR, SEER, THEME

Procedia PDF Downloads 395
18028 Multilevel Regression Model to Evaluate the Relationship Between Early Years’ Activities of Daily Living and Alzheimer’s Disease Onset, Accounting for the Influence of Key Sociodemographic Factors, Using Longitudinal Household Survey Data

Authors: Linyi Fan, C.J. Schumaker

Abstract:

Background: Biomedical efforts to treat Alzheimer’s disease (AD) have typically produced mixed to poor results, while more lifestyle-focused treatments such as exercise may fare better than existing biomedical treatments. A few promising studies have indicated that activities of daily living (ADL) may be a useful way of predicting AD. However, existing cross-sectional studies fail to show how functional issues such as ADL in early years predict AD, and how social factors influence health either in addition to or in interaction with individual risk factors. This study would help improve screening and early treatment for the elderly population and healthcare practice. The findings have academic and practical significance in terms of creating positive social change. Methodology: The purpose of this quantitative historical, correlational study was to examine the relationship between early years’ ADL and the development of AD in later years. The study included 4,526 participants derived from the RAND HRS dataset. The Health and Retirement Study (HRS) is a longitudinal household survey data set that is available for research on retirement and health among the elderly in the United States. The sample was selected by completion of the survey questionnaire about AD and dementia. The variable indicating whether the participant had been diagnosed with AD was the dependent variable; the ADL indices and changes in ADL were the independent variables. A four-step multilevel regression model approach was utilized to address the research questions. Results: Among the 4,526 patients who completed the AD and dementia questionnaire, 144 (3.1%) were diagnosed with AD. Of the 4,526 participants, 3,465 (76.6%) had high school or higher education degrees, and 4,074 (90.0%) were above the poverty threshold. The model evaluated the effect of ADL and change in ADL on the onset of AD in late years while allowing the intercept of the model to vary by level of education. The results suggested that the only significant predictor of the onset of AD was change in early years’ ADL (b = 20.253, z = 2.761, p < .05). However, the results of the sensitivity analysis (b = 7.562, z = 1.900, p = .058), which included more control variables and extended the observation period of ADL, did not support this finding. The model also estimated whether the variances of the random effects vary by Level-2 variables. The results suggested that the variances associated with random slopes were approximately zero, indicating that the relationship between early years’ ADL and AD onset was not influenced by sociodemographic factors. Conclusion: The finding indicated that an increase in change in ADL leads to an increase in the probability of AD onset in the future; however, this finding was not supported in the model with the broader observation period. The study also failed to reject the hypothesis that sociodemographic factors explain significant amounts of variance in the random effects. Recommendations were then made for future research and practice based on these limitations and the significance of the findings.

Keywords: Alzheimer’s disease, epidemiology, moderation, multilevel modeling

Procedia PDF Downloads 135
18027 Bayesian Meta-Analysis to Account for Heterogeneity in Studies Relating Life Events to Disease

Authors: Elizabeth Stojanovski

Abstract:

Associations between life events and various forms of cancer have been identified. The purpose of a recent random-effects meta-analysis was to identify studies that examined the association between adverse life events related to changes in financial status, including decreased income, and breast cancer risk. The same association was studied in four separate studies that were not consistent in traits such as study design, location, and time frame. It was of interest to pool information from the various studies to help identify characteristics that differentiated the study results. Two random-effects Bayesian meta-analysis models are proposed to combine the reported estimates of the described studies. The proposed models allow major sources of variation to be taken into account, including study-level characteristics, between-study variance, and within-study variance, and illustrate the ease with which uncertainty can be incorporated using a hierarchical Bayesian modelling approach.
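
A minimal sketch of such a hierarchical random-effects model in PyMC (the four effect estimates and standard errors below are invented placeholders, not the studies' values):

```python
import numpy as np
import pymc as pm

# Invented placeholders: log relative risks and standard errors of 4 studies.
y = np.array([0.10, 0.25, -0.05, 0.30])
se = np.array([0.12, 0.15, 0.10, 0.20])

with pm.Model():
    mu = pm.Normal("mu", 0.0, 1.0)                 # pooled effect
    tau = pm.HalfNormal("tau", 0.5)                # between-study variation
    theta = pm.Normal("theta", mu, tau, shape=4)   # study-level true effects
    pm.Normal("obs", theta, se, observed=y)        # within-study sampling model
    idata = pm.sample(2000, tune=1000)
```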

Keywords: random-effects, meta-analysis, Bayesian, variation

Procedia PDF Downloads 160
18026 Next Generation Radiation Risk Assessment and Prediction Tools Applying AI-Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Indeed, exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI-machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial random-weight artificial neural network model, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose the artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
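
A neural network with random (fixed) hidden weights trains only its output layer, by regularized least squares; a minimal NumPy sketch of the idea (the paper's MATLAB architecture and features are not reproduced):

```python
import numpy as np

def fit_random_weight_net(X, y, n_hidden=200, ridge=1e-3, seed=0):
    """Hidden weights are random and stay fixed; only the linear output
    layer is trained, by ridge-regularized least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden layer
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```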

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 96
18025 Quantum Statistical Machine Learning and Quantum Time Series

Authors: Omar Alzeley, Sergey Utev

Abstract:

Minimizing a constrained multivariate function is fundamental to machine learning, and such algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization, and this optimization is central to learning theory. Time series analysis is one approach to complex systems, in which the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool for detecting deterministic chaos, other approaches have emerged; the quantum probabilistic technique is used to motivate the construction of our QTS model, which resembles the quantum dynamic model that has been applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyze the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo are used to support our investigations. The proposed model has been examined using real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of the application of QTS in the field of quantum chaos was to find a model that explains chaotic behaviour; this model may reveal further insights into quantum chaos.
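
The Kalman filtering step used for parameter estimation can be sketched for the scalar state-space form of an AR(1) process (a classical filter, not the paper's quantum formalism; maximizing this log-likelihood over phi, q, r estimates the parameters):

```python
import numpy as np

def ar1_loglik(y, phi, q, r):
    """Scalar Kalman filter for x_t = phi*x_{t-1} + w_t, y_t = x_t + v_t,
    returning the log-likelihood; maximize over (phi, q, r) to estimate."""
    x, P, loglik = 0.0, 1.0, 0.0
    for yt in y:
        x, P = phi * x, phi**2 * P + q          # predict
        S = P + r                               # innovation variance
        K = P / S                               # Kalman gain
        loglik += -0.5 * (np.log(2*np.pi*S) + (yt - x)**2 / S)
        x, P = x + K * (yt - x), (1 - K) * P    # update
    return loglik
```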

Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series

Procedia PDF Downloads 469
18024 Quantitative Analysis of the Trade Potential of the United States with Members of the European Union: A Gravity Model Approach

Authors: Zahid Ahmad, Nauman Ali

Abstract:

This study estimates trade between the USA and individual members of the European Union using the gravity model of trade. The USA has a complex trade relationship with the European countries, which comprise a large number of consumers, making the USA dependent on the EU for a major share of its total world trade. However, the trade potential of the USA with individual members of the EU is not known. Panel data techniques, e.g., random effects, fixed effects, and pooled panel, have been applied to secondary quantitative data to analyze trade between the USA and the EU. The trade potential of the USA with individual EU members has been obtained as the ratio of the actual trade of the USA with EU members to the trade predicted by the gravity model. The study concluded that the USA has greater trade potential with 16 members of the EU, with Croatia, Portugal, and the United Kingdom on top. On the other hand, Finland, Ireland, and France are the top countries with which the USA has exhausted its trade potential.
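
The core computation, predict trade with a log-linear gravity equation and take the ratio of actual to predicted trade as the trade potential, can be sketched as follows (a pooled OLS stand-in for the panel estimators; the data frame and column names are assumed):

```python
import numpy as np
import statsmodels.formula.api as smf

# df is an assumed panel: one row per partner-year, with columns
# trade, gdp_us, gdp_partner, distance.
ols = smf.ols("np.log(trade) ~ np.log(gdp_us) + np.log(gdp_partner)"
              " + np.log(distance)", data=df).fit()

# trade potential: actual over gravity-predicted trade
# (<1 suggests untapped potential, >1 exhausted potential)
df["potential"] = df["trade"] / np.exp(ols.fittedvalues)
```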

Keywords: analytical technique, economic, gravity, international trade, significant

Procedia PDF Downloads 305
18023 Assessment of Carbon Dioxide Separation by Amine Solutions Using Electrolyte Non-Random Two-Liquid and Peng-Robinson Models: Carbon Dioxide Absorption Efficiency

Authors: Arash Esmaeili, Zhibang Liu, Yang Xiang, Jimmy Yun, Lei Shao

Abstract:

High-pressure carbon dioxide (CO2) absorption from a specific gas in a conventional column has been evaluated with the Aspen HYSYS simulator, using a wide range of single absorbents and blended solutions to estimate the outlet CO2 concentration, absorption efficiency, and CO2 loading, in order to choose the most suitable solution for CO2 capture with respect to environmental concerns. The property package (Acid Gas-Chemical Solvent), which is compatible with all solutions applied in this study, estimates the properties based on the electrolyte non-random two-liquid (E-NRTL) model for electrolyte thermodynamics and the Peng-Robinson equation of state for the vapor and liquid hydrocarbon phases. Among all the investigated single amines as well as blended solutions, piperazine (PZ) and the mixture of piperazine and monoethanolamine (MEA) were found to be the most effective absorbents for CO2 absorption, with high reactivity under the simulated operating conditions.

Keywords: absorption, amine solutions, Aspen HYSYS, carbon dioxide, simulation

Procedia PDF Downloads 185
18022 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning

Authors: Pei Yi Lin

Abstract:

Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium, in order to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days; the mortality rate has been about 2.22% over the past three years. We therefore aim to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This is a retrospective study using an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients and to train machine learning models on them. The study included patients aged over 20 years who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS assessment <4 points, those admitted to the ICU for less than 24 hours, and those without CAM-ICU evaluation. The CAM-ICU delirium assessment results every 8 hours within 30 days of hospitalization were regarded as events, and the cumulative data from ICU admission to the prediction time point were extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model, including age, sex, average ICU stay in hours, visual and auditory abnormalities, RASS assessment score, APACHE-II score, number of indwelling invasive catheters, restraint use, and sedative and hypnotic drugs. Through feature data cleaning and processing and supplementation with the KNN interpolation method, a total of 54,595 case events were extracted for machine learning model analysis. Events from May 1 to November 30, 2022, were used as the model training data, 80% of which formed the training set for model training and 20% the internal validation set; events from December 1 to December 31, 2022, formed the external validation set. Finally, model inference and performance evaluation were performed, and the model was then retrained with adjusted parameters. Results: In this study, XGBoost, Random Forest, Logistic Regression, and Decision Tree models were analyzed and compared. The average accuracy of internal validation was highest for Random Forest (AUC=0.86); the average accuracy of external validation was highest for Random Forest and XGBoost (AUC=0.86); and the average cross-validation accuracy was highest for Random Forest (ACC=0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist with real-time assessment of ICU patients, so clinical staff cannot be provided with objective, continuous monitoring data to more accurately identify and predict the occurrence of delirium. It is hoped that the development of predictive models through machine learning can predict delirium early and immediately, support clinical decisions at the best time, and be combined with PADIS delirium care measures to provide individualized non-drug interventional care to maintain patient safety and improve the quality of care.

Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model

Procedia PDF Downloads 75
18021 A New Nonlinear State-Space Model and Its Application

Authors: Abdullah Eqal Al Mazrooei

Abstract:

In this work, a new nonlinear model is introduced. The model is in state-space form, and its nonlinearity lies in the state equation, where the state vector is multiplied by itself. This technique makes our model a generalization of many famous models, such as the Lotka-Volterra model and the Lorenz model, which have many applications in real life. We apply our new model to estimate wind speed by using a new nonlinear estimator that is suitable for our model.
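
The paper does not give the equations here, but with the Kronecker product from the keywords, one plausible reading of "the state vector multiplied by itself" is a bilinear state equation (an assumption, for illustration):

```latex
x_{k+1} = A\, x_k + B\, (x_k \otimes x_k) + w_k, \qquad
y_k = C\, x_k + v_k,
```

where $x_k \otimes x_k$ stacks all pairwise products $x_{k,i} x_{k,j}$, so the quadratic terms of the Lotka-Volterra and Lorenz equations appear as special cases of rows of $B$.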

Keywords: nonlinear systems, state-space model, Kronecker product, nonlinear estimator

Procedia PDF Downloads 691
18020 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
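
The feature-importance facet mentioned above (identifying cost drivers) can be sketched in scikit-learn (feature names and the activity-level data are assumed placeholders):

```python
from sklearn.ensemble import RandomForestRegressor

# X, y assumed: activity-level features (e.g. scope changes, material
# delivery delays) and the cost overrun per activity.
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
ranking = sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1])
for name, importance in ranking:
    print(f"{name}: {importance:.3f}")   # candidate cost drivers, ranked
```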

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 54
18019 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and materials science. In this study, we compare the performance of five machine learning algorithms, linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks, for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features per molecule for training and testing. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in predictive accuracy, achieving an accuracy score of 0.93. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
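
The MACCS featurization step can be sketched with RDKit (a hedged sketch; the paper's full 189-feature pipeline and its evaluation protocol are not reproduced, and note that RDKit's MACCS vector is 167 bits with bit 0 unused):

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import MACCSkeys
from sklearn.ensemble import RandomForestRegressor

def maccs_matrix(smiles_list):
    """Convert SMILES strings to a matrix of MACCS fingerprint bits."""
    fps = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        fps.append(np.array(MACCSkeys.GenMACCSKeys(mol)))
    return np.array(fps)

# df is an assumed AqSolDB-style table with 'smiles' and 'solubility' columns.
X = maccs_matrix(df["smiles"])
model = RandomForestRegressor(n_estimators=500).fit(X, df["solubility"])
```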

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 40
18018 Modeling the Impacts of Road Construction on Land Values

Authors: Maha Almumaiz, Harry Evdorides

Abstract:

Change in land value typically occurs when new interurban road construction causes an increase in accessibility; this change in adjacent land values differs according to land characteristics such as geographic location, land use type, land area, and sale (appraisal) time. A multiple regression model is obtained to predict the percent change in land value (CLV) based on four independent variables, namely the land's distance from the constructed road, the area of the land, the nature of the land use, and the time since completion of the road works. The random values of percent change in land value were generated using Microsoft Excel with a range of up to 35%. The trend of change in land value with the four independent variables was determined from literature references. The statistical analysis and model-building process was carried out using IBM SPSS V23. The regression model suggests that, for lands located within 3 miles straight-line distance from the road, the percent CLV is between 0 and 35%, depending on many factors, including distance from the constructed road, land use, land area, and time since completion of the new road.

Keywords: interurban road, land use types, new road construction, percent CLV, regression model

Procedia PDF Downloads 266
18017 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil

Authors: M. Seguini, D. Nedjar

Abstract:

An accurate nonlinear analysis of a deep beam resting on elastic perfectly plastic soil is carried out in this study. A nonlinear finite element model for the large deflection and moderate rotation of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometric nonlinear analysis of the beam is based on von Kármán theory, and the Newton-Raphson incremental iteration method is implemented in a MATLAB code to solve the nonlinear equations of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are proposed to verify the accuracy and efficiency of the proposed model, where the theory of the local average based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the length of the beam, the coefficient of subgrade reaction of the soil, the Young’s modulus of the beam, and the coefficient of variation and correlation length of the soil’s coefficient of subgrade reaction. A comparison between the beam resting on the linear and nonlinear soil models is presented for different beam lengths and external loads. Numerical results have been obtained for the combination of the geometric nonlinearity of the beam and the material nonlinearity of the random soil. This comparison highlights the need to include the material nonlinearity and spatial variability of the soil in the geometric nonlinear analysis when the beam undergoes large deflections.

Keywords: finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability

Procedia PDF Downloads 414
18016 Lifetime Improvement of IEEE.802.15.6 Sensors in Scheduled Access Mode

Authors: Latif Adnane, C. E. Ait Zaouiat, M. Eddabbah

Abstract:

In Wireless Body Area Networks, system lifetime is a major challenge. In this paper, we tackle this subject and suggest some solutions. To this end, we study battery characteristics in relation to human body temperature and analyze a mathematical model that defines sensor lifetime (battery lifetime). Based on this model, we note that random access increases energy consumption because nodes remain awake during the whole superframe period. Results show that using the scheduled access mode of IEEE 802.15.6 maximizes the lifetime function by putting nodes into sleep mode during the inactive periods of transmission.
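
A generic duty-cycle lifetime estimate makes the argument concrete (an illustration, not the paper's exact model):

```latex
T_{\text{life}} \;\approx\; \frac{C_{\text{bat}}}{\bar{I}}, \qquad
\bar{I} \;=\; d\, I_{\text{active}} + (1-d)\, I_{\text{sleep}},
```

where d is the fraction of the superframe a node spends awake: random access keeps d near 1, while scheduled access lets nodes sleep outside their allocated slots, lowering the average current and extending lifetime.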

Keywords: battery, energy consumption, IEEE 802.15.6, lifetime, polling

Procedia PDF Downloads 345
18015 Unlocking Green Hydrogen Potential: A Machine Learning-Based Assessment

Authors: Said Alshukri, Mazhar Hussain Malik

Abstract:

Green hydrogen is hydrogen produced using renewable energy sources. In the last few years, Oman has aimed to reduce its dependency on fossil fuels. Recently, the hydrogen economy has become a global trend, and many countries have started to investigate the feasibility of implementing this sector. Oman has created an alliance to establish the policy and rules for this sector. With motivation coming from both global and local interest in green hydrogen, this paper investigates the potential of producing hydrogen from wind and solar energy at three different locations in Oman, namely Duqm, Salalah, and Sohar. Using the machine learning-based software WEKA and local meteorological data, the project was designed to determine which location has the highest wind and solar energy potential. First, various supervised models were tested for prediction accuracy, and the Random Forest (RF) model was found to have the best prediction performance. The RF model was applied to 2021 meteorological data for each location, and the results indicated that Duqm has the highest wind and solar energy potential. A system of one wind turbine in Duqm can produce 8335 MWh/year, which could be utilized in the water electrolysis process to produce 88847 kg of hydrogen, while a solar system consisting of 2820 solar cells is estimated to produce 1666.223 MWh/year, capable of producing 177591 kg of hydrogen.

Keywords: green hydrogen, machine learning, wind and solar energies, WEKA, supervised models, random forest

Procedia PDF Downloads 79
18014 KSVD-SVM Approach for Spontaneous Facial Expression Recognition

Authors: Dawood Al Chanti, Alice Caplier

Abstract:

Sparse representations of signals have received a great deal of attention in recent years. In this paper, the interest of using sparse representation as a means of performing sparse discriminative analysis between spontaneous facial expressions is demonstrated, and an automatic facial expression recognition system is presented. It uses a KSVD-SVM approach made of three main stages: a pre-processing and feature extraction stage, which solves the problem of shared subspace distribution based on random projection theory to obtain low-dimensional discriminative and reconstructive features; a dictionary learning and sparse coding stage, which uses the KSVD model to learn discriminative under- or over-complete dictionaries for sparse coding; and finally a classification stage, which uses an SVM classifier for facial expression recognition. Our main concern is to be able to recognize non-basic affective states and non-acted expressions. Extensive experiments on the JAFFE static acted facial expression database, as well as on the DynEmo dynamic spontaneous facial expression database, exhibit very good recognition rates.
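
The three-stage pipeline can be approximated with scikit-learn components (a hedged sketch: the dictionary learner below stands in for KSVD, and X, y are assumed feature vectors and labels):

```python
from sklearn.random_projection import GaussianRandomProjection
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVC

# X, y assumed: facial-expression feature vectors and expression labels.
X_rp = GaussianRandomProjection(n_components=300,
                                random_state=0).fit_transform(X)

# sklearn's dictionary learner stands in for KSVD: both alternate between
# sparse coding (here OMP) and dictionary updates.
dico = MiniBatchDictionaryLearning(n_components=128,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=10,
                                   random_state=0).fit(X_rp)
codes = dico.transform(X_rp)               # sparse codes used as features
clf = SVC(kernel="linear").fit(codes, y)   # final classification stage
```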

Keywords: dictionary learning, random projection, pose and spontaneous facial expression, sparse representation

Procedia PDF Downloads 305
18013 Reliability Analysis for Cyclic Fatigue Life Prediction in Railroad Bolt Hole

Authors: Hasan Keshavarzian, Tayebeh Nesari

Abstract:

The bolted rail joint is one of the most vulnerable areas of railway track. A comprehensive approach was developed for studying the reliability of fatigue crack initiation in the railroad bolt hole under random axle loads and random material properties. Operating conditions were also considered as stochastic variables. In order to obtain a comprehensive probability model for fatigue crack initiation life prediction in the railroad bolt hole, we used the finite element method (FEM), the response surface method (RSM), and reliability analysis. A combined energy-density-based and critical-plane-based fatigue concept is used for the fatigue crack prediction. The dynamic loads were calculated according to the axle load, speed, and track properties. The results show that the axle load is the most sensitive parameter, compared to Poisson’s ratio, for fatigue crack initiation life. Also, the reliability index decreases slowly due to the high-cycle fatigue regime in this area.
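
The reliability computation downstream of the FEM/RSM step can be sketched as a simple Monte Carlo limit-state analysis (the distributions below are invented placeholders; the paper instead propagates FEM results through a response surface):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 100_000
# Invented placeholder distributions for random load and capacity.
load = rng.normal(100.0, 15.0, n)
capacity = rng.normal(160.0, 20.0, n)

g = capacity - load           # limit-state function: failure when g < 0
pf = np.mean(g < 0)           # probability of fatigue crack initiation
beta = -norm.ppf(pf)          # reliability index
print(f"Pf = {pf:.4f}, beta = {beta:.2f}")
```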

Keywords: rail-wheel tribology, rolling contact mechanic, finite element modeling, reliability analysis

Procedia PDF Downloads 381
18012 Development of Geo-computational Model for Analysis of Lassa Fever Dynamics and Lassa Fever Outbreak Prediction

Authors: Adekunle Taiwo Adenike, I. K. Ogundoyin

Abstract:

Lassa fever is a neglected tropical disease that has become a significant public health issue in Nigeria, the country with the greatest burden in Africa. This paper presents a geo-computational model for the analysis and prediction of Lassa fever dynamics and outbreaks in Nigeria. The model investigates the dynamics of the virus with respect to environmental factors and human populations. It confirms the role of the rodent host in virus transmission and identifies how climate and the human population are affected. The proposed methodology is carried out on a Linux operating system using the OSGeoLive virtual machine for geographical computing, which serves as a base for spatial ecology computing. The model design uses the Unified Modeling Language (UML), and the performance evaluation uses machine learning algorithms such as random forest, fuzzy logic, and neural networks. The study aims to contribute to the control of Lassa fever, which is achievable through the combined efforts of public health professionals and geocomputational and machine learning tools. The research findings will potentially be more readily accepted and utilized by decision-makers for the attainment of Lassa fever elimination.

Keywords: geo-computational model, Lassa fever dynamics, Lassa fever, outbreak prediction, Nigeria

Procedia PDF Downloads 93