Search results for: generative models
6200 Sentiment Analysis of Fake Health News Using Naive Bayes Classification Models
Authors: Danielle Shackley, Yetunde Folajimi
Abstract:
As more people turn to the internet seeking health-related information, there is more risk of finding false, inaccurate, or dangerous information. Sentiment analysis is a natural language processing technique that assigns polarity scores to text, ranging from positive through neutral to negative. In this research, we evaluate the weight of a sentiment analysis feature added to fake health news classification models. The dataset consists of existing, reliably labeled health article headlines, supplemented with health information about COVID-19 collected from social media sources. We started with data preprocessing and tested various vectorization methods, such as Count and TF-IDF vectorization. We implemented three Naive Bayes classifier models: Bernoulli, Multinomial, and Complement. To test the weight of the sentiment analysis feature on the dataset, we created benchmark Naive Bayes classification models without sentiment analysis, then reproduced the same models with the sentiment feature added. We evaluated the models using precision and accuracy scores. The initial Bernoulli model achieved 90% precision and 75.2% accuracy, while the model supplemented with sentiment labels achieved 90.4% precision with accuracy constant at 75.2%. Our results show that the addition of sentiment analysis did not improve model precision by a wide margin; while there was no evidence of improvement in accuracy, we obtained a 1.9% improvement in the precision score with the Complement model. Future expansion of this work could include replicating the experiment and substituting a deep learning neural network model for Naive Bayes.
Keywords: sentiment analysis, Naive Bayes model, natural language processing, topic analysis, fake health news classification model
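A minimal sketch of the benchmark-versus-sentiment comparison described above, assuming scikit-learn; the headlines, labels, and precomputed polarity scores are hypothetical stand-ins for the study's data and sentiment feature.

```python
# Benchmark Naive Bayes models vs. the same models with a sentiment feature.
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB, MultinomialNB, ComplementNB
from sklearn.metrics import precision_score, accuracy_score

headlines = ["miracle cure for covid found", "vaccine trial reports efficacy",
             "garlic kills the virus overnight", "agency updates mask guidance"]
labels = [1, 0, 1, 0]                      # 1 = fake, 0 = reliable (toy labels)
sentiment = [[0.8], [0.1], [0.9], [0.0]]   # polarity scores, assumed precomputed

X_text = TfidfVectorizer().fit_transform(headlines)   # benchmark features
X_sent = hstack([X_text, csr_matrix(sentiment)])      # features + sentiment

for name, X in [("benchmark", X_text), ("with sentiment", X_sent)]:
    Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.5,
                                          random_state=0, stratify=labels)
    for clf in (BernoulliNB(), MultinomialNB(), ComplementNB()):
        y_hat = clf.fit(Xtr, ytr).predict(Xte)
        print(name, type(clf).__name__,
              precision_score(yte, y_hat, zero_division=0),
              accuracy_score(yte, y_hat))
```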
Procedia PDF Downloads 97
6199 ID + PD: Training Instructional Designers to Foster and Facilitate Learning Communities in Digital Spaces
Authors: Belkis L. Cabrera
Abstract:
Contemporary technological innovations have reshaped possibility, interaction, communication, engagement, education, and training. Indeed, today, a high-quality technology-enhanced learning experience can be as transformative for the learner as for the educator-trainer. As innovative technologies continue to facilitate, support, foster, and enhance collaboration, problem-solving, creativity, adaptiveness, multidisciplinarity, and communication, the field of instructional design (ID) also continues to develop and expand. Shifting its focus from media to the systematic design of instruction, or rather from the gadgets and devices themselves to the theories, models, and impact of implementing educational technology, the evolution of ID marks a restructuring of the teaching, learning, and training paradigms. However, with all of its promise, this latter component of ID remains underdeveloped. The majority of ID models are crafted and guided by learning theories, and therefore most models are constructed around student and educator roles rather than trainer roles. Thus, when these models or systems are employed for training purposes, they usually have to be re-fitted, tweaked, and stretched to meet the training needs. This paper is concerned with the training or professional development (PD) facet of instructional design and how ID models built on teacher-to-teacher interaction and dialogue can support the creation of professional learning communities (PLCs) or communities of practice (CoPs), which can augment learning and PD experiences for all. Just as technology is changing the face of education, so too can it change the face of PD within the educational realm. This paper not only provides a new ID model but also, using innovative technologies such as Padlet and Thinkbinder, presents a concrete example of how a traditional face-to-face, brick-and-mortar learning community can be transferred and transformed into the online context.
Keywords: communities of practice, e-learning, educational reform, instructional design, professional development, professional learning communities, technology, training
Procedia PDF Downloads 340
6198 Adding a Degree of Freedom to Opinion Dynamics Models
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to exploring the two established 'degrees of freedom' and how they impact a model's properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. It can be used to change a model's output by up to 100% of its initial value or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people's opinions can be measured. Even for abstract models (i.e., those not intended for fitting real-world data), it is important to understand whether the way of numerically representing opinions is unique and, if not, how the model dynamics would change under different representations. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most of the liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been intensively studied in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet. Indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models. We analyze this effect by using mathematical modeling and then validating our analysis with agent-based simulations. Firstly, we study the case of perfect scales. In this way, we show that scale transformations affect the model's dynamics even at the qualitative level. This means that if two researchers use the same opinion dynamics model and even the same dataset, they could make totally different predictions just because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions even on the same population. This effect may be as strong as producing an uncertainty of 100% on the simulation's output (i.e., all results are possible). Still, by using perfect scales, we show that scale transformations can be used to perfectly transform one model into another. We test this using two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision using a 7-point Likert scale. In this way, we show how a relatively small scale transformation introduces changes both at the qualitative level (i.e., the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on model properties and the application of models to real-world data.
Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
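A toy illustration of this third degree of freedom, assuming a Deffuant-style bounded-confidence interaction rule and a quadratic scale transformation; both specific choices are ours, not necessarily the models used in the paper.

```python
# Run the same opinion dynamics on a raw scale and on a rescaled (x -> x^2)
# version of the same opinions, then map the result back for comparison.
import numpy as np

def deffuant(x, eps=0.2, mu=0.5, steps=20000, seed=1):
    x = x.copy()
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i, j = rng.integers(len(x), size=2)
        if abs(x[i] - x[j]) < eps:          # interact only within the threshold
            shift = mu * (x[j] - x[i])
            x[i] += shift
            x[j] -= shift
    return x

rng = np.random.default_rng(0)
opinions = rng.uniform(0, 1, 200)
final_raw = deffuant(opinions)              # dynamics on the original scale
final_tf = deffuant(opinions ** 2) ** 0.5   # dynamics after a monotone rescaling
# The two runs generally disagree on cluster positions and means -- same model,
# same agents, different numerical representations of opinion.
print(final_raw.mean(), final_tf.mean())
```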
Procedia PDF Downloads 119
6197 Reconfigurable Device for 3D Visualization of Three Dimensional Surfaces
Authors: Robson da C. Santos, Carlos Henrique de A. S. P. Coutinho, Lucas Moreira Dias, Gerson Gomes Cunha
Abstract:
This article describes the development of an augmented-reality 3D display based on the control of servo motors and the projection of images onto the physical model with a video projector. Augmented reality is a branch that explores multiple approaches to enriching the real-world view by overlaying additional information on the real scene. The article presents the broad use of electrical, electronic, mechanical, and industrial automation for geospatial visualization, with applications in mathematical models through the visualization of functions, 3D surface graphics, and volumetric rendering that are currently seen only in 2D layers. The device serves as a 3D display for the representation and visualization of Digital Terrain Models (DTM) and Digital Surface Models (DSM), and can be applied to the identification of canyons in the marine area of the Campos Basin, Rio de Janeiro, Brazil. It can likewise visualize regions subject to landslides, as in Serra do Mar (Angra dos Reis) and other mountain cities in the State of Rio de Janeiro. From the foregoing, loss of human life and leakage of oil from pipelines buried in these regions may be anticipated in advance. The physical design consists of a table with a 9 × 16 matrix of servo motors, totaling 144 servos; a mesh laid over the servo motors provides the surface onto which the models are projected by a retroprojector. Each model undergoes image pre-processing and is sent to a server, where it is converted and displayed by software developed in the C# programming language.
Keywords: visualization, 3D models, servo motors, C# programming language
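A hedged sketch of the core mapping, assuming the conversion the C# server performs: downsample a terrain raster to the 9 × 16 servo grid and scale elevations linearly to servo angles. The sampling scheme and the 0–180° range are assumptions for illustration.

```python
import numpy as np

def terrain_to_servo_angles(dtm, rows=9, cols=16, min_deg=0.0, max_deg=180.0):
    """Downsample a DTM/DSM raster to the servo grid and map heights to angles."""
    r_idx = np.linspace(0, dtm.shape[0] - 1, rows).astype(int)
    c_idx = np.linspace(0, dtm.shape[1] - 1, cols).astype(int)
    grid = dtm[np.ix_(r_idx, c_idx)]                  # one sample per servo
    z0, z1 = grid.min(), grid.max()
    return (grid - z0) / (z1 - z0 + 1e-12) * (max_deg - min_deg) + min_deg

dtm = np.random.rand(180, 320)        # stand-in for a DTM/DSM raster
angles = terrain_to_servo_angles(dtm)
print(angles.shape)                   # (9, 16): one angle per servo, 144 total
```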
Procedia PDF Downloads 342
6196 Modeling Stream Flow with Prediction Uncertainty by Using SWAT Hydrologic and RBNN Neural Network Models for Agricultural Watershed in India
Authors: Ajai Singh
Abstract:
Simulation of hydrological processes at the watershed outlet through a modelling approach is essential for proper planning and implementation of appropriate soil conservation measures in the Damodar Barakar catchment, Hazaribagh, India, where soil erosion is a dominant problem. This study quantifies the parametric uncertainty involved in the simulation of stream flow using the Soil and Water Assessment Tool (SWAT), a watershed-scale model, and a Radial Basis Neural Network (RBNN), an artificial neural network model. Both models were calibrated and validated based on measured stream flow, and the uncertainty in the SWAT model output was assessed using the Sequential Uncertainty Fitting Algorithm (SUFI-2). Though both models predicted satisfactorily, the RBNN model performed better than SWAT, with R2 and NSE values of 0.92 and 0.92 during training and 0.71 and 0.70 during the validation period, respectively. Comparison of the results of the two models also indicates a wider prediction interval for the results of the SWAT model. The P-factor values show that the percentage of observed stream flow values bracketed by the 95PPU is higher for the RBNN model (91%) than for SWAT (87%). In other words, the RBNN model estimates the stream flow values more accurately and with less uncertainty. It could be stated that the RBNN model, based on simple inputs, could be used for estimating monthly stream flow, filling in missing data, and testing the accuracy and performance of other models.
Keywords: SWAT, RBNN, SUFI 2, bootstrap technique, stream flow, simulation
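A short sketch of the evaluation statistics quoted above, using toy flow values: the Nash-Sutcliffe efficiency (NSE) and the SUFI-2 P-factor, i.e., the share of observations bracketed by the 95PPU band.

```python
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def p_factor(obs, lower95, upper95):
    obs = np.asarray(obs)
    inside = (obs >= np.asarray(lower95)) & (obs <= np.asarray(upper95))
    return inside.mean()                 # fraction of observations in the 95PPU

obs = np.array([12.0, 30.5, 22.1, 8.4, 15.9])   # observed monthly flow (toy)
sim = np.array([11.2, 28.9, 24.0, 9.1, 14.7])   # simulated flow (toy)
print("NSE:", round(nse(obs, sim), 3))
print("P-factor:", p_factor(obs, sim * 0.8, sim * 1.2))  # toy 95PPU band
```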
Procedia PDF Downloads 370
6195 Supplier Relationship Management and Selection Strategies: A Literature Review
Authors: Priyesh Kumar Singh, S. K. Sharma, Sanjay Verma, C. Samuel
Abstract:
Supplier Relationship Management (SRM) is the strategic planning and management of all interactions with suppliers to maximize their value. Its applications range from construction industries to healthcare systems and from investment banks to aviation industries. Several buyer-supplier relationship models, as well as supplier selection and evaluation strategies, have been documented by many academicians and researchers. In this paper, through a comprehensive literature review of over 30 published papers, different theoretical models, empirical data, and conclusions relating to SRM were analysed to establish its role in building better supplier relationships. These journal articles were found by searching the keyword “supplier relationship management” in the databases of Mendeley Library, ProQuest, EBSCO, and Google Scholar. This paper reviews the academic literature on different relationship models and on supplier evaluation and selection strategies to discuss their implications in different situations. It also describes the dominant factors responsible for buyer-supplier relationships, such as trust and power. Finally, conclusions have been drawn which can be validated by various researchers and can help practitioners in industry.
Keywords: supplier relationship management, supplier performance, supplier evaluation, supplier selection strategies
Procedia PDF Downloads 281
6194 Application of Regularized Low-Rank Matrix Factorization in Personalized Targeting
Authors: Kourosh Modarresi
Abstract:
The Netflix problem has brought the topic of “Recommendation Systems” into the mainstream of computer science, mathematics, and statistics. Though much progress has been made, the available algorithms do not obtain satisfactory results; their success rate is rarely above 5%. This work is based on the belief that the main challenge is to come up with “scalable personalization” models. This paper uses an adaptive regularization of inverse singular value decomposition (SVD) that applies adaptive penalization to the singular vectors. The results show far better matching for recommender systems when compared to those from the state-of-the-art models in the industry.
Keywords: convex optimization, LASSO, regression, recommender systems, singular value decomposition, low rank approximation
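One plausible reading of "adaptive penalization on the singular vectors," offered as a sketch rather than the author's exact algorithm: a truncated SVD of the ratings matrix with LASSO-style soft-thresholding of the singular values.

```python
import numpy as np

def soft_threshold_svd(R, tau):
    """Low-rank reconstruction with singular values shrunk by tau."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # small components are zeroed out
    return U @ np.diag(s_shrunk) @ Vt

R = np.random.rand(6, 4)                  # toy user-item ratings matrix
R_hat = soft_threshold_svd(R, tau=0.5)    # regularized low-rank estimate
print(np.linalg.matrix_rank(R_hat))
```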
Procedia PDF Downloads 455
6193 Transition Economies, Typology, and Models: The Case of Libya
Authors: Abderahman Efhialelbum
Abstract:
The period since the fall of the Berlin Wall on November 9, 1989, and the collapse of the former Soviet Union in December 1991 has seen a major change in the economies and labour markets of Eastern Europe. The events also had reverberating effects across Asia, South America, and parts of Africa, including Libya. This article examines the typologies and models of transition economies. It also sheds light on the Libyan transition in particular and the impact of Qadhafi's regime on the transition process. Finally, it illustrates how the Libyan transition process followed the trajectory of other countries, using economic indicators such as free trade, property rights, and inflation.
Keywords: transition, economy, typology, model, Libya
Procedia PDF Downloads 157
6192 Teaching Physics: History, Models, and Transformation of Physics Education Research
Authors: N. Didiş Körhasan, D. Kaltakçı Gürel
Abstract:
Many students have difficulty in learning physics from the elementary to the university level. In addition, students' expectancy, attitude, and motivation may be negatively influenced by their experience (failure) and prejudice about physics learning. For this reason, physics educators, who are also physics teachers, search for the best ways to make students' learning of physics easier by considering cognitive, affective, and psychomotor issues in learning. This research critically discusses the history of physics education, fundamental pedagogical approaches and models to teach physics, and the transformation of physics education through recent research.
Keywords: pedagogy, physics, physics education, science education
Procedia PDF Downloads 264
6191 Modeling of the Random Impingement Erosion Due to the Impact of Solid Particles
Authors: Siamack A. Shirazi, Farzin Darihaki
Abstract:
Solid particles can be found in many multiphase flows, including transport pipelines and pipe fittings. Such particles interact with the pipe material and cause erosion, which threatens the integrity of the system. Therefore, predicting the erosion rate is an important factor in the design and monitoring of such systems. Mechanistic models can provide reliable predictions for many conditions while demanding only relatively low computational cost. Mechanistic models utilize a representative particle trajectory to predict the impact characteristics of the majority of the particle impacts that cause the maximum erosion rate in the domain. The erosion caused by particle impacts is due not only to direct impacts but also to random impingements. In the present study, an alternative model has been introduced to describe the erosion due to random impingement of particles. The present model provides a realistic trend for erosion with changes in particle size and particle Stokes number. The model is examined against experimental data and CFD simulation results and indicates better agreement with the data in comparison to the available models in the literature.
Keywords: erosion, mechanistic modeling, particles, multiphase flow, gas-liquid-solid
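For orientation, a hedged sketch of the kind of direct-impingement correlation such mechanistic models build on, in the generic E/CRC style ER = C·(BH)^-0.59·Fs·V^n·F(θ); the constants and the angle function below are placeholders, not the authors' fitted values.

```python
import numpy as np

def erosion_ratio(v_impact, theta, bh=150.0, fs=1.0, c=2.17e-7, n=2.41):
    """Mass of wall lost per mass of impacting sand (illustrative constants)."""
    f_theta = np.sin(theta) * (2.0 - np.sin(theta))   # placeholder angle function
    return c * bh ** -0.59 * fs * v_impact ** n * f_theta

# A fast direct impact vs. a slower, shallower random impingement (toy numbers):
print(erosion_ratio(v_impact=20.0, theta=np.radians(60)))
print(erosion_ratio(v_impact=5.0, theta=np.radians(15)))
```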
Procedia PDF Downloads 169
6190 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis
Authors: Petr Gurný
Abstract:
One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the possibility of determining a financial institution's PD using credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression, and probit regression) from a sample of almost three hundred US commercial banks. Afterwards, these models are compared and verified on a control sample with a view to choosing the best one. The second part of the paper is aimed at applying the chosen model to the portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. To this end, the values of particular indicators are randomly sampled and the PD distribution is estimated, under the assumption that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all banks are relatively healthy, there is still a high chance, at least in terms of probability, that “a financial crisis” will occur. This is indicated by the various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
Keywords: credit-scoring models, multidimensional subordinated Lévy model, probability of default
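A minimal sketch of a credit-scoring PD model of the kind described, assuming statsmodels and toy financial indicators in place of the real bank sample.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))   # toy indicators (e.g., capital ratio, ROA, liquidity)
default = (X @ [-1.5, -0.8, -0.5] + rng.normal(size=300) > 1.0).astype(int)

logit = sm.Logit(default, sm.add_constant(X)).fit(disp=0)
probit = sm.Probit(default, sm.add_constant(X)).fit(disp=0)
pd_hat = logit.predict(sm.add_constant(X))   # per-bank probability of default
print(pd_hat[:5])
print(probit.params)
```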
Procedia PDF Downloads 456
6189 Real Estate Trend Prediction with Artificial Intelligence Techniques
Authors: Sophia Liang Zhou
Abstract:
For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing prices and largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five different metropolitan areas representing different market trends and compared three time-lag situations: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model real estate prices using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, personal income, etc., in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly. Thus, two methods to compensate for missing values, backfill and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF's inherent limitations. Both ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to compensate for missing values in the dataset and the implementation of time lags can have a significant influence on model performance and require further investigation. The best-performing models varied for each area, but the backfilled 12-month-lag LR models and the interpolated no-lag ANN models showed the best stable performance overall, with accuracies > 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies to identify the optimal time lag and data-imputing methods for establishing accurate predictive models.
Keywords: linear regression, random forest, artificial neural network, real estate price prediction
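A sketch of the two preprocessing choices compared above, assuming pandas and hypothetical column names: filling mixed-frequency gaps by backfilling versus interpolation, and building a 12-month-lagged feature matrix.

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2005-03-01", "2018-12-01", freq="MS")
df = pd.DataFrame({"hpi": np.arange(len(idx), dtype=float),  # monthly index (toy)
                   "gdp": np.nan}, index=idx)
quarterly = df.index.month.isin([1, 4, 7, 10])
df.loc[quarterly, "gdp"] = np.linspace(12000, 21000, quarterly.sum())  # toy GDP

backfilled = df.bfill()             # option 1: backfill missing months
interpolated = df.interpolate()     # option 2: linear interpolation

lag = 12                            # predict hpi from features 12 months earlier
X = backfilled.drop(columns="hpi").shift(lag).dropna()
y = backfilled.loc[X.index, "hpi"]
print(X.shape, y.shape)
```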
Procedia PDF Downloads 103
6188 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’
Authors: Luminiţa Duţică, Gheorghe Duţică
Abstract:
One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti, and others. The heterophonic syntax has its own history of growth, meaning a succession of different concepts and writing techniques. The trajectory of the settling of this phenomenon does not necessarily follow chronology: there are highly complex primary stages and advanced stages that return to simple forms of writing. In folklore, the plurimelodic simultaneities are free or random and originate from the (unintentional) differences/'deviations' from the state of unison, through a variety of ornaments, melismas, imitations, elongations, and abbreviations, all in a flexible, non-periodic/immeasurable rhythmic framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, the heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. Of course, the explanation is simple if we consider the causal relationship between the sound vocabulary elements – in this case, modalism – and the typologies of vertical organization appropriate to it. Therefore, following the 'classic' pathway of the writing typologies (monody – polyphony – homophony), heterophony – applied equally to structures of modal, serial, or synthesis vocabulary – necessarily claims its own macrotemporal form, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. Concerned with the prospect of edifying a new musical ontology, the composer Ştefan Niculescu explored – along with the mathematical organization of heterophony according to his own original methods – the possibility of extrapolating this phenomenon to the macrostructural plane, arriving in this way at the unique form of 'synchrony'. Founded on the coincidentia oppositorum principle (involving the 'one-multiple' binomial), the sound architecture imagined by Ştefan Niculescu consists of one (temporal) model/algorithm of articulation of two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism with macrotemporal amplitude, a strategy the composer developed practically throughout his creation (see the works Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu – Symphony II, Opus dacicum – where the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case of the level of complexity achieved by this type of vertical syntax in twentieth-century music.
Keywords: heterophony, modalism, serialism, synchrony, syntax
Procedia PDF Downloads 345
6187 Simulation to Detect Virtual Fractional Flow Reserve in Coronary Artery Idealized Models
Authors: Nabila Jaman, K. E. Hoque, S. Sawall, M. Ferdows
Abstract:
Coronary artery disease (CAD) is one of the most lethal cardiovascular diseases. Coronary artery stenosis and bifurcation angles closely interact in myocardial infarction. We use computer-aided design models coupled with computational hemodynamics (CHD) simulation to detect several types of coronary artery stenosis at different locations in an idealized model and to identify the virtual fractional flow reserve (vFFR). The vFFR provides information about the severity of stenosis in the computational models. Another goal is to imitate patient-specific computed tomography coronary angiography models in constructing our idealized models with different left anterior descending (LAD) and left circumflex (LCx) bifurcation angles. Further, we analyze whether or not the bifurcation angles have an impact on the development of narrowing in the coronary arteries. The numerical simulation provides CHD parameters such as wall shear stress (WSS), velocity magnitude, and pressure gradient (PGD), which characterize the stenosis condition in the computational domain.
Keywords: CAD, CHD, vFFR, bifurcation angles, coronary stenosis
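A sketch of the vFFR computation itself, assuming pressures extracted from the CHD solution: FFR is the ratio of mean distal to mean aortic (proximal) pressure, with values below roughly 0.80 commonly read as hemodynamically significant.

```python
import numpy as np

def vffr(p_distal, p_aortic):
    """Virtual FFR from cycle-averaged pressures (same units, e.g., mmHg)."""
    return np.mean(p_distal) / np.mean(p_aortic)

p_aortic = np.array([93.0, 95.5, 94.2])    # toy cycle-averaged samples
p_distal = np.array([68.1, 70.4, 69.0])    # downstream of the stenosis
print(round(vffr(p_distal, p_aortic), 3))  # < 0.80 suggests severe stenosis
```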
Procedia PDF Downloads 157
6186 ‘Non-Legitimate’ Voices as L2 Models: Towards Becoming a Legitimate L2 Speaker
Authors: M. Rilliard
Abstract:
Based on a Multiliteracies-inspired and sociolinguistically informed advanced French composition class, this study employed autobiographical narratives from speakers traditionally considered non-legitimate models for L2 teaching, with the purpose of inspiring students to develop an authentic L2 voice and to see themselves as legitimate L2 speakers. Students explored their L2 identities in French through a self-inspired fictional character. Two autobiographical narratives of identity quest by non-traditional French speakers guided them through this process: the novel Le Bleu des Abeilles (2013) and the film Qu’Allah Bénisse la France (2014). Written and oral productions in French across different genres, as well as metalinguistic reflections in English, were collected and analyzed. Results indicate that ideas and materials that were relatable to students, namely relatable experiences and relatable language, were most useful to them in developing their L2 voices and achieving authentic and legitimate L2 speakership. These results point towards the benefits of using non-traditional speakers as pedagogical models, as they serve to legitimize students' sense of their own L2 speakership, which ultimately leads them towards a better, more informed mastery of the language.
Keywords: foreign language classroom, L2 identity, L2 learning and teaching, L2 writing, sociolinguistics
Procedia PDF Downloads 133
6185 Statistical Time-Series and Neural Architecture of Malaria Patients Records in Lagos, Nigeria
Authors: Akinbo Razak Yinka, Adesanya Kehinde Kazeem, Oladokun Oluwagbenga Peter
Abstract:
Time series data are sequences of observations collected over a period of time. Such data can be used to predict health outcomes, such as disease progression, mortality, and hospitalization. The statistical approach is based on mathematical models that capture the patterns and trends of the data, such as autocorrelation, seasonality, and noise, while neural methods are based on artificial neural networks, computational models that mimic the structure and function of biological neurons. This paper compared parametric and non-parametric time series models of patients treated for malaria in Maternal and Child Health Centres in Lagos State, Nigeria. The forecasting methods considered were linear regression, integrated moving average, ARIMA, and SARIMA modeling for the parametric approach, while a multilayer perceptron (MLP) and a long short-term memory (LSTM) network were used for the non-parametric approach. The performance of each method was evaluated using the mean absolute error (MAE), R-squared (R2), and root mean square error (RMSE) as criteria to determine the accuracy of each model. The study revealed that the best performance in terms of error was found in the MLP, followed by the LSTM and ARIMA models. In addition, the bootstrap aggregating technique was used to make robust forecasts when there are uncertainties in the data.
Keywords: ARIMA, bootstrap aggregation, MLP, LSTM, SARIMA, time-series analysis
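A sketch of the parametric branch and the quoted error criteria, assuming statsmodels and a toy monthly case series; the MLP/LSTM branch would swap a neural network into the same train/forecast loop.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

cases = np.array([120, 135, 160, 210, 260, 230, 190, 170, 150, 140,
                  130, 125, 128, 140, 170, 220, 270, 240, 200, 175], float)
train, test = cases[:-6], cases[-6:]

model = ARIMA(train, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=len(test))

print("MAE :", mean_absolute_error(test, forecast))
print("RMSE:", np.sqrt(mean_squared_error(test, forecast)))
print("R2  :", r2_score(test, forecast))
```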
Procedia PDF Downloads 75
6184 Geometric Simplification Method of Building Energy Model Based on Building Performance Simulation
Authors: Yan Lyu, Yiqun Pan, Zhizhong Huang
Abstract:
In the design stage of a new building, an energy model of the building is often required to analyse its energy-efficiency performance. In practice, a certain degree of geometric simplification must be made when establishing building energy models, since the detailed geometric features of a real building are hard to describe perfectly in most energy simulation engines, such as ESP-r, eQuest, or EnergyPlus. In fact, a detailed description is not necessary when results of extremely high accuracy are not demanded. Therefore, this paper analysed the relationship between the error in the simulation results from building energy models and the geometric simplification of those models. The following two parameters were selected as indices to characterize the geometric features in building energy simulation: the southward projected area and the total side surface area of the building. Based on this parameterization method, a simplification from an arbitrary columnar building to a building of typical shape (a cuboid) can be made for energy modeling. The results of this study indicate that this simplification leads to an error of less than 7% for buildings whose ratio of southward projection length to total bottom perimeter is 0.25–0.35, which covers most situations.
Keywords: building energy model, simulation, geometric simplification, design, regression
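A one-liner sketch of the simplification index quoted above: the ratio of southward projection length to the total perimeter of the footprint, with the < 7% error band applying for ratios of roughly 0.25–0.35 (the footprint numbers below are toy values).

```python
def south_projection_ratio(south_len, perimeter):
    return south_len / perimeter

ratio = south_projection_ratio(south_len=18.0, perimeter=64.0)  # toy footprint
print(ratio, 0.25 <= ratio <= 0.35)   # True -> cuboid simplification applies
```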
Procedia PDF Downloads 181
6183 On Hyperbolic Gompertz Growth Model (HGGM)
Authors: S. O. Oyamakin, A. U. Chukwu
Abstract:
We propose a Hyperbolic Gompertz Growth Model (HGGM), developed by introducing a stabilizing parameter θ, via the hyperbolic sine function, into the classical Gompertz growth equation. The resulting integral solution, obtained deterministically, was reprogrammed into a statistical model and used in modeling the height and diameter of pines (Pinus caribaea). Its predictive ability was compared with that of the classical Gompertz growth model using goodness-of-fit tests and model selection criteria; the approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used to check the compliance of the error term with normality assumptions, while the runs test was used to test the independence of the error term. The mean function of top height/Dbh over age predicted the observed values of top height/Dbh more closely under the hyperbolic Gompertz growth model than under its source model (the classical Gompertz growth model), while the R2, adjusted R2, MSE, and AIC results confirmed the predictive power of the hyperbolic Gompertz growth model over its source model.
Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, gompertz
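A sketch of fitting the two growth curves with scipy; the classical Gompertz form is standard, while the θ·sinh term below is only a hypothetical rendering of the HGGM (the exact functional form is given in the paper), and the age/height data are toy values.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

def hggm(t, a, b, c, theta):                      # assumed form, for illustration
    return a * np.exp(-b * np.exp(-c * t)) + theta * np.sinh(c * t)

age = np.array([2, 4, 6, 8, 10, 12, 14], float)            # years (toy)
height = np.array([1.8, 4.1, 7.0, 9.6, 11.3, 12.4, 13.0])  # metres (toy)

p_g, _ = curve_fit(gompertz, age, height, p0=[14, 3, 0.3], maxfev=10000)
p_h, _ = curve_fit(hggm, age, height, p0=[14, 3, 0.3, 0.01], maxfev=10000)
print(p_g, p_h)
```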
Procedia PDF Downloads 441
6182 Modelling Volatility of Cryptocurrencies: Evidence from GARCH Family of Models with Skewed Error Innovation Distributions
Authors: Timothy Kayode Samson, Adedoyin Isola Lawal
Abstract:
The past five years have shown a sharp increase in public interest in the crypto market, with its market capitalization growing from $100 billion in June 2017 to $2158.42 billion on April 5, 2022. Despite the extreme volatility of cryptocurrencies, the use of skewed error innovation distributions in modelling the volatility behaviour of these digital currencies has not been given much research attention. Hence, this study models the volatility of the five largest cryptocurrencies by market capitalization (Bitcoin, Ethereum, Tether, Binance coin, and USD Coin) using four variants of GARCH models (GJR-GARCH, sGARCH, EGARCH, and APARCH) estimated with three skewed error innovation distributions (skewed normal, skewed Student-t, and skewed generalized error innovation distributions). Daily closing prices of these currencies were obtained from the Yahoo Finance website. Findings reveal that the Binance coin reported higher mean returns compared to the other digital currencies, while the skewness indicates that the Binance coin, Tether, and USD coin increased more than they decreased in value within the period of study. For both Bitcoin and Ethereum, negative skewness was obtained, meaning that within the period of study, the returns of these currencies decreased more than they increased in value. Returns from these cryptocurrencies were found to be stationary but not normally distributed, with evidence of the ARCH effect. The skewness parameters in all best forecasting models were significant (p<.05), justifying the use of skewed error innovation distributions with fatter tails than the normal, Student-t, and generalized error innovation distributions. For the Binance coin, EGARCH-sstd outperformed the other volatility models, while for Bitcoin, Ethereum, Tether, and USD coin, the best forecasting models were EGARCH-sstd, APARCH-sstd, EGARCH-sged, and GJR-GARCH-sstd, respectively. This suggests the superiority of the skewed Student-t distribution and the skewed generalized error distribution over the skewed normal distribution.
Keywords: skewed generalized error distribution, skewed normal distribution, skewed Student-t distribution, APARCH, EGARCH, sGARCH, GJR-GARCH
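A sketch of the best-performing specification reported for several of the coins (EGARCH with skewed Student-t errors), assuming the Python arch package and toy returns in place of the Yahoo Finance closes.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=1000)        # toy heavy-tailed daily returns

am = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist="skewt")
res = am.fit(disp="off")
print(res.params)                                # includes the skewness parameter
print(res.forecast(horizon=1).variance.iloc[-1]) # one-step-ahead variance
```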
Procedia PDF Downloads 119
6181 Self-Supervised Pretraining on Sequences of Functional Magnetic Resonance Imaging Data for Transfer Learning to Brain Decoding Tasks
Authors: Sean Paulsen, Michael Casey
Abstract:
In this work, we present a self-supervised pretraining framework for transformers on functional magnetic resonance imaging (fMRI) data. First, we pretrain our architecture on two self-supervised tasks simultaneously to teach the model a general understanding of the temporal and spatial dynamics of human auditory cortex during music listening. Our pretraining results are the first to suggest a synergistic effect of multitask training on fMRI data. Second, we finetune the pretrained models and train additional fresh models on a supervised fMRI classification task. We observe significantly improved accuracy on held-out runs with the finetuned models, which demonstrates the ability of our pretraining tasks to facilitate transfer learning. This work contributes to the growing body of literature on transformer architectures for pretraining and transfer learning with fMRI data, and serves as a proof of concept for our pretraining tasks and multitask pretraining on fMRI data.
Keywords: transfer learning, fMRI, self-supervised, brain decoding, transformer, multitask training
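A compact sketch of multitask self-supervised pretraining on fMRI sequences, assuming PyTorch; the two heads (reconstruction and a temporal-order check) and the summed loss are illustrative choices, not the authors' exact tasks.

```python
import torch
import torch.nn as nn

class FMRIPretrainer(nn.Module):
    def __init__(self, n_voxels=1024, d_model=256):
        super().__init__()
        self.embed = nn.Linear(n_voxels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.recon_head = nn.Linear(d_model, n_voxels)  # task 1: reconstruction
        self.order_head = nn.Linear(d_model, 2)         # task 2: order classification

    def forward(self, x):
        h = self.encoder(self.embed(x))
        return self.recon_head(h), self.order_head(h.mean(dim=1))

model = FMRIPretrainer()
x = torch.randn(8, 20, 1024)                  # batch of 20-TR voxel sequences
recon, order_logits = model(x)
loss = nn.functional.mse_loss(recon, x) + nn.functional.cross_entropy(
    order_logits, torch.randint(0, 2, (8,)))  # multitask: the two losses summed
loss.backward()
```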
Procedia PDF Downloads 90
6180 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece
Authors: Panagiotis Karadimos, Leonidas Anthopoulos
Abstract:
Predicting the actual cost and duration of construction projects is a continuing problem for the construction sector. This paper addresses the problem with modern methods and data available from past public construction projects. 39 bridge projects constructed in Greece, with a similar type of available data, were examined. Considering each project's attributes together with the actual cost and actual duration, correlation analysis was performed and the most appropriate predictive project variables were defined. Additionally, the most efficient subgroup of variables was selected with the use of the WEKA application, through its attribute selection function. The selected variables were used as input neurons for the neural network models. For constructing the neural network models, the application FANN Tool was used. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the quantity of deck concrete.
Keywords: actual cost and duration, attribute selection, bridge construction, neural networks, predicting models, FANN TOOL, WEKA
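A minimal sketch of the two-input cost model described above, with scikit-learn standing in for FANN Tool and toy project data (budgeted cost, deck concrete quantity) standing in for the Greek bridge sample.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

X = np.array([[1.2, 300], [2.5, 640], [0.9, 210], [3.1, 800], [1.8, 450]])
actual_cost = np.array([1.5, 2.9, 1.1, 3.8, 2.2])   # hypothetical outcomes

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, actual_cost)
print(mean_squared_error(actual_cost, net.predict(X)))
```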
Procedia PDF Downloads 134
6179 A Numerical Study on the Influence of CO2 Dilution on Combustion Characteristics of a Turbulent Diffusion Flame
Authors: Yasaman Tohidi, Rouzbeh Riazi, Shidvash Vakilipour, Masoud Mohammadi
Abstract:
The objective of the present study is to numerically investigate the effect of replacing N2 with CO2 in the air stream on the flame characteristics of a turbulent CH4 diffusion flame. The open-source Field Operation and Manipulation (OpenFOAM) package has been used as the computational tool. In this regard, the laminar flamelet and modified k-ε models have been utilized as the combustion and turbulence models, respectively. Results reveal that the presence of CO2 in the air stream changes the flame shape and the maximum flame temperature. CO2 dilution also causes an increase in the CO mass fraction.
Keywords: CH4 diffusion flame, CO2 dilution, OpenFOAM, turbulent flame
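A back-of-envelope sketch (not the CFD itself) of one reason CO2 dilution lowers the peak flame temperature: CO2 has a higher molar heat capacity than N2, so the same heat release produces a smaller temperature rise in the diluted stream. All numbers below are illustrative constant-cp values.

```python
cp = {"N2": 29.1, "CO2": 37.1}   # J/(mol K), near-ambient molar heat capacities
q_release = 800e3                # J absorbed by the diluent per mol CH4 (toy)
n_diluent = 15.0                 # mol diluent per mol CH4 (toy)

for gas in ("N2", "CO2"):
    dT = q_release / (n_diluent * cp[gas])
    print(gas, round(dT), "K temperature rise")   # CO2 case is visibly cooler
```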
Procedia PDF Downloads 276
6178 Effect of Soil Corrosion in Failures of Buried Gas Pipelines
Authors: Saima Ali, Pathamanathan Rajeev, Imteaz A. Monzur
Abstract:
In this paper, a brief review of the corrosion mechanisms in buried pipes and the modes of failure is provided, together with the available corrosion models. Moreover, a sensitivity analysis is performed to understand the influence of the corrosion model parameters on the remaining-life estimation. Further, a probabilistic analysis is performed to propagate the uncertainty in the corrosion model to the estimate of the remaining life of the pipe. Finally, a comparison among the corrosion models on the basis of remaining-life estimation is provided to improve the renewal plan.
Keywords: corrosion, pit depth, sensitivity analysis, exposure period
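A sketch of a common empirical pit-growth model, d(t) = k·t^n, and the remaining-life estimate it implies; the k, n, and wall-thickness values below are illustrative, and the paper's sensitivity and probabilistic analyses would vary exactly such parameters.

```python
def pit_depth(t_years, k=0.3, n=0.6):
    """Maximum pit depth (mm) after t years, power-law growth."""
    return k * t_years ** n

def remaining_life(wall_mm, k=0.3, n=0.6, frac=0.8):
    """Years until the pit reaches a critical fraction of the wall thickness."""
    return (frac * wall_mm / k) ** (1.0 / n)

print(pit_depth(25.0))              # pit depth after 25 years of exposure
print(remaining_life(wall_mm=6.0))  # years to reach 80% wall penetration
```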
Procedia PDF Downloads 530
6177 Evaluation of Turbulence Prediction over Washington, D.C.: Comparison of DCNet Observations and North American Mesoscale Model Outputs
Authors: Nebila Lichiheb, LaToya Myles, William Pendergrass, Bruce Hicks, Dawson Cagle
Abstract:
Atmospheric transport of hazardous materials in urban areas is increasingly under investigation due to the potential impact on human health and the environment. In response to health and safety concerns, several dispersion models have been developed to analyze and predict the dispersion of hazardous contaminants. The models of interest usually rely on meteorological information obtained from the meteorological models of NOAA's National Weather Service (NWS). However, due to the complexity of the urban environment, NWS forecasts provide an inadequate basis for dispersion computation in urban areas. A dense meteorological network in Washington, DC, called DCNet, has been operated by NOAA since 2003 to support the development of urban monitoring methodologies and provide the driving meteorological observations for atmospheric transport and dispersion models. This study focuses on the comparison of wind observations from the DCNet station on the U.S. Department of Commerce Herbert C. Hoover Building against the North American Mesoscale (NAM) model outputs for the period 2017-2019. The goal is to develop a simple methodology for modifying NAM outputs so that the dispersion requirements of the city and its urban area can be satisfied. This methodology will allow us to quantify the prediction errors of the NAM model and propose adjustments to the key variables controlling dispersion model calculations.
Keywords: meteorological data, Washington D.C., DCNet data, NAM model
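A sketch of the simplest adjustment of the kind proposed: a linear bias correction of NAM wind speeds fitted against DCNet observations (all values below are toy numbers, not DCNet data).

```python
import numpy as np

nam = np.array([4.1, 6.3, 2.8, 7.9, 5.0])     # NAM wind speed (m/s, toy)
dcnet = np.array([3.2, 5.1, 2.1, 6.4, 4.0])   # DCNet rooftop observation (m/s, toy)

slope, intercept = np.polyfit(nam, dcnet, 1)  # fit dcnet ~ slope*nam + intercept
corrected = slope * nam + intercept
rmse = np.sqrt(np.mean((corrected - dcnet) ** 2))
print(slope, intercept, rmse)
```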
Procedia PDF Downloads 234
6176 Assessment of Sex Differences in Serum Urea and Creatinine Level in Response to Spinal Cord Injury Using Albino Rat Models
Authors: Waziri B. I., Elkhashab M. M.
Abstract:
Background: One of the most serious consequences of spinal cord injury (SCI) is progressive deterioration of renal function, mostly as a result of urine stasis and ascending infection of the paralyzed bladder. This necessitates investigation of the early changes in serum urea and creatinine, and the associated sex differences, in response to SCI. Methods: A total of 24 adult albino rats weighing above 150 g were divided equally into two groups, a control and an experimental group (n = 12), each containing an equal number of male and female rats. The experimental group animals were paralyzed by complete transection of the spinal cord below the T4 level after deep anesthesia with ketamine (75 mg/kg). Blood samples were collected from both groups five days post-SCI for analysis. Mean values of serum urea (mmol/L) and creatinine (µmol/L) for both groups were compared; P < 0.05 was considered significant. Results: The results showed significantly higher levels (P < 0.05) of serum urea and creatinine in the male SCI models, with mean values of 92.12 ± 0.98 and 2573 ± 70.97, respectively, compared with their controls, whose mean values for serum urea and creatinine were 6.31 ± 1.48 and 476.95 ± 4.67, respectively. In the female SCI models, serum urea (13.11 ± 0.81) and creatinine (519.88 ± 31.13) were not significantly different from those of the female controls, whose serum urea and creatinine levels were 11.71 ± 1.43 and 493.69 ± 17.10, respectively (P > 0.05). Conclusion: Spinal cord injury caused a significant increase in serum urea and creatinine levels in the male models compared to the females. This indicates that males might have a higher risk of renal dysfunction following SCI.
Keywords: albino rats, creatinine, spinal cord injury (SCI), urea
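A sketch of the group comparison using the reported summary statistics, assuming the ± values are standard deviations and n = 6 rats per sex per group (12 per group, split equally by sex).

```python
from scipy.stats import ttest_ind_from_stats

# Male SCI vs. male control, serum urea (mmol/L), reported as mean +/- SD
t, p = ttest_ind_from_stats(mean1=92.12, std1=0.98, nobs1=6,
                            mean2=6.31, std2=1.48, nobs2=6)
print(f"t = {t:.1f}, p = {p:.2e}")   # p < 0.05 -> significant, as reported
```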
Procedia PDF Downloads 139
6175 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults
Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter
Abstract:
Physics-based dynamic rupture modelling is necessary for estimating parameters, such as rupture velocity and slip-rate function, that are important for ground motion simulation but poorly resolved by observations, e.g., by seismic source inversion. In order to generate a large number of physically self-consistent rupture models whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated over a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations of the modeled rupture area S, the average slip Dave, and the slip asperity area Sa versus seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed the parameters that are critical for ground motion simulations, i.e., the distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied the cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large-slip areas, (2) ruptures have a tendency to initiate in small-Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise time.
Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization
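For reference, a sketch of the moment-to-magnitude conversion underlying the quoted scaling relations: Mw = (2/3)(log10 Mo − 9.1), with Mo in N·m.

```python
import numpy as np

def moment_magnitude(mo_newton_metre):
    return (2.0 / 3.0) * (np.log10(mo_newton_metre) - 9.1)

print(moment_magnitude(5.0e19))   # about Mw 7.1, inside the modeled range (Mw > 7)
```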
Procedia PDF Downloads 144
6174 Investigating Knowledge Management in Financial Organisation: Proposing a New Model for Implementing Knowledge Management
Authors: Ziba R. Tehrani, Sanaz Moayer
Abstract:
In the age of the knowledge-based economy, knowledge management has become a key factor in sustainable competitive advantage. Knowledge management means discovering, acquiring, developing, sharing, maintaining, evaluating, and using the right knowledge, at the right time, by the right person in an organization; this is accomplished by creating the right link between human resources, information technology, and an appropriate structure in order to achieve organisational goals. Studies of knowledge management in financial institutes show that knowledge management in the banking system is not different from other industries, but because of the complexity of the banking environment, its implementation is more difficult. Bank managers have found that implementing knowledge management brings many advantages to financial institutes, one of the most important of which is reducing the threat of losing knowledge when personnel quit their jobs. Special attention to the internal conditions and environment of the financial institute, and the avoidance of copying other designs, are also critical issues in designing knowledge management. In this paper, we first define the knowledge management concept and introduce existing models of knowledge management; then some of the most important models, those sharing the most similarities with the others, are reviewed. In the second step, the major objectives of knowledge management are identified according to bank requirements, with a focus on the knowledge management approach; face-to-face interviews were used for gathering data at this stage. Thirdly, these specified objectives are analysed against the responses to a questionnaire distributed among the managers and expert staff of Karafarin Bank. Finally, based on the analysed data, selected features of the existing models are combined and a new conceptual model is proposed.
Keywords: knowledge management, financial institute, knowledge management model, organisational knowledge
Procedia PDF Downloads 360
6173 Cross-Dialect Sentence Transformation: A Comparative Analysis of Language Models for Adapting Sentences to British English
Authors: Shashwat Mookherjee, Shruti Dutta
Abstract:
This study explores linguistic distinctions among American, Indian, and Irish English dialects and assesses various large language models (LLMs) in their ability to generate British English translations from these dialects. Using cosine similarity analysis, the study measures the linguistic proximity between original British English translations and those produced by the LLMs for each dialect. The findings reveal that Indian and Irish English translations maintain notably high similarity scores, suggesting strong linguistic alignment with British English. In contrast, American English exhibits slightly lower similarity, reflecting its distinct linguistic traits. Additionally, the choice of LLM significantly impacts translation quality, with Llama-2-70b consistently demonstrating superior performance. The study underscores the importance of selecting the right model for dialect translation, emphasizing the role of linguistic expertise and contextual understanding in achieving accurate translations.
Keywords: cross-dialect translation, language models, linguistic similarity, multilingual NLP
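A sketch of the cosine-similarity measurement, using TF-IDF vectors as a lightweight stand-in for whatever sentence representations the study embedded; the two sentences are invented examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "I was queueing at the chemist's near my flat."              # British English
candidate = "I was waiting in line at the drugstore near my apartment."  # LLM output

vecs = TfidfVectorizer().fit_transform([reference, candidate])
print(cosine_similarity(vecs[0], vecs[1])[0, 0])   # closer to 1 = more similar
```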
Procedia PDF Downloads 76
6172 Empirical Model for the Estimation of Global Solar Radiation on Horizontal Surface in Algeria
Authors: Malika Fekih, Abdenour Bourabaa, Rafika Hariti, Mohamed Saighi
Abstract:
In Algeria, the global solar radiation and its components are not available for all locations, which creates a need for models that estimate global solar radiation from the climatological parameters of a location. Empirical constants for these models have been estimated, and the results obtained have been tested statistically. The results show encouraging agreement between estimated and measured values.
Keywords: global solar radiation, empirical model, semi arid areas, climatological parameters
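A sketch of the classic empirical form often fitted in such studies, the Angström-Prescott relation H/H0 = a + b·(S/S0); attributing this exact equation to the paper is an assumption, since the abstract does not name its model, and the data points are toy values.

```python
import numpy as np

s_ratio = np.array([0.55, 0.62, 0.71, 0.80, 0.86])   # sunshine fraction S/S0 (toy)
h_ratio = np.array([0.48, 0.53, 0.58, 0.64, 0.68])   # clearness index H/H0 (toy)

b, a = np.polyfit(s_ratio, h_ratio, 1)               # empirical constants
estimated = a + b * s_ratio
print(f"a = {a:.3f}, b = {b:.3f}")
print("max abs error:", np.max(np.abs(estimated - h_ratio)))
```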
Procedia PDF Downloads 502
6171 Coarse-Grained Molecular Simulations to Estimate Thermophysical Properties of Phase Equilibria
Authors: Hai Hoang, Thanh Xuan Nguyen Thi, Guillaume Galliero
Abstract:
Coarse-grained (CG) molecular simulations have been shown to be an efficient way to estimate the thermophysical (static and dynamic) properties of fluids. Several strategies for defining CG molecular models have been developed and reported in the literature. Among them, those based on a top-down strategy (i.e., CG molecular models related to macroscopic observables), despite being heuristic, have increasingly gained attention. This is probably due to their simplicity of implementation and their ability to provide reasonable results not only for simple but also for complex systems. Regarding the simple force fields associated with these CG molecular models, it has been found that the four-parameter Mie chain model is one of the best compromises for describing static thermophysical properties (e.g., phase diagram, saturation pressure). However, the parameterization procedures for these Mie-chain CG molecular models given in the literature are generally insufficient to provide static and dynamic (e.g., viscosity) properties simultaneously. To deal with such situations, we have extended the corresponding-states approach by using a quantity associated with the liquid viscosity. Results obtained from molecular simulations show that our approach is able to yield good estimates of both static and dynamic thermophysical properties for various real non-associating fluids. In addition, we show that for both simple (e.g., phase diagram, saturation pressure) and complex (e.g., thermodynamic response functions, thermodynamic energy potentials) static properties, our scheme generally provides improved results compared to existing approaches.
Keywords: coarse-grained model, mie potential, molecular simulations, thermophysical properties, phase equilibria
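For reference, a sketch of the Mie (n, m) pair potential at the heart of the four-parameter chain model, U(r) = C·ε·[(σ/r)^n − (σ/r)^m], with the standard prefactor C that makes the well depth equal ε.

```python
import numpy as np

def mie(r, eps, sigma, n=12.0, m=6.0):
    c = (n / (n - m)) * (n / m) ** (m / (n - m))   # standard Mie prefactor
    return c * eps * ((sigma / r) ** n - (sigma / r) ** m)

r = np.linspace(0.9, 3.0, 5)
print(mie(r, eps=1.0, sigma=1.0))   # reduces to Lennard-Jones for (n, m) = (12, 6)
```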
Procedia PDF Downloads 336