Search results for: panel models
6591 Developing A Third Degree Of Freedom For Opinion Dynamics Models Using Scales
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Opinion dynamics models use an agent-based modeling approach to model people's opinions. A model's properties are usually explored by testing the two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. This can be used to turn one model into another or to change a model's output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously. Thus, it is fundamental to know how a real-world opinion (e.g., supporting a candidate) can be turned into a number. Specifically, we want to know if, by choosing a different opinion-to-number transformation, the model's dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In this field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted one into the other, in the same way as we convert meters to feet. Thus, in our work, we analyze how this scale transformation may affect opinion dynamics models. We perform our analysis both using mathematical modeling and validating it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change the model's dynamics up to a qualitative level, meaning that a researcher may reach a totally different conclusion from the same dataset simply by slightly changing the way the data are pre-processed. Indeed, we quantify that this effect may alter the model's output by 100%. By using two models from the standard literature, we show that a scale transformation can transform one model into the other. This transformation is exact, and it holds for every result. Lastly, we also test the case of using real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we think that scale transformation should be considered a third degree of freedom for opinion dynamics. Indeed, its properties have a strong impact both on theoretical models and on their application to real-world data.
Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
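For illustration, the following is a minimal Python sketch of the kind of experiment described: a bounded-confidence (Deffuant-style) interaction rule run on opinions before and after a monotonic scale transformation. The rescaling function, parameter values and the crude cluster count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def deffuant(opinions, eps=0.2, mu=0.5, steps=30_000):
    """Bounded-confidence updates: a random pair moves closer only if the
    two opinions differ by less than the confidence threshold eps."""
    x = opinions.copy()
    n = len(x)
    for _ in range(steps):
        i, j = rng.integers(n, size=2)
        if abs(x[i] - x[j]) < eps:
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

def rescale(opinions, gamma):
    """Monotonic scale transformation on [0, 1], standing in for a change of
    the opinion-to-number mapping (gamma = 1 leaves the scale unchanged)."""
    return opinions ** gamma

initial = rng.uniform(0, 1, 500)
for gamma in (1.0, 2.0):
    final = deffuant(rescale(initial, gamma))
    clusters = len(np.unique(np.round(final, 1)))  # crude cluster count
    print(f"gamma={gamma}: ~{clusters} opinion clusters")
```
Running the same interaction rule on a transformed scale can change the number of surviving opinion clusters, which is the qualitative effect the abstract refers to.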
Procedia PDF Downloads 154
6590 Understanding the Role of Gas Hydrate Morphology on the Producibility of a Hydrate-Bearing Reservoir
Authors: David Lall, Vikram Vishal, P. G. Ranjith
Abstract:
Numerical modeling of gas production from hydrate-bearing reservoirs requires the solution of various thermal, hydrological, chemical, and mechanical phenomena in a coupled manner. Among the various reservoir properties that influence gas production estimates, the distribution of permeability across the domain is one of the most crucial parameters since it determines both heat transfer and mass transfer. The aspect of permeability in hydrate-bearing reservoirs is particularly complex compared to conventional reservoirs since it depends on the saturation of gas hydrates and hence, is dynamic during production. The dependence of permeability on hydrate saturation is mathematically represented using permeability-reduction models, which are specific to the expected morphology of hydrate accumulations (such as grain-coating or pore-filling hydrates). In this study, we demonstrate the impact of various permeability-reduction models, and consequently, different morphologies of hydrate deposits on the estimates of gas production using depressurization at the reservoir scale. We observe significant differences in produced water volumes and cumulative mass of produced gas between the models, thereby highlighting the uncertainty in production behavior arising from the ambiguity in the prevalent gas hydrate morphology.
Keywords: gas hydrate morphology, multi-scale modeling, THMC, fluid flow in porous media
Procedia PDF Downloads 218
6589 Hybrid Direct Numerical Simulation and Large Eddy Simulating Wall Models Approach for the Analysis of Turbulence Entropy
Authors: Samuel Ahamefula
Abstract:
Turbulent motion is a highly nonlinear and complex phenomenon, and its modelling is still very challenging. In this study, we developed a hybrid computational approach to accurately simulate the fluid turbulence phenomenon. The focus is on the coupling and transition between Direct Numerical Simulation (DNS) and Large Eddy Simulation with Wall Models (LES-WM) regions. In the framework, high-order, high-fidelity fluid dynamics methods are utilized to simulate the unsteady compressible Navier-Stokes equations in the Eulerian format on unstructured moving grids. The coupling and transition between DNS and LES-WM are conducted through a linearly staggered Dirichlet-Neumann coupling scheme. The high-fidelity framework is verified and validated with respect to the ability of the DNS to capture the full range of turbulent scales with accurate results, and the efficiency of the LES-WM in simulating the near-wall turbulent boundary layer by using wall models.
Keywords: computational methods, turbulence modelling, turbulence entropy, Navier-Stokes equations
Procedia PDF Downloads 98
6588 Comparison of Spiking Neuron Models in Terms of Biological Neuron Behaviours
Authors: Fikret Yalcinkaya, Hamza Unsal
Abstract:
To understand how neurons work, it is required to combine experimental studies in neuroscience with numerical simulations of neuron models in a computer environment. In this regard, the simplicity and applicability of spiking neuron modeling functions have been of great interest in computational neuroscience and numerical neuroscience in recent years. Spiking neuron models can be classified by the various neuronal behaviors they exhibit, such as spiking and bursting. These classifications are important for researchers working on theoretical neuroscience. In this paper, three different spiking neuron models, Izhikevich, Adaptive Exponential Integrate-and-Fire (AEIF) and Hindmarsh-Rose (HR), which are based on first-order differential equations, are discussed and compared. First, the physical meanings, derivatives, and differential equations of each model are provided and simulated in the Matlab environment. Then, by selecting appropriate parameters, the models were visually examined in the Matlab environment, with the aim of demonstrating which model can simulate well-known biological neuron behaviours such as Tonic Spiking, Tonic Bursting, Mixed Mode Firing, Spike Frequency Adaptation, Resonator and Integrator. As a result, the Izhikevich model has been shown to perform Regular Spiking, Continuous Bursting, Intrinsically Bursting, Thalamo-Cortical, Low-Threshold Spiking and Resonator behaviours. The Adaptive Exponential Integrate-and-Fire model has been able to produce firing patterns such as Regular Firing, Adaptive Firing, Initial Bursting, Regular Bursting, Delayed Firing, Delayed Regular Bursting, Transient Firing and Irregular Firing. The Hindmarsh-Rose model showed three different dynamic neuron behaviours: Spiking, Bursting and Chaotic. From these results, the Izhikevich cell model may be preferred due to its ability to reflect the true behavior of the nerve cell, its ability to produce different types of spikes, and its suitability for use in larger-scale brain models. The most important reason for choosing the Adaptive Exponential Integrate-and-Fire model is that it can create rich firing patterns with fewer parameters. The chaotic behaviour of the Hindmarsh-Rose neuron model, like that of some other chaotic systems, is thought to be useful in many scientific and engineering applications such as physics, secure communication and signal processing.
Keywords: Izhikevich, adaptive exponential integrate-and-fire, Hindmarsh-Rose, biological neuron behaviours, spiking neuron models
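As an illustration of the first of these models, a minimal Python sketch of the Izhikevich model with simple Euler integration is shown below; the paper's own simulations were carried out in Matlab, and the regular (tonic) spiking parameter set used here is the standard published one, not taken from the paper.

```python
import numpy as np

def izhikevich(a, b, c, d, I=10.0, T=1000.0, dt=0.5):
    """Euler integration of the two-variable Izhikevich (2003) model."""
    n = int(T / dt)
    v, u = -65.0, b * -65.0
    spikes, trace = [], np.empty(n)
    for k in range(n):
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30:            # spike: reset membrane potential and recovery variable
            spikes.append(k * dt)
            v, u = c, u + d
        trace[k] = v
    return np.array(spikes), trace

# Regular (tonic) spiking parameter set from Izhikevich's paper
spikes, _ = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0)
print(f"{len(spikes)} spikes in 1 s of simulated time")
```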
Procedia PDF Downloads 179
6587 Aggregate Production Planning Framework in a Multi-Product Factory: A Case Study
Authors: Ignatio Madanhire, Charles Mbohwa
Abstract:
This study looks at the best model of aggregate planning activity in an industrial entity and uses the trial and error method on spreadsheets to solve aggregate production planning problems. A linear programming model is also introduced to optimize the aggregate production planning problem. Application of the models in a furniture production firm is evaluated to demonstrate that practical and beneficial solutions can be obtained from the models. Finally, some benchmarking of other furniture manufacturing industries was undertaken to assess the relevance and level of use of such models in other furniture firms.
Keywords: aggregate production planning, trial and error, linear programming, furniture industry
Procedia PDF Downloads 555
6586 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site conditions. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques, such as Artificial Neural Networks, Random Forests, and Support Vector Machines, as statistical methods in ground motion prediction. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The main reason for considering this database is the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. Accuracy of the models in predicting intensity measures, generalization capability of the models for future data, as well as usability of the models are discussed in the evaluation process. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
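For illustration, the following is a minimal Python sketch of the random forest variant of such a ground-motion predictor; the synthetic magnitude-distance-site features and the log-PGA relation used to generate them are placeholder assumptions, not the study's database.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 2000
# Placeholder predictors: magnitude, hypocentral distance (km), site stiffness proxy
X = np.column_stack([
    rng.uniform(3.0, 5.8, n),
    rng.uniform(4.0, 500.0, n),
    rng.uniform(200.0, 800.0, n),
])
# Synthetic log-PGA with magnitude scaling and geometric attenuation plus noise
y = 0.9 * X[:, 0] - 1.3 * np.log(X[:, 1]) - 0.002 * X[:, 2] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R2 on held-out records:", round(r2_score(y_te, model.predict(X_te)), 3))
```
The tree ensemble learns the magnitude scaling and distance attenuation directly from the data, which is the property the abstract contrasts with fixed-form regression equations.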
Procedia PDF Downloads 121
6585 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models
Authors: I. V. Pinto, M. R. Sooriyarachchi
Abstract:
It can frequently be observed that the data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handle this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods which are compared in this study are marginal quasi-likelihood (order 1 & order 2) (MQL1, MQL2) and penalized quasi-likelihood (order 1 & order 2) (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset. Therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is also equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations in all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.
Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error
Procedia PDF Downloads 142
6584 Grid Connected Photovoltaic Micro Inverter
Authors: S. J. Bindhu, Edwina G. Rodrigues, Jijo Balakrishnan
Abstract:
A grid-connected photovoltaic (PV) micro inverter with good performance properties is proposed in this paper. The proposed inverter uses a quadrupler, which gives higher efficiency and lower voltage stress across the diodes. The stress across the diodes used in the inverter section is considerably low in the proposed converter, and the protection scheme provided can eliminate the chance of errors due to faults. The proposed converter is implemented using the perturb and observe algorithm so that voltage fluctuation can be reduced and the maximum power point can be attained. Finally, some simulation and experimental results are also presented to demonstrate the effectiveness of the proposed converter.
Keywords: DC-DC converter, MPPT, quadrupler, PV panel
Procedia PDF Downloads 840
6583 Inappropriate Prescribing Defined by START and STOPP Criteria and Its Association with Adverse Drug Events among Older Hospitalized Patients
Authors: Mohd Taufiq bin Azmy, Yahaya Hassan, Shubashini Gnanasan, Loganathan Fahrni
Abstract:
Inappropriate prescribing in older patients has been associated with resource utilization and adverse drug events (ADEs) such as hospitalization, morbidity and mortality. Globally, there is a lack of published data on ADEs induced by inappropriate prescribing. Our study is specific to an older population and is aimed at identifying risk factors for ADEs and developing a model that links ADEs to inappropriate prescribing. The design of the study was prospective, whereby computerized medical records of 302 hospitalized elderly patients aged 65 years and above in 3 public hospitals in Malaysia (Hospital Serdang, Hospital Selayang and Hospital Sungai Buloh) were studied over a 7-month period from September 2013 until March 2014. Potentially inappropriate medications and potential prescribing omissions were determined using the published and validated START-STOPP criteria. Patients who had at least one inappropriate medication were included in Phase II of the study, where ADEs were identified by a local expert consensus panel based on the published and validated Naranjo ADR probability scale. The panel also assessed whether ADEs were causal or contributory to the current hospitalization. The association between inappropriate prescribing and ADEs (hospitalization, mortality and adverse drug reactions) was determined by identifying whether or not the former was causal or contributory to the latter. The rate of ADE avoidability was also determined. Our findings revealed that the prevalence of potentially inappropriate prescribing was 58.6%. ADEs were detected in 31 of 105 patients (29.5%) when STOPP criteria were used to identify potentially inappropriate medications; all 31 ADEs (100%) were considered causal or contributory to admission. Of the 31 ADEs, 28 (90.3%) were considered avoidable or potentially avoidable. After adjusting for age, sex, comorbidity, dementia, baseline activities of daily living function, and number of medications, the likelihood of a serious avoidable ADE increased significantly when a potentially inappropriate medication was prescribed (odds ratio, 11.18; 95% confidence interval [CI], 5.014 - 24.93; p < .001). The medications identified by STOPP criteria are significantly associated with avoidable ADEs in older people that cause or contribute to urgent hospitalization, but contributed less towards morbidity and mortality. The findings of the study underscore the importance of preventing inappropriate prescribing.
Keywords: adverse drug events, appropriate prescribing, health services research
Procedia PDF Downloads 398
6582 Using Machine Learning to Classify Different Body Parts and Determine Healthiness
Authors: Zachary Pan
Abstract:
Our general mission is to solve the problem of classifying images into different body part types and deciding if each of them is healthy or not. However, for now, we will determine healthiness for only one-sixth of the body parts, specifically the chest. We will detect pneumonia in X-ray scans of those chest images. With this type of AI, doctors can use it as a second opinion when they are taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach to this problem is to split the problem into two parts: first, classify the image, then determine if it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Now, using the test set, we can obtain a realistic estimate of the accuracy the models will have on images in the real world, since these testing images have never been seen by the models before. In order to increase this testing accuracy, we can also apply many complex algorithms to the models, like multiplicative weight update. For the second part of the problem, to determine if the body part is healthy, we can have another dataset consisting of healthy and non-healthy images of the specific body part and once again split that into the test and training sets. We then use another neural network to train on those training set images and use the testing set to figure out its accuracy. We will do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, neural networks with multiplicative weight update, neural networks with the black box algorithm, and the convolutional neural network achieved 96.83 percent accuracy, 97.33 percent accuracy, 97.83 percent accuracy, 96.67 percent accuracy, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines if the images are healthy or not is around 78.37 percent.
Keywords: body part, healthcare, machine learning, neural networks
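As an illustration of the image-classification half of this pipeline, the following is a minimal PyTorch sketch of a small convolutional classifier and one training step; the image size, architecture and random tensors standing in for a batch are illustrative assumptions, not the study's actual network or data.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy convolutional classifier for 1-channel 128x128 chest images,
    two classes: healthy vs. pneumonia."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 16 * 16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random tensors standing in for a batch
images = torch.randn(8, 1, 128, 128)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print("batch loss:", float(loss))
```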
Procedia PDF Downloads 103
6581 Review of Hydrologic Applications of Conceptual Models for Precipitation-Runoff Process
Authors: Oluwatosin Olofintoye, Josiah Adeyemo, Gbemileke Shomade
Abstract:
The relationship between rainfall and runoff is an important issue in surface water hydrology; therefore, the understanding and development of accurate rainfall-runoff models and their application in water resources planning, management and operation are of paramount importance in hydrological studies. This paper reviews some of the previous work on rainfall-runoff process modeling. The hydrologic applications of conceptual models and artificial neural networks (ANNs) for precipitation-runoff process modeling were studied. Gradient training methods such as error back-propagation (BP) and evolutionary algorithms (EAs) are discussed in relation to the training of artificial neural networks, and it is shown that the application of EAs to artificial neural network training could be an alternative to other training methods. Therefore, further research to exploit the abundant expert knowledge in the area of artificial intelligence for the solution of hydrologic and water resources planning and management problems is needed.
Keywords: artificial intelligence, artificial neural networks, evolutionary algorithms, gradient training method, rainfall-runoff model
Procedia PDF Downloads 453
6580 The Effect of Symmetry on the Perception of Happiness and Boredom in Design Products
Authors: Michele Sinico
Abstract:
The present research investigates the effect of symmetry on the perception of happiness and boredom in design products. Three experiments were carried out in order to verify the degree of visual expressive value in different models of bookcases, wall clocks, and chairs. Sixty participants directly indicated the degree of happiness and boredom using 7-point rating scales. The findings show that the participants acknowledged a different value of expressive quality in the different product models. Results also show that symmetry is not a significant constraint for an emotional design project.
Keywords: product experience, emotional design, symmetry, expressive qualities
Procedia PDF Downloads 146
6579 Airliner-UAV Flight Formation in Climb Regime
Authors: Pavel Zikmund, Robert Popela
Abstract:
Extreme formation is a theoretical concept of self-sustained flight in which a large airliner is followed by a small UAV glider flying in the airliner's wake vortex. The paper presents the results of a climb analysis whose goal is to lift the gliding UAV to the airliner's cruise altitude. Wake vortex models, the UAV's drag polar and basic parameters, and the airliner's climb profile are introduced first. Then, the flight performance of the UAV in the wake vortex is evaluated by analytical methods. The time history of the optimal distance between the airliner and the UAV during the climb is determined. The results are encouraging; therefore, the UAV drag margin available for electricity generation is determined for different vortex models.
Keywords: flight in formation, self-sustained flight, UAV, wake vortex
Procedia PDF Downloads 436
6578 Cascade Multilevel Inverter-Based Grid-Tie Single-Phase and Three-Phase-Photovoltaic Power System Controlling and Modeling
Authors: Syed Masood Hussain
Abstract:
An effective control method, including system-level control and pulse width modulation, for a quasi-Z-source cascade multilevel inverter (qZS-CMI) based grid-tie photovoltaic (PV) power system is proposed. The system-level control achieves grid-tie current injection, independent maximum power point tracking (MPPT) for separate PV panels, and dc-link voltage balance for all quasi-Z-source H-bridge inverter (qZS-HBI) modules. A recent upsurge in the study of photovoltaic (PV) power generation has emerged, since PV systems directly convert solar radiation into electric power without hampering the environment. However, the stochastic fluctuation of solar power is inconsistent with the desired stable power injected into the grid, owing to variations of solar irradiation and temperature. To fully exploit the solar energy, extracting the PV panels' maximum power and feeding it into the grid at unity power factor becomes most important. Contributions in this direction have been made by the cascade multilevel inverter (CMI). Nevertheless, the H-bridge inverter (HBI) module lacks a boost function, so the inverter kVA rating requirement has to be doubled for a PV voltage range of 1:2, and the different PV panel output voltages result in imbalanced dc-link voltages. However, each HBI module is a two-stage inverter, and many extra dc-dc converters not only increase the complexity of the power circuit and control and the system cost, but also decrease the efficiency. Recently, Z-source/quasi-Z-source cascade multilevel inverter (ZS/qZS-CMI) based PV systems were proposed. They possess the advantages of both traditional CMI and Z-source topologies. In order to properly operate the ZS/qZS-CMI, power injection, independent control of dc-link voltages, and pulse width modulation (PWM) are necessary. The main contributions of this paper include: 1) a novel multilevel space vector modulation (SVM) technique for the single-phase qZS-CMI, which is implemented without additional resources; and 2) a grid-connected control for the qZS-CMI based PV system, where all PV panel voltage references from their independent MPPTs are used to control the grid-tie current, together with a dual-loop dc-link peak voltage control.
Keywords: quasi-Z-source inverter, photovoltaic power system, space vector modulation, cascade multilevel inverter
Procedia PDF Downloads 540
6577 Control of Photovoltaic System Interfacing Grid
Authors: Zerzouri Nora
Abstract:
In this paper, the author presents the generalities of a photovoltaic system study and simulation. A DC-DC converter is inserted to raise the voltage level and improve the operation of the PV panel by keeping the operating point at maximum power using the Perturb and Observe (P&O) technique. The connection to the network is made by inserting a three-phase voltage inverter that allows synchronization with the network; the inverter is controlled by PWM. The simulation results allow the author to visualize the operation of the different components of the system, as well as the behavior of the system under varying meteorological conditions.
Keywords: photovoltaic generator PV, boost converter, P&O MPPT, PWM inverter, three-phase grid
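For illustration, a minimal Python sketch of the Perturb and Observe decision rule is shown below; the toy PV power curve, step size and starting point are illustrative assumptions, not the author's simulation model.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.05):
    """One P&O iteration: return the next voltage perturbation based on how
    the PV power changed after the previous perturbation."""
    dv, dp = v - v_prev, p - p_prev
    if dp == 0:
        return 0.0
    # Keep the perturbation direction if power rose, reverse it if power fell.
    if (dp > 0 and dv > 0) or (dp < 0 and dv < 0):
        return +step
    return -step

def pv_power(v):
    """Toy PV power curve with a maximum power point near v = 17 V."""
    return max(0.0, -0.5 * (v - 17.0) ** 2 + 120.0)

v, v_prev = 12.0, 11.9
p_prev = pv_power(v_prev)
for _ in range(200):
    p = pv_power(v)
    dv = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev = v, p
    v += dv
print("operating voltage after tracking:", round(v, 2))
```
The operating point climbs toward the maximum power point and then oscillates around it with the amplitude of the perturbation step, which is the usual trade-off of the P&O technique.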
Procedia PDF Downloads 117
6576 Factors of Non-Conformity Behavior and the Emergence of a Ponzi Game in the Riba-Free (Interest-Free) Banking System of Iran
Authors: Amir Hossein Ghaffari Nejad, Forouhar Ferdowsi, Reza Mashhadi
Abstract:
In the interest-free banking system of Iran, the savings of society are held in the form of bank deposits, and banks, using Islamic contracts, allocate these resources to applicants for facilities and credit. Meanwhile, the central bank, with the aim of implementing monetary policy, determines the maximum interest rate on bank deposits according to macroeconomic requirements. But in recent years, the country's economic constraints, together with stagflation and the institutional weaknesses of Iran's financial market, have resulted in massive disturbances in the balance sheet of the banking system, leading to a period of maturity mismatch between the banks' assets and liabilities and the implementation of a Ponzi game. This issue caused the determination of interest rates in long-term bank deposit contracts to be associated with non-observance of the maximum rate set by the central bank. The result of this condition was the allocation of newly mobilized resources to meet past commitments towards old depositors; as a result, a significant part of the supply of funds leaked out of the facility-granting cycle and a credit crunch emerged. The purpose of this study is to identify the most important factors affecting the occurrence of this non-conforming banking behavior using data from 19 public and private banks of Iran. For this purpose, the causes of this non-conforming behavior of banks have been investigated using the panel vector autoregression (PVAR) method for the period 2007-2015. Granger causality test results suggest that the returns of markets parallel to bank deposits, non-performing loans and the high ratio of facilities to banks' deposits are all causes of the formation of non-conforming behavior. Also, according to the results of impulse response functions and variance decomposition, non-performing loans (NPL) and the ratio of facilities to deposits have the highest long-term effect and the largest contribution to explaining the changes in banks' non-conforming behavior in determining the interest rate on deposits.
Keywords: non-conformity behavior, Ponzi game, panel vector autoregression, nonperforming loans
Procedia PDF Downloads 217
6575 Problem Gambling in the Conceptualization of Health Professionals: A Qualitative Analysis of the Discourses Produced by Psychologists, Psychiatrists and General Practitioners
Authors: T. Marinaci, C. Venuleo
Abstract:
Different conceptualizations of disease affect patient care. This study aims to address this gap. It explores how health professionals conceptualize gambling problem, addiction and the goals of recovery process. In-depth, semi-structured, open-ended interviews were conducted with Italian psychologists, psychiatrists, general practitioners, and support staff (N= 114), working within health centres for the treatment of addiction (public health services or therapeutic communities) or medical offices. A Lexical Correspondence Analysis (LCA) was applied to the verbatim transcripts. LCA allowed to identify two main factorial dimensions, which organize similarity and dissimilarity in the discourses of the interviewed. The first dimension labelled 'Models of relationship with the problem', concerns two different models of relationship with the health problem: one related to the request for help and the process of taking charge and the other related to the identification of the psychopathology underlying the disorder. The second dimension, labelled 'Organisers of the intervention' reflects the dialectic between two ways to address the problem. On the one hand, they are the gambling dynamics and its immediate life-consequences to organize the intervention (whatever the request of the user is); on the other hand, they are the procedures and the tools which characterize the health service to organize the way the professionals deal with the user’ s problem (whatever it is and despite the specify of the user’s request). The results highlight how, despite the differences, the respondents share a central assumption: understanding gambling problem implies the reference to the gambler’s identity, more than, for instance, to the relational, social, cultural or political context where the gambler lives. A passive stance is attributed to the user, who does not play any role in the definition of the goal of the intervention. The results will be discussed to highlight the relationship between professional models and users’ ways to understand and deal with the problems related to gambling.Keywords: cultural models, health professionals, intervention models, problem gambling
Procedia PDF Downloads 154
6574 Probing Syntax Information in Word Representations with Deep Metric Learning
Authors: Bowen Ding, Yihao Kuang
Abstract:
In recent years, with the development of large-scale pre-trained language models, building vector representations of text through deep neural network models has become a standard practice for natural language processing tasks. From the performance on downstream tasks, we know that the text representations constructed by these models contain linguistic information, but its encoding mode and extent are unclear. In this work, a structural probe is proposed to detect whether the vector representation produced by a deep neural network embeds a syntax tree. The probe is trained with a deep metric learning method, so that the distance between word vectors in the metric space it defines encodes the distance between words on the syntax tree, and the norm of a word vector encodes the depth of the word on the syntax tree. The experimental results on ELMo and BERT show that the syntax tree is encoded in their parameters and in the word representations they produce.
Keywords: deep metric learning, syntax tree probing, natural language processing, word representations
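The distance half of such a probe can be sketched in a few lines; the following Python/PyTorch fragment, with random tensors standing in for contextual word vectors and gold syntax-tree distances, is an illustrative assumption about the setup rather than the authors' code.

```python
import torch

torch.manual_seed(0)
dim, rank, n_words = 768, 64, 12

B = torch.nn.Parameter(torch.randn(dim, rank) * 0.01)   # linear probe matrix
optimizer = torch.optim.Adam([B], lr=1e-3)

# Stand-ins for contextual word vectors and gold syntax-tree distances
h = torch.randn(n_words, dim)
tree_dist = torch.randint(1, 6, (n_words, n_words)).float()
tree_dist = (tree_dist + tree_dist.T) / 2
tree_dist.fill_diagonal_(0)

for _ in range(200):
    proj = h @ B                                   # (n_words, rank) projections
    diff = proj.unsqueeze(1) - proj.unsqueeze(0)   # pairwise differences
    pred_sq_dist = (diff ** 2).sum(-1)             # squared L2 distances
    loss = (pred_sq_dist - tree_dist).abs().mean() # match tree distances
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print("final probe loss:", float(loss))
```
A companion probe with the same structure can be trained so that the squared norm of each projected vector matches the word's depth in the tree.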
Procedia PDF Downloads 64
6573 Prediction of Bodyweight of Cattle by Artificial Neural Networks Using Digital Images
Authors: Yalçın Bozkurt
Abstract:
Prediction models were developed for accurate prediction of bodyweight (BW) from digital images of beef cattle body dimensions using Artificial Neural Networks (ANN). For this purpose, animal data were collected at a private slaughterhouse; the digital images and the weight of each live animal were taken just before slaughter, and body dimensions such as digital wither height (DJWH), digital body length (DJBL), digital body depth (DJBD), digital hip width (DJHW), digital hip height (DJHH) and digital pin bone length (DJPL) were determined from the images, using data with 1,069 observations for each trait. Then, prediction models were developed by ANN. Digital body measurements were analysed by ANN for bodyweight prediction, and the R2 values of DJBL, DJWH, DJHW, DJBD, DJHH and DJPL were approximately 94.32, 91.31, 80.70, 83.61, 89.45 and 70.56%, respectively. It can be concluded that, in management situations where BW cannot be measured, it can be predicted accurately by measuring DJBL and DJWH alone or together with DJBD and even DJHH, and that different models may be needed to predict BW under different feeding and environmental conditions and breeds.
Keywords: artificial neural networks, bodyweight, cattle, digital body measurements
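For illustration, a minimal Python sketch of an ANN regressor of this kind is shown below; the synthetic measurements and the linear relation used to generate the bodyweights are placeholder assumptions, not the collected slaughterhouse data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n = 1069
# Synthetic stand-ins for digital body length and wither height (cm)
body_length = rng.uniform(120, 180, n)
wither_height = rng.uniform(110, 150, n)
X = np.column_stack([body_length, wither_height])
# Synthetic bodyweight (kg) with noise standing in for measured weights
y = 3.5 * body_length + 2.0 * wither_height - 500 + rng.normal(0, 15, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                 random_state=0))
ann.fit(X_tr, y_tr)
print("R2 on held-out animals:", round(r2_score(y_te, ann.predict(X_te)), 3))
```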
Procedia PDF Downloads 372
6572 Forecasting Equity Premium Out-of-Sample with Sophisticated Regression Training Techniques
Authors: Jonathan Iworiso
Abstract:
Forecasting the equity premium out-of-sample is a major concern to researchers in finance and emerging markets. The quest for a superior model that can forecast the equity premium with significant economic gains has resulted in several controversies among scholars on the choice of variables and suitable techniques. This research focuses mainly on the application of Regression Training (RT) techniques to forecast the monthly equity premium out-of-sample recursively with an expanding-window method. A broad category of sophisticated regression models involving model complexity was employed. The RT models, which include Ridge, Forward-Backward (FOBA) Ridge, Least Absolute Shrinkage and Selection Operator (LASSO), Relaxed LASSO, Elastic Net, and Least Angle Regression, were trained and used to forecast the equity premium out-of-sample. In this study, the empirical investigation of the RT models demonstrates significant evidence of equity premium predictability, both statistically and economically, relative to the benchmark historical average, delivering significant utility gains. They seek to provide meaningful economic information on mean-variance portfolio investment for investors who are timing the market to earn future gains at minimal risk. Thus, the forecasting models appear to serve an investor in a market setting who optimally reallocates a monthly portfolio between equities and risk-free treasury bills using equity premium forecasts at minimal risk.
Keywords: regression training, out-of-sample forecasts, expanding window, statistical predictability, economic significance, utility gains
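For illustration, the following is a minimal Python sketch of a recursive expanding-window forecast evaluated against the historical-average benchmark; the simulated predictors, the Ridge/LASSO settings and the out-of-sample R2 computation are illustrative assumptions, not the study's dataset or its full set of models.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
T, k, start = 360, 10, 120            # 30 years of monthly data, 10 predictors
X = rng.normal(size=(T, k))
y = 0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 1.0, T)   # equity premium proxy

def expanding_window_forecast(model, X, y, start):
    """Re-fit the model each month on all data up to t and predict month t+1."""
    preds = []
    for t in range(start, len(y) - 1):
        model.fit(X[: t + 1], y[: t + 1])
        preds.append(model.predict(X[t + 1 : t + 2])[0])
    return np.array(preds), y[start + 1 :]

benchmark = np.array([y[: t + 1].mean() for t in range(start, len(y) - 1)])
for name, model in [("Ridge", Ridge(alpha=1.0)), ("LASSO", Lasso(alpha=0.05))]:
    preds, actual = expanding_window_forecast(model, X, y, start)
    oos_r2 = 1 - mean_squared_error(actual, preds) / mean_squared_error(actual, benchmark)
    print(f"{name}: out-of-sample R2 vs. historical average = {oos_r2:.3f}")
```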
Procedia PDF Downloads 106
6571 Structure of Turbulence Flow in the Wire-Wrapped Fuel Assemblies of BREST-OD-300
Authors: Dmitry V. Fomichev, Vladimir I. Solonin
Abstract:
In this paper, an experimental and numerical study of the hydrodynamic characteristics of air coolant flow in a test wire-wrapped assembly is presented. The test assembly has 37 rods, which are geometrically similar to the real fuel pins of the BREST-OD-300 fuel assemblies. An open-loop air test facility installed at the "Nuclear Power Plants and Installations" department of BMSTU was used to obtain the experimental data. The obtained altitudinal distribution of static pressure in the near-wall region of the test assembly, as well as the velocity and temperature distributions of the coolant flow in the test sections, can give us some new knowledge about the mechanism of formation of the turbulent flow structure in wire-wrapped fuel assemblies. Numerical simulations of the turbulent flow have been accomplished using ANSYS Fluent 14.5. Different non-local turbulence models have been considered, such as the standard and RNG k-epsilon models and the k-omega SST model. The results of the numerical simulations of the flow based on the considered turbulence models give the best agreement with the experimental data and help us to carry out a thorough analysis of the flow characteristics.
Keywords: wire-spaced fuel assembly, turbulent flow structure, computational fluid dynamics
Procedia PDF Downloads 458
6570 Advances in Design Decision Support Tools for Early-stage Energy-Efficient Architectural Design: A Review
Authors: Maryam Mohammadi, Mohammadjavad Mahdavinejad, Mojtaba Ansari
Abstract:
The main driving forces for the increasing movement towards the design of High-Performance Buildings (HPB) are building codes and rating systems that address the various components of the building and their impact on the environment and energy conservation through various methods, like prescriptive methods or simulation-based approaches. The methods and tools developed to meet these needs, which are often based on building performance simulation tools (BPST), have limitations in terms of compatibility with the integrated design process (IDP) and HPB design, as well as use by architects in the early stages of design (when the most important decisions are made). To overcome these limitations, efforts have been made in recent years to develop Design Decision Support Systems, which are often based on artificial intelligence. Numerous needs and steps for designing and developing a Decision Support System (DSS) that complies with the early stages of energy-efficient architectural design, consisting of combinations of different methods in an integrated package, have been listed in the literature. While various review studies have been conducted in connection with each of these techniques (such as optimization, sensitivity and uncertainty analysis, etc.) and their integration with specific targets, this article is a critical and holistic review of the research that leads to the development of applicable systems or the introduction of a comprehensive framework for developing models that comply with the IDP. Information resources such as Science Direct and Google Scholar are searched using specific keywords, and the results are divided into two main categories: simulation-based DSSs and meta-simulation-based DSSs. The strengths and limitations of different models are highlighted, two general conceptual models are introduced for each category, and the degree of compliance of these models with the IDP framework is discussed. The research shows a movement towards Multi-Level of Development (MOD) models that combine well with the early stages of integrated design (the schematic design and design development stages), which are heuristic, hybrid and meta-simulation-based, and rely on big real-world data (like Building Energy Management System data or web data). Obtaining, using and combining these data with simulation data to create models that account for higher uncertainty, models that are more dynamic and more sensitive to context and culture, as well as models that can generate economical, energy-efficient design scenarios using local data (to be more harmonized with circular economy principles), are important research areas in this field. The results of this study are a roadmap for researchers and developers of these tools.
Keywords: integrated design process, design decision support system, meta-simulation based, early stage, big data, energy efficiency
Procedia PDF Downloads 161
6569 Development of an Interactive Display-Control Layout Design System for Trains Based on Train Drivers' Mental Models
Authors: Hyeonkyeong Yang, Minseok Son, Taekbeom Yoo, Woojin Park
Abstract:
Human error is the most salient contributing factor to railway accidents. To reduce the frequency of human errors, many researchers and train designers have adopted ergonomic design principles for designing display-control layout in rail cab. There exist a number of approaches for designing the display control layout based on optimization methods. However, the ergonomically optimized layout design may not be the best design for train drivers, since the drivers have their own mental models based on their experiences. Consequently, the drivers may prefer the existing display-control layout design over the optimal design, and even show better driving performance using the existing design compared to that using the optimal design. Thus, in addition to ergonomic design principles, train drivers’ mental models also need to be considered for designing display-control layout in rail cab. This paper developed an ergonomic assessment system of display-control layout design, and an interactive layout design system that can generate design alternatives and calculate ergonomic assessment score in real-time. The design alternatives generated from the interactive layout design system may not include the optimal design from the ergonomics point of view. However, the system’s strength is that it considers train drivers’ mental models, which can help generate alternatives that are more friendly and easier to use for train drivers. Also, with the developed system, non-experts in ergonomics, such as train drivers, can refine the design alternatives and improve ergonomic assessment score in real-time.Keywords: display-control layout design, interactive layout design system, mental model, train drivers
Procedia PDF Downloads 305
6568 Local Interpretable Model-agnostic Explanations (LIME) Approach to Email Spam Detection
Authors: Rohini Hariharan, Yazhini R., Blessy Maria Mathew
Abstract:
The task of detecting email spam is a very important one in the era of digital technology that needs effective ways of curbing unwanted messages. This paper presents an approach aimed at making email spam categorization algorithms transparent, reliable and more trustworthy by incorporating Local Interpretable Model-agnostic Explanations (LIME). Our technique assists in providing interpretable explanations for specific classifications of emails to help users understand the decision-making process by the model. In this study, we developed a complete pipeline that incorporates LIME into the spam classification framework and allows creating simplified, interpretable models tailored to individual emails. LIME identifies influential terms, pointing out key elements that drive classification results, thus reducing opacity inherent in conventional machine learning models. Additionally, we suggest a visualization scheme for displaying keywords that will improve understanding of categorization decisions by users. We test our method on a diverse email dataset and compare its performance with various baseline models, such as Gaussian Naive Bayes, Multinomial Naive Bayes, Bernoulli Naive Bayes, Support Vector Classifier, K-Nearest Neighbors, Decision Tree, and Logistic Regression. Our testing results show that our model surpasses all other models, achieving an accuracy of 96.59% and a precision of 99.12%.Keywords: text classification, LIME (local interpretable model-agnostic explanations), stemming, tokenization, logistic regression.
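For illustration, a minimal sketch of this kind of pipeline using the lime package's text explainer around a scikit-learn classifier is shown below; the tiny placeholder corpus and the choice of TF-IDF with logistic regression are assumptions made for the example, not the authors' exact setup.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from lime.lime_text import LimeTextExplainer

# Placeholder corpus; a real run would use a labelled spam/ham dataset
texts = ["win a free prize now", "meeting moved to 3pm", "claim your free reward",
         "lunch tomorrow?", "free cash offer click now", "project report attached"]
labels = [1, 0, 1, 0, 1, 0]          # 1 = spam, 0 = ham

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["ham", "spam"])
explanation = explainer.explain_instance(
    "claim your free prize now",
    pipeline.predict_proba,          # LIME perturbs the text and queries this
    num_features=4,
)
print(explanation.as_list())         # influential words with signed weights
```
The signed word weights returned for each individual email are the per-message explanations that the visualization scheme described above would display to users.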
Procedia PDF Downloads 45
6567 Simscape Library for Large-Signal Physical Network Modeling of Inertial Microelectromechanical Devices
Authors: S. Srinivasan, E. Cretu
Abstract:
The information flow (e.g. block-diagram or signal flow graph) paradigm for the design and simulation of Microelectromechanical (MEMS)-based systems allows to model MEMS devices using causal transfer functions easily, and interface them with electronic subsystems for fast system-level explorations of design alternatives and optimization. Nevertheless, the physical bi-directional coupling between different energy domains is not easily captured in causal signal flow modeling. Moreover, models of fundamental components acting as building blocks (e.g. gap-varying MEMS capacitor structures) depend not only on the component, but also on the specific excitation mode (e.g. voltage or charge-actuation). In contrast, the energy flow modeling paradigm in terms of generalized across-through variables offers an acausal perspective, separating clearly the physical model from the boundary conditions. This promotes reusability and the use of primitive physical models for assembling MEMS devices from primitive structures, based on the interconnection topology in generalized circuits. The physical modeling capabilities of Simscape have been used in the present work in order to develop a MEMS library containing parameterized fundamental building blocks (area and gap-varying MEMS capacitors, nonlinear springs, displacement stoppers, etc.) for the design, simulation and optimization of MEMS inertial sensors. The models capture both the nonlinear electromechanical interactions and geometrical nonlinearities and can be used for both small and large signal analyses, including the numerical computation of pull-in voltages (stability loss). Simscape behavioral modeling language was used for the implementation of reduced-order macro models, that present the advantage of a seamless interface with Simulink blocks, for creating hybrid information/energy flow system models. Test bench simulations of the library models compare favorably with both analytical results and with more in-depth finite element simulations performed in ANSYS. Separate MEMS-electronic integration tests were done on closed-loop MEMS accelerometers, where Simscape was used for modeling the MEMS device and Simulink for the electronic subsystem.Keywords: across-through variables, electromechanical coupling, energy flow, information flow, Matlab/Simulink, MEMS, nonlinear, pull-in instability, reduced order macro models, Simscape
Procedia PDF Downloads 133
6566 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
The utilization of Large Eddy Simulation (LES) has been extensive in turbulence research. LES concentrates on resolving the significant grid-scale motions while representing smaller scales through subfilter-scale (SFS) models. The deconvolution model, among the available SFS models, has proven successful in LES of engineering and geophysical flows. Nevertheless, the thorough investigation of how sub-filter scale dynamics and filter anisotropy affect SFS modeling accuracy remains lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES: Firstly, the accuracy of direct deconvolution models (DDM) is evaluated concerning sub-filter scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of FGR becomes evident as it plays a critical role in controlling errors for precise SFS stress prediction. When FGR is set to 1, the DDM models struggle to faithfully reconstruct SFS stress due to inadequate resolution of SFS dynamics. Notably, prediction accuracy improves when FGR is set to 2, leading to accurate reconstruction of SFS stress, except for cases involving Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Furthermore, the study extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. By utilizing the dynamic Smagorinsky model (DSM), dynamic mixed model (DMM), and direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are examined in LES filters. The results emphasize the DDM’s proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. Notably high correlation coefficients exceeding 90% are observed in the a priori study for the DDM’s reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori analysis, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strainrate tensors, and SFS stress. It is evident that as filter anisotropy intensifies, the results of DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research.Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
Procedia PDF Downloads 74
6565 Engagement Analysis Using DAiSEE Dataset
Authors: Naman Solanki, Souraj Mondal
Abstract:
With the world moving towards online communication, the video data store has exploded in the past few years. Consequently, it has become crucial to analyse participants' engagement levels in online communication videos. Engagement prediction of people in videos can be useful in many domains, like education, client meetings, dating, etc. Video-level or frame-level prediction of engagement for a user involves the development of robust models that can capture facial micro-emotions efficiently. For the development of an engagement prediction model, it is necessary to have a widely accepted standard dataset for engagement analysis. DAiSEE is one of the datasets which consist of in-the-wild data and have gold-standard annotations for engagement prediction. Earlier research done using the DAiSEE dataset involved training and testing standard models like CNN-based models, but the results were not satisfactory according to industry standards. In this paper, a multi-level classification approach has been introduced to create a more robust model for engagement analysis using the DAiSEE dataset. This approach has recorded testing accuracies of 0.638, 0.7728, 0.8195, and 0.866 for predicting boredom level, engagement level, confusion level, and frustration level, respectively.
Keywords: computer vision, engagement prediction, deep learning, multi-level classification
Procedia PDF Downloads 112
6564 Prediction of Soil Liquefaction by Using UBC3D-PLM Model in PLAXIS
Authors: A. Daftari, W. Kudla
Abstract:
Liquefaction is a phenomenon in which the strength and stiffness of a soil are reduced by earthquake shaking or other rapid cyclic loading. Liquefaction and related phenomena have been responsible for huge amounts of damage in historical earthquakes around the world. Modelling of soil behaviour is the main step in the soil liquefaction prediction process. Nowadays, several constitutive models for sand have been presented. Nevertheless, only some of them can capture this mechanism. One of the most useful models in this regard is the UBCSAND model. In this research, the capability of this model is considered by using the PLAXIS software. Real data from the 1987 Superstition Hills earthquake in the Imperial Valley were used. The results of the simulation show that the UBC3D-PLM model reproduces a trend resembling the observed behaviour.
Keywords: liquefaction, PLAXIS, pore-water pressure, UBC3D-PLM
Procedia PDF Downloads 308
6563 Building Information Management in Context of Urban Spaces, Analysis of Current Use and Possibilities
Authors: Lucie Jirotková, Daniel Macek, Andrea Palazzo, Veronika Malinová
Abstract:
Currently, the implementation of 3D models in the construction industry is gaining popularity. Countries around the world are developing their own modelling standards and implementing the use of 3D models in their individual permitting processes. Another theme that needs to be addressed is public building spaces and their subsequent maintenance, where the use of the BIM methodology readily suggests itself. A significant benefit of the implementation of Building Information Management is the information transfer. The 3D model contains not only the spatial representation of the item shapes but also various parameters that are assigned to the individual elements, which are easily traceable, mainly because they are all stored in one place in the BIM model. However, it is important to keep the data in the models up to date to achieve usability of the model throughout the life cycle of the building. It is now becoming standard practice to use BIM models in the construction of buildings; however, the buildings' surroundings are very often neglected. Especially in large-scale development projects, the public space of buildings is often handed over to municipalities, which obtain ownership and are in charge of its maintenance. A 3D model of the building surroundings would include both the above-ground visible elements of the development as well as the underground parts, such as the technological facilities of water features, electricity lines for public lighting, etc. The paper shows the possibilities of such a model in the field of information for the handover of premises, subsequent maintenance and decision making. The attributes and spatial representation of the individual elements make the model a reliable foundation for the creation of "Smart Cities". The paper analyses the current use of the BIM methodology and presents the state-of-the-art possibilities of development.
Keywords: BIM model, urban space, BIM methodology, facility management
Procedia PDF Downloads 123
6562 Evaluating Robustness of Conceptual Rainfall-runoff Models under Climate Variability in Northern Tunisia
Authors: H. Dakhlaoui, D. Ruelland, Y. Tramblay, Z. Bargaoui
Abstract:
To evaluate the impact of climate change on water resources at the catchment scale, not only are future climate projections necessary, but also robust rainfall-runoff models that remain fairly reliable under changing climate conditions. This study aims at assessing the robustness of three conceptual rainfall-runoff models (GR4J, HBV and IHACRES) on five basins in Northern Tunisia under long-term climate variability. Their robustness was evaluated according to a differential split-sample test based on a climate classification of the observation period considering precipitation and temperature conditions simultaneously. The studied catchments are situated in a region where climate change is likely to have significant impacts on runoff, and they already suffer from scarcity of water resources. They cover the main hydrographical basins of Northern Tunisia (High Medjerda, Zouaraâ, Ichkeul and Cap Bon), which produce the majority of surface water resources in Tunisia. The streamflow regime of the basins can be considered natural since these basins are located upstream of storage dams and in areas where withdrawals are negligible. A 30-year common period (1970-2000) was considered to capture a large spread of hydro-climatic conditions. The calibration was based on the Kling-Gupta Efficiency (KGE) criterion, while the evaluation of model transferability was performed according to the Nash-Sutcliffe efficiency criterion and the volume error. The three hydrological models were shown to have similar behaviour under climate variability. The models prove better able to simulate the runoff pattern when transferred toward wetter periods than when transferred toward drier periods. The limits of transferability are beyond -20% of precipitation and +1.5 °C of temperature in comparison with the calibration period. The deterioration of model robustness could in part be explained by the climate dependency of some parameters.
Keywords: rainfall-runoff modelling, hydro-climate variability, model robustness, uncertainty, Tunisia
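For reference, the two performance criteria named above can be written in a few lines; the following Python sketch implements the standard Kling-Gupta and Nash-Sutcliffe formulations with illustrative numbers, not the study's data.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency (Gupta et al., 2009); 1 indicates a perfect fit."""
    r = np.corrcoef(sim, obs)[0, 1]          # linear correlation
    alpha = np.std(sim) / np.std(obs)        # variability ratio
    beta = np.mean(sim) / np.mean(obs)       # bias ratio
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, used to evaluate transferability."""
    sim, obs = np.asarray(sim), np.asarray(obs)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative monthly runoff values (not the Tunisian catchment data)
obs = np.array([1.0, 2.0, 4.0, 3.0, 2.5])
sim = np.array([1.1, 1.8, 3.6, 3.2, 2.4])
print("KGE:", round(kge(sim, obs), 3), " NSE:", round(nse(sim, obs), 3))
```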
Procedia PDF Downloads 291