Search results for: discrete choice models
8167 Geopotential Models Evaluation in Algeria Using Stochastic Method, GPS/Leveling and Topographic Data
Authors: M. A. Meslem
Abstract:
For precise geoid determination with the remove-compute-restore technique, a reference field is used to subtract the long and medium wavelengths of the gravity field from the observation data. A comparison study between candidate models should therefore be made in order to select the optimal reference gravity field. In this context, two recent global geopotential models have been selected for comparison over Northern Algeria: the Earth Gravitational Model (EGM2008) and the Global Gravity Model (GECO), which combines EGM2008 with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the area under study were used to compute residual data from both gravity field models, and a Digital Terrain Model (DTM) was used to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions, which were fitted to a closed form in order to compare their statistical behavior in both cases. Finally, height anomalies were computed from both geopotential models and compared to a set of GPS-levelled benchmark points using least squares adjustment. The results, described in detail in this paper, point to a slight overall advantage of the GECO model, both through error degree variance comparison and through ground-truth evaluation.
Keywords: quasigeoid, gravity anomalies, covariance, GGM
Procedia PDF Downloads 137
8166 Analysis of the Introduction of Carsharing in the Context of Developing Countries: A Case Study Based on On-Board Carsharing Survey in Kabul, Afghanistan
Authors: Mustafa Rezazada, Takuya Maruyama
Abstract:
Cars have been strongly integrated into human life since their introduction, and this interaction is most evident in the urban context. Shifting city residents from driving private vehicles to public transit has therefore been a major challenge. Carsharing, as an innovative, environmentally friendly transport alternative, has made a significant contribution to this transition so far. It has helped reduce household car ownership, demand for on-street parking, and the number of kilometers traveled by car, and it affects the future of mobility by decreasing greenhouse gas (GHG) emissions and the number of new cars that would otherwise be purchased. However, the majority of carsharing research has been conducted in highly developed cities, and less attention has been paid to the cities of developing countries. This study is conducted in Kabul, the capital of Afghanistan, to investigate the current transport pattern and user behavior and to examine the possibility of introducing a carsharing system. The study establishes a new survey method called the Onboard Carsharing Survey (OCS), in which carpooling passengers are interviewed on board following the Onboard Transit Survey (OTS) guidelines with a few refinements. The survey focuses on respondents' daily travel behavior and a hypothetical stated choice of carsharing opportunities, and it is followed by an aggregate analysis. The survey results indicate the following: a majority of the respondents (62%) have been carpooling every day for five years or more; more than half of the respondents are not satisfied with current modes; and, among other attributes, traffic congestion, the environment, and insufficient public transport were ranked the most critical issues in daily transportation by survey participants. Moreover, 68.24% of the respondents chose carsharing over carpooling under different choice game scenarios.
Overall, the findings of this research show that Kabul City is potentially fertile ground for the introduction of carsharing in the future. Taken together, insufficient public transit, dissatisfaction with current modes, and respondents' stated interest will affect the future of carsharing in Kabul City positively. The modal choice in this study is limited to carpooling and carsharing; further choice sets, including bus, cycling, and walking, will have to be added for further evaluation.
Keywords: carsharing, developing countries, Kabul Afghanistan, onboard carsharing survey, transportation, urban planning
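The stated-choice result above can be read through a simple discrete choice lens. As an illustrative sketch (not the authors' analysis; the utility values are hypothetical), a binary logit model reproduces a roughly 68% carsharing share:

```python
import math

def logit_choice_prob(v_share: float, v_pool: float) -> float:
    """Binary logit: probability of choosing carsharing over carpooling
    given the (deterministic) utilities of the two alternatives."""
    return 1.0 / (1.0 + math.exp(-(v_share - v_pool)))

# Hypothetical utility gap chosen to reproduce a ~68% carsharing share.
p_carsharing = logit_choice_prob(v_share=0.77, v_pool=0.0)
```

Extending the choice set to bus, cycling, and walking, as the abstract suggests, would replace this binary form with a multinomial logit over all alternatives.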
Procedia PDF Downloads 135
8165 Race-Making in Teacher Narratives: Defining Black Educational Access and Opportunity Via the Stories Teachers Tell
Authors: Carla O'Connor, Shanta' Robinson, Alaina Neal, Elan Hope, Adam Hengen, Samantha Drotar
Abstract:
In this paper, we provide a preliminary analysis of the stories teachers tell about their Black students in their efforts to make sense of, and professionally resolve, the underperformance of Black students in their district. The teachers hail from three demographically distinct districts that participate in the state-coordinated inter-district school choice system. The districts are Varuna Hills (a pseudonym, as are all other names in this manuscript), a district that serves a predominantly White and affluent community; Newport, a district that serves a socioeconomically diverse but still majority-White population; and Aspen, a district in which the student body is predominantly Black and predominantly working to lower middle class. Relying upon teacher focus group interviews in each of these districts, which share a common reform context, we show how teachers' everyday and narrative discourse makes meaning of the bodies and achievement of Black students and their families. More specifically, we show that these discourses construct Black students as interlopers, as suffering from extraordinary neediness, and as in dire need of proper parenting. Our analysis reveals nuances in how the teachers articulate these discourses, with the nuances being a function of how the schools-of-choice reform context intersects with the demographics of each school and with beliefs about the demographics of the schools-of-choice population. We unpack the racialized and classed nature of these narratives and the implications for teachers' personal practical knowledge.
Keywords: black achievement, educational access and opportunity, race and schooling, teacher knowledge and education
Procedia PDF Downloads 422
8164 Plant Identification Using Convolution Neural Network and Vision Transformer-Based Models
Authors: Virender Singh, Mathew Rees, Simon Hampton, Sivaram Annadurai
Abstract:
Plant identification is a challenging task that aims to identify the family, genus, and species according to plant morphological features. Automated deep learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species render correct classification difficult. In this paper, we tested custom convolution neural network (CNN) and vision transformer (ViT) based models, using the PyTorch framework, to classify plants. We used a large dataset of 88,000 images provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantClef 2015 dataset for classifying plants at the genus and species levels, respectively. Our results show that for classification at the genus level, ViT models outperform the CNN-based models ResNet50 and ResNet-RS-420, as well as other state-of-the-art CNN-based models suggested in previous studies on a similar dataset, with a top accuracy of 83.3%. For classification at the species level, ViT models again outperform ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We also show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals, and the general public alike to identify plants more quickly and with improved accuracy.
Keywords: plant identification, CNN, image processing, vision transformer, classification
Procedia PDF Downloads 104
8163 Online Battery Equivalent Circuit Model Estimation on Continuous-Time Domain Using Linear Integral Filter Method
Authors: Cheng Zhang, James Marco, Walid Allafi, Truong Q. Dinh, W. D. Widanage
Abstract:
Equivalent circuit models (ECMs) are widely used in battery management systems in electric vehicles and other battery energy storage systems. The battery dynamics and the model parameters vary under different working conditions, such as different temperature and state of charge (SOC) levels, so online parameter identification can improve the modelling accuracy. This paper presents a method for online ECM parameter identification using a continuous-time (CT) estimation approach. The CT method has several advantages over discrete-time (DT) estimation methods for ECM parameter identification, owing to the widely separated battery dynamic modes and fast sampling. The presented method can also be used for online SOC estimation. Test data were collected using a lithium-ion cell, and the experimental results show that the presented CT method achieves better modelling accuracy than the conventional DT recursive least squares method. The effectiveness of the presented method for online SOC estimation is also verified on the test data.
Keywords: electric circuit model, continuous time domain estimation, linear integral filter method, parameter and SOC estimation, recursive least square
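For context, the conventional DT recursive least squares baseline that the paper compares against can be sketched on a toy first-order model. This is an illustrative stand-in, not the paper's CT linear integral filter; the model y[k] = a·y[k-1] + b·u[k] and all data are synthetic:

```python
import random

def rls_identify(us, ys, lam=1.0):
    """Recursive least squares for y[k] = a*y[k-1] + b*u[k] (a toy
    stand-in for the DT identification the paper benchmarks against).
    Returns the final estimates (a_hat, b_hat)."""
    theta = [0.0, 0.0]                       # parameter estimates [a, b]
    P = [[1e6, 0.0], [0.0, 1e6]]             # covariance matrix
    for k in range(1, len(ys)):
        phi = [ys[k - 1], us[k]]             # regressor vector
        # gain K = P*phi / (lam + phi' * P * phi)
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]
        err = ys[k] - (theta[0] * phi[0] + theta[1] * phi[1])
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        # covariance update: P = (P - K * phi' * P) / lam
        P = [[(P[0][0] - K[0] * Pphi[0]) / lam, (P[0][1] - K[0] * Pphi[1]) / lam],
             [(P[1][0] - K[1] * Pphi[0]) / lam, (P[1][1] - K[1] * Pphi[1]) / lam]]
    return theta[0], theta[1]

# Noise-free synthetic data from known parameters a = 0.9, b = 0.5.
random.seed(0)
us = [random.uniform(-1, 1) for _ in range(200)]
ys = [0.0]
for k in range(1, 200):
    ys.append(0.9 * ys[k - 1] + 0.5 * us[k])

a_hat, b_hat = rls_identify(us, ys)
```

The ECM case works the same way, with the regressor built from measured current and terminal voltage; the paper's point is that a CT formulation conditions this estimation better under fast sampling.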
Procedia PDF Downloads 383
8162 Sensitivity and Uncertainty Analysis of One Dimensional Shape Memory Alloy Constitutive Models
Authors: A. B. M. Rezaul Islam, Ernur Karadogan
Abstract:
Shape memory alloys (SMAs) are known for their shape memory effect and pseudoelastic behavior. Their thermomechanical behavior has been modeled by numerous researchers from both microscopic thermodynamic and macroscopic phenomenological points of view. The Tanaka, Liang-Rogers, and Ivshin-Pence models are among the most popular SMA macroscopic phenomenological constitutive models. They describe SMA behavior in terms of stress, strain, and temperature. These models involve material parameters that carry associated uncertainty. At different operating temperatures, this uncertainty propagates to the output when the material is subjected to loading followed by unloading, and such propagation can result in performance discrepancies or failure at extreme conditions when the models are used in real-life applications. To address this, we used a probabilistic approach to perform sensitivity and uncertainty analyses of the Tanaka, Liang-Rogers, and Ivshin-Pence models. The Sobol and extended Fourier Amplitude Sensitivity Testing (eFAST) methods were used to perform the sensitivity analysis for simulated isothermal loading/unloading at various operating temperatures. The results show that the models' behavior varies with operating temperature and loading condition. The average and stress-dependent sensitivity indices identify the most significant parameters at several temperatures. This work presents the sensitivity and uncertainty analysis results and compares them across temperatures and loading conditions for all three models. The analysis will aid in designing engineering applications by reducing the probability of model failure due to uncertainty in the input parameters.
Thus, a proper understanding of the sensitive parameters and of the uncertainty propagation at several operating temperatures and loading conditions is recommended for the Tanaka, Liang-Rogers, and Ivshin-Pence models.
Keywords: constitutive models, FAST sensitivity analysis, sensitivity analysis, Sobol, shape memory alloy, uncertainty analysis
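The variance-based idea behind Sobol/eFAST indices can be illustrated on a hypothetical linear stand-in (not an actual SMA constitutive model): the first-order index S_i = Var(E[Y|X_i]) / Var(Y) can be estimated crudely by binning Monte Carlo samples on each input:

```python
import random

def first_order_indices(model, n_vars=3, n_samples=20000, n_bins=40, seed=1):
    """Crude first-order (Sobol-style) indices S_i ~ Var(E[Y|X_i])/Var(Y),
    estimated by binning Monte Carlo samples on each input. Illustrative
    only; the study itself used Sobol and eFAST estimators."""
    rng = random.Random(seed)
    xs = [[rng.random() for _ in range(n_vars)] for _ in range(n_samples)]
    ys = [model(x) for x in xs]
    mean = sum(ys) / n_samples
    var = sum((y - mean) ** 2 for y in ys) / n_samples
    indices = []
    for i in range(n_vars):
        # group samples into bins along input i
        bins = [[] for _ in range(n_bins)]
        for x, y in zip(xs, ys):
            bins[min(int(x[i] * n_bins), n_bins - 1)].append(y)
        # variance of the conditional means approximates Var(E[Y|X_i])
        bin_means = [sum(b) / len(b) for b in bins if b]
        var_cond = sum((m - mean) ** 2 for m in bin_means) / len(bin_means)
        indices.append(var_cond / var)
    return indices

# Hypothetical linear stand-in: the output is dominated by x[0].
s = first_order_indices(lambda x: 4 * x[0] + x[1] + 0.1 * x[2])
```

Replacing the lambda with an actual constitutive relation (stress as a function of material parameters at a fixed strain and temperature) gives the per-parameter ranking the paper reports at each operating condition.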
Procedia PDF Downloads 144
8161 Measuring Environmental Efficiency of Energy in OPEC Countries
Authors: Bahram Fathi, Seyedhossein Sajadifar, Naser Khiabani
Abstract:
Data envelopment analysis (DEA) has recently gained popularity in energy efficiency analysis. A common feature of the previously proposed DEA models for measuring energy efficiency performance is that they treat energy consumption as an input within a production framework without considering undesirable outputs. However, energy use results in the generation of undesirable outputs as byproducts of producing desirable outputs. Within a joint production framework of both desirable and undesirable outputs, this paper presents several DEA-type linear programming models for measuring energy efficiency performance. In addition to considering undesirable outputs, our models treat different energy sources as different inputs so that changes in energy mix could be accounted for in evaluating energy efficiency. The proposed models are applied to measure the energy efficiency performances of 12 OPEC countries and the results obtained are presented.
Keywords: energy efficiency, undesirable outputs, data envelopment analysis
Procedia PDF Downloads 736
8160 Enhancing Model Interoperability and Reuse by Designing and Developing a Unified Metamodel Standard
Authors: Arash Gharibi
Abstract:
Mankind has always used models to solve problems. Essentially, models are simplified versions of reality; the need for them stems from having to deal with complexity, since many processes or phenomena are too complex to be described completely. A fundamental requirement is thus that a model contains the characteristic features that are essential in the context of the problem to be solved or described. Models are used in virtually every scientific domain to deal with various problems, and during recent decades their number has increased exponentially. Publication of models as part of original research has traditionally been in scientific periodicals, series, monographs, agency reports, national journals, and laboratory reports. This makes it difficult for interested groups and communities to stay informed about the state of the art. During the modeling process, many important decisions are made that impact the final form of the model. Without a record of these considerations, the final model remains ill-defined and open to varying interpretations. Unfortunately, the details of these considerations are often lost, and where information about a model does exist, it is likely to be written intuitively, in different layouts and at different levels of detail. To overcome these issues, different domains have attempted to implement their own approaches to preserving model information in the form of model documentation. The most frequently cited model documentation approaches are domain-specific, are not applicable to existing models, and do not allow evolutionary flexibility or intrinsic corrections and improvements. These issues all stem from a lack of unified standards for model documentation. As a way forward, this research proposes a new standard for capturing and managing model information in a unified way, so that interoperability and reusability of models become possible.
This standard will also be evolutionary, meaning that members of the modeling community can contribute to its ongoing development and improvement. In this paper, three of the most common current metamodels are reviewed, and based on the pros and cons of each, a new metamodel is proposed.
Keywords: metamodel, modeling, interoperability, reuse
Procedia PDF Downloads 198
8159 Implied Adjusted Volatility by Leland Option Pricing Models: Evidence from Australian Index Options
Authors: Mimi Hafizah Abdullah, Hanani Farhah Harun, Nik Ruzni Nik Idris
Abstract:
With implied volatility an important factor in financial decision-making, in particular in option pricing valuation, and given that the pricing biases of Leland option pricing models and the implied volatility structure of the options are related, this study examines the implied adjusted volatility smile patterns and term structures of S&P/ASX 200 index options using different Leland option pricing models. The examination of the implied adjusted volatility smiles and term structures in the Australian index options market covers the global financial crisis that began in mid-2007. The implied adjusted volatility was found to escalate to approximately triple its pre-crisis level.
Keywords: implied adjusted volatility, financial crisis, Leland option pricing models, Australian index options
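Leland-type models adjust the Black-Scholes volatility for transaction costs. A minimal sketch (parameter values are hypothetical, and this is one common form of the adjustment, not a reproduction of the paper's models or data): with proportional round-trip cost rate k and rehedging interval dt, the upward-adjusted volatility for hedging a written option is plugged into the standard Black-Scholes formula:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def leland_vol(sigma: float, k: float, dt: float) -> float:
    """Leland-adjusted volatility sigma*sqrt(1 + A), with
    A = sqrt(2/pi) * k / (sigma * sqrt(dt))."""
    A = math.sqrt(2.0 / math.pi) * k / (sigma * math.sqrt(dt))
    return sigma * math.sqrt(1.0 + A)

def bs_call(S, K, r, sigma, T):
    """Standard Black-Scholes European call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Hypothetical inputs: 20% vol, 1% round-trip cost, weekly rehedging.
sig_adj = leland_vol(0.20, 0.01, 1.0 / 52.0)
price_plain = bs_call(100, 100, 0.05, 0.20, 1.0)
price_leland = bs_call(100, 100, 0.05, sig_adj, 1.0)
```

Inverting bs_call numerically against observed market prices, with sigma replaced by the adjusted form, yields the implied adjusted volatilities whose smiles and term structures the study examines.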
Procedia PDF Downloads 379
8158 Evaluation of Environmental, Technical, and Economic Indicators of a Fused Deposition Modeling Process
Authors: M. Yosofi, S. Ezeddini, A. Ollivier, V. Lavaste, C. Mayousse
Abstract:
Additive manufacturing processes have changed significantly across a wide range of industries, and their application has progressed from rapid prototyping to the production of end-use products. However, their environmental impact is still a rather open question. In order to support the growth of this technology in the industrial sector, environmental aspects should be considered, and predictive models may help monitor and reduce the environmental footprint of the processes. This work presents predictive models based on a previously developed methodology for environmental impact evaluation, combined with a technical and economic assessment. Here we apply the methodology to the Fused Deposition Modeling process. First, we present the predictive models for different types of machines. Then, we present a decision-making tool designed to identify the optimum manufacturing strategy with regard to technical, economic, and environmental criteria.
Keywords: additive manufacturing, decision-making, environmental impact, predictive models
Procedia PDF Downloads 131
8157 Effect of Particle Shape on Monotonic and Cyclic Biaxial Behaviour of Sand Using Discrete Element Method
Authors: Raj Banerjee, Y. M. Parulekar, Aniruddha Sengupta, J. Chattopadhyay
Abstract:
This study proposes a Discrete Element Method (DEM) simulation, using the commercial software PFC2D (2019), for quantitatively simulating the monotonic and cyclic behaviour of sand with irregularly shaped sand grains. A preliminary analysis of the number of particles needed for an optimal Representative Element Volume (REV) simulation of dimensions 35 mm x 35 mm x 70 mm, using the scaled grain size distribution (GSD) of the sand, is carried out. Subsequently, the effect of particle shape on the behaviour of sand during monotonic and cyclic biaxial tests is assessed through numerical simulation, validated for one case against test results from the literature. Further numerical studies are performed in which the particles in the REV are simulated by mixing round discs with irregular clumps in different proportions (100% round discs; 75% round discs, 25% irregular clumps; 50% round discs, 50% irregular clumps; 25% round discs, 75% irregular clumps; 100% irregular clumps) using the dry deposition (DD) method. The macro response under monotonic loading shows that irregular sand has higher strength than round particles and that the Mohr-Coulomb failure envelope depends on the shape of the grains. Under cyclic loading, the liquefaction resistance curve of the sand (Cyclic Stress Ratio (CSR) versus number of cycles (N)) is found to depend on the combination of particle shapes in different proportions.
Keywords: biaxial test, particle shape, monotonic, cyclic
Procedia PDF Downloads 72
8156 Leveraging Unannotated Data to Improve Question Answering for French Contract Analysis
Authors: Touila Ahmed, Elie Louis, Hamza Gharbi
Abstract:
State-of-the-art question answering models have recently shown impressive performance, especially in a zero-shot setting. This approach is particularly useful when confronted with a highly diverse domain such as the legal field, in which it is increasingly difficult to build a dataset covering every notion and concept. In this work, we propose a flexible generative question answering approach to contract analysis, as well as a weakly supervised procedure to leverage unannotated data and boost our models' performance in general, and their zero-shot performance in particular.
Keywords: question answering, contract analysis, zero-shot, natural language processing, generative models, self-supervision
Procedia PDF Downloads 194
8155 Dow Polyols near Infrared Chemometric Model Reduction Based on Clustering: Reducing Thirty Global Hydroxyl Number (OH) Models to Less Than Five
Authors: Wendy Flory, Kazi Czarnecki, Matthijs Mercy, Mark Joswiak, Mary Beth Seasholtz
Abstract:
Polyurethane materials are present in a wide range of industrial segments such as furniture, building and construction, composites, automotive, electronics, and more. Dow is one of the leading manufacturers of the two main raw materials used to produce polyurethane products, isocyanates and polyols, and is also a key player in the manufacture of polyurethane systems/formulations designed for targeted applications. In 1990, the first analytical chemometric models were developed and deployed in the Dow QC labs of the polyols business for the quantification of OH, water, cloud point, and viscosity. Over the years many models have been added; there are now over 140 models for quantification and hundreds for product identification, too many to support reasonably. There are 29 global models for the quantification of OH alone, across more than 70 products at many sites. An attempt was made to consolidate these into a single model. While the consolidated model showed good statistics across the entire range of OH, several products exhibited a bias by ASTM E1655 under individual product validation. This project summary presents the strategy for global model updates for OH, reducing the number of quantification models from over 140 to 5 or less using chemometric methods. To gain an understanding of the best product groupings, we identify clusters by reducing the spectra to a few dimensions via Principal Component Analysis (PCA) and Uniform Manifold Approximation and Projection (UMAP). Results from these cluster analyses and a separate validation set allowed Dow to reduce the number of models for predicting OH from 29 to 3 without loss of accuracy.
Keywords: hydroxyl, global model, model maintenance, near infrared, polyol
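The PCA step of this grouping strategy can be illustrated with a tiny power-iteration sketch. The three-channel "spectra" and the two product groups below are hypothetical; the actual work used full NIR spectra and additionally UMAP, which is not reproduced here:

```python
def top_component(data, iters=200):
    """First principal component of mean-centred rows via power
    iteration on X'X, plus the per-row scores along that component."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        # one power-iteration step: v <- normalize(X'X v)
        Xv = [sum(X[i][j] * v[j] for j in range(d)) for i in range(n)]
        w = [sum(X[i][j] * Xv[i] for i in range(n)) for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    scores = [sum(X[i][j] * v[j] for j in range(d)) for i in range(n)]
    return v, scores

# Two hypothetical "product groups" with different baseline spectra:
group_a = [[1.0 + 0.01 * i, 0.2, 0.1] for i in range(5)]
group_b = [[0.1, 1.0, 0.9 + 0.01 * i] for i in range(5)]
v, scores = top_component(group_a + group_b)
```

The scores separate the two groups by sign along the first component; with real spectra, clusters in the reduced space suggest which products can safely share one OH model.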
Procedia PDF Downloads 135
8154 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue
Authors: Rachel Y. Zhang, Christopher K. Anderson
Abstract:
A critical aspect of revenue management is a firm's ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexities of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including K-nearest-neighbors, support vector machine, regression tree, and artificial neural network algorithms. The out-of-sample performance of these approaches to forecasting hotel demand is illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the accuracy of firm-level models (i.e., predictive models for hotel A using only hotel A's data) with models using market-level data (prices, review scores, location, chain scale, etc. for all hotels within the market). The proposed models will be valuable for predicting hotel revenue given the basic characteristics of a hotel property and can also be applied in performance evaluation of an existing hotel. The findings will unveil the features that play key roles in a hotel's revenue performance, which would have considerable potential usefulness in both revenue prediction and evaluation.
Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine
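Of the algorithms listed, K-nearest-neighbors is the simplest to sketch. A toy demand forecast follows; the feature names and numbers are hypothetical, not the Las Vegas sample:

```python
def knn_forecast(history, query, k=3):
    """Predict demand for a query feature vector as the mean demand of
    the k nearest (Euclidean) historical observations."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)) ** 0.5, y)
        for x, y in history)
    return sum(y for _, y in dists[:k]) / k

# Hypothetical (price, days_before_arrival) -> rooms-sold observations:
history = [((100, 30), 80), ((110, 28), 72), ((95, 25), 90),
           ((150, 10), 40), ((160, 12), 35), ((155, 8), 38)]
pred = knn_forecast(history, (105, 27))
```

In a market-level setting, the feature vector would also carry competitor prices, review scores, and chain scale, which is exactly where the flexibility of such nonparametric models pays off over simple time series.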
Procedia PDF Downloads 133
8153 Text Similarity in Vector Space Models: A Comparative Study
Authors: Omid Shahmirzadi, Adam Lugowski, Kenneth Younge
Abstract:
Automatic measurement of semantic text similarity is an important task in natural language processing. In this paper, we evaluate the performance of different vector space models on this task. We address the real-world problem of modeling patent-to-patent similarity and compare TFIDF (and related extensions), topic models (e.g., latent semantic indexing), and neural models (e.g., paragraph vectors). Contrary to expectations, the added computational cost of text embedding methods is justified only when: 1) the target text is condensed; and 2) the similarity comparison is trivial. TFIDF performs surprisingly well in other cases: in particular for longer and more technical texts or for making finer-grained distinctions between nearest neighbors. Unexpectedly, extensions to the TFIDF method, such as adding noun phrases or calculating term weights incrementally, were not helpful in our context.
Keywords: big data, patent, text embedding, text similarity, vector space model
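A minimal TFIDF-plus-cosine sketch of the baseline being compared (toy documents, not the patent corpus; real pipelines would add tokenization, stemming, n-grams, and smoothed IDF):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF weighting: tf(t, d) * log(N / df(t)), whitespace tokens."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        vecs.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["battery electrode lithium cell",
        "lithium battery anode cell",
        "hotel demand price forecast"]
vecs = tfidf_vectors(docs)
sim_related = cosine(vecs[0], vecs[1])    # shared technical vocabulary
sim_unrelated = cosine(vecs[0], vecs[2])  # no term overlap
```

For long technical texts like patents, this sparse lexical overlap is precisely the signal the paper finds hard for dense embeddings to beat.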
Procedia PDF Downloads 175
8152 The Use of Tourism Destination Management for Image Branding as a Preferable Choice of Foreign Policy
Authors: Mehtab Alam, Mudiarasan Kuppusamy
Abstract:
Image branding is a prominent and well-guided phenomenon in the management of tourism destinations; it examines the image of cities as they form a brand identity. Transforming cities into tourist destinations is obligatory if current management practices are to be used for foreign policy. The research considers the features of perception, destination accommodation, destination quality, traveler revisit, destination information systems, and behavioral image for tourism destination management. Using quantitative and qualitative research methodology, the objective is to examine and investigate the opportunities for destination branding. It investigates the features and management of tourism destinations in Abbottabad city, Pakistan, through the SPSS and NVivo 12 software. The results and coding reflect the significant contribution of integrated destination management to image branding, with Abbottabad having the potential to become a destination city. The positive impact of branding integrates tourism management, as it fulfills travelers' requirements and influences the choice of destination for innovative foreign policy.
Keywords: image branding, destination management, tourism, foreign policy, innovative
Procedia PDF Downloads 95
8151 Energy Consumption Statistic of Gas-Solid Fluidized Beds through Computational Fluid Dynamics-Discrete Element Method Simulations
Authors: Lei Bi, Yunpeng Jiao, Chunjiang Liu, Jianhua Chen, Wei Ge
Abstract:
Two energy paths are proposed from a thermodynamic viewpoint. Energy consumption is the total power input to the specific system, and it can be decomposed into energy retention and energy dissipation. Energy retention is the variation of accumulated mechanical energy in the system, and energy dissipation is the energy converted to heat by irreversible processes. Based on the Computational Fluid Dynamics-Discrete Element Method (CFD-DEM) framework, the different energy terms are quantified from the specific flow elements of fluid cells and particles, as well as their interactions with the wall. Direct energy consumption statistics are carried out for both cold and hot flow in gas-solid fluidization systems. To clarify the statistical method, it is necessary to identify which system is studied: the particle-fluid system or the particle sub-system. For cold flow, the total energy consumption of the particle sub-system can predict the onset of bubbling and turbulent fluidization, while the trends of local energy consumption can reflect the dynamic evolution of mesoscale structures. For hot flow, different heat transfer mechanisms are analyzed, and the original solver is modified to reproduce the experimental results. The influence of the heat transfer mechanisms and of the heat source on energy consumption is also investigated. The proposed statistical method has proven to be energy-conservative and easy to conduct, and it can hopefully be applied to other multiphase flow systems.
Keywords: energy consumption statistic, gas-solid fluidization, CFD-DEM, regime transition, heat transfer mechanism
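The "energy retention" bookkeeping can be illustrated for the particle sub-system alone. This is a schematic sketch with made-up particle data; the actual statistics also track the fluid-cell terms, wall interactions, and dissipation:

```python
G = 9.81  # gravitational acceleration, m/s^2

def energy_retention(particles):
    """Accumulated mechanical energy (kinetic + gravitational potential)
    of a particle ensemble. Its change over time is the retention term;
    the gap between power input and that change is the dissipation."""
    kinetic = sum(0.5 * m * (vx * vx + vy * vy + vz * vz)
                  for m, (vx, vy, vz), _pos in particles)
    potential = sum(m * G * z for m, _vel, (_x, _y, z) in particles)
    return kinetic + potential

# Two hypothetical particles: (mass kg, velocity m/s, position m)
particles = [(0.001, (0.0, 0.0, 1.0), (0.0, 0.0, 0.10)),
             (0.002, (1.0, 0.0, 0.0), (0.0, 0.0, 0.20))]
e_total = energy_retention(particles)
```

Evaluating this at successive DEM time steps and subtracting from the integrated power input gives the dissipation path described above.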
Procedia PDF Downloads 68
8150 Geographic Information System for District Level Energy Performance Simulations
Authors: Avichal Malhotra, Jerome Frisch, Christoph van Treeck
Abstract:
The utilization of semantic, cadastral, and topological data from geographic information systems (GIS) has increased exponentially for building- and urban-scale energy performance simulations. Urban planners, simulation scientists, and researchers use virtual 3D city models for energy analyses, algorithms, and simulation tools. For dynamic energy simulations at the city and district level, this paper provides an overview of the available GIS data models and their levels of detail. Adhering to different norms and standards, these models also intend to describe building and construction industry data. For further investigation, CityGML data models are considered for the simulations. Although geographical information modelling has many different implementations, virtual city data can also be extended for domain-specific applications. Highlighting the use of extended CityGML models for energy research, a brief introduction to the Energy Application Domain Extension (ADE), along with its significance, is given. Finally, addressing the specific simulation input data, a workflow using Modelica is presented, underlining the usage of GIS information and quantifying its significance for the annual heating energy demand.
Keywords: CityGML, EnergyADE, energy performance simulation, GIS
Procedia PDF Downloads 169
8149 Free Will and Compatibilism in Decision Theory: A Solution to Newcomb’s Paradox
Authors: Sally Heyeon Hwang
Abstract:
Within decision theory, there are normative principles that dictate how one should act in addition to empirical theories of actual behavior. As a normative guide to one’s actual behavior, evidential or causal decision-theoretic equations allow one to identify outcomes with maximal utility values. The choice that each person makes, however, will, of course, differ according to varying assignments of weight and probability values. Regarding these different choices, it remains a subject of considerable philosophical controversy whether individual subjects have the capacity to exercise free will with respect to the assignment of probabilities, or whether instead the assignment is in some way constrained. A version of this question is given a precise form in Richard Jeffrey’s assumption that free will is necessary for Newcomb’s paradox to count as a decision problem. This paper will argue, against Jeffrey, that decision theory does not require the assumption of libertarian freedom. One of the hallmarks of decision-making is its application across a wide variety of contexts; the implications of a background assumption of free will is similarly varied. One constant across the contexts of decision is that there are always at least two levels of choice for a given agent, depending on the degree of prior constraint. Within the context of Newcomb’s problem, when the predictor is attempting to guess the choice the agent will make, he or she is analyzing the determined aspects of the agent such as past characteristics, experiences, and knowledge. On the other hand, as David Lewis’ backtracking argument concerning the relationship between past and present events brings to light, there are similarly varied ways in which the past can actually be dependent on the present. One implication of this argument is that even in deterministic settings, an agent can have more free will than it may seem. 
This paper will thus argue against the view that decision theory requires a stable background assumption of free will or determinism, arguing instead for a compatibilist decision theory that yields a novel treatment of Newcomb’s problem.
Keywords: decision theory, compatibilism, free will, Newcomb’s problem
Procedia PDF Downloads 321
8148 Talent-to-Vec: Using Network Graphs to Validate Models with Data Sparsity
Authors: Shaan Khosla, Jon Krohn
Abstract:
In a recruiting context, machine learning models are valuable for recommendations: to predict the best candidates for a vacancy, to match the best vacancies for a candidate, and to compile a set of similar candidates for any given candidate. While it is useful to create these models, validating their accuracy in a recommendation context is difficult due to data sparsity. In this report, we use network graph data to generate useful representations for candidates and vacancies. We treat candidates and vacancies as network nodes and designate a bidirectional link between them when the candidate has interviewed for the vacancy. After applying node2vec, the embeddings are used to construct a validation dataset with a ranked order, which will help validate new recommender systems.
Keywords: AI, machine learning, NLP, recruiting
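The ranking step described above can be illustrated with a minimal sketch. The toy vectors below merely stand in for node2vec output (the real embeddings come from walks over the candidate–vacancy interview graph), and the candidate names are hypothetical; candidates are ranked for a vacancy by cosine similarity, which is one way to build the ranked-order validation dataset the abstract describes.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_candidates(vacancy_vec, candidate_vecs):
    """Return candidate ids sorted by similarity to the vacancy, best first."""
    return sorted(candidate_vecs,
                  key=lambda c: cosine(vacancy_vec, candidate_vecs[c]),
                  reverse=True)

# Toy embeddings standing in for node2vec output (invented values).
vacancy = [1.0, 0.0, 0.5]
candidates = {
    "cand_a": [0.9, 0.1, 0.4],    # points in nearly the same direction
    "cand_b": [-1.0, 0.2, 0.0],   # points away from the vacancy
    "cand_c": [0.0, 1.0, 0.1],    # nearly orthogonal
}
print(rank_candidates(vacancy, candidates))  # → ['cand_a', 'cand_c', 'cand_b']
```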
Procedia PDF Downloads 84
8147 Bridging the Gap between Different Interfaces for Business Process Modeling
Authors: Katalina Grigorova, Kaloyan Mironov
Abstract:
The paper focuses on the benefits of business process modeling. Although this discipline has been developing for many years, there is still a need to create new opportunities to meet ever-increasing user needs. Because one of these needs is the conversion of business process models from one standard to another, the authors have developed a converter between the BPMN and EPC standards that uses workflow patterns as an intermediate representation. There are many systems for business process modeling today, and the variety of output formats is almost as great as the number of systems themselves. This diversity further hampers the conversion of models. The presented study discusses the problems caused by differences in the output formats of various modeling environments.
Keywords: business process modeling, business process modeling standards, workflow patterns, converting models
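The pattern-based conversion idea can be sketched in a few lines. The mapping below is deliberately tiny and illustrative, not the authors' converter (which handles full model files): each notation is mapped onto shared workflow-pattern names, and conversion composes the two mappings.

```python
# Map each notation onto shared workflow-pattern names (the intermediate
# layer), then convert by composing the two mappings. Coverage here is
# illustrative; a real converter handles many more element types.
BPMN_TO_PATTERN = {
    "parallelGateway":  "parallel_split",
    "exclusiveGateway": "exclusive_choice",
    "task":             "activity",
}
PATTERN_TO_EPC = {
    "parallel_split":   "AND connector",
    "exclusive_choice": "XOR connector",
    "activity":         "Function",
}

def bpmn_to_epc(element):
    """Convert one BPMN element type to its EPC counterpart via the pattern layer."""
    pattern = BPMN_TO_PATTERN[element]
    return PATTERN_TO_EPC[pattern]

print(bpmn_to_epc("parallelGateway"))  # AND connector
```

Because the pattern layer sits in the middle, adding a third notation only requires one new pair of mappings rather than converters to every other format.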
Procedia PDF Downloads 587
8146 Diagnosis of Induction Machine Faults by DWT
Authors: Hamidreza Akbari
Abstract:
In this paper, time–frequency analysis of the stator startup current is carried out to detect inclined eccentricity in an induction motor. For this purpose, the discrete wavelet transform (DWT) is used. Data are obtained from simulations using the winding function approach. The results show the validity of the approach for detecting the fault and discriminating it from other faults.
Keywords: induction machine, fault, DWT, electric
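As a minimal illustration of the transform involved (not the authors' simulation code), a one-level Haar DWT splits a sampled signal into approximation and detail coefficients; in fault detection, the detail bands of the startup transient are inspected for eccentricity signatures. The sample values are invented.

```python
import math

def haar_dwt(signal):
    """One-level discrete Haar wavelet transform.
    Returns (approximation, detail) coefficient lists; len(signal) must be even."""
    a = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    d = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal), 2)]
    return a, d

# A toy "startup current" sample; a real analysis decomposes the measured
# stator current over several levels and inspects the detail bands.
x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = haar_dwt(x)
print(approx)
print(detail)
```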
Procedia PDF Downloads 350
8145 Hybrid Project Management Model Based on Lean and Agile Approach
Authors: Fatima-Zahra Eddoug, Jamal Benhra, Rajaa Benabbou
Abstract:
Several project management models exist in the literature, and the most widely used are hybrids, owing to their multiple advantages. Our objective in this paper is to analyze the existing models based on the Lean and Agile approaches and to propose a novel framework with convenient tools that allow efficient management of a general project. To create the desired framework, we relied essentially on 7 existing models. Among the agile tools, only Scrum was identified by several authors as appropriate for project management. In contrast, multiple lean tools were proposed for different phases of the project.
Keywords: agility, hybrid project management, lean, scrum
Procedia PDF Downloads 138
8144 Video Foreground Detection Based on Adaptive Mixture Gaussian Model for Video Surveillance Systems
Authors: M. A. Alavianmehr, A. Tashk, A. Sodagaran
Abstract:
Modeling the background and moving objects are significant techniques for video surveillance and other video processing applications. This paper presents a foreground detection algorithm, based on an adaptive Gaussian mixture model (GMM), that is robust against illumination changes and noise, and provides a novel and practical choice for intelligent video surveillance systems using static cameras. In previous methods, the image of still objects (the background image) is not significant. On the contrary, this method is based on forming a meticulous background image and exploiting it to separate moving objects from their background. The background image is specified either manually, by taking an image without vehicles, or detected in real time by forming an arithmetic or exponential average of successive images. The proposed scheme offers low image degradation. The simulation results demonstrate a high degree of performance for the proposed method.
Keywords: image processing, background models, video surveillance, foreground detection, Gaussian mixture model
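The exponential-average background update mentioned above can be sketched on a toy 1-D "frame" (a real system operates on 2-D images, and the paper's method maintains a full Gaussian mixture per pixel rather than a single average; the learning rate and threshold here are illustrative values, not the authors' parameters).

```python
def update_background(background, frame, alpha=0.05):
    """Exponential running average of successive frames, per pixel."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def foreground_mask(background, frame, threshold=20.0):
    """Mark pixels whose deviation from the background exceeds the threshold."""
    return [abs(f - b) > threshold for f, b in zip(frame, background)]

# Toy 1-D "frames" of pixel intensities; pixel 1 sees a bright moving object.
background = [100.0, 100.0, 100.0, 100.0]
frame = [102.0, 180.0, 99.0, 101.0]
background = update_background(background, frame)
mask = foreground_mask(background, frame)
print(mask)  # [False, True, False, False]
```

Because the average adapts slowly (small alpha), brief foreground objects do not corrupt the background model, while gradual illumination changes are absorbed into it.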
Procedia PDF Downloads 516
8143 Correlation between Seismic Risk Insurance Indexes and Uninhabitability Indexes of Buildings in Morocco
Authors: Nabil Mekaoui, Nacer Jabour, Abdelhamid Allaoui, Abderahim Oulidi
Abstract:
The reliability of several insurance indexes of seismic risk is evaluated and compared for efficient seismic risk coverage of buildings in Morocco, thus reducing the basis risk. A large database of earthquake ground motions is established from recent seismic events in Morocco, together with synthetic ground motions compatible with the design spectrum, in order to conduct nonlinear time-history analyses on three building models representative of the building stock in Morocco. The uninhabitability index is evaluated from the simulated damage index and then correlated with the preselected insurance indexes. Interestingly, the commonly used peak ground acceleration index showed poor correlation when compared with other indexes, such as spectral accelerations at low periods. Recommendations on the choice of suitable insurance indexes are formulated for efficient seismic risk coverage in Morocco.
Keywords: catastrophe modeling, damage, earthquake, reinsurance, seismic hazard, trigger index, vulnerability
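The core correlation step can be sketched with a plain Pearson coefficient. The numbers below are invented for illustration only: a hypothetical uninhabitability index compared against two candidate trigger indexes, one tracking the damage well (standing in for a low-period spectral acceleration) and one poorly (standing in for PGA).

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented illustration data (not from the study).
uninhabitability = [0.10, 0.30, 0.55, 0.70, 0.90]
index_sa  = [0.12, 0.28, 0.50, 0.72, 0.88]   # tracks the damage index closely
index_pga = [0.50, 0.20, 0.90, 0.30, 0.60]   # tracks it poorly
print(pearson(uninhabitability, index_sa))
print(pearson(uninhabitability, index_pga))
```

A trigger index with a higher correlation to the uninhabitability index pays out more consistently when buildings are actually damaged, which is exactly what reduces basis risk.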
Procedia PDF Downloads 69
8142 Multiple Linear Regression for Rapid Estimation of Subsurface Resistivity from Apparent Resistivity Measurements
Authors: Sabiu Bala Muhammad, Rosli Saad
Abstract:
Multiple linear regression (MLR) models for the fast estimation of true subsurface resistivity from apparent resistivity field measurements are developed and assessed in this study. The parameters investigated were apparent resistivity (ρₐ), horizontal location (X) and depth (Z) of measurement as the independent variables, and true resistivity (ρₜ) as the dependent variable. To achieve linearity in both resistivity variables, the datasets were first transformed into the logarithmic domain, following diagnostic checks of the normality of the dependent variable and of heteroscedasticity to ensure accurate models. Four MLR models were developed based on hierarchical combinations of the independent variables. The generated MLR coefficients were applied to another dataset to estimate ρₜ values for validation. Contours of the estimated ρₜ values were plotted and compared to plots of the observed data with the same colour scale and blanking for visual assessment. The accuracy of the models was assessed using the coefficient of determination (R²), the standard error (SE) and the weighted mean absolute percentage error (wMAPE). It is concluded that the MLR models can estimate ρₜ with a high level of accuracy.
Keywords: apparent resistivity, depth, horizontal location, multiple linear regression, true resistivity
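A hedged sketch of the fitting step: ordinary least squares on log-transformed resistivities, solved via the normal equations. The data values, and the choice of log10(ρₐ) and depth as the two predictors, are invented for illustration; the study fits four hierarchical predictor combinations on field data.

```python
import math

def ols(X, y):
    """Ordinary least squares: solve (XᵀX)β = Xᵀy by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    for i in range(k):                       # forward elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# Invented data: log10(true resistivity) modelled from an intercept,
# log10(apparent resistivity) and depth.
rows = [[1.0, math.log10(50), 2.0],
        [1.0, math.log10(120), 5.0],
        [1.0, math.log10(200), 8.0],
        [1.0, math.log10(80), 3.0],
        [1.0, math.log10(150), 6.0]]
log_rho_t = [1.80, 2.20, 2.50, 1.95, 2.30]
beta = ols(rows, log_rho_t)
pred = [sum(b * v for b, v in zip(beta, r)) for r in rows]
print(beta)
```

The same coefficients would then be applied to a held-out dataset and scored with R², SE and wMAPE, as the abstract describes.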
Procedia PDF Downloads 276
8141 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour
Authors: Libor Zachoval, Daire O Broin, Oisin Cawley
Abstract:
E-learning platforms such as Blackboard have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and a lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms, which could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), a large number of additional types of data can be captured, and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome, as they can hinder the knowledge development of a learner. Behaviours that hinder knowledge development also raise ambiguity about a learner’s knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees pass courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing, as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach.
Thus, next steps will consider the incorporation of additional data points, knowledge estimation models to model the knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours might be used to make changes to a course: for example, the course content may require modification (certain sections of the learning material may prove unhelpful to many learners in mastering the intended learning outcomes), or the course design may (such as the type and duration of feedback).
Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI
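Two of the guessing rules described above can be sketched directly from attempt records; the function names and record shapes below are hypothetical illustrations, not part of the xAPI specification or the authors' rule set.

```python
def flags_single_option_guessing(answers):
    """Flag a test attempt where the learner picked the same option letter
    for every multiple-choice question (one of the behaviours described above)."""
    return len(set(answers)) == 1 and len(answers) > 1

def flags_rapid_retries(attempt_results, min_retries=4):
    """Flag a learner who failed and re-took the test at least `min_retries`
    times before passing, instead of revisiting the learning material."""
    fails = 0
    for passed in attempt_results:
        if passed:
            return fails >= min_retries
        fails += 1
    return False

# Hypothetical records derived from xAPI statements:
# chosen option per question, and pass/fail outcome per attempt.
print(flags_single_option_guessing(["b", "b", "b", "b", "b"]))   # True
print(flags_rapid_retries([False, False, False, False, True]))   # True
```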
Procedia PDF Downloads 121
8140 Evaluation of Newly Synthesized Steroid Derivatives Using In silico Molecular Descriptors and Chemometric Techniques
Authors: Milica Ž. Karadžić, Lidija R. Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Z. Kovačević, Anamarija I. Mandić, Katarina Penov-Gaši, Andrea R. Nikolić, Aleksandar M. Oklješa
Abstract:
This study considered the selection of in silico molecular descriptors and models for the description of newly synthesized steroid derivatives and their characterization using chemometric techniques. Multiple linear regression (MLR) models were established and yielded the best molecular descriptors for quantitative structure–retention relationship (QSRR) modeling of the retention of the investigated molecules. The MLR models were free of multicollinearity among the selected molecular descriptors, according to the variance inflation factor (VIF) values. The molecular descriptors used were ranked using the generalized pair correlation method (GPCM); this method can reveal significant differences between independent variables even when their correlations with the dependent variable are almost equal. The generated MLR models were statistically validated and cross-validated, and the best models were kept. The models were then ranked using the sum of ranking differences (SRD) method, which identifies the most consistent QSRR model and reveals similarity or dissimilarity between the models. In this study, SRD was performed using the average values of the experimentally observed data as the golden standard. The chemometric analysis was conducted in order to characterize the newly synthesized steroid derivatives for further investigation regarding their potential biological activity and further synthesis. This article is based upon work from COST Action CM1105, supported by COST (European Cooperation in Science and Technology).
Keywords: generalized pair correlation method, molecular descriptors, regression analysis, steroids, sum of ranking differences
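The SRD calculation itself is simple enough to sketch: rank each model's predictions and the golden standard over the same objects, then sum the absolute rank differences. The toy values below are invented, and ties plus the randomization test normally used to validate SRD are omitted.

```python
def ranks(values):
    """Rank positions (1 = smallest); assumes no ties for simplicity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

def srd(model_values, reference_values):
    """Sum of ranking differences: smaller SRD = closer to the reference ordering."""
    rm, rr = ranks(model_values), ranks(reference_values)
    return sum(abs(a - b) for a, b in zip(rm, rr))

# Golden standard: average of observed retention values (as in the study);
# the numbers themselves are invented.
reference = [3.1, 4.5, 2.2, 5.0]
model_a = [3.0, 4.4, 2.3, 5.1]   # preserves the reference ordering
model_b = [5.0, 2.0, 4.0, 1.0]   # scrambles it
print(srd(model_a, reference), srd(model_b, reference))  # 0 8
```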
Procedia PDF Downloads 347
8139 The Effect of Initial Sample Size and Increment in Simulation Samples on a Sequential Selection Approach
Authors: Mohammad H. Almomani
Abstract:
In this paper, we examine the effect of the initial sample size and of the increment in simulation samples on the performance of a sequential approach used to select the top m designs when the number of alternative designs is very large. The sequential approach consists of two stages. In the first stage, ordinal optimization is used to select a subset that overlaps with the set of the actual best k% of designs with high probability. In the second stage, the optimal computing budget allocation is used to select the top m designs from the selected subset. We apply the selection approach to a generic example under several parameter settings, with different choices of initial sample size and of the increment in simulation samples, to explore their impact on the performance of this approach. The results show that the choice of initial sample size and of the increment in simulation samples does affect the performance of the selection approach.
Keywords: large-scale problems, optimal computing budget allocation, ordinal optimization, simulation optimization
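A minimal sketch of the first stage, under invented assumptions (design d has true mean d with unit Gaussian noise, so the highest-numbered designs are the actual best): sample each design n0 times and keep the designs whose sample means fall in the observed top k%. The second-stage budget allocation is omitted.

```python
import random

def stage_one_subset(n_designs, n0, top_percent, simulate):
    """Stage 1 of the sequential approach: take n0 initial samples per design
    and keep the designs whose sample means lie in the observed top k%."""
    means = {d: sum(simulate(d) for _ in range(n0)) / n0 for d in range(n_designs)}
    keep = max(1, int(n_designs * top_percent / 100))
    return sorted(means, key=means.get, reverse=True)[:keep]

# Invented test problem: true mean of design d is d, with unit noise.
random.seed(0)
simulate = lambda d: d + random.gauss(0, 1)
subset = stage_one_subset(100, n0=10, top_percent=10, simulate=simulate)
print(len(subset))  # 10
```

With a larger n0 the sample means are less noisy, so the selected subset overlaps more reliably with the true top designs; that trade-off between initial sampling cost and selection quality is what the paper investigates.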
Procedia PDF Downloads 355
8138 Numerical Simulation on Two Components Particles Flow in Fluidized Bed
Authors: Wang Heng, Zhong Zhaoping, Guo Feihong, Wang Jia, Wang Xiaoyi
Abstract:
Flow of gas and particles in fluidized beds is complex and chaotic, and is therefore difficult to measure and analyze experimentally. Some bed materials with poor fluidization performance must always be fluidized together with a fluidization medium, and the material and the medium differ in many properties, such as density, size and shape. These factors make the dynamic process more complex and experimental research more limited. Numerical simulation is an efficient way to describe the process of gas–solid flow in a fluidized bed, and one of the most popular numerical simulation methods is CFD-DEM, i.e., the coupled computational fluid dynamics–discrete element method. The shapes of particles are simplified as spheres in most studies. Although sphere-shaped particles simplify the particle calculations, the effects of other shapes are disregarded. In practical applications, however, two-component systems in fluidized beds contain both sphere-shaped and non-sphere-shaped particles, so the two-component flow of sphere and non-sphere particles needs to be studied. In this paper, the mixed flow of moulded biomass particles and quartz in a fluidized bed was simulated. The integrated model was built on an Eulerian–Lagrangian approach that was adapted to suit the non-sphere particles. The cylinder-shaped particles were constructed differently in the two numerical methods. In the CFD part, each cylinder-shaped particle was constructed as an agglomerate of small fictitious particles, meaning the small fictitious particles are gathered but not merged with each other. The diameter of a fictitious particle, d_fic, and its solid volume fraction inside a cylinder-shaped particle, α_fic (called the fictitious volume fraction), are introduced to modify the drag coefficient β, together with the volume fraction of the cylinder-shaped particles, α_cld, and that of the sphere-shaped particles, α_sph.
In a computational cell, the void fraction ε can then be expressed as ε = 1 − α_cld·α_fic − α_sph. The Ergun equation and the Wen and Yu equation were used to calculate β. In the DEM part, the cylinder-shaped particles were built by the multi-sphere method, in which small sphere elements merge with each other. A soft-sphere model was used to obtain the contact forces between particles, and the total contact force on a cylinder-shaped particle was calculated as the sum of the forces on its small sphere particles. The model (size = 1 × 0.15 × 0.032 mm³) contained 420,000 sphere-shaped particles (diameter = 0.8 mm, density = 1350 kg/m³) and 60 cylinder-shaped particles (diameter = 10 mm, length = 10 mm, density = 2650 kg/m³). Each cylinder-shaped particle was constructed from 2072 small sphere-shaped particles (d = 0.8 mm) in the CFD mesh and 768 sphere-shaped particles (d = 3 mm) in the DEM mesh. The lengths of the CFD and DEM cells are 1 mm and 2 mm, respectively. The superficial gas velocity was varied across models: 1.0 m/s, 1.5 m/s and 2.0 m/s. The simulation results were compared with the experimental results. The particles moved regularly in a fountain pattern, and the effect of superficial gas velocity on the cylinder-shaped particles was stronger than that on the sphere-shaped particles. The results show that the present work provides an effective approach to simulating the flow of two-component particles.
Keywords: computational fluid dynamics, discrete element method, fluidized bed, multiphase flow
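The void-fraction relation and the drag closure can be sketched with the standard Gidaspow switching between the Ergun and Wen–Yu forms. This is one common textbook form of these correlations, not necessarily the authors' exact implementation, and the gas properties and volume fractions below are illustrative values (roughly air at ambient conditions).

```python
def void_fraction(alpha_cld, alpha_fic, alpha_sph):
    """Cell void fraction per the relation above: eps = 1 - alpha_cld*alpha_fic - alpha_sph."""
    return 1.0 - alpha_cld * alpha_fic - alpha_sph

def drag_beta(eps, d_p, rho_g, mu_g, slip):
    """Interphase drag coefficient beta (one common Gidaspow form, SI units):
    Ergun for dense cells (eps <= 0.8), Wen & Yu for dilute cells."""
    if eps <= 0.8:  # Ergun equation
        return (150.0 * (1 - eps) ** 2 * mu_g / (eps * d_p ** 2)
                + 1.75 * (1 - eps) * rho_g * slip / d_p)
    re = rho_g * eps * slip * d_p / mu_g                    # particle Reynolds number
    cd = 24.0 / re * (1 + 0.15 * re ** 0.687) if re < 1000 else 0.44
    return 0.75 * cd * eps * (1 - eps) * rho_g * slip / d_p * eps ** (-2.65)

# Illustrative values: a dense cell containing both particle types.
eps = void_fraction(alpha_cld=0.05, alpha_fic=0.6, alpha_sph=0.35)
beta = drag_beta(eps, d_p=0.8e-3, rho_g=1.2, mu_g=1.8e-5, slip=1.5)
print(eps, beta)
```

Note how α_fic discounts the cylinder contribution: only the solid fraction of the fictitious agglomerate counts toward blocking the gas, which is the modification the abstract describes.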
Procedia PDF Downloads 326