Search results for: computational model(s)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8000

7610 Estimating Lost Digital Video Frames Using Unidirectional and Bidirectional Estimation Based on Autoregressive Time Model

Authors: Navid Daryasafar, Nima Farshidfar

Abstract:

In this article, we attempt to conceal errors in video, with an emphasis on the temporal use of autoregressive (AR) models. We assume that all information in one or more video frames is lost; the lost frames are then estimated using the temporal information of corresponding pixels in successive frames. After presenting autoregressive models and how they are applied to estimate lost frames, two general methods of using these models are presented. The first, the standard autoregressive approach, estimates the lost frame unidirectionally: typically, information from previous frames is used to estimate the lost frame. In the second method, information from both the previous and the following frames is used, so this method is known as bidirectional estimation. Finally, a series of tests assesses the performance of each method under different conditions, and the results are compared.
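
As a rough illustration of the two estimation modes, the sketch below fits an AR model to one pixel's intensity across frames with ordinary least squares and estimates a lost frame unidirectionally and bidirectionally. The AR order, the averaging of the two directional estimates, and the synthetic data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares fit of AR(p) coefficients: x[t] ~ sum_k a[k] * x[t-k-1]."""
    X = np.column_stack([series[p - k - 1:len(series) - k - 1] for k in range(p)])
    y = series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
pixel = np.cumsum(rng.normal(0, 1, 60))   # one pixel's intensity across 60 frames
t_lost, p = 30, 4                         # frame 30 is lost; assumed AR order 4

# Unidirectional: AR model fitted on previous frames only
a = fit_ar(pixel[:t_lost], p)
forward = a @ pixel[t_lost - 1:t_lost - p - 1:-1]

# Bidirectional: also fit an AR model on the time-reversed future frames
b = fit_ar(pixel[t_lost + 1:][::-1], p)
backward = b @ pixel[t_lost + 1:t_lost + p + 1]
bidirectional = 0.5 * (forward + backward)   # simple average of both estimates

print("true:", pixel[t_lost], "unidirectional:", forward,
      "bidirectional:", bidirectional)
```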

Keywords: error concealment, unidirectional estimation, bidirectional estimation, AR linear estimation

Procedia PDF Downloads 509
7609 Validating Condition-Based Maintenance Algorithms through Simulation

Authors: Marcel Chevalier, Léo Dupont, Sylvain Marié, Frédérique Roffet, Elena Stolyarova, William Templier, Costin Vasile

Abstract:

Industrial end-users are currently facing an increasing need to reduce the risk of unexpected failures and optimize their maintenance. This calls for both short-term analysis and long-term ageing anticipation. At Schneider Electric, we tackle those two issues using both machine learning and first-principles models. Machine learning models are incrementally trained from normal data to predict expected values and detect statistically significant short-term deviations. Ageing models are constructed by breaking down physical systems into sub-assemblies, then determining relevant degradation modes and associating each one with the right kinetic law. Validating such anomaly detection and maintenance models is challenging, both because actual incident and ageing data are rare and distorted by human interventions, and because incremental learning depends on human feedback. To overcome these difficulties, we propose to simulate physics, systems, and humans -including asset maintenance operations- in order to validate the overall approaches in accelerated time and possibly choose between algorithmic alternatives.
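
A minimal sketch of the short-term deviation detection described above, assuming an incremental (Welford) estimate of the normal mean and variance and a 3-sigma flag; Schneider Electric's actual models are more elaborate than this.

```python
class IncrementalDetector:
    """Learns 'normal' incrementally and flags significant deviations."""
    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold

    def update(self, x):
        """Welford's online update of mean and variance."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomaly(self, x):
        if self.n < 2:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(x - self.mean) > self.threshold * std

det = IncrementalDetector()
for v in [10.1, 9.9, 10.0, 10.2, 9.8]:   # normal training data
    det.update(v)
print(det.is_anomaly(10.1), det.is_anomaly(14.0))  # False, True
```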

Keywords: degradation models, ageing, anomaly detection, soft sensor, incremental learning

Procedia PDF Downloads 101
7608 Computational Simulations on Stability of Model Predictive Control for Linear Discrete-Time Stochastic Systems

Authors: Tomoaki Hashimoto

Abstract:

Model predictive control is a kind of optimal feedback control in which control performance over a finite future is optimized with a performance index that has a moving initial time and a moving terminal time. This paper examines the stability of model predictive control for linear discrete-time systems with additive stochastic disturbances. A sufficient condition for the stability of the closed-loop system with model predictive control is derived by means of a linear matrix inequality. The objective of this paper is to show the results of computational simulations in order to verify the validity of the obtained stability condition.
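
For intuition, the sketch below checks a Lyapunov-type stability condition for a discrete-time closed loop as an LMI feasibility problem with cvxpy. The paper's sufficient condition for MPC under additive stochastic disturbances is more involved; the closed-loop matrix here is an assumption.

```python
import numpy as np
import cvxpy as cp

A_cl = np.array([[0.9, 0.2],
                 [0.0, 0.7]])            # assumed closed-loop system matrix
n = A_cl.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                       # P positive definite
               A_cl.T @ P @ A_cl - P << -eps * np.eye(n)]  # Lyapunov decrease LMI
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print("stable (LMI feasible):", prob.status == cp.OPTIMAL)
```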

Keywords: computational simulations, optimal control, predictive control, stochastic systems, discrete-time systems

Procedia PDF Downloads 406
7607 Learning Predictive Models for Efficient Energy Management of Exhibition Hall

Authors: Jeongmin Kim, Eunju Lee, Kwang Ryel Ryu

Abstract:

This paper addresses the problem of predictive control for energy management of large-scale exhibition halls, where a great deal of energy is consumed to keep the internal atmosphere within certain required conditions. Predictive control achieves better energy efficiency by optimizing the operation of air-conditioning facilities, taking into account not only the current but also the predicted future status. In this paper, we propose to use predictive models learned from past sensor data of the hall environment to optimize the operating plan for the air-conditioning facilities by simulating future environmental change. We have implemented an emulator of an exhibition hall using EnergyPlus, a widely used building energy simulation tool, to collect data for learning environment-change models. Experimental results show that the learned models predict future change highly accurately on a short-term basis.
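
A minimal sketch of learning such an environment-change model, assuming synthetic hall data and a random forest regressor that predicts the next indoor temperature from the current state and the air-conditioning setting; the paper's models are trained on EnergyPlus output.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.uniform(18, 30, n),   # current indoor temperature
    rng.uniform(-5, 35, n),   # outdoor temperature
    rng.uniform(0, 1, n),     # air-conditioning power setting
])
# Assumed dynamics for the synthetic target: indoor temperature drifts toward
# the outdoor temperature, and cooling pulls it down.
y = X[:, 0] + 0.05 * (X[:, 1] - X[:, 0]) - 2.0 * X[:, 2] + rng.normal(0, 0.1, n)

model = RandomForestRegressor(n_estimators=100).fit(X[:800], y[:800])
print("R^2 on held-out data:", model.score(X[800:], y[800:]))

# Such a model can then score candidate operating plans by simulating the
# environmental change each plan would produce over the next time steps.
```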

Keywords: predictive control, energy management, machine learning, optimization

Procedia PDF Downloads 246
7606 Empirical Roughness Progression Models of Heavy Duty Rural Pavements

Authors: Nahla H. Alaswadko, Rayya A. Hassan, Bayar N. Mohammed

Abstract:

Empirical deterministic models have been developed to predict the roughness progression of heavy duty spray sealed pavements for a dataset representing rural arterial roads. The dataset provides a good representation of the relevant network and covers a wide range of operating and environmental conditions. A large sample of historical time series data for many pavement sections has been collected and prepared for use in multilevel regression analysis. The modelling parameters include road roughness as the performance parameter, and traffic loading, time, initial pavement strength, reactivity level of subgrade soil, climate condition, and condition of the drainage system as predictor parameters. The purpose of this paper is to report the approaches adopted for model development and validation. The study presents multilevel models that can account for the correlation among time series data of the same section and capture the effect of unobserved variables. Study results show that the models fit the data very well. The contribution and significance of the relevant influencing factors in predicting roughness progression are presented and explained. The paper concludes that the analysis approach used for developing the models confirmed their accuracy and reliability through a good fit to the validation data.
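
A hedged sketch of the multilevel idea: a mixed-effects model with a random intercept per pavement section accounts for the correlation among repeated observations of the same section. The variable names and synthetic data below are assumptions standing in for the rural arterial dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
sections = np.repeat(np.arange(30), 10)            # 30 sections, 10 surveys each
age = np.tile(np.arange(10), 30)                   # pavement age at each survey
traffic = rng.uniform(0.5, 2.0, 30)[sections]      # loading, constant per section
section_effect = rng.normal(0, 0.3, 30)[sections]  # unobserved section variation
roughness = (2.0 + 0.15 * age + 0.4 * traffic + section_effect
             + rng.normal(0, 0.1, 300))

df = pd.DataFrame({"roughness": roughness, "age": age,
                   "traffic": traffic, "section": sections})
# Random intercept per section captures within-section correlation
model = smf.mixedlm("roughness ~ age + traffic", df, groups=df["section"]).fit()
print(model.summary())
```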

Keywords: roughness progression, empirical model, pavement performance, heavy duty pavement

Procedia PDF Downloads 147
7605 Wind Power Forecast Error Simulation Model

Authors: Josip Vasilj, Petar Sarajcev, Damir Jakus

Abstract:

One of the major difficulties introduced with wind power penetration is the inherent uncertainty in production originating from uncertain wind conditions. This uncertainty impacts many different aspects of power system operation, especially the balancing power requirements. For this reason, in power system development planning, it is necessary to evaluate the potential uncertainty in future wind power generation. For this purpose, simulation models are required which reproduce the performance of wind power forecasts. This paper presents wind power forecast error simulation models based on stochastic process simulation. The proposed models capture the most important statistical parameters recognized in wind power forecast error time series. Furthermore, two distinct models are presented based on data availability. The first model uses wind speed measurements at potential or existing wind power plant locations, while the second model uses a statistical distribution of wind speeds.
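
A minimal sketch of simulating a forecast error series as a stochastic process that reproduces target statistics (standard deviation and lag-1 autocorrelation) estimated from observed errors. The AR(1) form and the target values are assumptions for illustration, not the paper's calibrated models.

```python
import numpy as np

def simulate_forecast_error(n_steps, sigma, rho, rng):
    """AR(1) process e[t] = rho * e[t-1] + w[t], with stationary std sigma."""
    w_std = sigma * np.sqrt(1 - rho ** 2)
    e = np.zeros(n_steps)
    for t in range(1, n_steps):
        e[t] = rho * e[t - 1] + rng.normal(0, w_std)
    return e

rng = np.random.default_rng(3)
errors = simulate_forecast_error(8760, sigma=0.12, rho=0.85, rng=rng)  # hourly, one year
print("std:", errors.std(),
      "lag-1 autocorr:", np.corrcoef(errors[:-1], errors[1:])[0, 1])
```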

Keywords: wind power, uncertainty, stochastic process, Monte Carlo simulation

Procedia PDF Downloads 457
7604 A Comparative Study of Regional Climate Models and Global Coupled Models over Uttarakhand

Authors: Sudip Kumar Kundu, Charu Singh

Abstract:

As a great physiographic divide, the Himalayas affect a large system of water and air circulation, which helps determine the climatic conditions of the Indian subcontinent to the south and the mid-Asian highlands to the north. The range blocks cold continental air from the north from entering India in winter, and also forces the rain-bearing southwesterly monsoon to release most of its precipitation in the area during the monsoon season. Nowadays, extreme weather events such as heavy precipitation, cloudbursts, flash floods, landslides, and extreme avalanches occur regularly in the North Western Himalayan (NWH) region. The present study investigates which model(s) best reproduce the rainfall pattern over that region. For this investigation, selected models from the Coordinated Regional Climate Downscaling Experiment (CORDEX) and the Coupled Model Intercomparison Project Phase 5 (CMIP5) have been utilized in a consistent framework for the period 1976 to 2000 (historical). The driving models from the CORDEX domain and CMIP5 have been examined according to their ability to reproduce the spatial distribution and time series of rainfall over NWH in the rainy season, compared with the ground-based India Meteorological Department (IMD) gridded rainfall dataset. The analysis shows that models such as MIROC5 and MPI-ESM-LR from both CORDEX and CMIP5 provide the best spatial distribution of rainfall over the NWH region. However, the driving models from CORDEX underestimate the daily rainfall amount compared to the CMIP5 driving models, failing to capture daily rainfall properly when plotted as time series (TS) individually for the states of Uttarakhand (UK) and Himachal Pradesh (HP). Overall, the driving models from CMIP5 are better suited than the CORDEX domain models for investigating the rainfall pattern over the NWH region.

Keywords: global warming, rainfall, CMIP5, CORDEX, NWH

Procedia PDF Downloads 148
7603 Operator Splitting Scheme for the Inverse Nagumo Equation

Authors: Sharon-Yasotha Veerayah-Mcgregor, Valipuram Manoranjan

Abstract:

A backward or inverse problem is known to be ill-posed due to its instability: any slight change in the conditions of the problem can produce large changes in the solution. Therefore, only a limited number of numerical approaches are available to solve a backward problem. This paper considers the Nagumo equation, an equation that describes impulse propagation in nerve axons and also models population growth with the Allee effect. A creative operator splitting numerical scheme is constructed to solve the inverse Nagumo equation. Computational simulations are used to verify that this scheme is stable, accurate, and efficient.
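
To illustrate the splitting idea on the (forward) Nagumo equation u_t = D u_xx + u(1 - u)(u - a), each time step below alternates a diffusion sub-step and a reaction sub-step. The authors' scheme targets the ill-posed backward problem; this forward sketch, with assumed grid sizes and parameters, only shows the operator-splitting structure.

```python
import numpy as np

D, a = 1.0, 0.25
nx, dx, dt, nt = 200, 0.5, 0.05, 400
u = 1.0 / (1.0 + np.exp(np.linspace(-10, 10, nx)))   # smooth initial front

for _ in range(nt):
    # Sub-step 1: explicit diffusion (D*dt/dx^2 must stay below 0.5 for stability)
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
    lap[0] = lap[-1] = 0.0               # hold endpoints fixed (crude boundaries)
    u = u + dt * D * lap / dx**2
    # Sub-step 2: explicit reaction term u(1 - u)(u - a)
    u = u + dt * u * (1 - u) * (u - a)

print("front position (index where u crosses 0.5):", np.argmin(np.abs(u - 0.5)))
```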

Keywords: inverse/backward equation, operator-splitting, Nagumo equation, ill-posed, finite-difference

Procedia PDF Downloads 61
7602 Predicting Options Prices Using Machine Learning

Authors: Krishang Surapaneni

Abstract:

The goal of this project is to determine how to predict important aspects of options, including the ask price. We compare different machine learning models to find the best model, and the best hyperparameters for that model, for this purpose and data set. Option pricing is a relatively new field, and it can be complicated and intimidating, especially to inexperienced people, so we want to create a machine learning model that can predict important aspects of an option, which can aid future research. We tested three different models: a random forest regressor, a linear regressor, and an MLP (multi-layer perceptron) regressor, and experimented with hyperparameter tuning to find some of the best parameters for each. The target feature in this experiment is the ask price; this is what we were trying to predict. In stock price prediction there is a large potential for error, so we cannot judge the models on whether they predict the price perfectly; instead, we measured accuracy as the average percentage difference between the predicted and actual values on the testing data. The linear regression model performed worst, with an average percentage error of 17.46%. The MLP regressor had an average percentage error of 11.45%, and the random forest regressor had an average percentage error of 7.42%.
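
A minimal sketch of the comparison described above: the three regressors are trained to predict the ask price, and accuracy is reported as the average percentage difference between predicted and actual values. Synthetic stand-in data replaces the options dataset, which is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (2000, 5))                                    # assumed option features
y = 50 * X[:, 0] + 10 * X[:, 1] ** 2 + 5 + rng.normal(0, 1, 2000)   # stand-in "ask price"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {"linear": LinearRegression(),
          "mlp": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000),
          "random forest": RandomForestRegressor(n_estimators=200)}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    ape = np.mean(np.abs(pred - y_te) / np.abs(y_te)) * 100
    print(f"{name}: average percentage error = {ape:.2f}%")
```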

Keywords: finance, linear regression model, machine learning model, neural network, stock price

Procedia PDF Downloads 58
7601 The Martingale Options Price Valuation for European Puts Using Stochastic Differential Equation Models

Authors: H. C. Chinwenyi, H. D. Ibrahim, F. A. Ahmed

Abstract:

In modern financial mathematics, valuing derivatives such as options is often a tedious task, simply because their fair and correct future prices are probabilistic. This paper examines three different Stochastic Differential Equation (SDE) models in finance: the Constant Elasticity of Variance (CEV) model, the Black-Karasinski model, and the Heston model. The martingale option price valuation formulas for these three models were obtained using the replicating portfolio method. The numerical solution of the derived martingale option price valuation equations for the SDE models was carried out using the Monte Carlo method, implemented in MATLAB. Furthermore, numerical examples using published Nigeria Stock Exchange (NSE) All-Share Index data show the effect of an increase in the underlying asset value (stock price) on the value of the European put option for these models. From the results obtained, we see that an increase in the stock price yields a decrease in the value of the European put option. This guides the option holder in making a sound decision not to exercise the right on the option.
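
A hedged sketch of one piece of this approach: Monte Carlo valuation of a European put under the CEV model dS = r S dt + sigma S^gamma dW. The paper also covers the Black-Karasinski and Heston models and uses MATLAB with NSE data; the parameters below are assumptions.

```python
import numpy as np

def cev_put_mc(S0, K, r, sigma, gamma, T, n_steps=252, n_paths=50_000, seed=5):
    """Monte Carlo price of a European put under CEV via Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0, np.sqrt(dt), n_paths)
        S = np.maximum(S + r * S * dt + sigma * S ** gamma * dW, 1e-8)
    payoff = np.maximum(K - S, 0)                  # European put payoff
    return np.exp(-r * T) * payoff.mean()          # discounted expectation

# Higher S0 lowers the put value, matching the paper's observation.
for S0 in (80, 100, 120):
    print(S0, round(cev_put_mc(S0, K=100, r=0.05, sigma=0.8, gamma=0.8, T=1.0), 3))
```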

Keywords: equivalent martingale measure, European put option, Girsanov theorem, martingales, Monte Carlo method, option price valuation formula

Procedia PDF Downloads 112
7600 Face Recognition Using Eigen Faces Algorithm

Authors: Shweta Pinjarkar, Shrutika Yawale, Mayuri Patil, Reshma Adagale

Abstract:

Face recognition is a technique that can be applied to a wide variety of problems, such as image and film processing, human-computer interaction, and criminal identification. This has motivated researchers to develop computational models for identifying faces that are easy and simple to implement. This work demonstrates a face recognition system on an Android device using eigenfaces. The system can be used as the basis for the development of human identity recognition. Test images and training images are taken directly with the camera on the Android device, and the test results show that the system produces high accuracy. The goal is to implement a model for a particular face and distinguish it from a large number of stored faces. The face recognition system detects the faces in a picture taken by a web camera or a digital camera, and these images are then checked against a training image dataset based on descriptive features. The algorithm can further be extended to recognize a person's facial expressions, and recognition can be carried out under widely varying conditions, such as frontal view and scaled frontal view, including subjects with spectacles. The algorithm models real-time varying lighting conditions. The implemented system is able to perform real-time face detection and face recognition, and can give feedback in a window with the subject's information from the database and send an e-mail notification to interested institutions using the Android application.
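
A minimal sketch of the eigenfaces method at the core of such a system: PCA on flattened face images, then nearest-neighbour matching in the reduced space. Random arrays stand in for the camera-captured training images, and the component count is an assumption.

```python
import numpy as np

rng = np.random.default_rng(6)
train = rng.random((40, 64 * 64))            # 40 flattened 64x64 training faces

mean_face = train.mean(axis=0)
A = train - mean_face
# Standard trick for high-dimensional data: eigenvectors of the small A A^T
eigvals, eigvecs = np.linalg.eigh(A @ A.T)
order = np.argsort(eigvals)[::-1][:20]       # keep the top 20 components
eigenfaces = (A.T @ eigvecs[:, order]).T
eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)

def recognize(face):
    """Project onto the eigenfaces and return the closest training image."""
    w = eigenfaces @ (face - mean_face)
    weights = (eigenfaces @ A.T).T           # each training image's projection
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))

probe = train[7] + rng.normal(0, 0.05, 64 * 64)   # noisy version of face 7
print(recognize(probe))                           # expected: 7
```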

Keywords: face detection, face recognition, eigen faces, algorithm

Procedia PDF Downloads 338
7599 Modelling the Art Historical Canon: The Use of Dynamic Computer Models in Deconstructing the Canon

Authors: Laura M. F. Bertens

Abstract:

There is a long tradition of visually representing the art historical canon, in schematic overviews and diagrams. This is indicative of the desire for scientific, ‘objective’ knowledge of the kind (seemingly) produced in the natural sciences. These diagrams will, however, always retain an element of subjectivity and the modelling methods colour our perception of the represented information. In recent decades visualisations of art historical data, such as hand-drawn diagrams in textbooks, have been extended to include digital, computational tools. These tools significantly increase modelling strength and functionality. As such, they might be used to deconstruct and amend the very problem caused by traditional visualisations of the canon. In this paper, the use of digital tools for modelling the art historical canon is studied, in order to draw attention to the artificial nature of the static models that art historians are presented with in textbooks and lectures, as well as to explore the potential of digital, dynamic tools in creating new models. To study the way diagrams of the canon mediate the represented information, two modelling methods have been used on two case studies of existing diagrams. The tree diagram Stammbaum der neudeutschen Kunst (1823) by Ferdinand Olivier has been translated to a social network using the program Visone, and the famous flow chart Cubism and Abstract Art (1936) by Alfred Barr has been translated to an ontological model using Protégé Ontology Editor. The implications of the modelling decisions have been analysed in an art historical context. The aim of this project has been twofold. On the one hand the translation process makes explicit the design choices in the original diagrams, which reflect hidden assumptions about the Western canon. Ways of organizing data (for instance ordering art according to artist) have come to feel natural and neutral and implicit biases and the historically uneven distribution of power have resulted in underrepresentation of groups of artists. Over the last decades, scholars from fields such as Feminist Studies, Postcolonial Studies and Gender Studies have considered this problem and tried to remedy it. The translation presented here adds to this deconstruction by defamiliarizing the traditional models and analysing the process of reconstructing new models, step by step, taking into account theoretical critiques of the canon, such as the feminist perspective discussed by Griselda Pollock, amongst others. On the other hand, the project has served as a pilot study for the use of digital modelling tools in creating dynamic visualisations of the canon for education and museum purposes. Dynamic computer models introduce functionalities that allow new ways of ordering and visualising the artworks in the canon. As such, they could form a powerful tool in the training of new art historians, introducing a broader and more diverse view on the traditional canon. Although modelling will always imply a simplification and therefore a distortion of reality, new modelling techniques can help us get a better sense of the limitations of earlier models and can provide new perspectives on already established knowledge.

Keywords: canon, ontological modelling, Protege Ontology Editor, social network modelling, Visone

Procedia PDF Downloads 105
7598 Parametric Urbanism: A Climate Responsive Urban Form for the MENA Region

Authors: Norhan El Dallal

Abstract:

The MENA region is a challenging, rapidly urbanizing region with a special profile: culturally, socially, economically, and environmentally. Despite the diversity between the countries of the MENA region, they all share similar urban challenges where extensive interventions are crucial. A climate-sensitive region such as the MENA region requires special attention for development, adaptation, and mitigation. The aim of this research is to integrate climatic and environmental parameters into the planning process to create a responsive urban form, with “Parametric Urbanism” serving as a tool to reach a more sustainable urban morphology. The main objective of the thesis is to parameterize the relation between climate and urban form in a detailed manner, relating the passive approaches suitable for the MENA region to the design guidelines of each part of the planning phase. The next steps after the parameterization are various conceptual scenarios for network pattern and block subdivision generation based on computational models. These theoretical models could be applied to different climatic zones of the dense communities of the MENA region to achieve an energy-efficient neighborhood or city with respect to urban form, morphology, and urban planning pattern. A final critique of the theoretical model assesses the economic feasibility of the proposed solutions. Finally, some push and pull policies are proposed to help integrate these solutions into the planning process.

Keywords: parametric urbanism, climate responsive, urban form, urban and regional studies

Procedia PDF Downloads 455
7597 Developing Location-Allocation Models in the Three-Echelon Supply Chain

Authors: Mehdi Seifbarghy, Zahra Mansouri

Abstract:

In this paper, several location-allocation models are developed for a multi-echelon supply chain including suppliers, manufacturers, distributors, and retailers. The objectives are maximizing demand coverage, minimizing the total distance of distributors from suppliers, minimizing facility establishment costs, and minimizing environmental effects. Since the models are multi-objective in nature, we suggest a number of goal-based solution techniques, such as the L-P metric, goal programming, multi-choice goal programming, and goal attainment, to solve the problems.
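
A hedged sketch of the goal programming idea: hard objectives become goals with deviation variables, and a weighted sum of unwanted deviations is minimized. The two-facility toy problem and weights below are assumptions, far smaller than the paper's multi-echelon models.

```python
from pulp import LpProblem, LpMinimize, LpVariable

prob = LpProblem("location_allocation_goals", LpMinimize)
x1 = LpVariable("open_site_1", cat="Binary")
x2 = LpVariable("open_site_2", cat="Binary")
d_cov = LpVariable("coverage_shortfall", lowBound=0)   # under-achievement
d_cost = LpVariable("cost_overrun", lowBound=0)        # over-achievement

# Goal 1: cover at least 90 demand units (site 1 covers 60, site 2 covers 50)
prob += 60 * x1 + 50 * x2 + d_cov >= 90
# Goal 2: establishment cost (site 1: 100, site 2: 70) at most 120
prob += 100 * x1 + 70 * x2 - d_cost <= 120
# Weighted goal programming objective: penalize the unwanted deviations
prob += 5 * d_cov + 1 * d_cost
prob.solve()

print(x1.value(), x2.value(), d_cov.value(), d_cost.value())
```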

Keywords: location, multi-echelon supply chain, covering, goal programming

Procedia PDF Downloads 538
7596 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices

Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu

Abstract:

Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing CKD complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices data. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk for developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
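
A minimal sketch of the non-laboratory-only setting described above: a random forest predicts CKD status from age, sex, BMI, and waist circumference alone. Synthetic data and the assumed risk formula stand in for the Taiwanese cohort; the paper additionally trains XGBoost and uses Cox regression for variable selection.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 5000
age = rng.uniform(20, 80, n)
sex = rng.integers(0, 2, n)
bmi = rng.normal(25, 4, n)
waist = bmi * 2.8 + rng.normal(0, 5, n)
X = np.column_stack([age, sex, bmi, waist])   # non-laboratory features only

# Assumed risk model, used only to generate the synthetic labels
logit = -6 + 0.05 * age + 0.08 * (bmi - 25) + 0.3 * sex
ckd = rng.random(n) < 1 / (1 + np.exp(-logit))

clf = RandomForestClassifier(n_estimators=300, min_samples_leaf=20)
auc = cross_val_score(clf, X, ckd, cv=5, scoring="roc_auc")
print("AUC without laboratory data:", auc.mean().round(3))
```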

Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction

Procedia PDF Downloads 76
7595 Intensive Use of Software in Teaching and Learning Calculus

Authors: Nodelman V.

Abstract:

Despite serious difficulties in the assimilation of the conceptual system of Calculus, software is used in the educational process only occasionally, and even then mainly for illustration purposes. There are several reasons for this: the non-trivial nature of the studied material; lack of skills in working with software; fear of losing time working with software; the variety of the software itself, with its corresponding interfaces, syntax, and working methods; the need to find suitable models and become familiar with them; and the incomplete compatibility of the available models with the content and teaching methods of the studied material. This paper proposes an active use of the developed non-commercial software VusuMatica, which removes these restrictions through broad support for the studied mathematical material (and not only Calculus), eliminating the need to select the right software; an emphasis on the unity of mathematics and its intrasubject and interdisciplinary relations; a user-friendly interface; the absence of special syntax for defining mathematical objects; ease of building and manipulating models of the studied material; and unlimited flexibility of the models thanks to the ability to redefine objects, which allows exploring the characteristics of objects and considering examples and counterexamples of the concepts under study. The construction of models is based on an original approach to the analysis of the structure of the studied concepts. Thanks to the ease of construction, students are able not only to use ready-made models but also to create them on their own and explore the studied material with their help. The presentation includes examples of using VusuMatica in studying the concepts of limit and continuity of a function, its derivative, and its integral.

Keywords: counterexamples, limitations and requirements, software, teaching and learning calculus, user-friendly interface and syntax

Procedia PDF Downloads 56
7594 Rheological and Computational Analysis of Crude Oil Transportation

Authors: Praveen Kumar, Satish Kumar, Jashanpreet Singh

Abstract:

Transporting unrefined crude oil from the production unit to a refinery or large storage area by pipeline is difficult because the properties of crude differ between areas. Thus, the design of a crude oil pipeline is a very complex and time-consuming process when all the various parameters are considered. Three very important parameters play a significant role in transportation and processing pipeline design: the viscosity profile, the temperature profile, and the velocity profile of waxy crude oil through the pipeline. Knowledge of rheological computational techniques is required for a better understanding of the flow behavior and for predicting the flow profile in a crude oil pipeline. From these profile parameters, the material and the emulsion best suited for crude oil transportation can be predicted. The rheological computational fluid dynamics technique is a fast method for designing the flow profile in a crude oil pipeline with the help of computational fluid dynamics and rheological modeling. With this technique, the effects of fluid properties, including shear rate range with temperature variation, degree of viscosity, elastic modulus, and viscous modulus, were evaluated under different conditions in a transport pipeline. In this paper, two crude oil samples were used, as well as emulsions prepared with natural and synthetic additives at concentrations ranging from 1,000 ppm to 3,000 ppm. The rheological properties were then evaluated over a temperature range of 25 to 60 °C to determine which additive is best suited for the transportation of crude oil. Commercial computational fluid dynamics (CFD) software was used to generate the flow, velocity, and viscosity profiles of the emulsions for flow behavior analysis in a crude oil transportation pipeline. This rheological CFD design can be further applied to pipeline designs in the future.
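
A hedged sketch of one building block of such an analysis: fitting an Arrhenius-type viscosity-temperature model mu = A * exp(Ea / (R*T)) to measurements over the paper's 25-60 °C range, which can then feed a CFD viscosity profile. The data points below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(T_kelvin, A, Ea):
    """Viscosity-temperature law: mu = A * exp(Ea / (R*T))."""
    return A * np.exp(Ea / (R * T_kelvin))

T = np.array([25, 35, 45, 60]) + 273.15          # °C converted to K
mu = np.array([0.85, 0.52, 0.34, 0.19])          # assumed viscosities, Pa*s

(A, Ea), _ = curve_fit(arrhenius, T, mu, p0=(1e-6, 3e4))
print(f"A = {A:.3e} Pa*s, Ea = {Ea / 1000:.1f} kJ/mol")
print("predicted mu at 50 °C:", arrhenius(50 + 273.15, A, Ea))
```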

Keywords: surfactant, natural, crude oil, rheology, CFD, viscosity

Procedia PDF Downloads 408
7593 Identifying Promoters and Their Types Based on a Two-Layer Approach

Authors: Bin Liu

Abstract:

A prokaryotic promoter, consisting of two short DNA sequences located at the -35 and -10 positions, is responsible for controlling the initiation of gene expression. Different types of promoters have different functions, yet their consensus sequences are similar. In addition, the consensus sequences may differ within the same type of promoter, which poses difficulties for promoter identification. Unfortunately, existing computational methods treat promoter identification as a binary classification task and can only identify whether a query sequence belongs to a specific promoter type. It is therefore desirable to develop computational methods for effectively identifying promoters and their types. Here, a two-layer predictor is proposed to deal with this problem. The first layer is designed to predict whether a given sequence is a promoter, and the second layer predicts the type of a sequence judged to be a promoter. We also analyze the importance of features and sequence conservation in two respects: promoter identification and promoter type identification. To the best of our knowledge, this is the first computational predictor to detect both promoters and their types.
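
A minimal sketch of the two-layer design described above: layer 1 decides promoter versus non-promoter, and layer 2 assigns a type to sequences judged to be promoters. K-mer frequencies, random sequences, and placeholder labels are assumptions standing in for the paper's features and data; the random forest follows the keywords.

```python
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestClassifier

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]

def kmer_features(seq):
    """Sliding-window counts of all 3-mers in the sequence."""
    counts = {k: 0 for k in KMERS}
    for i in range(len(seq) - 2):
        counts[seq[i:i + 3]] += 1
    return np.array([counts[k] for k in KMERS], dtype=float)

rng = np.random.default_rng(8)
def random_seq(n=81):
    return "".join(rng.choice(list("ACGT"), n))

seqs = [random_seq() for _ in range(300)]
is_promoter = rng.integers(0, 2, 300)            # placeholder labels
ptype = rng.integers(0, 3, 300)                  # placeholder promoter types
X = np.array([kmer_features(s) for s in seqs])

layer1 = RandomForestClassifier().fit(X, is_promoter)
mask = is_promoter == 1
layer2 = RandomForestClassifier().fit(X[mask], ptype[mask])

query = kmer_features(random_seq()).reshape(1, -1)
if layer1.predict(query)[0] == 1:                # layer 1: is it a promoter?
    print("promoter of type", layer2.predict(query)[0])  # layer 2: which type?
else:
    print("non-promoter")
```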

Keywords: promoter, promoter type, random forest, sequence information

Procedia PDF Downloads 165
7592 Nanoparticles on Biological Biomarker Models: Paramecium tetraurelia and Helix aspersa

Authors: H. Djebar, L. Khene, M. Boucenna, M. R. Djebar, M. N. Khebbeb, M. Djekoun

Abstract:

Currently in toxicology, the use of alternative models makes it possible to understand the mechanisms of toxicity at different cellular levels. Our research aims to determine the effects of ZnO, TiO2, AlO2, and FeO2 nanoparticles (NPs) on the freshwater ciliate protist Paramecium sp. and on Helix aspersa. The results obtained show that NPs increase antioxidative enzyme activities, such as catalase and glutathione-S-transferase, as well as the GSH level. Also, cells treated with high concentrations of NPs showed a high level of MDA. In conclusion, observations from growth and enzymatic parameters suggest, on the one hand, that treatment with NPs provokes oxidative stress and, on the other, that the snail and the paramecium are excellent alternative models for ecotoxicological studies.

Keywords: NPs, GST, catalase, GSH, MDA, toxicity, snail and paramecium

Procedia PDF Downloads 258
7591 Multiscale Process Modeling of Ceramic Matrix Composites

Authors: Marianna Maiaru, Gregory M. Odegard, Josh Kemppainen, Ivan Gallegos, Michael Olaya

Abstract:

Ceramic matrix composites (CMCs) are typically used in applications that require long-term mechanical integrity at elevated temperatures. CMCs are usually fabricated using a polymer precursor that is initially polymerized in situ with fiber reinforcement, followed by a series of cycles of pyrolysis to transform the polymer matrix into a rigid glass or ceramic. The pyrolysis step typically generates volatile gasses, which creates porosity within the polymer matrix phase of the composite. Subsequent cycles of monomer infusion, polymerization, and pyrolysis are often used to reduce the porosity and thus increase the durability of the composite. Because of the significant expense of such iterative processing cycles, new generations of CMCs with improved durability and manufacturability are difficult and expensive to develop using standard Edisonian approaches. The goal of this research is to develop a computational process-modeling-based approach that can be used to design the next generation of CMC materials with optimized material and processing parameters for maximum strength and efficient manufacturing. The process modeling incorporates computational modeling tools, including molecular dynamics (MD), to simulate the material at multiple length scales. Results from MD simulation are used to inform the continuum-level models to link molecular-level characteristics (material structure, temperature) to bulk-level performance (strength, residual stresses). Processing parameters are optimized such that process-induced residual stresses are minimized and laminate strength is maximized. The multiscale process modeling method developed with this research can play a key role in the development of future CMCs for high-temperature and high-strength applications. By combining multiscale computational tools and process modeling, new manufacturing parameters can be established for optimal fabrication and performance of CMCs for a wide range of applications.

Keywords: digital engineering, finite elements, manufacturing, molecular dynamics

Procedia PDF Downloads 78
7590 Contextual Toxicity Detection with Data Augmentation

Authors: Julia Ive, Lucia Specia

Abstract:

Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). This data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist terms, etc.), so that context is not needed for a decision, or are ambiguous, vague, or unclear even in the presence of context; in addition, the data contains labeling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations in both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking ours against previous models on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
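
A hedged sketch of a conversation-structure-aware classifier of the kind argued for above: a lower encoder builds one vector per tweet, an upper encoder runs over the thread, and the final state classifies the target tweet in context. The GRU encoders and sizes are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class HierarchicalToxicityClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=128, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.utterance_enc = nn.GRU(emb_dim, hidden, batch_first=True)
        self.thread_enc = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, thread):            # thread: (batch, n_tweets, n_tokens)
        b, n, t = thread.shape
        tokens = self.emb(thread.view(b * n, t))
        _, h = self.utterance_enc(tokens)             # one vector per tweet
        tweet_vecs = h.squeeze(0).view(b, n, -1)
        _, h_thread = self.thread_enc(tweet_vecs)     # context-aware summary
        return self.head(h_thread.squeeze(0))         # toxicity logits

model = HierarchicalToxicityClassifier(vocab_size=10_000)
batch = torch.randint(1, 10_000, (4, 3, 20))  # 4 threads, 3 tweets, 20 tokens
print(model(batch).shape)                     # torch.Size([4, 2])
```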

Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing

Procedia PDF Downloads 142
7589 Computational Experiment on Evolution of E-Business Service Ecosystem

Authors: Xue Xiao, Sun Hao, Liu Donghua

Abstract:

E-commerce is experiencing rapid development and evolution, but traditional research methods struggle to fully demonstrate the relationship between micro factors and macro evolution in the development of e-commerce, and so can neither provide an accurate assessment of existing strategies nor predict future evolution trends. To solve these problems, this paper presents the concept of an e-commerce service ecosystem based on the characteristics of e-commerce and business ecosystem theory, describes the e-commerce environment as a complex adaptive system from an ecological perspective, constructs an e-commerce service ecosystem model using agent-based modeling in the Java language on the RePast simulation platform, and conducts computational experiments, attempting to provide a suitable and effective research method for studying e-commerce evolution. Two experiments show that the system model built in this paper is able to reproduce the evolution process of the e-commerce service ecosystem and the relationship between micro factors and macro emergence. The system model constructed with the agent-based method and computational experiments therefore provides a proper means to study the evolution of the e-commerce ecosystem.
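
A minimal sketch of the agent-based computational experiment idea. The paper builds its model in Java on RePast; this plain-Python loop, with assumed agent rules and parameters, only illustrates how micro-level rules (individual seller quality and customer choice) produce macro-level emergence (market concentration).

```python
import random

random.seed(9)

class Seller:
    def __init__(self):
        self.quality = random.uniform(0, 1)   # micro factor: service quality
        self.customers = 1

def step(sellers):
    """Each period one customer picks a seller, favouring quality and size."""
    weights = [s.quality * s.customers for s in sellers]
    chosen = random.choices(sellers, weights=weights)[0]
    chosen.customers += 1
    chosen.quality = min(1.0, chosen.quality + 0.001)  # assumed learning effect

sellers = [Seller() for _ in range(50)]
for _ in range(10_000):
    step(sellers)

shares = sorted([s.customers for s in sellers], reverse=True)
total = sum(shares)
print("top-5 market shares:", [round(c / total, 3) for c in shares[:5]])
```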

Keywords: e-commerce service ecosystem, complex system, agent-based modeling, computational experiment

Procedia PDF Downloads 326
7588 Optimizing the Morphology and Flow Patterns of Scaffold Perfusion Systems for Effective Cell Deposition Using Computational Fluid Dynamics

Authors: Vineeth Siripuram, Abhineet Nigam

Abstract:

A bioreactor is an engineered system that supports a biologically active environment. Over the years, advancements in bioreactors have been widely adopted all over the world for varied applications ranging from sewage treatment to tissue cloning. Driven by tissue and organ shortages, tissue engineering has emerged as an alternative to transplantation for the reconstruction of lost or damaged organs. In this study, computational fluid dynamics (CFD) has been used to model porous medium flow in scaffolds (taken from the literature) with different flow patterns. A detailed analysis of different scaffold geometries and their influence on cell deposition in the perfusion system is carried out using CFD. Considering that the scaffold should mimic organ or tissue structures in a three-dimensional manner, certain assumptions were made accordingly. Research on scaffolds has been carried out extensively in different bioreactors; however, there has been less focus on scaffold morphology and on the flow patterns on which the perfusion system is based. The objective of this paper is to employ a computational approach using CFD simulation to determine the optimal morphology and the anisotropic measurements of various scaffold samples. Using a predictive computational modelling approach, the variables that exert dominant effects on cell deposition within the scaffold were prioritised, and corresponding changes were made to the scaffold morphology and the flow patterns in the perfusion systems. A Eulerian approach was adopted in multiple CFD simulations, and it is observed that the morphological and topological changes in the scaffold perfusion system are of great importance in the commercial applications of scaffolds.

Keywords: cell seeding, CFD, flow patterns, modelling, perfusion systems, scaffold

Procedia PDF Downloads 136
7587 A Novel Algorithm for Parsing IFC Models

Authors: Raninder Kaur Dhillon, Mayur Jethwa, Hardeep Singh Rai

Abstract:

Information technology has made pivotal progress across disparate disciplines, one of which is the AEC (Architecture, Engineering and Construction) industry. CAD is a form of computer-aided building modeling that architects, engineers and contractors use to create and view two- and three-dimensional models. The AEC industry also uses building information modeling (BIM), a newer computerized modeling system that can create four-dimensional models; this software can greatly increase productivity in the AEC industry. BIM models generate open-source IFC (Industry Foundation Classes) files, which aim for interoperability by exchanging information throughout the project lifecycle among various disciplines. The methods developed in previous studies require either an IFC schema or an MVD together with software applications, such as an IFC model server or a Building Information Modeling (BIM) authoring tool, to extract a partial or complete IFC instance model. This paper proposes an efficient algorithm for extracting a partial or total model from an Industry Foundation Classes (IFC) instance model without an IFC schema or a complete IFC model view definition (MVD).
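
A hedged sketch of parsing IFC instance lines (STEP physical file format) without a schema: each '#id=TYPE(...);' line is indexed, and a partial model is extracted by following '#ref' occurrences from a root instance. This illustrates the schema-free idea only, not the authors' exact algorithm.

```python
import re

LINE = re.compile(r"#(\d+)\s*=\s*(\w+)\s*\((.*?)\);")
REF = re.compile(r"#(\d+)")

sample = """#1=IFCPROJECT('2x3GUID',#2,'Demo',$,$,$,$,(#3),#4);
#2=IFCOWNERHISTORY($,$,$,$,$,$,$,0);
#3=IFCGEOMETRICREPRESENTATIONCONTEXT($,'Model',3,1.E-5,#4,$);
#4=IFCAXIS2PLACEMENT3D(#5,$,$);
#5=IFCCARTESIANPOINT((0.,0.,0.));"""

instances = {int(m[1]): (m[2], m[3]) for m in LINE.finditer(sample)}

def extract_partial(root_id):
    """Collect the root instance and everything it (transitively) references."""
    keep, stack = set(), [root_id]
    while stack:
        iid = stack.pop()
        if iid in keep or iid not in instances:
            continue
        keep.add(iid)
        stack.extend(int(r) for r in REF.findall(instances[iid][1]))
    return keep

print(sorted(extract_partial(1)))   # -> [1, 2, 3, 4, 5]
print(instances[5][0])              # -> IFCCARTESIANPOINT
```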

Keywords: BIM, CAD, IFC, MVD

Procedia PDF Downloads 273
7586 Forecasting Performance Comparison of Autoregressive Fractional Integrated Moving Average and Jordan Recurrent Neural Network Models on the Turbidity of Stream Flows

Authors: Daniel Fulus Fom, Gau Patrick Damulak

Abstract:

In this study, the Autoregressive Fractional Integrated Moving Average (ARFIMA) and Jordan Recurrent Neural Network (JRNN) models were employed to forecast the daily turbidity flow of White Clay Creek (WCC). The two methods were applied to the log difference series of the daily turbidity flow series of WCC. The error measures employed to investigate the forecasting performance of the ARFIMA and JRNN models are the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE). The investigation revealed that the JRNN technique outperforms the ARFIMA technique in the mean square error sense. The results of the ARFIMA and JRNN models were obtained by simulating the models in MATLAB version 8.03. The significance of using the log difference series rather than the difference series is that the log difference series stabilizes the turbidity flow series better than the difference series for both the ARFIMA and JRNN models.
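
A minimal sketch of the two error measures used above, applied to a log-difference series. The arrays are placeholders for the WCC data and the two models' forecasts, which are not reproduced here.

```python
import numpy as np

observed = np.array([0.12, -0.05, 0.08, -0.02, 0.10])   # log-difference series
arfima = np.array([0.10, -0.02, 0.05, -0.04, 0.07])     # stand-in ARFIMA forecasts
jrnn = np.array([0.11, -0.04, 0.07, -0.02, 0.09])       # stand-in JRNN forecasts

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

for name, f in [("ARFIMA", arfima), ("JRNN", jrnn)]:
    print(f"{name}: RMSE={rmse(observed, f):.4f}, MAE={mae(observed, f):.4f}")
```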

Keywords: autoregressive, mean absolute error, neural network, root mean square error

Procedia PDF Downloads 246
7585 Preliminary Conceptions of 3D Prototyping Model to Experimental Investigation in Hypersonic Shock Tunnels

Authors: Thiago Victor Cordeiro Marcos, Joao Felipe de Araujo Martos, Ronaldo de Lima Cardoso, David Romanelli Pinto, Paulo Gilberto de Paula Toro, Israel da Silveira Rego, Antonio Carlos de Oliveira

Abstract:

Currently, the use of 3D rapid prototyping, also known as 3D printing, is being investigated by some universities around the world as an innovative, fast, flexible, and cheap technique for the direct manufacturing of plastic models that are lighter and have complex geometries, to be tested in hypersonic shock tunnels. Initially, the purpose is to integrate prototyped parts with metal models that are currently manufactured by conventional machining, and thereafter to replace them with completely prototyped models. The mechanical design of models to be tested in hypersonic shock tunnels has been based on conventional manufacturing processes and is therefore limited to standard forms and geometries. The use of 3D rapid prototyping offers a range of options that enables innovation in the geometries and methods used to design new models. The conception and design of a prototyped model for a hypersonic shock tunnel should be rethought and adapted in comparison with conventional manufacturing processes, in order to fully exploit the creativity and flexibility allowed by the 3D prototyping process. The objective of this paper is to compare the conception and design of a 3D rapid prototyped model with that of a conventionally machined model, showing the advantages and disadvantages of each process and the benefits that 3D prototyping can bring to the manufacture of models to be tested in hypersonic shock tunnels.

Keywords: 3D printing, 3D prototyping, experimental research, hypersonic shock tunnel

Procedia PDF Downloads 439
7584 A General Framework to Successfully Operate the Digital Transformation Process in the Post-COVID Era

Authors: Driss Kettani

Abstract:

In this paper, we shed light on “Digital Divide 2.0,” which we see as COVID-19's version of the digital divide. We believe that fighting Digital Divide 2.0 requires a country to be seriously advanced in the global digital transformation, which is, naturally, a complex, delicate, costly, and long-term process. We build an argument supporting our assumption and, from there, present the foundations of a computational framework to guide and streamline digital transformation at all levels.

Keywords: digital divide 2.0, digital transformation, ICTs for development, computational outcomes assessment

Procedia PDF Downloads 143
7583 Sensitivity Based Robust Optimization Using 9 Level Orthogonal Array and Stepwise Regression

Authors: K. K. Lee, H. W. Han, H. L. Kang, T. A. Kim, S. H. Han

Abstract:

For the robust optimization of a manufacturing product design, there are design objectives that must be achieved, such as minimizing the mean and standard deviation of the objective functions within the required sensitivity constraints. The authors utilized the sensitivity of the objective functions and constraints with respect to the effective design variables to reduce the computational burden associated with the evaluation of the probabilities. The individual mean and sensitivity values can be estimated easily by using 9-level orthogonal array based response surface models optimized by stepwise regression. The present study evaluates the proposed procedure on the robust optimization of rubber domes, which are commonly used for keyboard switching, using the 9-level orthogonal array and stepwise regression along with a desirability function. In addition, a new robust optimization process, the I2GEO (Identify, Integrate, Generate, Explore and Optimize), is proposed on the basis of the robust optimization of the rubber domes. The optimized results from the response surface models and the results estimated by finite element analysis were consistent within a small margin of error. The standard deviation of the objective function decreases by 54.17% with the suggested sensitivity-based robust optimization. (Business for Cooperative R&D between Industry, Academy, and Research Institute funded by the Korea Small and Medium Business Administration in 2017, S2455569)
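
A hedged sketch of forward stepwise regression for building a response surface from designed-experiment runs: candidate terms enter one at a time, keeping the term that most improves the fit. The quadratic candidate terms, the stopping threshold, and the data (uniform samples standing in for 9-level orthogonal array runs) are assumptions; the paper additionally couples the models with a desirability function.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
X = rng.uniform(-1, 1, (81, 4))                  # stand-in for 81 OA runs
y = 3 * X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(0, 0.1, 81)

candidates = {f"x{i}": X[:, i] for i in range(4)}
candidates.update({f"x{i}^2": X[:, i] ** 2 for i in range(4)})

selected, pool, best_r2 = [], dict(candidates), -np.inf
while pool:
    scores = {}
    for name, col in pool.items():
        Z = np.column_stack([candidates[s] for s in selected] + [col])
        scores[name] = LinearRegression().fit(Z, y).score(Z, y)
    name = max(scores, key=scores.get)
    if scores[name] - best_r2 < 0.01:            # stop when the gain is negligible
        break
    best_r2 = scores[name]
    selected.append(name)
    pool.pop(name)

print("selected terms:", selected, "R^2:", round(best_r2, 3))
```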

Keywords: objective function, orthogonal array, response surface model, robust optimization, stepwise regression

Procedia PDF Downloads 266
7582 Neural Machine Translation for Low-Resource African Languages: Benchmarking State-of-the-Art Transformer for Wolof

Authors: Cheikh Bamba Dione, Alla Lo, Elhadji Mamadou Nguer, Siley O. Ba

Abstract:

In this paper, we propose two neural machine translation (NMT) systems (French-to-Wolof and Wolof-to-French) based on sequence-to-sequence with attention and transformer architectures. We trained our models on a parallel French-Wolof corpus of about 83k sentence pairs. Because of the low-resource setting, we experimented with advanced methods for handling data sparsity, including subword segmentation, back-translation, and the copied corpus method. We evaluate the models using the BLEU score and find that the transformer outperforms the classic seq2seq model in all settings, in addition to being less sensitive to noise. In general, the best scores are achieved when training the models on word-level units. For subword-level models, using back-translation proves to be slightly beneficial in low-resource (WO) to high-resource (FR) language translation for the transformer (but not for the seq2seq) models. A slight improvement can also be observed when injecting copied monolingual text in the target language. Moreover, combining the copied method data with back-translation leads to a substantial improvement in translation quality.
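
A hedged sketch of the back-translation step used above: monolingual text in the target language is translated by a reverse model to create synthetic source sentences, and the synthetic pairs are added to the training data. The `reverse_model.translate` call is a hypothetical placeholder, not an API from the paper.

```python
def back_translate(monolingual_target, reverse_model):
    """Build synthetic (source, target) pairs from target-language text."""
    pairs = []
    for sentence in monolingual_target:
        synthetic_source = reverse_model.translate(sentence)  # hypothetical API
        pairs.append((synthetic_source, sentence))
    return pairs

# For Wolof-to-French, the reverse model translates French back into Wolof:
# train_pairs += back_translate(monolingual_french, fr_to_wo_model)
```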

Keywords: backtranslation, low-resource language, neural machine translation, sequence-to-sequence, transformer, Wolof

Procedia PDF Downloads 118
7581 The Influence of Contact Models on Discrete Element Modeling of the Ballast Layer Subjected to Cyclic Loading

Authors: Peyman Aela, Lu Zong, Guoqing Jing

Abstract:

Recently, there has been growing interest in the numerical modeling of ballast railway tracks. A commonly used mechanistic modeling approach for ballast is the discrete element method (DEM). Up to now, the effects of the contact model on ballast particle behavior have not been precisely examined. In this regard, selecting the appropriate contact model is mainly associated with the particle characteristics and the loading condition. Since ballast is a cohesionless material, different contact models, including the linear spring, Hertz-Mindlin, and hysteretic models, can be used to calculate particle-particle or wall-particle contact forces. Moreover, the simulation of a dynamic test is vital for investigating the effect of damping parameters on ballast deformation. In this study, ballast box tests were simulated by DEM to examine the influence of different contact models on the mechanical behavior of the ballast layer under cyclic loading. This paper shows how the contact model can affect the deformation and damping of a ballast layer subjected to cyclic loading in a ballast box.
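
A minimal sketch of why the contact model choice matters: the normal force between two identical spherical particles at the same overlap differs between a linear-spring law and the Hertz law F = (4/3) E_eff sqrt(R_eff) delta^1.5. The material parameters below are assumptions.

```python
import numpy as np

E, nu, R = 50e9, 0.25, 0.02            # Young's modulus (Pa), Poisson ratio, radius (m)
E_eff = E / (2 * (1 - nu**2))          # effective modulus for identical spheres
R_eff = R / 2                          # effective radius for identical spheres

def hertz_force(delta):
    """Hertzian normal contact force, nonlinear in the overlap delta."""
    return (4.0 / 3.0) * E_eff * np.sqrt(R_eff) * delta**1.5

def linear_force(delta, k=1e8):        # assumed constant stiffness (N/m)
    return k * delta

for delta in (1e-6, 1e-5, 1e-4):       # overlaps in metres
    print(f"overlap {delta:.0e} m: Hertz {hertz_force(delta):.1f} N, "
          f"linear {linear_force(delta):.1f} N")
```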

Keywords: ballast, contact model, cyclic loading, DEM

Procedia PDF Downloads 161