Search results for: forecasting models
Paper Count: 2678

2468 Study of Adsorption Isotherm Models on Rare Earth Elements Biosorption for Separation Purposes

Authors: Nice Vasconcelos Coimbra, Fábio dos Santos Gonçalves, Marisa Nascimento, Ellen Cristine Giese

Abstract:

The development of chemical routes for the recovery and separation of rare earth elements (REE) is seen as a priority and strategic action by several countries that demand these elements. Among the possible alternative routes, the biosorption process has been evaluated in our laboratory. In this context, the present work assesses and fits the solution equilibrium data to the Langmuir, Freundlich and DKR isotherm models, based on the biosorption of lanthanum and samarium by Bacillus subtilis immobilized on calcium alginate gel. It was observed that the adsorption preference of the immobilized biomass followed the order Sm(III) > La(III). It can be concluded that, among the studied isotherm models, the Langmuir model gave a better mathematical fit than the Freundlich and DKR models.
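
As an illustration of the fitting step described above, the sketch below fits the Langmuir isotherm to made-up equilibrium data with SciPy; the data, starting values and units are hypothetical and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    # Langmuir isotherm: adsorbed amount q versus equilibrium concentration C
    return q_max * K * C / (1.0 + K * C)

# Hypothetical equilibrium data (C in mg/L, q in mg/g)
C = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
q = np.array([8.1, 13.9, 22.5, 28.7, 32.4])

(q_max, K), _ = curve_fit(langmuir, C, q, p0=[30.0, 0.05])
print(f"q_max = {q_max:.2f} mg/g, K = {K:.4f} L/mg")
```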

Keywords: Rare earth elements, biosorption, Bacillus subtilis, adsorption isotherm models.

2467 Assessment of Modern RANS Models for the C3X Vane Film Cooling Prediction

Authors: Mikhail Gritskevich, Sebastian Hohenstein

Abstract:

The paper presents the results of a detailed assessment of several modern Reynolds Averaged Navier-Stokes (RANS) turbulence models for the prediction of C3X vane film cooling at various injection regimes. Three models are considered, namely the Shear Stress Transport (SST) model, a modification of the SST model accounting for streamline curvature (SST-CC), and the Explicit Algebraic Reynolds Stress Model (EARSM). It is shown that all the considered models have difficulty predicting the adiabatic effectiveness in the vicinity of the cooling holes; however, accounting for the Reynolds stress anisotropy within the EARSM model noticeably increases the solution accuracy. Further downstream, on the other hand, all the models provide reasonable agreement with the experimental data for the adiabatic effectiveness, and among the considered models the most accurate results are obtained with EARSM.
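
For reference, the adiabatic film-cooling effectiveness compared against the experiments is conventionally defined as below, where T_infinity is the mainstream temperature, T_aw the adiabatic wall temperature and T_c the coolant temperature; the abstract does not state the authors' exact convention.

```latex
\eta_{aw} = \frac{T_\infty - T_{aw}}{T_\infty - T_c}
```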

Keywords: Discrete holes film cooling, Reynolds Averaged Navier-Stokes, Reynolds stress tensor anisotropy, turbulent heat transfer.

2466 Economic Forecasting Model in Practice Using Regression Analysis: The Relationship of Price, Domestic Output, Gross National Product, and Trend Variable of Gas or Oil Production

Authors: Ashiquer Rahman, Ummey Salma, Afrin Jannat

Abstract:

Recently, oil has become more influential in almost every economic sector as a key material. As can be seen from the news, when the oil price changes or the Organization of the Petroleum Exporting Countries (OPEC) announces a new strategy, the effect spreads to every part of the economy, directly and indirectly. This is why observers constantly track the oil price and try to forecast its changes. The most important factor affecting the price is supply, which is determined by the number of wildcats drilled. Therefore, a study of the relation between the number of wellheads and other economic variables may give us some understanding of the mechanism that determines the amount of oil supplied. In this paper, we consider the relationship between the number of wellheads and three key factors: the wellhead price, domestic output, and Gross National Product (GNP) in constant dollars. We also add a trend variable to the models because the consumption of oil varies over time. Moreover, this paper uses econometric methods to estimate the parameters of the model, applies tests to verify the results we obtain, and then draws conclusions about the model.
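
A minimal sketch of the kind of estimation described above, assuming a model of the form wellheads = f(price, output, GNP, trend): ordinary least squares on synthetic data with statsmodels. All variable names and numbers are illustrative, not the paper's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40                                    # hypothetical annual observations
price = rng.uniform(2, 10, n)
output = rng.uniform(50, 100, n)
gnp = np.linspace(500, 900, n) + rng.normal(0, 10, n)
trend = np.arange(1, n + 1)               # trend variable, as in the paper
wellheads = (3 + 0.8 * price + 0.05 * output + 0.01 * gnp
             - 0.1 * trend + rng.normal(0, 1, n))

X = sm.add_constant(np.column_stack([price, output, gnp, trend]))
model = sm.OLS(wellheads, X).fit()
print(model.summary())                    # t-tests and F-test verify the estimates
```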

Keywords: Price, domestic output, GNP, trend variable, wildcat activity.

2465 Using Reservoir Models for Monitoring Geothermal Surface Features

Authors: John P. O’Sullivan, Thomas M. P. Ratouis, Michael J. O’Sullivan

Abstract:

As the use of geothermal energy grows internationally, more effort is required to monitor and protect areas with rare and important geothermal surface features. A number of approaches are presented for developing and calibrating numerical geothermal reservoir models that are capable of accurately representing geothermal surface features. The approaches are discussed in the context of case studies of the Rotorua geothermal system and the Orakei Korako geothermal system, both of which contain important surface features. The results show that the models are able to match the available field data accurately and hence can be used as valuable tools for predicting the future response of the systems to changes in use.

Keywords: Geothermal reservoir models, surface features, monitoring, TOUGH2.

2464 Dam Operation Management Criteria during Floods: Case Study of Dez Dam in Southwest Iran

Authors: Ali Heidari

Abstract:

This paper presents principles for improving flood mitigation operation in multipurpose dams and maximizing reservoir performance during flood occurrence, with a focus on the real-time operation of gated spillways. The criteria of operation include the safety of the dam during flood management, minimizing the downstream flood risk by decreasing the flood hazard, and fulfilling water supply and the other purposes of the dam operation over mid- and long-term horizons. The parameters deemed important include flood inflow, outlet capacity restrictions, downstream flood inundation damages, economic revenue of dam operation, and environmental and sedimentation restrictions. A simulation model was used to determine the real-time release of the Dez Dam, located on the Dez River in southwest Iran, considering the gate regulation curves for the gated spillway. The results of the simulation model show that it is possible to improve the current procedures used in the real-time operation of the dam, particularly using gate regulation curves and early flood forecasting system results. The Dez Dam operation data show that in one of the best flood control records, 17% of the total active volume and flood control pool of the reservoir was not used to decrease the downstream flood hazard, despite the availability of a flood forecasting system.
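
The toy mass-balance routine below sketches the type of real-time release simulation discussed above. The release rule, storage numbers and units are entirely hypothetical and far simpler than an actual gate regulation curve; surcharge above the assumed flood-control level is simply passed through as extra release.

```python
def route_flood(inflows, storage0, storage_max, release_cap):
    # Release is limited by outlet capacity; storage above the (assumed)
    # flood-control level at 90% of storage_max adds to the release.
    storage, releases = storage0, []
    for q_in in inflows:
        release = min(release_cap, q_in + max(0.0, storage - 0.9 * storage_max))
        storage = min(storage_max, max(0.0, storage + q_in - release))
        releases.append(release)
    return releases

# Hypothetical flood hydrograph (flow units are arbitrary)
print(route_flood([100, 400, 900, 600, 200], storage0=500.0,
                  storage_max=1000.0, release_cap=450.0))
```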

Keywords: Dam operation, flood control criteria, Dez Dam, Iran.

2463 Allometric Models for Biomass Estimation in Savanna Woodland Area, Niger State, Nigeria

Authors: Abdullahi Jibrin, Aishetu Abdulkadir

Abstract:

The development of allometric models is crucial to accurate forest biomass/carbon stock assessment. The aim of this study was to develop a set of biomass prediction models that enable the determination of total tree aboveground biomass for a savanna woodland area in Niger State, Nigeria. Based on data collected through biometric measurements of 1816 trees and destructive sampling of 36 trees, five species-specific models and one site-specific model were developed. The sample size was distributed equally among the five most dominant species in the study site (Vitellaria paradoxa, Irvingia gabonensis, Parkia biglobosa, Anogeissus leiocarpus, Pterocarpus erinaceus). Firstly, equations were developed for the five individual species. Secondly, data from the five species were pooled to develop a mixed-species allometric equation. Overall, there was a strong positive relationship between total tree biomass and stem diameter. Coefficients of determination (R2) ranging from 0.93 to 0.99 (P < 0.001) were realised for the models, with considerably low standard errors of the estimate (SEE), confirming that total tree aboveground biomass has a significant relationship with dbh. F-test values for the biomass prediction models were also significant at P < 0.001, indicating that the models are valid. This study recommends that, for improved biomass estimates in the study site, the site-specific biomass models should preferably be used instead of generic models.
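
A minimal sketch of the usual allometric fitting step, ln(B) = a + b ln(DBH), on made-up tree data; the paper's actual model forms and coefficients are not reproduced here.

```python
import numpy as np

dbh = np.array([8.0, 12.0, 18.0, 25.0, 33.0, 41.0])            # cm, illustrative
biomass = np.array([14.0, 38.0, 105.0, 240.0, 480.0, 820.0])   # kg, illustrative

# Fit ln(B) = a + b ln(DBH) by ordinary least squares
b, a = np.polyfit(np.log(dbh), np.log(biomass), 1)
pred = np.exp(a) * dbh ** b
r2 = 1 - np.sum((biomass - pred) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
print(f"ln(B) = {a:.2f} + {b:.2f} ln(DBH), R^2 = {r2:.3f}")
```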

Keywords: Allometry, biomass, carbon stock, model, regression equation, woodland, inventory.

2462 Development of Real-Time Energy Models for a Photovoltaic Water Pumping System

Authors: Ammar Mahjoubi, Ridha Fethi Mechlouch, Belgacem Mahdhaoui, Ammar Ben Brahim

Abstract:

The purpose of this paper is to develop and validate a model to accurately predict the cell temperature of a PV module that adapts to various mounting configurations, mounting locations, and climates, while requiring only readily available data from the module manufacturer. Results from this model are also compared to results from published cell temperature models. The models were used to predict the real-time performance of a PV water pumping system in the desert of Medenine, southern Tunisia, using 60-min intervals of measured performance data over one complete year. Statistical analysis of the predicted results and measured data highlights possible sources of error and the limitations and/or adequacy of existing models in describing the temperature and efficiency of PV cells and, consequently, the accuracy of PV water pumping system performance prediction models.
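
As one example of the class of published cell-temperature models the paper compares against, the classic NOCT formula is sketched below; whether the authors use this particular form is not stated in the abstract.

```python
def cell_temperature_noct(t_ambient_c, irradiance_w_m2, noct_c=45.0):
    # T_cell = T_amb + (NOCT - 20) / 800 * G, where NOCT is the nominal
    # operating cell temperature measured at 800 W/m^2 and 20 C ambient.
    return t_ambient_c + (noct_c - 20.0) / 800.0 * irradiance_w_m2

# Hypothetical desert-noon conditions
print(cell_temperature_noct(t_ambient_c=35.0, irradiance_w_m2=950.0))
```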

Keywords: Temperature of a photovoltaic module, Predicted models, PV water pumping systems efficiency, Simulation, Desert of southern Tunisia.

2461 Comparison of Material Constitutive Models Used in FEA of Low Volume Roads

Authors: Lenka Ševelová, Aleš Florian

Abstract:

Probabilistic models used in reliability analyses are an appropriate and progressive tool for analyzing the behavior of low volume roads. A necessary part of the probabilistic model is a deterministic model of structural behavior. The FE model of low volume roads is created in the ANSYS software. It is able to determine the state of stress and deformation at any point of the structure and thus generate the data required for the reliability analysis. The paper compares two material constitutive models used for modeling the unbound, non-homogeneous materials used in low volume roads. The first is a linear elastic model following Hooke's law (H model); the second is the nonlinear elastic-plastic Drucker-Prager model (D-P model).
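
A note for orientation: the Drucker-Prager yield surface referred to above is commonly written as below, where I_1 is the first stress invariant, J_2 the second deviatoric stress invariant, and alpha and k are material constants (sign conventions for the I_1 term vary between texts).

```latex
f(I_1, J_2) = \sqrt{J_2} + \alpha\, I_1 - k = 0
```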

Keywords: FEA, FEM, geotechnical materials, low volume roads, material constitutive models, pavement.

2460 Linking Business Process Models and System Models Based on Business Process Modelling

Authors: Faisal A. Aburub

Abstract:

Organizations today need to invest in software in order to run their businesses, and to meet the organizations' objectives the software should be in line with the business processes. This research presents an approach for linking process models and system models. In particular, the new approach aims to synthesize a sequence diagram from a role activity diagram (RAD) model. The approach comprises four steps: creating the business process model using RAD, identifying computerized activities, identifying entities in the sequence diagram, and identifying messages in the sequence diagram. The new approach has been validated using the student registration process at the University of Petra as a case study. Further research is required to validate the new approach in different domains.

Keywords: Business process modelling, system models, role activity diagrams, sequence diagrams.

2459 Covering-Based Rough Sets Based on the Refinement of Covering-Element

Authors: Jianguo Tang, Kun She, William Zhu

Abstract:

Covering-based rough sets are an extension of rough sets based on a covering instead of a partition of the universe, and are therefore more powerful than rough sets in describing some practical problems. However, by extending rough sets, covering-based rough sets can increase the roughness of each model in recognizing objects. How to obtain better approximations from the models of covering-based rough sets is an important issue. In this paper, two concepts, determinate elements and indeterminate elements in a universe, are proposed and given precise definitions. This research makes a reasonable refinement of the covering-element from a new viewpoint, and the refinement may generate better approximations in covering-based rough sets models. To test the theory, it is applied to eight major covering-based rough sets models adapted from the literature. The result is that, in all these models, the lower approximation increases effectively. Correspondingly, in all models the upper approximation decreases, with the exception of two models in some special situations. Therefore, the roughness in recognizing objects is reduced. This research provides a new approach to the study and application of covering-based rough sets.
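
To make the notion of approximations concrete, the sketch below computes one common pair of covering-based lower and upper approximations; the eight models treated in the paper use differing definitions, so this is only an illustrative baseline, not the paper's formulation.

```python
def lower_approx(cover, X):
    # Union of covering elements fully contained in X (elements surely in X)
    return set().union(*(K for K in cover if K <= X))

def upper_approx(cover, X):
    # Union of covering elements intersecting X (elements possibly in X)
    return set().union(*(K for K in cover if K & X))

# A covering (not a partition): elements may belong to several blocks
cover = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
X = {1, 2, 3}
print(lower_approx(cover, X))  # {1, 2, 3}
print(upper_approx(cover, X))  # {1, 2, 3, 4}
```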

Keywords: Determinate element, indeterminate element, refinement of covering-element, refinement of covering, covering-based rough sets.

2458 A Comparison of Different Soft Computing Models for Credit Scoring

Authors: Nnamdi I. Nwulu, Shola G. Oroja

Abstract:

It has become crucial over the years for nations to improve their credit scoring methods and techniques in light of the increasing volatility of the global economy. Statistical methods and tools have been the favoured means for this; however, artificial intelligence or soft computing based techniques are becoming increasingly preferred due to their proficiency, precision and relative simplicity. This work presents a comparison of Support Vector Machines and Artificial Neural Networks, two popular soft computing models, when applied to credit scoring. Among the different criteria that can be used for comparison, accuracy, computational complexity and processing time are selected to evaluate both models. Furthermore, the German credit scoring dataset, a real-world dataset, is used to train and test both developed models. Experimental results obtained from our study suggest that although both soft computing models can be used with a high degree of accuracy, Artificial Neural Networks deliver better results than Support Vector Machines.
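
A present-day sketch of such a comparison using scikit-learn (which the paper itself does not use), with synthetic data standing in for the German credit dataset; architectures and settings are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Synthetic binary credit-style data (good/bad risk)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(16,),
                                        max_iter=1000, random_state=0))]:
    model = make_pipeline(StandardScaler(), clf).fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```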

Keywords: Artificial Neural Networks, Credit Scoring, Soft Computing Models, Support Vector Machines.

2457 In silico Simulations for DNA Shuffling Experiments

Authors: Luciana Montera

Abstract:

DNA shuffling is a powerful method used to evolve molecules with specific functions in vitro, with applications in areas such as pharmaceutical, medical and agricultural research. The success of such experiments depends on a variety of parameters and conditions that sometimes cannot be properly pre-established. Here, two computational models predicting DNA shuffling results are presented, and their use and results are evaluated against an empirical experiment. The in silico and in vitro results agree, indicating the value of these two models and motivating the study and development of new models.

Keywords: Computer simulation, DNA shuffling, in silico and in vitro comparison.

2456 Comparison of Machine Learning Techniques for Single Imputation on Audiograms

Authors: Sarah Beaver, Renee Bryce

Abstract:

Audiograms detect hearing impairment, but missing values pose problems. This work explores imputation in an attempt to improve accuracy. It implements Linear Regression, Lasso, Linear Support Vector Regression, Bayesian Ridge, K Nearest Neighbors (KNN), and Random Forest machine learning techniques to impute audiogram frequencies ranging from 125 Hz to 8000 Hz. The data contain patients who had or were candidates for cochlear implants. Accuracy is compared across two different nested cross-validation k values. Over 4000 audiograms from 800 unique patients were used. Additionally, training that combines left and right ear audiograms is compared with training on single-side audiograms. The Root Mean Square Error (RMSE) values for the best Random Forest models range from 4.74 to 6.37, with R2 values from 0.91 to 0.96. The RMSE values for the best KNN models range from 5.00 to 7.72, with R2 values from 0.89 to 0.95. Overall, the best imputation models achieved R2 between 0.89 and 0.96 and RMSE values below 8 dB. We also show that classification predictive models performed better with our imputation models than with constant imputation, by a two percent increase in accuracy.
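
A minimal sketch of single imputation of one audiogram frequency from the others with KNN and Random Forest regressors; the synthetic data and column layout below are stand-ins for the clinical data used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns: thresholds (dB HL) at 125, 250, 500, 1000, 2000, 4000, 8000 Hz (synthetic)
base = rng.uniform(10, 80, (500, 1))
audiograms = base + rng.normal(0, 6, (500, 7))
X = np.delete(audiograms, 3, axis=1)   # predict the 1000 Hz threshold from the rest
y = audiograms[:, 3]

for name, model in [("KNN", KNeighborsRegressor(n_neighbors=5)),
                    ("RF", RandomForestRegressor(n_estimators=100, random_state=0))]:
    rmse = -cross_val_score(model, X, y, cv=4,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: RMSE = {rmse:.2f} dB")
```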

Keywords: Machine learning, audiograms, data imputation, single imputation.

2455 Some Solid Transportation Models with Crisp and Rough Costs

Authors: Pradip Kundu, Samarjit Kar, Manoranjan Maiti

Abstract:

In this paper, some practical solid transportation models are formulated considering the per-trip capacity of each type of conveyance, with crisp and rough unit transportation costs. This is applicable to systems in which full vehicles, e.g. trucks or rail coaches, are booked for the transportation of products, so that the transportation cost is determined on full conveyances. The models with unit transportation costs as rough variables are transformed into deterministic forms using rough chance-constrained programming with the help of the trust measure. Numerical examples are provided to illustrate the proposed models in a crisp environment as well as with unit transportation costs as rough variables.

Keywords: Solid transportation problem, Rough set, Rough variable, Trust measure.

2454 Soil-Structure Interaction Models for the Reinforced Foundation System: A State-of-the-Art Review

Authors: Ashwini V. Chavan, Sukhanand S. Bhosale

Abstract:

Challenges of weak soil subgrade are often resolved either by stabilizing or reinforcing it. However, it is also common practice to reinforce the granular fill to improve its load-settlement behavior over weak soil strata. The inclusion of reinforcement in the engineered granular fill provided a new impetus for the development of enhanced Soil-Structure Interaction (SSI) models, also known as mechanical foundation models or lumped parameter models. Several researchers have been working in this direction to understand the mechanism of granular fill-reinforcement interaction and the response of weak soil under the application of load. These models have been developed by extending available SSI models such as the Winkler, Pasternak, Hetenyi and Kerr models, and are helpful for visualizing the load-settlement behavior of a physical system through 1-D and 2-D analysis, considering a beam or a plate resting on the foundation, respectively. Based on the literature survey, these models are categorized as the 'Reinforced Pasternak Model', 'Double Beam Model', 'Reinforced Timoshenko Beam Model', and 'Reinforced Kerr Model'. The present work reviews the past 30+ years of research in the field of SSI models for reinforced foundation systems, presenting the conceptual development of these models systematically and discussing their limitations. A flow chart showing the procedure for computation of deformation and mobilized tension is also incorporated in the paper. Special effort is taken to tabulate the parameters and their significance in the load-settlement analysis, which may be helpful in future studies for the comparison and enhancement of the results and findings of physical models.
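
For orientation, the unreinforced Pasternak model referred to above extends the Winkler relation q = k w with a shear-layer term; one common form is shown below (notation varies across the reviewed papers), where k is the subgrade modulus and G_p the shear-layer constant.

```latex
q(x) = k\, w(x) - G_p\, \frac{d^{2} w(x)}{d x^{2}}
```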

Keywords: Geosynthetics, mathematical modeling, reinforced foundation, soil-structure interaction, ground improvement, soft soil.

2453 Low-Cost and Highly Accurate Motion Models for Three-Dimensional Local Landmark-based Autonomous Navigation

Authors: Gheorghe Galben, Daniel N. Aloi

Abstract:

Recently, the Spherical Motion Models (SMMs) were introduced [1]. These new models have been developed for 3D local landmark-based Autonomous Navigation (AN). This paper presents new arguments and experimental results to support the characteristics of the SMMs. The accuracy and robustness in performing a specific task are the main concerns of the new investigations. To analyze the performance of the SMMs, the most powerful tools of estimation theory, the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), which give the best estimates in noisy environments, have been employed. Monte Carlo validation implementations have also been used to test the stability and robustness of the models.
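
A generic EKF predict/update cycle is sketched below in NumPy to show the estimator family employed; the motion model f, measurement model h and their Jacobians F, H are placeholders, not the SMM equations from the paper.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    # Predict with the motion model, then correct with the measurement z
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    y = z - h(x_pred)                        # innovation
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R             # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)    # Kalman gain
    return x_pred + K @ y, (np.eye(len(x)) - K @ H_k) @ P_pred

# Tiny linear demo (the EKF reduces to the ordinary Kalman filter here)
I = np.eye(2)
x, P = ekf_step(np.zeros(2), I, np.ones(2), np.array([1.1, 0.9]),
                f=lambda x, u: x + u, F=lambda x, u: I,
                h=lambda x: x, H=lambda x: I, Q=0.01 * I, R=0.1 * I)
print(x, P)
```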

Keywords: Autonomous navigation, extended Kalman filter, unscented Kalman filter, localization algorithms.

2452 Statistical (Radio) Path Loss Modelling: For RF Propagations within localized Indoor and Outdoor Environments of the Academic Building of INTI University College (Laureate International Universities)

Authors: Emmanuel O.O. Ojakominor, Tian F. Lai

Abstract:

A handful of propagation textbooks that discuss radio frequency (RF) propagation models merely list the models and perhaps discuss them rather briefly; this may well be frustrating for the potential first-time modeller who has no idea how these models were derived. This paper fundamentally provides an overture to modelling the radio channel. Explicitly, for the modelling practice discussed here, signal strength field measurements had to be conducted beforehand (this was done at 469 MHz); to be precise, this paper primarily concerns empirically/statistically modelling the radio channel, and thus provides results obtained from empirically modelling the environments in question. On the whole, this paper proposes three propagation models, corresponding to three experimental environments. The models have been derived by making the most use of statistical measures. Generally speaking, the first two models were derived via simple linear regression analysis, whereas the third was derived using multiple regression analysis (with five different predictors). Additionally, as implied by the title of this paper, both indoor and outdoor environments have been studied; however, two of the environments are somewhat neither entirely indoor nor entirely outdoor, while the other environment is completely indoor.
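
A minimal sketch of the regression step behind simple empirical path-loss models, fitting the log-distance form PL(d) = PL(d0) + 10 n log10(d/d0) to made-up measurements; the paper's 469 MHz survey data are not reproduced here.

```python
import numpy as np

d = np.array([1, 2, 5, 10, 20, 40, 80], dtype=float)        # metres, illustrative
pl = np.array([40.0, 46.5, 54.2, 60.8, 66.9, 73.5, 80.1])   # dB, illustrative

# Linear regression in 10*log10(d/d0); the slope is the path-loss exponent n
x = 10 * np.log10(d / d[0])
n, pl_d0 = np.polyfit(x, pl, 1)
sigma = np.std(pl - (pl_d0 + n * x))                         # shadowing spread
print(f"PL(d0) = {pl_d0:.1f} dB, n = {n:.2f}, sigma = {sigma:.2f} dB")
```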

Keywords: RF propagation, radio channel modelling, statistical methods.

2451 Mixtures of Monotone Networks for Prediction

Authors: Marina Velikova, Hennie Daniels, Ad Feelders

Abstract:

In many data mining applications, it is known a priori that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision-maker. In this paper we consider partially monotone prediction problems, where the target variable depends monotonically on some of the input variables but not on all. We propose a novel method to construct prediction models in which monotone dependences with respect to some of the input variables are preserved by construction. Our method belongs to the class of mixture models. The basic idea is to convolute monotone neural networks with weight (kernel) functions to make predictions. Using simulation and real case studies, we demonstrate the application of our method. To obtain a sound assessment of the performance of our approach, we use standard neural networks with weight decay and partially monotone linear models as benchmark methods for comparison. The results show that our approach outperforms partially monotone linear models in terms of accuracy. Furthermore, the incorporation of partial monotonicity constraints not only leads to models that are in accordance with the decision maker's expertise, but also considerably reduces the model variance in comparison to standard neural networks with weight decay.

Keywords: mixture models, monotone neural networks, partially monotone models, partially monotone problems.

2450 Analyzing Data on Breastfeeding Using Dispersed Statistical Models

Authors: Naushad Mamode Khan, Cheika Jahangeer, Maleika Heenaye-Mamode Khan

Abstract:

Exclusive breastfeeding is the feeding of a baby on no milk other than breast milk. Exclusive breastfeeding during the first six months of life is very important, as it supports optimal growth and development during infancy and reduces the risk of debilitating diseases and problems. Moreover, it helps to reduce the incidence and/or severity of diarrhea, lower respiratory infection and urinary tract infection. In this paper, we survey the factors that influence exclusive breastfeeding and use two dispersed statistical models to analyze the data: the generalized Poisson regression model and the Com-Poisson regression model.
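
For concreteness, the probability mass function of Consul's generalized Poisson distribution, one of the two dispersed models named above, can be evaluated as follows; the parameterization shown is one common convention and is not taken from the paper.

```python
from math import exp, factorial

def gen_poisson_pmf(y, theta, lam):
    # Consul's generalized Poisson:
    # P(Y=y) = theta * (theta + lam*y)**(y-1) * exp(-theta - lam*y) / y!
    # lam > 0 lets the variance exceed the mean (overdispersion); lam = 0 is Poisson.
    return theta * (theta + lam * y) ** (y - 1) * exp(-theta - lam * y) / factorial(y)

# Sanity check: the pmf should sum to ~1 for 0 <= lam < 1
print(sum(gen_poisson_pmf(y, theta=2.0, lam=0.3) for y in range(50)))
```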

Keywords: Exclusive breastfeeding, regression model, generalized Poisson, Com-Poisson.

2449 Zero Truncated Strict Arcsine Model

Authors: Y. N. Phang, E. F. Loh

Abstract:

Zero truncated models are usually used in modeling count data without zeros; they are the opposite of zero inflated models. Zero truncated Poisson and zero truncated negative binomial models have been discussed and used by researchers in analyzing the abundance of rare species and length of hospital stay. Zero truncated models are also used as the base in developing hurdle models. In this study, we developed a new model, the zero truncated strict arcsine model, which can be used as an alternative in modeling count data without zeros and with extra variation. Two simulated data sets and one real-life data set are fitted to the developed model. The results show that the model provides a good fit to the data. The maximum likelihood estimation method is used to estimate the parameters.
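
The zero-truncation construction itself is simple: P(Y = y | Y > 0) = P(Y = y) / (1 - P(Y = 0)). The sketch below applies it to the Poisson case for illustration; the paper's strict arcsine distribution is not reproduced here.

```python
from math import exp, factorial

def zt_poisson_pmf(y, lam):
    # Zero-truncated Poisson: renormalize the Poisson pmf over y >= 1
    if y < 1:
        return 0.0
    return (lam ** y * exp(-lam) / factorial(y)) / (1.0 - exp(-lam))

# Sanity check: the truncated pmf should sum to ~1 over y >= 1
print(sum(zt_poisson_pmf(y, 2.5) for y in range(1, 60)))
```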

Keywords: Hurdle models, maximum likelihood estimation method, positive count data.

2448 AnQL: A Query Language for Annotation Documents

Authors: Neerja Bhatnagar, Ben A. Juliano, Renee S. Renner

Abstract:

This paper presents data annotation models at five levels of granularity (database, relation, column, tuple, and cell) of relational data, to address the unsuitability of most relational databases for expressing annotations. These models do not require any structural or schematic changes to the underlying database. They are also flexible, extensible, customizable, database-neutral, and platform-independent. The paper also presents an SQL-like query language, named Annotation Query Language (AnQL), to query annotation documents. AnQL is simple to understand and exploits the already-existing wide knowledge and skill set of SQL.

Keywords: Annotation query language, data annotations, data annotation models, semantic data annotations.

2447 ANN Models for Microstrip Line Synthesis and Analysis

Authors: K. Sri Rama Krishna, J. Lakshmi Narayana, L. Pratap Reddy

Abstract:

Microstrip lines, widely used for good reason, are broadband in frequency and provide circuits that are compact and light in weight. They are generally economical to produce, since they are readily adaptable to hybrid and monolithic integrated circuit (IC) fabrication technologies at RF and microwave frequencies. Although the existing EM simulation models used for the synthesis and analysis of microstrip lines are reasonably accurate, they are computationally intensive and time consuming. Neural networks have recently gained attention as fast and flexible vehicles for microwave modeling, simulation and optimization. After learning and abstracting from microwave data through a process called training, neural network models are used during microwave design to provide instant answers to the task learned. This paper presents simple and accurate ANN models for the synthesis and analysis of microstrip lines, to compute more accurately the characteristic parameters and the physical dimensions, respectively, for the required design specifications.

Keywords: Neural models, algorithms, microstrip lines, analysis, synthesis.

2446 An OWL Ontology for CommonKADS Template Knowledge Models

Authors: B. A. Gobin, R. K. Subramanian

Abstract:

This paper gives an overview of how an OWL ontology has been created to represent the template knowledge models, defined in CML, that are provided by CommonKADS. CommonKADS is a mature knowledge engineering methodology which proposes the use of template knowledge models for knowledge modelling. The aim of developing this ontology is to present the template knowledge models in a knowledge representation language that can be easily understood and shared in the knowledge engineering community. Hence OWL is used, as it has become a standard for ontologies and already has user-friendly tools for viewing and editing.

Keywords: Ontology, OWL, template knowledge models, CommonKADS.

2445 Comparison of Three Turbulence Models in Wear Prediction of Multi-Size Particulate Flow through Rotating Channel

Authors: Pankaj K. Gupta, Krishnan V. Pagalthivarthi

Abstract:

The present work compares the performance of three turbulence modeling approaches (based on the two-equation k-ε model) in predicting erosive wear in multi-size dense slurry flow through a rotating channel. All three turbulence models include a rotation modification to the production term in the turbulent kinetic energy equation. The two-phase flow field, obtained numerically using the Galerkin finite element methodology, relates the local flow velocity and concentration to the wear rate via a suitable wear model. The wear models for both the sliding wear and impact wear mechanisms account for the particle size dependence. Predicted wear rates using the three turbulence models are compared for a large number of cases spanning such operating parameters as rotation rate, solids concentration, flow rate, and particle size distribution. The root-mean-square error between the FE-generated data and the correlation between maximum wear rate and the operating parameters is found to be less than 2.5% for all three models.

Keywords: Rotating channel, maximum wear rate, multi-size particulate flow, k-ε turbulence models.

2444 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data

Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer

Abstract:

This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) model are derived iteratively from some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is proposed in which BINARMA(1,1) count data are generated using a multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through a simulation experiment, and the mean estimates of the model parameters obtained are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, based on some covariates: policemen, daily patrol, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
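
To illustrate the binomial thinning operator on which the model is built (alpha ∘ X is a sum of X independent Bernoulli(alpha) draws), the sketch below simulates a univariate INAR(1)-type recursion with Poisson innovations; the bivariate BINARMA(1,1) construction and CML estimation are beyond this illustration.

```python
import numpy as np

def simulate_inar1(n, alpha, lam, rng):
    x = np.zeros(n, dtype=int)
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)   # binomial thinning of past count
        x[t] = survivors + rng.poisson(lam)         # plus Poisson innovation
    return x

rng = np.random.default_rng(1)
series = simulate_inar1(200, alpha=0.4, lam=2.0, rng=rng)
print(series.mean())   # near lam / (1 - alpha) = 3.33 in steady state
```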

Keywords: Non-stationary, BINARMA(1,1) model, Poisson innovations, CML.

2443 Modeling of Normal and Atherosclerotic Blood Vessels using Finite Element Methods and Artificial Neural Networks

Authors: K. Kamalanand, S. Srinivasan

Abstract:

Analysis of blood vessel mechanics in normal and diseased conditions is essential for disease research, medical device design and treatment planning. In this work, 3D finite element models of a normal vessel and an atherosclerotic vessel with 50% plaque deposition were developed. The developed models were meshed using a finite number of tetrahedral elements and simulated using actual blood pressure signals. Based on the transient analysis performed on the developed models, parameters such as total displacement, strain energy density and entropy per unit volume were obtained. Further, the obtained parameters were used to develop artificial neural network models for analyzing normal and atherosclerotic blood vessels. In this paper, the objectives of the study, the methodology and significant observations are presented.

Keywords: Blood vessel, atherosclerosis, finite element model, artificial neural networks.

2442 Interoperability in Component Based Software Development

Authors: M. Madiajagan, B. Vijayakumar

Abstract:

Interoperability is the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application, and data compatibility layers. There has been considerable work in industry on the development of component interoperability models, such as CORBA, (D)COM and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate the reuse of off-the-shelf components. The focus of these models is syntactic interface specification, component packaging, inter-component communications, and bindings to a runtime environment. What these models lack is a consideration of architectural concerns: specifying systems of communicating components, explicitly representing loci of component interaction, and exploiting architectural styles that provide well-understood global design solutions. The development of complex business applications is now focused on the assembly of components available on a local area network or on the net. These components must be localized and identified in terms of available services and communication protocols before any request. The first part of the article introduces the basic concepts of components and middleware, while the following sections describe the different up-to-date models of communication and interaction, and the last section shows how different models can communicate among themselves.

Keywords: Interoperability, component packaging, communication technology, heterogeneous platform, component interface, middleware.

2441 A Study of Panel Logit Model and Adaptive Neuro-Fuzzy Inference System in the Prediction of Financial Distress Periods

Authors: E. Giovanis

Abstract:

The purpose of this paper is to present two different approaches to financial distress pre-warning models appropriate for risk supervisors, investors and policy makers. We examine a sample of the financial institutions and electronic companies of the Taiwan Stock Exchange (TSE) market from 2002 through 2008. We present a binary logistic regression with panel data analysis. With the pooled binary logistic regression we can build a model including more variables in the regression than with random effects, while the in-sample and out-of-sample forecasting performance is higher with random effects estimation than with the pooled regression. On the other hand, we estimate an Adaptive Neuro-Fuzzy Inference System (ANFIS) with Gaussian and generalized bell (Gbell) membership functions, and we find that ANFIS significantly outperforms the logit regressions in both in-sample and out-of-sample periods, indicating that ANFIS is a more appropriate tool for financial risk managers and for economic policy makers in central banks and national statistical services.
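
The two membership functions compared inside ANFIS have standard forms, sketched below in NumPy; the full five-layer ANFIS network and its training are not reproduced here.

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    # Gaussian membership: exp(-(x - c)^2 / (2 sigma^2))
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def gbell_mf(x, a, b, c):
    # Generalized bell membership: 1 / (1 + |(x - c) / a|^(2b))
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

x = np.linspace(-3, 3, 7)
print(gaussian_mf(x, c=0.0, sigma=1.0))
print(gbell_mf(x, a=1.0, b=2.0, c=0.0))
```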

Keywords: ANFIS, binary logistic regression, financial distress, panel data.

2440 Human Pose Estimation using Active Shape Models

Authors: Changhyuk Jang, Keechul Jung

Abstract:

Human pose estimation can be performed using Active Shape Models. Existing techniques that apply Active Shape Models to human-body research, such as human detection, primarily use the silhouette of the human body. Such techniques cannot accurately estimate poses involving the two arms and legs, since the silhouette represents the body shape only in outline. To solve this problem, we applied a stick-figure ("skeleton") model of the human body, which can accommodate various shapes of human pose. To obtain effective estimation results, we applied background subtraction and a modified matching algorithm of the original Active Shape Models in the fitting process. The model was built from images of 600 human bodies and has 17 landmark points which indicate body junctions and key features of human pose. The maximum number of iterations for the fitting process was 30, and the execution time was less than 0.03 sec.
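
At the core of Active Shape Models is a PCA-based point distribution model, x ≈ x_mean + P b. The sketch below builds one from random stand-in data with the paper's dimensions (600 shapes, 17 landmarks); real usage would start from aligned landmark annotations, which are not available here.

```python
import numpy as np

rng = np.random.default_rng(0)
shapes = rng.normal(0.0, 1.0, (600, 34))        # 17 (x, y) landmarks, flattened

x_mean = shapes.mean(axis=0)
# PCA of the centered shapes via SVD; rows of Vt are the modes of variation
U, s, Vt = np.linalg.svd(shapes - x_mean, full_matrices=False)
P = Vt[:10].T                                   # keep the first 10 modes
b = P.T @ (shapes[0] - x_mean)                  # shape parameters for one example
reconstruction = x_mean + P @ b                 # approximate shape from 10 modes
print(np.linalg.norm(shapes[0] - reconstruction))
```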

Keywords: Active shape models, skeleton, pose estimation.

2439 Simulation of Reactive Distillation: Comparison of Equilibrium and Nonequilibrium Stage Models

Authors: Asfaw Gezae Daful

Abstract:

In the present study, two distinctly different approaches are followed for the modeling of a reactive distillation column: the equilibrium stage model and the nonequilibrium stage model. These models are simulated with computer code developed in the present study using MATLAB programming. In the equilibrium stage model, the vapor and liquid phases are assumed to be in equilibrium and allowance is made for finite reaction rates, whereas in the nonequilibrium stage model simultaneous mass transfer and reaction rates are considered. The simulated model results are validated against experimental data reported in the literature. The simulated results of the equilibrium and nonequilibrium models are compared for concentration, temperature and reaction rate profiles in a reactive distillation column for Methyl Tert-Butyl Ether (MTBE) production. Both models show similar trends for the concentration, temperature and reaction rate profiles, but the nonequilibrium model predictions are higher and closer to the experimental values reported in the literature.

Keywords: Reactive distillation, equilibrium model, nonequilibrium model, Methyl Tert-Butyl Ether.
