Search results for: computational models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3477

3177 Application of Adaptive Network-Based Fuzzy Inference System in Macroeconomic Variables Forecasting

Authors: E. Giovanis

Abstract:

In this paper, we apply an Adaptive Network-Based Fuzzy Inference System (ANFIS) with one input, the dependent variable with one lag, to the forecasting of four macroeconomic variables of the US economy: the Gross Domestic Product, the inflation rate, the six-month Treasury bill interest rate, and the unemployment rate. We compare the forecasting performance of ANFIS with those of the widely used linear autoregressive and nonlinear smooth transition autoregressive (STAR) models. The results are greatly in favour of ANFIS, indicating that it is an effective tool for macroeconomic forecasting, suitable for academic research as well as for application by governmental and other institutions.
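
For context, the linear autoregressive benchmark with one lag that ANFIS is compared against can be sketched with ordinary least squares; the series values below are invented placeholders, and the ANFIS model itself is not reproduced here.

```python
import numpy as np

# Hypothetical macroeconomic series (e.g., an inflation rate); replace with real data.
y = np.array([2.1, 2.3, 2.0, 1.8, 2.2, 2.5, 2.4, 2.6, 2.8, 2.7])

# Benchmark AR(1): y_t = a + b * y_{t-1} + e_t, fitted by ordinary least squares.
X = np.column_stack([np.ones(len(y) - 1), y[:-1]])   # intercept and one lag
a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]

# One-step-ahead forecast from the last observation.
forecast = a + b * y[-1]
print(f"AR(1): a={a:.3f}, b={b:.3f}, next-period forecast={forecast:.3f}")
```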

Keywords: Linear models, Macroeconomics, Neuro-Fuzzy, Non-Linear models

3176 CFD Analysis of Passive Cooling Building by Using Solar Chimney System

Authors: Naci Kalkan, Ihsan Dagtekin

Abstract:

This research presents the design and analysis of solar air-conditioning systems, particularly the solar chimney, which is a passive strategy for natural ventilation. It demonstrates the structure of these systems using Computational Fluid Dynamics (CFD) and compares the results with several examples that have been studied experimentally in previous work. In order to improve the performance of the solar chimney system, highly efficient sub-system components are considered in the design. The general purpose of the research is to understand how efficiently solar chimney systems generate cooling and to improve the efficiency of such systems for integration with existing and future domestic buildings.

Keywords: Solar cooling system, solar chimney, active and passive solar technologies, natural ventilation, cavity depth, CFD models for solar chimney.

3175 PM10 Prediction and Forecasting Using CART: A Case Study for Pleven, Bulgaria

Authors: Snezhana G. Gocheva-Ilieva, Maya P. Stoimenova

Abstract:

Ambient air pollution with fine particulate matter (PM10) is a persistent, systemic problem in many countries around the world. The accumulation of a large number of measurements of both the PM10 concentrations and the accompanying atmospheric factors allows for their statistical modeling to detect dependencies and forecast future pollution. This study applies the classification and regression trees (CART) method for building and analyzing PM10 models. In the empirical study, average daily air data for the city of Pleven, Bulgaria, over a period of 5 years are used. Predictors in the models are seven meteorological variables, time variables, lagged PM10 variables, and some lagged meteorological variables, delayed by 1 or 2 days with respect to the initial time series. The degree of influence of the predictors in the models is determined. The selected best CART models are used to forecast PM10 concentrations two days beyond the last date in the modeling procedure and show very accurate results.
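
A minimal sketch of a CART regressor with lagged predictors, in the spirit of the abstract; the column names and the synthetic data frame are placeholders, not the authors' data.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Placeholder daily data: PM10 concentration and one meteorological variable.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pm10": rng.uniform(20, 80, 365),
    "temp": rng.uniform(-5, 30, 365),
})

# Lagged predictors delayed by 1 and 2 days, as described in the abstract.
df["pm10_lag1"] = df["pm10"].shift(1)
df["pm10_lag2"] = df["pm10"].shift(2)
df["temp_lag1"] = df["temp"].shift(1)
df = df.dropna()

X = df[["temp", "pm10_lag1", "pm10_lag2", "temp_lag1"]]
y = df["pm10"]

# CART model; the depth is a tuning choice, not a value from the paper.
tree = DecisionTreeRegressor(max_depth=5).fit(X, y)
print(dict(zip(X.columns, tree.feature_importances_.round(3))))
```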

Keywords: Cross-validation, decision tree, lagged variables, short-term forecasting.

3174 Environmental Modeling of Storm Water Channels

Authors: L. Grinis

Abstract:

Turbulent flow in complex geometries receives considerable attention due to its importance in many engineering applications, and it has been the subject of interest for many researchers. One of these interests is the design of storm water channels, which requires testing through physical models. The main practical limitation of physical models is the so-called “scale effect”: in many cases only primary physical mechanisms can be correctly represented, while secondary mechanisms are often distorted. These observations form the basis of our study, which centered on problems associated with the design of storm water channels near the Dead Sea in Israel. To help reach a final design decision we used different physical models. Our research showed good agreement between the laboratory tests and the theoretical calculations, and allowed us to study different effects of fluid flow in an open channel. We determined that problems of this nature cannot be solved by theoretical calculation and computer simulation alone. This study demonstrates the use of physical models to help resolve very complicated problems of fluid flow through baffles and similar structures. The study applies these models and observations to different constructions and to multiphase water flows, among them flows that include sand and stone particles, in a significant attempt to bring laboratory testing closer to reality.

Keywords: Baffles, open channel, physical modeling.

3173 Early Warning System of Financial Distress Based On Credit Cycle Index

Authors: Bi-Huei Tsai

Abstract:

Previous studies on financial distress prediction choose the conventional failing and non-failing dichotomy; however, the extent of distress differs substantially among different financial distress events. To address this, “non-distressed”, “slightly distressed” and “reorganization and bankruptcy” are used in our article to approximate the continuum of corporate financial health. This paper explains different financial distress events using a two-stage method. First, this investigation adopts firm-specific financial ratios, corporate governance and market factors to measure the probability of various financial distress events based on multinomial logit models. Specifically, a bootstrapping simulation is performed to examine the difference in estimated misclassification cost (EMC). Second, this work further applies macroeconomic factors to establish a credit cycle index and determines the distressed cut-off indicator of the two-stage models using this index. Two different models, a one-stage and a two-stage prediction model, are developed to forecast financial distress, and the results acquired from the two models are compared with each other and with the collected data. The findings show that the one-stage model has a lower misclassification error rate than the two-stage model and is therefore the more accurate of the two.
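
A hedged sketch of the first-stage multinomial logit over the three distress classes; the feature matrix and class labels below are illustrative only, not the study's firm data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative features: financial ratios, a governance indicator, a market factor.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
# 0 = non-distressed, 1 = slightly distressed, 2 = reorganization/bankruptcy.
y = rng.integers(0, 3, size=300)

# With the lbfgs solver and three classes, scikit-learn fits a multinomial logit.
model = LogisticRegression(solver="lbfgs", max_iter=1000)
model.fit(X, y)

# Predicted probabilities of each distress class for a new firm.
print(model.predict_proba(X[:1]).round(3))
```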

Keywords: Multinomial logit model, corporate governance, company failure, reorganization, bankruptcy.

3172 Numerical Treatment of Matrix Differential Models Using Matrix Splines

Authors: Kholod M. Abualnaja

Abstract:

This paper considers the solution of matrix differential models using quadratic, cubic, quartic, and quintic splines, as well as the Taylor and Picard matrix methods. One illustrative example is included.

Keywords: Matrix Splines, Cubic Splines, Quartic Splines.

3171 The Analysis of Different Classes of Weighted Fuzzy Petri Nets and Their Features

Authors: Yurii Bloshko, Oksana Olar

Abstract:

This paper presents the analysis of six different classes of Petri nets: fuzzy Petri nets (FPN), generalized fuzzy Petri nets (GFPN), parameterized fuzzy Petri nets (PFPN), T2GFPN, flexible generalized fuzzy Petri nets (FGFPN), and binary Petri nets (BPN). These classes were simulated in the special software PNeS® to analyze their pros and cons on example models dedicated to the decision-making process of passenger transport logistics. The paper analyzes two approaches: one in which the input values are filled with the experts’ knowledge, and one in which fuzzy expectations, represented by output values, are added as well. These approaches exercise the triples of functions, which are replaced with different combinations of t-norms and s-norms.
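
The t-norm/s-norm combinations mentioned in the abstract can be illustrated with a few standard pairs; this is a generic sketch of how such norms combine truth degrees, not the PNeS implementation, and the numeric values are invented.

```python
# Two common t-norm/s-norm pairs used when evaluating fuzzy rule/transition firings.
def t_min(a, b):            # Goedel (minimum) t-norm
    return min(a, b)

def s_max(a, b):            # maximum s-norm (t-conorm)
    return max(a, b)

def t_product(a, b):        # product t-norm
    return a * b

def s_prob_sum(a, b):       # probabilistic sum s-norm
    return a + b - a * b

# Degrees of truth of two input places and of a second rule's output (illustrative).
p1, p2 = 0.7, 0.4
rule2_output = 0.5

# Firing degree of a transition: inputs combined with a t-norm.
fire_min = t_min(p1, p2)        # 0.4
fire_prod = t_product(p1, p2)   # 0.28

# When two rules support the same output place, their results are merged with an s-norm.
print(s_max(fire_min, rule2_output), s_prob_sum(fire_prod, rule2_output))
```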

Keywords: Fuzzy Petri net, intelligent computational techniques, knowledge representation, triangular norms.

3170 Data Mining Classification Methods Applied in Drug Design

Authors: Mária Stachová, Lukáš Sobíšek

Abstract:

Data mining incorporates a group of statistical methods used to analyze a set of information, or a data set. It operates with models and algorithms, which are powerful tools with great potential. They can help people to understand the patterns in a certain chunk of information, so it is obvious that data mining tools have a wide area of applications. For example, in theoretical chemistry, data mining tools can be used to predict molecule properties or improve computer-assisted drug design. Classification analysis is one of the major data mining methodologies. The aim of the contribution is to create a classification model that is able to deal with a huge data set with high accuracy. For this purpose, logistic regression, Bayesian logistic regression and random forest models were built using the R software; a Bayesian logistic regression model was also created in the Latent GOLD software. These classification methods belong to supervised learning methods. It was necessary to reduce the dimension of the data matrix before constructing the models, and thus factor analysis (FA) was used. The models were applied to predict the biological activity of molecules, potential new drug candidates.
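
A minimal sketch of the described workflow (factor analysis for dimension reduction followed by a classifier), here in Python rather than R; the synthetic descriptor matrix is a placeholder for the real molecular data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder data: 500 molecules, 200 descriptors, binary biological activity.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 200))
y = rng.integers(0, 2, size=500)

# Factor analysis reduces the descriptor matrix before the classifier is built.
clf = make_pipeline(FactorAnalysis(n_components=20),
                    RandomForestClassifier(n_estimators=200, random_state=0))

print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```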

Keywords: data mining, classification, drug design, QSAR

3169 Application of the Least Squares Method in the Adjustment of Chlorodifluoromethane (HCFC-142b) Regression Models

Authors: L. J. de Bessa Neto, V. S. Filho, J. V. Ferreira Nunes, G. C. Bergamo

Abstract:

There are many situations in which human activities have significant effects on the environment, and damage to the ozone layer is one of them. The objective of this work is to use the least squares method, considering linear, exponential, logarithmic, power and second-degree polynomial models, to analyze, through the coefficient of determination (R²), which model best fits the behavior of chlorodifluoromethane (HCFC-142b), in parts per trillion, between 1992 and 2018, and to estimate future concentrations 5 and 10 periods ahead, i.e., the concentration of this pollutant in the years 2023 and 2028 under each of the fitted models. A total of 809 observations of the concentration of HCFC-142b at one of the monitoring stations for gases that are precursors of the deterioration of the ozone layer were selected for the period studied, and with these data the statistical software Excel was used to make the scatter plots for each of the fitted models. With the development of the present study, it was observed that the logarithmic fit was the model that best fit the data set, since besides having a significant R² its fitted curve was compatible with the natural trend of the phenomenon.
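
A sketch of the fitting-and-comparison procedure with R² for two of the listed model forms (linear and logarithmic); the time/concentration values are invented for illustration, not the 809 real observations.

```python
import numpy as np

# Illustrative series: time index and HCFC-142b concentration (ppt); not the real data.
t = np.arange(1, 28, dtype=float)   # e.g., years 1992-2018 coded 1..27
c = 8.0 + 4.5 * np.log(t) + np.random.default_rng(3).normal(0, 0.2, t.size)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Linear fit: c = a + b*t
b_lin, a_lin = np.polyfit(t, c, 1)
# Logarithmic fit: c = a + b*ln(t), obtained by least squares on ln(t)
b_log, a_log = np.polyfit(np.log(t), c, 1)

print("R2 linear     :", round(r_squared(c, a_lin + b_lin * t), 4))
print("R2 logarithmic:", round(r_squared(c, a_log + b_log * np.log(t)), 4))
```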

Keywords: Chlorodifluoromethane (HCFC-142b), ozone (O3), least squares method, regression models.

3168 Comparison of Response Surface Designs in a Spherical Region

Authors: Boonorm Chomtee, John J. Borkowski

Abstract:

The objective of the research is to study and compare response surface designs: central composite designs (CCD), Box-Behnken designs (BBD), small composite designs (SCD), hybrid designs, and uniform shell designs (USD) over sets of reduced models when the design is in a spherical region for 3 and 4 design variables. Two optimality criteria (D and G) are considered, for which larger values imply a better design. The comparison of the design optimality criteria of the response surface designs across the full second-order model and sets of reduced models for 3 and 4 factors, based on the two criteria, is presented.
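
One common scalar summary used in such comparisons is a D-criterion computed from the model matrix. The sketch below builds a spherical CCD for 3 factors and evaluates |X'X|^(1/p)/N for the full second-order model; the axial distance, the number of center runs, and this particular scaling are illustrative assumptions, not necessarily the paper's definitions.

```python
import itertools
import numpy as np

# Spherical CCD for 3 factors: 2^3 factorial points, 6 axial points at
# alpha = sqrt(3) (so all non-center points lie on one sphere), 2 center runs.
factorial = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
alpha = np.sqrt(3.0)
axial = np.vstack([alpha * np.eye(3), -alpha * np.eye(3)])
center = np.zeros((2, 3))
D = np.vstack([factorial, axial, center])          # design, N x 3

def model_matrix(d):
    """Full second-order model: intercept, linear, two-factor interaction, quadratic."""
    x1, x2, x3 = d.T
    return np.column_stack([np.ones(len(d)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1**2, x2**2, x3**2])

X = model_matrix(D)
N, p = X.shape
# Scaled D-criterion: larger values indicate a better design.
d_value = np.linalg.det(X.T @ X) ** (1.0 / p) / N
print(f"N={N}, p={p}, D-criterion={d_value:.4f}")
```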

Keywords: design optimality criteria, reduced models, response surface design, spherical design region

3167 Performance Prediction of a 5MW Wind Turbine Blade Considering Aeroelastic Effect

Authors: Dong-Hyun Kim, Yoo-Han Kim

Abstract:

In this study, aeroelastic response and performance analyses have been conducted for a 5 MW-class composite wind turbine blade model. An advanced coupled numerical method based on computational fluid dynamics (CFD) and computational flexible multi-body dynamics (CFMBD) has been developed in order to investigate the aeroelastic responses and performance characteristics of the rotating composite blade. The Reynolds-Averaged Navier-Stokes (RANS) equations with the k-ω SST turbulence model were solved for unsteady flow problems on the rotating turbine blade model. Structural analyses considering the rotating effect have also been conducted using a general nonlinear finite element method. A fully implicit time-marching scheme based on the Newmark direct integration method is applied to solve the coupled aeroelastic governing equations of the 3D turbine blade for fluid-structure interaction (FSI) problems. Detailed dynamic responses and instantaneous velocity contours on the blade surfaces, which account for flow-separation effects, are presented to show the multi-physical phenomena of the large rotating wind-turbine blade model.
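
The Newmark direct integration scheme mentioned in the abstract can be illustrated on a single-degree-of-freedom structural equation; the mass, stiffness, damping and load values below are arbitrary, and the real solver couples many degrees of freedom with CFD loads.

```python
import numpy as np

# Newmark-beta (average acceleration: beta=1/4, gamma=1/2), implicit and
# unconditionally stable, for m*a + c*v + k*u = f(t) on a single DOF.
m, c, k = 1.0, 0.05, 4.0           # arbitrary mass, damping, stiffness
beta, gamma, dt, n = 0.25, 0.5, 0.01, 1000

def f(t):                           # placeholder aerodynamic load history
    return np.sin(1.5 * t)

u, v = 0.0, 0.0
a = (f(0.0) - c * v - k * u) / m    # initial acceleration from equilibrium
keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)   # effective stiffness

for i in range(1, n + 1):
    t = i * dt
    rhs = (f(t)
           + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
           + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                  + dt * (gamma / (2 * beta) - 1) * a))
    u_new = rhs / keff
    a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v = v + dt * ((1 - gamma) * a + gamma * a_new)
    u, a = u_new, a_new

print(f"displacement at t={n*dt:.2f} s: {u:.4f}")
```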

Keywords: Computational Fluid Dynamics (CFD), Computational Multi-Body Dynamics (CMBD), Reynolds-Averaged Navier-Stokes (RANS), Fluid-Structure Interaction (FSI), Finite Element Method (FEM)

3166 Modeling of Surface Roughness for Flow over a Complex Vegetated Surface

Authors: Wichai Pattanapol, Sarah J. Wakes, Michael J. Hilton, Katharine J.M. Dickinson

Abstract:

Turbulence modeling of large-scale flow over a vegetated surface is complex. Such problems involve large computational domains, while the characteristics of flow near the surface are also important. In modeling large-scale flow, surface roughness, including vegetation, is generally taken into account by means of roughness parameters in the modified law of the wall. However, the turbulence structure within the canopy region cannot be captured with this method; an alternative method, which applies source/sink terms to model plant drag, can be used instead. These models have been developed and tested intensively, but only with simple surface geometries. This paper aims to compare the use of roughness parameters and of additional source/sink terms in modeling the effect of plant drag on wind flow over a complex vegetated surface. The RNG k-ε turbulence model with the non-equilibrium wall function was tested with both approaches. In addition, the k-ω turbulence model, which is claimed to be computationally stable, was also investigated with the source/sink terms. All numerical results were compared to the experimental results obtained at the study site, Mason Bay, Stewart Island, New Zealand. In the near-surface region, the results obtained by using the source/sink terms are more accurate than those using roughness parameters. The k-ω turbulence model with source/sink terms is more appropriate, as it is more accurate and more computationally stable than the RNG k-ε turbulence model. In the higher region, there is no significant difference amongst the results obtained from all simulations.
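
For the roughness-parameter approach, the mean wind profile above the canopy follows a modified logarithmic law of the wall; the friction velocity, displacement height and roughness length below are illustrative values, not the Mason Bay measurements.

```python
import numpy as np

kappa = 0.41         # von Karman constant
u_star = 0.5         # friction velocity (m/s), illustrative
d = 1.5              # displacement height (m), illustrative
z0 = 0.2             # roughness length (m), illustrative

def log_law(z):
    """Mean wind speed above a vegetated surface, roughness-parameter approach."""
    return (u_star / kappa) * np.log((z - d) / z0)

for z in (3.0, 5.0, 10.0):
    print(f"z = {z:4.1f} m -> U = {log_law(z):.2f} m/s")
```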

Keywords: CFD, canopy flow, surface roughness, turbulence models.

3165 Analytical Model Based Evaluation of Human Machine Interfaces Using Cognitive Modeling

Authors: Belkacem Chikhaoui, Helene Pigot

Abstract:

Cognitive models allow predicting some aspects of the utility and usability of human machine interfaces (HMI), and simulating the interaction with these interfaces. The prediction is based on a task analysis, which investigates what a user is required to do in terms of actions and cognitive processes to achieve a task. Task analysis facilitates the understanding of the system's functionalities. Cognitive models belong to the analytical approaches, which do not involve users during the development process of the interface. This article presents a study on the evaluation of human machine interaction with a contextual assistant's interface using the ACT-R and GOMS cognitive models. The present work shows how these techniques may be applied in HMI evaluation, design and research by emphasizing firstly the task analysis and secondly the execution time of the task. In order to validate and support our results, an experimental study of user performance was conducted at the DOMUS laboratory during interaction with the contextual assistant's interface. The results of our models show that the GOMS and ACT-R models give good and excellent predictions, respectively, of user performance at the task level as well as at the object level. The simulated results are therefore very close to the results obtained in the experimental study.
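
A GOMS-style time prediction can be sketched with the keystroke-level model, a simplified member of the GOMS family; the operator sequence below is a hypothetical interaction with an assistant interface, and the operator times are commonly cited KLM defaults, not values from the DOMUS experiment or from ACT-R.

```python
# Keystroke-Level Model (KLM) operator times in seconds (commonly cited defaults).
KLM = {
    "K": 0.20,   # press a key or button
    "P": 1.10,   # point with a mouse to a target
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

# Hypothetical task: think, point to a menu, click, think, type a 4-letter entry.
sequence = ["M", "P", "K", "M", "K", "K", "K", "K"]

predicted = sum(KLM[op] for op in sequence)
print(f"Predicted execution time: {predicted:.2f} s")
```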

Keywords: HMI, interface evaluation, analytical evaluation, cognitive modeling, user modeling, user performance.

3164 Aggregation Scheduling Algorithms in Wireless Sensor Networks

Authors: Min Kyung An

Abstract:

In Wireless Sensor Networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination, called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption due to the limited energy, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum latency schedule, that is, a schedule with the minimum number of timeslots, such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For the problem, two interference models, the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise Ratio (SINR), have been adopted with different power models, uniform power and non-uniform power (with or without power control), and different antenna models, the omni-directional and directional antenna models. In this survey article, as the problem has been proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in the various models, using latency as the performance measure.

Keywords: Data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional.

3163 Forecasting Rainfall in Thailand: A Case Study of Nakhon Ratchasima Province

Authors: N. Sopipan

Abstract:

In this paper, we study rainfall time series from weather stations in Nakhon Ratchasima province, Thailand, using various statistical methods that enable us to analyse the behaviour of rainfall in the study areas. Time-series analysis is an important tool in modelling and forecasting rainfall. ARIMA models and Holt-Winters models based on exponential smoothing were built. All the models proved to be adequate. It is therefore possible to provide information that can help decision makers establish strategies for the proper planning of agriculture, drainage systems and other water resource applications in Nakhon Ratchasima province. We obtained the best forecasting performance with the ARIMA(1,0,1)(1,0,1)12 model.
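
A hedged sketch of fitting the seasonal ARIMA(1,0,1)(1,0,1)12 specification named in the abstract with statsmodels; the monthly rainfall series here is synthetic, not the station data.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly rainfall series (mm); replace with the station data.
rng = np.random.default_rng(7)
months = np.arange(240)
rain = 100 + 60 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 20, months.size)

# ARIMA(1,0,1)(1,0,1)12, the specification reported as best in the abstract.
model = SARIMAX(rain, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12))
fit = model.fit(disp=False)

print(fit.forecast(steps=12).round(1))   # rainfall forecast for the next 12 months
```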

Keywords: ARIMA Models, Exponential Smoothing, Holt-Winters model.

3162 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-

Authors: Nieto Bernal Wilson, Carmona Suarez Edgar

Abstract:

Organizations have structured and unstructured information in different formats, sources, and systems. Part of this information comes from ERP systems under OLTP processing that support the information system; however, at the OLAP processing level these organizations present some deficiencies. Part of the problem lies in the lack of interest in extracting knowledge from their data sources, as well as in the absence of the operational capabilities needed to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools, which are of great interest to business intelligence, since they are the repository basis for creating models or patterns (behavior of customers, suppliers, products, social networks and genomics) and facilitate corporate decision making and research. This paper presents a simple, structured methodology inspired by agile development models such as Scrum, XP and AUP, together with object-relational models, spatial data models, and the baseline of data modeling under UML and Big Data, with the aim of delivering an agile methodology for the development of data warehouses that is simple and easy to apply. The methodology naturally takes into account processes for information analysis, visualization and data mining, particularly for the generation of patterns and of models derived from the structured fact objects.

Keywords: Data warehouse, model data, big data, object fact, object relational fact, process developed data warehouse.

3161 MABENA Strategic Management Model for Local Companies

Authors: Kaveh Mohammad Cyrus, Shadi Sanagoo

Abstract:

The MABENA model is a complementary model in comparison with traditional models such as HCMS, CMS, etc. The new factors that affect the preparation of strategic plans, and their sequential order in the MABENA model, form the platform of the road map presented in this paper. A review of the literature shows that factors such as emerging new critical success factors for strategic planning, the improvement of international strategic models, the increasing maturity of companies and emerging new needs lead to the design of a new model that can address these new critical factors and overcome the limitations of previous strategic management models. The preparation of a strategic plan requires more factors than those introduced in traditional models, including determining future critical success factors and competencies, defining key processes, determining the maturity of those processes, and considering all aspects of the external environment. Describing the aforementioned requirements, their outcomes and their order leads to the development and presentation of the MABENA model's road map in this paper. The study presents a road map for the strategic planning of Iranian organizations.

Keywords: Competitive Advantage, Process Maturity, Strategic Planning, Strategic Potential

3160 Developing a Conjugate Heat Transfer Solver

Authors: Mansour A. Al Qubeissi

Abstract:

The current paper presents a numerical approach to solving conjugate heat transfer problems. A heat conduction code is coupled internally with a computational fluid dynamics solver to develop a coupled conjugate heat transfer solver. A methodology for treating non-matching meshes at the interface has also been proposed. The validation results for 1D and 2D cases of the developed conjugate heat transfer code have shown close agreement with analytical solutions.
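
The kind of 1D validation case mentioned in the abstract can be reproduced analytically for a steady composite slab, where flux continuity fixes the interface temperature; the material properties below are arbitrary, and the actual coupled solver is not reproduced.

```python
# Steady 1D conjugate check: two slabs in series between fixed temperatures
# T_left and T_right. Continuity of heat flux at the interface gives
#   k1*(T_left - T_i)/L1 = k2*(T_i - T_right)/L2
k1, L1 = 15.0, 0.02     # conductivity (W/m.K) and thickness (m) of domain 1
k2, L2 = 0.5, 0.05      # conductivity and thickness of domain 2
T_left, T_right = 400.0, 300.0

g1, g2 = k1 / L1, k2 / L2                        # layer conductances
T_i = (g1 * T_left + g2 * T_right) / (g1 + g2)   # interface temperature
q = g1 * (T_left - T_i)                          # heat flux (W/m^2), same in both layers

print(f"Interface temperature: {T_i:.2f} K, heat flux: {q:.1f} W/m^2")
```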

Keywords: Computational Fluid Dynamics, Conjugate Heat transfer, Heat Conduction, Heat Transfer

3159 Fast and Efficient Algorithms for Evaluating Uniform and Nonuniform Lagrange and Newton Curves

Authors: Taweechai Nuntawisuttiwong, Natasha Dejdumrong

Abstract:

Newton-Lagrange interpolations are widely used in numerical analysis. However, their construction requires quadratic computational time. In computer aided geometric design (CAGD), there are some polynomial curves, the Wang-Ball, DP and Dejdumrong curves, which have linear time complexity algorithms. Thus, the computational time for Newton-Lagrange interpolations can be reduced by applying the algorithms for Wang-Ball, DP and Dejdumrong curves. In order to use the Wang-Ball, DP and Dejdumrong algorithms, it is first necessary to convert Newton-Lagrange polynomials into Wang-Ball, DP or Dejdumrong polynomials. In this work, the algorithms for converting both uniform and non-uniform Newton-Lagrange polynomials into Wang-Ball, DP and Dejdumrong polynomials are investigated. Thus, the computational time for representing Newton-Lagrange polynomials can be reduced to linear complexity. In addition, other uses of CAGD curves to modify Newton-Lagrange curves become possible.
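
For reference, the Newton form itself can be constructed with divided differences (quadratic time) and then evaluated in linear time by Horner-like nesting; the conversion to Wang-Ball, DP or Dejdumrong form studied in the paper is not reproduced here, and the nodes below are an arbitrary non-uniform example.

```python
def divided_differences(x, y):
    """Newton divided-difference coefficients (quadratic-time construction)."""
    n = len(x)
    coef = list(y)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - j])
    return coef

def newton_eval(coef, x_nodes, t):
    """Evaluate the Newton form at t with Horner-like nesting (linear time)."""
    result = coef[-1]
    for c, xn in zip(reversed(coef[:-1]), reversed(x_nodes[:-1])):
        result = result * (t - xn) + c
    return result

# Example: interpolate four points on non-uniform nodes and evaluate at t = 2.5.
x_nodes = [0.0, 1.0, 2.0, 4.0]
y_vals = [1.0, 2.0, 0.0, 3.0]
coef = divided_differences(x_nodes, y_vals)
print(newton_eval(coef, x_nodes, 2.5))
```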

Keywords: Newton interpolation, Lagrange interpolation, linear complexity.

3158 Interoperability Maturity Models for Consideration When Using School Management Systems in South Africa: A Scoping Review

Authors: Keneilwe Maremi, Marlien Herselman, Adele Botha

Abstract:

The main purpose and focus of this paper is to determine the Interoperability Maturity Models to consider when using School Management Systems (SMS). The importance of this is to inform and help schools know which Interoperability Maturity Model is best suited to their SMS. To address this purpose, the paper applies a scoping review to ensure that all aspects are covered. The scoping review includes papers written from 2012 to 2019, and the different types of Interoperability Maturity Models are compared in detail, including the background information, the levels of interoperability, and the areas for consideration in each Maturity Model. The literature was obtained from the following databases: IEEE Xplore and Scopus; the following search engines were used: Harzing's and Google Scholar. The topic of the paper was used as a search term for the literature, and the term ‘Interoperability Maturity Models’ was used as a keyword. The data were analyzed in terms of the definition of interoperability, Interoperability Maturity Models, and levels of interoperability. The results provide a table that shows the focus area of concern for each Maturity Model, based on the scoping review in which only 24 papers were found to be best suited for the paper out of 740 publications initially identified in the field. This resulted in the most discussed Interoperability Maturity Models for consideration: the Information Systems Interoperability Maturity Model (ISIMM) and the Organizational Interoperability Maturity Model for C2 (OIM).

Keywords: Interoperability, Interoperability Maturity Model, School Management System, scoping review.

3157 Comparison of Two Interval Models for Interval-Valued Differential Evolution

Authors: Hidehiko Okada

Abstract:

The author previously proposed an extension of differential evolution (DE). The proposed method extends the processes of DE to handle interval numbers as genotype values so that DE can be applied to interval-valued optimization problems. The interval DE can employ either of two interval models, the lower and upper (LU) model or the center and width (CW) model, for specifying genotype values. The ability of the interval DE to search for solutions may depend on the model. In this paper, the author compares the two models to investigate which model helps the interval DE find better solutions. The application of the interval DE considered here is the evolutionary training of interval-valued neural networks. The result of a preliminary study indicates that the CW model is better than the LU model: the interval DE with the CW model could evolve better neural networks.
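
The two genotype encodings compared in the paper can be written as simple conversions; this is a generic sketch in which the width is taken as the full interval width, one possible convention (the paper may use the half-width instead).

```python
def lu_to_cw(lower, upper):
    """Lower/upper (LU) encoding -> center/width (CW) encoding."""
    center = 0.5 * (lower + upper)
    width = upper - lower          # full width; a half-width convention also exists
    return center, width

def cw_to_lu(center, width):
    """Center/width (CW) encoding -> lower/upper (LU) encoding."""
    return center - 0.5 * width, center + 0.5 * width

# An interval-valued genotype value, e.g. a connection weight of an interval NN.
print(lu_to_cw(-0.4, 1.2))   # -> (0.4, 1.6)
print(cw_to_lu(0.4, 1.6))    # -> (-0.4, 1.2)
```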

Keywords: Evolutionary algorithms, differential evolution, neural network, neuroevolution, interval arithmetic.

3156 Hygric Performance of a Sandstone Wall Retrofitted with Interior Thermal Insulation

Authors: J. Maděra, M. Jerman, R. Černý

Abstract:

Temperature, relative humidity and overhygroscopic moisture fields in a sandstone wall provided with interior thermal insulation were calculated in order to assess the hygric performance of the retrofitted wall. Computational simulations showed that during the 10-year period under investigation no overhygroscopic moisture appeared in the analyzed building envelope, so the envelope performed satisfactorily from the hygric point of view.

Keywords: Sandstone wall, interior thermal insulation, moisture, computational modeling.

3155 Operating System Based Virtualization Models in Cloud Computing

Authors: Dev Ras Pandey, Bharat Mishra, S. K. Tripathi

Abstract:

Cloud computing is ready to transform the structure of businesses and learning by supplying real-time applications and providing immediate help for small to medium-sized businesses. The ability to run a hypervisor inside a virtual machine is an important feature of virtualization, called nested virtualization. In today's growing field of information technology, many virtualization models are available that provide a convenient approach to implementation, but the selection of a single model is difficult. This paper examines the applications of operating system based virtualization in cloud computing and identifies an appropriate model for different specifications and users' requirements. In the present paper, the most popular models are selected, and the selection is based on container and hypervisor based virtualization. The selected models were compared against a wide range of users' requirements, such as the number of CPUs, memory size, nested virtualization support, live migration and commercial support, and the most suitable virtualization model was identified.

Keywords: Virtualization, OS based virtualization, container and hypervisor based virtualization.

3154 The Use of Performance Indicators for Evaluating Models of Drying Jackfruit (Artocarpus heterophyllus L.): Page, Midilli, and Lewis

Authors: D. S. C. Soares, D. G. Costa, J. T. S., A. K. S. Abud, T. P. Nunes, A. M. Oliveira Júnior

Abstract:

Mathematical models of drying are used to understand the drying process in order to determine important parameters for the design and operation of the dryer. The jackfruit is a highly consumed fruit in the Northeast and is perishable; it is necessary to apply techniques that preserve it for longer in order to spread it to regions with low consumption. This study aimed to analyze several mathematical models (Page, Lewis, and Midilli) to indicate the one that best fits the conditions of the convective drying process, using performance indicators associated with each model: accuracy factor (Af), noise factor (Bf), root mean square error (RMSE) and standard error of prediction (%SEP). Jackfruit drying was carried out in a convective tray dryer at a temperature of 50°C for 9 hours. The Midilli model was the most accurate, with Af: 1.39, Bf: 1.33, RMSE: 0.01%, and SEP: 5.34. However, the Midilli model is not appropriate for process control purposes because it needs four tuning parameters. With the performance indicators used in this paper, the Page model showed similar results with only two parameters. It is concluded that the best correlation between the experimental and estimated data is given by the Page model.
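
A sketch of fitting the two-parameter Page model and computing the RMSE indicator; the drying-curve data points are illustrative, not the measured jackfruit data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Page thin-layer drying model: MR = exp(-k * t**n), two tuning parameters (k, n).
def page(t, k, n):
    return np.exp(-k * t**n)

# Illustrative moisture-ratio data over 9 hours of convective drying.
t = np.array([0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)
mr = np.array([0.93, 0.85, 0.70, 0.57, 0.46, 0.37, 0.30, 0.24, 0.19, 0.15])

(k, n), _ = curve_fit(page, t, mr, p0=[0.1, 1.0])
rmse = np.sqrt(np.mean((mr - page(t, k, n)) ** 2))
print(f"k={k:.4f}, n={n:.4f}, RMSE={rmse:.4f}")
```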

Keywords: Drying, models, jackfruit.

3153 A Study on Mode of Collapse of Metallic Shells Having Combined Tube-Frusta Geometry Subjected to Axial Compression

Authors: P. K. Gupta

Abstract:

The present paper deals with the experimental and computational study of the axial collapse of aluminum metallic shells having a combined tube-frusta geometry between two parallel plates. The bottom two-thirds of each shell's length was a frustum and the remaining top one-third was a tube. The shells were compressed to identify their modes of collapse and the associated energy absorption capability. An axisymmetric finite element computational model of the collapse process is presented and analysed using the non-linear FE code FORGE2. Six-noded isoparametric triangular elements were used to discretize the deforming shell. The material of the shells was idealized as rigid visco-plastic. To validate the computational model, the experimental and computed results for the deformed shapes and their corresponding load-compression and energy-compression curves were compared. With the help of the obtained results, the progress of the axisymmetric mode of collapse is presented, analysed and discussed.

Keywords: Axial compression, crashworthiness, energy absorption, FORGE2, metallic shells.

3152 An Improved Quality Adaptive Rate Filtering Technique Based on the Level Crossing Sampling

Authors: Saeed Mian Qaisar, Laurent Fesquet, Marc Renaudin

Abstract:

Most systems deal with time-varying signals, and power efficiency can be achieved by adapting the system activity to the input signal variations. In this context, an adaptive rate filtering technique based on level-crossing sampling is devised. It adapts the sampling frequency and the filter order by following the local variations of the input signal, and thus correlates the processing activity with the signal variations. Interpolation is required in the proposed technique, and a drastic reduction in the interpolation error is achieved by exploiting symmetry during the interpolation process. The processing error of the proposed technique is calculated. The computational complexity of the proposed filtering technique is derived and compared to that of the classical one. The results promise a significant gain in computational efficiency and hence in power consumption.
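
The level-crossing sampling principle itself can be sketched in a few lines: a sample is taken only when the signal crosses one of a set of predefined levels, so quiet segments produce few samples. This is a generic illustration, not the authors' adaptive-rate filter, and the signal and levels are invented.

```python
import numpy as np

# Densely sampled reference signal (stand-in for the analog input).
t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)

levels = np.linspace(-1, 1, 9)          # predefined quantization levels

samples = []
for i in range(1, len(x)):
    lo, hi = sorted((x[i - 1], x[i]))
    # Record a (time, level) pair for every level crossed in this step.
    for L in levels[(levels > lo) & (levels <= hi)]:
        samples.append((t[i], L))

print(f"{len(samples)} level-crossing samples versus {len(t)} uniform samples")
```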

Keywords: Level Crossing Sampling, Activity Selection, Rate Filtering, Computational Complexity, Interpolation Error.

3151 Analysis of Textual Data Based On Multiple 2-Class Classification Models

Authors: Shigeaki Sakurai, Ryohei Orihara

Abstract:

This paper proposes a new method for analyzing textual data. The method deals with items of textual data, where each item is described from various viewpoints. The method acquires 2-class classification models for the viewpoints by applying an inductive learning method to items with multiple viewpoints. The method infers whether or not the viewpoints are assigned to new items by using the models. It then extracts expressions from the new items classified into the viewpoints and extracts characteristic expressions corresponding to the viewpoints by comparing the frequency of expressions among the viewpoints. The paper also applies the method to questionnaire data given by guests at a hotel and verifies its effect through numerical experiments.
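
A minimal sketch of the per-viewpoint 2-class models: one binary classifier per viewpoint over bag-of-words-style features. The tiny corpus and viewpoint labels are invented for illustration, and the inductive learner and feature extraction are not necessarily those used in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented corpus of guest comments and two viewpoints (service, room).
texts = ["the staff was friendly and helpful",
         "the room was small but clean",
         "slow check-in and unhelpful reception",
         "spacious room with a great view"]
labels = {"service": [1, 0, 1, 0], "room": [0, 1, 0, 1]}

vec = TfidfVectorizer()
X = vec.fit_transform(texts)

# One 2-class model per viewpoint, as in the proposed method.
models = {vp: LogisticRegression().fit(X, y) for vp, y in labels.items()}

new = vec.transform(["friendly reception and a clean room"])
print({vp: int(m.predict(new)[0]) for vp, m in models.items()})
```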

Keywords: Text mining, Multiple viewpoints, Differential analysis, Questionnaire data

3150 Heuristic Method for Judging the Computational Stability of the Difference Schemes of the Biharmonic Equation

Authors: Guang Zeng, Jin Huang, Zicai Li

Abstract:

In this paper, we study the standard 13-point difference schemes for solving the biharmonic equation. A heuristic method is applied to judge the computational stability of multi-level difference schemes for the biharmonic equation. It is shown that the standard 13-point difference schemes are stable.
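
For reference, the standard 13-point approximation of the biharmonic operator on a uniform grid can be applied as below; the grid function is an arbitrary test field, and the stability analysis itself is not reproduced.

```python
import numpy as np

def biharmonic_13pt(u, h):
    """Standard 13-point approximation of the biharmonic operator on interior points."""
    c = u[2:-2, 2:-2]
    lap4 = (20.0 * c
            - 8.0 * (u[3:-1, 2:-2] + u[1:-3, 2:-2] + u[2:-2, 3:-1] + u[2:-2, 1:-3])
            + 2.0 * (u[3:-1, 3:-1] + u[3:-1, 1:-3] + u[1:-3, 3:-1] + u[1:-3, 1:-3])
            + (u[4:, 2:-2] + u[:-4, 2:-2] + u[2:-2, 4:] + u[2:-2, :-4]))
    return lap4 / h**4

# Test on u = x^4, whose biharmonic is exactly 24 everywhere.
h = 0.1
x = np.arange(0, 1 + h / 2, h)
X, Y = np.meshgrid(x, x, indexing="ij")
u = X**4
print(biharmonic_13pt(u, h)[0, 0])   # approximately 24
```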

Keywords: Finite-difference equation, computational stability, Hirt method.

3149 Recent Advances on Computational Proteomics

Authors: Sérgio F. Sousa, Nuno M. F. S. A. Cerqueira, Marta A. S. Perez, Irina S. Moreira, António J. M. Ribeiro, Ana R. A. P. Neves, Maria J. Ramos, Pedro A. Fernandes

Abstract:

In this work we report the recent progress achieved by our group over the last half decade in the field of computational proteomics. Specifically, we discuss the application of molecular dynamics simulations and electronic structure calculations in drug design, in the clarification of the structural and dynamic properties of proteins and enzymes, and in the understanding of the catalytic and inhibition mechanisms of cancer-related enzymes. A set of examples illustrates the concepts and helps to introduce the reader to this important and fast-moving field.

Keywords: Enzyme, Molecular Dynamics, Protein, Quantum Mechanics.

3148 Comparison of different Channel Modeling Techniques used in the BPLC Systems

Authors: Justinian Anatory, Nelson Theethayi

Abstract:

The paper compares different channel models used for modeling Broadband Power-Line Communication (BPLC) systems. The models compared are those of Zimmermann and Dostert, Philipps, Anatory et al., and the generalized transmission line (TL) model of Anatory et al. The validity of each model was assessed in the time domain against the ATP-EMTP software, which uses a transmission line approach. It is found that for a power-line network with a minimum number of branches all the models give similar signal/pulse time responses compared with the ATP-EMTP software; however, the Zimmermann and Dostert model indicates the same amplitude but a different time delay. It is observed that when the number of branches is increased, only the results of the generalized TL theory approach remain comparable with the ATP-EMTP results. The Multi-Carrier Spread Spectrum (MC-SS) system was also applied to check the implication of such behavior for the modulation schemes. It is observed that using the Philipps model on the underground cable can predict a performance up to 25 dB better than the other channel models, which can misrepresent the actual performance of the system. Also, the modified Zimmermann and Dostert model under multipath can predict a performance about 5 dB better than that predicted by the generalized TL theory. It is therefore suggested that for realistic BPLC system design and analysis the model based on generalized TL theory be used.
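
One of the compared models, the Zimmermann-Dostert multipath model, expresses the channel as a sum of attenuated and delayed echoes; the path gains, lengths and attenuation parameters below are illustrative, not fitted to any measured network.

```python
import numpy as np

# Zimmermann-Dostert multipath echo model:
#   H(f) = sum_i g_i * exp(-(a0 + a1*f**k) * d_i) * exp(-1j*2*pi*f*d_i/v_p)
# Illustrative parameters only.
g = np.array([0.6, 0.3, 0.1])          # path weighting factors
d = np.array([100.0, 130.0, 160.0])    # path lengths (m)
a0, a1, k = 1e-3, 1e-10, 1.0           # attenuation parameters
v_p = 1.5e8                            # propagation speed (m/s)

f = np.linspace(1e6, 30e6, 300)        # 1-30 MHz band

H = sum(gi * np.exp(-(a0 + a1 * f**k) * di) * np.exp(-1j * 2 * np.pi * f * di / v_p)
        for gi, di in zip(g, d))

print(f"|H| at 1 MHz: {abs(H[0]):.3f}, at 30 MHz: {abs(H[-1]):.3f}")
```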

Keywords: Broadband Power Line Channel Models, load impedance, branched network.
