Search results for: financial models
2878 Simulating and Forecasting Qualitative Macroeconomic Models Using Rule-Based Fuzzy Cognitive Maps
Authors: Spiros Mazarakis, George Matzavinos, Peter P. Groumpos
Abstract:
Economic models are complex dynamic systems with many uncertainties and fuzzy data. Conventional modeling approaches using well-known methods and techniques cannot provide realistic and satisfactory answers to today's challenging economic problems. Qualitative modeling using fuzzy logic and intelligent system theories can be applied to macroeconomic models. Fuzzy Cognitive Maps (FCMs) are a relatively new method for modeling the dynamic behavior of complex systems. For the first time, FCMs and the Mamdani model of intelligent control are combined to model macroeconomic systems. This new model is referred to as the Mamdani Rule-Based Fuzzy Cognitive Map (MBFCM) and provides the academic and research community with a promising integrated advanced computational model. A new economic model is developed for a qualitative approach to macroeconomic modeling, and fuzzy controllers for such models are designed. Simulation results for an economic scenario are provided and extensively discussed.
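For readers unfamiliar with FCMs, the conventional concept-activation update that rule-based variants build on can be written as follows (this generic textbook form is editorial context, not taken from the paper):

    A_i(t+1) = f\left( A_i(t) + \sum_{j \ne i} w_{ji}\, A_j(t) \right), \qquad f(x) = \frac{1}{1 + e^{-\lambda x}}

Here A_i is the activation of concept i, w_{ji} is the causal weight from concept j to concept i, and f is a sigmoid squashing function. The abstract suggests that the MBFCM replaces or augments the fixed numeric weights w_{ji} with Mamdani-style fuzzy if-then rules evaluated at each step.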
Keywords: Macroeconomic Models, Mamdani Rule-Based FCMs (MBFCMs), Qualitative and Dynamic Systems, Simulation.
2877 Some Solid Transportation Models with Crisp and Rough Costs
Authors: Pradip Kundu, Samarjit Kar, Manoranjan Maiti
Abstract:
In this paper, some practical solid transportation models are formulated considering the per-trip capacity of each type of conveyance, with crisp and rough unit transportation costs. This is applicable to systems in which full vehicles, e.g., trucks or rail coaches, are booked for transporting products, so that the transportation cost is determined by the full capacity of the conveyances. The models with unit transportation costs as rough variables are transformed into deterministic forms using rough chance-constrained programming with the help of the trust measure. Numerical examples are provided to illustrate the proposed models in a crisp environment as well as with unit transportation costs as rough variables.
Keywords: Solid transportation problem, Rough set, Rough variable, Trust measure.
2876 Soil-Structure Interaction Models for the Reinforced Foundation System: A State-of-the-Art Review
Authors: Ashwini V. Chavan, Sukhanand S. Bhosale
Abstract:
Challenges of weak soil subgrades are often resolved either by stabilizing the soil or by reinforcing it. However, it is also common practice to reinforce the granular fill to improve its load-settlement behavior over weak soil strata. The inclusion of reinforcement in the engineered granular fill provided a new impetus for the development of enhanced Soil-Structure Interaction (SSI) models, also known as mechanical foundation models or lumped parameter models. Several researchers have been working in this direction to understand the mechanism of granular fill-reinforcement interaction and the response of weak soil under the application of load. These models have been developed by extending available SSI models such as the Winkler Model, Pasternak Model, Hetenyi Model, Kerr Model etc., and are helpful to visualize the load-settlement behavior of a physical system through 1-D and 2-D analyses considering a beam and a plate resting on the foundation, respectively. Based on the literature survey, these models are categorized as 'Reinforced Pasternak Model,' 'Double Beam Model,' 'Reinforced Timoshenko Beam Model,' and 'Reinforced Kerr Model'. The present work reviews the past 30+ years of research in the field of SSI models for reinforced foundation systems, presenting the conceptual development of these models systematically and discussing their limitations. A flow chart showing the procedure for computation of deformation and mobilized tension is also included in the paper. Special effort is made to tabulate the parameters and their significance in the load-settlement analysis, which may be helpful in future studies for the comparison and enhancement of results and findings of physical models.
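As editorial context, the two classical subgrade idealizations that the reviewed reinforced models extend are usually written in the following standard forms (not reproduced from the paper):

    \text{Winkler:} \quad q(x) = k\, w(x)
    \text{Pasternak:} \quad q(x) = k\, w(x) - G_p \frac{d^2 w(x)}{dx^2}

where q is the subgrade reaction pressure, w the settlement, k the modulus of subgrade reaction, and G_p the shear modulus of the Pasternak shear layer; the reinforced variants typically add a tensioned membrane or beam element representing the geosynthetic between the fill and these spring layers.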
Keywords: geosynthetics, mathematical modeling, reinforced foundation, soil-structure interaction, ground improvement, soft soil
2875 Low-Cost and Highly Accurate Motion Models for Three-Dimensional Local Landmark-based Autonomous Navigation
Authors: Gheorghe Galben, Daniel N. Aloi
Abstract:
Recently, Spherical Motion Models (SMMs) were introduced [1]. These new models were developed for 3D local landmark-based Autonomous Navigation (AN). This paper presents new arguments and experimental results supporting the characteristics of the SMMs. Accuracy and robustness in performing a specific task are the main concerns of the new investigations. To analyze the performance of the SMMs, two powerful tools of estimation theory, the extended Kalman filter (EKF) and the unscented Kalman filter (UKF), which provide strong estimates in noisy environments, have been employed. Monte Carlo validation runs have also been used to test the stability and robustness of the models.
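For reference, the discrete-time EKF predict/update cycle used in such landmark-based estimation takes the standard textbook form below (generic notation, not taken from the paper):

    \hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}, u_k), \qquad P_{k|k-1} = F_k P_{k-1|k-1} F_k^{\top} + Q_k
    K_k = P_{k|k-1} H_k^{\top} \left( H_k P_{k|k-1} H_k^{\top} + R_k \right)^{-1}
    \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \big( z_k - h(\hat{x}_{k|k-1}) \big), \qquad P_{k|k} = (I - K_k H_k) P_{k|k-1}

where F_k and H_k are the Jacobians of the motion model f and the measurement model h evaluated at the current estimate; the UKF replaces these linearizations with sigma-point propagation.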
Keywords: Autonomous navigation, extended Kalman filter, unscented Kalman filter, localization algorithms.
2874 Statistical (Radio) Path Loss Modelling: For RF Propagations within localized Indoor and Outdoor Environments of the Academic Building of INTI University College (Laureate International Universities)
Authors: Emmanuel O.O. Ojakominor, Tian F. Lai
Abstract:
A handful of propagation textbooks that discuss radio frequency (RF) propagation models merely list the models and perhaps discuss them rather briefly; this can be frustrating for a potential first-time modeller who has no idea how these models were derived. This paper fundamentally provides an introduction to modelling the radio channel. Explicitly, for the modelling practice discussed here, signal-strength field measurements had to be conducted beforehand (this was done at 469 MHz); to be precise, this paper primarily concerns empirically/statistically modelling the radio channel, and thus presents results obtained from empirically modelling the environments in question. On the whole, the paper proposes three propagation models, corresponding to three experimental environments. The models have been derived by making full use of statistical measures: the first two models were derived via simple linear regression analysis, whereas the third was derived using multiple regression analysis (with five different predictors). Additionally, as implied by the title, both indoor and outdoor environments were studied; however, two of the environments are neither entirely indoor nor entirely outdoor, while the third is completely indoor.
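A common starting point for this kind of empirical fit is the log-distance path loss model, which simple linear regression estimates directly (generic form shown for context; the paper's fitted coefficients are not reproduced here):

    PL(d) = PL(d_0) + 10\, n \log_{10}\!\left( \frac{d}{d_0} \right) + X_\sigma

where PL(d_0) is the path loss at a reference distance d_0, n is the path loss exponent obtained from the regression slope, and X_\sigma is a zero-mean Gaussian term (in dB) capturing shadow fading; multiple regression simply adds further environment-specific predictors to this linear form.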
Keywords: RF propagation, radio channel modelling, statistical methods.
2873 Mixtures of Monotone Networks for Prediction
Authors: Marina Velikova, Hennie Daniels, Ad Feelders
Abstract:
In many data mining applications, it is a priori known that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision maker. In this paper we consider partially monotone prediction problems, where the target variable depends monotonically on some of the input variables but not on all. We propose a novel method to construct prediction models in which monotone dependences with respect to some of the input variables are preserved by construction. Our method belongs to the class of mixture models. The basic idea is to convolve monotone neural networks with weight (kernel) functions to make predictions. By using simulation and real case studies, we demonstrate the application of our method. To obtain a sound assessment of the performance of our approach, we use standard neural networks with weight decay and partially monotone linear models as benchmark methods for comparison. The results show that our approach outperforms partially monotone linear models in terms of accuracy. Furthermore, the incorporation of partial monotonicity constraints not only leads to models that are in accordance with the decision maker's expertise, but also reduces considerably the model variance in comparison to standard neural networks with weight decay.
Keywords: Mixture models, monotone neural networks, partially monotone models, partially monotone problems.
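As a sketch of the mixture idea described above (generic notation, not the authors'): the prediction is a weighted combination of monotone component networks,

    \hat{y}(\mathbf{x}) = \sum_{k=1}^{K} \pi_k(\mathbf{x})\, g_k(\mathbf{x}), \qquad \pi_k(\mathbf{x}) \ge 0, \quad \sum_{k=1}^{K} \pi_k(\mathbf{x}) = 1

where each g_k is a neural network constrained to be non-decreasing in the designated monotone inputs and the \pi_k are kernel weighting functions. If the weights depend only on the non-monotone inputs, the convex combination remains monotone in the constrained variables, which is what preserves the partial monotonicity by construction.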
2872 Analyzing Data on Breastfeeding Using Dispersed Statistical Models
Authors: Naushad Mamode Khan, Cheika Jahangeer, Maleika Heenaye-Mamode Khan
Abstract:
Exclusive breastfeeding means feeding a baby no milk other than breast milk. Exclusive breastfeeding during the first 6 months of life is very important, as it supports optimal growth and development during infancy and reduces the risk of debilitating diseases and problems. Moreover, it helps to reduce the incidence and/or severity of diarrhea, lower respiratory infection and urinary tract infection. In this paper, we survey the factors that influence exclusive breastfeeding and use two dispersed statistical models to analyze the data: the Generalized Poisson regression model and the Com-Poisson regression model.
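For reference, the two count distributions named above are commonly parameterized as follows (standard forms, included editorially):

    \text{Generalized Poisson:} \quad P(Y = y) = \frac{\theta (\theta + \delta y)^{y-1} e^{-\theta - \delta y}}{y!}, \quad y = 0, 1, 2, \ldots
    \text{Com-Poisson:} \quad P(Y = y) = \frac{\lambda^{y}}{(y!)^{\nu}\, Z(\lambda, \nu)}, \qquad Z(\lambda, \nu) = \sum_{j=0}^{\infty} \frac{\lambda^{j}}{(j!)^{\nu}}

The dispersion parameters \delta and \nu allow both models to handle over- or under-dispersed counts that an ordinary Poisson regression cannot.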
Keywords: Exclusive breastfeeding, regression model, Generalized Poisson, Com-Poisson.
2871 Zero Truncated Strict Arcsine Model
Authors: Y. N. Phang, E. F. Loh
Abstract:
The zero-truncated model is usually used for modeling count data without zeros; it is the counterpart of the zero-inflated model. Zero-truncated Poisson and zero-truncated negative binomial models have been discussed and used by researchers in analyzing the abundance of rare species and hospital stays. Zero-truncated models are also used as the basis for developing hurdle models. In this study, we developed a new model, the zero-truncated strict arcsine model, which can be used as an alternative for modeling count data without zeros and with extra variation. Two simulated data sets and one real-life data set are fitted to the developed model. The results show that the model provides a good fit to the data. The maximum likelihood estimation method is used to estimate the parameters.
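The zero truncation itself follows the usual conditioning construction (generic form, applicable to the strict arcsine case just as to Poisson or negative binomial):

    P(Y = y \mid Y > 0) = \frac{P(Y = y)}{1 - P(Y = 0)}, \qquad y = 1, 2, 3, \ldots

so any parent count distribution can be truncated at zero by renormalizing over the positive support, and the parameters are then estimated by maximizing the likelihood of this conditional distribution.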
Keywords: Hurdle models, maximum likelihood estimation method, positive count data.
2870 The Contribution of Edgeworth, Bootstrap and Monte Carlo Methods in Financial Data
Authors: Edlira Donefski, Tina Donefski, Lorenc Ekonomi
Abstract:
Edgeworth approximation, bootstrap and Monte Carlo simulations have a considerable impact on achieving certain results for the problems under study. In our paper, we treat a financial case concerning the effect that the components of the cash flow of one of the most successful businesses in the world, namely the financing, operating and investing activities, have on cash and cash equivalents at the end of the three-month period. To gain a better view of this case, we created a Vector Autoregression model and then generated the impulse responses in terms of asymptotic analysis (Edgeworth approximation), Monte Carlo simulations and residual bootstrap, based on the standard errors of every series created. The generated results showed common tendencies for the three methods applied, which consequently confirmed the advantage of the three methods in the optimization of a model that contains many variables.
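As context for the method described, a VAR(p) model and the residual bootstrap around it can be sketched as follows (standard notation, not the paper's estimates):

    \mathbf{y}_t = \mathbf{c} + A_1 \mathbf{y}_{t-1} + \cdots + A_p \mathbf{y}_{t-p} + \mathbf{u}_t

Impulse responses are computed from the estimated coefficient matrices \hat{A}_1, \ldots, \hat{A}_p. The residual bootstrap resamples the fitted residuals \hat{\mathbf{u}}_t with replacement, rebuilds pseudo-series recursively from the estimated coefficients, re-estimates the VAR on each pseudo-series, and uses the spread of the resulting impulse responses as their standard errors and confidence bands.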
Keywords: Autoregression, Bootstrap, Edgeworth Expansion, Monte Carlo Method.
2869 An Ontology for Investment in Chinese Steel Company
Authors: Liming Chen, Baoxin Xiu, Zhaoyun Ding, Bin Liu, Xianqiang Zhu
Abstract:
In the era of big data, public investors are faced with more complicated information related to investment decisions than ever before. To survive in the fierce competition, it has become increasingly urgent for investors to combine multi-source knowledge and evaluate companies' true value efficiently. For this, a rule-based ontology reasoning method is proposed to support steel companies' value assessment. Considering the delay in financial disclosure and based on cost-benefit analysis, this paper introduces financial analysis of supply chain enterprises and constructs an ontology model used to assess the value of a steel company. In addition, domain knowledge is formally expressed with the help of the Web Ontology Language (OWL) and SWRL (Semantic Web Rule Language) rules. Finally, a case study on a steel company in China demonstrated the effectiveness of the proposed method.
Keywords: Financial ontology, steel company, supply chain, ontology reasoning.
2868 Neuro-Fuzzy Network Based On Extended Kalman Filtering for Financial Time Series
Authors: Chokri Slim
Abstract:
The neural network's performance can be measured by efficiency and accuracy. The major disadvantages of the neural network approach are that the generalization capability of neural networks is often low, and that it may take a very long time to tune the weights in the network to generate an accurate model for highly complex and nonlinear systems. This paper presents a novel neuro-fuzzy architecture based on the extended Kalman filter. To test the performance and applicability of the proposed neuro-fuzzy model, a simulation study of a nonlinear complex dynamic system is carried out. The proposed method can be applied to on-line incremental adaptive learning for the prediction of financial time series. A benchmark case study is used to demonstrate that the proposed model is a superior neuro-fuzzy modeling technique.
Keywords: Neuro-fuzzy, Extended Kalman filter, nonlinear systems, financial time series.
2867 AnQL: A Query Language for Annotation Documents
Authors: Neerja Bhatnagar, Ben A. Juliano, Renee S. Renner
Abstract:
This paper presents data annotation models at five levels of granularity (database, relation, column, tuple, and cell) of relational data to address the unsuitability of most relational databases for expressing annotations. These models do not require any structural or schematic changes to the underlying database. They are also flexible, extensible, customizable, database-neutral, and platform-independent. This paper also presents an SQL-like query language, named Annotation Query Language (AnQL), to query annotation documents. AnQL is simple to understand and exploits the already existing broad knowledge and skill set of SQL.
Keywords: Annotation query language, data annotations, data annotation models, semantic data annotations.
2866 ANN Models for Microstrip Line Synthesis and Analysis
Authors: Dr. K. Sri Rama Krishna, J. Lakshmi Narayana, Dr. L. Pratap Reddy
Abstract:
Microstrip lines, widely used for good reason, are broadband in frequency and provide circuits that are compact and light in weight. They are generally economical to produce, since they are readily adaptable to hybrid and monolithic integrated circuit (IC) fabrication technologies at RF and microwave frequencies. Although the existing EM simulation models used for the synthesis and analysis of microstrip lines are reasonably accurate, they are computationally intensive and time consuming. Neural networks recently gained attention as fast and flexible vehicles for microwave modeling, simulation and optimization. After learning and abstracting from microwave data, through a process called training, neural network models are used during microwave design to provide instant answers to the task learned. This paper presents simple and accurate ANN models for the synthesis and analysis of microstrip lines to compute more accurately the characteristic parameters and the physical dimensions, respectively, for the required design specifications.
Keywords: Neural Models, Algorithms, Microstrip Lines, Analysis, Synthesis
2865 An OWL Ontology for CommonKADS Template Knowledge Models
Authors: B. A. Gobin, R. K. Subramanian
Abstract:
This paper gives an overview of how an OWL ontology has been created to represent the template knowledge models defined in CML that are provided by CommonKADS. CommonKADS is a mature knowledge engineering methodology which proposes the use of template knowledge models for knowledge modelling. The aim of developing this ontology is to present the template knowledge models in a knowledge representation language that can be easily understood and shared in the knowledge engineering community. Hence, OWL is used, as it has become a standard for ontologies and already has user-friendly tools for viewing and editing.
Keywords: Ontology, OWL, Template Knowledge Models, CommonKADS
2864 Financial Burden of Family for the Children with Autism Spectrum Disorder
Authors: M. R. Bhuiyan, S. M. M. Hossain, M. Z. Islam
Abstract:
Autism Spectrum Disorder (ASD) is the fastest growing serious developmental disorder, characterized by social deficits, communicative difficulties, and repetitive behaviors. ASD is an emerging public health issue globally and is associated with a huge financial burden on the family, the community and the nation. The aim of this study was to assess the financial burden on families of children with Autism Spectrum Disorder. This cross-sectional study was carried out from July 2015 to June 2016 among 154 children with ASD. Data were collected by face-to-face interview with a semi-structured questionnaire, following a systematic random sampling technique. The majority (73.4%) of the children were male, and the mean (±SD) age was 6.66 ± 2.97 years. Most (88.8%) of the children were from urban areas, with an average monthly family income of Tk. 41785.71 ± 23936.45. The average monthly direct cost for the children was Tk. 17656.49 ± 9984.35, while the indirect cost was Tk. 13462.90 ± 9713.54 and the total treatment cost was Tk. 23076.62 ± 15341.09. Special education cost (Tk. 4871.00), cost of therapy (Tk. 4124.07) and travel cost (Tk. 3988.31) were the major types of direct cost, while loss of income (Tk. 14570.18) was the chief indirect cost incurred by the families. The study found that the majority (59.8%) of the children attending special schools incurred Tk. 20001-78700 as total treatment cost, which was statistically significant (p<0.001). Again, families with higher monthly family income incurred higher treatment costs (r=0.526, p<0.05). The difference between mean direct and indirect cost was found to be significant (t=4.190, df=61, p<0.001). According to the analysis of variance, the mean differences by father's educational status in direct cost (F=10.337, p<0.001) and total treatment cost (F=7.841, p<0.001) were statistically significant. The study revealed that most children with ASD were under five years of age and three-fourths were male; according to monthly family income, most families were middle class. The study recommends cost-effective interventions and financial safety-net measures to reduce the financial burden on families of children with ASD.
Keywords: Autism spectrum disorder, financial burden, direct cost, indirect cost, Special education.
2863 Comparison of Three Turbulence Models in Wear Prediction of Multi-Size Particulate Flow through Rotating Channel
Authors: Pankaj K. Gupta, Krishnan V. Pagalthivarthi
Abstract:
The present work compares the performance of three turbulence modeling approaches (based on the two-equation k-ε model) in predicting erosive wear in multi-size dense slurry flow through a rotating channel. All three turbulence models include a rotation modification to the production term in the turbulent kinetic energy equation. The two-phase flow field, obtained numerically using a Galerkin finite element methodology, relates the local flow velocity and concentration to the wear rate via a suitable wear model. The wear models for both the sliding wear and impact wear mechanisms account for particle size dependence. Predicted wear rates from the three turbulence models are compared for a large number of cases spanning operating parameters such as rotation rate, solids concentration, flow rate and particle size distribution. The root-mean-square error between the FE-generated data and the correlation of maximum wear rate with the operating parameters is found to be less than 2.5% for all three models.
Keywords: Rotating channel, maximum wear rate, multi-size particulate flow, k-ε turbulence models.
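For readers unfamiliar with the two-equation model referenced, the standard (non-rotating) k-ε transport equations are shown below; the rotation corrections to the production term used in the paper are not reproduced here:

    \frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j} = \frac{\partial}{\partial x_j}\!\left[ \left( \nu + \frac{\nu_t}{\sigma_k} \right) \frac{\partial k}{\partial x_j} \right] + P_k - \varepsilon
    \frac{\partial \varepsilon}{\partial t} + U_j \frac{\partial \varepsilon}{\partial x_j} = \frac{\partial}{\partial x_j}\!\left[ \left( \nu + \frac{\nu_t}{\sigma_\varepsilon} \right) \frac{\partial \varepsilon}{\partial x_j} \right] + C_{1\varepsilon} \frac{\varepsilon}{k} P_k - C_{2\varepsilon} \frac{\varepsilon^2}{k}

with \nu_t = C_\mu k^2 / \varepsilon and the usual constants C_\mu = 0.09, C_{1\varepsilon} = 1.44, C_{2\varepsilon} = 1.92, \sigma_k = 1.0, \sigma_\varepsilon = 1.3.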
2862 Modeling of Normal and Atherosclerotic Blood Vessels using Finite Element Methods and Artificial Neural Networks
Authors: K. Kamalanand, S. Srinivasan
Abstract:
Analysis of blood vessel mechanics in normal and diseased conditions is essential for disease research, medical device design and treatment planning. In this work, 3D finite element models of a normal vessel and an atherosclerotic vessel with 50% plaque deposition were developed. The developed models were meshed using a finite number of tetrahedral elements and simulated using actual blood pressure signals. Based on the transient analysis performed on the developed models, parameters such as total displacement, strain energy density and entropy per unit volume were obtained. Further, the obtained parameters were used to develop artificial neural network models for analyzing normal and atherosclerotic blood vessels. In this paper, the objectives of the study, the methodology and significant observations are presented.
Keywords: Blood vessel, atherosclerosis, finite element model, artificial neural networks
2861 Interoperability in Component Based Software Development
Authors: M. Madiajagan, B. Vijayakumar
Abstract:
Interoperability is the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application, and data compatibility layers. There has been considerable work in industry on the development of component interoperability models, such as CORBA, (D)COM and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate reuse of off-the-shelf components. The focus of these models is syntactic interface specification, component packaging, inter-component communications, and bindings to a runtime environment. What these models lack is a consideration of architectural concerns: specifying systems of communicating components, explicitly representing loci of component interaction, and exploiting architectural styles that provide well-understood global design solutions. The development of complex business applications is now focused on an assembly of components available on a local area network or on the net. These components must be located and identified in terms of available services and communication protocols before any request is made. The first part of the article introduces the basic concepts of components and middleware, the following sections describe the different up-to-date models of communication and interaction, and the last section shows how different models can communicate among themselves.
Keywords: Interoperability, component packaging, communication technology, heterogeneous platform, component interface, middleware.
2860 Knowledge Based Model for Power Transformer Life Cycle Management Using Knowledge Engineering
Authors: S. S. Bhandari, N. Chakpitak, K. Meksamoot, T. Chandarasupsang
Abstract:
Under investment budget limitations, a utility company is required to maximize the utilization of its existing assets over their life cycle while satisfying both engineering and financial requirements. However, the utility often lacks knowledge about the status of each asset in the portfolio, in terms of either technical or financial value. This paper presents a knowledge-based model that enables utility companies to make optimal decisions on power transformer utilization. The CommonKADS methodology, a structured approach to representing knowledge and expertise, is utilized for designing and developing the knowledge-based model. A case study of a one-MVA power transformer of the Nepal Electricity Authority is presented. The results show that reusable knowledge can be categorized, modeled and utilized within the utility company using the proposed methodologies. Moreover, the results show that the utility company can achieve both engineering and financial benefits from its utilization.
Keywords: CommonKADS, Knowledge Engineering, Life Cycle Management, Power Transformer.
2859 Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Electricity prices have sophisticated features such as high volatility, nonlinearity and high frequency that make forecasting quite difficult. Electricity price has a volatile but non-random character, so it is possible to identify patterns from historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers, and generation companies. So far, many shallow-ANN (artificial neural network) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as DNNs (deep neural networks), have been used in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models has been evaluated with publicly available data from the Turkish day-ahead electricity market. Both the shallow-ANN and the DNN approaches give successful results in forecasting problems. Historical load, price and weather temperature data are used as the input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. In this regard, forecasting studies have been carried out comparatively with shallow-ANN and DNN models for the Turkish electricity market in the related time period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared regarding their MAE (Mean Absolute Error) and MSE (Mean Squared Error) results. DNN models give better forecasting performance compared to shallow-ANN models. The best five MAE results for DNN models are 0.346, 0.372, 0.392, 0.402 and 0.409.
Keywords: Deep learning, artificial neural networks, energy price forecasting, Turkey.
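A rough sketch of such a comparison (not the study's code): a one-hidden-layer "shallow" network versus a deeper network, evaluated by MAE and MSE. Synthetic data stand in for the hourly load, price and temperature features, and the layer sizes are illustrative only.

    # Shallow ANN vs. deeper network on synthetic stand-in features, compared by MAE/MSE.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_absolute_error, mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(4000, 6))                        # stand-in hourly features
    y = X @ rng.normal(size=6) + 0.3 * np.sin(3 * X[:, 0]) + rng.normal(scale=0.1, size=4000)
    X_tr, X_te, y_tr, y_te = X[:3200], X[3200:], y[:3200], y[3200:]

    models = {"shallow-ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
              "DNN": MLPRegressor(hidden_layer_sizes=(64, 64, 32, 16), max_iter=1000, random_state=0)}
    for name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(f"{name}: MAE={mean_absolute_error(y_te, pred):.3f}  "
              f"MSE={mean_squared_error(y_te, pred):.3f}")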
2858 Human Pose Estimation using Active Shape Models
Authors: Changhyuk Jang, Keechul Jung
Abstract:
Human pose estimation can be performed using Active Shape Models. Existing techniques that apply Active Shape Models to human-body research, such as human detection, primarily model the silhouette of the human body. Such a technique cannot accurately estimate poses involving the two arms and legs, as the silhouette represents only a rough outline of the body shape. To solve this problem, we modeled the human body as a stick figure, or "skeleton". The skeleton model of the human body can accommodate various human poses. To obtain effective estimation results, we applied background subtraction and a modified matching algorithm of the original Active Shape Models in the fitting process. The model was built from 600 human body images and has 17 landmark points that indicate body joints and key features of human pose. The maximum number of iterations for the fitting process was 30, and the execution time was less than 0.03 sec.
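For context, the point distribution model at the core of Active Shape Models expresses any plausible shape as a deformation of the mean shape along its principal modes (standard formulation, not specific to this paper):

    \mathbf{x} \approx \bar{\mathbf{x}} + \mathbf{P}\,\mathbf{b}, \qquad |b_i| \le 3\sqrt{\lambda_i}

where \bar{\mathbf{x}} is the mean of the aligned landmark configurations (here, the 17 skeleton points), \mathbf{P} holds the leading eigenvectors of their covariance matrix, \lambda_i are the corresponding eigenvalues, and \mathbf{b} are the shape parameters; fitting alternates between matching landmarks to the image and projecting the result back into this constrained shape space.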
Keywords: Active shape models, skeleton, pose estimation.
2857 Elucidating the Influence of Demographics and Psychological Traits on Investment Biases
Authors: Huei-Wen Lin
Abstract:
This study explored the relationship between psychological traits, demographics and financial behavioral biases for individual investors in the Taiwan stock market. Using a questionnaire survey conducted in 2010, 554 valid convenience samples were collected to examine the determinants of three types of behavioral biases. Based on a literature review, two hypothesized models are constructed and used to evaluate the effects of the Big Five personality traits and demographic variables on investment biases through Structural Equation Modeling (SEM) analysis. The results showed that the investment biases of individual investors are significantly related to four personality traits as well as some demographics.
Keywords: Behavioral finance, Big Five, Disposition effect, Herding, Overconfidence, Personality traits.
2856 Simulation of Reactive Distillation: Comparison of Equilibrium and Nonequilibrium Stage Models
Authors: Asfaw Gezae Daful
Abstract:
In the present study, two distinctly different approaches are followed for modeling a reactive distillation column: the equilibrium stage model and the nonequilibrium stage model. These models are simulated with computer code developed in the present study using MATLAB. In the equilibrium stage models, the vapor and liquid phases are assumed to be in equilibrium and allowance is made for finite reaction rates, whereas in the nonequilibrium stage models, simultaneous mass transfer and reaction rates are considered. The simulated model results are validated against experimental data reported in the literature. The simulated results of the equilibrium and nonequilibrium models are compared for concentration, temperature and reaction rate profiles in a reactive distillation column for Methyl Tert-Butyl Ether (MTBE) production. Both models show similar trends in the concentration, temperature and reaction rate profiles, but the nonequilibrium model predictions are higher and closer to the experimental values reported in the literature.
Keywords: Reactive Distillation, Equilibrium model, Nonequilibrium model, Methyl Tert-Butyl Ether
2855 Classification of Business Models of Italian Bancassurance by Balance Sheet Indicators
Authors: Andrea Bellucci, Martina Tofi
Abstract:
The aim of this paper is to analyze business models of bancassurance in Italy for the life business. The life insurance business is very developed in the Italian market, and bank branches hold 80% of the market share. Given its maturity, the life insurance market needs to consolidate its organizational form to allow for the development of the non-life business, which nowadays collects few premiums but represents a great opportunity to enlarge the market share of bancassurance, using its strength in the distribution channel, while the market share of independent agents is decreasing. Starting with the main business model of bancassurance for the life business, this paper analyzes the performance of life companies in the Italian market by balance sheet indicators and by the main discriminant variables of business models. The study observes trends from 2013 to 2015 for the Italian market by exploiting a database managed by Associazione Nazionale delle Imprese di Assicurazione (ANIA). The applied approach is based on a bottom-up analysis, starting with variables and indicators to define the business models' classification. The statistical classification algorithm proposed by Ward is employed to design business model profiles. The results of the analysis are a representation of the main business models, each profiled by its indicators. In this way, an unsupervised analysis is developed; its limitation is a judgmental dimension based on the researchers' opinion, but it makes it possible to obtain a design of effective business models.
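As an illustration of the clustering step described above (a minimal sketch, not the authors' code; the indicator matrix is synthetic, whereas real data would come from the ANIA database):

    # Ward hierarchical clustering of companies by balance sheet indicators.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.stats import zscore

    rng = np.random.default_rng(1)
    indicators = rng.normal(size=(40, 5))        # 40 companies x 5 balance sheet ratios
    X = zscore(indicators, axis=0)               # standardize so no ratio dominates

    Z = linkage(X, method="ward")                # Ward's minimum-variance criterion
    labels = fcluster(Z, t=3, criterion="maxclust")   # cut the dendrogram into 3 profiles

    for k in np.unique(labels):
        profile = X[labels == k].mean(axis=0)    # mean standardized indicators per profile
        print(f"business model {k}: n={np.sum(labels == k)}, profile={np.round(profile, 2)}")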
Keywords: Balance sheet indicators, bancassurance, business models, Ward algorithm.
2854 The Strengths and Limitations of the Statistical Modeling of Complex Social Phenomenon: Focusing on SEM, Path Analysis, or Multiple Regression Models
Authors: Jihye Jeon
Abstract:
This paper analyzes the conceptual framework of three statistical methods: multiple regression, path analysis, and structural equation models. When establishing a research model for the statistical modeling of complex social phenomena, it is important to know the strengths and limitations of these three statistical models. This study explores the character, strengths, and limitations of each modeling approach and suggests some strategies for accurately explaining or predicting the causal relationships among variables. In particular, common research modeling mistakes in studies of depression or mental health are discussed.
Keywords: Multiple regression, path analysis, structural equation models, statistical modeling, social and psychological phenomenon.
2853 Mathematical Rescheduling Models for Railway Services
Authors: Zuraida Alwadood, Adibah Shuib, Norlida Abd Hamid
Abstract:
This paper presents a review of past studies concerning mathematical models for rescheduling passenger railway services, as part of delay management when railway disruptions occur. Many of the mathematical models highlighted aim to minimize the service delays experienced by passengers during disruptions. Integer programming (IP) and mixed-integer programming (MIP) models are critically discussed, focusing on the model approach, decision variables, sets and parameters. Some of them have been tested on real-life data of railway companies worldwide, while a few have been validated on fictitious data. Based on selected literature on train rescheduling, this paper assists researchers in model formulation by providing comprehensive analyses of model building. These analyses can help in developing new rescheduling strategies or in enhancing existing rescheduling models to make them more powerful or more applicable with shorter computing times.
Keywords: Mathematical modelling, Mixed-integer programming, Railway rescheduling, Service delays.
2852 The Classification Model for Hard Disk Drive Functional Tests under Sparse Data Conditions
Authors: S. Pattanapairoj, D. Chetchotsak
Abstract:
This paper proposes classification models to be used as a proxy for the hard disk drive (HDD) functional test equivalent, which requires more than two weeks to classify HDD status as either "Pass" or "Fail". These models were constructed using a committee network consisting of a number of single neural networks. This paper also includes a method to solve the problem of sparse data for the failed parts, called the "enforce learning method". Our results reveal that the classification models constructed with the proposed method perform well under sparse data conditions, and thus the models, which take only a few seconds for HDD classification, could be used to substitute for the HDD functional tests.
Keywords: Sparse data, Classifications, Committee network
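A committee network of the kind described can be sketched as an ensemble of independently trained single networks whose outputs are combined by voting (illustrative code, not the authors'; the imbalanced synthetic data stand in for the Pass/Fail test records):

    # Committee of single neural networks: majority vote of independently seeded MLPs.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1],
                               random_state=0)        # imbalanced "Pass"/"Fail" stand-in
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    committee = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=s)
                 .fit(X_tr, y_tr) for s in range(7)]  # 7 single networks

    votes = np.stack([m.predict(X_te) for m in committee])
    y_hat = (votes.mean(axis=0) >= 0.5).astype(int)   # majority vote of the committee
    print("committee accuracy:", (y_hat == y_te).mean())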
2851 Imputing Missing Data in Electronic Health Records: A Comparison of Linear and Non-Linear Imputation Models
Authors: Alireza Vafaei Sadr, Vida Abedi, Jiang Li, Ramin Zand
Abstract:
Missing data is a common challenge in medical research and can lead to biased or incomplete results. When the data bias leaks into models, it further exacerbates health disparities; biased algorithms can lead to misclassification and reduced resource allocation and monitoring as part of prevention strategies for certain minorities and vulnerable segments of patient populations, which in turn further reduce data footprint from the same population – thus, a vicious cycle. This study compares the performance of six imputation techniques grouped into Linear and Non-Linear models, on two different real-world electronic health records (EHRs) datasets, representing 17864 patient records. The mean absolute percentage error (MAPE) and root mean squared error (RMSE) are used as performance metrics, and the results show that the Linear models outperformed the Non-Linear models in terms of both metrics. These results suggest that sometimes Linear models might be an optimal choice for imputation in laboratory variables in terms of imputation efficiency and uncertainty of predicted values.
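A minimal sketch of such a comparison (not the study's pipeline; synthetic data replace the EHR laboratory variables, and the two estimators are illustrative choices of "Linear" and "Non-Linear" imputers):

    # Compare a linear and a non-linear iterative imputer by RMSE and MAPE
    # on values that are masked out artificially.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.linear_model import BayesianRidge
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X_true = rng.normal(size=(500, 8))                 # stand-in laboratory variables
    mask = rng.random(X_true.shape) < 0.15             # 15% missing at random
    X_obs = np.where(mask, np.nan, X_true)

    imputers = {"linear (BayesianRidge)":
                    IterativeImputer(estimator=BayesianRidge(), random_state=0),
                "non-linear (RandomForest)":
                    IterativeImputer(estimator=RandomForestRegressor(n_estimators=50,
                                                                     random_state=0),
                                     random_state=0)}
    for name, imp in imputers.items():
        X_hat = imp.fit_transform(X_obs)
        err = X_hat[mask] - X_true[mask]
        rmse = np.sqrt(np.mean(err ** 2))
        mape = np.mean(np.abs(err / X_true[mask])) * 100   # crude MAPE; unstable near zero
        print(f"{name}: RMSE={rmse:.3f}  MAPE={mape:.1f}%")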
Keywords: EHR, Machine Learning, imputation, laboratory variables, algorithmic bias.
2850 Using Model to Plan of Strategic Objectives
Authors: Terezie Bartusková, Jitka Baňařová, Zuzana Kusněřová
Abstract:
The importance of strategic planning is unquestionable. However, the practical implementation of a strategic plan faces many obstacles. The aim of the article is to explain the importance of strategic planning, to find out how companies in the Moravian-Silesian Region deal with strategic planning, and to introduce a model that helps to set strategic goals in the area of financial indicators. This model should be part of the whole strategic planning process and can be used to predict the future values of the company's financial indicators with regard to the factors that influence them.
Keywords: Planning of Potentials, Planning of Strategic Objectives, Portfolio Planning, Significant Factors, Strategic Planning.
2849 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network
Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
We use a database that records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. While models learned from these data that can predict future traffic speed would be beneficial for applications such as car navigation systems, building predictive models for every link becomes a nontrivial job if the number of links in a given network is huge. An advantage of adopting k-nearest neighbors (k-NN) as the predictive model is that it does not require any explicit model building. Instead, k-NN takes a long time to make a prediction because it must search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale is that we look only at the recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighboring links upstream and downstream of it. The performance of these models is compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
Keywords: Big data, k-NN, machine learning, traffic speed prediction.
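A minimal sketch of the trade-off investigated (illustrative only; the feature construction, window sizes and synthetic data are invented for the example, not taken from the paper):

    # k-NN traffic speed prediction: restrict the neighbor search to the most recent
    # records and measure accuracy vs. prediction time.
    import time
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    n = 50_000
    X = rng.normal(size=(n, 8))      # stand-in features: current/past speeds of link + neighbors
    y = X[:, :4].mean(axis=1) + rng.normal(scale=0.2, size=n)   # stand-in future speed
    X_hist, y_hist = X[:-500], y[:-500]          # historical records to search
    X_query, y_query = X[-500:], y[-500:]        # prediction-time queries

    for window in (49_500, 10_000, 2_000):       # search all data vs. only recent data
        knn = KNeighborsRegressor(n_neighbors=10).fit(X_hist[-window:], y_hist[-window:])
        t0 = time.perf_counter()
        pred = knn.predict(X_query)
        dt = time.perf_counter() - t0
        mae = np.abs(pred - y_query).mean()
        print(f"window={window:>6}: MAE={mae:.3f}, predict time={dt * 1000:.1f} ms")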