Search results for: Statistical model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8189

8099 Real-time Tracking in Image Sequences Based on Parameter Updating with Temporal and Spatial Neighborhoods Mixture Gaussian Model

Authors: Hu Haibo, Zhao Hong

Abstract:

The Gaussian mixture background model is widely used for moving target detection in image sequences. However, the traditional Gaussian mixture background model usually considers only the temporal continuity of each pixel, establishing the background from the statistical distribution of pixels without taking into account their spatial similarity, which causes noise, imperfections and other problems. This paper proposes a new Gaussian mixture modeling approach which combines the color and gradient spatial information and integrates the spatial information of the pixel sequences to establish the Gaussian mixture background. The experimental results show that the moving targets and background can be extracted accurately and efficiently, and that the algorithm is more robust and can work in real time in tracking applications.
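
The following minimal Python sketch illustrates the baseline technique the abstract builds on, Gaussian mixture background subtraction over an image sequence, using OpenCV's stock MOG2 model; the spatial colour/gradient extension proposed in the paper is not part of this stock implementation, and the input file name is hypothetical.

```python
import cv2

# Hypothetical input sequence; any video file or camera index works here.
cap = cv2.VideoCapture("sequence.avi")
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog2.apply(frame)           # per-pixel foreground/background decision
    fg_mask = cv2.medianBlur(fg_mask, 5)  # simple spatial noise suppression
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```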

Keywords: Gaussian mixture model, real-time tracking, sequence image, gradient.

8098 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical and statistical applications are developed with ever greater complexity and accuracy, and this growing precision and complexity means that applications need more computational power in order to execute quickly. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, as they allow more parallelism to be included inside a node. However, taking advantage of this parallelism is not an easy task, because of problems such as core communications, data locality, memory sizes (cache and RAM), synchronization, and data dependencies in the model. These issues become more important when we wish to improve an application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool of the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of multicore hardware. All these improvements are made in an automatic and transparent manner, with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the optimized version, which achieves a considerable reduction in execution time: around 96% in the best case tested, between the original serial version and the automatic parallel version.
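
The SYMBOL tool itself is parallelised with OpenMP in a compiled language; the sketch below only illustrates, in Python, the underlying pattern of distributing independent Monte Carlo simulation runs across cores, with invented default probabilities and loss sizes.

```python
from multiprocessing import Pool

import numpy as np


def simulate_losses(seed, n_banks=100):
    """One hypothetical simulation run of banking-system losses."""
    rng = np.random.default_rng(seed)
    defaults = rng.random(n_banks) < 0.02                      # assumed default probability
    losses = rng.lognormal(mean=0.0, sigma=1.0, size=n_banks)  # assumed loss sizes
    return float((losses * defaults).sum())


if __name__ == "__main__":
    with Pool() as pool:                                       # one worker per available core
        totals = pool.map(simulate_losses, range(10_000))      # independent runs in parallel
    print("99.9% loss quantile:", np.quantile(totals, 0.999))
```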

Keywords: Algorithm optimization, Bank Failures, OpenMP, Parallel Techniques, Statistical tool.

8097 Applications of Artificial Neural Network to Building Statistical Models for Qualifying and Indexing Radiation Treatment Plans

Authors: Pei-Ju Chao, Tsair-Fwu Lee, Wei-Luen Huang, Long-Chang Chen, Te-Jen Su, Wen-Ping Chen

Abstract:

The main goal of this paper is to quantify the quality of different radiation treatment planning techniques. A back-propagation artificial neural network (ANN) combined with biomedical theory was used to model thirteen dosimetric parameters and to calculate two dosimetric indices. The correlations between the dosimetric indices and quality of life were extracted as features and used in the ANN model to support decisions in the clinic. The simulation results show that a trained multilayer back-propagation neural network model can help a doctor accept or reject a plan efficiently. In addition, the models are flexible: whenever a new treatment technique enters the market, the feature variables simply need to be imported and the model re-trained for it to be ready for use.
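
A hedged sketch of the kind of model described, thirteen dosimetric inputs feeding a back-propagation classifier that accepts or rejects a plan; the data, labels and network size below are synthetic illustrations, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))                 # 13 dosimetric parameters (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic accept(1)/reject(0) label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)                            # back-propagation training
print("held-out accuracy:", clf.score(X_te, y_te))
```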

Keywords: neural network, dosimetric index, radiation treatment, tumor

8096 Towards Integrating Statistical Color Features for Human Skin Detection

Authors: Mohd Zamri Osman, Mohd Aizaini Maarof, Mohd Foad Rohani

Abstract:

Human skin detection is recognized as the primary step in many applications such as face detection, illicit image filtering, hand recognition and video surveillance. The performance of any skin detection application relies heavily on two components: feature extraction and the classification method. Skin color is the most important information used for skin detection. However, color features alone sometimes cannot handle images whose background shares the same color distribution as skin: pixel-based color features do not eliminate skin-like colors, because the intensities of skin and skin-like colors fall under the same distribution. Hence, statistical color measures such as the mean and standard deviation are exploited as additional features to increase the reliability of the skin detector. In this paper, we study the effectiveness of statistical color features for human skin detection. Furthermore, the paper analyzes the integrated color and texture features using eight classifiers with three color spaces: RGB, YCbCr, and HSV. The experimental results show that integrating the statistical features with a Random Forest classifier achieves a significant performance, with an F1-score of 0.969.
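
A minimal sketch of the statistical colour-feature idea: per-patch mean and standard deviation in a colour space feed a Random Forest skin/non-skin classifier. The patch size, colour values and data are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def patch_features(patch_ycbcr):
    """Mean and standard deviation of each channel over a small pixel neighbourhood."""
    return np.concatenate([patch_ycbcr.mean(axis=(0, 1)),
                           patch_ycbcr.std(axis=(0, 1))])


rng = np.random.default_rng(1)
skin = rng.normal([150, 120, 155], 10, size=(500, 8, 8, 3))   # synthetic skin-like YCbCr patches
non_skin = rng.uniform(0, 255, size=(500, 8, 8, 3))           # synthetic background patches
X = np.array([patch_features(p) for p in np.vstack([skin, non_skin])])
y = np.array([1] * 500 + [0] * 500)                           # 1 = skin, 0 = non-skin

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```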

Keywords: Color space, neural network, random forest, skin detection, statistical feature.

8095 Network State Classification Based on the Statistical Properties of RTT for an Adaptive Multi-State Proactive Transport Protocol for Satellite-Based Networks

Authors: Mohanchur Sakar, K. K. Shukla, K. S. Dasgupta

Abstract:

This paper attempts to establish that multi-state network classification is essential for the performance enhancement of transport protocols over satellite-based networks. A model to classify multi-state network conditions, taking into consideration both congestion and channel error, is developed. To arrive at such a model, an analysis of the impact of congestion and channel error on RTT values has been carried out using ns2, and the analysis results are reported in the paper. The inferences drawn from this analysis are used to develop a novel statistical RTT-based model for multi-state network classification. An adaptive multi-state proactive transport protocol consisting of proactive slow start, state-based error recovery, timeout action and proactive reduction is proposed, which uses this classification model. The paper also confirms, through detailed simulation and analysis, that prior knowledge of the overall characteristics of the network helps to enhance the performance of the protocol over a satellite channel, which is significantly affected by channel noise and congestion. The necessary augmentation of the ns2 simulator was done to simulate the multi-state network classification logic, and this simulation was used in a detailed evaluation of the protocol under varied levels of congestion and channel noise. The performance enhancement of the protocol with respect to the established protocols TCP SACK and Vegas is discussed. The results clearly reveal that the proposed protocol always outperforms its peers and shows a significant improvement in the very high error conditions envisaged in its design.
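
An illustrative sketch only, not the authors' model: a statistical RTT-based classifier in the spirit of the paper, labelling the network state from windowed RTT statistics, with invented thresholds and state names.

```python
import numpy as np


def classify_state(rtt_window, base_rtt):
    """Toy multi-state classification from windowed RTT statistics (assumed thresholds)."""
    mean_rtt = np.mean(rtt_window)
    jitter = np.std(rtt_window)
    if mean_rtt > 1.5 * base_rtt and jitter > 0.2 * base_rtt:
        return "congested"          # sustained queueing delay plus high variability
    if jitter > 0.3 * base_rtt:
        return "channel-error"      # variability without sustained delay growth
    return "good"


rtts = np.array([560, 580, 575, 600, 620, 610]) / 1000.0   # seconds, GEO-like round trips
print(classify_state(rtts, base_rtt=0.56))
```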

Keywords: GEO, ns2, Proactive TCP, SACK, Vegas

8094 Statistical Modeling of Accelerated Pavement Failure Using Response Surface Methodology

Authors: Anshu Manik, Kasthurirangan Gopalakrishnan, Siddhartha K. Khaitan

Abstract:

Rutting is one of the major load-related distresses in airport flexible pavements. Rutting in paving materials develops gradually with an increasing number of load applications, usually appearing as longitudinal depressions in the wheel paths, and it may be accompanied by small upheavals to the sides. Significant research has been conducted to determine the factors which affect rutting and how they can be controlled. Using experimental design concepts, a series of tests can be conducted while varying the levels of the different parameters that could cause rutting in airport flexible pavements. With proper experimental design, the results of these tests can give better insight into the causes of rutting and into the presence of interactions and synergisms among the system variables that influence it. Although laboratory experiments are traditionally conducted in a controlled fashion to understand the statistical interaction of variables in such situations, this study attempts to identify the critical system variables influencing airport flexible pavement rut depth from a statistical design-of-experiments (DoE) perspective, using real field data from a full-scale test facility. The test results strongly indicate that the response (rut depth) contains too much noise to allow a good model to be determined. From a statistical DoE perspective, two major changes are proposed for this experiment: (1) actual replication of the tests is definitely required, and (2) nuisance variables need to be identified and blocked properly. Further investigation is necessary to determine possible sources of noise in the experiment.
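
A sketch of the kind of second-order response-surface fit such a DoE study uses, via ordinary least squares; the factors and data below are synthetic stand-ins, not the NAPTF measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({"load": rng.uniform(20, 60, 40),        # wheel load, assumed factor
                   "passes": rng.uniform(1e3, 1e5, 40)})   # load repetitions, assumed factor
# Synthetic rut-depth response with deliberately large noise.
df["rut"] = 0.02 * df["load"] + 1e-5 * df["passes"] + rng.normal(0, 0.5, 40)

# Full second-order (response-surface) model: main effects, squares, interaction.
model = smf.ols("rut ~ load + passes + I(load**2) + I(passes**2) + load:passes",
                data=df).fit()
print(model.summary().tables[1])
```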

Keywords: Airport Pavement, Design of Experiments, Rutting, NAPTF.

8093 A Stochastic Analytic Hierarchy Process Based Weighting Model for Sustainability Measurement in an Organization

Authors: Faramarz Khosravi, Gokhan Izbirak

Abstract:

A weighted, statistical, stochastic Analytic Hierarchy Process (AHP) model for modeling the potential barriers to and enablers of sustainability, and for measuring and assessing the sustainability level, is proposed. For context-dependent potential barriers and enablers, the proposed model takes as its basis the properties of the variables describing the sustainability functions and was developed into a realistic analytical model for the sustainable behavior of an organization, thereby serving as a means of measuring the organization's sustainability. The main focus of this paper is the application of the AHP tool in a statistically based model for measuring sustainability; hence a strongly weighted stochastic AHP-based procedure was achieved. A case-study scenario of a widely reported major Canadian electric utility was adopted to demonstrate the applicability of the developed model, and its results were compared with those of an equal-weighted model. Variations in the company's sustainability over time were captured as fluctuations. In the results obtained, the sustainability index for successive years changed from 73.12%, 79.02%, 74.31%, 76.65%, 80.49%, 79.81%, and 79.83% to the more exact values 73.32%, 77.72%, 76.76%, 79.41%, 81.93%, 79.72%, and 80.45%, according to the factor priorities obtained from expert views. By obtaining the necessary informative measurement indicators, the model can practically and effectively evaluate the sustainability of any organization and determine its fluctuations over time.
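
A minimal sketch of the AHP weighting step at the core of such a model: priorities from the principal eigenvector of a pairwise-comparison matrix plus a consistency check; the 3x3 judgement matrix is illustrative, not the case-study data.

```python
import numpy as np

# Illustrative pairwise-comparison matrix over three sustainability factors.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalised priority (weight) vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
cr = ci / 0.58                              # consistency ratio (random index for n = 3)
print("weights:", weights.round(3), "CR:", round(cr, 3))
```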

Keywords: AHP, sustainability fluctuation, environmental indicators, performance measurement, environmental sustainability.

8092 A Systemic Maturity Model

Authors: Emir H. Pernet, Jeimy J. Cano

Abstract:

Maturity models, used descriptively to explain changes in reality or normatively to guide managers in making interventions that render organizations more effective and efficient, are based on the principles of statistical quality control and the PDCA (Plan, Do, Check, Act) cycle of continuous improvement. Frameworks developed around the concept of maturity models include COBIT, CMM, and ITIL. This paper presents some limitations of traditional maturity models, most of them related to the mechanistic and reductionist principles on which those models are built. Since systems theory helps in understanding the dynamics of organizations and organizational change, developing a systemic maturity model can help overcome some of those limitations. This document proposes a systemic maturity model based on a systemic conceptualization of organizations, focused on the study of the functioning of the parts, the relationships among them, and their behavior as a whole. From the systems-theory perspective, maturity is conceptually defined as an emergent property of the organization, which arises from the degree of alignment and integration of its processes. This concept is operationalized through a systemic function that measures the maturity of organizations, and is finally validated by measuring the maturity of several organizations. For its operationalization and validation, the model was applied to measure the maturity of organizational Governance, Risk and Compliance (GRC) processes.

Keywords: GRC, Maturity Model, Systems Theory, Viable System Model.

8091 A Study of Neuro-Fuzzy Inference System for Gross Domestic Product Growth Forecasting

Authors: E. Giovanis

Abstract:

In this paper we present an Adaptive Neuro-Fuzzy Inference System (ANFIS), with the lagged dependent variable as input, for the prediction of the Gross Domestic Product growth rate in six countries. We compare the results with those of an autoregressive (AR) model and conclude that the forecasting performance of the neuro-fuzzy system in the out-of-sample period is clearly superior, making it a very useful alternative tool for national statistical services and the banking and finance industry.
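
A sketch of the autoregressive benchmark the ANFIS is compared against, an AR model fitted to a growth-rate series with an out-of-sample forecast; the series below is synthetic.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(3)
growth = 2.0 + 0.05 * np.cumsum(rng.normal(0, 0.3, 80))   # synthetic GDP growth-rate series

train, test = growth[:70], growth[70:]
ar = AutoReg(train, lags=2).fit()                          # AR model on lagged values only
forecast = ar.predict(start=len(train), end=len(growth) - 1)
print("out-of-sample RMSE:", np.sqrt(np.mean((forecast - test) ** 2)))
```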

Keywords: Autoregressive model, forecasting, Gross Domestic Product, neuro-fuzzy.

8090 A Statistical Prediction of Likely Distress in the Nigerian Banking Sector Using a Neural Network Approach

Authors: D. A. Farinde

Abstract:

One of the most significant threats to the economy of a nation is the bankruptcy of its banks. This study evaluates the susceptibility of Nigerian banks to failure, with a view to identifying ratios and financial data that are sensitive to the solvency of a bank. Further, a predictive model is generated to guide all stakeholders in the industry. Thirty quoted banks that had published Annual Reports for the year preceding the consolidation (i.e. 2004) were selected and examined for distress using multilayer perceptron neural network analysis. The model was then used to analyze further reforms by the Central Bank of Nigeria, using the published Annual Reports of twenty quoted banks for the years 2008 and 2011. The model can thus be used for future prediction of failure in the Nigerian banking system.

Keywords: Bank, Bankruptcy, Financial Ratios, Neural Network, Multilayer Perceptron, Predictive Model

8089 Speaker Identification by Joint Statistical Characterization in the Log Gabor Wavelet Domain

Authors: Suman Senapati, Goutam Saha

Abstract:

Real-world Speaker Identification (SI) applications differ from ideal or laboratory conditions: perturbations lead to a mismatch between the training and testing environments and degrade performance drastically. Many strategies have been adopted to cope with acoustical degradation; the wavelet-based Bayesian marginal model is one of them. However, Bayesian marginal models cannot capture the inter-scale statistical dependencies between different wavelet scales, and simple nonlinear estimators for wavelet-based denoising assume that the wavelet coefficients in different scales are independent, whereas wavelet coefficients in fact have significant inter-scale dependency. This paper exploits this inter-scale dependency through a Circularly Symmetric Probability Density Function (CS-PDF) related to the family of Spherically Invariant Random Processes (SIRPs) in the Log Gabor Wavelet (LGW) domain, and the corresponding joint shrinkage estimator is derived via Maximum a Posteriori (MAP) estimation. A framework based on these elements is proposed to denoise speech signals for automatic speaker identification problems. The robustness of the proposed framework is tested for text-independent speaker identification on 100 speakers of the POLYCOST database and 100 speakers of the YOHO speech database in three different noise environments. Experimental results show that the proposed estimator yields a larger improvement in identification accuracy than other estimators with the popular Gaussian Mixture Model (GMM) speaker model and Mel-Frequency Cepstral Coefficient (MFCC) features.

Keywords: Speaker Identification, Log Gabor Wavelet, Bayesian Bivariate Estimator, Circularly Symmetric Probability Density Function, SIRP.

8088 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers, which allows numerical algorithms to be run quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance, and statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skewed data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, nine statistical distributions (symmetric and skewed) have been considered to model a hypothetical slope stability problem. The data modeled are the friction angle of a superficial soil in Brasilia, Brazil; despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Based on this analysis, it is possible to explicitly derive the failure probability with the friction angle treated as a random variable, and to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison reveals relevant differences when analyzed in the light of risk management.
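
A sketch of the statistical workflow described: fitting candidate distributions to friction-angle data and propagating the fit to a failure probability for an assumed infinite-slope safety factor. The data, slope geometry and the skew-normal stand-in for the paper's Dagum fit are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
phi_deg = rng.normal(29, 3, 60)                        # synthetic friction-angle sample (degrees)

# Compare a symmetric and a skewed candidate fit.
norm_params = stats.norm.fit(phi_deg)
skew_params = stats.skewnorm.fit(phi_deg)
print("K-S p (normal):  ", stats.kstest(phi_deg, "norm", norm_params).pvalue)
print("K-S p (skewnorm):", stats.kstest(phi_deg, "skewnorm", skew_params).pvalue)

# Failure probability for a cohesionless infinite slope: FS = tan(phi) / tan(beta) < 1.
beta = np.radians(25)                                  # assumed slope angle
phi_sim = stats.skewnorm.rvs(*skew_params, size=100_000, random_state=0)
fs = np.tan(np.radians(phi_sim)) / np.tan(beta)
print("P(failure):", np.mean(fs < 1.0))
```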

Keywords: Statistical slope stability analysis, Skew distributions, Probability of failure, Functions of random variables.

8087 Statistical Analysis of First Order Plus Dead-time System using Operational Matrix

Authors: Pham Luu Trung Duong, Moonyong Lee

Abstract:

To increase the precision and reliability of automatic control systems, we have to take into account the random factors affecting them. Thus, an operational matrix technique is used for the statistical analysis of a first-order plus dead-time system with a uniformly distributed random parameter. Examples with deterministic and stochastic disturbances are considered to demonstrate the validity of the method, and a comparison with the Monte Carlo method is made to show its computational effectiveness.
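
A sketch of the Monte Carlo reference against which such an operational-matrix method is typically compared: the step response of a first-order plus dead-time (FOPDT) system whose gain is a uniform random parameter. The nominal values are assumptions.

```python
import numpy as np

t = np.linspace(0, 10, 501)
tau, theta = 2.0, 1.0                        # time constant and dead time (assumed)
rng = np.random.default_rng(5)
K = rng.uniform(0.8, 1.2, size=5000)         # uniformly distributed gain

# FOPDT step response: y(t) = K * (1 - exp(-(t - theta)/tau)) for t >= theta, else 0.
step = (t[None, :] >= theta) * (1.0 - np.exp(-np.clip(t[None, :] - theta, 0, None) / tau))
y = K[:, None] * step                        # one response per Monte Carlo sample

idx = t.searchsorted(5.0)
print("mean y(5 s):", y[:, idx].mean())
print("std  y(5 s):", y[:, idx].std())
```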

Keywords: First order plus dead-time, Operational matrix, Statistical analysis, Walsh function.

8086 Comparison of Stochastic Point Process Models of Rainfall in Singapore

Authors: Y. Lu, X. S. Qin

Abstract:

Extensive rainfall disaggregation approaches have been developed and applied in climate-change impact studies such as flood risk assessment and urban storm-water management. In this study, five rainfall models capable of disaggregating daily rainfall data into hourly data were investigated for the rainfall record at Changi Airport, Singapore. The objectives of this study were (i) to study the temporal characteristics of hourly rainfall in Singapore, and (ii) to evaluate the performance of various disaggregation models. The models used were: (i) the rectangular pulse Poisson model (RPPM), (ii) the Bartlett-Lewis rectangular pulse model (BLRPM), (iii) the Bartlett-Lewis model with two cell types (BL2C), (iv) the Bartlett-Lewis rectangular pulse model with cell depth distribution dependent on duration (BLRD), and (v) the Neyman-Scott rectangular pulse model (NSRPM). All of these models were fitted using hourly rainfall data from 1980 to 2005 obtained from the Changi meteorological station. The results indicated that a weighting scheme with weights inversely proportional to the variance delivers more accurate outputs for fitting rainfall patterns in tropical areas, and that BLRPM performed relatively better than the other disaggregation models.

Keywords: Rainfall disaggregation, statistical properties, Poisson process, Bartlett-Lewis model, Neyman-Scott model.

8085 Miller’s Model for Developing Critical Thinking Skill of Pre-Service Teachers at Suan Sunandha Rajabhat University

Authors: Suttipong Boonphadung, Thassanant Unnanantn

Abstract:

This research focused on comparing the critical thinking of student teachers before and after using Miller's Model learning activities, and on investigating their opinions. The sampling groups were (1) 33 fourth-year student teachers majoring in Early Childhood Education and enrolled in semester 1 of the 2013 academic year, (2) 28 third-year student teachers majoring in English and enrolled in semester 2 of the 2013 academic year, and (3) 22 third-year student teachers majoring in Thai and enrolled in semester 2 of the 2013 academic year. The research instruments were (1) lesson plans whose learning activities were designed based on Miller's Model, (2) critical thinking assessment criteria, and (3) a questionnaire on opinions towards Miller's Model based learning activities. The statistical treatment comprised means, standard deviations, difference scores and t-tests. The results showed that (1) the critical thinking of the students after the assigned activities was better than before, and (2) the students' opinions towards the critical-thinking improvement activities based on Miller's Model ranged from high to highest.
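
A sketch of the statistical treatment named in the abstract (means, difference scores, t-test), here a paired t-test on synthetic before/after critical-thinking scores rather than the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
before = rng.normal(60, 8, 33)              # 33 students, synthetic pre-test scores
after = before + rng.normal(6, 4, 33)       # assumed improvement after the activities

t_stat, p_value = stats.ttest_rel(after, before)   # paired (dependent-samples) t-test
print("mean gain:", round((after - before).mean(), 2),
      "t =", round(t_stat, 2), "p =", round(p_value, 4))
```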

Keywords: Critical thinking, Miller’s model, Opinions.

8084 Time Series Modelling and Prediction of River Runoff: Case Study of Karkheh River, Iran

Authors: Karim Hamidi Machekposhti, Hossein Sedghi, Abdolrasoul Telvari, Hossein Babazadeh

Abstract:

The rainfall-runoff phenomenon is a chaotic and complex outcome of nature which requires sophisticated modelling and simulation methods for explanation and use. Time series modelling allows runoff data to be analysed and used as a forecasting tool. In this paper, an attempt is made to model river runoff data and predict the future behavioural pattern of the river based on past annual observations of river runoff. The runoff analysis and prediction are done using an ARIMA model, and applicable statistical measures are used to evaluate the efficiency of the predictions of hydrological events such as rainfall and runoff. The good agreement between predicted and observed runoff, measured by the coefficient of determination (R2), shows that ARIMA(4,1,1) is a suitable model for predicting the runoff of the Karkheh River in Iran.
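
A sketch of an ARIMA(4,1,1) fit of the kind the paper selects, using statsmodels; the runoff series below is synthetic, not the Karkheh River record.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
runoff = 150 + np.cumsum(rng.normal(0, 12, 50))       # synthetic annual runoff series

model = ARIMA(runoff, order=(4, 1, 1)).fit()          # ARIMA(p=4, d=1, q=1)
print(model.summary().tables[1])
print("5-year forecast:", model.forecast(steps=5).round(1))
```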

Keywords: Time series modelling, ARIMA model, River runoff, Karkheh River, CLS method.

8083 Air Quality Forecast Based on Principal Component Analysis-Genetic Algorithm and Back Propagation Model

Authors: Bin Mu, Site Li, Shijin Yuan

Abstract:

Under circumstances of environmental deterioration, people are increasingly concerned about the quality of the environment, especially air quality; it is therefore of great value to give accurate and timely forecasts of the AQI (air quality index). In order to simplify the factors influencing a city's air quality and to forecast the city's AQI for the next day, this study used MATLAB and constructed a mathematical PCA-GA-BP model. Specifically, the study first performed principal component analysis (PCA) of the factors influencing the next day's AQI, covering weather, industrial waste gas and the current day's IAQI data. Then, a back-propagation neural network model (BP), optimized by a genetic algorithm (GA), was used to forecast the next day's AQI. To verify the validity and accuracy of the PCA-GA-BP model's forecasting capability, the study uses two statistical indices to evaluate the AQI forecast results: normalized mean square error and fractional bias. The mean square error is further reduced by optimizing the individual gene structure in the genetic algorithm and adjusting the parameters of the back-propagation model. In conclusion, the performance of the model in forecasting AQI is convincing, and the model is expected to have a positive effect on AQI forecasting in the future.
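
A sketch of the PCA plus back-propagation part of such a pipeline (the genetic-algorithm tuning is omitted here); the features and data are synthetic stand-ins for the weather, waste-gas and IAQI inputs.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
X = rng.normal(size=(365, 12))                                 # 12 raw influencing factors
y = X[:, :3].sum(axis=1) * 10 + 80 + rng.normal(0, 5, 365)     # synthetic next-day AQI

model = make_pipeline(StandardScaler(),
                      PCA(n_components=5),                     # dimensionality reduction
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                                   random_state=0))            # back-propagation regressor
model.fit(X[:300], y[:300])
pred = model.predict(X[300:])
nmse = np.mean((pred - y[300:]) ** 2) / np.var(y[300:])        # simple NMSE-style score
print("NMSE:", round(nmse, 3))
```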

Keywords: AQI forecast, principal component analysis, genetic algorithm, back propagation neural network model.

8082 Application of Feed-Forward Neural Network Autoregressive Models in Gross Domestic Product Prediction

Authors: E. Giovanis

Abstract:

In this paper we present an autoregressive model with neural network modeling, trained with the standard error back-propagation algorithm, in order to predict the gross domestic product (GDP) growth rate of four countries. Specifically, we propose a kind of weighted regression, usable for econometric purposes, in which the initial inputs are multiplied by the neural network's final optimum input-to-hidden-layer weights obtained after the training process. The forecasts are compared with those of an ordinary autoregressive model, and we conclude that the proposed regression's forecasting results significantly outperform those of the autoregressive model in the out-of-sample period. The idea behind this approach is to propose a parametric regression with weighted variables in order to test the statistical significance and magnitude of the estimated autoregressive coefficients and simultaneously produce the forecasts.

Keywords: Autoregressive model, error back-propagation, feed-forward neural networks, Gross Domestic Product.

8081 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics

Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo

Abstract:

Communication signal modulation recognition is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods fall into two major categories: maximum likelihood hypothesis testing based on decision theory, and statistical pattern recognition based on feature extraction. The statistical pattern recognition approach, which comprises feature extraction and classifier design, is now the most commonly used. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature, and an extreme learning machine (ELM) is used to classify the extracted features, addressing the real-time requirements of modern warfare. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, which improves performance. It addresses the difficulty that a simple feature extraction algorithm based on the Holder coefficient has in recognizing signals at low SNR, and it achieves better recognition accuracy. Simulation results show that the approach still gives good classification results at low SNR; even at an SNR of -15 dB, the recognition accuracy reaches 76%.

Keywords: Communication signal, feature extraction, holder coefficient, improved cloud model.

8080 Stochastic Modeling for Parameters of Modified Car-Following Model in Area-Based Traffic Flow

Authors: N. C. Sarkar, A. Bhaskar, Z. Zheng

Abstract:

Driving behavior in area-based (i.e., non-lane-based) traffic is induced by the presence of other individuals in the choice space within the driver's visual perception area; the behavior of a subject vehicle is constrained by potential leaders, and the leaders change frequently over time. This paper determines stochastic models for the parameters of a modified intelligent driver model (MIDM) in area-based traffic, as found in developing countries. Parametric and non-parametric distributions are presented to fit the parameters of the MIDM, and the goodness of fit for each parameter is measured both graphically and statistically: the quantile-quantile (Q-Q) plot is used for a graphical comparison of a parameter with a theoretical distribution, and the Kolmogorov-Smirnov (K-S) test is used as a statistical measure of fit. The distributions are fitted to a set of MIDM parameters estimated from real vehicle trajectory data from India, and the fit of each parameter to a stochastic model is well represented. The results support the applicability of the proposed modeling for the parameters of the MIDM in area-based traffic flow simulation.
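
A sketch of the goodness-of-fit procedure described: fitting candidate distributions to an estimated car-following parameter and comparing them with the Kolmogorov-Smirnov test; the parameter values below are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
headway = rng.lognormal(mean=0.3, sigma=0.25, size=300)   # synthetic parameter estimates

for name in ["lognorm", "gamma", "norm"]:
    dist = getattr(stats, name)
    params = dist.fit(headway)                            # maximum-likelihood fit
    ks = stats.kstest(headway, name, params)              # K-S goodness-of-fit test
    print(f"{name:8s}  KS stat = {ks.statistic:.3f}  p = {ks.pvalue:.3f}")
```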

Keywords: Area-based traffic, car-following model, micro-simulation, stochastic modeling.

8079 Comparative Study of Evolutionary Model and Clustering Methods in Circuit Partitioning Pertaining to VLSI Design

Authors: K. A. Sumitra Devi, N. P. Banashree, Annamma Abraham

Abstract:

Partitioning is a critical area of VLSI CAD: in order to build complex digital logic circuits, it is often essential to sub-divide a multi-million-transistor design into manageable pieces. This paper looks at various partitioning techniques in VLSI CAD targeted at different applications. We propose an evolutionary time-series model and a statistical glitch prediction system, using a neural network with global feature selection by means of clustering, for partitioning a circuit. For the evolutionary time-series model, we made use of genetic, memetic and neuro-memetic techniques, while the clustering work focused on the K-means and EM methods. A comparative study of all techniques for the circuit partitioning problem in VLSI design is provided, with performance compared using the MCNC standard-cell placement benchmark netlists. Analysis of the experimental results shows that the neuro-memetic model outperforms the other models in recognizing sub-circuits with a minimum number of interconnections between them.

Keywords: VLSI, circuit partitioning, memetic algorithm, genetic algorithm.

8078 Model Discovery and Validation for the QSAR Problem Using Association Rule Mining

Authors: Luminita Dumitriu, Cristina Segal, Marian Craciun, Adina Cocu, Lucian P. Georgescu

Abstract:

There are several approaches to solving the Quantitative Structure-Activity Relationship (QSAR) problem, based either on statistical methods or on predictive data mining. Among the statistical methods, one should consider regression analysis, pattern recognition (such as cluster analysis, factor analysis and principal component analysis) and partial least squares. Predictive data mining techniques use neural networks, genetic programming or neuro-fuzzy knowledge. These approaches have low explanatory capability or none at all. This paper attempts to establish a new approach to solving QSAR problems using descriptive data mining, so that the relationship between the chemical properties and the activity of a substance is modeled comprehensibly.
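
A sketch of descriptive rule mining in the spirit of the paper: frequent itemsets and association rules over discretised descriptor/activity flags. The miner below is a tiny self-contained toy (libraries such as mlxtend could be used instead), and the flags are invented, not QSAR data.

```python
from itertools import combinations

# Each row: set of flags present for one compound (invented toy data).
transactions = [
    {"logP_high", "MW_low", "ring_present", "active"},
    {"logP_high", "ring_present", "active"},
    {"MW_low", "ring_present"},
    {"logP_high", "MW_low", "active"},
    {"MW_low", "ring_present"},
    {"logP_high", "ring_present", "active"},
]


def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)


items = sorted(set().union(*transactions))
# Frequent itemsets of size 1-3 with support >= 0.4, then rules "X -> active".
frequent = [set(c) for k in (1, 2, 3) for c in combinations(items, k)
            if support(set(c)) >= 0.4]

for itemset in frequent:
    if "active" in itemset and len(itemset) > 1:
        antecedent = itemset - {"active"}
        confidence = support(itemset) / support(antecedent)
        if confidence >= 0.8:
            print(sorted(antecedent), "-> active",
                  f"support={support(itemset):.2f}", f"confidence={confidence:.2f}")
```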

Keywords: association rules, classification, data mining, Quantitative Structure-Activity Relationship.

8077 Optimization of Conditions for Xanthan Gum Production from Waste Date in Submerged Fermentation

Authors: S. Moshaf, Z. Hamidi-Esfahani, M. H. Azizi

Abstract:

Xanthan gum is one of the major commercial biopolymers. Due to its excellent rheological properties, xanthan gum is used in many applications, mainly in the food industry. Commercial production of xanthan gum uses glucose as the carbon substrate, so the price of xanthan production is high; one way to decrease the price is to use cheaper substrates such as agricultural wastes. Iran is one of the biggest date-producing countries, yet approximately 50% of date production is wasted annually. The goal of this study is to produce xanthan gum from waste dates using Xanthomonas campestris PTCC1473 by submerged fermentation. The effects of three variables, phosphorus amount, nitrogen amount and agitation rate, were studied at three levels using response surface methodology (RSM). Statistical analysis with Design Expert 7.0.0 software showed that xanthan production increased with increasing phosphorus level, that a low nitrogen level led to higher xanthan production, and that increasing agitation had a positive influence on the xanthan amount. The statistical model identified the optimum conditions for xanthan as a nitrogen amount of 3.15 g/l, a phosphorus amount of 5.03 g/l and an agitation rate of 394.8 rpm. For model validation, experiments under the optimum conditions were carried out; the mean xanthan result was 6.72 ± 0.26, close to the value predicted by the RSM.

Keywords: Optimization, RSM, waste date, xanthan gum, Xanthomonas campestris.

8076 A Survey of Model Comparison Strategies and Techniques in Model Driven Engineering

Authors: Junaid Rashid, Waqar Mehmood, Muhammad Wasif Nisar

Abstract:

This survey paper shows the recent state of model comparison as it applies to Model Driven Engineering. In Model Driven Engineering, calculating the difference between models is an important and challenging task. Model differencing involves a number of steps, starting with identifying and matching the elements of the models. In this paper, we discuss how model matching is accomplished, the strategies, techniques and types of models involved, and we also discuss future directions. We found that many of the latest model comparison strategies are geared towards enabling metamodel-based and similarity-based matching, and that model versioning is the most dominant application of model comparison. Recently, work on comparison for versioning has begun to decline, giving way to other applications. Finally, there is wide variation among the tools in the amount of user effort needed to perform model comparisons, as some require more effort in order to achieve greater generality and expressive power.

Keywords: Model comparison, model clone detection, model versioning, EMF Model, model diff.

8075 Analytical and Statistical Study of the Parameters of Expansive Soil

Authors: A. Medjnoun, R. Bahar

Abstract:

The disorders caused by the shrink-swell phenomenon are prevalent in arid and semi-arid regions in the presence of swelling clay. This soil has the characteristic of changing state under the effect of water solicitation (wetting and drying). A set of geotechnical parameters is necessary to characterize this soil type, such as state parameters, physical and chemical parameters, and mechanical parameters. Some of these tests are very long and some are very expensive, hence the use of prediction methods. The complexity of this phenomenon and the difficulty of its characterization have prompted researchers to use several identification parameters in the prediction of swelling potential. This document is an analytical and statistical study of the geotechnical parameters affecting the swelling potential of clays. The work is performed on a database obtained from investigations of swelling Algerian soils. The observations obtained have helped us to understand the structure of soil swelling and its behavior.

Keywords: Analysis, estimated model, parameter identification, Swelling of clay.

8074 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time and maximum permissible shortage. Since the costs cannot be predicted with certainty, the data are assumed to behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model and then solved with three multi-objective decision-making (MODM) methods. The three methods are compared on the optimal values of the two objective functions and on the central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results demonstrate that the augmented-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and the CPU time. Sensitivity analysis is carried out to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.

Keywords: Economic production quantity, random cost, supply chain management, vendor-managed inventory.

8073 Experimental Investigation of On-Body Channel Modelling at 2.45 GHz

Authors: Hasliza A. Rahim, Fareq Malek, Nur A. M. Affendi, Azuwa Ali, Norshafinash Saudin, Latifah Mohamed

Abstract:

This paper presents an experimental investigation of on-body channel fading at 2.45 GHz considering two states of user body movement: stationary and mobile. A pair of body-worn antennas was utilized in the measurement campaign. A statistical analysis was performed by comparing the measured on-body path loss with five well-known distributions: lognormal, normal, Nakagami, Weibull and Rayleigh. The results showed that the average path loss with a moving arm varied more than the path loss in the sitting position for the upper-arm-to-left-chest link, by up to 3.5 dB. The analysis also concluded that the Nakagami distribution provided the best fit for most static on-body links in the standing and sitting positions, while the arm movement is best described by the log-normal distribution.

Keywords: On-Body channel communications, fading characteristics, statistical model.

8072 A Cross-Gender Statistical Analysis of Tuvinian Intonation Features in Comparison With Uzbek and Azerbaijani

Authors: D. Beziakina, E. Bulgakova

Abstract:

The paper deals with cross-gender and cross-linguistic comparison of pitch characteristics for Tuvinian with two other Turkic languages - Uzbek and Azerbaijani, based on the results of statistical analysis of pitch parameter values and intonation patterns used by male and female speakers.

The main goal of our work is to obtain the ranges of pitch parameter values typical for Tuvinian speakers for the purpose of automatic language identification. We also propose a cross-gender analysis of declarative intonation in the poorly studied Tuvinian language.

The ranges of pitch parameter values were obtained by means of specially developed software that deals with the distribution of pitch values and allows us to obtain statistical language-specific pitch intervals.

Keywords: Speech analysis, Statistical analysis, Speaker recognition, Identification of person.

8071 Recognition of Isolated Speech Signals using Simplified Statistical Parameters

Authors: Abhijit Mitra, Bhargav Kumar Mitra, Biswajoy Chatterjee

Abstract:

We present a novel scheme to recognize isolated speech signals using certain statistical parameters derived from those signals. The determination of the statistical estimates is based on extracted signal information rather than the original signal information, in order to reduce the computational complexity. Subtle details of these estimates, after extracting the speech signal from ambient noise, are first exploited to segregate polysyllabic words from monosyllabic ones. Precise recognition of each distinct word is then carried out by analyzing the histogram obtained from this information.

Keywords: Isolated speech signals, Block overlapping technique, Positive peaks, Histogram analysis.

8070 Simulation of Snow Cover Area by a Physically Based Model

Authors: Hossein Zeinivand, Florimond De Smedt

Abstract:

Snow cover is an important phenomenon in hydrology; hence modeling snow accumulation and melting is an important issue in places where snowmelt contributes significantly to runoff and has a significant effect on the water balance. Physics-based models are invariably distributed, with the basin disaggregated into zones or grid cells, and satellite images provide valuable data for verifying the accuracy of spatially distributed model outputs. In this study a spatially distributed, physically based model (WetSpa) was applied to predict snow cover and melting in the Latyan dam watershed in Iran. Snowmelt is simulated using an energy balance approach. The model is applied and calibrated with one year of observed daily precipitation, air temperature, wind speed, and daily potential evaporation. The predicted snow-covered area (SCA) is compared with remotely sensed MODIS images, and the results show that the simulated SCA agrees well with the SCA derived from the MODIS images. The model performance is also tested by statistical and graphical comparison of simulated and measured discharges entering the Latyan dam reservoir.

Keywords: Physically based model, satellite image, snow cover.
