Search results for: Markov chain Monte Carlo model composition
Paper Count: 139

109 Evolutionary Training of Hybrid Systems of Recurrent Neural Networks and Hidden Markov Models

Authors: Rohitash Chandra, Christian W. Omlin

Abstract:

We present a hybrid architecture of recurrent neural networks (RNNs) inspired by hidden Markov models (HMMs) and train it using genetic algorithms to learn and represent dynamical systems. The architecture is trained on a set of deterministic finite-state automata strings, and its generalization performance is observed on a new set of strings not present in the training data. In this way, we show that the hybrid HMM-RNN system can learn and represent deterministic finite-state automata. We ran experiments with different population sizes in the genetic algorithm and with different weight initializations to determine which settings train the hybrid architecture best. The results show that the hybrid architecture can learn and represent dynamical systems, with the best training and generalization performance achieved when it is initialized with random real weight values in the range -15 to 15.

Keywords: Deterministic finite-state automata, genetic algorithm, hidden Markov models, hybrid systems, recurrent neural networks.

PDF Downloads: 1847
108 Periodic Storage Control Problem

Authors: Ru-Shuo Sheu, Han-Hsin Chou, Te-Shyang Tan

Abstract:

Considering a reservoir with periodic states and different cost functions with penalty, its release rules can be modeled as a periodic Markov decision process (PMDP). First, we prove that the policy-iteration algorithm also works for the PMDP. Then, using the policy-iteration algorithm, we obtain the optimal policies for a special aperiodic reservoir model with two cost functions under a large penalty, and we discuss the case in which the penalty is small.
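
The policy-iteration loop the abstract relies on alternates exact policy evaluation with greedy improvement. Below is a minimal sketch for a generic finite, discounted MDP (not the paper's periodic formulation); the transition tensor P, cost matrix c, discount factor, and toy example are assumptions for illustration only.

import numpy as np

def policy_iteration(P, c, gamma=0.95):
    """Policy iteration for a finite, discounted MDP (costs minimized).

    P : (A, S, S) array, P[a, s, s2] = transition probability s -> s2 under a
    c : (A, S) array, immediate cost of taking action a in state s
    Returns the optimal policy (one action per state) and its value function.
    """
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = c_pi exactly
        P_pi = P[policy, np.arange(S), :]
        c_pi = c[policy, np.arange(S)]
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, c_pi)
        # Policy improvement: greedy (minimum expected cost) action per state
        q = c + gamma * (P @ v)              # shape (A, S)
        new_policy = q.argmin(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

# Tiny 2-action, 3-state example with made-up transitions and costs
P = np.array([[[0.9, 0.1, 0.0], [0.0, 0.9, 0.1], [0.1, 0.0, 0.9]],
              [[0.2, 0.8, 0.0], [0.0, 0.2, 0.8], [0.8, 0.0, 0.2]]])
c = np.array([[1.0, 2.0, 0.0], [3.0, 1.0, 0.5]])
print(policy_iteration(P, c))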

Keywords: periodic Markov decision process, periodic state, policy-iteration algorithm.

PDF Downloads: 1201
107 Hidden Markov Model for the Simulation Study of Neural States and Intentionality

Authors: R. B. Mishra

Abstract:

The Hidden Markov Model (HMM) has been used to predict and determine the states that generate different neural activations as well as mental working conditions. This paper addresses two applications of the HMM. The first determines the optimal sequence over two neural states, Active (AC) and Inactive (IA), given three emissions (observations) corresponding to the Not Working (NW), Waiting (WT), and Working (W) conditions of human beings. The second determines the optimal sequence over the intentionality states Belief (B), Desire (D), and Intention (I) for the same three observation sequences: NW, WT, and W. The computational results are encouraging and useful.
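
Finding an optimal state sequence for a discrete HMM, as described above, is typically done with the Viterbi algorithm. The sketch below illustrates it for a two-state, three-symbol model; the probability values are illustrative placeholders, not the paper's estimates.

import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely hidden-state path for a discrete HMM.

    pi : (S,) initial state probabilities
    A  : (S, S) transition probabilities, A[i, j] = P(j | i)
    B  : (S, O) emission probabilities,  B[i, k] = P(symbol k | state i)
    obs: sequence of observation indices
    """
    S, T = len(pi), len(obs)
    delta = np.zeros((T, S))           # best log-probability ending in each state
    psi = np.zeros((T, S), dtype=int)  # back-pointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)       # (S, S)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Illustrative two-state (Active/Inactive), three-symbol (NW/WT/W) example
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.1, 0.3, 0.6], [0.6, 0.3, 0.1]])
print(viterbi(pi, A, B, [0, 1, 2, 2]))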

Keywords: BDI, HMM, neural activation, optimal states, working conditions.

PDF Downloads: 832
106 Hybrid Markov Game Controller Design Algorithms for Nonlinear Systems

Authors: R. Sharma, M. Gopal

Abstract:

Markov games can be effectively used to design controllers for nonlinear systems. The paper presents two novel controller design algorithms that incorporate ideas from the game-theory literature to address safety and consistency issues of the 'learned' control strategy. A more widely used approach for controller design is H∞ optimal control, which suffers from high computational demand and may at times be infeasible. We generate an optimal control policy for the agent (controller) via a simple linear program, enabling the controller to learn about the unknown environment. In our formulation, this unknown environment corresponds to the behavior rules of the noise, modeled as the opponent. The proposed approaches aim to achieve 'safe-consistent' and 'safe-universally consistent' controller behavior by hybridizing the 'min-max', 'fictitious play', and 'cautious fictitious play' approaches drawn from game theory. We empirically evaluate the approaches on a simulated inverted pendulum swing-up task and compare their performance against standard Q-learning.

Keywords: Fictitious Play, Cautious Fictitious Play, Inverted Pendulum, Controller, Markov Games, Mobile Robot.

PDF Downloads: 1393
105 An Estimating Parameter of the Mean in Normal Distribution by Maximum Likelihood, Bayes, and Markov Chain Monte Carlo Methods

Authors: Autcha Araveeporn

Abstract:

This paper compares parameter estimation of the mean of a normal distribution by the Maximum Likelihood (ML), Bayes, and Markov Chain Monte Carlo (MCMC) methods. The ML estimator is the sample average, the Bayes estimator is obtained from the prior distribution, and the MCMC estimator is approximated by Gibbs sampling from the posterior distribution. After estimation, hypothesis testing is used to check the robustness of the estimators. Data are simulated from a normal distribution with true mean 2 and variance 4, 9, and 16, with sample sizes of 10, 20, 30, and 50. The results show that the ML and MCMC estimates differ perceivably from the true parameter when the sample size is 10 or 20 and the variance is 16. Furthermore, the Bayes estimator, computed with a prior of mean 1 and variance 12, shows a significant difference in the mean for variance 9 at sample sizes 10 and 20.
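
As a rough illustration of the MCMC estimator described above, the sketch below Gibbs-samples the mean (and variance) of a normal sample under a conjugate Normal prior on the mean and an Inverse-Gamma prior on the variance. The prior mean 1 and prior variance 12 follow the abstract; the Inverse-Gamma hyperparameters, iteration counts, and simulated data are assumptions.

import numpy as np

def gibbs_normal_mean(x, mu0=1.0, tau0_sq=12.0, a0=2.0, b0=2.0,
                      n_iter=5000, burn_in=1000, seed=0):
    """Gibbs sampler for the mean (and variance) of a normal sample.

    Prior: mu ~ N(mu0, tau0_sq), sigma^2 ~ Inverse-Gamma(a0, b0).
    Returns posterior draws of mu after burn-in.
    """
    rng = np.random.default_rng(seed)
    n, xbar = len(x), np.mean(x)
    mu, sigma_sq = xbar, np.var(x)
    draws = []
    for it in range(n_iter):
        # mu | sigma^2, x  ~  Normal (conjugate update)
        prec = 1.0 / tau0_sq + n / sigma_sq
        mean = (mu0 / tau0_sq + n * xbar / sigma_sq) / prec
        mu = rng.normal(mean, np.sqrt(1.0 / prec))
        # sigma^2 | mu, x  ~  Inverse-Gamma
        a = a0 + n / 2.0
        b = b0 + 0.5 * np.sum((x - mu) ** 2)
        sigma_sq = 1.0 / rng.gamma(a, 1.0 / b)
        if it >= burn_in:
            draws.append(mu)
    return np.array(draws)

# Simulated data roughly matching the paper's setting (mean 2, variance 9, n = 20)
x = np.random.default_rng(1).normal(2.0, 3.0, size=20)
print("posterior mean of mu:", gibbs_normal_mean(x).mean())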

Keywords: Bayes method, Markov Chain Monte Carlo method, Maximum Likelihood method, normal distribution.

PDF Downloads: 1384
104 Ottoman Script Recognition Using Hidden Markov Model

Authors: Ayşe Onat, Ferruh Yildiz, Mesut Gündüz

Abstract:

In this study, an OCR system for the segmentation, feature extraction, and recognition of handwritten Ottoman scripts has been developed. Recognition of handwritten characters is a difficult process. The segmentation and feature extraction stages are based on geometrical feature analysis, followed by a chain code transformation of the main strokes of each character. The output of segmentation is a set of well-defined segments that can be fed into any classification approach. The classes of main strokes are identified through a left-right Hidden Markov Model (HMM).

Keywords: Chain Code, HMM, Ottoman Script Recognition, OCR

PDF Downloads: 2252
103 Part of Speech Tagging Using Statistical Approach for Nepali Text

Authors: Archit Yajnik

Abstract:

Part-of-speech tagging has always been a challenging task in natural language processing. This article presents POS tagging for Nepali text using a Hidden Markov Model and the Viterbi algorithm. Training and testing data sets are randomly separated from the annotated Nepali corpus, and both methods are applied to them. The Viterbi algorithm is found to be computationally faster and more accurate than the plain HMM approach, achieving an accuracy of 95.43%. An error analysis of where the mismatches occur is discussed in detail.
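
An HMM tagger of the kind described here is usually trained by counting tag-to-tag transitions and tag-to-word emissions in the annotated corpus; decoding then uses Viterbi. The sketch below shows only the counting step, on a tiny made-up English corpus (the Nepali corpus and tag set are not reproduced here).

from collections import Counter, defaultdict

def train_hmm_tagger(tagged_sentences):
    """Estimate HMM transition and emission probabilities by counting
    over a tagged corpus: P(tag_i | tag_{i-1}) and P(word | tag).

    tagged_sentences: list of sentences, each a list of (word, tag) pairs.
    """
    transition = defaultdict(Counter)
    emission = defaultdict(Counter)
    for sentence in tagged_sentences:
        prev = "<s>"                       # sentence-start pseudo-tag
        for word, tag in sentence:
            transition[prev][tag] += 1
            emission[tag][word] += 1
            prev = tag
    def normalise(table):
        return {k: {v: c / sum(cnt.values()) for v, c in cnt.items()}
                for k, cnt in table.items()}
    return normalise(transition), normalise(emission)

# Tiny illustrative (not Nepali) corpus
corpus = [[("ram", "NOUN"), ("eats", "VERB"), ("rice", "NOUN")],
          [("sita", "NOUN"), ("reads", "VERB")]]
A, B = train_hmm_tagger(corpus)
print(A["<s>"], B["VERB"])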

Keywords: Hidden Markov model, Viterbi algorithm, POS tagging, natural language processing.

PDF Downloads: 1655
102 Markov Chain Based QoS Support for Wireless Body Area Network Communication in Health Monitoring Services

Authors: R. A. Isabel, E. Baburaj

Abstract:

Wireless Body Area Networks (WBANs) are essential for real-time health monitoring of patients and for the diagnosis of many diseases. WBANs comprise many sensors to monitor a large range of ambient conditions. Quality of Service (QoS) is a key challenge in WBANs, because the different state information of the neighboring nodes has to be monitored accurately. However, energy consumption increases when predicting and maintaining exact information in highly dynamic environments. In order to reduce energy consumption and end-to-end delay, a Markov Chain Based Quality of Service Support (MC-QoSS) method is designed for the health monitoring services of WBAN communication. Energy consumption is reduced by forming a Markov chain over high-energy nodes in the sensor network communication path, and low-energy sensor nodes are removed using the transition probability in order to reduce end-to-end delay. After choosing the communication path through high-energy nodes, packets are sent from the source node to the sink node with a higher packet delivery ratio. The simulation results show that the MC-QoSS method improves the packet delivery ratio and reduces energy consumption with minimum end-to-end delay compared to existing methods.

Keywords: Wireless body area networks, quality of service, Markov chain, health monitoring services.

PDF Downloads: 1390
101 Genetic Algorithm and Padé-Moment Matching for Model Order Reduction

Authors: Shilpi Lavania, Deepak Nagaria

Abstract:

A mixed method for model order reduction is presented in this paper. The denominator polynomial is derived by matching both Markov parameters and time moments, whereas the numerator polynomial derivation and error minimization are done using a genetic algorithm. The efficiency of the proposed method is assessed in terms of the closeness of the response of the reduced-order model to that of the higher-order original model, as well as a comparison of the integral square error.

Keywords: Model Order Reduction (MOR), control theory, Markov parameters, time moments, genetic algorithm, Single Input Single Output (SISO).

PDF Downloads: 3473
100 Computing Transition Intensity Using Time-Homogeneous Markov Jump Process: Case of South African HIV/AIDS Disposition

Authors: A. Bayaga

Abstract:

This research provides a technical account of estimating transition probabilities using a time-homogeneous Markov jump process applied to South African HIV/AIDS data from Statistics South Africa. It employs a Maximum Likelihood Estimation (MLE) model to explore the possible influence of the transition probabilities on mortality cases, with the data drawn from actual Statistics South Africa records. This was conducted via an integrated demographic and epidemiological model of the South African HIV/AIDS epidemic, fitted to age-specific HIV prevalence data and recorded death data using the MLE model. Although previous model results suggest that HIV prevalence in South Africa declined and AIDS mortality rates fell over 2002-2013, our results differ markedly from the generally accepted HIV models (Spectrum/EPP and ASSA2008) in South Africa. Supplementary research is therefore needed to refine the demographic parameters in the model and to apply it to each of the nine provinces of South Africa.
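
For a time-homogeneous Markov jump process observed completely, the maximum likelihood estimate of each transition intensity is the number of observed jumps divided by the time spent in the originating state. The sketch below shows that estimator under the (strong) assumption of fully observed paths; the paper itself works from prevalence and mortality data, which requires a more elaborate likelihood, and the toy trajectories are made up.

import numpy as np

def transition_intensity_mle(paths, n_states):
    """MLE of the generator matrix Q of a time-homogeneous Markov jump
    process from fully observed paths: q_ij = N_ij / T_i (i != j),
    where N_ij counts i -> j jumps and T_i is total time spent in i.

    paths: list of trajectories, each a list of (state, sojourn_time) pairs.
    """
    N = np.zeros((n_states, n_states))
    T = np.zeros(n_states)
    for path in paths:
        for (s, dt), (s_next, _) in zip(path, path[1:]):
            N[s, s_next] += 1
            T[s] += dt
        # time spent in the final observed state (no outgoing jump)
        T[path[-1][0]] += path[-1][1]
    Q = np.divide(N, T[:, None], out=np.zeros_like(N), where=T[:, None] > 0)
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

# Illustrative three-state example (e.g. susceptible, infected, dead)
paths = [[(0, 2.0), (1, 3.5), (2, 0.0)], [(0, 5.0), (1, 1.0), (2, 0.0)]]
print(transition_intensity_mle(paths, 3))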

Keywords: AIDS mortality rates, Epidemiological model, Time-homogeneous Markov Jump Process, Transition Probability, Statistics South Africa.

PDF Downloads: 2121
99 Performance of the Strong Stability Method in the Univariate Classical Risk Model

Authors: Safia Hocine, Zina Benouaret, Djamil Aïssani

Abstract:

In this paper, we study the performance of the strong stability method for the univariate classical risk model. We are interested in the stability bounds established using two approaches: the first is based on the strong stability method developed for general Markov chains, and the second is based on the theory of regenerative processes. Adopting an algorithmic procedure, we study the performance of the stability method in the case of exponentially distributed claim amounts. After presenting the stability bounds numerically and graphically, we interpret and compare the results.
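
For the exponential-claim case studied here, the classical (Cramér-Lundberg) ruin probability has a well-known closed form against which numerical bounds can be checked. The sketch below simply evaluates that formula; the arrival rate, mean claim size, and premium rate are illustrative assumptions, not the paper's parameters.

import math

def ruin_probability_exponential(u, lam, mean_claim, premium_rate):
    """Exact ruin probability for the classical (Cramer-Lundberg) risk model
    with Poisson(lam) claim arrivals and exponential claim sizes of mean m:

    psi(u) = (lam * m / c) * exp(-(c - lam * m) / (c * m) * u),  valid for c > lam * m.
    """
    c, m = premium_rate, mean_claim
    rho = lam * m / c                      # must satisfy rho < 1 (positive loading)
    return rho * math.exp(-(c - lam * m) / (c * m) * u)

# Illustrative parameters: 1 claim per unit time, mean claim 1, 20% premium loading
for u in (0.0, 5.0, 10.0):
    print(u, round(ruin_probability_exponential(u, lam=1.0, mean_claim=1.0,
                                                premium_rate=1.2), 4))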

Keywords: Markov Chain, regenerative processes, risk models, ruin probability, strong stability.

PDF Downloads: 1093
98 Residual Life Prediction for a System Subject to Condition Monitoring and Two Failure Modes

Authors: Akram Khaleghei Ghosheh Balagh, Viliam Makis

Abstract:

In this paper, we investigate the residual life prediction problem for a partially observable system subject to two failure modes, namely a catastrophic failure and a failure due to system degradation. The system is subject to condition monitoring, and the degradation process is described by a hidden Markov model with unknown parameters. A parameter estimation procedure based on an EM algorithm is developed, and the formulas for the conditional reliability function and the mean residual life are derived and illustrated by a numerical example.

Keywords: Partially observable system, hidden Markov model, competing risks, residual life prediction.

PDF Downloads: 1997
97 Geospatial Assessment of State Lands in the Cape Coast Urban Area

Authors: E. B. Quarcoo, I. Yakubu, K. J. Appau

Abstract:

Current land use and land cover (LULC) dynamics in Ghana have revealed considerable changes in settlement spaces. This study therefore merges cellular automata and Markov chain models, using remotely sensed data and Geographical Information System (GIS) approaches, to monitor, map, and detect spatio-temporal LULC change in state lands within the Cape Coast Metropolis. Multi-temporal satellite images from 1986-2020 were pre-processed, geo-referenced, and then mapped using supervised maximum likelihood classification to investigate the state lands' land cover history, with an overall mapping accuracy of approximately 85%. The observed rate of change favored built-up area, 9.8 (12.58 km²), to the detriment of vegetation, 5.14 (12.68 km²); on average, 0.37 km² (91.43 acres, or 37.00 ha) of the landscape was transformed yearly. Subsequently, the CA-Markov model was used to project the potential LULC of the study area for 2030. According to the projected 2030 LULC map, the pattern of vegetation transitioning into built-up regions will continue over the following ten years as a result of urban growth.
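
The Markov-chain half of a CA-Markov projection multiplies current class proportions by a calibrated transition matrix. The sketch below shows only that step, with made-up class shares and probabilities; the study's calibrated 1986-2020 matrix is not reproduced, and the spatial cellular-automaton allocation step is omitted.

import numpy as np

# Illustrative 3-class transition matrix (rows: from, cols: to) for one
# calibration interval; classes 0 = built-up, 1 = vegetation, 2 = other.
# The probabilities and 2020 shares below are assumptions, not the study's values.
P = np.array([[0.95, 0.03, 0.02],
              [0.10, 0.85, 0.05],
              [0.08, 0.07, 0.85]])
shares_2020 = np.array([0.40, 0.45, 0.15])

def project_shares(shares, P, steps=1):
    """Project LULC class proportions forward by repeated Markov transitions."""
    for _ in range(steps):
        shares = shares @ P
    return shares

print("projected 2030 shares:", project_shares(shares_2020, P).round(3))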

Keywords: LULC, cellular automata, Markov chain, state lands, urbanisation, public lands, Cape Coast Metropolis.

PDF Downloads: 52
96 Ruin Probability for a Markovian Risk Model with Two-type Claims

Authors: Dongdong Zhang, Deran Zhang

Abstract:

In this paper, a Markovian risk model with two types of claims is considered. In such a risk model, the occurrences of the two claim types are described by two point processes {N_i(t), t ≥ 0}, i = 1, 2, where N_i(t) is the number of jumps of the Markov jump process {X_i(t), t ≥ 0} during the interval (0, t]. The ruin probability Ψ(u) of a company facing such a risk model is mainly discussed. An integral equation satisfied by Ψ(u) is obtained, and bounds for the convergence rate of Ψ(u) are given by using the key renewal theorem.

Keywords: Risk model, ruin probability, Markov jump process, integral equation.

PDF Downloads: 1322
95 A Condition-Based Maintenance Policy for Multi-Unit Systems Subject to Deterioration

Authors: Nooshin Salari, Viliam Makis

Abstract:

In this paper, we propose a condition-based maintenance policy for multi-unit systems that accounts for economic dependency among units. We consider a system composed of N identical units, each of which deteriorates independently. The deterioration process of each unit is modeled as a three-state continuous-time homogeneous Markov chain with two working states and a failure state. The average production rate of a unit varies across working states, and the demand rate of the system is constant. Units are inspected at equidistant time epochs, and the decision to perform maintenance is determined by the number of units in the failure state: if this number exceeds a critical level, maintenance is initiated, failed units are replaced correctively, and deteriorated units are maintained preventively. Our objective is to determine the optimal number of failed units that triggers maintenance, minimizing the long-run expected average cost per unit time. The problem is formulated and solved in the semi-Markov decision process (SMDP) framework. A numerical example demonstrates the proposed policy, and a comparison with the corrective maintenance policy is presented.

Keywords: Reliability, production, maintenance optimization, Semi-Markov Decision Process.

PDF Downloads: 747
94 Featured based Segmentation of Color Textured Images using GLCM and Markov Random Field Model

Authors: Dipti Patra, Mridula J

Abstract:

In this paper, we propose a new image segmentation approach for colour textured images. The proposed method consists of two stages. In the first stage, textural features based on the gray level co-occurrence matrix (GLCM) are computed for the regions of interest (ROI) considered for each class; the ROIs act as ground truth for the classes. The Ohta colour model (I1, I2, I3) is used for segmentation, and the statistical mean feature of the I2 component at a certain inter-pixel distance (IPD) was found to be the optimal textural feature for further segmentation. In the second stage, the resulting feature matrix is treated as a degraded version of the image labels, and the unknown labels are modeled as a Markov Random Field (MRF). The labels are estimated under the maximum a posteriori (MAP) criterion using the ICM algorithm. The performance of the proposed approach is compared with existing schemes, JSEG and another scheme that uses GLCM and MRF in the RGB colour space. The proposed method is found to outperform the existing ones in terms of segmentation accuracy with an acceptable rate of convergence. The results are validated on synthetic and real textured images.
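
A minimal sketch of the first-stage feature described above: a horizontal-offset GLCM and its mean statistic, computed on an already-quantized channel standing in for the Ohta I2 component. The gray-level count, offset, and random test image are assumptions for illustration.

import numpy as np

def glcm_mean_feature(q, distance=1, levels=8):
    """GLCM 'mean' texture feature for a horizontal inter-pixel distance.

    q : 2-D integer array already quantized to gray levels 0..levels-1
        (e.g. the Ohta I2 component of a colour image).
    """
    glcm = np.zeros((levels, levels))
    left, right = q[:, :-distance], q[:, distance:]
    np.add.at(glcm, (left.ravel(), right.ravel()), 1)   # count co-occurrences
    glcm /= glcm.sum()                                   # normalize to probabilities
    i = np.arange(levels)
    return glcm, float((i * glcm.sum(axis=1)).sum())     # mean over row index

img = np.random.default_rng(0).integers(0, 8, size=(64, 64))
print(glcm_mean_feature(img, distance=2)[1])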

Keywords: Texture Image Segmentation, Gray Level Co-occurrence Matrix, Markov Random Field Model, Ohta colour space, ICM algorithm.

PDF Downloads: 2114
93 Stability Bound of Ruin Probability in a Reduced Two-Dimensional Risk Model

Authors: Zina Benouaret, Djamil Aissani

Abstract:

In this work, we introduce qualitative and quantitative concepts of the strong stability method for a risk process modeling two lines of business of the same insurance company, or an insurance and a reinsurance company that divide both claims and premiums between them in a certain proportion. The proposed approach is based on identifying the ruin probability associated with the considered model with a stationary distribution of a Markov random process called the reversed process. Our objective, after clarifying the conditions and the perturbation domain of the parameters, is to obtain a stability inequality for the ruin probability, which is applied to estimate the error made when approximating a model with perturbed parameters by the considered model. In the stability bound obtained, all constants are written explicitly.

Keywords: Markov chain, risk models, ruin probabilities, strong stability analysis.

PDF Downloads: 821
92 A Novel Approach of Route Choice in Stochastic Time-varying Networks

Authors: Siliang Wang, Minghui Wang

Abstract:

Many existing studies use Markov decision processes (MDPs) to model optimal route choice in stochastic, time-varying networks. However, transforming large volumes of variable traffic data into optimal route decisions with MDPs is a computational challenge in real transportation networks. In this paper, we model finite-horizon MDPs using directed hypergraphs. It is shown that the route choice problem in stochastic, time-varying networks can be formulated as a minimum-cost hyperpath problem and solved in linear time. We finally demonstrate the significant computational advantages of the introduced methods.
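
A finite-horizon route-choice MDP of the kind described above can be illustrated, in much simplified form, by backward induction over a time-expanded network using expected arc costs. This is a sketch only, not the paper's hypergraph/hyperpath formulation; the toy network, horizon, and costs are assumptions.

import numpy as np

def min_expected_cost_route(succ, horizon, n_nodes, dest):
    """Backward induction for least-expected-cost routing in a
    stochastic time-varying network (finite-horizon MDP view).

    succ[t][i] is a list of (j, expected_cost) arcs leaving node i at time t;
    costs may differ by departure time. Returns the cost-to-go table V[t][i].
    """
    INF = float("inf")
    V = np.full((horizon + 1, n_nodes), INF)
    V[:, dest] = 0.0
    for t in range(horizon - 1, -1, -1):
        for i in range(n_nodes):
            if i == dest:
                continue
            for j, cost in succ[t].get(i, []):
                V[t, i] = min(V[t, i], cost + V[t + 1, j])
    return V

# Tiny illustrative network: 3 nodes, horizon 2, destination node 2
succ = [{0: [(1, 2.0), (2, 5.0)], 1: [(2, 1.0)]},
        {0: [(2, 6.0)], 1: [(2, 1.5)]}]
print(min_expected_cost_route(succ, horizon=2, n_nodes=3, dest=2))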

Keywords: Markov decision processes (MDPs), stochastic time-varying networks, hypergraphs, route choice.

PDF Downloads: 1498
91 Mobile Robot Control by Von Neumann Computer

Authors: E. V. Larkin, T. A. Akimenko, A. V. Bogomolov, A. N. Privalov

Abstract:

The digital control system of a mobile robot (MR) is considered. It is shown that sequential interpretation of control algorithm operators, unfolding in physical time, leads to time delays between reading data from sensors and outputting data to actuators. Another destabilizing control factor is the presence of backlash in the joints connecting an actuator to its executive unit. A complex model of the control system is worked out, which takes into account the dynamics of the MR, the dynamics of the digital controller, and backlash in the actuators. The digital controller model is divided into two parts: the first describes the control law embedded in the controller as a control program implementing a polling procedure for transactions with sensors and actuators; the second describes the time delays that occur in a Von Neumann-type controller when processing data. To estimate time intervals, the algorithm is represented as an ergodic semi-Markov process, and for an ergodic semi-Markov process of common form, a method is proposed for estimating the wandering time from one arbitrary state to another. An example shows how the backlash and time delays affect the quality characteristics of the MR control system.

Keywords: Mobile robot, backlash, control algorithm, Von Neumann controller, semi-Markov process, time delay.

PDF Downloads: 283
90 Image Modeling Using Gibbs-Markov Random Field and Support Vector Machines Algorithm

Authors: Refaat M Mohamed, Ayman El-Baz, Aly A. Farag

Abstract:

This paper introduces a novel approach to estimating the clique potentials of Gibbs Markov random field (GMRF) models using the Support Vector Machines (SVM) algorithm and Mean Field (MF) theory. The proposed approach models the potential function associated with each clique shape of the GMRF model as a Gaussian-shaped kernel, so that the energy function of the GMRF takes the form of a weighted sum of Gaussian kernels. This formulation motivates the use of the SVM, with Mean Field theory applied in its learning, to estimate the energy function. The approach has been tested on synthetic texture images and is shown to provide satisfactory results in retrieving the synthesizing parameters.

Keywords: Image Modeling, MRF, Parameters Estimation, SVM Learning.

PDF Downloads: 1590
89 Support Vector Machine Approach for Classification of Cancerous Prostate Regions

Authors: Metehan Makinacı

Abstract:

The objective of this paper is to apply the support vector machine (SVM) approach to the classification of cancerous and normal regions of prostate images. Three kinds of textural features are extracted and used for the analysis: parameters of the Gauss-Markov random field (GMRF), the correlation function, and relative entropy. Prostate images are acquired by a system consisting of a microscope, a video camera, and a digitizing board. Cross-validated classification over a database of 46 images is implemented to evaluate the performance. In SVM classification, sensitivity and specificity of 96.2% and 97.0%, respectively, are achieved for 32x32-pixel blocks, with an overall accuracy of 96.6%. Classification performance is compared with artificial neural network and k-nearest neighbor classifiers. Experimental results demonstrate that the SVM approach gives the best performance.

Keywords: Computer-aided diagnosis, support vector machines, Gauss-Markov random fields, texture classification.

PDF Downloads: 1755
88 Application of Finite Dynamic Programming to Decision Making in the Use of Industrial Residual Water Treatment Plants

Authors: Oscar Vega Camacho, Andrea Vargas Guevara, Ellery Rowina Ariza

Abstract:

This paper presents the application of finite dynamic programming, specifically the Markov chain model, as part of the decision-making process of a company in the cosmetics sector located in the vicinity of Bogotá D.C. The objective of this process was to decide whether the company should completely reconstruct its wastewater treatment plant or instead optimize the plant through the addition of equipment. The goal of both options was to make the improvements required to comply with the parameters established by national legislation for the treatment of waste before it is released into the environment. This technique allows the company to select the best option and implement a solution for the processing of waste that minimizes environmental damage and the acquisition and implementation costs.

Keywords: Decision making, Markov chain, optimization, wastewater.

PDF Downloads: 1971
87 Applying Gibbs Sampler for Multivariate Hierarchical Linear Model

Authors: Satoshi Usami

Abstract:

Among various HLM techniques, the Multivariate Hierarchical Linear Model (MHLM) is desirable, particularly when multivariate criterion variables are collected and the covariance structure carries information valuable for data analysis. In order to reflect prior information or to obtain stable results when the sample size and the number of groups are not sufficiently large, Bayesian methods have often been employed in hierarchical data analysis. In these cases, although the Markov Chain Monte Carlo (MCMC) method is a rather powerful tool for parameter estimation, MCMC procedures have not been formulated for the MHLM. For this reason, this research presents concrete procedures for parameter estimation through the use of the Gibbs sampler. Lastly, several future topics for the use of the MCMC approach for HLM are discussed.

Keywords: Gibbs sampler, Hierarchical Linear Model, Markov Chain Monte Carlo, Multivariate Hierarchical Linear Model

PDF Downloads: 1824
86 Optimal Bayesian Control of the Proportion of Defectives in a Manufacturing Process

Authors: Viliam Makis, Farnoosh Naderkhani, Leila Jafari

Abstract:

In this paper, we present a model and an algorithm for calculating the optimal control limit, average cost, sample size, and sampling interval of an optimal Bayesian chart for controlling the proportion of defective items produced, using a semi-Markov decision process approach. The traditional p-chart has been widely used for controlling the proportion of defectives in various kinds of production processes for many years. It is well known that traditional non-Bayesian charts are not optimal, yet very few optimal Bayesian control charts have been developed in the literature, mostly considering a finite horizon. The objective of this paper is to develop a fast computational algorithm to obtain the optimal parameters of a Bayesian p-chart. The decision problem is formulated in the partially observable framework, and the developed algorithm is illustrated by a numerical example.

Keywords: Bayesian control chart, semi-Markov decision process, quality control, partially observable process.

PDF Downloads: 1129
85 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images

Authors: Mehrnoosh Omati, Mahmod Reza Sahebi

Abstract:

Information on land use/land cover change plays an essential role in environmental assessment, planning, and management in regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, two PolSAR images are first segmented by integrating a marker-controlled watershed algorithm with a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object-based level. Experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show improvements of 3% in overall accuracy and 6% in the kappa coefficient. The proposed method also correctly distinguishes homogeneous image parcels.

Keywords: Coupled Markov random field, environment, object-based analysis, Polarimetric SAR images.

PDF Downloads: 821
84 A New Vector Quantization Front-End Process for Discrete HMM Speech Recognition System

Authors: M. Debyeche, J.P Haton, A. Houacine

Abstract:

The paper presents a complete discrete statistical framework based on a novel vector quantization (VQ) front-end process. This new VQ approach performs an optimal distribution of VQ codebook components over HMM states. The technique, which we name distributed vector quantization (DVQ) of hidden Markov models, succeeds in unifying the acoustic micro-structure and phonetic macro-structure when the HMM parameters are estimated. The DVQ technique is implemented in two variants: the first uses the K-means algorithm (K-means-DVQ) to optimize the VQ, while the second exploits the classification behavior of neural networks (NN-DVQ) for the same purpose. The proposed variants are compared with the HMM-based baseline system in experiments on the recognition of specific Arabic consonants. The results show that the distributed vector quantization technique increases the performance of the discrete HMM system.
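
A plain K-means (Lloyd) codebook of the kind underlying the K-means-DVQ variant can be sketched as follows. The feature dimensionality, codebook size, and random "frames" are assumptions, and the step that distributes codewords over HMM states is not shown.

import numpy as np

def kmeans_codebook(features, codebook_size, n_iter=50, seed=0):
    """Build a VQ codebook with plain K-means (Lloyd's algorithm).

    features: (N, D) acoustic feature vectors (e.g. MFCC-like frames).
    Returns the codebook (codebook_size, D) and the code index of each frame.
    """
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), codebook_size, replace=False)]
    for _ in range(n_iter):
        # assign each frame to its nearest codeword
        d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        codes = d.argmin(axis=1)
        # move each codeword to the centroid of its assigned frames
        for k in range(codebook_size):
            if np.any(codes == k):
                codebook[k] = features[codes == k].mean(axis=0)
    return codebook, codes

feats = np.random.default_rng(1).normal(size=(200, 12))   # fake feature frames
cb, codes = kmeans_codebook(feats, codebook_size=8)
print(cb.shape, np.bincount(codes))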

Keywords: Hidden Markov Model, Vector Quantization, Neural Network, Speech Recognition, Arabic Language

PDF Downloads: 2009
83 Trajectory-Based Modified Policy Iteration

Authors: R. Sharma, M. Gopal

Abstract:

This paper presents a new problem-solving approach that can generate optimal policy solutions for finite-state stochastic sequential decision-making problems with high data efficiency. The proposed algorithm iteratively builds and improves an approximate Markov Decision Process (MDP) model, along with approximate cost-to-go values, by generating finite-length trajectories through the state space. The approach creates a synergy between the evolving approximate model and the approximate cost-to-go values to produce a sequence of improving policies that finally converges to the optimal policy through an intelligent and structured search of the policy space. The approach modifies the policy update step of policy iteration so as to achieve speedy and stable convergence to the optimal policy. We apply the algorithm to a non-holonomic mobile robot control problem and compare its performance with other Reinforcement Learning (RL) approaches: (a) Q-learning, (b) Watkins Q(λ), and (c) SARSA(λ).
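
For reference, the tabular Q-learning baseline the paper compares against can be sketched as below, here on a made-up five-state chain environment with costs to be minimized; the learning rate, exploration rate, and toy environment are assumptions, not the paper's setup.

import numpy as np

def q_learning(step, n_states, n_actions, episodes=500, alpha=0.1,
               gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration (costs minimized).

    step(s, a) -> (next_state, cost, done) is the environment interface;
    the greedy action is therefore argmin over Q.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.integers(n_actions) if rng.random() < eps else Q[s].argmin()
            s_next, cost, done = step(s, a)
            target = cost + (0.0 if done else gamma * Q[s_next].min())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q

# Toy 5-state chain: action 1 moves right (cost 1), action 0 stays (cost 2);
# reaching the last state ends the episode.
def step(s, a):
    s_next = min(s + 1, 4) if a == 1 else s
    return s_next, (1.0 if a == 1 else 2.0), s_next == 4

print(q_learning(step, n_states=5, n_actions=2).round(2))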

Keywords: Markov Decision Process (MDP), Mobile robot, Policy iteration, Simulation.

PDF Downloads: 1402
82 Multiple Targets Classification and Fuzzy Logic Decision Fusion in Wireless Sensor Networks

Authors: Ahmad Aljaafreh

Abstract:

This paper proposes a hierarchical hidden Markov model (HHMM) to model the detection of M vehicles in a wireless sensor network (WSN). The HHMM contains an extra level of hidden Markov model that captures the temporal transitions of each state of the first HMM. By modeling these temporal transitions, only the hypotheses with nonzero transition probabilities need to be tested, which efficiently reduces the computational load and is preferable in WSN applications. This paper integrates several techniques to optimize the detection performance. The output of the states of the first HMM is modeled as a Gaussian Mixture Model (GMM), where the number of states and the number of Gaussians are determined experimentally, while the other parameters are estimated using Expectation Maximization (EM). The HHMM is used to model the sequence of local decisions, which are based on multiple hypothesis testing with a maximum likelihood approach; the states in the HHMM represent various combinations of vehicles of different types. Due to the statistical advantages of multisensor data fusion, we propose a heuristic based on fuzzy weighted majority voting to enhance cooperative classification of moving vehicles within a region monitored by a wireless sensor network. A fuzzy inference system weights each local decision based on the signal-to-noise ratio of the acoustic signal used for target detection and the signal-to-noise ratio of the radio signal used for sensor communication. The spatial correlation among the observations of neighboring sensor nodes is efficiently utilized, as well as the temporal correlation. Simulation results demonstrate the efficiency of this scheme.
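
The decision-fusion step can be illustrated with a simple SNR-weighted majority vote. The product weighting below is a crude stand-in for the paper's fuzzy inference system, and all node decisions and SNR values are made up.

import numpy as np

def weighted_majority_vote(local_decisions, acoustic_snr, radio_snr):
    """Fuse local classifications by SNR-weighted majority voting.

    local_decisions : class label reported by each sensor node
    acoustic_snr, radio_snr : per-node SNRs used as confidence weights;
    the simple product weight stands in for a fuzzy inference system.
    """
    weights = np.asarray(acoustic_snr, dtype=float) * np.asarray(radio_snr, dtype=float)
    decisions = np.asarray(local_decisions)
    scores = {c: weights[decisions == c].sum() for c in np.unique(decisions)}
    return max(scores, key=scores.get)

# Five nodes report classes 1 or 2 with illustrative acoustic/radio SNRs
print(weighted_majority_vote([1, 1, 2, 2, 2], [12, 10, 3, 4, 5], [8, 9, 7, 6, 5]))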

Keywords: Classification, decision fusion, fuzzy logic, hidden Markov model

PDF Downloads: 6203
81 Segmentation of Piecewise Polynomial Regression Model by Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

The piecewise polynomial regression model is a very flexible model for data. When a piecewise polynomial regression model is fitted to data, its parameters are generally unknown. This paper studies the parameter estimation problem for the piecewise polynomial regression model using the Bayesian method. Unfortunately, the Bayes estimator cannot be found analytically, so a reversible jump MCMC algorithm is proposed to solve this problem. The reversible jump MCMC algorithm generates a Markov chain that converges to the posterior distribution of the piecewise polynomial regression model parameters, and the resulting chain is used to calculate the Bayes estimator for those parameters.

Keywords: Piecewise, Bayesian, reversible jump MCMC, segmentation.

PDF Downloads: 1624
80 Availability Analysis of Milling System in a Rice Milling Plant

Authors: P. C. Tewari, Parveen Kumar

Abstract:

The paper describes the availability analysis of the milling system of a rice milling plant using a probabilistic approach. The subsystems under study are special-purpose machines. The availability analysis is carried out to determine the effect of the failure and repair rates of each subsystem on the overall performance (i.e., steady-state availability) of the system. Further, on the basis of the effect of repair rates on system availability, maintenance repair priorities are suggested. The problem is formulated using a Markov birth-death process, taking exponential distributions for the failure and repair rates. The first-order differential equations associated with the transition diagram are developed using the mnemonic rule. These equations are solved using normalizing conditions and a recursive method to derive the steady-state availability expression of the system. The findings are presented and discussed with the plant personnel in order to adopt a suitable maintenance policy to increase the productivity of the rice milling plant.
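
A minimal numerical illustration of steady-state availability with exponential failure/repair rates: each subsystem alternates between an up and a down state, giving A_i = mu_i / (lambda_i + mu_i), and a series system is up only when every subsystem is up. The rates below are assumptions, not the plant's data, and the sketch ignores any dependencies the full Markov model would capture.

import numpy as np

def steady_state_availability(failure_rates, repair_rates):
    """Steady-state availability of a series system of independent units,
    each modeled as a two-state (up/down) Markov process with exponential
    failure rate lambda_i and repair rate mu_i.

    Unit availability: A_i = mu_i / (lambda_i + mu_i); the system is up
    only if every unit is up, so the system availability is the product.
    """
    lam = np.asarray(failure_rates, dtype=float)
    mu = np.asarray(repair_rates, dtype=float)
    unit_availability = mu / (lam + mu)
    return unit_availability.prod(), unit_availability

# Illustrative rates (per hour) for four milling subsystems -- assumptions only
A_sys, A_units = steady_state_availability([0.01, 0.02, 0.005, 0.015],
                                            [0.5, 0.4, 0.3, 0.6])
print(A_units.round(4), "system:", round(A_sys, 4))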

Keywords: Markov process, milling system, availability modeling, rice milling plant.

PDF Downloads: 1518