Search results for: the maximizer of the posterior marginal estimate
860 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups
Authors: Naushad Mamode Khan
Abstract:
The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs), handling equi-, over-, and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify, which restricts likelihood-based estimation. The joint generalized quasi-likelihood approach (GQL-I) was considered instead, but it is computationally intensive and may even fail to estimate the regression effects because of a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and the separate marginal GQL approach (GQL-II) through simulation experiments; it yields estimates as efficient as those of GQL-I while being far more computationally stable.
Keywords: Longitudinal, Com-Poisson, Ill-conditioned, INAR(1), GLMs, GQL.
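For orientation, a generalized quasi-likelihood estimator of the regression vector \(\beta\) solves an estimating equation of the following standard (Sutradhar-type) form; this is a generic statement, not the specific single score vector representation of GQL-III:

\[
\sum_{i=1}^{K} \frac{\partial \mu_i'}{\partial \beta}\, \Sigma_i^{-1} \left( y_i - \mu_i \right) = 0,
\]

where \(y_i\) is the vector of repeated counts for subject \(i\), \(\mu_i\) its mean under the CMP-INAR(1) model, and \(\Sigma_i\) a working covariance matrix; the equation is typically solved iteratively by Newton-Raphson. The GQL variants above differ mainly in how \(\Sigma_i\) and the score are constructed.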
859 Variational EM Inference Algorithm for Gaussian Process Classification Model with Multiclass and Its Application to Human Action Classification
Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park
Abstract:
In this paper, we propose a variational EM inference algorithm for the multi-class Gaussian process classification model, which can be used in the field of human behavior recognition. The algorithm simultaneously derives a posterior distribution of the latent function and estimators of the hyper-parameters in the multi-class Gaussian process classification model. It is based on the Laplace approximation (LA) technique and the variational EM framework, and proceeds in two steps: expectation and maximization. First, in the expectation step, using Bayes' rule and the LA technique, we derive an approximate posterior distribution of the latent function, which indicates the possibility that each observation belongs to a certain class. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimator for the hyper-parameters of the covariance matrix that defines the prior distribution of the latent function. These two steps are repeated iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely the KTH human action data set. Experimental results reveal that the proposed algorithm performs well on this data set.
Keywords: Bayesian rule, Gaussian process classification model with multiclass, Gaussian process prior, human action classification, Laplace approximation, variational EM algorithm.
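As a concrete illustration of the expectation step, below is a minimal sketch of the Laplace approximation for Gaussian process classification, written for the binary case with a logistic likelihood; the paper treats the multi-class case, and the kernel, data and names here are illustrative assumptions:

```python
# Minimal Laplace approximation for binary GP classification (a sketch in the
# spirit of Rasmussen & Williams, Algorithm 3.1). Not the paper's multi-class code.
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def laplace_mode(K, y, n_iter=100, tol=1e-8):
    """Newton iteration for the mode of p(f | X, y), labels y in {-1, +1},
    logistic likelihood p(y = +1 | f) = sigmoid(f)."""
    n = len(y)
    f = np.zeros(n)
    t = (y + 1) / 2                               # labels mapped to {0, 1}
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-f))             # sigmoid(f)
        grad = t - pi                             # gradient of log p(y | f)
        W = pi * (1.0 - pi)                       # minus its Hessian (diagonal)
        # Newton step: f <- (K^-1 + W)^-1 (W f + grad) = (I + K W)^-1 K (W f + grad)
        B = np.eye(n) + K * W                     # I + K diag(W)
        f_new = np.linalg.solve(B, K @ (W * f + grad))
        if np.max(np.abs(f_new - f)) < tol:
            return f_new, W
        f = f_new
    return f, W

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.sign(X[:, 0] + 1e-9)                       # labels in {-1, +1}
K = rbf_kernel(X, X) + 1e-6 * np.eye(len(y))
f_hat, W = laplace_mode(K, y)
```

The mode `f_hat` and curvature `W` define the Gaussian approximation to the posterior of the latent function; the maximization step would then adjust the kernel hyper-parameters against the resulting approximate marginal likelihood.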
858 Development of a Microsensor to Minimize Post Cataract Surgery Complications
Authors: M. Mottaghi, F. Ghalichi, H. Badri Ghavifekr, H. Niroomand Oskui
Abstract:
This paper presents the design and characterization of a microaccelerometer designed for integration into a cataract surgical probe to detect the hardness of different eye tissues during cataract surgery. The soft posterior lens capsule of the eye can easily be damaged, in contrast with the hard opaque lens, since the surgeon cannot see directly behind the cutting needle during the surgery. The microsensor helps the surgeon avoid rupturing the posterior lens capsule, which, if it occurs, leads to severe complications such as glaucoma, infection, or even blindness. The microsensor, with overall dimensions of 480 μm x 395 μm, delivers significant capacitance variations in the vibration situations encountered, which makes it capable of distinguishing between different types of tissue. Integration of the electronic components on chip ensures a high level of reliability and noise immunity while minimizing space and power requirements. The physical characteristics and performance testing results support integration of the microsensor as an effective tool to aid the surgeon during this procedure.
Keywords: Cataract surgery, MEMS, Microsensor, Phacoemulsification.
857 Optical Coherence Tomography Combined with the Confocal Microscopy Method and Fluorescence for Class V Cavities Investigations
Authors: M. Rominu, C. Sinescu, A.G. Podoleanu
Abstract:
The purpose of this study is to present a non-invasive method for evaluating marginal adaptation in class V composite restorations. Standardized class V cavities, prepared in extracted human teeth, were filled with Premise (Kerr) composite. The specimens were thermocycled. The interfaces were examined by optical coherence tomography (OCT) combined with confocal microscopy and fluorescence. The optical configuration uses two single-mode directional couplers with a superluminescent diode at 1300 nm as the source. The scanning procedure is similar to that used in any confocal microscope, where the fast scan is en-face (at the line rate) and the depth scan is much slower (at the frame rate). Gaps at the interfaces as well as inside the composite resin material were identified. OCT has numerous advantages that justify its use both in vivo and in vitro in comparison with conventional techniques.
Keywords: Class V cavities, Marginal adaptation, Optical coherence tomography, Fluorescence, Confocal microscopy.
856 Real Time Speed Estimation of Vehicles
Authors: Azhar Hussain, Kashif Shahzad, Chunming Tang
Abstract:
This paper gives a novel approach to real-time speed estimation of multiple traffic vehicles using fuzzy logic and image processing techniques with a proper arrangement of camera parameters. The described algorithm consists of several important steps. First, the background is estimated by computing the median over a time window of specific frames. Second, the foreground is extracted using a fuzzy similarity approach (FSA) between the estimated background pixels and the pixels of the current frame, which contains both foreground and background. Third, the traffic lanes are divided into two parts, one per direction of travel, for parallel processing. Finally, the speeds of the vehicles are estimated by a Maximum a Posteriori Probability (MAP) estimator. True ground speed is determined using infrared sensors for three different vehicles, and the results of the proposed algorithm agree to within ±0.74 km/h.
Keywords: Defuzzification, Fuzzy similarity approach, Lane cropping, Maximum a Posteriori Probability (MAP) estimator, Speed estimation.
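As a sketch of the first two steps, here is a minimal version in which a crisp threshold stands in for the paper's fuzzy similarity approach; all names and values are illustrative assumptions:

```python
# Background by temporal median, then a simple foreground mask.
import numpy as np

def estimate_background(frames):
    """frames: array (T, H, W) of grayscale frames; median over the time window."""
    return np.median(frames, axis=0)

def foreground_mask(frame, background, threshold=25.0):
    """Pixels differing strongly from the background are labeled foreground."""
    return np.abs(frame.astype(float) - background) > threshold

# toy usage with synthetic frames
rng = np.random.default_rng(1)
frames = rng.integers(90, 110, size=(30, 120, 160)).astype(float)
frames[15, 40:60, 50:80] += 100.0        # a bright "vehicle" in one frame
bg = estimate_background(frames)
mask = foreground_mask(frames[15], bg)
print(mask.sum(), "foreground pixels")
```

The MAP speed estimator of the paper would then operate on the per-lane foreground tracks extracted this way.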
855 Segmentation of Piecewise Polynomial Regression Model by Using Reversible Jump MCMC Algorithm
Authors: Suparman
Abstract:
The piecewise polynomial regression model is a very flexible model for data modeling. When the model is fitted to data, its parameters are generally unknown. This paper studies the parameter estimation problem for the piecewise polynomial regression model using the Bayesian method. Unfortunately, the Bayes estimator cannot be found analytically. A reversible jump MCMC algorithm is proposed to solve this problem: it generates a Markov chain that converges to the posterior distribution of the piecewise polynomial regression model parameters. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise polynomial regression model.
Keywords: Piecewise, Bayesian, reversible jump MCMC, segmentation.
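For reference, a reversible jump move between models of different dimension, say from \((k, \theta_k)\) to \((k', \theta_{k'})\) generated through auxiliary variables \(u \sim q\), is accepted with the standard Green (1995) probability; this is the generic form, not this paper's particular move design:

\[
\alpha = \min\left\{ 1,\;
\frac{\pi(k', \theta_{k'} \mid y)\; j(k \mid k')\; q'(u')}
     {\pi(k, \theta_{k} \mid y)\; j(k' \mid k)\; q(u)}
\left| \frac{\partial(\theta_{k'}, u')}{\partial(\theta_{k}, u)} \right|
\right\},
\]

where \(\pi(\cdot \mid y)\) is the posterior, \(j(\cdot \mid \cdot)\) the probability of proposing the move type, and the last factor is the Jacobian of the deterministic dimension-matching map. Here \(k\) would index the number of polynomial pieces.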
854 Histochemistry of Intestinal Enzymes of Juvenile Dourado Salminus brasiliensis Fed Bovine Colostrum
Authors: Debora B. Moretti, Wiolene M. Nordi, Thaline Maira P. Cruz, José Eurico P. Cyrino, Raul Machado-Neto
Abstract:
Enzyme activity was evaluated in the intestine of juvenile dourado (Salminus brasiliensis) fed diets containing 0, 10 or 20% lyophilized bovine colostrum (LBC) for either 30 or 60 days. The intestinal enzymes acid and alkaline phosphatase (ACP and ALP, respectively), non-specific esterase (NSE), lipase (LIP), dipeptidyl aminopeptidase IV (DAP IV) and leucine aminopeptidase (LAP) were studied using histochemistry in four intestinal segments (S1, S2, S3 and posterior intestine). Weak proteolytic activity was observed in all intestinal segments for DAP IV and LAP. The activity of NSE and LIP was also weak in all segments, except for the moderate activity of NSE in the S2 of the 20% LBC group after 30 days and in the S1 of the 0% LBC group after 60 days. ACP was detected only in the S2 and S3 of the 10% LBC group after 30 days. Moderate and strong staining was observed in the first three intestinal segments for ALP, with weak activity in the posterior intestine. The activities of DAP IV, LAP and ALP were also present in the cytoplasm of the enterocytes. In the present results, bovine colostrum feeding did not alter the activity of intestinal enzymes.
Keywords: Carnivorous fish, enterocyte, intestinal epithelium, teleost.
853 Bayesian Online Learning of Corresponding Points of Objects with Sequential Monte Carlo
Authors: Miika Toivanen, Jouko Lampinen
Abstract:
This paper presents an online method that learns the corresponding points of an object from un-annotated grayscale images containing instances of the object. In the first image processed, an ensemble of node points is automatically selected and then matched in the subsequent images. A Bayesian posterior distribution for the locations of the nodes in the images is formed: the likelihood is built from Gabor responses, and the prior assumes the mean shape of the node ensemble to be similar in a translation- and scale-free space. An association model is applied to separate object nodes from background nodes. The posterior distribution is sampled with a Sequential Monte Carlo method, and the matched object nodes are inferred to be the corresponding points of the object instances. The results show that our system matches the object nodes as accurately as other methods that train the model with annotated training images.
Keywords: Bayesian modeling, Gabor filters, Online learning, Sequential Monte Carlo.
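For illustration, here is a minimal sequential importance resampling loop of the kind used to sample such posteriors; the Gabor-response likelihood and shape prior of the paper are replaced by placeholder functions, and all names here are assumptions:

```python
# Generic sequential Monte Carlo (sequential importance resampling) sketch.
import numpy as np

def smc(n_particles, n_steps, propose, log_likelihood, rng):
    """propose(particles, rng) -> moved particles;
    log_likelihood(particles) -> per-particle log weights."""
    particles = rng.normal(size=(n_particles, 2))    # initial location guesses
    for _ in range(n_steps):
        particles = propose(particles, rng)          # prediction / proposal step
        logw = log_likelihood(particles)             # weighting step
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]                   # multinomial resampling
    return particles

# toy usage: the posterior concentrates near the point (3, -1)
rng = np.random.default_rng(2)
target = np.array([3.0, -1.0])
post = smc(
    500, 20,
    propose=lambda p, r: p + 0.3 * r.normal(size=p.shape),
    log_likelihood=lambda p: -0.5 * ((p - target) ** 2).sum(axis=1) / 0.5 ** 2,
    rng=rng,
)
print(post.mean(axis=0))                             # approaches [3, -1]
```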
852 Laboratory Evaluation of Asphalt Concrete Prepared with Over Burnt Brick Aggregate Treated by Zycosoil
Authors: D. Sarkar, M. Pal, A. K. Sarkar
Abstract:
Asphalt concrete for pavement construction in India is produced using crushed stone, gravel, etc. as aggregate. In the north-eastern region of India there is a scarcity of stone aggregate, so road engineers are always in search of an alternative material that can replace the regularly used aggregate. The purpose of this work was to evaluate the utilization of substandard or marginal aggregates in flexible pavement construction. The investigation evaluated the effects of using lower quality aggregates, such as over burnt brick aggregate, in the preparation of asphalt concrete for flexible pavements. The scope of this work included a review of available literature and existing data, a laboratory evaluation organized to determine the effects of marginal aggregates and potential techniques to upgrade these substandard materials, and a laboratory evaluation of the upgraded marginal aggregate asphalt mixtures. Over burnt brick aggregates are water susceptible and can lead to moisture damage, the progressive loss of functionality of the material owing to loss of the adhesion bond between the asphalt binder and the aggregate surface. Hence zycosoil, an anti-stripping additive, was evaluated in this study. This study summarizes the results of the laboratory evaluation carried out to investigate the properties of asphalt concrete prepared with zycosoil-modified over burnt brick aggregate. Marshall specimens were prepared with stone aggregate, zycosoil-modified stone aggregate, over burnt brick aggregate, and zycosoil-modified over burnt brick aggregate. Results show that the addition of zycosoil to stone aggregate increased stability by 6%, and its addition to over burnt brick aggregate increased stability by 30%.
Keywords: Asphalt Concrete, Over Burnt Brick Aggregate, Marshall Stability, Zycosoil.
851 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation is a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y that are unrelated or only weakly related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), requires only a simple modification of the common normalizing flow framework, while significantly improving the interpretability of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject ID, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z such as a Gaussian mixture, fails to generate interpretable results.
Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.
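The backbone of any normalizing-flow density estimator is the change-of-variables identity; with an invertible transform \(T\) mapping \(y\) to \(z = [z_P, z_N]\), the conditional density reads (a generic statement of the identity the construction above relies on):

\[
p(y \mid x) = p_Z\!\left( T(y) \mid x \right) \left| \det J_T(y) \right|,
\]

where \(J_T\) is the Jacobian of \(T\); in AP-CDE the \(z_P\) block of \(p_Z(\cdot \mid x)\) is tied to the posterior of the elementary predictive model for \(x\), while \(z_N\) is standard Gaussian and independent of \(x\).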
850 Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping
Authors: Adnan A. Y. Mustafa
Abstract:
In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented.
Keywords: Big images, binary images, similarity, matching.
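As an illustration of the sampling idea (not the exact PMMBI mapping scheme), the following sketch estimates similarity from a small random sample of relative pixel positions, which also makes the measure image-size invariant; all names are assumptions:

```python
# Estimate binary-image similarity by comparing a random sample of mapped pixels.
import numpy as np

def sampled_similarity(A, B, n_samples=500, rng=None):
    """A, B: boolean images, possibly of different sizes. A random relative
    position in [0,1)^2 is mapped to a pixel in each image and agreement is
    averaged over the sample."""
    if rng is None:
        rng = np.random.default_rng()
    r = rng.random((n_samples, 2))
    ia = (r * A.shape).astype(int)       # same relative spot in A ...
    ib = (r * B.shape).astype(int)       # ... and in B (size invariant)
    return np.mean(A[ia[:, 0], ia[:, 1]] == B[ib[:, 0], ib[:, 1]])

# toy usage: a structured image vs. a 4x down-scaled copy of itself
yy, xx = np.mgrid[0:1024, 0:1024]
A = (xx + yy) < 1024                     # binary half-plane pattern
B = A[::4, ::4]                          # 256 x 256 version
print(sampled_similarity(A, B))          # close to 1.0
```

Only `n_samples` pixel pairs are touched, which is the source of the speed-up claimed above.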
849 Performance Improvement in the Bivariate Models by Using Modified Marginal Variance of Noisy Observations for Image-Denoising Applications
Authors: R. Senthilkumar
Abstract:
Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, the wavelet coefficients of natural images have significant dependencies. This paper attempts to give a recipe for selecting among the popular image-denoising algorithms based on VisuShrink, SureShrink, OracleShrink, BayesShrink and BiShrink, and also compares different bivariate models used for image-denoising applications. The first part of the paper compares the different shrinkage functions used for image denoising. The second part compares different bivariate models, and the third part uses the bivariate model with modified marginal variance, which is based on a Laplacian assumption. The paper gives an experimental comparison on six commonly used 512x512 images: Lenna, Barbara, Goldhill, Clown, Boat and Stonehenge. Noise levels of 25 dB, 26 dB, 27 dB, 28 dB and 29 dB are added to the six standard images, and the corresponding Peak Signal to Noise Ratio (PSNR) values are calculated for each noise level.
Keywords: BiShrink, Image denoising, PSNR, Shrinkage function.
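For context, the classical bivariate shrinkage rule of Şendur and Selesnick, to which the BiShrink estimator above belongs, recovers a wavelet coefficient \(w_1\) from the noisy coefficient \(y_1\) and its parent \(y_2\) as:

\[
\hat{w}_1 = \frac{\left( \sqrt{y_1^2 + y_2^2} - \dfrac{\sqrt{3}\, \sigma_n^2}{\sigma} \right)_{+}}{\sqrt{y_1^2 + y_2^2}}\; y_1,
\]

where \(\sigma_n^2\) is the noise variance, \(\sigma\) the marginal signal standard deviation, and \((g)_+ = \max(g, 0)\); the modification studied in this paper enters through the marginal variance used for \(\sigma\) under the Laplacian assumption.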
848 The Reach of Shopping Center Layout Form on U Subway - Based on Kernel Density Estimate
Authors: Wen Liu
Abstract:
With the rapid progress of modern cities, railway construction is developing quickly in China. In a typically high-density country, shopping centers on the subway have become an important factor in the process of urban development. This paper discusses the influence of shopping center layout on the subway, placed on the temporal and spatial axes of Shanghai's urban development. We use digital technology to establish a database of the relevant information and then derive the changing role of shopping centers on the Shanghai subway by kernel density estimation. The results show that the development of shopping centers on the subway is related to local economic strength, population size, policy support, and city construction, and that the suburbanization trend of shopping centers is becoming increasingly significant. This case study shows that the kernel density estimate is an efficient analysis method for spatial layout: it can reveal the essential characteristics of the layout of shopping centers on the subway, and it can also be applied to other research on spatial form.
Keywords: Shanghai, Shopping center on the subway, Layout form, The Kernel density estimate.
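For reference, the univariate kernel density estimate underlying the analysis is:

\[
\hat{f}_h(x) = \frac{1}{n h} \sum_{i=1}^{n} K\!\left( \frac{x - x_i}{h} \right),
\]

where \(K\) is a kernel function (typically Gaussian), \(h\) the bandwidth, and \(x_1, \ldots, x_n\) the observed locations; a spatial study such as this one uses the two-dimensional analogue over shopping-center coordinates.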
847 Using Artificial Neural Network to Estimate Chemical Oxygen Demand
Authors: S. Areerachakul
Abstract:
The yearly increase in human population results in increasing water usage and demand. The Saen Saep canal is an important canal in Bangkok. The main objective of this study is to use an Artificial Neural Network (ANN) model to estimate the Chemical Oxygen Demand (COD) from data collected at 11 sampling sites. The data were obtained from the Department of Drainage and Sewerage, Bangkok Metropolitan Administration, during 2007-2011. Twelve water quality parameters, which affect the COD, are used as inputs to the models. The experimental results indicate that the ANN model provides a high correlation coefficient (R = 0.89).
Keywords: Artificial neural network, chemical oxygen demand, estimate, surface water.
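As an illustration of the modeling setup (twelve water-quality inputs regressed to COD), here is a minimal sketch on synthetic data; the network size, preprocessing and data are assumptions, not the paper's configuration:

```python
# Feedforward ANN regression from 12 water-quality indices to COD (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 12))                               # 12 water-quality indices
cod = X @ rng.normal(size=12) + 0.1 * rng.normal(size=500)   # synthetic COD target

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X[:400], cod[:400])
r = np.corrcoef(model.predict(X[400:]), cod[400:])[0, 1]
print(f"correlation coefficient R = {r:.2f}")                # the paper reports R = 0.89
```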
846 FEM Models of Glued Laminated Timber Beams Enhanced by Bayesian Updating of Elastic Moduli
Authors: L. Melzerová, T. Janda, M. Šejnoha, J. Šejnoha
Abstract:
Two finite element (FEM) models are presented in this paper to address the random nature of the response of glued timber structures made of wood segments with variable elastic moduli evaluated from 3600 indentation measurements. This database served to create the same number of ensembles as there were segments in the tested beam. Statistics of these ensembles were then assigned to the corresponding segments, and the Latin Hypercube Sampling (LHS) method was used to perform 100 simulations, resulting in an ensemble of 100 deflections subjected to statistical evaluation. A detailed geometrical arrangement of the individual segments in the laminated beam was considered in the construction of the two-dimensional FEM model, which was subjected to four-point bending to comply with the laboratory tests. Since laboratory measurements of local elastic moduli may in general suffer from significant experimental error, it is advantageous to exploit full-scale measurements of the timber beams, i.e. deflections, to improve the prior distributions with the help of the Bayesian statistical method. This, however, requires an efficient computational model for simulating the laboratory tests numerically. To this end, a simplified model based on Mindlin's beam theory was established. The improved posterior distributions show that the most significant change in the Young's modulus distribution takes place in the laminae in the most strained zones, i.e. in the top and bottom layers within the beam center region. The posterior distributions of the moduli of elasticity were subsequently utilized in the 2D FEM model and compared with the original simulations.
Keywords: Bayesian inference, FEM, four point bending test, laminated timber, parameter estimation, prior and posterior distribution, Young’s modulus.
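The Bayesian updating step described above follows the usual proportionality, stated here generically in assumed notation:

\[
\pi(E \mid d) \propto L(d \mid E)\, \pi(E),
\]

where \(\pi(E)\) is the prior distribution of the Young's modulus obtained from the indentation measurements, \(L(d \mid E)\) the likelihood of the measured deflections \(d\) computed with the simplified Mindlin beam model, and \(\pi(E \mid d)\) the improved posterior fed back into the 2D FEM simulations.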
845 Dynamic Correlations and Portfolio Optimization between Islamic and Conventional Equity Indexes: A Vine Copula-Based Approach
Authors: Imen Dhaou
Abstract:
This study examines the conditional Value at Risk by applying a GJR-EVT-copula model and finds the optimal portfolio for eight Dow Jones Islamic-conventional pairs. Our methodology consists of modeling the data by a bivariate GJR-GARCH model, extracting the filtered residuals, and then applying the peaks-over-threshold (POT) model to fit the residual tails in order to model the marginal distributions. After that, we use pair-copulas to model the portfolio risk dependence structure. Finally, with Monte Carlo simulations, we estimate the Value at Risk (VaR) and the conditional Value at Risk (CVaR). The empirical results show the VaR and CVaR values for an equally weighted portfolio of Dow Jones Islamic-conventional pairs. In sum, we find that the optimal investment focuses on the Islamic-conventional US market index pair, which receives a high investment proportion, whereas all other index pairs receive low investment proportions. These results have practical implications for portfolio managers and policymakers concerning optimal asset allocation, portfolio risk management and the diversification advantages of these markets.
Keywords: CVaR, Dow Jones Islamic index, GJR-GARCH-EVT-pair copula, portfolio optimization.
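For reference, the two risk measures estimated by Monte Carlo above are defined, for a portfolio loss \(L\) and confidence level \(\alpha\), as:

\[
\mathrm{VaR}_\alpha(L) = \inf\left\{ l : P(L \le l) \ge \alpha \right\},
\qquad
\mathrm{CVaR}_\alpha(L) = \mathbb{E}\left[ L \mid L \ge \mathrm{VaR}_\alpha(L) \right],
\]

so that CVaR reports the expected loss in the worst \((1-\alpha)\) tail rather than a single quantile.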
844 Computer Modeling of Drug Distribution after Intravitreal Administration
Authors: N. Haghjou, M. J. Abdekhodaie, Y. L. Cheng, M. Saadatmand
Abstract:
Intravitreal injection (IVI) is the most common treatment for posterior segment eye diseases such as endophthalmitis, retinitis, age-related macular degeneration, diabetic retinopathy, uveitis, and retinal detachment. Most of the drugs used to treat vitreoretinal diseases have a narrow concentration range in which they are effective and may be toxic at higher concentrations. Therefore, it is critical to know the drug distribution within the eye following intravitreal injection. With knowledge of drug distribution, ophthalmologists can decide on drug injection frequency while minimizing damage to tissues. The goal of this study was to develop a computer model to predict intraocular concentrations and pharmacokinetics of intravitreally injected drugs. A finite volume model was created to predict the distribution of two drugs with different physicochemical properties in the rabbit eye. The model parameters were obtained from a literature review. To validate the numerical model, in vivo data on the spatial concentration profile from the lens to the retina were compared with the numerical data. The difference between the numerical and experimental data was less than 5%. This validation provides strong support for the numerical methodology and the associated assumptions of the current study.
Keywords: Posterior segment, Intravitreal injection (IVI), Pharmacokinetic, Modelling, Finite volume method.
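A finite volume model of this kind typically discretizes a convection-diffusion-elimination balance for the drug concentration \(C\) in the vitreous; in assumed notation (a generic statement of the governing equation, not the paper's exact formulation):

\[
\frac{\partial C}{\partial t} = D \nabla^2 C - \mathbf{v} \cdot \nabla C - k_e C,
\]

where \(D\) is the drug diffusivity, \(\mathbf{v}\) the convective velocity field, and \(k_e\) an elimination rate constant; the two drugs' different physicochemical properties enter through these parameters.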
843 Marangoni Instability in a Fluid Layer with Insoluble Surfactant
Authors: Ainon Syazana Ab. Hamid, Seripah Awang Kechil, Ahmad Sukri Abd. Aziz
Abstract:
The Marangoni convective instability in a horizontal fluid layer with an insoluble surfactant and a nondeformable free surface is investigated. The surface tension at the free surface depends linearly on the temperature and concentration gradients. At the bottom surface, two temperature conditions are considered: uniform temperature and uniform heat flux. By linear stability theory, exact analytical solutions for the steady Marangoni convection are derived and the marginal curves are plotted. The effects of the surfactant or elasticity number, the Lewis number and the Biot number on the marginal Marangoni instability are assessed. The surfactant concentration gradients and the heat transfer mechanism at the free surface have stabilizing effects, while the Lewis number destabilizes the fluid system. The fluid system with a uniform temperature condition at the bottom boundary is more stable than the fluid layer subjected to a uniform heat flux at the bottom boundary.
Keywords: Analytical solutions, Marangoni instability, Nondeformable free surface, Surfactant.
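The linear constitutive law stated above takes the standard form, in assumed notation:

\[
\sigma = \sigma_0 - \gamma_T \left( T - T_0 \right) - \gamma_\Gamma \left( \Gamma - \Gamma_0 \right),
\]

where \(T\) is the surface temperature, \(\Gamma\) the surfactant concentration, and \(\gamma_T\), \(\gamma_\Gamma\) the sensitivities whose dimensionless counterparts are the Marangoni and elasticity numbers examined in the marginal curves.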
842 The Study of the Discrete Risk Model with Random Income
Authors: Peichen Zhao
Abstract:
In this paper, we extend the compound binomial model to the case where the premium income process, based on a binomial process, is no longer a linear function. First, a recursive formula is derived for the non-ruin probability. Then, we examine the expected discounted penalty function and show that it satisfies a defective renewal equation. Third, an asymptotic estimate for the expected discounted penalty function is given. Finally, we give two examples of ruin quantities to illustrate applications of the recursive formula and of the asymptotic estimate for the penalty function.
Keywords: Discounted penalty function, compound binomial process, recursive formula, discrete renewal equation, asymptotic estimate.
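For orientation, in the classical compound binomial model the surplus after period \(n\) is the initial capital plus one unit of premium per period minus the aggregate claims; the extension above replaces the deterministic premium with a binomial premium-counting process. In assumed notation (a generic sketch, not necessarily this paper's exact setup):

\[
U_n = u + \sum_{i=1}^{n} B_i - \sum_{i=1}^{n} \xi_i Y_i,
\]

where \(u\) is the initial surplus, the \(B_i\) are i.i.d. Bernoulli premium-arrival indicators, the \(\xi_i\) are i.i.d. Bernoulli claim indicators, and the \(Y_i\) are i.i.d. claim sizes.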
841 Software Effort Estimation Models Using Radial Basis Function Network
Authors: E. Praynlin, P. Latha
Abstract:
Software effort estimation is the process of estimating the effort required to develop software. From the estimated effort, the cost and schedule required to develop the software can be determined. An accurate estimate helps the developer allocate resources appropriately in order to avoid cost and schedule overruns. Several methods are available to estimate the effort, among which soft computing based methods play a prominent role. Software cost estimation involves considerable uncertainty, and among soft computing methods, neural networks are good at handling uncertainty. In this paper, the Radial Basis Function Network is compared with the back propagation network; the results are validated using six data sets, and it is found that RBFN is best suited to estimate the effort. The results are validated using two tests: an error test and a statistical test.
Keywords: Software cost estimation, Radial Basis Function Network (RBFN), Back propagation function network, Mean Magnitude of Relative Error (MMRE).
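For reference, the Mean Magnitude of Relative Error named in the keywords is the standard accuracy measure:

\[
\mathrm{MMRE} = \frac{1}{n} \sum_{i=1}^{n} \frac{\left| \mathrm{Actual}_i - \mathrm{Estimated}_i \right|}{\mathrm{Actual}_i},
\]

computed over the \(n\) projects of a data set; lower values indicate a better effort model.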
840 The Estimate Rate of Permanent Flow of a Liquid Simulating Blood by Doppler Effect
Authors: Malika.D Kedir-Talha, Mohammed Mehenni
Abstract:
To improve the characterization of blood flows, we propose a method that makes use of the spectral analysis of Doppler signals. Our calculation involves a reasonable approximation; the error made in the estimated speed reflects the fact that speed depends on the flow conditions as well as on measurement parameters such as the bore and the volume flow rate. Estimating the Doppler signal frequency enables us to determine the maximum Doppler frequency F_d,max as well as the maximum flow speed. The results show that the difference between the estimated frequencies (F_de) and the Doppler frequencies (F_d) is small; this deviation tends to zero for large angles θ and is proportional to the diameter D. The behavior of the friction velocity and the friction coefficient accounts for the error rate obtained.
Keywords: Doppler frequency, Doppler spectrum, estimated speed, permanent flow.
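For reference, the classical Doppler relation linking the measured frequency shift to the flow speed, in assumed notation, is:

\[
F_d = \frac{2 f_0\, v \cos\theta}{c},
\]

where \(f_0\) is the emitted ultrasound frequency, \(v\) the flow speed, \(\theta\) the angle between the beam and the flow axis, and \(c\) the speed of sound in the medium; this is why the estimation error above depends on \(\theta\).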
839 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model
Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park
Abstract:
In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of the hidden variables and model parameters for the proposed model from training data. We then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we conduct experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more effective for human action recognition than existing methods.
Keywords: Human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, Variational Bayesian inference, Prior distribution and approximate posterior distribution, KTH dataset.
838 Estimation Model of Dry Docking Duration Using Data Mining
Authors: Isti Surjandari, Riara Novita
Abstract:
Maintenance is one of the most important activities in the shipyard industry. However, it is sometimes not supported by adequate services from the shipyard, and inaccuracy in estimating the duration of ship maintenance is still common, which makes accurate estimation crucial. This study uses a Data Mining approach, i.e., CART (Classification and Regression Tree), to estimate the duration of ship maintenance that is limited to dock works, known as dry docking. Using the volume of dock works as the input for estimating the maintenance duration, four classes of dry docking duration were obtained, each with a different linear model and job criteria. These linear models can then be used to estimate the duration of dry docking based on the job criteria.
Keywords: Classification and regression tree (CART), data mining, dry docking, maintenance duration.
837 Self Organizing Mixture Network in Mixture Discriminant Analysis: An Experimental Study
Authors: Nazif Çalış, Murat Erişoğlu, Hamza Erol, Tayfun Servi
Abstract:
In recent work on mixture discriminant analysis (MDA), the expectation-maximization (EM) algorithm is used to estimate the parameters of Gaussian mixtures. However, the initial values of the EM algorithm affect the final parameter estimates, and when the EM algorithm is applied twice to the same data set it can give different parameter estimates, which affects the classification accuracy of MDA. To overcome this problem, we use the Self Organizing Mixture Network (SOMN) algorithm to estimate the parameters of the Gaussian mixtures in MDA, since SOMN is more robust when random initial values of the parameters are used [5]. We show the effectiveness of this method on popular simulated waveform data sets and a real glass data set.
Keywords: Self Organizing Mixture Network, Mixture Discriminant Analysis, Waveform data sets, Glass identification, Mixture of multivariate normal distributions.
836 Deterministic Modelling to Estimate Economic Impact from Implementation and Management of Large Infrastructure
Authors: Dimitrios J. Dimitriou
Abstract:
It is widely recognised that asset portfolio development helps to enhance economic growth, productivity and competitiveness. While numerous studies and reports certify the positive effect of investments in large infrastructure on the local economy, the methodology for estimating the contribution to economic development remains a challenging issue for researchers and economists. The key question is how to estimate those economic impacts in each economic system. This paper provides a compact and applicable methodological framework producing quantitative results in terms of the overall jobs and income generated over the project life cycle. Following a deterministic mathematical approach, the key variables and the modelling framework are presented. The numerical case study highlights key results for a new motorway project in Greece, a country that has experienced economic stress for many years, providing the opportunity for comparison with similar cases.
Keywords: Quantitative modelling, economic impact; large transport infrastructure; economic assessment.
835 Simulation of Increased Ambient Ozone to Estimate Nutrient Content and Genetic Change in Two Thai Soybean Cultivars
Authors: Orose Rugchati, Kanita Thanacharoenchanaphas
Abstract:
This research simulated increased ambient ozone to estimate nutrient content and genetic changes in two Thai soybean cultivars (Chiang Mai 60 and Srisumrong 1). Ozone stress conditions affected proteins and lipids: proteins decreased, while lipids increased. Srisumrong 1 was more sensitive to ozone stress than Chiang Mai 60. The effect of ozone stress conditions on plant phenotype and genotype was analyzed using the AFLP technique for the two cultivars.
Keywords: Simulation, ambient ozone estimate, nutrient content, genetic changes, Thai soybean.
834 Estimating Localization Network Node Positions with a Multi-Robot System
Authors: Mikko Elomaa, Aarne Halme
Abstract:
A novel method using bearing-only SLAM to estimate the node positions of a localization network is proposed. A group of simple robots is used to estimate the position of each node. Each node has a unique ID, which it can communicate to a nearby robot. Initially, the node IDs and positions are unknown. A case example using RFID technology in the localization network is introduced.
Keywords: Localization network, Multi-robot, RFID, SLAM
833 The Results of the Fetal Weight Estimation of the Infants Delivered in the Delivery Room at Dan Khun Thot Hospital by Johnson's Method
Authors: Nareelux Suwannobol, JintanaTapin, Khuanchanok Narachan
Abstract:
The objective of this study was to determine the accuracy of fetal weight estimation by Johnson's method and to compare it with the actual birth weight. The sample group comprised 126 infants delivered in Dan Khun Thot hospital from January to March 2012. Fetal weight was estimated by measuring fundal height according to Johnson's method. The information was collected from historical delivery records and analyzed using the statistics of frequency, percentage, mean, and standard deviation; the difference was analyzed by a paired t-test. The results showed an average actual birth weight of 3093.57 ± 391.03 g (mean ± SD) and an average estimated fetal weight by Johnson's method of 3455 ± 454.55 g, the estimate exceeding the average actual birth weight by 384.09 g. When classifying the infants according to birth weight, for the low birth weight (<2500 g) and appropriate birth weight (2500-3999 g) groups the actual birth weight was less than the estimated fetal weight, while for the high birth weight (>4000 g) group the actual birth weight was more than the estimated fetal weight. The difference between actual birth weight and estimated fetal weight was smallest in the high birth weight group (>4000 g), followed by the appropriate (2500-3999 g) and low (<2500 g) birth weight groups, respectively. The rate of estimates within 10% of the actual birth weight was 35.7%. Comparison of the actual birth weights with the estimates showed a statistically significant difference (p < .001). Johnson's method can provide an initial estimate of fetal weight before passing to special examinations, which may involve excessively high cost. A variety of methods should be employed to estimate fetal weight more precisely, which will help plan care for the mother's and infant's safety.
Keywords: Johnson's method, Fetal weight estimate, Delivery Room, Student nurse.
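For reference, Johnson's method itself is a simple fundal-height rule; in its commonly cited form (a general statement of the rule, not necessarily this paper's exact protocol):

\[
\mathrm{EFW}\;(\mathrm{g}) = \left( \mathrm{FH}\;(\mathrm{cm}) - n \right) \times 155,
\]

where EFW is the estimated fetal weight, FH the symphysis-fundal height in centimeters, and \(n = 12\) when the presenting part is above the ischial spines and \(n = 11\) when it is at or below them.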
832 Verified Experiment: Intelligent Fuzzy Weighted Input Estimation Method to Inverse Heat Conduction Problem
Authors: Chen-Yu Wang, Tsung-Chien Chen, Ming-Hui Lee, Jen-Feng Huang
Abstract:
In this paper, the innovative intelligent fuzzy weighted input estimation method (FWIEM) is applied to the inverse heat conduction problem (IHCP) to estimate an unknown time-varying heat flux efficiently. The feasibility of the method is verified by a temperature measurement experiment. We focus on heat flux estimation for three kinds of samples (copper, iron, and steel/AISI 304) of the same 3 mm thickness. The temperature measurements are used as inputs to the FWIEM to estimate the heat flux. The experimental results show that the proposed algorithm can estimate the unknown time-varying heat flux on-line.
Keywords: Fuzzy weighted input estimation method, IHCP, Heat flux.
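The underlying direct problem here is one-dimensional transient conduction through the 3 mm sample with an unknown flux at the heated face; in assumed notation (a generic statement, not the paper's exact formulation):

\[
\rho c_p \frac{\partial T}{\partial t} = k \frac{\partial^2 T}{\partial x^2},
\qquad
-k \left. \frac{\partial T}{\partial x} \right|_{x=0} = q(t),
\]

where \(\rho\), \(c_p\) and \(k\) are the density, specific heat and conductivity of the sample (copper, iron or steel), and \(q(t)\) is the time-varying heat flux that the FWIEM estimates from the interior temperature measurements.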
831 Analytical Model of Connection Establishment Duration Calculation in Wireless Networks
Authors: Y. Chaiko
Abstract:
It is important to provide the possibility of a so-called "handover" for the mobile subscriber from a GSM network to a Wi-Fi network and back. To solve this problem it is necessary to estimate the connection time between the base station and the wireless access point. The difficulty in estimating this parameter is that it is not described in the specifications of the standard, and hence no recommended value is given. In this paper, an analytical model is presented that allows estimating the connection time between a base station and an IEEE 802.11 access point.
Keywords: Access point, connection procedure, Wi-Fi network.