Search results for: higher dimensional pmf estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14255

14045 A Novel Approach to Design of EDDR Architecture for High Speed Motion Estimation Testing Applications

Authors: T. Gangadhararao, K. Krishna Kishore

Abstract:

Motion Estimation (ME) plays a critical role in a video coder, so testing such a module is of priority concern. Focusing on the testing of ME in a video coding system, this work presents an error detection and data recovery (EDDR) design, based on the residue-and-quotient (RQ) code, to be embedded into ME for video coding testing applications. An error in the processing elements (PEs), i.e., the key components of a ME, can be detected and recovered effectively by using the proposed EDDR design. The proposed EDDR design for ME testing can detect errors and recover data with an acceptable area overhead and timing penalty.
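
The following minimal sketch illustrates only the general residue-and-quotient (RQ) coding idea behind such an EDDR scheme; it is not the paper's hardware design, the modulus of 64 is illustrative, and the function names are our own.

```python
# Sketch of the RQ-code check concept: an operand is accompanied by its
# quotient and residue with respect to a modulus m, and a mismatch on
# re-checking flags a corrupted processing-element (PE) output.
def rq_encode(value, m=64):
    return value // m, value % m            # (quotient, residue) check pair

def rq_check(value, quotient, residue, m=64):
    return value == quotient * m + residue  # detect any deviation from the encoded value

q, r = rq_encode(12345)
print(rq_check(12345, q, r))   # True: fault-free output
print(rq_check(12489, q, r))   # False: error detected; recovery can use q*m + r
```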

Keywords: area overhead, data recovery, error detection, motion estimation, reliability, residue-and-quotient (RQ) code

Procedia PDF Downloads 399
14044 Reasons for the Slow Uptake of Embodied Carbon Estimation in the Sri Lankan Building Sector

Authors: Amalka Nawarathna, Nirodha Fernando, Zaid Alwan

Abstract:

Global carbon reduction is not merely a responsibility of environmentally advanced developed countries, but also a responsibility of developing countries, regardless of their smaller contribution to global carbon emissions. In recognition of that, Sri Lanka, as a developing country, has begun promoting green building construction as one reduction strategy. However, notwithstanding the increasing attention on Embodied Carbon (EC) reduction in the global building sector, efforts still mostly focus on Operational Carbon (OC) reduction (through improving operational energy). Adequate attention has not yet been given to EC estimation and reduction. Therefore, this study aims to identify the reasons for the slow uptake of EC estimation in the Sri Lankan building sector. To achieve this aim, 16 global barriers to estimating EC were identified through the existing literature. They were then subjected to a pilot survey to identify the significant reasons for the slow uptake of EC estimation in the Sri Lankan building sector. A questionnaire with a three-point Likert scale was used to this end. The collected data were analysed using descriptive statistics. The findings revealed that 11 out of the 16 challenges/barriers are highly relevant as reasons for the slow uptake of EC estimation in buildings in Sri Lanka, while the other five challenges/barriers remain moderately relevant reasons. Further, the findings revealed that there are no reasons of low relevance. Eventually, the paper concluded that all the known reasons are significant to the Sri Lankan building sector and that it is necessary to address them in order to increase the attention on EC reduction.

Keywords: embodied carbon emissions, embodied carbon estimation, global carbon reduction, Sri Lankan building sector

Procedia PDF Downloads 173
14043 An Efficient Approach for Speed up Non-Negative Matrix Factorization for High Dimensional Data

Authors: Bharat Singh, Om Prakash Vyas

Abstract:

Nowadays, applications dealing with high-dimensional data are widely used in popular areas. Various approaches have been developed by researchers over the last few decades to handle such data. One of the problems with NMF approaches is that their randomized initial values do not reach the global optimum within a limited number of iterations, but only a local optimum. Due to this, we have proposed a new approach that chooses the initial values of the decomposition in order to tackle the issue of computational expense. We have devised an algorithm for initializing the values of the decomposed matrices based on Particle Swarm Optimization (PSO). Through the experimental results, we show that the proposed method converges very fast in comparison to other low-rank approximation techniques such as simple multiplicative NMF and ACLS.
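
As a rough illustration of how much the starting point matters for NMF convergence, the sketch below compares a random start with a structured (NNDSVD) start using scikit-learn on synthetic data; it does not reproduce the paper's PSO-based initializer, and the data matrix is synthetic.

```python
# Minimal sketch: effect of NMF initialization on fit quality and iterations.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((500, 200)))      # nonnegative high-dimensional data

for init in ("random", "nndsvd"):
    model = NMF(n_components=20, init=init, max_iter=300, random_state=0)
    W = model.fit_transform(X)
    H = model.components_
    rmse = np.sqrt(np.mean((X - W @ H) ** 2))    # reconstruction error
    print(f"init={init:7s}  iterations={model.n_iter_:3d}  RMSE={rmse:.4f}")
```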

Keywords: ALS, NMF, high dimensional data, RMSE

Procedia PDF Downloads 317
14042 Spectral Domain Fast Multipole Method for Solving Integral Equations of One and Two Dimensional Wave Scattering

Authors: Mohammad Ahmad, Dayalan Kasilingam

Abstract:

In this paper, a spectral domain implementation of the fast multipole method is presented. It is shown that the aggregation, translation, and disaggregation stages of the fast multipole method (FMM) can be performed using spectral domain (SD) analysis. The spectral domain fast multipole method (SD-FMM) has the advantage of eliminating the near-field/far-field classification used in the conventional FMM formulation. The study focuses on the application of SD-FMM to the one-dimensional (1D) and two-dimensional (2D) electric field integral equation (EFIE). The cases of a perfectly conducting strip and circular and square cylinders are numerically analyzed and compared with results from the standard method of moments (MoM).

Keywords: electric field integral equation, fast multipole method, method of moments, wave scattering, spectral domain

Procedia PDF Downloads 369
14041 Random Access in IoT Using Naïve Bayes Classification

Authors: Alhusein Almahjoub, Dongyu Qiu

Abstract:

This paper deals with the random access procedure in next-generation networks and presents a solution to reduce the total service time (TST), which is one of the most important performance metrics in current and future internet of things (IoT) based networks. The proposed solution focuses on the calculation of the optimal transmission probability, which maximizes the success probability and reduces the TST. It uses the information on the number of idle preambles in every time slot and, based on it, estimates the number of backlogged IoT devices using Naïve Bayes estimation, which is a type of supervised learning in the machine learning domain. The estimation of backlogged devices is necessary since the optimal transmission probability depends on it and the eNodeB does not have this information. Simulations carried out in MATLAB verify that the proposed solution gives excellent performance.
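
A simplified sketch of the backlog-estimation idea is given below. It uses a binomial approximation of the idle-preamble count under uniform random preamble choice and a flat prior, which is our own simplification rather than the paper's exact Naïve Bayes formulation; the preamble count K = 54 is only illustrative.

```python
# Estimate the number of backlogged IoT devices from the observed number of
# idle preambles in one random-access slot (independence approximation).
import numpy as np
from scipy.stats import binom

def estimate_backlog(idle_count, K, n_max=500):
    """Grid MAP estimate of the backlog N given the observed idle preambles."""
    n_grid = np.arange(1, n_max + 1)
    p_idle = (1.0 - 1.0 / K) ** n_grid          # P(a given preamble is idle | N)
    log_lik = binom.logpmf(idle_count, K, p_idle)  # binomial approximation, flat prior
    return n_grid[np.argmax(log_lik)]

K, true_N = 54, 120
rng = np.random.default_rng(1)
choices = rng.integers(0, K, size=true_N)       # each device picks one preamble
idle = K - len(np.unique(choices))
print("idle preambles:", idle, " estimated backlog:", estimate_backlog(idle, K))
```

The estimated backlog N̂ can then drive the access control; in a slotted-ALOHA-style analysis the transmission probability would be set to roughly min(1, K/N̂), though the exact rule used in the paper is not reproduced here.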

Keywords: random access, LTE/LTE-A, 5G, machine learning, Naïve Bayes estimation

Procedia PDF Downloads 120
14040 Estimation Model for Concrete Slump Recovery by Using Superplasticizer

Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert

Abstract:

This paper aims to introduce a solution for concrete slump recovery using a type-F chemical admixture (superplasticizer, naphthalene base) into practice, in order to solve the problem of unusable concrete due to slump loss, especially in tropical countries that have faster slump loss rates. On the other hand, randomly adding superplasticizer into concrete can cause the concrete to segregate. Therefore, this paper also develops an estimation model used to calculate the amount of the second dose of superplasticizer needed for concrete slump recovery. Fresh properties of ordinary Portland cement concrete with a volumetric ratio of paste to void between aggregate (paste content) of 1.1-1.3, a water-cement ratio zone of 0.30 to 0.67, and an initial superplasticizer (naphthalene base) dose of 0.25%-1.6% were tested for initial slump and slump loss every 30 minutes for one and a half hours by the slump cone test. Those concretes with slump loss ranging from 10% to 90% were re-dosed and successfully recovered back to their initial slump. The slump after re-dosing was tested by the slump cone test. From the results, it was concluded that slump loss was slower for mixes with a high initial dose of superplasticizer, because the addition of superplasticizer disturbs cement hydration. The required second dose of superplasticizer was affected by two major parameters, water-cement ratio and paste content, where a lower water-cement ratio and paste content cause an increase in the required second dose of superplasticizer. The amount of the second dose of superplasticizer is higher as the solid content within the system increases; solids can come either from cement particles or from aggregate. The data were analyzed to form an equation used to estimate the amount of the second dose of superplasticizer required to recover the slump to its original value.
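
A sketch of how such an estimation equation could be fitted is shown below. The dose values in the table are made-up placeholders (chosen only to follow the reported trend of lower w/c ratio and paste content requiring a higher second dose), not the paper's measurements, and the fitted coefficients are therefore purely illustrative.

```python
# Illustrative least-squares fit: second superplasticizer dose as a linear
# function of water-cement ratio and paste content.
import numpy as np

# columns: w/c ratio, paste content, observed second dose (% by cement weight)
data = np.array([
    [0.30, 1.1, 0.90],
    [0.40, 1.1, 0.70],
    [0.50, 1.2, 0.50],
    [0.60, 1.2, 0.35],
    [0.67, 1.3, 0.20],
])
X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])
y = data[:, 2]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("second dose ≈ %.2f + %.2f*(w/c) + %.2f*(paste content)" % tuple(coef))
```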

Keywords: estimation model, second superplasticizer dosage, slump loss, slump recovery

Procedia PDF Downloads 172
14039 Approximating Maximum Speed on Road from Curvature Information of Bezier Curve

Authors: M. Yushalify Misro, Ahmad Ramli, Jamaludin M. Ali

Abstract:

Bezier curves have useful properties for the path generation problem; for instance, they can generate a reference trajectory for vehicles that satisfies the path constraints. Cubic Bezier curve segments can be joined smoothly to generate the path. One useful property of the Bezier curve is its curvature. In mathematics, curvature is the amount by which a geometric object deviates from being flat, or straight in the case of a line. Another extrinsic example of curvature is a circle, where the curvature is equal to the reciprocal of its radius at any point on the circle. The smaller the radius, the higher the curvature, and thus the more sharply the vehicle needs to bend. In this study, we use a Bezier curve to fit a highway-like curve. We use a different approach to find the best approximation for the curve so that it resembles a highway-like curve. We compute the curvature value by analytical differentiation of the Bezier curve. We then compute the maximum speed for driving using the curvature information obtained. Our research works on some assumptions; first, the Bezier curve estimates the real shape of the curve, which can be verified visually. Even though the fitting process of the Bezier curve does not interpolate exactly on the curve of interest, we believe that the estimation of speed is acceptable. We verified our result with a manual calculation of the curvature from the map.
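
A minimal sketch of the curvature-to-speed step is shown below. The control points and the assumed comfortable lateral acceleration of 2 m/s² are illustrative values, not taken from the paper; the speed limit follows v = sqrt(a_lat / kappa).

```python
# Curvature of a cubic Bezier curve by analytical differentiation, and the
# resulting maximum driving speed for an assumed lateral-acceleration limit.
import numpy as np

P = np.array([[0.0, 0.0], [40.0, 5.0], [80.0, 30.0], [120.0, 35.0]])  # control points (m)
a_lat = 2.0   # assumed comfortable lateral acceleration, m/s^2

def bezier_curvature(P, t):
    d1 = 3*(1-t)**2*(P[1]-P[0]) + 6*(1-t)*t*(P[2]-P[1]) + 3*t**2*(P[3]-P[2])  # B'(t)
    d2 = 6*(1-t)*(P[2]-2*P[1]+P[0]) + 6*t*(P[3]-2*P[2]+P[1])                  # B''(t)
    cross = d1[0]*d2[1] - d1[1]*d2[0]
    return abs(cross) / np.linalg.norm(d1)**3   # kappa = |x'y'' - y'x''| / (x'^2+y'^2)^1.5

t = np.linspace(0.0, 1.0, 201)
kappa = np.array([bezier_curvature(P, ti) for ti in t])
v_max = np.sqrt(a_lat / kappa.max())            # m/s at the sharpest point
print(f"max curvature = {kappa.max():.4f} 1/m, speed limit ≈ {3.6*v_max:.1f} km/h")
```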

Keywords: speed estimation, path constraints, reference trajectory, Bezier curve

Procedia PDF Downloads 344
14038 Tracking Filtering Algorithm Based on ConvLSTM

Authors: Ailing Yang, Penghan Song, Aihua Cai

Abstract:

The nonlinear maneuvering target tracking problem is mainly a state estimation problem when the target motion model is uncertain. Traditional solutions include Kalman filtering based on the Bayesian filtering framework and extended Kalman filtering. However, these methods need prior knowledge such as a kinematics model and the state distribution, and their performance is poor for state estimation of complex dynamic systems without such priors. Therefore, in view of the problems existing in traditional algorithms, a convolutional LSTM target state estimation algorithm based on Self-Attention Memory (SAM), SAConvLSTM-SE, is proposed to learn the historical motion state of the target and the error distribution information measured at the current time. The measured track point data of airborne radar are processed into data sets. After supervised training, the data-driven deep neural network based on SAConvLSTM can directly output the target state at the next moment. Through experiments on two different maneuvering targets, we find that the network has stronger robustness and better tracking accuracy than the existing tracking methods.
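
To illustrate the data-driven idea in its simplest form, the sketch below trains a plain LSTM (not the paper's SAConvLSTM-SE with self-attention memory) to regress the next target state from a window of past states; the random tensors merely stand in for windows of radar track points.

```python
# Simplified sketch: a plain LSTM predicting the next target state from a
# window of past measurements (one supervised training step on toy data).
import torch
import torch.nn as nn

class StatePredictor(nn.Module):
    def __init__(self, state_dim=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, x):              # x: (batch, window, state_dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predicted next state

model = StatePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(32, 10, 4)             # 32 windows of 10 past states (x, y, vx, vy)
y = torch.randn(32, 4)                 # next state to predict
loss = loss_fn(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
print("training loss:", loss.item())
```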

Keywords: maneuvering target, state estimation, Kalman filter, LSTM, self-attention

Procedia PDF Downloads 103
14037 Real Time Video Based Smoke Detection Using Double Optical Flow Estimation

Authors: Anton Stadler, Thorsten Ike

Abstract:

In this paper, we present a video-based smoke detection algorithm based on TVL1 optical flow estimation. The main part of the algorithm is an accumulating system for the motion angles and upward motion speed of the flow field. We optimized the usage of TVL1 flow estimation for the detection of smoke with very low smoke density. Therefore, we use adapted flow parameters and estimate the flow field on difference images. We show in theory and in evaluation that this improves the performance of smoke detection significantly. We evaluate the smoke algorithm using videos with different smoke densities and different backgrounds. We show that smoke detection is very reliable in varying scenarios. Furthermore, we verify that our algorithm is very robust towards disturbance videos with crowded scenes.
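
A simplified sketch of the accumulation idea is shown below. Farneback flow is used for portability, whereas the paper relies on TV-L1 flow (available as cv2.optflow.createOptFlow_DualTVL1() in opencv-contrib); "smoke.avi", the angle window and the magnitude threshold are placeholders, and the adapted parameters and difference-image trick are not reproduced.

```python
# Accumulate upward-motion evidence from dense optical flow as a smoke cue.
import cv2
import numpy as np

cap = cv2.VideoCapture("smoke.avi")          # placeholder input path
ok, prev = cap.read()
assert ok, "could not read the input video"
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
upward_score = 0.0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1], angleInDegrees=True)
    # image y grows downward, so upward motion has angles around 270 degrees
    upward = (ang > 225) & (ang < 315) & (mag > 0.5)
    upward_score += mag[upward].sum()
    prev = gray

print("accumulated upward-motion score:", upward_score)
```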

Keywords: low density, optical flow, upward smoke motion, video based smoke detection

Procedia PDF Downloads 318
14036 Two-Dimensional WO₃ and TiO₂ Semiconductor Oxides Developed by Atomic Layer Deposition with Controllable Nano-Thickness on Wafer-Scale

Authors: S. Zhuiykov, Z. Wei

Abstract:

Conformal, defect-free two-dimensional (2D) WO₃ and TiO₂ semiconductors have been developed by the atomic layer deposition (ALD) technique on the wafer scale, with a unique approach to thickness control with a precision of ±10%, from a monolayer of nanomaterial (less than 1.0 nm thick) to nano-layered 2D structures with thicknesses of ~3.0-7.0 nm. The developed 2D nanostructures exhibited unique, distinguishable properties at the nanoscale compared to their thicker counterparts. Specifically, a 2D TiO₂-Au bilayer demonstrated improved photocatalytic degradation of palmitic acid under UV and visible light illumination. The improved functional capabilities of 2D semiconductors would be advantageous to various environmental, nano-energy and bio-sensing applications. The ALD-enabled approach is proven to be versatile, scalable and applicable to a broader range of 2D semiconductors.

Keywords: two-dimensional (2D) semiconductors, ALD, WO₃, TiO₂, wafer scale

Procedia PDF Downloads 130
14035 Blind Channel Estimation for Frequency Hopping System Using Subspace Based Method

Authors: M. M. Qasaymeh, M. A. Khodeir

Abstract:

Subspace channel estimation methods have been studied widely. They depend on a subspace decomposition of the covariance matrix to separate the signal subspace from the noise subspace. The decomposition is normally done by either an Eigenvalue Decomposition (EVD) or a Singular Value Decomposition (SVD) of the Auto-Correlation Matrix (ACM). However, the subspace decomposition process is computationally expensive. In this paper, the multipath channel estimation problem for a Slow Frequency Hopping (SFH) system using a noise-subspace-based method is considered. An efficient method to estimate the multipath time delays is proposed, by applying the MUltiple SIgnal Classification (MUSIC) algorithm, which uses the null space extracted by the Rank-Revealing LU factorization (RRLU). The RRLU provides accurate information about the rank and the numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed novel method decreases the computational complexity by approximately half compared with RRQR-based methods while keeping the same performance. Computer simulations are also included to demonstrate the effectiveness of the proposed scheme.
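
A generic MUSIC delay-estimation sketch is given below. The noise subspace is taken here from an eigen-decomposition of the sample auto-correlation matrix; the paper's contribution of extracting it with a rank-revealing LU (RRLU) factorization is not reproduced, and all signal parameters are illustrative.

```python
# MUSIC pseudospectrum over a delay grid for a two-path synthetic channel.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
M, snapshots = 64, 200                   # frequency bins, observation snapshots
freqs = np.arange(M) * 15.625e3          # bin spacing in Hz (illustrative)
true_delays = np.array([0.8e-6, 2.3e-6]) # seconds

def steering(tau):
    return np.exp(-2j * np.pi * freqs * tau)

# synthetic snapshots: random path gains plus white noise
A = np.column_stack([steering(t) for t in true_delays])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))
Y = A @ S + 0.1 * N

R = Y @ Y.conj().T / snapshots           # sample auto-correlation matrix
_, V = np.linalg.eigh(R)                 # eigenvectors, ascending eigenvalues
En = V[:, :-2]                           # noise subspace (all but the 2 largest)

tau_grid = np.linspace(0, 4e-6, 2000)
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t))**2 for t in tau_grid])
peaks, _ = find_peaks(pseudo)
top2 = peaks[np.argsort(pseudo[peaks])[-2:]]
print("estimated delays (us):", np.sort(tau_grid[top2]) * 1e6)
```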

Keywords: frequency hopping, channel model, time delay estimation, RRLU, RRQR, MUSIC, LS-ESPRIT

Procedia PDF Downloads 384
14034 Remote Sensing and GIS Integration for Paddy Production Estimation in Bali Province, Indonesia

Authors: Sarono, Hamim Zaky Hadibasyir, Ridho Kurniawan

Abstract:

Estimation of paddy production is one of the areas that can be examined using the techniques of remote sensing and geographic information systems (GIS) in the field of agriculture. The purpose of this research is to estimate paddy production and to show how remote sensing and geographic information systems (GIS) can be used to perform this estimation in the Tegallalang and Payangan Sub-districts, Bali Province, Indonesia. The method used is the land suitability method. This method associates physical parameters, embodied in the smallest mapping unit representing a particular field, with that field's productivity. The production estimate is derived using the standard FAO land suitability classes with a matching technique. The parameters used to create the land units are slope (FAO), climate classification (Oldeman), landform (Prapto Suharsono), and soil type. The land use map, consisting of paddy and non-paddy field information, was obtained from GeoEye-1 imagery using a visual interpretation technique. Landsat imagery was used for the interpretation of the landform; the slope classification was obtained from identified elevation points with spline interpolation, whereas the climate and soil data are secondary data originating from the related institutions. The results of this research indicate that the wetland suitability in the Tegallalang and Payangan Sub-districts consists of class S1 (very suitable), covering an area of 2884.7 ha with a productivity of 5 tons/ha, and class S2 (suitable), covering an area of 482.9 ha with a productivity of 3 tons/ha. The total estimated paddy production in both sub-districts is 31,744.3 tons per year.
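
The annual total can be checked against the reported areas and productivities; the small sketch below assumes two harvests per year, which is our inference from the figures rather than something stated in the abstract.

```python
# Worked check of the production estimate from area x productivity.
s1_area, s1_yield = 2884.7, 5.0   # ha, tons/ha per harvest (class S1)
s2_area, s2_yield = 482.9, 3.0    # ha, tons/ha per harvest (class S2)
per_harvest = s1_area * s1_yield + s2_area * s2_yield
print("per harvest:", per_harvest, "tons;  per year (2 harvests):", 2 * per_harvest, "tons")
# -> roughly 15,872 tons per harvest, about 31,744 tons per year
```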

Keywords: production estimation, paddy, remote sensing, geography information system, land suitability

Procedia PDF Downloads 301
14033 Modeling Depth Averaged Velocity and Boundary Shear Stress Distributions

Authors: Ebissa Gadissa Kedir, C. S. P. Ojha, K. S. Hari Prasad

Abstract:

In the present study, the depth-averaged velocity and boundary shear stress in non-prismatic compound channels with three different converging floodplain angles, ranging from 1.43° to 7.59°, have been studied. The analytical solutions were derived by considering the forces acting on the channel bed and walls. Five key parameters, i.e., a non-dimensional coefficient, the secondary flow term, the secondary flow coefficient, the friction factor, and the dimensionless eddy viscosity, were considered and discussed. An expression for the non-dimensional coefficient and the integration constants was derived based on the boundary conditions. The model was applied to different data sets from the present experiments and from experiments from other sources to examine and analyse the influence of the floodplain converging angles on the depth-averaged velocity and boundary shear stress distributions. The results show that the non-dimensional parameter plays an important role in portraying the variation of the depth-averaged velocity and boundary shear stress distributions with different floodplain converging angles. Thus, the variation of the non-dimensional coefficient needs attention, since it affects the secondary flow term and secondary flow coefficient in both the main channel and the floodplains. The analysis shows that the depth-averaged velocities are sensitive to the shear-stress-dependent non-dimensional model coefficient, and the analytical solutions agree well with the experimental data when all five parameters are included. It is inferred that the developed model may facilitate the interest of others in complex flow modeling.

Keywords: depth-average velocity, converging floodplain angles, non-dimensional coefficient, non-prismatic compound channels

Procedia PDF Downloads 48
14032 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method

Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola

Abstract:

The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimation is used together with thermal models to predict battery temperature in operation and to adapt the design of the battery pack and the cooling system to these thermal needs, guaranteeing its safety and correct operation. In the present work, a comparison between the use of a heat flux sensor (HFS) for indirect measurement of the heat losses in a cell and the widely used, simplified version of Bernardi's equation is presented. First, a Li-ion cell is thermally characterized with an HFS to measure the thermal parameters that are used in a first-order lumped thermal model. These parameters are the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static (no current flowing through the cell) and dynamic (current flowing through the cell) tests are conducted in which the HFS is used to measure the heat exchanged between the cell and the ambient, so that the thermal capacity and resistance, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature, and the HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows the comparison between the heat generation predicted by Bernardi's equation and the HFS measurements. Data post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient and not the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using both the HFS and Bernardi's equation total heat generation) and compared against experimental temperature data (measured with a T-type thermocouple). At the end of this work, a critical review of the results obtained and the possible reasons for mismatch is reported. The results show that indirectly measuring the heat generation with the HFS gives a more precise estimation than Bernardi's simplified equation. On the one hand, when using Bernardi's simplified equation, the estimated heat generation deviates from the cell temperature measurements during charges at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation, and therefore on the open circuit voltage calculation (as it is SoC dependent). On the other hand, when indirectly measuring the heat generation with the HFS, the resulting error is a maximum of 0.28 °C in the temperature prediction, in contrast with 1.38 °C with Bernardi's simplified equation. This illustrates the limitations of Bernardi's simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi's equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi's equation accounts for no losses after cutting the charging or discharging current. However, the HFS measurement shows that after cutting the current the cell continues generating heat for some time, increasing the error of Bernardi's equation.
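
A minimal sketch of the common structure of the two routes is given below: the simplified Bernardi heat term fed into a first-order lumped thermal model. All parameter values and the synthetic current/voltage trace are illustrative placeholders rather than the paper's measurements, and the entropic term is neglected as in the simplified form.

```python
# Simplified Bernardi heat Q = I*(U - Uocv) driving a lumped thermal model
# C*dT/dt = Q - (T - Tamb)/Rth, integrated with forward Euler.
import numpy as np

dt = 1.0                                  # s
t = np.arange(0, 1800, dt)
I = np.where(t < 1200, 10.0, 0.0)         # 10 A charge pulse, then rest
U_ocv, R_int = 3.7, 0.02                  # V, ohm (placeholder cell parameters)
U = U_ocv + I * R_int                     # terminal voltage during charge

Q_bernardi = I * (U - U_ocv)              # W, irreversible heat only
                                          # (entropic term I*T*dUocv/dT neglected)

C_th, R_th, T_amb = 120.0, 3.0, 25.0      # J/K, K/W, degC (placeholders)
T = np.empty_like(t)
T[0] = T_amb
for k in range(1, len(t)):
    T[k] = T[k-1] + dt / C_th * (Q_bernardi[k-1] - (T[k-1] - T_amb) / R_th)

print(f"peak predicted temperature: {T.max():.2f} degC")
```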

Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization

Procedia PDF Downloads 330
14031 Estimation and Forecasting with a Quantile AR Model for Financial Returns

Authors: Yuzhi Cai

Abstract:

This talk presents a Bayesian approach to quantile autoregressive (QAR) time series model estimation and forecasting. We establish that the joint posterior distribution of the model parameters and future values is well defined. The associated MCMC algorithm for parameter estimation and forecasting converges to the posterior distribution quickly. We also present a forecast combining technique that produces more accurate out-of-sample forecasts by using a weighted sequence of fitted QAR models. A moving window method to check the quality of the estimated conditional quantiles is developed. We verify our methodology using simulation studies and then apply it to currency exchange rate data. An application of the method to the USD to GBP daily currency exchange rates will also be discussed. The results obtained show that an unequally weighted combining method performs better than other forecasting methodologies.
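
For illustration only, the sketch below fits a QAR(1) model at a few quantile levels by ordinary quantile regression with statsmodels; it is not the Bayesian MCMC estimation described in the talk, and the return series is simulated.

```python
# QAR(1): regress the return on its lag at several quantile levels.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
y = np.zeros(n)
eps = 0.01 * rng.standard_normal(n)
for i in range(1, n):                     # simulated AR(1)-like daily returns
    y[i] = 0.0002 + 0.1 * y[i - 1] + eps[i]

df = pd.DataFrame({"r": y[1:], "r_lag": y[:-1]})
X = sm.add_constant(df["r_lag"])
for q in (0.05, 0.50, 0.95):              # lower tail, median, upper tail
    fit = sm.QuantReg(df["r"], X).fit(q=q)
    print(f"tau={q:.2f}  intercept={fit.params['const']:.5f}  slope={fit.params['r_lag']:.3f}")
```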

Keywords: combining forecasts, MCMC, quantile modelling, quantile forecasting, predictive density functions

Procedia PDF Downloads 314
14030 Convergence Results of Two-Dimensional Homogeneous Elastic Plates from Truncation of Potential Energy

Authors: Erick Pruchnicki, Nikhil Padhye

Abstract:

Plates are important engineering structures which have attracted extensive research since the 19th century. The subject of this work is the static analysis of a linearly elastic, homogeneous plate under small deformations. A 'thin plate' is a three-dimensional structure comprising a small transverse dimension with respect to a flat mid-surface. The general aim of any plate theory is to deduce a two-dimensional model, in terms of mid-surface quantities, that approximately and accurately describes the plate's deformation. In recent decades, a common starting point for this purpose is to utilize a series expansion of the displacement field across the thickness dimension in terms of the thickness parameter (h). These attempts are mathematically consistent in deriving leading-order plate theories based on certain a priori scaling between the thickness and the applied loads; for example, asymptotic methods are aimed at generating leading-order two-dimensional variational problems by postulating a formal asymptotic expansion of the displacement fields. Such methods rigorously generate a hierarchy of two-dimensional models depending on the order of magnitude of the applied load with respect to the plate thickness. However, in practice, applied loads are external and thus not directly linked to or dependent on the geometry/thickness of the plate, rendering any such model (based on a priori scaling) of limited practical utility. In other words, the main limitation of these approaches is that they do not furnish a single plate model for all orders of applied loads. Following the analogy of recent efforts deploying Fourier-series expansion to study the convergence of reduced models, we propose two-dimensional models resulting from truncation of the potential energy and rigorously prove the convergence of these two-dimensional plate models to the parent three-dimensional linear elasticity with increasing truncation order of the potential energy.

Keywords: plate theory, Fourier-series expansion, convergence result, Legendre polynomials

Procedia PDF Downloads 81
14029 Estimation of Opc, Fly Ash and Slag Contents in Blended and Composite Cements by Selective Dissolution Method

Authors: Suresh Palla

Abstract:

This research paper presents the results of a study on the estimation of fly ash, slag and cement contents in blended and composite cements by a novel selective dissolution method. The types of cement samples investigated include OPC with fly ash as a performance improver, OPC with slag as a performance improver, PPC, PSC and composite cement conforming to the respective Indian Standards. The slag and OPC contents in PSC were estimated by selectively dissolving OPC in stage 1 and selectively dissolving slag in stage 2. In the case of the composite cement sample, the percentages of cement, slag and fly ash were estimated systematically by selective dissolution of cement, slag and fly ash in three stages. In the first stage, the cement is dissolved and separated, leaving a residue of slag and fly ash, designated R1. The second stage involves gravimetric estimation of the OPC fraction and the residue, and selective dissolution of the fly ash and slag contents. The fly ash content, R2, was estimated through gravimetric analysis. Thereafter, the difference between R1 and R2 is taken as the slag content. The obtained results for cement, fly ash and slag using the selective dissolution method showed a standard deviation within 10% of the corresponding percentages of the respective constituents. The results suggest that this novel selective dissolution method can be successfully used for the estimation of OPC and SCM contents in different types of cements.

Keywords: selective dissolution method, fly ash, GGBFS slag, EDTA

Procedia PDF Downloads 122
14028 Magnetic End Leakage Flux in a Spoke Type Rotor Permanent Magnet Synchronous Generator

Authors: Petter Eklund, Jonathan Sjölund, Sandra Eriksson, Mats Leijon

Abstract:

The spoke type rotor can be used to obtain magnetic flux concentration in permanent magnet machines. This allows the air gap magnetic flux density to exceed the remanent flux density of the permanent magnets but gives problems with leakage fluxes in the magnetic circuit. The end leakage flux of one spoke type permanent magnet rotor design is studied through measurements and finite element simulations. The measurements are performed in the end regions of a 12 kW prototype generator for a vertical axis wind turbine. The simulations are made using three dimensional finite elements to calculate the magnetic field distribution in the end regions of the machine. Also two dimensional finite element simulations are performed and the impact of the two dimensional approximation is studied. It is found that the magnetic leakage flux in the end regions of the machine is equal to about 20% of the flux in the permanent magnets. The overestimation of the performance by the two dimensional approximation is quantified and a curve-fitted expression for its behavior is suggested.

Keywords: end effects, end leakage flux, permanent magnet machine, spoke type rotor

Procedia PDF Downloads 298
14027 A Modified Refined Higher Order Zigzag Theory for Stress Analysis of Hybrid Composite Laminates

Authors: Dhiraj Biswas, Chaitali Ray

Abstract:

A modified refined higher-order zigzag theory has been developed in this paper in order to compute accurate interlaminar stresses within hybrid laminates. Warping has a significant effect on the mechanical behaviour of the laminates. To the best of the authors' knowledge, the stress analysis of hybrid laminates has not been reported in the published literature. The present paper aims to develop a new C0-continuous element based on the refined higher-order zigzag theories, considering the warping effect in the formulation for hybrid laminates. The eight-noded isoparametric plate bending element is used for the flexural analysis of laminated composite plates to study the performance of the proposed model. The transverse shear stresses are computed by using the differential equations of stress equilibrium in a simplified manner. A computer code has been developed using the MATLAB software package. Several numerical examples are solved to assess the performance of the present finite element model based on the proposed higher-order zigzag theory by comparing the present results with three-dimensional elasticity solutions. The present formulation is validated by comparing the results with those obtained from the relevant literature. An extensive parametric study has been carried out on hybrid laminates with varying percentages of materials and angles of orientation of the fibre content.

Keywords: hybrid laminate, Interlaminar stress, refined higher order zigzag theory, warping effect

Procedia PDF Downloads 193
14026 Polynomially Adjusted Bivariate Density Estimates Based on the Saddlepoint Approximation

Authors: S. B. Provost, Susan Sheng

Abstract:

An alternative bivariate density estimation methodology is introduced in this presentation. The proposed approach involves estimating the density function associated with the marginal distribution of each of the two variables by means of the saddlepoint approximation technique and applying a bivariate polynomial adjustment to the product of these density estimates. Since the saddlepoint approximation is utilized in the context of density estimation, such estimates are determined from empirical cumulant-generating functions. In the univariate case, the saddlepoint density estimate is itself adjusted by a polynomial. Given a set of observations, the coefficients of the polynomial adjustments are obtained from the sample moments. Several illustrative applications of the proposed methodology shall be presented. Since this approach relies essentially on a determinate number of sample moments, it is particularly well suited for modeling massive data sets.
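
The univariate building block of this approach can be sketched as follows: a saddlepoint density estimate obtained from the empirical cumulant-generating function. The bivariate polynomial adjustment described in the abstract is not reproduced, and the gamma sample and the root-finding bracket are illustrative choices.

```python
# Saddlepoint density estimate from the empirical CGF K(t) = log mean(exp(t*x)):
# solve K'(s) = x, then f(x) ~ exp(K(s) - s*x) / sqrt(2*pi*K''(s)).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(4)
sample = rng.gamma(shape=3.0, scale=1.0, size=2000)

def K(t):
    return np.log(np.mean(np.exp(t * sample)))

def Kp(t):   # K'(t)
    w = np.exp(t * sample)
    return np.sum(sample * w) / np.sum(w)

def Kpp(t):  # K''(t)
    w = np.exp(t * sample)
    m = np.sum(sample * w) / np.sum(w)
    return np.sum((sample - m) ** 2 * w) / np.sum(w)

def saddlepoint_density(x):
    s = brentq(lambda t: Kp(t) - x, -5.0, 0.99)   # saddlepoint equation K'(s) = x
    return np.exp(K(s) - s * x) / np.sqrt(2 * np.pi * Kpp(s))

for x in (1.0, 3.0, 6.0):
    print(f"f_hat({x}) = {saddlepoint_density(x):.4f}")
```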

Keywords: density estimation, empirical cumulant-generating function, moments, saddlepoint approximation

Procedia PDF Downloads 247
14025 Motion Estimator Architecture with Optimized Number of Processing Elements for High Efficiency Video Coding

Authors: Seongsoo Lee

Abstract:

Motion estimation occupies the heaviest computation in HEVC (high efficiency video coding). Many fast algorithms such as TZS (test zone search) have been proposed to reduce the computation. Still, the huge computation of the motion estimation is a critical issue in the implementation of an HEVC video codec. In this paper, a motion estimator architecture with an optimized number of PEs (processing elements) is presented by exploiting early termination. It also reduces hardware size by exploiting parallel processing. The presented motion estimator architecture has 8 PEs, and it can efficiently perform TZS with very high utilization of the PEs.

Keywords: motion estimation, test zone search, high efficiency video coding, processing element, optimization

Procedia PDF Downloads 332
14024 Human Posture Estimation Based on Multiple Viewpoints

Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo

Abstract:

This study aimed to address the problem of improving the confidence of key points by fusing multi-view information, thereby estimating human posture more accurately. We first obtained multi-view image information and then used the MvP algorithm to fuse this multi-view information together to obtain a set of high-confidence human key points. We used these as the input for the Spatio-Temporal Graph Convolution (ST-GCN). ST-GCN is a deep learning model used for processing spatio-temporal data, which can effectively capture spatio-temporal relationships in video sequences. By using the MvP algorithm to fuse multi-view information and inputting it into the spatio-temporal graph convolution model, this study provides an effective method to improve the accuracy of human posture estimation and provides strong support for further research and application in related fields.

Keywords: multi-view, pose estimation, ST-GCN, joint fusion

Procedia PDF Downloads 35
14023 Parameter Estimation of False Dynamic EIV Model with Additive Uncertainty

Authors: Dalvinder Kaur Mangal

Abstract:

For the past decade, noise-corrupted output measurements have been a fundamental research problem to be investigated. On the other hand, the estimation of the parameters of linear dynamic systems when the input is also affected by noise is recognized as a more difficult problem, which only recently has received increasing attention. Representations where errors or measurement noises/disturbances are present on both the inputs and outputs are usually called errors-in-variables (EIV) models. These disturbances may also have additive effects, which are also considered in this paper. Parameter estimation for the false EIV problem using equation error, output error and iterative prefiltering identification schemes, with and without additive uncertainty, when only the output observation is corrupted by noise, has been dealt with in this paper. A comparative study of these three schemes has also been carried out.

Keywords: errors-in-variable (EIV), false EIV, equation error, output error, iterative prefiltering, Gaussian noise

Procedia PDF Downloads 457
14022 Particle Filter State Estimation Algorithm Based on Improved Artificial Bee Colony Algorithm

Authors: Guangyuan Zhao, Nan Huang, Xuesong Han, Xu Huang

Abstract:

In order to solve the problem of sample impoverishment in the traditional particle filter algorithm and achieve accurate state estimation in a nonlinear system, a particle filter method based on an improved artificial bee colony (ABC) algorithm is proposed. The algorithm simulates the bee foraging and optimization process and moves particles toward the high-likelihood region of the posterior probability to improve the rationality of the particle distribution. The opposition-based learning (OBL) strategy is introduced to optimize the initial population of the artificial bee colony algorithm. A convergence factor is introduced into the neighborhood search strategy to limit the search range and improve the convergence speed. Finally, the crossover and mutation operations of the genetic algorithm are introduced into the search mechanism of the following bees, which makes the algorithm jump out of local extrema quickly and continue to search for the global extremum, improving its optimization ability. The simulation results show that the improved method can improve the estimation accuracy of particle filters, ensure the diversity of particles, and improve the rationality of the particle distribution.
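
For reference, a baseline bootstrap particle filter on a standard 1-D nonlinear benchmark is sketched below, showing the resampling step where impoverishment arises; the ABC-based improvement proposed in the paper is not reproduced, and the model and noise levels are illustrative.

```python
# Bootstrap particle filter: propagate, weight by likelihood, estimate, resample.
import numpy as np

rng = np.random.default_rng(5)
T, N = 50, 500                        # time steps, particles
q, r = 1.0, 1.0                       # process and measurement noise std

def f(x, k):                          # nonlinear state transition
    return 0.5*x + 25*x/(1 + x**2) + 8*np.cos(1.2*k)

def h(x):                             # nonlinear measurement
    return x**2 / 20.0

# simulate a ground-truth trajectory and measurements
x_true = np.zeros(T)
y = np.zeros(T)
for k in range(1, T):
    x_true[k] = f(x_true[k-1], k) + q*rng.standard_normal()
    y[k] = h(x_true[k]) + r*rng.standard_normal()

particles = rng.standard_normal(N)
est = np.zeros(T)
for k in range(1, T):
    particles = f(particles, k) + q*rng.standard_normal(N)      # propagate
    w = np.exp(-0.5*((y[k] - h(particles))/r)**2)               # likelihood weights
    w /= w.sum()
    est[k] = np.dot(w, particles)                               # weighted estimate
    particles = particles[rng.choice(N, size=N, p=w)]           # multinomial resampling

print("RMSE:", np.sqrt(np.mean((est[1:] - x_true[1:])**2)))
```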

Keywords: particle filter, impoverishment, state estimation, artificial bee colony algorithm

Procedia PDF Downloads 99
14021 A Systematic Review on Development of a Cost Estimation Framework: A Case Study of Nigeria

Authors: Babatunde Dosumu, Obuks Ejohwomu, Akilu Yunusa-Kaltungo

Abstract:

Cost estimation in construction is often difficult, particularly when dealing with risks and uncertainties, which are inevitable and peculiar to developing countries like Nigeria. Direct consequences of these are major deviations in cost, duration, and quality. The fundamental aim of this study is to develop a framework for assessing the impacts of risk on cost estimation, which in turn causes variabilities between contract sum and final account. This is very important, as initial estimates given to clients should reflect the certain magnitude of consistency and accuracy, which the client builds other planning-related activities upon, and also enhance the capabilities of construction industry professionals by enabling better prediction of the final account from the contract sum. In achieving this, a systematic literature review was conducted with cost variability and construction projects as search string within three databases: Scopus, Web of science, and Ebsco (Business source premium), which are further analyzed and gap(s) in knowledge or research discovered. From the extensive review, it was found that factors causing deviation between final accounts and contract sum ranged between 1 and 45. Besides, it was discovered that a cost estimation framework similar to Building Cost Information Services (BCIS) is unavailable in Nigeria, which is a major reason why initial estimates are very often inconsistent, leading to project delay, abandonment, or determination at the expense of the huge sum of money invested. It was concluded that the development of a cost estimation framework that is adjudged an important tool in risk shedding rather than risk-sharing in project risk management would be a panacea to cost estimation problems, leading to cost variability in the Nigerian construction industry by the time this ongoing Ph.D. research is completed. It was recommended that practitioners in the construction industry should always take into account risk in order to facilitate the rapid development of the construction industry in Nigeria, which should give stakeholders a more in-depth understanding of the estimation effectiveness and efficiency to be adopted by stakeholders in both the private and public sectors.

Keywords: cost variability, construction projects, future studies, Nigeria

Procedia PDF Downloads 163
14020 Shrinkage Evaluation in a Stepped Wax Pattern – a Simulation Approach

Authors: Alok S Chauhan, Sridhar S., Pradyumna R.

Abstract:

In the process of precision investment casting of turbine hollow blade/vane components, a part of the dimensional deviations observed in the castings can be attributed to the wax pattern. In the process of injection moulding of wax to produce patterns, heated wax shrinks in size during cooling in the die, leading to a reduction in the dimensions of the pattern. Also, flow and thermal induced residual stresses result in shrinkage & warpage of the component after removal from the die, further adding to the deviations. Injection moulding parameters such as wax temperature, flow rate, packing pressure, etc. affect the flow and thermal behavior of the component and hence are directly responsible for the dimensional deviations. There is a need to precisely determine and control these deviations in order to achieve stringent dimensional accuracies imposed on these castings by aerospace standards. Simulation based approaches provide a platform to predict these dimensional deviations without resorting to elaborate experimentation. In the present paper, Moldex3D simulation package has been utilized to analyze the effect of variations in injection temperature, packing pressure and cooling time on the shrinkage behavior of a stepped pattern. Two types of waxes with different rheological properties have been included in the study to gauge the effect of change in wax on the dimensional deviations. A full factorial design of experiments has been configured with these parameters and results of analysis of variance have been presented.

Keywords: wax patterns, investment casting, pattern die/mould, wax injection, Moldex3D simulation

Procedia PDF Downloads 339
14019 Analysis of Reflection of Elastic Waves in Three Dimensional Model Comprised with Viscoelastic Anisotropic Medium

Authors: Amares Chattopadhyay, Akanksha Srivastava

Abstract:

A unified approach to studying the reflection of a plane wave in a three-dimensional model comprised of a triclinic viscoelastic medium is presented. The phase velocities of the reflected qP, qSV and qSH waves have been calculated for the concerned medium by using the eigenvalue approach. A generalized method has been implemented to compute the complex form of the amplitude ratios. Further, we discuss the nature of the reflection coefficients of the qP, qSV and qSH waves. The amplitude ratios are found to be strongly influenced by the viscoelastic parameter, the polar angle and the azimuthal angle. The research article is particularly focused on studying the effect of viscoelasticity associated with highly anisotropic media, which yields notable information about the reflection coefficients of the qP, qSV, and qSH waves. The outcomes may be further useful for the better exploration of all types of hydrocarbon reservoirs and for advancement in the field of reflection seismology.

Keywords: amplitude ratios, three dimensional, triclinic, viscoelastic

Procedia PDF Downloads 200
14018 A Quantification Method of Attractiveness of Stations and an Estimation Method of Number of Passengers Taking into Consideration the Attractiveness of the Station

Authors: Naoya Ozaki, Takuya Watanabe, Ryosuke Matsumoto, Noriko Fukasawa

Abstract:

In the metropolitan areas of Japan, shopping areas are set up in many stations, and escalators and elevators are installed to make the stations barrier-free. Further, many areas around the stations are being redeveloped. Railway business operators want to know how much effect these circumstances have on the attractiveness of a station or the number of passengers using it. So, we performed a questionnaire survey of station users in the metropolitan areas to find factors that affect the attractiveness of stations. Then, based on the analysis of the survey, we developed a method to quantitatively evaluate the attractiveness of stations. We also developed an estimation method for the number of passengers based on a combination of the quantitatively evaluated attractiveness of the station and the residential and labor population around the station. Then, we derived precise linear regression models estimating the attractiveness of the station and the number of passengers of the station.
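
The structure of such a passenger model can be sketched as a linear regression of passenger counts on the attractiveness score and the surrounding populations. The station data below are made-up placeholders, so the fitted coefficients are purely illustrative.

```python
# Linear regression: daily passengers ~ attractiveness + residential + labor population.
import numpy as np

# columns: attractiveness score, residential pop. (thousands),
#          labor pop. (thousands), daily passengers (thousands)
stations = np.array([
    [0.42,  35.0, 12.0, 18.0],
    [0.75,  80.0, 55.0, 62.0],
    [0.31,  22.0,  8.0, 10.0],
    [0.66,  60.0, 40.0, 45.0],
    [0.58,  48.0, 25.0, 31.0],
    [0.90, 120.0, 95.0, 98.0],
])
X = np.column_stack([np.ones(len(stations)), stations[:, :3]])
beta, *_ = np.linalg.lstsq(X, stations[:, 3], rcond=None)
print("passengers ≈ %.1f + %.1f*attractiveness + %.2f*residential + %.2f*labor" % tuple(beta))
```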

Keywords: attractiveness of the station, estimation method, number of passengers of the station, redevelopment around the station, renovation of the station

Procedia PDF Downloads 258
14017 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

It can be frequently observed that the data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handle this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of order 1 and order 2 (MQL1, MQL2) and penalized quasi-likelihood of order 1 and order 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset. Therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is also equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.

Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error

Procedia PDF Downloads 116
14016 Authentication Based on Hand Movement by Low Dimensional Space Representation

Authors: Reut Lanyado, David Mendlovic

Abstract:

Most biological methods for authentication require special equipment, and some of them are easy to fake. We propose a method for authentication based on hand movement, captured with a regular camera, while typing a sentence. This technique uses the full video of the hand, which is harder to fake. In the first phase, we tracked the hand joints in each frame. Next, we represented a single frame for each individual using our Pose Agnostic Rotation and Movement (PARM) dimensional space. Then, we represented a full video of hand movement in a fixed low-dimensional space using our Fixed Dimension Video by Interpolation Statistics (FDVIS) method. Finally, we identified each individual from the FDVIS representation using unsupervised clustering and supervised methods. Accuracy exceeds 96% for 80 individuals when using supervised KNN.
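
Only the final supervised step is sketched below: k-NN identification on fixed-length descriptors for 80 individuals. Random vectors stand in for the FDVIS representations; the hand tracking, PARM space and interpolation statistics themselves are not reproduced.

```python
# k-NN identification on fixed-dimensional video descriptors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(6)
n_people, clips_each, dim = 80, 12, 64
centers = rng.standard_normal((n_people, dim))                 # one "style" per person
X = np.repeat(centers, clips_each, axis=0) \
    + 0.3 * rng.standard_normal((n_people * clips_each, dim))  # noisy clip descriptors
y = np.repeat(np.arange(n_people), clips_each)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print("identification accuracy:", clf.score(X_te, y_te))
```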

Keywords: authentication, feature extraction, hand recognition, security, signal processing

Procedia PDF Downloads 86