Search results for: point estimation
6550 Blind Channel Estimation for Frequency Hopping System Using Subspace Based Method
Authors: M. M. Qasaymeh, M. A. Khodeir
Abstract:
Subspace channel estimation methods have been studied widely. They depend on a subspace decomposition of the covariance matrix to separate the signal subspace from the noise subspace. The decomposition is normally done by either an Eigenvalue Decomposition (EVD) or a Singular Value Decomposition (SVD) of the Auto-Correlation Matrix (ACM). However, the subspace decomposition process is computationally expensive. In this paper, the multipath channel estimation problem for a Slow Frequency Hopping (SFH) system is considered using a noise-space-based method. An efficient method to estimate the multipath time delays is proposed by applying the MUltiple SIgnal Classification (MUSIC) algorithm to the null space extracted by the Rank-Revealing LU factorization (RRLU). The RRLU provides accurate information about the rank and the numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed method decreases the computational complexity to approximately half that of RRQR-based methods while keeping the same performance. Computer simulations are included to demonstrate the effectiveness of the proposed scheme.
Keywords: frequency hopping, channel model, time delay estimation, RRLU, RRQR, MUSIC, LS-ESPRIT
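To make the null-space route concrete, here is a minimal Python sketch: it reads the numerical rank off an LU factorization of the autocorrelation matrix, builds a noise-subspace basis from the factor U, and scans a MUSIC pseudo-spectrum over candidate delays. The partial-pivoting LU, the rank tolerance, and the frequency-domain steering vector are simplifying assumptions, not the authors' exact RRLU procedure.

```python
import numpy as np
from scipy.linalg import lu

def rrlu_noise_subspace(R, tol=1e-8):
    # R = P @ L @ U with P, L invertible, so null(R) == null(U); the
    # numerical rank is read off the diagonal of U (assumes the trailing
    # block of U is negligible, which pivoting is meant to encourage)
    _, _, U = lu(R)
    d = np.abs(np.diag(U))
    r = int(np.sum(d > tol * d.max()))
    n = R.shape[0]
    X = np.linalg.solve(U[:r, :r], -U[:r, r:])   # U[:r,:r] X = -U[:r,r:]
    G = np.vstack([X, np.eye(n - r)])            # columns span null(U)
    return G / np.linalg.norm(G, axis=0)

def music_delay_spectrum(G, freqs, delays):
    # MUSIC pseudo-spectrum over candidate delays with the frequency-
    # domain steering vector a(tau) = exp(-j*2*pi*f*tau)
    spec = np.empty(len(delays))
    for k, tau in enumerate(delays):
        a = np.exp(-2j * np.pi * freqs * tau)
        proj = G.conj().T @ a
        spec[k] = 1.0 / np.real(np.vdot(proj, proj))
    return spec                                   # peaks at the delays
```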
Procedia PDF Downloads 410
6549 Generalization of Zhou Fixed Point Theorem
Authors: Yu Lu
Abstract:
Fixed point theory is a basic tool for the study of the existence of Nash equilibria in game theory. This paper presents a significant generalization of the Veinott-Zhou fixed point theorem for increasing correspondences, which serves as an essential framework for investigating the existence of Nash equilibria in supermodular and quasisupermodular games. To establish our proofs, we explore different conceptions of multivalued increasingness and provide comprehensive results concerning the existence of the largest/least fixed point. We provide two distinct approaches to the proof, each offering unique insights and advantages. These advancements not only extend the applicability of the Veinott-Zhou theorem to a broader range of economic scenarios but also enhance the theoretical framework for analyzing equilibrium behavior in complex game-theoretic models. Our findings pave the way for future research in the development of more sophisticated models of economic behavior and strategic interaction.
Keywords: fixed-point, Tarski’s fixed-point theorem, Nash equilibrium, supermodular game
Procedia PDF Downloads 54
6548 Estimation and Forecasting with a Quantile AR Model for Financial Returns
Authors: Yuzhi Cai
Abstract:
This talk presents a Bayesian approach to quantile autoregressive (QAR) time series model estimation and forecasting. We establish that the joint posterior distribution of the model parameters and future values is well defined. The associated MCMC algorithm for parameter estimation and forecasting converges to the posterior distribution quickly. We also present a forecast-combining technique that produces more accurate out-of-sample forecasts by using a weighted sequence of fitted QAR models. A moving window method to check the quality of the estimated conditional quantiles is developed. We verify our methodology using simulation studies and then apply it to currency exchange rate data; an application of the method to the USD/GBP daily exchange rates will also be discussed. The results obtained show that an unequally weighted combining method performs better than other forecasting methodologies.
Keywords: combining forecasts, MCMC, quantile modelling, quantile forecasting, predictive density functions
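For intuition about the estimation step, Bayesian quantile regression is commonly driven by an asymmetric-Laplace working likelihood, whose kernel is the pinball (check) loss. The sketch below scores a quantile AR(p) model that way and wraps it in a few random-walk Metropolis iterations; the flat prior, unit scale and proposal step are illustrative assumptions rather than the specification used in the talk.

```python
import numpy as np

def qar_loglik(theta, y, tau, p=1):
    # Asymmetric-Laplace working log-likelihood for a quantile AR(p):
    # residuals scored by the pinball (check) loss at quantile tau
    phi0, phi = theta[0], theta[1:1 + p]
    u = y[p:] - phi0 - sum(phi[i - 1] * y[p - i:len(y) - i]
                           for i in range(1, p + 1))
    return -np.sum(u * (tau - (u < 0)))

def metropolis(y, tau, p=1, n_iter=5000, step=0.05, seed=0):
    # Random-walk Metropolis over (phi0, phi_1..phi_p) with a flat prior
    rng = np.random.default_rng(seed)
    theta = np.zeros(p + 1)
    ll = qar_loglik(theta, y, tau, p)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(p + 1)
        ll_prop = qar_loglik(prop, y, tau, p)
        if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject
            theta, ll = prop, ll_prop
        draws.append(theta.copy())
    return np.array(draws)
```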
Procedia PDF Downloads 347
6547 Estimation of OPC, Fly Ash and Slag Contents in Blended and Composite Cements by Selective Dissolution Method
Authors: Suresh Palla
Abstract:
This paper presents the results of a study on the estimation of fly ash, slag and cement contents in blended and composite cements by a novel selective dissolution method. The cement samples investigated include OPC with fly ash as a performance improver, OPC with slag as a performance improver, PPC, PSC and composite cement conforming to the respective Indian Standards. Slag and OPC contents in PSC were estimated by selectively dissolving OPC in stage 1 and selectively dissolving slag in stage 2. In the case of the composite cement sample, the percentages of cement, slag and fly ash were estimated systematically by selective dissolution of cement, slag and fly ash in three stages. In the first stage, cement is dissolved and separated, leaving a residue of slag and fly ash designated R1. The second stage involves gravimetric estimation of the OPC fraction and the residue, together with selective dissolution of the fly ash and slag contents; the fly ash content, R2, is estimated through gravimetric analysis. Thereafter, the difference between R1 and R2 is taken as the slag content. The cement, fly ash and slag contents obtained by the selective dissolution method agreed with the corresponding percentages of the respective constituents within a 10% standard deviation. The results suggest that this novel selective dissolution method can be successfully used for the estimation of OPC and SCM contents in different types of cement.
Keywords: selective dissolution method, fly ash, GGBFS slag, EDTA
Procedia PDF Downloads 156
6546 Polynomially Adjusted Bivariate Density Estimates Based on the Saddlepoint Approximation
Authors: S. B. Provost, Susan Sheng
Abstract:
An alternative bivariate density estimation methodology is introduced in this presentation. The proposed approach involves estimating the density function associated with the marginal distribution of each of the two variables by means of the saddlepoint approximation technique and applying a bivariate polynomial adjustment to the product of these density estimates. Since the saddlepoint approximation is utilized in the context of density estimation, such estimates are determined from empirical cumulant-generating functions. In the univariate case, the saddlepoint density estimate is itself adjusted by a polynomial. Given a set of observations, the coefficients of the polynomial adjustments are obtained from the sample moments. Several illustrative applications of the proposed methodology shall be presented. Since this approach relies essentially on a limited number of sample moments, it is particularly well suited for modeling massive data sets.
Keywords: density estimation, empirical cumulant-generating function, moments, saddlepoint approximation
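The univariate building block can be written out directly: form the empirical cumulant-generating function from the data, solve the saddlepoint equation K′(s) = x, and evaluate the standard saddlepoint density formula. The sketch below does this with finite-difference derivatives; the fixed root bracket (suitable for roughly standardized data) is an assumption, and the polynomial adjustment stage is omitted.

```python
import numpy as np
from scipy.optimize import brentq

def saddlepoint_density(data, xs, bracket=(-20.0, 20.0)):
    # Empirical CGF K(t) = log mean(exp(t*X)); derivatives by central
    # finite differences (adequate for a sketch, not production use)
    def K(t):
        return np.log(np.mean(np.exp(t * data)))
    def K1(t, h=1e-5):
        return (K(t + h) - K(t - h)) / (2 * h)
    def K2(t, h=1e-4):
        return (K(t + h) - 2 * K(t) + K(t - h)) / h ** 2
    out = []
    for x in xs:                       # x must lie inside the data range
        s = brentq(lambda t: K1(t) - x, *bracket)      # solve K'(s) = x
        out.append(np.exp(K(s) - s * x) / np.sqrt(2 * np.pi * K2(s)))
    return np.array(out)
```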
Procedia PDF Downloads 280
6545 Motion Estimator Architecture with Optimized Number of Processing Elements for High Efficiency Video Coding
Authors: Seongsoo Lee
Abstract:
Motion estimation accounts for the heaviest computation in HEVC (high efficiency video coding). Many fast algorithms, such as TZS (test zone search), have been proposed to reduce the computation. Still, the huge computational load of motion estimation is a critical issue in the implementation of an HEVC video codec. In this paper, a motion estimator architecture with an optimized number of PEs (processing elements) is presented that exploits early termination. It also reduces hardware size by exploiting parallel processing. The presented motion estimator architecture has 8 PEs and can efficiently perform TZS with very high PE utilization.
Keywords: motion estimation, test zone search, high efficiency video coding, processing element, optimization
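The saving from early termination is easy to see in software form: a candidate block's SAD accumulation can be aborted as soon as the running sum exceeds the best cost found so far, which is what allows a small fixed pool of PEs to stay highly utilized. A minimal sketch of that test (the block layout and integer pixels are assumptions):

```python
import numpy as np

def sad_with_early_termination(cur_blk, ref_blk, best_cost):
    # Accumulate SAD row by row; abort once the running total already
    # exceeds the best candidate cost, since it can no longer win
    total = 0
    for row_c, row_r in zip(cur_blk.astype(int), ref_blk.astype(int)):
        total += int(np.abs(row_c - row_r).sum())
        if total >= best_cost:
            return None          # pruned: this candidate loses
    return total                 # new best SAD for this candidate
```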
Procedia PDF Downloads 363
6544 Human Posture Estimation Based on Multiple Viewpoints
Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo
Abstract:
This study aimed to address the problem of improving the confidence of key points by fusing multi-view information, thereby estimating human posture more accurately. We first obtained multi-view image information and then used the MvP algorithm to fuse it into a set of high-confidence human key points. These were used as the input for the Spatio-Temporal Graph Convolutional Network (ST-GCN), a deep learning model for processing spatio-temporal data that can effectively capture spatio-temporal relationships in video sequences. By using the MvP algorithm to fuse multi-view information and feeding it into the spatio-temporal graph convolution model, this study provides an effective method for improving the accuracy of human posture estimation and strong support for further research and application in related fields.
Keywords: multi-view, pose estimation, ST-GCN, joint fusion
Procedia PDF Downloads 70
6543 Low Complexity Carrier Frequency Offset Estimation for Cooperative Orthogonal Frequency Division Multiplexing Communication Systems without Cyclic Prefix
Authors: Tsui-Tsai Lin
Abstract:
Cooperative orthogonal frequency division multiplexing (OFDM) transmission, which possesses the advantages of better connectivity, expanded coverage, and resistance to frequency-selective fading, has become a powerful solution for the physical layer in wireless communications. However, such a hybrid scheme suffers from the carrier frequency offset (CFO) effects inherited from OFDM-based systems, which lead to a significant degradation in performance. In addition, inserting a cyclic prefix (CP) at the head of each symbol block to combat inter-symbol interference reduces spectral efficiency. The design of CFO estimation for cooperative OFDM systems without a CP remains an open problem. This motivates us to develop a low-complexity CFO estimator for the cooperative OFDM decode-and-forward (DF) communication system without a CP over multipath fading channels. Specifically, using a block-type pilot, the CFO estimate is first derived according to the least-squares criterion. Reliable performance can be obtained through an exhaustive two-dimensional (2D) search, at the penalty of heavy computational complexity. As a remedy, an alternative solution realized with an iterative approach is proposed for the CFO estimation. In contrast to the 2D-search estimator, the iterative method enjoys substantially reduced implementation complexity without sacrificing estimation performance. Computer simulations are presented to demonstrate the efficacy of the proposed CFO estimation.
Keywords: cooperative transmission, orthogonal frequency division multiplexing (OFDM), carrier frequency offset, iteration
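To make the least-squares criterion concrete, the sketch below scans a one-dimensional grid of candidate CFOs, de-rotates the received block-type pilot, fits a least-squares gain, and keeps the offset with the smallest residual. The single-tap channel and the grid search are simplifying assumptions; the paper's 2D search and its iterative replacement are richer than this.

```python
import numpy as np

def cfo_ls_grid(r, pilot, grid):
    # r: received pilot block; pilot: known transmitted block (complex)
    n = np.arange(len(r))
    best, best_cost = None, np.inf
    for eps in grid:                                     # normalized CFO
        d = r * np.exp(-2j * np.pi * eps * n / len(r))   # de-rotate
        h = (pilot.conj() @ d) / (pilot.conj() @ pilot)  # LS channel gain
        cost = np.sum(np.abs(d - h * pilot) ** 2)        # LS residual
        if cost < best_cost:
            best, best_cost = eps, cost
    return best
```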
Procedia PDF Downloads 265
6542 Parameter Estimation of False Dynamic EIV Model with Additive Uncertainty
Authors: Dalvinder Kaur Mangal
Abstract:
For the past decade, noise-corrupted output measurements have been a fundamental research problem. On the other hand, the estimation of the parameters of linear dynamic systems when the input is also affected by noise is recognized as a more difficult problem that has only recently received increasing attention. Representations where errors or measurement noises/disturbances are present on both the inputs and outputs are usually called errors-in-variables (EIV) models. These disturbances may also have additive effects, which are also considered in this paper. Parameter estimation of the false EIV problem using equation error, output error and iterative prefiltering identification schemes, with and without additive uncertainty, when only the output observation is corrupted by noise, is dealt with in this paper. A comparative study of these three schemes has also been carried out.
Keywords: errors-in-variables (EIV), false EIV, equation error, output error, iterative prefiltering, Gaussian noise
Procedia PDF Downloads 491
6541 System Identification in Presence of Outliers
Authors: Chao Yu, Qing-Guo Wang, Dan Zhang
Abstract:
The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem with low-rank and sparse matrices, and further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented that solves the resulting problem while keeping the solution matrix structure, and it can greatly reduce the computational cost over the standard interior-point method. The computational burden is further reduced by proper construction of subsets of the raw data without violating the low-rank property of the involved matrix. The proposed method can detect outliers exactly when there is no or little noise in the output observations. In the case of significant noise, a novel approach based on under-sampling with averaging is developed to denoise while retaining the saliency of outliers; the so-filtered data enable successful outlier detection with the proposed method where existing filtering methods fail. Using the recovered "clean" data from the proposed method can give much better parameter estimation than using the raw data.
Keywords: outlier detection, system identification, matrix decomposition, low-rank matrix, sparsity, semidefinite programming, interior-point methods, denoising
Procedia PDF Downloads 307
6540 Particle Filter State Estimation Algorithm Based on Improved Artificial Bee Colony Algorithm
Authors: Guangyuan Zhao, Nan Huang, Xuesong Han, Xu Huang
Abstract:
In order to solve the sample impoverishment problem of the traditional particle filter algorithm and achieve accurate state estimation in nonlinear systems, a particle filter method based on an improved artificial bee colony (ABC) algorithm is proposed. The algorithm simulates the bees' foraging and optimization process and moves particles toward the high-likelihood region of the posterior probability, improving the rationality of the particle distribution. An opposition-based learning (OBL) strategy is introduced to optimize the initial population of the artificial bee colony algorithm. A convergence factor is introduced into the neighborhood search strategy to limit the search range and improve the convergence speed. Finally, the crossover and mutation operations of the genetic algorithm are introduced into the search mechanism of the follower bees, which makes the algorithm jump out of local extrema quickly and continue searching for the global extremum, improving its optimization ability. The simulation results show that the improved method can improve the estimation accuracy of particle filters, ensure the diversity of particles, and improve the rationality of the particle distribution.
Keywords: particle filter, impoverishment, state estimation, artificial bee colony algorithm
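For reference, the baseline being improved is the bootstrap particle filter: propagate particles through the state model, weight them by the measurement likelihood, and resample. Below is a minimal sketch with a scalar state and Gaussian noises (illustrative assumptions); resampling at every step is precisely what drives the impoverishment that the ABC moves are designed to counter.

```python
import numpy as np

def bootstrap_pf(y, f, h, q_std, r_std, n_particles=500, seed=0):
    # Plain bootstrap particle filter for x_k = f(x_{k-1}) + q,
    # y_k = h(x_k) + r, with scalar state and Gaussian noises
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)               # initial cloud
    estimates = []
    for yk in y:
        x = f(x) + rng.normal(0.0, q_std, n_particles)  # propagate
        w = np.exp(-0.5 * ((yk - h(x)) / r_std) ** 2)   # likelihood
        w /= w.sum()
        estimates.append(float(np.sum(w * x)))          # posterior mean
        x = x[rng.choice(n_particles, n_particles, p=w)]  # resample
    return np.array(estimates)
```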
Procedia PDF Downloads 151
6539 A Systematic Review on Development of a Cost Estimation Framework: A Case Study of Nigeria
Authors: Babatunde Dosumu, Obuks Ejohwomu, Akilu Yunusa-Kaltungo
Abstract:
Cost estimation in construction is often difficult, particularly when dealing with risks and uncertainties, which are inevitable and peculiar to developing countries like Nigeria. The direct consequences are major deviations in cost, duration, and quality. The fundamental aim of this study is to develop a framework for assessing the impacts of risk on cost estimation, which in turn causes variability between contract sum and final account. This is very important, as initial estimates given to clients should reflect a certain magnitude of consistency and accuracy, upon which the client builds other planning-related activities; it also enhances the capabilities of construction industry professionals by enabling better prediction of the final account from the contract sum. To achieve this, a systematic literature review was conducted with cost variability and construction projects as search strings within three databases: Scopus, Web of Science, and EBSCO (Business Source Premier); the results were further analyzed and gaps in knowledge or research identified. From the extensive review, it was found that the factors causing deviation between final account and contract sum ranged between 1 and 45. Besides, it was discovered that a cost estimation framework similar to the Building Cost Information Service (BCIS) is unavailable in Nigeria, which is a major reason why initial estimates are very often inconsistent, leading to project delay, abandonment, or determination at the expense of the huge sums of money invested. It was concluded that the development of a cost estimation framework, adjudged an important tool in risk shedding rather than risk sharing in project risk management, would be a panacea for the cost estimation problems leading to cost variability in the Nigerian construction industry by the time this ongoing Ph.D. research is completed. It was recommended that practitioners in the construction industry should always take risk into account in order to facilitate the rapid development of the construction industry in Nigeria, which should give stakeholders in both the private and public sectors a more in-depth understanding of the estimation effectiveness and efficiency to be adopted.
Keywords: cost variability, construction projects, future studies, Nigeria
Procedia PDF Downloads 209
6538 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137
Authors: Abdulsalam M. Alhawsawi
Abstract:
Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit photons at a variety of energies, from low to high, along the energy spectrum. Some photon energies are hard to produce in lab settings, either because check sources are hard to obtain or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium (HPGe) detector based on the 662 keV gamma-ray photon emitted by Cs-137, which is readily available in most labs with radiation detection and health physics applications and has a long half-life of ~30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency for the 662 keV photon was used as a base to calculate other photon efficiencies for point-source and Marinelli beaker geometries. For a Marinelli beaker filled with water, the efficiency for the 59 keV low-energy photons from Am-241 was estimated with a 9% error compared to the MCNP5-simulated efficiency. The 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable for calculating the activity of unknown samples, as they fall within the 95% confidence level.
Keywords: MCNP5, Monte Carlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137
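The bookkeeping behind an efficiency point is simple: the absolute full-energy-peak efficiency is the ratio of net counts in the peak to the photons emitted by the source during the live time. A short sketch with hypothetical count data (the ~85.1% emission probability of the 662 keV line is a standard nuclear-data value):

```python
def absolute_efficiency(net_counts, live_time_s, activity_bq, emission_prob):
    # counts registered in the full-energy peak / photons emitted
    emitted = activity_bq * emission_prob * live_time_s
    return net_counts / emitted

# Hypothetical Cs-137 measurement: 37 kBq source counted for 600 s
eff_662 = absolute_efficiency(net_counts=120_000, live_time_s=600,
                              activity_bq=37_000, emission_prob=0.851)
print(f"efficiency at 662 keV: {eff_662:.4f}")
```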
Procedia PDF Downloads 116
6537 A Quantification Method of Attractiveness of Stations and an Estimation Method of Number of Passengers Taking into Consideration the Attractiveness of the Station
Authors: Naoya Ozaki, Takuya Watanabe, Ryosuke Matsumoto, Noriko Fukasawa
Abstract:
In the metropolitan areas of Japan, shopping areas are set up in many stations, and escalators and elevators are installed to make the stations barrier-free. Further, many areas around stations are being redeveloped. Railway business operators want to know how much effect these circumstances have on the attractiveness of a station or the number of passengers using it. So, we performed a questionnaire survey of station users in the metropolitan areas to find factors that affect the attractiveness of stations. Then, based on the analysis of the survey, we developed a method to quantitatively evaluate the attractiveness of stations. We also developed a method for estimating the number of passengers based on a combination of the quantitatively evaluated attractiveness of the station and the residential and labor population around it. Then, we derived precise linear regression models estimating the attractiveness of the station and the number of its passengers.
Keywords: attractiveness of the station, estimation method, number of passengers of the station, redevelopment around the station, renovation of the station
Procedia PDF Downloads 287
6536 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models
Authors: I. V. Pinto, M. R. Sooriyarachchi
Abstract:
It can frequently be observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset. Therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted to a real-life dataset using the four methods, and the performance of the test was compared for each model.
Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error
Procedia PDF Downloads 142
6535 Home Legacy Device Output Estimation Using Temperature and Humidity Information by Adaptive Neural Fuzzy Inference System
Authors: Sung Hyun Yoo, In Hwan Choi, Jun Ho Jung, Choon Ki Ahn, Myo Taeg Lim
Abstract:
The home energy management system (HEMS) has been proposed to reduce power consumption. A HEMS performs electric power control for indoor electric devices; however, it commonly handles only smart devices. In this paper, we suggest output estimation for home legacy devices using the adaptive neural fuzzy inference system (ANFIS). This paper discusses the overview and the architecture of the system. In addition, the accurate performance of the output estimation using ANFIS is shown via a numerical example.
Keywords: adaptive neural fuzzy inference system (ANFIS), home energy management system (HEMS), smart device, legacy device
Procedia PDF Downloads 543
6534 Satellite Derived Evapotranspiration and Turbulent Heat Fluxes Using Surface Energy Balance System (SEBS)
Authors: Muhammad Tayyab Afzal, Muhammad Arslan, Mirza Muhammad Waqar
Abstract:
One of the key components of the water cycle is evapotranspiration (ET), which represents water consumption by vegetated and non-vegetated surfaces. Conventional techniques for measuring ET are point-based and representative of the local scale only. Satellite remote sensing data, with large area coverage and high temporal frequency, provide representative measurements of several relevant biophysical parameters required for the estimation of ET at regional scales. The objective of this research is to exploit satellite data in order to estimate evapotranspiration. This study uses the Surface Energy Balance System (SEBS) model to calculate daily actual evapotranspiration (ETa) in Larkana District, Sindh, Pakistan, using Landsat TM data for cloud-free days. As there is no flux tower in the study area for direct measurement of latent heat flux, sensible heat flux or evapotranspiration, the model-estimated values of ET were compared with reference evapotranspiration (ETo) computed by the FAO-56 Penman-Monteith method using meteorological data. For a country like Pakistan, irrigated agriculture in the river basins is the largest user of fresh water. For better assessment and management of irrigation water requirements, estimating the consumptive use of water for agriculture is very important, because agriculture is the main consumer of water. ET remains an essential issue in the water balance, as it accounts for a major loss of irrigation water and precipitation on cropland, so its accurate estimation can be helpful for efficient management of irrigation water. The results of this study can be used to analyse surface conditions, i.e., temperature, energy budgets and related characteristics. With this information, we can monitor vegetation health and suitable agricultural conditions and take controlling steps to increase agricultural production.
Keywords: SEBS, remote sensing, evapotranspiration, ETa
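Since the validation benchmark is the FAO-56 Penman-Monteith equation, it is worth writing out. The sketch below implements the standard daily form with inputs in FAO-56 units; the default psychrometric constant of 0.066 kPa/°C assumes roughly sea-level pressure.

```python
import numpy as np

def eto_fao56(Rn, G, T, u2, es, ea, gamma=0.066):
    # FAO-56 Penman-Monteith reference ET (mm/day).
    # Rn, G: net radiation / soil heat flux [MJ m-2 day-1]; T: mean air
    # temperature [deg C]; u2: wind speed at 2 m [m/s]; es, ea:
    # saturation and actual vapour pressure [kPa]
    delta = 4098.0 * (0.6108 * np.exp(17.27 * T / (T + 237.3))) / (T + 237.3) ** 2
    num = 0.408 * delta * (Rn - G) + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))
```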
Procedia PDF Downloads 333
6533 A Bayesian Multivariate Microeconometric Model for Estimation of Price Elasticity of Demand
Authors: Jefferson Hernandez, Juan Padilla
Abstract:
Estimation of the price elasticity of demand is a valuable tool for the task of price setting. Given its relevance, it is an active field of microeconomic and statistical research. Price elasticity in the oil and gas industry, in particular for fuels sold at gas stations, has proven to be a challenging topic, given market and state restrictions and the underlying correlation structures between the types of fuels sold by the same gas station. This paper explores the Lotka-Volterra model for the problem of price elasticity estimation in the context of fuels; in addition, multivariate random effects are introduced to deal with errors, e.g., measurement or missing-data errors. In order to model the underlying correlation structures, the Inverse-Wishart, Hierarchical Half-t and LKJ distributions are studied. The Bayesian paradigm, through Markov Chain Monte Carlo (MCMC) algorithms, is considered for model estimation. Simulation studies covering a wide range of situations were performed in order to evaluate parameter recovery for the proposed models and algorithms. The results revealed that the proposed algorithms recovered all model parameters quite well. A real data set analysis was also performed to illustrate the proposed approach.
Keywords: price elasticity, volume, correlation structures, Bayesian models
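As a point of reference for the quantity being estimated, a constant-elasticity baseline can be read off a log-log regression: the slope of log volume on log price. The sketch below is that simple frequentist baseline, not the Bayesian Lotka-Volterra model with random effects proposed here.

```python
import numpy as np

def log_log_elasticity(prices, volumes):
    # OLS of log(volume) on log(price); the slope is the (constant)
    # price elasticity of demand in this simple specification
    X = np.column_stack([np.ones(len(prices)), np.log(prices)])
    beta, *_ = np.linalg.lstsq(X, np.log(volumes), rcond=None)
    return beta[1]
```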
Procedia PDF Downloads 165
6532 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model
Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You
Abstract:
The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications in teleconferencing, hearing aids, machine speech recognition and so on. The sounds received are usually noisy. The issue of identifying the sounds of interest and obtaining clear sounds in such an environment is a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the under-determined blind source separation problem. The method is mainly divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model is proposed. The traditional potential-function algorithm is not accurate in estimating the mixing matrix, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper considers an improved potential-function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the approach in this paper not only improves the accuracy of estimation but also applies to any mixing matrix.
Keywords: DBSCAN, potential function, speech signal, the UBSS model
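The clustering stage can be pictured with a plain baseline: normalize the two-channel observations to unit directions, fold antipodal points together, and cluster by cosine similarity, so that the cluster centers estimate the columns of the mixing matrix. This k-means-style sketch assumes source sparsity and stands in for, rather than reproduces, the improved potential-function method.

```python
import numpy as np

def estimate_mixing_matrix(X, n_sources=4, n_iter=50, seed=0):
    # X: 2 x N observed mixture; under sparsity, samples concentrate
    # along the directions of the mixing-matrix columns
    rng = np.random.default_rng(seed)
    V = X / np.maximum(np.linalg.norm(X, axis=0, keepdims=True), 1e-12)
    V[:, V[0] < 0] *= -1                     # fold antipodal directions
    C = V[:, rng.choice(V.shape[1], n_sources, replace=False)]
    for _ in range(n_iter):
        labels = np.argmax(np.abs(C.T @ V), axis=0)  # cosine assignment
        for k in range(n_sources):
            if np.any(labels == k):
                m = V[:, labels == k].mean(axis=1)
                C[:, k] = m / np.linalg.norm(m)
    return C         # unit-norm estimates of the mixing-matrix columns
```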
Procedia PDF Downloads 135
6531 The Influence of Air Temperature Controls in Estimation of Air Temperature over Homogeneous Terrain
Authors: Fariza Yunus, Jasmee Jaafar, Zamalia Mahmud, Nurul Nisa’ Khairul Azmi, Nursalleh K. Chang
Abstract:
Variation of air temperature from one place to another is caused by air temperature controls. In general, the most important control of air temperature is elevation. Another significant independent variable in estimating air temperature is the location of meteorological stations. Distance to the coastline and land use type also contribute significant variations in air temperature. On the other hand, over homogeneous terrain, direct interpolation of discrete air temperature points works well for estimating air temperature values in un-sampled areas, the estimation being based solely on the discrete points of air temperature. However, this study shows that air temperature controls also play significant roles in estimating air temperature over the homogeneous terrain of Peninsular Malaysia. An Inverse Distance Weighting (IDW) interpolation technique was adopted to generate continuous air temperature data. This study compared two different datasets: observed mean monthly data of T, and the estimation error T–T′, where T′ is the value estimated by a multiple regression model. The multiple regression model considered eight independent variables (elevation, latitude, longitude, distance to coastline, and four land use types: water bodies, forest, agriculture and built-up areas) to represent the role of air temperature controls. Cross-validation analysis was conducted to review the accuracy of the estimated values. The final results show that estimation via T–T′ produced lower errors for mean monthly air temperature over homogeneous terrain in Peninsular Malaysia.
Keywords: air temperature control, interpolation analysis, Peninsular Malaysia, regression model, air temperature
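The interpolation step itself is compact. A minimal IDW sketch, assuming planar station coordinates and the conventional power parameter of 2:

```python
import numpy as np

def idw(station_xy, station_t, query_xy, power=2.0):
    # Inverse Distance Weighting: a weighted mean of station values
    # with weights proportional to 1 / distance**power
    d = np.linalg.norm(query_xy[:, None, :] - station_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power     # guard exact station hits
    return (w @ station_t) / w.sum(axis=1)
```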
Procedia PDF Downloads 374
6530 Estimation of Rare and Clustered Population Mean Using Two Auxiliary Variables in Adaptive Cluster Sampling
Authors: Muhammad Nouman Qureshi, Muhammad Hanif
Abstract:
Adaptive cluster sampling (ACS) was specifically developed for the estimation of highly clumped populations and is applied to a wide range of situations, such as animals of rare and endangered species, unevenly distributed minerals, HIV patients and drug users. In this paper, we propose a generalized semi-exponential estimator with two auxiliary variables under the framework of the ACS design. Expressions for the approximate bias and mean square error (MSE) of the proposed estimator are derived. Theoretical comparisons of the proposed estimator with existing estimators have been made. A numerical study is conducted on real and artificial populations to demonstrate and compare the efficiencies of the proposed estimator. The results indicate that the proposed generalized semi-exponential estimator performs considerably better than all the adaptive and non-adaptive estimators considered in this paper.
Keywords: auxiliary information, adaptive cluster sampling, clustered populations, Hansen-Hurwitz estimation
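For orientation, the classical benchmark in ACS is the modified Hansen-Hurwitz estimator: average the per-network means of the initially drawn units. A minimal sketch is given below (sampling with replacement is assumed and the finite-population correction is omitted); the proposed semi-exponential estimator additionally folds in two auxiliary variables, which is beyond this baseline.

```python
import numpy as np

def modified_hansen_hurwitz(networks):
    # networks: list of arrays, each holding the y-values of the network
    # grown adaptively around one initially sampled unit
    w = np.array([np.mean(net) for net in networks])  # per-network means
    est = w.mean()
    var = w.var(ddof=1) / len(w)    # usual estimator of its variance
    return est, var
```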
Procedia PDF Downloads 238
6529 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization
Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu
Abstract:
This paper investigates a method of indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching techniques or other geometric algorithms for sensor pose estimation inevitably results in cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize the feature-matching-based camera pose algorithm, which improves the accuracy of camera pose estimation. A special processing method is applied to image data frames that conform to the Manhattan world assumption; when similar data frames appear subsequently, this is used to eliminate drift in sensor pose estimation, thereby reducing cumulative errors and optimizing mapping and positioning. Experimental verification shows that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of them. This provides effective technical support for applications such as indoor navigation and robot control.
Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loopback detection
Procedia PDF Downloads 61
6528 Effects of Different Meteorological Variables on Reference Evapotranspiration Modeling: Application of Principal Component Analysis
Authors: Akinola Ikudayisi, Josiah Adeyemo
Abstract:
The correct estimation of reference evapotranspiration (ETₒ) is required for effective irrigation water resources planning and management. However, several variables must be considered when estimating and modeling ETₒ. This study therefore presents a multivariate analysis of the correlated variables involved in the estimation and modeling of ETₒ at the Vaalharts irrigation scheme (VIS) in South Africa using the Principal Component Analysis (PCA) technique. Weather and meteorological data between 1994 and 2014 were obtained from both the South African Weather Service (SAWS) and the Agricultural Research Council (ARC) in South Africa for this study. Average monthly data of minimum and maximum temperature (°C), rainfall (mm), relative humidity (%), and wind speed (m/s) were the inputs to the PCA-based model, while ETₒ is the output. The PCA technique was adopted to extract the most important information from the dataset and to analyze the relationship between the five variables and ETₒ, in order to determine the most significant variables affecting ETₒ estimation at VIS. From the model performances, two principal components explaining 82.7% of the variance were retained after the eigenvector extraction. The results of the two principal components were compared, and the model output shows that minimum temperature, maximum temperature and wind speed are the most important variables in ETₒ estimation and modeling at VIS. In other words, ETₒ increases with temperature and wind speed. Other variables, such as rainfall and relative humidity, are less important and cannot provide enough information about ETₒ estimation at VIS. The outcome of this study has helped to reduce the input variable dimensionality from five to the three most significant variables in ETₒ modelling at VIS, South Africa.
Keywords: irrigation, principal component analysis, reference evapotranspiration, Vaalharts
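The dimensionality-reduction step follows the textbook recipe: eigendecompose the correlation matrix of the standardized variables and keep the leading components by explained variance. A minimal sketch, assuming rows are months and columns are the five meteorological variables:

```python
import numpy as np

def pca_explained(X):
    # X: (n_months, n_variables); PCA on the correlation matrix,
    # i.e. on standardized variables
    eigval, eigvec = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]
    return eigval / eigval.sum(), eigvec   # explained ratios, loadings
```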
Procedia PDF Downloads 258
6527 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data
Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu
Abstract:
Waste reduction is a fundamental problem for sustainability. Methods for waste reduction using point-of-sales (POS) data are proposed, utilizing the knowledge of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed that accounts for the abnormal fluctuation scaling known as Taylor's law. This method is extended to handle sales data left incomplete by stock-outs, by introducing maximum likelihood estimation for censored data. A method for optimal stock determination that prices the cost of waste reduction is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is obvious. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around 1% profit loss realizes half the disposal at a proportionality constant of 0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction that keep a high profit, especially with large sales numbers.
Keywords: food waste reduction, particle filter, point-of-sales, sustainable development goals, Taylor's law, time series analysis
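One way to see how Taylor's law enters the stock decision is a newsvendor-style calculation: if demand variance is taken as Var = μ + (bμ)², a Poisson part plus a systematic part proportional to the mean (an assumed reading of the fluctuation scaling), the cost-minimizing stock is a critical-fractile quantile. A sketch under a normal approximation to demand, with invented costs:

```python
import numpy as np
from scipy.stats import norm

def optimal_stock(mu, b, unit_waste_cost, unit_shortage_cost):
    # Newsvendor critical fractile with Taylor's-law variance
    # Var(D) = mu + (b*mu)**2, demand approximated as normal
    sigma = np.sqrt(mu + (b * mu) ** 2)
    q = unit_shortage_cost / (unit_shortage_cost + unit_waste_cost)
    return mu + sigma * norm.ppf(q)

# e.g. b = 0.12, the proportionality constant reported for processed food
print(optimal_stock(mu=200, b=0.12, unit_waste_cost=1.0, unit_shortage_cost=3.0))
```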
Procedia PDF Downloads 131
6526 Axisymmetric Nonlinear Analysis of Point Supported Shallow Spherical Shells
Authors: M. Altekin, R. F. Yükseler
Abstract:
Geometrically nonlinear axisymmetric bending of a shallow spherical shell with a point support at the apex under a linearly varying axisymmetric load was investigated numerically. The edge of the shell was assumed to be simply supported or clamped. The solution was obtained by the finite difference and Newton-Raphson methods. The thickness of the shell was considered uniform, and the material was assumed to be homogeneous and isotropic. A sensitivity analysis was performed for two geometrical parameters. The accuracy of the algorithm was checked by comparing the deflection with the solution for point-supported circular plates, and good agreement was obtained.
Keywords: bending, nonlinear, plate, point support, shell
Procedia PDF Downloads 264
6525 Changes of Acute-phase Reactants in Systemic Sclerosis During Long-term Rituximab Therapy
Authors: Liudmila Garzanova, Lidia Ananyeva, Olga Koneva, Olga Ovsyannikova, Oxana Desinova, Mayya Starovoytova, Rushana Shayahmetova, Anna Khelkovskaya-Sergeeva
Abstract:
Objectives. C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) are associated with a severe course and increased morbidity and mortality in systemic sclerosis (SSc). The aim of our study was to assess changes in CRP and ESR in SSc patients during long-term rituximab (RTX) therapy. Methods. This study included 113 patients with SSc. The mean age was 48.1±13 years, and 85% were female. The mean disease duration was 6±5 years. The diffuse cutaneous subset of the disease was present in 55% of patients. All patients had interstitial lung disease (ILD). All patients received prednisolone at a mean dose of 11.6±4.8 mg/day, and 53 of them were receiving immunosuppressants at inclusion. Patients received RTX due to the ineffectiveness of previous therapy for ILD. The parameters were evaluated at baseline (point 0) and at 13±2.3 months (point 1, n=113), 42±14 months (point 2, n=80) and 79±6.5 months (point 3, n=25) after the initiation of RTX therapy. The cumulative mean dose of RTX was 1.7±0.6 g at point 1, 3±1.5 g at point 2, and 3.8±2.4 g at point 3. The results are presented as mean values; Δ is the difference between the baseline value and the value at the follow-up point. Results. There was an improvement in the studied parameters on RTX therapy, with a significant decrease in ESR, CRP and activity index (EScSG-AI) at all observation points (p=0.001). At point 1: ΔCRP=6.7 mg/l, ΔESR=7.4 mm/h, ΔEScSG-AI=1.7. At point 2: ΔCRP=8.7 mg/l, ΔESR=7.5 mm/h, ΔEScSG-AI=1.9. At point 3: ΔCRP=16.1 mg/l, ΔESR=11 mm/h, ΔEScSG-AI=2.1. Conclusion. There was a significant decrease in CRP and ESR during long-term RTX therapy, which correlated with a decrease in the disease activity index. RTX is an effective treatment option for SSc with elevated acute-phase reactants.
Keywords: C-reactive protein, interstitial lung disease, systemic sclerosis, rituximab
Procedia PDF Downloads 26
6524 The Role of Human Capital in the Evolution of Inequality and Economic Growth in Latin-America
Authors: Luis Felipe Brito-Gaona, Emma M. Iglesias
Abstract:
There is a growing literature that studies the main determinants and drivers of inequality and economic growth in several countries, using panel data and different estimation methods (fixed effects, the Generalized Method of Moments (GMM) and Two-Stage Least Squares (TSLS)). Recently, the evolution of these variables over the period 1980-2009 was studied in the 18 countries of Latin America, and it was found that one of the main variables explaining their evolution was Foreign Direct Investment (FDI). We extend this study to the year 2015 in the same 18 countries and find that FDI no longer has a significant role, while schooling levels have a significant negative effect on inequality and a significant positive effect on economic growth. We also find that the point estimates associated with human capital are the largest among the variables included in the analysis, meaning that an increase in human capital (measured by secondary schooling levels) is the main determinant that can help reduce inequality and increase economic growth in Latin America. Therefore, we advise that economic policies in Latin America should be directed towards increasing the level of education. We estimate by fixed effects, GMM and TSLS to check the robustness of our results, and our conclusion is the same regardless of the estimation method chosen. We also find that the 2008 international recession significantly reduced economic growth in the Latin American countries.
Keywords: economic growth, human capital, inequality, Latin-America
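Of the three estimation methods, the fixed-effects (within) estimator is the most direct to write down: demean each country's observations and run pooled OLS on the demeaned data. A minimal sketch with a hypothetical variable layout (GMM and TSLS need instrument matrices and are omitted):

```python
import numpy as np

def within_estimator(y, X, country):
    # Fixed-effects (within) estimator: subtract each country's means,
    # then run pooled OLS on the demeaned data
    yd, Xd = y.astype(float).copy(), X.astype(float).copy()
    for c in np.unique(country):
        m = country == c
        yd[m] -= yd[m].mean()
        Xd[m] -= Xd[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta
```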
Procedia PDF Downloads 226
6523 An Energy Detection-Based Algorithm for Cooperative Spectrum Sensing in Rayleigh Fading Channel
Authors: H. Bakhshi, E. Khayyamian
Abstract:
Cognitive radios have been recognized as one of the most promising technologies for dealing with the scarcity of the radio spectrum. In cognitive radio systems, secondary users are allowed to utilize the frequency bands of primary users when those bands are idle; hence, how to accurately detect the idle frequency bands has attracted many researchers' interest. Detection performance is sensitive to the noise power and gain fluctuations. Since the signal-to-noise ratios (SNRs) between the primary user and the secondary users are not the same and change over time, SNR and noise power estimation is essential. In this paper, we present a cooperative spectrum sensing algorithm using SNR estimation to improve detection performance in realistic conditions.
Keywords: cognitive radio, cooperative spectrum sensing, energy detection, SNR estimation, spectrum sensing, Rayleigh fading channel
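The detector itself is one line plus a threshold: under H0 and for large N, the normalized energy statistic is approximately Gaussian with mean equal to the noise power, giving a closed-form threshold for a target false-alarm probability. A sketch assuming real-valued Gaussian noise and a noise-power estimate supplied from outside:

```python
import numpy as np
from scipy.stats import norm

def energy_detect(x, noise_power, p_fa=0.05):
    # T = (1/N) * sum |x|^2; under H0, T ~ N(s2, 2*s2^2/N) approximately
    # for real Gaussian noise of power s2 (CLT approximation)
    N = len(x)
    T = np.sum(np.abs(x) ** 2) / N
    thr = noise_power * (1.0 + norm.ppf(1.0 - p_fa) * np.sqrt(2.0 / N))
    return T > thr          # True: band declared occupied
```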
Procedia PDF Downloads 449
6522 Mathematical Modeling for the Break-Even Point Problem in a Non-homogeneous System
Authors: Filipe Cardoso de Oliveira, Lino Marcos da Silva, Ademar Nogueira do Nascimento, Cristiano Hora de Oliveira Fontes
Abstract:
This article presents a mathematical formulation for the production break-even point problem in a non-homogeneous system. The optimization problem aims to obtain the composition of the best product mix in a non-homogeneous industrial plant with the lowest cost until the break-even point is reached. The problem constraints represent real limitations of a generic non-homogeneous industrial plant for n different products. The proposed model is able to solve the break-even point problem simultaneously for all products, unlike the existing approaches, which solve it sequentially, considering each product in isolation and providing a sub-optimal solution to the problem. The results indicate that the product mix found through the proposed model has economic advantages over the traditional approach.
Keywords: branch and bound, break-even point, non-homogeneous production system, integer linear programming, management accounting
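A compact way to express the simultaneous formulation is a small optimization model: minimize total variable cost subject to total contribution covering the fixed cost and to shared capacity limits. The sketch below solves the LP relaxation of such an integer model with invented data for a three-product plant:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical plant: price, unit variable cost and machine hours per
# unit for three products; F = fixed cost, H = available machine hours
price = np.array([12.0, 9.0, 15.0])
var_cost = np.array([7.0, 5.0, 10.0])
hours = np.array([2.0, 1.0, 3.0])
F, H = 40_000.0, 25_000.0

# minimize var_cost @ x  s.t.  (price - var_cost) @ x >= F,  hours @ x <= H
res = linprog(c=var_cost,
              A_ub=np.vstack([-(price - var_cost), hours]),
              b_ub=np.array([-F, H]),
              bounds=[(0, None)] * 3)
print(res.x, res.fun)   # break-even product mix and its variable cost
```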
Procedia PDF Downloads 211
6521 Existence of Solutions for a Three-Point Boundary Value Problem for Differential Equations
Authors: Mohamed Houas, Maamar Benbachir
Abstract:
In this paper, under weak assumptions, we study the existence and uniqueness of solutions for a nonlinear fractional boundary value problem. New existence and uniqueness results are established using the Banach contraction principle. Other existence results are obtained using Schaefer's and Krasnoselskii's fixed point theorems. At the end, some illustrative examples are presented.
Keywords: Caputo derivative, boundary value problem, fixed point theorem, local conditions
Procedia PDF Downloads 428