Search results for: variance estimation.

981 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study

Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb

Abstract:

The purpose of this study is to reduce the radiation dose of chest CT examinations by adding Tube Current Modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the Dose Length Product (DLP) by a conversion factor. Results were compared to those obtained with the CT-Expo software. The size specific dose estimation (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a size conversion factor related to the phantom’s effective diameter. Objective assessment of image quality was performed with Signal to Noise Ratio (SNR) measurements in the phantom. SPSS software was used for data analysis. Results showed that, with CARE Dose 4D included, ED was lowered by 48.35% and 51.51% according to the DLP and CT-Expo methods, respectively. ED ranged between 6.6 mSv and 7.01 mSv for the standard protocol, and between 3.2 mSv and 3.62 mSv with TCM. Similar results were found for SSDE: the dose was 16.25 mGy without TCM and 48.8% lower with TCM. The calculated SNR values were significantly different (p=0.03<0.05); the highest was measured on images acquired with TCM and reconstructed with Filtered Back Projection (FBP). In conclusion, this study demonstrates the potential of the TCM technique to reduce SSDE and ED while conserving image quality for thoracic CT examinations performed at a high diagnostic reference level.
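
Both dose quantities reduce to a multiplication, so a minimal sketch is easy to give. The conversion factors below are illustrative placeholders (k = 0.014 is a commonly quoted chest value), not the study's values:

```python
# Illustrative sketch of the two dose calculations described above.
# The conversion factors are placeholders, not the study's values.

def effective_dose(dlp_mgy_cm: float, k_msv_per_mgy_cm: float = 0.014) -> float:
    """ED (mSv) = DLP (mGy*cm) x region-specific conversion factor k."""
    return dlp_mgy_cm * k_msv_per_mgy_cm

def ssde(ctdi_vol_mgy: float, size_factor: float) -> float:
    """SSDE (mGy) = CTDIvol (mGy) x conversion factor looked up from the
    phantom's effective diameter (e.g., via AAPM Report 204 tables)."""
    return ctdi_vol_mgy * size_factor

# Example: DLP = 500 mGy*cm and CTDIvol = 10 mGy, assuming a size
# conversion factor of 1.3 for the phantom's effective diameter.
print(effective_dose(500.0))   # 7.0 mSv
print(ssde(10.0, 1.3))         # 13.0 mGy
```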

Keywords: Anthropomorphic phantom, computed tomography, CT-expo, radiation dose.

PDF Downloads: 1464
980 Kernel Matching versus Inverse Probability Weighting: A Comparative Study

Authors: Andy Handouyahia, Tony Haddad, Frank Eaton

Abstract:

Recent quasi-experimental evaluation of the Canadian Active Labour Market Policies (ALMP) by Human Resources and Skills Development Canada (HRSDC) has provided an opportunity to examine alternative methods for estimating the incremental effects of Employment Benefits and Support Measures (EBSMs) on program participants. The focus of this paper is to assess the efficiency and robustness of inverse probability weighting (IPW) relative to kernel matching (KM) in the estimation of program effects. To accomplish this objective, the authors compare pairs of 1,080 estimates, along with their associated standard errors, to assess which type of estimate is generally more efficient and robust. In the interest of practicality, the authors also document the computational time it took to produce the IPW and KM estimates, respectively.
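
A minimal sketch of the two estimators being compared, assuming a binary treatment indicator t, outcome y, and propensity scores e fitted elsewhere; the bandwidth and weighting choices here are illustrative, not the paper's specification:

```python
import numpy as np

def ipw_att(y: np.ndarray, t: np.ndarray, e: np.ndarray) -> float:
    """Inverse-probability-weighted estimate of the average treatment
    effect on the treated: controls are weighted by e/(1-e)."""
    w = e / (1.0 - e)
    treated_mean = y[t == 1].mean()
    control_mean = np.average(y[t == 0], weights=w[t == 0])
    return treated_mean - control_mean

def kernel_matching_att(y, t, e, bandwidth=0.06):
    """Kernel-matching estimate: each treated unit is compared with a
    Gaussian-kernel-weighted average of all control outcomes."""
    yt, yc, et, ec = y[t == 1], y[t == 0], e[t == 1], e[t == 0]
    effects = []
    for ei, yi in zip(et, yt):
        k = np.exp(-0.5 * ((ec - ei) / bandwidth) ** 2)
        effects.append(yi - np.average(yc, weights=k))
    return float(np.mean(effects))
```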

Keywords: Treatment effect, causal inference, observational studies, Propensity score based matching, Kernel Matching, Inverse Probability Weighting, Estimation methods for incremental effect.

PDF Downloads: 6925
979 On the Optimality of Blocked Main Effects Plans

Authors: Rita SahaRay, Ganesh Dutta

Abstract:

In this article, experimental situations are considered where a main effects plan is to be used to study m two-level factors using n runs which are partitioned into b blocks, not necessarily of the same size. Assuming the block sizes to be even for all blocks, for the case n ≡ 2 (mod 4), optimal designs are obtained with respect to type 1 and type 2 optimality criteria in the class of designs providing estimation of all main effects orthogonal to the block effects. In practice, such orthogonal estimation of main effects is often a desirable condition. In the wider class of all available m two-level even-sized blocked main effects plans, where the factors do not occur at high and low levels equally often in each block, E-optimal designs are also characterized. Simple construction methods based on Hadamard matrices and the Kronecker product for these optimal designs are presented.
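
A hedged illustration of the Kronecker-product construction mentioned above — combining small Hadamard matrices into a larger orthogonal design matrix; the specific plans characterized in the paper are not reproduced here:

```python
import numpy as np

H2 = np.array([[1, 1],
               [1, -1]])                     # Hadamard matrix of order 2

H4 = np.kron(H2, H2)                         # Kronecker product gives order 4
assert (H4 @ H4.T == 4 * np.eye(4)).all()    # H H^T = n I: columns orthogonal

# Columns of a Hadamard matrix (dropping the all-ones column) can serve as
# +/-1 level settings for main effects estimated orthogonally to blocks.
print(H4)
```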

Keywords: Design matrix, Hadamard matrix, Kronecker product, type 1 criteria, type 2 criteria.

PDF Downloads: 1053
978 Estimation of the External Force for a Co-Manipulation Task Using the Drive Chain Robot

Authors: Sylvain Devie, Pierre-Philippe Robet, Yannick Aoustin, Maxime Gautier

Abstract:

The aim of this paper is to show that the observation of the external effort and the sensor-less control of a system are limited by the mechanical system. First, the model of a one-joint robot with a prismatic joint is presented. Based on this model, two different procedures were performed in order to identify the mechanical parameters of the system and to observe the external effort applied to it. Experiments have proven that the accuracy of the force observer, based on the DC motor current, is limited by the mechanics of the robot. The sensor-less control is limited by the accuracy of the estimation of the mechanical parameters and by the maximum static friction force, which is the minimum force that can be observed in this case. The consequence of this limitation is that industrial robots without a specific design are not well adapted to performing sensor-less precision tasks. Finally, an efficient control law is presented for high-effort applications.
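
A minimal sketch of a current-based external-force observer for a one-joint prismatic robot, under the simplified rigid-body model F_ext = K_t·i − M·a − F_v·v − F_c·sign(v); all parameter values are illustrative, not the identified values from the paper:

```python
import numpy as np

def observe_external_force(i_motor, v, a,
                           K_t=25.0,   # drive force constant [N/A]
                           M=12.0,     # moving mass [kg]
                           F_v=40.0,   # viscous friction [N.s/m]
                           F_c=15.0):  # Coulomb friction [N]
    """Estimate the external force from the motor current and the
    identified mechanical parameters. Forces below F_c are unobservable,
    which is the limitation discussed in the abstract."""
    drive = K_t * i_motor
    internal = M * a + F_v * v + F_c * np.sign(v)
    return drive - internal

print(observe_external_force(i_motor=2.0, v=0.1, a=0.5))  # estimated F_ext [N]
```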

Keywords: Control, Identification, Robot, Co-manipulation, Sensor-less.

PDF Downloads: 639
977 An Optimization of Machine Parameters for Modified Horizontal Boring Tool Using Taguchi Method

Authors: Thirasak Panyaphirawat, Pairoj Sapsmarnwong, Teeratas Pornyungyuen

Abstract:

This paper presents the findings of an experimental investigation of important machining parameters for a horizontal boring tool modified to mount on a horizontal lathe machine in order to bore an over-length workpiece. To verify the usability of the modified tool, a design of experiment based on the Taguchi method is performed. The parameters investigated are spindle speed, feed rate, depth of cut and length of workpiece. A Taguchi L9 orthogonal array is selected for the four factors at three levels each, in order to minimize the surface roughness (Ra and Rz) of S45C steel tubes. Signal-to-noise ratio analysis and analysis of variance (ANOVA) are performed to study the effect of these parameters and to optimize the machine setting for the best surface finish. The controlled factors with the most effect are depth of cut, spindle speed, length of workpiece, and feed rate, in that order. A confirmation test is performed to verify the optimal setting obtained from the Taguchi method, and the result is satisfactory.
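
A sketch of the Taguchi smaller-the-better signal-to-noise ratio used to rank factor settings when minimizing surface roughness; the roughness data below are invented:

```python
import numpy as np

def sn_smaller_is_better(y: np.ndarray) -> float:
    """S/N = -10 * log10(mean(y^2)); a larger S/N means less roughness."""
    return -10.0 * np.log10(np.mean(np.square(y)))

ra_measurements = np.array([1.8, 2.1, 1.9])  # hypothetical Ra values [um]
print(f"S/N = {sn_smaller_is_better(ra_measurements):.2f} dB")
```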

Keywords: Design of Experiment, Taguchi Design, Optimization, Analysis of Variance, Machining Parameters, Horizontal Boring Tool.

PDF Downloads: 2706
976 Texture Feature-Based Language Identification Using Wavelet-Domain BDIP and BVLC Features and FFT Feature

Authors: Ick Hoon Jang, Hoon Jae Lee, Dae Hoon Kwon, Ui Young Pak

Abstract:

In this paper, we propose texture feature-based language identification using wavelet-domain BDIP (block difference of inverse probabilities) and BVLC (block variance of local correlation coefficients) features and an FFT (fast Fourier transform) feature. In the proposed method, wavelet subbands are first obtained by wavelet transform from a test image and denoised by Donoho's soft-thresholding. BDIP and BVLC operators are next applied to the wavelet subbands. FFT blocks are also obtained by 2D (two-dimensional) FFT from the blocks into which the test image is partitioned. Some significant FFT coefficients in each block are selected and the magnitude operator is applied to them. Moments for each subband of BDIP and BVLC and for each magnitude of significant FFT coefficients are then computed and fused into a feature vector. In classification, a stabilized Bayesian classifier, which adopts variance thresholding, searches for the training feature vector most similar to the test feature vector. Experimental results show that the proposed method with the three operations yields excellent language identification even with rather low feature dimension.
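
A hedged sketch of the BDIP operator on a single image block, as it is commonly defined (block size minus the intensity sum divided by the block maximum); BVLC and the full feature-fusion pipeline are omitted for brevity:

```python
import numpy as np

def bdip(block: np.ndarray) -> float:
    """Block Difference of Inverse Probabilities for one block:
    BDIP = N - sum(I) / max(I), with N the number of pixels. Large values
    indicate strong local intensity variation (e.g., stroke edges)."""
    n = block.size
    m = block.max()
    if m == 0:                      # guard against an all-dark block
        return 0.0
    return float(n - block.sum() / m)

block = np.array([[200, 40], [180, 60]], dtype=float)  # toy 2x2 block
print(bdip(block))
```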

Keywords: BDIP, BVLC, FFT, language identification, texture feature, wavelet transform.

PDF Downloads: 2149
975 Application of GM (1, 1) Model Group Based on Recursive Solution in China's Energy Demand Forecasting

Authors: Yeqing Guan, Fen Yang

Abstract:

To learn about China's future energy demand, this paper first proposes a GM(1,1) model group based on recursive solution of the parameter estimation, and sets up a general solving algorithm for the model group. This method avoids problems that occurred in past research, namely repeated remodeling, loss of information and a large amount of calculation. All-data GM(1,1), metabolic GM(1,1) and new-information GM(1,1) models were established from the historical data of energy consumption in China for 2005-2010 together with the added data of 2011; after modeling, simulation and comparison of accuracies, the optimal models were selected for prediction. Results showed that the total energy demand of China will be 37.2221 billion tons of equivalent coal in 2012 and 39.7973 billion tons of equivalent coal in 2013, which is consistent with the overall energy demand planning in the 12th Five-Year Plan.
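
A minimal GM(1,1) sketch with invented data, illustrating the least-squares parameter estimation the abstract refers to; it is a single grey model, not the paper's recursive model group:

```python
import numpy as np

def gm11_forecast(x0: np.ndarray, steps: int) -> np.ndarray:
    """Fit GM(1,1) to series x0 and forecast `steps` values ahead."""
    x1 = np.cumsum(x0)                                # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z1, np.ones_like(z1)])      # design matrix
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # develop/grey coefficients
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])       # back to original series
    x0_hat[0] = x0[0]
    return x0_hat[n:]

energy = np.array([26.1, 28.6, 31.1, 32.0, 33.6, 36.0])  # hypothetical totals
print(gm11_forecast(energy, steps=2))
```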

Keywords: energy demands, GM(1, 1) model group, least square estimation, prediction

PDF Downloads: 1555
974 The Results of the Fetal Weight Estimation of the Infants Delivered in the Delivery Room at Dan Khunthot Hospital by Johnson's Method

Authors: Nareelux Suwannobol, JintanaTapin, Khuanchanok Narachan

Abstract:

The objective of this study was to determine the accuracy of fetal weight estimation by Johnson's method and to compare it with actual birth weight. The sample group comprised 126 infants delivered in Dan Khunthot hospital from January to March 2012. Fetal weight was estimated by measuring fundal height according to Johnson's method. The information was collected from historical delivery records and analyzed using the statistics of frequency, percentage, mean, and standard deviation. The difference was then analyzed by a paired t-test. The average actual birth weight was 3,093.57 ± 391.03 g (mean ± SD), while the average fetal weight estimated by Johnson's method was 3,455 ± 454.55 g, i.e., 384.09 g higher than the average actual birth weight. When the infants were classified by birth weight, actual birth weight was less than the estimated fetal weight in the low birth weight (<2,500 g) and appropriate birth weight (2,500-3,999 g) groups, whereas in the high birth weight (>4,000 g) group actual birth weight was greater than the estimated fetal weight. The smallest differences between actual birth weight and estimated fetal weight were found in the high birth weight (>4,000 g), appropriate birth weight (2,500-3,999 g) and low birth weight (<2,500 g) groups, respectively. The rate of estimated fetal weights within 10% of actual birth weight was 35.7%. The difference between actual birth weight and estimated fetal weight was statistically significant (p < .001). Johnson's method can provide an initial fetal weight estimate before resorting to special examinations, which may be excessively costly. A variety of methods should be employed to estimate fetal weight more precisely, which will help plan care for the mother's and infant's safety.
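
A sketch of Johnson's formula as it is commonly stated in textbooks — EFW (g) = (fundal height in cm − n) × 155, with n depending on the station of the presenting part; the offsets shown are the textbook values and may differ from the hospital's protocol:

```python
def johnson_efw(fundal_height_cm: float, engaged: bool) -> float:
    """Estimated fetal weight (g) = (fundal height - n) * 155,
    with n = 11 when the presenting part is engaged and n = 12 otherwise."""
    n = 11 if engaged else 12
    return (fundal_height_cm - n) * 155.0

print(johnson_efw(34, engaged=True))   # (34 - 11) * 155 = 3565 g
```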

Keywords: Johnson's method, Fetal weight estimate, Delivery Room, Student nurse.

PDF Downloads: 2345
973 Evaluating Accuracy of Foetal Weight Estimation by Clinicians in Christian Medical College Hospital, India and Its Correlation to Actual Birth Weight: A Clinical Audit

Authors: Aarati Susan Mathew, Radhika Narendra Patel, Jiji Mathew

Abstract:

A retrospective study was conducted at Christian Medical College (CMC) Teaching Hospital, Vellore, India on 14th August 2014 to assess the accuracy of clinically estimated foetal weight upon labour admission. Estimating foetal weight is a crucial factor in assessing maternal and foetal complications during and after labour. Medical notes of ninety-eight postnatal women who fulfilled the inclusion criteria were studied to evaluate the correlation between their recorded Estimated Foetal Weight (EFW) on admission and the actual birth weight (ABW) of the newborn after delivery. Data concerning maternal and foetal demographics were also noted. Accuracy was determined by absolute percentage error and the proportion of estimates within 10% of ABW. Actual birth weights ranged from 950-4080 g. A strong positive correlation between EFW and ABW (r=0.904) was noted. Term deliveries (≥40 weeks) in the normal weight range (2500-4000 g) had a 59.5% estimation accuracy (n=74) compared to pre-term (<40 weeks) with an estimation accuracy of 0% (n=2). Among the term deliveries, macrosomic babies (>4000 g) were underestimated by 25% (n=3) and low birthweight (LBW) babies were overestimated by 12.7% (n=9). Registrars who estimated foetal weight were accurate for babies within the normal weight range; however, there needs to be an improvement in predicting the weight of macrosomic and LBW foetuses. We have suggested the use of an amended version of Johnson's formula for the Indian population and a need to re-audit once implemented.
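
A small sketch of the accuracy measures used in the audit — absolute percentage error, the proportion of estimates within 10% of ABW, and the Pearson correlation; the data below are invented:

```python
import numpy as np

def accuracy_report(efw: np.ndarray, abw: np.ndarray):
    ape = np.abs(efw - abw) / abw * 100.0        # absolute % error per case
    within_10 = np.mean(ape <= 10.0) * 100.0     # share of estimates <= 10% off
    r = np.corrcoef(efw, abw)[0, 1]              # Pearson correlation
    return ape.mean(), within_10, r

efw = np.array([3100, 2800, 3900, 2400], dtype=float)  # invented EFW values [g]
abw = np.array([3000, 3050, 4100, 2350], dtype=float)  # invented ABW values [g]
print(accuracy_report(efw, abw))
```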

Keywords: Clinical palpation, estimated foetal weight, pregnancy, India, Johnson’s formula.

PDF Downloads: 2927
972 A Fast Adaptive Tomlinson-Harashima Precoder for Indoor Wireless Communications

Authors: M. Naresh Kumar, Abhijit Mitra, C. Ardil

Abstract:

A fast adaptive Tomlinson-Harashima (T-H) precoder structure is presented for indoor wireless communications, where the channel may vary due to rotation and small movements of the mobile terminal. A frequency-selective slow fading channel which is time-invariant over a frame is assumed. In this adaptive T-H precoder, feedback coefficients are updated at the end of every uplink frame by using a system identification technique for channel estimation, in contrast to the conventional T-H precoding concept where the channel is estimated at the start of the uplink frame via the Wiener solution. In the conventional T-H precoder, the channel is assumed to be time-invariant over both the uplink and downlink frames. However, by assuming the channel is time-invariant over only one frame instead of two, the proposed adaptive T-H precoder yields better performance than the conventional T-H precoder if the channel varies in the uplink after the training sequence is received.
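
A hedged sketch of the T-H feedback loop itself — the part whose coefficients the paper updates adaptively every uplink frame; the channel-identification step is omitted and real-valued symbols are used for simplicity:

```python
import numpy as np

def th_precode(symbols: np.ndarray, b: np.ndarray, mod: float = 8.0) -> np.ndarray:
    """x[n] = mod( s[n] - sum_k b[k] * x[n-1-k] ), folded into [-mod/2, mod/2)."""
    x = np.zeros_like(symbols)
    for n in range(len(symbols)):
        fb = sum(b[k] * x[n - 1 - k] for k in range(min(len(b), n)))
        v = symbols[n] - fb
        x[n] = (v + mod / 2) % mod - mod / 2    # modulo keeps transmit power bounded
    return x

s = np.array([1.0, -3.0, 3.0, -1.0])            # PAM symbols (illustrative)
print(th_precode(s, b=np.array([0.5, -0.2])))   # b: identified feedback coefficients
```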

Keywords: Tomlinson-Harashima precoder, Adaptive channel estimation, Indoor wireless communication, Bit error rate.

PDF Downloads: 1814
971 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning

Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park

Abstract:

This study suggests a method of estimating the stress distribution of beam structures based on TLS (Terrestrial Laser Scanning). The main components of the method are the creation of lattices from the raw TLS data, so as to satisfy a suitable condition, and the application of CSSI (Cubic Smoothing Spline Interpolation) for estimating the stress distribution. Estimation of the stress distribution of a structural member, or of the whole structure, is one of the important factors in the safety evaluation of the structure. Existing sensors, including the ESG (electric strain gauge) and LVDT (Linear Variable Differential Transformer), are contact-type sensors which must be installed on the structural members; they also have various limitations, such as the need for separate space to install network cables and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS system of LiDAR (light detection and ranging), which can measure the displacement of a target at long range without the influence of the surrounding environment and can also capture the whole shape of the structure, has been applied to the field of structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds, which contain many points each carrying local coordinates. Point clouds are not linearly distributed but dispersed; thus, interpolation is essential for their analysis. Through the formation of averaged lattices and CSSI on the raw data, a method that can estimate the displacement of a simple beam was developed. The developed method can be extended to calculate the strain and is finally applicable to estimating the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured with TLS. Through a comparison of the estimated stress and the reference stress, the validity of the method is confirmed.
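
A hedged sketch of the estimation chain described above: smooth a scanned deflection profile with a cubic smoothing spline, differentiate twice for curvature, and convert to bending stress via σ = E·c·w''(x). The beam properties and "scan" data are invented, and scipy's UnivariateSpline stands in for the paper's CSSI step:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0.0, 4.0, 200)                       # positions along beam [m]
w = 0.002 * np.sin(np.pi * x / 4.0)                  # "scanned" deflection [m]
w += np.random.normal(0.0, 5e-5, x.size)             # TLS measurement noise

spline = UnivariateSpline(x, w, k=3, s=len(x) * (5e-5) ** 2)  # cubic, smoothed
curvature = spline.derivative(n=2)(x)                # kappa ~ w''(x)

E, c = 210e9, 0.15                                   # steel modulus [Pa], half-depth [m]
sigma = E * c * curvature                            # bending stress [Pa]
print(f"max |stress| ~ {np.abs(sigma).max() / 1e6:.1f} MPa")
```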

Keywords: Structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation.

PDF Downloads: 2743
970 Estimation of Attenuation and Phase Delay in Driving Voltage Waveform of an Ultra-High-Speed Image Sensor by Dimensional Analysis

Authors: V. T. S. Dao, T. G. Etoh, C. Vo Le, H. D. Nguyen, K. Takehara, T. Akino, K. Nishi

Abstract:

We present an explicit expression to estimate driving voltage attenuation through an RC network representation of an ultra-high-speed image sensor. The Elmore delay metric for a fundamental RC chain is employed as the first-order approximation. By applying dimensional analysis to SPICE simulation data, we found a simple expression that significantly improves the accuracy of the approximation. The estimation error of the resulting expression for uniform RC networks is less than 2%. Similarly, another simple closed-form model to estimate the 50% delay through fundamental RC networks is also derived with sufficient accuracy. The framework of this analysis can be extended to address delay or attenuation issues of other VLSI structures.
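
A minimal sketch of the Elmore delay metric for an RC chain, the first-order approximation the abstract starts from; the element values are invented:

```python
import numpy as np

def elmore_delay(R: np.ndarray, C: np.ndarray) -> float:
    """Delay to the far end of an RC ladder: sum over each resistor of
    R_i times the total capacitance downstream of it."""
    downstream_C = np.cumsum(C[::-1])[::-1]   # C_i + C_{i+1} + ... + C_n
    return float(np.sum(R * downstream_C))

n = 10
R = np.full(n, 2.0)      # ohms per segment (illustrative)
C = np.full(n, 1e-12)    # farads per segment (illustrative)
print(f"Elmore delay = {elmore_delay(R, C) * 1e12:.1f} ps")   # 110.0 ps
```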

Keywords: Dimensional Analysis, Elmore model, RC network, Signal Attenuation, Ultra-High-Speed Image Sensor.

PDF Downloads: 1424
969 Human Motion Capture: New Innovations in the Field of Computer Vision

Authors: Najm Alotaibi

Abstract:

Human motion capture has become one of the major areas of interest in the field of computer vision. Some of the major application areas that have been rapidly evolving include advanced human interfaces, virtual reality and security/surveillance systems. This study provides a brief overview of the techniques and applications used for markerless human motion capture, which deals with analyzing human motion in the form of mathematical formulations. The major contribution of this research is that it classifies the computer vision based techniques of human motion capture according to a taxonomy, and then breaks it down into four systematically different categories: tracking, initialization, pose estimation and recognition. Detailed descriptions, and the relationships between them, are given for the techniques of tracking and pose estimation. The subcategories of each process are further described. The various hypotheses used by researchers in this domain are surveyed, and the evolution of these techniques is explained. The survey concludes that most researchers have focused on using mathematical body models for markerless motion capture.

Keywords: Human Motion Capture, Computer Vision, Vision based, Tracking.

PDF Downloads: 2491
968 A Self Adaptive Genetic Based Algorithm for the Identification and Elimination of Bad Data

Authors: A. A. Hossam-Eldin, E. N. Abdallah, M. S. El-Nozahy

Abstract:

The identification and elimination of bad measurements is one of the basic functions of a robust state estimator, as bad data corrupt the results of state estimation by the popular weighted least squares method. However, this is a difficult problem to handle, especially when dealing with multiple errors of the interactive conforming type. In this paper, a self-adaptive genetic-based algorithm is proposed. The algorithm utilizes the results of the classical linearized normal residuals approach to tune the genetic operators; thus, instead of making a randomized search throughout the whole search space, it is more likely to perform a directed search, and the optimum solution is obtained at very early stages (a maximum of 5 generations). The algorithm utilizes accumulating databases of already computed cases to reduce the computational burden to a minimum. Tests are conducted with reference to the standard IEEE test systems. Test results are very promising.
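
A hedged sketch of the classical normalized-residual test whose output the paper uses to tune the genetic operators; the measurement model H, covariance R and data z are invented:

```python
import numpy as np

def normalized_residuals(H, R, z, x_hat):
    """r_N,i = |r_i| / sqrt(Omega_ii), with Omega = R - H (H^T R^-1 H)^-1 H^T."""
    r = z - H @ x_hat                              # measurement residuals
    G = H.T @ np.linalg.inv(R) @ H                 # gain matrix
    Omega = R - H @ np.linalg.solve(G, H.T)        # residual covariance
    return np.abs(r) / np.sqrt(np.diag(Omega))

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
R = 0.01 * np.eye(3)
z = np.array([1.02, 1.98, 4.50])                   # third measurement is bad data
x_hat = np.linalg.lstsq(H, z, rcond=None)[0]       # WLS estimate (R uniform)
print(normalized_residuals(H, R, z, x_hat))        # r_N > 3 flags suspect data
```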

Keywords: Bad Data, Genetic Algorithms, Linearized Normal residuals, Observability, Power System State Estimation.

PDF Downloads: 1346
967 Estimation Model for Concrete Slump Recovery by Using Superplasticizer

Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert

Abstract:

This paper introduces a practical solution for concrete slump recovery using a type-F chemical admixture (naphthalene-based superplasticizer), in order to solve the problem of concrete becoming unusable because it loses its slump, especially in tropical countries with faster slump loss rates. On the other hand, randomly adding superplasticizer into concrete can cause it to segregate. Therefore, this paper also develops an estimation model used to calculate the second dose of superplasticizer needed for slump recovery. Fresh properties of ordinary Portland cement concrete with a volumetric ratio of paste to void between aggregate (paste content) of 1.1-1.3, a water-cement ratio in the range 0.30 to 0.67 and an initial superplasticizer (naphthalene base) dose of 0.25%-1.6% were tested for initial slump and for slump loss every 30 minutes for one and a half hours by the slump cone test. Concretes with slump loss ranging from 10% to 90% were re-dosed and successfully recovered back to their initial slump; the slump after re-dosing was tested by the slump cone test. From the results, it was concluded that slump loss was slower for mixes with a high initial dose of superplasticizer, because the added superplasticizer disturbs cement hydration. The required second dose of superplasticizer was affected by two major parameters, water-cement ratio and paste content, where a lower water-cement ratio and paste content cause an increase in the required second dose. The second dose of superplasticizer increases as the solid content within the system increases, whether the solids come from cement particles or aggregate. The data were analyzed to form an equation used to estimate the second dosage of superplasticizer required to recover the slump to its initial value.

Keywords: Estimation model, second superplasticizer dosage, slump loss, slump recovery.

PDF Downloads: 1915
966 A New Approach of Fuzzy Methods for Evaluating Hydrological Data

Authors: Nasser Shamskia, Seyyed Habib Rahmati, Hassan Haleh, Seyyedeh Hoda Rahmati

Abstract:

The main design criteria for most hydraulic structures are essentially based on runoff, or water discharge. Two of these important criteria are runoff and return period. Mostly, these measures are calculated or estimated from stochastic data. Another feature of hydrological data is their impreciseness. Therefore, in order to deal with uncertainty and impreciseness, a new fuzzy method of evaluating hydrological measures, based on Buckley's estimation method, is developed. The method introduces triangular fuzzy numbers for the different measures, in which both the uncertainty and the impreciseness concepts are considered. Besides, since another important consideration in most hydrological studies is the comparison of a measure across different months or years, a new fuzzy ranking method, consistent with the special form of the proposed fuzzy numbers, is also developed. Finally, to illustrate the methods more explicitly, the two algorithms are tested on a simple example and a real case study.
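
A hedged sketch of the idea behind Buckley-style fuzzy estimation: stack confidence intervals for a mean into a (roughly triangular) fuzzy number, with each α-cut given by the (1 − α) confidence interval; the discharge data are invented:

```python
import numpy as np
from scipy import stats

def fuzzy_mean_alpha_cuts(sample, alphas=(0.1, 0.5, 0.9)):
    """Return {alpha: (lower, upper)} alpha-cuts of the fuzzy mean."""
    m, se = np.mean(sample), stats.sem(sample)
    cuts = {}
    for a in alphas:
        lo, hi = stats.t.interval(1.0 - a, df=len(sample) - 1, loc=m, scale=se)
        cuts[a] = (lo, hi)
    return cuts

discharge = np.array([120.0, 135.0, 110.0, 150.0, 128.0])  # hypothetical [m^3/s]
for a, (lo, hi) in fuzzy_mean_alpha_cuts(discharge).items():
    print(f"alpha={a:.1f}: [{lo:.1f}, {hi:.1f}]")   # cuts narrow as alpha grows
```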

Keywords: Fuzzy Discharge, Fuzzy estimation, Fuzzy ranking method, Hydrological data

PDF Downloads: 1712
965 Application of UAS in Forest Firefighting for Detecting Ignitions and 3D Fuel Volume Estimation

Authors: Artur Krukowski, Emmanouela Vogiatzaki

Abstract:

The article presents results from the AF3 project “Advanced Forest Fire Fighting”, focused on Unmanned Aircraft Systems (UAS)-based 3D surveillance and 3D area mapping using high-resolution photogrammetric methods from multispectral imaging, also taking advantage of the 3D scanning techniques from the SCAN4RECO project. We also present a proprietary embedded sensor system used for the detection of fire ignitions in the forest using a near-infrared based scanner, with weight and form factors allowing it to be easily deployed on standard commercial micro-UAVs, such as the DJI Inspire or Mavic. Results from real-life pilot trials in Greece, Spain, and Israel demonstrated the added value of using UAS for precise and reliable detection of forest fires, as well as for high-resolution 3D aerial modeling for accurate quantification of the human resources and equipment required for firefighting.

Keywords: Forest wildfires, fuel volume estimation, 3D modeling, UAV, surveillance, firefighting, ignition detectors.

PDF Downloads: 581
964 Acceleration-Based Motion Model for Visual SLAM

Authors: Daohong Yang, Xiang Zhang, Wanting Zhou, Lei Li

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that gathers information about the surrounding environment to ascertain its own position and create a map. It is widely used in computer vision, robotics, and various other fields. Many visual SLAM systems, such as ORB-SLAM3, utilize a constant velocity motion model. This model facilitates the determination of the initial pose of the current frame, thereby enhancing the efficiency and precision of feature matching. However, the constant velocity assumption is often violated in practice. This can result in a significant deviation between the obtained initial pose and the true value, leading to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. To describe the camera pose acceleration more accurately, we separate the pose transformation matrix into its rotation matrix and translation vector components, with the rotation matrix represented by a rotation vector. We assume that, over a short period, the changes in rotational angular velocity and translation vector remain constant. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant velocity model is analyzed theoretically. Finally, we apply our proposed approach to the ORB-SLAM3 system and evaluate two sets of sequences from the TUM datasets. The results show that our proposed method gives a more accurate initial pose estimate, improving the accuracy of the ORB-SLAM3 system by 6.61% and 6.46% on the two test sequences, respectively.
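
A hedged sketch of a constant-acceleration prediction step for the translation part only: estimate velocity and acceleration from the two previous inter-frame motions and extrapolate. This is an illustration of the general idea, not the paper's exact model (which also handles rotation via rotation vectors):

```python
import numpy as np

def predict_translation(t_prev2, t_prev1, t_curr, dt):
    """Extrapolate the next camera position assuming constant acceleration."""
    v1 = (t_prev1 - t_prev2) / dt       # velocity two frames ago
    v2 = (t_curr - t_prev1) / dt        # most recent velocity
    a = (v2 - v1) / dt                  # constant-acceleration assumption
    return t_curr + v2 * dt + 0.5 * a * dt**2

t0 = np.array([0.00, 0.0, 0.0])
t1 = np.array([0.10, 0.0, 0.0])
t2 = np.array([0.25, 0.0, 0.0])         # camera speeding up along x
print(predict_translation(t0, t1, t2, dt=1 / 30))
```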

Keywords: Error estimation, constant acceleration motion model, pose estimation, visual SLAM.

PDF Downloads: 251
963 Perspectives of Renewable Energy in 21st Century in India: Statistics and Estimation

Authors: Manoj Kumar, Rajesh Kumar

Abstract:

The favourable geographical conditions of the Indian subcontinent make it well suited for renewable energy. Increasing dependence on coal and other conventional sources is driving the world towards pollution and the depletion of resources. This paper presents statistics of energy consumption and energy generation in the Indian subcontinent, which show energy demand increasingly surpassing energy generation. With the growth in demand for energy, the usage of coal has increased, since the major portion of energy production in India comes from thermal power plants. The increased usage of thermal power plants causes pollution and depletion of reserves; hence, a paradigm shift to renewable sources is inevitable. In this work, the capacity and potential of renewable sources in India are analyzed, and based on this analysis, the future potential of these sources is estimated.

Keywords: Energy consumption and generation, depletion of reserves, pollution, estimation, renewable sources.

PDF Downloads: 819
962 An Efficient Adaptive Thresholding Technique for Wavelet Based Image Denoising

Authors: D.Gnanadurai, V.Sadasivam

Abstract:

This framework describes a computationally more efficient and adaptive threshold estimation method for image denoising in the wavelet domain, based on Generalized Gaussian Distribution (GGD) modeling of subband coefficients. In the proposed method, the threshold is chosen by analysing statistical parameters of the wavelet subband coefficients such as the standard deviation, arithmetic mean and geometric mean. The noisy image is first decomposed into many levels to obtain different frequency bands. Soft thresholding is then used to remove the noisy coefficients, with the optimum threshold value fixed by the proposed method. Experimental results on several test images show that this method yields significantly superior image quality and a better Peak Signal to Noise Ratio (PSNR). To demonstrate its efficiency in image denoising, the method is compared with various denoising methods such as the Wiener filter, the averaging filter, VisuShrink and BayesShrink.
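
A hedged sketch of GGD-based adaptive soft thresholding in the wavelet domain, using the BayesShrink-style rule T = σ_noise² / σ_signal per subband as a stand-in; the paper's exact threshold rule uses additional subband statistics:

```python
import numpy as np
import pywt

def denoise(img: np.ndarray, wavelet="db4", levels=3) -> np.ndarray:
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # Noise std from the finest diagonal subband via the median estimator.
    sigma_n = np.median(np.abs(coeffs[-1][2])) / 0.6745
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        new_detail = []
        for band in detail:
            sigma_x = np.sqrt(max(band.var() - sigma_n**2, 1e-12))
            T = sigma_n**2 / sigma_x                  # subband-adaptive threshold
            new_detail.append(pywt.threshold(band, T, mode="soft"))
        out.append(tuple(new_detail))
    return pywt.waverec2(out, wavelet)

noisy = np.random.rand(64, 64) + np.random.normal(0, 0.1, (64, 64))
print(denoise(noisy).shape)
```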

Keywords: Wavelet transform, Gaussian noise, image denoising, filter banks, thresholding.

PDF Downloads: 2907
961 A Study of Adaptive Fault Detection Method for GNSS Applications

Authors: Je Young Lee, Hee Sung Kim, Kwang Ho Choi, Joonhoo Lim, Sebum Chun, Hyung Keun Lee

Abstract:

The purpose of this study is to develop an efficient fault detection method for Global Navigation Satellite System (GNSS) applications based on adaptive noise covariance estimation. Due to their dependence on radio frequency signals, GNSS measurements are dominated by systematic errors in the receiver’s operating environment. In the proposed method, the pseudorange and carrier-phase measurement noise covariances are obtained at the time propagations and measurement updates in the process of Carrier-Smoothed Code (CSC) filtering, respectively. The test statistics for fault detection are generated from the estimated measurement noise covariances. To evaluate the fault detection capability, intentional faults were added to the field-collected measurements. The experimental results show that the proposed method is efficient in detecting unhealthy measurements and improves GNSS positioning accuracy against fault occurrences.
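
A hedged sketch of the carrier-smoothed-code (Hatch) filter underlying the CSC step, with a code-minus-smoothed residual of the kind that could feed a noise-covariance estimate; window length and data are illustrative:

```python
import numpy as np

def hatch_filter(pseudorange, carrier_phase, window=100):
    """Smooth code with carrier deltas:
    Ps[k] = P[k]/N + (N-1)/N * (Ps[k-1] + dPhi)."""
    smoothed = np.empty_like(pseudorange)
    smoothed[0] = pseudorange[0]
    for k in range(1, len(pseudorange)):
        n = min(k + 1, window)
        d_phi = carrier_phase[k] - carrier_phase[k - 1]
        smoothed[k] = pseudorange[k] / n + (n - 1) / n * (smoothed[k - 1] + d_phi)
    return smoothed

rng = np.random.default_rng(0)
truth = np.linspace(2.0e7, 2.0e7 + 500.0, 300)       # range ramp [m]
code = truth + rng.normal(0.0, 1.0, 300)             # noisy pseudorange
phase = truth + 0.01 * rng.normal(0.0, 1.0, 300)     # low-noise carrier phase
residual = code - hatch_filter(code, phase)
print(f"code noise std ~ {residual.std():.2f} m")    # basis for a test statistic
```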

Keywords: Adaptive estimation, fault detection, GNSS, residual.

PDF Downloads: 2555
960 A Study on the Location and Range of Obstacle Region in Robot's Point Placement Task based on the Vision Control Algorithm

Authors: Jae Kyung Son, Wan Shik Jang, Sung hyun Shim, Yoon Gyung Sung

Abstract:

This paper is concerned with the application of a vision control algorithm to a robot's point placement task along a discontinuous trajectory caused by an obstacle. The presented vision control algorithm consists of four models: the robot kinematic model, the vision system model, the parameter estimation model, and the robot joint angle estimation model. When the robot moves toward a target along a discontinuous trajectory, several types of obstacles appear in two obstacle regions. This study investigates how these changes affect the presented vision control algorithm. The practicality of the vision control algorithm is demonstrated experimentally by performing the robot's point placement task along a discontinuous trajectory caused by an obstacle.

Keywords: Vision control algorithm, location of obstacle region, range of obstacle region, point placement.

PDF Downloads: 1402
959 The Hyperbolic Smoothing Approach for Automatic Calibration of Rainfall-Runoff Models

Authors: Adilson Elias Xavier, Otto Corrêa Rotunno Filho, Paulo Canedo de Magalhães

Abstract:

This paper addresses the issue of automatic parameter estimation in conceptual rainfall-runoff (CRR) models. Due to threshold structures commonly occurring in CRR models, the associated mathematical optimization problems have the significant characteristic of being strongly non-differentiable. In order to face this enormous task, the proposed resolution method adopts a smoothing strategy using a special class of C∞ differentiable functions. The final estimation solution is obtained by solving a sequence of differentiable subproblems which gradually approach the original conceptual problem. The use of this technique, called the Hyperbolic Smoothing Method (HSM), makes possible the application of the most powerful minimization algorithms, and also allows the main difficulties presented by the original CRR problem to be overcome. A set of computational experiments is presented for the purpose of illustrating both the reliability and the efficiency of the proposed approach.
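
A hedged sketch of the hyperbolic smoothing idea: replace the non-differentiable threshold max(y, 0) with the C∞ approximation φ(y, τ) = (y + √(y² + τ²))/2 and let τ → 0 over a sequence of subproblems. The specific smoothing function used in the paper may differ:

```python
import numpy as np

def phi(y, tau):
    """Smooth surrogate for max(y, 0); the error is at most tau/2 (at y = 0)."""
    return 0.5 * (y + np.sqrt(y * y + tau * tau))

y = np.linspace(-2.0, 2.0, 5)
for tau in (1.0, 0.1, 0.01):                     # decreasing smoothing parameter
    print(tau, np.round(phi(y, tau), 4))         # converges to max(y, 0)
```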

Keywords: Rainfall-runoff models, optimization procedure, automatic parameter calibration, hyperbolic smoothing method.

PDF Downloads: 408
958 The Impact of Transaction Costs on Rebalancing an Investment Portfolio in Portfolio Optimization

Authors: B. Marasović, S. Pivac, S. V. Vukasović

Abstract:

Constructing a portfolio of investments is one of the most significant financial decisions facing individuals and institutions. In accordance with modern portfolio theory, maximization of return at minimal risk should be the investment goal of any successful investor. In addition, the costs incurred when setting up a new portfolio or rebalancing an existing portfolio must be included in any realistic analysis. In this paper, rebalancing an investment portfolio in the presence of transaction costs on the Croatian capital market is analyzed. The model applied in the paper is an extension of the standard portfolio mean-variance optimization model in which transaction costs are incurred to rebalance an investment portfolio. This model allows different costs for different securities, and different costs for buying and selling. To find an efficient portfolio using this model, first the solution of a quadratic programming problem of similar size to the Markowitz model, and then the solution of a linear programming problem, have to be found. Furthermore, the paper investigates the impact of transaction costs on the efficient frontier. Moreover, it is shown that the global minimum variance portfolio on the efficient frontier always has the same level of risk regardless of the amount of transaction costs. Although the position of the efficient frontier depends on both the amount of transaction costs and the initial portfolio, it can be concluded that the extreme right portfolio on the efficient frontier always contains only one stock, with the highest expected return and the highest risk.
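
A hedged sketch of mean-variance rebalancing with proportional transaction costs that differ for buying and selling, solved here with a general-purpose optimizer rather than the paper's QP/LP decomposition; returns, covariances and cost rates are invented:

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])                 # expected returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])            # covariance matrix
w0 = np.array([0.5, 0.3, 0.2])                    # current (initial) portfolio
c_buy, c_sell, lam = 0.010, 0.015, 1.0            # cost rates, risk tolerance

def objective(w):
    cost = (c_buy * np.maximum(w - w0, 0).sum()
            + c_sell * np.maximum(w0 - w, 0).sum())
    return w @ Sigma @ w - lam * (mu @ w - cost)  # risk minus net return

res = minimize(objective, w0, method="SLSQP", bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print(np.round(res.x, 3))                          # rebalanced weights
```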

Keywords: Croatian capital market, Fractional quadratic programming, Markowitz model, Portfolio optimization, Transaction costs.

PDF Downloads: 2946
957 Validation of the Linear Trend Estimation Technique for Prediction of Average Water and Sewerage Charge Rate Prices in the Czech Republic

Authors: Aneta Oblouková, Eva Vítková

Abstract:

The article deals with the issue of water and sewerage charge rate prices in the Czech Republic. The research focuses on the analysis of the development of the average prices of the water and sewerage charge rate in the Czech Republic in 1994-2021 and on the validation of the chosen methodology for predicting their development. The data for this research were obtained from the Czech Statistical Office. The aim of the paper is to validate the relevance of the mathematical linear trend estimation technique for calculating predicted average prices of water and sewerage charge rates. The real values of the average prices in 1994-2018, obtained from the Czech Statistical Office, were converted into a mathematical equation; real data of the same type were also obtained for 2019-2021. A prediction of the average prices for 2019-2021 was then calculated using the chosen method, the linear trend estimation technique. The values obtained from the Czech Statistical Office and the values calculated using the chosen methodology were subsequently compared. The research result is a validation of the chosen mathematical technique as suitable for this research.
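
A minimal sketch of the linear trend estimation technique being validated: fit price = a + b·year on the 1994-2018 window, then predict 2019-2021 and compare against the realized values. The price series below is invented, not CZSO data:

```python
import numpy as np

years = np.arange(1994, 2019)
prices = 8.0 + 1.9 * (years - 1994) + np.random.normal(0, 1.0, years.size)  # CZK/m3

b, a = np.polyfit(years, prices, deg=1)             # least-squares slope, intercept
for y in (2019, 2020, 2021):
    print(y, round(a + b * y, 2))                   # predicted average price

# Validation step: compare these predictions with the realized 2019-2021 values.
```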

Keywords: Czech Republic, linear trend estimation, price prediction, water and sewerage charge rate.

PDF Downloads: 202
956 Inferences on Compound Rayleigh Parameters with Progressively Type-II Censored Samples

Authors: Abdullah Y. Al-Hossain

Abstract:

This paper considers inference under progressive type-II censoring with a compound Rayleigh failure time distribution. The maximum likelihood (ML) and Bayes methods are used for estimating the unknown parameters as well as some lifetime parameters, namely the reliability and hazard functions. We obtain Bayes estimators using conjugate priors for the shape and scale parameters. When the two parameters are unknown, closed-form expressions for the Bayes estimators cannot be obtained, so we use Lindley's approximation to compute the Bayes estimates. Another Bayes estimator is obtained based on a continuous-discrete joint prior for the unknown parameters. An example with real data is discussed to illustrate the proposed method. Finally, we compare these estimators with the maximum likelihood estimators using a Monte Carlo simulation study.
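
A hedged numerical sketch of maximum likelihood estimation for the compound Rayleigh density, taken in its commonly used form f(t) = 2αβ^α t (β + t²)^−(α+1), on complete (uncensored) data; the paper's progressive type-II censored likelihood adds censoring terms not shown here:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -np.sum(np.log(2 * a) + a * np.log(b) + np.log(t)
                   - (a + 1) * np.log(b + t**2))

rng = np.random.default_rng(1)
u = rng.uniform(size=200)
a_true, b_true = 2.0, 1.5
t = np.sqrt(b_true * (u ** (-1 / a_true) - 1))     # inverse-CDF sampling
res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(t,), method="Nelder-Mead")
print(res.x)                                        # ML estimates of (alpha, beta)
```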

Keywords: Progressive type II censoring, compound Rayleigh failure time distribution, maximum likelihood estimation, Bayes estimation, Lindley's approximation method, Monte Carlo simulation.

PDF Downloads: 2390
955 GSM Position Tracking using a Kalman Filter

Authors: Jean-Pierre Dubois, Jihad S. Daba, M. Nader, C. El Ferkh

Abstract:

GSM has undoubtedly become the most widespread cellular technology and has established itself as one of the most promising technologies in wireless communication. The next generation of mobile telephones has also become more powerful and innovative, in a way that new services related to the user's location will arise. Beyond the 911 requirements for emergency location initiated by the Federal Communications Commission (FCC) of the United States, GSM positioning can be highly integrated into cellular communication technology for commercial use. However, GSM positioning faces many challenges. Issues like accuracy, availability, reliability and suitable cost make the development and implementation of GSM positioning a challenging task. In this paper, we investigate means for optimal mobile position tracking. We employ an innovative scheme by integrating the Kalman filter into the localization process, given its excellent tracking characteristics. When tracking in two dimensions, the Kalman filter is very powerful due to its reliable performance, as it supports estimation of past, present, and future states, even in unknown environments. We show that enhanced position tracking results are achieved when implementing the Kalman filter for GSM tracking.
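
A hedged sketch of a 2D constant-velocity Kalman filter of the kind used for position tracking; the state is [x, y, vx, vy], measurements are noisy (x, y) fixes, and all noise levels and data are illustrative rather than GSM-derived:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)    # constant-velocity dynamics
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)    # we observe position only
Q = 0.01 * np.eye(4)                                 # process noise (illustrative)
R = 50.0 * np.eye(2)                                 # measurement noise, e.g. cell fixes

x = np.zeros(4)
P = 100.0 * np.eye(4)
for z in [np.array([10.0, 5.0]), np.array([19.0, 11.0]), np.array([31.0, 14.0])]:
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ (z - H @ x)                          # update with measurement
    P = (np.eye(4) - K @ H) @ P
print(np.round(x, 2))                                # position and velocity estimate
```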

Keywords: Cellular communication, estimation, GSM, Kalman filter, positioning

PDF Downloads: 3073
954 SNR Classification Using Multiple CNNs

Authors: Thinh Ngo, Paul Rad, Brian Kelley

Abstract:

Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. However, an unacceptably low accuracy of less than 50% undermines the traditional application of DL classification to SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and thereby enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification for a single SNR with two labels: less than, or greater than or equal to, that SNR. Together, the multiple CNNs are combined to effectively classify over a range of SNR values from −20 ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves an individual SNR prediction accuracy of 92%, a composite accuracy of 70%, and prediction convergence one order of magnitude faster than that of traditional estimation.
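
A hedged sketch of the divide-and-conquer fusion step: each binary classifier k answers "is SNR ≥ threshold_k?", and the fused estimate is read off from where the answers flip. The CNNs themselves are abstracted as probabilities, and the threshold spacing is an assumption:

```python
import numpy as np

thresholds = np.arange(-20, 34, 2)        # one binary CNN per SNR threshold [dB]

def fuse(binary_probs: np.ndarray) -> float:
    """binary_probs[k] = P(SNR >= thresholds[k]) from CNN k.
    Count confident 'yes' answers to locate the SNR interval."""
    yes = binary_probs >= 0.5
    if not yes.any():
        return float(thresholds[0] - 2)
    return float(thresholds[yes.sum() - 1])  # last threshold still exceeded

# Example: a well-behaved bank of classifiers for a true SNR of ~6 dB.
probs = 1.0 / (1.0 + np.exp(thresholds - 6.0))   # sigmoid stand-in for the CNNs
print(fuse(probs))                                # -> 6.0
```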

Keywords: Classification, classifier fusion, CNN, Deep Learning, prediction, SNR.

PDF Downloads: 720
953 Channel Estimation/Equalization with Adaptive Modulation and Coding over Multipath Faded Channels for WiMAX

Authors: B. Siva Kumar Reddy, B. Lakshmi

Abstract:

Different-order modulations combined with different coding schemes allow sending more bits per symbol, thus achieving higher throughputs and better spectral efficiencies. However, it must also be noted that when using a modulation technique such as 64-QAM with fewer overhead bits, better signal-to-noise ratios (SNRs) are needed to overcome any inter-symbol interference (ISI) and maintain a certain bit error ratio (BER). The use of adaptive modulation allows wireless technologies to yield higher throughputs while also covering long distances. The aim of this paper is to implement the Adaptive Modulation and Coding (AMC) feature of the WiMAX PHY in MATLAB and to analyze the performance of the system under different channel conditions (AWGN, Rayleigh and Rician fading channels) with channel estimation and blind equalization. Simulation results demonstrate that an increase in modulation order leads to increases in throughput and BER values. These results reveal a trade-off among modulation order, FFT length, throughput, BER value and spectral efficiency. The BER changes gradually for the AWGN channel and arbitrarily for the Rayleigh and Rician fading channels.
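
A hedged Python sketch of the generic AMC decision rule: pick the highest-order modulation/coding pair whose SNR threshold the measured SNR exceeds. The threshold values are illustrative, not the WiMAX profile numbers (the paper's implementation is in MATLAB):

```python
AMC_TABLE = [            # (min SNR dB, modulation, coding rate) - illustrative
    (21.0, "64-QAM", "3/4"),
    (16.0, "16-QAM", "3/4"),
    (11.0, "16-QAM", "1/2"),
    (6.0,  "QPSK",   "1/2"),
]

def select_amc(snr_db: float):
    """Return the most spectrally efficient scheme the link can sustain."""
    for threshold, modulation, rate in AMC_TABLE:
        if snr_db >= threshold:
            return modulation, rate
    return "BPSK", "1/2"      # most robust fallback

print(select_amc(18.5))        # ('16-QAM', '3/4')
```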

Keywords: AMC, CSI, CMA, OFDM, OFDMA, WiMAX.

PDF Downloads: 3102
952 Multiple Sensors and JPDA-IMM-UKF Algorithm for Tracking Multiple Maneuvering Targets

Authors: Wissem Saidani, Yacine Morsly, Mohand Saïd Djouadi

Abstract:

In this paper, we consider the problem of tracking multiple maneuvering targets using switching multiple-target motion models. With this paper, we aim to contribute to solving the problem of model-based body motion estimation using data coming from visual sensors. The Interacting Multiple Model (IMM) algorithm is specially designed to accurately track targets whose state and/or measurement models (assumed to be linear) change during motion transitions. However, when these models are nonlinear, the IMM algorithm must be modified in order to guarantee an accurate track. In this paper, we propose to avoid the Extended Kalman filter because of its limitations and to substitute the Unscented Kalman filter, which appears more efficient, particularly according to the simulation results obtained with the nonlinear IMM algorithm (IMM-UKF). To resolve the data association problem, the JPDA approach is combined with the IMM-UKF algorithm; the derived algorithm is denoted JPDA-IMM-UKF.
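
A hedged sketch of the IMM model-probability update at the core of the algorithm: mix the prior model probabilities through the Markov transition matrix, then reweight by each filter's measurement likelihood. The transition matrix and likelihood values are invented, and the per-model UKFs are abstracted away:

```python
import numpy as np

P_trans = np.array([[0.95, 0.05],     # model switching probabilities
                    [0.10, 0.90]])    # (constant velocity <-> maneuver)

def imm_update(mu_prev: np.ndarray, likelihoods: np.ndarray) -> np.ndarray:
    """mu_j(k) is proportional to L_j(k) * sum_i p_ij * mu_i(k-1)."""
    predicted = P_trans.T @ mu_prev            # c_j = sum_i p_ij mu_i
    mu = likelihoods * predicted
    return mu / mu.sum()

mu = np.array([0.8, 0.2])                      # prior model probabilities
L = np.array([0.02, 0.30])                     # UKF likelihoods during a maneuver
print(imm_update(mu, L))                       # probability shifts to model 2
```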

Keywords: Estimation, Kalman filtering, Multi-Target Tracking, Visual servoing, data association.

PDF Downloads: 2562