Search results for: Pose Estimation
832 UAV Position Estimation Using Remote Radio Head With Adaptive Power Control
Authors: Hyeon-Cheol Lee
Abstract:
The adaptive power control of Code Division Multiple Access (CDMA) communications using Remote Radio Head (RRH) between multiple Unmanned Aerial Vehicles (UAVs) with a link-budget based Signal-to-Interference Ratio (SIR) estimate is applied to four inner loop power control algorithms. It is concluded that the Base Station (BS) can calculate not only the UAV distance, using the linearity between speed and the Consecutive Transmit-Power-Control Ratio (CTR) of Adaptive Step-size Closed Loop Power Control (ASCLPC), Consecutive TPC Ratio Step-size Closed Loop Power Control (CS-CLPC) and Fixed Step-size Power Control (FSPC), but also the UAV position, using the Received Signal Strength Indicator (RSSI) ratio of the RRHs.
Keywords: speed estimation, adaptive power control, link-budget, SIR, multi-bit quantizer, RRH
831 Speed Sensorless Direct Torque Control of a PMSM Drive using Space Vector Modulation Based MRAS and Stator Resistance Estimator
Authors: A. Ameur, B. Mokhtari, N. Essounbouli, L. Mokrani
Abstract:
This paper presents a speed sensorless direct torque control scheme using space vector modulation (DTC-SVM) for a permanent magnet synchronous motor (PMSM) drive, based on a Model Reference Adaptive System (MRAS) algorithm and a stator resistance estimator. The MRAS is utilized to estimate the speed and stator resistance and to compensate the effects of stator resistance variation, which makes flux and torque estimation more accurate and insensitive to parameter variation. On the other hand, the use of the SVM method reduces the torque ripple while achieving a good dynamic response. Simulation results are presented and show the effectiveness of the proposed method.
Keywords: MRAS, PMSM, SVM, DTC, Speed and Resistance estimation, Sensorless drive
830 Identification of LTI Autonomous All Pole System Using Eigenvector Algorithm
Authors: Sudipta Majumdar
Abstract:
This paper presents a method for identification of a linear time invariant (LTI) autonomous all-pole system using singular value decomposition. The novelty of this paper is twofold: first, a MUSIC algorithm for estimating complex frequencies from real measurements is proposed; secondly, using the proposed algorithm, we can identify the coefficients of the differential equation that determines the LTI system by switching off our input signal. For this purpose, we need only to switch off the input, apply our complex MUSIC algorithm and determine the coefficients as symmetric polynomials in the complex frequencies. This method can be applied to unstable systems and has higher resolution than a time series solution when noisy data are used. The classical performance bound, the Cramer-Rao bound (CRB), has been used as a basis for performance comparison of the proposed method for multiple pole estimation in a noisy exponential signal.
Keywords: MUSIC algorithm, Cramer-Rao bound, frequency estimation.
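As background on the subspace idea the abstract builds on, the following is a minimal MUSIC pseudospectrum sketch in Python (NumPy only); the test signal, model order and frequency grid are hypothetical choices, not the authors' complex-measurement variant or their coefficient-recovery step.

```python
import numpy as np

def music_spectrum(x, p, m=20, grid=None):
    """MUSIC pseudospectrum of a 1-D signal x.
    p: assumed number of complex exponentials (model order), m: covariance size (m > p)."""
    N = len(x)
    # Sample covariance from overlapping snapshots of length m.
    X = np.array([x[i:i + m] for i in range(N - m + 1)])
    R = X.conj().T @ X / X.shape[0]
    # Eigendecomposition: the m - p smallest eigenvalues span the noise subspace.
    w, V = np.linalg.eigh(R)
    En = V[:, : m - p]
    if grid is None:
        grid = np.linspace(0.0, 0.5, 2001)           # normalized frequencies
    P = np.empty(len(grid))
    for i, f in enumerate(grid):
        a = np.exp(2j * np.pi * f * np.arange(m))    # steering vector
        P[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return grid, P

# Hypothetical test signal: two real sinusoids in white noise.
rng = np.random.default_rng(0)
n = np.arange(200)
x = np.sin(2 * np.pi * 0.12 * n) + 0.5 * np.sin(2 * np.pi * 0.27 * n) \
    + 0.1 * rng.standard_normal(200)
f, P = music_spectrum(x, p=4)   # 4 complex exponentials = 2 real sinusoids
print("largest pseudospectrum peaks near:", f[np.argsort(P)[-6:]])
```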
829 Design of Nonlinear Observer by Using Augmented Linear System based on Formal Linearization of Polynomial Type
Authors: Kazuo Komatsu, Hitoshi Takata
Abstract:
The objective of this study is to propose an observer design for nonlinear systems by using an augmented linear system derived by application of a formal linearization method. A given nonlinear differential equation is linearized by the formal linearization method which is based on Taylor expansion considering up to the higher order terms, and a measurement equation is transformed into an augmented linear one. To this augmented dimensional linear system, a linear estimation theory is applied and a nonlinear observer is derived. As an application of this method, an estimation problem of transient state of electric power systems is studied, and its numerical experiments indicate that this observer design shows remarkable performances for nonlinear systems.
Keywords: nonlinear system, augmented linear system, nonlinear observer, formal linearization, electric power system.
828 Automatic Detection of Mass Type Breast Cancer using Texture Analysis in Korean Digital Mammography
Authors: E. B. Jo, J. H. Lee, J. Y. Park, S. M. Kim
Abstract:
In this study, we present an advanced detection technique for mass type breast cancer based on texture information of organs. The proposed method detects the cancer areas in three stages. In the first stage, the midpoints of the mass area are determined based on AHE (Adaptive Histogram Equalization). In the second stage, we set the threshold coefficient of homogeneity by using MLE (Maximum Likelihood Estimation) to compute the uniformity of texture. Finally, mass type cancer tissues are extracted from the original image. As a result, it was observed that the proposed method shows an improved detection performance on dense breast tissues of Korean women compared with the existing methods. It is expected that the proposed method may provide additional diagnostic information for detection of mass-type breast cancer.
Keywords: Mass Type Breast Cancer, Mammography, Maximum Likelihood Estimation (MLE), Ranklets, SVM
827 Life Estimation of Induction Motor Insulation under Non-Sinusoidal Voltage and Current Waveforms Using Fuzzy Logic
Authors: Triloksingh G. Arora, Mohan V. Aware, Dhananjay R. Tutakne
Abstract:
Thyristor-based firing angle controlled voltage regulators are extensively used for speed control of single phase induction motors. This leads to power saving, but the applied voltage and current waveforms become non-sinusoidal. These non-sinusoidal waveforms increase voltage and thermal stresses, which result in accelerated insulation aging, thus reducing the motor life. Life models that allow predicting the capability of insulation under such multi-stress situations tend to be very complex and somewhat impractical. This paper presents a fuzzy logic application to investigate the synergic effect of voltage and thermal stresses on intrinsic aging of induction motor insulation. A fuzzy expert system is developed to estimate the life of induction motor insulation under multiple stresses. Three insulation degradation parameters, viz. peak modification factor, wave shape modification factor and thermal loss, are experimentally obtained for different firing angles. The fuzzy expert system consists of fuzzification of the insulation degradation parameters, algorithms based on the inverse power law to estimate the life, and a defuzzification process to output the life. An electro-thermal life model is developed from the results of the fuzzy expert system. This fuzzy logic based electro-thermal life model can be used for life estimation of induction motors operated with non-sinusoidal voltage and current waveforms.
Keywords: Aging, Dielectric losses, Insulation and Life Estimation.
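For readers unfamiliar with the aging laws referenced above, here is a minimal electro-thermal sketch combining the inverse power law for voltage stress with an Arrhenius-type thermal term; all constants are illustrative assumptions, not values identified by the paper's fuzzy expert system.

```python
import math

def insulation_life_hours(V, T_hotspot_C,
                          L0=20000.0,   # reference life in hours at rated stress (assumed)
                          V0=230.0,     # rated voltage, V (assumed)
                          n=9.0,        # voltage-endurance exponent (assumed)
                          B=11000.0,    # Arrhenius constant, kelvin (assumed)
                          T0_C=120.0):  # reference hot-spot temperature, deg C (assumed)
    """Electro-thermal life estimate: inverse power law times Arrhenius acceleration."""
    T, T0 = T_hotspot_C + 273.15, T0_C + 273.15
    electrical = (V0 / V) ** n                      # inverse power law in voltage
    thermal = math.exp(B * (1.0 / T - 1.0 / T0))    # Arrhenius thermal term
    return L0 * electrical * thermal

# Non-sinusoidal supply raises the effective stresses: higher peak voltage and
# hot-spot temperature shorten the predicted life relative to the rated condition.
print(insulation_life_hours(V=230.0, T_hotspot_C=120.0))  # reference case -> L0
print(insulation_life_hours(V=253.0, T_hotspot_C=130.0))  # elevated stresses -> shorter life
```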
826 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study
Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb
Abstract:
The purpose of this study is to reduce the radiation dose for chest CT examinations by including Tube Current Modulation (TCM) in a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the Dose Length Product (DLP) by a conversion factor. Results were compared to those measured with the CT-Expo software. The size specific dose estimation (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a conversion factor related to the phantom's effective diameter. Objective assessment of image quality was performed with Signal to Noise Ratio (SNR) measurements in the phantom. SPSS software was used for data analysis. Results showed that, with CARE Dose 4D included, ED was lowered by 48.35% and 51.51% according to DLP and CT-Expo, respectively. In addition, ED ranged between 7.01 mSv and 6.6 mSv for the standard protocol, while it ranged between 3.62 mSv and 3.2 mSv with TCM. Similar results were found for SSDE: the dose was 16.25 mGy without TCM and was lower by 48.8% with TCM. The SNR values calculated were significantly different (p=0.03<0.05). The highest one was measured on images acquired with TCM and reconstructed with Filtered Back Projection (FBP). In conclusion, this study proves the potential of the TCM technique for SSDE and ED reduction while conserving image quality with a high diagnostic reference level for thoracic CT examinations.
Keywords: Anthropomorphic phantom, computed tomography, CT-expo, radiation dose.
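The two dose quantities in this abstract are simple products, so a worked sketch may help; the chest conversion coefficient and the size-dependent factor below are illustrative assumptions (the study derives its values from DLP/CT-Expo and the phantom's effective diameter), not numbers reported by the authors.

```python
# Effective dose:      ED   = DLP * k      (k: region-specific factor, mSv/(mGy*cm))
# Size-specific dose:  SSDE = CTDIvol * f  (f: factor from the patient's effective diameter)

def effective_dose(dlp_mGy_cm, k=0.014):   # k ~ 0.014 often tabulated for adult chest (assumed)
    return dlp_mGy_cm * k

def ssde(ctdivol_mGy, f_size=1.2):         # f_size looked up from effective diameter (assumed)
    return ctdivol_mGy * f_size

# Hypothetical example: DLP of 500 mGy*cm without TCM vs. 250 mGy*cm with TCM.
print(effective_dose(500.0), "mSv without TCM")
print(effective_dose(250.0), "mSv with TCM")
print(ssde(13.5), "mGy SSDE for CTDIvol = 13.5 mGy")
```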
825 Kernel Matching versus Inverse Probability Weighting: A Comparative Study
Authors: Andy Handouyahia, Tony Haddad, Frank Eaton
Abstract:
A recent quasi-experimental evaluation of the Canadian Active Labour Market Policies (ALMP) by Human Resources and Skills Development Canada (HRSDC) has provided an opportunity to examine alternative methods for estimating the incremental effects of Employment Benefits and Support Measures (EBSMs) on program participants. The focus of this paper is to assess the efficiency and robustness of inverse probability weighting (IPW) relative to kernel matching (KM) in the estimation of program effects. To accomplish this objective, the authors compare pairs of 1,080 estimates, along with their associated standard errors, to assess which type of estimate is generally more efficient and robust. In the interest of practicality, the authors also document the computational time it took to produce the IPW and KM estimates, respectively.
Keywords: Treatment effect, causal inference, observational studies, Propensity score based matching, Kernel Matching, Inverse Probability Weighting, Estimation methods for incremental effect.
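As a compact reference for one of the two estimators compared, here is a minimal inverse probability weighting sketch for the average treatment effect on the treated, using a logistic propensity model; the simulated data and variable names are hypothetical, and this is not HRSDC's actual estimation pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 3))                               # observed covariates
p_true = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
T = rng.binomial(1, p_true)                               # program participation indicator
Y = 2.0 * T + X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)  # outcome, true effect = 2

# 1) Propensity scores from a logistic regression.
e = LogisticRegression(max_iter=1000).fit(X, T).predict_proba(X)[:, 1]

# 2) IPW estimator of the ATT: treated kept as-is, controls re-weighted by e/(1-e).
w = e / (1 - e)
att = Y[T == 1].mean() - np.average(Y[T == 0], weights=w[T == 0])
print("IPW ATT estimate:", round(att, 3))                 # should be close to 2
```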
824 On the Optimality of Blocked Main Effects Plans
Authors: Rita SahaRay, Ganesh Dutta
Abstract:
In this article, experimental situations are considered where a main effects plan is to be used to study m two-level factors using n runs which are partitioned into b blocks, not necessarily of the same size. Assuming the block sizes to be even for all blocks, for the case n ≡ 2 (mod 4), optimal designs are obtained with respect to type 1 and type 2 optimality criteria in the class of designs providing estimation of all main effects orthogonal to the block effects. In practice, such orthogonal estimation of main effects is often a desirable condition. In the wider class of all available m two-level even-sized blocked main effects plans, where the factors do not occur at high and low levels equally often in each block, E-optimal designs are also characterized. Simple construction methods based on Hadamard matrices and the Kronecker product for these optimal designs are presented.
Keywords: Design matrix, Hadamard matrix, Kronecker product, type 1 criteria, type 2 criteria.
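The constructions in this abstract rest on Hadamard matrices and Kronecker products; the short sketch below only illustrates those two building blocks in Python (a column subset of a Hadamard matrix as a two-level orthogonal main effects plan, and a Kronecker product replicating runs across blocks). It is not the authors' specific optimal blocked designs.

```python
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8)                  # 8 x 8 Hadamard matrix with +/-1 entries
X = H[:, 1:]                     # drop the all-ones column: 8 runs, 7 two-level factors
print(X.shape)                   # (8, 7) orthogonal main effects plan

# Kronecker product: repeat the 8-run plan over 2 blocks (an illustrative layout only).
runs = np.kron(np.ones((2, 1), dtype=int), X)   # 16 runs in total
block_id = np.repeat([1, 2], 8)                 # block labels for the 16 runs
print(runs.shape, block_id)
```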
823 Estimation of the External Force for a Co-Manipulation Task Using the Drive Chain Robot
Authors: Sylvain Devie, Pierre-Philippe Robet, Yannick Aoustin, Maxime Gautier
Abstract:
The aim of this paper is to show that the observation of the external effort and the sensor-less control of a system are limited by the mechanical system. First, the model of a one-joint robot with a prismatic joint is presented. Based on this model, two different procedures were performed in order to identify the mechanical parameters of the system and observe the external effort applied on it. Experiments have proven that the accuracy of the force observer, based on the DC motor current, is limited by the mechanics of the robot. The sensor-less control is limited by the accuracy of the estimated mechanical parameters and by the maximum static friction force, which is the minimum force that can be observed in this case. The consequence of this limitation is that industrial robots without a specific design are not well adapted to perform sensor-less precision tasks. Finally, an efficient control law is presented for high effort applications.
Keywords: Control, Identification, Robot, Co-manipulation, Sensor-less.
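A minimal sketch of the kind of current-based external force observer discussed above, for a single prismatic joint driven by a DC motor; the dynamic model and parameter values are assumptions for illustration, not the identified values of the Drive Chain Robot.

```python
import numpy as np

def observe_external_force(i_motor, v, a,
                           Kt=0.5,    # force constant, N per A (assumed)
                           M=10.0,    # moving mass, kg (assumed)
                           Fv=5.0,    # viscous friction, N.s/m (assumed)
                           Fc=8.0):   # Coulomb friction level, N (assumed)
    """External force estimate from the motor current and identified dynamics:
    F_ext = Kt*i - M*a - Fv*v - Fc*sign(v)."""
    return Kt * i_motor - M * a - Fv * v - Fc * np.sign(v)

# The static friction level acts as a detection floor: estimates smaller than the
# maximum static friction force cannot be distinguished from friction itself.
F_static_max = 9.0  # N (assumed)
F_hat = observe_external_force(i_motor=30.0, v=0.1, a=0.2)
print(F_hat, "N", "(below observer resolution)" if abs(F_hat) < F_static_max else "")
```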
822 Application of GM (1, 1) Model Group Based on Recursive Solution in China's Energy Demand Forecasting
Authors: Yeqing Guan, Fen Yang
Abstract:
To learn about China's future energy demand, this paper first proposes a GM(1,1) model group based on recursive solutions for parameter estimation and sets up a general solving algorithm for the model group. This method avoids the problems encountered in past research, namely remodeling, loss of information and a large amount of calculation. The paper establishes all-data GM(1,1), metabolic GM(1,1) and new-information GM(1,1) models according to the historical data of energy consumption in China for the years 2005-2010 and the added data of 2011; after modeling, simulating and comparing accuracies, the optimal models were selected and used for prediction. Results showed that the total energy demand of China will be 37.2221 billion tons of equivalent coal in 2012 and 39.7973 billion tons of equivalent coal in 2013, which is consistent with the overall planning of energy demand in The 12th Five-Year Plan.
Keywords: energy demands, GM(1, 1) model group, least square estimation, prediction
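For readers unfamiliar with the grey model used here, the following is a minimal all-data GM(1,1) sketch with least squares parameter estimation; the demand series is hypothetical, and the recursive, metabolic and new-information variants of the paper are not reproduced.

```python
import numpy as np

def gm11_forecast(x0, steps=2):
    """All-data GM(1,1): fit on the series x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                               # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)   # least squares estimate of a, b
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)            # inverse AGO
    return x0_hat[n:]                                # forecasts beyond the sample

# Hypothetical annual energy-consumption series (arbitrary units).
series = [29.1, 30.6, 32.0, 33.6, 35.2, 36.9]
print(gm11_forecast(series, steps=2))
```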
821 The Results of the Fetal Weight Estimation of the Infants Delivered in the Delivery Room at Dan Khunthot Hospital by Johnson's Method
Authors: Nareelux Suwannobol, JintanaTapin, Khuanchanok Narachan
Abstract:
The objective of this study was to determine the accuracy of fetal weight estimation by Johnson's method and compare it with the actual birth weight. The sample group was 126 infants delivered in Dan Khunthot hospital from January to March 2012. Fetal weight was estimated by measuring fundal height according to Johnson's method. The information was collected by studying historical delivery records and then analyzed by using the statistics of frequency, percentage, mean, and standard deviation. Finally, the difference was analyzed by a paired t-test. The results showed an average actual birth weight of 3,093.57 ± 391.03 g (mean ± SD) and an average estimated fetal weight by Johnson's method of 3,455 ± 454.55 g; on average, the estimated fetal weight was 384.09 g higher than the actual birth weight. When classifying the infants according to birth weight, it was found that for low birth weight (<2,500 g) and appropriate birth weight (2,500-3,999 g) infants the actual birth weight was less than the estimated fetal weight, whereas for high birth weight (>4,000 g) infants the actual birth weight was more than the estimated fetal weight. The smallest difference between actual birth weight and estimated fetal weight was found in the high birth weight group (>4,000 g), followed by the appropriate birth weight (2,500-3,999 g) and low birth weight (<2,500 g) groups, respectively. The rate of estimated fetal weights within 10% of the actual birth weight was 35.7%. When the actual birth weights were compared with the estimates, the difference was statistically significant (p < .000). Employing Johnson's method to estimate fetal weight can provide an initial estimate before proceeding to special examinations, which may require excessively high cost. A variety of methods should be employed to estimate fetal weight more precisely, which will help plan care for the mother's and infant's safety.
Keywords: Johnson's method, Fetal weight estimate, Delivery Room, Student nurse.
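Since the abstract is built around Johnson's formula, a small worked sketch may help. Note that the constants below follow one commonly cited form of the formula (weight in grams = (fundal height in cm − n) × 155, with n depending on the station of the presenting part); sources vary slightly on n, so treat these constants as assumptions rather than the exact rule used at Dan Khunthot Hospital.

```python
def johnsons_efw(fundal_height_cm, station_below_spines=False):
    """Johnson's formula for estimated fetal weight in grams.
    One commonly cited form: EFW = (fundal height - n) * 155, with
    n = 12 if the presenting part is above the ischial spines and
    n = 11 if it is at or below them (constants vary between sources)."""
    n = 11 if station_below_spines else 12
    return (fundal_height_cm - n) * 155

# Hypothetical example: fundal height 34 cm, vertex above the ischial spines.
print(johnsons_efw(34), "g")   # (34 - 12) * 155 = 3410 g
```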
820 Evaluating Accuracy of Foetal Weight Estimation by Clinicians in Christian Medical College Hospital, India and Its Correlation to Actual Birth Weight: A Clinical Audit
Authors: Aarati Susan Mathew, Radhika Narendra Patel, Jiji Mathew
Abstract:
A retrospective study was conducted at Christian Medical College (CMC) Teaching Hospital, Vellore, India on 14th August 2014 to assess the accuracy of clinically estimated foetal weight upon labour admission. Estimating foetal weight is a crucial factor in assessing maternal and foetal complications during and after labour. Medical notes of ninety-eight postnatal women who fulfilled the inclusion criteria were studied to evaluate the correlation between their recorded Estimated Foetal Weight (EFW) on admission and the actual birth weight (ABW) of the newborn after delivery. Data concerning maternal and foetal demographics were also noted. Accuracy was determined by the absolute percentage error and the proportion of estimates within 10% of ABW. Actual birth weights ranged from 950-4080 g. A strong positive correlation between EFW and ABW (r=0.904) was noted. Term deliveries (≥40 weeks) in the normal weight range (2500-4000 g) had a 59.5% estimation accuracy (n=74) compared to pre-term (<40 weeks) with an estimation accuracy of 0% (n=2). Among the term deliveries, macrosomic babies (>4000 g) were underestimated by 25% (n=3) and low birthweight (LBW) babies were overestimated by 12.7% (n=9). Registrars who estimated foetal weight were accurate for babies within normal weight ranges. However, there needs to be an improvement in predicting the weight of macrosomic and LBW foetuses. We have suggested the use of an amended version of Johnson's formula for the Indian population for improvement, and a need to re-audit once implemented.
Keywords: Clinical palpation, estimated foetal weight, pregnancy, India, Johnson's formula.
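The audit's two accuracy measures are straightforward to compute; here is a minimal sketch with hypothetical paired EFW/ABW values (not the study's data) showing the absolute percentage error and the proportion of estimates within 10% of the actual birth weight.

```python
import numpy as np

# Hypothetical paired estimates (EFW) and actual birth weights (ABW), in grams.
efw = np.array([3200, 2900, 3600, 4100, 2500])
abw = np.array([3050, 3100, 3500, 3800, 2600])

ape = np.abs(efw - abw) / abw * 100        # absolute percentage error per case
within_10pct = np.mean(ape <= 10) * 100    # share of estimates within 10% of ABW

print("mean APE: %.1f%%" % ape.mean())
print("estimates within 10%% of ABW: %.1f%%" % within_10pct)
```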
819 A Fast Adaptive Tomlinson-Harashima Precoder for Indoor Wireless Communications
Authors: M. Naresh Kumar, Abhijit Mitra, C. Ardil
Abstract:
A fast adaptive Tomlinson-Harashima (T-H) precoder structure is presented for indoor wireless communications, where the channel may vary due to rotation and small movements of the mobile terminal. A frequency-selective slow fading channel which is time-invariant over a frame is assumed. In this adaptive T-H precoder, the feedback coefficients are updated at the end of every uplink frame by using a system identification technique for channel estimation, in contrast to the conventional T-H precoding concept where the channel is estimated at the start of the uplink frame via the Wiener solution. In the conventional T-H precoder it is assumed that the channel is time-invariant over both the uplink and downlink frames. By assuming the channel is time-invariant over only one frame instead of two, the proposed adaptive T-H precoder yields better performance than the conventional T-H precoder when the channel varies in the uplink after the training sequence has been received.
Keywords: Tomlinson-Harashima precoder, Adaptive channel estimation, Indoor wireless communication, Bit error rate.
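For context on the precoding structure itself (independent of the adaptive update proposed in the paper), below is a minimal Tomlinson-Harashima sketch for M-PAM over a known monic FIR channel; the channel taps and symbol alphabet are hypothetical.

```python
import numpy as np

M = 4                                     # 4-PAM alphabet {-3, -1, 1, 3}
h = np.array([1.0, 0.5, -0.2])            # monic channel impulse response (assumed known)

def th_precode(symbols, h, M):
    """Tomlinson-Harashima precoding: ISI pre-subtraction with modulo-2M reduction."""
    L = len(h) - 1
    x = np.zeros(len(symbols) + L)        # L leading zeros model the initial state
    for n, a in enumerate(symbols):
        # ISI from the last L transmitted samples (most recent first).
        isi = np.dot(h[1:], x[n + L - 1::-1][:L])
        v = a - isi
        x[n + L] = (v + M) % (2 * M) - M  # fold into [-M, M)
    return x[L:]

rng = np.random.default_rng(2)
a = rng.choice([-3, -1, 1, 3], size=10)
x = th_precode(a, h, M)
y = np.convolve(x, h)[: len(a)]           # noiseless channel output
a_hat = (y + M) % (2 * M) - M             # receiver-side modulo removes the folding
print(np.allclose(a_hat, a))              # True: symbols recovered
```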
818 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning
Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
This study suggests an estimation method of the stress distribution for beam structures based on TLS (Terrestrial Laser Scanning). The main components of the method are the creation of lattices from the TLS raw data to satisfy a suitable condition and the application of CSSI (Cubic Smoothing Spline Interpolation) for estimating the stress distribution. Estimation of the stress distribution for a structural member or the whole structure is one of the important factors for the safety evaluation of a structure. Existing sensors, which include the ESG (electric strain gauge) and LVDT (Linear Variable Differential Transformer), can be categorized as contact-type sensors which must be installed on the structural members; they also have various limitations, such as the need for separate space where the network cables are installed and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS system of LiDAR (light detection and ranging), which can measure the displacement of a target over a long range without the influence of the surrounding environment and can also capture the whole shape of the structure, has been applied to the field of structural health monitoring. The important characteristic of TLS measurement is the formation of point clouds, which contain many points with local coordinates. Point clouds are not linearly distributed but dispersed; thus, interpolation is essential for their analysis. Through the formation of averaged lattices and CSSI on the raw data, a method which can estimate the displacement of a simple beam was developed. The developed method can also be extended to calculate the strain and finally applied to estimate the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured by TLS. Through a comparison of the estimated stress and the reference stress, the validity of the method is confirmed.
Keywords: Structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation.
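To make the spline-to-stress step concrete, here is a minimal sketch using SciPy's cubic smoothing spline on a synthetic deflection profile: curvature from the spline's second derivative, strain from Euler-Bernoulli beam theory, and stress from Hooke's law. The beam properties and point set are hypothetical, and the lattice-averaging stage of the paper is not reproduced.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical "scanned" deflection profile of a simply supported beam (m), with noise.
L, E, c = 6.0, 2.1e11, 0.15        # span (m), Young's modulus (Pa), half-depth (m) - assumed
x = np.linspace(0.0, L, 200)
w_true = -0.002 * np.sin(np.pi * x / L)               # smooth deflection shape
w_meas = w_true + 1e-5 * np.random.default_rng(3).standard_normal(x.size)

# Cubic smoothing spline fitted to the noisy points (s controls the smoothing).
spline = UnivariateSpline(x, w_meas, k=3, s=len(x) * (2e-5) ** 2)

# Small-deflection beam theory: curvature ~ w''(x), strain = -w''*c, stress = E*strain.
curvature = spline.derivative(n=2)(x)
strain = -curvature * c
stress = E * strain                                   # extreme-fibre bending stress, Pa

print("max bending stress: %.1f MPa" % (np.abs(stress).max() / 1e6))
```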
817 Estimation of Attenuation and Phase Delay in Driving Voltage Waveform of an Ultra-High-Speed Image Sensor by Dimensional Analysis
Authors: V. T. S. Dao, T. G. Etoh, C. Vo Le, H. D. Nguyen, K. Takehara, T. Akino, K. Nishi
Abstract:
We present an explicit expression to estimate driving voltage attenuation through an RC network representation of an ultra-high-speed image sensor. The Elmore delay metric for a fundamental RC chain is employed as the first-order approximation. By application of dimensional analysis to SPICE simulation data, we found a simple expression that significantly improves the accuracy of the approximation. The estimation error of the resultant expression for uniform RC networks is less than 2%. Similarly, another simple closed-form model to estimate the 50% delay through fundamental RC networks is also derived with sufficient accuracy. The framework of this analysis can be extended to address delay or attenuation issues of other VLSI structures.
Keywords: Dimensional Analysis, Elmore model, RC network, Signal Attenuation, Ultra-High-Speed Image Sensor.
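A minimal sketch of the Elmore delay metric that the paper uses as its first-order approximation, for a uniform RC chain driven at one end; the element values are illustrative, and the dimensional-analysis correction derived by the authors is not included.

```python
import numpy as np

def elmore_delay_chain(R, C):
    """Elmore delay at the far end of an RC ladder:
    T_D = sum_k C[k] * (R[0] + ... + R[k])   (resistance shared with the source path)."""
    R = np.asarray(R, dtype=float)
    C = np.asarray(C, dtype=float)
    return float(np.sum(C * np.cumsum(R)))

# Uniform chain of 10 segments, 100 ohm and 10 fF each (assumed values).
n = 10
td = elmore_delay_chain([100.0] * n, [10e-15] * n)
print("Elmore delay: %.2f ps" % (td * 1e12))   # = R*C*n*(n+1)/2 = 55 ps
```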
816 A Self Adaptive Genetic Based Algorithm for the Identification and Elimination of Bad Data
Authors: A. A. Hossam-Eldin, E. N. Abdallah, M. S. El-Nozahy
Abstract:
The identification and elimination of bad measurements is one of the basic functions of a robust state estimator, as bad data have the effect of corrupting the results of state estimation according to the popular weighted least squares method. However, this is a difficult problem to handle, especially when dealing with multiple errors of the interactive conforming type. In this paper, a self adaptive genetic based algorithm is proposed. The algorithm utilizes the results of the classical linearized normal residuals approach to tune the genetic operators; thus, instead of making a randomized search throughout the whole search space, the search is more likely to be directed, and the optimum solution is obtained at very early stages (a maximum of 5 generations). The algorithm utilizes accumulating databases of already computed cases to reduce the computational burden to a minimum. Tests are conducted with reference to the standard IEEE test systems. Test results are very promising.
Keywords: Bad Data, Genetic Algorithms, Linearized Normal residuals, Observability, Power System State Estimation.
815 Estimation Model for Concrete Slump Recovery by Using Superplasticizer
Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert
Abstract:
This paper introduces a solution for concrete slump recovery using a type-F chemical admixture (naphthalene-based superplasticizer) in practice, in order to solve the problem of concrete becoming unusable after losing its slump, especially in tropical countries that have faster slump loss rates. On the other hand, randomly adding superplasticizer to concrete can cause the concrete to segregate. Therefore, this paper also develops an estimation model used to calculate the amount of the second dose of superplasticizer needed for concrete slump recovery. The fresh properties of ordinary Portland cement concrete with a volumetric ratio of paste to void between aggregate (paste content) of 1.1-1.3, a water-cement ratio in the range of 0.30 to 0.67 and an initial superplasticizer (naphthalene base) dose of 0.25%-1.6% were tested for initial slump and slump loss every 30 minutes for one and a half hours by the slump cone test. Concretes with slump loss ranging from 10% to 90% were re-dosed and successfully recovered back to their initial slump. The slump after re-dosing was measured by the slump cone test. From the results, it is concluded that slump loss was slower for mixes with a high initial dose of superplasticizer, because the added superplasticizer disturbs cement hydration. The required second dose of superplasticizer was affected by two major parameters, the water-cement ratio and the paste content, where a lower water-cement ratio and paste content cause an increase in the required second dose of superplasticizer. The amount of the second dose of superplasticizer is higher as the solid content within the system increases, where the solids can come either from cement particles or from aggregate. The data were analyzed to form an equation used to estimate the amount of the second dose of superplasticizer required to recover the slump to its original value.
Keywords: Estimation model, second superplasticizer dosage, slump loss, slump recovery.
814 A New Approach of Fuzzy Methods for Evaluating of Hydrological Data
Authors: Nasser Shamskia, Seyyed Habib Rahmati, Hassan Haleh , Seyyedeh Hoda Rahmati
Abstract:
The main design criteria for most hydraulic structures are essentially based on runoff, or the discharge of water. Two of those important criteria are runoff and return period. Mostly, these measures are calculated or estimated from stochastic data. Another feature of hydrological data is their impreciseness. Therefore, in order to deal with uncertainty and impreciseness, a new fuzzy method of evaluating hydrological measures, based on Buckley's estimation method, is developed. The method introduces triangular fuzzy numbers for the different measures, in which both the uncertainty and impreciseness concepts are considered. Besides, since another important consideration in most hydrological studies is the comparison of a measure during different months or years, a new fuzzy method which is consistent with the special form of the proposed fuzzy numbers is also developed. Finally, to illustrate the methods more explicitly, the two algorithms are tested on one simple example and a real case study.
Keywords: Fuzzy Discharge, Fuzzy estimation, Fuzzy ranking method, Hydrological data
813 Application of UAS in Forest Firefighting for Detecting Ignitions and 3D Fuel Volume Estimation
Authors: Artur Krukowski, Emmanouela Vogiatzaki
Abstract:
The article presents results from the AF3 project “Advanced Forest Fire Fighting” focused on Unmanned Aircraft Systems (UAS)-based 3D surveillance and 3D area mapping using high-resolution photogrammetric methods from multispectral imaging, also taking advantage of the 3D scanning techniques from the SCAN4RECO project. We also present a proprietary embedded sensor system used for the detection of fire ignitions in the forest using near-infrared based scanner with weight and form factors allowing it to be easily deployed on standard commercial micro-UAVs, such as DJI Inspire or Mavic. Results from real-life pilot trials in Greece, Spain, and Israel demonstrated added-value in the use of UAS for precise and reliable detection of forest fires, as well as high-resolution 3D aerial modeling for accurate quantification of human resources and equipment required for firefighting.
Keywords: Forest wildfires, fuel volume estimation, 3D modeling, UAV, surveillance, firefighting, ignition detectors.
812 Perspectives of Renewable Energy in 21st Century in India: Statistics and Estimation
Authors: Manoj Kumar, Rajesh Kumar
Abstract:
With its favourable geographical conditions, the Indian subcontinent is well suited for renewable energy to flourish. Increasing dependence on coal and other conventional sources is driving the world towards pollution and depletion of resources. This paper presents statistics of energy consumption and energy generation in the Indian subcontinent, which show increasing energy demands surpassing energy generation. With the growth in demand for energy, the usage of coal has increased, since the major portion of energy production in India comes from thermal power plants. The increased usage of thermal power plants causes pollution and depletion of reserves; hence, a paradigm shift to renewable sources is inevitable. In this work, the capacity and potential of renewable sources in India are analyzed. Based on this analysis, the future potential of these sources is estimated.
Keywords: Energy consumption and generation, depletion of reserves, pollution, estimation, renewable sources.
811 An Efficient Adaptive Thresholding Technique for Wavelet Based Image Denoising
Authors: D.Gnanadurai, V.Sadasivam
Abstract:
This framework describes a computationally more efficient and adaptive threshold estimation method for image denoising in the wavelet domain based on Generalized Gaussian Distribution (GGD) modeling of subband coefficients. In the proposed method, the threshold estimation is carried out by analysing statistical parameters of the wavelet subband coefficients, such as the standard deviation, arithmetic mean and geometric mean. The noisy image is first decomposed into many levels to obtain different frequency bands. Then a soft thresholding method is used to remove the noisy coefficients, with the optimum threshold value fixed by the proposed method. Experimental results on several test images show that this method yields significantly superior image quality and a better Peak Signal to Noise Ratio (PSNR). To prove the efficiency of this method in image denoising, we have compared it with various denoising methods such as the Wiener filter, average filter, VisuShrink and BayesShrink.
Keywords: Wavelet Transform, Gaussian Noise, Image Denoising, Filter Banks, Thresholding.
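As a point of reference for the soft-thresholding step, here is a minimal BayesShrink-style wavelet denoising sketch with PyWavelets (BayesShrink is one of the baselines the authors compare against; their proposed threshold rule based on subband mean statistics is not reproduced here). The test image and wavelet choice are hypothetical.

```python
import numpy as np
import pywt

def bayes_shrink_denoise(img, wavelet="db8", level=3):
    """Soft-threshold each detail subband with a BayesShrink-style threshold."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Noise std estimated from the finest diagonal subband (robust median estimator).
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    new_coeffs = [coeffs[0]]
    for (cH, cV, cD) in coeffs[1:]:
        out = []
        for c in (cH, cV, cD):
            sigma_x = np.sqrt(max(c.var() - sigma_n**2, 1e-12))  # signal std estimate
            thr = sigma_n**2 / sigma_x                           # BayesShrink threshold
            out.append(pywt.threshold(c, thr, mode="soft"))
        new_coeffs.append(tuple(out))
    return pywt.waverec2(new_coeffs, wavelet)

# Hypothetical noisy test image.
rng = np.random.default_rng(4)
clean = np.outer(np.sin(np.linspace(0, 3, 256)), np.cos(np.linspace(0, 3, 256)))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = bayes_shrink_denoise(noisy)[:256, :256]   # crop in case of reconstruction padding
print("RMSE noisy: %.4f, denoised: %.4f" % (np.sqrt(((noisy - clean) ** 2).mean()),
                                            np.sqrt(((denoised - clean) ** 2).mean())))
```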
810 Critical Assessment of Scoring Schemes for Protein-Protein Docking Predictions
Authors: Dhananjay C. Joshi, Jung-Hsin Lin
Abstract:
Protein-protein interactions (PPI) play a crucial role in many biological processes such as cell signalling, transcription, translation, replication, signal transduction, and drug targeting, etc. Structural information about protein-protein interaction is essential for understanding the molecular mechanisms of these processes. Structures of protein-protein complexes are still difficult to obtain by biophysical methods such as NMR and X-ray crystallography, and therefore protein-protein docking computation is considered an important approach for understanding protein-protein interactions. However, reliable prediction of the protein-protein complexes is still under way. In the past decades, several grid-based docking algorithms based on the Katchalski-Katzir scoring scheme were developed, e.g., FTDock, ZDOCK, HADDOCK, RosettaDock, HEX, etc. However, the success rate of protein-protein docking prediction is still far from ideal. In this work, we first propose a more practical measure for evaluating the success of protein-protein docking predictions, the rate of first success (RFS), which is similar to the concept of mean first passage time (MFPT). Accordingly, we have assessed the ZDOCK bound and unbound benchmarks 2.0 and 3.0. We also created a new benchmark set for protein-protein docking predictions, in which the complexes have experimentally determined binding affinity data. We performed free energy calculation based on the solution of the non-linear Poisson-Boltzmann equation (nlPBE) to improve the binding mode prediction. We used the well-studied barnase-barstar system to validate the parameters for free energy calculations. Besides, the nlPBE-based free energy calculations were conducted for the badly predicted cases by ZDOCK and ZRANK. We found that direct molecular mechanics energetics cannot be used to discriminate the native binding pose from the decoys. Our results indicate that nlPBE-based calculations appeared to be one of the promising approaches for improving the success rate of binding pose predictions.
Keywords: protein-protein docking, protein-protein interaction, molecular mechanics energetics, Poisson-Boltzmann calculations
809 A Study of Adaptive Fault Detection Method for GNSS Applications
Authors: Je Young Lee, Hee Sung Kim, Kwang Ho Choi, Joonhoo Lim, Sebum Chun, Hyung Keun Lee
Abstract:
This study aims to develop an efficient fault detection method for Global Navigation Satellite System (GNSS) applications based on adaptive noise covariance estimation. Due to the dependence on radio frequency signals, GNSS measurements are dominated by systematic errors in the receiver's operating environment. In the proposed method, the pseudorange and carrier-phase measurement noise covariances are obtained at the time propagations and measurement updates of the Carrier-Smoothed Code (CSC) filtering process, respectively. The test statistics for fault detection are generated from the estimated measurement noise covariances. To evaluate the fault detection capability, intentional faults were added to the field-collected measurements. The experiment result shows that the proposed method is efficient in detecting unhealthy measurements and improves GNSS positioning accuracy against fault occurrences.
Keywords: Adaptive estimation, fault detection, GNSS, residual.
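For background on the CSC filtering stage mentioned above, the following is a minimal Hatch-filter sketch that smooths pseudorange with carrier-phase deltas, plus a simple residual test against a fixed threshold; the measurements are simulated, and the adaptive covariance estimation of the paper is not implemented.

```python
import numpy as np

def hatch_filter(pseudorange, carrier_range, window=100):
    """Carrier-smoothed code: blend raw pseudorange with the carrier-propagated estimate."""
    smoothed = np.empty_like(pseudorange)
    smoothed[0] = pseudorange[0]
    for k in range(1, len(pseudorange)):
        n = min(k + 1, window)
        propagated = smoothed[k - 1] + (carrier_range[k] - carrier_range[k - 1])
        smoothed[k] = pseudorange[k] / n + propagated * (n - 1) / n
    return smoothed

# Simulated geometry: range changing linearly; code noise >> carrier noise.
rng = np.random.default_rng(5)
true_range = 2.0e7 + 100.0 * np.arange(600)
code = true_range + 1.0 * rng.standard_normal(600)    # pseudorange noise ~1 m
phase = true_range + 0.01 * rng.standard_normal(600)  # carrier-range noise ~1 cm
code[400] += 15.0                                      # injected fault: 15 m jump

smoothed = hatch_filter(code, phase)
residual = code - smoothed                             # simple code-minus-smoothed test
threshold = 6.0 * residual[:300].std()                 # fixed test threshold (assumed)
print("fault flagged at epochs:", np.where(np.abs(residual) > threshold)[0])
```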
808 Confidence Intervals for the Coefficients of Variation with Bounded Parameters
Authors: Jeerapa Sappakitkamjorn, Sa-aat Niwitpong
Abstract:
In many practical applications in various areas, such as engineering, science and social science, it is known that there exist bounds on the values of unknown parameters. For example, values of some measurements for controlling machines in an industrial process, the weight or height of subjects, the blood pressure of patients and the retirement age of public servants. When interval estimation is considered in a situation where the parameter to be estimated is bounded, it has been argued that the classical Neyman procedure for setting confidence intervals is unsatisfactory. This is due to the fact that the information regarding the restriction is simply ignored. It is, therefore, of significant interest to construct confidence intervals for the parameters that include the additional information on parameter values being bounded, to enhance the accuracy of the interval estimation. Therefore, in this paper, we propose a new confidence interval for the coefficient of variation where the population mean and standard deviation are bounded. The proposed interval is evaluated in terms of coverage probability and expected length via Monte Carlo simulation.
Keywords: Bounded parameters, coefficient of variation, confidence interval, Monte Carlo simulation.
807 A Study on the Location and Range of Obstacle Region in Robot's Point Placement Task based on the Vision Control Algorithm
Authors: Jae Kyung Son, Wan Shik Jang, Sung hyun Shim, Yoon Gyung Sung
Abstract:
This paper is concerned with the application of the vision control algorithm for a robot's point placement task along a discontinuous trajectory caused by an obstacle. The presented vision control algorithm consists of four models, which are the robot kinematic model, the vision system model, the parameter estimation model, and the robot joint angle estimation model. When the robot moves toward a target along a discontinuous trajectory, several types of obstacles appear in two obstacle regions. This study investigates how these changes affect the presented vision control algorithm. The practicality of the vision control algorithm is demonstrated experimentally by performing the robot's point placement task along a discontinuous trajectory caused by an obstacle.
Keywords: Vision control algorithm, location of obstacle region, range of obstacle region, point placement.
806 Big Data: Big Challenges to Privacy and Data Protection
Authors: Abu Bakar Munir, Siti Hajar Mohd Yasin, Firdaus Muhammad-Sukki
Abstract:
This paper seeks to analyse the benefits of big data and, more importantly, the challenges it poses to the subject of privacy and data protection. First, the nature of big data will be briefly deliberated before presenting its potential in the present day. Afterwards, the issue of privacy and data protection is highlighted before discussing the challenges of addressing this issue in big data. In conclusion, the paper will put forward the debate on the adequacy of the existing legal framework in protecting personal data in the era of big data.
Keywords: Big data, data protection, information, privacy.
805 The Hyperbolic Smoothing Approach for Automatic Calibration of Rainfall-Runoff Models
Authors: Adilson Elias Xavier, Otto Corrêa Rotunno Filho, Paulo Canedo de Magalhães
Abstract:
This paper addresses the issue of automatic parameter estimation in conceptual rainfall-runoff (CRR) models. Due to threshold structures commonly occurring in CRR models, the associated mathematical optimization problems have the significant characteristic of being strongly non-differentiable. In order to face this enormous task, the resolution method proposed adopts a smoothing strategy using a special C∞ differentiable class function. The final estimation solution is obtained by solving a sequence of differentiable subproblems which gradually approach the original conceptual problem. The use of this technique, called Hyperbolic Smoothing Method (HSM), makes possible the application of the most powerful minimization algorithms, and also allows for the main difficulties presented by the original CRR problem to be overcome. A set of computational experiments is presented for the purpose of illustrating both the reliability and the efficiency of the proposed approach.
Keywords: Rainfall-runoff models, optimization procedure, automatic parameter calibration, hyperbolic smoothing method.
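A minimal sketch of the core smoothing idea: the non-differentiable threshold max(y, 0), of the kind that appears in CRR reservoir equations, is replaced by the C∞ hyperbolic approximation φ(y, τ) = (y + sqrt(y² + τ²)) / 2, and τ is driven towards zero over a sequence of smooth subproblems. The objective below is a toy stand-in, not a rainfall-runoff model.

```python
import numpy as np
from scipy.optimize import minimize

def phi(y, tau):
    """Hyperbolic smoothing of max(y, 0): C-infinity, and -> max(y, 0) as tau -> 0."""
    return 0.5 * (y + np.sqrt(y * y + tau * tau))

# Toy calibration problem with a threshold nonlinearity: fit theta so that
# model(x) = phi(x - theta) matches observations generated with max(x - 2, 0).
x = np.linspace(0.0, 5.0, 50)
obs = np.maximum(x - 2.0, 0.0)

theta = 0.0
for tau in [1.0, 0.3, 0.1, 0.03, 0.01]:              # gradually sharpen the approximation
    objective = lambda t, tau=tau: np.sum((phi(x - t, tau) - obs) ** 2)
    theta = minimize(objective, x0=theta, method="BFGS").x[0]
    print("tau=%.2f -> theta=%.4f" % (tau, theta))   # converges towards the true value 2.0
```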
804 Validation of the Linear Trend Estimation Technique for Prediction of Average Water and Sewerage Charge Rate Prices in the Czech Republic
Authors: Aneta Oblouková, Eva Vítková
Abstract:
The article deals with the issue of water and sewerage charge rate prices in the Czech Republic. The research is specifically focused on the analysis of the development of the average water and sewerage charge rate prices in the Czech Republic in 1994-2021 and on the validation of the chosen methodology for predicting the development of these average prices. The research is based on data collection; the data were obtained from the Czech Statistical Office. The aim of the paper is to validate the relevance of the mathematical linear trend estimation technique for the calculation of predicted average water and sewerage charge rate prices. The real values of the average water and sewerage charge rate prices in the Czech Republic in 1994-2018 were obtained from the Czech Statistical Office and were converted into a mathematical equation. The same type of real data was obtained from the Czech Statistical Office for 2019-2021. The prediction of the average water and sewerage charge rate prices in the Czech Republic in 2019-2021 was also calculated using the chosen method, a linear trend estimation technique. The values obtained from the Czech Statistical Office and the values calculated using the chosen methodology were subsequently compared. The research result is validation of the chosen mathematical technique as suitable for this purpose.
Keywords: Czech Republic, linear trend estimation, price prediction, water and sewerage charge rate.
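A minimal sketch of the linear trend estimation step itself: an ordinary least squares line fitted to year/price pairs and extrapolated to 2019-2021. The price series below is invented for illustration and is not the Czech Statistical Office data.

```python
import numpy as np

# Hypothetical average water and sewerage charge rates (CZK/m3) for a few years.
years = np.array([2014, 2015, 2016, 2017, 2018])
prices = np.array([76.3, 78.1, 79.8, 82.0, 83.9])

# Linear trend estimation: fit price = a*year + b by least squares.
a, b = np.polyfit(years, prices, deg=1)

for year in (2019, 2020, 2021):
    print(year, "predicted: %.2f CZK/m3" % (a * year + b))
```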
803 Inferences on Compound Rayleigh Parameters with Progressively Type-II Censored Samples
Authors: Abdullah Y. Al-Hossain
Abstract:
This paper considers inference under progressive type-II censoring with a compound Rayleigh failure time distribution. The maximum likelihood (ML) and Bayes methods are used for estimating the unknown parameters as well as some lifetime parameters, namely the reliability and hazard functions. We obtained Bayes estimators using the conjugate priors for the two shape and scale parameters. When the two parameters are unknown, closed-form expressions of the Bayes estimators cannot be obtained. We use Lindley's approximation to compute the Bayes estimates. Another Bayes estimator has been obtained based on a continuous-discrete joint prior for the unknown parameters. An example with real data is discussed to illustrate the proposed method. Finally, we make comparisons between these estimators and the maximum likelihood estimators using a Monte Carlo simulation study.
Keywords: Progressive type II censoring, compound Rayleigh failure time distribution, maximum likelihood estimation, Bayes estimation, Lindley's approximation method, Monte Carlo simulation.