Search results for: likelihood estimation method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19946


19646 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of ≳65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs both classification and redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. To optimize future survey efforts, a few model surveys are compared, all designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
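
As an illustration of the template-matching core of such a method, the sketch below classifies a single object by a chi-square comparison of its observed colors against a template library. The array names, shapes, and toy values are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def classify_by_templates(colors, color_errors, templates, template_classes):
    """Pick the template class minimizing chi-square over the library.

    colors:           (n_bands,) observed colors of one object
    color_errors:     (n_bands,) photometric uncertainties
    templates:        (n_templates, n_bands) library of template colors
    template_classes: (n_templates,) labels, e.g. 'star', 'galaxy', 'quasar'
    """
    chi2 = np.sum(((templates - colors) / color_errors) ** 2, axis=1)
    best = np.argmin(chi2)
    return template_classes[best], chi2[best]

# toy usage with a 3-template, 2-color library
templates = np.array([[0.1, 0.4], [0.8, 1.2], [0.3, -0.2]])
labels = np.array(['star', 'galaxy', 'quasar'])
print(classify_by_templates(np.array([0.75, 1.1]),
                            np.array([0.05, 0.05]), templates, labels))
```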

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 45
19645 Estimation of Source Parameters Using Source Parameters Imaging Method From Digitised High Resolution Airborne Magnetic Data of a Basement Complex

Authors: O. T. Oluriz, O. D. Akinyemi, J. A. Olowofela, O. A. Idowu, S. A. Ganiyu

Abstract:

This study was carried out using aeromagnetic data, which record variations in the magnitude of the earth's magnetic field, in order to detect local changes in the properties of the underlying geology. The aeromagnetic data (Sheet No. 261) were acquired in 2009 from the archives of the Nigeria Geological Survey Agency. The study presents the estimation of source parameters within an area of about 3,025 square kilometers covering Ibadan and its environs in Oyo State, southwestern Nigeria. The area under study belongs to part of the basement complex in southwestern Nigeria. Estimation of source parameters from the aeromagnetic data was achieved through the application of the source parameter imaging (SPI) technique, which provides the delineation, depth, dip, contact, susceptibility contrast, and mineral potentials of magnetic signatures within the region. The depth to the magnetic sources in the area ranges from 0.675 km to 4.48 km. The estimated depth limit to shallow sources is 0.695 km and the depth to deep sources is 4.48 km. The apparent susceptibility values obtained for the entire study area range from 0.005 to 0.01 [SI]. This study has shown that the magnetic susceptibility within the study area is controlled mainly by superparamagnetic minerals.

Keywords: aeromagnetic, basement complex, meta-sediment, precambrian

Procedia PDF Downloads 407
19644 An Efficient Collocation Method for Solving the Variable-Order Time-Fractional Partial Differential Equations Arising from the Physical Phenomenon

Authors: Haniye Dehestani, Yadollah Ordokhani

Abstract:

In this work, we present an efficient approach for solving variable-order time-fractional partial differential equations, which is based on Legendre and Laguerre polynomials. First, we introduce the pseudo-operational matrices of integer-order and variable fractional-order integration by using properties of the Riemann-Liouville fractional integral. These matrices are then applied, together with the collocation method and Legendre-Laguerre functions, to solve variable-order time-fractional partial differential equations. An estimation of the error is also presented. Finally, we investigate numerical examples arising in physics to demonstrate the accuracy of the present method. Comparison of the results obtained by the present method with the exact solution and with other methods reveals that the method is very effective.
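
The abstract does not reproduce the underlying operator; for reference, a standard definition of the variable-order Riemann-Liouville fractional integral on which such pseudo-operational matrices are built is

({}_{0}I_{t}^{\alpha(t)} f)(t) = \frac{1}{\Gamma(\alpha(t))} \int_{0}^{t} (t-s)^{\alpha(t)-1} f(s)\, ds, \qquad \alpha(t) > 0,

which reduces to the classical Riemann-Liouville integral when α(t) is constant.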

Keywords: collocation method, fractional partial differential equations, Legendre-Laguerre functions, pseudo-operational matrix of integration

Procedia PDF Downloads 141
19643 A Digital Filter for Symmetrical Components Identification

Authors: Khaled M. El-Naggar

Abstract:

This paper presents a fast and efficient technique for monitoring and supervising power system disturbances generated by the dynamic performance of power systems or by faults. Monitoring power system quantities involves monitoring the fundamental voltage and current magnitudes and their frequencies, as well as their negative- and zero-sequence components, under different operating conditions. The proposed technique is based on the simulated annealing (SA) optimization technique. The method uses a digital set of measurements of the voltage or current waveforms at a power system bus to perform the estimation process digitally. The algorithm is tested using different simulated data to monitor the symmetrical components of power system waveforms. Different study cases are considered in this work, and the effects of the number of samples, the sampling frequency, and the sample window size are studied. Results are reported and discussed.
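
The abstract does not spell out the target quantities; for reference, the symmetrical components being estimated are defined by the Fortescue transform, sketched below under the assumption that the three phase phasors have already been estimated from the sampled waveforms.

```python
import numpy as np

def symmetrical_components(va, vb, vc):
    """Fortescue transform: zero-, positive- and negative-sequence
    components from three complex phase phasors."""
    a = np.exp(2j * np.pi / 3)  # 120-degree rotation operator
    v0 = (va + vb + vc) / 3
    v1 = (va + a * vb + a**2 * vc) / 3
    v2 = (va + a**2 * vb + a * vc) / 3
    return v0, v1, v2

# toy usage: a slightly unbalanced three-phase set
va = 1.00 * np.exp(1j * 0.0)
vb = 0.95 * np.exp(-1j * 2 * np.pi / 3)
vc = 1.05 * np.exp(1j * 2 * np.pi / 3)
print(symmetrical_components(va, vb, vc))
```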

Keywords: estimation, faults, measurement, symmetrical components

Procedia PDF Downloads 440
19642 A Bathtub Curve from Nonparametric Model

Authors: Eduardo C. Guardia, Jose W. M. Lima, Afonso H. M. Santos

Abstract:

This paper presents a nonparametric method to obtain the hazard rate "bathtub curve" for power system components. The model is a mixture of the three known phases of a component's life, the decreasing failure rate (DFR), constant failure rate (CFR), and increasing failure rate (IFR) phases, represented by three parametric Weibull models. The parameters are obtained from a simultaneous fitting process of the model to the kernel nonparametric hazard rate curve. From the Weibull parameters and failure rate curves, the useful lifetime and the characteristic lifetime were defined. To demonstrate the model, historic time-to-failure data of distribution transformers were used as an example. The resulting "bathtub curve" shows the failure rate over the equipment lifetime, which can be applied in economic and replacement decision models.
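
As a sketch of how a three-phase Weibull model of this kind can be evaluated, the snippet below sums one Weibull hazard per life phase. The additive (competing-risks) form and the parameter values are illustrative assumptions; the abstract does not state the mixture form or the fitted parameters.

```python
import numpy as np

def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta-1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def bathtub_hazard(t):
    """Additive three-phase model: DFR (beta<1), CFR (beta=1), IFR (beta>1).
    Parameter values below are illustrative, not fitted."""
    return (weibull_hazard(t, beta=0.5, eta=2.0)     # infant mortality
            + weibull_hazard(t, beta=1.0, eta=20.0)  # random failures
            + weibull_hazard(t, beta=4.0, eta=30.0)) # wear-out

t = np.linspace(0.1, 40, 400)
h = bathtub_hazard(t)         # evaluates the bathtub-shaped curve
print(h[0], h.min(), h[-1])   # high at both ends, low in the middle
```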

Keywords: bathtub curve, failure analysis, lifetime estimation, parameter estimation, Weibull distribution

Procedia PDF Downloads 419
19641 Assessment of the Energy Balance Method in the Case of Masonry Domes

Authors: M. M. Sadeghi, S. Vahdani

Abstract:

Masonry dome structures were widely used for covering large spans in the past. The seismic assessment of these historical structures is very complicated due to the nonlinear behavior of the material, their rigidity, and their special stability configuration. The assessment method based on the energy balance concept, as well as the standard pushover analysis, is used to evaluate the effectiveness of these methods in the case of masonry dome structures. The Soltanieh dome building is used as an example to which the two methods are applied. The performance points, obtained by superimposing the capacity and demand curves in Acceleration-Displacement Response Spectra (ADRS) and energy coordinates, are compared with nonlinear time history analysis as the exact result. The results show good agreement between the dynamic analysis and the energy balance method, but the standard pushover method does not provide an acceptable estimation.

Keywords: energy balance method, pushover analysis, time history analysis, masonry dome

Procedia PDF Downloads 258
19640 Fault Location Identification in High Voltage Transmission Lines

Authors: Khaled M. El Naggar

Abstract:

This paper introduces a digital method for fault section identification in transmission lines. The method uses a digital set of measured short-circuit currents to locate faults in electrical power systems. The digitized current is used to construct an overdetermined system of equations, which is then solved using the proposed digital optimization technique to find the fault distance. The proposed optimization methodology is an application of the simulated annealing optimization technique. The method is tested using a practical case study, and the accurate results obtained show that the algorithm can be used as a powerful tool in the area of power system protection.
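
A minimal sketch of simulated annealing over a least-squares residual follows. The one-parameter current model, cooling schedule, and synthetic data are illustrative assumptions standing in for the paper's overdetermined system.

```python
import math
import random

Z_PER_KM = 0.5   # illustrative line impedance magnitude, ohm/km

def model_current(d):
    """Toy magnitude model: fault current falls off with distance d (km)."""
    return 1.0 / (Z_PER_KM * d)

def residual(d, measurements):
    """Least-squares mismatch of the overdetermined measurement set."""
    return sum((y - model_current(d)) ** 2 for y in measurements)

def simulated_annealing(measurements, d0=50.0, t0=1.0, alpha=0.995, steps=5000):
    d, e = d0, residual(d0, measurements)
    temp = t0
    for _ in range(steps):
        cand = max(1e-3, d + random.gauss(0, 1.0))   # random neighbor, d > 0
        e_cand = residual(cand, measurements)
        # accept improvements always, worse moves with Boltzmann probability
        if e_cand < e or random.random() < math.exp((e - e_cand) / temp):
            d, e = cand, e_cand
        temp *= alpha                                # geometric cooling
    return d, e

random.seed(1)
meas = [model_current(12.5) + random.gauss(0, 0.001) for _ in range(50)]
print(simulated_annealing(meas))   # should approach d ≈ 12.5 km
```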

Keywords: optimization, estimation, faults, measurement, high voltage, simulated annealing

Procedia PDF Downloads 374
19639 Earnings vs Cash Flows: The Valuation Perspective

Authors: Megha Agarwal

Abstract:

This research paper is an effort to compare the earnings-based and cash-flow-based methods of valuation of an enterprise. The theoretically equivalent methods based on earnings, such as the Residual Earnings Model (REM), Abnormal Earnings Growth Model (AEGM), Residual Operating Income Method (ReOIM), Abnormal Operating Income Growth Model (AOIGM) and its extensions, multipliers such as the price/earnings ratio and price/book value ratio, and the cash-flow-based models, such as the Dividend Valuation Method (DVM) and Free Cash Flow Method (FCFM), all provide different estimates of the value of the Indian corporate giant Reliance Industries Limited (RIL). An ex-post analysis of published accounting and financial data for four financial years, from 2008-09 to 2011-12, has been conducted. A comparison of these valuation estimates with the actual market capitalization of the company shows that the complex accounting-based model AOIGM provides the closest forecasts. The differing estimates may derive from inconsistencies in the discount rate, growth rates, and the other forecasted variables. Although inputs for earnings-based models are available to investors and analysts through published statements, precise estimation of free cash flows may be better undertaken by internal management. Estimation of value from more stable parameters such as residual operating income and RNOA could be considered superior to valuations from the more volatile return on equity.
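
For reference, the Residual Earnings Model listed above values equity as current book value plus discounted expected residual earnings; in its standard form (not spelled out in the abstract),

V_0 = B_0 + \sum_{t=1}^{\infty} \frac{E_t - r_E\, B_{t-1}}{(1 + r_E)^t},

where B_t is the book value of equity, E_t the earnings forecast for period t, and r_E the cost of equity.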

Keywords: earnings, cash flows, valuation, Residual Earnings Model (REM)

Procedia PDF Downloads 345
19638 Robust Adaptation to Background Noise in Multichannel C-OTDR Monitoring Systems

Authors: Andrey V. Timofeev, Viktor M. Denisov

Abstract:

A robust sequential nonparametric method is proposed for real-time adaptation to background noise parameters. The distribution of the background noise is modelled as a Huber contamination mixture. The method is designed to operate as an adaptation unit included inside the detection subsystem of an integrated multichannel monitoring system. The proposed method guarantees a given size of the nonasymptotic confidence set for the noise parameters, and its properties are rigorously proved. The proposed algorithm has been successfully tested in real conditions of a functioning C-OTDR monitoring system designed to monitor railways.
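
For reference, the Huber contamination mixture mentioned above models the noise distribution as

F(x) = (1 - \varepsilon)\, G(x) + \varepsilon\, H(x), \qquad 0 \le \varepsilon < 1,

where G is the nominal background-noise distribution, H an arbitrary contaminating distribution, and ε the contamination fraction.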

Keywords: guaranteed estimation, multichannel monitoring systems, non-asymptotic confidence set, contamination mixture

Procedia PDF Downloads 402
19637 Rainfall Estimation over Northern Tunisia by Combining Meteosat Second Generation Cloud Top Temperature and Tropical Rainfall Measuring Mission Microwave Imager Rain Rates

Authors: Saoussen Dhib, Chris M. Mannaerts, Zoubeida Bargaoui, Ben H. P. Maathuis, Petra Budde

Abstract:

In this study, a new method to delineate rain areas in northern Tunisia is presented. The proposed approach is based on blending the geostationary Meteosat Second Generation (MSG) infrared (IR) channel with the low-earth-orbiting passive Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). Blending these two products requires two main steps. First, the rainy pixels are identified. This step is achieved through a classification using the MSG IR 10.8 channel and the water vapor channel WV 6.2, applying a threshold of less than 11 Kelvin on the temperature difference, which approximates the clouds that have a high likelihood of precipitation. The second step consists of fitting the relation between IR cloud top temperature and the TMI rain rates. The correlation between these two variables is negative: rainfall intensity increases with decreasing temperature. The fitted equation is then applied to the whole day of MSG images at 15-minute intervals, which are summed. To validate this combined product, daily extreme rainfall events that occurred during the period 2007-2009 were selected, using a threshold criterion of large rainfall depth (> 50 mm/day) occurring at at least one rainfall station. The inverse distance interpolation method was applied to generate rainfall maps for the drier summer season (May to October) and the wet winter season (November to April). The evaluation results of the rainfall estimated by combining MSG and TMI were very encouraging: all the events were detected as rainy, and the correlation coefficients were much better than those of previously evaluated products over the study area, such as the MSG-MPE and PERSIANN products. The combined product showed better performance during the wet season. We also notice an overestimation of the maximum estimated rain for many events.
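
A compact sketch of the two-step blending is given below; the array names, the mm-per-slot conversion, and the linear form of the IR-to-rain-rate fit are illustrative assumptions.

```python
import numpy as np

def rain_mask(t_ir108, t_wv62, threshold_k=11.0):
    """Step 1: flag pixels whose IR10.8 minus WV6.2 brightness-temperature
    difference is below the threshold (likely precipitating cloud)."""
    return (t_ir108 - t_wv62) < threshold_k

def fit_ir_to_rainrate(t_ir108, tmi_rain):
    """Step 2: linear fit of TMI rain rate against IR cloud-top temperature
    on collocated rainy pixels (colder cloud tops -> higher rain rates)."""
    slope, intercept = np.polyfit(t_ir108, tmi_rain, deg=1)
    return slope, intercept

def daily_rainfall(ir_slots, wv_slots, slope, intercept):
    """Apply the fit to each 15-minute MSG slot and accumulate over the day."""
    total = np.zeros_like(ir_slots[0], dtype=float)
    for t108, t62 in zip(ir_slots, wv_slots):
        rate = np.where(rain_mask(t108, t62),
                        np.maximum(slope * t108 + intercept, 0.0), 0.0)
        total += rate * 0.25       # mm/h rate over a 15-minute slot
    return total
```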

Keywords: combination, extreme, rainfall, TMI-MSG, Tunisia

Procedia PDF Downloads 148
19636 Modelling Causal Effects from Complex Longitudinal Data via Point Effects of Treatments

Authors: Xiaoqin Wang, Li Yin

Abstract:

Background and purpose: In many practices, one estimates causal effects arising from a complex stochastic process, where a sequence of treatments is assigned to influence a certain outcome of interest and time-dependent covariates exist between treatments. When covariates are plentiful and/or continuous, statistical modeling is needed to reduce the huge dimensionality of the problem and allow for the estimation of causal effects. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula, which expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to conduct the modeling via point effects. The purpose of this work is to study the modeling of these causal effects via point effects. Challenges and solutions: The time-dependent covariates often have influences from earlier treatments as well as on subsequent treatments. Consequently, the standard parameters, i.e., the means of the outcome given all treatments and covariates, are essentially all different (the null paradox). Furthermore, the dimension of the parameters is huge (the curse of dimensionality). Therefore, it can be difficult to conduct the modeling in terms of standard parameters. Instead, we have used the point effects of treatments to develop a likelihood-based parametric approach to the modeling of these causal effects, and we are able to model the causal effects of a sequence of treatments by modeling a small number of point effects of individual treatments. Achievements: We are able to conduct the modeling of the causal effects of a sequence of treatments in the familiar framework of single-point causal inference. Simulation shows that our method achieves not only an unbiased estimate of the causal effect but also the nominal level of type I error and a low level of type II error in hypothesis testing. We have applied this method to a longitudinal study of COVID-19 mortality among Scandinavian countries and found that the Swedish approach performed far worse than the other countries' approaches for COVID-19 mortality, and that the poor performance was largely due to its early measures during the initial period of the pandemic.

Keywords: causal effect, point effect, statistical modelling, sequential causal inference

Procedia PDF Downloads 176
19635 Regionalization of IDF Curves by Interpolating Intensity and Adjustment Parameters - Application to Boyacá, Colombia

Authors: Pedro Mauricio Acosta, Carlos Andrés Caro

Abstract:

This research presents the regionalization of IDF curves for the department of Boyacá, Colombia, which comprises 16 towns, including the provincial capital, Tunja. For the regionalization by parameters, the adjustment parameters (u and alpha) of the IDF curves of the stations in the studied area were used; a similar regionalization was carried out by interpolating intensities. In the regionalization by parameters, the parameters were found during the construction of the intensity-duration-frequency curves, using estimation methods based on ordinary moments and maximum likelihood. Regionalization and interpolation of the data were performed with the assistance of ArcGIS software. Within the project, the option providing the best level of reliability was sought, in order to determine which of the ways to regionalize is best. For the regionalization of intensities, isoline maps were produced, each associated with a different return period and duration, in order to build IDF curves across the studied area. For the regionalization by parameters, the maps associated with each parameter were produced last.
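
The abstract does not name the underlying distribution, but u and alpha are the customary location and scale parameters of the Gumbel law used for annual maximum intensities; under that assumption, the intensity quantile for return period T is

i_T = u - \alpha \ln\left[-\ln\left(1 - \tfrac{1}{T}\right)\right],

with method-of-moments estimates \hat{\alpha} = \sqrt{6}\, s / \pi and \hat{u} = \bar{x} - 0.5772\, \hat{\alpha}, while maximum likelihood estimates are obtained numerically.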

Keywords: intensity duration frequency curves, regionalization, hydrology

Procedia PDF Downloads 306
19634 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions, kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function, and they have been applied to improve target detection and image classification for hyperspectral images. The double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and has good performance in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term "nearest proportion" used here considers both local information and more global information. With these settings, the effect of the overlap between sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small data-size situations; hence, an improved estimator based on shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to the advantages of DNP, KDNP surpasses DNP in the experimental results. According to experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest proportion feature extraction

Procedia PDF Downloads 306
19633 Sensing to Respond & Recover in Emergency

Authors: Alok Kumar, Raviraj Patil

Abstract:

The ability to respond to an incident of a disastrous event in a vulnerable area is a crucial aspect of emergency management. Constantly predicting the likelihood of an event, along with its severity, in an area, and reacting to those significant events that are likely to have a high impact, allows the authorities to respond by allocating resources optimally and in a timely manner. The solution provides measuring, monitoring, and modeling facilities that integrate the underlying systems into one solution to improve operational efficiency, planning, and coordination. We were particularly involved in this innovative incubation work on the current state of research and development in collaborative technologies and systems for disasters.

Keywords: predictive analytics, advanced analytics, area flood likelihood model, area flood severity model, level of impact model, mortality score, economic loss score, resource allocation, crew allocation

Procedia PDF Downloads 291
19632 Residual Life Estimation of K-out-of-N Cold Standby System

Authors: Qian Zhao, Shi-Qi Liu, Bo Guo, Zhi-Jun Cheng, Xiao-Yue Wu

Abstract:

Cold standby redundancy is considered an effective mechanism for improving system reliability and is widely used in industrial engineering. However, because of the complexity of the reliability structure, there is little literature on the residual life of cold standby systems consisting of complex components. In this paper, a simulation method is presented to predict the residual life of a k-out-of-n cold standby system. In practical cases, the failure information of a system is either unknown, partly unknown, or completely known; the proposed method is designed to deal with each of these three scenarios, and the differences between the procedures are analyzed. Finally, numerical examples are used to validate the proposed simulation method.
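
A minimal sketch of such a simulation is given below for the completely-known-information case, assuming exponential component lifetimes with k units active and cold spares that do not age; both assumptions are illustrative, not the paper's model.

```python
import random

def system_lifetime(n, k, rate):
    """One k-out-of-n cold standby lifetime: k units operate at a time,
    the n-k cold spares do not age; the system fails at the
    (n-k+1)-th active-unit failure (exponential lifetimes, illustrative)."""
    t, failures = 0.0, 0
    while failures <= n - k:
        t += random.expovariate(k * rate)  # next failure among k active units
        failures += 1
    return t

def residual_life(n, k, rate, t_now, n_sim=20_000):
    """Monte Carlo estimate of E[T - t_now | T > t_now]."""
    survivors = [T - t_now for T in (system_lifetime(n, k, rate)
                                     for _ in range(n_sim)) if T > t_now]
    return sum(survivors) / len(survivors)

random.seed(7)
print(residual_life(n=5, k=2, rate=0.1, t_now=10.0))
```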

Keywords: cold standby system, k-out-of-n, residual life, simulation sampling

Procedia PDF Downloads 373
19631 An Optimized Method for 3D Magnetic Navigation of Nanoparticles inside Human Arteries

Authors: Evangelos G. Karvelas, Christos Liosis, Andreas Theodorakakos, Theodoros E. Karakasidis

Abstract:

In the present work, a numerical method is presented for estimating the appropriate gradient magnetic fields for optimally driving particles into a desired area inside the human body. The proposed method combines Computational Fluid Dynamics (CFD), the Discrete Element Method (DEM), and the Covariance Matrix Adaptation (CMA) evolution strategy for the magnetic navigation of nanoparticles. It is based on an iterative procedure that intends to eliminate the deviation of the nanoparticles from a desired path: the gradient magnetic field is constantly adjusted so that the particles follow the desired trajectory as closely as possible. Using the proposed method, it becomes clear that the particle diameter is a crucial parameter for efficient navigation, and that increasing the particles' diameter decreases their deviation from the desired path. Moreover, the navigation method can navigate nanoparticles into the desired areas with an efficiency of approximately 99%.
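
A minimal sketch of the optimization loop follows, using the third-party cma package (pip install cma) with a toy quadratic deviation function standing in for the coupled CFD-DEM trajectory simulation; the variable names and starting values are illustrative assumptions.

```python
import cma  # third-party CMA evolution strategy implementation

DESIRED = [0.0, 0.0, 0.0]  # toy target: zero deviation in x, y, z

def trajectory_deviation(gradients):
    """Placeholder for the CFD-DEM simulation: returns the accumulated
    deviation of the particles from the desired path for a given triple
    of gradient magnetic field components (purely illustrative)."""
    return sum((g - d) ** 2 for g, d in zip(gradients, DESIRED))

# iteratively adjust the three gradient field components so that the
# simulated particles track the desired trajectory as closely as possible
es = cma.CMAEvolutionStrategy(x0=[0.5, -0.3, 0.2], sigma0=0.2)
es.optimize(trajectory_deviation)
print(es.result.xbest)   # best gradient field settings found
```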

Keywords: computational fluid dynamics, CFD, covariance matrix adaptation evolution strategy, discrete element method, DEM, magnetic navigation, spherical particles

Procedia PDF Downloads 113
19630 Evaluation of Expected Annual Loss Probabilities of RC Moment Resisting Frames

Authors: Saemee Jun, Dong-Hyeon Shin, Tae-Sang Ahn, Hyung-Joon Kim

Abstract:

Building loss estimation methodologies, which have advanced considerably in recent decades, are usually used to estimate the social and economic impacts resulting from seismic structural damage. Following these methods, this paper presents the evaluation of the annual loss probability of a reinforced concrete moment-resisting frame designed according to the Korean Building Code. The annual loss probability is defined by (1) a fragility curve obtained from a capacity spectrum method similar to that adopted in HAZUS, and (2) a seismic hazard curve derived from annual frequencies of exceedance per peak ground acceleration. Seismic fragilities are computed to calculate the annual loss probability of a structure using functions depending on structural capacity, seismic demand, structural response, and the probability of exceeding the damage state thresholds. This study carried out a nonlinear static analysis to obtain the capacity of an RC moment-resisting frame selected as a prototype building. The analysis results show that the probability of extensive structural damage in the prototype building is expected to be 0.004% in a year.
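
For reference, the annual probability of reaching a damage state DS combines the fragility curve with the seismic hazard curve; in the usual risk-integral form (not written out in the abstract),

P_{DS} = \int_{0}^{\infty} P(DS \mid PGA = a) \left| \frac{d\lambda(a)}{da} \right| da,

where P(DS | PGA = a) is the fragility at peak ground acceleration a and λ(a) is the annual frequency of exceedance of a.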

Keywords: expected annual loss, loss estimation, RC structure, fragility analysis

Procedia PDF Downloads 380
19629 Spatiotemporal Neural Network for Video-Based Pose Estimation

Authors: Bin Ji, Kai Xu, Shunyu Yao, Jingjing Liu, Ye Pan

Abstract:

Human pose estimation is a popular research area in computer vision because of its important applications in human-machine interfaces. In recent years, 2D human pose estimation based on convolutional neural networks has made great progress. However, more and more practical applications involve video-based tasks, so it is natural to consider how to combine spatial and temporal information to achieve a balance between computing cost and accuracy. To address this issue, this study proposes a new spatiotemporal model, Spatiotemporal Net (STNet), which combines temporal and spatial information more rationally. As a result, the predicted keypoint heatmaps are potentially more accurate and spatially more precise. While maintaining recognition accuracy, the algorithm deals with the spatiotemporal series in a decoupled way, which greatly reduces the computation of the model and thus the resource consumption. This study demonstrates the effectiveness of our network on the Penn Action Dataset, and the results indicate the superior performance of our network over existing methods.

Keywords: convolutional long short-term memory, deep learning, human pose estimation, spatiotemporal series

Procedia PDF Downloads 121
19628 Runoff Estimation Using NRCS-CN Method

Authors: E. K. Naseela, B. M. Dodamani, Chaithra Chandran

Abstract:

GIS and remote sensing techniques facilitate accurate estimation of surface runoff from a watershed. In the present study, an attempt has been made to evaluate the applicability of the Natural Resources Conservation Service Curve Number (NRCS-CN) method using GIS and remote sensing techniques in the upper Krishna basin (69,425 sq. km). Landsat 7 satellite data (30 m resolution) for the year 2012 were used for the preparation of the land use/land cover (LU/LC) map, and the hydrologic soil groups were mapped on a GIS platform. The weighted curve numbers (CN) for all five subcatchments were calculated on the basis of LU/LC type and hydrologic soil class in the area, considering the antecedent moisture condition. Monthly rainfall data were available for 58 rain gauge stations, and the overlay technique was adopted for generating the weighted curve numbers. Results of the study show that land use changes determined from satellite images are useful in studying the runoff response of the basin, and that there is no significant difference between observed and estimated runoff depths. For each subcatchment, statistically positive correlations were detected between observed and estimated runoff depth (0.6…).

Keywords: curve number, GIS, remote sensing, runoff
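
A short sketch of the NRCS-CN rainfall-runoff relation applied to each subcatchment follows; the formula is the standard SI form of the method, and the sample values are illustrative.

```python
def nrcs_cn_runoff(p_mm, cn):
    """NRCS-CN runoff depth (mm) for rainfall p_mm and curve number cn.
    S is the potential maximum retention; Ia = 0.2*S is the standard
    initial abstraction."""
    s = 25400.0 / cn - 254.0          # SI form of S (mm)
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(nrcs_cn_runoff(p_mm=80.0, cn=75))  # runoff depth for an 80 mm storm
```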

Procedia PDF Downloads 517
19627 The Ability of Forecasting the Term Structure of Interest Rates Based on Nelson-Siegel and Svensson Model

Authors: Tea Poklepović, Zdravka Aljinović, Branka Marasović

Abstract:

Due to the importance of the yield curve and its estimation, it is essential to have valid methods for yield curve forecasting in cases where there are scarce issues of securities and/or weak trade on a secondary market. Therefore, in this paper, after the estimation of weekly yield curves on the Croatian financial market from October 2011 to August 2012 using the Nelson-Siegel and Svensson models, the yield curves are forecasted using a vector autoregressive model and neural networks. In general, it can be concluded that both forecasting methods have good prediction abilities, where forecasting of yield curves based on the Nelson-Siegel estimation model gives better results, in the sense of lower mean squared error, than forecasting based on the Svensson model; neural networks also provide slightly better results in this case. Finally, it can be concluded that the most appropriate way of yield curve prediction is neural networks applied to the Nelson-Siegel estimation of yield curves.
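
For reference, the Nelson-Siegel model fitted to each weekly curve expresses the yield for maturity τ as

y(\tau) = \beta_0 + \beta_1 \frac{1 - e^{-\tau/\lambda}}{\tau/\lambda} + \beta_2 \left( \frac{1 - e^{-\tau/\lambda}}{\tau/\lambda} - e^{-\tau/\lambda} \right),

where β0, β1, and β2 capture the level, slope, and curvature of the curve and λ is a decay parameter; the Svensson model extends this with a second curvature term carrying its own decay parameter.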

Keywords: Nelson-Siegel Model, neural networks, Svensson Model, vector autoregressive model, yield curve

Procedia PDF Downloads 289
19626 Enhancement of Primary User Detection in Cognitive Radio by Scattering Transform

Authors: A. Moawad, K. C. Yao, A. Mansour, R. Gautier

Abstract:

Detecting an occupied frequency band is a major issue in cognitive radio systems. The detection process becomes difficult if the signal occupying the band of interest has a faded amplitude due to multipath effects, which make it hard for an occupying user to be detected. This work mitigates the missed-detection problem in the context of cognitive radio in a frequency-selective fading channel by proposing a blind channel estimation method based on the scattering transform. By initially applying conventional energy detection, the missed-detection probability is evaluated, and if it is greater than or equal to 50%, channel estimation is applied to the received signal, followed by channel equalization to reduce the channel effects. In the proposed channel estimator, we modify the Morlet wavelet by using its first derivative for better frequency resolution; a mathematical description of the modified function and its frequency resolution is formulated in this work. The improved frequency resolution is required to follow the spectral variation of the channel. The channel estimation error is evaluated in the mean-square sense for different channel settings, and energy detection is applied to the equalized received signal. The simulation results show an improvement in the missed-detection probability compared to detection based on principal component analysis. This improvement is achieved at the expense of increased estimator complexity, which depends on the number of wavelet filters as related to the channel taps. The detection performance also shows an improvement in detection probability for low signal-to-noise scenarios over principal component analysis-based energy detection.
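
For reference, the standard Morlet mother wavelet that the proposed estimator modifies is commonly written (up to the admissibility correction) as

\psi(t) = \pi^{-1/4}\, e^{i\omega_0 t}\, e^{-t^2/2},

with center frequency ω0; the paper replaces this function by its first derivative to sharpen the frequency resolution.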

Keywords: channel estimation, cognitive radio, scattering transform, spectrum sensing

Procedia PDF Downloads 177
19625 Estimating of Groundwater Recharge Value for Al-Najaf City, Iraq

Authors: Hayder H. Kareem

Abstract:

Groundwater recharge is a crucial parameter for any groundwater management system. The variability of recharge rates and the difficulty of estimating this factor by direct observation in many cases lead to complexity in estimating the recharge value. Various methods exist to estimate groundwater recharge, each with some limitations on its applicability. This paper focuses on a real study area, Al-Najaf City, Iraq. There are a few groundwater aquifers in this city, but the aquifer considered in this study, the Dibdibba aquifer, is the closest one to the ground surface. According to the Aridity Index, which is estimated in the paper, Al-Najaf City is classified as a region with an arid climate, which indicates that the most appropriate method to estimate the groundwater recharge is Thornthwaite's method. From the calculations, the estimated average groundwater recharge over the period 1980-2014 for Al-Najaf City is 40.32 mm/year. Groundwater recharge directly affects the groundwater table level (groundwater head). Therefore, to verify this recharge value, the MODFLOW program has been used to apply it by examining the relationship between the calculated and observed heads: a groundwater model of the Al-Najaf City study area was built in MODFLOW to simulate the area for different purposes, one of which is to simulate the groundwater recharge. The MODFLOW results show that this value of groundwater recharge is extremely high and needs to be reduced. A further sensitivity test was therefore carried out for the Al-Najaf City study area by changing the recharge value, and the best estimate of the groundwater recharge for this city was found to be 16.5 mm/year, the value that gives the best fit between the calculated and observed heads, with minimum values of RMSE (13.175%) and RSS (1454 m²).
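
For reference, Thornthwaite's formula (not written out in the abstract) estimates the monthly potential evapotranspiration from the mean monthly temperature T (in °C) as

PET = 16 \left( \frac{10\,T}{I} \right)^{a} \ \text{mm/month}, \qquad I = \sum_{i=1}^{12} \left( \frac{T_i}{5} \right)^{1.514},

where I is the annual heat index and a is an empirical cubic function of I; the recharge is then estimated from the water balance of precipitation against PET.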

Keywords: Al-Najaf City, groundwater modelling, recharge estimation, visual MODFLOW

Procedia PDF Downloads 102
19624 Multivariate Control Chart to Determine Efficiency Measurements in Industrial Processes

Authors: J. J. Vargas, N. Prieto, L. A. Toro

Abstract:

Control charts are commonly used to monitor processes involving either variable or attribute quality characteristics, and determining the control limits is a critical task for quality engineers seeking to improve the processes. Nonetheless, in some applications it is necessary to include an estimation of efficiency. In this paper, the ability to define the efficiency of an industrial process was added to a control chart by incorporating a data envelopment analysis (DEA) approach. Specifically, Bayesian estimation was performed to calculate the posterior probability distribution of the parameters, namely the means and the variance-covariance matrix. This technique allows the data set to be analysed without relying on the hypothetical large sample implied in the problem, and can be treated as an approximation to the finite-sample distribution. A rejection simulation method was carried out to generate random variables from the parameter functions. Each resulting vector was used by the stochastic DEA model over several cycles to establish the distribution of the efficiency measure for each DMU (decision-making unit). A control limit was calculated with the model obtained; if a DMU presents a low level of efficiency, system efficiency is out of control. A global optimum was reached in the efficiency calculation, which ensures model reliability.

Keywords: data envelopment analysis, DEA, Multivariate control chart, rejection simulation method

Procedia PDF Downloads 353
19623 An Approach to Apply Kernel Density Estimation Tool for Crash Prone Location Identification

Authors: Kazi Md. Shifun Newaz, S. Miaji, Shahnewaz Hazanat-E-Rabbi

Abstract:

In this study, the kernel density estimation tool has been used to identify the most crash-prone locations on a national highway of Bangladesh. Like other developing countries, Bangladesh now faces a great social alarm from road traffic crashes (RTCs), and the situation is deteriorating day by day. Today's black spot identification process is not based on modern technical tools and in most cases provides wrong output. In this situation, characteristic analysis and black spot identification by spatial analysis would be an effective and low-cost approach to ensuring road safety. The methodology of this study incorporates a framework based on a spatial-temporal study to identify the locations where RTCs occur most. A very important economic corridor, the Dhaka to Sylhet highway, has been chosen to apply the method. This research proposes that the KDE method for identification of Hazardous Road Locations (HRLs) could be used for all other national highways in Bangladesh, and also for other developing countries. Some recommendations have been suggested for policy makers to reduce RTCs on the Dhaka-Sylhet highway, especially at black spots.
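
A minimal sketch of kernel density estimation over crash coordinates follows, using SciPy's gaussian_kde; the coordinates are illustrative synthetic data, and the GIS-specific steps are omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

# illustrative crash coordinates (e.g., projected x, y in km along a corridor)
rng = np.random.default_rng(0)
crashes = np.vstack([rng.normal(10, 2, 200), rng.normal(5, 1, 200)])  # (2, n)

kde = gaussian_kde(crashes)   # Gaussian kernel, automatic bandwidth

# evaluate the density surface on a grid; local maxima indicate
# candidate hazardous road locations (HRLs)
xs, ys = np.meshgrid(np.linspace(0, 20, 100), np.linspace(0, 10, 50))
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
print(density.max())
```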

Keywords: hazardous road location (HRL), crash, GIS, kernel density

Procedia PDF Downloads 278
19622 Application of the Total Least Squares Estimation Method for an Aircraft Aerodynamic Model Identification

Authors: Zaouche Mohamed, Amini Mohamed, Foughali Khaled, Aitkaid Souhila, Bouchiha Nihad Sarah

Abstract:

The aerodynamic coefficients are important in the evaluation of an aircraft's performance and stability-control characteristics. These coefficients can also be used in automatic flight control systems and in the mathematical model of a flight simulator. The study of the aerodynamic aspects of flying systems is a reserved domain, largely inaccessible to developers: wind tunnel tests to extract aerodynamic forces and moments require specific and expensive means, and the glaring lack of published documentation in this field makes the determination of aerodynamic coefficients complicated. This work is devoted to the identification of an aerodynamic model using an aircraft in a virtual simulated environment. For the system identification, we present an environment framework based on the Software In the Loop (SIL) methodology and use Microsoft™ Flight Simulator (FS-2004) as the environment for plane simulation. We propose the Total Least Squares Estimation (TLSE) technique to identify the aerodynamic parameters, which are unknown, variable, classified, and used in the expression of the piloting law. In this paper, we define each aerodynamic coefficient as the mean of its numerical values; all other variations are considered modeling uncertainties that will be compensated by the robustness of the piloting control.
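
The abstract does not detail the estimator; a common way to compute the total least squares solution of an overdetermined system A·θ ≈ b, accounting for errors in both A and b, is via the SVD of the augmented matrix, as sketched below with synthetic data standing in for the flight-simulator measurements.

```python
import numpy as np

def total_least_squares(A, b):
    """TLS solution of A @ theta ≈ b via SVD of the augmented matrix [A b]."""
    n = A.shape[1]
    _, _, vt = np.linalg.svd(np.column_stack([A, b]))
    v = vt.T                      # right singular vectors as columns
    return -v[:n, n] / v[n, n]    # scale the last column so its b-part is -1

# toy identification: recover two aerodynamic-style coefficients
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 2)) + rng.normal(scale=0.01, size=(200, 2))
theta_true = np.array([0.8, -1.5])
b = A @ theta_true + rng.normal(scale=0.01, size=200)
print(total_least_squares(A, b))   # close to [0.8, -1.5]
```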

Keywords: aircraft aerodynamic model, total least squares estimation, piloting the aircraft, robust control, Microsoft Flight Simulator, MQ-1 predator

Procedia PDF Downloads 246
19621 Thyroid Malignancy Concurrent with Hyperthyroidism: Variations with Thyroid Status and Age

Authors: N. J. Nawarathna, N. R. Kmarasinghe, D. Chandrasekara, B. M. R. S. Balasooriya, R. A. A. Shaminda, R. J. K. Senevirathne

Abstract:

Introduction: Thyroid malignancy associated with hyperthyroidism is considered rare; retrospective studies have shown the incidence of thyroid malignancy in hyperthyroid patients to be low (0.7-8.5%). To assess the clinical relevance of this association, thyroid status in a cohort of patients with thyroid malignancy was analyzed. Method: Thyroid malignancies diagnosed histologically in 56 patients over an 18-month period beginning in April 2013, in a single surgical unit at Teaching Hospital Kandy, were included. Preoperative patient details and the progression of thyroid status were assessed with thyroid stimulating hormone, free thyroxine, and free triiodothyronine levels. Results: Among the 56 patients, papillary carcinoma was diagnosed in 44 (78.6%), follicular carcinoma in 7 (12.5%), and medullary and anaplastic carcinomas in 5 (8.9%). Twelve (21.4%) were male and 44 (78.6%) were female. Twenty (35.7%) were less than 40 years old, 29 (51.8%) were between 40 and 59 years, and 7 (12.5%) were above 59 years. Cross-tabulation of the type of carcinoma with gender revealed a likelihood ratio of 6.908 (p = 0.032). Biochemically, 12 (21.4%) were hyperthyroid; of these, 5 (41.7%) had primary hyperthyroidism and 7 (58.3%) had secondary hyperthyroidism. The mean age of euthyroid patients was 43.77 years (SD 10.574) and that of hyperthyroid patients was 53.25 years (SD 16.057); independent samples t-test, t = -2.446, two-tailed p = 0.018. When thyroid status was cross-tabulated with age group, the likelihood ratio was 9.640 (p = 0.008). Conclusion: Papillary carcinoma is seen more among females. Among the patients with thyroid carcinomas, those with biochemically proven hyperthyroidism belonged to an older age group than those who were euthyroid. Hence, careful evaluation of elderly hyperthyroid patients to select the most suitable therapeutic approach is justified.

Keywords: age, hyperthyroidism, thyroid malignancy, thyroid status

Procedia PDF Downloads 379
19620 Quantification of Methane Emissions from Solid Waste in Oman Using IPCC Default Methodology

Authors: Wajeeha A. Qazi, Mohammed-Hasham Azam, Umais A. Mehmood, Ghithaa A. Al-Mufragi, Noor-Alhuda Alrawahi, Mohammed F. M. Abushammala

Abstract:

Municipal solid waste (MSW) disposed of in landfill sites decomposes under anaerobic conditions and produces gases that mainly contain carbon dioxide (CO₂) and methane (CH₄). Methane has a global warming potential 25 times that of CO₂ and can potentially affect human life and the environment. Thus, this research aims to determine MSW generation and the annual CH₄ emissions from the generated waste in Oman over the years 1971-2030. The estimation of total waste generation was performed using existing models, while the CH₄ emissions were estimated using the Intergovernmental Panel on Climate Change (IPCC) default method. It is found that total MSW generation in Oman might reach 3,089 Gg in the year 2030, producing approximately 85 Gg of CH₄ emissions in that year.
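
For reference, the IPCC default (mass-balance) method estimates the annual methane generation as

CH_4\ (\text{Gg/yr}) = \left( MSW_T \cdot MSW_F \cdot MCF \cdot DOC \cdot DOC_F \cdot F \cdot \tfrac{16}{12} - R \right) (1 - OX),

where MSW_T is the total MSW generated, MSW_F the fraction disposed to landfills, MCF the methane correction factor, DOC the degradable organic carbon, DOC_F the fraction of DOC dissimilated, F the methane fraction in landfill gas, R the recovered methane, and OX the oxidation factor.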

Keywords: methane, emissions, landfills, solid waste

Procedia PDF Downloads 481
19619 GPS Refinement in Cities Using Statistical Approach

Authors: Ashwani Kumar

Abstract:

GPS plays an important role in everyday life for safe and convenient transportation. While pedestrians use hand-held devices to find their position in a city, vehicles in intelligent transport systems use relatively sophisticated GPS receivers for estimating their current position. However, in urban areas, where the GPS satellites are occluded by tall buildings and trees and GPS signals are reflected off nearby vehicles, GPS position estimation becomes poor. In this work, exhaustive GPS data were collected at a single point in an urban area at different times of day and under dynamic environmental conditions. The data are analyzed, and statistical refinement methods are used to obtain an optimal position estimate among all the measured positions. The results obtained are compared with publicly available datasets, and the position estimation refinement results are promising.
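
A minimal sketch of one such statistical refinement, outlier rejection followed by a least-squares (mean) position estimate over repeated fixes, is given below; the two-sigma rejection rule and the synthetic fixes are illustrative assumptions.

```python
import numpy as np

def refine_position(fixes, n_sigma=2.0):
    """Reject fixes farther than n_sigma standard deviations from the
    centroid, then return the least-squares (mean) estimate of the rest."""
    fixes = np.asarray(fixes)               # (n, 2): latitude, longitude
    centroid = fixes.mean(axis=0)
    dist = np.linalg.norm(fixes - centroid, axis=1)
    keep = dist < n_sigma * dist.std()
    return fixes[keep].mean(axis=0)

rng = np.random.default_rng(3)
fixes = rng.normal([12.9716, 77.5946], 1e-4, size=(100, 2))  # repeated fixes
fixes[::25] += 5e-3                                          # multipath outliers
print(refine_position(fixes))
```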

Keywords: global positioning system, statistical approach, intelligent transport systems, least squares estimation

Procedia PDF Downloads 261
19618 An Estimation Process for Progress Rate Based on Labor-Quantity in Republic of Korea

Authors: Dong-Ho Kim, Zheng-Xun Jin, Yong-Woon Cha, Su-Sang Lim, Sang-Won Han, Chang-Taek Hyun

Abstract:

As construction is a labor-intensive industry, it is important to identify and manage labor quantities for accurate progress management of a construction project. However, progress management that focuses on construction cost calculated from materials rather than labor quantities has led to discrepancies between the cost and the progress of the actual construction. In addition, since it is not easy to predict accurate labor quantities when estimating a labor quantity-based progress rate, there has been limited research into progress rate estimation based on labor quantity. Accordingly, this study proposes a process for labor quantity-based progress rate estimation using a standard of estimate to predict an accurate progress rate for construction projects in the Republic of Korea. It is expected that the proposed process will help identify a progress rate closer to that of actual site management and adjust the workforce in each construction type, thereby contributing to improved construction efficiency.

Keywords: labor based, labor cost, progress management, progress rate, progress payment

Procedia PDF Downloads 312
19617 Education Levels & University Student’s Income: Primary Data Analysis from the Universities of Punjab, Pakistan

Authors: Muhammad Ashraf

Abstract:

It is an empirically established reality that education promotes not just social and intellectual abilities but also people's incomes. The present study investigates the connection between education level and student income. Data on different education levels were acquired from 300 students through a field survey at four public sector universities: two from upper Punjab (University of Gujarat and Government College University, Lahore) and two from lower Punjab (Islamia University, Bahawalpur and the University of Sahiwal). A two-phase estimation is based on the Mincerian human capital model. The first stage presents a statistical/descriptive investigation, which shows a positive linkage between higher education and student income. Econometric estimation is performed in the second stage by applying the Ordinary Least Squares (OLS) method. The econometric examination reaffirms the importance of higher education, as the impact of education on students' incomes accelerates as we move from lower-level to higher-level education. Education levels, experience, and working hours are positively and significantly associated with student income. The econometric estimation additionally shows that M.Phil. and Ph.D. students have higher incomes than bachelor students. Concerning the student income profile, the study recommends that the government provide part-time jobs or internships to students in line with labor market demand.
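
For reference, the Mincerian specification estimated by OLS in the second stage typically takes the form (the abstract does not print the equation; the regressor set is inferred from the text),

\ln(\text{income}_i) = \beta_0 + \beta_1\, \text{educ}_i + \beta_2\, \text{exper}_i + \beta_3\, \text{exper}_i^2 + \beta_4\, \text{hours}_i + \varepsilon_i,

where the return to an additional level of education is read off β1.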

Keywords: education, student’s income, experience, universities

Procedia PDF Downloads 94