Search results for: Precipitation Estimation.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1202


1112 Robust UKF Insensitive to Measurement Faults for Pico Satellite Attitude Estimation

Authors: Halil Ersin Soken, Chingiz Hajiyev

Abstract:

In normal operating conditions of a pico satellite, the conventional Unscented Kalman Filter (UKF) gives sufficiently good estimation results. However, if the measurements are not reliable because of any kind of malfunction in the estimation system, the UKF gives inaccurate results and diverges over time. This study introduces Robust Unscented Kalman Filter (RUKF) algorithms with filter gain correction for the case of measurement malfunctions. By using variables defined as measurement noise scale factors, the faulty measurements are taken into consideration with a small weight, and the estimates are corrected without affecting the characteristics of the accurate ones. Two different RUKF algorithms, one with a single scale factor and one with multiple scale factors, are proposed and applied to the attitude estimation process of a pico satellite. The results of these algorithms are compared for different types of measurement faults in different estimation scenarios, and recommendations about their applications are given.
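
As a rough illustration of the scale-factor idea, the sketch below applies a single measurement noise scale factor inside a plain linear Kalman update (the paper uses a UKF; the variable names and the chi-square threshold are illustrative assumptions, not the authors' implementation):

import numpy as np

def robust_update(x, P, z, H, R, chi2_threshold=3.84):
    """One Kalman measurement update with a single measurement-noise scale factor.

    If the normalized innovation exceeds a chi-square threshold, R is inflated
    so the suspected faulty measurement receives a small weight (sketch only).
    """
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    d2 = float(y @ np.linalg.inv(S) @ y)
    if d2 > chi2_threshold:              # suspected measurement fault
        scale = d2 / chi2_threshold      # measurement noise scale factor
        S = H @ P @ H.T + scale * R
    K = P @ H.T @ np.linalg.inv(S)       # gain corrected through the scaled R
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new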

Keywords: attitude algorithms, Kalman filters, robust estimation.

1111 Forecasting the Volatility of Geophysical Time Series with Stochastic Volatility Models

Authors: Maria C. Mariani, Md Al Masum Bhuiyan, Osei K. Tweneboah, Hector G. Huizar

Abstract:

This work is devoted to the modeling of geophysical time series. A stochastic technique with time-varying parameters is used to forecast the volatility of data arising in geophysics. In this study, the volatility is defined as a logarithmic first-order autoregressive process. We observe that the inclusion of log-volatility into the time-varying parameter estimation, carried out via maximum likelihood estimation, significantly improves forecasting. This allows us to conclude that the estimation algorithm for the corresponding one-step-ahead suggested volatility (with ±2 standard prediction errors) is very feasible, since it possesses good convergence properties.
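
A minimal sketch of the log-volatility model described above, simulating the first-order autoregressive log-volatility and forming a one-step-ahead forecast with a ±2 standard prediction error band (parameter values are purely illustrative, not estimates from the paper):

import numpy as np

rng = np.random.default_rng(0)

# Log-volatility as a first-order autoregressive process (illustrative parameters)
mu, phi, sigma_eta, T = -1.0, 0.95, 0.2, 1000
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()

# Observed series with time-varying volatility exp(h/2)
y = np.exp(h / 2) * rng.standard_normal(T)

# One-step-ahead log-volatility forecast with a +/- 2 standard prediction error band
h_forecast = mu + phi * (h[-1] - mu)
band = (h_forecast - 2 * sigma_eta, h_forecast + 2 * sigma_eta)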

Keywords: Augmented Dickey Fuller Test, geophysical time series, maximum likelihood estimation, stochastic volatility model.

1110 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques

Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah

Abstract:

Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming and error-prone. This has partly led to most cost estimates being unclear and riddled with inaccuracies that at times lead to over- or underestimation of construction cost. The development of standard sets of measurement rules that are understandable by all those involved in a construction project has not totally solved these challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological and machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual editing into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges in this exploratory study are also reported and recommendations for future studies are proposed.

Keywords: BIM, Construction projects, Cost estimation, NRM, Ontology.

1109 Array Signal Processing: DOA Estimation for Missing Sensors

Authors: Lalita Gupta, R. P. Singh

Abstract:

Array signal processing involves signal enumeration and source localization. It is centered on the ability to fuse temporal and spatial information, captured by sampling signals emitted from a number of sources at the sensors of an array, in order to carry out a specific estimation task: estimation of source characteristics (mainly localization of the sources) and/or array characteristics (mainly array geometry). Array signal processing uses sensors organized in patterns, or arrays, to detect signals and to determine information about them. Beamforming is a general signal processing technique used to control the directionality of the reception or transmission of a signal; using beamforming, the majority of the signal energy received by a group of array sensors can be focused in a desired direction. Multiple signal classification (MUSIC) is a highly popular eigenstructure-based method for high-resolution direction-of-arrival (DOA) estimation. This paper examines the effect of missing sensors on DOA estimation. The accuracy of MUSIC-based DOA estimation is degraded significantly both by missing sensors among the receiving array elements and by unequal channel gain and phase errors in the receiver.
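
A minimal MUSIC sketch for a ULA that lets individual sensors be marked as missing; this is a generic textbook implementation rather than the authors' code, and the element spacing, scan grid and names are assumptions:

import numpy as np

def music_spectrum(X, n_sources, d=0.5, active=None):
    """MUSIC pseudospectrum for a ULA with element spacing d (in wavelengths).

    X: snapshots, shape (n_sensors, n_snapshots). 'active' is a boolean or index
    mask of working sensors; missing sensors are dropped while keeping the
    remaining elements at their original positions (sketch only).
    """
    positions = np.arange(X.shape[0])
    if active is not None:
        X = X[active, :]
        positions = positions[active]
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
    w, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
    En = V[:, : m - n_sources]                 # noise subspace
    angles = np.linspace(-90, 90, 721)
    p = np.empty_like(angles)
    for i, th in enumerate(angles):
        a = np.exp(-2j * np.pi * d * positions * np.sin(np.deg2rad(th)))
        p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles, p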

Keywords: Array Signal Processing, Beamforming, ULA, Direction of Arrival, MUSIC

1108 Heat Treatment of Aluminum Alloy 7449

Authors: Suleiman E. Al-lubani, Mohammad E. Matarneh, Hussien M. Al-Wedyan, Ala M. Rayes

Abstract:

Aluminum alloy has an extensive range of industrial applications due to its consistent mechanical properties and structural integrity. Heat treatment by the precipitation technique affects the magnesium, silicon, manganese and copper crystals dissolved in the aluminum alloy: when thermal energy is supplied, the crystals dislocate and precipitate on the crystal boundaries of the aluminum alloy, which increases its hardness. In this project, various times and temperatures were applied to find the combination of these variables that maximizes the precipitation of these metals on the aluminum crystal boundaries and therefore gives the highest hardness. The specimens were then tested for hardness and tensile strength. It is noticed that when the temperature increases, the precipitation increases and consequently the hardness increases. A threshold temperature of 264 °C should not be exceeded because recrystallization occurs, which causes the crystals to grow; this recrystallization process affects the ductility of the alloy and decreases hardness. In addition, further increasing the temperature degrades the alloy's mechanical properties. The mechanical properties, namely tensile and hardness properties, are investigated according to standard procedures. In this research, different temperatures and times were applied to increase hardening. The highest hardness, 207.31 HBR, was obtained at 100 °C for 6 hours, while the lowest elongation, 146.5, was obtained at the same temperature and time.

Keywords: Aluminum alloy, recrystallization process, heat treatment, hardness properties, precipitation, intergranular breakage.

1107 Combined Beamforming and Channel Estimation in WCDMA Communication Systems

Authors: Nermin A. Mohamed, Mohamed F. Madkour

Abstract:

We address the problem of joint beamforming and multipath channel parameter estimation in Wideband Code Division Multiple Access (WCDMA) communication systems that employ Multiple-Access Interference (MAI) suppression techniques in the uplink (from mobile to base station). Most of the existing schemes rely on time-multiplexing a training sequence with the user data. In WCDMA, the channel parameters can also be estimated from a code-multiplexed common pilot channel (CPICH), which may be corrupted by strong interference, resulting in a bad estimate. In this paper, we present new methods that combine interference suppression with channel estimation when using multiple receiving antennas, based on adaptive signal processing techniques. Computer simulation is used to compare the proposed methods with existing conventional estimation techniques.
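
As a generic illustration of the adaptive-array idea (not the specific joint scheme of the paper), a complex LMS beamformer that adapts the antenna weights against a known pilot sequence might look like the sketch below; the step size and data layout are assumptions:

import numpy as np

def lms_beamformer(X, d, mu=0.01):
    """Generic complex LMS adaptive beamformer (illustration only).

    X: array snapshots, shape (n_antennas, n_samples); d: pilot/desired symbols.
    Returns the adapted weight vector that steers toward the pilot and
    suppresses interference.
    """
    n_ant, n_samp = X.shape
    w = np.zeros(n_ant, dtype=complex)
    for k in range(n_samp):
        x = X[:, k]
        y = np.vdot(w, x)               # beamformer output w^H x
        e = d[k] - y                    # error against the pilot symbol
        w = w + mu * np.conj(e) * x     # LMS weight update
    return w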

Keywords: Adaptive arrays, channel estimation, interference cancellation, wideband code division multiple access (WCDMA).

1106 Localization of Near Field Radio Controlled Unintended Emitting Sources

Authors: Nurbanu Guzey, S. Jagannathan

Abstract:

Locating Radio Controlled (RC) devices using their unintended emissions is of great interest given security concerns. The weak nature of these emissions requires a near-field localization approach, since it is hard to detect these signals in the far-field region of the array. In addition to angle estimation, near-field localization also requires range estimation of the source, which makes this method more complicated than far-field models. The challenges of locating such devices in the near-field region and in a real-time environment are analyzed in this paper. An ESPRIT-like near-field localization scheme is utilized for both angle and range estimation, using a 1-D search with symmetric subarrays. Two 7-element uniform linear antenna arrays (ULAs) are employed for locating the RC source. Experimental results of location estimation for one unintended emitting walkie-talkie at different positions are given.

Keywords: Localization, angle of arrival (AoA), range estimation, array signal processing, ESPRIT, uniform linear array (ULA).

1105 An Enhanced Floor Estimation Algorithm for Indoor Wireless Localization Systems Using Confidence Interval Approach

Authors: Kriangkrai Maneerat, Chutima Prommak

Abstract:

Indoor wireless localization systems have played an important role in enhancing context-aware services. Determining the position of mobile objects in complex indoor environments, such as multi-floor buildings, is a very challenging problem. This paper presents an effective floor estimation algorithm, which can accurately determine the floor where a mobile object is located. The proposed algorithm is based on the confidence interval of the summation of online Received Signal Strength (RSS) values obtained from IEEE 802.15.4 Wireless Sensor Networks (WSNs). We compare the performance of the proposed algorithm with those of other floor estimation algorithms in the literature through a real implementation of a WSN in our facility. The experimental results and analysis show that the proposed floor estimation algorithm outperforms the other algorithms and provides the highest floor accuracy, up to 100%, with a 95% confidence interval.
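
A simplified sketch of a confidence-interval decision rule on summed RSS, assuming per-floor calibration samples of the summed RSS are available; the exact decision rule and data layout in the paper may differ:

import numpy as np
from scipy import stats

def estimate_floor(online_rss, floor_calibration, confidence=0.95):
    """Pick the floor whose confidence interval of summed RSS contains the
    summed online RSS, falling back to the closest interval (sketch only).

    online_rss: RSS values (dBm) observed online from the reference nodes.
    floor_calibration: dict floor_id -> array of summed-RSS calibration samples
    (hypothetical data layout).
    """
    s = np.sum(online_rss)
    best_floor, best_dist = None, np.inf
    for floor, samples in floor_calibration.items():
        m, se = np.mean(samples), stats.sem(samples)
        lo, hi = stats.t.interval(confidence, len(samples) - 1, loc=m, scale=se)
        if lo <= s <= hi:
            return floor                      # summed RSS falls inside this CI
        dist = min(abs(s - lo), abs(s - hi))  # otherwise remember the closest floor
        if dist < best_dist:
            best_floor, best_dist = floor, dist
    return best_floor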

Keywords: Floor estimation algorithm, floor determination, multi-floor building, indoor wireless systems.

1104 Two New Relative Efficiencies of Linear Weighted Regression

Authors: Shuimiao Wan, Chao Yuan, Baoguang Tian

Abstract:

In statistical parameter estimation theory there are usually two kinds of estimators: the least-squares estimator (LSE) and the best linear unbiased estimator (BLUE). By the theorem determining the minimum variance unbiased estimator (MVUE), the BLUE is the most ideal parameter estimator in the linear model. But since its calculation is complicated, or the covariance is not given, the solution is often hard to obtain, so the LSE is preferred over the BLUE, and this substitution incurs some loss. To quantify this loss, many scholars have presented different relative efficiencies from different viewpoints. For the linear weighted regression model, this paper discusses the relative efficiencies of the LSE of β with respect to the BLUE of β. It also defines two new relative efficiencies and gives their lower bounds.
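
To make the LSE/BLUE comparison concrete, the sketch below computes the covariances of the OLS (LSE) and GLS (BLUE) estimators under a known error covariance and a common trace-type relative efficiency; the two new efficiencies defined in the paper are not reproduced here, and all numbers are illustrative:

import numpy as np

rng = np.random.default_rng(1)

# Linear weighted regression y = X beta + e with Cov(e) = Sigma (illustrative)
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
Sigma = np.diag(rng.uniform(0.5, 3.0, size=n))      # heteroscedastic covariance
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), Sigma)

Sigma_inv = np.linalg.inv(Sigma)
XtX_inv = np.linalg.inv(X.T @ X)

# Covariance of the LSE (OLS) and of the BLUE (GLS) of beta
cov_lse = XtX_inv @ X.T @ Sigma @ X @ XtX_inv
cov_blue = np.linalg.inv(X.T @ Sigma_inv @ X)

# A common trace-type relative efficiency of LSE with respect to BLUE (<= 1)
rel_eff = np.trace(cov_blue) / np.trace(cov_lse)
print(rel_eff)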

Keywords: Linear weighted regression, Relative efficiency, Lower bound, Parameter estimation.

1103 Orthogonal Regression for Nonparametric Estimation of Errors-in-Variables Models

Authors: Anastasiia Yu. Timofeeva

Abstract:

Two new algorithms for nonparametric estimation of errors-in-variables models are proposed. The first algorithm is based on a penalized regression spline. The spline is represented as a piecewise-linear function, and orthogonal regression is estimated for each linear portion. This algorithm is iterative. The second algorithm involves locally weighted regression estimation. When the independent variable is measured with error, such estimation is a complex nonlinear optimization problem. The simulation results have shown the advantage of the second algorithm under the assumption that the true values of the smoothing parameters are known. Nevertheless, the use of some indexes of fit for smoothing parameter selection gives similar results and has an oversmoothing effect.
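
A minimal sketch of orthogonal (total least squares) regression for a single linear portion, fitted by minimizing perpendicular distances as appropriate when both variables contain error; the variable names are hypothetical:

import numpy as np

def orthogonal_regression(x, y):
    """Fit y ~ a + b*x by orthogonal (total least squares) regression.

    The smallest right singular vector of the centered data gives the normal
    of the best-fit line, from which slope and intercept follow (sketch only).
    """
    xm, ym = np.mean(x), np.mean(y)
    U = np.column_stack([x - xm, y - ym])
    _, _, Vt = np.linalg.svd(U, full_matrices=False)
    nx, ny = Vt[-1]                     # normal vector of the fitted line
    b = -nx / ny                        # slope
    a = ym - b * xm                     # intercept
    return a, b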

Keywords: Grade point average, orthogonal regression, penalized regression spline, locally weighted regression.

1102 Variogram Fitting Based on the Wilcoxon Norm

Authors: Hazem Al-Mofleh, John Daniels, Joseph McKean

Abstract:

Within geostatistics research, effective estimation of the variogram points has been examined, particularly in developing robust alternatives. The parametric fit of these variogram points, which eventually defines the kriging weights, however, has not received the same attention from a robust perspective. This paper proposes the use of the non-linear Wilcoxon norm over weighted non-linear least squares as a robust variogram fitting alternative. First, we introduce the concepts of variogram estimation and fitting. Then, as an alternative to non-linear weighted least squares, we discuss the non-linear Wilcoxon estimator. Next, the robustness properties of the non-linear Wilcoxon are demonstrated using a contaminated spatial data set. Finally, under simulated conditions, increasing levels of contaminated spatial processes have their variogram points estimated and fitted. In fitting these variogram points, both non-linear weighted least squares and non-linear Wilcoxon fits are examined for efficiency. At all levels of contamination (including 0%), using a robust estimation and robust fitting procedure, the non-weighted Wilcoxon outperforms weighted least squares.
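
A sketch of Wilcoxon-norm fitting for an exponential variogram model, using the usual rank-based Wilcoxon pseudo-norm of the residuals; the model form, starting values and optimizer are assumptions rather than the authors' exact setup:

import numpy as np
from scipy.optimize import minimize

def wilcoxon_norm(e):
    """Wilcoxon pseudo-norm of residuals: sum of score(rank) * residual."""
    n = len(e)
    ranks = np.argsort(np.argsort(e)) + 1
    scores = np.sqrt(12.0) * (ranks / (n + 1.0) - 0.5)
    return np.sum(scores * e)

def fit_variogram(h, gamma_hat, x0=(0.1, 1.0, 1.0)):
    """Fit gamma(h) = nugget + sill * (1 - exp(-h / range)) to empirical
    variogram points by minimizing the Wilcoxon norm of the residuals (sketch).
    """
    def objective(p):
        nugget, sill, rng_ = p
        model = nugget + sill * (1.0 - np.exp(-h / max(rng_, 1e-6)))
        return wilcoxon_norm(gamma_hat - model)
    res = minimize(objective, x0, method="Nelder-Mead")
    return res.x            # fitted (nugget, sill, range)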

Keywords: Non-Linear Wilcoxon, robust estimation, Variogram estimation.

1101 A Novel Stator Resistance Estimation Method and Control Design of Speed-Sensorless Induction Motor Drives

Authors: N. Ben Si Ali, N. Benalia, N. Zarzouri

Abstract:

Speed-sensorless systems have been intensively studied in recent years, mainly because of their economic benefit, the fragility of mechanical sensors, and the difficulty of installing this type of sensor in many applications. These systems suffer from instability problems and sensitivity to parameter mismatch at low-speed operation. In this paper, an analysis of adaptive observer stability with stator resistance estimation is given.

Keywords: Motor drive, sensorless control, adaptive observer, stator resistance estimation.

1100 Spread Spectrum Code Estimation by Genetic Algorithm

Authors: V. R. Asghari, M. Ardebilipour

Abstract:

In the context of spectrum surveillance, a method to recover the code of a spread spectrum signal is presented, where the receiver has no knowledge of the transmitter's spreading sequence. The approach is based on a genetic algorithm (GA), which is forced to model the received signal. Genetic algorithms (GAs) are well known for their robustness in solving complex optimization problems. Experimental results show that the method provides a good estimation, even when the signal power is below the noise power.
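
A minimal GA sketch for recovering a ±1 spreading code of known length L from the received chip stream; the fitness function, genetic operators and parameters are illustrative assumptions, not the authors' configuration:

import numpy as np

rng = np.random.default_rng(2)

def fitness(code, received, L):
    """Sum of absolute despreading outputs over symbol periods: large when
    'code' matches the unknown spreading sequence (up to a sign)."""
    chips = received.reshape(-1, L)
    return np.sum(np.abs(chips @ code))

def ga_code_estimation(received, L, pop_size=40, generations=200, p_mut=0.02):
    """Simple GA with truncation selection, single-point crossover and
    bit-flip mutation over +/-1 candidate codes (sketch only)."""
    pop = rng.choice([-1, 1], size=(pop_size, L))
    for _ in range(generations):
        scores = np.array([fitness(ind, received, L) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # keep best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, L)
            child = np.concatenate([a[:cut], b[cut:]])            # crossover
            flip = rng.random(L) < p_mut                          # mutation
            child[flip] *= -1
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    return pop[np.argmax([fitness(ind, received, L) for ind in pop])]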

Keywords: Code estimation, genetic algorithms, spread spectrum.

1099 Interannual Variations in Snowfall and Continuous Snow Cover Duration in Pelso, Central Finland, Linked to Teleconnection Patterns, 1944-2010

Authors: M. Irannezhad, E. H. N. Gashti, S. Mohammadighavam, M. Zarrini, B. Kløve

Abstract:

Climate warming would increase rainfall by shifting the falling form of precipitation from snow to rain, and would accelerate snow cover disappearance by increasing snowpack melt. Using temperature and precipitation data in a temperature-index snowmelt model, we evaluated the variability of snowfall and continuous snow cover duration (CSCD) during 1944-2010 over Pelso, central Finland. The Mann-Kendall non-parametric test determined that annual precipitation increased by 2.69 mm/year (p<0.05) during the study period, but there was no clear trend in annual temperature. Annual rainfall and snowfall increased by 1.67 and 0.78 mm/year (p<0.05), respectively. CSCD was generally about 205 days, from 14 October to 6 May. No clear trend was found in CSCD over Pelso. Spearman's rank correlation showed the most significant relationships of annual snowfall with the East Atlantic (EA) pattern, and of CSCD with the East Atlantic/West Russia (EA/WR) pattern. Increased precipitation without warming temperatures caused rainfall and snowfall to increase, but had no effect on CSCD.
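
A minimal degree-day sketch of the temperature-index idea used above, partitioning precipitation into rain and snow by a threshold temperature and melting the snowpack in proportion to degrees above it; the threshold and degree-day factor are illustrative, and days with remaining snowpack are counted here as a simple stand-in for CSCD:

import numpy as np

def partition_and_melt(precip, temp, t_snow=0.0, ddf=3.0):
    """Temperature-index (degree-day) sketch.

    Precipitation falls as snow when temperature <= t_snow (deg C), otherwise
    as rain; daily melt is ddf * max(T - t_snow, 0), limited by the snowpack.
    """
    snowpack = 0.0
    rain, snow, cover_days = 0.0, 0.0, 0
    for p, t in zip(precip, temp):
        if t <= t_snow:
            snow += p
            snowpack += p
        else:
            rain += p
        melt = min(snowpack, ddf * max(t - t_snow, 0.0))
        snowpack -= melt
        if snowpack > 0.0:
            cover_days += 1        # day with snow on the ground
    return rain, snow, cover_days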

Keywords: Variations, snowfall, snow cover duration, temperature-index snowmelt model, teleconnection patterns.

1098 Effectiveness of Business Software Systems Development and Enhancement Projects versus Work Effort Estimation Methods

Authors: Beata Czarnacka-Chrobot

Abstract:

The execution of Business Software Systems (BSS) Development and Enhancement Projects (D&EP) is characterized by exceptionally low effectiveness, leading to considerable financial losses. The general reason for the low effectiveness of such projects is that they are inappropriately managed. One of the factors of proper BSS D&EP management is a suitable (reliable and objective) method of project work effort estimation, since this is what determines correct estimation of its major attributes: project cost and duration. A BSS D&EP is usually considered to be accomplished effectively if a product of the planned functionality is delivered without cost and time overruns. The goal of this paper is to prove that the choice of approach to BSS D&EP work effort estimation has a considerable influence on the effectiveness of the execution of such projects.

Keywords: Business software systems, development and enhancement projects, effectiveness, work effort estimation methods, software product size, software product functionality, project duration, project cost.

1097 Study on Extraction of Lanthanum Oxide from Monazite Concentrate

Authors: Nwe Nwe Soe, Lwin Thuzar Shwe, Kay Thi Lwin

Abstract:

Lanthanum oxide is to be recovered from monazite, which contains about 13.44% lanthanum oxide. The principal objective of this study is to extract lanthanum oxide from monazite of the Moemeik Myitsone area. The treatment of monazite in this study involves three main steps: extraction of lanthanum hydroxide from monazite by using caustic soda; digestion with nitric acid and precipitation with ammonium hydroxide; and calcination of lanthanum oxalate to lanthanum oxide.

Keywords: Calcination, Digestion, Precipitation.

1096 Dielectric Studies on Nano Zirconium Dioxide Synthesized through Co-Precipitation Process

Authors: K. Geethalakshmi, T. Prabhakaran, J. Hemalatha

Abstract:

Nano-sized zirconium dioxide in the monoclinic phase (m-ZrO2) has been synthesized in pure form through co-precipitation processing at different calcination temperatures and has been characterized by several techniques such as XRD, FT-IR, UV-Vis spectroscopy and SEM. The dielectric and capacitance values of the pelletized samples have been examined at room temperature as functions of frequency. The higher dielectric constant of the sample with the larger grain size proves the strong influence of grain size on the dielectric constant.

Keywords: capacitance, dielectric constant, m-ZrO2, nano zirconia

1095 Connectivity Estimation from the Inverse Coherence Matrix in a Complex Chaotic Oscillator Network

Authors: Won Sup Kim, Xue-Mei Cui, Seung Kee Han

Abstract:

We present the method of the inverse coherence matrix for the estimation of network connectivity from multivariate time series of a complex system. In a model system of coupled chaotic oscillators, it is shown that the inverse coherence matrix, defined as the inverse of the cross-coherence matrix, is proportional to the network connectivity. Therefore, the inverse coherence matrix can be used to distinguish directly connected links from indirectly connected links in a complex network. We compare the result of network estimation using the inverse coherence matrix with the results obtained from the coherence matrix and the partial coherence matrix.
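
A sketch of the inverse coherence matrix computation for multivariate time series, averaging the magnitude-squared coherence over a frequency band before inversion; the sampling rate, band and averaging choice are assumptions:

import numpy as np
from scipy.signal import coherence

def inverse_coherence_matrix(X, fs=1.0, band=(0.0, np.inf)):
    """Estimate the pairwise coherence matrix of multivariate time series X
    (shape: channels x samples), average it over a frequency band, and return
    its inverse, whose off-diagonal magnitudes serve as a proxy for direct
    connectivity (sketch of the idea described above).
    """
    n = X.shape[0]
    C = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            f, cxy = coherence(X[i], X[j], fs=fs)
            mask = (f >= band[0]) & (f <= band[1])
            C[i, j] = C[j, i] = np.mean(cxy[mask])
    return np.linalg.inv(C)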

Keywords: Chaotic oscillator, complex network, inverse coherence matrix, network estimation.

1094 Motion Prediction and Motion Vector Cost Reduction during Fast Block Motion Estimation in MCTF

Authors: Karunakar A K, Manohara Pai M M

Abstract:

In the 3D-wavelet video coding framework, temporal filtering is done along the trajectory of motion using Motion Compensated Temporal Filtering (MCTF). Hence, a computationally efficient motion estimation technique is needed for MCTF. In this paper, a predictive technique is proposed to reduce the computational complexity of the MCTF framework by exploiting the high correlation among the frames in a Group Of Pictures (GOP). The proposed technique applies the coarse and fine searches of any fast block-based motion estimation only to the first pair of frames in a GOP. The generated motion vectors are supplied to the next consecutive frames, even to subsequent temporal levels, and only a fine search is carried out around those predicted motion vectors. Hence, the coarse search is skipped for all motion estimation in a GOP except for the first pair of frames. The technique has been tested for different fast block-based motion estimation algorithms over different standard test sequences using MC-EZBC, a state-of-the-art scalable video coder. The simulation results reveal a substantial reduction (20.75% to 38.24%) in the number of search points during motion estimation, without compromising the quality of the reconstructed video compared to non-predictive techniques. Since the motion vectors of all pairs of frames in a GOP except the first will lie within ±1 of the motion vectors of the previous pair of frames, the number of bits required for motion vectors is also reduced by 50%.
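
A sketch of the refinement step for one block: the coarse search is skipped and only a ±1 window around the motion vector predicted from the previous frame pair is examined; the block layout, SAD criterion and names are hypothetical:

import numpy as np

def refine_motion_vector(ref, cur, block, predicted_mv, radius=1):
    """Refine a predicted motion vector with a small +/-1 search only.

    ref, cur: reference and current frames (2-D arrays); block: (y, x, size);
    predicted_mv: (dy, dx) inherited from the previous frame pair (sketch).
    """
    y, x, s = block
    target = cur[y:y + s, x:x + s].astype(float)
    best_mv, best_sad = predicted_mv, np.inf
    for dy in range(predicted_mv[0] - radius, predicted_mv[0] + radius + 1):
        for dx in range(predicted_mv[1] - radius, predicted_mv[1] + radius + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + s > ref.shape[0] or rx + s > ref.shape[1]:
                continue                         # candidate outside the frame
            cand = ref[ry:ry + s, rx:rx + s].astype(float)
            sad = np.sum(np.abs(cand - target))  # sum of absolute differences
            if sad < best_sad:
                best_mv, best_sad = (dy, dx), sad
    return best_mv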

Keywords: Motion Compensated Temporal Filtering, predictive motion estimation, lifted wavelet transform, motion vector.

1093 Selective Sulfidation of Copper, Zinc and Nickel in Plating Wastewater using Calcium Sulfide

Authors: K. Soya, N. Mihara, D. Kuchar, M. Kubota, H. Matsuda, T. Fukuta

Abstract:

The present work is concerned with the sulfidation of Cu-, Zn- and Ni-containing plating wastewater with CaS. The sulfidation experiments were carried out at room temperature by adding solid CaS to a simulated metal solution containing either a single metal (Ni, Zn or Cu) or a Ni-Zn-Cu mixture. At first, the experiments were conducted without pH adjustment, and it was found that complete sulfidation of Zn and Ni was achieved at an equimolar ratio of CaS to the particular metal. However, in the case of Cu, complete copper sulfidation was achieved at a CaS-to-Cu molar ratio of about 2. In the case of selective sulfidation, a simulated plating solution containing Cu, Zn and Ni at a concentration of 100 mg/dm3 was treated with CaS under various pH conditions. As a result, selective precipitation of metal sulfides was achieved by sulfidation treatment at different pH values. Further, the precipitation agents NaOH, Na2S and CaS were compared in terms of the average specific filtration resistance and compressibility coefficients of the metal sulfide slurry. Consequently, based on the lowest filtration parameters of the produced metal sulfides, it was concluded that CaS was the most effective precipitation agent for the separation and recovery of Cu, Zn and Ni.

Keywords: Calcium sulfide, plating wastewater, filtration characteristics, heavy metals, sulfidation.

1092 Effect of Formulation Compositions on Particle Size and Zeta Potential of Diclofenac Sodium-Loaded Chitosan Nanoparticles

Authors: Rathapon Asasutjarit, Chayanid Sorrachaitawatwong, Nardauma Tipchuwong, Sirijit Pouthai

Abstract:

This study was conducted to formulate diclofenac sodium-loaded chitosan nanoparticles and to study the effect of formulation compositions on the particle size and zeta potential of chitosan nanoparticles (CSN) containing diclofenac sodium (DC) prepared by the ionotropic gelation method. It was found that formulations containing chitosan, DC and tripolyphosphate (TPP) at a weight ratio of 4:1:1, respectively, gave different systems depending on pH. At pH 5.0 and 6.0, the obtained systems were turbid because of precipitation of DC and chitosan, respectively. However, a dispersed system of CSN with a diameter of 108±1 nm and a zeta potential of 19±1 mV could be obtained at pH 5.5. These CSN also showed spherical morphology observed via a transmission scanning electron microscope. Changing the weight ratio of chitosan:DC:TPP (1:1:1, 2:1:1, 3:1:1 and 4:1:1) showed that these ratios led to precipitation of particles, except for the 4:1:1 ratio, which provided CSN properly. The effect of Tween 80 as a stabilizer was also determined. Increasing the Tween 80 concentration to 0.02% w/v could stabilize the CSN for at least 48 hours, whereas increasing it to 0.03% w/v led to quick precipitation of particles. The study of the effect of TPP suggested that increasing the TPP concentration increased particle size but decreased zeta potential, and excess TPP caused precipitation of CSN. Therefore, the optimized CSN were those containing chitosan, DC and TPP at a ratio of 4:1:1 and 0.02% w/v Tween 80, prepared at pH 5.5. Their particle size, zeta potential and entrapment efficiency were 128±1 nm, 15±1 mV and 45.8±2.6%, respectively.

Keywords: Chitosan nanoparticles, diclofenac sodium, size, zeta potential.

1091 Trend Analysis of Annual Total Precipitation Data in Konya

Authors: Naci Büyükkaracığan

Abstract:

Hydroclimatic observations are used in planning water resources projects, and climate variables are among the first values used in such planning. The climate system is a complex and interactive system involving the atmosphere, land surfaces, snow and ice, the oceans and other bodies of water. The amount and distribution of precipitation, an important climate parameter, is a limiting environmental factor for the distribution of living things. Trend analysis is applied to detect the presence of a pattern or trend in a data set, and many trend studies in different parts of the world are carried out to determine climate change. The detection and attribution of past trends and variability in climatic variables is essential for explaining potential future alteration resulting from anthropogenic activities. Parametric and non-parametric tests are used for determining trends in climatic variables. In this study, trend tests were applied to annual total precipitation data obtained between 1972 and 2012 in the Konya Basin. Non-parametric trend tests (Sen's T, Spearman's Rho, Mann-Kendall, Sen's T trend, Wald-Wolfowitz) and a parametric test (mean square) were applied to the annual total precipitation of 15 stations for trend analysis. The linear slopes (change per unit time) of the trends are calculated using the non-parametric estimator developed by Sen, and the beginning of the trends is determined using the Mann-Kendall rank correlation test. In addition, the homogeneity of the precipitation trends is tested using the method developed by Van Belle and Hughes. As a result of the tests, negative linear slopes were found in annual total precipitation in Konya.
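
A minimal sketch of the Mann-Kendall test (normal approximation, ties ignored) and of Sen's slope estimator for an annual precipitation series; this is a generic textbook version, not the exact test suite used in the study:

import numpy as np
from scipy import stats

def mann_kendall_and_sen(x):
    """Return the Mann-Kendall S statistic, a two-sided p-value from the
    normal approximation, and Sen's slope (change per unit time)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Mann-Kendall S statistic from all pairwise sign comparisons
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p_value = 2.0 * (1.0 - stats.norm.cdf(abs(z)))
    # Sen's slope: median of all pairwise slopes
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return s, p_value, np.median(slopes)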

Keywords: Trend analysis, precipitation, hydroclimatology, Konya, Turkey.

1090 Dynamic State Estimation with Optimal PMU and Conventional Measurements for Complete Observability

Authors: M. Ravindra, R. Srinivasa Rao

Abstract:

This paper presents a Generalized Binary Integer Linear Programming (GBILP) method for the optimal allocation of Phasor Measurement Units (PMUs) and for generating a Dynamic State Estimation (DSE) solution with complete observability. The GBILP method is formulated with Zero Injection Bus (ZIB) constraints to reduce the number of PMU placement locations in the cases of normal operation and single-line contingency. The integration of PMU and conventional measurements is modeled in the DSE process to estimate accurate states of the system. To assess the dynamic behavior of the power system with the proposed method, a load change of up to 40% is considered at a bus in the power system network. The proposed DSE method is compared with the traditional Weighted Least Squares (WLS) state estimation method in the presence of load changes to show the impact of PMU measurements. MATLAB simulations are carried out on the IEEE 14-, 30-, 57-, and 118-bus systems to prove the validity of the proposed approach.
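
A minimal binary integer programming sketch of PMU placement for complete observability (each bus covered by a PMU at itself or at a neighbouring bus), omitting the zero-injection-bus and contingency constraints of the GBILP formulation; the solver choice and names are assumptions:

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def place_pmus(adjacency):
    """Minimize the number of PMUs subject to every bus being observed by a
    PMU at itself or at a neighbouring bus (simplified sketch).

    adjacency: symmetric 0/1 bus connectivity matrix (no self-loops).
    """
    A = np.asarray(adjacency, dtype=float)
    n = A.shape[0]
    A_obs = A + np.eye(n)                       # a PMU also observes its own bus
    res = milp(c=np.ones(n),                    # minimize the number of PMUs
               constraints=LinearConstraint(A_obs, lb=np.ones(n), ub=np.full(n, np.inf)),
               integrality=np.ones(n),
               bounds=Bounds(0, 1))
    return np.round(res.x).astype(int)          # 1 where a PMU is placed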

Keywords: Observability, phasor measurement units, PMU, state estimation, dynamic state estimation, SCADA measurements, zero injection bus.

1089 The Relative Efficiency of Parameter Estimation in Linear Weighted Regression

Authors: Baoguang Tian, Nan Chen

Abstract:

A new relative efficiency for the linear model given in the references is introduced into linear weighted regression, and its upper and lower bounds are proposed. In the linear weighted regression model, two new relative efficiencies of the best linear unbiased estimator of the mean matrix with respect to the least-squares estimator are given, and their upper and lower bounds are also studied.

Keywords: Linear weighted regression, Relative efficiency, Mean matrix, Trace.

1088 Frequency-Variation Based Method for Parameter Estimation of Transistor Amplifier

Authors: Akash Rathee, Harish Parthasarathy

Abstract:

In this paper, a frequency-variation based method is proposed for transistor parameter estimation in a common-emitter transistor amplifier circuit. We design an algorithm to estimate the transistor parameters based on noisy measurements of the output voltage when the input voltage is a sine wave of variable frequency and constant amplitude. The common-emitter amplifier circuit has been modelled using the transistor Ebers-Moll equations, and the perturbation technique has been used to separate the linear and nonlinear parts of the Ebers-Moll equations. This model of the amplifier has been used to determine the amplitude of the output sinusoid as a function of the frequency and the parameter vector. Then, applying the proposed method to the frequency components, the transistor parameters are estimated. Compared to the conventional time-domain least squares method, the proposed method requires much less data storage and results in more accurate parameter estimation, as it exploits the information in the time and frequency domains simultaneously. The proposed method can be utilized for parameter estimation of an analog device over its operating range of frequencies, as it uses data collected from output signals at different frequencies.

Keywords: Perturbation Technique, Parameter estimation, frequency-variation based method.

1087 Evaluation of Model Evaluation Criterion for Software Development Effort Estimation

Authors: S. K. Pillai, M. K. Jeyakumar

Abstract:

Estimation of model parameters is necessary to predict the behavior of a system. Model parameters are estimated using optimization criteria, and most algorithms use historical data to estimate them. The known target (actual) values and the output produced by the model are compared, and the differences between the two form the basis for estimating the parameters. In order to compare different models developed using the same data, different criteria are used. The data obtained for short-scale projects are used here. We consider the software effort estimation problem using a radial basis function network. The accuracy comparison is made using various existing criteria for one and two predictors. Then, we propose a new criterion based on linear least squares for evaluation and compare the results for one and two predictors. We have considered another data set and evaluated prediction accuracy using the new criterion. The new criterion is easier to comprehend than a single statistic. Although software effort estimation is considered here, this method is applicable to any modeling and prediction task.

Keywords: Software effort estimation, accuracy, Radial Basis Function, linear least squares.

1086 The Investigation of Precipitation Conditions of Chevreul’s Salt

Authors: Turan Çalban, Fatih Sevim, Oral Laçin

Abstract:

In this study, the precipitation conditions of Chevreul's salt were evaluated. The structure of Chevreul's salt was examined by considering previous studies. Thermodynamically, the most important precipitation parameters were pH, temperature, and the sulphite-to-copper(II) ratio. The amount of Chevreul's salt increased with increasing temperature and sulphite-to-copper(II) ratio within the ranges studied, and increased with decreasing pH within the chosen range. The best solution medium for recovery of Chevreul's salt is the sulphur dioxide gas-water system; moreover, soluble sulphite salts are used as efficient precipitating reagents. Chevreul's salt is generally used to produce highly pure copper powders from synthetic copper sulphate solutions and impure leach solutions. When the pH of the initial ammoniacal solution is greater than 8.5, the ammonia in the medium is not free, and Chevreul's salt does not precipitate from the solution; instead, copper ammonium sulphide is precipitated. The pH of the initial ammonia-containing solution for precipitating Chevreul's salt must therefore be less than 8.5.

Keywords: Chevreul’s salt, copper sulphites, mixed-valence sulphite compounds, precipitating.

1085 H∞ State Estimation of Neural Networks with Discrete and Distributed Delays

Authors: Biao Qin, Jin Huang

Abstract:

In this paper, using an improved Lyapunov-Krasovskii functional and effective mathematical techniques, several sufficient conditions are derived to guarantee that the error system is globally asymptotically stable with H∞ performance, in which both the time delay and its time variation are fully considered. In order to obtain less conservative state estimation conditions, zero equalities and the reciprocally convex approach are employed. The estimator gain matrix can be obtained in terms of the solution to linear matrix inequalities. A numerical example is provided to illustrate the usefulness and effectiveness of the obtained results.

Keywords: H∞ performance, Neural networks, State estimation.

1084 A Discrete Filtering Algorithm for Impulse Wave Parameter Estimation

Authors: Khaled M. EL-Naggar

Abstract:

This paper presents a new method for estimating the mean curve of impulse voltage waveforms recorded during impulse tests. In practice, these waveforms are distorted by noise, oscillations and overshoot. The problem is formulated as an estimation problem, and estimation of the signal parameters is achieved using a fast and accurate technique based on a discrete dynamic filtering (DDF) algorithm. The main advantage of the proposed technique is its ability to produce the estimates in a very short time and with a very high degree of accuracy. The algorithm uses sets of digital samples of the recorded impulse waveform. The proposed technique has been tested using simulated data of practical waveforms, and the effects of the number of samples and the data window size are studied. Results are reported and discussed.

Keywords: Digital Filtering, Estimation, Impulse wave, Stochastic filtering.

1083 A New Criterion Pose and Shape of Objects for Collision Risk Estimation

Authors: Do Hyeung Kim, Dae Hee Seo, Byung Doo Kim, Byung Gil Lee

Abstract:

As shown by much recent research in the aviation and maritime domains, strong doubts have been raised concerning the reliability of collision risk estimation. It is shown that using only the position and velocity of objects can lead to imprecise results. In this paper, therefore, a new approach to the estimation of collision risk using the pose and shape of objects is proposed. Simulation results are presented validating the accuracy of the new criterion when adapted to a fuzzy-logic-based collision risk algorithm.

Keywords: Collision risk, Pose and shape, Fuzzy logic.
