Search results for: Sequential Monte Carlo.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 374

194 UD Covariance Factorization for Unscented Kalman Filter using Sequential Measurements Update

Authors: H. Ghanbarpour Asl, S. H. Pourtakdoust

Abstract:

The Extended Kalman Filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems. However, not only does it have difficulties arising from linearization, but it also often becomes numerically unstable because of the computer round-off errors that occur during its implementation. To overcome the limitations of linearization, the unscented transformation (UT) was developed as a method to propagate mean and covariance information through nonlinear transformations. A Kalman filter that uses the UT to calculate the first two statistical moments is called an Unscented Kalman Filter (UKF). The square-root form of the UKF (SR-UKF) was developed by Rudolph van der Merwe and Eric Wan to achieve numerical stability and to guarantee positive semi-definiteness of the Kalman filter covariances. This paper develops another implementation of the SR-UKF for sequential measurement updates, and also derives a new UD covariance factorization filter for the implementation of the UKF. This filter is equivalent to the UKF but is computationally more efficient.

Keywords: Unscented Kalman filter, square-root unscented Kalman filter, UD covariance factorization, target tracking.
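
Since the entry's core object is the unscented transform, a minimal NumPy sketch of sigma-point propagation may help fix ideas. The kappa parameterization, function names, and the polar-to-Cartesian example are illustrative assumptions, not the authors' UD or square-root implementation.

```python
# Minimal sketch of the unscented transform (UT) with a standard
# sigma-point parameterization; illustrative only.
import numpy as np

def unscented_transform(f, mean, cov, kappa=0.0):
    """Propagate (mean, cov) through a nonlinear function f via sigma points."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigma])
    y_mean = w @ ys
    y_cov = sum(wi * np.outer(y - y_mean, y - y_mean) for wi, y in zip(w, ys))
    return y_mean, y_cov

# Example: propagate polar (range, bearing) uncertainty to Cartesian.
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m, P = unscented_transform(f, np.array([100.0, 0.5]), np.diag([1.0, 0.01]), kappa=1.0)
print(m, P)
```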

193 Using Historical Data for Stock Prediction of a Tech Company

Authors: Sofia Stoica

Abstract:

In this paper, we use historical data to predict the stock price of a tech company. To this end, we use a dataset consisting of the stock prices over the past five years of 10 major tech companies: Adobe, Amazon, Apple, Facebook, Google, Microsoft, Netflix, Oracle, Salesforce, and Tesla. We implemented and tested three models – a linear regression model, a k-nearest neighbors (KNN) model, and a sequential neural network – and two algorithms – Multiplicative Weight Update and AdaBoost. We found that the sequential neural network performed best, with a testing error of 0.18%. Interestingly, the linear model performed second best, with a testing error of 0.73%. These results show that historical data alone is enough to obtain high accuracy, and that a simple algorithm like linear regression can perform similarly to more sophisticated models while taking less time and fewer resources to implement.

Keywords: Finance, machine learning, opening price, stock market.
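
As a rough illustration of the kind of baseline the abstract describes, here is a least-squares sketch that predicts the next closing price from a window of lagged prices. The synthetic series, window length, and chronological 80/20 split are assumptions; the paper's actual dataset and features are not reproduced here.

```python
# Next-day price prediction from lagged prices with plain least squares.
import numpy as np

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0.1, 1.0, 1000))   # stand-in price series

window = 5
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
split = int(0.8 * len(X))

# Fit on the first 80%, test on the rest (chronological split, no shuffling).
coef, *_ = np.linalg.lstsq(np.c_[np.ones(split), X[:split]], y[:split], rcond=None)
pred = np.c_[np.ones(len(X) - split), X[split:]] @ coef
rel_err = np.mean(np.abs(pred - y[split:]) / y[split:])
print(f"mean relative test error: {rel_err:.2%}")
```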

192 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model

Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis

Abstract:

In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is known only to belong to a certain subset of all possible failures; this case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.

Keywords: Expectation-maximization (EM) algorithm, cause of failure, intensity, linear degradation path, masked data, reliability function.
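
A stripped-down sketch of the EM idea for masked causes may be useful. It assumes two competing exponential causes with constant intensities; the paper's step-stress TFR model with a linear degradation path is substantially richer than this illustration.

```python
# Simplified EM for masked competing risks with two exponential causes.
import numpy as np

rng = np.random.default_rng(1)
lam_true = np.array([0.02, 0.05])
n = 500
t = rng.exponential(1 / lam_true.sum(), n)            # failure times
cause = rng.random(n) < lam_true[0] / lam_true.sum()  # True -> cause 0
masked = rng.random(n) < 0.3                          # 30% of causes unobserved

lam = np.array([0.01, 0.01])                          # initial guess
for _ in range(200):
    # E-step: posterior cause probabilities for masked units
    # (for constant hazards, P(cause 0) = lam0 / (lam0 + lam1)).
    p0 = np.where(masked, lam[0] / lam.sum(), cause.astype(float))
    # M-step: expected failure counts over total time at risk.
    lam = np.array([p0.sum(), (1 - p0).sum()]) / t.sum()
print(lam)  # approaches lam_true as n grows
```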

191 The Study on the Stationarity of Energy Consumption in US States: Considering Structural Breaks, Nonlinearity, and Cross-Sectional Dependency

Authors: Wen-Chi Liu

Abstract:

This study applies the sequential panel selection method (SPSM) procedure proposed by Chortareas and Kapetanios (2009) to investigate the time-series properties of energy consumption in 50 US states from 1963 to 2009. SPSM classifies the entire panel into a group of stationary series and a group of non-stationary series, thereby identifying how many, and which, series in the panel are stationary processes. Empirical results obtained through SPSM with the panel KSS unit root test developed by Ucar and Omay (2009), combined with a Fourier function, indicate that energy consumption in all 50 US states is stationary. The results of this study have important policy implications for the 50 US states.

Keywords: Energy Consumption, Panel Unit Root, Sequential Panel Selection Method, Fourier Function, US states.
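
The SPSM loop itself is easy to sketch: test the panel, classify the most significant series as stationary, drop it, and repeat. A univariate ADF test from statsmodels stands in below for the panel KSS-with-Fourier test actually used in the paper.

```python
# SPSM-style sequential classification with an ADF stand-in test.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def spsm(panel, alpha=0.05):
    """panel: dict of {state: 1-D array}. Returns the stationary subset found."""
    remaining, stationary = dict(panel), []
    while remaining:
        pvals = {k: adfuller(v)[1] for k, v in remaining.items()}
        best = min(pvals, key=pvals.get)          # most significant series
        if pvals[best] > alpha:                   # no rejection left: stop
            break
        stationary.append(best)                   # classify as stationary
        del remaining[best]                       # re-test the reduced panel
    return stationary

rng = np.random.default_rng(2)
panel = {f"state_{i}": (rng.normal(size=200).cumsum() if i < 3
                        else rng.normal(size=200)) for i in range(10)}
print(spsm(panel))   # should pick out the 7 white-noise series
```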

190 Uncertainty Propagation and Sensitivity Analysis During Calibration of an Integrated Land Use and Transport Model

Authors: Parikshit Dutta, Mathieu Saujot, Elise Arnaud, Benoit Lefevre, Emmanuel Prados

Abstract:

In this work, the propagation of uncertainty during the calibration process of TRANUS, an integrated land use and transport model (ILUTM), has been investigated. It has also been examined, through a sensitivity analysis, which input parameters affect the variation of the outputs the most. Moreover, a probabilistic verification methodology for the calibration process, which equates the observed and calculated production, has been proposed. The model chosen as an application is the model of the city of Grenoble, France. The Monte Carlo method was employed for sensitivity analysis and uncertainty propagation, and a statistical hypothesis test was used for verification. The parameters of the induced demand function in TRANUS were assumed to be uncertain in the present case. It was found that if TRANUS converges during calibration, then with high probability the calibration process is verified. Moreover, a weak correlation was found between the inputs and the outputs of the calibration process. The total effect of the inputs on the outputs was investigated, and the output variation was found to be dictated by only a few input parameters.

Keywords: Uncertainty propagation, sensitivity analysis, calibration under uncertainty, hypothesis testing, integrated land use and transport models, TRANUS, Grenoble.
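
The Monte Carlo workflow described (sample uncertain inputs, run the model, rank inputs by their effect on the output) can be sketched generically. The placeholder model and parameter names below are assumptions, since TRANUS itself is an external simulator.

```python
# Generic Monte Carlo uncertainty propagation with a correlation-based
# sensitivity ranking; `model` is a stand-in for one calibration run.
import numpy as np

rng = np.random.default_rng(3)
n, names = 10_000, ["elasticity", "attractor", "penalty"]
X = rng.normal(loc=[1.0, 0.5, 2.0], scale=[0.1, 0.05, 0.4], size=(n, 3))

def model(x):                        # placeholder for the simulator
    return x[0] ** 2 + 0.1 * x[1] + 0.01 * np.sin(x[2])

y = np.apply_along_axis(model, 1, X)
print("output mean/std:", y.mean(), y.std())
for j, name in enumerate(names):     # crude sensitivity ranking
    r = np.corrcoef(X[:, j], y)[0, 1]
    print(f"corr({name}, output) = {r:+.3f}")
```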

189 Using Simulation Modeling Approach to Predict USMLE Steps 1 and 2 Performances

Authors: Chau-Kuang Chen, John Hughes, Jr., A. Dexter Samuels

Abstract:

Prediction models for the United States Medical Licensure Examination (USMLE) Steps 1 and 2 performances were constructed using a Monte Carlo simulation modeling approach via linear regression. The purpose of this study was to build robust simulation models that accurately identify the most important predictors and yield valid range estimations of the Steps 1 and 2 scores. The simulation modeling approach was deemed an effective way of predicting student performance on licensure examinations. Also, sensitivity analysis (a/k/a what-if analysis) in the simulation models was used to predict how the Steps 1 and 2 scores are affected by changes in the National Board of Medical Examiners (NBME) Basic Science Subject Board scores. In addition, the study results indicated that the Medical College Admission Test (MCAT) Verbal Reasoning score and the Step 1 score were significant predictors of Step 2 performance. Hence, institutions could screen qualified student applicants for interviews and document the effectiveness of the basic science education program based on the simulation results.

Keywords: Prediction Model, Sensitivity Analysis, Simulation Method, USMLE.
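
A hedged sketch of Monte Carlo simulation around a fitted regression, including the what-if shift the abstract mentions; all coefficients and score distributions below are invented placeholders, not the study's estimates.

```python
# Monte Carlo range estimation around a linear regression, plus a what-if shift.
import numpy as np

rng = np.random.default_rng(4)
beta = np.array([120.0, 0.8, 2.5])       # intercept, NBME score, MCAT VR (assumed)
sigma = 8.0                              # residual standard deviation (assumed)

def simulate(nbme_shift=0.0, n=100_000):
    nbme = rng.normal(75, 8, n) + nbme_shift
    mcat = rng.normal(9.5, 1.5, n)
    return beta[0] + beta[1] * nbme + beta[2] * mcat + rng.normal(0, sigma, n)

base, shifted = simulate(), simulate(nbme_shift=5.0)
print("score 5th-95th percentile:", np.percentile(base, [5, 95]))
print("effect of +5 NBME points:", shifted.mean() - base.mean())  # ~ 5 * 0.8
```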

188 Estimated Production Potential Types of Wind Turbines Connected to the Network Using Random Numbers Simulation

Authors: Saeid Nahi, Seyed Mohammad Hossein Nabavi

Abstract:

Wind energy generation has become very important in modern power systems. The electrical energy produced by wind turbines at a site depends on several factors, such as the wind speed profile of the site and, in particular, the turbines' cut-in, rated, and cut-out wind speeds. On the other hand, many different types of turbines are available on the market. Therefore, selecting a turbine whose capacity can meet consumer demand with high efficiency is important and necessary. In this context, calculating the amount of wind power generation is very important for optimizing overall network operation and for determining wind power parameters. In this article, the power delivered by a wind plant connected to the national network at the Manjil wind site is calculated, the best type of turbine is selected, and a power delivery profile appropriate to the network is derived using the Monte Carlo method. Wind speed data from the Manjil site, recorded at one-minute intervals over a full year, are used. The necessary simulations are based on the random numbers simulation method and are carried out using MATLAB and Excel.

Keywords: wind turbine, efficiency, wind turbine work points, Random Numbers, reliability
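
The core computation (Weibull-distributed wind-speed samples pushed through a turbine power curve with cut-in, rated, and cut-out speeds) can be sketched as follows; the Weibull parameters and turbine figures are placeholders, not the Manjil data.

```python
# Monte Carlo estimate of expected turbine output from sampled wind speeds.
import numpy as np

rng = np.random.default_rng(5)
v = 8.0 * rng.weibull(2.0, size=500_000)        # wind speed samples [m/s]

def power(v, v_in=3.5, v_rated=14.0, v_out=25.0, p_rated=2000.0):  # kW
    p = np.zeros_like(v)
    ramp = (v >= v_in) & (v < v_rated)
    # cubic rise between cut-in and rated speed, flat until cut-out
    p[ramp] = p_rated * (v[ramp] ** 3 - v_in ** 3) / (v_rated ** 3 - v_in ** 3)
    p[(v >= v_rated) & (v < v_out)] = p_rated
    return p

p = power(v)
print(f"expected output: {p.mean():.0f} kW, capacity factor: {p.mean()/2000:.2%}")
```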

187 Statically Fused Unbiased Converted Measurements Kalman Filter

Authors: Zhengkun Guo, Yanbin Li, Wenqing Wang, Bo Zou

Abstract:

Active radar and sonar systems often report Doppler measurements in addition to position measurements such as range and bearing. A tracker can perform better by making full use of the Doppler measurements. However, due to the high nonlinearity of the Doppler measurements with respect to the target state in Cartesian coordinates, those measurements are not always fully exploited. This paper mainly focuses on dealing with the Doppler measurements as well as the position measurements in polar coordinates. The Statically Fused Converted Position and Doppler Measurements Kalman Filter (SF-CMKF) with additive debiased measurement conversion has been presented previously. However, the exact compensation for the bias of the measurement conversion is multiplicative and depends on the statistics of the cosine of the angle measurement errors. As a result, the consistency and performance of the SF-CMKF may be suboptimal in large angle error situations. In this paper, the multiplicative unbiased position and Doppler measurement conversions for two-dimensional (polar-to-Cartesian) tracking are derived, and the SF-CMKF is improved by using these conversions. Monte Carlo simulations are presented to demonstrate the statistical consistency of the multiplicative unbiased conversion and the superior performance of the modified SF-CMKF (SF-UCMKF).

Keywords: Measurement conversion, Doppler, Kalman filter, estimation, tracking.
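
The multiplicative debiasing idea is easy to verify by Monte Carlo: dividing the converted measurement by lambda = E[cos(angle error)] = exp(-sigma_b^2 / 2) removes the conversion bias even for large bearing errors. This sketches the conversion only; the paper's covariance and Doppler expressions are not reproduced.

```python
# Monte Carlo check of the multiplicative unbiased polar-to-Cartesian conversion.
import numpy as np

rng = np.random.default_rng(6)
r_true, b_true = 1000.0, 0.7          # true range [m] and bearing [rad]
sig_r, sig_b = 10.0, 0.3              # large bearing noise on purpose

rm = r_true + rng.normal(0, sig_r, 1_000_000)
bm = b_true + rng.normal(0, sig_b, 1_000_000)

lam = np.exp(-sig_b ** 2 / 2)         # E[cos(noise)] for Gaussian noise
x_classic = (rm * np.cos(bm)).mean()             # biased: ~ lam * x_true
x_unbiased = (rm * np.cos(bm) / lam).mean()      # multiplicative debiasing
print(x_classic - r_true * np.cos(b_true))       # clearly negative bias
print(x_unbiased - r_true * np.cos(b_true))      # ~ 0
```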

186 Quantum Dot Cellular Automata Based Effective Design of Combinational and Sequential Logical Structures

Authors: Hema Sandhya Jagarlamudi, Mousumi Saha, Pavan Kumar Jagarlamudi

Abstract:

The use of quantum dots is a promising emerging technology for implementing digital systems at the nano level, offering attractive features such as higher speed, smaller size, and lower power consumption than transistor technology. In this paper, various combinational and sequential logical structures (a half adder, an SR latch and flip-flop, and a D flip-flop, together with NAND, NOR, XOR, and XNOR gates) are discussed based on QCA design, with comparatively fewer cells and less area. By applying these layouts, the hardware requirements for a QCA design can be reduced. These structures are designed and simulated using the QCADesigner tool. By taking full advantage of the unique features of this technology, we are able to create complete circuits on a single layer of QCA. Such devices are expected to function with ultra-low power consumption and very high speeds.

Keywords: QCA, QCA Designer, Clock, Majority Gate
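
QCA logic is built from the three-input majority gate plus inversion, with AND and OR obtained by tying one input to 0 or 1. A truth-table sketch of these standard QCA facts (not the paper's layouts):

```python
# Majority-gate logic: the primitive from which QCA circuits are composed.
def maj(a, b, c):
    return (a & b) | (b & c) | (a & c)

AND = lambda a, b: maj(a, b, 0)          # fix one input to 0
OR = lambda a, b: maj(a, b, 1)           # fix one input to 1
NAND = lambda a, b: 1 - AND(a, b)        # inversion is a separate QCA primitive
XOR = lambda a, b: OR(AND(a, 1 - b), AND(1 - a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), NAND(a, b), XOR(a, b))
```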

185 Minimization of Switching Losses in Cascaded Multilevel Inverters Using Efficient Sequential Switching Hybrid-Modulation Techniques

Authors: P. Satish Kumar, K. Ramakrishna, Ch. Lokeshwar Reddy, G. Sridhar

Abstract:

This paper presents two different sequential switching hybrid-modulation strategies, implemented for cascaded multilevel inverters. Hybrid modulation strategies combine fundamental-frequency pulse width modulation (FFPWM) with multilevel sinusoidal modulation (MSPWM), and are designed around the well-known alternative phase opposition disposition (APOD) and phase-shifted carrier (PSC) schemes. The main characteristics of these modulations are reduced switching losses with good harmonic performance, and balanced power loss dissipation among the devices within a cell and among the series-connected cells. The feasibility of these modulations is verified through spectral analysis, power loss analysis, and simulation.

Keywords: Cascaded multilevel inverters, hybrid modulation, power loss analysis, pulse width modulation.
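
A small numerical sketch of phase-shifted carrier (PSC) modulation for two cascaded H-bridge cells, with carriers shifted by 180/N degrees; the cell count, carrier frequency, and modulation index are assumptions, not the paper's settings.

```python
# Phase-shifted carrier PWM for a two-cell cascaded multilevel inverter.
import numpy as np

t = np.linspace(0, 0.02, 20_000)                # one 50 Hz fundamental cycle
ref = 0.8 * np.sin(2 * np.pi * 50 * t)          # modulation index 0.8 (assumed)
fc, n_cells = 1050.0, 2                         # carrier frequency, H-bridge cells

def carrier(phase):                             # triangular carrier in [-1, 1]
    x = (fc * t + phase / (2 * np.pi)) % 1.0
    return 4 * np.abs(x - 0.5) - 1

def cell_output(phase):                         # unipolar-modulated H-bridge cell
    car = carrier(phase)
    return (ref > car).astype(int) - (-ref > car).astype(int)

# Phase-shifted carriers give a (2N + 1)-level summed waveform.
levels = sum(cell_output(k * np.pi / n_cells) for k in range(n_cells))
print("distinct output levels:", np.unique(levels))   # -2 .. 2 for two cells
```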

184 Reliability Based Performance Evaluation of Stone Column Improved Soft Ground

Authors: A. GuhaRay, C. V. S. P. Kiranmayi, S. Rudraraju

Abstract:

The present study considers the effect of the variation of different geotechnical random variables in the design of stone column-foundation systems for assessing the bearing capacity and consolidation settlement of highly compressible soil. The soil and stone column properties, and the spacing, diameter, and arrangement of the stone columns, are considered as the random variables. The probability of failure (Pf) is computed for a target degree of consolidation and a target safe load by Monte Carlo Simulation (MCS). The study shows that the variations in the coefficient of radial consolidation (cr) and the cohesion of soil (cs) are the two most important factors influencing Pf. If the coefficient of variation (COV) of cr exceeds 20%, Pf exceeds 0.001, which is unsafe according to the guidelines of the US Army Corps of Engineers. The failure probability for bearing capacity likewise exceeds its safe value for COV of cs > 30%. It is also observed that as the spacing between the stone columns increases, the probability of reaching a target degree of consolidation decreases. Accordingly, design guidelines, considering both consolidation and bearing capacity of the improved ground, are proposed for different spacings and diameters of stone columns and different geotechnical random variables.

Keywords: Bearing capacity, consolidation, geotechnical random variables, probability of failure, stone columns.
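
The abstract's workflow (sample random properties, evaluate a performance function, count exceedances) reduces to a generic Monte Carlo probability-of-failure loop. The limit-state functions below are crude placeholders, not the paper's consolidation and bearing-capacity models.

```python
# Monte Carlo probability of failure over sampled geotechnical variables.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
c_r = rng.lognormal(mean=np.log(2.0), sigma=0.25, size=n)   # consolidation coeff.
c_s = rng.lognormal(mean=np.log(15.0), sigma=0.30, size=n)  # soil cohesion [kPa]

# Placeholder performance functions for consolidation and bearing capacity.
U = 1 - np.exp(-0.8 * c_r)              # degree of consolidation, target 0.7
q_ult = 5.14 * c_s                      # crude undrained bearing term
Pf = np.mean((U < 0.7) | (q_ult < 50.0))
print(f"probability of failure: {Pf:.4f}")   # compare against the 0.001 criterion
```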

183 JConqurr - A Multi-Core Programming Toolkit for Java

Authors: G.A.C.P. Ganegoda, D.M.A. Samaranayake, L.S. Bandara, K.A.D.N.K. Wimalawarne

Abstract:

With the popularity of multi-core and many-core architectures, there is a great need for software frameworks that can support parallel programming methodologies. In this paper we introduce JConqurr, an Eclipse toolkit that is easy to use and provides robust support for flexible parallel programming. JConqurr is a multi-core and many-core programming toolkit for Java capable of supporting common parallel programming patterns, including task, data, divide-and-conquer, and pipeline parallelism. The toolkit uses an annotation and directive mechanism to convert sequential code into parallel code. In addition, we have proposed a novel mechanism to achieve parallelism using graphics processing units (GPU). Experiments with common parallelizable algorithms have shown that our toolkit can be easily and efficiently used to convert sequential code to parallel code, and that significant performance gains can be achieved.

Keywords: Multi-core, parallel programming patterns, GPU, Java, Eclipse plugin, toolkit.
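
JConqurr targets Java and Eclipse; as a language-neutral illustration of the data-parallel pattern it automates (splitting a loop across cores), here is a standard-library Python analogue. This is not JConqurr's API.

```python
# Sequential loop vs. the same loop split across cores (data parallelism).
from concurrent.futures import ProcessPoolExecutor
import math

def work(x: float) -> float:          # an embarrassingly parallel loop body
    return sum(math.sin(x + i) for i in range(10_000))

def sequential(xs):
    return [work(x) for x in xs]

def parallel(xs):
    with ProcessPoolExecutor() as pool:   # one worker per core by default
        return list(pool.map(work, xs, chunksize=16))

if __name__ == "__main__":
    xs = [float(i) for i in range(256)]
    assert sequential(xs) == parallel(xs)   # same results, less wall time
```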

182 Mucosal-Submucosal Changes in Rabbit Duodenum during Development

Authors: Elnasharty M. A., Abou-Ghanema I. I., Sayed-Ahmed A., A. Abo Elnour

Abstract:

The sequential morphologic changes of the rabbit duodenal mucosa-submucosa were studied from the primordial stage to birth in 15 fetuses, and during the early days of life through maturity in 21 rabbit newborns, using light, scanning, and transmission electron microscopy. The fetal rabbit duodenum develops from a simple tube of stratified epithelium into a tube containing villus and intervillus regions of simple columnar epithelium. By day 21 of gestation the first rudimentary villi appeared, and by day 24 the first true villi appeared. The crypts of Lieberkühn did not appear until birth. By the first day of postnatal life the duodenal glands had appeared. Histological maturity of the rabbit small intestine occurred one month after birth. In conclusion, at all stages the sequential morphologic changes of the rabbit small intestine developed to meet structural and physiological demands during the fetal stage, in preparation for extrauterine life.

Keywords: Duodenum, mucosa, submucosa, morphogenesis, rabbit.

181 Effect of Size of the Step in the Response Surface Methodology using Nonlinear Test Functions

Authors: Jesús Everardo Olguín Tiznado, Rafael García Martínez, Claudia Camargo Wilson, Juan Andrés López Barreras, Everardo Inzunza González, Javier Ordorica Villalvazo

Abstract:

The response surface methodology (RSM) is a collection of mathematical and statistical techniques useful in the modeling and analysis of problems in which the dependent variable is influenced by several independent variables, with the goal of determining the conditions under which these variables should operate to optimize a production process. RSM estimates a first-order regression model and sets the search direction using the method of maximum/minimum slope ascent/descent (MMS U/D). However, this method selects the step size intuitively, which can affect the efficiency of the RSM. This paper assesses how the step size affects the efficiency of the methodology. The numerical examples are carried out through Monte Carlo experiments, evaluating three response variables: the gain in the efficiency function, the distance to the optimum, and the number of iterations. The simulation experiments showed that the gain in the efficiency function and the distance to the optimum were not affected by the step size, whereas the number of iterations was affected by both the step size and the type of test function used.

Keywords: RSM, dependent variable, independent variables, efficiency, simulation
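
How the step size enters the steepest-ascent stage can be sketched directly: fit a first-order model around the current point, then walk along the scaled gradient until the response stops improving. The noisy test function and factorial design spread are assumptions.

```python
# RSM steepest-ascent stage with an explicit step-size parameter.
import numpy as np

rng = np.random.default_rng(8)
f = lambda x: -(x[0] - 3) ** 2 - 2 * (x[1] - 1) ** 2 + rng.normal(0, 0.1)

def first_order_fit(center, spread=0.25):
    """2^k factorial design around `center`; returns the gradient estimate."""
    X = center + spread * np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]])
    y = np.array([f(x) for x in X])
    A = np.c_[np.ones(4), X - center]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]                      # slope coefficients b1, b2

x, step = np.array([0.0, 0.0]), 0.5      # <- the step size under study
best = f(x)
for _ in range(50):
    g = first_order_fit(x)
    x_new = x + step * g / np.linalg.norm(g)   # move along steepest ascent
    y_new = f(x_new)
    if y_new <= best:                    # no improvement: stop (or refit)
        break
    x, best = x_new, y_new
print("stopped near:", x, "with response:", best)
```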

180 Effects of Livestream Affordances on Consumer Purchase Willingness: Explicit IT Affordances Perspective

Authors: Isaac O. Asante, Yushi Jiang, Hailin Tao

Abstract:

Livestreaming marketing, a new electronic commerce element, has become an optional marketing channel following the COVID-19 pandemic, and many sellers are leveraging the features presented by livestreaming to increase sales. This study was conducted to measure real-time observable interactions between consumers and sellers. Based on affordance theory, this study conceptualized constructs representing the interactive features and examined how they drive consumers’ purchase willingness during livestreaming sessions, using 1238 records from Amazon Live collected through manual observation of transaction records. Using structural equation modeling, the ordinary least squares regression suggests that live viewers, new followers, live chats, and likes positively affect purchase willingness. The Sobel and Monte Carlo tests show that new followers, live chats, and likes significantly mediate the relationship between live viewers and purchase willingness. The study presents a way of measuring interactions in livestreaming commerce and proposes a way to manually gather data on consumer behavior on livestreaming platforms when the application programming interface (API) of such platforms does not support data mining algorithms.

Keywords: Livestreaming marketing, live chats, live viewers, likes, new followers, purchase willingness.
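
The Monte Carlo mediation test mentioned above has a compact form: draw the two path coefficients from normal distributions given their estimates and standard errors, and read a confidence interval off the simulated product. The numbers below are invented placeholders, not the paper's estimates.

```python
# Monte Carlo test of an indirect (mediated) effect a * b.
import numpy as np

rng = np.random.default_rng(9)
a, se_a = 0.42, 0.07     # live viewers -> live chats (placeholder estimates)
b, se_b = 0.31, 0.05     # live chats -> purchase willingness (placeholder)

draws = rng.normal(a, se_a, 100_000) * rng.normal(b, se_b, 100_000)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
print("mediation significant:", not (lo <= 0 <= hi))
```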

179 A Data Hiding Model with High Security Features Combining Finite State Machines and PMM method

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

Recent years have witnessed the rapid development of the Internet and telecommunication techniques, and information security is becoming more and more important. Applications such as covert communication and copyright protection stimulate research on information hiding techniques. Traditionally, encryption is used to achieve communication security; however, important information is no longer protected once decoded. Steganography is the art and science of communicating in a way that hides the existence of the communication: important information is first hidden in host data, such as a digital image, video, or audio, and then transmitted secretly to the receiver. In this paper, a data hiding model with high security features, combining cryptography using a finite-state sequential machine with an image-based steganography technique for communicating information more securely between two locations, is proposed. The authors incorporate the idea of a secret key for authentication at both ends in order to achieve a high level of security. Before the embedding operation, the secret information is encrypted with the help of the finite-state sequential machine and segmented into different parts. The cover image is also segmented into different objects through normalized cuts. Each part of the encoded secret information is embedded, with the help of a novel image steganographic method (PMM), into different cuts of the cover image to form different stego objects. Finally, the stego image is formed by combining the different stego objects and is transmitted to the receiver. At the receiving end, the reverse processes are run to recover the original secret message.

Keywords: Cover image, finite-state sequential machine, Mealy machine, Pixel Mapping Method (PMM), stego image, NCUT.
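
A toy sketch of the pipeline's first stage, a Mealy (finite-state sequential) machine whose output depends on both state and input; because the two outputs from each state are distinct, a receiver knowing the start state can invert it. The transition table is invented, and the PMM embedding stage is not reproduced.

```python
# Mealy machine as a simple invertible bit-stream transform.
STATES = ("S0", "S1")
# (state, input bit) -> (next state, output bit); outputs differ per state,
# so the mapping is decodable given the shared start state (the secret key).
TABLE = {("S0", 0): ("S0", 0), ("S0", 1): ("S1", 1),
         ("S1", 0): ("S1", 1), ("S1", 1): ("S0", 0)}

def mealy(bits, state="S0"):
    out = []
    for b in bits:
        state, o = TABLE[(state, b)]
        out.append(o)
    return out

msg = [1, 0, 1, 1, 0, 0, 1]
print(mealy(msg))   # encoded stream to be split and embedded via PMM
```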

178 Diagnosing the Cause and its Timing of Changes in Multivariate Process Mean Vector from Quality Control Charts using Artificial Neural Network

Authors: Farzaneh Ahmadzadeh

Abstract:

Quality control charts are very effective in detecting out-of-control signals, but when a control chart signals an out-of-control condition of the process mean, searching for a special cause in the vicinity of the signal time does not always lead to prompt identification of the source(s) of the condition, as the change point in the process parameter(s) is usually different from the signal time. It is very important for the manufacturer to determine at what point, and through which parameters, the past change caused the signal. Early warning of a process change would expedite the search for the special causes and enhance quality at lower cost. In this paper, the quality variables under investigation are assumed to follow a multivariate normal distribution with known means and variance-covariance matrix; the process means after a one-step change are assumed to remain at the new level until the special cause is identified and removed; and only one variable is supposed to change at a time. This research applies an artificial neural network (ANN) to identify the time the change occurred and the parameter that caused the change or shift. The performance of the approach was assessed through a computer simulation experiment. The results show that the neural network performs effectively and equally well across the whole range of shift magnitudes considered.

Keywords: Artificial neural network, change point estimation, Monte Carlo simulation, multivariate exponentially weighted moving average.
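
One way to see the ANN framing is to simulate observation windows containing a single one-step mean shift and train a small network to recover the change time. The dimensions, shift size, and network settings below are assumptions for illustration, not the paper's configuration.

```python
# Train an ANN to estimate the change point in simulated multivariate windows.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(13)
T, p, n = 20, 3, 4000                  # window length, variables, samples

X = rng.normal(size=(n, T, p))
tau = rng.integers(5, T - 2, size=n)   # true change points (labels)
var = rng.integers(0, p, size=n)       # which variable shifted
for i in range(n):
    X[i, tau[i]:, var[i]] += 2.0       # one-step mean shift of 2 sigma

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X[:3500].reshape(3500, -1), tau[:3500])
acc = net.score(X[3500:].reshape(500, -1), tau[3500:])
print(f"exact change-point hit rate on held-out windows: {acc:.2%}")
```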

177 Probability-Based Damage Detection of Structures Using Kriging Surrogates and Enhanced Ideal Gas Molecular Movement Algorithm

Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee

Abstract:

Surrogate models have received increasing attention for use in detecting damage in structures based on vibration modal parameters. However, uncertainties in the measured vibration data may lead to false or unreliable outputs from such models. In this study, an efficient approach based on Monte Carlo simulation is proposed to take the effect of uncertainties into account in developing a surrogate model. The probability of damage existence (PDE) is calculated based on the probability density functions of the undamaged and damaged states. The kriging technique allows one to genuinely quantify the surrogate error, and it is therefore chosen as the metamodeling technique. An enhanced version of the ideal gas molecular movement (EIGMM) algorithm is used as the main algorithm for model updating. The developed approach is applied to detect simulated damage in numerical models of a 72-bar space truss and a 120-bar dome truss. The simulation results show that the proposed method performs well in probability-based damage detection of structures, with less computational effort than using the finite element model directly.

Keywords: Enhanced ideal gas molecular movement, Kriging, probability-based damage detection, probability of damage existence, surrogate modeling, uncertainty quantification.
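
The surrogate-plus-Monte-Carlo idea can be sketched with scikit-learn's Gaussian process (kriging) regressor: a handful of expensive FE solves train the surrogate, which is then evaluated cheaply across many sampled damage states. The 1-D response, noise level, and damage threshold below are stand-ins for the truss models.

```python
# Kriging surrogate trained on few "FE solves", then Monte Carlo over damage.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

fe_model = lambda d: 1.0 - 0.3 * d                     # frequency ratio vs. damage d
X_train = np.linspace(0.0, 1.0, 8).reshape(-1, 1)      # only 8 expensive solves
gp = GaussianProcessRegressor(kernel=RBF(0.3)).fit(X_train, fe_model(X_train).ravel())

rng = np.random.default_rng(10)
sigma_meas = 0.02
measured = 0.88                                        # observed frequency ratio
d_samples = rng.uniform(0.0, 1.0, 100_000).reshape(-1, 1)
pred = gp.predict(d_samples)                           # cheap surrogate evaluations
# Weight sampled damage severities by how well they explain the measurement.
w = np.exp(-0.5 * ((measured - pred) / sigma_meas) ** 2)
pde = np.sum(w[d_samples.ravel() > 0.05]) / w.sum()    # mass on "damaged" states
print(f"probability of damage existence: {pde:.2%}")
```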

176 Removal of Pharmaceutical Compounds by a Sequential Treatment of Ozonation Followed by Fenton Process: Influence of the Water Matrix

Authors: Almudena Aguinaco, Olga Gimeno, Fernando J. Beltrán, Juan José P. Sagasti

Abstract:

A sequential treatment of ozonation followed by a Fenton or photo-Fenton process, using black light lamps (365 nm) in the latter case, has been applied to remove a mixture of pharmaceutical compounds and the generated by-products, both in ultrapure water and in secondary treated wastewater. The scientific-technological innovation of this study stems from the in situ generation of hydrogen peroxide by the direct ozonation of pharmaceuticals, which can later be used in the application of the Fenton and photo-Fenton processes. The compounds selected as models were sulfamethoxazole and acetaminophen. It should be remarked that the use of a second process is necessary as a result of the low mineralization yield reached by the exclusive application of ozone. Therefore, the influence of the water matrix has been studied in terms of hydrogen peroxide concentration, individual compound concentrations, and total organic carbon removed. Moreover, the concentrations of different iron species in solution have been measured.

Keywords: Fenton, photo-Fenton, ozone, pharmaceutical compounds, hydrogen peroxide, water treatment

175 A Parametric Study of an Inverse Elastostatics Problem (IESP) Using Simulated Annealing, Hooke & Jeeves and Sequential Quadratic Programming in Conjunction with Finite Element and Boundary Element Methods

Authors: Ioannis N. Koukoulis, Clio G. Vossou, Christopher G. Provatidis

Abstract:

The aim of the current work is to present a comparison among three popular optimization methods on the inverse elastostatics problem (IESP) of flaw detection within a solid. In more detail, the performance of a simulated annealing, a Hooke & Jeeves, and a sequential quadratic programming algorithm was studied in the test case of one circular flaw in a plate, solved by both the boundary element method (BEM) and the finite element method (FEM). The proposed optimization methods use a cost function based on the displacements of the static response. The methods were ranked according to the number of iterations required to converge and their ability to locate the global optimum. Hence, a clear impression regarding the performance of the aforementioned algorithms in flaw identification problems was obtained. Furthermore, the coupling of BEM or FEM with these optimization methods was investigated in order to track differences in their performance.

Keywords: Elastostatic, inverse problem, optimization.
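
Of the three optimizers compared, Hooke & Jeeves is the most compact to sketch. The version below follows the textbook pattern-search description, with a quadratic stand-in for the BEM/FEM flaw-identification cost.

```python
# Compact Hooke & Jeeves pattern search (simplified pattern-move step).
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        # Exploratory move: probe +/- step along each coordinate.
        xe, fe = x.copy(), fx
        for i in range(len(x)):
            for d in (step, -step):
                trial = xe.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fe:
                    xe, fe = trial, ft
                    break
        if fe < fx:                       # pattern move through the new base
            x, fx = xe + (xe - x), f(xe + (xe - x))
            if fe < fx:                   # pattern move failed: keep xe
                x, fx = xe, fe
        else:
            step *= shrink                # no improvement: refine the mesh
            if step < tol:
                break
    return x, fx

cost = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2   # stand-in flaw cost
print(hooke_jeeves(cost, [0.0, 0.0]))     # converges close to (2, -1)
```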

174 Firing Angle Range Control For Minimising Harmonics in TCR Employed in SVC-s

Authors: D. R. Patil, U. Gudaru

Abstract:

Most electrical distribution systems incur large losses because loads are widespread and reactive power compensation facilities are inadequate or improperly controlled. A typical static VAR compensator consists of a capacitor bank in binary sequential steps operated in conjunction with a thyristor controlled reactor (TCR) of the smallest step size. This SVC facilitates stepless control of reactive power, closely matching load requirements so as to maintain the power factor near unity; it requires an appropriately controlled TCR. This paper deals with an air-cored reactor suitable for a 3-phase, 50 Hz, Dy11, 11 kV/433 V, 125 kVA distribution transformer. Air-cored reactors were designed, built, tested, and operated in conjunction with a capacitor bank in five binary sequential steps. It is established how the delta-connected TCR minimizes the harmonic components, and the operating range of various electrical quantities as a function of the firing angle is investigated. In particular, line and phase currents, DC components, THDs, active and reactive powers, odd and even triplen harmonics, and dominant characteristic harmonics are all investigated, and a range of firing angles is fixed for satisfactory operation. The harmonic spectra for phase and line quantities at specified firing angles are given. Operating the TCR within the bounds specified in this paper, established through simulation studies, yields the best possible operating condition, particularly one free from all dominant harmonics.

Keywords: Binary sequential switched capacitor bank, TCR, non-triplen harmonics, stepless Q control, active and reactive power, Simulink.
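
The firing-angle dependence of TCR harmonics can be reproduced numerically: for a thyristor fired at angle alpha from the voltage zero crossing (90 to 180 degrees), the inductor current is (V/wL)(cos(alpha) - cos(wt)) over the conduction interval, and an FFT gives the spectrum. Per-unit values are assumed; in the delta connection the triplen harmonics circulate and cancel in the line currents.

```python
# TCR phase-current harmonics vs. firing angle via waveform simulation + FFT.
import numpy as np

def tcr_spectrum(alpha_deg, n=200_000, harmonics=(1, 3, 5, 7)):
    a = np.radians(alpha_deg)                     # firing angle from zero crossing
    wt = np.linspace(0, 2 * np.pi, n, endpoint=False)
    i = np.zeros(n)
    pos = (wt >= a) & (wt <= 2 * np.pi - a)       # positive current pulse
    neg = (wt >= np.pi + a) | (wt <= np.pi - a)   # negative pulse (wraps around)
    i[pos] = np.cos(a) - np.cos(wt[pos])
    i[neg] = -np.cos(a) - np.cos(wt[neg])
    c = np.fft.rfft(i) / n * 2                    # harmonic amplitudes
    return {h: round(abs(c[h]), 4) for h in harmonics}

for alpha in (90, 120, 150):                      # 90 deg = full conduction
    print(alpha, tcr_spectrum(alpha))
```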

173 Three Dimensional Analysis of Sequential Quasi Isotropic Composite Disc for Rotating Machine Application

Authors: Amin Almasi

Abstract:

Composite laminates are relatively weak in out-of-plane loading, inter-laminar stress, stress concentration near edges, and stress singularities. This paper develops a new analytical formulation for a laminated composite rotating disc fabricated from symmetric sequential quasi-isotropic layers to predict three-dimensional stress and deformation. This analysis is necessary to evaluate the mechanical integrity of fiber-reinforced multi-layer laminates used in high speed rotating applications such as high speed impellers. Three-dimensional governing equations are written for the rotating composite disc. An explicit solution is obtained with a Frobenius expansion series. Based on the analytical results, there are two separate zones of three-dimensional stress fields, at the centre and at the edge of the rotating disc. For thin discs, out-of-plane deformations and stresses are small in comparison with in-plane ones. For relatively thick discs, the deformation and stress fields are fully three-dimensional.

Keywords: Composite Disc, Rotating Machine.

172 Suppression of Narrowband Interference in Impulse Radio Based High Data Rate UWB WPAN Communication System Using NLOS Channel Model

Authors: Bikramaditya Das, Susmita Das

Abstract:

The suppression of interference in time domain equalizers is studied for high data rate impulse radio (IR) ultra-wideband (UWB) communication systems. Narrowband systems may cause interference to UWB devices, since UWB has very low transmission power and large bandwidth. The SRake receiver improves system performance by equalizing signals from different paths, which enables the use of SRake receiver techniques in IR-UWB systems. However, the Rake receiver alone fails to suppress narrowband interference (NBI). A hybrid SRake-MMSE time domain equalizer is proposed to overcome this by taking into account the effects of both the number of Rake fingers and the number of equalizer taps; it also combats intersymbol interference. A semi-analytical approach and Monte Carlo simulation are used to investigate the BER performance of the SRake-MMSE receiver on IEEE 802.15.3a UWB channel models. The study of non-line-of-sight indoor channel models (both CM3 and CM4) illustrates that the bit error rate performance of the SRake-MMSE receiver with NBI is better than that of the Rake receiver without NBI. We show that for an MMSE equalizer operating at high SNRs, the number of equalizer taps plays the more significant role in suppressing interference.

Keywords: IR-UWB, UWB, IEEE 802.15.3a, NBI, data rate, bit error rate.
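
The linear MMSE piece of the hybrid receiver can be sketched for a known toy channel: the tap vector solves R w = p, with R the received autocorrelation and p the cross-correlation with the desired symbol. This is a generic MMSE equalizer over white noise; the paper's SRake front end and the NBI covariance term are not modeled here.

```python
# Linear MMSE equalizer taps for a known discrete multipath channel.
import numpy as np

rng = np.random.default_rng(11)
h = np.array([1.0, 0.6, 0.3])                 # toy multipath channel
L, sigma2 = 9, 0.05                           # equalizer taps, noise power

# Channel convolution matrix H (L x (L + len(h) - 1)).
H = np.zeros((L, L + len(h) - 1))
for i in range(L):
    H[i, i:i + len(h)] = h

R = H @ H.T + sigma2 * np.eye(L)              # received autocorrelation
delay = (L + len(h) - 1) // 2
p = H[:, delay]                               # cross-correlation with s[n - delay]
w = np.linalg.solve(R, p)                     # MMSE tap weights

# Quick check: equalize a random BPSK stream through the channel.
s = rng.choice([-1.0, 1.0], 2_000)
r = np.convolve(s, h)[: len(s)] + rng.normal(0, np.sqrt(sigma2), len(s))
y = np.convolve(r, w)[delay: delay + len(s)]
print("symbol error rate:", np.mean(np.sign(y) != s))
```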

171 A Case Study on the Numerical-Probability Approach for Deep Excavation Analysis

Authors: Komeil Valipourian

Abstract:

Urban development and the growing need for infrastructure have increased the importance of deep excavations. In this study, after introducing probability analysis as an important issue, an attempt has been made to apply it to the deep excavation project of Bangkok’s Metro as a case study. For this, a numerical probability model has been developed based on the finite difference method and a Monte Carlo sampling approach. The results indicate that disregarding probability in this project would result in an inappropriate design of the retaining structure. Therefore, a probabilistic redesign of the support is proposed and carried out as one application of probability analysis. A 50% reduction in the flexural strength of the structure increases the failure probability by just 8%, keeping it within the allowable range, and helps improve economic conditions while maintaining mechanical efficiency. With regard to the lack of efficient design in most deep excavations, an attempt was made, by considering geometrical and geotechnical variability, to develop an optimum practical design standard for deep excavations based on failure probability. On this basis, a practical relationship is presented for estimating the maximum allowable horizontal displacement, which can help improve design conditions without carrying out the full probability analysis.

Keywords: Numerical probability modeling, deep excavation, allowable maximum displacement, finite difference method, FDM.

170 Yield, Yield Components, Soil Minerals and Aroma of KDML 105 Rice in Tungkularonghai, Roi-Et,Thailand

Authors: Kanlaya Kong-ngern, Tossapol Buaphan, Duangsamorn Tulaphitak, Naug Phuvongpha, Sirirut Wongpakonkul, Piyada Threerakulpisut

Abstract:

Pearson's correlation coefficient and sequential path analysis have been used to determine the interrelationships among yield, yield components, soil minerals, and aroma of Khao Dawk Mali (KDML) 105 rice grown in the Tungkularonghai area of Roi-Et province, in northeast Thailand. Pearson's correlation coefficient showed that the number of panicles was the only factor with a significant positive (0.790**) effect on grain yield. Sequential path analysis revealed that the number of panicles, followed by the number of fertile spikelets and 100-grain weight, were the first-order factors with positive direct effects on grain yield, whereas the other factors analyzed influenced grain yield only indirectly. This study also indicated that no significant relationship was found between the aroma level and any of the factors analyzed.

Keywords: 2-Acetyl-1-Pyrroline, rice aroma
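
The correlation-plus-path decomposition works as follows: each component's simple correlation with yield splits exactly into a direct effect (its standardized partial regression coefficient) and indirect effects through the other components. The data below are synthetic placeholders, not the field measurements.

```python
# Path analysis: direct vs. indirect effects of yield components on yield.
import numpy as np

rng = np.random.default_rng(14)
n = 200
panicles = rng.normal(size=n)
spikelets = 0.5 * panicles + rng.normal(size=n)
grain_wt = rng.normal(size=n)
yield_ = 0.8 * panicles + 0.3 * spikelets + 0.2 * grain_wt + rng.normal(scale=0.5, size=n)

X = np.column_stack([panicles, spikelets, grain_wt])
Xs = (X - X.mean(0)) / X.std(0)                 # standardize predictors
ys = (yield_ - yield_.mean()) / yield_.std()
b, *_ = np.linalg.lstsq(Xs, ys, rcond=None)     # direct effects (path coefficients)
R = np.corrcoef(Xs, rowvar=False)
total = R @ b                                   # equals the simple correlations
for name, d, t in zip(["panicles", "spikelets", "grain wt"], b, total):
    print(f"{name}: direct {d:+.2f}, total r {t:+.2f}, indirect {t - d:+.2f}")
```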

169 Probabilistic Method of Wind Generation Placement for Congestion Management

Authors: S. Z. Moussavi, A. Badri, F. Rastegar Kashkooli

Abstract:

Wind farms (WFs) with high levels of penetration are being established in power systems worldwide more rapidly than other renewable resources. The Independent System Operator (ISO), as a policy maker, should propose appropriate places for WF installation in order to maximize the benefits for investors. There is also the possibility of congestion relief from newly installed WFs, which the ISO should take into account when proposing locations for WF installation. In this context, an efficient wind farm (WF) placement method is proposed in order to reduce the burden on congested lines. Since wind speed is a random variable and load forecasts also contain uncertainties, probabilistic approaches are used for this type of study. An AC probabilistic optimal power flow (P-OPF) is formulated and solved using Monte Carlo Simulation (MCS). To reduce computation time, point estimate methods (PEM) are introduced as an efficient alternative to the time-demanding MCS. Subsequently, the optimal WF placement is determined using generation shift distribution factors (GSDF), considering a new parameter entitled the wind availability factor (WAF). To obtain more realistic results, N-1 contingency analysis is employed to find the optimal size of the WF by means of line outage distribution factors (LODF). The IEEE 30-bus test system is used to show and compare the accuracy of the proposed methodology.

Keywords: Probabilistic optimal power flow, wind power, point estimate methods, congestion management.
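
Hong's two-point estimate method, one common PEM variant, is easy to sketch for symmetric inputs: each of the m variables is evaluated at mu +/- sqrt(m)*sigma with the others at their means, for 2m model runs in total. The stand-in function and numbers are assumptions, not the P-OPF formulation.

```python
# Two-point estimate method (Hong's 2m scheme, zero-skewness inputs).
import numpy as np

mu = np.array([1.0, 0.9, 8.0])        # e.g. loads and mean wind speed (assumed)
sd = np.array([0.05, 0.04, 2.0])

def opf(x):                           # placeholder for one deterministic run
    return 2.0 * x[0] + 1.5 * x[1] + 0.4 * x[2] ** 1.5

m = len(mu)
xi = np.sqrt(m)                       # standard locations for zero skewness
moments = np.zeros(2)                 # raw moments E[Y], E[Y^2]
for k in range(m):
    for sign in (+1, -1):
        x = mu.copy()
        x[k] += sign * xi * sd[k]
        y = opf(x)
        moments += (1.0 / (2 * m)) * np.array([y, y * y])

mean, var = moments[0], moments[1] - moments[0] ** 2
print(f"output mean ~ {mean:.3f}, std ~ {np.sqrt(var):.3f} (6 runs vs. thousands for MCS)")
```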

168 Method of Parameter Calibration for Error Term in Stochastic User Equilibrium Traffic Assignment Model

Authors: Xiang Zhang, David Rey, S. Travis Waller

Abstract:

The Stochastic User Equilibrium (SUE) model is a widely used traffic assignment model in transportation planning and is regarded as more advanced than the Deterministic User Equilibrium (DUE) model. However, the performance of the SUE model depends on its error term parameter. The objective of this paper is to propose a systematic method for determining an appropriate error term parameter value for the SUE model. First, the significance of the parameter is explored through a numerical example. Second, a parameter calibration method is developed based on the logit-based route choice model. The calibration process is realized through multiple nonlinear regression, using sequential quadratic programming combined with the least squares method. Finally, a case analysis is conducted to demonstrate the application of the calibration process and to validate the better performance of the SUE model calibrated by the proposed method compared to SUE models under other parameter values and to the DUE model.

Keywords: Parameter calibration, sequential quadratic programming, Stochastic User Equilibrium, traffic assignment, transportation planning.
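
Why the error term parameter matters is visible in a three-route logit example: the dispersion parameter scales how strongly travelers concentrate on the cheapest route, approaching DUE-like all-or-nothing behavior as it grows. The route costs are illustrative.

```python
# Logit route choice: flow shares as a function of the dispersion parameter.
import numpy as np

def logit_shares(costs, theta):
    u = -theta * np.asarray(costs)
    e = np.exp(u - u.max())          # stabilized softmax over route utilities
    return e / e.sum()

costs = [20.0, 22.0, 25.0]           # three parallel routes, travel time [min]
for theta in (0.05, 0.5, 5.0):
    print(theta, np.round(logit_shares(costs, theta), 3))
# small theta -> near-uniform split; large theta -> nearly all-or-nothing (DUE-like)
```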

167 The Application of Real Options to Capital Budgeting

Authors: George Yungchih Wang

Abstract:

Real options theory suggests that the managerial flexibility embedded within irreversible investments can account for a significant value in project valuation. Although the argument has been a dominant focus of capital investment theory for decades, recent survey literature in capital budgeting indicates that corporate practitioners still do not explicitly apply real options in investment decisions. In this paper, we explore how real options decision criteria can be transformed into equivalent capital budgeting criteria under uncertainty, assuming that the underlying stochastic process follows a geometric Brownian motion (GBM), a mixed diffusion-jump (MX), or a mean-reverting process (MR). These equivalent valuation techniques can be readily decomposed into conventional investment rules and "option impacts", the latter of which describe the effect on optimal investment rules when the option value is considered. Based on numerical analysis and Monte Carlo simulation, three major findings are derived. First, it is shown that real options can be successfully integrated into the mindset of conventional capital budgeting. Second, the inclusion of option impacts tends to delay investment; the delay effect is the most significant under a GBM process and the least significant under an MR process. Third, it is optimal to adopt the new capital budgeting criteria in investment decision-making, and adopting a suboptimal investment rule without considering real options could lead to a substantial loss in value.

Keywords: real options, capital budgeting, geometric Brownian motion, mixed diffusion-jump, mean-reverting process
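
The GBM case can be sketched in a few lines: compare investing now with the discounted expected payoff of waiting one year, simulated under risk-neutral GBM. All numbers are illustrative, not the paper's calibration.

```python
# Monte Carlo "option to delay" under geometric Brownian motion.
import numpy as np

rng = np.random.default_rng(12)
V0, I, sigma, r, T = 100.0, 95.0, 0.25, 0.05, 1.0   # value, cost, vol, rate, delay

# Risk-neutral GBM endpoint: V_T = V0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z)
Z = rng.standard_normal(1_000_000)
VT = V0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)

npv_now = V0 - I
wait = np.exp(-r * T) * np.maximum(VT - I, 0.0).mean()   # value of delaying
print(f"invest now: {npv_now:.2f}, value of waiting: {wait:.2f}")
print("delay optimal:", wait > npv_now)
```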

166 Applying Eyring's Accelerated Life Testing Model to "Times to Breakdown" of Insulating Fluid: A Combined Approach of an Accelerated and a Sequential Life Testing

Authors: D. I. De Souza, D. R. Fonseca, D. Kipper

Abstract:

In this paper, the purpose of the test is to assess whether the accelerated model proposed by Eyring is able to translate results for the shape and scale parameters of an underlying Weibull model, obtained under two accelerated stress conditions, into the expected results for these parameters under normal use conditions. The product being analyzed is a new type of insulating fluid, and the accelerating factor is the voltage stress applied to the fluid at two different levels (30 kV and 40 kV). The normal operating voltage is 25 kV; in this case, it was possible to test the insulating fluid at the normal use voltage as well. The results for the two parameters of the Weibull model obtained under the normal use condition, and those translated from the accelerated conditions to normal conditions, are compared with each other to assess the accuracy of the Eyring model when the accelerating factor is only the voltage stress.

Keywords: Eyring Accelerated Model, Sequential Life Testing, Two-Parameter Weibull Distribution, Voltage Stresses.
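
The translation step can be sketched under an Eyring-type life-stress form, life(V) = (1/V) * exp(A + B/V), assuming a common Weibull shape across stress levels: the scales fitted at 30 kV and 40 kV pin down A and B, and the 25 kV scale follows. The scale values below are invented for illustration.

```python
# Extrapolating a Weibull scale to use conditions via an Eyring-type model.
import numpy as np

V = np.array([30.0, 40.0])            # accelerated voltage stresses [kV]
theta = np.array([120.0, 35.0])       # fitted Weibull scales at those stresses

# ln(theta * V) = A + B / V  ->  two linear equations in (A, B)
M = np.c_[np.ones(2), 1.0 / V]
A, B = np.linalg.solve(M, np.log(theta * V))

V_use = 25.0
theta_use = np.exp(A + B / V_use) / V_use
print(f"extrapolated Weibull scale at {V_use:.0f} kV: {theta_use:.1f}")
```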

165 Application of a Modified BCR Approach to Investigate the Mobility and Availability of Trace Elements (As, Ba, Cd, Co, Cr, Cu, Mo, Ni, Pb, Zn, and Hg) from a Solid Residue Matrix Designed for Soil Amendment

Authors: Mikko Mäkelä, Risto Pöykiö, Gary Watkins, Hannu Nurmesniemi, Olli Dahl

Abstract:

Trace element speciation of an integrated soil amendment matrix was studied with a modified BCR sequential extraction procedure. The analysis included pseudo-total concentration determinations according to USEPA 3051A and relevant physicochemical properties by standardized methods. Based on the results, the soil amendment matrix possessed neutralization capacity comparable to commercial fertilizers. Additionally, the pseudo-total concentrations of all trace elements included in the Finnish regulation for agricultural fertilizers were lower than the respective statutory limit values. According to chemical speciation, the lability of trace elements increased in the following order: Hg < Cr < Co < Cu < As < Zn < Ni < Pb < Cd < V < Mo < Ba. The validity of the BCR approach as a tool for chemical speciation was confirmed by the additional acid digestion phase. Recovery of trace elements during the procedure assured the validity of the approach and indicated good quality of the analytical work.

Keywords: BCR, bioavailability, trace element, industrial residue, sequential extraction
