Search results for: multi-Monte Carlo method.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8128

8038 Economic Evaluation of an Offshore Wind Project under Uncertainty and Risk Circumstances

Authors: Sayed Amir Hamzeh Mirkheshti

Abstract:

Offshore wind energy, as a strategic renewable energy, has been growing rapidly owing to its availability, abundance and clean nature. On the other hand, the budget of such a project is considerably higher than that of other renewable energies, and it takes longer to complete. Accordingly, a precise estimation of time and cost is needed in order to raise awareness among developers and society and to convince them to develop this kind of energy despite its difficulties. Risks occurring during the project cause its duration and cost to change constantly. Therefore, to develop offshore wind power, it is critical to consider all potential risks that impact the project and to simulate their effects. Knowing these risks is useful for selecting the most effective response strategies, such as avoidance, transfer and acceptance, in order to decrease their probability and impact. This paper presents an evaluation of the feasibility of a 500 MW offshore wind project in the Persian Gulf and examines it under uncertain resources and risk. The purpose of this study is to evaluate the time and cost of the offshore wind project under risk circumstances and uncertain resources by using Monte Carlo simulation. We analyzed each risk and activity along with their distribution functions and their effect on the project.
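
As a minimal sketch of the kind of Monte Carlo time-and-cost analysis described in this abstract (not the authors' actual model), the following Python example propagates triangular activity-duration and cost distributions together with simple risk events; every activity name, distribution and probability here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo trials

# Hypothetical activities: (min, most likely, max) duration in months and cost in M$
activities = {
    "foundations": ((8, 10, 14), (120, 150, 200)),
    "turbines":    ((6, 8, 12),  (300, 340, 420)),
    "grid":        ((4, 5, 8),   (60, 75, 110)),
}
# Hypothetical risk events: (probability, extra months, extra cost in M$)
risks = [(0.30, 2, 25), (0.10, 6, 80)]

duration = np.zeros(N)
cost = np.zeros(N)
for (dmin, dml, dmax), (cmin, cml, cmax) in activities.values():
    duration += rng.triangular(dmin, dml, dmax, N)
    cost += rng.triangular(cmin, cml, cmax, N)
for p, dd, dc in risks:
    hit = rng.random(N) < p          # does the risk occur in this trial?
    duration += hit * dd
    cost += hit * dc

print(f"duration: mean {duration.mean():.1f} months, 90th pct {np.percentile(duration, 90):.1f}")
print(f"cost:     mean {cost.mean():.0f} M$,    90th pct {np.percentile(cost, 90):.0f}")
```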

Keywords: Wind energy project; uncertain resources; risks; Monte Carlo simulation.

8037 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance relating to how PF should be calculated for homogeneous and heterogeneous rock masses, or what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
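
For readers unfamiliar with the mechanics, a minimal sketch of computing PF by Monte Carlo sampling is given below, assuming a simple planar-sliding limit equilibrium model with hypothetical geometry and normally distributed cohesion and friction angle; it is an illustration of the general technique, not the analysis used in the case study.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Hypothetical planar-failure geometry and material statistics
area   = 50.0      # sliding plane area, m^2
weight = 4_000.0   # block weight, kN
theta  = np.radians(35.0)   # plane dip

cohesion = rng.normal(25.0, 5.0, N)                 # kPa, normally distributed
phi      = np.radians(rng.normal(32.0, 3.0, N))     # friction angle, normally distributed

driving   = weight * np.sin(theta)
resisting = cohesion * area + weight * np.cos(theta) * np.tan(phi)
fs = resisting / driving

pf = np.mean(fs < 1.0)          # probability of failure
print(f"mean FS = {fs.mean():.2f}, PF = {pf:.3%}")
```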

Keywords: Probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability.

8036 Comparison of Wind Fragility for Window Systems in Simplified 10- and 15-Story Buildings Considering Exposure Category

Authors: Viriyavudh Sim, WooYoung Jung

Abstract:

Window systems in high-rise buildings are occasionally subjected to excessive wind intensity, particularly during typhoons. The failure of a window system does not affect the overall safety of the structural performance; however, it can endanger the safety of the residents. In this paper, fragility curves for the window systems of two residential buildings were compared. The probability of failure for each individual window was determined with the Monte Carlo simulation method. Then, a lognormal cumulative distribution function was used to represent the fragility. The results showed that windows located on the edge of the leeward wall were more susceptible to wind load and that the probability of failure for each window panel increased at higher floors.
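
A minimal sketch of the general workflow, assuming a hypothetical window capacity distribution and a crude wind-pressure demand model, is shown below: Monte Carlo failure probabilities are computed over a grid of wind speeds and a lognormal fragility curve is fitted to them. It illustrates the technique only and is not the authors' structural model.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
N = 50_000
wind_speeds = np.arange(20, 81, 5)   # m/s, hypothetical grid

# Hypothetical window capacity (kPa), lognormal with median 3.0 kPa
capacity = rng.lognormal(mean=np.log(3.0), sigma=0.2, size=N)

pf = []
for v in wind_speeds:
    # Crude demand model: dynamic pressure times an assumed combined coefficient of 1.5
    demand = 0.5 * 1.225 * v**2 / 1000 * 1.5 * rng.normal(1.0, 0.15, N)   # kPa
    pf.append(np.mean(demand > capacity))
pf = np.array(pf)

# Fit a lognormal fragility curve P(v) = Phi(ln(v / median) / beta)
frag = lambda v, median, beta: norm.cdf(np.log(v / median) / beta)
(median, beta), _ = curve_fit(frag, wind_speeds, pf, p0=(60.0, 0.3),
                              bounds=([1.0, 0.01], [200.0, 2.0]))
print(f"fitted median capacity wind speed = {median:.1f} m/s, beta = {beta:.2f}")
```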

Keywords: Wind fragility, window system, high rise building.

8035 Generalized Mean-field Theory of Phase Unwrapping via Multiple Interferograms

Authors: Yohei Saika

Abstract:

On the basis of Bayesian inference using the maximizer of the posterior marginal estimate, we carry out phase unwrapping using multiple interferograms via generalized mean-field theory. From numerical calculations for a typical wave-front in remote sensing using synthetic aperture radar interferometry, the phase diagram in hyper-parameter space clarifies that the present method succeeds in phase unwrapping perfectly under the constraint of the surface-consistency condition, if the interferograms are not corrupted by any noise. We also find that the prior is useful for extending the range of phases for which phase unwrapping succeeds under the constraint of the surface-consistency condition. These results are quantitatively confirmed by Monte Carlo simulation.

Keywords: Bayesian inference, generalized mean-field theory, phase unwrapping, statistical mechanics.

8034 Reliability Indices Evaluation of SEIG Rotor Core Magnetization with Minimum Capacitive Excitation for WECs

Authors: Lokesh Varshney, R. K. Saket

Abstract:

This paper presents a reliability indices evaluation of the rotor core magnetization of an induction motor operated as a self-excited induction generator (SEIG), using a probability distribution approach and Monte Carlo simulation. Parallel capacitors with the calculated minimum capacitive value were connected across the terminals of the induction motor operated as a SEIG with unregulated shaft speed during the experimental study. A three-phase, 4-pole, 50 Hz, 5.5 hp, 12.3 A, 230 V induction motor coupled with a DC shunt motor was tested in the electrical machine laboratory with variable reactive loads. Based on this experimental study, it is possible to choose a reliable induction machine to operate as a SEIG for unregulated renewable energy applications in remote areas or where the grid is not available. The failure density function, cumulative failure distribution function, survivor function, hazard model, probability of success and probability of failure for the reliability evaluation of the three-phase induction motor operating as a SEIG are presented graphically in this paper.
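
As a rough illustration of how such reliability indices can be estimated from Monte Carlo samples, the sketch below assumes a hypothetical Weibull distribution for the time to failure and derives the failure density, cumulative failure distribution, survivor function and hazard empirically; the distribution and its parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

# Hypothetical Weibull-distributed time to failure (hours)
shape, scale = 1.8, 12_000.0
t_fail = scale * rng.weibull(shape, N)

t = np.linspace(0, 30_000, 300)
# Empirical reliability indices estimated from the Monte Carlo sample
F = np.array([np.mean(t_fail <= ti) for ti in t])     # cumulative failure distribution
R = 1.0 - F                                           # survivor (reliability) function
f = np.gradient(F, t)                                 # failure density function
hazard = np.divide(f, R, out=np.zeros_like(f), where=R > 0)

print(f"P(success at 10,000 h) = {R[t.searchsorted(10_000)]:.3f}")
```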

Keywords: Residual magnetism, magnetization curve, induction motor, self excited induction generator, probability distribution, Monte Carlo simulation.

8033 Estimated Production Potential of Types of Wind Turbines Connected to the Network Using Random Numbers Simulation

Authors: Saeid Nahi, Seyed Mohammad Hossein Nabavi

Abstract:

Nowadays, in power systems, energy generation by wind has become very important. The electrical energy produced by a wind turbine at a site depends on several factors, such as the wind speed and wind profile of the site and, in particular, the turbine cut-in speed, rated speed and cut-out speed. On the other hand, several different types of turbines are available in the market. Therefore, selecting a turbine whose capacity can meet the needs of electricity consumers with high efficiency is important and necessary. In this context, calculating the amount of wind power is very important to help optimize the overall network and system operation and to determine the parameters of wind power. In this article, to help calculate the output of a wind power plant connected to the national network in the Manjil wind region, the best type of turbine is selected and a power delivery profile appropriate to the network is obtained using the Monte Carlo method. Minute-by-minute wind speed data from the Manjil wind site over one year are used. The necessary simulations, based on the random numbers simulation method with repetition, were carried out using MATLAB and Excel.
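
A minimal sketch of this type of calculation is given below: wind speeds drawn from a hypothetical Weibull distribution are passed through a generic piecewise turbine power curve defined by cut-in, rated and cut-out speeds, and the capacity factor and expected annual energy are estimated by Monte Carlo. All turbine and site parameters are assumptions, not the Manjil data.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 500_000   # simulated wind-speed samples

# Hypothetical turbine characteristics (cut-in, rated, cut-out speeds in m/s; rated power in kW)
v_in, v_rated, v_out, p_rated = 3.5, 12.0, 25.0, 2000.0

def power(v):
    """Simple piecewise power curve: cubic rise between cut-in and rated speed."""
    p = np.zeros_like(v)
    rising = (v >= v_in) & (v < v_rated)
    p[rising] = p_rated * (v[rising]**3 - v_in**3) / (v_rated**3 - v_in**3)
    p[(v >= v_rated) & (v <= v_out)] = p_rated
    return p   # zero below cut-in and above cut-out

# Hypothetical site wind statistics modelled with a Weibull distribution
v = 7.5 * rng.weibull(2.0, N)    # scale 7.5 m/s, shape 2.0
p = power(v)

print(f"capacity factor        = {p.mean() / p_rated:.2%}")
print(f"expected annual energy = {p.mean() * 8760 / 1000:.0f} MWh")
```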

Keywords: wind turbine, efficiency, wind turbine work points, Random Numbers, reliability

8032 Spectrum Analysis with Monte Carlo Simulation, BEAMnrc, for Low Energy X-Ray

Authors: Z. Salehi Dehyagani, A. L. Yusoff

Abstract:

BEAMnrc was used to calculate the spectrum and HVL of the X-ray beam during low-energy X-ray radiation, using the tube model SRO 33/100/ROT 350 Philips. The results of the BEAMnrc simulation and measurements were compared to IPEM report number 78 and the SpekCalc software. Three tube voltages, 127, 103 and 84 kV, were used. In these simulations, a tungsten anode with a 1.2 mm Be window was used as the source. HVLs were calculated from the BEAMnrc spectrum with the air kerma method for four different filters. For BEAMnrc, one billion particles were used as original particles for all simulations. The results show that, for 127 kV, the maximum difference between BEAMnrc and measurements was 5.2% and the minimum was 0.7%; the maximum difference between BEAMnrc and IPEM was 9.1% and the minimum was 2.3%; and the maximum difference between BEAMnrc and SpekCalc was 3.2% and the minimum was 2.8%. The results show that BEAMnrc is able to satisfactorily predict the quantities of a low-energy beam as well as high-energy X-ray radiation.

Keywords: BEAMnrc, Monte Carlo, HVL

8031 Angles of Arrival Estimation with Unitary Partial Propagator

Authors: Youssef Khmou, Said Safi

Abstract:

In this paper, we investigate the effect of a real-valued transformation of the spectral matrix of the received data on the Angles of Arrival estimation problem. Indeed, the unitary transformation of the Partial Propagator (UPP) for narrowband sources is proposed and applied to a Uniform Linear Array (ULA).

Monte Carlo simulations demonstrate the performance of the UPP spectrum in comparison with the Forward-Backward Partial Propagator (FBPP) and the Unitary Propagator (UP). The results show that when some of the sources are fully correlated and closer than the Rayleigh angular resolution limit of the broadside array, the UPP method outperforms the FBPP in both spatial resolution and computational complexity.

Keywords: DOA, Uniform Linear Array, Narrowband, Propagator, Real valued transformation, Subspace, Unitary Operator.

8030 Effect of Birks Constant and Defocusing Parameter on Triple-to-Double Coincidence Ratio Parameter in Monte Carlo Simulation-GEANT4

Authors: F. Abubaker, F. Tortorici, M. Capogni, C. Sutera, V. Bellini

Abstract:

This project concerns the detection efficiency of the portable Triple-to-Double Coincidence Ratio (TDCR) detector at the National Institute of Metrology of Ionizing Radiation (INMRI-ENEA), which allows direct activity measurement and radionuclide standardization for pure-beta emitter or pure electron capture radionuclides. The dependency of the simulated detection efficiency of the TDCR, obtained using the Monte Carlo simulation code Geant4, on the Birks factor (kB) and the defocusing parameter has been examined, especially for low-energy beta-emitter radionuclides such as 3H and 14C, for which this dependency is relevant. The results achieved in this analysis can be used for selecting the best kB factor and defocusing parameter for computing the theoretical TDCR parameter value. The theoretical results were compared with the available ones, measured by the ENEA TDCR portable detector, for some pure-beta emitter radionuclides. This analysis allowed us to improve the knowledge of the characteristics of the ENEA TDCR detector, which can be used as a traveling instrument for in-situ measurements, with particular benefits in many applications in the field of nuclear medicine and in the nuclear energy industry.

Keywords: Birks constant, defocusing parameter, GEANT4 code, TDCR parameter.

8029 Resistance and Sub-Resistances of RC Beams Subjected to Multiple Failure Modes

Authors: F. Sangiorgio, J. Silfwerbrand, G. Mancini

Abstract:

Geometric and mechanical properties all influence the resistance of RC structures and may, in certain combinations of property values, increase the risk of a brittle failure of the whole system. This paper presents a statistical and probabilistic investigation of the resistance of RC beams designed according to Eurocodes 2 and 8 and subjected to multiple failure modes, under both the natural variation of material properties and the uncertainty associated with cross-section and transverse reinforcement geometry. A full probabilistic model based on the JCSS Probabilistic Model Code is derived. Different beams are studied through material nonlinear analysis via Monte Carlo simulations. The resistance model is consistent with Eurocode 2. Both a multivariate statistical evaluation and a data clustering analysis of the outcomes are then performed. Results show that the ultimate load behaviour of RC beams subjected to flexural and shear failure modes seems to be mainly influenced by the combination of the mechanical properties of both the longitudinal reinforcement and the stirrups, and the tensile strength of concrete, of which the latter appears to affect the overall response of the system in a nonlinear way. The model uncertainty of the resistance model used in the analysis undoubtedly plays an important role in interpreting the results.

Keywords: Modelling, Monte Carlo Simulations, Probabilistic Models, Data Clustering, Reinforced Concrete Members, Structural Design.

8028 Purity Monitor Studies in Medium Liquid Argon TPC

Authors: I. Badhrees

Abstract:

This paper is an attempt to describe some of the results found through a course of study in the field of particle physics. The study consists of two parts: one concerns the measurement of the cross section of the decay of the Z particle into two electrons, and the other deals with the measurement of the cross section of the multi-photon absorption process using a laser beam in a Liquid Argon Time Projection Chamber.

The first part of the paper concerns the results based on the analysis of a data sample containing 8120 ee candidates used to reconstruct the mass of the Z particle, where each event has an ee pair with PT(e) > 20 GeV and η(e) < 2.5. Monte Carlo templates of the reconstructed Z particle were produced as a function of the Z mass scale. The distribution of the reconstructed Z mass in the data was compared to the Monte Carlo templates, and the total cross section is calculated to be 1432 pb.

The second part concerns the Liquid Argon Time Projection Chamber (LAr TPC) and the results of the interaction of a UV laser (Nd-YAG with λ = 266 nm) with LAr, through the study of the multi-photon ionization process as part of the R&D at Bern University. The main result of this study was the cross section of the multi-photon ionization process of the LAr, σe = (1.24 ± 0.10stat ± 0.30sys) × 10^-56 cm^4.

Keywords: ATLAS, CERN, KACST, LArTPC, Particle Physics.

8027 Jeffrey's Prior for Unknown Sinusoidal Noise Model via Cramer-Rao Lower Bound

Authors: Samuel A. Phillips, Emmanuel A. Ayanlowo, Rasaki O. Olanrewaju, Olayode Fatoki

Abstract:

This paper employs the Jeffrey's prior technique in estimating the periodograms and frequency of a sinusoidal model for unknown noisy time-variant or oscillating events (data) in a Bayesian setting. The non-informative Jeffrey's prior was adopted for the posterior trigonometric function of the sinusoidal model, such that Cramer-Rao Lower Bound (CRLB) inference was used in carving out the minimum variance needed to curb the invariance-structure effect for unknown noisy time observational and repeated circular patterns. An average monthly oscillating temperature series measured in degrees Celsius (°C) from 1901 to 2014 was subjected to the posterior solution of the unknown noisy events of the sinusoidal model via Markov Chain Monte Carlo (MCMC). It was deduced not only that a period of two minutes is required before completing a cycle of changing temperature from one particular degree Celsius to another, but also that the sinusoidal model via the CRLB-Jeffrey's prior for unknown noisy events produced a smaller posterior Maximum A Posteriori (MAP) estimate compared to that for known noisy events.
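
As a simplified illustration of fitting a sinusoidal model by MCMC (using a flat prior and a random-walk Metropolis sampler rather than the paper's Jeffrey's-prior/CRLB construction), the sketch below estimates the amplitude components and angular frequency of a hypothetical noisy series.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical noisy sinusoidal series: y = A*cos(w*t) + B*sin(w*t) + noise
t = np.arange(200.0)
true_w = 2 * np.pi / 12.0
y = 1.5 * np.cos(true_w * t) + 0.8 * np.sin(true_w * t) + rng.normal(0, 1.0, t.size)

def log_post(theta):
    """Log-posterior with a flat (non-informative) prior on (A, B, w) and unit noise variance."""
    A, B, w = theta
    if not (0 < w < np.pi):
        return -np.inf
    resid = y - A * np.cos(w * t) - B * np.sin(w * t)
    return -0.5 * np.sum(resid**2)

# Random-walk Metropolis sampler
theta = np.array([1.0, 1.0, 0.5])
lp = log_post(theta)
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0, [0.05, 0.05, 0.002])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

samples = np.array(samples[5_000:])       # discard burn-in
print("posterior mean (A, B, w):", samples.mean(axis=0))
```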

Keywords: Cramer-Rao Lower Bound (CRLB), Jeffrey's prior, Sinusoidal, Maximum A Posteriori (MAP), Markov Chain Monte Carlo (MCMC), Periodograms.

8026 Effect of Size of the Step in the Response Surface Methodology using Nonlinear Test Functions

Authors: Jesús Everardo Olguín Tiznado, Rafael García Martínez, Claudia Camargo Wilson, Juan Andrés López Barreras, Everardo Inzunza González, Javier Ordorica Villalvazo

Abstract:

The response surface methodology (RSM) is a collection of mathematical and statistical techniques useful in the modeling and analysis of problems in which the dependent variable is influenced by several independent variables, in order to determine the conditions under which these variables should operate to optimize a production process. The RSM estimates a first-order regression model and sets the search direction using the method of maximum/minimum slope up/down (MMS U/D). However, this method selects the step size intuitively, which can affect the efficiency of the RSM. This paper assesses how the step size affects the efficiency of this methodology. The numerical examples are carried out through Monte Carlo experiments, evaluating three response variables: the efficiency of the gain function, the distance to the optimum and the number of iterations. The results of the simulation experiments showed that the efficiency of the gain function and the distance to the optimum were not affected by the step size, while the number of iterations was affected by both the step size and the type of test function used.
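
A minimal sketch of one steepest-ascent (maximum-slope) RSM iteration is shown below: a first-order model is fitted on a 2^2 factorial design around the current centre and the centre is moved along the fitted gradient with a user-chosen step size. The test function, noise level and step size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

def test_function(x):
    """Hypothetical noisy nonlinear test response (maximum at (3, 2))."""
    return -(x[0] - 3)**2 - 2 * (x[1] - 2)**2 + rng.normal(0, 0.1)

def steepest_ascent_step(center, step_size, delta=0.5):
    """Fit a first-order model on a 2^2 factorial design and move along its gradient."""
    design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)
    X = np.column_stack([np.ones(4), design])
    y = np.array([test_function(center + delta * d) for d in design])
    coefs = np.linalg.lstsq(X, y, rcond=None)[0]      # [b0, b1, b2]
    grad = coefs[1:]
    return center + step_size * grad / np.linalg.norm(grad)

center = np.array([0.0, 0.0])
for it in range(20):
    center = steepest_ascent_step(center, step_size=0.5)   # step size chosen by the user
print("centre after 20 steps:", np.round(center, 2))
```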

Keywords: RSM, dependent variable, independent variables, efficiency, simulation

8025 A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector

Authors: H. Aldousari, T. Buchacher, N. M. Spyrou

Abstract:

Cerium-doped lanthanum bromide LaBr3:Ce(5%) crystals are considered to be one of the most advanced scintillator materials used in PET scanning, combining a high light yield, fast decay time and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least in terms of source-to-detector distance, in order to obtain reliable measurements and efficiency. In this study a commercially available 25 mm x 25 mm BrilLanCeTM 380 LaBr3:Ce (5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were separately acquired at distances of 5, 10, 15, and 20 cm. As a result of the change in solid angle subtended by the detector, the geometric efficiency decreased with increasing distance. High efficiencies at short distances can cause pulse pile-up, when subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should be chosen to balance efficiency against pulse pile-up suppression, as otherwise pile-up corrections would be necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations have been carried out for the same setup, allowing a comparison of results. The advantages and disadvantages of each approach have been highlighted.
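
As a simple illustration of why the geometric efficiency falls with distance, the sketch below evaluates the fractional solid angle subtended by the 25 mm diameter detector face for an ideal on-axis point source at the four distances used in the study; it ignores intrinsic efficiency and scattering.

```python
import numpy as np

# On-axis point source facing a circular detector face (idealized geometry):
# geometric efficiency = fractional solid angle = 0.5 * (1 - d / sqrt(d^2 + r^2))
r = 12.5                                             # detector radius, mm
distances = np.array([50.0, 100.0, 150.0, 200.0])    # source-to-detector distances, mm

geo_eff = 0.5 * (1.0 - distances / np.sqrt(distances**2 + r**2))
for d, eff in zip(distances, geo_eff):
    print(f"d = {d:5.0f} mm  ->  geometric efficiency = {eff:.4f}")
```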

Keywords: BrilLanCeTM380 LaBr3:Ce(5%), Coincidence summing, GATE simulation, Geometric efficiency

8024 Weighted Harmonic Arnoldi Method for Large Interior Eigenproblems

Authors: Zhengsheng Wang, Jing Qi, Chuntao Liu, Yuanjun Li

Abstract:

The harmonic Arnoldi method can be used to find interior eigenpairs of large matrices. However, it has been shown that this method may converge erratically and may even fail to converge. In this paper, we present a new method for computing interior eigenpairs of large nonsymmetric matrices, called the weighted harmonic Arnoldi method. The implementation of the method has been tested by numerical examples; the results show that the method converges fast and works with high accuracy.

Keywords: Harmonic Arnoldi method, weighted harmonic Arnoldi method, eigenpair, interior eigenproblem, nonsymmetric matrix.

8023 Dissipation of Higher Mode using Numerical Integration Algorithm in Dynamic Analysis

Authors: Jin Sup Kim, Woo Young Jung, Minho Kwon

Abstract:

In general dynamic analyses, the lower-mode response is of interest; however, the higher modes of spatially discretized equations generally do not represent the real behavior and do not affect the global response much. Some implicit algorithms, therefore, are introduced to filter out the high-frequency modes using intended numerical error. The objective of this study is to introduce the P-method and the PC α-method and to compare them with the dissipation method and the Newmark method through stability analysis and a numerical example. The PC α-method gives more accuracy than the other methods because it is based on the α-method and inherits the superior properties of the implicit α-method. In finite element analysis, the PC α-method is more useful than the other methods because it is an explicit scheme and achieves second-order accuracy and numerical damping simultaneously.
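
For reference, a minimal sketch of the Newmark method used here as a comparison baseline is given below, applied to a hypothetical linear single-degree-of-freedom system with the average-acceleration parameters (β = 1/4, γ = 1/2); it is not an implementation of the P-method or PC α-method.

```python
import numpy as np

def newmark_sdof(m, c, k, load, dt, beta=0.25, gamma=0.5):
    """Incremental Newmark time integration for a linear SDOF system m*u'' + c*u' + k*u = p(t).
    beta = 1/4, gamma = 1/2 is the implicit, unconditionally stable average-acceleration scheme."""
    n = len(load)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (load[0] - c * v[0] - k * u[0]) / m
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(n - 1):
        dp = (load[i + 1] - load[i]
              + m * (v[i] / (beta * dt) + a[i] / (2 * beta))
              + c * (gamma * v[i] / beta + dt * a[i] * (gamma / (2 * beta) - 1)))
        du = dp / k_eff
        dv = gamma * du / (beta * dt) - gamma * v[i] / beta + dt * a[i] * (1 - gamma / (2 * beta))
        u[i + 1] = u[i] + du
        v[i + 1] = v[i] + dv
        a[i + 1] = (load[i + 1] - c * v[i + 1] - k * u[i + 1]) / m
    return u, v, a

# Hypothetical SDOF system under a short impulse
dt, T = 0.01, 5.0
time = np.arange(0, T, dt)
p = np.where(time < 0.1, 100.0, 0.0)
u, v, a = newmark_sdof(m=1.0, c=0.2, k=40.0, load=p, dt=dt)
print(f"peak displacement = {u.max():.3f}")
```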

Keywords: Dynamic, α-Method, P-Method, PC α-Method, Newmark method.

8022 Seismic Fragility for Sliding Failure of Weir Structure Considering the Process of Concrete Aging

Authors: HoYoung Son, Ki Young Kim, Woo Young Jung

Abstract:

This study investigated, by means of seismic fragility analysis, the change in weir structure performance when the durability of concrete, the main material of the weir structure, decreases due to aging. In the analysis, it was assumed that the elastic modulus of concrete was reduced by 10% in order to account for aged deterioration. The seismic fragility analysis was based on the Monte Carlo simulation method combined with a 2D nonlinear finite element model in the ABAQUS platform, with consideration of the deterioration of concrete. Finally, the seismic fragilities of the pre- and post-deterioration models were compared to study the performance of the weir. Results show that the probability of failure in the moderate damage state for the deteriorated model was larger than for the pre-deterioration model when the peak ground acceleration (PGA) exceeded 0.4 g.

Keywords: Weir, FEM, concrete, fragility, aging.

8021 Influence of Wind Induced Fatigue Damage in the Reliability of Wind Turbines

Authors: Emilio A. Berny-Brandt, Sonia E. Ruiz

Abstract:

Steel tubular towers serving as support structures for large wind turbines are subjected to several hundred million stress cycles caused by the turbulent nature of the wind. This causes high-cycle fatigue, which could govern the design of the tower. Maintaining the support structure after the wind turbine reaches its typical 20-year design life has become a common practice; however, quantifying the changes in the reliability of the tower is not usual. In this paper, the effect of fatigue damage in the wind turbine structure is studied with the use of fracture mechanics, and a method to estimate the reliability of the structure over time is proposed. A representative wind turbine located in Oaxaca, Mexico, is then studied. It is found that the system reliability is significantly affected by the accumulation of fatigue damage.
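
A minimal sketch of the fracture-mechanics idea, assuming constant-amplitude loading, a constant geometry factor and hypothetical Paris-law statistics, is shown below: the initial crack size and the Paris coefficient are sampled, the Paris law is integrated in closed form, and the probability of fatigue failure within a given service time is estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 200_000

# Hypothetical fracture-mechanics inputs for a welded detail (Paris law da/dN = C * dK^m)
m = 3.0
C = rng.lognormal(mean=np.log(3e-13), sigma=0.3, size=N)    # Paris coefficient (m/cycle, dK in MPa*sqrt(m))
a0 = rng.lognormal(mean=np.log(0.5e-3), sigma=0.4, size=N)  # initial crack depth, m
a_crit = 0.025                                              # critical crack depth, m
stress_range = 40.0                                         # MPa, constant-amplitude idealization
Y = 1.12                                                    # geometry factor
cycles_per_year = 5e7

# Closed-form integration of the Paris law for constant Y and m != 2:
# N_f = (a_crit^(1-m/2) - a0^(1-m/2)) / ((1 - m/2) * C * (Y*dS)^m * pi^(m/2))
exponent = 1.0 - m / 2.0
n_fail = (a_crit**exponent - a0**exponent) / (exponent * C * (Y * stress_range)**m * np.pi**(m / 2))
life_years = n_fail / cycles_per_year

for t in (10, 20, 30):
    print(f"probability of fatigue failure within {t} years: {np.mean(life_years <= t):.3%}")
```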

Keywords: Crack growth, fatigue, Monte Carlo simulation, structural reliability, wind turbines

8020 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous works on balanced panel data estimation within the context of fitting a PDRM for banks' audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, as well as a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE) and root mean square error (RMSE) of the parameter estimates. Eighteen different models at different specified conditions were fitted, and the best-fitted model is that of the within estimator when heteroscedasticity is severe at either a zero or positive serial correlation value. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, which establishes the fact that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.

Keywords: Audit fee, heteroscedasticity, Lagrange multiplier test, periodicity.

8019 A Case Study on the Numerical-Probability Approach for Deep Excavation Analysis

Authors: Komeil Valipourian

Abstract:

Urban advances and the growing need for developing infrastructure have increased the importance of deep excavations. In this study, after introducing probability analysis as an important issue, an attempt has been made to apply it to the deep excavation project of Bangkok's Metro as a case study. For this, a numerical probability model has been developed based on the Finite Difference Method and a Monte Carlo sampling approach. The results indicate that disregarding the issue of probability in this project would result in an inappropriate design of the retaining structure. Therefore, probabilistic redesign of the support is proposed and carried out as one of the applications of probability analysis. A 50% reduction in the flexural strength of the structure increases the failure probability by just 8%, within the allowable range, and helps improve economic conditions while maintaining mechanical efficiency. With regard to the lack of efficient design in most deep excavations, and by considering geometrical and geotechnical variability, an attempt was made to develop an optimum practical design standard for deep excavations based on failure probability. On this basis, a practical relationship is presented for estimating the maximum allowable horizontal displacement, which can help improve design conditions without having to develop the probability analysis.

Keywords: Numerical probability modeling, deep excavation, allowable maximum displacement, finite difference method, FDM.

8018 Probabilistic Method of Wind Generation Placement for Congestion Management

Authors: S. Z. Moussavi, A. Badri, F. Rastegar Kashkooli

Abstract:

Wind farms (WFs) with a high level of penetration are being established in power systems worldwide more rapidly than other renewable resources. The Independent System Operator (ISO), as a policy maker, should propose appropriate places for WF installation in order to maximize the benefits for the investors. There is also a possibility of congestion relief using the new installation of WFs, which should be taken into account by the ISO when proposing the locations for WF installation. In this context, an efficient wind farm (WF) placement method is proposed in order to reduce burdens on congested lines. Since the wind speed is a random variable and load forecasts also contain uncertainties, probabilistic approaches are used for this type of study. An AC probabilistic optimal power flow (P-OPF) is formulated and solved using Monte Carlo Simulations (MCS). In order to reduce computation time, point estimate methods (PEM) are introduced as an efficient alternative to the time-demanding MCS. Subsequently, the WF optimal placement is determined using generation shift distribution factors (GSDF), considering a new parameter entitled the wind availability factor (WAF). In order to obtain more realistic results, N-1 contingency analysis is employed to find the optimal size of the WF by means of line outage distribution factors (LODF). The IEEE 30-bus test system is used to show and compare the accuracy of the proposed methodology.

Keywords: Probabilistic optimal power flow, Wind power, Point estimate methods, Congestion management

8017 Ion Thruster Grid Lifetime Assessment Based on Its Structural Failure

Authors: Juan Li, Jiawen Qiu, Yuchuan Chu, Tianping Zhang, Wei Meng, Yanhui Jia, Xiaohui Liu

Abstract:

This article developed a 3D numerical model of the sputter erosion depth of an ion thruster optic system using the IFE-PIC (Immersed Finite Element-Particle-in-Cell) and Monte Carlo methods, and calculated the downstream surface sputter erosion rate of the accelerator grid, which was compared with LIPS-200 life test data. The results of the numerical model are in reasonable agreement with the measured data. Finally, we predicted the lifetime of the 20 cm diameter ion thruster via the erosion data obtained with the model. The ultimate result demonstrated that, under normal operating conditions, the erosion rate of the grooves worn into the downstream surface of the accelerator grid is 34.6 μm⁄1000 h, which means the conservative lifetime until structural failure occurs on the accelerator grid is 11500 hours.

Keywords: Ion thruster, accelerator grid, sputter erosion, lifetime assessment.

8016 Estimation of the Mean of the Selected Population

Authors: Kalu Ram Meena, Aditi Kar Gangopadhyay, Satrajit Mandal

Abstract:

Two normal populations with different means and the same variance are considered, where the variance is known. The population with the smaller sample mean is selected. Various estimators are constructed for the mean of the selected normal population. Finally, they are compared with respect to the bias and MSE risks by the method of Monte Carlo simulation, and their performances are analysed with the help of graphs.
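
As a small illustration of why estimation after selection is non-trivial, the Monte Carlo sketch below evaluates the bias and MSE of the naive estimator (the sample mean of the selected population) under hypothetical parameter values; the improved estimators studied in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)
trials, n, sigma = 200_000, 10, 1.0
mu1, mu2 = 0.0, 0.5            # hypothetical true means of the two populations

# Sample means of both populations in each trial
x1 = rng.normal(mu1, sigma / np.sqrt(n), trials)
x2 = rng.normal(mu2, sigma / np.sqrt(n), trials)

# Select the population with the smaller sample mean; the naive estimator of the
# selected mean is that population's own sample mean
selected_est = np.minimum(x1, x2)
selected_true = np.where(x1 < x2, mu1, mu2)

bias = np.mean(selected_est - selected_true)
mse = np.mean((selected_est - selected_true)**2)
print(f"naive estimator after selection: bias = {bias:+.4f}, MSE = {mse:.4f}")
# The negative bias illustrates why improved (e.g. Brewster-Zidek type) estimators are studied.
```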

Keywords: Estimation after selection, Brewster-Zidek technique.

8015 Electricity Load Modeling: An Application to Italian Market

Authors: Giovanni Masala, Stefania Marica

Abstract:

Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Besides, in the light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical data about electricity load highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends or holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is then to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model's parameters is performed by using data coming from the Italian market over a 6-year period (2007-2012). Then, we perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample inspection). The reliability of the model is supported by standard tests, which highlight a good fit of the simulated values.

Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression.

8014 The RK1GL2X3 Method for Initial Value Problems in Ordinary Differential Equations

Authors: J.S.C. Prentice

Abstract:

The RK1GL2X3 method is a numerical method for solving initial value problems in ordinary differential equations, and is based on the RK1GL2 method which, in turn, is a particular case of the general RKrGLm method. The RK1GL2X3 method is a fourth-order method, even though its underlying Runge-Kutta method RK1 is the first-order Euler method, and hence, RK1GL2X3 is considerably more efficient than RK1. This enhancement is achieved through an implementation involving triple-nested two-point Gauss-Legendre quadrature.

Keywords: RK1GL2X3, RK1GL2, RKrGLm, Runge-Kutta, Gauss-Legendre, initial value problem, local error, global error.

8013 Seat Assignment Problem Optimization

Authors: Mohammed Salem Alzahrani

Abstract:

In this paper, the optimality of the solution of an existing real-world assignment problem, known as the seat assignment problem, using the Seat Assignment Method (SAM) is discussed. SAM is a method newly derived from three existing methods, the Hungarian Method, the Northwest Corner Method and the Least Cost Method, in a special way that produces ease and fairness among all methods that solve the seat assignment problem.
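
For comparison, a minimal sketch of solving a small seat-assignment-type problem with the Hungarian method (one of the three methods SAM is derived from) is given below, using SciPy's linear_sum_assignment on a hypothetical cost matrix.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: rows = students, columns = seats,
# cost[i, j] = dissatisfaction of student i if assigned seat j
cost = np.array([
    [4, 1, 3, 2],
    [2, 0, 5, 3],
    [3, 2, 2, 4],
    [1, 3, 4, 0],
])

# The Hungarian method gives a minimum-cost one-to-one assignment
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    print(f"student {r} -> seat {c} (cost {cost[r, c]})")
print("total cost:", cost[rows, cols].sum())
```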

Keywords: Assignment Problem, Hungarian Method, Least Cost Method, Northwest Corner Method, Seat Assignment Method (SAM), A Real-World Assignment Problem.

8012 Optimization of Air Pollution Control Model for Mining

Authors: Zunaira Asif, Zhi Chen

Abstract:

Sustainable measures for air quality management are recognized as one of the most serious environmental concerns in mining regions. Mining operations emit various types of pollutants which have significant impacts on the environment. This study presents a stochastic control strategy by developing an air pollution control model to achieve a cost-effective solution. The optimization method is formulated to predict the cost of treatment using linear programming with an objective function and multiple constraints. The constraints mainly focus on two factors: the production of metal should not exceed the available resources, and air quality should meet the standard criteria for each pollutant. The applicability of this model is explored through a case study of an open pit metal mine in Utah, USA. This method simultaneously uses meteorological data as a dispersion transfer function to account for the actual local conditions. The probabilistic analysis and the uncertainties in the meteorological conditions are accomplished by Monte Carlo simulation. Reasonable results have been obtained to select the optimized treatment technology for PM2.5, PM10, NOx, and SO2. An additional comparison analysis shows that the baghouse is the least-cost option compared to the electrostatic precipitator and wet scrubbers for particulate matter, whereas non-selective catalytic reduction and dry flue-gas desulfurization are suitable for NOx and SO2 reduction, respectively. Thus, this model can aid planners in reducing these pollutants at a marginal cost by suggesting pollution control devices, while accounting for dynamic meteorological conditions and mining activities.
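
A minimal sketch of the linear-programming core of such a model is given below, with an entirely hypothetical objective and a single removal constraint solved with scipy.optimize.linprog; the real model would include production, multi-pollutant and dispersion constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: choose utilization levels (0..1) of three particulate-control devices
# to meet a PM removal target at minimum annualized cost.
cost = np.array([120.0, 300.0, 210.0])        # k$/yr: baghouse, ESP, wet scrubber
removal = np.array([950.0, 920.0, 880.0])     # tonnes PM removed per year at full utilization
required_removal = 1000.0                     # tonnes PM that must be removed per year

# linprog minimizes c @ x subject to A_ub @ x <= b_ub; "removal >= target" becomes "-removal <= -target"
res = linprog(c=cost,
              A_ub=-removal.reshape(1, -1),
              b_ub=[-required_removal],
              bounds=[(0, 1)] * 3,
              method="highs")

print("optimal utilization (baghouse, ESP, scrubber):", np.round(res.x, 2))
print(f"minimum annual cost: {res.fun:.1f} k$")
```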

Keywords: Air pollution, linear programming, mining, optimization, treatment technologies.

8011 Cyclostationary Gaussian Linearization for Analyzing Nonlinear System Response under Sinusoidal Signal and White Noise Excitation

Authors: R. J. Chang

Abstract:

A cyclostationary Gaussian linearization method is formulated for investigating the time-averaged response of a nonlinear system under sinusoidal signal and white noise excitation. The quantitative measures of the cyclostationary mean, variance, spectrum of the mean amplitude, and mean power spectral density of the noise are analyzed. The qualitative response behavior of stochastic jump and bifurcation is investigated. The validity of the present approach in predicting the quantitative and qualitative statistical responses is supported by Monte Carlo simulations. The present analysis, which does not impose restrictive analytical conditions, can be carried out directly by solving non-linear algebraic equations. The analytical solution gives reliable quantitative and qualitative predictions of the mean and noise response for the Duffing system subjected to both sinusoidal signal and white noise excitation.

Keywords: Cyclostationary, Duffing system, Gaussian linearization, sinusoidal signal and white noise.

8010 A New Method to Solve a Non Linear Differential System

Authors: Seifedine Kadry

Abstract:

In this article, our objective is to analyse the resolution of non-linear differential systems by combining the Newton and Continuation (N-C) methods. Iterative numerical methods converge when the initial condition is chosen close to the exact solution. The question of choosing the initial condition is answered by the N-C method.
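
A minimal sketch of the N-C idea is given below for a small hypothetical nonlinear algebraic system: the system is embedded in a one-parameter family, the easy problem at t = 0 is solved first, and each solution is reused as the Newton initial guess for the next value of the continuation parameter.

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Plain Newton iteration for F(x) = 0 with Jacobian J."""
    x = x0.astype(float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Hypothetical nonlinear system embedded in a family F_t:
# at t = 0 the second equation is simply x1 = 0 (easy), at t = 1 it is exp(x0) + x1 - 1 = 0 (target)
def F(x, t=1.0):
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     t * (np.exp(x[0]) + x[1] - 1.0) + (1 - t) * x[1]])

def J(x, t=1.0):
    return np.array([[2 * x[0], 2 * x[1]],
                     [t * np.exp(x[0]), 1.0]])

# March the continuation parameter from 0 to 1, reusing each solution as the next initial guess
x = np.array([2.0, 0.0])                      # solution of the easy problem at t = 0
for t in np.linspace(0.1, 1.0, 10):
    x = newton(lambda v: F(v, t), lambda v: J(v, t), x)
print("solution of the target system:", x)
```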

Keywords: Continuation Method, Newton Method, Finite Difference Method, Numerical Analysis and Non-Linear partial Differential Equation.

8009 An Efficient Method for Solving Multipoint Equation Boundary Value Problems

Authors: Ampon Dhamacharoen, Kanittha Chompuvised

Abstract:

In this work, we solve multipoint boundary value problems where the boundary value conditions are equations, using the Newton-Broyden Shooting Method (NBSM). The proposed method is tested on several problems from the literature and the results are compared with the available exact solutions. The experiments are given to illustrate the efficiency and implementation of the method.

Keywords: Boundary value problem, multipoint equation boundary value problems, shooting method, Newton-Broyden method.
