Search results for: Monte Carlo Simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5041

4921 Interval Estimation for Rainfall Mean in Northeastern Thailand

Authors: Nitaya Buntao

Abstract:

This paper considers the problem of interval estimation for the rainfall mean of the lognormal distribution and the delta-lognormal distribution in Northeastern Thailand. We compare the modified generalized pivotal approach (MGPA) with the modified method of variance estimates recovery (MMOVER). The performance of each method is examined in terms of coverage probability and average length by Monte Carlo simulation. An extensive simulation study indicates that the MMOVER performs better than the MGPA in terms of coverage probability, yielding highly accurate coverage probabilities.
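The coverage-probability machinery used to compare such intervals can be sketched in a few lines. The interval below is a naive large-sample one, purely illustrative; it stands in for neither MGPA nor MMOVER:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_coverage(mu=1.0, sigma=0.5, n=30, reps=4000, z=1.96):
    """Monte Carlo estimate of the coverage probability of a naive
    normal-theory interval for the lognormal mean exp(mu + sigma^2/2)."""
    true_mean = np.exp(mu + sigma**2 / 2)
    hits = 0
    for _ in range(reps):
        x = rng.lognormal(mu, sigma, n)
        half = z * x.std(ddof=1) / np.sqrt(n)   # half-width of the interval
        hits += abs(x.mean() - true_mean) <= half
    return hits / reps

cov = estimate_coverage()
```

A method such as MMOVER would replace the naive interval inside the loop; the closer `cov` lands to the nominal 0.95, the better the method.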

Keywords: rainfall mean, interval estimation, lognormal distribution, delta-lognormal distribution

Procedia PDF Downloads 421
4920 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion

Authors: Omran M. Kenshel, Alan J. O'Connor

Abstract:

Estimating the service life of Reinforced Concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners/engineers. Traditionally, bridge owners/engineers have relied on subjective engineering judgment, e.g. visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods of estimation are needed to aid in making reliable and more accurate predictions for the service life of RC structures, so that funds can be directed to the bridges found to be the most critical. Criticality of the structure can be considered either from the Structural Capacity (i.e. Ultimate Limit State) or from the Serviceability viewpoint, whichever is adopted. This paper considers the service life of the structure only from the Structural Capacity viewpoint. Considering the great variability associated with the parameters involved in the estimation process, a probabilistic approach is best suited. The probabilistic modelling adopted here uses the Monte Carlo simulation technique to estimate the reliability (i.e. probability of failure) of the structure under consideration. In this paper the authors used their own experimental data for the Correlation Length (CL) of the most important deterioration parameters. The CL is a parameter of the Correlation Function (CF) by which the spatial fluctuation of a given deterioration parameter is described. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load-carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability (SV) is only evident if the reliability of the structure is governed by flexure failure rather than by shear failure.
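At its core, the Monte Carlo reliability estimate counts how often a sampled demand exceeds a sampled capacity. The sketch below uses made-up lognormal/normal capacity and demand variables, not the paper's corrosion or spatial-variability model:

```python
import numpy as np

rng = np.random.default_rng(42)

def probability_of_failure(n=200_000):
    """Crude Monte Carlo estimate of Pf = P(R - S < 0) for a generic
    limit state: resistance R and load effect S (illustrative values)."""
    R = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)  # capacity
    S = rng.normal(loc=200.0, scale=30.0, size=n)              # demand
    return np.mean(R - S < 0.0)

pf = probability_of_failure()
```

Spatial variability enters such a model by replacing the scalar resistance with a random field discretized along the girder, correlated through the correlation length.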

Keywords: Chloride-induced corrosion, Monte-Carlo simulation, reinforced concrete, spatial variability

Procedia PDF Downloads 441
4919 Reliability Analysis of Variable Stiffness Composite Laminate Structures

Authors: A. Sohouli, A. Suleman

Abstract:

This study focuses on reliability analysis of variable stiffness composite laminate structures to investigate the potential structural improvement compared to conventional (straight-fiber) composite laminate structures. A computational framework was developed that consists of a deterministic design step and a reliability analysis step. The optimization part uses Discrete Material Optimization (DMO), and the reliability of the structure is computed by Monte Carlo Simulation (MCS) after applying the Stochastic Response Surface Method (SRSM). The design driver in the deterministic optimization is maximum stiffness, while the optimization respects certain manufacturing constraints to attain industrial relevance. These manufacturing constraints are that the change of orientation between adjacent patches cannot be too large and that the number of successive plies of a particular fiber orientation should not be too high. Variable stiffness composites may be manufactured by Automated Fiber Placement (AFP) machines, which provide consistent quality with good production rates. However, laps and gaps are the most important challenges in steering fibers and affect the performance of the structures. In this study, the optimal curved fiber paths at each layer of the composite are designed in the first step by DMO, and then the reliability analysis is applied to investigate the sensitivity of the structure, for different standard deviations, compared to straight-fiber-angle composites. The random variables are the material properties and the loads on the structures. The results show that variable stiffness composite laminate structures are much more reliable, even for high standard deviations of the material properties, than conventional composite laminate structures. The reason is that variable stiffness composite laminates allow stiffness tailoring and provide the possibility of adjusting the stress and strain distributions favorably in the structures.
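The SRSM-plus-MCS idea is to fit a cheap polynomial surrogate to a handful of expensive structural analyses and then run the Monte Carlo loop on the surrogate. A minimal sketch, with a made-up two-variable "model" standing in for the finite element analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_model(x1, x2):
    """Stand-in for a structural limit-state evaluation (g < 0 = failure)."""
    return 1.5 - 0.8 * x1 - 0.5 * x2 + 0.1 * x1 * x2

def features(x1, x2):
    """Quadratic response-surface basis."""
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1**2, x2**2, x1 * x2])

# 1) fit the response surface from a small design of experiments
X = rng.uniform(-2.0, 2.0, size=(30, 2))
coef, *_ = np.linalg.lstsq(features(X[:, 0], X[:, 1]),
                           expensive_model(X[:, 0], X[:, 1]), rcond=None)

# 2) cheap Monte Carlo on the surrogate with random inputs
n = 100_000
x1, x2 = rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.0, n)
pf = np.mean(features(x1, x2) @ coef < 0.0)
```

Only 30 "expensive" evaluations are needed; the 100,000-sample failure-probability estimate runs entirely on the fitted polynomial.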

Keywords: material optimization, Monte Carlo simulation, reliability analysis, response surface method, variable stiffness composite structures

Procedia PDF Downloads 484
4918 Approximate Confidence Interval for Effect Size Based on Bootstrap Resampling Method

Authors: S. Phanyaem

Abstract:

This paper presents confidence intervals for the effect size based on the bootstrap resampling method. Meta-analytic confidence intervals for the effect size that are easy to compute are proposed. A Monte Carlo simulation study was conducted to compare the performance of the proposed confidence intervals with existing confidence intervals. The best confidence interval method will have a coverage probability close to 0.95. Simulation results show that our proposed confidence intervals perform well in terms of coverage probability and expected length.
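A percentile bootstrap interval for one common effect-size measure (Cohen's d, used here only as an example) can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(7)

def cohens_d(a, b):
    """Standardized mean difference, one common effect-size measure."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

def bootstrap_ci(a, b, reps=4000, alpha=0.05):
    """Percentile bootstrap: resample each group with replacement."""
    stats = np.array([cohens_d(rng.choice(a, len(a)), rng.choice(b, len(b)))
                      for _ in range(reps)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

a = rng.normal(0.5, 1.0, 40)   # hypothetical treatment group
b = rng.normal(0.0, 1.0, 40)   # hypothetical control group
lo, hi = bootstrap_ci(a, b)
```

Wrapping this whole procedure in an outer Monte Carlo loop, as the paper does, checks how often `[lo, hi]` covers the true effect size.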

Keywords: effect size, confidence interval, bootstrap method, resampling

Procedia PDF Downloads 564
4917 Evaluation of the Performance of Solar Stills as an Alternative for Brine Treatment Applying the Monte Carlo Ray Tracing Method

Authors: B. E. Tarazona-Romero, J. G. Ascanio-Villabona, O. Lengerke-Perez, A. D. Rincon-Quintero, C. L. Sandoval-Rodriguez

Abstract:

Desalination offers solutions for the shortage of water in the world; however, the process of eliminating salts generates a by-product known as brine, generally discharged into the environment through techniques that mitigate its impact. Brine treatment techniques are vital to developing an environmentally sustainable desalination process. Consequently, this paper evaluates three different geometric configurations of solar stills as an alternative for brine treatment, to be integrated into a small-scale desalination process. The geometric scenarios studied were selected because they have characteristics that fit the concept of appropriate technology: low cost, reliance on local labor and materials for manufacturing, modularity, and simplicity of construction. Additionally, the conceptual design of the collectors was carried out, and the ray tracing methodology was applied using the open-access software SolTrace and Tonatiuh. The simulation process used 600,000 rays and varied two input parameters: direct normal irradiance (DNI) and reflectance. In summary, for the scenarios evaluated, the ladder-type still presented higher efficiency values than the pyramid-type and single-slope collectors. Finally, the efficiency of the collectors studied was directly related to their geometry; that is, larger geometries intercept a greater number of solar rays along various paths, which affects the efficiency of the device.
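The Monte Carlo ray tracing principle behind SolTrace-style counts can be illustrated with a toy geometry (all numbers invented): rays are drawn uniformly within a cone and the fraction striking a target surface is tallied.

```python
import numpy as np

rng = np.random.default_rng(3)

def hit_fraction(theta_max_deg=20.0, radius=1.0, dist=5.67, n=200_000):
    """Fraction of rays, emitted uniformly in solid angle within a cone of
    half-angle theta_max, that strike a disk of the given radius placed at
    the given distance along the cone axis."""
    cos_max = np.cos(np.radians(theta_max_deg))
    cos_t = rng.uniform(cos_max, 1.0, n)   # uniform solid angle in the cone
    return np.mean(np.tan(np.arccos(cos_t)) <= radius / dist)

frac = hit_fraction()
```

The estimate can be checked against the closed-form solid-angle ratio (1 − cos θ_hit)/(1 − cos θ_max) ≈ 0.252 for these numbers; a real still geometry replaces the disk test with mesh intersection and adds reflection and absorption.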

Keywords: appropriate technology, brine treatment techniques, desalination, monte carlo ray tracing

Procedia PDF Downloads 47
4916 Establishment of the Regression Uncertainty of the Critical Heat Flux Power Correlation for an Advanced Fuel Bundle

Authors: L. Q. Yuan, J. Yang, A. Siddiqui

Abstract:

A new regression uncertainty analysis methodology was applied to determine the uncertainties of the critical heat flux (CHF) power correlation for an advanced 43-element bundle design, which was developed by Canadian Nuclear Laboratories (CNL) to achieve improved economics, resource utilization and energy sustainability. The new methodology is considered more appropriate than the traditional methodology in the assessment of the experimental uncertainty associated with regressions. The methodology was first assessed using both the Monte Carlo Method (MCM) and the Taylor Series Method (TSM) for a simple linear regression model, and then extended successfully to a non-linear CHF power regression model (CHF power as a function of inlet temperature, outlet pressure and mass flow rate). The regression uncertainty assessed by MCM agrees well with that by TSM. An equation to evaluate the CHF power regression uncertainty was developed and expressed as a function of independent variables that determine the CHF power.
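The MCM side of such an assessment can be sketched for a simple linear model on synthetic data (not CHF measurements): perturb the observations with the known noise level, refit, and read the scatter of the fitted parameters; the TSM closed form gives the analytic counterpart.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic calibration data standing in for CHF measurements
x = np.linspace(0.0, 10.0, 25)
sigma = 0.5                                   # known measurement noise
y = 2.0 + 0.7 * x + rng.normal(0.0, sigma, x.size)

# Taylor Series Method: closed-form standard error of the fitted slope
se_tsm = sigma / np.sqrt(np.sum((x - x.mean()) ** 2))

# Monte Carlo Method: re-perturb the data, refit, take the sample std
slopes = np.array([np.polyfit(x, y + rng.normal(0.0, sigma, x.size), 1)[0]
                   for _ in range(5000)])
se_mcm = slopes.std(ddof=1)
```

For a linear model the two agree closely; the value of the MCM route is that the same loop works unchanged for the non-linear CHF correlation, where no simple closed form exists.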

Keywords: CHF experiment, CHF correlation, regression uncertainty, Monte Carlo Method, Taylor Series Method

Procedia PDF Downloads 387
4915 Uncertainty Assessment in Building Energy Performance

Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud

Abstract:

The building sector is one of the largest energy consumers, accounting for about 40% of final energy consumption in the European Union. Ensuring building energy performance is a scientific, technological and sociological matter. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the measured consumption once the building is operational. When evaluating this performance, many buildings show significant differences between calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved, not only in measurement but also those induced by the propagation of dynamic and static input data in the model being used. The evaluation of measurement uncertainty is based on knowledge of both the measurement process and the input quantities which influence the result of the measurement. Measurement uncertainty can be evaluated within the framework of conventional statistics, as presented in the Guide to the Expression of Uncertainty in Measurement (GUM), as well as by Bayesian Statistical Theory (BST). Another choice is the use of numerical methods like Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for estimating the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS and BST) is given. To this end, an office building has been monitored and multiple sensors have been mounted at candidate locations to obtain the required data. The monitored zone is composed of six offices and has an overall surface of 102 m². Temperature data, electrical and heating consumption, window opening and occupancy rate are the features of our research work.
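The GUM-versus-MCS comparison can be illustrated on a toy steady-state heat-loss model Q = U·A·ΔT (values invented, not the paper's simplified building model):

```python
import numpy as np

rng = np.random.default_rng(11)

U, u_U = 0.5, 0.05      # overall heat-loss coefficient, W/(m^2*K)
A, u_A = 102.0, 2.0     # floor area, m^2
dT, u_dT = 15.0, 1.0    # indoor-outdoor temperature difference, K

# GUM (first-order Taylor): relative uncertainties add in quadrature
Q = U * A * dT
u_gum = Q * np.sqrt((u_U / U)**2 + (u_A / A)**2 + (u_dT / dT)**2)

# Monte Carlo (GUM Supplement 1 style): push samples through the model
n = 200_000
Qs = (rng.normal(U, u_U, n) * rng.normal(A, u_A, n)
      * rng.normal(dT, u_dT, n))
u_mcs = Qs.std(ddof=1)
```

For this mildly non-linear model the two agree to well within a percent; MCS becomes the safer choice as models grow more non-linear or input distributions depart from normality.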

Keywords: building energy performance, uncertainty evaluation, GUM, bayesian approach, monte carlo method

Procedia PDF Downloads 425
4914 Dynamical Characteristics of Interaction between Water Droplet and Aerosol Particle in Dedusting Technology

Authors: Ding Jue, Li Jiahua, Lei Zhidi, Weng Peifen, Li Xiaowei

Abstract:

With rapid industrial development, increasing attention is being paid to the environmental pollution and harm caused by industrial dust. Against this background, a numerical study of dedusting technology for industrial environments was conducted. Dynamic models of multicomponent particle collision and coagulation, breakage and deposition are developed, and the interaction of water droplets and aerosol particles in a two-dimensional flow field was investigated by the Eulerian-Lagrangian method and the Multi-Monte Carlo method. The effects of droplet size, droplet speed and flow field structure on scavenging efficiency were analyzed. The results show that, under the conditions studied, 30 μm droplets have the best scavenging efficiency. At an initial droplet speed of 1 m/s, droplets and aerosol particles have more time to interact, which yields a better scavenging efficiency for the particles.

Keywords: water droplet, aerosol particle, collision and coagulation, multi-monte carlo method

Procedia PDF Downloads 274
4913 Secondary Radiation in Laser-Accelerated Proton Beamline (LAP)

Authors: Seyed Ali Mahdipour, Maryam Shafeei Sarvestani

Abstract:

Radiation pressure acceleration (RPA) and target normal sheath acceleration (TNSA) are the most important methods in laser-accelerated proton (LAP) beam planning systems. LAP has inspired novel applications that can benefit from proton bunch properties different from those of conventionally accelerated proton beams. The secondary neutrons and photons produced in the collision of protons with beamline components are an important concern in proton therapy. Various published Monte Carlo studies have evaluated beamline and shielding considerations for the TNSA method, but no studies directly address secondary neutron and photon production from the RPA method in LAP. The purpose of this study is to calculate the flux distribution of secondary neutron and photon radiation in the first area of the LAP beamline and to determine the optimal thickness and radius of the energy selector in a LAP planning system based on the RPA method. We also present Monte Carlo calculations to determine an appropriate beam pipe for shielding a LAP planning system. The GEANT4 Monte Carlo toolkit has been used to simulate secondary radiation production in LAP. A section of a new multifunctional LAP beamline, based on the pulsed-power solenoid scheme, was modeled in GEANT4. The results show that the energy selector is the most important source of secondary neutrons and photons in the LAP beamline. According to the calculations, a pure tungsten energy selector may not be the proper choice; using tungsten+polyethylene or tungsten+graphite composite selectors reduces the neutron and photon intensities by approximately ~10% and ~25%, respectively. Also, the optimal radii of the energy selectors were found to be ~4 cm and ~6 cm for 3-degree and 5-degree proton deviation angles, respectively.

Keywords: neutron, photon, flux distribution, energy selector, GEANT4 toolkit

Procedia PDF Downloads 73
4912 Comparison of Water Equivalent Ratio of Several Dosimetric Materials in Proton Therapy Using Monte Carlo Simulations and Experimental Data

Authors: M. R. Akbari, H. Yousefnia, E. Mirrezaei

Abstract:

Range uncertainties of protons are currently a topic of interest in proton therapy. Two of the parameters that are often used to specify proton range are water equivalent thickness (WET) and water equivalent ratio (WER). Since the WER value of a given material is nearly constant at different proton energies, it is the more useful parameter to compare. In this study, WER values were calculated for different proton energies in polymethyl methacrylate (PMMA), polystyrene (PS) and aluminum (Al) using the FLUKA and TRIM codes. The results were compared with analytical, experimental and simulated SEICS code data obtained from the literature. In the FLUKA simulation, a cylindrical phantom, 1000 mm in height and 300 mm in diameter, filled with the studied materials was simulated. A typical mono-energetic proton pencil beam, over the wide range of incident energies usually applied in proton therapy (50 MeV to 225 MeV), impinges normally on the phantom. In order to obtain the WER values for the considered materials, cylindrical detectors, 1 mm in height and 20 mm in diameter, were also simulated along the beam trajectory in the phantom. In the TRIM calculations, the type of projectile, energy and angle of incidence, and the type and thickness of the target material must be defined. The mode of 'detailed calculation with full damage cascades' was selected for proton transport in the target material. The biggest difference in WER values between the codes was 3.19%, 1.9% and 0.67% for Al, PMMA and PS, respectively. In Al and PMMA, the biggest differences between each code and the experimental data were 1.08%, 1.26%, 2.55%, 0.94%, 0.77% and 0.95% for SEICS, FLUKA and SRIM, respectively. FLUKA and SEICS had the greatest agreement with the available experimental data in this study (≤0.77% difference in PMMA and ≤1.08% difference in Al, respectively). It is concluded that the FLUKA and TRIM codes are capable of simulating Bragg curves and calculating WER values in the studied materials, and they can also predict the Bragg peak location and the range of proton beams with acceptable accuracy.
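The WER definition itself is a one-liner; the numbers below are illustrative stand-ins, not the paper's measured ranges:

```python
def water_equivalent_ratio(range_water_mm, range_material_mm):
    """WER = R_water / R_material at the same beam energy: the factor by
    which a water path must be thicker than the material path to produce
    the same proton range loss."""
    return range_water_mm / range_material_mm

# e.g. a beam with ~77 mm range in water stopping in ~66 mm of PMMA
# (hypothetical values for illustration)
wer_pmma = water_equivalent_ratio(77.0, 66.0)
```

The WET of a slab then follows as its physical thickness multiplied by the WER, which is why a nearly energy-independent WER is the more convenient quantity to tabulate.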

Keywords: water equivalent ratio, dosimetric materials, proton therapy, Monte Carlo simulations

Procedia PDF Downloads 285
4911 New Estimation in Autoregressive Models with Exponential White Noise by Using Reversible Jump MCMC Algorithm

Authors: Suparman Suparman

Abstract:

The white noise in an autoregressive (AR) model is often assumed to be normally distributed. In applications, however, the white noise often does not follow a normal distribution. This paper aims to estimate the parameters of an AR model with exponential white noise. A Bayesian method is adopted: a prior distribution for the AR parameters is selected and combined with the likelihood of the data to obtain a posterior distribution. Based on this posterior distribution, a Bayesian estimator for the AR parameters is derived. Because the order of the AR model is itself treated as a parameter, this Bayesian estimator cannot be calculated explicitly. To resolve this problem, the reversible jump Markov chain Monte Carlo (MCMC) method is adopted. As a result, all parameters of the AR model, including its order, can be estimated simultaneously.
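For a fixed order, the posterior simulation reduces to an ordinary Metropolis step. The sketch below fits an AR(1) model with exponential(1) white noise on simulated data, with a flat prior on the coefficient; the reversible-jump move across orders is omitted:

```python
import numpy as np

rng = np.random.default_rng(17)

# simulate an AR(1) series with exponential(1) white noise
n, a_true = 300, 0.5
y = np.empty(n + 1)
y[0] = 2.0
for t in range(1, n + 1):
    y[t] = a_true * y[t - 1] + rng.exponential(1.0)

def log_post(a):
    """Flat prior; exponential(1) likelihood, so residuals must be >= 0."""
    resid = y[1:] - a * y[:-1]
    if np.any(resid < 0.0):
        return -np.inf
    return -resid.sum()

# random-walk Metropolis on the AR coefficient (order fixed at 1)
a, lp, draws = 0.4, log_post(0.4), []
for i in range(6000):
    prop = a + rng.normal(0.0, 0.005)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        a, lp = prop, lp_prop
    if i >= 1000:                    # discard burn-in
        draws.append(a)

a_hat = float(np.mean(draws))
```

The non-negativity of the residuals makes the posterior one-sided, a feature specific to exponential noise; the reversible-jump extension would add a second move type that proposes changing the model order.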

Keywords: autoregressive (AR) model, exponential white noise, bayesian, reversible jump Markov Chain Monte Carlo (MCMC)

Procedia PDF Downloads 320
4910 A 3D Quantum Numerical Simulation Study of HEMT Performance

Authors: A. Boursali, A. Guen-Bouazza

Abstract:

We present a simulation of a HEMT (high electron mobility transistor) structure with and without a field plate. We extract the device characteristics through the analysis of DC, AC and high-frequency regimes, as shown in this paper. This work demonstrates an optimal device with a gate length of 15 nm, an InAlN/GaN heterostructure and a field plate structure, making it superior to modern HEMTs when compared with otherwise equivalent devices. The field plate improves the device's ability to sustain the current density passing through the channel. We have demonstrated an excellent current density as high as 2.05 A/m, a peak extrinsic transconductance of 0.59 S/m at VDS = 2 V, cutoff frequencies of 638 GHz for the first HEMT and 463 GHz for the field plate HEMT, a maximum frequency of 1.7 THz, a maximum efficiency of 73%, a maximum breakdown voltage of 400 V, a leakage current of 1 x 10^-26 A, a DIBL of 33.52 mV/V and an ON/OFF current ratio higher than 1 x 10^10. These values were determined through simulations driven by genetic and Monte Carlo algorithms that optimize the design, pointing to the future of this technology.

Keywords: HEMT, silvaco, field plate, genetic algorithm, quantum

Procedia PDF Downloads 317
4909 3D Quantum Simulation of a HEMT Device Performance

Authors: Z. Kourdi, B. Bouazza, M. Khaouani, A. Guen-Bouazza, Z. Djennati, A. Boursali

Abstract:

We present a simulation of a HEMT (high electron mobility transistor) structure with and without a field plate. We extract the device characteristics through the analysis of DC, AC and high-frequency regimes, as shown in this paper. This work demonstrates an optimal device with a gate length of 15 nm, an InAlN/GaN heterostructure and a field plate structure, making it superior to modern HEMTs when compared with otherwise equivalent devices. The field plate improves the device's ability to sustain the current density passing through the channel. We have demonstrated an excellent current density as high as 2.05 A/mm, a peak extrinsic transconductance of 590 mS/mm at VDS = 2 V, cutoff frequencies of 638 GHz for the first HEMT and 463 GHz for the field plate HEMT, a maximum frequency of 1.7 THz, a maximum efficiency of 73%, a maximum breakdown voltage of 400 V, a DIBL of 33.52 mV/V and an ON/OFF current ratio higher than 1 x 10^10. These values were determined through simulations driven by genetic and Monte Carlo algorithms that optimize the design, pointing to the future of this technology.

Keywords: HEMT, Silvaco, field plate, genetic algorithm, quantum

Procedia PDF Downloads 433
4908 Molecular Simulation of NO, NH3 Adsorption in MFI and H-ZSM5

Authors: Z. Jamalzadeh, A. Niaei, H. Erfannia, S. G. Hosseini, A. S. Razmgir

Abstract:

With the rapid development of industry, emissions of pollutants such as NOx, SOx, and CO2 have increased rapidly. Generally, NOx refers to the mono-nitrogen oxides NO and NO2, which are among the most important atmospheric contaminants. Hence, controlling the emission of nitrogen oxides is environmentally urgent. Selective Catalytic Reduction (SCR) of NOx is one of the most common techniques for NOx removal, in which zeolites find wide application due to their high performance. In zeolitic processes, the catalytic reaction occurs mostly in the pores. Therefore, investigating the adsorption phenomena in order to gain insight into and understand the catalytic cycle is important. Hence, in the current study, molecular simulation is applied to study the adsorption phenomena in nanocatalysts used for the SCR of NOx process. The effect of cation addition to the support on the catalysts' behavior during the adsorption step was explored by Monte Carlo (MC) simulation. A simulation time of 1 ns with a 1 fs time step, the COMPASS27 force field and a cutoff radius of 12.5 Å were applied for the runs. It was observed that the adsorption capacity increases in the presence of cations. The sorption isotherms demonstrated type I behavior, and the sorption capacity diminished with increasing temperature, whereas an increase was observed at high pressures. Besides, NO showed a higher sorption capacity than NH3 in H-ZSM5. In this respect, the energy distributions signified that the molecules adsorb at just one sorption site on the catalyst, and the sorption energy of NO was stronger than that of NH3 in H-ZSM5. Furthermore, the isosteric heats of sorption showed nearly the same values for the two molecules; however, they indicated stronger interactions of NO molecules with the H-ZSM5 zeolite, the isosteric heat of NH3 being lower in value.
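The Monte Carlo adsorption machinery can be illustrated with a grand-canonical lattice-gas toy model of non-interacting sorption sites; there is no NO/NH3 force field here, only the acceptance rule:

```python
import numpy as np

rng = np.random.default_rng(23)

def lattice_gcmc(beta_mu=-1.0, sites=500, sweeps=20_000):
    """Grand-canonical MC on non-interacting sites: pick a site, propose
    flipping its occupancy, accept with the Metropolis probability
    min(1, exp(+beta*mu)) for insertion or min(1, exp(-beta*mu)) for
    deletion; returns the time-averaged coverage."""
    occ = np.zeros(sites, dtype=bool)
    cov, kept = 0.0, 0
    for s in range(sweeps):
        i = rng.integers(sites)
        delta = -1.0 if occ[i] else 1.0        # -1 deletion, +1 insertion
        if rng.uniform() < np.exp(delta * beta_mu):
            occ[i] = not occ[i]
        if s >= sweeps // 2:                   # average after equilibration
            cov += occ.mean()
            kept += 1
    return cov / kept

theta = lattice_gcmc()
```

For independent sites the coverage should track the Langmuir occupancy e^(βμ)/(1 + e^(βμ)) ≈ 0.269 at βμ = −1; sweeping βμ (i.e. pressure) traces out the type I isotherm shape the abstract describes.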

Keywords: Monte Carlo simulation, adsorption, NOx, ZSM5

Procedia PDF Downloads 335
4907 A Novel Model for Saturation Velocity Region of Graphene Nanoribbon Transistor

Authors: Mohsen Khaledian, Razali Ismail, Mehdi Saeidmanesh, Mahdiar Hosseinghadiry

Abstract:

A semi-analytical model for the impact ionization coefficient of graphene nanoribbons (GNR) is presented. The model is derived by calculating the probability of electrons reaching the ionization threshold energy Et and the distance traveled by an electron in gaining Et. In addition, the ionization threshold energy is semi-analytically modeled for GNR. We justify our assumptions using analytic modeling and comparison with simulation results. A Gaussian simulator together with analytical modeling is used to calculate the ionization threshold energy, and kinetic Monte Carlo is employed to calculate the ionization coefficient and verify the analytical results. Finally, the ionization profile is presented using the proposed models and simulation, and the results are compared with those of silicon.

Keywords: nanostructures, electronic transport, semiconductor modeling, systems engineering

Procedia PDF Downloads 444
4906 Analysis of Exponential Distribution under Step Stress Partially Accelerated Life Testing Plan Using Adaptive Type-I Hybrid Progressive Censoring Schemes with Competing Risks Data

Authors: Ahmadur Rahman, Showkat Ahmad Lone, Ariful Islam

Abstract:

In this article, we estimate the parameters of the failure-time distribution of units based on the adaptive type-I progressive hybrid censoring sampling technique under step-stress partially accelerated life tests with competing risks. The failure times of the units are assumed to follow an exponential distribution. The maximum likelihood estimation technique is used to estimate the unknown parameters of the distribution and the tampered coefficient. Confidence intervals are also obtained for the parameters. A simulation study is performed using the Monte Carlo simulation method to check the validity of the model and its assumptions.
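The backbone of such a simulation study is easy to sketch: draw many samples from the assumed model, estimate each time, and inspect the sampling distribution of the estimator. Below, a plain exponential MLE with no censoring or step-stress, as a minimal illustration:

```python
import numpy as np

rng = np.random.default_rng(29)

def mc_check_exponential_mle(rate=2.0, n=50, reps=3000):
    """Simulate `reps` samples of size `n` from Exp(rate), compute the
    MLE (1 / sample mean) each time, and return the empirical mean and
    standard deviation of the estimator."""
    est = np.empty(reps)
    for i in range(reps):
        est[i] = 1.0 / rng.exponential(1.0 / rate, n).mean()
    return est.mean(), est.std(ddof=1)

m, s = mc_check_exponential_mle()
```

The empirical mean slightly exceeds the true rate (the MLE is biased by a factor n/(n−1)); censoring schemes and the tampered coefficient change only the likelihood being maximized, not this verification loop.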

Keywords: adaptive type-I hybrid progressive censoring, competing risks, exponential distribution, simulation, step-stress partially accelerated life tests

Procedia PDF Downloads 314
4905 Travel Behavior Simulation of Bike-Sharing System Users in Kaohsiung City

Authors: Hong-Yi Lin, Feng-Tyan Lin

Abstract:

In a bike-sharing system (BSS), users can easily rent bikes from any station in the city for mid-range or short-range trips. A BSS can also be integrated with other types of transport systems, especially green transportation systems such as rail transit and buses. Since a BSS records the time and place of each pickup and return, the operational data can reflect a more authentic and dynamic picture of user behavior. Furthermore, land uses around docking stations are highly associated with the origins and destinations of BSS users. As urban researchers, what concerns us most is to take the BSS into consideration during the urban planning process and enhance the quality of urban life. This research focuses on the simulation of the travel behavior of BSS users in Kaohsiung. First, rules of user behavior were derived by analyzing operational data and land use patterns near docking stations. Then, integrated with the Monte Carlo method, these rules were embedded into a travel behavior simulation model implemented in NetLogo, an agent-based modeling tool. The simulation model allows us to foresee rent-return behavior in the BSS in order to choose potential locations for docking stations. It can also provide insights and recommendations for planning and policy for the future BSS.

Keywords: agent-based model, bike-sharing system, BSS operational data, simulation

Procedia PDF Downloads 278
4904 The Influence of Design Complexity of a Building Structure on the Expected Performance

Authors: Ormal Lishi

Abstract:

This research presents a computationally efficient probabilistic method to assess the performance of compartmentation walls with similar Fire Resistance Levels (FRL) but varying complexity. Specifically, a masonry brick wall and a light-steel framed (LSF) wall with comparable insulation performance are analyzed. A Monte Carlo technique, employing Latin Hypercube Sampling (LHS), is utilized to quantify uncertainties and determine the probability of failure for both walls exposed to standard and parametric fires, following ISO 834 and Eurocodes guidelines. Results show that the probability of failure for the brick masonry wall under standard fire exposure is estimated at 4.8%, while the LSF wall is 7.6%. These probabilities decrease to 0.4% and 4.8%, respectively, when subjected to parametric fires. Notably, the complex LSF wall exhibits higher variability in predicting time to failure for specific criteria compared to the less complex brick wall, especially at higher temperatures. The proposed approach highlights the need for Probabilistic Risk Assessment (PRA) to accurately evaluate the reliability and safety levels of complex designs.
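Latin Hypercube Sampling, the variance-reduction device used above, can be sketched in a few lines (a generic sampler on the unit hypercube, not the paper's fire model):

```python
import numpy as np

rng = np.random.default_rng(31)

def latin_hypercube(n, dims):
    """LHS on [0,1)^dims: each dimension is cut into n equal-probability
    strata, exactly one sample falls in each stratum, and strata are
    paired across dimensions by independent random permutations."""
    strata = rng.permuted(np.tile(np.arange(n), (dims, 1)), axis=1).T
    return (strata + rng.uniform(size=(n, dims))) / n

u = latin_hypercube(100, 2)
```

Mapping `u` through the inverse CDFs of, e.g., the material and thermal input distributions yields the stratified input sample fed to the time-to-failure criterion; the stratification is what lets far fewer samples achieve a given accuracy than crude Monte Carlo.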

Keywords: design complexity, probability of failure, monte carlo analysis, compartmentation walls, insulation

Procedia PDF Downloads 30
4903 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution

Authors: Najrullah Khan, Athar Ali Khan

Abstract:

The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone generalized exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS code has been provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data. Emphasis is placed on the modeling of data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete or 'censored' data encountered. Analytic approximation and simulation tools are covered here, but most of the emphasis is on Markov chain Monte Carlo methods, including the independence Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among the various optimization algorithms, the trust region method is found to be the best. In this paper, the TPGE model is also used to analyze lifetime data in the Bayesian paradigm. Results are evaluated on the aforementioned real survival data set. The analytic approximation and simulation methods are implemented using several software packages. It is clear from our findings that simulation tools provide better results than those obtained by asymptotic approximation.

Keywords: Bayesian Inference, JAGS, Laplace Approximation, LaplacesDemon, posterior, R Software, simulation

Procedia PDF Downloads 496
4902 On Coverage Probability of Confidence Intervals for the Normal Mean with Known Coefficient of Variation

Authors: Suparat Niwitpong, Sa-aat Niwitpong

Abstract:

Statistical inference for the normal mean with known coefficient of variation has been investigated recently. This situation occurs commonly in environmental and agricultural experiments in which the scientist knows the coefficient of variation of the experiment. In this paper, we construct new confidence intervals for the normal population mean with known coefficient of variation. We also derive analytic expressions for the coverage probability of each confidence interval. To confirm our theoretical results, Monte Carlo simulation is used to assess the performance of these intervals based on their coverage probabilities.

Keywords: confidence interval, coverage probability, expected length, known coefficient of variation

Procedia PDF Downloads 354
4901 Evaluation of a Data Fusion Algorithm for Detecting and Locating a Radioactive Source through Monte Carlo N-Particle Code Simulation and Experimental Measurement

Authors: Hadi Ardiny, Amir Mohammad Beigzadeh

Abstract:

Through the utilization of a combination of various sensors and data fusion methods, the detection of potential nuclear threats can be significantly enhanced by extracting more information from different data. In this research, an experimental and modeling approach was employed to track a radioactive source by combining a surveillance camera and a radiation detector (NaI). To run this experiment, three mobile robots were utilized, one of which was equipped with a radioactive source. An algorithm was developed to identify the contaminated robot through correlation between the camera images and the detector data. The computer vision method extracts the movements of all robots in the XY-plane coordinate system, and the detector system records the gamma-ray count. The positions of the robots and the corresponding counts of the moving source were modeled using the MCNPX simulation code while considering the experimental geometry. The results demonstrated a high level of accuracy in finding and locating the target in both the simulation model and the experimental measurement. The modeling techniques prove to be valuable in designing different scenarios and intelligent systems before initiating any experiments.
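The correlation step can be sketched with invented trajectories: predict each robot's inverse-square count profile from its path and pick the robot whose profile best matches the measured counts (no MCNPX physics here, just the matching logic):

```python
import numpy as np

rng = np.random.default_rng(37)

# hypothetical 2D trajectories for three robots (one carries the source)
t = np.linspace(0.0, 60.0, 120)
paths = {
    "robot_A": np.column_stack([2 + np.cos(t / 8), 2 + np.sin(t / 8)]),
    "robot_B": np.column_stack([t / 15, np.full_like(t, 3.0)]),
    "robot_C": np.column_stack([4 - t / 20, t / 30]),
}
detector = np.array([0.0, 0.0])
source_carrier = "robot_B"

# measured counts: inverse-square law plus Poisson counting noise
d = np.linalg.norm(paths[source_carrier] - detector, axis=1)
counts = rng.poisson(5000.0 / d**2)

# correlate each robot's predicted 1/d^2 signal with the measured counts;
# the contaminated robot should give the highest correlation
scores = {
    name: np.corrcoef(1.0 / np.linalg.norm(p - detector, axis=1)**2,
                      counts)[0, 1]
    for name, p in paths.items()
}
best = max(scores, key=scores.get)
```

In the paper's setting the predicted profiles come from the MCNPX model of the actual geometry rather than a bare inverse-square law, but the decision rule is the same.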

Keywords: nuclear threats, radiation detector, MCNPX simulation, modeling techniques, intelligent systems

Procedia PDF Downloads 63
4900 The Martingale Options Price Valuation for European Puts Using Stochastic Differential Equation Models

Authors: H. C. Chinwenyi, H. D. Ibrahim, F. A. Ahmed

Abstract:

In modern financial mathematics, valuing derivatives such as options is often a tedious task, simply because their fair future prices are probabilistic. This paper examines three stochastic differential equation (SDE) models in finance: the Constant Elasticity of Variance (CEV) model, the Black-Karasinski model, and the Heston model. The martingale option price valuation formulas for these three models were obtained using the replicating portfolio method. The derived valuation equations were then solved numerically by the Monte Carlo method, implemented in MATLAB. Numerical examples using published All-Share Index data from the Nigerian Stock Exchange (NSE) show the effect of an increase in the underlying asset value (stock price) on the value of the European put option under these models. The results show that an increase in the stock price yields a decrease in the value of the European put option; this guides the option holder in making a sound decision not to exercise the right conferred by the option.
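The martingale pricing principle the paper applies, discounting the expected payoff under the equivalent martingale measure, can be sketched for the simplest case. This is not one of the paper's three models: as an assumption, plain geometric Brownian motion replaces the CEV/Black-Karasinski/Heston dynamics, and the parameter values are illustrative.

```python
import numpy as np

def mc_european_put(S0, K, r, sigma, T, n_paths=200_000, seed=42):
    """Monte Carlo price of a European put: the discounted expectation of
    the payoff under the risk-neutral (equivalent martingale) measure,
    with the underlying following geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # terminal price under the risk-neutral drift r
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(K - ST, 0.0)
    return np.exp(-r * T) * payoff.mean()

# the put value falls as the underlying price rises, as the abstract notes
for S0 in (80, 100, 120):
    print(S0, round(mc_european_put(S0, K=100, r=0.05, sigma=0.2, T=1.0), 2))
```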

Keywords: equivalent martingale measure, European put option, Girsanov theorem, martingales, Monte Carlo method, option price valuation formula

Procedia PDF Downloads 102
4899 A One Dimensional Particle in Cell Model for Excimer Lamps

Authors: W. Benstaali, A. Belasri

Abstract:

In this work, we study a planar lamp filled with a neon-xenon gas mixture. A one-dimensional particle-in-cell model with Monte Carlo collisions (PIC-MCC) is used to investigate the effect of xenon concentration on the energy deposited in excitation, ionization, and ions. A Xe-Ne discharge is studied at a gas pressure of 400 Torr. The results show an efficient Xe20-Ne mixture at an applied voltage of 1.2 kV, for which the xenon excitation energy represents 65% of the total energy dissipated in the discharge. We have also studied the electrical properties and the energy balance of a Xe50-Ne discharge, which requires a voltage of 2 kV; in that case, the xenon excitation energy is even larger.
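The Monte Carlo collision half of a PIC-MCC cycle amounts to selecting, for each colliding particle, a collision channel with probability proportional to its cross-section. A minimal sketch follows; the cross-section values are illustrative placeholders, not Xe-Ne data, and a real code tabulates them versus electron energy.

```python
import random
from collections import Counter

# Illustrative (not measured) cross-sections in arbitrary units at one
# electron energy; a real PIC-MCC code tabulates them versus energy.
SIGMA = {"elastic": 10.0, "excitation": 2.0, "ionization": 0.5}

def pick_collision(rng=random):
    """Choose a collision type with probability proportional to its
    cross-section -- the Monte Carlo collision step applied to each
    particle after the particle-in-cell field push."""
    u = rng.random() * sum(SIGMA.values())
    for kind, sigma in SIGMA.items():
        if u < sigma:
            return kind
        u -= sigma
    return kind  # guard against floating-point round-off

random.seed(3)
freq = Counter(pick_collision() for _ in range(100_000))
print({k: round(v / 100_000, 3) for k, v in freq.items()})
# observed fractions approach the ratio 10 : 2 : 0.5 (0.8, 0.16, 0.04)
```

Tallying the energy lost in each channel over many such draws is how the energy balance between excitation, ionization, and ion heating quoted in the abstract is accumulated.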

Keywords: dielectric barrier discharge, efficiency, excitation, lamps

Procedia PDF Downloads 130
4898 On the Influence of the Metric Space in the Critical Behavior of Magnetic Temperature

Authors: J. C. Riaño-Rojas, J. D. Alzate-Cardona, E. Restrepo-Parra

Abstract:

In this work, a study of generic magnetic nanoparticles under varying metric spaces is presented. As the metric space is changed, the nanoparticle shape and the inner product also vary, since the energy scale is not conserved. The study is carried out using Monte Carlo simulations combining the Wolff embedding and Metropolis algorithms. The Metropolis algorithm is used in the high-temperature region to reach equilibrium quickly, while the Wolff embedding algorithm is used in the low-temperature and critical regions to reduce the critical slowing down phenomenon. The number of ions is kept constant across the different shapes, and the critical temperatures are found using finite-size scaling. We observed that the critical temperatures do not exhibit significant changes when the metric space is varied. Additionally, the effective dimension corresponding to each metric space was determined. A study of the static behavior was carried out to obtain the static critical exponents. The objective of this work is to observe the behavior of thermodynamic quantities such as energy, magnetization, specific heat, susceptibility, and Binder cumulants in the critical region, in order to determine whether the magnetic nanoparticles describe their magnetic interactions in Euclidean space or whether there is a correspondence in other metric spaces.
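Of the two update schemes mentioned above, the Metropolis half is the simpler to sketch. The toy below is a plain 2D Ising-like lattice in Euclidean space with nearest-neighbour coupling J, not the paper's nanoparticle geometry, and the Wolff embedding step is omitted:

```python
import numpy as np

def metropolis_sweep(spins, J, T, rng):
    """One Metropolis sweep of a 2D Ising-like lattice with periodic
    boundaries: flip a spin with probability min(1, exp(-dE/T)), k_B = 1."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nb          # cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
L, J = 16, 1.0
cold = np.ones((L, L), dtype=int)   # below T_c: stays magnetized
hot = np.ones((L, L), dtype=int)    # well above T_c: magnetization melts
for _ in range(200):
    metropolis_sweep(cold, J, T=1.0, rng=rng)
    metropolis_sweep(hot, J, T=5.0, rng=rng)
print(abs(cold.mean()), abs(hot.mean()))
```

Measuring magnetization, energy, and their fluctuations over many such sweeps at a grid of temperatures is how the specific heat, susceptibility, and Binder cumulants discussed in the abstract are accumulated.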

Keywords: nanoparticles, metric, Monte Carlo, critical behaviour

Procedia PDF Downloads 481
4897 Effect of Rainflow Cycle Number on Fatigue Lifetime of an Arm of Vehicle Suspension System

Authors: Hatem Mrad, Mohamed Bouazara, Fouad Erchiqui

Abstract:

Fatigue is considered one of the main causes of degradation of the mechanical properties of mechanical parts. Probability and reliability methods are appropriate for fatigue analysis because of the uncertainties in fatigue material and process parameters. The current work studies the effect of the number of Rainflow counting cycles on the fatigue lifetime (cumulative damage) of an upper arm of a vehicle suspension system. The fatigue damage induced in the suspension arm is driven mainly by two classes of parameters: the first relates to the material properties, and the second to the road excitation, i.e., the applied force, which depends on the number of passengers. Therefore, Young's modulus and the road excitation are selected as input parameters for repeated simulations with a Monte Carlo (MC) algorithm, using the Latin hypercube sampling method to generate the parameter values. A response surface is fitted to the fatigue lifetime of each combination of input parameters, computed according to the strain-life method. A Python script was developed to automate the finite element simulations of the upper arm according to a design of experiments.
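The Latin hypercube step used to generate the uncertain inputs can be sketched in a few lines. The parameter ranges below are hypothetical stand-ins for the Young's modulus and road-excitation values, which the abstract does not specify:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """Latin hypercube sample: each dimension is cut into n equal-probability
    strata, one point is drawn per stratum, and the strata are randomly
    paired across dimensions."""
    d = len(bounds)
    strata = np.tile(np.arange(n_samples), (d, 1))
    strata = rng.permuted(strata, axis=1).T          # (n_samples, d)
    u = (strata + rng.random((n_samples, d))) / n_samples
    lo, hi = np.asarray(bounds, dtype=float).T
    return lo + u * (hi - lo)

rng = np.random.default_rng(1)
# hypothetical ranges: Young's modulus [GPa] and road-force amplitude [kN]
X = latin_hypercube(10, bounds=[(190.0, 210.0), (2.0, 8.0)], rng=rng)
print(X.shape)  # (10, 2); every decile of each range holds exactly one point
```

Each row of `X` would then drive one finite element run, and the resulting lifetimes feed the response surface fit.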

Keywords: fatigue, Monte Carlo, Rainflow cycle, response surface, suspension system

Procedia PDF Downloads 222
4896 The Generalized Pareto Distribution as a Model for Sequential Order Statistics

Authors: Mahdy Esmailian, Mahdi Doostparast, Ahmad Parsian

Abstract:

In this article, sequential order statistics (SOS) under type-II censoring, coming from the generalized Pareto distribution, are considered. Maximum likelihood (ML) estimators of the unknown parameters are derived on the basis of the available multiple SOS data, and necessary conditions for the existence and uniqueness of the derived ML estimates are given. Due to the complexity of the proposed likelihood function, a useful re-parametrization is suggested. For illustrative purposes, a Monte Carlo simulation study is conducted and an illustrative example is analysed.
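A hedged sketch of the ML machinery involved: the snippet below fits the generalized Pareto shape and scale to an ordinary i.i.d. sample, which is far simpler than the paper's multiple-SOS likelihood, but it illustrates the kind of re-parametrization (optimizing over log scale to keep it positive) that the abstract alludes to.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genpareto

def gpd_mle(x):
    """ML fit of (shape xi, scale sigma) for an i.i.d. GPD sample.
    Negative log-likelihood: n*log(sigma) + (1/xi + 1) * sum log(1 + xi*x/sigma)."""
    def nll(theta):
        xi, log_sigma = theta
        sigma = np.exp(log_sigma)        # re-parametrization keeps sigma > 0
        z = 1.0 + xi * x / sigma
        if np.any(z <= 0):
            return np.inf                # outside the support
        return len(x) * log_sigma + (1.0 / xi + 1.0) * np.log(z).sum()
    res = minimize(nll, x0=np.array([0.1, 0.0]), method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])

rng = np.random.default_rng(7)
x = genpareto.rvs(c=0.3, scale=2.0, size=5000, random_state=rng)
print(gpd_mle(x))  # estimates near the true (0.3, 2.0)
```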

Keywords: Bayesian estimation, generalized Pareto distribution, maximum likelihood estimation, sequential order statistics

Procedia PDF Downloads 469
4895 Numerical Response of a Coaxial HPGe Detector for Skull and Knee Measurement

Authors: Pabitra Sahu, M. Manohari, S. Priyadharshini, R. Santhanam, S. Chandrasekaran, B. Venkatraman

Abstract:

Radiation workers in reprocessing plants are potentially subject to internal exposure from actinides and fission products. Radionuclides such as americium, lead, polonium, and europium are bone seekers and accumulate in the skeleton. As the major skeletal content is in the skull (13%) and knee (22%), measurements of old intakes have to be carried out on the skull and knee. At the Indira Gandhi Centre for Atomic Research, a twin-HPGe-based actinide monitor is used for the measurement of actinides present in bone. Efficiency estimation, a prerequisite for the quantification of radionuclides, requires anthropomorphic phantoms, which are very limited. Hence, in this study, efficiency curves for the twin-HPGe-based actinide monitoring system are established theoretically using the FLUKA Monte Carlo method and the ICRP adult male voxel phantom. For the skull measurement, the detector is placed over the forehead; for the knee measurement, one detector is placed over each knee. Over the energy range 17 to 3000 keV, the efficiency values for radionuclides in the knee and skull vary from 3.72E-04 to 4.19E-04 CPS/photon and from 5.22E-04 to 7.07E-04 CPS/photon, respectively. The established efficiency curves show that the efficiency initially increases up to 100 keV and then decreases. The skull efficiencies are 4% to 63% higher than those of the knee, depending on the energy, for all energies except 17.74 keV; the reason is the closeness of the detector to the skull compared with the knee. At 17.74 keV, however, the knee efficiency exceeds that of the skull because of the higher attenuation in the thicker skull bone. The Minimum Detectable Activity (MDA) for 241Am in both the skull and the knee is 9 Bq, while 239Pu has an MDA of 950 Bq for the knee and 1270 Bq for the skull, for a counting time of 1800 s. This paper discusses the simulation method and the results obtained in the study.
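MDA values like those quoted above are commonly derived from Currie's detection-limit formula. The sketch below uses that standard formula with an efficiency in the reported range, but the background counts and gamma yield are illustrative assumptions, not the paper's data, so the result will not reproduce the quoted 9 Bq.

```python
import math

def currie_mda(background_counts, efficiency_cps_per_photon, gamma_yield, t_seconds):
    """Currie's minimum detectable activity (Bq): the detection limit
    L_D = 2.71 + 4.65*sqrt(B) counts, converted to activity via the
    counting efficiency, gamma emission yield, and counting time."""
    L_D = 2.71 + 4.65 * math.sqrt(background_counts)
    return L_D / (efficiency_cps_per_photon * gamma_yield * t_seconds)

# illustrative inputs: assumed background of 400 counts, 241Am 59.5 keV
# gamma yield ~0.36, efficiency from the reported range, 1800 s count
print(round(currie_mda(400, 5.2e-4, 0.36, 1800), 1), "Bq")
```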

Keywords: FLUKA Monte Carlo method, ICRP adult male voxel phantom, knee, skull

Procedia PDF Downloads 5
4894 The MCNP Simulation of Prompt Gamma-Ray Neutron Activation Analysis at TRR-1/M1

Authors: S. Sangaroon, W. Ratanatongchai, S. Khaweerat, R. Picha, J. Channuie

Abstract:

The prompt gamma-ray neutron activation analysis (PGNAA) system has been installed at a 6-inch-diameter neutron beam port of the Thai Research Reactor-1/Modification 1 (TRR-1/M1) since 1989. It was designed for a reactor operating power of 1.2 MW, with the purpose of elemental and isotopic analysis. In 2016, the PGNAA facility will be upgraded to reduce the leakage and background of neutrons and gamma radiation at the sample and detector positions. In this work, the design of the upgraded facility is carried out based on the Monte Carlo method using the MCNP5 computer code. Configurations with different shielding materials, thicknesses, and structures of the PGNAA facility, including the gamma collimator and the radiation shields of the detector, are simulated, and the optimal structural parameters yielding a significantly improved performance of the facility are obtained.
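Before a full MCNP5 run, shielding thicknesses are usually bracketed with the narrow-beam attenuation law. A minimal sketch, with an assumed attenuation coefficient (roughly that of lead for ~660 keV photons; the actual TRR-1/M1 shielding materials are not given here):

```python
import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Narrow-beam attenuation I/I0 = exp(-mu * x): the first-order
    estimate behind shielding choices before a full MCNP simulation."""
    return math.exp(-mu_per_cm * thickness_cm)

def thickness_for(target_fraction, mu_per_cm):
    """Shield thickness needed to reach a target transmitted fraction."""
    return -math.log(target_fraction) / mu_per_cm

mu = 1.2  # assumed linear attenuation coefficient [1/cm], lead, ~660 keV
x = thickness_for(0.01, mu)
print(round(x, 2), "cm of lead for a 100-fold reduction")
```

The Monte Carlo simulation then refines such estimates by accounting for scatter and build-up, which the exponential law ignores.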

Keywords: MCNP simulation, PGNAA, Thai research reactor (TRR-1/M1), radiation shielding

Procedia PDF Downloads 342
4893 Influence of Optical Fluence Distribution on Photoacoustic Imaging

Authors: Mohamed K. Metwally, Sherif H. El-Gohary, Kyung Min Byun, Seung Moo Han, Soo Yeol Lee, Min Hyoung Cho, Gon Khang, Jinsung Cho, Tae-Seong Kim

Abstract:

Photoacoustic imaging (PAI) is a non-invasive and non-ionizing imaging modality that combines the absorption contrast of light with ultrasound resolution. A laser is used to deposit optical energy into a target (i.e., optical fluence). Consequently, the target temperature rises, and the resulting thermal expansion generates a PA signal. In general, most image reconstruction algorithms for PAI assume uniform fluence within the imaged object. However, the optical fluence distribution within the object is known to be non-uniform, and this can affect the reconstruction of PA images. In this study, we investigated the influence of the optical fluence distribution on PA back-propagation imaging using the finite element method. The uniform fluence was simulated as a triangular waveform within the object of interest, while the non-uniform fluence distribution was estimated by solving light propagation within a tissue model via the Monte Carlo method. The results show that the PA signal in the non-uniform case is 23% wider than in the uniform case, and its frequency spectrum is missing some of the high-frequency components present in the uniform case. Consequently, the image reconstructed with the non-uniform fluence exhibits a strong smoothing effect.
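The Monte Carlo light-propagation step can be illustrated with a deliberately simplified model: isotropic scattering in a semi-infinite slab with assumed absorption and scattering coefficients, rather than the paper's tissue model. Histogramming where photons are absorbed yields the non-uniform, depth-decaying fluence profile the abstract refers to.

```python
import numpy as np

def absorption_depth_profile(n_photons=20_000, mu_a=1.0, mu_s=10.0,
                             n_bins=50, z_max=2.0, seed=0):
    """Toy Monte Carlo light transport: launch photons into a semi-infinite
    slab, take exponentially distributed steps, and record the depths at
    which absorption events occur (a stand-in for the fluence profile)."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    absorbed = []
    for _ in range(n_photons):
        z, cos_t = 0.0, 1.0                       # photon enters along +z
        while True:
            z += -np.log(rng.random()) / mu_t * cos_t
            if z < 0.0:
                break                             # escaped back through surface
            if rng.random() < mu_a / mu_t:        # absorb vs. scatter
                absorbed.append(z)
                break
            cos_t = 2.0 * rng.random() - 1.0      # isotropic rescatter
    hist, _ = np.histogram(absorbed, bins=n_bins, range=(0.0, z_max))
    return hist / n_photons

profile = absorption_depth_profile()
print(profile[:3])  # absorption density is highest near the surface
```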

Keywords: finite element method, fluence distribution, Monte Carlo method, photoacoustic imaging

Procedia PDF Downloads 352
4892 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit

Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic

Abstract:

Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, are today widely used in low-background γ-spectrometry. One advantage of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required; with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies that would otherwise superimpose within a single-energy peak and thereby compromise the analysis and produce wrongly assessed results. Naturally, this is of great importance when identifying radionuclides and their activity concentrations, where high precision is a necessity. To produce trustworthy results in such measurements, one must first perform an adequate full-energy-peak (FEP) efficiency calibration of the equipment. However, experimental determination of the response, i.e., the efficiency curve for a given detector-sample configuration and geometry, is not always easy and requires a set of reference calibration sources to cover the broader energy ranges of interest. To overcome these difficulties, many researchers have turned to software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4), which has proven time and again to be a very powerful tool. Creating a reliable model requires well-established and well-described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment; their deterioration decreases the active volume of the crystal and can affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation of two HPGe detector models using the Geant4 toolkit developed at CERN is described, with the goal of improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead-layer thicknesses, inner crystal void dimensions). The detectors on which the optimisation procedures were carried out were a standard coaxial extended-range detector (XtRa HPGe, CANBERRA) and a broad-energy-range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimental data from measurements of a set of point-like radioactive sources. The results for both detectors show good agreement with the experimental data, within an average statistical uncertainty of about 4.6% for the XtRa and 1.8% for the BEGe detector over the energy ranges 59.4–1836.1 keV and 59.4–1212.9 keV, respectively.
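The quantity being optimised above, the FEP efficiency, is simply the fraction of emitted photons whose full energy is deposited in the crystal. A hedged sketch using synthetic per-event energy deposits (standing in for what a Geant4 scoring volume would record; the 661.7 keV line, resolution, and event mix are assumptions):

```python
import numpy as np

def fep_efficiency(deposited_keV, e_gamma_keV, n_emitted, fwhm_keV=1.5):
    """Full-energy-peak efficiency: the fraction of emitted photons whose
    per-event deposited energy falls inside the full-energy peak window."""
    dep = np.asarray(deposited_keV)
    half_window = 1.5 * fwhm_keV                  # generous peak window
    in_peak = np.abs(dep - e_gamma_keV) <= half_window
    return in_peak.sum() / n_emitted

# synthetic event list: 40% full-energy deposits smeared by the detector
# resolution, the rest a flat Compton continuum below the Compton edge
rng = np.random.default_rng(2)
n = 100_000
full = rng.normal(661.7, 1.5 / 2.355, size=40_000)   # sigma = FWHM / 2.355
compton = rng.uniform(50.0, 470.0, size=60_000)
eff = fep_efficiency(np.r_[full, compton], 661.7, n_emitted=n)
print(round(eff, 3))  # close to the injected 0.40
```

Comparing such simulated efficiencies against values measured with calibration sources, and adjusting dead-layer and geometry parameters until they agree, is the optimisation loop the abstract describes.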

Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method

Procedia PDF Downloads 82