Search results for: Monte Carlo algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3881

3731 3D Quantum Numerical Simulation of HEMT Performance

Authors: A. Boursali, A. Guen-Bouazza

Abstract:

We present a simulation of a HEMT (high electron mobility transistor) structure with and without a field plate. We extract the device characteristics through the analysis of DC, AC and high-frequency regimes, as shown in this paper. This work demonstrates the optimal device, with a gate length of 15 nm, an InAlN/GaN heterostructure and a field plate structure, making it superior to modern HEMTs when compared with otherwise equivalent devices. This improves the device's ability to sustain the current density carried in the channel. We have demonstrated an excellent current density as high as 2.05 A/mm, a peak extrinsic transconductance of 0.59 S/mm at VDS=2 V, cutoff frequencies of 638 GHz for the first HEMT and 463 GHz for the field plate HEMT, a maximum frequency of 1.7 THz, a maximum efficiency of 73%, a maximum breakdown voltage of 400 V, a leakage current density of 1 x 10⁻²⁶ A, DIBL=33.52 mV/V and an ON/OFF current density ratio higher than 1 x 10¹⁰. These values were determined through simulation by applying genetic and Monte Carlo algorithms that optimize the design and the future of this technology.

Keywords: HEMT, Silvaco, field plate, genetic algorithm, quantum

Procedia PDF Downloads 328
3730 Numerical Response of a Coaxial HPGe Detector for Skull and Knee Measurement

Authors: Pabitra Sahu, M. Manohari, S. Priyadharshini, R. Santhanam, S. Chandrasekaran, B. Venkatraman

Abstract:

Radiation workers of reprocessing plants have a potential for internal exposure due to actinides and fission products. Radionuclides like americium, lead, polonium and europium are bone seekers and get accumulated in the skeletal part. As the major skeletal content is in the skull (13%) and knee (22%), measurements of old intake have to be carried out in the skull and knee. At the Indira Gandhi Centre for Atomic Research, a twin HPGe-based actinide monitor is used for the measurement of actinides present in bone. Efficiency estimation, which is one of the prerequisites for the quantification of radionuclides, requires anthropomorphic phantoms. Such phantoms are very limited. Hence, in this study, efficiency curves for the twin HPGe-based actinide monitoring system are established theoretically using the FLUKA Monte Carlo method and the ICRP adult male voxel phantom. For skull measurement, the detector is placed over the forehead, and for knee measurement, one detector is placed over each knee. The efficiency values for radionuclides present in the knee and skull vary from 3.72E-04 to 4.19E-04 CPS/photon and from 5.22E-04 to 7.07E-04 CPS/photon, respectively, over the energy range 17 to 3000 keV. The efficiency curves for the measurement are established, and it is found that the efficiency value initially increases up to 100 keV and then starts decreasing. The skull efficiency values are 4% to 63% higher than those of the knee, depending on the energy, for all energies except 17.74 keV; the reason is the closeness of the detector to the skull compared to the knee. At 17.74 keV, however, the efficiency of the knee is higher than that of the skull, due to the higher attenuation caused by the skull bones because of their greater thickness. The Minimum Detectable Activity (MDA) for 241Am present in the skull and knee is 9 Bq. 239Pu has an MDA of 950 Bq and 1270 Bq for the knee and skull, respectively, for a counting time of 1800 s. This paper discusses the simulation method and the results obtained in the study.

Keywords: FLUKA Monte Carlo method, ICRP adult male voxel phantom, knee, skull

Procedia PDF Downloads 22
3729 3D Quantum Simulation of HEMT Device Performance

Authors: Z. Kourdi, B. Bouazza, M. Khaouani, A. Guen-Bouazza, Z. Djennati, A. Boursali

Abstract:

We present a simulation of a HEMT (high electron mobility transistor) structure with and without a field plate. We extract the device characteristics through the analysis of DC, AC and high-frequency regimes, as shown in this paper. This work demonstrates the optimal device, with a gate length of 15 nm, an InAlN/GaN heterostructure and a field plate structure, making it superior to modern HEMTs when compared with otherwise equivalent devices. This improves the device's ability to sustain the current density carried in the channel. We have demonstrated an excellent current density as high as 2.05 A/mm, a peak extrinsic transconductance of 590 mS/mm at VDS=2 V, cutoff frequencies of 638 GHz for the first HEMT and 463 GHz for the field plate HEMT, a maximum frequency of 1.7 THz, a maximum efficiency of 73%, a maximum breakdown voltage of 400 V, DIBL=33.52 mV/V and an ON/OFF current density ratio higher than 1 x 10¹⁰. These values were determined through simulation by applying genetic and Monte Carlo algorithms that optimize the design and the future of this technology.

Keywords: HEMT, Silvaco, field plate, genetic algorithm, quantum

Procedia PDF Downloads 446
3728 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The histories of some randomly sampled photon bundles were recorded to train an artificial neural network (ANN) back-propagation model. The flux was calculated using the standard quasi PMC and was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) than the other cases. There is great scope for machine learning models to help further reduce the computational cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried so that the problem environment can be fully represented to the ANN model. Better results can be achieved in this unexplored domain.
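As a concrete illustration of the variance contrast described above, the following minimal Python sketch compares plain pseudo-random Monte Carlo with a scrambled Sobol low-discrepancy sequence on a toy peaked integrand; the integrand, sample size, and seeds are illustrative assumptions, not the authors' setup.

```python
# A minimal sketch (not the authors' code) contrasting pseudo-random Monte Carlo
# with a Sobol low-discrepancy sequence on a toy 1D integral; the integrand is a
# hypothetical stand-in for a sharply peaked spectral emission weight.
import numpy as np
from scipy.stats import qmc

def integrand(x):
    # Toy spectral weight: sharply peaked, mimicking a strong absorption line.
    return np.exp(-100.0 * (x - 0.3) ** 2)

n = 2 ** 12  # sample count (a power of two suits Sobol sequences)

# Standard Monte Carlo with uniform pseudo-random numbers.
rng = np.random.default_rng(0)
mc_estimate = integrand(rng.random(n)).mean()

# Quasi-Monte Carlo with a scrambled Sobol sequence.
sobol = qmc.Sobol(d=1, scramble=True, seed=0)
qmc_estimate = integrand(sobol.random(n).ravel()).mean()

print(f"MC estimate:  {mc_estimate:.6f}")
print(f"QMC estimate: {qmc_estimate:.6f}")
```

Repeating both estimators over many seeds would expose the variance reduction and faster convergence rate the abstract reports for QMC.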

Keywords: radiative heat transfer, Monte Carlo method, pseudo-random numbers, low-discrepancy sequences, artificial neural networks

Procedia PDF Downloads 199
3727 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion

Authors: Omran M. Kenshel, Alan J. O'Connor

Abstract:

Estimating the service life of Reinforced Concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners/engineers. Traditionally, bridge owners/engineers relied more on subjective engineering judgment, e.g. visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods of estimation are needed to aid in making reliable and more accurate predictions for the service life of RC structures. This is in order to direct funds to the bridges found to be the most critical. Criticality of the structure can be considered either from the Structural Capacity (i.e. Ultimate Limit State) or from the Serviceability viewpoint, whichever is adopted. This paper considers the service life of the structure only from the Structural Capacity viewpoint. Considering the great variability associated with the parameters involved in the estimation process, the probabilistic approach is most suited. The probabilistic modelling adopted here used the Monte Carlo simulation technique to estimate the Reliability (i.e. Probability of Failure) of the structure under consideration. In this paper the authors used their own experimental data for the Correlation Length (CL) of the most important deterioration parameters. The CL is a parameter of the Correlation Function (CF) by which the spatial fluctuation of a certain deterioration parameter is described. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load-carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability (SV) is only evident if the reliability of the structure is governed by flexure failure rather than by shear failure.
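The reliability computation described above reduces to estimating a probability of failure by sampling a limit state function. The sketch below shows that pattern under stated assumptions (a hypothetical lognormal resistance and normal load effect); it is not the authors' girder model, and all parameter values are illustrative.

```python
# A minimal sketch, not the authors' model: Monte Carlo estimation of the
# probability of failure P(R - S < 0) for a hypothetical girder, with lognormal
# resistance R and normal load effect S; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Hypothetical flexural resistance (lognormal) and load effect (normal), kNm.
R = rng.lognormal(mean=np.log(3000.0), sigma=0.15, size=n)
S = rng.normal(loc=2000.0, scale=300.0, size=n)

g = R - S              # limit state function: failure when g < 0
pf = np.mean(g < 0.0)  # Monte Carlo estimate of the probability of failure
print(f"P_f ≈ {pf:.2e}")
```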

Keywords: chloride-induced corrosion, Monte Carlo simulation, reinforced concrete, spatial variability

Procedia PDF Downloads 456
3726 Reliability Analysis of Variable Stiffness Composite Laminate Structures

Authors: A. Sohouli, A. Suleman

Abstract:

This study focuses on reliability analysis of variable stiffness composite laminate structures to investigate their potential structural improvement compared to conventional (straight fiber) composite laminate structures. A computational framework was developed which consists of a deterministic design step and a reliability analysis. The optimization part is Discrete Material Optimization (DMO), and the reliability of the structure is computed by Monte Carlo Simulation (MCS) after using the Stochastic Response Surface Method (SRSM). The design driver in the deterministic optimization is maximum stiffness, while the optimization method enforces certain manufacturing constraints to attain industrial relevance. These manufacturing constraints are that the change of orientation between adjacent patches cannot be too large and that the maximum number of successive plies of a particular fiber orientation should not be too high. Variable stiffness composites may be manufactured by Automated Fiber Placement (AFP) machines, which provide consistent quality with good production rates. However, laps and gaps are the most important challenges in steering fibers, and they affect the performance of the structures. In this study, the optimal curved fiber paths at each layer of the composites are designed in the first step by DMO, and then the reliability analysis is applied to investigate the sensitivity of the structure, at different standard deviations, compared to the straight fiber angle composites. The random variables are the material properties and the loads on the structures. The results show that variable stiffness composite laminate structures are much more reliable, even for high standard deviations of the material properties, than conventional composite laminate structures. The reason is that variable stiffness composite laminates allow tailoring of the stiffness and provide the possibility of adjusting the stress and strain distribution favorably in the structures.

Keywords: material optimization, Monte Carlo simulation, reliability analysis, response surface method, variable stiffness composite structures

Procedia PDF Downloads 493
3725 Optimal Load Control Strategy in the Presence of Stochastically Dependent Renewable Energy Sources

Authors: Mahmoud M. Othman, Almoataz Y. Abdelaziz, Yasser G. Hegazy

Abstract:

This paper presents a load control strategy based on a modification of the Big Bang Big Crunch optimization method. The proposed strategy aims to determine the optimal load to be controlled and the corresponding time of control in order to minimize the energy purchased from the substation. The presented strategy helps the distribution network operator to rely on the renewable energy sources in supplying the system demand. The renewable energy sources used in the presented study are modeled using the diagonal band copula method and the sequential Monte Carlo method in order to accurately consider the multivariate stochastic dependence between wind power, photovoltaic power and the system demand. The proposed algorithms are implemented in the MATLAB environment and tested on the IEEE 37-node feeder. Several case studies are carried out, and the subsequent discussions show the effectiveness of the proposed algorithm.
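A Gaussian copula is a simpler stand-in for the diagonal band copula named above, but the sampling pattern (correlated normals mapped through assumed marginals) is the same; all marginals and correlation values in this sketch are illustrative assumptions, not the paper's fitted model.

```python
# A minimal sketch of dependent renewable-input sampling. The paper uses a
# diagonal band copula; here a Gaussian copula is swapped in as a simpler
# stand-in, and all marginals and correlations are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 8760  # one sample per hour of a year

# Assumed correlation between wind power, PV power, and system demand.
corr = np.array([[1.0, -0.2, 0.3],
                 [-0.2, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])

# Gaussian copula: correlated normals -> uniforms -> assumed marginals.
z = rng.multivariate_normal(np.zeros(3), corr, size=n)
u = stats.norm.cdf(z)
wind = stats.weibull_min(c=2.0, scale=8.0).ppf(u[:, 0])  # wind speed, m/s
pv = stats.beta(a=2.0, b=5.0).ppf(u[:, 1])               # normalized PV output
demand = stats.norm(loc=5.0, scale=1.0).ppf(u[:, 2])     # system demand, MW

# These correlated hourly series would feed the sequential Monte Carlo step.
print(wind.mean(), pv.mean(), demand.mean())
```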

Keywords: big bang big crunch, distributed generation, load control, optimization, planning

Procedia PDF Downloads 323
3724 Development of a Robust Protein Classifier to Predict EMT Status of Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma (CESC) Tumors

Authors: Zhenlin Ju, Christopher P. Vellano, Rehan Akbani, Yiling Lu, Gordon B. Mills

Abstract:

The epithelial–mesenchymal transition (EMT) is a process by which epithelial cells acquire mesenchymal characteristics, such as profound disruption of cell-cell junctions, loss of apical-basolateral polarity, and extensive reorganization of the actin cytoskeleton to induce cell motility and invasion. A hallmark of EMT is its capacity to promote metastasis, which is due in part to activation of several transcription factors and subsequent downregulation of E-cadherin. Unfortunately, current approaches have yet to uncover robust protein marker sets that can classify tumors as possessing strong EMT signatures. In this study, we utilize reverse phase protein array (RPPA) data and consensus clustering methods to successfully classify a subset of cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC) tumors into an EMT protein signaling group (EMT group). The overall survival (OS) of patients in the EMT group is significantly worse than that of patients in the other (hormone and PI3K/AKT) signaling groups. Applying the least absolute shrinkage and selection operator (LASSO) for linear regression together with training/test set and Monte Carlo resampling approaches, we identified a set of protein markers that predicts the EMT status of CESC tumors. We fit a logistic model to these protein markers and developed a classifier, which was fixed in the training set and validated in the testing set. The classifier robustly predicted the EMT status of the testing set with an area under the curve (AUC) of 0.975 by Receiver Operating Characteristic (ROC) analysis. This method not only identifies a core set of proteins underlying an EMT signature in cervical cancer patients, but also provides a tool to examine protein predictors that drive molecular subtypes in other diseases.
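The marker-selection and validation pattern described (L1-penalized selection, a classifier fixed on training data, ROC AUC on a held-out set) can be sketched as below on synthetic data; the dataset, penalty strength, and split are illustrative assumptions, not the RPPA data or the authors' tuned model.

```python
# A minimal sketch (synthetic data, not the RPPA measurements) of the pattern
# described: L1-penalized logistic regression selects protein markers, and the
# fixed classifier is scored on a held-out set by ROC AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 200 "tumors" x 150 "proteins", few informative markers.
X, y = make_classification(n_samples=200, n_features=150, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# LASSO-style selection via an L1-penalized logistic model, fit on training data.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_tr, y_tr)

selected = np.flatnonzero(clf.coef_)  # indices of the retained markers
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"{selected.size} markers retained, test AUC = {auc:.3f}")
```

A Monte Carlo resampling loop would repeat the split many times and keep the markers selected most consistently.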

Keywords: consensus clustering, TCGA CESC, silhouette, Monte Carlo LASSO

Procedia PDF Downloads 439
3723 Consistent Testing for an Implication of Supermodular Dominance with an Application to Verifying the Effect of Geographic Knowledge Spillover

Authors: Danbi Chung, Oliver Linton, Yoon-Jae Whang

Abstract:

Supermodularity, or complementarity, is a popular concept in economics which can characterize many objective functions such as utility, social welfare, and production functions. Further, supermodular dominance captures a preference for greater interdependence among inputs of those functions, and it can be applied to examine which input set would produce higher expected utility, social welfare, or production. Therefore, we propose and justify a consistent test for a useful implication of supermodular dominance. We also conduct Monte Carlo simulations to explore the finite sample performance of our test, with critical values obtained from the recentered bootstrap method, with and without selective recentering, and from the subsampling method. Under various parameter settings, we confirmed that our test has reasonably good size and power performance. Finally, we apply our test to compare geographic and distant knowledge spillover in terms of their effects on social welfare, using the National Bureau of Economic Research (NBER) patent data. We expect localized citing to supermodularly dominate distant citing if the geographic knowledge spillover engenders greater social welfare than distant knowledge spillover. Taking subgroups based on firm and patent characteristics, we found that there are industry-wise and patent subclass-wise differences in the pattern of supermodular dominance between localized and distant citing. We also compare the results from analyzing different time periods to see if the development of Internet and communication technology has changed the pattern of dominance. In addition, to appropriately deal with the sparse nature of the data, we apply high-dimensional methods to efficiently select relevant data.

Keywords: supermodularity, supermodular dominance, stochastic dominance, Monte Carlo simulation, bootstrap, subsampling

Procedia PDF Downloads 106
3712 Ensemble Sampler for Infinite-Dimensional Inverse Problems

Authors: Jeremie Coullon, Robert J. Webber

Abstract:

We introduce a Markov chain Monte Carlo (MCMC) sampler for infinite-dimensional inverse problems. Our sampler is based on the affine invariant ensemble sampler, which uses interacting walkers to adapt to the covariance structure of the target distribution. We extend this ensemble sampler for the first time to infinite-dimensional function spaces, yielding a highly efficient gradient-free MCMC algorithm. Because our ensemble sampler does not require gradients or posterior covariance estimates, it is simple to implement and broadly applicable. In many Bayesian inverse problems, MCMC methods are needed to approximate distributions on infinite-dimensional function spaces, for example, in groundwater flow, medical imaging, and traffic flow. Yet designing efficient MCMC methods for function spaces has proved challenging. Recent gradient-based MCMC methods, preconditioned MCMC methods, and SMC methods have improved the computational efficiency of the functional random walk. However, these samplers require gradients or posterior covariance estimates that may be challenging to obtain. Calculating gradients is difficult or impossible in many high-dimensional inverse problems involving a numerical integrator with a black-box code base. Additionally, accurately estimating posterior covariances can require a lengthy pilot run or adaptation period. These concerns raise the question: is there a functional sampler that outperforms functional random walk without requiring gradients or posterior covariance estimates? To address this question, we consider a gradient-free sampler that avoids explicit covariance estimation yet adapts naturally to the covariance structure of the sampled distribution. This sampler works by considering an ensemble of walkers and interpolating and extrapolating between walkers to make a proposal. This is called the affine invariant ensemble sampler (AIES), which is easy to tune, easy to parallelize, and efficient at sampling spaces of moderate dimensionality (less than 20). The main contribution of this work is to propose a functional ensemble sampler (FES) that combines functional random walk and AIES. To apply this sampler, we first calculate the Karhunen–Loeve (KL) expansion for the Bayesian prior distribution, assumed to be Gaussian and trace-class. Then, we use AIES to sample the posterior distribution on the low-wavenumber KL components and use the functional random walk to sample the posterior distribution on the high-wavenumber KL components. Alternating between AIES and functional random walk updates, we obtain our functional ensemble sampler, which is efficient and easy to use without requiring detailed knowledge of the target distribution. In past work, several authors have proposed splitting the Bayesian posterior into low-wavenumber and high-wavenumber components and then applying enhanced sampling to the low-wavenumber components. Yet compared to these other samplers, FES is unique in its simplicity and broad applicability. FES does not require any derivatives, and the need for derivative-free samplers has previously been emphasized. FES also eliminates the requirement for posterior covariance estimates. Lastly, FES is more efficient than other gradient-free samplers in our tests. In two numerical examples, we apply FES to challenging inverse problems that involve estimating a functional parameter and one or more scalar parameters. We compare the performance of functional random walk, FES, and an alternative derivative-free sampler that explicitly estimates the posterior covariance matrix. We conclude that FES is the fastest available gradient-free sampler for these challenging and multimodal test problems.
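The core AIES "stretch move" that FES builds on can be sketched in a few lines. This toy version, a sketch rather than the authors' implementation, targets a 2D Gaussian and omits the functional machinery (the KL expansion and the high-wavenumber random walk updates).

```python
# A minimal sketch of the affine-invariant ensemble "stretch move" (Goodman &
# Weare style) that the sampler builds on, here targeting a toy 2D Gaussian.
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Toy target: standard bivariate Gaussian log-density (up to a constant).
    return -0.5 * np.sum(x ** 2)

n_walkers, n_dim, n_steps, a = 20, 2, 2000, 2.0
walkers = rng.normal(size=(n_walkers, n_dim))

for _ in range(n_steps):
    for k in range(n_walkers):
        # Pick a complementary walker and propose along the connecting line.
        j = rng.choice([i for i in range(n_walkers) if i != k])
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a  # stretch factor ~ g(z)
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        # Acceptance ratio includes the z^(d-1) Jacobian of the stretch move.
        log_ratio = (n_dim - 1) * np.log(z) + log_post(proposal) - log_post(walkers[k])
        if np.log(rng.random()) < log_ratio:
            walkers[k] = proposal
```

Because the proposal is built from the ensemble itself, the move adapts to the target's covariance without ever estimating it, which is the property FES exploits on the low-wavenumber KL components.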

Keywords: Bayesian inverse problems, Markov chain Monte Carlo, infinite-dimensional inverse problems, dimensionality reduction

Procedia PDF Downloads 133
3721 Environmental Radioactivity Analysis by a Sequential Approach

Authors: G. Medkour Ishak-Boushaki, A. Taibi, M. Allab

Abstract:

Quantitative environmental radioactivity measurements are needed to determine the level of exposure of a population to ionizing radiation and for the assessment of the associated risks. Gamma spectrometry remains a very powerful tool for the analysis of radionuclides present in an environmental sample, but the basic problem in such measurements is the low rate of detected events. Using large environmental samples could help to get around this difficulty but, unfortunately, new issues are raised by gamma-ray attenuation and self-absorption. Recently, a new method has been suggested to detect and identify, in a short time and without quantification, a gamma ray from a low-count source. This method does not require a pulse height spectrum acquisition, as usually adopted in gamma spectrometry measurements. It is based on a chronological record of each detected photon through simultaneous measurement of its energy ε and its arrival time τ at the detector, the pair of parameters [ε,τ] defining an event mode sequence (EMS). The EMS series are analyzed sequentially by a Bayesian approach to detect the presence of a given radioactive source. The main object of the present work is to test the applicability of this sequential approach to the detection of radioactive environmental materials. Moreover, for an appropriate health oversight of the public and of the concerned workers, the analysis has been extended to get a reliable quantification of the radionuclides present in environmental samples. For illustration, we consider as an example the problem of detection and quantification of 238U. A Monte Carlo simulated experiment is carried out, consisting of the detection, by a Ge(HP) semiconductor junction, of the 63 keV gamma rays emitted by 234Th (a progeny of 238U). The generated EMS series are analyzed by Bayesian inference. The application of the sequential Bayesian approach in environmental radioactivity analysis offers the possibility of reducing the measurement time without requiring large environmental samples, and consequently avoids the associated inconveniences. The work is still in progress.

Keywords: Bayesian approach, event mode sequence, gamma spectrometry, Monte Carlo method

Procedia PDF Downloads 473
3720 Comparison of Water Equivalent Ratio of Several Dosimetric Materials in Proton Therapy Using Monte Carlo Simulations and Experimental Data

Authors: M. R. Akbari, H. Yousefnia, E. Mirrezaei

Abstract:

Range uncertainties of protons are currently a topic of interest in proton therapy. Two of the parameters that are often used to specify proton range are the water equivalent thickness (WET) and the water equivalent ratio (WER). Since the WER value for a specific material is nearly constant at different proton energies, it is the more useful parameter for comparison. In this study, WER values were calculated for different proton energies in polymethyl methacrylate (PMMA), polystyrene (PS) and aluminum (Al) using the FLUKA and TRIM codes. The results were compared with analytical, experimental and simulated SEICS code data obtained from the literature. In the FLUKA simulation, a cylindrical phantom, 1000 mm in height and 300 mm in diameter, filled with the studied materials was simulated. A typical mono-energetic proton pencil beam, over the wide range of incident energies usually applied in proton therapy (50 MeV to 225 MeV), impinges normally on the phantom. In order to obtain the WER values for the considered materials, cylindrical detectors, 1 mm in height and 20 mm in diameter, were also simulated along the beam trajectory in the phantom. In TRIM calculations, the type of projectile, its energy and angle of incidence, and the type and thickness of the target material should be defined. The mode of 'detailed calculation with full damage cascades' was selected for proton transport in the target material. The biggest difference in WER values between the codes was 3.19%, 1.9% and 0.67% for Al, PMMA and PS, respectively. The biggest differences between each code and the experimental data were 1.08%, 1.26% and 2.55% in Al, and 0.94%, 0.77% and 0.95% in PMMA, for SEICS, FLUKA and SRIM, respectively. FLUKA and SEICS had the greatest agreement with the available experimental data in this study (≤0.77% difference in PMMA and ≤1.08% difference in Al, respectively). It is concluded that the FLUKA and TRIM codes are capable of simulating Bragg curves and calculating WER values in the studied materials. They can also predict the Bragg peak location and the range of proton beams with acceptable accuracy.
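For reference, the WER/WET bookkeeping behind such comparisons is simple; the sketch below assumes placeholder range values rather than the FLUKA/TRIM outputs reported in the study.

```python
# A minimal sketch of the WER bookkeeping, not the FLUKA/TRIM setup: WER is the
# ratio of the proton range in water to the range in the material at the same
# energy, and the water equivalent thickness of a slab follows directly.
def water_equivalent_ratio(range_water_mm: float, range_material_mm: float) -> float:
    """WER = R_water / R_material at the same incident proton energy."""
    return range_water_mm / range_material_mm

def water_equivalent_thickness(thickness_mm: float, wer: float) -> float:
    """WET of a slab: its physical thickness scaled by the material's WER."""
    return thickness_mm * wer

# Illustrative (not measured) ranges for a 150 MeV proton beam.
wer_pmma = water_equivalent_ratio(range_water_mm=158.0, range_material_mm=136.0)
print(f"WER(PMMA) ≈ {wer_pmma:.3f}, WET of a 10 mm slab ≈ "
      f"{water_equivalent_thickness(10.0, wer_pmma):.2f} mm water")
```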

Keywords: water equivalent ratio, dosimetric materials, proton therapy, Monte Carlo simulations

Procedia PDF Downloads 293
3719 Measurement and Simulation of Axial Neutron Flux Distribution in Dry Tube of KAMINI Reactor

Authors: Manish Chand, Subhrojit Bagchi, R. Kumar

Abstract:

A new dry tube (DT) has been installed in the tank of the KAMINI research reactor at Kalpakkam, India. This tube will be used for neutron activation analysis of small to large samples and for testing of neutron detectors. The DT is 375 cm in height and 7.5 cm in diameter, located 35 cm away from the core centre. The experimental thermal flux at various axial positions inside the tube has been measured by irradiating a flux monitor (¹⁹⁷Au) at 20 kW reactor power. The measured activity of ¹⁹⁸Au and the thermal cross section of the ¹⁹⁷Au(n,γ)¹⁹⁸Au reaction were used for the experimental thermal flux measurement. The flux inside the tube varies from 10⁹ to 10¹⁰ n cm⁻²s⁻¹, and the maximum flux was (1.02 ± 0.023) x 10¹⁰ n cm⁻²s⁻¹ at 36 cm from the bottom of the tube. Au and Zr foils, without and with a cadmium cover of 1 mm thickness, were irradiated at the maximum flux position in the DT to find out the irradiation-specific input parameters, namely the sub-cadmium to epithermal neutron flux ratio (f) and the epithermal neutron flux shape factor (α). The f value was 143 ± 5, indicating about a 99.3% thermal neutron component, and the α value was -0.2886 ± 0.0125, indicating a hard epithermal neutron spectrum due to insufficient moderation. The measured flux profile has been validated against a theoretical model of the KAMINI reactor through the Monte Carlo N-Particle code (MCNP). In MCNP, the complex geometry of the entire reactor is modelled in 3D, ensuring minimum approximations for all the components. Continuous energy cross-section data from ENDF-B/VII.1 as well as S(α, β) thermal neutron scattering functions are considered. The neutron flux has been estimated at the corresponding axial locations of the DT using a mesh tally. The thermal flux obtained from the experiment shows good agreement with the values theoretically predicted by MCNP, within ± 10%. It can be concluded that this MCNP model can be utilized for calculating other important parameters like neutron spectra, dose rates, etc., and multi-elemental analysis can be carried out by irradiating samples at the maximum flux position using the measured f and α parameters with k₀-NAA standardization.
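The thermal flux determination from the ¹⁹⁸Au activity follows the standard activation relation φ = A / (N σ (1 - exp(-λ t))). The sketch below evaluates it with the known ¹⁹⁷Au cross section and ¹⁹⁸Au half-life; the activity, foil mass, and irradiation time are illustrative assumptions, not the paper's measured values.

```python
# A minimal sketch of the activation bookkeeping behind the flux measurement:
# the saturation-corrected 198Au activity gives the thermal flux via
# phi = A / (N * sigma * (1 - exp(-lambda * t_irr))). Inputs marked "assumed"
# are illustrative, not the paper's data.
import numpy as np

A = 5.0e3            # 198Au activity at end of irradiation, Bq (assumed)
m_au = 1.0e-3        # gold foil mass, g (assumed)
N = (m_au / 196.97) * 6.022e23    # number of 197Au atoms in the foil
sigma = 98.65e-24                  # 197Au(n,g) thermal cross section, cm^2
half_life = 2.695 * 24 * 3600      # 198Au half-life, s
lam = np.log(2.0) / half_life      # decay constant, 1/s
t_irr = 3600.0                     # irradiation time, s (assumed)

phi = A / (N * sigma * (1.0 - np.exp(-lam * t_irr)))
print(f"thermal flux ≈ {phi:.3e} n cm^-2 s^-1")
```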

Keywords: neutron flux, neutron activation analysis, neutron flux shape factor, MCNP, Monte Carlo N-Particle Code

Procedia PDF Downloads 133
3718 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed. The simulation methods are computationally intensive and time consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM) or computational mechanics could be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, fitting of the resulting moments of the LSF to a probability density function (PDF) is needed. In the present study, a very simple alternative is employed which allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of fitting the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first-order reliability method, FORM). The results of the present study are in good agreement with those computed with the MCS. Therefore, mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM or computational mechanics are employed.
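Rosenblueth's two-point estimate method mentioned above evaluates the response at mean ± one standard deviation combinations to recover its first two moments. The sketch below applies it to a toy linear limit state function with illustrative moments, standing in for the FEM bridge model the paper actually evaluates.

```python
# A minimal sketch of Rosenblueth's two-point estimate method (PEM) for two
# uncorrelated random variables: the response is evaluated at the 2^n points
# (mean ± one standard deviation) to recover its first two moments. The
# response function here is a toy stand-in for a numerical bridge model.
import itertools
import numpy as np

def response(R, S):
    # Hypothetical limit state function; a numerical model would go here.
    return R - S

means = np.array([3000.0, 2000.0])  # means of R and S (illustrative)
stds = np.array([300.0, 250.0])     # standard deviations (illustrative)

values = [response(*(means + np.array(signs) * stds))
          for signs in itertools.product((-1.0, 1.0), repeat=2)]

mean_g = np.mean(values)  # equal weights 1/2^n for uncorrelated inputs
std_g = np.std(values)
beta = mean_g / std_g     # reliability index if g is fitted to a normal PDF
print(f"E[g] = {mean_g:.1f}, sigma_g = {std_g:.1f}, beta ≈ {beta:.2f}")
```

Fitting the two recovered moments to a chosen distribution, as the abstract describes, then yields the probability of failure without any sampling.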

Keywords: structural reliability, reinforced concrete bridges, combined approach, point estimate method, Monte Carlo simulation

Procedia PDF Downloads 330
3717 Non-Invasive Imaging of Tissue Using Near Infrared Radiations

Authors: Ashwani Kumar Aggarwal

Abstract:

NIR light is non-ionizing and can pass easily through living tissues, such as the breast, without any harmful effects. Therefore, the use of NIR light for imaging biological tissue and quantifying its optical properties is a good choice over other, invasive methods. Optical tomography involves two steps. One is the forward problem, and the other is the reconstruction problem. The forward problem consists of finding the measurements of transmitted light through the tissue from source to detector, given the spatial distribution of absorption and scattering properties. The second step is the reconstruction problem. In X-ray tomography, there are standard methods for reconstruction, such as the filtered back-projection method or the algebraic reconstruction methods. But these methods cannot be applied as such in optical tomography, due to the highly scattering nature of biological tissue. A hybrid algorithm for reconstruction has been implemented in this work, which takes into account the highly scattered paths taken by photons while back-projecting the forward data obtained during the Monte Carlo simulation. The reconstructed image suffers from blurring due to the point spread function. This blurred reconstructed image has been enhanced using a digital filter which is optimal in the mean-square sense.

Keywords: least-squares optimization, filtering, tomography, laser interaction, light scattering

Procedia PDF Downloads 290
3716 Consideration of Uncertainty in Engineering

Authors: A. Mohammadi, M. Moghimi, S. Mohammadi

Abstract:

Engineers need computational methods that provide solutions less sensitive to environmental effects, so techniques should be used that take uncertainty into account to control and minimize the risk associated with design and operation. In order to consider uncertainty in an engineering problem, the optimization problem should be solved for a suitable range of each uncertain input variable instead of just one estimated point. Using a deterministic optimization problem, a large computational burden is required to consider every possible and probable combination of uncertain input variables. Several methods have been reported in the literature to deal with problems under uncertainty. In this paper, different methods are presented and analyzed.

Keywords: uncertainty, Monte Carlo simulation, stochastic programming, scenario method

Procedia PDF Downloads 386
3715 Co-Evolutionary Fruit Fly Optimization Algorithm and Firefly Algorithm for Solving Unconstrained Optimization Problems

Authors: R. M. Rizk-Allah

Abstract:

This paper presents a co-evolutionary fruit fly optimization algorithm based on the firefly algorithm (CFOA-FA) for solving unconstrained optimization problems. The proposed algorithm integrates the merits of the fruit fly optimization algorithm (FOA), the firefly algorithm (FA) and an elite strategy to refine the performance of the classical FOA. Moreover, a co-evolutionary mechanism is implemented by applying FA procedures to ensure the diversity of the swarm. Finally, the proposed CFOA-FA algorithm is tested on several benchmark problems from the literature, and the numerical results demonstrate its superiority in finding the global optimal solution.

Keywords: firefly algorithm, fruit fly optimization algorithm, unconstrained optimization problems

Procedia PDF Downloads 513
3714 Probabilistic Life Cycle Assessment of the Nano Membrane Toilet

Authors: A. Anastasopoulou, A. Kolios, T. Somorin, A. Sowale, Y. Jiang, B. Fidalgo, A. Parker, L. Williams, M. Collins, E. J. McAdam, S. Tyrrel

Abstract:

Developing countries are nowadays confronted with great challenges related to domestic sanitation services in view of imminent water scarcity. Contemporary sanitation technologies established in these countries are likely to pose health risks unless waste management standards are followed properly. This paper provides a solution to sustainable sanitation with the development of an innovative toilet system, called the Nano Membrane Toilet (NMT), which has been developed by Cranfield University and sponsored by the Bill & Melinda Gates Foundation. This particular technology converts human faeces into energy through gasification and provides treated wastewater from urine through membrane filtration. In order to evaluate the environmental profile of the NMT system, a deterministic life cycle assessment (LCA) has been conducted in SimaPro software employing the Ecoinvent v3.3 database. This study has determined the factors contributing most to the environmental footprint of the NMT system. However, as sensitivity analysis has identified certain operating parameters as critical for the robustness of the LCA results, adopting a stochastic approach to the Life Cycle Inventory (LCI) will comprehensively capture the input data uncertainty and enhance the credibility of the LCA outcome. For that purpose, Monte Carlo simulations, in combination with an artificial neural network (ANN) model, have been conducted for the input parameters of raw material, produced electricity, NOX emissions, amount of ash and transportation of fertilizer. The analysis has provided the distributions and confidence intervals of the selected impact categories, and, in turn, more credible conclusions are drawn on the respective Life Cycle Impact Assessment (LCIA) profile of the NMT system. Last but not least, this study will also yield essential insights into the methodological framework that can be adopted in the environmental impact assessment of other complex engineering systems subject to a high level of input data uncertainty.

Keywords: sanitation systems, nano-membrane toilet, LCA, stochastic uncertainty analysis, Monte Carlo simulations, artificial neural network

Procedia PDF Downloads 204
3713 Risk Measure from Investment in Finance by Value at Risk

Authors: Mohammed El-Arbi Khalfallah, Mohamed Lakhdar Hadji

Abstract:

Managing and controlling risk is a research topic in the world of finance. When facing a risky situation, stakeholders need to compare positions and actions, and financial institutions must take measures of particular market and credit risks. In this work, we study a model of risk measure in finance: Value at Risk (VaR), which is a new tool for measuring an entity's exposure to risk. We explain the concept of value at risk along with its average and tail variants, and describe the three methods for computing it: the parametric method, the historical method, and the Monte Carlo numerical method. Finally, we briefly describe the advantages and disadvantages of the three methods for computing value at risk.
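The three computations named above can be sketched side by side; the return series, confidence level, and portfolio value below are illustrative assumptions, not data from the study.

```python
# A minimal sketch of the three VaR computations described, applied to a
# hypothetical series of daily portfolio returns.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
returns = rng.normal(0.0005, 0.02, size=1000)  # stand-in historical returns
alpha = 0.99                                    # confidence level (assumed)
value = 1_000_000.0                             # portfolio value (assumed)

# 1) Parametric (variance-covariance): assumes normally distributed returns.
mu, sigma = returns.mean(), returns.std(ddof=1)
var_param = -(mu + sigma * stats.norm.ppf(1 - alpha)) * value

# 2) Historical: empirical quantile of the observed returns.
var_hist = -np.quantile(returns, 1 - alpha) * value

# 3) Monte Carlo: simulate returns from a fitted model, then take the quantile.
sims = rng.normal(mu, sigma, size=100_000)
var_mc = -np.quantile(sims, 1 - alpha) * value

print(f"parametric {var_param:,.0f}  historical {var_hist:,.0f}  MC {var_mc:,.0f}")
```

The trade-offs the abstract mentions show up directly: the parametric method is fast but distribution-bound, the historical method is assumption-free but limited to observed scenarios, and Monte Carlo is flexible but computationally heavier.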

Keywords: average value at risk, conditional value at risk, tail value at risk, value at risk

Procedia PDF Downloads 416
3712 On Coverage Probability of Confidence Intervals for the Normal Mean with Known Coefficient of Variation

Authors: Suparat Niwitpong, Sa-aat Niwitpong

Abstract:

Statistical inference for the normal mean with known coefficient of variation has been investigated recently. This situation occurs commonly in environmental and agricultural experiments, where the scientist knows the coefficient of variation of the experiment. In this paper, we construct new confidence intervals for the normal population mean with known coefficient of variation. We also derive analytic expressions for the coverage probability of each confidence interval. To confirm our theoretical results, Monte Carlo simulation is used to assess the performance of these intervals based on their coverage probabilities.
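Coverage probability is itself naturally checked by Monte Carlo. The sketch below assesses a simple z-interval that plugs in the known coefficient of variation, as a stand-in for the authors' proposed intervals; all parameter values are illustrative.

```python
# A minimal sketch of the coverage assessment described: simulate many samples
# from a normal with known coefficient of variation (CV), build an interval for
# the mean on each, and count how often it covers the true mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, tau, n, reps = 10.0, 0.3, 25, 20_000  # true mean, known CV, sample size
z = stats.norm.ppf(0.975)                  # nominal 95% level
sigma = tau * mu                           # known CV fixes sigma given mu

covered = 0
for _ in range(reps):
    x = rng.normal(mu, sigma, size=n)
    half = z * tau * x.mean() / np.sqrt(n)  # plug-in sigma estimate: tau * xbar
    covered += abs(x.mean() - mu) <= half

print(f"estimated coverage ≈ {covered / reps:.4f}")
```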

Keywords: confidence interval, coverage probability, expected length, known coefficient of variation

Procedia PDF Downloads 365
3711 A Risk-Based Approach to Construction Management

Authors: Chloe E. Edwards, Yasaman Shahtaheri

Abstract:

Risk management plays a fundamental role in project planning and delivery. The purpose of incorporating risk management into project management practices is to identify and address uncertainties related to key project activities. The uncertainties, known as risk events, can relate to project deliverables that are quantifiable and are often measured by impact to project schedule, cost, or the environment. Risk management should be incorporated as an iterative practice throughout the planning, execution, and commissioning phases of a project. This paper specifically examines how risk management contributes to effective project planning and delivery through a case study of a transportation project. This case study focused solely on impacts to project schedule regarding three milestones: readiness for delivery, readiness for testing and commissioning, and completion of the facility. The case study followed the ISO 31000: Risk Management – Guidelines. The key factors outlined by these guidelines include understanding the scope and context of the project; conducting a risk assessment including identification, analysis, and evaluation; and, lastly, risk treatment through mitigation measures. This process requires continuous consultation with subject matter experts and monitoring to iteratively update the risks accordingly. The risk identification process led to a total of fourteen risks related to design, permitting, construction, and commissioning. The analysis involved running 1,000 Monte Carlo simulations through @RISK 8.0 Industrial software to determine potential milestone completion dates based on the project baseline schedule. These dates include the best case, most likely case, and worst case to provide an estimated delay for each milestone. Evaluation of these results provided insight into which risks were the highest contributors to the projected milestone completion dates. Based on the analysis results, the risk management team was able to recommend mitigation measures to reduce the likelihood of risks occurring. The risk management team also provided recommendations for managing the identified risks and project activities moving forward to meet the most likely or best-case milestone completion dates.
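The milestone analysis pattern (the study itself used @RISK 8.0) can be sketched with triangular delay distributions per risk; all durations and risk parameters below are illustrative assumptions, not the case study's data.

```python
# A minimal sketch of schedule-risk Monte Carlo: each risk event contributes a
# triangular delay distribution, and the milestone completion distribution is
# sampled. Durations and risk parameters are illustrative, not the case study's.
import numpy as np

rng = np.random.default_rng(11)
n = 1000  # number of Monte Carlo iterations, as in the case study

baseline_days = 420.0  # baseline duration to "readiness for delivery" (assumed)

# (min, most likely, max) delay in days for three hypothetical risk events.
risks = [(0.0, 5.0, 30.0),    # permitting delay
         (0.0, 10.0, 45.0),   # construction rework
         (0.0, 2.0, 15.0)]    # equipment delivery slip

delays = sum(rng.triangular(lo, mode, hi, size=n) for lo, mode, hi in risks)
total = baseline_days + delays

best, likely, worst = np.percentile(total, [5, 50, 95])
print(f"P5 {best:.0f} d, P50 {likely:.0f} d, P95 {worst:.0f} d")
```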

Keywords: construction management, Monte Carlo simulation, project delivery, risk assessment, transportation engineering

Procedia PDF Downloads 87
3710 A One Dimensional Particle in Cell Model for Excimer Lamps

Authors: W. Benstaali, A. Belasri

Abstract:

In this work, we study a planar lamp filled with a neon-xenon gas mixture. We use a one-dimensional particle-in-cell model with Monte Carlo collisions (PIC-MCC) to investigate the effect of xenon concentration on the energy deposited in excitation, ionization and ions. A Xe-Ne discharge is studied at a gas pressure of 400 Torr. The results show an efficient Xe20-Ne mixture with an applied voltage of 1.2 kV; the xenon excitation energy represents 65% of the total energy dissipated in the discharge. We have also studied the electrical properties and the energy balance of a Xe50-Ne discharge, which needs a voltage of 2 kV; the xenon energy is then higher.

Keywords: dielectric barrier discharge, efficiency, excitation, lamps

Procedia PDF Downloads 140
3709 Finite Sample Inferences for Weak Instrument Models

Authors: Gubhinder Kundhi, Paul Rilstone

Abstract:

It is well established that Instrumental Variable (IV) estimators in the presence of weak instruments can be poorly behaved; in particular, they can be quite biased in finite samples. Finite sample approximations to the distributions of these estimators are obtained using Edgeworth and saddlepoint expansions. Departures from normality of the distributions of these estimators are analyzed using higher-order analytical corrections in these expansions. In a Monte Carlo experiment, the performance of these expansions is compared to the first-order approximation and other methods commonly used in finite samples, such as the bootstrap.

Keywords: bootstrap, instrumental variables, Edgeworth expansions, saddlepoint expansions

Procedia PDF Downloads 288
3708 A Saturation Attack Simulation on a Navy Warship Based on Discrete-Event Simulation Models

Authors: Yawei Liang

Abstract:

Threat from cruise missiles is among the most dangerous considerations for a warship in the modern era: anti-ship cruise missiles are fast, accurate, and extremely destructive. In this paper, the goal was to use an object-oriented environment to program a simulation modeling a scenario in which a lone frigate is attacked by a wave of missiles fired at given intervals. The parameters of the simulation are modified to examine the relationships between different variables in the situation, and an analysis is performed on various aspects of the defending ship's equipment. Finally, the results are presented, along with a brief discussion.

Keywords: discrete event simulation, Monte Carlo simulation, naval resource management, weapon-target allocation/assignment

Procedia PDF Downloads 68
3707 The Generalized Pareto Distribution as a Model for Sequential Order Statistics

Authors: Mahdy Esmailian, Mahdi Doostparast, Ahmad Parsian

Abstract:

In this article, type II censored sequential order statistics (SOS) samples coming from the generalized Pareto distribution are considered. Maximum likelihood (ML) estimators of the unknown parameters are derived on the basis of the available multiple SOS data. Necessary conditions for the existence and uniqueness of the derived ML estimates are given. Due to complexity in the proposed likelihood function, a useful re-parametrization is suggested. For illustrative purposes, a Monte Carlo simulation study is conducted and an illustrative example is analysed.
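For an ordinary (non-SOS) sample, ML fitting of the generalized Pareto distribution can be sketched with scipy's generic fitter, standing in for the authors' re-parametrized likelihood; the true parameters and sample size are illustrative.

```python
# A minimal sketch of maximum likelihood fitting for the generalized Pareto
# distribution on an ordinary sample; scipy's numerical optimizer stands in
# for the authors' re-parametrized SOS likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
true_shape, true_scale = 0.2, 2.0
sample = stats.genpareto.rvs(true_shape, scale=true_scale, size=500,
                             random_state=rng)

# ML estimation with the location parameter fixed at zero.
shape_hat, loc_hat, scale_hat = stats.genpareto.fit(sample, floc=0.0)
print(f"shape: {shape_hat:.3f} (true {true_shape}), "
      f"scale: {scale_hat:.3f} (true {true_scale})")
```

Repeating this fit over many simulated samples is exactly the kind of Monte Carlo study the abstract describes for assessing the estimators.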

Keywords: Bayesian estimation, generalized Pareto distribution, maximum likelihood estimation, sequential order statistics

Procedia PDF Downloads 481
3706 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction

Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach

Abstract:

X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range. This can potentially lead to better diagnosis for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires high-coherence X-rays. Many research teams have demonstrated that it is also feasible using a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less robust experimental setup. However, previous studies were done using a particular X-ray source (a liquid-metal jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with just a small modification of a commercial bench-top micro-CT (computed tomography) scanner, introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. However, this needs a suitable algorithm for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural-network-based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, in order to address the issue that neural networks require a large amount of training data to produce high-quality reconstructions.

Keywords: micro-CT, neural networks, reconstruction, speckle-based X-ray phase contrast

Procedia PDF Downloads 227
3705 Modified Weibull Approach for Bridge Deterioration Modelling

Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight

Abstract:

State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate functions for each condition state with age cannot be directly derived by using a Markov model for a given bridge element group, which however is of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM model output, namely the Modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and a Metropolis-Hastings algorithm (MHA) based Markov chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly, based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict conditions at the network level accurately but also to capture the model uncertainties within a given confidence interval.
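A random-walk Metropolis-Hastings step of the kind the MCMC stage relies on can be sketched as below, here fitting two Weibull parameters to synthetic lifetimes; the priors, proposal scales, and data are illustrative assumptions, not the paper's bridge inspection records.

```python
# A minimal sketch of a random-walk Metropolis-Hastings sampler drawing the two
# parameters of a Weibull likelihood from synthetic "time in condition state"
# data; priors, proposal scales, and data are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
data = stats.weibull_min.rvs(1.8, scale=30.0, size=60, random_state=rng)

def log_post(theta):
    k, lam = theta
    if k <= 0 or lam <= 0:
        return -np.inf  # flat prior restricted to the positive quadrant
    return np.sum(stats.weibull_min.logpdf(data, k, scale=lam))

theta = np.array([1.0, 20.0])  # starting point for (shape, scale)
chain = []
for _ in range(20_000):
    proposal = theta + rng.normal(0.0, [0.05, 1.0])  # random-walk proposal
    if np.log(rng.random()) < log_post(proposal) - log_post(theta):
        theta = proposal  # accept; otherwise keep the current state
    chain.append(theta)

k_hat, lam_hat = np.mean(chain[5000:], axis=0)  # posterior means after burn-in
print(f"k ≈ {k_hat:.2f}, lambda ≈ {lam_hat:.2f}")
```

The spread of the retained chain, rather than just its mean, is what supplies the confidence intervals on the deterioration-curve parameters.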

Keywords: bridge deterioration modelling, Modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models

Procedia PDF Downloads 697
3704 Design, Construction and Performance Evaluation of a HPGe Detector Shield

Authors: M. Sharifi, M. Mirzaii, F. Bolourinovin, H. Yousefnia, M. Akbari, K. Yousefi-Mojir

Abstract:

A multilayer passive shield composed of low-activity lead (Pb), copper (Cu), tin (Sn) and iron (Fe) was designed and manufactured for a coaxial HPGe detector placed in a surface laboratory, to reduce background radiation and the radiation dose to personnel. The performance of the shield was evaluated, and efficiency curves of the detector were plotted using various standard sources at different distances. Monte Carlo simulations and a set of TLD chips were used for dose estimation at two distances of 20 and 40 cm. The results show that the shield reduced the background spectrum and the personnel dose by more than 95%.

Keywords: HPGe shield, background count, personnel dose, efficiency curve

Procedia PDF Downloads 430
3703 Investigating the Effects of Data Transformations on a Bi-Dimensional Chi-Square Test

Authors: Alexandru George Vaduva, Adriana Vlad, Bogdan Badea

Abstract:

In this research, we conduct a Monte Carlo analysis of a two-dimensional χ² test, which is used to determine the minimum distance required for independent sampling in the context of chaotic signals. We investigate the impact on the χ² test of transforming initial data sets from any probability distribution into new signals with a uniform distribution using the Spearman rank correlation. This transformation removes the randomness of the data pairs, and as a result, the observed distribution of χ² test values differs from the expected distribution. We propose a solution to this problem and evaluate it using another chaotic signal.

Keywords: chaotic signals, logistic map, Pearson's test, chi-square test, bivariate distribution, statistical independence

Procedia PDF Downloads 71
3702 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit

Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic

Abstract:

Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature virtually allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single-energy peak and, as such, could potentially compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when the identification of radionuclides, as well as their activity concentrations, is being practiced, where high precision comes as a necessity. In measurements of this nature, in order to be able to reproduce good and trustworthy results, one has to have initially performed an adequate full-energy peak (FEP) efficiency calibration of the used equipment. However, experimental determination of the response, i.e., efficiency curves, for a given detector-sample configuration and geometry is not always easy and requires a certain set of reference calibration sources in order to account for and cover broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers turned towards the application of different software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation method for two HPGe detectors through the implementation of the Geant4 toolkit developed by CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended range detector (XtRa HPGe, CANBERRA) and a broad energy range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The acquired results for both detectors displayed good agreement with the experimental data, falling within an average statistical uncertainty of ∼4.6% for the XtRa detector and ∼1.8% for the BEGe detector within the energy ranges of 59.4-1836.1 keV and 59.4-1212.9 keV, respectively.

Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method

Procedia PDF Downloads 94