Search results for: Monte Carlo estimation
2044 Unit Root Tests Based On the Robust Estimator
Authors: Wararit Panichkitkosolkul
Abstract:
Unit root tests based on a robust estimator for the first-order autoregressive process are proposed and compared with unit root tests based on the ordinary least squares (OLS) estimator. The percentiles of the null distributions of the unit root tests are also reported. The empirical probabilities of Type I error and the powers of the unit root tests are estimated via Monte Carlo simulation. Simulation results show that all unit root tests can control the probability of Type I error in all situations. The empirical powers of the unit root tests based on the robust estimator are higher than those of the tests based on the OLS estimator.
Keywords: autoregressive, ordinary least squares, type I error, power of the test, Monte Carlo simulation
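A minimal sketch of the kind of simulation study this abstract describes, assuming a Dickey-Fuller-type test built on the OLS estimator: AR(1) series are generated under the null (rho = 1) and under a stationary alternative, and the empirical Type I error and power are estimated as Monte Carlo rejection rates. The sample size, replication count, and the -1.95 critical value (the standard 5% Dickey-Fuller value for the no-constant case) are illustrative assumptions, not the paper's settings, which also cover a robust estimator not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_series(rho, n):
    """y_t = rho * y_{t-1} + e_t with standard normal innovations."""
    e = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    return y

def df_statistic(y):
    """OLS t-statistic for gamma in: diff(y)_t = gamma * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    gamma = (ylag @ dy) / (ylag @ ylag)
    resid = dy - gamma * ylag
    s2 = (resid @ resid) / (len(dy) - 1)
    return gamma / np.sqrt(s2 / (ylag @ ylag))

def rejection_rate(rho, n=100, reps=5000, crit=-1.95):
    """Fraction of replications in which the unit root null is rejected."""
    return np.mean([df_statistic(ar1_series(rho, n)) < crit for _ in range(reps)])

print("empirical Type I error (rho = 1.0):", rejection_rate(1.0))  # near 0.05
print("empirical power        (rho = 0.9):", rejection_rate(0.9))
```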
Procedia PDF Downloads 287
2043 Efficient Motion Estimation by Fast Three Step Search Algorithm
Authors: S. M. Kulkarni, D. S. Bormane, S. L. Nalbalwar
Abstract:
The rapid development of technology has had a dramatic impact on the medical health care field. Medical databases obtained with the latest machines, such as CT machines and MRI scanners, require a large amount of memory storage as well as large bandwidth for data transmission in telemedicine applications. Thus, there is a need for video compression. As a medical image database contains a number of frames (slices), motion estimation is needed when coding these images. Motion estimation finds the movement of objects in an image sequence and produces motion vectors that represent the estimated motion of objects in the frame. In order to reduce temporal redundancy between successive frames of a video sequence, motion compensation is performed. In this paper, the three step search (TSS) block matching algorithm is implemented on different types of video sequences. It is shown that the three step search algorithm produces better quality performance and less computational time compared with the exhaustive full search algorithm.
Keywords: block matching, exhaustive search motion estimation, three step search, video compression
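A minimal sketch of the three step search idea described above, assuming grayscale frames stored as NumPy arrays, 16x16 macroblocks, and a sum-of-absolute-differences (SAD) cost; this is not the authors' implementation. The search evaluates nine candidate points around the current centre, re-centres on the best one, and halves the step size (4, then 2, then 1), so about 25 SAD evaluations suffice per block instead of the hundreds required by a full search.

```python
import numpy as np

def sad(block, cand):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block.astype(np.int64) - cand.astype(np.int64)).sum()

def three_step_search(ref, cur, bx, by, bsize=16, start_step=4):
    """Motion vector (dx, dy) for the block of `cur` at (bx, by), searched in `ref`."""
    block = cur[by:by + bsize, bx:bx + bsize]
    cx, cy = bx, by                                  # current search centre
    best_cost = sad(block, ref[cy:cy + bsize, cx:cx + bsize])
    step = start_step
    while step >= 1:
        best_x, best_y = cx, cy
        for dy in (-step, 0, step):                  # nine candidates per step
            for dx in (-step, 0, step):
                x, y = cx + dx, cy + dy
                if 0 <= x <= ref.shape[1] - bsize and 0 <= y <= ref.shape[0] - bsize:
                    cost = sad(block, ref[y:y + bsize, x:x + bsize])
                    if cost < best_cost:
                        best_cost, best_x, best_y = cost, x, y
        cx, cy = best_x, best_y                      # re-centre on the best point
        step //= 2                                   # 4 -> 2 -> 1, i.e. three steps
    return cx - bx, cy - by

# Example: a smooth synthetic frame shifted 3 pixels to the right.
yy, xx = np.mgrid[0:64, 0:64]
ref = (128 + 60 * np.sin(xx / 9.0) + 40 * np.cos(yy / 7.0)).astype(np.uint8)
cur = np.roll(ref, 3, axis=1)
print(three_step_search(ref, cur, 24, 24))           # expect a vector close to (-3, 0)
```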
Procedia PDF Downloads 490
2042 Modeling the Transport of Charge Carriers in the Active Devices MESFET Based on GaInP by the Monte Carlo Method
Authors: N. Massoum, A. Guen. Bouazza, B. Bouazza, A. El Ouchdi
Abstract:
The progress of the integrated circuit industry in recent years has been driven by the continuous miniaturization of transistors. With the reduction of component dimensions to 0.1 micron and below, new physical effects come into play that standard two-dimensional (2D) simulators do not consider. In fact, the third dimension comes into play because the transverse and longitudinal dimensions of the components are of the same order of magnitude. To describe the operation of such components with greater fidelity, simulation tools must be refined and adapted to take these phenomena into account. After an analytical study of the static characteristics of the component under its different operating modes, a numerical simulation of a GaInP MESFET field-effect transistor with a submicron gate is performed. The influence of the gate length is studied. The results are used to determine the optimal geometric and physical parameters of the component for specific applications and uses.
Keywords: Monte Carlo simulation, transient electron transport, MESFET device, GaInP
Procedia PDF Downloads 419
2041 Multiscale Simulation of Ink Seepage into Fibrous Structures through a Mesoscopic Variational Model
Authors: Athmane Bakhta, Sebastien Leclaire, David Vidal, Francois Bertrand, Mohamed Cheriet
Abstract:
This work presents a new three-dimensional variational model proposed for the simulation of ink seepage into paper sheets at the fiber level. The model, inspired by the Ising model, takes into account a finite volume of ink and describes the system state through gravity, cohesion, and adhesion force interactions. At the mesoscopic scale, the paper substrate is modeled using a discretized fiber structure generated with a numerical deposition procedure. A modified Monte Carlo method is introduced for the simulation of the ink dynamics. In addition, a multiphase lattice Boltzmann method is suggested to fine-tune the mesoscopic variational model parameters, and it is shown that the ink seepage behaviors predicted by the proposed model can resemble those predicted by a method relying on first principles.
Keywords: fibrous media, lattice Boltzmann, modelling and simulation, Monte Carlo, variational model
Procedia PDF Downloads 145
2040 Environmental, Climate Change, and Health Outcomes in the World
Authors: Felix Aberu
Abstract:
The high rate of greenhouse gas (CO₂) emissions and the increased concentration of carbon dioxide in the atmosphere stem from both human and natural activities. This has caused climate change and global warming in the world. The adverse effects of these climatic changes have no doubt threatened human existence. Hence, this study examined the effects of environmental and climate influence on mortality and morbidity rates, with particular reference to the world's leading CO₂-emitting countries, using pre-estimation, estimation, and post-estimation techniques for more dependable outcomes. The System Generalized Method of Moments (SGMM) was adopted as the main estimation technique for the data analysis from 1996 to 2023. The coefficient of carbon emissions confirmed a positive and significant relationship among CO₂ emissions, mortality, and morbidity rates in the world's leading CO₂-emitting countries, which implies that carbon emissions have contributed to mortality and morbidity rates in the world. Therefore, significant action should be taken to facilitate the expansion of environmental protection and sustainability initiatives in the CO₂-emitting nations of the world.
Keywords: environmental, mortality, morbidity, health outcomes, carbon emissions
Procedia PDF Downloads 53
2039 Phasor Measurement Unit Based on Particle Filtering
Authors: Rithvik Reddy Adapa, Xin Wang
Abstract:
Phasor Measurement Units (PMUs) are sophisticated measuring devices that find the amplitude, phase, and frequency of various voltages and currents in a power system. The particle filter is a state estimation technique that uses Bayesian inference. Particle filters are widely used in pose estimation and indoor navigation and are very reliable. This paper studies and compares four different particle filters as PMUs, namely the generic particle filter (GPF), genetic algorithm particle filter (GAPF), particle swarm optimization particle filter (PSOPF), and adaptive particle filter (APF). Two different test signals are used to test the performance of the filters in terms of responsiveness and correctness of the estimates.
Keywords: phasor measurement unit, particle filter, genetic algorithm, particle swarm optimisation, state estimation
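A minimal sketch of the generic (SIR) particle filter used as a phasor estimator, with the state taken as the amplitude and phase of a 50 Hz signal and one noisy sample per step as the measurement. The sampling rate, noise levels, and random-walk dynamics are illustrative assumptions, not the paper's test signals.

```python
import numpy as np

rng = np.random.default_rng(1)
f, fs, n_particles = 50.0, 1600.0, 500
true_amp, true_phase, meas_std = 1.0, 0.6, 0.05

# Initialise particles uniformly over plausible amplitudes and phases.
particles = np.column_stack([rng.uniform(0.5, 1.5, n_particles),
                             rng.uniform(-np.pi, np.pi, n_particles)])
weights = np.full(n_particles, 1.0 / n_particles)

for k in range(200):
    t = k / fs
    z = true_amp * np.cos(2 * np.pi * f * t + true_phase) + rng.normal(0, meas_std)
    # Predict: random-walk dynamics for amplitude and phase.
    particles += rng.normal(0, [0.005, 0.01], particles.shape)
    # Update: Gaussian measurement likelihood.
    pred = particles[:, 0] * np.cos(2 * np.pi * f * t + particles[:, 1])
    weights *= np.exp(-0.5 * ((z - pred) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample (multinomial) when the effective sample size degenerates.
    if 1.0 / (weights ** 2).sum() < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

amp_hat, phase_hat = weights @ particles        # posterior-mean phasor estimate
print(f"estimated amplitude {amp_hat:.3f}, phase {phase_hat:.3f}")
```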
Procedia PDF Downloads 8
2038 Monte Carlo Neutronic Calculations on Laser Inertial Fusion Energy (LIFE)
Authors: Adem Acır
Abstract:
In this study, a time-dependent neutronic analysis of the incineration of minor actinides in a Laser Inertial Fusion Energy (LIFE) engine was performed. The calculations were carried out using the MCNP codes with the ENDF/B-VI neutron data library. The neutronic calculations considered TRISO particles fueled with minor actinides and cooled with natural lithium. The natural-lithium-cooled LIFE engine used a 10% TRISO fuel minor actinide composition. The tritium breeding ratio (TBR) and energy multiplication factor (M) were computed as 1.46 and 3.75, respectively. The reactor operation time was calculated as ~21 years, and the burnup was obtained as ~1060 GWD/MT. As a result, a very high burnup was achieved for the LIFE engine.
Keywords: Monte Carlo, minor actinides, nuclear waste, LIFE engine
Procedia PDF Downloads 292
2037 Developing Fuzzy Logic Model for Reliability Estimation: Case Study
Authors: Soroor K. H. Al-Khafaji, Manal Mohammad Abed
Abstract:
The aim of this paper is to evaluate the reliability of a complex engineering system and to design a fuzzy model for the reliability estimation. The designed model has been applied to a vegetable oil purification (neutralization) system to help the specialist user, based on the concept of FMEA (Failure Mode and Effect Analysis), estimate the reliability of this repairable system in the vegetable oil industry. The fuzzy model has been used to predict the system reliability for a future time period, depending on a historical database covering the two past years. The model can help to identify the system malfunctions and to predict its reliability over a future period with more accurate and reasonable results compared with those obtained by the traditional method of reliability estimation.
Keywords: fuzzy logic, reliability, repairable systems, FMEA
Procedia PDF Downloads 614
2036 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood
Authors: Randa Alharbi, Vladislav Vyshemirsky
Abstract:
Systems biology is an important field in science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining its essential components, and representing an appropriate law for the interactions between those components. Complex biological systems exhibit stochastic behaviour; thus, probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model: it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling inference which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking into account its computational time. We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)
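A minimal sketch of ABC rejection sampling, with a toy stochastic birth-death process standing in for the Repressilator (the model, prior, summary statistic, and tolerance are all illustrative assumptions). The intractable likelihood is never evaluated; a Gillespie simulator plays its role.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(birth, death, x0=20, t_end=8.0, x_max=2000):
    """Gillespie simulation of a linear birth-death process; returns the final count."""
    x, t = x0, 0.0
    while t < t_end and 0 < x < x_max:
        total = (birth + death) * x
        t += rng.exponential(1.0 / total)
        if t < t_end:
            x += 1 if rng.random() < birth / (birth + death) else -1
    return x

# "Observed" data generated at a known birth rate, then treated as unknown.
observed = [simulate(0.30, 0.25) for _ in range(10)]
s_obs = np.mean(observed)

# ABC rejection: draw from the prior, simulate, accept if the summary
# statistic lands within the tolerance epsilon of the observed one.
accepted = []
for _ in range(2000):
    birth = rng.uniform(0.0, 0.6)                        # prior on the birth rate
    s_sim = np.mean([simulate(birth, 0.25) for _ in range(10)])
    if abs(s_sim - s_obs) < 3.0:                         # tolerance epsilon
        accepted.append(birth)

print(f"accepted {len(accepted)} draws; posterior mean birth rate {np.mean(accepted):.3f}")
```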
Procedia PDF Downloads 201
2035 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation
Authors: Li-hsing Shih, Wei-Jen Hsu
Abstract:
Recently, the sale of electric vehicles (EVs) has increased dramatically due to maturing technology and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs due to their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study forecasts the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure on purchasing vehicles, with five product attributes selected for both EVs and internal combustion engine vehicles (ICEVs). A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate the market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities of the two vehicles are calculated for a respondent, and his/her choice is thus known. Once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers' choice. This study adopts the assumption of a learning curve to predict the future price of EVs. Based on the learning curve method and past price data of EVs, a regression model is established and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents using their part-worth utility functions. For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained with a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
Keywords: conjoint model, electric vehicle, learning curve, Monte Carlo simulation
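A minimal sketch of phase (3), with invented part-worth coefficients and attribute values (the study estimates these from survey data and a learning-curve regression): each Monte Carlo draw samples a 2030 EV price, computes every respondent's total utilities for the EV and the ICEV, and records the share choosing the EV, yielding a distribution of market shares rather than a single number.

```python
import numpy as np

rng = np.random.default_rng(3)
n_resp, n_draws = 300, 1000

# Hypothetical per-respondent part-worth coefficients:
# utility = EV intercept + beta_price * price + beta_range * driving range.
beta_price = rng.normal(-2.0, 0.5, n_resp)      # per price unit (assumed)
beta_range = rng.normal(0.003, 0.001, n_resp)   # per km of range (assumed)
ev_intercept = rng.normal(0.2, 0.8, n_resp)     # brand/technology preference

icev_price, icev_range = 0.9, 700               # fixed ICEV attributes (assumed)
ev_range = 500                                  # forecasted EV range (assumed)

shares = []
for _ in range(n_draws):
    ev_price = rng.lognormal(np.log(1.1), 0.15)  # one random 2030 EV price draw
    u_ev = ev_intercept + beta_price * ev_price + beta_range * ev_range
    u_icev = beta_price * icev_price + beta_range * icev_range
    shares.append(np.mean(u_ev > u_icev))        # first-choice (highest utility) rule

shares = np.array(shares)
print(f"EV share: mean {shares.mean():.2%}, "
      f"90% interval [{np.percentile(shares, 5):.2%}, {np.percentile(shares, 95):.2%}]")
```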
Procedia PDF Downloads 67
2034 Analysis of Exponential Distribution under Step Stress Partially Accelerated Life Testing Plan Using Adaptive Type-I Hybrid Progressive Censoring Schemes with Competing Risks Data
Authors: Ahmadur Rahman, Showkat Ahmad Lone, Ariful Islam
Abstract:
In this article, we estimate the parameters of the failure times of units based on the adaptive type-I progressive hybrid censoring sampling technique under step-stress partially accelerated life tests with competing risks. The failure times of the units are assumed to follow an exponential distribution. The maximum likelihood estimation technique is used to estimate the unknown parameters of the distribution and the tampered coefficient. Confidence intervals are also obtained for the parameters. A simulation study is performed using the Monte Carlo simulation method to check the validity of the model and its assumptions.
Keywords: adaptive type-I hybrid progressive censoring, competing risks, exponential distribution, simulation, step-stress partially accelerated life tests
Procedia PDF Downloads 342
2033 Formulating the Stochastic Finite Elements for Free Vibration Analysis of Plates with Variable Elastic Modulus
Authors: Mojtaba Aghamiri Esfahani, Mohammad Karkon, Seyed Majid Hosseini Nezhad, Reza Hosseini-Ara
Abstract:
In this study, the effect of uncertainty in the elastic modulus of a plate on its free vibration response is investigated. For this purpose, the elastic modulus of the plate is modeled as a stochastic variable with a normal distribution, and the distance autocorrelation function is used for the stochastic field. Then, by applying the finite element method and Monte Carlo simulation, stochastic finite element relations are derived. Finally, through a numerical test, the effect of uncertainty in the elastic modulus on the free vibration response of a plate is studied. The results show that uncertainty in the elastic modulus of the plate does not play an important role in the free vibration response.
Keywords: stochastic finite elements, plate bending, free vibration, Monte Carlo, Neumann expansion method
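A minimal sketch of the Monte Carlo step, assuming a simply supported rectangular plate whose fundamental frequency follows the classical thin-plate formula omega = pi^2 (1/a^2 + 1/b^2) sqrt(D / (rho h)) with D = E h^3 / (12 (1 - nu^2)); the dimensions and the 5% coefficient of variation of E are illustrative. Because omega scales with sqrt(E), the output coefficient of variation is roughly half the input one, consistent with the modest effect reported above.

```python
import numpy as np

rng = np.random.default_rng(4)
a, b, h = 1.0, 1.0, 0.01            # plate dimensions and thickness (m) - assumed
nu, rho = 0.3, 7850.0               # Poisson ratio, density (kg/m^3) - steel, assumed
E_mean, E_cov, n_samples = 210e9, 0.05, 20_000

E = rng.normal(E_mean, E_cov * E_mean, n_samples)     # stochastic elastic modulus
D = E * h**3 / (12.0 * (1.0 - nu**2))                 # flexural rigidity per sample
# Fundamental frequency of a simply supported plate (mode m = n = 1).
omega = np.pi**2 * (1.0 / a**2 + 1.0 / b**2) * np.sqrt(D / (rho * h))

print(f"mean fundamental frequency: {omega.mean():.1f} rad/s")
print(f"output CoV: {omega.std() / omega.mean():.3%}  (about half the 5% input CoV)")
```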
Procedia PDF Downloads 393
2032 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets
Authors: Ece Cigdem Mutlu, Burak Alakent
Abstract:
Maintaining the quality of manufactured products at a desired level depends on the stability of process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location, respectively, under the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups in the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, robust estimators are required against contaminations that may exist in Phase I. In the current study, we present a simple approach to construct robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function in the estimation of the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and the M-estimator of location with Huber and logistic psi-functions in the estimation of the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of the Xbar charts constructed from them are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that robust estimators yield parameter estimates with higher efficiency against all types of contaminations, and Xbar charts constructed using robust estimators have higher power in detecting disturbances compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
Keywords: average run length, M-estimators, quality control, robust estimators
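A minimal sketch of one of the proposed estimator combinations, assuming plain 3-sigma limits: the process location is estimated with the Hodges-Lehmann estimator and the dispersion with the average distance to the median, rescaled by sqrt(2/pi) to estimate sigma under normality. The subgroup sizes, contamination, and chart constant are illustrative, not the paper's tuned values.

```python
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    """Median of the pairwise Walsh averages of a subgroup."""
    return np.median([(u + v) / 2.0 for u, v in combinations(x, 2)])

def avg_distance_to_median(x):
    return np.mean(np.abs(x - np.median(x)))

rng = np.random.default_rng(11)
phase1 = rng.normal(10.0, 1.0, (25, 5))          # 25 rational subgroups of size 5
phase1[3, 0] += 8.0                              # an occasional Phase I outlier

n = phase1.shape[1]
center = np.mean([hodges_lehmann(g) for g in phase1])
# For normal data E|X - median| = sigma * sqrt(2/pi), so rescale ADM to sigma.
sigma_hat = np.mean([avg_distance_to_median(g) for g in phase1]) / np.sqrt(2.0 / np.pi)
ucl = center + 3.0 * sigma_hat / np.sqrt(n)      # robust Phase II Xbar limits
lcl = center - 3.0 * sigma_hat / np.sqrt(n)
print(f"center {center:.3f}, limits [{lcl:.3f}, {ucl:.3f}]")
```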
Procedia PDF Downloads 190
2031 Monte Carlo and Biophysics Analysis in a Criminal Trial
Authors: Luca Indovina, Carmela Coppola, Carlo Altucci, Riccardo Barberi, Rocco Romano
Abstract:
In this paper, a real court case held in Italy at the Court of Nola is considered, in which a correct physical description, conducted with both Monte Carlo and biophysical analyses, would have been sufficient to arrive at conclusions confirmed by documentary evidence. This is an example of how forensic physics can be useful in confirming documentary evidence in order to reach hardly questionable conclusions. This was a libel trial in which the defendant, Mr. DS (Defendant for Slander), had falsely accused one of his neighbors, Mr. OP (Offended Person), of having caused him some damages. The damages would have been caused by an external plaster piece that would have detached from the neighbor's property and hit Mr. DS while he was in his garden, more than a meter away from the facade of the building from which the plaster piece would have detached. In the trial, Mr. DS claimed to have suffered a scratch on his forehead, but he never showed the plaster that had hit him, nor was he able to tell from where the plaster would have arrived. Furthermore, Mr. DS presented a medical certificate with a diagnosis of contusion of the cerebral cortex. On the contrary, the images of Mr. OP's security cameras do not show any movement in the garden of Mr. DS in a long interval of time (about 2 hours) around the time of the alleged accident, nor do they show any people entering or leaving the house of Mr. DS in the same interval. Biophysical analysis shows that both the diagnosis in the medical certificate and the wound declared by the defendant, already in conflict with each other, are not compatible with the fall of external plaster pieces too small to be found. The wind was at level 1 of the Beaufort scale, that is, unable to raise even dust (which requires level 4 of the Beaufort scale). Therefore, the motion of the plaster pieces can be described as projectile motion, whereas collisions with the building cornice can be treated using Newton's law of restitution. Numerous numerical Monte Carlo simulations show that the pieces of plaster could not have reached even the garden of Mr. DS, let alone a distance of over 1.30 meters. These results agree with the documentary evidence (the images of Mr. OP's security cameras) that Mr. DS could not have been hit by plaster pieces coming from Mr. OP's property.
Keywords: biophysics analysis, Monte Carlo simulations, Newton's law of restitution, projectile motion
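A minimal sketch of the reasoning, with invented geometry and parameter ranges (the case used the building's actual measurements): fragments fall from the facade, bounce on a cornice with a random coefficient of restitution, and continue as projectiles in still air, and the Monte Carlo estimates the maximum horizontal reach.

```python
import numpy as np

rng = np.random.default_rng(5)
g, n_trials = 9.81, 100_000

h_facade = rng.uniform(4.0, 8.0, n_trials)   # detachment height above ground (m)
h_cornice = 1.0                              # cornice height above ground (m)
v0 = rng.uniform(0.0, 0.3, n_trials)         # initial horizontal speed (m/s)
e = rng.uniform(0.2, 0.6, n_trials)          # coefficient of restitution

# First fall: facade to cornice (projectile motion, still air).
t1 = np.sqrt(2.0 * (h_facade - h_cornice) / g)
v_impact = g * t1
# Bounce: vertical speed reversed and scaled by e; horizontal speed kept.
v_up = e * v_impact
# Second flight: from the cornice up, then down to the ground.
t2 = (v_up + np.sqrt(v_up**2 + 2.0 * g * h_cornice)) / g
reach = v0 * (t1 + t2)                       # total horizontal distance travelled

print(f"maximum horizontal reach: {reach.max():.2f} m")
print(f"P(reach > 1.30 m): {np.mean(reach > 1.30):.5f}")
```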
Procedia PDF Downloads 129
2030 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach
Authors: Utkarsh A. Mishra, Ankit Bansal
Abstract:
At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of cases with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the solution. Recently, solutions of complicated mathematical problems with statistical methods based on the randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple, yet powerful technique to solve radiative transfer problems in complicated geometries with arbitrary participating media. The method, on the one hand, increases the accuracy of estimation, and on the other hand, increases the computational cost. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The histories of some randomly sampled photon bundles were recorded to train an artificial neural network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help in further reduction of the computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the problem can be fully represented to the ANN model. Better results can be achieved in this unexplored domain.
Keywords: radiative heat transfer, Monte Carlo method, pseudo-random numbers, low discrepancy sequences, artificial neural networks
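A minimal sketch contrasting a pseudo-random Monte Carlo estimator with a scrambled Sobol Quasi-Monte Carlo one on a toy slab-attenuation integral; the integrand is an illustrative stand-in for the photon-bundle estimator, not the paper's spectral model. SciPy's scipy.stats.qmc module supplies the low-discrepancy sequence.

```python
import numpy as np
from scipy.stats import qmc

def transmitted(u, tau=2.0):
    """Toy estimator: photon survival over optical depth tau * u."""
    return np.exp(-tau * u)

n, reps = 1024, 50           # n is a power of two, as Sobol sampling prefers
rng = np.random.default_rng(6)

pmc = [transmitted(rng.random(n)).mean() for _ in range(reps)]
qmc_est = [transmitted(qmc.Sobol(d=1, scramble=True, seed=s).random(n).ravel()).mean()
           for s in range(reps)]

exact = (1.0 - np.exp(-2.0)) / 2.0       # analytic value of the toy integral
print(f"exact      : {exact:.6f}")
print(f"pseudo-MC  : mean {np.mean(pmc):.6f}, std {np.std(pmc):.2e}")
print(f"Sobol QMC  : mean {np.mean(qmc_est):.6f}, std {np.std(qmc_est):.2e}")
```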
Procedia PDF Downloads 223
2029 Spatiotemporal Neural Network for Video-Based Pose Estimation
Authors: Bin Ji, Kai Xu, Shunyu Yao, Jingjing Liu, Ye Pan
Abstract:
Human pose estimation is a popular research area in computer vision due to its important applications in human-machine interfaces. In recent years, 2D human pose estimation based on convolutional neural networks has made great progress. However, in more and more practical applications, people often need to deal with video-based tasks, so it is natural to consider how to combine spatial and temporal information to achieve a balance between computing cost and accuracy. To address this issue, this study proposes a new spatiotemporal model, namely the Spatiotemporal Net (STNet), to combine temporal and spatial information more rationally. As a result, the predicted keypoint heatmap is potentially more accurate and spatially more precise. While ensuring recognition accuracy, the algorithm deals with the spatiotemporal series in a decoupled way, which greatly reduces the computation of the model and thus its resource consumption. This study demonstrates the effectiveness of our network on the Penn Action Dataset, and the results indicate superior performance of our network over existing methods.
Keywords: convolutional long short-term memory, deep learning, human pose estimation, spatiotemporal series
Procedia PDF Downloads 147
2028 Tracing Sources of Sediment in an Arid River, Southern Iran
Authors: Hesam Gholami
Abstract:
Elevated suspended sediment loads in riverine systems, resulting from accelerated erosion due to human activities, are a serious threat to the sustainable management of watersheds and the ecosystem services therein worldwide. Therefore, mitigation of the deleterious effects of sediment as a distributed or non-point pollution source in catchments requires reliable provenance information. Sediment tracing, or sediment fingerprinting, is a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, and is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources with different statistical techniques, and modifying mixing and unmixing models, have been introduced and refined by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts to explore the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups: Monte Carlo simulation, Bayesian approaches, and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework to estimate sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central, and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%), and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique for quantifying the sources of sediments in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran
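A minimal sketch of GLUE-style unmixing for three spatial sources, with invented tracer concentrations: candidate source contributions are sampled from a Dirichlet prior, scored by how well the mixed signature reproduces the sediment sample, and the behavioural candidates are retained to form contribution ranges like those reported above.

```python
import numpy as np

rng = np.random.default_rng(12)

# Mean tracer concentrations (rows: west, central, east sub-basins; invented).
sources = np.array([[12.0, 3.1, 40.0],
                    [18.0, 2.2, 55.0],
                    [25.0, 4.0, 35.0]])
sediment = np.array([21.5, 3.4, 39.5])            # target sediment signature

candidates = rng.dirichlet(np.ones(3), size=100_000)   # contribution proposals
mixed = candidates @ sources                           # predicted mixture signatures
# Likelihood measure: sum of squared relative errors against the sample.
sse = (((mixed - sediment) / sediment) ** 2).sum(axis=1)
behavioural = candidates[sse < np.quantile(sse, 0.01)] # keep the best 1%

mean_contrib = behavioural.mean(axis=0)
lo, hi = np.percentile(behavioural, [5, 95], axis=0)
for name, m, l, h in zip(["west", "central", "east"], mean_contrib, lo, hi):
    print(f"{name:7s}: mean {m:.2f}, 90% range [{l:.2f}, {h:.2f}]")
```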
Procedia PDF Downloads 74
2027 The Ability of Forecasting the Term Structure of Interest Rates Based on Nelson-Siegel and Svensson Model
Authors: Tea Poklepović, Zdravka Aljinović, Branka Marasović
Abstract:
Due to the importance of the yield curve and its estimation, it is essential to have valid methods for yield curve forecasting in cases where there are scarce issues of securities and/or weak trade on a secondary market. Therefore in this paper, after the estimation of weekly yield curves on the Croatian financial market from October 2011 to August 2012 using the Nelson-Siegel and Svensson models, yield curves are forecasted using a vector autoregressive model and neural networks. In general, it can be concluded that both forecasting methods have good prediction ability, where forecasting of yield curves based on the Nelson-Siegel estimation model gives better results, in the sense of a lower mean squared error, than forecasting based on the Svensson model. In this case, neural networks also provide slightly better results. Finally, it can be concluded that the most appropriate way of yield curve prediction is neural networks using the Nelson-Siegel estimation of yield curves.
Keywords: Nelson-Siegel model, neural networks, Svensson model, vector autoregressive model, yield curve
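A minimal sketch of the Nelson-Siegel estimation step, fitting the four-parameter curve y(tau) = b0 + b1 (1 - e^(-tau/lam))/(tau/lam) + b2 [(1 - e^(-tau/lam))/(tau/lam) - e^(-tau/lam)] to observed yields by nonlinear least squares; the maturities and yields below are invented, whereas the study fits weekly Croatian market data.

```python
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, b0, b1, b2, lam):
    """Nelson-Siegel yield at maturity tau (level b0, slope b1, curvature b2)."""
    x = tau / lam
    f1 = (1.0 - np.exp(-x)) / x
    return b0 + b1 * f1 + b2 * (f1 - np.exp(-x))

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10])         # years (assumed)
yields = np.array([3.1, 3.3, 3.6, 4.0, 4.3, 4.7, 4.9, 5.0])   # percent (assumed)

params, _ = curve_fit(nelson_siegel, maturities, yields,
                      p0=[5.0, -2.0, 0.0, 1.5],
                      bounds=([-10, -10, -10, 0.05], [15, 10, 10, 30]))
b0, b1, b2, lam = params
print(f"b0={b0:.3f}, b1={b1:.3f}, b2={b2:.3f}, lambda={lam:.3f}")
print(f"fitted 4-year yield: {nelson_siegel(4.0, *params):.3f}%")
```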
Procedia PDF Downloads 331
2026 Wind Fragility for Honeycomb Roof Cladding Panels Using Screw Pull-Out Capacity
Authors: Viriyavudh Sim, Woo Young Jung
Abstract:
The failure of roof cladding mostly occurs due to the failure of the connection between cladding and purlins, i.e., the pull-out of the connecting screw when the pull-out load (e.g., from a typhoon) is higher than the resistance of the connection screw. As typhoon disasters in Korea are constantly on the rise, probabilistic risk assessment (PRA) has become a vital tool to evaluate the performance of civil structures. In this study, we attempted to determine the fragility of roof cladding with screw connections. An experimental study was performed to evaluate the pull-out resistance of screw joints between honeycomb panels and back frames. Subsequently, by means of the Monte Carlo simulation method, the probability of failure for this type of roof cladding was determined. The results show that the failure of roof cladding depends on the panel's location on the roof; for example, the edge-most panel has the highest probability of failure.
Keywords: Monte Carlo simulation, roof cladding, screw pull-out strength, wind fragility
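A minimal sketch of the fragility computation: the probability that a random wind-induced pull-out load exceeds the screw connection's random pull-out resistance, estimated by Monte Carlo at several wind speeds. The load model, tributary area, and lognormal resistance parameters are illustrative assumptions, not the experimental values.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
area = 0.5                                    # tributary area per screw (m^2, assumed)

def failure_probability(wind_speed):
    """P(pull-out load > screw resistance) at a given wind speed (m/s)."""
    cp = rng.normal(1.2, 0.15, n)             # random pressure coefficient (assumed)
    load = 0.5 * 1.225 * cp * wind_speed**2 * area        # dynamic pressure x area (N)
    resistance = rng.lognormal(np.log(900.0), 0.2, n)     # pull-out strength (N, assumed)
    return np.mean(load > resistance)

for v in (30, 40, 50, 60):
    print(f"wind speed {v:2d} m/s -> P(failure) = {failure_probability(v):.4f}")
```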
Procedia PDF Downloads 252
2025 Monte Carlo Simulation of Pion Particles
Authors: Reza Reiazi
Abstract:
Attempts to verify Geant4 hadronic physics for transporting an antiproton beam using the standard physics lists have not reached reasonable results because of the lack of reliable cross-section data and of reliable models to predict the final states of annihilated particles. Since most of the antiproton annihilation energy is carried away by recoiling nuclear fragments, which result from the interactions of pions with surrounding nucleons, it should be investigated whether the toolkit is verified for pions. Geant4 version 9.4.6.p01 was used. The dose calculation was done for 700 MeV pions hitting a water tank, applying the standard physics lists. We conclude that the depth dose of a pion-minus beam predicted with the Geant4 standard physics lists is not the same for all investigated models. Since the nuclear fragments deposit their energy within a small distance, they are the most important source of dose deposition at the annihilation vertex of antiproton beams.
Keywords: Monte Carlo, pion, simulation, antiproton beam
Procedia PDF Downloads 430
2024 Applicability of Cameriere’s Age Estimation Method in a Sample of Turkish Adults
Authors: Hatice Boyacioglu, Nursel Akkaya, Humeyra Ozge Yilanci, Hilmi Kansu, Nihal Avcu
Abstract:
A strong relationship between the reduction in the size of the pulp cavity and increasing age has been reported in the literature. This relationship can be utilized to estimate the age of an individual by measuring the pulp cavity size on dental radiographs as a non-destructive method. The purpose of this study is to develop a population-specific regression model for age estimation in a sample of Turkish adults by applying Cameriere's method to panoramic radiographs. The sample consisted of 100 panoramic radiographs of Turkish patients (40 men, 60 women) aged between 20 and 70 years. Pulp and tooth area ratios (AR) of the maxillary canines were measured by two maxillofacial radiologists, and the results were subjected to regression analysis. There were no statistically significant intra-observer or inter-observer differences. The correlation coefficient between age and the AR of the maxillary canines was -0.71, and the following regression equation was derived: Estimated Age = 77.365 - (351.193 × AR). The mean prediction error was 4 years, which is within acceptable error limits for age estimation. This shows that the pulp/tooth area ratio is a useful variable for assessing age with reasonable accuracy. Based on the results of this research, it was concluded that Cameriere's method is suitable for dental age estimation and can be used for forensic procedures in Turkish adults.
Keywords: age estimation by teeth, forensic dentistry, panoramic radiograph, Cameriere's method
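A minimal sketch applying the regression reported above and refitting such a population-specific model by ordinary least squares; the (AR, age) pairs below are invented, standing in for the 100 measured radiographs.

```python
import numpy as np

def estimated_age(ar):
    """Regression reported in the abstract: Estimated Age = 77.365 - 351.193 * AR."""
    return 77.365 - 351.193 * ar

print(f"AR = 0.10 -> estimated age {estimated_age(0.10):.1f} years")

# Refitting a population-specific model from pulp/tooth area ratio measurements
# (the pairs below are invented for illustration).
ar = np.array([0.16, 0.13, 0.11, 0.09, 0.07, 0.05])
age = np.array([22.0, 30.0, 38.0, 45.0, 53.0, 61.0])
slope, intercept = np.polyfit(ar, age, 1)
print(f"refit: Age = {intercept:.3f} + ({slope:.3f}) * AR")
```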
Procedia PDF Downloads 449
2023 Identification of Wiener Model Using Iterative Schemes
Authors: Vikram Saini, Lillie Dewan
Abstract:
This paper presents iterative schemes based on least squares, hierarchical least squares, and the stochastic approximation gradient method for the identification of the Wiener model with a parametric structure. A gradient method based on stochastic approximation is presented for the parameter estimation of the Wiener model under noise conditions. Simulation results are presented for the Wiener model structure with different static nonlinear elements in the presence of colored noise to show a comparative analysis of the iterative methods. The stochastic gradient method shows improvement in estimation performance and provides fast convergence of the parameter estimates.
Keywords: hard non-linearity, least squares, parameter estimation, stochastic approximation gradient, Wiener model
Procedia PDF Downloads 405
2022 GPS Refinement in Cities Using Statistical Approach
Authors: Ashwani Kumar
Abstract:
GPS plays an important role in everyday life for safe and convenient transportation. While pedestrians use handheld devices to know their position in a city, vehicles in intelligent transport systems use relatively sophisticated GPS receivers for estimating their current position. However, in urban areas where the GPS satellites are occluded by tall buildings and trees, and where GPS signals are reflected by nearby vehicles, GPS position estimation becomes poor. In this work, exhaustive GPS data were collected at a single point in an urban area at different times of day and under dynamic environmental conditions. The data are analyzed, and statistical refinement methods are used to obtain an optimal position estimate among all the measured positions. The results obtained are compared with publicly available datasets, and the position estimation refinement results are promising.
Keywords: global positioning system, statistical approach, intelligent transport systems, least squares estimation
Procedia PDF Downloads 286
2021 An Estimation Process for Progress Rate Based on Labor-Quantity in Republic of Korea
Authors: Dong-Ho Kim, Zheng-Xun Jin, Yong-Woon Cha, Su-Sang Lim, Sang-Won Han, Chang-Taek Hyun
Abstract:
As construction is a labor-intensive industry, it is important to identify and manage labor quantities for accurate progress management of a construction project. However, progress management that focuses on construction cost calculated from materials rather than labor quantities has led to a gap between the reported cost and the actual progress of construction. In addition, since it is not easy to predict labor quantities accurately, there has been limited research into labor-quantity-based progress rate estimation. Accordingly, this study proposes a process for labor-quantity-based progress rate estimation using a standard of estimate to predict an accurate progress rate for construction projects in the Republic of Korea. It is expected that the proposed process will help to identify a progress rate closer to that of actual site management and to adjust the workforce in each construction type, thereby contributing to improved construction efficiency.
Keywords: labor based, labor cost, progress management, progress rate, progress payment
Procedia PDF Downloads 343
2020 Adjusted LOLE and EENS Indices for the Consideration of Load Excess Transfer in Power Systems Adequacy Studies
Authors: François Vallée, Jean-François Toubeau, Zacharie De Grève, Jacques Lobry
Abstract:
When evaluating the capacity of a generation park to cover the load in transmission systems, the traditional Loss of Load Expectation (LOLE) and Expected Energy Not Served (EENS) indices can be used. While those indices allow computing the annual duration and severity of load non-covering situations, they do not take into account the fact that the load excess is generally shifted from one shortage state (hour or quarter of an hour) to the following one. In this paper, a sequential Monte Carlo framework is introduced in order to compute adjusted LOLE and EENS indices. In practice, those adapted indices make it possible to consider the effect of load excess transfer on the global adequacy of a generation park, thus providing a more accurate evaluation of this quantity.
Keywords: expected energy not served, loss of load expectation, Monte Carlo simulation, reliability, wind generation
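A minimal sketch of the adjusted indices, assuming a toy generation park and load profile: a sequential hourly Monte Carlo in which any uncovered load is carried over to the next hour before LOLE and EENS are accumulated. Unit availabilities are drawn independently each hour, a simplification of a full state-duration model.

```python
import numpy as np

rng = np.random.default_rng(8)
n_units, unit_mw, forced_outage = 12, 100.0, 0.08   # toy generation park
hours, years = 8760, 100
load = 800.0 + 150.0 * np.sin(2 * np.pi * np.arange(hours) / 24)  # toy profile (MW)

lole = eens = 0.0
for _ in range(years):
    # Available capacity per hour: each unit is up with probability 1 - FOR.
    available = (rng.random((hours, n_units)) > forced_outage).sum(axis=1) * unit_mw
    carry = 0.0
    for h in range(hours):
        deficit = load[h] + carry - available[h]    # includes the shifted excess
        if deficit > 0:
            lole += 1.0                             # one more hour of shortfall
            eens += deficit
            carry = deficit                         # transfer the excess onward
        else:
            carry = 0.0

print(f"adjusted LOLE: {lole / years:.2f} h/yr, adjusted EENS: {eens / years:.1f} MWh/yr")
```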
Procedia PDF Downloads 409
2019 Recent Advancement in Fetal Electrocardiogram Extraction
Authors: Savita, Anurag Sharma, Harsukhpreet Singh
Abstract:
Fetal electrocardiography (fECG) is a widely used technique to assess fetal well-being, identify changes that might be associated with problems during pregnancy, and evaluate the health and condition of the fetus. Various techniques have been employed to extract the fECG from the abdominal signal. This paper describes a facile approach for the estimation of the fECG known as the adaptive comb filter (ACF). The ACF can adjust itself to temporal variations in the fundamental frequency, which makes it suitable for the estimation of the quasi-periodic ECG signal.
Keywords: aECG, ACF, fECG, mECG
Procedia PDF Downloads 407
2018 Asymptotic Confidence Intervals for the Difference of Coefficients of Variation in Gamma Distributions
Authors: Patarawan Sangnawakij, Sa-Aat Niwitpong
Abstract:
In this paper, we propose two new confidence intervals, CIw and CIs, for the difference of coefficients of variation of two independent gamma distributions. These proposed confidence intervals use the closed-form method of variance estimation presented by Donner and Zou (2010), based on the concepts of the Wald and score confidence intervals, respectively. A Monte Carlo simulation study is used to evaluate their performance in terms of coverage probability and expected length. The results indicate that the coverage probabilities of both new confidence intervals satisfy the nominal coverage and are close to the nominal level 0.95 in various situations; in particular, the Wald-based confidence interval is better when sample sizes are small. Moreover, the expected lengths of the proposed confidence intervals are nearly equal when sample sizes are moderate to large. Therefore, in this study, the confidence interval for the difference of coefficients of variation based on the Wald concept is preferable to the other one.
Keywords: confidence interval, score's interval, Wald's interval, coefficient of variation, gamma distribution, simulation study
Procedia PDF Downloads 426
2017 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton
Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani
Abstract:
Extreme data in an observation can occur due to unusual circumstances. Such data can provide important information that other data cannot, so their existence needs to be investigated further. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the extreme value distribution; here it is the Gumbel distribution with two parameters. The exact parameter estimates of the Gumbel distribution by the maximum likelihood (ML) method are difficult to determine, so a numerical approach is necessary. The purpose of this study was to determine the parameter estimates of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has the form of a double exponential function. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the parameter value changes in each iteration; it is then modified with the addition of a step length to guarantee convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is further modified by updating an approximation to the derivatives on each iteration. The parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by finding the parameter values that maximize the likelihood function; this requires the gradient vector and the Hessian matrix. This research combines theory and application, drawing on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the parameter estimates of the Gumbel distribution. The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The estimates indicate that the intensity of the high rainfall that occurred in Purworejo District has decreased, and its range has also decreased.
Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden Fletcher Goldfarb Shanno (BFGS) quasi-Newton
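A minimal sketch of the estimation step, assuming synthetic Gumbel data in place of the Purworejo rainfall maxima: the negative log-likelihood, sum of [ln(beta) + z + e^(-z)] with z = (x - mu)/beta, is minimised with SciPy's BFGS implementation, optimising ln(beta) so the scale stays positive.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

rng = np.random.default_rng(9)
data = gumbel_r.rvs(loc=50.0, scale=12.0, size=200, random_state=rng)  # synthetic maxima

def neg_log_likelihood(params):
    """-log L for Gumbel(mu, beta); beta parameterised as exp(log_beta) > 0."""
    mu, log_beta = params
    beta = np.exp(log_beta)
    z = (data - mu) / beta
    return np.sum(np.log(beta) + z + np.exp(-z))

res = minimize(neg_log_likelihood,
               x0=[data.mean(), np.log(data.std())],   # moment-based starting point
               method="BFGS")
mu_hat, beta_hat = res.x[0], np.exp(res.x[1])
print(f"mu_hat = {mu_hat:.2f}, beta_hat = {beta_hat:.2f}")  # should be near 50 and 12
```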
Procedia PDF Downloads 323
2016 Non-Invasive Imaging of Human Tissue Using NIR Light
Authors: Ashwani Kumar
Abstract:
The use of NIR light for imaging biological tissue and quantifying its optical properties is a good non-invasive alternative to other methods. Optical tomography involves two steps. One is the forward problem and the other is the reconstruction problem. The forward problem consists of finding the measurements of light transmitted through the tissue from source to detector, given the spatial distribution of absorption and scattering properties. The second step is the reconstruction problem. In X-ray tomography, there are standard methods for reconstruction, such as the filtered back projection method or algebraic reconstruction methods, but these methods cannot be applied as such in optical tomography due to the highly scattering nature of biological tissue. A hybrid reconstruction algorithm has been implemented in this work which takes into account the highly scattered paths taken by photons while back-projecting the forward data obtained during Monte Carlo simulation. The reconstructed image suffers from blurring due to the point spread function.
Keywords: NIR light, tissue, blurring, Monte Carlo simulation
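A minimal sketch of the forward problem: an isotropic-scattering photon random walk through a homogeneous 1-D slab, with absorption handled by weight attenuation; the optical coefficients and slab thickness are illustrative, not tissue-calibrated values.

```python
import numpy as np

rng = np.random.default_rng(10)
mu_a, mu_s, thickness = 0.1, 5.0, 1.0        # absorption, scattering (1/mm); slab (mm)
mu_t = mu_a + mu_s
albedo = mu_s / mu_t
n_photons, transmitted = 10_000, 0.0

for _ in range(n_photons):
    z, uz, weight = 0.0, 1.0, 1.0            # launch at the surface, heading inward
    while weight > 1e-3:
        z += uz * rng.exponential(1.0 / mu_t)    # free path to the next interaction
        if z >= thickness:
            transmitted += weight            # escaped through the far side
            break
        if z <= 0.0:
            break                            # back-scattered out of the slab
        weight *= albedo                     # partial absorption at the interaction
        uz = rng.uniform(-1.0, 1.0)          # isotropic scatter: new direction cosine

print(f"diffuse transmittance: {transmitted / n_photons:.4f}")
```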
Procedia PDF Downloads 492
2015 A Transformer-Based Approach for Multi-Human 3D Pose Estimation Using Color and Depth Images
Authors: Qiang Wang, Hongyang Yu
Abstract:
Multi-human 3D pose estimation is a challenging task in computer vision which aims to recover the 3D joint locations of multiple people from multi-view images. In contrast to traditional methods, which typically use only color (RGB) images as input, our approach utilizes both the color and depth (D) information contained in RGB-D images. We also employ a transformer-based model as the backbone of our approach, which is able to capture long-range dependencies and has been shown to perform well on various sequence modeling tasks. Our method is trained and tested on the Carnegie Mellon University (CMU) Panoptic dataset, which contains a diverse set of indoor and outdoor scenes with multiple people in varying poses and clothing. We evaluate the performance of our model on the standard 3D pose estimation metric of mean per-joint position error (MPJPE). Our results show that the transformer-based approach outperforms traditional methods and achieves competitive results on the CMU Panoptic dataset. We also perform an ablation study to understand the impact of different design choices on the overall performance of the model. In summary, our work demonstrates the effectiveness of using a transformer-based approach with RGB-D images for multi-human 3D pose estimation and has potential applications in real-world scenarios such as human-computer interaction, robotics, and augmented reality.
Keywords: multi-human 3D pose estimation, RGB-D images, transformer, 3D joint locations
Procedia PDF Downloads 78