Search results for: (conditional) probability distributions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 347

287 Identification of Outliers in Flood Frequency Analysis: Comparison of Original and Multiple Grubbs-Beck Test

Authors: Ayesha S. Rahman, Khaled Haddad, Ataur Rahman

Abstract:

At-site flood frequency analysis is used to estimate flood quantiles when the at-site record length is reasonably long. In Australia, the FLIKE software has been introduced for at-site flood frequency analysis. The advantage of FLIKE is that, for a given application, the user can compare a number of the most commonly adopted probability distributions and parameter estimation methods relatively quickly using a Windows interface. The new version of FLIKE incorporates the multiple Grubbs and Beck test, which can identify multiple potentially influential low flows. This paper presents a case study of six catchments in eastern Australia which compares two outlier identification tests (the original Grubbs and Beck test and the multiple Grubbs and Beck test) and two commonly applied probability distributions (Generalized Extreme Value (GEV) and Log Pearson Type 3 (LP3)) using the FLIKE software. It has been found that the multiple Grubbs and Beck test, when used with the LP3 distribution, provides more accurate flood quantile estimates than the LP3 distribution used with the original Grubbs and Beck test. Between these two methods, the differences in flood quantile estimates have been found to be up to 61% for the six study catchments. It has also been found that the GEV distribution (with L moments) and the LP3 distribution with the multiple Grubbs and Beck test provide quite similar results in most cases; however, a difference of up to 38% has been noted in the flood quantile for an annual exceedance probability (AEP) of 1 in 100 for one catchment. This finding needs to be confirmed with a greater number of stations across other Australian states.
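
As context for the outlier screening compared above, a minimal sketch of an original (single) Grubbs-Beck style low-outlier screen in log space is given below. The 10%-significance critical-value approximation, the synthetic flow series and the function name are assumptions for illustration, not the FLIKE implementation.

```python
import numpy as np

def grubbs_beck_low_threshold(flows, k_n=None):
    """Low-outlier threshold for annual peak flows, screened in log10 space.

    Sketch of the single Grubbs-Beck test: flows below 10**(mean - K_N * std)
    are flagged as potentially influential low flows. The K_N approximation
    below (10% significance level) is an assumption for illustration.
    """
    q = np.log10(np.asarray(flows, dtype=float))
    n = len(q)
    if k_n is None:
        k_n = -0.9043 + 3.345 * np.sqrt(np.log10(n)) - 0.4046 * np.log10(n)
    threshold = 10 ** (q.mean() - k_n * q.std(ddof=1))
    outliers = [f for f in flows if f < threshold]
    return threshold, outliers

# Hypothetical 40-year annual maximum flow series (m^3/s)
rng = np.random.default_rng(1)
flows = np.exp(rng.normal(5.0, 0.8, size=40))
print(grubbs_beck_low_threshold(flows))
```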

Keywords: Floods, FLIKE, probability distributions, flood frequency, outlier.

286 Sliding Mode Control of Pitch-Rate of an F-16 Aircraft

Authors: Ekprasit Promtun, Sridhar Seshagiri

Abstract:

This paper considers the control of the longitudinal flight dynamics of an F-16 aircraft. The primary design objective is model-following of the pitch rate q, which is the preferred system for aircraft approach and landing. Regulation of the aircraft velocity V (or the Mach-hold autopilot) is also considered, but as a secondary objective. The problem is challenging because the system is nonlinear and non-affine in the input. A sliding mode controller is designed for the pitch rate that exploits the modal decomposition of the linearized dynamics into its short-period and phugoid approximations. The inherent robustness of the SMC design provides a convenient way to design controllers without gain scheduling, with a steady-state response comparable to that of a conventional polynomial-based gain-scheduled approach with integral control, but with improved transient performance. Integral action is introduced in the sliding mode design using the recently developed technique of "conditional integrators", and it is shown that robust regulation is achieved for asymptotically constant exogenous signals without degrading the transient response. Through extensive simulation on the nonlinear multiple-input multiple-output (MIMO) longitudinal model of the F-16 aircraft, it is shown that the conditional integrator design outperforms the one based on conventional linear control, without requiring any scheduling.
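
To make the conditional-integrator idea concrete, here is a minimal sketch for a scalar first-order plant rather than the F-16 longitudinal model; the plant, disturbance, gains and saturation level are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Plant: x_dot = a*x + b*u + d (constant disturbance d); track constant reference r.
a, b, d, r = -1.0, 1.0, 0.5, 1.0
k0, mu, beta = 1.0, 0.1, 5.0          # illustrative gains and boundary-layer width

def sat(z):
    return np.clip(z, -1.0, 1.0)

dt, T = 1e-3, 10.0
x, sigma = 0.0, 0.0
for _ in range(int(T / dt)):
    e = x - r
    s = e + k0 * sigma                 # sliding surface augmented with sigma
    u = -beta * sat(s / mu)            # continuous (saturated) sliding-mode control
    sigma += dt * (-k0 * sigma + mu * sat(s / mu))  # conditional integrator:
    x += dt * (a * x + b * u + d)      # it integrates the error only inside the boundary layer
print(f"steady-state tracking error ~ {x - r:.4f}")
```

Inside the boundary layer the controller behaves like a PI loop, so the constant disturbance is rejected; outside it the integrator stays bounded, which is what avoids windup-type transient degradation.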

Keywords: Sliding-mode Control, Integral Control, Model Following, F-16 Longitudinal Dynamics, Pitch-Rate Control.

285 Pragati Node Popularity (PNP) Approach to Identify Congestion Hot Spots in MPLS

Authors: E. Ramaraj, A. Padmapriya

Abstract:

In large Internet backbones, service providers typically have to explicitly manage the traffic flows in order to optimize the use of network resources. This process is often referred to as Traffic Engineering (TE). Common objectives of traffic engineering include balancing the traffic distribution across the network and avoiding congestion hot spots. Raj P H and SVK Raja designed a Bayesian network approach to identify congestion hot spots in MPLS. In this approach, a Conditional Probability Distribution (CPD) is specified for every node in the network, and the congestion hot spots are identified based on the CPD. The traffic can then be distributed so that no link in the network is either over-utilized or under-utilized. Although the Bayesian network approach has been implemented in operational networks, it has a number of well-known scaling issues. This paper proposes a new approach, which we call the Pragati (meaning Progress) Node Popularity (PNP) approach, to identify the congestion hot spots using the network topology alone. In the new Pragati Node Popularity approach, IP routing runs natively over the physical topology rather than depending on the CPD of each node as in the Bayesian network. We first illustrate our approach with a simple network and then present a formal analysis of the Pragati Node Popularity approach. Our PNP approach shows that, for any given network, it identifies exactly the same hot spots as the Bayesian approach with minimum effort. We further extend the result to a more generic one: it holds for any network topology, even when the network contains loops. A theoretical insight of our result is that the optimal routing is always shortest path routing with respect to some considerations of hot spots in the networks.
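
The PNP construction itself is not specified in the abstract; as a hedged illustration of topology-only hot-spot ranking, the sketch below scores the nodes of a small hypothetical backbone by degree and betweenness centrality with networkx. It shows the flavour of using the topology alone, not the authors' actual popularity measure.

```python
import networkx as nx

# Hypothetical MPLS backbone topology; edges only, no per-node CPDs.
G = nx.Graph([("A", "B"), ("B", "C"), ("B", "D"), ("C", "D"),
              ("D", "E"), ("E", "F"), ("C", "F")])

degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)   # share of shortest paths passing through a node

# Rank candidate congestion hot spots from topology alone.
ranked = sorted(G.nodes, key=lambda n: (betweenness[n], degree[n]), reverse=True)
for n in ranked:
    print(n, degree[n], round(betweenness[n], 3))
```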

Keywords: Conditional Probability Distribution, Congestion hotspots, Operational Networks, Traffic Engineering.

284 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous works on balanced panel data estimation within the context of fitting a PDRM for bank audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, as well as a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE) and root mean square error (RMSE) of the parameter estimates. Eighteen different models at different specified conditions were fitted, and the best-fitted model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, which establishes that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
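
A minimal sketch of the Monte Carlo assessment loop (variance, ABIAS, MSE and RMSE of an estimator under a linear heteroscedasticity function) is given below. The simple pooled slope estimator and the data-generating values are assumptions for illustration, not the paper's two-way error component design.

```python
import numpy as np

rng = np.random.default_rng(0)
beta_true, n, reps = 2.0, 200, 1000

estimates = np.empty(reps)
for r in range(reps):
    x = rng.uniform(1.0, 5.0, n)
    sigma = 0.5 + 0.8 * x                       # linear heteroscedasticity function
    y = beta_true * x + rng.normal(0.0, sigma)  # heteroscedastic errors
    estimates[r] = (x @ y) / (x @ x)            # pooled OLS slope (no intercept)

bias = estimates.mean() - beta_true
metrics = {
    "Variance": estimates.var(ddof=1),
    "ABIAS": abs(bias),
    "MSE": np.mean((estimates - beta_true) ** 2),
    "RMSE": np.sqrt(np.mean((estimates - beta_true) ** 2)),
}
print(metrics)
```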

Keywords: Audit fee, heteroscedasticity, Lagrange multiplier test, periodicity.

283 An Optimal Unsupervised Satellite Image Segmentation Approach Based on Pearson System and k-Means Clustering Algorithm Initialization

Authors: Ahmed Rekik, Mourad Zribi, Ahmed Ben Hamida, Mohamed Benjelloun

Abstract:

This paper presents an optimal and unsupervised satellite image segmentation approach based on the Pearson system and k-means clustering algorithm initialisation. The method can be considered original in that it utilises the k-means clustering algorithm for an optimal initialisation of the number of image classes on the one hand, and exploits the Pearson system for an optimal statistical distribution affectation of each considered class on the other. Satellite image exploitation requires the use of different approaches, especially those founded on the unsupervised statistical segmentation principle. Such approaches necessitate the definition of several parameters, such as the number of image classes, the estimation of class variables and generalised mixture distributions. The use of statistical image attributes gave convincing and promising results, provided there is an optimal initialisation step with an appropriate statistical distribution affectation. The Pearson system associated with a k-means clustering algorithm and the Stochastic Expectation-Maximization (SEM) algorithm can be adapted to such a problem. For each image class, the Pearson system attributes one distribution type according to different parameters, especially the skewness β1 and the kurtosis β2. The different adapted algorithms, the k-means clustering algorithm, the SEM algorithm and the Pearson system algorithm, are then applied to the satellite image segmentation problem. The efficiency of these combined algorithms was validated first with the Mean Quadratic Error (MQE) evaluation, and second by visual inspection along several comparisons of these unsupervised image segmentations.
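
As a sketch of the initialisation step described above, the snippet below clusters pixel grey levels with k-means and then computes the Pearson-system selection moments β1 (squared skewness) and β2 (kurtosis) per class. The synthetic "image" and the class count are assumptions; the full distribution-type selection and SEM refinement are not reproduced.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical grey-level image flattened into pixels: two latent classes.
pixels = np.concatenate([rng.normal(80, 10, 5000),
                         rng.gamma(9.0, 12.0, 5000)]).reshape(-1, 1)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)

for c in range(2):
    x = pixels[labels == c, 0]
    beta1 = skew(x) ** 2                 # Pearson-system selection moments:
    beta2 = kurtosis(x, fisher=False)    # (beta1, beta2) pick a distribution type per class
    print(f"class {c}: beta1={beta1:.3f}, beta2={beta2:.3f}")
```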

Keywords: Unsupervised classification, Pearson system, Satellite image, Segmentation.

282 Creating Maintenance Cost Model for University Buildings

Authors: AbdulLateef A. Olanrewaju, Arazi Idrus, Mohd F. Khamidi

Abstract:

Maintenance costs incurred on buildings differ. The difference can be a result of the type, function, age, health index, size, form, height, location and complexity of the building. These factors contribute to the difficulty of developing a deterministic maintenance cost model. This paper reports the preliminary findings on the creation of building maintenance cost distributions for universities in Malaysia. The study is triggered by the need to provide guidance on maintenance cost distributions for decision making. For this purpose, a questionnaire survey was conducted to investigate the distribution of maintenance costs in the universities. Altogether, responses were received from twenty universities, comprising both privately and publicly owned institutions. The research found that engineering services, roofing and finishes were the elements contributing the largest share of the maintenance costs. Furthermore, the study indicates the significance of the maintenance cost distribution as a decision-making tool for maintenance management.

Keywords: Performance matrix, university buildings, cost model, Malaysia.

281 Fault Tolerant (n, k)-Star Power Network Topology for Multi-Agent Communication in Automated Power Distribution Systems

Authors: Ning Gong, Michael Korostelev, Qiangguo Ren, Li Bai, Saroj Biswas, Frank Ferrese

Abstract:

This paper investigates the joint effect of the interconnected (n,k)-star network topology and Multi-Agent automated control on the restoration and reconfiguration of power systems. With the increasing development of Multi-Agent control technologies applied to power system reconfiguration in the presence of faulty components or nodes, fault tolerance is becoming an important challenge in the design of distributed power system topologies. Since the reconfiguration of a power system is performed by agent communication, the (n,k)-star interconnected network topology is studied and modeled in this paper to optimize the process of power reconfiguration. We discuss the recently proposed (n,k)-star topology and examine its properties and advantages as compared to traditional multi-bus power topologies. We design and simulate the topology model for distributed power system test cases. A related lemma based on the fault tolerance and conditional diagnosability properties is presented and proved both theoretically and practically. The conclusion is reached that the (n,k)-star topology model has measurable advantages compared to standard bus power systems while exhibiting fault tolerance properties in power restoration, as well as showing efficiency when applied to power system route discovery.
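
For reference, a small sketch constructing the (n,k)-star interconnection graph is shown below: vertices are k-permutations of n symbols, and edges either swap the first symbol with the i-th symbol or replace it with an unused symbol. It illustrates the topology only, not the Multi-Agent reconfiguration logic, and the chosen (n, k) values are arbitrary.

```python
from itertools import permutations
import networkx as nx

def nk_star(n, k):
    """Build the (n,k)-star graph; nodes are k-permutations of the symbols 1..n."""
    G = nx.Graph()
    symbols = range(1, n + 1)
    for p in permutations(symbols, k):
        # star edges: swap the first symbol with the i-th symbol (i = 2..k)
        for i in range(1, k):
            q = list(p)
            q[0], q[i] = q[i], q[0]
            G.add_edge(p, tuple(q))
        # residual edges: replace the first symbol with a symbol not in the permutation
        for s in set(symbols) - set(p):
            G.add_edge(p, (s,) + p[1:])
    return G

G = nk_star(4, 2)
print(G.number_of_nodes(), G.number_of_edges(), nx.node_connectivity(G))
```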

Keywords: (n, k)-star Topology, Fault Tolerance, Conditional Diagnosability, Multi-Agent System, Automated Power System.

280 Dempster-Shafer Evidence Theory for Image Segmentation: Application in Cells Images

Authors: S. Ben Chaabane, M. Sayadi, F. Fnaiech, E. Brassart

Abstract:

In this paper we propose a new knowledge model using Dempster-Shafer's evidence theory for image segmentation and fusion. The proposed method is composed essentially of two steps. First, mass distributions in Dempster-Shafer theory are obtained from the membership degrees of each pixel covering the three image components (R, G and B). Each membership degree is determined by applying Fuzzy C-Means (FCM) clustering to the gray levels of the three images. Second, the fusion process consists of defining three frames of discernment, which are associated with the three images to be fused, and then combining them to form a new frame of discernment. The strategy used to define mass distributions in the combined framework is discussed in detail. The proposed fusion method is illustrated in the context of image segmentation. Experimental investigations and comparative studies with previous methods are carried out, showing the robustness and superiority of the proposed method in terms of image segmentation.
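
A minimal sketch of the combination step, Dempster's rule for two mass functions over a common frame of discernment, is given below. The numerical masses are illustrative placeholders, not the values derived from the FCM memberships in the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass falling on the empty set
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Frame of discernment {c1, c2}; illustrative masses from two sources (e.g. two colour planes).
m_r = {frozenset({"c1"}): 0.6, frozenset({"c2"}): 0.1, frozenset({"c1", "c2"}): 0.3}
m_g = {frozenset({"c1"}): 0.5, frozenset({"c2"}): 0.3, frozenset({"c1", "c2"}): 0.2}
print(dempster_combine(m_r, m_g))
```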

Keywords: Fuzzy C-means, Color image, data fusion, Dempster-Shafer's evidence theory

279 Base Change for Fisher Metrics: Case of the q−Gaussian Inverse Distribution

Authors: Gabriel I. Loaiza O., Carlos A. Cadavid M., Juan C. Arango P.

Abstract:

It is known that the Riemannian manifold determined by the family of inverse Gaussian distributions endowed with the Fisher metric has negative constant curvature κ = −1/2 , as does the family of usual Gaussian distributions. In the present paper, firstly we arrive at this result by following a different path, much simpler than the previous ones. We first put the family in exponential form, thus endowing the family with a new set of parameters, or coordinates, θ1, θ2; then we determine the matrix of the Fisher metric in terms of these parameters; and finally we compute this matrix in the original parameters. Secondly, we define the Inverse q−Gaussian distribution family (q < 3), as the family obtained by replacing the usual exponential function by the Tsallis q−exponential function in the expression for the Inverse Gaussian distribution, and observe that it supports two possible geometries, the Fisher and the q−Fisher geometry. And finally, we apply our strategy to obtain results about the Fisher and q−Fisher geometry of the Inverse q−Gaussian distribution family, similar to the ones obtained in the case of the Inverse Gaussian distribution family.
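
For orientation, the standard inverse Gaussian density, its Fisher information matrix in the usual parameters (μ, λ), and the Tsallis q-exponential used to define the q-deformation are recalled below; the paper's exponential-family coordinates θ1, θ2 are a reparametrization of these quantities, and the exact parametrization used by the authors may differ.

```latex
f(x;\mu,\lambda)=\sqrt{\frac{\lambda}{2\pi x^{3}}}
  \exp\!\left(-\frac{\lambda (x-\mu)^{2}}{2\mu^{2}x}\right),\quad x>0,
\qquad
I(\mu,\lambda)=\begin{pmatrix}\dfrac{\lambda}{\mu^{3}} & 0\\[4pt] 0 & \dfrac{1}{2\lambda^{2}}\end{pmatrix},
\qquad
e_{q}(x)=\bigl[1+(1-q)x\bigr]_{+}^{\frac{1}{1-q}}\quad (q\neq 1).
```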

Keywords: Base of Changes, Information Geometry, Inverse Gaussian distribution, Inverse q-Gaussian distribution, Statistical Manifolds.

278 Spreading Dynamics of a Viral Infection in a Complex Network

Authors: Khemanand Moheeput, Smita S. D. Goorah, Satish K. Ramchurn

Abstract:

We report a computational study of the spreading dynamics of a viral infection in a complex (scale-free) network. The final epidemic size distribution (FESD) was found to be unimodal or bimodal depending on the value of the basic reproductive number R0 . The FESDs occurred on time-scales long enough for intermediate-time epidemic size distributions (IESDs) to be important for control measures. The usefulness of R0 for deciding on the timeliness and intensity of control measures was found to be limited by the multimodal nature of the IESDs and by its inability to inform on the speed at which the infection spreads through the population. A reduction of the transmission probability at the hubs of the scale-free network decreased the occurrence of the larger-sized epidemic events of the multimodal distributions. For effective epidemic control, an early reduction in transmission at the index cell and its neighbors was essential.
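
A compact sketch of the kind of stochastic simulation described, discrete-time SIR spreading on a Barabási-Albert scale-free network with the final epidemic size recorded per run, is given below. Network size, transmission and recovery probabilities are illustrative assumptions, not the study's parameters.

```python
import random
import networkx as nx

def final_epidemic_size(G, p_transmit=0.05, p_recover=0.2, seed=0):
    rng = random.Random(seed)
    status = {n: "S" for n in G}
    status[rng.choice(list(G))] = "I"          # single index case
    while any(s == "I" for s in status.values()):
        new_status = dict(status)
        for n, s in status.items():
            if s != "I":
                continue
            for nb in G[n]:                    # attempt transmission to susceptible neighbours
                if status[nb] == "S" and rng.random() < p_transmit:
                    new_status[nb] = "I"
            if rng.random() < p_recover:
                new_status[n] = "R"
        status = new_status
    return sum(s == "R" for s in status.values())

G = nx.barabasi_albert_graph(2000, 3, seed=1)  # scale-free contact network
sizes = [final_epidemic_size(G, seed=k) for k in range(50)]
print(min(sizes), max(sizes))                  # spread of final epidemic sizes across runs
```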

Keywords: Basic reproductive number, epidemic control, scale-free network, viral infection.

277 Three Dimensional Numerical Simulation of a Full Scale CANDU Reactor Moderator to Study Temperature Fluctuations

Authors: A. Sarchami, N. Ashgriz, M. Kwee

Abstract:

Three-dimensional numerical simulations are conducted on a full-scale CANDU moderator, and the transient variations of the temperature and velocity distributions inside the tank are determined. The results show that the flow and temperature distributions inside the moderator tank are three-dimensional and no symmetry plane can be identified. Competition between the upward-moving buoyancy-driven flows and the downward-moving momentum-driven flows results in the formation of circulation zones. The moderator tank operates in the buoyancy-driven mode, and any small disturbance in the flow or temperature makes the system unstable and asymmetric. Different types of temperature fluctuations are noted inside the tank: (i) large-amplitude fluctuations at the boundaries between the hot and cold regions, (ii) low-amplitude fluctuations in the core of the tank, (iii) high-frequency fluctuations in the regions with high velocities, and (iv) low-frequency fluctuations in the regions with lower velocities.

Keywords: Bruce, Fluctuations, Numerical, Temperature, Thermal hydraulics

276 Life Experiences are Important Factors of Making Stronger SOC (Sense of Coherence) on the Workers in Tsukuba Research Park City (TRPC)

Authors: Shinichiro Sasahara, Yusuke Tomotsune, Yuichi Ohi, Shun Suzuki, Akihiro Seki, Junko Sakano, Yoshihiko Yamazaki, Ichiyo Matsuzaki

Abstract:

Via a large-scale cross-sectional study among Japanese white-collar workers, the authors aimed to elucidate: (1) the distributions of Sense of Coherence (SOC), which reflects stress-coping abilities; (2) the distributions of life experiences; and (3) the association between SOC and life experiences. Anonymous self-administered questionnaires were sent to 15,891 employees in 2001 and 21,922 employees in 2011 at educational and research institutions in Tsukuba Research Park City. A total of 5,868 (36.9%) and 9,528 (43.5%) workers, respectively, completed and returned the questionnaire; 5,715 and 9,515 workers, respectively, without missing data were analyzed. SOC scale scores differed by gender, age, and other demographic features in both study years. Among the life experiences, workers who had been through parenting or a management position had higher SOC scale scores, adjusted by gender and age. The life experiences that workers have been through could develop a stronger SOC over their life course.

Keywords: field study, life experience, mental health, SOC (sense of coherence)

275 Convective Heat Transfer Enhancement in an Enclosure with Fin Utilizing Nano Fluids

Authors: S. H. Anilkumar, Ghulam Jilani

Abstract:

The objective of the present work is to conduct investigations leading to a more complete explanation of single-phase natural convective heat transfer in an enclosure with a fin utilizing nano fluids. The nano fluid used, which is composed of aluminum oxide nano particles in suspension in ethylene glycol, is provided at various volume fractions. The study is carried out numerically for a range of Rayleigh numbers, fin heights and aspect ratios. The flow and temperature distributions are taken to be two-dimensional. Regions with the same velocity and temperature distributions are identified as sections of symmetry. One half of such a rectangular region is chosen as the computational domain, taking into account the symmetry about the fin. Transport equations are modeled by a stream function-vorticity formulation and are solved numerically by finite-difference schemes. Comparisons with previously published works are made on the basis of special cases. Results are presented in the form of streamline, vector and isotherm plots as well as the variation of the local Nusselt number along the fin under different conditions.

Keywords: Fin height, Nano fluid, natural convection, Rayleigh number.

274 Measurement of the Bipolarization Events

Authors: Stefan V. Stefanescu

Abstract:

We intend to point out the differences between the classical Gini concentration coefficient and a proposed bipolarization index defined for an arbitrary random variable with finite support. In fact, Gini's index measures only the "poverty degree" of the individuals in a given population, taking into consideration their wages. The Gini coefficient is not very sensitive to significant income variations in the "rich people class". In practice there are multiple interdependent relations between the pauperization and the socio-economic polarization phenomena. The presence of a strong pauperization aspect inside the population often induces a polarization effect in the society. But the pauperization and the polarization phenomena are not identical. For this reason it is not always adequate to use a Gini-type coefficient, based on the Lorenz order, to estimate the bipolarization level of the individuals of the studied population. The present paper emphasizes these ideas by considering two families of random variables which have linear or triangular type distributions. In addition, the continuous variation, depending on the parameter "time", of the chosen distributions could simulate a real dynamical evolution of the population.
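
For reference alongside the proposed bipolarization index, a direct computation of the Gini concentration coefficient for a finite income sample is sketched below; the income vector is hypothetical.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient: mean absolute pairwise difference divided by twice the mean."""
    x = np.asarray(incomes, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()
    return mad / (2.0 * x.mean())

print(gini([10, 12, 15, 20, 200]))   # a few rich individuals inflate concentration
```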

Keywords: Bipolarization phenomenon, Gini coefficient, income distribution, poverty measure.

273 On the Optimality Assessment of Nanoparticle Size Spectrometry and Its Association to the Entropy Concept

Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani

Abstract:

Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nanoparticles under the influence of an electric field in an Electrical Mobility Spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by flow conditions, geometry, electric field and the particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using Computational Fluid Dynamics (CFD) to obtain particle trajectories in the device and therefore to calculate the signal reported by each electrometer. According to the output signals (resulting from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of Von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in the Shannon sense, is the "average amount of information contained in an event, sample or character extracted from a data stream". Evaluating the responses (signals) obtained via various configurations of detecting rings, the configuration which gave the best predictions about the size distributions of the injected particles was the modified configuration. It was also the one that had the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Ultimately, various clouds of particles were introduced to the simulations and the predicted size distributions were compared to the exact size distributions.
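
As a rough sketch of the entropy-based benchmark, the snippet below extracts a Shannon entropy from a hypothetical non-negative transfer matrix and a Von Neumann style entropy from a positive semidefinite, trace-one surrogate built from it. The matrix values and both constructions are assumptions for illustration; the authors' actual extraction of entropy from the EMS transfer matrix may differ.

```python
import numpy as np

def entropy_bits(weights):
    """Shannon entropy (bits) of nonnegative weights normalized to a distribution."""
    p = np.asarray(weights, dtype=float).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
T = rng.random((6, 12))              # hypothetical transfer matrix: 6 rings x 12 size bins

shannon = entropy_bits(T)            # entropy of the matrix treated as one distribution

rho = T @ T.T                        # positive semidefinite surrogate "density matrix"
rho /= np.trace(rho)
eigvals = np.clip(np.linalg.eigvalsh(rho), 0.0, None)
von_neumann = entropy_bits(eigvals)  # entropy of its eigenvalue spectrum

print(round(shannon, 3), round(von_neumann, 3))
```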

Keywords: Aerosol Nano-Particle, CFD, Electrical Mobility Spectrometer, Von Neumann entropy.

272 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers. By means of such equipment, one is able to run numerical algorithms quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance. In this way, statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skew data is present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, 9 statistical distributions (symmetric and skew) have been considered to model a hypothetical slope stability problem. The data modeled is the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Therefore, based on this analysis, it is possible to explicitly derive the failure probability considering the friction angle as a random variable. Furthermore, it is possible to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in light of risk management.
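
To make "failure probability with the friction angle as a random variable" concrete, here is a simplified Monte Carlo sketch for a cohesionless infinite slope (FS = tan φ / tan β). This is not the authors' analytical Mohr-Coulomb derivation; the skew-normal model merely stands in for a generic skew distribution (not their Dagum fit), and all parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
beta_slope = np.radians(28.0)        # hypothetical slope inclination

# Friction angle (degrees) as a random variable; both models use illustrative parameters.
phi_normal = rng.normal(32.0, 3.0, 100_000)
phi_skewed = stats.skewnorm.rvs(a=-4.0, loc=35.0, scale=4.5, size=100_000, random_state=rng)

def p_failure(phi_deg):
    fs = np.tan(np.radians(phi_deg)) / np.tan(beta_slope)   # infinite slope, no cohesion
    return float(np.mean(fs < 1.0))

print("P(FS < 1), normal model:", p_failure(phi_normal))
print("P(FS < 1), skewed model:", p_failure(phi_skewed))
```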

Keywords: Statistical slope stability analysis, Skew distributions, Probability of failure, Functions of random variables.

271 Simulation and Design of the Geometric Characteristics of the Oscillatory Thermal Cycler

Authors: Tse-Yu Hsieh, Jyh-Jian Chen

Abstract:

Since the polymerase chain reaction (PCR) was invented, it has emerged as a powerful tool in genetic analysis. PCR products are closely linked with thermal cycles. Therefore, to reduce the reaction time and make the temperature distribution uniform in the reaction chamber, a novel oscillatory thermal cycler is designed. The sample is placed in a fixed chamber, and three constant isothermal zones are established and aligned in the system. The sample is oscillated and brought into contact with the three different isothermal zones to complete the thermal cycles. This study presents the design of the geometric characteristics of the chamber. The commercial software CFD-ACE+™ is utilized to investigate the influences of various materials, heating times, chamber volumes, and moving speeds of the chamber on the temperature distributions inside the chamber. The chamber moves at a specific velocity, and the time-varying boundary conditions are related to the moving speed. While the chamber moves, the boundary is specified by convection or uniform-temperature conditions. User subroutines compiled in FORTRAN are used to make the numerical results realistic. Results show that when the reaction chamber, a rectangular prism, is heated on six faces, the effects of various moving speeds of the chamber on the temperature distributions can be examined: judging from the temperature profiles and the standard deviation of the temperature at the Y-cut cross section, a non-uniform temperature inside the chamber is found when the moving speed is larger than 0.01 m/s. By reducing the number of heated faces to four, the standard deviation of the temperature of the reaction chamber is kept under 1.4×10⁻³ K for velocities between 0.0001 m/s and 1 m/s. When natural convection boundary conditions are set at all boundaries while the chamber moves between two heaters, the effects of various moving velocities of the chamber on the temperature distributions are negligible over the assigned time duration.

Keywords: Polymerase chain reaction, oscillatory thermal cycler, standard deviation of temperature, natural convection.

270 FEM Models of Glued Laminated Timber Beams Enhanced by Bayesian Updating of Elastic Moduli

Authors: L. Melzerová, T. Janda, M. Šejnoha, J. Šejnoha

Abstract:

Two finite element (FEM) models are presented in this paper to address the random nature of the response of glued timber structures made of wood segments with variable elastic moduli evaluated from 3600 indentation measurements. This total database served to create the same number of ensembles as the number of segments in the tested beam. Statistics of these ensembles were then assigned to the given segments of the beams, and the Latin Hypercube Sampling (LHS) method was used to perform 100 simulations, resulting in an ensemble of 100 deflections subjected to statistical evaluation. Here, a detailed geometrical arrangement of the individual segments in the laminated beam was considered in the construction of a two-dimensional FEM model subjected to four-point bending to comply with the laboratory tests. Since laboratory measurements of local elastic moduli may in general suffer from a significant experimental error, it appears advantageous to exploit the full-scale measurements of the timber beams, i.e. deflections, to improve their prior distributions with the help of the Bayesian statistical method. This, however, requires an efficient computational model when simulating the laboratory tests numerically. To this end, a simplified model based on Mindlin’s beam theory was established. The improved posterior distributions show that the most significant change of the Young’s modulus distribution takes place in the laminae in the most strained zones, i.e. in the top and bottom layers within the beam center region. Posterior distributions of the moduli of elasticity were subsequently utilized in the 2D FEM model and compared with the original simulations.
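
A compact sketch of the sampling step is given below: Latin Hypercube samples of Young's modulus propagated through a textbook four-point-bending deflection formula for a homogeneous, simply supported beam. The single-modulus closed-form formula is only a stand-in for the segment-wise 2D FEM model and Bayesian updating, and all numerical values are assumptions.

```python
import numpy as np
from scipy.stats import qmc, norm

# Latin Hypercube samples of Young's modulus (lognormal via a normal in log space).
n_sim = 100
sampler = qmc.LatinHypercube(d=1, seed=0)
u = sampler.random(n_sim)                                     # stratified uniforms in (0, 1)
E = np.exp(norm.ppf(u[:, 0], loc=np.log(11e9), scale=0.12))   # Pa, illustrative statistics

# Four-point bending of a simply supported homogeneous beam (stand-in for the FEM model).
L, a, P = 4.0, 1.3, 10e3          # span (m), load distance from each support (m), load (N)
b, h = 0.12, 0.24                 # cross-section width and depth (m)
I = b * h**3 / 12.0
midspan_deflection = P * a * (3 * L**2 - 4 * a**2) / (24 * E * I)

print(midspan_deflection.mean(), midspan_deflection.std(ddof=1))
```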

Keywords: Bayesian inference, FEM, four point bending test, laminated timber, parameter estimation, prior and posterior distribution, Young’s modulus.

269 A Nano-Scaled SRAM Guard Band Design with Gaussian Mixtures Model of Complex Long Tail RTN Distributions

Authors: Worawit Somha, Hiroyuki Yamauchi

Abstract:

This paper proposes, for the first time, how the challenges facing guard-band designs, including the margin assist-circuit scheme for the screening test, should be addressed in the coming process generations. The increased screening-error impacts are discussed based on the proposed statistical analysis models. It has been shown that the yield loss caused by misjudgment in the screening test would become five orders of magnitude larger than that for the conventional case when the amplitude of variations caused by random telegraph noise (RTN) approaches that of random dopant fluctuation. Three fitting methods to approximate the RTN-induced complex Gamma mixture distributions by a simple Gaussian mixture model (GMM) are proposed and compared. It has been verified that the proposed methods can reduce the error of the fail-bit predictions by four orders of magnitude.
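
A small sketch of the third ingredient, approximating a long-tailed Gamma mixture by a Gaussian mixture model fitted with the EM algorithm, is given below using scikit-learn. The mixture parameters, component count and threshold are illustrative, not the paper's RTN statistics or its three fitting methods.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical long-tailed sample: a two-component Gamma mixture (stand-in for RTN-shifted margins).
x = np.concatenate([rng.gamma(2.0, 5.0, 20_000),
                    rng.gamma(6.0, 9.0, 5_000)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(x)   # EM fit of a Gaussian mixture

# Tail (fail-bit style) probability beyond a threshold: empirical vs. GMM approximation.
thr = 120.0
tail_gmm = sum(w * norm.sf(thr, m, np.sqrt(v))
               for w, m, v in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()))
print(float((x > thr).mean()), float(tail_gmm))
```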

Keywords: Mixtures of Gaussian, Random telegraph noise, EM algorithm, Long-tail distribution, Fail-bit analysis, Static random access memory, Guard band design.

268 Electric Field and Potential Distributions along Surface of Silicone Rubber Polymer Insulators Using Finite Element Method

Authors: B. Marungsri, W. Onchantuek, A. Oonsivilai

Abstract:

This paper presents the simulation results of the electric field and potential distributions along the surface of silicone rubber polymer insulators. With nearly the same leakage distance, subjected to 15 kV in a 50-cycle salt fog ageing test, the alternate-shed silicone rubber polymer insulator showed better contamination performance than the straight-shed silicone rubber polymer insulator. Severe surface ageing was observed on the straight-shed insulator. The objective of this work is to elucidate that the electric field distribution along the straight-shed insulator is higher than that along the alternate-shed insulator in the salt fog ageing test. The finite element method (FEM) is adopted for this work. The simulation results confirmed the experimental data as well.

Keywords: Electric field distribution, potential distribution, silicone rubber polymer insulator, finite element method.

267 Production Throughput Modeling under Five Uncertain Variables Using Bayesian Inference

Authors: Amir Azizi, Amir Yazid B. Ali, Loh Wei Ping

Abstract:

Throughput is an important measure of the performance of a production system. Analyzing and modeling production throughput is complex in today's dynamic production systems due to the uncertainties of the production system. The main reason is that uncertainties materialize when the production line faces changes in setup time, machinery breakdown, manufacturing lead time, and scrap. Besides, demand fluctuates from time to time for each product type. These uncertainties affect production performance. This paper proposes Bayesian inference for throughput modeling under five production uncertainties. The Bayesian model utilized prior distributions related to previous information about the uncertainties, while likelihood distributions are associated with the observed data. The Gibbs sampling algorithm, as a robust Markov chain Monte Carlo procedure, was employed for sampling unknown parameters and estimating the posterior means of the uncertainties. The Bayesian model was validated with respect to the convergence and efficiency of its outputs. The results showed that the proposed Bayesian models were capable of predicting the production throughput with an accuracy of 98.3%.
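
To illustrate the machinery named above (prior plus likelihood, Gibbs sampling, posterior means), a textbook Gibbs sampler for the mean and variance of a single throughput-like variable with conjugate priors is sketched below. It is a generic example under assumed data and prior values, not the paper's five-uncertainty model.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(50.0, 8.0, size=200)      # hypothetical observed throughput per shift
n, xbar = data.size, data.mean()

# Conjugate priors: mu ~ N(mu0, tau0^2), sigma^2 ~ Inverse-Gamma(a0, b0); illustrative values.
mu0, tau0_sq, a0, b0 = 40.0, 100.0, 2.0, 50.0

mu, sigma_sq = xbar, data.var()
draws = []
for it in range(5000):
    # mu | sigma^2, data ~ Normal
    prec = n / sigma_sq + 1.0 / tau0_sq
    mean = (n * xbar / sigma_sq + mu0 / tau0_sq) / prec
    mu = rng.normal(mean, np.sqrt(1.0 / prec))
    # sigma^2 | mu, data ~ Inverse-Gamma(a0 + n/2, b0 + SS/2)
    a = a0 + n / 2.0
    b = b0 + 0.5 * np.sum((data - mu) ** 2)
    sigma_sq = 1.0 / rng.gamma(a, 1.0 / b)
    if it >= 1000:                           # discard burn-in
        draws.append((mu, sigma_sq))

print(np.array(draws).mean(axis=0))          # posterior means of mu and sigma^2
```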

Keywords: Bayesian inference, Uncertainty modeling, Monte Carlo Markov chain, Gibbs sampling, Production throughput

266 Protein Graph Partitioning by Mutual Maximization of Cycle Distributions

Authors: Frank Emmert Streib

Abstract:

The classification of protein structure is commonly performed not for the whole protein but for structural domains, i.e., compact functional units preserved during evolution. Hence, a first step toward a protein structure classification is the separation of the protein into its domains. We approach the problem of protein domain identification by proposing a novel graph-theoretical algorithm. We represent the protein structure as an undirected, unweighted and unlabeled graph whose nodes correspond to the secondary structure elements of the protein. This graph is called the protein graph. The domains are then identified as partitions of the graph corresponding to vertex sets obtained by the maximization of an objective function, which mutually maximizes the cycle distributions found in the partitions of the graph. Our algorithm does not utilize any other kind of information besides the cycle distribution to find the partitions. If a partition is found, the algorithm is iteratively applied to each of the resulting subgraphs. As a stop criterion, we calculate numerically a significance level which indicates the stability of the predicted partition against a random rewiring of the protein graph. Hence, the iterative application of the algorithm terminates automatically. We present results for one- and two-domain proteins and compare our results with the domains manually assigned in the SCOP database; differences are discussed.
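
The cycle-distribution ingredient can be illustrated with networkx: the sketch below computes a cycle basis of a small, hypothetical protein graph, splits it into two candidate partitions, and compares their cycle-length distributions. The paper's objective function, significance test and iteration are not reproduced.

```python
from collections import Counter
import networkx as nx

# Hypothetical protein graph: nodes stand for secondary-structure elements.
G = nx.Graph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 6), (6, 4), (6, 7), (7, 5)])

def cycle_length_distribution(graph):
    """Histogram of cycle lengths in a cycle basis of the graph."""
    return Counter(len(c) for c in nx.cycle_basis(graph))

partition_a, partition_b = {1, 2, 3}, {4, 5, 6, 7}
print("whole graph:", cycle_length_distribution(G))
print("partition A:", cycle_length_distribution(G.subgraph(partition_a)))
print("partition B:", cycle_length_distribution(G.subgraph(partition_b)))
```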

Keywords: Graph partitioning, unweighted graph, protein domains.

265 Modeling of Steady State Creep in Thick-Walled Cylinders under Internal Pressure

Authors: Tejeet Singh, Ishavneet Singh

Abstract:

The present study focused on carrying out a creep analysis in an isotropic thick-walled composite cylindrical pressure vessel composed of an aluminum matrix reinforced with silicon carbide in particulate form. The creep behavior of the composite material has been described by a threshold-stress-based creep law. The values of the stress exponent appearing in the creep law were selected as 3, 5 and 8. The constitutive equations were developed using the well-known von Mises yield criterion. Models were developed to find the distributions of creep stress and strain rate in thick-walled composite cylindrical pressure vessels under internal pressure. In order to obtain the stress distributions in the cylinder, the equilibrium equation of continuum mechanics and the constitutive equations are solved together. It was observed that the radial, tangential and axial stresses increase with the radial distance. A cross-over was also obtained almost at the middle region of the cylindrical vessel for the tangential and axial stresses for different values of the stress exponent. The strain rates were also found to decrease along the entire radius.

Keywords: Steady state creep, composite, cylinder, pressure.

264 Evaluation of Best-Fit Probability Distribution for Prediction of Extreme Hydrologic Phenomena

Authors: Karim Hamidi Machekposhti, Hossein Sedghi

Abstract:

Probability distributions are the best method for forecasting extreme hydrologic phenomena such as rainfall and flood flows. In this research, in order to determine suitable probability distributions for estimating annual extreme rainfall and flood flow (discharge) series with different return periods, precipitation data covering 40 years and discharge data covering 58 years were collected from the Karkheh River in Iran. After homogeneity and adequacy tests, the data were analyzed with the Stormwater Management and Design Aid (SMADA) software and the residual sum of squares (R.S.S). The best probability distribution was Log Pearson Type III, with R.S.S values of 145.91 and 13.67 for peak discharge and 141.08 and 8.95 for maximum discharge at the Jelogir Majin and Pole Zal stations, respectively. The best distribution for maximum precipitation at the Jelogir Majin and Pole Zal stations was the Log Pearson Type III distribution with R.S.S values of 1.74 and 1.90, followed by the Pearson Type III distribution with R.S.S values of 1.53 and 1.69. Overall, the Log Pearson Type III distributions are acceptable distribution types for representing the statistics of extreme hydrologic phenomena in the Karkheh River in Iran, with the Pearson Type III distribution as a potential alternative.
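
As a sketch of the distribution-fitting step, the snippet below fits a Log Pearson Type III model to a synthetic annual peak discharge series with scipy (Pearson III on log10 flows) and reads off quantiles for a few return periods. The data are hypothetical and the R.S.S bookkeeping of SMADA is not reproduced.

```python
import numpy as np
from scipy.stats import pearson3

rng = np.random.default_rng(0)
peaks = np.exp(rng.normal(6.5, 0.5, size=58))     # hypothetical annual peak flows (m^3/s)

logq = np.log10(peaks)
skew, loc, scale = pearson3.fit(logq)             # Pearson III fitted to log10 flows

for T in (2, 10, 50, 100):
    p_non_exceed = 1.0 - 1.0 / T
    q_T = 10 ** pearson3.ppf(p_non_exceed, skew, loc, scale)
    print(f"T = {T:>3} yr  ->  Q_T ~ {q_T:,.0f} m^3/s")
```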

Keywords: Karkheh river, log pearson type III, probability distribution, residual sum of squares.

263 Statistical Distributions of the Lapped Transform Coefficients for Images

Authors: Vijay Kumar Nath, Deepika Hazarika, Anil Mahanta

Abstract:

Discrete Cosine Transform (DCT) based transform coding is very popular in image, video and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, the reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. The lapped transform was proposed as an alternative to the DCT with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing and availability of fast implementation algorithms. However, there is no proper study reported in the literature regarding the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the χ² test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of a majority of the significant AC coefficients can be modeled by the Generalized Gaussian distribution. Knowledge of the statistical distribution of the transform coefficients greatly helps in the design of optimal quantizers, which may lead to minimum distortion and hence optimal coding efficiency.
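
The goodness-of-fit procedure can be sketched as follows: fit a Generalized Gaussian (scipy's gennorm) to a block of AC coefficients and apply the Kolmogorov-Smirnov test. The synthetic Laplacian coefficients below merely stand in for actual LOT/LBT subband data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for one AC subband of lapped-transform coefficients (heavy-tailed, zero mean).
coeffs = stats.laplace(loc=0.0, scale=4.0).rvs(5000, random_state=rng)

beta, loc, scale = stats.gennorm.fit(coeffs)                   # Generalized Gaussian fit
d_stat, p_value = stats.kstest(coeffs, "gennorm", args=(beta, loc, scale))
print(f"shape beta = {beta:.2f}, KS D = {d_stat:.4f}, p = {p_value:.3f}")
```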

Keywords: Lapped orthogonal transform, Lapped biorthogonal transform, Image compression, KS test.

262 Levels of Students’ Understandings of Electric Field Due to a Continuous Charged Distribution: A Case Study of a Uniformly Charged Insulating Rod

Authors: Thanida Sujarittham, Narumon Emarat, Jintawat Tanamatayarat, Kwan Arayathanitkul, Suchai Nopparatjamjomras

Abstract:

Electric field is an important fundamental concept in electrostatics. In high school, Thai students have generally already learned the definition of the electric field, the electric field due to a point charge, and the superposition of electric fields due to multiple point charges. This is the prerequisite basic knowledge students hold before entering university. At the first-year university level, students quickly revise this basic knowledge and are then introduced to a more complicated topic: the electric field due to continuous charge distributions. We initially found that our freshman students, who were from the Faculty of Science and enrolled in the introductory physics course (SCPY 158), often struggled seriously with the basic physics concepts of superposition of electric fields and the inverse square law, and with the mathematics relevant to this topic. This also affected students' understanding of advanced topics within the course such as Gauss's law, electric potential difference, and capacitance. Therefore, it is very important to determine students' understanding of the electric field due to continuous charge distributions. An open-ended question asking students to sketch net electric field vectors from a uniformly charged insulating rod was administered to 260 freshman science students as pre- and post-tests. All of their responses were analyzed and classified into five levels of understanding. To gain a deeper understanding of each level, 30 students were interviewed about their individual responses. The pre-test result was that about 90% of students had an incorrect understanding. Even after completing the lectures, only 26.5% of them could provide correct responses; up to 50% had confusions and irrelevant ideas. The result implies that teaching methods in Thai high schools may be problematic. In addition, the students' alternative conceptions identified could be used as a guideline for developing the instructional method currently used in the course, especially for teaching electrostatics.

Keywords: Electrostatics, electric field due to continuous charge distributions, inverse square law, superposition principle, levels of student understanding.

261 Stochastic Modeling for Parameters of Modified Car-Following Model in Area-Based Traffic Flow

Authors: N. C. Sarkar, A. Bhaskar, Z. Zheng

Abstract:

The driving behavior in area-based (i.e., non-lane-based) traffic is induced by the presence of other individuals in the choice space within the driver's visual perception area. The driving behavior of a subject vehicle is constrained by potential leaders, and the leaders change frequently over time. This paper determines a stochastic model for the parameters of a modified intelligent driver model (MIDM) in area-based traffic (as in developing countries). Parametric and non-parametric distributions are presented to fit the parameters of the MIDM. The goodness of fit for each parameter is measured in two different ways, graphically and statistically. The quantile-quantile (Q-Q) plot is used as a graphical representation of a theoretical distribution for modeling a parameter, and the Kolmogorov-Smirnov (K-S) test is used as a statistical measure of the fitness of a parameter to a theoretical distribution. The distributions are fitted to a set of estimated parameters of the MIDM. The parameters are estimated from real vehicle trajectory data from India. The fit of each parameter to a stochastic model is well represented. The results support the applicability of the proposed modeling of the MIDM parameters in area-based traffic flow simulation.
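
A minimal sketch of the two fitness checks described (a Q-Q plot and a K-S test of a fitted distribution for one estimated parameter) is given below. The lognormal choice and the synthetic parameter sample are assumptions for illustration, not the paper's estimates.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Stand-in for one estimated car-following parameter (e.g. desired time gap) across drivers.
param = rng.lognormal(mean=0.3, sigma=0.4, size=400)

shape, loc, scale = stats.lognorm.fit(param, floc=0.0)         # parametric fit
d_stat, p_value = stats.kstest(param, "lognorm", args=(shape, loc, scale))
print(f"K-S: D = {d_stat:.4f}, p = {p_value:.3f}")

# Q-Q plot of the sample against the fitted distribution.
stats.probplot(param, dist=stats.lognorm, sparams=(shape, loc, scale), plot=plt)
plt.savefig("qq_parameter.png")
```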

Keywords: Area-based traffic, car-following model, micro-simulation, stochastic modeling.

260 Heat Flux Reduction Research in Hypersonic Flow with Opposing Jet

Authors: Yisheng Rong, Jian Sun, Weiqiang Liu, Renjun Zhan

Abstract:

A CFD study on heat flux reduction in hypersonic flow with an opposing jet has been conducted. Flowfield parameters, the reattachment point position, surface pressure distributions and heat flux distributions are obtained and validated against experiments. The physical mechanism of heat reduction has been analyzed. When the opposing jet blows, the freestream is blocked off, flows to the edges and does not interact with the surface to cause aerodynamic heating. At the same time, the jet flows back to form a cool recirculation region, which reduces the difference in temperature between the surface and the nearby gas and thereby reduces the heat flux. As the pressure ratio increases, the interface between the jet and the freestream is gradually pushed away from the surface. The larger the total pressure ratio, the lower the heat flux. To study the effect of the intensity of the opposing jet more reasonably, a new parameter, RPA, has been introduced by combining the flux and the total pressure ratio. The study shows that the same shock wave position and total heat load can be obtained with the same RPA for different fluxes and total pressures, which means the new parameter can stand for the intensity of the opposing jet and can be used to analyze the influence of the opposing jet on the flow field and aerodynamic heating.

Keywords: opposing jet, aerodynamic heating, total pressure ratio, thermal protection system

259 Experimental Investigation of Phase Distributions of Two-phase Air-silicone Oil Flow in a Vertical Pipe

Authors: M. Abdulkadir, V. Hernandez-Perez, S. Sharaf, I. S. Lowndes, B. J. Azzopardi

Abstract:

This paper reports the results of an experimental study conducted to characterise the gas-liquid multiphase flows experienced within a vertical riser transporting a range of gas-liquid flow rates. The scale experiments were performed using an air/silicone oil mixture within a 6 m long riser. The superficial air velocities studied ranged from 0.047 to 2.836 m/s, whilst the liquid superficial velocity was maintained at 0.047 m/s. Measurements of the mean cross-sectional and time-averaged radial void fraction were obtained using a wire mesh sensor (WMS). The data were recorded at an acquisition frequency of 1000 Hz over an interval of 60 seconds. For the range of flow conditions studied, the average void fraction was observed to vary between 0.1 and 0.9. An analysis of the data collected concluded that the observed void fraction was strongly affected by the superficial gas velocity: the higher the superficial gas velocity, the higher the observed average void fraction. The average void fraction distributions observed were in good agreement with the results obtained by other researchers. When the air-silicone oil flows were fully developed, reasonably symmetric profiles were observed, with the shape of the symmetry profile being strongly dependent on the superficial gas velocity.
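
The reduction of the raw WMS signal to the reported quantities can be sketched as below for a hypothetical data block: the time-averaged cross-sectional void fraction and a crude time-averaged radial profile. The grid size, block length and synthetic values are assumptions, not the experimental data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical WMS block: 5 s at 1000 Hz on a 24 x 24 wire grid, values = local void fraction.
wms = np.clip(rng.normal(0.45, 0.15, size=(5_000, 24, 24)), 0.0, 1.0)

mean_cross_sectional = wms.mean()          # time- and area-averaged void fraction
time_avg_map = wms.mean(axis=0)            # 24 x 24 time-averaged local void fraction

# Collapse the map into a radial profile by binning points by distance from the pipe centre.
y, x = np.indices(time_avg_map.shape)
r = np.hypot(x - 11.5, y - 11.5)
bins = np.linspace(0.0, r.max() + 1e-9, 9)
radial_profile = [time_avg_map[(r >= lo) & (r < hi)].mean()
                  for lo, hi in zip(bins[:-1], bins[1:])]
print(round(mean_cross_sectional, 3), np.round(radial_profile, 3))
```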

Keywords: WMS, phase distribution, silicone-oil, riser

258 MONPAR - A Page Replacement Algorithm for a Spatiotemporal Database

Authors: U. Kalay, O. Kalıpsız

Abstract:

For a spatiotemporal database management system, the I/O cost of queries and other operations is an important performance criterion. In order to optimize this cost, intense research on designing robust index structures has been done in the past decade. Beyond these major considerations, there are still other design issues that deserve attention due to their direct impact on the I/O cost. In particular, an efficient buffer management strategy plays a key role in reducing redundant disk accesses. In this paper, we propose an efficient buffer strategy for a spatiotemporal database index structure, specifically one indexing objects moving over a network of roads. The proposed strategy, namely MONPAR, is based on the data type (i.e. spatiotemporal data) and the structure of the index. For the purpose of an experimental evaluation, we set up a simulation environment that counts the number of disk accesses while executing a number of spatiotemporal range queries over the index. We repeated the simulations with query sets having different distributions, such as a uniform query distribution and a skewed query distribution. Based on the comparison of our strategy with well-known page-replacement techniques, like LRU-based and priority-based buffers, we conclude that MONPAR behaves better than its competitors for small and medium-size buffers under all the query distributions used.
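
MONPAR itself is not specified in enough detail here to reproduce, but the LRU baseline it is compared against can be sketched in a few lines; the page identifiers, request stream and buffer size below are hypothetical.

```python
from collections import OrderedDict

class LRUBuffer:
    """Minimal LRU page buffer that counts disk accesses (misses) for a page request stream."""
    def __init__(self, capacity):
        self.capacity, self.pages, self.disk_accesses = capacity, OrderedDict(), 0

    def request(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)      # hit: mark as most recently used
        else:
            self.disk_accesses += 1              # miss: fetch the page from disk
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)   # evict the least recently used page
            self.pages[page_id] = True

buf = LRUBuffer(capacity=8)
for page in [1, 2, 3, 1, 4, 5, 1, 2, 6, 7, 8, 9, 10, 1, 2]:   # hypothetical range-query accesses
    buf.request(page)
print(buf.disk_accesses)
```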

Keywords: Buffer Management, Spatiotemporal databases.
