Search results for: higher dimensional pmf estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14387

13877 Numerical Solution of Manning's Equation in Rectangular Channels

Authors: Abdulrahman Abdulrahman

Abstract:

When Manning's equation is used, a unique normal depth exists in uniform flow for a given channel geometry, discharge, roughness, and slope. The value of the normal depth relative to the critical depth determines the flow type (supercritical or subcritical) for a given set of channel conditions, whether or not the flow is uniform. There is no general closed-form solution of Manning's equation for the flow depth at a given flow rate, because the cross-sectional area and the hydraulic radius are complicated functions of depth. The familiar approaches for the normal depth of a rectangular channel involve 1) a trial-and-error solution; 2) constructing a non-dimensional graph; or 3) preparing tables of non-dimensional parameters. In this paper, the author derives a semi-analytical solution of Manning's equation for the flow depth at a given flow rate in a rectangular open channel. The solution is obtained by expressing Manning's equation in non-dimensional form and expanding it as a Maclaurin series. To simplify the solution, terms up to the fourth power are retained. The resulting equation is a quartic in standard form, which is solved by resolving it into two quadratic factors. The proposed solution is valid over a large range of parameters, with a maximum error within -1.586%.
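The trial-and-error solution that the paper's quartic replaces can be sketched numerically. The snippet below solves Manning's equation Q = (1/n)·A·R^(2/3)·√S for the normal depth of a rectangular channel by bisection; it is an illustrative baseline, not the paper's Maclaurin-series method, and the channel parameters are assumed example values.

```python
import numpy as np

def normal_depth(Q, b, n, S, y_hi=50.0, tol=1e-10):
    """Normal depth y (m) of a rectangular channel from Manning's equation,
    solved by bisection. A = b*y is the flow area, R = A/(b + 2y) the
    hydraulic radius. Sketch only; the paper derives a quartic instead."""
    def residual(y):
        A = b * y
        R = A / (b + 2.0 * y)
        return (1.0 / n) * A * R ** (2.0 / 3.0) * np.sqrt(S) - Q
    lo, hi = 1e-9, y_hi  # residual is negative at lo, positive at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: Q = 10 m^3/s, width b = 5 m, Manning n = 0.013, slope S = 0.001
y = normal_depth(Q=10.0, b=5.0, n=0.013, S=0.001)
```

Bisection is slower than the paper's closed-form quartic factorization but is robust over the same parameter range.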

Keywords: channel design, civil engineering, hydraulic engineering, open channel flow, Manning's equation, normal depth, uniform flow

Procedia PDF Downloads 197
13876 Statistical Analysis of Extreme Flow (Regions of Chlef)

Authors: Bouthiba Amina

Abstract:

The estimation of precipitation statistics is a vast domain that poses numerous challenges to meteorologists and hydrologists. It is sometimes necessary to approximate extreme events, and their return periods, for sites with little or no data. The search for a frequency model of daily rainfall depths is of great importance in operational hydrology: it establishes a basis for predicting the frequency and intensity of floods by estimating the amount of precipitation in past years. The best-known and most common approach is statistical: it consists of finding the probability law that best fits the observed values of the random variable 'daily maximum rainfall', after comparing various probability laws and estimation methods by means of goodness-of-fit tests. A frequency analysis of the annual series of daily maximum rainfall was therefore carried out on data from 54 rain-gauge stations of the upper and middle basin over the period 1970 to 2013, and used to forecast quantiles. Five laws commonly applied to the analysis of maximum daily rainfall were considered: the three-parameter generalized extreme value law, the two-parameter extreme value laws (Gumbel and log-normal), and the three-parameter Pearson type III and log-Pearson III laws. In Algeria, Gumbel's law has long been used to estimate the quantiles of maximum flows; here, this practice is examined in order to choose the most reliable law.
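The Gumbel fit mentioned above can be sketched in a few lines: a method-of-moments fit to a series of annual maxima, followed by return-period quantiles. The rainfall data below are synthetic placeholders, not the Chlef station records.

```python
import numpy as np

# Synthetic annual maximum daily rainfall (mm), one value per year,
# standing in for an observed 1970-2013 station series.
rng = np.random.default_rng(0)
annual_max = rng.gumbel(loc=40.0, scale=12.0, size=44)

# Method-of-moments Gumbel fit: scale from the sample standard deviation,
# location from the sample mean and the Euler-Mascheroni constant.
euler_gamma = 0.5772156649
scale = np.std(annual_max, ddof=1) * np.sqrt(6.0) / np.pi
loc = np.mean(annual_max) - euler_gamma * scale

def gumbel_quantile(T):
    """Rainfall depth exceeded on average once every T years."""
    p = 1.0 - 1.0 / T  # non-exceedance probability
    return loc - scale * np.log(-np.log(p))

q10, q100 = gumbel_quantile(10.0), gumbel_quantile(100.0)
```

The same quantile formula underlies the return-period estimates the abstract refers to; the competing laws (GEV, log-normal, Pearson III, log-Pearson III) would each supply their own quantile function.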

Keywords: return period, extreme flow, statistics laws, Gumbel, estimation

Procedia PDF Downloads 58
13875 Life Cycle Carbon Dioxide Emissions from the Construction Phase of Highway Sector in China

Authors: Yuanyuan Liu, Yuanqing Wang, Di Li

Abstract:

Mitigating carbon dioxide (CO2) emissions from road construction activities is one potential pathway for addressing climate change, given the sector's heavy use of materials, machinery energy consumption, and large quantities of vehicle and equipment fuel for transportation and on-site construction. To assess the environmental impact of road infrastructure construction activities and to identify hotspots of emission sources, this study developed a life-cycle CO2 emissions assessment framework covering three stages (material production, to-site transportation, and on-site construction), following the LCA principles of ISO 14040. A streamlined inventory analysis of the sub-processes of each stage was then conducted, based on the budget files of highway projects in China. The results were normalized to a functional unit of tons per km per lane. Emissions from the stages and sub-processes were compared to identify the major contributors over the whole highway life cycle, and the results were compared with those of other countries to situate the CO2 emissions of Chinese road infrastructure internationally. The results showed that the materials production stage produces most of the CO2 emissions (more than 80%), with cement and steel production accounting for large shares. Life-cycle CO2 emissions from the fuel and electricity used by to-site and on-site vehicles and equipment are a minor component of the total. Bridges and tunnels are the dominant carbon contributors compared with road segments. The life-cycle CO2 emissions of road segments in Chinese highway projects, about 1,500 tons per km per lane, are slightly higher than estimates for highways in European countries and the USA.
In particular, the life-cycle CO2 emissions of road pavements in most cities worldwide are about 500 tons per km per lane, but clear differences between cities emerge when bridges and tunnels are included in the estimation. The findings offer decision makers a more comprehensive reference for understanding the contribution of road infrastructure to climate change, especially the contribution of construction activities in China. In addition, the identified emission hotspots provide insights into how to reduce road carbon emissions for the development of sustainable transportation.

Keywords: carbon dioxide emissions, construction activities, highway, life cycle assessment

Procedia PDF Downloads 247
13874 Non-Parametric, Unconditional Quantile Estimation of Efficiency in Microfinance Institutions

Authors: Komlan Sedzro

Abstract:

We apply the non-parametric, unconditional, hyperbolic order-α quantile estimator to appraise the relative efficiency of microfinance institutions (MFIs) in Africa in terms of outreach. Our purpose is to verify whether these institutions, which must constantly strike a compromise between their social role and financial sustainability, are operationally efficient. Using data on African MFIs extracted from the Microfinance Information eXchange (MIX) database and covering the period 2004 to 2006, we find that the more efficient MFIs are also the most profitable. This result supports the view that social performance is not in contradiction with the pursuit of excellent financial performance. Our results also show that MFIs that are large in terms of assets, and those charging the highest fees, are not necessarily the most efficient.

Keywords: data envelopment analysis, microfinance institutions, quantile estimation of efficiency, social and financial performance

Procedia PDF Downloads 286
13873 Continuous Catalytic Hydrogenation and Purification for the Synthesis of Non-Phthalate Plasticizer

Authors: Chia-Ling Li

Abstract:

The scope of this article is the production of 10,000 metric tons of non-phthalate plasticizer per annum. The production process includes hydrogenation, separation, purification, and recycling of unprocessed feedstock. Based on experimental data, conversion and selectivity were chosen as the reaction model parameters. The synthesis and separation processes for the non-phthalate and phthalate were established using Aspen Plus software. The article is divided into six parts: estimation of physical properties, integration of production processes, a purification case study, utility consumption, an economic feasibility study, and identification of bottlenecks. The purity of the products was higher than 99.9 wt.%. The process parameters provide important guidance for the commercialization of phthalate hydrogenation.

Keywords: economic analysis, hydrogenation, non-phthalate, process simulation

Procedia PDF Downloads 254
13872 The New Propensity Score Method and Assessment of Propensity Score: A Simulation Study

Authors: Azam Najafkouchak, David Todem, Dorothy Pathak, Pramod Pathak, Joseph Gardiner

Abstract:

Propensity score (PS) methods have become the standard analysis tool for causal inference in observational studies, where exposure is not randomly assigned and confounding can therefore bias the estimation of the treatment effect on the outcome. Given the dangers of discretizing continuous variables, the focus of this paper is on how variation in the cut-points, or stratum boundaries, affects the average treatment effect estimated by the stratification PS method. In this study, we develop a new methodology to improve the efficiency of PS analysis through stratification and a simulation study. We also explore the properties of the empirical distribution of the average treatment effect theoretically, including its asymptotic distribution, variance estimation, and 95% confidence intervals.
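The stratification estimator the paper studies can be sketched as follows: units are grouped into strata by propensity-score cut-points, and the average treatment effect (ATE) is the stratum-size-weighted mean of within-stratum treated-vs-control outcome differences. This is a hypothetical illustration: the propensity scores are taken as given (in practice they would be estimated, e.g., by logistic regression), the quintile cut-points are one of the boundary choices the paper varies, and the data are simulated with a true effect of 2.0.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
ps = rng.uniform(0.1, 0.9, n)                 # assumed known propensity scores
t = (rng.uniform(size=n) < ps).astype(int)    # treatment assignment
y = 2.0 * t + ps + rng.normal(size=n)         # outcome; true effect is 2.0

def stratified_ate(y, t, ps, cuts):
    edges = np.quantile(ps, cuts)             # cut-points define the strata
    idx = np.digitize(ps, edges[1:-1])
    est, wsum = 0.0, 0
    for s in np.unique(idx):
        m = idx == s
        if t[m].sum() == 0 or (1 - t[m]).sum() == 0:
            continue                          # skip strata lacking one group
        est += m.sum() * (y[m][t[m] == 1].mean() - y[m][t[m] == 0].mean())
        wsum += m.sum()
    return est / wsum

ate = stratified_ate(y, t, ps, cuts=np.linspace(0, 1, 6))  # quintile cuts
```

Re-running with different `cuts` vectors exhibits exactly the cut-point sensitivity the abstract describes.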

Keywords: propensity score, stratification, empirical distribution, average treatment effect

Procedia PDF Downloads 80
13871 Thrust Enhancement on a Two Dimensional Elliptic Airfoil in a Forward Flight

Authors: S. M. Dash, K. B. Lua, T. T. Lim

Abstract:

This paper presents results of numerical and experimental studies of a two-dimensional (2D) flapping elliptic airfoil in forward flight at a Reynolds number of 5000. The study is motivated by an earlier investigation showing that the deterioration in thrust performance of a sinusoidally heaving and pitching 2D (NACA0012) airfoil at high flapping frequency can be recovered by changing the effective angle-of-attack profile to a square, sawtooth, or cosine wave shape. To better understand why such modifications lead to superior thrust performance, we take a closer look at the transient aerodynamic force behavior of an airfoil as the effective angle-of-attack profile changes gradually from a generic smooth trapezoidal profile to a sinusoid, by modifying the base length of the trapezoid. A smooth trapezoidal profile is chosen to avoid the infinite acceleration encountered with a square-wave profile. Our results show that the enhancement in time-averaged thrust performance at high flapping frequency can be attributed to the delay and reduction of the drag-producing valley region in the transient thrust force coefficient as the effective angle-of-attack profile changes from sinusoidal to trapezoidal.

Keywords: two-dimensional flapping airfoil, thrust performance, effective angle of attack, CFD, experiments

Procedia PDF Downloads 343
13870 Phase-Averaged Analysis of Three-Dimensional Vorticity in the Wake of Two Yawed Side-By-Side Circular Cylinders

Authors: T. Zhou, S. F. Mohd Razali, Y. Zhou, H. Wang, L. Cheng

Abstract:

The wake flow behind two yawed side-by-side circular cylinders is investigated using a three-dimensional vorticity probe. Four yaw angles (α), namely 0°, 15°, 30° and 45°, and two cylinder spacing ratios T* of 1.7 and 3.0 were tested. For T* = 3.0, two vortex streets exist and the cylinders behave as independent and isolated ones. The maximum contour value of the coherent streamwise vorticity is only about 10% of that of the spanwise vorticity. With increasing α, the coherent streamwise vorticity increases whereas the spanwise vorticity decreases; at α = 45°, the former is about 67% of the latter. For T* = 1.7, only a single peak is detected in the energy spectrum, and the spanwise vorticity contours show an organized pattern only at α = 0°. The maximum coherent streamwise and spanwise vorticity contours for T* = 1.7 are about 30% and 7%, respectively, of those for T* = 3.0. The independence principle (IP) in terms of Strouhal numbers is applicable in both wakes when α < 40°.

Keywords: circular cylinder wake, vorticity, vortex shedding, side-by-side

Procedia PDF Downloads 319
13869 Theoretical Framework and Empirical Simulation of Policy Design on Trans-Dimensional Resource Recycling

Authors: Yufeng Wu, Yifan Gu, Bin Li, Wei Wang

Abstract:

The resource recycling process contains a subsystem with interactions across three dimensions: the coupled allocation of primary and secondary resources, the coordination of stakeholder responsibilities in forward and reverse supply chains, and the trans-boundary transfer of hidden resource and environmental responsibilities between regions. Overlaps or gaps in responsibility easily arise at the intersection of these three management dimensions, so an overall design of the policy system for resource recycling is urgently needed. From a theoretical perspective, this paper analyzes the distinct resource and environmental externalities of each dimension and explores why the effects of trans-dimensional policies are strongly correlated. Taking the copper contained in waste electrical and electronic equipment as an example, the paper constructs a reduction-effect accounting model of resource recycling and sets four trans-dimensional policy scenarios: reform of resource and environmental taxes on raw and secondary resources, application of an extended producer responsibility system, promotion of the clean development mechanism, and strict entry barriers for imported wastes. On this basis, the paper simulates the impact of the resource recycling process on resource reduction and on emission reductions of waste water and gas, and constructs a trans-dimensional policy mix scenario by integrating the dominant strategies. The results show that the combined application of policies across dimensions can achieve incentive compatibility, and that the trans-dimensional policy mix scenario achieves the best effect: compared with the baseline scenario, it increases the copper resource reduction effect by 91.06% and improves the emission reduction of waste water and gas eightfold from 2010 to 2030. The paper further analyzes the development orientation of policies in each dimension.
In the resource dimension, compulsory, market, and certification methods should be combined to improve the use ratio of secondary resources. In the supply chain dimension, the resource value, residual functional value, and potential information value contained in waste products should be fully exploited to construct a circular business system. In the regional dimension, China should leverage its comparative advantages as a manufacturing power to strengthen its voice in global resource recycling.

Keywords: resource recycling, trans-dimension, policy design, incentive compatibility, life cycle

Procedia PDF Downloads 109
13868 Three-Dimensional Numerical Model of an Earth Air Heat Exchanger under a Constrained Urban Environment in India: Modeling and Validation

Authors: V. Rangarajan, Priyanka Kaushal

Abstract:

This study investigates the effectiveness of a typical Earth Air Heat Exchanger (EATHE) for energy efficient space cooling in an urban environment typified by space and soil-related constraints that preclude an optimal design. It involves the development of a three-dimensional numerical transient model that is validated by measurements at a live site in India. It is found that the model accurately predicts the soil temperatures at various depths as well as the EATHE outlet air temperature. The study shows that such an EATHE, even when designed under constraints, does provide effective space cooling especially during the hot months of the year.

Keywords: earth air heat exchanger (EATHE), India, MATLAB, model, simulation

Procedia PDF Downloads 305
13867 Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand

Authors: Chukiat Chaiboonsri, Satawat Wannapan

Abstract:

This paper applies an asymmetric-information concept to the estimation of a macroeconomic model of the tourism sector in Thailand. The variables analyzed are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by the private sector, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated with two statistical approaches. The first was the Generalized Method of Moments (GMM) model, based on the assumption that the tourism market in Thailand has perfect information (symmetrical data). The second was the Maximum Entropy Bootstrap approach (MEboot), based on a process that attempts to deal with imperfect information and reduce uncertainty in the data observations (asymmetrical data). In addition, tourism leakages were investigated with a simple model based on the injections-and-leakages concept. The empirical findings show that the parameters computed by the MEboot approach differ from those of the GMM method; however, both estimations suggest that Thailand's tourism sector is in a period capable of stimulating the economy.

Keywords: Thailand tourism, Maximum Entropy Bootstrap approach, macroeconomic model, asymmetric information

Procedia PDF Downloads 278
13866 Performance Comparison of Wideband Covariance Matrix Sparse Representation (W-CMSR) with Other Wideband DOA Estimation Methods

Authors: Sandeep Santosh, O. P. Sahu

Abstract:

In this paper, the performance of the wideband covariance matrix sparse representation (W-CMSR) method is compared with that of other existing wideband direction-of-arrival (DOA) estimation methods. W-CMSR relies less on a priori knowledge of the number of incident signals than ordinary subspace-based methods. Consider the perturbation-free covariance matrix of the wideband array output: its diagonal elements are contaminated by the unknown noise variance, and the matrix is conjugate symmetric, i.e., its upper-right triangular elements can be represented by the lower-left triangular ones. Since the main diagonal elements are contaminated by the unknown noise variance, they are skipped, and the lower-left triangular elements are aligned column by column to obtain a measurement vector. Simulation results for W-CMSR are compared with those of other wideband DOA estimation methods, namely the coherent signal subspace method (CSSM), Capon, l1-SVD, and JLZA-DOA. W-CMSR separates two signals very clearly, whereas CSSM, Capon, l1-SVD, and JLZA-DOA fail to do so, and a number of pseudo-peaks appear in the spectrum of l1-SVD.
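The measurement-vector construction described above can be sketched directly: skip the noisy diagonal of the conjugate-symmetric sample covariance and stack the strictly lower-triangular elements column by column. The array size and random data below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 6                                        # number of array elements (assumed)
X = rng.normal(size=(M, 200)) + 1j * rng.normal(size=(M, 200))
R = X @ X.conj().T / 200.0                   # sample covariance of array output

# Strictly lower-triangular indices (diagonal excluded), then reorder so the
# elements are taken column by column, as in the W-CMSR description.
rows, cols = np.tril_indices(M, k=-1)
order = np.argsort(cols, kind="stable")
z = R[rows[order], cols[order]]              # measurement vector, length M(M-1)/2
```

For M = 6 the vector has 15 entries: column 0 contributes R[1,0]..R[5,0], column 1 contributes R[2,1]..R[5,1], and so on.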

Keywords: W-CMSR, wideband direction of arrival (DOA), covariance matrix, electrical and computer engineering

Procedia PDF Downloads 453
13865 An E-coaching Methodology for Higher Education in Saudi Arabia

Authors: Essam Almuhsin, Ben Soh, Alice Li, Azmat Ullah

Abstract:

It is widely accepted that university students must acquire new knowledge, skills, awareness, and understanding to increase opportunities for professional and personal growth. The study reveals a significant increase in users engaging in e-coaching activities and a growing need for it during the COVID-19 pandemic. The paper proposes an e-coaching methodology for higher education in Saudi Arabia to address the need for effective coaching in the current online learning environment.

Keywords: role of e-coaching, e-coaching in higher education, Saudi higher education environment, e-coaching methodology, the importance of e-coaching

Procedia PDF Downloads 88
13864 Energy Consumption Estimation for Hybrid Marine Power Systems: Comparing Modeling Methodologies

Authors: Kamyar Maleki Bagherabadi, Torstein Aarseth Bø, Truls Flatberg, Olve Mo

Abstract:

Hydrogen fuel cells and batteries are among the promising solutions aligned with carbon emission reduction goals for the marine sector. However, the higher installation and operation costs of hydrogen-based systems compared to conventional diesel gensets raise questions about the appropriate hydrogen tank size and about energy and fuel consumption estimates. Ship designers need methodologies and tools to calculate energy and fuel consumption for different component sizes, to support decision-making on feasibility and performance in retrofit and design cases. The aim of this work is to compare three alternative modeling approaches for estimating energy and fuel consumption with various hydrogen tank sizes, battery capacities, and load-sharing strategies. A fishery vessel is selected as an example, using logged load demand data over a year of operations. The modeled power system consists of a PEM fuel cell, a diesel genset, and a battery. The methodologies are: first, an energy-based model; second, a time-domain model of load variations with a rule-based power management system (PMS); and third, a time-domain load model with a dynamic PMS strategy based on optimization with perfect foresight. The errors and potential of the methods are discussed, and design sensitivity studies are conducted for this case. The results show that the energy-based method can estimate fuel and energy consumption with acceptable accuracy, but that models considering the time variation of the load provide more realistic estimates of energy and fuel consumption with respect to hydrogen tank and battery size, while keeping computational time low.
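A minimal sketch of the second methodology's rule-based dispatch: at each time step the fuel cell serves the load up to its rating and the genset covers the remainder, after which the energy shares yield hydrogen and diesel consumption. The load profile, fuel cell rating, and specific-consumption figures are illustrative assumptions, not the paper's vessel data.

```python
import numpy as np

# Hourly load profile for one year (kW), a synthetic stand-in for the
# logged load demand data of the fishery vessel.
rng = np.random.default_rng(4)
load_kw = np.clip(400.0 + 150.0 * rng.standard_normal(24 * 365), 50.0, None)
fc_rating_kw = 450.0                       # assumed fuel cell rating

fc_kw = np.minimum(load_kw, fc_rating_kw)  # fuel cell serves load up to rating
genset_kw = load_kw - fc_kw                # diesel genset covers the remainder

h2_kg = fc_kw.sum() * 0.055                # assumed 0.055 kg H2 per kWh
diesel_l = genset_kw.sum() * 0.22          # assumed 0.22 l diesel per kWh
```

An energy-based model would instead work only with aggregate energy figures, losing the hour-by-hour split that drives the tank-sizing results in the abstract.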

Keywords: fuel cell, battery, hydrogen, hybrid power system, power management system

Procedia PDF Downloads 11
13863 A Study on Improvement of Straightness of the Preform Pulling Process of Hollow Pipe by the Finite Element Analysis Method

Authors: Yeon-Jong Jeong, Jun-Hong Park, Hyuk Choi

Abstract:

In this study, we investigated the design of intermediate dies in multipass drawing. Drawing has been studied continuously because of its advantages of better dimensional accuracy, smoother surfaces, and improved mechanical properties. Among drawing processes, multipass drawing, a method for realizing complicated shapes, is discussed here. The most important factors in multipass drawing are dimensional accuracy and simplification of the process. To this end, multistage shape drawing was carried out with various intermediate die shape designs, and finite element analysis was performed.

Keywords: FEM (Finite Element Method), multipass drawing, intermediate die, hollow pipe

Procedia PDF Downloads 302
13862 A System Dynamics Approach to Technological Learning Impact for Cost Estimation of Solar Photovoltaics

Authors: Rong Wang, Sandra Hasanefendic, Elizabeth von Hauff, Bart Bossink

Abstract:

Technological learning and learning curve models have long been used to estimate the development of photovoltaic (PV) costs over time for climate mitigation targets. They can integrate a number of technological learning sources that influence the learning process, yet accurate and realistic cost estimations of PV development remain difficult to achieve. This paper develops four hypothetical, alternative learning curve models by proposing different combinations of technological learning sources, including local and global technology experience and the knowledge stock. It focuses on the non-linear relationship between costs and the technological learning sources and their dynamic interaction, and uses the system dynamics approach to produce a more accurate estimation of future PV costs. As a case study, data from China are gathered to illustrate that the learning curve model incorporating both global and local experience is more accurate and realistic than the other three models for PV cost estimation. Further, absorbing and integrating global experience into the local industry has a positive impact on PV cost reduction. Although the learning curve model incorporating the knowledge stock is not realistic for current PV cost deployment in China, it still plays an effective positive role in future PV cost reduction.
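The learning-curve idea behind the four models can be sketched as a power law of cumulative experience: each doubling of capacity cuts unit cost by the learning rate. The two-factor variant below combines local and global cumulative capacity; the exponents and starting values are illustrative assumptions, not fitted parameters from the paper.

```python
import numpy as np

def one_factor_cost(c0, x, b):
    """Unit cost after cumulative capacity x; c0 is the initial cost,
    b the learning exponent."""
    return c0 * x ** (-b)

def two_factor_cost(c0, x_local, x_global, b_loc, b_glob):
    """Variant combining local and global cumulative experience."""
    return c0 * x_local ** (-b_loc) * x_global ** (-b_glob)

# Exponent corresponding to a 20% learning rate: each doubling of
# cumulative capacity reduces unit cost by 20%.
b = np.log2(1.0 / (1.0 - 0.20))
cost_now = one_factor_cost(1.0, 1.0, b)
cost_doubled = one_factor_cost(1.0, 2.0, b)   # expected ratio: 0.8
```

The system dynamics contribution of the paper is to let these exponents and the experience stocks interact dynamically rather than remain fixed.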

Keywords: photovoltaic, system dynamics, technological learning, learning curve

Procedia PDF Downloads 79
13861 The Use of Matlab Software as the Best Way to Recognize the Penumbra Region in Radiotherapy

Authors: Alireza Shayegan, Morteza Amirabadi

Abstract:

The γ (gamma) tool was developed to quantitatively compare dose distributions, either measured or calculated. Before computing γ, the dose and distance scales of the two distributions, referred to as the evaluated and reference distributions, are re-normalized by dose and distance criteria, respectively. This re-normalization allows the dose distributions to be compared simultaneously along the dose and distance axes. Several two-dimensional images were acquired using a scanning liquid ionization chamber EPID and Extended Dose Range (EDR2) films for regular and irregular radiation fields. The raw images were then converted into two-dimensional dose maps. Translational and rotational manipulations were performed on the images using Matlab software. These evaluated dose distribution maps were then compared with the corresponding original dose maps, which served as the reference dose maps.
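The γ comparison can be sketched in one dimension: each evaluated point is compared against all reference points, with the dose difference normalized by the dose criterion and the spatial separation by the distance-to-agreement criterion; γ ≤ 1 means the point passes. The sigmoid penumbra profiles below are assumed test data, not the EPID/film measurements of the paper.

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dose_crit, dist_crit):
    """1-D gamma index: for each evaluated point, the minimum over all
    reference points of sqrt((dose diff / dose_crit)^2 +
    (distance / dist_crit)^2)."""
    dd = (d_eval[:, None] - d_ref[None, :]) / dose_crit
    dx = (x_eval[:, None] - x_ref[None, :]) / dist_crit
    return np.sqrt(dd ** 2 + dx ** 2).min(axis=1)

x = np.linspace(0.0, 20.0, 201)                 # position (mm)
ref = 1.0 / (1.0 + np.exp((x - 10.0) / 1.5))    # reference penumbra profile
ev = 1.0 / (1.0 + np.exp((x - 10.3) / 1.5))     # evaluated profile, 0.3 mm shift
g = gamma_index_1d(x, ref, x, ev, dose_crit=0.03, dist_crit=3.0)  # 3% / 3 mm
pass_rate = (g <= 1.0).mean()
```

A 0.3 mm shift is well inside the 3 mm distance criterion, so every point passes; a shift beyond 3 mm in the steep penumbra region would drive γ above 1.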

Keywords: energetic electron, gamma function, penumbra, Matlab software

Procedia PDF Downloads 284
13860 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights

Authors: Nelson Bii, Christopher Ouma, John Odhiambo

Abstract:

Non-response is a potential source of error in sample surveys: it introduces bias and large variance into the estimation of finite population parameters. Regression models are recognized as one technique for reducing the bias and variance due to random non-response using auxiliary data. In this study, random non-response is assumed to occur in the survey variable at the second stage of cluster sampling, with full auxiliary information available throughout. The auxiliary information is used at the estimation stage via a regression model, specifically an improved Nadaraya-Watson kernel regression technique, to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. A simulation study further indicates that the proposed estimator has smaller bias and mean squared error than existing estimators of the finite population mean, as well as tighter confidence interval lengths at a 95% coverage rate. These results are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
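The building block the paper improves is plain Nadaraya-Watson kernel regression, m(x) = Σ K_h(x − x_i) y_i / Σ K_h(x − x_i). The sketch below uses a Gaussian kernel and a fixed bandwidth on simulated data; the kernel choice, bandwidth, and data are illustrative assumptions, not the paper's improved weights.

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson estimate of E[y | x] at points x0, with a
    Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)  # kernel weights
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 400)
y = np.sin(2.0 * np.pi * x) + rng.normal(scale=0.1, size=400)

grid = np.linspace(0.1, 0.9, 9)
m_hat = nw_estimate(grid, x, y, h=0.05)  # recovers sin(2*pi*x) approximately
```

In the non-response setting, fitted values like `m_hat` stand in for the missing survey responses at the estimation stage.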

Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths

Procedia PDF Downloads 118
13859 RP-HPLC Method Development and Its Validation for Simultaneous Estimation of Metoprolol Succinate and Olmesartan Medoxomil Combination in Bulk and Tablet Dosage Form

Authors: S. Jain, R. Savalia, V. Saini

Abstract:

A simple, accurate, precise, sensitive, and specific RP-HPLC method was developed and validated for the simultaneous estimation of metoprolol succinate and olmesartan medoxomil in bulk and tablet dosage form. The method showed adequate separation of metoprolol succinate and olmesartan medoxomil from their degradation products. Separation was achieved on a Phenomenex Luna ODS C18 column (250 mm × 4.6 mm i.d., 5 μm particle size) with an isocratic mixture of acetonitrile and 50 mM phosphate buffer (pH 4.0, adjusted with glacial acetic acid) in the ratio 55:45 v/v, a mobile phase flow rate of 1.0 ml/min, an injection volume of 20 μl, and a detection wavelength of 225 nm. The retention times for metoprolol succinate and olmesartan medoxomil were 2.451 ± 0.1 min and 6.167 ± 0.1 min, respectively. Linearity was investigated in the ranges 5-50 μg/ml and 2-20 μg/ml, with correlation coefficients of 0.999 and 0.9996, for metoprolol succinate and olmesartan medoxomil, respectively. The limits of detection were 0.2847 μg/ml and 0.1251 μg/ml, and the limits of quantification 0.8630 μg/ml and 0.3793 μg/ml, for metoprolol succinate and olmesartan medoxomil, respectively. The proposed method was validated per ICH guidelines for linearity, accuracy, precision, specificity, and robustness for the estimation of metoprolol succinate and olmesartan medoxomil in a commercially available tablet dosage form, and the results were satisfactory. The developed and validated stability-indicating method can thus be used successfully for marketed formulations.

Keywords: metoprolol succinate, olmesartan medoxomil, RP-HPLC method, validation, ICH

Procedia PDF Downloads 294
13858 State Estimation Based on Unscented Kalman Filter for Burgers’ Equation

Authors: Takashi Shimizu, Tomoaki Hashimoto

Abstract:

Controlling the flow of fluids is a challenging problem that arises in many fields. Burgers' equation is a fundamental equation for several flow phenomena such as traffic, shock waves, and turbulence, and the optimal feedback control method known as model predictive control has been proposed for it. However, model predictive control is inapplicable to systems whose state variables are not all exactly known. From a practical point of view, it is unusual for all state variables to be known exactly, because they are measured through output sensors and only limited parts of them are available; in fact, the flow velocities of fluid systems usually cannot be measured over the whole spatial domain. Hence, any practical feedback controller for fluid systems must incorporate some type of state estimator. To apply model predictive control to fluid systems described by Burgers' equation, a state estimation method for Burgers' equation with limited measurable state variables must be established. For this purpose, we apply the unscented Kalman filter to estimate the state variables of fluid systems described by Burgers' equation. The objective of this study is to establish such a state estimation method, and its effectiveness is verified by numerical simulations.
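At the heart of the unscented Kalman filter is the unscented transform: sigma points drawn from the current mean and covariance are pushed through the nonlinear model, and the transformed mean and covariance are recovered from weighted sums. The sketch below applies it to a 2-state toy map, not the discretized Burgers' dynamics of the paper; the map and parameters are assumptions for illustration.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate (mean, cov) through nonlinear f using 2n+1 sigma points."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)        # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
    w = np.full(2 * n + 1, 0.5 / (n + kappa))        # sigma-point weights
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

f = lambda x: np.array([x[0] * x[1], x[0] + x[1]])   # toy nonlinear map
m, P = unscented_transform(np.array([1.0, 2.0]), 0.01 * np.eye(2), f)
```

In a full UKF, this transform is applied twice per step, once through the process model and once through the measurement model, before the standard Kalman update.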

Keywords: observer systems, unscented Kalman filter, nonlinear systems, Burgers' equation

Procedia PDF Downloads 134
13857 The Learning Impact of a 4-Dimensional Digital Construction Learning Environment

Authors: Chris Landorf, Stephen Ward

Abstract:

This paper addresses a virtual environment approach to work integrated learning for students in construction-related disciplines. The virtual approach provides a safe and pedagogically rigorous environment where students can apply theoretical knowledge in a simulated real-world context. The paper describes the development of a 4-dimensional digital construction environment and associated learning activities funded by the Australian Office for Learning and Teaching. The environment was trialled with over 1,300 students and evaluated through questionnaires, observational studies and coursework analysis. Results demonstrate a positive impact on students’ technical learning and collaboration skills, but there is need for further research in relation to critical thinking skills and work-readiness.

Keywords: architectural education, construction industry, digital learning environments, immersive learning

Procedia PDF Downloads 387
13856 A Digital Filter for Symmetrical Components Identification

Authors: Khaled M. El-Naggar

Abstract:

This paper presents a fast and efficient technique for monitoring and supervising power system disturbances generated by the dynamic behaviour of power systems or by faults. Monitoring power system quantities involves monitoring the fundamental voltage and current magnitudes and their frequencies, as well as their negative- and zero-sequence components, under different operating conditions. The proposed technique is based on the simulated annealing (SA) optimization technique. The method uses a digital set of measurements of the voltage or current waveforms at a power system bus to perform the estimation process digitally. The algorithm is tested using different simulated data to monitor the symmetrical components of power system waveforms. Different study cases are considered in this work, and the effects of the number of samples, the sampling frequency, and the sample window size are studied. Results are reported and discussed.
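The quantities being estimated are the classical Fortescue symmetrical components. As a point of reference for what the SA-based estimator recovers from sampled waveforms, the direct decomposition of one set of phasors can be sketched as follows (illustrative only, not the paper's algorithm):

```python
import cmath
import math

def symmetrical_components(va, vb, vc):
    """Fortescue transform: decompose three phase phasors into
    zero-, positive-, and negative-sequence components."""
    a = cmath.exp(2j * math.pi / 3)  # 120-degree rotation operator
    v0 = (va + vb + vc) / 3
    v1 = (va + a * vb + a**2 * vc) / 3
    v2 = (va + a**2 * vb + a * vc) / 3
    return v0, v1, v2

# A balanced positive-sequence set has only a positive-sequence component.
va = 1 + 0j
vb = cmath.exp(-2j * math.pi / 3)
vc = cmath.exp(2j * math.pi / 3)
v0, v1, v2 = symmetrical_components(va, vb, vc)
```

An estimator such as the SA technique in the abstract first fits phasor magnitudes and angles to the sampled waveforms; the decomposition above then yields the sequence components.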

Keywords: estimation, faults, measurement, symmetrical components

Procedia PDF Downloads 445
13855 A Bathtub Curve from Nonparametric Model

Authors: Eduardo C. Guardia, Jose W. M. Lima, Afonso H. M. Santos

Abstract:

This paper presents a nonparametric method for obtaining the hazard-rate “bathtub curve” of power system components. The model is a mixture of the three known phases of a component's life: the decreasing failure rate (DFR), the constant failure rate (CFR), and the increasing failure rate (IFR), each represented by a parametric Weibull model. The parameters are obtained by simultaneously fitting the model to the kernel nonparametric hazard rate curve. From the Weibull parameters and failure rate curves, the useful lifetime and the characteristic lifetime were defined. To demonstrate the model, historical time-to-failure data of distribution transformers were used as an example. The resulting “bathtub curve” shows the failure rate over the equipment lifetime, which can be applied in economic and replacement decision models.
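The three-phase Weibull mixture can be sketched as a competing-risks sum of Weibull hazards, where shape parameters β<1, β=1 and β>1 give the DFR, CFR and IFR phases respectively. The parameters below are illustrative, not fitted values from the paper:

```python
def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def bathtub_hazard(t, phases):
    """Total hazard as the sum of the three phase hazards
    (competing-risks form of the bathtub curve)."""
    return sum(weibull_hazard(t, beta, eta) for beta, eta in phases)

phases = [(0.5, 2.0),   # beta < 1: decreasing failure rate (infant mortality)
          (1.0, 50.0),  # beta = 1: constant failure rate (useful life)
          (5.0, 30.0)]  # beta > 1: increasing failure rate (wear-out)
```

Evaluated over time, the sum is high early (infant mortality), drops to a flat useful-life plateau, then rises again as wear-out dominates.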

Keywords: bathtub curve, failure analysis, lifetime estimation, parameter estimation, Weibull distribution

Procedia PDF Downloads 426
13854 Flame Volume Prediction and Validation for Lean Blowout of Gas Turbine Combustor

Authors: Ejaz Ahmed, Huang Yong

Abstract:

The operation of aero engines is critically important in the vicinity of lean blowout (LBO) limits. Lefebvre’s empirical-correlation model of LBO has been extended to a flame volume concept by the authors. The flame volume accounts for the effects of geometric configuration and the complex spatial interaction of mixing, turbulence, heat transfer and combustion processes inside the gas turbine combustion chamber. For these reasons, flame-volume-based LBO predictions are more accurate. Although LBO prediction accuracy has improved, estimating the flame volume Vf in real gas turbine combustors remains a challenge. This work extends the flame volume prediction approach, previously based on fuel iterative approximation with cold-flow simulations, to reactive flow simulations. The flame volume for 11 combustor configurations has been simulated and validated against experimental data. To make the prediction methodology robust, as required at the preliminary design stage, reactive flow simulations were carried out with a combination of the probability density function (PDF) and discrete phase model (DPM) approaches in FLUENT 15.0. A criterion for flame identification was defined, and two important parameters, the critical injection diameter (Dp,crit) and the critical temperature (Tcrit), were identified; their influence on the reactive flow simulation was studied for Vf estimation. The obtained results exhibit a ±15% error in Vf estimation relative to the experimental data.

Keywords: CFD, combustion, gas turbine combustor, lean blowout

Procedia PDF Downloads 250
13853 On Modeling Data Sets by Means of a Modified Saddlepoint Approximation

Authors: Serge B. Provost, Yishan Zhang

Abstract:

A moment-based adjustment to the saddlepoint approximation is introduced in the context of density estimation. First applied to univariate distributions, the methodology is then extended to the bivariate case: the density function associated with each marginal distribution is estimated by means of the saddlepoint approximation, and a bivariate adjustment is applied to the product of the resulting density estimates. The connection to the distribution of empirical copulas is pointed out. In addition, a novel approach is proposed for estimating the support of a distribution. As these results rely solely on sample moments and empirical cumulant-generating functions, they are particularly well suited to modeling massive data sets. Several illustrative applications are presented.
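For a distribution whose cumulant-generating function K(s) is known, the classical (unadjusted) saddlepoint approximation that the proposed method starts from solves K'(ŝ) = x and sets f̂(x) = exp(K(ŝ) − ŝx)/√(2πK''(ŝ)). A sketch for the Gamma case, where the approximation is known to be exact up to a constant (Stirling) factor; the moment-based adjustment of the abstract is not reproduced here:

```python
import math

def gamma_cgf_saddlepoint(x, alpha, lam):
    """Saddlepoint density approximation for Gamma(alpha, lam),
    built only from its CGF K(s) = -alpha*log(1 - s/lam)."""
    s_hat = lam - alpha / x             # solves K'(s) = alpha/(lam - s) = x
    K = -alpha * math.log(1 - s_hat / lam)
    K2 = alpha / (lam - s_hat) ** 2     # K''(s_hat)
    return math.exp(K - s_hat * x) / math.sqrt(2 * math.pi * K2)

def gamma_pdf(x, alpha, lam):
    """Exact Gamma density, for comparison."""
    return lam**alpha * x**(alpha - 1) * math.exp(-lam * x) / math.gamma(alpha)

# For Gamma, saddlepoint/exact is a constant in x (close to 1 for moderate alpha).
ratio1 = gamma_cgf_saddlepoint(1.0, 3.0, 2.0) / gamma_pdf(1.0, 3.0, 2.0)
ratio2 = gamma_cgf_saddlepoint(2.5, 3.0, 2.0) / gamma_pdf(2.5, 3.0, 2.0)
```

In the data-driven setting of the abstract, the known CGF would be replaced by the empirical cumulant-generating function computed from the sample.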

Keywords: empirical cumulant-generating function, endpoints identification, saddlepoint approximation, sample moments, density estimation

Procedia PDF Downloads 148
13852 An Efficient Propensity Score Method for Causal Analysis With Application to Case-Control Study in Breast Cancer Research

Authors: Ms Azam Najafkouchak, David Todem, Dorothy Pathak, Pramod Pathak, Joseph Gardiner

Abstract:

Propensity score (PS) methods have recently become standard tools for causal inference in observational studies, where exposure is not randomly assigned and confounding can therefore bias the estimation of the treatment effect on the outcome. For a binary outcome, the effect of treatment can be estimated by odds ratios, relative risks, or risk differences. However, different PS methods may yield different estimates of the treatment effect. The main PS analysis methods in use include matching, inverse probability weighting, stratification, and covariate adjustment on the PS. Because of the dangers of discretizing continuous variables (exposure, covariates), the focus of this paper is on how variation in cut-points or boundaries affects the average treatment effect (ATE) estimated by the PS stratification method. Rather than choosing arbitrary cut-points, we continuously discretize the PS and accumulate information across all cut-points for inference. We use Monte Carlo simulation to evaluate the ATE, focusing on two PS methods: stratification and covariate adjustment on the PS. We then illustrate these findings with an analysis of data from a case-control study of breast cancer, the Polish Women’s Health Study.
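The fixed-cut-point stratification estimator that the paper refines can be sketched as follows, with simulated confounded data and the true propensity score standing in for a fitted model; the continuous-discretization approach of the abstract is not reproduced here, and all parameter values are illustrative:

```python
import math
import random

def stratified_ate(ps, treat, y, n_strata=5):
    """ATE by PS stratification: sort units by propensity score, split
    into equal-size strata, and average the within-stratum
    treated-minus-control mean differences with stratum-size weights."""
    order = sorted(range(len(ps)), key=lambda i: ps[i])
    size = len(order) // n_strata
    total, weight = 0.0, 0
    for s in range(n_strata):
        block = order[s * size:] if s == n_strata - 1 else order[s * size:(s + 1) * size]
        t = [y[i] for i in block if treat[i] == 1]
        c = [y[i] for i in block if treat[i] == 0]
        if t and c:  # skip strata without overlap
            total += len(block) * (sum(t) / len(t) - sum(c) / len(c))
            weight += len(block)
    return total / weight

# Simulated data: true treatment effect 2.0; the confounder x drives
# both treatment assignment and the outcome.
rng = random.Random(0)
n = 5000
x = [rng.gauss(0, 1) for _ in range(n)]
ps = [1 / (1 + math.exp(-xi)) for xi in x]
treat = [1 if rng.random() < p else 0 for p in ps]
y = [2.0 * t + xi + rng.gauss(0, 1) for t, xi in zip(treat, x)]

naive = (sum(yi for yi, t in zip(y, treat) if t) / sum(treat)
         - sum(yi for yi, t in zip(y, treat) if not t) / (n - sum(treat)))
ate = stratified_ate(ps, treat, y)
```

The naive treated-minus-control difference is inflated by confounding, while stratification on the PS recovers an estimate close to the true effect; how that estimate moves with the stratum boundaries is exactly the sensitivity the paper studies.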

Keywords: average treatment effect, propensity score, stratification, covariate adjusted, Monte Carlo estimation, breast cancer, case-control study

Procedia PDF Downloads 88
13851 On Confidence Intervals for the Difference between Inverse of Normal Means with Known Coefficients of Variation

Authors: Arunee Wongkhao, Suparat Niwitpong, Sa-aat Niwitpong

Abstract:

In this paper, we propose two new confidence intervals for the difference between the inverses of normal means with known coefficients of variation. One interval is constructed from the generalized confidence interval approach, and the other from the closed-form method of variance estimation. We examine the performance of these confidence intervals in terms of coverage probabilities and expected lengths via Monte Carlo simulation.
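Coverage probability is evaluated exactly as described: simulate many samples, construct the interval each time, and record how often it contains the true parameter. A minimal sketch for the simplest case (a known-variance z-interval for a normal mean, with illustrative values; these are not the intervals proposed in the paper):

```python
import math
import random
import statistics

def coverage_probability(mu, sigma, n, n_sims=2000, z=1.96, seed=1):
    """Monte Carlo estimate of the coverage probability of the
    known-sigma z-interval for a normal mean."""
    rng = random.Random(seed)
    half = z * sigma / math.sqrt(n)  # half-width of the interval
    hits = 0
    for _ in range(n_sims):
        xbar = statistics.fmean(rng.gauss(mu, sigma) for _ in range(n))
        if xbar - half <= mu <= xbar + half:
            hits += 1
    return hits / n_sims

cov = coverage_probability(mu=5.0, sigma=2.0, n=30)
```

Expected length is estimated the same way, by averaging the interval widths over simulations; for the z-interval the width is constant, but for the generalized and closed-form intervals of the paper it varies with the sample.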

Keywords: coverage probability, expected length, inverse of normal mean, coefficient of variation, generalized confidence interval, closed form method of variance estimation

Procedia PDF Downloads 295
13850 Modeling of Turbulent Flow for Two-Dimensional Backward-Facing Step Flow

Authors: Alex Fedoseyev

Abstract:

This study investigates a simplified generalized hydrodynamic equation (GHE) model for the simulation of turbulent flow over a two-dimensional backward-facing step (BFS) at Reynolds number Re=132000. The GHE was derived from the generalized Boltzmann equation (GBE), which was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions. Compared to the Navier-Stokes equations (NSE), the GHE has additional temporal and spatial fluctuation terms. These terms carry a timescale multiplier τ, and the GHE reduces to the NSE when τ is zero. The nondimensional τ is a product of the Reynolds number and the squared length-scale ratio, τ=Re*(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. BFS flow modeling results obtained by 2D calculations cannot match the experimental data for Re>450: one or two additional equations must then be added to the NSE as a turbulence model, which typically has two to five parameters to be tuned for specific problems. It is shown that the GHE does not require an additional turbulence model, and the turbulent velocity results are in good agreement with the experimental results. A review of several studies on the simulation of flow over the BFS from 1980 to 2023 is provided; most of these studies used different turbulence models when Re>1000. In this study, the 2D turbulent flow over a BFS with height H=L/3 (where L is the channel height) at Reynolds number Re=132000 was investigated using numerical solutions of the GHE (by a finite-element method) and compared to solutions of the Navier-Stokes equations, the k-ε turbulence model, and experimental results. The comparison included the velocity profiles at X/L=5.33 (near the end of the recirculation zone, available from the experiment), the recirculation zone length, and the velocity flow field.
The mean velocity of the NSE was obtained by averaging the solution over the number of time steps. The solution with a standard k-ε model shows a velocity profile at X/L=5.33 that has no backward flow. The standard k-ε model underpredicts the experimental recirculation zone length X/L=7.0±0.5 by a substantial 20-25%, so a more sophisticated turbulence model is needed for this problem. The obtained data confirm that the GHE results are in good agreement with the experimental results for turbulent flow over a two-dimensional BFS; a turbulence model was not required in this case. The computations were stable. The solution time for the GHE is the same as or less than that for the NSE, and significantly less than that for the NSE with a turbulence model. The proposed approach was limited to 2D and one Reynolds number; further work will extend it to 3D flow and higher Re.

Keywords: backward-facing step, comparison with experimental data, generalized hydrodynamic equations, separation, reattachment, turbulent flow

Procedia PDF Downloads 42
13849 Behaviour of Model Square Footing Resting on Three Dimensional Geogrid Reinforced Sand Bed

Authors: Femy M. Makkar, S. Chandrakaran, N. Sankar

Abstract:

The concept of reinforced earth has been used in the field of geotechnical engineering since the 1960s for many applications, such as the construction of road and rail embankments, pavements, retaining walls, shallow foundations, and soft ground improvement. Conventionally, planar geosynthetic materials such as geotextiles and geogrids have been used as the reinforcing elements. Recently, the use of three-dimensional reinforcements has become one of the emerging trends in this field, so in the present investigation a three-dimensional geogrid is proposed as the reinforcing material. Laboratory-scale plate load tests were conducted on a model square footing resting on a 3D geogrid reinforced sand bed. The performance of 3D geogrids in triangular and square patterns was compared with that of conventional geogrids, and the improvement in bearing capacity and the reduction in settlement and heave were evaluated. When a single layer of reinforcement was placed at an optimum depth of 0.25B from the bottom of the footing, the bearing capacity of the conventional geogrid reinforced soil improved by a factor of 1.85 compared to unreinforced soil, whereas the 3D geogrid reinforced soil with triangular and square patterns showed improvements of 2.69 and 3.05 times, respectively. The 3D geogrids also performed better than conventional geogrids in reducing the settlement and heave of the sand bed around the model footing.

Keywords: 3D reinforcing elements, bearing capacity, heave, settlement

Procedia PDF Downloads 283
13848 Protein Tertiary Structure Prediction by a Multiobjective Optimization and Neural Network Approach

Authors: Alexandre Barbosa de Almeida, Telma Woerle de Lima Soares

Abstract:

Protein structure prediction is a challenging task in the bioinformatics field. The biological function of a protein relies largely on the shape of its three-dimensional conformational structure, yet less than 1% of all known proteins have had their structures solved. This work proposes a deep learning model to address this problem, attempting to predict some aspects of protein conformations. Through a process of multiobjective dominance, a recurrent neural network was trained to abstract the particular bias of each individual multiobjective algorithm, generating a heuristic that can be useful for predicting relevant aspects of the three-dimensional conformation formation process known as protein folding.

Keywords: Ab initio heuristic modeling, multiobjective optimization, protein structure prediction, recurrent neural network

Procedia PDF Downloads 188