Search results for: uniformly minimum variance unbiased estimator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3415

3295 Performance Evaluation of IAR Multi-Crop Thresher

Authors: Idris Idris Sunusi, U.S. Muhammed, N.A. Sale, I.B. Dalha, N.A. Adam

Abstract:

Threshing efficiency and mechanical grain damage are among the important parameters used in rating the performance of agricultural threshers. To be acceptable to farmers, threshers should have high threshing efficiency and low grain damage. The objective of the research is to evaluate the performance of the thresher using sorghum and millet; the performance parameters considered are threshing efficiency and mechanical grain damage. For millet, four drum speed levels (700, 800, 900, and 1000 rpm) were considered, while for sorghum the levels were 600, 700, 800, and 900 rpm. The feed rate levels were 3, 4, 5, and 6 kg/min for both crops; the moisture content levels were 8.93 and 10.38% for sorghum and 9.21 and 10.81% for millet. For millet, the test results showed a maximum threshing efficiency of 98.37% and a minimum mechanical grain damage of 0.24%, while for sorghum the results indicated a maximum threshing efficiency of 99.38% and a minimum mechanical grain damage of 0.75%. In comparison to the previous thresher, the threshing efficiency and mechanical grain damage of the modified machine improved by 2.01% and 330.56% for millet and by 5.31% and 287.64% for sorghum. Analysis of variance (ANOVA) also showed that the effects of drum speed, feed rate, and moisture content on the performance parameters were significant.

Keywords: threshing efficiency, mechanical grain damage, sorghum and millet, multi-crop thresher

Procedia PDF Downloads 322
3294 Considering the Reliability of Measurements Issue in Distributed Adaptive Estimation Algorithms

Authors: Wael M. Bazzi, Amir Rastegarnia, Azam Khalili

Abstract:

In this paper, we consider the issue of reliability of measurements in the distributed adaptive estimation problem. To this aim, we assume a sensor network in which the observation noise variance differs among the sensors and propose a new estimation method based on the incremental distributed least mean-square (IDLMS) algorithm. The proposed method contains two phases: (i) estimation of each sensor's observation noise variance, and (ii) estimation of the desired parameter using the estimated observation noise variances. To deal with the reliability of measurements, in the second phase of the proposed algorithm the step-size parameter is adjusted for each sensor according to its observation noise variance. As our simulation results show, the proposed algorithm considerably improves the performance of the IDLMS algorithm under the same conditions.
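
As a rough illustration of the two-phase idea, the sketch below runs an incremental LMS cycle around a ring of sensors whose step sizes are scaled inversely to their observation noise variances; the network size, signals, and scalings are invented for the example and are not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical setup: a ring of sensors jointly estimating a common parameter w
n_sensors, dim, n_iters = 10, 4, 500
w_true = rng.standard_normal(dim)
noise_var = rng.uniform(0.01, 1.0, n_sensors)   # unequal observation noise

# phase I (assumed already done): per-sensor noise variances estimated;
# here the true values stand in for the estimates
mu0 = 0.05
mu = mu0 / noise_var                             # reliability-weighted step sizes
mu = mu / mu.max() * mu0                         # keep steps in a stable range

w_hat = np.zeros(dim)
for _ in range(n_iters):
    # phase II: one incremental cycle around the ring (IDLMS-style update)
    for k in range(n_sensors):
        x = rng.standard_normal(dim)             # regressor at sensor k
        d = x @ w_true + rng.normal(0.0, np.sqrt(noise_var[k]))
        e = d - x @ w_hat
        w_hat = w_hat + mu[k] * e * x            # noisy sensors take small steps

print("mean-square deviation:", np.mean((w_hat - w_true) ** 2))
```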

Keywords: adaptive filter, distributed estimation, sensor network, IDLMS algorithm

Procedia PDF Downloads 608
3293 New Estimation in Autoregressive Models with Exponential White Noise by Using Reversible Jump MCMC Algorithm

Authors: Suparman Suparman

Abstract:

The white noise in an autoregressive (AR) model is often assumed to be normally distributed. In applications, however, the white noise frequently does not follow a normal distribution. This paper aims to estimate the parameters of an AR model with exponential white noise. A Bayesian method is adopted: a prior distribution for the parameters of the AR model is selected and combined with the likelihood function of the data to obtain a posterior distribution, from which a Bayesian estimator for the parameters is derived. Because the order of the AR model is itself treated as a parameter, this Bayesian estimator cannot be calculated in closed form. To resolve this problem, the reversible jump Markov chain Monte Carlo (MCMC) method is adopted. As a result, the parameters and the order of the AR model can be estimated simultaneously.
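
A minimal sketch of the fixed-order case (an AR(1) with exponential noise, plain Metropolis-Hastings rather than reversible jump, and flat priors) illustrates how the positivity of the residuals enters the likelihood; all settings are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate AR(1) with exponential (positive) white noise: y_t = phi*y_{t-1} + e_t
phi_true, lam_true, n = 0.6, 2.0, 400
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.exponential(1.0 / lam_true)

def log_post(phi, lam):
    # flat priors; the exponential likelihood requires all residuals >= 0
    if lam <= 0:
        return -np.inf
    eps = y[1:] - phi * y[:-1]
    if np.any(eps < 0):
        return -np.inf
    return (n - 1) * np.log(lam) - lam * eps.sum()

# random-walk Metropolis-Hastings over (phi, lambda)
phi, lam = 0.0, 1.0
lp = log_post(phi, lam)
samples = []
for it in range(20000):
    phi_p = phi + 0.02 * rng.standard_normal()
    lam_p = lam + 0.10 * rng.standard_normal()
    lp_p = log_post(phi_p, lam_p)
    if np.log(rng.random()) < lp_p - lp:
        phi, lam, lp = phi_p, lam_p, lp_p
    if it >= 5000:                        # discard burn-in
        samples.append((phi, lam))

phi_hat, lam_hat = np.mean(samples, axis=0)
print(f"phi ~ {phi_hat:.3f} (true {phi_true}), lambda ~ {lam_hat:.3f} (true {lam_true})")
```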

Keywords: autoregressive (AR) model, exponential white noise, Bayesian, reversible jump Markov chain Monte Carlo (MCMC)

Procedia PDF Downloads 328
3292 Fathers' Knowledge and Attitude towards Breastfeeding: A Cross Sectional Study

Authors: Jacqueline R. Llamas, Agnes Regal

Abstract:

Objective: To determine the breastfeeding knowledge and attitudes of fathers seen at the University of Santo Tomas Hospital. Design: Cross-sectional design. Setting: University of Santo Tomas Hospital (USTH). Participants: 156 fathers who were accompanying their wives/children at the USTH. Findings: The Iowa Infant Feeding Attitude Scale showed fathers to be generally unbiased as to whether their child is fed breast milk or milk formula. About 85% agreed that breast milk is the ideal food for babies, 79% believed that breastfed babies are healthier than formula-fed babies, and 55% did not believe that breast milk lacks iron. About 80% agreed that breast milk is easily digested, 87% were aware of its economic value, and 57% agreed on its convenience. Breastfeeding support was noted in that 55% of the fathers would encourage mothers to breastfeed so as not to miss the joys of motherhood, and 91% believed that breastfeeding increases mother-infant bonding. About 57% do not feel left out whenever the mothers breastfeed. However, 46.6% support the decision of their wives to switch to formula feeding once they go back to work, only 42% find breastfeeding in public acceptable, and 57% would not allow mothers who drink alcohol to breastfeed. Conclusion: Although fathers' attitude is unbiased between breastfeeding and formula feeding, the majority of the fathers appreciate breastfeeding and its benefits. The fathers' level of education, age, profession, household income, and number of children also had an effect on their attitude towards breastfeeding.

Keywords: father, breastfeeding, breast milk, knowledge

Procedia PDF Downloads 391
3291 The Relationship between Dispositional Mindfulness, Adult Attachment Orientations, and Emotion Regulation

Authors: Jodie Stevenson, Lisa-Marie Emerson, Abigail Millings

Abstract:

Mindfulness has been conceptualized as a dispositional trait, which differs across individuals. Previous research has independently identified both adult attachment orientations and emotion regulation abilities as correlates of dispositional mindfulness. Research has also presented a two-factor model of the relationship between these three constructs. The present study aimed to further develop this model and investigated these relationships in a sample of 186 participants. Participants completed the Five Facet Mindfulness Questionnaire Short Form (FFMQ-SF), the Experiences in Close Relationships Scale for global attachment (ECR), the Emotion Regulation Questionnaire (ERQ), and the Adult Disorganized Attachment scale (ADA). Exploratory factor analysis revealed a 3-factor solution accounting for 59% of the variance across scores on these measures. The first factor accounted for 32% of the variance and loaded highly on attachment and mindfulness subscales. The second factor accounted for 15% of the variance, with strong loadings on emotion regulation subscales. The third factor accounted for 12% of the variance, with strong loadings on disorganized attachment and the mindfulness 'observe' subscale. The results further confirm the relationship between attachment, mindfulness, and emotion regulation, along with the unique addition of disorganized attachment. The extracted factors will then be used to predict well-being outcomes in an undergraduate student population.

Keywords: adult attachment, emotion regulation, mindfulness, well-being

Procedia PDF Downloads 351
3290 A Nonlinear Dynamical System with Application

Authors: Abdullah Eqal Al Mazrooei

Abstract:

In this paper, a nonlinear dynamical system of the bilinear class is presented. Bilinear systems are an important kind of nonlinear system because they have many applications in real life: they are used in biology, chemistry, manufacturing, engineering, and economics, where linear models are ineffective or inadequate, and they have recently been used to analyze and forecast weather conditions. Bilinear systems have three advantages. First, they describe many problems of great applied importance. Second, they give us approximations to nonlinear systems. Third, they have rich geometric and algebraic structures, which promise to be a fruitful field of research for scientists and applications. The type of nonlinearity usually treated and analyzed consists of bilinear interaction between the state vector and the system input; by using some properties of the tensor product, such systems can be transformed into linear systems. Here, however, we discuss the nonlinearity in which the state vector is multiplied by itself. This model is therefore able to handle evolutions according to the Lotka-Volterra models or the Lorenz weather models, enabling a wider and more flexible application of such models. As an application, an estimator is used to estimate temperatures. The results prove the efficiency of the proposed system.
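
For a concrete flavor of an estimator applied to such a quadratic-in-state model, here is a sketch of an extended Kalman filter tracking the Lorenz-63 system from noisy observations of one component; the paper does not specify its estimator, so the EKF, the noise levels, and the observation model are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Lorenz-63: the xz and xy terms are exactly the "state multiplied by itself" nonlinearity
sigma, rho, beta, dt = 10.0, 28.0, 8.0 / 3.0, 0.01

def f(x):  # one explicit-Euler step of the dynamics
    return x + dt * np.array([
        sigma * (x[1] - x[0]),
        x[0] * (rho - x[2]) - x[1],
        x[0] * x[1] - beta * x[2],
    ])

def F(x):  # Jacobian of the discretized dynamics
    return np.eye(3) + dt * np.array([
        [-sigma, sigma, 0.0],
        [rho - x[2], -1.0, -x[0]],
        [x[1], x[0], -beta],
    ])

H = np.array([[1.0, 0.0, 0.0]])              # observe only the first component
Q, R = 1e-3 * np.eye(3), np.array([[0.5]])   # assumed noise covariances

x_true = np.array([1.0, 1.0, 1.0])
x_hat, P = np.array([5.0, 5.0, 5.0]), 10.0 * np.eye(3)

for _ in range(2000):
    x_true = f(x_true)
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), 1)
    # EKF predict: linearize at the current estimate, then propagate
    Phi = F(x_hat)
    x_hat = f(x_hat)
    P = Phi @ P @ Phi.T + Q
    # EKF update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + (K @ (z - H @ x_hat)).ravel()
    P = (np.eye(3) - K @ H) @ P

print("final estimation error:", np.linalg.norm(x_hat - x_true))
```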

Keywords: Lorenz models, nonlinear systems, nonlinear estimator, state-space model

Procedia PDF Downloads 233
3289 Introduction of Robust Multivariate Process Capability Indices

Authors: Behrooz Khalilloo, Hamid Shahriari, Emad Roghanian

Abstract:

Process capability indices (PCIs) are important concepts in statistical quality control; they measure the capability of processes and how well processes meet certain specifications. An important issue in statistical quality control is parameter estimation. Under the assumption of multivariate normality, the distribution parameters, the mean vector and the variance-covariance matrix, must be estimated when they are unknown. Classic estimation methods such as method-of-moments estimation (MME) and maximum likelihood estimation (MLE) estimate the population parameters well when the data are not contaminated, but when outliers exist in the data, MME and MLE become weak estimators. Estimators that perform well in the presence of outliers are therefore needed. In this work, robust M-estimators are used to estimate these parameters, and robust process capability indices based on these robust estimators are introduced. The performance of the robust estimators in the presence of outliers and their effects on the process capability indices are evaluated with real and simulated multivariate data. The results indicate that the proposed robust capability indices perform much better than the existing process capability indices.
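
A small sketch of the underlying idea: on contaminated data, a robust location/scatter estimator barely moves while the classical one is dragged toward the outliers, and any capability index built on it inherits that stability. The sketch uses scikit-learn's minimum covariance determinant (MCD) as a stand-in for the paper's M-estimators, and a simple per-variable Cp-type index that is illustrative only, not the paper's index.

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(7)

# in-control bivariate process with 10% gross outliers mixed in
n, p = 200, 2
clean = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n)
outliers = rng.multivariate_normal([6, 6], np.eye(p), n // 10)
X = np.vstack([clean, outliers])

mle = EmpiricalCovariance().fit(X)        # classical (non-robust) estimates
mcd = MinCovDet(random_state=0).fit(X)    # robust MCD estimates

print("classical mean:", mle.location_.round(2))
print("robust mean:   ", mcd.location_.round(2))

# illustrative per-variable Cp: spec half-width over 3*sigma, using the
# diagonal of each covariance estimate
USL, LSL = np.array([4.0, 4.0]), np.array([-4.0, -4.0])
for name, cov in [("classical", mle.covariance_), ("robust", mcd.covariance_)]:
    cp = (USL - LSL) / (6.0 * np.sqrt(np.diag(cov)))
    print(f"{name:9s} per-variable Cp: {cp.round(2)}")
```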

Keywords: multivariate process capability indices, robust M-estimator, outlier, multivariate quality control, statistical quality control

Procedia PDF Downloads 250
3288 Weighted Rank Regression with Adaptive Penalty Function

Authors: Kang-Mo Jung

Abstract:

The use of regularization in statistical methods has become popular. The least absolute shrinkage and selection operator (LASSO) framework has become the standard tool for sparse regression. However, it is well known that the LASSO is sensitive to outliers and leverage points. We consider a new robust estimator composed of a weighted loss function on the pairwise differences of residuals and an adaptive penalty function that regulates the tuning parameter for each variable. Rank regression is resistant to regression outliers, but not to leverage points; by adopting a weighted loss function, the proposed method is also robust to leverage points in the predictor variables. Furthermore, the adaptive penalty function gives good statistical properties in variable selection, such as the oracle property and consistency. We develop an efficient algorithm to compute the proposed estimator using basic functions in the R language, with the tuning parameter chosen by the Bayesian information criterion (BIC). Numerical simulation shows that the proposed estimator is effective for analyzing real and contaminated data sets.
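
A toy sketch of the estimator's ingredients, the weighted pairwise rank loss plus an adaptive L1 penalty, is given below. The paper works in R; this is a Python stand-in with an arbitrary fixed tuning parameter rather than the BIC-chosen one, and the leverage weights are a hypothetical Mahalanobis-type choice.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# data with a leverage point and sparse true coefficients
n, p = 80, 5
X = rng.standard_normal((n, p))
X[0] = 10.0                                   # leverage point in the predictors
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ beta_true + rng.standard_normal(n)

# downweight high-leverage rows (hypothetical Mahalanobis-type weights)
h = np.sum((X - X.mean(0)) ** 2 / X.var(0), axis=1)
w = np.minimum(1.0, p / h)

beta0 = np.linalg.lstsq(X, y, rcond=None)[0]  # pilot fit for adaptive weights
lam = 0.5                                     # tuning parameter (BIC would pick this)

def objective(beta):
    e = y - X @ beta
    # weighted pairwise rank loss: sum_ij w_i w_j |e_i - e_j|
    diff = np.abs(e[:, None] - e[None, :])
    loss = np.sum(w[:, None] * w[None, :] * diff) / (n * n)
    penalty = lam * np.sum(np.abs(beta) / np.abs(beta0))   # adaptive L1
    return loss + penalty

fit = minimize(objective, beta0, method="Powell")
print("estimate:", fit.x.round(2), " true:", beta_true)
```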

Keywords: adaptive penalty function, robust penalized regression, variable selection, weighted rank regression

Procedia PDF Downloads 431
3287 Particle Swarm Optimization Based Method for Minimum Initial Marking in Labeled Petri Nets

Authors: Hichem Kmimech, Achref Jabeur Telmoudi, Lotfi Nabli

Abstract:

The estimation of the minimum initial marking (MIM) is a crucial problem in labeled Petri nets. In the case of multiple choices, the search for the initial marking leads to a resource-allocation optimization problem with two constraints: first, the firing sequence must be legal on the initial marking with respect to the firing vector; second, the total number of tokens must be minimal. In this article, the MIM problem is solved by the meta-heuristic particle swarm optimization (PSO). The proposed approach exploits the advantages of PSO to satisfy the two constraints and find all possible combinations of minimum initial markings with the best computing time. This method, more efficient than conventional ones, has an excellent impact on the resolution of the MIM problem. We demonstrate the effectiveness of our approach through a set of definitions, lemmas, and examples.
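
The following sketch shows the PSO mechanics on a toy surrogate of the problem: minimize the total tokens of an integer marking subject to linear covering constraints that stand in for the firing-legality check, which is problem-specific and omitted here. The matrices and PSO constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# toy surrogate: find a non-negative integer marking m minimizing total tokens
# subject to A @ m >= c (a stand-in for the firing-legality constraints)
A = np.array([[1, 0, 1], [0, 1, 1]])
c = np.array([3, 4])

def cost(m):
    m = np.maximum(np.round(m), 0)            # markings are non-negative integers
    violation = np.maximum(c - A @ m, 0).sum()
    return m.sum() + 100.0 * violation        # penalize infeasible markings

# standard PSO loop
n_particles, dim, iters = 30, 3, 200
x = rng.uniform(0, 10, (n_particles, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = np.array([cost(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    vals = np.array([cost(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("minimum initial marking (toy):", np.maximum(np.round(gbest), 0).astype(int))
```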

Keywords: marking, production system, labeled Petri nets, particle swarm optimization

Procedia PDF Downloads 145
3286 Analysis of the Homogeneous Turbulence Structure in Uniformly Sheared Bubbly Flow Using First and Second Order Turbulence Closures

Authors: Hela Ayeb Mrabtini, Ghazi Bellakhal, Jamel Chahed

Abstract:

The presence of the dispersed phase in gas-liquid bubbly flow considerably alters the liquid turbulence. The bubbles induce turbulent fluctuations that enhance the global liquid turbulence level and alter the mechanisms of turbulence. RANS modeling of uniformly sheared flows on an isolated sphere centered in a control volume is performed using first and second order turbulence closures. The sphere is placed in the production-dissipation equilibrium zone, where the liquid velocity is set equal to the relative velocity of the bubbles. The void fraction is determined by the ratio between the sphere volume and the control volume. The analysis of the turbulence statistics on the control volume provides numerical results that are interpreted with regard to the effect of the bubble wakes on the turbulence structure in uniformly sheared bubbly flow. We assumed for this purpose that at low void fraction, where there is no hydrodynamic interaction between the bubbles, the single-phase flow simulation on an isolated sphere is representative, on statistical average, of a sphere network. The numerical simulations were first validated against the experimental data of bubbly homogeneous turbulence with constant shear and then extended to produce numerical results for a wide range of shear rates from 0 to 10 s^-1. These results are compared with our turbulence closure proposed for gas-liquid bubbly flows. In this closure, the turbulent stress tensor in the liquid is split into a turbulent dissipative part produced by the gradient of the mean velocity, which also contains the turbulence generated in the bubble wakes, and a pseudo-turbulent non-dissipative part induced by the bubble displacements; each part is determined by a specific transport equation. The simulations of uniformly sheared flows on an isolated sphere reproduce the mechanisms related to the turbulent part, and the numerical results are in perfect accordance with the modeling of the transport equation of the turbulent part. The reduction of the second order turbulence closure provides a description of the modification of the turbulence structure by the presence of bubbles, using a dimensionless number expressed in terms of the two time scales characterizing the turbulence induced by the shear and that induced by the bubble displacements. The numerical simulations carried out in the framework of this comprehensive analysis reproduce, in particular, the attenuation of the turbulent friction shown in the experimental results of bubbly homogeneous turbulence subjected to constant shear.

Keywords: gas-liquid bubbly flows, homogeneous turbulence, turbulence closure, uniform shear

Procedia PDF Downloads 431
3285 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimates of areal rainfall and to support flood modelling and prediction; in one study, even with lumped models for flood forecasting, a proper gauge network significantly improved the results. The existing rainfall network in Johor must therefore be optimized and redesigned in order to meet the level of accuracy preset by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure depends not only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature, and wind speed data during the monsoon season (November to February) for the period 1975 to 2008. Three different semivariogram models, spherical, Gaussian, and exponential, were used, and their performances were compared. Cross-validation was applied to compute the errors, and the results showed that the exponential model is the best semivariogram. It was found that the requirements were satisfied by a network of 64 rain gauges with the minimum estimated variance, and 20 of the existing gauges were removed and relocated. An existing network may contain redundant stations that make little or no contribution to network performance in providing quality data. Therefore, two different cases were considered in this study: in the first case, the removed stations were optimally relocated to new locations to investigate their influence on the calculated estimated variance; the second case explored the possibility of relocating all 84 existing stations to new locations to determine the optimal positions. The relocations of the stations in both cases showed that the new optimal locations reduced the estimated variance, proving that location plays an important role in determining the optimal network.
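
The sketch below illustrates the variance-reduction/simulated-annealing combination: an exponential semivariogram defines the ordinary kriging variance, and annealing searches for the 64-gauge subset of 84 hypothetical gauge locations that minimizes the mean kriging variance over a grid. Coordinates, variogram parameters, and the annealing schedule are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(11)

def gamma(h, sill=1.0, a=40.0):
    # exponential semivariogram (the model found best in the study)
    return sill * (1.0 - np.exp(-3.0 * h / a))

def mean_ok_variance(st, grid):
    # mean ordinary-kriging variance over `grid` for station coords `st` (m, 2)
    m = len(st)
    D = np.linalg.norm(st[:, None] - st[None, :], axis=-1)
    A = np.ones((m + 1, m + 1))
    A[:m, :m] = gamma(D)
    A[m, m] = 0.0                                     # Lagrange multiplier row
    B = np.ones((m + 1, len(grid)))
    B[:m] = gamma(np.linalg.norm(st[:, None] - grid[None, :], axis=-1))
    lam = np.linalg.solve(A, B)
    return float(np.mean(np.sum(B * lam, axis=0)))    # b @ lambda per grid point

stations = rng.uniform(0, 100, (84, 2))               # hypothetical coordinates
grid = np.array([[i, j] for i in range(5, 100, 10) for j in range(5, 100, 10)])

keep = rng.choice(84, 64, replace=False)
best = mean_ok_variance(stations[keep], grid)

T = 1.0
for _ in range(2000):                 # simulated annealing over 64-gauge subsets
    cand = keep.copy()
    dropped = np.setdiff1d(np.arange(84), keep)
    cand[rng.integers(64)] = rng.choice(dropped)      # swap one gauge
    val = mean_ok_variance(stations[cand], grid)
    if val < best or rng.random() < np.exp((best - val) / T):
        keep, best = cand, val
    T *= 0.995

print("mean kriging variance of optimized 64-gauge network:", round(best, 4))
```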

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 273
3284 Fire Safety Engineering of Wood Dust Layer or Cloud

Authors: Marzena Półka, Bożena Kukfisz

Abstract:

This paper presents an analysis of dust explosion hazards in the process industries. It reviews selected testing methods for dust explosibility and presents two of them according to the experimental standards used by the Department of Combustion and Fire Theory at the Main School of Fire Service in Warsaw. The article gives values of the maximum acceptable surface temperature (MAST) of machines operating in the presence of a dust cloud and of dust layers with thicknesses of 5 and 12.5 mm. The comparative analysis points to the conclusion that the minimum ignition temperature of the layer (MITL) and the minimum ignition temperature of the dust cloud (MTCD) depend on the granularity of the substance. Increasing the thickness of the dust layer reduces its minimum ignition temperature; at the same time, increasing the thickness extends the flameless combustion and delays ignition.

Keywords: fire safety engineering, industrial hazards, minimum ignition temperature, wood dust

Procedia PDF Downloads 288
3283 Application of Neural Network on the Loading of Copper onto Clinoptilolite

Authors: John Kabuba

Abstract:

The study investigated the implementation of neural network (NN) techniques for predicting the loading of Cu ions onto clinoptilolite. An experimental design using analysis of variance (ANOVA) was chosen for testing the adequacy of the neural network and for optimizing the effective input parameters (pH, temperature, and initial concentration). A feed-forward multi-layer perceptron (MLP) NN successfully tracked the non-linear behavior of the adsorption process versus the input parameters, with a mean squared error (MSE) of 0.102, a correlation coefficient (R) of 0.998, and a mean squared relative error (MSRE) of 0.004. The results showed that NN modeling techniques can effectively predict and simulate highly complex, non-linear processes such as ion exchange.
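
As a flavor of the modeling step, the sketch below trains a small feed-forward MLP on synthetic stand-in data with the same three inputs; the architecture, the data-generating function, and the value ranges are assumptions, not the study's.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# synthetic stand-in for the ion-exchange data: inputs are pH, temperature (C),
# and initial Cu concentration (mg/L); output is Cu loading on clinoptilolite
n = 300
X = np.column_stack([
    rng.uniform(2, 8, n),        # pH
    rng.uniform(20, 60, n),      # temperature
    rng.uniform(50, 400, n),     # initial concentration
])
y = (5 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * X[:, 2]
     + 0.02 * X[:, 0] * X[:, 2] + rng.normal(0, 2, n))   # nonlinear response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)

# feed-forward MLP, as in the abstract; the hidden-layer size is a guess
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(scaler.transform(X_tr), y_tr)

pred = net.predict(scaler.transform(X_te))
mse = np.mean((pred - y_te) ** 2)
r = np.corrcoef(pred, y_te)[0, 1]
print(f"MSE = {mse:.3f}, R = {r:.3f}")
```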

Keywords: clinoptilolite, loading, modeling, neural network

Procedia PDF Downloads 389
3282 On the Construction of Some Optimal Binary Linear Codes

Authors: Skezeer John B. Paz, Ederlina G. Nocon

Abstract:

Finding an optimal binary linear code is a central problem in coding theory. A binary linear code C = [n, k, d] is called optimal if there is no linear code with higher minimum distance d for the given length n and dimension k. There are bounds that limit the minimum distance d of a linear code of fixed length n and dimension k. The lower bound, which can be attained by a construction process, tells us that a linear code with this minimum distance is known to exist. The upper bound is given by theoretical results such as the Griesmer bound. One way to find an optimal binary linear code is to make the lower bound on d equal to its upper bound, that is, to construct a binary linear code that achieves the highest possible value of its minimum distance d for the given n and k. Some optimal binary linear codes were presented by Andries Brouwer in his published table of bounds on the minimum distance d of binary linear codes for 1 ≤ n ≤ 256 and k ≤ n. This was further improved by Markus Grassl, who gave a detailed construction process for each code attaining the lower bound. In this paper, we construct new optimal binary linear codes by applying construction processes to existing binary linear codes. In particular, we developed an algorithm, applied to the codes already constructed, that extends the list of optimal binary linear codes to 257 ≤ n ≤ 300 for k ≤ 7.
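
For reference, the Griesmer bound mentioned above is easy to compute; the sketch below checks a classical code that meets it with equality and reports the largest d not excluded for a given n and k. This is an upper bound only, not a construction.

```python
from math import ceil

def griesmer_bound(k: int, d: int) -> int:
    """Minimum length n of a binary [n, k, d] linear code by the Griesmer bound:
    n >= sum_{i=0}^{k-1} ceil(d / 2**i)."""
    return sum(ceil(d / 2 ** i) for i in range(k))

def max_d_by_griesmer(n: int, k: int) -> int:
    """Largest d not ruled out by the Griesmer bound for given n and k."""
    d = 1
    while griesmer_bound(k, d + 1) <= n:
        d += 1
    return d

# a code meeting the bound with equality: the [7, 4, 3] Hamming code
assert griesmer_bound(4, 3) == 7
print("upper bound on d for n=300, k=7:", max_d_by_griesmer(300, 7))
```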

Keywords: bounds of linear codes, Griesmer bound, construction of linear codes, optimal binary linear codes

Procedia PDF Downloads 719
3281 Structural and Electrical Properties of VO₂/ZnO Nanostructures

Authors: Sang-Wook Han, Zhenlan Jin, In-Hui Hwang, Chang-In Park

Abstract:

We examined the structural and electrical properties of uniformly oriented VO₂/ZnO nanostructures. VO₂ was deposited on ZnO templates by direct-current sputtering. Scanning electron microscopy and transmission electron microscopy measurements indicated that b-oriented VO₂ was uniformly crystallized on ZnO templates of different lengths. VO₂/ZnO formed nanorods on ZnO nanorods with lengths longer than 250 nm. X-ray absorption fine structure at the V K edge of VO₂/ZnO showed the M1 and R phases of VO₂ at 30 and 100 ℃, respectively, suggesting a structural phase transition between these temperatures. Temperature-dependent resistance measurements of the VO₂/ZnO nanostructures revealed a metal-to-insulator transition (MIT) at 65 ℃ during heating and 55 ℃ during cooling, regardless of ZnO length. The bond lengths of the V-O and V-V pairs in the VO₂/ZnO nanorods were somewhat distorted, and a substantial amount of structural disorder existed in the atomic pairs compared to those of VO₂ films without ZnO. The resistance of the VO₂/ZnO nanorods revealed a sharp MIT near 65 ℃ during heating and a hysteresis behavior; the results suggest that microchannels for charge carriers exist near room temperature during cooling. VO₂/ZnO nanorods are stable and reproducible, so they can be widely used in practical applications such as electronic devices, gas sensors, and ultra-fast switches.

Keywords: metal-to-insulator transition, VO₂, ZnO, XAFS, structural-phase transition

Procedia PDF Downloads 454
3280 Environmentally Adaptive Acoustic Echo Suppression for Barge-in Speech Recognition

Authors: Jong Han Joo, Jung Hoon Lee, Young Sun Kim, Jae Young Kang, Seung Ho Choi

Abstract:

In this study, we propose a novel technique for acoustic echo suppression (AES) during speech recognition under barge-in conditions. Conventional AES methods based on spectral subtraction apply fixed weights to the echo path transfer function (EPTF) estimated at the current signal segment and to the EPTF estimated up to the previous time interval. We propose a new approach that adaptively updates the weight parameters in response to abrupt changes in the acoustic environment due to background noise or double-talk. Furthermore, we devised a voice activity detector and an initial time-delay estimator for barge-in speech recognition in communication networks. The initial time delay is estimated using a log-spectral distance measure as well as cross-correlation coefficients. The experimental results show that the developed techniques can be successfully applied in barge-in speech recognition systems.
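
The cross-correlation part of the initial delay estimator can be sketched in a few lines; the signals, delay, and sample rate below are made up for illustration, and the log-spectral distance refinement is omitted.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

rng = np.random.default_rng(9)

fs = 8000                        # sample rate (Hz), assumed
true_delay = 120                 # samples

x = rng.standard_normal(4000)                      # far-end (loudspeaker) signal
echo = np.zeros_like(x)
echo[true_delay:] = 0.6 * x[:-true_delay]          # delayed, attenuated echo
mic = echo + 0.1 * rng.standard_normal(len(x))     # microphone signal with noise

# initial time-delay estimate from the peak of the cross-correlation
c = correlate(mic, x, mode="full")
lags = correlation_lags(len(mic), len(x), mode="full")
d_hat = lags[np.argmax(np.abs(c))]
print(f"estimated delay: {d_hat} samples ({1000 * d_hat / fs:.1f} ms)")
```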

Keywords: acoustic echo suppression, barge-in, speech recognition, echo path transfer function, initial delay estimator, voice activity detector

Procedia PDF Downloads 342
3279 The Roles of Pay Satisfaction and Intent to Leave on Counterproductive Work Behavior among Non-Academic University Employees

Authors: Abiodun Musbau Lawal, Sunday Samson Babalola, Uzor Friday Ordu

Abstract:

Employee counterproductive work behavior in government-owned organizations in emerging economies continues to be a major concern. This study investigated pay satisfaction, intent to leave, and age as predictors of counterproductive work behavior (CWB) among non-academic employees in a Nigerian federal-government-owned university. A sample of 200 non-academic employees completed questionnaires. Hierarchical multiple regression was conducted to determine the contribution of each predictor variable to the criterion variable, counterproductive work behavior. Results indicate that participants' age (β = -.18; p < .05) significantly and independently predicted CWB, accounting for 3% of the explained variance. Adding pay satisfaction (β = -.14; p < .05) significantly raised the explained variance to 5%, while adding intent to leave (β = -.17; p < .05) further raised it to 8%. The importance of these findings for reducing counterproductive work behavior is highlighted.

Keywords: counterproductive work behavior, pay satisfaction, intent to leave

Procedia PDF Downloads 346
3278 Indoor Temperature Estimation with FIR Filter Using R-C Network Model

Authors: Sung Hyun You, Jeong Hoon Kim, Dae Ki Kim, Choon Ki Ahn

Abstract:

In this paper, we propose a new strategy for estimating indoor temperature based on a modified resistance-capacitance (R-C) network thermal dynamic model. Using a minimum variance finite impulse response (FIR) filter, accurate indoor temperature estimation can be achieved. The proposed method is supported by experimental validation in a test-bed environment composed of a demand response (DR) server and a home energy management system (HEMS).
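
A minimal sketch of a finite-memory (FIR) state estimator on a first-order R-C model is shown below: the estimate at each step is a batch least-squares fit over only the last N measurements, which is the unbiased, model-based core of the minimum-variance FIR idea (the full filter would additionally weight by the noise covariances). The model constants and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# first-order R-C room model (discretized): x[k+1] = a x[k] + b u[k] + w[k],
# y[k] = x[k] + v[k]; a and b lump the thermal resistance R and capacity C
a, b, N = 0.95, 0.4, 20           # N = FIR horizon length

def fir_estimate(y_win, u_win):
    """Batch least-squares (unbiased FIR) estimate of the state at the window
    end: estimate the window's initial state x0, then roll it forward."""
    n = len(y_win)
    g = np.zeros(n)               # deterministic input response in the window
    for j in range(1, n):
        g[j] = a * g[j - 1] + b * u_win[j - 1]
    H = a ** np.arange(n)         # x[j] = a**j * x0 + g[j]
    x0 = H @ (y_win - g) / (H @ H)
    return a ** (n - 1) * x0 + g[-1]

# simulate and filter
T = 300
x, xs, ys, us, est = 15.0, [], [], [], []
for k in range(T):
    u = 1.0 if (k // 50) % 2 == 0 else 0.0        # heater duty cycle
    y = x + rng.normal(0, 0.5)                    # noisy temperature sensor
    xs.append(x); ys.append(y); us.append(u)
    if k >= N - 1:
        est.append(fir_estimate(np.array(ys[-N:]), np.array(us[-N:])))
    x = a * x + b * u + rng.normal(0, 0.05)       # process noise

err = np.array(est) - np.array(xs[N - 1:])
print("RMS estimation error:", np.sqrt(np.mean(err ** 2)).round(3))
```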

Keywords: energy consumption, resistance-capacitance network model, demand response, finite impulse response filter

Procedia PDF Downloads 423
3277 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the solution. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences; Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated, and the history of some randomly sampled photon bundles was recorded to train an artificial neural network (ANN) back-propagation model; the flux calculated using the standard quasi-PMC was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed for the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help further reduce the computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the environment of interest can be fully represented in the ANN model. Better results can be achieved in this unexplored domain.
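
The variance advantage of low-discrepancy sampling is easy to demonstrate on a one-dimensional toy integrand; the sketch below compares plain Monte Carlo with scrambled Sobol sampling from scipy.stats.qmc. The integrand is a stand-in, not the paper's spectral model.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)

# toy radiative-transfer-flavored integrand: transmitted fraction through a
# slab with optical depth tau(x) = 2x, integrated over emission location x
f = lambda x: np.exp(-2.0 * x)
exact = (1.0 - np.exp(-2.0)) / 2.0

def estimate(sample, n):
    return float(np.mean(f(sample(n))))

n, reps = 1024, 200                               # n is a power of 2 for Sobol
mc_vals = [estimate(lambda m: rng.random(m), n) for _ in range(reps)]
qmc_vals = []
for _ in range(reps):
    sob = qmc.Sobol(d=1, scramble=True)           # scrambled low-discrepancy sequence
    qmc_vals.append(estimate(lambda m: sob.random(m).ravel(), n))

print(f"exact                 : {exact:.6f}")
print(f"MC   mean / std error : {np.mean(mc_vals):.6f} / {np.std(mc_vals):.2e}")
print(f"QMC  mean / std error : {np.mean(qmc_vals):.6f} / {np.std(qmc_vals):.2e}")
```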

Keywords: radiative heat transfer, Monte Carlo method, pseudo-random numbers, low-discrepancy sequences, artificial neural networks

Procedia PDF Downloads 185
3276 An Empirical Study of the Best Fitting Probability Distributions for Stock Returns Modeling

Authors: Jayanta Pokharel, Gokarna Aryal, Netra Kanaal, Chris Tsokos

Abstract:

Investment in stocks and shares aims to seek potential gains while weighing the risk of future needs, such as retirement or children's education. Analyzing the behavior of stock market returns and making predictions are important for investors seeking to mitigate investment risk. Historically, normal variance models have been used to describe the behavior of stock market returns. However, the returns of financial assets are actually skewed, with higher kurtosis, heavier tails, and a higher center than the normal distribution, so the Laplace distribution and its family are natural candidates for modeling stock returns. The Variance-Gamma (VG) distribution is among the most sought-after distributions for modeling asset returns and has been extensively discussed in the financial literature. In this paper, we explore other members of the Laplace family, such as the asymmetric Laplace, skewed Laplace, and Kumaraswamy Laplace (KS) distributions, together with the Variance-Gamma, to model the weekly returns of the S&P 500 Index and its eleven business sector indices. The method of maximum likelihood is employed to estimate the parameters of the distributions, and our empirical inquiry shows that the Kumaraswamy Laplace distribution performs much better for stock return modeling among the distributions considered in this study; in practice, KS can be used as a strong alternative to the VG distribution.
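
Maximum likelihood fitting and comparison of candidate return distributions can be sketched with scipy; the Kumaraswamy Laplace is not available in scipy, so the example below compares the normal, Laplace, and asymmetric Laplace on synthetic heavy-tailed "returns" standing in for the weekly index data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# stand-in for weekly index returns (real data would be S&P 500 log-returns);
# drawn heavy-tailed on purpose
returns = stats.t.rvs(df=4, loc=0.001, scale=0.02, size=1500, random_state=rng)

candidates = {
    "normal": stats.norm,
    "laplace": stats.laplace,
    "asymmetric laplace": stats.laplace_asymmetric,
}

for name, dist in candidates.items():
    params = dist.fit(returns)                  # maximum likelihood fit
    ll = np.sum(dist.logpdf(returns, *params))
    aic = 2 * len(params) - 2 * ll              # lower AIC = better fit
    print(f"{name:19s} log-lik = {ll:9.2f}  AIC = {aic:9.2f}")
```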

Keywords: stock returns, Variance-Gamma, Kumaraswamy Laplace, maximum likelihood

Procedia PDF Downloads 40
3275 Design of a Low Cost Programmable LED Lighting System

Authors: S. Abeysekera, M. Bazghaleh, M. P. L. Ooi, Y. C. Kuang, V. Kalavally

Abstract:

Smart LED-based lighting systems have significant advantages over traditional lighting systems due to their capability of producing tunable light spectra on demand. The main challenge in the design of smart lighting systems is to produce sufficient luminous flux and a uniformly accurate output spectrum over a sufficiently broad area. This paper outlines the design principles of a programmable LED lighting system that achieves these two aims, presenting a seven-channel design using low-cost discrete LEDs. Optimization algorithms are used to calculate the required number of LEDs, their arrangement, and the optimum LED separation distance. The results show the illumination uniformity for each channel and indicate that the maximum color error is below 0.0808 on the CIE 1976 chromaticity scale. In conclusion, this paper considered the simulation and design of a seven-channel programmable lighting system using low-cost discrete LEDs to produce sufficient luminous flux and a uniformly accurate output spectrum over a sufficiently broad area.

Keywords: light spectrum control, LEDs, smart lighting, programmable LED lighting system

Procedia PDF Downloads 160
3274 The Influence of Oil Price Fluctuations on Macroeconomic Variables of the Kingdom of Saudi Arabia

Authors: Khalid Mujaljal, Hassan Alhajhoj

Abstract:

This paper empirically investigates the influence of oil price fluctuations on the key macroeconomic variables of the Kingdom of Saudi Arabia using an unrestricted VAR methodology. Two analytical tools, Granger causality and variance decomposition, are used. The Granger causality tests reveal that almost all specifications of oil price shocks significantly Granger-cause GDP, and they demonstrate evidence of causality between oil price changes and the money supply (M3) and the consumer price index percent (CPIPC) in the case of positive oil price shocks. Surprisingly, almost all specifications of oil price shocks do not Granger-cause government expenditure. The outcomes of the variance decomposition analysis suggest that positive oil shocks contribute about 25 percent to inflation variation in the country. The contribution of symmetric linear oil price shocks and asymmetric positive oil price shocks is also significant and persistent, explaining 25 percent of the variation in the world consumer price index by the end of the period.
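
Both tools are available in statsmodels; the sketch below runs a Granger causality test and a forecast-error variance decomposition on synthetic series in which oil shocks feed GDP and CPI with a lag, standing in for the Saudi macro data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)

# synthetic stand-in for the study's series: oil price changes feed GDP growth
# and CPI with a one-period lag
n = 200
oil = rng.standard_normal(n)
gdp = np.zeros(n)
cpi = np.zeros(n)
for t in range(1, n):
    gdp[t] = 0.3 * gdp[t - 1] + 0.5 * oil[t - 1] + 0.3 * rng.standard_normal()
    cpi[t] = 0.4 * cpi[t - 1] + 0.3 * oil[t - 1] + 0.3 * rng.standard_normal()

data = pd.DataFrame({"oil": oil, "gdp": gdp, "cpi": cpi})
res = VAR(data).fit(maxlags=4, ic="aic")          # unrestricted VAR

# Granger causality: do oil price changes help predict GDP?
print(res.test_causality("gdp", ["oil"], kind="f").summary())

# forecast-error variance decomposition at a 10-step horizon
res.fevd(10).summary()                            # prints the decomposition tables
```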

Keywords: Granger causality, oil prices changes, Saudi Arabian economy, variance decomposition

Procedia PDF Downloads 296
3273 Stereoselective Glycosylation and Functionalization of Unbiased Site of Sweet System via Dual-Catalytic Transition Metal Systems/Wittig Reaction

Authors: Mukul R. Gupta, Rajkumar Gandhi, Rajitha Sachan, Naveen K. Khare

Abstract:

The field of glycoscience has burgeoned in the last several decades, leading to the identification of many glycosides that serve critical roles in a wide range of biological processes. This has prompted a resurgence of synthetic interest, with a particular focus on new approaches to constructing glycosidic bonds selectively. Despite the numerous elegant strategies and methods developed for the formation of glycosidic bonds, stereoselective construction of glycosides remains challenging. We have recently developed a novel hexafluoroisopropanol (HFIP)-catalyzed stereoselective glycosylation method using a KDN imidate glycosyl donor and a variety of alcohols in excellent yield. This method is broadly applicable to a wide range of substrates and gives excellent glycoside selectivity. Herein we also report the functionalization of the unbiased site of the newly formed glycosides by dual-catalytic transition metal systems (Ru or Fe). We use the innovative Reverse & Catalyst strategy, i.e., a reversible activation reaction by one catalyst combined with a functionalization reaction by another catalyst, enabling functionalization of substrates at their inherently unreactive sites. We are also targeting diSia derivative synthesis by the Wittig reaction. The synthetic method works under mild conditions, demonstrates the functional-group tolerance of the dual-catalytic systems, and highlights the potential of the multicatalytic approach to address challenging transformations while avoiding multistep procedures in carbohydrate synthesis.

Keywords: KDN, stereoselective glycosylation, dual-catalytic functionalization, Wittig reaction

Procedia PDF Downloads 162
3272 Carbohydrate Intake Estimation in Type I Diabetic Patients Described by UVA/Padova Model

Authors: David A. Padilla, Rodolfo Villamizar

Abstract:

In recent years, closed-loop control strategies have been developed in order to establish a healthy glucose profile in type 1 diabetes mellitus (T1DM) patients. However, the controller itself is unable to define a suitable reference trajectory for glucose. In this paper, a control strategy is proposed in which the shape of the reference trajectory is generated based on the amount of carbohydrates present during the digestive process, reflecting the effect of carbohydrate intake. Since no sensor exists to measure the amount of carbohydrates consumed, an estimator is proposed. This paper therefore presents the entire process of designing a carbohydrate estimator that allows the disturbance to be estimated for a model predictive controller (MPC) in a T1DM patient; the estimate is used to establish a reference profile and to improve the controller's response by providing information about the ingested carbohydrates. The dynamics of the diabetic model are given by the equations of the UVA/Padova model in the T1DMS simulator. The system was developed and simulated in Simulink, taking into account the noise and the limitations of the glucose control system actuators.

Keywords: estimation, glucose control, predictive controller, MPC, UVA/Padova

Procedia PDF Downloads 231
3271 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators, such as loess, is compared to estimators that assume additivity in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean squared error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure, and the selected variables are identified.
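
The backward elimination loop itself is simple; the sketch below drives it with OLS-based AIC on synthetic data for brevity, whereas the paper scores candidate models with additive or nonparametric fits.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)

# synthetic data: only x0 and x2 matter (linear stand-in for the paper's
# additive/nonparametric model fits)
n, p = 200, 6
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.standard_normal(n)

def aic_of(cols):
    return sm.OLS(y, sm.add_constant(X[:, cols])).fit().aic

active = list(range(p))
while len(active) > 1:
    current = aic_of(active)
    # try dropping each variable; keep the drop that lowers AIC the most
    scores = [(aic_of([c for c in active if c != j]), j) for j in active]
    best_score, worst_var = min(scores)
    if best_score >= current:
        break                                  # no drop improves AIC: stop
    active.remove(worst_var)

print("selected variables:", active)           # expected: [0, 2]
```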

Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criterion

Procedia PDF Downloads 240
3270 Minimum Ratio of Flexural Reinforcement for High Strength Concrete Beams

Authors: Azad A. Mohammed, Dunyazad K. Assi, Alan S. Abdulrahman

Abstract:

The current ACI 318 Code provides two limits for the minimum steel ratio of concrete beams. When the concrete compressive strength is larger than 31 MPa, the limit √(fc')/4fy usually governs. In this paper, shortcomings of this limit are discussed, and it is shown that the limit is based on a 90% safety factor and was derived from a modulus-of-rupture equation suitable for concretes with compressive strength lower than 31 MPa. Accordingly, the limit is not suitable for concretes of higher compressive strength. An alternative equation is proposed for the minimum steel ratio of rectangular beams, and the proposed limit is found to be accurate for beams over a wide range of concrete compressive strengths. Shortcomings of the current ACI 318 Code equation and the accuracy of the proposed equation are supported by test data obtained from testing six reinforced concrete beams.
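
For orientation, the two ACI limits and their crossover near 31 MPa can be tabulated directly (the paper's proposed alternative equation is not reproduced here):

```python
import numpy as np

fy = 420.0                                   # steel yield strength, MPa (assumed)

def rho_min_aci(fc: float, fy: float) -> float:
    """ACI 318 minimum flexural steel ratio (MPa units): the larger of
    sqrt(fc')/(4*fy) and 1.4/fy."""
    return max(np.sqrt(fc) / (4.0 * fy), 1.4 / fy)

for fc in (20, 31, 40, 60, 80, 100):
    rho = rho_min_aci(fc, fy)
    governs = "sqrt(fc')/4fy" if np.sqrt(fc) / 4.0 > 1.4 else "1.4/fy"
    print(f"fc' = {fc:3d} MPa -> rho_min = {rho:.5f}  ({governs} governs)")
```

Running this confirms the abstract's observation: √(fc')/4 exceeds 1.4 once fc' passes roughly 31 MPa, so the square-root limit governs for all higher-strength concretes.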

Keywords: concrete beam, compressive strength, minimum steel ratio, modulus of rupture

Procedia PDF Downloads 513
3269 Diagonal Vector Autoregressive Models and Their Properties

Authors: Usoro Anthony E., Udoh Emediong

Abstract:

Diagonal vector autoregressive models are a special class of the general vector autoregressive models, identified under certain conditions, in which the parameters are restricted to the diagonal elements of the coefficient matrices. The variance, autocovariance, and autocorrelation properties of the upper and lower diagonal VAR models are derived. The new set of VAR models is verified with empirical data and is found to perform favourably compared with the general VAR models. The advantage of the diagonal models over the existing models is that they are parsimonious, given the reduction in the interactive coefficients of the general VAR models.

Keywords: VAR models, diagonal VAR models, variance, autocovariance, autocorrelations

Procedia PDF Downloads 81
3268 Shuffled Structure for 4.225 GHz Antireflective Plates: A Proposal Proven by Numerical Simulation

Authors: Shin-Ku Lee, Ming-Tsu Ho

Abstract:

A newly proposed antireflective selector with a shuffled structure is reported in this paper. The proposed structure is made of two different quarter-wavelength (QW) slabs, and its function as an antireflective selector is numerically supported by one-dimensional simulation results obtained with the method of characteristics (MOC). The two QW slabs, characterized by dielectric constants εᵣA and εᵣB, are uniformly divided into N and N+1 pieces, respectively, which are then shuffled to form an antireflective plate with the B(AB)^N structure, in which there is always one εᵣA piece between two εᵣB pieces. The other arrangement is the A(BA)^N structure, in which every εᵣB piece is sandwiched between two εᵣA pieces. Both proposed structures are numerically shown to function as QW plates. In order to allow maximum transmission through the proposed structures, the two dielectric constants are chosen to satisfy (εᵣA)² = εᵣB > 1. The advantages of the proposed structures over traditional anti-reflection coating techniques are that only two materials with two thicknesses are needed and that they can be shuffled to form new QW structures. The design wavelength used to validate the proposed idea is 71 mm, corresponding to a frequency of about 4.225 GHz. The computational results are shown in both the time and frequency domains, revealing that the proposed structures produce minimum reflections around the frequency of interest.

Keywords: method of characteristics, quarter wavelength, anti-reflective plate, propagation of electromagnetic fields

Procedia PDF Downloads 116
3267 Group Sequential Covariate-Adjusted Response Adaptive Designs for Survival Outcomes

Authors: Yaxian Chen, Yeonhee Park

Abstract:

Driven by evolving FDA recommendations, modern clinical trials demand innovative designs that strike a balance between statistical rigor and ethical considerations. Covariate-adjusted response-adaptive (CARA) designs bridge this gap by utilizing patient attributes and responses to skew treatment allocation in favor of the treatment that is best for an individual patient's profile. However, existing CARA designs for survival outcomes often hinge on specific parametric models, constraining their applicability in clinical practice. In this article, we address this limitation by introducing a CARA design for survival outcomes (CARAS) based on the Cox model and a variance estimator. This method addresses model misspecification and enhances the flexibility of the design. We also propose a group sequential overlap-weighted log-rank test to preserve the type I error rate in the context of group sequential trials. Extensive simulation studies demonstrate the clinical benefit, statistical efficiency, and robustness to model misspecification of the proposed method compared to traditional randomized controlled trial designs and response-adaptive randomization designs.

Keywords: Cox model, log-rank test, optimal allocation ratio, overlap weight, survival outcome

Procedia PDF Downloads 23
3266 Design and Performance Analysis of Resource Management Algorithms in Response to Emergency and Disaster Situations

Authors: Volkan Uygun, H. Birkan Yilmaz, Tuna Tugcu

Abstract:

This study focuses on the development and use of algorithms that address the issue of resource management in response to emergency and disaster situations. The presented system, named Disaster Management Platform (DMP), takes data from the data sources of service providers and distributes the incoming requests so as both to manage load balancing and to minimize service time, which results in improved user satisfaction. Three resource management algorithms, which give different levels of importance to load balancing and service time, are proposed. The first is the Minimum Distance algorithm, which assigns each request to the closest resource. The second is the Minimum Load algorithm, which assigns each request to the resource with the minimum load. The third is the Hybrid algorithm, which combines the previous two approaches. The performance of the proposed algorithms is evaluated with respect to waiting time, success ratio, and maximum load ratio, with the metrics monitored in simulations to find the optimal scheme for different loads. Two different simulations are performed, one time-based and the other lambda-based. The results indicate that the Minimum Load algorithm is generally the best in all metrics, whereas the Minimum Distance algorithm is the worst in all cases and in all metrics; the leading position in performance switches between the Minimum Load and the Hybrid algorithms as lambda values change.
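
The three allocation policies can be sketched as scoring rules over candidate resources; the coordinates, loads, and hybrid weighting below are invented for illustration and are not the DMP's actual parameters.

```python
import math
import random

random.seed(0)

# hypothetical resources with 2-D coordinates and current load in [0, 1]
resources = [{"id": i,
              "pos": (random.uniform(0, 100), random.uniform(0, 100)),
              "load": random.uniform(0.0, 0.9)} for i in range(8)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def assign(request_pos, policy, alpha=0.5):
    if policy == "min_distance":
        key = lambda r: dist(r["pos"], request_pos)
    elif policy == "min_load":
        key = lambda r: r["load"]
    else:  # hybrid: weighted mix of normalized distance and load
        dmax = max(dist(r["pos"], request_pos) for r in resources) or 1.0
        key = lambda r: (alpha * dist(r["pos"], request_pos) / dmax
                         + (1 - alpha) * r["load"])
    chosen = min(resources, key=key)
    chosen["load"] = min(1.0, chosen["load"] + 0.05)   # assignment adds load
    return chosen["id"]

req = (25.0, 40.0)
for policy in ("min_distance", "min_load", "hybrid"):
    print(policy, "->", assign(req, policy))
```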

Keywords: emergency and disaster response, resource management algorithm, disaster situations, disaster management platform

Procedia PDF Downloads 314