Search results for: variance ratio
5642 An Approach to Noise Variance Estimation in Very Low Signal-to-Noise Ratio Stochastic Signals
Authors: Miljan B. Petrović, Dušan B. Petrović, Goran S. Nikolić
Abstract:
This paper describes a method for AWGN (Additive White Gaussian Noise) variance estimation in noisy stochastic signals, referred to as Multiplicative-Noising Variance Estimation (MNVE). The aim was to develop an estimation algorithm with a minimal number of assumptions about the structure of the original signal. A MATLAB simulation and analysis of the method applied to speech signals showed higher accuracy than the standard AR (autoregressive) modeling noise estimation technique. In addition, strong performance was observed at very low signal-to-noise ratios, which generally represent the worst-case scenario for signal denoising methods. High execution time appears to be the only disadvantage of MNVE. After close examination of all the observed features of the proposed algorithm, it was concluded that the method is worth exploring and that, with further adjustments and improvements, it could become remarkably powerful.
Keywords: noise, signal-to-noise ratio, stochastic signals, variance estimation
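The abstract does not give the MNVE algorithm itself. As a point of reference only, a common assumption-light baseline estimates the AWGN variance from first differences of the observed signal; this is a hypothetical illustration, not the authors' method:

```python
import random
import statistics

def diff_noise_variance(x):
    """Estimate the AWGN variance from first differences.

    For a slowly varying signal s[k] plus white noise n[k], the
    difference x[k+1] - x[k] is dominated by n[k+1] - n[k], whose
    variance is 2 * sigma^2; halving the variance of the differences
    therefore recovers sigma^2 with almost no assumptions on s.
    """
    d = [b - a for a, b in zip(x, x[1:])]
    return statistics.pvariance(d) / 2.0

# Slow ramp plus Gaussian noise with true variance 0.25
random.seed(0)
noisy = [0.001 * k + random.gauss(0.0, 0.5) for k in range(10000)]
```

The estimator degrades when the clean signal varies quickly between samples, which is one reason more elaborate schemes such as MNVE are of interest.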
Procedia PDF Downloads 386
5641 A Generalized Family of Estimators for Estimation of Unknown Population Variance in Simple Random Sampling
Authors: Saba Riaz, Syed A. Hussain
Abstract:
This paper addresses the estimation of the unknown population variance of the variable of interest. A new generalized class of estimators of the finite population variance is suggested using auxiliary information. To improve the precision of the proposed class, the known population variance of the auxiliary variable is used. Mathematical expressions for the biases and asymptotic variances of the suggested class are derived under a large-sample approximation. Theoretical and numerical comparisons are made to investigate the performance of the proposed class of estimators. The empirical study reveals that the suggested class of estimators performs better than the usual estimator, the classical ratio estimator, the classical product estimator, and the classical linear regression estimator. The suggested class is also more efficient than some recently published estimators.
Keywords: study variable, auxiliary variable, finite population variance, bias, asymptotic variance, percent relative efficiency
Procedia PDF Downloads 225
5640 A Comparative Analysis of Global Minimum Variance and Naïve Portfolios: Performance across Stock Market Indices and Selected Economic Regimes Using Various Risk-Return Metrics
Authors: Lynmar M. Didal, Ramises G. Manzano Jr., Jacque Bon-Isaac C. Aboy
Abstract:
This study analyzes the performance of global minimum variance (GMV) and naive portfolios across different economic periods, using monthly stock returns from the Philippine Stock Exchange Index (PSEI), the S&P 500, and the Dow Jones Industrial Average (DOW). Performance is evaluated through the Sharpe ratio, Sortino ratio, Jensen's Alpha, Treynor ratio, and Information ratio. Additionally, the study investigates the impact of short selling on portfolio performance. Six time periods are defined for analysis, encompassing events such as the global financial crisis and the COVID-19 pandemic. Findings indicate that the naive portfolio generally outperforms the GMV portfolio in the S&P 500, signifying higher returns with increased volatility. Conversely, in the PSEI and DOW, the GMV portfolio shows more efficient risk-adjusted returns. Short selling significantly impacts the GMV portfolio during the mid-GFC and mid-COVID periods. The study offers insights for investors, suggesting the naive portfolio for higher risk tolerance and the GMV portfolio as a conservative alternative.
Keywords: portfolio performance, global minimum variance, naïve portfolio, risk-adjusted metrics, short-selling
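Two of the risk-adjusted metrics named above are simple to compute from a return series. A minimal sketch of the Sharpe and Sortino ratios, assuming periodic returns and a constant per-period risk-free rate (the example figures are illustrative, not from the study):

```python
import statistics

def sharpe_ratio(returns, rf=0.0):
    """Mean excess return over its standard deviation
    (per period, no annualization)."""
    excess = [r - rf for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

def sortino_ratio(returns, rf=0.0):
    """Like Sharpe, but penalizes only downside deviations below rf."""
    excess = [r - rf for r in returns]
    downside = (sum(min(e, 0.0) ** 2 for e in excess) / len(excess)) ** 0.5
    return statistics.mean(excess) / downside

monthly = [0.05, -0.02, 0.03, 0.01, -0.01]  # hypothetical monthly returns
```

Because the Sortino denominator ignores upside variability, it exceeds the Sharpe ratio whenever the positive deviations dominate, which is why the two metrics can rank portfolios differently.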
Procedia PDF Downloads 96
5639 Efficient Frontier: Comparing Different Volatility Estimators
Authors: Tea Poklepović, Zdravka Aljinović, Mario Matković
Abstract:
According to Markowitz's Modern Portfolio Theory (MPT), investors form mean-variance efficient portfolios that maximize their utility. Markowitz proposed the standard deviation as a simple measure of portfolio risk and the lower semi-variance as the only risk measure of interest to rational investors. This paper uses a third volatility estimator, based on intraday data, and compares three efficient frontiers on the Croatian stock market. The results show that the range-based volatility estimator outperforms both the mean-variance and the lower semi-variance models.
Keywords: variance, lower semi-variance, range-based volatility, MPT
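The abstract does not specify which range-based estimator is used; the classic example of the family is Parkinson's high-low estimator, sketched here under the assumption of driftless bars (a generic illustration, not necessarily the paper's choice):

```python
import math

def parkinson_volatility(highs, lows):
    """Parkinson range-based volatility estimator.

    Uses the high/low range of each bar; the 1/(4 ln 2) factor makes
    the estimator unbiased for a driftless geometric Brownian motion.
    """
    n = len(highs)
    ssq = sum(math.log(h / l) ** 2 for h, l in zip(highs, lows))
    return math.sqrt(ssq / (4.0 * math.log(2.0) * n))

# Ten bars with a constant 1% high/low range
vol = parkinson_volatility([101.0] * 10, [100.0] * 10)
```

For daily bars the result is a daily volatility; multiply by the square root of the number of trading periods to annualize.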
Procedia PDF Downloads 513
5638 Financial Portfolio Optimization in Electricity Markets: Evaluation via Sharpe Ratio
Authors: F. Gökgöz, M. E. Atmaca
Abstract:
Electricity plays an indispensable role in human life and the economy. It is a unique product or service that must be balanced instantaneously; since electricity cannot be stored, generation and consumption must be kept in proportion. Effective and efficient use of electricity is very important not only for society but also for the environment. A competitive electricity market is one of the best ways to provide a suitable platform for effective and efficient use of electricity. On the other hand, it carries risks that should be carefully managed by the market players, and risk management is an essential part of their decision making. In this paper, risk management through diversification is applied to a case study with the help of Markowitz's mean-variance, downside, and semi-variance methods. The performance of optimal electricity sale solutions is measured and evaluated via the Sharpe ratio, and the optimal portfolio solutions are improved. Two years of historical weekday price data from the Turkish Day-Ahead Market are used to demonstrate the approach.
Keywords: electricity market, portfolio optimization, risk management in electricity market, Sharpe ratio
Procedia PDF Downloads 365
5637 Bias in the Estimation of Covariance Matrices and Optimality Criteria
Authors: Juan M. Rodriguez-Diaz
Abstract:
The precision of parameter estimators in the Gaussian linear model is traditionally accounted for by the variance-covariance matrix of the asymptotic distribution. However, this measure can underestimate the true variance, especially for small samples. Optimal design theory traditionally pays attention to this variance through its relationship with the model's information matrix. For this reason it seems convenient, at least in some cases, to adapt the optimality criteria in order to obtain the best designs for the actual variance structure; otherwise the loss in efficiency of the designs obtained with the traditional approach may be substantial.
Keywords: correlated observations, information matrix, optimality criteria, variance-covariance matrix
Procedia PDF Downloads 443
5636 The Cut-Off Value of TG/HDL Ratio of High Pericardial Adipose Tissue
Authors: Nam-Seok Joo, Da-Eun Jung, Beom-Hee Choi
Abstract:
Background and Objectives: Recently, the triglyceride/high-density lipoprotein cholesterol (TG/HDL) ratio and pericardial adipose tissue (PAT) have gained attention as indicators related to metabolic syndrome (MS). Since, to date, there has been no research on the relationship between TG/HDL and PAT, we aimed to investigate the association between them. Methods: In this cross-sectional study, we investigated 627 patients who underwent coronary multidetector computed tomography and assessed their metabolic parameters. We divided subjects into two groups according to the cut-off PAT volume associated with MS (142.2 cm³) and compared metabolic parameters between those groups. We divided the TG/HDL ratio into tertiles according to Log(TG/HDL) and compared PAT-related parameters by analysis of variance. Finally, we applied logistic regression analysis to obtain the odds ratio of high PAT (PAT volume ≥ 142.2 cm³) in each tertile, and we performed receiver operating characteristic (ROC) analysis to determine the cut-off of the TG/HDL ratio for high PAT. Results: The mean TG/HDL ratio of the high PAT volume group was 3.6, and the TG/HDL ratio had a strong positive correlation with various metabolic parameters. In addition, among the Log(TG/HDL) tertile groups, the higher tertiles had more metabolic derangements, including PAT, and showed higher odds ratios of having high PAT (OR = 4.10 in the second tertile group and OR = 5.06 in the third tertile group, respectively) after adjustment for age, sex, and smoking. ROC analysis yielded a TG/HDL cut-off of 1.918 for increased PAT (p < 0.001). Conclusion: The TG/HDL ratio and high PAT volume have a significant positive correlation, and a higher TG/HDL ratio indicated high PAT. The cut-off value of the TG/HDL ratio for high PAT was 1.918.
Keywords: triglyceride, high-density lipoprotein, pericardial adipose tissue, cut-off value
Procedia PDF Downloads 15
5635 Distributed Energy Storage as a Potential Solution to Electrical Network Variance
Authors: V. Rao, A. Bedford
Abstract:
As the efficient performance of the national grid becomes increasingly important for maintaining electrical network stability, the balance between generation and demand must be effectively maintained. To do this, any losses that occur in the power network must be reduced by compensating for them. In this paper, one of the main causes of losses in the network is identified as variance, which hinders the grid's power-carrying capacity. The reason for the variance in the grid is investigated and identified as the rise in the integration of renewable energy sources (RES) such as wind and solar power. The intermittent nature of these RES, along with fluctuating demand, gives rise to variance in the electrical network. The losses that occur during this process are estimated by analyzing the network's power profiles. Whilst researchers have identified different ways to tackle this problem, little consideration has been given to energy storage. This paper seeks to redress this by considering the role of energy storage systems as potential solutions for reducing variance in the network. The implementation of suitable energy storage systems for different applications is presented as part of a variance-reduction method, thus contributing towards stable and efficient grid operation.
Keywords: energy storage, electrical losses, national grid, renewable energy, variance
Procedia PDF Downloads 317
5634 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation
Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski
Abstract:
In portfolio selection problems, the reward-risk ratio criterion is optimized to search for a risky portfolio offering the maximum increase of the mean return, in proportion to the risk measure increase, when compared to risk-free investments. In the classical model, following Markowitz, risk is measured by the variance, thus representing the Sharpe ratio optimization and leading to quadratic optimization problems. Several Linear Programming (LP) computable risk measures have been introduced and applied in portfolio optimization. In particular, the Mean Absolute Deviation (MAD) measure has been widely recognized. The reward-risk ratio optimization with the MAD measure can be transformed into an LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total of the number of scenarios and the number of instruments. This may lead to LP models with a huge number of variables and constraints in the case of real-life financial decisions based on several thousand scenarios, thus decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can then be dramatically improved by an alternative model based on the inverse risk-reward ratio minimization and by taking advantage of LP duality. In the introduced LP model, the number of structural constraints is proportional to the number of instruments, so the number of scenarios does not seriously affect the efficiency of the simplex method, guaranteeing easy solvability. Moreover, we show that under a natural restriction on the target value, the MAD risk-reward ratio optimization is consistent with the second-order stochastic dominance rules.
Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming
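Before any LP reformulation, the MAD reward-risk ratio itself is straightforward to evaluate for a fixed portfolio. A minimal sketch, assuming equally likely scenario returns (the optimization over portfolio weights, which is where the LP machinery enters, is omitted):

```python
import statistics

def mean_absolute_deviation(returns):
    """MAD risk measure: mean absolute deviation from the mean."""
    m = statistics.mean(returns)
    return sum(abs(r - m) for r in returns) / len(returns)

def reward_risk_ratio(scenario_returns, rf=0.0):
    """Mean excess return per unit of MAD over equally likely scenarios."""
    excess = statistics.mean(scenario_returns) - rf
    return excess / mean_absolute_deviation(scenario_returns)

scenarios = [0.10, 0.00, -0.04, 0.02]  # hypothetical portfolio scenarios
```

Maximizing this ratio over weights is a fractional program; the abstract's key point is that minimizing the inverse ratio instead, and dualizing, keeps the LP size tied to the number of instruments rather than the number of scenarios.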
Procedia PDF Downloads 406
5633 Sales-Based Dynamic Investment and Leverage Decisions: A Longitudinal Study
Authors: Rihab Belguith, Fathi Abid
Abstract:
The paper develops a system-based approach to investigate the dynamic adjustment of the debt structure and investment policies of Dow Jones index firms. This approach enables the assessment of relations among sales, debt, and investment opportunities by considering the simultaneous effect of market environmental change and future growth opportunities. We integrate firm-specific sales variance into the model to capture industry conditions. Empirical results were obtained through a panel data set of firms from different sectors. The analysis supports the view that environmental change does not affect all industries equally, since operating leverage, and hence the sensitivity to sales variance, differs among industries. Including adjusted firm-specific variance, we find that there is no monotonic relation between leverage, sales, and investment. A firm may choose a low debt level in response to high sales variance, but high leverage to attenuate the negative relation between sales variance and the current level of investment. We further find that while the overall effect of debt maturity on leverage is unaffected by the level of growth opportunities, the shorter the maturity of debt, the smaller the direct effect of sales variance on investment.
Keywords: dynamic panel, investment, leverage decision, sales uncertainty
Procedia PDF Downloads 243
5632 Multi-Objective Optimization of Electric Discharge Machining for Inconel 718
Authors: Pushpendra S. Bharti, S. Maheshwari
Abstract:
Electric discharge machining (EDM) is one of the most widely used non-conventional manufacturing processes for shaping difficult-to-cut materials. The process yield of EDM, in terms of material removal rate, surface roughness, and tool wear rate, may be considerably improved by selecting the optimal combination(s) of process parameters. This paper employs the multi-response signal-to-noise (MRSN) ratio technique to find the optimal combination(s) of process parameters during EDM of Inconel 718. Three cases, viz. high cutting efficiency, high surface finish, and normal machining, have been considered, and the optimal combinations of input parameters have been obtained for each case. Analysis of variance (ANOVA) has been employed to find the dominant parameter(s) in all three cases. The obtained results have also been verified experimentally. The MRSN ratio technique was found to be a simple and effective multi-objective optimization technique.
Keywords: electric discharge machining, material removal rate, surface roughness, tool wear rate, multi-response signal-to-noise ratio, optimization
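The per-response Taguchi S/N ratios that an MRSN technique combines are standard: material removal rate is a larger-the-better response, while surface roughness and tool wear rate are smaller-the-better. A sketch of the two forms (the weighting of the individual ratios into a single MRSN value is omitted, since the abstract does not specify it):

```python
import math

def sn_larger_the_better(values):
    """Taguchi S/N ratio for a response to be maximized (e.g. MRR)."""
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / len(values))

def sn_smaller_the_better(values):
    """Taguchi S/N ratio for a response to be minimized
    (e.g. surface roughness, tool wear rate)."""
    return -10.0 * math.log10(sum(v ** 2 for v in values) / len(values))
```

In both forms a higher S/N value is better, so the transformed responses can be compared or combined on a common scale.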
Procedia PDF Downloads 354
5631 Methods of Variance Estimation in Two-Phase Sampling
Authors: Raghunath Arnab
Abstract:
Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design, and only information on the auxiliary variable is collected. During the second phase, a smaller sample is selected, either from the sample selected in the first phase or from the entire population, using a suitable sampling design, and information on both the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easy and cheap to collect compared with the study variable, and if the relationship between the auxiliary and study variables is strong. If the sample is selected in more than two phases, the resulting sampling design is called multi-phase sampling. In this article we consider how data collected at the first phase can be used at the stages of estimation of the parameter, stratification, selection of the sample, and their combinations in the second phase, in a unified setup applicable to any sampling design and to wider classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for estimating the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes, and testing hypotheses, among other uses. Although the variance is a non-negative quantity, its estimators may not be non-negative. If an estimator of variance is negative, it cannot be used for estimation of confidence intervals, testing of hypotheses, or as a measure of sampling error. The non-negativity properties of the variance estimators are also studied in detail.
Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators
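As a toy illustration of the two-phase idea (not the article's general framework), the classical double-sampling ratio estimator of a mean measures only the cheap auxiliary variable in phase 1 and both variables in a phase-2 subsample; all names and figures below are hypothetical:

```python
import random
import statistics

def two_phase_ratio_estimate(x_values, y_of, n1, n2, seed=1):
    """Two-phase (double) sampling ratio estimator of the mean of y.

    Phase 1 draws a large simple random sample and records only the
    auxiliary variable x; phase 2 subsamples it and also measures the
    study variable y. The phase-1 x-mean calibrates the ratio estimate.
    """
    rng = random.Random(seed)
    phase1 = rng.sample(range(len(x_values)), n1)
    xbar1 = statistics.mean(x_values[i] for i in phase1)
    phase2 = rng.sample(phase1, n2)
    xbar2 = statistics.mean(x_values[i] for i in phase2)
    ybar2 = statistics.mean(y_of(i) for i in phase2)
    return ybar2 * xbar1 / xbar2

# Toy check: y = 2x exactly, so the estimate should sit near twice
# the population mean of x (2 * 50.5 = 101)
pop_x = [float(v) for v in range(1, 101)]
estimate = two_phase_ratio_estimate(pop_x, lambda i: 2.0 * pop_x[i], n1=60, n2=20)
```

The gain over using the phase-2 sample alone comes precisely from the strong x-y relationship the abstract mentions.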
Procedia PDF Downloads 588
5630 Application of Golden Ratio in Contemporary Textile Industry and Its Effect on Consumer Preferences
Authors: Rafia Asghar, Abdul Hafeez
Abstract:
This research aims to determine the influence of Fibonacci numbers and the golden ratio in textile designs. The study was carried out by collecting a variety of designs from different textile industries. Top textile designers were also interviewed regarding the golden ratio and its application in their designs and design execution process. The study revealed that most of the designs conformed to the golden ratio, and that designs following the golden ratio were preferred by consumers.
Keywords: golden ratio, Fibonacci numbers, textile design, designs
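The connection between the two concepts in the title is that ratios of successive Fibonacci numbers converge to the golden ratio, which a quick numerical check confirms:

```python
def fibonacci_ratios(n):
    """Ratios of successive Fibonacci numbers F(k+1)/F(k), which
    converge to the golden ratio phi = (1 + sqrt(5)) / 2 ≈ 1.618."""
    a, b = 1, 1
    ratios = []
    for _ in range(n):
        a, b = b, a + b
        ratios.append(b / a)
    return ratios

phi = (1.0 + 5.0 ** 0.5) / 2.0
```

After a dozen terms the ratio already agrees with phi to about five decimal places, which is why Fibonacci-based proportions are a practical stand-in for golden-ratio layouts in design work.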
Procedia PDF Downloads 718
5629 The Evaluation of the Performance of Different Filtering Approaches in Tracking Problem and the Effect of Noise Variance
Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri
Abstract:
The performance of different filtering approaches depends on the modeling of the dynamical system and the algorithm structure. For modeling and smoothing the data, the evaluation of the posterior distribution in each filtering approach should be chosen carefully. In this paper, different filtering approaches, such as the Kalman filter, EKF, UKF, EKS, and the RTS smoother, are simulated on several trajectory-tracking problems, and the accuracy and limitations of these approaches are explained. The probability of the model under the different filters is then compared, and finally the effect of the noise variance on estimation is described with simulation results.
Keywords: Gaussian approximation, Kalman smoother, parameter estimation, noise variance
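The simplest member of the family compared here is the scalar Kalman filter, which also makes the role of the assumed noise variances explicit; a generic sketch (not the paper's simulation setup):

```python
def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state.

    q is the assumed process-noise variance and r the measurement-noise
    variance; their ratio sets the steady-state gain, so the assumed
    noise variances directly control how aggressively the filter
    tracks new measurements.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q               # predict step inflates the uncertainty
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # measurement update
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Constant signal observed 200 times: the estimate converges to 5
track = kalman_1d([5.0] * 200, q=0.01, r=1.0)
```

Raising q (or lowering r) increases the gain and speeds up tracking at the cost of noisier estimates, which is exactly the noise-variance effect the paper studies across filter variants.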
Procedia PDF Downloads 439
5628 A Mean–Variance–Skewness Portfolio Optimization Model
Authors: Kostas Metaxiotis
Abstract:
Portfolio optimization is one of the most important topics in finance. This paper proposes a mean–variance–skewness (MVS) portfolio optimization model. Traditionally, the portfolio optimization problem is solved using the mean–variance (MV) framework. In this study, we formulate the proposed model as a three-objective optimization problem in which the portfolio's expected return and skewness are maximized and the portfolio risk is minimized. For solving the proposed three-objective portfolio optimization model, we apply an adapted version of the non-dominated sorting genetic algorithm (NSGA-II). Finally, we use a real dataset from the FTSE 100 to validate the proposed model.
Keywords: evolutionary algorithms, portfolio optimization, skewness, stock selection
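The three objectives are the first three moments of the portfolio return. A sketch of their computation for a fixed weight vector over equally likely scenarios (the NSGA-II search over weights, the heart of the paper, is omitted; the example data are hypothetical):

```python
import statistics

def portfolio_moments(weights, scenario_returns):
    """Mean, variance, and skewness of portfolio returns across
    equally likely scenarios (rows of per-asset returns)."""
    port = [sum(w * r for w, r in zip(weights, row))
            for row in scenario_returns]
    m = statistics.mean(port)
    v = statistics.pvariance(port)
    skew = sum((x - m) ** 3 for x in port) / (len(port) * v ** 1.5)
    return m, v, skew

mean_r, var_r, skew_r = portfolio_moments(
    [0.5, 0.5],
    [[0.0, 0.0], [0.0, 0.0], [0.3, 0.3]],
)
```

Because skewness involves a cubic term, the feasible set is non-convex, which is why a population-based method such as NSGA-II is a natural fit for tracing the three-objective Pareto front.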
Procedia PDF Downloads 198
5627 Portfolio Optimization under a Hybrid Stochastic Volatility and Constant Elasticity of Variance Model
Authors: Jai Heui Kim, Sotheara Veng
Abstract:
This paper studies the portfolio optimization problem for a pension fund under a hybrid model of stochastic volatility and constant elasticity of variance (CEV), using an asymptotic analysis method. When the volatility component is fast mean-reverting, asymptotic approximations for the value function and the optimal strategy can be derived for general utility functions. Explicit solutions are given for the exponential and hyperbolic absolute risk aversion (HARA) utility functions. The study also shows that using the leading-order optimal strategy recovers the value function not only up to the leading order but also up to the first-order correction term. A practical strategy that does not depend on the unobservable volatility level is suggested. The result extends Merton's solution to the case where stochastic volatility and elasticity of variance are considered simultaneously.
Keywords: asymptotic analysis, constant elasticity of variance, portfolio optimization, stochastic optimal control, stochastic volatility
Procedia PDF Downloads 299
5626 The Effect of "Trait" Variance of Personality on Depression: Application of the Trait-State-Occasion Modeling
Authors: Pei-Chen Wu
Abstract:
Both preexisting cross-sectional and longitudinal studies of the personality-depression relationship have suffered from one main limitation: they ignored the fact that the stability of the constructs of interest (e.g., personality and depression) can be expected to influence the estimate of the association between personality and depression. To address this limitation, Trait-State-Occasion (TSO) modeling was adopted to analyze the sources of variance of the focal constructs. TSO modeling operates by partitioning a state variance into time-invariant (trait) and time-variant (occasion) components. Within a TSO framework, it is possible to predict change in the part of a construct that really changes (i.e., the time-variant variance) while controlling the trait variances. 750 high school students were followed for 4 waves over six-month intervals. The baseline data (T1) were collected from senior high schools (participants aged 14 to 15 years). Participants were given the Beck Depression Inventory and the Big Five Inventory at each assessment. TSO modeling revealed that 70-78% of the variance in personality (five constructs) was stable over the follow-up period, whereas 57-61% of the variance in depression was stable. For the personality constructs, 7.6% to 8.4% of the total variance came from the autoregressive occasion factors; for the depression construct, 15.2% to 18.1% of the total variance came from the autoregressive occasion factors. Additionally, results showed that when controlling for initial symptom severity, the time-invariant components of all five dimensions of personality were predictive of change in depression (Extraversion: B = .32, Openness: B = -.21, Agreeableness: B = -.27, Conscientiousness: B = -.36, Neuroticism: B = .39). Because the five dimensions of personality share some variance, models in which all five dimensions simultaneously predict change in depression were also investigated. The time-invariant components of the five dimensions remained significant predictors of change in depression (Extraversion: B = .30, Openness: B = -.24, Agreeableness: B = -.28, Conscientiousness: B = -.35, Neuroticism: B = .42). In sum, the majority of the variability in personality was stable over 2 years. Individuals with a greater tendency toward Extraversion and Neuroticism have higher degrees of depression; individuals with a greater tendency toward Openness, Agreeableness, and Conscientiousness have lower degrees of depression.
Keywords: assessment, depression, personality, trait-state-occasion model
Procedia PDF Downloads 175
5625 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction
Authors: Bastien Batardière, Joon Kwon
Abstract:
For finite-sum optimization, variance-reduced (VR) gradient methods compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance that has contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG, and benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions, we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that RMSprop and Adam combined with variance-reduced gradient estimators achieve even faster convergence.
Keywords: convex optimization, variance reduction, adaptive algorithms, loopless
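The loopless estimator at the core of L-SVRG is compact: instead of SVRG's outer loop, the full-gradient anchor is refreshed with a small probability at every step. A sketch with plain constant steps, i.e. without the AdaGrad component of AdaLVR, on a hypothetical toy problem:

```python
import random

def l_svrg(grad_i, n, x0, step, p, iters, seed=0):
    """Loopless SVRG with the estimator g_i(x) - g_i(anchor) + full(anchor).

    The anchor (and its full gradient) is refreshed with probability p
    per step, keeping the expected per-iteration cost at O(1) component
    gradients plus an O(n) refresh every ~1/p steps.
    """
    rng = random.Random(seed)

    def full_grad(point):
        sums = [0.0] * len(point)
        for i in range(n):
            for j, gj in enumerate(grad_i(i, point)):
                sums[j] += gj
        return [s / n for s in sums]

    x, anchor = list(x0), list(x0)
    mu = full_grad(anchor)
    for _ in range(iters):
        i = rng.randrange(n)
        gx, ga = grad_i(i, x), grad_i(i, anchor)
        est = [a - b + c for a, b, c in zip(gx, ga, mu)]  # VR estimator
        x = [xj - step * ej for xj, ej in zip(x, est)]
        if rng.random() < p:
            anchor = list(x)
            mu = full_grad(anchor)
    return x

# Toy finite sum: f_i(x) = 0.5 * (x - a_i)^2, minimizer is mean(a) = 2.0
targets = [1.0, 2.0, 3.0]
sol = l_svrg(lambda i, x: [x[0] - targets[i]], n=3, x0=[0.0],
             step=0.1, p=0.1, iters=2000)
```

Because the estimator's variance vanishes as the anchor approaches the optimum, constant steps converge to the exact minimizer here, unlike plain SGD; AdaLVR replaces the fixed `step` with AdaGrad-style per-coordinate learning rates.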
Procedia PDF Downloads 70
5624 Analysis of Fixed Beamforming Algorithms for Smart Antenna Systems
Authors: Muhammad Umair Shahid, Abdul Rehman, Mudassir Mukhtar, Muhammad Nauman
Abstract:
The smart antenna is a prominent technology that has emerged in recent years to meet the growing demands of wireless communications, and its application is growing steadily in increasingly congested environments. A methodical evaluation of the performance of fixed beamforming algorithms for smart antennas, such as the Multiple Sidelobe Canceller (MSC), Maximum Signal-to-Interference Ratio (MSIR), and Minimum Variance Distortionless Response (MVDR) beamformers, is comprehensively presented in this paper. Simulation results show that beamforming is helpful in providing an optimized response towards desired directions, with the MVDR beamformer providing the most optimal solution.
Keywords: fixed weight beamforming, array pattern, signal to interference ratio, power efficiency, element spacing, array elements, optimum weight vector
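MVDR weights follow the closed form w = R⁻¹a / (aᴴR⁻¹a), where R is the array covariance matrix and a the steering vector of the look direction. A self-contained sketch with a small complex-valued linear solver (generic, not the paper's simulation setup):

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for complex matrices."""
    n = len(A)
    M = [list(row) + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def mvdr_weights(R, a):
    """MVDR beamformer: w = R^{-1} a / (a^H R^{-1} a).

    Minimizes output power subject to a distortionless response
    (w^H a = 1) in the look direction with steering vector a.
    """
    Ra = solve_linear(R, a)
    denom = sum(ai.conjugate() * xi for ai, xi in zip(a, Ra))
    return [xi / denom for xi in Ra]

# Two-element array with white-noise covariance: w reduces to a / (a^H a)
R = [[2.0 + 0j, 0j], [0j, 2.0 + 0j]]
a = [1.0 + 0j, 1j]
w = mvdr_weights(R, a)
```

The distortionless constraint wᴴa = 1 is what lets MVDR suppress interference and noise without attenuating the desired direction, which is why it outperforms the simpler fixed beamformers in the simulations.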
Procedia PDF Downloads 182
5623 Evaluation of the Operating Parameters for Biodiesel Production Using a Membrane Reactor
Authors: S. S. L. Andrade, E. A. Souza, L. C. L. Santos, C. Moraes, A. K. C. L. Lobato
Abstract:
Biodiesel production using a membrane reactor has become increasingly studied because this process minimizes some of the main problems encountered in biodiesel purification. The membrane reactor reduces post-treatment steps, since reaction and product separation may occur simultaneously, resulting in cost savings and enabling the competitiveness of biodiesel produced by homogeneous alkaline catalysis. In order to evaluate the production of biodiesel from soybean oil using a tubular membrane reactor, a 2³ factorial experimental design was conducted to evaluate the influence of the following variables: temperature (45 to 60 °C), catalyst concentration (0.5 to 1% by weight), and oil/methanol molar ratio (1/6 to 1/9). In addition, parametric sensitivity was evaluated by analysis of variance and modeled through the response surface. The results showed that the variables influenced the reaction conversion. The effect was largest for the catalyst concentration, followed by the oil/methanol molar ratio and finally the temperature. The best result was obtained under the conditions of 1% catalyst (KOH), an oil/methanol molar ratio of 1/9, and a temperature of 60 °C, resulting in an ester content of 99.07%.
Keywords: biodiesel production, factorial design, membrane reactor, soybean oil
Procedia PDF Downloads 377
5622 Physico-Mechanical Properties of Wood-Plastic Composites Produced from Polyethylene Terephthalate Plastic Bottle Wastes and Sawdust of Three Tropical Hardwood Species
Authors: Amos Olajide Oluyege, Akpanobong Akpan Ekong, Emmanuel Uchechukwu Opara, Sunday Adeniyi Adedutan, Joseph Adeola Fuwape, Olawale John Olukunle
Abstract:
This study was carried out to evaluate the influence of wood species and wood/plastic ratio on the physical and mechanical properties of wood-plastic composites (WPCs) produced from polyethylene terephthalate (PET) plastic bottle wastes and sawdust from three hardwood species, namely Terminalia superba, Gmelina arborea, and Ceiba pentandra. The experimental WPCs were prepared from sawdust particle size classes of ≤ 0.5, 0.5-1.0, and 1.0-2.0 mm at wood/plastic ratios of 40:60, 50:50, and 60:40 (percentage by weight). The WPCs for each combination of study variables were prepared in 3 replicates and laid out in a randomized complete block design (RCBD). The physical properties investigated were water absorption (WA), linear expansion (LE), and thickness swelling (TS), while the mechanical properties evaluated were modulus of elasticity (MOE) and modulus of rupture (MOR). The mean values for WA, LE, and TS ranged from 1.07 to 34.04, 0.11 to 1.76, and 0.11 to 4.05%, respectively. The mean values of the three physical properties increased with decreasing wood/plastic ratio. A wood/plastic ratio of 40:60 at each particle size class generally resulted in the lowest values, while a ratio of 60:40 gave the highest values for each of the three species. For each of the physical properties, T. superba had the lowest mean values, followed by G. arborea, while the highest values were observed for C. pentandra. The mean values for MOE and MOR ranged from 458.17 to 1875.67 and 2.64 to 18.39 N/mm², respectively. The mean values of the two mechanical properties decreased with increasing wood/plastic ratio. A wood/plastic ratio of 40:60 at each wood particle size class generally gave the highest values, while a ratio of 60:40 gave the lowest values for each of the three species. For each of the mechanical properties, C. pentandra had the highest mean values, followed by G. arborea, while the lowest values were observed for T. superba. Both the physical and mechanical properties improved as the sawdust particle size class decreased, with the particle size class of ≤ 0.5 mm giving the best results. The results of the analysis of variance revealed significant (P < 0.05) effects of the three study variables (wood species, sawdust particle size class, and wood/plastic ratio) on all the physical and mechanical properties of the WPCs. It can be concluded from this study that wood-plastic composites with acceptable physical and mechanical properties are better produced from sawdust of particle size ≤ 0.5 mm and PET plastic bottle wastes at a 40:60 wood/plastic ratio, and that at this ratio all three species are suitable for the production of wood-plastic composites.
Keywords: polyethylene terephthalate plastic bottle wastes, wood plastic composite, physical properties, mechanical properties
Procedia PDF Downloads 201
5621 The Effects and Interactions of Synthesis Parameters on Properties of Mg Substituted Hydroxyapatite
Authors: S. Sharma, U. Batra, S. Kapoor, A. Dua
Abstract:
In this study, the effects and interactions of reaction time and capping-agent assistance during sol-gel synthesis of magnesium-substituted hydroxyapatite (MgHA) nanopowder on the hydroxyapatite (HA) to β-tricalcium phosphate (β-TCP) ratio, the Ca/P ratio, and the mean crystallite size were examined experimentally as well as through statistical analysis. MgHA nanopowders were synthesized by the sol-gel technique at room temperature using aqueous solutions of calcium nitrate tetrahydrate, magnesium nitrate hexahydrate, and potassium dihydrogen phosphate as starting materials. The reaction time for the sol-gel synthesis was varied between 15 and 60 minutes. Two process routes were followed, with and without the addition of triethanolamine (TEA) to the solutions. The elemental compositions of the as-synthesized powders were determined using X-ray fluorescence (XRF) spectroscopy. The functional groups present in the as-synthesized MgHA nanopowders were established through Fourier transform infrared spectroscopy (FTIR). The amounts of the phases present, the Ca/P ratio, and the mean crystallite sizes of the MgHA nanopowders were determined using X-ray diffraction (XRD). The HA content in the biphasic mixture of HA and β-TCP and the Ca/P ratio in the as-synthesized MgHA nanopowders increased with the reaction time of the sols (p < 0.0001, two-way ANOVA); however, they were independent of TEA addition (p > 0.15, two-way ANOVA). The MgHA nanopowders synthesized with TEA assistance exhibited a 14 nm smaller crystallite size (p < 0.018, two-sample t-test) than the powder synthesized without TEA assistance.
Keywords: capping agent, hydroxyapatite, regression analysis, sol-gel, two-sample t-test, two-way analysis of variance (ANOVA)
Procedia PDF Downloads 370
5620 Surveillance Video Summarization Based on Histogram Differencing and Sum Conditional Variance
Authors: Nada Jasim Habeeb, Rana Saad Mohammed, Muntaha Khudair Abbass
Abstract:
For more efficient and faster video summarization, this paper presents a surveillance video summarization method that improves on existing techniques. The method relies on temporal differencing to extract the most important data from a large video stream, using histogram differencing and sum conditional variance, which are robust against illumination variations, to extract moving objects. The experimental results showed that the presented method gives better output than summarization techniques based on temporal differencing alone.
Keywords: temporal differencing, video summarization, histogram differencing, sum conditional variance
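A minimal sketch of the histogram-differencing step might look like the following; it is not the authors' exact implementation (which also uses sum conditional variance), and the synthetic scene change stands in for real surveillance footage.

```python
import numpy as np

def select_key_frames(frames, bins=32, k=1.0):
    """Pick frames whose histogram change from the previous frame is large.

    A minimal sketch of histogram differencing: frames whose histogram
    L1 distance from the previous frame exceeds mean + k*std of all
    distances are kept as summary (key) frames.
    """
    hists = [np.histogram(f, bins=bins, range=(0, 255))[0].astype(float)
             for f in frames]
    dists = np.array([np.abs(hists[i] - hists[i - 1]).sum()
                      for i in range(1, len(hists))])
    threshold = dists.mean() + k * dists.std()
    # Frame i+1 is a key frame when its change from frame i is above threshold.
    return [i + 1 for i, d in enumerate(dists) if d > threshold]

# Synthetic example: 10 near-identical dark frames, then a bright scene change.
rng = np.random.default_rng(1)
frames = [rng.integers(0, 60, (48, 64)) for _ in range(10)]
frames += [rng.integers(180, 255, (48, 64)) for _ in range(5)]
print(select_key_frames(frames))
```

The scene change at frame 10 is the only frame whose histogram distance clears the adaptive threshold.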
Procedia PDF Downloads 348
5619 The Influence of Feedgas Ratio on the Ethene Hydroformylation using Rh-Co Bimetallic Catalyst Supported by Reduced Graphene Oxide
Authors: Jianli Chang, Yusheng Zhang, Yali Yao, Diane Hildebrandt, Xinying Liu
Abstract:
The influence of the feed-gas ratio on ethene hydroformylation over an Rh-Co bimetallic catalyst supported by reduced graphene oxide (RGO) has been investigated in a tubular fixed-bed reactor. Argon was used as the balance gas when the feed-gas ratio was changed, which keeps the partial pressures of the other two gases constant while the ratio of one component is varied. First, the effect of each single-component ratio (H₂, C₂H₄ and CO) on the performance of ethene hydroformylation was studied in turn. Then an optimized ratio was found that gives a high selectivity to C₃ oxygenates. The results showed that: (1) 0.5%Rh-20%Co/RGO is a promising heterogeneous catalyst for ethene hydroformylation; (2) H₂ and CO have a more significant influence than C₂H₄ on the selectivity to oxygenates; (3) a lower H₂ ratio and a higher CO ratio in the feed-gas lead to a higher selectivity to oxygenates; (4) the highest selectivity to oxygenates, 61.70%, was obtained at the feed-gas ratio CO : C₂H₄ : H₂ = 4 : 2 : 1.
Keywords: ethene hydroformylation, reduced graphene oxide, rhodium cobalt bimetallic catalyst, the effect of feed-gas ratio
Procedia PDF Downloads 163
5618 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling
Authors: Vibha Devi, Shabina Khanam
Abstract:
Hemp (Cannabis sativa) possesses a rich content of the essential fatty acids ω-6 linoleic and ω-3 α-linolenic acid in the ratio of 3:1, a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and serve as a remedy for arthritis and various other disorders. The present study employs supercritical fluid extraction (SFE) of hemp seed at various levels of the parameters temperature (40 - 80) °C, pressure (200 - 350) bar, flow rate (5 - 15) g/min, particle size (0.430 - 1.015) mm and amount of co-solvent (0 - 10) % of solvent flow rate, through a central composite design (CCD). The CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resample to obtain information about the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques; they create a large number of datasets by resampling from the original dataset and analyze them to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement; here the resample size is 31 (one observation eliminated), repeated 32 times. Bootstrap is the frequently used statistical approach of estimating the sampling distribution of an estimator by resampling with replacement from the original sample; here the resample size is 32, repeated 100 times. The estimands for these resampling techniques are the mean, standard deviation, coefficient of variation and standard error of the mean.
For the ω-6 linoleic acid concentration, the mean value was approximately 58.5 % for both resampling methods, which is the average (central value) of the sample means over all resamples. Similarly, for the ω-3 α-linolenic acid concentration, the mean was observed as 22.5 % through both resampling methods. Variance measures the spread of the data about its mean; a greater variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66 %) and 6 for ω-3 α-linolenic acid (ranging from 16.71 to 26.2 %). Further, the low standard deviation (approx. 1 %), low standard error of the mean (< 0.8) and low coefficient of variation (< 0.2) reflect the accuracy of the sample for prediction. All the estimates of the coefficient of variation, standard deviation and standard error of the mean are found within the 95 % confidence interval.
Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation
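The two resampling schemes described above can be sketched as follows. The sample below is synthetic, drawn to mimic the reported ω-6 mean of about 58.5 %; it is not the paper's measurements.

```python
import numpy as np

def jackknife(data, stat=np.mean):
    """Leave-one-out estimates and the jackknife standard error of `stat`."""
    n = len(data)
    loo = np.array([stat(np.delete(data, i)) for i in range(n)])
    se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    return loo.mean(), se

def bootstrap(data, stat=np.mean, n_rep=100, seed=0):
    """Resample with replacement n_rep times; return mean and std of `stat`."""
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(data, size=len(data), replace=True))
                     for _ in range(n_rep)])
    return reps.mean(), reps.std(ddof=1)

# Illustrative sample of 32 hypothetical omega-6 concentration values (%);
# the paper's actual measurements are not reproduced here.
rng = np.random.default_rng(42)
sample = rng.normal(loc=58.5, scale=1.0, size=32)

jk_mean, jk_se = jackknife(sample)
bs_mean, bs_se = bootstrap(sample)
print(f"jackknife: mean={jk_mean:.2f}, SE={jk_se:.3f}")
print(f"bootstrap: mean={bs_mean:.2f}, SE={bs_se:.3f}")
```

For the mean, the jackknife standard error coincides with the usual formula s/√n, while the bootstrap estimates it empirically from the 100 resampled means.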
Procedia PDF Downloads 140
5617 Beyond Classic Program Evaluation and Review Technique: A Generalized Model for Subjective Distributions with Flexible Variance
Authors: Byung Cheol Kim
Abstract:
The Program Evaluation and Review Technique (PERT) is widely used for project management, but it struggles with subjective distributions, particularly because of its assumptions of constant variance and light tails. To overcome these limitations, we propose the Generalized PERT (G-PERT) model, which enhances PERT by incorporating variability in three-point subjective estimates. Our methodology extends the original PERT model to cover the full range of unimodal beta distributions, enabling the model to handle heavy-tailed distributions, and offers formulas for computing the mean and variance. This maintains the simplicity of PERT while providing a more accurate depiction of uncertainty. Our empirical analysis demonstrates that the G-PERT model significantly improves performance, particularly when dealing with heavy-tailed subjective distributions. In comparative assessments with alternative models such as triangular and lognormal distributions, G-PERT shows superior accuracy and flexibility. These results suggest that G-PERT offers a more robust solution for project estimation while still retaining the user-friendliness of the classic PERT approach.
Keywords: PERT, subjective distribution, project management, flexible variance
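For reference, the classic three-point PERT formulas that G-PERT generalizes can be sketched as below; the abstract does not give the G-PERT formulas themselves, so only the standard baseline is shown.

```python
def pert_estimates(a, m, b):
    """Classic three-point PERT: optimistic a, most likely m, pessimistic b.

    mean = (a + 4m + b) / 6 and variance = ((b - a) / 6) ** 2. Note that the
    variance depends only on the range b - a, which is exactly the
    constant-variance assumption the G-PERT model is designed to relax.
    """
    mean = (a + 4 * m + b) / 6
    variance = ((b - a) / 6) ** 2
    return mean, variance

# Example task duration estimates in days.
mean, var = pert_estimates(a=4, m=7, b=16)
print(mean, var)  # 8.0 4.0
```

Two tasks with the same range b - a but very different shapes get identical PERT variances, which is the limitation the paper addresses.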
Procedia PDF Downloads 18
5616 Taylor’s Law and Relationship between Life Expectancy at Birth and Variance in Age at Death in Period Life Table
Authors: David A. Swanson, Lucky M. Tedrow
Abstract:
Taylor’s Law is a widely observed empirical pattern that relates variances to means in sets of non-negative measurements via an approximate power function and has found application to human mortality. This study adds to this research by showing that Taylor’s Law leads to a model that reasonably describes the relationship between life expectancy at birth (e0, which also equals the mean age at death in a life table) and variance in age at death in seven World Bank regional life tables measured at two points in time, 1970 and 2000. Using as a benchmark a non-random sample of four Japanese female life tables covering the period from 1950 to 2004, the study finds that the simple linear model provides reasonably accurate estimates of variance in age at death in a life table from e0, where the latter ranges from 60.9 to 85.59 years. Employing 2017 life tables from the Human Mortality Database, the simple linear model is used to provide estimates of variance in age at death for six countries, three with high e0 values and three with lower e0 values. The paper provides a substantive interpretation of Taylor’s Law relative to e0 and concludes by arguing that reasonably accurate estimates of variance in age at death in a period life table can be calculated using this approach. The approach can also be used where e0 itself is estimated rather than generated through the construction of a life table, a useful feature of the model.
Keywords: empirical pattern, mean age at death in a life table, mean age of a stationary population, stationary population
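The simple linear model described can be sketched as an ordinary least-squares fit of variance in age at death on e0. The (e0, variance) pairs below are illustrative made-up values, not the World Bank or Human Mortality Database figures.

```python
import numpy as np

# Illustrative (e0, variance-in-age-at-death) pairs; these are hypothetical
# values for demonstration, not the paper's life-table data.
e0 = np.array([60.9, 68.0, 74.5, 80.2, 85.6])
var_age_death = np.array([420.0, 340.0, 270.0, 210.0, 150.0])

# Fit variance = b0 + b1 * e0 by ordinary least squares.
b1, b0 = np.polyfit(e0, var_age_death, deg=1)

def predict(e):
    """Estimated variance in age at death for a given life expectancy e0."""
    return b0 + b1 * e

print(f"slope = {b1:.2f}, intercept = {b0:.1f}, "
      f"var at e0 = 77: {predict(77.0):.1f}")
```

A negative slope is what the paper's substantive claim implies: as e0 rises, deaths compress into a narrower age range, so the variance in age at death falls.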
Procedia PDF Downloads 330
5615 The Simple Two-Step Polydimethylsiloxane (PDMS) Transferring Process for High Aspect Ratio Microstructures
Authors: Shaoxi Wang, Pouya Rezai
Abstract:
High-aspect-ratio features are necessary parts of complex microstructures. Some available methods for achieving high aspect ratios require expensive materials or complex processing; others make it difficult to realize even simple high-aspect-ratio structures. This paper presents a simple and cheap two-step polydimethylsiloxane (PDMS) transferring process to obtain high-aspect-ratio single pillars, which only requires covering the PDMS mold with Brij 52 surfactant solution. The experimental results demonstrate the method's efficiency and effectiveness.
Keywords: high aspect ratio, microstructure, PDMS, Brij
Procedia PDF Downloads 264
5614 Estimates of (Co)Variance Components and Genetic Parameters for Body Weights and Growth Efficiency Traits in the New Zealand White Rabbits
Authors: M. Sakthivel, A. Devaki, D. Balasubramanyam, P. Kumarasamy, A. Raja, R. Anilkumar, H. Gopi
Abstract:
The genetic parameters of growth traits in the New Zealand White rabbits maintained at Sheep Breeding and Research Station, Sandynallah, The Nilgiris, India were estimated by partitioning the variance and covariance components. The (co)variance components of body weights at weaning (W42), post-weaning (W70) and marketing (W135) age and of the growth efficiency traits, viz., average daily gain (ADG), relative growth rate (RGR) and Kleiber ratio (KR), estimated on a daily basis over different age intervals (1 = 42 to 70 days; 2 = 70 to 135 days; 3 = 42 to 135 days) from weaning to marketing, were estimated by restricted maximum likelihood, fitting six animal models with various combinations of direct and maternal effects. Data were collected over a period of 15 years (1998 to 2012). A log-likelihood ratio test was used to select the most appropriate univariate model for each trait, which was subsequently used in bivariate analysis. Heritability estimates for W42, W70 and W135 were 0.42 ± 0.07, 0.40 ± 0.08 and 0.27 ± 0.07, respectively. Heritability estimates of the growth efficiency traits were moderate to high (0.18 to 0.42). Of the total phenotypic variation, the maternal genetic effect contributed 14 to 32% for the early body weight traits (W42 and W70) and ADG1. The contribution of the maternal permanent environmental effect varied from 6 to 18% for W42 and for all the growth efficiency traits except KR2. The maternal permanent environmental effect on most of the growth efficiency traits was a carryover effect of maternal care during weaning. The direct-maternal genetic correlations, for the traits in which the maternal genetic effect was significant, were moderate to high in magnitude and negative in direction. The maternal effect declined as the age of the animal increased. The estimates of total heritability and maternal across-year repeatability for growth traits were moderate, and an optimum rate of genetic progress seems possible in the herd by mass selection.
The estimates of genetic and phenotypic correlations among body weight traits were moderate to high and positive; among growth efficiency traits they were low to high with varying directions; between body weights and growth efficiency traits they were very low to high in magnitude and mostly negative in direction. The moderate to high heritabilities and high genetic correlations among body weight traits promise good scope for genetic improvement, provided measures are taken to keep inbreeding at the lowest level.
Keywords: genetic parameters, growth traits, maternal effects, rabbit genetics
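The log-likelihood ratio test used above for choosing among nested animal models can be sketched as follows. The REML log-likelihood values are hypothetical; note also that when a variance component is tested on its boundary (zero), the nominal chi-square p-value is conservative.

```python
from scipy.stats import chi2

def lr_test(loglik_reduced, loglik_full, extra_params):
    """Likelihood-ratio test between two nested (animal) models.

    The statistic 2 * (llf - llr) is referred to a chi-square distribution
    with df equal to the number of extra (co)variance parameters in the
    fuller model.
    """
    lrt = 2.0 * (loglik_full - loglik_reduced)
    p = chi2.sf(lrt, df=extra_params)
    return lrt, p

# Hypothetical REML log-likelihoods: a direct-effects-only model versus one
# adding a maternal genetic variance (one extra parameter).
lrt, p = lr_test(loglik_reduced=-1052.3, loglik_full=-1046.1, extra_params=1)
print(f"LRT = {lrt:.1f}, p = {p:.4f}")
```

A significant result would justify retaining the maternal genetic effect for that trait, as the paper reports for the early body weights.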
Procedia PDF Downloads 447
5613 Wind Turbine Control Performance Evaluation Based on Minimum-Variance Principles
Authors: Zheming Cao
Abstract:
Control loops are among the most important components in a wind turbine system. Product quality, operational safety and economic performance are directly or indirectly connected to the performance of the control systems. This paper proposes a performance evaluation method based on the minimum-variance principle for wind turbine control systems, which can be applied to the PID controller of the pitch control system in a wind turbine. The good performance demonstrated in the paper was achieved by retuning and optimizing the controller settings based on the evaluation result. The concepts presented are illustrated with actual data from an industrial wind farm.
Keywords: control performance, evaluation, minimum-variance, wind turbine
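One common way to build a minimum-variance benchmark from routine closed-loop data is a Harris-type index; the sketch below is a generic version under that assumption and may differ from the paper's exact formulation.

```python
import numpy as np

def mv_performance_index(y, delay, n_lags=10):
    """Harris-type minimum-variance index from routine closed-loop data.

    Regress y[t] on outputs at lags >= the process delay; the residual
    variance approximates the minimum achievable variance, and the index
    sigma_mv^2 / sigma_y^2 lies in (0, 1], with values near 1 meaning the
    loop is close to minimum-variance performance.
    """
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    rows, targets = [], []
    for t in range(delay + n_lags, len(y)):
        rows.append(y[t - delay - n_lags : t - delay + 1][::-1])
        targets.append(y[t])
    X, Y = np.array(rows), np.array(targets)
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ coef
    return resid.var() / Y.var()

# Synthetic check: white noise is already at minimum variance (index near 1);
# the same noise through a sluggish filter has removable variance (index < 1).
rng = np.random.default_rng(3)
e = rng.normal(size=5000)
sluggish = np.convolve(e, [1.0, 0.9, 0.8, 0.7, 0.6], mode="valid")
print(round(mv_performance_index(e, delay=1), 2),
      round(mv_performance_index(sluggish, delay=1), 2))
```

A low index on the pitch loop would indicate that retuning the PID settings, as the paper reports doing, can remove a substantial fraction of the output variance.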
Procedia PDF Downloads 370