Search results for: S-EMG estimators

29 Progressive Type-I Interval Censoring with Binomial Removal-Estimation and Its Properties

Authors: Sonal Budhiraja, Biswabrata Pradhan

Abstract:

This work considers statistical inference based on progressive Type-I interval censored data with random removal. The scheme of progressive Type-I interval censoring with random removal can be described as follows. Suppose n identical items are placed on a test at time T0 = 0, with k pre-fixed inspection times T1 < T2 < . . . < Tk, where Tk is the scheduled termination time of the experiment. At inspection time Ti, Ri of the Si remaining surviving units are randomly removed from the experiment. The removal follows a binomial distribution with parameters Si and pi for i = 1, . . . , k, with pk = 1. In this censoring scheme, the number of failures in different inspection intervals and the number of randomly removed items at the pre-specified inspection times are observed. Asymptotic properties of the maximum likelihood estimators (MLEs) are established under some regularity conditions. A β-content γ-level tolerance interval (TI) is determined for the two-parameter Weibull lifetime model using the asymptotic properties of the MLEs. The minimum sample size required to achieve the desired β-content γ-level TI is determined. The performance of the MLEs and TI is studied via simulation.
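
To make the sampling scheme concrete, the short sketch below simulates one run of progressive Type-I interval censoring with binomial removals, assuming Weibull lifetimes; the sample size, inspection times and removal probabilities are illustrative choices, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_progressive_type1(n, T, p, shape, scale):
    """Simulate progressive Type-I interval censoring with binomial removals.

    n      : number of items placed on test at time 0
    T      : increasing inspection times T1 < ... < Tk (Tk ends the test)
    p      : removal probabilities p1, ..., pk (pk should be 1)
    shape, scale : Weibull lifetime parameters (illustrative)
    Returns the number of failures D_i and removals R_i per interval.
    """
    lifetimes = scale * rng.weibull(shape, size=n)   # latent failure times
    at_risk = np.sort(lifetimes)                     # items still on test
    prev, D, R = 0.0, [], []
    for Ti, pi in zip(T, p):
        failed = np.sum((at_risk > prev) & (at_risk <= Ti))
        D.append(int(failed))
        survivors = at_risk[at_risk > Ti]            # the S_i surviving units
        removed = rng.binomial(survivors.size, pi)   # R_i ~ Bin(S_i, p_i)
        R.append(int(removed))
        keep = rng.choice(survivors.size, survivors.size - removed, replace=False)
        at_risk = survivors[keep]
        prev = Ti
    return D, R

D, R = simulate_progressive_type1(
    n=100, T=[0.5, 1.0, 1.5, 2.0], p=[0.2, 0.2, 0.2, 1.0], shape=1.5, scale=1.0)
print("failures per interval:", D, "removals:", R)
```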

Keywords: asymptotic normality, consistency, regularity conditions, simulation study, tolerance interval

Procedia PDF Downloads 223
28 An Approach for Estimation in Hierarchical Clustered Data Applicable to Rare Diseases

Authors: Daniel C. Bonzo

Abstract:

Practical considerations lead to the use of units of analysis within subjects, e.g., bleeding episodes or treatment-related adverse events, in rare disease settings. This is coupled with data augmentation techniques such as extrapolation to enlarge the subject base. In general, one can think about extrapolation of data as extending information and conclusions from one estimand to another estimand. This approach induces hierarchical clustered data with varying cluster sizes. Extrapolation of clinical trial data is being accepted increasingly by regulatory agencies as a means of generating data in diverse situations during the drug development process. Under certain circumstances, data can be extrapolated to a different population, a different but related indication, and a different but similar product. We consider here the problem of estimation (point and interval) using a mixed-models approach under extrapolation. It is proposed that estimators (point and interval) be constructed using weighting schemes for the clusters, e.g., equally weighted and with weights proportional to cluster size. Simulated data generated under varying scenarios are then used to evaluate the performance of this approach. In conclusion, the evaluation results showed that the approach is a useful means of improving statistical inference in rare disease settings and thus aids not only signal detection but risk-benefit evaluation as well.
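
A toy numeric sketch of the weighting idea follows; it only contrasts an equally weighted estimate with a cluster-size-weighted estimate on simulated clustered data and does not reproduce the authors' mixed-models analysis. All numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate hierarchical clustered data: subjects (clusters) with varying
# numbers of within-subject events, e.g. bleeding episodes.
n_clusters = 30
sizes = rng.integers(1, 15, size=n_clusters)            # varying cluster sizes
cluster_effects = rng.normal(0.0, 1.0, size=n_clusters)
cluster_means = np.array([
    rng.normal(2.0 + u, 0.5, size=m).mean()
    for u, m in zip(cluster_effects, sizes)
])

# Equally weighted estimator: every cluster counts the same.
est_equal = cluster_means.mean()
# Weights proportional to cluster size: large clusters dominate.
est_size = np.average(cluster_means, weights=sizes)

print(f"equal weights: {est_equal:.3f}, size-proportional weights: {est_size:.3f}")
```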

Keywords: clustered data, estimand, extrapolation, mixed model

Procedia PDF Downloads 117
27 State Estimation of a Biotechnological Process Using Extended Kalman Filter and Particle Filter

Authors: R. Simutis, V. Galvanauskas, D. Levisauskas, J. Repsyte, V. Grincas

Abstract:

This paper deals with advanced state estimation algorithms for estimating biomass concentration and specific growth rate in a typical fed-batch biotechnological process. The biotechnological process was represented by a nonlinear mass-balance based process model. An extended Kalman filter (EKF) and a particle filter (PF) were used to estimate the unmeasured state variables from oxygen uptake rate (OUR) and base consumption (BC) measurements. To obtain more general results, a simplified process model was used in the EKF and PF estimation algorithms. This model doesn’t require any special growth kinetic equations and could be applied for state estimation in various bioprocesses. This investigation focused on comparing the estimation quality of the EKF and PF estimators under different measurement noise levels. The simulation results show that the particle filter algorithm requires significantly more computation time for state estimation but gives lower estimation errors for both biomass concentration and specific growth rate. The tuning procedure for the particle filter is also simpler than for the EKF. Consequently, the particle filter should be preferred in real applications, especially for monitoring industrial bioprocesses, where simplified implementation procedures are desirable.
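
For readers unfamiliar with the estimation loop, a minimal bootstrap particle filter on a toy scalar state-space model is sketched below; the random-walk process, noise levels and particle count are illustrative assumptions, not the paper's mass-balance bioprocess model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy state-space model: x_t = x_{t-1} + w_t (process),  y_t = x_t + v_t (measurement).
T, n_particles = 50, 500
q, r = 0.05, 0.2                        # process / measurement noise std
x_true = np.cumsum(rng.normal(0, q, T)) + 1.0
y = x_true + rng.normal(0, r, T)

particles = np.full(n_particles, 1.0) + rng.normal(0, 0.1, n_particles)
estimates = []
for t in range(T):
    # 1. Propagate particles through the process model.
    particles = particles + rng.normal(0, q, n_particles)
    # 2. Weight each particle by the likelihood of the new measurement.
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
    w /= w.sum()
    # 3. Estimate = weighted mean, then resample to avoid weight degeneracy.
    estimates.append(np.sum(w * particles))
    idx = rng.choice(n_particles, n_particles, p=w)
    particles = particles[idx]

print("final estimate vs truth:", estimates[-1], x_true[-1])
```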

Keywords: biomass concentration, extended Kalman filter, particle filter, state estimation, specific growth rate

Procedia PDF Downloads 405
26 Design and Analysis of Adaptive Type-I Progressive Hybrid Censoring Plan under Step Stress Partially Accelerated Life Testing Using Competing Risk

Authors: Ariful Islam, Showkat Ahmad Lone

Abstract:

Statistical distributions have long been employed in the assessment of semiconductor devices and product reliability. The power function distribution is one of the most important distributions in modern reliability practice and is frequently preferred over mathematically more complex distributions, such as the Weibull and the lognormal, because of its simplicity. Moreover, it may exhibit a better fit for failure data and provide more appropriate information about reliability and hazard rates in some circumstances. This study deals with estimating information about failure times of items under step-stress partially accelerated life tests for competing risk based on an adaptive Type-I progressive hybrid censoring scheme. The life data of the units under test are assumed to follow the Mukherjee-Islam distribution. Point and interval maximum likelihood estimates are obtained for the distribution parameters and the tampering coefficient. The performance of the resulting estimators of the developed model parameters is evaluated and investigated by using a simulation algorithm.

Keywords: adaptive progressive hybrid censoring, competing risk, mukherjee-islam distribution, partially accelerated life testing, simulation study

Procedia PDF Downloads 329
25 Optimal Design of Step-Stress Partially Accelerated Life Test Using Multiply Censored Exponential Data with Random Removals

Authors: Showkat Ahmad Lone, Ahmadur Rahman, Ariful Islam

Abstract:

The major assumption in accelerated life tests (ALT) is that the mathematical model relating the lifetime of a test unit and the stress is known or can be assumed. In some cases, such life-stress relationships are not known and cannot be assumed, i.e., ALT data cannot be extrapolated to use conditions. In such cases, a partially accelerated life test (PALT), in which tested units are subjected to both normal and accelerated conditions, is more suitable. This study deals with estimating information about failure times of items under step-stress partially accelerated life tests using progressive failure-censored hybrid data with random removals. The life data of the units under test are considered to follow the exponential life distribution. The removals from the test are assumed to have binomial distributions. Point and interval maximum likelihood estimates are obtained for the unknown distribution parameters and the tampering coefficient. An optimum test plan is developed using the D-optimality criterion. The performance of the resulting estimators of the developed model parameters is evaluated and investigated by using a simulation algorithm.

Keywords: binomial distribution, d-optimality, multiple censoring, optimal design, partially accelerated life testing, simulation study

Procedia PDF Downloads 300
24 The Impact of Economic Growth on Carbon Footprints of High-Income and Non-High-Income Countries: A Comparative Analysis

Authors: Ghunchq Khan

Abstract:

The increase in greenhouse gas (GHG) emissions is a major environmental problem. Diverse human activities and inappropriate economic growth have stimulated a trade-off between economic growth and environmental deterioration all over the world. The impact of economic growth on the environment has received attention as global warming and environmental problems have become more serious. This study focuses on carbon footprints (production and consumption) and analyses the impact of GDP per capita on them. A balanced panel of 99 countries from 2000 to 2016 is estimated by employing the autoregressive distributed lag (ARDL) model with mean group (MG) and pooled mean group (PMG) estimators. The empirical results indicate that GDP per capita has a significant and positive impact in the short run but a negative effect in the long run on the carbon footprint of production in high-income countries, while controlling for trade openness, industry share, biological capacity, and population density. At the same time, GDP per capita has a significant and positive impact in both the short and long run on the carbon footprint of production in non-high-income countries. The results also indicate that GDP per capita negatively impacts the carbon footprint of consumption for high-income countries; on the other hand, the carbon footprint of consumption increases as GDP per capita grows in non-high-income countries.

Keywords: ARDL, carbon footprint, economic growth, industry share, trade openness

Procedia PDF Downloads 77
23 Modelling Hydrological Time Series Using Wakeby Distribution

Authors: Ilaria Lucrezia Amerise

Abstract:

The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with 5 parameters) as a theoretical reference model. The number and the quality of the parameters indicate that this distribution may be the appropriate choice for the interpolation of hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena producing heavy tails. The proposed estimation methods for determining the values of the Wakeby parameters are the same as those used for density functions with heavy tails. The commonly used procedure is the classic method of probability weighted moments (PWM), although this has often shown difficulty of convergence, or rather, convergence to an inappropriate parameter configuration. In this paper, we analyze the problem of the likelihood estimation of a random variable expressed through its quantile function. The method of maximum likelihood, in this case, is more demanding than in more usual estimation situations. The effort is justified by the sampling and asymptotic properties of maximum likelihood estimators, which improve on the estimates obtained by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
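
Since the Wakeby distribution is specified through its quantile function, values can be drawn by inverse-transform sampling; a sketch using the standard five-parameter quantile form with arbitrary (non-fitted) parameter values is given below.

```python
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile function x(F) for 0 <= F < 1 (standard 5-parameter form)."""
    F = np.asarray(F, dtype=float)
    return (xi
            + (alpha / beta) * (1.0 - (1.0 - F) ** beta)
            - (gamma / delta) * (1.0 - (1.0 - F) ** (-delta)))

rng = np.random.default_rng(3)
params = dict(xi=0.0, alpha=5.0, beta=0.5, gamma=1.0, delta=0.3)  # illustrative only

# Inverse-transform sampling: feed uniform variates through the quantile function.
u = rng.uniform(size=10_000)
sample = wakeby_quantile(u, **params)
print("sample mean:", sample.mean(), "99th percentile:", np.quantile(sample, 0.99))
```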

Keywords: generalized extreme values, likelihood estimation, precipitation data, Wakeby distribution

Procedia PDF Downloads 122
22 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System

Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi

Abstract:

Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate pilot-assisted channel estimation for DC-biased optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to improve the bit error rate (BER) of PACE-DCO-OFDM. Simulation results show that the DCO-OFDM system based on the PACE scheme achieves better BER performance than the conventional system without pilot-assisted channel estimation, and that the proposed LMMSE-based PACE-DCO-OFDM estimates the channel more accurately and achieves better BER performance than the LS-based PACE-DCO-OFDM. At a signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ for LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
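
The two estimators can be sketched in a few lines; the snippet below assumes unit-power pilots and an exponentially correlated channel, which are illustrative assumptions rather than the authors' MATLAB/Simulink VLC model.

```python
import numpy as np

rng = np.random.default_rng(4)

N, sigma2 = 64, 0.01                 # pilot subcarriers, noise variance (assumed)
x = np.ones(N, dtype=complex)        # known unit-power pilot symbols

# Toy frequency-selective channel with exponentially correlated coefficients.
R_hh = np.array([[np.exp(-abs(i - j) / 8.0) for j in range(N)] for i in range(N)])
h = np.linalg.cholesky(R_hh + 1e-9 * np.eye(N)) @ (
    rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
y = x * h + np.sqrt(sigma2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Least-squares estimate: divide received pilots by the known pilot symbols.
h_ls = y / x
# LMMSE estimate: smooth the LS estimate using the channel correlation matrix.
h_lmmse = R_hh @ np.linalg.solve(R_hh + sigma2 * np.eye(N), h_ls)

mse = lambda est: np.mean(np.abs(est - h) ** 2)
print(f"LS MSE: {mse(h_ls):.4f}  LMMSE MSE: {mse(h_lmmse):.4f}")
```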

Keywords: channel estimation, OFDM, pilot-assist, VLC

Procedia PDF Downloads 160
21 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research attempts to investigate the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous works on balanced panel data estimation within the context of fitting a PDRM for banks' audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, as well as a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1,000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE) and root mean square error (RMSE) of the parameter estimates. Eighteen different models under different specified conditions were fitted, and the best-fitting model is that of the within estimator when heteroscedasticity is severe at either a zero or a positive serial correlation value. LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.

Keywords: audit fee, heteroscedasticity, lagrange multiplier test, Monte-Carlo scheme, periodicity

Procedia PDF Downloads 123
20 Novel Inference Algorithm for Gaussian Process Classification Model with Multiclass and Its Application to Human Action Classification

Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park

Abstract:

In this paper, we propose a novel inference algorithm for the multi-class Gaussian process classification model that can be used in the field of human behavior recognition. This algorithm can simultaneously derive both the posterior distribution of a latent function and estimators of the hyper-parameters in a multi-class Gaussian process classification model. Our algorithm is based on the Laplace approximation (LA) technique and the variational EM framework. It is performed in two steps, called the expectation and maximization steps. First, in the expectation step, using the Bayesian formula and the LA technique, we approximately derive the posterior distribution of the latent function indicating the probability that each observation belongs to a certain class in the Gaussian process classification model. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimator for the hyper-parameters of the covariance matrix necessary to define the prior distribution of the latent function. These two steps are repeated iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely the KTH human action data set. Experimental results reveal that the proposed algorithm shows good performance on this data set.
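
As a rough, library-based illustration of Laplace-approximated GP classification (scikit-learn fits one-vs-rest binary GPs rather than the joint multi-class variational EM treatment proposed here), consider the following sketch on synthetic data standing in for action features.

```python
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

# Toy stand-in for action features (the paper uses the KTH action data set).
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Laplace-approximated GP classifier; kernel hyper-parameters are tuned by
# maximising the approximate marginal likelihood (done internally by scikit-learn,
# one-vs-rest rather than the joint multi-class treatment of the paper).
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
gpc.fit(X_tr, y_tr)
print("test accuracy:", gpc.score(X_te, y_te))
```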

Keywords: bayesian rule, gaussian process classification model with multiclass, gaussian process prior, human action classification, laplace approximation, variational EM algorithm

Procedia PDF Downloads 310
19 Mediating and Moderating Function of Corporate Governance on Firm Tax Planning and Firm Tax Disclosure Relationship

Authors: Mahfoudh Hussein Mgammal

Abstract:

The purpose of this paper is to investigate the moderating and mediating effect of corporate governance mechanism proxies on the relationship between tax planning, measured by effective tax rate components, and tax disclosure. The hypotheses are tested using a three-step hierarchical regression on Malaysian-listed non-financial firms over 2010 to 2012. We find that companies positively value tax-planning activities. This indicates that tax planning is seen as a source of companies' wealth creation, as the results show a highly significant association between tax disclosure and the extent of tax planning. Examination of the implications of corporate governance mechanisms for the tax disclosure-tax planning association showed the lack of a significant coefficient related to any of the interactive variables. This makes it hard to understand the nature of the association. Finally, we further study the sensitivity of the results; the outcomes were also examined for the robustness and strength of the model specification using OLS-effect estimators and the exclusion of tax-planning-related factors (GRTH, LEVE, and CAPNT). The findings of these tests show no effect on the tax planning-tax disclosure association. The outcomes of the annual regressions show that the panel regression results differ over time because the associations are affected by time differences, and the different models are not completely proportionate as a whole. Moreover, our paper lends some support to recent theory on the importance of taxes to corporate governance by demonstrating how the agency costs of tax planning allow certain shareholders to benefit from firm activities at the expense of others.

Keywords: tax disclosure, tax planning, corporate governance, effective tax rate

Procedia PDF Downloads 129
18 Primary Analysis of a Randomized Controlled Trial of Topical Analgesia Post Haemorrhoidectomy

Authors: James Jin, Weisi Xia, Runzhe Gao, Alain Vandal, Darren Svirkis, Andrew Hill

Abstract:

Background: Post-haemorrhoidectomy pain is a major concern for patients and clinicians, and minimizing post-operative pain is of high clinical interest. Combinations of topical creams targeting three hypothesised post-haemorrhoidectomy pain mechanisms were developed and their effectiveness was evaluated. Specifically, a multi-centre double-blinded randomized clinical trial (RCT) was conducted in adults undergoing excisional haemorrhoidectomy. The primary analysis was conducted on the collected data to evaluate the effectiveness of the combinations of topical creams targeting the three hypothesized pain mechanisms after the operation. Methods: 192 patients were randomly allocated to 4 arms (48 patients each), and each arm was provided with a pain cream: 10% metronidazole (M), M with 2% diltiazem (MD), M with 4% lidocaine (ML), or MDL. Patients were instructed to apply the topical treatments three times a day for 7 days and to record outcomes for 14 days after the operation. The primary outcome was VAS pain on day 4. Covariates and models were selected at the blind review stage. Multiple imputation was applied to handle missing data. LMER and GLMER models together with natural splines were applied. Sandwich estimators and Wald statistics were used. P-values < 0.05 were considered significant. Conclusions: The addition of topical lidocaine or diltiazem to metronidazole does not add any benefit. ML had significantly better pain and recovery scores than the combination MDL. Multimodal topical analgesia with ML after haemorrhoidectomy could be considered for further evaluation. Further trials considering only 3 arms (M, ML, MD) might be worth exploring.

Keywords: RCT, primary analysis, multiple imputation, pain scores, haemorrhoidectomy, analgesia, lmer

Procedia PDF Downloads 88
17 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data

Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa

Abstract:

A generalized log-logistic distribution with variable shapes of the hazard rate was introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties were derived. The method of maximum likelihood was adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal, bathtub-shaped, or reversed bathtub-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other 3-parameter parametric survival distributions, such as the exponentiated Weibull distribution, the 3-parameter lognormal distribution, the 3-parameter gamma distribution, the 3-parameter Weibull distribution, and the 3-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provided a better fit than all of the competing distributions based on the goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, Bayesian analysis and an assessment of the performance of Gibbs sampling for the data set are also carried out.
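
The classical log-logistic sub-model can be fitted by maximum likelihood with standard tools; the sketch below uses SciPy's Fisk parameterization on simulated data and does not implement the extra shape parameter of the generalized distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# The classical log-logistic is available in SciPy as the Fisk distribution;
# the paper's generalized version adds an extra shape parameter, which is not
# implemented here; this sketch only fits the standard sub-model by ML.
data = stats.fisk.rvs(c=2.0, scale=3.0, size=500, random_state=rng)

c_hat, loc_hat, scale_hat = stats.fisk.fit(data, floc=0)   # fix location at 0
loglik = np.sum(stats.fisk.logpdf(data, c_hat, loc=loc_hat, scale=scale_hat))
print(f"shape = {c_hat:.3f}, scale = {scale_hat:.3f}, log-likelihood = {loglik:.1f}")
```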

Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation

Procedia PDF Downloads 175
16 The Relationship between Renewable Energy, Real Income, Tourism and Air Pollution

Authors: Eyup Dogan

Abstract:

One criticism of the energy-growth-environment literature, to the best of our knowledge, is that only a few studies analyze the influence of tourism on CO₂ emissions, even though the tourism sector is closely related to the environment. The other criticism is the selection of methodology. Panel estimation techniques that fail to consider both heterogeneity and cross-sectional dependence across countries can cause forecasting errors. To fill the mentioned gaps in the literature, this study analyzes the impacts of real GDP, renewable energy and tourism on the levels of carbon dioxide (CO₂) emissions for the top 10 most-visited countries around the world. This study focuses on the top 10 touristic (most-visited) countries because they have received about half of worldwide tourist arrivals in recent years and are among the top ones in the 'Renewable Energy Country Attractiveness Index (RECAI)'. By looking at Pesaran’s CD test and the average growth rates of variables for each country, we detect the presence of cross-sectional dependence and heterogeneity. Hence, this study uses second-generation econometric techniques (the cross-sectionally augmented Dickey-Fuller (CADF) and cross-sectionally augmented IPS (CIPS) unit root tests, the LM bootstrap cointegration test, and the DOLS and FMOLS estimators), which are robust to the mentioned issues. Therefore, the reported results become accurate and reliable. It is found that renewable energy mitigates pollution, whereas real GDP and tourism contribute to carbon emissions. Thus, regulatory policies are necessary to increase the awareness of sustainable tourism. In addition, the use of renewable energy and the adoption of clean technologies in the tourism sector, as well as in producing goods and services, play significant roles in reducing the levels of emissions.

Keywords: air pollution, tourism, renewable energy, income, panel data

Procedia PDF Downloads 167
15 The Impact of BIM Technology on the Whole-Process Cost Management of Civil Engineering Projects in Kenya

Authors: Nsimbe Allan

Abstract:

The study examines the impact of Building Information Modeling (BIM) on the cost management of engineering projects, focusing specifically on the Mombasa Port Area Development Project. The objective of this research venture is to determine the mechanisms through which Building Information Modeling (BIM) facilitates stakeholder collaboration, reduces construction-related expenses, and enhances the precision of cost estimation. Furthermore, the study investigates barriers to execution, assesses the impact on the project's transparency, and suggests approaches to maximize resource utilization. The study, selected for its practical significance and intricate nature, conducted a Systematic Literature Review (SLR) using credible databases, including ScienceDirect and IEEE Xplore. To constitute the diverse sample, 69 individuals, including project managers, cost estimators, and BIM administrators, were selected via stratified random sampling. The data were obtained using a mixed-methods approach, which prioritized ethical considerations. SPSS and Microsoft Excel were applied to the analysis. The research emphasizes the crucial role that project managers, architects, and engineers play in the decision-making process (47% of respondents). Furthermore, a significant improvement in cost estimation accuracy was reported by 70% of the participants. It was found that the implementation of BIM resulted in enhanced project visibility, which in turn optimized resource allocation and facilitated the process of budgeting. In brief, the study highlights the positive impacts of Building Information Modeling (BIM) on collaborative decision-making and cost estimation, addresses challenges related to implementation, and provides solutions for the efficient assimilation and understanding of BIM principles.

Keywords: cost management, resource utilization, stakeholder collaboration, project transparency

Procedia PDF Downloads 35
14 Occupational Attainment of Second Generation of Ethnic Minority Immigrants in the UK

Authors: Rukhsana Kausar, Issam Malki

Abstract:

The integration and assimilation of ethnic minority immigrants (EMIs) and their subsequent generations remain a serious unsettled issue in most host countries. This study conducts a gender-based labour market analysis to investigate whether the second generation of ethnic minority immigrants in the UK is gaining access to professional and managerial employment and advantaged occupational positions on par with their native counterparts. The data used to examine the labour market achievements of EMIs are taken from the Labour Force Survey (LFS) for the period 2014-2018. We apply a multivalued treatment under ignorability, as proposed by Cattaneo (2010), which refers to treatment effects under the assumptions of (i) selection on observables and (ii) common support. We report estimates of the Average Treatment Effect (ATE), the Average Treatment Effect on the Treated (ATET), and Potential Outcome Means (POM) using three estimators: Regression Adjustment (RA), Augmented Inverse Probability Weighting (AIPW) and Inverse Probability Weighting-Regression Adjustment (IPWRA). We consider two cases: the first case with four categories where first-generation natives are the base category, and the second case combining all natives as the base group. Our findings suggest the following. Under Case 1, the estimated probabilities and differences across groups are consistently similar and highly significant. As expected, first-generation natives have the highest probability of higher career attainment among both men and women. The findings also suggest that first-generation immigrants perform better than the remaining two groups, namely second-generation natives and second-generation immigrants. Furthermore, second-generation immigrants have a higher probability of attaining a higher professional career, but a lower probability of attaining a managerial career. Similar conclusions are reached under Case 2: both first-generation and second-generation immigrants have a lower probability of higher career and managerial attainment, and first-generation immigrants are found to perform better than second-generation immigrants.
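
A stripped-down illustration of the inverse probability weighting idea behind AIPW/IPWRA is given below for a binary treatment on synthetic data; the covariates, outcome model and coefficients are invented for illustration and do not correspond to the LFS analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Synthetic data: X = covariates, D = group indicator, Y = binary outcome
# (e.g. attaining a professional or managerial occupation).
n = 5_000
X = rng.normal(size=(n, 3))
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
D = rng.binomial(1, p_treat)
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.4 * D + 0.6 * X[:, 0] - 0.2 * X[:, 2]))))

# Inverse probability weighting: model the propensity score, then reweight.
ps = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]
ate_ipw = np.mean(D * Y / ps) - np.mean((1 - D) * Y / (1 - ps))
print(f"IPW estimate of the ATE: {ate_ipw:.3f}")
```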

Keywords: immigrants, second generation, occupational attainment, ethnicity

Procedia PDF Downloads 89
13 Dynamic Modelling of Hepatitis B Patients Using the SIHAR Model

Authors: Alakija Temitope Olufunmilayo, Akinyemi, Yagba Joy

Abstract:

Hepatitis is the inflammation of the liver tissue that can cause yellowing of the skin and eyes (jaundice), lack of appetite, vomiting, tiredness, abdominal pain, and diarrhea. Hepatitis is acute if it resolves within 6 months and chronic if it lasts longer than 6 months. Acute hepatitis can resolve on its own, lead to chronic hepatitis or, rarely, result in acute liver failure. Chronic hepatitis may lead to scarring of the liver (cirrhosis), liver failure and liver cancer. Modelling Hepatitis B may become necessary in order to reduce its spread, and a dynamic SIR model can be used. This model consists of a system of three coupled non-linear ordinary differential equations which does not have a closed-form solution. It is an epidemiological model used to predict the dynamics of infectious disease by categorizing the population into three possible compartments. In this study, a five-compartment dynamic model of Hepatitis B disease was proposed and developed by adding a control measure of sensitizing the public, called awareness. All the mathematical and statistical formulations of the model, especially its general equilibrium, were derived, including the nonlinear least squares estimators. The initial parameters of the model were derived using nonlinear least squares implemented in R code. The results show that the proportion of Hepatitis B patients in the study population is 1.4 per 1,000,000 population. The estimated Hepatitis B induced death rate is 0.0108, meaning that 1.08% of the infected individuals die of the disease. The reproduction number of Hepatitis B disease in Nigeria is 6.0, meaning that one infected individual can, on average, infect 6 other people. The effect of sensitizing the public on the basic reproduction number is significant, as the reproduction number is reduced. The study therefore recommends that programmes be designed by government and non-governmental organizations to sensitize the entire Nigerian population in order to reduce cases of Hepatitis B disease among the citizens.
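
The basic SIR skeleton underlying the five-compartment SIHAR model can be written down directly; in the sketch below the transmission and recovery rates are illustrative, chosen only so that their ratio matches the reported reproduction number of 6.0, and the awareness compartments are not included.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-compartment SIR skeleton; the paper's SIHAR model extends this to five
# compartments with an awareness (sensitization) control, so the parameter
# values below are illustrative, not the fitted Nigerian estimates.
beta, gamma = 0.3, 0.05          # transmission and recovery rates (beta/gamma = 6.0)
N = 1_000_000

def sir(t, y):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

sol = solve_ivp(sir, t_span=(0, 365), y0=[N - 1, 1, 0],
                t_eval=np.linspace(0, 365, 366))
print("basic reproduction number R0 =", beta / gamma)
print("peak number infected:", int(sol.y[1].max()))
```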

Keywords: hepatitis B, modelling, non-linear ordinary differential equation, sihar model, sensitization

Procedia PDF Downloads 61
12 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend some linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions, kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function. Moreover, they have been applied to improve target detection or the image classification of hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and has good performance in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term “nearest proportion” used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample situations. Hence, an improved estimator by shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to the advantages of DNP, KDNP surpasses DNP in the experimental results. According to the experiments on the real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
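
KDNP itself is not available in standard libraries, but the kernel trick it relies on can be illustrated with kernel PCA; the toy comparison below (assumed data set and kernel width) shows how a nonlinear kernel recovers structure that a linear projection misses.

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy data set where the two classes are not linearly separable.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.08, random_state=0)

for name, reducer in [("PCA", PCA(n_components=2)),
                      ("KernelPCA (RBF)", KernelPCA(n_components=2, kernel="rbf", gamma=5.0))]:
    Z = reducer.fit_transform(X)                       # extract features
    acc = cross_val_score(LogisticRegression(), Z, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
```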

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction

Procedia PDF Downloads 322
11 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer adaptive tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), in which items are selected statistically by maximum information (or selection from the posterior) and ability is estimated using maximum likelihood (ML) or maximum a posteriori (MAP) estimators. This study aims at combining classical and Bayesian approaches to IRT to create a dataset that is then fed to a neural network which automates the process of ability estimation, and at comparing it to traditional CAT models designed using IRT. This study uses Python as the base coding language, pymc for statistical modelling of the IRT, and scikit-learn for the neural network implementations. On creation of the model and on comparison, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs worse than the IRT model, the neural network model can be beneficially used in back-ends to reduce time complexity, as the IRT model would have to re-calculate the ability every time it receives a request, whereas the prediction from a neural network can be done in a single step for an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets other than the normal IRT feature set and use a neural network’s capacity for learning unknown functions to give rise to better CAT models. Categorical features such as test type could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, yielding models that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by incorporating newer and better datasets, which would eventually lead to higher-quality testing.
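
A minimal maximum-likelihood ability estimate under a 2PL IRT model, the classical counterpart the neural network is compared against, can be sketched as follows; the item parameters and response pattern are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Maximum-likelihood ability estimation under a 2PL IRT model; the item
# discriminations (a) and difficulties (b) below are made up for illustration
# and would normally come from a calibrated item bank (e.g. fitted with pymc).
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])      # discriminations
b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])     # difficulties
responses = np.array([1, 1, 1, 0, 0])        # observed correct/incorrect answers

def neg_log_lik(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # probability of a correct response
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

theta_ml = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x
print(f"ML ability estimate: {theta_ml:.3f}")
```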

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 158
10 Preliminary Study Investigating Trunk Muscle Fatigue and Cognitive Function in Event Riders during a Simulated Jumping Test

Authors: Alice Carter, Lucy Dumbell, Lorna Cameron, Victoria Lewis

Abstract:

The Olympic discipline of eventing is the triathlon of equestrian sport, consisting of dressage, cross-country and show jumping. Falls on the cross-country phase are common and can be serious, even causing death to the rider. Research identifies an increased risk of a fall with an increasing number of obstacles and for jumping efforts later in the course, suggesting fatigue may be a contributing factor. Advice based on anecdotal evidence suggests riders undertake strength and conditioning programs to improve their ‘core’, thus improving their ability to maintain and control their riding position. There is little empirical evidence to support this advice. Therefore, the aim of this study is to investigate trunk muscle fatigue and cognitive function during a simulated jumping test. Eight adult riders participated in a riding test on a Racewood Event simulator for 10 minutes, over a continuous jumping programme. The sEMG activity of six trunk muscles was measured bilaterally at every minute, and normalised root mean squares (RMS) and median frequencies (MDF) were computed from the EMG power spectra. Visual analogue scales (VAS) measuring fatigue and pain levels and cognitive function ‘tapping’ tests were performed before and after the riding test. Average MDF values for all muscles differed significantly between the sampled minutes (p = 0.017); however, a consistent decrease from Minute 1 to Minute 9 was not found, suggesting the trunk muscles fatigued and then recovered as other muscle groups important in maintaining the riding position during dynamic movement compensated. Differences between the MDF and RMS of different muscles were highly significant (H = 213.01, DF = 5, p < 0.001), supporting previous anecdotal evidence that different trunk muscles carry out different roles in posture maintenance during riding. RMS values were not significantly different between the sampled minutes or between riders, suggesting the riding test produced a consistent and repeatable effect on the trunk muscles. MDF values differed significantly between riders (H = 50.8, DF = 5, p < 0.001), suggesting individuals may experience localised muscular fatigue from the same test differently, and that other parameters of physical fitness should be investigated before drawing conclusions. The lumbar muscles were shown to be important in maintaining the position; therefore, physical training programs should focus on these areas. No significant differences were found between pre- and post-riding-test VAS pain and fatigue scores or cognitive function test scores, suggesting the riding test was not significantly fatiguing for participants. However, a near-significant correlation was found between riding test time and VAS pain score (p = 0.06), suggesting somatic pain may be a limiting factor for performance. No other correlations were found between riding test time and the VAS pain and fatigue scores; however, a larger sample needs to be tested to improve the statistical analysis. The findings suggest the simulator riding test was not sufficient to provoke fatigue in the riders; however, foundations have been laid for future studies using these methodologies in realistic eventing settings.
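
For reference, the median frequency (MDF) used here can be computed from the sEMG power spectrum in a few lines; the sketch below uses a synthetic stand-in signal and an assumed sampling rate, not the study's recordings.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(7)

def median_frequency(emg, fs):
    """Median frequency (MDF) of an sEMG epoch: the frequency that splits the
    power spectrum into two halves of equal power."""
    f, pxx = welch(emg, fs=fs, nperseg=1024)
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, cum[-1] / 2.0)]

# Synthetic 1 s epoch at an assumed 2 kHz sampling rate, standing in for a
# recorded trunk-muscle signal (crude noise, not real sEMG).
fs = 2000
t = np.arange(fs) / fs
emg = rng.normal(size=fs) * np.sin(2 * np.pi * 80 * t)
print(f"MDF: {median_frequency(emg, fs):.1f} Hz")
```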

Keywords: eventing, fatigue, horse-rider, surface EMG, trunk muscles

Procedia PDF Downloads 172
9 Evaluation of a Piecewise Linear Mixed-Effects Model in the Analysis of Randomized Cross-over Trial

Authors: Moses Mwangi, Geert Verbeke, Geert Molenberghs

Abstract:

Cross-over designs are commonly used in randomized clinical trials to estimate the efficacy of a new treatment with respect to a reference treatment (placebo or standard). The main advantage of the cross-over design over the conventional parallel design is its flexibility: every subject becomes his or her own control, thereby reducing confounding effects. Jones & Kenward discuss in detail more recent developments in the analysis of cross-over trials. We revisit the simple piecewise linear mixed-effects model proposed by Mwangi et al. (in press) for its first application in the analysis of cross-over trials. We compared the performance of the proposed piecewise linear mixed-effects model with two commonly cited statistical models, namely (1) the Grizzle model and (2) the Jones & Kenward model, used in the estimation of the treatment effect in the analysis of a randomized cross-over trial. We estimate two performance measures (mean square error (MSE) and coverage probability) for the three methods, using data simulated from the proposed piecewise linear mixed-effects model. The piecewise linear mixed-effects model yielded the lowest MSE estimates compared to the Grizzle and Jones & Kenward models for both small (Nobs = 20) and large (Nobs = 600) sample sizes. Its coverage probabilities were the highest compared to the Grizzle and Jones & Kenward models for both small and large sample sizes. A piecewise linear mixed-effects model is therefore a better estimator of the treatment effect than its two competitors (the Grizzle and Jones & Kenward models) in the analysis of cross-over trials. The data generating mechanism used in this paper captures two time periods for a simple 2-treatment x 2-period cross-over design. Its application is extendible to more complex cross-over designs with multiple treatments and periods. In addition, it is important to note that, even for single-response models, adding more random effects increases the complexity of the model and may thus make it difficult or impossible to fit in some cases.
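
A basic random-intercept analysis of a simulated 2x2 cross-over trial is sketched below as a baseline; it does not reproduce the proposed piecewise linear specification, and the effect sizes and sample size are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)

# Simulate a 2-treatment x 2-period cross-over: half the subjects get the
# sequence AB, the other half BA; all effect sizes are made up.
n_subj = 40
subj = np.repeat(np.arange(n_subj), 2)
period = np.tile([0, 1], n_subj)
seq_ab = np.repeat(rng.permutation(np.r_[np.ones(n_subj // 2), np.zeros(n_subj // 2)]), 2)
treat = np.where(period == 0, seq_ab, 1 - seq_ab)          # 1 = new treatment
u = np.repeat(rng.normal(0, 1.0, n_subj), 2)               # random subject effect
y = 5 + 1.5 * treat + 0.3 * period + u + rng.normal(0, 0.8, 2 * n_subj)
df = pd.DataFrame(dict(y=y, treat=treat, period=period, subject=subj))

# Linear mixed-effects model with a random intercept per subject, so that
# each subject serves as their own control.
fit = smf.mixedlm("y ~ treat + period", df, groups=df["subject"]).fit()
print(fit.summary())
```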

Keywords: Evaluation, Grizzle model, Jones & Kenward model, Performance measures, Simulation

Procedia PDF Downloads 102
8 The Relationship between Central Bank Independence and Inflation: Evidence from Africa

Authors: R. Bhattu Babajee, Marie Sandrine Estelle Benoit

Abstract:

The past decades have witnessed a considerable institutional shift towards central bank independence across economies of the world. The motivation behind such a change is the acceptance that increased central bank autonomy has the power to alleviate inflation bias. Hence, studying whether central bank independence acts as a significant factor behind price stability in African economies, or whether this macroeconomic aim in these countries results from other economic, political or social factors, is a pertinent issue. The main research objective of this paper is to assess the relationship between central bank autonomy and inflation in African economies, where inflation has proved to be a serious problem. To this end, we measure the degree of CBI in Africa by computing the turnover rates of central bank governors, thereby studying whether decisions made by African central banks are affected by external forces. The purpose of this study is to investigate empirically the association between Central Bank Independence (CBI) and inflation for 10 African economies over a period of 17 years, from 1995 to 2012. The sample includes Botswana, Egypt, Ghana, Kenya, Madagascar, Mauritius, Mozambique, Nigeria, South Africa, and Uganda. In contrast to much of the empirical research, we do not use the usual static panel model, as it is associated with potential misspecification arising from the absence of dynamics. To address this issue, a dynamic panel data model which integrates several control variables has been used. Firstly, the analysis includes dynamic terms to explain the tenacity of inflation: given the confirmation of inflation inertia, which is very likely in African countries, lagged inflation needs to be included in the empirical model. Secondly, due to the known reverse causality between central bank independence and inflation, the system generalized method of moments (GMM) is employed. With GMM estimators, the presence of unknown forms of heteroskedasticity is admissible, as well as autocorrelation in the error term. Thirdly, control variables have been used to enhance the efficiency of the model. The main finding of this paper is that central bank independence is negatively associated with inflation even after including control variables.

Keywords: central bank independence, inflation, macroeconomic variables, price stability

Procedia PDF Downloads 349
7 Testing the Life Cycle Theory on the Capital Structure Dynamics of Trade-Off and Pecking Order Theories: A Case of Retail, Industrial and Mining Sectors

Authors: Freddy Munzhelele

Abstract:

Setting: empirical research has shown that the life cycle theory has an impact on firms’ financing decisions, particularly dividend pay-outs. Accordingly, the life cycle theory posits that as a firm matures, it reaches a level and capacity at which it distributes more cash as dividends. On the other hand, young firms prioritise investment opportunity sets and their financing; thus, they pay little or no dividends. Research on firms’ financing decisions has also demonstrated, among other things, the adoption of the trade-off and pecking order theories in explaining the dynamics of firms’ capital structure. The trade-off theory holds that firms weigh the costs and benefits of debt in order to hold a favourable debt position, while the pecking order theory is concerned with firms preferring a hierarchical order when choosing financing sources. The life cycle hypothesis explaining financial managers’ decisions regarding the firms’ capital structure dynamics appears to be an interesting link, yet this link has been neglected in corporate finance research. If this link is explored empirically, financial decision-making alternatives will be enhanced immensely, since no conclusive evidence has yet been found on the dynamics of capital structure. Aim: the aim of this study is to examine the impact of the life cycle theory on the capital structure dynamics (trade-off and pecking order theories) of firms listed in the retail, industrial and mining sectors of the JSE. These sectors are among the key contributors to GDP in the South African economy. Design and methodology: following the postpositivist research paradigm, the study is quantitative in nature and utilises secondary data obtained from the financial statements of the sampled firms for the period 2010 to 2022. The firms’ financial statements will be extracted from the IRESS database. Since the data will be in panel form, a combination of static and dynamic panel data estimators will be used to analyse the data. The overall data analysis will be done using the STATA program. Value added: this study directly investigates the link between the life cycle theory and the dynamics of capital structure decisions, particularly the trade-off and pecking order theories.

Keywords: life cycle theory, trade-off theory, pecking order theory, capital structure, JSE listed firms

Procedia PDF Downloads 44
6 Factors That Determine International Competitiveness of Agricultural Products in Latin America 1990-2020

Authors: Oluwasefunmi Eunice Irewole, Enrique Armas Arévalos

Abstract:

Agriculture has played a crucial role in the economy and development of many countries. Moreover, the basic needs for human survival, namely food, shelter, and clothing, are linked to agricultural production. Most developed countries recognise that agriculture provides them with food and raw materials for different goods (such as shelter, medicine, fuel and clothing), which has led to an increase in incomes, livelihoods and standards of living. This study aimed at analysing the relationship between the international competitiveness of agricultural products and area, fertilizer, labour force, economic growth, foreign direct investment, exchange rate and inflation rate in Latin America during the period 1991 to 2019. In this study, panel data econometric methods were used, including tests for cross-section dependence (Pesaran test), unit roots (cross-sectionally augmented Dickey-Fuller and cross-sectionally augmented Im, Pesaran, and Shin tests), cointegration (Pedroni and Fisher-Johansen tests), and heterogeneous causality (Hurlin and Dumitrescu test). The results reveal that the model has cross-sectional dependency and that the variables are integrated of order one, I(1). The fully modified OLS and dynamic OLS estimators were used to examine the existence of a long-term relationship, and it was found that a long-term relationship exists between the selected variables. The study revealed a positive significant relationship between the international competitiveness of agricultural raw materials and area, fertilizer, labour force, economic growth, and foreign direct investment, while international competitiveness has a negative relationship with the exchange rate and inflation. The economic policy recommendation deduced from this investigation is that foreign direct investment and the labour force contribute positively to increasing the international competitiveness of agricultural products.

Keywords: revealed comparative advantage, agricultural products, area, fertilizer, economic growth, granger causality, panel unit root

Procedia PDF Downloads 84
5 Achieving Design-Stage Elemental Cost Planning Accuracy: Case Study of New Zealand

Authors: Johnson Adafin, James O. B. Rotimi, Suzanne Wilkinson, Abimbola O. Windapo

Abstract:

An aspect of client expenditure management that requires attention is the level of accuracy achievable in design-stage elemental cost planning. This has been a major concern for construction clients and practitioners in New Zealand (NZ). Pre-tender estimating inaccuracies are significantly influenced by the level of risk information available to estimators. Proper cost planning activities should ensure the production of a project’s likely construction costs (initial and final), and subsequent cost control activities should prevent unpleasant consequences of cost overruns, disputes and project abandonment. If risks were properly identified and priced at the design stage, observed variance between design-stage elemental cost plans (ECPs) and final tender sums (FTS) (initial contract sums) could be reduced. This study investigates the variations between design-stage ECPs and FTS of construction projects, with a view to identifying risk factors that are responsible for the observed variance. Data were sourced through interviews, and risk factors were identified by using thematic analysis. Access was obtained to project files from the records of study participants (consultant quantity surveyors), and document analysis was employed in complementing the responses from the interviews. Study findings revealed the discrepancies between ECPs and FTS in the region of -14% and +16%. It is opined in this study that the identified risk factors were responsible for the variability observed. The values obtained from the analysis would enable greater accuracy in the forecast of FTS by Quantity Surveyors. Further, whilst inherent risks in construction project developments are observed globally, these findings have important ramifications for construction projects by expanding existing knowledge on what is needed for reasonable budgetary performance and successful delivery of construction projects. The findings contribute significantly to the study by providing quantitative confirmation to justify the theoretical conclusions generated in the literature from around the world. This therefore adds to and consolidates existing knowledge.

Keywords: accuracy, design-stage, elemental cost plan, final tender sum

Procedia PDF Downloads 247
4 Multicollinearity and MRA in Sustainability: Application of the Raise Regression

Authors: Claudia García-García, Catalina B. García-García, Román Salmerón-Gómez

Abstract:

Much economic-environmental research includes the analysis of possible interactions by using Moderated Regression Analysis (MRA), which is a specific application of multiple linear regression analysis. This methodology allows analyzing how the effect of one independent variable is moderated by a second independent variable by adding a cross-product term between them as an additional explanatory variable. Due to the very specification of the methodology, the moderating factor is often highly correlated with the constitutive terms, and great multicollinearity problems arise. The appearance of strong multicollinearity in a model has important consequences: estimator variances may be inflated; regressors may appear non-significant even though they probably are significant, together with a very high coefficient of determination; coefficients may show incorrect signs; and the results become highly sensitive to small changes in the dataset. Finally, the strong relationship among explanatory variables makes it difficult to isolate the individual effect of each one on the model under study. Carried over to the moderated analysis, these consequences may imply that it is not worth including an interaction term that may be distorting the model. Thus, it is important to manage the problem with a methodology that allows obtaining reliable results. After a review of the works that applied MRA in the ten top journals of the field, it is clear that multicollinearity is mostly disregarded: less than 15% of the reviewed works take potential multicollinearity problems into account. To overcome the issue, this work studies the possible application of recent methodologies to MRA. Particularly, raise regression is analyzed. This methodology mitigates collinearity from a geometrical point of view: the collinearity problem arises because the variables under study are very close geometrically, so by separating the variables the problem can be mitigated. Raise regression maintains the available information and modifies the problematic variables instead of, for example, deleting variables. Furthermore, the global characteristics of the initial model are also maintained (sum of squared residuals, estimated variance, coefficient of determination, global significance test and prediction). The proposal is applied to data from countries of the European Union, for the last year available, regarding greenhouse gas emissions, per capita GDP and a dummy variable that represents the topography of the country. The use of a dummy variable as the moderator is a special variant of MRA, sometimes called “subgroup regression analysis.” The main conclusion of this work is that applying new techniques to the field can improve the results of the analysis in a substantial way. Particularly, the use of raise regression mitigates great multicollinearity problems, so the researcher is able to rely on the interaction term when interpreting the results of a particular study.
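
A small sketch of the issue and of the raise idea, as described geometrically above, is given below: the interaction of a continuous regressor with a dummy moderator inflates the variance inflation factors, and pushing the interaction further along its own residual direction deflates them. The variable names and the raising parameter are illustrative, not the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(9)

# Toy MRA design: a continuous regressor, a dummy moderator, and their cross-product.
n = 200
gdp = rng.normal(10, 2, n)
topo = rng.binomial(1, 0.5, n)                 # dummy moderator
inter = gdp * topo                             # cross-product (interaction) term
X = sm.add_constant(np.column_stack([gdp, topo, inter]))
vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]
print("VIFs before raising:", np.round(vifs, 1))

# "Raise" the interaction term: regress it on the constitutive terms and push it
# further along its own residual direction (lambda > 0).  This follows the
# geometric description in the abstract; it is a sketch, not the authors' code.
Z = sm.add_constant(np.column_stack([gdp, topo]))
resid = inter - Z @ np.linalg.lstsq(Z, inter, rcond=None)[0]
lam = 3.0                                      # illustrative raising parameter
inter_raised = inter + lam * resid
X_r = sm.add_constant(np.column_stack([gdp, topo, inter_raised]))
vifs_r = [variance_inflation_factor(X_r, i) for i in range(1, X_r.shape[1])]
print("VIFs after raising: ", np.round(vifs_r, 1))
```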

Keywords: multicollinearity, MRA, interaction, raise

Procedia PDF Downloads 80
3 Modeling the Relation between Discretionary Accrual Earnings Management, International Financial Reporting Standards and Corporate Governance

Authors: Ikechukwu Ndu

Abstract:

This study examines the econometric modeling of the relation between discretionary accrual earnings management, International Financial Reporting Standards (IFRS), and certain corporate governance factors with regard to listed Nigerian non-financial firms. Although discretionary accrual earnings management is a well-known and global problem that has an adverse impact on users of the financial statements, its relationship with IFRS and corporate governance is neither adequately researched nor systematically investigated in Nigeria. The dearth of research on the relation between discretionary accrual earnings management, IFRS and corporate governance in Nigeria has made it difficult for academics, practitioners, government setting bodies, regulators and international bodies to achieve a clearer understanding of how discretionary accrual earnings management relates to IFRS and certain corporate governance characteristics. To the author’s best knowledge, this is the first study to date that makes interesting research contributions which significantly add to the literature on discretionary accrual earnings management and its relation with corporate governance and IFRS in the Nigerian context. A comprehensive review is undertaken of the literature on discretionary total accrual earnings management, IFRS, and certain corporate governance characteristics, as well as the data, models, methodologies, and different estimators used in the study. Secondary financial statement, IFRS, and corporate governance data are sourced from the Bloomberg database and the published financial statements of Nigerian non-financial firms for the period 2004 to 2016. The methodology uses both the total accrual and working capital accrual bases. This study has a number of interesting preliminary findings. First, there is a negative relationship between the level of discretionary accrual earnings management and the adoption of IFRS. However, this relationship does not appear to be statistically significant. Second, there is a significant negative relationship between the size of the board of directors and discretionary accrual earnings management. Third, separation of the CEO and chairman roles does not constrain earnings management, indicating the need to preserve relationships, personal connections, and maintain bonded friendships between the CEO, chairman, and executive directors. Fourth, there is a significant negative relationship between discretionary accrual earnings management and the use of a Big Four firm as an auditor. Fifth, including shareholders in the audit committee leads to a reduction in discretionary accrual earnings management. Sixth, the debt and return on assets (ROA) variables are significant and positively related to discretionary accrual earnings management. Finally, the company size variable, indicated by the log of assets, is surprisingly not statistically significant, indicating that Nigerian companies, irrespective of size, engage in discretionary accrual management. In conclusion, this study provides key insights that enable a better understanding of the relationship between discretionary accrual earnings management, IFRS, and corporate governance in the Nigerian context. It is expected that the results of this study will be of interest to academics, practitioners, regulators, governments, international bodies and other parties involved in policy setting and economic development in the areas of financial reporting, securities regulation, accounting harmonization, and corporate governance.
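
For orientation, discretionary accruals are commonly estimated as the residuals of a modified Jones regression; the sketch below applies that standard approach to synthetic firm-year data and may differ from the exact specification used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(10)

# Synthetic firm-year panel standing in for Bloomberg data.  This sketch uses
# the standard modified Jones model, a common way of estimating discretionary
# accruals; the paper's exact specification may differ.
n = 400
assets_lag = rng.uniform(50, 500, n)
d_rev = rng.normal(10, 20, n)                # change in revenue
d_rec = rng.normal(2, 5, n)                  # change in receivables
ppe = rng.uniform(20, 300, n)
total_accruals = 0.02 * d_rev + 0.05 * ppe + rng.normal(0, 5, n)

df = pd.DataFrame({
    "ta": total_accruals / assets_lag,       # scale everything by lagged assets
    "inv_a": 1.0 / assets_lag,
    "d_rev_adj": (d_rev - d_rec) / assets_lag,
    "ppe": ppe / assets_lag,
})

X = sm.add_constant(df[["inv_a", "d_rev_adj", "ppe"]])
fit = sm.OLS(df["ta"], X).fit()
df["discretionary_accruals"] = fit.resid     # residual = discretionary component
print(fit.params)
print("mean |DA|:", df["discretionary_accruals"].abs().mean())
```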

Keywords: discretionary accrual earnings management, earnings manipulation, IFRS, corporate governance

Procedia PDF Downloads 114
2 The Brain’s Attenuation Coefficient as a Potential Estimator of Temperature Elevation during Intracranial High Intensity Focused Ultrasound Procedures

Authors: Daniel Dahis, Haim Azhari

Abstract:

Noninvasive image-guided intracranial treatments using high intensity focused ultrasound (HIFU) are on the course of translation into clinical applications. They include, among others, tumor ablation, hyperthermia, and blood-brain barrier (BBB) penetration. Since many of these procedures are associated with local temperature elevation, thermal monitoring is essential. MRI is an imaging method with high spatial resolution and thermal mapping capacity, and it is currently the leading modality for temperature guidance, commonly under the name MRgHIFU (magnetic resonance guided HIFU). Nevertheless, MRI is an expensive, non-portable modality, which limits its accessibility. Ultrasonic thermal monitoring, on the other hand, could provide a modular, cost-effective alternative with higher temporal resolution and accessibility. To assess the feasibility of ultrasonic brain thermal monitoring, this study investigated temporal changes in the brain tissue attenuation coefficient (AC) as potential estimators of temperature changes. Newton's law of cooling describes a temporal exponential decay in the temperature of a heated object immersed in a relatively cold surrounding. Similarly, in cerebral HIFU treatments, the temperature in the region of interest, i.e., the focal zone, is expected to follow the same law. It was therefore hypothesized that the AC of the irradiated tissue may follow a temporal exponential behavior during the cool-down regime. Three ex-vivo bovine brain tissue specimens were inserted into plastic containers, with four thermocouple probes in each sample. The containers were placed inside a specially built ultrasonic tomograph and scanned at room temperature, and the corresponding pixel-averaged AC was acquired for each specimen and used as a reference. Subsequently, the containers were placed in a beaker of hot water and gradually heated to about 45°C. They were then repeatedly rescanned during cool-down, using an ultrasonic through-transmission raster trajectory, until reaching about 30°C. From the obtained images, the normalized AC and its temporal derivative were registered as functions of temperature and time. The results demonstrated a high correlation (R² > 0.92) of both the brain AC and its temporal derivative with temperature. This supports the hypothesis and indicates the possibility of estimating brain tissue temperature from temporal changes in the AC. It is important to note that each brain yielded different AC values and slopes, implying that a calibration step is required for each specimen. Thus, for practical acoustic monitoring of the brain, two steps are suggested: first, measuring the AC at normal body temperature; second, measuring the AC after a small temperature elevation. In the face of the urgent need for a more accessible thermal monitoring technique for brain treatments, the proposed methodology enables cost-effective acoustical temperature estimation with high temporal resolution during HIFU treatments.
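As a concrete illustration of the analysis described above (not the authors' code), the short Python sketch below fits Newton's law of cooling to a thermocouple record and regresses the normalized attenuation coefficient and its temporal derivative against temperature. The data file, column layout, and initial parameter guesses are assumptions made for the example.

    # Illustrative sketch: exponential cool-down fit and AC-temperature correlation.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import linregress

    # Hypothetical file with columns: time [s], thermocouple temperature [deg C], pixel-averaged AC
    t, temp, ac = np.loadtxt("cooldown_scan.csv", delimiter=",", unpack=True)

    # Newton's law of cooling: T(t) = T_amb + (T_0 - T_amb) * exp(-k * t)
    def cooling(t, t_amb, t0, k):
        return t_amb + (t0 - t_amb) * np.exp(-k * t)

    params, _ = curve_fit(cooling, t, temp, p0=[25.0, 45.0, 1e-3])
    print("fitted ambient temperature, initial temperature, decay constant:", params)

    # Normalize the AC by its room-temperature reference and regress against temperature
    ac_norm = ac / ac[0]
    fit_ac = linregress(temp, ac_norm)
    print(f"AC vs. temperature: R^2 = {fit_ac.rvalue**2:.3f}")

    # The temporal derivative of the normalized AC is compared with temperature as well
    d_ac = np.gradient(ac_norm, t)
    fit_dac = linregress(temp, d_ac)
    print(f"dAC/dt vs. temperature: R^2 = {fit_dac.rvalue**2:.3f}")

Under this reading, the per-specimen calibration mentioned in the abstract amounts to establishing the reference AC value (here ac[0]) and the specimen-specific slope of the AC-temperature regression.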

Keywords: attenuation coefficient, brain, HIFU, image-guidance, temperature

Procedia PDF Downloads 141
1 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics

Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca

Abstract:

The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared (NIR) spectroscopy for detecting adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a pool of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Mixtures were then prepared by adding different concentrations of HFCS to samples of the honey pool. A total of 237 samples was used: 108 were authentic honey and 129 were honey adulterated with HFCS at concentrations between 1 and 10%. Samples were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The specular reflectance technique was used, with a lens aperture range of 150 mm. Spectra were pretreated by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface, implemented in MATLAB version 5.3, was used to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A combined method of Potential Functions (PF) and Partial Least Squares Discriminant Analysis (PLS-DA) was chosen. Different estimators of the predictive capacity of the model, obtained using a decreasing number of groups and hence increasingly demanding validation conditions, were compared. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples, and the calibrated model was then used to evaluate the validation samples. The calibrated model combining Potential Functions and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved for the training samples. Using PF and PLS-DA classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR spectroscopy combined with the PF and PLS-DA methods can be a simple, fast, and low-cost technique for detecting HFCS in honey with high sensitivity and discriminating power.
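To make the chemometric workflow concrete, the following Python sketch applies SNV pretreatment and a PLS-DA classifier to a spectral matrix. It is a minimal illustration: a random stratified split stands in for the Kennard-Stone algorithm, the Potential Functions step is omitted, and the file names, array shapes, and number of latent variables are assumptions rather than values from the study.

    # Minimal sketch: SNV pretreatment followed by PLS-DA classification of NIR spectra.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X = np.load("honey_spectra.npy")   # hypothetical (n_samples, n_wavenumbers) spectra
    y = np.load("honey_labels.npy")    # 0 = authentic honey, 1 = adulterated with HFCS

    # Standard Normal Variate: centre and scale each spectrum by its own mean and std
    X_snv = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

    # Random stratified split used here instead of the Kennard-Stone algorithm
    X_cal, X_val, y_cal, y_val = train_test_split(
        X_snv, y, test_size=0.3, stratify=y, random_state=0
    )

    # PLS-DA: PLS regression on the binary class label, thresholded at 0.5
    plsda = PLSRegression(n_components=8)  # latent variables would be tuned by validation error
    plsda.fit(X_cal, y_cal)
    y_pred = (plsda.predict(X_val).ravel() > 0.5).astype(int)
    print(f"correct classification rate: {accuracy_score(y_val, y_pred):.3f}")

In the study itself, variable selection (ACOGASS) and the Potential Functions step would precede or accompany the PLS-DA model; the sketch shows only the core pretreatment and classification stages.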

Keywords: adulteration, multivariate analysis, potential functions, regression

Procedia PDF Downloads 105