Search results for: parametric estimation
2222 DEMs: A Multivariate Comparison Approach
Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo
Abstract:
The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes, and orientations. This is a multivariate approach that focuses on distribution functions rather than on single parameters such as mean values or dispersions (e.g., root mean squared error or variance), and is therefore considered more holistic. The use of the Kolmogorov-Smirnov test is proposed because of its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g., the normal distribution). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02, with a resolution of 2 × 2 m, and DEM05, with a resolution of 5 × 5 m, both generated by the National Geographic Institute of Spain. DEM02 is taken as the reference and DEM05 as the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g., 5%). Once calibrated, the same process can be applied to compare the similarity of different DEM datasets (e.g., DEM05 versus DEM02). In summary, an innovative alternative for the comparison of DEM datasets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This approach could be extended to other DEM features of interest (e.g., curvature) and to more than three variables.
Keywords: data quality, DEM, Kolmogorov-Smirnov test, multivariate DEM comparison
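As a minimal illustration of the core statistical tool, the following Python sketch runs a two-sample Kolmogorov-Smirnov comparison with SciPy; the simulated elevation samples and all variable names are hypothetical stand-ins, not the authors' DEM data or their multivariate convolution procedure:

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    elev_ref = rng.normal(500.0, 50.0, 10_000)    # stand-in for reference (DEM02) elevations
    elev_prod = rng.normal(502.0, 55.0, 10_000)   # stand-in for product (DEM05) elevations

    stat, p_value = ks_2samp(elev_ref, elev_prod)
    print(f"KS statistic = {stat:.4f}, p-value = {p_value:.4g}")
    # Reject similarity at the 5% significance level if p_value < 0.05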
Procedia PDF Downloads 115

2221 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights
Authors: Nelson Bii, Christopher Ouma, John Odhiambo
Abstract:
Non-response is a potential source of error in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one of the techniques for reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, with full auxiliary information available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response; in particular, it is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. Moreover, a simulation study indicates that the proposed estimator has smaller bias and mean squared error than existing estimators of the finite population mean, as well as tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths
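For readers unfamiliar with the estimator's classical form, here is a minimal NumPy sketch of Nadaraya-Watson kernel regression; the Gaussian kernel, bandwidth, and synthetic data are illustrative assumptions and do not reproduce the improved weights proposed in the paper:

    import numpy as np

    def nadaraya_watson(x_query, x, y, h=0.5):
        # m(x) = sum_i K_h(x - x_i) * y_i / sum_i K_h(x - x_i), with a Gaussian kernel
        u = (x_query[:, None] - x[None, :]) / h
        weights = np.exp(-0.5 * u**2)
        return (weights * y).sum(axis=1) / weights.sum(axis=1)

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, 200)                # auxiliary variable
    y = np.sin(x) + rng.normal(0, 0.2, 200)    # survey variable with noise
    grid = np.linspace(0, 10, 50)
    print(nadaraya_watson(grid, x, y)[:5])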
Procedia PDF Downloads 141

2220 RP-HPLC Method Development and Its Validation for Simultaneous Estimation of Metoprolol Succinate and Olmesartan Medoxomil Combination in Bulk and Tablet Dosage Form
Authors: S. Jain, R. Savalia, V. Saini
Abstract:
A simple, accurate, precise, sensitive, and specific RP-HPLC method was developed and validated for the simultaneous estimation of Metoprolol Succinate and Olmesartan Medoxomil in bulk and tablet dosage form. The RP-HPLC method showed adequate separation of Metoprolol Succinate and Olmesartan Medoxomil from their degradation products. The separation was achieved on a Phenomenex Luna ODS C18 column (250 mm × 4.6 mm i.d., 5 μm particle size) with an isocratic mixture of acetonitrile and 50 mM phosphate buffer (pH 4.0, adjusted with glacial acetic acid) in the ratio of 55:45 v/v. The mobile phase was delivered at a flow rate of 1.0 ml/min, the injection volume was 20 μl, and the detection wavelength was 225 nm. The retention times for Metoprolol Succinate and Olmesartan Medoxomil were 2.451 ± 0.1 min and 6.167 ± 0.1 min, respectively. The linearity of the proposed method was investigated in the ranges of 5-50 μg/ml and 2-20 μg/ml for Metoprolol Succinate and Olmesartan Medoxomil, respectively, with correlation coefficients of 0.999 and 0.9996. The limits of detection were 0.2847 μg/ml and 0.1251 μg/ml, and the limits of quantification were 0.8630 μg/ml and 0.3793 μg/ml, for Metoprolol and Olmesartan, respectively. The proposed method was validated as per ICH guidelines for linearity, accuracy, precision, specificity, and robustness for the estimation of Metoprolol Succinate and Olmesartan Medoxomil in a commercially available tablet dosage form, and the results were found to be satisfactory. Thus, the developed and validated stability-indicating method can be used successfully for marketed formulations.
Keywords: metoprolol succinate, olmesartan medoxomil, RP-HPLC method, validation, ICH
Procedia PDF Downloads 316

2219 Stature and Gender Estimation Using Foot Measurements in South Indian Population
Authors: Jagadish Rao Padubidri, Mehak Bhandary, Sowmya J. Rao
Abstract:
Introduction: The significance of the human foot and its measurements in identifying an individual has been demonstrated many times by studies in different geographical areas, and its association with the stature and gender of the individual has been justified by many researchers. In our study, we used different foot measurements, including length, width, malleol height, and navicular height, to establish their association with stature and gender and to determine the accuracy of the resulting models. The purpose of this study is to show the relation of foot measurements to stature and gender and to derive multiple and logistic regression equations for stature and gender estimation in a South Indian population. Materials and Methods: The subjects were 200 South Indian students (100 females and 100 males) aged between 18 and 24 years. The data included stature and the foot length, foot breadth, foot malleol height, and foot navicular height of both the right and left foot. Descriptive statistics, t-tests, and Pearson correlation coefficients were derived between stature, gender, and foot measurements. Stature was estimated from right and left foot measurements for both male and female subjects using multiple regression analysis, and gender was estimated using logistic regression analysis. Results: The means and standard deviations of stature and of the right and left foot measurements, and the t-test values, were higher in the male population than in the female. LFL (left foot length) was greater than RFL (right foot length) in the male group, but in the female group the lengths of both feet were almost equal (RFL = 226.6, LFL = 227.1). There was little difference between the means of RFW (right foot width) and LFW (left foot width) in either gender. Significant differences were seen in the mean malleol and navicular heights of the right and left feet in males; no such difference was seen in female subjects. Conclusions: The study successfully demonstrated the correlation of foot length with stature in all study groups for both the right and left foot. Foot width and malleol height were the next best parameters for estimating stature in the male and female groups, while navicular height of both feet showed a poor relationship with stature in both groups. Multiple regression equations for right and left foot measurements were derived to estimate stature, with standard errors ranging from 11-12 cm in males and 10-11 cm in females; the SEE was 5.8 when the male and female groups were pooled. The logistic regression model derived to determine gender showed 85% and 92.5% accuracy using right and left foot measurements, respectively. We believe that stature and gender can be estimated from foot measurements in the South Indian population.
Keywords: foot length, gender, stature, South Indian
Procedia PDF Downloads 335

2218 State Estimation Based on Unscented Kalman Filter for Burgers’ Equation
Authors: Takashi Shimizu, Tomoaki Hashimoto
Abstract:
Controlling the flow of fluids is a challenging problem that arises in many fields. Burgers’ equation is a fundamental equation for several flow phenomena such as traffic, shock waves, and turbulence. The optimal feedback control method known as model predictive control has been proposed for Burgers’ equation. However, model predictive control is inapplicable to systems whose state variables are not all exactly known. From a practical point of view, it is unusual for all the state variables of a system to be exactly known, because state variables are measured through output sensors and only limited parts of them are available. In fact, the flow velocities of fluid systems usually cannot be measured over the whole spatial domain. Hence, any practical feedback controller for fluid systems must incorporate some type of state estimator. To apply model predictive control to fluid systems described by Burgers’ equation, a state estimation method for Burgers’ equation with limited measurable state variables must be established. For this purpose, we apply the unscented Kalman filter to estimate the state variables of fluid systems described by Burgers’ equation. The objective of this study is to establish a state estimation method based on the unscented Kalman filter for Burgers’ equation. The effectiveness of the proposed method is verified by numerical simulations.
Keywords: observer systems, unscented Kalman filter, nonlinear systems, Burgers' equation
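A minimal sketch of the idea in Python, using the third-party filterpy library: the full velocity field of a discretized Burgers' equation is estimated from a few sparse "sensor" readings. The grid size, viscosity, noise levels, periodic boundary, and sensor spacing are all illustrative assumptions, not the authors' setup:

    import numpy as np
    from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

    N, dx, dt, nu = 20, 0.05, 0.001, 0.05   # grid points, spacing, time step, viscosity

    def fx(u, dt):
        # One explicit finite-difference step of viscous Burgers' equation (periodic)
        up, um = np.roll(u, -1), np.roll(u, 1)
        return u + dt * (-u * (up - um) / (2 * dx) + nu * (up - 2 * u + um) / dx**2)

    def hx(u):
        # Only every fifth flow velocity is measurable (sparse sensors)
        return u[::5]

    points = MerweScaledSigmaPoints(n=N, alpha=0.1, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=N, dim_z=N // 5, dt=dt, fx=fx, hx=hx, points=points)
    ukf.x = np.sin(2 * np.pi * np.arange(N) * dx)   # initial guess of the velocity field
    ukf.P *= 0.1
    ukf.R *= 0.01
    ukf.Q *= 1e-5

    z = hx(ukf.x) + np.random.normal(0, 0.1, N // 5)  # hypothetical sensor reading
    ukf.predict()
    ukf.update(z)
    print(ukf.x)   # estimated full state from partial measurements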
Procedia PDF Downloads 153

2217 A Digital Filter for Symmetrical Components Identification
Authors: Khaled M. El-Naggar
Abstract:
This paper presents a fast and efficient technique for monitoring and supervising power system disturbances generated by the dynamic performance of power systems or by faults. Monitoring power system quantities involves monitoring the fundamental voltage and current magnitudes and their frequencies, as well as their negative- and zero-sequence components, under different operating conditions. The proposed technique is based on the simulated annealing (SA) optimization technique. The method uses a digital set of measurements of the voltage or current waveforms at a power system bus to perform the estimation process digitally. The algorithm is tested using different simulated data to monitor the symmetrical components of power system waveforms. Different study cases are considered in this work, and the effects of the number of samples, the sampling frequency, and the sample window size are studied. Results are reported and discussed.
Keywords: estimation, faults, measurement, symmetrical components
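The quantity being monitored is the classical Fortescue decomposition. A minimal NumPy sketch of that final step, applied to already-estimated phasors; the unbalanced phasor values are hypothetical, and the paper's simulated-annealing estimation of the phasors from sampled waveforms is not reproduced here:

    import numpy as np

    def symmetrical_components(va, vb, vc):
        # Fortescue transform: zero-, positive-, and negative-sequence phasors
        a = np.exp(2j * np.pi / 3)
        A = np.array([[1, 1, 1],
                      [1, a, a**2],
                      [1, a**2, a]])
        return (A @ np.array([va, vb, vc])) / 3

    va = 1.00 * np.exp(1j * np.deg2rad(0))      # hypothetical unbalanced phasors
    vb = 0.90 * np.exp(1j * np.deg2rad(-125))
    vc = 1.05 * np.exp(1j * np.deg2rad(118))
    v0, v1, v2 = symmetrical_components(va, vb, vc)
    print(abs(v0), abs(v1), abs(v2))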
Procedia PDF Downloads 466

2216 Flame Volume Prediction and Validation for Lean Blowout of Gas Turbine Combustor
Authors: Ejaz Ahmed, Huang Yong
Abstract:
The operation of aero engines is of critical importance in the vicinity of lean blowout (LBO) limits. Lefebvre’s model of LBO, based on empirical correlation, has been extended by the authors to a flame volume concept. The flame volume takes into account the effects of geometric configuration and the complex spatial interaction of mixing, turbulence, heat transfer, and combustion processes inside the gas turbine combustion chamber. For these reasons, flame-volume-based LBO predictions are more accurate. Although LBO prediction accuracy has improved, the approach poses a challenge associated with estimating the flame volume (Vf) in real gas turbine combustors. This work extends the flame volume prediction approach, previously based on fuel iterative approximation with cold flow simulations, to reactive flow simulations. The flame volume for 11 combustor configurations has been simulated and validated against experimental data. To make the prediction methodology robust, as required in the preliminary design stage, reactive flow simulations were carried out with a combination of the probability density function (PDF) and discrete phase model (DPM) in FLUENT 15.0, and a criterion for flame identification was defined. Two important parameters, the critical injection diameter (Dp,crit) and the critical temperature (Tcrit), were identified, and their influence on the reactive flow simulation was studied for Vf estimation. The obtained results exhibit ±15% error in Vf estimation with respect to experimental data.
Keywords: CFD, combustion, gas turbine combustor, lean blowout
Procedia PDF Downloads 268

2215 On Modeling Data Sets by Means of a Modified Saddlepoint Approximation
Authors: Serge B. Provost, Yishan Zhang
Abstract:
A moment-based adjustment to the saddlepoint approximation is introduced in the context of density estimation. First applied to univariate distributions, this methodology is extended to the bivariate case. It then entails estimating the density function associated with each marginal distribution by means of the saddlepoint approximation and applying a bivariate adjustment to the product of the resulting density estimates. The connection to the distribution of empirical copulas is pointed out, and a novel approach is proposed for estimating the support of a distribution. As these results rely solely on sample moments and empirical cumulant-generating functions, they are particularly well suited for modeling massive data sets. Several illustrative applications are presented.
Keywords: empirical cumulant-generating function, endpoints identification, saddlepoint approximation, sample moments, density estimation
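A minimal Python sketch of the unadjusted saddlepoint density built from an empirical cumulant-generating function; the finite-difference step, root bracket, and gamma sample are illustrative assumptions, and the paper's moment-based adjustment is not reproduced:

    import numpy as np
    from scipy.optimize import brentq

    def ecgf(t, sample):
        # Empirical cumulant-generating function K(t) = log mean exp(t X)
        return np.log(np.mean(np.exp(t * sample)))

    def saddlepoint_density(x, sample, eps=1e-4, bracket=(-5.0, 0.9)):
        K1 = lambda t: (ecgf(t + eps, sample) - ecgf(t - eps, sample)) / (2 * eps)
        s = brentq(lambda t: K1(t) - x, *bracket)        # solve K'(s) = x
        K2 = (ecgf(s + eps, sample) - 2 * ecgf(s, sample)
              + ecgf(s - eps, sample)) / eps**2          # K''(s)
        return np.exp(ecgf(s, sample) - s * x) / np.sqrt(2 * np.pi * K2)

    rng = np.random.default_rng(2)
    sample = rng.gamma(3.0, 1.0, 5000)
    print(saddlepoint_density(3.0, sample))              # close to the gamma(3, 1) density at 3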
Procedia PDF Downloads 162

2214 An Efficient Propensity Score Method for Causal Analysis With Application to Case-Control Study in Breast Cancer Research
Authors: Ms Azam Najafkouchak, David Todem, Dorothy Pathak, Pramod Pathak, Joseph Gardiner
Abstract:
Propensity score (PS) methods have become the standard tool for causal inference in observational studies, where exposure is not randomly assigned and confounding can therefore bias the estimation of the treatment effect on the outcome. For a binary outcome, the effect of treatment can be estimated by odds ratios, relative risks, or risk differences; however, different PS methods may yield different estimates of the treatment effect. The PS methods in common use include matching, inverse probability weighting, stratification, and covariate adjustment on the PS. Because of the dangers of discretizing continuous variables (exposure, covariates), the focus of this paper is on how variation in cut-points or boundaries affects the average treatment effect (ATE) under the stratification PS method. We therefore avoid choosing arbitrary cut-points; instead, we continuously discretize the PS and accumulate information across all cut-points for inference. We use Monte Carlo simulation to evaluate the ATE, focusing on two PS methods: stratification and covariate adjustment on the PS. We then illustrate the approach with data from a case-control study of breast cancer, the Polish Women’s Health Study.
Keywords: average treatment effect, propensity score, stratification, covariate adjustment, Monte Carlo estimation, breast cancer, case-control study
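To make the stratification baseline concrete, here is a minimal Python sketch using scikit-learn on synthetic data: the PS is fitted by logistic regression, cut at quintiles, and within-stratum risk differences are averaged. The fixed quintile cut-points are exactly the arbitrary choice the paper seeks to move beyond; all data and coefficients are hypothetical:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 5000
    X = rng.normal(size=(n, 3))                               # confounders
    ps_true = 1 / (1 + np.exp(-X @ np.array([0.5, -0.3, 0.8])))
    T = rng.binomial(1, ps_true)                              # exposure
    Y = rng.binomial(1, 1 / (1 + np.exp(-(0.7 * T + X @ np.array([0.4, 0.2, -0.5])))))

    ps_hat = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

    edges = np.quantile(ps_hat, [0.2, 0.4, 0.6, 0.8])         # quintile boundaries
    strata = np.digitize(ps_hat, edges)
    ate = np.mean([Y[(strata == s) & (T == 1)].mean() - Y[(strata == s) & (T == 0)].mean()
                   for s in range(5)])
    print(f"stratified ATE estimate (risk difference): {ate:.3f}")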
Procedia PDF Downloads 107

2213 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data
Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa
Abstract:
A generalized log-logistic distribution with variable shapes of the hazard rate is introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties are derived. The method of maximum likelihood is adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed bathtub-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other three-parameter parametric survival distributions, such as the exponentiated Weibull, the three-parameter lognormal, the three-parameter gamma, the three-parameter Weibull, and the three-parameter (shifted) log-logistic distributions. The proposed distribution provided a better fit than all of the competing distributions based on goodness-of-fit tests, log-likelihood, and information criterion values. Finally, a Bayesian analysis is carried out and the performance of Gibbs sampling on the data set is assessed.
Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation
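For reference, the classical log-logistic sub-model can be fitted by maximum likelihood directly in SciPy, where it appears as the Fisk distribution; this sketch fits only that sub-model on simulated lifetimes, since the generalized distribution's extra parameter is not available in SciPy:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    lifetimes = stats.fisk.rvs(c=2.5, scale=10.0, size=500, random_state=rng)

    c_hat, loc_hat, scale_hat = stats.fisk.fit(lifetimes, floc=0)  # MLE, location fixed at 0
    loglik = stats.fisk.logpdf(lifetimes, c_hat, loc_hat, scale_hat).sum()
    print(f"shape={c_hat:.3f}, scale={scale_hat:.3f}, log-likelihood={loglik:.1f}")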
Procedia PDF Downloads 202

2212 On Confidence Intervals for the Difference between Inverse of Normal Means with Known Coefficients of Variation
Authors: Arunee Wongkhao, Suparat Niwitpong, Sa-aat Niwitpong
Abstract:
In this paper, we propose two new confidence intervals for the difference between the inverses of normal means with known coefficients of variation. The first is constructed from the generalized confidence interval, and the second from the closed-form method of variance estimation. We examine the performance of these confidence intervals in terms of coverage probabilities and expected lengths via Monte Carlo simulation.
Keywords: coverage probability, expected length, inverse of normal mean, coefficient of variation, generalized confidence interval, closed form method of variance estimation
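The evaluation scheme generalizes readily. A minimal Monte Carlo sketch in Python for estimating coverage probability and expected length, shown here for a simple normal-approximation interval for a mean rather than the paper's intervals for differences of inverse means:

    import numpy as np

    rng = np.random.default_rng(5)
    mu, sigma, n, n_sim, z = 5.0, 1.0, 30, 10_000, 1.96

    covered, lengths = 0, []
    for _ in range(n_sim):
        x = rng.normal(mu, sigma, n)
        half = z * x.std(ddof=1) / np.sqrt(n)   # half-width of the interval
        lo, hi = x.mean() - half, x.mean() + half
        covered += lo <= mu <= hi
        lengths.append(hi - lo)

    print(f"coverage probability ~ {covered / n_sim:.3f}")
    print(f"expected length ~ {np.mean(lengths):.3f}")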
Procedia PDF Downloads 309

2211 Visualization and Performance Measure to Determine Number of Topics in Twitter Data Clustering Using Hybrid Topic Modeling
Authors: Moulana Mohammed
Abstract:
Topic models have been widely used for building clusters of documents for more than a decade, yet problems remain in choosing the optimal number of topics. The main problem is the lack of a stable metric of the quality of the topics obtained during the construction of topic models. From previous works, the authors observed that most models used in determining the number of topics are non-parametric, with topic quality assessed by perplexity and coherence measures, and concluded that these are not suitable for solving the problem. In this paper, we use a parametric method, an extension of the traditional topic model with visual access tendency, to visualize the number of topics (clusters), complement clustering, and choose the optimal number of topics based on the results of cluster validity indices. The developed hybrid topic models are demonstrated on different Twitter datasets covering various topics, both for obtaining the optimal number of topics and for measuring the quality of clusters. The experimental results show that the Visual Non-negative Matrix Factorization (VNMF) topic model performs well in determining the optimal number of topics with interactive visualization and in measuring cluster quality with validity indices.
Keywords: interactive visualization, visual non-negative matrix factorization model, optimal number of topics, cluster validity indices, Twitter data clustering
Procedia PDF Downloads 134

2210 Data Driven Infrastructure Planning for Offshore Wind Farms
Authors: Isha Saxena, Behzad Kazemtabrizi, Matthias C. M. Troffaes, Christopher Crabtree
Abstract:
The calculations done at the beginning of the life of a wind farm are rarely reliable, which makes it important to study the failure and repair rates of wind turbines under various conditions. The miscalculation arises because current models make the simplifying assumption that the failure/repair rate remains constant over time, which means the reliability function is exponential in nature. This research aims to create a more accurate model using sensor data and a data-driven approach. Data cleaning and processing are performed by comparing the power curve data of the wind turbines with SCADA data, which is then converted into times-to-repair and times-to-failure time series. Several mathematical functions are fitted to the times-to-failure and times-to-repair data of the wind turbine components using maximum likelihood estimation and the posterior expectation method for Bayesian parameter estimation. Initial results indicate that the two-parameter Weibull function and the exponential function produce almost identical results. Further analysis is being carried out using complex system analysis, considering the failures of each electrical and mechanical component of the wind turbine. The aim of this project is to perform a more accurate reliability analysis that can help engineers schedule maintenance and repairs to decrease the downtime of the turbine.
Keywords: reliability, Bayesian parameter inference, maximum likelihood estimation, Weibull function, SCADA data
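As a sketch of the distribution-fitting step, the following Python code fits a two-parameter Weibull and a competing exponential model to hypothetical times-to-failure by maximum likelihood using SciPy; a fitted shape parameter near 1 would explain why the two fits come out almost identical:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    # Hypothetical times-to-failure (hours) for one turbine component
    ttf = stats.weibull_min.rvs(c=1.1, scale=1200.0, size=300, random_state=rng)

    shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)   # MLE, location fixed at 0
    exp_params = stats.expon.fit(ttf, floc=0)                # competing exponential fit
    ll_weib = stats.weibull_min.logpdf(ttf, shape, loc, scale).sum()
    ll_exp = stats.expon.logpdf(ttf, *exp_params).sum()
    print(f"Weibull shape={shape:.2f}, scale={scale:.0f}")
    print(f"log-likelihood: Weibull={ll_weib:.1f}, exponential={ll_exp:.1f}")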
Procedia PDF Downloads 87

2209 A Carrier Phase High Precision Ranging Theory Based on Frequency Hopping
Authors: Jie Xu, Zengshan Tian, Ze Li
Abstract:
Previous indoor ranging or localization systems achieving high-accuracy time-of-flight (ToF) estimation relied on two key points. One is strict time and frequency synchronization between the transmitter and receiver to eliminate asynchronous equipment errors such as carrier frequency offset (CFO), which is difficult to achieve in a practical communication system. The other is extending the total bandwidth of the communication, because the accuracy of ToF estimation is proportional to the bandwidth: the larger the total bandwidth, the higher the accuracy of the ToF estimate. Ultra-wideband (UWB) technology, for example, is built on this principle, but high-precision ToF estimation is difficult to achieve in common WiFi or Bluetooth systems, whose bandwidth is lower than UWB's. It is therefore meaningful to study how to achieve high-precision ranging with lower bandwidth when the transmitter and receiver are asynchronous. To tackle these problems, we propose a two-way channel error elimination theory and a frequency-hopping-based carrier phase ranging algorithm that achieve high-accuracy ranging under asynchronous conditions. The two-way channel error elimination theory uses the symmetry of the two-way channel to resolve the asynchronous phase error caused by the asynchronous transmitter and receiver, and we also study the effect of the two-way channel generation time difference on the phase according to the characteristics of different hardware devices. The frequency-hopping-based carrier phase ranging algorithm uses frequency hopping to extend the equivalent bandwidth and incorporates a carrier phase ranging algorithm with multipath resolution, achieving, in the typical 80 MHz bandwidth of commercial WiFi, a ranging accuracy comparable to that of UWB at 400 MHz bandwidth. Finally, to verify the validity of the algorithm, we implement the theory on a software radio platform; the experimental results show that the proposed method has a median ranging error of 5.4 cm at 5 m range, 7 cm at 10 m, and 10.8 cm at 20 m, for a total bandwidth of 80 MHz.
Keywords: frequency hopping, phase error elimination, carrier phase, ranging
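The core of carrier-phase ranging over hopped frequencies is that the channel phase slope across frequency encodes the time of flight. A minimal single-path, synchronized NumPy sketch of that principle; the hop grid and noise level are hypothetical, and the paper's two-way error elimination and multipath resolution are not reproduced:

    import numpy as np

    rng = np.random.default_rng(7)
    c = 3e8
    freqs = 2.4e9 + np.arange(0, 80e6, 5e6)     # hypothetical hop grid spanning 80 MHz
    d_true = 12.3                               # meters
    tau = d_true / c

    # Wrapped carrier phase measured at each hop, with phase noise
    phase = np.angle(np.exp(-2j * np.pi * freqs * tau)) + rng.normal(0, 0.05, freqs.size)

    slope = np.polyfit(freqs, np.unwrap(phase), 1)[0]   # dphi/df = -2*pi*tau
    d_est = -slope * c / (2 * np.pi)
    print(f"estimated distance: {d_est:.2f} m")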
Procedia PDF Downloads 124

2208 Innovative Activity and Development: Analysing Firm Data from Eurozone Country-Members
Authors: Ilias A. Makris
Abstract:
In this work, we attempt to associate firm characteristics with innovative activity. We collect microdata from listed firms of selected Eurozone country-members after the onset of the 2007 financial crisis. Following the literature, several indicators of growth and performance were selected and tested for their ability to explain innovative activity. The main scope is to examine possible differences in performance and growth between innovative and non-innovative firms during a severe recession. In addition, special focus is placed on whether macroeconomic performance and the national innovation system determine the extent of innovators' performance. Preliminary findings, through correlation matrices and non-parametric tests, strongly indicate a positive relation between innovative activity and most of the measures used (profitability, size, employment), confirming that even during a recessionary period, innovative firms not only survive but also seem to achieve better economic results in almost all indexes relative to non-innovative ones. However, even though innovators seem to perform better in all economies examined, the extent of that performance appears to be strongly affected by the supportive mechanisms (financial and structural) that their country provides. Thus, it is clear that the technologically intensive 'gap' between the European South and North became chaotic during the economic crisis, owing to the harsh austerity measures and reduced budgets in those countries, even in sectors with high potential for economic activity and employment, deepening the effects of the crisis and reinforcing the vicious circle of recession.
Keywords: Eurozone, innovative activity, development, firm performance, non-parametric tests
Procedia PDF Downloads 438

2207 The Use of a Rabbit Model to Evaluate the Influence of Age on Excision Wound Healing
Authors: S. Bilal, S. A. Bhat, I. Hussain, J. D. Parrah, S. P. Ahmad, M. R. Mir
Abstract:
Background: Wound healing involves a highly coordinated cascade of cellular and immunological responses over a period of time, including coagulation, inflammation, granulation tissue formation, epithelialization, collagen synthesis, and tissue remodeling. Wounds in aged animals heal more slowly than those in younger ones, mainly because of comorbidities that occur with age. The present study examines the influence of age on wound healing. Wounds of 1 × 1 cm² (100 mm²) were created on the back of each animal. The animals were divided into two groups: one with animals aged 3-9 months and another with animals aged 15-21 months. Materials and Methods: 24 clinically healthy rabbits aged 3-21 months were used as experimental animals and divided into two groups, A and B. All experimental procedures, i.e., the excision wound model, measurement of wound area, protein extraction and estimation, and DNA extraction and estimation, followed standard methods. Results: The parameters studied were wound contraction, hydroxyproline, glucosamine, protein, and DNA. A significant increase (p < 0.005) in hydroxyproline, glucosamine, protein, and DNA, and a significant decrease in wound area (p < 0.005), were observed in the 3-9 month age group compared to the 15-21 month age group. Wound contraction, together with the hydroxyproline, glucosamine, protein, and DNA estimations, suggests that advanced age results in retarded wound healing. Conclusion: The decreased wound contraction and reduced accumulation of hydroxyproline, glucosamine, protein, and DNA in group B animals may be associated with a reduction or delay in growth factors due to advancing age.
Keywords: age, wound healing, excision wound, hydroxyproline, glucosamine
Procedia PDF Downloads 660

2206 In Agile Projects - Arithmetic Sequence is More Effective than Fibonacci Sequence to Use for Estimating the Implementation Effort of User Stories
Authors: Khaled Jaber
Abstract:
The estimation of effort in software development is a complex task. The traditional Waterfall approach to developing software systems requires a lot of time to estimate the effort needed to implement user requirements. The Agile approach, however, is now more widely used in industry than Waterfall. In Agile, a user requirement is referred to as a user story, and Agile teams mostly use the Fibonacci sequence (1, 2, 3, 5, 8, 13, etc.) to estimate the effort needed to implement a user story. This work shows through analysis that an arithmetic sequence, e.g., 3, 6, 9, 12, etc., is more effective than the Fibonacci sequence for estimating user stories. The paper proves the effectiveness of the arithmetic sequence over the Fibonacci sequence both mathematically and visually.
Keywords: agile, scrum, estimation, Fibonacci sequence
Procedia PDF Downloads 207

2205 Item Response Calibration/Estimation: An Approach to Adaptive E-Learning System Development
Authors: Adeniran Adetunji, Babalola M. Florence, Akande Ademola
Abstract:
In this paper, we give an overview of the concept of an adaptive e-learning system and enumerate the elements of adaptive learning, e.g., a pedagogical framework, multiple learning strategies and pathways, continuous monitoring of and feedback on student performance, and statistical inference to reach the final learning strategy that works for an individual learner by 'mass-customization'. We briefly highlight the motivation for this new system proposed for effective learning and teaching, and review the literature on adaptive e-learning systems, with emphasis on item response calibration, which is an important approach to developing an adaptive e-learning system. The paper concludes with a justification of item response calibration/estimation as a basis for designing a successful and effective adaptive e-learning system.
Keywords: adaptive e-learning system, pedagogical framework, item response, computer applications
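As a sketch of the calibration/estimation idea, the following Python code scores a learner's ability under a two-parameter logistic (2PL) item response model by maximum likelihood; the item bank parameters and response pattern are hypothetical:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def p_2pl(theta, a, b):
        # 2PL item response function: P(correct) = 1 / (1 + exp(-a (theta - b)))
        return 1 / (1 + np.exp(-a * (theta - b)))

    a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])     # hypothetical item discriminations
    b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])   # hypothetical item difficulties
    responses = np.array([1, 1, 1, 0, 0])       # learner's right/wrong pattern

    def neg_loglik(theta):
        p = p_2pl(theta, a, b)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    theta_hat = minimize_scalar(neg_loglik, bounds=(-4, 4), method="bounded").x
    print(f"estimated ability: {theta_hat:.2f}")   # would drive the next item selection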
Procedia PDF Downloads 597

2204 Parametric Study and Design on under Reamed Pile - An Experimental and Numerical Study
Authors: S. Chandrakaran, Aarthy D.
Abstract:
Under-reamed piles are piles of different types, such as bored cast in-situ or bored compaction concrete piles, in which one or more bulbs are provided. In this paper, the design of under-reamed piles was studied both experimentally and numerically, the latter using the PLAXIS 3D Foundation software. The soil chosen for the study was M-sand. Single and double under-reamed pile models were made of mild steel. Pile load tests were conducted in the laboratory, the ultimate compression load at 25 mm settlement was observed for single and double under-reamed piles, and the results were compared with a conventional pile (a pile without a bulb). The parametric influence on the under-reamed pile was studied by varying geometrical parameters such as the diameter of the bulbs, the spacing between bulbs, the position of the bulbs, and the number of bulbs. The results of the numerical model showed that when the bulb diameter was Du = 2.5D, the ultimate compression load for an under-reamed pile with a single bulb increased by 55% compared to a pile without a bulb. When the spacing between the bulbs was S = 6Du, with three different bulb positions from the bottom of the pile of Du, 2Du, and 3Du, the ultimate compression load increased by 88%, 94%, and 73%, respectively, compared to the ultimate compression load for 25 mm settlement of the conventional pile; if the spacing was more than 6Du, the ultimate compression load for 25 mm settlement started to decrease. It was also observed that when the bucket length was more than 2Du, the ultimate compression…
Keywords: load capacity, under-reamed bulb, sand, model study
Procedia PDF Downloads 89

2203 Estimation of Desktop E-Wastes in Delhi Using Multivariate Flow Analysis
Authors: Sumay Bhojwani, Ashutosh Chandra, Mamita Devaburman, Akriti Bhogal
Abstract:
This article uses material flow analysis to estimate e-waste in the Delhi/NCR region. The material flow analysis is based on sales data obtained from various sources. Much of the available sales data is unreliable because of the existence of a huge informal sector, which in India accounts for more than 90% of the market; the scope of this study is therefore limited to the formal sector. Also, to project the sales data to 2030, we used linear regression to avoid complexity: actual sales in the years following 2015 may vary non-linearly, but we assumed a basic linear relation. The purpose of this study was to obtain an approximate quantity of the desktop e-waste that we will have by the year 2030, so that we can start preparing for the ineluctable investment in the treatment of these ever-rising e-wastes. The results of this study can be used to plan a treatment plant for e-waste in Delhi.
Keywords: e-wastes, Delhi, desktops, estimation
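A minimal sketch of the linear projection step in Python; the sales figures below are placeholders, not the study's data:

    import numpy as np

    # Hypothetical formal-sector desktop sales for Delhi/NCR (units per year)
    years = np.array([2007, 2009, 2011, 2013, 2015])
    sales = np.array([220_000, 310_000, 405_000, 490_000, 560_000])

    slope, intercept = np.polyfit(years, sales, 1)   # simple linear trend
    future = np.arange(2016, 2031)
    projected = slope * future + intercept
    print({int(y): int(s) for y, s in zip(future[-3:], projected[-3:])})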
Procedia PDF Downloads 259

2202 Volume Estimation of Trees: An Exploratory Study on Pterocarpus erinaceus Logging Operations within Forest Transition and Savannah Ecological Zones of Ghana
Authors: Albert Kwabena Osei Konadu
Abstract:
Pterocarpus erinaceus, also known as rosewood, is a tropical wood endemic to the forest-savannah transition zones of the middle and northern portions of Ghana. Its economic value has made it increasingly popular and in high demand, leading to widespread conservation concerns. Ghana's forest resource management regime for these ecozones focuses mainly on conservation and very little on resource utilization. Consequently, commercial logging management standards are at a teething stage and not fully developed, leading to deficiencies in the monitoring of logging operations and the quantification of harvested tree volumes. The tree information form (TIF), a volume estimation and tracking regime, has proven to be an effective, sustainable management tool for regulating timber extraction in the high forest zones of the country. This work aims to generate a TIF that can track and capture the requisite parameters to accurately estimate the volume of harvested rosewood within the forest-savannah transition zones. Tree information forms were created for three scenarios: individual billets, stacked billets, and conveying vessels. These TIFs were field-tested to deduce the most viable option for tracking and estimating harvested rosewood volumes, using Smalian's and cubic volume estimation formulae. Overall, four districts were covered, with the individual billet, stacked billet, and conveying vessel scenarios registering mean volumes of 25.83 m³, 45.08 m³, and 32.6 m³, respectively. These derived volumes were validated by benchmarking against the assigned volumes of the Forestry Commission of Ghana and the known standard volumes of conveying vessels. The results indicated an underestimation of extracted volumes under the quota regime, a situation that could lead to unintended overexploitation of the species. The research revealed that the conveying vessel route is the most viable volume estimation and tracking regime for the sustainable management of Pterocarpus erinaceus, as it provided a more practical volume estimate and data extraction protocol.
Keywords: Convention on International Trade in Endangered Species, cubic volume formula, forest transition savannah zones, Pterocarpus erinaceus, Smalian's volume formula, tree information form
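Smalian's formula computes log volume from the two end cross-sections and the length; a minimal sketch in Python, with hypothetical billet dimensions:

    import math

    def smalian_volume(d_big, d_small, length):
        # V = L * (A1 + A2) / 2, end areas from end diameters (all in metres)
        a1 = math.pi * d_big**2 / 4
        a2 = math.pi * d_small**2 / 4
        return length * (a1 + a2) / 2

    # Hypothetical rosewood billet: 32 cm and 26 cm end diameters, 2.2 m long
    print(f"{smalian_volume(0.32, 0.26, 2.2):.4f} m^3")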
Procedia PDF Downloads 110

2201 Finite Element Analysis of Raft Foundation on Various Soil Types under Earthquake Loading
Authors: Qassun S. Mohammed Shafiqu, Murtadha A. Abdulrasool
Abstract:
The design of shallow foundations to withstand dynamic loads has received considerable attention in recent years. Dynamic loads may be due to earthquakes, pile driving, blasting, water waves, and machine vibrations, yet predicting the behavior of shallow foundations during earthquakes remains a difficult task for geotechnical engineers. A database of dynamic and static parameters for different soils in the seismically active zones of Iraq was prepared, collected from geophysical and geotechnical investigation works. An analysis of a typical 3-D soil-raft foundation system under earthquake loading was then carried out using this database, and a parametric study examined the influence of parameters such as raft stiffness and damping ratio, as well as the influence of the earthquake acceleration-time records. The results of the parametric study show that the settlement caused by the earthquake can be decreased by about 72% by increasing the raft thickness from 0.5 m to 1.5 m, while a reduction in the maximum bending moment by about 82% was predicted by decreasing the raft thickness from 1.5 m to 0.5 m, in all site models. It was also observed that the maximum lateral displacement, maximum vertical settlement, and maximum bending moment for a damping ratio of 0% are about 14%, 20%, and 18% higher, respectively, than for a damping ratio of 7.5%, in all site models.
Keywords: shallow foundation, seismic behavior, raft thickness, damping ratio
Procedia PDF Downloads 149

2200 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System
Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi
Abstract:
Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate pilot-assisted channel estimation for DC-biased optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to enhance the bit-error rate (BER) of PACE-DCO-OFDM. Results show that the DCO-OFDM system based on the PACE scheme achieves better BER performance than the conventional system without pilot-assisted channel estimation. Simulation results show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm estimates the channel more accurately and achieves better BER performance than both the LS-based PACE-DCO-OFDM and the traditional system without PACE. At the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ for LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
Keywords: channel estimation, OFDM, pilot-assist, VLC
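A minimal NumPy sketch contrasting LS and LMMSE estimation at the pilot subcarriers; the channel covariance model, unit-power pilots, and SNR are illustrative assumptions rather than the paper's MATLAB/Simulink setup:

    import numpy as np

    rng = np.random.default_rng(8)
    n_pilots, snr_lin = 16, 10 ** (25 / 10)

    idx = np.arange(n_pilots)
    R_hh = np.exp(-0.1 * np.abs(idx[:, None] - idx[None, :]))   # assumed channel covariance
    w = (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)) / np.sqrt(2)
    h = np.linalg.cholesky(R_hh) @ w                            # correlated channel realization

    x = np.ones(n_pilots)                                       # unit-power pilot symbols
    noise = (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)) / np.sqrt(2 * snr_lin)
    y = h * x + noise

    h_ls = y / x                                                # least-squares estimate
    h_lmmse = R_hh @ np.linalg.solve(R_hh + np.eye(n_pilots) / snr_lin, h_ls)

    mse = lambda e: np.mean(np.abs(e) ** 2)
    print(f"LS MSE = {mse(h_ls - h):.4f}, LMMSE MSE = {mse(h_lmmse - h):.4f}")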
Procedia PDF Downloads 181

2199 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model
Authors: Nureni O. Adeboye, Dawud A. Agunbiade
Abstract:
This research investigates the effects of heteroscedasticity and periodicity in a panel data regression model (PDRM) by extending previous work on balanced panel data estimation in the context of fitting a PDRM to banks' audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE), and root mean square error (RMSE) of the parameter estimates. Eighteen different models were fitted under different specified conditions; the best-fitting model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
Keywords: audit fee, Lagrange multiplier test, heteroscedasticity, Monte Carlo scheme, periodicity
Procedia PDF Downloads 142

2198 Estimation of Maize Yield by Using a Process-Based Model and Remote Sensing Data in the Northeast China Plain
Authors: Jia Zhang, Fengmei Yao, Yanjing Tan
Abstract:
The accurate estimation of crop yield is of great importance for food security. In this study, a process-based mechanistic model was modified to estimate the yield of a C4 crop by modifying the carbon metabolic pathway in the photosynthesis sub-module of the RS-P-YEC (Remote-Sensing-Photosynthesis-Yield Estimation for Crops) model. Yield was calculated by multiplying net primary productivity (NPP) by the harvest index (HI) derived from the ratio of grain to stalk yield. The modified RS-P-YEC model was used to simulate maize yield in the Northeast China Plain during 2002-2011, and the simulated results were validated at the county level against statistical maize yield data from the study area. The results showed a Pearson correlation coefficient (R) of 0.827 (p < 0.01) between the simulated yield and the statistical data, with a root mean square error (RMSE) of 712 kg/ha and a relative error (RE) of 9.3%. From 2002 to 2011, the yield of the maize planting zone in the Northeast China Plain increased, with a decreasing coefficient of variation (CV). The spatial pattern of simulated maize yield was consistent with the actual distribution in the Northeast China Plain, with an increasing trend from the northeast to the southwest. The results demonstrate that the modified process-based model coupled with remote sensing data is suitable for spatial-scale yield prediction of maize in the Northeast China Plain.
Keywords: process-based model, C4 crop, maize yield, remote sensing, Northeast China Plain
Procedia PDF Downloads 378

2197 Performance of an Absorption Refrigerator Using a Solar Thermal Collector
Authors: Abir Hmida, Nihel Chekir, Ammar Ben Brahim
Abstract:
In the present paper, we investigate the feasibility of a solar-thermal-driven cold room in Gabes, in the southern region of Tunisia. The cold room, of 109 m³, is refrigerated using an ammonia absorption machine and is intended to preserve dates during the hot months of the year. A detailed study of the cold room previously led to an estimate of the cooling load of the proposed storage room under the operating conditions of the region. The next step is the estimation of the heat required at the generator of the absorption machine to ensure the desired cold temperature. A thermodynamic analysis was performed and a complete description of the system is given. We propose here to supply the needed heat thermally from the sun using vacuum tube collectors, and we find that at least 21 m² of solar collectors are necessary for the solar cold room to operate.
Keywords: absorption, ammonia, cold room, solar collector, vacuum tube
Procedia PDF Downloads 178

2196 Calculational-Experimental Approach of Radiation Damage Parameters on VVER Equipment Evaluation
Authors: Pavel Borodkin, Nikolay Khrennikov, Azamat Gazetdinov
Abstract:
The problem of ensuring the integrity of VVER-type reactor equipment is now most pressing in connection with the justification of the safety of NPP units and the extension of their service life to 60 years and beyond. This concerns, first of all, older units with VVER-440 and VVER-1000 reactors. The justification of VVER equipment integrity depends on the reliability of the estimate of the degree of equipment damage. One of the mandatory requirements providing the reliability of such estimates, and of VVER equipment lifetime evaluation, is the monitoring of the equipment's radiation loading parameters. In this connection, there is a problem of justifying such normative parameters used for estimating pressure vessel metal embrittlement as the fluence and fluence rate (FR) of fast neutrons above 0.5 MeV. From the point of view of regulatory practice, a comparison of displacement per atom (DPA) and fast neutron fluence (FNF) above 0.5 MeV is of practical concern. In accordance with the Russian regulatory rules, the neutron fluence F(E > 0.5 MeV) is the radiation exposure parameter used in predicting steel embrittlement under neutron irradiation. However, DPA is a more physically legitimate measure of neutron damage in Fe-based materials, and if the DPA distribution in reactor structures is more conservative than the neutron fluence, this should attract the attention of the regulatory authority. The purpose of this work was to show which radiation load parameters (fluence, DPA) should be kept under control for all VVER equipment, and to give reasonable estimates of these parameters over the volume of all equipment. The second task is to give a conservative estimate of each parameter, including its uncertainty. Results of recently obtained investigations allow testing the conservatism of calculational predictions, and, as shown in the paper, combining ex-vessel measured data with calculated data allows assessing unpredicted uncertainties resulting from the specific, unique features of individual VVER reactor equipment. Some results of these calculational-experimental investigations are presented in this paper.
Keywords: equipment integrity, fluence, displacement per atom, nuclear power plant, neutron activation measurements, neutron transport calculations
Procedia PDF Downloads 157

2195 Estimation of Coefficients of Ridge and Principal Components Regressions with Multicollinear Data
Authors: Rajeshwar Singh
Abstract:
Multicollinearity is common when handling several explanatory variables simultaneously, as they may exhibit linear relationships among themselves. It creates a great problem in understanding the impact of the explanatory variables on the dependent variable, and in its presence the method of least squares gives inexact estimates. In this case, it is advisable to detect multicollinearity before proceeding further. Ridge regression reduces the degree of its effect, while principal components regression gives good estimates in this situation. This paper discusses the well-known techniques of ridge and principal components regression and applies both to obtain coefficient estimates. In addition, this paper discusses the conflicting claims, found in the available documents, regarding credit for the discovery of the ridge regression method.
Keywords: conflicting claim on credit of discovery of ridge regression, multicollinearity, principal components and ridge regressions, variance inflation factor
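A minimal scikit-learn sketch of the two techniques side by side on synthetic collinear data; the regularization strength and number of retained components are illustrative choices:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(9)
    n = 200
    x1 = rng.normal(size=n)
    X = np.column_stack([x1, x1 + rng.normal(0, 0.01, n), rng.normal(size=n)])  # collinear pair
    y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.5, n)

    ridge = Ridge(alpha=1.0).fit(X, y)                                      # ridge regression
    pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, y)  # principal components regression
    print("ridge coefficients:", ridge.coef_)
    print("PCR R^2:", pcr.score(X, y))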
Procedia PDF Downloads 421

2194 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89
Authors: A. Chatel, I. S. Torreguitart, T. Verstraete
Abstract:
The paper deals with a single-point optimization of the LS89 turbine using adjoint optimization, with the design variables defined within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches, where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each stage. This study presents a methodology to progressively refine the design space, combining parametric effectiveness with a differential evolutionary algorithm to create an optimal parameterization. We show that by parameterizing at the CAD level, higher-level constraints can be imposed on the shape, such as the axial chord length, the trailing edge radius, and G2 geometric continuity between the suction and pressure sides at the leading edge. Additionally, the adjoint sensitivities are filtered, and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows the grid sensitivities to be computed to machine accuracy, avoiding the limited arithmetic precision and truncation error of finite differences. The parametric effectiveness is then computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization, the design space is progressively enlarged using the knot insertion algorithm, which introduces new control points whilst preserving the initial shape. The positions of the inserted knots are generally assumed; however, this assumption can hinder the creation of better parameterizations that would produce more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in entropy generation are achieved whilst keeping the exit flow angle fixed and the trailing edge radius and axial chord length fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end. Furthermore, the multilevel optimization in which the parameterization is created using the optimal knot positions is a more efficient strategy for reaching a better optimum than the multilevel optimization in which the knot positions are arbitrarily assumed.
Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness
Procedia PDF Downloads 112

2193 Quantification of Methane Emissions from Solid Waste in Oman Using IPCC Default Methodology
Authors: Wajeeha A. Qazi, Mohammed-Hasham Azam, Umais A. Mehmood, Ghithaa A. Al-Mufragi, Noor-Alhuda Alrawahi, Mohammed F. M. Abushammala
Abstract:
Municipal solid waste (MSW) disposed of in landfill sites decomposes under anaerobic conditions and produces gases consisting mainly of carbon dioxide (CO₂) and methane (CH₄). Methane has a global warming potential 25 times that of CO₂ and can potentially affect human life and the environment. This research therefore determines MSW generation and the annual CH₄ emissions from the generated waste in Oman over the years 1971-2030. Total waste generation was estimated using existing models, while CH₄ emissions were estimated using the Intergovernmental Panel on Climate Change (IPCC) default method. It is found that total MSW generation in Oman might reach 3,089 Gg in the year 2030, producing approximately 85 Gg of CH₄ emissions in that year.
Keywords: methane, emissions, landfills, solid waste
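The IPCC default (Tier 1) method is a single mass-balance formula; a minimal Python sketch follows, in which the default factors shown are typical IPCC placeholder values, not necessarily those the authors used, so the output only lands in the same range as the paper's 85 Gg figure:

    def ch4_ipcc_default(msw_t, msw_f=0.85, mcf=0.6, doc=0.15,
                         doc_f=0.5, f=0.5, r=0.0, ox=0.0):
        # CH4 (Gg) = (MSW_T * MSW_F * MCF * DOC * DOC_F * F * 16/12 - R) * (1 - OX)
        return (msw_t * msw_f * mcf * doc * doc_f * f * 16 / 12 - r) * (1 - ox)

    # Applied to the paper's projected 3,089 Gg of MSW in 2030
    print(f"{ch4_ipcc_default(3089):.0f} Gg CH4")   # ~79 Gg with these placeholder factors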
Procedia PDF Downloads 510