Search results for: error pointing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2013

1293 Design and Evaluation of a Prototype for Non-Invasive Screening of Diabetes – Skin Impedance Technique

Authors: Pavana Basavakumar, Devadas Bhat

Abstract:

Diabetes is a disease that often goes undiagnosed until its secondary effects are noticed. Early detection of the disease is necessary to avoid serious consequences that could lead to the death of the patient. Conventional invasive tests for screening of diabetes are mostly painful, time-consuming and expensive, and they carry a risk of infection; it is therefore essential to develop non-invasive methods to screen for diabetes and estimate the level of blood glucose. Extensive research is going on with this perspective, involving various techniques that explore optical, electrical, chemical and thermal properties of the human body that directly or indirectly depend on the blood glucose concentration. Thus, non-invasive blood glucose monitoring has grown into a vast field of research. In this project, an attempt was made to devise a prototype for screening of diabetes by measuring the electrical impedance of the skin and building a model to predict a patient’s condition from the measured impedance. The prototype developed passes a negligible constant current (0.5 mA) across a subject’s index finger through tetrapolar silver electrodes and measures the output voltage across a wide range of frequencies (10 kHz – 4 MHz). The measured voltage is proportional to the impedance of the skin, which was acquired in real time for further analysis. The study was conducted on over 75 subjects with permission from the institutional ethics committee; along with impedance, the subjects’ blood glucose values were also recorded using the conventional method. Nonlinear regression analysis was performed on the features extracted from the impedance data to obtain a model that predicts blood glucose values for a given set of features. When the predicted data were plotted on Clarke’s Error Grid, only 58% of the predicted values were clinically acceptable.
Since the objective of the project was to screen for diabetes rather than to estimate blood glucose, the data were instead classified into three classes, ‘NORMAL FASTING’, ‘NORMAL POSTPRANDIAL’ and ‘HIGH’, using a linear Support Vector Machine (SVM). A classification accuracy of 91.4% was obtained. The developed prototype is economical, fast and pain-free, and can therefore be used for mass screening of diabetes.
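The clinical-acceptability figure quoted above is normally read off Clarke's Error Grid zone by zone. As an illustration only (not the authors' code), a minimal sketch of the zone-A ("clinically accurate") rule — prediction within 20% of the reference, or both values in the hypoglycemic range below 70 mg/dL — might look like:

```python
def in_clarke_zone_a(reference, predicted):
    """Zone A of Clarke's Error Grid: clinically accurate predictions.

    A prediction falls in zone A when both values are below 70 mg/dL
    (hypoglycemic range) or when it deviates from the reference by
    at most 20%.
    """
    if reference < 70 and predicted < 70:
        return True
    return abs(predicted - reference) <= 0.2 * reference

# Fraction of clinically accurate predictions over an invented toy data set
pairs = [(100, 110), (100, 150), (60, 65), (200, 250)]
accuracy = sum(in_clarke_zone_a(r, p) for r, p in pairs) / len(pairs)
```

Zones B through E carry further case distinctions; in practice zones A and B together are usually counted as acceptable.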

Keywords: Clarke’s error grid, electrical impedance of skin, linear SVM, nonlinear regression, non-invasive blood glucose monitoring, screening device for diabetes

Procedia PDF Downloads 321
1292 Georgiana G. King’s The Way of Saint James. A Pioneer Cultural Guide of a Pilgrimage Route

Authors: Paula Pita-Galán

Abstract:

In 1920 Georgiana Goddard King, an art historian and professor at Bryn Mawr College (PA, USA), published The Way of Saint James (New York: G.P. Putnam’s Sons), one of the earliest modern guides to this pilgrimage route. In its three volumes, the author described the towns and villages crossed by the Camino, discussing the history, traditions, monuments, and people she had met during her own pilgrimage together with the photographer Edith H. Lowber. The two women walked the route from Toulouse to Santiago in several journeys between 1911 and 1914, travelling with funds from the Hispanic Society of New York. The cultural interest that motivated the journey explains how King intertwines history, anthropology, geography, art history, and religion in her narration; as a result, the book targeted intellectuals, curious travellers, and tourists rather than pilgrims, at a moment when the pilgrimage to Santiago had almost disappeared as a practice. The Way of Saint James is barely known nowadays, so the aim of this research is to disseminate it, focusing on the modernity of its approach and pointing to its link with Georgiana King’s understanding of art as a product of the culture and civilization that produces it. In this paper, we analyze The Way of Saint James in its historiographical context, as it was written during the rise of interest in Spain and its culture in the United States of America, paying special attention to the author's relationship with the Hispanic Society and Sir Archer Milton Huntington. On the other hand, we look into Georgiana Goddard King’s work as a scholar by analyzing her writings and the personal papers (letters, notes, and manuscripts) that she left at Bryn Mawr College, where I have been researching with a Fulbright grant.
As a result, we come to understand the pioneering approach of this unique guide to the Way of Saint James as a reflection of Georgiana King’s own modernity as a scholar. King's wide cultural interests produced a guide that offers transversal knowledge of the Way of Saint James, together with King’s impressions and experiences, much like current guides but far from the ‘objective’, formalist methodology followed by her colleagues. This kind of modernity was poorly understood in her time and contributed to the oblivion of both the book and its author.

Keywords: georgiana goddard king, the way of saint james, pilgrimage, cultural heritage, guide

Procedia PDF Downloads 124
1291 Language Transfer in Graduate Candidates’ Essays

Authors: Erika Martínez Lugo

Abstract:

Candidates for some graduate programs are asked to write essays in English to prove both their competence in essay writing and their command of English. In the present study, language transfer (LT) in 15 written essays is identified, documented, analyzed, and classified. The essays were written in 2019, and the graduate program is a Master's in Modern Languages in a north-western Mexican city on the border with the USA. This study is of interest since it is important to determine whether some errors have been fossilized and become mistakes, or whether they are part of the candidates’ interlanguage. The results show that most language transfer is negative and syntactic, with the influence of the candidates' L1 (Spanish) evident in their use of the L2 (English).

Keywords: language transfer, cross-linguistic influence, interlanguage, error vs mistake

Procedia PDF Downloads 174
1290 The Finite Element Method for Nonlinear Fredholm Integral Equation of the Second Kind

Authors: Melusi Khumalo, Anastacia Dlamini

Abstract:

In this paper, we consider the numerical solution of nonlinear Fredholm integral equations of the second kind. We work with a uniform mesh and use Lagrange polynomials together with the Galerkin finite element method, where the weight function is chosen so that it takes the form of the approximate solution but with arbitrary coefficients. We apply the finite element method to nonlinear Fredholm integral equations of the second kind and consider the error analysis of the method. Finally, we look at a specific example to illustrate the implementation of the finite element method.
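As a hedged illustration of solving such an equation numerically — by simple Picard (successive-approximation) iteration with trapezoidal quadrature on a uniform mesh, rather than the authors' Galerkin finite element method — consider the toy equation u(x) = x + 0.1 ∫₀¹ x·t·u(t)² dt, whose kernel and forcing term are invented here:

```python
# Picard solver for a nonlinear Fredholm equation of the second kind
# on a uniform mesh:  u(x) = x + 0.1 * ∫_0^1 x*t*u(t)^2 dt.

def trapezoid(values, h):
    """Composite trapezoidal rule on a uniform mesh."""
    return h * (sum(values) - 0.5 * (values[0] + values[-1]))

n = 101
h = 1.0 / (n - 1)
xs = [i * h for i in range(n)]
u = list(xs)                      # initial guess u0(x) = x

for _ in range(50):
    integral = trapezoid([t * v * v for t, v in zip(xs, u)], h)
    u = [x + 0.1 * x * integral for x in xs]

# The exact solution is u(x) = a*x with a = 1 + 0.025*a^2, i.e. a ≈ 1.0263
```

The small factor 0.1 makes the iteration a contraction, so successive approximation converges; the Galerkin route of the paper instead projects onto a polynomial basis.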

Keywords: finite element method, Galerkin approach, Fredholm integral equations, nonlinear integral equations

Procedia PDF Downloads 371
1289 Part of Speech Tagging Using Statistical Approach for Nepali Text

Authors: Archit Yajnik

Abstract:

Part of speech tagging has always been a challenging task in natural language processing. This article presents POS tagging for Nepali text using a Hidden Markov Model (HMM) and the Viterbi algorithm. From the annotated Nepali corpus, training and testing data sets are randomly separated, and both methods are applied to them. The Viterbi algorithm is found to be computationally faster and more accurate than the plain HMM approach, achieving an accuracy of 95.43%. An error analysis of where the mismatches took place is discussed in detail.
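The Viterbi decoding step can be sketched in a few lines. The two-tag model, probabilities and sentence below are invented for illustration and are unrelated to the Nepali corpus:

```python
# Minimal Viterbi decoder for a toy two-tag HMM.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable state sequence for `obs`."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ('N', 'V')                      # noun / verb tags
start = {'N': 0.6, 'V': 0.4}
trans = {'N': {'N': 0.3, 'V': 0.7}, 'V': {'N': 0.6, 'V': 0.4}}
emit = {'N': {'fish': 0.5, 'sleep': 0.5}, 'V': {'fish': 0.4, 'sleep': 0.6}}
tags = viterbi(['fish', 'sleep'], states, start, trans, emit)
```

A production tagger would work in log space to avoid underflow on long sentences; the structure is otherwise the same.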

Keywords: hidden markov model, natural language processing, POS tagging, viterbi algorithm

Procedia PDF Downloads 323
1288 Densities and Viscosities of Binary Mixture Containing Diethylamine and 2-Alkanol

Authors: Elham Jassemi Zargani, Mohammad Almasi

Abstract:

Densities and viscosities of binary mixtures of diethylamine + 2-alkanol (2-propanol up to 2-pentanol) were measured over the entire composition range and the temperature interval 293.15–323.15 K. Excess molar volumes V_m^E and viscosity deviations Δη were calculated and correlated with a Redlich–Kister-type function to derive the coefficients and estimate the standard error. For mixtures of diethylamine with the 2-alkanols studied, V_m^E and Δη are negative over the entire mole-fraction range. The observed variations of these parameters with alkanol chain length and temperature are discussed in terms of the intermolecular interactions between the unlike molecules of the binary mixtures.
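Correlating V_m^E with a Redlich–Kister expansion is a linear least-squares problem. A minimal sketch with two coefficients recovered via hand-rolled normal equations, on synthetic noise-free data (the coefficient values are invented, not from the paper):

```python
# Least-squares fit of a two-term Redlich-Kister expansion
#     V_E(x) = x(1-x) * (A0 + A1*(2x-1))
# to synthetic excess-molar-volume data.

def rk_basis(x, k):
    return x * (1 - x) * (2 * x - 1) ** k

xs = [0.1 * i for i in range(1, 10)]           # mole fractions
A0_true, A1_true = -1.2, 0.4                   # illustrative coefficients
ve = [rk_basis(x, 0) * A0_true + rk_basis(x, 1) * A1_true for x in xs]

# Normal equations of the 2x2 least-squares problem, solved by Cramer's rule
s00 = sum(rk_basis(x, 0) ** 2 for x in xs)
s01 = sum(rk_basis(x, 0) * rk_basis(x, 1) for x in xs)
s11 = sum(rk_basis(x, 1) ** 2 for x in xs)
b0 = sum(rk_basis(x, 0) * v for x, v in zip(xs, ve))
b1 = sum(rk_basis(x, 1) * v for x, v in zip(xs, ve))
det = s00 * s11 - s01 * s01
A0 = (b0 * s11 - b1 * s01) / det
A1 = (s00 * b1 - s01 * b0) / det
```

Real correlations typically use three or more terms and report the standard error of the fit, which generalizes this 2x2 system to a larger normal-equation solve.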

Keywords: densities, viscosities, diethylamine, 2-alkanol, Redlich-Kister

Procedia PDF Downloads 384
1287 Some Generalized Multivariate Estimators for Population Mean under Multi Phase Stratified Systematic Sampling

Authors: Muqaddas Javed, Muhammad Hanif

Abstract:

Generalized multivariate ratio and regression type estimators for the population mean are suggested under multi-phase stratified systematic sampling (MPSSS) using multi-auxiliary information. The estimators are developed for two different situations of availability of auxiliary information. Expressions for the bias and mean square error (MSE) are derived. Special cases of the suggested estimators are also discussed, and a simulation study is conducted to observe the performance of the estimators.

Keywords: generalized estimators, multi-phase sampling, stratified random sampling, systematic sampling

Procedia PDF Downloads 722
1286 A New Approach to Predicting Physical Biometrics from Behavioural Biometrics

Authors: Raid R. O. Al-Nima, S. S. Dlay, W. L. Woo

Abstract:

A relationship between face and signature biometrics is established in this paper. A new approach is developed to predict faces from signatures using artificial intelligence. A multilayer perceptron (MLP) neural network is used to generate face details from features extracted from signatures; here, the face is the physical biometric and the signature is the behavioural biometric. The new method establishes a relationship between the two biometrics and regenerates a visible face image from the signature features. Furthermore, the performance of our new technique is demonstrated in terms of minimum error rates compared to published work.

Keywords: behavioural biometric, face biometric, neural network, physical biometric, signature biometric

Procedia PDF Downloads 470
1285 The Conservation of the Roman Mosaics in the Museum of Sousse, Tunisia: Between Doctrines and Practices

Authors: Zeineb Yousse, Fakher Kharrat

Abstract:

Mosaic is part of a broad universal cultural heritage; sometimes it represents an essential source for research on the everyday life of previous civilizations. Tunisia has one of the finest and largest collections of mosaics in the world, exhibited essentially in the Museums of Bardo and Sousse. Restored and reconstituted, these mosaics bear witness to hard work. Our paper deals with the discipline of conservation of Roman mosaics, based on the proceedings of the workshop of the Museum of Sousse. We highlight two main objectives. In the first place, we reveal the techniques adopted by professionals to handle mosaics and the school of conservation to which these techniques belong. In the second place, we interpret the works initiated to preserve this archaeological heritage in order to protect it in the present and transmit it to future generations. To this end, we examined four Roman mosaics currently exhibited in the Museum of Sousse. These mosaics show different voids or gaps at the level of their surfaces, and the methods used to fill these gaps are interesting to analyze. The mosaics are known under the names Orpheus Charming the Animals, Gladiator and Bears, Stud Farm of Sorothus and, finally, Head of Medusa. The study of their conservation passes through two chained phases. We start with a short historical overview in order to gather information on each mosaic's original location, the date of its composition and the description of its image. Afterwards, the intervention process is analyzed through three complementary elements: a diagnosis of the existing state, a study of the treatment of the medium, and a study of the treatment of the tessellatum surface, which includes the pictorial composition of the mosaic. Furthermore, we have implemented an evaluation matrix with seven operating principles allowing the assessment of the appropriateness of each intervention.
These principles are the following: minimal intervention, reversibility, compatibility, visibility, durability, authenticity and enhancement. The various accumulated outcomes point out the techniques used to fill the gaps as well as the level of compliance with the principles of conservation. Accordingly, the conservation of mosaics in Tunisia is a practice that combines various techniques without really arguing for the choice of a particular theory.

Keywords: conservation, matrix, museum of Sousse, operating principles, Roman mosaics

Procedia PDF Downloads 325
1284 Application of Artificial Neural Network to Prediction of Future Academic Performance of Students

Authors: J. K. Alhassan, C. S. Actsu

Abstract:

This study is on the prediction of the future academic performance of undergraduate students with Artificial Neural Networks (ANN). With the growing decline in the quality of academic performance of undergraduate students, it has become essential to predict students’ future academic performance early, in their first and second years, and to take the necessary precautions using such prediction-based information. A feed-forward multilayer neural network model was used to train and develop the network, and a test was carried out with some of the input variables. The test yielded an accuracy of 80%, with an average error of 0.009781.

Keywords: academic performance, artificial neural network, prediction, students

Procedia PDF Downloads 458
1283 Denoising of Magnetotelluric Signals by Filtering

Authors: Rodrigo Montufar-Chaveznava, Fernando Brambila-Paz, Ivette Caldelas

Abstract:

In this paper, we present advances in the denoising of magnetotelluric signals using several filters. In particular, we use the most common spatial-domain filters, such as the median and mean filters, but we also use the Fourier and wavelet transforms for frequency-domain filtering. We employ three datasets obtained at different sampling rates (128, 4096 and 8192 bps) and evaluate the mean square error, signal-to-noise ratio, and peak signal-to-noise ratio to compare the kernels and determine the most suitable one for each case. The magnetotelluric signals come from earth exploration surveys in which water is sought. The objective is to find a denoising strategy different from the one included in the commercial equipment employed in this task.
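A spatial-domain median filter of the kind compared above can be sketched in a few lines; the window width and the signal, with one impulsive outlier, are illustrative:

```python
# Sliding-window median filter, one of the spatial-domain kernels
# mentioned in the abstract.
from statistics import median

def median_filter(signal, width=3):
    """Replace each sample by the median of its `width`-sample window
    (edges are handled by clamping the window to the signal)."""
    half = width // 2
    return [median(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

noisy = [1.0, 1.1, 9.0, 1.2, 1.0, 1.1]   # one impulsive spike at index 2
clean = median_filter(noisy)
```

The median kernel suppresses isolated spikes that a mean kernel would only smear, which is why the two are usually compared side by side.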

Keywords: denoising, filtering, magnetotelluric signals, wavelet transform

Procedia PDF Downloads 366
1282 A Compressor Map Optimizing Tool for Prediction of Compressor Off-Design Performance

Authors: Zhongzhi Hu, Jie Shen, Jiqiang Wang

Abstract:

A high-precision aeroengine model is needed when developing the engine control system. Compared with other main components, the axial compressor is the most challenging component to simulate. In this paper, a compressor map optimizing tool based on the introduction of a modifiable β function is developed for FWorks (FADEC Works). Three parameters (d density, f fitting coefficient, k₀ slope of the line β=0) are introduced to make the β function modifiable. The traditional β function and the modifiable β function are compared for a certain type of compressor. The interpolation errors show that both methods meet the modeling requirements, while the modifiable β function can predict compressor performance more accurately in the areas of the compressor map that users are interested in.

Keywords: beta function, compressor map, interpolation error, map optimization tool

Procedia PDF Downloads 261
1281 Constraints on IRS Control: An Alternative Approach to Tax Gap Analysis

Authors: J. T. Manhire

Abstract:

A tax authority wants to take actions it knows will foster the greatest degree of voluntary taxpayer compliance to reduce the “tax gap.” This paper suggests that even if a tax authority could attain a state of complete knowledge, there are constraints on whether and to what extent such actions would reduce the macro-level tax gap. These limits are not merely a consequence of finite agency resources; they are inherent in the system itself. To show that this is one possible interpretation of the tax gap data, the paper formulates known results in a different way by analyzing tax compliance as a population with a single covariate. This leads to a standard use of the logistic map to analyze the dynamics of non-compliance growth or decay over a sequence of periods. This formulation gives the same results as the tax gap studies performed over the past fifty years in the U.S., within the published margins of error. Limitations and recommendations for future work are discussed, along with some implications for tax policy.
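The logistic map driving the analysis is one line of arithmetic; the parameter values below are illustrative, not estimates from tax-gap data:

```python
# Logistic-map iteration x_{n+1} = r * x_n * (1 - x_n), the tool the
# paper uses to model growth or decay of non-compliance over periods.

def logistic_trajectory(r, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

decay = logistic_trajectory(0.8, 0.3, 100)    # r < 1: decays to zero
settle = logistic_trajectory(2.0, 0.3, 100)   # 1 < r < 3: settles at 1 - 1/r
```

For r below 1 the non-compliance fraction dies out; between 1 and 3 it settles at the fixed point 1 − 1/r; beyond that the map bifurcates and eventually becomes chaotic, which is the source of the inherent limits the paper argues for.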

Keywords: income tax, logistic map, tax compliance, tax law

Procedia PDF Downloads 115
1280 A Case-Control Study on Dietary Heme/Nonheme Iron and Colorectal Cancer Risk

Authors: Alvaro L. Ronco

Abstract:

Background and purpose: Although our country is a developing one, it has a typical Western meat-rich dietary style. Based on estimates of the heme and nonheme iron contents of representative foods, we carried out the present epidemiologic study with the aim of accurately analyzing dietary iron and its role in CRC risk. Subjects/methods: Patients (611 CRC incident cases and 2394 controls, all belonging to public hospitals of our capital city) were interviewed through a questionnaire covering socio-demographic, reproductive and lifestyle variables, and a food frequency questionnaire of 64 items, which asked about food intake 5 years before the interview. The sample included 1937 men and 1068 women. Controls were matched to cases by sex and age (± 5 years). Food-derived nutrients were calculated from available databases. Total dietary iron was calculated and classified by heme or nonheme source, following data from specific Dutch and Canadian studies, and additionally adjusted by energy. Odds ratios (OR) and 95% confidence intervals were calculated through unconditional logistic regression, adjusting for relevant potential confounders (education, body mass index, family history of cancer, energy, infusions, and others). A heme/nonheme (H/NH) ratio was created, and the variables of interest were categorized into tertiles for analysis purposes. Results: The following risk estimates correspond to the highest tertiles. Total iron intake showed no association with CRC risk among men (OR=0.83, ptrend=.18) or among women (OR=1.48, ptrend=.09). Heme iron was positively associated among men (OR=1.88, ptrend<.001) and in the overall sample (OR=1.44, ptrend=.002); however, it was not associated among women (OR=0.91, ptrend=.83). Nonheme iron showed an inverse association among men (OR=0.53, ptrend<.001) and in the overall sample (OR=0.78, ptrend=.04), but was not associated among women (OR=1.46, ptrend=.14).
Regarding the H/NH ratio, risk increased only among men (OR=2.12, ptrend<.001) but lacked association among women (OR=0.81, ptrend=.29). Conclusions: We observed different types of associations between CRC risk and high dietary heme iron, nonheme iron and the H/NH ratio. Therefore, the source of the available iron might be important as a link to colorectal carcinogenesis, perhaps pointing to a need to reconsider the animal/plant proportions of this vital mineral within the diet. Nevertheless, the different associations observed for each sex demand further studies in order to clarify these points.
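The odds ratios reported above come from adjusted logistic regression; as a simpler hedged illustration, a crude (unadjusted) odds ratio with a Wald 95% confidence interval can be computed directly from a 2×2 exposure table. The counts below are invented:

```python
# Crude odds ratio with a Wald confidence interval from a 2x2 table.
from math import log, exp, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: exposed cases/controls, c/d: unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)     # SE of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(60, 40, 30, 45)
```

The regression-based ORs in the study additionally adjust for the listed confounders, which a crude 2×2 calculation cannot do.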

Keywords: chelation, colorectal cancer, heme, iron, nonheme

Procedia PDF Downloads 164
1279 Evolution of Multimodulus Algorithm Blind Equalization Based on Recursive Least Square Algorithm

Authors: Sardar Ameer Akram Khan, Shahzad Amin Sheikh

Abstract:

Blind equalization is an important technique within the equalization family. Multimodulus blind equalization algorithms remove the undesirable effects of ISI and take care of phase issues, saving the cost of a rotator at the receiver end. In this paper, a new algorithm combining recursive least squares with the multimodulus algorithm, named RLSMMA, is proposed; under a few assumptions, fast convergence and minimum mean square error (MSE) are achieved. The excellence of this technique is shown in simulations presenting MSE plots and the resulting filter outputs.

Keywords: blind equalizations, constant modulus algorithm, multi-modulus algorithm, recursive least square algorithm, quadrature amplitude modulation (QAM)

Procedia PDF Downloads 637
1278 An Overview of Adaptive Channel Equalization Techniques and Algorithms

Authors: Navdeep Singh Randhawa

Abstract:

Wireless communication has proved to be among the best options for any communication. However, a wireless channel imposes some undesirable effects on the information transmitted through it, such as attenuation, distortion, delays and phase shifts of the signals arriving at the receiver end, caused by its band-limited and dispersive nature. One such threat is ISI (Inter-Symbol Interference), which is a major obstacle in high-speed communication. Thus, accurate techniques are needed to remove this effect and achieve error-free communication. Various equalization techniques have therefore been proposed in the literature. This paper presents these equalization techniques, followed by the concept of the adaptive filter equalizer, its algorithms (LMS and RLS) and applications of adaptive equalization.
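Of the two algorithms named, LMS is the simpler to sketch. The following toy uses the LMS update to identify a short FIR channel (the same update drives an equalizer when the adaptive filter is placed after the channel and trained against known symbols); the channel taps, step size and random training signal are illustrative:

```python
# LMS adaptive filter identifying a short FIR channel.
import random

random.seed(0)
channel = [1.0, 0.5, -0.2]                  # unknown FIR channel taps
n_taps, mu = 3, 0.05                        # filter length, step size
w = [0.0] * n_taps                          # adaptive filter weights
x_buf = [0.0] * n_taps                      # most recent inputs

for _ in range(5000):
    x = random.choice([-1.0, 1.0])          # training symbols
    x_buf = [x] + x_buf[:-1]
    d = sum(h * xi for h, xi in zip(channel, x_buf))   # desired signal
    y = sum(wi * xi for wi, xi in zip(w, x_buf))       # filter output
    e = d - y                                          # estimation error
    w = [wi + mu * e * xi for wi, xi in zip(w, x_buf)] # LMS update
```

RLS replaces the scalar step size with a recursively updated inverse correlation matrix, converging faster at a higher per-sample cost, which is the trade-off the paper surveys.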

Keywords: channel equalization, adaptive equalizer, least mean square, recursive least square

Procedia PDF Downloads 442
1277 Large Time Asymptotic Behavior to Solutions of a Forced Burgers Equation

Authors: Satyanarayana Engu, Ahmed Mohd, V. Murugan

Abstract:

We study the large-time asymptotics of solutions to the Cauchy problem for a forced Burgers equation (FBE) with initial data that is continuous and summable on R. To this end, we first derive explicit solutions of the FBE, assuming a particular class of initial data, in terms of Hermite polynomials. Later, relaxing this assumption, we prove the existence of a solution to the considered Cauchy problem. Finally, we give an asymptotic approximate solution and establish that the error is of order O(t^(-1/2)) with respect to the L^p-norm, where 1≤p≤∞, for large time.
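For reference, the Cole–Hopf transformation named in the keywords linearizes the unforced viscous Burgers equation (the forced case treated in the paper requires further modifications):

```latex
u_t + u\,u_x = \nu\,u_{xx},
\qquad
u(x,t) = -2\nu\,\frac{\varphi_x(x,t)}{\varphi(x,t)}
\;\Longrightarrow\;
\varphi_t = \nu\,\varphi_{xx}
```

Solutions of the resulting heat equation, such as Hermite-polynomial expansions, can then be mapped back to solutions of Burgers' equation, which is the route behind the explicit solutions mentioned above.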

Keywords: Burgers equation, Cole-Hopf transformation, Hermite polynomials, large time asymptotics

Procedia PDF Downloads 327
1276 Microwave Imaging by Application of Information Theory Criteria in MUSIC Algorithm

Authors: Majid Pourahmadi

Abstract:

The performance of the time-reversal MUSIC algorithm degrades dramatically in the presence of strong noise and multiple scattering (i.e., when scatterers are close to each other), due to errors in determining the number of scatterers. The present paper provides a new approach to alleviate this problem using an information-theoretic criterion referred to as minimum description length (MDL). The merits of the novel approach are confirmed by numerical examples. The results indicate that time-reversal MUSIC yields accurate estimates of the target locations even with considerable noise and multiple scattering in the received signals.
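A Wax–Kailath-style MDL criterion for the number of scatterers can be sketched from the eigenvalues of the sample covariance matrix; the eigenvalues and snapshot count below are invented (two dominant eigenvalues above a flat noise floor):

```python
# Minimum-description-length (MDL) estimate of the model order from
# covariance eigenvalues, the kind of criterion the paper applies
# before MUSIC to count scatterers.
from math import log

def mdl_order(eigs, n_snapshots):
    """Return the k minimizing MDL(k); `eigs` sorted in descending order."""
    p = len(eigs)
    scores = []
    for k in range(p):
        tail = eigs[k:]
        geo = 1.0
        for lam in tail:                    # geometric mean of the tail
            geo *= lam ** (1.0 / len(tail))
        arith = sum(tail) / len(tail)       # arithmetic mean of the tail
        fit = -n_snapshots * len(tail) * log(geo / arith)
        penalty = 0.5 * k * (2 * p - k) * log(n_snapshots)
        scores.append(fit + penalty)
    return min(range(p), key=scores.__getitem__)

k_hat = mdl_order([10.0, 9.0, 1.0, 1.0, 1.0], n_snapshots=100)
```

The fit term vanishes when the trailing eigenvalues are equal (pure noise), and the penalty term discourages overestimating the order; the minimizer is taken as the scatterer count fed to MUSIC.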

Keywords: microwave imaging, time reversal, MUSIC algorithm, minimum description length (MDL)

Procedia PDF Downloads 326
1275 Development of a Model Based on Wavelets and Matrices for the Treatment of Weakly Singular Partial Integro-Differential Equations

Authors: Somveer Singh, Vineet Kumar Singh

Abstract:

We present a new model based on viscoelasticity for non-Newtonian fluids. We use a matrix-formulated algorithm to approximate solutions of a class of partial integro-differential equations with given initial and boundary conditions. Some numerical results are presented to simplify the application of the operational matrix formulation and reduce the computational cost. Convergence analysis, error estimation and the numerical stability of the method are also investigated. Finally, some test examples are given to demonstrate the accuracy and efficiency of the proposed method.

Keywords: Legendre Wavelets, operational matrices, partial integro-differential equation, viscoelasticity

Procedia PDF Downloads 330
1274 Trade Policy and Economic Growth of Turkey in Global Economy: New Empirical Evidence

Authors: Pınar Yardımcı

Abstract:

This paper tries to answer the questions of whether trade openness causes economic growth and whether the trade policy changes were good for Turkey, as a developing country in the global economy, before and after 1980. We employ Johansen cointegration and Granger causality tests with error-correction modelling based on a vector autoregression. Using WDI data for the pre-1980 and post-1980 periods, we find that trade openness and economic growth are cointegrated in the second period only. The results also suggest a lack of long-run causality between our two variables. These findings may imply that Turkey's trade policy should concentrate more on additional complementary economic reforms.

Keywords: globalization, trade policy, economic growth, openness, cointegration, Turkey

Procedia PDF Downloads 356
1273 Spectral Efficiency Improvement in 5G Systems by Polyphase Decomposition

Authors: Wilson Enríquez, Daniel Cardenas

Abstract:

This article proposes a filter bank format combined with the mathematical tool called polyphase decomposition and the discrete Fourier transform (DFT), with the purpose of improving the performance of fifth-generation (5G) communication systems. We started with a review of the literature and a study of filter bank theory and its combination with the DFT, which can improve the performance of wireless communications by reducing the computational complexity of these systems. With the proposed technique, several experiments were carried out to evaluate the structures in 5G systems. Finally, the results are presented graphically in terms of bit error rate against the bit-energy-to-noise-power-spectral-density ratio (BER vs. Eb/N0).
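The computational saving behind polyphase decomposition comes from a noble identity: filtering at the full rate and then downsampling equals summing two subfilters run at half rate. A sketch for decimation by 2 (the signal and filter taps are illustrative):

```python
# Polyphase decomposition of decimate-by-2 filtering: the even- and
# odd-indexed taps of h form two subfilters applied to the even- and
# odd-phase samples of x, reproducing filter-then-downsample exactly.

def convolve(x, h):
    """Full linear convolution of two sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

x = [1.0, 2.0, -1.0, 3.0, 0.5, -2.0, 1.5, 4.0]
h = [0.4, 0.3, 0.2, 0.1]

# Direct form: filter at full rate, then keep every second sample
direct = convolve(x, h)[::2]

# Polyphase form: split h and x into phases and filter at half rate
e0, e1 = h[::2], h[1::2]                 # taps h[2m] and h[2m+1]
x0 = x[::2]                              # samples x[2n]
x1 = [0.0] + x[1::2]                     # samples x[2n-1], with x[-1] = 0
poly = [a + b for a, b in zip(convolve(x0, e0) + [0.0], convolve(x1, e1))]
```

In a DFT filter bank, M such polyphase branches feed an M-point DFT, so only one low-rate filtering pass is needed per channel instead of M full-rate ones.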

Keywords: multi-carrier system (5G), filter bank, polyphase decomposition, FIR equalizer

Procedia PDF Downloads 195
1272 Particle Deflection in a PDMS Microchannel Caused by a Plane Travelling Surface Acoustic Wave

Authors: Florian Keipert, Hagen Schmitd

Abstract:

The size-selective separation of different species in a microfluidic system is a topical task in biological and medical research. Previous works dealt with the utilisation of the acoustic radiation force (ARF) caused by a plane travelling surface acoustic wave (tSAW). In the literature, the ARF is described by a dimensionless parameter κ, depending on the wavelength and the particle diameter. To our knowledge, research has been done for values 0.2 < κ < 5.8, showing that the ARF dominates the acoustic streaming force (ASF) for κ > 1.2. As a consequence, particle separation is limited by κ. In addition, the dependence on the electrical power level has been examined, but only for κ > 1, pointing out an increased particle deflection for higher electrical power levels. Nevertheless, a detailed study of the ASF and ARF, especially for κ < 1, is still missing. In our setup, we used a tSAW with a wavelength λ = 90 µm and 3 µm PS particles, corresponding to κ = 0.3. Herewith, the influence of the applied electrical power level on the particle deflection in a polydimethylsiloxane microchannel was investigated. Our results show an increased particle deflection for an increased electrical power level, which coincides with the reported results for κ > 1. Therefore, particle separation is, in contrast to the literature, also possible for lower κ values; the experimental setup can thus generally be simplified by tuning the electrical power level to the specific particle size. Furthermore, this raises the question of whether the particle deflection is caused only by the ARF, as assumed so far, or by the ASF or the sum of both forces. To investigate this, a 0%–24% saline solution was used, changing the mismatch between the compressibility of the PS particles and the working fluid. It is therefore possible to change the relative strength of ARF and ASF and consequently the particle deflection.
We observed a decrease in the particle deflection for increasing NaCl content up to a 12% saline solution and an increase thereafter. Our observation can be explained by the acoustic contrast factor Φ, which depends on the compressibility mismatch. The compressibility of water is increased by the NaCl, and the 0%–24% range of saline solutions covers the compressibility of the PS particles. Hence, the particle deflection reaches a minimum when the compressibility of the PS particles matches that of the saline solution. This minimum can be taken as the particle deflection caused by the ASF alone. Knowing the particle deflection due to the ASF, the deflection caused by the ARF can be calculated, and thus finally the relation between both forces. In conclusion, the particle deflection, and therefore the size-selective particle separation generated by a tSAW, can be achieved for values κ < 1, simplifying existing setups by adjusting the electrical power level to the specific particle size. In addition, we studied for the first time the relative strength of ARF and ASF to characterise the particle deflection in a microchannel.

Keywords: ARF, ASF, particle separation, saline solution, tSAW

Procedia PDF Downloads 253
1271 Markov-Chain-Based Optimal Filtering and Smoothing

Authors: Garry A. Einicke, Langford B. White

Abstract:

This paper describes an optimum filter and smoother for recovering a Markov-process message from noisy measurements. The developments follow from an equivalence between a state-space model and a hidden Markov chain. The ensuing filter and smoother employ transition probability matrices and approximate probability distribution vectors. The properties of the optimum solutions are retained; namely, the estimates are unbiased and minimize the variance of the output estimation error, provided that the assumed parameter set is correct. Methods for estimating unknown parameters from noisy measurements are discussed. Signal recovery examples are described in which performance benefits are demonstrated at an increased calculation cost.

Keywords: optimal filtering, smoothing, Markov chains

Procedia PDF Downloads 314
1270 A Generalization of the Secret Sharing Scheme Codes Over Certain Ring

Authors: Ibrahim Özbek, Erdoğan Mehmet Özkan

Abstract:

In this study, we generalize the (k,n) threshold secret sharing scheme of Özbek and Siap to codes over the ring Fq + αFq. In this way, we show that the method obtained in that article can also be used on codes over rings, and that new advantages can be obtained. The method of securely sharing a key in cryptography, which Shamir first systematized and Massey carried over to codes, thus becomes usable for all error-correcting codes. The security of this scheme rests on the hardness of the syndrome decoding problem. An open study area is also left for those working on other rings and code classes; all error-correcting codes fall within the working area of this method.
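The underlying Shamir construction can be sketched over a small prime field; the prime, secret and share counts are illustrative (a real deployment uses a much larger field), and the ring Fq + αFq generalization of the paper is not shown:

```python
# Shamir's (k, n) threshold scheme over the prime field GF(P): the
# secret is the constant term of a random degree-(k-1) polynomial,
# and any k shares reconstruct it by Lagrange interpolation at x = 0.
import random

P = 2087                      # small prime field modulus (illustrative)
random.seed(1)

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 modulo P."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(1234, k=3, n=5)
recovered = reconstruct(shares[:3])       # any 3 of the 5 shares suffice
```

The code-based variants referenced in the abstract replace the polynomial evaluation with codewords, tying the scheme's security to syndrome decoding.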

Keywords: secret sharing scheme, linear codes, algebra, finite rings

Procedia PDF Downloads 68
1269 PSRR Enhanced LDO Regulator Using Noise Sensing Circuit

Authors: Min-ju Kwon, Chae-won Kim, Jeong-yun Seo, Hee-guk Chae, Yong-seo Koo

Abstract:

In this paper, we present an LDO (low-dropout) regulator with enhanced PSRR, achieved by applying a constant current source generated through a BGR (band gap reference) to form a noise sensing circuit. The current source through the BGR maintains a constant current value even if the applied voltage varies. The noise sensing circuit, built around this BGR current source, operates between the error amplifier and the pass transistor gate of the LDO regulator. As a result, the LDO regulator has a PSRR of -68.2 dB at 1 kHz, -45.85 dB at 1 MHz, and -45 dB at 10 MHz. The other performance metrics of the proposed LDO were maintained at the same level as those of a conventional LDO regulator.

Keywords: LDO regulator, noise sensing circuit, current reference, pass transistor

Procedia PDF Downloads 280
1268 Sequential Data Assimilation with High-Frequency (HF) Radar Surface Current

Authors: Lei Ren, Michael Hartnett, Stephen Nash

Abstract:

Abundant surface current measurements from an HF radar system in a coastal area are assimilated into a model to improve its forecasting ability. A simple sequential data assimilation scheme, Direct Insertion (DI), is applied to update the model forecast states. The influence of Direct Insertion data assimilation over time is analyzed at one reference point. Vector maps of surface currents from the models are compared with HF radar measurements. The Root Mean Squared Error (RMSE) between modeling results and HF radar measurements is calculated over the last four days with no data assimilation.
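
Direct Insertion is the simplest sequential scheme: wherever a radar observation exists, the model state is overwritten with the measured value before the next forecast step. The sketch below illustrates that, together with the RMSE skill metric, on invented one-dimensional data (the real fields are 2D vector maps):

```python
import numpy as np

# Direct Insertion sketch: NaN marks radar coverage gaps, where the
# model value is kept unchanged.
def direct_insertion(model_field, radar_field):
    updated = model_field.copy()
    mask = ~np.isnan(radar_field)
    updated[mask] = radar_field[mask]
    return updated

def rmse(model, obs):
    """Root-Mean-Squared-Error, ignoring observation gaps."""
    d = model - obs
    return float(np.sqrt(np.nanmean(d * d)))

model = np.array([0.30, 0.25, 0.40, 0.10])    # modeled currents, m/s (assumed)
radar = np.array([0.28, np.nan, 0.35, 0.12])  # radar obs with one gap (assumed)
print(direct_insertion(model, radar))         # → [0.28 0.25 0.35 0.12]
print(rmse(model, radar))
```

In practice the inserted field is fed back as the initial condition of the next model forecast cycle, which is where the forecast improvement reported in the abstract comes from.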

Keywords: data assimilation, CODAR, HF radar, surface current, direct insertion

Procedia PDF Downloads 567
1267 Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique

Authors: Nishant Shrivastava, D. K. Sehgal

Abstract:

In the Finite Element Technique, nodal stresses are calculated from the displacements at the nodes. In this process, the displacements calculated at the nodes are sufficiently accurate, but the stresses calculated at the nodes are not. The accuracy of stress computation in displacement-based FEM models is therefore a matter of concern, particularly for the computational cost of shape optimization in engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, at which good accuracy in stress computation can be achieved. Generally, the major optimal stress points are located in the domain of the element, but some points have also been located on the boundary of the element where stresses are fairly accurate compared to nodal values. It is subsequently concluded that unique points exist within the element where stresses have higher accuracy than at other points. The main aim is therefore to evolve a generalized procedure for determining the optimal stress locations inside the element as well as on its boundaries, and to verify the same against results from numerical experimentation. The results for quadratic 9-noded serendipity elements are presented, and the locations of distinct optimal stress points are determined inside the element as well as on the boundaries. The theoretical results indicate optimal stress locations, in local coordinates, at the origin and at a distance of 0.577 from the origin in both directions. On the boundaries, the optimal stress locations are at the midpoints of the element edges and at points a distance of 0.577 from the origin in both directions. The above findings were verified and authenticated through numerical experimentation.
For numerical experimentation, five engineering problems were identified, and the numerical results of the 9-noded element were compared to those obtained using the same order of 25-noded quadratic Lagrangian elements, which are considered the standard. Root mean square errors were then plotted with respect to various locations within the elements as well as on the boundaries, and conclusions were drawn. After numerical verification, it was noted that in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for the stresses. It was also noted that stresses calculated within the boundary line segment enclosed by the 0.577 points are very good, with very small errors. When sampling points move away from these points, the error increases rapidly. Thus, it is established that there are unique points on the boundary of the element where stresses are accurate, which can be utilized in solving various engineering problems and are also useful in shape optimization.
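
The interior locations at ±0.577 in local coordinates coincide with the 2×2 Gauss-Legendre quadrature points, since 1/√3 ≈ 0.57735; this sketch simply generates those sampling points for a 2D quadrilateral element:

```python
import numpy as np

# 2-point Gauss-Legendre rule in 1D: the points are at +/- 1/sqrt(3),
# i.e. the +/- 0.577 locations quoted in the abstract.
pts_1d, _ = np.polynomial.legendre.leggauss(2)
print(pts_1d)   # the two 1D points, +/- 1/sqrt(3), approx +/- 0.57735

# Tensor product gives the 2x2 interior sampling points of the element
gauss_2d = [(xi, eta) for xi in pts_1d for eta in pts_1d]
for xi, eta in gauss_2d:
    print(f"({xi:+.5f}, {eta:+.5f})")
```

Sampling stresses at these points and extrapolating to the nodes is a common superconvergent-recovery practice, which is consistent with the higher stress accuracy the abstract reports there.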

Keywords: finite elements, Lagrangian, optimal stress location, serendipity

Procedia PDF Downloads 103
1266 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process

Authors: Johannes Gantner, Michael Held, Matthias Fischer

Abstract:

The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German energy transition. As the building sector accounts for 40% of Germany’s total energy demand, additional insulation is key to energy-efficient refurbished buildings. Nevertheless, beyond the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods of Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form a standardized basis for addressing these doubts, and are becoming increasingly important for material producers due to efforts such as the Product Environmental Footprint (PEF) and Environmental Product Declarations (EPDs). With the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially in support of decision and policy makers. LCA and LCC results are based on models which depend on technical parameters such as efficiencies, material and energy demand, and product output. Nevertheless, the influence of parameter uncertainties on lifecycle results is usually not considered, or is studied only superficially, although this effect cannot be neglected: in the example of an exterior wall, the overall lifecycle results vary by a factor of more than three. As a result, the simple best-case/worst-case analyses used in practice are not sufficient. Such analyses allow a first rough view of the results but do not take effects such as error propagation into account, so LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of LCA and LCC results and to provide better decision support.
Within this study, the environmental and economic impacts of an exterior wall system over its whole lifecycle are illustrated, and the effect of different uncertainty analyses on the interpretation, in terms of resilience and robustness, is shown. The approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods to allow a deeper understanding and interpretation. All in all, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods. Only in this way can misleading interpretations be avoided and the results be used for resilient and robust decisions.
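
The Monte Carlo approach can be sketched in a few lines: sample the uncertain input parameters from assumed distributions, evaluate the lifecycle model for each draw, and summarize the resulting distribution instead of a single best-case/worst-case pair. The model, distributions, and all numbers below are invented for illustration, not taken from the study:

```python
import random
import statistics

# Toy lifecycle-impact model: impact scales with insulation thickness
# and a material impact factor, both assumed uncertain.
random.seed(1)

def lifecycle_impact(thickness_m, impact_per_m3):
    wall_area_m2 = 100.0                  # assumed wall area
    return thickness_m * wall_area_m2 * impact_per_m3

samples = []
for _ in range(10_000):
    t = random.gauss(0.16, 0.02)          # thickness, mean 16 cm (assumed)
    f = random.lognormvariate(3.0, 0.3)   # impact factor (assumed lognormal)
    samples.append(lifecycle_impact(t, f))

samples.sort()
print(f"mean  = {statistics.mean(samples):.1f}")
print(f"5-95% = {samples[500]:.1f} .. {samples[9500]:.1f}")
```

Reporting the full percentile band, rather than two extreme scenarios, is precisely what lets a decision maker judge how robust a ranking of wall systems is against parameter uncertainty.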

Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation

Procedia PDF Downloads 282
1265 Establishing Forecasts Pointing Towards the Hungarian Energy Change Based on the Results of Local Municipal Renewable Energy Production and Energy Export

Authors: Balazs Kulcsar

Abstract:

Professional energy organizations perform analyses mainly at the global and national levels on the expected development of the share of renewables in electric power generation, heating and cooling, and the transport sector. Only a few publications, research institutions, non-profit organizations, and national initiatives focus on studies of individual towns and settlements, and issues concerning energy self-supply at the settlement level have not become widespread. The goal of our energy geography studies is to determine the share of local renewable energy sources in the settlement-based electricity supply across Hungary. The Hungarian energy supply system defines four categories based on the installed capacities of electric power generating units. From these categories, the theoretical annual electricity production of small-sized household power plants (SSHPP), with installed capacities under 50 kW, and of small power plants with capacities under 0.5 MW has been taken into consideration. In these power plant categories, the Hungarian Electricity Act has allowed the establishment of power plants primarily for the utilization of renewable energy sources since 2008. Though subject to certain restrictions, these small power plants utilizing renewable energies have the closest links to individual settlements and can be regarded as achievements of the host settlements in the energy transition. Based on 2017 data, we have ranked settlements by their level of self-sufficiency in electricity production from renewable energy sources. The results show that supplying all the electricity demanded by a settlement from local renewables, for example through the small power plant categories discussed in the study, is now within reach in small settlements and is not at all impossible even in small towns and cities.
In Hungary, 30 settlements produce more renewable electricity than their own annual electricity consumption. If these overproductive settlements exported their excess electricity to neighboring settlements, a full electricity supply from renewable sources could be realized in a further 29 settlements by local small power plants. These results provide a basis for governmental planning of the energy transition (legislative background, support systems, environmental education), as well as for framing developmental forecasts and scenarios up to 2030.
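
The ranking described above reduces to a self-sufficiency ratio per settlement: local renewable generation divided by annual consumption, with values above 1 marking the overproductive settlements that could export. The settlement names and figures below are invented for illustration:

```python
# Sketch of the settlement self-sufficiency ranking; data are invented.
settlements = {
    "A": {"renewable_mwh": 1500, "consumption_mwh": 1200},
    "B": {"renewable_mwh": 300,  "consumption_mwh": 900},
    "C": {"renewable_mwh": 850,  "consumption_mwh": 800},
}

ranking = sorted(
    ((name, d["renewable_mwh"] / d["consumption_mwh"])
     for name, d in settlements.items()),
    key=lambda item: item[1],
    reverse=True,
)
# ratio > 1 means the settlement produces more than it consumes
exporters = [name for name, ratio in ranking if ratio > 1.0]
for name, ratio in ranking:
    print(f"{name}: {ratio:.2f}")
print("overproductive:", exporters)
```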

Keywords: energy geography, Hungary, local small power plants, renewable energy sources, self-sufficiency settlements

Procedia PDF Downloads 143
1264 Roboweeder: A Robotic Weeds Killer Using Electromagnetic Waves

Authors: Yahoel Van Essen, Gordon Ho, Brett Russell, Hans-Georg Worms, Xiao Lin Long, Edward David Cooper, Avner Bachar

Abstract:

Weeds reduce farm and forest productivity, invade crops, smother pastures, and some can harm livestock. Farmers must spend significant amounts of money to control weeds by biological, chemical, cultural, and physical methods. To address the global agricultural labor shortage and eliminate poisonous chemicals, a fully autonomous, eco-friendly, and sustainable weeding technology was developed in the form of a weeding robot, ‘Roboweeder’. Roboweeder consists of a four-wheel-drive self-driving vehicle, a 4-DOF robotic arm mounted on top of the vehicle, an electromagnetic wave generator (magnetron) mounted on the “wrist” of the robotic arm, 48 V battery packs, and a control/communication system. Cameras are mounted on the front and both sides of the vehicle. Using image processing and recognition, distinct types of weeds are detected before being eliminated. Electromagnetic wave technology is applied to heat individual weeds and clusters dielectrically, causing them to wilt and die. The 4-DOF robotic arm was modeled mathematically based on its structure and mechanics, the load on each joint, the characteristics of the brushless DC motors and worm gears, and its forward and inverse kinematics. A Proportional-Integral-Derivative (PID) control algorithm is used to control the robotic arm’s motion so that the waveguide aperture points at the detected weeds. GPS and machine vision are used to traverse the farm and avoid obstacles without the need for supervision. A Roboweeder prototype has been built. Multiple test trials show that Roboweeder is able to detect, point at, and kill the predefined weeds successfully, although further improvements are needed, such as reducing the weed-killing time and developing a new waveguide with a smaller aperture to avoid killing surrounding crops.
This technology transforms tedious, time-consuming, and expensive weeding processes, and allows farmers to grow more, go organic, and eliminate operational headaches. A patent on this technology is pending.
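
The PID loop used to point the waveguide can be sketched as a discrete controller driving a single joint toward a target angle. The gains, time step, and toy first-order plant below are invented for illustration, not the tuned values of the actual arm:

```python
# Minimal discrete PID sketch for one joint of the 4-DOF arm;
# all gains and the plant model are assumed.
def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01):
    integral, prev_error = state
    integral += error * dt                  # I term accumulates error
    derivative = (error - prev_error) / dt  # D term damps fast changes
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

angle, target = 0.0, 1.0            # joint angle and setpoint, rad (assumed)
state = (0.0, target - angle)
for _ in range(500):
    error = target - angle
    u, state = pid_step(error, state)
    angle += 0.05 * u * 0.01        # assumed first-order joint response
print(round(angle, 3))              # angle has moved toward the 1.0 rad target
```

In the full system, one such loop per joint tracks the angles computed by inverse kinematics for each detected weed position.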

Keywords: autonomous navigation, machine vision, precision heating, sustainable and eco-friendly

Procedia PDF Downloads 240