Search results for: quantity estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2797


2557 Short Arc Technique for Baselines Determinations

Authors: Gamal F. Attia

Abstract:

Baselines are the distances, i.e. the lengths of the chords, between the projections of the laser station positions onto the reference ellipsoid. In satellite geodesy, it is very important to determine the optimal length of the orbital arc along which laser measurements are carried out. For dynamical methods, long arcs (one month or more) must be used. With long arcs, errors in modeling the different physical forces, such as the earth's gravitational field, air drag, solar radiation pressure, and others, may degrade the accuracy of the estimated satellite positions; at the same time, measurement errors can be almost completely excluded and high stability in the determination of the relative coordinate system can be achieved. The influence of modeling errors can be diminished by using short arcs of the satellite orbit (several revolutions or days), but the station coordinates estimated from different arcs can then differ from each other by more than statistical zero. Under the semidynamical 'short arc' method, one or several passes of the satellite within the zone of simultaneous visibility from both ends of the chord are used, and the estimated parameter in this case is the length of the chord. A comparison of the same baselines calculated with the long-arc and short-arc methods shows good agreement and even speaks in favor of the latter. In this paper, the short arc technique is explained and three baselines are determined using it.
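To make the notion of a baseline concrete, the sketch below converts two hypothetical station positions to Earth-centered Cartesian (ECEF) coordinates on the WGS84 ellipsoid and computes the chord length between them. The station coordinates are assumed values for illustration only; the short-arc estimation of the chord from laser ranging passes is not reproduced here.

```python
import numpy as np

# WGS84 reference ellipsoid constants
A = 6378137.0            # semi-major axis [m]
F = 1.0 / 298.257223563  # flattening
E2 = F * (2.0 - F)       # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
    """Convert geodetic coordinates to Earth-centered Cartesian (ECEF) coordinates [m]."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)   # prime vertical radius of curvature
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + h) * np.sin(lat)
    return np.array([x, y, z])

def chord_baseline(p1, p2):
    """Length of the chord (straight-line baseline) between two stations."""
    return np.linalg.norm(geodetic_to_ecef(*p1) - geodetic_to_ecef(*p2))

# Hypothetical laser station positions (lat, lon, height) -- illustrative only
station_a = (30.0, 31.0, 100.0)
station_b = (46.0, 14.0, 500.0)
print(f"baseline chord length: {chord_baseline(station_a, station_b):.1f} m")
```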

Keywords: baselines, short arc, dynamical, gravitational field

Procedia PDF Downloads 452
2556 Estimation of Time Loss and Costs of Traffic Congestion: The Contingent Valuation Method

Authors: Amira Mabrouk, Chokri Abdennadher

Abstract:

Reducing the road congestion inherent to vehicle use is an obvious priority for public authorities. Assessing an individual's willingness to pay to save trip time is therefore akin to estimating the price change that would result from a new transport policy aimed at increasing network fluidity and improving social welfare. This study takes an innovative perspective: it sets up an economic calculation whose objective is to estimate the monetized value of time for trips made in Sfax. The aims of this study are to i) estimate the monetized value of an hour dedicated to trips, ii) determine whether or not consumers consider the environmental variables to be significant, and iii) analyze the impact of public management of congestion through a city-toll tax on urban dwellers. The article is built upon a rich field survey conducted in the city of Sfax. Using the contingent valuation method, we analyze the 'declared time preferences' of 450 drivers during rush hours. Taking careful account of the biases attributed to the applied method, we highlight the sensitivity of this approach to the revelation mode and the questioning techniques, following the NOAA panel recommendations, with the exception of the valuation point, and other similar studies on the estimation of transport externalities.

Keywords: willingness to pay, contingent valuation, time value, city toll

Procedia PDF Downloads 404
2555 Estimation of Synchronous Machine Synchronizing and Damping Torque Coefficients

Authors: Khaled M. EL-Naggar

Abstract:

Synchronizing and damping torque coefficients of a synchronous machine can give a quite clear picture of machine behavior during transients. These coefficients are used as a measure of power system transient stability. In this paper, a crow search optimization algorithm is presented and implemented to study power system stability during transients. The algorithm makes use of the machine responses to perform the stability study in the time domain. The problem is formulated as a dynamic estimation problem, and an objective function that minimizes the squared error in the estimated coefficients is designed. The method is tested on a practical system with different study cases. Results are reported and a thorough discussion is presented. The study illustrates that the proposed method can estimate the stability coefficients for critical stable cases where other methods may fail. The tests proved that the proposed tool is accurate and reliable for estimating the machine coefficients for the assessment of power system stability.
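As a rough illustration of the estimation idea, the sketch below uses a basic crow search optimizer to fit synchronizing and damping coefficients (Ks, Kd) in the classical decomposition ΔTe = Ks·Δδ + Kd·Δω by minimizing the squared error against a simulated response. The synthetic responses, parameter bounds, flight length, and awareness probability are assumptions for demonstration, not the paper's test system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic machine responses (assumed): rotor angle and speed deviations over time
t = np.linspace(0.0, 2.0, 400)
d_delta = 0.1 * np.sin(8.0 * t) * np.exp(-0.5 * t)      # rotor angle deviation
d_omega = np.gradient(d_delta, t)                        # speed deviation
KS_TRUE, KD_TRUE = 1.2, 0.15
d_torque = KS_TRUE * d_delta + KD_TRUE * d_omega + 0.002 * rng.standard_normal(t.size)

def cost(x):
    """Squared error of the decomposition dTe = Ks*d_delta + Kd*d_omega."""
    ks, kd = x
    return np.sum((d_torque - ks * d_delta - kd * d_omega) ** 2)

# Basic crow search optimizer (flight length fl and awareness probability ap assumed)
n_crows, n_iter, fl, ap = 20, 200, 2.0, 0.1
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 1.0])
pos = rng.uniform(lo, hi, size=(n_crows, 2))
mem = pos.copy()                                  # each crow's best-known position
mem_cost = np.array([cost(p) for p in mem])

for _ in range(n_iter):
    for i in range(n_crows):
        j = rng.integers(n_crows)                 # crow i follows a random crow j
        if rng.random() >= ap:
            new = pos[i] + rng.random() * fl * (mem[j] - pos[i])
        else:                                     # crow j is "aware": random relocation
            new = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        pos[i] = new
        c = cost(new)
        if c < mem_cost[i]:                       # keep the better estimate in memory
            mem[i], mem_cost[i] = new, c

best = mem[np.argmin(mem_cost)]
print(f"estimated Ks = {best[0]:.3f}, Kd = {best[1]:.3f}")
```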

Keywords: optimization, estimation, synchronous, machine, crow search

Procedia PDF Downloads 120
2554 Research for Hollow Reinforced Concrete Bridge Piers in Korea

Authors: Ho Young Kim, Jae Hoon Lee, Do Kyu Hwang, Im Jong Kwahk, Tae Hoon Kim, Seung Hoon Lee

Abstract:

Hollow sections for bridge columns have some advantages. However, current seismic design codes do not provide design regulations for hollow bridge piers. Many experimental studies of hollow reinforced concrete piers have been carried out worldwide, but research on hollow sections for bridge piers in Korea began only around the 2000s. An experimental study of hollow piers with flexural controlled sections was conducted by Yeungnam University, Sungkyunkwan University, and Korea Expressway Corporation in 2009; it concluded that flexural controlled hollow sections behave similarly to solid sections. An experimental study of hollow piers with compression controlled sections was then conducted by Yeungnam University and the Korea Institute of Construction Technology in 2012; it concluded that compression controlled hollow sections exhibit compression fracture of the concrete at the inside wall face. Samsung C&T Engineering & Construction Group has also conducted a study with Yeungnam University on reducing the quantity of reinforcement in hollow piers; the reduced reinforcement detail is a triangular cross tie. This study concluded that the triangular reinforcement detail shows behavior similar to the existing reinforcement details.

Keywords: hollow pier, flexural controlled section, compression controlled section, reduced reinforcement details

Procedia PDF Downloads 397
2553 Empirical Model for the Estimation of Global Solar Radiation on Horizontal Surface in Algeria

Authors: Malika Fekih, Abdenour Bourabaa, Rafika Hariti, Mohamed Saighi

Abstract:

In Algeria, global solar radiation and its components are not available for all locations, which creates a need for models that estimate global solar radiation from the climatological parameters of each location. Empirical constants for these models have been estimated, and the results obtained have been tested statistically. The results show encouraging agreement between estimated and measured values.
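A common form of such an empirical model (not necessarily the exact one used by the authors) is the Angström-Prescott relation H/H0 = a + b·(S/S0). The sketch below fits the constants a and b by least squares to assumed monthly sunshine-fraction and clearness-index data, which stand in for measured values.

```python
import numpy as np

# Assumed monthly data (illustrative, not measurements from the paper):
# s_frac  = S/S0, relative sunshine duration
# k_clear = H/H0, measured global radiation over extraterrestrial radiation
s_frac = np.array([0.55, 0.60, 0.62, 0.68, 0.74, 0.80, 0.82, 0.79, 0.71, 0.64, 0.58, 0.52])
k_clear = np.array([0.48, 0.51, 0.52, 0.56, 0.60, 0.64, 0.66, 0.63, 0.58, 0.53, 0.49, 0.46])

# Angstrom-Prescott model: H/H0 = a + b * (S/S0); fit a, b by linear least squares
b, a = np.polyfit(s_frac, k_clear, 1)
pred = a + b * s_frac

rmse = np.sqrt(np.mean((k_clear - pred) ** 2))
print(f"a = {a:.3f}, b = {b:.3f}, RMSE = {rmse:.4f}")
```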

Keywords: global solar radiation, empirical model, semi arid areas, climatological parameters

Procedia PDF Downloads 479
2552 Multitasking Incentives and Employee Performance: Evidence from Call Center Field Experiments and Laboratory Experiments

Authors: Sung Ham, Chanho Song, Jiabin Wu

Abstract:

Employees are commonly incentivized on both quantity and quality performance and much of the extant literature focuses on demonstrating that multitasking incentives lead to tradeoffs. Alternatively, we consider potential solutions to the tradeoff problem from both a theoretical and an experimental perspective. Across two field experiments from a call center, we find that tradeoffs can be mitigated when incentives are jointly enhanced across tasks, where previous research has suggested that incentives be reduced instead of enhanced. In addition, we also propose and test, in a laboratory setting, the implications of revising the metric used to assess quality. Our results indicate that metrics can be adjusted to align quality and quantity more efficiently. Thus, this alignment has the potential to thwart the classic tradeoff problem. Finally, we validate our findings with an economic experiment that verifies that effort is largely consistent with our theoretical predictions.

Keywords: incentives, multitasking, field experiment, experimental economics

Procedia PDF Downloads 148
2551 Tenants Use Less Input on Rented Plots: Evidence from Northern Ethiopia

Authors: Desta Brhanu Gebrehiwot

Abstract:

The study investigates the impact of land tenure arrangements on fertilizer use per hectare in Northern Ethiopia, using household- and plot-level data. Land tenure contracts such as sharecropping and fixed-rent arrangements are endogenous, since different unobservable characteristics may affect renting-out decisions. Thus, the appropriate method of analysis is instrumental variable estimation. The family of instrumental variable estimators, namely two-stage least squares (2SLS), the generalized method of moments (GMM), limited information maximum likelihood (LIML), and instrumental variable Tobit (IV-Tobit), was therefore used. In addition, a method for handling a binary endogenous variable was applied, which uses a two-step estimation: in the first step, a probit model includes the instruments, and in the second step maximum likelihood estimation (MLE) is applied (the "etregress" command in Stata 14). Fertilizer use per hectare was lower on sharecropped and fixed-rented plots than on owner-operated plots. This result supports the Marshallian inefficiency principle for sharecropping. The difference in fertilizer use per hectare could be explained by a lack of incentivized, detailed contract forms: giving a larger share of the output to the tenant under a sharecropping contract would motivate the tenant to use more fertilizer on the rented plot to maximize production, whereas most sharecropping arrangements share output equally between tenants and landlords.
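The instrumental-variable logic can be sketched with a manual two-stage least squares on synthetic plot-level data. The instrument, variable names, and data-generating process below are assumed purely for illustration and do not reproduce the paper's estimation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Synthetic plot-level data (assumed): unobserved plot/household quality drives both
# the decision to rent out a plot and fertilizer use, creating endogeneity.
quality = rng.standard_normal(n)
z = rng.standard_normal(n)                          # instrument: affects tenure only
rented = ((0.8 * z - 0.5 * quality + rng.standard_normal(n)) > 0).astype(float)
fert = 100.0 - 15.0 * rented + 10.0 * quality + rng.standard_normal(n) * 5.0

def ols(y, X):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

const = np.ones(n)

# Naive OLS of fertilizer on the tenure dummy (biased by the omitted quality)
b_ols = ols(fert, np.column_stack([const, rented]))

# 2SLS: first stage projects the endogenous tenure dummy on the instrument,
# second stage regresses fertilizer use on the fitted tenure values.
rented_hat = np.column_stack([const, z]) @ ols(rented, np.column_stack([const, z]))
b_2sls = ols(fert, np.column_stack([const, rented_hat]))

print(f"OLS tenure effect:  {b_ols[1]:.2f}")
print(f"2SLS tenure effect: {b_2sls[1]:.2f}  (true effect is -15)")
```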

Keywords: tenure-contracts, endogeneity, plot-level data, Ethiopia, fertilizer

Procedia PDF Downloads 70
2550 Runoff Estimation in the Khiyav River Basin by Using the SCS-CN Model

Authors: F. Esfandyari Darabad, Z. Samadi

Abstract:

The volume of runoff caused by rainfall in a river basin has attracted researchers in the field of water resources management. In this study, hydrological data such as the rainfall and discharge of the Khiyav river basin of Meshkin city in northwest Iran were first collected and then analyzed and reconstructed. The Soil Conservation Service (SCS) has developed a method for calculating runoff which is based on the curve number (CN). This research implemented the model in the Khiyav river basin of Meshkin city using GIS techniques. A weighted model was used to calculate the curve numbers, which makes it possible to account for all the factors contributing to runoff generation, such as the geometric characteristics of the basin, the basin soil characteristics, vegetation, geology, climate, and human factors, so that an accurate estimation of runoff from precipitation can be achieved. The findings also exposed the flood-prone areas at the outlet of the Khiyav river basin, revealing that the basin has a high potential for flood generation.
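The SCS-CN runoff relation itself is compact. Below is a minimal sketch using the standard SCS-CN formulas in millimetres, with the conventional initial-abstraction ratio of 0.2 and an illustrative, assumed curve number rather than the basin's actual weighted CN.

```python
def scs_cn_runoff(p_mm: float, cn: float, lam: float = 0.2) -> float:
    """Direct runoff depth Q (mm) from rainfall P (mm) using the SCS-CN method.

    S  = potential maximum retention = 25400/CN - 254  (mm)
    Ia = initial abstraction = lam * S (lam = 0.2 is the conventional value)
    Q  = (P - Ia)^2 / (P - Ia + S)  for P > Ia, else 0
    """
    s = 25400.0 / cn - 254.0
    ia = lam * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Illustrative example with an assumed area-weighted curve number
print(scs_cn_runoff(p_mm=45.0, cn=78.0))   # runoff depth in mm
```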

Keywords: curve number, khiyav river basin, runoff estimation, SCS

Procedia PDF Downloads 604
2549 Comparative Study on Daily Discharge Estimation of Soolegan River

Authors: Redvan Ghasemlounia, Elham Ansari, Hikmet Kerem Cigizoglu

Abstract:

Hydrological modeling in arid and semi-arid regions is very important. Iran has many regions with such climate conditions, for example Chaharmahal and Bakhtiari province, which need attention and appropriate management. Forecasting of hydrological parameters and estimation of hydrological events of catchments provide important information that is widely used for the design, management, and operation of water resources such as river systems and dams. River discharge is one of these parameters. This study presents the application and comparison of several estimation methods, namely Feed-Forward Back Propagation Neural Network (FFBPNN), Multi Linear Regression (MLR), Gene Expression Programming (GEP), and Bayesian Network (BN), to predict the daily flow discharge of the Soolegan River, located in Chaharmahal and Bakhtiari province, Iran. The Soolegan station, located on the Soolegan River at latitude 31° 38′ and longitude 51° 14′ in the North Karoon basin, was considered. The Soolegan station is 2086 meters above sea level. The data used in this study are the daily discharge and daily precipitation of the Soolegan station. The FFBPNN, MLR, GEP, and BN models were developed using the same input parameters for estimating the daily discharge of the Soolegan River. The results of the estimation models were compared with observed discharge values to evaluate the performance of the developed models. The results of all methods were compared and are shown in tables and charts.
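As a minimal illustration of the FFBPNN variant (the other models are omitted), the sketch below trains a small feed-forward network to map recent precipitation to daily discharge. The synthetic data, lag structure, and network size are assumptions for demonstration and are not the Soolegan records or the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)

# Synthetic daily precipitation (mm) and a crude discharge response (m^3/s) -- assumed data
precip = rng.gamma(shape=0.6, scale=8.0, size=2000)
discharge = (0.5 * precip + 0.3 * np.roll(precip, 1) + 0.15 * np.roll(precip, 2)
             + 2.0 + rng.normal(0, 0.5, size=precip.size))

# Inputs: precipitation on the current day and the two previous days
X = np.column_stack([precip, np.roll(precip, 1), np.roll(precip, 2)])[2:]
y = discharge[2:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     solver="adam", max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"test RMSE: {rmse:.2f} m^3/s")
```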

Keywords: ANN, multi linear regression, Bayesian network, forecasting, discharge, gene expression programming

Procedia PDF Downloads 548
2548 Parameter Estimation in Dynamical Systems Based on Latent Variables

Authors: Arcady Ponosov

Abstract:

A novel mathematical approach is suggested which facilitates a compressed representation and efficient validation of parameter-rich ordinary differential equation models describing the dynamics of complex, especially biology-related, systems, and which is based on identification of the system's latent variables. In particular, an efficient parameter estimation method for the compressed non-linear dynamical systems is developed. The method is applied to the so-called 'power-law systems', non-linear differential equations typically used in Biochemical Systems Theory.
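Power-law (S-system) models in Biochemical Systems Theory have the generic form dx_i/dt = α_i ∏ x_j^{g_ij} − β_i ∏ x_j^{h_ij}. The sketch below simulates a small two-variable system with assumed rate constants and kinetic orders; this is the kind of parameter-rich model whose parameters such a method would estimate, not the paper's compression or estimation procedure itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed parameters of a two-variable S-system (illustrative values only)
alpha = np.array([2.0, 1.5])          # production rate constants
beta = np.array([1.0, 1.2])           # degradation rate constants
g = np.array([[0.0, -0.5],            # kinetic orders of the production terms
              [0.8,  0.0]])
h = np.array([[0.6,  0.0],            # kinetic orders of the degradation terms
              [0.0,  0.7]])

def s_system(t, x):
    """dx_i/dt = alpha_i * prod_j x_j^g_ij - beta_i * prod_j x_j^h_ij."""
    prod_g = np.prod(x ** g, axis=1)
    prod_h = np.prod(x ** h, axis=1)
    return alpha * prod_g - beta * prod_h

sol = solve_ivp(s_system, t_span=(0.0, 20.0), y0=[0.5, 0.5], dense_output=True)
print(sol.y[:, -1])   # concentrations approaching steady state
```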

Keywords: generalized law of mass action, metamodels, principal components, synergetic systems

Procedia PDF Downloads 338
2547 Foil Bearing Stiffness Estimation with Pseudospectral Scheme

Authors: Balaji Sankar, Sadanand Kulkarni

Abstract:

Compliant foil gas-lubricated bearings are used to support light loads, on the order of a few kilograms, at high speeds, on the order of 50,000 RPM. The stiffness of foil bearings depends both on the stiffness of the compliant foil and on the lubricating gas film. The stiffness of the bearing plays a crucial role in the stable operation of the supported rotor over a range of speeds. This paper describes a numerical approach to estimating the stiffness of the bearing using a pseudospectral scheme. A methodology to obtain the stiffness of the foil bearing as a function of the weight of the shaft is given, and the results are presented.
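The full pseudospectral solution of the gas film is beyond a short example, but the final stiffness estimate reduces to differentiating a load-deflection curve. The sketch below does this numerically on an assumed load-deflection table, which is not data from the paper.

```python
import numpy as np

# Assumed static load-deflection data for a foil bearing (illustrative values):
# shaft weight W [N] supported at journal deflection x [micrometres]
deflection_um = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
load_n = np.array([5.0, 11.0, 18.5, 27.5, 38.0, 50.0])

# Stiffness k = dW/dx, estimated by central finite differences (N/m)
stiffness = np.gradient(load_n, deflection_um * 1e-6)

for w, k in zip(load_n, stiffness):
    print(f"load {w:5.1f} N -> stiffness {k/1e6:6.2f} MN/m")
```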

Keywords: foil bearing, simulation, numerical, stiffness estimation

Procedia PDF Downloads 324
2546 Mobile Smart Application Proposal for Predicting Calories in Food

Authors: Marcos Valdez Alexander Junior, Igor Aguilar-Alonso

Abstract:

Poor nutrition is at the root of diseases that affect everyone, such as obesity and malnutrition. The objective of this research is to predict the calories of the food to be eaten by developing a smart mobile application that shows the user whether a meal is balanced. The present work is motivated by the large percentage of obesity and malnutrition in Peru. The intelligent application is proposed with a three-layer architecture, and for the prediction of the nutritional value of the food, the use of pre-trained models based on convolutional neural networks is proposed.

Keywords: volume estimation, calorie estimation, artificial vision, food nutrition

Procedia PDF Downloads 80
2545 Evaluation of Dual Polarization Rainfall Estimation Algorithm Applicability in Korea: A Case Study on Biseulsan Radar

Authors: Chulsang Yoo, Gildo Kim

Abstract:

Dual polarization radar provides comprehensive information about rainfall by measuring multiple parameters. In Korea, the JPOLE and CSU-HIDRO algorithms are generally used for rainfall estimation. This study evaluated the local applicability of the JPOLE and CSU-HIDRO algorithms in Korea by using rainfall data observed in August 2014 by the Biseulsan dual polarization radar and the KMA AWS network. A total of 11,372 pairs of radar-ground rain rate data were classified, according to the thresholds of the synthetic algorithms, into suitable and unsuitable data. Evaluation criteria were then derived by comparing radar rain rates and ground rain rates for the entire, suitable, and unsuitable data, respectively. The results are as follows: (1) The radar rain rate equations including KDP were found to perform better in rainfall estimation than the other equations for both the JPOLE and CSU-HIDRO algorithms, and the thresholds were found to be adequately applied for both algorithms for the relations including the specific differential phase. (2) The radar rain rate equations based on horizontal reflectivity and differential reflectivity performed poorly compared to the others, and the result did not improve even when only the suitable data were used. Acknowledgments: This work was supported by the Basic Science Research Program through the National Research Foundation of Korea, funded by the Ministry of Education (NRF-2013R1A1A2011012).
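The rain-rate relations in such algorithms are power laws of the dual-polarization variables. The sketch below evaluates an R(KDP) relation of the form R = a·KDP^b with illustrative coefficients (the exact JPOLE and CSU-HIDRO coefficients are not reproduced here) and compares it against assumed gauge rain rates.

```python
import numpy as np

def rain_rate_from_kdp(kdp_deg_km, a=44.0, b=0.822):
    """Power-law rain-rate estimator R = a * KDP^b (mm/h).

    The coefficients a and b are assumed illustrative values; operational
    algorithms such as JPOLE and CSU-HIDRO use their own fitted coefficients.
    """
    kdp = np.asarray(kdp_deg_km, dtype=float)
    return np.where(kdp > 0.0, a * np.abs(kdp) ** b, 0.0)

# Assumed radar/gauge pairs (KDP in deg/km, gauge rain rate in mm/h)
kdp_obs = np.array([0.2, 0.5, 1.1, 2.0, 3.2])
gauge_mm_h = np.array([11.0, 26.0, 47.0, 78.0, 115.0])

radar_mm_h = rain_rate_from_kdp(kdp_obs)
rmse = np.sqrt(np.mean((radar_mm_h - gauge_mm_h) ** 2))
print(f"radar estimates: {np.round(radar_mm_h, 1)}  RMSE vs gauge: {rmse:.1f} mm/h")
```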

Keywords: CSU-HIDRO algorithm, dual polarization radar, JPOLE algorithm, radar rainfall estimation algorithm

Procedia PDF Downloads 193
2544 Improved Distance Estimation in Dynamic Environments through Multi-Sensor Fusion with Extended Kalman Filter

Authors: Iffat Ara Ebu, Fahmida Islam, Mohammad Abdus Shahid Rafi, Mahfuzur Rahman, Umar Iqbal, John Ball

Abstract:

The application of multi-sensor fusion for enhanced distance estimation accuracy in dynamic environments is crucial for advanced driver assistance systems (ADAS) and autonomous vehicles. Limitations of single sensors such as cameras or radar in adverse conditions motivate the use of combined camera and radar data to improve reliability, adaptability, and object recognition. A multi-sensor fusion approach using an extended Kalman filter (EKF) is proposed to combine sensor measurements with a dynamic system model, achieving robust and accurate distance estimation. The research utilizes the Mississippi State University Autonomous Vehicular Simulator (MAVS) to create a controlled environment for data collection. Data analysis is performed using MATLAB. Qualitative (visualization of fused data vs ground truth) and quantitative metrics (RMSE, MAE) are employed for performance assessment. Initial results with simulated data demonstrate accurate distance estimation compared to individual sensors. The optimal sensor measurement noise variance and plant noise variance parameters within the EKF are identified, and the algorithm is validated with real-world data from a Chevrolet Blazer. In summary, this research demonstrates that multi-sensor fusion with an EKF significantly improves distance estimation accuracy in dynamic environments. This is supported by comprehensive evaluation metrics, with validation transitioning from simulated to real-world data, paving the way for safer and more reliable autonomous vehicle control.
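A minimal sketch of the fusion idea is given below: a constant-velocity state (range, range-rate) is propagated and corrected with noisy camera and radar range measurements. Because the measurement model here is linear, the EKF update reduces to the standard Kalman form; all noise variances and the simulated target motion are assumed values, not the parameters tuned in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity state transition
Q = np.diag([0.01, 0.1])                     # assumed plant noise covariance

# Measurement model: both camera and radar observe the range (first state)
H = np.array([[1.0, 0.0], [1.0, 0.0]])
R = np.diag([4.0, 0.25])                     # camera noisier than radar (assumed)

x = np.array([30.0, 0.0])                    # initial state estimate [m, m/s]
P = np.eye(2) * 10.0

true_range, true_vel = 30.0, -1.5
for k in range(50):
    true_range += true_vel * dt
    z = np.array([true_range + rng.normal(0, 2.0),     # camera range measurement
                  true_range + rng.normal(0, 0.5)])    # radar range measurement

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update (the Kalman gain weighs each sensor by its noise level)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"fused range {x[0]:.2f} m (truth {true_range:.2f} m), range rate {x[1]:.2f} m/s")
```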

Keywords: sensor fusion, EKF, MATLAB, MAVS, autonomous vehicle, ADAS

Procedia PDF Downloads 10
2543 Biomass Carbon Credit Estimation for Sustainable Urban Planning and Micro-climate Assessment

Authors: R. Niranchana, K. Meena Alias Jeyanthi

Abstract:

As a result of the present climate change dilemma, strategies for energy balancing to construct a sustainable environment have become a top concern for researchers worldwide. The environment itself has always provided a solution, from the earliest days of human evolution. Carbon capture begins with accurate estimation and monitoring of credit inventories and with efficient use of the captured carbon. Sustainable urban planning with deliverables of reusable energy models may benefit from assessment methods such as biomass carbon credit ranking. The term "biomass energy" refers to the various ways in which living organisms can potentially be converted into a source of energy. Approaches that can be applied to biomass and an algorithm for evaluating carbon credits are presented in this paper. A micro-climate evaluation using computational fluid dynamics was carried out over a 1 km x 1 km area at Dindigul, India (10°24'58.68" North, 77°54'1.80" East). Sustainable urban design must consider environmental and physiological convection, conduction, radiation, and evaporative heat exchange due to the prevailing solar access and wind intensities.
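The carbon-credit arithmetic for a biomass stock can be sketched with common default factors (a carbon fraction of dry biomass of about 0.47 and the molecular ratio CO2/C = 44/12). The biomass figure below is an assumed input for illustration; the paper's own ranking algorithm and multi-regression step are not reproduced.

```python
CARBON_FRACTION = 0.47      # default carbon fraction of dry biomass (IPCC-style value)
CO2_PER_C = 44.0 / 12.0     # molecular weight ratio of CO2 to carbon

def biomass_to_co2e(dry_biomass_t: float) -> float:
    """Convert dry biomass (tonnes) to CO2-equivalent sequestration (tonnes CO2e)."""
    carbon_t = dry_biomass_t * CARBON_FRACTION
    return carbon_t * CO2_PER_C

# Assumed above-ground dry biomass for a 1 km x 1 km study area (illustrative)
biomass_t = 1250.0
print(f"{biomass_to_co2e(biomass_t):.0f} t CO2e")   # roughly one credit per tonne CO2e
```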

Keywords: biomass, climate assessment, urban planning, multi-regression, carbon estimation algorithm

Procedia PDF Downloads 80
2542 Switched System Diagnosis Based on Intelligent State Filtering with Unknown Models

Authors: Nada Slimane, Foued Theljani, Faouzi Bouani

Abstract:

The paper addresses the problem of fault diagnosis for systems operating in several modes (normal or faulty) based on state assessment. We use, for this purpose, a methodology consisting of three main processes: 1) sequential data clustering, 2) linear model regression, and 3) state filtering. Typically, the Kalman Filter (KF) is an algorithm that provides estimation of unknown states using a sequence of I/O measurements. Although it is an efficient technique for state estimation, it inevitably presents two main weaknesses. First, it merely predicts states without being able to isolate or classify them according to their operating modes, whether normal or faulty. To deal with this dilemma, the KF is endowed with an extra clustering step based on a fully sequential version of the k-means algorithm. Second, to provide state estimation, the KF requires state space models, which can be unknown. A linear regularized regression is used to identify the required models. To prove its effectiveness, the proposed approach is assessed on a simulated benchmark.
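The clustering step can be sketched with a sequential (online) k-means update in which each incoming sample is assigned to the nearest centroid, which is then moved toward it. The data stream and the number of modes below are assumed for illustration and are not the paper's benchmark.

```python
import numpy as np

rng = np.random.default_rng(4)

def sequential_kmeans(stream, k):
    """Online k-means: assign each sample to the nearest centroid and move it."""
    centroids = np.array(stream[:k], dtype=float)   # initialise with the first k samples
    counts = np.ones(k)
    labels = list(range(k))
    for x in stream[k:]:
        i = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
        counts[i] += 1
        centroids[i] += (x - centroids[i]) / counts[i]   # running-mean update
        labels.append(i)
    return centroids, np.array(labels)

# Assumed stream of I/O feature vectors from two operating modes (normal / faulty)
normal = rng.normal([0.0, 0.0], 0.3, size=(100, 2))
faulty = rng.normal([2.0, 1.5], 0.3, size=(100, 2))
stream = np.vstack([normal, faulty])

centroids, labels = sequential_kmeans(stream, k=2)
print("mode centroids:\n", np.round(centroids, 2))
```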

Keywords: clustering, diagnosis, Kalman Filtering, k-means, regularized regression

Procedia PDF Downloads 165
2541 Comparison of Different DNA Extraction Platforms with FFPE tissue

Authors: Wang Yanping Karen, Mohd Rafeah Siti, Park MI Kyoung

Abstract:

Formalin-fixed paraffin-embedded (FFPE) tissue is important in the area of oncological diagnostics. This method of preserving tissues enables them to be stored easily at ambient temperature for a long time, decreasing the risk of losing DNA quantity and quality after extraction, reducing sample wastage, and making FFPE more cost-effective. However, extracting DNA from FFPE tissue is a challenge, as the purified DNA is often highly cross-linked, fragmented, and degraded, which causes problems for many downstream processes. In this study, the DNA extraction efficiency of One BioMed's Xceler8 automated platform is compared with that of commercially available extraction kits (Qiagen and Roche). The FFPE tissue slices were subjected to deparaffinization, pretreatment, and then DNA extraction using the three platforms mentioned. The DNA quantity was determined with real-time PCR (Bio-Rad CFX) and gel electrophoresis. The amount of DNA extracted with the One BioMed X8 platform was found to be comparable with that from the two manual extraction kits.

Keywords: DNA extraction, FFPE tissue, Qiagen, Roche, One BioMed X8

Procedia PDF Downloads 90
2540 The Generalized Pareto Distribution as a Model for Sequential Order Statistics

Authors: Mahdy Esmailian, Mahdi Doostparast, Ahmad Parsian

Abstract:

In this article, sequential order statistics (SOS) samples under Type II censoring coming from the generalized Pareto distribution are considered. Maximum likelihood (ML) estimators of the unknown parameters are derived on the basis of the available multiple SOS data. Necessary conditions for the existence and uniqueness of the derived ML estimates are given. Due to the complexity of the proposed likelihood function, a useful re-parametrization is suggested. For illustrative purposes, a Monte Carlo simulation study is conducted and an illustrative example is analysed.
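For ordinary (non-SOS) samples, the ML fit of the generalized Pareto distribution is available directly in SciPy; the sketch below fits simulated data as a baseline. The paper's contribution concerns the ML equations for multiple sequential order statistics samples, which are not reproduced here, and the simulated parameters are assumed values.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)

# Simulate an ordinary sample from a generalized Pareto distribution
shape_true, scale_true = 0.3, 2.0
sample = genpareto.rvs(c=shape_true, scale=scale_true, size=500, random_state=rng)

# Maximum likelihood fit (location fixed at 0 for a two-parameter GPD)
shape_hat, loc_hat, scale_hat = genpareto.fit(sample, floc=0.0)
print(f"true (shape, scale) = ({shape_true}, {scale_true}); "
      f"ML estimates = ({shape_hat:.3f}, {scale_hat:.3f})")
```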

Keywords: Bayesian estimation, generalized Pareto distribution, maximum likelihood estimation, sequential order statistics

Procedia PDF Downloads 489
2539 Big Data Applications for the Transport Sector

Authors: Antonella Falanga, Armando Cartenì

Abstract:

Today, an unprecedented amount of data coming from several sources, including mobile devices, sensors, tracking systems, and online platforms, characterizes our lives. The term "big data" refers not only to the quantity of data but also to the variety and speed of data generation. These data hold valuable insights that, when extracted and analyzed, facilitate informed decision-making. The 4Vs of big data - velocity, volume, variety, and value - highlight essential aspects, showcasing the rapid generation, vast quantities, diverse sources, and potential value addition of these kinds of data. This surge of information has revolutionized many sectors, such as business for improving decision-making processes, healthcare for clinical record analysis and medical research, education for enhancing teaching methodologies, agriculture for optimizing crop management, finance for risk assessment and fraud detection, media and entertainment for personalized content recommendations, emergency management for real-time response during crises and events, and mobility for urban planning and for the design and management of public and private transport services. Big data's pervasive impact enhances societal aspects, elevating the quality of life, service efficiency, and problem-solving capacities. However, during this transformative era, new challenges arise, including data quality, privacy, data security, cybersecurity, interoperability, the need for advanced infrastructures, and staff training. Within the transportation sector (the one investigated in this research), applications span the planning, design, and management of systems and mobility services. Among the most common big data applications within the transport sector are, for example, real-time traffic monitoring, bus/freight vehicle route optimization, vehicle maintenance, road safety, and autonomous and connected vehicle applications. Benefits include reductions in travel times, road accidents, and pollutant emissions. Within these issues, proper transport demand estimation is crucial for sustainable transportation planning. Evaluating the impact of sustainable mobility policies starts with a quantitative analysis of travel demand, and achieving transportation decarbonization goals hinges on precise estimations of demand for individual transport modes. Emerging technologies, offering substantial big data at lower costs than traditional methods, play a pivotal role in this context. Starting from these considerations, this study explores the usefulness of big data for transport demand estimation. This research focuses on leveraging (big) data collected during the COVID-19 pandemic to estimate the evolution of mobility demand in Italy. Estimation results reveal, in the post-COVID-19 era, more than 96 million national daily trips, about 2.6 trips per capita, and a mobile population of more than 37.6 million Italian travelers per day. Overall, this research allows us to conclude that big data enhances rational decision-making for mobility demand estimation, which is imperative for adeptly planning and allocating investments in transportation infrastructures and services.

Keywords: big data, cloud computing, decision-making, mobility demand, transportation

Procedia PDF Downloads 46
2538 Construction Unit Rate Factor Modelling Using Neural Networks

Authors: Balimu Mwiya, Mundia Muya, Chabota Kaliba, Peter Mukalula

Abstract:

Factors affecting construction unit cost vary depending on a country's political, economic, social, and technological inclinations. Factors affecting construction costs have been studied from various perspectives, and their analysis requires an appreciation of a country's practices; the identified cost factors provide an indication of a country's construction economic strata. The purpose of this paper is to identify the essential factors that affect unit cost estimation and their breakdown using artificial neural networks. Twenty-five (25) identified cost factors in road construction were subjected to a questionnaire survey and, employing SPSS factor analysis, the factors were reduced to eight. The eight factors were analysed using a neural network (NN) to determine the proportionate breakdown of the cost factors in a given construction unit rate. The NN predicted that the political environment accounted for 44% of the unit rate, followed by contractor capacity at 22% and financial delays, project feasibility, and overhead and profit, each at 11%. Project location, material availability, and corruption perception index had minimal impact on the unit cost in the training data provided. Quantified cost factors can be incorporated into unit cost estimation models (UCEM) to produce more accurate estimates. This can improve the cost estimation of infrastructure projects and establish a benchmark standard to assist the alignment of work practices and the training of new staff, permitting the ongoing development of best practices in cost estimation.

Keywords: construction cost factors, neural networks, roadworks, Zambian construction industry

Procedia PDF Downloads 343
2537 Robust Heart Rate Estimation from Multiple Cardiovascular and Non-Cardiovascular Physiological Signals Using Signal Quality Indices and Kalman Filter

Authors: Shalini Rankawat, Mansi Rankawat, Rahul Dubey, Mazad Zaveri

Abstract:

Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often seriously corrupted by noise, artifacts, and missing data, which lead to errors in the estimation of heart rate (HR) and incidences of false alarms from ICU monitors. Clinical support in the ICU requires the most reliable heart rate estimation. Cardiac activity, because of its relatively high electrical energy, may introduce artifacts into electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) recordings. This paper presents a robust heart rate estimation method based on the detection of R-peaks of ECG artifacts in EEG, EMG, and EOG signals, using an energy-based function and a novel Signal Quality Index (SQI) assessment technique. The SQIs of the physiological signals (EEG, EMG, and EOG) were obtained by correlating the nonlinear energy operator (Teager energy) of these signals with either the ECG or ABP signal. HR is estimated from the ECG, ABP, EEG, EMG, and EOG signals by separate Kalman filters based upon the individual SQIs. Data fusion of the HR estimates was then performed by weighting each estimate by the Kalman filter's SQI-modified innovations. The fused HR estimate is more accurate and robust than any of the individual HR estimates. This method was evaluated on the MIMIC II database of PhysioNet, using data from bedside monitors of ICU patients. The method provides an accurate HR estimate even in the presence of noise and artifacts.
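The core building blocks, the Teager-Kaiser energy operator and a correlation-based signal quality index, can be sketched as below. The synthetic signals stand in for an ECG-contaminated EEG channel and a reference ECG; the thresholds and weights of the full fusion scheme are not reproduced.

```python
import numpy as np

def teager_kaiser(x):
    """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def sqi(signal, reference):
    """Signal quality index: correlation of the TKEO of a signal with that of a reference."""
    a, b = teager_kaiser(signal), teager_kaiser(reference)
    return float(np.corrcoef(a, b)[0, 1])

# Assumed example: an ECG-like impulse train leaking into an EEG channel
rng = np.random.default_rng(6)
fs, n = 250, 2500
ecg = np.zeros(n)
ecg[::fs] = 1.0                                   # crude R-peaks once per second
eeg = 0.3 * ecg + 0.05 * rng.standard_normal(n)   # EEG channel with cardiac artifact

print(f"SQI of EEG channel vs ECG: {sqi(eeg, ecg):.2f}")
```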

Keywords: ECG, ABP, EEG, EMG, EOG, ECG artifacts, Teager-Kaiser energy, heart rate, signal quality index, Kalman filter, data fusion

Procedia PDF Downloads 677
2536 Extended Kalman Filter Based Direct Torque Control of Permanent Magnet Synchronous Motor

Authors: Liang Qin, Hanan M. D. Habbi

Abstract:

A robust sensorless speed estimation method for the permanent magnet synchronous motor (PMSM) is presented for the estimation of the stator flux components and rotor speed based on the Extended Kalman Filter (EKF). The model of the PMSM and its EKF models are built in the MATLAB/Simulink environment. The proposed EKF speed estimation method is also shown to be insensitive to PMSM parameter variations. Simulation results demonstrate good performance and robustness.

Keywords: DTC, Extended Kalman Filter (EKF), PMSM, sensorless control, anti-windup PI

Procedia PDF Downloads 644
2535 DOA Estimation Using Golden Section Search

Authors: Niharika Verma, Sandeep Santosh

Abstract:

DOA estimation is a localization technique used in the communication field. Various algorithms have been developed for direction of arrival estimation, such as MUSIC, ROOT-MUSIC, etc. These algorithms depend on various parameters, such as the number of antenna array elements, the number of snapshots, and others. Basically, the MUSIC spectrum is evaluated and the peaks obtained are considered the angles of arrival. The angles evaluated by this process depend on the scanning interval chosen, and the accuracy of the results depends on the coarseness of that interval. In this paper, golden section search is applied to the MUSIC algorithm, and therefore more accurate results are achieved. Initially, coarse DOA estimation is done using the MUSIC algorithm in the range -90 to 90 degrees at intervals of 10 degrees. After the peaks are obtained, fine DOA estimation is done using golden section search. Also, the partitioning method is applied to estimate the number of signals incident on the antenna array. The dependency of the algorithm on the number of snapshots is also explained. Hence, accurate results are determined using this algorithm.
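The refinement step can be sketched independently of the array geometry: a coarse scan locates an approximate peak, and golden section search then narrows the bracket around it. The pseudo-spectrum below is a stand-in function with an assumed known peak, not a true MUSIC spectrum built from array snapshots.

```python
import numpy as np

def golden_section_max(f, a, b, tol=1e-4):
    """Golden section search for the maximum of a unimodal function on [a, b]."""
    inv_phi = (np.sqrt(5.0) - 1.0) / 2.0          # 1/phi ~ 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) > f(d):                           # maximum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                     # maximum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

# Stand-in pseudo-spectrum with a peak at 23.7 degrees (assumed, not a real MUSIC spectrum)
spectrum = lambda theta: 1.0 / (1e-3 + (theta - 23.7) ** 2)

# Coarse scan from -90 to 90 degrees in 10-degree steps, then refine around the best cell
grid = np.arange(-90.0, 91.0, 10.0)
i = int(np.argmax([spectrum(t) for t in grid]))
doa = golden_section_max(spectrum, grid[max(i - 1, 0)], grid[min(i + 1, len(grid) - 1)])
print(f"refined DOA estimate: {doa:.3f} degrees")
```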

Keywords: Direction of Arrival (DOA), golden section search, MUSIC, number of snapshots

Procedia PDF Downloads 433
2534 Exponentiated Transmuted Weibull Distribution: A Generalization of the Weibull Probability Distribution

Authors: Abd El Hady N. Ebraheim

Abstract:

This paper introduces a new generalization of the two-parameter Weibull distribution. To this end, the quadratic rank transmutation map has been used. This new distribution is named the exponentiated transmuted Weibull (ETW) distribution. The ETW distribution has the advantage of being capable of modeling various shapes of aging and failure criteria. Furthermore, eleven lifetime distributions, such as the Weibull, exponentiated Weibull, Rayleigh, and exponential distributions, among others, follow as special cases. The properties of the new model are discussed, and maximum likelihood estimation is used to estimate the parameters. Explicit expressions are derived for the quantiles. The moments of the distribution are derived, and the order statistics are examined.
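One common way to write such a distribution (this parametrization is an assumption and may differ from the paper's exact definition) is to transmute the Weibull CDF via the quadratic rank transmutation map and then raise it to a power. The sketch below evaluates that CDF and samples from it by the inversion method mentioned in the keywords.

```python
import numpy as np

def etw_cdf(x, alpha, beta, lam, gamma):
    """Exponentiated transmuted Weibull CDF, one common parametrization (assumed):
    F(x) = [(1 + lam) * G(x) - lam * G(x)**2] ** gamma,  G = Weibull(alpha, beta) CDF."""
    g = 1.0 - np.exp(-(np.asarray(x, dtype=float) / beta) ** alpha)
    return ((1.0 + lam) * g - lam * g ** 2) ** gamma

def etw_rvs(n, alpha, beta, lam, gamma, rng=None):
    """Sample by the inversion method: solve F(x) = u for u ~ Uniform(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)
    v = u ** (1.0 / gamma)
    if abs(lam) < 1e-12:
        g = v
    else:  # root of lam*G^2 - (1+lam)*G + v = 0 lying in [0, 1]
        g = ((1.0 + lam) - np.sqrt((1.0 + lam) ** 2 - 4.0 * lam * v)) / (2.0 * lam)
    return beta * (-np.log(1.0 - g)) ** (1.0 / alpha)

# Quick check: the fitted CDF evaluated at the sample median should be close to 0.5
rng = np.random.default_rng(7)
pars = dict(alpha=1.5, beta=2.0, lam=0.4, gamma=2.0)
x = etw_rvs(10000, rng=rng, **pars)
q = np.quantile(x, 0.5)
print(f"median sample {q:.3f}, F(median) = {etw_cdf(q, **pars):.3f}")   # ~0.5
```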

Keywords: exponentiated, inversion method, maximum likelihood estimation, transmutation map

Procedia PDF Downloads 550
2533 Comparison of Statistical Methods for Estimating Missing Precipitation Data in the River Subbasin Lenguazaque, Colombia

Authors: Miguel Cañon, Darwin Mena, Ivan Cabeza

Abstract:

In this work, the applicability of statistical methods for the estimation of missing precipitation data was compared and evaluated in the basin of the Lenguazaque river, located in the departments of Cundinamarca and Boyacá, Colombia. The methods used were simple linear regression, the distance-ratio method, local averages, mean ratios, correlation with nearby stations, and the multiple regression method. The analysis used to determine the effectiveness of the methods was performed using three statistical tools: the correlation coefficient (r2), the standard error of estimation, and the Bland-Altman test of agreement. The analysis was performed by randomly removing real rainfall values in each of the seasons and then estimating them with the mentioned methodologies to complete the missing data. It was determined that the methods with the highest performance and accuracy in the estimation of data, under the conditions considered, are the multiple regression method with three nearby stations and a random application scheme supported by the precipitation behavior of related data sets.
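One of the simpler methods in this family, a mean/normal-ratio estimate from nearby stations, can be sketched as follows. The station values and annual normals are assumed numbers for illustration, and the multiple-regression variant favored by the study is not reproduced.

```python
import numpy as np

def normal_ratio_estimate(neighbor_precip, neighbor_normals, target_normal):
    """Normal-ratio estimate of a missing precipitation value:
    P_x = (N_x / n) * sum_i (P_i / N_i), using n nearby stations."""
    p = np.asarray(neighbor_precip, dtype=float)
    n_i = np.asarray(neighbor_normals, dtype=float)
    return target_normal / p.size * np.sum(p / n_i)

# Assumed example: monthly rainfall (mm) at three nearby stations and annual normals
neighbor_precip = [82.0, 95.0, 71.0]
neighbor_normals = [950.0, 1100.0, 870.0]
target_normal = 1000.0

print(f"estimated missing value: "
      f"{normal_ratio_estimate(neighbor_precip, neighbor_normals, target_normal):.1f} mm")
```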

Keywords: statistical comparison, precipitation data, river subbasin, Bland and Altman

Procedia PDF Downloads 454
2532 Age Estimation from Teeth among North Indian Population: Comparison and Reliability of Qualitative and Quantitative Methods

Authors: Jasbir Arora, Indu Talwar, Daisy Sahni, Vidya Rattan

Abstract:

Introduction: Age estimation is a crucial step in establishing the identity of a person, both deceased and living. In adults, age can be estimated on the basis of six regressive changes in teeth (attrition, secondary dentine, dentine transparency, root resorption, cementum apposition, and periodontal disease), qualitatively using a scoring system and quantitatively by a micrometric method. The present research was designed to establish the reliability of the qualitative (method 1) and quantitative (method 2) methods of age estimation among North Indians and to compare the efficacy of these two methods. Method: 250 single-rooted extracted teeth (18-75 years) were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Before extraction, the periodontal score of each tooth was noted. Labiolingual sections were prepared and examined under a light microscope for regressive changes. Each parameter was scored using Gustafson's 0-3 point scoring system (qualitative), and a total score was calculated. For the quantitative method, each regressive change was measured in the form of 18 micrometric parameters under a microscope with the help of a measuring eyepiece. Age was estimated using linear regression analysis for Gustafson's method and multiple regression analysis for Kedici's method, and the estimated age was compared with the actual age on the basis of the absolute mean error. Results: In the pooled data, by Gustafson's method, a significant correlation (r = 0.8) was observed between the total score and actual age; the total score generated an absolute mean error of ±7.8 years. For Kedici's method, a correlation coefficient of r = 0.5 (p < 0.01) was observed between the eighteen micrometric parameters and known age; using a multiple regression equation, the absolute mean error of age was found to be ±12.18 years. Conclusion: Gustafson's (qualitative) method was found to be the better predictor for age estimation among North Indians.

Keywords: forensic odontology, age estimation, North India, teeth

Procedia PDF Downloads 227
2531 Median-Based Nonparametric Estimation of Returns in Mean-Downside Risk Portfolio Frontier

Authors: H. Ben Salah, A. Gannoun, C. de Peretti, A. Trabelsi

Abstract:

The Downside Risk (DSR) model for portfolio optimisation makes it possible to overcome the drawbacks of the classical mean-variance model concerning the asymmetry of returns and the risk perception of investors. The optimization in this model deals with a positive definite matrix that is endogenous with respect to the portfolio weights, an aspect that makes the problem far more difficult to handle. For this purpose, Athayde (2001) developed a new recursive minimization procedure that ensures convergence to the solution. However, when only a finite number of observations is available, the portfolio frontier is not very smooth. In order to overcome this, Athayde (2003) proposed a mean kernel estimation of the returns so as to create a smoother portfolio frontier; this technique provides an effect similar to having continuous observations. In this paper, taking advantage of the robustness of the median, we replace the mean estimator in Athayde's model by a nonparametric median estimator of the returns. We then give a new version of the former algorithm (of Athayde (2001, 2003)). We finally analyse the properties of this improved portfolio frontier and apply the new method to real examples.
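The downside-risk objective for a fixed weight vector can be sketched as the below-target semivariance of the portfolio returns. The return sample and target below are assumed, and neither the endogenous-matrix recursion of Athayde (2001) nor the median-based smoothing proposed here is reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

def downside_semivariance(returns, weights, target=0.0):
    """Below-target semivariance of a portfolio: mean squared shortfall below the target."""
    port = np.asarray(returns) @ np.asarray(weights)
    shortfall = np.minimum(port - target, 0.0)
    return float(np.mean(shortfall ** 2))

# Assumed sample of monthly returns for three assets (rows = periods, columns = assets)
returns = rng.normal(loc=[0.01, 0.008, 0.012], scale=[0.04, 0.03, 0.06], size=(120, 3))
weights = np.array([0.4, 0.4, 0.2])

dsr = downside_semivariance(returns, weights, target=0.0)
print(f"portfolio downside semivariance: {dsr:.6f}")
```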

Keywords: Downside Risk, Kernel Method, Median, Nonparametric Estimation, Semivariance

Procedia PDF Downloads 469
2530 Oil Palm Shell Ash: Cement Mortar Mixture and Modification of Mechanical Properties

Authors: Abdoullah Namdar, Fadzil Mat Yahaya

Abstract:

Waste agricultural materials cause environmental pollution; recycling these materials helps sustainable development. This study focused on the impact of oil palm shell ash on the compressive and flexural strengths of cement mortar. Two different cement mortar mixes were designed to investigate the impact of oil palm shell ash on the strengths of cement mortar. A quantity of 4% oil palm shell ash was used as a replacement in the cement mortar. The main objective of this paper is to modify the mechanical properties of cement mortar by replacement with oil palm shell ash at the early age of seven days. The results reveal the optimum quantity of oil palm shell ash for replacement in cement mortar. The deflection, load to failure, time to failure under compression, and flexural strength of all specimens were significantly improved. The stress-strain behavior indicates the ability of the modified cement mortar to control the stress path and strain. The micro-properties of the cement paste were not investigated.

Keywords: minerals, additive, flexural strength, compressive strength, modulus of elasticity

Procedia PDF Downloads 346
2529 Ratio Type Estimators for the Estimation of Population Coefficient of Variation under Two-Stage Sampling

Authors: Muhammad Jabbar

Abstract:

In this paper, we propose two estimators, a ratio and a ratio-type exponential estimator, for the estimation of the population coefficient of variation using auxiliary information under two-stage sampling. The properties of these estimators are derived up to the first order of approximation. The efficiency conditions under which the suggested estimators are more efficient are obtained. Numerical and simulation studies are conducted to support the superiority of the estimators. Theoretically and numerically, we have found that our proposed estimator is always more efficient than its competitor estimator.

Keywords: two-stage sampling, coefficient of variation, ratio type exponential estimator

Procedia PDF Downloads 507
2528 Typical Characteristics and Compositions of Solvent System in Application of Maceration Technology to Isolate Antioxidative Activated Extract of Natural Products

Authors: Yohanes Buang, Suwari

Abstract:

The increasing interest of society in the use and creation of herbal medicines has encouraged scientists and researchers to establish an ideal method to produce pharmaceutical extracts of the best quality and quantity. To obtain the most antioxidative extracts, the method used must operate at optimum conditions. Hence, the best method is not only able to provide the highest quantity and quality of the isolated pharmaceutical extracts but also has to be easy to perform, simple, fast, and cheap. The characterization of solvents in the maceration technique in the present study involved the variables influencing the quantity and quality of the pharmaceutical extracts, such as the solvent's optimum acidity-alkalinity (pH), temperature, concentration, and contact time. Shifting the polarity of the solvent by combining water with ethanol (70:30) and (50:50) was also performed to completely record the best solvent system in the application of maceration technology. Among the three solvents tested with Myrmecodia pendens, as a model natural product, the results showed that the water solvent system, under conditions of alkaline pH and optimum temperature, concentration, and contact time, is the best system for performing maceration in order to obtain the highest antioxidative extracts. The optimum conditions of the water solvent are pH 9 and above, 30 mg/mL concentration, 40 min contact time, 100 °C temperature, and no ethanol used to replace part of the water solvent. The present study strongly recommends these conditions of the solvent system for isolating the pharmaceutical extracts of natural products in the application of maceration technology.

Keywords: extracts, herbal medicine, natural product, maceration technique

Procedia PDF Downloads 281