Search results for: time estimation
19075 Estimation of Endogenous Brain Noise from Brain Response to Flickering Visual Stimulation
Authors: Alexander N. Pisarchik, Parth Chholak
Abstract:
Intrinsic brain noise was estimated via magnetoencephalograms (MEG) recorded during the perception of flickering visual stimuli with frequencies of 6.67 and 8.57 Hz. First, we measured the mean phase difference between the flicker signal and the steady-state event-related field (SSERF) in the occipital area, where the brain response at the flicker frequencies and their harmonics appeared in the power spectrum. Then, we calculated the probability distribution of the phase fluctuations in the regions of frequency locking and computed its kurtosis. Since kurtosis is a measure of the distribution's sharpness, we suppose that inverse kurtosis is related to intrinsic brain noise. In our experiments, the kurtosis value varied among subjects from K = 3 to K = 5 for 6.67 Hz and from K = 2.6 to K = 4 for 8.57 Hz. The majority of subjects demonstrated leptokurtic distributions (K > 3), i.e., the distribution tails approached zero more slowly than Gaussian. In addition, we found a strong correlation between kurtosis and brain complexity measured as the correlation dimension, so that the MEGs of subjects with higher kurtosis exhibited lower complexity. The obtained results are discussed in the framework of nonlinear dynamics and complex network theories. Specifically, in a network of coupled oscillators, phase synchronization is mainly determined by two antagonistic factors: noise and coupling strength. While noise worsens phase synchronization, coupling improves it. If we assume that each neuron and each synapse contribute to brain noise, a larger neuronal network should have stronger noise, and therefore worse phase synchronization, which results in smaller kurtosis. The described method of brain noise estimation can be useful for the diagnostics of some brain pathologies associated with abnormal brain noise.
Keywords: brain, flickering, magnetoencephalography, MEG, visual perception, perception time
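Not part of the abstract: a minimal sketch of the core statistic, assuming the phase differences between flicker and SSERF are available as an array in radians. scipy's Pearson kurtosis (fisher=False) matches the abstract's K = 3 Gaussian baseline.

```python
import numpy as np
from scipy.stats import kurtosis

def intrinsic_noise_index(phase_diff: np.ndarray) -> tuple[float, float]:
    """Pearson kurtosis of phase fluctuations and its inverse.

    phase_diff: phase differences (rad) between the flicker signal and
    the SSERF, restricted to an epoch of frequency locking.
    """
    # fisher=False returns Pearson kurtosis, so a Gaussian scores K = 3,
    # matching the baseline quoted in the abstract.
    k = kurtosis(phase_diff, fisher=False, bias=False)
    return k, 1.0 / k  # inverse kurtosis as a proxy for intrinsic noise

# A heavy-tailed (leptokurtic) sample scores K > 3:
rng = np.random.default_rng(0)
print(intrinsic_noise_index(rng.standard_t(df=5, size=10_000)))
```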
Procedia PDF Downloads 148
19074 Heavy Metals Estimation in Coastal Areas Using Remote Sensing, Field Sampling and Classical and Robust Statistics
Authors: Elena Castillo-López, Raúl Pereda, Julio Manuel de Luis, Rubén Pérez, Felipe Piña
Abstract:
Sediments are an important site of accumulation of toxic contaminants within the aquatic environment. Bioassays are a powerful tool for studying the toxicity of sediments, but they can be expensive. This article presents a methodology to estimate the main physical property of intertidal sediments in coastal zones: heavy metal concentration. The study, developed in the Bay of Santander (Spain), applies classical and robust statistics to CASI-2 hyperspectral images to estimate heavy metal presence and ecotoxicity (TOC). Simultaneous fieldwork (radiometric and chemical sampling) allowed an appropriate atmospheric correction of the CASI-2 images.
Keywords: remote sensing, intertidal sediment, airborne sensors, heavy metals, ecotoxicity, robust statistics, estimation
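The abstract does not publish its robust estimator, so the following is only a hedged illustration of the general idea on synthetic data: an M-estimator such as Huber's downweights outlying sediment samples when regressing lab-measured concentration on image reflectance.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for one CASI-2 band paired with lab-measured metal
# concentration at the sampled intertidal points.
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.05, 0.40, 60)
conc = 120.0 - 180.0 * reflectance + rng.normal(0.0, 6.0, 60)  # mg/kg
conc[:4] += 60.0  # a few gross outliers

X = sm.add_constant(reflectance)
ols = sm.OLS(conc, X).fit()                              # classical fit
rlm = sm.RLM(conc, X, M=sm.robust.norms.HuberT()).fit()  # robust fit
print(f"OLS slope {ols.params[1]:.1f} vs robust slope {rlm.params[1]:.1f}")
```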
Procedia PDF Downloads 421
19073 Health as a Proxy for Labour Productivity: The Impact on Wages in Egypt’s Private Sector
Authors: Yasmine Ahmed Shemeis
Abstract:
Determining the impact of productivity increases on wage levels is often difficult due to the unavailability of individual-level productivity data. Accordingly, we proxy for productivity using a self-perceived measure of health based on the postulated positive relationship between better health and productivity improvements. Using Egypt’s labour market data for the years 2012 and 2018 and utilizing a Maximum Likelihood Estimation method, we address two issues: the endogeneity of health in the estimation of wages and a sample selection bias. Our findings indicate the great value that better health has in enhancing wage levels in Egypt’s private sector. Also, we find that overlooking the endogeneity of health underestimates its effect on wages. Thus, the improvement of health states is likely to be beneficial in improving labour market outcomes in terms of wages as well as labour productivity in Egypt.
Keywords: labour, productivity, wages, endogeneity, sample selection
Procedia PDF Downloads 80
19072 Fatigue Life Estimation of Spiral Welded Waterworks Pipelines
Authors: Suk Woo Hong, Chang Sung Seok, Jae Mean Koo
Abstract:
Recently, welding has been widely used in modern industry for joining structures. Waterworks pipes, being buried underground, are exposed to fatigue loads from traffic, earthquakes, etc. Moreover, residual stress is introduced in the weld zone by the welding process, and it is well known that the fatigue life of welded structures is degraded by residual stress. For these reasons, cracks can occur in the weld zone of a pipeline; ground subsidence or sinkholes can then occur if the soil and sand are washed away by fluid leaking from the crack of a water pipe. These problems can lead to property damage and endanger lives, so the fatigue characteristics of the water pipeline weld zone need to be estimated. Therefore, in this study, ASTM standard specimens and curved plate specimens were collected from a spiral welded waterworks pipe and fatigue tests were performed. The S-N curves of each specimen were estimated, and the fatigue life of the weldment curved plate specimen was then predicted by theoretical and analytical methods. After that, weldment curved plate specimens were collected from the pipe and verification fatigue tests were performed. Finally, it was verified that the predicted S-N curve of the weldment curved plate specimen was in good agreement with the fatigue test data.
Keywords: spiral welded pipe, fatigue life prediction, endurance limit modifying factors, residual stress
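The abstract does not publish its S-N fitting procedure; a common choice, sketched here on invented test points, is fitting Basquin's law S = a·N^b in log-log space and inverting it for life prediction.

```python
import numpy as np

# Stress amplitudes (MPa) and cycles to failure from fatigue tests
# (illustrative numbers, not the paper's data).
S = np.array([320.0, 280.0, 240.0, 200.0, 170.0])
N = np.array([4.2e4, 1.1e5, 4.8e5, 2.3e6, 9.5e6])

# Basquin's law S = a * N**b is linear in log-log space:
# log10(S) = log10(a) + b * log10(N)
b, log_a = np.polyfit(np.log10(N), np.log10(S), 1)
a = 10.0 ** log_a
print(f"S = {a:.1f} * N^{b:.3f}")

# Life prediction at a given stress amplitude, e.g. for the weld zone
# after derating by endurance-limit modifying factors.
S_design = 180.0
print(f"predicted life: {(S_design / a) ** (1.0 / b):.3e} cycles")
```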
Procedia PDF Downloads 299
19071 Understanding the Classification of Rain Microstructure and Estimation of Z-R Relationship Using a Micro Rain Radar in Tropical Region
Authors: Tomiwa, Akinyemi Clement
Abstract:
Tropical regions experience diverse and complex precipitation patterns, posing significant challenges for accurate rainfall estimation and forecasting. This study addresses the problem of effectively classifying tropical rain types and refining the Z-R (Reflectivity-Rain Rate) relationship to enhance rainfall estimation accuracy. Through a combination of remote sensing, meteorological analysis, and machine learning, the research aims to develop an advanced classification framework capable of distinguishing between different types of tropical rain based on their unique characteristics. This involves utilizing high-resolution satellite imagery, radar data, and atmospheric parameters to categorize precipitation events into distinct classes, providing a comprehensive understanding of tropical rain systems. Additionally, the study seeks to improve the Z-R relationship, a crucial aspect of rainfall estimation. One year of rainfall data was analyzed using a Micro Rain Radar (MRR) located at The Federal University of Technology Akure, Nigeria, measuring rainfall parameters from ground level to a height of 4.8 km with a vertical resolution of 0.16 km. Rain rates were classified into low (stratiform) and high (convective) based on various microstructural attributes such as rain rates, liquid water content, Drop Size Distribution (DSD), average fall speed of the drops, and radar reflectivity. By integrating diverse datasets and employing advanced statistical techniques, the study aims to enhance the precision of Z-R models, offering a more reliable means of estimating rainfall rates from radar reflectivity data. This refined Z-R relationship holds significant potential for improving our understanding of tropical rain systems and enhancing forecasting accuracy in regions prone to heavy precipitation.
Keywords: remote sensing, precipitation, drop size distribution, micro rain radar
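The Z-R refinement reduces to fitting Z = a·R^b per rain class, which is linear in log-log space. A minimal sketch on synthetic data generated around the classical Marshall-Palmer relation; the 10 mm/h stratiform/convective split is illustrative, not the study's criterion.

```python
import numpy as np

def fit_zr(R, dBZ):
    """Fit Z = a * R**b by linear regression in log-log space."""
    Z = 10.0 ** (dBZ / 10.0)  # dBZ -> Z in mm^6 m^-3
    b, log_a = np.polyfit(np.log10(R), np.log10(Z), 1)
    return 10.0 ** log_a, b

# Synthetic samples scattered around Marshall-Palmer, Z = 200 R^1.6.
rng = np.random.default_rng(1)
R = rng.gamma(2.0, 3.0, 500) + 0.1                      # rain rates (mm/h)
dBZ = 10.0 * np.log10(200.0 * R ** 1.6) + rng.normal(0.0, 1.0, R.size)

strat = R < 10.0  # illustrative low/high rain-rate split
for name, sel in [("stratiform", strat), ("convective", ~strat)]:
    a, b = fit_zr(R[sel], dBZ[sel])
    print(f"{name}: Z = {a:.0f} R^{b:.2f}")
```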
Procedia PDF Downloads 33
19070 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data
Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini
Abstract:
A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, like the underestimate of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d = ta, the estimation of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) each very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these equations should improve the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
Keywords: central Italy, extreme events, rainfall data, underestimation errors
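The d = ta worst case is easy to reproduce numerically: compare the true sliding-window annual maximum with the maximum over fixed aggregation windows. A minimal sketch on synthetic one-minute rainfall (illustrative, not the Central Italy records):

```python
import numpy as np

rng = np.random.default_rng(0)
rain = rng.exponential(0.05, 60 * 24 * 365)  # synthetic 1-min depths (mm), one year

d = 60  # duration of interest: 60 min, equal to the aggregation interval ta
# True Hd: maximum over a sliding 60-min window (continuous data).
csum = np.concatenate(([0.0], np.cumsum(rain)))
true_Hd = (csum[d:] - csum[:-d]).max()

# Coarse Hd: maxima over fixed, non-overlapping windows of ta = 60 min.
coarse_Hd = rain[: rain.size // d * d].reshape(-1, d).sum(axis=1).max()

print(f"underestimation: {100 * (1 - coarse_Hd / true_Hd):.1f}%")
```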
Procedia PDF Downloads 191
19069 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils
Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha
Abstract:
Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content or degree of saturation and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have focused on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models for each particular soil type. Better estimation of the SWCC is expected to be achieved via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted in order to evaluate the reliability of SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters in four forms of SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC models were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for the sand soil type. The Brooks and Corey model predictions were also compatible with the evaluated samples across the range from low to high soil water content.
Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering
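A hedged sketch of the optimization step for the winning model: fitting the Brooks-Corey SWCC to illustrative sand data with scipy's curve_fit (the paper's data and exact objective are not reproduced).

```python
import numpy as np
from scipy.optimize import curve_fit

def brooks_corey(psi, psi_b, lam):
    """Brooks-Corey effective saturation Se as a function of suction psi (kPa):
    Se = (psi_b / psi)**lam for psi > psi_b, else 1."""
    return np.where(psi > psi_b, (psi_b / psi) ** lam, 1.0)

# Illustrative lab points for a sand (suction in kPa, effective saturation).
psi = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
Se = np.array([1.00, 0.98, 0.72, 0.45, 0.28, 0.17, 0.10])

popt, _ = curve_fit(brooks_corey, psi, Se, p0=[1.0, 0.7], bounds=(0, np.inf))
print(f"air-entry suction psi_b = {popt[0]:.2f} kPa, "
      f"pore-size index lambda = {popt[1]:.2f}")
```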
Procedia PDF Downloads 338
19068 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can be beneficial for marketing purposes. In the insurance industry, small claims happen frequently, while large claims are rare. Traditional distributions such as the normal, exponential, and inverse Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small-to-moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method and confirmed that it provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses Fisher scoring as the iteration method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, Fisher scoring method, non-linear regression models, composite distributions
Procedia PDF Downloads 32
19067 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant
Authors: Michael Smalenberger
Abstract:
Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate the parameters necessary for the implementation of the artificial intelligence component and for estimating the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and the probability of mastery, none directly and reliably ask the user to self-assess them. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage, while transaction-level data are scant. Once a user's transaction-level data become available after sufficient ITS usage, they can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings relate to the number of exercises necessary for mastery of a knowledge component, the associated implications for learning curves, and their relevance to instruction time.
Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation
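The BKT update the abstract refers to has a standard closed form; here is a minimal sketch, with illustrative parameter values seeded from a self-assessment (as the study proposes) rather than fitted from transaction data.

```python
def bkt_update(pL: float, correct: bool,
               guess: float, slip: float, learn: float) -> float:
    """One Bayesian Knowledge Tracing step: Bayes posterior P(mastery)
    after observing one response, then the learning transition."""
    if correct:
        num = pL * (1 - slip)
        den = pL * (1 - slip) + (1 - pL) * guess
    else:
        num = pL * slip
        den = pL * slip + (1 - pL) * (1 - guess)
    post = num / den
    return post + (1 - post) * learn

# Illustrative run over a short response sequence.
pL = 0.3  # prior P(L0), e.g. from the student's self-assessment
for obs in [True, False, True, True]:
    pL = bkt_update(pL, obs, guess=0.2, slip=0.1, learn=0.15)
print(f"P(mastery) = {pL:.2f}")
```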
Procedia PDF Downloads 172
19066 A Semiparametric Approach to Estimate the Mode of Continuous Multivariate Data
Authors: Tiee-Jian Wu, Chih-Yuan Hsu
Abstract:
Mode estimation is an important task because it has applications to data from a wide variety of sources. We propose a semi-parametric approach to estimate the mode of an unknown continuous multivariate density function. Our approach is based on a weighted average of a parametric density estimate using the Box-Cox transform and a non-parametric kernel density estimate. Our semi-parametric mode estimate improves on both the parametric and non-parametric mode estimates. Specifically, it solves the inconsistency problem of parametric mode estimates (at large sample sizes) and reduces the variability of non-parametric mode estimates (at small sample sizes). The performance of our method at practical sample sizes is demonstrated by simulation examples and two real examples from the fields of climatology and image recognition.
Keywords: Box-Cox transform, density estimation, mode seeking, semiparametric method
Procedia PDF Downloads 284
19065 Lipschitz Classifiers Ensembles: Usage for Classification of Target Events in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
This paper introduces an original method for the guaranteed estimation of the accuracy of an ensemble of Lipschitz classifiers. The solution is obtained as a finite closed set of alternative hypotheses, which contains the object of classification with a probability not less than the specified value. Thus, the classification is represented by a set of hypothetical classes. In this case, the smaller the cardinality of the discrete set of hypothetical classes, the higher the classification accuracy. Experiments have shown that increasing the cardinality of the classifier ensemble reduces the cardinality of this set of hypothetical classes. The problem of the guaranteed estimation of the accuracy of an ensemble of Lipschitz classifiers is relevant to the multichannel classification of target events in C-OTDR monitoring systems. Results of the practical application of the suggested approach to accuracy control in C-OTDR monitoring systems are presented.
Keywords: Lipschitz classifiers, confidence set, C-OTDR monitoring, classifiers accuracy, classifiers ensemble
Procedia PDF Downloads 492
19064 Design and Test a Robust Bearing-Only Target Motion Analysis Algorithm Based on Modified Gain Extended Kalman Filter
Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy
Abstract:
Passive sonar is a method for detecting acoustic signals in the ocean; it detects acoustic signals emanating from external sources. With passive sonar, we can determine only the bearing of the target, with no information about its range. Target Motion Analysis (TMA) is the process of estimating the position and speed of a target using passive sonar information. Since bearing is the only available information, the technique is called bearing-only TMA. Many TMA techniques have been developed. However, until now, there has been no single method that reliably tracks an unknown target and extracts its trajectory in all cases. In this work, an effective bearing-only TMA algorithm is designed. The measured bearing angles are very noisy; moreover, for multi-beam sonar, the measurements are quantized due to the sonar beam width. To deal with this, the modified gain extended Kalman filter algorithm is used. The algorithm is fine-tuned, and many modules are added to improve the performance. A special validation gate module is used to ensure the stability of the algorithm. Many indicators of performance and confidence-level measurement are designed and tested. A new method to detect whether the target is maneuvering is proposed. Moreover, a reactive optimal observer maneuver based on bearing measurements is proposed, which ensures convergence to the right solution every time. To test the performance of the proposed TMA algorithm, a simulation is done with a MATLAB program. The simulator models a discrete scenario for an observer and a target, taking into consideration all the practical aspects of the problem, such as a smooth transition in speed, a circular turn of the ship, noisy measurements, and the quantized bearing measurements coming from multi-beam sonar. Tests were run for many given scenarios. For all the tests, full tracking was achieved within 10 minutes with very little error: the range estimation error was less than 5%, the speed error less than 5%, and the heading error less than 2 degrees. The online performance estimator is mostly aligned with the real performance; the range estimation confidence level gives a value of 90% when the range error is less than 10%. The experiments show that the proposed TMA algorithm is very robust and has low estimation error. However, the convergence time of the algorithm needs to be improved.
Keywords: target motion analysis, Kalman filter, passive sonar, bearing-only tracking
Procedia PDF Downloads 402
19063 Well-Being Inequality Using Superimposing Satisfaction Waves: Heisenberg Uncertainty in Behavioral Economics and Econometrics
Authors: Okay Gunes
Abstract:
In this article, we propose, for the first time in the literature on this subject, a new method for measuring well-being inequality through a model composed of superimposing satisfaction waves. The displacement of households' satisfactory state (i.e., satisfaction) is defined in a satisfaction string. The duration of the satisfactory state for a given period of time is measured in order to determine the relationship between utility and total satisfactory time, itself dependent on the density and tension of each satisfaction string. Thus, individual cardinal total satisfaction values are computed by way of a one-dimensional scalar sinusoidal (harmonic) moving wave function, using satisfaction waves with varying amplitudes and frequencies, which allows us to measure well-being inequality. One advantage of using satisfaction waves is the ability to show that individual utility and consumption amounts would probably not commute; hence, it is impossible to measure or know simultaneously the values of these observables from the dataset. Thus, we crystallize the problem by using a Heisenberg-type uncertainty resolution for self-adjoint economic operators. We propose to eliminate any estimation bias by correlating the standard deviations of selected economic operators; this is achieved by replacing the aforementioned observed uncertainties with households' perceived uncertainties (i.e., corrected standard deviations) obtained through the logarithmic psychophysical law proposed by Weber and Fechner.
Keywords: Heisenberg uncertainty principle, superimposing satisfaction waves, Weber-Fechner law, well-being inequality
Procedia PDF Downloads 440
19062 Reliability Prediction of Tires Using Linear Mixed-Effects Model
Authors: Myung Hwan Na, Ho- Chun Song, EunHee Hong
Abstract:
Normal linear mixed-effects models are widely used to analyze repeated-measurement data. When heteroscedasticity and non-normality of the population distribution are detected at the same time, however, a normal linear mixed-effects model can give improper analysis results. To achieve more robust estimation, we use a heavy-tailed linear mixed-effects model, which gives more exact and reliable conclusions than the standard normal linear mixed-effects model.
Keywords: reliability, tires, field data, linear mixed-effects model
Procedia PDF Downloads 563
19061 GIS Application in Surface Runoff Estimation for Upper Klang River Basin, Malaysia
Authors: Suzana Ramli, Wardah Tahir
Abstract:
Estimation of surface runoff depth is a vital part of any rainfall-runoff modeling. It leads to streamflow calculation and, later, to the prediction of flood occurrences. GIS (Geographic Information System) is an advanced and apposite tool for simulating hydrological models due to its realistic treatment of topography. The paper discusses the calculation of surface runoff depth for two selected events using GIS with the Curve Number method for the Upper Klang River basin. GIS enables map intersection between soil type and land use, which produces the curve number map. The results show good correlation between simulated and observed values, with R2 greater than 0.7. Acceptable performance on statistical measures, namely mean error, absolute mean error, RMSE, and bias, is also reported in the paper.
Keywords: surface runoff, geographic information system, curve number method, environment
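The Curve Number step reduces to a closed-form runoff equation applied per soil/land-use cell; a minimal sketch in SI units with an illustrative CN value, not the basin's calibration:

```python
def scs_runoff_mm(P: float, CN: float) -> float:
    """SCS Curve Number direct runoff depth (mm) for storm rainfall P (mm)."""
    S = 25400.0 / CN - 254.0   # potential maximum retention (mm)
    Ia = 0.2 * S               # initial abstraction
    return (P - Ia) ** 2 / (P + 0.8 * S) if P > Ia else 0.0

# Illustrative: an 80 mm storm on a cell whose soil/land-use intersection
# yields CN = 78 (example values only).
print(f"runoff depth = {scs_runoff_mm(80.0, 78.0):.1f} mm")
```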
Procedia PDF Downloads 281
19060 Modeling Food Popularity Dependencies Using Social Media Data
Authors: Devashish Khulbe, Manu Pathak
Abstract:
The rise in popularity of major social media platforms has enabled people to share photos and textual information about their daily lives. One of the popular topics about which information is shared is food. Since much food-related media is attributed to particular locations and restaurants, information like the spatio-temporal popularity of various cuisines can be analyzed. Tracking the popularity of food types and retail locations across space and time can also be useful for business owners and restaurant investors. In this work, we present an approach using off-the-shelf machine learning techniques to identify trends and the popularity of cuisine types in an area using geo-tagged data from social media, Google Images, and Yelp. After adjusting for time, we use kernel density estimation to find hot spots across the location and model the dependencies among cuisine popularities using Bayesian networks. We consider the Manhattan borough of New York City as the location for our analyses, but the approach can be used for any area with social media data and information about retail businesses.
Keywords: web mining, geographic information systems, business popularity, spatial data analyses
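A minimal sketch of the hot-spot step with scipy's Gaussian KDE on synthetic geo-tagged points around Manhattan (the Bayesian-network stage and the real data are not reproduced):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic geo-tagged posts for one cuisine: columns = (longitude, latitude).
rng = np.random.default_rng(0)
pts = rng.normal([-73.98, 40.75], [0.02, 0.03], size=(500, 2))

kde = gaussian_kde(pts.T)  # fit a 2-D kernel density; expects shape (d, n)
lon, lat = np.meshgrid(np.linspace(-74.03, -73.93, 100),
                       np.linspace(40.70, 40.82, 100))
density = kde(np.vstack([lon.ravel(), lat.ravel()])).reshape(lon.shape)
hot = np.unravel_index(density.argmax(), density.shape)
print(f"hottest cell: lon={lon[hot]:.3f}, lat={lat[hot]:.3f}")
```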
Procedia PDF Downloads 115
19059 Nonparametric Sieve Estimation with Dependent Data: Application to Deep Neural Networks
Authors: Chad Brown
Abstract:
This paper establishes general conditions for the convergence rates of nonparametric sieve estimators with dependent data. We present two key results: one for nonstationary data and another for stationary mixing data. Previous theoretical results often lack practical applicability to deep neural networks (DNNs). Using these conditions, we derive convergence rates for DNN sieve estimators in nonparametric regression settings with both nonstationary and stationary mixing data. The DNN architectures considered adhere to current industry standards, featuring fully connected feedforward networks with rectified linear unit activation functions, unbounded weights, and a width and depth that grow with sample size.
Keywords: sieve extremum estimates, nonparametric estimation, deep learning, neural networks, rectified linear unit, nonstationary processes
Procedia PDF Downloads 41
19058 Image Processing Techniques for Surveillance in Outdoor Environment
Authors: Jayanth C., Anirudh Sai Yetikuri, Kavitha S. N.
Abstract:
This paper explores the development and application of computer vision and machine learning techniques for real-time pose detection, facial recognition, and number plate extraction. Utilizing MediaPipe for pose estimation, the research presents methods for detecting hand raises and ducking postures through real-time video analysis. Complementarily, facial recognition is employed to compare and verify individual identities using the face recognition library. Additionally, the paper demonstrates a robust approach for extracting and storing vehicle number plates from images, integrating Optical Character Recognition (OCR) with a database management system. The study highlights the effectiveness and versatility of these technologies in practical scenarios, including security and surveillance applications. The findings underscore the potential of combining computer vision techniques to address diverse challenges and enhance automated systems for both individual and vehicular identification. This research contributes to the fields of computer vision and machine learning by providing scalable solutions and demonstrating their applicability in real-world contexts.
Keywords: computer vision, pose detection, facial recognition, number plate extraction, machine learning, real-time analysis, OCR, database management
Procedia PDF Downloads 26
19057 An Efficient Fundamental Matrix Estimation for Moving Object Detection
Authors: Yeongyu Choi, Ju H. Park, S. M. Lee, Ho-Youl Jung
Abstract:
In this paper, an improved method for estimating the fundamental matrix is proposed and applied effectively to monocular-camera-based moving object detection. The method consists of corner point detection, motion estimation of moving objects, and fundamental matrix calculation. The corner points are obtained using the Harris corner detector, and the motions of moving objects are calculated with the pyramidal Lucas-Kanade optical flow algorithm. Through epipolar geometry analysis using RANSAC, the fundamental matrix is calculated. In this method, we improve the performance of moving object detection by using two threshold values that separate inliers from outliers. Through simulations, we compare the performance while varying the two threshold values.
Keywords: corner detection, optical flow, epipolar geometry, RANSAC
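A hedged sketch of the described pipeline using OpenCV's stock implementations (Harris-scored corners, pyramidal LK, RANSAC F-estimation); the parameter values are placeholders, and the paper's second-threshold tuning logic is not reproduced.

```python
import cv2
import numpy as np

def fundamental_from_flow(img1: np.ndarray, img2: np.ndarray,
                          thresh: float = 1.0):
    """Corner detection -> pyramidal LK optical flow -> RANSAC F estimation,
    mirroring the pipeline described in the abstract."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    # Harris-scored corners.
    p1 = cv2.goodFeaturesToTrack(g1, maxCorners=500, qualityLevel=0.01,
                                 minDistance=7, useHarrisDetector=True)
    # Pyramidal Lucas-Kanade optical flow to track corners into frame 2.
    p2, status, _ = cv2.calcOpticalFlowPyrLK(g1, g2, p1, None)
    ok = status.ravel() == 1
    # thresh is one of the inlier/outlier thresholds the abstract varies.
    F, inlier_mask = cv2.findFundamentalMat(p1[ok], p2[ok], cv2.FM_RANSAC,
                                            ransacReprojThreshold=thresh)
    return F, inlier_mask
```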
Procedia PDF Downloads 406
19056 Change Point Detection Using Random Matrix Theory with Application to Frailty in Elderly Individuals
Authors: Malika Kharouf, Aly Chkeir, Khac Tuan Huynh
Abstract:
Detecting change points in time series data is a challenging problem, especially in scenarios where there is limited prior knowledge regarding the data’s distribution and the nature of the transitions. We present a method designed for detecting changes in the covariance structure of high-dimensional time series data, where the number of variables closely matches the data length. Our objective is to achieve unbiased test statistic estimation under the null hypothesis. We delve into the utilization of Random Matrix Theory to analyze the behavior of our test statistic within a high-dimensional context. Specifically, we illustrate that our test statistic converges pointwise to a normal distribution under the null hypothesis. To assess the effectiveness of our proposed approach, we conduct evaluations on a simulated dataset. Furthermore, we employ our method to examine changes aimed at detecting frailty in the elderly.
Keywords: change point detection, hypothesis tests, random matrix theory, frailty in elderly
Procedia PDF Downloads 52
19055 Periodicity Analysis of Long-Term Water Quality Data Series of the Hungarian Section of the River Tisza Using Morlet Wavelet Spectrum Estimation
Authors: Péter Tanos, József Kovács, Angéla Anda, Gábor Várbíró, Sándor Molnár, István Gábor Hatvani
Abstract:
The River Tisza is the second largest river in Central Europe. In this study, Morlet wavelet spectrum (periodicity) analysis was applied to chemical, biological, and physical water quality data for the Hungarian section of the River Tisza. In the research, 15 water quality parameters measured at 14 sampling sites on the River Tisza and 4 sampling sites on the main artificial channels were assessed for the period 1993-2005. Results show that annual periodicity was not always present in the water quality parameters, at least at certain sampling sites. Periodicity was found to vary over space and time, but in general, it increased alongside the higher trophic states of the river heading downstream.
Keywords: annual periodicity, water quality, spatiotemporal variability of periodic behavior, Morlet wavelet spectrum analysis, River Tisza
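A minimal sketch of Morlet wavelet periodicity detection on a synthetic monthly series, using PyWavelets' 'morl' wavelet; the study's exact spectrum estimator and data are not reproduced.

```python
import numpy as np
import pywt

# Synthetic monthly water-quality series with an annual component,
# standing in for a 1993-2005 record.
t = np.arange(12 * 13)  # 13 years, monthly sampling
x = (np.sin(2 * np.pi * t / 12)
     + 0.5 * np.random.default_rng(0).normal(size=t.size))

scales = np.arange(2, 64)
# sampling_period = 1/12 year, so returned frequencies are in cycles/year.
coef, freqs = pywt.cwt(x, scales, "morl", sampling_period=1 / 12)
power = np.abs(coef) ** 2
dominant_period = 1.0 / freqs[power.mean(axis=1).argmax()]
print(f"dominant period ~ {dominant_period:.2f} years")
```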
Procedia PDF Downloads 344
19054 Particle Size Distribution Estimation of a Mixture of Regular and Irregular Sized Particles Using Acoustic Emissions
Authors: Ejay Nsugbe, Andrew Starr, Ian Jennions, Cristobal Ruiz-Carcel
Abstract:
This work investigates the possibility of using Acoustic Emissions (AE) to estimate the Particle Size Distribution (PSD) of a mixture comprising particles of different densities and geometries. The experiments involved a mixture of glass and polyethylene particles, ranging from 150-212 microns and 150-250 microns respectively, and an experimental rig that allowed the free fall of a continuous stream of particles onto a target plate on which the AE sensor was placed. Using a time-domain-based multiple-threshold method, it was observed that the PSD of the particles in the mixture could be estimated.
Keywords: acoustic emissions, particle sizing, process monitoring, signal processing
Procedia PDF Downloads 352
19053 Integration GIS–SCADA Power Systems to Enclosure Air Dispersion Model
Authors: Ibrahim Shaker, Amr El Hossany, Moustafa Osman, Mohamed El Raey
Abstract:
This paper explores an integration model between a GIS-SCADA system and an enclosure quantification model to assess the impact of fail-safe events. There are real demands to identify spatial objects and improve control system performance. The employed methodology predicts electro-mechanical operations and the corresponding time to environmental incident variations. Open processing, as an object systems technology, is presented for integrating the enclosure database with minimal memory size and computation time via connectivity drivers such as ODBC/JDBC during the main stages of the GIS-SCADA connection. The function of the Geographic Information System is to manage power distribution in response to developing issues. In other words, GIS-SCADA systems integration requires numerical process objects to enable system model calibration and estimation, determination of past events for analysis, and prediction of emergency situations for response training.
Keywords: air dispersion model, environmental management, SCADA systems, GIS system, integration power system
Procedia PDF Downloads 368
19052 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements
Authors: Henok Hailemariam, Frank Wuttke
Abstract:
Collapsible soils are weak soils that appear to be stable in their natural state (normally a dry condition) but rapidly deform under saturation (wetting), thus generating large and unexpected settlements which often yield disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain accurate estimation of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large-scale sub-surface soil exploration purposes, the spatial sub-surface soil dielectric data over wide areas and great depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions, and a good match between the model prediction and experimental results is obtained.
Keywords: collapsible soil, dielectric permittivity, moisture content, relative subsidence
Procedia PDF Downloads 363
19051 Engine Thrust Estimation by Strain Gauging of Engine Mount Assembly
Authors: Rohit Vashistha, Amit Kumar Gupta, G. P. Ravishankar, Mahesh P. Padwale
Abstract:
Accurate thrust measurement is required for aircraft during takeoff and after ski-jump. In a developmental aircraft, takeoff from a ship is extremely critical, and the thrust produced by the engine should be known to the pilot before takeoff so that, if it is insufficient, the takeoff can be aborted and an accident avoided. After ski-jump, the thrust produced by the engine is required because the horizontal speed of the aircraft is less than the normal takeoff speed; the engine should be able to produce enough thrust to bring the airframe to the nominal horizontal takeoff speed within the prescribed time limit. Contemporary low-bypass gas turbine engines generally have three mounts, of which the two side mounts transfer the engine thrust to the airframe; the third mount takes only the weight component, not any thrust component. In the present method of thrust estimation, strain gauging of the two side mounts is carried out, and the strain produced at various power settings is used to estimate the thrust produced by the engine. A quarter Wheatstone bridge is used to acquire the strain data. The engine mount assembly is tested in a universal testing machine to determine the equivalent elasticity of the assembly; this elasticity value is used in the analytical approach for the estimation of engine thrust. The estimated thrust is compared with the load-cell thrust data from the test bed, and the experimental strain data are also compared with strain data obtained from FEM analysis. Experimental setup: The strain gauges are mounted on the tapered portion of the engine mount sleeve at two diametrically opposite locations, both in the horizontal plane. In this way, the gauges take no strain due to the weight of the engine (except a negligible strain due to the material's Poisson's ratio) or the hoop stress; only the third-mount strain gauge shows strain when the engine is not running, i.e., strain due to the weight of the engine. When the engine is running, all the load is taken by the side mounts: the strain gauge on the forward side of the sleeve shows a compressive strain, and the strain gauge on the rear side shows a tensile strain. Results and conclusion: The analytical calculation shows that the hoop stresses dominate the bending stress. The thrust estimated by strain gauging shows better accuracy at higher power settings than at lower ones: the accuracy at the maximum power setting is 99.7%, whereas at the lower power setting it is 78%.
Keywords: engine mounts, finite elements analysis, strain gauge, stress
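A hedged sketch of the analytical step (Hooke's law on the side-mount sleeves); the equivalent elasticity, cross-section, and strain readings below are invented placeholders, with the UTM-derived equivalent elasticity standing in for Young's modulus.

```python
# All numbers are illustrative, not the aircraft's.
E_eq = 7.0e10   # Pa, equivalent elasticity of mount assembly (from UTM test)
A = 4.0e-4      # m^2, effective load-bearing cross-section of one sleeve
eps_fwd, eps_aft = -410e-6, 395e-6  # quarter-bridge strain readings

# Forward gauge reads compressive, aft tensile; averaging their magnitudes
# cancels the bending component on one sleeve.
force_per_mount = E_eq * A * (abs(eps_fwd) + abs(eps_aft)) / 2

# The two side mounts together carry the thrust.
thrust = 2 * force_per_mount
print(f"estimated thrust ~ {thrust / 1000:.1f} kN")
```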
Procedia PDF Downloads 481
19050 Cycle Number Estimation Method on Fatigue Crack Initiation Using Voronoi Tessellation and the Tanaka-Mura Model
Authors: Mohammad Ridzwan Bin Abd Rahim, Siegfried Schmauder, Yupiter HP Manurung, Peter Binkele, Meor Iqram B. Meor Ahmad, Kiarash Dogahe
Abstract:
This paper deals with short crack initiation in the material P91 under cyclic loading at two different temperatures, concluding with the estimation of the short-crack-initiation Wöhler (S/N) curve. An artificial but representative model microstructure was generated using Voronoi tessellation, and the non-uniform stress distribution was then calculated with the Finite Element Method. The number of cycles needed for crack initiation is estimated on the basis of the stress distribution in the model by applying the physically based Tanaka-Mura model. Initial results show that the number of cycles to crack initiation is strongly correlated with temperature.
Keywords: short crack initiation, P91, Wöhler curve, Voronoi tessellation, Tanaka-Mura model
Procedia PDF Downloads 101
19049 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study
Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb
Abstract:
The purpose of this study is to reduce the radiation dose for chest CT examinations by adding Tube Current Modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the Dose Length Product (DLP) by a conversion factor. Results were compared to those obtained with the CT-Expo software. The size-specific dose estimate (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a size conversion factor related to the phantom's effective diameter. Objective assessment of image quality was performed with Signal-to-Noise Ratio (SNR) measurements in the phantom. SPSS software was used for data analysis. Results showed that with CARE Dose 4D, ED was lowered by 48.35% and 51.51% according to DLP and CT-Expo, respectively. ED ranged between 6.6 mSv and 7.01 mSv for the standard protocol, and between 3.2 mSv and 3.62 mSv with TCM. Similar results were found for SSDE: the dose was 16.25 mGy without TCM and 48.8% lower with TCM. The calculated SNR values were significantly different (p = 0.03 < 0.05); the highest was measured on images acquired with TCM and reconstructed with filtered back projection (FBP). In conclusion, this study demonstrates the potential of the TCM technique to reduce SSDE and ED while conserving image quality at a high diagnostic reference level for thoracic CT examinations.
Keywords: anthropomorphic phantom, computed tomography, CT-Expo, radiation dose
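Both dose quantities in the abstract are single multiplications; a sketch with illustrative coefficients of the kind tabulated for chest CT (k ≈ 0.014 mSv/(mGy·cm)) and for size correction (AAPM Report 204), not the study's exact values:

```python
# ED   = DLP * k        (k: region-specific conversion factor)
# SSDE = CTDIvol * f    (f: size factor from the effective diameter)

def effective_dose(dlp_mGy_cm: float, k: float = 0.014) -> float:
    """Effective dose (mSv) from DLP; k is a chest conversion factor."""
    return dlp_mGy_cm * k

def ssde(ctdi_vol_mGy: float, f_size: float = 1.3) -> float:
    """Size-specific dose estimate (mGy) from CTDIvol and a size factor."""
    return ctdi_vol_mGy * f_size

# Illustrative inputs (DLP in mGy*cm, CTDIvol in mGy):
print(f"ED = {effective_dose(470.0):.2f} mSv, SSDE = {ssde(12.5):.1f} mGy")
```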
Procedia PDF Downloads 220
19048 The Beta-Fisher Snedecor Distribution with Applications to Cancer Remission Data
Authors: K. A. Adepoju, O. I. Shittu, A. U. Chukwu
Abstract:
In this paper, a new four-parameter generalized version of the Fisher-Snedecor distribution, called the Beta-F distribution, is introduced. A comprehensive account of the statistical properties of the new distribution is given. Formal expressions for the cumulative distribution function, moments, moment generating function, and maximum likelihood estimation, as well as its Fisher information, are obtained. The flexibility and robustness of this distribution are demonstrated using cancer remission time data. The new distribution can be used in most applications where the assumption underlying the use of other lifetime distributions is violated.
Keywords: Fisher-Snedecor distribution, Beta-F distribution, outlier, maximum likelihood method
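The abstract does not print the density, but the standard beta-generated construction that the name "Beta-F" points to takes the F distribution as the baseline; a hedged sketch assuming that construction:

```python
import numpy as np
from scipy.stats import f as fisher_f
from scipy.special import beta as beta_fn

def beta_f_pdf(x, a, b, d1, d2):
    """Density of the beta-generated F distribution:
    f(x) = g(x) * G(x)**(a-1) * (1 - G(x))**(b-1) / B(a, b),
    where g and G are the pdf and cdf of Fisher-Snedecor F(d1, d2)."""
    g = fisher_f.pdf(x, d1, d2)
    G = fisher_f.cdf(x, d1, d2)
    return g * G ** (a - 1) * (1 - G) ** (b - 1) / beta_fn(a, b)

x = np.linspace(0.01, 5.0, 5)
print(beta_f_pdf(x, a=2.0, b=1.5, d1=4, d2=10))
```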
Procedia PDF Downloads 347
19047 Design Flood Estimation in Satluj Basin: Challenges for Sunni Dam Hydro Electric Project, Himachal Pradesh, India
Authors: Navneet Kalia, Lalit Mohan Verma, Vinay Guleria
Abstract:
Introduction: Design flood studies are essential for the effective planning and functioning of water resource projects. Design flood estimation for the Sunni Dam Hydro Electric Project, located on the river Satluj in the state of Himachal Pradesh, India, was a big challenge: the river flows through the Himalayan region from Tibet to India and has a large catchment area of varying topography, climate, and vegetation. No discharge data were available for the part of the river in Tibet, whereas for India they were available only at Khab, Rampur, and Luhri. Estimating the design flood using standard methods was therefore not possible. This challenge was met using two different approaches for the upper (snow-fed) and lower (rain-fed) catchments, based on flood frequency analysis and hydro-meteorological analysis, respectively. i) For the catchment up to the Khab gauging site (sub-catchment C1), the flood frequency approach was used. Around 90% of the catchment area up to Khab (46,300 sq km) is snow-fed and lies above 4,200 m. In view of the predominantly snow-fed area, the 1-in-10,000-year return period flood estimated at Khab by flood frequency analysis was taken as the Probable Maximum Flood (PMF). The flood peaks were taken from daily observed discharges at Khab, increased by 10% to make them instantaneous. The design flood of 4,184 cumec thus obtained was considered the PMF at Khab. ii) For the catchment between Khab and the Sunni Dam site (sub-catchment C2), the hydro-meteorological approach was used. This method is based on the catchment response to the observed rainfall pattern (Probable Maximum Precipitation, PMP) in a particular catchment area. The design flood computation mainly involves the estimation of a design storm hyetograph and the derivation of the catchment response function. A unit hydrograph is assumed to represent the response of the entire catchment area to a unit of rainfall. The main advantage of the hydro-meteorological approach is that it gives a complete flood hydrograph, which allows a realistic determination of its moderation effect while passing through a reservoir or a river reach. These studies derived the PMF for the catchment area between Khab and the Sunni Dam site using 1-day and 2-day PMP values of 232 and 416 cm, respectively. The PMF so obtained was 12,920.60 cumec. Final result: As the catchment area up to Sunni Dam was divided into two sub-catchments, the flood hydrograph for catchment C1 was routed through the connecting channel reach (the river Satluj) using the Muskingum method, and the design flood was then computed by adding the routed flood ordinates to the flood ordinates of catchment C2. A total design flood (i.e., 2-day PMF) with a peak of 15,473 cumec was obtained. Conclusion: Even though several factors are relevant when deciding which method to use for design flood estimation, data availability and the purpose of the study are the most important. Since we generally cannot wait for hydrological data of adequate quality and quantity to become available, flood estimation has to be done using whatever data exist; the method is selected according to the type of data available for a particular catchment.
Keywords: design flood, design storm, flood frequency, PMF, PMP, unit hydrograph
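The Muskingum routing step mentioned in the final result has a standard recursive form. A minimal sketch with an assumed storage constant K and weighting factor X (not the study's calibrated values), applied to a synthetic C1 hydrograph peaking at the quoted 4,184 cumec:

```python
import numpy as np

def muskingum_route(inflow: np.ndarray, K: float, X: float, dt: float) -> np.ndarray:
    """Route an inflow hydrograph through a river reach (Muskingum method).

    K: storage constant (h), X: weighting factor, dt: time step (h).
    O_2 = C0*I_2 + C1*I_1 + C2*O_1, with C0 + C1 + C2 = 1.
    """
    denom = K - K * X + 0.5 * dt
    c0 = (0.5 * dt - K * X) / denom
    c1 = (0.5 * dt + K * X) / denom
    c2 = (K - K * X - 0.5 * dt) / denom
    out = np.empty_like(inflow)
    out[0] = inflow[0]
    for i in range(1, inflow.size):
        out[i] = c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[i - 1]
    return out

# Synthetic C1 hydrograph shape peaking at 4,184 cumec around hour 12.
t = np.arange(0.0, 48.0, 1.0)
pmf_c1 = 4184.0 * np.exp(-0.5 * ((t - 12.0) / 4.0) ** 2)
routed = muskingum_route(pmf_c1, K=3.0, X=0.15, dt=1.0)
print(f"attenuated peak: {routed.max():.0f} cumec")
```

The routed ordinates would then be added to the C2 hydrograph ordinates, as the abstract describes.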
Procedia PDF Downloads 326
19046 Volume Estimation of Trees: An Exploratory Study on Rosewood Logging Within Forest Transition and Savannah Ecological Zones of Ghana
Authors: Albert Kwabena Osei Konadu
Abstract:
One of the endemic forest species of the savannah transition zones listed in Appendix II of the Convention on International Trade in Endangered Species (CITES) is rosewood, also known as Pterocarpus erinaceus or Krayie. Its economic viability has made it increasingly popular and in high demand. Ghana's forest resource management regime for these ecozones focuses mainly on conservation and very little on resource utilization. Consequently, commercial logging management standards are at a teething stage and not fully developed, leading to deficiencies in the monitoring of logging operations and the quantification of harvested tree volumes. The tree information form (TIF), a volume estimation and tracking regime, has proven to be an effective sustainable management tool for regulating timber resource extraction in the high forest zones of the country. This work aims to generate a TIF that can track and capture the requisite parameters to accurately estimate the volume of harvested rosewood within the forest-savannah transition zones. Tree information forms were created for three scenarios: individual billets, stacked billets, and conveying vessels. The study was limited by the use of the regulator's assigned volume as a benchmark, and by potential volume measurement error in the stacked-billet scenario due to spaces within the packed billets. The TIFs were field-tested to deduce the most viable option for tracking and estimating harvested volumes of rosewood using the Smalian and cubic volume estimation formulas. Overall, four districts were covered, with the individual billet, stacked billet, and conveying vessel scenarios registering mean volumes of 25.83 m3, 45.08 m3, and 32.6 m3, respectively. These volumes were validated by benchmarking against the volumes assigned by the Forestry Commission of Ghana and the known standard volumes of conveying vessels. The results indicated an underestimation of extracted volumes under the quota regime, a situation that could lead to unintended overexploitation of the species. The research revealed that the conveying vessel route is the most viable volume estimation and tracking regime for the sustainable management of Pterocarpus erinaceus, as it provided a more practical volume estimate and data extraction protocol.
Keywords: cubic volume formula, Smalian volume formula, Pterocarpus erinaceus, tree information form, forest transition and savannah zones, harvested tree volume
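Smalian's formula, one of the two formulas the study field-tested, computes a billet's volume from its two end cross-sections; a minimal sketch with illustrative billet dimensions:

```python
import math

def smalian_volume(d_small: float, d_large: float, length: float) -> float:
    """Smalian's formula: billet volume (m^3) from end diameters (m) and length (m).

    V = L * (A1 + A2) / 2, with A = pi * d**2 / 4 at each end.
    """
    a1 = math.pi * d_small ** 2 / 4
    a2 = math.pi * d_large ** 2 / 4
    return length * (a1 + a2) / 2

# Illustrative rosewood billet: 22 cm and 28 cm end diameters, 2.2 m long.
v = smalian_volume(0.22, 0.28, 2.2)
print(f"{v:.4f} m^3")  # summed over billets (or stacks, with a void correction)
```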
Procedia PDF Downloads 43