Search results for: time estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19495

18745 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana

Authors: Gautier Viaud, Paul-Henry Cournède

Abstract:

Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take genotype-by-environment interactions into account. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax with metaprogramming capabilities and exhibits high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale GreenLab model for the latter is thus presented, in which the surface area of each individual leaf can be simulated. It is assumed that the error made on the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used. 
Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days, using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, the data for a single individual need not be available at all times, nor do the times at which data are available need to be the same for all individuals. This makes it possible to discard data from image analysis when it is not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana’s growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
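A minimal sketch of the hybrid Gibbs-Metropolis idea described above, with a toy linear-Gaussian stand-in for the nonlinear growth model (all data, variances, and the number of "plants" are illustrative): individual parameters get a random-walk Metropolis step, while the population mean has a conjugate full conditional and is sampled directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical data: 5 "plants", each with noisy observations of an
# individual parameter theta_i drawn from a population N(mu, tau^2).
true_mu, tau, sigma = 10.0, 2.0, 1.0
theta_true = rng.normal(true_mu, tau, size=5)
data = [rng.normal(t, sigma, size=20) for t in theta_true]

def log_lik(theta_i, y):
    return -0.5 * np.sum((y - theta_i) ** 2) / sigma**2

theta = np.array([y.mean() for y in data])   # init individual parameters
mu = 0.0
mu_draws = []
for it in range(2000):
    # Metropolis step for each individual parameter (stands in for the
    # nonlinear growth model, where no conjugate update exists)
    for i, y in enumerate(data):
        prop = theta[i] + rng.normal(0, 0.3)
        log_a = (log_lik(prop, y) - log_lik(theta[i], y)
                 - 0.5 * ((prop - mu) ** 2 - (theta[i] - mu) ** 2) / tau**2)
        if np.log(rng.uniform()) < log_a:
            theta[i] = prop
    # Conjugate Gibbs step for the population mean (flat prior on mu):
    # the full conditional is N(mean(theta), tau^2 / n)
    mu = rng.normal(theta.mean(), tau / np.sqrt(len(theta)))
    mu_draws.append(mu)

posterior_mu = np.mean(mu_draws[500:])   # discard burn-in
```

In the paper's setting the Metropolis step evaluates the actual growth model, and further Gibbs steps update the population variances.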

Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models

Procedia PDF Downloads 303
18744 An Industrial Workplace Alerting and Monitoring Platform to Prevent Workplace Injury and Accidents

Authors: Sanjay Adhikesaven

Abstract:

Workplace accidents are a critical problem that causes many deaths, injuries, and financial losses. Climate change compounds the risks faced by industrial workers, as rising temperatures make working conditions more hazardous. To reduce such casualties, it is important to proactively find unsafe environments where injuries could occur, by detecting the use of personal protective equipment (PPE) and identifying unsafe activities. Thus, we propose an industrial workplace alerting and monitoring platform that detects PPE use and classifies unsafe activity in group settings involving multiple humans and objects over long periods of time. Our proposed method is the first to analyze prolonged actions involving multiple people or objects. It benefits from combining pose estimation with PPE detection in one platform. Additionally, we propose the first open-source annotated data set of video data from industrial workplaces, annotated with action classifications and detected PPE. The proposed system can be implemented within the surveillance cameras already present in industrial settings, making it a practical and effective solution.

Keywords: computer vision, deep learning, workplace safety, automation

Procedia PDF Downloads 102
18743 An Estimation of Rice Output Supply Response in Sierra Leone: A Nerlovian Model Approach

Authors: Alhaji M. H. Conteh, Xiangbin Yan, Issa Fofana, Brima Gegbe, Tamba I. Isaac

Abstract:

Rice is Sierra Leone’s staple food, and the nation imports over 120,000 metric tons annually due to a shortfall in domestic cultivation. The insufficient level of the crop's cultivation in Sierra Leone is caused by many problems, and this has led to an endlessly widening gap between supply and demand for the crop within the country. Consequently, the government spends heavily on the importation of this grain, which could otherwise have been cultivated domestically at a lower cost. Hence, this research attempts to explore the supply response of rice in Sierra Leone within the period 1980-2010. The Nerlovian adjustment model was applied to the Sierra Leone rice data set for the period 1980-2010. The estimated trend equations revealed that time had a significant effect on output, productivity (yield) and area (acreage) of rice within the period 1980-2010, generally at the 1% level of significance. The results showed that almost all of the growth in output was attributable to increases in the area cultivated to the crop. The time trend variable that was included for government policy intervention showed an insignificant effect on all the variables considered in this research. Both the short-run and long-run price responses were inelastic, since all their values were less than one. From the findings above, immediate actions that will lead to productivity growth in rice cultivation are required. To achieve this, the responsible agencies should provide extension service schemes to farmers as well as motivate them to adopt modern rice varieties and technology in their rice cultivation ventures.
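The Nerlovian partial-adjustment regression behind such studies can be sketched as follows; the series here is synthetic (the paper's 1980-2010 data are not reproduced), and the true coefficients are chosen to give an inelastic short-run response, as the abstract reports.

```python
import numpy as np

rng = np.random.default_rng(1)

# Nerlovian partial adjustment:  A_t = a + b*P_{t-1} + c*A_{t-1} + e_t,
# where A is (log) output/acreage and P is lagged (log) price.
T = 200
a, b, c = 1.0, 0.3, 0.5          # true short-run price response b < 1 (inelastic)
P = rng.uniform(1, 2, size=T)    # lagged price series (illustrative)
A = np.zeros(T)
for t in range(1, T):
    A[t] = a + b * P[t - 1] + c * A[t - 1] + rng.normal(0, 0.05)

# OLS on the partial-adjustment regression
X = np.column_stack([np.ones(T - 1), P[:-1], A[:-1]])
coef, *_ = np.linalg.lstsq(X, A[1:], rcond=None)
a_hat, b_hat, c_hat = coef

short_run = b_hat                  # short-run price response
long_run = b_hat / (1 - c_hat)     # long-run response implied by adjustment speed
```

Both responses come out below one here, matching the inelastic pattern the abstract describes.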

Keywords: Nerlovian adjustment model, price elasticities, Sierra Leone, trend equations

Procedia PDF Downloads 233
18742 Scenario Based Reaction Time Analysis for Seafarers

Authors: Umut Tac, Leyla Tavacioglu, Pelin Bolat

Abstract:

The human factor has been one of the elements that cause vulnerabilities which can result in accidents in maritime transportation. When the roots of accidents based on the human factor are analyzed, gaps in performing cognitive abilities (reaction time, attention, memory, etc.) emerge as the main reasons for the vulnerabilities in the complex environment of maritime systems. Thus, cognitive processes in maritime systems have become an important subject that should be investigated comprehensively. At this point, neurocognitive tests such as reaction time tests have been used as coherent tools that enable valid assessments of cognitive status. In this respect, the aim of this study is to evaluate the reaction time (response time or latency) of seafarers with respect to their occupational experience and age. For this study, reaction times for different maneuvers were recorded while the participants performed a sea voyage in a simulator running a predefined scenario. After collecting the reaction time data, a statistical analysis was carried out to understand the relation between occupational experience and cognitive abilities.

Keywords: cognitive abilities, human factor, neurocognitive test battery, reaction time

Procedia PDF Downloads 298
18741 Price Effect Estimation of Tobacco on Low-wage Male Smokers: A Causal Mediation Analysis

Authors: Kawsar Ahmed, Hong Wang

Abstract:

The study's goal was to estimate the causal mediation impact of a tobacco tax before and after price hikes among low-income male smokers, with a particular emphasis on the pathway framework for effect estimation with continuous and dichotomous variables. From July to December 2021, cross-sectional observational data (n=739) were collected from low-wage smokers in Bangladesh. The quasi-Bayesian technique, a binomial probit model, and sensitivity analysis based on simulation with the R mediation package were used to estimate the effects. After a price rise for tobacco products, the average number of cigarette or bidi sticks consumed decreased from 6.7 to 4.56. Rising tobacco product prices have a direct effect on low-income people's decisions to quit or reduce their daily smoking: Average Causal Mediation Effect (ACME) [effect=2.31, 95% confidence interval (C.I.) = (4.71-0.00), p<0.01], Average Direct Effect (ADE) [effect=8.6, 95% C.I. = (6.8-0.11), p<0.001], and overall significant effects (p<0.001). The proportion of the effect on smoking choice mediated through income was 26.1% following the price rise. The curves of ACME and ADE, based on the observed coefficients of determination in the sensitivity analysis, support the hypothesized model as a substantial consequence of the price rise. To reduce smoking, price increases through taxation have a positive causal mediation via income that affects the decision to limit tobacco use, and they support healthcare policy for low-income men.
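For a linear system, the ACME/ADE decomposition reduces to the product-of-coefficients rule, which can be sketched as below (the R mediation package additionally uses quasi-Bayesian simulation for uncertainty; all variables and effect sizes here are invented for illustration, not the paper's estimates).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear mediation: treatment = price rise (0/1),
# mediator = disposable income, outcome = sticks smoked per day.
n = 5000
treat = rng.integers(0, 2, size=n)
income = 5.0 - 1.0 * treat + rng.normal(0, 1, n)        # price rise lowers income
smoked = 8.0 - 1.5 * treat + 0.5 * income + rng.normal(0, 1, n)

def ols(X, y):
    """Least-squares fit with an intercept prepended."""
    return np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)[0]

# Mediator model: income ~ treat ;  Outcome model: smoked ~ treat + income
a = ols(treat, income)                              # a[1]: treatment -> mediator
b = ols(np.column_stack([treat, income]), smoked)   # b[1]: ADE, b[2]: mediator -> outcome

acme = a[1] * b[2]          # average causal mediation effect
ade = b[1]                  # average direct effect
prop_mediated = acme / (acme + ade)
```

With these made-up coefficients the mediated share is about a quarter of the total effect, analogous in spirit to the 26.1% figure reported above.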

Keywords: causal mediation analysis, directed acyclic graphs, tobacco price policy, sensitivity analysis, pathway estimation

Procedia PDF Downloads 112
18740 Estimation of Wind Characteristics and Energy Yield at Different Towns in Libya

Authors: Farag Ahwide, Souhel Bousheha

Abstract:

A technical assessment has been made of electricity generation at different towns in Libya, considering wind turbines ranging between the Vestas V80-2.0 MW and V112-3.0 MW, with the air density taken as 1.225 kg/m3. Wind speed was measured every 3 hours at a height of 10 m over a period of 10 years, between 2000 and 2009, at towns located on the Mediterranean coast and in the desert: Derna 1, Derna 2, Shahat, Benghazi, Ajdabya, Sirte, Misurata, Tripoli-Airport, Al-Zawya, Al-Kofra, Sabha and Nalut. The work presents a long-term wind data analysis in terms of annual, seasonal, monthly and diurnal variations at these sites. Wind power density at different heights has been studied. An Excel sheet program was used to calculate the values of wind power density and wind speed frequency for the stations, and their seasonal values have been estimated. The capacity factor at rated wind speed was estimated for 10 different wind turbines and used to determine the expected yearly energy yield of a wind energy conversion system (WECS), considering wind turbines ranging between 600 kW and 3000 kW.
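The spreadsheet calculation described above boils down to the wind power density formula and a capacity-factor energy estimate; a minimal sketch (the wind speed record and the capacity factor are illustrative values, not the Libyan data):

```python
# Wind power density and a rough annual energy yield.
RHO = 1.225  # air density, kg/m^3 (as assumed in the study)

def power_density(v):
    """Wind power density in W/m^2 for wind speed v in m/s: 0.5 * rho * v^3."""
    return 0.5 * RHO * v**3

# One day of 3-hourly wind speeds (m/s), hypothetical values
speeds = [4.0, 5.5, 6.0, 7.2, 8.0, 6.5, 5.0, 4.5]
mean_pd = sum(power_density(v) for v in speeds) / len(speeds)

# Annual energy of a 2.0 MW turbine with an assumed capacity factor of 0.30
rated_kw = 2000.0
capacity_factor = 0.30
annual_energy_mwh = rated_kw * capacity_factor * 8760 / 1000.0
```

Note the cubic dependence on wind speed: averaging the power density of the speed record, rather than taking the power density of the average speed, is what makes the frequency analysis above necessary.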

Keywords: energy yield, wind turbines, wind speed, wind power density

Procedia PDF Downloads 298
18739 An Improved Tracking Approach Using Particle Filter and Background Subtraction

Authors: Amir Mukhtar, Likun Xia

Abstract:

An improved, robust and efficient visual target tracking algorithm using particle filtering is proposed. Particle filtering has proven very successful for estimation in non-Gaussian and non-linear problems. In this paper, the particle filter is used with a color feature to estimate the target state over time. Color distributions are applied because this feature is scale and rotation invariant, robust to partial occlusion, and computationally efficient. Since color-based particle filter tracking often leads to inaccurate results when light intensity changes during a video stream, the performance is made more robust by choosing the YIQ color scheme. Tracking is performed by comparing chrominance histograms of the target and candidate positions (particles). Furthermore, a background subtraction technique is used for size estimation of the target. A qualitative evaluation of the proposed algorithm is performed on several real-world videos. The experimental results demonstrate that the improved algorithm can track moving objects very well under illumination changes, occlusion and moving backgrounds.
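A toy one-dimensional sketch of histogram-weighted particle filtering (the "frame", window size and noise parameters are invented for illustration; the YIQ conversion and background subtraction steps of the actual method are omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
WIDTH, WIN = 200, 10  # frame width and tracking-window size (illustrative)

def make_frame(pos):
    """1-D 'frame': bright target of width WIN on a dark background."""
    frame = rng.uniform(0.0, 0.2, WIDTH)
    frame[pos:pos + WIN] = rng.uniform(0.8, 1.0, WIN)
    return frame

def histogram(patch, bins=8):
    h, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def bhattacharyya(p, q):
    """Similarity of two normalized histograms (1.0 = identical)."""
    return float(np.sum(np.sqrt(p * q)))

# Reference histogram taken from the target's first appearance
ref_hist = histogram(make_frame(50)[50:50 + WIN])

n_particles, pos_true = 100, 50
particles = rng.integers(0, WIDTH - WIN, n_particles)
for _ in range(20):
    pos_true = min(pos_true + 2, WIDTH - WIN - 1)   # target drifts right
    frame = make_frame(pos_true)
    # Predict: diffuse the particles; Update: weight by histogram similarity
    particles = np.clip(particles + rng.integers(-4, 5, n_particles),
                        0, WIDTH - WIN - 1)
    w = np.array([bhattacharyya(ref_hist, histogram(frame[p:p + WIN]))
                  for p in particles]) + 1e-12
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample

estimate = int(np.median(particles))
```

The real tracker replaces the 1-D windows with 2-D image regions and the toy intensity histogram with chrominance histograms in YIQ space.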

Keywords: tracking, particle filter, histogram, corner points, occlusion, illumination

Procedia PDF Downloads 380
18738 Subway Ridership Estimation at a Station-Level: Focus on the Impact of Bus Demand, Commercial Business Characteristics and Network Topology

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The primary purpose of this study is to develop a methodological framework to predict daily subway ridership at the station level and to examine the association between subway ridership and bus demand, incorporating the commercial business facilities in the vicinity of each subway station. Socio-economic characteristics, land use, and the built environment may all have an impact on subway ridership. However, one should consider not only the endogenous relationship between bus and subway demand but also the characteristics of commercial business within a subway station’s sphere of influence, and the topology of the integrated transit network. A statistical approach to estimating subway ridership at the station level should therefore account for the endogeneity and heteroscedasticity issues that might arise in the subway ridership prediction model. This study focused both on discovering the impacts of bus demand, commercial business characteristics, and network topology on subway ridership and on developing a more precise subway ridership estimation that accounts for these statistical biases. The spatial scope of the study covers the entire city of Seoul in South Korea and includes 243 stations, with the temporal scope set at twenty-four hours in one-hour panels. The subway and bus ridership data were collected from Seoul Smart Card data for 2015 and 2016. A Three-Stage Least Squares (3SLS) approach was applied to develop the daily subway ridership model, capturing the endogeneity and heteroscedasticity between bus and subway demand. The independent variables incorporated in the modeling process were commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. 
As a result, it was found that bus ridership and subway ridership are endogenous to each other, with significantly positive coefficients, meaning that one transit mode can increase the other mode's ridership. In other words, subway and bus have a mutual rather than a competitive relationship. The commercial business characteristics are the most critical dimension among the independent variables. The commercial business facility rate variables in the paper comprise six types: medical, educational, recreational, financial, food service, and shopping. From the model results, a higher rate of medical, financial, shopping, and food service facilities leads to an increase in subway ridership at a station, while recreational and educational facilities are associated with lower subway ridership. Complex network theory was applied to estimate integrated network topology measures covering the entire Seoul transit network system, within a framework for assessing their impact on subway ridership. The centrality measures were found to be significant and showed a positive sign, indicating that higher centrality leads to more subway ridership at the station level. Out-of-sample model accuracy tests showed that the 3SLS model has a lower mean square error than OLS, demonstrating that the 3SLS approach is plausible for estimating subway ridership more accurately. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (2017R1C1B2010175).
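The endogeneity problem can be sketched with a two-equation simultaneous system; the instrumental (2SLS) stages below are the core of 3SLS, which additionally applies a SUR-type GLS step across equations. All coefficients and variables here are invented for illustration, not the Seoul estimates.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000

# Exogenous regressors: x1 (e.g. commercial floor area near a station) enters
# the subway equation, x2 (e.g. bus-route density) enters the bus equation.
x1, x2 = rng.normal(0, 1, n), rng.normal(0, 1, n)
e1, e2 = rng.normal(0, 1, n), rng.normal(0, 1, n)

# Structural system with mutual (endogenous) dependence, solved analytically:
#   subway = 10 + 0.5*bus    + x1 + e1
#   bus    =  5 + 0.3*subway + x2 + e2
subway = (12.5 + 0.5 * x2 + 0.5 * e2 + x1 + e1) / (1 - 0.5 * 0.3)
bus = 5 + 0.3 * subway + x2 + e2

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Stage 1: project the endogenous regressor on all exogenous variables
Z = np.column_stack([ones, x1, x2])
bus_hat = Z @ ols(Z, bus)
# Stage 2: regress subway ridership on the fitted values (2SLS)
coef_2sls = ols(np.column_stack([ones, bus_hat, x1]), subway)

# Naive OLS ignoring endogeneity is biased upward in this setup
coef_ols = ols(np.column_stack([ones, bus, x1]), subway)
```

The 2SLS coefficient recovers the true structural effect of bus demand (0.5 here), while the naive OLS estimate absorbs the feedback loop and overstates it.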

Keywords: subway ridership, bus ridership, commercial business characteristic, endogeneity, network topology

Procedia PDF Downloads 144
18737 Assessment of Climate Change Impacts on the Hydrology of Upper Guder Catchment, Upper Blue Nile

Authors: Fikru Fentaw Abera

Abstract:

Climate change alters regional hydrologic conditions and results in a variety of impacts on water resource systems. Such hydrologic changes will affect almost every aspect of human well-being. The goal of this paper is to assess the impact of climate change on the hydrology of the Upper Guder catchment, located in the northwest of Ethiopia. GCM-derived scenarios (HadCM3 A2a and B2a SRES emission scenarios) were used for the climate projection. The statistical downscaling model (SDSM) was used to generate possible future local meteorological variables in the study area. The downscaled data were then used as input to the Soil and Water Assessment Tool (SWAT) model to simulate the corresponding future streamflow regime in the Upper Guder catchment of the Abay River Basin. A semi-distributed hydrological model, SWAT, was developed, and Generalized Likelihood Uncertainty Estimation (GLUE) was utilized for the uncertainty analysis. GLUE is linked with SWAT in the Calibration and Uncertainty Program known as SWAT-CUP. Three benchmark periods were simulated for this study: the 2020s, 2050s and 2080s. The time series generated by the HadCM3 GCM and SDSM indicate a significant increasing trend in maximum and minimum temperature and a slight increasing trend in precipitation for both the A2a and B2a emission scenarios at both the Gedo and Tikur Inch stations for all three benchmark periods. The hydrologic impact analysis, with the downscaled temperature and precipitation time series as input to the SWAT model, shows that annual flow volume may increase by up to 35% for both emission scenarios in all three future benchmark periods. All seasons show an increase in flow volume for both the A2a and B2a emission scenarios over all time horizons. 
Potential evapotranspiration in the catchment also will increase annually on average 3-15% for the 2020s and 7-25% for the 2050s and 2080s for both A2a and B2a emissions scenarios.

Keywords: climate change, Guder sub-basin, GCM, SDSM, SWAT, SWAT-CUP, GLUE

Procedia PDF Downloads 364
18736 Study of the Responding Time for Low Permeability Reservoirs

Authors: G. Lei, P. C. Dong, X. Q. Cen, S. Y. Mo

Abstract:

One of the most significant parameters describing the effect of water flooding in porous media is the flood-response time, an important index in oilfield development. The responding time in low permeability reservoirs is usually calculated by the method of steady-state successive substitution, neglecting the effect of medium deformation. Numerous studies show that media deformation has an important impact on the development of low permeability reservoirs and cannot be neglected. On the basis of a streamline tube model, we developed a method to interpret the responding time with a medium deformation factor. The results show that the media deformation factor, threshold pressure gradient and well spacing have a significant effect on the flood-response time: the greater the media deformation factor, threshold pressure gradient or well spacing, the lower the flood-response time. The responding time varies between streamlines: as the angle with the main streamline increases, the water flooding response time is delayed along a parabola-shaped profile.

Keywords: low permeability, flood-response time, threshold pressure gradient, medium deformation

Procedia PDF Downloads 499
18735 Hierarchical Piecewise Linear Representation of Time Series Data

Authors: Vineetha Bettaiah, Heggere S. Ranganath

Abstract:

This paper presents a Hierarchical Piecewise Linear Approximation (HPLA) for the representation of time series data, in which the time series is treated as a curve in the time-amplitude image space. The curve is partitioned into segments by choosing perceptually important points as break points. Each segment between adjacent break points is recursively partitioned into two segments, at the best point or the midpoint, until the error between the approximating line and the original curve becomes less than a pre-specified threshold. The HPLA representation achieves dimensionality reduction while preserving prominent local features and the general shape of the time series. The representation permits coarse-to-fine processing at different levels of detail, allows flexible definitions of similarity based on mathematical measures or general time series shape, and supports time series data mining operations including query by content, clustering and classification based on whole or subsequence similarity.
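The recursive split described above can be sketched as follows, with the "best point" taken as the sample farthest from the chord joining the segment ends (a common choice; the paper's exact break-point criterion may differ):

```python
def max_deviation(y, i, j):
    """Index and value of the largest vertical distance from the chord (i, j)."""
    best_k, best_d = i, 0.0
    for k in range(i + 1, j):
        # Line through (i, y[i]) and (j, y[j]) evaluated at k
        interp = y[i] + (y[j] - y[i]) * (k - i) / (j - i)
        d = abs(y[k] - interp)
        if d > best_d:
            best_k, best_d = k, d
    return best_k, best_d

def hpla(y, i, j, tol, breaks):
    """Recursively split segment [i, j] at its worst point until error <= tol."""
    k, d = max_deviation(y, i, j)
    if d > tol:
        hpla(y, i, k, tol, breaks)
        breaks.append(k)
        hpla(y, k, j, tol, breaks)

# A triangle-wave toy series: two "perceptually important" turning points
series = [0, 1, 2, 3, 4, 3, 2, 1, 0, 1, 2, 3, 4]
breaks = [0]
hpla(series, 0, len(series) - 1, 0.5, breaks)
breaks.append(len(series) - 1)
# breaks now lists the segment boundaries of the piecewise linear fit
```

The recursion depth gives the hierarchy: early break points capture the general shape, deeper ones refine local detail, which is what enables the coarse-to-fine processing mentioned above.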

Keywords: data mining, dimensionality reduction, piecewise linear representation, time series representation

Procedia PDF Downloads 275
18734 Heroin Withdrawal, Prison and Multiple Temporalities

Authors: Ian Walmsley

Abstract:

The aim of this paper is to explore the influence of time and temporality on the experience of coming off heroin in prison. The presentation draws on qualitative data collected during a small-scale pilot study of the role of self-care in the process of coming off drugs in prison. Time and temporality emerged as a key theme in the interview transcripts. Drug-dependent prisoners' experience of time in prison has not been recognized in the research literature. Instead, the literature on prison time typically views prisoners as a homogenous group or tends to focus on the influence of aging and gender on prison time. Furthermore, there is a tendency in the literature on prison drug treatment and recovery to conceptualize drug-dependent prisoners as passive recipients of prison healthcare, rather than active agents. In addressing these gaps, this paper argues that drug-dependent prisoners experience multiple temporalities involving an interaction between the body-times of the drug-dependent prisoner and the economy of time in prison. One consequence of this interaction is the feeling that they are doing, at this point in their prison sentence, double prison time. The second part of the argument is that time and temporality were a means through which they governed their withdrawing bodies. In addition, this paper will comment on the challenges of prison research in England.

Keywords: heroin withdrawal, time and temporality, prison, body

Procedia PDF Downloads 276
18733 Detection of Image Blur and Its Restoration for Image Enhancement

Authors: M. V. Chidananda Murthy, M. Z. Kurian, H. S. Guruprasad

Abstract:

Image restoration in the process of communication is one of the emerging fields in image processing. Motion analysis is the simplest approach to detecting motion in an image, and its applications are widespread in areas such as surveillance, remote sensing, the film industry, and the navigation of autonomous vehicles. A scene may contain multiple moving objects; using motion analysis techniques, an image degraded by the movement of objects can be enhanced by filling in occluded regions and reconstructing transparent objects, and the motion blurring can be removed. This paper presents the design and comparison of various motion detection and enhancement filters. Median filtering, linear image deconvolution, the inverse filter, the pseudo-inverse filter, the Wiener filter, the Lucy-Richardson filter and blind deconvolution are used to remove the blur. In this work, we have considered different types and different amounts of blur for the analysis. Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are used to evaluate the performance of the filters. The designed system has been implemented in Matlab and tested on synthetic and real-time images.
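The two evaluation metrics mentioned above are standard and easy to state exactly; a minimal sketch on a tiny made-up "image" (pixel values are illustrative):

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized images (lists of rows)."""
    n = sum(len(row) for row in a)
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)) / n

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer restoration."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * math.log10(peak**2 / m)

original = [[100, 120], [130, 140]]
restored = [[101, 119], [131, 141]]  # off by 1 everywhere -> MSE = 1
```

A perfect restoration gives infinite PSNR; in practice the filter with the highest PSNR (lowest MSE) against the unblurred reference is judged best.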

Keywords: image enhancement, motion analysis, motion detection, motion estimation

Procedia PDF Downloads 287
18732 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data

Authors: Georgiana Onicescu, Yuqian Shen

Abstract:

Due to the complex nature of geo-referenced data, multicollinearity of the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we propose a two-stage variable selection method that extends the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we perform variable selection using the Bayesian Lasso and several other variable selection approaches. Then, in stage II, we perform model selection with only the variables selected in stage I and again compare the methods. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered cases in which all candidate risk factors are independently normally distributed, or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, a binary indicator and the combination of a binary indicator and the Lasso, were considered and compared as alternatives. The simulation results indicated that the proposed two-stage Bayesian Lasso variable selection method has the best performance for both the independent and dependent cases considered. Compared with the one-stage approach and the two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
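The two-stage idea can be sketched in its non-Bayesian, non-spatial core: a Lasso selection pass over correlated candidate factors, followed by a refit on the selected subset. The Lasso here is solved by plain iterative soft-thresholding (ISTA); data, dimensions and the penalty are illustrative, and the paper's Bayesian spatial machinery is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    """Lasso by iterative soft-thresholding (ISTA); returns the coefficients."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

# Correlated candidate risk factors; only the first two truly matter
n, p = 300, 8
base = rng.normal(0, 1, (n, p))
X = base + 0.5 * rng.normal(0, 1, (n, 1))   # shared component -> multicollinearity
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.5, n)

# Stage I: variable selection with the Lasso
beta_lasso = lasso_ista(X, y, lam=20.0)
selected = [j for j in range(p) if abs(beta_lasso[j]) > 1e-6]

# Stage II: refit using only the selected variables (here: plain least squares)
beta_refit, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
```

The refit in stage II removes the shrinkage bias that the Lasso penalty imposes on the surviving coefficients, which is the motivation for the two-stage design.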

Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection

Procedia PDF Downloads 143
18731 Empirical Modeling of Air Dried Rubberwood Drying System

Authors: S. Khamtree, T. Ratanawilai, C. Nuntadusit

Abstract:

Rubberwood is a crucial commercial timber in Southern Thailand. All processes in rubberwood production depend on the knowledge and expertise of the technicians, especially the drying process. This research aims to develop an empirical model for the drying kinetics of rubberwood. During the experiment, the temperature of the hot air and the average air flow velocity were kept at 80-100 °C and 1.75 m/s, respectively. Drying was considered achieved when the moisture content in the samples fell below 12% on a dry basis. The drying kinetics were simulated using an empirical solver. The experimental results illustrated that the moisture content was reduced as the drying temperature and time were increased. The fit of the moisture ratio between the empirical and the experimental model was tested with three statistical parameters, the coefficient of determination (R²), the Root Mean Square Error (RMSE) and Chi-square (χ²), to assess the accuracy of the parameters. The experimental moisture ratio fitted the empirical model well. Additionally, the results indicated that the Henderson and Pabis model described the drying of rubberwood with a suitable level of agreement, presenting an excellent estimation (R² = 0.9963) of the moisture movement compared to the other models. Therefore, the empirical results are valid and can be applied in future experiments.
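Fitting the Henderson and Pabis model, MR(t) = a·exp(-k·t), can be sketched via log-linearization (a quick alternative to nonlinear least squares; the drying-time series below is synthetic, not the paper's measurements):

```python
import numpy as np

# Henderson-Pabis thin-layer drying model: MR(t) = a * exp(-k * t).
# Linearize: ln(MR) = ln(a) - k*t, then fit a straight line.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])      # drying time, h (illustrative)
noise = np.array([0.5, -1.0, 1.0, 0.0, -0.5, 1.0, -1.0])
mr = 1.02 * np.exp(-0.45 * t) * (1 + 0.01 * noise)      # synthetic moisture ratio

slope, intercept = np.polyfit(t, np.log(mr), 1)
k_hat, a_hat = -slope, np.exp(intercept)

# Goodness of fit on the original moisture-ratio scale
pred = a_hat * np.exp(-k_hat * t)
ss_res = np.sum((mr - pred) ** 2)
ss_tot = np.sum((mr - mr.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
```

RMSE and χ² are computed from the same residuals; the reported R² = 0.9963 plays the role of `r_squared` here.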

Keywords: empirical models, rubberwood, moisture ratio, hot air drying

Procedia PDF Downloads 267
18730 A Time Delay Neural Network for Prediction of Human Behavior

Authors: A. Hakimiyan, H. Namazi

Abstract:

Human behavior is defined as the range of behaviors exhibited by humans, influenced by different internal or external sources. Human behavior is the subject of much research in different areas of psychology and neuroscience. Despite some advances in studies related to the forecasting of human behavior, few works consider the effect of the time delay between the presence of a stimulus and the related human response. Analysis of the EEG signal as a fractal time series is one of the major tools for studying human behavior; in other words, human brain activity is reflected in the EEG signal. Artificial neural networks have proven useful in forecasting the behavior of different systems, especially in engineering. In this research, a time delay neural network is trained and tested in order to forecast the human EEG signal and, subsequently, human behavior. This neural network, by introducing a time delay, takes care of the lag between the occurrence of the stimulus and the rise of the subsequent action potential. The results of this study are useful not only for the fundamental understanding of human behavior forecasting, but also for different areas of brain research, such as seizure prediction.
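The essential ingredient of a time delay neural network is the tapped delay line: each prediction sees a window of past samples. A minimal sketch on a synthetic oscillatory "EEG-like" signal, with a linear readout standing in for the network (a full TDNN adds a nonlinear hidden layer; all signal parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic oscillatory signal standing in for an EEG trace (illustrative)
t = np.arange(0, 20, 0.05)
signal = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.normal(0, 1, len(t))

DELAY = 5  # taps: the network sees x[t-1] ... x[t-DELAY]

# Delay-embedded design matrix: row i holds the DELAY samples before y[i]
X = np.column_stack([signal[DELAY - k - 1:len(signal) - k - 1]
                     for k in range(DELAY)])
y = signal[DELAY:]

# Linear readout on the tapped delay line
Xb = np.column_stack([np.ones(len(y)), X])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = Xb @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

The residual error approaches the noise floor of the signal, since the deterministic oscillation is fully predictable from its recent past; the delay embedding is what lets the model exploit the stimulus-to-response lag.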

Keywords: human behavior, EEG signal, time delay neural network, prediction, lagging time

Procedia PDF Downloads 663
18729 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of the process dispersion and location parameters, respectively, under the assumption of independent and normally distributed data. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of the traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g. occasional outliers in the rational subgroups in the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For a more efficient application of control charts, estimators robust against the contamination that may exist in Phase I are required. In the current study, we present a simple approach to constructing robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function in the estimation of the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator and the M-estimator of location with Huber and logistic psi-functions in the estimation of the process location parameter. 
The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from them are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. It is found that the robust estimators yield parameter estimates with higher efficiency against all types of contamination, and that Xbar charts constructed using robust estimators have higher power in detecting disturbances than conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups, and employing different combinations of dispersion and location estimators on subgroups and individual observations, are found to improve the performance of Xbar charts.
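Two of the robust estimators named above are simple enough to sketch directly: the Hodges-Lehmann location estimate and a consistency-scaled MAD for dispersion (the study's Qn and M-estimators are omitted; the Phase I data and contamination below are invented for illustration).

```python
import numpy as np

def hodges_lehmann(x):
    """Median of all pairwise averages (Walsh averages): a robust location estimate."""
    x = np.asarray(x, dtype=float)
    walsh = [(x[i] + x[j]) / 2 for i in range(len(x)) for j in range(i, len(x))]
    return float(np.median(walsh))

def median_abs_dev(x, scale=1.4826):
    """MAD scaled for consistency with the normal standard deviation."""
    x = np.asarray(x, dtype=float)
    return scale * float(np.median(np.abs(x - np.median(x))))

rng = np.random.default_rng(7)
# Phase I subgroups from N(10, 1), with one subgroup contaminated by outliers
subgroups = [rng.normal(10, 1, 5) for _ in range(25)]
subgroups[3][:2] = 40.0   # localized contamination

pooled = np.concatenate(subgroups)
center = hodges_lehmann(pooled)
spread = median_abs_dev(pooled)

n = 5  # subgroup size
ucl = center + 3 * spread / np.sqrt(n)   # robust Xbar chart limits
lcl = center - 3 * spread / np.sqrt(n)
```

Unlike the pooled mean and standard deviation, both estimates stay near the in-control values (10 and 1) despite the contaminated subgroup, so the resulting limits are not widened by the outliers.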

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 190
18728 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed that combines the probability of recovering from a severe flooding event with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contributed by each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency equals the average of the input distributions' means without incorporating the additional information provided by each distribution's variance. 
When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
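For the special case of two normal parent distributions, the conflation (the normalized product of the densities) is again normal with inverse-variance weights, which makes the "favor the distribution with minimum variation" property easy to see. A minimal sketch, with illustrative recovery-time numbers that are not the study's data (the study itself works with exponential distributions):

```python
# Conflation of two normal densities: Q(x) is proportional to f1(x) * f2(x).
# For normals the product is again normal, with inverse-variance weights, so
# the conflated mean leans toward the parent with the smaller variance.
def conflate_normals(m1, v1, m2, v2):
    w1, w2 = 1.0 / v1, 1.0 / v2
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mean, var

# Hypothetical recovery-time inputs (days): severe-event model (mean 30,
# variance 4) vs nuisance-event model (mean 20, variance 16).
m, v = conflate_normals(30.0, 4.0, 20.0, 16.0)
print(m, v)   # mean sits closer to 30, the lower-variance parent
```

Note that the conflated variance is smaller than either parent variance, reflecting the information gained by combining both sources.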

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 103
18727 Wind Resource Estimation and Economic Analysis for Rakiraki, Fiji

Authors: Kaushal Kishore

Abstract:

Immense amounts of imported fuel are used in Fiji for electricity generation, transportation, and miscellaneous household work. To reduce this dependency on fossil fuels, paramount importance has been given to promoting the utilization of renewable energy sources for power generation and to reducing environmental degradation. Among the many renewable energy sources, wind has been identified as one of the most widely available in Fiji. In this study, a wind resource assessment has been carried out for three locations in Rakiraki, Fiji: Rokavukavu, Navolau, and Tuvavatu. The average wind speeds at 55 m above ground level (a.g.l.) at the Rokavukavu, Navolau, and Tuvavatu sites are 5.91 m/s, 8.94 m/s, and 8.13 m/s, with turbulence intensities of 14.9%, 17.1%, and 11.7%, respectively. The moment fitting method has been used to estimate the Weibull parameters and the power density at each site. A high-resolution wind resource map for the three locations has been developed using the Wind Atlas Analysis and Application Program (WAsP). The results obtained from WAsP exhibited good wind potential at the Navolau and Tuvavatu sites. A wind farm comprising six Vergnet 275 kW wind turbines at each site has been proposed for Navolau and Tuvavatu. The annual energy production (AEP) for each wind farm is estimated, and an economic analysis is performed. The economic analysis for the proposed wind farms at Navolau and Tuvavatu showed payback periods of 5 and 6 years, respectively.
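The moment fitting step can be sketched as follows: the shape parameter k is obtained from the coefficient of variation (here via the common Justus power-law approximation rather than exact moment inversion), the scale c from the mean, and the mean power density from c and k. The standard deviation below is assumed for illustration; only the 8.94 m/s mean echoes the Navolau site.

```python
import math

def weibull_moments(mean_u, std_u, rho=1.225):
    """Estimate Weibull k (shape) and c (scale, m/s) from the wind-speed mean
    and standard deviation, then the mean wind power density (W/m^2)."""
    k = (std_u / mean_u) ** -1.086          # Justus moment-based approximation
    c = mean_u / math.gamma(1.0 + 1.0 / k)  # scale from the first moment
    power_density = 0.5 * rho * c**3 * math.gamma(1.0 + 3.0 / k)
    return k, c, power_density

# Mean of the same order as the Navolau site (8.94 m/s); the 3.5 m/s standard
# deviation is an assumed value, not taken from the study.
k, c, pd = weibull_moments(8.94, 3.5)
print(round(k, 2), round(c, 2), round(pd, 1))
```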

Keywords: annual energy production, Rakiraki Fiji, turbulence intensity, Weibull parameter, wind speed, Wind Atlas Analysis and Application Program

Procedia PDF Downloads 188
18726 Tracing Sources of Sediment in an Arid River, Southern Iran

Authors: Hesam Gholami

Abstract:

Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in the catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework in the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. 
The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central, and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%), and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique to quantify the sources of sediments in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
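The GLUE idea, sampling many candidate source-contribution sets, scoring each against the measured tracer signature, and keeping the "behavioral" ones, can be sketched as below. The tracer values, the acceptance threshold, and the two-tracer setup are invented for illustration and do not reproduce the study's geochemical data.

```python
import random

# GLUE-style sketch: sample source proportions for three sub-basin sources on
# the simplex, score each candidate against a target sediment signature, keep
# the behavioral sets, and summarize their spread.
random.seed(0)

sources = {"west": [10.0, 5.0], "central": [20.0, 2.0], "east": [30.0, 8.0]}
target = [24.0, 6.2]   # hypothetical tracer concentrations in the sediment

def mix(props):
    """Tracer signature of a linear mixture of the three sources."""
    return [sum(p * sources[s][i] for p, s in zip(props, sources))
            for i in range(len(target))]

behavioral = []
for _ in range(20000):
    a, b = sorted((random.random(), random.random()))
    props = (a, b - a, 1.0 - b)            # uniform sample on the simplex
    err = sum((m - t) ** 2 for m, t in zip(mix(props), target))
    if err < 0.5:                           # behavioral threshold (assumed)
        behavioral.append(props)

means = [sum(p[i] for p in behavioral) / len(behavioral) for i in range(3)]
print([round(m, 2) for m in means])   # posterior-style mean contributions
```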

Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran

Procedia PDF Downloads 74
18725 Modeling of Glycine Transporters in Mammalian Using the Probability Approach

Authors: K. S. Zaytsev, Y. R. Nartsissov

Abstract:

Glycine is one of the key inhibitory neurotransmitters in the central nervous system (CNS), and glycinergic transmission is highly dependent on appropriate glycine reuptake from the synaptic cleft. Glycine transporters (GlyT) of types 1 and 2 are the enzymes providing glycine transport back into neuronal and glial cells, with co-transport of Na⁺ and Cl⁻. The distribution and stoichiometry of GlyT1 and GlyT2 differ in detail, and GlyT2 is the more interesting for this research, as it returns glycine to neurons, whereas GlyT1 is located in glial cells. During GlyT2 activity, translocation of the amino acid is accompanied by the sequential binding of one chloride and three sodium ions (two sodium ions for GlyT1). In the present study, we developed a computer simulator of GlyT2 and GlyT1 activity, based on known experimental data, for quantitative estimation of membrane glycine transport. The functioning of a single protein was described using a probability approach in which each enzyme state is considered separately. The transporter cycle is represented as a sequence of elementary steps, which makes it possible to account for each substrate association and dissociation event. Computer experiments using up-to-date kinetic parameters yield the numbers of glycine molecules and of Na⁺ and Cl⁻ ions translocated per time period. The flexibility of the developed software makes it possible to evaluate the glycine reuptake pattern over time under different internal characteristics of the enzyme conformational transitions. We investigated the behavior of the system over a wide range of the equilibrium constant (from 0.2 to 100), which has not been determined experimentally. A significant influence of the equilibrium constant on the glycine transfer process is shown in the range from 0.2 to 10. Environmental conditions, such as ion and glycine concentrations, become decisive when the value of the constant lies outside this range.
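The probability approach can be illustrated with a toy discrete-time chain in which the transporter steps forward or backward through a handful of binding states, completing a cycle for each translocated glycine. The four-state reduction and all transition probabilities below are assumptions for illustration, not the study's kinetic parameters.

```python
import random

# Toy discrete-time sketch of the probability approach: the transporter is a
# chain of states (empty -> ion/glycine bound -> ... -> translocated), and at
# each time step one forward or backward transition fires with a given
# probability, so every association and dissociation event is counted.
random.seed(1)

P_FWD = [0.30, 0.25, 0.20, 0.40]   # forward step probability from each state
P_BWD = [0.00, 0.10, 0.10, 0.05]   # backward (dissociation) probability

def simulate(steps):
    state, translocated = 0, 0
    for _ in range(steps):
        r = random.random()
        if r < P_FWD[state]:
            state += 1
            if state == len(P_FWD):    # full cycle: one glycine moved inward
                translocated += 1
                state = 0
        elif r < P_FWD[state] + P_BWD[state]:
            state -= 1                  # substrate dissociates, step back
    return translocated

n = simulate(100000)
print(n)   # glycine molecules translocated over the simulated period
```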

Keywords: glycine, inhibitory neurotransmitters, probability approach, single protein functioning

Procedia PDF Downloads 119
18724 Estimation of Soil Nutrient Content Using Google Earth and Pleiades Satellite Imagery for Small Farms

Authors: Lucas Barbosa Da Silva, Jun Okamoto Jr.

Abstract:

Precision agriculture has long benefited from aerial imagery of crop fields. This important tool has made it possible to identify patterns in crop fields, generating useful information for production management. Reflectance intensity data in different ranges of the electromagnetic spectrum may indicate the presence or absence of nutrients in the soil of an area, and relations between the different light bands may yield even more detailed information. Knowledge of the nutrient content of the soil, or of the crop during its growth, is a valuable asset to the farmer who seeks to optimize yield. However, small farmers in Brazil often lack the resources to access this kind of information, and even when they do, it is not presented in a comprehensive and/or objective way. The challenges of implementing this technology thus range from sampling the imagery using aerial platforms, building a mosaic from the images to cover the entire crop field, extracting the reflectance information, and analyzing its relationship with the parameters of interest, to displaying the results in a manner that lets the farmer take the necessary decisions more objectively. In this work, we propose an analysis of soil nutrient content based on image processing of satellite imagery, comparing its results with a commercial laboratory's chemical analysis. Sources of satellite imagery are also compared, to assess the feasibility of using Google Earth data in this application and the impacts of doing so versus imagery from satellites such as Landsat-8 and Pleiades. Furthermore, an algorithm for building mosaics is implemented using Google Earth imagery, and finally, the possibility of using unmanned aerial vehicles is analyzed. From the data obtained, several soil parameters are estimated, namely the contents of potassium, phosphorus, boron, and manganese, among others. 
The suitability of Google Earth imagery for this application is verified within a reasonable margin when compared to Pleiades satellite imagery and to the current commercial model. It is also verified that the mosaic construction method has little or no influence on the estimation results. Variability maps are created over the covered area, and the impacts of image resolution and sampling time frame are discussed, allowing easy assessment of the results. The final results show that simple and inexpensive remote sensing and analysis methods are feasible alternatives that enable the small farmer, with limited access to technological and/or financial resources, to make more accurate decisions about soil nutrient management.
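As a sketch of the reflectance-to-nutrient step, the snippet below computes a normalized difference index from near-infrared and red bands and fits a simple linear relation to laboratory nutrient measurements. All reflectance and potassium values are hypothetical, not the study's data.

```python
# Band-ratio sketch: a normalized difference index from NIR and red
# reflectance, then an ordinary least-squares line relating the index to
# laboratory-measured nutrient content at the sampled points.
def ndvi(nir, red):
    return [(n - r) / (n + r) for n, r in zip(nir, red)]

def fit_line(x, y):
    """Least-squares slope and intercept of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

nir = [0.45, 0.50, 0.40, 0.55]
red = [0.10, 0.08, 0.15, 0.06]
potassium = [110.0, 125.0, 90.0, 135.0]   # hypothetical lab values

idx = ndvi(nir, red)
slope, intercept = fit_line(idx, potassium)
print(round(slope, 1), round(intercept, 1))
```

With the relation fitted at a few lab-sampled points, the index can then be evaluated at every pixel to produce a nutrient variability map.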

Keywords: remote sensing, precision agriculture, mosaic, soil, nutrient content, satellite imagery, aerial imagery

Procedia PDF Downloads 175
18723 Chern-Simons Equation in Financial Theory and Time-Series Analysis

Authors: Ognjen Vukovic

Abstract:

The Chern-Simons equation is a cornerstone of topological quantum field theory. The question often asked is whether this equation can be successfully applied to the interactions in international financial markets. By analysing time series in financial theory, it is shown that the Chern-Simons equation can be successfully applied to financial time series. This statement rests on one important premise: that financial time series follow fractional Brownian motion. All variants of the Chern-Simons equation and theory are applied and analysed. The movement of financial time series is first analysed topologically. The main idea is that an exchange rate represents a two-dimensional projection of three-dimensional Brownian motion. The main principles of knot theory and topology are applied to financial time series, and a setting is created in which the Chern-Simons equation can be applied. As the Chern-Simons equation describes small particles, it is multiplied by a magnifying factor to mimic real-world movement. The resulting equation is then optimised using a solver and applied to n financial time series, in order to see whether it can capture, and consequently explain, the interaction between the series. This equation represents a novel approach to financial time series analysis, and it will hopefully direct further research.
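The fractional-Brownian-motion premise is commonly checked by estimating the Hurst exponent H from the scaling of increment variance, Var(X[t+lag] - X[t]) ~ lag^(2H). A sketch (not from the paper) that recovers H near 0.5 for ordinary Brownian motion:

```python
import math
import random

# Estimate the Hurst exponent from the variance-scaling law of increments:
# for fBm, 0.5*log(Var(lag)) is linear in log(lag) with slope H. An ordinary
# random walk (H = 0.5) is used here as the test series.
random.seed(42)

x, s = [0.0], 0.0
for _ in range(20000):
    s += random.gauss(0.0, 1.0)
    x.append(s)

lags = [1, 2, 4, 8, 16, 32]
pts = []
for lag in lags:
    diffs = [x[i + lag] - x[i] for i in range(len(x) - lag)]
    var = sum(d * d for d in diffs) / len(diffs)
    pts.append((math.log(lag), 0.5 * math.log(var)))

# least-squares slope of 0.5*log(var) versus log(lag) estimates H
n = len(pts)
mx = sum(a for a, _ in pts) / n
my = sum(b for _, b in pts) / n
H = (sum((a - mx) * (b - my) for a, b in pts)
     / sum((a - mx) ** 2 for a, _ in pts))
print(round(H, 2))
```

A fitted H significantly different from 0.5 would indicate persistence (H > 0.5) or anti-persistence (H < 0.5) in the series.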

Keywords: Brownian motion, Chern-Simons theory, financial time series, econophysics

Procedia PDF Downloads 473
18722 Neural Network in Fixed Time for Collision Detection between Two Convex Polyhedra

Authors: M. Khouil, N. Saber, M. Mestari

Abstract:

In this paper, a different architecture for a collision detection neural network (DCNN) is developed. This network enables a new approach to solving the problem of collision detection between two convex polyhedra in fixed time (O(1) time). We used two types of neurons, linear and threshold logic, which simplified the actual implementation of all the networks proposed. The study of collision detection is divided into two parts: the collision between a point and a polyhedron, and then the collision between two convex polyhedra. The aim of this research is to determine, through the AMAXNET network, a minimax point in fixed time, which allows us to detect the presence of a potential collision.
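The first sub-problem has a simple sequential analogue: a point lies inside a convex polyhedron exactly when it satisfies every face halfspace inequality n·p ≤ d. The sketch below performs this check directly (the paper maps it onto threshold-logic neurons instead); the unit-cube face data are assumed example inputs.

```python
# Point-vs-convex-polyhedron test via halfspaces: the polyhedron is the
# intersection of face halfspaces (n = outward normal, d = offset), and the
# point is inside iff dot(n, p) <= d holds for every face.
def point_in_polyhedron(p, faces):
    return all(sum(n_i * p_i for n_i, p_i in zip(n, p)) <= d
               for n, d in faces)

# Unit cube centered at the origin: six faces with outward normals, offset 0.5.
cube = [(( 1, 0, 0), 0.5), ((-1, 0, 0), 0.5),
        (( 0, 1, 0), 0.5), (( 0, -1, 0), 0.5),
        (( 0, 0, 1), 0.5), (( 0, 0, -1), 0.5)]

print(point_in_polyhedron((0.2, 0.0, -0.3), cube))   # inside
print(point_in_polyhedron((0.2, 0.9, -0.3), cube))   # outside
```

Each inequality is independent of the others, which is what makes the test amenable to a fixed-depth parallel (neural) implementation.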

Keywords: collision identification, fixed time, convex polyhedra, neural network, AMAXNET

Procedia PDF Downloads 422
18721 Estimation and Removal of Chlorophenolic Compounds from Paper Mill Waste Water by Electrochemical Treatment

Authors: R. Sharma, S. Kumar, C. Sharma

Abstract:

A number of toxic chlorophenolic compounds are formed during pulp bleaching. The nature and concentration of these chlorophenolic compounds largely depend upon the amount and nature of the bleaching chemicals used. These compounds are highly recalcitrant and difficult to remove, and are only partially removed by the biochemical treatment processes adopted by the paper industry. Identification and estimation of these chlorophenolic compounds was carried out by GC-MS in the primary and secondary clarified effluents from a paper mill. Twenty-six chlorophenolic compounds were identified and estimated in the paper mill wastewaters. Electrochemical treatment is an efficient method for the oxidation of pollutants and has successfully been used to treat textile and oil wastewater. Electrochemical treatment using a less expensive anode material, stainless steel electrodes, was therefore tried for their removal. The electrochemical assembly comprised a DC power supply, a magnetic stirrer, and stainless steel (316L) electrodes. The operating conditions were optimized, and treatment was performed under the optimized conditions. Results indicate that 68.7% and 83.8% of the chlorophenolic compounds are removed during 2 h of electrochemical treatment from the primary and secondary clarified effluents, respectively. Further, COD, AOX, and color are reduced by 65.1%, 60%, and 92.6%, respectively, for the primary clarified effluent, and by 83.8%, 75.9%, and 96.8%, respectively, for the secondary clarified effluent. Electrochemical treatment has also been found to significantly increase the biodegradability index of the wastewater, owing to the conversion of the non-biodegradable fraction into a biodegradable fraction. Thus, electrochemical treatment is an efficient method for the degradation of chlorophenolic compounds and the removal of color, AOX, and other recalcitrant organic matter present in paper mill wastewater.

Keywords: chlorophenolics, effluent, electrochemical treatment, wastewater

Procedia PDF Downloads 387
18720 Experimental and Analytical Dose Assessment of Patient's Family Members Treated with I-131

Authors: Marzieh Ebrahimi, Vahid Changizi, Mohammad Reza Kardan, Seyed Mahdi Hosseini Pooya, Parham Geramifar

Abstract:

Radiation exposure of the patient's family members is one of the major concerns during thyroid cancer radionuclide therapy. The aim of this study was to measure the total effective doses of family members by means of thermoluminescence personal dosimeters and to compare them with those calculated by analytical methods. Eighty-five adult family members of fifty-one patients volunteered to participate in this research study. With dose rates at the patients' release time ranging from 15 µSv/h to 120 µSv/h, the mean and median dose values of the family members were 0.45 mSv and 0.28 mSv, respectively. Moreover, almost all family members' doses were measured to be less than the dose constraint of 5 mSv recommended by the Basic Safety Standards. Taking into account influencing parameters such as the patient dose rate and administered activity, the total effective doses of family members were calculated by the TEDE and NRC formulas and compared with the experimental results. The results indicate that quantitative calculations are useful for releasing patients treated with I-131 and for correctly estimating the doses to patients' families.
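A hedged sketch of the kind of analytical screening estimate referred to (cf. the NRC-style point-source formula D = 34.6·Γ·Q₀·T·E/r²): the gamma constant, activity, half-life, occupancy factor, and the rough R-to-mSv conversion below are generic textbook-style assumptions, not this study's data, and the unmodified formula is deliberately conservative.

```python
# Point-source screening estimate of the total dose to a person staying at
# distance r from a released patient:
#   D = 34.6 * G * Q0 * Tp * E / r^2
# where 34.6 = 24 h/day * 1.44 (integral of exponential decay), G is the
# exposure-rate constant, Q0 the administered activity, Tp the half-life in
# days, E an occupancy factor, and r the distance. All inputs are assumed.
def total_dose_mSv(gamma_R_cm2_per_mCi_h, q0_mCi, t_half_days,
                   occupancy, r_cm):
    dose_R = (34.6 * gamma_R_cm2_per_mCi_h * q0_mCi * t_half_days
              * occupancy / r_cm**2)
    return dose_R * 10.0   # rough gamma conversion: 1 R ~ 10 mSv

# 100 mCi of I-131 (physical half-life ~8.02 d), occupancy 0.25 at 1 m.
d = total_dose_mSv(2.2, 100.0, 8.02, 0.25, 100.0)
print(round(d, 2))   # conservative upper-bound estimate, in mSv
```

Using the physical rather than the effective half-life, as here, overestimates the dose; refined calculations with uptake fractions and effective half-lives give the lower values consistent with the measured results.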

Keywords: effective dose, thermoluminescence, I-131, thyroid cancer

Procedia PDF Downloads 399
18719 Fuzzy Inference Based Modelling of Perception Reaction Time of Drivers

Authors: U. Chattaraj, K. Dhusiya, M. Raviteja

Abstract:

The perception reaction time of drivers is the outcome of a human thought process that is vague and approximate in nature and varies from driver to driver. In this study, a fuzzy logic based model, which is well suited to such vagueness, is therefore presented for its prediction. The control factors (driver age, experience, and driving intensity; vehicle speed; and distance to the stimulus) are treated as premise variables in the model, with perception reaction time as the consequence variable. Results show that the model properly captures the impacts of the control factors on perception reaction time.
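A minimal Mamdani-style sketch of such a model, with triangular memberships for two of the premise variables and weighted-average defuzzification; the membership breakpoints and rule consequents are invented for illustration and do not reproduce the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_prt(age, speed):
    """Fuzzy estimate of perception reaction time (s) from two premises."""
    young, old = tri(age, 15, 25, 45), tri(age, 35, 60, 80)
    slow, fast = tri(speed, 0, 30, 60), tri(speed, 40, 80, 120)
    # rules: (firing strength, PRT consequent in seconds)
    rules = [(min(young, fast), 1.0),   # young driver, fast vehicle: quick
             (min(young, slow), 1.5),
             (min(old, fast), 1.8),
             (min(old, slow), 2.5)]     # older driver, slow vehicle: slow
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else None   # weighted-average defuzzification

print(estimate_prt(22, 70), estimate_prt(65, 20))
```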

Keywords: driver, fuzzy logic, perception reaction time, premise variable

Procedia PDF Downloads 304
18718 Analysis of Temporal Factors Influencing Minimum Dwell Time Distributions

Authors: T. Pedersen, A. Lindfeldt

Abstract:

The minimum dwell time is an important part of railway timetable planning. Because of its stochastic behaviour, the minimum dwell time should be considered when creating resilient timetables. While there has been significant focus on how to determine and estimate dwell times, to our knowledge little research has been carried out regarding their temporal and running-direction variations. In this paper, we examine how the minimum dwell time varies depending on temporal factors such as the time of day, the day of the week, and the time of the year. We also examine how it is affected by running direction and station type. The minimum dwell time is estimated by means of track occupation data. A method is proposed to ensure that only minimum dwell times, and not planned dwell times, are acquired from the track occupation data. The results show that, on an aggregated level, the average minimum dwell times in both running directions at a station are similar. However, when temporal factors are considered, there are significant variations. The minimum dwell time varies throughout the day, with peak hours having the longest dwell times. The minimum dwell times are also influenced by the day of the week; in particular, weekends are found to have lower minimum dwell times than most other days. The findings show that there is potential to significantly improve timetable planning by taking minimum dwell time variations into account.
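The estimation step can be sketched as follows: compute dwell times from arrival/departure pairs in the track occupation data, group them by hour of day, and take a low percentile per group as a robust proxy for the minimum dwell time. The records below are synthetic; the paper's actual filtering of planned dwell times is not reproduced.

```python
from collections import defaultdict

# Synthetic track-occupation records: (arrival, departure) in seconds since
# midnight. The first batch imitates peak-hour stops, the second off-peak.
records = [(7 * 3600 + 60 * i, 7 * 3600 + 60 * i + 55 + (i % 4) * 5)
           for i in range(20)]
records += [(13 * 3600 + 120 * i, 13 * 3600 + 120 * i + 40 + (i % 3) * 5)
            for i in range(20)]

by_hour = defaultdict(list)
for arr, dep in records:
    by_hour[arr // 3600].append(dep - arr)   # dwell time in seconds

def p10(xs):
    """10th percentile: more robust against stragglers than the raw minimum."""
    xs = sorted(xs)
    return xs[max(0, int(0.1 * len(xs)) - 1)]

summary = {h: p10(d) for h, d in sorted(by_hour.items())}
print(summary)   # minimum-dwell proxy per hour of day
```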

Keywords: minimum dwell time, operations quality, timetable planning, track occupation data

Procedia PDF Downloads 198
18717 Bridgeless Boost Power Factor Correction Rectifier with Hold-Up Time Extension Circuit

Authors: Chih-Chiang Hua, Yi-Hsiung Fang, Yuan-Jhen Siao

Abstract:

A bridgeless boost (BLB) power factor correction (PFC) rectifier with a hold-up time extension circuit is proposed in this paper. A full bridge rectifier is widely used in the front end of the ac/dc converter. Because of the shortcomings of the full bridge rectifier, bridgeless rectifiers have been developed. Here, a BLB rectifier topology is combined with the hold-up time extension circuit. Unlike traditional hold-up time extension circuits, the proposed extension scheme uses fewer active switches to achieve a longer hold-up time. Simulation results are presented to verify the converter performance.
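The hold-up requirement itself reduces to an energy balance on the DC-link capacitor, t_hold = C(V_nom² - V_min²)/(2P). A sketch with generic design values that are not taken from the paper:

```python
# Hold-up time from the energy stored in the DC-link capacitor: as the bus
# falls from its nominal voltage to the minimum the downstream stage
# tolerates, the released energy C*(Vnom^2 - Vmin^2)/2 must carry the load P.
def hold_up_time_ms(c_farads, v_nom, v_min, p_watts):
    return 1e3 * c_farads * (v_nom**2 - v_min**2) / (2.0 * p_watts)

def required_cap_uF(t_hold_s, v_nom, v_min, p_watts):
    """Inverse problem: capacitance needed for a target hold-up time."""
    return 1e6 * 2.0 * p_watts * t_hold_s / (v_nom**2 - v_min**2)

# Generic 400 V bus, 300 V minimum, 300 W load (assumed design values).
print(round(hold_up_time_ms(470e-6, 400.0, 300.0, 300.0), 1))   # ms
print(round(required_cap_uF(0.020, 400.0, 300.0, 300.0)))       # uF for 20 ms
```

An extension circuit that lets the bus swing further down (smaller V_min) achieves the same hold-up time with a smaller capacitor, which is the motivation for such schemes.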

Keywords: bridgeless boost (BLB), boost converter, power factor correction (PFC), hold-up time

Procedia PDF Downloads 416
18716 Short Arc Technique for Baselines Determinations

Authors: Gamal F. Attia

Abstract:

The baselines are the distances, the lengths of the chords between the projections of the laser stations' positions onto the reference ellipsoid. For satellite geodesy, it is very important to determine the optimal length of the orbital arc along which laser measurements are carried out. For dynamical methods, long arcs (one month or more) are to be used. Over such arcs, errors in modeling the different physical forces, such as the Earth's gravitational field, air drag, and solar radiation pressure, may influence the accuracy of the estimated satellite positions; at the same time, the measurement errors can be almost completely averaged out, and high stability in the determination of the relative coordinate system can be achieved. It is possible to diminish the influence of the modeling errors by using short arcs of the satellite orbit (several revolutions or days), but the station coordinates estimated from different arcs can then differ from each other by more than statistical zero. Under the semidynamical 'short arc' method, one or several passes of the satellite within the zone of simultaneous visibility from both ends of the chord are used, and the estimated parameter in this case is the length of the chord. Comparison of the same baselines calculated with the long and short arc methods shows good agreement and even speaks in favor of the latter. In this paper, the short arc technique is explained, and three baselines are determined using the 'short arc' method.
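The chord between two stations can be computed directly by converting geodetic coordinates to Earth-centered Cartesian (ECEF) coordinates and taking the Euclidean distance. The sketch below uses WGS84 constants and two illustrative station positions; the paper's actual stations and reference ellipsoid are not specified here.

```python
import math

A = 6378137.0                 # WGS84 semi-major axis (m)
F = 1.0 / 298.257223563       # WGS84 flattening
E2 = F * (2.0 - F)            # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
    """Geodetic latitude/longitude (degrees) and height (m) to ECEF (m)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    return ((n + h) * math.cos(lat) * math.cos(lon),
            (n + h) * math.cos(lat) * math.sin(lon),
            (n * (1.0 - E2) + h) * math.sin(lat))

def chord_m(p1, p2):
    """Straight-line chord length (m) between two points on the ellipsoid."""
    return math.dist(geodetic_to_ecef(*p1), geodetic_to_ecef(*p2))

# Two hypothetical laser stations about one degree of latitude apart.
d = chord_m((30.0, 31.0), (31.0, 31.0))
print(round(d, 1))   # roughly 111 km
```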

Keywords: baselines, short arc, dynamical, gravitational field

Procedia PDF Downloads 463