Search results for: bayesian estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2141

1271 Survival Data with Incomplete Missing Categorical Covariates

Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar

Abstract:

Censored survival data with incomplete covariate information are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The survival outcome is modeled within the class of generalized linear models, and this method requires the estimation of the parameters of the distribution of the covariates. In this paper, we consider clinical trials data with five covariates, four of which have missing values, in a fully censored survival data setting.
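
The abstract does not reproduce the estimation details; the following is a minimal sketch of the EM-by-the-method-of-weights idea for a Weibull outcome with a single binary covariate missing at random, on simulated data. The variable names and the simplified likelihood are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# synthetic data: Weibull event times, one binary covariate, 30% missing at random
n = 300
x = rng.binomial(1, 0.4, n).astype(float)
k_true, b0_true, b1_true = 1.5, 0.5, 1.0
u = rng.random(n)
t = (-np.log(u) / np.exp(b0_true + b1_true * x)) ** (1.0 / k_true)  # inverse CDF
c = np.quantile(t, 0.8)                 # administrative censoring time
d = (t <= c).astype(float)              # event indicator
t = np.minimum(t, c)
x[rng.random(n) < 0.3] = np.nan         # covariate missing at random

def loglik(params, xval):
    """Per-subject Weibull proportional-hazards log-likelihood for covariate xval."""
    k, b0, b1 = np.exp(params[0]), params[1], params[2]
    eta = b0 + b1 * xval
    log_h = np.log(k) + (k - 1.0) * np.log(t) + eta   # log hazard
    H = t**k * np.exp(eta)                            # cumulative hazard
    return d * log_h - H

obs = ~np.isnan(x)
params, pi = np.zeros(3), 0.5           # pi = P(x = 1)

for _ in range(50):
    # E-step: weight = posterior P(x = 1 | data) for subjects with missing x
    l1, l0 = loglik(params, 1.0), loglik(params, 0.0)
    w = pi * np.exp(l1) / (pi * np.exp(l1) + (1 - pi) * np.exp(l0))
    w[obs] = x[obs]
    # M-step: maximize the weighted complete-data log-likelihood
    Q = lambda p: -np.sum(w * loglik(p, 1.0) + (1 - w) * loglik(p, 0.0))
    params = minimize(Q, params, method="Nelder-Mead").x
    pi = w.mean()                       # MLE of the covariate distribution

print("shape, b0, b1, pi:", np.exp(params[0]), *params[1:].round(2), round(pi, 2))
```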

Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull Distribution

Procedia PDF Downloads 406
1270 A Generalisation of Pearson's Curve System and Explicit Representation of the Associated Density Function

Authors: S. B. Provost, Hossein Zareamoghaddam

Abstract:

A univariate density approximation technique whereby the derivative of the logarithm of a density function is assumed to be expressible as a rational function is introduced. This approach, which extends Pearson’s curve system, is solely based on the moments of a distribution up to a determinable order. Upon solving a system of linear equations, the coefficients of the polynomial ratio can readily be identified. An explicit solution to the integral representation of the resulting density approximant is then obtained. It will be explained that, when utilised in conjunction with sample moments, this methodology lends itself to the modelling of ‘big data’. Applications to sets of univariate and bivariate observations will be presented.
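
As a rough illustration of the moment-based linear system such an approach leads to, the sketch below recovers the coefficients of d/dx ln f(x) = p(x)/q(x) from sample moments. The polynomial degrees, the normalization b_0 = 1, and the number of equations are assumptions made for the demo, not the authors' specification.

```python
# Sketch: recover d/dx ln f(x) = p(x)/q(x) from sample moments.
# Integrating x^k (q f' - p f) = 0 by parts (boundary terms vanish) gives
#   sum_i a_i mu_{k+i} + sum_j b_j (k+j) mu_{k+j-1} = 0,   k = 0, 1, 2, ...
import numpy as np

def rational_logdensity_coeffs(sample, dp=1, dq=2, n_eq=6):
    mu = np.array([np.mean(sample**m) for m in range(n_eq + max(dp, dq) + 1)])
    A = np.zeros((n_eq, dp + 1 + dq))   # unknowns: a_0..a_dp, b_1..b_dq (b_0 = 1)
    rhs = np.zeros(n_eq)
    for k in range(n_eq):
        for i in range(dp + 1):
            A[k, i] = mu[k + i]                     # p-coefficients
        for j in range(1, dq + 1):
            A[k, dp + j] = (k + j) * mu[k + j - 1]  # q-coefficients
        rhs[k] = -k * mu[k - 1] if k > 0 else 0.0   # contribution of b_0 = 1
    coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coeffs[:dp + 1], np.r_[1.0, coeffs[dp + 1:]]

# sanity check on N(0,1): d/dx ln f = -x, i.e. p(x) = -x, q(x) = 1
p, q = rational_logdensity_coeffs(np.random.default_rng(1).normal(size=200000))
print("p:", p, "q:", q)   # expect roughly p = [0, -1], q = [1, 0, 0]
```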

Keywords: density estimation, log-density, moments, Pearson's curve system

Procedia PDF Downloads 281
1269 Improving the Quantification Model of Internal Control Impact on Banking Risks

Authors: M. Ndaw, G. Mendy, S. Ouya

Abstract:

Risk management in the banking sector is a key issue linked to financial system stability, and its importance has been elevated by technological developments and the emergence of new financial instruments. In this paper, we improve the model previously defined for quantifying the impact of internal control on banking risks by automating the residual criticality estimation step of FMECA. To this end, we define three equations and a maturity coefficient to obtain a mathematical model, which is tested on all banking processes and types of risk. The new model allows an optimal assessment of residual criticality and improves the correlation rate, which reaches 98%.

Keywords: risk, control, banking, FMECA, criticality

Procedia PDF Downloads 334
1268 Cost Overrun Causes in Public Construction Projects in Saudi Arabia

Authors: Ibrahim Mahamid, A. Al-Ghonamy, M. Aichouni

Abstract:

This study was conducted to identify the causes of cost deviations in public construction projects in Saudi Arabia from the contractors’ perspective. 41 factors that might affect cost estimating accuracy were identified through a literature review and discussions with construction experts. The factors were tabulated in a questionnaire form, and a field survey covering 51 contractors from the Northern Province of Saudi Arabia was performed. The results show that the five most important causes are: wrong estimation method, long period between design and time of implementation, cost of labor, cost of machinery, and absence of construction-cost data.

Keywords: cost deviation, public construction, cost estimating, Saudi Arabia, contractors

Procedia PDF Downloads 475
1267 Adaptive Nonparametric Approach for Guaranteed Real-Time Detection of Targeted Signals in Multichannel Monitoring Systems

Authors: Andrey V. Timofeev

Abstract:

An adaptive nonparametric method is proposed for stable real-time detection of seismoacoustic sources in multichannel C-OTDR systems with a significant number of channels. This method guarantees specified upper bounds on the probabilities of Type I and Type II errors. Properties of the proposed method are rigorously proved. The results of practical applications of the proposed method in a real C-OTDR system are presented.

Keywords: guaranteed detection, multichannel monitoring systems, change point, interval estimation, adaptive detection

Procedia PDF Downloads 448
1266 Modeling the Impact of Controls on Information System Risks

Authors: M. Ndaw, G. Mendy, S. Ouya

Abstract:

Information system risk management helps to reduce or eliminate risk by implementing appropriate controls. In this paper, we propose a model for quantifying the impact of controls on information system risks by automating the residual criticality estimation step of FMECA, which is based on inductive reasoning. To this end, we define three equations based on the type and maturity of controls. For testing, the values obtained with the model were compared to the values estimated by interlocutors during different working sessions, and the results are satisfactory. This model allows an optimal assessment of control maturity and facilitates the risk analysis of information systems.
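
The three equations of the model are not reproduced in the abstract, so the sketch below is purely hypothetical: it only illustrates the general idea of scaling an initial FMECA criticality (occurrence × severity × detection) by a control-maturity coefficient. Every name, formula, and constant here is an assumption, not the authors' model.

```python
# Hypothetical sketch of automating FMECA residual criticality with a
# control maturity coefficient; the actual equations are not given in the
# abstract, so everything below is an illustrative guess.
def residual_criticality(occurrence, severity, detection,
                         control_effect=0.8, maturity=0.6):
    """Initial criticality O*S*D, reduced by a control whose nominal
    effectiveness is weighted by a maturity coefficient in [0, 1]."""
    initial = occurrence * severity * detection
    reduction = control_effect * maturity
    return initial * (1.0 - reduction)

# risk rated on 1-10 scales, a control of effectiveness 0.8 at maturity 0.6
print(residual_criticality(7, 8, 5, control_effect=0.8, maturity=0.6))  # 145.6
```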

Keywords: information system, risk, control, FMECA method

Procedia PDF Downloads 355
1265 The Role of Financial Development and Institutional Quality in Promoting Sustainable Development through Tourism Management

Authors: Hashim Zameer

Abstract:

Effective tourism management plays a vital role in promoting sustainability and supporting ecosystems. A common principle that has been in practice over the years is “first pollute and then clean,” indicating that countries need financial resources to promote sustainability. Financial development and tourism management both seem very important for promoting sustainable development; however, without institutional support, it is very difficult to succeed. In this context, it is particularly significant to explore how institutional quality, tourism development, and financial development could promote sustainable development. Past research has not explored the role of tourism development in sustainable development, and the roles of financial development, natural resources, and institutional quality in sustainable development have also been ignored. In this regard, this paper aims to investigate the role of tourism development, natural resources, financial development, and institutional quality in sustainable development in China. The study used time-series data from 2000–2021 and employed a Bayesian linear regression model because it is suitable for small data sets. The robustness of the findings was checked using a quantile regression approach. The results reveal that an increase in tourism expenditures stimulates the economy, creates jobs, encourages cultural exchange, and supports sustainability initiatives. Moreover, financial development and institutional quality have a positive effect on sustainable development. However, reliance on natural resources can result in negative economic, social, and environmental outcomes, highlighting the need for resource diversification and management to reinforce sustainable development. These results highlight the significance of financial development, strong institutions, sustainable tourism, and careful utilization of natural resources for long-term sustainability. The study holds vital insights for policy formulation to promote sustainable tourism.
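
As an illustration of the kind of small-sample Bayesian regression described, here is a minimal conjugate (Normal-Inverse-Gamma) sketch; the priors, predictors, and variable names are assumptions, not the authors' specification.

```python
# Minimal conjugate Bayesian linear regression sketch for a short time series.
import numpy as np

def bayes_linreg(X, y, tau2=10.0, a0=2.0, b0=1.0):
    """Posterior for beta ~ N(0, sigma2*tau2*I), sigma2 ~ InvGamma(a0, b0)."""
    n, p = X.shape
    V_inv = X.T @ X + np.eye(p) / tau2
    V = np.linalg.inv(V_inv)
    m = V @ X.T @ y                      # posterior mean of beta
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * (y @ y - m @ V_inv @ m)
    return m, V, a_n, b_n

# tiny example: a sustainability index vs. two predictors over 22 years
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(22), rng.normal(size=(22, 2))])   # 2000-2021
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(0, 0.2, 22)
m, V, a_n, b_n = bayes_linreg(X, y)
print("posterior mean:", m.round(2), " E[sigma^2]:", round(b_n / (a_n - 1), 3))
```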

Keywords: sustainability, tourism development, financial development, institutional quality

Procedia PDF Downloads 83
1264 Employment Mobility and the Effects of Wage Level and Tenure

Authors: Idit Kalisher, Israel Luski

Abstract:

One result of the growing dynamism of labor markets in recent decades is a wider scope of employment mobility, i.e., transitions between employers, either within or between careers. Employment mobility decisions are primarily affected by the current employment status of the worker, which is reflected in wage and tenure. Using 34,328 observations from the National Longitudinal Survey of Youth 1979 (NLSY79), drawn from the USA population between 1990 and 2012, this paper aims to investigate the effects of wage and tenure on employment mobility choices, and additionally to examine the effects of other personal characteristics, individual labor market characteristics, and macroeconomic factors. The estimation strategy was designed to address two challenges that arise from the combination of the model and the data: (a) endogeneity of the wage and the tenure in the choice equation; and (b) unobserved heterogeneity, as the data of this research are longitudinal. To address (a), estimation was performed using a two-stage limited dependent variable procedure (2SLDV); and to address (b), the second stage was estimated using femlogit – an implementation of the multinomial logit model with fixed effects. Among workers who have experienced at least one turnover, the wage was found to have a main effect on the career turnover likelihood of all workers, whereas the wage effect on job turnover likelihood was found to depend on individual characteristics. The wage was found to negatively affect turnover likelihood, and the effect was found to vary across wage levels: high-wage workers were more affected than low-wage workers. Tenure was found to have a main positive effect on the likelihood of both turnover types, though the effect was moderated by the wage. The findings also reveal that as their wage increases, women are more likely to turnover than men, and academically educated workers are more likely to turnover within careers. Minorities were found to be as likely as Caucasians to turnover after a wage increase, but less likely to turnover with each additional tenure year. The wage and tenure effects were also found to vary between careers. Differences in attitude towards money, labor market opportunities, and risk aversion could explain these findings. Additionally, the likelihood of a turnover was found to be affected by previous unemployment spells, age, and other labor market and personal characteristics. The results of this research could assist policymakers as well as business owners and employers. The former may be able to encourage the employment of women and older workers by considering the effects of gender and age on the probability of a turnover, and the latter may be able to assess their employees’ likelihood of a turnover by considering the effects of their personal characteristics.

Keywords: employment mobility, endogeneity, femlogit, turnover

Procedia PDF Downloads 151
1263 Breast Cancer Incidence Estimation in Castilla-La Mancha (CLM) from Mortality and Survival Data

Authors: C. Romero, R. Ortega, P. Sánchez-Camacho, P. Aguilar, V. Segur, J. Ruiz, G. Gutiérrez

Abstract:

Introduction: Breast cancer is a leading cause of death in CLM (2.8% of all deaths in women and 13.8% of deaths from tumours in women). It is the tumour with the highest incidence in the CLM region, accounting for 26.1% of all tumours, excluding non-melanoma skin cancer (Cancer Incidence in Five Continents, Volume X, IARC). Cancer registries are a good source of information for estimating cancer incidence; however, their data are usually available with a lag, which makes them difficult for health managers to use. By contrast, mortality and survival statistics have less delay. To serve resource planning and respond to this problem, a method is presented to estimate incidence from mortality and survival data. Objectives: To estimate the incidence of breast cancer by age group in CLM in the period 1991-2013, and to compare the data obtained from the model with current incidence data. Sources: annual number of women by single year of age (National Statistics Institute); annual number of deaths from all causes and from breast cancer (Mortality Registry CLM); breast cancer relative survival probability (EUROCARE, Spanish registries data). Methods: A Weibull parametric survival model is obtained from the EUROCARE data. From the survival model and the population and mortality data, the Mortality and Incidence Analysis MODel (MIAMOD) regression model is used to estimate the incidence of cancer by age (1991-2013). Results: The resulting model is: logit(I(x,t)) = const + age1·x + age2·x² + coh1·(t − x) + coh2·(t − x)², where I(x,t) is the incidence at age x in period (year) t; the parameter estimates are: const (constant term) = -7.03; age1 = 3.31; age2 = -1.10; coh1 = 0.61; and coh2 = -0.12. It is estimated that 662 cases of breast cancer (81.51 per 100,000 women) were diagnosed in CLM in 1991, and an estimated 1,152 cases (112.41 per 100,000 women) were diagnosed in 2013, representing an increase of 40.7% in the gross incidence rate (1.9% per year). The average annual increases in incidence by age were: 2.07% in women aged 25-44 years, 1.01% (45-54 years), 1.11% (55-64 years), and 1.24% (65-74 years). The cancer registries in Spain that send data to IARC declared an average annual incidence rate of 98.6 cases per 100,000 women for 2003-2007; our model obtains an incidence of 100.7 cases per 100,000 women. Conclusions: A sharp and steady increase in the incidence of breast cancer in the period 1991-2013 is observed. The increase was seen in all age groups considered, although it appears more pronounced in young women (25-44 years). This method provides a good estimate of the incidence.
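
The reported model can be evaluated directly; the sketch below applies the inverse logit to the fitted age-cohort polynomial with the published coefficients. The scaling of age and period is not stated in the abstract, so the inputs here are assumed to be pre-standardized and the outputs are illustrative only.

```python
# Evaluate the reported MIAMOD-type model: logit of incidence as a quadratic
# in age x and cohort (t - x), with the coefficients given in the abstract.
import numpy as np

const, age1, age2, coh1, coh2 = -7.03, 3.31, -1.10, 0.61, -0.12

def incidence(x, t):
    """Inverse logit of the fitted polynomial; x and t are assumed already
    standardized to the (unstated) scale used in the original fit."""
    eta = const + age1*x + age2*x**2 + coh1*(t - x) + coh2*(t - x)**2
    return 1.0 / (1.0 + np.exp(-eta))

# incidence surface over a small grid of (scaled) age and period values
for x in (0.25, 0.5, 0.75):
    print([round(incidence(x, t), 5) for t in (0.8, 1.0, 1.2)])
```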

Keywords: breast cancer, incidence, cancer registries, castilla-la mancha

Procedia PDF Downloads 311
1262 Effects of Lung Protection Ventilation Strategies on Postoperative Pulmonary Complications After Noncardiac Surgery: A Network Meta-Analysis of Randomized Controlled Trials

Authors: Ran An, Dang Wang

Abstract:

Background: Mechanical ventilation has been confirmed to increase the incidence of postoperative pulmonary complications (PPCs), and several studies have shown that low tidal volumes combined with positive end-expiratory pressure (PEEP) and recruitment manoeuvres (RM) reduce the incidence of PPCs. However, the optimal lung-protective ventilation strategy remains unclear. Methods: Multiple databases were searched for randomized controlled trials (RCTs) published prior to October 2023. The association between individual PEEP (iPEEP) or other forms of lung-protective ventilation and the incidence of PPCs was evaluated by Bayesian network meta-analysis. Results: We included 58 studies (11,610 patients) in this meta-analysis. The network meta-analysis showed that low tidal volume ventilation (LVt) combined with iPEEP and RM was associated with significantly lower incidences of PPCs [vs. HVt: OR = 0.38, 95% CrI (0.19, 0.75); vs. LVt: OR = 0.33, 95% CrI (0.12, 0.82)], postoperative atelectasis, and pneumonia than HVt or LVt alone. In abdominal surgery, LVt combined with iPEEP or medium-to-high PEEP and RM was associated with significantly lower incidences of PPCs, postoperative atelectasis, and pneumonia. Based on SUCRA scores, LVt combined with iPEEP and RM ranked highest. Conclusion: LVt combined with iPEEP and RM decreased the incidences of PPCs, postoperative atelectasis, and pneumonia in noncardiac surgery patients. iPEEP-guided ventilation was the optimal lung-protective ventilation strategy. The quality of evidence was moderate.
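
For readers unfamiliar with SUCRA, the sketch below shows how the score is computed from the rank probabilities a Bayesian network meta-analysis produces; the probability matrix is invented purely for illustration.

```python
# SUCRA (surface under the cumulative ranking curve) from rank probabilities.
import numpy as np

# rank_probs[i, r] = P(treatment i is ranked (r+1)-th best); rows sum to 1
rank_probs = np.array([
    [0.60, 0.25, 0.10, 0.05],   # e.g. LVt + iPEEP + RM
    [0.25, 0.40, 0.20, 0.15],
    [0.10, 0.25, 0.45, 0.20],
    [0.05, 0.10, 0.25, 0.60],
])

def sucra(p):
    """Mean of the cumulative rank probabilities over the first a-1 ranks."""
    a = p.shape[1]
    cum = np.cumsum(p, axis=1)[:, :-1]   # drop the last rank (always 1)
    return cum.sum(axis=1) / (a - 1)

print(sucra(rank_probs))   # higher = more likely to be among the best
```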

Keywords: protection ventilation strategies, postoperative pulmonary complications, network meta-analysis, noncardiac surgery

Procedia PDF Downloads 35
1261 Efficacy of Agrobacterium Tumefaciens as a Possible Entomopathogenic Agent

Authors: Fouzia Qamar, Shahida Hasnain

Abstract:

The objective of the present study was to evaluate the possible role of Agrobacterium tumefaciens as an insect biocontrol agent. The pests selected for the present challenge were adult males of Periplaneta americana and last instar larvae of Pieris brassicae and Spodoptera litura. Different ranges of bacterial doses were selected and tested, and insect mortalities were scored after 24 hours for the lethal dose estimation studies. The bacteria were inoculated by the microinjection technique. Evaluation of the possible entomopathogenic attribute carried by the bacterial Ti plasmid led to the conclusion that the loss of the plasmid was associated with the loss of virulence against the target insects.
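
Dose-mortality studies of this kind are commonly analyzed with probit regression to obtain an LD50; the sketch below is an illustrative example of such an analysis with invented numbers, not the authors' data or method.

```python
# Illustrative probit LD50 fit; the dose-mortality numbers are made up.
import numpy as np
import statsmodels.api as sm

dose   = np.array([1e4, 1e5, 1e6, 1e7, 1e8])   # e.g. cells injected per insect
n      = np.array([20, 20, 20, 20, 20])        # insects per dose group
deaths = np.array([2, 5, 9, 14, 18])           # mortality after 24 h

X = sm.add_constant(np.log10(dose))
model = sm.GLM(np.column_stack([deaths, n - deaths]), X,
               family=sm.families.Binomial(link=sm.families.links.Probit()))
fit = model.fit()
b0, b1 = fit.params
print("LD50 ~ 10**%.2f cells" % (-b0 / b1))    # dose giving 50% mortality
```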

Keywords: agrobacterium tumefaciens, toxicity assessment, biopesticidal attribute, entomopathogenic agent

Procedia PDF Downloads 379
1260 Kinetic Studies on CO₂ Gasification of Low and High Ash Indian Coals in Context of Underground Coal Gasification

Authors: Geeta Kumari, Prabu Vairakannu

Abstract:

Underground coal gasification (UCG) technology is an efficient and economical in-situ clean coal technology, which converts unmineable coals into gases of calorific value. This technology avoids ash disposal, coal mining, and storage problems. CO₂ can be a potential gasifying medium for UCG. CO₂ is a greenhouse gas, and the liberation of this gas to the atmosphere from thermal power plant industries leads to global warming. Hence, the capture and reutilization of CO₂ are crucial for clean energy production. However, the reactivity of high ash Indian coals with CO₂ needs to be assessed. In the present study, two varieties of Indian coals (low ash and high ash) are used for thermogravimetric analyses (TGA). Two low ash north-east Indian coals (LAC) and a typical high ash Indian coal (HAC) were procured from the coal mines of India: low ash coal with 9% ash (LAC-1) and 4% ash (LAC-2), and high ash coal (HAC) with 42% ash. TGA studies are carried out to evaluate the activation energy for pyrolysis and gasification of coal under N₂ and CO₂ atmospheres. The Coats and Redfern method, assuming a volumetric model, is used to estimate the activation energy of coal over different temperature regimes. The inherent properties of coals play a major role in their reactivity. The results show that the activation energy decreases with a decrease in the inherent percentage of coal ash, owing to the hindrance of the ash layer. A reverse trend was observed with volatile matter: high volatile matter content leads to a low estimated activation energy. It was observed that the activation energy under a CO₂ atmosphere at 400-600°C is lower than that under an inert N₂ atmosphere; in this temperature range, a 15-23% reduction in the activation energy is estimated under the CO₂ atmosphere. This shows the reactivity of CO₂ with the higher hydrocarbons of the coal volatile matter. The reaction of CO₂ with the volatile matter of coal might occur through the dry reforming reaction, in which CO₂ reacts with the higher hydrocarbons and aromatics of the tar content. The observed trend of Ea in the temperature ranges of 150-200°C and 400-600°C is HAC > LAC-1 > LAC-2 in both N₂ and CO₂ atmospheres. In the temperature range of 850-1000°C, higher activation energies are estimated than in the 400-600°C range. Above 800°C, char gasification through the Boudouard reaction progresses under a CO₂ atmosphere; the activation energy for char gasification above 800°C is 8-20 kJ/mol higher than that for volatile matter pyrolysis in the 400-600°C range. The overall activation energy of the coals in the temperature range of 30-1000°C is higher in the N₂ atmosphere than in the CO₂ atmosphere. It can be concluded that higher hydrocarbons such as tar effectively undergo cracking and reforming reactions in the presence of CO₂. Thus, CO₂ is beneficial for the production of high calorific value syngas using high ash Indian coals.
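
A minimal sketch of a Coats-Redfern estimation, assuming first-order (volumetric-model) kinetics: ln[-ln(1-α)/T²] is regressed on 1/T, and the slope gives -Ea/R. The TGA numbers below are synthetic, generated from assumed kinetics for the demo only.

```python
import numpy as np

R = 8.314                                 # J/(mol K)
T = np.linspace(673, 873, 9)              # 400-600 °C in kelvin
Ea_true, A, beta = 80e3, 100.0, 10/60     # assumed Ea, pre-exponential, 10 K/min

# synthetic conversion alpha following first-order kinetics (demo only)
g = (A * R * T**2) / (beta * Ea_true) * np.exp(-Ea_true / (R * T))
alpha = 1 - np.exp(-g)

y = np.log(-np.log(1 - alpha) / T**2)     # Coats-Redfern ordinate
slope, intercept = np.polyfit(1.0 / T, y, 1)
print("estimated Ea = %.1f kJ/mol" % (-slope * R / 1e3))   # ~80 kJ/mol
```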

Keywords: clean coal technology, CO₂ gasification, activation energy, underground coal gasification

Procedia PDF Downloads 171
1259 The Normal-Generalized Hyperbolic Secant Distribution: Properties and Applications

Authors: Hazem M. Al-Mofleh

Abstract:

In this paper, a new four-parameter univariate continuous distribution called the Normal-Generalized Hyperbolic Secant Distribution (NGHS) is defined and studied. Some general and structural distributional properties are investigated and discussed, including: central and non-central n-th moments and incomplete moments, quantile and generating functions, the hazard function, Rényi and Shannon entropies, shapes (skewed right, skewed left, and symmetric), modality regions (unimodal and bimodal), and maximum likelihood estimators (MLEs) for the parameters. Finally, two real data sets are used to demonstrate empirically the flexibility and strength of the new distribution.

Keywords: bimodality, estimation, hazard function, moments, Shannon’s entropy

Procedia PDF Downloads 349
1258 Localization of Radioactive Sources with a Mobile Radiation Detection System using Profit Functions

Authors: Luís Miguel Cabeça Marques, Alberto Manuel Martinho Vale, José Pedro Miragaia Trancoso Vaz, Ana Sofia Baptista Fernandes, Rui Alexandre de Barros Coito, Tiago Miguel Prates da Costa

Abstract:

The detection and localization of hidden radioactive sources are of significant importance in countering the illicit traffic of Special Nuclear Materials (SNM) and other radioactive sources and materials. Radiation portal monitors are commonly used at airports, seaports, and international land borders for inspecting cargo and vehicles. However, this equipment can be expensive and is not available at all checkpoints. Consequently, the localization of SNM and other radioactive sources often relies on handheld equipment, which can be time-consuming. The current study presents the advantages of real-time analysis of gamma-ray count rate data from a mobile radiation detection system, based on simulated data and field tests. The incorporation of profit functions and decision criteria to optimize the detection system's path significantly enhances the radiation field information and reduces survey time during cargo inspection. For source position estimation, a maximum likelihood estimation algorithm is employed, and confidence intervals are derived using the Fisher information. The study also explores the impact of uncertainties, baselines, and thresholds on the performance of the profit function. The proposed detection system, utilizing a plastic scintillator with silicon photomultiplier sensors, boasts several benefits, including cost-effectiveness, high geometric efficiency, compactness, and lightweight design. This versatility allows for seamless integration into any mobile platform, be it air, land, maritime, or hybrid, and it can also serve as a handheld device. Furthermore, the integration of the detection system into drones, particularly multirotors, and its affordability enable the automation of source search and a substantial reduction in survey time, particularly when deploying a fleet of drones. While the primary focus is on inspecting maritime container cargo, the methodologies explored in this research can be applied to the inspection of other infrastructures, such as nuclear facilities or vehicles.
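
A minimal sketch of the estimation step described here, assuming Poisson count rates with an inverse-square source term: the source position is found by maximum likelihood, and a confidence interval is derived from the observed Fisher information (numerical Hessian). The geometry, rates, and model form are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
path = np.column_stack([np.linspace(0, 20, 60), np.zeros(60)])  # detector track (m)
src, S, b = np.array([12.0, 4.0]), 2000.0, 50.0                 # true source, rates

d2 = ((path - src)**2).sum(axis=1)
counts = rng.poisson(b + S / d2)                                # 1-s gamma counts

def nll(p):                              # Poisson negative log-likelihood
    x, y, logS = p
    d2 = (path[:, 0] - x)**2 + (path[:, 1] - y)**2 + 0.01  # guard the pole
    mu = b + np.exp(logS) / d2
    return np.sum(mu - counts * np.log(mu))

res = minimize(nll, x0=[10.0, 2.0, np.log(1000.0)], method="Nelder-Mead")

# observed Fisher information via a central-difference Hessian of the NLL
eps, k = 1e-3, len(res.x)
H = np.zeros((k, k))
for i in range(k):
    for j in range(k):
        ei, ej = np.eye(k)[i] * eps, np.eye(k)[j] * eps
        H[i, j] = (nll(res.x+ei+ej) - nll(res.x+ei-ej)
                   - nll(res.x-ei+ej) + nll(res.x-ei-ej)) / (4 * eps**2)
se = np.sqrt(np.diag(np.linalg.inv(H)))[:2]
print("source at", res.x[:2].round(2), "+/-", (1.96 * se).round(2))
```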

Keywords: plastic scintillators, profit functions, path planning, gamma-ray detection, source localization, mobile radiation detection system, security scenario

Procedia PDF Downloads 116
1257 Optimizing Protection of Medieval Glass Mosaic

Authors: J. Valach, S. Pospisil, S. Kuznecov

Abstract:

The paper deals with the experimental estimation of the future environmental load on the medieval mosaic of the Last Judgement at the entrance to St. Vitus Cathedral at Prague Castle. The mosaic suffers from seasonal changes in weather patterns, as well as from rain and its acidity, deposition of dust and soot particles from polluted air, and freeze-thaw cycles. These phenomena influence the state of the mosaic, whose elements, tesserae, are mostly made from glass prone to weathering. To determine the best future maintenance procedure, the relation between various weather scenarios and their effect on the mosaic was investigated. At the same time, a local method for the evaluation of protective coatings was developed. Together, both methods will contribute to better care for the mosaic and to the visitors' aesthetic experience.

Keywords: environmental load, cultural heritage, glass mosaic, protection

Procedia PDF Downloads 280
1256 Estimation of Delay Due to Loading–Unloading of Passengers by Buses and Reduction of Number of Lanes at Selected Intersections in Dhaka City

Authors: Sumit Roy, A. Uddin

Abstract:

One of the significant causes of increased delay at intersections under heterogeneous traffic conditions is a sudden reduction in the capacity of the roads. In this study, the delay due to this sudden capacity reduction is estimated. Two intersections in Dhaka city were brought into the study: the Kakrail intersection and the SAARC Foara intersection. At the Kakrail intersection, the sudden reduction of capacity is seen at three downstream legs of the intersection and is caused by buses slowing down or stopping for loading and unloading of passengers. At the SAARC Foara intersection, a sudden reduction of capacity was seen at two downstream legs: at one leg, it was due to loading and unloading of buses, and at the other leg, it was due to both loading and unloading of buses and a reduction in the number of lanes. With these considerations, the delay due to intentional stopping or slowing down of buses and the reduction in the number of lanes is estimated for these two intersections. The delay was calculated by two approaches. The first approach comes from the concept of shock waves in traffic streams: the delay is calculated by determining the flow, density, and speed before and after the sudden capacity reduction. The second approach comes from the deterministic analysis of queues: the delay is calculated by determining the volume, capacity, and reduced capacity of the road. After determining the delay from these two approaches, the results were compared. For this study, video of each of the two intersections was recorded for one hour at the evening peak. Necessary geometric data were also collected to determine speed, flow, density, and other parameters. The delay was calculated with one hour of data at both intersections. At the Kakrail intersection, the per-hour delays for the Kakrail circle leg were 5.79 and 7.15 minutes, for the Shantinagar cross intersection leg they were 13.02 and 15.65 minutes, and for the Paltan T intersection leg they were 3 and 1.3 minutes, for the first and second approaches, respectively. At the SAARC Foara intersection, the delay at the Shahbag leg was due only to intentional stopping or slowing down of buses, and was 3.2 and 3 minutes for the two approaches, respectively. For the Karwan Bazar leg, the delays for buses by the two approaches were 5 and 7.5 minutes, respectively, and for the reduction in the number of lanes, the delays were 2 and 1.78 minutes, respectively. Measuring the per-hour delay for the Kakrail leg at Kakrail circle, it is seen that, under the first approach, the intentional stopping and slowing of buses contribute 26.24% of the total delay at Kakrail circle. If loading and unloading of buses near the intersection are forbidden, and other measures for loading and unloading of passengers are established far enough from the intersections, the delay at intersections can be reduced at a significant scale and the performance of the intersections can be enhanced.
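
The two delay formulas can be sketched as follows; the traffic numbers are invented, and only the standard shock-wave and deterministic-queue relations described above are used.

```python
# (1) shock wave analysis: flow q (veh/h) and density k (veh/km) before and
#     during the capacity drop caused by a stopped bus
q1, k1 = 1800.0, 30.0
q2, k2 = 900.0, 90.0
w = (q2 - q1) / (k2 - k1)           # shock wave speed; negative = moves upstream
print("shock wave speed: %.1f km/h" % w)

# (2) deterministic queuing: demand v, capacity reduced to c for td hours,
#     full capacity s restored afterwards
v, c, s, td = 1500.0, 900.0, 1800.0, 2.0 / 60.0
Q = (v - c) * td                    # queue at the end of the stoppage (veh)
t0 = Q / (s - v)                    # time to dissipate the queue (h)
total_delay = 0.5 * Q * (td + t0)   # veh-h: area between cumulative curves
print("total delay: %.1f veh-min" % (total_delay * 60))
```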

Keywords: delay, deterministic queue analysis, shock wave, passenger loading-unloading

Procedia PDF Downloads 178
1255 Adaptive Target Detection of High-Range-Resolution Radar in Non-Gaussian Clutter

Authors: Lina Pan

Abstract:

The adaptive target detection of high-range-resolution radar is addressed in non-Gaussian clutter modeled as a spherically invariant random vector, for cases in which the estimated covariance matrix could become singular. First, the restricted maximum likelihood (RML) estimates of the unknown covariance matrix and scatterer amplitudes are derived for non-Gaussian clutter, and then the RML estimate of the texture is obtained. Finally, a novel detector is devised. It is shown that, without secondary data, the proposed detector outperforms the existing Kelly binary integrator.

Keywords: non-Gaussian clutter, covariance matrix estimation, target detection, maximum likelihood

Procedia PDF Downloads 465
1254 Bioinformatic Approaches in Population Genetics and Phylogenetic Studies

Authors: Masoud Sheidai

Abstract:

Biologists working in population genetics and phylogeny face research tasks such as assessing populations’ genetic variability and divergence, species relatedness, the evolution of genetic and morphological characters, and the identification of DNA SNPs with adaptive potential. To tackle these problems and reach concise conclusions, they must use proper and efficient statistical and bioinformatic methods as well as suitable genetic and morphological characteristics. In recent years, the application of different bioinformatic and statistical methods, based on various well-documented assumptions, has provided the proper analytical tools for researchers. Species delineation is usually carried out with different clustering methods, such as K-means clustering, based on distance measures appropriate to the studied features of the organisms. A well-defined species is assumed to be separated from other taxa by molecular barcodes. Species relationships are studied using molecular markers, which are analyzed by methods such as multidimensional scaling (MDS) and principal coordinate analysis (PCoA). Population structuring and genetic divergence within species are usually investigated by PCoA and PCA methods and network diagrams, based on bootstrapping of the data. The association of different genes and DNA sequences with ecological and geographical variables is determined by LFMM (latent factor mixed models) and redundancy analysis (RDA), which are based on Bayesian and distance methods, respectively. Molecular and morphological differentiating characters in the studied species may be identified by linear discriminant analysis (DA) and discriminant analysis of principal components (DAPC). We illustrate these methods and the related conclusions with examples from different edible and medicinal plant species.
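
As one concrete example of the listed methods, here is a minimal principal coordinate analysis (PCoA, classical MDS) sketch operating on a small invented distance matrix.

```python
import numpy as np

D = np.array([[0.0, 0.2, 0.7, 0.8],
              [0.2, 0.0, 0.6, 0.7],
              [0.7, 0.6, 0.0, 0.3],
              [0.8, 0.7, 0.3, 0.0]])    # pairwise distances between samples

def pcoa(D, k=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D**2) @ J             # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]      # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

print(pcoa(D))   # 2-D coordinates of the 4 samples; close pairs plot together
```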

Keywords: GWAS analysis, K-Means clustering, LFMM, multidimensional scaling, redundancy analysis

Procedia PDF Downloads 125
1253 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model

Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge

Abstract:

Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scouring, which involves turbulent flow, soil mechanics, and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles, as well as the effect of flow-driven forces on particles, facilitates accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) has been developed to simulate the scour process by directly modeling the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles, and the frictional and collisional forces between particles are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability. The modeling results reveal the criticality of particle impact in the assessment of scour depth, which, to the authors’ best knowledge, has not been considered in previous studies. The results of this study open new perspectives on scour depth and time assessment, which is the key to managing the failure risk of bridge infrastructure.

Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model

Procedia PDF Downloads 131
1252 Molecular Identification and Evolutionary Status of Lucilia bufonivora: An Obligate Parasite of Amphibians in Europe

Authors: Gerardo Arias, Richard Wall, Jamie Stevens

Abstract:

Lucilia bufonivora Moniez is an obligate parasite of toads and frogs widely distributed in Europe. Its sister taxon, Lucilia silvarum Meigen, behaves mainly as a carrion breeder in Europe; however, it has been reported as a facultative parasite of amphibians. These two closely related species are morphologically almost identical, which has led to misidentification, and, in fact, it has been suggested that the amphibian myiasis cases attributed to L. silvarum in Europe should be attributed to L. bufonivora. Both species remain poorly studied, and their taxonomic relationships are still unclear. The identification of the larval specimens involved in amphibian myiasis with molecular tools, together with phylogenetic analysis of these two closely related species, may resolve this problem. In this work, seventeen unidentified larval specimens extracted from toad myiasis cases in the UK, the Netherlands, and Switzerland were obtained; their COX1 (mtDNA) and EF1-α (nuclear DNA) gene regions were amplified and then sequenced. The 17 larval samples were identified with both molecular markers as L. bufonivora. Phylogenetic analysis was carried out with 10 other blowfly species, including L. silvarum samples from the UK and USA. Bayesian inference trees of COX1 and a combined-gene dataset suggested that L. silvarum and L. bufonivora are separate sister species. However, the nuclear gene EF1-α does not appear to resolve their relationships, suggesting that the rate of evolution of the mtDNA is much faster than that of the nuclear DNA. This work provides molecular evidence for the successful identification of L. bufonivora and a molecular analysis of populations of this obligate parasite from different locations across Europe. The relationships with L. silvarum are discussed.

Keywords: calliphoridae, molecular evolution, myiasis, obligate parasitism

Procedia PDF Downloads 242
1251 Parameters Estimation of Power Function Distribution Based on Selective Order Statistics

Authors: Moh'd Alodat

Abstract:

In this paper, we discuss the power function distribution and derive the maximum likelihood estimators of its parameter and the reliability parameter. We derive the large-sample properties of the estimators based on the selective order statistics scheme. We conduct simulation studies to investigate the significance of the selective order statistics scheme in our setup and to compare the efficiency of the newly proposed estimators.
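
Under simple random sampling, the MLE for the power function distribution f(x) = a·x^(a−1) on (0, 1) has a closed form; the sketch below shows that baseline estimator and the implied reliability estimate. The paper's selective-order-statistics refinement is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
a_true = 2.5
x = rng.random(500) ** (1.0 / a_true)   # inverse-CDF sampling, F(x) = x**a

a_hat = -len(x) / np.log(x).sum()       # MLE: a = -n / sum(log x_i)
x0 = 0.8                                # reliability P(X > x0) = 1 - x0**a
print("a_hat = %.3f, P(X > %.1f) = %.3f" % (a_hat, x0, 1 - x0**a_hat))
```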

Keywords: fisher information, maximum likelihood estimator, power function distribution, ranked set sampling, selective order statistics sampling

Procedia PDF Downloads 464
1250 An Investigation on Hot-Spot Temperature Calculation Methods of Power Transformers

Authors: Ahmet Y. Arabul, Ibrahim Senol, Fatma Keskin Arabul, Mustafa G. Aydeniz, Yasemin Oner, Gokhan Kalkan

Abstract:

In the standards IEC 60076-2 and IEC 60076-7, three different hot-spot temperature estimation methods are suggested. In this study, the algorithms used in hot-spot temperature calculations are analyzed by comparing them with the results of an experimental set-up based on a Transformer Monitoring System (TMS) in use. In the tested system, the TMS uses only the top-oil temperature and the load ratio for the hot-spot temperature calculation, together with some constants taken from the standards’ agreed statement tables. During the tests, it emerged that the hot-spot temperature calculation method performs only a simple calculation and does not use all the other significant variables that could affect the hot-spot temperature.
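
The calculation described appears consistent with the simple steady-state relation of IEC 60076-7 (hot-spot = top-oil temperature plus a gradient H·g_r·K^y); the sketch below is an assumption of that form with illustrative constants of the kind the standard tabulates, not the tested system's exact algorithm.

```python
# Simple steady-state hot-spot relation using only measured top-oil
# temperature and load ratio; H, g_r, y are illustrative constants.
def hot_spot(theta_top_oil, K, H=1.3, g_r=14.5, y=1.3):
    """theta_top_oil: measured top-oil temperature (°C); K: load ratio."""
    return theta_top_oil + H * g_r * K ** y

print(hot_spot(65.0, 0.9))   # ~81 °C at 90% load
```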

Keywords: hot-spot temperature, monitoring system, power transformer, smart grid

Procedia PDF Downloads 573
1249 Residual Life Estimation of K-out-of-N Cold Standby System

Authors: Qian Zhao, Shi-Qi Liu, Bo Guo, Zhi-Jun Cheng, Xiao-Yue Wu

Abstract:

Cold standby redundancy is considered to be an effective mechanism for improving system reliability and is widely used in industrial engineering. However, because of the complexity of the reliability structure, there is little literature on the residual life of cold standby systems consisting of complex components. In this paper, a simulation method is presented to predict the residual life of a k-out-of-n cold standby system. In practical cases, the failure information of a system is unknown, partly known, or completely known; our proposed method is designed to deal with these three scenarios, respectively, and the differences between the procedures are analyzed. Finally, numerical examples are used to validate the proposed simulation method.
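
A minimal sketch of the simulation idea for the scenario with no failure information, assuming exponential component lives (a distributional choice made only for illustration, not the paper's model):

```python
# Monte Carlo residual life of a k-out-of-n cold standby system.
import numpy as np

def system_life(k, n, rate, rng):
    """k active units, n-k cold spares; a failed unit is instantly replaced
    by a spare; the system fails when fewer than k units can run."""
    spares = n - k
    active = rng.exponential(1.0 / rate, k)   # absolute failure times
    while True:
        i = np.argmin(active)
        t = active[i]
        if spares == 0:
            return t                          # no spare left: system fails
        spares -= 1
        active[i] = t + rng.exponential(1.0 / rate)  # fresh spare switched in

def residual_life(k, n, rate, t_now, sims=50_000, seed=5):
    rng = np.random.default_rng(seed)
    lives = np.array([system_life(k, n, rate, rng) for _ in range(sims)])
    survivors = lives[lives > t_now]          # condition on survival to t_now
    return (survivors - t_now).mean()         # mean residual life E[T-t | T>t]

print("mean residual life:", residual_life(k=2, n=4, rate=0.1, t_now=5.0))
```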

Keywords: cold standby system, k-out-of-n, residual life, simulation sampling

Procedia PDF Downloads 401
1248 Estimation of Antiurolithiatic Activity of a Biochemical Medicine, Magnesia phosphorica, in Ethylene Glycol-Induced Nephrolithiasis in Wistar Rats by Urine Analysis, Biochemical, Histopathological, and Electron Microscopic Studies

Authors: Priti S. Tidke, Chandragouda R. Patil

Abstract:

The present study was designed to investigate the effect of Magnesia phosphorica, a biochemical medicine, on urine screening, biochemical, histopathological, and electron microscopic parameters in ethylene glycol-induced nephrolithiasis in rats. Male Wistar albino rats were divided into six groups and were orally administered saline once daily (IR-sham and IR-control) or Magnesia phosphorica 100 mg/kg twice daily for 24 days. The effect of various dilutions of biochemical Mag phos (3x, 6x, 30x) on urine output was determined by comparing the urine volume collected by keeping individual animals in metabolic cages. Calcium oxalate urolithiasis and hyperoxaluria in male Wistar rats were induced by oral administration of 0.75% ethylene glycol daily for 24 days. Simultaneous administration of biochemical Mag phos 3x, 6x, or 30x (100 mg/kg p.o. twice a day) along with ethylene glycol significantly decreased the calcium oxalate, urea, creatinine, calcium, magnesium, chloride, phosphorus, albumin, and alkaline phosphatase content in urine compared with the vehicle-treated control group. After the completion of the treatment period, the animals were sacrificed, and the kidneys were removed and subjected to microscopic examination for possible stone formation. Histological examination showed that treatment with biochemical Mag phos (3x, 6x, 30x; 100 mg/kg, p.o.) along with ethylene glycol inhibited the growth of calculi and reduced the number of stones in the kidney compared with the control group. Biochemical Mag phos at 3x dilution and its crude equivalent also showed potent diuretic and antiurolithiatic activity in ethylene glycol-induced urolithiasis, and a significant decrease in the weight of stones was observed after treatment in animals that received them, in comparison with the control groups. From this study, it can be proposed that the 3x dilution of biochemical Mag phos exhibits a significant inhibitory effect on crystal growth, with improvement of kidney function, and substantiates claims on the biological activity of the twelve tissue remedies, which can be proved scientifically through laboratory animal studies.

Keywords: Mag phos, Magnesia phosphorica, biochemic medicine, urolithiasis, kidney stone, ethylene glycol

Procedia PDF Downloads 428
1247 Development of a Model Based on Wavelets and Matrices for the Treatment of Weakly Singular Partial Integro-Differential Equations

Authors: Somveer Singh, Vineet Kumar Singh

Abstract:

We present a new model based on viscoelasticity for non-Newtonian fluids. We use a matrix-formulated algorithm to approximate solutions of a class of partial integro-differential equations with given initial and boundary conditions. Some numerical results are presented to simplify the application of the operational matrix formulation and reduce the computational cost. Convergence analysis, error estimation, and numerical stability of the method are also investigated. Finally, some test examples are given to demonstrate the accuracy and efficiency of the proposed method.

Keywords: Legendre Wavelets, operational matrices, partial integro-differential equation, viscoelasticity

Procedia PDF Downloads 336
1246 Hansen Solubility Parameter from Surface Measurements

Authors: Neveen AlQasas, Daniel Johnson

Abstract:

Membranes for water treatment are an established technology that attracts great attention due to its simplicity and cost-effectiveness. However, membranes in operation suffer from the adverse effect of membrane fouling. Bio-fouling is a phenomenon that occurs at the water-membrane interface and is a dynamic process initiated by the adsorption of dissolved organic material, including biomacromolecules, on the membrane surface. After initiation, attachment of microorganisms occurs, followed by biofilm growth. The biofilm blocks the pores of the membrane and consequently reduces the water flux. Moreover, the presence of a fouling layer can have a substantial impact on the membrane separation properties. Understanding the mechanism of the initiation phase of biofouling is a key point in eliminating biofouling on membrane surfaces. The adhesion and attachment of different fouling materials are affected by the surface properties of the membrane materials. Therefore, the surface properties of different polymeric materials have been studied in terms of their surface energies and Hansen solubility parameters (HSP). The difference between the combined HSP parameters (the HSP distance) allows prediction of the affinity of two materials for each other. The possibility of measuring the HSP of different polymer films via surface measurements, such as contact angle, has been thoroughly investigated. Knowing the HSP of a membrane material and the HSP of a specific foulant facilitates the estimation of the HSP distance between the two, and therefore the strength of attachment to the surface. Contact angle measurements using fourteen different solvents on five different polymeric films were carried out using the sessile drop method. Solvents were ranked as good or bad using different ranking methods, and the rankings were used to calculate the HSP of each polymeric film. The results clearly indicate the absence of a direct relation between the contact angle values of each film and the HSP distance between each polymer film and the solvents used. Therefore, estimating HSP via contact angle alone is not sufficient. However, it was found that if the surface tensions and viscosities of the solvents used are taken into account in the analysis of the contact angle values, a prediction of the HSP from contact angle measurements is possible. This was carried out via the training of a neural network model. The trained neural network model has three inputs: the contact angle value, and the surface tension and viscosity of the solvent used. The model is able to predict the HSP distance between the solvent used and the tested polymer (material). The HSP distance prediction is further used to estimate the total and individual HSP parameters of each tested material. The results showed an accuracy of about 90% for all five studied films.
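
A sketch of such a three-input network using a generic regressor; the training data below are random placeholders standing in for the measured contact angles, so the fitted model itself is illustrative only.

```python
# Small neural network mapping (contact angle, surface tension, viscosity)
# to HSP distance; the data are synthetic stand-ins for real measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
X = np.column_stack([rng.uniform(10, 110, 200),   # contact angle (deg)
                     rng.uniform(18, 73, 200),    # surface tension (mN/m)
                     rng.uniform(0.3, 60, 200)])  # viscosity (mPa s)
y = 0.2*X[:, 0] - 0.1*X[:, 1] + 0.05*X[:, 2] + rng.normal(0, 1, 200)  # fake Ra

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                     random_state=0).fit(scaler.transform(X), y)
print(model.predict(scaler.transform([[65.0, 40.0, 1.0]])))  # predicted Ra
```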

Keywords: surface characterization, hansen solubility parameter estimation, contact angle measurements, artificial neural network model, surface measurements

Procedia PDF Downloads 94
1245 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is progressively a topic of growing interest in all fields of science and engineering. The changes currently experienced by the world, economic and environmental, have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a predefined threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalised Pareto distribution is widely used, although, in the case of exceedances of significant wave data (Hs), the two-parameter Weibull and the Exponential distribution, the latter being a special case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution for modeling exceedances over a certain threshold u, and references that treat the Generalised Pareto distribution as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of the application of statistical models to characterize exceedances of wave data. Comparisons of the Generalised Pareto, the two-parameter Weibull, and the Exponential distribution are presented for different values of the threshold u. Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and depending on the value of the threshold u, different results are obtained. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed, and the respective conclusions are drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, due to its growing importance, should be addressed carefully for an efficient estimation of very low-occurrence events.
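
A minimal sketch of such a comparison on simulated exceedances, fitting the three candidate models over a threshold u with scipy and comparing log-likelihoods; real buoy data would replace the synthetic series.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
hs = stats.genpareto.rvs(0.1, loc=0, scale=0.5, size=5000, random_state=rng) + 2.0

u = 3.0
exc = hs[hs > u] - u                       # exceedances over the threshold

fits = {
    "GPD":     (stats.genpareto,   stats.genpareto.fit(exc, floc=0)),
    "Weibull": (stats.weibull_min, stats.weibull_min.fit(exc, floc=0)),
    "Expon":   (stats.expon,       stats.expon.fit(exc, floc=0)),
}
for name, (dist, params) in fits.items():
    ll = dist.logpdf(exc, *params).sum()   # higher log-likelihood = better fit
    print("%-8s params=%s  loglik=%.1f" % (name, np.round(params, 3), ll))
```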

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 273
1244 Outlier Detection in Stock Market Data using Tukey Method and Wavelet Transform

Authors: Sadam Alwadi

Abstract:

Outlier values are a problem that frequently occurs in the data observation or recording process; thus, data imputation has become an essential matter. In this work, we make use of the methods described in prior work to detect outlier values in a collection of stock market data. In order to implement the detection and find some solutions that may be helpful for investors, real closing price data were obtained from the Amman Stock Exchange (ASE). The Tukey method and the Maximal Overlap Discrete Wavelet Transform (MODWT) are used to detect and impute the outlier values.
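
The Tukey rule itself is simple; below is a minimal sketch with invented closing prices, using the median as a stand-in imputation (the paper's MODWT-based step is not reproduced here).

```python
# Tukey (boxplot) rule: flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
import numpy as np

prices = np.array([5.1, 5.2, 5.0, 5.3, 12.9, 5.2, 5.1, 0.4, 5.4, 5.3])
q1, q3 = np.percentile(prices, [25, 75])
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = (prices < lo) | (prices > hi)
cleaned = np.where(outliers, np.median(prices), prices)   # simple imputation
print("flagged:", prices[outliers], "->", cleaned[outliers])
```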

Keywords: outlier values, imputation, stock market data, detecting, estimation

Procedia PDF Downloads 81
1243 Markov-Chain-Based Optimal Filtering and Smoothing

Authors: Garry A. Einicke, Langford B. White

Abstract:

This paper describes an optimum filter and smoother for recovering a Markov process message from noisy measurements. The developments follow from an equivalence between a state space model and a hidden Markov chain. The ensuing filter and smoother employ transition probability matrices and approximate probability distribution vectors. The properties of the optimum solutions are retained; namely, the estimates are unbiased and minimize the variance of the output estimation error, provided that the assumed parameter set is correct. Methods for estimating unknown parameters from noisy measurements are discussed. Signal recovery examples are described in which performance benefits are demonstrated at an increased calculation cost.
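
A minimal sketch of the forward (filtering) recursion for a two-state hidden Markov chain with Gaussian measurement noise; the transition matrix and noise level are invented for illustration, and the smoothing pass is omitted.

```python
import numpy as np

A = np.array([[0.95, 0.05],        # transition probabilities A[i, j] = P(j | i)
              [0.10, 0.90]])
states = np.array([-1.0, 1.0])     # state values of the message
sigma = 0.8                        # measurement noise standard deviation

def gauss(y, m, s):
    return np.exp(-0.5 * ((y - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def forward_filter(ys, p0=np.array([0.5, 0.5])):
    p = p0.copy()
    estimates = []
    for y in ys:
        p = A.T @ p                      # predict: propagate through the chain
        p *= gauss(y, states, sigma)     # update: weight by the likelihood
        p /= p.sum()                     # renormalize the distribution vector
        estimates.append(p @ states)     # minimum-variance estimate of message
    return np.array(estimates)

print(forward_filter(np.array([-1.2, -0.8, 0.3, 1.1, 0.9])))
```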

Keywords: optimal filtering, smoothing, Markov chains

Procedia PDF Downloads 317
1242 Digital Forgery Detection by Signal Noise Inconsistency

Authors: Bo Liu, Chi-Man Pun

Abstract:

A novel technique for digital forgery detection based on signal noise inconsistency is proposed in this paper. A forged area spliced in from another picture contains features that may be inconsistent with the rest of the image, and the noise pattern and level are possible factors revealing such inconsistency. To detect such noise discrepancies, the test picture is initially segmented into small pieces, and the noise pattern and level of each segment are estimated using various filters. The noise features constructed in this step are utilized in an energy-based graph cut to expose the forged area in the final step. Experimental results show that our method provides a good illustration of regions with noise inconsistency in various scenarios.
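
A sketch of the per-segment noise estimation idea using Immerkær's fast noise-variance estimator, one common choice for this step (the paper's exact filters are not specified in the abstract); the image is synthetic, with a noisier "spliced" patch that stands out in the block-wise estimates.

```python
import numpy as np
from scipy.signal import convolve2d

M = np.array([[ 1, -2,  1],
              [-2,  4, -2],
              [ 1, -2,  1]], dtype=float)       # Laplacian-difference mask

def noise_sigma(block):
    """Immerkaer's estimator: sigma = sqrt(pi/2) * sum|I*M| / (6(W-2)(H-2))."""
    h, w = block.shape
    conv = convolve2d(block, M, mode="valid")
    return np.sqrt(np.pi / 2) * np.abs(conv).sum() / (6.0 * (w - 2) * (h - 2))

rng = np.random.default_rng(8)
img = rng.normal(128, 2.0, (128, 128))             # background noise level ~2
img[32:64, 32:64] += rng.normal(0, 6.0, (32, 32))  # noisier "spliced" patch

for r in range(0, 128, 32):                        # block-wise noise map
    print([round(noise_sigma(img[r:r+32, c:c+32]), 1) for c in range(0, 128, 32)])
```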

Keywords: forgery detection, splicing forgery, noise estimation, noise

Procedia PDF Downloads 461