Search results for: robust estimators
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1442

1232 Design and Control of a Knee Rehabilitation Device Using an MR-Fluid Brake

Authors: Mina Beheshti, Vida Shams, Mojtaba Esfandiari, Farzaneh Abdollahi, Abdolreza Ohadi

Abstract:

Most of the people who survive a stroke need rehabilitation tools to regain their mobility. A core component of these devices is the brake actuator. The goal of this study is to design and control a magnetorheological brake that can be used as a rehabilitation tool. The fluid used in this brake is a magnetorheological (MR) fluid, whose properties change with the applied magnetic field. This feature of the fluid allows the braking properties to be controlled. In this research, different MR brake designs are first introduced, and for each design the brake dimensions are determined based on the torque required for foot movement. To calculate the brake dimensions, it is assumed that the shear stress distribution in the fluid is uniform and that the fluid is in its saturated state. After designing the rehabilitation brake, a mathematical model of the movement of a healthy person is extracted. Due to the nonlinear nature of the system and its variability, various adaptive, neural network, and robust controllers have been implemented to estimate the parameters and control the system. After calculating the torque and control current, the controller with the best performance in terms of error and control current is selected. Finally, this controller is applied to experimental data of the patient's movements, and the control current required to achieve the desired torque and motion is calculated.
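
The parameter-estimation and torque-to-current steps can be illustrated with a minimal sketch, as below; the limb parameters, brake gain, and trajectory are invented placeholders, and the paper's adaptive, neural network, and robust controllers are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.005, 2000
t = np.arange(n) * dt
J_true, b_true, k_brake = 0.08, 0.4, 2.5   # limb inertia, damping, brake torque per ampere (all assumed)

# simulated "healthy movement" data: prescribed knee kinematics and the torque they imply
q = 0.5 * np.sin(2 * t)                    # knee angle [rad]
qd = np.gradient(q, dt)
qdd = np.gradient(qd, dt)
tau = J_true * qdd + b_true * qd + 0.01 * rng.normal(size=n)   # noisy joint torque

# least-squares parameter estimation: tau ~ [qdd, qd] @ [J, b]
A = np.column_stack([qdd, qd])
J_hat, b_hat = np.linalg.lstsq(A, tau, rcond=None)[0]

# control current that would command the desired resistive torque through the MR brake
tau_desired = J_hat * qdd + b_hat * qd
current = tau_desired / k_brake
print(f"estimated J = {J_hat:.3f}, b = {b_hat:.3f}; peak control current = {np.abs(current).max():.2f} A")
```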

Keywords: rehabilitation, magnetorheological fluid, knee, brake, adaptive control, robust control, neural network control, torque control

Procedia PDF Downloads 121
1231 Invasive Ranges of Gorse (Ulex europaeus) in South Australia and Sri Lanka Using Species Distribution Modelling

Authors: Champika S. Kariyawasam

Abstract:

The distribution of gorse (Ulex europaeus) plants in South Australia has been modelled using 126 presence-only location records as a function of seven climate parameters. The predicted range of U. europaeus is mainly along the Mount Lofty Ranges in the Adelaide Hills and on Kangaroo Island. Annual precipitation and yearly average aridity index were the highest-contributing variables in the final model formulation. The jackknife procedure was employed to identify the contribution of the different variables to the gorse model outputs, and response curves were used to predict changes under changing environmental variables. This analysis revealed that the combined effect of variables can have a completely different impact on the model prediction than the original variables acting on their own. This work also demonstrates the need for a careful approach when selecting environmental variables for projecting correlative models to a climatically distinct area. Maxent acts as a robust model when projecting the fitted species distribution model to another area with changing climatic conditions, whereas the generalized linear model, BIOCLIM, and DOMAIN models appear less robust in this regard. These findings are important not only for predicting and managing invasive alien gorse in South Australia and Sri Lanka but also in other countries within its invasive range.
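
A minimal presence/background sketch of the modelling workflow is given below; ordinary penalized logistic regression on synthetic data stands in for Maxent (a dedicated tool in its own right), and the predictor names are illustrative rather than the study's actual variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
climate_vars = ["annual_precip", "aridity_index", "max_temp", "min_temp",
                "precip_seasonality", "temp_seasonality", "radiation"]   # assumed predictors

# synthetic presence (126 records) and random background points in climate space
presence = rng.normal(loc=1.0, scale=1.0, size=(126, len(climate_vars)))
background = rng.normal(loc=0.0, scale=1.5, size=(2000, len(climate_vars)))
X = np.vstack([presence, background])
y = np.r_[np.ones(len(presence)), np.zeros(len(background))]

model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)

# jackknife-style contribution: drop one predictor at a time and compare scores
full_score = model.score(X, y)
for j, name in enumerate(climate_vars):
    X_drop = np.delete(X, j, axis=1)
    score = LogisticRegression(max_iter=1000).fit(X_drop, y).score(X_drop, y)
    print(f"{name}: score without = {score:.3f} (full model = {full_score:.3f})")
```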

Keywords: invasive species, Maxent, species distribution modelling, Ulex europaeus

Procedia PDF Downloads 105
1230 Nexus among Foreign Private Investment, CO2 Emissions, Energy Consumption and Sustainable Economic Growth

Authors: Aysha Zamir

Abstract:

This study examines to what extent foreign private investment (FPI) affects the clean industrial environment and sustainable economic growth through developed countries' investment in China. Moreover, this study investigates the association among FPI, CO2 emissions, energy consumption, and sustainable economic growth. The study uses random-effects, generalized least squares (GLS), and panel VAR estimators for the data analysis. The results indicate that the Chinese economy has a strongly positive influence on the location and investment choices of emerging and developed countries in the domestic market. Furthermore, investment from emerging and developed economies increases the contribution of domestic firms and environmental sustainability to the national economy. Further results show that foreign private investment and gross domestic investment have a positive impact on sustainable economic growth.

Keywords: clean industrial environment, energy consumption, CO2 emission, foreign private investment, developed and emerging economies

Procedia PDF Downloads 98
1229 Self-Tuning Dead-Beat PD Controller for Pitch Angle Control of a Bench-Top Helicopter

Authors: H. Mansor, S.B. Mohd-Noor, N. I. Othman, N. Tazali, R. I. Boby

Abstract:

This paper presents an improved robust proportional-derivative (PD) controller for a 3-degree-of-freedom (3-DOF) bench-top helicopter using an adaptive methodology. A bench-top helicopter is a laboratory-scale helicopter widely used in teaching laboratories and research. A PD controller for the 3-DOF bench-top helicopter has been developed by Quanser. Experiments showed that the transient response of the designed PD controller has a very large steady-state error, i.e., 50%, which is very serious. The objective of this research is to improve the performance of the existing PD pitch-angle control on the bench-top helicopter by integrating the PD controller with an adaptive controller. A standard adaptive controller will usually produce zero steady-state error; however, the response time to reach the desired set point is long. Therefore, this paper proposes an adaptive controller with a deadbeat algorithm to overcome these limitations. An output response that is fast, robust, and updated online is expected. Performance comparisons have been carried out between the proposed self-tuning deadbeat PD controller and the standard PD controller. The efficiency of the self-tuning deadbeat controller is demonstrated by the test results in terms of faster settling time, zero steady-state error, and the capability of the controller to be updated online.
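
A conceptual sketch of the self-tuning idea on a toy first-order pitch model is shown below; the plant constants, gains, and adaptation rule are illustrative assumptions, and the paper's deadbeat design is not reproduced.

```python
import numpy as np

dt, T_end = 0.01, 20.0
a, b = -0.5, 1.2                 # assumed pitch dynamics: theta_dot = a*theta + b*u
theta, theta_prev = 0.0, 0.0     # pitch angle [deg]
Kp, Kd = 1.0, 0.5                # initial PD gains
gamma = 0.02                     # adaptation rate (assumed)
setpoint = 10.0                  # desired pitch [deg]

for _ in range(int(T_end / dt)):
    error = setpoint - theta
    d_error = -(theta - theta_prev) / dt
    u = Kp * error + Kd * d_error        # PD control law
    Kp += gamma * error**2 * dt          # self-tuning: raise Kp while a steady error persists
    theta_prev = theta
    theta += dt * (a * theta + b * u)    # Euler step of the toy plant

print(f"remaining steady-state error: {setpoint - theta:.2f} deg, adapted Kp = {Kp:.2f}")
```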

Keywords: adaptive control, deadbeat control, bench-top helicopter, self-tuning control

Procedia PDF Downloads 294
1228 Dynamic Fault Diagnosis for Semi-Batch Reactor Under Closed-Loop Control via Independent RBFNN

Authors: Abdelkarim M. Ertiame, D. W. Yu, D. L. Yu, J. B. Gomm

Abstract:

In this paper, a new robust fault detection and isolation (FDI) scheme is developed to monitor a multivariable nonlinear chemical process, the Chylla-Haase polymerization reactor, when it is under cascade PI control. The scheme employs a radial basis function neural network (RBFNN) in an independent mode to model the process dynamics, using the weighted sum-squared prediction error as the residual. The recursive orthogonal least squares (ROLS) algorithm is employed to train the model in order to overcome the training difficulty of the independent mode of the network. Another RBFNN is then used as a fault classifier to isolate faults from the different features contained in the residual vector. Several actuator and sensor faults are simulated in a nonlinear simulation of the reactor in Simulink, and the scheme is used to detect and isolate the faults on-line. The simulation results illustrate the effectiveness and robustness of the scheme even when the process is subjected to disturbances and uncertainties, including significant changes in the monomer feed rate, fouling factor, impurity factor, ambient temperature, and measurement noise.
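
The residual idea can be sketched as follows: an independent RBF model trained on healthy operating data predicts the output, and the squared prediction error serves as the fault residual. The data, network size, and least-squares training below are illustrative stand-ins (ROLS and the cascade-controlled reactor model are not reproduced).

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix for inputs X given the chosen centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# synthetic "healthy" operating data: inputs u and measured output y
u = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(u[:, 0]) + 0.5 * u[:, 1] + 0.01 * rng.normal(size=500)

centers = u[rng.choice(len(u), 25, replace=False)]
Phi = rbf_design(u, centers, width=0.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # batch least-squares training (ROLS stand-in)

# on-line monitoring: residual on new data, with an injected sensor bias fault
u_new = rng.uniform(-1, 1, size=(100, 2))
y_new = np.sin(u_new[:, 0]) + 0.5 * u_new[:, 1]
y_new[50:] += 0.3                                   # simulated sensor fault
residual = (y_new - rbf_design(u_new, centers, 0.5) @ w) ** 2
print("mean residual healthy:", residual[:50].mean(), " faulty:", residual[50:].mean())
```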

Keywords: robust fault detection, cascade control, independent RBF model, RBF neural networks, Chylla-Haase reactor, FDI under closed-loop control

Procedia PDF Downloads 473
1227 A Comparative Analysis of Solid Waste Treatment Technologies on Cost and Environmental Basis

Authors: Nesli Aydin

Abstract:

Waste management decision making in developing countries has moved towards being more pragmatic, transparent, sustainable, and comprehensive. Turkey, as a candidate country of the European Union, is required to make its waste-related legislation compatible with European legislation. Improper practices such as open burning and open dumping must be abandoned urgently, and robust waste management systems have to be established. Determining an optimum waste management system in any region requires a comprehensive analysis in which many criteria are taken into account by the stakeholders. In conducting this sort of analysis, there are two main criteria evaluated by waste management analysts: economic viability and environmental friendliness. From an analytical point of view, a central characteristic of sustainable development is economic-ecological integration. Building a robust waste management system is expected to require significant effort and cooperation between the stakeholders in developing countries such as Turkey. In this regard, this study aims to provide data on the cost and environmental burdens of waste treatment technologies such as an incinerator, an autoclave (with different capacities), a hydroclave, and a microwave, coupled with updated information on calculation methods and a framework for comparing the performance of any proposed scenario on a cost and environmental basis.

Keywords: decision making, economic viability, environmental friendliness, waste management systems

Procedia PDF Downloads 280
1226 Performance Analysis of Geophysical Database Referenced Navigation: The Combination of Gravity Gradient and Terrain Using Extended Kalman Filter

Authors: Jisun Lee, Jay Hyoun Kwon

Abstract:

As an alternative way to compensate for INS (inertial navigation system) error in non-GNSS (Global Navigation Satellite System) environments, geophysical database referenced navigation is being studied. In this study, gravity gradient and terrain data were combined to complement the weaknesses of a single geophysical data source and to improve the stability of the positioning. The main process for compensating the INS error using the geophysical database was constructed on the basis of the EKF (Extended Kalman Filter). In detail, two types of combination method, a centralized and a decentralized filter, were applied to examine the pros and cons of each algorithm and to find more robust results. The performance of each navigation algorithm was evaluated through simulation, assuming that the aircraft flies along nine different trajectories with a precise geophysical database and sensors. In particular, the results were compared with those from navigation referenced to a single geophysical database, to check the improvement gained by combining heterogeneous geophysical databases. It was found that the overall navigation performance was improved, but not all trajectories produced better navigation results when gravity gradient was combined with terrain data. It was also found that the centralized filter generally showed more stable results, because the weight allocation in the decentralized filter could not be optimized due to local inconsistencies in the geophysical data. In the future, switching between geophysical data sources or combining different navigation algorithms will be necessary to obtain more robust navigation results.
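
A bare-bones sketch of the EKF measurement update used in database-referenced navigation is given below; the state is a two-dimensional horizontal position error, and the gravity-gradient and terrain "databases" are smooth synthetic fields, so all numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.zeros(2)                  # estimated INS position error [m]
P = np.eye(2) * 100.0            # state covariance
Q = np.eye(2) * 0.01             # process noise
R = np.diag([0.05, 2.0])         # measurement noise: [gravity gradient, terrain height]

def db_lookup(pos):
    """Synthetic smooth geophysical fields standing in for the real databases."""
    grav_grad = np.sin(0.01 * pos[0]) + np.cos(0.01 * pos[1])
    terrain = 50.0 * np.sin(0.005 * pos[0]) * np.cos(0.005 * pos[1])
    return np.array([grav_grad, terrain])

def jacobian(pos, eps=1.0):
    """Numerical Jacobian of the database lookup with respect to position."""
    H = np.zeros((2, 2))
    for j in range(2):
        dp = np.zeros(2); dp[j] = eps
        H[:, j] = (db_lookup(pos + dp) - db_lookup(pos - dp)) / (2 * eps)
    return H

true_pos = np.array([1200.0, -800.0])
ins_pos = true_pos + np.array([30.0, -20.0])      # drifted INS position

for _ in range(50):                               # centralized filter: both sensors in one update
    P = P + Q                                     # (static) prediction step
    z = db_lookup(true_pos) + rng.normal(0.0, [0.05, 1.0])
    h = db_lookup(ins_pos - x)                    # predicted measurement at the corrected position
    H = -jacobian(ins_pos - x)                    # measurement Jacobian w.r.t. the error state
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - h)
    P = (np.eye(2) - K @ H) @ P

print("estimated position error:", np.round(x, 1), " true error:", ins_pos - true_pos)
```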

Keywords: Extended Kalman Filter, geophysical database referenced navigation, gravity gradient, terrain

Procedia PDF Downloads 316
1225 Optimal Design of Multi-Machine Power System Stabilizers Using Interactive Honey Bee Mating Optimization

Authors: Hossein Ghadimi, Alireza Alizadeh, Oveis Abedinia, Noradin Ghadimi

Abstract:

This paper presents an enhanced Honey Bee Mating Optimization (HBMO), called Interactive Honey Bee Mating Optimization (IHBMO), to solve the optimal design of multi-machine power system stabilizer (PSS) parameters. Power system stabilizers are now routinely used in industry to damp out power system oscillations. The design problem of the proposed controller is formulated as an optimization problem, and the IHBMO algorithm is employed to search for the optimal controller parameters. The proposed method is applied to a multi-machine power system (MPS). The method suggested in this paper can be used to design robust power system stabilizers that guarantee the required closed-loop performance over a pre-specified range of operating and system conditions. The simplicity in design and implementation of the proposed stabilizers makes them better suited for practical applications in real plants. Non-linear simulation results are presented over a wide range of operating conditions in comparison with a PSO-based tuned stabilizer and the conventional PSS (CPSS), using the FD and ITAE performance indices. The evaluation of the results shows that the proposed control strategy achieves good robust performance for a wide range of system parameters and load changes in the presence of system nonlinearities and is superior to the other controllers.

Keywords: power system stabilizer, IHBMO, multimachine, nonlinearities

Procedia PDF Downloads 473
1224 An Alternative Stratified Cox Model for Correlated Variables in Infant Mortality

Authors: K. A. Adeleke

Abstract:

Often in epidemiological research, introducing a stratified Cox model can account for the existence of interactions of some inherent factors with some major, noticeable factors. This research work aimed at modelling correlated variables in infant mortality in the presence of inherent factors affecting the infant survival function. An alternative semiparametric stratified Cox model is proposed with a view to taking care of multilevel factors that have interactions with others. The model was used to analyse infant mortality data from the Nigeria Demographic and Health Survey (NDHS), with multilevel factors (tetanus, polio, and breastfeeding) correlated with the main factors (sex, size, and mode of delivery). Asymptotic properties of the estimators are also studied via simulation. The model, tested on the data, showed a good fit and performed differently depending on the levels of interaction of the strata variable Z*. Evidence that the baseline hazard functions and regression coefficients are not the same from stratum to stratum provides a gain in information over the usage of the ordinary Cox model. Simulation results showed that the present method produced better estimates in terms of bias, lower standard errors, and mean square errors.
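
For illustration, a stratified Cox model can be fitted with the lifelines package as sketched below; the data are simulated with stratum-specific baseline hazards, and the column names merely mirror the abstract's factors (this is not the NDHS data nor the paper's proposed alternative estimator).

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "sex": rng.integers(0, 2, n),
    "size_small": rng.integers(0, 2, n),
    "caesarean": rng.integers(0, 2, n),
    "strata_z": rng.integers(0, 3, n),          # multilevel factor defining the strata
})
# synthetic survival times with a different baseline hazard in each stratum
base = np.array([0.05, 0.10, 0.20])[df["strata_z"]]
rate = base * np.exp(0.4 * df["sex"] - 0.3 * df["size_small"] + 0.5 * df["caesarean"])
df["time"] = rng.exponential(1.0 / rate)
df["event"] = (df["time"] < 24).astype(int)     # administrative censoring at 24 months
df["time"] = df["time"].clip(upper=24)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", strata=["strata_z"])
cph.print_summary()                             # stratum-specific baselines, shared coefficients
```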

Keywords: stratified Cox, semiparametric model, infant mortality, multilevel factors, confounding variables

Procedia PDF Downloads 535
1223 Optimum Design of Hybrid (Metal-Composite) Mechanical Power Transmission System under Uncertainty by Convex Modelling

Authors: Sfiso Radebe

Abstract:

Design models dealing with flawless composite structures are in abundance; in these, the mechanical properties of the composite structures are assumed to be known a priori. However, if the worst-case scenario is assumed, in which material defects combined with processing anomalies in the composite structures are expected, a different solution is obtained. Furthermore, if the system being designed combines hybrid elements in series, each individually affected by variations in the material constants, a different approach needs to be taken. The body of literature contains a compendium of research investigating the different modes of failure affecting hybrid metal-composite structures, covering areas pertaining to the failure of hybrid joints, structural deformation, transverse displacement, and the suppression of vibration and noise. In the present study, a system employing a combination of two or more hybrid power-transmitting elements is explored for the least favourable dynamic loads as well as for weight minimization, subject to uncertain material properties. Elastic constants are assumed to be uncertain-but-bounded quantities varying slightly around their nominal values, and the solution is determined using convex models of uncertainty. Convex analysis of the problem leads to the computation of the least favourable solution and ultimately to a robust design. This approach contrasts with a deterministic analysis, in which the average values of the elastic constants are employed in the calculations, neglecting the variations in the material properties.
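
The uncertain-but-bounded idea can be illustrated on a toy two-element series shaft: each elastic constant varies within a small interval around its nominal value, and the least favourable (worst-case) response is found by checking the vertices of the uncertainty box. All numbers below are illustrative assumptions, not the paper's design case.

```python
import numpy as np
from itertools import product

G_metal_nom, G_comp_nom = 80e9, 5e9        # nominal shear moduli [Pa] (illustrative)
delta = 0.05                               # bound on relative variation of each constant
L1 = L2 = 0.5                              # segment lengths [m]
J1, J2 = 2e-7, 4e-7                        # polar second moments of area [m^4]
T = 100.0                                  # applied torque [N m]

worst_twist = 0.0
# for interval (convex) bounds on each constant, the extreme response of this
# monotone series model is attained at a vertex of the uncertainty box
for s1, s2 in product([-1, 1], repeat=2):
    G1 = G_metal_nom * (1 + s1 * delta)
    G2 = G_comp_nom * (1 + s2 * delta)
    twist = T * (L1 / (G1 * J1) + L2 / (G2 * J2))   # series compliance of the hybrid shaft
    worst_twist = max(worst_twist, twist)

nominal_twist = T * (L1 / (G_metal_nom * J1) + L2 / (G_comp_nom * J2))
print(f"nominal twist = {nominal_twist:.4e} rad, least favourable twist = {worst_twist:.4e} rad")
```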

Keywords: convex modelling, hybrid, metal-composite, robust design

Procedia PDF Downloads 184
1222 Estimating the Parameter of the Mean in a Normal Distribution by Maximum Likelihood, Bayes, and Markov Chain Monte Carlo Methods

Authors: Autcha Araveeporn

Abstract:

This paper compares parameter estimation of the mean of a normal distribution by maximum likelihood (ML), Bayes, and Markov chain Monte Carlo (MCMC) methods. The ML estimator is the sample average of the data, the Bayes estimator is derived from the prior distribution, and the MCMC estimator is approximated by Gibbs sampling from the posterior distribution. After the parameter is estimated by each method, hypothesis testing is used to check the robustness of the estimators. Data are simulated from a normal distribution with true mean 2 and variance 4, 9, or 16, with sample sizes of 10, 20, 30, and 50. The results show that the ML and MCMC estimates differ perceptibly from the true parameter when the sample size is 10 or 20 and the variance is 16. Furthermore, the Bayes estimator, based on a prior with mean 1 and variance 12, showed a significant difference in the mean for variance 9 at sample sizes 10 and 20.
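
A minimal sketch of the three estimators on one simulated sample is shown below (true mean 2, variance 9, n = 20, and the prior mean 1 and variance 12 quoted in the abstract); the hypothesis-testing step is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
mu_true, sigma2, n = 2.0, 9.0, 20
x = rng.normal(mu_true, np.sqrt(sigma2), size=n)

# Maximum likelihood: the sample mean
mu_ml = x.mean()

# Conjugate Bayes point estimate: N(mu0, tau2) prior, data variance treated as known
mu0, tau2 = 1.0, 12.0
post_var = 1.0 / (1.0 / tau2 + n / sigma2)
mu_bayes = post_var * (mu0 / tau2 + x.sum() / sigma2)

# MCMC: Gibbs sampling over (mu, sigma2), with the variance treated as unknown
mu_g, s2_g, draws = 0.0, 1.0, []
for _ in range(5000):
    v = 1.0 / (1.0 / tau2 + n / s2_g)                       # mu | sigma2, data
    mu_g = rng.normal(v * (mu0 / tau2 + x.sum() / s2_g), np.sqrt(v))
    ss = np.sum((x - mu_g) ** 2)                            # sigma2 | mu, data (vague prior assumed)
    s2_g = 1.0 / rng.gamma(n / 2.0, 2.0 / ss)
    draws.append(mu_g)
mu_mcmc = np.mean(draws[500:])                              # discard burn-in

print(f"ML = {mu_ml:.3f}, Bayes = {mu_bayes:.3f}, MCMC = {mu_mcmc:.3f}")
```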

Keywords: Bayes method, Markov chain Monte Carlo method, maximum likelihood method, normal distribution

Procedia PDF Downloads 329
1221 The Roots of the Robust and Looting Economy (Poverty and Inequality) in Iran after the 1979 Revolution, from the Perspective of Acemoglu and Robinson's Theory

Authors: Vorya Shabrandi

Abstract:

The study of the causes of poverty and inequality across countries has been the subject of many scholars and economists over the last century, and theorists in different areas of economic science identify different factors as the roots of poverty and inequality in Iran after the 1979 revolution. Economists have emphasized economic elements, and political scientists political elements. This research reviews the political economy of poverty and corruption in Iran after the revolution. The findings of this research, based on Acemoglu and Robinson's theory, show how the institutional and structural dependence of Iran's economy on raw materials has led to the growth of its non-inclusive economic institutions and, in consequence, to the persistence of the looting economy, poverty, and inequality in Iran's political economy. The research was carried out using descriptive-analytical and comparative methods. Many economists try to explain the country's conditions on the basis of war, sanctions, and other external factors. In this study, we tried to examine the roots of poverty and the looting economy of Iran by applying Acemoglu and Robinson's research on institutions and the roots of poverty. We look for a framework for understanding why countries differ in income, why some countries are poor and others wealthy, why ineffective and non-inclusive institutions arise, and why inefficient institutions persist in some countries, such as Iran, where there is no effective political will to change them. The findings of the research show that institutions are, broadly, the main root of the robust and looting economy (poverty and inequality) in Iran.

Keywords: Iran, plunderable (loot) economy, raw-material selling, poverty and inequality, Acemoglu and Robinson, non-inclusive institutions

Procedia PDF Downloads 98
1220 Foreign Investment, Technological Diffusion and Competitiveness of Exports: A Case for Textile Industry in Pakistan

Authors: Syed Toqueer Akhter, Muhammad Awais

Abstract:

Pakistan is a country gifted with abundant natural resources, which could pave the way towards a prosperous and developed country. Pakistan is the fourth-largest exporter of textiles in the world, yet with the passage of time the competitiveness of these exports has been subject to decline. With many international players in the textile world, such as China, Bangladesh, India, and Sri Lanka, Pakistan needs to put in considerable effort to compete with these countries. This research paper determines the impact of foreign direct investment (FDI) on technological diffusion and how significantly it may affect the export performance of the country. It also demonstrates that, with an increase in foreign direct investment, technological diffusion, strong property rights, and the use of different policy tools, the export competitiveness of the country could be improved. The research has been carried out using time-series data from 1995 to 2013, and the results have been estimated using competing econometric models, such as robust regression and generalized least squares, in order to assess the impact of foreign investment and technological diffusion on export competitiveness comprehensively. A distributed lag model has also been used to capture the lagged effect of the policy-tool variables used by the government. The model estimates indicate that FDI and technological diffusion have a significant impact on the competitiveness of Pakistan's exports. It may also be inferred that the competitiveness of the textile sector requires an integrated policy framework, primarily including a reduction in interest rates, the provision of subsidies, and the manufacturing of value-added products.
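
A toy sketch of the competing estimators on synthetic series is given below; statsmodels' Huber M-estimator stands in for the robust regression, GLS is fitted with its default (identity) covariance, and a one-period lag illustrates the distributed-lag term. All variable names and data are invented, not the study's series.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T = 19                                      # annual observations, 1995-2013
fdi = rng.normal(size=T).cumsum()
tech = 0.5 * fdi + rng.normal(size=T)       # technological diffusion proxy
exports = 1.2 * fdi + 0.8 * tech + rng.normal(scale=0.5, size=T)

X = sm.add_constant(np.column_stack([fdi, tech]))
robust_fit = sm.RLM(exports, X, M=sm.robust.norms.HuberT()).fit()   # robust regression
gls_fit = sm.GLS(exports, X).fit()                                  # GLS (identity covariance here)

# distributed-lag term: a policy variable enters with a one-year lag
policy = rng.normal(size=T)
X_lag = sm.add_constant(np.column_stack([fdi[1:], tech[1:], policy[:-1]]))
dl_fit = sm.OLS(exports[1:], X_lag).fit()

print("robust:", np.round(robust_fit.params, 3))
print("GLS   :", np.round(gls_fit.params, 3))
print("lagged:", np.round(dl_fit.params, 3))
```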

Keywords: high technology export, robust regression, patents, technological diffusion, export competitiveness

Procedia PDF Downloads 475
1219 Heteroscedastic Parametric and Semiparametric Smooth Coefficient Stochastic Frontier Application to Technical Efficiency Measurement

Authors: Rebecca Owusu Coffie, Atakelty Hailu

Abstract:

Variants of production frontier models have emerged; however, only a limited number of them are applied in empirical research. Hence, the effects of these alternative frontier models are not well understood, particularly within sub-Saharan Africa. In this paper, we apply recent advances in production frontier modelling to examine levels of technical efficiency and the drivers of efficiency. Specifically, we compare the heteroscedastic parametric and the semiparametric smooth coefficient stochastic frontier (SPSC) models. Using rice production data from Ghana, our empirical estimates reveal that alternative specifications of the efficiency estimators result in either downward or upward bias in the technical efficiency estimates. Methodologically, we find that the SPSC model is more suitable and generates high efficiency estimates. Within the parametric framework, we find that parameterizing both the mean and the variance of the pre-truncated distribution gives the best model. Regarding the drivers of technical efficiency, we observe that longer farm distances increase inefficiency through a reduction in labor productivity, whereas high soil quality increases productivity through increased land productivity.

Keywords: pre-truncated, rice production, smooth coefficient, technical efficiency

Procedia PDF Downloads 415
1218 FACTS Based Stabilization for Smart Grid Applications

Authors: Adel. M. Sharaf, Foad H. Gandoman

Abstract:

Nowadays, photovoltaic (PV) farms/parks and large PV-smart grid interface schemes are emerging and commonly utilized in renewable energy distributed generation. However, PV hybrid DC-AC schemes using interfacing power electronic converters usually have a negative impact on power quality and on the stabilization of the modern electrical network under load excursions and network fault conditions in the smart grid. Consequently, robust FACTS-based interface schemes are required to ensure efficient energy utilization and stabilization of bus voltages, as well as to limit switching/fault inrush current conditions. FACTS devices are also used in smart grid battery interface and storage schemes with PV-battery storage hybrid systems, an elegant alternative for renewable energy utilization with backup battery storage for electric utility energy and demand-side management, providing the needed energy and power capacity under heavy load conditions. The paper presents a robust PV/Li-ion battery storage interface scheme for the low-voltage distribution/utilization level using FACTS stabilization enhancement and dynamic maximum PV power tracking controllers. Digital simulation and validation of the proposed scheme are carried out in the MATLAB/Simulink software environment for a low-voltage distribution/utilization system feeding hybrid linear, motorized-inrush, and nonlinear loads from a DC-AC interface VSC 6-pulse inverter fed from the PV park/farm with a backup Li-ion storage battery.

Keywords: AC FACTS, smart grid, stabilization, PV-battery storage, Switched Filter-Compensation (SFC)

Procedia PDF Downloads 387
1217 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, all the more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of cases with non-gray, non-diffuse surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium. The method increases the accuracy of estimation on the one hand and the computational cost on the other. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission and absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of the MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences; Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated, and the histories of some randomly sampled photon bundles were recorded to train an artificial neural network (ANN) back-propagation model. The flux calculated using the standard quasi-PMC was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed for the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) compared with the other cases. There is great scope for machine learning models to help further reduce the computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the problem environment can be fully represented to the ANN model. Better results can be achieved in this as yet unexplored domain.
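
The contrast between pseudo-random and low-discrepancy sampling can be sketched on a toy spectral integral; scipy's scrambled Sobol generator is used below, and the "band" of absorption coefficients is an invented placeholder rather than a real gas spectrum.

```python
import numpy as np
from scipy.stats import qmc

def transmissivity(u, kappa_min=0.1, kappa_max=5.0, L=1.0):
    """Map a uniform sample u in [0,1) to an absorption coefficient and return exp(-kappa*L)."""
    kappa = kappa_min + (kappa_max - kappa_min) * u
    return np.exp(-kappa * L)

n = 2 ** 12
u_pmc = np.random.default_rng(0).random(n)                       # standard pseudo-random samples
u_qmc = qmc.Sobol(d=1, scramble=True, seed=0).random(n).ravel()  # scrambled Sobol (quasi-random)

print("PMC estimate of band-mean transmissivity:", transmissivity(u_pmc).mean())
print("QMC estimate of band-mean transmissivity:", transmissivity(u_qmc).mean())
```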

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 185
1216 A Modified Nonlinear Conjugate Gradient Algorithm for Large Scale Unconstrained Optimization Problems

Authors: Tsegay Giday Woldu, Haibin Zhang, Xin Zhang, Yemane Hailu Fissuh

Abstract:

It is well known that the nonlinear conjugate gradient method is one of the widely used first-order methods for solving large-scale unconstrained smooth optimization problems. Because of their low memory requirements, attractive theoretical features, practical computational efficiency, and nice convergence properties, nonlinear conjugate gradient methods have a special role in solving large-scale unconstrained optimization problems. Large-scale optimization problems have important applications in the practical and scientific world. However, nonlinear conjugate gradient methods have restricted information about the curvature of the objective function, and they are likely to be less efficient and robust than some second-order algorithms. To overcome these drawbacks, a new modified nonlinear conjugate gradient method is presented. The noticeable features of our work are that the new search direction possesses the sufficient descent property independent of any line search and that it belongs to a trust region. Under mild assumptions and the standard Wolfe line search technique, the global convergence property of the proposed algorithm is established. Furthermore, to test the practical computational performance of our new algorithm, numerical experiments are provided and implemented on a set of large-dimensional unconstrained problems. The numerical results show that the proposed algorithm is efficient and robust compared with other similar algorithms.
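
A skeleton of a nonlinear conjugate gradient loop with a standard Wolfe line search is shown below; it uses the classical Polak-Ribiere+ update on a Rosenbrock test problem, not the modified direction proposed in the paper.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

x = np.full(100, -1.0)                  # 100-dimensional Rosenbrock test problem
g = rosen_der(x)
d = -g                                  # initial steepest-descent direction

for k in range(500):
    alpha = line_search(rosen, rosen_der, x, d)[0]   # Wolfe line search
    if alpha is None:
        alpha = 1e-3                    # fallback step if the line search fails
    x_new = x + alpha * d
    g_new = rosen_der(x_new)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ formula (restart keeps descent)
    d = -g_new + beta * d
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-6:
        break

print(f"iterations: {k}, final objective value: {rosen(x):.3e}")
```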

Keywords: conjugate gradient method, global convergence, large scale optimization, sufficient descent property

Procedia PDF Downloads 167
1215 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images

Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj

Abstract:

Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescence signal intensity following fluorescence in situ hybridization (FISH) analysis correlates with the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescence microscopy FISH images. In this work, the initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods by allowing the subtraction of spurious signals and non-biological fluorescent substrata. The method will be a robust and user-friendly approach enabling users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescence images for quantitative analysis of biofilm heterogeneity.
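
The basic detection step can be sketched with scikit-image on a synthetic epifluorescence-like image, as below; the published pipeline's counterstain handling and substratum subtraction are not reproduced.

```python
import numpy as np
from skimage import filters, morphology, measure

rng = np.random.default_rng(3)
img = rng.normal(0.1, 0.02, size=(256, 256))                 # dim background
img[80:180, 60:200] += rng.normal(0.5, 0.05, (100, 140))     # bright "biofilm" region

thresh = filters.threshold_otsu(img)                         # automatic global threshold
mask = morphology.remove_small_objects(img > thresh, min_size=64)

labels = measure.label(mask)
for region in measure.regionprops(labels, intensity_image=img):
    print(f"biofilm region area = {region.area} px, mean FISH intensity = {region.mean_intensity:.3f}")
```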

Keywords: image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization

Procedia PDF Downloads 106
1214 The System Dynamics Research of China-Africa Trade, Investment and Economic Growth

Authors: Emma Serwaa Obobisaa, Haibo Chen

Abstract:

International trade and outward foreign direct investment (FDI) are factors generally recognized as important in economic growth and development. Though several scholars have sought to reveal the influence of trade and outward FDI on economic growth, most studies have utilized common econometric models such as vector autoregression and have aggregated the variables, which for the most part yields contradictory and mixed results. Thus, there is an exigent need for a precise study of the effect of trade and FDI on economic growth that applies strong econometric models and disaggregates the variables into their separate individual components to explicate their respective effects on economic growth. This will guarantee the provision of policies and strategies that are geared towards individual variables to ensure sustainable development and growth. This study, therefore, seeks to examine the causal effect of China-Africa trade and outward foreign direct investment on the economic growth of Africa using a robust and recent econometric approach, the system dynamics model. Our study assembles and tests an ensemble of vital variables predominant in recent studies on trade-FDI-economic growth causality: foreign direct investment, international trade, and economic growth. Our results showed that the system dynamics method provides more accurate statistical inference regarding the direction of causality among the variables than conventional methods such as OLS and Granger causality, which predominate in the literature, as it is more robust and provides accurate critical values.

Keywords: economic growth, outward foreign direct investment, system dynamics model, international trade

Procedia PDF Downloads 81
1213 Application of the Finite Window Method to a Time-Dependent Convection-Diffusion Equation

Authors: Raoul Ouambo Tobou, Alexis Kuitche, Marcel Edoun

Abstract:

The FWM (Finite Window Method) is a new numerical meshfree technique for solving problems defined either in terms of PDEs (partial differential equations) or by a set of conservation/equilibrium laws. The principle behind the FWM is that, in such problems, each element of the domain interacts with its neighbors and always tries to adapt so as to remain in equilibrium with respect to those neighbors. This leads to a very simple and robust problem-solving scheme, well suited for transfer problems. In this work, we have applied the FWM to an unsteady scalar convection-diffusion equation. Despite its apparent simplicity, it is well known that convection-diffusion problems can be challenging to solve numerically, especially when convection is highly dominant. This has led researchers to adopt the scalar convection-diffusion equation as a benchmark used to analyze and derive the conditions or artifacts required to numerically solve problems where convection and diffusion occur simultaneously. We show here that the standard FWM can be used to solve convection-diffusion equations in a robust manner, as no adjustments (upwinding or the addition of artificial diffusion) were required to obtain good results, even for high Peclet numbers and coarse space and time steps. A comparison was performed between the FWM scheme and both a first-order implicit finite volume scheme (upwind scheme) and a third-order implicit finite volume scheme (QUICK scheme). The comparison showed that, for equal space and time grid spacing, the FWM yields much better precision than the finite volume schemes used, all having similar computational cost and condition number.
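
For reference, the first-order upwind finite volume benchmark mentioned above can be sketched in a few lines for the 1-D unsteady equation ∂φ/∂t + u ∂φ/∂x = D ∂²φ/∂x²; the grid, Peclet number, and explicit time stepping below are illustrative choices (the FWM itself and the paper's implicit schemes are not reproduced).

```python
import numpy as np

nx, L, u, D = 100, 1.0, 1.0, 0.001          # cells, domain length, velocity, diffusivity
dx = L / nx
dt = 0.4 * min(dx / u, dx**2 / (2 * D))     # stable explicit time step
phi = np.zeros(nx)
phi[:10] = 1.0                              # initial step profile

for _ in range(500):
    phi_new = phi.copy()
    for i in range(1, nx - 1):
        conv = u * (phi[i] - phi[i - 1]) / dx                  # first-order upwind convection (u > 0)
        diff = D * (phi[i + 1] - 2 * phi[i] + phi[i - 1]) / dx**2
        phi_new[i] = phi[i] + dt * (diff - conv)
    phi = phi_new

print("cell Peclet number:", u * dx / D, " max phi:", phi.max())
```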

Keywords: finite window method, convection-diffusion, numerical technique, convergence

Procedia PDF Downloads 308
1212 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart for diagnosing disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation, but it suffers from the drawback that the initialization of the shape model may not be sufficiently close to the target, especially when dealing with the abnormal shapes found in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. Firstly, a robust and efficient detector based on Hough forests is proposed to localize cardiac feature points, and these points are used to predict the initial fit of the LV shape model. Secondly, to achieve more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, mostly of abnormal shapes. The proposed method is compared with several combinations of ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared with the existing methods. Moreover, the proposed method significantly reduces the number of ASM fitting loops required, thus speeding up the whole segmentation process. Therefore, the proposed method achieves more accurate and efficient segmentation results and is applicable to unusual heart shapes associated with cardiac diseases, such as left atrial enlargement.

Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle

Procedia PDF Downloads 313
1211 Progressive Type-I Interval Censoring with Binomial Removal-Estimation and Its Properties

Authors: Sonal Budhiraja, Biswabrata Pradhan

Abstract:

This work considers statistical inference based on progressive Type-I interval censored data with random removal. The scheme of progressive Type-I interval censoring with random removal can be described as follows. Suppose n identical items are placed on test at time T0 = 0 with k pre-specified inspection times T1 < T2 < . . . < Tk, where Tk is the scheduled termination time of the experiment. At inspection time Ti, Ri of the Si remaining surviving units are randomly removed from the experiment. The removal follows a binomial distribution with parameters Si and pi for i = 1, . . . , k, with pk = 1. In this censoring scheme, the number of failures in each inspection interval and the number of randomly removed items at the pre-specified inspection times are observed. Asymptotic properties of the maximum likelihood estimators (MLEs) are established under some regularity conditions. A β-content γ-level tolerance interval (TI) is determined for the two-parameter Weibull lifetime model using the asymptotic properties of the MLEs. The minimum sample size required to achieve the desired β-content γ-level TI is determined. The performance of the MLEs and the TI is studied via simulation.
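
A hedged sketch of the scheme is given below: Weibull lifetimes are simulated under progressive Type-I interval censoring with binomial removals, and the parameters are recovered by maximizing the interval-censored likelihood. The inspection times and removal probabilities are illustrative, and the tolerance-interval construction is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
shape_true, scale_true = 1.5, 10.0
n = 200
T = np.array([2.0, 4.0, 6.0, 8.0, 10.0])              # inspection times T1..Tk
p = np.array([0.1, 0.1, 0.1, 0.1, 1.0])               # removal probabilities, pk = 1

life = weibull_min.rvs(shape_true, scale=scale_true, size=n, random_state=rng)
d = np.zeros(len(T), dtype=int)                       # failures per interval
R = np.zeros(len(T), dtype=int)                       # removals at each inspection
at_risk = np.ones(n, dtype=bool)
for i, Ti in enumerate(T):
    failed = at_risk & (life <= Ti)
    d[i] = failed.sum()
    at_risk &= ~failed
    survivors = np.flatnonzero(at_risk)
    R[i] = rng.binomial(len(survivors), p[i])          # binomial random removal
    removed = rng.choice(survivors, size=R[i], replace=False)
    at_risk[removed] = False

def neg_log_lik(theta):
    shape, scale = np.exp(theta)                       # log-parametrization keeps both positive
    F = weibull_min.cdf(T, shape, scale=scale)
    F_prev = np.r_[0.0, F[:-1]]
    return -(d * np.log(F - F_prev) + R * np.log(1.0 - F)).sum()

fit = minimize(neg_log_lik, x0=np.log([1.0, 5.0]), method="Nelder-Mead")
print("MLE (shape, scale):", np.exp(fit.x), " true:", (shape_true, scale_true))
```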

Keywords: asymptotic normality, consistency, regularity conditions, simulation study, tolerance interval

Procedia PDF Downloads 216
1210 Efficiency of Robust Heuristic Gradient Based Enumerative and Tunneling Algorithms for Constrained Integer Programming Problems

Authors: Vijaya K. Srivastava, Davide Spinello

Abstract:

This paper presents the performance of two robust gradient-based heuristic optimization procedures, based on 3^n enumeration and on a tunneling approach, for seeking the global optimum of constrained integer problems. Both procedures consist of two distinct phases for locating the global optimum of integer problems with a linear or non-linear objective function subject to linear or non-linear constraints. In both procedures, in the first phase, a local minimum of the function is found using the gradient approach coupled with hemstitching moves when a constraint is violated, in order to return the search to the feasible region. In the second phase, the first procedure examines the 3^n integer combinations on the boundary of and within the hypercube volume encompassing and neighboring the result from the first phase, while the second procedure constructs a tunneling function at the local minimum of the first phase so as to find another point on the other side of the barrier where the function value is approximately the same. In the next cycle, the search for the global optimum commences again in both procedures using this newly found point as the starting vector. The search continues and is repeated for various step sizes along the function gradient, as well as along the vector normal to the violated constraints, until no improvement in the optimum value is found. The results from both proposed optimization methods are presented and compared with those obtained from the popular MS Excel Solver provided within the MS Office suite and with other published results.

Keywords: constrained integer problems, enumerative search algorithm, heuristic algorithm, tunneling algorithm

Procedia PDF Downloads 303
1209 Component Level Flood Vulnerability Framework for the United Kingdom

Authors: Mohammad Shoraka, Francesco Preti, Karen Angeles, Raulina Wojtkiewicz, Karthik Ramanathan

Abstract:

Catastrophe modeling has evolved significantly over the last four decades. Verisk introduced its pioneering comprehensive inland flood model tailored for the U.K. in 2008. Over the course of the last 15 years, Verisk has built a suite of physically driven flood models for several countries and regions across the globe. This paper aims to spotlight a selection of these advancements tailored to the development of vulnerability estimation, which forms an integral part of a forthcoming update to Verisk's U.K. inland flood model. Vulnerability functions are critical to evaluating and robustly modeling flood-induced damage to buildings and contents. The resulting damage assessments then allow direct quantification of losses for entire building portfolios. Notably, today's flood loss models most often prioritize enhanced development of hazard characterization, while vulnerability functions often lack sufficient granularity for a robust assessment. This study proposes a novel, engineering-driven, physically based component-level flood vulnerability framework for the U.K. Various aspects of the framework, including component classification and comprehensive cost analysis, meticulously tailored to capture the distinct building characteristics unique to the U.K., will be discussed. This analysis will elucidate how the cost distribution across individual components contributes to translating component-level damage functions into building-level damage functions. Furthermore, a succinct overview of the essential datasets employed to gauge regional building vulnerability will be provided.
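
The roll-up from component-level to building-level damage can be illustrated with a few invented depth-damage curves and cost shares, as below; none of the curves, components, or shares are Verisk's actual U.K. data.

```python
import numpy as np

depths = np.linspace(0.0, 3.0, 31)                 # flood depth above floor level [m]

def frac_damage(depth, d_half, steep):
    """Illustrative S-shaped component damage fraction as a function of depth."""
    return 1.0 / (1.0 + np.exp(-steep * (depth - d_half)))

components = {                  # cost share, depth at 50% damage [m], steepness (all invented)
    "floor finishes":  (0.15, 0.1, 6.0),
    "wall finishes":   (0.25, 0.8, 3.0),
    "services (M&E)":  (0.20, 0.5, 4.0),
    "doors & joinery": (0.10, 0.6, 5.0),
    "structure":       (0.30, 2.5, 2.0),
}

# building-level damage ratio = cost-share-weighted sum of component damage fractions
building_curve = sum(share * frac_damage(depths, d50, s)
                     for share, d50, s in components.values())

for d, dmg in zip(depths[::10], building_curve[::10]):
    print(f"depth {d:.1f} m -> mean building damage ratio {dmg:.2f}")
```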

Keywords: catastrophe modeling, inland flood, vulnerability, cost analysis

Procedia PDF Downloads 34
1208 An Approach for Estimation in Hierarchical Clustered Data Applicable to Rare Diseases

Authors: Daniel C. Bonzo

Abstract:

Practical considerations lead to the use of units of analysis within subjects, e.g., bleeding episodes or treatment-related adverse events, in rare disease settings. This is coupled with data augmentation techniques, such as extrapolation, to enlarge the subject base. In general, one can think of extrapolation of data as extending information and conclusions from one estimand to another. This approach induces hierarchical clustered data with varying cluster sizes. Extrapolation of clinical trial data is increasingly being accepted by regulatory agencies as a means of generating data in diverse situations during the drug development process. Under certain circumstances, data can be extrapolated to a different population, a different but related indication, or a different but similar product. We consider here the problem of estimation (point and interval) using a mixed-models approach under extrapolation. It is proposed that estimators (point and interval) be constructed using weighting schemes for the clusters, e.g., equal weights or weights proportional to cluster size. Simulated data generated under varying scenarios are then used to evaluate the performance of this approach. In conclusion, the evaluation results showed that the approach is a useful means of improving statistical inference in rare disease settings and thus aids not only signal detection but risk-benefit evaluation as well.

Keywords: clustered data, estimand, extrapolation, mixed model

Procedia PDF Downloads 106
1207 Design of Robust and Intelligent Controller for Active Removal of Space Debris

Authors: Shabadini Sampath, Jinglang Feng

Abstract:

With their huge kinetic energy, pieces of space debris pose a major threat to astronauts' space activities and to spacecraft in orbit if a collision happens. The active removal of space debris is required in order to avoid the frequent collisions that would otherwise occur; without it, the amount of space debris will increase uncontrollably, posing a threat to the safety of the entire space system. However, the safe and reliable removal of large-scale space debris has remained a huge challenge to date. While capturing and deorbiting space debris, the space manipulator has to achieve high control precision. However, due to uncertainties and unknown disturbances, coordinating the control of the space manipulator is difficult. To address this challenge, this paper focuses on developing a robust and intelligent control algorithm that controls joint movement and restricts it to the sliding manifold by reducing uncertainties. A neural network adaptive sliding mode controller (NNASMC) is applied with the objective of finding a control law such that the joint motions of the space manipulator follow the given trajectory. Computed torque control (CTC) is an effective motion control strategy used in this paper to compute the space manipulator arm torque needed to generate the required motion. Based on the Lyapunov stability theorem, the proposed intelligent NNASMC and CTC controllers guarantee the robustness and global asymptotic stability of the closed-loop control system. Finally, the controllers used in the paper are modeled and simulated using MATLAB Simulink, and the results are presented to prove the effectiveness of the proposed controller approach.
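
The computed torque control idea can be illustrated on a single-joint toy model, as in the sketch below; the inertia, gravity term, gains, and trajectory are illustrative assumptions, and the paper's neural network adaptive sliding mode terms are not reproduced.

```python
import numpy as np

m, l, g = 1.0, 0.5, 9.81                  # illustrative single-link parameters
I = m * l**2                              # inertia about the joint
Kp, Kd = 100.0, 20.0                      # outer-loop PD gains
q, qd, dt = 0.0, 0.0, 0.001

for k in range(5000):
    t = k * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)   # desired joint trajectory
    grav = m * g * l * np.cos(q)
    v = qdd_des + Kd * (qd_des - qd) + Kp * (q_des - q)         # desired joint acceleration
    tau = I * v + grav                                          # computed torque (inverse dynamics)
    qdd = (tau - grav) / I                                      # plant response (model assumed exact)
    qd += qdd * dt
    q += qd * dt

print(f"tracking error after {k * dt:.1f} s: {q_des - q:.2e} rad")
```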

Keywords: GNC, active removal of space debris, AI controllers, MATLAB Simulink

Procedia PDF Downloads 88
1206 Improving Sample Analysis and Interpretation Using QIAGEN's Latest Investigator STR Multiplex PCR Assays with a Novel Quality Sensor

Authors: Daniel Mueller, Melanie Breitbach, Stefan Cornelius, Sarah Pakulla-Dickel, Margaretha Koenig, Anke Prochnow, Mario Scherer

Abstract:

The European STR standard set (ESS) of loci, as well as the new expanded CODIS core loci set recommended by the CODIS Core Loci Working Group, has led to greater standardization and harmonization in STR analysis across borders. Various multiplex PCR assays have since been developed for the analysis of these 17 ESS or 23 CODIS expansion STR markers, all of which meet high technical demands. However, forensic analysts are often faced with difficult STR results and the questions that follow from them. What is the reason that no peaks are visible in the electropherogram? Did the PCR fail? Was the DNA concentration too low? QIAGEN's newest Investigator STR kits contain a novel Quality Sensor (QS) that acts as an internal performance control and gives useful information for evaluating the amplification efficiency of the PCR. QS indicates whether the reaction has worked in general and furthermore allows discrimination between the presence of inhibitors and DNA degradation as the cause of the typical ski-slope effect observed in STR profiles of such challenging samples. This information can be used to choose the most appropriate rework strategy. Based on the latest PCR chemistry, called FRM 2.0, QIAGEN now provides the next technological generation for STR analysis, the Investigator ESSplex SE QS and Investigator 24plex QS Kits. The new PCR chemistry ensures robust and fast PCR amplification with improved inhibitor resistance and easy handling for a manual or automated setup. The short cycling time of 60 min reduces the duration of the total PCR analysis, making a complete workflow analysis in one day more feasible. To facilitate the interpretation of STR results, a smart primer design was applied for the best possible marker distribution, the highest concordance rates, and robust gender typing.

Keywords: PCR, QIAGEN, quality sensor, STR

Procedia PDF Downloads 465
1205 State Estimation of a Biotechnological Process Using Extended Kalman Filter and Particle Filter

Authors: R. Simutis, V. Galvanauskas, D. Levisauskas, J. Repsyte, V. Grincas

Abstract:

This paper deals with advanced state estimation algorithms for the estimation of biomass concentration and specific growth rate in a typical fed-batch biotechnological process. The biotechnological process was represented by a nonlinear, mass-balance-based process model. An Extended Kalman Filter (EKF) and a Particle Filter (PF) were used to estimate the unmeasured state variables from oxygen uptake rate (OUR) and base consumption (BC) measurements. To obtain more general results, a simplified process model was used in the EKF and PF estimation algorithms. This model does not require any special growth kinetic equations and could be applied for state estimation in various bioprocesses. The focus of this investigation was the comparison of the estimation quality of the EKF and PF estimators under different measurement noise levels. The simulation results show that the Particle Filter algorithm requires significantly more computation time for state estimation but gives lower estimation errors for both biomass concentration and specific growth rate. Also, the tuning procedure for the Particle Filter is simpler than for the EKF. Consequently, the Particle Filter should be preferred in real applications, especially for monitoring industrial bioprocesses, where simplified implementation procedures are always desirable.
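
A stripped-down bootstrap particle filter for a toy fed-batch biomass model is sketched below; the exponential-growth dynamics, OUR measurement model, and noise levels are invented placeholders, and the base-consumption channel and EKF comparison are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, steps, n_p = 0.1, 200, 1000
X_true, mu_true = 0.5, 0.15                 # biomass [g/L], specific growth rate [1/h]
y_gain, meas_sd = 0.8, 0.01                 # OUR ~ y_gain * mu * X (illustrative model)

X_p = rng.uniform(0.1, 2.0, n_p)            # particle hypotheses for biomass
mu_p = rng.uniform(0.05, 0.3, n_p)          # ... and for the specific growth rate
w = np.full(n_p, 1.0 / n_p)

for _ in range(steps):
    X_true *= np.exp(mu_true * dt)
    z = y_gain * mu_true * X_true + rng.normal(0, meas_sd)    # noisy OUR measurement

    mu_p += rng.normal(0, 0.002, n_p)                          # random-walk growth-rate model
    X_p *= np.exp(mu_p * dt)                                   # propagate biomass particles
    w *= np.exp(-0.5 * ((z - y_gain * mu_p * X_p) / meas_sd) ** 2)
    w /= w.sum()
    if 1.0 / np.sum(w**2) < n_p / 2:                           # resample when weights degenerate
        idx = rng.choice(n_p, n_p, p=w)
        X_p, mu_p, w = X_p[idx], mu_p[idx], np.full(n_p, 1.0 / n_p)

print(f"true X={X_true:.2f}, mu={mu_true:.3f};  PF: X={np.sum(w*X_p):.2f}, mu={np.sum(w*mu_p):.3f}")
```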

Keywords: biomass concentration, extended Kalman filter, particle filter, state estimation, specific growth rate

Procedia PDF Downloads 399
1204 Machine Learning Prediction of Compressive Damage and Energy Absorption in Carbon Fiber-Reinforced Polymer Tubular Structures

Authors: Milad Abbasi

Abstract:

Carbon fiber-reinforced polymer (CFRP) composite structures are increasingly being utilized in the automotive industry due to their light weight and specific energy absorption capabilities. Because it is impossible to predict composite mechanical properties directly using theoretical methods, various studies have been conducted in the literature on the accurate simulation of the energy-absorbing behavior of CFRP structures. In this research, axial compression experiments were carried out on hand lay-up unidirectional CFRP composite tubes. The fabrication method allowed the authors to extract the material properties of the CFRPs using the ASTM D3039, D3410, and D3518 standards. A neural network machine learning algorithm was then utilized to build a robust prediction model to forecast the axial compressive properties of CFRP tubes while reducing high-cost experimental efforts. The predicted results were compared with the experimental outcomes in terms of load-carrying capacity and energy absorption capability and showed high accuracy and precision in the prediction of the energy-absorption capacity of the CFRP tubes. This research also demonstrates the effectiveness and challenges of machine learning techniques in the robust simulation of the energy-absorption behavior of composites. Interestingly, the proposed method considerably reduced the numerical and experimental effort required in the simulation and calibration of CFRP composite tubes subjected to compressive loading.
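
The surrogate-modeling idea can be sketched with scikit-learn, as below: a small multilayer perceptron maps tube features to a synthetic energy-absorption target. The features, ranges, and data are invented, not the paper's experiments.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n = 400
diameter = rng.uniform(40, 80, n)          # tube diameter [mm] (illustrative range)
thickness = rng.uniform(1, 4, n)           # wall thickness [mm]
n_layers = rng.integers(4, 13, n)          # number of plies
X = np.column_stack([diameter, thickness, n_layers])

# synthetic target: specific energy absorption with noise (placeholder relationship)
sea = 20 + 5 * thickness + 0.8 * n_layers - 0.05 * diameter + rng.normal(0, 1.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, sea, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out tubes:", model.score(X_te, y_te))
```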

Keywords: CFRP composite tubes, energy absorption, crushing behavior, machine learning, neural network

Procedia PDF Downloads 111
1203 Settings of Conditions Leading to Reproducible and Robust Biofilm Formation in vitro in Evaluation of Drug Activity against Staphylococcal Biofilms

Authors: Adela Diepoltova, Klara Konecna, Ondrej Jandourek, Petr Nachtigal

Abstract:

A loss of control over antibiotic-resistant pathogens has become a global issue due to severe and often untreatable infections. This situation is reflected in complicated treatment, higher health costs, and higher mortality. All these factors emphasize the urgent need for the discovery and development of new anti-infectives. Among the most common pathogens mentioned in connection with antibiotic resistance are bacteria of the genus Staphylococcus. These bacterial agents have developed several mechanisms against the effect of antibiotics, one of which is biofilm formation. In staphylococci, biofilms are associated with infections such as endocarditis, osteomyelitis, and catheter-related bloodstream infections. To the authors' best knowledge, no validated and standardized methodology for evaluating candidate compound activity against staphylococcal biofilms exists. A variety of protocols for in vitro drug activity testing has been suggested, yet there are often fundamental differences between them. Based on our experience, a key methodological step that leads to credible results is to form a robust biofilm with appropriate attributes, such as firm adherence to the substrate, a complex arrangement in layers, and the presence of an extracellular polysaccharide matrix. First, for the purpose of evaluating anti-biofilm drug activity, the focus was put on the various conditions (supplementation of cultivation media with human plasma or fetal bovine serum, shaking mode, density of the initial inoculum) that should lead to reproducible and robust in vitro staphylococcal biofilm formation in a microtiter plate model. Three model staphylococcal reference strains were included in the study: Staphylococcus aureus (ATCC 29213), methicillin-resistant Staphylococcus aureus (ATCC 43300), and Staphylococcus epidermidis (ATCC 35983). The total biofilm biomass was quantified using the Christensen method with crystal violet, and results obtained from at least three independent experiments were statistically processed. Attention was also paid to the viability of the biofilm-forming staphylococcal cells and the presence of the extracellular polysaccharide matrix. The conditions that led to robust biofilm biomass formation with the attributes mentioned above were then applied by introducing an alternative method analogous to the commercially available test system, the Calgary Biofilm Device. In this test system, biofilms are formed on pegs incorporated into the lid of the microtiter plate, which provides several advantages, including in situ detection and quantification of biofilm microbial cells that have retained their viability after drug exposure. Based on our preliminary studies, it was found that attention should also be paid to the peg surface and the substrate on which the bacterial biofilms are formed. Therefore, further optimization steps were introduced: the surface of the pegs was coated with human plasma, fetal bovine serum, or poly-L-lysine, and the propensity of the bacteria to adhere and form biofilms was subsequently monitored. In conclusion, suitable conditions were identified that lead to the formation of reproducible, robust staphylococcal biofilms in vitro, for the microtiter plate model and for the system analogous to the Calgary Biofilm Device as well. The robustness and typical slime texture could be detected visually, and analysis by confocal laser scanning microscopy revealed a complex three-dimensional arrangement of biofilm-forming organisms surrounded by an extracellular polysaccharide matrix.

Keywords: anti-biofilm drug activity screening, in vitro biofilm formation, microtiter plate model, the Calgary biofilm device, staphylococcal infections, substrate modification, surface coating

Procedia PDF Downloads 120