Search results for: parametric estimation
1922 Stability-Indicating High-Performance Thin-Layer Chromatography Method for Estimation of Naftopidil
Authors: P. S. Jain, K. D. Bobade, S. J. Surana
Abstract:
A simple, selective, precise, and stability-indicating high-performance thin-layer chromatographic method for the analysis of Naftopidil, both in bulk and in pharmaceutical formulation, has been developed and validated. The method employed HPTLC aluminium plates precoated with silica gel as the stationary phase. The solvent system consisted of hexane: ethyl acetate: glacial acetic acid (4:4:2 v/v). The system was found to give a compact spot for Naftopidil (Rf value of 0.43 ± 0.02). Densitometric analysis of Naftopidil was carried out in the absorbance mode at 253 nm. The linear regression analysis data for the calibration plots showed a good linear relationship, with r² = 0.999 ± 0.0001 with respect to peak area, in the concentration range of 200-1200 ng per spot. The method was validated for precision, recovery, and robustness. The limits of detection and quantification were 20.35 and 61.68 ng per spot, respectively. Naftopidil was subjected to acid and alkali hydrolysis, oxidation, and thermal degradation, and was found to degrade under all of these conditions, indicating susceptibility to acid, base, oxidation, and heat. The degraded products were well resolved from the pure drug, with significantly different Rf values. Statistical analysis shows that the method is repeatable, selective, and accurate for the estimation of the investigated drug. The proposed HPTLC method can be applied for the identification and quantitative determination of Naftopidil in bulk drug and pharmaceutical formulation.
Keywords: naftopidil, HPTLC, validation, stability, degradation
Procedia PDF Downloads 400
1921 Capture-recapture to Estimate Completeness of Pulmonary Tuberculosis with Two Sources
Authors: Ratchadaporn Ungcharoen, Lily Ingsrisawang
Abstract:
Capture-recapture methods are popular techniques for indirectly estimating the size of wildlife populations and the completeness of cases in epidemiology and the social sciences. The aim of this study was to estimate the completeness of pulmonary tuberculosis cases confirmed by two sources, hospital registrations and surveillance systems, in 2013 in Nakhon Pathom province, Thailand. Several estimators of population size were considered: the Lincoln-Petersen estimator, the Chapman estimator, Chao's lower bound estimator, Zelterman's estimator, etc. We focus on the Chapman and Chao's lower bound estimators for estimating the completeness of pulmonary tuberculosis from two sources. The retrieved pulmonary tuberculosis data from the two sources were analyzed and bootstrapped for 30 samples, with 241 observations from source 1 and 305 observations from source 2 per sample, for additional exploration of the completeness of pulmonary tuberculosis. The results from the original data show that the Chapman estimator gave an estimated total of 360 (95% CI: 349-371) pulmonary tuberculosis cases, corresponding to an estimated completeness of 57%, while Chao's lower bound estimator gave an estimated total of 365 (95% CI: 354-376) cases, with an estimated completeness of 55.9%. For the bootstrap samples, the Chapman and Chao's lower bound estimators gave an estimated 347 (95% CI: 309-385) and 353 (95% CI: 315-390) pulmonary tuberculosis cases, respectively. Where recording systems for two sources are available, record linkage and capture-recapture analysis can be useful for estimating the completeness of different registration systems. Both the Chapman and Chao's lower bound approaches produce very close estimates.
Keywords: capture-recapture, Chao, Chapman, pulmonary tuberculosis
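As a concrete illustration of the two-source estimator discussed above, the sketch below computes the Chapman point estimate and a 95% interval from Seber's variance approximation. The overlap count m (cases matched in both sources) is not reported in the abstract, so the value used here is an assumed placeholder, chosen because it reproduces a point estimate near the paper's 360.

```python
import math

def chapman(n1, n2, m):
    """Chapman estimator of population size from two overlapping sources.

    n1, n2 : cases captured by source 1 and source 2
    m      : cases captured by both sources (matched records)
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    # Seber's approximation for the variance of the Chapman estimator
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
           / ((m + 1) ** 2 * (m + 2)))
    half = 1.96 * math.sqrt(var)
    return n_hat, (n_hat - half, n_hat + half)

# Source sizes from the abstract; the overlap m is NOT reported there,
# so m = 204 is an assumed illustrative value
n_hat, (lo, hi) = chapman(n1=241, n2=305, m=204)
print(f"N = {n_hat:.0f}, 95% CI = ({lo:.0f}, {hi:.0f})")
```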
Procedia PDF Downloads 516
1920 Engine Thrust Estimation by Strain Gauging of Engine Mount Assembly
Authors: Rohit Vashistha, Amit Kumar Gupta, G. P. Ravishankar, Mahesh P. Padwale
Abstract:
Accurate thrust measurement is required for aircraft during takeoff and after ski-jump. In a developmental aircraft, takeoff from a ship is extremely critical, and the thrust produced by the engine should be known to the pilot before takeoff so that, if the thrust is insufficient, the take-off can be aborted and an accident avoided. After ski-jump, knowledge of the engine thrust is required because the horizontal speed of the aircraft is less than the normal takeoff speed; the engine should be able to produce enough thrust to bring the airframe to the nominal horizontal takeoff speed within the prescribed time limit. Contemporary low-bypass gas turbine engines generally have three mounts, where the two side mounts transfer the engine thrust to the airframe and the third mount takes only the weight component, with no thrust component. In the present method of thrust estimation, strain gauging of the two side mounts is carried out, and the strain produced at various power settings is used to estimate the thrust produced by the engine. A quarter Wheatstone bridge is used to acquire the strain data. The engine mount assembly is tested in a universal testing machine to determine the equivalent elasticity of the assembly; this elasticity value is used in the analytical approach for estimating the engine thrust. The estimated thrust is compared with the test-bed load cell thrust data, and the experimental strain data are also compared with strain data obtained from FEM analysis. Experimental setup: Strain gauges are mounted on the tapered portion of the engine mount sleeve at two diametrically opposite locations, both in the horizontal plane. In this way, the strain gauges do not pick up strain due to the weight of the engine (except negligible strain due to the material's Poisson's ratio) or hoop stress. Only the third-mount strain gauge shows strain when the engine is not running, i.e., strain due to the weight of the engine. When the engine starts running, all the load is taken by the side mounts: the strain gauge on the forward side of the sleeve showed a compressive strain, and the strain gauge on the rear side of the sleeve showed a tensile strain. Results and conclusion: The analytical calculation shows that the hoop stresses dominate the bending stress. The thrust estimated by strain gauging shows better accuracy at higher power settings than at lower power settings: the accuracy at the maximum power setting is 99.7%, whereas at the lower power setting it is 78%.
Keywords: engine mounts, finite element analysis, strain gauge, stress
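A minimal sketch of the measurement chain described above: the quarter-bridge voltage ratio is converted to strain with the standard bridge equation, and strain to an axial mount force via Hooke's law. The gauge factor, excitation voltage, modulus, and sleeve cross-section are assumed illustrative values, not the paper's; resolving the two mount forces into a thrust component would additionally require the mount geometry.

```python
def quarter_bridge_strain(v_out, v_ex, gauge_factor=2.0, v_out_0=0.0):
    """Strain from a quarter Wheatstone bridge reading.

    v_out, v_ex : bridge output and excitation voltages (V)
    v_out_0     : unstrained (zero-load) bridge output (V)
    Sign convention depends on which bridge arm carries the gauge.
    """
    vr = (v_out - v_out_0) / v_ex
    return -4.0 * vr / (gauge_factor * (1.0 + 2.0 * vr))

def mount_axial_force(strain, youngs_modulus, cross_section_area):
    """Axial force on a mount sleeve via Hooke's law: F = E * A * strain."""
    return youngs_modulus * cross_section_area * strain

# Assumed illustrative values: steel sleeve, E = 200 GPa, A = 4e-4 m^2
eps = quarter_bridge_strain(v_out=2.1e-3, v_ex=5.0)
force = mount_axial_force(eps, 200e9, 4e-4)
print(f"strain = {eps:.2e}, axial force = {force / 1e3:.1f} kN")
```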
Procedia PDF Downloads 485
1919 Optimizing Microwave Assisted Extraction of Anti-Diabetic Plant Tinospora cordifolia Used in Ayush System for Estimation of Berberine Using Taguchi L-9 Orthogonal Design
Authors: Saurabh Satija, Munish Garg
Abstract:
The present work reports an efficient extraction method, using a microwave-based solvent-sample duo-heating mechanism, for the extraction of berberine from Tinospora cordifolia, an important anti-diabetic plant in the AYUSH system. The process is based on simultaneous heating of the sample matrix and the extracting solvent under microwave energy. Methanol was used as the extracting solvent; it has excellent berberine-solubilizing power and warms readily under microwaves owing to its high dissipation factor. Extraction conditions such as irradiation time, microwave power, solute-solvent ratio, and temperature were optimized using a Taguchi design, and berberine was quantified using high-performance thin-layer chromatography. The optimized parameters, in rank order, were microwave power (rank 1), irradiation time (rank 2), and temperature (rank 3). This kind of extraction under dual heating provided a choice of extraction parameters for better precision and higher yield, with a significant reduction in extraction time under optimum conditions. The developed extraction protocol yields higher amounts of berberine, the major anti-diabetic moiety in Tinospora cordifolia, which can lead to the development of cheaper formulations of the plant and can help in the rapid prevention of diabetes worldwide.
Keywords: berberine, microwave, optimization, Taguchi
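To make the Taguchi workflow concrete, the sketch below builds the standard L-9 orthogonal array (four factors at three levels in nine runs), computes a larger-is-better signal-to-noise ratio for each run, and ranks factors by the range of their level means. The yield numbers are placeholders, not the study's data.

```python
import numpy as np

# Standard Taguchi L9 orthogonal array: 4 factors at 3 levels in 9 runs
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

# Hypothetical berberine yields (mg/g) for the 9 runs (placeholder data)
yields = np.array([3.1, 3.8, 3.5, 4.2, 4.9, 4.0, 4.6, 5.1, 4.4])

# Larger-is-better signal-to-noise ratio: S/N = -10 log10(mean(1/y^2))
sn = -10.0 * np.log10(1.0 / yields ** 2)

factors = ["microwave power", "irradiation time", "temperature", "solute:solvent"]
for j, name in enumerate(factors):
    level_means = [sn[L9[:, j] == lv].mean() for lv in (1, 2, 3)]
    delta = max(level_means) - min(level_means)  # range, used to rank factors
    print(f"{name}: level means = {np.round(level_means, 2)}, delta = {delta:.2f}")
```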
Procedia PDF Downloads 348
1918 Parallel Self Organizing Neural Network Based Estimation of Archie's Parameters and Water Saturation in Sandstone Reservoir
Authors: G. M. Hamada, A. A. Al-Gathe, A. M. Al-Khudafi
Abstract:
Determination of water saturation in sandstone is vital for determining the initial oil or gas in place in reservoir rocks. Water saturation determination using electrical measurements is based mainly on Archie's formula; consequently, the accuracy of Archie's formula parameters rigorously affects the water saturation values. Archie's parameters a, m, and n are determined by three techniques: the conventional technique, the Core Archie-Parameter Estimation (CAPE) technique, and the 3-D technique. This work introduces a hybrid parallel self-organizing neural network (PSONN) system targeting accepted values of Archie's parameters and, consequently, reliable water saturation values. The work focuses on these three Archie's parameter determination techniques and the subsequent calculation of water saturation. Using the same data, a hybrid parallel self-organizing neural network (PSONN) algorithm is used to estimate Archie's parameters and predict water saturation. Results have shown that the estimated Archie's parameters m, a, and n are highly acceptable according to statistical analysis, indicating that the PSONN model has a lower statistical error and a higher correlation coefficient. This study was conducted using a high number of measurement points for 144 core plugs from a sandstone reservoir. The PSONN algorithm can provide reliable water saturation values, and it can supplement or even replace the conventional techniques for determining Archie's parameters and thereby calculating water saturation profiles.
Keywords: water saturation, Archie's parameters, artificial intelligence, PSONN, sandstone reservoir
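For reference, once a, m, and n are fixed, water saturation follows directly from Archie's equation; a minimal sketch, with assumed illustrative log values rather than the study's core data:

```python
def archie_sw(rw, rt, phi, a=1.0, m=2.0, n=2.0):
    """Archie water saturation: Sw = ( a * Rw / (phi**m * Rt) )**(1/n).

    rw  : formation-water resistivity (ohm.m)
    rt  : true formation resistivity (ohm.m)
    phi : porosity (fraction)
    a, m, n : tortuosity factor, cementation and saturation exponents
    """
    return (a * rw / (phi ** m * rt)) ** (1.0 / n)

# Assumed illustrative log values, not data from the paper
print(f"Sw = {archie_sw(rw=0.05, rt=20.0, phi=0.18):.2f}")
```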
Procedia PDF Downloads 128
1917 Estimation of Relative Permeabilities and Capillary Pressures in Shale Using Simulation Method
Authors: F. C. Amadi, G. C. Enyi, G. Nasr
Abstract:
Relative permeabilities are practical factors used to correct the single-phase Darcy's law for application to multiphase flow. Relative permeability and capillary pressures are used for the effective characterisation of large-scale multiphase flow in hydrocarbon recovery. These parameters are acquired via special core flooding experiments, and the special core analysis (SCAL) module of reservoir simulation is applied by engineers for their evaluation. However, core flooding experiments on shale core samples are expensive and time-consuming before the various flow assumptions, for instance Darcy's law, are satisfied. This makes the application of core-flooding simulation imperative, in which analyses of the relative permeabilities and capillary pressures of multiphase flow can be carried out efficiently and effectively at a reasonable pace. This paper presents a Sendra software simulation of core flooding to obtain relative permeabilities and capillary pressures using different correlations. The approach used in this study comprised three steps. First, the basic petrophysical parameters of the Marcellus shale sample, such as porosity, were determined using laboratory techniques. Secondly, core flooding was simulated for a particular injection scenario using different correlations. Thirdly, the best-fit correlations for the estimation of relative permeability and capillary pressure were obtained. This research approach saves cost and time and is very reliable for computing relative permeability and capillary pressures at steady or unsteady state, and in drainage or imbibition processes, in the oil and gas industry when compared to other methods.
Keywords: relative permeability, porosity, 1-D black oil simulator, capillary pressures
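The correlations such simulators fit are often Corey-type power laws; the sketch below generates water-oil relative permeability curves from assumed endpoint saturations and exponents. All numerical values are illustrative, not fitted to the Marcellus sample.

```python
import numpy as np

def corey_relperm(sw, swc=0.2, sor=0.25, krw_max=0.3, kro_max=0.8,
                  nw=2.0, no=2.0):
    """Corey-type relative permeability curves for a water-oil system.

    sw  : water saturation array
    swc : connate water saturation; sor : residual oil saturation
    nw, no : Corey exponents for the water and oil phases
    """
    swn = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)  # normalized Sw
    krw = krw_max * swn ** nw
    kro = kro_max * (1.0 - swn) ** no
    return krw, kro

sw = np.linspace(0.2, 0.75, 12)
krw, kro = corey_relperm(sw)
for s, w, o in zip(sw, krw, kro):
    print(f"Sw = {s:.2f}  krw = {w:.3f}  kro = {o:.3f}")
```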
Procedia PDF Downloads 442
1916 A Brief Study about Nonparametric Adherence Tests
Authors: Vinicius R. Domingues, Luan C. S. M. Ozelim
Abstract:
Statistical study has become indispensable in various fields of knowledge. Geotechnics is no different: probabilistic and statistical methods have gained ground there, considering their use in characterizing the uncertainties inherent in soil properties. One situation engineers constantly face is the definition of a probability distribution that adequately represents the sampled data. To be able to discard bad distributions, goodness-of-fit tests are necessary. In this paper, three non-parametric goodness-of-fit tests are applied to a computationally generated data set to test their goodness-of-fit to a series of known distributions. It is shown that the use of the normal distribution does not always provide satisfactory results regarding the physical and behavioral representation of the modeled parameters.
Keywords: Kolmogorov-Smirnov test, Anderson-Darling test, Cramer-Von-Mises test, nonparametric adherence tests
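The three tests named in the keywords are all available in SciPy; a minimal sketch applying them to a skewed synthetic sample against a fitted normal distribution (note that estimating the parameters from the same data makes the nominal p-values optimistic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.lognormal(mean=1.0, sigma=0.4, size=200)  # skewed "soil property"

# Fit a normal distribution, then test the fit three ways
mu, sigma = sample.mean(), sample.std(ddof=1)
ks = stats.kstest(sample, "norm", args=(mu, sigma))        # Kolmogorov-Smirnov
ad = stats.anderson(sample, dist="norm")                   # Anderson-Darling
cvm = stats.cramervonmises(sample, "norm", args=(mu, sigma))  # Cramer-von Mises

print(f"KS:  D  = {ks.statistic:.3f}, p = {ks.pvalue:.4f}")
print(f"AD:  A2 = {ad.statistic:.3f}, 5% critical = {ad.critical_values[2]:.3f}")
print(f"CvM: W2 = {cvm.statistic:.3f}, p = {cvm.pvalue:.4f}")
```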
Procedia PDF Downloads 446
1915 Risk Analysis of Leaks from a Subsea Oil Facility Based on Fuzzy Logic Techniques
Authors: Belén Vinaixa Kinnear, Arturo Hidalgo López, Bernardo Elembo Wilasi, Pablo Fernández Pérez, Cecilia Hernández Fuentealba
Abstract:
The expanded use of risk assessment in legislative and corporate decision-making has increased the role of expert judgement in providing data for safety-related decision-making. Expert judgements are required in most steps of risk assessment: hazard identification, risk estimation, risk evaluation, and the analysis of options. This paper presents a fault tree analysis (FTA), a probabilistic failure analysis, applied to leakage of oil in a subsea production system. In standard FTA, the failure probabilities of the components of a system are treated as exact values when evaluating the failure probability of the top event, yet there is a persistent insufficiency of data for estimating component failures within the drilling industry. Therefore, fuzzy theory can be used as a solution to this issue. The aim of this paper is to examine the leaks from the Zafiro West subsea oil facility by using fuzzy fault tree analysis (FFTA). As a result, the research has made theoretical and practical contributions to maritime safety and environmental protection. FFTA has also traditionally been an effective strategy for identifying hazards in nuclear installations and power industries.
Keywords: expert judgment, probability assessment, fault tree analysis, risk analysis, oil pipelines, subsea production system, drilling, quantitative risk analysis, leakage failure, top event, off-shore industry
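In FFTA, the crisp basic-event probabilities are replaced by fuzzy numbers elicited from experts; below is a minimal sketch using triangular fuzzy numbers and the usual component-wise gate approximations. The event names, values, and the single OR gate are illustrative assumptions, not the Zafiro West fault tree.

```python
import numpy as np

# Triangular fuzzy failure probabilities (low, mode, high) for basic events,
# standing in for expert judgements on leak causes (illustrative values)
events = {
    "seal_failure":   (1e-4, 5e-4, 1e-3),
    "pipe_corrosion": (2e-4, 8e-4, 2e-3),
    "valve_failure":  (5e-5, 2e-4, 6e-4),
}

def fuzzy_and(probs):
    """AND gate: multiply triangular numbers component-wise (approximation)."""
    return tuple(np.prod([p[i] for p in probs]) for i in range(3))

def fuzzy_or(probs):
    """OR gate: 1 - prod(1 - p), applied component-wise (approximation)."""
    return tuple(1 - np.prod([1 - p[i] for p in probs]) for i in range(3))

# Top event "leak" modelled here as any one basic event occurring (OR gate)
top = fuzzy_or(list(events.values()))
crisp = sum(top) / 3.0  # simple centroid defuzzification of a triangular number
print(f"top event = {tuple(f'{x:.2e}' for x in top)}, crisp = {crisp:.2e}")
```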
Procedia PDF Downloads 191
1914 Prosody Generation in Neutral Speech Storytelling Application Using Tilt Model
Authors: Manjare Chandraprabha A., S. D. Shirbahadurkar, Manjare Anil S., Paithne Ajay N.
Abstract:
This paper proposes intonation modeling for prosody generation in neutral speech for Marathi (a language spoken in Maharashtra, India) storytelling applications. Audio storytelling devices are nowadays very prominent for children. In this paper, we propose a tilt model for stressed words in Marathi for speech modification. The tilt model predicts the modification in the tone of neutral speech, and a Gaussian mixture model (GMM) is used to identify the stressed words to be modified.
Keywords: tilt model, fundamental frequency, statistical parametric speech synthesis, GMM
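In Taylor's tilt formulation, each intonational event is summarized by its F0 rise/fall amplitudes and durations; a small sketch of the standard tilt computation follows. The input numbers are invented for illustration, and the sign of tilt indicates whether the event is rise- or fall-dominated.

```python
def tilt_parameters(a_rise, a_fall, d_rise, d_fall):
    """Taylor's tilt parameter for one intonational event.

    a_rise, a_fall : F0 rise and fall amplitudes (Hz)
    d_rise, d_fall : rise and fall durations (s)
    Returns tilt in [-1, +1]: +1 is a pure rise, -1 a pure fall.
    """
    tilt_amp = (abs(a_rise) - abs(a_fall)) / (abs(a_rise) + abs(a_fall))
    tilt_dur = (d_rise - d_fall) / (d_rise + d_fall)
    return 0.5 * (tilt_amp + tilt_dur)

# A stressed word realised mostly as a rise gives a positive tilt
print(f"tilt = {tilt_parameters(a_rise=30.0, a_fall=8.0, d_rise=0.18, d_fall=0.06):.2f}")
```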
Procedia PDF Downloads 393
1913 Satellite LiDAR-Based Digital Terrain Model Correction Using Gaussian Process Regression
Authors: Keisuke Takahata, Hiroshi Suetsugu
Abstract:
Forest height is an important parameter for forest biomass estimation, and precise elevation data is essential for accurate forest height estimation. There are several globally or nationally available digital elevation models (DEMs), like SRTM and ASTER; however, their accuracy is reported to be low, particularly in mountainous areas with closed canopy or steep slopes. Recently, space-borne LiDAR missions, such as the Global Ecosystem Dynamics Investigation (GEDI), have started to provide sparse but accurate ground elevation and canopy height estimates. Several studies have reported a high degree of accuracy in these elevation products on their exact footprints, while it is not clear how this sparse information can be used over wider areas. In this study, we developed a digital terrain model correction algorithm that spatially interpolates the difference between existing DEMs and GEDI elevation products using a Gaussian process (GP) regression model. The results show that our GP-based methodology can reduce the mean bias of the elevation data from 3.7 m to 0.3 m when airborne LiDAR-derived elevation information is used as ground truth. Our algorithm is also capable of quantifying the elevation data uncertainty, which is a critical requirement for biomass inventory. Upcoming satellite-LiDAR missions, like MOLI (Multi-footprint Observation Lidar and Imager), are expected to contribute to more accurate digital terrain model generation.
Keywords: digital terrain model, satellite LiDAR, gaussian processes, uncertainty quantification
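A minimal scikit-learn sketch of the same idea: fit a GP to the DEM-minus-GEDI differences at footprint locations, then predict the correction surface and its standard deviation on a grid. The coordinates, kernel hyperparameters, and differences below are synthetic stand-ins, not the study's configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical GEDI footprints: (x, y) positions and the observed
# difference between the global DEM and the GEDI ground elevation (m)
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10_000, size=(300, 2))
dem_minus_gedi = 3.5 + 0.001 * xy[:, 0] + rng.normal(0, 0.5, 300)

# Kernel: smooth spatial trend (RBF) plus footprint-level noise
kernel = 1.0 * RBF(length_scale=1_000.0) + WhiteKernel(noise_level=0.25)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(xy, dem_minus_gedi)

# Predict the correction surface and its uncertainty on a raster grid;
# corrected DEM = DEM - correction, with sigma quantifying residual error
grid = np.array([(x, y) for x in range(0, 10_001, 500)
                 for y in range(0, 10_001, 500)], dtype=float)
correction, sigma = gp.predict(grid, return_std=True)
print(correction[:3], sigma[:3])
```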
Procedia PDF Downloads 184
1912 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana
Authors: Gautier Viaud, Paul-Henry Cournède
Abstract:
Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take genotype-by-environment interactions into account. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters permits the derivation of analytical expressions for the full conditional distributions of these population parameters. As plant growth models are nonlinear in nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed; this allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state-space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax with metaprogramming capabilities and high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale Greenlab model for the latter is thus presented, in which the surface areas of each individual leaf can be simulated. It is assumed that the error made in the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noise for the observations is therefore used. Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days, using a two-step segmentation and tracking algorithm that notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, there is no need for the data for a single individual to be available at all times, nor for the observation times to be the same across individuals. This allows image-analysis data to be discarded when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of the model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
Keywords: Bayesian, genotypic differentiation, hierarchical models, plant growth models
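A schematic of the hybrid sampler described above, written in Python rather than the authors' Julia platform: individual parameters get a random-walk Metropolis update against their hierarchical prior, while the population mean gets a conjugate Gibbs draw. The toy saturating growth curve and multiplicative-noise likelihood stand in for the Greenlab model; all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(theta, t, y):
    """Toy growth model: leaf area = theta0 * (1 - exp(-theta1 * t)),
    with multiplicative normal noise (sd proportional to the prediction)."""
    pred = theta[0] * (1.0 - np.exp(-theta[1] * t))
    sd = 0.1 * np.maximum(pred, 1e-6)
    return np.sum(-0.5 * ((y - pred) / sd) ** 2 - np.log(sd))

def metropolis_step(theta, t, y, mu, tau, step=0.1):
    """Random-walk Metropolis update of one individual's parameters,
    targeting p(theta | data, mu, tau) in the hierarchical model."""
    prop = theta + rng.normal(0.0, step, size=theta.shape)
    def log_post(th):
        return log_likelihood(th, t, y) - 0.5 * np.sum(((th - mu) / tau) ** 2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        return prop
    return theta

def gibbs_mu_update(thetas, tau, prior_mu=0.0, prior_sd=10.0):
    """Conjugate normal draw for the population mean given the individuals."""
    n = len(thetas)
    post_var = 1.0 / (n / tau**2 + 1.0 / prior_sd**2)
    post_mean = post_var * (thetas.sum(axis=0) / tau**2 + prior_mu / prior_sd**2)
    return rng.normal(post_mean, np.sqrt(post_var))

# One small simulated population and a few thousand Gibbs-Metropolis sweeps
t = np.linspace(1.0, 21.0, 10)
true_thetas = np.abs(rng.normal([5.0, 0.2], [0.5, 0.02], size=(8, 2)))
data = [th[0] * (1 - np.exp(-th[1] * t)) * rng.lognormal(0, 0.1, t.size)
        for th in true_thetas]
thetas, mu, tau = np.ones((8, 2)), np.ones(2), 1.0
for sweep in range(3000):
    thetas = np.array([metropolis_step(thetas[i], t, data[i], mu, tau)
                       for i in range(8)])
    mu = gibbs_mu_update(thetas, tau)
print("population mean estimate:", np.round(mu, 3))
```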
Procedia PDF Downloads 304
1911 Nonlinear Free Vibrations of Functionally Graded Cylindrical Shells
Authors: Alexandra Andrade Brandão Soares, Paulo Batista Gonçalves
Abstract:
Using a modal expansion that satisfies the boundary and continuity conditions and expresses the modal couplings characteristic of cylindrical shells in the nonlinear regime, the equations of motion are discretized using the Galerkin method. The resulting algebraic equations are solved by the Newton-Raphson method, thus obtaining the nonlinear frequency-amplitude relation. Finally, a parametric analysis is conducted to study the influence of the geometry of the shell, the gradient of the functionally graded material, and the vibration modes on the degree and type of nonlinearity of the cylindrical shell, which is the main contribution of this research work.
Keywords: cylindrical shells, dynamics, functionally graded material, nonlinear vibrations
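A minimal sketch of the final solution step: Newton-Raphson applied to a single-mode, Duffing-type discretization to trace a frequency-amplitude (backbone) relation. The cubic coefficient is an assumed placeholder, not a value derived from the shell equations.

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 for the Galerkin modal amplitudes at a given frequency."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(jac(x), -f(x))
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Single-mode backbone relation: (1 - w**2) * A + (3/4) * g * A**3 = 0
g = 0.5  # assumed cubic stiffness coefficient of the discretized mode
for w in (1.05, 1.10, 1.20):
    f = lambda A: np.array([(1 - w**2) * A[0] + 0.75 * g * A[0]**3])
    jac = lambda A: np.array([[(1 - w**2) + 2.25 * g * A[0]**2]])
    A = newton_raphson(f, jac, [1.0])
    print(f"w = {w:.2f} -> amplitude A = {A[0]:.3f}")  # hardening behaviour
```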
Procedia PDF Downloads 66
1910 Novel Microstrip MIMO Antenna for 3G/4G Applications
Authors: Sandro Samir Nasief, Hussein Hamed Ghouz, Mohamed Fathy
Abstract:
A compact ultra-wideband microstrip MIMO antenna is introduced. The antenna consists of two elements, each of size 24 × 24 mm², while the total MIMO size is 58 × 24 mm² after the spacing between the MIMO elements and the addition of a decoupling circuit. The first element covers from 3.29 to 6.9 GHz using a digital ground, and the second covers from 8.76 to 13.27 GHz using a defected ground. This type of antenna is used for 3G and 4G applications. The antenna structure and a parametric study (reflection coefficients, gain, coupling, and decoupling) are presented.
Keywords: micro-strip antenna, MIMO, digital ground, defected ground, decouple circuit, bandwidth
Procedia PDF Downloads 366
1909 Investigation and Estimation of State of Health of Battery Pack in Battery Electric Vehicles: Online Battery Characterization
Authors: Ali Mashayekh, Mahdiye Khorasani, Thomas Weyh
Abstract:
The tendency to use battery electric vehicles (BEVs) for low and medium, or even high, driving ranges has been growing more and more. As a result, the safety, reliability, and durability of the battery pack, a component with a great share of the cost and weight of the final product, are topics to be considered and investigated. Battery aging can be considered the predominant factor for the reliability and durability of BEVs. To better understand the aging process, offline battery characterization has been widely used, but it is time-consuming and needs very expensive infrastructure. This paper presents a substitute for the conventional battery characterization methods, based on battery Modular Multilevel Management (BM3). With this topology, the battery cells can be drained and charged according to their capacity, which allows varying battery pack structures. Due to the integration of the power electronics, the output voltage of the battery pack is no longer fixed but can be dynamically adjusted in small steps. In other words, each cell can have three different states, namely series, parallel, and bypass, in connection with the neighboring cells. With the help of MATLAB/Simulink and by using the BM3 modules, the battery string model is created. This model allows us to switch two cells with different SoC in parallel, which results in internal balancing of the cells. But if the parallel switching lasts just a couple of milliseconds, we obtain a perturbation pulse which can stimulate the cells out of the relaxation phase. By modeling the voltage response of the battery to this pulse, it is possible to characterize the cell. The online EIS method discussed in this paper can be a robust substitute for the conventional battery characterization methods.
Keywords: battery characterization, SoH estimation, RLS, BEV
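The RLS keyword points at how such a voltage response can be turned into parameters online; below is a generic recursive-least-squares sketch fitting a simple OCV-R0 model to current/voltage samples. The model structure and numbers are illustrative assumptions, not the BM3 implementation.

```python
import numpy as np

class RecursiveLeastSquares:
    """Plain RLS with a forgetting factor, e.g. for fitting an equivalent-
    circuit battery model y = phi . theta to a perturbation-pulse response."""

    def __init__(self, n_params, forgetting=0.99):
        self.theta = np.zeros(n_params)      # parameter estimate
        self.P = np.eye(n_params) * 1e3      # inverse correlation matrix
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta) # innovation step
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta

# Example: estimate open-circuit voltage and R0 from (current, voltage) pairs
rls = RecursiveLeastSquares(n_params=2)
for i, v in [(0.0, 3.70), (5.0, 3.64), (10.0, 3.58), (2.0, 3.676)]:
    theta = rls.update([1.0, -i], v)  # model: v = OCV - R0 * i
print(f"OCV ~ {theta[0]:.3f} V, R0 ~ {theta[1] * 1e3:.1f} mOhm")
```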
Procedia PDF Downloads 149
1908 Knitting Stitches' Manipulation for Catenary Textile Structures
Authors: Virginia Melnyk
Abstract:
This paper explores the design of a catenary structure using knitted textiles. Using the advantages of the Grasshopper and Kangaroo parametric software to simulate and pre-design an overall form, the design is then translated to a pattern that can be made with hand-manipulated stitches on a knitting machine. The textile takes advantage of the structure of knitted materials and their ability to stretch. Using different types of stitches to control the amount of stretch that can occur in portions of the textile generates the overall formal design. The textile is then hardened in an upside-down hanging position and flipped right-side up, becoming a structural catenary form. The resulting design is used as a small cat house for a cat to sit inside and climb on top of.
Keywords: architectural materials, catenary structures, knitting fabrication, textile design
Procedia PDF Downloads 185
1907 Price Effect Estimation of Tobacco on Low-Wage Male Smokers: A Causal Mediation Analysis
Authors: Kawsar Ahmed, Hong Wang
Abstract:
The study's goal was to estimate the causal mediation impact of tobacco taxation before and after price hikes among low-income male smokers, with particular emphasis on the pathway framework for estimating effects with continuous and dichotomous variables. From July to December 2021, cross-sectional observational data (n = 739) were collected from Bangladeshi low-wage smokers. A quasi-Bayesian technique, a binomial probit model, and a simulation-based sensitivity analysis, implemented with the computational tools of the R mediation package, were used to estimate the effects. After a price rise for tobacco products, the average number of cigarette or bidi sticks consumed decreased from 6.7 to 4.56. Rising tobacco prices have a direct effect on low-income people's decisions to quit or reduce their daily smoking: the average causal mediation effect (ACME) was significant [effect = 2.31, 95% confidence interval (CI) = (4.71, 0.00), p < 0.01], as was the average direct effect (ADE) [effect = 8.6, 95% CI = (6.8, 0.11), p < 0.001], along with the overall effect (p < 0.001). The proportion of the effect on smoking choice mediated through income was 26.1% following the price rise. In the sensitivity analysis, the ACME and ADE curves, based on the observed coefficients of determination, support the hypothesized model of a substantial effect following price rises. Price increases through taxation thus show a positive causal mediation through income, affecting the decision to limit tobacco use, and can support healthcare policy for low-income men.
Keywords: causal mediation analysis, directed acyclic graphs, tobacco price policy, sensitivity analysis, pathway estimation
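To illustrate the ACME/ADE decomposition, here is a linear product-of-coefficients sketch with a nonparametric bootstrap in Python; it is a simplified stand-in for the quasi-Bayesian probit machinery of the R mediation package, and all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(11)

def fit_ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def mediation_effects(treat, mediator, outcome):
    """Product-of-coefficients mediation: ACME = a*b, ADE = c'."""
    a = fit_ols(treat.reshape(-1, 1), mediator)[1]            # T -> M path
    bc = fit_ols(np.column_stack([treat, mediator]), outcome)  # T, M -> Y
    return a * bc[2], bc[1]                                    # (ACME, ADE)

# Synthetic stand-ins: price exposure T, income mediator M, sticks/day Y
n = 739
T = rng.integers(0, 2, n).astype(float)       # before/after price hike
M = 5.0 - 1.2 * T + rng.normal(0, 1, n)       # mediator partly driven by T
Y = 6.7 - 1.0 * T + 0.9 * M + rng.normal(0, 1, n)

acme, ade = mediation_effects(T, M, Y)
# Nonparametric bootstrap for interval estimates of both effects
boot = np.array([mediation_effects(T[i], M[i], Y[i])
                 for i in (rng.integers(0, n, n) for _ in range(1000))])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"ACME = {acme:.2f} (95% CI {lo[0]:.2f}, {hi[0]:.2f}); "
      f"ADE = {ade:.2f} (95% CI {lo[1]:.2f}, {hi[1]:.2f})")
```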
Procedia PDF Downloads 114
1906 Analysis of Autoantibodies to the S-100 Protein, NMDA, and Dopamine Receptors in Children with Type 1 Diabetes Mellitus
Authors: Yuri V. Bykov, V. A. Baturin
Abstract:
Aim of the study: The aim of the study was to perform a comparative analysis of the levels of autoantibodies (AAB) to the S-100 protein as well as to the dopamine and NMDA receptors in children with type 1 diabetes mellitus (DM) in therapeutic remission. Materials and methods: Blood serum obtained from 42 children aged 4 to 17 years (20 boys and 22 girls) was analyzed. Twenty-one of these children had a diagnosis of type 1 DM and were in therapeutic remission (study group); the mean duration of disease in these children was 9.6 ± 0.36 years. Children without DM were included in a group of "apparently healthy children" (21 children, comparison group). AAB to the S-100 protein and to the dopamine and NMDA receptors were measured by ELISA. The normal range of IgG AAB was specified as up to 10 µg/mL. To compare the central parameters of the groups, the following parametric and non-parametric methods were used: Student's t-test or the Mann-Whitney U test. The level of significance for inter-group comparisons was set at p < 0.05. Results: The mean levels of AAB to the S-100B protein were significantly higher (p = 0.0045) in children with DM (16.84 ± 1.54 µg/mL) than in the "apparently healthy children" (2.09 ± 0.05 µg/mL); these elevated levels may indicate damage to brain tissue in children with type 1 DM. The mean levels of AAB to NMDA receptors were also higher in patients with type 1 DM than in the "apparently healthy children", at 13.16 ± 2.07 µg/mL and 1.304 ± 0.05 µg/mL, respectively (p = 0.0021), which may indicate a change in the activity of the glutamatergic system in children with type 1 DM and, in turn, suggests the presence of excitotoxicity. The mean levels of AAB to dopamine receptors were likewise higher (p = 0.0082) in the study group than in the comparison group (40.47 ± 2.31 µg/mL versus 3.91 ± 0.09 µg/mL), suggesting altered activity of the dopaminergic system in children with DM; this can also be viewed as indirect evidence of altered activity of the brain's glutamatergic system. A difference was also detected between the mean values of the measured AABs depending on the duration of the disease: mean AAB values were significantly higher in patients whose disease had lasted more than five years. Conclusions: The elevated mean levels of AAB to the S-100B protein may indicate damage to brain tissue in the setting of excitotoxicity in children with type 1 DM. The elevated levels of AAB to the NMDA and dopamine receptors may indicate activation of the glutamatergic and dopaminergic systems. The observed abnormalities indicate the presence of central nervous system damage in children with type 1 DM, with a tendency for the levels of the studied AABs to rise with disease progression.
Keywords: autoantibodies, brain damage, children, diabetes mellitus
Procedia PDF Downloads 96
1905 Speed Power Control of Double Field Induction Generator
Authors: Ali Mausmi, Ahmed Abbou, Rachid El Akhrif
Abstract:
This research paper aims to reduce the chattering phenomenon in sliding mode control applied to a wind energy conversion system based on the doubly fed induction generator (DFIG). Our goal is to offset the effect of parametric uncertainties and come as close as possible to the dynamic response demanded by the control law in the ideal case, and therefore to force the active and reactive power generated by the DFIG to accurately follow the reference values provided to it. The simulation results using Matlab/Simulink demonstrate the efficiency and performance of the proposed technique while maintaining the simplicity of control by first-order sliding mode.
Keywords: control of speed, correction of the equivalent command, induction generator, sliding mode
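For context, the classic chattering remedy in first-order sliding mode control replaces the discontinuous sign term with a saturation inside a thin boundary layer; a minimal sketch follows. This illustrates the chattering mechanism only, with arbitrary gains; it is not the paper's equivalent-command correction.

```python
import numpy as np

def smc_power_control(s, k=50.0, phi=0.05):
    """First-order sliding-mode law with a boundary layer.

    s   : sliding surface, e.g. the active-power tracking error P_ref - P
    k   : switching gain sized to dominate parametric uncertainty
    phi : boundary-layer width; inside |s| < phi the discontinuous sign(s)
          is replaced by a linear saturation, which suppresses chattering
    """
    return -k * np.clip(s / phi, -1.0, 1.0)

# Compare the two laws near the surface: sign() chatters, sat() does not
for s in (-0.2, -0.01, 0.0, 0.01, 0.2):
    print(f"s = {s:+.2f}  sign-law u = {-50.0 * np.sign(s):+6.1f}  "
          f"sat-law u = {smc_power_control(s):+6.1f}")
```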
Procedia PDF Downloads 377
1904 Shape-Changing Structure: A Prototype for the Study of a Dynamic and Modular Structure
Authors: Annarita Zarrillo
Abstract:
This research is part of adaptive architecture, reflecting the evolution that the world of architectural design is going through. Today's architecture is no longer seen as a static system but, conversely, as a dynamic system that changes in response to the environment and the needs of users. One of the major forms of adaptivity is represented by kinetic structures. This study aims to underline the importance of experimentation on physical scale models for the study of dynamic structures and to present the case study of a modular kinetic structure designed through the use of parametric design software and created as a prototype in the laboratories of the Royal Danish Academy in Copenhagen.
Keywords: adaptive architecture, architectural application, kinetic structures, modular prototype
Procedia PDF Downloads 138
1903 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data
Authors: Georgiana Onicescu, Yuqian Shen
Abstract:
Due to the complex nature of geo-referenced data, multicollinearity among the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we proposed a two-stage variable selection method that extends the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we performed variable selection using the Bayesian Lasso and several other variable selection approaches. Then, in stage II, we performed model selection with only the variables selected in stage I and compared the methods again. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered cases in which all candidate risk factors are independently normally distributed or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, the binary indicator and the combination of binary indicator and Lasso, were considered and compared as alternatives. The simulation results indicated that the proposed two-stage Bayesian Lasso variable selection method has the best performance for both the independent and dependent cases considered. When compared with the one-stage approach and the two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection
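A rough frequentist analogue of the two-stage idea, shown only to make the workflow concrete: stage I screens variables with a cross-validated Lasso, stage II refits a count model on the survivors. This omits the Bayesian spatial machinery entirely, and the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LassoCV, PoissonRegressor

rng = np.random.default_rng(5)
n, p = 400, 12
X = rng.normal(size=(n, p))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=n)  # induce multicollinearity
y = rng.poisson(np.exp(0.3 + 0.6 * X[:, 0] - 0.4 * X[:, 4]))  # count outcome

# Stage I: Lasso on log(1 + y) as a screening device for the risk factors
stage1 = LassoCV(cv=5).fit(X, np.log1p(y))
selected = np.flatnonzero(np.abs(stage1.coef_) > 1e-6)
print("selected columns:", selected)

# Stage II: refit a count model using only the stage-I survivors
stage2 = PoissonRegressor(alpha=0.0).fit(X[:, selected], y)
print("refit coefficients:", np.round(stage2.coef_, 2))
```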
Procedia PDF Downloads 146
1902 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets
Authors: Ece Cigdem Mutlu, Burak Alakent
Abstract:
Maintaining the quality of manufactured products at a desired level depends on the stability of process dispersion and location parameters and on the detection of perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location parameters, respectively, based on the assumption of independent and normally distributed data sets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, estimators robust against contamination that may exist in Phase I are required. In the current study, we present a simple approach to constructing robust Xbar control charts using the average distance to the median, the Qn-estimator of scale, and the M-estimator of scale with logistic psi-function for the estimation of the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and the M-estimator of location with Huber psi-function and logistic psi-function for the estimation of the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that robust estimators yield parameter estimates with higher efficiency against all types of contamination, and Xbar charts constructed using robust estimators have higher power in detecting disturbances, compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
Keywords: average run length, M-estimators, quality control, robust estimators
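Two of the robust estimators named above are easy to state; the sketch below contrasts the Hodges-Lehmann location estimate and the average-distance-to-the-median scale estimate with the sample mean and standard deviation on a subgroup containing one outlier. The normal-consistency constant sqrt(pi/2) ≈ 1.2533 is standard; the subgroup values are illustrative.

```python
import numpy as np
from itertools import combinations_with_replacement

def hodges_lehmann(x):
    """Hodges-Lehmann location: median of Walsh averages (x_i + x_j)/2, i <= j."""
    walsh = [(a + b) / 2.0 for a, b in combinations_with_replacement(x, 2)]
    return float(np.median(walsh))

def average_distance_to_median(x, c=1.2533):
    """Scale estimate from the mean absolute deviation about the median;
    c = sqrt(pi/2) makes it consistent for the normal distribution."""
    x = np.asarray(x, dtype=float)
    return c * np.mean(np.abs(x - np.median(x)))

# A rational subgroup contaminated by a single outlier
subgroup = np.array([10.1, 9.9, 10.0, 10.2, 14.5])
print(f"mean = {subgroup.mean():.2f}  vs  HL  = {hodges_lehmann(subgroup):.2f}")
print(f"std  = {subgroup.std(ddof=1):.2f}  vs  ADM = "
      f"{average_distance_to_median(subgroup):.2f}")
```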
Procedia PDF Downloads 191
1901 Auto Rickshaw Impacts with Pedestrians: A Computational Analysis of Post-Collision Kinematics and Injury Mechanics
Authors: A. J. Al-Graitti, G. A. Khalid, P. Berthelson, A. Mason-Jones, R. Prabhu, M. D. Jones
Abstract:
Motor vehicle related pedestrian road traffic collisions are a major road safety challenge, since they are a leading cause of death and serious injury worldwide, contributing to a third of the global disease burden. The auto rickshaw, which is a common form of urban transport in many developing countries, plays a major transport role, both as a vehicle for hire and for private use. The most common auto rickshaws are quite unlike a 'typical' four-wheel motor vehicle, being typically characterised by three wheels, a non-tilting sheet-metal body or open frame construction, a canvas roof and side curtains, a small drivers' cabin, handlebar controls and a passenger space at the rear. Given the propensity, in developing countries, for auto rickshaws to be used in mixed cityscapes, where pedestrians and vehicles share the roadway, the potential for auto rickshaw impacts with pedestrians is relatively high. Whilst auto rickshaws are used in some Western countries, their limited number and spatial separation from pedestrian walkways, as a result of city planning, have not resulted in significant accident statistics. Thus, auto rickshaws have not been subject to the vehicle-impact-related pedestrian crash kinematic analyses and/or injury mechanics assessments typically associated with motor vehicle development in Western Europe, North America and Japan. This study presents a parametric analysis of auto rickshaw related pedestrian impacts by computational simulation, using a Finite Element model of an auto rickshaw and an LS-DYNA 50th percentile male Hybrid III Anthropometric Test Device (dummy). Parametric variables include auto rickshaw impact velocity, auto rickshaw impact region (front, centre or offset) and relative pedestrian impact position (front, side and rear). The output data of each impact simulation were correlated against reported injury metrics, namely the Head Injury Criterion (front, side and rear), the Neck Injury Criterion (front, side and rear) and the Abbreviated Injury Scale with its reported risk level, adding greater understanding to the issue of auto rickshaw related pedestrian injury risk. The parametric analyses suggest that pedestrians are subject to a relatively high risk of injury during impacts with an auto rickshaw at velocities of 20 km/h or greater, and some of the impact simulations even suggest a risk of fatality. The present study provides valuable evidence for informing a series of recommendations and guidelines for making the auto rickshaw safer during collisions with pedestrians. Whilst it is acknowledged that the present research findings are based in the field of safety engineering and may over-represent injury risk compared to 'real world' accidents, many of the simulated interactions produced injury response values significantly greater than current threshold curves and thus justify their inclusion in the study. To reduce the injury risk level and increase the safety of the auto rickshaw, there should be a reduction in the velocity of the auto rickshaw and/or consideration of engineering solutions, such as retrofitting injury mitigation technologies to those auto rickshaw contact regions which present the greatest risk of producing pedestrian injury.
Keywords: auto rickshaw, finite element analysis, injury risk level, LS-DYNA, pedestrian impact
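Of the metrics above, the Head Injury Criterion is simple enough to sketch directly; the function below evaluates HIC15 over a resultant head acceleration trace, using a synthetic half-sine pulse as a stand-in for simulation output.

```python
import numpy as np

def hic(t, a, max_window=0.015):
    """Head Injury Criterion from a resultant head acceleration trace.

    t : time (s), a : acceleration (g)
    HIC = max over t1 < t2 of (t2 - t1) * [ (1/(t2-t1)) * int a dt ]^2.5,
    with the window (t2 - t1) capped at 15 ms for HIC15.
    """
    # trapezoidal cumulative integral of a(t) for O(1) interval averages
    ca = np.concatenate([[0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))])
    best = 0.0
    for i in range(len(t) - 1):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = (ca[j] - ca[i]) / dt
            best = max(best, dt * avg ** 2.5)
    return best

# Synthetic half-sine head pulse, ~60 g peak over 10 ms (illustrative only)
t = np.linspace(0, 0.02, 201)
a = 60.0 * np.sin(np.pi * t / 0.01) * (t <= 0.01)
print(f"HIC15 = {hic(t, a):.0f}")
```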
Procedia PDF Downloads 194
1900 Wind Resource Estimation and Economic Analysis for Rakiraki, Fiji
Authors: Kaushal Kishore
Abstract:
Immense amounts of imported fuel are used in Fiji for electricity generation, transportation, and miscellaneous household work. To alleviate this dependency on fossil fuels, paramount importance has been given to instigating the utilization of renewable energy sources for power generation and to reducing environmental degradation. Amongst the many renewable energy sources, wind has been considered one of the best identified renewable sources comprehensively available in Fiji. In this study, a wind resource assessment was carried out for three locations in Rakiraki, Fiji: Rokavukavu, Navolau, and Tuvavatu. The average wind speeds at 55 m above ground level (a.g.l.) at the Rokavukavu, Navolau, and Tuvavatu sites are 5.91 m/s, 8.94 m/s, and 8.13 m/s, with turbulence intensities of 14.9%, 17.1%, and 11.7%, respectively. The moment fitting method has been used to estimate the Weibull parameters and the power density at each site. A high-resolution wind resource map for the three locations has been developed using the Wind Atlas Analysis and Application Program (WAsP). The results obtained from WAsP exhibited good wind potential at the Navolau and Tuvavatu sites. A wind farm comprising six Vergnet 275 kW wind turbines at each site has been proposed at Navolau and Tuvavatu. The annual energy production (AEP) for each wind farm is estimated, and an economic analysis is performed. The economic analysis for the proposed wind farms at the Navolau and Tuvavatu sites showed payback periods of 5 and 6 years, respectively.
Keywords: annual energy production, Rakiraki Fiji, turbulence intensity, Weibull parameter, wind speed, Wind Atlas Analysis and Application Program
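The moment fitting step can be sketched compactly: the Justus approximation recovers the Weibull shape k from the coefficient of variation, the scale c then follows from the mean, and the mean power density is 0.5·rho·c³·Γ(1 + 3/k). The long-term coefficient of variation of the wind speed is not reported in the abstract (turbulence intensity describes short-term fluctuations, not the long-term distribution), so the value below is an assumed placeholder.

```python
import math

def weibull_moment_fit(mean_u, std_u):
    """Empirical moment method (Justus approximation):
    k from the coefficient of variation, then c from the mean."""
    k = (std_u / mean_u) ** -1.086
    c = mean_u / math.gamma(1.0 + 1.0 / k)
    return k, c

def wind_power_density(c, k, rho=1.225):
    """Mean wind power density (W/m^2): 0.5 * rho * c^3 * Gamma(1 + 3/k)."""
    return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)

# Navolau mean speed from the abstract; cv = 0.5 is an assumed placeholder
mean_u = 8.94
k, c = weibull_moment_fit(mean_u, std_u=0.5 * mean_u)
print(f"k = {k:.2f}, c = {c:.2f} m/s, "
      f"power density = {wind_power_density(c, k):.0f} W/m^2")
```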
Procedia PDF Downloads 191
1899 Tracing Sources of Sediment in an Arid River, Southern Iran
Authors: Hesam Gholami
Abstract:
Elevated suspended sediment loads in riverine systems, resulting from accelerated erosion due to human activities, are a serious threat to the sustainable management of watersheds and the ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects, as a distributed or non-point pollution source in catchments, requires reliable provenance information. Sediment tracing, or sediment fingerprinting, a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches, and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework to the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central, and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%), and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique for quantifying the sources of sediment in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran
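The core GLUE loop is easy to sketch: draw candidate source proportions on the simplex, score each mixture against the target tracer signature, and retain only the behavioural draws, whose spread gives the uncertainty bands. The tracer values and the behavioural threshold below are invented for illustration, not the Mehran River data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Mean tracer signatures (rows: western, central, eastern sub-basins) and a
# target sediment signature -- illustrative numbers, not the study's data
sources = np.array([[12.0, 3.1, 40.0],
                    [15.0, 2.2, 55.0],
                    [ 9.5, 4.0, 33.0]])
target = np.array([10.4, 3.6, 36.3])

n_draws, kept = 100_000, []
for _ in range(n_draws):
    p = rng.dirichlet(np.ones(3))   # random proportions on the simplex
    mixed = p @ sources             # predicted mixture signature
    # goodness of fit: 1 minus the mean relative error over tracers
    gof = 1.0 - np.mean(np.abs(mixed - target) / target)
    if gof >= 0.97:                 # behavioural threshold (assumed)
        kept.append(p)

kept = np.array(kept)
lo, hi = np.percentile(kept, [2.5, 97.5], axis=0)
print("mean contributions:", np.round(kept.mean(axis=0), 3))
print("95% bands:", np.round(lo, 3), np.round(hi, 3))
```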
Procedia PDF Downloads 74
1898 3-D Numerical Model for Wave-Induced Seabed Response around an Offshore Pipeline
Authors: Zuodong Liang, Dong-Sheng Jeng
Abstract:
Seabed instability around an offshore pipeline is one of the key factors that need to be considered in the design of offshore infrastructure. Unlike previous investigations, a three-dimensional numerical model for the wave-induced soil response around an offshore pipeline is proposed in this paper. The numerical model was first validated with 2-D experimental data available in the literature. Then, a parametric study is carried out to examine the effects of wave and seabed characteristics and of the pipeline configuration. Numerical examples demonstrate a significant influence of wave obliquity on the wave-induced pore pressures and the resultant seabed liquefaction around the pipeline, which cannot be observed in 2-D numerical simulations.
Keywords: pore pressure, 3D wave model, seabed liquefaction, pipeline
Procedia PDF Downloads 373
1897 Correlation Between Different Radiological Findings and Histopathological Diagnosis of Breast Diseases: Retrospective Review Conducted Over Six Years in King Fahad University Hospital in Eastern Province, Saudi Arabia
Authors: Sadeem Aljamaan, Reem Hariri, Rahaf Alghamdi, Batool Alotaibi, Batool Alsenan, Lama Althunayyan, Areej Alnemer
Abstract:
The aim of this study is to correlate radiological findings with histopathological results with regard to Breast Imaging-Reporting and Data System (BI-RADS) scores, the size of breast masses, molecular subtypes, and suspicious radiological features, as well as to assess the concordance in histological grade between core biopsy and surgical excision among breast cancer patients, followed by an analysis of the change in concordance in relation to neoadjuvant chemotherapy in a Saudi population. A retrospective review was conducted over a 6-year period (2017-2022) on all breast core biopsies of women preceded by radiological investigation. The chi-squared test (χ²) was performed on qualitative data, the Mann-Whitney test on quantitative non-parametric variables, and the kappa test for grade agreement. A total of 641 cases were included. Ultrasound, mammography, and magnetic resonance imaging demonstrated diagnostic accuracies of 85%, 77.9%, and 86.9%, respectively. Magnetic resonance imaging manifested the highest sensitivity (72.2%), and the lowest was for ultrasound (61%). Concordance in tumor size with final excisions was best for magnetic resonance imaging, while mammography demonstrated a higher tendency towards overestimation (41.9%) and ultrasound showed the highest underestimation (67.7%). The association between basal-like molecular subtypes and a BI-RADS score 5 classification was statistically significant only for magnetic resonance imaging (p = 0.04). Luminal subtypes demonstrated a significantly higher percentage of spiculation on mammography. BI-RADS score 4 manifested a substantial number of benign pathologies in all three modalities. A fair concordance (κ = 0.212 and 0.379) was demonstrated between excision and the preceding core biopsy grading with and without neoadjuvant therapy, respectively. The results demonstrated downgrading in cases post-neoadjuvant therapy; in cases that did not receive neoadjuvant therapy, underestimation of tumor grade in the biopsy was evident. In summary, magnetic resonance imaging had the highest sensitivity, specificity, positive predictive value, and accuracy in both diagnosis and estimation of tumor size. Mammography demonstrated better sensitivity than ultrasound and had the highest negative predictive value, but ultrasound had better specificity, positive predictive value, and accuracy. Therefore, the combination of different modalities is advantageous. The concordance of core biopsy grading with excision was not impacted by neoadjuvant therapy.
Keywords: breast cancer, mammography, MRI, neoadjuvant, pathology, US
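The grade-agreement statistic used above is Cohen's kappa; a minimal sketch with hypothetical grade pairs follows (scikit-learn also offers a quadratically weighted variant that penalizes two-grade disagreements more heavily).

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical tumour grades (1-3) on core biopsy vs. surgical excision
biopsy   = [1, 2, 2, 3, 1, 2, 3, 2, 1, 2, 3, 3, 2, 1, 2]
excision = [1, 2, 3, 3, 2, 2, 3, 2, 1, 3, 3, 2, 2, 1, 2]

kappa = cohen_kappa_score(biopsy, excision)
weighted = cohen_kappa_score(biopsy, excision, weights="quadratic")
print(f"kappa = {kappa:.3f}, quadratically weighted = {weighted:.3f}")
```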
Procedia PDF Downloads 82
1896 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely used. It helps with two main tasks: displaying results by coloring items according to the item class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and its answer to the crowding problem, whereby all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness in the embedding. This algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, and two initializations of the algorithm would lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, with the newly obtained embedding, and the successive embeddings can be used to study the impact of one variable over the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution, and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
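A much simpler cousin of this idea can be tried in scikit-learn by passing a previous embedding as the initialization of the next run; this keeps cluster positions roughly stable across snapshots of the same points. It is only a proxy for the paper's two-cost optimization with a support embedding, and it requires the snapshots to share the same points.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X_old = rng.normal(size=(500, 50))
X_new = X_old + rng.normal(scale=0.05, size=X_old.shape)  # drifted snapshot

# Reference embedding for the first snapshot
emb_old = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X_old)

# Re-embed the new snapshot *starting from the old coordinates*, so that
# clusters keep their positions and only the drifted points move
emb_new = TSNE(n_components=2, init=emb_old, random_state=0).fit_transform(X_new)

drift = np.linalg.norm(emb_new - emb_old, axis=1)
print(f"median per-point displacement: {np.median(drift):.2f}")
```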
Procedia PDF Downloads 144
1895 Soliton Solutions of the Higher-Order Nonlinear Schrödinger Equation with Dispersion Effects
Authors: H. Triki, Y. Hamaizi, A. El-Akrmi
Abstract:
We consider the higher-order nonlinear Schrödinger equation model with fourth-order dispersion, cubic-quintic terms, and self-steepening. This equation governs the propagation of femtosecond pulses in optical fibers. We present new bright and dark solitary-wave-type solutions for such a model under certain parametric conditions. This kind of solution may be useful to explain some physical phenomena related to wave propagation in nonlinear optical fiber systems supporting high-order nonlinear and dispersive effects.
Keywords: nonlinear Schrödinger equation, high-order effects, soliton solution
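For reference, bright and dark solitary waves of such models generally take the sech/tanh ansatz below; the specific amplitude-width-frequency constraints (the "parametric conditions") depend on the dispersion and nonlinearity coefficients and are not reproduced here.

```latex
% Generic bright and dark solitary-wave ansatz for a higher-order NLS model
% (illustrative form; the parameter constraints follow from the coefficients)
\begin{align}
  u_{\text{bright}}(z,t) &= A \,\operatorname{sech}\!\big[w\,(t - v z)\big]
                            \, e^{\,i(\kappa z - \Omega t)} \\
  u_{\text{dark}}(z,t)   &= A \,\tanh\!\big[w\,(t - v z)\big]
                            \, e^{\,i(\kappa z - \Omega t)}
\end{align}
```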
Procedia PDF Downloads 636
1894 Detection of Chaos in General Parametric Model of Infectious Disease
Authors: Javad Khaligh, Aghileh Heydari, Ali Akbar Heydari
Abstract:
Mathematical epidemiological models for the spread of disease through a population are used to predict the prevalence of a disease or to study the impacts of treatment or prevention measures. Initial conditions for these models are measured from statistical data collected from a population. Since these initial conditions can never be exact, the presence of chaos in mathematical models has serious implications for the accuracy of the models as well as for how epidemiologists interpret their findings. This paper confirms the chaotic behavior of a model for dengue fever and an SI model by investigating sensitive dependence, bifurcation, and the 0-1 test under a variety of initial conditions.
Keywords: epidemiological models, SEIR disease model, bifurcation, chaotic behavior, 0-1 test
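The 0-1 test mentioned above reduces a time series to a single number K, near 0 for regular dynamics and near 1 for chaos; below is a single-frequency sketch of the Gottwald-Melbourne procedure, demonstrated on the logistic map rather than an epidemic model (practical use averages K over many random values of c).

```python
import numpy as np

def zero_one_test(phi, c=1.7):
    """Gottwald-Melbourne 0-1 test for chaos on an observable phi.

    Returns K ~ 0 for regular dynamics and K ~ 1 for chaotic dynamics.
    """
    n = len(phi)
    j = np.arange(1, n + 1)
    p = np.cumsum(phi * np.cos(j * c))  # translation variables
    q = np.cumsum(phi * np.sin(j * c))
    n_cut = n // 10                      # mean-square-displacement lags
    M = np.array([np.mean((p[m:] - p[:-m]) ** 2 + (q[m:] - q[:-m]) ** 2)
                  for m in range(1, n_cut)])
    lags = np.arange(1, n_cut)
    return np.corrcoef(lags, M)[0, 1]    # growth-rate correlation K

# Logistic map: regular at r = 3.2, chaotic at r = 4.0
for r in (3.2, 4.0):
    x, series = 0.3, []
    for _ in range(2000):
        x = r * x * (1 - x)
        series.append(x)
    print(f"r = {r}: K = {zero_one_test(np.array(series)):.2f}")
```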
Procedia PDF Downloads 326
1893 Estimation of Constant Coefficients of Bourgoyne and Young Drilling Rate Model for Drill Bit Wear Prediction
Authors: Ahmed Z. Mazen, Nejat Rahmanian, Iqbal Mujtaba, Ali Hassanpour
Abstract:
In oil and gas well drilling, the drill bit is an important part of the Bottom Hole Assembly (BHA), which is installed and designed to drill and produce a hole by several mechanisms. The efficiency of the bit depends on many drilling parameters, such as weight on bit, rotary speed, and mud properties. When the bit is pulled out of the hole, the bit damage must be evaluated and recorded very carefully to guide engineers in selecting bits for further planned wells. Drilling with a worn bit may cause severe damage to the bit, leading to cutter or cone losses at the bottom of the hole, where a fishing job will have to take place; all of this increases the operating cost. The main factor in reducing the cost of a drilling operation is to maximize the rate of penetration by analyzing real-time data to predict drill bit wear while drilling. There are numerous models in the literature for predicting the rate of penetration from drilling parameters, mostly based on empirical approaches. One of the most commonly used is the Bourgoyne and Young model, where the rate of penetration can be estimated from the drilling parameters as well as a wear index using an empirical correlation, provided all the constants and coefficients are accurately determined. This paper introduces a new methodology to estimate the eight coefficients of the Bourgoyne and Young model using the gPROMS parameter estimation tool GPE (version 4.2.0). Real data collected from similar formations (12 ¼ in. sections) in two different fields in Libya are used to estimate the coefficients. The estimated coefficients are then used in the equations and applied to nearby wells in the same field to predict bit wear.
Keywords: Bourgoyne and Young model, bit wear, gPROMS, rate of penetration
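Because the Bourgoyne and Young model has the form ROP = exp(a1 + Σ aj·xj), taking logarithms makes it linear in the eight coefficients, so an ordinary least-squares fit can stand in for the gPROMS estimation as a quick sketch. The records below are synthetic, not the Libyan field data, and the x2..x8 columns are placeholders for the depth, compaction, differential pressure, weight-on-bit, rotary speed, bit wear, and hydraulics functions computed from field measurements.

```python
import numpy as np

def fit_by_coefficients(X, rop):
    """Least-squares estimate of a1..a8 from drilling records.

    X   : (n_records, 7) matrix of the x2..x8 functions
    rop : (n_records,) measured rate of penetration
    """
    A = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend x1 = 1 for a1
    coeffs, *_ = np.linalg.lstsq(A, np.log(rop), rcond=None)
    return coeffs

# Synthetic records standing in for 12 1/4 in. section data
rng = np.random.default_rng(3)
true_a = np.array([3.0, 0.5, -0.2, 0.3, 0.6, 0.4, -0.8, 0.1])
X = rng.normal(size=(200, 7))
rop = np.exp(true_a[0] + X @ true_a[1:]) * rng.lognormal(0, 0.05, 200)
print(np.round(fit_by_coefficients(X, rop), 2))  # should recover true_a
```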
Procedia PDF Downloads 154