Search results for: hansen solubility parameter estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4038

3888 Designing, Preparation and Structural Evaluation of Co-Crystals of Oxaprozin

Authors: Maninderjeet K. Grewal, Sakshi Bhatnor, Renu Chadha

Abstract:

The composition of pharmaceutical entities and their molecular interactions can be altered by crystal engineering to optimize drug properties such as solubility and bioavailability. The present work emphasizes the preparation, characterization, and biopharmaceutical evaluation of a co-crystal of the BCS Class II anti-osteoarthritis drug Oxaprozin (OXA) with aspartic acid (ASPA) as co-former. The co-crystals were prepared by the mechanochemical solvent-drop grinding method. The prepared co-crystal (OXA-ASPA) was characterized using analytical tools such as differential scanning calorimetry (DSC), Fourier transform infrared spectroscopy (FT-IR), and powder X-ray diffraction (PXRD). The DSC thermogram of the OXA-ASPA co-crystal showed a single sharp melting endotherm at 235 °C, between the melting peaks of the drug and the coformer, suggesting the formation of a new phase, a co-crystal, which was further confirmed by the other analytical techniques. FT-IR analysis of the OXA-ASPA co-crystal showed shifts in the hydroxyl, carbonyl, and amine peaks compared to the pure components, indicating that all these functional groups participate in co-crystal formation. The appearance of new peaks in the PXRD pattern of the co-crystal, in comparison to the individual components, showed that a new crystalline entity had been formed. The crystal structure of the co-crystal was determined from the PXRD data using Materials Studio software (Biovia). An equilibrium solubility study of OXA-ASPA showed improved solubility compared to the pure drug. Co-crystallization of oxaprozin with a suitable coformer thus modulates its physicochemical properties and, consequently, its biopharmaceutical parameters.

Keywords: cocrystals, coformer, oxaprozin, solubility

Procedia PDF Downloads 85
3887 Wind Resource Estimation and Economic Analysis for Rakiraki, Fiji

Authors: Kaushal Kishore

Abstract:

Immense amounts of imported fuel are used in Fiji for electricity generation, transportation, and miscellaneous household work. To alleviate this dependency on fossil fuels, paramount importance has been given to promoting renewable energy sources for power generation and to reducing environmental degradation. Among the many renewable energy sources, wind has been identified as one of the best that is comprehensively available in Fiji. In this study, a wind resource assessment for three locations in Rakiraki, Fiji has been carried out: Rokavukavu, Navolau, and Tuvavatu. The average wind speeds at 55 m above ground level (a.g.l.) at the Rokavukavu, Navolau, and Tuvavatu sites are 5.91 m/s, 8.94 m/s, and 8.13 m/s, with turbulence intensities of 14.9%, 17.1%, and 11.7%, respectively. The moment fitting method has been used to estimate the Weibull parameters and the power density at each site. A high-resolution wind resource map for the three locations has been developed using the Wind Atlas Analysis and Application Program (WAsP). The results obtained from WAsP exhibited good wind potential at the Navolau and Tuvavatu sites. A wind farm comprising six Vergnet 275 kW wind turbines has been proposed at each of the Navolau and Tuvavatu sites. The annual energy production (AEP) for each wind farm is estimated, and an economic analysis is performed. The economic analysis for the proposed wind farms at the Navolau and Tuvavatu sites showed payback periods of 5 and 6 years, respectively.
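
The moment fitting step can be sketched in a few lines. Assuming only a sample mean and standard deviation of wind speed are available (the figures below are synthetic, not the Rakiraki measurements), the Weibull shape k is recovered by matching the coefficient of variation, which is monotone in k, so a bisection suffices; the scale c then follows, and the mean power density uses the third Weibull moment with a nominal air density:

```python
import math

def weibull_moment_fit(mean, std, k_lo=0.1, k_hi=20.0, tol=1e-8):
    """Method-of-moments Weibull fit: solve for shape k from the coefficient
    of variation (independent of scale), then recover the scale c."""
    cv_target = std / mean

    def cv(k):
        g1 = math.gamma(1.0 + 1.0 / k)
        g2 = math.gamma(1.0 + 2.0 / k)
        return math.sqrt(g2 / g1**2 - 1.0)

    lo, hi = k_lo, k_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cv(mid) > cv_target:   # cv(k) decreases as k increases
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    c = mean / math.gamma(1.0 + 1.0 / k)
    return k, c

def power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2): 0.5 * rho * E[v^3] for Weibull(k, c)."""
    return 0.5 * rho * c**3 * math.gamma(1.0 + 3.0 / k)
```

For example, a site whose wind speeds follow Weibull(k=2, c=10 m/s) has mean 8.86 m/s and standard deviation 4.63 m/s, and feeding those moments back in recovers the parameters.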

Keywords: annual energy production, Rakiraki Fiji, turbulence intensity, Weibull parameter, wind speed, Wind Atlas Analysis and Application Program

Procedia PDF Downloads 163
3886 Single Carrier Frequency Domain Equalization Design to Cope with Narrow Band Jammer

Authors: So-Young Ju, Sung-Mi Jo, Eui-Rim Jeong

Abstract:

In this paper, based on the conventional single carrier frequency domain equalization (SC-FDE) structure, we propose a new SC-FDE structure to cope with a narrowband jammer. In the conventional SC-FDE structure, channel estimation is performed in the time domain. When a narrowband jammer exists, time-domain channel estimation becomes very difficult due to the high-power jamming interference, which degrades receiver performance. To relieve this problem, a new SC-FDE frame is proposed that enables channel estimation under narrowband jamming environments. The proposed, modified SC-FDE structure performs channel estimation in the frequency domain, and its performance is verified via computer simulation.
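
The frequency-domain equalization itself reduces to a one-tap filter per FFT bin. The sketch below is a generic illustration, not the authors' receiver: a per-bin MMSE equalizer that collapses to zero-forcing when the noise variance is zero, with the channel frequency response `H` assumed to be already estimated (e.g. from frequency-domain pilots, as the paper proposes):

```python
import numpy as np

def fde_equalize(rx_block, channel_freq, noise_var=0.0):
    """One-tap frequency-domain equalizer for an SC-FDE block.
    With noise_var = 0 this is zero-forcing; otherwise per-bin MMSE."""
    R = np.fft.fft(rx_block)
    H = channel_freq
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)  # per-bin equalizer weight
    return np.fft.ifft(W * R)

# Toy check: a BPSK block through a 2-tap circular channel (cyclic prefix
# already removed) is recovered exactly by the zero-forcing setting.
x = np.array([1, -1, 1, 1, -1, 1, -1, -1], dtype=float)
H = np.fft.fft(np.array([1.0, 0.5]), len(x))       # channel frequency response
y = np.fft.ifft(np.fft.fft(x) * H)                 # circular convolution
x_hat = fde_equalize(y, H)
```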

Keywords: channel estimation, jammer, pilot, SC-FDE

Procedia PDF Downloads 447
3885 Online Estimation of Clutch Drag Torque in Wet Dual Clutch Transmission Based on Recursive Least Squares

Authors: Hongkui Li, Tongli Lu, Jianwu Zhang

Abstract:

This paper focuses on developing an estimation method for clutch drag torque in a wet dual clutch transmission (DCT). The modelling of clutch drag torque is investigated, and the dynamic viscosity of the oil, the main factor affecting drag torque, is discussed. The paper proposes an estimation method for clutch drag torque based on recursive least squares, utilizing the dynamic equations of the gear shifting synchronization process. The results demonstrate that the estimation method has good accuracy and efficiency.
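
The recursive least squares core of such an estimator can be sketched generically. In the sketch below, `phi` stands in for the regressors supplied by the synchronization dynamics (e.g. viscosity- and speed-dependent terms) and `y` for the observed torque balance; the paper's actual drag-torque model is not reproduced here:

```python
import numpy as np

class RecursiveLeastSquares:
    """Exponentially weighted RLS for a linear-in-parameters model
    y_k = phi_k^T theta + e_k (theta would hold the drag-torque
    coefficients, phi_k the measured regressors)."""

    def __init__(self, n_params, forgetting=0.99, p0=1e4):
        self.theta = np.zeros(n_params)        # parameter estimate
        self.P = np.eye(n_params) * p0         # inverse information matrix
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        P_phi = self.P @ phi
        gain = P_phi / (self.lam + phi @ P_phi)
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, P_phi)) / self.lam
        return self.theta
```

On noiseless synthetic data the estimate converges to the true coefficients within a few hundred updates.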

Keywords: clutch drag torque, wet DCT, dynamic viscosity, recursive least squares

Procedia PDF Downloads 293
3884 Parameter Tuning of Complex Systems Modeled in Agent Based Modeling and Simulation

Authors: Rabia Korkmaz Tan, Şebnem Bora

Abstract:

The major problem encountered when modeling complex systems with agent-based modeling and simulation techniques is the existence of large parameter spaces. A complex system model cannot be expected to reflect the whole of the real system, but by specifying the most appropriate parameters, the actual system can be represented by the model under certain conditions. A review of studies conducted in recent years shows that there are few studies on the parameter tuning problem in agent-based simulations, and that these have focused on tuning the parameters of a single model. In this study, a parameter tuning approach is proposed using metaheuristic algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), and Firefly Algorithm (FA). With this hybrid-structured approach, the parameter tuning problems of models from different fields were solved. The new approach was tested on two different models, and its performance on the different problems was compared. The simulations and results reveal that the proposed approach outperforms existing parameter tuning studies.
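
Of the metaheuristics listed, particle swarm optimization is the simplest to sketch. The toy below tunes parameters by minimizing a discrepancy objective; in an agent-based setting the objective would run the simulation with candidate parameters and compare its output with observed data. The inertia and acceleration constants are conventional textbook choices, not values from the paper:

```python
import numpy as np

def pso_tune(objective, bounds, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimizer: returns (best_params, best_cost).
    `bounds` is a list of (low, high) pairs, one per tunable parameter."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia / cognitive / social
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())
```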

Keywords: parameter tuning, agent based modeling and simulation, metaheuristic algorithms, complex systems

Procedia PDF Downloads 200
3883 Ionic Liquid Membranes for CO2 Separation

Authors: Zuzana Sedláková, Magda Kárászová, Jiří Vejražka, Lenka Morávková, Pavel Izák

Abstract:

Membrane separations are frequently mentioned as a possibility for CO2 capture. The selectivity of ionic liquid membranes is strongly determined by the different solubilities of the separated gases in the ionic liquid. The solubility of the separated gases usually varies over an order of magnitude, unlike their diffusivity in ionic liquids, which is usually of the same order of magnitude for different gases. The present work evaluates the selection of an appropriate ionic liquid for selective membrane preparation based on gas solubility in the ionic liquid. The current state of the art of CO2 capture patents and technologies based on membrane separations was reviewed, and an overview of the discussed transport mechanisms is given. Ionic liquids seem to be promising candidates thanks to their tunable properties, wide liquid range, reasonable thermal stability, and negligible vapor pressure. Although the industrial use of supported liquid membranes is generally limited by their relatively short lifetime, ionic liquids could overcome this problem owing to their negligible vapor pressure and to properties that can be tuned by adequate selection of the cation and anion.

Keywords: biogas upgrading, carbon dioxide separation, ionic liquid membrane, transport properties

Procedia PDF Downloads 397
3882 A Targeted Maximum Likelihood Estimation for a Non-Binary Causal Variable: An Application

Authors: Mohamed Raouf Benmakrelouf, Joseph Rynkiewicz

Abstract:

Targeted maximum likelihood estimation (TMLE) is a well-established method for causal effect estimation with desirable statistical properties. TMLE is a doubly robust, maximum likelihood based approach that includes a secondary targeting step optimizing the target statistical parameter. A causal interpretation of the statistical parameter requires the assumptions of the Rubin causal framework. The causal effect of a binary variable, E, on an outcome, Y, is defined in terms of comparisons between two potential outcomes, as E[YE=1 − YE=0]. Our aim in this paper is to present an adaptation of the TMLE methodology to estimate the causal effect of a non-binary categorical variable, together with a substantial application. We propose a coding of the initial data in order to binarize the variable of interest. For each category, the non-binary variable of interest is transformed into a binary variable taking the value 1 to indicate the presence of the category (or group of categories) for an individual, and 0 otherwise. Such a dummy variable makes it possible to define a pair of potential outcomes and to oppose one category (or group of categories) to another. Let E be a non-binary variable of interest. We propose a complete disjunctive coding of the variable E: the initial variable is transformed into a set of binary vectors (dummy variables), E = (Ee : e ∈ {1, ..., |E|}), where each vector Ee takes the value 0 when its category is absent and 1 when it is present. This allows computing a pairwise TMLE comparing the difference in outcome between one category and all remaining categories. To illustrate the application of our strategy, we first present the implementation of TMLE to estimate the causal effect of a non-binary variable on an outcome using simulated data.
Secondly, we apply our TMLE adaptation to survey data from the French Political Barometer (CEVIPOF) to estimate the causal effect of education level (a five-level variable) on a potential vote in favor of the French extreme right candidate Jean-Marie Le Pen. Counterfactual reasoning requires us to consider additional causal questions and assumptions, leading to a different coding of E as a set of binary vectors, E = (Ee : e ∈ {2, ..., |E|}), where each vector Ee takes the value 0 when the first (reference) category is present and 1 when its own category is present. This allows applying a pairwise TMLE comparing the difference in outcome between the first (fixed) level and each remaining level. We confirmed that an increase in the level of education decreases the voting rate for the extreme right party.
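
The complete disjunctive coding described above is a one-hot expansion, sketched below. This illustrates only the coding step, not the TMLE fit itself; each column of the returned matrix is one dummy variable Ee defining a binary exposure for a pairwise TMLE:

```python
import numpy as np

def disjunctive_coding(e, categories=None):
    """Complete disjunctive (one-hot) coding of a categorical variable E.
    Column j of the returned matrix is the dummy E_j: 1 if category j is
    present for that individual, 0 otherwise."""
    e = np.asarray(e)
    if categories is None:
        categories = np.unique(e)
    dummies = np.stack([(e == c).astype(int) for c in categories], axis=1)
    return dummies, categories
```

Each row sums to one, so every individual contributes to exactly one exposure column.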

Keywords: statistical inference, causal inference, super learning, targeted maximum likelihood estimation

Procedia PDF Downloads 69
3881 Age Estimation from Teeth among North Indian Population: Comparison and Reliability of Qualitative and Quantitative Methods

Authors: Jasbir Arora, Indu Talwar, Daisy Sahni, Vidya Rattan

Abstract:

Introduction: Age estimation is a crucial step in establishing the identity of a person, both deceased and living. In adults, age can be estimated on the basis of six regressive changes in teeth (attrition, secondary dentine, dentine transparency, root resorption, cementum apposition, and periodontal disease), qualitatively using a scoring system and quantitatively by a micrometric method. The present research was designed to establish the reliability of the qualitative (method 1) and quantitative (method 2) methods of age estimation among North Indians and to compare their efficacy. Method: 250 single-rooted extracted teeth (18-75 yrs.) were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Before extraction, the periodontal score of each tooth was noted. Labiolingual sections were prepared and examined under a light microscope for regressive changes. For the qualitative method, each parameter was scored using Gustafson's 0-3 point scoring system, and a total score was calculated. For the quantitative method, each regressive change was measured in the form of 18 micrometric parameters under the microscope with the help of a measuring eyepiece. Age was estimated using linear regression analysis in Gustafson's method and multiple regression analysis in Kedici's method. Estimated age was compared with actual age on the basis of absolute mean error. Results: In the pooled data, a significant correlation (r = 0.8) was observed between the total score and actual age for Gustafson's method; the total score generated an absolute mean error of ±7.8 years. For Kedici's method, a correlation coefficient of r = 0.5 (p < 0.01) was observed between the eighteen micrometric parameters and known age, and age estimated using the multiple regression equation gave an absolute mean error of ±12.18 years. Conclusion: Gustafson's (qualitative) method was found to be the better predictor for age estimation among North Indians.
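
The regression-and-error workflow described above, fitting age on the dental score and scoring the fit by absolute mean error, can be sketched as follows. The data here are synthetic placeholders; the study's own scores and coefficients are not reproduced:

```python
import numpy as np

def fit_linear_age_model(scores, ages):
    """Ordinary least squares fit: age = b0 + b1 * total score."""
    X = np.column_stack([np.ones(len(scores)), np.asarray(scores, dtype=float)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(ages, dtype=float), rcond=None)
    return coef  # (intercept, slope)

def absolute_mean_error(coef, scores, ages):
    """Mean absolute difference between estimated and actual age."""
    pred = coef[0] + coef[1] * np.asarray(scores, dtype=float)
    return float(np.mean(np.abs(pred - np.asarray(ages, dtype=float))))
```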

Keywords: forensic odontology, age estimation, North India, teeth

Procedia PDF Downloads 218
3880 Studying the Effect of Ethanol and Operating Temperature on Purification of Lactulose Syrup Containing Lactose

Authors: N. Zanganeh, M. Zabet

Abstract:

Lactulose is a synthetic disaccharide with remarkable applications in the food and pharmaceutical fields. Lactulose is not found in nature; it is produced by an isomerization reaction of lactose in an alkaline environment. This reaction has a very low yield, since a significant amount of lactose stays un-reacted in the system, and purification of lactulose is therefore difficult and costly. Previous studies have revealed that the solubilities of lactose and lactulose in ethanol differ significantly. Considering that solubility is also affected by temperature, we investigated the effect of ethanol and temperature on the separation of lactose from syrup containing lactose and lactulose. For this purpose, a saturated solution containing lactulose and lactose was first made at three different temperatures: 25 °C (room temperature), 31 °C, and 37 °C. Five samples, each containing 2 g of saturated solution, were taken, and 2 g, 3 g, 4 g, 5 g, and 6 g of ethanol were separately added to the sampling tubes, which were afterward kept at their respective temperatures. The concentrations of lactose and lactulose after the separation process were measured and analyzed by High Performance Liquid Chromatography (HPLC). The results showed that ethanol has a far greater impact than operating temperature on the purification process. It was also observed that the maximum rate of separation occurred with the initial amount of added ethanol.

Keywords: lactulose, lactose, purification, solubility

Procedia PDF Downloads 431
3879 Optimization of Cone Loudspeaker Design Parameters by Analysis of a Narrow Acoustic Sound Pathway

Authors: Yue Hu, Xilu Zhao, Takao Yamaguchi, Manabu Sasajima, Yoshio Koike, Akira Hara

Abstract:

This study attempted the optimization of the design parameters of a cone loudspeaker unit, as an example of highly flexible product design. We developed an acoustic analysis software program that considers the impact of damping caused by air viscosity. In sound reproduction, it is difficult to design each parameter of the loudspeaker individually. To overcome this practical limitation of the design problem, this paper proposes a new acoustic analysis algorithm to optimize the design parameters of the loudspeaker. The material characteristics of the cone paper and the loudspeaker edge were the design parameters, and the vibration displacement of the cone paper was the objective function. The results of the analysis were compared with the predicted values and showed high accuracy. These results suggest that, although parameter design is difficult by experience and intuition, it can be performed comparatively easily using optimization design with the developed acoustic analysis software.

Keywords: air viscosity, loudspeaker, cone paper, edge, optimization

Procedia PDF Downloads 373
3878 Special Case of Trip Distribution Model and Its Use for Estimation of Detailed Transport Demand in the Czech Republic

Authors: Jiri Dufek

Abstract:

The national transport model of the Czech Republic has been modified in a detailed way to obtain detailed travel demand at the municipality level (cities and villages of over 300 inhabitants). As the technique for this detailed modelling, a three-dimensional procedure for calibrating gravity models was used. Besides zone production and attraction, which are usual in gravity models, an additional parameter for trip distribution was introduced, usually called the "third dimension". In this model, that parameter is the demand between regions. The distribution procedure involved the calculation of appropriate skim matrices and their multiplication by three coefficients obtained by iterative balancing of production, attraction, and the third dimension. This type of trip distribution was processed in the R project, and the results were used in the Czech Republic transport model created in PTV Vision. This process generated more precise results at the local level of the model (towns and villages).
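
The iterative balancing of production, attraction, and the third dimension is a tri-proportional (Furness-type) fitting. A minimal sketch on a 3-D array is given below, assuming a strictly positive seed and mutually consistent marginal totals; the axis interpretation (origin zone, destination zone, region pair) is illustrative:

```python
import numpy as np

def balance_3d(seed, prod, attr, third, iters=200):
    """Tri-proportional (Furness-type) balancing: repeatedly scale a positive
    3-D seed array until its marginals match prod (axis 0), attr (axis 1),
    and the third-dimension totals (axis 2). All marginals must share the
    same grand total."""
    T = np.asarray(seed, dtype=float).copy()
    for _ in range(iters):
        T *= (prod / T.sum(axis=(1, 2)))[:, None, None]
        T *= (attr / T.sum(axis=(0, 2)))[None, :, None]
        T *= (third / T.sum(axis=(0, 1)))[None, None, :]
    return T
```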

Keywords: trip distribution, three dimension, transport model, municipalities

Procedia PDF Downloads 96
3877 Estimating Tree Height and Forest Classification from Multi Temporal RISAT-1 HH and HV Polarized Synthetic Aperture Radar Interferometric Phase Data

Authors: Saurav Kumar Suman, P. Karthigayani

Abstract:

In this paper, tree height is estimated and forest types are classified from multi-temporal RISAT-1 Horizontal-Horizontal (HH) and Horizontal-Vertical (HV) polarized Synthetic Aperture Radar (SAR) data. The novelty of the proposed project is the combined use of the backscattering coefficients (sigma naught) and the coherence, within the Water Cloud Model (WCM). The approach uses three main steps: (a) extraction of the different forest parameter data from the Product.xml and BAND-META files and from the Grid-xxx.txt file provided with the HH- and HV-polarized data from ISRO (Indian Space Research Organisation); these files contain the parameters required during height estimation; (b) calculation of the vegetation and ground backscattering, the coherence, and other forest parameters; and (c) classification of forest types using the ENVI 5.0 tool and ROI (Region of Interest) calculation.
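
The Water Cloud Model step can be sketched in its zeroth-order form: total backscatter is the canopy contribution plus the ground contribution attenuated by the canopy, and the relation inverts directly for height. The parameter values below are generic placeholders; in practice A, B, and the ground term would be fitted per forest type and polarization from the RISAT-1 data:

```python
import math

def wcm_backscatter(h, a, b, sigma_ground):
    """Zeroth-order Water Cloud Model: backscatter from vegetation of height h
    with canopy parameters a (vegetation contribution) and b (attenuation),
    plus the ground term attenuated by the two-way canopy loss."""
    att = math.exp(-2.0 * b * h)                 # two-way canopy attenuation
    return a * (1.0 - att) + sigma_ground * att

def wcm_invert_height(sigma_total, a, b, sigma_ground):
    """Invert the WCM for vegetation height (requires a != sigma_ground)."""
    att = (a - sigma_total) / (a - sigma_ground)
    return -math.log(att) / (2.0 * b)
```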

Keywords: RISAT-1, classification, forest, SAR data

Procedia PDF Downloads 375
3876 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation

Authors: Somayeh Komeylian

Abstract:

Direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, modeling the antenna array output involves numerous parameters, including noise samples, signal waveform, signal directions, number of signals, and signal-to-noise ratio (SNR); DoA estimation methods therefore rely heavily on generalization, requiring a large number of training data sets. We have comparatively investigated two different optimization models for DoA estimation: (1) an implementation of the decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) an optimization method based on a deep neural network (DNN) with radial basis functions (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for three classes. However, the accuracy and robustness of DoA estimation remain highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, so the method may fail to deliver high precision. A further contribution of this work is therefore the development of the DNN-RBF model for DoA estimation, to overcome the limitations of non-parametric and data-driven methods with respect to array imperfections and generalization. The numerical results of implementing the DNN-RBF model confirm better DoA estimation performance compared with the LS-SVM algorithm. The performance of the two optimization methods is evaluated using the mean squared error (MSE).
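
As a baseline for what any DoA estimator must accomplish, the classical beamscan (delay-and-sum) estimator for a uniform linear array is sketched below. This is not the LS-SVM or DNN-RBF model of this work, just the conventional reference point, with half-wavelength element spacing assumed:

```python
import numpy as np

def ula_steering(theta_deg, n, d=0.5):
    """Steering vector of an n-element uniform linear array (d in wavelengths)."""
    phase = 2.0 * np.pi * d * np.sin(np.deg2rad(theta_deg)) * np.arange(n)
    return np.exp(1j * phase)

def beamscan_doa(snapshots, grid=np.arange(-90, 91)):
    """Classical beamscan: steer over an angle grid and return the angle
    (degrees) at which the array output power peaks."""
    n = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    power = [np.real(ula_steering(t, n).conj() @ R @ ula_steering(t, n))
             for t in grid]
    return int(grid[int(np.argmax(power))])
```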

Keywords: DoA estimation, adaptive antenna array, deep neural network, LS-SVM optimization model, radial basis function, MSE

Procedia PDF Downloads 70
3875 The Hyperbolic Smoothing Approach for Automatic Calibration of Rainfall-Runoff Models

Authors: Adilson Elias Xavier, Otto Corrêa Rotunno Filho, Paulo Canedo De Magalhães

Abstract:

This paper addresses the issue of automatic parameter estimation in conceptual rainfall-runoff (CRR) models. Due to the threshold structures commonly occurring in CRR models, the associated mathematical optimization problems have the significant characteristic of being strongly non-differentiable. To face this enormous task, the proposed resolution method adopts a smoothing strategy using a special C∞ differentiable class of functions. The final estimation solution is obtained by solving a sequence of differentiable subproblems which gradually approach the original conceptual problem. The use of this technique, called the Hyperbolic Smoothing Method (HSM), makes possible the application of the most powerful minimization algorithms and allows the main difficulties presented by the original CRR problem to be overcome. A set of computational experiments is presented to illustrate both the reliability and the efficiency of the proposed approach.
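
The core of the method is a single C∞ approximation. For the canonical threshold max(x, 0), the hyperbolic smoothing function and its uniform error bound of τ/2 can be sketched as follows:

```python
import math

def smooth_max(x, tau):
    """Hyperbolic smoothing phi(x, tau) = (x + sqrt(x^2 + tau^2)) / 2.
    phi is C-infinity in x, phi -> max(x, 0) as tau -> 0, and
    0 <= phi(x, tau) - max(x, 0) <= tau / 2 for all x."""
    return 0.5 * (x + math.sqrt(x * x + tau * tau))
```

Replacing each threshold in the CRR model by such a function yields the differentiable subproblems; solving them for a decreasing sequence of τ approaches the original problem in the limit.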

Keywords: rainfall-runoff models, automatic calibration, hyperbolic smoothing method

Procedia PDF Downloads 104
3874 Hardware Implementation of Local Binary Pattern Based Two-Bit Transform Motion Estimation

Authors: Seda Yavuz, Anıl Çelebi, Aysun Taşyapı Çelebi, Oğuzhan Urhan

Abstract:

Nowadays, the demand for devices capable of real-time video transmission is ever-increasing, and high-resolution video has made efficient video compression techniques an essential component for capturing and transmitting video data. Motion estimation has a critical role in encoding raw video; hence, various motion estimation methods have been introduced to compress video efficiently. Motion estimation methods based on low bit-depth representations simplify the computation of the matching criterion and thus provide a small hardware footprint. In this paper, a hardware implementation of a two-bit-transform-based low-complexity motion estimation method using a local binary pattern approach is proposed. Image frames are represented in two-bit depth instead of full depth by making use of the local binary pattern as a binarization approach, and the binarization part of the hardware architecture is explained in detail. Experimental results demonstrate the difference between the proposed hardware architecture and the architectures of well-known low-complexity motion estimation methods in terms of important aspects such as resource utilization and energy and power consumption.
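
A software model of the binarization and matching steps helps make the hardware savings concrete. The variant below uses mean/std thresholds, a common two-bit-transform formulation; the paper's exact LBP-based binarization may differ. It replaces the sum of absolute differences with a 2-bit XOR/OR count:

```python
import numpy as np

def two_bit_transform(block):
    """2-bit/pixel representation of an image block: bit 1 thresholds at the
    block mean, bit 2 flags pixels farther than one std from the mean."""
    mu, sigma = block.mean(), block.std()
    b1 = block > mu
    b2 = np.abs(block - mu) > sigma
    return b1, b2

def nnmp(a, b):
    """Number of non-matching points between two 2-bit blocks: the cheap
    XOR/OR matching criterion used in place of SAD."""
    return int(np.sum((a[0] ^ b[0]) | (a[1] ^ b[1])))
```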

Keywords: binarization, hardware architecture, local binary pattern, motion estimation, two-bit transform

Procedia PDF Downloads 274
3873 Development of 4-Allylpyrocatechol Loaded Self-Nanoemulsifying Drug Delivery System for Enhancing Water Solubility and Antibacterial Activity against Oral Pathogenic Bacteria

Authors: Pimpak Phumat, Sakornrat Khongkhunthian, Thomas Rades, Anette Müllertz, Siriporn Okonogi

Abstract:

Self-nanoemulsifying drug delivery systems (SNEDDS) containing 4-allylpyrocatechol (AP) extracted from Piper betle were developed to enhance the water solubility of AP, using the modeling and design (MODDE) program. The amount of AP in each SNEDDS formulation was determined using high-performance liquid chromatography. The formulation consisting of 20% Miglyol®812N, 40% Kolliphor®RH40, 30% Maisine®35-1, and 10% ethanol was found to be the best SNEDDS, providing the highest loading capacity of AP (141.48 ± 15.64 mg/g SNEDDS). The system was also miscible with water. The particle shape and size of the AP-SNEDDS after dispersion in water were investigated using a transmission electron microscope and a photon correlation spectrophotometer, respectively. The particles were spherical, with a size of 34.27 ± 1.14 nm, a narrow size distribution of 0.17 ± 0.04, and a negative zeta potential of -21.66 ± 2.09 mV. The antibacterial activity of AP-SNEDDS containing 1.5 mg/mL of AP was investigated against Streptococcus intermedius, and its effect on S. intermedius cells was observed by scanning electron microscopy (SEM). The SEM results revealed that the bacterial cells were markedly destroyed. A killing kinetics study showed that the killing rate of AP-SNEDDS against S. intermedius was dose-dependent, with a bacterial reduction of 79.86 ± 0.45% within 30 min. AP-SNEDDS showed antibacterial effects against S. intermedius similar to those of chlorhexidine (CHX). It is concluded that SNEDDS is a potential system for enhancing the water solubility of AP, and the antibacterial study reveals that AP-SNEDDS can be a promising system to treat bacterial infections caused by S. intermedius.

Keywords: SNEDDS, 4-allylpyrocatechol, solubility, antibacterial activity, Streptococcus intermedius

Procedia PDF Downloads 90
3872 A Novel Search Pattern for Motion Estimation in High Efficiency Video Coding

Authors: Phong Nguyen, Phap Nguyen, Thang Nguyen

Abstract:

The High Efficiency Video Coding (HEVC) or H.265 standard fulfills the demand for high-resolution video storage and transmission, since it achieves a high compression ratio. However, it requires a huge amount of calculation. Since the Motion Estimation (ME) block accounts for about 80% of the calculation load of HEVC, there has been much research on reducing the computation cost. In this paper, we propose a new algorithm to lower the number of motion estimation search points. The number of points computed in the search pattern is reduced from 77 for the diamond pattern and 81 for the square pattern to only 31, while the Peak Signal to Noise Ratio (PSNR) and bit rate remain almost equal to those of the conventional patterns. The motion estimation time of the new algorithm is reduced by 68.23% and 65.83% compared to the recommended diamond and square search patterns, respectively.
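
For reference, the small-diamond refinement stage common to these search patterns can be sketched as follows. This is an illustrative greedy block-matching search with SAD cost, not the proposed 31-point pattern itself:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def small_diamond_search(ref, cur, bx, by, bsize=8, max_steps=32):
    """Greedy small-diamond block search: from the current best offset,
    try the four diamond neighbours and move while the cost improves."""
    block = cur[by:by + bsize, bx:bx + bsize]
    mv, best = (0, 0), sad(ref[by:by + bsize, bx:bx + bsize], block)
    for _ in range(max_steps):
        improved = False
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = mv[0] + dx, mv[1] + dy
            x0, y0 = bx + nx, by + ny
            if (0 <= x0 and 0 <= y0 and
                    x0 + bsize <= ref.shape[1] and y0 + bsize <= ref.shape[0]):
                cost = sad(ref[y0:y0 + bsize, x0:x0 + bsize], block)
                if cost < best:
                    best, cand, improved = cost, (nx, ny), True
        if not improved:
            break
        mv = cand
    return mv, best
```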

Keywords: motion estimation, wide diamond, search pattern, H.265, test zone search, HM software

Procedia PDF Downloads 565
3871 Using Derivative Free Method to Improve the Error Estimation of Numerical Quadrature

Authors: Chin-Yun Chen

Abstract:

Numerical integration is an essential tool for deriving different physical quantities in engineering and science. The effectiveness of a numerical integrator depends on different factors, of which the crucial one is the error estimation. This work presents an error estimator that incorporates a derivative-free method to improve the performance of verified numerical quadrature.
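
The paper's combined estimator is not reproduced here, but the classical derivative-free baseline it builds on can be sketched: compare a composite Simpson result with its half-step refinement and use the Richardson-style difference as the error estimate, avoiding any derivative evaluations:

```python
def simpson(f, a, b, n):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def simpson_with_error(f, a, b, n):
    """Return the refined integral and the derivative-free error estimate
    |S_2n - S_n| / 15 (exact for the h^4 error term of Simpson's rule)."""
    coarse = simpson(f, a, b, n)
    fine = simpson(f, a, b, 2 * n)
    return fine, abs(fine - coarse) / 15
```

For a quartic integrand the h^4 error term is the only one present, so the estimate matches the true error exactly; for a cubic, Simpson is exact and the estimate vanishes.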

Keywords: numerical quadrature, error estimation, derivative free method, interval computation

Procedia PDF Downloads 431
3870 Estimation of Slab Depth, Column Size and Rebar Location of Concrete Specimen Using Impact Echo Method

Authors: Y. T. Lee, J. H. Na, S. H. Kim, S. U. Hong

Abstract:

In this study, an experimental investigation into the estimation of slab depth, column size, and rebar location in concrete specimens is conducted using the Impact Echo Method (IE), a stress-wave-based non-destructive test method. The slab specimen for depth estimation had a plan size of 1800×300 mm and six different depths: 150 mm, 180 mm, 210 mm, 240 mm, 270 mm, and 300 mm. The concrete column specimens were manufactured in three sizes: 300×300×300 mm, 400×400×400 mm, and 500×500×500 mm. For the rebar-location specimen, ∅22 mm rebar was used in a specimen of 300×370×200 mm and placed at 130 mm and 150 mm from the top surface to the top of the rebar. The resulting error rate for slab depth was an overall mean of 3.1%, and for column size an overall mean of 1.7%. The mean error rate for rebar location was 1.72% for the top, 1.19% for the bottom, and 1.5% overall, showing relative accuracy.
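
The IE depth estimate itself comes from a one-line relation between the P-wave speed and the dominant echo frequency. The sketch below uses a nominal wave speed and the usual plate correction factor; the values are placeholders, not the study's measurements:

```python
def impact_echo_depth(cp, peak_freq, beta=0.96):
    """Impact-echo thickness estimate T = beta * Cp / (2 * f), where Cp is
    the P-wave speed (m/s), f the dominant echo frequency (Hz), and beta a
    shape correction factor (about 0.96 for plate-like members)."""
    return beta * cp / (2.0 * peak_freq)
```

For example, with Cp = 4000 m/s and a 6.4 kHz echo peak, the estimated thickness is 300 mm, matching the deepest slab in the test series.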

Keywords: impact echo method, estimation, slab depth, column size, rebar location, concrete

Procedia PDF Downloads 317
3869 Application of an Analytical Model to Obtain Daily Flow Duration Curves for Different Hydrological Regimes in Switzerland

Authors: Ana Clara Santos, Maria Manuela Portela, Bettina Schaefli

Abstract:

This work assesses the performance of an analytical model framework for generating daily flow duration curves (FDCs) based on the climatic characteristics of catchments and on their streamflow recession coefficients. In the analytical framework, precipitation is considered a stochastic process, modeled as a marked Poisson process, and recession is considered deterministic, with parameters that can be computed from different models. The framework was tested on three case studies with different hydrological regimes located in Switzerland: pluvial, snow-dominated, and glacial. For this purpose, five time intervals were analyzed (the four meteorological seasons and the civil year) and two developments of the model were tested, one considering a linear recession model and the other a nonlinear recession model. These developments were combined with recession coefficients obtained from two different approaches: forward and inverse estimation. The performance of the analytical framework with forward parameter estimation is poor in comparison with inverse estimation for both the linear and nonlinear models. For the pluvial catchment, inverse estimation shows exceptionally good results, especially for the nonlinear model, clearly suggesting that the model has the ability to describe FDCs. For the snow-dominated and glacial catchments, the seasonal results are better than the annual ones, suggesting that the model can describe streamflows in those conditions and that future efforts should focus on improving and combining seasonal curves instead of considering single annual ones.
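
One ingredient of the framework, inverse estimation of a linear recession coefficient, can be sketched as a log-linear fit on an observed recession limb. The data here are synthetic; the nonlinear case would instead fit dQ/dt = -k Q^b:

```python
import numpy as np

def linear_recession_coefficient(q, dt=1.0):
    """Inverse estimation of the linear-reservoir recession coefficient k
    from a recession limb: Q(t) = Q0 * exp(-k t), so -k is the slope of
    ln Q versus t (ordinary least squares via polyfit)."""
    q = np.asarray(q, dtype=float)
    t = np.arange(len(q)) * dt
    slope, _ = np.polyfit(t, np.log(q), 1)
    return -slope
```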

Keywords: analytical streamflow distribution, stochastic process, linear and non-linear recession, hydrological modelling, daily discharges

Procedia PDF Downloads 133
3868 Stochastic Variation of the Hubble's Parameter Using Ornstein-Uhlenbeck Process

Authors: Mary Chriselda A

Abstract:

This paper deals with the fact that the Hubble parameter is not constant and tends to vary stochastically with time. This premise is examined by converting the dynamics to a stochastic differential equation using the Ornstein-Uhlenbeck process. The formulated stochastic differential equation is then solved analytically using the Euler and Kolmogorov forward equations, and the probability density function is obtained via the Fourier transformation, thereby showing that the Hubble parameter varies stochastically. This is further corroborated by simulating the observations using Python and R software to validate the postulated premise. We can further conclude that the randomness in the forces driving the white noise can eventually affect the Hubble parameter, leading to scale invariance and thereby causing stochastic fluctuations in the density and the rate of expansion of the Universe.
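
The simulation step can be sketched with a plain Euler-Maruyama scheme for the OU equation dH = θ(μ - H)dt + σ dW. The parameter values in the check below are illustrative placeholders, not fitted cosmological values; the stationary distribution has mean μ and variance σ²/(2θ):

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, dt, n, seed=0):
    """Euler-Maruyama simulation of the Ornstein-Uhlenbeck SDE
    dX = theta * (mu - X) dt + sigma dW, returning n+1 samples."""
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1)
    x[0] = x0
    dw = rng.normal(0.0, np.sqrt(dt), n)           # Brownian increments
    for i in range(n):
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * dw[i]
    return x
```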

Keywords: Chapman-Kolmogorov forward differential equations, Fourier transformation, Hubble parameter, Ornstein-Uhlenbeck process, stochastic differential equations

Procedia PDF Downloads 174
3867 Defects Estimation of Embedded Systems Components by a Bond Graph Approach

Authors: I. Gahlouz, A. Chellil

Abstract:

The paper concerns the estimation of system component faults using an unknown-input observer. To reach this goal, we used the Bond Graph approach to physical modelling. We showed that this graphical tool allows system component faults to be represented as unknown inputs within the state representation of the considered physical system. The causal and structural features of the system (controllability, observability, finite structure, and infinite structure) were then studied with the Bond Graph approach in order to design an unknown-input observer, which is used for system component fault estimation.

Keywords: estimation, bond graph, controllability, observability

Procedia PDF Downloads 388
3866 Estimating Lost Digital Video Frames Using Unidirectional and Bidirectional Estimation Based on Autoregressive Time Model

Authors: Navid Daryasafar, Nima Farshidfar

Abstract:

In this article, we attempt to conceal errors in video, with an emphasis on the temporal use of autoregressive (AR) models. To pose the problem, we assume that all information in one or more video frames is lost. Lost frames are then estimated using the temporal information of corresponding pixels in successive frames. After presenting autoregressive models and how they are applied to estimate lost frames, two general methods of using these models are presented. The first, the standard autoregressive approach, estimates the lost frame unidirectionally: information from previous frames is used to estimate the lost frame. In the second method, information from both the previous and the following frames is used, so this method is known as bidirectional estimation. Finally, through a series of tests, the performance of each method is assessed in different modes and the results are compared.
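A minimal sketch of the unidirectional method, assuming synthetic per-pixel AR(2) dynamics: coefficients are fitted by least squares on the surviving frames and extrapolated one step forward to the lost frame. All data and parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic video: each pixel follows an AR(2) process over time (hypothetical data)
n_frames, h, w, p = 30, 8, 8, 2
a_true = np.array([1.2, -0.5])                    # stable AR(2) coefficients
frames = rng.normal(size=(n_frames, h, w))
for t in range(p, n_frames):
    frames[t] = a_true[0]*frames[t-1] + a_true[1]*frames[t-2] + 0.1*rng.normal(size=(h, w))

lost = n_frames - 1                               # assume the last frame is lost

# Unidirectional estimation: fit AR(p) per pixel on surviving frames, extrapolate forward
past = frames[:lost].reshape(lost, -1)            # (time, pixels)
X = np.stack([past[p - 1 - k:lost - 1 - k] for k in range(p)], axis=-1)  # lagged design
y = past[p:]
est = np.empty(past.shape[1])
for j in range(past.shape[1]):
    coef, *_ = np.linalg.lstsq(X[:, j, :], y[:, j], rcond=None)
    est[j] = coef @ past[-1:-p-1:-1, j]           # one-step-ahead prediction

mse = np.mean((est - frames[lost].ravel())**2)    # reconstruction error vs. true frame
```

The bidirectional variant would fit a second AR model running backwards from the following frames and blend the two predictions.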

Keywords: error concealment, unidirectional estimation, bidirectional estimation, AR linear estimation

Procedia PDF Downloads 506
3865 Poster: Incident Signals Estimation Based on a Modified MCA Learning Algorithm

Authors: Rashid Ahmed, John N. Avaritsiotis

Abstract:

Many signal subspace-based approaches have already been proposed for determining the fixed Direction of Arrival (DOA) of plane waves impinging on an array of sensors. Two procedures for DOA estimation based on neural networks are presented. First, Principal Component Analysis (PCA) is employed to extract the maximum eigenvalue and its eigenvector from the signal subspace to estimate the DOA. Second, Minor Component Analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix. In this paper, we modify an MCA(R) learning algorithm to enhance convergence, which is essential for moving MCA algorithms towards practical applications. The learning rate parameter is also discussed: it ensures fast convergence of the algorithm because it directly affects the convergence of the weight vector and the resulting error level. MCA is then performed to determine the estimated DOA. Preliminary results are furnished to illustrate the convergence achieved.
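The PCA/MCA contrast described above amounts to taking opposite ends of the eigendecomposition of the sample covariance matrix. A batch (non-adaptive) sketch with hypothetical array snapshots, standing in for the iterative learning algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical sensor-array snapshots: one strong signal direction plus white noise
d = 5                                     # number of sensors
signal_dir = np.ones(d) / np.sqrt(d)      # assumed (normalized) signal direction
snapshots = (rng.normal(size=(1000, 1)) * signal_dir      # signal component
             + 0.1 * rng.normal(size=(1000, d)))          # sensor noise

R = np.cov(snapshots, rowvar=False)       # sample covariance matrix

# PCA: eigenvector of the LARGEST eigenvalue spans the signal subspace.
# MCA: eigenvector of the SMALLEST eigenvalue lies in the noise subspace.
eigvals, eigvecs = np.linalg.eigh(R)      # eigh returns eigenvalues in ascending order
minor_component = eigvecs[:, 0]           # MCA direction
principal_component = eigvecs[:, -1]      # PCA direction

# The minor component should be (nearly) orthogonal to the signal direction
overlap = abs(minor_component @ signal_dir)
```

An MCA learning rule would converge to the same minor eigenvector iteratively, with the learning rate governing how fast the weight vector settles.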

Keywords: Direction of Arrival, neural networks, Principal Component Analysis, Minor Component Analysis

Procedia PDF Downloads 421
3864 An Association Model to Correlate the Experimentally Determined Mixture Solubilities of Methyl 10-Undecenoate with Methyl Ricinoleate in Supercritical Carbon Dioxide

Authors: V. Mani Rathnam, Giridhar Madras

Abstract:

Fossil fuels are depleting rapidly as the demand for energy and its allied chemicals continuously increases in the modern world. Therefore, sustainable renewable energy sources based on non-edible oils are being explored as a viable option, as they do not compete with food commodities. Oils such as castor oil are rich in fatty acids and can thus be used for the synthesis of biodiesel, bio-lubricants, and many other fine industrial chemicals. Several processes are available for synthesizing different chemicals from castor oil. One such process is the transesterification of castor oil, which results in a mixture of fatty acid methyl esters; the main products are methyl ricinoleate and methyl 10-undecenoate. To separate these compounds, supercritical carbon dioxide (SCCO₂) was used as a green solvent, chosen for its easy availability, non-toxicity, non-flammability, and low cost. In order to design any separation process, the preliminary requirement is solubility or phase equilibrium data. Therefore, the solubility of a mixture of methyl ricinoleate with methyl 10-undecenoate in SCCO₂ was determined in the present study over the range T = 313 K to 333 K and P = 10 MPa to 18 MPa. Within these operating conditions, the solubility (mol·mol⁻¹) of methyl 10-undecenoate varied from 2.44 x 10⁻³ to 8.42 x 10⁻³, whereas that of methyl ricinoleate varied from 0.203 x 10⁻³ to 6.28 x 10⁻³. These solubilities followed retrograde behavior (characterized by a decrease in solubility with increasing temperature) throughout the investigated conditions. An association theory model, coupled with regular solution theory for the activity coefficients, was developed in the present study.
The deviation of this model from the experimental data is quantified by the average absolute relative deviation (AARD). For the mixture of methyl ricinoleate and methyl 10-undecenoate, the AARD is 4.69% for methyl 10-undecenoate and 8.08% for methyl ricinoleate. The maximum solubility enhancement, 32%, was observed for methyl ricinoleate in the mixture, and the highest selectivity of SCCO₂, 12, was observed for methyl 10-undecenoate.
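The AARD metric quoted above is straightforward to compute; the sketch below uses invented solubility values, not the study's measured data.

```python
import numpy as np

def aard_percent(y_exp, y_model):
    """Average absolute relative deviation between experiment and model, in percent."""
    y_exp, y_model = np.asarray(y_exp, float), np.asarray(y_model, float)
    return 100.0 * np.mean(np.abs(y_exp - y_model) / y_exp)

# Hypothetical solubility data (mol/mol), NOT the measured values from this study
y_exp   = [2.44e-3, 4.10e-3, 6.05e-3, 8.42e-3]
y_model = [2.31e-3, 4.30e-3, 5.80e-3, 8.90e-3]

aard = aard_percent(y_exp, y_model)
```

For these invented points the AARD comes out near 5%, in the same ballpark as the values reported for the two esters.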

Keywords: association theory, liquid mixtures, solubilities, supercritical carbon dioxide

Procedia PDF Downloads 106
3863 An Improved Parameter Identification Method for Three Phase Induction Motor

Authors: Liang Zhao, Chong-quan Zhong

Abstract:

In order to improve the control performance of vector inverters, an improved parameter identification solution for induction motors is proposed in this paper. DC or AC voltage is applied to the induction motor via SVPWM through the inverter. The stator resistance, stator leakage inductance, rotor resistance, rotor leakage inductance, and mutual inductance are then obtained from the signal response. The discrete Fourier transform (DFT) is used to deal with noise and harmonics. The impact on parameter identification of delays in the inverter switching tubes, tube voltage drops, and dead-time is avoided by effective compensation measures. Finally, a parameter identification experiment is conducted on a vector inverter that uses a TMS320F2808 DSP as the core processor, and the results verify the strategy.
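The role of the DFT in rejecting noise and harmonics can be sketched as follows; the signal model and parameter values are illustrative assumptions, not the authors' test conditions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stator-current response: fundamental plus a harmonic and noise
fs, f0, n = 10_000, 50.0, 2000            # sample rate (Hz), fundamental (Hz), samples
t = np.arange(n) / fs
i_s = (3.0 * np.sin(2*np.pi*f0*t + 0.4)   # fundamental component (amplitude 3, phase 0.4)
       + 0.3 * np.sin(2*np.pi*5*f0*t)     # 5th harmonic, e.g. a switching artifact
       + 0.1 * rng.normal(size=n))        # measurement noise

# DFT: the bin at the fundamental isolates it from noise and harmonics,
# giving the clean amplitude and phase needed for impedance/parameter extraction
spectrum = np.fft.rfft(i_s)
freqs = np.fft.rfftfreq(n, 1/fs)
k = np.argmin(np.abs(freqs - f0))         # bin nearest the fundamental (here exact)
amplitude = 2 * np.abs(spectrum[k]) / n
phase = np.angle(spectrum[k])             # equals 0.4 - pi/2 for a sine reference
```

With `n` chosen so the fundamental falls exactly on a DFT bin, there is no spectral leakage and the recovered amplitude and phase are essentially exact.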

Keywords: vector inverter, parameter identification, SVPWM, DFT, dead-time compensation

Procedia PDF Downloads 431
3862 Localization of Near Field Radio Controlled Unintended Emitting Sources

Authors: Nurbanu Guzey, S. Jagannathan

Abstract:

Locating radio-controlled (RC) devices using their unintended emissions is of great interest given security concerns. The weak nature of these emissions requires a near-field localization approach, since such signals are hard to detect in the far-field region of the array. Unlike far-field models, near-field localization requires estimating the range of the source in addition to the angle, which makes the method more complicated. The challenges of locating such devices in a near-field region and in a real-time environment are analyzed in this paper. An ESPRIT-like near-field localization scheme is utilized for both angle and range estimation, using a 1-D search with symmetric subarrays. Two 7-element uniform linear antenna arrays (ULAs) are employed for locating the RC source. Experimental results of location estimation for one unintentionally emitting walkie-talkie at different positions are given.

Keywords: localization, angle of arrival (AoA), range estimation, array signal processing, ESPRIT, Uniform Linear Array (ULA)

Procedia PDF Downloads 493
3861 An Enhanced Floor Estimation Algorithm for Indoor Wireless Localization Systems Using Confidence Interval Approach

Authors: Kriangkrai Maneerat, Chutima Prommak

Abstract:

Indoor wireless localization systems have played an important role in enhancing context-aware services. Determining the position of mobile objects in complex indoor environments, such as multi-floor buildings, is a very challenging problem. This paper presents an effective floor estimation algorithm that can accurately determine the floor on which a mobile object is located. The proposed algorithm is based on the confidence interval of the summation of online Received Signal Strength (RSS) values obtained from IEEE 802.15.4 Wireless Sensor Networks (WSNs). We compare the performance of the proposed algorithm with other floor estimation algorithms in the literature through a real WSN deployment in our facility. The experimental results and analysis show that the proposed algorithm outperformed the others, providing floor accuracy of up to 100% at a 95-percent confidence interval.
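One plausible reading of the confidence-interval idea is sketched below: for each floor, offline RSS-sum samples define a 95% interval, and an online reading is assigned to the floor whose interval contains (or lies nearest to) it. This is an assumption about the mechanics, not the authors' exact algorithm, and all RSS values are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical offline RSS-sum samples (dBm) per floor, as from a WSN site survey
offline = {1: rng.normal(-320, 4, 100),
           2: rng.normal(-290, 4, 100),
           3: rng.normal(-260, 4, 100)}

Z95 = 1.96                                # z-value for a 95% confidence interval

def floor_estimate(online_rss_sum):
    """Pick the floor whose 95% interval of summed RSS best matches the reading."""
    best, best_dist = None, float("inf")
    for floor, samples in offline.items():
        m, s = samples.mean(), samples.std(ddof=1)
        lo, hi = m - Z95 * s, m + Z95 * s
        # distance is 0 if the reading falls inside the interval,
        # otherwise the distance to the interval's nearest edge
        dist = max(lo - online_rss_sum, online_rss_sum - hi, 0.0)
        if dist < best_dist:
            best, best_dist = floor, dist
    return best
```

With floor means well separated relative to the interval widths, a single online RSS sum lands unambiguously in one floor's interval.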

Keywords: floor estimation algorithm, floor determination, multi-floor building, indoor wireless systems

Procedia PDF Downloads 396
3860 Effect of Velocity Slip on Two Phase Flow in an Eccentric Annular Region

Authors: Umadevi B., Dinesh P. A., Indira. R., Vinay C. V.

Abstract:

A mathematical model is developed to study the simultaneous effects of particle drag and the slip parameter on the velocity and rate of flow in the annular cross-sectional region bounded by two eccentric cylinders. In physiological flows, this phenomenon can be observed in an eccentrically catheterized artery whose inner cylinder wall is impermeable and whose outer cylinder wall is permeable. Blood is a heterogeneous fluid: a liquid phase consisting of plasma in which a solid phase of cells and proteins is suspended. The arterial wall gets damaged with aging, lipid molecules are deposited between the damaged tissue cells, and blood flow increases towards the damaged tissues in the artery. In this investigation, blood is modeled as a two-phase fluid, with one fluid phase and one particulate phase. The velocity of the fluid phase and the rate of flow are obtained by transforming the eccentric annulus to a concentric annulus via a conformal mapping, and the formulated governing equations are solved analytically. Numerical investigations are carried out by varying the eccentricity, slip, and drag parameters. An increase in the slip parameter signifies loss of fluid, so the velocity and rate of flow decrease. As the particulate drag parameter increases, the velocity and rate of flow likewise decrease. Eccentricity facilitates the transport of more fluid, so the velocity and rate of flow increase.

Keywords: catheter, slip parameter, drag parameter, eccentricity

Procedia PDF Downloads 480
3859 Orthogonal Regression for Nonparametric Estimation of Errors-In-Variables Models

Authors: Anastasiia Yu. Timofeeva

Abstract:

Two new algorithms for nonparametric estimation of errors-in-variables models are proposed. The first algorithm is based on a penalized regression spline: the spline is represented as a piecewise-linear function, and orthogonal regression is estimated for each linear portion. This algorithm is iterative. The second algorithm involves locally weighted regression estimation; when the independent variable is measured with error, such estimation is a complex nonlinear optimization problem. The simulation results have shown the advantage of the second algorithm under the assumption that the true smoothing parameter values are known. Nevertheless, using fit indexes for smoothing parameter selection gives similar results, with an oversmoothing effect.
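Orthogonal regression for a single linear portion, as used inside the first algorithm above, can be computed via the SVD of the centered data (total least squares). The sketch below uses synthetic errors-in-variables data with an assumed true line.

```python
import numpy as np

rng = np.random.default_rng(9)

# Errors-in-variables setting: BOTH x and y are observed with noise (hypothetical data)
n = 500
x_true = np.linspace(0, 10, n)
y_true = 2.0 * x_true + 1.0               # assumed true line: slope 2, intercept 1
x = x_true + rng.normal(0, 0.5, n)        # noisy regressor
y = y_true + rng.normal(0, 0.5, n)        # noisy response

# Orthogonal (total least squares) fit: minimize perpendicular distances.
# The fitted direction is the leading right-singular vector of the centered data.
data = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(data, full_matrices=False)
dx, dy = vt[0]                            # direction of largest variance
slope = dy / dx
intercept = y.mean() - slope * x.mean()
```

Unlike ordinary least squares, which attenuates the slope when x is noisy, the orthogonal fit recovers the true slope (up to sampling error) when the two error variances are equal.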

Keywords: grade point average, orthogonal regression, penalized regression spline, locally weighted regression

Procedia PDF Downloads 383