Search results for: parameter calibration
1797 Status Report of the GERDA Phase II Startup
Authors: Valerio D’Andrea
Abstract:
The GERmanium Detector Array (GERDA) experiment, located at the Laboratori Nazionali del Gran Sasso (LNGS) of INFN, searches for the 0νββ decay of 76Ge. Germanium diodes enriched to ∼86% in the double beta emitter 76Ge (enrGe) are exposed, acting as both source and detectors of 0νββ decay. Neutrinoless double beta decay is considered a powerful probe to address still-open issues in the neutrino sector of the (beyond) Standard Model of particle physics. Since 2013, just after the completion of the first part of its experimental program (Phase I), the GERDA setup has been upgraded to perform the next step in its 0νββ searches (Phase II). Phase II aims to reach a sensitivity to the 0νββ decay half-life larger than 10^26 yr in about 3 years of physics data taking, exposing a detector mass of about 35 kg of enrGe with a background index of about 10^−3 cts/(keV·kg·yr). One of the main new implementations is the liquid argon scintillation light read-out, used to veto events that deposit only part of their energy in the Ge and the rest in the surrounding LAr. In this paper, the expected goals of GERDA Phase II, the upgrade work, and a few selected results from the 2015 commissioning and 2016 calibration runs are presented. The main Phase I achievements are also reviewed.
Keywords: gerda, double beta decay, LNGS, germanium
Procedia PDF Downloads 368
1796 Use of Fabric Phase Sorptive Extraction with Gas Chromatography-Mass Spectrometry for the Determination of Organochlorine Pesticides in Various Aqueous and Juice Samples
Authors: Ramandeep Kaur, Ashok Kumar Malik
Abstract:
Fabric phase sorptive extraction (FPSE) combined with gas chromatography-mass spectrometry (GC-MS) has been developed for the determination of nineteen organochlorine pesticides (OCPs) in various aqueous samples. The method combines sol-gel derived microextraction sorbents with the rich surface chemistry of a cellulose fabric substrate, allowing analytes to be extracted directly from complex sample matrices and greatly simplifying operation by reducing pretreatment time. Vital parameters such as the kind and volume of extraction solvent and the extraction time were examined and optimized. Calibration curves were obtained in the concentration range 0.5–500 ng/mL. Under the optimum conditions, the limits of detection (LODs) ranged from 0.033 ng/mL to 0.136 ng/mL. The relative standard deviations (RSDs) for extraction of 10 ng/mL of OCPs were less than 10%. The developed method has been applied to the quantification of these compounds in aqueous and fruit juice samples. The results proved the present method to be rapid and feasible for the determination of organochlorine pesticides in aqueous samples.
Keywords: fabric phase sorptive extraction, gas chromatography-mass spectrometry, organochlorine pesticides, sample pretreatment
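As a concrete illustration of the calibration step reported in this abstract, the following minimal sketch (synthetic data; response factors and values are invented, not the paper's) fits a linear calibration line over the stated 0.5–500 ng/mL range and back-calculates an unknown concentration:

```python
# Minimal sketch: linear calibration over the stated 0.5-500 ng/mL range,
# with synthetic data standing in for the GC-MS peak areas.
import numpy as np

conc = np.array([0.5, 1, 5, 10, 50, 100, 250, 500])  # ng/mL standards
area = 120.0 * conc + np.random.default_rng(1).normal(0, 30, conc.size)  # fake response

slope, intercept = np.polyfit(conc, area, 1)          # least-squares line
r = np.corrcoef(conc, area)[0, 1]                     # correlation coefficient

# Back-calculate an unknown sample's concentration from its peak area
unknown_area = 1250.0
unknown_conc = (unknown_area - intercept) / slope
print(f"r = {r:.4f}, unknown = {unknown_conc:.2f} ng/mL")
```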
Procedia PDF Downloads 484
1795 Analysis of Direct Current Motor in LabVIEW
Authors: E. Ramprasath, P. Manojkumar, P. Veena
Abstract:
DC motors, long known as the workhorse of industrial systems, were widely used in past centuries until the invention of the AC induction motor brought a revolution in industry. Since then, the use of DC machines has declined; despite their reliability and robustness, they lost favor because of their complexity and losses. A new methodology is proposed to construct a DC motor model through simulation in LabVIEW, to obtain an idea of its real-time performance and of whether a change in a parameter might bring larger improvements in losses and reliability.
Keywords: analysis, characteristics, direct current motor, LabVIEW software, simulation
Procedia PDF Downloads 552
1794 Structural Behavior of Subsoil Depending on Constitutive Model in Calculation Model of Pavement Structure-Subsoil System
Authors: M. Kadela
Abstract:
The load caused by traffic movement should be transferred harmlessly through the road construction as follows: − onto the stiff upper layers of the structure (e.g., the asphalt abrading and binding layers), − through the layers of the principal and secondary substructure, and − onto the subsoil, directly or through an improved subsoil layer. A reliable description of the interaction proceeding in the “road construction – subsoil” system should therefore be one of the basic requirements when assessing the magnitude of internal forces in the structure and its durability. Analyses of road constructions are based on: − elements of mechanics, which allow computational models to be created, and − results of experiments included in the criteria of fatigue life analyses. This approach is a fundamental feature of the commonly used mechanistic methods, which allow arbitrarily complex numerical computational models to be used in evaluations of the fatigue life of structures. Considering the work of the “road construction – subsoil” system, it is commonly accepted that, as a result of repetitive loads on the subsoil under the pavement, relatively small deformations grow in the initial phase; this increase then disappears, and the deformation becomes completely reversible. The reliability of a calculation model depends on the appropriate use (for a given type of analysis) of constitutive relationships. Phenomena occurring in the initial stage of the “road construction – subsoil” system are unfortunately difficult to interpret in the modeling process. The classic interpretation of material behavior in the elastic-plastic (e-p) model is that the elastic phase of the work (e) passes into the (e-p) phase with increasing load (or with growth of deformation in the damaging structure). The paper presents the essence of the calibration process of the cooperating subsystem in the calculation model of the “road construction – subsoil” system, created for mechanistic analysis. The calibration process was directed at showing the impact of the applied constitutive models on its deformation and stress response. The proper comparative base for assessing the reliability of the created models should, however, be the actual, monitored “road construction – subsoil” system. The paper therefore also presents the behavior of subsoil under cyclic load transmitted by pavement layers. The response of the subsoil to cyclic load is recorded in situ by an observation system (sensors) installed on a testing ground prepared for this purpose, part of the test road near Katowice, Poland. A different behavior of the homogeneous subsoil under the pavement is observed in different seasons of the year: the pavement construction works as a flexible structure in summer and as a rigid plate in winter. Albeit the observed character of the subsoil response is the same regardless of the applied load and area values, this response can be divided into: − a zone of indirect action of the applied load, extending to a depth of 1.0 m under the pavement, and − a zone of small strain, extending to about 2.0 m. This work was supported by the on-going research project “Stabilization of weak soil by application of layer of foamed concrete used in contact with subsoil” (LIDER/022/537/L-4/NCBR/2013) financed by The National Centre for Research and Development within the LIDER Programme.
Keywords: road structure, constitutive model, calculation model, pavement, soil, FEA, response of soil, monitored system
Procedia PDF Downloads 357
1793 Waist Circumference-Related Performance of Tense Indices during Varying Pediatric Obesity States and Metabolic Syndrome
Authors: Mustafa Metin Donma
Abstract:
Obesity increases the risk of elevated blood pressure, which is a metabolic syndrome (MetS) component. Waist circumference (WC) is accepted as an indispensable parameter for the evaluation of these health problems. The close relationship of height with blood pressure values revealed the necessity of including height in tense indices, and the association of tense indices with WC has become an increasingly important topic. The purpose of this study was to develop a tense index that could contribute to the differential diagnosis of MetS more than previously introduced indices. One hundred and ninety-four children, aged 6–11 years, constituted four groups. The study was performed on normal weight (Group 1), overweight+obese (Group 2), and morbidly obese children without (Group 3) and with (Group 4) MetS findings. Children were assigned to the groups according to the recommendations of the World Health Organization, based on age- and gender-dependent body mass index percentiles. For the MetS group, the well-established MetS components were considered. Anthropometric measurements as well as blood pressure values were taken, and tense indices were computed. The formula for the first tense index was (SP+DP)/2, where SP and DP are the systolic and diastolic pressures. The second index was the Advanced Donma Tense Index (ADTI), with formula [(SP+DP)/2] * Height. Statistical calculations were performed, with p < 0.05 accepted as indicating statistical significance. There were no statistically significant differences between the groups for pulse pressure, systolic-to-diastolic pressure ratio, or the first tense index. Increasing values from Group 1 to Group 4 were observed for mean arterial blood pressure and for ADTI, which was highly correlated with WC in all groups except Group 1. Both the tense index and ADTI exhibited significant correlations with WC in Group 3. However, in Group 4, ADTI, which includes the height parameter in its equation, was unique in establishing a strong correlation with WC. In conclusion, ADTI is suggested as a tense index when investigating children with MetS.
Keywords: blood pressure, child, height, metabolic syndrome, waist circumference
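A minimal sketch of the two index formulas quoted in the abstract; the units (mmHg for pressure, meters for height) and the function names are assumptions for illustration:

```python
# Minimal sketch of the two tense indices as defined in the abstract.
def tense_index(sp: float, dp: float) -> float:
    # (SP + DP) / 2, the plain tense index
    return (sp + dp) / 2

def advanced_donma_tense_index(sp: float, dp: float, height_m: float) -> float:
    # [(SP + DP) / 2] * Height, the ADTI
    return ((sp + dp) / 2) * height_m

print(tense_index(104, 66))                       # 85.0
print(advanced_donma_tense_index(104, 66, 1.38))  # 117.3
```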
Procedia PDF Downloads 59
1792 Smart Side View Mirror Camera for Real Time System
Authors: Nunziata Ivana Guarneri, Arcangelo Bruna, Giuseppe Spampinato, Antonio Buemi
Abstract:
In the last decade, automotive companies have invested heavily in innovation in automatic driver assistance systems. One innovation is the use of a smart camera placed on the car’s side mirror for monitoring the rear and lateral road situation. A common road scenario is overtaking the preceding car; a brief distraction or loss of concentration can lead the driver to begin this maneuver even when another vehicle is already overtaking, leading to serious accidents. A valid support for safe driving is a smart camera system that automatically analyzes the road scenario and consequently warns the driver when another vehicle is overtaking. This paper describes a method for monitoring the side view of a vehicle using camera optical flow motion vectors. The proposed solution detects the presence of incoming vehicles, assesses their distance from the host car, and warns the driver through different levels of alert according to the estimated distance. Due to its low complexity and computational cost, the proposed system ensures real-time performance.
Keywords: camera calibration, ego-motion, Kalman filters, object tracking, real time systems
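The abstract does not give the authors' pipeline, but a generic optical-flow sketch of the kind described (OpenCV's Farneback dense flow, with an illustrative region of interest and alert threshold) looks like this:

```python
# Minimal sketch (not the authors' pipeline): dense optical flow between
# consecutive side-mirror frames; the mean horizontal flow in a region of
# interest is a crude proxy for an approaching vehicle.
import cv2
import numpy as np

cap = cv2.VideoCapture("side_mirror.mp4")  # hypothetical input file
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read side_mirror.mp4")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    roi = flow[100:300, 200:600]           # lane-area ROI, illustrative coords
    mean_dx = float(np.mean(roi[..., 0]))  # positive dx -> object closing in
    if mean_dx > 2.0:                      # threshold tuned per setup
        print("ALERT: possible overtaking vehicle")
    prev_gray = gray
```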
Procedia PDF Downloads 229
1791 Calibrating Risk Factors for Road Safety in Low Income Countries
Authors: Atheer Al-Nuaimi, Harry Evdorides
Abstract:
Every day, many individuals die or are harmed on roads around the globe, which calls for more specific solutions to transport safety issues. The International Road Assessment Programme (iRAP) is one of the models that considers the many variables influencing road-user safety. In iRAP, roads are partitioned into five star ratings, from 1 star (the lowest level) to 5 stars (the highest level). These levels are calculated from risk factors representing the effect of geometric and traffic conditions on road safety. The outputs of the iRAP methodology are countermeasures that can be used to enhance safety levels and reduce fatality numbers. These countermeasures can be applied independently, as a single treatment, or in combination with other countermeasures for a section or an entire road. There is a general understanding that the efficiency of a countermeasure tends to be reduced when it is used in combination with other countermeasures; that is, the crash-reduction estimates of single countermeasures cannot simply be summed. In the iRAP model, fatality estimates are calculated using a specific methodology; however, this methodology suffers from overestimation. Therefore, this study has developed a calibration method to estimate fatality numbers more accurately.
Keywords: crash risk factors, international road assessment program, low-income countries, road safety
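The calibration method itself is not detailed in the abstract; the sketch below shows one simple, assumed form of such a correction, a multiplicative factor that scales model estimates so their total matches observed fatality records:

```python
# Minimal sketch of a multiplicative calibration step (an assumption; the
# paper's actual correction is not detailed in the abstract).
import numpy as np

model_est = np.array([12.0, 7.5, 30.2, 4.1])  # iRAP-style estimates per segment
observed  = np.array([9, 6, 22, 3])           # recorded fatalities per segment

k = observed.sum() / model_est.sum()          # global calibration factor (< 1 here)
calibrated = k * model_est                    # corrects systematic overestimation
print(k, calibrated)
```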
Procedia PDF Downloads 148
1790 Study on Construction of 3D Topography by UAV-Based Images
Authors: Yun-Yao Chi, Chieh-Kai Tsai, Dai-Ling Li
Abstract:
In this paper, a method of fast 3D topography modeling using high-resolution camera images is studied, based on the characteristics of an Unmanned Aerial Vehicle (UAV) system for low-altitude aerial photogrammetry and the need for three-dimensional (3D) urban landscape modeling. Firstly, overlapping image acquisition with an existing high-resolution digital camera is designed by reconstructing and analyzing the auto-flight paths of the UAV, which improves the self-calibration function to achieve high-precision imaging in software and further increases the resolution of the imaging system. Secondly, images from several angles, including vertical and oblique views, obtained by the UAV system are used for detailed measurement of urban land surfaces and texture extraction. Finally, aerial photography and 3D topography construction are carried out both on the campus of Chang-Jung University and in the Guerin district area in Tainan, Taiwan, providing a validation model for the construction of 3D topography from combined UAV-based camera images. The results demonstrate that the UAV system for low-altitude aerial photogrammetry can be used in 3D topography production, and the technical solution in this paper offers a new and fast plan for 3D expression of the city landscape, fine modeling, and visualization.
Keywords: 3D, topography, UAV, images
Procedia PDF Downloads 304
1789 On Deterministic Chaos: Disclosing the Missing Mathematics from the Lorenz-Haken Equations
Authors: Meziane Belkacem
Abstract:
We aim at converting the original 3D Lorenz-Haken equations, which describe laser dynamics in terms of self-pulsing and chaos, into two second-order differential equations, out of which we extract the so far missing mathematics and corroborations with respect to nonlinear interactions. Leaning on basic trigonometry, we derive important outcomes; a fundamental result attributes chaos to forbidden periodic solutions inside a precisely delimited region of the control parameter space that governs the bewildering dynamics.
Keywords: physics, optics, nonlinear dynamics, chaos
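For readers who want to reproduce the self-pulsing and chaotic regimes, here is a minimal sketch integrating the resonant-tuning Lorenz-Haken equations, which Haken showed are isomorphic to the Lorenz system; the parameter values are illustrative, chosen above the second laser threshold:

```python
# Minimal sketch: integrating the resonant Lorenz-Haken laser equations in
# their Lorenz-isomorphic form; parameters are illustrative (bad-cavity,
# above the second threshold r > sigma*(sigma+b+3)/(sigma-b-1) = 21).
import numpy as np
from scipy.integrate import solve_ivp

sigma, r, b = 3.0, 25.0, 1.0   # cavity loss ratio, pump parameter, damping ratio

def lorenz_haken(t, u):
    x, y, z = u                # field, polarization, inversion (rescaled)
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

sol = solve_ivp(lorenz_haken, (0, 100), [1.0, 0.0, 0.0],
                rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])            # final state on the chaotic attractor
```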
Procedia PDF Downloads 158
1788 A Stochastic Volatility Model for Optimal Market-Making
Authors: Zubier Arfan, Paul Johnson
Abstract:
The electronification of financial markets and the rise of algorithmic trading have sparked a lot of interest from the mathematical community, for the market-making problem in particular. The research presented in this short paper solves the classic stochastic control problem to derive a market-maker's strategy. It also shows how to calibrate and simulate the strategy with real limit order book (LOB) data for back-testing. The ambiguity of limit-order priority in back-testing is dealt with by considering optimistic and pessimistic priority scenarios. The basic model, although it outperforms a naive strategy, assumes constant volatility and is therefore not best suited to the LOB data. The Heston model is introduced to describe the price and variance process of the asset. The trader's constant absolute risk aversion utility function is optimised by numerically solving a 3-dimensional Hamilton-Jacobi-Bellman partial differential equation to find the optimal limit order quotes. The results show that the stochastic volatility market-making model is more suitable for a risk-averse trader and is also less sensitive to calibration error than the constant volatility model.
Keywords: market-making, market-microstructure, stochastic volatility, quantitative trading
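A minimal sketch of the Heston price/variance simulation underlying such a back-test, using an Euler scheme with full truncation; all parameter values are illustrative, not the paper's calibrated ones:

```python
# Minimal sketch: Euler (full-truncation) simulation of the Heston
# price/variance process used as a mid-price model; parameters illustrative.
import numpy as np

rng = np.random.default_rng(0)
S0, v0 = 100.0, 0.04
kappa, theta, xi, rho = 1.5, 0.04, 0.5, -0.7  # mean reversion, long-run var, vol-of-vol, correlation
T, n = 1.0, 10_000
dt = T / n

S, v = S0, v0
for _ in range(n):
    z1 = rng.standard_normal()
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()
    v_pos = max(v, 0.0)                        # full truncation keeps variance usable
    S += S * np.sqrt(v_pos * dt) * z1          # zero drift for a mid-price model
    v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
print(S, max(v, 0.0))
```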
Procedia PDF Downloads 152
1787 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam
Authors: Sahand Golmohammadi, Sana Hosseini Shirazi
Abstract:
Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, the various rock engineering classification methods are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and for determining support systems of underground structures in rock, including tunnels. In this method, six main parameters of the rock mass are required: the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw), and stress reduction factor (SRF). In order to achieve a reasonable and optimal design, identifying the parameters governing the stability of such structures is one of the most important goals and most necessary actions in rock engineering; it is therefore necessary to study the relationships between the parameters of a system, how they interact with each other, and ultimately the whole system. In this research, we have attempted to determine the most effective parameters (key parameters) among the six rock mass parameters of the Q-system, using the rock engineering system (RES) method, to improve the relationships between the parameters in the calculation of the Q value. The RES method determines the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, geomechanical data collected from the water conveyor tunnel of Azad Dam were used to build the interaction matrix of the Q-system. For this purpose, instead of the conventional coding methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique that is essentially a statistical analysis of the data, determining the correlation coefficients between parameters, so that the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the resulting interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among its six parameters, SRF and Jr have the maximum and minimum effect on the system, respectively, while RQD and Jw are the most and least affected by the system, respectively. Therefore, by developing this method, a more accurate rock mass classification relation can be obtained by weighting the required parameters in the Q-system.
Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel
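A minimal sketch of the RES bookkeeping described above: in a coded interaction matrix M, entry M[i][j] is the influence of parameter i on parameter j, row sums give each parameter's "cause" and column sums its "effect". The coding values below are invented, not the Azad Dam data:

```python
# Minimal sketch of RES cause/effect analysis on a coded interaction matrix.
import numpy as np

params = ["RQD", "Jn", "Jr", "Ja", "Jw", "SRF"]
M = np.array([[0, 2, 1, 1, 0, 2],     # made-up interaction coding
              [2, 0, 1, 1, 0, 2],
              [1, 1, 0, 2, 1, 3],
              [1, 1, 2, 0, 1, 2],
              [0, 0, 1, 1, 0, 1],
              [3, 2, 1, 1, 1, 0]])

cause, effect = M.sum(axis=1), M.sum(axis=0)          # C_i and E_i
weight = (cause + effect) / (cause + effect).sum()    # normalized parameter weights
for p, c, e, w in zip(params, cause, effect, weight):
    print(f"{p}: C={c} E={e} weight={w:.2f}")
```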
Procedia PDF Downloads 73
1786 Enhancing a Recidivism Prediction Tool with Machine Learning: Effectiveness and Algorithmic Fairness
Authors: Marzieh Karimihaghighi, Carlos Castillo
Abstract:
This work studies how machine learning (ML) may be used to increase the effectiveness of a criminal recidivism risk assessment tool, RisCanvi. The two key dimensions of this analysis are predictive accuracy and algorithmic fairness. The ML-based prediction models obtained in this study are more accurate at predicting criminal recidivism than the manually-created formula used in RisCanvi, achieving AUCs of 0.76 and 0.73 in predicting violent and general recidivism, respectively. However, the improvements are small, and algorithmic discrimination can easily be introduced between groups such as nationals vs. foreigners, or young vs. old. We describe how effectiveness and algorithmic fairness objectives can be balanced by applying a method in which a single error disparity, in terms of generalized false positive rate, is minimized while calibration is maintained across groups. The results show that this bias mitigation procedure can substantially reduce generalized false positive rate disparities across multiple groups. Based on these results, it is proposed that ML-based criminal recidivism risk prediction should not be introduced without applying algorithmic bias mitigation procedures.
Keywords: algorithmic fairness, criminal risk assessment, equalized odds, recidivism
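A minimal sketch (with synthetic data) of the disparity measure named in the abstract: the generalized false positive rate of a group is the mean predicted risk among its true negatives, and the disparity is the gap between groups:

```python
# Minimal sketch: generalized false positive rate (GFPR) disparity on
# synthetic scores; group labels and values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)                                   # true outcome
score = np.clip(0.4 * y + rng.normal(0.3, 0.15, 1000), 0, 1)   # model risk score
group = rng.choice(["national", "foreigner"], 1000)

def gfpr(scores, labels):
    return scores[labels == 0].mean()   # E[score | y = 0]

g1 = gfpr(score[group == "national"], y[group == "national"])
g2 = gfpr(score[group == "foreigner"], y[group == "foreigner"])
print(f"GFPR disparity = {abs(g1 - g2):.4f}")
```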
Procedia PDF Downloads 152
1785 Modular Probe for Basic Monitoring of Water and Air Quality
Authors: Andrés Calvillo Téllez, Marianne Martínez Zanzarric, José Cruz Núñez Pérez
Abstract:
A modular system that performs basic monitoring of both water and air quality is presented. Monitoring is essential in the environmental, aquaculture, and agricultural disciplines, where this type of instrumentation is necessary for data collection. The system uses low-cost components, which allows readings close to those of high-cost probes. The probe records readings such as the coordinates of the geographical position, as well as the time at which each target parameter is measured. The modules or subsystems that make up the probe are: a global positioning (GPS) module, which provides the altitude, latitude, and longitude of the point where the reading is recorded; a real-time clock stage, which stamps each reading with its date and time; an SD memory module, which continuously stores the data; a data acquisition system; a central processing unit; and a power supply. For water quality, the system acquires conductivity, pressure, and temperature; for air, three gases were sensed: ammonia, carbon dioxide, and carbon monoxide. The information obtained allowed us to identify when the parameters changed and to identify the ideal conditions for the growth of microorganisms in the water.
Keywords: calibration, conductivity, datalogger, monitoring, real time clock, water quality
Procedia PDF Downloads 104
1784 Foamability and Foam Stability of Gelatine-Sodium Dodecyl Sulfate Solutions
Authors: Virginia Martin Torrejon, Song Hang
Abstract:
Gelatine foams are widely explored materials due to their biodegradability, biocompatibility, and availability. They exhibit outstanding properties and are currently the subject of increasing scientific research due to their potential use in different applications, such as biocompatible cellular materials for biomedical products, or biofoams as an alternative to fossil-fuel-derived packaging. Gelatine is a highly surface-active polymer, and its concentrated solutions usually do not require surfactants to achieve low surface tension. Still, anionic surfactants like sodium dodecyl sulfate (SDS) strongly interact with gelatine, impacting its viscosity and rheological properties and, in turn, its foaming behaviour. Foaming behaviour is a key parameter for cellular solids produced by mechanical foaming, as it has a significant effect on the processing and properties of cellular materials. Foamability mainly impacts the density and mechanical properties of the foams, while foam stability is crucial to achieving foams with low shrinkage and desirable pore morphology. This work investigated the influence of SDS on the foaming behaviour of concentrated gelatine foams using a dynamic foam analyser. The study of maximum foam height, foam formation behaviour, drainage behaviour, and foam structure with regard to bubble size and distribution was carried out in 10 wt% gelatine solutions prepared at different SDS/gelatine concentration ratios. Comparative rheological and viscometry measurements correlated well with the data from the dynamic foam analyser. SDS incorporation at optimum dosages, together with gelatine gelation, led to highly stable foams at high expansion ratios. The increase in hydrogel solution viscosity with increasing SDS content was a key parameter for foam stabilization. In addition, the impact of SDS content on gelling time and gel strength also considerably affected the foams' stability and pore structure.
Keywords: dynamic foam analyser, gelatine foams stability and foamability, gelatine-surfactant foams, gelatine-SDS rheology, gelatine-SDS viscosity
Procedia PDF Downloads 154
1783 Building a Parametric Link between Mapping and Planning: A Sunlight-Adaptive Urban Green System Plan Formation Process
Authors: Chenhao Zhu
Abstract:
Quantitative mapping plays a growing role in guiding urban planning, for example using a heat map created by CFX, CFD2000, or Envi-met to adjust a master plan. However, there is no effective quantitative link between such mappings and plan formation, so in many cases decision-making is still based on the planner's subjective interpretation and understanding of these mappings, which limits the gains in rigor and accuracy that quantitative mapping could bring. Therefore, this paper sets out a methodology for building a parametric link between mapping and plan formation. A parametric planning process based on radiant mapping is proposed for creating an urban green system. In the first step, a script written in Grasshopper builds a road network and forms the blocks, while the Ladybug plug-in conducts a radiant analysis in the form of a mapping. Then, the research transforms the radiant mapping from a polygon into a data point matrix, because polygons are hard to engage directly in design formation. Next, another script selects the main green spaces from the road network based on a radiant intensity criterion, and connects the green spaces' central points to generate a green corridor. After that, a control parameter is introduced to adjust the corridor's form based on the radiant intensity. Finally, a green system containing green space and green corridor is generated under the quantitative control of the data matrix. The designer only needs to modify the control parameter according to the relevant research results and actual conditions to optimize the green system. This method can also be applied to many other mapping-based analyses, such as wind environment analysis, thermal environment analysis, and even environmental sensitivity analysis. A parameterized link between mapping and planning will bring about more accurate, objective, and scientific planning.
Keywords: parametric link, mapping, urban green system, radiant intensity, planning strategy, grasshopper
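A minimal sketch of the mapping-to-matrix step, as a hypothetical re-implementation outside Grasshopper: threshold a radiant-intensity grid to select candidate green-space cells and compute a centroid for corridor linking:

```python
# Minimal sketch (hypothetical re-implementation, not the authors' script):
# pick the least-irradiated cells of a radiant-intensity grid as green-space
# candidates and return an anchor point for the corridor.
import numpy as np

rng = np.random.default_rng(3)
radiant = rng.uniform(0, 1000, size=(20, 20))   # kWh/m^2 grid from the mapping

threshold = np.percentile(radiant, 25)          # keep the least irradiated quarter
candidates = np.argwhere(radiant <= threshold)  # (row, col) indices of green cells

centroid = candidates.mean(axis=0)              # one anchor point for the corridor
print(len(candidates), centroid)
```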
Procedia PDF Downloads 142
1782 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare
Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl
Abstract:
Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
Keywords: average run length (ARL), bernoulli cusum (BC) chart, beta binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval
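A minimal sketch of BBPP limits with an HPD-style construction, assuming a Beta prior and illustrative counts: the posterior predictive for the next batch is beta-binomial, and the control region accumulates the highest-probability counts until the desired coverage is reached:

```python
# Minimal sketch: beta-binomial posterior predictive limits for the next
# batch of m cases; prior and counts are illustrative, not the paper's.
from scipy.stats import betabinom

a0, b0 = 1, 1              # Beta prior on the CI rate
x, n = 12, 400             # past events / past cases
m = 100                    # size of the next monitored batch

a, b = a0 + x, b0 + n - x  # Beta posterior parameters
pmf = [(k, betabinom.pmf(k, m, a, b)) for k in range(m + 1)]

hpd, mass = [], 0.0
for k, p in sorted(pmf, key=lambda kp: kp[1], reverse=True):
    hpd.append(k)
    mass += p
    if mass >= 0.99:       # 99% HPD-style control region
        break
print(f"in-control counts: {min(hpd)}..{max(hpd)}")
```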
Procedia PDF Downloads 203
1781 Fast-Forward Problem in Asymmetric Double-Well Potential
Authors: Iwan Setiawan, Bobby Eka Gunara, Katshuhiro Nakamura
Abstract:
A theory for accelerating systems in quantum dynamics has been constructed to obtain the desired wave function in a shorter time. The theory is developed for adiabatic quantum dynamics, in which regulation is applied to a wave function that satisfies the Schrödinger equation. We show accelerated manipulation of wave functions with the use of a parameter-dependent asymmetric double-well potential, also when it is influenced by electromagnetic fields.
Keywords: driving potential, adiabatic quantum dynamics, regulation, electromagnetic field
Procedia PDF Downloads 342
1780 Box Counting Dimension of the Union L of Trinomial Curves When α ≥ 1
Authors: Kaoutar Lamrini Uahabi, Mohamed Atounti
Abstract:
In the present work, we consider one category of curves denoted by L(p, k, r, n). These curves are continuous arcs which are trajectories of roots of the trinomial equation z^n = αz^k + (1 − α), where z is a complex number, n and k are two integers such that 1 ≤ k ≤ n − 1, and α is a real parameter greater than 1. Denoting by L the union of all trinomial curves L(p, k, r, n) and using the box counting dimension as the fractal dimension, we will prove that the dimension of L is equal to 3/2.
Keywords: feasible angles, fractal dimension, Minkowski sausage, trinomial curves, trinomial equation
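A minimal sketch tracing roots of the trinomial equation for one (n, k) pair as α sweeps through values above 1; the root trajectories are the arcs whose union L the paper studies:

```python
# Minimal sketch: numeric roots of z^n = a*z^k + (1 - a), i.e. of the
# polynomial z^n - a*z^k - (1 - a), for one (n, k) pair and a sweep of a.
import numpy as np

n, k = 5, 2
for a in np.linspace(1.0, 4.0, 7):
    coeffs = np.zeros(n + 1)    # coefficients from z^n down to the constant
    coeffs[0] = 1.0             # z^n
    coeffs[n - k] = -a          # -a * z^k
    coeffs[n] = -(1.0 - a)      # constant term
    roots = np.roots(coeffs)
    print(f"a={a:.2f}:", np.round(roots, 3))
```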
Procedia PDF Downloads 190
1779 Stability Indicating Method Development and Validation for Estimation of Antiasthmatic Drug in Combined Dosages Formed by RP-HPLC
Authors: Laxman H. Surwase, Lalit V. Sonawane, Bhagwat N. Poul
Abstract:
A simple stability-indicating high performance liquid chromatographic method has been developed for the simultaneous determination of Levosalbutamol Sulphate and Ipratropium Bromide in bulk and pharmaceutical dosage form, using a reversed-phase Zorbax Eclipse Plus C8 column (250 mm × 4.6 mm) with a mobile phase of phosphate buffer (0.05 M KH2PO4): acetonitrile (55:45 v/v), pH 3.5 adjusted with ortho-phosphoric acid; the flow rate was 1.0 mL/min and detection was carried out at 212 nm. The retention times of Levosalbutamol Sulphate and Ipratropium Bromide were 2.2007 and 2.6611 min, respectively. The correlation coefficients of Levosalbutamol Sulphate and Ipratropium Bromide were found to be 0.997 and 0.998. Calibration plots were linear over the concentration range 10–100 µg/mL for both Levosalbutamol Sulphate and Ipratropium Bromide. The LOD and LOQ of Levosalbutamol Sulphate were 2.520 µg/mL and 7.638 µg/mL, while for Ipratropium Bromide they were 1.201 µg/mL and 3.640 µg/mL. The accuracy of the proposed method was determined by recovery studies and found to be 100.15% for Levosalbutamol Sulphate and 100.19% for Ipratropium Bromide, respectively. The method was validated for accuracy, linearity, sensitivity, precision, robustness, and system suitability. The proposed method could be utilized for routine analysis of Levosalbutamol Sulphate and Ipratropium Bromide in bulk and pharmaceutical capsule dosage form.
Keywords: levosalbutamol sulphate, ipratropium bromide, RP-HPLC, phosphate buffer, acetonitrile
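The abstract does not state how the LOD and LOQ were computed; a common ICH-style estimate from the calibration line (LOD = 3.3σ/S, LOQ = 10σ/S, with σ the residual standard deviation and S the slope) can be sketched as follows, with synthetic data:

```python
# Minimal sketch of an ICH-style LOD/LOQ estimate from a calibration line;
# the formula choice is an assumption, and the data below are synthetic.
import numpy as np

conc = np.array([10, 20, 40, 60, 80, 100], dtype=float)        # ug/mL standards
resp = np.array([105, 198, 410, 601, 795, 1003], dtype=float)  # peak areas (synthetic)

slope, intercept = np.polyfit(conc, resp, 1)
resid = resp - (slope * conc + intercept)
sigma = np.sqrt(np.sum(resid**2) / (len(conc) - 2))  # residual std deviation

print(f"LOD = {3.3 * sigma / slope:.3f} ug/mL, LOQ = {10 * sigma / slope:.3f} ug/mL")
```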
Procedia PDF Downloads 351
1778 Ill-Posed Inverse Problems in Molecular Imaging
Authors: Ranadhir Roy
Abstract:
Inverse problems arise in medical (molecular) imaging. These problems are characterized by their large scale in three dimensions and by the diffusion equation, which models the physical phenomena within the media. The inverse problems are posed as a nonlinear optimization in which the unknown parameters are found by minimizing the difference between the predicted data and the measured data. To obtain a unique and stable solution to an ill-posed inverse problem, a priori information must be used. Mathematical conditions for obtaining stable solutions are established in Tikhonov's regularization method, where the a priori information is introduced via a stabilizing functional, which may be designed to incorporate relevant information about the inverse problem. Effective determination of the Tikhonov regularization parameter requires knowledge of the true solution, or, in the case of optical imaging, the true image. Yet, in clinically-based imaging, the true image is not known. To alleviate these difficulties, we have applied the penalty/modified barrier function (PMBF) method instead of the Tikhonov regularization technique to make the inverse problems well-posed. Unlike the Tikhonov regularization method, the constrained optimization technique, which is based on simple bounds on the optical parameter properties of the tissue, can easily be implemented in the PMBF method. Imposing the constraints on the optical properties of the tissue explicitly restricts the solution sets and can restore uniqueness. Like the Tikhonov regularization method, the PMBF method limits the size of the condition number of the Hessian matrix of the given objective function. The accuracy and rapid convergence of the PMBF method require a good initial guess of the Lagrange multipliers. To obtain this initial guess, we use a least squares unconstrained minimization problem. Three-dimensional images of fluorescence absorption coefficients and lifetimes were reconstructed from contact and non-contact experimentally measured data.
Keywords: constrained minimization, ill-conditioned inverse problems, Tikhonov regularization method, penalty modified barrier function method
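For contrast with the PMBF approach, here is a minimal sketch of the Tikhonov-regularized least squares baseline on a toy ill-conditioned operator (x_lambda = argmin ||Ax − b||^2 + lambda*||x||^2, solved via the regularized normal equations):

```python
# Minimal sketch: Tikhonov-regularized least squares on a toy
# ill-conditioned operator (a Vandermonde matrix), not the imaging problem.
import numpy as np

rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 40), 12)   # notoriously ill-conditioned
x_true = rng.normal(size=12)
b = A @ x_true + rng.normal(0, 1e-3, 40)   # noisy measurements

lam = 1e-6
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(12), A.T @ b)  # regularized normal equations
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]

print("regularized error:", np.linalg.norm(x_reg - x_true))
print("naive error:      ", np.linalg.norm(x_naive - x_true))
```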
Procedia PDF Downloads 271
1777 A New Social Vulnerability Index for Evaluating Social Vulnerability to Climate Change at the Local Scale
Authors: Cuong V Nguyen, Ralph Horne, John Fien, France Cheong
Abstract:
Social vulnerability to climate change is increasingly being acknowledged, and proposals to measure and manage it are emerging. Building upon this work, this paper proposes an approach to social vulnerability assessment using a new mechanism to aggregate and account for causal relationships among components of a Social Vulnerability Index (SVI). To operationalize this index, the authors propose a means of developing an appropriate primary dataset through the application of a specifically-designed household survey questionnaire. The data collection and analysis, including calibration and calculation of the SVI, are demonstrated through application in a case study city in central coastal Vietnam. The calculation of the SVI at the fine-grained local neighbourhood scale provides high resolution in vulnerability assessment, and also obviates the need for secondary data, which may be unavailable or problematic, particularly at the local scale in developing countries. The SVI household survey is underpinned by the results of a Delphi survey, in-depth interviews, and focus group discussions with local environmental professionals and community members. The research reveals inherent limitations of existing SVIs but also indicates their potential for use in assessing social vulnerability and in making decisions associated with responding to climate change at the local scale.
Keywords: climate change, local scale, social vulnerability, social vulnerability index
Procedia PDF Downloads 436
1776 PLO-AIM: Potential-Based Lane Organization in Autonomous Intersection Management
Authors: Berk Ecer, Ebru Akcapinar Sezer
Abstract:
Traditional intersection management models, such as unsignalized or signalized intersections, are not the most effective way of passing vehicles through intersections if the vehicles are intelligent. To this end, Dresner and Stone proposed a new intersection control model called Autonomous Intersection Management (AIM). In their AIM simulation, they examined the problem from a multi-agent perspective, demonstrating that intelligent intersection control can be made more efficient than existing control mechanisms. In this study, autonomous intersection management is investigated; we extend their work by adding a potential-based lane organization layer. In order to distribute vehicles evenly across lanes, this layer triggers vehicles to analyze nearby lanes and change lanes when another lane offers an advantage. We can observe this behavior in real life, where drivers change lanes based on intuition; the basic intuition for selecting the correct lane is to pick a less crowded one in order to reduce delay. We model that behavior without any change to the AIM workflow. Experimental results show that intersection performance is directly connected to the distribution of vehicles across the lanes of the roads entering the intersection. We see the advantage of handling lane management with a potential-based approach in performance metrics such as average intersection delay and average travel time; lane management and intersection management are therefore problems that need to be handled together. This study shows that the lane through which vehicles enter the intersection is an effective parameter for intersection management, draws attention to this parameter, and suggests a solution for it. We observed that regulating the AIM inputs, namely the vehicles in each lane, is an effective contribution to intersection management. The PLO-AIM model outperforms AIM in evaluation metrics such as average intersection delay and average travel time for reasonable traffic rates, between 600 and 1,300 vehicles/hour per lane. The proposed model reduced average travel time by 0.2%–17.3% and average intersection delay by 1.6%–17.1% in the 4-lane and 6-lane scenarios.
Keywords: AIM project, autonomous intersection management, lane organization, potential-based approach
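A minimal, hypothetical sketch of a potential-based lane-choice rule of the kind described (the potential function, constants, and switching threshold are invented for illustration, not taken from the PLO-AIM implementation):

```python
# Minimal sketch (hypothetical, not the PLO-AIM source): each approaching
# vehicle compares a congestion "potential" of its lane with adjacent lanes
# and switches only for a clear advantage.
def lane_potential(queue_len: int, mean_speed: float) -> float:
    # Crowded, slow lanes get high potential; constants are illustrative.
    return queue_len * 1.0 + (1.0 / max(mean_speed, 0.1)) * 5.0

def choose_lane(current: int, lanes: list[tuple[int, float]], gain: float = 1.0) -> int:
    """lanes[i] = (queue_len, mean_speed); only adjacent moves allowed."""
    pots = [lane_potential(q, v) for q, v in lanes]
    best = min(range(len(pots)), key=pots.__getitem__)
    if abs(best - current) == 1 and pots[current] - pots[best] > gain:
        return best
    return current

print(choose_lane(1, [(2, 12.0), (6, 7.0), (1, 13.0)]))  # -> 2
```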
Procedia PDF Downloads 140
1775 Using the Bootstrap for Problems Statistics
Authors: Brahim Boukabcha, Amar Rebbouh
Abstract:
The bootstrap method, based on the idea of exploiting all the information provided by the initial sample, allows us to study the properties of estimators. In this article we present a theoretical study of the different bootstrap methods, using the resampling technique in statistical inference to calculate the standard error of an estimator and to determine a confidence interval for an estimated parameter. We apply these methods to regression models and the Pareto model, obtaining good approximations.
Keywords: bootstrap, standard error, bias, jackknife, mean, median, variance, confidence interval, regression models
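A minimal sketch of the nonparametric bootstrap described above, estimating the standard error of the sample mean and a 95% percentile confidence interval on heavy-tailed (Pareto-like) data:

```python
# Minimal sketch: nonparametric bootstrap of the sample mean, with its
# standard error and a 95% percentile confidence interval.
import numpy as np

rng = np.random.default_rng(42)
sample = rng.pareto(3.0, size=200) + 1.0   # heavy-tailed data, echoing the Pareto case

B = 5000
boot_means = np.array([rng.choice(sample, sample.size, replace=True).mean()
                       for _ in range(B)])

se = boot_means.std(ddof=1)                # bootstrap standard error
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean={sample.mean():.3f}  SE={se:.3f}  95% CI=({lo:.3f}, {hi:.3f})")
```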
Procedia PDF Downloads 381
1774 Study and Calibration of Autonomous UAV Systems with Thermal Sensing Allowing Screening of Environmental Concerns
Authors: Raahil Sheikh, Abhishek Maurya, Priya Gujjar, Himanshu Dwivedi, Prathamesh Minde
Abstract:
UAVs have been part of our environment since they were first used in Austrian warfare against Venice. At that stage, they were just pilotless balloons equipped with bombs to be dropped on enemy territory. Over time, technological advancements allowed UAVs to be controlled remotely or autonomously. This study focuses on the intensification of pre-existing manual drones: equipping them with a variety of sensors, making them autonomous and capable, and purposing them for a variety of roles, including thermal sensing, data collection, tracking creatures, forest fire and volcano detection, hydrothermal studies, urban heat island measurement, and other environmental research. The system can also be used for reconnaissance, research, 3D mapping, and search and rescue missions. The study concentrates on automating tedious tasks, reducing human errors as much as possible, reducing deployment time, and increasing the overall efficiency, efficacy, and reliability of the UAVs. It also covers the creation of a comprehensive ground control system (GCS) UI, enabling less-trained professionals to use the UAV with maximum potency. With the inclusion of such an autonomous system, artificially intelligent path planning can avoid environmental gusts and other concerns.
Keywords: UAV, drone, autonomous system, thermal imaging
Procedia PDF Downloads 75
1773 Comparison Between a Droplet Digital PCR and Real Time PCR Method in Quantification of HBV DNA
Authors: Surangrat Srisurapanon, Chatchawal Wongjitrat, Navin Horthongkham, Ruengpung Sutthent
Abstract:
HBV infection is a potentially serious public health problem, and the ability to quantify HBV DNA concentration is important and continuously being improved. In quantitative polymerase chain reaction (qPCR), several factors are hard to standardize: the source of material, the calibration standard curve, and PCR efficiency are inconsistent. Digital PCR (dPCR) is an alternative PCR-based technique for absolute quantification using Poisson statistics, without requiring a standard curve. Therefore, the aim of this study was to compare the HBV DNA data sets generated by the dPCR and qPCR methods. All samples were quantified by Abbott's real-time PCR, and 54 samples with 2–6 log10 HBV DNA were selected for comparison with dPCR. Of these 54 samples, two outlier samples were defined as negative by dPCR, whereas 52 samples were positive by both tests. The difference between the two assays was less than 0.25 log IU/mL in 24/52 (46%) of paired samples, less than 0.5 log IU/mL in 46/52 samples (88%), and less than 1 log in 50/52 samples (96%). The correlation coefficient was r = 0.788 with a p-value < 0.0001. Compared to qPCR, data generated by dPCR tended to overestimate in samples with low HBV DNA concentration and underestimate in samples with high viral load. The variation in the dPCR DNA measurement might be due to pre-amplification bias of the template. Moreover, a minor drawback of dPCR is the large quantity of DNA that has to be used compared to qPCR. Since the technology is relatively new, the limitations of this assay will be improved.
Keywords: hepatitis B virus, real time PCR, digital PCR, DNA quantification
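A minimal sketch of the Poisson statistics behind dPCR quantification: from the fraction p of positive partitions, the mean copies per partition is lambda = −ln(1 − p); the partition volume below is an assumed, illustrative value:

```python
# Minimal sketch of Poisson-based dPCR quantification; the partition volume
# is an assumed, droplet-system ballpark value, not from the paper.
import math

positives, partitions = 9_800, 20_000
p = positives / partitions
lam = -math.log(1.0 - p)                 # mean copies per partition

partition_volume_ul = 0.00085            # ~0.85 nL per droplet (assumption)
copies_per_ul = lam / partition_volume_ul
print(f"lambda={lam:.3f}, concentration={copies_per_ul:,.0f} copies/uL")
```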
Procedia PDF Downloads 482
1772 Parametric Studies of Ethylene Dichloride Purification Process
Authors: Sh. Arzani, H. Kazemi Esfeh, Y. Galeh Zadeh, V. Akbari
Abstract:
Ethylene dichloride (EDC) is a colorless liquid with a smell like chloroform. EDC is a simple chlorinated hydrocarbon obtained by chlorinating ethylene gas; its chemical formula is C2H4Cl2, and it is used as the main intermediate in VCM production. The purification of EDC is therefore an important petrochemical process. In this study, an EDC purification unit was simulated and the simulation validated; finally, the impact of process parameters on the degree of EDC purity was studied. The results showed that increasing the feed flow increases the impure components in the reflux, resulting in a decrease in EDC purity.
Keywords: ethylene dichloride, purification, edc, simulation
Procedia PDF Downloads 316
1771 Probabilistic Model for Evaluating Seismic Soil Liquefaction Based on Energy Approach
Authors: Hamid Rostami, Ali Fallah Yeznabad, Mohammad H. Baziar
Abstract:
The energy-based method for evaluating seismic soil liquefaction has two main components. The first is the demand energy, the energy of an earthquake dissipated at a site; the second is the capacity energy, representing the soil's resistance against the liquefaction hazard. In this study, using a statistical analysis of data recorded by 14 down-hole array sites in California, an empirical equation was developed to estimate the demand energy at a site. Because determining the capacity energy at a site requires calculating several site calibration factors obtained from experimental tests, the standard penetration test (SPT) N-value was adopted in this study as a proxy for the capacity energy. Based on this assumption, the empirical equation was employed to calculate the demand energy for 193 liquefied and non-liquefied sites, and these values were plotted against the corresponding SPT numbers for all sites. Subsequently, a discriminant analysis was employed to determine the equations of several boundary curves for various liquefaction likelihoods. Finally, a comparison was made between the probabilistic model and the commonly used stress-based method. In conclusion, the results clearly showed that the energy-based method can be more reliable than the conventional stress-based method in evaluating liquefaction occurrence.
Keywords: energy demand, liquefaction, probabilistic analysis, SPT number
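A minimal sketch (on synthetic data) of a discriminant analysis separating liquefied from non-liquefied cases in the (demand energy, SPT N-value) plane, standing in for the paper's boundary-curve fitting:

```python
# Minimal sketch: discriminant analysis on synthetic (demand energy, SPT N)
# data, as a stand-in for the paper's boundary curves; values are invented.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 193
spt = rng.uniform(2, 40, n)                            # SPT N-values
log_e = rng.uniform(0.5, 3.5, n)                       # log10 demand energy
liquefied = (log_e - 0.06 * spt + rng.normal(0, 0.3, n) > 1.0).astype(int)

X = np.column_stack([log_e, spt])
lda = LinearDiscriminantAnalysis().fit(X, liquefied)

# probability of liquefaction for a new site
print(lda.predict_proba([[2.0, 15.0]])[0, 1])
```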
Procedia PDF Downloads 369
1769 A Novel Stator Resistance Estimation Method and Control Design of Speed-Sensorless Induction Motor Drives
Authors: N. Ben Si Ali, N. Benalia, N. Zarzouri
Abstract:
Speed-sensorless systems have been intensively studied in recent years, mainly due to their economic benefit, the fragility of mechanical sensors, and the difficulty of installing this type of sensor in many applications. These systems suffer from instability problems and sensitivity to parameter mismatch at low-speed operation. In this paper, an analysis of adaptive observer stability with stator resistance estimation is given.
Keywords: motor drive, sensorless control, adaptive observer, stator resistance estimation
Procedia PDF Downloads 375
1768 Controller Design Using GA for SMC Systems
Authors: Susy Thomas, Sajju Thomas, Varghese Vaidyan
Abstract:
This paper considers sliding mode controllers (SMCs) using linear feedback with switched gains and proposes a method that can minimize pole perturbation. The method is able to enhance the robustness of the controller. A neighborhood of the 'nominal' pole positions is pre-assigned, and the system poles are not allowed to stray out of these bounds even when parameter variations/uncertainties act upon the system. A quasi sliding mode motion (SMM) is maintained within the assigned boundaries of the sliding surface.
Keywords: parameter variations, pole perturbation, sliding mode control, switching surface, robust switching vector
Procedia PDF Downloads 365