Search results for: inverse method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19232

18302 Failure Simulation of Small-scale Walls with Chases Using the Lattice Discrete Element Method

Authors: Karina C. Azzolin, Luis E. Kosteski, Alisson S. Milani, Raquel C. Zydeck

Abstract:

This work aims to numerically reproduce tests performed experimentally on reduced-scale walls with horizontal and inclined chases, using the Lattice Discrete Element Method (LDEM) implemented in the Abaqus/Explicit environment. The chases were cut with depths of 20%, 30%, and 50% in walls subjected to centered and eccentric loading. The parameters used to evaluate the numerical model are its strength, its failure mode, and the in-plane and out-of-plane displacements.

Keywords: structural masonry, wall chases, small scale, numerical model, lattice discrete element method

Procedia PDF Downloads 177
18301 BTEX (Benzene, Toluene, Ethylbenzene and Xylene) Degradation by Cold Plasma

Authors: Anelise Leal Vieira Cubas, Marina de Medeiros Machado, Marília de Medeiros Machado

Abstract:

The volatile organic compounds BTEX (benzene, toluene, ethylbenzene, and xylene), petroleum derivatives, are highly toxic and may harm human health, biota, and the environment. This paper therefore proposes a method for treating these compounds using corona discharge plasma technology. The efficiency of the method was tested by analyzing BTEX samples by gas chromatography after they had passed through a plasma reactor. The results show that the optimal residence time of the sample in the reactor was 8 minutes.

Keywords: BTEX, degradation, cold plasma, ecological sciences

Procedia PDF Downloads 317
18300 Determine the Optimal Path of Content Adaptation Services with Max Heap Tree

Authors: Shilan Rahmani Azr, Siavash Emtiyaz

Abstract:

Recent developments in computing and communication technologies make mobile access to information much easier. Users can access information in different places using various devices with a wide range of capabilities, while the format and detail of electronic documents change every day. In such cases, a mismatch arises between the content and the client's capabilities. Service-oriented content adaptation has recently been developed, in which adaptation tasks are delegated to external services. The main problem in this approach is choosing the most appropriate service among the accessible, distributed services. In this paper, a method for determining the optimal path through the best services, based on quality of service (QoS) parameters and user preferences, is proposed using a max heap tree. The efficiency of this method relative to previous content adaptation methods lies in how the optimal path through the best services is determined and measured. The results show the advantages of this method compared with the others.
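
A minimal sketch (not the authors' implementation) of how a max heap can rank candidate adaptation services by a preference-weighted QoS score; the service names, QoS attributes, and weights below are illustrative assumptions.

import heapq

services = [
    {"name": "resize",    "qos": {"latency": 0.2, "cost": 0.5, "quality": 0.9}},
    {"name": "transcode", "qos": {"latency": 0.6, "cost": 0.3, "quality": 0.8}},
    {"name": "summarize", "qos": {"latency": 0.4, "cost": 0.7, "quality": 0.6}},
]
weights = {"latency": -0.3, "cost": -0.2, "quality": 0.5}   # user preferences

def score(svc):
    # Weighted sum of QoS parameters; higher is better.
    return sum(weights[k] * v for k, v in svc["qos"].items())

# heapq is a min-heap, so negated scores make the best service pop first.
heap = [(-score(s), s["name"]) for s in services]
heapq.heapify(heap)
while heap:
    neg, name = heapq.heappop(heap)
    print(f"{name}: score {-neg:.2f}")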

Keywords: service-oriented content adaptation, QoS, max heap tree, web services

Procedia PDF Downloads 259
18299 Investigation of the Effect of Excavation Step in NATM on Surface Settlement by Finite Element Method

Authors: Seyed Mehrdad Gholami

Abstract:

Nowadays, the use of rail transport systems (metro) is increasing in most cities of the world, so the need for safe and economical ways of building tunnels and subway stations is felt more and more. One of the most commonly used methods for constructing underground structures in urban areas is the New Austrian Tunneling Method (NATM). In this method, key parameters such as the excavation step and the cross-sectional area have a significant effect on surface settlement, and settlement is a very important control factor for safe excavation. In this paper, the Finite Element Method is applied using Abaqus. Station R6 of Tehran Metro Line 6, built by NATM, is studied and analyzed. Based on the outcomes of the numerical modeling and a comparison with the field instrumentation and monitoring results, an excavation step of 1 meter and a longitudinal distance of 14 meters between side drifts are suggested to achieve safe tunneling with allowable settlement.

Keywords: excavation step, NATM, numerical modeling, settlement

Procedia PDF Downloads 139
18298 Solving SPDEs by Least Squares Method

Authors: Hassan Manouzi

Abstract:

We present in this paper a useful strategy for solving stochastic partial differential equations (SPDEs) involving stochastic coefficients. Using the Wick product of higher order and the Wiener-Itô chaos expansion, the SPDE is reformulated as a large system of deterministic partial differential equations. To reduce the computational complexity of this system, we use a decomposition-coordination method. To obtain the chaos coefficients in the corresponding deterministic equations, we use a least-squares formulation. Once this approximation is performed, the statistics of the numerical solution can be easily evaluated.
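
A minimal sketch, not the paper's solver: the expansion coefficients of a deterministic system of this kind can be recovered by ordinary least squares from an overdetermined linear system A c ≈ b; the matrices below are random placeholders.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_coeffs = 200, 10
A = rng.normal(size=(n_samples, n_coeffs))           # evaluations of basis functions
c_true = rng.normal(size=n_coeffs)                   # "true" chaos coefficients
b = A @ c_true + 0.01 * rng.normal(size=n_samples)   # noisy right-hand side

c_ls, *_ = np.linalg.lstsq(A, b, rcond=None)         # least-squares solution
print("max coefficient error:", np.abs(c_ls - c_true).max())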

Keywords: least squares, Wick product, SPDEs, finite element, Wiener chaos expansion, gradient method

Procedia PDF Downloads 419
18297 Ground Deformation Modulus for the New Laboratory Methods

Authors: O. Giorgishvili

Abstract:

One of the important characteristics for the calculation of foundations is the modulus of deformation (E0). The main goal of calculating building foundations for deformation is to keep the base settlement, and differences in settlement, within limits that do not cause cracking or changes in design levels that would endanger the normal operation of the buildings and their individual structures. As is known from the literature and from practice, the modulus of deformation is determined by two basic methods: a laboratory method, in which the soil is tested in compression without lateral expansion, and soil testing in field conditions. The modulus determined by the field method is closer to the actual modulus of deformation of the soil, but the complexity and cost of the tests often rule it out, so the modulus is usually determined by the compression method without lateral expansion. We therefore introduce a new laboratory procedure for determining the ground modulus of deformation that allows lateral expansion and therefore reflects the actual modulus of deformation more accurately, giving values closer to those obtained by the field method. The tests and their results showed that the modulus obtained by the proposed method is closer to the field results and thus reflects the real behaviour of the foundation more accurately than the modulus from the standard compression test.
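
A small illustration (not the proposed laboratory procedure itself) of how a secant deformation modulus E0 = Δσ/Δε is obtained from one loading increment of a compression test; the specimen height, stress increment, and settlement are invented numbers.

h = 0.10                      # specimen height, m
d_sigma = 150e3 - 50e3        # stress increment, Pa
d_settlement = 0.45e-3        # additional settlement over the increment, m

E0 = d_sigma / (d_settlement / h)   # secant modulus, Pa
print(f"E0 = {E0 / 1e6:.1f} MPa")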

Keywords: build, deformation modulus, foundations, ground, laboratory research

Procedia PDF Downloads 369
18296 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history seismic analysis is considered the most accurate method for predicting the seismic demand of structures. On the other hand, its main deficiency is the computational time required to obtain a result. When it is applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of the seismic analysis makes the optimization algorithms far more practical. Approximate methods inevitably introduce some error compared with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to estimate the seismic demand of a main structure. The seismic demand of the sampled structure is estimated from the modal displacements of a basic structure for which the modal displacements have been calculated. Shear steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by applying three types of earthquakes (classified by the time of peak ground acceleration).
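
A minimal sketch of the SRSS combination mentioned above (not the BMD method itself): peak modal responses are combined as the square root of the sum of their squares; the modal peak values are illustrative.

import math

peak_modal_displacements = [0.032, 0.011, 0.004]  # metres, modes 1..3

srss = math.sqrt(sum(u ** 2 for u in peak_modal_displacements))
print(f"SRSS estimate of peak displacement: {srss:.4f} m")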

Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures

Procedia PDF Downloads 355
18295 Descent Algorithms for Optimization Algorithms Using q-Derivative

Authors: Geetanjali Panda, Suvrakanti Chakraborty

Abstract:

In this paper, Newton-like descent methods are proposed for unconstrained optimization problems, which use q-derivatives of the gradient of the objective function. First, a local scheme is developed with an alternative sufficient optimality condition, and the method is then extended to a global scheme. Moreover, a variant of the practical Newton scheme is also developed by introducing a real sequence. Global convergence of these schemes is proved under some mild conditions. Numerical experiments and graphical illustrations are provided. Finally, performance profiles on a test set show that the proposed schemes are competitive with existing first-order schemes for optimization problems.
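
A minimal sketch of a q-gradient descent step for a scalar function, assuming the usual Jackson q-derivative D_q f(x) = (f(qx) - f(x)) / ((q - 1)x); it only illustrates the idea of replacing the classical derivative and is not the authors' Newton-like scheme.

def q_derivative(f, x, q=0.9):
    # Jackson q-derivative; reduces to the ordinary derivative as q -> 1.
    if x == 0.0:
        h = 1e-8                      # at x = 0 fall back to a central difference
        return (f(h) - f(-h)) / (2 * h)
    return (f(q * x) - f(x)) / ((q - 1) * x)

def q_descent(f, x0, step=0.1, iters=100, q=0.9):
    x = x0
    for _ in range(iters):
        x -= step * q_derivative(f, x, q)
    return x

f = lambda x: (x - 3.0) ** 2           # simple convex test function
print(q_descent(f, x0=0.0))            # settles near the minimiser x = 3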

Keywords: descent algorithm, line search method, q-calculus, quasi-Newton method

Procedia PDF Downloads 398
18294 End-to-End Pyramid Based Method for Magnetic Resonance Imaging Reconstruction

Authors: Omer Cahana, Ofer Levi, Maya Herman

Abstract:

Magnetic Resonance Imaging (MRI) is a lengthy medical scan whose duration stems from a long acquisition time. Its length is mainly due to the traditional sampling theorem, which defines a lower bound on the sampling rate. However, it is still possible to accelerate the scan by using a different approach such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse under a known transform domain, and ii) the sampling must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While rapid advances in Deep Learning (DL) have brought tremendous success to various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed to build high-level refinement at all scales. Our method holds the state of the art on the fastMRI benchmark, the largest and most diverse benchmark for MRI reconstruction.

Keywords: magnetic resonance imaging, image reconstruction, pyramid network, deep learning

Procedia PDF Downloads 91
18293 Development, Optimization, and Validation of a Synchronous Fluorescence Spectroscopic Method with Multivariate Calibration for the Determination of Amlodipine and Olmesartan Implementing: Experimental Design

Authors: Noha Ibrahim, Eman S. Elzanfaly, Said A. Hassan, Ahmed E. El Gendy

Abstract:

Objectives: The purpose of the study is to develop a sensitive synchronous spectrofluorimetric method with multivariate calibration after studying and optimizing the different variables affecting the native fluorescence intensity of amlodipine and olmesartan, implementing an experimental design approach. Method: In the first step, a fractional factorial design was used to screen the independent factors affecting the intensity of both drugs. The objective of the second step was to optimize the method performance using a central composite face-centred (CCF) design. The optimal experimental conditions obtained from this study were a temperature of 15 ± 0.5 °C, a solvent of 0.05 N HCl and methanol in a ratio of 90:10 (v/v), a Δλ of 42, and the addition of 1.48% surfactant, providing a sensitive measurement of amlodipine and olmesartan. The resolution of the binary mixture with multivariate calibration was accomplished mainly by using a partial least squares (PLS) model. Results: The recovery percentages in the tablet dosage form were found to be 102 ± 0.24 and 99.56 ± 0.10 for amlodipine and olmesartan, respectively. Conclusion: The method is valid according to International Conference on Harmonization (ICH) guidelines, proving to be linear over the ranges of 200-300 and 500-1500 ng mL⁻¹ for amlodipine and olmesartan, respectively. The method successfully estimated amlodipine besylate and olmesartan in bulk powder and pharmaceutical preparations.
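
A minimal sketch (assuming scikit-learn) of the multivariate-calibration step: a PLS model maps synchronous fluorescence spectra to the two drug concentrations; the spectra and concentrations below are random placeholders, not the paper's calibration set.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
spectra = rng.normal(size=(25, 200))          # 25 calibration mixtures, 200 wavelengths
conc = rng.uniform(0.2, 1.5, size=(25, 2))    # amlodipine, olmesartan concentrations

pls = PLSRegression(n_components=4).fit(spectra, conc)
print("predicted concentrations of first sample:", pls.predict(spectra[:1]))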

Keywords: amlodipine, central composite face-centred design, experimental design, fractional factorial design, multivariate calibration, olmesartan

Procedia PDF Downloads 150
18292 An Event Relationship Extraction Method Incorporating Deep Feedback Recurrent Neural Network and Bidirectional Long Short-Term Memory

Authors: Yin Yuanling

Abstract:

To address the low accuracy of traditional relationship extraction models, an event relationship extraction method combining a Deep Feedback Recurrent Neural Network (DFRNN) and a Bidirectional Long Short-Term Memory (BiLSTM) network is designed. The method combines the DFRNN, which extracts local text features through a deep feedback recurrent mechanism, the BiLSTM, which better extracts global text features, and self-attention, which extracts semantic information. Experiments show that the method achieves an F1 value of 76.69% on the CEC dataset, 0.0652 higher than the BiLSTM+Self-ATT model, thus improving the performance of deep learning methods on the event relationship extraction task.
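
A minimal PyTorch sketch of the BiLSTM-with-self-attention part of such a pipeline (the DFRNN component is not reproduced); the vocabulary size, layer widths, and number of relation classes are illustrative assumptions.

import torch
import torch.nn as nn

class BiLSTMSelfAttn(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, hidden=64, n_relations=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)          # additive self-attention scores
        self.classifier = nn.Linear(2 * hidden, n_relations)

    def forward(self, token_ids):
        h, _ = self.bilstm(self.embed(token_ids))     # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over the sequence
        context = (weights * h).sum(dim=1)            # weighted sentence vector
        return self.classifier(context)               # relation logits

logits = BiLSTMSelfAttn()(torch.randint(0, 5000, (2, 20)))  # batch of 2 sentences
print(logits.shape)                                         # torch.Size([2, 6])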

Keywords: event relations, deep learning, DFRNN models, bidirectional long short-term memory networks

Procedia PDF Downloads 144
18291 Bi-Directional Evolutionary Topology Optimization Based on Critical Fatigue Constraint

Authors: Khodamorad Nabaki, Jianhu Shen, Xiaodong Huang

Abstract:

This paper develops a method for treating the critical fatigue stress as a constraint in the Bi-directional Evolutionary Structural Optimization (BESO) method. Our aim is to reach an optimal design in which high-cycle fatigue failure does not occur within a specified lifetime. The critical fatigue stress is calculated based on the modified Goodman criterion and used as a stress constraint in our topology optimization problem. Since fatigue generally does not occur under compressive stresses, we use a p-norm stress measure that considers the highest tensile principal stress at each point to calculate the sensitivity numbers. The BESO method has been extended to minimize the volume of an object subjected to the critical fatigue stress constraint. The optimization results are compared with the results of the compliance minimization problem, which clearly shows the merits of our newly developed approach.
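
A minimal sketch of the p-norm stress measure described above: the largest tensile principal stress at each point is aggregated into a single smooth global measure for the fatigue constraint; the stress values and the exponent p are illustrative.

import numpy as np

def pnorm_stress(max_tensile_principal, p=8):
    # Smooth approximation of the maximum tensile principal stress.
    s = np.maximum(max_tensile_principal, 0.0)   # fatigue ignores compression
    return (np.sum(s ** p)) ** (1.0 / p)

sigma = np.array([12.0, 55.0, 80.0, 47.0, 5.0])  # MPa at sampled elements
print(f"p-norm stress measure: {pnorm_stress(sigma):.1f} MPa "
      f"(true max: {sigma.max():.1f} MPa)")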

Keywords: topology optimization, BESO method, p-norm, fatigue constraint

Procedia PDF Downloads 295
18290 Least Squares Method Identification of Corona Current-Voltage Characteristics and Electromagnetic Field in Electrostatic Precipitator

Authors: H. Nouri, I. E. Achouri, A. Grimes, H. Ait Said, M. Aissou, Y. Zebboudj

Abstract:

This paper analyses the behaviour of DC corona discharge in wire-to-plate electrostatic precipitators (ESP). Current-voltage curves are analysed in particular. Experimental results show that the discharge current is strongly affected by the applied voltage. The proposed method of current identification uses the method of least squares. Least squares problems fall into two categories, linear (ordinary) least squares and non-linear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution, that is, a formula that can be evaluated in a finite number of standard operations. The non-linear problem has no closed-form solution and is usually solved iteratively.
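
A minimal sketch (not the authors' identification code) of a linear least-squares fit of the classic Townsend corona law I = k·V·(V - V0): writing I/V = k·V - k·V0 makes the problem linear with a closed-form solution; the measurements below are synthetic, illustrative values.

import numpy as np

V = np.array([12., 14., 16., 18., 20., 22.])          # applied voltage, kV
I = np.array([0.8, 2.1, 3.6, 5.4, 7.5, 9.9])          # corona current, mA

A = np.column_stack([V, np.ones_like(V)])              # design matrix for I/V = a*V + b
(a, b), *_ = np.linalg.lstsq(A, I / V, rcond=None)
k, V0 = a, -b / a
print(f"k = {k:.4f} mA/kV^2, onset voltage V0 = {V0:.2f} kV")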

Keywords: electrostatic precipitator, current-voltage characteristics, least squares method, electric field, magnetic field

Procedia PDF Downloads 431
18289 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics

Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty

Abstract:

Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey reveals reported methods based on solid-phase extraction and liquid-liquid extraction, which are highly variable, time consuming, costly, and laborious. The present work aims to develop a simple, highly sensitive, precise and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples that can be used for preclinical studies. Method: A reverse-phase high-performance liquid chromatography method. Results: Selectivity: Aminophylline and the internal standard were well separated from co-eluting components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was greater than 0.9967, indicating the linearity of the method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium and high quality control concentrations of aminophylline in plasma were within acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all three QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and for about 30 days when stored at -80 ˚C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma. The method described in our article uses a simple protein precipitation extraction technique with ultraviolet detection for quantification. It is simple and robust for fast, high-throughput sample analysis at low analysis cost for aminophylline in biological samples. No interfering peaks were observed at the elution times of aminophylline and the internal standard. The method also has sufficient selectivity, specificity, precision and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.

Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC

Procedia PDF Downloads 222
18288 Online Estimation of Clutch Drag Torque in Wet Dual Clutch Transmission Based on Recursive Least Squares

Authors: Hongkui Li, Tongli Lu, Jianwu Zhang

Abstract:

This paper focuses on developing a method for estimating clutch drag torque in a wet DCT. The modelling of clutch drag torque is investigated, and the dynamic viscosity of the oil, the main factor affecting drag torque, is discussed. The paper proposes an estimation method for clutch drag torque based on recursive least squares, utilizing the dynamic equations of the gear-shifting synchronization process. The results demonstrate that the estimation method has good accuracy and efficiency.
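
A minimal sketch of a recursive least squares (RLS) update for a scalar parameter, in the spirit of estimating drag torque online from the synchronization dynamics; the regression model and signals below are purely illustrative, not the paper's model.

import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    # One RLS step for y = phi.T @ theta + noise, with forgetting factor lam.
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)              # gain
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                      # covariance update
    return theta, P

rng = np.random.default_rng(1)
theta_true = np.array([4.0])                           # "true" drag torque, N·m
theta, P = np.zeros(1), np.eye(1) * 1000.0
for _ in range(200):
    phi = np.array([1.0])                              # regressor
    y = phi @ theta_true + 0.1 * rng.normal()          # measured torque balance
    theta, P = rls_update(theta, P, phi, y)
print("estimated drag torque:", theta[0])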

Keywords: clutch drag torque, wet DCT, dynamic viscosity, recursive least squares

Procedia PDF Downloads 318
18287 Design Systems and the Need for a Usability Method: Assessing the Fitness of Components and Interaction Patterns in Design Systems Using Atmosphere Methodology

Authors: Patrik Johansson, Selina Mardh

Abstract:

The present study proposes a usability test method, Atmosphere, to assess the fitness of components and interaction patterns of design systems. The method covers the user's perception of the components of the system, the efficiency of the logic of the interaction patterns, perceived ease of use, and the user's understanding of the intended outcome of interactions. These aspects are assessed by combining measures of first impression, visual affordance, and expectancy. The method was applied to a design system developed for the design of an electronic health record system, in a study involving 15 healthcare personnel. It could be concluded that the Atmosphere method provides tangible data that enable human-computer interaction practitioners to analyze and categorize components and patterns based on perceived usability, the success rate of identifying interactive components, and the success rate of understanding the intended outcomes of components and interaction patterns.

Keywords: atomic design, atmosphere methodology, design system, expectancy testing, first impression testing, usability testing, visual affordance testing

Procedia PDF Downloads 180
18286 A Calibration Method of Portable Coordinate Measuring Arm Using Bar Gauge with Cone Holes

Authors: Rim Chang Hyon, Song Hak Jin, Song Kwang Hyok, Jong Ki Hun

Abstract:

The calibration of the articulated arm coordinate measuring machine (AACMM) is key to improving calibration accuracy and saving calibration time. To reduce the time consumed by calibration, we should choose proper calibration gauges and develop a reasonable calibration method. In addition, we should obtain the exact optimal solution by accurately removing gross errors from the experimental data. In this paper, we present a calibration method for the portable coordinate measuring arm (PCMA) using a 1.2 m long bar gauge with cone holes. First, we determine the locations of the bar gauge and establish an optimal objective function for identifying the structural parameter errors. Next, we build a mathematical model of the calibration algorithm and present a new mathematical method to remove gross errors from the calibration data. Finally, we find the optimal solution identifying the kinematic parameter errors by using the Levenberg-Marquardt algorithm. The experimental results show that our calibration method is very effective in saving calibration time and improving calibration accuracy.
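
A minimal sketch of the parameter-identification step with the Levenberg-Marquardt algorithm (here via SciPy's 'lm' solver); the residual is a toy stand-in that compares measured gauge lengths with a two-parameter model, not the full AACMM kinematic model.

import numpy as np
from scipy.optimize import least_squares

nominal_lengths = np.linspace(0.2, 1.2, 6)                 # cone-hole spacings, m
true_scale, true_offset = 1.001, -0.0004                   # "unknown" errors
measured = true_scale * nominal_lengths + true_offset

def residuals(params):
    scale, offset = params
    return (scale * nominal_lengths + offset) - measured

sol = least_squares(residuals, x0=[1.0, 0.0], method="lm")
print("identified scale/offset errors:", sol.x)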

Keywords: AACMM, kinematic model, parameter identification, measurement accuracy, calibration

Procedia PDF Downloads 83
18285 The Employees' Classification Method in the Space of Their Job Satisfaction, Loyalty and Involvement

Authors: Svetlana Ignatjeva, Jelena Slesareva

Abstract:

The aim of the study is the development and adaptation of a method to analyze and quantify indicators characterizing the relationship between a company and its employees. The diagnostics of such indicators is one of the most complex and topical issues in the psychology of labour. The offered method is based on a questionnaire whose indicators reflect the cognitive, affective and conative components of the socio-psychological attitude of employees to being as efficient as possible in their professional activities. This approach allows measuring not only the selected factors but also parameters such as cognitive and behavioural dissonance. Adaptation of the questionnaire includes factor structure analysis and suitability analysis of the measured indicators in terms of the internal consistency of individual factors. The structural validity of the questionnaire was tested by exploratory factor analysis (extraction method: principal component analysis; rotation method: varimax with Kaiser normalization). Factor analysis reduces the dimensionality of the phenomena, moving from individual indicators to aggregate indexes and latent variables. Aggregate indexes are obtained as the sum of the relevant indicators followed by standardization. Cronbach's alpha coefficient was used to assess the reliability and consistency of the questionnaire items. Two-step cluster analysis in the space of the extracted factors allows employees to be classified according to their attitude to work in the company. The results of psychometric testing indicate that the developed technique can be used to analyze employees' attitudes towards their work in companies and to develop recommendations for their optimization.
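
A minimal sketch (assuming scikit-learn) of the analysis pipeline: questionnaire items are reduced to aggregate factors and employees are then clustered in that factor space; PCA and k-means stand in for the varimax-rotated PCA extraction and two-step clustering, and the responses are random placeholders.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(120, 18)).astype(float)  # 120 employees, 18 items

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(responses))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print("employees per cluster:", np.bincount(labels))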

Keywords: involvement in the organization, loyalty, organizations, method

Procedia PDF Downloads 356
18284 A Safety Analysis Method for Multi-Agent Systems

Authors: Ching Louis Liu, Edmund Kazmierczak, Tim Miller

Abstract:

Safety analysis for multi-agent systems is complicated by the potentially nonlinear interactions between agents. This paper proposes a method for analyzing the safety of multi-agent systems by explicitly focusing on interactions and on the accident data of systems that are similar in structure and function to the system being analyzed. The method creates a Bayesian network using the accident data from similar systems; a feature of our method is that the events in the accident data are labeled with HAZOP guide words. Our method uses an ontology to abstract away from the details of a multi-agent implementation. Using the ontology, our method then constructs an "interaction map", a graphical representation of the patterns of interactions between agents and other artifacts. Interaction maps combined with statistical data from accidents and the HAZOP classifications of events can be converted into a Bayesian network. Bayesian networks allow designers to explore "what if" scenarios and make design trade-offs that maintain safety. We show how to use the Bayesian networks and the interaction maps to improve multi-agent system designs.
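
A minimal sketch of one ingredient of such an approach: estimating conditional probabilities for a Bayesian-network node from accident records whose events are labelled with HAZOP guide words; the records, guide words, and interaction labels below are invented placeholders, and a real model would cover the full interaction map.

import pandas as pd

accidents = pd.DataFrame({
    "guide_word": ["MORE", "MORE", "LESS", "NO", "MORE", "LESS"],
    "interaction": ["agent-sensor"] * 3 + ["agent-agent"] * 3,
    "severe":      [1, 1, 0, 1, 0, 0],
})

# P(severe | guide_word, interaction) estimated by relative frequency
cpt = accidents.groupby(["guide_word", "interaction"])["severe"].mean()
print(cpt)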

Keywords: multi-agent system, safety analysis, safety model, interaction map

Procedia PDF Downloads 417
18283 A Holistic Workflow Modeling Method for Business Process Redesign

Authors: Heejung Lee

Abstract:

In a highly competitive environment, it becomes more important to shorten the whole business process while delivering or even enhancing the business value to customers and suppliers. Although workflow management systems receive much attention for their capacity to practically support business process enactment, effective workflow modeling methods remain challenging, and the high degree of process complexity makes it more difficult to achieve a short lead time. This paper presents a workflow structuring method that takes a holistic view and can reduce process complexity using activity needs and formal concept analysis, eventually enhancing key performance measures such as quality, delivery, and cost in the business process.
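
A minimal sketch of the formal concept analysis ingredient named above: all formal concepts of a tiny activity/need context are enumerated by brute force; the activities and needs are invented examples, not the paper's business-process data.

from itertools import combinations

context = {                                  # activity -> needs it satisfies
    "check order": {"customer data", "order data"},
    "pack goods":  {"order data", "stock data"},
    "ship goods":  {"order data", "stock data", "carrier data"},
}
activities = list(context)
all_needs = set().union(*context.values())

def common_needs(acts):
    if not acts:
        return set(all_needs)                # intent of the empty set of activities
    return set.intersection(*(context[a] for a in acts))

def activities_with(needs):
    return frozenset(a for a in activities if needs <= context[a])

concepts = set()
for r in range(len(activities) + 1):
    for acts in combinations(activities, r):
        intent = frozenset(common_needs(acts))
        concepts.add((activities_with(intent), intent))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))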

Keywords: workflow management, re-engineering, formal concept analysis, business process

Procedia PDF Downloads 409
18282 Fire Smoke Removal over Cu-Mn-Ce Oxide Catalyst with CO₂ Sorbent Addition: CO Oxidation and In-situ CO₂ Sorption

Authors: Jin Lin, Shouxiang Lu, Kim Meow Liew

Abstract:

In a fire accident, fire smoke often poses a serious threat to human safety, especially in enclosed spaces such as submarine and spacecraft environments. Efficient removal of the hazardous gas products, particularly the large amounts of CO and CO₂, from these confined spaces is critical for the safety of the staff and necessary for post-fire environment recovery. In this work, Cu-Mn-Ce composite oxide catalysts coupled with CO₂ sorbents were prepared using a wet impregnation method, a solid-state impregnation method, and a combined wet/solid-state impregnation method. The as-prepared samples were tested dynamically and isothermally for CO oxidation and CO₂ sorption and further characterized by X-ray diffraction (XRD), nitrogen adsorption and desorption, and field emission scanning electron microscopy (FE-SEM). The results showed that all samples were able to catalyze CO into CO₂ and capture CO₂ in situ by chemisorption. Among all samples, the one synthesized by the wet/solid-state impregnation method showed the highest catalytic activity toward CO oxidation and a fine CO₂ sorption ability. The sample prepared by the solid-state impregnation method showed the second-best CO oxidation performance, while the sample coupled using the wet impregnation method exhibited much poorer CO oxidation activity. The different CO oxidation and CO₂ sorption properties of the samples may arise from the different dispersion states of the CO₂ sorbent in the CO catalyst, owing to the different preparation methods. XRD results confirmed a highly dispersed sorbent phase in the samples prepared by the wet and solid-state impregnation methods, while the sample prepared by the wet/solid-state impregnation method showed a larger bulk phase, as indicated by its high-intensity diffraction peaks. Nitrogen adsorption and desorption results further revealed that the latter sample had a higher surface area and pore volume, which are beneficial for CO oxidation over the catalyst. Hence, the Cu-Mn-Ce oxide catalyst coupled with a CO₂ sorbent using the wet/solid-state impregnation method could be a good choice for fire smoke removal in enclosed spaces.

Keywords: CO oxidation, CO₂ sorption, preparation methods, smoke removal

Procedia PDF Downloads 139
18281 K-Means Clustering-Based Infinite Feature Selection Method

Authors: Seyyedeh Faezeh Hassani Ziabari, Sadegh Eskandari, Maziar Salahi

Abstract:

The Infinite Feature Selection (IFS) algorithm is an efficient feature selection algorithm that selects a subset of features of all sizes (including infinity). In this paper, we present an improved version of it, called clustering IFS (CIFS), obtained by clustering the dataset in advance. To do so, we first apply the K-means algorithm to cluster the dataset and then apply IFS. In the CIFS method, the spatial and temporal complexities are reduced compared to the IFS method. Experimental results on six datasets show the superiority of CIFS compared to IFS in terms of accuracy, running time, and memory consumption.
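
A minimal, simplified sketch of the CIFS idea: the samples are clustered with k-means first, and features are then scored on the reduced representation; the feature-graph scoring below is a simplified stand-in for the full Infinite Feature Selection procedure, shown only to illustrate the two-stage flow.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                        # samples x features

# Stage 1: replace the samples by their cluster centroids (coarse representation).
centroids = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X).cluster_centers_

# Stage 2: build a feature-feature affinity graph and score each feature by the
# weight of all paths through it, via the geometric series (I - gamma*A)^-1.
corr = np.corrcoef(centroids, rowvar=False)
A = 1.0 - np.abs(corr)                                # redundant features get low weight
gamma = 0.9 / np.max(np.abs(np.linalg.eigvalsh(A)))   # keep the series convergent
S = np.linalg.inv(np.eye(A.shape[0]) - gamma * A) - np.eye(A.shape[0])
scores = S.sum(axis=1)
print("features ranked best-first:", np.argsort(-scores))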

Keywords: feature selection, infinite feature selection, clustering, graph

Procedia PDF Downloads 128
18280 An Optimal Control Model to Determine Body Forces of Stokes Flow

Authors: Yuanhao Gao, Pin Lin, Kees Weijer

Abstract:

In this paper, we determine the external body force distribution of a Stokes flow by mathematical modelling and numerical approximation. The body force distribution is regarded as the unknown variable and is determined using ideas from optimal control theory. The Stokes flow and its velocity are generated by given forces in a unit square domain. A regularized objective functional is built to match the numerically computed flow velocity with the generated velocity data, so that the force distribution can be determined by minimizing the value of the objective functional, which measures the difference between the numerical and experimental velocities. After applying the Lagrange multiplier method, partial differential equations constituting the optimal control system are formulated and solved. The finite element method and the conjugate gradient method are used to discretize the equations and to derive the iterative expression for the target body force, from which the velocity and the body force distribution are computed numerically. The programming environment FreeFEM++ supports the implementation of this model.
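
A minimal sketch of the control idea: the body force f is found by minimizing a regularized misfit between predicted and observed velocities; here a small random matrix stands in for the (linear) Stokes solve, and SciPy's conjugate gradient minimizer plays the role of the iterative scheme, so this is only an illustration with toy data.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
L = rng.normal(size=(30, 10))             # "solver": maps body force -> velocity
f_true = rng.normal(size=10)
u_obs = L @ f_true                         # observed velocity data
alpha = 1e-3                               # Tikhonov regularization weight

def J(f):
    return 0.5 * np.sum((L @ f - u_obs) ** 2) + 0.5 * alpha * np.sum(f ** 2)

def grad_J(f):
    return L.T @ (L @ f - u_obs) + alpha * f

sol = minimize(J, np.zeros(10), jac=grad_J, method="CG")
print("recovered force error:", np.linalg.norm(sol.x - f_true))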

Keywords: optimal control model, Stokes equation, finite element method, conjugate gradient method

Procedia PDF Downloads 405
18279 RP-HPLC Method Development and Its Validation for Simultaneous Estimation of Metoprolol Succinate and Olmesartan Medoxomil Combination in Bulk and Tablet Dosage Form

Authors: S. Jain, R. Savalia, V. Saini

Abstract:

A simple, accurate, precise, sensitive and specific RP-HPLC method was developed and validated for the simultaneous estimation of metoprolol succinate and olmesartan medoxomil in bulk and tablet dosage form. The RP-HPLC method showed adequate separation of metoprolol succinate and olmesartan medoxomil from their degradation products. The separation was achieved on a Phenomenex Luna ODS C18 column (250 mm × 4.6 mm i.d., 5 μm particle size) with an isocratic mixture of acetonitrile and 50 mM phosphate buffer (pH 4.0, adjusted with glacial acetic acid) in the ratio of 55:45 v/v. The mobile phase flow rate was 1.0 mL/min, the injection volume was 20 μL, and the detection wavelength was 225 nm. The retention times for metoprolol succinate and olmesartan medoxomil were 2.451 ± 0.1 min and 6.167 ± 0.1 min, respectively. The linearity of the proposed method was investigated in the ranges of 5-50 μg/mL and 2-20 μg/mL for metoprolol succinate and olmesartan medoxomil, respectively, with correlation coefficients of 0.999 and 0.9996. The limits of detection were 0.2847 μg/mL and 0.1251 μg/mL, and the limits of quantification were 0.8630 μg/mL and 0.3793 μg/mL, for metoprolol succinate and olmesartan medoxomil, respectively. The proposed method was validated as per ICH guidelines for linearity, accuracy, precision, specificity and robustness for the estimation of metoprolol succinate and olmesartan medoxomil in a commercially available tablet dosage form, and the results were found to be satisfactory. Thus the developed and validated stability-indicating method can be used successfully for marketed formulations.
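
A small illustration of the standard ICH Q2 formulas behind detection limits of this kind, LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the standard deviation of the response and S the calibration slope; the σ and S values below are invented, not the paper's regression output.

sigma = 0.85      # SD of the response (e.g., residuals of the calibration curve)
S = 9.86          # slope of the calibration curve (response per µg/mL)

LOD = 3.3 * sigma / S
LOQ = 10.0 * sigma / S
print(f"LOD = {LOD:.4f} µg/mL, LOQ = {LOQ:.4f} µg/mL")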

Keywords: metoprolol succinate, olmesartan medoxomil, RP-HPLC method, validation, ICH

Procedia PDF Downloads 315
18278 Fractional Residue Number System

Authors: Parisa Khoshvaght, Mehdi Hosseinzadeh

Abstract:

During the past few years, the Residue Number System (RNS) has been receiving considerable interest due to its parallel and fault-tolerant properties. This system is a useful tool for Digital Signal Processing (DSP) since it can support parallel, carry-free, high-speed and low-power arithmetic. One of the drawbacks of the Residue Number System is handling fractional numbers: the corresponding circuit is very hard to realize in conventional CMOS technology. In this paper, we propose a method in which the number of transistors is significantly reduced and the related delay is greatly diminished. At first, we use this method to solve the problem for one fractional number; the proposition can then be extended to generalize the idea. Another advantage of this method is its independence from the choice of moduli.
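
A minimal sketch of residue-number-system encoding, carry-free addition, and Chinese-Remainder-Theorem decoding, included only to make the RNS representation concrete; the fractional-number circuit proposed in the paper is not shown, and the moduli are illustrative.

from math import prod

moduli = (3, 5, 7)                         # pairwise coprime moduli

def to_rns(x):
    return tuple(x % m for m in moduli)

def from_rns(residues):
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)       # modular inverse (Python 3.8+)
    return x % M

a, b = 17, 8
# Addition is carry-free: add residues component-wise, then decode.
summed = tuple((ra + rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), moduli))
print(from_rns(summed))                    # 25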

Keywords: computer arithmetic, residue number system, number system, one-Hot, VLSI

Procedia PDF Downloads 495
18277 Structural and Electrochemical Characterization of Columnar-Structured Mn-Doped Bi26Mo10O69-d Electrolytes

Authors: Maria V. Morozova, Zoya A. Mikhaylovskaya, Elena S. Buyanova, Sofia A. Petrova, Ksenia V. Arishina, Robert G. Zaharov

Abstract:

The present work is devoted to the investigation of two series of doped bismuth molybdates: Bi₂₆-₂ₓMn₂ₓMo₁₀O₆₉-d and Bi₂₆Mo₁₀-₂yMn₂yO₆₉-d. The complex oxides were synthesized by conventional solid-state technology and by a co-precipitation method. The products were identified by powder diffraction. The powders and ceramic samples were examined by means of densitometry, laser diffraction, and electron microscopy. The porosity of the ceramic materials was estimated using the hydrostatic method. Electrical conductivity measurements were carried out using the impedance spectroscopy method.

Keywords: bismuth molybdate, columnar structures, impedance spectroscopy, oxygen ionic conductors

Procedia PDF Downloads 436
18276 The Emoji Method: An Approach for Identifying and Formulating Problem Ideas

Authors: Thorsten Herrmann, Alexander Laukemann, Hansgeorg Binz, Daniel Roth

Abstract:

For the analysis of already identified, existing problems, the pertinent literature provides a comprehensive collection of approaches and methods for analyzing the problems in detail. However, coming up with problem ideas that are worth pursuing further is often challenging, even though the importance of well-formulated problem ideas and their influence on subsequent creative processes are incontestable and proven. To meet these challenges, the Institute for Engineering Design and Industrial Design (IKTD) developed the Emoji Method. This paper presents the Emoji Method, which supports designers in generating problem ideas in a structured way. Drawing on research findings from knowledge management and innovation management, research into emojis and emoticons reveals insights for identifying and formulating problem ideas within the early design phase. The simple application and the large supporting potential of the Emoji Method within the early design phase are only a few of the many positive results of the conducted evaluation. The Emoji Method encourages designers to identify problem ideas and describe them in a structured way, in order to start focused work on generating solution ideas for the revealed problem ideas.

Keywords: emojis, problem ideas, innovation management, knowledge management

Procedia PDF Downloads 143
18275 A Finite Element/Finite Volume Method for Dam-Break Flows over Deformable Beds

Authors: Alia Alghosoun, Ashraf Osman, Mohammed Seaid

Abstract:

A coupled two-layer finite volume/finite element method is proposed for solving the dam-break flow problem over deformable beds. The governing equations consist of the well-balanced two-layer shallow water equations for the water flow and a linear elastic model for the bed deformations. Deformations in the topography can be caused by a sudden localized force or simply by a class of sliding displacements of the bathymetry. This deformation of the bed is a source of perturbations on the water surface, generating water waves that propagate with different amplitudes and frequencies. Coupling conditions at the interface are also investigated in the current study, and a two-mesh procedure is proposed for the transfer of information through the interface. In the present work, a new procedure is implemented at the soil-water interface using the finite element and two-layer finite volume meshes with a conservative distribution of the forces at their intersections. The finite element method employs quadratic elements on an unstructured triangular mesh, and the finite volume method uses the Rusanov scheme to reconstruct the numerical fluxes. The coupled numerical method is highly efficient, accurate, well balanced, and able to handle complex geometries as well as rapidly varying flows. Numerical results are presented for several test examples of dam-break flows over deformable beds. A mesh convergence study is performed for both methods; the overall model provides new insight into these problems at minimal computational cost.
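
A minimal sketch of the Rusanov (local Lax-Friedrichs) numerical flux for the one-dimensional shallow water equations, the flux reconstruction named above; the two-layer, coupled FE/FV machinery of the paper is not reproduced, and the cell states are illustrative.

import numpy as np

g = 9.81

def flux(U):
    h, hu = U
    return np.array([hu, hu ** 2 / h + 0.5 * g * h ** 2])

def rusanov(UL, UR):
    # Numerical flux between a left and a right cell state U = (h, hu).
    aL = abs(UL[1] / UL[0]) + np.sqrt(g * UL[0])       # |u| + sqrt(g h)
    aR = abs(UR[1] / UR[0]) + np.sqrt(g * UR[0])
    a = max(aL, aR)                                    # local wave speed bound
    return 0.5 * (flux(UL) + flux(UR)) - 0.5 * a * (UR - UL)

UL = np.array([2.0, 0.0])    # still water, depth 2 m
UR = np.array([1.0, 0.0])    # still water, depth 1 m (dam-break initial data)
print(rusanov(UL, UR))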

Keywords: dam-break flows, deformable beds, finite element method, finite volume method, hybrid techniques, linear elasticity, shallow water equations

Procedia PDF Downloads 181
18274 Human Tracking across Heterogeneous Systems Based on Mobile Agent Technologies

Authors: Tappei Yotsumoto, Atsushi Nomura, Kozo Tanigawa, Kenichi Takahashi, Takao Kawamura, Kazunori Sugahara

Abstract:

In a human tracking system, expanding the monitoring range of a single system complicates the management of devices and increases cost. We therefore propose a method to realize wide-range human tracking by connecting small systems. In this paper, we examine an agent deployment method and the information exchanged across heterogeneous human tracking systems. By implementing the proposed method, we can construct a human tracking system spanning heterogeneous systems, and the system can track a target continuously as it moves between systems.

Keywords: human tracking system, mobile agent, monitoring, heterogeneous systems

Procedia PDF Downloads 536
18273 Synthesis, Characterization and Impedance Analysis of Polypyrrole/La0.7Ca0.3MnO3 Nanocomposites

Authors: M. G. Smitha, M. V. Murugendrappa

Abstract:

The perovskite manganite La0.7Ca0.3MnO3 (LCM) was synthesized by the sol-gel method. Polypyrrole (PPy)/La0.7Ca0.3MnO3 composites were then obtained by in-situ polymerization of pyrrole (Py) in the presence of the oxidizing agent ammonium persulphate. The PPy/LCM composites were synthesized with varying compositions of 10, 20, 30, 40, and 50 wt.% LCM in Py. The surface morphologies of these composites were analyzed using a scanning electron microscope (SEM); the images show that LCM particles are embedded in the PPy chains. The impedance of PPy/LCM at temperatures ranging from 30 to 180 °C was studied using an impedance analyzer. The study shows that the impedance is frequency and temperature dependent and is found to decrease with increasing frequency and temperature.

Keywords: polypyrrole, sol gel, impedance, composites

Procedia PDF Downloads 376