Search results for: iteration method

18456 Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method

Authors: Mai Abdul Latif, Yuntian Feng

Abstract:

The Applied Element Method (AEM) was developed to aid in the analysis of the collapse of structures. Currently available methods cannot deal with structural collapse accurately; AEM, however, can simulate the behavior of a structure from an initial unloaded state up to collapse. The elements in AEM are connected by sets of normal and shear springs along the element edges, which represent the stresses and strains of the element in that region. The elements themselves are rigid, and the material properties are introduced through the spring stiffnesses. Nonlinear dynamic analysis of progressive collapse has been widely modelled using the finite element method; however, difficulties arise in the presence of excessively deformed elements with cracking or crushing, the computational cost is high, and choosing appropriate material models is not straightforward. The Applied Element Method is developed and coded here to significantly improve accuracy and reduce computational cost. The scheme works for both linear elastic and nonlinear cases, including elasto-plastic materials. This paper focuses on elastic and elasto-plastic material behaviour, where the number of springs required for an accurate analysis is tested. A steel cantilever beam is used as the structural element for the analysis. The first modification of the method distributes the springs according to Gaussian quadrature. Usually, the springs are equally spaced along the face of the element, but it was found that with Gaussian springs only 2 springs were required for perfectly elastic cases, whereas at least 5 equally spaced springs were needed. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification adapts the number of springs to the elasticity of the material. After the first Newton-Raphson iteration, the von Mises stress condition is used to calculate the stresses in the springs, and each spring is classified as elastic or plastic. Transition springs, located exactly between the elastic and plastic regions, are then interpolated to strictly identify the elastic and plastic regions in the cross section. Since a rectangular cross section was analyzed, there are two plastic regions (top and bottom) and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region and 2 springs for each plastic region, reducing the minimum number of springs in elasto-plastic cases to only 6 and thereby lowering the computational cost. All the work is done in MATLAB, and the results will be compared with finite element models of the structural elements in ANSYS.
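
The paper gives no implementation listing, but the Gaussian spring placement it describes can be sketched as follows: the Gauss-Legendre abscissae and weights on [-1, 1] are mapped onto an element edge, so each normal spring sits at a quadrature point and carries a weighted tributary length. The stiffness expression and all parameter values below are illustrative assumptions, not numbers from the paper.

```python
import numpy as np

def gaussian_springs(n_springs, d, E, t, a):
    """Place n_springs normal springs on an element edge of length d at
    Gauss-Legendre points, returning their positions and stiffnesses.

    E: Young's modulus, t: element thickness, a: distance between the
    connected element centroids (all illustrative parameters).
    """
    xi, w = np.polynomial.legendre.leggauss(n_springs)  # points/weights on [-1, 1]
    positions = 0.5 * d * (xi + 1.0)        # map quadrature points to [0, d]
    tributary = 0.5 * d * w                 # weighted tributary length per spring
    k_normal = E * t * tributary / a        # AEM-style normal spring stiffness
    return positions, k_normal

# Example: two Gaussian springs on a 0.1 m edge of a steel element
pos, k = gaussian_springs(2, d=0.1, E=210e9, t=0.01, a=0.05)
print(pos, k)
```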

Keywords: applied element method, elasto-plastic, Gaussian springs, nonlinear

Procedia PDF Downloads 201
18455 Second Order Analysis of Frames Using Modified Newmark Method

Authors: Seyed Amin Vakili, Sahar Sadat Vakili, Seyed Ehsan Vakili, Nader Abdoli Yazdi

Abstract:

The main purpose of this paper is to present the Modified Newmark Method as a method of non-linear frame analysis that accounts for the effect of axial load (second-order analysis). The discussion is restricted to plane frameworks with a constant cross-section for each element. In addition, it is assumed that the frames are prevented from out-of-plane deflection. This part of the investigation is performed to generalize the established method to assembled structures such as frameworks. As explained, the governing differential equations are non-linear and cannot be formulated easily because the axial loads of the struts in the frame are unknown. In most methods, the governing equations are linearized by assuming constant axial load. Since modeling and solving the non-linear form of the governing equations is cumbersome, the linear form of the equations is used in the established method. However, because the method is able to reconsider minor parameters omitted from the model during the solution procedure, the axial load in the elements at each stage of the iteration can be computed and applied in the next stage. Therefore, the ability of the method to provide an accurate approach to the solution of non-linear equations is demonstrated again in this paper.

Keywords: nonlinear, stability, buckling, modified Newmark method

Procedia PDF Downloads 387
18454 Adomian’s Decomposition Method to Generalized Magneto-Thermoelasticity

Authors: Hamdy M. Youssef, Eman A. Al-Lehaibi

Abstract:

Due to many applications and problems in plasma physics, geophysics, and many other fields, the interaction between the strain field and the magnetic field has to be considered. Adomian introduced the decomposition method for solving linear and nonlinear functional equations. This method leads to accurate, computable, approximately convergent solutions of linear and nonlinear partial and ordinary differential equations, even for equations with variable coefficients. This paper deals with a mathematical model of generalized thermoelasticity of a half-space conducting medium. A magnetic field of constant intensity acting normal to the bounding plane has been assumed. Adomian’s decomposition method has been used to solve the model when the bounding plane is taken to be traction free and thermally loaded by harmonic heating. The numerical results for the temperature increment, the stress, the strain, the displacement, and the induced magnetic and electric fields have been represented in figures. The magnetic field, the relaxation time, and the angular thermal load have significant effects on all the studied fields.
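
As a minimal illustration of the decomposition idea only (not of the magneto-thermoelastic model itself), the sketch below applies Adomian's recursion to the linear test problem u'(t) = -u(t), u(0) = 1, where each component is the integral of the previous one and the partial sums reproduce the Taylor series of exp(-t).

```python
import sympy as sp

t = sp.symbols('t')

def adomian_linear(n_terms):
    """Adomian decomposition for u' = -u, u(0) = 1:
    u_0 = 1, u_{k+1} = -Integral(u_k, 0..t)."""
    u = [sp.Integer(1)]
    for _ in range(n_terms - 1):
        u.append(-sp.integrate(u[-1], (t, 0, t)))
    return sp.expand(sum(u))

approx = adomian_linear(6)                        # 1 - t + t**2/2 - t**3/6 + ...
print(approx)
print(sp.series(sp.exp(-t), t, 0, 6).removeO())   # matches the partial sum
```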

Keywords: Adomian’s decomposition method, magneto-thermoelasticity, finite conductivity, iteration method, thermal load

Procedia PDF Downloads 119
18453 Basins of Attraction for Quartic-Order Methods

Authors: Young Hee Geum

Abstract:

We compare optimal quartic-order methods for the multiple zeros of nonlinear equations by illustrating their basins of attraction. To construct the basins of attraction effectively, we take a 600×600 uniform grid of points centered at the origin of the complex plane and paint the initial values with different colors according to the number of iterations required for convergence.
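
The coloring procedure described above can be sketched as follows; for brevity the sketch iterates plain Newton's method on z^3 - 1 instead of the authors' quartic-order method for multiple zeros, but the 600×600 grid, the color per root and the shading by iteration count follow the abstract.

```python
import numpy as np
import matplotlib.pyplot as plt

f  = lambda z: z**3 - 1.0
df = lambda z: 3.0 * z**2
roots = np.array([1.0, -0.5 + 0.866025j, -0.5 - 0.866025j])

n, box, max_iter, tol = 600, 2.0, 40, 1e-6
x = np.linspace(-box, box, n)
Z = x[None, :] + 1j * x[:, None]          # 600x600 grid centered at the origin
root_idx = np.full(Z.shape, -1)           # which basin each point belongs to
iters = np.zeros(Z.shape, dtype=int)      # iterations needed for convergence

for k in range(max_iter):
    Z = Z - f(Z) / df(Z)                  # Newton step (a quartic method would go here)
    for j, r in enumerate(roots):
        hit = (np.abs(Z - r) < tol) & (root_idx < 0)
        root_idx[hit] = j
        iters[hit] = k

plt.imshow(root_idx + 0.02 * iters, extent=[-box, box, -box, box], cmap='viridis')
plt.title('Basins of attraction, shaded by iteration count')
plt.show()
```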

Keywords: basins of attraction, convergence, multiple-root, nonlinear equation

Procedia PDF Downloads 231
18452 Modification of Newton Method in Two Point Block Backward Differentiation Formulas

Authors: Khairil I. Othman, Nur N. Kamal, Zarina B. Ibrahim

Abstract:

In this paper, we present a modified Newton method as a new strategy for improving the efficiency of Two Point Block Backward Differentiation Formulas (BBDF) when solving stiff systems of ordinary differential equations (ODEs). These methods are constructed to produce two approximate solutions simultaneously at each iteration. The detailed implementation of the predictor-corrector BBDF in PE(CE)2 mode with modified Newton iteration is discussed. The proposed modification of BBDF is validated through numerical results on some standard problems found in the literature, and comparisons are made with the existing Block Backward Differentiation Formula. The numerical results show the advantage of the new strategy in improving the accuracy of the solution when solving stiff ODEs.
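
The modified Newton idea the corrector relies on, freezing and factorising the Jacobian once and reusing it in every corrector iteration, can be sketched generically as below; the small test system and the starting guess are illustrative, not the BBDF corrector equations.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def modified_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 reusing a single LU-factorised Jacobian J(x0)."""
    lu, piv = lu_factor(J(x0))           # factorise once
    x = x0.astype(float)
    for _ in range(max_iter):
        dx = lu_solve((lu, piv), -F(x))  # only cheap back-substitutions per iteration
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Illustrative 2x2 nonlinear system with root (1, 2)
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
# predictor-like starting guess near the root, as a corrector would receive
print(modified_newton(F, J, np.array([0.9, 1.9])))
```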

Keywords: Newton method, two point, block, accuracy

Procedia PDF Downloads 329
18451 A Simulated Scenario of WikiGIS to Support the Iteration and Traceability Management of the Geodesign Process

Authors: Wided Batita, Stéphane Roche, Claude Caron

Abstract:

Geodesign is an emergent term related to a new and complex process. Hence, its tools, technologies and platforms need to be rethought in order to achieve its goals efficiently. A few tools have emerged since 2010, such as CommunityViz and GeoPlanner. In the era of Web 2.0 and collaboration, WikiGIS has been proposed as a new category of tools. In this paper, we present WikiGIS functionalities dealing mainly with iteration and traceability management to support the collaboration of the geodesign process. WikiGIS is built on GeoWeb 2.0 technologies, primarily on wiki, and aims at managing the tracking of participants’ editing. This paper focuses on a simplified simulation to illustrate the strength of WikiGIS in the management of traceability and in the access to history in a geodesign process. A cartographic user interface has been implemented, and a hypothetical use case has then been imagined as a proof of concept.

Keywords: geodesign, history, traceability, tracking of participants’ editing, WikiGIS

Procedia PDF Downloads 217
18450 Evaluation of Quasi-Newton Strategy for Algorithmic Acceleration

Authors: T. Martini, J. M. Martínez

Abstract:

An algorithmic acceleration strategy based on quasi-Newton (or secant) methods is presented to address the practical problem of accelerating the convergence of the Newton-Lagrange method in the case of convergence to critical multipliers. Since the Newton-Lagrange iteration converges locally at a linear rate, it is natural to conjecture that quasi-Newton methods, based on the so-called secant equation and some minimal variation principle, could converge superlinearly, thus restoring the convergence properties of Newton's method. This strategy can also be applied to accelerate the convergence of algorithms for fixed-point problems. Computational experience is reported illustrating the efficiency of this strategy in solving fixed-point problems with a linear convergence rate.
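
A minimal sketch of the acceleration idea: a linearly convergent fixed-point map g is recast as the root-finding problem g(x) - x = 0 and handed to a secant-type (Broyden) solver, which typically needs far fewer iterations than plain Picard iteration. The contraction g below is an arbitrary illustration, not the Newton-Lagrange setting of the paper.

```python
import numpy as np
from scipy.optimize import broyden1

# A linearly convergent fixed-point map g (arbitrary contraction for illustration)
A = np.array([[0.6, 0.2], [0.1, 0.7]])
b = np.array([1.0, 2.0])
g = lambda x: A @ x + b               # fixed point solves (I - A) x = b

# Plain Picard iteration: linear convergence
x = np.zeros(2)
for picard_iters in range(1, 300):
    x_new = g(x)
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new

# Quasi-Newton (secant) acceleration on F(x) = g(x) - x
x_qn = broyden1(lambda x: g(x) - x, np.zeros(2), f_tol=1e-10)

print("Picard iterations:", picard_iters, "-> x =", x)
print("Broyden solution:           x =", x_qn)
```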

Keywords: algorithmic acceleration, fixed-point problems, nonlinear programming, quasi-Newton method

Procedia PDF Downloads 464
18449 A Dirty Page Migration Method in Process of Memory Migration Based on Pre-copy Technology

Authors: Kang Zijian, Zhang Tingyu, Burra Venkata Durga Kumar

Abstract:

This article investigates the challenges in memory migration during the live migration of virtual machines. We identify three challenges that are likely to arise with pre-copy technology. One of the main ones is migration downtime: reducing the downtime allows the virtual machine to keep working normally. Although pre-copy technology greatly decreases the downtime, the machine still needs to be shut down in order to finish the last round of data transfer. This paper provides an optimization scheme for the problems of pre-copy technology, mainly an optimization of the dirty page migration mechanism. Typical pre-copy technology copies the dirty pages of round n-1 in round n. Our idea is to introduce a double iteration method to solve this problem.
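
The pre-copy trade-off described above can be illustrated with a toy model (all numbers are invented assumptions, not measurements): each round retransmits the pages dirtied during the previous round, and the machine is only stopped for the final, much smaller transfer.

```python
def precopy_simulation(total_pages=100_000, dirty_rate=0.02,
                       bandwidth_pages=20_000, stop_threshold=500, max_rounds=30):
    """Toy pre-copy model: round n re-sends the pages dirtied in round n-1.
    Returns the number of rounds and the pages copied during downtime."""
    to_send = total_pages                       # round 0: copy everything
    for round_no in range(1, max_rounds + 1):
        transfer_time = to_send / bandwidth_pages
        # pages dirtied while this round was being transferred
        dirtied = int(total_pages * dirty_rate * transfer_time)
        if dirtied <= stop_threshold:           # small enough: stop-and-copy
            return round_no, dirtied
        to_send = dirtied                       # next round re-sends these pages
    return max_rounds, to_send

rounds, downtime_pages = precopy_simulation()
print(f"pre-copy rounds: {rounds}, pages copied during downtime: {downtime_pages}")
```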

Keywords: virtual machine, pre-copy technology, memory migration process, downtime, dirty pages migration method

Procedia PDF Downloads 92
18448 Low Complexity Carrier Frequency Offset Estimation for Cooperative Orthogonal Frequency Division Multiplexing Communication Systems without Cyclic Prefix

Authors: Tsui-Tsai Lin

Abstract:

Cooperative orthogonal frequency division multiplexing (OFDM) transmission, which possesses the advantages of better connectivity, expanded coverage, and resistance to frequency-selective fading, has become a powerful solution for the physical layer in wireless communications. However, such a hybrid scheme suffers from the carrier frequency offset (CFO) effects inherited from OFDM-based systems, which lead to significant performance degradation. In addition, inserting a cyclic prefix (CP) at the head of each symbol block to combat inter-symbol interference reduces the spectral efficiency. The design of CFO estimation for cooperative OFDM systems without CP remains an open problem. This motivates us to develop a low-complexity CFO estimator for the cooperative OFDM decode-and-forward (DF) communication system without CP over the multipath fading channel. Specifically, using a block-type pilot, the CFO estimate is first derived according to the least-squares criterion. Reliable performance can be obtained through an exhaustive two-dimensional (2D) search, at the penalty of heavy computational complexity. As a remedy, an alternative solution realized with an iterative approach is proposed for the CFO estimation. In contrast to the 2D-search estimator, the iterative method enjoys substantially reduced implementation complexity without sacrificing estimation performance. Computer simulations are presented to demonstrate the efficacy of the proposed CFO estimation.

Keywords: cooperative transmission, orthogonal frequency division multiplexing (OFDM), carrier frequency offset, iteration

Procedia PDF Downloads 242
18447 Development of Chronic Obstructive Pulmonary Disease (COPD) Proforma (E-ICP) to Improve Guideline Adherence in Emergency Department: Modified Delphi Study

Authors: Hancy Issac, Gerben Keijzers, Ian Yang, Clint Moloney, Jackie Lea, Melissa Taylor

Abstract:

Introduction: Chronic obstructive pulmonary disease (COPD) guideline non-adherence is associated with a reduction in patients' health-related quality of life (HRQoL). Improving guideline adherence has the potential to mitigate fragmented care, thereby sustaining pulmonary function, preventing acute exacerbations, reducing economic health burdens, and enhancing HRQoL. The development of an electronic proforma stemming from expert consensus, including digital guideline resources and direct interdisciplinary referrals, is hypothesised to improve guideline adherence and patient outcomes for emergency department (ED) patients with COPD. Aim: The aim of this study was to develop consensus among ED and respiratory staff on the correct composition of a COPD electronic proforma that aids guideline adherence and management in the ED. Methods: This study adopted a mixed-method design to identify the most important indicators of care in the ED. The study involved three phases: (1) a systematic literature review and qualitative interdisciplinary staff interviews to assess barriers and solutions to guideline adherence, (2) a modified Delphi panel to select interventions for the proforma, and (3) a consensus process through three rounds of scoring using a quantitative survey (ED and respiratory consensus) and qualitative thematic analysis of each indicator. Results: The electronic proforma achieved acceptable to good internal consistency through all iterations from national emergency department and respiratory department interdisciplinary experts. Cronbach’s alpha scores for internal consistency (α) in iteration 1 were: emergency department cohort (EDC) (α = 0.80 [CI = 0.89%]), respiratory department cohort (RDC) (α = 0.95 [CI = 0.98%]). Iteration 2 reported EDC (α = 0.85 [CI = 0.97%]) and RDC (α = 0.86 [CI = 0.97%]). Iteration 3 revealed EDC (α = 0.73 [CI = 0.91%]) and RDC (α = 0.86 [CI = 0.95%]), respectively. Conclusion: Electronic proformas have the potential to facilitate direct referrals from the ED, leading to reduced hospital admissions, shorter hospital stays, holistic care, improved health care and quality of life, and improved interdisciplinary guideline adherence.
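
For reference, the internal-consistency statistic reported in the results is Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); a small sketch of how it would be computed from a matrix of panel scores follows (the ratings below are invented, not the study's Delphi data).

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (raters x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                                # number of items
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Invented example: 5 panellists scoring 4 proforma indicators on a 1-9 scale
ratings = [[7, 8, 6, 7],
           [8, 9, 7, 8],
           [6, 7, 6, 6],
           [9, 9, 8, 9],
           [7, 8, 7, 7]]
print(round(cronbach_alpha(ratings), 2))
```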

Keywords: COPD, electronic proforma, modified Delphi study, interdisciplinary, guideline adherence, COPD-X plan

Procedia PDF Downloads 26
18446 On the System of Split Equilibrium and Fixed Point Problems in Real Hilbert Spaces

Authors: Francis O. Nwawuru, Jeremiah N. Ezeora

Abstract:

In this paper, a new algorithm for solving the system of split equilibrium and fixed point problems in real Hilbert spaces is considered. The equilibrium bifunction involves a finite family of pseudo-monotone mappings, which is an improvement over monotone operators. Moreover, the solution is required to be a common fixed point of a finite family of nonexpansive mappings. The regularization parameters do not depend on Lipschitz constants. Also, the computation of the stepsize, which plays a crucial role in the convergence analysis of the proposed method, does not require prior knowledge of the norm of the involved bounded linear map. Furthermore, to speed up the rate of convergence, an inertial term technique is introduced in the proposed method. Under standard assumptions on the operators and the control sequences, using a modified Halpern iteration method, we establish strong convergence, a desired result in applications. Finally, the proposed scheme is applied to solve some optimization problems. The results obtained improve numerous results announced earlier in this direction.
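
A minimal sketch of the Halpern iteration underlying the scheme, x_{n+1} = a_n u + (1 - a_n) T(x_n) with a_n -> 0, applied to a simple nonexpansive map T (a composition of projections onto a ball and a half-space); this shows only the basic iteration, not the inertial split-equilibrium algorithm of the paper.

```python
import numpy as np

def proj_ball(x, radius=2.0):             # projection onto a closed ball
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

def proj_halfspace(x):                     # projection onto {x : x[0] + x[1] <= 1}
    a, b = np.array([1.0, 1.0]), 1.0
    excess = a @ x - b
    return x if excess <= 0 else x - excess * a / (a @ a)

T = lambda x: proj_halfspace(proj_ball(x))   # nonexpansive (composition of projections)

u = np.array([3.0, 3.0])                     # anchor point of the Halpern iteration
x = u.copy()
for n in range(1, 2000):
    alpha = 1.0 / (n + 1)                    # a_n -> 0 with divergent sum
    x = alpha * u + (1.0 - alpha) * T(x)

print(x)   # approaches the fixed point of T closest to u, here about (0.5, 0.5)
```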

Keywords: equilibrium, Hilbert spaces, fixed point, nonexpansive mapping, extragradient method, regularized equilibrium

Procedia PDF Downloads 16
18445 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink

Authors: Sanjay Rathee, Arti Kashyap

Abstract:

Extraction of useful information from large datasets is one of the most important research problems. Association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. There exist many algorithms to find frequent patterns, but the Apriori algorithm always remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing data, there is a need for a multiple-machine Apriori algorithm. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks with the MapReduce approach for distributed storage and distributed processing of huge datasets using clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their in-built support for distributed computations. Earlier, we proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, partly because of the use of Spark and partly because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel and targets implementing, testing and benchmarking Apriori, Reduced-Apriori and our new algorithm ReducedAll-Apriori on Apache Flink, and comparing them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelined structure allows the next iteration to start as soon as partial results of the earlier iteration are available, so there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink.
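
For readers unfamiliar with the base algorithm, a minimal single-machine Apriori sketch is given below (plain Python, no MapReduce, Spark or Flink parallelism); each pass generates candidates from the previous round's frequent itemsets and rescans the transactions, which is exactly the per-iteration work that the distributed variants parallelize.

```python
from collections import Counter

def apriori(transactions, min_support):
    """Return all frequent itemsets (as frozensets) with their support counts."""
    transactions = [set(t) for t in transactions]
    # pass 1: frequent single items
    counts = Counter(item for t in transactions for item in t)
    current = {frozenset([i]) for i, c in counts.items() if c >= min_support}
    frequent = {s: counts[next(iter(s))] for s in current}
    k = 2
    while current:
        # candidate generation: join frequent (k-1)-itemsets
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        counts = Counter()
        for t in transactions:                    # one scan of the data per iteration
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        current = {c for c in candidates if counts[c] >= min_support}
        frequent.update({c: counts[c] for c in current})
        k += 1
    return frequent

data = [['bread', 'milk'], ['bread', 'butter', 'milk'],
        ['milk', 'butter'], ['bread', 'butter'], ['bread', 'milk', 'butter']]
print(apriori(data, min_support=3))
```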

Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining

Procedia PDF Downloads 258
18444 Model Order Reduction of Continuous LTI Large Descriptor System Using LRCF-ADI and Square Root Balanced Truncation

Authors: Mohammad Sahadet Hossain, Shamsil Arifeen, Mehrab Hossian Likhon

Abstract:

In this paper, we analyze a linear time-invariant (LTI) descriptor system of large dimension. Since such systems are difficult to simulate, compute and store, we attempt to reduce this large system using Low Rank Cholesky Factorized Alternating Directions Implicit (LRCF-ADI) iteration followed by square-root balanced truncation. LRCF-ADI solves the dual Lyapunov equations of the large system and gives low-rank Cholesky factors of the Gramians as the solution. Using these Cholesky factors, we compute the Hankel singular values via singular value decomposition. Then, applying square-root balanced truncation, the reduced system is obtained. Bode plots of the original and lower-order systems are used to show that the magnitude and phase responses are the same for both systems.
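
A small dense sketch of the square-root balanced truncation step, using SciPy's dense Lyapunov solver in place of LRCF-ADI and an ordinary state-space system rather than a descriptor system: factors of the two Gramians are combined through an SVD to obtain the Hankel singular values and the projection matrices.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of x' = Ax + Bu, y = Cx to order r."""
    P = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
    Rf = cholesky(P, lower=True)                    # P = Rf Rf^T
    Lf = cholesky(Q, lower=True)                    # Q = Lf Lf^T
    U, hsv, Vt = svd(Lf.T @ Rf)                     # Hankel singular values
    S = np.diag(hsv[:r] ** -0.5)
    T = Rf @ Vt[:r].T @ S                           # right projection matrix
    W = Lf @ U[:, :r] @ S                           # left projection matrix
    return W.T @ A @ T, W.T @ B, C @ T, hsv

# Small random stable test system (order 8, 2 inputs/outputs), reduced to order 3
rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
A = A - (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)  # shift to make A stable
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=3)
print("Hankel singular values:", np.round(hsv, 5))
```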

Keywords: low-rank Cholesky factor alternating directions implicit iteration, LTI descriptor system, Lyapunov equations, square-root balanced truncation

Procedia PDF Downloads 385
18443 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be amplified hugely during the solution process unless an appropriate regularization method is used. Even when using a regularization method, appropriate regularization parameters have to be determined, which is difficult. Therefore, in the next stage of our work we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Pade iteration, where the number of iteration steps serves as the regularization parameter. We successfully developed a semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 nm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6, the accuracy limit of +/- 0.03 is achieved in all modes. In sum, 70% of all cases stay below +/- 0.03, which is sufficient for climate change studies.
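
The truncated-SVD idea at the core of the algorithm can be sketched independently of the lidar-specific kernels: small singular values, which amplify measurement noise in an ill-posed system, are discarded before inversion, and the truncation level plays the role of the regularization parameter. The Hilbert-matrix test problem below is a generic stand-in, not the PSD retrieval.

```python
import numpy as np
from scipy.linalg import hilbert

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of A x = b keeping the k largest singular values;
    the truncation level k acts as the regularization parameter."""
    U, s, Vt = np.linalg.svd(A)
    return sum((U[:, i] @ b) / s[i] * Vt[i] for i in range(k))

# Classic ill-posed test problem: Hilbert matrix with a slightly noisy right-hand side
rng = np.random.default_rng(1)
n = 12
A = hilbert(n)                        # condition number around 1e16
x_true = np.ones(n)
b = A @ x_true + 1e-8 * rng.standard_normal(n)

x_naive = np.linalg.solve(A, b)       # noise amplified by the tiny singular values
x_tsvd = tsvd_solve(A, b, k=7)        # small singular values discarded
print("naive error:", np.linalg.norm(x_naive - x_true))
print("TSVD  error:", np.linalg.norm(x_tsvd - x_true))
```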

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 320
18442 Some Efficient Higher Order Iterative Schemes for Solving Nonlinear Systems

Authors: Sandeep Singh

Abstract:

In this article, two classes of iterative schemes are proposed for approximating solutions of nonlinear systems of equations, whose orders of convergence are six and eight, respectively. The sixth-order scheme requires the evaluation of two vector functions, two first Fréchet derivatives and three matrix inversions per iteration. This three-step sixth-order method is further extended to an eighth-order method which requires one more step and the evaluation of one extra vector function. Moreover, the computational efficiency is compared with some other recently published methods, and we find that our methods are more efficient than existing numerical methods for medium and large nonlinear systems of equations. Numerical tests are performed to validate the proposed schemes.

Keywords: nonlinear systems, computational complexity, order of convergence, Jarratt-type scheme

Procedia PDF Downloads 107
18441 Non-Convex Multi Objective Economic Dispatch Using Ramp Rate Biogeography Based Optimization

Authors: Susanta Kumar Gachhayat, S. K. Dash

Abstract:

Multi-objective non-convex economic dispatch problems of a thermal power plant are of grave concern for deciding the cost of generation and for reducing emission levels in order to mitigate global warming and the greenhouse effect. This paper deals with ramp rate constraints, used to obtain better inequality constraints, and incorporates valve-point loading in the generation cost of the thermal power plant through ramp rate biogeography based optimization involving mutation and migration. In 50 out of 100 trials, the cost function and the emission objective function were found to outperform other classical methods, such as the lambda iteration method and the quadratic programming method, and many heuristic methods, such as particle swarm optimization, weight-improved particle swarm optimization, constriction factor based particle swarm optimization, moderate random particle swarm optimization, etc. Ramp rate biogeography based optimization proves quite advantageous in solving non-convex multi-objective economic dispatch problems subjected to nonlinear loads that pollute the source, giving rise to third harmonic distortions and other such disturbances.
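
For context, the classical lambda-iteration baseline mentioned above can be sketched as follows (quadratic cost curves without valve-point loading, with invented unit data): the incremental cost lambda is adjusted by bisection until the dispatched generation meets the demand.

```python
import numpy as np

# Illustrative unit data: cost = a*P^2 + b*P + c, with limits [Pmin, Pmax] in MW
a = np.array([0.008, 0.009, 0.007])
b = np.array([7.0, 6.3, 6.8])
Pmin = np.array([10.0, 10.0, 10.0])
Pmax = np.array([125.0, 150.0, 225.0])
demand = 300.0

def dispatch(lmbda):
    """Output of each unit for a given incremental cost lambda (dC/dP = 2aP + b)."""
    P = (lmbda - b) / (2.0 * a)
    return np.clip(P, Pmin, Pmax)

lo, hi = 0.0, 50.0                       # bracket for lambda ($/MWh)
for _ in range(100):                     # lambda iteration by bisection
    lmbda = 0.5 * (lo + hi)
    mismatch = dispatch(lmbda).sum() - demand
    if abs(mismatch) < 1e-6:
        break
    if mismatch < 0:
        lo = lmbda                       # too little generation: raise lambda
    else:
        hi = lmbda

print("lambda =", round(lmbda, 4), "dispatch =", np.round(dispatch(lmbda), 2))
```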

Keywords: economic load dispatch, ELD, biogeography-based optimization, BBO, ramp rate biogeography-based optimization, RRBBO, valve-point loading, VPL

Procedia PDF Downloads 354
18440 A FE-Based Scheme for Computing Wave Interaction with Nonlinear Damage and Generation of Harmonics in Layered Composite Structures

Authors: R. K. Apalowo, D. Chronopoulos

Abstract:

A Finite Element (FE) based scheme is presented for quantifying guided wave interaction with Localised Nonlinear Structural Damage (LNSD) within structures of arbitrary layering and geometric complexity. The through-thickness mode-shape of the structure is obtained through a wave and finite element method. This is applied in a time domain FE simulation in order to generate time harmonic excitation for a specific wave mode. Interaction of the wave with LNSD within the system is computed through an element activation and deactivation iteration. The scheme is validated against experimental measurements and a WFE-FE methodology for calculating wave interaction with damage. Case studies for guided wave interaction with crack and delamination are presented to verify the robustness of the proposed method in classifying and identifying damage.

Keywords: layered structures, nonlinear ultrasound, wave interaction with nonlinear damage, wave finite element, finite element

Procedia PDF Downloads 125
18439 Least Support Orthogonal Matching Pursuit (LS-OMP) Recovery Method for Invisible Watermarking Image

Authors: Israa Sh. Tawfic, Sema Koc Kayhan

Abstract:

In this paper, we first propose the least support orthogonal matching pursuit (LS-OMP) algorithm to improve the performance of the orthogonal matching pursuit (OMP) algorithm. The LS-OMP algorithm adaptively chooses the optimum L (least part of the support) at each iteration. This modification helps to reduce the computational complexity significantly and performs better than the OMP algorithm. Second, we give the procedure for invisible image watermarking in the presence of compressive sampling. The image reconstruction based on a set of watermarked measurements is performed using LS-OMP.
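
A compact sketch of the baseline OMP algorithm that LS-OMP refines (the adaptive choice of the least part of the support per iteration is not reproduced here): at each iteration the atom most correlated with the residual joins the support, and the signal is re-estimated by least squares on that support.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Compressive-sampling style test: 5-sparse signal, 60 measurements of a length-200 signal
rng = np.random.default_rng(2)
n, m, k = 200, 60, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = omp(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```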

Keywords: compressed sensing, orthogonal matching pursuit, restricted isometry property, signal reconstruction, least support orthogonal matching pursuit, watermark

Procedia PDF Downloads 311
18438 Aeroelastic Stability Analysis in Turbomachinery Using Reduced Order Aeroelastic Model Tool

Authors: Chandra Shekhar Prasad, Ludek Pesek Prasad

Abstract:

In present-day aero engines, the fan blades, turboprop propellers, and gas or steam turbine low-pressure blades are getting bigger and lighter and thus become more flexible. Therefore, flutter, forced blade response and vibration-related failure of high aspect ratio blades are of main concern for designers and need to be addressed properly in order to achieve a successful component design. At the preliminary design stage, a large number of design iterations is needed to achieve a flutter-free, safe design. Most of the numerical methods used for aeroelastic analysis are field-based methods such as the finite difference method, the finite element method, the finite volume method, or coupled schemes. These numerical schemes solve the coupled fluid flow and structural equations based on the full Navier-Stokes (NS) equations along with the equations of structural mechanics. Schemes of this type provide very accurate results if modeled properly; however, they are computationally very expensive and need large computing resources along with considerable expertise. Therefore, they are not the first choice for aeroelastic analysis during the preliminary design phase. A reduced order aeroelastic model (ROAM) with acceptable accuracy and fast execution is more in demand at this stage. Similar ROAMs are being used by other researchers for aeroelastic and forced response analysis of turbomachinery. In the present paper, a new medium-fidelity ROAM is successfully developed and implemented in a numerical tool to simulate aeroelastic stability phenomena in turbomachinery as well as flexible wings. In the present work, a hybrid flow solver based on a viscous-inviscid coupled 3D panel method (PM) and a 3D discrete vortex particle method (DVM) is developed, and viscous parameters are estimated using a boundary layer (BL) approach. This method can simulate flow separation and is a good compromise between accuracy and speed compared to CFD. In the second phase of the research work, the flow solver (PM) will be coupled with a reduced-order nonlinear beam element method (BEM) based FEM structural solver (with multibody capabilities) to perform complete aeroelastic simulations of steam turbine bladed disks, propellers, fan blades, aircraft wings, etc. A partitioned coupling approach is used for the fluid-structure interaction (FSI). The numerical results are compared with experimental data for different test cases; for the blade cascade test case, the experimental data are obtained from in-house lab experiments at IT CAS. Furthermore, the results from the new aeroelastic model will be compared with classical CFD-CSD based aeroelastic models. The proposed methodology for the aeroelastic stability analysis of gas or steam turbine blades, propellers or fan blades will provide researchers and engineers with a fast, cost-effective and efficient tool for aeroelastic (classical flutter) analysis of different designs at the preliminary design stage, where large numbers of design iterations are required in a short time frame.

Keywords: aeroelasticity, beam element method (BEM), discrete vortex particle method (DVM), classical flutter, fluid-structure interaction (FSI), panel method, reduce order aeroelastic model (ROAM), turbomachinery, viscous-inviscid coupling

Procedia PDF Downloads 233
18437 An Accurate Method for Phylogeny Tree Reconstruction Based on a Modified Wild Dog Algorithm

Authors: Essam Al Daoud

Abstract:

This study solves a phylogeny problem by using modified wild dog pack optimization. The least squares error is considered as a cost function that needs to be minimized. Therefore, in each iteration, new distance matrices based on the constructed trees are calculated and used to select the alpha dog. To test the suggested algorithm, ten homologous genes are selected and collected from National Center for Biotechnology Information (NCBI) databanks (i.e., 16S, 18S, 28S, Cox 1, ITS1, ITS2, ETS, ATPB, Hsp90, and STN). The data are divided into three categories: 50 taxa, 100 taxa and 500 taxa. The empirical results show that the proposed algorithm is more reliable and accurate than other implemented methods.

Keywords: least square, neighbor joining, phylogenetic tree, wild dog pack

Procedia PDF Downloads 293
18436 An Optimal and Efficient Family of Fourth-Order Methods for Nonlinear Equations

Authors: Parshanth Maroju, Ramandeep Behl, Sandile S. Motsa

Abstract:

In this study, we propose a simple and interesting family of fourth-order multi-point methods without memory for obtaining simple roots. This family requires only three functional evaluations per iteration (viz. two of the functions f(xn) and f(yn), and a third of the first-order derivative f'(xn)). The accuracy and validity of the new schemes are tested on a number of numerical examples by comparing them with existing optimal fourth-order methods available in the literature. They are found to be very useful in high-precision computations. Further, the dynamic study of these methods also supports the theoretical aspects.
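
A representative member of this class is Ostrowski's optimal fourth-order method, which uses exactly the three evaluations f(xn), f(yn) and f'(xn) per iteration; the sketch below is a generic illustration of such a scheme, not necessarily the specific family proposed in the paper.

```python
def ostrowski(f, df, x0, tol=1e-12, max_iter=20):
    """Ostrowski's optimal fourth-order method: three evaluations
    f(x_n), f(y_n), f'(x_n) per iteration."""
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:                               # already at a root
            return x, n
        dfx = df(x)
        y = x - fx / dfx                                # Newton predictor
        fy = f(y)
        x = y - fy * fx / (dfx * (fx - 2.0 * fy))       # fourth-order corrector
    return x, max_iter

f  = lambda x: x**3 + 4.0 * x**2 - 10.0                 # classical test equation
df = lambda x: 3.0 * x**2 + 8.0 * x
root, iters = ostrowski(f, df, x0=1.0)
print(root, "found in", iters, "iterations")            # ~1.3652300134 in a few iterations
```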

Keywords: basins of attraction, nonlinear equations, simple roots, Newton's method

Procedia PDF Downloads 290
18435 Analytical Formulae for the Approach Velocity Head Coefficient

Authors: Abdulrahman Abdulrahman

Abstract:

Critical depth meters, such as a broad-crested weir, a Venturi flume and a combined control flume, are standard devices for measuring flow in open channels. The discharge relation for these devices cannot be solved directly but requires an iterative process to account for the approach velocity head. In this paper, an analytical solution was developed to calculate the discharge in a combined critical-depth meter, namely a hump combined with a lateral contraction in a rectangular channel with subcritical approach flow, including energy losses. Analytical formulae were also derived for the approach velocity head coefficient of different types of critical depth meters. The solution was derived by solving a standard cubic equation, accounting for the energy loss, on the basis of a trigonometric identity. The advantage of this technique is that it avoids the iterative process adopted when measuring flow with these devices. Numerical examples are chosen to demonstrate the proposed solution.
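
The closed-form approach rests on the trigonometric solution of a cubic; as a generic illustration (not the authors' discharge formula), the three real roots of the depressed cubic t^3 + p t + q = 0 in the irreducible case 4p^3 + 27q^2 < 0 are t_k = 2 sqrt(-p/3) cos((1/3) arccos((3q/(2p)) sqrt(-3/p)) - 2 pi k/3), k = 0, 1, 2, which avoids any iteration:

```python
import math

def depressed_cubic_roots(p, q):
    """Three real roots of t^3 + p*t + q = 0 via the trigonometric identity
    (irreducible case: 4p^3 + 27q^2 < 0, so p < 0 and all roots are real)."""
    assert 4 * p**3 + 27 * q**2 < 0, "trigonometric form needs three real roots"
    m = 2.0 * math.sqrt(-p / 3.0)
    theta = math.acos(3.0 * q / (2.0 * p) * math.sqrt(-3.0 / p)) / 3.0
    return [m * math.cos(theta - 2.0 * math.pi * k / 3.0) for k in range(3)]

# Example: t^3 - 7t + 6 = 0 has roots 1, 2, -3
roots = depressed_cubic_roots(p=-7.0, q=6.0)
print(sorted(round(r, 10) for r in roots))     # [-3.0, 1.0, 2.0]
```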

Keywords: broad crested weir, combined control meter, control structures, critical flow, discharge measurement, flow control, hydraulic engineering, hydraulic structures, open channel flow

Procedia PDF Downloads 248
18434 Solution of the Nonrelativistic Radial Wave Equation of Hydrogen Atom Using the Green's Function Approach

Authors: F. U. Rahman, R. Q. Zhang

Abstract:

This work aims to develop a systematic numerical technique which can be easily extended to many-body problems. The Lippmann-Schwinger equation (the integral form of the Schrodinger wave equation) is solved for the nonrelativistic radial wave of the hydrogen atom using an iterative integration scheme. As the unknown wave function appears on both sides of the Lippmann-Schwinger equation, an approximate wave function is used in order to solve the equation. The Green's function is obtained by the method of Laplace transform for the radial wave equation with the potential term excluded. Using the Lippmann-Schwinger equation, the product of the approximate wave function, the Green's function and the potential term is integrated iteratively. Finally, the wave function is normalized and plotted against the standard radial wave for comparison. The resulting wave function converges to the standard wave function as the number of iterations increases. Results are verified for the first fifteen states of the hydrogen atom. The method is efficient and consistent and can be applied to complex systems in the future.

Keywords: Green’s function, hydrogen atom, Lippmann Schwinger equation, radial wave

Procedia PDF Downloads 358
18433 The Mathematics of Fractal Art: Using a Derived Cubic Method and the Julia Programming Language to Make Fractal Zoom Videos

Authors: Darsh N. Patel, Eric Olson

Abstract:

Fractals can be found everywhere, whether it be the shape of a leaf or a system of blood vessels. Fractals are used to help study and understand different physical and mathematical processes; however, their artistic nature is also beautiful to simply explore. This project explores fractals generated by a cubically convergent extension to Newton's method. With this iteration as a starting point, a complex plane spanning from -2 to 2 is created with a color wheel mapped onto it. Next, the polynomial whose roots the fractal will generate from is established. From the Fundamental Theorem of Algebra, it is known that any polynomial has as many roots (counted by multiplicity) as its degree. When generating the fractals, each root will receive its own color. The complex plane can then be colored to indicate the basins of attraction that converge to each root. From a computational point of view, this project’s code identifies which points converge to which roots and then obtains fractal images. A zoom path into the fractal was implemented to easily visualize the self-similar structure. This path was obtained by selecting keyframes at different magnifications through which a path is then interpolated. Using parallel processing, many images were generated and condensed into a video. This project illustrates how practical techniques used for scientific visualization can also have an artistic side.

Keywords: fractals, cubic method, Julia programming language, basin of attraction

Procedia PDF Downloads 227
18432 Improved Acoustic Source Sensing and Localization Based On Robot Locomotion

Authors: V. Ramu Reddy, Parijat Deshpande, Ranjan Dasgupta

Abstract:

This paper presents a methodology for acoustic source sensing and localization in an unknown environment. The developed methodology includes an acoustic-based sensing and localization system, a converging target localization based on recursive direction of arrival (DOA) error minimization, and a regressive obstacle avoidance function. Our method is able to augment existing proven localization techniques and improve their results incrementally by utilizing robot locomotion, and is capable of converging to a position estimate with greater accuracy using fewer measurements. The results show the DOA error decreasing at each iteration, an improvement in the time needed to reach the destination, and the efficiency of this target localization method in gradually converging to the real target position. Initially, the system is tested using a Kinect mounted on a turntable with DOA markings which serve as ground truth, and then our approach is validated using a FireBird VI (FBVI) mobile robot on which a Kinect is used to obtain bearing information.

Keywords: acoustic source localization, acoustic sensing, recursive direction of arrival, robot locomotion

Procedia PDF Downloads 460
18431 Determining Optimal Number of Trees in Random Forests

Authors: Songul Cinaroglu

Abstract:

Background: Random Forest is an efficient, multi-class machine learning method used for classification, regression and other tasks. The method operates by constructing each tree from a different bootstrap sample of the data. Determining the number of trees in a random forest is an open question in the literature on improving the classification performance of random forests. Aim: The aim of this study is to analyze whether there is an optimal number of trees in Random Forests and how the performance of Random Forests differs as the number of trees increases, using sample health data sets in the R programming environment. Method: We analyzed the performance of Random Forests as the number of trees grows, doubling the number of trees at every iteration, using the "randomForest" package in R. To determine the minimum and optimal number of trees, we performed the McNemar test and computed the area under the ROC curve, respectively. Results: At the end of the analysis, it was found that as the number of trees grows, the performance of the forest is not always better than that of forests with fewer trees. In other words, a larger number of trees only increases computational costs without improving performance. Conclusion: Although the general practice when using random forests is to generate a large number of trees to obtain high performance, this study shows that increasing the number of trees does not always improve performance. Future studies can compare different kinds of data sets and different performance measures to test whether Random Forest performance changes as the number of trees increases.
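
A rough Python equivalent of the doubling-the-trees experiment, using scikit-learn's out-of-bag accuracy on a synthetic data set (both the data and the error measure are illustrative substitutes for the study's health data, McNemar test and AUC), might look as follows.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a health data set (illustrative only)
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

n_trees = 32
while n_trees <= 1024:                 # double the number of trees at every iteration
    rf = RandomForestClassifier(n_estimators=n_trees, oob_score=True,
                                random_state=0, n_jobs=-1)
    rf.fit(X, y)
    print(f"trees = {n_trees:4d}   out-of-bag accuracy = {rf.oob_score_:.4f}")
    n_trees *= 2
```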

Keywords: classification methods, decision trees, number of trees, random forest

Procedia PDF Downloads 372
18430 Globally Convergent Sequential Linear Programming for Multi-Material Topology Optimization Using Ordered Solid Isotropic Material with Penalization Interpolation

Authors: Darwin Castillo Huamaní, Francisco A. M. Gomes

Abstract:

The aim of multi-material topology optimization (MTO) is to obtain the optimal topology of structures composed of many materials, according to a given set of constraints and cost criteria. In this work, we seek the optimal distribution of materials in a domain such that the flexibility of the structure is minimized under certain boundary conditions and the action of external forces. In the single-material case, each element of the discretized domain is represented by one of two values of a function: the value is 1 if the element belongs to the structure, or 0 if the element is empty. A common way to avoid the high computational cost of solving integer-variable optimization problems is to adopt the Solid Isotropic Material with Penalization (SIMP) method. This method relies on a continuous power-law interpolation function, whose base variable represents a pseudo-density at each point of the domain. For suitable exponent values, the SIMP method discourages intermediate densities, since values other than 0 or 1 usually do not have a physical meaning for the problem. Several extensions of the SIMP method have been proposed for the multi-material case. The one we explore here is the ordered SIMP method, which has the advantage of not being based on additional variables to represent material selection, so the computational cost is independent of the number of materials considered. Although the number of variables is not increased by this algorithm, the optimization subproblems generated at each iteration cannot be solved by methods that rely on second derivatives, due to the cost of calculating them. To overcome this, we apply a globally convergent version of the sequential linear programming method, which solves a sequence of linear approximations of the optimization problem.

Keywords: global convergence, multi-material design, ordered SIMP, sequential linear programming, topology optimization

Procedia PDF Downloads 279
18429 A Deep Learning Based Method for Faster 3D Structural Topology Optimization

Authors: Arya Prakash Padhi, Anupam Chakrabarti, Rajib Chowdhury

Abstract:

Topology or layout optimization often yields better-performing, economical structures and is very helpful in the conceptual design phase. Traditionally it is carried out with finite element based optimization schemes which, although they give good results, are very time-consuming, especially for 3D structures. Among other alternatives, machine learning, and especially deep learning based methods, have great potential for resolving this computational issue. Here, a 3D convolutional neural network (3D-CNN) based variational autoencoder (VAE) is trained using a dataset generated with the commercially available topology optimization code ABAQUS Tosca, using the solid isotropic material with penalization (SIMP) method for compliance minimization. The encoded data in the latent space are then fed to a 3D generative adversarial network (3D-GAN) to generate the outcome at a size of 64x64x64. The network consists of a 3D volumetric CNN with rectified linear unit (ReLU) activations in the intermediate layers and a sigmoid activation at the end. The proposed network is seen to provide almost optimal results with significantly reduced computational time, as no iteration is involved.

Keywords: 3D generative adversarial network, deep learning, structural topology optimization, variational auto encoder

Procedia PDF Downloads 137
18428 Human Action Retrieval System Using Features Weight Updating Based Relevance Feedback Approach

Authors: Munaf Rashid

Abstract:

For content-based human action retrieval systems, search accuracy is often inferior for the following two reasons: 1) global information pertaining to videos is totally ignored and only low-level motion descriptors are considered as significant features for matching the similarity between query and database videos, and 2) there is a semantic gap between the high-level user concept and the low-level visual features. Hence, in this paper, we propose a method that addresses these two issues, and in doing so this paper contributes in two ways. Firstly, we introduce a method that uses both global and local information in one framework for the action retrieval task. Secondly, to minimize the semantic gap, the user concept is involved by incorporating a feature weight updating (FWU) relevance feedback (RF) approach. We use statistical characteristics to dynamically update the weights of the feature descriptors so that after every RF iteration the feature space is modified accordingly. For testing and validation purposes, two human action recognition datasets have been utilized, namely Weizmann and UCF. Results show that even with a number of visual challenges the proposed approach performs well.

Keywords: relevance feedback (RF), action retrieval, semantic gap, feature descriptor, codebook

Procedia PDF Downloads 437
18427 Characteristics-Based Lq-Control of Cracking Reactor by Integral Reinforcement

Authors: Jana Abu Ahmada, Zaineb Mohamed, Ilyasse Aksikas

Abstract:

A linear quadratic control scheme for systems of hyperbolic first-order partial differential equations (PDEs) is presented. The aim of this research is to control chemical reactions. This is achieved by converting the PDE system to ordinary differential equations (ODEs) using the method of characteristics, and then controlling the reduced system using integral reinforcement learning. The designed controller is applied to a catalytic cracking reactor. Background: Transport-reaction systems cover a large class of chemical and biochemical processes. They are best described by nonlinear PDEs derived from mass and energy balances. The main application considered in this work is the catalytic cracking reactor. Indeed, the cracking reactor is widely used to convert high-boiling, high-molecular-weight hydrocarbon fractions of petroleum crude oils into more valuable gasoline, olefinic gases, and other products. On the other hand, control of PDE systems is an important and rich area of research. One of the main control techniques is feedback control. This type of control utilizes information coming from the system to correct its trajectories and drive it to a desired state. Moreover, feedback control rejects disturbances and reduces the effects of variations in the plant parameters. Linear-quadratic control is a feedback control since the developed optimal input is expressed as feedback on the system state, exponentially stabilizing and driving a linear plant to the steady state while minimizing a cost criterion. The integral reinforcement learning policy iteration technique is a powerful method that solves the linear quadratic regulator problem for continuous-time systems online in real time, using only partial information about the system dynamics (i.e., the drift dynamics A of the system need not be known) and without requiring measurements of the state derivative. This is, in effect, a direct (i.e., no system identification procedure is employed) adaptive control scheme for partially unknown linear systems that converges to the optimal control solution. Contribution: The goal of this research is to develop a characteristics-based optimal controller for a class of hyperbolic PDEs and to apply the developed controller to a catalytic cracking reactor model. In the first part, the development of an algorithm to control a class of hyperbolic PDE systems is investigated. The method of characteristics is employed to convert the PDE system into a system of ODEs, and the control problem is then solved along the characteristic curves. The reinforcement technique is implemented to find the state-feedback matrix. In the second part, the developed algorithm is applied to the important application of a catalytic cracking reactor. The main objective is to use the inlet fraction of gas oil as a manipulated variable to drive the process state towards desired trajectories. The outcome of this challenging research could provide a significant technological innovation for the gas industries, since the catalytic cracking reactor is one of the most important conversion processes in petroleum refineries.
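
The policy-iteration structure behind integral reinforcement learning can be sketched in its model-based (Kleinman) form: each iteration evaluates the current stabilizing gain by solving a Lyapunov equation and then improves the gain, converging to the Riccati solution; the model-free IRL variant replaces the Lyapunov solve with integrals of measured trajectory data. The system matrices below are arbitrary illustrative values, not the cracking reactor model.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Arbitrary stabilizable test system (not the cracking reactor model)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))                  # initial stabilizing gain (A is already stable)
for i in range(15):
    Ak = A - B @ K
    # policy evaluation: Lyapunov equation  Ak^T P + P Ak + Q + K^T R K = 0
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # policy improvement
    K = np.linalg.solve(R, B.T @ P)

P_riccati = solve_continuous_are(A, B, Q, R)   # reference Riccati solution
print("max |P - P_are| =", np.abs(P - P_riccati).max())
```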

Keywords: PDEs, reinforcement iteration, method of characteristics, Riccati equation, cracking reactor

Procedia PDF Downloads 64