Search results for: linear acceleration method
20543 Performance Based Design of Masonry Infilled Reinforced Concrete Frames for Near-Field Earthquakes Using Energy Methods
Authors: Alok Madan, Arshad K. Hashmi
Abstract:
Performance based design (PBD) is an iterative exercise in which a preliminary trial design of the building structure is selected, and if the selected trial design does not conform to the desired performance objective, the trial design is revised. In this context, development of a fundamental approach for performance based seismic design of masonry infilled frames with a minimum number of trials is an important objective. The paper presents a plastic design procedure based on the energy balance concept for PBD of multi-story multi-bay masonry infilled reinforced concrete (R/C) frames subjected to near-field earthquakes. The proposed energy based plastic design procedure was implemented for trial performance based seismic design of representative masonry infilled reinforced concrete frames with various practically relevant distributions of masonry infill panels over the frame elevation. Non-linear dynamic analyses of the trial PBDs of masonry infilled R/C frames were performed under the action of near-field earthquake ground motions. The results of the non-linear dynamic analyses demonstrate that the proposed energy method is effective for performance based design of masonry infilled R/C frames under near-field as well as far-field earthquakes.
Keywords: masonry infilled frame, energy methods, near-fault ground motions, pushover analysis, nonlinear dynamic analysis, seismic demand
Procedia PDF Downloads 292
20542 Understanding the Damage Evolution and the Risk of Failure of Pyrrhotite Containing Concrete Foundations
Authors: Marisa Chrysochoou, James Mahoney, Kay Wille
Abstract:
Pyrrhotite is an iron-sulfide mineral which releases sulfuric acid when exposed to water and oxygen. The presence of this mineral in concrete foundations across Connecticut and Massachusetts in the US is, in some cases, causing premature failure. This has resulted in a devastating crisis for all parties affected by this type of failure, which can take up to 15-25 years before internal damage becomes visible on the surface. This study shares laboratory results aimed to investigate the fundamental mechanisms of the pyrrhotite reaction and to further the understanding of its deterioration kinetics within concrete. This includes the following analyses: total sulfur, wavelength dispersive X-ray fluorescence, expansion, reaction rate combined with ion chromatography, as well as damage evolution using electro-chemical acceleration. This information is coupled to a statistical analysis of over 150 analyzed concrete foundations. Those samples were obtained and processed using a developed and validated sampling method that is minimally invasive to the foundation in use, provides representative samples of the concrete matrix across the entire foundation, and is time and cost-efficient. The processed samples were then analyzed using a developed modular testing method based on total sulfur and wavelength dispersive X-ray fluorescence analysis to quantify the amount of pyrrhotite. As part of the statistical analysis, the results were grouped into the following three categories: no damage observed and no pyrrhotite detected, no damage observed and pyrrhotite detected, and damage observed and pyrrhotite detected. As expected, a strong correlation between the amount of pyrrhotite, the age of the concrete, and damage is observed. Information from the laboratory investigation and from the statistical analysis of field samples will aid in forming a scientific basis to support the decision process towards sustainable financial and administrative solutions by state and local stakeholders.
Keywords: concrete, pyrrhotite, risk of failure, statistical analysis
Procedia PDF Downloads 68
20541 Energy Management System
Authors: S. Periyadharshini, K. Ramkumar, S. Jayalalitha, M. GuruPrasath, R. Manikandan
Abstract:
This paper presents a formulation and solution for an industrial load management and product grade problem. The formulation is created using a linear programming technique that optimizes the electricity cost by scheduling the loads subject to the process, storage, time zone, and production constraints, thereby reducing maximum demand and, with it, the electricity cost. The product grade problem is formulated using an integer linear programming technique with Lingo optimization software, and the results show an overall increase in profit margin. In this paper, a time of use tariff is utilized, and this technique provides significant reductions in peak electricity consumption.
Keywords: cement industries, integer programming, optimal formulation, objective function, constraints
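As an illustration of the kind of linear program described above, here is a minimal sketch using SciPy; the tariffs, demand, and capacity values are hypothetical stand-ins, not the paper's data:

```python
# Minimal sketch of time-of-use load scheduling as a linear program,
# assuming hypothetical tariffs and demands (not the paper's actual data).
import numpy as np
from scipy.optimize import linprog

tariff = np.array([4.0, 6.5, 9.0, 6.5])        # cost per kWh in four time zones
demand_total = 100.0                           # total energy to schedule (kWh)
zone_cap = np.array([40.0, 40.0, 40.0, 40.0])  # per-zone process capacity

# Decision variables x_i = energy drawn in zone i; minimize sum(tariff * x)
c = tariff
A_eq = np.ones((1, 4)); b_eq = [demand_total]  # production constraint
bounds = [(0, cap) for cap in zone_cap]        # zone capacity limits

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x, res.fun)  # optimal schedule and minimum electricity cost
```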
Procedia PDF Downloads 593
20540 Deflagration and Detonation Simulation in Hydrogen-Air Mixtures
Authors: Belyayev P. E., Makeyeva I. R., Mastyuk D. A., Pigasov E. E.
Abstract:
Previously, the phrase “hydrogen safety” was often used in terms of NPP safety. Due to the rise of interest in “green” and, particularly, hydrogen power engineering, the problem of hydrogen safety at industrial facilities has become ever more urgent. In Russia, the industrial production of hydrogen is meant to be performed by placing a chemical engineering plant near an NPP, which supplies the plant with the necessary energy. In this approach, the production of hydrogen involves a wide range of combustible gases, such as methane, carbon monoxide, and hydrogen itself. Considering probable incidents, a sudden combustible gas outburst into open space with further ignition is less dangerous by itself than ignition of the combustible mixture in the presence of many pipelines, reactor vessels, and fitting frames of any kind. Even ignition of 2100 cubic meters of a hydrogen-air mixture in open space gives velocity and pressure that are much less than the velocity and pressure in the Chapman-Jouguet condition and do not exceed 80 m/s and 6 kPa, respectively. However, space blockage, a significant change of channel diameter along the path of flame propagation, and the presence of gas suspension lead to significant deflagration acceleration and to its transition into detonation or quasi-detonation. At the same time, process parameters acquired from experiments at specific experimental facilities are not general, and their application to different facilities can only have a conventional and qualitative character. Yet, conducting deflagration and detonation experimental investigations for each specific industrial facility project in order to determine safe infrastructure unit placement does not seem feasible due to the high cost and hazard, while conducting numerical experiments is significantly cheaper and safer. Hence, the development of a numerical method that allows the description of reacting flows in domains with complex geometry seems promising. The basis for this method is a modification of the Kuropatenko method for calculating shock waves recently developed by the authors, which allows using it in Eulerian coordinates. The current work contains the results of the development process. In addition, a comparison of numerical simulation results and experimental series with flame propagation in shock tubes with orifice plates is presented.
Keywords: CFD, reacting flow, DDT, gas explosion
Procedia PDF Downloads 90
20539 Kriging-Based Global Optimization Method for Bluff Body Drag Reduction
Authors: Bingxi Huang, Yiqing Li, Marek Morzynski, Bernd R. Noack
Abstract:
We propose a Kriging-based global optimization method for active flow control with multiple actuation parameters. This method is designed to converge quickly and avoid getting trapped in local minima. We follow the model-free explorative gradient method (EGM) to alternate between explorative and exploitive steps. This facilitates a convergence similar to a gradient-based method and the parallel exploration of potentially better minima. In contrast to EGM, both kinds of steps are performed with a Kriging surrogate model built from the available data. The explorative step maximizes the expected improvement, i.e., favors regions of large uncertainty. The exploitive step identifies the best location of the cost function from the Kriging surrogate model for a subsequent weight-biased linear-gradient descent search method. To verify the effectiveness and robustness of the improved Kriging-based optimization method, we have examined several comparative test problems of varying dimensions with limited evaluation budgets. The results show that the proposed algorithm significantly outperforms model-free optimization algorithms like the genetic algorithm and the differential evolution algorithm with a quicker convergence for a given budget. We have also performed direct numerical simulations of the fluidic pinball (N. Deng et al. 2020 J. Fluid Mech.), i.e., three circular cylinders in an equilateral-triangular arrangement immersed in an incoming flow at Re=100. The optimal cylinder rotations lead to 44.0% net drag power saving with 85.8% drag reduction and 41.8% actuation power. The optimal results for active flow control based on this configuration achieve the boat-tailing mechanism by employing Coanda forcing and wake stabilization by delaying separation and minimizing the wake region.
Keywords: direct numerical simulations, flow control, kriging, stochastic optimization, wake stabilization
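A minimal sketch of the explorative step (maximizing expected improvement under a Kriging/Gaussian-process surrogate) is given below; the cost function and sampling ranges are placeholders, not the drag-power objective of the paper:

```python
# Sketch of one explorative step: fit a Kriging (Gaussian process) surrogate
# to evaluated points and pick the candidate maximizing expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def cost(x):                      # placeholder for the drag-power objective
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(10, 2))      # evaluated actuation parameters
y = cost(X)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

def expected_improvement(Xc, y_best):
    mu, sigma = gp.predict(Xc, return_std=True)
    z = (y_best - mu) / np.maximum(sigma, 1e-12)
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

Xc = rng.uniform(-1, 1, size=(2000, 2))   # candidate points
x_next = Xc[np.argmax(expected_improvement(Xc, y.min()))]
print(x_next)                             # next explorative evaluation
```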
Procedia PDF Downloads 106
20538 Seismic Response of Structure Using a Three Degree of Freedom Shake Table
Authors: Ketan N. Bajad, Manisha V. Waghmare
Abstract:
Earthquakes are the biggest threat to civil engineering structures, as every year they cost billions of dollars and thousands of lives around the world. There are various experimental techniques, such as pseudo-dynamic tests (a nonlinear structural dynamic technique), real time pseudo-dynamic tests, and the shaking table test method, that can be employed to verify the seismic performance of structures. A shake table is a device that is used for shaking structural models or building components which are mounted on it. It is a device that simulates a seismic event using existing seismic data, closely reproducing earthquake inputs. This paper deals with the use of the shaking table test method to check the response of a structure subjected to an earthquake. The various types of shake table are the vertical shake table, horizontal shake table, servo hydraulic shake table, and servo electric shake table. The goal of this experiment is to perform seismic analysis of a civil engineering structure with the help of a 3 degree of freedom (i.e., in the X, Y, and Z directions) shake table. A three (3) DOF shaking table is a useful experimental apparatus, as it imitates a real time desired acceleration vibration signal for evaluating and assessing the seismic performance of a structure. This study proceeds with the design and erection of a 3 DOF shake table by a trial and error method. The table is designed to have a capacity of up to 981 newtons. Further, to study the seismic response of a steel industrial building, a proportionately scaled down model is fabricated and tested on the shake table. An accelerometer is mounted on the model, which is used for recording the data. The experimental results obtained are further validated with the results obtained from software. It is found that the model can be used to determine how the structure behaves in response to an applied earthquake motion, but the model cannot be used for direct numerical conclusions (such as stiffness, deflection, etc.) as many uncertainties are involved in scaling a small-scale model. The model shows modal forms and gives rough deflection values. The experimental results demonstrate the shake table as the most effective and the best of all methods available for seismic assessment of a structure.
Keywords: accelerometer, three degree of freedom shake table, seismic analysis, steel industrial shed
Procedia PDF Downloads 140
20537 Parallel Computation of the Covariance-Matrix
Authors: Claude Tadonki
Abstract:
We address the issues related to the computation of the covariance matrix. This matrix is likely to be ill-conditioned following its canonical expression, and consequently raises serious numerical issues. The underlying linear system, which therefore should be solved by means of iterative approaches, becomes computationally challenging. A huge number of iterations is expected in order to reach an acceptable level of convergence, necessary to meet the required accuracy of the computation. In addition, this linear system needs to be solved at each iteration following the general form of the covariance matrix. Putting it all together, it follows that we need to compute the associated matrix-vector product as fast as possible. This is our purpose in this work, where we consider and discuss skillful formulations of the problem, then propose a parallel implementation of the matrix-vector product involved. Numerical and performance oriented discussions are provided based on experimental evaluations.
Keywords: covariance-matrix, multicore, numerical computing, parallel computing
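One standard formulation of the matrix-vector product in question avoids forming the covariance matrix at all; a minimal single-threaded sketch (the parallelization itself is not shown) follows:

```python
# Sketch of the matrix-vector product C @ v without forming the covariance
# matrix C = X_c^T X_c / (n - 1) explicitly; X_c is the centered data matrix.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 300))      # n samples x p variables
v = rng.standard_normal(300)

def cov_matvec(X, v):
    Xc = X - X.mean(axis=0)               # center once in practice, not per call
    return Xc.T @ (Xc @ v) / (X.shape[0] - 1)   # two thin products, O(n*p)

w = cov_matvec(X, v)
assert np.allclose(w, np.cov(X, rowvar=False) @ v)   # matches explicit C @ v
```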
Procedia PDF Downloads 312
20536 Thermal Instability in Solid under Irradiation
Authors: P. Selyshchev
Abstract:
Construction materials for nuclear facilities are operated under extreme thermal and radiation conditions. These include, first of all, nuclear fuel, fuel assemblies, and the reactor vessel. This places high demands on the control of their state, the stability of their state, and their operating conditions. An irradiated material is a typical example of an open non-equilibrium system with nonlinear feedbacks between its elements. Fluxes of energy, matter, and entropy maintain states which are far away from thermal equilibrium. The links that arise under irradiation are inherently nonlinear. They form the mechanisms of feedback that can lead to instability. Due to this instability, the temperature of the sample, the heat transfer, and the defect density can exceed the steady-state values several times over. This can lead to a change of typical operation and to an accident. Therefore, it is necessary to take the thermal instability into account to avoid an emergency situation. The point is that non-thermal energy can be accumulated in materials because irradiation produces defects (first of all, vacancies and interstitial atoms), which are metastable. The stored energy is of the order of the defect formation energy. Thus, annealing of the defects is accompanied by the release of non-thermal stored energy into thermal energy, and the temperature of the material grows. The increase of temperature results in acceleration of defect annealing. The density of the defects drops, and the temperature grows more and more quickly. A positive feedback is formed, and self-reinforcing annealing of radiation defects develops. To describe these phenomena, a theoretical approach to thermal instability is developed via the formalism of complex systems. We consider a system of nonlinear differential equations for the different components of the microstructure and the temperature. A qualitative analysis of this non-linear dynamical system is carried out. Conditions for the development of instability have been obtained, and points of bifurcation have been found. A convenient way to represent the obtained results is a set of phase portraits. It has been shown that different regimes of material state under irradiation can develop. Thus, degradation of an irradiated material can be limited by choosing an appropriate kind of evolution of the material under irradiation.
Keywords: irradiation, material, non-equilibrium state, nonlinear feed-back, thermal instability
Procedia PDF Downloads 268
20535 Second Order Optimality Conditions in Nonsmooth Analysis on Riemannian Manifolds
Authors: Seyedehsomayeh Hosseini
Abstract:
Much attention has been paid over centuries to understanding and solving the problem of minimization of functions. Compared to linear programming and nonlinear unconstrained optimization problems, nonlinear constrained optimization problems are much more difficult. Since the procedure of finding an optimizer is a search based on the local information of the constraints and the objective function, it is very important to develop techniques using geometric properties of the constraints and the objective function. In fact, differential geometry provides a powerful tool to characterize and analyze these geometric properties. Thus, there is clearly a link between the techniques of optimization on manifolds and standard constrained optimization approaches. Furthermore, there are manifolds that are not defined as constrained sets in R^n; an important example is the Grassmann manifold. Hence, to solve optimization problems on these spaces, intrinsic methods are used. In a nondifferentiable problem, the gradient information of the objective function generally cannot be used to determine the direction in which the function is decreasing. Therefore, techniques of nonsmooth analysis are needed to deal with such a problem. As a manifold, in general, does not have a linear structure, the usual techniques, which are often used in nonsmooth analysis on linear spaces, cannot be applied, and new techniques need to be developed. This paper presents necessary and sufficient conditions for a strict local minimum of extended real-valued, nonsmooth functions defined on Riemannian manifolds.
Keywords: Riemannian manifolds, nonsmooth optimization, lower semicontinuous functions, subdifferential
Procedia PDF Downloads 361
20534 Integral Image-Based Differential Filters
Authors: Kohei Inoue, Kenji Hara, Kiichi Urahama
Abstract:
We describe a relationship between integral images and differential images. First, we derive a simple difference filter from the conventional integral image. In the derivation, we show that an integral image and the corresponding differential image are related to each other by simultaneous linear equations, where the numbers of unknowns and equations are the same; therefore, we can execute the integration and differentiation by solving the simultaneous equations. We applied the relationship to an image fusion problem and experimentally verified the effectiveness of the proposed method.
Keywords: integral images, differential images, differential filters, image fusion
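The basic inverse pair behind this relationship can be demonstrated in a few lines; this sketch shows the standard construction, not the paper's specific fusion method:

```python
# Sketch of the inverse pair: integration by cumulative sums and
# differentiation of the integral image by a simple difference filter.
import numpy as np

img = np.arange(12, dtype=float).reshape(3, 4)

# Integral image: S[i, j] = sum of img[:i+1, :j+1]
S = img.cumsum(axis=0).cumsum(axis=1)

# Difference filter recovers img from S (discrete mixed derivative):
# img[i, j] = S[i, j] - S[i-1, j] - S[i, j-1] + S[i-1, j-1]
Sp = np.pad(S, ((1, 0), (1, 0)))            # zero row/column for the border
recovered = Sp[1:, 1:] - Sp[:-1, 1:] - Sp[1:, :-1] + Sp[:-1, :-1]

assert np.allclose(recovered, img)
```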
Procedia PDF Downloads 506
20533 Orthogonal Regression for Nonparametric Estimation of Errors-In-Variables Models
Authors: Anastasiia Yu. Timofeeva
Abstract:
Two new algorithms for nonparametric estimation of errors-in-variables models are proposed. The first algorithm is based on a penalized regression spline. The spline is represented as a piecewise-linear function, and for each linear portion an orthogonal regression is estimated. This algorithm is iterative. The second algorithm involves locally weighted regression estimation. When the independent variable is measured with error, such estimation is a complex nonlinear optimization problem. The simulation results have shown the advantage of the second algorithm under the assumption that the true smoothing parameter values are known. Nevertheless, the use of some indexes of fit for smoothing parameter selection gives similar results and has an oversmoothing effect.
Keywords: grade point average, orthogonal regression, penalized regression spline, locally weighted regression
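For one linear portion, the orthogonal regression step amounts to total least squares, which has a closed form via the SVD; a minimal sketch on synthetic data (not the paper's algorithm as a whole) is:

```python
# Sketch of orthogonal (total least squares) regression for one linear
# portion: the fitted line minimizes perpendicular distances, given by the
# smallest-singular-vector direction of the centered data.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 50) + 0.05 * rng.standard_normal(50)  # errors in x
y = 2.0 * np.linspace(0, 1, 50) + 1.0 + 0.05 * rng.standard_normal(50)

Z = np.column_stack([x, y])
Zc = Z - Z.mean(axis=0)
_, _, Vt = np.linalg.svd(Zc, full_matrices=False)
nx, ny = Vt[-1]                       # normal vector of the fitted line
slope = -nx / ny
intercept = Z[:, 1].mean() - slope * Z[:, 0].mean()
print(slope, intercept)               # close to the true values 2.0 and 1.0
```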
Procedia PDF Downloads 416
20532 Preparation of Pegylated Interferon Alpha-2b with High Antiviral Activity Using Linear 20 KDa Polyethylene Glycol Derivative
Authors: Ehab El-Dabaa, Omnia Ali, Mohamed Abd El-Hady, Ahmed Osman
Abstract:
Recombinant human interferon alpha 2 (rhIFN-α2) is FDA approved for the treatment of some viral and malignant diseases. Approved pegylated rhIFN-α2 drugs have highly improved pharmacokinetics, pharmacodynamics, and therapeutic efficiency compared to the native protein. In this work, we studied the pegylation of purified, properly refolded rhIFN-α2b using linear 20 kDa PEG-NHS (polyethylene glycol N-hydroxysuccinimidyl ester) to prepare pegylated rhIFN-α2b with high stability and activity. The effect of different parameters, such as the final rhIFN-α2b concentration, pH, rhIFN-α2b/PEG molar ratio, and reaction time, on the efficiency of pegylation (a high percentage of monopegylated rhIFN-α2b) has been studied in small scale (100 µl) pegylation reaction trials. Study of the percentages of the different components of these reactions (mono-, di-, and polypegylated rhIFN-α2b, and unpegylated rhIFN-α2b) indicated that 2 h is the optimum time to complete the reaction. The pegylation efficiency increased at pH 8 (57.9%) when the protein concentration was reduced to 1 mg/ml and the rhIFN-α2b/PEG ratio was reduced to 1:2. Using a larger scale pegylation reaction (65% pegylation efficiency), an ion exchange chromatography method was optimized to prepare and purify the monopegylated rhIFN-α2b with high purity (96%). The prepared monopegylated rhIFN-α2b had an apparent molecular weight of approximately 65 kDa and high in vitro antiviral activity (2.1x10⁷ ± 0.8x10⁷ IU/mg). Although it retained approximately 8.4% of the antiviral activity of the unpegylated rhIFN-α2b, its activity is high compared to other pegylated rhIFN-α2 preparations developed using a similar approach or higher molecular weight branched PEG.
Keywords: antiviral activity, rhIFN-α2b, pegylation, pegylation efficiency
Procedia PDF Downloads 177
20531 Refined Procedures for Second Order Asymptotic Theory
Authors: Gubhinder Kundhi, Paul Rilstone
Abstract:
Refined procedures for higher-order asymptotic theory for non-linear models are developed. These include a new method for deriving stochastic expansions of arbitrary order, new methods for evaluating the moments of polynomials of sample averages, and a new method for deriving the approximate moments of the stochastic expansions; an application of these techniques to obtaining improved inferences for the weak instruments problem is considered. It is well established that Instrumental Variable (IV) estimators in the presence of weak instruments can be poorly behaved and, in particular, quite biased in finite samples. In our application, finite sample approximations to the distributions of these estimators are obtained using Edgeworth and saddlepoint expansions. Departures from normality of the distributions of these estimators are analyzed using higher order analytical corrections in these expansions. In a Monte-Carlo experiment, the performance of these expansions is compared to the first order approximation and other methods commonly used in finite samples, such as the bootstrap.
Keywords: edgeworth expansions, higher order asymptotics, saddlepoint expansions, weak instruments
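For reference, a minimal first-order Edgeworth correction to the normal approximation of a standardized sample mean, of which the expansions used above are higher-order analogues (a textbook form, not taken from the paper):

```latex
% First-order Edgeworth expansion for the standardized mean of n i.i.d.
% observations with third standardized cumulant \lambda_3:
P\!\left(\frac{\sqrt{n}\,(\bar{X}_n-\mu)}{\sigma}\le x\right)
  = \Phi(x) - \phi(x)\,\frac{\lambda_3\,(x^{2}-1)}{6\sqrt{n}} + O\!\left(n^{-1}\right)
```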
Procedia PDF Downloads 277
20530 Comprehensive Feature Extraction for Optimized Condition Assessment of Fuel Pumps
Authors: Ugochukwu Ejike Akpudo, Jank-Wook Hur
Abstract:
The increasing demand for improved productivity, maintainability, and reliability has prompted rapidly increasing research on the emerging condition-based maintenance concept: prognostics and health management (PHM). Varieties of fuel pumps serve critical functions in several hydraulic systems; hence, their failure can have daunting effects on productivity, safety, etc. The need for condition monitoring and assessment of these pumps cannot be overemphasized, and this has led to an upsurge in research on standard feature extraction techniques for optimized condition assessment of fuel pumps. By extracting time-based, frequency-based, and the more robust time-frequency based features from these vibrational signals, a more comprehensive feature assessment (and selection) can be achieved for a more accurate and reliable condition assessment of these pumps. With the aid of emerging classification and regression algorithms like locally linear embedding (LLE), we propose a method for comprehensive condition assessment of electromagnetic fuel pumps (EMFPs). Results show that LLE as a comprehensive feature extraction technique yields better feature fusion/dimensionality reduction results for condition assessment of EMFPs than the use of single features. Also, unlike other feature fusion techniques, its capabilities as a fault classification technique were explored, and the results show an acceptable accuracy level using standard performance metrics for evaluation.
Keywords: electromagnetic fuel pumps, comprehensive feature extraction, condition assessment, locally linear embedding, feature fusion
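A minimal sketch of the LLE-based fusion step follows; the feature matrix is synthetic, standing in for the vibration features described above:

```python
# Sketch of feature fusion with locally linear embedding: many vibration
# features per signal window are reduced to a low-dimensional health indicator.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
features = rng.standard_normal((200, 24))   # 200 windows x 24 extracted features

X = StandardScaler().fit_transform(features)
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
fused = lle.fit_transform(X)                # 2-D fused condition indicators
print(fused.shape)                          # (200, 2)
```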
Procedia PDF Downloads 117
20529 Numerical Multi-Scale Modeling of Rubber Friction on Rough Pavements Using Finite Element Method
Authors: Ashkan Nazari, Saied Taheri
Abstract:
Knowledge of tire-pavement interaction plays a crucial role in designing safer and more reliable tires. Characterizing the tire-pavement frictional interaction leads to a better understanding of vehicle performance in braking and acceleration. In this work, we devise a multi-scale simulation approach to incorporate the effect of pavement surface asperities at different length-scales. We construct two- and three-dimensional Finite Element (FE) models to simulate the interaction between a rubber block and a rough pavement surface with asperities at different scales. To achieve this, the road profile is scanned via a laser profilometer, and the obtained asperities are implemented in an FE software (ABAQUS) at micro and macro length-scales. The hysteresis friction, which is due to the dissipative nature of rubber, is the main component of the friction force and is therefore the subject of study in this work. Using different scales will not only assist in characterizing the pavement asperities in sufficient detail but is also highly effective in preventing extreme local deformations and stress gradients, which result in divergence in FE simulations. The simulation results will be validated with experimental results as well as the results reported in the literature.
Keywords: friction, finite element, multi-scale modeling, rubber
Procedia PDF Downloads 137
20528 A Hybrid Classical-Quantum Algorithm for Boundary Integral Equations of Scattering Theory
Authors: Damir Latypov
Abstract:
A hybrid classical-quantum algorithm to solve boundary integral equations (BIE) arising in problems of electromagnetic and acoustic scattering is proposed. The quantum speed-up is due to a Quantum Linear System Algorithm (QLSA). The original QLSA of Harrow et al. provides an exponential speed-up over the best-known classical algorithms, but only in the case of sparse systems. Due to the non-local nature of integral operators, matrices arising from the discretization of BIEs are, however, dense. A QLSA for dense matrices was introduced in 2017. Its runtime as a function of the system's size N is bounded by O(√N polylog(N)). The runtime of the best-known classical algorithm for an arbitrary dense matrix scales as O(N^2.373). Instead of exponential, as in the case of sparse matrices, here we have only a polynomial speed-up. Nevertheless, the sufficiently high power of this polynomial, ~4.7, should make the QLSA an appealing alternative. Unfortunately for the QLSA, the asymptotic separability of the Green's function leads to high compressibility of the BIE matrices. Classical fast algorithms such as the Multilevel Fast Multipole Method (MLFMM) take advantage of this fact and reduce the runtime to O(N log(N)), i.e., the QLSA is only quadratically faster than the MLFMM. To be truly impactful for computational electromagnetics and acoustics engineers, the QLSA must provide a more substantial advantage than that. We propose a computational scheme which combines elements of the classical fast algorithms with the QLSA to achieve the required performance.
Keywords: quantum linear system algorithm, boundary integral equations, dense matrices, electromagnetic scattering theory
Procedia PDF Downloads 155
20527 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game
Authors: Steven W. Carruthers
Abstract:
The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated “good” model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and posttests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences from post-test scores on test Form Q and Form R by full-factorial two-way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%. Mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but the resultant skewness and kurtosis worsened compared to the raw score parameters. Form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post-test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
Keywords: effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating
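The linear equating step (a mean-slope adjustment) is simple enough to sketch directly; the score vectors below are synthetic stand-ins for the study's raw scores:

```python
# Sketch of linear equating: scores on the new form R are mapped onto the
# reference form Q scale by matching means and standard deviations.
import numpy as np

rng = np.random.default_rng(4)
q = rng.normal(70, 10, 134)        # stand-in Form Q scores
r = rng.normal(65, 12, 134)        # stand-in Form R scores

slope = q.std(ddof=1) / r.std(ddof=1)
intercept = q.mean() - slope * r.mean()

def equate_linear(x_r):
    """Map a Form R raw score onto the Form Q scale."""
    return slope * x_r + intercept

print(equate_linear(65.0))         # a Form R mean score lands near Q's mean
```

The IRT mean-sigma procedure applies the same mean-slope idea, but to anchor-item difficulty parameters rather than raw scores.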
Procedia PDF Downloads 192
20526 Application of Artificial Neural Network for Prediction of Retention Times of Some Secoestrane Derivatives
Authors: Nataša Kalajdžija, Strahinja Kovačević, Davor Lončar, Sanja Podunavac Kuzmanović, Lidija Jevrić
Abstract:
In order to investigate the relationship between retention and structure, a quantitative structure-retention relationship (QSRR) study was applied for the prediction of the retention times of a set of 23 secoestrane derivatives in reversed-phase thin-layer chromatography. After the calculation of molecular descriptors, a suitable set of descriptors was selected by using stepwise multiple linear regression. An Artificial Neural Network (ANN) method was employed to model the nonlinear structure-retention relationships. The ANN technique resulted in a 5-6-1 ANN model with a correlation coefficient of 0.98. We found that the following descriptors have a significant effect on the retention times: critical pressure, total energy, protease inhibition, distribution coefficient (logD), and the lipophilicity parameter (miLogP). The prediction results are in very good agreement with the experimental ones. This approach provides a new and effective method for predicting the chromatographic retention index for the secoestrane derivatives investigated.
Keywords: lipophilicity, QSRR, RP TLC retention, secoestranes
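A minimal sketch of such a QSRR workflow (five selected descriptors feeding a 5-6-1 network) is shown below; the descriptor values and weights are synthetic, not the paper's data:

```python
# Sketch of the QSRR workflow: selected molecular descriptors -> ANN ->
# predicted retention time. Descriptor values here are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = rng.standard_normal((23, 5))    # 23 derivatives x 5 selected descriptors
t_R = X @ np.array([0.8, -0.5, 0.3, 0.6, -0.2]) + 0.1 * rng.standard_normal(23)

Xs = StandardScaler().fit_transform(X)
ann = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
ann.fit(Xs, t_R)                    # 5 inputs, 6 hidden units, 1 output: 5-6-1
print(np.corrcoef(ann.predict(Xs), t_R)[0, 1])   # training correlation
```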
Procedia PDF Downloads 457
20525 Transitional Separation Bubble over a Rounded Backward Facing Step Due to a Temporally Applied Very High Adverse Pressure Gradient Followed by a Slow Adverse Pressure Gradient Applied at Inlet of the Profile
Authors: Saikat Datta
Abstract:
Incompressible laminar time-varying flow is investigated over a rounded backward-facing step for a triangular piston motion at the inlet of a straight channel with very high acceleration, followed by a slow deceleration, experimentally and through numerical simulation. The backward-facing step is an important test-case as it embodies important flow characteristics such as separation point, reattachment length, and recirculation of flow. A sliding piston imparts two successive triangular velocities at the inlet: constant acceleration from rest, 0≤t≤t0, and constant deceleration to rest, t0≤t≤2t0.
20524 Elvis Improved Method for Solving Simultaneous Equations in Two Variables with Some Applications
Authors: Elvis Adam Alhassan, Kaiyu Tian, Akos Konadu, Ernest Zamanah, Michael Jackson Adjabui, Ibrahim Justice Musah, Esther Agyeiwaa Owusu, Emmanuel K. A. Agyeman
Abstract:
In this paper, how to solve simultaneous equations using the Elvis improved method is shown. The Elvis improved method says: make one variable in the first equation the subject; make the same variable in the second equation the subject; equate the results and simplify to obtain the value of the unknown variable; put the value of the variable found into one equation from the first or second steps and simplify for the remaining unknown variable. The difference between our Elvis improved method and the substitution method is that with the Elvis improved method, the same variable is made the subject in both equations and the two resulting equations are equated, unlike the substitution method, where one variable is made the subject of only one equation and substituted into the other equation. After describing the Elvis improved method, findings from 100 secondary students and the views of 5 secondary tutors are presented to demonstrate the effectiveness of the method. The study's purpose is illustrated by hypothetical examples.
Keywords: simultaneous equations, substitution method, elimination method, graphical method, Elvis improved method
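The four steps of the method translate directly into symbolic computation; here is a sketch with SymPy on a hypothetical example system (the coefficients are illustrative, not from the paper):

```python
# Sketch of the Elvis improved method with SymPy: make y the subject in both
# equations, equate the two results, solve for x, then back-substitute.
import sympy as sp

x, y = sp.symbols("x y")
eq1 = sp.Eq(2 * x + y, 7)            # example system (hypothetical coefficients)
eq2 = sp.Eq(x - y, -1)

y_from_eq1 = sp.solve(eq1, y)[0]     # y = 7 - 2x
y_from_eq2 = sp.solve(eq2, y)[0]     # y = x + 1
x_val = sp.solve(sp.Eq(y_from_eq1, y_from_eq2), x)[0]   # equate and simplify
y_val = y_from_eq1.subs(x, x_val)    # back-substitute into either expression

print(x_val, y_val)                  # x = 2, y = 3
```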
Procedia PDF Downloads 138
20523 Prediction Modeling of Compression Properties of a Knitted Sportswear Fabric Using Response Surface Method
Authors: Jawairia Umar, Tanveer Hussain, Zulfiqar Ali, Muhammad Maqsood
Abstract:
Different knitted structures and knitting parameters play a vital role in the stretch and recovery management of compression sportswear, in addition to the materials used to generate this stretch and recovery behavior of the fabric. The present work was planned to predict different performance indicators of a compression sportswear fabric from some ground parameters, i.e., the base yarn stitch length (polyester as base yarn and spandex as plating yarn are involved in making a compression fabric) and the linear density of the spandex, which is a key material of any sportswear fabric. The prediction models were generated by the response surface method for performance indicators such as stretch and recovery percentage, compression generated by the garment on the body, total elongation on application of a high power force, and load generated at a certain percentage extension in the fabric. Certain physical properties of the fabric were also modeled using these two parameters.
Keywords: compression, sportswear, stretch and recovery, statistical model, kikuhime
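A response-surface model in two factors is typically a full second-order polynomial fit; a minimal sketch follows, with assumed parameter ranges and a synthetic response standing in for the measured fabric data:

```python
# Sketch of a second-order response surface in the two ground parameters
# (stitch length x1, spandex linear density x2); data values are synthetic.
import numpy as np

rng = np.random.default_rng(6)
x1 = rng.uniform(2.6, 3.0, 20)        # stitch length (mm), assumed range
x2 = rng.uniform(20, 40, 20)          # spandex linear density (denier), assumed
y = 80 - 5 * x1 + 0.4 * x2 + rng.normal(0, 0.5, 20)   # e.g., stretch (%)

# Full quadratic model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)                           # fitted response-surface coefficients
```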
Procedia PDF Downloads 379
20522 Numerical and Experimental Analysis of Stiffened Aluminum Panels under Compression
Authors: Ismail Cengiz, Faruk Elaldi
Abstract:
Within the scope of the study presented in this paper, the load carrying capacity and buckling behavior of a stiffened aluminum panel designed by adopting the current ‘buckle-resistant’ design application and the ‘post-buckling’ design approach were investigated experimentally and numerically. The test specimen, stabilized by Z-type stiffeners and manufactured from aluminum 2024 T3 Clad material, was tested under compression load. Buckling behavior was observed by means of 3-dimensional digital image correlation (DIC) and strain gauge pairs. The experimental study was followed by developing an efficient and reliable finite element model whose ability to predict the behavior of the stiffened panel used for the compression test was verified by comparing experimental and numerical results in terms of load-shortening curves, strain-load curves, and buckling mode shapes. While the finite element model was being constructed, non-linear behaviors associated with material and geometry were considered. Finally, the applicability of the aluminum stiffened panel in airframe design against composite structures was evaluated through the concept of ‘structural efficiency’. This study reveals that a considerable amount of weight saving could be gained if the concept of ‘post-buckling design’ is preferred to the conventionally used ‘buckle-resistant design’ concept in the aircraft industry, without sacrificing any structural integrity under the load spectrum.
Keywords: post-buckling, stiffened panel, non-linear finite element method, aluminum, structural efficiency
Procedia PDF Downloads 148
20521 A Comparative Study on Behavior Among Different Types of Shear Connectors using Finite Element Analysis
Authors: Mohd Tahseen Islam Talukder, Sheikh Adnan Enam, Latifa Akter Lithi, Soebur Rahman
Abstract:
Composite structures have made significant advances in construction applications during the last few decades. Composite structures are composed of structural steel shapes and reinforced concrete combined with shear connectors, and they benefit from each material's unique properties. Significant research has been conducted on the behavior and shear capacity of different types of connectors. Moreover, the AISC 360-16 “Specification for Structural Steel Buildings” provides a formula for the shear capacity of channel shear connectors. This research compares the behavior of C-type and L-type shear connectors using finite element analysis. Experimental results from the published literature are used to validate the finite element models. The 3-D Finite Element Model (FEM) was built using ABAQUS 2017 to investigate non-linear capabilities and the ultimate load-carrying potential of the connectors using push-out tests. The changes in connector dimensions were analyzed using this non-linear model in parametric investigations. The parametric study shows that by increasing the length of the shear connector by 10 mm, its shear strength increases by 21%. Shear capacity increased by 13% as the height was increased by 10 mm. The thickness of the specimen was raised by 1 mm, resulting in a 2% increase in shear capacity. However, the shear capacity of channel connectors was reduced by 21% due to an increase of thickness by 2 mm.
Keywords: finite element method, channel shear connector, angle shear connector, ABAQUS, composite structure, shear connector, parametric study, ultimate shear capacity, push-out test
Procedia PDF Downloads 125
20520 Inventory Management System of Seasonal Raw Materials of Feeds at San Jose Batangas through Integer Linear Programming and VBA
Authors: Glenda Marie D. Balitaan
Abstract:
The branch of business management that deals with inventory planning and control is known as inventory management. It comprises keeping track of supply levels and forecasting demand, as well as scheduling when and how to order. Keeping excess inventory results in a loss of money, takes up physical space, and raises the risk of damage, spoilage, and loss. On the other hand, too little inventory frequently causes operations to be disrupted and raises the possibility of low customer satisfaction, both of which can be detrimental to a company's reputation. The United Victorious Feed mill Corporation's present inventory management practices were assessed in terms of inventory level, warehouse allocation, ordering frequency, shelf life, and production requirement. To help the company achieve its optimal level of inventory, a mathematical model was created using Integer Linear Programming. Because the raw materials are seasonal, the objective function was to minimize the cost of purchasing US soya and yellow corn. Warehouse space, annual production requirements, and shelf life were all considered. To ensure that the user only uses one application to record all relevant information, like production output and delivery, the researcher built a Visual Basic system. Additionally, the system allows management to change the model's parameters.
Keywords: inventory management, integer linear programming, inventory management system, feed mill
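A minimal sketch of this kind of integer linear program is shown below using SciPy (version 1.9 or later for milp); the costs, space factors, and requirements are assumed values, not the company's data:

```python
# Sketch of the seasonal purchasing decision as an integer linear program:
# how many lots of US soya and yellow corn to buy, minimizing cost under
# warehouse-space and requirement constraints (all values are assumed).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

cost = np.array([520.0, 310.0])     # cost per lot: [soya, corn]
space = np.array([2.0, 1.5])        # warehouse space per lot
requirement = np.array([30, 50])    # minimum lots needed this season
warehouse_capacity = 160.0

constraints = [
    LinearConstraint(space[np.newaxis, :], ub=warehouse_capacity),  # space
    LinearConstraint(np.eye(2), lb=requirement),                    # demand
]
res = milp(c=cost, constraints=constraints,
           integrality=np.ones(2),  # both variables must be integers
           bounds=Bounds(0, np.inf))
print(res.x, res.fun)               # optimal lots and minimum purchase cost
```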
Procedia PDF Downloads 83
20519 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving
Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian
Abstract:
In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has certain drawbacks in that the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that we can model different and complex road scenarios in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy. This learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides us with a strong foundation to test our model. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results. When using the PPO algorithm, the reward is greater, and the acceleration, steering angle, and braking are more stable compared to the other algorithms, which means that the agent learns to drive in a better and more efficient way in this case. Additionally, we have come up with a dataset taken from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form: (all input values, acceleration, steering angle, brake, loss, reward). This study can serve as a base for further complex road scenarios. Furthermore, it can be extended to the field of computer vision, using the images to find the best policy.
Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning
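Of the three algorithms compared, tabular Q-learning is the simplest to sketch; the environment below is a placeholder, not the TORCS interface:

```python
# Sketch of the tabular Q-learning update, the simplest of the three
# algorithms compared; states and actions are generic indices here.
import numpy as np

n_states, n_actions = 10, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # step size, discount, exploration
rng = np.random.default_rng(7)

def step(state, action):
    """Placeholder environment: returns (next_state, reward)."""
    return rng.integers(n_states), rng.normal()

state = 0
for _ in range(1000):
    if rng.random() < epsilon:           # epsilon-greedy action selection
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state
```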
Procedia PDF Downloads 149
20518 Detecting Earnings Management via Statistical and Neural Networks Techniques
Authors: Mohammad Namazi, Mohammad Sadeghzadeh Maharluie
Abstract:
Predicting earnings management is vital for capital market participants, financial analysts, and managers. The aim of this research is to respond to this query: Is there a significant difference between the regression model and neural network models in predicting earnings management, and which one leads to a superior prediction of it? In approaching this question, a Linear Regression (LR) model was compared with two neural networks: a Multi-Layer Perceptron (MLP) and a Generalized Regression Neural Network (GRNN). The population of this study includes 94 listed companies in the Tehran Stock Exchange (TSE) market from 2003 to 2011. After the results of all models were acquired, ANOVA was used to test the hypotheses. In general, the summary of statistical results showed that the precision of the GRNN did not exhibit a significant difference in comparison with the MLP. In addition, the mean square errors of the MLP and GRNN showed a significant difference from the multivariable LR model. These findings support the notion of nonlinear behavior of earnings management. Therefore, it is more appropriate for capital market participants to analyze earnings management based upon neural network techniques, and not to adopt linear regression models.
Keywords: earnings management, generalized linear regression, neural networks multi-layer perceptron, Tehran stock exchange
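The GRNN can be written in a few lines, since its prediction is a kernel-weighted average of the training targets (Specht's formulation); the data below are synthetic stand-ins for the financial predictors:

```python
# Sketch of a generalized regression neural network (GRNN): prediction is a
# Gaussian-kernel-weighted average of training targets.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    # squared distances between every query and training pattern
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))     # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)     # summation / output layers

rng = np.random.default_rng(8)
X = rng.standard_normal((94, 6))   # e.g., 94 firms x 6 financial predictors
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(94)   # nonlinear target
print(grnn_predict(X, y, X[:5]))   # in-sample predictions for 5 firms
```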
Procedia PDF Downloads 422
20517 Study of Wake Dynamics for a Rim-Driven Thruster Based on Numerical Method
Authors: Bao Liu, Maarten Vanierschot, Frank Buysschaert
Abstract:
The present work examines the wake dynamics of a rim-driven thruster (RDT) with Computational Fluid Dynamics (CFD). Unsteady Reynolds-averaged Navier-Stokes (URANS) equations were solved in the commercial solver ANSYS Fluent in combination with the SST k-ω turbulence model. The moving reference frame (MRF) and sliding mesh (SM) approaches to handling the rotational movement of the propeller were compared in the transient simulations. Validation and verification of the numerical model were performed to ensure numerical accuracy. Two representative scenarios were considered, i.e., the bollard condition (J=0) and a very light loading condition (J=0.7). From the results, it is confirmed that, compared to the SM method, the MRF method is not suitable for resolving the unsteady flow features, as it only gives the general mean flow but smooths out many of the characteristic details in the flow field. By evaluating the simulation results with the SM technique, the instantaneous wake flow field under both conditions is presented and analyzed, most notably the helical vortex structure. It is observed from the results that the tip vortices, blade shed vortices, and hub vortices are present in the wake flow field and convect downstream in a highly non-linear way. The shear layer vortices shedding from the duct displayed a strong interaction with the distorted tip vortices in an irregular manner.
Keywords: computational fluid dynamics, rim-driven thruster, sliding mesh, wake dynamics
Procedia PDF Downloads 260
20516 Fixed Point Iteration of a Damped and Unforced Duffing's Equation
Authors: Paschal A. Ochang, Emmanuel C. Oji
Abstract:
The Duffing equation is a second order system that is very important because such systems are fundamental to the behaviour of higher order systems and have applications in almost all fields of science and engineering. In the biological area, it is useful in modeling plant stem dependence and natural frequency and in the Brain Crash Analysis (BCA) model. In engineering, it is useful in the study of damping in indoor construction and traffic lights, and to the meteorologist it is used in the prediction of weather conditions. However, most problems that occur in real life are non-linear in nature and may not have analytical solutions except approximations or simulations, so trying to find an exact explicit solution may in general be complicated and sometimes impossible. Therefore we aim to find out if it is possible to obtain one analytical fixed point of the non-linear ordinary differential equation using a fixed point analytical method. We started by exposing the scope of the Duffing equation and other related works on it. With a major focus on the fixed point and the fixed point iterative scheme, we tried different iterative schemes on the Duffing equation. We were able to identify that one can only see the fixed points of a damped Duffing equation and not of the undamped Duffing equation. This is because the cubic nonlinearity term is the determining factor of the Duffing equation. We finally came to the results where we identified the stability of an equation that is damped, forced, and second order in nature. Generally, in this research, we approximate the solution of the Duffing equation by converting it to a system of first and second order ordinary differential equations and using a fixed point iterative approach. This approach shows that for different versions of the (damped) Duffing equation, we find fixed points; therefore, the order of computations and the running time of applied software in all fields using the Duffing equation will be reduced.
Keywords: damping, Duffing's equation, fixed point analysis, second order differential, stability analysis
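A minimal sketch of a fixed point iteration for the equilibria of a damped, unforced Duffing equation is shown below; the coefficients and the relaxed iteration map are illustrative choices, not the paper's specific scheme:

```python
# Sketch of fixed point iteration for the equilibria of a damped, unforced
# Duffing equation x'' + delta*x' + alpha*x + beta*x**3 = 0, rewritten as a
# first-order system (x' = y, y' = -delta*y - alpha*x - beta*x**3).
# Equilibria satisfy y = 0 and alpha*x + beta*x**3 = 0.
delta, alpha, beta = 0.3, -1.0, 1.0   # assumed double-well coefficients
lam = 0.2                             # relaxation factor of the iteration

def g(x):
    """Relaxed map: roots of alpha*x + beta*x**3 are its fixed points."""
    return x - lam * (alpha * x + beta * x ** 3)

x = 0.5                               # initial guess
for _ in range(100):
    x = g(x)
print(x)                              # converges to the equilibrium x = 1.0
```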
Procedia PDF Downloads 292
20515 Settlement Prediction in Cape Flats Sands Using Shear Wave Velocity – Penetration Resistance Correlations
Authors: Nanine Fouche
Abstract:
The Cape Flats is a low-lying, sand-covered expanse of approximately 460 square kilometres, situated to the southeast of the central business district of Cape Town in the Western Cape of South Africa. The aeolian sands masking this area are often loose and compressible in the upper 1 m to 1.5 m of the surface, and there is a general exceedance of the maximum allowable settlement in these sands. The settlement of shallow foundations on Cape Flats sands is commonly predicted using the results of in-situ tests such as the SPT or DPSH due to the difficulty of retrieving undisturbed samples for laboratory testing. Varying degrees of accuracy and reliability are associated with these methods. More recently, shear wave velocity (Vs) profiles obtained from seismic testing, such as continuous surface wave tests (CSW), are being used for settlement prediction. Such predictions have the advantage of considering the non-linear stress-strain behaviour of soil and the degradation of stiffness with increasing strain. CSW tests are rarely executed in the Cape Flats, whereas SPTs are commonly performed. For this reason, and to facilitate better settlement predictions in Cape Flats sand, equations representing shear wave velocity (Vs) as a function of SPT blow count (N60) and vertical effective stress (σv') were generated by statistical regression of site investigation data. To reveal the most appropriate method of overburden correction, analyses were performed with a separate overburden term (Pa/σv') as well as using stress-corrected shear wave velocity and SPT blow counts (correcting Vs and N60 to Vs1 and (N1)60, respectively). Shear wave velocity profiles and SPT blow count data from three sites masked by Cape Flats sands were utilised to generate 80 Vs-SPT N data pairs for analysis. Investigated terrains included sites in the suburbs of Athlone, Muizenburg, and Atlantis, all underlain by windblown deposits comprising fine and medium sand with varying fines contents. Elastic settlement analysis was also undertaken for the Cape Flats sands, using a non-linear stepwise method based on small-strain stiffness estimates, which were obtained from the best Vs-N60 model and compared to settlement estimates using the general elastic solution with stiffness profiles determined using Stroud's (1989) and Webb's (1969) SPT N60-E transformation models. Stroud's method considers strain level indirectly, whereas Webb's method does not take account of the variation in elastic modulus with strain. The expression of Vs in terms of N60 and Pa/σv' derived from the Atlantis data set revealed the best fit, with R2 = 0.83 and a standard error of 83.5 m/s. The less accurate Vs-SPT N relations associated with the combined data set are presumably the result of inversion routines used in the analysis of the CSW results showcasing significant variation in relative density and stiffness with depth. The regression analyses revealed that the inclusion of a separate overburden term in the regression of Vs and N60 produces improved fits, as opposed to the stress-corrected equations, in which the R2 of the regression is notably lower. It is the correction of Vs and N60 to Vs1 and (N1)60 with empirical constants ‘n’ and ‘m’ prior to regression that introduces bias with respect to overburden pressure.
When comparing settlement prediction methods, both Stroud's method (considering strain level indirectly) and the small-strain stiffness method predict higher stiffnesses for medium dense and dense profiles than Webb's method, which takes no account of strain level in the determination of soil stiffness. Webb's method appears to be suitable for loose sands only. The Versak software appears to underestimate differences in settlement between square and strip footings of similar width. In conclusion, settlement analysis using small-strain stiffness data from the proposed Vs-N60 model for Cape Flats sands provides a way to take account of the non-linear stress-strain behaviour of the sands when calculating settlement.
Keywords: sands, settlement prediction, continuous surface wave test, small-strain stiffness, shear wave velocity, penetration resistance
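A regression of the form described, with a separate overburden term, is usually fitted in log space; a minimal sketch with synthetic data standing in for the 80 field pairs follows:

```python
# Sketch of fitting Vs = a * N60**b * (sigma_v'/Pa)**c by linear regression in
# log space, with a separate overburden term as the abstract recommends.
# The data triplets here are synthetic stand-ins for the field measurements.
import numpy as np

rng = np.random.default_rng(9)
N60 = rng.uniform(5, 50, 80)            # SPT blow counts
sv = rng.uniform(30, 200, 80)           # vertical effective stress (kPa)
Pa = 101.325                            # atmospheric pressure (kPa)
Vs = 80 * N60**0.3 * (sv / Pa)**0.2 * np.exp(0.05 * rng.standard_normal(80))

A = np.column_stack([np.ones(80), np.log(N60), np.log(sv / Pa)])
coef, *_ = np.linalg.lstsq(A, np.log(Vs), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(a, b, c)                          # recovered model parameters
```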
Procedia PDF Downloads 175
20514 Analysis of Barbell Kinematics of Snatch Technique among Women Weightlifters in India
Authors: Manish Kumar Pillai, Madhavi Pathak Pillai, Rajender Lal, Dinesh P. Sharma
Abstract:
India has not yet been able to produce many weightlifters over the past years. Karnam Malleshwari is the only woman to win a medal for India at the Olympics. When we try to introspect, there seem to be different reasons. One probable cause could be the lack of biomechanical analysis for technique improvement. The analysis of motion in sports has gained prime importance for technical improvement. It helps an athlete to develop a better understanding of his own skills and increases the rate of the technical learning process. Kinematics is concerned with describing and quantifying both the linear and angular position of bodies and their time derivatives. The technique analysis of barbell movement is very important in weightlifting, but women's weightlifting has a shorter history than men's. Research on women's weightlifting based on video analysis is scarce; scientific evidence based on kinematic analysis, especially of Indian weightlifters at the national level, is limited. Hence, the present investigation aimed to analyze the barbell kinematics of women weightlifters in India. The study was delimited to the medal winners of the 69-kilogram weight category in the All India Inter-University Competition, ages ranging between 18 and 28 years. The variables selected for the mechanical analysis of barbell kinematics included barbell trajectory, velocity, acceleration, potential energy, kinetic energy, mechanical energy, and average power output. The performance was captured during the competition by two DV PC-60 digital cameras (Panasonic Company, Ltd). The two cameras were placed 6 meters perpendicular to the plane of the motion, 130 cm above the ground, to record/capture the frontal and lateral views of the lifters simultaneously. Video recordings were analyzed by using Dartfish software, and barbell kinematics were analyzed with the information derived from the software. The results documented on the basis of the findings of the study clearly show that there are differences in the selected kinematic variables of all three lifters with respect to their technique in the five phases of the snatch.
Keywords: dartfish, digital camera, kinematic, snatch, weightlifting
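Once the barbell trajectory has been digitized from video, the listed kinematic variables follow by numerical differentiation; a minimal sketch with an assumed frame rate and a stand-in bar path is:

```python
# Sketch of deriving barbell velocity, acceleration, energies, and power from
# a digitized trajectory sampled by video at 50 fps (assumed frame rate).
import numpy as np

fps, m, g = 50.0, 69.0, 9.81             # frame rate, barbell mass (kg), gravity
t = np.arange(0, 2.0, 1.0 / fps)
y = 0.9 * (1 - np.cos(np.pi * t / 2.0))  # stand-in vertical bar path (m)

v = np.gradient(y, t)                    # vertical velocity (m/s)
a = np.gradient(v, t)                    # vertical acceleration (m/s^2)
pe = m * g * y                           # potential energy (J)
ke = 0.5 * m * v ** 2                    # kinetic energy (J)
me = pe + ke                             # mechanical energy (J)
power = np.gradient(me, t)               # instantaneous power (W)
print(power.mean())                      # average power output
```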
Procedia PDF Downloads 136