Search results for: linear decomposition methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18079

17359 Comparative Study in Evaluating the Antioxidation Efficiency for Native Types Antioxidants Extracted from Crude Oil with the Synthesized Class

Authors: Mohammad Jamil Abd AlGhani

Abstract:

The natural native antioxidants N,N-P-methyl phenyl acetone and N,N-phenyl acetone were isolated by ion exchange from Iraqi crude oil of the Kirkuk region, and their structures were characterized by spectral and chemical analysis methods. Tetraline was used as the liquid hydrocarbon for testing the efficiency of the isolated molecules at elevated temperature (393 K), since its physicochemical properties and structure are close to those of the hydrocarbons fractionated from crude oil. The synthesized universal antioxidant 2,6-ditertiaryisobutyl-p-methyl phenol (Unol), with a known stoichiometric coefficient of inhibition equal to 2, was used as a model for comparative evaluation under the same conditions. A modified chemiluminescence method was used to determine the amount of absorbed oxygen and the induction periods in the presence and absence of the isolated antioxidant molecules. The induction periods and the quantity of oxygen absorbed during the oxidation process were measured with a manometric installation. At equal concentrations and at 393 K, N,N-phenyl acetone and N,N-P-methyl phenyl acetone proved to be 2 and 2.5 times more efficient, respectively, than Unol. This means they inhibit the formation of new free radicals and drive the chain reaction from the propagation to the termination step, rather than acting by decomposition of the formed hydroperoxides.

Keywords: antioxidants, chemiluminescence, inhibition, Unol

Procedia PDF Downloads 197
17358 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion. How Neurofinance Can Help to Understand Behavioral Anomalies

Authors: Roberta Martino, Viviana Ventre

Abstract:

Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, which is calculated by multiplying the cardinal utility of the outcome, as if the reception were instantaneous, by the discount function that determines a decrease in the utility value according to how the actual reception of the outcome is far away from the moment the choice is made. Initially, the discount function was assumed to have an exponential trend, whose decrease over time is constant, in line with a profile of a rational investor described by classical economics. Instead, empirical evidence called for the formulation of alternative, hyperbolic models that better represented the actual actions of the investor. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the limited rationality condition of human beings. What this means is that when a choice is made in a very difficult and information-rich environment, the brain does a compromise job between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection phase of the alternative, the collection and processing of information, are dynamics conditioned by systematic distortions of the decision-making process that are the behavioral biases involving the individual's emotional and cognitive system. In this paper we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting. 
It is possible to decompose the curve into two components: the first component is responsible for the decrease in the discounted value as time increases and is related to the individual's impatience; the second component relates to the change in direction of the tangent vector to the curve and indicates how strongly the individual perceives the indeterminacy of the future, i.e., his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it allows the decision-making process to be described as the relationship between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality.
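The two discount functions contrasted above can be compared in a few lines of code. This is a minimal sketch, not the authors' decomposition; the rate parameter k = 0.25 is illustrative:

```python
import math

def exponential_discount(t, k=0.25):
    """Discounted Utility model: value decays at the constant rate k."""
    return math.exp(-k * t)

def hyperbolic_discount(t, k=0.25):
    """Hyperbolic model: the effective rate k/(1 + k*t) declines with delay."""
    return 1.0 / (1.0 + k * t)

def discount_rate(D, t, dt=1e-6):
    """Instantaneous discount rate -D'(t)/D(t), estimated numerically."""
    return -(D(t + dt) - D(t - dt)) / (2.0 * dt) / D(t)
```

For the exponential curve the rate stays at k for every delay, while for the hyperbolic curve it falls from k at t = 0 to k/2 at t = 4: the "decreasing impatience" behind the anomalies discussed above.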

Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty

Procedia PDF Downloads 124
17357 A Finite Element/Finite Volume Method for Dam-Break Flows over Deformable Beds

Authors: Alia Alghosoun, Ashraf Osman, Mohammed Seaid

Abstract:

A coupled two-layer finite volume/finite element method is proposed for solving the dam-break flow problem over deformable beds. The governing equations consist of the well-balanced two-layer shallow water equations for the water flow and a linear elastic model for the bed deformations. Deformations in the topography can be caused by a sudden localized force or simply by a class of sliding displacements on the bathymetry. This deformation of the bed is a source of perturbations on the water surface, generating water waves which propagate with different amplitudes and frequencies. Coupling conditions at the interface are also investigated in the current study, and a two-mesh procedure is proposed for the transfer of information through the interface. In the present work, a new procedure is implemented at the soil-water interface using the finite element and two-layer finite volume meshes with a conservative distribution of the forces at their intersections. The finite element method employs quadratic elements on an unstructured triangular mesh, and the finite volume method uses the Rusanov scheme to reconstruct the numerical fluxes. The coupled numerical method is highly efficient, accurate, well balanced, and it can handle complex geometries as well as rapidly varying flows. Numerical results are presented for several test examples of dam-break flows over deformable beds. A mesh convergence study is performed for both methods; the overall model provides new insight into these problems at minimal computational cost.
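As a concrete illustration of the flux reconstruction named above, the following sketch implements the Rusanov (local Lax-Friedrichs) numerical flux for the one-dimensional, single-layer shallow water equations; the two-layer coupled version used in the paper is more involved, and the states below are hypothetical:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def physical_flux(h, q):
    """Flux of the 1D shallow water equations for state (h, q = h*u)."""
    u = q / h
    return (q, q * u + 0.5 * G * h * h)

def rusanov_flux(left, right):
    """Rusanov (local Lax-Friedrichs) numerical flux between two states."""
    hL, qL = left
    hR, qR = right
    fL = physical_flux(hL, qL)
    fR = physical_flux(hR, qR)
    # Local wave-speed bound: max |u| + sqrt(g*h) over both states
    lam = max(abs(qL / hL) + math.sqrt(G * hL),
              abs(qR / hR) + math.sqrt(G * hR))
    return tuple(0.5 * (fl + fr) - 0.5 * lam * (ur - ul)
                 for fl, fr, ul, ur in zip(fL, fR, (hL, qL), (hR, qR)))
```

By construction the flux is consistent (it reduces to the physical flux when both states coincide) and adds numerical dissipation proportional to the local wave speed, which is what makes the scheme robust for rapidly varying dam-break flows.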

Keywords: dam-break flows, deformable beds, finite element method, finite volume method, hybrid techniques, linear elasticity, shallow water equations

Procedia PDF Downloads 173
17356 Parallel Particle Swarm Optimization Optimized LDI Controller with Lyapunov Stability Criterion for Nonlinear Structural Systems

Authors: P. W. Tsai, W. L. Hong, C. W. Chen, C. Y. Chen

Abstract:

In this paper, we present a neural network (NN) based approach to represent a nonlinear Takagi-Sugeno (T-S) system. A linear differential inclusion (LDI) state-space representation is utilized to deal with the NN models. Taking advantage of the LDI representation, stability conditions and a controller design are derived for a class of nonlinear structural systems. Moreover, the concept of utilizing the Parallel Particle Swarm Optimization (PPSO) algorithm to solve for the common P matrix under the stability criteria is presented in this paper.
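A serial sketch of the particle swarm optimization step that the paper parallelizes (the LMI-based search for the common P matrix is not reproduced); all hyperparameters and the test function are illustrative:

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=200, seed=1):
    """Minimal (non-parallel) particle swarm optimization sketch."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]        # personal best positions
    pval = [f(xi) for xi in x]
    g = pbest[min(range(n_particles), key=lambda i: pval[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (g[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, x[i][:]
                if fx < f(g):
                    g = x[i][:]
    return g, f(g)
```

In the parallel variant, each group of particles runs this update independently and the groups periodically exchange their best solutions.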

Keywords: Lyapunov stability, parallel particle swarm optimization, linear differential inclusion, artificial intelligence

Procedia PDF Downloads 648
17355 Hypergeometric Solutions to Linear Nonhomogeneous Fractional Equations with Spherical Bessel Functions of the First Kind

Authors: Pablo Martin, Jorge Olivares, Fernando Maass

Abstract:

The use of fractional derivatives for different problems in engineering and physics has been increasing in the last decade. For this reason, we consider here fractional differential equations in which the nonhomogeneous term is a spherical Bessel function of the first kind, in both its regular and modified forms; simple initial conditions are also considered. In this way, the solution is found as a combination of hypergeometric functions. The case of a general rational value of the fractional order α has been solved in a general way for α between zero and two. The modified spherical Bessel functions of the first kind are also considered, and how to pass from the regular case to the modified one is shown.
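For reference, the Caputo derivative named in the keywords is defined, for n − 1 < α < n (here n = 1 or 2, since 0 < α < 2), by:

```latex
{}^{C}\!D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}
\int_{0}^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau ,
\qquad n-1 < \alpha < n .
```

Unlike the Riemann-Liouville form, this definition lets the simple initial conditions mentioned above be stated on f and its integer-order derivatives.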

Keywords: Caputo fractional derivatives, hypergeometric functions, linear differential equations, spherical Bessel functions

Procedia PDF Downloads 320
17354 Modified CUSUM Algorithm for Gradual Change Detection in a Time Series Data

Authors: Victoria Siriaki Jorry, I. S. Mbalawata, Hayong Shin

Abstract:

The main objective in a change detection problem is to develop algorithms for efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series data. In this paper, we present a modified cumulative sum (MCUSUM) algorithm to detect the start and end of a time-varying linear drift in the mean value of a time series, based on a likelihood ratio test procedure. The design, implementation and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures. An approach to accurately approximate the threshold of the MCUSUM is also provided. The performance of the MCUSUM for gradual change-point detection is compared to that of the standard cumulative sum (CUSUM) control chart, designed for abrupt shift detection, using Monte Carlo simulations. In terms of the expected time to detection, the MCUSUM procedure is found to perform better than a standard CUSUM chart for detection of a gradual change in mean. The algorithm is then applied to a randomly generated time series with a gradual linear trend in mean to demonstrate its usefulness.
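For contrast with the proposed MCUSUM, the standard one-sided CUSUM recursion for an abrupt upward shift in the mean can be sketched as follows; the reference value k and threshold h are illustrative, not the paper's tuned values:

```python
def cusum_alarm(xs, mu0=0.0, k=0.25, h=2.0):
    """One-sided CUSUM: index of the first alarm for an upward mean shift,
    or None if the statistic never crosses the threshold h."""
    s = 0.0
    for t, x in enumerate(xs):
        s = max(0.0, s + (x - mu0) - k)   # k is the reference (allowance) value
        if s > h:
            return t
    return None

data = [0.0] * 50 + [1.0] * 20            # mean shifts abruptly from 0 to 1 at t = 50
```

The MCUSUM replaces the fixed shift assumption implicit in k with a time-varying linear drift term, which is why it detects gradual changes sooner.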

Keywords: average run length, CUSUM control chart, gradual change detection, likelihood ratio test

Procedia PDF Downloads 291
17353 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems

Authors: Riadh Zorgati, Thomas Triboulet

Abstract:

In quite diverse application areas such as astronomy, medical imaging, geophysics or nondestructive evaluation, many problems related to calibration, fitting or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e., existence, uniqueness and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations; the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number acts as an amplifier of data uncertainties during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas, such as numerical optimization, where interior-point algorithms for solving linear programs lead to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on preconditioning the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, positive semi-definite matrices and then generalized to arbitrary complex rectangular matrices. The main results obtained are as follows: 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition required by iterative algorithms for solving a system of linear equations.
This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be efficiently used in different solution schemes such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case, consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some pathological well-known test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the classical techniques we have tested (Gauss, Moore-Penrose inverse, minimal residual, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters of the solution of a linear system (such as its extreme values, mean or variance) prior to its resolution. Such an approach, if it proved efficient, would be a source of information on the solution of a system of linear equations.
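As a reference point for the generalized Kaczmarz-type matrices mentioned in item 3, the classical cyclic Kaczmarz iteration can be sketched in a few lines; the 2x2 system below is a toy example, not one of the paper's test cases:

```python
def kaczmarz(A, b, sweeps=200):
    """Cyclic Kaczmarz iteration for a consistent linear system A x = b."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            # Project x orthogonally onto the hyperplane {y : ai . y = bi}
            r = (bi - sum(a * xi for a, xi in zip(ai, x))) / sum(a * a for a in ai)
            x = [xi + r * a for xi, a in zip(x, ai)]
    return x

x = kaczmarz([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0])  # exact solution is (1, 1)
```

Each inner step is a row projection; choosing a different Lp norm for the projection is what generates the generalized variants discussed above.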

Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix

Procedia PDF Downloads 127
17352 Photoreflectance Anisotropy Spectroscopy of Coupled Quantum Wells

Authors: J. V. Gonzalez Fernandez, T. Mozume, S. Gozu, A. Lastras Martinez, L. F. Lastras Martinez, J. Ortega Gallegos, R. E. Balderas Navarro

Abstract:

We report on a theoretical-experimental study of photoreflectance anisotropy (PRA) spectroscopy of coupled double quantum wells. By probing the in-plane interfacial optical anisotropies, we demonstrate that PRA spectroscopy has the capacity to detect and distinguish layers with quantum dimensions. In order to account for the experimental PRA spectra, we have used a theoretical model at k=0 based on a linear electro-optic effect through a piezoelectric shear strain.

Keywords: coupled double quantum well (CDQW), linear electro-optic (LEO) effect, photoreflectance anisotropy (PRA), piezoelectric shear strain

Procedia PDF Downloads 689
17351 Experimental Analysis of Tuned Liquid Damper (TLD) for High Raised Structures

Authors: Mohamad Saberi, Arash Sohrabi

Abstract:

The tuned liquid damper is one of the passive structural control methods that has been used since the mid-1980s for seismic control in civil engineering. The system consists of one or more tanks filled with fluid, usually water, installed on top of a high-rise structure to suppress its vibration. In this article, we show how to build a shaking-table setup containing a TLD system and analyze the results of using this system on our structure. Results imply that when the frequency ratio approaches 1, the system performs at its best, both in dissipating energy and in increasing structural damping. The results of this series of experiments also prove compatible with the behaviour predicted by Housner's linear theory.
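A minimal sketch of the tuning calculation implied by the frequency ratio above, assuming the standard linear sloshing result for a rectangular tank (the tank dimensions are illustrative, not the experimental setup's):

```python
import math

def sloshing_frequency(L, h, n=1, g=9.81):
    """Natural frequency (Hz) of sloshing mode n in a rectangular tank of
    length L (m) and still-water depth h (m), per linear sloshing theory."""
    omega_sq = (n * math.pi * g / L) * math.tanh(n * math.pi * h / L)
    return math.sqrt(omega_sq) / (2.0 * math.pi)
```

A TLD is tuned by choosing L and h so that this frequency matches the structure's fundamental frequency, i.e., so the frequency ratio approaches 1.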

Keywords: TLD, seismic table, structural system, Housner linear behaviour

Procedia PDF Downloads 329
17350 Non-Linear Finite Element Analysis of Bonded Single Lap Joint in Composite Material

Authors: A. Benhamena, L. Aminallah, A. Aid, M. Benguediab, A. Amrouche

Abstract:

The goal of this work is to analyze the severity of interfacial stress distribution in the single lap adhesive joint under tensile loading. The three-dimensional and non-linear finite element method based on the computation of the peel and shear stresses was used to analyze the fracture behaviour of single lap adhesive joint. The effect of the loading magnitude and the overlap length on the distribution of peel and shear stresses was highlighted. A good correlation was found between the FEM simulations and the analytical results.

Keywords: aluminum 2024-T3 alloy, single-lap adhesive joints, interface stress distributions, material nonlinear analysis, adhesive, bending moment, finite element method

Procedia PDF Downloads 567
17349 Growth Performance, Body Linear Measurements and Body Condition Score of Savanna Brown Goats Fed Enzyme Treated Sawdust Diets as Replacement for Maize Offal and Managed Semi-intensively

Authors: Alabi Olushola John, Ogbiko Anthonia, Tsado Daniel Nma, Mbajiorgu Ejike Felix, Adama Theophilus Zubairu

Abstract:

A total of thirty (30) goats weighing between 5.8 and 7.3 kg were used to determine the growth performance, body linear measurements and body condition score of semi-intensively managed Savanna Brown goats fed enzyme treated sawdust diets (ETSD). They were divided into five dietary treatment (T) groups with three replications using a completely randomized design. Treatment 1 (T1) comprised animals fed a diet with 0 % enzyme treated sawdust, while Treatment 2 (T2), Treatment 3 (T3), Treatment 4 (T4) and Treatment 5 (T5) comprised animals fed diets containing 10, 20, 30 and 40 % enzyme treated sawdust, respectively. The study lasted 16 weeks. Data on growth performance parameters, body linear measurements (height at withers, body length, chest girth, hind leg length, foreleg length, facial length) and body condition score were collected and analyzed using one-way analysis of variance. No significant difference (p>0.05) was observed in any of the growth performance parameters and linear body measurements. However, significant differences were observed in body length and daily body length gains, with the highest values observed in animals fed the control diet (7.38 and 0.08 cm, respectively) and animals on 30 % ETSD (7.25 and 0.07 cm, respectively), and the lowest values (4.75 and 0.05 cm, respectively) observed in animals fed 10 % ETSD among the treatment groups. It was therefore concluded that enzyme treated sawdust can be used in the diets of Savanna Brown goats at up to 40 % replacement for maize offal, since this treatment improved body length and daily body length gains.

Keywords: performance, sawdust, enzyme treated, semi-intensively, replacement

Procedia PDF Downloads 93
17348 Nonhomogeneous Linear Second Order Differential Equations and Resonance through Geogebra Program

Authors: F. Maass, P. Martin, J. Olivares

Abstract:

The aim of this work is the application of the program GeoGebra to teaching the study of nonhomogeneous linear second order differential equations with constant coefficients. Different kinds of functions or forces will be considered on the right hand side of the differential equations; in particular, the emphasis will be placed on the case of trigonometric functions producing the resonance phenomenon. In order to obtain this, the frequencies of the trigonometric functions will be changed. Once the resonances appear, these have to be correlated with the roots of the second order algebraic equation determined by the coefficients of the differential equation. In this way, physics and engineering students will understand resonance effects and their consequences in the simplest way. A large variety of examples will be shown, using different kinds of functions for the nonhomogeneous part of the differential equations.
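The resonance case emphasized above can be summarized as follows: for the undamped equation with a trigonometric forcing term, the particular solution changes character exactly when the forcing frequency coincides with the root ±iω₀ of the characteristic equation:

```latex
x'' + \omega_0^2\, x = F_0 \cos(\omega t), \qquad
x_p(t) =
\begin{cases}
\dfrac{F_0}{\omega_0^2 - \omega^2}\,\cos(\omega t), & \omega \neq \omega_0,\\[2ex]
\dfrac{F_0\, t}{2\omega_0}\,\sin(\omega_0 t), & \omega = \omega_0,
\end{cases}
```

so at resonance the amplitude grows linearly in t, which is the behavior students can observe by sweeping ω in GeoGebra.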

Keywords: education, geogebra, ordinary differential equations, resonance

Procedia PDF Downloads 237
17347 Buckling Behavior of FGM Plates Using a Simplified Shear Deformation Theory

Authors: Mokhtar Bouazza

Abstract:

In this paper, the simplified theory will be used to predict the thermoelastic buckling behavior of rectangular functionally graded plates. The material properties of the functionally graded plates are assumed to vary continuously through the thickness, according to a simple power law distribution of the volume fraction of the constituents. The simplified theory is used to obtain the buckling of the plate under different types of thermal loads. The thermal loads are assumed to be uniform, linear, and non-linear distribution through the thickness. Additional numerical results are presented for FGM plates that show the effects of various parameters on thermal buckling response.
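For concreteness, the simple power law referred to above is typically written as follows, where P is any effective material property, the subscripts c and m denote the ceramic and metal constituents, h is the plate thickness, and n ≥ 0 is the volume fraction index (notation assumed here, not taken from the paper):

```latex
P(z) = \left(P_c - P_m\right)\left(\frac{z}{h} + \frac{1}{2}\right)^{n} + P_m,
\qquad -\frac{h}{2} \le z \le \frac{h}{2}.
```

Setting n = 0 recovers a fully ceramic plate, while large n approaches a fully metallic one, which is how the parameter studies vary the through-thickness gradation.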

Keywords: buckling, functionally graded, plate, simplified higher-order deformation theory, thermal loading

Procedia PDF Downloads 375
17346 Downhole Corrosion Inhibition Treatment for Water Supply Wells

Authors: Nayif Alrasheedi, Sultan Almutairi

Abstract:

Field-wide, a downhole corrosion inhibition program is applied to water supply wells to maintain downhole component integrity and keep the fluid corrosivity below 5 MPY. Batch treatment is currently used to inject the oilfield chemical. This work is a case study consisting of analytical procedures used to optimize the frequency of the wells' corrosion inhibition treatments. During the study, a corrosion cell was fitted with a special three-electrode configuration for electrochemical measurements, electrochemical linear polarization, corrosion monitoring, and microbial analysis. The study revealed that the current practice is not able to mitigate material corrosion in the downhole system for more than three months.

Keywords: downhole corrosion inhibition, electrochemical measurements, electrochemical linear polarization, corrosion monitoring

Procedia PDF Downloads 167
17345 Extension of the Simplified Theory of Plastic Zones for Analyzing Elastic Shakedown in a Multi-Dimensional Load Domain

Authors: Bastian Vollrath, Hartwig Hubel

Abstract:

In case of over-elastic and cyclic loading, strain may accumulate due to a ratcheting mechanism until the state of shakedown is possibly achieved. Load history dependent numerical investigations by a step-by-step analysis are rather costly in terms of engineering time and numerical effort. In the case of multi-parameter loading, where various independent loadings affect the final state of shakedown, the computational effort becomes an additional challenge. Therefore, direct methods like the Simplified Theory of Plastic Zones (STPZ) are developed to solve the problem with a few linear elastic analyses. Post-shakedown quantities such as strain ranges and cyclic accumulated strains are calculated approximately by disregarding the load history. The STPZ is based on estimates of a transformed internal variable, which can be used to perform modified elastic analyses, where the elastic material parameters are modified, and initial strains are applied as modified loading, resulting in residual stresses and strains. The STPZ already turned out to work well with respect to cyclic loading between two states of loading. Usually, few linear elastic analyses are sufficient to obtain a good approximation to the post-shakedown quantities. In a multi-dimensional load domain, the approximation of the transformed internal variable transforms from a plane problem into a hyperspace problem, where time-consuming approximation methods need to be applied. Therefore, a solution restricted to structures with four stress components was developed to estimate the transformed internal variable by means of three-dimensional vector algebra. This paper presents the extension to cyclic multi-parameter loading so that an unlimited number of load cases can be taken into account. The theoretical basis and basic presumptions of the Simplified Theory of Plastic Zones are outlined for the case of elastic shakedown. 
The extension of the method to many load cases is explained, and a workflow of the procedure is illustrated. An example, adopting the FE implementation of the method in ANSYS and considering multilinear hardening, is given, which highlights the advantages of the method compared to incremental, step-by-step analysis.

Keywords: cyclic loading, direct method, elastic shakedown, multi-parameter loading, STPZ

Procedia PDF Downloads 157
17344 A Comparative Study between FEM and Meshless Methods

Authors: Jay N. Vyas, Sachin Daxini

Abstract:

Numerical simulation techniques are now widely used in product development and testing instead of expensive, time-consuming and sometimes dangerous laboratory experiments. Numerous numerical methods are available for simulating physical problems in different engineering fields. Grid-based methods, like the Finite Element Method, are extensively used for various kinds of static, dynamic, structural and non-structural analysis during the product development phase. Drawbacks of grid-based methods, in terms of discontinuous secondary field variables and difficulties in dealing with fracture mechanics and large deformation problems, led to the development of a relatively new class of numerical simulation techniques in the last few years, popularly known as meshless or meshfree methods. Meshless methods are expected to be more adaptive and flexible than the Finite Element Method because domain discretization in meshless methods requires only nodes. The present paper introduces meshless methods and differentiates them from the Finite Element Method in terms of the following aspects: shape functions used, role of the weight function, techniques to impose essential boundary conditions, integration techniques for the discrete system equations, convergence rate, accuracy of solution and computational effort. Capabilities, benefits and limitations of meshless methods are discussed and summarized at the end of the paper.

Keywords: numerical simulation, Grid-based methods, Finite Element Method, Meshless Methods

Procedia PDF Downloads 383
17343 Development of Sulfite Biosensor Based on Sulfite Oxidase Immobilized on 3-Aminopropyltriethoxysilane Modified Indium Tin Oxide Electrode

Authors: Pawasuth Saengdee, Chamras Promptmas, Ting Zeng, Silke Leimkühler, Ulla Wollenberger

Abstract:

Sulfite has been used as a versatile preservative to limit microbial growth and to control the taste in some foods and beverages. However, it has been reported to cause a wide spectrum of severe adverse reactions. It is therefore important to determine the amount of sulfite in food and beverages to ensure consumer safety. An efficient electrocatalytic biosensor for sulfite detection was developed by immobilizing human sulfite oxidase (hSO) on a 3-aminopropyltriethoxysilane (APTES) modified indium tin oxide (ITO) electrode. Cyclic voltammetry was employed to investigate the electrochemical characteristics of the hSO modified ITO electrode for various pretreatment and binding conditions. Amperometry was also utilized to demonstrate the current responses of the sulfite sensor toward sodium sulfite in aqueous solution at a potential of 0 V (vs. Ag/AgCl, 1 M KCl). The proposed sulfite sensor has a linear range from 0.5 to 2 mM with a correlation coefficient of 0.972. An additional polymer layer of PVA was then introduced to extend the linear range of the sensor and protect the enzyme. The linear range of the sensor with 5 % PVA coverage increases from 2.8 to 20 mM with a correlation coefficient of 0.983. In addition, the stability of the sensor with 5 % PVA coverage is maintained for up to 14 days when kept in 0.5 mM Tris buffer, pH 7.0, at 4 °C. Therefore, this sensor could be applied for the detection of sulfite in real samples, especially in food and beverages.
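The reported linear ranges and correlation coefficients come from fitting a calibration line to concentration-current pairs. A minimal least-squares sketch with hypothetical data (not the paper's measurements):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration points: sulfite concentration (mM) vs current (uA)
conc = [0.5, 1.0, 1.5, 2.0]
curr = [0.9, 1.7, 2.5, 3.3]
slope, intercept = linear_fit(conc, curr)
```

Once slope and intercept are known, an unknown sample's concentration is read off as (current − intercept) / slope, valid only inside the sensor's linear range.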

Keywords: sulfite oxidase, bioelectrocatalytsis, indium tin oxide, direct electrochemistry, sulfite sensor

Procedia PDF Downloads 226
17342 Vehicle to Vehicle Communication: Collision Avoidance Scenarios

Authors: Ahmed Emad, Ahmed Salah, Abdelrahman Magdy, Omar Rashid, Mohammed Adel

Abstract:

This research paper discusses vehicle-to-vehicle (V2V) technology as an important application of linear algebra. This communication technology represents an efficient and promising way to help ensure the safety of drivers by warning them when a collision is imminent. The major link between our topic and linear algebra is the Laplacian matrix. Some main definitions used in V2V are illustrated, such as VANET and its characteristics. The V2V technology can be applied in different applications with different traffic scenarios and various ways of warning car drivers. These scenarios were simulated in programs such as MATLAB and Python to test how the V2V system would respond to the different scenarios and warn the car drivers exposed to the threat of collision.
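The Laplacian matrix that links the V2V scenarios to linear algebra is built directly from the communication graph's adjacency matrix; the 4-vehicle chain below is a hypothetical platoon, not one of the paper's scenarios:

```python
def laplacian(adj):
    """Graph Laplacian L = D - A from a symmetric adjacency matrix."""
    n = len(adj)
    return [[sum(adj[i]) - adj[i][j] if i == j else -adj[i][j]
             for j in range(n)] for i in range(n)]

# Four vehicles in a platoon; each communicates only with its neighbors: 0-1-2-3
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
L = laplacian(A)
```

Every row of L sums to zero, and the multiplicity of the zero eigenvalue equals the number of connected components, which is how the network can check that a warning can reach every vehicle.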

Keywords: V2V communication, vehicle to vehicle scenarios, VANET, FCW, EEBL, IMA, Laplacian matrix

Procedia PDF Downloads 149
17341 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery

Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas

Abstract:

The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model can capture the dynamics that help in energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications the model parameters are time-varying, changing with current, temperature, state of charge (SOC) and aging of the battery, and this has a great impact on the performance of the model. Therefore, to improve the equivalent circuit model performance, the parameter estimation has been carried out in the frequency domain. The battery is a very complex system, associated with various chemical reactions and heat generation, so it is very difficult to select the optimal model structure. If the model order is increased, the model accuracy improves; however, a higher order model tends toward over-parameterization and unfavorable prediction capability, while the model complexity increases enormously. In the time domain, it becomes difficult to solve higher order differential equations as the model order increases. This problem can be resolved by frequency domain analysis, where the overall computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. A selective frequency domain estimation has been carried out, first by estimating the frequencies of the input and output by subspace decomposition, then by choosing specific bands from the most dominating to the least, while carrying out least-squares, recursive least-squares and Kalman filter based parameter estimation. In this paper, a second order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen.
For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.
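A stripped-down, discrete-time version of such an equivalent circuit model — a single RC pair plus series resistance rather than the paper's second-order network, with purely illustrative parameter values:

```python
import math

def simulate_ecm(current, dt=1.0, r0=0.01, r1=0.015, c1=2000.0, ocv=3.3):
    """Terminal voltage of a first-order RC equivalent circuit model under a
    given current profile (positive = discharge, amps). Uses the exact
    discretization of the RC branch: v1[k+1] = a*v1[k] + r1*(1-a)*i[k]."""
    a = math.exp(-dt / (r1 * c1))   # RC relaxation factor over one sample
    v1, out = 0.0, []
    for i in current:
        v1 = a * v1 + r1 * (1.0 - a) * i   # polarization voltage update
        out.append(ocv - r0 * i - v1)      # terminal voltage
    return out
```

Under a constant 1 A discharge the polarization voltage relaxes toward i·R1, so the terminal voltage settles at OCV − i(R0 + R1); fitting r0, r1, c1 to HPPC pulse responses is exactly the parameter estimation problem the paper moves into the frequency domain.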

Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition

Procedia PDF Downloads 140
17340 Energy Management System

Authors: S. Periyadharshini, K. Ramkumar, S. Jayalalitha, M. GuruPrasath, R. Manikandan

Abstract:

This paper presents a formulation and solution for an industrial load management and product grade problem. The formulation is created using a linear programming technique, optimizing the electricity cost by scheduling the loads subject to process, storage, time zone and production constraints, which reduces the maximum demand and thereby the electricity cost. The product grade problem is formulated using an integer linear programming technique with the Lingo optimization software, and the results show an overall increase in the profit margin. In this paper, a time-of-use tariff is utilized, and this technique provides significant reductions in peak electricity consumption.
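The core idea — shifting schedulable load out of peak tariff periods while meeting the same production (energy) requirement — can be illustrated with a toy time-of-use cost calculation; the tariff and load numbers are hypothetical, and a real LP solver would choose the schedule subject to the constraints above:

```python
def electricity_cost(load_kwh, tariff):
    """Total cost of an hourly energy profile under a time-of-use tariff."""
    return sum(l * t for l, t in zip(load_kwh, tariff))

tariff   = [1.0, 1.0, 3.0, 3.0, 1.0, 1.0]   # $/kWh; hours 2-3 are peak
baseline = [10, 10, 30, 30, 10, 10]          # heavy load runs during peak hours
shifted  = [30, 30, 10, 10, 10, 10]          # same 100 kWh, moved off-peak
```

Both schedules deliver the same energy, but the shifted one avoids the peak price band, which is exactly the saving the LP formulation searches for systematically.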

Keywords: cement industries, integer programming, optimal formulation, objective function, constraints

Procedia PDF Downloads 581
17339 Parallel Computation of the Covariance-Matrix

Authors: Claude Tadonki

Abstract:

We address the issues related to the computation of the covariance matrix. Following its canonical expression, this matrix is likely to be ill conditioned and consequently raises serious numerical issues. The underlying linear system, which should therefore be solved by means of iterative approaches, becomes computationally challenging. A huge number of iterations is expected in order to reach an acceptable level of convergence, necessary to meet the required accuracy of the computation. In addition, this linear system needs to be solved at each iteration, following the general form of the covariance matrix. Putting it all together, it follows that the associated matrix-vector product must be computed as fast as possible. This is our purpose in this work, where we consider and discuss skillful formulations of the problem and then propose a parallel implementation of the matrix-vector product involved. Numerical and performance-oriented discussions are provided, based on experimental evaluations.
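One standard reformulation in this spirit applies the covariance matrix to a vector without ever forming it, using the canonical factored expression; a minimal NumPy sketch (data sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 50
X = rng.standard_normal((n, d))      # n observations of d variables

def cov_matvec(X, v):
    # Covariance-times-vector without forming the d x d covariance matrix:
    # C v = Xc^T (Xc v) / (n - 1), i.e. two thin matrix-vector products.
    # Both products parallelize naturally over rows/columns of Xc.
    Xc = X - X.mean(axis=0)
    return Xc.T @ (Xc @ v) / (X.shape[0] - 1)

v = rng.standard_normal(d)
result = cov_matvec(X, v)
ref = np.cov(X, rowvar=False) @ v    # explicit covariance, for comparison
```

Keeping the product in factored form is what makes iterative solvers on the covariance system cheap per iteration; the explicit `np.cov` path is shown only as a correctness reference.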

Keywords: covariance-matrix, multicore, numerical computing, parallel computing

Procedia PDF Downloads 307
17338 Computational Insight into a Mechanistic Overview of Water Exchange Kinetics and Thermodynamic Stabilities of Bis and Tris-Aquated Complexes of Lanthanides

Authors: Niharika Keot, Manabendra Sarma

Abstract:

A thorough investigation of Ln3+ complexes with more than one inner-sphere water molecule is crucial for designing high relaxivity contrast agents (CAs) used in magnetic resonance imaging (MRI). This study accomplished a comparative stability analysis of two hexadentate (H3cbda and H3dpaa) and two heptadentate (H4peada and H3tpaa) ligands with Ln3+ ions. The higher stability of the hexadentate H3cbda and heptadentate H4peada ligands has been confirmed by the binding affinity and Gibbs free energy analysis in aqueous solution. In addition, energy decomposition analysis (EDA) reveals the higher binding affinity of the peada4− ligand than the cbda3− ligand towards Ln3+ ions due to the higher charge density of the peada4− ligand. Moreover, a mechanistic overview of water exchange kinetics has been carried out based on the strength of the metal–water bond. The strength of the metal–water bond follows the trend Gd–O47 (w) > Gd–O39 (w) > Gd–O36 (w) in the case of the tris-aquated [Gd(cbda)(H2O)3] and Gd–O43 (w) > Gd–O40 (w) for the bis-aquated [Gd(peada)(H2O)2]− complex, which was confirmed by bond length, electron density (ρ), and electron localization function (ELF) at the corresponding bond critical points. Our analysis also predicts that the activation energy barrier decreases with the decrease in bond strength; hence kex increases. The 17O and 1H hyperfine coupling constant values of all the coordinated water molecules were different, calculated by using the second-order Douglas–Kroll–Hess (DKH2) approach. Furthermore, the ionic nature of the bonding in the metal–ligand (M–L) bond was confirmed by the Quantum Theory of Atoms-In-Molecules (QTAIM) and ELF along with energy decomposition analysis (EDA). We hope that the results can be used as a basis for the design of highly efficient Gd(III)-based high relaxivity MRI contrast agents for medical applications.

Keywords: MRI contrast agents, lanthanide chemistry, thermodynamic stability, water exchange kinetics

Procedia PDF Downloads 75
17337 Orthogonal Regression for Nonparametric Estimation of Errors-In-Variables Models

Authors: Anastasiia Yu. Timofeeva

Abstract:

Two new algorithms for nonparametric estimation of errors-in-variables models are proposed. The first algorithm is based on a penalized regression spline. The spline is represented as a piecewise-linear function, and for each linear portion an orthogonal regression is estimated. This algorithm is iterative. The second algorithm involves locally weighted regression estimation. When the independent variable is measured with error, such estimation is a complex nonlinear optimization problem. The simulation results have shown the advantage of the second algorithm under the assumption that the true values of the smoothing parameters are known. Nevertheless, using goodness-of-fit indices for smoothing-parameter selection gives similar results, albeit with an oversmoothing effect.
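The orthogonal-regression building block used for each linear portion can be sketched as a total-least-squares line fit via the SVD; the data and noise levels below are synthetic assumptions:

```python
import numpy as np

# Orthogonal (total least squares) fit of y = b0 + b1*x when BOTH variables
# carry error: minimize perpendicular distances via the smallest singular
# direction of the centered data.
rng = np.random.default_rng(2)
x_true = np.linspace(0, 10, 100)
y_true = 1.5 + 2.0 * x_true
x = x_true + rng.normal(0, 0.3, 100)   # error in the regressor too
y = y_true + rng.normal(0, 0.3, 100)

D = np.column_stack([x - x.mean(), y - y.mean()])
_, _, Vt = np.linalg.svd(D, full_matrices=False)
nx, ny = Vt[-1]                         # normal vector of the fitted line
b1 = -nx / ny                           # slope from the line's normal
b0 = y.mean() - b1 * x.mean()           # intercept through the centroid
```

Unlike ordinary least squares, which attenuates the slope when x is noisy, this fit treats both error directions symmetrically, which is the point of the errors-in-variables setting.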

Keywords: grade point average, orthogonal regression, penalized regression spline, locally weighted regression

Procedia PDF Downloads 407
17336 Comparison of the Existing Damage Indices in Steel Moment-Resisting Frame Structures

Authors: Hamid Kazemi, Abbasali Sadeghi

Abstract:

The seismic behavior of frame structures is assessed in order to evaluate loss of life and financial damage. New methods for assessing structural seismic behavior have been proposed, so it is necessary to define a damage index through which the amount of damage can be quantified and qualified. In this paper, four new steel moment-resisting frames with intermediate ductility, different heights (2, 5, 8, and 12 stories), regular geometry, and a simple rectangular plan were assumed and designed. Three groups of existing damage indices were studied: local indices (drift, maximum roof displacement, Banon failure, kinematic, Banon normalized cumulative rotation, cumulative plastic rotation, and ductility), global indices (Roufaiel and Meyer, Papadopoulos, Sozen, Rosenblueth, ductility, and base shear), and story indices (Banon failure and inter-story rotation). The parameters required by these damage indices were calculated under far-fault ground motion records by non-linear dynamic time history analysis. Finally, the damage indices are ranked according to how conservative their damage estimates are. The results show that the choice of damage index has an important effect on the estimated damage state. The failure, drift, and Rosenblueth damage indices are the most conservative of the local, story, and global indices, respectively.
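The simplest of the indices above, a drift-based index, can be sketched in the common Powell-Allahabadi form; the yield and ultimate drift thresholds below are assumed illustrative values, not the paper's calibrated ones:

```python
# Illustrative drift-based damage index, DI = (θ_max - θ_y) / (θ_u - θ_y),
# clipped to [0, 1]: 0 = no damage below yield, 1 = collapse at the
# ultimate drift. Thresholds are assumptions for this sketch.
def drift_damage_index(theta_max, theta_y=0.005, theta_u=0.04):
    di = (theta_max - theta_y) / (theta_u - theta_y)
    return min(max(di, 0.0), 1.0)

# Hypothetical peak inter-story drift ratios from a time-history analysis.
story_drifts = [0.004, 0.012, 0.021, 0.009]
per_story = [drift_damage_index(t) for t in story_drifts]
global_di = max(per_story)        # the most damaged story governs
```

Cumulative indices (e.g. cumulative plastic rotation) would instead sum contributions over the response history rather than use only the peak.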

Keywords: damage index, far-fault ground motion records, non-linear time history analysis, SeismoStruct software, steel moment-resisting frame

Procedia PDF Downloads 287
17335 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms for predicting molecular solubility using the AqSolDB dataset: linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features for training and testing each molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors than the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
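The train/evaluate loop for the best-performing model can be sketched with scikit-learn; since AqSolDB and MACCS-key generation (RDKit) are outside this sketch, random bit-vectors stand in for the 189-feature fingerprints and the labels carry a planted synthetic signal:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the paper's pipeline: random bit-vectors play the
# role of MACCS keys + descriptors; labels are a planted soluble/insoluble
# rule on three bits, NOT AqSolDB data.
rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(1000, 189)).astype(float)
y = (X[:, 0] + X[:, 1] + X[:, 2] > 1).astype(int)   # majority of 3 bits

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))       # evaluation metric
```

Swapping `RandomForestClassifier` for the other four estimators while keeping the split fixed is what makes the comparison in the abstract fair.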

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 33
17334 Artificial Intelligence in Bioscience: The Next Frontier

Authors: Parthiban Srinivasan

Abstract:

With recent advances in computational power and access to sufficient data in the biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We utilize an automated process to generate an array of numerical descriptors (features) for each molecule. Redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms. We found the Random Forest algorithm to be the best choice for this analysis. We captured the non-linear relationships in the data and formed a prediction model of reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties. We expect the DNN algorithm to give better results and improve the accuracy of the prediction. This presentation will review these prominent machine learning and deep learning methods and our implementation protocols, and discuss the usefulness of these techniques in biomedical and health informatics.
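The iterative elimination of redundant descriptors mentioned above is often realized as a pairwise-correlation filter; a minimal sketch of that idea, with a threshold and toy descriptor columns chosen only for illustration:

```python
import numpy as np

def drop_correlated(X, names, threshold=0.95):
    # Keep a descriptor only if its absolute correlation with every
    # previously kept descriptor stays below the threshold — one simple,
    # common form of redundancy filtering (threshold is an assumption).
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]

rng = np.random.default_rng(4)
a = rng.standard_normal(300)
b = rng.standard_normal(300)
# Column 1 is a near-duplicate of column 0 and should be dropped.
X = np.column_stack([a, a + 0.01 * rng.standard_normal(300), b])
Xr, kept = drop_correlated(X, ["d0", "d1", "d2"])
```

Irrelevant (as opposed to redundant) descriptors are typically removed separately, e.g. by model-based feature importance.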

Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction

Procedia PDF Downloads 354
17333 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite element models (FEMs) are widely used to study and predict the dynamic properties of structures, and usually the prediction is much more accurate for a single component than for an assembly. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs that can run nonlinear analysis in the time domain treat the whole structure as nonlinear, even if only one degree of freedom (DOF) out of thousands is nonlinear, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology is presented for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered localized sources. The work extends the well-known structural dynamic modification method (SDMM) to a nonlinear set of modifications and obtains the nonlinear frequency response functions (NLFRFs) through an 'updating' process of the linear frequency response functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the nonlinear SDMM by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems.
The first is a two-DOF spring-mass-damper system, and the second example considers a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered to act linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analysis and easier implementation of optimization procedures for the calibration of nonlinear models.
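The 'updating' idea can be illustrated on the smallest possible case: a single-DOF oscillator with one localized cubic spring, where the linear FRF is iteratively corrected by a describing-function (equivalent-stiffness) update. All parameter values are illustrative assumptions, not from the paper's models:

```python
import numpy as np

# Single-DOF oscillator with a localized cubic spring; the nonlinear FRF
# amplitude is found by updating the linear FRF with the describing-function
# stiffness k_eq = k + 0.75 * k3 * A**2. Parameters are illustrative.
m, c, k, k3, F = 1.0, 0.2, 1.0, 0.2, 0.1

def nl_frf_amplitude(w, iters=200):
    A = abs(F / (k - m * w**2 + 1j * c * w))     # start from the linear FRF
    for _ in range(iters):
        k_eq = k + 0.75 * k3 * A**2              # harmonic-balance stiffness
        A_new = abs(F / (k_eq - m * w**2 + 1j * c * w))
        A = 0.5 * (A + A_new)                    # relaxed update for stability
    return A

freqs = np.linspace(0.5, 1.5, 201)
amps = np.array([nl_frf_amplitude(w) for w in freqs])
peak_w = freqs[int(np.argmax(amps))]             # hardening shifts the peak up
```

The hardening spring shifts the resonance peak above the linear natural frequency, which is the kind of localized-nonlinearity effect the NL SDMM captures on the underlying linear model without rerunning the full FEM.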

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 263
17332 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization

Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon

Abstract:

The J-PET scanner, which allows single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution through the use of fast plastic scintillators. Since registering the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovering the signal waveform, based on ideas from Tikhonov regularization (TR) and compressive sensing, is presented. The prior distribution of the sparse representation is evaluated from a linear transformation of a training set of signal waveforms using principal component analysis (PCA) decomposition. Besides the advantage of including additional information from the training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayes theory, the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by the single detection module of the J-PET detector, built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed.
It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveforms, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated from the information at four voltage levels, without recovery of the signal waveform, is 1.05 cm. After applying this information to the recovery of the signal waveform, the spatial resolution improves to 0.94 cm, only slightly worse than the 0.93 cm obtained from the original raw signal. This is very important since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction in the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest can be utilized.
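The core of the scheme — recovery from a few samples via Tikhonov-regularized least squares in a PCA basis learned from training waveforms — can be sketched on synthetic pulses. The pulse shapes, sample counts (eight samples, six components), and regularization weight below are all assumptions of this sketch, not J-PET settings:

```python
import numpy as np

# Synthetic Gaussian pulses stand in for scintillator waveforms.
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 100)

def pulse(t0, w):
    return np.exp(-0.5 * ((t - t0) / w) ** 2)

# Training set -> PCA basis of the waveform family.
train = np.array([pulse(rng.uniform(0.4, 0.6), rng.uniform(0.08, 0.12))
                  for _ in range(500)])
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:6].T                              # leading principal components

x = pulse(0.47, 0.1)                      # unseen waveform to recover
idx = np.linspace(20, 80, 8).astype(int)  # eight sampling instants
A, y = B[idx], x[idx]                     # measurement model in PCA basis

lam = 1e-3                                # Tikhonov regularization weight
coef = np.linalg.solve(A.T @ A + lam * np.eye(6), A.T @ (y - mean[idx]))
x_hat = mean + B @ coef                   # recovered full waveform
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

The explicit closed-form solve is the practical benefit of TR noted in the abstract: no iterative optimization is needed per event, and the solution's covariance follows directly from the same normal equations.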

Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization

Procedia PDF Downloads 441
17331 Light Harvesting Titanium Nanocatalyst for Remediation of Methyl Orange

Authors: Brajesh Kumar, Luis Cumbal

Abstract:

An eco-friendly synthesis of TiO₂ nanoparticles mediated by Citrus paradisi peel extract under sonication is reported. UV-vis, transmission electron microscopy, dynamic light scattering, and X-ray analyses are performed to characterize the formation of the TiO₂ nanoparticles. The particles are almost spherical in shape, 60–140 nm in size, and the XRD peak at 2θ = 25.363° confirms the characteristic facets of the anatase form. The synthesized nanocatalyst is highly active in the decomposition of methyl orange (64 mg/L) under sunlight, degrading ~73% in 2.5 hours.
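A back-of-envelope check of the reported figures, under the common assumption that such photodegradation follows pseudo-first-order kinetics (an assumption of this sketch, not a claim of the abstract):

```python
import math

# Assuming pseudo-first-order decay C/C0 = exp(-k*t): with ~73% of the
# methyl orange degraded in 2.5 h, the apparent rate constant follows.
C_over_C0 = 1.0 - 0.73        # fraction of dye remaining
t_hours = 2.5
k = -math.log(C_over_C0) / t_hours     # apparent rate constant, 1/h
half_life = math.log(2) / k            # time to degrade 50%, hours
```

This gives an apparent rate constant of roughly 0.52 h⁻¹, i.e. a half-life of about 1.3 h under the stated sunlight conditions.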

Keywords: eco-friendly, TiO2 nanoparticles, citrus paradisi, TEM

Procedia PDF Downloads 522
17330 Decomposition of Solidification Carbides during Cyclic Thermal Treatments in a Co-Based Alloy Deposit Applied to Stainless Steel

Authors: Sellidj Abdelaziz, Lebaili Soltane

Abstract:

A cobalt-based alloy of the Co-Cr-Ni-WC type was deposited by plasma transferred arc (PTA) deposition on a stainless steel valve. At equilibrium, the alloy is characterized by a mainly dendritic solid solution Co (γ) and the eutectic carbides M₇C₃ and ηM₆C. At the deposit/substrate interface, this microstructure is modified by the fast cooling of the alloy as it is applied, in the liquid state, onto the relatively cold steel substrate. The structure formed in this case is heterogeneous, and metastable phases can occur and evolve during service at temperature. Coating properties and reliability are directly related to the microstructures formed during deposition. We were particularly interested in the microstructure formed during solidification of the deposit in the interface region joining the two materials, and in its evolution during cyclic heat treatments at temperatures similar to those of the valve's thermal environment. The characterization was carried out by SEM-EDS (CAMECA microprobe), XRD, and microhardness profiles. The deposit obtained has a linear and regular appearance, free of cracks and with little porosity. The morphology of the microstructure reflects relatively fast solidification, with a high temperature gradient at the start of the interface forming a plane-front solid solution Co (γ). As the temperature gradient decreases away from the junction towards the outer limit of the deposit, the matrix gradually takes cellular, mixed (cells and dendrites), and dendritic forms. Dendritic growth proceeds along primary arms in the direction of heat removal, perpendicular to the interface and towards the external surface of the deposit, with undeveloped secondary and tertiary arms. The eutectic carbides M₇C₃ and ηM₆C formed are very fine and are located in the intercellular and interdendritic spaces of the solid solution Co (γ).

Keywords: Co-Ni-Cr-W-C alloy, solid deposit, microstructure, carbides, cyclic heat treatment

Procedia PDF Downloads 108