Search results for: nonlinear mathematical model
16647 Effect of Model Dimension in Numerical Simulation on Assessment of Water Inflow to Tunnel in Discontinuous Rock
Authors: Hadi Farhadian, Homayoon Katibeh
Abstract:
Groundwater inflow to tunnels is one of the most important problems in tunneling operations. The objective of this study is to investigate the effect of model dimension on tunnel inflow assessment in discontinuous rock masses using numerical modeling. In numerical simulation, the model dimension plays an important role in the prediction of the water inflow rate. When the model dimension is very small, the model boundary conditions, owing to their short distance from the tunnel boundary, affect the estimated amount of groundwater flow into the tunnel, and the results show a very high inflow to the tunnel. Hence, in this study, the two-dimensional universal distinct element code (UDEC) was used, and the impact of different model parameters, such as tunnel radius, joint spacing, and horizontal and vertical model domain extent, was evaluated. Results show that the required model domain extent is a function of the most significant parameters, which are the tunnel radius and the joint spacing.
Keywords: water inflow, tunnel, discontinuous rock, numerical simulation
Procedia PDF Downloads 524
16646 Structure-Constructivism in the Philosophy of Mathematics
Authors: Jeansou Moun
Abstract:
This study argues that constructivism and structuralism, which have been the two important schools of mathematical philosophy since the mid-19th century, can and should be synthesized into structure-constructivism. In fact, the philosophy of mathematics is divided into more than ten schools depending on the point of view. The biggest trend, however, is Platonism, which claims that mathematical objects are "abstract entities" that exist independently of the human mind and material objects. Its opposite is constructivism. According to the latter, mathematical objects are products of the construction of the human mind. Depending on whether the basis of the construction is a logical device, a symbolic system, or empirical perception, constructivism is subdivided into logicism, formalism, and intuitionism. These three schools are themselves further subdivided into various variants, and among them, structuralism, which emerged in the mid-20th century, is receiving the most attention. On the other hand, structuralism, which emphasizes structure instead of individual objects, is divided into non-eliminative structuralism, which supports the a priori nature of structure, and eliminative structuralism, which rejects any abstract entity. In this context, it is believed that the structure itself is not an a priori entity but a result of the construction of the cognitive subject, and that no object has ever been given to us in its full meaning from the outset. In other words, concepts are progressively structured through a dialectical cycle between sensory perception, imagination (abstraction), concepts, judgments, and reasoning. Symbols are needed for formal operation. However, without concrete manipulation, the formal operation cannot have any meaning. When formal structuring is achieved, the reality (the object) itself is also newly structured. This is "structure-constructivism".
Keywords: philosophy of mathematics, Platonism, logicism, formalism, constructivism, structuralism, structure-constructivism
Procedia PDF Downloads 97
16645 Nonlinear Finite Element Modeling of Reinforced Concrete Flat Plate-Inclined Column Connection
Authors: Rabab Allouzi, Amer Alkloub
Abstract:
As complex-shaped buildings become a popular trend among architects, this paper investigates the performance of the reinforced concrete flat plate-inclined column connection. Studies on inclined column and flat plate connections are not sufficient in comparison to those on conventional structures. The effect of the column angle of inclination on the punching shear strength is found to be significant and is studied herein. This paper presents a nonlinear finite element based modeling approach to estimate the behavior of the RC flat plate-inclined column connection. Results from simulations of the RC flat plate-straight column connection show good agreement with the experimental response of specimens tested by other researchers. The model is further used to study the response of inclined columns to punching over various ranges of inclination angles. The inclination angle can be included in the punching shear strength provisions of ACI 318-14 to account for the effect of column inclination.
Keywords: punching shear, nonlinear finite element, inclined columns, reinforced concrete connection
Procedia PDF Downloads 246
16644 A Comparative Analysis of Heuristics Applied to Collecting Used Lubricant Oils Generated in the City of Pereira, Colombia
Authors: Diana Fajardo, Sebastián Ortiz, Oscar Herrera, Angélica Santis
Abstract:
Currently, a problem is arising in Colombia related to the collection of used lubricant oils, which are generated by the growth of the vehicle fleet. This situation does not allow a proper disposal of this type of waste, which in turn results in a negative impact on the environment. Therefore, through a comparative analysis of various heuristics, the best solution to the VRP (Vehicle Routing Problem) was selected by comparing costs and times for the collection of used lubricant oils in the city of Pereira, Colombia, since there are no management companies engaged in the direct administration of the collection of this pollutant. To achieve this aim, six two-phase solution proposals were discussed. In the first phase, the previously identified waste-generating points were assigned to groups: proposals one and four are based on the closeness of points, proposals two and five use the scanning (sweep) method, and proposals three and six consider the capacity restriction of the collection vehicle. Subsequently, the routes were developed, in the first three proposals by the Clarke and Wright savings algorithm and in the remaining proposals by the Traveling Salesman optimization mathematical model. After applying these techniques, a comparative analysis of the results was performed, and it was determined which of the proposals presented the best values in terms of distance, cost, and travel time.
Keywords: heuristics, optimization model, savings algorithm, used vehicular oil, VRP
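As an illustration of the routing phase, the sketch below implements a basic parallel Clarke and Wright savings heuristic in Python; the depot index, the capacity handling and the toy data are assumptions for the example, not details taken from the study.

```python
import itertools

def clarke_wright(dist, demand, capacity):
    """Basic parallel Clarke & Wright savings heuristic (sketch).
    dist: (n+1)x(n+1) symmetric distance matrix, index 0 = depot.
    demand: dict {customer_index: demand}; capacity: vehicle capacity."""
    customers = list(demand)
    routes = [[c] for c in customers]                 # start with one route per customer
    route_of = {c: r for r in routes for c in r}
    load = {id(r): demand[r[0]] for r in routes}
    savings = sorted(((dist[0][i] + dist[0][j] - dist[i][j], i, j)
                      for i, j in itertools.combinations(customers, 2)), reverse=True)
    for s, i, j in savings:
        ri, rj = route_of[i], route_of[j]
        if s <= 0 or ri is rj:
            continue
        # merge only end-to-end: i must end its route and j must start its route
        if ri[-1] == i and rj[0] == j:
            pass
        elif rj[-1] == j and ri[0] == i:
            ri, rj = rj, ri                           # symmetric case: join rj's tail to ri's head
        else:
            continue                                  # (route reversal is omitted in this sketch)
        if load[id(ri)] + load[id(rj)] > capacity:
            continue
        merged = ri + rj
        load[id(merged)] = load[id(ri)] + load[id(rj)]
        for c in merged:
            route_of[c] = merged
        routes.remove(ri)
        routes.remove(rj)
        routes.append(merged)
    return routes                                     # each route is served depot -> ... -> depot

# toy example with four collection points
dist = [[0, 4, 5, 6, 7], [4, 0, 2, 6, 8], [5, 2, 0, 3, 7], [6, 6, 3, 0, 2], [7, 8, 7, 2, 0]]
print(clarke_wright(dist, demand={1: 2, 2: 3, 3: 2, 4: 1}, capacity=5))
```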
Procedia PDF Downloads 414
16643 Chaotic Electronic System with Lambda Diode
Authors: George Mahalu
Abstract:
The Chua diode has been configured over time in various ways, using electronic structures such as operational amplifiers (op-amps) or devices with gas or semiconductors. When the use of semiconductor devices is discussed, tunnel diodes (Esaki diodes) are most often considered and, more recently, transistorized configurations such as lambda diodes. The work proposed here uses in the modeling a lambda-diode-type configuration consisting of two junction field-effect transistors (JFETs). The original scheme is created in the MULTISIM electronic simulation environment and is analyzed in order to identify the conditions for the appearance of the evolutionary unpredictability specific to nonlinear dynamic systems with chaos-induced behavior. The chaotic deterministic oscillator is of the autonomous type, a fact that places it in the class of Chua-type oscillators, the only significant and most important difference being the presence of a nonlinear device like the structure mentioned above. The chaotic behavior is identified both by means of strange-attractor-type trajectories visible during the simulation and by highlighting the hypersensitivity of the system to small variations of one of the input parameters. The results obtained through simulation and the conclusions drawn are useful in further research on ways to implement such constructive electronic solutions in theoretical and practical applications related to modern small-signal amplification structures, to systems for encoding and decoding messages through various modern means of communication, as well as to new structures that can be imagined both in modern neural networks and in the physical implementation of requirements imposed by current research, with the aim of obtaining practically usable solutions in quantum computing and quantum computers.
Keywords: Chua, diode, memristor, chaos
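A minimal Python sketch of the kind of autonomous Chua-type system described above is given below; it uses the classical piecewise-linear N-shaped nonlinearity as a stand-in for the lambda-diode characteristic, and all parameter values are illustrative rather than taken from the MULTISIM model. The tiny perturbation of the initial condition illustrates the hypersensitivity mentioned in the abstract.

```python
import numpy as np
from scipy.integrate import solve_ivp

def chua(t, state, alpha=15.6, beta=28.0, a=-1.143, b=-0.714):
    """Dimensionless Chua-type autonomous oscillator; the piecewise-linear N-shaped
    function h(x) stands in for the lambda-diode characteristic (values illustrative)."""
    x, y, z = state
    h = b * x + 0.5 * (a - b) * (abs(x + 1.0) - abs(x - 1.0))
    return [alpha * (y - x - h), x - y + z, -beta * y]

sol_a = solve_ivp(chua, (0.0, 100.0), [0.7, 0.0, 0.0], max_step=0.01)
sol_b = solve_ivp(chua, (0.0, 100.0), [0.7 + 1e-6, 0.0, 0.0], max_step=0.01)  # perturbed start
# hypersensitivity: the trajectories separate by an O(1) amount despite a 1e-6 perturbation
print("final-state separation:", float(np.max(np.abs(sol_a.y[:, -1] - sol_b.y[:, -1]))))
```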
Procedia PDF Downloads 88
16642 Alternative Mathematical form for Determining the Effectiveness of High-LET Radiations at Lower Doses Region
Authors: Abubaker A. Yousif, Muhamad S. Yasir
Abstract:
The effectiveness of lower doses of high-LET radiation is not accurately determined by using energy-based physical parameters such as absorbed dose and radio-sensitivity parameters. Therefore, an attempt has been made in this research to propose an alternative parameter capable of quantifying the effectiveness of these high-LET radiations in the lower dose region. The linear energy transfer and the mean free path are employed to achieve this objective. A new mathematical form of the effectiveness of high-LET radiations in the lower dose region has been formulated. Based on this parameter, the optimized effectiveness of high-LET radiations occurs when the energy of charged particles is deposited at a spacing of 2 nm for primary ionization.
Keywords: effectiveness, low dose, radiation mean free path, linear energy transfer
Procedia PDF Downloads 462
16641 An Early Intervention Framework for Supporting Students’ Mathematical Development in the Transition to University STEM Programmes
Authors: Richard Harrison
Abstract:
Developing competency in mathematics and related critical thinking skills is essential to the education of undergraduate students of Science, Technology, Engineering and Mathematics (STEM). Recently, the HE sector has been impacted by a seemingly widening disconnect between the mathematical competency of incoming first-year STEM students and their entrance qualification tariffs. Despite relatively high grades in A-Level Mathematics, students may initially lack fundamental skills in key areas such as algebraic manipulation and have limited capacity to apply problem-solving strategies. Compounded by compensatory measures applied to entrance qualifications during the pandemic, there has been an associated decline in student performance on introductory university mathematics modules. In the UK, a number of online resources have been developed to help scaffold the transition to university mathematics. However, in general, these do not offer a structured learning journey focused on individual developmental needs, nor do they offer an experience coherent with the teaching and learning characteristics of the destination institution. In order to address some of these issues, a bespoke framework has been designed and implemented on our VLE in the Faculty of Engineering & Physical Sciences (FEPS) at the University of Surrey. Called the FEPS Maths Support Framework, it was conceived to scaffold the mathematical development of individuals prior to entering the university and during the early stages of their transition to undergraduate studies. More than 90% of our incoming STEM students voluntarily participate in the process. Students complete a set of initial diagnostic questions in the late summer. Based on their performance and feedback on these questions, they are subsequently guided to self-select specific mathematical topic areas for review using our proprietary resources. This further assists students in preparing for discipline-related diagnostic tests. The framework helps to identify students who are mathematically weak and facilitates early intervention to support students according to their specific developmental needs. This paper presents a summary of results from a rich data set captured from the framework over a 3-year period. Quantitative data provide evidence that students have engaged and developed during the process. This is further supported by process evaluation feedback from the students. Ranked performance data associated with seven key mathematical topic areas and eight engineering and science discipline areas reveal interesting patterns which can be used to identify more generic relative capabilities of the discipline area cohorts. In turn, this facilitates evidence-based management of the mathematical development of the new cohort, informing any associated adjustments to teaching and learning at a more holistic level. Evidence is presented establishing our framework as an effective early intervention strategy for addressing the sector-wide issue of supporting the mathematical development of STEM students transitioning to HE.
Keywords: competency, development, intervention, scaffolding
Procedia PDF Downloads 66
16640 A Case Study on the Seismic Performance Assessment of the High-Rise Setback Tower Under Multiple Support Excitations on the Basis of TBI Guidelines
Authors: Kamyar Kildashti, Rasoul Mirghaderi
Abstract:
This paper describes the three-dimensional seismic performance assessment of a high-rise steel moment-frame setback tower, designed and detailed per the 2010 ASCE 7, under multiple support excitations. The vulnerability analyses are conducted based on nonlinear time history analyses under a set of multi-directional strong ground motion records, which are scaled to the design-based site-specific spectrum in accordance with ASCE 41-13. The spatial variation of input motions between the far-distant supports of each part of the tower is considered by defining a time lag. Plastic hinge monotonic and cyclic behavior for prequalified steel connections, panel zones, as well as steel columns is obtained from predefined values presented in the TBI Guidelines, PEER/ATC 72 and FEMA P440A to include stiffness and strength degradation. Inter-story drift ratios, residual drift ratios, as well as plastic hinge rotation demands under multiple support excitations are compared to those obtained from uniform support excitations. Performance objectives based on the acceptance criteria declared by the TBI Guidelines are compared between uniform and multiple support excitations. The results demonstrate that input motion discrepancy has detrimental effects on the local and global response of the tower.
Keywords: high-rise building, nonlinear time history analysis, multiple support excitation, performance-based design
Procedia PDF Downloads 285
16639 Day of the Week Patterns and the Financial Trends' Role: Evidence from the Greek Stock Market during the Euro Era
Authors: Nikolaos Konstantopoulos, Aristeidis Samitas, Vasileiou Evangelos
Abstract:
The purpose of this study is to examine whether financial trends influence not only the stock markets' returns but also their anomalies. We choose to study the day of the week (DOW) effect for the Greek stock market during the Euro period (2002-12), because during this specific period there are no significant structural changes and there are long-term financial trends. Moreover, in order to avoid possible methodological counterarguments that usually arise in the literature, we apply several linear (OLS) and nonlinear (GARCH-family) models to our sample until we reach the conclusion that the TGARCH model fits our sample better than any other. Our results suggest that in the Greek stock market there is a long-term predisposition for positive/negative returns depending on the weekday. However, the statistical significance is influenced by the financial trend. This influence may be the reason why there are conflicting findings in the literature over time. Finally, we combine the DOW's empirical findings from 1985-2012, and we may assume that in the Greek case there is a tendency for a long-lived turn-of-the-week effect.
Keywords: day of the week effect, GARCH family models, Athens stock exchange, economic growth, crisis
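For readers unfamiliar with the TGARCH specification used here, the following Python sketch writes out the GJR/TGARCH(1,1) conditional variance recursion together with a day-of-week dummy mean equation; the parameter values, the synthetic data and the variable names are illustrative assumptions, not estimates from the study.

```python
import numpy as np

def tgarch_variance(resid, omega, alpha, gamma, beta):
    """GJR/TGARCH(1,1) conditional variance recursion:
    sigma2_t = omega + (alpha + gamma * 1[resid_{t-1} < 0]) * resid_{t-1}**2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty(len(resid))
    sigma2[0] = np.var(resid)                       # initialise at the sample variance
    for t in range(1, len(resid)):
        leverage = gamma if resid[t - 1] < 0.0 else 0.0
        sigma2[t] = omega + (alpha + leverage) * resid[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def dow_residuals(returns, weekdays, coeffs):
    """Mean equation with day-of-week dummies: r_t = c_{d(t)} + e_t (weekday codes 0=Mon..4=Fri)."""
    return np.asarray(returns) - np.eye(5)[weekdays] @ np.asarray(coeffs)

# illustrative use on synthetic data
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 250)
weekdays = np.arange(250) % 5
e = dow_residuals(returns, weekdays, coeffs=[0.0005, -0.0002, 0.0, 0.0001, 0.0008])
print(tgarch_variance(e, omega=1e-6, alpha=0.05, gamma=0.08, beta=0.9)[:5])
```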
Procedia PDF Downloads 410
16638 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow
Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen
Abstract:
Turbulence modelling is still evolving, and efforts are ongoing to improve and develop numerical methods to simulate the real turbulence structures by using empirical and experimental information. The monotonically integrated large eddy simulation (MILES) is an attractive approach for modelling turbulence in high-Re flows, which is based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work, this approach has been used, and the action of the SGS model has been included implicitly through the intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source CFD OpenFOAM libraries. The role of the flux limiter schemes, namely Gamma, superBee, van Albada and van Leer, is studied in predicting turbulent statistical quantities for a fully developed channel flow with a friction Reynolds number Reτ = 180, and the numerical predictions are compared with the well-established Direct Numerical Simulation (DNS) results for studying wall-generated turbulence. It is inferred from the numerical predictions that the Gamma, van Leer and van Albada limiters produce more diffusion and overpredict the velocity profiles, while the superBee scheme reproduces the velocity profiles and turbulence statistical quantities in good agreement with the reference DNS data in the streamwise direction, although it deviates slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space to draw conclusions on the performance of the flux limiter schemes in the OpenFOAM context.
Keywords: flux limiters, implicit SGS, MILES, OpenFOAM, turbulence statistics
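The superBee, van Leer and van Albada schemes compared above are TVD schemes characterised by a limiter function ψ(r) of the ratio r of consecutive solution gradients; a minimal Python sketch of these three limiter functions is given below. The Gamma scheme is an NVD-based blended scheme in OpenFOAM and does not reduce to a simple ψ(r), so it is omitted here.

```python
import numpy as np

def superbee(r):
    """Superbee TVD limiter: psi(r) = max(0, min(2r, 1), min(r, 2))."""
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0), np.minimum(r, 2.0)))

def van_leer(r):
    """van Leer limiter: psi(r) = (r + |r|) / (1 + |r|)."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def van_albada(r):
    """van Albada limiter: psi(r) = (r^2 + r) / (r^2 + 1) for r > 0, else 0."""
    return np.maximum(0.0, (r ** 2 + r) / (r ** 2 + 1.0))

r = np.linspace(-1.0, 4.0, 11)          # ratio of consecutive gradients
print(superbee(r))
print(van_leer(r))
print(van_albada(r))
```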
Procedia PDF Downloads 190
16637 Towards Designing of a Potential New HIV-1 Protease Inhibitor Using Quantitative Structure-Activity Relationship Study in Combination with Molecular Docking and Molecular Dynamics Simulations
Authors: Mouna Baassi, Mohamed Moussaoui, Hatim Soufi, Sanchaita Rajkhowa, Ashwani Sharma, Subrata Sinha, Said Belaaouad
Abstract:
Human Immunodeficiency Virus type 1 protease (HIV-1 PR) is one of the most challenging targets of the antiretroviral therapy used in the treatment of AIDS-infected people. The performance of protease inhibitors (PIs) is limited by the development of protease mutations that can promote resistance to the treatment. The current study was carried out using statistical and bioinformatics tools. A series of thirty-three compounds with known enzymatic inhibitory activities against HIV-1 protease was used in this paper to build a mathematical model relating the structure to the biological activity. These compounds were designed by software; their descriptors were computed using various tools, such as Gaussian, Chem3D, ChemSketch and MarvinSketch. Computational methods generated the best model based on its statistical parameters. The model's applicability domain (AD) was elaborated. Furthermore, one compound has been proposed as efficient against HIV-1 protease, with biological activity comparable to the existing ones; this drug candidate was evaluated using ADMET properties and Lipinski's rule. Molecular docking performed on wild-type and mutant HIV-1 proteases allowed the investigation of the interaction types displayed between the proteases and the ligands, Darunavir (DRV) and the new drug (ND). Molecular dynamics simulation was also used in order to investigate the stability of the complexes, allowing a comparative study of the performance of both ligands (DRV and ND). Our study suggested that the new molecule showed results comparable to those of Darunavir and may be used for further experimental studies. Our study may also be used as a pipeline to search for and design new potential inhibitors of HIV-1 protease.
Keywords: QSAR, ADMET properties, molecular docking, molecular dynamics simulation
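Lipinski's rule of five, mentioned above as one of the drug-likeness filters, can be expressed as a short Python check; the descriptor values in the example are only illustrative, roughly Darunavir-like numbers, not values reported in the study.

```python
def lipinski_pass(mol_weight, logp, h_donors, h_acceptors):
    """Lipinski's rule of five: flag likely poor oral bioavailability when more than one of
    these is violated: MW <= 500 Da, logP <= 5, H-bond donors <= 5, H-bond acceptors <= 10."""
    violations = sum([mol_weight > 500.0, logp > 5.0, h_donors > 5, h_acceptors > 10])
    return violations <= 1, violations

# illustrative, approximate Darunavir-like descriptor values (not taken from the study)
ok, n_violations = lipinski_pass(mol_weight=547.7, logp=2.9, h_donors=3, h_acceptors=9)
print(ok, n_violations)   # passes with a single (molecular weight) violation
```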
Procedia PDF Downloads 40
16636 A Model of Condensation and Solidification of Metallurgical Vapor in a Supersonic Nozzle
Authors: Thien X. Dinh, Peter Witt
Abstract:
A one-dimensional model for the simulation of condensation and solidification of a metallurgical vapor in a gas mixture during supersonic expansion is presented. In the model, condensation is based on critical nucleation and drop-growth theory. When the temperature falls below the supercooling point, all the liquid droplets formed in the condensation phase are assumed to solidify at an infinite rate. The model was verified against a Computational Fluid Dynamics simulation of magnesium vapor condensation and solidification. The obtained results are in reasonable agreement with the CFD data. Therefore, the model is a promising, efficient tool for use in the design process for supersonic nozzles applied in mineral processes, since it is faster than its CFD counterpart by an order of magnitude.
Keywords: condensation, metallurgical flow, solidification, supersonic expansion
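In classical nucleation theory, which the condensation model above builds on, the critical droplet radius follows from the Kelvin relation r* = 2σv/(k_B T ln S); the short Python sketch below evaluates it with rough, illustrative magnesium-vapour property values that are assumptions for the example, not data from the paper.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K

def critical_radius(sigma, v_molecule, T, S):
    """Kelvin critical droplet radius from classical nucleation theory:
    r* = 2 * sigma * v_molecule / (k_B * T * ln S), valid for supersaturation S > 1."""
    return 2.0 * sigma * v_molecule / (K_B * T * np.log(S))

# rough, illustrative values for magnesium vapour (surface tension ~0.56 N/m,
# ~2.5e-29 m^3 per atom in the liquid); not data from the paper
r_star = critical_radius(sigma=0.56, v_molecule=2.5e-29, T=1000.0, S=5.0)
print(f"critical radius ~ {r_star * 1e9:.2f} nm")
```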
Procedia PDF Downloads 63
16635 The Power of the Proper Orthogonal Decomposition Method
Authors: Charles Lee
Abstract:
The Proper Orthogonal Decomposition (POD) technique has been used as a model reduction tool for many applications in engineering and science. In principle, one begins with an ensemble of data, called snapshots, collected from an experiment or from laboratory results. The beauty of the POD technique is that, when applied, the entire data set can be represented by the smallest number of orthogonal basis elements. It is this capability that allows us to reduce the complexity and dimensions of many physical applications. Mathematical formulations and numerical schemes for the POD method will be discussed along with applications in NASA's Deep Space Large Antenna Arrays, satellite image reconstruction, cancer detection with DNA microarray data, maximizing stock return, and medical imaging.
Keywords: reduced-order methods, principal component analysis, cancer detection, image reconstruction, stock portfolios
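A minimal Python sketch of snapshot POD via the singular value decomposition is given below: the columns of the snapshot matrix are the collected snapshots, and the basis is truncated at a prescribed energy fraction. The mean subtraction and the 99% default are common choices assumed here for illustration.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Snapshot POD via SVD. snapshots: (n_dof, n_snapshots) array, columns = snapshots.
    Returns the orthonormal modes capturing the requested fraction of the energy."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)    # subtract the mean field
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cumulative, energy)) + 1          # smallest rank reaching the target
    return U[:, :r], s[:r]

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))         # e.g. 1000 degrees of freedom, 50 snapshots
modes, sing_vals = pod_basis(X, energy=0.9)
coeffs = modes.T @ X                         # reduced-order representation of the ensemble
print(modes.shape, coeffs.shape)
```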
Procedia PDF Downloads 84
16634 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling
Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow
Abstract:
Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring other data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets, and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package, Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package, Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic check. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals’ ambulatory physiological measures, and self-report affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.Keywords: dynamic modeling, missing data, mobility, multiple imputation
Procedia PDF Downloads 164
16633 The State Model of Corporate Governance
Authors: Asaiel Alohaly
Abstract:
A theoretical framework for corporate governance is needed to bridge the gap between the corporate governance of private companies and State-owned Enterprises (SOEs). The two dominant models, the shareholder and stakeholder models, do not always address the specific requirements and challenges posed by 'hybrid' companies; namely, previously national bodies that have been privatised but where the government retains significant control or holds a majority of shares. Thus, an exploratory theoretical study is needed to identify how 'hybrid' companies should be defined and why the state model should be acknowledged, since it is the less conspicuous model in comparison with the shareholder and stakeholder models. This research focuses on the state model of corporate governance to understand the complex ownership, control pattern, goals, and corporate governance of these hybrid companies. The significance of this research lies in the fact that the available publications on the state model are limited. The outcomes of this research are as follows. It became evident that the state model exists in the ecosystem. However, corporate governance theories have not extensively covered this model, though there is a lot being said about it by the OECD and the World Bank. In response to this gap between theory and industry practice, this research argues for the state model, which proceeds from an understanding of the institutionally embedded character of hybrid companies where the government either holds a majority of the total shares or is a controlling shareholder.
Keywords: corporate governance, control, shareholders, state model
Procedia PDF Downloads 143
16632 Optimal Design of Step-Stress Partially Life Test Using Multiply Censored Exponential Data with Random Removals
Authors: Showkat Ahmad Lone, Ahmadur Rahman, Ariful Islam
Abstract:
The major assumption in accelerated life tests (ALT) is that the mathematical model relating the lifetime of a test unit and the stress are known or can be assumed. In some cases, such life–stress relationships are not known and cannot be assumed, i.e. ALT data cannot be extrapolated to use condition. So, in such cases, partially accelerated life test (PALT) is a more suitable test to be performed for which tested units are subjected to both normal and accelerated conditions. This study deals with estimating information about failure times of items under step-stress partially accelerated life tests using progressive failure-censored hybrid data with random removals. The life data of the units under test is considered to follow exponential life distribution. The removals from the test are assumed to have binomial distributions. The point and interval maximum likelihood estimations are obtained for unknown distribution parameters and tampering coefficient. An optimum test plan is developed using the D-optimality criterion. The performances of the resulting estimators of the developed model parameters are evaluated and investigated by using a simulation algorithm.Keywords: binomial distribution, d-optimality, multiple censoring, optimal design, partially accelerated life testing, simulation study
Procedia PDF Downloads 320
16631 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models
Authors: Azadeh Jafari, Robert G. Owens
Abstract:
In this study, a geometrical multiscale approach, which means coupling together the 2-D Navier-Stokes equations, constitutive equations and 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. In this study we introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of our new scheme, a comparison has been performed between the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure, and those obtained using coupling with the lumped parameter model. Comprehensive studies have been carried out on the sensitivity of the numerical scheme to the initial conditions, elasticity and number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluids problems whose application goes significantly beyond the one addressed in this work.
Keywords: geometrical multiscale models, haemorheology model, coupled 2-D Navier-Stokes 0-D lumped parameter modeling, computational fluid dynamics
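A common concrete form of the 0-D lumped parameter outflow model is the three-element (RCR) Windkessel; the Python sketch below integrates such a model to turn an outlet flow-rate history into an outlet pressure. It is a generic illustration under assumed parameter values, not the specific lumped model used in the study.

```python
import numpy as np

def rcr_outlet_pressure(Q, dt, Rp=0.05, C=1.0, Rd=1.0, p_distal=0.0):
    """Three-element Windkessel (RCR) lumped-parameter outflow model:
    C dP_c/dt = Q - (P_c - p_distal)/Rd,   P_outlet = P_c + Rp * Q.
    Q is the flow-rate history leaving the 2-D domain; parameter values are illustrative."""
    Q = np.asarray(Q, dtype=float)
    Pc = np.zeros_like(Q)
    for n in range(1, len(Q)):
        Pc[n] = Pc[n - 1] + dt * (Q[n - 1] - (Pc[n - 1] - p_distal) / Rd) / C   # explicit Euler
    return Pc + Rp * Q

t = np.linspace(0.0, 2.0, 2001)
Q = 1.0 + 0.5 * np.sin(2.0 * np.pi * t)          # pulsatile outlet flow waveform
P = rcr_outlet_pressure(Q, dt=t[1] - t[0])
print(P[0], P[-1])
```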
Procedia PDF Downloads 361
16630 Comparison of Two Theories for the Critical Laser Radius in Thermal Quantum Plasma
Authors: Somaye Zare
Abstract:
The critical beam radius is a significant factor that predicts the behavior of the laser beam in the plasma: if the laser beam radius is sufficiently greater than it, the beam will experience stable focusing in the plasma; otherwise, the beam will diverge after entering the plasma. In this work, considering the paraxial approximation and moment theories, the localization of a relativistic laser beam in a thermal quantum plasma is investigated. Using the dielectric function obtained in the quantum hydrodynamic model, the mathematical equation for the laser beam width parameter is obtained and solved numerically by the fourth-order Runge-Kutta method. The results demonstrate that a stronger focusing effect occurs in the moment theory compared to the paraxial approximation. Besides, in both theories, with increasing Fermi temperature, plasma density, and laser intensity, the oscillation rate of the beam width parameter grows and the focusing length reduces, which means an improved focusing effect. Furthermore, it is understood that the behavior of the critical laser radius differs between the two theories: in the paraxial approximation, the critical radius, after a minimum value, is enhanced with increasing laser intensity, but in the moment theory, with increasing laser intensity, the critical radius decreases until it becomes independent of the laser intensity.
Keywords: laser localization, quantum plasma, paraxial approximation, moment theory, quantum hydrodynamic model
Procedia PDF Downloads 73
16629 Numerical Study of Dynamic Buckling of Fiber Metal Laminates' Profile
Authors: Monika Kamocka, Radoslaw Mania
Abstract:
The design of Fiber Metal Laminates, combining thin aluminum sheets and prepreg layers, allows creating a hybrid structure with a high strength-to-weight ratio. This feature makes FMLs very attractive for the aerospace industry, where thin-walled structures are commonly used. Nevertheless, such structures are prone to the buckling phenomenon. Buckling can occur under static loads as well as under dynamic pulse loads. In this paper, the problem of dynamic buckling of open cross-section FML profiles under axial dynamic compression in the form of a pulse load of finite duration is investigated. In the numerical model, the material properties of the FML constituents were assumed as nonlinear elastic-plastic aluminum and linear-elastic glass-fiber-reinforced composite. The influence of the pulse shape was investigated: sinusoidal and rectangular pulse loads of finite duration were compared in two ways, i.e. with respect to pulse magnitude and force impulse. The dynamic critical buckling load was determined based on the Budiansky-Hutchinson, Ari-Gur and Simonetta dynamic buckling criteria.
Keywords: dynamic buckling, dynamic stability, Fiber Metal Laminate, Finite Element Method
Procedia PDF Downloads 194
16628 Investigation of Distortion and Impact Strength of 304L Butt Joint Using Different Weld Groove
Authors: A. Sharma, S. S. Sandhu, A. Shahi, A. Kumar
Abstract:
The aim of the present investigation was to carry out finite element modeling of distortion in the case of a butt weld. 12 mm thick AISI 304L plates were butt welded using three different groove designs, namely double U, double V and composite. A full nonlinear heat transfer simulation of shielded metal arc welding (SMAW) is carried out. Aspects such as the temperature-dependent thermal properties of AISI stainless steel above the liquid phase and the effect of thermal boundary conditions were included in the model. Since the welding heat dissipation characteristics changed due to the variable groove design, significant changes in the microhardness, tensile strength and impact toughness of the joints were observed. The cumulative distortion was found to be least in the double V joint, followed by the composite and double U joints. All the joints have a joint efficiency of more than 100%. The CVN value of the double V groove weld metal was the highest. The experimental and FEM results were compared and reveal a very good correlation between distortion and weld groove design for a multipass joint, with an agreement of 83%.
Keywords: AISI 304L, butt joint, distortion, FEM, groove design, SMAW
Procedia PDF Downloads 408
16627 Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method
Authors: Mai Abdul Latif, Yuntian Feng
Abstract:
The Applied Element Method (AEM) is a method that was developed to aid in the analysis of the collapse of structures. Currently available methods cannot deal with structural collapse accurately; however, AEM can simulate the behavior of a structure from an initial state of no loading until collapse of the structure. The elements in AEM are connected with sets of normal and shear springs along the edges of the elements that represent the stresses and strains of the element in that region. The elements are rigid, and the material properties are introduced through the spring stiffness. Nonlinear dynamic analysis has been widely modelled using the finite element method for the analysis of progressive collapse of structures; however, difficulties in the analysis were found in the presence of excessively deformed elements with cracking or crushing, as well as a high computational cost and difficulties in choosing appropriate material models for the analysis. The Applied Element Method is developed and coded here to significantly improve the accuracy and also reduce the computational costs of the method. The scheme works for both linear elastic and nonlinear cases, including elasto-plastic materials. This paper focuses on elastic and elasto-plastic material behaviour, where the number of springs required for an accurate analysis is tested. A steel cantilever beam is used as the structural element for the analysis. The first modification of the method is based on Gaussian quadrature to distribute the springs. Usually, the springs are equally distributed along the face of the element, but it was found that using Gaussian springs, only up to 2 springs were required for perfectly elastic cases, while with equal springs at least 5 springs were required. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification is based on adapting the number of springs required depending on the elasticity of the material. After the first Newton-Raphson iteration, the von Mises stress conditions were used to calculate the stresses in the springs, and the springs are classified as elastic or plastic. Then transition springs, springs located exactly between the elastic and plastic regions, are interpolated between regions to strictly identify the elastic and plastic regions in the cross-section. Since a rectangular cross-section was analyzed, there were two plastic regions (top and bottom) and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region and 2 springs for each plastic region. This improves the computational cost, reducing the minimum number of springs in elasto-plastic cases to only 6 springs. All the work is done using MATLAB, and the results will be compared to models of structural elements using the finite element method in ANSYS.
Keywords: applied element method, elasto-plastic, Gaussian springs, nonlinear
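The "Gaussian springs" idea described above amounts to placing the connecting springs at Gauss-Legendre quadrature points along an element edge, with tributary lengths given by the quadrature weights; the short Python sketch below illustrates this placement. The original work is implemented in MATLAB, so this is only an assumed, language-translated illustration of the layout, not the authors' code.

```python
import numpy as np

def gaussian_spring_layout(edge_length, n_springs):
    """Place connecting springs at Gauss-Legendre points along an element edge;
    the quadrature weights give each spring's tributary length (per unit thickness)."""
    xi, w = np.polynomial.legendre.leggauss(n_springs)    # points and weights on [-1, 1]
    positions = 0.5 * edge_length * (xi + 1.0)            # map to [0, edge_length]
    tributary = 0.5 * edge_length * w                     # tributary lengths sum to edge_length
    return positions, tributary

pos, trib = gaussian_spring_layout(edge_length=0.2, n_springs=2)
print(pos, trib, trib.sum())    # two Gauss springs per edge, as in the elastic case above
```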
Procedia PDF Downloads 225
16626 Toward a Characteristic Optimal Power Flow Model for Temporal Constraints
Authors: Zongjie Wang, Zhizhong Guo
Abstract:
While the regular optimal power flow model focuses on a single time scan, the optimization of power systems is typically intended for a time duration with respect to a desired objective function. In this paper, a temporal optimal power flow model for a time period is proposed. To reduce the computation burden needed for calculating temporal optimal power flow, a characteristic optimal power flow model is proposed, which employs different characteristic load patterns to represent the objective function and security constraints. A numerical method based on the interior point method is also proposed for solving the characteristic optimal power flow model. Both the temporal optimal power flow model and characteristic optimal power flow model can improve the systems’ desired objective function for the entire time period. Numerical studies are conducted on the IEEE 14 and 118-bus test systems to demonstrate the effectiveness of the proposed characteristic optimal power flow model.Keywords: optimal power flow, time period, security, economy
Procedia PDF Downloads 451
16625 The Evaluation Model for the Quality of Software Based on Open Source Code
Authors: Li Donghong, Peng Fuyang, Yang Guanghua, Su Xiaoyan
Abstract:
Using open source code is a popular method of software development. How to evaluate the quality of such software therefore becomes more important. This paper introduces an evaluation model. The model evaluates the quality from four dimensions: technology, production, management, and development. Each dimension includes many indicators. The weight of each indicator can be modified according to the purpose of the evaluation. The paper also introduces a method of using the model. The evaluation result can provide useful guidance for evaluating or purchasing the software.
Keywords: evaluation model, software quality, open source code, evaluation indicator
Procedia PDF Downloads 389
16624 Applying the Crystal Model to Different Nuclear Systems
Authors: A. Amar
Abstract:
The angular distributions of the nuclear systems under consideration have been analyzed in the framework of the optical model (OM), where the real part was taken in the crystal model form. A crystal model (CM) has been applied to deuterons elastically scattered by ⁶,⁷Li and ⁹Be. A crystal model (CM) + distorted-wave Born approximation (DWBA) + dynamic polarization potential (DPP) has also been applied to deuterons elastically scattered by ⁶,⁷Li and ⁹Be. In addition, a crystal model has been applied to ⁶Li elastically scattered by ¹⁶O and ²⁸Sn, to the ⁷Li+⁷Li system and to the ¹²C(α, ⁸Be)⁸Be reaction. The continuum-discretized coupled-channels (CDCC) method has been applied to the ⁷Li+⁷Li system, and agreement between the crystal model and the CDCC method has been observed. In general, the models succeeded in reproducing the differential cross sections over the full angular range and for all the energies under consideration.
Keywords: optical model (OM), crystal model (CM), distorted-wave Born approximation (DWBA), dynamic polarization potential (DPP), continuum-discretized coupled-channels (CDCC) method, deuteron elastic scattering by ⁶,⁷Li and ⁹Be
Procedia PDF Downloads 79
16623 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
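As a concrete reference point for the Diffusion Maps step described above, the Python sketch below builds Gaussian affinities, row-normalises them into a Markov matrix and takes its leading non-trivial eigenvectors as the low-dimensional coordinates; the kernel scale, the diffusion time and the number of retained components are assumptions for the example.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2, t=1):
    """Basic diffusion-map embedding: Gaussian affinities, row-normalised Markov matrix,
    leading non-trivial eigenvectors scaled by eigenvalue**t give the new coordinates."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)    # pairwise squared distances
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)                        # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)                              # P is row-stochastic; keep real parts
    vals, vecs = vals.real[order], vecs.real[:, order]
    return (vals[1:n_components + 1] ** t) * vecs[:, 1:n_components + 1]   # skip the trivial mode

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))          # 200 high-dimensional samples
Y = diffusion_map(X, eps=5.0, n_components=3)
print(Y.shape)                              # (200, 3) diffusion coordinates
```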
Procedia PDF Downloads 108
16622 Reworking of the Anomalies in the Discounted Utility Model as a Combination of Cognitive Bias and Decrease in Impatience: Decision Making in Relation to Bounded Rationality and Emotional Factors in Intertemporal Choices
Authors: Roberta Martino, Viviana Ventre
Abstract:
Every day we face choices whose consequences are deferred in time. These types of choices are the intertemporal choices and play an important role in the social, economic, and financial world. The Discounted Utility Model is the mathematical model of reference to calculate the utility of intertemporal prospects. The discount rate is the main element of the model as it describes how the individual perceives the indeterminacy of subsequent periods. Empirical evidence has shown a discrepancy between the behavior expected from the predictions of the model and the effective choices made from the decision makers. In particular, the term temporal inconsistency indicates those choices that do not remain optimal with the passage of time. This phenomenon has been described with hyperbolic models of the discount rate which, unlike the linear or exponential nature assumed by the discounted utility model, is not constant over time. This paper explores the problem of inconsistency by tracing the decision-making process through the concept of impatience. The degree of impatience and the degree of decrease of impatience are two parameters that allow to quantify the weight of emotional factors and cognitive limitations during the evaluation and selection of alternatives. In fact, although the theory assumes perfectly rational decision makers, behavioral finance and cognitive psychology have made it possible to understand that distortions in the decision-making process and emotional influence have an inevitable impact on the decision-making process. The degree to which impatience is diminished is the focus of the first part of the study. By comparing consistent and inconsistent preferences over time, it was possible to verify that some anomalies in the discounted utility model are a result of the combination of cognitive bias and emotional factors. In particular: the delay effect and the interval effect are compared through the concept of misperception of time; starting from psychological considerations, a criterion is proposed to identify the causes of the magnitude effect that considers the differences in outcomes rather than their ratio; the sign effect is analyzed by integrating in the evaluation of prospects with negative outcomes the psychological aspects of loss aversion provided by Prospect Theory. An experiment implemented confirms three findings: the greatest variation in the degree of decrease in impatience corresponds to shorter intervals close to the present; the greatest variation in the degree of impatience occurs for outcomes of lower magnitude; the variation in the degree of impatience is greatest for negative outcomes. The experimental phase was implemented with the construction of the hyperbolic factor through the administration of questionnaires constructed for each anomaly. This work formalizes the underlying causes of the discrepancy between the discounted utility model and the empirical evidence of preference reversal.Keywords: decreasing impatience, discount utility model, hyperbolic discount, hyperbolic factor, impatience
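To make the temporal-inconsistency mechanism discussed above concrete, the Python sketch below contrasts exponential and Mazur-type hyperbolic discounting on a smaller-sooner versus larger-later choice; with the illustrative (assumed) rates, the hyperbolic evaluation reverses its preference as both rewards are moved further into the future, while the exponential evaluation does not.

```python
def exponential(value, delay, r=0.05):
    """Time-consistent exponential discounting: V = value / (1 + r)**delay."""
    return value / (1.0 + r) ** delay

def hyperbolic(value, delay, k=0.25):
    """Mazur-type hyperbolic discounting: V = value / (1 + k * delay)."""
    return value / (1.0 + k * delay)

# Smaller-sooner (100 at delay 1) vs larger-later (150 at delay 5) reward,
# evaluated now (shift = 0) and when both options are pushed 10 periods away.
for shift in (0, 10):
    h_pref = "SS" if hyperbolic(100, 1 + shift) > hyperbolic(150, 5 + shift) else "LL"
    e_pref = "SS" if exponential(100, 1 + shift) > exponential(150, 5 + shift) else "LL"
    print(f"shift={shift:2d}: hyperbolic prefers {h_pref}, exponential prefers {e_pref}")
# hyperbolic: SS now but LL from a distance (preference reversal); exponential: LL in both cases
```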
Procedia PDF Downloads 103
16621 A Fuzzy Programming Approach for Solving Intuitionistic Fuzzy Linear Fractional Programming Problem
Authors: Sujeet Kumar Singh, Shiv Prasad Yadav
Abstract:
This paper develops an approach for solving the intuitionistic fuzzy linear fractional programming (IFLFP) problem, in which the cost coefficients of the objective function, the resources, and the technological coefficients are triangular intuitionistic fuzzy numbers. Here, the IFLFP problem is transformed into an equivalent crisp multi-objective linear fractional programming (MOLFP) problem. By using a fuzzy mathematical programming approach, the transformed MOLFP problem is reduced to a single-objective linear programming (LP) problem. The proposed procedure is illustrated through a numerical example.
Keywords: triangular intuitionistic fuzzy number, linear programming problem, multi-objective linear programming problem, fuzzy mathematical programming, membership function
Procedia PDF Downloads 566
16620 Bifurcation and Stability Analysis of the Dynamics of Cholera Model with Controls
Authors: C. E. Madubueze, S. C. Madubueze, S. Ajama
Abstract:
Cholera is a disease that is predominantly common in developing countries due to poor sanitation and overcrowding. In this paper, a deterministic model for the dynamics of cholera is developed, and control measures such as health educational messages, therapeutic treatment, and vaccination are incorporated in the model. The effective reproduction number is computed in terms of the model parameters. The existence and stability of the equilibrium states, the disease-free and endemic equilibrium states, are established and shown to be locally and globally asymptotically stable when R0 < 1 and R0 > 1, respectively. The existence of backward bifurcation of the model is investigated. Furthermore, numerical simulation of the developed model is carried out to show the impact of the control measures, and the result indicates that combined control measures will help to reduce the spread of cholera in the population.
Keywords: backward bifurcation, cholera, equilibrium, dynamics, stability
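For readers who want a starting point for this kind of simulation, the Python sketch below integrates a generic SIR-plus-bacteria cholera model with simple vaccination and treatment rates; the model structure and every parameter value are assumptions for illustration and are not the formulation analysed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cholera(t, y, beta=0.2, kappa=1e5, nu=0.01, tau=0.2, gamma=0.1,
            xi=10.0, delta=0.3, mu=1e-4, N=1e4):
    """Generic SIR-B cholera sketch: S, I, R humans and B environmental bacteria,
    with vaccination rate nu and treatment rate tau (all values illustrative)."""
    S, I, R, B = y
    force = beta * B / (kappa + B)                 # environment-to-human transmission
    dS = mu * N - force * S - (nu + mu) * S
    dI = force * S - (gamma + tau + mu) * I
    dR = (gamma + tau) * I + nu * S - mu * R
    dB = xi * I - delta * B                        # shedding into and decay of the reservoir
    return [dS, dI, dR, dB]

sol = solve_ivp(cholera, (0.0, 365.0), [9990.0, 10.0, 0.0, 0.0], max_step=1.0)
print("peak infected:", sol.y[1].max())            # lower with stronger nu/tau (control measures)
```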
Procedia PDF Downloads 431
16619 Flood Predicting in Karkheh River Basin Using Stochastic ARIMA Model
Authors: Karim Hamidi Machekposhti, Hossein Sedghi, Abdolrasoul Telvari, Hossein Babazadeh
Abstract:
Floods have a huge environmental and economic impact. Therefore, flood prediction is given a lot of attention due to its importance. This study analysed the annual maximum streamflow (discharge) (AMS or AMD) of the Karkheh River in the Karkheh River Basin for flood prediction using an ARIMA model. For this purpose, we use the Box-Jenkins approach, which contains a four-stage method: model identification, parameter estimation, diagnostic checking and forecasting (prediction). The main tools used in the ARIMA modelling were the SAS and SPSS software. Model identification was done by visual inspection of the ACF and PACF. The SAS software computed the model parameters using the ML, CLS and ULS methods. The diagnostic checking tests, the AIC criterion and the RACF and RPACF graphs, were used for verification of the selected model. In this study, the best ARIMA model for the annual maximum discharge (AMD) time series was (4,1,1), with an AIC value of 88.87. The RACF and RPACF showed the residuals' independence. Used to forecast the AMD for 10 future years, this model demonstrated its ability to predict floods of the river under study in the Karkheh River Basin. Model accuracy was checked by comparing the predicted and observed series using the coefficient of determination (R²).
Keywords: time series modelling, stochastic processes, ARIMA model, Karkheh River
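A minimal Python sketch of the forecasting step is given below using statsmodels' ARIMA implementation with the (4,1,1) order reported above; the annual maximum discharge series in the example is synthetic, so the numbers it produces are purely illustrative and do not reproduce the study's results.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# synthetic annual maximum discharge (AMD) series standing in for the Karkheh record
rng = np.random.default_rng(1)
amd = pd.Series(800.0 + np.cumsum(rng.normal(0.0, 60.0, 50)),
                index=pd.period_range("1965", periods=50, freq="Y"))

model = ARIMA(amd, order=(4, 1, 1))       # the (p, d, q) = (4, 1, 1) order selected in the study
result = model.fit()
print(result.aic)                         # information criterion used for model comparison
print(result.forecast(steps=10))          # 10-year-ahead AMD forecast
```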
Procedia PDF Downloads 287
16618 Theoretical Analysis and Design Consideration of Screened Heat Pipes for Low-Medium Concentration Solar Receivers
Authors: Davoud Jafari, Paolo Di Marco, Alessandro Franco, Sauro Filippeschi
Abstract:
This paper summarizes the results of an investigation into heat pipe heat transfer for solar collector applications. The study aims to show the feasibility of a concentrating solar collector coupled with a heat pipe. Particular emphasis is placed on the capillary and boiling limits in capillary porous structures with different mesh numbers and wick thicknesses. A mathematical model of a cylindrical heat pipe is applied to study its behaviour when it is exposed to a higher heat input at the evaporator. The steady-state analytical model includes two-dimensional heat conduction in the heat pipe's wall, the liquid flow in the wick and the vapor hydrodynamics. A sensitivity analysis was conducted by considering different design criteria and working conditions. Different wicks (mesh 50, 100, 150, 200, 250, and 300) and different porosities (0.5, 0.6, 0.7, 0.8, and 0.9) with different wick thicknesses (0.25, 0.5, 1, 1.5, and 2 mm) are analyzed with water as the working fluid. Results show that it is possible to improve the heat transfer capability (HTC) of a heat pipe by selecting the appropriate wick thickness, effective pore radius, and lengths for a given heat pipe configuration, and that there exist optimal design criteria (optimal wick thickness and evaporator, adiabatic and condenser section lengths). It is shown that the boiling and wicking limits are connected and occur in dependence on each other. As different parts of the heat pipe's external surface collect different fractions of the total incoming insolation, the analysis of the non-uniform heat flux distribution indicates that the peak heat flux is not an affecting parameter. The parametric investigations are aimed at determining the working limits and thermal performance of the heat pipe for medium-temperature solar collector applications.
Keywords: screened heat pipes, analytical model, boiling and capillary limits, concentrating collector
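The capillary limit referred to above is commonly estimated from the classical expression Q_cap ≈ (ρ_l σ h_fg/μ_l)(K A_w/L_eff)(2/r_eff) when gravity is neglected; the Python sketch below evaluates it with illustrative water and screen-wick property values that are assumptions for the example, not design data from the paper.

```python
def capillary_limit(sigma, rho_l, h_fg, mu_l, K, A_w, L_eff, r_eff):
    """Classical capillary heat-transport limit of a screened heat pipe, gravity neglected:
    Q_cap = (rho_l * sigma * h_fg / mu_l) * (K * A_w / L_eff) * (2 / r_eff)."""
    return (rho_l * sigma * h_fg / mu_l) * (K * A_w / L_eff) * (2.0 / r_eff)

# illustrative water / screen-wick values (not design data from the paper)
q_cap = capillary_limit(sigma=0.059, rho_l=958.0, h_fg=2.26e6, mu_l=2.8e-4,
                        K=1e-10, A_w=1e-5, L_eff=0.2, r_eff=5e-5)
print(f"capillary limit ~ {q_cap:.0f} W")
```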
Procedia PDF Downloads 560