Search results for: Mathematical modeling
938 Experimental Investigation of Beams Having Spring Mass Resonators
Authors: Somya R. Patro, Arnab Banerjee, G. V. Ramana
Abstract:
A flexural beam carrying elastically mounted concentrated masses, such as engines, motors, oscillators, or vibration absorbers, is often encountered in the mechanical, civil, and aeronautical engineering domains. To prevent resonance conditions, designers must predict the natural frequencies of such a constrained beam system. This paper presents experimental and analytical studies on vibration suppression in a cantilever beam with a tip mass, using a spring-mass resonator to achieve local resonance conditions. The system consists of a 3D-printed polylactic acid (PLA) beam screwed to the base plate of a shaker system. An accelerometer, which also acts as the tip mass, is attached at the free end, and a spring and a mass are attached at the bottom to replicate the mechanism of a spring-mass resonator. The Fast Fourier Transform (FFT) algorithm converts acceleration time histories into frequency-amplitude plots, from which transmittance is calculated as a function of the excitation frequency. The mathematical formulation is based on the transfer matrix method, and the governing differential equations follow Euler-Bernoulli beam theory. The experimental results are successfully validated against the analytical results, providing essential confidence in the proposed methodology. The beam spring-mass system is then converted to an equivalent two-degree-of-freedom system, from which the frequency response function is obtained. The H2 optimization technique is also used to obtain a closed-form expression for the optimum spring stiffness, which shows the influence of spring stiffness on the system's natural frequency and vibration response.
Keywords: Euler-Bernoulli beam theory, fast Fourier transform, natural frequencies, polylactic acid, transmittance, vibration absorbers
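As a rough illustration of the transmittance calculation described in this abstract, the sketch below converts two acceleration records to the frequency domain with a radix-2 FFT and takes the magnitude ratio at each frequency bin. The signal length, sampling interval, and amplitudes are hypothetical, and a real analysis would window and average the spectra; this is not the authors' code.

```python
import cmath
import math

def transmittance(accel_in, accel_out, dt):
    """Estimate transmittance |A_out(f)| / |A_in(f)| from two time-domain
    acceleration records (lengths must be a power of two) sampled at
    interval dt, using a recursive radix-2 Cooley-Tukey FFT."""
    def fft(x):
        n = len(x)
        if n == 1:
            return x
        even, odd = fft(x[0::2]), fft(x[1::2])
        out = [0j] * n
        for k in range(n // 2):
            t = cmath.exp(-2j * math.pi * k / n) * odd[k]
            out[k] = even[k] + t
            out[k + n // 2] = even[k] - t
        return out

    spec_in = fft([complex(v) for v in accel_in])
    spec_out = fft([complex(v) for v in accel_out])
    half = len(accel_in) // 2
    freqs = [k / (len(accel_in) * dt) for k in range(half)]
    # Guard against division by near-zero bins of the input spectrum.
    tr = [abs(o) / abs(i) if abs(i) > 1e-12 else 0.0
          for i, o in zip(spec_in[:half], spec_out[:half])]
    return freqs, tr

# Hypothetical records: the "output" is the "input" amplified 3x,
# so the transmittance at the excitation frequency should be 3.
n, dt = 256, 0.01
t = [i * dt for i in range(n)]
base = [math.sin(2.0 * math.pi * 3.90625 * ti) for ti in t]
freqs, tr = transmittance(base, [3.0 * v for v in base], dt)
```

The excitation frequency 3.90625 Hz is chosen to fall exactly on FFT bin 10 for this record length, so no spectral leakage occurs.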
Procedia PDF Downloads 102
937 Comparative Study on Daily Discharge Estimation of Soolegan River
Authors: Redvan Ghasemlounia, Elham Ansari, Hikmet Kerem Cigizoglu
Abstract:
Hydrological modeling in arid and semi-arid regions is very important. Iran has many regions with such climatic conditions, for example Chaharmahal and Bakhtiari province, which require considerable attention and appropriate management. Forecasting hydrological parameters and estimating hydrological events in catchments provide important information that is widely used for the design, management, and operation of water resources such as river systems and dams. River discharge is one of these parameters. This study presents the application and comparison of several estimation methods, namely the Feed-Forward Back Propagation Neural Network (FFBPNN), Multiple Linear Regression (MLR), Gene Expression Programming (GEP), and Bayesian Network (BN), to predict the daily flow discharge of the Soolegan River, located in Chaharmahal and Bakhtiari province, Iran. The Soolegan station, situated on the Soolegan River in the North Karoon basin at latitude 31° 38′ and longitude 51° 14′, lies 2,086 meters above sea level. The data used in this study are the daily discharge and daily precipitation of the Soolegan station. The FFBPNN, MLR, GEP, and BN models were developed using the same input parameters for estimating the Soolegan daily discharge. The estimation results were compared with observed discharge values to evaluate the performance of the developed models, and the results of all methods are presented in tables and charts.
Keywords: ANN, multiple linear regression, Bayesian network, forecasting, discharge, gene expression programming
Procedia PDF Downloads 559
936 Multi-scale Spatial and Unified Temporal Feature-fusion Network for Multivariate Time Series Anomaly Detection
Authors: Hang Yang, Jichao Li, Kewei Yang, Tianyang Lei
Abstract:
Multivariate time series anomaly detection is a significant research topic in the field of data mining, with a wide range of applications across industrial sectors such as road traffic, financial logistics, and corporate production. The inherent spatial dependencies and temporal characteristics of multivariate time series make the anomaly detection task challenging. Previous studies have typically assumed that all variables belong to the same spatial hierarchy, neglecting multi-level spatial relationships. To address this challenge, this paper proposes a multi-scale spatial and unified temporal feature-fusion network, denoted MSUT-Net, for multivariate time series anomaly detection. The proposed model employs a multi-level modeling approach incorporating both temporal and spatial modules. The spatial module is designed to capture the spatial characteristics of multivariate time series data, utilizing an adaptive graph structure learning model to identify the multi-level spatial relationships between data variables and their attributes. The temporal module consists of a unified temporal processing component tasked with capturing the temporal features of multivariate time series, capable of simultaneously identifying temporal dependencies among different variables. Extensive testing on multiple publicly available datasets confirms that MSUT-Net achieves superior performance on the majority of them. Our method can model and accurately detect system data with multi-level spatial relationships from a spatial-temporal perspective, providing a novel perspective for anomaly detection analysis.
Keywords: data mining, industrial system, multivariate time series, anomaly detection
Procedia PDF Downloads 13
935 The Development of an Agent-Based Model to Support a Science-Based Evacuation and Shelter-in-Place Planning Process within the United States
Authors: Kyle Burke Pfeiffer, Carmella Burdi, Karen Marsh
Abstract:
The evacuation and shelter-in-place planning process employed by most jurisdictions within the United States is not informed by a scientifically derived framework that is inclusive of the behavioral and policy-related indicators of public compliance with evacuation orders. While a significant body of work exists to define these indicators, the research findings have not been well integrated nor translated into usable planning factors for public safety officials. Additionally, refinement of the planning factors alone is insufficient to support science-based evacuation planning, as the behavioral elements of evacuees, even with consideration of policy-related indicators, must be examined in the context of specific regional transportation and shelter networks. To address this problem, the Federal Emergency Management Agency and Argonne National Laboratory developed an agent-based model to support regional analysis of zone-based evacuation in southeastern Georgia. In particular, this model allows public safety officials to analyze the consequences that a range of hazards may have upon a community, assess evacuation and shelter-in-place decisions in the context of specified evacuation and response plans, and predict outcomes based on community compliance with orders and the capacity of the regional (including extra-jurisdictional) transportation and shelter networks. The intention is to use this model to aid evacuation planning and decision-making. Applications for the model include developing a science-driven risk communication strategy and, ultimately, in the case of evacuation, achieving the shortest possible travel distances and clearance times for evacuees within the regional boundary conditions.
Keywords: agent-based modeling for evacuation, decision-support for evacuation planning, evacuation planning, human behavior in evacuation
Procedia PDF Downloads 231
934 Future Design and Innovative Economic Models for Futuristic Markets in Developing Countries
Authors: Nessreen Y. Ibrahim
Abstract:
Designing the future according to a realistic analytical study of futuristic market needs can be a milestone strategy for making major improvements in the economies of developing countries. In developing countries, access to high technology and the latest scientific approaches is very limited. Financial problems in low- and middle-income countries negatively affect the kind and quality of new technologies imported and applied in their markets. Thus, there is a strong need for a paradigm shift in the design process to improve and evolve their development strategies. This paper discusses future possibilities in developing countries and how they can design their own future according to specific Future Design Models (FDM), established to solve certain economic problems as well as political and cultural conflicts. FDM is a strategic-thinking framework that provides improvement in both content and process. The content includes beliefs, values, mission, purpose, conceptual frameworks, research, and practice, while the process includes design methodology, design systems, and design management tools. The main objective of this paper is to build an innovative economic model to design a chosen possible futuristic scenario by understanding future market needs, analyzing the real-world setting, solving the model questions through future-driven design, and finally interpreting the results to discuss to what extent they can be transferred to the real world. The paper discusses Egypt as a potential case study: since Egypt has highly complex economic problems, extra-dynamic political factors, and very rich cultural aspects, it is a very challenging example for applying FDM. The results recommend using FDM numerical modeling as a starting point for designing the future.
Keywords: developing countries, economic models, future design, possible futures
Procedia PDF Downloads 265
933 Numerical Investigation of Entropy Signatures in Fluid Turbulence: Poisson Equation for Pressure Transformation from Navier-Stokes Equation
Authors: Samuel Ahamefula Mba
Abstract:
Fluid turbulence is a complex and nonlinear phenomenon that occurs in various natural and industrial processes. Understanding turbulence remains a challenging task due to its intricate nature. One approach to gaining insight into turbulence is through the study of entropy, which quantifies the disorder or randomness of a system. This research presents a numerical investigation of entropy signatures in fluid turbulence. The aim of the work is to develop a numerical framework to describe and analyse fluid turbulence in terms of entropy. The framework decomposes the turbulent flow field into different scales, ranging from large energy-containing eddies to small dissipative structures, thus establishing a correlation between entropy and other turbulence statistics. This entropy-based framework provides a powerful tool for understanding the underlying mechanisms driving turbulence and its impact on various phenomena. The work requires deriving the Poisson equation for pressure from the Navier-Stokes equations and using Chebyshev finite-difference techniques to resolve it effectively. The mathematical analysis considers bounded domains with smooth solutions and non-periodic boundary conditions. A hybrid computational approach combining direct numerical simulation (DNS) and large eddy simulation with wall models (LES-WM) is utilized to perform extensive simulations of turbulent flows. The potential impact ranges from industrial process optimization to improved prediction of weather patterns.
Keywords: turbulence, Navier-Stokes equation, Poisson pressure equation, numerical investigation, Chebyshev finite difference, hybrid computational approach, large eddy simulation with wall models, direct numerical simulation
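For reference, the pressure Poisson equation mentioned in this abstract follows from taking the divergence of the incompressible Navier-Stokes momentum equation and applying the continuity constraint. This is the standard textbook derivation; the notation below is assumed, not quoted from the paper:

```latex
% Incompressible Navier-Stokes momentum and continuity equations:
\rho\left(\frac{\partial u_i}{\partial t} + u_j\frac{\partial u_i}{\partial x_j}\right)
  = -\frac{\partial p}{\partial x_i} + \mu\,\nabla^{2} u_i,
\qquad \frac{\partial u_i}{\partial x_i} = 0.
% Taking the divergence of the momentum equation and using continuity
% eliminates the time-derivative and viscous terms, leaving a Poisson
% equation for the pressure driven by the velocity-gradient field:
\nabla^{2} p = -\rho\,\frac{\partial u_i}{\partial x_j}\,
  \frac{\partial u_j}{\partial x_i}.
```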
Procedia PDF Downloads 92
932 Development of Automated Quality Management System for the Management of Heat Networks
Authors: Nigina Toktasynova, Sholpan Sagyndykova, Zhanat Kenzhebayeva, Maksat Kalimoldayev, Mariya Ishimova, Irbulat Utepbergenov
Abstract:
Any business needs stable operation and continuous improvement; it is therefore necessary to interact constantly with the environment, to analyze the work of the enterprise from the viewpoints of employees, executives, and consumers, and to correct inconsistencies in individual processes and in their aggregate. In the case of heat supply organizations, local legislation must be considered in addition to suppliers, since it is often the main regulator of service pricing. In this setting, the process approach used to build a functional organizational structure in such businesses in Kazakhstan is a challenge not only in implementation but also in the analysis of employee remuneration. To solve these problems, we investigated the management system of a heating enterprise, including strategic planning based on the balanced scorecard (BSC), quality management in accordance with the Quality Management System (QMS) standard ISO 9001, and analysis of the system based on expert judgment using fuzzy inference. To carry out this work we used the theory of fuzzy sets, the QMS in accordance with ISO 9001, the BSC according to the method of Kaplan and Norton, business process construction according to the IDEF0 notation, and modeling with Matlab simulation tools and LabVIEW graphical programming.
The results of the work are as follows: we determined possibilities for improving the management of a heat-supply plant based on the QMS; after justification and adaptation, a software tool was used to automate a series of management functions, to reduce resource consumption, and to keep the system up to date; and an application for analyzing the QMS based on fuzzy inference was created, with a novel organization of communication between the software and the application, enabling the analysis of relevant enterprise management data.
Keywords: balanced scorecard, heat supply, quality management system, the theory of fuzzy sets
Procedia PDF Downloads 367
931 Determination of the Phosphate Activated Glutaminase Localization in the Astrocyte Mitochondria Using Kinetic Approach
Authors: N. V. Kazmiruk, Y. R. Nartsissov
Abstract:
Phosphate-activated glutaminase (GA, E.C. 3.5.1.2) plays a key role in glutamine/glutamate homeostasis in the mammalian brain, catalyzing the hydrolytic deamidation of glutamine to glutamate and ammonium ions. GA is mainly localized in mitochondria, where it has a catalytically active form on the inner mitochondrial membrane (IMM) and a soluble form, which is supposed to be dormant. At present, the exact localization of the membrane glutaminase active site remains controversial and unresolved. The first hypothesis, called c-side localization, suggests that the catalytic site of GA faces the inter-membrane space, so that the products of the deamidation reaction have immediate access to cytosolic metabolism. According to the alternative m-side localization hypothesis, GA orients toward the matrix, making glutamate and ammonium directly available for tricarboxylic acid cycle metabolism in mitochondria. In our study, we used a multi-compartment kinetic approach to simulate the metabolism of glutamate and glutamine in the astrocytic cytosol and mitochondria. We used the physiologically important ratio between the concentration of glutamine inside the mitochondrial matrix [Gln_mit] and that in the cytosol [Gln_cyt] as a marker for the precise functioning of the system. Since this ratio directly depends on the flow parameters of the mitochondrial glutamine carrier (MGC), a key step was to investigate the dependence of the [Gln_mit]/[Gln_cyt] ratio on the maximal velocity of the MGC at different initial concentrations of mitochondrial glutamate. Another important task was to observe the same dependence at different inhibition constants of the soluble GA. The simulation results support the experimental c-side localization hypothesis, in which the glutaminase active site faces the outer surface of the IMM. Moreover, for such a localization of the enzyme, a 3-fold decrease in ammonium production was predicted.
Keywords: glutamate metabolism, glutaminase, kinetic approach, mitochondrial membrane, multi-compartment modeling
Procedia PDF Downloads 118
930 Project Progress Prediction in Software Development Integrating Time Prediction Algorithms and Large Language Modeling
Authors: Dong Wu, Michael Grenn
Abstract:
Managing software projects effectively is crucial for meeting deadlines, ensuring quality, and managing resources well. Traditional methods often struggle to predict project timelines accurately due to uncertain schedules and complex data. This study addresses these challenges by combining time prediction algorithms with Large Language Models (LLMs). It makes use of real-world software project data to construct and validate a model. The model takes detailed project progress data, such as task completion dynamics, team interaction, and development metrics, as input and outputs predictions of project timelines. To evaluate the effectiveness of this model, a comprehensive methodology is employed, involving simulations and practical applications in a variety of real-world software project scenarios. This multifaceted evaluation strategy is designed to validate the model's role in enhancing forecast accuracy and elevating overall management efficiency, particularly in complex software project environments. The results indicate that the integration of time prediction algorithms with LLMs has the potential to optimize software project progress management, and the quantitative results suggest the effectiveness of the method in practical applications. In conclusion, this study demonstrates that integrating time prediction algorithms with LLMs can significantly improve the predictive accuracy and efficiency of software project management. This offers an advanced project management tool for the industry, with the potential to improve operational efficiency, optimize resource allocation, and ensure timely project completion.
Keywords: software project management, time prediction algorithms, large language models (LLMs), forecast accuracy, project progress prediction
Procedia PDF Downloads 75
929 Numerical Simulation of Flexural Strength of Steel Fiber Reinforced High Volume Fly Ash Concrete by Finite Element Analysis
Authors: Mahzabin Afroz, Indubhushan Patnaikuni, Srikanth Venkatesan
Abstract:
It is well known that fly ash can be used in high volume as a partial replacement of cement, with beneficial effects on concrete. High volume fly ash (HVFA) concrete is currently emerging as a popular option for strengthening with fibers. Although studies have supported the use of fibers with fly ash, a unified model, incorporated into a finite element software package to estimate maximum flexural loads, still needs to be developed. In this study, nonlinear finite element analysis of steel fiber reinforced high strength HVFA concrete beams under static loading was conducted to investigate their failure modes in terms of ultimate load. First, the mechanical properties of high strength HVFA concrete were investigated experimentally and validated against the numerical model, developed with appropriate element size and mesh in ANSYS 16.2. To model the fibers within the concrete, a three-dimensional random fiber distribution was simulated using a spherical coordinate system. Three types of high strength HVFA concrete beams, reinforced with 0.5, 1, and 1.5% volume fractions of steel fibers with specific mechanical and physical properties, were analyzed. The results reveal that the nonlinear finite element analysis technique with three-dimensional random fiber orientation shows fairly good agreement with the experimental results for flexural strength, load-deflection behavior, and crack propagation mechanism. With this improved model, it is possible to determine the flexural behavior of different types and proportions of steel fiber reinforced HVFA concrete beams under static load. The originality of this paper thus lies in predicting the flexural properties of steel fiber reinforced high strength HVFA concrete by numerical simulation.
Keywords: finite element analysis, high volume fly ash, steel fibers, spherical coordinate system
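As an aside, a three-dimensional random fiber orientation of the kind this abstract describes can be sampled with spherical coordinates. The snippet below is a generic sketch, not the authors' implementation: the function name and the uniform-on-the-sphere sampling scheme are illustrative assumptions.

```python
import math
import random

def random_fiber_directions(n, seed=0):
    """Sample n uniformly distributed unit direction vectors using
    spherical coordinates (polar angle theta, azimuthal angle phi).

    Drawing theta as arccos(1 - 2u) rather than uniformly in [0, pi]
    avoids artificial clustering of fibers near the poles."""
    rng = random.Random(seed)
    dirs = []
    for _ in range(n):
        theta = math.acos(1.0 - 2.0 * rng.random())  # polar angle
        phi = 2.0 * math.pi * rng.random()           # azimuthal angle
        dirs.append((math.sin(theta) * math.cos(phi),
                     math.sin(theta) * math.sin(phi),
                     math.cos(theta)))
    return dirs

# One direction vector per fiber; positions would be sampled separately.
directions = random_fiber_directions(1000)
```

Each direction can then be paired with a random fiber midpoint inside the beam volume to place straight fibers of a given length.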
Procedia PDF Downloads 132
928 Potential Effects of Climate Change on Streamflow, Based on the Occurrence of Severe Floods in Kelantan, East Coasts of Peninsular Malaysia River Basin
Authors: Muhd. Barzani Gasim, Mohd. Ekhwan Toriman, Mohd. Khairul Amri Kamarudin, Azman Azid, Siti Humaira Haron, Muhammad Hafiz Md. Saad
Abstract:
Malaysia is a country in Southeast Asia that is constantly exposed to flooding and landslides. These disasters cause loss of property, loss of life, and hardship for the people involved. The problem arises from climate change, which increases stream flow rates by disrupting regional hydrological cycles. The aim of the study is to determine the hydrologic processes of the east coast of Peninsular Malaysia, especially the Kelantan basin, parameterized to account for the spatial and temporal variability of basin characteristics and their responses to climate variability. For hydrological modeling of the basin, the Soil and Water Assessment Tool (SWAT) model is used with inputs such as relief, soil type and use, and historical daily time series of climate and river flow rates. Interpretation of Landsat imagery for land use is also applied in this study. By combining SWAT with climate models, the system predicts increases in future scenario precipitation, surface runoff, recharge, and total water yield. The model successfully reproduced the basin's behavior, yielding visually consistent hydrographs and good estimates of the minimum and maximum flows and severe floods observed during the calibration and validation periods.
Keywords: east coasts of Peninsular Malaysia, Kelantan river basin, minimum and maximum flows, severe floods, SWAT model
Procedia PDF Downloads 261
927 Modeling Socioeconomic and Political Dynamics of Terrorism in Pakistan
Authors: Syed Toqueer, Omer Younus
Abstract:
Terrorism today has emerged as a global menace, with Pakistan among the most adversely affected states. The motive behind this study is therefore to empirically establish the linkage of terrorism with socio-economic factors (uneven income distribution, poverty, and unemployment) and political nexuses, so that policy recommendations can be put forth to better approach this issue in Pakistan. For this purpose, the study employs two competing models, namely the distributed lag model and OLS, so that the findings may be consolidated comprehensively over the reference period 1984-2012. The findings of both models indicate that Pakistan's uneven income distribution, measured through GDP per capita, is a contributing factor to terrorism. This supports the hypothesis that the immiserizing modernization theory is applicable to the state of Pakistan, where the underprivileged are marginalized. The results also suggest that other socio-economic variables (poverty, unemployment, and consumer confidence) can reduce the intensity of terrorism once these conditions are catered to and improved. The rationale of opportunity cost is at the base of this argument: poor employment conditions and poverty reduce the opportunity cost for individuals to be recruited by terrorist organizations, as economic returns are considerably low, thus increasing the supply of volunteers and subsequently the intensity of terrorism. The argument of political freedom as a means of lowering terrorism holds true: the more people are politically repressed, the more alternative and illegal means they will find to make their voices heard. The argument that a politically transitioning economy faces more terrorism is also found applicable to Pakistan. Finally, the study contributes to an ongoing debate on which of the two sets of factors is more significant in relation to terrorism by suggesting that socio-economic factors are the primary causes of terrorism in Pakistan.
Keywords: terrorism, socioeconomic conditions, political freedom, distributed lag model, ordinary least squares
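For readers unfamiliar with the distributed lag model this abstract relies on, the sketch below fits a finite distributed lag regression (an intercept plus current and lagged values of one regressor) by ordinary least squares in pure Python. The series, lag length, and coefficients are entirely illustrative and are not taken from the paper.

```python
import random

def solve(a, b):
    """Solve the linear system a x = b by Gaussian elimination with
    partial pivoting (a is a list of row lists)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_distributed_lag(x, y, lags):
    """OLS fit of y_t on an intercept and x_t, x_{t-1}, ..., x_{t-lags},
    via the normal equations (X'X) beta = X'y."""
    rows = [[1.0] + [x[t - k] for k in range(lags + 1)]
            for t in range(lags, len(x))]
    ys = y[lags:]
    p = lags + 2
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yt for r, yt in zip(rows, ys)) for i in range(p)]
    return solve(xtx, xty)

# Illustrative data: y depends on current and two lagged values of x.
rng = random.Random(1)
x = [rng.uniform(0.0, 10.0) for _ in range(200)]
y = [0.0, 0.0] + [2.0 + 0.5 * x[t] + 0.3 * x[t - 1] + 0.2 * x[t - 2]
                  for t in range(2, 200)]
beta = fit_distributed_lag(x, y, lags=2)  # [intercept, b0, b1, b2]
```

With noise-free data the fit recovers the generating coefficients exactly; with real data the lag length would be chosen by an information criterion.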
Procedia PDF Downloads 321
926 Optimum Structural Wall Distribution in Reinforced Concrete Buildings Subjected to Earthquake Excitations
Authors: Nesreddine Djafar Henni, Akram Khelaifia, Salah Guettala, Rachid Chebili
Abstract:
Reinforced concrete shear walls and vertical plate-like elements play a pivotal role in efficiently managing a building's response to seismic forces. This study investigates how the performance of reinforced concrete buildings equipped with shear walls featuring different shear wall-to-frame stiffness ratios aligns with the requirements stipulated in the Algerian seismic code RPA99v2003, particularly in high-seismicity regions. Seven distinct 3D finite element models are developed and evaluated through nonlinear static analysis. Engineering Demand Parameters (EDPs) such as lateral displacement, inter-story drift ratio, shear force, and bending moment along the building height are analyzed. The findings reveal two predominant categories of induced responses: force-based and displacement-based EDPs. Furthermore, as the shear wall-to-frame ratio increases, there is a concurrent increase in force-based EDPs and a decrease in displacement-based ones. Examining the distribution of shear walls from both force and displacement perspectives, model G with the highest stiffness ratio, concentrating stiffness at the building's center, intensifies induced forces. This configuration necessitates additional reinforcements, leading to a conservative design approach. Conversely, model C, with the lowest stiffness ratio, distributes stiffness towards the periphery, resulting in minimized induced shear forces and bending moments, representing an optimal scenario with maximal performance and minimal strength requirements.
Keywords: dual RC buildings, RC shear walls, modeling, static nonlinear pushover analysis, optimization, seismic performance
Procedia PDF Downloads 55
925 Assessing Denitrification-Disintegration Model’s Efficacy in Simulating Greenhouse Gas Emissions, Crop Growth, Yield, and Soil Biochemical Processes in Moroccan Context
Authors: Mohamed Boullouz, Mohamed Louay Metougui
Abstract:
Accurate modeling of greenhouse gas (GHG) emissions, crop growth, soil productivity, and biochemical processes is crucial given escalating global concerns about climate change and the urgent need to improve agricultural sustainability. This study thoroughly investigates the application of the denitrification-disintegration (DNDC) model in the context of Morocco's unique agro-climate. Our main research hypothesis is that the DNDC model offers an effective and powerful tool for precisely simulating a wide range of significant parameters, including greenhouse gas emissions, crop growth, yield potential, and complex soil biogeochemical processes, consistent with the intricate features of Moroccan agricultural environments. To test this hypothesis, an extensive body of field data was gathered covering Morocco's various agricultural regions and encompassing a range of soil types, climatic factors, and crop varieties. These experimental data sets serve as the foundation for careful model calibration and subsequent validation, ensuring the accuracy of the simulation results. In conclusion, the prospective research findings add to the global conversation on climate-resilient agricultural practices while promoting sustainable agricultural models in Morocco. Recognition of the DNDC model as a potent simulation tool tailored to Moroccan conditions may strengthen the ability of policymakers and agricultural actors to make informed decisions that advance not only food security but also environmental stability.
Keywords: greenhouse gas emissions, DNDC model, sustainable agriculture, Moroccan cropping systems
Procedia PDF Downloads 63
924 Modeling of Bipolar Charge Transport through Nanocomposite Films for Energy Storage
Authors: Meng H. Lean, Wei-Ping L. Chu
Abstract:
The effects of ferroelectric nanofiller size, shape, loading, and polarization on bipolar charge injection, transport, and recombination through amorphous and semicrystalline polymers are studied. A 3D particle-in-cell model extends the classical electrical double layer representation to treat ferroelectric nanoparticles. Metal-polymer charge injection assumes Schottky emission and Fowler-Nordheim tunneling, migration through field-dependent Poole-Frenkel mobility, and recombination with Monte Carlo selection based on collision probability. A boundary integral equation method is used to solve the Poisson equation, coupled with a second-order predictor-corrector scheme for robust time integration of the equations of motion. The stability criterion of the explicit algorithm conforms to the Courant-Friedrichs-Lewy limit. Trajectories of charges that make it through the film are curvilinear paths that meander through the interspaces. Results indicate that charge transport behavior depends on nanoparticle polarization, with anti-parallel orientation showing the highest leakage conduction and the lowest level of charge trapping in the interaction zone. The simulation prediction of a size range of 80 to 100 nm to minimize attachment and maximize conduction is validated by theory. Attached charge fractions go from 2.2% to 97% as nanofiller size is decreased from 150 nm to 60 nm. The computed conductivity of 0.4 × 10⁻¹⁴ S/cm is in agreement with published data for plastics. Charge attachment increases with spheroids due to the increase in surface area, especially for oblate spheroids, showing the influence of larger cross-sections. Charge attachment to nanofillers and nanocrystallites increases with vol.% loading or degree of crystallinity, and saturates at about 40 vol.%.
Keywords: nanocomposites, nanofillers, electrical double layer, bipolar charge transport
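For context, the injection and transport mechanisms named in this abstract are commonly written in the following standard textbook forms; the paper's exact parameterization is not given in the abstract, and C and B below lump the usual material constants:

```latex
% Schottky (thermionic) emission over a field-lowered barrier \phi_B:
J_{\mathrm{Sch}} = A^{*} T^{2}
  \exp\!\left[-\frac{q\bigl(\phi_B - \sqrt{qE/(4\pi\varepsilon)}\bigr)}{k_B T}\right]
% Fowler-Nordheim tunneling through the barrier at high fields:
J_{\mathrm{FN}} = C\,E^{2}\exp\!\left(-\frac{B\,\phi_B^{3/2}}{E}\right)
% Field-dependent Poole-Frenkel mobility:
\mu(E) = \mu_{0}\exp\!\left(\frac{\beta_{\mathrm{PF}}\sqrt{E}}{k_B T}\right),
\qquad \beta_{\mathrm{PF}} = \sqrt{\frac{q^{3}}{\pi\varepsilon}}
```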
Procedia PDF Downloads 353
923 Multistep Thermal Degradation Kinetics: Pyrolysis of CaSO₄-Complex Obtained by Antiscaling Effect of Maleic-Anhydride Polymer
Authors: Yousef M. Al-Roomi, Kaneez Fatema Hussain
Abstract:
This work evaluates the thermal degradation kinetic parameters of the CaSO₄ complex isolated after the inhibition effect of a maleic-anhydride based polymer (YMR polymer). Pyrolysis experiments were carried out at four heating rates (5, 10, 15, and 20°C/min). Several analytical methods were used to determine the kinetic parameters, including the Friedman, Coats-Redfern, Kissinger, Flynn-Wall-Ozawa, and Kissinger-Akahira-Sunose methods. The Criado model-fitting method, based on the actual mechanism followed in the thermal degradation of the complex, was applied to explain the degradation mechanism of the CaSO₄ complex. In addition, a simple dynamic model was proposed over two temperature ranges for the successive decomposition of the CaSO₄ complex, which combines an organic part (adsorbed polymer) and an inorganic part (CaSO₄·2H₂O scale). The model enabled independent assessment of the pre-exponential factor (A) and apparent activation energy (Eₐ) for both stages, using a mathematical expression developed from an integral solution. The reaction mechanism approach applied in this study showed that the activation energy of the organic decomposition (adsorbed polymer, stage I; Eₐ₁ = 160.5 kJ/mol) is lower than that of the CaSO₄ decomposition (stage II; Eₐ₂ = 388 kJ/mol). Furthermore, the adsorbed YMR antiscalant not only reduced the decomposition temperature of the CaSO₄ complex compared to the blank (CaSO₄·2H₂O scale formed in the absence of YMR polymer) but also distorted the crystal lattice of the organic CaSO₄ complex precipitates, destroying their compact and regular crystal structures, as observed in XRD and SEM studies.
Keywords: CaSO₄-complex, maleic-anhydride polymers, thermal degradation kinetics and mechanism, XRD and SEM studies
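As an illustration of the Kissinger method listed in this abstract: it fits ln(β/Tₚ²) against 1/Tₚ over the heating rates β, the slope of the line being -Eₐ/R. The sketch below uses hypothetical peak temperatures, since the paper's thermogram values are not given in the abstract.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def kissinger_activation_energy(betas, peak_temps):
    """Kissinger method: ln(beta / Tp^2) is linear in 1/Tp with slope
    -Ea/R, where beta is the heating rate and Tp the peak temperature
    (in K) of the degradation step at that heating rate."""
    xs = [1.0 / tp for tp in peak_temps]
    ys = [math.log(b / tp ** 2) for b, tp in zip(betas, peak_temps)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R  # apparent activation energy, J/mol

# Hypothetical peak temperatures (K) consistent with Ea = 160.5 kJ/mol;
# the heating rates are back-computed so the data lie on a Kissinger line.
ea_true = 160.5e3
peak_temps = [650.0, 660.0, 668.0, 675.0]
betas = [tp ** 2 * math.exp(20.0 - ea_true / (R * tp)) for tp in peak_temps]
ea_est = kissinger_activation_energy(betas, peak_temps)
```

In practice the four peak temperatures would come from the DTG or DSC curves recorded at 5, 10, 15, and 20°C/min.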
Procedia PDF Downloads 117
922 Comprehensive Risk Analysis of Decommissioning Activities with Multifaceted Hazard Factors
Authors: Hyeon-Kyo Lim, Hyunjung Kim, Kune-Woo Lee
Abstract:
The decommissioning of nuclear facilities can be seen as a sequence of problem-solving activities, partly because working environments may be contaminated by radiological exposure, and partly because industrial hazards such as fire, explosions, toxic materials, and electrical and physical hazards may also be present. For individual hazard factors, risk assessment techniques have become familiar to industrial workers with the advance of safety technology, but how to integrate their results has not. Furthermore, few workers have extensive experience with decommissioning operations. Therefore, many countries have been trying to develop appropriate techniques to guarantee the safety and efficiency of the process. In spite of that, there is still neither a domestic nor an international standard, since nuclear facilities are too diverse and unique. Consequently, it is inevitable to anticipate and assess the whole risk of each situation one by one. This paper aimed to find an appropriate technique to integrate individual risk assessment results from the viewpoint of experts. Thus, on one hand, the whole risk assessment activity for decommissioning operations was modeled as a sequence of individual risk assessment steps, and on the other, a hierarchical risk structure was developed. Then, a risk assessment procedure that can elicit individual hazard factors one by one was introduced with reference to the standard operating procedure (SOP) and hierarchical task analysis (HTA). Assuming quantification and normalization of individual risks, a technique to estimate relative weight factors was tried using the conventional Analytic Hierarchy Process (AHP), and its result was reviewed against the judgment of experts. Besides, taking the ambiguity of human judgment into consideration, debates based upon fuzzy inference were added with a mathematical case study.
Keywords: decommissioning, risk assessment, analytic hierarchy process (AHP), fuzzy inference
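The relative weight factors estimated with the conventional AHP can be approximated by the row geometric-mean method on a pairwise comparison matrix. A minimal sketch; the three hazard factors and the Saaty-scale judgments below are hypothetical:

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights by the row geometric-mean method.

    pairwise : n x n reciprocal comparison matrix (Saaty 1-9 scale).
    Returns weights normalized to sum to 1.
    """
    n = len(pairwise)
    gms = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical comparison of three decommissioning hazard factors:
# radiological exposure vs fire/explosion vs electrical hazards.
matrix = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
w = ahp_weights(matrix)
print([round(x, 3) for x in w])  # dominant weight on the first factor
```

The geometric-mean vector closely tracks the principal eigenvector for near-consistent matrices, which is why it is a common hand-computable substitute.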
Procedia PDF Downloads 423
921 Flow Field Analysis of Different Intake Bump (Compression Surface) Configurations on a Supersonic Aircraft
Authors: Mudassir Ghafoor, Irsalan Arif, Shuaib Salamat
Abstract:
This paper presents the modeling and analysis of different intake bump (compression surface) configurations and a comparison with an existing supersonic aircraft having a bump intake configuration. Many successful aircraft models have shown that a Diverterless Supersonic Inlet (DSI) can reduce weight, complexity and maintenance cost compared to a conventional intake. The research is divided into two parts. In the first part, four different intake bumps are modeled for comparative analysis, keeping the outer perimeter dimensions of the fighter aircraft consistent, and characteristics such as flow behavior, boundary layer diversion and pressure recovery are analyzed. In the second part, the modeled bumps are integrated with the intake duct for performance analysis and comparison with existing supersonic aircraft data. The bumps are named uniform large (Config 1), uniform small (Config 2), uniform sharp (Config 3) and non-uniform (Config 4) based on their geometric features. Analysis is carried out at different Mach numbers to study flow behavior in the subsonic and supersonic regimes. Flow behavior, boundary layer diversion and pressure recovery are examined for each bump, and a comparative study is carried out. The analysis reveals that at subsonic speed, Config 1 and Config 2 give pressure recoveries similar to the diverterless supersonic intake, but the difference in pressure recoveries becomes significant at supersonic speed. It was concluded that Config 1 gives better results than Config 3, and that a higher bump amplitude (Config 1) is preferred over a lower one (Configs 2 and 4). It was also observed that the maximum height of the bump is best placed near the cowl lip of the intake duct.
Keywords: bump intake, boundary layer, computational fluid dynamics, diverter-less supersonic inlet
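For a first-order feel of why the pressure-recovery gap between configurations only opens up at supersonic speed, the stagnation-pressure loss across a single normal shock can be computed from the standard gas-dynamics relation. This idealization is a textbook bound, not the paper's CFD model:

```python
def normal_shock_pressure_recovery(mach, gamma=1.4):
    """Stagnation pressure ratio Pt2/Pt1 across a normal shock (ideal gas)."""
    if mach <= 1.0:
        return 1.0  # no shock, hence no stagnation-pressure loss
    g = gamma
    term1 = ((g + 1.0) * mach**2 / ((g - 1.0) * mach**2 + 2.0)) ** (g / (g - 1.0))
    term2 = ((g + 1.0) / (2.0 * g * mach**2 - (g - 1.0))) ** (1.0 / (g - 1.0))
    return term1 * term2

# Losses are negligible just above Mach 1 and grow rapidly with Mach number,
# which is why compression-surface design matters most in the supersonic regime.
for m in (0.8, 1.2, 1.6, 2.0):
    print(m, round(normal_shock_pressure_recovery(m), 4))
```

A well-shaped bump replaces one strong normal shock by a system of weaker oblique compressions, recovering part of this loss.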
Procedia PDF Downloads 242
920 Mental Contrasting with Implementation Intentions: A Metacognitive Strategy on Educational Context
Authors: Paula Paulino, Alzira Matias, Ana Margarida Veiga Simão
Abstract:
Self-regulated learning (SRL) directs students in analyzing proposed tasks, setting goals and designing plans to achieve those goals. The literature has suggested a metacognitive strategy for goal attainment known as Mental Contrasting with Implementation Intentions (MCII). This strategy involves Mental Contrasting (MC), in which a significant goal and an obstacle are identified, and Implementation Intentions (II), in which an "if… then…" plan is conceived and operationalized to overcome that obstacle. The present study assesses the MCII process and whether it promotes students' commitment to learning goals during school tasks in science subjects. In this investigation, we studied the MCII strategy in the systemic context of the classroom. Fifty-six students from middle school and secondary education attending a public school in Lisbon (Portugal) participated in the study. The MCII strategy was explicitly taught in a procedure that included metacognitive modeling, guided practice and autonomous practice of the strategy. Students were instructed to mentally contrast a goal they wanted to achieve with a possible obstacle to achieving that desire, and then to formulate plans to overcome the obstacle identified previously. The preliminary results suggest that the MCII metacognitive strategy, applied to the school context, leads to more sophisticated reflections, the promotion of learning goals and the elaboration of more complex and specific self-regulated plans. Further, students achieved better results on school tests and worksheets after practicing the strategy. This study has important implications, since MCII has been related to improved outcomes and increased attendance. Additionally, MCII seems to be an innovative process that captures students' efforts to learn and enhances self-efficacy beliefs during learning tasks.
Keywords: implementation intentions, learning goals, mental contrasting, metacognitive strategy, self-regulated learning
Procedia PDF Downloads 241
919 Effect of Correlation of Random Variables on Structural Reliability Index
Authors: Agnieszka Dudzik
Abstract:
The problem of correlation between random variables in structural reliability analysis has been extensively discussed in the literature. The cases taken under consideration were usually related to correlation between random variables from one side of the ultimate limit state: correlation between particular loads applied to the structure, or correlation between the resistances of particular members of a structure treated as a system. It has been proved that positive correlation between these random variables reduces the reliability of the structure and increases the probability of failure. In this paper, the problem of correlation between random variables from both sides of the limit state equation is considered. The simplest case, in which these random variables have normal distributions, is examined, with the degree of correlation described by the covariance or the coefficient of correlation. Special attention is paid to the questions of how much this correlation changes the reliability level and whether it can be ignored. Well-known methods for assessing the failure probability are used in the reliability analysis: the Hasofer-Lind reliability index and the Monte Carlo method adapted to the problem of correlation. The main purpose of this work is to present how the correlation of random variables influences the reliability index of steel bar structures. Structural design parameters are defined as deterministic values and as random variables, the latter being correlated. The criterion of structural failure is expressed by limit functions related to the ultimate and serviceability limit states. Only the normal distribution is used in the description of the random variables. The sensitivity of the reliability index to the random variables is defined. If the sensitivity of the reliability index to a random variable X is low compared with other variables, the impact of this variable on the failure probability is small, and in successive computations it can be treated as a deterministic parameter. Sensitivity analysis leads to a simplified description of the mathematical model and determines new limit functions and values of the Hasofer-Lind reliability index. In the examples, the NUMPRESS software is used in the reliability analysis.
Keywords: correlation of random variables, reliability index, sensitivity of reliability index, steel structure
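For the simplest limit state g = R - S with jointly normal variables, the effect of cross-correlation on the reliability index has a closed form, which makes the paper's central question easy to illustrate. The means and standard deviations below are illustrative, not the paper's steel-structure data:

```python
import math

def hasofer_lind_beta(mu_r, sig_r, mu_s, sig_s, rho):
    """Reliability index for the linear limit state g = R - S, with jointly
    normal resistance R and load effect S correlated with coefficient rho."""
    return (mu_r - mu_s) / math.sqrt(sig_r**2 + sig_s**2 - 2.0 * rho * sig_r * sig_s)

def failure_probability(beta):
    """Pf = Phi(-beta), evaluated with the complementary error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# Positive correlation between the two sides of g = R - S narrows the spread
# of g and therefore raises beta (lowers Pf); negative correlation does the
# opposite. Illustrative values:
for rho in (-0.5, 0.0, 0.5):
    b = hasofer_lind_beta(300.0, 30.0, 200.0, 40.0, rho)
    print(rho, round(b, 3), f"{failure_probability(b):.2e}")
```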
Procedia PDF Downloads 237
918 The Relationship Between Cyberbullying Victimization, Parent and Peer Attachment and Unconditional Self-Acceptance
Authors: Florina Magdalena Anichitoae, Anca Dobrean, Ionut Stelian Florean
Abstract:
Because cyberbullying victimization is an increasing problem, affecting more and more children and adolescents around the world, we took a step forward in analyzing this phenomenon by examining variables that had not been studied together before, in an attempt to develop another way to view cyberbullying victimization. We tested the effects of mother, father, and peer attachment on adolescent involvement in cyberbullying as victims, through unconditional self-acceptance. Furthermore, we analyzed each subscale of the IPPA-R, the instrument we used to measure parent and peer attachment, in relation to cyberbullying victimization through unconditional self-acceptance. We also examined whether gender and age could be considered moderators in this model. The analysis was performed on 653 adolescents aged 11-17 years old from Romania, using structural equation modeling in R. For the reliability analysis of the IPPA-R subscales, the USAQ, and the Cyberbullying Test, we calculated internal consistency indices, which varied between .68 and .91. We created two models: the first including peer alienation, peer trust, peer communication, self-acceptance and cyberbullying victimization, with CFI = 0.97, RMSEA = 0.02, 90% CI [0.02, 0.03] and SRMR = 0.07, and the second including parental alienation, parental trust, parental communication, self-acceptance and cyberbullying victimization, with CFI = 0.97, RMSEA = 0.02, 90% CI [0.02, 0.03] and SRMR = 0.07. Our results were interesting: on one hand, cyberbullying victimization was predicted by peer alienation and peer communication through unconditional self-acceptance, while peer trust directly, significantly, and negatively predicted involvement in cyberbullying. In this regard, considering gender and age as moderators, we found that the relationship between unconditional self-acceptance and cyberbullying victimization is stronger in girls, but age does not moderate this relationship. On the other hand, the hypothesis that the degree of cyberbullying victimization is predicted through unconditional self-acceptance by parental alienation, parental communication, and parental trust was not supported. Still, we identified a direct path in which victimization was positively predicted by parental alienation and negatively by parental trust. Some limitations of this study are discussed at the end.
Keywords: adolescent, attachment, cyberbullying victimization, parents, peers, unconditional self-acceptance
Procedia PDF Downloads 202
917 Influence of High Hydrostatic Pressure Application (HHP) and Osmotic Dehydration (OD) as a Pretreatment to Hot-Air Drying of Abalone (Haliotis Rufescens) Cubes
Authors: Teresa Roco, Mario Perez Won, Roberto Lemus-Mondaca, Sebastian Pizarro
Abstract:
This research presents the simultaneous application of high hydrostatic pressure (HHP) and osmotic dehydration (OD) as a pretreatment to the hot-air drying of abalone cubes. The drying time was reduced to 6 hours at 60°C, compared to 10 hours at the same temperature for abalone pretreated only with 15% NaCl osmotic dehydration at atmospheric pressure. This was due to the salt and HHP saturation: since osmotic pressure increases as water loss increases, a shorter convective drying time is needed, so the effective diffusion of water during drying plays an important role in this research. Different working conditions, such as pressure (350-550 MPa), pressure holding time (5-10 min), salt concentration (15% NaCl) and drying temperature (40-60°C), were optimized according to the kinetic parameters of each mathematical model (Table 1). The models fitted to the experimental drying curves were those of Weibull, Logarithmic and Midilli-Kucuk, the last being the best fit to the experimental data (Figure 1). The values of effective water diffusivity varied from 4.54 to 9.95×10⁻⁹ m²/s for the 8 curves (OD + HHP), whereas the control samples (neither OD nor HHP) varied between 4.35 and 5.60×10⁻⁹ m²/s at 40 and 60°C, respectively, and drying with only the 15% NaCl osmotic pretreatment gave 3.804 to 4.36×10⁻⁹ m²/s at the same temperatures. Finally, the energy consumption and efficiency values for the drying process (control and pretreated samples) were found to be within ranges of 777-1815 kJ/kg and 8.22-19.20%, respectively. Therefore, knowledge of the drying kinetics and the energy consumption, in addition to knowledge of the quality of abalone subjected to an osmotic pretreatment (OD) and high hydrostatic pressure (HHP), is extremely important at an industrial level so that the drying process can be successful under different pretreatment conditions and/or process variables.
Keywords: abalone, convective drying, high hydrostatic pressure, pretreatments, diffusion coefficient
Procedia PDF Downloads 664
916 Behavior of Common Philippine-Made Concrete Hollow Block Structures Subjected to Seismic Load Using Rigid Body Spring-Discrete Element Method
Authors: Arwin Malabanan, Carl Chester Ragudo, Jerome Tadiosa, John Dee Mangoba, Eric Augustus Tingatinga, Romeo Eliezer Longalong
Abstract:
Concrete hollow blocks (CHB) are the most commonly used masonry blocks for walls in residential houses, school buildings and public buildings in the Philippines. During the recent 2013 Bohol earthquake (Mw 7.2), it was proven that CHB walls are very vulnerable to severe external action such as strong ground motion. In this paper, a numerical model of CHB structures is proposed, and the seismic behavior of CHB houses is presented. In the modeling, the Rigid Body Spring-Discrete Element Method (RBS-DEM) is used, wherein masonry blocks are discretized into rigid elements and connected by nonlinear springs at preselected contact points. The shear and normal stiffnesses of the springs are derived from the material properties of the CHB unit, incorporating the grout and mortar fillings through volumetric transformation of the dimensions using material ratios. Numerical models of reinforced and unreinforced walls are first subjected to linearly increasing in-plane loading to observe the different failure mechanisms. These wall models are then assembled to form typical model masonry houses and subjected to the El Centro and Pacoima earthquake records. Numerical simulations show that the elastic, failure and collapse behavior of the model houses agree well with shaking table test results. The effectiveness of the method in replicating failure patterns will serve as a basis for improving the design and provides a good basis for strengthening the structure.
Keywords: concrete hollow blocks, discrete element method, earthquake, rigid body spring model
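The springs at each preselected contact point can be sketched, in their simplest elastic form with a no-tension joint, as below. The stiffness values are assumed for illustration only; the paper derives kn and ks from the CHB, grout and mortar properties:

```python
def contact_spring_forces(rel_normal_disp, rel_shear_disp, kn, ks, tensile_limit=0.0):
    """Elastic spring forces at one block-to-block contact point (RBS-DEM sketch).

    rel_normal_disp : relative normal displacement (m), positive = opening
    rel_shear_disp  : relative tangential displacement (m)
    kn, ks          : normal and shear spring stiffnesses (N/m), assumed values
    The joint is assumed to carry no tension beyond tensile_limit: once the
    contact opens, neither normal nor shear force is transferred.
    """
    fn = -kn * rel_normal_disp
    if fn < -tensile_limit:  # joint opens: no tensile or shear transfer
        return 0.0, 0.0
    fs = -ks * rel_shear_disp
    return fn, fs

kn, ks = 2.0e8, 8.0e7  # illustrative stiffnesses, not CHB-derived values
print(contact_spring_forces(-1e-4, 2e-5, kn, ks))  # compression: positive normal force
print(contact_spring_forces(1e-4, 2e-5, kn, ks))   # opening: (0.0, 0.0)
```

In the full method, such contact laws are evaluated at every preselected point each time step, and the resultant forces and moments drive the rigid-body equations of motion of each block.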
Procedia PDF Downloads 371
915 Agent-Based Modelling to Improve Dairy-origin Beef Production: Model Description and Evaluation
Authors: Addisu H. Addis, Hugh T. Blair, Paul R. Kenyon, Stephen T. Morris, Nicola M. Schreurs, Dorian J. Garrick
Abstract:
Agent-based modeling (ABM) enables an in silico representation of complex systems and captures agent behavior resulting from interaction with other agents and their environment. This study developed an ABM to represent pasture-based beef cattle finishing systems in New Zealand (NZ) using attributes of the rearer, finisher, and processor, as well as specific attributes of dairy-origin beef cattle. The model was parameterized using values representing 1% of NZ dairy-origin cattle, and 10% of rearers and finishers in NZ. The cattle agent consisted of 32% Holstein-Friesian, 50% Holstein-Friesian–Jersey crossbred, and 8% Jersey, with the remainder being other breeds. Rearers and finishers repetitively and simultaneously interacted to determine the type and number of cattle populating the finishing system. Rearers brought in four-day-old spring-born calves and reared them until 60 calves (representing a full truck load) on average had a live weight of 100 kg before selling them on to finishers. Finishers mainly attained weaners from rearers, or directly from dairy farmers when weaner demand was higher than the supply from rearers. Fast-growing cattle were sent for slaughter before the second winter, and the remainder were sent before their third winter. The model finished a higher number of bulls than heifers and steers, although it was 4% lower than the industry reported value. Holstein-Friesian and Holstein-Friesian–Jersey-crossbred cattle dominated the dairy-origin beef finishing system. Jersey cattle account for less than 5% of total processed beef cattle. Further studies to include retailer and consumer perspectives and other decision alternatives for finishing farms would improve the applicability of the model for decision-making processes.
Keywords: agent-based modelling, dairy cattle, beef finishing, rearers, finishers
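The rearer agent's core rule described above (rear a calf to a 100 kg sale weight, then move a 60-head truck load on to a finisher) can be sketched very simply. The breed shares are those given in the abstract; the arrival weight and daily gain are assumed for illustration and are not the paper's parameters:

```python
def days_to_sale(start_kg=35.0, target_kg=100.0, gain_kg_per_day=0.7):
    """Days a rearer keeps a calf before it is ready to join a 60-head
    truck-load batch, under a constant daily live-weight gain.
    start_kg and gain_kg_per_day are illustrative assumptions."""
    days, w = 0, start_kg
    while w < target_kg:
        w += gain_kg_per_day
        days += 1
    return days

# Breed mix of the cattle population used in the model (from the abstract).
BREED_SHARES = {"HF": 0.32, "HFxJ": 0.50, "J": 0.08, "other": 0.10}

print(days_to_sale())                 # rearing period in days under these assumptions
print(sum(BREED_SHARES.values()))     # shares account for the whole population
```

In the full ABM this rule interacts with the finisher agents' demand, which is what determines how many weaners bypass rearers and come directly from dairy farms.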
Procedia PDF Downloads 97
914 Accurately Measuring Stress Using Latest Breathing Technology and Its Relationship with Academic Performance
Authors: Farshid Marbouti, Jale Ulas, Julia Thompson
Abstract:
The main sources of stress among college students include changes in sleeping and eating habits, undertaking new responsibilities, financial difficulties, exams, meeting new people, career decisions, fear of failure, pressure from parents, the transition to university (especially if it requires leaving home), working with unfamiliar people, trouble with parents, and relationships with the opposite sex. Students use a variety of stress-coping strategies, including talking to family and friends, leisure activities, and exercise. The Yerkes-Dodson law indicates that while a moderate amount of stress may be beneficial for performance, too much stress results in weak performance. In other words, if students are too stressed, they are likely to have low academic performance. In a preliminary study conducted in 2017 with engineering students enrolled in three high-failure-rate classes, the majority of the students stated that they had high levels of stress, mainly for academic, financial, or family-related reasons. As the second stage of the study, the main purpose of this research is to investigate students' level of stress, sources of stress and their relationship with student demographic background, students' coping strategies, and academic performance. A device is being developed to gather data from students' breathing patterns and measure their stress levels. In addition, all participants are asked to fill out a survey. The survey under development has the following categories: exam stressors, study-related stressors, financial pressures, transition to university, family-related stress, student response to stress, and stress management. After the data collection, structural equation modeling (SEM) analysis will be conducted in order to identify the relationships among students' level of stress, coping strategies, and academic performance.
Keywords: college student stress, coping strategies, academic performance, measuring stress
Procedia PDF Downloads 104
913 Finite Element Modeling of a Lower Limb Based on the East Asian Body Characteristics for Pedestrian Protection
Authors: Xianping Du, Runlu Miao, Guanjun Zhang, Libo Cao, Feng Zhu
Abstract:
Current vehicle safety standards and human body injury criteria were established based on the biomechanical response of the Euro-American human body, without considering differences in body anthropometry and injury characteristics among races, particularly East Asian people with smaller body sizes. The absence of such race-specific design considerations negatively influences the protective performance of safety products for these populations and weakens the accuracy of the injury thresholds derived. To resolve these issues, this study develops a race-specific finite element model to simulate the impact response of the lower extremity of a 50th percentile East Asian (Chinese) male. The model was built based on medical images of the leg of an average-size Chinese male and slightly adjusted based on statistical data. The model includes detailed anatomic features and is able to simulate active muscle force. Thirteen biomechanical tests available in the literature were used to validate its biofidelity. Using the validated model, a pedestrian-car impact accident that took place in China was reconstructed computationally. The results show that the newly developed lower leg model performs well in predicting dynamic response and tibia fracture pattern. An additional comparison of the fracture tolerance of East Asian and Euro-American lower limbs suggests that the current injury criterion underestimates the degree of injury of the East Asian human body.
Keywords: lower limb, East Asian body characteristics, traffic accident reconstruction, finite element analysis, injury tolerance
Procedia PDF Downloads 285
912 Development of a Regression Based Model to Predict Subjective Perception of Squeak and Rattle Noise
Authors: Ramkumar R., Gaurav Shinde, Pratik Shroff, Sachin Kumar Jain, Nagesh Walke
Abstract:
Advancements in electric vehicles have significantly reduced powertrain noise and the number of moving components in vehicles. As a result, in-cab noises have become more noticeable to passengers inside the car. To ensure a comfortable ride for drivers and other passengers, it has become crucial to eliminate undesirable component noises during the development phase. Standard practices identify the severity of noises based on subjective ratings, but rating each development sample and iterating to reduce severity can be tedious, and the rating can vary from jury to jury, making it challenging to arrive at a definitive conclusion. To address this, an automotive component was identified to evaluate a squeak and rattle noise issue. Physical tests were carried out for random and sine excitation profiles, with the aim of assessing the noise subjectively through jury ratings and objectively by measuring it. A suitable jury evaluation method was selected for the activity, and the recorded sounds were replayed for jury rating. The objective sound quality metrics, viz. loudness, sharpness, roughness, fluctuation strength and overall sound pressure level (SPL), were measured. From these, correlation coefficients were established to identify the sound quality metrics most relevant to the identified noise issue. Regression analysis was then performed to establish the correlation between subjective and objective data, and a mathematical model was built using a machine learning algorithm. The developed model was able to predict the subjective rating with good accuracy.
Keywords: BSR, noise, correlation, regression
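The regression step can be sketched as ordinary least squares of jury rating on the measured sound-quality metrics. The ratings and metric values below are synthetic, generated from known coefficients so the fit is checkable; they are not the paper's data:

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting.
    Each row of X is [1.0, metric_1, metric_2, ...]."""
    m, n = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(m)) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))  # pivot row
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coef = [0.0] * n
    for i in reversed(range(n)):  # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

# Synthetic jury data: rating = 10 - 0.6*loudness - 1.0*sharpness, exactly.
X = [[1.0, 4.2, 1.1], [1.0, 5.0, 1.3], [1.0, 6.1, 1.2], [1.0, 7.4, 1.6], [1.0, 8.0, 1.5]]
y = [6.38, 5.70, 5.14, 3.96, 3.70]
b0, b_loud, b_sharp = fit_linear(X, y)
print(round(b0, 3), round(b_loud, 3), round(b_sharp, 3))  # ~(10.0, -0.6, -1.0)
```

Because the synthetic ratings are exactly linear in the two metrics, the fit recovers the generating coefficients; with real jury data the residuals quantify how much of the subjective rating the metrics explain.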
Procedia PDF Downloads 78
911 Investigation of Surface Properties of Nanostructured Carbon Films
Authors: Narek Margaryan, Zhozef Panosyan
Abstract:
Due to their unique properties, carbon nanofilms have become the object of general attention and intensive research, and the study of their surface properties plays a very important role. It is also important to study the formation of these films, which is accompanied by self-organization at the nano and micro levels. For a more detailed investigation, we examined diamond-like carbon (DLC) layers deposited by chemical vapor deposition (CVD) on Ge substrates, and hydro-generated graphene layers obtained on the surface of a colloidal solution using a grouping method. In this report, the surface transformation of the CVD nanolayers is studied by atomic force microscopy (AFM) as a function of deposition time. AFM can also be used successfully to study the surface properties of the self-assembled graphene layers; in turn, it is possible to sketch out their boundary line, which gives an idea of the peculiarities of the formation of these layers. Images obtained by AFM were treated as mathematical sets of numbers, and fractal and roughness analyses were performed: the fractal dimension, Regne's fractal coefficient, histogram, fast Fourier transform, etc. were obtained. The dependence of the fractal parameters on deposition duration for the CVD films and on solution temperature for the tribolayers was revealed. As an important surface parameter for our carbon films, the surface energy was calculated as a function of Regne's fractal coefficient. The surface potential was also measured with the Kelvin probe method using semi-contact AFM, and its dependence on deposition duration for the CVD films and on solution temperature for the hydro-generated graphene was found as well. The results obtained by the fractal analysis method were related to purely experimental results for a number of samples.
Keywords: nanostructured films, self-assembled graphene, diamond-like carbon, surface potential, Kelvin probe method, fractal analysis
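The fractal dimension extracted from AFM images can be illustrated with a plain box-counting estimate over a set of pixel coordinates. This is a generic sketch of the technique, not the specific algorithm used in the paper:

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a set of (x, y) pixel coordinates
    by box counting: the slope of log N(s) versus log(1/s), where N(s) is
    the number of boxes of side s that contain at least one point."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    # Least-squares slope of log N vs log(1/s)
    return (sum((a - xm) * (b - ym) for a, b in zip(xs, ys))
            / sum((a - xm) ** 2 for a in xs))

# Sanity check: a filled 64x64 square is two-dimensional.
square = [(x, y) for x in range(64) for y in range(64)]
print(round(box_counting_dimension(square), 2))  # ~2.0
```

For a height map, the same idea is applied to a thresholded or triangulated version of the surface; rougher, more self-similar surfaces yield dimensions between 2 and 3.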
Procedia PDF Downloads 267
910 Surface Thermodynamics Approach to Mycobacterium tuberculosis (M-TB) – Human Sputum Interactions
Authors: J. L. Chukwuneke, C. H. Achebe, S. N. Omenyi
Abstract:
This research work presents a surface thermodynamics approach to M-TB/HIV-human sputum interactions. This involved the use of the Hamaker coefficient concept as a surface energetics tool for determining the interaction processes, with the surface interfacial energies explained using the van der Waals concept of particle interactions. The Lifshitz derivation for van der Waals forces was applied as an alternative to the contact angle approach, which has been widely used in other biological systems. The methodology involved taking sputum samples from twenty infected persons and twenty uninfected persons for absorbance measurement using a digital ultraviolet-visible spectrophotometer. The variables required for the computations with the Lifshitz formula were derived from the absorbance data, and Matlab tools were used in the mathematical analysis of the experimental absorbance values. The Hamaker constants and the combined Hamaker coefficients were obtained using the values of the dielectric constant together with the Lifshitz equation. The absolute combined Hamaker coefficients A132abs and A131abs on both infected and uninfected sputum samples gave A132abs = 0.21631 × 10⁻²¹ J for M-TB infected sputum and Ã132abs = 0.18825 × 10⁻²¹ J for M-TB/HIV infected sputum. The significance of this result is the positive value of the absolute combined Hamaker coefficient, which suggests the existence of net positive van der Waals forces, demonstrating an attraction between the bacteria and the macrophage; this implies that infection can occur. It was also shown that in the presence of HIV, the interaction energy is reduced by 13%, confirming adverse effects observed in HIV patients suffering from tuberculosis.
Keywords: absorbance, dielectric constant, Hamaker coefficient, Lifshitz formula, macrophage, Mycobacterium tuberculosis, van der Waals forces
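A standard combining relation expresses the combined Hamaker coefficient of two materials (1 and 2) interacting across a medium (3) in terms of the individual coefficients, and makes the sign interpretation in the abstract easy to check. The sketch below uses this textbook relation with illustrative values, not the paper's Lifshitz-derived ones:

```python
import math

def combined_hamaker(a11, a22, a33):
    """Combining relation for materials 1 and 2 across medium 3:
    A132 = (sqrt(A11) - sqrt(A33)) * (sqrt(A22) - sqrt(A33)).
    A positive result means net attractive van der Waals interaction."""
    return (math.sqrt(a11) - math.sqrt(a33)) * (math.sqrt(a22) - math.sqrt(a33))

# Illustrative Hamaker constants (J); not the paper's measured values.
A11, A22, A33 = 6.0e-20, 5.0e-20, 3.7e-20   # bacterium, macrophage, medium
a132 = combined_hamaker(A11, A22, A33)
a131 = combined_hamaker(A11, A11, A33)       # symmetric case A131
print(a132 > 0)   # net attraction across the medium -> infection possible
print(a131 >= 0)  # the symmetric coefficient is never negative
```

The symmetric coefficient A131 is a perfect square and hence non-negative, while A132 changes sign when the medium's coefficient lies between those of the two interacting materials, which is the repulsion condition sought in the search for therapeutic additives.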
Procedia PDF Downloads 274
909 Topology Enhancement of a Straight Fin Using a Porous Media Computational Fluid Dynamics Simulation Approach
Authors: S. Wakim, M. Nemer, B. Zeghondy, B. Ghannam, C. Bouallou
Abstract:
Designing the optimal heat exchanger is still an essential objective. Parametric optimization evaluates the heat exchanger dimensions to find those that best satisfy certain objectives; this method contributes to an enhanced design rather than an optimized one. Topology optimization, on the contrary, finds the optimal structure that satisfies the design objectives. The rapid development of metal additive manufacturing has allowed topology optimization to find its way into engineering applications, especially in the aerospace field, to optimize metal structures. Using topology optimization in 3D heat and mass transfer problems requires huge computational time, which coupling it with CFD simulations can reduce. However, existing CFD models cannot be coupled with topology optimization directly: the CFD model must allow a uniform mesh to be created despite the complexity of the initial geometry, and must allow cells to be swapped from fluid to solid and vice versa. In this paper, a porous media approach compatible with topology optimization criteria is developed. It consists of modeling the fluid region of the heat exchanger as a porous medium of high porosity and, similarly, the solid region as a porous medium of low porosity. The switching from fluid to solid cells required by topology optimization is done simply by changing each cell's porosity using a user-defined function. This model is tested on a plate-and-fin heat exchanger and validated by comparing its results to experimental data and simulation results. Furthermore, the model is used to perform a material reallocation based on local criteria to optimize a plate-and-fin heat exchanger under a constant heat duty constraint. The optimized fin uses 20% less material than the original, while the pressure drop is reduced by about 13%.
Keywords: computational methods, finite element method, heat exchanger, porous media, topology optimization
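The porosity-based switching of cells between fluid and solid can be sketched with the inverse-permeability (Brinkman) penalization commonly used in fluid topology optimization: a momentum sink S = -alpha(gamma)*u is added per cell, where gamma = 1 marks fluid and gamma = 0 marks solid. This is a generic illustration of the idea, not the exact user-defined function of the paper:

```python
def brinkman_penalty(gamma, alpha_max=1e7, alpha_min=0.0, q=0.1):
    """Inverse-permeability penalty used to switch a cell between fluid
    (gamma = 1, no penalty) and solid (gamma = 0, large penalty), with a
    convex RAMP-style interpolation between the two extremes.

    alpha_max : penalty in solid cells (assumed value)
    q         : convexity parameter steering intermediate densities
    """
    return alpha_min + (alpha_max - alpha_min) * q * (1.0 - gamma) / (q + gamma)

# Momentum sink applied per cell: S = -brinkman_penalty(gamma) * u
print(brinkman_penalty(1.0))  # fluid cell: no resistance
print(brinkman_penalty(0.0))  # solid cell: maximum resistance
print(brinkman_penalty(0.5))  # intermediate density during optimization
```

Driving alpha_max high enough makes the velocity in "solid" cells negligible, so the optimizer can reallocate material by updating gamma per cell without remeshing, which is exactly the property that makes the porous media formulation compatible with a uniform mesh.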
Procedia PDF Downloads 153