Search results for: computational neuroscience
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2138

428 Seismicity and Ground Response Analysis for MP Tourism Office in Indore, India

Authors: Deepshikha Shukla, C. H. Solanki, Mayank Desai

Abstract:

In the last few years, earthquakes have posed a growing threat to communities across the world. With the large number of earthquakes occurring, the threat to life and property has increased manifold, which calls for urgent attention from researchers globally in the field of earthquake engineering. Any hazard related to an earthquake and to seismicity is considered a seismic hazard. The common forms of seismic hazards are ground shaking, structural damage, liquefaction, landslides, and tsunamis, to name a few. Among natural hazards, the earthquake is the most devastating and damaging, as the other hazards listed are triggered only after an earthquake occurs. To quantify and estimate seismicity and seismic hazards, many methods and approaches have been proposed in the past few years; these can be grouped as mathematical, conventional, and computational. Convex set theory and the empirical Green's function are examples of mathematical approaches, whereas the deterministic and probabilistic approaches are the conventional approaches for the estimation of seismic hazards. The ground response and ground shaking of a particular area or region play an important role in the damage caused by an earthquake. In this paper, a seismic study using the deterministic approach and 1-D ground response analysis has been carried out for the Madhya Pradesh Tourism Office in the Indore region of Madhya Pradesh in central India. Indore lies in seismic zone III (IS: 1893, 2002) of the seismic zoning map of India. There are various faults and lineaments in this area, and the Narmada-Son Fault and the Gavilgarh Fault are the active earthquake sources for the study area. DEEPSOIL v6.1.7 has been used to perform the 1-D linear ground response analysis for the study area. The Peak Ground Acceleration (PGA) of the city ranges from 0.1 g to 0.56 g.
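
As an illustrative aside (not part of the study's DEEPSOIL workflow), the standard approximation for the transfer function of a single damped soil layer over rigid bedrock gives a feel for how 1-D linear ground response amplifies motion; the layer thickness, shear-wave velocity, and damping ratio below are assumed example values, not site data from Indore.

```python
import numpy as np

def site_amplification(freq_hz, thickness_m, vs_mps, damping):
    """Approximate amplification |H(f)| of a uniform damped soil layer over
    rigid bedrock: 1 / sqrt(cos^2(kH) + (xi * kH)^2), with k = omega / Vs."""
    k_h = 2.0 * np.pi * freq_hz * thickness_m / vs_mps
    return 1.0 / np.sqrt(np.cos(k_h) ** 2 + (damping * k_h) ** 2)

# Assumed example site: 30 m of soil, Vs = 250 m/s, 5% damping
freqs = np.linspace(0.1, 10.0, 200)
amp = site_amplification(freqs, thickness_m=30.0, vs_mps=250.0, damping=0.05)
print(f"Fundamental frequency ~ {250.0 / (4 * 30.0):.2f} Hz, peak amplification ~ {amp.max():.1f}")
```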

Keywords: seismicity, seismic hazards, deterministic, probabilistic methods, ground response analysis

Procedia PDF Downloads 167
427 Analysis of Aerodynamic Forces Acting on a Train Passing Through a Tornado

Authors: Masahiro Suzuki, Nobuyuki Okura

Abstract:

The crosswind effect on ground transportation has been extensively investigated for decades. The effect of tornadoes, however, has hardly been studied, despite the fact that even heavy ground vehicles, namely trains, have been overturned by tornadoes in the past, with casualties. Therefore, the aerodynamic effects of a tornado on a train were studied by several approaches in this study. First, an experimental facility was developed to clarify the aerodynamic forces acting on a vehicle running through a tornado. Our experimental set-up consists of two apparatuses: one is a tornado simulator, and the other is a moving model rig. PIV measurements showed that the tornado simulator can generate a swirling-flow field similar to those of natural tornadoes. The flow field has a maximum tangential velocity of 7.4 m/s and a vortex core radius of 96 mm. The moving model rig makes a 1/40-scale model train of a single-car or three-car unit run through the swirling flow at a maximum speed of 4.3 m/s. The model car has 72 pressure ports on its surface to estimate the aerodynamic forces. The experimental results show that the aerodynamic forces vary in magnitude and direction depending on the location of the vehicle in the flow field. Second, the aerodynamic forces on the train were estimated using the Rankine vortex model, a simple tornado model widely used in the field of civil engineering. The estimated aerodynamic forces on the middle car were in fairly good agreement with the experimental results. The effects of the vortex core radius and the path of the train on the aerodynamic forces were investigated using the Rankine vortex model. The results show that the side and lift forces increase as the vortex core radius increases, while the yawing moment is at a maximum when the core radius is 0.3875 times the car length. Third, a computational simulation was conducted to clarify the flow field around the train. The simulated results agreed qualitatively with the experimental ones.
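
For reference, a minimal sketch of the Rankine vortex tangential-velocity profile underlying the estimation described above, using the core radius (96 mm) and maximum tangential velocity (7.4 m/s) reported for the simulator; the sampling radii are arbitrary.

```python
import numpy as np

def rankine_tangential_velocity(r, core_radius, v_max):
    """Rankine vortex: solid-body rotation inside the core, free vortex outside."""
    r = np.asarray(r, dtype=float)
    inside = r <= core_radius
    v = np.empty_like(r)
    v[inside] = v_max * r[inside] / core_radius      # linear rise to v_max at r = R
    v[~inside] = v_max * core_radius / r[~inside]    # 1/r decay outside the core
    return v

# Values reported for the tornado simulator: R = 96 mm, v_max = 7.4 m/s
radii = np.array([0.02, 0.05, 0.096, 0.2, 0.5])      # m
print(rankine_tangential_velocity(radii, core_radius=0.096, v_max=7.4))
```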

Keywords: aerodynamic force, experimental method, tornado, train

Procedia PDF Downloads 237
426 Simulation of Bird Strike on Airplane Wings by Using SPH Methodology

Authors: Tuğçe Kiper Elibol, İbrahim Uslan, Mehmet Ali Guler, Murat Buyuk, Uğur Yolum

Abstract:

According to an FAA report, 142,603 bird strikes were reported over a period of 24 years, between 1990 and 2013. Bird strikes on aerospace structures not only threaten flight safety but also cause financial loss and put lives in danger. The statistics show that most bird strikes happen on the nose and the leading edge of the wings. A substantial number of bird strikes are also absorbed by the jet engines and cause damage to the blades and engine body. Crash-proof designs are required to reduce the possibility of catastrophic failure of the airplane. Using computational methods for bird strike analysis during the product development phase is of considerable importance in terms of cost saving. Clearly, using simulation techniques to reduce the number of reference tests can dramatically affect the total cost of an aircraft, since full-scale tests are often required for bird strike certification. Therefore, validated numerical models are required that can replace preliminary tests and accelerate the design cycle. In this study, to verify the simulation parameters for a bird strike analysis, several different numerical options are studied for an impact case against a primitive structure. Then, a representative bird model is generated with the verified parameters and collided against the leading edge of a training aircraft wing, where each structural member of the wing is explicitly modeled. A nonlinear explicit dynamics finite element code, LS-DYNA, was used for the bird impact simulations, and the SPH methodology was used to model the behavior of the bird. The dynamic behavior of the wing superstructure was observed and will be used for further design optimization purposes.
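
As background to the SPH bird model (a sketch only; the LS-DYNA material model and contact parameters are not reproduced here), the widely used Monaghan cubic-spline smoothing kernel in 3-D can be evaluated as follows; the smoothing length is an assumed example value.

```python
import numpy as np

def cubic_spline_kernel_3d(r, h):
    """Monaghan cubic-spline SPH kernel W(r, h) in 3-D, normalised by 1/(pi*h^3)."""
    q = np.asarray(r, dtype=float) / h
    sigma = 1.0 / (np.pi * h ** 3)
    w = np.zeros_like(q)
    m1 = q <= 1.0
    m2 = (q > 1.0) & (q <= 2.0)
    w[m1] = sigma * (1.0 - 1.5 * q[m1] ** 2 + 0.75 * q[m1] ** 3)
    w[m2] = sigma * 0.25 * (2.0 - q[m2]) ** 3
    return w

# Example: kernel values for particle separations 0..2h with an assumed smoothing length h = 5 mm
print(cubic_spline_kernel_3d(np.array([0.0, 0.0025, 0.005, 0.0075, 0.01]), h=0.005))
```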

Keywords: bird impact, bird strike, finite element modeling, smoothed particle hydrodynamics

Procedia PDF Downloads 328
425 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling

Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed

Abstract:

The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm on its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) single-hidden-layer and (2) double-hidden-layer feedforward back-propagation networks. The results revealed that the GDM optimisation algorithm, with its adaptive learning capability, generally used a relatively shorter time in both the training and validation phases than the Levenberg-Marquardt (LM) and Bayesian regularisation (Br) algorithms, although learning may not be fully consummated; this held in all instances considered, including the prediction of extreme flow conditions for 1-day and 5-day ahead forecasts. In specific statistical terms, average model performance efficiency using the coefficient of efficiency (CE) statistic was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96% for the training and validation phases, respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, the adoption of ANNs for real-time forecasting should employ training algorithms that do not carry a heavy computational overhead, unlike LM, which requires computation of the Hessian matrix, takes protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and forecast quality, as well as mitigation of network overfitting. On the whole, it is recommended that evaluation should also consider the implications of (i) data quality and quantity and (ii) transfer functions on the overall network forecast performance.
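
To make the optimiser comparison concrete, here is a minimal sketch of the gradient-descent-with-momentum (GDM) weight update examined in the study; the learning rate, momentum coefficient, and toy quadratic loss are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gdm_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One gradient-descent-with-momentum update: v <- m*v - lr*grad; w <- w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Toy demonstration on a quadratic loss L(w) = 0.5 * ||w||^2 (gradient = w)
w = np.array([2.0, -3.0])
v = np.zeros_like(w)
for _ in range(100):
    w, v = gdm_step(w, grad=w, velocity=v)
print(w)   # approaches the minimiser [0, 0]
```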

Keywords: streamflow, neural network, optimisation, algorithm

Procedia PDF Downloads 153
424 Cloud Computing Impact on e-Government Adoption

Authors: Ali Elshabrawy

Abstract:

Cloud computing is expected to be important for e-government in the near future. Governments need it for solving some of their e-government, financial, infrastructure, legacy-system, and integration problems. It reduces information technology (IT) infrastructure needs and support costs, and offers on-demand infrastructure, computational power, and improved collaboration capabilities, which are important for the start-up and sustainability of e-government projects. Budget pressures will continue to drive more and more government IT to hybrid and even public clouds, and more cooperation between cloud service providers and governmental agencies is expected, as well as the development of governmental private and community clouds. The motivation to convince governments to use cloud computing services will create pressure on cloud service providers to cope with government requirements for interoperability, security standards, open data, and integration between their cloud systems. There will be significant legal action arising out of governmental uses of cloud computing, and legislation addressing both IT and business needs and consumer fears and protections. Cloud computing is considered a revolution for IT and e-business in general, and for e-commerce and e-government in particular, as governments face increasing challenges regarding the IT infrastructure required for e-government project implementation and a lack of the financial resources allocated to e-government projects in developed and developing countries. Cloud computing can play a major role in addressing some e-government project challenges, such as the lack of financial resources, IT infrastructure, and human resources trained to manage e-government applications, as well as interoperability and cost-efficiency challenges, provided that the security issues related to cloud computing usage, which are critical for e-government projects, can be resolved. It is likely just a matter of time before cloud service providers find solutions that attract governments as major customers for their business.

Keywords: cloud computing, e-government, adoption, supply side barriers, e-government requirements, challenges

Procedia PDF Downloads 347
423 Mitigation Strategies in the Urban Context of Sydney, Australia

Authors: Hamed Reza Heshmat Mohajer, Lan Ding, Mattheos Santamouris

Abstract:

One of the worst environmental dangers for people who live in cities is the Urban Heat Island (UHI) effect, which is anticipated to become stronger in the coming years as a result of climate change. Accordingly, the key aim of this paper is to study the interaction between urban configuration and mitigation strategies, including increasing the albedo of the urban environment (reflective materials), implementation of Urban Green Infrastructure (UGI), and a combination thereof. To analyse the microclimate models of different urban categories in the metropolis of Sydney, this study assesses meteorological parameters using ENVI-met, a 3D computational fluid dynamics (CFD) simulation tool. Four main parameters are taken into consideration when assessing the effectiveness of the UHI mitigation strategies: ambient air temperature, wind speed, wind direction, and outdoor thermal comfort. Layouts simulated under present conditions in the base model (scenario one) are taken as the benchmark, and the base model is used to calculate the relative percentage variations for each scenario. The findings showed that the maximum cooling potential across the different urban layouts reaches 2.15 °C when high-albedo materials are combined with vegetation; in addition, layouts with open arrangements (OT1) show a highly remarkable improvement in ambient air temperature and outdoor thermal comfort when the mitigation technologies are applied, compared with their compact counterparts. All layouts also show a stronger effect on the reduction of the maximum ambient air temperature than on the minimum ambient air temperature. On the other hand, scenarios associated with an increase in greenery are anticipated to have only a slight cooling effect, especially in high-rise layouts.

Keywords: sustainable urban development, urban green infrastructure, high-albedo materials, heat island effect

Procedia PDF Downloads 95
422 Mechanical Behavior of Laminated Glass Cylindrical Shell with Hinged Free Boundary Conditions

Authors: Ebru Dural, M. Zulfu Asık

Abstract:

Laminated glass is a kind of safety glass made by 'sandwiching' a polyvinyl butyral (PVB) interlayer between two glass sheets. When the glass is broken, the interlayer between the glass sheets holds them together. Because of this property, the hazard of sharp projectiles during natural and man-made disasters is reduced. Laminated glass can be widely applied in the building, architecture, automotive, and transport industries. Laminated glass units can easily undergo large displacements even under their own weight; in order to capture their true behavior, they should therefore be analyzed using large deflection theory to represent the nonlinear behavior. In this study, a nonlinear mathematical model is developed for the analysis of a laminated glass cylindrical shell which is free in the radial directions and restrained in the axial directions. The results will be verified against experiments carried out on laminated glass cylindrical shells. The behavior of the laminated composite cylindrical shell can be represented by five partial differential equations: four of them represent the axial and radial displacements, and the fifth the transverse deflection of the unit. The governing partial differential equations are derived by employing variational principles and the minimum potential energy concept. The finite difference method is employed to solve the coupled differential equations: they are first converted into a system of matrix equations, and then an iterative procedure is employed, which is necessary since the equations are coupled. Problems in obtaining a convergent sequence from the employed procedure are overcome by using a variable under-relaxation factor. The procedure developed to solve the differential equations requires not only less storage but also less calculation time, which is a substantial advantage in computational mechanics problems.
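
A minimal sketch of the variable under-relaxation idea used to stabilise the coupled iterations; the toy fixed-point system below stands in for the actual shell equations, and the relaxation-adjustment rule is an assumed illustration.

```python
import numpy as np

def solve_with_underrelaxation(update, x0, omega=0.5, tol=1e-10, max_iter=5000):
    """Fixed-point iteration x <- (1-omega)*x + omega*update(x); the relaxation
    factor omega is halved whenever the residual grows, mimicking a variable
    under-relaxation strategy for coupled equations."""
    x = np.asarray(x0, dtype=float)
    prev_res = np.inf
    for _ in range(max_iter):
        x_new = (1.0 - omega) * x + omega * update(x)
        res = np.linalg.norm(x_new - x)
        if res > prev_res:          # divergence detected: damp the update more strongly
            omega *= 0.5
        prev_res = res
        x = x_new
        if res < tol:
            break
    return x, omega

# Toy coupled nonlinear system: x = cos(y), y = sin(x)
g = lambda v: np.array([np.cos(v[1]), np.sin(v[0])])
sol, omega_final = solve_with_underrelaxation(g, x0=[1.0, 1.0])
print(sol, omega_final)
```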

Keywords: laminated glass, mathematical model, nonlinear behavior, PVB

Procedia PDF Downloads 320
421 Design, Development and Analysis of Combined Darrieus and Savonius Wind Turbine

Authors: Ashish Bhattarai, Bishnu Bhatta, Hem Raj Joshi, Nabin Neupane, Pankaj Yadav

Abstract:

This report concerns the design, development, and analysis of a combined Darrieus and Savonius wind turbine. Vertical Axis Wind Turbines (VAWTs) are of two types, viz. Darrieus (lift type) and Savonius (drag type). The problem associated with the Darrieus rotor is its lack of self-starting capability, while the Savonius rotor has low efficiency. The design has three straight Darrieus blades with a NACA (National Advisory Committee for Aeronautics) 0018 cross-section placed circumferentially, and a helically twisted Savonius blade to obtain an even torque distribution. This unique design allows the Savonius rotor to be used as a means of self-starting the wind turbine, which the Darrieus rotor cannot achieve on its own. All the parts of the wind turbine were designed in CAD software, and simulation data were obtained via a CFD (Computational Fluid Dynamics) approach. The design was also imported into FlashForge Finder to 3D print the wind turbine profile, and finally testing was carried out. The plastic material used for the Savonius rotor was ABS (Acrylonitrile Butadiene Styrene) and that for the Darrieus rotor was PLA (Polylactic Acid). From the data obtained experimentally, the fabricated hybrid VAWT has been found to operate at a low cut-in speed of 3 m/s, and the maximum power output was found to be 7.5537 W at a wind speed of 6 m/s. The maximum speed of the rotor was recorded as 431 rpm (revolutions per minute) at a wind velocity of 6 m/s, signifying its potential for wind power production. Furthermore, the data obtained from the experimental and computational processes, when analyzed through graph plots, show a similar trend in slope, and the difference between the experimental and theoretical data reflects mechanical losses. The objective is to eliminate the need for external motors for self-starting purposes and to study the performance of the model. The testing of the model was carried out at different wind velocities.
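
As a back-of-the-envelope companion to the reported figures (a sketch only; the rotor radius, height, and air density are assumed values, not the fabricated turbine's dimensions), the power coefficient and tip-speed ratio follow from their standard definitions:

```python
import math

def power_coefficient(p_out_w, wind_speed, swept_area, rho=1.225):
    """Cp = P / (0.5 * rho * A * V^3) for a wind turbine rotor."""
    return p_out_w / (0.5 * rho * swept_area * wind_speed ** 3)

def tip_speed_ratio(rpm, rotor_radius, wind_speed):
    """Lambda = omega * R / V."""
    return (rpm * 2.0 * math.pi / 60.0) * rotor_radius / wind_speed

# Reported: 7.5537 W at 6 m/s and 431 rpm; assumed rotor radius 0.25 m and height 0.5 m (A = 2*R*H)
area = 2 * 0.25 * 0.5
print(f"Cp  ~ {power_coefficient(7.5537, 6.0, area):.3f}")
print(f"TSR ~ {tip_speed_ratio(431, 0.25, 6.0):.2f}")
```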

Keywords: VAWT, Darrieus, Savonius, helical blades, CFD, flash forge finder, ABS, PLA

Procedia PDF Downloads 210
420 Multivariate Analysis on Water Quality Attributes Using Master-Slave Neural Network Model

Authors: A. Clementking, C. Jothi Venkateswaran

Abstract:

Mathematical and computational functionalities such as descriptive mining, optimization, and prediction are espoused to support natural resource planning, and optimization techniques are adopted for water quality prediction and for determining the influence of its attributes. Water properties are tainted when one water resource is merged with another. This work aimed to predict the influence on water resource distribution connectivity, in accordance with water quality and sediment, using an innovative proposed master-slave back-propagation neural network model. The experimental results were arrived at by collecting water quality attributes, computing the water quality index, designing and developing a neural network model to determine water quality and sediment, applying the master-slave back-propagation neural network model to determine variations in water quality and sediment attributes between the water resources, and making recommendations for connectivity. Homogeneous and parallel biochemical reactions influence water quality and sediment while water is distributed from one location to another. Therefore, an innovative master-slave neural network model [M(9:9:2)::S(9:9:2)] was designed and developed to predict the attribute variations. The training data set is given as input to the master model, and its maximum weights are assigned as input to the slave model to predict the water quality. The developed master-slave model predicted physicochemical attribute weight variations for 85% to 90% of the water quality target values. The sediment level variations were also predicted, at 0.01% to 0.05% of each water quality percentage. The model produced significant variations in the physicochemical attribute weights. According to the predicted experimental weight variations on the training data set, effective recommendations are made for connecting the different resources.

Keywords: master-slave back propagation neural network model (MSBPNNM), water quality analysis, multivariate analysis, environmental mining

Procedia PDF Downloads 478
419 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods

Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo

Abstract:

The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, helped drive the development of computational methods to do so. One such approach is the prediction of the tridimensional structure from the residue chain; however, this has been proved to be an NP-hard problem, owing to the complexity of the process, as explained by the Levinthal paradox. An alternative solution is the prediction of intermediary structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN), and support vector machines (SVM), among others, have been used to predict protein secondary structure. Owing to their good results, artificial neural networks have been used as a standard method to predict protein secondary structure. Recently published methods that use this technique generally achieve a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. Alternatively, to achieve better results, prediction methods based on support vector machines have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method. The chosen SVM protein secondary structure prediction method is the one proposed by Huang in the work 'Extracting Physicochemical Features to Predict Protein Secondary Structure' (2013). The developed ANN method follows the same training and testing process used by Huang to validate his method, which comprises the use of the CB513 protein data set and three-fold cross-validation, so that the comparative analysis can be made by directly comparing the statistical results of each method.
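
A minimal sketch of the evaluation protocol described above (three-fold cross-validation scored with Q3 accuracy over the helix/strand/coil classes); the window encoding, random placeholder data, and small MLP are simplifications, not the actual ANN or SVM architectures.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPClassifier

def q3_accuracy(y_true, y_pred):
    """Q3: fraction of residues whose predicted state (helix/strand/coil) is correct."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

# Placeholder data: 1500 "residues" described by a 17-residue one-hot window (17 * 20 features)
rng = np.random.default_rng(0)
X = rng.random((1500, 17 * 20))
y = rng.integers(0, 3, size=1500)          # 0 = helix, 1 = strand, 2 = coil

scores = []
for train_idx, test_idx in KFold(n_splits=3, shuffle=True, random_state=0).split(X):
    clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=200, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(q3_accuracy(y[test_idx], clf.predict(X[test_idx])))
print(f"Mean Q3 over 3 folds: {np.mean(scores):.3f}")
```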

Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines

Procedia PDF Downloads 622
418 3D Numerical Studies and Design Optimization of a Swallowtail Butterfly with Twin Tail

Authors: Arunkumar Balamurugan, G. Soundharya Lakshmi, V. Thenmozhi, M. Jegannath, V. R. Sanal Kumar

Abstract:

The aerodynamics of insects is of topical interest in the aeronautical industry owing to its wide applications to various types of Micro Air Vehicles (MAVs). Note that MAVs have small geometric dimensions, operate at significantly lower speeds on the order of 10 m/s, and their Reynolds numbers are approximately 150,000 or lower. In this paper, a numerical study has been carried out to capture the flow physics of a biologically inspired swallowtail butterfly with a fixed wing and twin tail at a flight speed of 10 m/s. Comprehensive numerical simulations have been carried out on the swallowtail butterfly with twin tail flying at a speed of 10 m/s with uniform upper and lower angles of attack in both the lateral and longitudinal positions, in order to identify the wing orientation with the best aerodynamic efficiency. The grid system in the computational domain was selected after a detailed grid refinement exercise. Parametric analytical studies have been carried out with different lateral and longitudinal angles of attack to find the better aerodynamic efficiency at the same flight speed. The results reveal that the lift coefficient increases significantly with marginal changes in the longitudinal angle, and vice versa. In the case of the drag coefficient, the conventional trend is observed, viz., drag increases at high longitudinal angles. We observed that the change of the twin tail section has a significant impact on the formation of vortices and the aerodynamic efficiency of the MAV. We conclude that for every lateral angle there is an exact longitudinal orientation at which an aerodynamically efficient flying condition exists for any MAV. This numerical study is a pointer towards the design optimization of twin-tail MAVs with flapping wings.

Keywords: aerodynamics of insects, MAV, swallowtail butterfly, twin tail MAV design

Procedia PDF Downloads 395
417 Development of Immuno-Modulators: Application of Molecular Dynamics Simulation

Authors: Ruqaiya Khalil, Saman Usmani, Zaheer Ul-Haq

Abstract:

The accurate characterization of ligand binding affinity is indispensable for designing molecules with optimized binding affinity. Computational tools help in many ways to predict quantitative correlations between protein-ligand structure and binding affinity. Molecular dynamics (MD) simulation is a modern, state-of-the-art technique for evaluating the underlying basis of ligand-protein interactions by characterizing their dynamic and energetic properties during the binding event. Autoimmune diseases arise from an abnormal immune response of the body against its own tissues. The current regimen for such conditions is limited to immuno-modulators with compromised pharmacodynamic and pharmacokinetic profiles. One of the key players mediating immunity and tolerance, and thus invoking autoimmunity, is interleukin-2, a cytokine influencing the growth of T cells. Molecular dynamics simulation techniques were applied to seek insight into the inhibitory mechanisms of newly synthesized compounds that manifested immunosuppressant potential in an in silico pipeline. In addition to the estimation of the free energies associated with ligand binding, MD simulation yielded a great deal of information about ligand-macromolecule interactions, allowing evaluation of the pattern of interactions and the molecular basis of inhibition. The present study is a continuation of our efforts to identify interleukin-2 inhibitors of both natural and synthetic origin. Herein, we report molecular dynamics simulation studies of interleukin-2 complexed with different antagonists previously reported by our group. The study of protein-ligand dynamics enabled us to gain a better understanding of the contribution of different active-site residues to ligand binding. The results of the study will be used as a guide to rationalize the fragment-based synthesis of drug-like interleukin-2 inhibitors as immuno-modulators.

Keywords: immuno-modulators, MD simulation, protein-ligand interaction, structure-based drug design

Procedia PDF Downloads 262
416 Graphical Theoretical Construction of Discrete time Share Price Paths from Matroid

Authors: Min Wang, Sergey Utev

Abstract:

The lessons from the 2007-09 global financial crisis have driven scientific research on the design of new methodologies and financial models for the global market. The quantum mechanics approach has been introduced into the modeling of the unpredictable stock market. One famous quantum tool is the Feynman path integral method, which was used to model insurance risk by Tamturk and Utev and adapted to formalize path-dependent option pricing by Hao and Utev. The present research is based on a path-dependent calculation method motivated by the Feynman path integral method. Path calculation can be studied in two ways: one is by labeling, and the other is computational. Labeling is part of the representation of objects, and generating functions can provide many different ways of representing share price paths. In this paper, recent work on the graph-theoretical construction of individual share price paths via matroids is presented. Firstly, the necessary background on matroids is studied, the relationship between lattice path matroids and Tutte polynomials is examined, and a way to connect points in lattice path matroids and Tutte polynomials is suggested. Secondly, it is found that a general binary tree can be validly constructed from a connected lattice path matroid, rather than from a general lattice path matroid. Lastly, it is shown that share price paths can be represented via a general binary tree, and an algorithm is developed to construct share price paths from general binary trees. A relationship is also provided between lattice integer points and the Tutte polynomial of a transversal matroid. Using this connection together with the algorithm, a share price path can be constructed from a given connected lattice path matroid.
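
A hedged sketch of the final step described above: mapping a root-to-leaf path of a binary tree (a sequence of up/down branch choices) onto a discrete-time share price path. The multiplicative up/down factors and initial price are illustrative assumptions, and the matroid-based construction of the tree itself is not reproduced.

```python
def price_path_from_binary_steps(steps, s0=100.0, up=1.05, down=0.95):
    """Map a root-to-leaf path in a binary tree (1 = up move, 0 = down move)
    onto a discrete-time share price path, binomial-lattice style."""
    prices = [s0]
    for step in steps:
        prices.append(prices[-1] * (up if step else down))
    return prices

# Example: a 6-step path encoded as branch choices of a general binary tree
print(price_path_from_binary_steps([1, 1, 0, 1, 0, 0]))
```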

Keywords: combinatorial construction, graphical representation, matroid, path calculation, share price, Tutte polynomial

Procedia PDF Downloads 138
415 A Novel Epitope Prediction for Vaccine Designing against Ebola Viral Envelope Proteins

Authors: Manju Kanu, Subrata Sinha, Surabhi Johari

Abstract:

The Ebola viruses are among the best-studied viruses; however, no effective prevention against EBOV has been developed. Epitope-based vaccines provide a new strategy for the prophylactic and therapeutic application of pathogen-specific immunity. A critical requirement of this strategy is the identification and selection of T-cell epitopes that act as vaccine targets. This study describes current methodologies for the selection process, with the Ebola virus as a model system; a great challenge in the field of Ebola virus research is to design a universal vaccine. A combination of publicly available bioinformatics algorithms and computational tools was used to screen and select antigen sequences as potential T-cell epitopes of supertype Human Leukocyte Antigen (HLA) alleles. The MUSCLE and MOTIF tools were used to find the most conserved peptide sequences of the viral proteins, and immunoinformatics tools were used for the prediction of immunogenic peptides of viral proteins in Zaire strains of the Ebola virus. Putative epitopes for the viral proteins (VP) were predicted from their conserved peptide sequences. Three tools, NetCTL 1.2, BIMAS, and SYFPEITHI, were used to predict the Class I putative epitopes, while three tools, ProPred, IEDB SMM-align, and NetMHCII 2.2, were used to predict the Class II putative epitopes. B-cell epitopes were predicted by BCPREDS 1.0. Immunogenic peptides were identified and selected manually from the putative epitopes predicted by the online tools for each MHC class individually. Finally, the sequences of the predicted peptides for both MHC classes were examined for a common region, which was selected as the common immunogenic peptide. The immunogenic peptides found for the viral proteins of the Ebola virus include the epitopes FLESGAVKY and SSLAKHGEY. These predicted peptides could be promising candidates to be used as targets for vaccine design.
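
A small sketch of the final manual step described above, i.e. finding a region shared between the Class I and Class II predicted peptide sets; apart from the reported epitopes FLESGAVKY and SSLAKHGEY, the peptide lists below are hypothetical placeholders.

```python
def common_epitopes(class_i_peptides, class_ii_peptides):
    """Return Class I peptides that appear as a sub-sequence of any Class II peptide,
    i.e. regions predicted as immunogenic by both MHC classes."""
    return [p1 for p1 in class_i_peptides
            if any(p1 in p2 for p2 in class_ii_peptides)]

# Reported epitopes plus hypothetical surrounding Class II 15-mers for illustration
class_i = ["FLESGAVKY", "SSLAKHGEY"]
class_ii = ["AAAFLESGAVKYAAA", "GGGSSLAKHGEYGGG"]     # placeholder sequences
print(common_epitopes(class_i, class_ii))
```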

Keywords: epitope, b cell, immunogenicity, ebola

Procedia PDF Downloads 315
414 Evaluating Structural Crack Propagation Induced by Soundless Chemical Demolition Agent Using an Energy Release Rate Approach

Authors: Shyaka Eugene

Abstract:

The efficient and safe demolition of structures is a critical challenge in civil engineering and construction. This study focuses on the development of optimal demolition strategies by investigating the crack propagation behavior induced in beams by soundless chemical demolition agents, which are commonly used in controlled demolition and have gained prominence due to their non-explosive and environmentally friendly nature. This research employs a comprehensive experimental and computational approach to analyze crack initiation, propagation, and eventual failure in beams subjected to soundless cracking agents. The experimental testing involves the application of various cracking agents under controlled conditions to understand their effects on the structural integrity of the beams; high-resolution imaging and strain measurements are used to capture the crack propagation process. In parallel, numerical simulations are conducted using advanced finite element analysis (FEA) techniques to model crack propagation in beams, considering various parameters such as cracking agent composition, loading conditions, and beam properties. The FEA models are validated against the experimental results, ensuring their accuracy in predicting crack propagation patterns. The findings of this study provide valuable insights into optimizing demolition strategies, allowing engineers and demolition experts to make informed decisions regarding the selection of cracking agents, their application techniques, and structural reinforcement methods. Ultimately, this research contributes to enhancing the safety, efficiency, and sustainability of demolition practices in the construction industry, reducing environmental impact and ensuring the protection of adjacent structures and the surrounding environment.

Keywords: expansion pressure, energy release rate, soundless chemical demolition agent, crack propagation

Procedia PDF Downloads 63
413 Practical Modelling of RC Structural Walls under Monotonic and Cyclic Loading

Authors: Reza E. Sedgh, Rajesh P. Dhakal

Abstract:

Shear walls have been used extensively as the main lateral force resisting systems in multi-storey buildings. Recent developments in performance-based design urge practicing engineers to conduct nonlinear static or dynamic analysis to evaluate the seismic performance of multi-storey shear wall buildings, employing the distinct analytical models suggested in the literature. For practical purposes, the application of macroscopic models to simulate the global and local nonlinear behavior of structural walls outweighs that of microscopic models. The required skill level, the computational time, and the limited access to specialized RC finite element packages prevent the general application of the microscopic approach in the performance-based design or assessment of multi-storey shear wall buildings in design offices. Hence, this paper is organized to verify the capability of the nonlinear shell element in a commercially available package (SAP2000) to simulate the results of selected specimens under monotonic and cyclic loads, using the rather simplified cyclic material laws available in the analytical tool. The selection of the constitutive models, the determination of the related parameters of the constituent materials, and an appropriate nonlinear shear model are presented in detail. Adoption of the proposed simple model demonstrated that the predicted results follow the overall trend of the experimental force-displacement curves. Although the prediction of the ultimate strength and of the overall shape of the hysteresis loops agreed with the experiments to some extent, the prediction of the ultimate displacement (the point of significant strength degradation) remains challenging in some cases.

Keywords: analytical model, nonlinear shell element, structural wall, shear behavior

Procedia PDF Downloads 407
412 Modeling and Simulation of Turbulence Induced in Nozzle Cavitation and Its Effects on Internal Flow in a High Torque Low Speed Diesel Engine

Authors: Ali Javaid, Rizwan Latif, Syed Adnan Qasim, Imran Shafi

Abstract:

To control combustion inside a direct injection diesel engine, fuel atomization is the best tool. Controlling combustion helps in reducing emissions and improves efficiency. Cavitation is one of the most important factors that significantly affect the nature of the spray before it is injected into the combustion chamber. Typical fuel injector nozzles are small and operate at very high pressure, which limits the study of internal nozzle behavior, especially in the case of diesel engines. Simulating cavitation in a fuel injector will help in understanding the phenomenon and will assist in further development. There is a parametric variation between high-speed and high-torque low-speed diesel engines. The objective of this study is to simulate the internal spray characteristics for a low-speed, high-torque diesel engine. In-nozzle cavitation has strong effects on parameters such as the mass flow rate, fuel velocity, and momentum flux of the fuel that is to be injected into the combustion chamber, and the external spray dynamics, and subsequently the air-fuel mixing, depend on many parameters of the fuel-injecting nozzle. The approach used to model the turbulence induced by in-nozzle cavitation for a high-torque, low-speed diesel engine is the homogeneous equilibrium model. The governing equations were modeled using Matlab. The complete model in question was extensively evaluated by performing 3-D time-dependent simulations in OpenFOAM, an open-source computational fluid dynamics (CFD) flow solver. The results thus obtained will be analyzed for better evaporation in the near-nozzle region. The proposed analyses will further help achieve better engine efficiency, lower emissions, and improved fuel economy.

Keywords: cavitation, HEM model, nozzle flow, open foam, turbulence

Procedia PDF Downloads 291
411 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods

Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin

Abstract:

Nowadays, the data center industry faces strong challenges in increasing speed and data processing capacity while at the same time trying to keep devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of this kind of facility use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting those already existing would be a great advance for this type of industry. The installation of a temperature sensor matrix distributed in the structure of each server would provide the information required for obtaining a temperature profile inside them instantly. However, the number of temperature probes required to obtain the temperature profiles with sufficient accuracy is very high and expensive. Therefore, other less intrusive techniques are employed, in which each point that characterizes the server temperature profile is obtained by solving differential equations through simulation methods, simplifying data collection but increasing the time needed to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equations by backward, forward, and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the Burgers' equation obtained after these simplifications are the key to obtaining results with greater or lesser accuracy, regardless of the characteristic truncation error.
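
A minimal sketch of the kind of discretisation discussed above: an explicit finite-difference step for the 1-D viscous Burgers' equation u_t + u u_x = nu u_xx, using a backward (upwind) difference for the convective term and a central difference for the diffusive term. The grid, time step, and initial/boundary values are illustrative, not those of the data-center model.

```python
import numpy as np

def burgers_step(u, dx, dt, nu):
    """One explicit step of u_t + u*u_x = nu*u_xx:
    backward difference for u_x (assuming u >= 0), central difference for u_xx."""
    u_new = u.copy()
    u_new[1:-1] = (u[1:-1]
                   - dt * u[1:-1] * (u[1:-1] - u[:-2]) / dx
                   + nu * dt * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2)
    return u_new          # boundary values kept fixed

# Illustrative setup: smooth initial profile steepening into a front
nx, nu = 101, 0.01
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.2 * dx             # small enough for stability at these parameters
u = 1.0 + np.sin(np.pi * x) ** 2
for _ in range(200):
    u = burgers_step(u, dx, dt, nu)
print(u.min(), u.max())
```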

Keywords: Burgers' equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile

Procedia PDF Downloads 171
410 Free Energy Computation of A G-Quadruplex-Ligand Structure: A Classical Molecular Dynamics and Metadynamics Simulation Study

Authors: Juan Antonio Mondragon Sanchez, Ruben Santamaria

Abstract:

The DNA G-quadruplex is a four-stranded DNA structure formed by stacked planes of four base-paired guanines (G-quartets). Guanine-rich DNA sequences appear at many sites of genomic DNA and can potentially form G-quadruplexes, such as those occurring at the 3'-terminus of human telomeric DNA. The formation and stabilization of a G-quadruplex by small ligands at the telomeric region can inhibit telomerase activity. In turn, the ligands can be used to down-regulate oncogene expression, making the G-quadruplex an attractive target for anticancer therapy. Many G-quadruplex ligands have been proposed with a planar core to facilitate pi-pi stacking and electrostatic interactions with the G-quartets. However, many drug candidates are unable to discriminate a G-quadruplex from a double-helix DNA structure. In this context, it is important to investigate the site topology of the interaction of a G-quadruplex with a ligand. In this work, we determine the free energy surface of a G-quadruplex-ligand complex to study the binding modes of the G-quadruplex (TG4T) with the drug daunomycin (DM). The complex TG4T-DM is studied using classical molecular dynamics in combination with metadynamics simulations. The metadynamics simulations permit an enhanced sampling of the conformational space at a modest computational cost and provide free energy surfaces in terms of the collective variables (CVs). The free energy surfaces of TG4T-DM exhibit additional local minima, indicating the presence of binding modes of daunomycin that are not observed in short MD simulations without the metadynamics approach. The results are compared with similar calculations on a different structure (the mutated mu-G4T-DM, in which the 5' thymines of TG4T-DM have been deleted). The results should help in designing new G-quadruplex drugs and in understanding the differences in the recognition topology sites of duplex and quadruplex DNA structures in their interactions with ligands.
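
A toy sketch of the metadynamics idea used here (not the actual simulation setup for the TG4T-DM complex): Gaussian hills are periodically deposited along a collective variable, and the accumulated bias progressively flattens the underlying wells. The double-well potential, hill height and width, and deposition stride are assumed illustrative values.

```python
import numpy as np

def bias_and_force(s, centers, height=0.05, width=0.2):
    """Metadynamics bias from Gaussian hills deposited at earlier CV values,
    and the corresponding force contribution -dV_bias/ds."""
    if not centers:
        return 0.0, 0.0
    d = s - np.asarray(centers)
    g = height * np.exp(-d ** 2 / (2.0 * width ** 2))
    return float(g.sum()), float((g * d / width ** 2).sum())

# Overdamped Langevin dynamics on a toy double-well free energy F(s) = (s^2 - 1)^2
rng = np.random.default_rng(1)
s, dt, kT, centers = -1.0, 1e-3, 0.2, []
for step in range(50_000):
    _, f_bias = bias_and_force(s, centers)
    force = -4.0 * s * (s ** 2 - 1.0) + f_bias
    s += force * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
    if step % 500 == 0:
        centers.append(s)                      # deposit a new hill every 500 steps
# The accumulated hills progressively fill the wells, letting the CV cross the barrier
print(f"{len(centers)} hills deposited; final CV value s = {s:.2f}")
```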

Keywords: g-quadruplex, cancer, molecular dynamics, metadynamics

Procedia PDF Downloads 460
409 FlameCens: Visualization of Expressive Deviations in Music Performance

Authors: Y. Trantafyllou, C. Alexandraki

Abstract:

Music interpretation refers to the way musicians shape their performance by deliberately deviating from the composer's intentions, which are commonly communicated via some form of music transcription, such as a music score. For transcribed and non-improvised music, musical expression is manifested by introducing subtle deviations in tempo, dynamics, and articulation during the evolution of the performance. This paper presents an application, named FlameCens, which, given two recordings of the same piece of music, presumably performed by different musicians, allows visualising deviations in tempo and dynamics during playback. The application may also compare a certain performance to the music score of that piece (i.e. a MIDI file), which may be thought of as an expression-neutral representation of the piece, hence depicting the expressive cues employed by certain performers. FlameCens uses the Dynamic Time Warping algorithm to compare two audio sequences, based on CENS (Chroma Energy distribution Normalized Statistics) audio features. Expressive deviations are illustrated in a moving flame, which is generated by an animation of particles. The length of the flame is mapped to deviations in dynamics, while the slope of the flame is mapped to tempo deviations, so that a faster tempo tilts the slope to the right and a slower tempo tilts the slope to the left; a constant slope signifies no tempo deviation. The detected deviations in tempo and dynamics can additionally be recorded in a text file, which allows for offline investigation. Moreover, in the case of monophonic music, the colour of the particles is used to convey the pitch of the notes during the performance. FlameCens has been implemented in Python and is openly available via GitHub. The application has been experimentally validated for different music genres, including classical, contemporary, jazz, and popular music. These experiments revealed that FlameCens can be a valuable tool for music specialists (i.e. musicians or musicologists) to investigate the expressive performance strategies employed by different musicians, as well as for music audiences to enhance their listening experience.
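
A small sketch of the core alignment step (CENS chroma features compared with dynamic time warping) using librosa; the audio file names are placeholders, the slope-based tempo proxy is a simplification, and the flame rendering itself is not reproduced.

```python
import librosa
import numpy as np

# Placeholder file names for two performances of the same piece
y_a, sr_a = librosa.load("performance_A.wav")
y_b, sr_b = librosa.load("performance_B.wav")

# CENS chroma features (Chroma Energy distribution Normalized Statistics)
cens_a = librosa.feature.chroma_cens(y=y_a, sr=sr_a)
cens_b = librosa.feature.chroma_cens(y=y_b, sr=sr_b)

# Dynamic time warping between the two feature sequences
cost, warp_path = librosa.sequence.dtw(X=cens_a, Y=cens_b)
warp_path = warp_path[::-1]                       # chronological order

# Local slope of the warping path as a rough tempo-deviation proxy,
# computed over a 50-frame window; values > 1 mean B is locally slower than A
win = 50
dx = warp_path[win:, 0] - warp_path[:-win, 0]
dy = warp_path[win:, 1] - warp_path[:-win, 1]
local_slope = dy / np.maximum(dx, 1)
print(f"Median local tempo ratio B/A: {np.median(local_slope):.2f}")
```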

Keywords: audio synchronization, computational music analysis, expressive music performance, information visualization

Procedia PDF Downloads 131
408 Transformer-Driven Multi-Category Classification for an Automated Academic Strand Recommendation Framework

Authors: Ma Cecilia Siva

Abstract:

This study introduces a Bidirectional Encoder Representations from Transformers (BERT)-based machine learning model aimed at improving educational counseling by automating the process of recommending academic strands for students. The framework is designed to streamline and enhance the strand selection process by analyzing students' profiles and suggesting suitable academic paths based on their interests, strengths, and goals. Data was gathered from a sample of 200 grade 10 students, which included personal essays and survey responses relevant to strand alignment. After thorough preprocessing, the text data was tokenized, label-encoded, and input into a fine-tuned BERT model set up for multi-label classification. The model was optimized for balanced accuracy and computational efficiency, featuring a multi-category classification layer with sigmoid activation for independent strand predictions. Performance metrics showed an F1 score of 88%, indicating a well-balanced model with precision at 80% and recall at 100%, demonstrating its effectiveness in providing reliable recommendations while reducing irrelevant strand suggestions. To facilitate practical use, the final deployment phase created a recommendation framework that processes new student data through the trained model and generates personalized academic strand suggestions. This automated recommendation system presents a scalable solution for academic guidance, potentially enhancing student satisfaction and alignment with educational objectives. The study's findings indicate that expanding the data set, integrating additional features, and refining the model iteratively could improve the framework's accuracy and broaden its applicability in various educational contexts.
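
A hedged sketch of the classification setup described above using the Hugging Face transformers library; the strand label set, maximum sequence length, and threshold are illustrative assumptions, and the classification head shown here is untrained (the study's fine-tuning on the 200 student profiles is not reproduced).

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Hypothetical strand labels, for illustration only
STRANDS = ["STEM", "ABM", "HUMSS", "GAS", "TVL"]

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(STRANDS),
    problem_type="multi_label_classification",   # sigmoid per label, BCE loss during fine-tuning
)

def recommend_strands(essay: str, threshold: float = 0.5):
    """Tokenise a student profile/essay and return independently predicted strands."""
    inputs = tokenizer(essay, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.sigmoid(logits).squeeze(0)      # independent probability per strand
    return [(s, float(p)) for s, p in zip(STRANDS, probs) if p >= threshold]

# With an untrained head the scores are arbitrary; fine-tuning on labeled profiles is required
print(recommend_strands("I enjoy laboratory work, mathematics, and building software projects."))
```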

Keywords: tokenized, sigmoid activation, transformer, multi category classification

Procedia PDF Downloads 13
407 Modelling of Damage as Hinges in Segmented Tunnels

Authors: Gelacio JuáRez-Luna, Daniel Enrique GonzáLez-RamíRez, Enrique Tenorio-Montero

Abstract:

Frame elements coupled with spring elements are used for modelling the development of hinges in segmented tunnels; the spring elements model the rotational, transverse, and axial failure. These spring elements are equipped with constitutive models that independently include the moment, shear force, and axial force, respectively. These constitutive models are formulated based on damage mechanics and experimental tests reported in the literature. The meshes of the segmented tunnels were discretized in the software GiD, and the nonlinear analyses were carried out in the finite element software ANSYS. These analyses provide the capacity curves of the primary and secondary linings of a segmented tunnel. Two numerical examples of segmented tunnels show the capability of the spring elements to release energy through the development of hinges. The first example is a segmental concrete lining discretized with frame elements and loaded until hinges occur in the lining. The second example is a tunnel with primary and secondary linings, discretized with a double-ring frame model: the outer ring simulates the segmental concrete lining, and the inner ring simulates the secondary cast-in-place concrete lining. Spring elements also model the joints between the segments in the circumferential direction and the ring joints, which connect parallel adjacent rings. The computed load-displacement curves are congruent with numerical and experimental results reported in the literature. It is shown that modelling a tunnel with primary and secondary linings with frame elements and springs provides reasonable results and saves computational cost compared with 2D or 3D models equipped with smeared crack models.

Keywords: damage, hinges, lining, tunnel

Procedia PDF Downloads 391
406 21st Century Biotechnological Research and Development Advancements for Industrial Development in India

Authors: Monisha Isaac

Abstract:

Biotechnology is a discipline concerned with the use of living organisms and systems to construct products; it can be defined as an application or technology developed to use biological systems, organisms, and processes for a specific purpose. In particular, it includes the use of cells and their components for new technologies and inventions. The tools developed can be further used in diverse fields such as agriculture, industry, research, and hospitals. The 21st century has seen a drastic development and advancement of biotechnology in India, with a significant increase in the Government of India's outlays for biotechnology over the past decade. A sectoral break-up of biotechnology-based companies in India shows that most of the companies are agriculture-based, with interests ranging from tissue culture to biopesticides. Major attention has been given by companies to health-related activities and to environmental biotechnology. The biopharmaceutical segment, which comprises vaccines, diagnostics, and recombinant products, is the most reliable and largest segment of the Indian biotech industry; India has developed its vaccine markets and supplies vaccines to various countries. Then there are the bio-services, which mainly comprise contract research and manufacturing services. India has also made noticeable developments in the field of bio-industries, including the manufacturing of enzymes, biofuels, and biopolymers. Biotechnology is likewise playing a crucial and significant role in the field of agriculture, where traditional methods have been replaced by new technologies that mainly focus on GM crops, marker-assisted technologies, and the use of biotechnological tools to improve the quality of fertilizers and soil. It may only be a small contributor at present, but it has been shown to have huge potential for growth. Bioinformatics, a computational discipline that helps to store, manage, arrange, and design tools to interpret the extensive data gathered through experimental trials, is also important in the design of drugs.

Keywords: biotechnology, advancement, agriculture, bio-services, bio-industries, bio-pharmaceuticals

Procedia PDF Downloads 237
405 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model

Authors: Donatella Giuliani

Abstract:

In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique, centered on the flashing characteristics of fireflies; in this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. Subsequently, these means are used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying the Bayes rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. Validation has been performed using several standard measures, namely the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The results achieved strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which implies a consistent reduction of the computational cost.
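
A condensed sketch of the pipeline described above: cluster means found on the grey-level histogram (the paper's Firefly Algorithm search is not reproduced; assumed means are passed in directly) initialise a Gaussian Mixture Model, which is refined by EM, and pixels are then assigned by the maximum posterior responsibility.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_grayscale(image, init_means):
    """Fit a GMM to grey-level intensities, with means initialised from a prior
    histogram-based search (the Firefly Algorithm in the paper; supplied here as
    init_means), then label each pixel by its maximum posterior responsibility."""
    x = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=len(init_means),
                          means_init=np.asarray(init_means).reshape(-1, 1),
                          random_state=0)
    gmm.fit(x)                                   # EM refinement of the initial means
    labels = gmm.predict(x)                      # argmax of posterior responsibilities
    return labels.reshape(image.shape)

# Synthetic test image: three intensity populations plus noise
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 8, 3000), rng.normal(128, 8, 3000),
                      rng.normal(200, 8, 3000)]).reshape(90, 100)
seg = segment_grayscale(img, init_means=[50, 120, 210])   # stand-in for FA output
print(np.bincount(seg.ravel()))
```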

Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation

Procedia PDF Downloads 217
404 The Effects of Damping Devices on Displacements, Velocities and Accelerations of Structures

Authors: Radhwane Boudjelthia

Abstract:

The most recent earthquakes that have occurred around the world have killed thousands of people and caused severe damage. For all the actors involved in the building process, the earthquake is the litmus test for construction. The goal we set ourselves is to contribute to the implementation of a thoughtful approach to the seismic protection of structures. For many engineers, the most conventional approach to protecting works (buildings and bridges) against the effects of earthquakes is to increase rigidity. This approach is not always effective, especially when the context favors resonance and the amplification of seismic forces. The field of earthquake engineering has therefore made significant inroads, catalyzed among other things by the development of computational techniques and the use of powerful test facilities. This has led to the emergence of several innovative technologies, such as the introduction of special isolation devices between the infrastructure and the superstructure. This approach, commonly known as 'seismic isolation', absorbs significant forces without damage to the structure, thus ensuring the protection of lives and property. In addition, the restraints imposed on the construction by ground shaking are located mainly at the supports; with seismic isolation, the natural period of the construction increases and the seismic loads are reduced, so the seismic motion transmitted to the structure is attenuated. Likewise, the base isolation mechanism may be used in combination with earthquake dampers in order to control the deformation of the isolation system and the absolute displacement of the superstructure located above the isolation interface. Alternatively, earthquake dampers can be used on their own to reduce the oscillation amplitudes and thus reduce the seismic loads. The use of damping devices represents an effective solution for the rehabilitation of existing structures. Given that all these acceleration-reducing means are considered passive, much research has been conducted for several years to develop active systems for controlling the response of buildings to earthquakes.

Keywords: earthquake, building, seismic forces, displacement, resonance, response

Procedia PDF Downloads 69
403 Computational Fluid Dynamics Simulation of a Boiler Outlet Header Constructed of Inconel Alloy 740H

Authors: Sherman Ho, Ahmed Cherif Megri

Abstract:

Headers play a critical role in conveying steam to regulate heating system temperatures. While various materials like steel grades 91 and 92 have been traditionally used for pipes, this research proposes the use of a robust and innovative material, INCONEL Alloy 740H. Boilers in power plant configurations are exposed to cycling conditions due to factors such as daily, seasonal, and yearly variations in weather. These cycling conditions can lead to the deterioration of headers, which are vital components with intricate geometries. Header failures result in substantial financial losses from repair costs and power plant shutdowns, along with significant public inconveniences such as the loss of heating and hot water. To address this issue and seek solutions, a mechanical analysis, as well as a structural analysis, are recommended. Transient analysis to predict heat transfer conditions is of paramount importance, as the direction of heat transfer within the header walls and the passing steam can vary based on the location of interest, load, and operating conditions. The geometry and material of the header are also crucial design factors, and the choice of pipe material depends on its usage. In this context, the heat transfer coefficient plays a vital role in header design and analysis. This research employs ANSYS Fluent, a numerical simulation program, to understand header behavior, predict heat transfer, and analyze mechanical phenomena within the header. Transient simulations are conducted to investigate parameters like heat transfer coefficient, pressure loss coefficients, and heat flux, with the results used to optimize header design.
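
Not part of the ANSYS Fluent workflow above, but as a quick illustration of how an internal-flow heat transfer coefficient can be estimated for comparison, the classical Dittus-Boelter correlation for fully developed turbulent pipe flow is sketched below with assumed steam properties and header bore.

```python
def dittus_boelter_htc(velocity, diameter, rho, mu, cp, k, heating=True):
    """Heat transfer coefficient h from Nu = 0.023 * Re^0.8 * Pr^n
    (n = 0.4 for heating, 0.3 for cooling); valid for turbulent pipe flow."""
    re = rho * velocity * diameter / mu
    pr = cp * mu / k
    nu = 0.023 * re ** 0.8 * pr ** (0.4 if heating else 0.3)
    return nu * k / diameter

# Assumed superheated-steam properties and header bore, for illustration only
h = dittus_boelter_htc(velocity=20.0, diameter=0.3,
                       rho=30.0, mu=2.8e-5, cp=2600.0, k=0.08)
print(f"Estimated h ~ {h:.0f} W/m^2.K")
```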

Keywords: CFD, header, power plant, heat transfer coefficient, simulation using experimental data

Procedia PDF Downloads 67
402 Four-Way Coupled CFD-Dem Simulation of Concrete Pipe Flow Using a Non-Newtonian Rheological Model: Investigating the Simulation of Lubrication Layer Formation and Plug Flow Zones

Authors: Tooran Tavangar, Masoud Hosseinpoor, Jeffrey S. Marshall, Ammar Yahia, Kamal Henri Khayat

Abstract:

In this study, a four-way coupled CFD-DEM methodology was used to simulate the behavior of concrete pipe flow. Fresh concrete, characterized as a biphasic suspension, features aggregates comprising the solid-suspended phase with diverse particle-size distributions (PSD) within a non-Newtonian cement paste/mortar matrix forming the liquid phase. The fluid phase was simulated using CFD, while the aggregates were modeled using DEM. Interaction forces between the fluid and solid particles were considered through CFD-DEM computations. To capture the viscoelastic characteristics of the suspending fluid, a bi-viscous approach was adopted, incorporating a critical shear rate proportional to the yield stress of the mortar. In total, three diphasic suspensions were simulated, each featuring distinct particle size distributions and a concentration of 10% for five subclasses of spherical particles ranging from 1 to 17 mm in a suspending fluid. The adopted bi-viscous approach successfully simulated both un-sheared (plug flow) and sheared zones. Furthermore, shear-induced particle migration (SIPM) was assessed by examining coefficients of variation in particle concentration across the pipe. These SIPM values were then compared with results obtained using CFD-DEM under the Newtonian assumption. The study highlighted the crucial role of yield stress in the mortar phase, revealing that lower yield stress values can lead to increased flow rates and higher SIPM across the pipe.
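
A compact sketch of the bi-viscous approximation adopted for the suspending mortar: below a critical shear rate the material behaves as a very viscous plug, and above it the apparent viscosity follows the Bingham form. The yield stress, plastic viscosity, and critical-shear-rate proportionality are illustrative assumptions, not the study's calibrated parameters.

```python
import numpy as np

def biviscous_apparent_viscosity(shear_rate, tau0, mu_plastic, gamma_crit):
    """Bi-viscous approximation of a Bingham-type mortar:
    rigid-like high viscosity below the critical shear rate (plug zone),
    Bingham apparent viscosity tau0/gamma + mu_p above it (sheared zone)."""
    shear_rate = np.maximum(np.asarray(shear_rate, dtype=float), 1e-12)
    mu_plug = tau0 / gamma_crit + mu_plastic          # continuity at gamma_crit
    mu_sheared = tau0 / shear_rate + mu_plastic
    return np.where(shear_rate < gamma_crit, mu_plug, mu_sheared)

# Illustrative mortar parameters: yield stress 50 Pa, plastic viscosity 20 Pa.s,
# critical shear rate taken proportional to the yield stress (here 0.01 * tau0)
rates = np.array([0.05, 0.5, 2.0, 10.0, 50.0])        # 1/s
print(biviscous_apparent_viscosity(rates, tau0=50.0, mu_plastic=20.0, gamma_crit=0.01 * 50.0))
```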

Keywords: computational fluid dynamics, concrete pumping, coupled CFD-DEM, discrete element method, plug flow, shear-induced particle migration

Procedia PDF Downloads 70
401 Hybridization of Manually Extracted and Convolutional Features for Classification of Chest X-Ray of COVID-19

Authors: M. Bilal Ishfaq, Adnan N. Qureshi

Abstract:

COVID-19 is among the most infectious diseases of recent years. It was first reported in Wuhan, the capital of Hubei province in China, and then spread rapidly throughout the world; on 11 March 2020, the World Health Organisation (WHO) declared it a pandemic. Since COVID-19 is highly contagious, it has affected approximately 219M people worldwide and caused 4.55M deaths, which has brought the importance of accurate diagnosis of respiratory diseases such as pneumonia and COVID-19 to the forefront. In this paper, we propose a hybrid approach for the automated detection of COVID-19 from medical imaging based on the hybridization of manually extracted and convolutional features. Our approach combines Haralick texture features and convolutional features extracted from chest X-rays and CT scans. We also employ a minimum redundancy maximum relevance (MRMR) feature selection algorithm to reduce computational complexity and enhance classification performance. The proposed model is evaluated on four publicly available datasets: Chest X-ray Pneumonia; COVID-19, Pneumonia and Normal Chest X-ray; COVID-19 CTMaster; and VinBig. The results demonstrate high accuracy and effectiveness, with 0.9925 on the Chest X-ray Pneumonia dataset, 0.9895 on the COVID-19, Pneumonia and Normal Chest X-ray dataset, 0.9806 on the COVID-19 CTMaster dataset, and 0.9398 on the VinBig dataset. We further evaluate the effectiveness of the proposed model using ROC curves, where the AUC of the best-performing model reaches 0.96. The proposed model provides a promising tool for the early detection and accurate diagnosis of COVID-19, which can assist healthcare professionals in making informed treatment decisions and improving patient outcomes, and the system can be deployed in a clinical or research setting to assist in the diagnosis of COVID-19.
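
The sketch below is a minimal illustration of the hybrid-feature idea: GLCM-based texture descriptors (a subset of Haralick's features) are concatenated with CNN embeddings, and a mutual-information ranking is used as a simplified, relevance-only stand-in for MRMR (true MRMR additionally penalizes redundancy among selected features). The function names, the assumption that CNN embeddings are precomputed, and all parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.feature_selection import mutual_info_classif

def glcm_texture_features(image_u8):
    """GLCM-based texture descriptors for one 8-bit grayscale chest X-ray."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def hybrid_features(images_u8, cnn_embeddings):
    """Concatenate handcrafted texture features with CNN embeddings (one row per image)."""
    texture = np.vstack([glcm_texture_features(img) for img in images_u8])
    return np.hstack([texture, cnn_embeddings])

def rank_features(features, labels, keep=64):
    """Relevance-only ranking via mutual information (simplified stand-in for MRMR)."""
    scores = mutual_info_classif(features, labels, random_state=0)
    return np.argsort(scores)[::-1][:keep]
```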

Keywords: COVID-19, feature engineering, artificial neural networks, radiology images

Procedia PDF Downloads 76
400 FEM Models of Glued Laminated Timber Beams Enhanced by Bayesian Updating of Elastic Moduli

Authors: L. Melzerová, T. Janda, M. Šejnoha, J. Šejnoha

Abstract:

Two finite element (FEM) models are presented in this paper to address the random nature of the response of glued timber structures made of wood segments with variable elastic moduli evaluated from 3600 indentation measurements. This database served to create as many ensembles as there were segments in the tested beam. Statistics of these ensembles were then assigned to the corresponding segments of the beams, and the Latin Hypercube Sampling (LHS) method was used to perform 100 simulations, resulting in an ensemble of 100 deflections subjected to statistical evaluation. Here, the detailed geometrical arrangement of the individual segments in the laminated beam was considered in the construction of a two-dimensional FEM model subjected to four-point bending to comply with the laboratory tests. Since laboratory measurements of local elastic moduli may in general suffer from significant experimental error, it appears advantageous to exploit full-scale measurements on timber beams, i.e. deflections, to improve the prior distributions with the help of the Bayesian statistical method. This, however, requires an efficient computational model for simulating the laboratory tests numerically. To this end, a simplified model based on Mindlin's beam theory was established. The improved posterior distributions show that the most significant change of the Young's modulus distribution takes place in the laminae in the most strained zones, i.e. in the top and bottom layers within the beam center region. The posterior distributions of the moduli of elasticity were subsequently utilized in the 2D FEM model and compared with the original simulations.
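
To make the workflow concrete, the sketch below couples LHS sampling of Young's modulus with a simple Bayesian update against a measured deflection. It deliberately substitutes a closed-form homogeneous four-point-bending deflection and an importance-weighting update for the paper's Mindlin-beam and 2D FEM models; all loads, dimensions, prior parameters, and the "measured" value are assumed for illustration.

```python
import numpy as np
from scipy.stats import qmc, norm

def midspan_deflection(E, P=5.0e3, a=0.6, L=2.0, I=8.0e-5):
    """Midspan deflection of a homogeneous simply supported beam in four-point
    bending (loads P at distance a from each support): P*a*(3L^2 - 4a^2)/(24*E*I)."""
    return P * a * (3.0 * L ** 2 - 4.0 * a ** 2) / (24.0 * E * I)

# 1) Prior samples of Young's modulus via Latin Hypercube Sampling (illustrative prior).
sampler = qmc.LatinHypercube(d=1, seed=0)
u = sampler.random(n=100).ravel()
E_prior = norm.ppf(u, loc=11.0e9, scale=1.5e9)           # Pa

# 2) Bayesian update by importance weighting against one measured deflection,
#    assuming a Gaussian measurement error (sigma_meas is a placeholder).
delta_measured = 1.6e-3                                   # m
sigma_meas = 1.0e-4                                       # m
likelihood = norm.pdf(midspan_deflection(E_prior), loc=delta_measured, scale=sigma_meas)
weights = likelihood / likelihood.sum()

E_post_mean = np.sum(weights * E_prior)
print(f"prior mean {E_prior.mean()/1e9:.2f} GPa -> posterior mean {E_post_mean/1e9:.2f} GPa")
```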

Keywords: Bayesian inference, FEM, four point bending test, laminated timber, parameter estimation, prior and posterior distribution, Young’s modulus

Procedia PDF Downloads 284
399 Engineering Thermal-Hydraulic Simulator Based on Complex Simulation Suite “Virtual Unit of Nuclear Power Plant”

Authors: Evgeny Obraztsov, Ilya Kremnev, Vitaly Sokolov, Maksim Gavrilov, Evgeny Tretyakov, Vladimir Kukhtevich, Vladimir Bezlepkin

Abstract:

Over the last decade, a specific set of connected software tools and calculation codes has been gradually developed. It allows simulating I&C systems as well as thermal-hydraulic, neutron-physical and electrical processes in the elements and systems of an NPP unit (initially for units with WWER pressurized water reactors). In 2012 it was named the complex simulation suite “Virtual Unit of NPP” (CSS “VEB” for short). Proper application of this complex tool results in a coupled mathematical computational model, which for a specific NPP design is called the Virtual Power Unit (VPU for short). A VPU can be used for comprehensive modelling of power unit operation, checking operator actions in a virtual main control room, and modelling complicated scenarios for normal modes and accidents. In addition, CSS “VEB” contains a combination of thermal-hydraulic codes: the best-estimate (two-fluid) calculation codes KORSAR and CORTES and the homogeneous calculation code TPP. Thus, to analyze a specific technological system, one can build thermal-hydraulic simulation models with different levels of detail, up to a nodalization scheme with real geometry. In some respects the result is similar to the notion of an “engineering/testing simulator” described in the European utility requirements (EUR) for LWR nuclear power plants. The paper describes the tools mentioned above and gives an example of applying the engineering thermal-hydraulic simulator to the analysis of the boric acid concentration in the primary coolant (changed by the make-up and boron control system).
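
The zero-dimensional mass balance below is only a textbook illustration of the kind of boric acid transient such a simulator resolves in far more detail with the KORSAR/CORTES/TPP codes; the well-mixed assumption, equal make-up and letdown flows, and all parameter values are assumptions for illustration.

```python
import numpy as np

def boric_acid_concentration(t_end, dt, V, Q, C0, C_makeup):
    """Boric acid concentration in a well-mixed primary circuit with equal make-up
    and letdown flow Q: V * dC/dt = Q * (C_makeup - C). Explicit Euler integration."""
    n = int(t_end / dt)
    C = np.empty(n + 1)
    C[0] = C0
    for i in range(n):
        C[i + 1] = C[i] + dt * Q * (C_makeup - C[i]) / V
    return C

# Illustrative dilution transient: injecting clean make-up water (C_makeup = 0)
conc = boric_acid_concentration(t_end=3600.0, dt=1.0, V=300.0, Q=0.02,
                                C0=12.0, C_makeup=0.0)   # units: s, m^3, m^3/s, g/kg
print(f"boric acid after 1 h: {conc[-1]:.2f} g/kg")
```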

Keywords: best-estimate code, complex simulation suite, engineering simulator, power plant, thermal hydraulic, VEB, virtual power unit

Procedia PDF Downloads 381