Search results for: computational aeroacoustics
1588 Computational Fluid Dynamics Analysis of Sit-Ski Aerodynamics in Crosswind Conditions
Authors: Lev Chernyshev, Ekaterina Lieshout, Natalia Kabaliuk
Abstract:
Sit-skis enable individuals with limited lower-limb or core movement to ski unassisted with confidence. The rise in popularity of the Winter Paralympics has seen an influx of engineering innovation, especially for the Downhill and Super-Giant Slalom events, where athletes reach speeds as high as 160 km/h. The growth in the sport has inspired recent research into sit-ski aerodynamics. Crosswinds are expected in mountain climates and can therefore greatly affect a skier's manoeuvrability and aerodynamics. This research investigates the impact of crosswinds on the drag force of a Paralympic sit-ski using Computational Fluid Dynamics (CFD). A Paralympic sit-ski with a model of a skier, a leg cover, a bucket seat, and a simplified suspension system was used for CFD analysis in ANSYS Fluent. The hybrid initialisation tool and the SST k–ω turbulence model were used with two tetrahedral mesh bodies of influence. Crosswinds of 10, 30, and 50 km/h acting perpendicular to the sit-ski's direction of travel were simulated, corresponding to straight-line skiing speeds of 60, 80, and 100 km/h. Following initialisation, 150 iterations of both first- and second-order steady-state solvers were run before switching to a transient solver with a computational time of 1.5 s and a time step of 0.02 s, to allow the solution to converge. CFD results were validated against wind tunnel data. The results suggested that, for all crosswind and sit-ski speeds, on average 64% of the total drag on the ski was due to the athlete's torso. The suspension made the second largest contribution to the overall sit-ski drag force, averaging 27%, followed by the leg cover at 10%, while the seat contributed a negligible 0.5% of the total drag force, averaging 1.2 N across the conditions studied. The crosswind increased the total drag force at all skiing speeds studied, with the drag on the athlete's torso and suspension being the most sensitive to changes in crosswind magnitude. The effect of the crosswind on the ski drag reduced as the simulated skiing speed increased: for skiing at 60 km/h, the drag force on the torso increased by 154% as the crosswind increased from 10 km/h to 50 km/h, whereas at 100 km/h the corresponding drag force increase was roughly halved (75%). Analysis of the flow and pressure field characteristics for a sit-ski in crosswind conditions indicated that the location of flow separation and the wake size correlated with the magnitude and direction of the crosswind relative to straight-line skiing. The findings can inform aerodynamic improvements in sit-ski design and increase skiers' medalling chances.
Keywords: sit-ski, aerodynamics, CFD, crosswind effects
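To illustrate the crosswind conditions described above, the following is a minimal sketch (not part of the original study) of how the relative wind seen by the sit-ski can be composed from the skiing speed and a perpendicular crosswind; the speeds are those quoted in the abstract.

```python
import math

def apparent_wind(ski_speed_kmh, crosswind_kmh):
    """Resultant flow speed and yaw angle seen by a sit-ski for a
    crosswind acting perpendicular to the direction of travel."""
    u = ski_speed_kmh / 3.6               # headwind component, m/s
    v = crosswind_kmh / 3.6               # crosswind component, m/s
    speed = math.hypot(u, v)              # magnitude of the relative wind
    yaw = math.degrees(math.atan2(v, u))  # yaw angle from the travel direction
    return speed, yaw

# The speed/crosswind combinations simulated in the abstract
for ski in (60, 80, 100):
    for wind in (10, 30, 50):
        s, a = apparent_wind(ski, wind)
        print(f"ski {ski} km/h, crosswind {wind} km/h -> {s:.1f} m/s at {a:.1f} deg yaw")
```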
Procedia PDF Downloads 66
1587 Computational Analysis of Cavity Effect over Aircraft Wing
Authors: P. Booma Devi, Dilip A. Shah
Abstract:
This paper explores the potential of studying the aerodynamic characteristics of inward cavities, called dimples, as an alternative to classical vortex generators. Increasing the stalling angle is a major challenge in wing design, but our examination is primarily focused on increasing lift. In this paper, lift enhancement is achieved mainly by introducing a dimple, or cavity, into a wing. In general, aircraft performance can be enhanced by increasing the aerodynamic efficiency, that is, the lift-to-drag ratio of the wing. Efficiency improvement can be achieved by improving the maximum lift coefficient or by reducing the drag coefficient. During landing, a high angle of attack may lead to stalling of the aircraft. To avoid this situation, an increase in the stalling angle is warranted; hence, an improved stalling characteristic is the best way to ease landing complexity. Computational analysis is carried out for a wing segment based on the NACA 0012 airfoil. Simulations consider a 30 m/s free-stream velocity over the plain airfoil and over different types of cavities. The wing is modelled in CATIA V5R20 and the analyses are carried out using ANSYS CFX. Triangular and square shapes are used as cavities in the analysis. The simulations revealed that a cavity placed on the wing segment increases the maximum lift coefficient compared to the normal wing configuration, and that flow separation downstream of the wing is delayed by the presence of the cavities up to a particular angle of attack.
Keywords: lift, drag reduction, square dimple, triangle dimple, enhancement of stall angle
Procedia PDF Downloads 347
1586 CFD Simulation of Spacer Effect on Turbulent Mixing Phenomena in Sub Channels of Boiling Nuclear Assemblies
Authors: Shashi Kant Verma, S. L. Sinha, D. K. Chandraker
Abstract:
Numerical simulations of selected subchannel tracer (Potassium Nitrate) based experiments have been performed to study the capabilities of state-of-the-art of Computational Fluid Dynamics (CFD) codes. The Computational Fluid Dynamics (CFD) methodology can be useful for investigating the spacer effect on turbulent mixing to predict turbulent flow behavior such as Dimensionless mixing scalar distributions, radial velocity and vortices in the nuclear fuel assembly. A Gibson and Launder (GL) Reynolds stress model (RSM) has been selected as the primary turbulence model to be applied for the simulation case as it has been previously found reasonably accurate to predict flows inside rod bundles. As a comparison, the case is also simulated using a standard k-ε turbulence model that is widely used in industry. Despite being an isotropic turbulence model, it has also been used in the modeling of flow in rod bundles and to produce lateral velocities after thorough mixing of coolant fairly. Both these models have been solved numerically to find out fully developed isothermal turbulent flow in a 30º segment of a 54-rod bundle. Numerical simulation has been carried out for the study of natural mixing of a Tracer (Passive scalar) to characterize the growth of turbulent diffusion in an injected sub-channel and, afterwards on, cross-mixing between adjacent sub-channels. The mixing with water has been numerically studied by means of steady state CFD simulations with the commercial code STAR-CCM+. Flow enters into the computational domain through the mass inflow at the three subchannel faces. Turbulence intensity and hydraulic diameter of 1% and 5.9 mm respectively were used for the inlet. A passive scalar (Potassium nitrate) is injected through the mass fraction of 5.536 PPM at subchannel 2 (Upstream of the mixing section). Flow exited the domain through the pressure outlet boundary (0 Pa), and the reference pressure was 1 atm. Simulation results have been extracted at different locations of the mixing zone and downstream zone. The local mass fraction shows uniform mixing. The effect of the applied turbulence model is nearly negligible just before the outlet plane because the distributions look like almost identical and the flow is fully developed. On the other hand, quantitatively the dimensionless mixing scalar distributions change noticeably, which is visible in the different scale of the colour bars.Keywords: single-phase flow, turbulent mixing, tracer, sub channel analysis
Procedia PDF Downloads 207
1585 An Automated Approach to the Nozzle Configuration of Polycrystalline Diamond Compact Drill Bits for Effective Cuttings Removal
Authors: R. Suresh, Pavan Kumar Nimmagadda, Ming Zo Tan, Shane Hart, Sharp Ugwuocha
Abstract:
Polycrystalline diamond compact (PDC) drill bits are extensively used in the oil and gas industry as well as in the mining industry. Industry engineers continually improve upon PDC drill bit designs and hydraulic conditions, and optimised injection nozzles play a key role in improving the drilling performance and efficiency of these ever-changing PDC drill bits. In the first part of this study, computational fluid dynamics (CFD) modelling is performed to investigate the hydrodynamic characteristics of drilling fluid flow around the PDC drill bit. The open-source CFD software OpenFOAM simulates the flow around the drill bit, based on field input data. A specifically developed console application integrates the entire CFD process, including domain extraction, meshing, solving the governing equations, and post-processing. The results from the OpenFOAM solver are then compared with those of the ANSYS Fluent software, and the data from the two programs agree. The second part of the paper describes a parametric study of the PDC drill bit nozzle to determine the effect of parameters such as the number of nozzles, nozzle velocity, nozzle radial position, and orientation on the flow field characteristics and bit washing patterns. After analysing a series of nozzle configurations, the best configuration is identified and recommendations are made for modifying the PDC bit design.
Keywords: ANSYS Fluent, computational fluid dynamics, nozzle configuration, OpenFOAM, PDC drill bit
Procedia PDF Downloads 420
1584 ISMARA: Completely Automated Inference of Gene Regulatory Networks from High-Throughput Data
Authors: Piotr J. Balwierz, Mikhail Pachkov, Phil Arnold, Andreas J. Gruber, Mihaela Zavolan, Erik van Nimwegen
Abstract:
Understanding the key players and interactions in the regulatory networks that control gene expression and chromatin state across different cell types and tissues in metazoans remains one of the central challenges in systems biology. Our laboratory has pioneered a number of methods for automatically inferring core gene regulatory networks directly from high-throughput data by modeling gene expression (RNA-seq) and chromatin state (ChIP-seq) measurements in terms of genome-wide computational predictions of regulatory sites for hundreds of transcription factors and micro-RNAs. These methods have now been completely automated in an integrated webserver called ISMARA that allows researchers to analyze their own data by simply uploading RNA-seq or ChIP-seq data sets and provides results in an integrated web interface as well as in downloadable flat form. For any data set, ISMARA infers the key regulators in the system, their activities across the input samples, the genes and pathways they target, and the core interactions between the regulators. We believe that by empowering experimental researchers to apply cutting-edge computational systems biology tools to their data in a completely automated manner, ISMARA can play an important role in developing our understanding of regulatory networks across metazoans.Keywords: gene expression analysis, high-throughput sequencing analysis, transcription factor activity, transcription regulation
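As a rough illustration of the kind of model behind such tools, the sketch below fits motif activities to expression data with ridge regression, assuming expression is approximately linear in predicted binding-site counts; the matrix sizes and penalty are hypothetical, and this is not the ISMARA implementation.

```python
import numpy as np

# Minimal sketch: expression of P promoters in S samples is modelled as
# E (P x S) ~ N (P x M) @ A (M x S), where N holds predicted binding-site
# counts per motif and A holds the unknown motif activities (ridge-regularised).
rng = np.random.default_rng(0)
P, M, S = 2000, 50, 6                 # hypothetical sizes
N = rng.poisson(1.0, size=(P, M)).astype(float)
E = rng.normal(size=(P, S))           # placeholder expression matrix (e.g. log RNA-seq)

lam = 10.0                            # ridge penalty (assumed value)
A = np.linalg.solve(N.T @ N + lam * np.eye(M), N.T @ E)   # motif activities per sample
print(A.shape)                        # (M, S): one activity profile per motif
```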
Procedia PDF Downloads 65
1583 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERD/ERS), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (i.e., 8–30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterise the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on representing EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and to organise them into a single vector, which is used as the training vector of a global SVM classifier. Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (it has a 68% smaller dimension than the original signal), the resulting FFT matrix retains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in the computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall system classification rate compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the computational cost reduction indicate the potential of the FFT in EEG signal filtering applied to MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
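A minimal sketch of the two building blocks described above, an FFT-based sub-band decomposition and a classic CSP filter computation, is given below; the band edges and normalisation are assumed, the LDA/Bayesian/SVM stages are omitted, and this is not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh

def subband_fft(trial, fs, f_lo, f_hi):
    """Keep only the FFT coefficients of one sub-band (real/imag stacked)."""
    X = np.fft.rfft(trial, axis=1)                     # channels x frequency bins
    f = np.fft.rfftfreq(trial.shape[1], d=1.0 / fs)
    band = X[:, (f >= f_lo) & (f < f_hi)]
    return np.hstack([band.real, band.imag])           # compact real-valued matrix

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Classic CSP: generalised eigenvectors of the two class covariances."""
    cov = lambda trials: np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)                     # eigenvalues sorted ascending
    idx = np.r_[:n_pairs, -n_pairs:0]                  # most discriminative directions
    return vecs[:, idx].T                              # spatial filters (one per row)
```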
Procedia PDF Downloads 128
1582 Two-Phase Flow Study of Airborne Transmission Control in Dental Practices
Authors: Mojtaba Zabihi, Stephen Munro, Jonathan Little, Ri Li, Joshua Brinkerhoff, Sina Kheirkhah
Abstract:
Occupational Safety and Health Administration (OSHA) identified dental workers at the highest risk of contracting COVID-19. This is because aerosol-generating procedures (AGP) during dental practices generate aerosols ( < 5µm) and droplets. These particles travel at varying speeds, in varying directions, and for varying durations. If these particles bear infectious viruses, their spreading causes airborne transmission of the virus in the dental room, exposing dentists, hygienists, dental assistants, and even other dental clinic clients to the infection risk. Computational fluid dynamics (CFD) simulation of two-phase flows based on a discrete phase model (DPM) is carried out to study the spreading of aerosol and droplets in a dental room. The simulation includes momentum, heat, and mass transfers between the particles and the airflow. Two simulations are conducted and compared. One simulation focuses on the effects of room ventilation in winter and summer on the particles' travel. The other simulation focuses on the control of aerosol and droplets' spreading. A suction collector is added near the source of aerosol and droplets, creating a flow sink in order to remove the particles. The effects of the suction flow on the aerosol and droplet travel are studied. The suction flow can remove aerosols and also reduce the spreading of droplets.Keywords: aerosols, computational fluid dynamics, COVID-19, dental, discrete phase model, droplets, two-phase flow
Procedia PDF Downloads 265
1581 A Hybrid Classical-Quantum Algorithm for Boundary Integral Equations of Scattering Theory
Authors: Damir Latypov
Abstract:
A hybrid classical-quantum algorithm to solve boundary integral equations (BIE) arising in problems of electromagnetic and acoustic scattering is proposed. The quantum speed-up is due to a Quantum Linear System Algorithm (QLSA). The original QLSA of Harrow et al. provides an exponential speed-up over the best-known classical algorithms, but only in the case of sparse systems. Due to the non-local nature of integral operators, however, the matrices arising from the discretization of BIEs are dense. A QLSA for dense matrices was introduced in 2017; its runtime as a function of the system size N is bounded by O(√N polylog(N)). The runtime of the best-known classical algorithm for an arbitrary dense matrix scales as O(N^2.373). Instead of the exponential speed-up obtained for sparse matrices, here we have only a polynomial speed-up. Nevertheless, the sufficiently high power of this polynomial, roughly 4.7, should make the QLSA an appealing alternative. Unfortunately for the QLSA, the asymptotic separability of the Green's function leads to high compressibility of the BIE matrices. Classical fast algorithms such as the Multilevel Fast Multipole Method (MLFMM) take advantage of this fact and reduce the runtime to O(N log(N)), i.e., the QLSA is only quadratically faster than the MLFMM. To be truly impactful for computational electromagnetics and acoustics engineers, the QLSA must provide a more substantial advantage than that. We propose a computational scheme which combines elements of the classical fast algorithms with the QLSA to achieve the required performance.
Keywords: quantum linear system algorithm, boundary integral equations, dense matrices, electromagnetic scattering theory
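The scalings quoted above can be compared numerically with a small sketch; the polylog power used for the QLSA estimate is an assumption, and all constants are ignored.

```python
import math

# Rough asymptotic operation counts quoted in the abstract (constants ignored):
#   dense-matrix QLSA          ~ sqrt(N) * polylog(N)   (log^3 N assumed here)
#   best classical dense solve ~ N**2.373
#   MLFMM                      ~ N * log(N)
for N in (10**4, 10**6, 10**8):
    qlsa   = math.sqrt(N) * math.log2(N) ** 3     # polylog power is an assumption
    direct = N ** 2.373
    mlfmm  = N * math.log2(N)
    print(f"N={N:>9}:  QLSA~{qlsa:.2e}  direct~{direct:.2e}  MLFMM~{mlfmm:.2e}")
```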
Procedia PDF Downloads 154
1580 Two-Dimensional CFD Simulation of the Behaviors of Ferromagnetic Nanoparticles in Channel
Authors: Farhad Aalizadeh, Ali Moosavi
Abstract:
This paper presents a two-dimensional Computational Fluid Dynamics (CFD) simulation of steady flow with particle tracking. The purpose is to study the effect of an applied magnetic field on the velocity distribution of magnetic nanoparticles. It is shown that the permeability of the particles determines the effect of the magnetic field on their deposition, and that the deposition of the particles is inversely proportional to the Reynolds number. Using MHD and its properties, it is possible to control the flow velocity, remove fouling from the walls, and return the system to its original form. We consider a two-dimensional channel geometry and solve for the resulting spatial distribution of particles. According to the results obtained, when magnetic fields are applied perpendicular to the flow, the local particle velocity decreases due to the direct effect of the magnetic field, returning the system to its original form. In the method, first, in order to avoid mixing with blood, the ferromagnetic particles are covered with a gel-like chemical composition and injected into the blood vessels. Then, a magnetic field source at a specified distance from the vessel is used and the particles are guided to the affected area. The paper also presents a two-dimensional CFD simulation of the steady, laminar flow of an incompressible magnetorheological (MR) fluid between two fixed parallel plates in the presence of a uniform magnetic field. The purpose of this part of the study is to develop a numerical tool able to simulate MR fluid flow in valve mode and to determine the effect of the applied magnetic field B0 on the flow velocities and pressure distributions.
Keywords: MHD, channel clots, magnetic nanoparticles, simulations
Procedia PDF Downloads 368
1579 Improving Cheon-Kim-Kim-Song (CKKS) Performance with Vector Computation and GPU Acceleration
Authors: Smaran Manchala
Abstract:
Homomorphic Encryption (HE) enables computations on encrypted data without requiring decryption, mitigating data vulnerability during processing. Usable Fully Homomorphic Encryption (FHE) could revolutionize secure data operations across cloud computing, AI training, and healthcare, providing both privacy and functionality; however, the computational inefficiency of schemes like Cheon-Kim-Kim-Song (CKKS) hinders their widespread practical use. This study focuses on optimizing CKKS for faster matrix operations through the implementation of vector computation parallelization and GPU acceleration. The variable effects of vector parallelization on GPUs were explored, recognizing that while parallelization typically accelerates operations, it can introduce overhead that results in slower runtimes, especially for smaller, less computationally demanding operations. To assess performance, two neural network models, an MLPN and a CNN, were tested on the MNIST dataset using both ARM and x86-64 architectures, with the CNN chosen for its higher computational demands. Each test was repeated 1,000 times, and outliers were removed via Z-score analysis to measure the effect of vector parallelization on CKKS performance. Model accuracy was also evaluated under CKKS encryption to ensure that the optimizations did not compromise results. According to the results of the trial runs, applying vector parallelization gave a 2.63x efficiency increase overall, with a 1.83x performance increase for the x86-64 architecture over ARM. Overall, these results suggest that vector parallelization in tandem with GPU acceleration significantly improves the efficiency of CKKS, even when accounting for vector parallelization overhead, with potential impact on future zero-trust operations.
Keywords: CKKS scheme, runtime efficiency, fully homomorphic encryption (FHE), GPU acceleration, vector parallelization
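A small sketch of the Z-score outlier-removal step applied to repeated benchmark timings, as described above; the timing samples and the threshold of 3 are hypothetical.

```python
import numpy as np

def filter_outliers(times_ms, z_max=3.0):
    """Drop benchmark repetitions whose Z-score exceeds z_max (the threshold
    value here is an assumption, not taken from the study)."""
    t = np.asarray(times_ms, dtype=float)
    z = (t - t.mean()) / t.std(ddof=0)
    return t[np.abs(z) <= z_max]

# Hypothetical timing samples for a baseline build and a vector-parallel build
baseline = np.array([105.0, 101.2, 99.8, 102.5, 240.0, 100.9])   # 240 ms is a straggler
parallel = np.array([40.1, 39.7, 41.0, 38.9, 40.5, 39.9])
speedup = filter_outliers(baseline).mean() / filter_outliers(parallel).mean()
print(f"mean speed-up after outlier removal: {speedup:.2f}x")
```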
Procedia PDF Downloads 23
1578 Liesegang Phenomena: Experimental and Simulation Studies
Authors: Vemula Amalakrishna, S. Pushpavanam
Abstract:
Change and motion characterize and persistently reshape the world around us, on scales from molecular to global. The subtle interplay between change (reaction) and motion (diffusion) gives rise to astonishingly intricate spatial and temporal patterns. Such pattern formation in nature has been intellectually appealing to many scientists since antiquity. Periodic precipitation patterns, also known as Liesegang patterns (LP), are one of the most stimulating examples of such self-assembling reaction-diffusion (RD) systems. LP formation has great potential in micro- and nanotechnology. So far, research on LPs has concentrated mostly on how these patterns form, retrieving information to build a universal mathematical model for them. Researchers have developed various theoretical models to comprehensively describe the geometrical diversity of LPs. To the best of our knowledge, simulation studies of LPs assume arbitrary values of the RD parameters to explain experimental observations qualitatively. In this work, existing models were studied to understand the mechanism behind this phenomenon, and the challenges pertaining to these models were identified and explained. The models are not computationally efficient due to the presence of a discontinuous precipitation rate in the RD equations. To overcome the computational challenges, smoothened Heaviside functions have been introduced, which also reduce the computational time. Experiments were performed using a conventional LP system (AgNO₃-K₂Cr₂O₇) to understand the effects of different gels and temperatures on the formed LPs. The model is extended to real parameter values to compare the simulated results with experimental data for both 1-D (Cartesian test tubes) and 2-D (cylindrical and Petri dish) geometries.
Keywords: reaction-diffusion, spatio-temporal patterns, nucleation and growth, supersaturation
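A minimal sketch of the smoothing idea mentioned above, replacing the discontinuous precipitation switch with a tanh-regularised Heaviside function; the width parameter and threshold are assumed values, not those used in the study.

```python
import numpy as np

def heaviside_sharp(x):
    return np.where(x > 0.0, 1.0, 0.0)

def heaviside_smooth(x, eps=1e-2):
    """tanh-regularised step: one common way to smooth the precipitation
    switch in reaction-diffusion source terms (eps is an assumed width)."""
    return 0.5 * (1.0 + np.tanh(x / eps))

# Precipitation switches on once the local supersaturation s = c - c_star > 0
c_star = 1.0
c = np.linspace(0.9, 1.1, 5)
print(heaviside_sharp(c - c_star))    # discontinuous rate switch
print(heaviside_smooth(c - c_star))   # smooth surrogate used by the solver
```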
Procedia PDF Downloads 152
1577 Computational Fluid Dynamics (CFD) Simulation Approach for Developing New Powder Dispensing Device
Authors: Revanth Rallapalli
Abstract:
Manually dispensing solids and powders can be difficult, as it requires gradually pouring and checking the amount on a scale. Current systems are manual and non-continuous in nature, are user-dependent, and make it difficult to control powder dispensation. Recurrent dosing of powdered medicines in precise amounts, quickly and accurately, has been a long-standing challenge. Various new powder dispensing mechanisms are being designed to overcome these challenges, and a battery-operated screw conveyor mechanism is being developed to overcome the problems described above. These inventions are evaluated numerically at the concept development level by employing Computational Fluid Dynamics (CFD) simulation of gas-solids multiphase flow systems. CFD has been very helpful in the development of such devices, saving time and money by reducing the number of prototypes and tests. Furthermore, this paper describes a simulation of powder dispensation from the trocar's end, in which the powder is treated as a secondary phase in air and simulated using the Dense Discrete Phase Model incorporated with the Kinetic Theory of Granular Flow (DDPM-KTGF). With the powder volume fraction taken as 50%, the transport of powder from the inlet side to the trocar's end is driven by the rotation of the screw conveyor. The performance is then calculated over a 1 s time frame in an unsteady computation. This methodology will help designers develop design concepts that improve dispensation over the effective area, within a quick turnaround time frame.
Keywords: DDPM-KTGF, gas-solids multiphase flow, screw conveyor, unsteady
Procedia PDF Downloads 180
1576 Three Dimensional Large Eddy Simulation of Blood Flow and Deformation in an Elastic Constricted Artery
Authors: Xi Gu, Guan Heng Yeoh, Victoria Timchenko
Abstract:
In the current work, a three-dimensional geometry of a 75% stenosed blood vessel is analysed. Large eddy simulation (LES) with the help of a dynamic subgrid scale Smagorinsky model is applied to model the turbulent pulsatile flow. The geometry, the transmural pressure and the properties of the blood and the elastic boundary were based on clinical measurement data. For the flexible wall model, a thin solid region is constructed around the 75% stenosed blood vessel. The deformation of this solid region was modelled as a deforming boundary to reduce the computational cost of the solid model. Fluid-structure interaction is realised via a two-way coupling between the blood flow modelled via LES and the deforming vessel. The information of the flow pressure and the wall motion was exchanged continually during the cycle by an arbitrary lagrangian-eulerian method. The boundary condition of current time step depended on previous solutions. The fluctuation of the velocity in the post-stenotic region was analysed in the study. The axial velocity at normalised position Z=0.5 shows a negative value near the vessel wall. The displacement of the elastic boundary was concerned in this study. In particular, the wall displacement at the systole and the diastole were compared. The negative displacement at the stenosis indicates a collapse at the maximum velocity and the deceleration phase.Keywords: Large Eddy Simulation, Fluid Structural Interaction, constricted artery, Computational Fluid Dynamics
Procedia PDF Downloads 293
1575 A Prediction Model for Dynamic Responses of Building from Earthquake Based on Evolutionary Learning
Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park
Abstract:
Structural health monitoring systems based on seismic responses have been used to prevent seismic damage. Structural seismic damage to a building is caused by instantaneous stress concentrations, which are related to the dynamic characteristics of the earthquake. Meanwhile, seismic response analysis to estimate the dynamic responses of a building demands a significantly high computational cost. To prevent the failure of structural members due to the characteristics of the earthquake, and to avoid the significantly high computational cost of seismic response analysis, this paper presents an artificial neural network (ANN) based prediction model for the dynamic responses of a building over a specific time length. From the measured dynamic responses, the input and output nodes of the ANN are formed according to the specified time length and adopted for training. In the model, an evolutionary radial basis function neural network (ERBFNN), in which a radial basis function network (RBFN) is integrated with an evolutionary optimization algorithm to find the RBF variables, is implemented. The effectiveness of the proposed model is verified through an analytical study in which responses from dynamic analysis of a multi-degree-of-freedom system are used as training data for the ERBFNN.
Keywords: structural health monitoring, dynamic response, artificial neural network, radial basis function network, genetic algorithm
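A minimal sketch of the RBFN part of such a model: with the centres and width fixed (in the ERBFNN they would be selected by the evolutionary algorithm), only the output weights are fitted by least squares; the data here are synthetic placeholders.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF features for samples X (n x d) given centres (m x d)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Centres/width fixed arbitrarily here; in ERBFNN they are tuned by the GA.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                     # past response window (features)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)  # next-step response (target)
centers, width = X[rng.choice(200, 10, replace=False)], 1.0
Phi = rbf_design(X, centers, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # output-layer weights
print(np.abs(Phi @ w - y).mean())                 # training error of the sketch
```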
Procedia PDF Downloads 304
1574 Design and Testing of Electrical Capacitance Tomography Sensors for Oil Pipeline Monitoring
Authors: Sidi M. A. Ghaly, Mohammad O. Khan, Mohammed Shalaby, Khaled A. Al-Snaie
Abstract:
Electrical capacitance tomography (ECT) is a valuable, non-invasive technique used to monitor multiphase flow processes, especially within industrial pipelines. This study focuses on the design, testing, and performance comparison of ECT sensors configured with 8, 12, and 16 electrodes, aiming to evaluate their effectiveness in imaging accuracy, resolution, and sensitivity. Each sensor configuration was designed to capture the spatial permittivity distribution within a pipeline cross-section, enabling visualization of phase distribution and flow characteristics such as oil and water interactions. The sensor designs were implemented and tested in closed pipes to assess their response to varying flow regimes. Capacitance data collected from each electrode configuration were reconstructed into cross-sectional images, enabling a comparison of image resolution, noise levels, and computational demands. Results indicate that the 16-electrode configuration yields higher image resolution and sensitivity to phase boundaries compared to the 8- and 12-electrode setups, making it more suitable for complex flow visualization. However, the 8 and 12-electrode sensors demonstrated advantages in processing speed and lower computational requirements. This comparative analysis provides critical insights into optimizing ECT sensor design based on specific industrial requirements, from high-resolution imaging to real-time monitoring needs.Keywords: capacitance tomography, modeling, simulation, electrode, permittivity, fluid dynamics, imaging sensitivity measurement
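One reason the electrode count matters is the number of independent capacitance measurements available per frame, which a short sketch can make explicit (a standard ECT relation, not a result specific to this study).

```python
# For an N-electrode ECT sensor, each frame measures the capacitance of every
# distinct electrode pair, i.e. N*(N-1)/2 independent measurements.
for n in (8, 12, 16):
    pairs = n * (n - 1) // 2
    print(f"{n:>2} electrodes -> {pairs:>3} independent capacitance measurements")
# 8 -> 28, 12 -> 66, 16 -> 120: more measurements support higher image
# resolution at the cost of acquisition and reconstruction effort.
```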
Procedia PDF Downloads 10
1573 Computational Fluid Dynamics Modeling of Flow Properties Fluctuations in Slug-Churn Flow through Pipe Elbow
Authors: Nkemjika Chinenye-Kanu, Mamdud Hossain, Ghazi Droubi
Abstract:
Prediction of multiphase flow induced forces, void fraction and pressure is crucial at both design and operating stages of practical energy and process pipe systems. In this study, transient numerical simulations of upward slug-churn flow through a vertical 90-degree elbow have been conducted. The volume of fluid (VOF) method was used to model the two-phase flows while the K-epsilon Reynolds-Averaged Navier-Stokes (RANS) equations were used to model turbulence in the flows. The simulation results were validated using experimental results. Void fraction signal, peak frequency and maximum magnitude of void fraction fluctuation of the slug-churn flow validation case studies compared well with experimental results. The x and y direction force fluctuation signals at the elbow control volume were obtained by carrying out force balance calculations using the directly extracted time domain signals of flow properties through the control volume in the numerical simulation. The computed force signal compared well with experiment for the slug and churn flow validation case studies. Hence, the present numerical simulation technique was able to predict the behaviours of the one-way flow induced forces and void fraction fluctuations.Keywords: computational fluid dynamics, flow induced vibration, slug-churn flow, void fraction and force fluctuation
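A minimal sketch of the kind of control-volume momentum balance used to turn extracted flow quantities into an elbow force signal, assuming flow entering along x and leaving along y with gauge pressures, and neglecting gravity and unsteady terms; all numerical values are hypothetical.

```python
def elbow_force(m_dot, v_in, v_out, p_in, p_out, a_in, a_out):
    """Steady control-volume momentum balance for a 90-degree elbow:
    flow enters along +x and leaves along +y (gauge pressures, gravity and
    transient terms neglected). Returns the force of the fluid on the elbow, N."""
    fx = m_dot * v_in + p_in * a_in          # x-momentum given up by the fluid
    fy = -(m_dot * v_out + p_out * a_out)    # reaction to turning the flow to +y
    return fx, fy

# Hypothetical instantaneous values: in the CFD the force signal is obtained by
# evaluating such a balance from the time-resolved void fraction, velocity and pressure.
print(elbow_force(m_dot=0.8, v_in=2.5, v_out=2.7, p_in=1.2e4, p_out=1.1e4,
                  a_in=4.9e-4, a_out=4.9e-4))
```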
Procedia PDF Downloads 156
1572 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data are dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large data sets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their tasks before the master server can recover the product W. We also study the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
Procedia PDF Downloads 122
1571 Optimizing Emergency Rescue Center Layouts: A Backpropagation Neural Networks-Genetic Algorithms Method
Authors: Xiyang Li, Qi Yu, Lun Zhang
Abstract:
In the face of natural disasters and other emergency situations, determining the optimal location of rescue centers is crucial for improving rescue efficiency and minimizing impact on affected populations. This paper proposes a method that integrates genetic algorithms (GA) and backpropagation neural networks (BPNN) to address the site selection optimization problem for emergency rescue centers. We utilize BPNN to accurately estimate the cost of delivering supplies from rescue centers to each temporary camp. Moreover, a genetic algorithm with a special partially matched crossover (PMX) strategy is employed to ensure that the number of temporary camps assigned to each rescue center adheres to predetermined limits. Using the population distribution data during the 2022 epidemic in Jiading District, Shanghai, as an experimental case, this paper verifies the effectiveness of the proposed method. The experimental results demonstrate that the BPNN-GA method proposed in this study outperforms existing algorithms in terms of computational efficiency and optimization performance. Especially considering the requirements for computational resources and response time in emergency situations, the proposed method shows its ability to achieve rapid convergence and optimal performance in the early and mid-stages. Future research could explore incorporating more real-world conditions and variables into the model to further improve its accuracy and applicability.Keywords: emergency rescue centers, genetic algorithms, back-propagation neural networks, site selection optimization
Procedia PDF Downloads 85
1570 Investigating Kinetics and Mathematical Modeling of Batch Clarification Process for Non-Centrifugal Sugar Production
Authors: Divya Vats, Sanjay Mahajani
Abstract:
The clarification of sugarcane juice plays a pivotal role in the production of non-centrifugal sugar (NCS), profoundly influencing the quality of the final NCS product. In this study, we have investigated the kinetics and mathematical modeling of the batch clarification process. The turbidity of the clarified cane juice (in NTU) determines the colour of the end product; moreover, this parameter underscores the significance of considering other variables as performance indicators for assessing the efficacy of the clarification process. Temperature-controlled experiments were meticulously conducted in a laboratory-scale batch mode. The primary objective was to identify the essential and optimized parameters crucial for augmenting the clarity of cane juice. Additionally, we explored the impact of pH and flocculant loading on the kinetics. Particle Image Velocimetry (PIV) is employed to understand the particle-particle and fluid-particle interactions. This technique facilitated a comprehensive understanding, paving the way for subsequent multiphase computational fluid dynamics (CFD) simulations using the Eulerian-Lagrangian approach in ANSYS Fluent. Impressively, these simulations accurately reproduced comparable velocity profiles. The mechanism identified in this study supports a mathematical model and presents a valuable framework for transitioning from the traditional batch process to a continuous process, with the ultimate aim of attaining heightened productivity and unwavering consistency in product quality.
Keywords: non-centrifugal sugar, particle image velocimetry, computational fluid dynamics, mathematical modeling, turbidity
Procedia PDF Downloads 71
1569 A Transient Coupled Numerical Analysis of the Flow of Magnetorheological Fluids in Closed Domains
Authors: Wael Elsaady, S. Olutunde Oyadiji, Adel Nasser
Abstract:
The non-linear flow characteristics of magnetorheological (MR) fluids in MR dampers are studied via a coupled numerical approach that incorporates a two-phase flow model. The approach couples the Finite Element (FE) modelling of the damper magnetic circuit, with the Computational Fluid Dynamics (CFD) analysis of the flow field in the damper. The two-phase flow CFD model accounts for the effect of fluid compressibility due to the presence of liquid and gas in the closed domain of the damper. The dynamic mesh model included in ANSYS/Fluent CFD solver is used to simulate the movement of the MR damper piston in order to perform the fluid excitation. The two-phase flow analysis is studied by both Volume-Of-Fluid (VOF) model and mixture model that are included in ANSYS/Fluent. The CFD models show that the hysteretic behaviour of MR dampers is due to the effect of fluid compressibility. The flow field shows the distributions of pressure, velocity, and viscosity contours. In particular, it shows the high non-Newtonian viscosity in the affected fluid regions by the magnetic field and the low Newtonian viscosity elsewhere. Moreover, the dependence of gas volume fraction on the liquid pressure inside the damper is predicted by the mixture model. The presented approach targets a better understanding of the complicated flow characteristics of viscoplastic fluids that could be applied in different applications.Keywords: viscoplastic fluid, magnetic FE analysis, computational fluid dynamics, two-phase flow, dynamic mesh, user-defined functions
Procedia PDF Downloads 174
1568 Electrochemical Behavior of Cocaine on Carbon Paste Electrode Chemically Modified with Cu(II) Trans 3-MeO Salcn Complex
Authors: Alex Soares Castro, Matheus Manoel Teles de Menezes, Larissa Silva de Azevedo, Ana Carolina Caleffi Patelli, Osmair Vital de Oliveira, Aline Thais Bruni, Marcelo Firmino de Oliveira
Abstract:
Considering the problem of the seizure of illicit drugs, as well as the development of electrochemical sensors using chemically modified electrodes, this work presents a study of the electrochemical activity of cocaine on a carbon paste electrode chemically modified with the Cu(II) trans 3-MeO salcn complex. In this context, cyclic voltammetry was performed in a 0.1 mol L⁻¹ KCl supporting electrolyte at a scan rate of 100 mV s⁻¹, using an electrochemical cell composed of three electrodes: an Ag/AgCl electrode (filled with 3 mol L⁻¹ KCl) from Metrohm® as the reference electrode; a platinum spiral electrode as the auxiliary electrode; and a carbon paste electrode chemically modified with the Cu(II) trans 3-MeO complex as the working electrode. Two forms of cocaine were analyzed: cocaine hydrochloride (pH 3) and the cocaine free base (pH 8). The PM7 computational method predicted that the hydrochloride form is more stable than the free base form, and accordingly, with cyclic voltammetry, an electrochemical signal was found only for cocaine in the hydrochloride form, with an anodic peak at 1.10 V, a linear range between 2 and 20 μmol L⁻¹, and LD and LQ of 2.39 and 7.26x10⁻⁵ mol L⁻¹, respectively. The study also showed that cocaine is adsorbed on the surface of the working electrode, where, through an irreversible process in which only anodic peaks are observed, the oxidation of cocaine occurs in the hydrophilic region due to the loss of two electrons. The mechanism of this reaction was confirmed by the ab initio quantum method.
Keywords: ab initio computational method, analytical method, cocaine, Schiff base complex, voltammetry
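Assuming the usual 3.3σ/S and 10σ/S definitions of the detection and quantification limits, the sketch below shows how such figures are obtained from a calibration slope and blank noise; the slope and standard deviation used here are hypothetical, not the paper's data.

```python
def detection_limits(slope, sd_blank):
    """Common 3.3*sigma/S and 10*sigma/S estimates of the limits of detection
    and quantification from a calibration curve."""
    lod = 3.3 * sd_blank / slope
    loq = 10.0 * sd_blank / slope
    return lod, loq

# Hypothetical calibration: peak current (A) versus concentration (mol/L)
slope = 0.052          # A per mol/L, assumed sensitivity
sd_blank = 3.8e-7      # standard deviation of the blank response, assumed
lod, loq = detection_limits(slope, sd_blank)
print(f"LOD = {lod:.2e} mol/L, LOQ = {loq:.2e} mol/L")
```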
Procedia PDF Downloads 194
1567 A Single Stage Rocket Using Solid Fuels in Conventional Propulsion Systems
Authors: John R Evans, Sook-Ying Ho, Rey Chin
Abstract:
This paper describes research investigations oriented towards the starting and propelling of a solid fuel rocket engine which operates as a combined-cycle propulsion system using three thrust pulses. The vehicle has been designed to minimise the cost of launching a small number of nano/cube satellites into low Earth orbit (LEO). One technology described in this paper is a ground-based launch propulsion system which starts the rocket's vertical motion immediately, causing air flow to enter the ramjet's intake. With current technology, ramjet operation is predicted to start at a high subsonic speed of 280 m/s using a liquid fuel ramjet (LFRJ). The combined-cycle engine configuration is in many ways fundamentally different from the LFRJ. A much lower subsonic start speed is highly desirable, since using a mortar to reach the latter speed for the rocket means a shorter launcher length can be utilized. This paper examines the means of achieving this and presents performance calculations, including computational fluid dynamics analysis of the air intake at suitable operational conditions and 3-DOF point-mass trajectory analysis of the multi-pulse propulsion system (where pulse ignition time and thrust magnitude can be controlled), for the use of a combined-cycle rocket engine in a single-stage vehicle.
Keywords: combined cycle propulsion system, low earth orbit launch vehicle, computational fluid dynamics analysis, 3-DOF trajectory analysis
Procedia PDF Downloads 191
1566 Evaluation of Initial Graft Tension during ACL Reconstruction Using a Three-Dimensional Computational Finite Element Simulation: Effect of the Combination of a Band of Gracilis with the Former Graft
Authors: S. Alireza Mirghasemi, Javad Parvizi, Narges R. Gabaran, Shervin Rashidinia, Mahdi M. Bijanabadi, Dariush G. Savadkoohi
Abstract:
Background: The anterior cruciate ligament is one of the most frequent ligament to be disrupted. Surgical reconstruction of the anterior cruciate ligament is a common practice to treat the disability or chronic instability of the knee. Several factors associated with success or failure of the ACL reconstruction including preoperative laxity of the knee, selection of the graft material, surgical technique, graft tension, and postoperative rehabilitation. We aimed to examine the biomechanical properties of any graft type and initial graft tensioning during ACL reconstruction using 3-dimensional computational finite element simulation. Methods: In this paper, 3-dimensional model of the knee was constructed to investigate the effect of graft tensioning on the knee joint biomechanics. Four different grafts were compared: 1) Bone-patellar tendon-bone graft (BPTB) 2) Hamstring tendon 3) BPTB and a band of gracilis4) Hamstring and a band of gracilis. The initial graft tension was set as “0, 20, 40, or 60N”. The anterior loading was set to 134 N. Findings: The resulting stress pattern and deflection in any of these models were compared to that of the intact knee. The obtained results showed that the combination of a band of gracilis with the former graft (BPTB or Hamstring) increases the structural stiffness of the knee. Conclusion: Required pretension during surgery decreases significantly by adding a band of gracilis to the proper graft.Keywords: ACL reconstruction, deflection, finite element simulation, stress pattern
Procedia PDF Downloads 300
1565 Influences of Separation of the Boundary Layer in the Reservoir Pressure in the Shock Tube
Authors: Bruno Coelho Lima, Joao F.A. Martos, Paulo G. P. Toro, Israel S. Rego
Abstract:
The shock tube is a ground facility widely used in aerospace and aeronautics science and technology for studies of gas-dynamic and physico-chemical processes in gases at high temperature, of explosions, and for the dynamic calibration of pressure sensors. A shock tube in its simplest form comprises two tubes of equal cross-section separated by a diaphragm. The function of the diaphragm is to separate the two reservoirs, which are at different pressures. The reservoir containing the high pressure is called the driver; the low-pressure reservoir is called the driven section. When the diaphragm is broken by the pressure difference, a non-stationary normal shock wave (the incident shock wave) forms at the diaphragm location and travels toward the closed end of the driven section. When this shock wave reaches the closed end of the driven section, it is completely reflected. The reflected shock wave then interacts with the boundary layer created by the flow induced by the passage of the incident shock wave, and this interaction forces the boundary layer to separate. The aim of this paper is to analyse the influence of boundary layer separation on the reservoir pressure in the shock tube. A comparison among CFD (Computational Fluid Dynamics), experimental tests, and analytical analysis was performed. For the analytical analysis, routines were written in Python; for the numerical simulations (CFD), ANSYS Fluent was used; and the experimental tests were carried out in the T1 shock tube located at IEAv (Institute of Advanced Studies).
Keywords: boundary layer separation, moving shock wave, shock tube, transient simulation
Procedia PDF Downloads 315
1564 High Aspect Ratio Micropillar Array Based Microfluidic Viscometer
Authors: Ahmet Erten, Adil Mustafa, Ayşenur Eser, Özlem Yalçın
Abstract:
We present a new viscometer based on a microfluidic chip with elastic high aspect ratio micropillar arrays. The displacement of pillar tips in flow direction can be used to analyze viscosity of liquid. In our work, Computational Fluid Dynamics (CFD) is used to analyze pillar displacement of various micropillar array configurations in flow direction at different viscosities. Following CFD optimization, micro-CNC based rapid prototyping is used to fabricate molds for microfluidic chips. Microfluidic chips are fabricated out of polydimethylsiloxane (PDMS) using soft lithography methods with molds machined out of aluminum. Tip displacements of micropillar array (300 µm in diameter and 1400 µm in height) in flow direction are recorded using a microscope mounted camera, and the displacements are analyzed using image processing with an algorithm written in MATLAB. Experiments are performed with water-glycerol solutions mixed at 4 different ratios to attain 1 cP, 5 cP, 10 cP and 15 cP viscosities at room temperature. The prepared solutions are injected into the microfluidic chips using a syringe pump at flow rates from 10-100 mL / hr and the displacement versus flow rate is plotted for different viscosities. A displacement of around 1.5 µm was observed for 15 cP solution at 60 mL / hr while only a 1 µm displacement was observed for 10 cP solution. The presented viscometer design optimization is still in progress for better sensitivity and accuracy. Our microfluidic viscometer platform has potential for tailor made microfluidic chips to enable real time observation and control of viscosity changes in biological or chemical reactions.Keywords: Computational Fluid Dynamics (CFD), high aspect ratio, micropillar array, viscometer
Procedia PDF Downloads 245
1563 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation
Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski
Abstract:
In problems of portfolio selection, the reward-risk ratio criterion is optimized to search for a risky portfolio with the maximum increase of the mean return in proportion to the risk measure increase when compared to the risk-free investments. In the classical model, following Markowitz, the risk is measured by the variance thus representing the Sharpe ratio optimization and leading to the quadratic optimization problems. Several Linear Programming (LP) computable risk measures have been introduced and applied in portfolio optimization. In particular, the Mean Absolute Deviation (MAD) measure has been widely recognized. The reward-risk ratio optimization with the MAD measure can be transformed into the LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total of the number of scenarios and the number of instruments. This may lead to the LP models with huge number of variables and constraints in the case of real-life financial decisions based on several thousands scenarios, thus decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can be then dramatically improved by an alternative model based on the inverse risk-reward ratio minimization and by taking advantages of the LP duality. In the introduced LP model the number of structural constraints is proportional to the number of instruments thus not affecting seriously the simplex method efficiency by the number of scenarios and therefore guaranteeing easy solvability. Moreover, we show that under natural restriction on the target value the MAD risk-reward ratio optimization is consistent with the second order stochastic dominance rules.Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming
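A minimal sketch of the objective discussed above, evaluating the MAD-based reward-risk ratio of a fixed portfolio over equally likely scenarios; the paper optimises this ratio by LP, and the scenario data here are hypothetical.

```python
import numpy as np

def mad_ratio(weights, scenarios, r_free=0.0):
    """Reward-risk ratio with the Mean Absolute Deviation as the risk measure:
    (mean excess return) / MAD of the portfolio over the given scenarios."""
    r = scenarios @ weights                    # portfolio return in each scenario
    mad = np.mean(np.abs(r - r.mean()))
    return (r.mean() - r_free) / mad

# Hypothetical equally likely scenarios (rows) for 3 instruments (columns)
scenarios = np.array([[0.02, 0.01, 0.00],
                      [-0.01, 0.03, 0.01],
                      [0.04, -0.02, 0.02],
                      [0.00, 0.01, 0.01]])
w = np.array([0.5, 0.3, 0.2])
print(f"MAD reward-risk ratio: {mad_ratio(w, scenarios):.3f}")
```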
Procedia PDF Downloads 406
1562 Frequency Response of Complex Systems with Localized Nonlinearities
Authors: E. Menga, S. Hernandez
Abstract:
Finite Element Models (FEMs) are widely used in order to study and predict the dynamic properties of structures and usually, the prediction can be obtained with much more accuracy in the case of a single component than in the case of assemblies. Especially for structural dynamics studies, in the low and middle frequency range, most complex FEMs can be seen as assemblies made by linear components joined together at interfaces. From a modelling and computational point of view, these types of joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of time, characterized by nonlinear constitutive laws. On the other side, most of FE programs are able to run nonlinear analysis in time-domain. They treat the whole structure as nonlinear, even if there is one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology in order to obtain the nonlinear frequency response of structures, whose nonlinearities can be considered as localized sources, is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications, and allows getting the Nonlinear Frequency Response Functions (NLFRFs), through an ‘updating’ process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and understanding what the implications of the nonlinear one, are. The response of the system is formulated in both: time and frequency domain. First the Modal Database is extracted and the linear response is calculated. Secondly the nonlinear response is obtained thru the NL SDMM, by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first one is a two DOFs spring-mass-damper system, and the second example takes into account a full aircraft FE Model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE Model can be considered as acting linearly and the nonlinear behavior is restricted to few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analysis and easier implementation of optimization procedures for the calibration of nonlinear models.Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber
Procedia PDF Downloads 266
1561 A Modified Nonlinear Conjugate Gradient Algorithm for Large Scale Unconstrained Optimization Problems
Authors: Tsegay Giday Woldu, Haibin Zhang, Xin Zhang, Yemane Hailu Fissuh
Abstract:
It is well known that the nonlinear conjugate gradient method is one of the most widely used first-order methods for solving large-scale unconstrained smooth optimization problems. Because of their low memory requirements, attractive theoretical features, practical computational efficiency, and nice convergence properties, nonlinear conjugate gradient methods play a special role in solving large-scale unconstrained optimization problems. Large-scale optimization problems have important applications in the practical and scientific world. However, nonlinear conjugate gradient methods have limited information about the curvature of the objective function, and they are likely to be less efficient and robust than some second-order algorithms. To overcome these drawbacks, a new modified nonlinear conjugate gradient method is presented. The noticeable features of our work are that the new search direction possesses the sufficient descent property independently of any line search and that it belongs to a trust region. Under mild assumptions and the standard Wolfe line search technique, the global convergence of the proposed algorithm is established. Furthermore, to test the practical computational performance of the new algorithm, numerical experiments are carried out on a set of large-dimensional unconstrained problems. The numerical results show that the proposed algorithm is efficient and robust compared with other similar algorithms.
Keywords: conjugate gradient method, global convergence, large scale optimization, sufficient descent property
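For context, a generic nonlinear conjugate gradient sketch is given below (Polak-Ribiere+ with a backtracking Armijo line search rather than the Wolfe conditions used in the paper); it is not the proposed modified method, only a baseline illustration.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, iters=200, tol=1e-6):
    """Plain Polak-Ribiere+ nonlinear CG with a backtracking Armijo line search
    (a generic baseline, not the modified direction of the abstract)."""
    x, g = x0.copy(), grad(x0)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                 # safeguard: restart with steepest descent
            d = -g
        t = 1.0                        # backtracking on the Armijo condition
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ formula
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimise the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0]), iters=5000))
```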
Procedia PDF Downloads 205
1560 Simulation and Experimental Study on Dual Dense Medium Fluidization Features of Air Dense Medium Fluidized Bed
Authors: Cheng Sheng, Yuemin Zhao, Chenlong Duan
Abstract:
Air dense medium fluidized bed is a typical application of fluidization techniques for coal particle separation in arid areas, where it is costly to implement wet coal preparation technologies. In the last three decades, air dense medium fluidized bed, as an efficient dry coal separation technique, has been studied in many aspects, including energy and mass transfer, hydrodynamics, bubbling behaviors, etc. Despite numerous researches have been published, the fluidization features, especially dual dense medium fluidization features have been rarely reported. In dual dense medium fluidized beds, different combinations of different dense mediums play a significant role in fluidization quality variation, thus influencing coal separation efficiency. Moreover, to what extent different dense mediums mix and to what extent the two-component particulate mixture affects the fluidization performance and quality have been in suspense. The proposed work attempts to reveal underlying mechanisms of generation and evolution of two-component particulate mixture in the fluidization process. Based on computational fluid dynamics methods and discrete particle modelling, movement and evolution of dual dense mediums in air dense medium fluidized bed have been simulated. Dual dense medium fluidization experiments have been conducted. Electrical capacitance tomography was employed to investigate the distribution of two-component mixture in experiments. Underlying mechanisms involving two-component particulate fluidization are projected to be demonstrated with the analysis and comparison of simulation and experimental results.Keywords: air dense medium fluidized bed, particle separation, computational fluid dynamics, discrete particle modelling
Procedia PDF Downloads 382
1559 Effect of Bi-Dispersity on Particle Clustering in Sedimentation
Authors: Ali Abbas Zaidi
Abstract:
In free settling or sedimentation, particles form clusters at high Reynolds numbers and in dilute suspensions. This is due to the entrapment of particles in the wakes of upstream particles. In this paper, the effect of the bi-dispersity of settling particles on particle clustering is investigated using particle-resolved direct numerical simulation. The immersed boundary method is used for particle-fluid interactions and the discrete element method is used for particle-particle interactions. The solid volume fraction used in the simulations is 1% and the Reynolds number based on the Sauter mean diameter is 350. Both the solid volume fraction and the Reynolds number lie in the clustering regime of sedimentation. In the simulations, the particle diameter ratio (i.e., the diameter of the larger particles to that of the smaller particles, d₁/d₂) is varied from 2:1 to 3:1 and 4:1. For each particle diameter ratio, the solid volume fraction ratio of the two particle sizes (φ₁/φ₂) is varied from 1:1 to 1:2 and 2:1. For comparison, simulations are also performed for monodisperse particles. Particle clustering is studied using the radial distribution function and the instantaneous locations of particles in the computational domain. It is observed that the degree of particle clustering decreases with an increase in the bi-dispersity of the settling particles. The smallest degree of particle clustering, or the greatest dispersion of particles, is observed for particles with d₁/d₂ equal to 4:1 and φ₁/φ₂ equal to 1:2. The simulations showed that the reduction in particle clustering with increasing bi-dispersity is due to the difference in the settling velocities of the particles: larger particles settle faster and knock the smaller particles out of the clustered regions in the computational domain.
Keywords: dispersion in bi-disperse settling particles, particle microstructures in bi-disperse suspensions, particle resolved direct numerical simulations, settling of bi-disperse particles
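A minimal sketch of the radial distribution function used above to quantify clustering, computed with the minimum-image convention for a periodic box; the particle positions here are random placeholders, so g(r) is close to 1, whereas clustered configurations give g(r) well above 1 at small separations.

```python
import numpy as np

def radial_distribution(pos, box, r_max, n_bins=50):
    """Radial distribution function g(r) for particle centres `pos` (N x 3) in a
    periodic box (minimum-image convention); a standard post-processing measure
    of clustering, not the authors' exact implementation."""
    n = len(pos)
    rho = n / np.prod(box)
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                      # minimum-image displacements
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, 1)]
    hist, edges = np.histogram(r[r < r_max], bins=n_bins, range=(0.0, r_max))
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    g = hist / (shell * rho * n / 2.0)                # normalise by ideal-gas pair count
    return 0.5 * (edges[1:] + edges[:-1]), g

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 10.0, size=(500, 3))           # placeholder particle centres
r, g = radial_distribution(pos, box=np.array([10.0, 10.0, 10.0]), r_max=3.0)
print(g[20:25])                                       # roughly 1 for random positions
```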
Procedia PDF Downloads 207