Search results for: computational methods
16632 High Aspect Ratio Micropillar Array Based Microfluidic Viscometer
Authors: Ahmet Erten, Adil Mustafa, Ayşenur Eser, Özlem Yalçın
Abstract:
We present a new viscometer based on a microfluidic chip with elastic high aspect ratio micropillar arrays. The displacement of the pillar tips in the flow direction can be used to analyze the viscosity of a liquid. In our work, Computational Fluid Dynamics (CFD) is used to analyze the pillar displacement of various micropillar array configurations in the flow direction at different viscosities. Following CFD optimization, micro-CNC based rapid prototyping is used to fabricate molds for the microfluidic chips. The microfluidic chips are fabricated out of polydimethylsiloxane (PDMS) using soft lithography methods with molds machined out of aluminum. Tip displacements of the micropillar array (300 µm in diameter and 1400 µm in height) in the flow direction are recorded using a microscope-mounted camera, and the displacements are analyzed using image processing with an algorithm written in MATLAB. Experiments are performed with water-glycerol solutions mixed at 4 different ratios to attain 1 cP, 5 cP, 10 cP and 15 cP viscosities at room temperature. The prepared solutions are injected into the microfluidic chips using a syringe pump at flow rates from 10 to 100 mL/hr, and the displacement versus flow rate is plotted for different viscosities. A displacement of around 1.5 µm was observed for the 15 cP solution at 60 mL/hr, while only a 1 µm displacement was observed for the 10 cP solution. The presented viscometer design optimization is still in progress for better sensitivity and accuracy. Our microfluidic viscometer platform has potential for tailor-made microfluidic chips that enable real-time observation and control of viscosity changes in biological or chemical reactions.
Keywords: Computational Fluid Dynamics (CFD), high aspect ratio, micropillar array, viscometer
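As a hedged sketch of how such a device could be calibrated and read out: at low Reynolds number the drag on a pillar, and hence its tip displacement at a fixed flow rate, scales roughly linearly with viscosity, so a linear calibration fit can be inverted to estimate an unknown viscosity. The numbers below are illustrative placeholders loosely based on the values quoted in the abstract, not the authors' calibration data.

```python
import numpy as np

# Hypothetical calibration data: tip displacement (µm) at a fixed flow rate
# of 60 mL/hr for known water-glycerol viscosities (cP). Values are made up,
# loosely matching the abstract (1 µm at 10 cP, 1.5 µm at 15 cP).
viscosity_cp = np.array([1.0, 5.0, 10.0, 15.0])
displacement_um = np.array([0.1, 0.5, 1.0, 1.5])

# Least-squares fit through the origin: displacement = k * viscosity.
k = np.sum(displacement_um * viscosity_cp) / np.sum(viscosity_cp**2)

def estimate_viscosity(displacement):
    """Invert the calibration to estimate viscosity (cP) from displacement (µm)."""
    return displacement / k

print(round(estimate_viscosity(1.0), 1))  # a 1 µm displacement maps back to 10.0 cP
```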
Procedia PDF Downloads 248
16631 Accelerating Molecular Dynamics Simulations of Electrolytes with Neural Network: Bridging the Gap between Ab Initio Molecular Dynamics and Classical Molecular Dynamics
Authors: Po-Ting Chen, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang
Abstract:
Classical molecular dynamics (CMD) simulations are highly efficient for material simulations but have limited accuracy. In contrast, ab initio molecular dynamics (AIMD) provides high precision by solving the Kohn-Sham equations, yet requires significant computational resources, restricting the size of systems and time scales that can be simulated. To address these challenges, we employed NequIP, a machine learning model based on an E(3)-equivariant graph neural network, to accelerate molecular dynamics simulations of a 1 M LiPF6 in EC/EMC (3:7 v/v) electrolyte for Li battery applications. AIMD calculations were initially conducted using the Vienna Ab initio Simulation Package (VASP) to generate highly accurate atomic positions, forces, and energies. This data was then used to train the NequIP model, which learns efficiently from the provided data and achieved AIMD-level accuracy with significantly less training data. After training, NequIP was integrated into the LAMMPS software to enable molecular dynamics simulations of larger systems over longer time scales. This method overcomes the computational limitations of AIMD while improving on the accuracy limitations of CMD, providing an efficient and precise computational framework. This study showcases NequIP's applicability to electrolyte systems, particularly for simulating the dynamics of LiPF6 ionic mixtures. The results demonstrate substantial improvements in both computational efficiency and simulation accuracy, highlighting the potential of machine learning models to enhance molecular dynamics simulations.
Keywords: lithium-ion batteries, electrolyte simulation, molecular dynamics, neural network
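The fit-then-predict workflow above (reference data in, surrogate model out, fast predictions for larger systems) can be sketched in miniature. NequIP itself is an E(3)-equivariant graph neural network; the ridge regression below on synthetic descriptor vectors is only a stand-in that illustrates the shape of the pipeline, with all data randomly generated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for AIMD training data: one descriptor vector and one reference
# energy per configuration. In the actual workflow these come from VASP
# trajectories and are fitted by NequIP; here a ridge regression illustrates
# the train-then-predict loop only.
X = rng.normal(size=(200, 10))                 # 200 configurations, 10 descriptors
true_w = rng.normal(size=10)
y = X @ true_w + 0.01 * rng.normal(size=200)   # "AIMD" energies with small noise

lam = 1e-3                                     # ridge regularization strength
w = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)

# Predict energies for unseen configurations (the role the trained model
# plays inside LAMMPS during large-scale MD).
X_new = rng.normal(size=(5, 10))
pred = X_new @ w
rmse = np.sqrt(np.mean((X @ w - y) ** 2))
print(rmse < 0.05)  # training error close to the injected noise level
```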
Procedia PDF Downloads 24
16630 Development of Geo-computational Model for Analysis of Lassa Fever Dynamics and Lassa Fever Outbreak Prediction
Authors: Adekunle Taiwo Adenike, I. K. Ogundoyin
Abstract:
Lassa fever is a neglected tropical disease that has become a significant public health issue in Nigeria, the country with the greatest burden in Africa. This paper presents a geo-computational model for the analysis and prediction of Lassa fever dynamics and outbreaks in Nigeria. The model investigates the dynamics of the virus with respect to environmental factors and human populations. It confirms the role of the rodent host in virus transmission and identifies how transmission is affected by climate and human population. The proposed methodology is carried out on a Linux operating system using the OSGeoLive virtual machine for geographical computing, which serves as a base for spatial ecology computing. The model design uses the Unified Modeling Language (UML), and the performance evaluation uses machine learning algorithms such as random forest, fuzzy logic, and neural networks. The study aims to contribute to the control of Lassa fever, which is achievable through the combined efforts of public health professionals and geocomputational and machine learning tools. The research findings will potentially be more readily accepted and utilized by decision-makers for the attainment of Lassa fever elimination.
Keywords: geo-computational model, Lassa fever dynamics, Lassa fever, outbreak prediction, Nigeria
Procedia PDF Downloads 95
16629 A Predictive Model for Turbulence Evolution and Mixing Using Machine Learning
Authors: Yuhang Wang, Jorg Schluter, Sergiy Shelyag
Abstract:
The high cost associated with high-resolution computational fluid dynamics (CFD) is one of the main challenges that inhibit the design, development, and optimisation of new combustion systems adapted for renewable fuels. In this study, we propose a physics-guided CNN-based model to predict turbulence evolution and mixing without requiring a traditional CFD solver. The model architecture is built upon U-Net and the inception module, while a physics-guided loss function is designed by introducing two additional physical constraints to enforce the conservation of both mass and pressure over the entire predicted flow fields. The model is then trained on the Large Eddy Simulation (LES) results of a natural turbulent mixing layer for two different Reynolds number cases (Re = 3000 and 30000). As a result, the model prediction shows excellent agreement with the corresponding CFD solutions in terms of both the spatial distribution and the temporal evolution of turbulent mixing. Such promising prediction performance opens up the possibility of performing accurate high-resolution manifold-based combustion simulations at a low computational cost, accelerating the iterative design process of new combustion systems.
Keywords: computational fluid dynamics, turbulence, machine learning, combustion modelling
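The idea of a physics-guided loss can be made concrete with a small sketch: the data-fit term is a standard MSE, and a mass-conservation penalty is added by taking the finite-difference divergence of the predicted velocity field. The penalty weight and the exact form of the constraint are assumptions for illustration; the abstract's actual loss also constrains pressure.

```python
import numpy as np

def divergence(u, v, dx=1.0):
    """Central-difference divergence of a 2D velocity field (interior points)."""
    du_dx = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dx)
    dv_dy = (v[2:, 1:-1] - v[:-2, 1:-1]) / (2 * dx)
    return du_dx + dv_dy

def physics_guided_loss(pred_u, pred_v, true_u, true_v, lam=0.1):
    """Data-fit MSE plus a mass-conservation penalty; lam is an assumed weight."""
    mse = np.mean((pred_u - true_u) ** 2 + (pred_v - true_v) ** 2)
    div_penalty = np.mean(divergence(pred_u, pred_v) ** 2)
    return mse + lam * div_penalty

# A divergence-free test field: u = y, v = x (du/dx = 0, dv/dy = 0).
n = 16
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
u, v = yy.astype(float), xx.astype(float)
print(np.allclose(divergence(u, v), 0.0))  # True: this field conserves mass
```

A divergent prediction would incur the extra penalty even when it matched the reference field pointwise elsewhere, which is exactly how the constraint steers training.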
Procedia PDF Downloads 92
16628 Simulation and Experimental Study on Dual Dense Medium Fluidization Features of Air Dense Medium Fluidized Bed
Authors: Cheng Sheng, Yuemin Zhao, Chenlong Duan
Abstract:
An air dense medium fluidized bed is a typical application of fluidization techniques for coal particle separation in arid areas, where it is costly to implement wet coal preparation technologies. Over the last three decades, the air dense medium fluidized bed, as an efficient dry coal separation technique, has been studied in many aspects, including energy and mass transfer, hydrodynamics, and bubbling behaviors. Although numerous studies have been published, the fluidization features, especially dual dense medium fluidization features, have rarely been reported. In dual dense medium fluidized beds, different combinations of dense mediums play a significant role in fluidization quality variation, thus influencing coal separation efficiency. Moreover, the extent to which different dense mediums mix, and the extent to which the two-component particulate mixture affects fluidization performance and quality, have remained open questions. The proposed work attempts to reveal the underlying mechanisms of the generation and evolution of the two-component particulate mixture in the fluidization process. Based on computational fluid dynamics methods and discrete particle modelling, the movement and evolution of dual dense mediums in an air dense medium fluidized bed have been simulated. Dual dense medium fluidization experiments have been conducted, and electrical capacitance tomography was employed to investigate the distribution of the two-component mixture in the experiments. The underlying mechanisms involved in two-component particulate fluidization are projected to be demonstrated through the analysis and comparison of the simulation and experimental results.
Keywords: air dense medium fluidized bed, particle separation, computational fluid dynamics, discrete particle modelling
Procedia PDF Downloads 383
16627 Defectoscopy of Reinforced Concrete Structures with Using an Ultrasonic Method for Failure Monitoring
Authors: Sabina Hublova, Kristyna Hrabova, Petr Cikrle
Abstract:
Sustainable development and the preservation of existing buildings are becoming increasingly important worldwide. In order to reduce CO2 emissions and the amount of waste from building structures, we can predict an increasing demand for the maintenance of existing buildings in the future. The use of modern diagnostic methods, which allow detailed determination of the properties of structures and the identification of critical points, could be of great importance for the better assessment of existing structures. Non-destructive methods are one of the options. Among these methods, ultrasonic testing appears to be highly promising, as it allows us to identify critical points of an element or a structure. The experiment will focus on the use of electroacoustic methods for defectoscopy in reinforced concrete columns.
Keywords: sustainability, defectoscopy, ultrasonic method, non-destructive methods, electroacoustic methods
Procedia PDF Downloads 168
16626 A Continuous Boundary Value Method of Order 8 for Solving the General Second Order Multipoint Boundary Value Problems
Authors: T. A. Biala
Abstract:
This paper deals with the numerical integration of general second order multipoint boundary value problems. This has been achieved through the development of a continuous linear multistep method (LMM). The continuous LMM is used to construct a main discrete method, to be used with some initial and final methods (also obtained from the continuous LMM), so that together they form a discrete analogue of the continuous second order boundary value problems. These methods are used as boundary value methods and adapted to cope with the integration of general second order multipoint boundary value problems. The convergence, the use and the region of absolute stability of the methods are discussed. Several numerical examples are implemented to elucidate the solution process.
Keywords: linear multistep methods, boundary value methods, second order multipoint boundary value problems, convergence
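The core idea, assembling interior difference equations and boundary conditions into one linear system that is solved globally rather than marched, can be illustrated with the simplest discrete analogue: a second-order central-difference scheme for u'' = f(x) with u(0) = u(1) = 0. This is only a minimal sketch of the class of method, not the order-8 scheme of the paper.

```python
import numpy as np

def solve_bvp(f, n=100):
    """Solve u'' = f(x) on (0, 1) with u(0) = u(1) = 0 by central differences."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)          # interior grid points
    # Tridiagonal second-difference operator; zero boundary values drop out.
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f(x))

# Manufactured problem: u'' = -pi^2 sin(pi x) has exact solution u = sin(pi x).
x, u = solve_bvp(lambda x: -np.pi**2 * np.sin(np.pi * x))
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err < 1e-3)  # second-order accuracy: error ~ h^2
```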
Procedia PDF Downloads 377
16625 Blind Channel Estimation for Frequency Hopping System Using Subspace Based Method
Authors: M. M. Qasaymeh, M. A. Khodeir
Abstract:
Subspace channel estimation methods have been widely studied. They depend on a subspace decomposition of the covariance matrix to separate the signal subspace from the noise subspace. The decomposition is normally done by either Eigenvalue Decomposition (EVD) or Singular Value Decomposition (SVD) of the Auto-Correlation Matrix (ACM). However, the subspace decomposition process is computationally expensive. In this paper, the multipath channel estimation problem for a Slow Frequency Hopping (SFH) system using a noise-subspace based method is considered. An efficient method to estimate the multipath time delays is proposed, applying the MUltiple SIgnal Classification (MUSIC) algorithm to the null space extracted by the Rank Revealing LU factorization (RRLU). The RRLU provides accurate information about the rank and the numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed method decreases the computational complexity by approximately half compared with RRQR-based methods while maintaining the same performance. Computer simulations are included to demonstrate the effectiveness of the proposed scheme.
Keywords: frequency hopping, channel model, time delay estimation, RRLU, RRQR, MUSIC, LS-ESPRIT
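The MUSIC step itself can be sketched in a few lines. The paper's contribution is extracting the null space via rank-revealing LU; the sketch below instead uses the textbook route (eigendecomposition of the sample covariance) and recovers two complex sinusoids from noisy snapshots, so only the pseudospectrum scan is representative. All signal parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

m, snapshots = 8, 200
true_freqs = [0.1, 0.3]                   # normalized frequencies to recover
n_idx = np.arange(m)
steer = lambda f: np.exp(2j * np.pi * f * n_idx)   # steering vector

# Two sources with random amplitudes per snapshot, plus small complex noise.
X = sum(steer(f)[:, None] * rng.normal(size=snapshots) for f in true_freqs)
X = X + 0.01 * (rng.normal(size=(m, snapshots)) + 1j * rng.normal(size=(m, snapshots)))

R = X @ X.conj().T / snapshots            # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(R)      # ascending eigenvalues
En = eigvecs[:, : m - 2]                  # noise subspace (smallest m-2 modes)

# MUSIC pseudospectrum: large where the steering vector is orthogonal to En.
grid = np.linspace(0, 0.5, 501)
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(f)) ** 2 for f in grid])

i1, i2 = 100, 300                         # grid indices of f = 0.1 and f = 0.3
print(spectrum[i1] > 10 * np.median(spectrum),
      spectrum[i2] > 10 * np.median(spectrum))  # sharp peaks at the true frequencies
```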
Procedia PDF Downloads 410
16624 Computational Intelligence and Machine Learning for Urban Drainage Infrastructure Asset Management
Authors: Thewodros K. Geberemariam
Abstract:
The rapid physical expansion of urbanization coupled with aging infrastructure presents unique decision-making and management challenges for many big-city municipalities. Cities must therefore upgrade and maintain their existing aging urban drainage infrastructure systems to keep up with demand. Given the overall contribution of assets to municipal revenue and the importance of infrastructure to the success of a livable city, many municipalities are currently looking for a robust and smart urban drainage infrastructure asset management solution that combines management, financial, engineering and technical practices. This robust decision-making shall rely on sound, complete, current and relevant data that enables asset valuation, impairment testing, lifecycle modeling, and forecasting across multiple asset portfolios. In this paper, predictive computational intelligence (CI) and multi-class machine learning (ML), coupled with online, offline, and historical record data collected from an array of multi-parameter sensors, are used to extract the operational and non-conforming patterns hidden in structured and unstructured data and to produce actionable insight on the current and future states of the network. This paper aims to improve the strategic decision-making process by identifying all possible alternatives, evaluating the risk of each alternative, and choosing the alternative most likely to attain the required goal in a cost-effective manner, using historical and near real-time urban drainage infrastructure data for assets that have previously not benefited from computational intelligence and machine learning advancements.
Keywords: computational intelligence, machine learning, urban drainage infrastructure, classification, prediction, asset management
Procedia PDF Downloads 152
16623 Numerical Study on Parallel Rear-Spoiler on Super Cars
Authors: Anshul Ashu
Abstract:
Computers are applied to vehicle aerodynamics in two ways: Computational Fluid Dynamics (CFD) and Computer-Aided Flow Visualization (CAFV). Of the two, CFD is chosen here because it presents results with computer graphics. The simulation of the flow field around the vehicle is one of the important CFD applications. The flow field can be solved numerically using panel methods, the k-ε method, and direct simulation methods. The spoiler is a tool in vehicle aerodynamics used to minimize unfavorable aerodynamic effects around the vehicle, and a parallel spoiler is a set of two spoilers designed in such a manner that it effectively reduces drag. In this study, the standard k-ε model is used to simulate the external flow field over simplified versions of the Bugatti Veyron, Audi R8 and Porsche 911. Flow simulation is done for variable Reynolds numbers. The flow simulation consists of three different levels: first over the model without a rear spoiler, second over the model with a single rear spoiler, and third over the model with a parallel rear spoiler. The second and third levels vary the following parameters: the shape of the spoiler, the angle of attack and the attachment position. A thorough analysis of the simulation results has been carried out, and a new parallel spoiler has been designed. It shows a modest improvement in vehicle aerodynamics, with a decrease in aerodynamic drag and lift, leading to better fuel economy and traction force for the model.
Keywords: drag, lift, flow simulation, spoiler
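Comparing the three configurations means reducing the simulated drag and lift forces to dimensionless coefficients. A hedged sketch of that post-processing step follows; every force, speed, and area below is a made-up placeholder, not a result from the study.

```python
# Reduce CFD force outputs to drag/lift coefficients: C = F / (0.5 rho v^2 A).
RHO = 1.225           # air density at sea level, kg/m^3

def coefficient(force_n, speed_ms, frontal_area_m2):
    """Dimensionless aerodynamic coefficient (used for both drag and lift)."""
    return force_n / (0.5 * RHO * speed_ms**2 * frontal_area_m2)

v, area = 69.4, 2.0   # ~250 km/h and an assumed frontal area
configs = {           # illustrative forces only, in newtons
    "no spoiler":       {"drag": 1500.0, "lift": -200.0},
    "single spoiler":   {"drag": 1450.0, "lift": -450.0},
    "parallel spoiler": {"drag": 1400.0, "lift": -500.0},
}
for name, f in configs.items():
    cd = coefficient(f["drag"], v, area)
    cl = coefficient(f["lift"], v, area)
    print(f"{name}: Cd={cd:.3f} Cl={cl:.3f}")
```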
Procedia PDF Downloads 501
16622 Evaluation of the Self-Efficacy and Learning Experiences of Final year Students of Computer Science of Southwest Nigerian Universities
Authors: Olabamiji J. Onifade, Peter O. Ajayi, Paul O. Jegede
Abstract:
This study aimed at investigating the preparedness of undergraduate final year students of Computer Science as the next entrants into the workplace. It assessed their self-efficacy in computational tasks and examined the relationship between their self-efficacy and their learning experiences in Southwest Nigerian universities. The study employed a descriptive survey research design. The population of the study comprises all final year students of Computer Science. A purposive sampling technique was adopted in selecting a representative sample of interest from the final year students of Computer Science. The Students' Computational Task Self-Efficacy Questionnaire (SCTSEQ) was used to collect data. Mean, standard deviation, frequency, percentages, and linear regression were used for data analysis. The results obtained revealed that the final year students of Computer Science were moderately confident in performing computational tasks, and that there is a significant relationship between the learning experiences of the students and their self-efficacy. The study recommends that the curriculum be improved to accommodate industry experts as lecturers in some of the courses, that provision be made for more practical sessions, and that the learning experiences of students be considered an important component in the undergraduate Computer Science curriculum development process.
Keywords: computer science, learning experiences, self-efficacy, students
Procedia PDF Downloads 144
16621 Methods for Preparation of Soil Samples for Determination of Trace Elements
Authors: S. Krustev, V. Angelova, K. Ivanov, P. Zaprjanova
Abstract:
It is generally accepted that only about ten microelements are vitally important to all plants, and approximately ten more elements have proved to be significant for the development of some species. The main methods for their determination in soils are the atomic spectral techniques AAS and ICP-OES. A critical stage in obtaining correct results for the content of heavy metals and nutrients in soil is the process of mineralization. A comparative study of the most widespread methods for soil sample preparation for the determination of some trace elements was carried out. The three most commonly used methods for sample preparation were applied: ISO 11466, EPA Method 3051 and BDS ISO 14869-1. Their capabilities were assessed, and their bounds of applicability in determining the levels of the most important microelements in agriculture were defined.
Keywords: analysis, copper, methods, zinc
Procedia PDF Downloads 257
16620 Earthquake Forecasting Procedure Due to Diurnal Stress Transfer by the Core to the Crust
Authors: Hassan Gholibeigian, Kazem Gholibeigian
Abstract:
In this paper, our goal is the determination of loading versus time in the crust. For this goal, we present a computational procedure to propose a cumulative strain energy time profile which can be used to predict the approximate location and time of the next major earthquake (M > 4.5) along a specific fault, which we believe is more accurate than many of the methods presently in use. In the coming pages, after a short review of the research presently going on in the area of earthquake analysis and prediction, earthquake mechanisms in both the jerk and sequence earthquake directions are discussed; then our computational procedure is presented using the differential equations of equilibrium which govern the nonlinear dynamic response of a system of finite elements, modified with an extra term to account for the jerk produced during the quake. We then employ the von Mises model for the stress-strain relationship in our calculations, modified with the addition of an extra term to account for thermal effects. For the calculation of the strain energy, the idea of the Pulsating Mantle Hypothesis (PMH) is used. This hypothesis, in brief, states that the mantle is under diurnal cyclic pulsating loads due to the unbalanced gravitational attraction of the sun and the moon. A brief discussion is given of the Denali fault as a case study. The cumulative strain energy is then graphically represented versus time. At the end, based on some hypothetical earthquake data, the final results are verified.
Keywords: pulsating mantle hypothesis, inner core's dislocation, outer core's bulge, constitutive model, transient hydro-magneto-thermo-mechanical load, diurnal stress, jerk, fault behaviour
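The cumulative strain-energy-versus-time idea can be caricatured in a few lines: a secular tectonic loading rate plus a diurnal sinusoidal term (the PMH contribution), integrated until an assumed failure threshold is crossed. Every constant below (modulus, rates, threshold) is an illustrative placeholder, not fault data, and the linear-elastic energy density is a drastic simplification of the paper's nonlinear finite element model.

```python
import math

E_MODULUS = 5e10          # Pa, assumed elastic modulus
SECULAR_RATE = 1e3        # Pa per day of tectonic stress accumulation (assumed)
DIURNAL_AMP = 50.0        # Pa, amplitude of the assumed diurnal term

def stress(day):
    """Secular loading plus a diurnal cyclic component, in Pa."""
    return SECULAR_RATE * day + DIURNAL_AMP * math.sin(2 * math.pi * day)

def days_to_threshold(energy_threshold, dt=0.01):
    """Step time forward until strain energy density sigma^2/(2E) hits threshold."""
    t = 0.0
    while stress(t) ** 2 / (2 * E_MODULUS) < energy_threshold:
        t += dt
    return t

print(round(days_to_threshold(1e-3)))  # days until the assumed threshold is reached
```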
Procedia PDF Downloads 276
16619 A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers
Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi
Abstract:
The Thermal Grill Illusion (TGI) elicits a strong and often painful sensation of burning when interlacing warm and cold stimuli that are individually non-painful excite thermoreceptors beneath the skin. Among several theories of TGI, the "disinhibition" theory is the most widely accepted in the literature. According to this theory, TGI is the result of the disinhibition, or unmasking, of the pain-sensitive HPC (Heat-Pinch-Cold) nerve fibers caused by the inhibition of the cold-sensitive nerve fibers that normally mask the HPC fibers. Although researchers have focused on understanding TGI through experiments and models, none have investigated the prediction of TGI pain intensity through a computational model. Furthermore, the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. The prediction of pain intensity through a computational model of TGI can help in optimizing thermal displays and in understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and on experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory and by utilizing the existing popular models of warm and cold receptors in the skin. The model aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between the warm and cold grills (each repeated three times). All participants rated the perceived TGI pain sensation on a scale of one to ten.
The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers. Moreover, a similar monotonically increasing relationship is experimentally observed between temperature differences and the perceived TGI intensity. This shows the potential for comparing the TGI pain intensity observed through the experimental study with the neuronal activity predicted through the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies in pain perception are needed to develop a more accurate version of the current model.
Keywords: thermal grill illusion, computational modelling, simulation, psychophysics, haptics
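The disinhibition account lends itself to a toy firing-rate sketch: HPC output is a thermal drive minus inhibition from cold-sensitive fibers, and warming the interlaced bars weakens that inhibition. The gains, baselines, and linear rate functions below are illustrative assumptions, not the receptor models the study actually uses; the point is only that the sketch reproduces the monotonic rise of predicted intensity with the warm-cold difference.

```python
def cold_fiber_rate(cold_temp_c):
    """Cold fibers fire more as temperature drops below an assumed 30 C neutral point."""
    return max(0.0, 2.0 * (30.0 - cold_temp_c))

def hpc_rate(warm_temp_c, cold_temp_c, drive_gain=1.5, inhibition=0.4):
    """HPC activity: thermal drive minus (weakened) inhibition from cold fibers."""
    drive = drive_gain * max(0.0, warm_temp_c - 30.0) + cold_fiber_rate(cold_temp_c)
    return max(0.0, drive - inhibition * cold_fiber_rate(cold_temp_c))

# Predicted TGI intensity for growing warm-cold differences centered on 30 C.
deltas = [4, 8, 12, 16, 20]
rates = [hpc_rate(30 + d / 2, 30 - d / 2) for d in deltas]
print(rates == sorted(rates))  # True: activity rises monotonically with delta-T
```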
Procedia PDF Downloads 173
16618 Artificial Intelligence in Bioscience: The Next Frontier
Authors: Parthiban Srinivasan
Abstract:
With recent advances in computational power and access to sufficient data in the biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We utilize an automated process to generate an array of numerical descriptors (features) for each molecule. Redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms. We found the Random Forest algorithm to be the better choice for this analysis. We captured the non-linear relationships in the data and formed a prediction model with reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties. We expect the DNN algorithm to give better results and improve the accuracy of the prediction. This presentation will review these prominent machine learning and deep learning methods and our implementation protocols, and discuss the usefulness of these techniques in biomedical and health informatics.
Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction
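The iterative elimination of redundant and irrelevant descriptors can be sketched concretely: drop any column that is (near-)constant or almost perfectly correlated with a column already kept. The thresholds and the synthetic data below are assumptions for illustration; the random-forest fit would then run on the surviving columns.

```python
import numpy as np

rng = np.random.default_rng(2)

def filter_descriptors(X, corr_threshold=0.95, var_threshold=1e-8):
    """Return indices of descriptor columns that are neither constant nor redundant."""
    kept = []
    for j in range(X.shape[1]):
        col = X[:, j]
        if col.var() < var_threshold:
            continue                      # irrelevant: (near-)constant column
        if any(abs(np.corrcoef(col, X[:, k])[0, 1]) > corr_threshold for k in kept):
            continue                      # redundant: duplicates a kept column
        kept.append(j)
    return kept

base = rng.normal(size=(100, 3))          # 3 informative descriptors
X = np.column_stack([base,
                     base[:, 0] * 2 + 1,  # redundant linear copy of column 0
                     np.ones(100)])       # constant, irrelevant column
print(filter_descriptors(X))  # [0, 1, 2]
```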
Procedia PDF Downloads 359
16617 Developing Reading Methods of Industrial Education Students at King Mongkut’s Institute of Technology Ladkrabang
Authors: Rattana Sangchan, Pattaraporn Thampradit
Abstract:
Teaching students to use a variety of reading methods to develop their reading is essential for Thai university students. However, few studies have examined the reading development methods used by Thai students in the industrial education field. Therefore, this study was carried out not only to investigate the reading development methods of Industrial Education students at King Mongkut's Institute of Technology Ladkrabang, but also to determine whether these methods differ by students' reading ability and by gender. The research instrument used in collecting the data consisted of fourteen statements covering metacognitive strategies, cognitive strategies, and social/affective strategies. The results of this study revealed that students could develop their reading methods at a moderate level (mean = 3.13). Furthermore, high reading ability students used reading development methods at levels different from those of mid reading ability students. In addition, high reading ability students could use either metacognitive or cognitive reading methods to develop their reading much better than mid reading ability students. Interestingly, male students could develop their reading methods at a high level, while female students could develop their reading methods only at a moderate level. Male students could also use either metacognitive or cognitive reading methods much better than female students. Thus, the results of this study indicate that most students need to apply many more reading strategies to develop their reading.
At the same time, suggestions are provided on how to motivate and train students to apply more appropriate and effective reading strategies to better comprehend their reading.
Keywords: developing reading methods, industrial education, reading abilities, reading method classification
Procedia PDF Downloads 285
16616 A New Family of Globally Convergent Conjugate Gradient Methods
Authors: B. Sellami, Y. Laskri, M. Belloufi
Abstract:
Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems, and have recently been much studied. In this paper, a new family of conjugate gradient methods is proposed for unconstrained optimization. This family includes two existing practical nonlinear conjugate gradient methods, produces a descent search direction at every iteration, and converges globally provided that the line search satisfies the Wolfe conditions. Numerical experiments are done to test the efficiency of the new method, and the results suggest that the new method is promising. In addition, the methods related to this family are uniformly discussed.
Keywords: conjugate gradient method, global convergence, line search, unconstrained optimization
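For readers unfamiliar with the class being generalized, here is a hedged sketch of one member: a Polak-Ribiere+ nonlinear conjugate gradient iteration, with a simple backtracking (Armijo) line search standing in for the full Wolfe line search assumed by the convergence theory, applied to a small quadratic test problem.

```python
import numpy as np

def conjugate_gradient(f, grad, x0, iters=500, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # start with steepest descent
    for _ in range(iters):
        if g @ d >= 0:
            d = -g                            # safeguard: ensure a descent direction
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                      # backtrack until Armijo holds
        x = x + alpha * d
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere+ update
        d = -g_new + beta * d
        g = g_new
        if np.linalg.norm(g) < tol:
            break
    return x

f = lambda x: (x[0] - 3.0) ** 2 + 10.0 * (x[1] + 1.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 3.0), 20.0 * (x[1] + 1.0)])
x_star = conjugate_gradient(f, grad, [0.0, 0.0])
print(np.allclose(x_star, [3.0, -1.0], atol=1e-3))  # converges to the minimizer
```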
Procedia PDF Downloads 410
16615 [Keynote Talk]: Mathematical and Numerical Modelling of the Cardiovascular System: Macroscale, Mesoscale and Microscale Applications
Authors: Aymen Laadhari
Abstract:
The cardiovascular system is centered on the heart and is characterized by a very complex structure with different physical scales in space (e.g. micrometers for erythrocytes and centimeters for organs) and time (e.g. milliseconds for human brain activity and several years for the development of some pathologies). The development and numerical implementation of mathematical models of the cardiovascular system is a tremendously challenging topic at the theoretical and computational levels, and has consequently attracted growing interest over the past decade. Accurate computational investigations, in both healthy and pathological cases, of processes related to the functioning of the human cardiovascular system hold great potential for tackling several problems of clinical relevance and for improving the diagnosis of specific diseases. In this talk, we focus on the specific task of simulating three particular phenomena related to the cardiovascular system on the macroscopic, mesoscopic and microscopic scales, respectively. Namely, we develop numerical methodologies tailored for the simulation of (i) the haemodynamics (i.e., the fluid mechanics of blood) in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets, (ii) the hyperelastic anisotropic behaviour of cardiomyocytes and the influence of calcium concentrations on the contraction of single cells, and (iii) the dynamics of red blood cells in the microvasculature. For each problem, we present an appropriate fully Eulerian finite element methodology. We report several numerical examples to address in detail the relevance of the mathematical models in terms of physiological meaning and to illustrate the accuracy and efficiency of the numerical methods.
Keywords: finite element method, cardiovascular system, Eulerian framework, haemodynamics, heart valve, cardiomyocyte, red blood cell
Procedia PDF Downloads 253
16614 On a Generalization of the Spectral Dichotomy Method of a Matrix With Respect to Parabolas
Authors: Mouhamadou Dosso
Abstract:
This paper presents spectral dichotomy methods for a matrix which compute spectral projectors onto the subspace associated with the eigenvalues external to parabolas described by a general equation. These methods are modifications of the one proposed in [A. N. Malyshev and M. Sadkane, SIAM J. Matrix Anal. Appl. 18 (2), 265-278, 1997], which uses the spectral dichotomy method of a matrix with respect to the imaginary axis. Theoretical and algorithmic aspects of the methods are developed. Numerical results obtained by applying the presented methods to test matrices are reported.
Keywords: spectral dichotomy method, spectral projector, eigensubspaces, eigenvalue
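To illustrate the object these methods compute: a spectral projector onto the invariant subspace of the eigenvalues outside a parabola y² = 4px. The sketch below forms it by the naive route (a full eigendecomposition followed by eigenvalue selection) on a small test matrix; the dichotomy method itself is precisely a way to obtain this projector without computing the eigenvalues explicitly.

```python
import numpy as np

def spectral_projector(A, outside):
    """Projector onto the invariant subspace of eigenvalues where outside(lam) is True."""
    lam, V = np.linalg.eig(A)
    E = np.diag(outside(lam).astype(float))   # select the qualifying eigenvalues
    return (V @ E @ np.linalg.inv(V)).real

# Upper triangular test matrix with eigenvalues -2, 1 and 9 on the diagonal.
A = np.diag([-2.0, 1.0, 9.0]) + np.triu(np.ones((3, 3)), 1)
p = 1.0
outside = lambda lam: lam.imag**2 > 4.0 * p * lam.real  # only lam = -2 qualifies

P = spectral_projector(A, outside)
print(np.allclose(P @ P, P), round(np.trace(P)))  # a rank-1 projector commuting with A
```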
Procedia PDF Downloads 94
16613 Computational Fluid Dynamics (CFD) Modeling of Local with a Hot Temperature in Sahara
Authors: Selma Bouasria, Mahi Abdelkader, Abbès Azzi, Herouz Keltoum
Abstract:
This paper reports on a concept implemented in the computational fluid dynamics (CFD) code CFX through user-defined functions to assess ventilation efficiency inside a force-ventilated room. CFX is a simulation tool which uses powerful computing and applied mathematics to model fluid flow situations for the prediction of heat, mass and momentum transfer and for optimal design in various heat transfer and fluid flow processes; here it is applied to evaluate thermal comfort in a highly glazed, ventilated room. The quality of the solutions obtained from CFD simulations makes them an effective tool for predicting indoor thermo-aeraulic comfort behavior and performance.
Keywords: ventilation, thermal comfort, CFD, indoor environment, solar air heater
Procedia PDF Downloads 634
16612 Fluid Structure Interaction Study between Ahead and Angled Impact of AGM 88 Missile Entering Relatively High Viscous Fluid for K-Omega Turbulence Model
Authors: Abu Afree Andalib, Rafiur Rahman, Md Mezbah Uddin
Abstract:
The main objective of this work is to analyze various parameters of the AGM-88 missile using the FSI module in Ansys. Computational fluid dynamics is used to study the fluid flow pattern and fluidic phenomena such as drag, pressure force, energy dissipation, and shockwave distribution in water. Using the finite element analysis module of Ansys, structural parameters such as stress and stress density, localization point, deflection, and force propagation are determined. A separate analysis of the structural parameters is done in Abaqus. A state-of-the-art coupling module is used for the FSI analysis. A fine mesh, within the limits of available computing power, is used in every case for better simulation results. The results for the above-mentioned parameters are analyzed and compared for the two phases using graphical representation, and the results from Ansys and Abaqus are also shown. Computational fluid dynamics and finite element analyses, and subsequently the fluid-structure interaction (FSI) technique, are considered, with the finite volume method and finite element method used for modelling the fluid flow and for the structural parameter analysis, respectively. Feasible boundary conditions are also utilized in the research. A significant change in the interaction and interference pattern was found during impact, and both theory and simulation indicate that the angled condition produces the higher impact.
Keywords: FSI (fluid-structure interaction), impact, missile, high viscous fluid, CFD (computational fluid dynamics), FEM (finite element analysis), FVM (finite volume method), fluid flow, fluid pattern, structural analysis, AGM-88, Ansys, Abaqus, meshing, k-omega, turbulence model
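As a minimal numerical illustration of the drag quantity such a CFD stage evaluates, the sketch below uses the standard quadratic drag law; the drag coefficient, reference area, and entry speed are invented for illustration and are not values from this study:

```python
# Quadratic drag law F = 1/2 * rho * v^2 * Cd * A for a body moving
# through water. All numeric inputs below are hypothetical.
RHO_WATER = 998.0  # kg/m^3, fresh water near room temperature


def drag_force(cd, area_m2, speed_ms, rho=RHO_WATER):
    """Drag force in newtons from the quadratic drag law."""
    return 0.5 * rho * speed_ms ** 2 * cd * area_m2


# Hypothetical missile-like body: Cd = 0.30, frontal area 0.05 m^2,
# entering water at 250 m/s.
print(drag_force(cd=0.30, area_m2=0.05, speed_ms=250.0))  # 467812.5 N
```

The same routine makes it easy to see why water impact is so much more violent than flight in air: substituting the density of air (about 1.2 kg/m^3) reduces the force by nearly three orders of magnitude.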
Procedia PDF Downloads 468
16611 Computational Pipeline for Lynch Syndrome Detection: Integrating Alignment, Variant Calling, and Annotations
Authors: Rofida Gamal, Mostafa Mohammed, Mariam Adel, Marwa Gamal, Marwa Kamal, Ayat Saber, Maha Mamdouh, Amira Emad, Mai Ramadan
Abstract:
Lynch Syndrome is an inherited genetic condition associated with an increased risk of colorectal and other cancers. Detecting Lynch Syndrome in individuals is crucial for early intervention and preventive measures. This study proposes a computational pipeline for Lynch Syndrome detection that integrates alignment, variant calling, and annotation. The pipeline leverages popular tools such as FastQC, Trimmomatic, BWA, bcftools, and ANNOVAR to process the input FASTQ file, perform quality trimming, align reads to the reference genome, call variants, and annotate them. The pipeline was applied to a dataset of Lynch Syndrome cases and its performance evaluated. The quality-check step ensured the integrity of the sequencing data, while the trimming step removed low-quality bases and adaptors. In the alignment step, the reads were mapped to the reference genome, and the subsequent variant-calling step identified potential genetic variants. The annotation step then provided functional insights into the detected variants, including their effects on known Lynch Syndrome-associated genes. The results obtained from the pipeline revealed Lynch Syndrome-related positions in the genome, providing valuable information for further investigation and clinical decision-making. The pipeline's effectiveness was demonstrated through its ability to streamline the analysis workflow and identify potential genetic markers associated with Lynch Syndrome. It presents a comprehensive and efficient approach to Lynch Syndrome detection, contributing to early diagnosis and intervention, and its modularity and flexibility enable customization and adaptation to various datasets and research settings.
Further optimization and validation are needed to enhance performance and applicability across diverse populations.
Keywords: Lynch Syndrome, computational pipeline, alignment, variant calling, annotation, genetic markers
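A minimal sketch of how the named stages chain together is given below. The tool names match the abstract; the file names, reference build, and command-line flags are typical invocations assumed for illustration, not the authors' exact configuration (the SAM-to-sorted-BAM conversion between alignment and variant calling is elided in a comment):

```python
# Sketch of the FastQC -> Trimmomatic -> BWA -> bcftools -> ANNOVAR
# pipeline stages. Paths, flags, and the hg38 reference are assumptions.


def build_pipeline(sample, reference="hg38.fa"):
    """Return the shell command for each pipeline stage, in order."""
    r1, r2 = f"{sample}_R1.fastq.gz", f"{sample}_R2.fastq.gz"
    t1, t2 = f"{sample}_R1.trim.fastq.gz", f"{sample}_R2.trim.fastq.gz"
    return [
        # 1. Quality check of the raw paired-end reads
        f"fastqc {r1} {r2}",
        # 2. Adapter and quality trimming (paired-end mode)
        f"trimmomatic PE {r1} {r2} {t1} /dev/null {t2} /dev/null "
        "ILLUMINACLIP:adapters.fa:2:30:10 SLIDINGWINDOW:4:20",
        # 3. Alignment to the reference genome
        # (sorting/converting SAM to BAM with samtools is elided here)
        f"bwa mem {reference} {t1} {t2} > {sample}.sam",
        # 4. Variant calling on the aligned, sorted reads
        f"bcftools mpileup -f {reference} {sample}.bam | "
        f"bcftools call -mv -o {sample}.vcf",
        # 5. Functional annotation of the called variants
        f"table_annovar.pl {sample}.vcf humandb/ -buildver hg38 -vcfinput",
    ]


for cmd in build_pipeline("patient01"):
    print(cmd)
```

Expressing the stages as data rather than a shell script is one way to get the modularity the abstract emphasizes: a stage can be swapped or re-parameterized without touching the others.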
Procedia PDF Downloads 78
16610 Comparison between Two Software Packages GSTARS4 and HEC-6 about Prediction of the Sedimentation Amount in Dam Reservoirs and to Estimate Its Efficient Life Time in the South of Iran
Authors: Fatemeh Faramarzi, Hosein Mahjoob
Abstract:
Building dams on rivers for the utilization of water resources disturbs the hydrodynamic equilibrium and causes all or part of the sediments carried by the water to settle in the dam reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and, in the long term, can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir and threatening sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the sedimentation amount in dam reservoirs and to estimate their efficient lifetime, but recently mathematical and computational models, which usually solve the governing equations using the finite element method, have become widely used as a suitable tool in reservoir sedimentation studies. This study compares the results from two software packages, GSTARS4 and HEC-6, in predicting the sedimentation amount in the Dez Dam, southern Iran. Each model provides a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the equations of momentum, flow and sediment continuity, and sediment transport. GSTARS4 (Generalized Sediment Transport Model for Alluvial River Simulation) is based on a one-dimensional mathematical model that simulates bed changes in both longitudinal and transverse directions by using flow tubes in a quasi-two-dimensional scheme; it was calibrated over a period of 47 years and used to forecast the next 47 years of sedimentation in the Dez Dam. This dam is among the highest dams in the world (203 m), irrigates more than 125,000 hectares of downstream lands, and plays a major role in flood control in the region. The input data, including geometry and hydraulic and sedimentary data, cover 1955 to 2003 on a daily basis. To predict future river discharge, the time series data were assumed to repeat after 47 years.
Finally, the obtained results were very satisfactory in the delta region, where the output from GSTARS4 was almost identical to the 2003 hydrographic profile. In the Dez Dam, however, because of the long (65 km) and large reservoir, vertical currents are dominant, making the calculations by the above-mentioned method inaccurate. To solve this problem, the empirical reduction method was used to calculate the sedimentation in the downstream area, which led to very good answers. Thus, by combining these two methods, a very suitable model for sedimentation in the Dez Dam over the study period can be obtained. The present study also demonstrated that the outputs of both software packages are essentially the same.
Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6
Procedia PDF Downloads 313
16609 CFD Modeling of Insect Flight at Low Reynolds Numbers
Authors: Wu Di, Yeo Khoon Seng, Lim Tee Tai
Abstract:
Typical insects employ a flapping-wing mode of flight. Numerical simulations of the free flight of a model fruit fly (Re = 143), including hovering, are presented in this paper. The unsteady aerodynamics around a flapping insect is studied by solving the three-dimensional Newtonian dynamics of the flyer coupled with the Navier-Stokes equations. A hybrid-grid scheme (generalized finite-difference method) that combines great geometric flexibility with accurate moving-boundary definition is employed to obtain the flow dynamics. The results show good agreement and consistency with the outcomes and analyses of other researchers, which validates the computational model and demonstrates the feasibility of this computational approach for analyzing fluid phenomena in insect flight. The present modelling approach also offers a promising route of investigation that could complement, as well as overcome some of the limitations of, physical experiments in the study of the free-flight aerodynamics of insects. The results are potentially useful for the design of biomimetic flapping-wing flyers.
Keywords: free hovering flight, flapping wings, fruit fly, insect aerodynamics, leading edge vortex (LEV), computational fluid dynamics (CFD), Navier-Stokes equations (N-S), fluid structure interaction (FSI), generalized finite-difference method (GFD)
Procedia PDF Downloads 410
16608 Flow Characteristic Analysis for Hatch Type Air Vent Head of Bulk Cargo Ship by Computational Fluid Dynamics
Authors: Hanik Park, Kyungsook Jeon, Suchul Shin, Youngchul Park
Abstract:
The air vent head prevents the inflow of seawater into the cargo holds when they are used as ballast tanks in heavy weather. In this study, the flow characteristics were analyzed and the grid size determined by applying Computational Fluid Dynamics, taking comparison with test results into consideration. The accuracy of the analysis was then verified by comparison with experimental results. Based on this analysis, an accurate turbulence model and grid size can be selected; the research results thus contribute to the reliability of the air vent head design for bulk carriers.
Keywords: bulk carrier, FEM, SST, vent
Procedia PDF Downloads 519
16607 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure
Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer
Abstract:
The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed on the basis of video data, aimed at uncovering so-called "multimodal gestalts": patterns of linguistic and embodied conduct that recur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using video) have so far depended on time- and resource-intensive manual transcription of each component from the video materials. Automating these tasks requires advanced programming skills, which are often not in the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated on one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data.
The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the current instance of VIAN-DH, we focus on gesture extraction (pointing gestures in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format, enable the import of the main existing formats of annotated video data and the export to other formats used in the field, and integrate different data-source formats so that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable parallel search across many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (which can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented, combining them with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken, and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.
Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition
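The kind of cross-layer query described above, correlating token-level annotations with chronologically aligned gesture annotations, can be sketched with plain interval overlap. The data, labels, and function names below are invented for illustration and do not reflect VIAN-DH's actual data model:

```python
# Minimal sketch of time-aligned multi-layer querying: find the spoken
# tokens whose time span overlaps a gesture of a given label. All
# intervals are (start, end) in seconds; the sample data is invented.


def overlaps(a, b):
    """Do two (start, end) intervals overlap?"""
    return a[0] < b[1] and b[0] < a[1]


def cooccurring(tokens, gestures, gesture_label="pointing"):
    """Return the words whose token span overlaps a matching gesture."""
    hits = []
    for tok_start, tok_end, word in tokens:
        for g_start, g_end, label in gestures:
            if label == gesture_label and overlaps(
                (tok_start, tok_end), (g_start, g_end)
            ):
                hits.append(word)
                break  # one overlapping gesture is enough
    return hits


# Token layer (from speech recognition) and gesture layer (from image
# recognition), both aligned to the video timeline:
tokens = [(0.0, 0.4, "look"), (0.4, 0.9, "at"), (0.9, 1.5, "that"), (1.5, 2.0, "bird")]
gestures = [(0.8, 1.6, "pointing"), (2.5, 3.0, "nod")]
print(cooccurring(tokens, gestures))  # ['at', 'that', 'bird']
```

The same overlap primitive generalizes to any pair of annotation layers (tokens vs. gaze, gestures vs. grammatical tags), which is what makes a unified time-aligned format so useful for querying.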
Procedia PDF Downloads 110
16606 Assessing the Influence of Using Traditional Methods of Construction on Cost and Quality of Building Construction
Authors: Musoke Ivan, Birungi Racheal
Abstract:
The construction trend is characterized by the increased use of modern methods, yet traditional methods are cheaper in terms of cost, in addition to the benefits they offer to the construction sector, such as providing more jobs than machine-intensive alternatives. The purpose of this research was to assess the influence of using traditional methods of construction (TMC) on the cost and quality of building structures, to determine the different ways TMC can be applied and integrated into the construction trend, and to propose ways this can succeed. The study adopted a quantitative approach targeting construction professionals such as architects, quantity surveyors, engineers, and construction managers. Questionnaires and analysis of the literature were used to obtain the research data and findings. Simple random sampling was used to select 40 construction professionals, to whom questionnaires were administered, and the data were then analyzed using Microsoft Excel. The findings indicate that TMCs in Uganda are cheaper in terms of cost, but quality is still low, which is attributed to a lack of skilled labour and efficient supervision of the work. The study identifies strategies that would improve TMC, including the employment of skilled manpower and effective supervision. It also identifies the need for stakeholders such as the government, clients, and professionals to appreciate TMCs and allow a level playing field for traditional methods and modern methods of construction (MMC).
Keywords: traditional methods of construction, integration, cost, quality
Procedia PDF Downloads 63
16605 Marine Propeller Cavitation Analysis Using BEM
Authors: Ehsan Yari
Abstract:
In this paper, a numerical study of sheet cavitation has been performed on the DTMB4119 and E779A marine propellers with the boundary element method. Propeller design incorporates various geometric and fluid parameters, so a program is needed that solves the flow while taking all of these varying parameters into account. The characteristic features of the present method are the capability of analyzing wetted and cavitating flow around propellers in steady, unsteady, uniform, and non-uniform conditions with acceptable precision, while decreasing computational time compared to numerical finite volume methods. Moreover, modifying the position of the detachment point and its corresponding potential value has been considered. Numerical results have been validated against experimental data, showing good agreement.
Keywords: cavitation, BEM, DTMB4119, E779A
Procedia PDF Downloads 70
16604 An Entropy Stable Three Dimensional Ideal MHD Solver with Guaranteed Positive Pressure
Authors: Andrew R. Winters, Gregor J. Gassner
Abstract:
A high-order numerical magnetohydrodynamics (MHD) solver built upon a non-linear entropy stable numerical flux function that supports eight traveling wave solutions is described. The method is designed to treat the divergence-free constraint on the magnetic field in a fashion similar to a hyperbolic divergence cleaning technique. The solver is especially well suited for flows involving strong discontinuities due to its strong stability, without the need to enforce artificial low density or energy limits. Furthermore, a new formulation of the numerical algorithm that guarantees positivity of the pressure during the simulation is described and presented. By construction, the solver conserves mass, momentum, and energy and is entropy stable. High spatial order is obtained through the use of a third-order limiting technique. High temporal order is achieved by utilizing the family of strong stability preserving (SSP) Runge-Kutta methods. The main attributes of the solver are presented, as well as details of an implementation of the new solver in the multi-physics, multi-scale simulation code FLASH. The accuracy, robustness, and computational efficiency are demonstrated with a variety of numerical tests. Comparisons are also made between the new solver and existing methods already present in the FLASH framework.
Keywords: entropy stability, finite volume scheme, magnetohydrodynamics, pressure positivity
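The SSP Runge-Kutta family mentioned above has a simple structure worth seeing: each stage is a convex combination of forward-Euler steps, which is what preserves the strong stability of the underlying spatial discretization. The sketch below implements the standard third-order member (Shu-Osher form) on a scalar test problem; the test equation du/dt = -u is illustrative and not from the paper:

```python
import math

# Third-order strong stability preserving Runge-Kutta (SSP-RK3) in
# Shu-Osher form: every stage is a convex combination of forward-Euler
# updates, so stability bounds proven for Euler carry over.


def ssp_rk3_step(u, dt, f):
    """Advance u by one SSP-RK3 step for du/dt = f(u)."""
    u1 = u + dt * f(u)                            # Euler stage 1
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))      # convex combination
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(u2))


def integrate(u0, t_end, n_steps, f):
    """Integrate du/dt = f(u) from 0 to t_end with n_steps SSP-RK3 steps."""
    dt = t_end / n_steps
    u = u0
    for _ in range(n_steps):
        u = ssp_rk3_step(u, dt, f)
    return u


# Test problem du/dt = -u, exact solution exp(-t); third-order accuracy
# makes the error at t = 1 tiny even with a modest step count.
u = integrate(1.0, 1.0, 100, lambda v: -v)
print(abs(u - math.exp(-1.0)))
```

In the solver described above, `f` would be the full spatial MHD residual rather than a scalar, but the stage structure is identical.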
Procedia PDF Downloads 343
16603 Solving Mean Field Problems: A Survey of Numerical Methods and Applications
Authors: Amal Machtalay
Abstract:
In this survey, we review the rapidly growing literature on numerical methods for solving different forms of mean field problems, namely mean field games (MFG), mean field control (MFC), potential MFGs, and master equations, together with their recent applications. We distinguish two families of numerical methods: iterative methods based on mesh generation, and so-called mesh-free methods, normally related to neural networks and learning frameworks.
Keywords: mean-field games, numerical schemes, partial differential equations, complex systems, machine learning
Procedia PDF Downloads 113