Search results for: computational calculations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2806

1966 The Rational Design of Original Anticancer Agents Using Computational Approach

Authors: Majid Farsadrooh, Mehran Feizi-Dehnayebi

Abstract:

Serum albumin is the most abundant protein in the circulatory system of a wide variety of organisms. As a significant macromolecule, it contributes to osmotic blood pressure and plays a major role in drug disposition and efficacy. Molecular docking simulation can improve in silico drug design and discovery procedures by proposing a lead compound and developing it from the discovery stage to the clinic. In this study, molecular docking simulation was applied to select a lead molecule through an investigation of the interaction of two anticancer drugs (Alitretinoin and Abemaciclib) with Human Serum Albumin (HSA). A series of new compounds (a-e) were then suggested by modifying the lead molecule. Density functional theory (DFT) calculations, including MEP maps and HOMO-LUMO analysis, were performed on the newly proposed compounds to predict their reactive zones, stability, and chemical reactivity; the DFT calculations indicated that the new compounds were stable. The estimated binding free energy (ΔG) values for compounds a-e were -5.78, -5.81, -5.95, -5.98, and -6.11 kcal/mol, respectively. Finally, the pharmaceutical properties and toxicity of the new compounds were estimated with the OSIRIS DataWarrior software. The results indicated no risk of tumorigenic, mutagenic, irritant, or reproductive effects for compounds d and e. As a result, compounds d and e could be selected for further study as potential therapeutic candidates. Moreover, combining molecular docking simulation with the prediction of pharmaceutical properties helps to discover new potential drug compounds.
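The selection logic of this abstract, screening out compounds with predicted toxicity risks and then taking the strongest binder, can be sketched as below. The ΔG values are the ones reported; the toxicity flags for compounds a-c are hypothetical placeholders, since the abstract reports screening results only for d and e.

```python
# Binding free energies (kcal/mol) from the abstract; the True/False
# toxicity-risk flags for compounds a-c are assumed for illustration only.
compounds = {
    "a": (-5.78, True),
    "b": (-5.81, True),
    "c": (-5.95, True),
    "d": (-5.98, False),
    "e": (-6.11, False),
}

# Keep compounds with no predicted toxicity risk, then rank by binding
# free energy (more negative = stronger predicted binding).
safe = {name: dg for name, (dg, risk) in compounds.items() if not risk}
lead = min(safe, key=safe.get)
```

With these inputs the screen keeps d and e and selects e (ΔG = -6.11 kcal/mol), mirroring the abstract's conclusion that d and e are the candidates for further study.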

Keywords: drug design, anticancer, computational studies, DFT analysis

Procedia PDF Downloads 66
1965 Characterization of Femur Development in Mice: A Computational Approach

Authors: Moncayo Donoso Miguelangel, Guevara Morales Johana, Kalenia Flores Kalenia, Barrera Avellaneda Luis Alejandro, Garzon Alvarado Diego Alexander

Abstract:

In mammals, long bones form by ossification of a cartilage mold during early embryonic development, producing structures called secondary ossification centers (SOCs), a primary ossification center (POC), and growth plates. The growth plate is the structure responsible for long bone growth. During femur growth, the morphology of the growth plate and the SOCs can vary across developmental stages, yet so far there are no detailed morphological studies of the development process from the embryonic period to adulthood. In this work, we carried out a morphological characterization of femur development from the embryonic period to adulthood in mice, using embryos at 15, 17, and 19 days and mice at 1, 7, 14, 35, 46, and 52 days of age. Samples were analyzed computationally, using 3D images obtained by micro-CT imaging. The results show that the femur, its growth plates, and its SOCs undergo morphological changes during the different stages of development, including changes in shape, position, and thickness. These variations may be related to the mechanical loads imposed by the muscles developing around the femur and to the high activity of the early stages, needed to support the high growth rates of the first weeks of development. This study improves our knowledge of the ossification patterns at every stage of bone development and characterizes the morphological changes of structures important for bone growth, such as the SOCs and growth plates.

Keywords: development, femur, growth plate, mice

Procedia PDF Downloads 334
1964 Utilizing Computational Fluid Dynamics in the Analysis of Natural Ventilation in Buildings

Authors: A. W. J. Wong, I. H. Ibrahim

Abstract:

Increasing urbanisation has driven building designers to incorporate natural ventilation into the design of sustainable buildings. This project uses Computational Fluid Dynamics (CFD) to investigate the natural ventilation of an academic building, SIT@SP, using an assessment criterion based on daily mean temperature and mean velocity. The areas of interest are the pedestrian levels of the first and fourth storeys of the building. A reference case recommended by the Architectural Institute of Japan was used to validate the simulation model, which was then used for coupled simulations of SIT@SP and its neighbouring geometries under two wind speeds. Both steady and transient simulations were run to identify differences in their results; the two agree, with the transient simulation additionally capturing peak velocities during flow development. Under the lower wind speed, the first level was sufficiently ventilated while the fourth level was not. Under the higher wind speed, the first level experienced excessive wind velocities while the fourth level was adequately ventilated. The flow velocity on the fourth level was consistently lower than on the first level, which is attributed to either simulation model error or poor building design. SIT@SP is concluded to have a sufficiently ventilated first level and an insufficiently ventilated fourth level. Future work includes modifying the urban geometry, improving the simulation model, evaluating other assessment metrics, and extending the area of interest to the entire building.

Keywords: buildings, CFD simulations, natural ventilation, urban airflow

Procedia PDF Downloads 212
1963 Energy Consumption Statistic of Gas-Solid Fluidized Beds through Computational Fluid Dynamics-Discrete Element Method Simulations

Authors: Lei Bi, Yunpeng Jiao, Chunjiang Liu, Jianhua Chen, Wei Ge

Abstract:

Two energy paths are proposed from a thermodynamic viewpoint. Energy consumption means the total power input to the specific system, and it can be decomposed into energy retention and energy dissipation. Energy retention is the variation of the accumulated mechanical energy in the system, and energy dissipation is the energy converted to heat by irreversible processes. Based on the Computational Fluid Dynamics-Discrete Element Method (CFD-DEM) framework, the different energy terms are quantified from the specific flow elements of fluid cells and particles as well as their interactions with the wall. Direct energy consumption statistics are carried out for both cold and hot flow in gas-solid fluidization systems. To clarify the statistic method, it is necessary to identify which system is studied: the particle-fluid system or the particle sub-system. For the cold flow, the total energy consumption of the particle sub-system can predict the onset of bubbling and turbulent fluidization, while the trends of local energy consumption can reflect the dynamic evolution of mesoscale structures. For the hot flow, different heat transfer mechanisms are analyzed, and the original solver is modified to reproduce the experimental results. The influence of the heat transfer mechanisms and the heat source on energy consumption is also investigated. The proposed statistic method has proven to be energy-conservative and easy to conduct, and it should also be applicable to other multiphase flow systems.
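The decomposition described above, consumption = retention + dissipation, amounts to a simple accounting identity that can be sketched as follows (the numbers are arbitrary illustrations, not values from the study):

```python
def energy_balance(power_input, delta_mech_energy, dt):
    """Split energy consumption into retention and dissipation.

    consumption: total power input to the system over the interval dt
    retention:   change in accumulated mechanical energy over dt
    dissipation: the remainder, converted to heat by irreversible processes
    """
    consumption = power_input * dt
    retention = delta_mech_energy
    dissipation = consumption - retention
    return consumption, retention, dissipation

# Arbitrary illustrative numbers: 50 W input for 2 s, during which the
# system gains 12 J of mechanical energy; the other 88 J is dissipated.
consumption, retention, dissipation = energy_balance(50.0, 12.0, 2.0)
```

A statistic is energy-conservative in this sense when the three terms balance exactly, which is the property the authors verify for their CFD-DEM tallies.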

Keywords: energy consumption statistic, gas-solid fluidization, CFD-DEM, regime transition, heat transfer mechanism

Procedia PDF Downloads 60
1962 Characterisation of Wind-Driven Ventilation in Complex Terrain Conditions

Authors: Daniel Micallef, Damien Bounaudet, Robert N. Farrugia, Simon P. Borg, Vincent Buhagiar, Tonio Sant

Abstract:

The physical effects of upstream flow obstructions, such as vegetation, on the cross-ventilation of a building are important for issues such as indoor thermal comfort, and modelling such effects in Computational Fluid Dynamics (CFD) simulations can be challenging. The aim of this work is to establish the cross-ventilation jet behaviour in such complex terrain conditions and to provide guidelines for implementing CFD simulations that model complex terrain features such as vegetation in an efficient manner. The methodology consists of on-site measurements on a test cell coupled with numerical simulations. It was found that the cross-ventilation flow is highly turbulent despite the very low velocities encountered inside the test cell. While no direct measurement of the jet direction was made, the measurements indicate that the flow tends to be reversed, from the leeward to the windward side. Modelling such a phenomenon proves challenging and is strongly influenced by how the vegetation is modelled: a solid vegetation model tends to predict the direction and magnitude of the flow better than a porous vegetation approach. A simplified terrain model was also shown to compare well with observations. The findings have important implications for the study of cross-ventilation in complex terrain, since the flow direction is no longer trivial, as it is in the traditional isolated-building case.

Keywords: complex terrain, cross-ventilation, wind-driven ventilation, wind resource, computational fluid dynamics, CFD

Procedia PDF Downloads 388
1961 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)

Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton

Abstract:

Cold-start is a notoriously difficult problem which can occur in recommendation systems, arising when there is insufficient information to draw inferences about users or items. To address this challenge, a contextual bandit algorithm, the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST), is proposed, designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment-projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), whose computational cost grows more slowly with data size but which requires more data to reach an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, FAB-COST is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented, and it is systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.

Keywords: cold-start learning, expectation propagation, multi-armed bandits, Thompson Sampling, variational inference

Procedia PDF Downloads 101
1960 Comparison of Different Methods to Produce Fuzzy Tolerance Relations for Rainfall Data Classification in the Region of Central Greece

Authors: N. Samarinas, C. Evangelides, C. Vrekos

Abstract:

The aim of this paper is the comparison of three different methods of producing fuzzy tolerance relations for rainfall data classification: the correlation coefficient, cosine amplitude, and max-min methods. The data were obtained from seven rainfall stations in the region of central Greece and refer to 20-year time series of average monthly rainfall height. Each of the three methods was used to express these data as a fuzzy relation, and for all three methods the resulting fuzzy tolerance relation was then transformed into an equivalence relation by max-min composition. From the equivalence relation, the rainfall stations were categorized and classified according to the degree of confidence. The classification shows the similarities among the rainfall stations: stations with high similarity can be used interchangeably in water resource management scenarios, or to augment data from one station with data from another. Because of the complexity of the calculations, it is important to find out which of the methods is computationally simpler and needs fewer compositions to give reliable results.
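As a concrete illustration of one of the three methods, the sketch below builds a tolerance relation with the cosine amplitude method, closes it into an equivalence relation by repeated max-min composition, and classifies stations with a λ-cut. The four "station" feature vectors are hypothetical stand-ins, not the Greek rainfall data.

```python
import numpy as np

def cosine_amplitude(X):
    """Fuzzy tolerance relation via the cosine amplitude method:
    r_ij = (x_i . x_j)^2 / (|x_i|^2 |x_j|^2), so r_ii = 1."""
    n = X.shape[0]
    R = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            num = np.dot(X[i], X[j]) ** 2
            den = np.dot(X[i], X[i]) * np.dot(X[j], X[j])
            R[i, j] = num / den
    return R

def max_min_compose(R, S):
    """Max-min composition: (R o S)_ij = max_k min(r_ik, s_kj)."""
    n = R.shape[0]
    T = np.empty_like(R)
    for i in range(n):
        for j in range(n):
            T[i, j] = np.max(np.minimum(R[i, :], S[:, j]))
    return T

def to_equivalence(R, max_iter=20):
    """Compose R with itself until it is transitive (an equivalence relation)."""
    for _ in range(max_iter):
        R2 = max_min_compose(R, R)
        if np.allclose(R2, R):
            return R
        R = R2
    return R

def lambda_cut(R, lam):
    """Crisp classification: stations i, j share a class when R_ij >= lam."""
    return (R >= lam).astype(int)

# Hypothetical monthly-rainfall feature vectors for four stations:
# stations 0 and 1 are similar, as are stations 2 and 3.
X = np.array([[80., 60., 30.], [82., 58., 33.], [40., 90., 70.], [41., 88., 72.]])
E = to_equivalence(cosine_amplitude(X))
groups = lambda_cut(E, 0.99)
```

At a confidence level of λ = 0.99 the cut groups stations {0, 1} and {2, 3} together, which is the kind of similarity grouping the paper uses to decide which stations' records can substitute for one another.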

Keywords: classification, fuzzy logic, tolerance relations, rainfall data

Procedia PDF Downloads 307
1959 Graph-Based Semantical Extractive Text Analysis

Authors: Mina Samizadeh

Abstract:

In the past few decades, there has been an explosion in the amount of available data produced from various sources on different topics, and this enormous volume of data necessitates effective computational tools to explore it. This has led to intense and growing interest in the research community in developing computational methods for processing text data. One line of study focuses on condensing the text so that a higher level of understanding can be reached in a shorter time; the two important tasks here are keyword extraction and text summarization. In keyword extraction, we are interested in finding the key words of a text, which acquaints us with its general topic. In text summarization, we are interested in producing a short text that includes the important information in the document. The TextRank algorithm, an unsupervised method that extends PageRank (the base algorithm of the Google search engine for ranking pages), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction: it automatically extracts the important parts of a text (keywords or sentences) and returns them as the result. However, TextRank neglects the semantic similarity between the different parts. In this work, we improve the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework, which can be used on its own or as part of generating the summary to overcome coverage problems.
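A minimal sketch of the graph-based ranking at the heart of this approach: power iteration over a row-normalized similarity matrix, which reduces to classic TextRank when the entries are co-occurrence weights and to the semantic variant when they are semantic similarities. The matrix below is a hypothetical four-sentence example, not data from the paper.

```python
import numpy as np

def textrank_scores(sim, d=0.85, iters=50):
    """TextRank-style scoring by power iteration:
    s = (1 - d)/n + d * W^T s, where W is the row-normalized
    similarity matrix between text units (sentences or words)."""
    n = sim.shape[0]
    W = sim / sim.sum(axis=1, keepdims=True)   # row-stochastic weights
    s = np.full(n, 1.0 / n)
    for _ in range(iters):
        s = (1 - d) / n + d * (W.T @ s)
    return s

# Hypothetical pairwise semantic similarities between four sentences
# (diagonal zeroed so a unit does not vote for itself). Sentence 3 is
# only weakly related to the rest.
sim = np.array([
    [0.0, 0.8, 0.6, 0.1],
    [0.8, 0.0, 0.7, 0.1],
    [0.6, 0.7, 0.0, 0.2],
    [0.1, 0.1, 0.2, 0.0],
])
scores = textrank_scores(sim)
summary_order = np.argsort(scores)[::-1]  # most central units first
```

The most central sentences are taken as the summary (or the top-scoring words as keywords); with semantic weights, a sentence gains score from sentences that mean similar things, not only from those sharing surface tokens.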

Keywords: keyword extraction, n-gram extraction, text summarization, topic clustering, semantic analysis

Procedia PDF Downloads 60
1958 Optimizing Cell Culture Performance in an Ambr15 Microbioreactor Using Dynamic Flux Balance and Computational Fluid Dynamic Modelling

Authors: William Kelly, Sorelle Veigne, Xianhua Li, Zuyi Huang, Shyamsundar Subramanian, Eugene Schaefer

Abstract:

The ambr15™ bioreactor is a single-use microbioreactor for cell line development and process optimization. The ambr system offers fully automatic liquid handling with the possibility of fed-batch operation and automatic control of pH and oxygen delivery. With operating conditions for large-scale biopharmaceutical production properly scaled down, microbioreactors such as the ambr15™ can potentially be used to predict the effect of process changes such as modified media or different cell lines. In this study, gassing rates and dilution rates were varied for a semi-continuous cell culture system in the ambr15™ bioreactor, and the corresponding changes in metabolite production and consumption, cell growth rate, and therapeutic protein production were measured. Conditions were identified in the ambr15™ bioreactor that produced metabolic shifts and specific metabolic and protein production rates also seen in the corresponding larger (5 liter) scale perfusion process. A Dynamic Flux Balance (DFB) model was employed to understand and predict the metabolic changes observed. The DFB model predicted trends observed experimentally, including lower specific glucose consumption when CO₂ was maintained at higher levels (i.e., 100 mm Hg) in the broth. A Computational Fluid Dynamic (CFD) model of the ambr15™ was also developed to understand the transfer of O₂ and CO₂ to the liquid. This CFD model predicted gas-liquid flow in the bioreactor using the ANSYS software; the two-phase flow equations were solved via an Eulerian method, with population balance equations tracking the size of the gas bubbles resulting from breakage and coalescence. Reasonable results were obtained in that the carbon dioxide mass transfer coefficient (kLa) and the air hold-up increased with higher gas flow rate. Volume-averaged kLa values at 500 RPM increased as the gas flow rate was doubled and matched experimentally determined values.
These results form a solid basis for optimizing the ambr15™, using both CFD and FBA modelling approaches together, for use in microscale simulations of larger scale cell culture processes.

Keywords: cell culture, computational fluid dynamics, dynamic flux balance analysis, microbioreactor

Procedia PDF Downloads 270
1957 The Impact of Data Science on Geography: A Review

Authors: Roberto Machado

Abstract:

We conducted a systematic review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology, analyzing 2,996 studies and synthesizing 41 of them to explore the evolution of data science and its integration into geography. By employing optimization algorithms, we accelerated the review process, significantly enhancing the efficiency and precision of literature selection. Our findings indicate that data science has developed over five decades, facing challenges such as the diversified integration of data and the need for advanced statistical and computational skills. In geography, the integration of data science underscores the importance of interdisciplinary collaboration and methodological innovation. Techniques like large-scale spatial data analysis and predictive algorithms show promise in natural disaster management and transportation route optimization, enabling faster and more effective responses. These advancements highlight the transformative potential of data science in geography, providing tools and methodologies to address complex spatial problems. The relevance of this study lies in the use of optimization algorithms in systematic reviews and the demonstrated need for deeper integration of data science into geography. Key contributions include identifying specific challenges in combining diverse spatial data and the necessity for advanced computational skills. Examples of connections between these two fields encompass significant improvements in natural disaster management and transportation efficiency, promoting more effective and sustainable environmental solutions with a positive societal impact.

Keywords: data science, geography, systematic review, optimization algorithms, supervised learning

Procedia PDF Downloads 9
1956 Determination of Slope of Hilly Terrain by Using Proposed Method of Resolution of Forces

Authors: Reshma Raskar-Phule, Makarand Landge, Saurabh Singh, Vijay Singh, Jash Saparia, Shivam Tripathi

Abstract:

For any construction project, slope calculations are necessary in order to evaluate constructability on the site: the slope of parking lots, sidewalks, and ramps, the slope of sanitary sewer lines, and the slope of roads and highways. When slopes and grades are to be determined, designers are concerned with establishing proper slopes and grades for their projects, to assess cut-and-fill volume calculations and determine the inverts of pipes. There are several established instruments commonly used to determine slopes, such as the dumpy level, Abney level or hand level, inclinometer, tacheometer, and the Henry method, and surveyors are very familiar with their use. However, these instruments have drawbacks that cannot be neglected in major surveying work. They require expert surveyors and skilled staff; accessibility, visibility, and accommodation in remote hilly terrain are difficult for these instruments and survey teams; and the determination of gentle slopes for road and sewer drainage construction in congested urban places is not easy. This paper aims to develop a method that requires minimum field work and minimum instruments, uses no high-end technology or software, and has low cost. Requiring only basic and handy surveying accessories, a plane table with a fixed weighing machine, standard weights, an alidade, a tripod, and ranging rods, it should be able to determine the terrain slope in congested areas as well as in remote hilly terrain. Being simple and easy to understand and perform, local people of the rural area can also be easily trained in the proposed method. The idea of the proposed method is based on the principle of the resolution of weight components: when an object of standard weight 'W' is placed on an inclined surface with a weighing machine below it, the weighing machine measures the cosine component of its weight, so the slope can be determined from the relation between the true, or actual, weight and the apparent weight. A proper procedure is followed, which includes site location, centering and sighting work, fixing the whole set at the identified station, and finally taking the readings. A set of experiments for slope determination, on mild and moderate slopes, was carried out by the proposed method and by a theodolite, in a controlled environment on the college campus and in an uncontrolled environment on an actual site, and the slopes determined by the proposed method were compared with those determined by the established instrument. For example, it was observed that for the same distances on a mild slope, the difference between the slope obtained by the proposed method and by the established method ranged from 4′ for a distance of 8 m to 2°15′20″ for a distance of 16 m in an uncontrolled environment. Thus, for mild slopes, the proposed method is suitable for distances of 8 m to 10 m. The correlation between the proposed method and the established method is good, 0.91 to 0.99, for the various combinations of mild and moderate slopes with controlled and uncontrolled environments.
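The weight-resolution principle above reduces to one line of trigonometry: the machine reads the cosine component W·cosθ, so θ = arccos(W_measured / W_true). A minimal sketch, with a hypothetical scale reading:

```python
import math

def slope_from_weights(true_weight, measured_weight):
    """Recover the surface inclination (degrees) from the cosine weight
    component: a scale on an incline reads W * cos(theta), so
    theta = arccos(W_measured / W_true)."""
    ratio = measured_weight / true_weight
    if not 0.0 < ratio <= 1.0:
        raise ValueError("measured weight must be positive and <= true weight")
    return math.degrees(math.acos(ratio))

# Hypothetical reading: a 10.000 kg standard weight shows 9.962 kg on
# the inclined plane table, giving a slope of about 5 degrees.
theta = slope_from_weights(10.000, 9.962)
```

The same relation also shows why the method loses resolution on gentle slopes: cosθ changes very slowly near θ = 0, so small scale errors translate into large angle errors, consistent with the distance limits the authors report for mild slopes.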

Keywords: surveying, plane table, weight component, slope determination, hilly terrain, construction

Procedia PDF Downloads 84
1955 Parallel Tracking and Mapping of a Fleet of Quad-Rotor

Authors: M. Bazin, I. Bouguir, D. Combe, V. Germain, G. Lassade

Abstract:

The problem of managing a fleet of quad-rotor drones in a completely unknown environment is analyzed in the present paper. This work follows in the footsteps of other studies on how the movements of a swarm of elements that must stay gathered throughout their activities should be managed. We aim to demonstrate the limitations of a system in which absolutely all the calculations and physical movements of the elements are handled by a single external element. The control strategy is an adaptive approach which takes the explored environment into account, made possible by a set of command rules that can guide the drones through various missions with defined goals. The result of the mission is independent of the nature of the environment and of the number of drones in the fleet. The strategy is based on the simultaneous use of different data: obstacle positions, the real-time positions of all drones, and the relative positions between the different drones. The present work was built with the Robot Operating System and uses several open-source projects for drone localization and operation.

Keywords: cooperative guidance, distributed control, unmanned aerial vehicle, obstacle avoidance

Procedia PDF Downloads 293
1954 Modelling and Numerical Analysis of Thermal Non-Destructive Testing on Complex Structure

Authors: Y. L. Hor, H. S. Chu, V. P. Bui

Abstract:

Composite materials are widely used to replace conventional materials, especially in the aerospace industry, to reduce the weight of devices. They are formed by combining reinforcing materials via adhesive bonding to produce a bulk material with alternated macroscopic properties. In bulk composites, degradation may occur at the microscopic scale, in each individual reinforced fiber layer or especially in the matrix layer, in the form of delamination, inclusions, disbonds, voids, cracks, and porosity. In this paper, we focus on the detection of defects in the matrix layer, where the composite plies are in contact but coupled through a weak bond. Adhesive defects are tested through various non-destructive methods; among them, pulsed phase thermography (PPT) has shown advantages including improved sensitivity, large-area coverage, and high-speed testing. The aim of this work is to develop an efficient numerical model to study the application of PPT to the non-destructive inspection of weak bonding in composite materials. The resulting thermal evolution field comprises internal reflections between the interfaces of the defects and the specimen, and the key features of the defects present in the material can be obtained by investigating the thermal evolution of the field distribution. Computational simulation of such inspections has allowed the technique to be improved and applied to further cases, such as materials with high thermal conductivity and more complex structures.

Keywords: pulsed phase thermography, weak bond, composite, CFRP, computational modelling, optimization

Procedia PDF Downloads 159
1953 DNA PLA: A Nano-Biotechnological Programmable Device

Authors: Hafiz Md. HasanBabu, Khandaker Mohammad Mohi Uddin, Md. IstiakJaman Ami, Rahat Hossain Faisal

Abstract:

Computing in biomolecular programming is performed through different types of reactions, with proteins and nucleic acids used to store the information that the programs generate. DNA (deoxyribonucleic acid) can be used to build a molecular computing system, and an operating system, because of its predictable molecular behavior. A DNA device has clear advantages over conventional devices when applied to problems that can be divided into separate, non-sequential tasks: DNA strands can hold a great deal of data in memory and conduct multiple operations at once, thus solving decomposable problems much faster. A Programmable Logic Array, abbreviated PLA, is a programmable device with programmable AND operations and OR operations. In this paper, a DNA PLA is designed from different molecular operations on DNA molecules, using the proposed algorithms. The molecular PLA takes advantage of DNA's physical properties to store information and perform calculations, including extremely dense information storage, enormous parallelism, and extraordinary energy efficiency.
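To ground the terminology: a conventional PLA computes product terms in a programmable AND plane and ORs selected terms together in a programmable OR plane, which is the logic structure the proposed DNA operations emulate. A behavioural sketch of that structure (conventional, not molecular):

```python
def pla_evaluate(inputs, and_plane, or_plane):
    """Evaluate a programmable logic array.

    and_plane: one entry per product term; for each input it holds
               1 (use the input), 0 (use its complement), or None (don't care).
    or_plane:  one entry per output, listing which product terms it ORs.
    """
    terms = []
    for term in and_plane:
        value = True
        for bit, spec in zip(inputs, term):
            if spec is None:
                continue
            value = value and (bit == spec)  # AND plane: all literals must hold
        terms.append(value)
    return [any(terms[k] for k in out) for out in or_plane]  # OR plane

# XOR and AND of two inputs (a, b) sharing one PLA.
# Product terms: a'b, ab', ab.
AND = [(0, 1), (1, 0), (1, 1)]
OR = [[0, 1],   # output 0: XOR = a'b + ab'
      [2]]      # output 1: AND = ab
outputs = pla_evaluate((0, 1), AND, OR)  # [True, False]: XOR fires, AND does not
```

Because product terms are shared between outputs, one programmed AND plane can feed many OR-plane outputs, which is also why the massive parallelism of DNA strands maps naturally onto this architecture.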

Keywords: biological systems, DNA computing, parallel computing, programmable logic array, PLA, DNA

Procedia PDF Downloads 116
1952 Computational Fluid Dynamics Analysis of Sit-Ski Aerodynamics in Crosswind Conditions

Authors: Lev Chernyshev, Ekaterina Lieshout, Natalia Kabaliuk

Abstract:

Sit-skis enable individuals with limited lower-limb or core movement to ski unassisted with confidence. The rise in popularity of the Winter Paralympics has seen an influx of engineering innovation, especially for the Downhill and Super-Giant Slalom events, where athletes reach speeds as high as 160 km/h, and the growth of the sport has inspired recent research into sit-ski aerodynamics. Crosswinds are expected in mountain climates and can therefore greatly affect a skier's manoeuvrability and aerodynamics. This research investigates the impact of crosswinds on the drag force of a Paralympic sit-ski using Computational Fluid Dynamics (CFD). A Paralympic sit-ski with a model of a skier, a leg cover, a bucket seat, and a simplified suspension system was analyzed in ANSYS Fluent, using the hybrid initialisation tool, the SST k–ω turbulence model, and two tetrahedral mesh bodies of influence. Crosswinds of 10, 30, and 50 km/h acting perpendicular to the sit-ski's direction of travel were simulated for straight-line skiing speeds of 60, 80, and 100 km/h. Following initialisation, 150 iterations of both first- and second-order steady-state solvers were used before switching to a transient solver with a total simulated time of 1.5 s and a time step of 0.02 s, to allow the solution to converge. The CFD results were validated against wind tunnel data. The results suggested that for all crosswind and sit-ski speeds, on average, 64% of the total drag on the ski was due to the athlete's torso. The suspension made the second largest contribution to the overall sit-ski drag, averaging 27%, followed by the leg cover at 10%, while the seat contributed a negligible 0.5% of the total drag force, averaging 1.2 N across the conditions studied. The crosswind increased the total drag force at all skiing speeds studied, with the drag on the athlete's torso and suspension being the most sensitive to changes in crosswind magnitude. The effect of the crosswind on the ski drag diminished as the simulated skiing speed increased: at 60 km/h, the drag force on the torso increased by 154% as the crosswind rose from 10 km/h to 50 km/h, whereas at 100 km/h the corresponding increase was roughly half that (75%). Analysis of the flow and pressure fields for a sit-ski in crosswind conditions indicated that the flow separation localisation and wake size correlate with the magnitude and direction of the crosswind relative to the straight-line skiing direction. The findings can inform aerodynamic improvements in sit-ski design and increase skiers' medalling chances.
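The weakening crosswind effect at higher skiing speeds is consistent with simple relative-wind geometry: the apparent-wind yaw angle shrinks as forward speed grows. A velocity-triangle sketch only, not the CFD result:

```python
import math

def apparent_wind(ski_speed_kmh, crosswind_kmh):
    """Velocity triangle for a crosswind perpendicular to the direction of
    travel: returns apparent wind speed (km/h) and yaw angle (degrees)."""
    speed = math.hypot(ski_speed_kmh, crosswind_kmh)
    yaw = math.degrees(math.atan2(crosswind_kmh, ski_speed_kmh))
    return speed, yaw

# The same 50 km/h crosswind yaws the apparent wind far more at a
# 60 km/h skiing speed than at 100 km/h, consistent with the reduced
# crosswind sensitivity reported at higher speeds.
speed_60, yaw_60 = apparent_wind(60.0, 50.0)
speed_100, yaw_100 = apparent_wind(100.0, 50.0)
```

At 60 km/h the apparent wind arrives about 40° off the travel axis, versus roughly 27° at 100 km/h, so the cross-flow component the body and suspension see is relatively smaller at race speeds.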

Keywords: sit-ski, aerodynamics, CFD, crosswind effects

Procedia PDF Downloads 61
1951 Strengthening Bridge Piers by Carbon Fiber Reinforced Polymer (CFRP): A Case Study for Thuan Phuoc Suspension Bridge in Vietnam

Authors: Lan Nguyen, Lam Cao Van

Abstract:

Thuan Phuoc is a suspension bridge built in Danang City, Vietnam. Because the bridge is located near an estuary, its structure has degraded rapidly, and many cracks have appeared on most of the concrete piers of the curved approach spans. This paper presents the results of a diagnostic analysis of the causes of the cracks, as well as calculations for strengthening the piers with carbon fiber reinforced polymer (CFRP). It also describes how the nonlinear concrete analysis software ATENA is used for the diagnostic crack analysis and the strengthening design. The results show that the crack distribution map of the Thuan Phuoc bridge's concrete piers, as analyzed with ATENA, is consistent with the real conditions, and that CFRP is the best solution for strengthening the piers soundly and quickly.

Keywords: ATENA, bridge pier strengthening, carbon fiber reinforced polymer (CFRP), crack prediction analysis

Procedia PDF Downloads 234
1950 Computational Analysis of Cavity Effect over Aircraft Wing

Authors: P. Booma Devi, Dilip A. Shah

Abstract:

This paper explores the potential of studying the aerodynamic characteristics of inward cavities, called dimples, as an alternative to classical vortex generators. Increasing the stalling angle is a great challenge in wing design, but our examination focuses primarily on increasing lift, and lift enhancement is achieved here mainly by introducing a dimple, or cavity, into a wing. In general, aircraft performance can be enhanced by increasing aerodynamic efficiency, that is, the lift-to-drag ratio of the wing; efficiency can be improved by increasing the maximum lift coefficient or by reducing the drag coefficient. When landing an aircraft, a high angle of attack may lead to stalling, so an increase in the stalling angle is warranted, and improved stalling characteristics are the best way to ease landing complexity. A computational analysis was carried out for a wing segment based on the NACA 0012 airfoil. The simulation was run at a free-stream velocity of 30 m/s over a plain airfoil and over different types of cavities, with triangular and square shapes used as cavities. The wing was modeled in CATIA V5R20 and analyzed using ANSYS CFX. The simulations revealed that a cavity placed on the wing segment increases the maximum lift coefficient compared with the normal wing configuration, and that the presence of cavities delays flow separation downstream of the wing up to a particular angle of attack.

Keywords: lift, drag reduction, square dimple, triangle dimple, enhancement of stall angle

Procedia PDF Downloads 337
1949 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport

Authors: Aamir Shahzad, Mao-Gang He

Abstract:

Dusty plasmas have attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Numerous calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity remains a demanding problem for thermophysical researchers, and very few results are available for this significant property. The lack of thermal conductivity data for dense and complex liquids at parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux from one medium to another medium or surface. The exact numerical investigation of transport properties of complex liquids is a fundamental research task in thermophysics, as transport data are closely tied to the construction and validation of equations of state. Reliable transport data are also important for the optimized design of processes and apparatus in various engineering and science fields (e.g., thermoelectric devices); in particular, precise data for the parameters of heat, mass, and momentum transport are required. One promising computational technique, the homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is reviewed here with special emphasis on its application to transport problems of complex liquids.
This work is motivated by the goal of recasting, for the first time, the heat conduction problem into a polynomial velocity- and temperature-profile algorithm for investigating transport properties and their nonlinear behavior in NICDPLs. The aim is to implement a nonequilibrium molecular dynamics (NEMD) algorithm (Poiseuille flow) and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated with a Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). Between 1.5×10⁵/ωₚ and 3.0×10⁵/ωₚ simulation time steps are used for the computation of the λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters and that the minimum, λmin, shifts toward higher Γ as κ increases, as expected. The new investigation gives more reliable simulated data for the plasma conductivity than earlier simulations, generally differing from the known plasma λ₀ by 2%-20%, depending on Γ and κ. The results obtained at the normalized force field are in satisfactory agreement with various earlier simulation results. The new technique provides more accurate results, with fast convergence and small size effects, over a wide range of plasma states.
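Yukawa liquids are defined by the screened Coulomb pair interaction, characterized by the coupling parameter Γ and the screening parameter κ on which the reported λ results depend. A minimal sketch in reduced units follows; the unit convention (distance in Wigner-Seitz radii, energy in units of k_B T) is an assumption for illustration.

```python
import math

def yukawa_potential(r, gamma, kappa):
    """Reduced Yukawa pair potential u(r) = Gamma * exp(-kappa * r) / r,
    with r in units of the Wigner-Seitz radius, Gamma the Coulomb coupling
    parameter, and kappa the screening parameter (illustrative convention)."""
    return gamma * math.exp(-kappa * r) / r

# Stronger screening (larger kappa) weakens the interaction at fixed r, Gamma,
# which is why transport coefficients such as lambda depend on both parameters.
u_weak_screen = yukawa_potential(1.0, gamma=100.0, kappa=1.0)
u_strong_screen = yukawa_potential(1.0, gamma=100.0, kappa=3.0)
```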

Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow

Procedia PDF Downloads 266
1948 CFD Simulation of Spacer Effect on Turbulent Mixing Phenomena in Sub Channels of Boiling Nuclear Assemblies

Authors: Shashi Kant Verma, S. L. Sinha, D. K. Chandraker

Abstract:

Numerical simulations of selected subchannel tracer (potassium nitrate) experiments have been performed to study the capabilities of state-of-the-art Computational Fluid Dynamics (CFD) codes. The CFD methodology can be useful for investigating the spacer effect on turbulent mixing and predicting turbulent flow behavior such as dimensionless mixing scalar distributions, radial velocities, and vortices in the nuclear fuel assembly. A Gibson and Launder (GL) Reynolds stress model (RSM) has been selected as the primary turbulence model, as it has previously been found reasonably accurate for predicting flows inside rod bundles. As a comparison, the case is also simulated with the standard k-ε turbulence model that is widely used in industry; despite being an isotropic turbulence model, it has also been applied to flow in rod bundles and reproduces lateral velocities reasonably well after thorough mixing of the coolant. Both models have been solved numerically to obtain the fully developed isothermal turbulent flow in a 30° segment of a 54-rod bundle. The numerical simulations study the natural mixing of a tracer (passive scalar) to characterize the growth of turbulent diffusion in the injected sub-channel and, subsequently, the cross-mixing between adjacent sub-channels. The mixing with water has been studied numerically by means of steady-state CFD simulations with the commercial code STAR-CCM+. Flow enters the computational domain through mass-inflow boundaries at the three subchannel faces; a turbulence intensity of 1% and a hydraulic diameter of 5.9 mm were used at the inlet. A passive scalar (potassium nitrate) is injected at a mass fraction of 5.536 ppm into subchannel 2 (upstream of the mixing section). Flow exits the domain through a pressure-outlet boundary (0 Pa), and the reference pressure is 1 atm.
Simulation results have been extracted at different locations in the mixing and downstream zones. The local mass fraction shows uniform mixing. The effect of the applied turbulence model is nearly negligible just before the outlet plane, where the distributions look almost identical and the flow is fully developed. Quantitatively, however, the dimensionless mixing scalar distributions change noticeably, which is visible in the different scales of the colour bars.
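The dimensionless mixing scalar compared above is conventionally a normalized tracer concentration. A minimal sketch follows, using the 5.536 ppm injection value from the abstract; the downstream sample value is an assumption for illustration.

```python
# Sketch: normalizing a local tracer mass fraction into the dimensionless
# mixing scalar used to compare sub-channels.

C_INJECTED = 5.536e-6   # injected tracer mass fraction (5.536 ppm)
C_BACKGROUND = 0.0      # tracer-free coolant

def mixing_scalar(c):
    """theta = (c - c_background) / (c_injected - c_background);
    theta = 1 in the injected sub-channel at the inlet, 0 in unmixed
    neighbouring sub-channels."""
    return (c - C_BACKGROUND) / (C_INJECTED - C_BACKGROUND)

theta_injected_inlet = mixing_scalar(5.536e-6)  # injection plane
theta_downstream = mixing_scalar(1.845e-6)      # hypothetical downstream sample
```

Perfect mixing over the three sub-channels would drive every local value of theta toward the same asymptote.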

Keywords: single-phase flow, turbulent mixing, tracer, sub channel analysis

Procedia PDF Downloads 200
1947 Rotor Radial Vent Pumping in Large Synchronous Electrical Machines

Authors: Darren Camilleri, Robert Rolston

Abstract:

Rotor radial vents make use of the pumping effect to increase airflow through the active material and thus reduce hotspot temperatures. The effect of rotor radial pumping in synchronous machines has been studied previously. This paper presents the findings of previous studies and builds upon their theories, using a parametric numerical approach to investigate the rotor radial pumping effect. The pressure head generated by the poles and the radial vent flow rate were identified as important factors in maximizing the benefits of the pumping effect. The use of Minitab and ANSYS Workbench to investigate the key performance characteristics of radial pumping through a Design of Experiments (DOE) is described. CFD results were compared with theoretical calculations, and a correlation for each response variable was derived through statistical analysis. The findings confirm the strong dependence of the vent pressure head on the radial vent length, and the radial vent cross-sectional area proved significant in maximizing the radial vent flow rate.
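The regression step of such a DOE can be sketched as an ordinary least-squares fit of a response variable against the vent parameters. The data below are synthetic and the parameter ranges are assumptions, not the study's CFD results; the point is only the shape of the fit.

```python
import numpy as np

# Sketch of the DOE regression: fit a linear response-surface model for the
# radial-vent flow rate against vent length and cross-sectional area.
rng = np.random.default_rng(0)
length = rng.uniform(0.05, 0.20, size=20)    # vent length, m (assumed range)
area = rng.uniform(1e-4, 5e-4, size=20)      # vent cross-section, m^2 (assumed)

# Synthetic "CFD" responses with known coefficients plus measurement noise.
flow = 0.8 * area * 1e4 + 0.5 * length + rng.normal(0.0, 0.01, size=20)

# Design matrix with an intercept column; solve by least squares.
X = np.column_stack([np.ones_like(length), length, area])
coef, *_ = np.linalg.lstsq(X, flow, rcond=None)
intercept, b_length, b_area = coef
```

A statistics package such as Minitab additionally reports p-values and R², but the fitted coefficients themselves come from exactly this least-squares problem.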

Keywords: CFD, cooling, electrical machines, regression analysis

Procedia PDF Downloads 309
1946 Atomic Hydrogen Storage in Hexagonal GdNi5 and GdNi4Cu Rare Earth Compounds: A Comparative Density Functional Theory Study

Authors: A. Kellou, L. Rouaiguia, L. Rabahi

Abstract:

In the present work, the trend of atomic hydrogen absorption in the GdNi5 and GdNi4Cu rare earth compounds, which crystallize in the hexagonal CaCu5-type structure (space group P6/mmm), is investigated. Density functional theory (DFT) combined with the generalized gradient approximation (GGA) is used to study the site preference of atomic hydrogen at 0 K. Both octahedral and tetrahedral interstitial sites are considered. The formation energies and structural properties are determined in order to evaluate the effect of hydrogen on the stability of the studied compounds. The energetic diagram of hydrogen storage is established and compared between GdNi5 and GdNi4Cu. The magnetic properties of the selected compounds are determined using spin-polarized calculations. The obtained results are discussed, with and without hydrogen addition, in light of available theoretical and experimental results.
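Site preferences of this kind are commonly quantified by a formation (absorption) energy per hydrogen atom referenced to molecular H2. A hedged sketch follows; all total energies are placeholders, not the paper's GGA values.

```python
# Sketch: hydrogen absorption energy per H atom from DFT total energies,
#   E_abs = E(host + H) - E(host) - 0.5 * E(H2).
# Negative values favour absorption at the given interstitial site.

E_H2 = -6.77  # total energy of an isolated H2 molecule, eV (placeholder)

def absorption_energy(e_host_plus_h, e_host, e_h2=E_H2):
    return e_host_plus_h - e_host - 0.5 * e_h2

# Comparing a tetrahedral vs. an octahedral site (hypothetical numbers):
e_tet = absorption_energy(-104.10, -100.50)
e_oct = absorption_energy(-103.95, -100.50)
preferred = "tetrahedral" if e_tet < e_oct else "octahedral"
```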

Keywords: density functional theory, hydrogen storage, rare earth compounds, structural and magnetic properties

Procedia PDF Downloads 104
1945 An Automated Approach to the Nozzle Configuration of Polycrystalline Diamond Compact Drill Bits for Effective Cuttings Removal

Authors: R. Suresh, Pavan Kumar Nimmagadda, Ming Zo Tan, Shane Hart, Sharp Ugwuocha

Abstract:

Polycrystalline diamond compact (PDC) drill bits are extensively used in the oil and gas industry as well as in mining. Industry engineers continually improve PDC drill bit designs and hydraulic conditions, and optimized injection nozzles play a key role in improving the drilling performance and efficiency of these ever-changing bits. In the first part of this study, computational fluid dynamics (CFD) modelling is performed to investigate the hydrodynamic characteristics of drilling fluid flow around the PDC drill bit. The open-source CFD software OpenFOAM simulates the flow around the drill bit based on field input data, and a purpose-built console application integrates the entire CFD process, including domain extraction, meshing, solution of the governing equations, and post-processing. The results from the OpenFOAM solver are then compared with those of the ANSYS Fluent software, and the data from the two programs agree. The second part of the paper describes a parametric study of the PDC drill bit nozzle to determine the effect of parameters such as the number of nozzles, nozzle velocity, nozzle radial position, and orientation on the flow field characteristics and bit washing patterns. After analyzing a series of nozzle configurations, the best configuration is identified, and recommendations are made for modifying the PDC bit design.

Keywords: ANSYS Fluent, computational fluid dynamics, nozzle configuration, OpenFOAM, PDC drill bit

Procedia PDF Downloads 413
1944 In-Silico Investigation of Phytochemicals from Ocimum Sanctum as Plausible Antiviral Agent in COVID-19

Authors: Dileep Kumar, Janhavi Ramchandra Rao

Abstract:

COVID-19 has ravaged the globe, and its spectre spreads day by day. In the absence of established drugs, the disease has created havoc. Infected persons may be symptomatic or asymptomatic, and the virus affects, among others, the respiratory, cardiac, and digestive systems of human beings. In the present investigation, we study the Indian Ayurvedic herb Ocimum sanctum against SARS-CoV-2 using molecular docking and dynamics studies. The docking analysis was performed with the Glide module of the Schrödinger suite on two SARS-CoV-2 proteins: the NSP15 endoribonuclease and the spike receptor-binding domain. MM-GBSA-based binding free energy calculations also suggest highly favorable binding affinities of carvacrol, β-elemene, and β-caryophyllene, with binding energies of −61.61, −58.23, and −54.19 kcal/mol, respectively, against the spike receptor-binding domain and the NSP15 endoribonuclease. These results rekindle our hope for the design and development of new drug candidates for the treatment of COVID-19.
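Since MM-GBSA binding energies are best read as relative scores, the natural post-processing step is simply ranking the candidates by ΔG. A minimal sketch follows, taking all three reported energies as negative (binding) values.

```python
# Sketch: rank docking candidates by MM-GBSA binding energy
# (most negative = strongest predicted binding).
scores_kcal_mol = {
    "carvacrol": -61.61,
    "beta-elemene": -58.23,
    "beta-caryophyllene": -54.19,
}
ranked = sorted(scores_kcal_mol, key=scores_kcal_mol.get)
best = ranked[0]
```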

Keywords: molecular docking, COVID-19, ocimum sanctum, binding energy

Procedia PDF Downloads 175
1943 DFT Study of Half Sandwich of Vanadium (IV) Cyclopentadienyl Complexes

Authors: Salem El-Tohami Ashoor

Abstract:

Novel vanadium(IV) complexes incorporating the chelating diamido cyclopentadienyl ligand set {ArN(CH2)3NAr}2-/(ηn-Cp) (Ar = 2,6-Pri2C6H3; Cp = C5H5; n = 1, 2, 3, 4, and 5) have been studied, with calculation of the properties of the species involved in the various cyclopentadienyl binding modes. The calculations were carried out within density functional theory (DFT) and compared with one another. Methods that explicitly include electron correlation are necessary for more accurate calculations, the B3LYP (Becke, three-parameter, Lee-Yang-Parr) level of theory often being used to obtain more exact results; the electronic energies of these complexes were estimated at this level because it accounts for electron correlation interactions. The optimized complex [V(ArN(CH2)3NAr)2Cl(η5-Cp)] (Ar = 2,6-Pri2C6H3, Cp = C5H5) was found to be thermally more stable than the other vanadium cyclopentadienyl species, whereas the complex [V(ArN(CH2)3NAr)2Cl(η1-Cp)], in which only one carbon of the cyclopentadienyl ring binds to the vanadium metal centre, shows low thermal stability. The Dewar-Chatt-Duncanson model is used as the basis of the molecular orbital (MO) analysis, and the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) are presented.
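HOMO-LUMO analyses of the kind described are often summarized by simple global reactivity descriptors. A hedged sketch follows, using the standard Koopmans-style expressions with placeholder orbital energies rather than the paper's computed values.

```python
# Sketch: global reactivity descriptors from frontier-orbital energies.
#   gap    = E_LUMO - E_HOMO            (HOMO-LUMO gap)
#   eta    = gap / 2                    (chemical hardness)
#   chi    = -(E_HOMO + E_LUMO) / 2     (Mulliken electronegativity)
# All quantities in eV; orbital energies below are placeholders.

def reactivity_descriptors(e_homo, e_lumo):
    gap = e_lumo - e_homo
    hardness = gap / 2.0
    electronegativity = -(e_homo + e_lumo) / 2.0
    return gap, hardness, electronegativity

gap, eta, chi = reactivity_descriptors(e_homo=-5.6, e_lumo=-2.1)
```

A larger gap (and hence larger hardness) is conventionally read as higher kinetic stability and lower chemical reactivity, which is how such descriptors support thermal-stability comparisons between complexes.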

Keywords: vanadium (IV) cyclopentadienyl complexes, DFT, MO, HOMO, LUMO

Procedia PDF Downloads 404
1942 ISMARA: Completely Automated Inference of Gene Regulatory Networks from High-Throughput Data

Authors: Piotr J. Balwierz, Mikhail Pachkov, Phil Arnold, Andreas J. Gruber, Mihaela Zavolan, Erik van Nimwegen

Abstract:

Understanding the key players and interactions in the regulatory networks that control gene expression and chromatin state across different cell types and tissues in metazoans remains one of the central challenges in systems biology. Our laboratory has pioneered a number of methods for automatically inferring core gene regulatory networks directly from high-throughput data by modeling gene expression (RNA-seq) and chromatin state (ChIP-seq) measurements in terms of genome-wide computational predictions of regulatory sites for hundreds of transcription factors and micro-RNAs. These methods have now been completely automated in an integrated web server called ISMARA that allows researchers to analyze their own data by simply uploading RNA-seq or ChIP-seq data sets, and provides results in an integrated web interface as well as in downloadable flat-file form. For any data set, ISMARA infers the key regulators in the system, their activities across the input samples, the genes and pathways they target, and the core interactions between the regulators. We believe that by empowering experimental researchers to apply cutting-edge computational systems biology tools to their data in a completely automated manner, ISMARA can play an important role in developing our understanding of regulatory networks across metazoans.
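The core modeling idea — explaining expression across samples as predicted binding-site counts multiplied by unknown regulator activities — can be sketched as a regularized linear fit. The dimensions, data, and ridge penalty below are synthetic assumptions for illustration, not ISMARA's actual implementation.

```python
import numpy as np

# Sketch of the site-count x activity model: E[g, s] ~ sum_m N[g, m] * A[m, s],
# where N holds predicted binding-site counts per gene/promoter and motif,
# and A holds the unknown motif activities per sample. Fit A by ridge regression.
rng = np.random.default_rng(1)
n_genes, n_motifs, n_samples = 200, 10, 6

N = rng.poisson(2.0, size=(n_genes, n_motifs)).astype(float)   # site counts
A_true = rng.normal(0.0, 1.0, size=(n_motifs, n_samples))      # hidden activities
E = N @ A_true + rng.normal(0.0, 0.1, size=(n_genes, n_samples))  # expression

lam = 1.0  # ridge penalty (assumed)
A_hat = np.linalg.solve(N.T @ N + lam * np.eye(n_motifs), N.T @ E)
```

With many more genes than motifs, the recovered activities closely track the hidden ones; the inferred activity profiles across samples are what identify the key regulators.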

Keywords: gene expression analysis, high-throughput sequencing analysis, transcription factor activity, transcription regulation

Procedia PDF Downloads 57
1941 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCIs) use event-related (de)synchronization (ERD/ERS), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are then used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency band, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes using the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to reduce the computational cost of this processing step and make these systems more efficient without compromising classification accuracy. The proposal represents the EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier then represents the LDA outputs of each sub-band as scores, organizes them into a single vector, and uses this vector to train a global SVM classifier.
The public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that the resulting FFT matrix, in addition to being more compact (68% smaller than the original signal), retains the signal information relevant to class discrimination. The results also show an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, demonstrating the efficiency of the FFT in the filtering step. Finally, the frequency decomposition approach significantly improves the overall classification rate compared to the commonly used filtering, from 73.7% with IIR to 84.2% with FFT. The accuracy improvement of more than 10 percentage points and the reduced computational cost demonstrate the potential of FFT-based filtering of EEG signals in MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
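The FFT-based sub-band filtering step that replaces the IIR filter bank can be sketched as follows. The 250 Hz sampling rate matches BCI Competition IV dataset IIa, but the specific band edges and test signal are illustrative; the downstream CSP/LDA/SVM stages are omitted.

```python
import numpy as np

# Sketch: sub-band filtering via FFT — keep only the spectral coefficients
# inside [f_low, f_high) and reconstruct the time-domain signal.
FS = 250.0  # sampling rate, Hz (BCI Competition IV dataset IIa)

def fft_subband(epoch, f_low, f_high, fs=FS):
    """epoch: (n_channels, n_samples) array holding one EEG trial."""
    n = epoch.shape[-1]
    spec = np.fft.rfft(epoch, axis=-1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= f_low) & (freqs < f_high)
    spec[..., ~mask] = 0.0          # zero all out-of-band coefficients
    return np.fft.irfft(spec, n=n, axis=-1)

# A 10 Hz sinusoid survives an 8-12 Hz sub-band but not a 20-24 Hz one.
t = np.arange(int(FS)) / FS
trial = np.sin(2 * np.pi * 10 * t)[np.newaxis, :]
in_band = fft_subband(trial, 8, 12)
out_band = fft_subband(trial, 20, 24)
```

Because one FFT of the epoch serves all 33 sub-bands (each sub-band only selects a different slice of coefficients), the filter bank's cost collapses to a single transform plus cheap masking, which is the source of the reported savings.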

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 123
1940 Two-Phase Flow Study of Airborne Transmission Control in Dental Practices

Authors: Mojtaba Zabihi, Stephen Munro, Jonathan Little, Ri Li, Joshua Brinkerhoff, Sina Kheirkhah

Abstract:

The Occupational Safety and Health Administration (OSHA) has identified dental workers as being at the highest risk of contracting COVID-19, because aerosol-generating procedures (AGPs) during dental practice generate aerosols (< 5 µm) and droplets. These particles travel at varying speeds, in varying directions, and for varying durations. If they bear infectious viruses, their spread causes airborne transmission of the virus in the dental room, exposing dentists, hygienists, dental assistants, and even other dental clinic clients to infection risk. Computational fluid dynamics (CFD) simulation of two-phase flows based on a discrete phase model (DPM) is carried out to study the spreading of aerosols and droplets in a dental room. The simulation includes momentum, heat, and mass transfer between the particles and the airflow. Two simulations are conducted and compared. One focuses on the effects of room ventilation in winter and summer on particle travel. The other focuses on controlling the spread of aerosols and droplets: a suction collector is added near the source of the aerosols and droplets, creating a flow sink that removes the particles. The effects of the suction flow on aerosol and droplet travel are studied; the suction flow can remove aerosols and also reduce the spreading of droplets.
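On the particle side, a DPM solver integrates a drag-driven equation of motion for each droplet. A minimal Stokes-drag sketch follows; the material properties, particle sizes, and time step are generic assumptions, not the study's settings.

```python
# Sketch: Stokes-drag relaxation of a DPM particle toward the local air velocity.
#   tau = rho_p * d_p^2 / (18 * mu)           (particle relaxation time)
#   dv_p/dt = (v_air - v_p) / tau             (drag-only momentum equation)

RHO_P = 1000.0   # droplet density, kg/m^3 (water)
MU_AIR = 1.8e-5  # dynamic viscosity of air, Pa*s

def relaxation_time(d_p):
    return RHO_P * d_p**2 / (18.0 * MU_AIR)

def step_velocity(v_p, v_air, d_p, dt):
    """One explicit-Euler step of the drag-only momentum equation."""
    return v_p + dt * (v_air - v_p) / relaxation_time(d_p)

# A 5-micron aerosol relaxes to the local air velocity ~100x faster than a
# 50-micron droplet, which is why suction near the source captures aerosols
# while larger droplets follow more ballistic paths.
tau_aerosol = relaxation_time(5e-6)
tau_droplet = relaxation_time(50e-6)
v_after = step_velocity(0.0, 1.0, d_p=50e-6, dt=1e-4)
```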

Keywords: aerosols, computational fluid dynamics, COVID-19, dental, discrete phase model, droplets, two-phase flow

Procedia PDF Downloads 254
1939 A Hybrid Classical-Quantum Algorithm for Boundary Integral Equations of Scattering Theory

Authors: Damir Latypov

Abstract:

A hybrid classical-quantum algorithm to solve boundary integral equations (BIE) arising in problems of electromagnetic and acoustic scattering is proposed. The quantum speed-up is due to a Quantum Linear System Algorithm (QLSA). The original QLSA of Harrow et al. provides an exponential speed-up over the best-known classical algorithms, but only for sparse systems. Due to the non-local nature of integral operators, however, the matrices arising from the discretization of BIEs are dense. A QLSA for dense matrices was introduced in 2017; its runtime as a function of the system size N is bounded by O(√N polylog(N)), whereas the runtime of the best-known classical algorithm for an arbitrary dense matrix scales as O(N^2.373). Instead of the exponential speed-up obtained for sparse matrices, here we have only a polynomial speed-up; nevertheless, the sufficiently high power of this polynomial, ~4.7, should make the QLSA an appealing alternative. Unfortunately for the QLSA, the asymptotic separability of the Green's function leads to high compressibility of the BIE matrices. Classical fast algorithms such as the Multilevel Fast Multipole Method (MLFMM) take advantage of this fact and reduce the runtime to O(N log(N)), i.e., the QLSA is only quadratically faster than the MLFMM. To be truly impactful for computational electromagnetics and acoustics engineers, the QLSA must provide a more substantial advantage. We propose a computational scheme that combines elements of the classical fast algorithms with the QLSA to achieve the required performance.
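The complexity comparison above can be made concrete with a toy operation count. The polylog factor is modeled here as log² and all constants are set to 1 — both assumptions for illustration only; asymptotic bounds do not fix either choice.

```python
import math

# Toy operation counts for the three scalings quoted in the abstract.

def qlsa_dense(n):
    return math.sqrt(n) * math.log2(n) ** 2   # O(sqrt(N) polylog(N)), polylog ~ log^2

def classical_dense(n):
    return n ** 2.373                          # best-known dense classical solve

def mlfmm(n):
    return n * math.log2(n)                    # Multilevel Fast Multipole Method

n = 10**6
speedup_vs_dense = classical_dense(n) / qlsa_dense(n)   # huge polynomial gap
speedup_vs_mlfmm = mlfmm(n) / qlsa_dense(n)             # only ~quadratic gap
```

Even in this crude model, the gap over a naive dense solve dwarfs the gap over the MLFMM, which is the abstract's point: compressible BIE matrices hand the classical side most of the advantage back.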

Keywords: quantum linear system algorithm, boundary integral equations, dense matrices, electromagnetic scattering theory

Procedia PDF Downloads 143
1938 The Effect of Pre-Cracks on Structural Strength of the Nextel Fibers: A Multiscale Modeling Approach

Authors: Seyed Mohammad Mahdi Zamani, Kamran Behdinan

Abstract:

In this study, a multiscale framework is used to model the strength of Nextel fibers in the presence of an atomistic-scale pre-crack at finite temperatures. The bridging cell method (BCM) is the multiscale technique applied: it decomposes the system into atomistic, bridging, and continuum domains; solves the whole system in a finite element framework; and incorporates temperature-dependent calculations. Since Nextel is known to be structurally stable and to retain 70% of its initial strength up to 1100°C, simulations are conducted both at room temperature (25°C) and at a fire temperature (1200°C). Two cases are modeled, with the pre-crack located in either the alumina or the mullite phase of the Nextel structure. The material's response is studied with respect to deformation behavior and ultimate tensile strength. The results show different crack growth trends for the two cases; as the temperature increases, both the crack growth resistance and the material's strength decrease.

Keywords: Nextel fibers, multiscale modeling, pre-crack, ultimate tensile strength

Procedia PDF Downloads 407
1937 Brittle Fracture Tests on Steel Bridge Bearings: Application of the Potential Drop Method

Authors: Natalie Hoyer

Abstract:

Usually, steel structures are designed for the upper region of the steel toughness-temperature curve. To address the reduced toughness properties in the temperature transition range, additional safety assessments based on fracture mechanics are necessary. These assessments enable the appropriate selection of steel materials to prevent brittle fracture. In this context, recommendations were established in 2011 to regulate the appropriate selection of steel grades for bridge bearing components. However, these recommendations are no longer fully aligned with more recent insights: designing bridge bearings and their components in accordance with DIN EN 1337 and the relevant sections of DIN EN 1993 has led to an increasing trend toward large plate thicknesses, especially for long-span bridges. These plate thicknesses surpass the application limits specified in the national annex of DIN EN 1993-2, and compliance with the regulations outlined in DIN EN 1993-1-10 regarding material toughness and through-thickness properties requires further modifications. These standards therefore cannot be applied directly to the material selection for bearings without additional information. In addition, recent findings indicate that certain bridge bearing components are subjected to high fatigue loads, which must be considered in structural design, material selection, and calculations. To address this issue, the German Center for Rail Traffic Research initiated a research project aimed at developing a proposal to enhance the existing standards. This proposal seeks to establish guidelines for the selection of steel materials for bridge bearings to prevent brittle fracture, particularly for thick plates and components exposed to specific fatigue loads. The results derived from theoretical analyses, including finite element simulations and analytical calculations, are verified through large-scale component testing.
During these large-scale tests, in which brittle failure is deliberately induced in a bearing component, an artificially generated defect is introduced into the specimen at the predetermined hotspot. A dynamic load is then applied until crack initiation occurs, replicating realistic conditions: a sharp notch resembling a fatigue crack. To stop the dynamic loading in time, the point at which crack growth transitions from stable to unstable must be determined precisely. To achieve this, the potential drop measurement method is employed. The paper discusses the choice of measurement method (alternating current potential drop, ACPD, versus direct current potential drop, DCPD), presents results from correlations with the finite element models, and proposes a new approach for introducing beach marks into the fracture surface within the framework of potential drop measurement.

Keywords: beach marking, bridge bearing design, brittle fracture, design for fatigue, potential drop

Procedia PDF Downloads 28