Search results for: Numerical simulations
673 THz Phase Extraction Algorithms for a THz Modulating Interferometric Doppler Radar
Authors: Shaolin Allen Liao, Hual-Te Chien
Abstract:
Various THz phase extraction algorithms have been developed for a novel THz Modulating Interferometric Doppler Radar (THz-MIDR) recently developed by the author. The THz-MIDR differs from the well-known FTIR technique in that it introduces a continuously modulating reference branch in place of the time-consuming discrete stepping reference branch of FTIR. This change allows real-time tracking of a moving object and capture of its Doppler signature. The working principle of the THz-MIDR is similar to that of the FTIR technique: the incoming THz emission from the scene is split by a beam splitter/combiner; one beam is continuously modulated by a vibrating mirror or phase modulator, while the other is reflected by a mirror; finally, the modulated reference beam and the reflected beam are recombined by the same beam splitter/combiner and detected by a THz intensity detector (for example, a pyroelectric detector). In order to extract the THz phase from the single intensity measurement, we have derived rigorous mathematical formulas for three Frequency-Banded (FB) signals: 1) the DC Low-Frequency-Banded (LFB) signal; 2) the Fundamental-Frequency-Banded (FFB) signal; and 3) the Harmonic-Frequency-Banded (HFB) signal. The THz phase extraction algorithms are then developed based on combinations of two or all three of these FB signals, together with efficient methods such as the Levenberg-Marquardt nonlinear fitting algorithm. Numerical simulations have also been performed in Matlab with simulated THz-MIDR interferometric signals at various signal-to-noise ratios (SNR) to verify the algorithms.
Keywords: algorithm, modulation, THz phase, THz interferometry doppler radar
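The Levenberg-Marquardt fitting step mentioned in the abstract can be illustrated with a minimal sketch: a damped Gauss-Newton loop recovering amplitude and phase from a noisy single-tone signal. The signal model, starting values and loop below are illustrative assumptions, not the authors' FB-signal formulas.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, n_iter=50, lam=1e-3):
    """Minimal damped Gauss-Newton (Levenberg-Marquardt) loop."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        p_new = p - step
        if np.sum(residual(p_new) ** 2) < np.sum(r ** 2):
            p, lam = p_new, 0.5 * lam   # accept step, trust the model more
        else:
            lam *= 2.0                  # reject step, increase damping
    return p

# Simulated single-tone interferometric intensity with additive noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)
omega = 2.0 * np.pi * 5.0
A_true, phi_true = 1.0, 0.7
y = A_true * np.cos(omega * t + phi_true) + 0.05 * rng.standard_normal(t.size)

def residual(p):
    A, phi = p
    return A * np.cos(omega * t + phi) - y

def jacobian(p):
    A, phi = p
    return np.column_stack([np.cos(omega * t + phi),
                            -A * np.sin(omega * t + phi)])

A_fit, phi_fit = levenberg_marquardt(residual, jacobian, [0.8, 0.2])
```

At moderate SNR the fitted phase lands close to the true value, which is the essence of the phase extraction step.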
Procedia PDF Downloads 349
672 Generating 3D Battery Cathode Microstructures using Gaussian Mixture Models and Pix2Pix
Authors: Wesley Teskey, Vedran Glavas, Julian Wegener
Abstract:
Generating battery cathode microstructures is an important area of research, given the proliferation of automotive batteries. Currently, finite element analysis (FEA) is often used to simulate battery cathode microstructures before physical batteries are manufactured and tested to verify the simulation results. Unfortunately, a key drawback of FEA is that this method of simulation is very slow in terms of computational runtime. Generative AI offers the key advantage of speed when compared to FEA, and because of this, generative AI is capable of evaluating very large numbers of candidate microstructures. Given AI-generated candidate microstructures, a subset of the promising microstructures can be selected for further validation using FEA. Leveraging the speed advantage of AI allows for a better final microstructural selection because high speed allows for the evaluation of many more candidate microstructures. In the approach presented, 3D battery cathode candidate microstructures are generated using Gaussian Mixture Models (GMMs) and pix2pix. The approach first uses GMMs to generate a population of spheres (representing the “active material” of the cathode). Once spheres have been sampled from the GMM, they are placed within a microstructure. Subsequently, pix2pix sweeps iteratively over the 3D microstructure slice by slice and adds details to determine which portions of the microstructure will become electrolyte and which will become binder. In this manner, each subsequent slice of the microstructure is evaluated using pix2pix, where the inputs into pix2pix are the previously processed layers of the microstructure. By feeding previously fully processed layers of the microstructure into pix2pix, candidate microstructures can be made to represent a realistic physical reality.
More specifically, for the microstructure to represent a realistic physical reality, the locations of electrolyte and binder in each layer must reasonably match those in previous layers to ensure geometric continuity. Using the approach outlined above, a 10x to 100x speed increase was achieved when generating candidate microstructures with AI compared to an FEA-only approach. A key metric for evaluating microstructures was the specific power that the microstructures would be able to produce. The best generative AI result obtained was a 12% increase in specific power for a candidate microstructure compared to what an FEA-only approach was capable of producing. This 12% increase in specific power was verified by FEA simulation.
Keywords: finite element analysis, gaussian mixture models, generative design, Pix2Pix, structural design
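The GMM sphere-placement stage can be sketched as follows: sphere radii are drawn from an assumed two-component Gaussian mixture and stamped into a voxel grid as "active material". All parameters are illustrative, and the pix2pix slice-refinement stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed two-component GMM over sphere radii (in voxels).
weights = np.array([0.6, 0.4])
means = np.array([4.0, 7.0])
stds = np.array([0.5, 1.0])

def sample_radii(n):
    comp = rng.choice(weights.size, size=n, p=weights)   # pick a component
    return np.clip(rng.normal(means[comp], stds[comp]), 1.0, None)

# Stamp spheres ("active material") into a voxelized 3D microstructure.
N = 48
grid = np.zeros((N, N, N), dtype=bool)
zc, yc, xc = np.mgrid[0:N, 0:N, 0:N]
for r in sample_radii(30):
    cx, cy, cz = rng.uniform(0, N, size=3)
    grid |= (xc - cx) ** 2 + (yc - cy) ** 2 + (zc - cz) ** 2 <= r ** 2

active_fraction = grid.mean()
```

The resulting boolean grid is the kind of 3D scaffold a slice-by-slice image-to-image model would then refine into electrolyte and binder regions.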
Procedia PDF Downloads 112
671 Probability-Based Damage Detection of Structures Using Model Updating with Enhanced Ideal Gas Molecular Movement Algorithm
Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee
Abstract:
Model updating methods have received increasing attention for damage detection in structures based on measured modal parameters. Accordingly, a probability-based damage detection (PBDD) procedure built on a model updating procedure is presented in this paper, in which a one-stage model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses a finite element updating method with a Monte Carlo simulation that accounts for the uncertainty caused by measurement noise. Enhanced ideal gas molecular movement (EIGMM) is used as the main algorithm for model updating. Ideal gas molecular movement (IGMM) is a multiagent algorithm inspired by the movement of ideal gas molecules, which disperse rapidly in all directions and fill the available space; this behaviour stems from the high speed of the molecules and their collisions with one another and with the surrounding barriers. In the IGMM algorithm, to reach optimal solutions, the initial population of gas molecules is randomly generated, and the governing equations for the molecular velocities and the collisions between molecules are applied. In this paper, an enhanced version of IGMM, which removes unchanged variables after a specified number of iterations, is developed. The proposed method is applied to two numerical examples in the field of structural damage detection. The results show that the proposed method performs well and is competitive in PBDD of structures.
Keywords: enhanced ideal gas molecular movement (EIGMM), ideal gas molecular movement (IGMM), model updating method, probability-based damage detection (PBDD), uncertainty quantification
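A minimal sketch of the multiagent idea behind IGMM, assuming a simple greedy acceptance rule and a toy sphere objective. This is not the authors' governing equations, and the EIGMM enhancement of removing unchanged variables is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    return np.sum(x ** 2, axis=-1)   # toy objective, minimum 0 at the origin

# Simplified "gas molecule" search: each molecule takes a random velocity step
# plus a small drift toward the current best molecule, and keeps the move only
# if it improves the objective. The molecular speed is annealed over time.
n_mol, dim, n_iter = 30, 5, 500
pos = rng.uniform(-5.0, 5.0, size=(n_mol, dim))
best = pos[np.argmin(sphere(pos))].copy()
speed = 1.0
for _ in range(n_iter):
    vel = speed * rng.standard_normal((n_mol, dim))
    trial = pos + vel + 0.1 * (best - pos)
    improved = sphere(trial) < sphere(pos)
    pos[improved] = trial[improved]
    best = pos[np.argmin(sphere(pos))].copy()
    speed *= 0.99                     # cool the molecular speed
```

In a model-updating setting, the objective would instead measure the misfit between measured and predicted modal parameters.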
Procedia PDF Downloads 282
670 Electronic Spectral Function of Double Quantum Dots–Superconductors Nanoscopic Junction
Authors: Rajendra Kumar
Abstract:
We study the electronic spectral density of two coupled quantum dots sandwiched between superconducting leads, where the first dot (QD1) is connected to the left superconducting lead, the second dot (QD2) is connected to the right superconducting lead, and QD1 and QD2 are coupled to each other. The electronic spectral density is evaluated through the quantum dots between superconducting leads having s-wave symmetry of the superconducting order parameter. Such a junction is called a superconductor-quantum dot (S-QD-S) junction. For this purpose, we have considered a renormalized Anderson model that includes the coupling of the superconducting leads to the quantum dot levels and an attractive BCS-type effective interaction in the superconducting leads. We employed the Green's function technique to obtain the superconducting order parameter within the BCS framework and the Ambegaokar-Baratoff formalism to analyze the electronic spectral density through such an S-QD-S junction. It is pointed out that the electronic spectral density through such a junction depends in an essential way on the attractive pairing interaction in the leads, on the energy of the dot level with respect to the Fermi energy, and on the coupling parameter between the two dots. On the basis of numerical analysis, we have compared the theoretical results for the electronic spectral density with existing theoretical transport analyses. An important energy scale in QDs is the charging energy, which may give rise to effects based on the interplay of Coulomb repulsion and superconducting correlations. It is, therefore, an interesting question to ask how the discrete level spectrum and the charging energy affect the DC and AC Josephson transport between two superconductors coupled via a QD. In the absence of a bias voltage, a finite DC current can be sustained in such an S-QD-S junction by the DC Josephson effect.
Keywords: quantum dots, S-QD-S junction, BCS superconductors, Anderson model
Procedia PDF Downloads 380
669 Control Performance Simulation and Analysis for Microgravity Vibration Isolation System Onboard Chinese Space Station
Authors: Wei Liu, Shuquan Wang, Yang Gao
Abstract:
The Microgravity Science Experiment Rack (MSER) will be onboard the TianHe (TH) spacecraft, planned to be launched in 2018. TH is one module of the Chinese Space Station. The Microgravity Vibration Isolation System (MVIS), which is MSER's core part, is used to isolate disturbance from TH and provide a high-level microgravity environment for the science experiment payload. MVIS is a two-stage vibration isolation system, consisting of a Follow Unit (FU) and an Experiment Support Unit (ESU). The FU is linked to MSER by umbilical cables, and the ESU is suspended within the FU without physical connection. The FU's position and attitude relative to TH are measured by a binocular vision measuring system, and its acceleration and angular velocity are measured by accelerometers and gyroscopes. Air-jet thrusters are used to generate the force and moment that control the FU's motion. The measurement module on the ESU contains a set of Position-Sensitive Detectors (PSDs) sensing the ESU's position and attitude relative to the FU, as well as accelerometers and gyroscopes sensing the ESU's acceleration and angular velocity. Electromagnetic actuators are used to control the ESU's motion. Firstly, the linearized equations of the FU's motion relative to TH and of the ESU's motion relative to the FU are derived, laying the foundation for control system design and simulation analysis. Subsequently, two control schemes are proposed. In one scheme, the ESU tracks the FU and the FU tracks TH, shortened as E-F-T; in the other, the FU tracks the ESU and the ESU tracks TH, shortened as F-E-T. In addition, the motion spaces are constrained to within ±15 mm and ±2° between the FU and the ESU, and to within ±300 mm between the FU and TH or between the ESU and TH. A Proportional-Integral-Derivative (PID) controller is designed to control the FU's position and attitude. The ESU's controller includes an acceleration feedback loop and a relative position feedback loop.
A Proportional-Integral (PI) controller is designed in the acceleration feedback loop to reduce the ESU's acceleration level, and a PID controller in the relative position feedback loop is used to avoid collision. Finally, simulations of E-F-T and F-E-T are performed considering various uncertainties, disturbances and motion space constraints. The simulation results of E-F-T showed that the control performance ranged from 0 to -20 dB for vibration frequencies from 0.01 to 0.1 Hz, and that vibration was attenuated by 40 dB per decade above 0.1 Hz. The simulation results of F-E-T showed that vibration was attenuated by 20 dB per decade starting from 0.01 Hz.
Keywords: microgravity science experiment rack, microgravity vibration isolation system, PID control, vibration isolation performance
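A discrete PID position loop of the kind described for the FU can be sketched for a single-axis floating mass driven by thruster force. The mass, gains and setpoint below are illustrative assumptions, not the flight values.

```python
# Single-axis PID position control of a floating mass driven by thruster
# force (semi-implicit Euler integration; values are illustrative).
m, dt = 50.0, 0.01                 # mass [kg], time step [s]
kp, ki, kd = 400.0, 20.0, 300.0    # PID gains
target = 0.015                     # 15 mm position setpoint [m]

x, v, integ, prev_err = 0.0, 0.0, 0.0, None
history = []
for _ in range(6000):              # 60 s of simulated time
    err = target - x
    integ += err * dt
    deriv = 0.0 if prev_err is None else (err - prev_err) / dt
    force = kp * err + ki * integ + kd * deriv
    prev_err = err
    v += (force / m) * dt          # thruster acceleration
    x += v * dt
    history.append(x)
```

With these gains the mass settles on the setpoint without exceeding the ±15 mm-class motion constraint discussed above.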
Procedia PDF Downloads 165
668 Impact of Curvatures in the Dike Line on Wave Run-up and Wave Overtopping, ConDike-Project
Authors: Malte Schilling, Mahmoud M. Rabah, Sven Liebisch
Abstract:
Wave run-up and overtopping are the relevant parameters for dimensioning the crest height of dikes. Various experimental as well as numerical studies have investigated these parameters under different boundary conditions (e.g. wave conditions, structure type). Particularly for dike design in Europe, a common approach is formulated in which wave and structure properties are parameterized. This approach, however, assumes equal run-up heights and overtopping discharges along the longitudinal axis, whereas convex dikes have a curved crest line by definition. Hence, local differences in a convex dike line are expected to cause wave-structure interactions different from those at a straight dike. This study aims to assess both run-up and overtopping at convexly curved dikes. To cast light on the relevance of curved dikes for the design approach mentioned above, physical model tests were conducted in a 3D wave basin of the Ludwig-Franzius-Institute Hannover. A dike with a slope of 1:6 (height over length) was tested under both regular waves and TMA wave spectra. Significant wave heights ranged from 7 to 10 cm and peak periods from 1.06 to 1.79 s. Run-up and overtopping were assessed behind the curved and straight sections of the dike, and both measurements were compared to those at a dike with a straight line. It was observed that convex curvatures in the longitudinal dike line cause a redirection of incident waves, leading to a concentration around the center point. The measurements prove that both run-up heights and overtopping rates are higher than at the straight dike. It can be concluded that deviations from a straight longitudinal dike line have an impact on design parameters and imply uncertainties within the design approach currently in force. It is therefore recommended to consider these influencing factors in such cases.
Keywords: convex dike, longitudinal curvature, overtopping, run-up
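For context, the parameterized design approach mentioned above can be sketched with a commonly used EurOtop-type 2% run-up formula, with all influence factors set to 1. The wave conditions below are taken from the test range, but the coefficients are a generic engineering correlation, not the study's fitted results.

```python
import math

# EurOtop-type 2% run-up estimate (influence factors for berms, roughness
# and obliquity set to 1; illustrative, not the study's results).
def runup_2pct(Hm0, Tm10, tan_alpha, g=9.81):
    L0 = g * Tm10 ** 2 / (2.0 * math.pi)        # deep-water wavelength [m]
    xi = tan_alpha / math.sqrt(Hm0 / L0)        # surf-similarity parameter
    ratio = min(1.65 * xi, 4.0 - 1.5 / math.sqrt(xi))
    return ratio * Hm0                          # run-up height [m]

# Model-scale conditions from the study: 1:6 slope, Hm0 = 8 cm, T ~ 1.4 s.
Ru = runup_2pct(0.08, 1.4, 1.0 / 6.0)
```

Such formulas are strictly one-dimensional, which is exactly why the concentration effect of a convex dike line falls outside them.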
Procedia PDF Downloads 295
667 Experimental Correlation for Erythrocyte Aggregation Rate in Population Balance Modeling
Authors: Erfan Niazi, Marianne Fenech
Abstract:
Red blood cells (RBCs), or erythrocytes, tend to form chain-like aggregates called rouleaux under low shear rates. This is a reversible process, and rouleaux disaggregate at high shear rates. RBC aggregation therefore occurs in the microcirculation, where low shear rates are present, but not under normal physiological conditions in large arteries. Numerical modeling of RBC interactions is fundamental in analytical models of blood flow in the microcirculation. Population Balance Modeling (PBM) is particularly useful for studying problems where particles agglomerate and break up in two-phase flow systems to find flow characteristics. In this method, the elementary particles lose their individual identity due to continuous destruction and recreation by break-up and agglomeration. The aim of this study is to find the RBC aggregation rate in a dynamic situation. A simplified PBM was used previously to find the aggregation rate from a static observation of RBC aggregation in a drop of blood under the microscope. To find the aggregation rate in a dynamic situation, we propose an experimental setup testing RBC sedimentation. In this test, RBCs interact and aggregate to form rouleaux. In this configuration, disaggregation can be neglected due to the low shear stress. A high-speed camera is used to acquire video-microscopic pictures of the process. The sizes of the aggregates and the sedimentation velocity are extracted using image processing techniques. Based on data collected from 5 healthy human blood samples, the aggregation rate was estimated as 2.7×10³ (±0.3×10³) s⁻¹.
Keywords: red blood cell, rouleaux, microfluidics, image processing, population balance modeling
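The population-balance idea can be illustrated with a toy constant-kernel aggregation model, in which the mean aggregate size grows linearly in time and the rate constant is recovered from the slope. The synthetic data, concentration and units are purely illustrative; they do not reproduce the measured value above.

```python
import numpy as np

# Constant-kernel aggregation: dn/dt = -k * n^2 / 2 gives
# n(t) = n0 / (1 + k * n0 * t / 2), so the mean aggregate size
# n0 / n(t) grows linearly: size(t) = 1 + 0.5 * k * n0 * t.
def mean_aggregate_size(k_agg, n0, t):
    return 1.0 + 0.5 * k_agg * n0 * t

t = np.linspace(0.0, 60.0, 50)        # observation times [s]
k_true, n0 = 2.7e3, 1.0e-3            # toy rate constant and concentration
sizes = mean_aggregate_size(k_true, n0, t)
noisy = sizes * (1.0 + 0.02 * np.random.default_rng(3).standard_normal(t.size))

# Recover the rate constant from the slope of mean size vs. time,
# mimicking the image-based size measurements.
slope = np.polyfit(t, noisy, 1)[0]
k_est = 2.0 * slope / n0
```

The same slope-fitting idea applies to the measured aggregate-size time series extracted from the video frames.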
Procedia PDF Downloads 361
666 Aerodynamic Heating Analysis of Hypersonic Flow over Blunt-Nosed Bodies Using Computational Fluid Dynamics
Authors: Aakash Chhunchha, Assma Begum
Abstract:
The qualitative aspects of hypersonic flow over a range of blunt bodies have been extensively analyzed in the past. It is well known that the curvature of a body's geometry in the sonic region predominantly dictates the bow shock shape and its standoff distance from the body, while the surface pressure distribution depends on both the sonic region and the local body shape. The present study extends this work by analyzing the hypersonic flow characteristics over several blunt-nosed bodies using modern Computational Fluid Dynamics (CFD) tools to determine the shock shape and its effect on the heat flux around the body. Four blunt-nosed models with cylindrical afterbodies were analyzed for a flow at a Mach number of 10, corresponding to standard atmospheric conditions at an altitude of 50 km. The nose radii of curvature of the models range from a hemispherical nose to a flat nose. The numerical models and the supplementary convergence techniques implemented for the CFD analysis are thoroughly described. The flow contours are presented, highlighting the key characteristics of shock wave shape, shock standoff distance and the sonic point shift on the shock. The variation of heat flux due to the different shock detachments of the various models is comprehensively discussed. It is observed that the blunter the nose, the farther the shock stands from the body and, consequently, the lower the surface heating at the nose. The results obtained from the CFD analyses are compared with approximate theoretical engineering correlations, and overall a satisfactory agreement is observed between the two.
Keywords: aero-thermodynamics, blunt-nosed bodies, computational fluid dynamics (CFD), hypersonic flow
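The blunter-nose/lower-heating trend can be illustrated with the Sutton-Graves engineering correlation for stagnation-point heat flux, q = k·sqrt(ρ/Rₙ)·V³, one of the approximate correlations CFD results are typically compared against. The freestream values below assume the standard atmosphere at 50 km; the nose radii are arbitrary examples.

```python
import math

# Sutton-Graves correlation: q = k * sqrt(rho / R_n) * V**3 (SI units).
def sutton_graves_heat_flux(rho, R_n, V, k=1.7415e-4):
    return k * math.sqrt(rho / R_n) * V ** 3    # stagnation heat flux [W/m^2]

rho_50km = 1.027e-3                             # density at 50 km [kg/m^3]
T_50km = 270.65                                 # temperature at 50 km [K]
V = 10.0 * math.sqrt(1.4 * 287.0 * T_50km)      # Mach 10 flight speed [m/s]

q_sharp = sutton_graves_heat_flux(rho_50km, 0.01, V)   # 1 cm nose radius
q_blunt = sutton_graves_heat_flux(rho_50km, 0.10, V)   # 10 cm nose radius
```

The 1/sqrt(Rₙ) scaling makes the trade explicit: increasing the nose radius tenfold cuts the stagnation heat flux by a factor of sqrt(10).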
Procedia PDF Downloads 148
665 A Mixed 3D Finite Element for Highly Deformable Thermoviscoplastic Materials Under Ductile Damage
Authors: João Paulo Pascon
Abstract:
In this work, a mixed 3D finite element formulation is proposed to analyze thermoviscoplastic materials under large strain levels and ductile damage. To this end, a tetrahedral element of linear order is employed, considering a thermoviscoplastic constitutive law together with the neo-Hookean hyperelastic relationship and a nonlocal Gurson porous plasticity theory. The material model is capable of reproducing finite deformations, elastoplastic behavior, void growth, nucleation and coalescence, thermal effects such as plastic work heating and conductivity, strain hardening and strain-rate dependence. The nonlocal character is introduced by means of a nonlocal parameter applied to the Laplacian of the porosity field. The element degrees of freedom are the nodal values of the deformed position, the temperature and the nonlocal porosity field. The internal variables are updated at the Gauss points according to the yield criterion and the evolution laws, including the yield stress of the matrix, the equivalent plastic strain, the local porosity and the plastic components of the Cauchy-Green stretch tensor. Two problems involving 3D specimens and ductile damage are numerically analyzed with the developed computational code: a necking problem and a notched sample. The effects of the nonlocal parameter and of mesh refinement are investigated in detail. The results indicate the need for a properly chosen nonlocal parameter. In addition, the numerical formulation can predict ductile fracture based on the evolution of the fully damaged zone.
Keywords: mixed finite element, large strains, ductile damage, thermoviscoplasticity
Procedia PDF Downloads 100
664 Mathematics as the Foundation for the STEM Disciplines: Different Pedagogical Strategies Addressed
Authors: Marion G. Ben-Jacob, David Wang
Abstract:
There is a mathematics requirement for entry-level college and university students, especially those who plan to study STEM (Science, Technology, Engineering and Mathematics). Most of them take College Algebra, and to continue their studies, they need to succeed in this course. Different pedagogical strategies are employed to promote the success of our students. There is, of course, the traditional method of teaching: lecture, examples, and problems for the students to solve. The Emporium Model, another pedagogical approach, replaces traditional lectures with a learning resource center model featuring interactive software and on-demand personalized assistance. This presentation will compare these two pedagogical methods and report the results of a study on this comparison. Math is the foundation for science, technology, and engineering. It is generally used in STEM to find patterns in data. These patterns can be used to test relationships, draw general conclusions about data, and model the real world. In STEM, solutions to problems are analyzed, reasoned, and interpreted using math abilities in an assortment of real-world scenarios. This presentation will examine specific examples of how math is used in the different STEM disciplines. Math becomes practical in science when it is used to model natural and artificial experiments to identify a problem and develop a solution for it. As we analyze data, we are using math to find the statistical correlation between cause and effect. Scientists who use math include data scientists, biologists, and geologists. Without math, most technology would not be possible. Math is the basis of binary, and without programming, you just have the hardware. Addition, subtraction, multiplication, and division are also used in almost every program written. Mathematical algorithms are inherent in software as well.
Mechanical engineers analyze scientific data to design robots by applying math and using software. Electrical engineers use math to help design and test electrical equipment. They also use math when creating computer simulations and designing new products. Chemical engineers often use mathematics in the lab; advanced computer software aids their research and production processes by modeling theoretical synthesis techniques and the properties of chemical compounds. Mathematics mastery is crucial for success in the STEM disciplines. Pedagogical research on formative strategies and the necessary topics to be covered is essential.
Keywords: emporium model, mathematics, pedagogy, STEM
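The cause-and-effect correlation mentioned above can be shown in a few lines of code: a noisy linear relationship between a hypothetical "cause" and "effect" still yields a Pearson correlation close to 1. The variable names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
dose = np.linspace(0.0, 10.0, 100)                 # hypothetical cause
response = 3.0 * dose + 2.0 + rng.normal(0.0, 1.0, dose.size)  # noisy effect

r = np.corrcoef(dose, response)[0, 1]              # Pearson correlation
```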
Procedia PDF Downloads 78
663 Influence of Wind Induced Fatigue Damage in the Reliability of Wind Turbines
Authors: Emilio A. Berny-Brandt, Sonia E. Ruiz
Abstract:
Steel tubular towers serving as support structures for large wind turbines are subject to several hundred million stress cycles arising from the turbulent nature of the wind. This causes high-cycle fatigue, which can govern tower design. The practice of maintaining the support structure after wind turbines reach their typical 20-year design life has become common, but without quantifying the changes in the reliability of the tower. There are several studies on this topic, but most of them are based on the S-N curve approach using Miner's rule damage summation method, the de facto standard in the wind industry. However, the qualitative nature of Miner's method makes it desirable to use fracture mechanics to measure the effects of fatigue on the capacity curve of the structure, which is important in order to evaluate the integrity and reliability of these towers. Temporally and spatially varying wind speed time histories are simulated based on power spectral density and coherence functions. The simulations are then applied to a SAP2000 finite element model, and step-by-step analysis is used to obtain the stress time histories for a range of representative wind speeds expected during service conditions of the wind turbine. The rainflow method is then used to obtain cycle and stress range information from each of these time histories, and a statistical analysis is performed to obtain the distribution parameters of each variable. Monte Carlo simulation is used to evaluate crack growth over time at the tower base using the Paris-Erdogan equation. A nonlinear static pushover analysis is then performed to assess the capacity curve of the structure after a number of years. The capacity curves are used to evaluate the changes in reliability of a steel tower located in Oaxaca, Mexico, where wind energy facilities are expected to grow in the near future.
The results show that fatigue at the tower base can have significant effects on the structural capacity of the wind turbine, especially after the 20-year design life, when the crack growth curve starts to behave exponentially.
Keywords: crack growth, fatigue, Monte Carlo simulation, structural reliability, wind turbines
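The crack-growth step can be sketched with the Paris-Erdogan law, da/dN = C·(ΔK)ᵐ with ΔK = Δσ·√(πa), propagated by Monte Carlo over a random annual stress range. The constants and load statistics are illustrative assumptions, not the values for the Oaxaca tower.

```python
import numpy as np

rng = np.random.default_rng(5)

# Paris-Erdogan constants and crack geometry (illustrative, MPa*sqrt(m) units).
C, m_exp = 1.0e-12, 3.0
a0 = 1.0e-3                      # initial crack depth [m]
cycles_per_year, years = 1.0e7, 20

def grow_crack(dS_mean, dS_std):
    """Integrate da/dN year by year with a random annual stress range."""
    a = a0
    for _ in range(years):
        dS = max(rng.normal(dS_mean, dS_std), 0.0)   # stress range [MPa]
        dK = dS * np.sqrt(np.pi * a)                 # stress intensity range
        a += C * dK ** m_exp * cycles_per_year
    return a

final_a = np.array([grow_crack(30.0, 5.0) for _ in range(200)])
```

The resulting crack-depth distribution is what feeds the pushover/capacity analysis: each sampled crack size corresponds to a degraded section at the tower base.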
Procedia PDF Downloads 517
662 High Performance Wood Shear Walls and Dissipative Anchors for Damage Limitation
Authors: Vera Wilden, Benno Hoffmeister, Georgios Balaskas, Lukas Rauber, Burkhard Walter
Abstract:
Lightweight timber frame elements represent an efficient structural solution for wooden multistory buildings. The wall elements of such buildings, which act as shear diaphragms, provide lateral stiffness and resistance to wind and seismic loads. The tendency towards multi-story structures leads to challenges regarding the prediction of the stiffness, strength and ductility of the buildings. Lightweight timber frame elements are built up of several structural parts (sheeting, fasteners, frame, support and anchorages), each of them contributing to the dynamic response of the structure. This contribution describes the experimental and numerical investigation and development of enhanced lightweight timber frame buildings. These developments comprise high-performance timber frame walls with variable arrangements of the sheathing planes, and dissipative anchors at the base of the timber buildings, which reduce damage to the timber structure and can be exchanged after significant earthquakes. In order to prove the performance of the developed elements in the context of a real building, a full-scale two-story building core was designed, erected in the laboratory and tested experimentally for its seismic performance. The results of the tests and a comparison of the test results with the predicted behavior are presented. Observations during the tests also reveal some aspects of the design and detailing which need to be considered in the application of the timber walls in the context of the complete building.
Keywords: dissipative anchoring, full scale test, push-over-test, wood shear walls
Procedia PDF Downloads 255
661 A Sharp Interface Model for Simulating Seawater Intrusion in the Coastal Aquifer of Wadi Nador (Algeria)
Authors: Abdelkader Hachemi, Boualem Remini
Abstract:
Seawater intrusion is a significant challenge faced by coastal aquifers in the Mediterranean basin. This study aims to determine the position of the sharp interface between seawater and freshwater in the aquifer of Wadi Nador, located in the Wilaya of Tipaza, Algeria. A numerical areal sharp interface model using the finite element method is developed to investigate the spatial and temporal behavior of seawater intrusion. The aquifer is assumed to be homogeneous and isotropic. The simulation results are compared with geophysical prospection data obtained through electrical methods in 2011 to validate the model. The simulation results demonstrate a good agreement with the geophysical prospection data, confirming the accuracy of the sharp interface model. The position of the sharp interface in the aquifer is found to be approximately 1617 meters from the sea. Two scenarios are proposed to predict the interface position for the year 2024: one without pumping and the other with pumping. The results indicate a noticeable retreat of the sharp interface position in the first scenario, while a slight decline is observed in the second scenario. The findings of this study provide valuable insights into the dynamics of seawater intrusion in the Wadi Nador aquifer. The predicted changes in the sharp interface position highlight the potential impact of pumping activities on the aquifer's vulnerability to seawater intrusion. This study emphasizes the importance of implementing measures to manage and mitigate seawater intrusion in coastal aquifers. The sharp interface model developed in this research can serve as a valuable tool for assessing and monitoring the vulnerability of aquifers to seawater intrusion.
Keywords: seawater intrusion, sharp interface, coastal aquifer, Algeria
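The sharp-interface concept can be illustrated with the classical Ghyben-Herzberg approximation, in which the interface depth below sea level is about 40 times the freshwater head. This is a back-of-the-envelope relation, not the finite element model used in the study.

```python
# Ghyben-Herzberg: z = rho_f / (rho_s - rho_f) * h, i.e. roughly 40 m of
# interface depth below sea level per metre of freshwater head h.
def interface_depth(h, rho_f=1000.0, rho_s=1025.0):
    return rho_f / (rho_s - rho_f) * h    # depth below sea level [m]

z = interface_depth(0.5)                  # 0.5 m of head keeps it ~20 m deep
```

The relation makes the pumping sensitivity intuitive: every centimetre of head lost to pumping lifts the saltwater interface by roughly 40 cm.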
Procedia PDF Downloads 124
660 Creep Behaviour of Heterogeneous Timber-UHPFRC Beams Assembled by Bonding: Experimental and Analytical Investigation
Authors: K. Kong, E. Ferrier, L. Michel
Abstract:
The purpose of this research was to investigate the creep behaviour of heterogeneous timber-UHPFRC beams. New developments have been made to further improve the structural performance, such as strengthening the timber (glulam) beam by bonding a composite material combining an ultra-high-performance fibre-reinforced concrete (UHPFRC) layer internally reinforced with or without carbon fibre reinforced polymer (CFRP) bars. However, in the design of wooden structures, in addition to the criteria of strength and stiffness, deformability due to the creep of wood, especially in horizontal elements, is also a design criterion. Glulam, UHPFRC and CFRP may be an interesting composite mix to address the creep behaviour of composite structures made of different materials with different rheological properties. In this paper, we describe an experimental and analytical investigation of the creep performance of glulam-UHPFRC-CFRP beams assembled by bonding. The experimental creep investigations were conducted in different environments, indoors and outdoors, under constant loading for approximately a year. The measured results are compared with numerical ones obtained by an analytical model. This model was developed to predict the creep response of glulam-UHPFRC-CFRP beams based on the creep characteristics of the individual components. The results show that heterogeneous glulam-UHPFRC beams provide an improvement in both strength and stiffness, and can also effectively reduce the creep deflection of wooden beams.
Keywords: carbon fibre-reinforced polymer (CFRP) bars, creep behaviour, glulam, ultra-high performance fibre reinforced concrete (UHPFRC)
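The analytical creep prediction can be sketched with a Findley-type power law, in which deflection grows as w(t) = w₀(1 + k·tⁿ); a smaller creep coefficient for the hybrid beam illustrates the reduced creep deflection reported above. All parameter values are illustrative assumptions, not the fitted values from the tests.

```python
# Findley-type creep: total deflection w(t) = w0 * (1 + k * t**n),
# where w0 is the instantaneous (elastic) deflection.
def creep_deflection(w0, k, n, t_days):
    return w0 * (1.0 + k * t_days ** n)

w_glulam = creep_deflection(10.0, 0.25, 0.3, 365.0)     # plain glulam [mm]
w_hybrid = creep_deflection(10.0, 0.10, 0.3, 365.0)     # glulam-UHPFRC-CFRP
```

In a component-based model, each material (glulam, UHPFRC, CFRP) would contribute its own creep coefficient, and the composite response would follow from their combined stiffness.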
Procedia PDF Downloads 406
659 Modeling and Simulation of Vibratory Behavior of Hybrid Smart Composite Plate
Authors: Salah Aguib, Noureddine Chikh, Abdelmalek Khabli, Abdelkader Nour, Toufik Djedid, Lallia Kobzili
Abstract:
This study presents the behavior of a hybrid smart sandwich plate with a magnetorheological elastomer (MRE) core. In order to improve the vibrational behavior of the plate, the pseudo-fibers formed by the effect of the magnetic field on the elastomer filled with ferromagnetic particles are oriented at 45° with respect to the direction of the magnetic field at 0°. The Ritz approach is taken to solve the physical problem, and to verify and compare its results, an analysis using the finite element method was also carried out. The rheological properties of the MRE material at 0° and at 45° are determined experimentally. The studied elastomer is prepared from a mixture of silicone oil, RTV141A polymer and 30% iron particles by total mixture; this mixture is stirred for about 15 minutes to obtain an elastomer paste with good homogenization. To produce the magnetorheological elastomer, the paste is injected into an aluminum mold and subjected to a magnetic field. In our work, we chose an iron filling percentage of 30% to obtain the best characteristics of the MRE. The mechanical characteristics obtained by dynamic mechanical analysis (DMA) are used in the two numerical approaches. The natural frequencies and the modal damping of the sandwich plate are calculated and discussed for various magnetic field intensities, and the results obtained by the two methods are compared. These off-axis anisotropic MRE structures could open up new opportunities in various fields of aeronautics, aerospace, mechanical engineering and civil engineering.
Keywords: hybrid smart sandwich plate, vibratory behavior, FEM, Ritz approach, MRE
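The final algebraic step of both the Ritz and FE modal analyses, solving the generalized eigenproblem K·x = ω²·M·x for natural frequencies, can be sketched on a 2-DOF spring-mass chain. The stiffness and mass values are illustrative, not the sandwich-plate matrices.

```python
import numpy as np

# Generalized eigenproblem K x = w^2 M x for a 2-DOF spring-mass chain,
# the same step that closes a Ritz or FE modal analysis.
k1, k2, m1, m2 = 1000.0, 1500.0, 2.0, 1.0   # N/m and kg (illustrative)
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])
M = np.diag([m1, m2])

eigvals = np.linalg.eigvals(np.linalg.solve(M, K))   # eigenvalues of M^-1 K
omegas = np.sort(np.sqrt(eigvals.real))              # natural freqs [rad/s]
freqs = omegas / (2.0 * np.pi)                       # natural freqs [Hz]
```

In the MRE plate, the magnetic field effectively changes entries of K (via the field-dependent shear modulus), which is why the natural frequencies shift with field intensity.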
Procedia PDF Downloads 70
658 Investigating Effects of Vehicle Speed and Road PSDs on Response of a 35-Ton Heavy Commercial Vehicle (HCV) Using Mathematical Modelling
Authors: Amal G. Kurian
Abstract:
The use of mathematical modeling has seen a considerable boost in recent times with the development of many advanced algorithms and modeling capabilities. The advantages this method has over others are that it stays much closer to standard physical theories and thus represents a better theoretical model, takes less solving time, and allows various parameters to be changed for optimization, which is a big advantage, especially in the automotive industry. This thesis work focuses on a thorough investigation of the effects of vehicle speed and road roughness on the ride and structural dynamic responses of a heavy commercial vehicle. Since commercial vehicles are kept in operation continuously for long periods of time, it is important to study the effects of various physical conditions on the vehicle and its user. For this purpose, various experimental as well as simulation methodologies are adopted, ranging from experimental transfer path analysis to various road scenario simulations. To effectively investigate and eliminate several causes of unwanted responses, an efficient and robust technique is needed. Carrying forward this motivation, the present work focuses on the development of a mathematical model of a 4-axle heavy commercial vehicle (HCV) capable of calculating the responses of the vehicle for different road PSD inputs and vehicle speeds. Outputs from the model include response transfer functions, response PSDs, and the wheel forces experienced. A MATLAB code is developed to implement these objectives in a robust and flexible manner, which can be exploited further in a study of responses for various suspension parameters, loading conditions and vehicle dimensions. The thesis work resulted in quantifying the effect of various physical conditions on the ride comfort of the vehicle. Discomfort increases with velocity, and the road profile also has a considerable effect on driver comfort.
Details of the dominant modes at each frequency are analysed and reported in this work. The reduction in ride height, i.e. the deflection of the tires and suspension with loading, along with the load on each axle, is analysed; the front axle is seen to support a greater portion of the vehicle weight, while more of the payload weight is carried by the fourth and third axles. The deflection of the vehicle is seen to be well within acceptable limits.
Keywords: mathematical modeling, HCV, suspension, ride analysis
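As a hedged illustration of the frequency-domain calculation the abstract describes, the sketch below maps a road-roughness PSD to a body-response PSD through a transfer function. A 2-DOF quarter-car stands in for the thesis's full 4-axle model, and every parameter value is an illustrative assumption, not taken from the thesis.

```python
import numpy as np

# Hedged sketch: response PSD = |H(w)|^2 * road PSD, with H(w) the transfer
# function from road displacement to sprung-mass displacement of a 2-DOF
# quarter-car. All parameter values below are illustrative assumptions.
ms, mu = 4500.0, 500.0        # sprung / unsprung mass [kg]
ks, cs = 2.0e5, 1.5e4         # suspension stiffness [N/m], damping [N s/m]
kt = 1.8e6                    # tyre stiffness [N/m]

f = np.linspace(0.1, 30.0, 600)        # frequency [Hz]
w = 2.0 * np.pi * f

# Solve the 2-DOF equations of motion directly in the frequency domain
H = np.empty(w.size, dtype=complex)
for i, wi in enumerate(w):
    A = np.array([[-ms * wi**2 + 1j * cs * wi + ks, -(1j * cs * wi + ks)],
                  [-(1j * cs * wi + ks), -mu * wi**2 + 1j * cs * wi + ks + kt]])
    b = np.array([0.0, kt])             # road input enters through the tyre
    H[i] = np.linalg.solve(A, b)[0]     # sprung-mass response per unit road input

# ISO 8608-style road displacement PSD at speed V (illustrative roughness)
V, Gd = 20.0, 16e-6
S_road = Gd * V / f**2
S_resp = np.abs(H) ** 2 * S_road        # response PSD
```

At low frequency the transfer function approaches unity (the body follows the road statically), and the response PSD inherits the resonance peaks of the sprung and unsprung modes.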
Procedia PDF Downloads 264
657 Eco-Environmental Vulnerability Evaluation in Mountain Regions Using Remote Sensing and Geographical Information System: A Case Study of Pasol Gad Watershed of Garhwal Himalaya, India
Authors: Suresh Kumar Bandooni, Mirana Laishram
Abstract:
The Mid-Himalaya of Garhwal in Uttarakhand (India) has complex physiographic features with diversified climatic conditions and is therefore susceptible to environmental vulnerability. Natural disasters as well as anthropogenic activities accelerate the rate of environmental vulnerability. To analyse the environmental vulnerability, we have used geoinformatics technologies and numerical models, adopting Spatial Principal Component Analysis (SPCA). The model consists of factors such as slope, land use/land cover, soil, forest fire risk, landslide susceptibility zone, human population density and vegetation index. From this model, the environmental vulnerability integrated index (EVSI) is calculated for the Pasol Gad watershed of Garhwal Himalaya for the years 1987, 2000 and 2013, and the vulnerability is classified into five levels, i.e. very low, low, medium, high and very high, by means of the cluster principle. The results for the eco-environmental vulnerability distribution in the study area show that the medium, high and very high levels dominate, caused mainly by anthropogenic activities and natural disasters. Therefore, proper management for the conservation of resources is an utmost necessity of the present century. It is strongly believed that participation at the community level, along with social workers, institutions and non-governmental organizations (NGOs), has become a must to conserve and protect the environment.
Keywords: eco-environment vulnerability, spatial principal component analysis, remote sensing, geographic information system, institutions, Himalaya
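A hedged sketch of the SPCA-style index construction described above: normalized factor layers are stacked as a pixel-by-factor matrix, principal components are extracted, and the leading components are combined, weighted by explained variance, into one index that is then cut into five classes. The factor values here are synthetic placeholders for the real raster layers.

```python
import numpy as np

# Hedged sketch of a spatial-PCA vulnerability index. Synthetic data stands
# in for the real raster layers (slope, land use, soil, fire risk, ...).
rng = np.random.default_rng(0)
n_pixels, n_factors = 1000, 7
X = rng.random((n_pixels, n_factors))        # factors pre-scaled to [0, 1]

Xc = X - X.mean(axis=0)                      # center each factor
eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigval)[::-1]             # sort components by variance
eigval, eigvec = eigval[order], eigvec[:, order]

k = 3                                        # keep the leading components
scores = Xc @ eigvec[:, :k]
weights = eigval[:k] / eigval[:k].sum()      # variance-explained weights
evsi = scores @ weights                      # vulnerability index per pixel

# Five classes (very low .. very high) by quantile break points
levels = np.digitize(evsi, np.quantile(evsi, [0.2, 0.4, 0.6, 0.8]))
```

With quantile breaks each class covers roughly a fifth of the pixels; a real study would instead use the cluster principle on the index values, as the abstract states.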
Procedia PDF Downloads 264
656 The Effect of CPU Location in Total Immersion of Microelectronics
Authors: A. Almaneea, N. Kapur, J. L. Summers, H. M. Thompson
Abstract:
Meeting the growth in demand for digital services such as social media, telecommunications, and business and cloud services requires large-scale data centres, which has increased their end-use energy demand. Generally, over 30% of data centre power is consumed by the necessary cooling overhead; energy use can therefore be reduced by improving cooling efficiency. Air and liquid can both be used as cooling media for the data centre. Traditional data centre cooling systems use air; however, liquid is recognised as a promising method that can handle the more densely packed data centres. Liquid cooling can be classified into three methods: rack heat exchangers, on-chip heat exchangers and full immersion of the microelectronics. This study quantifies the improvements in heat transfer for the case of immersed microelectronics by varying the CPU and heat sink location. Immersion of the server is achieved by filling the gap between the microelectronics and a water jacket with a dielectric liquid, which convects the heat from the CPU to the water jacket on the opposite side. Heat transfer is governed by two physical mechanisms: natural convection in the fixed enclosure filled with dielectric liquid, and forced convection for the water that is pumped through the water jacket. The model in this study is validated against published numerical and experimental work and shows good agreement. The results show that the heat transfer performance and Nusselt number (Nu) are improved by 89% by placing the CPU and heat sink on the bottom of the microelectronics enclosure.
Keywords: CPU location, data centre cooling, heat sink in enclosures, immersed microelectronics, turbulent natural convection in enclosures
Procedia PDF Downloads 276
655 Magnetohydrodynamic Flow of Viscoelastic Nanofluid and Heat Transfer over a Stretching Surface with Non-Uniform Heat Source/Sink and Non-Linear Radiation
Authors: Md. S. Ansari, S. S. Motsa
Abstract:
In this paper, an analysis has been made of the flow of a non-Newtonian viscoelastic nanofluid over a linearly stretching sheet under the influence of a uniform magnetic field. Heat transfer characteristics are analyzed taking into account the effects of nonlinear radiation and a non-uniform heat source/sink. The transport equations contain the simultaneous effects of Brownian motion and thermophoretic diffusion of nanoparticles. The relevant partial differential equations are non-dimensionalized and transformed into ordinary differential equations using appropriate similarity transformations. The transformed, highly nonlinear, ordinary differential equations are solved by the spectral local linearisation method. The numerical convergence, error and stability analysis of the iteration schemes are presented. The effects of the controlling parameters, namely radiation, space- and temperature-dependent heat source/sink, Brownian motion, thermophoresis, viscoelasticity, Lewis number and the magnetic force parameter, on the flow field, heat transfer characteristics and nanoparticle concentration are examined. The present investigation has many industrial and engineering applications in the fields of coatings and suspensions, cooling of metallic plates, oils and grease, paper production, coal-water or coal-oil slurries, heat exchanger technology, and materials processing.
Keywords: magnetic field, nonlinear radiation, non-uniform heat source/sink, similar solution, spectral local linearisation method, Rosseland diffusion approximation
Procedia PDF Downloads 377
654 Efficient Implementation of Finite Volume Multi-Resolution WENO Scheme on Adaptive Cartesian Grids
Authors: Yuchen Yang, Zhenming Wang, Jun Zhu, Ning Zhao
Abstract:
An easy-to-implement and robust finite volume multi-resolution Weighted Essentially Non-Oscillatory (WENO) scheme is proposed on adaptive Cartesian grids in this paper. The multi-resolution WENO scheme is combined with the ghost-cell immersed boundary method (IBM) and a wall-function technique to solve the Navier-Stokes equations. Unlike k-exact finite volume WENO schemes, which involve large amounts of extra storage, repeatedly solving the matrix generated by a least-squares method, or calculating optimal linear weights on adaptive Cartesian grids, the present methodology adds very little overhead and can be easily implemented in existing edge-based computational fluid dynamics (CFD) codes with minor modifications. Moreover, the linear weights of this adaptive finite volume multi-resolution WENO scheme can be any positive numbers on the condition that their sum is one. This bypasses the calculation of the optimal linear weights and avoids dealing with negative linear weights on adaptive Cartesian grids. Some benchmark viscous problems are numerically solved to show the efficiency and good performance of this adaptive multi-resolution WENO scheme. Compared with a second-order edge-based method, the presented method can be implemented on an adaptive Cartesian grid with slight modification for high Reynolds number problems.
Keywords: adaptive mesh refinement method, finite volume multi-resolution WENO scheme, immersed boundary method, wall-function technique
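For background, a hedged sketch of the classical fifth-order WENO-JS reconstruction that multi-resolution WENO builds on: candidate stencil polynomials are blended by nonlinear weights derived from smoothness indicators. The multi-resolution variant in the abstract relaxes the linear-weight choice to any positive set summing to one; here the standard optimal weights are used purely for illustration.

```python
import numpy as np

# Hedged sketch: classical WENO-JS reconstruction of the interface value
# v_{i+1/2} from five cell values. The paper's multi-resolution scheme
# differs; this shows only the standard weight-blending mechanism.
def weno5(v, eps=1e-6):
    vm2, vm1, v0, vp1, vp2 = v
    # three third-order candidate reconstructions
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    d = np.array([0.1, 0.6, 0.3])          # linear weights, sum to one
    a = d / (eps + np.array([b0, b1, b2]))**2
    wgt = a / a.sum()                      # nonlinear weights
    return float(wgt @ np.array([p0, p1, p2]))
```

For smooth data the nonlinear weights fall back to the linear ones; across a discontinuity the weight of the offending stencil collapses toward zero, suppressing oscillations.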
Procedia PDF Downloads 152
653 Application of Lattice Boltzmann Method to Different Boundary Conditions in a Two Dimensional Enclosure
Authors: Jean Yves Trepanier, Sami Ammar, Sagnik Banik
Abstract:
The Lattice Boltzmann Method is advantageous for simulating complex boundary conditions and solving for fluid flow parameters through streaming and collision processes. This paper studies three different test cases in a confined domain using the Lattice Boltzmann model. 1. An SRT (Single Relaxation Time) approach in the Lattice Boltzmann model is used to simulate lid-driven cavity flow for different Reynolds numbers (100, 400 and 1000) with a domain aspect ratio of 1, i.e., a square cavity. A moment-based boundary condition is used for more accurate results. 2. A thermal lattice BGK (Bhatnagar-Gross-Krook) model is developed for Rayleigh-Benard convection for two test cases, horizontal and vertical temperature difference, considered separately for a Boussinesq incompressible fluid. The Rayleigh number is varied for both test cases (10^3 ≤ Ra ≤ 10^6), keeping the Prandtl number at 0.71. A stability criterion with a precise forcing scheme is used for a greater level of accuracy. 3. The phase change problem governed by the heat-conduction equation is studied using the enthalpy-based Lattice Boltzmann model with a single iteration for each time step, thus reducing the computational time. A double distribution function approach with a D2Q9 (density) model and a D2Q5 (temperature) model is used for two test cases: conduction-dominated melting and convection-dominated melting. The solidification process is also simulated using the enthalpy-based method with a single distribution function using the D2Q5 model, to provide a better understanding of the heat transport phenomenon. The domain for the test cases has an aspect ratio of 2, with some exceptions for a square cavity. An approximate velocity scale is chosen to ensure that the simulations are within the incompressible regime. Different parameters like velocities, temperature, Nusselt number, etc. are calculated for a comparative study with the existing literature.
The simulated results demonstrate excellent agreement with the existing benchmark solutions within an error limit of ±0.05, indicating the viability of this method for complex fluid flow problems.
Keywords: BGK, Nusselt, Prandtl, Rayleigh, SRT
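A hedged, minimal sketch of the first test case's machinery: a D2Q9 single-relaxation-time (SRT/BGK) lattice Boltzmann loop for a lid-driven cavity, with plain bounce-back on the fixed walls and a momentum-corrected bounce-back on the moving lid. The grid size, relaxation time and lid speed are illustrative choices; the paper's more accurate moment-based boundary condition is not reproduced here.

```python
import numpy as np

# Hedged sketch of the D2Q9 SRT (BGK) stream-and-collide cycle.
# All numerical parameters are illustrative assumptions.
nx = ny = 32
tau, u_lid = 0.8, 0.05
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # lattice velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)            # lattice weights
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]                       # opposite directions

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

f = equilibrium(np.ones((ny, nx)), np.zeros((ny, nx)), np.zeros((ny, nx)))
for _ in range(300):
    rho = f.sum(axis=0)
    ux = np.einsum('k,kij->ij', c[:, 0], f) / rho
    uy = np.einsum('k,kij->ij', c[:, 1], f) / rho
    f -= (f - equilibrium(rho, ux, uy)) / tau           # BGK collision
    for k in range(9):                                  # streaming
        f[k] = np.roll(np.roll(f[k], c[k, 0], axis=1), c[k, 1], axis=0)
    fb = f.copy()
    for k in range(9):
        f[k, 0, :] = fb[opp[k], 0, :]                   # bottom wall
        f[k, :, 0] = fb[opp[k], :, 0]                   # left wall
        f[k, :, -1] = fb[opp[k], :, -1]                 # right wall
        # moving lid: bounce-back plus wall-momentum term (rho ~ 1)
        f[k, -1, :] = fb[opp[k], -1, :] + 6 * w[k] * c[k, 0] * u_lid
```

The lid correction injects x-momentum while adding zero net mass, so total mass is conserved to round-off throughout the run.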
Procedia PDF Downloads 132
652 Numerical Simulation of Phase Transfer during Cryosurgery for an Irregular Tumor Using Hybrid Approach
Authors: Rama Bhargava, Surabhi Nishad
Abstract:
The infusion of nanofluids has dramatically enhanced the heat-carrying capacity of fluids, which is applicable to many engineering and medical processes where temperatures below freezing are required. Cryosurgery is an efficient therapy for the treatment of cancer, but sometimes the excessive cooling may harm nearby healthy cells. Efforts are therefore made to develop a model which can generate the low temperature as required. In the present study, a mathematical model based on the bioheat transfer equation is developed to simulate the heat transfer from the probe on a tumor (with an irregular domain), using a hybrid technique consisting of the element-free Galerkin method with the α-family of approximation. The probe is loaded with nanoparticles. The effects of different nanoparticles, namely Al₂O₃, Fe₃O₄ and Au, on the heat-removal rate are obtained. It is observed that the temperature can be brought to the range of −60°C to −30°C at a faster freezing rate on infusion of the different nanoparticles. Besides increasing the freezing rate, the volume of the nanoparticles can also control the size and growth of the ice crystals formed during the freezing process. The study also determines the time required to achieve the desired temperature. The problem is further extended to multiple tumors of different shapes and sizes. The irregular shape of the frozen domain and the direction of ice growth are very sensitive issues, posing a challenge for simulation. The meshfree method has been one of the more accurate methods for such problems, as the domain is naturally irregular. The discretization uses nodes only, and MLS approximation is taken in order to generate the shape functions. Sufficiently accurate results are obtained.
Keywords: cryosurgery, EFGM, hybrid, nanoparticles
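A hedged sketch of the governing equation in its simplest setting: the Pennes bioheat equation in 1-D, marched with an explicit finite-difference scheme from a cryoprobe boundary. The paper solves the 2-D irregular tumour domain with a meshfree EFGM and nanoparticle-enhanced freezing; the generic tissue properties below are assumptions, and the latent heat of phase change is omitted.

```python
import numpy as np

# Hedged sketch of the Pennes bioheat equation,
#   rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + q_m,
# with a cryoprobe face held at -40 C. Property values are generic
# literature-style assumptions, not the paper's.
L, nx = 0.05, 101                       # 5 cm of tissue, grid points
dx = L / (nx - 1)
k, rho, cp = 0.5, 1050.0, 3600.0        # W/m/K, kg/m^3, J/kg/K
wb, rhob, cb = 5e-4, 1060.0, 3600.0     # perfusion [1/s], blood properties
Ta, qm = 37.0, 400.0                    # arterial temp [C], metabolic heat [W/m^3]

alpha = k / (rho * cp)
dt = 0.2 * dx**2 / alpha                # inside the explicit stability limit
T = np.full(nx, 37.0)
T[0] = -40.0                            # cryoprobe face

for _ in range(5000):                   # roughly half an hour of freezing
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    perf = wb * rhob * cb * (Ta - T[1:-1])
    T[1:-1] += dt * (alpha * lap + (perf + qm) / (rho * cp))
    T[0], T[-1] = -40.0, 37.0           # probe and far-field boundaries
```

Blood perfusion acts as a distributed heat source that limits how far the freeze front penetrates, which is exactly the effect that nanoparticle loading is intended to overcome.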
Procedia PDF Downloads 129
651 Intensity-Enhanced Super-Resolution Amplitude Apodization Effect on the Non-Spherical Near-Field Particle-Lenses
Authors: Liyang Yue, Bing Yan, James N. Monks, Rakesh Dhama, Zengbo Wang, Oleg V. Minin, Igor V. Minin
Abstract:
A particle can function as a refractive lens to focus a plane wave, generating a narrow, intense, weakly diverging beam within a sub-wavelength volume, known as the 'photonic jet'. The refractive index contrast (particle to background medium) and the scaling effect of the dielectric particle (its size relative to the wavelength) play the key roles in photonic jet formation, rather than the shape of the particle-lens. The waist (full width at half maximum, FWHM) of a photonic jet can be beyond the diffraction limit and smaller than the Airy disk, which defines the minimum distance at which two objects can be imaged as two instead of one. Many important applications in imaging and sensing have been afforded by the super-resolution characteristic of the photonic jet. It is known that the apodization method, in the form of an amplitude pupil mask centrally situated on a particle-lens, can further reduce the waist of a photonic nanojet; however, it usually lowers the intensity at the focus due to blocking of the incident light. In this paper, an anomalously intensity-enhanced apodization effect is discovered in the near field via numerical simulation. It was also experimentally verified by a scale model using a copper-masked Teflon cuboid solid immersion lens (SIL) with 22 mm side length under radiation of a plane wave with 8 mm wavelength. The peak intensity and the lateral resolution of the produced photonic jet increased by about 36.0% and 36.4% in this approach, respectively. This phenomenon may exhibit a scale effect and would be valid in multiple frequency bands.
Keywords: apodization, particle-lens, scattering, near-field optics
Procedia PDF Downloads 195
650 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation
Authors: Somayeh Komeylian
Abstract:
Direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and resolving multiple signal sources. In this scenario, modeling the antenna array output involves numerous parameters, including noise samples, signal waveform, signal directions, number of signals, and signal-to-noise ratio (SNR); DoA estimation methods therefore rely heavily on good generalization over large training data sets. We have comparatively implemented two optimization models for DoA estimation: (1) a decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) a deep neural network (DNN) with radial basis functions (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for the three classes. However, the accuracy and robustness of DoA estimation remain highly sensitive to technological imperfections of the antenna array, such as non-ideal array design and manufacture, array implementation, mutual coupling, and background radiation, and so the method may fail to deliver high precision. This work therefore further contributes a DNN-RBF model for DoA estimation that addresses the limitations of non-parametric and data-driven methods with respect to array imperfection and generalization. The numerical results of the DNN-RBF model confirm better DoA estimation performance compared with the LS-SVM algorithm. Finally, we evaluated the performance of the two optimization methods using the mean squared error (MSE).
Keywords: DoA estimation, adaptive antenna array, deep neural network, LS-SVM optimization model, radial basis function, MSE
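A hedged sketch of the array output model that such estimators are trained on: X = A(θ)S + N for a uniform linear array with half-wavelength spacing. The beam-scan estimate at the end is only a classical reference point, not the paper's LS-SVM-DDAG or DNN-RBF method; the array size, source angles and SNR are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the ULA signal model X = A(theta) S + N, plus a
# conventional beam-scan spectrum as a classical reference estimator.
rng = np.random.default_rng(1)
m, snapshots = 8, 200                         # elements, time samples
true_doa = np.deg2rad([-20.0, 30.0])          # two source directions

def steering(theta):
    # Half-wavelength ULA: inter-element phase step of pi*sin(theta)
    return np.exp(1j * np.pi * np.arange(m)[:, None] * np.sin(theta))

A = steering(true_doa)                                        # (m, 2)
S = (rng.standard_normal((2, snapshots))
     + 1j * rng.standard_normal((2, snapshots))) / np.sqrt(2)
sigma = 10 ** (-10.0 / 20)                                    # 10 dB SNR
N = sigma * (rng.standard_normal((m, snapshots))
             + 1j * rng.standard_normal((m, snapshots))) / np.sqrt(2)
X = A @ S + N                                                 # array output

# Conventional beamformer spectrum from the sample covariance
R = X @ X.conj().T / snapshots
grid = np.deg2rad(np.linspace(-90, 90, 361))
a = steering(grid)                                            # (m, 361)
spectrum = np.real(np.sum(a.conj() * (R @ a), axis=0))
peaks = np.where((spectrum[1:-1] > spectrum[:-2])
                 & (spectrum[1:-1] > spectrum[2:]))[0] + 1
best = peaks[np.argsort(spectrum[peaks])[-2:]]
peaks_deg = np.sort(np.rad2deg(grid[best]))                   # near [-20, 30]
```

Learning-based estimators replace the spectrum scan with a classifier or regressor trained on many realizations of X, which is where the generalization requirement in the abstract comes from.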
Procedia PDF Downloads 104
649 Virtual Academy Next: Addressing Transition Challenges Through a Gamified Virtual Transition Program for Students with Disabilities
Authors: Jennifer Gallup, Joel Bocanegra, Greg Callan, Abigail Vaughn
Abstract:
Students with disabilities (SWD) engaged in a distance summer program, delivered over multiple virtual mediums, that used gaming principles to teach and practice self-regulated learning (SRL) through the process of exploring possible jobs. Gaming quests were developed to explore jobs and teach transition skills. Students completed specially designed quests that taught and reinforced SRL and problem-solving through individual, group, and teacher-led experiences. The SRL skills learned were reinforced through guided job explorations in MinecraftEDU, Zoom sessions with experts in each career, and collaborations with a team over Marco Polo and Zoom. The quests were developed and laid out on an accessible web page, with active learning opportunities and feedback conducted within multiple virtual mediums, including MinecraftEDU. Gaming mediums actively engage players in role-playing, problem-solving, critical thinking, and collaboration. Gaming has been used as a medium for education since the inception of formal education; board games, in particular, are prehistoric, predating written language. Today, games are widely used in education, often as reinforcers for behavior or rewards for work completion. Games are not often used as a direct method of instruction and assessment; however, including games as an assessment tool and as a form of instruction increases student engagement and participation. Games naturally involve collaboration, problem-solving, and communication. Therefore, our summer program was developed using gaming principles and MinecraftEDU. This manuscript describes a virtual learning summer program, Virtual Academy New and Exciting Transitions (VAN), that was redesigned from a face-to-face setting to a completely online setting with a focus on SWD aged 14-21.
The focus of VAN was to address transition planning needs such as problem-solving skills, self-regulation, interviewing, job exploration, and communication for transition-aged youth diagnosed with various disabilities (e.g., learning disabilities, attention-deficit hyperactivity disorder, intellectual disability, Down syndrome, autism spectrum disorder).
Keywords: autism, disabilities, transition, summer program, gaming, simulations
Procedia PDF Downloads 78
648 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model
Authors: Bokkasam Sasidhar, Ibrahim Aljasser
Abstract:
The problem of finding optimal schedules for each piece of equipment in a production process is considered. The process consists of a single manufacturing stage that can handle different types of products, where changing over from one product type to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each product in a set-up. The changeover costs increase with the number of set-ups; hence, to minimize the costs associated with product changeover, the planning should process similar product types successively so that the total number of changeovers, and in turn the associated set-up costs, are minimized. The cost-minimization problem is equivalent to minimizing the number of set-ups, or equivalently maximizing the capacity utilization between set-ups, i.e. maximizing the total capacity utilization. Further, production is usually planned against customers' orders, and different customers' orders are generally assigned one of two priorities: 'normal' or 'priority'. The production planning problem in such a situation can be formulated as a Multiple Arc Network (MAN) model and solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that customer-wise delivery schedules are fulfilled, keeping in view the customer priorities. Algorithms are presented for solving the MAN formulation of production planning with customer priorities. The application of the model is demonstrated through numerical examples.
Keywords: scheduling, maximal flow problem, multiple arc network model, optimization
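The MAN-based schedule ultimately rests on a maximum-flow computation over a capacitated network. As a hedged sketch, here is a compact Edmonds-Karp (shortest-augmenting-path) max-flow solver; the tiny example network is illustrative only, not the paper's multiple-arc formulation with priority arcs.

```python
from collections import deque

# Hedged sketch: Edmonds-Karp max flow on a dict-of-dicts capacity graph.
def max_flow(capacity, source, sink):
    """capacity: dict u -> {v: arc capacity}. Returns the max flow value."""
    res = {u: dict(vs) for u, vs in capacity.items()}      # residual caps
    for u, vs in capacity.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)         # reverse arcs
    flow = 0
    while True:
        parent = {source: None}                            # BFS tree
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, cap in res[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow                                    # no augmenting path
        path, v = [], sink                                 # recover the path
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)              # bottleneck
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Illustrative network (arcs as set-up capacities, names hypothetical)
caps = {'s': {'a': 10, 'b': 5}, 'a': {'t': 7, 'b': 3}, 'b': {'t': 8}}
total = max_flow(caps, 's', 't')    # 7 (s-a-t) + 5 (s-b-t) + 3 (s-a-b-t) = 15
```

Priority orders can be accommodated on top of such a solver by saturating the priority arcs first, which is the role of the second algorithm mentioned in the abstract.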
Procedia PDF Downloads 408
647 Modeling and Simulation of Primary Atomization and Its Effects on Internal Flow Dynamics in a High Torque Low Speed Diesel Engine
Authors: Muteeb Ulhaq, Rizwan Latif, Sayed Adnan Qasim, Imran Shafi
Abstract:
Diesel engines are among the most efficient, reliable and adaptable internal combustion engines. Most research and development up to now has been directed towards high-speed diesel engines for commercial use. In these engines, the objective is to optimize acceleration while reducing exhaust emissions to meet international standards. In high-torque low-speed engines the requirements are altogether different: these engines are mostly used in the maritime industry, in agriculture, and as static engines driving compressors, etc. Unfortunately, due to a lack of research and development, these engines have low efficiency and high soot emissions, and one of the most effective ways to overcome these issues is efficient combustion in the engine cylinder; the fuel spray atomization process plays a vital role in defining mixture formation, fuel consumption, combustion efficiency and soot emissions. A comprehensive understanding of the fuel spray characteristics and the atomization process is therefore of great importance. In this research, we examine the effects of primary breakup modeling on the spray characteristics under diesel engine conditions. The KH-ACT model is applied to capture the effect of aerodynamics in the engine cylinder as well as the cavitation and turbulence generated inside the injector. It is a modified form of the most commonly used KH model, which considers only the aerodynamically induced breakup based on the Kelvin-Helmholtz instability. Our model is extensively evaluated by performing 3-D time-dependent simulations in OpenFOAM, an open-source flow solver. Spray characteristics like spray penetration, liquid length, spray cone angle and Sauter mean diameter (SMD) were validated by comparing the OpenFOAM results against Matlab. Including the effects of cavitation and turbulence enhances primary breakup, leading to smaller droplet sizes, a decrease in liquid penetration, and an increase in the radial dispersion of the spray.
All these properties favor early evaporation of the fuel, which enhances engine efficiency.
Keywords: Kelvin-Helmholtz instability, OpenFOAM, primary breakup, Sauter mean diameter, turbulence
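Among the validated spray characteristics, the Sauter mean diameter has a simple definition worth making concrete: it is the diameter of a droplet with the same volume-to-surface ratio as the whole population. A hedged sketch:

```python
# Hedged sketch of the Sauter mean diameter (SMD, d32):
#   d32 = sum(d_i^3) / sum(d_i^2)
def sauter_mean_diameter(diameters):
    num = sum(d**3 for d in diameters)
    den = sum(d**2 for d in diameters)
    return num / den

# A monodisperse spray has SMD equal to the droplet diameter; adding the
# smaller droplets that enhanced primary breakup produces pulls the SMD down.
smd_mono = sauter_mean_diameter([10.0] * 5)                # = 10.0
smd_mixed = sauter_mean_diameter([10.0, 10.0, 2.0, 2.0])   # < 10.0
```

This is why the cavitation- and turbulence-enhanced breakup in the abstract shows up directly as a lower SMD.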
Procedia PDF Downloads 217
646 Population Pharmacokinetics of Levofloxacin and Moxifloxacin, and the Probability of Target Attainment in Ethiopian Patients with Multi-Drug Resistant Tuberculosis
Authors: Temesgen Sidamo, Prakruti S. Rao, Eleni Akllilu, Workineh Shibeshi, Yumi Park, Yong-Soon Cho, Jae-Gook Shin, Scott K. Heysell, Stellah G. Mpagama, Ephrem Engidawork
Abstract:
The fluoroquinolones (FQs) are used off-label for the treatment of multidrug-resistant tuberculosis (MDR-TB) and are under evaluation for shortening the duration of drug-susceptible TB in recently prioritized regimens. Within the class, levofloxacin (LFX) and moxifloxacin (MXF) play a substantial role in ensuring successful treatment outcomes; however, sub-therapeutic plasma concentrations of either drug may drive unfavorable outcomes. To the best of our knowledge, the pharmacokinetics of LFX and MXF in Ethiopian patients with MDR-TB have not yet been investigated. Therefore, the aim of this study was to develop population pharmacokinetic (PopPK) models of LFX and MXF and to assess the probability of target attainment (PTA), defined by the ratio of the area under the plasma concentration-time curve over 24 h (AUC0-24) to the in vitro minimum inhibitory concentration (MIC) (AUC0-24/MIC), in Ethiopian MDR-TB patients. Steady-state plasma samples were collected from 39 MDR-TB patients enrolled in the programmatic treatment course, and drug concentrations were determined using optimized liquid chromatography-tandem mass spectrometry. In addition, the in vitro MICs of the patients' pretreatment clinical isolates were determined. PopPK modeling and simulations were run at various doses, and PK parameters were estimated. The effects of covariates on the PK parameters, and the PTA for maximum mycobacterial kill and resistance prevention, were also investigated. LFX and MXF both fit one-compartment models with adjustments. The apparent volume of distribution (V) and clearance (CL) of LFX were influenced by serum creatinine (Scr), whereas the absorption constant (Ka) and V of MXF were influenced by Scr and BMI, respectively.
The PTA for LFX maximal mycobacterial kill at the critical MIC of 0.5 mg/L was 29%, 62%, and 95% with the simulated 750 mg, 1000 mg, and 1500 mg doses, respectively, whereas the PTA for resistance prevention at 1500 mg was only 4.8%, with none of the lower doses achieving this target. At the critical MIC of 0.25 mg/L, there was no difference in the PTA (94.4%) for maximum mycobacterial kill among the simulated MXF doses (600 mg, 800 mg, and 1000 mg), but the PTA for resistance prevention improved proportionately with dose. Standard LFX and MXF doses may not provide adequate drug exposure; LFX PopPK is more predictable for maximum mycobacterial kill, whereas MXF's resistance-prevention target attainment increases with dose. Scr and BMI are likely to be important covariates in dose optimization or therapeutic drug monitoring (TDM) studies in Ethiopian patients.
Keywords: population PK, PTA, moxifloxacin, levofloxacin, MDR-TB patients, Ethiopia
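A hedged sketch of how a PTA figure like those above is computed from a PopPK model: simulate between-subject variability in clearance, form AUC0-24 = daily dose / CL, and count the fraction of simulated subjects reaching the AUC0-24/MIC target. The clearance distribution and target value below are illustrative stand-ins, not the fitted Ethiopian-population estimates.

```python
import numpy as np

# Hedged Monte Carlo sketch of probability of target attainment (PTA).
# cl_typical, omega, and target are illustrative assumptions only.
rng = np.random.default_rng(42)

def pta(dose_mg, mic, cl_typical=7.0, omega=0.3, target=146.0, n=20000):
    """P(AUC0-24 / MIC >= target) under lognormal between-subject CL."""
    cl = cl_typical * np.exp(omega * rng.standard_normal(n))   # L/h
    auc = dose_mg / cl                                         # mg*h/L
    return float(np.mean(auc / mic >= target))

# PTA rises with dose at a fixed MIC, mirroring the dose trend reported:
p750, p1000, p1500 = (pta(d, mic=0.5) for d in (750, 1000, 1500))
```

In a real analysis the AUC would come from the full covariate-adjusted one-compartment model rather than a single lognormal clearance, but the counting step is the same.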
Procedia PDF Downloads 123
645 Artificial Intelligence in Bioscience: The Next Frontier
Authors: Parthiban Srinivasan
Abstract:
With recent advances in computational power and access to sufficient data in the biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public-domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We use an automated process to generate an array of numerical descriptors (features) for each molecule, and redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms, among which we found the Random Forest algorithm to be the best-performing choice for this analysis: it captures non-linear relationships in the data and forms a prediction model with reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties; we expect the DNN to improve the accuracy of the predictions. This presentation will review these machine learning and deep learning methods and our implementation protocols, and discuss their usefulness in biomedical and health informatics.
Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction
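A hedged sketch of the descriptor-generation step: a few crude numerical features computed directly from a SMILES string. Real pipelines use cheminformatics toolkits that produce hundreds of descriptors; these hand-rolled counts are illustrative placeholders only.

```python
# Hedged sketch: toy numerical descriptors from a SMILES string.
# Feature names and heuristics here are illustrative assumptions.
def smiles_descriptors(smiles):
    return {
        'length': len(smiles),
        # ring-closure digits appear in pairs, one pair per ring closure
        'rings': sum(smiles.count(d) for d in '123456789') // 2,
        'aromatic_atoms': sum(smiles.count(ch) for ch in 'cnos'),
        'branches': smiles.count('('),
        'double_bonds': smiles.count('='),
        'nitrogens': smiles.upper().count('N'),
        'oxygens': smiles.upper().count('O'),
    }

# Feature vectors like these, one row per compound, feed the Random Forest.
aspirin = smiles_descriptors('CC(=O)Oc1ccccc1C(=O)O')   # aspirin
```

The iterative elimination mentioned in the abstract would then drop descriptors that are constant, mutually redundant, or uncorrelated with the endpoint.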
Procedia PDF Downloads 362
644 Influence of Surface Wettability on Imbibition Dynamics of Protein Solution in Microwells
Authors: Himani Sharma, Amit Agrawal
Abstract:
Stability of the Cassie and Wenzel wetting states depends on the intrinsic contact angle and the geometric features of a surface, which has been exploited in capturing biofluids in microwells. However, the mechanism of imbibition of biofluids into the microwells is not well understood in terms of the wettability of the substrate. In this work, we experimentally demonstrate the filling dynamics of protein solutions in hydrophilic and hydrophobic microwells. Towards this, we utilized a lotus leaf as a mold to fabricate microwells on a polydimethylsiloxane (PDMS) surface. The lotus-leaf texture, consisting of micrometer-sized blunt conical pillars with a height of 8-15 µm and a diameter of 3-8 µm, was transferred onto the PDMS. Furthermore, the PDMS surface was treated with oxygen plasma to render it hydrophilic. Droplets of 10 µL containing fluorescein isothiocyanate (FITC)-labelled bovine serum albumin (BSA) were rested on both hydrophobic (θa = 108°, where θa is the apparent contact angle) and hydrophilic (θa = 60°) PDMS surfaces. Time-dependent fluorescence microscopy was conducted on these modified PDMS surfaces by recording the fluorescence intensity over a 5-minute period. It was observed that initially (at t = 1 min) FITC-BSA accumulated on the periphery of both hydrophilic and hydrophobic microwells due to incomplete penetration of the liquid-gas meniscus. This peripheral deposition of FITC-BSA did not change with time for the hydrophobic surfaces, whereas complete filling occurred in the hydrophilic microwells (at t = 5 min). This is attributed to a gradual movement of the three-phase contact line along the vertical surface of the hydrophilic microwells, as compared to stable pinning in the hydrophobic microwells, as confirmed by Surface Evolver simulations. In addition, if cavities are present on a hydrophobic surface, air bubbles will be trapped inside the cavities once an aqueous solution is placed over the surface, resulting in the Cassie-Baxter wetting state.
This condition hinders the trapping of proteins inside the microwells. Thus, it is necessary to impart hydrophilicity to the microwell surfaces so as to induce the Wenzel state, in which the entire solution is in full contact with the walls of the microwells. Imbibition of the microwells by the protein solutions was analyzed in terms of fluorescence intensity versus time. The present work underlines the importance of the geometry of the microwells and the surface wettability of the substrate in the wetting and effective capture of solid sub-phases in biofluids.
Keywords: BSA, microwells, surface evolver, wettability
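A hedged numeric sketch of the two wetting relations behind these observations: the Wenzel equation (liquid fills the texture) and the Cassie-Baxter equation (air trapped in the wells). The roughness ratio and wetted fraction used here are illustrative, not values measured from the lotus-leaf replica.

```python
import numpy as np

# Hedged sketch of the two apparent-contact-angle relations:
#   Wenzel (filled texture):       cos(theta_W)  = r * cos(theta)
#   Cassie-Baxter (trapped air):   cos(theta_CB) = f*(cos(theta) + 1) - 1
# with r >= 1 the roughness ratio and f the wetted area fraction.
def wenzel(theta_deg, r):
    arg = np.clip(r * np.cos(np.radians(theta_deg)), -1.0, 1.0)
    return np.degrees(np.arccos(arg))

def cassie_baxter(theta_deg, f):
    arg = f * (np.cos(np.radians(theta_deg)) + 1.0) - 1.0
    return np.degrees(np.arccos(arg))

# Roughness amplifies the intrinsic tendency: the plasma-treated
# (hydrophilic, 60 deg) surface becomes more hydrophilic and fills,
# while the untreated (108 deg) surface traps air in the wells.
hydrophilic = wenzel(60.0, r=1.5)          # below 60 deg
hydrophobic = cassie_baxter(108.0, f=0.4)  # above 108 deg
```

This is the quantitative form of the qualitative argument above: only the Wenzel state puts the solution in full contact with the microwell walls.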
Procedia PDF Downloads 206