Search results for: computational pipeline
105 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
There is dispersed energy in Radio Frequencies (RF) that can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wire connections or a battery supply requirement. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene, and new materials. A secondary step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by the combination of one or more Schottky diodes connected in series or shunt. In the case of a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies. Therefore, low values of series resistance, junction capacitance, and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations such as voltage doublers or modified bridge converters are used. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in the rectifier design. Electronic circuit designs are commonly analyzed through simulation in a SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and the analysis of quasi-static electromagnetic field interactions, i.e., at low frequency, these simulators are limited and cannot properly model microwave hybrid circuits containing both lumped and distributed elements. This work therefore proposes the electromagnetic modelling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-high frequencies, with application to rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas. For this purpose, the numerical method FDTD (Finite-Difference Time-Domain) is applied, and SPICE computational tools are used for comparison. In the present work, the Ampere-Maxwell equation is first applied to the current density and electric field equations within the FDTD method, together with its circuital relation to the voltage drop across the modeled component for the lumped-parameter case, using the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) formulations proposed in the literature for the passive components and for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems
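As a rough illustration of the lumped-element coupling described above (a generic sketch of the LE-FDTD idea with assumed notation, not the exact formulation referenced by the authors), the Ampere-Maxwell update in a cell containing a lumped device adds a current density term driven by the device I-V relation:

```latex
% Illustrative LE-FDTD coupling (notation assumed): the lumped current I_L(V) enters the
% Ampere-Maxwell law as an extra current density spread over the cell cross-section.
\nabla \times \mathbf{H} \;=\; \varepsilon \frac{\partial \mathbf{E}}{\partial t} \;+\; \mathbf{J}_c \;+\; \mathbf{J}_L ,
\qquad
\mathbf{J}_L \;=\; \frac{I_L(V)}{\Delta x \, \Delta y}\,\hat{\mathbf{z}} ,
\qquad
V \;=\; E_z \,\Delta z .
% For a Schottky diode, a typical device relation is I_L(V) = I_s \left( e^{\,qV/(nkT)} - 1 \right).
```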
Procedia PDF Downloads 131
104 Thermal Imaging of Aircraft Piston Engine in Laboratory Conditions
Authors: Lukasz Grabowski, Marcin Szlachetka, Tytus Tulwin
Abstract:
The main task of the engine cooling system is to maintain its average operating temperatures within strictly defined limits. Too high or too low average temperatures result in accelerated wear or even damage to the engine or its individual components. In order to avoid local overheating or significant temperature gradients, leading to high stresses in the component, the aim is to ensure an even flow of air. In the case of analyses related to heat exchange, one of the main problems is the comparison of temperature fields, because standard measuring instruments such as thermocouples or thermistors only provide information about the temperature course at a given point. Thermal imaging tests can be helpful in this case. With appropriate camera settings and taking into account environmental conditions, we are able to obtain accurate temperature fields in the form of thermograms. Emission of heat from the engine to the engine compartment is an important issue when designing a cooling system. Also, in the case of liquid cooling, the main sources of heat in the form of emissions from the engine block, cylinders, etc. should be identified. It is important to redesign the engine compartment ventilation system. Ensuring proper cooling of an aircraft reciprocating engine is difficult not only because of the variable operating range but mainly because of the different cooling conditions related to changes in flight speed or altitude. Engine temperature also has a direct and significant impact on the properties of engine oil, which changes under the influence of this parameter, in particular its viscosity. A viscosity that is too low or too high can result in fast wear of engine parts. One of the ways to determine the temperatures occurring on individual parts of the engine is the use of thermal imaging measurements. The article presents the results of preliminary thermal imaging tests of an aircraft piston diesel engine with a maximum power of about 100 HP. In order to perform the heat emission tests of the tested engine, the ThermaCAM S65 thermovision monitoring system from FLIR (Forward-Looking Infrared) together with the ThermaCAM Researcher Professional software was used. The measurements were carried out after the engine warm-up. The engine speed was 5300 rpm. The measurements were taken for the following environmental parameters: air temperature: 17 °C, ambient pressure: 1004 hPa, relative humidity: 38%. The temperature distributions on the engine cylinder and on the exhaust manifold were analysed. Thermal imaging tests made it possible to relate the results of simulation tests to the real object by measuring the rib temperature of the cylinders. The results obtained are necessary to develop a CFD (Computational Fluid Dynamics) model of heat emission from the engine bay. The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).
Keywords: aircraft, piston engine, heat, emission
Procedia PDF Downloads 118
103 Examination of Porcine Gastric Biomechanics in the Antrum Region
Authors: Sif J. Friis, Mette Poulsen, Torben Strom Hansen, Peter Herskind, Jens V. Nygaard
Abstract:
Gastric biomechanics governs a large range of scientific and engineering fields, from gastric health issues to interaction mechanisms between external devices and the tissue. Determination of the mechanical properties of the stomach is thus crucial, both for understanding gastric pathologies and for the development of medical concepts and device designs. Although the field of gastric biomechanics is emerging, advances within medical devices interacting with the gastric tissue could greatly benefit from an increased understanding of tissue anisotropy and heterogeneity. Thus, in this study, uniaxial tensile tests of gastric tissue were executed in order to study biomechanical properties within the same individual as well as across individuals. With biomechanical tests in the strain domain, tissue from the antrum region of six porcine stomachs was tested using eight samples from each stomach (n = 48). The samples were cut so that they followed dominant fiber orientations. Accordingly, from each stomach, four samples were longitudinally oriented, and four samples were circumferentially oriented. A step-wise stress relaxation test with five incremental steps up to 25 % strain with 200 s rest periods for each step was performed, followed by a 25 % strain ramp test with three different strain rates. Theoretical analysis of the data provided stress-strain/time curves as well as 20 material parameters (e.g., stiffness coefficients, dissipative energy densities, and relaxation time coefficients) used for statistical comparisons between samples from the same stomach as well as between stomachs. Results showed that, for the 20 material parameters, heterogeneity across individuals, when extracting samples from the same area, was of the same order of variation as that of the samples within the same stomach. For samples from the same stomach, the mean deviation percentage for all 20 parameters was 21 % and 18 % for longitudinal and circumferential orientations, compared to 25 % and 19 %, respectively, for samples across individuals. This observation was also supported by a nonparametric one-way ANOVA analysis, where results showed that the 20 material parameters from each of the six stomachs came from the same distribution at a level of statistical significance of P > 0.05. Direction-dependency was also examined, and it was found that the maximum stress for longitudinal samples was significantly higher than for circumferential samples. However, there were no significant differences in the 20 material parameters, with the exception of the equilibrium stiffness coefficient (P = 0.0039) and two other stiffness coefficients found from the relaxation tests (P = 0.0065, 0.0374). Nor did the stomach tissue show any significant differences between the three strain rates used in the ramp test. Heterogeneity within the same region has not been examined before, yet the importance of the sampling area has been demonstrated in this study. All material parameters found are essential to understand the passive mechanics of the stomach and may be used for mathematical and computational modeling. Additionally, an extension of the protocol used may be relevant for compiling a comparative study between the human stomach and the pig stomach.
Keywords: antrum region, gastric biomechanics, loading-unloading, stress relaxation, uniaxial tensile testing
Procedia PDF Downloads 432
102 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity
Authors: Yuri Laevsky, Tatyana Nosova
Abstract:
The phenomenon of filtration gas combustion (FGC) was discovered experimentally in the early 1980s. It has a number of important applications in such areas as chemical technologies, fire-explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction in a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front has different modes, and this investigation is focused on the low-velocity regime. The main characteristic of the process is the velocity of the combustion front propagation. Computation of this characteristic encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperature of the porous medium and the temperature of the gas and the mass conservation law for the relative concentration of the reacting component of the gas mixture. In this case, the homogenization of the model is performed using the two-temperature approach, in which the solid and gas phases are specified at each point of the continuous medium with Newtonian heat exchange between them. The construction of a computational scheme is based on the principles of the mixed finite element method with the usage of a regular mesh. The approximation in time is performed by an explicit-implicit difference scheme. Special attention was given to the determination of the combustion front propagation velocity. Straightforward computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term ‘front propagation velocity’ makes sense for settled motion, when certain analytical formulae linking the velocity and the equilibrium temperature hold. The numerical implementation of one such formula, leading to stable computation of the instantaneous front velocity, has been proposed. The algorithm obtained has been applied in the subsequent numerical investigation of the FGC process. In this way, the dependence of the main characteristics of the process on various physical parameters has been studied. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity has been investigated. It has also been reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a breakdown occurs from slow combustion front propagation to rapid propagation. Approximate boundaries of such an interval have been calculated for some specific parameters. All the results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques for calculating the instantaneous velocity of the combustion wave makes it possible to consider a semi-Lagrangian approach to the solution of the problem.
Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation
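As a rough illustration of why a naive grid derivative of the front position is noisy (a generic sketch of front tracking and window-averaged velocity estimation, not the authors' analytical velocity formula), one can locate the point where the temperature crosses a threshold and fit the displacement over a time window:

```python
import numpy as np

def front_position(x, temperature, threshold):
    """Locate the front as the first grid point where the temperature exceeds a threshold,
    refined by linear interpolation between neighbouring nodes (illustration only)."""
    idx = int(np.argmax(temperature >= threshold))
    if idx == 0:
        return x[0]
    x0, x1 = x[idx - 1], x[idx]
    t0, t1 = temperature[idx - 1], temperature[idx]
    return x0 + (threshold - t0) / (t1 - t0) * (x1 - x0)

def front_velocity(times, positions, window=20):
    """Least-squares slope of position versus time over a sliding window: far less
    noisy than a pointwise finite difference of two consecutive positions."""
    t = np.asarray(times[-window:])
    s = np.asarray(positions[-window:])
    return np.polyfit(t, s, 1)[0]
```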
Procedia PDF Downloads 301
101 Virtual Experiments on Coarse-Grained Soil Using X-Ray CT and Finite Element Analysis
Authors: Mohamed Ali Abdennadher
Abstract:
Digital rock physics, an emerging field leveraging advanced imaging and numerical techniques, offers a promising approach to investigating the mechanical properties of granular materials without extensive physical experiments. This study focuses on using X-Ray Computed Tomography (CT) to capture the three-dimensional (3D) structure of coarse-grained soil at the particle level, combined with finite element analysis (FEA) to simulate the soil's behavior under compression. The primary goal is to establish a reliable virtual testing framework that can replicate laboratory results and offer deeper insights into soil mechanics. The methodology involves acquiring high-resolution CT scans of coarse-grained soil samples to visualize internal particle morphology. These CT images undergo processing through noise reduction, thresholding, and watershed segmentation techniques to isolate individual particles, preparing the data for subsequent analysis. A custom Python script is employed to extract particle shapes and conduct a statistical analysis of particle size distribution. The processed particle data then serves as the basis for creating a finite element model comprising approximately 500 particles subjected to one-dimensional compression. The FEA simulations explore the effects of mesh refinement and friction coefficient on stress distribution at grain contacts. A multi-layer meshing strategy is applied, featuring finer meshes at inter-particle contacts to accurately capture mechanical interactions and coarser meshes within particle interiors to optimize computational efficiency. Despite the known challenges in parallelizing FEA to high core counts, this study demonstrates that an appropriate domain-level parallelization strategy can achieve significant scalability, allowing simulations to extend to very high core counts. The results show a strong correlation between the finite element simulations and laboratory compression test data, validating the effectiveness of the virtual experiment approach. Detailed stress distribution patterns reveal that soil compression behavior is significantly influenced by frictional interactions, with frictional sliding, rotation, and rolling at inter-particle contacts being the primary deformation modes under low to intermediate confining pressures. These findings highlight that CT data analysis combined with numerical simulations offers a robust method for approximating soil behavior, potentially reducing the need for physical laboratory experiments.
Keywords: X-Ray computed tomography, finite element analysis, soil compression behavior, particle morphology
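A minimal sketch of the particle-isolation steps described above (noise reduction, thresholding, watershed segmentation, and particle-size statistics), assuming a pre-cropped CT volume stored as a NumPy array and using scikit-image; the file name and parameter values are illustrative, not those of the study:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import regionprops

volume = np.load("ct_volume.npy")                      # hypothetical grayscale CT volume
smoothed = ndi.median_filter(volume, size=3)           # noise reduction
binary = smoothed > threshold_otsu(smoothed)           # global thresholding (solid vs. void)

# Watershed on the negated distance map separates touching particles
distance = ndi.distance_transform_edt(binary)
seeds = peak_local_max(distance, labels=binary, min_distance=10)
markers = np.zeros(binary.shape, dtype=int)
markers[tuple(seeds.T)] = np.arange(1, len(seeds) + 1)
labels = watershed(-distance, markers, mask=binary)

# Particle size distribution from the labelled volume
diameters = np.array([r.equivalent_diameter for r in regionprops(labels)])
print(f"{labels.max()} particles, mean equivalent diameter {diameters.mean():.1f} voxels")
```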
Procedia PDF Downloads 29
100 2,7-Diazaindole as a Photophysical Probe for Excited State Hydrogen/Proton Transfer
Authors: Simran Baweja, Bhavika Kalal, Surajit Maity
Abstract:
Photoinduced tautomerization reactions have been the centre of attention among the scientific community over the past several decades because of their significance in various biological systems. 7-azaindole (7AI) is considered a model system for DNA base pairing and for understanding the role of such tautomerization reactions in mutations. To the best of our knowledge, extensive studies have been carried out on 7-azaindole and its solvent clusters exhibiting proton/hydrogen transfer in both the solution and gas phases. Derivatives of the above molecule, like 2,7- and 2,6-diazaindole, are proposed to have even better photophysical properties due to the presence of the aza group at the 2nd position. There are studies in the solution phase that suggest the relevance of these molecules, but no experimental studies have been reported in the gas phase yet. In our current investigation, we present the first gas phase spectroscopic data of 2,7-diazaindole (2,7-DAI) and its solvent cluster (2,7-DAI-H2O). We have employed state-of-the-art laser spectroscopic methods such as fluorescence excitation (LIF), dispersed fluorescence (DF), resonant two-photon ionization-time of flight mass spectrometry (2C-R2PI), photoionization efficiency spectroscopy (PIE), and IR-UV double resonance spectroscopy, i.e., fluorescence-dip infrared spectroscopy (FDIR) and resonant ion-dip infrared spectroscopy (IDIR), to understand the electronic structure of the molecule. The origin band corresponding to the S1 ← S0 transition of the bare 2,7-DAI is found to be positioned at 33910 cm-1, whereas the origin band corresponding to the S1 ← S0 transition of the 2,7-DAI-H2O is positioned at 33074 cm-1. The red-shifted transition in the case of the solvent cluster suggests the enhanced feasibility of excited state hydrogen/proton transfer. The ionization potential of the 2,7-DAI molecule is found to be 8.92 eV, which is significantly higher than the previously reported value for the 7AI molecule (8.11 eV), making it a comparatively complex molecule to study. The ionization potential is reduced by 0.14 eV in the case of the 2,7-DAI-H2O cluster (8.78 eV) compared to that of 2,7-DAI. Moreover, on comparison with the available literature values of 7AI, we found the origin bands of 2,7-DAI and 2,7-DAI-H2O to be red-shifted by -729 and -280 cm-1, respectively. The ground and excited state N-H stretching frequencies of the 2,7-DAI molecule were determined using fluorescence-dip infrared spectra (FDIR) and resonant ion dip infrared spectroscopy (IDIR), obtained at 3523 and 3467 cm-1, respectively. The lower value of vNH in the electronically excited state of 2,7-DAI implies a higher acidity of the group compared to the ground state. Moreover, we have carried out extensive computational analysis, which suggests that the energy barrier in the excited state decreases significantly as we increase the number of catalytic solvent molecules (S = H2O, NH3) as well as the polarity of the solvent molecules. We found that the ammonia molecule is a better candidate for hydrogen transfer compared to water because of its higher gas-phase basicity. Further studies are underway to understand the excited state dynamics and photochemistry of such N-rich chromophores.
Keywords: excited state hydrogen transfer, supersonic expansion, gas phase spectroscopy, IR-UV double resonance spectroscopy, laser induced fluorescence, photoionization efficiency spectroscopy
Procedia PDF Downloads 75
99 Hyperelastic Constitutive Modelling of the Male Pelvic System to Understand the Prostate Motion, Deformation and Neoplasms Location with the Influence of MRI-TRUS Fusion Biopsy
Authors: Muhammad Qasim, Dolors Puigjaner, Josep Maria López, Joan Herrero, Carme Olivé, Gerard Fortuny
Abstract:
Computational modeling of the human pelvis using the finite element (FE) method has become extremely important to understand the mechanics of prostate motion and deformation when transrectal ultrasound (TRUS) guided biopsy is performed. The number of reliable and validated hyperelastic constitutive FE models of the male pelvis region is limited, and the available models do not precisely describe the anatomical behavior of the pelvic organs, mainly the prostate and the location of its neoplasms. The motion and deformation of the prostate during TRUS-guided biopsy make it difficult to know the location of potential lesions in advance. When using this procedure, practitioners can only provide rough estimates of the lesion locations. Consequently, multiple biopsy samples are required to target one single lesion. In this study, the whole pelvis model (comprising the rectum, bladder, pelvic muscles, prostate transitional zone (TZ), and peripheral zone (PZ)) is used for the simulation results. An isotropic hyperelastic approach (Signorini model) was used for all the soft tissues except the vesical muscles. The vesical muscles are assumed to have a linear elastic behavior due to the lack of experimental data to determine the constants involved in hyperelastic models. The tissue and organ geometry is taken from the existing literature for the 3D meshes. The biomechanical parameters were then obtained from different testing techniques described in the literature. The acquired parametric values for uniaxial stress/strain data are used in the Signorini model to assess the anatomical behavior of the pelvis model. Five mesh nodes, representing small prostate lesions, are selected prior to biopsy, and each lesion's final position is determined when a TRUS probe force of 30 N is applied to the inner rectum wall. Code_Aster open-source software is used for the numerical simulations. Moreover, the overall effects of pelvic organ deformation were demonstrated when TRUS-guided biopsy is performed. The deformation of the prostate and the displacement of the neoplasms showed that the material properties assigned to the organs parametrically alter the resulting lesion migration. As a result, the distance traveled by these lesions ranged between 3.77 and 9.42 mm. The lesion displacement and organ deformation are compared and analyzed with our previous study, in which we used linear elastic properties for all pelvic organs. Furthermore, axial and sagittal slices taken from Magnetic Resonance Imaging (MRI) and TRUS images are also visually compared with our preliminary study.
Keywords: code-aster, magnetic resonance imaging, neoplasms, transrectal ultrasound, TRUS-guided biopsy
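For reference, a commonly quoted form of the Signorini hyperelastic strain-energy density is sketched below (a generic statement of the model under the usual isochoric-invariant notation; the constants and any volumetric term actually used in the study are not specified here):

```latex
% Signorini strain-energy density (deviatoric part); C_{10}, C_{01}, C_{20} are material
% constants fitted to uniaxial stress/strain data, \bar{I}_1, \bar{I}_2 are the isochoric invariants.
W \;=\; C_{10}\left(\bar{I}_1 - 3\right) \;+\; C_{01}\left(\bar{I}_2 - 3\right) \;+\; C_{20}\left(\bar{I}_1 - 3\right)^{2}
```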
Procedia PDF Downloads 87
98 Analysis of Influencing Factors on Infield-Logistics: A Survey of Different Farm Types in Germany
Authors: Michael Mederle, Heinz Bernhardt
Abstract:
The management of machine fleets and autonomous vehicle control will considerably increase efficiency in future agricultural production. Entire process chains in particular, e.g., harvesting complexes with several interacting combine harvesters, grain carts, and removal trucks, provide a lot of optimization potential. Organization and pre-planning make these efficiency reserves accessible. One way to achieve this is to optimize infield path planning. Autonomous machinery in particular requires precise specifications of infield logistics to be navigated effectively and process-optimized in the fields, individually or in machine complexes. In the past, a lot of theoretical optimization has been done regarding infield logistics, mainly based on field geometry. However, there are reasons why farmers often do not apply the infield strategy suggested by mathematical route planning tools. To make computational optimization more useful for farmers, this study focuses on these influencing factors by means of expert interviews. As a result, practice-oriented navigation not only to the field but also within the field will be possible. The survey study is intended to cover the entire range of German agriculture. Rural mixed farms with simple technology equipment are considered as well as large agricultural cooperatives which farm thousands of hectares using track guidance and various other electronic assistance systems. First results show that farm managers using guidance systems increasingly attune their infield logistics to direction-giving obstacles such as power lines. As a consequence, they can avoid inefficient boom flipping while doing plant protection with the sprayer. Livestock farmers rather focus on the application of organic manure with its specific requirements concerning road conditions, landscape terrain, or field access points. Cultivation of sugar beets makes great demands on infield patterns because of its particularities, such as the row crop system or high logistics demands. Furthermore, several machines working in the same field simultaneously influence each other, regardless of whether or not they are of the same type. Specific infield strategies are always based on interactions of several different influences and decision criteria. Single working steps like tillage, seeding, plant protection, or harvest mostly cannot be considered individually. The entire production process has to be taken into consideration to determine the right infield logistics. One long-term objective of this examination is to integrate the obtained influences on infield strategies as decision criteria into an infield navigation tool. In this way, path planning will become more practical for farmers, which is a basic requirement for automatic vehicle control and increasing process efficiency.
Keywords: autonomous vehicle control, infield logistics, path planning, process optimizing
Procedia PDF Downloads 233
97 A Fast Multi-Scale Finite Element Method for Geophysical Resistivity Measurements
Authors: Mostafa Shahriari, Sergio Rojas, David Pardo, Angel Rodriguez-Rozas, Shaaban A. Bakr, Victor M. Calo, Ignacio Muga
Abstract:
Logging-While-Drilling (LWD) is a technique to record down-hole logging measurements while drilling the well. Nowadays, LWD devices (e.g., nuclear, sonic, resistivity) are mostly used commercially for geo-steering applications. Modern borehole resistivity tools are able to measure all components of the magnetic field by incorporating tilted coils. The depth of investigation of LWD tools is limited compared to the thickness of the geological layers. Thus, it is a common practice to approximate the Earth’s subsurface with a sequence of 1D models. For a 1D model, we can reduce the dimensionality of the problem using a Hankel transform. We can solve the resulting system of ordinary differential equations (ODEs) either (a) analytically, which results in a so-called semi-analytic method after performing a numerical inverse Hankel transform, or (b) numerically. Semi-analytic methods are used by the industry due to their high performance. However, they have major limitations, namely: - The analytical solution of the aforementioned system of ODEs exists only for piecewise constant resistivity distributions. For arbitrary resistivity distributions, the solution of the system of ODEs is unknown by today’s knowledge. - In geo-steering, we need to solve inverse problems with respect to the inversion variables (e.g., the constant resistivity value of each layer and bed boundary positions) using a gradient-based inversion method. Thus, we need to compute the corresponding derivatives. However, the analytical derivatives for cross-bedded formations and the analytical derivatives with respect to the bed boundary positions have not been published, to the best of our knowledge. The main contribution of this work is to overcome the aforementioned limitations of semi-analytic methods by solving each 1D model (associated with each Hankel mode) using an efficient multi-scale finite element method. The main idea is to divide our computations into two parts: (a) offline computations, which are independent of the tool positions, are precomputed only once and used for all logging positions, and (b) online computations, which depend upon the logging position. With the above method, (a) we can consider arbitrary resistivity distributions along the 1D model, and (b) we can easily and rapidly compute the derivatives with respect to any inversion variable at a negligible additional cost by using an adjoint state formulation. Although the proposed method is slower than semi-analytic methods, its computational efficiency is still high. In the presentation, we shall derive the mathematical variational formulation, describe the proposed multi-scale finite element method, and verify the accuracy and efficiency of our method by performing a wide range of numerical experiments and comparing the numerical solutions to semi-analytic ones when the latter are available.
Keywords: Logging-While-Drilling, resistivity measurements, multi-scale finite elements, Hankel transform
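A minimal sketch of the offline/online split described above, assuming (for illustration only) that each Hankel mode leads to a sparse linear system whose matrix does not depend on the logging position, while the right-hand side does; the matrix, its size, and the source construction are placeholders, not the authors' formulation:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000  # illustrative number of degrees of freedom for one Hankel mode

def make_rhs(position, n=n):
    """Hypothetical source term concentrated near the tool position (illustration only)."""
    b = np.zeros(n)
    b[int(position) % n] = 1.0
    return b

# Offline: assemble and factorize the position-independent system matrix once per mode
K = sp.eye(n, format="csc") * 10.0 + sp.random(n, n, density=0.002, format="csc")
lu = spla.splu(K)                      # reused for every logging position

# Online: each logging position only changes the right-hand side, so solving is cheap
logging_positions = np.linspace(0.0, 500.0, 200)
solutions = [lu.solve(make_rhs(z)) for z in logging_positions]
print(len(solutions), solutions[0].shape)
```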
Procedia PDF Downloads 386
96 Two-Dimensional Dynamics Motion Simulations of F1 Rear Wing-Flap
Authors: Chaitanya H. Acharya, Pavan Kumar P., Gopalakrishna Narayana
Abstract:
In the realm of aerodynamics, numerous vehicles incorporate moving components to enhance their performance. For instance, airliners deploy hydraulically operated flaps and ailerons during take-off and landing, while Formula 1 racing cars utilize hydraulic tubes and actuators for various components, including the Drag Reduction System (DRS). The DRS, consisting of a rear wing and adjustable flaps, plays a crucial role in overtaking manoeuvres. The DRS has two positions: the default position with the flaps down, providing high downforce, and the lifted position, which reduces drag, allowing for increased speed and aiding in overtaking. Swift deployment of the DRS during races is essential for overtaking competitors. The fluid flow over the rear wing flap becomes intricate during deployment, involving flow reversal and operational changes, leading to unsteady flow physics that significantly influence aerodynamic characteristics. Understanding the drag and downforce during DRS deployment is crucial for determining race outcomes. While experiments can yield accurate aerodynamic data, they can be expensive and challenging to conduct across varying speeds. Computational Fluid Dynamics (CFD) emerges as a cost-effective solution to predict drag and downforce across a range of speeds, especially with the rapid deployment of the DRS. This study employs the finite volume-based solver Ansys Fluent, incorporating dynamic mesh motions and a turbulence model to capture the complex flow phenomena associated with the moving rear wing flap. A dedicated section of the rear wing-flap is considered in the present simulations, and the aerodynamics of these sections closely resembles that of S1223 aerofoils. Before the simulations of the rear wing-flap aerofoil, the numerical results were validated using experimental data from an NLR flap aerofoil case encompassing different flap angles at two distinct angles of attack. An increase in lift and drag with increasing flap angle is observed for a given angle of attack. The simulation methodology for the rear wing-flap aerofoil case involves a specific time duration before lifting the flap. During this period, the drag and downforce values are determined as 330 N and 1800 N, respectively. Following the flap lift, a noteworthy reduction in drag to 55 % and a decrease in downforce to 17 % are observed. This understanding is critical for making instantaneous decisions regarding the deployment of the Drag Reduction System (DRS) at specific speeds, thereby influencing the overall performance of the Formula 1 racing car. Hence, this work emphasizes the utilization of the dynamic mesh motion methodology to predict the aerodynamic characteristics during the deployment of the DRS in a Formula 1 racing car.
Keywords: DRS, CFD, drag, downforce, dynamic mesh motion
Procedia PDF Downloads 94
95 Linguistic Cyberbullying, a Legislative Approach
Authors: Simona Maria Ignat
Abstract:
Bullying online has been an increasingly studied topic in recent years. Different approaches, psychological, linguistic, or computational, have been applied. To the best of our knowledge, an internationally agreed definition and set of characteristics of the phenomenon, serving as a common framework, are still lacking. Thus, the objectives of this paper are the identification of bullying utterances on Twitter and the formulation of their algorithms. This research paper is focused on the identification of words or groups of words, categorized as “utterances”, with a bullying effect, from the Twitter platform, extracted according to a set of legislative criteria. This set is the result of analysis followed by synthesis of law documents on (online) bullying from the United States of America, the European Union, and Ireland. The outcome is a linguistic corpus with approximately 10,000 entries. The methods applied to the first objective have been the following. Discourse analysis has been applied to identify keywords with a bullying effect in texts from the Google search engine, Images link. Transcription and anonymization have been applied to texts grouped in CL1 (Corpus Linguistics 1). The keyword search method and the legislative criteria have been used to identify bullying utterances from Twitter. Texts with at least 30 representations on Twitter have been grouped; they form the second linguistic corpus, Bullying Utterances from Twitter (CL2). The entries have been identified by using the legislative criteria on the BoW (bag-of-words) method principle. BoW is a method of extracting words or groups of words with the same meaning in any context. The methods applied to reach the second objective are the conversion of parts of speech into alphabetical and numerical symbols and the writing of the bullying utterances as algorithms. The converted form of parts of speech has been chosen on the criterion of relevance within the bullying message. The inductive reasoning approach has been applied in sampling and identifying the algorithms. The results are groups with interchangeable elements. The outcomes convey two aspects of bullying: the form and the content or meaning. The form conveys the intentional intimidation against somebody, expressed at the level of the texts by grammatical and lexical marks. This outcome has applicability in forensic linguistics for establishing the intentionality of an action. Another outcome of form is a complex of graphemic variations essential in detecting harmful texts online. This research enriches the lexicon already known on the topic. The second aspect, the content, revealed topics like threat, harassment, assault, or suicide. They are subcategories of a broader harmful content which is a constant concern for task forces and legislators at national and international levels. These topic outcomes of the dataset are a valuable source for detection. The analysis of content revealed algorithms and lexicons which could be applied to other harmful content. A third outcome of the content analysis is the set of stylistic conveyances, which is a rich source for discourse analysis of social media platforms. In conclusion, this linguistic corpus is structured on legislative criteria and could be used in various fields.
Keywords: corpus linguistics, cyberbullying, legislation, natural language processing, twitter
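A minimal sketch of the BoW principle described above, using a small hypothetical lexicon of legislatively derived utterances and scikit-learn's CountVectorizer to flag tweets that contain at least one lexicon entry; the lexicon entries and tweets are illustrative, not taken from the corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical lexicon entries derived from legislative criteria (illustration only)
lexicon = ["threat", "harassment", "nobody likes you", "go away forever"]

tweets = [
    "nobody likes you, just go away forever",
    "great game last night, congrats to the team",
]

# Bag-of-words over 1- to 3-word sequences restricted to the lexicon vocabulary:
# a non-zero row means the tweet contains at least one lexicon utterance.
vectorizer = CountVectorizer(vocabulary=lexicon, ngram_range=(1, 3))
counts = vectorizer.fit_transform(tweets)
flagged = counts.sum(axis=1).A1 > 0
print(list(zip(tweets, flagged)))
```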
Procedia PDF Downloads 86
94 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis
Authors: Mehrnaz Mostafavi
Abstract:
The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans
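A minimal sketch of the sentence-level classification idea described above (labelling report sentences as mentioning a lung nodule or not) using a TF-IDF representation and a machine-learning classifier; the example sentences, labels, and model choice are illustrative and are not the study's actual SQL/NLP algorithm:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, f1_score
from sklearn.pipeline import make_pipeline

# Hypothetical labelled sentences extracted from CT reports (1 = mentions a lung nodule)
train_sentences = [
    "A 6 mm nodule is seen in the right upper lobe.",
    "No focal pulmonary lesion is identified.",
    "Stable 4 mm left lower lobe nodule, unchanged from prior.",
    "The heart size is within normal limits.",
]
train_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(train_sentences, train_labels)

test_sentences = [
    "There is a new 8 mm nodule in the lingula.",
    "Degenerative changes are noted in the thoracic spine.",
]
test_labels = [1, 0]
pred = model.predict(test_sentences)
print(confusion_matrix(test_labels, pred))
print(f1_score(test_labels, pred, zero_division=0))
```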
Procedia PDF Downloads 100
93 Constitutive Androstane Receptor (CAR) Inhibitor CINPA1 as a Tool to Understand CAR Structure and Function
Authors: Milu T. Cherian, Sergio C. Chai, Morgan A. Casal, Taosheng Chen
Abstract:
This study aims to use CINPA1, a recently discovered small-molecule inhibitor of the xenobiotic receptor CAR (constitutive androstane receptor), to understand the binding modes of CAR and to guide CAR-mediated gene expression profiling studies in human primary hepatocytes. CAR and PXR are xenobiotic sensors that respond to drugs and endobiotics by modulating the expression of metabolic genes that enhance detoxification and elimination. Elevated levels of drug metabolizing enzymes and efflux transporters resulting from CAR activation promote the elimination of chemotherapeutic agents, leading to reduced therapeutic effectiveness. Multidrug resistance in tumors after chemotherapy could be associated with errant CAR activity, as shown in the case of neuroblastoma. CAR inhibitors used in combination with existing chemotherapeutics could be utilized to attenuate multidrug resistance and resensitize chemo-resistant cancer cells. CAR and PXR have many overlapping modulating ligands as well as many overlapping target genes, which has confounded attempts to understand and regulate receptor-specific activity. Through a directed screening approach, we previously identified a new CAR inhibitor, CINPA1, which is novel in its ability to inhibit CAR function without activating PXR. The cellular mechanisms by which CINPA1 inhibits CAR function were also extensively examined, along with its pharmacokinetic properties. CINPA1 binding was shown to change CAR-coregulator interactions as well as modify CAR recruitment at DNA response elements of regulated genes. CINPA1 was shown to be broken down in the liver to form two, mostly inactive, metabolites. The structure-activity differences of CINPA1 and its metabolites were used to guide computational modeling using the CAR-LBD structure. To rationalize how ligand binding may lead to different CAR pharmacology, an analysis of the docked poses of human CAR bound to CITCO (a CAR activator) vs. CINPA1 or the metabolites was conducted. From our modeling, strong hydrogen bonding of CINPA1 with N165 and H203 in the CAR-LBD was predicted. These residues were validated to be important for CINPA1 binding using single amino-acid CAR mutants in a CAR-mediated functional reporter assay. Also predicted were residues making key hydrophobic interactions with CINPA1 but not with the inactive metabolites. Some of these hydrophobic amino acids were also identified, and additionally, the differential coregulator interactions of these mutants were determined in mammalian two-hybrid systems. CINPA1 represents an excellent starting point for future optimization into highly relevant probe molecules to study the function of the CAR receptor in normal physiology and pathophysiology, and for the possible development of therapeutics (e.g., for resensitizing chemoresistant neuroblastoma cells).
Keywords: antagonist, chemoresistance, constitutive androstane receptor (CAR), multi-drug resistance, structure activity relationship (SAR), xenobiotic resistance
Procedia PDF Downloads 287
92 Simulation Research of the Aerodynamic Drag of 3D Structures for Individual Transport Vehicle
Authors: Pawel Magryta, Mateusz Paszko
Abstract:
In today's world, a big problem of individual mobility occurs, especially in large urban areas. Commonly used means of mass transport, such as buses, trains, or cars, do not fulfill their tasks, i.e., they are not able to meet the increasing mobility needs of the growing urban population. In addition, there are limitations on civil infrastructure construction in the cities. Nowadays, the most common idea is to transfer part of the urban transport to the level of air transport. However, to do this, there is a need to develop an individual flying transport vehicle. The biggest problem in this concept is the type of propulsion system from which the vehicle will obtain a lifting force. Standard propeller drives appear to be too noisy. One idea is to provide the required take-off and flight power using an innovative ejector system. This kind of system will be designed through a suitable choice of a three-dimensional geometric structure with a specially shaped nozzle in order to generate overpressure. The authors' idea is to make a device that would allow the overpressure to be accumulated using a five-sided geometrical structure that is limited on one side by the blowing air jet. In order to test this hypothesis, a computer simulation study of the aerodynamic drag of such 3D structures has been made. Based on the results of these studies, tests on a real model were also performed. The final stage of the work was a comparative analysis of the results of the simulations and the real tests. The CFD simulation studies of the air flow were conducted using the Star CD - Star Pro 3.2 software. The design of the virtual model was made using the Catia v5 software. Apart from the objective of obtaining an advanced aviation propulsion system, all of the tests and modifications of the 3D structures were also aimed at achieving high efficiency of this device while maintaining the ability to generate high overpressure values. This was possible only in the case of a large mass flow rate of air. All these aspects could be verified using CFD methods by observing the flow of the working medium in the tested model. During the simulation tests, the distribution and size of the pressure and velocity vectors were analyzed. Simulations were made with different boundary conditions (supply air pressure) but with fixed external conditions (ambient temperature, ambient pressure, etc.). The maximum value of the obtained overpressure is 2 kPa. This value is too low for the device to power an individual transport vehicle. Both the simulation model and the real object show a linear dependence of the obtained overpressure values on the different geometrical parameters of the three-dimensional structures. Application of computational software greatly simplifies and streamlines the design and simulation capabilities. This work has been financed by the Polish Ministry of Science and Higher Education.
Keywords: aviation propulsion, CFD, 3d structure, aerodynamic drag
Procedia PDF Downloads 310
91 A New Model to Perform Preliminary Evaluations of Complex Systems for the Production of Energy for Buildings: Case Study
Authors: Roberto de Lieto Vollaro, Emanuele de Lieto Vollaro, Gianluca Coltrinari
Abstract:
The building sector is responsible, in many industrialized countries, for about 40% of the total energy requirements, so it seems necessary to devote some effort to this area in order to achieve a significant reduction of energy consumption and of greenhouse gas emissions. The paper presents a study aiming at providing a design methodology able to identify the best configuration of the building/plant system from a technical, economic, and environmental point of view. Normally, the classical approach involves an analysis of the building's energy loads under steady-state conditions and a subsequent selection of measures aimed at improving the energy performance, based on the previous experience of the architects and engineers in the design team. Instead, the proposed approach uses a sequence of two well-known, scientifically validated calculation methods (TRNSYS and RETScreen) that allow quite a detailed feasibility analysis. To assess the validity of the calculation model, an existing historical building in Central Italy, which will be the object of restoration and preservative redevelopment, was selected as a case study. The building is made of a basement and three floors, with a total floor area of about 3,000 square meters. The first step has been the determination of the heating and cooling energy loads of the building in a dynamic regime by means of TRNSYS, which allows the real energy needs of the building to be simulated as a function of its use. Traditional methodologies, based as they are on steady-state conditions, cannot faithfully reproduce the effects of varying climatic conditions and of the inertial properties of the structure. With TRNSYS it is possible to obtain quite accurate and reliable results that allow effective building-HVAC system combinations to be identified. The second step has consisted of using the output data obtained with TRNSYS as input to the calculation model RETScreen, which enables different system configurations to be compared from the energy, environmental, and financial points of view, with an analysis of investment and of operation and maintenance costs, thus allowing the economic benefit of possible interventions to be determined. The classical methodology often leads to the choice of conventional plant systems, while RETScreen provides a financial-economic assessment for innovative energy systems with low environmental impact. Computational analysis can help in the design phase, particularly in the case of complex structures with centralized plant systems, by comparing the data returned by the calculation model RETScreen for different design options. For example, the analysis performed on the building, taken as a case study, found that the most suitable plant solution, taking into account technical, economic, and environmental aspects, is the one based on a CCHP system (Combined Cooling, Heating, and Power) using an internal combustion engine.
Keywords: energy, system, building, cooling, electrical
Procedia PDF Downloads 573
90 In Silico Modeling of Drugs Milk/Plasma Ratio in Human Breast Milk Using Structural Descriptors
Authors: Navid Kaboudi, Ali Shayanfar
Abstract:
Introduction: Feeding infants with safe milk from the beginning of their life is an important issue. Drugs used by mothers can affect the composition of milk in a way that is not only unsuitable but also toxic for infants. Consumption of permeable drugs by the mother during that sensitive period could lead to serious side effects for the infant. Due to the ethical restrictions of drug testing on humans, especially women, during their lactation period, computational approaches based on structural parameters could be useful. The aim of this study is to develop mechanistic models to predict the M/P ratio of drugs during the breastfeeding period based on their structural descriptors. Methods: Two hundred and nine different chemicals with their M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value according to the Malone classification: 1) drugs with M/P > 1, which are considered high risk; 2) drugs with M/P ≤ 1, which are considered low risk. Thirty-eight chemical descriptors were calculated with ACD/Labs 6.00 and DataWarrior software in order to assess the penetration during the breastfeeding period. Later on, four specific models based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens were established for the prediction. The mentioned descriptors can predict the penetration with an acceptable accuracy. For the remaining compounds of each model (N = 147, 158, 160, and 174 for models 1 to 4, respectively), binary regression with SPSS 21 was done in order to give us a model to predict the penetration ratio of compounds. Only structural descriptors with a p-value < 0.1 remained in the final model. Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, and total surface area were obtained in order to predict the penetration of drugs into human milk during the breastfeeding period. About 3-4% of milk consists of lipids, and the amount of lipid increases after parturition. Lipid-soluble drugs diffuse along with fats from plasma to the mammary glands. Lipophilicity plays a vital role in predicting the penetration class of drugs during the lactation period. It was shown in the logistic regression models that compounds with a number of hydrogen bond acceptors, PSA, and TSA above 5, 90, and 25, respectively, are less permeable to milk because they are less soluble in the milk fat. The pH of milk is acidic, and due to that, basic compounds tend to be more concentrated in milk than in plasma, while acidic compounds may show lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during the lactation period. The obtained models can lead to a higher speed in the drug development process, saving energy and costs. Milk/plasma ratio assessment of drugs requires multiple steps of animal testing, which has its own ethical issues. QSAR modeling could help scientists reduce the amount of animal testing, and our models are also suitable for that purpose.
Keywords: logistic regression, breastfeeding, descriptors, penetration
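A minimal sketch of the kind of binary (logistic) regression step described above, assuming a table of descriptor values and a binary high/low-risk label per compound; the descriptor names and data are placeholders, and statsmodels is used here purely for illustration in place of SPSS:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical descriptor table: one row per compound, label 1 = M/P > 1 (high risk)
descriptors = pd.DataFrame({
    "h_bond_acceptors": rng.integers(0, 10, 150),
    "polar_surface_area": rng.uniform(10, 150, 150),
    "total_surface_area": rng.uniform(50, 400, 150),
})
labels = (rng.random(150) > 0.5).astype(int)

X = sm.add_constant(descriptors)                 # intercept term
model = sm.Logit(labels, X).fit(disp=False)      # binary logistic regression

# Keep only descriptors with p-value < 0.1, mirroring the selection rule described above
pvalues = model.pvalues.drop("const")
selected = pvalues[pvalues < 0.1].index.tolist()
print(model.summary())
print("Retained descriptors:", selected)
```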
Procedia PDF Downloads 71
89 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem to support safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear, racist, etc. words), and thus context is not needed for a decision, or are ambiguous, vague, or unclear even in the presence of context; in addition, the data contains labeling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations both in the data they are trained on (the same problems stated above) and in the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous models on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
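A minimal sketch of a hierarchical, conversation-aware classifier of the kind described above (a word-level encoder per utterance followed by an utterance-level encoder over the thread); this is a generic illustration in PyTorch, not the architecture proposed in the paper:

```python
import torch
import torch.nn as nn

class HierarchicalToxicityClassifier(nn.Module):
    """Encodes each utterance from its tokens, then encodes the sequence of
    utterances, and predicts a toxicity label for the final (target) utterance."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=128, n_labels=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.word_encoder = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.utterance_encoder = nn.GRU(2 * hid_dim, hid_dim, batch_first=True)
        self.classifier = nn.Linear(hid_dim, n_labels)

    def forward(self, conversations):
        # conversations: (batch, n_utterances, n_tokens) integer token ids
        b, u, t = conversations.shape
        words, _ = self.word_encoder(self.embedding(conversations.view(b * u, t)))
        utterance_vecs = words.mean(dim=1).view(b, u, -1)   # one vector per utterance
        _, thread_state = self.utterance_encoder(utterance_vecs)
        return self.classifier(thread_state[-1])            # logits for the target utterance

model = HierarchicalToxicityClassifier(vocab_size=5000)
dummy_batch = torch.randint(1, 5000, (4, 3, 12))  # 4 threads, 3 utterances, 12 tokens each
print(model(dummy_batch).shape)                   # torch.Size([4, 2])
```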
Procedia PDF Downloads 170
88 Polyurethane Membrane Mechanical Property Study for a Novel Carotid Covered Stent
Authors: Keping Zuo, Jia Yin Chia, Gideon Praveen Kumar Vijayakumar, Foad Kabinejadian, Fangsen Cui, Pei Ho, Hwa Liang Leo
Abstract:
Carotid artery is the major vessel supplying blood to the brain. Carotid artery stenosis is one of the three major causes of stroke, and stroke is the fourth leading cause of death and the first leading cause of disability in most developed countries. Although there is an increasing interest in carotid artery stenting for the treatment of cervical carotid artery bifurcation atherosclerotic disease, currently available bare metal stents cannot provide adequate protection against the detachment of plaque fragments over the diseased carotid artery, which could result in the formation of micro-emboli and subsequent stroke. Our research group has recently developed a novel preferential covered stent for the carotid artery that aims to prevent friable fragments of atherosclerotic plaques from flowing into the cerebral circulation, yet retains the ability to preserve the flow of the external carotid artery. Preliminary animal studies have demonstrated the potential of this novel covered-stent design for the treatment of carotid atherosclerotic stenosis. The purpose of this study is to evaluate the biomechanical properties of PU membranes of different concentration configurations in order to refine the stent coating technique and enhance the clinical performance of our novel carotid covered stent. Results from this study also provide material property information crucial for accurate simulation analysis of our stents. Method: Medical-grade polyurethane (ChronoFlex AR) was used to prepare PU membrane specimens. Different PU membrane configurations were subjected to uniaxial tests: 22%, 16%, and 11% PU solutions were made by mixing the original solution with a proper amount of dimethylacetamide (DMAC). The specimens were then immersed in physiological saline solution for 24 hours before testing. All specimens were moistened with saline solution before mounting and subsequent uniaxial testing. The specimens were preconditioned by loading the PU membrane sample to a peak stress of 5.5 MPa for 10 consecutive cycles at a rate of 50 mm/min. The specimens were then stretched to failure at the same loading rate. Result: The results showed that the stress-strain response curves of all PU membrane samples exhibited nonlinear characteristics. For the ultimate failure stress, the 22% PU membrane was significantly higher than the 16% one (p<0.05). In general, our preliminary results showed that the lower concentration PU membrane is stiffer than the higher concentration one. From the perspective of mechanical properties, the 22% PU membrane is a better choice for the covered stent. Interestingly, the hyperelastic Ogden model is able to accurately capture the nonlinear, isotropic stress-strain behavior of the PU membrane, with an R2 of 0.9977 ± 0.00172. This result will be useful for future biomechanical analysis of our stent designs and will play an important role in the computational modeling of our covered stent fatigue study.
Keywords: carotid artery, covered stent, nonlinear, hyperelastic, stress, strain
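A minimal sketch of fitting a one-term incompressible Ogden model to uniaxial nominal stress-stretch data, assuming the standard uniaxial Ogden expression; the data values and initial guesses below are placeholders, not the measured membrane data:

```python
import numpy as np
from scipy.optimize import curve_fit

def ogden_uniaxial(stretch, mu, alpha):
    """Nominal (engineering) stress of a one-term incompressible Ogden model in
    uniaxial tension: P = (2*mu/alpha) * (lam**(alpha - 1) - lam**(-alpha/2 - 1))."""
    return (2.0 * mu / alpha) * (stretch ** (alpha - 1.0) - stretch ** (-alpha / 2.0 - 1.0))

# Hypothetical test data: stretch = 1 + engineering strain, stress in MPa
stretch = np.linspace(1.0, 1.6, 20)
stress = ogden_uniaxial(stretch, mu=8.0, alpha=2.5) + np.random.normal(0.0, 0.05, stretch.size)

params, _ = curve_fit(ogden_uniaxial, stretch, stress, p0=[5.0, 2.0])
mu_fit, alpha_fit = params
print(f"mu = {mu_fit:.3f} MPa, alpha = {alpha_fit:.3f}")
```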
Procedia PDF Downloads 31087 Evaluation of Coupled CFD-FEA Simulation for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham
Abstract:
Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on its surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system intended primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies against a replicated physical experimental standards test, LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way coupling. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated thermal data, reflecting the fire's behavior, into the FEA solver throughout the simulation; likewise, the mechanical changes are passed back to the CFD solver so that geometric changes are included within the solution. For the CFD calculations, the Fire Dynamics Simulator (FDS) has been chosen because its numerical scheme is adapted to focus solely on fire problems. Validation of FDS applicability has been achieved in past benchmark cases. In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire due to its crushable foam plasticity model, which can accurately model the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers together, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables. The comparison data include gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids
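The difference between one-way and two-way coupling can be illustrated schematically. The snippet below (Python/NumPy, with hypothetical temperature data) shows the one-way case, where CFD-computed surface temperatures are interpolated onto the structural solver's time increments and applied as a thermal boundary condition; it is a sketch of the data flow only, not the FDS-2-ABAQUS implementation.

import numpy as np

# Hypothetical CFD output: panel surface temperature history from the fire solver
t_cfd = np.array([0, 60, 300, 600, 900, 1800], dtype=float)      # time, s
temp_cfd = np.array([20, 150, 620, 780, 830, 860], dtype=float)  # temperature, degC

def one_way_coupling(fea_times):
    # One-way: thermal data flow only from CFD to FEA; the structural
    # deformation never feeds back into the fire simulation.
    return np.interp(fea_times, t_cfd, temp_cfd)

fea_times = np.linspace(0.0, 1800.0, 61)   # 30 s structural increments over a 30 min test
thermal_bc = one_way_coupling(fea_times)
print(thermal_bc[:5])                      # temperatures handed to the FEA solver

In a two-way scheme, the exchange would run inside a loop: after each coupling interval the deformed panel geometry (and any failed regions) would be returned to the fire solver before the next thermal update.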
Procedia PDF Downloads 8986 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker
Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, some encoding methods, such as one-hot encoding or k-mers, have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they can provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.Keywords: DNA encoding, machine learning, Fourier transform
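To make the compared encodings concrete, the following sketch (Python/NumPy) shows minimal versions of three of them: one-hot, k-mer counts, and a Fourier-transform encoding. The numeric mapping A/C/G/T to 0-3 used before the FFT is an assumption for illustration; the study's exact preprocessing may differ.

# Minimal sketches of three DNA sequence encodings compared in the study.
import numpy as np
from itertools import product

BASES = "ACGT"

def one_hot(seq):
    idx = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        mat[i, idx[b]] = 1.0
    return mat

def kmer_counts(seq, k=3):
    kmers = ["".join(p) for p in product(BASES, repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return np.array([counts[km] for km in kmers], dtype=float)

def fourier_encoding(seq):
    # Assumed numeric mapping before the FFT; the magnitude spectrum is the feature vector
    numeric = np.array([{"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}[b] for b in seq])
    return np.abs(np.fft.rfft(numeric))

seq = "ATGCGTACGTTAGC"
print(one_hot(seq).shape, kmer_counts(seq).shape, fourier_encoding(seq).shape)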
Procedia PDF Downloads 2385 Steel Concrete Composite Bridge: Modelling Approach and Analysis
Authors: Kaviyarasan D., Satish Kumar S. R.
Abstract:
India is vast in area and population and has great scope for international business, so the roadway and railway networks connecting the country are expected to grow considerably. There are numerous rail-cum-road bridges constructed across many major rivers in India, and a few are getting very old, so there is a strong likelihood of repairing existing bridges or building new ones. Analysis and design of such bridges are practiced through conventional procedures and end up with heavy and uneconomical sections. Such heavy-class steel bridges, when subjected to strong seismic shaking, have a greater chance of failing by instability because the members are rigid and stocky rather than flexible enough to dissipate the energy. This work is a collective study of the research done on truss bridges and steel-concrete composite truss bridges, presenting the methods of analysis and the tools for numerical and analytical modelling that evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic and nonlinear behaviour of the structure, static pushover analysis is generally adopted at the research level. Though static pushover analysis is now extensively used for framed steel and concrete buildings to study their lateral behaviour, findings from pushover analyses of buildings cannot be used directly for bridges, because bridges have completely different performance requirements, behaviour, and typology compared to buildings. Long-span steel bridges are mostly truss bridges. Since truss bridges are formed by many members and connections, failure of the system does not happen suddenly with a single event or the failure of one member. Failure usually initiates in one member and progresses gradually to the next member, and so on, under further loading. This kind of progressive collapse of the truss bridge structure depends on many factors, of which the live load distribution and the span-to-length ratio are the most significant. The ultimate collapse is, in any case, by buckling of the compression members only. For regular bridges, single-step pushover analysis gives results close to those of nonlinear dynamic analysis. But for a complicated bridge, such as a heavy-class steel bridge, a skewed bridge, or a bridge with complicated dynamic behaviour, nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of the bridge and advancements in computational facilities, the current level of analysis and design of bridges has moved to ascertaining the performance levels of bridges based on the damage caused by seismic shaking. This is because building performance levels deal mainly with life safety and collapse prevention, whereas bridges mostly deal with the extent of damage and how quickly it can be repaired, with or without disturbing the traffic, after a strong earthquake event. The paper compiles the wide spectrum of modelling and analysis approaches for steel-concrete composite truss bridges in general.Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge
Procedia PDF Downloads 18584 2,7-diazaindole as a Potential Photophysical Probe for Excited State Deactivation Processes
Authors: Simran Baweja, Bhavika Kalal, Surajit Maity
Abstract:
Photoinduced tautomerization reactions have been the centre of attention in the scientific community over the past several decades because of their significance in various biological systems. 7-azaindole (7AI) is considered a model system for DNA base pairing and for understanding the role of such tautomerization reactions in mutations. To the best of our knowledge, extensive studies have been carried out on 7-azaindole and its solvent clusters exhibiting proton/hydrogen transfer in both the solution and gas phases. Derivatives of the above molecule, such as 2,7- and 2,6-diazaindoles, are proposed to have even better photophysical properties due to the presence of the aza group at the 2-position. However, while a few studies in the solution phase suggest the relevance of these molecules, no experimental studies in the gas phase have been reported yet. In our current investigation, we present the first gas-phase spectroscopic data for 2,7-diazaindole (2,7-DAI) and its solvent cluster (2,7-DAI-H2O). We have employed state-of-the-art laser spectroscopic methods such as fluorescence excitation (LIF), dispersed fluorescence (DF), resonant two-photon ionization time-of-flight mass spectrometry (2C-R2PI), photoionization efficiency spectroscopy (PIE), and IR-UV double resonance spectroscopy, i.e., fluorescence-dip infrared spectroscopy (FDIR) and resonant ion-dip infrared spectroscopy (IDIR), to understand the electronic structure of the molecule. The origin band corresponding to the S1 ← S0 transition of bare 2,7-DAI is found at 33910 cm-1, whereas the origin band corresponding to the S1 ← S0 transition of 2,7-DAI-H2O is at 33074 cm-1. The red-shifted transition in the case of the solvent cluster suggests the enhanced feasibility of excited-state hydrogen/proton transfer. The ionization potential of the 2,7-DAI molecule is found to be 8.92 eV, which is significantly higher than that of the previously reported 7AI (8.11 eV), making it a comparatively complex molecule to study. The ionization potential is reduced by 0.14 eV in the 2,7-DAI-H2O cluster (8.78 eV) compared to that of 2,7-DAI. Moreover, on comparison with the available literature values for 7AI, we found the origin bands of 2,7-DAI and 2,7-DAI-H2O to be red shifted by -729 and -280 cm-1, respectively. The ground- and excited-state N-H stretching frequencies of the 2,7-DAI molecule were determined using fluorescence-dip infrared spectra (FDIR) and resonant ion-dip infrared spectroscopy (IDIR), obtained at 3523 and 3467 cm-1, respectively. The lower value of vNH in the electronic excited state of 2,7-DAI implies the higher acidity of the group compared to the ground state. Moreover, we have carried out extensive computational analysis, which suggests that the energy barrier in the excited state reduces significantly as we increase the number of catalytic solvent molecules (S = H2O, NH3) as well as the polarity of the solvent molecules. We found that the ammonia molecule is a better candidate for hydrogen transfer than water because of its higher gas-phase basicity. Further studies are underway to understand the excited-state dynamics and photochemistry of such N-rich chromophores.Keywords: photoinduced tautomerization reactions, gas phase spectroscopy, IR-UV double resonance spectroscopy, resonant two-photon ionization time-of-flight mass spectrometry (2C-R2PI)
Procedia PDF Downloads 8683 Synthesis, Computational Studies, Antioxidant and Anti-Inflammatory Bio-Evaluation of 2,5-Disubstituted- 1,3,4-Oxadiazole Derivatives
Authors: Sibghat Mansoor Rana, Muhammad Islam, Hamid Saeed, Hummera Rafique, Muhammad Majid, Muhammad Tahir Aqeel, Fariha Imtiaz, Zaman Ashraf
Abstract:
The 1,3,4-oxadiazole derivatives Ox-6a-f have been synthesized by incorporating the flurbiprofen moiety with the aim of exploring the potential of the target molecules to decrease oxidative stress. The title compounds Ox-6a-f were prepared by simple reactions in which the flurbiprofen –COOH group was esterified with methanol in an acid-catalyzed medium and then reacted with hydrazine to afford the corresponding hydrazide. The acid hydrazide was then cyclized into the 1,3,4-oxadiazole-2-thiol by reaction with CS2 in the presence of KOH. The title compounds Ox-6a-f were synthesized by the reaction of the –SH group with various alkyl/aryl chlorides, which involves an S-alkylation reaction. The structures of the synthesized Ox-6a-f derivatives were ascertained by spectroscopic data. In silico molecular docking was performed against the target proteins cyclooxygenase-2, COX-2 (PDB ID 5KIR), and cyclooxygenase-1, COX-1 (PDB ID 6Y3C), to determine the binding affinity of the synthesized compounds with these structures. It was inferred that most of the synthesized compounds bind well to the active binding site of 5KIR compared to 6Y3C, and compound Ox-6f in particular showed excellent binding affinity (7.70 kcal/mol) among all synthesized compounds Ox-6a-f. Molecular dynamics (MD) simulation was also performed to check the stability of the docking complexes of the ligands with COX-2 by determining their root mean square deviation and root mean square fluctuation. Little fluctuation was observed in the case of Ox-6f, which forms the most stable complex with COX-2. The comprehensive antioxidant potential of the synthesized compounds was evaluated by determining their free radical scavenging activity, including DPPH, OH, nitric oxide (NO), and iron chelation assays. The derivative Ox-6f showed promising results, with 80.23% radical scavenging potential at a dose of 100 μg/mL, while ascorbic acid exhibited 87.72% inhibition at the same dose. The anti-inflammatory activity of the final products was also determined, and inflammatory markers were assayed, such as thiobarbituric acid-reactive substances, nitric oxide, interleukin-6 (IL-6), and COX-2. The derivatives Ox-6d and Ox-6f displayed higher anti-inflammatory activity, exhibiting 70.56% and 74.16% activity, respectively. The results were compared with standard ibuprofen, which showed 84.31% activity at the same dose, 200 μg/mL. The anti-inflammatory potential was also assessed following the carrageenan-induced hind paw edema model, and the results showed that derivative Ox-6f exhibited a 79.83% reduction in edema volume compared to standard ibuprofen, which reduced edema volume by 84.31%. As the dry lab and wet lab results confirm each other, it has been deduced that derivative Ox-6f may serve as a lead structure for designing potent compounds to address oxidative stress.Keywords: synthetic chemistry, pharmaceutical chemistry, oxadiazole derivatives, anti-inflammatory, anti-cancer compounds
Procedia PDF Downloads 1582 Text Mining Past Medical History in Electrophysiological Studies
Authors: Roni Ramon-Gonen, Amir Dori, Shahar Shelly
Abstract:
Background and objectives: Healthcare professionals produce abundant textual information in their daily clinical practice. The extraction of insights from all the gathered information, which is mainly unstructured and lacking in normalization, is one of the major challenges in computational medicine. In this respect, text mining assembles different techniques to derive valuable insights from unstructured textual data, so it has become especially relevant in medicine. A neurological patient's history allows the clinician to define the patient's symptoms and, along with the results of the nerve conduction study (NCS) and electromyography (EMG) test, assists in formulating a differential diagnosis. Past medical history (PMH) helps to direct the latter. In this study, we aimed to identify relevant PMH, understand which PMHs are common among patients in the referral cohort and documented by the medical staff, and examine the differences by sex and age in a large cohort, based on free-text notes. Methods: We retrospectively identified all patients with abnormal NCS between May 2016 and February 2022. Age, gender, and all NCS attribute reports, including the summary text, were recorded. All patients' histories were extracted from the text report by a query. Basic text cleansing and data preparation were performed, as well as lemmatization. Very frequent words (such as ‘left’ and ‘right’) were deleted. Several words were replaced with their abbreviations. A bag-of-words approach was used to perform the analyses. Several visualizations that are common in text analysis were created to easily grasp the results. Results: We identified 5282 unique patients. Three thousand and five (57%) patients had documented PMH, of whom 60.4% (n=1817) were males. The overall median age was 62 years (range 0.12–97.2 years), and the majority of patients (83%) presented after the age of forty years. The top two documented medical histories were diabetes mellitus (DM) and surgery. DM was observed in 16.3% of the patients, and surgery in 15.4%. Other frequent patient histories (among the top 20) were fracture, cancer (ca), motor vehicle accident (MVA), leg, lumbar, discopathy, back, and carpal tunnel release (CTR). When separating the data by sex, we can see that DM and MVA are more frequent among males, while cancer and CTR are less frequent. On the other hand, the top medical history among females was surgery, followed by DM. Other frequent histories among females are breast cancer, fractures, and CTR. In the younger population (ages 18 to 26), the frequent PMHs were surgery, fractures, trauma, and MVA. Discussion: By applying text mining approaches to unstructured data, we were able to better understand which medical histories are most relevant in these circumstances and, in addition, gain additional insights regarding sex and age differences. These insights might help to collect epidemiological and demographic data as well as raise new hypotheses. One limitation of this work is that each clinician might use different words or abbreviations to describe the same condition, and therefore using a coding system could be beneficial.Keywords: abnormal studies, healthcare analytics, medical history, nerve conduction studies, text mining, textual analysis
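The preprocessing pipeline described in the Methods (cleaning, abbreviation substitution, removal of uninformative direction words, bag-of-words counting) can be sketched in a few lines of Python; the abbreviation map and example notes below are hypothetical and stand in for the real clinical text.

# A minimal sketch of the text cleaning and bag-of-words counting step.
import re
from collections import Counter

ABBREVIATIONS = {"diabetes mellitus": "dm", "motor vehicle accident": "mva",
                 "carpal tunnel release": "ctr"}          # hypothetical mapping
DROP_WORDS = {"left", "right"}   # very frequent, uninformative direction words

def clean_note(text):
    text = text.lower()
    for phrase, abbr in ABBREVIATIONS.items():
        text = text.replace(phrase, abbr)                 # normalise to abbreviations
    tokens = re.findall(r"[a-z]+", text)                  # crude tokenisation
    return [t for t in tokens if t not in DROP_WORDS]

notes = [
    "PMH: diabetes mellitus, s/p carpal tunnel release right hand",
    "History of motor vehicle accident, lumbar discopathy, left leg pain",
]
bag = Counter(tok for note in notes for tok in clean_note(note))
print(bag.most_common(5))   # most frequent PMH terms across the cohort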
Procedia PDF Downloads 9681 Simulation of Hydraulic Fracturing Fluid Cleanup for Partially Degraded Fracturing Fluids in Unconventional Gas Reservoirs
Authors: Regina A. Tayong, Reza Barati
Abstract:
A stable, fast, and robust three-phase, 2D IMPES simulator has been developed for assessing the influence of breaker concentration on the yield stress of the filter cake and broken-gel viscosity, of varying polymer concentration/yield stress along the fracture face, and of fracture conductivity, fracture length, capillary pressure changes, and formation damage on fracturing fluid cleanup in tight gas reservoirs. This model has been validated against field data reported in the literature for the same reservoir. A 2-D, two-phase (gas/water) fracture propagation model is used to model the invasion zone and create the initial conditions for the clean-up model by distributing 200 bbls of water around the fracture. A 2-D, three-phase IMPES simulator incorporating a yield-power-law rheology has been developed in MATLAB to characterize fluid flow through a hydraulically fractured grid. The variation in polymer concentration along the fracture is computed from a material balance equation relating the initial polymer concentration to the total volume of injected fluid and the fracture volume. All governing equations and the methods employed have been adequately reported to permit easy replication of the results. Increasing capillary pressure in the formation simulated in this study resulted in a 10.4% decrease in cumulative production after 100 days of fluid recovery. Increasing the breaker concentration from 5 to 15 gal/Mgal, acting on the yield stress and fluid viscosity of a 200 lb/Mgal guar fluid, resulted in a 10.83% increase in cumulative gas production. For tight gas formations (k=0.05 md), fluid recovery increases with increasing shut-in time, increasing fracture conductivity, and increasing fracture length, irrespective of the yield stress of the fracturing fluid. Mechanically induced formation damage combined with hydraulic damage tends to be the most significant. Several correlations have been developed relating the pressure distribution and polymer concentration to the distance along the fracture face, and the average polymer concentration to the injection time. The gradient in the yield stress distribution along the fracture face becomes steeper with increasing polymer concentration. The rate at which the yield stress (τ_o) increases is found to be proportional to the square of the volume of fluid lost to the formation. Finally, an improvement on previous results was achieved by simulating the yield stress variation along the fracture face rather than assuming constant values, because the fluid loss to the formation and the polymer concentration distribution along the fracture face decrease as we move away from the injection well. The novelty of this three-phase flow model lies in its ability to (i) simulate yield stress variation with fluid loss volume along the fracture face for different initial guar concentrations, and (ii) simulate increasing breaker activity on yield stress and broken-gel viscosity, and the effect of (i) and (ii) on cumulative gas production within reasonable computational time.Keywords: formation damage, hydraulic fracturing, polymer cleanup, multiphase flow numerical simulation
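Two relations mentioned above lend themselves to a compact illustration: the material-balance estimate of polymer concentration and the growth of filter-cake yield stress with fluid-loss volume. The sketch below (Python) uses assumed functional forms and a hypothetical proportionality constant; it is illustrative only and is not the simulator's actual formulation.

def polymer_concentration(c_initial, v_injected, v_fracture):
    # Material balance: the injected polymer mass stays in the fracture, so the
    # concentration rises as leak-off shrinks the liquid volume in the fracture.
    return c_initial * v_injected / v_fracture

def yield_stress(tau_initial, v_lost, k_tau=1.0e-4):
    # The abstract reports that the rate of yield stress increase scales with the
    # square of the fluid-loss volume; cast here as a simple algebraic form with
    # a hypothetical constant k_tau, for illustration only.
    return tau_initial + k_tau * v_lost**2

c0 = 200.0                        # lb/Mgal guar (initial concentration)
v_inj, v_frac = 200.0, 140.0      # bbl, hypothetical injected and fracture volumes
print(polymer_concentration(c0, v_inj, v_frac))
print(yield_stress(2.0, v_inj - v_frac))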
Procedia PDF Downloads 13080 Improvement of Electric Aircraft Endurance through an Optimal Propeller Design Using Combined BEM, Vortex and CFD Methods
Authors: Jose Daniel Hoyos Giraldo, Jesus Hernan Jimenez Giraldo, Juan Pablo Alvarado Perilla
Abstract:
Range and endurance are the main limitations of electric aircraft due to the nature of their power source. Improving the efficiency of such systems is extremely meaningful for encouraging aircraft operation with less environmental impact. The propeller efficiency strongly affects the overall efficiency of the propulsion system; hence, its optimization can have an outstanding effect on aircraft performance. An optimization method is applied to an aircraft propeller in order to maximize range and endurance by estimating the best combination of geometrical parameters, such as diameter, airfoil, and chord and pitch distributions, for a specific aircraft design at a certain cruise speed; the rotational speed at which the propeller operates at minimum current consumption is then estimated. The optimization is based on the Blade Element Momentum (BEM) method, corrected to account for tip and hub losses, Mach number, and rotational effects; furthermore, an approximation of the airfoil lift and drag coefficients obtained from Computational Fluid Dynamics (CFD) simulations, supported by preliminary studies of grid independence and the suitability of different turbulence models, is implemented to feed the BEM method, with the aim of achieving more reliable results. Additionally, Vortex Theory is employed to find the optimum pitch and chord distributions that achieve a minimum-induced-loss propeller design. Moreover, the optimization takes into account the well-known brushless motor model, thrust constraints arising from take-off runway limitations, the maximum allowable propeller diameter due to aircraft height, and the maximum motor power. The BEM-CFD method is validated by comparing its predictions for a known APC propeller with both available experimental tests and APC-reported performance curves, which are based on Vortex Theory fed with the NASA Transonic Airfoil code; it shows an adequate fit with the experimental data, even better than the reported APC data. The optimal propeller predictions are validated by wind tunnel tests, CFD propeller simulations, and a study of how the propeller would perform if it replaced that of a known aircraft. Some tendency charts relating a wide range of parameters, such as diameter, voltage, pitch, rotational speed, current, and propeller and electric efficiencies, are obtained and discussed. The implementation of CFD tools shows an improvement in the accuracy of the BEM predictions. The results also show that a propeller has higher efficiency peaks when it operates at high rotational speed, due to the higher Reynolds numbers at which the airfoils present lower drag. On the other hand, the behavior of the current consumption relative to the propulsive efficiency shows counterintuitive results: the best range and endurance are not necessarily achieved at an efficiency peak.Keywords: BEM, blade design, CFD, electric aircraft, endurance, optimization, range
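The core BEM iteration that the optimization builds on can be sketched compactly. The snippet below (Python) iterates the axial and tangential induction factors at a single radial station with a Prandtl tip-loss correction and under-relaxation for stability; the airfoil polar and the operating point are crude assumptions standing in for the CFD-derived lift/drag data used in the paper.

# Minimal blade-element momentum (BEM) sketch for one radial station.
import math

def polar(alpha):
    cl = 2.0 * math.pi * alpha                 # thin-airfoil lift slope (assumed)
    cd = 0.010 + 0.020 * alpha**2              # hypothetical drag polar
    return cl, cd

def bem_station(V, omega, r, R, B, chord, beta, relax=0.3, tol=1e-6, max_iter=200):
    a, ap = 0.0, 0.0                           # axial and tangential induction factors
    phi = 0.0
    for _ in range(max_iter):
        phi = math.atan2(V * (1 + a), omega * r * (1 - ap))   # local inflow angle
        alpha = beta - phi                                     # local angle of attack
        cl, cd = polar(alpha)
        cn = cl * math.cos(phi) - cd * math.sin(phi)           # thrust-wise coefficient
        ct = cl * math.sin(phi) + cd * math.cos(phi)           # torque-wise coefficient
        f = 0.5 * B * (R - r) / (r * math.sin(phi))
        F = (2.0 / math.pi) * math.acos(math.exp(-f))          # Prandtl tip-loss factor
        sigma = B * chord / (2.0 * math.pi * r)                # local solidity
        a_new = 1.0 / (4.0 * F * math.sin(phi)**2 / (sigma * cn) - 1.0)
        ap_new = 1.0 / (4.0 * F * math.sin(phi) * math.cos(phi) / (sigma * ct) + 1.0)
        da, dap = a_new - a, ap_new - ap
        a += relax * da                                        # under-relaxed update
        ap += relax * dap
        if abs(da) < tol and abs(dap) < tol:
            break
    return a, ap, phi

# Hypothetical operating point: 10 m/s cruise, 6000 rpm, one station of a 0.3 m propeller
a, ap, phi = bem_station(V=10.0, omega=6000 * 2 * math.pi / 60, r=0.10, R=0.15,
                         B=2, chord=0.02, beta=math.radians(12.0))
print(f"a = {a:.4f}, a' = {ap:.4f}, phi = {math.degrees(phi):.2f} deg")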
Procedia PDF Downloads 10879 Allylation of Active Methylene Compounds with Cyclic Baylis-Hillman Alcohols: Why Is It Direct and Not Conjugate?
Authors: Karim Hrratha, Khaled Essalahb, Christophe Morellc, Henry Chermettec, Salima Boughdiria
Abstract:
Among the types of carbon-carbon bond formation, allylation of active methylene compounds with cyclic Baylis-Hillman (BH) alcohols is a reliable and widely used method. This reaction is a very attractive tool in the organic synthesis of biological and biodiesel compounds. Thus, in view of the pressing demand for an efficient and straightforward method of synthesizing the desired product, a thorough analysis of the various aspects of the reaction processes is an important task. The product afforded by the reaction of active methylene compounds with BH alcohols depends largely on the experimental conditions, notably on the catalyst properties. All experiments reported that catalysis is needed for this reaction type because of the poor ability of the alcohol hydroxyl group to act as a suitable leaving group. Among the catalysts, several transition-metal-based systems have been used, such as palladium in the presence of acid or base, and these have been considered reliable methods. Furthermore, acid catalysts such as BF3.OEt2, BiX3 (X = Cl, Br, I, (OTf)3), InCl3, Yb(OTf)3, FeCl3, p-TsOH, and H-montmorillonite have been employed to activate the C-C bond formation through the alkylation of active methylene compounds. Interestingly, a report recently appeared describing a smooth process in which 4-dimethylaminopyridine (DMAP) catalyzes the allylation reaction of active methylene compounds with a cyclic Baylis-Hillman (BH) alcohol. However, the reaction mechanism remains ambiguous, since the C-allylation process leads to an unexpected product (denoted P1), corresponding to direct allylation instead of conjugate allylation, which would involve the most electrophilic center according to the effect of the electron-withdrawing CO group. The main objective of the present theoretical study is to better understand the role of the DMAP catalytic activity as well as the process leading to the end-product (P1) for the catalytic reaction of a cyclic BH alcohol with active methylene compounds. For that purpose, we have carried out computations on a set of active methylene compounds, varying R1 and R2, toward the same alcohol, and we have attempted to rationalize the mechanisms using the acid–base approach and conceptual DFT tools such as the chemical potential, hardness, Fukui functions, electrophilicity index, and dual descriptor, as these approaches have shown good prediction of reaction products. The present work is organized as follows: first, some computational details are given, introducing the reactivity indexes used in the present work; Section 3 is then dedicated to the discussion of the prediction of the selectivity and regioselectivity; the paper ends with some concluding remarks. In this work, we have shown, through the DFT method at the B3LYP/6-311++G(d,p) level of theory, that the allylation of active methylene compounds with the cyclic BH alcohol is governed by orbital control. Hence, the end-product, denoted P1, is generated by direct allylation.Keywords: DFT calculation, gas phase pKa, theoretical mechanism, orbital control, charge control, Fukui function, transition state
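The conceptual-DFT descriptors named above are defined by simple finite-difference formulas, which the short sketch below (Python) evaluates from total energies and atomic charges of the N-1, N and N+1 electron systems. All numerical values are hypothetical placeholders, not results from this study, and the hardness convention shown is one of several in use.

# Conceptual-DFT descriptors from finite differences (hypothetical numbers).
E_Nm1, E_N, E_Np1 = -307.10, -307.45, -307.49   # total energies in hartree

I = E_Nm1 - E_N               # vertical ionisation energy
A = E_N - E_Np1               # vertical electron affinity
mu = -(I + A) / 2.0           # chemical potential
eta = (I - A) / 2.0           # hardness (one common convention)
omega = mu**2 / (2.0 * eta)   # electrophilicity index
print(f"mu = {mu:.3f}, eta = {eta:.3f}, omega = {omega:.3f} (hartree)")

# Condensed Fukui functions and dual descriptor from atomic charges q_k
q_N   = {"C1": -0.12, "C3": 0.05}    # hypothetical charges, neutral molecule
q_Np1 = {"C1": -0.31, "C3": -0.02}   # anion
q_Nm1 = {"C1": 0.02,  "C3": 0.18}    # cation
for atom in q_N:
    f_plus = q_N[atom] - q_Np1[atom]     # susceptibility to nucleophilic attack
    f_minus = q_Nm1[atom] - q_N[atom]    # susceptibility to electrophilic attack
    dual = f_plus - f_minus              # dual descriptor
    print(atom, round(f_plus, 3), round(f_minus, 3), round(dual, 3))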
Procedia PDF Downloads 30678 Geometric Optimisation of Piezoelectric Fan Arrays for Low Energy Cooling
Authors: Alastair Hales, Xi Jiang
Abstract:
Numerical methods are used to evaluate the operation of confined face-to-face piezoelectric fan arrays as the pitch, P, between the blades is varied. Both in-phase and counter-phase oscillation are considered. A piezoelectric fan consists of a fan blade, which is clamped at one end, and an extremely low-powered actuator. This drives the blade tip's oscillation at its first natural frequency. Sufficient blade tip speed, created by the high oscillation frequency and amplitude, is required to induce vortices and downstream volume flow in the surrounding air. A single piezoelectric fan may provide the ideal solution for low-powered hot spot cooling in an electronic device, but it is unable to induce sufficient downstream airflow to replace a conventional air mover, such as a convection fan, in power electronics. Piezoelectric fan arrays, which are assemblies of multiple fan blades usually in face-to-face orientation, must be developed to widen the field of feasible applications for the technology. The potential energy saving is significant, with a 50% power demand reduction compared to convection fans even in an unoptimised state. A numerical model of a typical piezoelectric fan blade is derived and validated against experimental data. The numerical error is found to be 5.4% and 9.8% using two data comparison methods. The model is used to explore the variation of pitch as a function of amplitude, A, for a confined two-blade piezoelectric fan array in face-to-face orientation, with the blades oscillating both in-phase and counter-phase. It has been reported that in-phase oscillation is optimal for generating maximum downstream velocity and flow rate in unconfined conditions, due at least in part to the beneficial coupling between adjacent blades that leads to an increased oscillation amplitude. The present model demonstrates that confinement has a significant detrimental effect on in-phase oscillation. Even at low pitch, counter-phase oscillation produces enhanced downstream air velocities and flow rates. Downstream air velocity from counter-phase oscillation can be maximally enhanced, relative to that generated by a single blade, by 17.7% at P = 8A. Flow rate enhancement at the same pitch is found to be 18.6%. By comparison, in-phase oscillation at the same pitch produces 23.9% and 24.8% reductions in peak downstream air velocity and flow rate, relative to those generated by a single blade. This optimal pitch, equivalent to those reported in the literature, suggests that counter-phase oscillation is less affected by confinement. The optimal pitch for generating bulk airflow from counter-phase oscillation is large, P > 16A, due to the small but significant downstream velocity across the span between adjacent blades. However, when considering design in a confined space, the counter-phase pitch should be minimised to maximise the bulk airflow generated from a certain cross-sectional area within a channel flow application. Quantitative values are found to deviate to a small degree as other geometric and operational parameters are varied, but the established relationships are maintained.Keywords: piezoelectric fans, low energy cooling, power electronics, computational fluid dynamics
Procedia PDF Downloads 22177 Modeling and Design of a Solar Thermal Open Volumetric Air Receiver
Authors: Piyush Sharma, Laltu Chandra, P. S. Ghoshdastidar, Rajiv Shekhar
Abstract:
Metal processing operations such as melting and heat treatment are energy-intensive, requiring temperatures greater than 500°C. The desired temperature in these industrial furnaces is attained by circulating electrically heated air. In most of these furnaces, electricity produced from captive coal-based thermal power plants is used. Solar thermal energy could be a viable heat source in these furnaces. A retrofitted solar convective furnace (SCF) concept, which uses solar-thermally generated hot air, has been proposed. Critical to the success of an SCF is the design of an open volumetric air receiver (OVAR), which can heat air in excess of 800°C. The OVAR is placed on top of a tower and receives concentrated solar radiation from a heliostat field. The absorbers, the mixer assembly, and the return air flow chamber (RAFC) are the major components of an OVAR. The absorber is a porous structure that transfers heat from the concentrated solar radiation to ambient air, referred to as primary air. The mixer ensures a uniform air temperature at the receiver exit. The flow of the relatively cooler return air in the RAFC ensures that the absorbers do not fail by overheating. In an earlier publication, the detailed design basis, fabrication, and characterization of a 2 kWth open volumetric air receiver (OVAR) based laboratory solar air tower simulator were presented. The development of an experimentally validated, CFD-based mathematical model, which can ultimately be used for the design and scale-up of an OVAR, has been the major objective of this investigation. In contrast to the published literature, where flow and heat transfer have been modeled primarily in a single absorber module, the present study has modeled the entire receiver assembly, including the RAFC. Flow and heat transfer calculations have been carried out in ANSYS using the LTNE model. The complex return air flow pattern in the RAFC requires complicated meshes and is computationally and time intensive. Hence, a simple, realistic 1-D mathematical model, which circumvents the need for carrying out detailed flow and heat transfer calculations, has also been proposed. Several important results have emerged from this investigation. Circumferential electrical heating of the absorbers can mimic frontal heating by concentrated solar radiation reasonably well when testing and characterizing the performance of an OVAR. Circumferential heating, therefore, obviates the need for expensive high-solar-concentration simulators. Predictions suggest that the ratio of the power on aperture (POA) to the mass flow rate of air (MFR) is a normalizing parameter for characterizing the thermal performance of an OVAR. Increasing POA/MFR increases the maximum temperature of the air but decreases the thermal efficiency of an OVAR. Predictions of the 1-D mathematical model are within 5% of the ANSYS predictions, and the computation time is reduced from ~5 hours to a few seconds.Keywords: absorbers, mixer assembly, open volumetric air receiver, return air flow chamber, solar thermal energy
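The role of POA/MFR as a normalizing parameter follows from a lumped energy balance on the air stream. The sketch below (Python) illustrates it with an assumed receiver thermal efficiency; it is a back-of-the-envelope illustration, not the proposed 1-D model itself, and the efficiency value is a hypothetical placeholder.

def outlet_air_temperature(poa_W, mfr_kg_s, t_in_C=25.0, eta_th=0.7, cp=1005.0):
    # Energy absorbed by the air: eta_th * POA = MFR * cp * (T_out - T_in),
    # so the temperature rise depends only on the ratio POA/MFR for a fixed eta_th.
    return t_in_C + eta_th * poa_W / (mfr_kg_s * cp)

for poa, mfr in [(2000.0, 0.002), (2000.0, 0.004), (4000.0, 0.004)]:
    print(poa / mfr, outlet_air_temperature(poa, mfr))
# Equal POA/MFR ratios give equal outlet temperatures, which is why the ratio
# acts as a normalizing parameter; raising POA/MFR raises the air temperature.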
Procedia PDF Downloads 19776 Single Cell Rna Sequencing Operating from Benchside to Bedside: An Interesting Entry into Translational Genomics
Authors: Leo Nnamdi Ozurumba-Dwight
Abstract:
Single-cell genomic analytical systems have proved to be a platform for isolating selected single cells from bulk cell populations for genomic, proteomic, and related metabolomic studies. This is enabling systematic investigation of the level of heterogeneity in diverse and wide pools of cell populations. Single-cell technologies, embracing techniques such as high-parameter flow cytometry, single-cell sequencing, and high-resolution imaging, are playing vital roles in these investigations of messenger ribonucleic acid (mRNA) molecules and related gene expression, tracking the nature and course of disease conditions. This entails targeted molecular investigations on individual cells that help us understand cell behaviour and expression, which can be examined for their implications for the health state of patients. One of the key strengths of single-cell RNA sequencing (scRNA-seq) is its capacity to detect deranged or abnormal cell populations present within apparently homogeneous pooled cells, which would otherwise evade cursory screening of the pooled cell populations of biological samples obtained as part of diagnostic procedures. Although it is only a single-cell transcriptome analysis, scRNA-seq permits comparison of the transcriptomes of individual cells, which can be evaluated for gene expression patterns that reveal areas of heterogeneity, with applications in pharmaceutical drug discovery and clinical treatment. It is vital to work rigorously through the tools of investigation, from the wet lab to bioinformatics and computational analyses. Among the precise steps of scRNA-seq, it is critical to perform thorough and effective isolation of viable single cells from the tissues of interest using dependable techniques (such as FACS) before proceeding to lysis, as this enhances the appropriate selection of quality mRNA molecules for subsequent sequencing (for example, after amplification in a polymerase chain reaction machine). Interestingly, scRNA-seq can be deployed to analyze various types of biological samples such as embryos, nervous systems, tumour cells, stem cells, lymphocytes, and haematopoietic cells. In haematopoietic cells, it can be used to stratify acute myeloid leukemia patterns in patients, sorting them into cohorts that enable re-modeling of treatment regimens based on stratified presentations. In immunotherapy, it can furnish the specialist clinician-immunologist with tools to re-model treatment for each patient, an attribute of precision medicine. Finally, the good predictive attribute of scRNA-seq can help reduce the cost of treatment for patients, thus attracting more patients who would otherwise have been discouraged from seeking quality clinical consultation due to the perceived high cost. This is a positive paradigm shift in patients' attitudes towards seeking treatment.Keywords: immunotherapy, transcriptome, re-modeling, mRNA, scRNA-seq
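The downstream computational steps of a typical scRNA-seq analysis (library-size normalization, log transform, dimensionality reduction, and clustering of cells into putative sub-populations) can be sketched generically. The snippet below (Python/scikit-learn) uses a random counts matrix purely for illustration; in practice a dedicated toolkit such as Scanpy or Seurat would be used, and the cluster count here is an arbitrary assumption.

# Generic sketch of the downstream scRNA-seq analysis flow (illustrative data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(300, 2000)).astype(float)   # cells x genes

# Library-size normalisation to 10,000 counts per cell, then log1p transform
lib_size = counts.sum(axis=1, keepdims=True)
norm = np.log1p(counts / lib_size * 1e4)

# Reduce dimensionality, then cluster cells to expose sub-populations that a
# bulk (pooled) measurement would average away
pcs = PCA(n_components=10, random_state=0).fit_transform(norm)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pcs)
print(np.bincount(labels))   # number of cells in each putative sub-population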
Procedia PDF Downloads 176