Search results for: graph matching
176 Multi-scale Spatial and Unified Temporal Feature-fusion Network for Multivariate Time Series Anomaly Detection
Authors: Hang Yang, Jichao Li, Kewei Yang, Tianyang Lei
Abstract:
Multivariate time series anomaly detection is a significant research topic in the field of data mining, encompassing a wide range of applications across various industrial sectors such as traffic roads, financial logistics, and corporate production. The inherent spatial dependencies and temporal characteristics present in multivariate time series introduce challenges to the anomaly detection task. Previous studies have typically been based on the assumption that all variables belong to the same spatial hierarchy, neglecting multi-level spatial relationships. To address this challenge, this paper proposes a multi-scale spatial and unified temporal feature fusion network, denoted as MSUT-Net, for multivariate time series anomaly detection. The proposed model employs a multi-level modeling approach, incorporating both temporal and spatial modules. The spatial module is designed to capture the spatial characteristics of multivariate time series data, utilizing an adaptive graph structure learning model to identify the multi-level spatial relationships between data variables and their attributes. The temporal module consists of a unified temporal processing module, which is tasked with capturing the temporal features of multivariate time series and is capable of simultaneously identifying temporal dependencies among different variables. Extensive testing on multiple publicly available datasets confirms that MSUT-Net achieves superior performance on the majority of datasets. Our method is able to model system data with multi-level spatial relationships and to accurately detect anomalies from a spatial-temporal perspective, providing a novel perspective for anomaly detection analysis.
Keywords: data mining, industrial system, multivariate time series, anomaly detection
Procedia PDF Downloads 16
175 Characterising the Dynamic Friction in the Staking of Plain Spherical Bearings
Authors: Jacob Hatherell, Jason Matthews, Arnaud Marmier
Abstract:
Anvil staking is a cold-forming process that is used in the assembly of plain spherical bearings into a rod-end housing. This process ensures that the bearing outer lip conforms to the chamfer in the matching rod end to produce a lightweight mechanical joint with sufficient strength to meet the pushout load requirement of the assembly. Finite Element (FE) analysis is being used extensively to predict the behaviour of metal flow in cold-forming processes to support industrial manufacturing and product development. On-going research aims to validate FE models across a wide range of bearing and rod-end geometries by systematically isolating and understanding the uncertainties caused by variations in material properties, load-dependent friction coefficients and strain rate sensitivity. The improved confidence in these models aims to eliminate the costly and time-consuming process of experimental trials in the introduction of new bearing designs. Previous literature has shown that friction coefficients do not remain constant during cold-forming operations; however, the understanding of this phenomenon varies significantly and is rarely implemented in FE models. In this paper, a new approach to evaluate the relationship between normal contact pressure and friction coefficient is outlined using friction calibration charts generated via iterative FE models and ring compression tests. When compared to previous research, this new approach greatly improves the prediction of the forming geometry and the forming load during the staking operation. This paper also aims to standardise the FE approach to modelling ring compression tests and determining the friction calibration charts.
Keywords: anvil staking, finite element analysis, friction coefficient, spherical plain bearing, ring compression tests
Procedia PDF Downloads 205
174 Design and Development of a Mechanical Force Gauge for the Square Watermelon Mold
Authors: Morteza Malek Yarand, Hadi Saebi Monfared
Abstract:
This study aimed at designing and developing a mechanical force gauge for the square watermelon mold for the first time. It also tried to introduce the square watermelon characteristics and its production limitations. The mechanical force gauge performance and the product itself were also described. There are three main designable gauge models: a. hydraulic gauge, b. strain gauge, and c. mechanical gauge. The advantage of the hydraulic model is that it instantly displays the pressure and thus the force exerted by the melon. However, considering the inability to measure forces in all directions, complicated development, high cost, possible hydraulic fluid leak into the fruit chamber and the possible influence of increased ambient temperature on the fluid pressure, the development of this gauge was overruled. The second choice was to calculate pressure from the force measured directly by a strain gauge. The main advantage of these strain gauges over spring types is their high precision in measurements; but, with regard to the lack of conformity of the strain gauge working range with watermelon growth, calculations were faced with problems. Finally, the mechanical pressure gauge has advantages, including the ability to measure forces and pressures on the mold surface during melon growth; the ability to display the peak forces; the ability to produce a melon growth graph thanks to its continuous force measurements; the conformity of its manufacturing materials with the required physical conditions of melon growth; high air-conditioning capability; the ability to permit sunlight to reach the melon rind (no yellowish skin and quality loss); fast and straightforward calibration; no damage to the product during assembling and disassembling; visual check capability of the product within the mold; applicability to all growth environments (field, greenhouses, etc.); a simple process; low costs and so forth.
Keywords: mechanical force gauge, mold, reshaped fruit, square watermelon
Procedia PDF Downloads 274
173 Development of Soil Test Kits to Determine Organic Matter, Available Phosphorus and Exchangeable Potassium in Thailand
Authors: Charirat Kusonwiriyawong, Supha Photichan, Wannarut Chutibutr
Abstract:
Soil test kits for rapid analysis of organic matter, available phosphorus and exchangeable potassium were developed to bring a low-cost field testing kit to farmers. The objective was to provide a decision tool for improving soil fertility. One aspect of soil test kit development was ease of use, namely the time required to complete the organic matter, available phosphorus and exchangeable potassium tests on one soil sample. This testing kit required only two extractions and utilized no filtration, consuming approximately 15 minutes per sample. Organic matter was principally determined by oxidizing carbon with KMnO₄ and using the standard color chart. In addition, a modified single extractant (Mehlich I) was applied to extract available phosphorus and exchangeable potassium. The molybdenum blue method and a turbidimetric method using a standard color chart were adapted to analyze available phosphorus and exchangeable potassium, respectively. Results obtained with the modified single extractant used in the soil test kits matched analytical laboratory results with high statistical significance (r=0.959** and 0.945** for available phosphorus and exchangeable potassium, respectively). Linear regressions were calculated between the modified single extractant and standard laboratory analysis (y=0.9581x-12.973 for available phosphorus and y=0.5372x+15.283 for exchangeable potassium, respectively). These equations were calibrated to formulate fertilizer rate recommendations for specific crops. To validate quality, soil test kits were distributed to farmers and extension workers. We found that the accuracy of the soil test kits was 71.0%, 63.9% and 65.5% for organic matter, available phosphorus, and exchangeable potassium, respectively. A quantitative survey was also conducted in order to assess satisfaction with the soil test kits. The survey showed that more than 85% of respondents said these testing kits were more convenient, economical and reliable than other commercial soil test kits. Based upon the findings of this study, soil test kits can be another alternative for providing soil analysis and fertility recommendations when a soil testing laboratory is not available.
Keywords: available phosphorus, exchangeable potassium, modified single extractant, organic matter, soil test kits
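As a rough illustration (not part of the paper), the reported calibration equations can be applied directly to convert a kit reading into a laboratory-equivalent value; the function names and example readings below are hypothetical.

```python
# Hypothetical illustration of applying the reported calibration equations
# (y = 0.9581x - 12.973 for available P, y = 0.5372x + 15.283 for exchangeable K)
# to convert a kit reading x into a laboratory-equivalent value y.

def lab_equivalent_p(kit_reading: float) -> float:
    """Available phosphorus, lab-equivalent (same units as the kit reading)."""
    return 0.9581 * kit_reading - 12.973

def lab_equivalent_k(kit_reading: float) -> float:
    """Exchangeable potassium, lab-equivalent (same units as the kit reading)."""
    return 0.5372 * kit_reading + 15.283

if __name__ == "__main__":
    # Example kit readings (hypothetical values, for illustration only)
    for x in (40.0, 80.0, 120.0):
        print(f"kit P {x:6.1f} -> lab-eq {lab_equivalent_p(x):7.2f}")
        print(f"kit K {x:6.1f} -> lab-eq {lab_equivalent_k(x):7.2f}")
```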
Procedia PDF Downloads 147
172 Curvature-Based Methods for Automatic Coarse and Fine Registration in Dimensional Metrology
Authors: Rindra Rantoson, Hichem Nouira, Nabil Anwer, Charyar Mehdi-Souzani
Abstract:
Multiple measurements by means of various data acquisition systems are generally required to measure the shape of freeform workpieces for accuracy, reliability and holisticity. The obtained data are aligned and fused into a common coordinate system within a registration technique involving coarse and fine registrations. Standardized iterative methods have been established for fine registration, such as Iterative Closest Point (ICP) and its variants. For coarse registration, no conventional method has been adopted yet, despite a significant number of techniques developed in the literature to supply an automatic rough matching between data sets. Two main issues are addressed in this paper: coarse registration and fine registration. For coarse registration, two novel automated methods based on the exploitation of discrete curvatures are presented: an enhanced Hough Transformation (HT) and an improved RANSAC transformation. The use of curvature features in both methods aims to reduce computational cost. For fine registration, a new variant of the ICP method is proposed in order to reduce registration error using curvature parameters. A specific distance considering curvature similarity has been combined with the Euclidean distance to define the distance criterion used for correspondence searching. Additionally, the objective function has been improved by combining the point-to-point (P-P) minimization and the point-to-plane (P-Pl) minimization with automatic weights. These weights are determined from the curvature features calculated beforehand at each point of the workpiece surface. The algorithms are applied on simulated and real data acquired by a computed tomography (CT) system. The obtained results reveal the benefit of the proposed novel curvature-based registration methods.
Keywords: discrete curvature, RANSAC transformation, Hough transformation, coarse registration, ICP variant, point-to-point and point-to-plane minimization combination, computed tomography
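The curvature-augmented correspondence criterion described above can be sketched as follows. This is a minimal illustration assuming a simple nearest-neighbour search and a point-to-point (Kabsch) update only; the paper's point-to-plane blending and automatic weights are omitted, and the data and the λ weight are placeholders.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping P onto Q (point-to-point)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp_step(src, dst, k_src, k_dst, lam=1.0):
    """One ICP iteration with a curvature-augmented correspondence metric:
    d(p, q) = ||p - q|| + lam * |kappa_p - kappa_q|."""
    geo = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    cur = np.abs(k_src[:, None] - k_dst[None, :])
    idx = np.argmin(geo + lam * cur, axis=1)          # nearest neighbour per source point
    R, t = kabsch(src, dst[idx])
    return src @ R.T + t, R, t

# Toy example: a slightly rotated and shifted copy of a random point set
rng = np.random.default_rng(0)
dst = rng.normal(size=(200, 3))
ang = np.deg2rad(5.0)
R0 = np.array([[np.cos(ang), -np.sin(ang), 0], [np.sin(ang), np.cos(ang), 0], [0, 0, 1]])
src = dst @ R0.T + 0.05
kappa = rng.random(200)                               # placeholder curvature values
moved, R, t = icp_step(src, dst, kappa, kappa)
print("residual after one step:", np.linalg.norm(moved - dst))
```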
Procedia PDF Downloads 424
171 2.4 GHz 0.13 µm Multi-Biased Cascode Power Amplifier for ISM Band Wireless Applications
Authors: Udayan Patankar, Shashwati Bhagat, Vilas Nitneware, Ants Koel
Abstract:
An ISM band power amplifier is a type of electronic amplifier used to convert a low-power radio-frequency signal into a larger signal of significant power, typically for driving the antenna of a transmitter. Drastic changes across telecommunication generations lead to requirements for improvement. Rapid changes in communication have led to the wide implementation of WLAN technology for its excellent characteristics, such as high transmission speed, long communication distance, and high reliability. Many applications, such as WLAN, Bluetooth, and ZigBee, have evolved within the 2.4 GHz to 5 GHz ISM bands, in which the power amplifier (PA) is a key building block of RF transmitters. There are many manufacturing processes available to produce a power amplifier for a desired power output, but the major problem they face is the power consumed for proper operation, as many of them are fabricated in GaN HEMT or BiCMOS processes. In this paper, we present a CMOS-based two-stage cascode design of a power amplifier working in the 2.4 GHz ISM frequency band. To lower the costs and allow full integration of a complete System-on-Chip (SoC), we have chosen 0.13 µm low-power CMOS technology for the design. While designing a power amplifier, it is challenging to achieve higher power efficiency with minimum resources. This design showcases a multi-biased cascode methodology to implement a two-stage CMOS power amplifier using the ADS and LTspice simulation tools. The main supply is a maximum of 2.4 V, which is internally distributed to the different biasing points, VB driving and VB driven, as required for the distinct stages of the two-stage RF power amplifier. The design shows a maximum power-added efficiency of about 70.195%, whereas the power-added efficiency calculated at the 1 dB compression point is 44.669%. A biased MOSFET is used to reduce the total DC current, as this circuit is designed for different wireless applications in the 2.4 GHz ISM band.
Keywords: RFIC, PAE, RF CMOS, impedance matching
Procedia PDF Downloads 225
170 Completion of the Modified World Health Organization (WHO) Partograph during Labour in Public Health Institutions of Addis Ababa, Ethiopia
Authors: Engida Yisma, Berhanu Dessalegn, Ayalew Astatkie, Nebreed Fesseha
Abstract:
Background: The World Health Organization (WHO) recommends using the partograph to follow labour and delivery, with the objective to improve health care and reduce maternal and foetal morbidity and death. Methods: A retrospective document review was undertaken to assess the completion of the modified WHO partograph during labour in public health institutions of Addis Ababa, Ethiopia. A total of 420 modified WHO partographs used to monitor mothers in labour from five public health institutions that provide maternity care were reviewed. A structured checklist was used to gather the required data. The collected data were analyzed using SPSS version 16.0. Frequency distributions, cross-tabulations and a graph were used to describe the results of the study. Results: All facilities were using the modified WHO partograph. The correct completion of the partograph was very low. Of the 420 partographs reviewed across all five health facilities, foetal heart rate was recorded to the recommended standard in 129 (30.7%) of the partographs, while cervical dilatation was recorded to the recommended standard in 138 (32.9%) and uterine contractions in 87 (20.7%). Descent of the presenting part was not documented in 353 (84%), and moulding was not recorded in 364 (86.7%) of the partographs reviewed. The state of the liquor was documented in 113 (26.9%), while maternal blood pressure was recorded to standard in only 78 (18.6%) of the partographs reviewed. Conclusions: This study showed poor completion of the modified WHO partographs during labour in public health institutions of Addis Ababa, Ethiopia. The findings may reflect poor management of labour and indicate the need for pre-service and periodic on-the-job training of health workers on the proper completion of the partograph. Regular supportive supervision, provision of guidelines and a mandatory health facility policy are also needed in support of a collaborative effort to reduce maternal and perinatal deaths.
Keywords: modified WHO partograph, completion, public health institutions, Addis Ababa, Ethiopia
Procedia PDF Downloads 349
169 Normalized P-Laplacian: From Stochastic Game to Image Processing
Authors: Abderrahim Elmoataz
Abstract:
More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs where each vertex represents measured data and each edge represents a relationship (connectivity or certain affinities or interaction) between two vertices. Processing and analyzing these types of data is a major challenge for both the image and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools which were initially developed on usual Euclidean spaces and proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorial and graph theory. In recent years there has been an increasing interest in the investigation of one of the major mathematical tools for signal and image analysis, namely Partial Differential Equation (PDE) and variational methods on graphs. The normalized p-Laplacian operator has been recently introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest of this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator and the traditional Laplacian operators, which were extensively used to model and solve problems in image processing. The purpose of this paper is to introduce and to study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of p-harmonious functions, introduced as a discrete approximation for both the infinity Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
Keywords: normalized p-Laplacian, image processing, stochastic game, inverse problems
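A minimal sketch of the p-harmonious relaxation on a graph that the abstract builds on, assuming the common convex combination of a midrange (infinity-Laplacian) term and a neighbourhood mean; the coefficient alpha and the toy graph below are illustrative choices, not taken from the paper.

```python
import numpy as np

def p_harmonious_relaxation(adj, u, boundary_mask, alpha, n_iter=500):
    """Iterative relaxation for p-harmonious functions on a graph:
        u(v) <- alpha/2 * (max_{w~v} u(w) + min_{w~v} u(w)) + (1-alpha) * mean_{w~v} u(w)
    Interior vertices are updated; boundary values are kept fixed.
    alpha -> 1 emphasises the infinity-Laplacian (tug-of-war) part,
    alpha = 0 reduces to standard graph-Laplacian averaging."""
    u = u.astype(float).copy()
    neighbours = [np.flatnonzero(adj[v]) for v in range(adj.shape[0])]
    for _ in range(n_iter):
        new_u = u.copy()
        for v in range(adj.shape[0]):
            if boundary_mask[v]:
                continue
            nv = u[neighbours[v]]
            new_u[v] = 0.5 * alpha * (nv.max() + nv.min()) + (1 - alpha) * nv.mean()
        u = new_u
    return u

# Toy example: a path graph with fixed endpoint values (a 1-D "inpainting" problem)
n = 11
adj = np.zeros((n, n), dtype=int)
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1
u0 = np.zeros(n); u0[0], u0[-1] = 0.0, 1.0
boundary = np.zeros(n, dtype=bool); boundary[[0, -1]] = True
print(p_harmonious_relaxation(adj, u0, boundary, alpha=0.5))
```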
Procedia PDF Downloads 513
168 Detection of Bcl2 Polymorphism in Patients with Hepatocellular Carcinoma
Authors: Mohamed Abdel-Hamid, Olfat Gamil Shaker, Doha El-Sayed Ellakwa, Eman Fathy Abdel-Maksoud
Abstract:
Introduction: Despite advances in the knowledge of the molecular virology of hepatitis C virus (HCV), the mechanisms of hepatocellular injury in HCV infection are not completely understood. Hepatitis C viral infection influences the susceptibility to apoptosis. This could lead to an insufficient antiviral immune response and persistent viral infection. Aim of this study: to examine whether BCL-2 gene polymorphism at codon 43 (+127G/A or Ala43Thr) has an impact on the development of hepatocellular carcinoma in Egyptian patients with chronic hepatitis C. Subjects and Methods: The study included three groups; group 1: comprising 30 patients with hepatocellular carcinoma (HCC), group 2: comprising 30 patients with HCV, and group 3: comprising 30 healthy subjects matched for age and socioeconomic status, taken as a control group. The BCL2 (Ala43Thr) gene polymorphism was evaluated by the PCR-RFLP technique and measured for all patients and controls. Results: The summed 43Thr genotype was significantly more frequent in HCC patients as compared to the control group. This genotype of the BCL2 gene may inhibit programmed cell death, which leads to disturbance in tissue and cell homeostasis and a reduction in immune regulation. This favours viral replication and HCV persistence. Moreover, the virus deploys a variety of mechanisms to block genes participating in apoptosis. This mechanism indicates that HCV patients who have the 43Thr genotype are more susceptible to HCC. Conclusion: The data suggest for the first time that the BCL2 polymorphism is associated with susceptibility to HCC in Egyptian populations and might be used as a molecular marker for evaluating HCC risk. This study clearly demonstrated that chronic HCV patients exhibit a deregulation of apoptosis with disease progression. This provides an insight into the pathogenesis of chronic HCV infection and may contribute to therapy.
Keywords: BCL2 gene, hepatitis C virus, hepatocellular carcinoma, sensitivity, specificity, apoptosis
Procedia PDF Downloads 508
167 Secondary True-to-Life Polyethylene Terephthalate Nanoplastics: Obtention, Characterization, and Hazard Evaluation
Authors: Aliro Villacorta, Laura Rubio, Mohamed Alaraby, Montserrat López Mesas, Victor Fuentes-Cebrian, Oscar H. Moriones, Ricard Marcos, Alba Hernández
Abstract:
Micro- and nanoplastics (MNPLs) are emergent environmental pollutants requiring urgent information on their potential risks to human health. One of the problems associated with the evaluation of their undesirable effects is the lack of real samples matching those resulting from the environmental degradation of plastic wastes. To that end, we propose an easy method to obtain polyethylene terephthalate nanoplastics from water plastic bottles (PET-NPLs), in principle applicable to any other source of plastic goods. An extensive characterization indicates that the proposed process produces uniform samples of PET-NPLs of around 100 nm, as determined by using a multi-angle and dynamic light scattering methodology. An important point to be highlighted is that, to avoid the metal contamination resulting from methods using metal blades/burrs for milling, trituration, or sanding, we propose to use diamond burrs to produce metal-free samples. To visualize the toxicological profile of the produced PET-NPLs, we have evaluated their ability to be internalized by cells, their cytotoxicity, and their ability to induce oxidative stress and DNA damage. In this preliminary approach, we have detected their cellular uptake, but without the induction of significant biological effects. Thus, no relevant increases in toxicity, reactive oxygen species (ROS) induction, or DNA damage (as detected with the comet assay) have been observed. The use of real samples, as produced in this study, will generate relevant data for the discussion about the potential health risks associated with MNPL exposures.
Keywords: nanoplastics, polyethylene terephthalate, physicochemical characterization, cell uptake, cytotoxicity
Procedia PDF Downloads 98
166 Influence of Instrumental Playing on Attachment Type of Musicians and Music Students Using Adult Attachment Scale-R
Authors: Sofia Serra-Dawa
Abstract:
Adult relationships build on a variety of past social experiences, intentions, and emotions that might predispose and influence the approach to and construction of subsequent relationships. The Adult Attachment Theory (AAT) proposes four types of adult attachment, where attachment is built over the two dimensions of anxiety and avoidance: secure, anxious-preoccupied, dismissive-avoidant, and fearful-avoidant. The AAT has been studied in multiple settings such as personal and therapeutic relationships, educational settings, sexual orientation, health, and religion. In music scholarship, the AAT has been used to frame the class learning of student singers and to study the relational behavior between voice teachers and students. Building on this work, the present inquiry studies how attachment types might characterize the learning relationships of music students (in the Western conservatory tradition), and whether particular instrumental experiences might correlate with given attachment styles. Given certain cohesive behavioral features of established traditions of instrumental playing and performance modes, it is hypothesized that student musicians will display specific characteristics correlated to instrumental traditions, demonstrating a clear tendency of attachment style, which in turn has implications for subsequent professional interactions. This study is informed by the methodological framework of the Adult Attachment Scale-R (Collins and Read, 1990), which was chosen particularly for its non-invasive questions and classificatory validation. It is further hypothesized that the analytical comparison of musicians' profiles has the potential to serve as the baseline for other comparative behavioral observation studies [this component is expected to be verified and completed well before the conference meeting]. This research may have implications for practitioners concerned with matching and improving musical teaching and learning relationships, and in (professional and amateur) long-term musical settings.
Keywords: adult attachment, music education, musicians attachment profile, musicians relationships
Procedia PDF Downloads 158
165 Vulnerability Assessment of Healthcare Interdependent Critical Infrastructure: Coloured Petri Net Model
Authors: N. Nivedita, S. Durbha
Abstract:
Critical Infrastructure (CI) consists of services and technological networks such as healthcare, transport, water supply, electricity supply, information technology, etc. These systems are necessary for the well-being and effective functioning of society. Critical Infrastructures can be represented as nodes in a network where they are connected through a set of links depicting the logical relationships among them; these nodes are interdependent and interact with each other at various levels, such that the state of each infrastructure influences or is correlated to the state of another. Disruption in the service of one infrastructure node of the network during a disaster would lead to cascading and escalating disruptions across the other infrastructure nodes in the network. Healthcare infrastructure is one such Critical Infrastructure that depends upon a complex interdependent network of other Critical Infrastructures, and during disasters it is vital for the healthcare infrastructure to be protected, accessible and prepared for a mass casualty event. To reduce the consequences of a disaster on the Critical Infrastructure and to ensure a resilient critical health infrastructure network, knowledge, understanding, modeling, and analysis of the interdependencies between the infrastructures are required. The paper presents interdependencies related to healthcare Critical Infrastructure based on a Hierarchical Coloured Petri Net modeling approach, given a flood scenario as the disaster which would disrupt the infrastructure nodes. The model properties are analyzed for the various state changes which occur when there is a disruption or damage to any of the Critical Infrastructure. The failure probabilities for the failure risk of interconnected systems are calculated by deriving a reachability graph, which is later mapped to a Markov chain. By analytically solving and analyzing the Markov chain, the overall vulnerability of the healthcare CI HCPN model is demonstrated. The entire model will be integrated with a geographic information-based decision support system to visualize the dynamic behavior of the interdependency of the healthcare and related CI network in a geographically based environment.
Keywords: critical infrastructure interdependency, hierarchical coloured Petri net, healthcare critical infrastructure, Petri nets, Markov chain
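As an illustration of the last modeling step (a reachability graph mapped to a Markov chain and solved for failure probabilities), the sketch below uses a standard absorbing-chain formulation with hypothetical states and transition probabilities; none of the numbers come from the paper.

```python
import numpy as np

# Hypothetical 5-state chain abstracted from a reachability graph:
# 0 = services degraded, 1 = power disrupted, 2 = water disrupted,
# 3 = system stabilized (absorbing), 4 = healthcare failed (absorbing).
# Transition probabilities are illustrative placeholders only.
P = np.array([
    [0.70, 0.10, 0.10, 0.08, 0.02],
    [0.20, 0.50, 0.05, 0.10, 0.15],
    [0.20, 0.05, 0.55, 0.10, 0.10],
    [0.00, 0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

transient, absorbing = [0, 1, 2], [3, 4]
Q = P[np.ix_(transient, transient)]   # transient -> transient block
R = P[np.ix_(transient, absorbing)]   # transient -> absorbing block

# Fundamental matrix N = (I - Q)^-1 gives expected visits to transient states;
# B = N R gives the probability of ending in each absorbing state.
N = np.linalg.inv(np.eye(len(transient)) - Q)
B = N @ R
print("P(stabilized), P(healthcare failure) per start state:\n", B)
print("Expected steps before absorption:", N @ np.ones(len(transient)))
```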
Procedia PDF Downloads 530
164 Brain Connectome of Glia, Axons, and Neurons: Cognitive Model of Analogy
Authors: Ozgu Hafizoglu
Abstract:
An analogy is an essential tool of human cognition that enables connecting diffuse and diverse systems with physical, behavioral, and principal relations that are essential to learning, discovery, and innovation. The Cognitive Model of Analogy (CMA) leads and creates patterns of pathways to transfer information within and between domains in science, just as happens in the brain. The connectome of the brain shows how the brain operates with mental leaps between domains and mental hops within domains, and how the analogical reasoning mechanism operates. This paper demonstrates the CMA as an evolutionary approach to science, technology, and life. The model puts forward the challenges of deep uncertainty about the future, emphasizing the need for flexibility of the system in order to enable the reasoning methodology to adapt to changing conditions in the new era, especially post-pandemic. In this paper, we will reveal how to draw an analogy to scientific research to discover new systems that reveal the fractal schema of analogical reasoning within and between systems, like within and between brain regions. Distinct phases of the problem-solving process are divided thusly: stimulus, encoding, mapping, inference, and response. Based on brain research so far, the system is shown to be relevant to brain activation in each of these phases, with an emphasis on achieving a better visualization of the brain's mechanism in the macro context (brain and spinal cord) and the micro context (glia and neurons), relative to matching conditions of analogical reasoning and relational information, encoding, mapping, inference and response processes, and verification of perceptual responses in four-term analogical reasoning. Finally, we will relate all these terminologies with mental leaps, mental maps, mental hops, and mental loops to make the mental model of CMA clear.
Keywords: analogy, analogical reasoning, brain connectome, cognitive model, neurons and glia, mental leaps, mental hops, mental loops
Procedia PDF Downloads 165
163 A Frictional-Collisional Closure Model for the Saturated Granular Flow: Experimental Evidence and Two-Phase Modelling
Authors: Yunhui Sun, Qingquan Liu, Xiaoliang Wang
Abstract:
Dense granular flows widely exist in geological flows such as debris flows, landslides, or sheet flows, where both the interparticle and solid-liquid interactions are important in modifying the flow. A two-phase approach with both phases correctly modelled is therefore important for a better investigation of saturated granular flows. However, a proper closure model covering a wide range of flowing states for the solid phase is still lacking. This study first employs a chute flow experiment based on the refractive index matching method, which makes it possible to obtain internal flow information such as velocity, shear rate, granular fluctuation, and volume fraction. The granular stress is obtained based on a steady-flow assumption. The kinetic theory is found to describe the stress dependence on the flow state well. More importantly, the granular rheology is found to be frictionally dominated under weak shear and collisionally dominated under strong shear. The results presented thus provide direct experimental evidence on a possible frictional-collisional closure model for the granular phase. The data indicate that frictional stresses exist over a wide range of the volume fraction, though traditional theory holds that they vanish below a critical volume fraction. Based on these findings, a two-phase model is used to simulate the chute flow. Both phases are modelled as continuum media, and the inter-phase interactions, such as drag force and pressure gradient force, are considered. The frictional-collisional model is used for the closure of the solid phase stress. The profiles of the kinematic properties agree well with the experiments. This model, which is derived from steady flow, is further used to simulate immersed granular collapse, which is unsteady in nature, in order to study its applicability.
Keywords: closure model, collision, friction, granular flow, two-phase model
Procedia PDF Downloads 59
162 Elastodynamic Response of Shear Wave Dispersion in Multi-Layered Concentric Cylinders Composed of Reinforced and Piezo-Materials
Authors: Sunita Kumawat, Sumit Kumar Vishwakarma
Abstract:
The present study fundamentally focuses on analyzing the dispersion and transference of horizontally polarized shear waves (SH waves) in a four-layered compounded cylinder. The geometrical structure comprises concentric cylinders of infinite length composed of self-reinforced (SR), fibre-reinforced (FR), piezo-magnetic (PM), and piezo-electric (PE) materials. The entire structure is assumed to be pre-stressed along the azimuthal direction. In order to make the structure sensitive to applications pertaining to sensors and actuators, the PM and PE cylinders have been categorically placed in the outer part of the geometry, whereas, in order to provide stiffness and stability to the structure, the inner part consists of self-reinforced and fibre-reinforced media. The common boundary between each of the cylinders has been considered as imperfectly bonded. At the interface of the PE and PM media, mechanical, electrical, magnetic, and inter-coupled types of imperfections have been exhibited. The closed form of the dispersion relation has been deduced for two contrasting cases, i.e., electrically open and magnetically short (EOMS) and electrically short and magnetically open (ESMO) circuit conditions. Dispersion curves have been plotted to illustrate the salient features of parameters such as the normalized imperfect interface parameters, initial stresses, and radii of the concentric cylinders. The comparative effect of each of these parameters on the phase velocity of the wave has been examined and marked individually. Every graph has been presented with two consecutive modes in succession for a comprehensive understanding. This theoretical study may be implemented to improve the performance of surface acoustic wave (SAW) sensors and actuators consisting of piezo-electric quartz and piezo-composite concentric cylinders.
Keywords: self-reinforced, fibre-reinforced, piezo-electric, piezo-magnetic, interfacial imperfection
Procedia PDF Downloads 109
161 DeepLig: A de-novo Computational Drug Design Approach to Generate Multi-Targeted Drugs
Authors: Anika Chebrolu
Abstract:
Mono-targeted drugs can be of limited efficacy against complex diseases. Recently, multi-target drug design has been approached as a promising tool to fight these challenging diseases. However, the scope of current computational approaches for multi-target drug design is limited. DeepLig presents a de-novo drug discovery platform that uses reinforcement learning to generate and optimize novel, potent, and multi-targeted drug candidates against protein targets. DeepLig's model consists of two networks in interplay: a generative network and a predictive network. The generative network, a Stack-Augmented Recurrent Neural Network, utilizes a stack memory unit to remember and recognize molecular patterns when generating novel ligands from scratch. The generative network passes each newly created ligand to the predictive network, which then uses multiple Graph Attention Networks simultaneously to forecast the average binding affinity of the generated ligand towards multiple target proteins. With each iteration, given feedback from the predictive network, the generative network learns to optimize itself to create molecules with a higher average binding affinity towards multiple proteins. DeepLig was evaluated based on its ability to generate multi-target ligands against two distinct proteins, multi-target ligands against three distinct proteins, and multi-target ligands against two distinct binding pockets on the same protein. In each test case, DeepLig was able to create a library of valid, synthetically accessible, and novel molecules with optimal and equipotent binding energies. We propose that DeepLig provides an effective approach to designing multi-targeted drug therapies that can potentially show higher success rates during in-vitro trials.
Keywords: drug design, multitargeticity, de-novo, reinforcement learning
Procedia PDF Downloads 99
160 Harmonizing Spatial Plans: A Methodology to Integrate Sustainable Mobility and Energy Plans to Promote Resilient City Planning
Authors: B. Sanchez, D. Zambrana-Vasquez, J. Fresner, C. Krenn, F. Morea, L. Mercatelli
Abstract:
Local administrations are facing established targets on sustainable development from different disciplines at the heart of different city departments. Nevertheless, some of these targets, such as CO2 reduction, relate to two or more disciplines, as is the case with sustainable mobility and energy plans (SUMP & SECAP/SEAP). This opens up the possibility to cooperate efficiently among different city departments and to create and develop harmonized spatial plans by using available resources and together achieving more ambitious goals in cities. The steps of the harmonization process developed result in the identification of areas in which to achieve common strategic objectives. Harmonization, in other words, helps different departments in local authorities to work together and optimize the use of resources by sharing the same vision, involving key stakeholders, and promoting common data assessment to better optimize the resources. A methodology to promote resilient city planning via the harmonization of sustainable mobility and energy plans is presented in this paper. In order to validate the proposed methodology, a representative city engaged in an innovation process in efficient spatial planning is used as a case study. The harmonization process of sustainable mobility and energy plans covers identifying matching targets between different fields and developing different spatial plans with dual benefit and common indicators guaranteeing the continuous improvement of the harmonized plans. The proposed methodology supports local administrations in consistent spatial planning, considering both energy efficiency and sustainable mobility. Thus, municipalities can use their human and economic resources efficiently. This guarantees an efficient upgrade of land use plans integrating energy and mobility aspects in order to achieve sustainability targets, as well as to improve the wellbeing of their citizens.
Keywords: integrated multi-sector planning, spatial plans harmonization, sustainable energy and climate action plan, sustainable urban mobility plan
Procedia PDF Downloads 178
159 Photocapacitor Integrating Solar Energy Conversion and Energy Storage
Authors: Jihuai Wu, Zeyu Song, Zhang Lan, Liuxue Sun
Abstract:
Solar energy is clean, open, and infinite, but solar radiation on the earth is fluctuating, intermittent, and unstable. The sustainable utilization of solar energy therefore requires a combination of highly efficient energy conversion and low-loss energy storage technologies. Hence, a photocapacitor integrating photo-electrical conversion and electro-chemical storage functions in a single device is a cost-effective, volume-effective and functionally effective choice. However, owing to the multiple components, multi-dimensional structure and multiple functions in one device, and especially the mismatch of the functional modules, the overall conversion and storage efficiency of photocapacitors has remained below 13%, which seriously limits the development of integrated systems for solar conversion and energy storage. To this end, two typical photocapacitors were studied. A three-terminal photocapacitor was integrated by using a perovskite solar cell as the solar conversion module and a symmetrical supercapacitor as the energy storage module. A function portfolio management concept was proposed, and the relationships among the various efficiencies during the photovoltaic conversion and energy storage process were clarified. By harmonizing the energy matching between the conversion and storage modules and making the maximum power points coincide and the maximum efficiency points synchronize, the overall efficiency of the photocapacitor surpassed 18%, and the Joule efficiency approached 90%. A voltage-adjustable hybrid supercapacitor (VAHSC) was then designed as the energy storage module and two Si wafers in series as the solar conversion module, from which a second three-terminal photocapacitor was fabricated. The VAHSC effectively harmonizes the energy harvest and storage modules, resulting in current, voltage, power, and energy matching between both modules. The optimal photocapacitor achieved an overall efficiency of 15.49% and a Joule efficiency of 86.01%, along with excellent charge/discharge cycle stability. In addition, the Joule efficiency (ηJoule) was defined, for the first time, as the energy ratio of discharge to charge of the devices.
Keywords: Joule efficiency, perovskite solar cell, photocapacitor, silicon solar cell, supercapacitor
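A small sketch of how the two reported efficiency measures can be computed from charge/discharge records; the current/voltage traces, irradiance and device area below are synthetic placeholders, not measured data from the paper.

```python
import numpy as np

def energy(t, v, i):
    """Electrical energy as the integral of V(t)*I(t) dt (trapezoidal rule)."""
    return np.trapz(v * i, t)

# Synthetic charge/discharge records (illustrative placeholders, not measured data)
t = np.linspace(0.0, 100.0, 1001)                          # s
v_chg, i_chg = 0.8 + 0.004 * t, np.full_like(t, 0.010)     # V, A during photo-charging
v_dis, i_dis = 1.2 - 0.004 * t, np.full_like(t, 0.009)     # V, A during discharging

e_chg, e_dis = energy(t, v_chg, i_chg), energy(t, v_dis, i_dis)
joule_eff = e_dis / e_chg                                  # discharge/charge energy ratio

irradiance, area = 1000.0, 1.0e-4                          # W/m^2 and m^2 (assumed values)
e_solar = irradiance * area * (t[-1] - t[0])               # incident solar energy while charging
overall_eff = e_dis / e_solar                              # conversion-plus-storage efficiency

print(f"Joule efficiency:   {joule_eff:.1%}")
print(f"Overall efficiency: {overall_eff:.1%}")
```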
Procedia PDF Downloads 87
158 Analysis of Cycling Accessibility on Chengdu Tianfu Greenway Based on Improved Two-Step Floating Catchment Area Method: A Case Study of Jincheng Greenway
Authors: Qin Zhu
Abstract:
Under the background of accelerating the construction of a Beautiful and Livable Park City in Chengdu, the Tianfu greenway system is an important support system for park construction across the whole region, and its accessibility is one of the key indicators to measure the effectiveness of greenway construction. In recent years, cycling has become an important transportation mode for residents travelling to the greenways because of its low-carbon, healthy and convenient characteristics, and the study of greenway accessibility under the cycling mode can provide reference suggestions for the optimization and improvement of greenways. Taking the Jincheng Greenway in Chengdu City as an example, the Baidu Map Application Programming Interface (API) and a questionnaire survey were used to improve the two-step floating catchment area (2SFCA) method in the three dimensions of search threshold, supply side and demand side, to calculate the cycling accessibility of the greenway and to explore its spatial matching relationship with population density, the number of entrances and the comprehensive attractiveness. The results show that: 1) the distribution of greenway accessibility in Jincheng shows a pattern of "high in the south and low in the north, high in the west and low in the east"; 2) the spatial match between greenway accessibility and the population density of residential areas is imbalanced, and there is a significant positive correlation between accessibility and the number of selectable greenway access points in residential areas, as well as the overall attractiveness of greenways, with a high degree of match. On this basis, it is proposed to give priority to the mismatched areas to alleviate the contradiction between supply and demand, optimize the greenway access points to improve the traffic connection, and enhance the comprehensive quality of the greenway and strengthen its service capacity, so as to further improve the cycling accessibility of the Jincheng Greenway and improve the spatial allocation of greenway resources.
Keywords: accessibility, Baidu Maps API, cycling, greenway, 2SFCA
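For reference, a generic two-step floating catchment area calculation is sketched below; the paper's improvements (Baidu-API cycling-time thresholds and the adjusted supply/demand terms) are not reproduced, and the distance-decay choice and toy numbers are assumptions.

```python
import numpy as np

def two_sfca(dist, supply, demand, d0):
    """Generic two-step floating catchment area (2SFCA) accessibility.
    dist[i, j]: travel cost from residential area i to greenway access point j
    supply[j]:  service capacity of access point j
    demand[i]:  population of residential area i
    d0:         catchment threshold (here interpreted as a cycling-time threshold)"""
    w = np.exp(-(dist / d0) ** 2)                 # Gaussian distance-decay weight
    w = np.where(dist <= d0, w, 0.0)              # zero outside the catchment

    # Step 1: supply-to-demand ratio of each access point within its catchment
    R = supply / (w * demand[:, None]).sum(axis=0)
    # Step 2: accessibility of each residential area = weighted sum of reachable ratios
    return (w * R[None, :]).sum(axis=1)

# Toy example: 4 residential areas, 3 access points, costs in minutes by bicycle
dist = np.array([[5, 12, 30], [8, 6, 25], [20, 9, 10], [35, 28, 7]], dtype=float)
supply = np.array([100.0, 80.0, 60.0])            # e.g. entrance capacity scores
demand = np.array([5000.0, 8000.0, 3000.0, 4000.0])
print(two_sfca(dist, supply, demand, d0=15.0))
```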
Procedia PDF Downloads 86
157 Environmental Impact Assessment in Mining Regions with Remote Sensing
Authors: Carla Palencia-Aguilar
Abstract:
Calculations of the net carbon balance can be obtained by means of Net Biome Productivity (NBP), Net Ecosystem Productivity (NEP), and Net Primary Production (NPP). The latter is an important component of the biosphere carbon cycle and is easily obtained from MODIS MOD17A3HGF data; however, the results are only available yearly. To overcome data availability, bands 33 to 36 from MODIS MYD021KM (obtained on a daily basis) were analyzed and compared with NPP data from the years 2000 to 2021 in 7 sites where surface mining takes place in the Colombian territory. Coal, gold, iron, and limestone were the minerals of interest. Scales and units, as well as thermal anomalies, were considered for the net carbon balance per location. The NPP time series from the satellite images were filtered using two Matlab filters: first order and discrete transfer. After filtering the NPP time series, comparing the graph results with the satellite image values, and running a linear regression, the results showed R² values from 0.72 to 0.85. To establish comparable units between NPP and bands 33 to 36, the Greenhouse Gas Equivalencies Calculator by the EPA was used. The comparison was established in two ways: one by the sum of all the data per point per year, and the other by the average of 46 weeks and finding the percentage that the value represented with respect to NPP. The former underestimated the total CO2 emissions. The results also showed that coal and gold mining in the last 22 years had lower CO2 emissions than limestone, with averages per year of 143 kton CO2 eq for gold, 152 kton CO2 eq for coal, and 287 kton CO2 eq for iron. Limestone emissions varied from 206 to 441 kton CO2 eq. The maximum emission values from unfiltered data correspond to 165 kton CO2 eq for gold, 188 kton CO2 eq for coal, and 310 kton CO2 eq for iron, with limestone varying from 231 to 490 kton CO2 eq. If the most polluting limestone site improves its production technology, limestone could reach a maximum of 318 kton CO2 eq emissions per year, a value very similar to that of iron. The importance of gathering these data is to establish benchmarks in order to attain the 2050 zero-emissions goal.
Keywords: carbon dioxide, NPP, MODIS, mining
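The filtering-plus-regression step and the CO2-equivalent conversion can be illustrated as follows; the series are synthetic stand-ins (the study used Matlab filters on MODIS-derived data), and the 44/12 conversion is the standard carbon-to-CO2 mass ratio rather than the EPA calculator itself.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.stats import linregress

rng = np.random.default_rng(1)

# Synthetic stand-ins for the two data sources (illustrative only):
npp_annual = 600 + 30 * np.sin(np.arange(22)) + rng.normal(0, 10, 22)    # gC/m^2/yr, 2000-2021
daily = np.repeat(npp_annual, 365) / 365 + rng.normal(0, 0.3, 22 * 365)  # band-derived proxy

# First-order low-pass filter as a discrete transfer function:
# y[n] = a*x[n] + (1 - a)*y[n-1]
a = 0.05
smoothed = lfilter([a], [1.0, -(1.0 - a)], daily)

# Aggregate the filtered daily proxy to yearly values and regress against NPP
yearly_proxy = smoothed.reshape(22, 365).sum(axis=1)
fit = linregress(yearly_proxy, npp_annual)
print(f"R^2 = {fit.rvalue**2:.2f}")

# Convert an NPP-based carbon figure to CO2-equivalent (44/12 mass ratio of CO2 to C)
carbon_kton = 40.0                     # hypothetical kton C for one site-year
print(f"{carbon_kton * 44.0 / 12.0:.0f} kton CO2 eq")
```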
Procedia PDF Downloads 105
156 Analysis of Lift Arm Failure and Its Improvement for Use in Farm Tractors
Authors: Japinder Wadhawan, Pradeep Rajan, Alok K. Saran, Navdeep S. Sidhu, Daanvir K. Dhir
Abstract:
Currently, the research focus in the development of agricultural equipment and tractor parts in India is on innovation and the use of alternative materials such as austempered ductile iron (ADI). The three-point linkage mechanism of the tractor is susceptible to unpredictable load conditions in the field, and one of the critical components vulnerable to failure is the lift arm. Conventionally, the lift arm is manufactured either by forging or casting (SG iron), and the main objective of the present work is to reduce the failure occurrences in the lift arm, which is achieved by changing the manufacturing material to ADI without changing the existing design. The effect of four pertinent variables of ADI manufacture, viz. austenitizing temperature, austenitizing time, austempering temperature, and austempering time, was investigated using the Taguchi method for design of experiments. To analyze the effect of the parameters on the mechanical properties, the mean and the signal-to-noise (S/N) ratio were calculated based on the design of experiments with an L9 orthogonal array and the linear graph. The best combination for achieving the desired mechanical properties of the lift arm is austenitization at 860°C for 90 minutes and austempering at 350°C for 60 minutes. Results showed that the developed component has a tensile strength of 925 MPa, 7.8 percent elongation and 120 joules of toughness, making it a more suitable material for lift arm manufacturing. A confirmatory experiment was performed, and good agreement was found between the predicted and experimental values. Also, a CAD model of the existing design was developed in computer-aided design software, and structural loading calculations were performed with a commercial finite element analysis package. An optimized shape of the lift arm has also been proposed, resulting in a lighter and cheaper product than the existing design that can withstand the same loading conditions effectively.
Keywords: austempered ductile iron, design of experiment, finite element analysis, lift arm
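A sketch of the Taguchi analysis referred to above: larger-is-better S/N ratios over an L9(3^4) array and level-wise main effects. The response values are hypothetical placeholders, not the measured ADI properties.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: four factors (A-D) at three levels each.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

# Hypothetical tensile-strength responses (MPa) for the nine runs, two replicates each
# (placeholder values, not the experimental results of the study).
y = np.array([[880, 905], [915, 920], [890, 900],
              [930, 925], [950, 945], [910, 905],
              [895, 900], [935, 940], [960, 955]], dtype=float)

# Larger-is-better signal-to-noise ratio: S/N = -10*log10(mean(1/y^2))
sn = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

# Main effects: mean S/N at each level of each factor; the best level maximises S/N.
factors = ["austenitizing T", "austenitizing t", "austempering T", "austempering t"]
for f, name in enumerate(factors):
    means = [sn[L9[:, f] == lvl].mean() for lvl in (1, 2, 3)]
    print(f"{name:16s} best level: {int(np.argmax(means)) + 1}  (mean S/N per level: "
          + ", ".join(f"{m:.2f}" for m in means) + ")")
```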
Procedia PDF Downloads 233
155 FEM for Stress Reduction by Optimal Auxiliary Holes in a Loaded Plate with an Elliptical Hole
Authors: Basavaraj R. Endigeri, S. G. Sarganachari
Abstract:
Steel is widely used in machine parts, structural equipment and many other applications. In many steel structural elements, holes of different shapes and orientations are made with a view to satisfying the design requirements. The presence of holes in steel elements creates stress concentration, which eventually reduces the mechanical strength of the structure. Therefore, it is of great importance to investigate the state of stress around the holes for the safe and proper design of such elements. A literature survey shows that, to date, there is no analytical solution for reducing the stress concentration by providing auxiliary holes at definite locations and radii in a steel plate. A numerical method can instead be used to determine the optimum locations and radii of auxiliary holes. In the present work, a steel plate with an elliptical hole subjected to uniaxial load is analyzed, and the effect of stress concentration is represented graphically. The introduction of auxiliary holes at optimum locations and radii and its effect on the stress concentration are also represented graphically. The finite element analysis package ANSYS 11.0 is used to analyse the steel plate. The analysis is carried out using PLANE42 elements. Further, the ANSYS optimization module is used to determine the optimum locations and radii of the auxiliary holes to reduce the stress concentration. All the results for different diameter-to-plate-width ratios are presented graphically. The results of this study take the form of graphs for determining the locations and diameters of optimal auxiliary holes, including the graph of stress concentration versus the central hole diameter to plate width ratio. The finite element results of the study indicate that the stress concentration effect of a central elliptical hole in a uniaxially loaded plate can be reduced by introducing auxiliary holes on either side of the central hole.
Keywords: finite element method, optimization, stress concentration factor, auxiliary holes
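For orientation, the classical Inglis result gives the stress concentration factor of an elliptical hole in an infinite plate under remote uniaxial tension; it ignores finite plate width and auxiliary holes, which is precisely what the FE model handles.

```python
import math

def kt_elliptical_hole(a: float, b: float) -> float:
    """Inglis stress-concentration factor for an elliptical hole in an infinite
    plate under remote uniaxial tension: Kt = 1 + 2a/b, where 2a is the hole
    dimension perpendicular to the load and 2b the dimension parallel to it.
    Equivalently Kt = 1 + 2*sqrt(a/rho) with tip radius rho = b**2 / a."""
    return 1.0 + 2.0 * a / b

for a, b in [(10.0, 10.0), (10.0, 5.0), (10.0, 2.0)]:   # mm; a circle, then sharper ellipses
    rho = b * b / a
    print(f"a={a}, b={b}, tip radius={rho:4.1f} mm -> Kt = {kt_elliptical_hole(a, b):.1f}")
```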
Procedia PDF Downloads 453
154 Spatial Differentiation of Elderly Care Facilities in Mountainous Cities: A Case Study of Chongqing
Abstract:
In this study, a web crawler was used to collect POI sample data from the 38 districts and counties of Chongqing in 2022, and ArcGIS was used for coordinate and projection conversion and data visualization. Kernel density analysis and spatial correlation analysis were used to explore the spatial distribution characteristics of elderly care facilities in Chongqing, and K-means cluster analysis was carried out with GeoDa to study the spatial concentration of elderly care resources in the 38 districts and counties. Finally, the driving forces of the spatial differentiation of elderly care facilities in the districts and counties of Chongqing are studied using the geographical detector method. The results show that: (1) in terms of spatial distribution structure, the distribution of elderly care facilities in Chongqing is unbalanced, showing a distribution pattern of 'large dispersion and small agglomeration', and the asymmetric pattern of 'dense in the west and sparse in the east, dense in the north and sparse in the south' is prominent. (2) In terms of the spatial matching between elderly care resources and the elderly population, there is weak coordination between the input of elderly care resources and the distribution of the elderly population at the county level in Chongqing. (3) The analysis of the geographical detector results shows that the dominant single-factor influences are the size of the elderly population, public financial revenue, and district and county GDP. The influence of each factor on the spatial distribution of elderly care facilities is not simply superimposed but shows nonlinear enhancement or two-factor enhancement effects. It is therefore necessary to strengthen the synergistic effect of pairs of factors and promote the synergistic effect of multiple factors.
Keywords: aging, elderly care facilities, spatial differentiation, geographical detector, driving force analysis, mountain city
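The factor detector of the geographical detector method reduces to the q-statistic below; the stratification and facility counts in the example are hypothetical.

```python
import numpy as np

def factor_detector_q(y, strata):
    """Geographical detector q-statistic:
        q = 1 - (sum_h N_h * var_h) / (N * var_total)
    y:      facility density (or count) per spatial unit
    strata: category of the candidate driving factor for each unit
    q lies in [0, 1]; a larger q means the factor explains more of the
    spatial differentiation of y."""
    y = np.asarray(y, dtype=float)
    strata = np.asarray(strata)
    n, var_total = len(y), y.var()
    within = sum((strata == s).sum() * y[strata == s].var() for s in np.unique(strata))
    return 1.0 - within / (n * var_total)

# Toy example: facility counts for 12 districts, stratified by GDP class (hypothetical data)
counts = [120, 95, 110, 40, 35, 50, 45, 12, 8, 15, 10, 9]
gdp_class = ["high"] * 3 + ["mid"] * 4 + ["low"] * 5
print(f"q(GDP) = {factor_detector_q(counts, gdp_class):.2f}")
```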
Procedia PDF Downloads 40
153 An Assessment of the Socio-Economic Impacts of Smallholder Eucalyptus Tree Plantation: The Case of Northwest Ethiopia
Authors: Mersha Tewodros Getnet, Mengistu Ketema, Bamlaku Alemu, Girma Demilew
Abstract:
The availability of forest products determines the possibilities for forest-based livelihood options. Plantation forestry is a widespread economic activity in the highland areas of the Amhara regional state, owing primarily to degradation and limited access to natural forests. As a result, tree plantation has become one of the rural livelihood options in the area. Therefore, given the increasing importance of smallholder plantations in the highland areas of the Amhara Regional State, the aim of this research was to evaluate the extent of smallholder plantations and their socio-economic impact. To address this research aim, a sequential embedded mixed research design was employed. Qualitative and quantitative information was gathered from both primary and secondary sources. Primary data were collected from 385 sample households, which were chosen using a three-stage sampling method based on the Cochran sample size formula. Both descriptive and inferential statistics were used to analyze the data. Smallholder eucalyptus plantations in the study area were found to be common, and they are now part of the livelihood portfolio for meeting household wood consumption and generating cash income. According to the ATT results of the PSM model, income from selling farm forest products contributes to higher total household income, farm expenditure per unit of cultivated land, and education spending than in non-planter households. As a result, the government must strengthen plantation practices by prioritizing specific intervention areas while implementing measures to counteract the plantation's inequality-increasing effect through a variety of means, including progressive taxation.
Keywords: smallholder plantation, eucalyptus, propensity score matching, average treatment effect, income
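A minimal sketch of the propensity score matching / ATT estimation described above, assuming a logistic propensity model and 1:1 nearest-neighbour matching with replacement; the covariates, sample and the embedded treatment effect are synthetic, not the survey data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)

# Synthetic household data (placeholders for the survey variables):
# covariates X = [land holding (ha), household head age, distance to market (km)]
n = 385
X = np.column_stack([rng.gamma(2.0, 0.5, n), rng.normal(45, 12, n), rng.gamma(2.0, 3.0, n)])
treated = (rng.random(n) < 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.05 * X[:, 2])))).astype(int)
income = 20000 + 8000 * X[:, 0] + 3000 * treated + rng.normal(0, 2000, n)  # true effect = 3000

# Step 1: propensity scores from a logistic model of plantation participation
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Step 2: nearest-neighbour matching on the propensity score (1:1, with replacement)
nn = NearestNeighbors(n_neighbors=1).fit(ps[treated == 0].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated == 1].reshape(-1, 1))

# Step 3: ATT = mean outcome difference between planters and their matched non-planters
att = income[treated == 1].mean() - income[treated == 0][idx.ravel()].mean()
print(f"Estimated ATT on household income: {att:,.0f}")
```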
Procedia PDF Downloads 139
152 Legal Judgment Prediction through Indictments via Data Visualization in Chinese
Authors: Kuo-Chun Chien, Chia-Hui Chang, Ren-Der Sun
Abstract:
Legal Judgment Prediction (LJP) is a subtask of legal AI. Its main purpose is to use the facts of a case to predict the judgment result. In Taiwan's criminal procedure, when prosecutors complete the investigation of a case, they decide whether to prosecute the suspect and which article of criminal law should be applied based on the facts and evidence of the case. In this study, we collected 305,240 indictments from the public inquiry system of the procuratorate of the Ministry of Justice, which included 169 charges and 317 articles from 21 laws. We take the crime facts in the indictments as the main input to jointly learn a prediction model for law source, article, and charge simultaneously, based on the pre-trained BERT model. For single-article cases where the frequencies of the charge and article are greater than 50, the prediction performance for law sources, articles, and charges reaches 97.66, 92.22, and 60.52 macro-F1, respectively. To understand the large performance gap between articles and charges, we used a bipartite graph to visualize the relationship between the articles and charges, and found that the reason for the poor prediction performance was actually the wording precision. Some charges use the simplest words, while others may include the perpetrator or the result to make the charge more specific. For example, Article 284 of the Criminal Law may be indicted as "negligent injury", "negligent death", "business injury", "driving business injury", or "non-driving business injury". As another example, Article 10 of the Drug Hazard Control Regulations can be charged as "Drug Control Regulations" or "Drug Hazard Control Regulations". In order to solve the above problems and more accurately predict the article and charge, we plan to include the article content or charge names in the input and use the sentence-pair classification method for question-answer problems in the BERT model to improve the performance. We will also consider a sequence-to-sequence approach to charge prediction.
Keywords: legal judgment prediction, deep learning, natural language processing, BERT, data visualization
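The planned sentence-pair formulation could be sketched with a pre-trained Chinese BERT checkpoint as below; the checkpoint name, placeholder texts and the two-label head are assumptions, and the score is meaningless until the model is fine-tuned on indictment data.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Assumed checkpoint; the paper fine-tunes a pre-trained BERT on Chinese indictment facts.
model_name = "bert-base-chinese"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

fact = "被告人於深夜駕車因過失致他人受傷……"        # crime facts from an indictment (placeholder)
candidate_charge = "過失傷害"                      # candidate charge name (placeholder)

# Sentence-pair encoding: [CLS] fact [SEP] charge [SEP]; the classifier scores
# whether the candidate charge matches the facts.
inputs = tokenizer(fact, candidate_charge, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
match_prob = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"P(charge matches facts) = {match_prob:.3f}")   # untrained head: meaningless until fine-tuned
```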
Procedia PDF Downloads 122
151 Integrated Genetic-A* Graph Search Algorithm Decision Model for Evaluating Cost and Quality of School Renovation Strategies
Authors: Yu-Ching Cheng, Yi-Kai Juan, Daniel Castro
Abstract:
Energy consumption of buildings has been an increasing concern for researchers and practitioners in the last decade. Sustainable building renovation can reduce energy consumption and carbon dioxide emissions; meanwhile, it can also extend the useful life of existing buildings and facilitate environmental sustainability while providing social and economic benefits to society. School buildings are different from other designed spaces as they are more crowded and host the largest portion of daily activities and occupants. Strategies that focus on reducing energy use while also improving the students' learning environment have become a significant subject in the development of sustainable school buildings. A decision model is developed in this study to solve complicated and large-scale combinatorial, discrete and deterministic problems such as school renovation projects. The task of this model is to automatically search for the most cost-effective (lower cost and higher quality) renovation strategies. In this study, the search for optimal school building renovation solutions is by nature a large-scale, deterministic zero-one programming problem. A* is suitable for solving deterministic problems due to its stable and effective search process, while genetic algorithms (GA) provide opportunities to acquire globally optimal solutions in a short time via their probabilistic search process. These two algorithms are combined in this study to consider trade-offs between renovation cost and improved quality; the resulting decision model is able to evaluate current school environmental conditions and suggest an optimal scheme of sustainable school building renovation strategies. Through the adoption of this decision model, school managers can overcome existing limitations and transform school buildings into spaces more beneficial to students and friendly to the environment.
Keywords: decision model, school buildings, sustainable renovation, genetic algorithm, A* search algorithm
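To make the zero-one formulation concrete, the sketch below runs a plain genetic algorithm over binary renovation-strategy vectors with a budget-penalized quality fitness; the measures, costs and budget are invented, and the paper's integration with A* search is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical renovation measures: (cost in k$, quality score); placeholder values
cost = np.array([120, 80, 200, 60, 150, 90, 40, 110], dtype=float)
quality = np.array([8, 5, 14, 4, 10, 7, 2, 9], dtype=float)
budget = 400.0

def fitness(pop):
    """Total quality, heavily penalised when the selected measures exceed the budget."""
    c, q = pop @ cost, pop @ quality
    return q - 100.0 * np.maximum(0.0, c - budget)

def ga(pop_size=60, n_gen=200, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, len(cost)))
    for _ in range(n_gen):
        f = fitness(pop)
        # tournament selection
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(f[a] > f[b], a, b)]
        # one-point crossover between each parent and a reversed-order mate
        cut = rng.integers(1, len(cost), pop_size)
        mask = np.arange(len(cost)) < cut[:, None]
        children = np.where(mask, parents, parents[::-1])
        # bit-flip mutation
        children ^= (rng.random(children.shape) < p_mut).astype(int)
        pop = children
    best = pop[np.argmax(fitness(pop))]
    return best, best @ cost, best @ quality

best, c, q = ga()
print(f"selected measures: {best}, cost = {c:.0f} k$, quality = {q:.0f}")
```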
Procedia PDF Downloads 119150 System Identification of Building Structures with Continuous Modeling
Authors: Ruichong Zhang, Fadi Sawaged, Lotfi Gargab
Abstract:
This paper introduces a wave-based approach for system identification of high-rise building structures from a pair of seismic recordings, which can be used to evaluate structural integrity and detect damage in post-earthquake structural condition assessment. The approach is based on wave features of generalized impulse and frequency response functions (GIRF and GFRF), i.e., wave responses at one structural location to an impulsive motion at another reference location in the time and frequency domains, respectively. With a pair of seismic recordings at the two locations, GFRF is obtainable as the Fourier spectral ratio of the two recordings, and GIRF is then found by inverse Fourier transformation of GFRF. With an appropriate continuous model for the structure, a closed-form solution of GFRF, and subsequently GIRF, can also be found in terms of wave transmission and reflection coefficients, which are related to the structural physical properties above the impulse location. Matching the two sets of GFRF and/or GIRF from the recordings and the model helps identify structural parameters such as wave velocity or shear modulus. For illustration, this study examines the ten-story Millikan Library in Pasadena, California, with recordings of the Yorba Linda earthquake of September 3, 2002. The building is modelled as piecewise continuous layers, with which GFRF is derived as a function of building parameters such as impedance, cross-sectional area, and damping. GIRF can then be found in closed form for some special cases and numerically in general. Not only does this study reveal the influence of building parameters on the wave features of GIRF and GFRF, it also presents some system-identification results, which are consistent with other vibration- and wave-based results. Finally, this paper discusses the effectiveness of the proposed model in system identification.Keywords: wave-based approach, seismic responses of buildings, wave propagation in structures, construction
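The GFRF/GIRF relationship described above can be illustrated numerically: the sketch below forms a regularised Fourier spectral ratio of two synthetic "recordings" and inverts it to obtain an impulse-response estimate. The signals, sampling rate, and regularisation term are assumptions for illustration only, not data or processing choices from the paper.

```python
# Minimal numerical sketch: GFRF as the Fourier spectral ratio of two
# recordings, GIRF via inverse FFT. Signals are synthetic placeholders.
import numpy as np

fs = 100.0                      # sampling rate [Hz] (assumed)
t = np.arange(0, 40, 1 / fs)    # 40 s record

# Synthetic "recordings": reference (e.g., base) and upper-floor response.
rng = np.random.default_rng(0)
reference = rng.standard_normal(t.size)
# Fake "structural" response: filtered, decaying copy of the reference + noise.
response = np.convolve(reference, np.exp(-np.arange(200) / 50.0),
                       mode="full")[: t.size]
response += 0.05 * rng.standard_normal(t.size)

REF = np.fft.rfft(reference)
RESP = np.fft.rfft(response)

eps = 1e-3 * np.max(np.abs(REF))              # regularisation (assumption)
GFRF = RESP * np.conj(REF) / (np.abs(REF) ** 2 + eps ** 2)  # ~ RESP / REF
GIRF = np.fft.irfft(GFRF, n=t.size)           # generalized impulse response

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
print("peak |GFRF| at %.2f Hz" % freqs[np.argmax(np.abs(GFRF))])
```

With real data, the identified building parameters would come from fitting the model-based closed-form GFRF to this recording-based estimate; the snippet only shows the spectral-ratio step.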
Procedia PDF Downloads 234149 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics
Authors: Titus A. Beu
Abstract:
Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out due to its high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints on efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects that are crucial for the realistic modeling of DNA-PEI polyplexes, such as options for treating electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation as a function of PEI chain size and protonation pattern. The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles rather than on changes in DNA-strand curvature. The gained insights are expected to be of significant help for designing effective gene-delivery applications.Keywords: DNA condensation, gene delivery, polyethyleneimine, molecular dynamics
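As a minimal illustration of the Boltzmann inversion step used to derive CG bonded potentials from AA probability distributions, the sketch below converts a placeholder bond-length distribution into a potential via V(r) = -kB·T·ln P(r). The samples are synthetic stand-ins for AA-mapped data, and the Jacobian correction for radial distributions is omitted for brevity.

```python
# Minimal sketch of Boltzmann inversion: V(r) = -kB*T*ln P(r), applied to a
# placeholder distribution of CG bond lengths (stand-in for AA-mapped data).
import numpy as np

kB = 0.0083145   # Boltzmann constant in kJ/(mol*K), common MD units
T = 300.0        # temperature in K

# Placeholder samples mimicking bond lengths from an AA trajectory mapped to beads.
rng = np.random.default_rng(1)
bond_lengths = rng.normal(loc=0.38, scale=0.02, size=100_000)   # nm

hist, edges = np.histogram(bond_lengths, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

mask = hist > 0                          # avoid log(0) in empty bins
V = -kB * T * np.log(hist[mask])         # kJ/mol, up to an additive constant
V -= V.min()                             # shift the minimum to zero

for r, v in list(zip(centers[mask], V))[::20]:
    print(f"r = {r:.3f} nm   V = {v:.2f} kJ/mol")
```

The tabulated V(r) (or a harmonic fit to it) would then serve as the CG bonded term, iteratively refined until the CG simulation reproduces the AA distribution.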
Procedia PDF Downloads 120148 Exploring the Design of Prospective Human Immunodeficiency Virus Type 1 Reverse Transcriptase Inhibitors through a Comprehensive Approach of Quantitative Structure Activity Relationship Study, Molecular Docking, and Molecular Dynamics Simulations
Authors: Mouna Baassi, Mohamed Moussaoui, Sanchaita Rajkhowa, Hatim Soufi, Said Belaaouad
Abstract:
The objective of this paper is to address the challenging task of targeting Human Immunodeficiency Virus type 1 Reverse Transcriptase (HIV-1 RT) in the treatment of AIDS. Reverse Transcriptase inhibitors (RTIs) have limitations due to the development of Reverse Transcriptase mutations that lead to treatment resistance. In this study, a combination of statistical analysis and bioinformatics tools was adopted to develop a mathematical model that relates the structure of compounds to their inhibitory activities against HIV-1 Reverse Transcriptase. Our approach was based on a series of compounds recognized for their HIV-1 RT enzymatic inhibitory activities. These compounds were designed in silico, with their descriptors computed using multiple tools. The most statistically promising model was chosen, and its applicability domain was ascertained. Furthermore, compounds exhibiting biological activity comparable to existing drugs were identified as potential inhibitors of HIV-1 RT. The compounds were evaluated based on their absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties and their adherence to Lipinski's rule of five. Molecular docking techniques were employed to examine the interaction between Reverse Transcriptase (wild-type and mutant) and the ligands, including a known drug available on the market. Molecular dynamics simulations were also conducted to assess the stability of the RT-ligand complexes. Our results reveal some of the new compounds as promising candidates for effectively inhibiting HIV-1 Reverse Transcriptase, matching the potency of the established drug; this necessitates further experimental validation. Beyond its immediate results, this study provides a methodological foundation for future efforts to discover and design new inhibitors targeting HIV-1 Reverse Transcriptase.Keywords: QSAR, ADMET properties, molecular docking, molecular dynamics simulation, reverse transcriptase inhibitors, HIV type 1
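As a generic illustration of the QSAR step (not the paper's actual model or data), the sketch below fits a multiple linear regression of a pseudo-activity on random placeholder descriptors, reports training R² and cross-validated Q², and applies a crude bounding-box applicability-domain check.

```python
# Minimal QSAR sketch (synthetic data): multiple linear regression relating
# molecular descriptors to a pIC50-like activity, with cross-validated R^2.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_compounds, n_descriptors = 40, 5

# Placeholder descriptor matrix (e.g., logP, MW, TPSA, H-bond donors/acceptors).
X = rng.standard_normal((n_compounds, n_descriptors))
true_coefs = np.array([0.8, -0.3, 0.5, 0.0, 0.2])
y = X @ true_coefs + 0.2 * rng.standard_normal(n_compounds)   # pseudo-activity

model = LinearRegression().fit(X, y)
r2_train = model.score(X, y)
q2_cv = cross_val_score(model, X, y, cv=5, scoring="r2").mean()

print(f"R^2 (training) = {r2_train:.3f}")
print(f"Q^2 (5-fold CV) = {q2_cv:.3f}")

# Crude applicability-domain check: flag query compounds whose descriptors
# fall outside the training range (simple bounding-box criterion).
query = rng.standard_normal(n_descriptors)
in_domain = np.all((query >= X.min(axis=0)) & (query <= X.max(axis=0)))
print("query compound inside applicability domain:", bool(in_domain))
```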
Procedia PDF Downloads 93147 Mild Hypothermia Versus Normothermia in Patients Undergoing Cardiac Surgery: A Propensity Matched Analysis
Authors: Ramanish Ravishankar, Azar Hussain, Mahmoud Loubani, Mubarak Chaudhry
Abstract:
Background and Aims: Currently, there are no strict guidelines on cardiopulmonary bypass temperature management in cardiac surgery not involving the aortic arch. The aim of this study was to compare outcomes between mild hypothermia and normothermia in patients undergoing on-pump cardiac surgery not involving the aortic arch. Methods: This was a retrospective cohort study from January 2015 until May 2023. Patients who underwent cardiac surgery with cardiopulmonary bypass temperatures ≥32°C were included and stratified into mild hypothermia (32°C–35°C) and normothermia (>35°C) cohorts. Propensity matching was applied through the nearest-neighbour method (1:1), using the risk factors detailed in the EuroSCORE, in RStudio. The primary outcome was mortality. Secondary outcomes included post-operative stay, intensive care unit readmission, hospital re-admission, stroke, and renal complications. Patients who had major aortic surgery or off-pump operations were excluded. Results: Each cohort had 1675 patients. There was a significant increase in overall mortality in the mild hypothermia cohort (3.59% vs. 2.32%; p=0.04912). There was also a greater incidence of stroke (2.09% vs. 1.13%; p=0.0396) and of transient ischaemic attack (TIA) (3.1% vs. 1.49%; p=0.0027). There was no significant difference in renal complications (9.13% vs. 7.88%; p=0.2155). Conclusions: Patients who underwent mild hypothermia during cardiopulmonary bypass had significantly greater mortality and a higher incidence of stroke and transient ischaemic attack. Mild hypothermia does not appear to provide any benefit over normothermia, including any neuroprotective benefit. These results differ from those of other major studies; further trials and studies are needed to reach a consensus.Keywords: cardiac surgery, therapeutic hypothermia, neuroprotection, cardiopulmonary bypass
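As a generic illustration of 1:1 nearest-neighbour propensity matching (the study performed its matching in RStudio, so this Python sketch with synthetic covariates is only an analogue), the code below estimates propensity scores with logistic regression and pairs each treated patient with the nearest control on that score.

```python
# Minimal sketch (synthetic data): 1:1 nearest-neighbour propensity matching.
# Covariates are placeholders for EuroSCORE-type risk factors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
n = 500
X = rng.standard_normal((n, 4))                                   # placeholder risk factors
treated = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)  # e.g. hypothermia flag

# Propensity score: probability of receiving the exposure given the covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]

# 1:1 nearest-neighbour matching on the propensity score (with replacement,
# no caliper) -- a simplification of typical matching packages.
nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
_, matches = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
matched_controls = control_idx[matches.ravel()]

print("treated patients:", len(treated_idx))
print("distinct matched controls:", len(np.unique(matched_controls)))
```

Outcome comparisons (mortality, stroke, TIA) would then be made within the matched pairs, typically after checking covariate balance.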
Procedia PDF Downloads 68