Search results for: computational modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5656

1126 Structural Protein-Protein Interactions Network of Breast Cancer Lung and Brain Metastasis Corroborates Conformational Changes of Proteins Lead to Different Signaling

Authors: Farideh Halakou, Emel Sen, Attila Gursoy, Ozlem Keskin

Abstract:

Protein–Protein Interactions (PPIs) mediate major biological processes in living cells. Studying PPIs as networks and analyzing their network properties contributes to the identification of genes and proteins associated with diseases. In this study, we have created the sub-networks of brain and lung metastasis from the primary tumor in breast cancer. To do so, we used seed genes known to cause metastasis and produced their interactions through a network-topology-based prioritization method named GUILDify. In order to have experimental support for the sub-networks, we further curated them using the STRING database. We proceeded by modeling structures for the interactions lacking complex forms in the Protein Data Bank (PDB). The functional enrichment analysis shows that KEGG pathways associated with the immune system and infectious diseases, particularly the chemokine signaling pathway, are important for lung metastasis. On the other hand, pathways related to genetic information processing are more involved in brain metastasis. The structural analyses of the sub-networks vividly demonstrated their difference in terms of using specific interfaces in lung and brain metastasis. Furthermore, the topological analysis identified genes such as RPL5, MMP2, CCR5 and DPP4, which are already known to be associated with lung or brain metastasis. Additionally, we found 6 and 9 putative genes that are specific to lung and brain metastasis, respectively. Our analysis suggests that variations in genes and pathways contributing to these different breast metastasis types may arise due to changes in the tissue microenvironment. To show the benefits of using structural PPI networks instead of the traditional node-and-edge representation, we examine two case studies showing the mutual exclusiveness of interactions and the effects of mutations on protein conformation that lead to different signaling.
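
As an illustration of the topological analysis step (hub identification by degree and centrality), the sketch below uses networkx on a small, hypothetical edge list; it is not the GUILDify/STRING pipeline used in the study, and the gene pairs are placeholders.

```python
# A minimal sketch (not the authors' pipeline): given an edge list from a
# prioritization/curation step, rank candidate genes by simple topological
# measures. Gene names and edges below are hypothetical.
import networkx as nx

edges = [
    ("CCR5", "CCL5"), ("CCR5", "DPP4"), ("MMP2", "TIMP2"),
    ("RPL5", "MDM2"), ("MDM2", "TP53"), ("MMP2", "CCL5"),
]
G = nx.Graph(edges)

degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)

# Rank genes by their topological scores; high-ranking nodes are hub candidates.
ranking = sorted(G.nodes, key=lambda n: (degree[n], betweenness[n]), reverse=True)
for gene in ranking:
    print(f"{gene:6s}  degree={degree[gene]}  betweenness={betweenness[gene]:.3f}")
```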

Keywords: breast cancer, metastasis, PPI networks, protein conformational changes

Procedia PDF Downloads 247
1125 Thermal Transport Properties of Common Transition Single Metal Atom Catalysts

Authors: Yuxi Zhu, Zhenqian Chen

Abstract:

It is of great interest to investigate the thermal properties of non-precious metal catalysts for proton exchange membrane fuel cells (PEMFC) in view of their thermal management requirements. Because of the low symmetry of these materials, accurately obtaining their thermal conductivity requires the second- and third-order force constants, which are computed by combining density functional theory with a machine-learning interatomic potential. Specifically, the interatomic force constants are obtained from a moment tensor potential (MTP) trained on ab initio molecular dynamics (AIMD) trajectories computed at 50, 300, 600, and 900 K for 1 ps each, with a time step of 1 fs. The thermal conductivity is then obtained by solving the Boltzmann transport equation. In this paper, the thermal transport properties of single metal atom catalysts are, to the best of our knowledge, studied for the first time using a machine-learning interatomic potential (MLIP). Results show that the single metal atom catalysts exhibit anisotropic thermal conductivities, and some of them exhibit good thermal conductivity. The average lattice thermal conductivities of G-FeN₄, G-CoN₄ and G-NiN₄ at 300 K are 88.61 W/mK, 205.32 W/mK and 210.57 W/mK, respectively, while the other single metal atom catalysts show low thermal conductivity due to their short phonon lifetimes. The results also show that low-frequency phonons (0-10 THz) dominate the thermal transport properties. These findings provide theoretical insights into the application of single metal atom catalysts in thermal management.
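
As an illustration of the final step that converts per-mode phonon data into a lattice thermal conductivity, the sketch below applies the relaxation-time-approximation sum; the per-mode heat capacities, group velocities, lifetimes and cell volume are random placeholders rather than MTP/BTE results for the studied catalysts.

```python
# Illustrative relaxation-time-approximation (RTA) step:
#   kappa_xx = (1/V) * sum_modes C * v_x^2 * tau
# In the paper these inputs come from MTP force constants and a BTE solver;
# here they are random placeholders with physically reasonable magnitudes.
import numpy as np

rng = np.random.default_rng(0)
n_modes = 300
C = rng.uniform(1e-24, 2e-24, n_modes)      # per-mode heat capacity, J/K
v = rng.uniform(500.0, 5000.0, n_modes)     # group-velocity component, m/s
tau = rng.uniform(1e-12, 2e-11, n_modes)    # phonon lifetime, s
V = 1.0e-28                                 # hypothetical cell volume, m^3

kappa_xx = np.sum(C * v**2 * tau) / V       # W/(m K)
print(f"illustrative kappa_xx = {kappa_xx:.1f} W/mK")
```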

Keywords: proton exchange membrane fuel cell, single metal atom catalysts, density functional theory, thermal conductivity, machine-learning interatomic potential

Procedia PDF Downloads 32
1124 Influence of Hygro-Thermo-Mechanical Loading on Buckling and Vibrational Behavior of FG-CNT Composite Beam with Temperature Dependent Characteristics

Authors: Puneet Kumar, Jonnalagadda Srinivas

Abstract:

The authors report vibration and buckling analyses of functionally graded carbon nanotube-polymer composite (FG-CNTPC) beams under hygro-thermo-mechanical environments using higher-order shear deformation theory. The material properties of the CNTs and the polymer matrix are often affected by temperature and moisture content. A micromechanical model with agglomeration effect is employed to compute the elastic, thermal and moisture properties of the composite beam. The governing differential equations of the FG-CNTPC beam are developed using higher-order shear deformation theory to account for shear deformation effects. The elastic, thermal and hygroscopic strain terms are derived from variational principles. Moreover, thermal and hygroscopic loads are determined by considering uniform, linear and sinusoidal variations of temperature and moisture content through the thickness. The equations of motion are formulated as an eigenvalue problem using appropriate displacement fields and solved by finite element modeling. The obtained natural frequencies and critical buckling loads show good agreement with published data. The numerical illustrations elaborate the dynamic as well as buckling behavior under uniaxial load for different environmental conditions, boundary conditions, volume fraction distribution profiles and beam slenderness ratios. Further, comparisons are shown for different boundary conditions, temperatures, degrees of moisture content, volume fractions, agglomeration of CNTs and beam slenderness ratios for different shear deformation theories.
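
The discrete problems produced by such a finite element formulation are generalized eigenvalue problems: K x = ω² M x for free vibration and K x = λ K_g x for buckling. The sketch below shows the solution step with SciPy on small placeholder matrices; it is not the FG-CNTPC beam model itself.

```python
# Minimal sketch of the eigenvalue solution step. The 3x3 matrices are arbitrary
# placeholders; in the paper the stiffness would include hygrothermal effects.
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])   # assembled stiffness matrix
M = np.diag([2.0, 2.0, 2.0])         # mass matrix
Kg = np.diag([1.0, 1.5, 2.0])        # geometric stiffness from the in-plane load

w2, _ = eigh(K, M)                   # squared natural frequencies
lam, _ = eigh(K, Kg)                 # load multipliers; smallest = critical buckling load
print("natural frequencies:", np.sqrt(w2))
print("critical buckling load factor:", lam.min())
```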

Keywords: hygrothermal effect, free vibration, buckling load, agglomeration

Procedia PDF Downloads 267
1123 A Study on Factors Affecting Building Information Modelling (BIM) Implementation in European Renovation Projects

Authors: Fatemeh Daneshvartarigh

Abstract:

New technologies and applications have radically altered construction techniques in recent years. In order to anticipate how a building will act, perform, and appear, these technologies encompass a wide range of visualization, simulation, and analytic tools, and they have a considerable impact on completing construction projects in today's architecture, engineering and construction (AEC) industries. The rate of change in BIM-related topics differs worldwide and depends on many factors, e.g., the national policies of each country. There is therefore a need for comprehensive research focused on a specific area with common characteristics, and one of the necessary measures to increase the use of this new approach is to examine the challenges and obstacles facing it. In this research, based on the Delphi method, the background and related literature are first reviewed. Then, using the knowledge obtained from the literature, a primary questionnaire is generated and filled in by experts selected using snowball sampling. It covers the experts' attitudes towards implementing BIM in renovation projects and their view of the benefits and obstacles in this regard. By analyzing the primary questionnaire, a second group of experts is selected among the participants to be interviewed. The results are analyzed using thematic analysis. Six themes are obtained: management support, staff resistance, client willingness, cost of software and implementation, difficulty of implementation, and other reasons. A final questionnaire is then generated from the themes and filled in by the same group of experts. The result is analyzed by the fuzzy Delphi method, giving an exact ranking of the obtained themes. The final results show that management support, staff resistance, and client willingness are the most critical barriers to BIM usage in renovation projects.
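
A minimal sketch of the fuzzy Delphi ranking step is given below, assuming a triangular-fuzzy-number linguistic scale, simple (min, mean, max) aggregation and centroid defuzzification; the scale and the expert ratings are hypothetical, not the study's questionnaire data.

```python
# Hypothetical fuzzy Delphi ranking: expert ratings -> triangular fuzzy numbers
# -> aggregation -> defuzzified crisp score used to rank the barriers.
import numpy as np

# linguistic scale -> triangular fuzzy number (l, m, u)
scale = {1: (0.0, 0.0, 0.25), 2: (0.0, 0.25, 0.5), 3: (0.25, 0.5, 0.75),
         4: (0.5, 0.75, 1.0), 5: (0.75, 1.0, 1.0)}

ratings = {  # expert ratings per theme (hypothetical)
    "management support": [5, 4, 5, 4],
    "staff resistance":   [4, 4, 5, 3],
    "client willingness": [4, 3, 4, 4],
    "cost of software":   [3, 3, 4, 2],
}

def fuzzy_delphi_score(votes):
    tfns = np.array([scale[v] for v in votes], dtype=float)
    l, m, u = tfns[:, 0].min(), tfns[:, 1].mean(), tfns[:, 2].max()  # (min, mean, max) aggregation
    return (l + m + u) / 3.0                                         # centroid defuzzification

for theme, votes in sorted(ratings.items(), key=lambda kv: -fuzzy_delphi_score(kv[1])):
    print(f"{theme:20s} score = {fuzzy_delphi_score(votes):.3f}")
```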

Keywords: building information modeling, BIM, BIM implementation, BIM barriers, BIM in renovation

Procedia PDF Downloads 170
1122 High-Resolution Flood Hazard Mapping Using Two-Dimensional Hydrodynamic Model Anuga: Case Study of Jakarta, Indonesia

Authors: Hengki Eko Putra, Dennish Ari Putro, Tri Wahyu Hadi, Edi Riawan, Junnaedhi Dewa Gede, Aditia Rojali, Fariza Dian Prasetyo, Yudhistira Satya Pribadi, Dita Fatria Andarini, Mila Khaerunisa, Raditya Hanung Prakoswa

Abstract:

Catastrophe risk management can only be done if we are able to calculate the exposed risks. Jakarta is an important city economically, socially, and politically and is at the same time exposed to severe floods; on the other hand, flood risk calculation is still very limited in the area. This study calculates the risk of flooding for Jakarta using the two-dimensional model ANUGA. The two-dimensional model ANUGA and the one-dimensional model HEC-RAS are used to calculate the risk of flooding from 13 major rivers in Jakarta. ANUGA simulates the physical and dynamical interaction of streamflow with river geometry and land cover to produce a 1-meter-resolution inundation map. The streamflow input for the model was obtained from a hydrological analysis of rainfall data using the hydrologic model HEC-HMS. Probabilistic streamflow was derived from probabilistic rainfall using the Log-Pearson III, Normal and Gumbel distributions, with goodness of fit checked using the Chi-Square and Kolmogorov-Smirnov tests. The 2007 flood event is used as a comparison to evaluate the accuracy of the model output. Property damage estimates were calculated based on flood depth for the 1-, 5-, 10-, 25-, 50-, and 100-year return periods against housing value data from BPS-Statistics Indonesia and the Centre for Research and Development of Housing and Settlements, Ministry of Public Works Indonesia. The vulnerability factor was derived from flood insurance claims. Jakarta's flood loss estimates for the return periods of 1, 5, 10, 25, 50, and 100 years are, respectively, Rp 1.30 t; Rp 16.18 t; Rp 16.85 t; Rp 21.21 t; Rp 24.32 t; and Rp 24.67 t, against a total building value of Rp 434.43 t.
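
A minimal sketch of the frequency-analysis step (fitting the Normal, Gumbel and Log-Pearson III distributions and checking the fit) is shown below using SciPy on a synthetic annual-maximum series; the Jakarta rainfall record and the HEC-HMS modeling are not reproduced.

```python
# Fit candidate distributions to synthetic annual maxima and check the fit with
# the Kolmogorov-Smirnov test; then read off a design value for a return period.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
annual_max = rng.gumbel(loc=120.0, scale=30.0, size=40)   # synthetic annual-max rainfall, mm

fits = {
    "Normal": (stats.norm, annual_max),
    "Gumbel": (stats.gumbel_r, annual_max),
    "Log-Pearson III": (stats.pearson3, np.log10(annual_max)),  # Pearson III on log10 data
}
for name, (dist, data) in fits.items():
    params = dist.fit(data)
    ks = stats.kstest(data, dist.cdf, args=params)
    print(f"{name:16s} KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")

# Design rainfall for a 100-year return period from the Gumbel fit (illustrative only)
p = 1.0 - 1.0 / 100.0
params = stats.gumbel_r.fit(annual_max)
print("100-year rainfall (Gumbel):", stats.gumbel_r.ppf(p, *params))
```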

Keywords: 2D hydrodynamic model, ANUGA, flood, flood modeling

Procedia PDF Downloads 278
1121 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data

Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar

Abstract:

It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. However, better predictions remain the foundation of all science, and therefore the development of accurate, robust and reliable forecasting methods is very important. Numerous forecasting methods have been proposed and studied in the literature. Two major approaches still dominate: Box-Jenkins ARIMA and exponential smoothing (ES), and new methods are still derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing is still one of the most practically relevant forecasting methods available due to its simplicity, robustness and accuracy as an automatic forecasting procedure, especially in the famous M-Competitions. Despite its success and widespread use in many areas, ES models have some shortcomings that negatively affect the accuracy of forecasts. This study therefore proposes a new forecasting method, called the ATA method, to cope with these shortcomings. The new method is obtained from traditional ES models by modifying the smoothing parameters; both methods therefore have similar structural forms, and ATA can easily be adapted to all of the individual ES models, yet ATA has many advantages due to its innovative new weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns by utilizing classical decomposition. The ATA method is therefore expanded to higher-order ES methods for additive, multiplicative, additive damped and multiplicative damped trend components. The proposed models are called ATA trended models, and their predictive performances are compared to their counterpart ES models on the M3-competition data set, since it is still the most recent and comprehensive time-series data collection available. It is shown that the models outperform their counterparts in almost all settings, and when a model selection is carried out amongst these trended models, ATA outperforms all of the competitors in the M3-competition for both short-term and long-term forecasting horizons when the models' forecasting accuracies are compared based on popular error metrics.
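
As an illustration of the modified weighting scheme, the sketch below contrasts simple exponential smoothing (constant weight alpha) with a level-only ATA-style recursion in which the weight on the newest observation is p/t. It is a sketch of the basic idea only, under that assumed recursion, not the trended and seasonal ATA models evaluated on the M3 data.

```python
# Simple exponential smoothing vs. a level-only ATA-style recursion with a
# time-varying weight p/t (assumed form for illustration).
def ses(y, alpha):
    s = y[0]
    for t in range(1, len(y)):
        s = alpha * y[t] + (1.0 - alpha) * s
    return s  # one-step-ahead forecast

def ata_level(y, p):
    s = y[0]
    for t in range(2, len(y) + 1):          # t is the 1-based time index
        w = min(p / t, 1.0)                  # time-varying weight p/t
        s = w * y[t - 1] + (1.0 - w) * s
    return s

series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
print("SES forecast :", ses(series, alpha=0.3))
print("ATA forecast :", ata_level(series, p=3))
```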

Keywords: accuracy, exponential smoothing, forecasting, initial value

Procedia PDF Downloads 179
1120 Modeling of Glycine Transporters in Mammals Using the Probability Approach

Authors: K. S. Zaytsev, Y. R. Nartsissov

Abstract:

Glycine is one of the key inhibitory neurotransmitters in the central nervous system (CNS), and glycinergic transmission is highly dependent on its appropriate reuptake from the synaptic cleft. Glycine transporters (GlyT) of types 1 and 2 are the enzymes providing glycine transport back to neuronal and glial cells along with Na⁺ and Cl⁻ co-transport. The distribution and stoichiometry of GlyT1 and GlyT2 differ in details, and GlyT2 is the more interesting for this research as it reuptakes glycine into neurons, whereas GlyT1 is located in glial cells. During GlyT2 activity, translocation of the amino acid is accompanied by the sequential binding of one chloride and three sodium ions (two sodium ions for GlyT1). In the present study, we developed a computer simulator of GlyT2 and GlyT1 activity, based on known experimental data, for quantitative estimation of membrane glycine transport. The functioning of a single protein was described using a probability approach in which each enzyme state is considered separately. The resulting scheme of transporter functioning, realized as a sequence of elementary steps, takes into account each event of substrate association and dissociation. Computer experiments using up-to-date kinetic parameters yield the number of translocated glycine molecules and Na⁺ and Cl⁻ ions per time period. The flexibility of the developed software makes it possible to evaluate the glycine reuptake pattern over time under different internal characteristics of the enzyme's conformational transitions. We investigated the behavior of the system over a wide range of the equilibrium constant (from 0.2 to 100), which has not been determined experimentally. A significant influence of the equilibrium constant on the glycine transfer process is shown in the range from 0.2 to 10. Environmental conditions such as ion and glycine concentrations are decisive if the values of the constant lie outside the specified range.
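
A minimal sketch of the probability-based simulation idea is given below: the transporter is reduced to a handful of discrete states, each elementary association, dissociation or translocation step fires stochastically, and completed cycles are counted as translocated glycine molecules. The states and rate constants are hypothetical placeholders, not the kinetic parameters used in the study.

```python
# Stochastic (Gillespie-type) simulation of a simplified transporter cycle.
import random

# states: 0 empty, 1 ions bound, 2 glycine bound, 3 translocated (cycle complete)
rates = {(0, 1): 50.0, (1, 0): 10.0, (1, 2): 30.0, (2, 1): 5.0, (2, 3): 20.0}  # s^-1, hypothetical

def simulate(t_end=1.0, seed=0):
    random.seed(seed)
    t, state, translocated = 0.0, 0, 0
    while t < t_end:
        steps = [(nxt, k) for (cur, nxt), k in rates.items() if cur == state]
        total = sum(k for _, k in steps)
        t += random.expovariate(total)                 # waiting time to the next event
        r, acc = random.uniform(0.0, total), 0.0
        for nxt, k in steps:
            acc += k
            if r <= acc:
                state = nxt
                break
        if state == 3:                                 # glycine released inside; reset the cycle
            translocated += 1
            state = 0
    return translocated

print("glycine molecules translocated in 1 s:", simulate())
```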

Keywords: glycine, inhibitory neurotransmitters, probability approach, single protein functioning

Procedia PDF Downloads 123
1119 Systematic Review of Associations between Interoception, Vagal Tone, and Emotional Regulation

Authors: Darren Edwards, Thomas Pinna

Abstract:

Background: Interoception and heart rate variability have been found to predict outcomes of mental health and well-being. However, these have usually been investigated independently of one another. Objectives: This review aimed to explore the associations of interoception and heart rate variability (HRV) with emotion regulation (ER) and ER strategies within the existing literature, utilizing systematic review methodology. Methods: The process of article retrieval and selection followed the preferred reporting items for systematic review and meta-analyses (PRISMA) guidelines. The databases PsychINFO, Web of Science, PubMed, CINAHL, and MEDLINE were searched for published papers. Preliminary inclusion and exclusion criteria were specified following the patient, intervention, comparison, and outcome (PICO) framework, whilst the checklist for critical appraisal and data extraction for systematic reviews of prediction modeling studies (CHARMS) framework was used to help formulate the research question and to critically assess for bias in the identified full-length articles. Results: 237 studies were identified after initial database searches. Of these, eight studies were included in the final selection. Six studies explored the associations between HRV and ER, whilst three investigated the associations between interoception and ER (one of which was included in the HRV selection too). Overall, the results seem to show that greater HRV and interoception are associated with better ER. Specifically, high parasympathetic activity largely predicted the use of adaptive ER strategies such as reappraisal, and better acceptance of emotions. High interoception, instead, was predictive of effective down-regulation of negative emotions and handling of social uncertainty, although there was no association with any specific ER strategy. Conclusions: Awareness of one’s own bodily feelings and vagal activation seem to be of central importance for the effective regulation of emotional responses.

Keywords: emotional regulation, vagal tone, interoception, chronic conditions, health and well-being, psychological flexibility

Procedia PDF Downloads 117
1118 Analyzing Nonsimilar Convective Heat Transfer in Copper/Alumina Nanofluid with Magnetic Field and Thermal Radiations

Authors: Abdulmohsen Alruwaili

Abstract:

A partial differential system featuring momentum and energy balance is often used to describe simulations of flow initiation and thermal shifting in boundary layers. The buoyancy force, in terms of temperature, is factored into the momentum balance equation. The buoyancy force causes the flow quantity to fluctuate along the streamwise direction X; therefore, the problem can, to the best of our knowledge, be analyzed through nonsimilar modeling. In this analysis, a nonsimilar model is developed for radiative mixed convection of a magnetized power-law nanoliquid flow over a vertical plate installed in a stationary fluid, with the flow initiated by upward linear stretching in the vertical direction. The nanofluid is assumed to be a composite of copper (Cu) and alumina (Al₂O₃) nanoparticles, and viscous dissipation is taken to be negligible. The nonsimilar system is treated by the local nonsimilarity (LNS) method and solved with the numerical algorithm bvp4c. The surface temperature and flow field are shown visually in relation to factors such as mixed convection, magnetic field strength, nanoparticle volume fraction, radiation parameters, and Prandtl number. The effects of the magnetic and mixed convection parameters on the rate of energy transfer and the friction coefficient are presented in tabular form. The results obtained are compared with the published literature. It is found that the presence of nanoparticles significantly improves the temperature profile of the considered nanoliquid. It is also observed that as the magnetic parameter increases, the velocity profile decreases, whereas enhancement of the nanoparticle concentration and mixed convection parameter improves the velocity profile.
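
As an illustration of the class of solver used (MATLAB's bvp4c), the sketch below applies SciPy's solve_bvp, a bvp4c analogue, to the classical Blasius boundary-layer equation as a stand-in problem; it is not the magnetized power-law nanofluid system treated in the paper.

```python
# Solve the Blasius equation f''' + 0.5 f f'' = 0 with f(0)=f'(0)=0, f'(inf)->1,
# as a stand-in boundary-layer problem for a bvp4c-style solver.
import numpy as np
from scipy.integrate import solve_bvp

def rhs(eta, y):
    f, fp, fpp = y
    return np.vstack([fp, fpp, -0.5 * f * fpp])

def bc(ya, yb):
    return np.array([ya[0], ya[1], yb[1] - 1.0])

eta = np.linspace(0.0, 10.0, 100)
y0 = np.zeros((3, eta.size))
y0[1] = eta / eta[-1]                     # rough initial guess for f'
sol = solve_bvp(rhs, bc, eta, y0)
print("converged:", sol.status == 0, " wall shear f''(0) =", sol.y[2, 0])
```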

Keywords: nanofluid, power law model, mixed convection, thermal radiation

Procedia PDF Downloads 40
1117 Weakly Solving Kalah Game Using Artificial Intelligence and Game Theory

Authors: Hiba El Assibi

Abstract:

This study aims to weakly solve Kalah, a two-player board game, by developing a start-to-finish winning strategy using an optimized Minimax algorithm with alpha-beta pruning. In weakly solving Kalah, our focus is on creating an optimal strategy from the game's beginning rather than analyzing every possible position. The project explores additional enhancements such as symmetry checking and code optimizations to speed up the decision-making process. This approach is expected to give insights into efficient strategy formulation in board games and potentially help create games with a fair distribution of outcomes. Furthermore, this research provides a unique perspective on human versus artificial intelligence decision-making in strategic games. By comparing the AI-generated optimal moves with human choices, we can explore how seemingly advantageous moves can, in the long run, be harmful, thereby offering a deeper understanding of strategic thinking and foresight in games. Moreover, this paper discusses the evaluation of our strategy against existing methods, providing insights into performance and computational efficiency. We also discuss the scalability of our approach to the game, considering different board sizes (number of pits and stones) and rules (different variations) and studying how these affect performance and complexity. The findings have potential implications for the development of AI applications in strategic game planning, enhance our understanding of human cognitive processes in game settings, and offer insights into creating balanced and engaging game experiences.
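
A minimal, runnable sketch of minimax with alpha-beta pruning is given below; for brevity it plays a simple subtraction game (take 1-3 stones, the player who takes the last stone wins) rather than Kalah, whose sowing and capture rules would require a much larger move generator.

```python
# Minimax with alpha-beta pruning on a toy subtraction game.
def alphabeta(stones, maximizing, alpha=float("-inf"), beta=float("inf")):
    if stones == 0:
        # the previous player took the last stone, so the side to move has lost
        return -1.0 if maximizing else 1.0
    best = float("-inf") if maximizing else float("inf")
    for take in (1, 2, 3):
        if take > stones:
            break
        value = alphabeta(stones - take, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if alpha >= beta:   # prune: the opponent will never allow this branch
            break
    return best

# Positions that are multiples of 4 are losses for the side to move.
for n in range(1, 9):
    print(n, "win" if alphabeta(n, True) > 0 else "loss", "for the side to move")
```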

Keywords: minimax, alpha beta pruning, transposition tables, weakly solving, game theory

Procedia PDF Downloads 58
1116 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations

Authors: Karthikeyan Kalirajan, Ashok Joshi

Abstract:

An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called vector guidance. This law has two gains, which are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3-DOF non-rotating flat-earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that result in a near-optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain value is related through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the vector guidance law is presented in this paper.
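
A minimal sketch of the parametric study is shown below: the two gains are swept over a grid, a trajectory simulation returns the miss distance and impact-angle error, and the gain pairs satisfying both constraints define the permissible region. The simulate function here is a purely illustrative surrogate, not the 3-DOF MaRV model.

```python
# Grid sweep over two guidance gains with a placeholder trajectory simulation.
import numpy as np

def simulate(k1, k2):
    # Placeholder for the 3-DOF MaRV simulation with vector guidance;
    # returns (miss distance [m], impact-angle error [deg]). Illustrative surrogate only.
    miss = 50.0 * (k1 - 2.0) ** 2 + 30.0 * (k2 - 4.0) ** 2
    angle_err = abs(2.0 * k1 - k2)
    return miss, angle_err

k1_grid = np.linspace(0.5, 4.0, 36)
k2_grid = np.linspace(1.0, 8.0, 36)

feasible = []
for k1 in k1_grid:
    for k2 in k2_grid:
        miss, angle_err = simulate(k1, k2)
        if miss < 10.0 and angle_err < 0.5:   # constraints on miss distance and impact angle
            feasible.append((k1, k2))

print(f"{len(feasible)} gain pairs satisfy both constraints")
if feasible:
    k1s, k2s = zip(*feasible)
    print(f"permissible region: k1 in [{min(k1s):.2f}, {max(k1s):.2f}], "
          f"k2 in [{min(k2s):.2f}, {max(k2s):.2f}]")
```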

Keywords: Marv guidance, reentry trajectory, trajectory optimization, guidance gain selection

Procedia PDF Downloads 430
1115 A Rationale to Describe Ambident Reactivity

Authors: David Ryan, Martin Breugst, Turlough Downes, Peter A. Byrne, Gerard P. McGlacken

Abstract:

An ambident nucleophile is a nucleophile that possesses two or more distinct nucleophilic sites that are linked through resonance and are effectively “in competition” for reaction with an electrophile. Examples include enolates, pyridone anions, and nitrite anions, among many others. Reactions of ambident nucleophiles and electrophiles are extremely prevalent at all levels of organic synthesis. The principle of hard and soft acids and bases (the “HSAB principle”) is most commonly cited in the explanation of selectivities in such reactions. Although this rationale is pervasive in any discussion on ambident reactivity, the HSAB principle has received considerable criticism. As a result, the principle’s supplantation has become an area of active interest in recent years. This project focuses on developing a model for rationalizing ambident reactivity. Presented here is an approach that incorporates computational calculations and experimental kinetic data to construct Gibbs energy profile diagrams. The preferred site of alkylation of nitrite anion with a range of ‘hard’ and ‘soft’ alkylating agents was established by ¹H NMR spectroscopy. Pseudo-first-order rate constants were measured directly by ¹H NMR reaction monitoring, and the corresponding second-order constants and Gibbs energies of activation were derived. These, in combination with computationally derived standard Gibbs energies of reaction, were sufficient to construct Gibbs energy wells. By representing the ambident system as a series of overlapping Gibbs energy wells, a more intuitive picture of ambident reactivity emerges. Here, previously unexplained switches in reactivity in reactions involving closely related electrophiles are elucidated.
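
The conversion from a measured rate constant to a Gibbs energy of activation, which underpins the Gibbs energy wells described above, follows from the Eyring equation, k = (k_B T/h) exp(-dG_act/(R T)), i.e. dG_act = R T ln(k_B T/(h k)). The sketch below evaluates this relation for an arbitrary example rate constant, not one of the measured values.

```python
# Gibbs energy of activation from a rate constant via the Eyring equation.
import math

k_B = 1.380649e-23      # J/K
h   = 6.62607015e-34    # J s
R   = 8.314462618       # J/(mol K)

def gibbs_activation(k, T=298.15):
    """dG_act in kJ/mol from a rate constant k (s^-1, or M^-1 s^-1 at a 1 M standard state)."""
    return R * T * math.log(k_B * T / (h * k)) / 1000.0

print(f"dG_act = {gibbs_activation(1.0e-3):.1f} kJ/mol for k = 1.0e-3")
```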

Keywords: ambident, Gibbs, nucleophile, rates

Procedia PDF Downloads 90
1114 Numerical Investigation of Multiphase Flow in Pipelines

Authors: Gozel Judakova, Markus Bause

Abstract:

We present and analyze reliable numerical techniques for simulating complex flow and transport phenomena related to natural gas transportation in pipelines. Such problems are of high interest in the fields of petroleum and environmental engineering. Modeling and understanding natural gas flow and transformation processes during transportation is important for the sake of physical realism and for the design and operation of pipeline systems. In our approach, a two-fluid flow model based on a system of coupled hyperbolic conservation laws is considered for describing natural gas flow undergoing hydratization. The accurate numerical approximation of two-phase gas flow remains a subject of strong interest in the scientific community. Such hyperbolic problems are characterized by solutions with steep gradients or discontinuities, and their approximation by standard finite element techniques typically gives rise to spurious oscillations and numerical artefacts. Recently, stabilized and discontinuous Galerkin finite element techniques have attracted researchers' interest; they are highly adapted to the hyperbolic nature of our two-phase flow model. We present a streamline-upwind Petrov-Galerkin approach and a discontinuous Galerkin finite element method for the numerical approximation of our flow model of two coupled systems of Euler equations. The efficiency and reliability of stabilized continuous and discontinuous finite element methods for the approximation are then carefully analyzed, and the potential of either class of numerical schemes is investigated. In particular, standard benchmark problems of two-phase flow, such as the shock tube problem, are used for the comparative numerical study.
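
The full SUPG/DG discretization of the coupled Euler systems is beyond a short example, but the need for stabilization can be illustrated on one-dimensional linear advection of a discontinuity: an unstabilized central scheme develops spurious oscillations, while a simple upwind scheme remains monotone. The sketch below is this illustrative comparison only, not the two-fluid pipeline model.

```python
# 1-D linear advection of a step: central differencing vs. first-order upwind.
import numpy as np

nx, c = 200, 1.0
dx = 1.0 / nx
dt, nsteps = 0.4 * dx / c, 100
x = np.linspace(0.0, 1.0, nx)
u0 = np.where(x < 0.3, 1.0, 0.0)              # step (shock-tube-like) initial data

u_central, u_upwind = u0.copy(), u0.copy()
for _ in range(nsteps):
    u_central[1:-1] -= c * dt / (2 * dx) * (u_central[2:] - u_central[:-2])  # unstabilized
    u_upwind[1:] -= c * dt / dx * (u_upwind[1:] - u_upwind[:-1])             # upwind, monotone

print("central scheme overshoot :", u_central.max() - 1.0)   # large: spurious oscillations
print("upwind  scheme overshoot :", u_upwind.max() - 1.0)    # ~0: no oscillations
```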

Keywords: discontinuous Galerkin method, Euler system, inviscid two-fluid model, streamline upwind Petrov-Galerkin method, two-phase flow

Procedia PDF Downloads 334
1113 Pareto System of Optimal Placement and Sizing of Distributed Generation in Radial Distribution Networks Using Particle Swarm Optimization

Authors: Sani M. Lawal, Idris Musa, Aliyu D. Usman

Abstract:

The Pareto approach to optimal solutions, which evolved for multi-objective optimization problems and yields a set of solutions in the search space, is adopted in this paper. The paper presents optimal placement of Distributed Generation (DG) in radial distribution networks, with optimal sizing, for minimization of power loss and voltage deviation as well as maximization of the voltage profile of the networks. These problems are formulated as a constrained nonlinear optimization problem, solved using particle swarm optimization (PSO), with both the locations and sizes of DG treated as continuous. The objective functions adopted are the total active power loss function and the voltage deviation function. The multi-objective nature of the problem made it necessary to form a combined objective function whose solution consists of both the DG location and size. The proposed PSO algorithm is used to determine the optimal placement and size of DG in a distribution network. The output indicates that the PSO technique has an edge over other types of search methods due to its effectiveness and computational efficiency. The proposed method is tested on the standard IEEE 34-bus distribution network and validated on the 33-bus test system. Results indicate that the sizing and location of DG are system dependent and should be optimally selected before installing distributed generators in the system, and that an improvement in the voltage profile and a reduction in power loss have been achieved.
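
A minimal particle swarm optimization sketch is given below for a two-variable placement problem (continuous DG location and size); the objective is an arbitrary smooth surrogate standing in for the load-flow-based power-loss and voltage-deviation evaluation on the IEEE test feeders.

```python
# Basic PSO minimizing a surrogate "loss" over continuous DG location and size.
import numpy as np

def objective(x):
    loc, size = x                      # DG location (continuous bus position) and size (MW)
    return (loc - 20.0) ** 2 / 100.0 + (size - 1.5) ** 2   # placeholder objective

rng = np.random.default_rng(42)
n_particles, n_iter = 30, 100
lower, upper = np.array([1.0, 0.1]), np.array([34.0, 3.0])

pos = rng.uniform(lower, upper, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lower, upper)
    val = np.array([objective(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best DG location/size:", gbest, " objective:", pbest_val.min())
```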

Keywords: distributed generation, pareto, particle swarm optimization, power loss, voltage deviation

Procedia PDF Downloads 369
1112 Computational Investigation of Secondary Flow Losses in Linear Turbine Cascade by Modified Leading Edge Fence

Authors: K. N. Kiran, S. Anish

Abstract:

It is well known that secondary flow losses account for about one third of the total loss in any axial turbine. Modern gas turbine blades have smaller heights and longer chord lengths, which might lead to an increase in secondary flow. In order to improve the efficiency of the turbine, it is important to understand the behavior of secondary flow and devise mechanisms to curtail these losses. The objective of the present work is to understand the effect of a streamwise end-wall fence on the aerodynamics of a linear turbine cascade. The study is carried out computationally using the commercial software ANSYS CFX. The effects of the end-wall fence on the flow field are calculated from RANS simulations using the SST transition turbulence model. The Durham cascade, which is similar to a high-pressure axial flow turbine, is used for the simulation. The aim of fencing in the blade passage is to obtain the maximum benefit from flow deviation and from destroying the passage vortex in terms of loss reduction. It is observed that, for the present analysis, a fence in the blade passage helps reduce the strength of the horseshoe vortex and is capable of restraining the flow along the blade passage. The fence in the blade passage helps reduce the underturning by 70 in comparison with the base case. A fence on the end-wall is effective in preventing the movement of the pressure-side leg of the horseshoe vortex and helps break up the passage vortex. Computations are carried out for different fence heights whose curvature differs from the blade camber. The optimum fence geometry and location reduce the loss coefficient by 15.6% in comparison with the base case.

Keywords: boundary layer fence, horseshoe vortex, linear cascade, passage vortex, secondary flow

Procedia PDF Downloads 352
1111 Molecular Design and Synthesis of Heterocycles Based Anticancer Agents

Authors: Amna J. Ghith, Khaled Abu Zid, Khairia Youssef, Nasser Saad

Abstract:

Background: The multikinase and vascular endothelial growth factor (VEGF) receptor inhibitors interrupt the pathway by which angiogenesis becomes established and promulgated, resulting in the inadequate nourishment of metastatic disease. VEGFR-2 has been the principal target of anti-angiogenic therapies. We disclose new thieno pyrimidines as inhibitors of VEGFR-2, designed by a molecular modeling approach, with increased synergistic activity and decreased side effects. Purpose: 2-substituted thieno pyrimidines are designed and synthesized with anticipated anticancer activity based on an in silico molecular docking study that supports the initial pharmacophoric hypothesis, with the same binding mode of interaction at the ATP-binding site of VEGFR-2 (PDB 2QU5) and a high docking score. Methods: A series of compounds was designed using Discovery Studio 4.1/CDOCKER with a rationale that mimics the pharmacophoric features present in the reported active compounds targeting VEGFR-2. An in silico ADMET study was also performed to validate the bioavailability of the newly designed compounds. Results: The compounds to be synthesized showed interaction energies comparable to, or within the range of, the benzimidazole inhibitor ligand when docked with VEGFR-2. The ADMET study showed comparable results; most of the compounds showed absorption within the 95-99 zone, varying according to the different substituents attached to the thieno pyrimidine ring system. Conclusions: A series of 2-substituted thienopyrimidines is to be synthesized with anticipated anticancer activity, according to the docking study and the structural requirements for the design of VEGFR-2 inhibitors, which can act as powerful anticancer agents.

Keywords: docking, discovery studio 4.1/CDOCKER, heterocycles based anticancer agents, 2-substituted thienopyrimidines

Procedia PDF Downloads 248
1110 Tracing Digital Traces of Phatic Communion in #Mooc

Authors: Judith Enriquez-Gibson

Abstract:

This paper meddles with the notion of phatic communion introduced 90 years ago by Malinowski, a Polish-born British anthropologist. It explores the phatic in Twitter within the contents of tweets related to moocs (massive online open courses) as a topic or trend. It is not about moocs, though. It is about practices that could easily be hidden or neglected if we let big or massive topics take the lead, or if we simply follow the computational or secret codes behind Twitter itself and third-party software analytics. It draws from media and cultural studies. Though at first it appears data-driven, as I submitted data collection and analytics into the hands of a third-party software tool, Twitonomy, the aim is to follow how phatic communion might be practised in a social media site such as Twitter. Lurking becomes its research method to analyse mooc-related tweets. A total of 3,000 tweets were collected on 11 October 2013 (UK timezone). The emphasis of lurking is to engage with Twitter as a system of connectivity. One interesting finding is that a click is in fact a phatic practice. A click breaks the silence. A click on one of the mooc websites is actually a tweet. A tweet was posted on behalf of a user who simply chose to click, without formulating the text and perhaps without knowing that it contains #mooc. Surely, this mechanism is not about reciprocity. To break the silence, users did not use words. They just clicked the ‘tweet button’ on a mooc website. A click performs and maintains connectivity, with Twitter as the medium in attendance in our everyday, available when needed to be of service. In conclusion, the phatic culture of breaking silence in Twitter does not have to submit to the power of code and analytics. It is a matter of human code.

Keywords: click, Twitter, phatic communion, social media data, mooc

Procedia PDF Downloads 415
1109 Design, Modeling, Fabrication, and Testing of a Scaled down Hybrid Rocket Engine

Authors: Pawthawala Nancy Manish, Syed Alay Hashim

Abstract:

A hybrid rocket is a rocket engine which uses propellants in two different states of matter: one solid and the other either gas or liquid. A hybrid rocket exhibits advantages over both liquid rockets and solid rockets, especially in terms of simplicity, stop-start-restart capability, safety and cost. This paper deals with the design and development of a hybrid rocket having paraffin wax as the solid fuel and liquid oxygen as the oxidizer. Due to the variation of pressure in the combustion chamber, there are significant changes in mass flow rate and burning rate, and uneven regression along the length of the grain. This project describes the working model of a hybrid propellant rocket motor. We have designed a hybrid rocket thrust chamber based on the predetermined combustion chamber pressure and the properties of the hybrid propellant. The project is already in working condition with a normal oxygen injector. We now plan to modify the injector design to improve the combustion properties by using a spray-type injector for the oxidizer, which is expected to increase the performance and, in turn, the regression rate of the solid fuel. By employing the mass conservation law, the oxygen mass flux, the oxidizer/fuel ratio and the regression rate, the thrust coefficient can be obtained for our current design. CATIA V5 R20 is our design software for the complete setup. The project is fully based on experimental evaluation and the collection of combustion and flow parameters. The thrust chamber is made of stainless steel, and the duration of each test is around 15-20 seconds (maximum). These experiments indicate that paraffin-based fuel provides the opportunity to satisfy a broad range of mission requirements for the next generation of hybrid rocket systems.
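
A minimal sketch of the ballistic relations mentioned above is given below: the solid-fuel regression rate is commonly modeled as r = a * Gox^n, from which the fuel mass flow and the oxidizer-to-fuel ratio follow by mass conservation. The regression constants, density and port geometry are nominal placeholders, not the measured data for this motor.

```python
# Hybrid-rocket ballistic relations with placeholder values for a paraffin-type fuel.
import math

a, n = 0.1e-3, 0.62            # assumed regression constants (r in m/s for Gox in kg/m^2 s)
rho_fuel = 920.0               # paraffin density, kg/m^3 (nominal)
m_dot_ox = 0.05                # oxidizer mass flow, kg/s (placeholder)
port_d, grain_len = 0.03, 0.20 # port diameter and grain length, m (placeholders)

A_port = math.pi * port_d**2 / 4.0
Gox = m_dot_ox / A_port                                      # oxidizer mass flux, kg/(m^2 s)
r = a * Gox**n                                               # regression rate, m/s
m_dot_fuel = rho_fuel * r * math.pi * port_d * grain_len     # fuel mass flow by mass conservation
print(f"Gox = {Gox:.1f} kg/m2s, r = {r*1000:.2f} mm/s, O/F = {m_dot_ox / m_dot_fuel:.2f}")
```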

Keywords: burning rate, liquid oxygen, mass flow rate, paraffin wax and sugar

Procedia PDF Downloads 339
1108 Model-Based Approach as Support for Product Industrialization: Application to an Optical Sensor

Authors: Frederic Schenker, Jonathan J. Hendriks, Gianluca Nicchiotti

Abstract:

From a product industrialization perspective, the end product should always be at the peak of technological advancement and developed in the shortest time possible. The constant growth of complexity and a shorter time-to-market thus call for important changes on both the technical and business levels. Undeniably, the common understanding of the system is beclouded by its complexity, which leads to a communication gap between the engineers and the sales department. This communication link is therefore important to maintain and strengthen in order to increase the information exchange between departments and ensure a punctual and flawless delivery to the end customer. This evolution brings engineers to reason with more hindsight and to plan ahead. In this sense, they use new viewpoints to represent the data and to express the model deliverables in an understandable way so that the different stakeholders may identify their needs and ideas. This article focuses on the usage of Model-Based Systems Engineering (MBSE) from a system industrialization perspective and on reconnecting engineering with the sales team. The modeling method used and presented in this paper concentrates on displaying the needs of the customer as closely as possible. Firstly, it provides a technical solution to the sales team to help them elaborate commercial offers without omitting technicalities. Secondly, the model simulates a vast number of possibilities across a wide range of components, becoming a dynamic tool for powerful analysis and optimization. The model is thus no longer only a technical tool for the engineers, but a way to maintain and solidify the communication between departments using different views of the model. The MBSE contribution to cost optimization during New Product Introduction (NPI) activities is made explicit through a case study describing the support provided by system models to architectural choices during the industrialization of a novel optical sensor.

Keywords: analytical model, architecture comparison, MBSE, product industrialization, SysML, system thinking

Procedia PDF Downloads 165
1107 Combustion Characteristics of Wet Woody Biomass in a Grate Furnace: Including Measurements within the Bed

Authors: Narges Razmjoo, Hamid Sefidari, Michael Strand

Abstract:

Biomass combustion is a growing technique for heat and power production due to increasingly stringent regulations on CO2 emissions. Grate-fired systems have been regarded as a common and popular combustion technology for burning woody biomass. However, some grate furnaces are not well optimized and may emit significant amounts of unwanted compounds such as dust, NOx, CO, and unburned gaseous components. The combustion characteristics inside the fuel bed are of practical interest, as they are directly related to the release of volatiles and affect the stability and efficiency of the fuel bed combustion. Although numerous studies have been presented on the grate firing of biomass, to the authors' knowledge, none of them has conducted a detailed experimental study within the fuel bed. It is difficult to conduct measurements of temperature and gas species inside the burning bed of fuel in full-scale boilers, and results from such in-bed measurements can also be used by numerical experts for modeling fuel bed combustion. The current work presents an experimental investigation into the combustion behavior of wet woody biomass (53 %) in a 4 MW reciprocating grate boiler, focusing on the gas species distribution along the height of the fuel bed. The local concentrations of gases (CO, CO2, CH4, NO, and O2) inside the fuel bed were measured through a glass port situated on the side wall of the furnace. The measurements were carried out at five different heights of the fuel bed by means of a bent stainless steel probe containing a type-K thermocouple. The sample gas extracted from the fuel bed through the probe was filtered, dried and then analyzed using two infrared spectrometers. Temperatures of about 200-1100 °C were measured close to the grate, indicating that char combustion occurs at the bottom of the fuel bed and propagates upward. The CO and CO2 concentrations varied in the ranges 15-35 vol % and 3-16 vol %, respectively, and the NO concentration varied between 10 and 140 ppm. The profile of the gas concentration distribution along the bed height provides a good overview of the combustion sub-processes in the fuel bed.

Keywords: experimental, fuel bed, grate firing, wood combustion

Procedia PDF Downloads 329
1106 Variant Selection and Pre-transformation Phase Reconstruction for Deformation-Induced Transformation in AISI 304 Austenitic Stainless Steel

Authors: Manendra Singh Parihar, Sandip Ghosh Chowdhury

Abstract:

Austenitic stainless steels are widely used and give a good combination of properties. When this steel is plastically deformed, a phase transformation of the metastable face-centred cubic (FCC) austenite to the stable body-centred cubic (α′) or the hexagonal close-packed (ε) martensite may occur, leading to enhancement of mechanical properties such as strength. The work was based on variant selection and the corresponding texture analysis for the strain-induced martensitic transformation during deformation of the parent austenite FCC phase to form the product HCP and BCC martensite phases separately, obeying their respective orientation relationships. The automated reconstruction of the parent phase orientation from the EBSD data of the product phase orientation is done using MATLAB and the TSL-OIM software. The method of triplets was used, which involves forming a triplet of neighboring product grains having a common variant and linking them using a misorientation-based criterion. This led to the proper reconstruction of the pre-transformation phase orientation data and thus of its microstructure and texture. The computational speed of the current method is better than that of previously used reconstruction methods. The reconstruction of austenite from ε and α′ martensite was carried out for multiple samples, and their IPF images, pole figures, inverse pole figures and ODFs were compared. Similar results were observed for all samples. The comparison gives an idea for estimating the correct sequence of the transformation, i.e., γ → ε → α′ or γ → α′, during deformation of AISI 304 austenitic stainless steel.

Keywords: variant selection, reconstruction, EBSD, austenitic stainless steel, martensitic transformation

Procedia PDF Downloads 492
1105 Comparative Study on Daily Discharge Estimation of Soolegan River

Authors: Redvan Ghasemlounia, Elham Ansari, Hikmet Kerem Cigizoglu

Abstract:

Hydrological modeling in arid and semi-arid regions is very important. Iran has many regions with these climate conditions, such as Chaharmahal and Bakhtiari province, that need considerable attention and appropriate management. Forecasting of hydrological parameters and estimation of hydrological events of catchments provide important information that is widely used for the design, management and operation of water resources such as river systems and dams. Discharge in rivers is one of these parameters. This study presents the application and comparison of several estimation methods, namely the Feed-Forward Back Propagation Neural Network (FFBPNN), Multi Linear Regression (MLR), Gene Expression Programming (GEP) and Bayesian Network (BN), to predict the daily flow discharge of the Soolegan River, located in Chaharmahal and Bakhtiari province, Iran. In this study, the Soolegan station was considered. This station is located on the Soolegan River at latitude 31° 38′ and longitude 51° 14′ in the North Karoon basin, at 2086 meters above sea level. The data used in this study are the daily discharge and daily precipitation at the Soolegan station. FFBPNN, MLR, GEP and BN models were developed using the same input parameters for Soolegan's daily discharge estimation. The estimates of the models were compared with observed discharge values to evaluate the performance of the developed models. Results of all methods were compared and are shown in tables and charts.
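
A minimal sketch of the multi-linear-regression (MLR) baseline is given below, predicting today's discharge from lagged precipitation and the previous day's discharge; the series is synthetic, standing in for the Soolegan station record.

```python
# MLR discharge estimation on a synthetic rainfall-runoff series.
import numpy as np

rng = np.random.default_rng(7)
days = 400
precip = rng.gamma(shape=0.6, scale=5.0, size=days)              # synthetic daily rainfall, mm
discharge = np.zeros(days)
for t in range(1, days):                                         # synthetic catchment response
    discharge[t] = 0.85 * discharge[t - 1] + 0.4 * precip[t] + rng.normal(0, 0.5)

# design matrix: [1, P(t), P(t-1), Q(t-1)] -> Q(t)
X = np.column_stack([np.ones(days - 2), precip[2:], precip[1:-1], discharge[1:-1]])
y = discharge[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print("MLR coefficients:", np.round(coef, 3), " RMSE:", round(rmse, 3))
```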

Keywords: ANN, multi linear regression, Bayesian network, forecasting, discharge, gene expression programming

Procedia PDF Downloads 563
1104 Multi-scale Spatial and Unified Temporal Feature-fusion Network for Multivariate Time Series Anomaly Detection

Authors: Hang Yang, Jichao Li, Kewei Yang, Tianyang Lei

Abstract:

Multivariate time series anomaly detection is a significant research topic in the field of data mining, encompassing a wide range of applications across various industrial sectors such as traffic roads, financial logistics, and corporate production. The inherent spatial dependencies and temporal characteristics present in multivariate time series introduce challenges to the anomaly detection task. Previous studies have typically been based on the assumption that all variables belong to the same spatial hierarchy, neglecting the multi-level spatial relationships. To address this challenge, this paper proposes a multi-scale spatial and unified temporal feature fusion network, denoted as MSUT-Net, for multivariate time series anomaly detection. The proposed model employs a multi-level modeling approach, incorporating both temporal and spatial modules. The spatial module is designed to capture the spatial characteristics of multivariate time series data, utilizing an adaptive graph structure learning model to identify the multi-level spatial relationships between data variables and their attributes. The temporal module consists of a unified temporal processing module, which is tasked with capturing the temporal features of multivariate time series. This module is capable of simultaneously identifying temporal dependencies among different variables. Extensive testing on multiple publicly available datasets confirms that MSUT-Net achieves superior performance on the majority of datasets. Our method is able to model and accurately detect systems data with multi-level spatial relationships from a spatial-temporal perspective, providing a novel perspective for anomaly detection analysis.

Keywords: data mining, industrial system, multivariate time series, anomaly detection

Procedia PDF Downloads 20
1103 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms, namely linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks, for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy metrics. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving an accuracy score of 0.93. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
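
A minimal sketch of the featurization and training loop is given below: MACCS keys are generated with RDKit and fed to a random forest regressor. The SMILES/solubility pairs are a tiny placeholder set rather than AqSolDB, and the 20 RDKit descriptors and 3 structural properties used in the study are omitted.

```python
# MACCS-key featurization + random forest on placeholder SMILES/logS pairs.
import numpy as np
from rdkit import Chem
from rdkit.Chem import MACCSkeys
from sklearn.ensemble import RandomForestRegressor

data = [("CCO", -0.77), ("c1ccccc1", -1.64), ("CC(=O)O", 0.0),
        ("CCCCCC", -3.84), ("O=C(O)c1ccccc1", -1.55)]           # placeholder (SMILES, logS)

def featurize(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return np.array(list(MACCSkeys.GenMACCSKeys(mol)))           # 167-bit vector (bit 0 unused)

X = np.vstack([featurize(s) for s, _ in data])
y = np.array([v for _, v in data])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("predicted logS for ethanol:", model.predict(featurize("CCO").reshape(1, -1))[0])
```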

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 45
1102 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracies and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes they are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed for combination, so that computational efficiency could be improved. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
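
A minimal sketch of the detection core is given below: the log-likelihood ratio between assumed pre- and post-change Gaussian distributions is accumulated with a CUSUM recursion and compared against a threshold; after the change, the expected per-sample drift of this statistic equals the KL divergence between the two distributions. The evidence-theory fusion across sensors (mass functions, combination rules, pignistic transform) is not reproduced here, and all parameters are illustrative.

```python
# CUSUM quickest change detection on a single synthetic sensor stream.
import numpy as np

rng = np.random.default_rng(3)
pre, post = (0.0, 1.0), (1.0, 1.0)                 # (mean, std) before/after the change
x = np.concatenate([rng.normal(*pre, 300), rng.normal(*post, 100)])  # true change at t = 300

def loglik(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

llr = loglik(x, post[0], post[1]) - loglik(x, pre[0], pre[1])
kl = (post[0] - pre[0]) ** 2 / (2 * pre[1] ** 2)   # KL divergence between the two Gaussians
print("KL divergence (expected post-change drift per sample):", kl)

cusum, stat, threshold = np.zeros_like(x), 0.0, 8.0
for t, l in enumerate(llr):
    stat = max(0.0, stat + l)                      # CUSUM recursion
    cusum[t] = stat
alarm = int(np.argmax(cusum > threshold)) if (cusum > threshold).any() else -1
print("change declared at sample:", alarm, "(true change at 300)")
```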

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data

Procedia PDF Downloads 337
1101 Next Generation UK Storm Surge Model for the Insurance Market: The London Case

Authors: Iacopo Carnacina, Mohammad Keshtpoor, Richard Yablonsky

Abstract:

Non-structural protection measures against flooding are becoming increasingly popular flood risk mitigation strategies. In particular, coastal flood insurance impacts not only private citizens but also insurance and reinsurance companies, who may require it to retain solvency and better understand the risks they face from a catastrophic coastal flood event. In this context, a framework is presented here to assess the risk for coastal flooding across the UK. The area has a long history of catastrophic flood events, including the Great Flood of 1953 and the 2013 Cyclone Xaver storm, both of which led to significant loss of life and property. The current framework will leverage a technology based on a hydrodynamic model (Delft3D Flexible Mesh). This flexible mesh technology, coupled with a calibration technique, allows for better utilisation of computational resources, leading to higher resolution and more detailed results. The generation of a stochastic set of extra tropical cyclone (ETC) events supports the evaluation of the financial losses for the whole area, also accounting for correlations between different locations in different scenarios. Finally, the solution shows a detailed analysis for the Thames River, leveraging the information available on flood barriers and levees. Two realistic disaster scenarios for the Greater London area are simulated: In the first scenario, the storm surge intensity is not high enough to fail London’s flood defences, but in the second scenario, London’s flood defences fail, highlighting the potential losses from a catastrophic coastal flood event.

Keywords: storm surge, stochastic model, levee failure, Thames River

Procedia PDF Downloads 236
1100 The Development of an Agent-Based Model to Support a Science-Based Evacuation and Shelter-in-Place Planning Process within the United States

Authors: Kyle Burke Pfeiffer, Carmella Burdi, Karen Marsh

Abstract:

The evacuation and shelter-in-place planning process employed by most jurisdictions within the United States is not informed by a scientifically-derived framework that is inclusive of the behavioral and policy-related indicators of public compliance with evacuation orders. While a significant body of work exists to define these indicators, the research findings have not been well-integrated nor translated into useable planning factors for public safety officials. Additionally, refinement of the planning factors alone is insufficient to support science-based evacuation planning as the behavioral elements of evacuees—even with consideration of policy-related indicators—must be examined in the context of specific regional transportation and shelter networks. To address this problem, the Federal Emergency Management Agency and Argonne National Laboratory developed an agent-based model to support regional analysis of zone-based evacuation in southeastern Georgia. In particular, this model allows public safety officials to analyze the consequences that a range of hazards may have upon a community, assess evacuation and shelter-in-place decisions in the context of specified evacuation and response plans, and predict outcomes based on community compliance with orders and the capacity of the regional (to include extra-jurisdictional) transportation and shelter networks. The intention is to use this model to aid evacuation planning and decision-making. Applications for the model include developing a science-driven risk communication strategy and, ultimately, in the case of evacuation, the shortest possible travel distance and clearance times for evacuees within the regional boundary conditions.

Keywords: agent-based modeling for evacuation, decision-support for evacuation planning, evacuation planning, human behavior in evacuation

Procedia PDF Downloads 240
1099 Combustion and Emissions Performance of Syngas Fuels Derived from Palm Kernel Shell and Polyethylene (PE) Waste via Catalytic Steam Gasification

Authors: Chaouki Ghenai

Abstract:

A computational fluid dynamics analysis of the burning of syngas fuels derived from a biomass and plastic solid waste mixture through a gasification process is presented in this paper. The syngas fuel is burned in a gas turbine can combustor. A gas turbine can combustor with swirl is designed to burn the fuel efficiently and reduce the emissions. The main objective is to test the impact of the alternative syngas fuel compositions and lower heating value on the combustion performance and emissions. The syngas fuel is produced by blending Palm Kernel Shell (PKS) with Polyethylene (PE) waste via catalytic steam gasification (fluidized bed reactor). A high-hydrogen-content syngas fuel was obtained by mixing 30% PE waste with PKS. The syngas composition obtained through the gasification process is 76.2% H2, 8.53% CO, 4.39% CO2 and 10.90% CH4, and the lower heating value of the syngas fuel is LHV = 15.98 MJ/m3. Three fuels were tested in this study: natural gas (100% CH4), the syngas fuel, and pure hydrogen (100% H2). The power from the combustor was kept constant for all the fuels tested in this study. The effects of the syngas fuel composition and lower heating value on the flame shape, gas temperature, and the mass of carbon dioxide (CO2) and nitrogen oxides (NOx) per unit of energy generation are presented in this paper. The results show an increase in the peak flame temperature and NO mass fraction for the syngas and hydrogen fuels compared to natural gas combustion, and lower average CO2 emissions at the exit of the combustor are obtained for the syngas compared to the natural gas fuel.

Keywords: CFD, combustion, emissions, gas turbine combustor, gasification, solid waste, syngas, waste to energy

Procedia PDF Downloads 596
1098 Future Design and Innovative Economic Models for Futuristic Markets in Developing Countries

Authors: Nessreen Y. Ibrahim

Abstract:

Designing the future according to a realistic analytical study of futuristic market needs can be a milestone strategy for achieving major improvements in the economies of developing countries. In developing countries, access to high technology and the latest scientific approaches is very limited. The financial problems of low- and medium-income countries have negative effects on the kind and quality of new technologies imported and applied in their markets. Thus, there is a strong need for a paradigm shift in thinking in the design process to improve and evolve their development strategies. This paper discusses future possibilities in developing countries and how they can design their own future according to specific Future Design Models (FDM), which are established to solve certain economic problems as well as political and cultural conflicts. FDM is a strategic thinking framework that provides improvement in both content and process. The content includes beliefs, values, mission, purpose, conceptual frameworks, research, and practice, while the process includes design methodology, design systems, and design management tools. The main objective of this paper is to build an innovative economic model to design a chosen possible futuristic scenario by understanding future market needs, analyzing the real-world setting, solving the model questions through future-driven design, and finally interpreting the results to discuss to what extent they can be transferred to the real world. The paper discusses Egypt as a potential case study. Since Egypt has highly complex economic problems, extra-dynamic political factors, and very rich cultural aspects, we consider Egypt a very challenging example for applying FDM. The results recommend using FDM numerical modeling as a starting point for designing the future.

Keywords: developing countries, economic models, future design, possible futures

Procedia PDF Downloads 269
1097 Development of Automated Quality Management System for the Management of Heat Networks

Authors: Nigina Toktasynova, Sholpan Sagyndykova, Zhanat Kenzhebayeva, Maksat Kalimoldayev, Mariya Ishimova, Irbulat Utepbergenov

Abstract:

Any business needs stable operation and continuous improvement; it is therefore necessary to constantly interact with the environment, to analyze the work of the enterprise in terms of employees, executives and consumers, and to correct any inconsistencies in certain types of processes and in their aggregate. In the case of heat supply organizations, in addition to suppliers, local legislation must be considered, as it is often the main regulator of service pricing. In this case, the process approach used to build a functional organizational structure in these types of businesses in Kazakhstan is a challenge not only in the implementation but also in the way employees' salaries are analyzed. To solve these problems, we investigated the management system of a heating enterprise, including strategic planning based on the balanced scorecard (BSC), quality management in accordance with the standards of the Quality Management System (QMS) ISO 9001, and analysis of the system based on expert judgment using fuzzy inference. To carry out this work, we used the theory of fuzzy sets, the QMS in accordance with ISO 9001, the BSC according to the method of Kaplan and Norton, business process construction according to the IDEF0 notation, modeling using Matlab simulation tools, and graphical programming in LabVIEW. The results of the work are as follows: we determined possibilities for improving the management of a heat-supply plant based on the QMS; after justification and adaptation, a software tool was used to automate a series of functions for management, for the reduction of resources, and for keeping the system up to date; and an application for the analysis of the QMS based on fuzzy inference was created, with a novel organization of communication between the software and the application, enabling the analysis of relevant data of the enterprise management system.
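
A minimal Mamdani-style fuzzy inference sketch of the kind used for expert assessment is given below: triangular membership functions, min/max rule evaluation and centroid defuzzification. The membership functions, rules and crisp inputs are illustrative placeholders, not the enterprise's actual expert model.

```python
# Toy Mamdani fuzzy inference: two crisp expert inputs -> defuzzified QMS score.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on universe x with corners a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

score = np.linspace(0.0, 10.0, 201)                # output universe: QMS conformity score
low, med, high = tri(score, 0, 1, 5), tri(score, 2, 5, 8), tri(score, 5, 9, 10)

# crisp expert inputs in [0, 1]: process compliance and customer satisfaction (hypothetical)
compliance, satisfaction = 0.7, 0.4
comp_high, sat_high = compliance, satisfaction
comp_low, sat_low = 1 - compliance, 1 - satisfaction

# rules (Mamdani: min for AND, max for aggregation)
r1 = min(comp_high, sat_high)                               # both high  -> high score
r2 = max(min(comp_high, sat_low), min(comp_low, sat_high))  # mixed      -> medium score
r3 = min(comp_low, sat_low)                                 # both low   -> low score
aggregated = np.maximum.reduce([np.minimum(r1, high), np.minimum(r2, med), np.minimum(r3, low)])

crisp = np.sum(aggregated * score) / np.sum(aggregated)     # centroid defuzzification
print(f"defuzzified QMS conformity score: {crisp:.2f} / 10")
```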

Keywords: balanced scorecard, heat supply, quality management system, the theory of fuzzy sets

Procedia PDF Downloads 369