Search results for: lighting simulations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2226

516 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data

Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates

Abstract:

Several spatial variables collected at the same locations that share a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a geostatistical multivariate formulation that relies on sharing common spatial random effect terms. In particular, the first response variable can be modeled by a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term, in addition to specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function, but in order to improve the computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically by the block nearest neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross blocks allow capturing the spatial dependence on a large scale, while each individual block captures the spatial dependence on a smaller scale. The multivariate geostatistical model belongs to the class of latent Gaussian models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is demonstrated through simulations and applications to massive data.
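
As an illustration of the nearest-neighbor idea underlying the Block-NNGP (shown here at the level of individual locations rather than blocks, with synthetic coordinates and hypothetical names rather than the authors' data), a minimal sketch of how Vecchia-type conditioning sets can be built:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_sets(coords, m=10):
    """For each location (in a fixed ordering), return the indices of up to m
    nearest neighbors among the previously ordered locations. This is the
    conditioning-set construction behind NNGP/Vecchia approximations; the
    Block-NNGP applies the same idea to blocks of locations instead."""
    n = coords.shape[0]
    neighbors = [np.array([], dtype=int)]      # the first location has no past neighbors
    for i in range(1, n):
        k = min(m, i)
        tree = cKDTree(coords[:i])             # rebuilding per step is fine for a sketch
        _, idx = tree.query(coords[i], k=k)
        neighbors.append(np.atleast_1d(idx))
    return neighbors

# toy example: 500 random locations on the unit square
rng = np.random.default_rng(0)
coords = rng.uniform(size=(500, 2))
nn = nearest_neighbor_sets(coords, m=10)
print(len(nn), [len(s) for s in nn[:5]])
```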

Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models

Procedia PDF Downloads 97
515 Toppling Failure Analysis of Anti-Dip Bedding Rock Slopes Subjected to Crest Loads

Authors: Chaoyi Sun, Congxin Chen, Yun Zheng, Kaizong Xia, Wei Zhang

Abstract:

Crest loads are often encountered in hydropower, highway, open-pit and other engineering rock slopes. Toppling failure is one of the most common deformation failure types of anti-dip bedding rock slopes. Analysis of such failure of anti-dip bedding rock slopes subjected to crest loads has an important influence on engineering practice. Based on the step-by-step analysis approach proposed by Goodman and Bray, a geo-mechanical model was developed, and the related analysis approach was proposed for the toppling failure of anti-dip bedding rock slopes subjected to crest loads. Using the transfer coefficient method, a formulation was derived for calculating the residual thrust at the slope toe and the support force required to meet the requirements of slope stability under crest loads, providing a scientific reference for the design and support of such slopes. Through slope examples, the influence of crest loads on the residual thrust and sliding ratio coefficient was investigated for cases of different block widths and slope cut angles. The results show that there exists a critical block width for such slopes. The influence of crest loads on the residual thrust is non-negligible when the block thickness is smaller than the critical value. Moreover, the influence of crest loads on slope stability increases with the slope cut angle, and the sliding ratio coefficient of anti-dip bedding rock slopes increases with the crest loads. Finally, the theoretical solutions and numerical simulations using the Universal Distinct Element Code (UDEC) were compared, and the consistent results demonstrate the applicability of both approaches.

Keywords: anti-dip bedding rock slope, crest loads, stability analysis, toppling failure

Procedia PDF Downloads 179
514 Economic Growth: The Nexus of Oil Price Volatility and Renewable Energy Resources among Selected Developed and Developing Economies

Authors: Muhammad Siddique, Volodymyr Lugovskyy

Abstract:

This paper explores how nations might mitigate the unfavorable impacts of oil price volatility on economic growth by switching to renewable energy sources. The impacts of uncertain factor prices on economic activity are examined by looking at the Realized Volatility (RV) of oil prices rather than the more traditional approach of looking at oil price shocks. The sample includes the United States of America (USA), China, India, the United Kingdom (UK), Germany, Malaysia, and Pakistan, extending the traditional literature's focus on oil-importing and oil-exporting economies. Granger Causality Tests (GCT), Impulse Response Functions (IRF), and Variance Decompositions (VD) demonstrate that, in a Vector Auto-Regressive (VAR) framework, the negative impacts of oil price volatility extend beyond what can be explained by oil price shocks alone for all of the nations in the sample. Different nations have different levels of vulnerability to changes in oil prices, and other factors, such as sectoral composition and the energy mix, may play a role. The conventional approach, which only takes into account whether a country is a net oil importer or exporter, is inadequate. The potential economic advantages of initiatives to decouple the macroeconomy from volatile commodities markets are shown through simulations of volatility shocks under alternative energy mixes (with greater proportions of renewables). It is determined that in developing countries like Pakistan, increasing the use of renewable energy sources might lessen an economy's sensitivity to changes in oil prices; nonetheless, country-specific study is required to identify particular policy actions. In sum, the research provides an innovative justification for mitigating economic growth's dependence on stable oil prices in our sample countries.
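
As a hedged illustration of the workflow described above (realized volatility computed from returns, placed in a VAR alongside a growth series, then probed with Granger causality, impulse responses and variance decompositions), the following sketch uses synthetic data and placeholder series rather than the authors' actual dataset:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)

# synthetic daily oil returns over ~20 years, aggregated to quarterly realized volatility
dates = pd.date_range("2000-01-03", periods=20 * 252, freq="B")
ret = pd.Series(rng.normal(0, 0.02, len(dates)), index=dates)
rv = (ret ** 2).groupby(ret.index.to_period("Q")).sum() ** 0.5
rv.index = rv.index.to_timestamp()

# synthetic quarterly GDP growth, standing in for the real macro series
growth = pd.Series(rng.normal(0.5, 0.3, len(rv)), index=rv.index)

data = pd.concat({"rv": rv, "growth": growth}, axis=1).dropna()

model = VAR(data)
res = model.fit(maxlags=4, ic="aic")            # lag order chosen by AIC

# Granger causality: does realized volatility help predict growth?
gc = res.test_causality("growth", ["rv"], kind="f")
print(gc.summary())

irf = res.irf(8)     # impulse response functions over 8 quarters
fevd = res.fevd(8)   # forecast error variance decompositions
```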

Keywords: oil price volatility, renewable energy, economic growth, developed and developing economies

Procedia PDF Downloads 79
513 Library Screening and Evaluation of Mycobacterium tuberculosis Ketol-Acid Reductoisomerase Inhibitors

Authors: Vagolu S. Krishna, Shan Zheng, Estharla M. Rekha, Luke W. Guddat, Dharmarajan Sriram

Abstract:

Tuberculosis (TB) remains a major threat to human health. This is due to the fact that current drug treatments are less than optimal, as well as the rising occurrence of multi drug-resistant and extensively drug-resistant strains of the etiological agent, Mycobacterium tuberculosis (Mt). Given the wide-spread significance of this disease, we have undertaken a design and evaluation program to discover new anti-TB drug leads. Here, our attention is focused on ketol-acid reductoisomerase (KARI), the second enzyme in the branched-chain amino acid biosynthesis pathway. Importantly, this enzyme is present in bacteria but not in humans, making it an attractive proposition for drug discovery. In the present work, we used high-throughput virtual screening to identify seventeen potential inhibitors of KARI using the Birla Institute of Technology and Science in-house database. Compounds were selected based on high docking scores, which were assigned as the result of favourable interactions between the compound and the active site of KARI. The Ki values for two leads, compounds 14 and 16, are 3.71 and 3.06 µM, respectively, for Mt KARI. To assess the mode of binding, 100 ns molecular dynamics simulations of these two compounds in association with Mt KARI were performed and showed that the complexes were stable, with an average RMSD of less than 2.5 Å for all atoms. Compound 16 showed an MIC of 2.06 ± 0.91 µM and a 1.9 fold logarithmic reduction in the growth of Mt in an infected macrophage model. The two compounds exhibited low toxicity against murine macrophage RAW 264.7 cell lines. Thus, both compounds are promising candidates for development as anti-TB drug leads.
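
For readers unfamiliar with the RMSD stability criterion quoted above (average below 2.5 Å over the 100 ns trajectories), here is a minimal, library-agnostic sketch of how RMSD is computed from superimposed coordinate frames; synthetic arrays stand in for the actual Mt KARI trajectory:

```python
import numpy as np

def rmsd(frame, reference):
    """Root-mean-square deviation between two (N, 3) coordinate arrays
    that are assumed to be already superimposed (aligned)."""
    diff = frame - reference
    return np.sqrt((diff ** 2).sum(axis=1).mean())

# synthetic "trajectory": 1000 frames of 500 atoms fluctuating around a reference
rng = np.random.default_rng(7)
reference = rng.normal(size=(500, 3)) * 10.0                      # coordinates in Å
trajectory = reference + rng.normal(0, 0.8, size=(1000, 500, 3))  # thermal-like noise

values = np.array([rmsd(f, reference) for f in trajectory])
print(f"mean RMSD: {values.mean():.2f} Å, max RMSD: {values.max():.2f} Å")
```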

Keywords: ketol-acid reductoisomerase, macrophage, molecular docking and dynamics, tuberculosis

Procedia PDF Downloads 122
512 Investigations into the Efficiencies of Steam Conversion in Three Reactor Chemical Looping

Authors: Ratnakumar V. Kappagantula, Gordon D. Ingram, Hari B. Vuthaluru

Abstract:

This paper analyzes a three reactor chemical looping process for hydrogen production from natural gas, allowing for carbon dioxide capture through chemical looping technology. An oxygen carrier is circulated to separate carbon dioxide, to reduce steam for hydrogen production and to supply oxygen for combustion. In this study, the emphasis is placed on the steam conversion in the steam reactor by investigating the hydrogen efficiencies of the complete system at steam conversions of 15.8% and 50%. An Aspen Plus model of a three reactor chemical looping process was developed to study the effects of operational parameters on hydrogen production. Maximum hydrogen production was observed under stoichiometric conditions. Different conversions in the steam reactor, which was modelled as a Gibbs reactor, were found when Gibbs-identified products and user-identified products were chosen. Simulations were performed for different oxygen carriers, which consist of an active metal oxide on an inert support material. For the same metal oxide mass flowrate, the fuel reactor temperature decreased for different support materials in the order: aluminum oxide (Al2O3) > magnesium aluminate (MgAl2O4) > zirconia (ZrO2). To achieve the same fuel reactor temperature for the same oxide mass flow rate, the inert mass fraction was found to be 0.825 for ZrO2, 0.7 for MgAl2O4 and 0.6 for Al2O3. The effect of poisoning of the oxygen carrier was also analyzed. With 3000 ppm sulfur-based impurities in the feed gas, the hydrogen product energy rate of the process was found to decrease by 0.4%.

Keywords: aspen plus, chemical looping combustion, inert support balls, oxygen carrier

Procedia PDF Downloads 328
511 High Pressure Torsion Deformation Behavior of a Low-SFE FCC Ternary Medium Entropy Alloy

Authors: Saumya R. Jha, Krishanu Biswas, Nilesh P. Gurao

Abstract:

Several recent investigations have revealed medium entropy alloys exhibiting better mechanical properties than their high entropy counterparts. This clearly establishes that although a higher entropy plays a vital role in the stabilization of a particular phase over complex intermetallic phases, configurational entropy is not the primary factor responsible for the high inherent strengthening in these systems. Above and beyond a high contribution from friction stresses and solid solution strengthening, strain hardening is an important contributor to the strengthening in these systems. In this regard, researchers have developed severe plastic deformation (SPD) techniques like high pressure torsion (HPT) to impose very high shear strains on the material, thereby leading to ultrafine grained (UFG) microstructures, which cause a manifold increase in strength. The present work reports a meticulous study of the variation in mechanical properties at different radial displacements from the center of HPT-tested equiatomic ternary FeMnNi synthesized by the casting route, a low stacking fault energy FCC alloy that shows significantly higher toughness than high entropy counterparts such as the Cantor alloy. The gradient in grain size along the radial direction of these specimens has been modeled using microstructure entropy for predicting the mechanical properties, which has also been validated by indentation tests. The dislocation density is computed by FEM simulations for varying strains and validated by analyzing synchrotron diffraction data. Thus, the proposed model can be utilized to predict the strengthening behavior of similar systems deformed by HPT under varying loading conditions.
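
The radial gradient in imposed strain that drives the grain-size gradient discussed above can be illustrated with the commonly used ideal-torsion relation for HPT, where the shear strain at radius r after N anvil turns on a disc of thickness t is γ = 2πNr/t and the von Mises equivalent strain is γ/√3. A small sketch (the disc geometry and number of turns are illustrative values, not the authors' processing parameters):

```python
import numpy as np

def hpt_equivalent_strain(r_mm, turns, thickness_mm):
    """Von Mises equivalent strain in ideal high pressure torsion:
    gamma = 2*pi*N*r/t, eps_eq = gamma / sqrt(3)."""
    gamma = 2.0 * np.pi * turns * r_mm / thickness_mm
    return gamma / np.sqrt(3.0)

# radial positions across an assumed 10 mm diameter disc, 0.8 mm thick, 5 turns
for r in np.linspace(0.0, 5.0, 6):
    eps = hpt_equivalent_strain(r, turns=5, thickness_mm=0.8)
    print(f"r = {r:.1f} mm -> equivalent strain ~ {eps:.1f}")
```

The monotonic increase of strain with radius is what motivates sampling mechanical properties at several radial displacements from the disc center.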

Keywords: high pressure torsion, severe plastic deformation, configurational entropy, dislocation density, FEM simulation

Procedia PDF Downloads 153
510 Lateral Torsional Buckling Resistance of Trapezoidally Corrugated Web Girders

Authors: Annamária Käferné Rácz, Bence Jáger, Balázs Kövesdi, László Dunai

Abstract:

Due to the numerous advantages of steel corrugated web girders, their application field is growing for bridges as well as for buildings. The global stability resistance of such girders is significantly larger than that of conventional I-girders with flat webs; thus, the amount of structural steel material required can be significantly reduced. Design codes and specifications do not provide clear and complete rules or recommendations for the determination of the lateral torsional buckling (LTB) resistance of corrugated web girders. Therefore, the authors made a thorough investigation regarding the LTB resistance of corrugated web girders. Finite element (FE) simulations have been performed to develop new design formulas for the determination of the LTB resistance of trapezoidally corrugated web girders. The FE model is developed considering geometrically and materially nonlinear analysis using equivalent geometric imperfections (GMNI analysis). The equivalent geometric imperfections involve the initial geometric imperfections and residual stresses coming from rolling, welding and flame cutting. An imperfection sensitivity analysis was performed to determine the necessary imperfection magnitudes, considering only the first eigenmode shape imperfections. With the help of the validated FE model, an extended parametric study is carried out to investigate the LTB resistance for different trapezoidal corrugation profiles. First, the critical moment of a specific girder was calculated by the FE model. The critical moments from the FE calculations are compared to previous analytical calculation proposals. Then, nonlinear analysis was carried out to determine the ultimate resistance. Based on the numerical investigations, new proposals are developed for the determination of the LTB resistance of trapezoidally corrugated web girders through a modification factor on the design method for conventional flat web girders.
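
For context, the flat-web design method that the proposed modification factor acts on is the familiar buckling-curve reduction of the characteristic moment resistance by the slenderness computed from the critical moment. A hedged sketch of that bookkeeping is given below; the modification factor itself is the paper's contribution and appears only as a placeholder argument, and all numbers are illustrative:

```python
import numpy as np

def chi_LT(lambda_LT, alpha_LT=0.34, lambda_0=0.2):
    """Generic LTB reduction factor (flat-web buckling-curve form):
    phi = 0.5*(1 + alpha*(lam - lam0) + lam^2), chi = 1/(phi + sqrt(phi^2 - lam^2)) <= 1."""
    phi = 0.5 * (1.0 + alpha_LT * (lambda_LT - lambda_0) + lambda_LT ** 2)
    return np.minimum(1.0, 1.0 / (phi + np.sqrt(phi ** 2 - lambda_LT ** 2)))

def ltb_resistance(M_Rk, M_cr, gamma_M1=1.0, corrugation_factor=1.0):
    """M_b,Rd = corrugation_factor * chi_LT(lambda_LT) * M_Rk / gamma_M1,
    with slenderness lambda_LT = sqrt(M_Rk / M_cr). corrugation_factor is a
    placeholder for the paper's proposed modification of the flat-web method."""
    lam = np.sqrt(M_Rk / M_cr)
    return corrugation_factor * chi_LT(lam) * M_Rk / gamma_M1

# illustrative numbers only: characteristic resistance and elastic critical moment in N*mm
M_Rk, M_cr = 850e6, 1200e6
print(f"M_b,Rd ~ {ltb_resistance(M_Rk, M_cr) / 1e6:.0f} kNm (illustrative values)")
```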

Keywords: corrugated web, lateral torsional buckling, critical moment, FE modeling

Procedia PDF Downloads 283
509 Ebola Virus Glycoprotein Inhibitors from Natural Compounds: Computer-Aided Drug Design

Authors: Driss Cherqaoui, Nouhaila Ait Lahcen, Ismail Hdoufane, Mehdi Oubahmane, Wissal Liman, Christelle Delaite, Mohammed M. Alanazi

Abstract:

The Ebola virus is a highly contagious and deadly pathogen that causes Ebola virus disease. The Ebola virus glycoprotein (EBOV-GP) is a key factor in viral entry into host cells, making it a critical target for therapeutic intervention. Using a combination of computational approaches, this study focuses on the identification of natural compounds that could serve as potent inhibitors of EBOV-GP. The 3D structure of EBOV-GP was selected, with missing residues modeled, and this structure was minimized and equilibrated. Two large natural compound databases, COCONUT and NPASS, were chosen and filtered based on toxicity risks and Lipinski’s Rule of Five to ensure drug-likeness. Following this, a pharmacophore model, built from 22 reported active inhibitors, was employed to refine the selection of compounds with a focus on structural relevance to known Ebola inhibitors. The filtered compounds were subjected to virtual screening via molecular docking, which identified ten promising candidates (five from each database) with strong binding affinities to EBOV-GP. These compounds were then validated through molecular dynamics simulations to evaluate their binding stability and interactions with the target. The top three compounds from each database were further analyzed using ADMET profiling, confirming their favorable pharmacokinetic properties, stability, and safety. These results suggest that the selected compounds have the potential to inhibit EBOV-GP, offering new avenues for antiviral drug development against the Ebola virus.
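
A hedged sketch of the drug-likeness filtering step described above (Lipinski's Rule of Five applied to candidate molecules) using RDKit; the SMILES strings are generic examples, not entries from the COCONUT or NPASS databases:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_lipinski(smiles):
    """Return True if the molecule violates at most one of Lipinski's rules:
    MW <= 500, logP <= 5, H-bond donors <= 5, H-bond acceptors <= 10."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    violations = sum([
        Descriptors.MolWt(mol) > 500,
        Descriptors.MolLogP(mol) > 5,
        Lipinski.NumHDonors(mol) > 5,
        Lipinski.NumHAcceptors(mol) > 10,
    ])
    return violations <= 1

# example natural-product-like molecules (illustrative only)
candidates = {
    "quercetin": "C1=CC(=C(C=C1C2=C(C(=O)C3=C(C=C(C=C3O2)O)O)O)O)O",
    "caffeine": "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",
}
for name, smi in candidates.items():
    print(name, "passes Lipinski:", passes_lipinski(smi))
```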

Keywords: EBOV-GP, Ebola virus glycoprotein, high-throughput drug screening, molecular docking, molecular dynamics, natural compounds, pharmacophore modeling, virtual screening

Procedia PDF Downloads 23
508 Comparing Energy Labelling of Buildings in Spain

Authors: Carolina Aparicio-Fernández, Alejandro Vilar Abad, Mar Cañada Soriano, Jose-Luis Vivancos

Abstract:

The building sector is responsible for 40% of the total energy consumption in the European Union (EU). Thus, the implementation of strategies for quantifying and reducing buildings' energy consumption is indispensable for reaching the EU's carbon neutrality and energy efficiency goals. Each Member State has transposed the European Directives according to its own peculiarities: existing technical legislation, constructive solutions, climatic zones, etc. Therefore, in accordance with the Energy Performance of Buildings Directive, Member States have developed different Energy Performance Certificate schemes, using a proposed energy simulation software tool for each national or regional area. Energy Performance Certificates provide powerful and comprehensive information to predict, analyze and improve the energy demand of new and existing buildings. Energy simulation software and databases allow a better understanding of the current constructive reality of the European building stock. However, Energy Performance Certificates still face several issues before they can be considered a reliable and global source of information, since the different calculation tools used do not allow connection between them. In this document, the TRNSYS (TRaNsient System Simulation program) software is used to calculate the energy demand of a building, and the result is compared with the energy labeling obtained with the official Spanish software tools. We demonstrate the possibility of using non-official software tools to calculate the Energy Performance Certificate. Thus, this approach could be used throughout the EU to compare the results in all possible cases proposed by the EU Member States. To implement the simulations, an isolated single-family house with different construction solutions is considered. The results are obtained for every climatic zone of the Spanish Technical Building Code.

Keywords: energy demand, energy performance certificate, EPBD, TRNSYS, buildings

Procedia PDF Downloads 127
507 Numerical Study of Nonlinear Guided Waves in Composite Laminates with Delaminations

Authors: Reza Soleimanpour, Ching Tai Ng

Abstract:

Fibre composites are widely used in various structures due to their attractive properties, such as a higher stiffness to mass ratio and better corrosion resistance compared to metallic materials. However, one serious weakness of this composite material is delamination, which is a subsurface separation of laminae. A low level of this barely visible damage can cause a significant reduction in residual compressive strength. In the last decade, the application of guided waves for damage detection has been a topic of significant interest for many researchers. Among all guided wave techniques, nonlinear guided waves have shown outstanding sensitivity and capability for detecting different types of damage, e.g. cracks and delaminations. So far, most research on applications of nonlinear guided waves has been dedicated to isotropic materials, such as aluminium and steel, while only a few works have been done on applications of the nonlinear characteristics of guided waves in anisotropic materials. This study investigates the nonlinear interactions of the fundamental antisymmetric Lamb wave (A0) with delamination in composite laminates using three-dimensional (3D) explicit finite element (FE) simulations. The nonlinearity considered in this study arises from interactions of the two interfaces of the sub-laminates at the delamination region, which generate contact acoustic nonlinearity (CAN). The aim of this research is to investigate the phenomena of CAN in composite laminated beams through a series of numerical case studies. In this study, the interaction of the fundamental antisymmetric Lamb wave with delaminations of different sizes is studied in detail. The results show that the A0 Lamb wave interacts with the delaminations, generating CAN in the form of higher harmonics, which is a good indicator for determining the existence of delaminations in composite laminates.
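
The higher-harmonic signature of CAN mentioned above is typically quantified from the frequency spectrum of the received signal, for example as the ratio of the second-harmonic amplitude to the fundamental (or to its square). A minimal sketch with a synthetic signal, where the excitation frequency, sampling rate and nonlinearity level are illustrative assumptions:

```python
import numpy as np

fs = 10e6                     # sampling frequency [Hz]
f0 = 100e3                    # excitation (fundamental) frequency [Hz]
t = np.arange(0, 2e-3, 1 / fs)

# synthetic received signal: fundamental plus a weak second harmonic from CAN, plus noise
signal = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 2 * f0 * t)
signal += np.random.default_rng(3).normal(0, 0.01, t.size)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

A1 = spectrum[np.argmin(np.abs(freqs - f0))]        # fundamental amplitude
A2 = spectrum[np.argmin(np.abs(freqs - 2 * f0))]    # second-harmonic amplitude
print(f"A2/A1 = {A2 / A1:.3f},  relative nonlinearity A2/A1^2 = {A2 / A1 ** 2:.3e}")
```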

Keywords: contact acoustic nonlinearity, delamination, fibre reinforced composite beam, finite element, nonlinear guided waves

Procedia PDF Downloads 204
506 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks

Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang

Abstract:

The capacity of conventional cellular networks has reached its upper bound, and this can be addressed by introducing femtocells, which are low-cost and easy to deploy. The spectrum interference issue becomes more critical as value-added multimedia services grow rapidly in two-tier cellular networks. Spectrum allocation is one of the effective methods of interference mitigation. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aiming at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are the players and the available frequency channels are the strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects the interference from the standpoint of the channels. This work focuses on co-channel interference and puts forward a negative-logarithm interference function of the distance weight ratio, aiming at suppressing co-channel interference within the same network tier. This scenario is more suitable for actual network deployment, and the system possesses high robustness. According to the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing the spectrum allocation in a distributed fashion. Numerical results show that the signal to interference and noise ratio can be obviously improved through the spectrum allocation scheme and that users' downlink quality of service can be satisfied. Moreover, simulation results show that the average spectrum efficiency of the cellular network can be significantly improved.

Keywords: femtocell networks, game theory, interference mitigation, spectrum allocation

Procedia PDF Downloads 156
505 A Two-Phase Flow Interface Tracking Algorithm Using a Fully Coupled Pressure-Based Finite Volume Method

Authors: Shidvash Vakilipour, Scott Ormiston, Masoud Mohammadi, Rouzbeh Riazi, Kimia Amiri, Sahar Barati

Abstract:

Two-phase and multi-phase flows are common flow types in fluid mechanics engineering. Among the basic and applied problems of these flow types, two-phase parallel flow is one in which two immiscible fluids flow in the vicinity of each other. In this type of flow, fluid properties (e.g. density, viscosity, and temperature) are different at the two sides of the interface of the two fluids. The most challenging part of the numerical simulation of two-phase flow is to determine the location of the interface accurately. In the present work, a coupled interface tracking algorithm is developed based on the Arbitrary Lagrangian-Eulerian (ALE) approach using a cell-centered, pressure-based, coupled solver. To validate this algorithm, an analytical solution for fully developed two-phase flow in the presence of gravity is derived, and then the results of the numerical simulation of this flow are compared with the analytical solution at various flow conditions. The results of the simulations show good accuracy of the algorithm despite the use of a relatively coarse, uniform grid. Temporal variations of the interface profile toward the steady-state solution show that a greater difference between the fluids' properties (especially dynamic viscosity) results in larger traveling waves. Gravity effect studies also show that favorable gravity results in a reduction of the heavier fluid's thickness, while adverse gravity increases it with respect to the zero-gravity condition. However, the magnitude of the variation under favorable gravity is much larger than under adverse gravity.

Keywords: coupled solver, gravitational force, interface tracking, Reynolds number to Froude number, two-phase flow

Procedia PDF Downloads 315
504 An Integrated Label Propagation Network for Structural Condition Assessment

Authors: Qingsong Xiong, Cheng Yuan, Qingzhao Kong, Haibei Xiong

Abstract:

Deep-learning-driven approaches based on vibration responses have attracted increasing attention in rapid structural condition assessment, while obtaining sufficient measured training data with corresponding labels is relatively costly and even inaccessible in practical engineering. This study proposes an integrated label propagation network for structural condition assessment, which is able to propagate labels from continuously generated measurements of the intact structure to unlabeled measurements of damage scenarios. The integrated network embeds damage-sensitive feature extraction by a deep autoencoder and pseudo-label propagation by optimized fuzzy clustering, whose architecture and mechanism are elaborated. With a sophisticated network design and specified strategies for improving performance, the present network extends the strengths of self-supervised representation learning, unsupervised fuzzy clustering and supervised classification algorithms into an integrated framework for assessing damage conditions. Both numerical simulations and full-scale laboratory shaking table tests of a two-story building structure were conducted to validate its capability of detecting post-earthquake damage. The identification accuracy of the present network was 0.95 in the numerical validations and 0.86 on average in the laboratory case studies. It should be noted that the whole training procedure of all models involved in the network strictly does not rely on any labeled data from damage scenarios, only on several samples from the intact structure, which indicates a significant superiority in model adaptability and feasible applicability in practice.

Keywords: autoencoder, condition assessment, fuzzy clustering, label propagation

Procedia PDF Downloads 97
503 Collapse Load Analysis of Reinforced Concrete Pile Group in Liquefying Soils under Lateral Loading

Authors: Pavan K. Emani, Shashank Kothari, V. S. Phanikanth

Abstract:

The ultimate load analysis of RC pile groups has assumed a lot of significance under liquefying soil conditions, especially due to post-earthquake studies of the 1964 Niigata, 1995 Kobe and 2001 Bhuj earthquakes. The present study reports the results of numerical simulations on pile groups subjected to monotonically increasing lateral loads under design amounts of pile axial loading. Soil liquefaction has been considered through the non-linear p-y relationship of the soil springs, which can vary along the depth/length of the pile. This variation, in turn, is related to the liquefaction potential of the site and the magnitude of the seismic shaking. As the piles in the group can reach their extreme deflections and rotations during increased amounts of lateral loading, a precise modeling of the inelastic behavior of the pile cross-section is done, considering the complete stress-strain behavior of concrete, with and without confinement, and of reinforcing steel, including the strain-hardening portion. The possibility of inelastic buckling of the individual piles is considered in the overall collapse modes. The model is analysed using Riks analysis in finite element software to check the post-buckling behavior and plastic collapse of the piles. The results confirm the kinds of failure modes predicted by centrifuge test results reported by researchers on pile groups, although the pile material used is significantly different from that of the simulation model. The extension of the present work promises an important contribution to design codes for pile groups in liquefying soils.

Keywords: collapse load analysis, inelastic buckling, liquefaction, pile group

Procedia PDF Downloads 162
502 Binding Mechanism of Synthesized 5β-Dihydrocortisol and 5β-Dihydrocortisol Acetate with Human Serum Albumin to Understand Their Role in Breast Cancer

Authors: Monika Kallubai, Shreya Dubey, Rajagopal Subramanyam

Abstract:

Our study investigates the biological interactions of synthesized 5β-dihydrocortisol (Dhc) and 5β-dihydrocortisol acetate (DhcA) molecules with the carrier protein Human Serum Albumin (HSA). The cytotoxicity study was performed on a breast cancer cell line (MCF-7) and a normal human embryonic kidney cell line (HEK293); the IC50 values for MCF-7 cells were 28 and 25 µM, respectively, whereas no toxicity in terms of cell viability was observed with the HEK293 cell line. Further experiments showed that Dhc and DhcA induced 35.6% and 37.7% early apoptotic cells and 2.5% and 2.9% late apoptotic cells, respectively. Morphological observation of cell death through the TUNEL assay revealed that Dhc and DhcA induced apoptosis in MCF-7 cells. The quenching in the HSA–Dhc and HSA–DhcA complexes was observed to be static, the binding constants (K) were 4.7±0.03×10⁴ M⁻¹ and 3.9±0.05×10⁴ M⁻¹, and their binding free energies were found to be -6.4 and -6.16 kcal/mol, respectively. The displacement studies confirmed that lidocaine (1.4±0.05×10⁴ M⁻¹) displaced Dhc and phenylbutazone (1.5±0.05×10⁴ M⁻¹) displaced DhcA, which indicates that domain I and domain II are the binding sites for Dhc and DhcA. Further, CD results revealed that the secondary structure of HSA was altered in the presence of Dhc and DhcA. Furthermore, atomic force microscopy and transmission electron microscopy showed that dimensions such as the height and molecular size of the HSA–Dhc and HSA–DhcA complexes were larger compared to HSA alone. Detailed analysis through molecular dynamics simulations also supported the greater stability of the HSA–Dhc and HSA–DhcA complexes, and root-mean-square fluctuation analysis identified the binding site of Dhc as domain IB and that of DhcA as domain IIA. This information is valuable for the further development of steroid derivatives with improved pharmacological significance as novel anti-cancer drugs.
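
Binding constants of the kind quoted above are commonly extracted from fluorescence quenching titrations via the double-logarithmic relation log[(F0 − F)/F] = log K + n log[Q]. A small sketch of that fit with synthetic quenching data (the concentrations and intensities are illustrative placeholders, not the measured values):

```python
import numpy as np

# synthetic titration: ligand concentrations [M] and quenched HSA fluorescence intensities
conc = np.array([2e-6, 4e-6, 6e-6, 8e-6, 10e-6, 12e-6])
F0 = 1000.0
K_true = 4.7e4
F = F0 / (1.0 + K_true * conc)          # idealized 1:1 static-quenching response

# double-log plot: slope = number of binding sites n, intercept = log K (for n ~ 1)
x = np.log10(conc)
y = np.log10((F0 - F) / F)
n_fit, logK_fit = np.polyfit(x, y, 1)

print(f"n = {n_fit:.2f},  K = {10 ** logK_fit:.2e} M^-1")
```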

Keywords: apoptosis, dihydrocortisol, fluorescence quenching, protein conformations

Procedia PDF Downloads 131
501 Decision Support System for the Management of the Shandong Peninsula, China

Authors: Natacha Fery, Guilherme L. Dalledonne, Xiangyang Zheng, Cheng Tang, Roberto Mayerle

Abstract:

A Decision Support System (DSS) for supporting decision makers in the management of the Shandong Peninsula has been developed. Emphasis has been given to coastal protection, coastal cage aquaculture and harbors. The investigations were done in the framework of a joint research project funded by the German Ministry of Education and Research (BMBF) and the Chinese Academy of Sciences (CAS). In this paper, a description of the DSS, the development of its components, and results of its application are presented. The system integrates in-situ measurements, process-based models, and a database management system. Numerical models for the simulation of flow, waves, sediment transport and morphodynamics covering the entire Bohai Sea are set up based on the Delft3D modelling suite (Deltares). Calibration and validation of the models were realized based on the measurements of moored Acoustic Doppler Current Profilers (ADCP) and High Frequency (HF) radars. In order to enable cost-effective and scalable applications, a database management system was developed. It enhances information processing, data evaluation, and supports the generation of data products. Results of the application of the DSS to the management of coastal protection, coastal cage aquaculture and harbors are presented here. Model simulations covering the most severe storms observed during the last decades were carried out leading to an improved understanding of hydrodynamics and morphodynamics. Results helped in the identification of coastal stretches subjected to higher levels of energy and improved support for coastal protection measures.

Keywords: coastal protection, decision support system, in-situ measurements, numerical modelling

Procedia PDF Downloads 195
500 Impact of Economic Globalization on Ecological Footprint in India: Evidenced with Dynamic ARDL Simulations

Authors: Muhammed Ashiq Villanthenkodath, Shreya Pal

Abstract:

Purpose: This study scrutinizes the impact of economic globalization on the ecological footprint while endogenizing economic growth and energy consumption from 1990 to 2018 in India. Design/methodology/approach: The standard unit root test has been employed for the time series analysis to unveil the order of integration. Then, cointegration was confirmed using autoregressive distributed lag (ARDL) analysis. Further, the study executed the dynamic ARDL simulation model to estimate long-run and short-run results along with simulation-based predictions. Findings: The cointegration analysis confirms the existence of a long-run association among the variables. Further, economic globalization reduces the ecological footprint in the long run. Similarly, energy consumption decreases the ecological footprint. In contrast, economic growth spurs the ecological footprint in India. Originality/value: This study contributes to the literature in many ways. First, unlike studies that employ the CO2 emissions and globalization nexus, this study employs the ecological footprint for measuring environmental quality; since it is a broader measure of environmental quality, it can offer a wide range of climate change mitigation policies for India. Second, the study executes a multivariate framework with updated series from 1990 to 2018 in India to explore the link between EF, economic globalization, energy consumption, and economic growth. Third, the dynamic autoregressive distributed lag (ARDL) model has been used to explore the short and long-run associations between the series. Finally, to our limited knowledge, this is the first study that uses economic globalization in the EF function of India amid a trade-off between sustainable economic growth and the environment in the era of globalization.

Keywords: economic globalization, ecological footprint, India, dynamic ARDL simulation model

Procedia PDF Downloads 124
499 Software Development for Both Small Wind Performance Optimization and Structural Compliance Analysis with International Safety Regulations

Authors: K. M. Yoo, M. H. Kang

Abstract:

Conventional commercial wind turbine design software is limited to large wind turbines because it does not incorporate the low Reynolds number aerodynamic characteristics typical of small wind turbines. To extract the maximum annual energy production from an intermediately designed small wind turbine matched to measured wind data, numerous simulations are highly recommended in order to find the best fitting planform design with a proper airfoil configuration, since the optimal planform design changes with the wind distribution and average wind speed. Although theoretically not difficult, finalizing the conceptual layout of a desired small wind turbine is an inconveniently time-consuming design procedure. Thus, to make simulations easier and faster, GUI software is developed to conveniently iterate and change airfoil types, wind data, and geometric blade data. With the magnetic generator torque curve, a peak power tracking simulation is also available to better match the rotor with the magnetic generator. Small wind turbines often lack starting torque due to blade optimization; thus, a starting torque simulation is also embedded, along with yaw design. This software provides various blade cross section details at the user's design convenience, such as skin thickness control with a fiber direction option, spar shape, and their material properties. Since small wind turbines are subject to international safety regulations covering fatigue damage during normal operations and safety load analyses with ultimate excessive loads, load analyses are provided for each category mandated in the safety regulations.
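
The peak power tracking idea mentioned above can be illustrated with the standard rotor power relation P = 0.5 ρ A Cp(λ) V³, where the controller keeps the tip-speed ratio λ = ωR/V near the value that maximizes the power coefficient Cp. A hedged sketch with an invented Cp curve (not the software's actual aerodynamic model) and illustrative rotor dimensions:

```python
import numpy as np

rho, R = 1.225, 1.5                      # air density [kg/m^3], rotor radius [m] (illustrative)
A = np.pi * R ** 2

def cp(tsr):
    """Illustrative power-coefficient curve peaking near tsr ~ 6 (placeholder, not BEM output)."""
    return 0.45 * np.exp(-((tsr - 6.0) ** 2) / 8.0)

def rotor_power(wind_speed, rotor_speed_rad_s):
    tsr = rotor_speed_rad_s * R / wind_speed
    return 0.5 * rho * A * cp(tsr) * wind_speed ** 3

# peak power tracking: for each wind speed, pick the rotor speed that maximizes power
omegas = np.linspace(1.0, 80.0, 400)
for v in (4.0, 6.0, 8.0, 10.0):
    powers = np.array([rotor_power(v, w) for w in omegas])
    i = powers.argmax()
    print(f"V = {v:4.1f} m/s -> omega* = {omegas[i]:5.1f} rad/s, P* = {powers[i]:7.1f} W")
```

The optimal rotor speed scales linearly with wind speed, which is the relation a peak power tracker tries to follow against the generator torque curve.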

Keywords: GUI software, low Reynolds number aerodynamics, peak power tracking, safety regulations, wind turbine performance optimization

Procedia PDF Downloads 305
498 Nonlinear Aerodynamic Parameter Estimation of a Supersonic Air to Air Missile by Using Artificial Neural Networks

Authors: Tugba Bayoglu

Abstract:

Aerodynamic parameter estimation is very crucial in the missile design phase, since an accurate, high fidelity aerodynamic model is required for designing a high performance and robust control system, developing high fidelity flight simulations, and verifying computational and wind tunnel test results. However, in the literature, there are few missile aerodynamic parameter identification studies, for three main reasons: (1) most air to air missiles cannot fly at constant speed, (2) missile flight test numbers and flight durations are much smaller than those of fixed wing aircraft, (3) the variation of missile aerodynamic parameters with respect to Mach number is higher than that of fixed wing aircraft. In addition to these challenges, identification of aerodynamic parameters at high wind angles by using classical estimation techniques brings another difficulty into the estimation process. The reason is that most estimation techniques require employing polynomials or splines to model the behavior of the aerodynamics. However, for missiles with a large variation of aerodynamic parameters with respect to flight variables, the order of the proposed model increases, which brings computational burden and complexity. Therefore, this study aims to solve the nonlinear aerodynamic parameter identification problem for a supersonic air to air missile by using Artificial Neural Networks. The proposed method will be tested using simulated data generated with a six degree of freedom missile model involving a nonlinear aerodynamic database. The data will be corrupted by adding noise through the measurement model. Then, using the flight variables and measurements, the parameters will be estimated. Finally, the prediction accuracy will be investigated.
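
A hedged sketch of the basic idea of fitting an aerodynamic coefficient with a neural network: synthetic flight variables (Mach number, angle of attack, fin deflection) map to a synthetic nonlinear coefficient, noisy "measurements" are generated, and a small MLP is trained on them. The functional form, ranges and noise level are placeholders, not the actual missile database:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

mach = rng.uniform(1.2, 4.0, n)
alpha = rng.uniform(-20.0, 20.0, n)        # angle of attack [deg]
delta = rng.uniform(-15.0, 15.0, n)        # fin deflection [deg]
X = np.column_stack([mach, alpha, delta])

# synthetic nonlinear pitching-moment coefficient plus measurement noise
cm = -0.05 * alpha - 0.02 * delta + 0.004 * alpha * np.abs(alpha) / mach
y = cm + rng.normal(0, 0.02, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {net.score(X_te, y_te):.3f}")
```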

Keywords: air to air missile, artificial neural networks, open loop simulation, parameter identification

Procedia PDF Downloads 279
497 Computation of Residual Stresses in Human Face Due to Growth

Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan

Abstract:

Growth and remodeling of biological structures have gained much attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as the design of prosthetics and optimized surgical operations. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically growth and remodeling is one of the main sources. Extracting the geometry of body organs from medical imaging does not produce any information regarding the existing residual stresses in those organs. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses might cause erroneous results in numerical simulations. Accounting for residual stresses due to tissue growth can improve the accuracy of mechanical analysis results. In this paper, we have implemented a computational framework based on fixed-point iteration to determine the residual stresses due to growth. Using nonlinear continuum mechanics and the concept of a fictitious configuration, we find the unknown stress-free reference configuration which is necessary for mechanical analysis. To illustrate the method, we apply it to a finite element model of a healthy human face whose geometry has been extracted from medical images. We have computed the distribution of residual stress in the facial tissues, which can counteract the effect of gravity and keep the tissues firm. Tissue wrinkles caused by aging could be a consequence of decreasing residual stress that no longer counteracts gravity. Considering these stresses has important applications in maxillofacial surgery. It helps surgeons to predict the changes after surgical operations and their consequences.

Keywords: growth, soft tissue, residual stress, finite element method

Procedia PDF Downloads 355
496 Towards Designing of a Potential New HIV-1 Protease Inhibitor Using Quantitative Structure-Activity Relationship Study in Combination with Molecular Docking and Molecular Dynamics Simulations

Authors: Mouna Baassi, Mohamed Moussaoui, Hatim Soufi, Sanchaita Rajkhowa, Ashwani Sharma, Subrata Sinha, Said Belaaouad

Abstract:

Human Immunodeficiency Virus type 1 protease (HIV-1 PR) is one of the most challenging targets of antiretroviral therapy used in the treatment of AIDS-infected people. The performance of protease inhibitors (PIs) is limited by the development of protease mutations that can promote resistance to the treatment. The current study was carried out using statistics and bioinformatics tools. A series of thirty-three compounds with known enzymatic inhibitory activities against HIV-1 protease was used in this paper to build a mathematical model relating the structure to the biological activity. These compounds were designed by software; their descriptors were computed using various tools, such as Gaussian, Chem3D, ChemSketch and MarvinSketch. Computational methods generated the best model based on its statistical parameters. The model’s applicability domain (AD) was elaborated. Furthermore, one compound has been proposed as efficient against HIV-1 protease with comparable biological activity to the existing ones; this drug candidate was evaluated using ADMET properties and Lipinski’s rule. Molecular Docking performed on Wild Type and Mutant Type HIV-1 proteases allowed the investigation of the interaction types displayed between the proteases and the ligands, Darunavir (DRV) and the new drug (ND). Molecular dynamics simulation was also used in order to investigate the complexes’ stability, allowing a comparative study of the performance of both ligands (DRV & ND). Our study suggested that the new molecule showed comparable results to that of Darunavir and may be used for further experimental studies. Our study may also be used as a pipeline to search and design new potential inhibitors of HIV-1 proteases.
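
A hedged sketch of the QSAR modelling step described above: a multiple linear regression between computed descriptors and the inhibitory activity, with leave-one-out cross-validation as one of the usual statistical checks. The descriptor matrix here is random placeholder data, not the thirty-three-compound dataset used in the paper:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score, LeaveOneOut

rng = np.random.default_rng(5)

# placeholder: 33 compounds x 4 descriptors (e.g. logP, MW, polarizability, HOMO energy)
X = rng.normal(size=(33, 4))
true_coef = np.array([0.8, -0.3, 0.5, 0.2])
y = X @ true_coef + rng.normal(0, 0.2, 33)         # synthetic pIC50-like activity

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)

# leave-one-out cross-validated Q^2, a common QSAR validation statistic
mse_loo = -cross_val_score(LinearRegression(), X, y,
                           cv=LeaveOneOut(), scoring="neg_mean_squared_error")
press = mse_loo.sum()                              # single-sample folds, so MSE = squared error
q2 = 1.0 - press / ((y - y.mean()) ** 2).sum()
print(f"R^2 = {r2:.3f}, LOO Q^2 = {q2:.3f}")
```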

Keywords: QSAR, ADMET properties, molecular docking, molecular dynamics simulation.

Procedia PDF Downloads 40
495 Reduction of Plutonium Production in Heavy Water Research Reactor: A Feasibility Study through Neutronic Analysis Using MCNPX2.6 and CINDER90 Codes

Authors: H. Shamoradifar, B. Teimuri, P. Parvaresh, S. Mohammadi

Abstract:

One of the main characteristics of heavy water moderated reactors is their high production of plutonium. This article demonstrates the possibility of reducing plutonium and other actinides in a heavy water research reactor. Among the many ways of reducing plutonium production in a heavy water reactor, this research focuses on changing the fuel from natural uranium fuel to thorium-uranium mixed fuel. The main fissile nucleus in thorium-uranium fuels is U-233, which is produced after neutron absorption by Th-232 and subsequent beta decays, so thorium-uranium fuels have some known advantages compared to uranium fuels. Due to this fact, four thorium-uranium fuels with different composition ratios were chosen for our simulations: a) 10% UO2-90% ThO2 (enrichment 20%); b) 15% UO2-85% ThO2 (enrichment 10%); c) 30% UO2-70% ThO2 (enrichment 5%); d) 35% UO2-65% ThO2 (enrichment 3.7%). Natural uranium oxide (UO2) is considered the reference fuel; in other words, all of the calculated data are compared with the corresponding data for the uranium fuel. Neutronic parameters were calculated and used as the comparison parameters. All calculations were performed by a Monte Carlo (MCNPX2.6) steady state reaction rate calculation linked to a deterministic depletion calculation (CINDER90). The obtained computational data showed that thorium-uranium fuels with the four different fissile composition ratios can satisfy the safety and operating requirements of the heavy water research reactor. Furthermore, thorium-uranium fuels have very good proliferation resistance and consume less fissile material than uranium fuels for the same reactor operation time. Using mixed thorium-uranium fuels reduced the long-lived α-emitting, highly radiotoxic wastes and the radiotoxicity level of the spent fuel.

Keywords: heavy water reactor, burn up, minor actinides, neutronic calculation

Procedia PDF Downloads 246
494 Photocatalytic Degradation of Organic Pollutant Reacting with Tungstates: Role of Microstructure and Size Effect on Oxidation Kinetics

Authors: A. Taoufyq, B. Bakiz, A. Benlhachemi, L. Patout, D. V. Chokouadeua, F. Guinneton, G. Nolibe, A. Lyoussi, J-R. Gavarri

Abstract:

Currently, photocatalytic reactions occurring under solar illumination have attracted worldwide attention due to a tremendous set of environmental problems. Taking sunlight into account, it is indispensable to develop highly effective visible-light-driven photocatalysts. Nanostructured materials such as the MxM'1-xWO6 system are widely studied due to their interesting piezoelectric, dielectric and catalytic properties. These materials can be used in photocatalysis for environmental applications, such as wastewater treatment. The aim of this study was to investigate the photocatalytic activity of polycrystalline phases of bismuth tungstate of formula Bi2WO6. Polycrystalline samples were elaborated using a coprecipitation technique followed by a calcination process at different temperatures (300, 400, 600 and 900°C). The obtained polycrystalline phases have been characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), and transmission electron microscopy (TEM). The crystal cell parameters and cell volume depend on the elaboration temperature. High-resolution electron microscopy images and image simulations, associated with X-ray diffraction data, allowed confirmation of the lattice and the space group Pca21. The photocatalytic activity of the as-prepared samples was studied by irradiating aqueous solutions of Rhodamine B containing Bi2WO6 additives having variable crystallite sizes. The photocatalytic activity of these bismuth tungstates increased as the crystallite size decreased. The high specific surface area of the photocatalytic particles obtained at 300°C seems to govern the degradation kinetics of RhB.
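
Crystallite-size effects on degradation kinetics of the kind described above are usually quantified by fitting the Rhodamine B decay to a pseudo-first-order law, ln(C0/C) = k_app·t. A small sketch of that fit with illustrative concentration data (not the measured values):

```python
import numpy as np

# illustrative RhB concentrations (relative to C0) under irradiation, sampled over 2 hours
time_min = np.array([0, 15, 30, 45, 60, 90, 120])
c_over_c0 = np.array([1.00, 0.78, 0.62, 0.49, 0.38, 0.24, 0.15])

# pseudo-first-order fit: ln(C0/C) = k_app * t
y = np.log(1.0 / c_over_c0)
k_app, intercept = np.polyfit(time_min, y, 1)

half_life = np.log(2.0) / k_app
print(f"k_app ~ {k_app:.4f} 1/min, half-life ~ {half_life:.0f} min")
```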

Keywords: Bismuth tungstate, crystallite sizes, electron microscopy, photocatalytic activity, X-ray diffraction.

Procedia PDF Downloads 449
493 Probabilistic Analysis of Bearing Capacity of Isolated Footing using Monte Carlo Simulation

Authors: Sameer Jung Karki, Gokhan Saygili

Abstract:

The allowable bearing capacity of foundation systems is determined by applying a factor of safety to the ultimate bearing capacity. Conventional ultimate bearing capacity calculation routines are based on deterministic input parameters, where the nonuniformity and inhomogeneity of soil and site properties are not accounted for; hence, tools such as probability calculus and statistical analysis are not directly applied in foundation engineering. It is assumed that the factor of safety, typically as high as 3.0, incorporates the uncertainty of the input parameters. This factor of safety is estimated based on subjective judgement rather than objective facts; it is an ambiguous term. Hence, a probabilistic analysis of the bearing capacity of an isolated footing on a clayey soil is carried out using the Monte Carlo simulation method. This simulated model was compared with the traditional discrete model. It was found that the bearing capacity of the soil was higher for the simulated model than for the discrete model, which was verified by a sensitivity analysis. As the number of simulations was increased, there was a significant percentage increase in the bearing capacity compared with the discrete bearing capacity. The bearing capacity values obtained by simulation were found to follow a normal distribution. Using the traditional factor of safety of 3, the allowable bearing capacity had a lower probability (0.03717) of occurring in the field, compared to a higher probability (0.15866) when using the simulation-derived factor of safety of 1.5. This means the traditional factor of safety gives a bearing capacity that is less likely to be available in the field. This shows the subjective nature of the factor of safety, and hence a probabilistic method is suggested to address the variability of the input parameters in bearing capacity equations.
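
A hedged sketch of the Monte Carlo procedure described above, for a footing on clay under the undrained (φ = 0) assumption: the undrained shear strength and unit weight are sampled from assumed distributions, an ultimate bearing capacity is computed for each realization, and the fraction of realizations falling below a deterministic allowable value is estimated for different factors of safety. All parameter values and distributions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)
n_sim = 100_000

Df = 1.5                         # footing embedment depth [m] (illustrative)
Nc = 5.14                        # bearing capacity factor for phi = 0 (undrained clay)

# assumed input distributions: lognormal undrained strength, normal unit weight
cu = rng.lognormal(mean=np.log(60.0), sigma=0.25, size=n_sim)   # [kPa]
gamma = rng.normal(18.0, 0.8, size=n_sim)                       # [kN/m^3]

q_ult = cu * Nc + gamma * Df     # simple ultimate bearing capacity per realization [kPa]
print(f"mean q_ult = {q_ult.mean():.0f} kPa, std = {q_ult.std():.0f} kPa")

# probability that a sampled capacity falls below the deterministic allowable value
q_ult_det = 60.0 * Nc + 18.0 * Df
for fs in (3.0, 1.5):
    p_short = (q_ult < q_ult_det / fs).mean()
    print(f"FS = {fs}: P(q_ult < allowable) = {p_short:.5f}")
```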

Keywords: bearing capacity, factor of safety, isolated footing, Monte Carlo simulation

Procedia PDF Downloads 187
492 Study of Structural Behavior and Proton Conductivity of Inorganic Gel Paste Electrolyte at Various Phosphorous to Silicon Ratio by Multiscale Modelling

Authors: P. Haldar, P. Ghosh, S. Ghoshdastidar, K. Kargupta

Abstract:

In polymer electrolyte membrane fuel cells (PEMFC), the membrane electrode assembly (MEA) consists of two platinum-coated carbon electrodes sandwiching one proton-conducting, phosphoric acid doped polymeric membrane. Due to low mechanical stability, flooding and fuel crossover, the application of phosphoric acid in a polymeric membrane is very critical. Phosphorus and silica based 3D inorganic gels have gained attention in the field of supercapacitors, fuel cells and metal hydride batteries due to their thermally stable, highly proton-conductive behavior. Also, since a large amount of water molecules and phosphoric acid can easily get trapped in the Si-O-Si network cavities, leaching out is prevented. In this study, we have performed molecular dynamics (MD) simulations and first-principles calculations to understand the structural, electronic, electrochemical and morphological behavior of this inorganic gel at various P to Si ratios. We have considered dipole-dipole interactions, H bonding, and van der Waals forces to study the main interactions between the molecules. A 'structure-property-performance' mapping is initiated to determine the optimum P to Si ratio for the best proton conductivity. We have performed the MD simulations at various temperatures to understand the temperature dependence of the proton conductivity. The results are used to propose a model which fits well with experimental data and other literature values. We have also studied the mechanism behind the proton conductivity. Finally, we propose a structure for the gel paste with the optimum P to Si ratio.
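
In MD studies of this kind, the proton conductivity is often estimated from the proton (or hydronium) diffusion coefficient obtained from the mean-squared displacement, combined with the Nernst-Einstein relation σ = N q² D / (V k_B T). A minimal sketch with a synthetic random-walk trajectory standing in for the actual gel simulation (all trajectory and box parameters are assumptions):

```python
import numpy as np

kB = 1.380649e-23          # J/K
e = 1.602176634e-19        # C

rng = np.random.default_rng(2)
n_protons, n_steps, dt = 50, 20000, 1e-15      # 50 carriers, 20000 steps of 1 fs

# synthetic 3D random walk standing in for proton trajectories (positions in metres)
steps = rng.normal(0, 0.02e-10, size=(n_steps, n_protons, 3))
traj = np.cumsum(steps, axis=0)

# mean-squared displacement and Einstein relation: MSD(t) ~ 6 D t
times = np.arange(1, n_steps + 1) * dt
msd = ((traj - traj[0]) ** 2).sum(axis=2).mean(axis=1)
D = np.polyfit(times[n_steps // 2:], msd[n_steps // 2:], 1)[0] / 6.0

# Nernst-Einstein conductivity for the carriers in an assumed simulation box
T = 350.0                  # K
V = (4e-9) ** 3            # box volume [m^3]
sigma = n_protons * e ** 2 * D / (V * kB * T)
print(f"D ~ {D:.2e} m^2/s, sigma ~ {sigma:.2e} S/m")
```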

Keywords: first principle calculation, molecular dynamics simulation, phosphorous and silica based 3D inorganic gel, polymer electrolyte membrane fuel cells, proton conductivity

Procedia PDF Downloads 129
491 Power Energy Management for a Grid-Connected PV System Using Rule-Based Fuzzy Logic

Authors: Nousheen Hashmi, Shoab Ahmad Khan

Abstract:

The active interplay between green energy sources and the load demand leads to serious issues related to power quality and stability. The growing number of green energy resources and distributed generators requires newer operational strategies to be incorporated in order to maintain power stability among the green energy resources and the micro-grid/utility grid. This paper presents a novel technique for power management in a grid-connected photovoltaic system with energy storage under a set of constraints, including weather conditions, load shedding hours and peak pricing hours, by using a rule-based fuzzy smart grid controller to schedule the power coming from multiple power sources (photovoltaic, grid, battery). The technique fuzzifies all the inputs and establishes a fuzzy rule set mapping them to fuzzy outputs before defuzzification. Simulations are run for a 24-hour period, and a rule-based power scheduler is developed. The proposed fuzzy control strategy is able to sense the continuous fluctuations in photovoltaic power generation, load demand, grid availability (load shedding patterns) and battery state of charge in order to make correct and quick decisions. The suggested fuzzy rule-based scheduler can operate well with vague inputs, thus does not require any exact numerical model, and can handle nonlinearity. This technique provides a framework that can be extended to handle multiple special cases for optimized working of the system.
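
A hedged, hand-rolled sketch of the rule-based fuzzy idea described above: triangular membership functions fuzzify two inputs (PV surplus and battery state of charge), a small rule set maps them to a dispatch action, and weighted-average defuzzification yields a crisp decision. The breakpoints and rules are illustrative, not the authors' actual rule base:

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def schedule(pv_surplus_kw, soc_pct):
    # fuzzify inputs (illustrative breakpoints)
    surplus = {"deficit": trimf(pv_surplus_kw, -5, -5, 0),
               "excess": trimf(pv_surplus_kw, 0, 5, 5)}
    soc = {"low": trimf(soc_pct, 0, 0, 50),
           "high": trimf(soc_pct, 40, 100, 100)}

    # rule base: rule strength (min of antecedents) -> crisp action value
    # action: -1 = draw from grid, 0 = discharge battery / idle, +1 = charge battery / export
    rules = [
        (min(surplus["excess"], soc["low"]), +1.0),    # surplus and empty battery: charge
        (min(surplus["excess"], soc["high"]), +0.5),   # surplus and full battery: export
        (min(surplus["deficit"], soc["high"]), 0.0),   # deficit but battery full: discharge
        (min(surplus["deficit"], soc["low"]), -1.0),   # deficit and battery low: draw from grid
    ]
    total = sum(strength for strength, _ in rules) + 1e-12
    return sum(strength * action for strength, action in rules) / total

for pv, soc_val in [(3.0, 20.0), (3.0, 90.0), (-3.0, 80.0), (-3.0, 15.0)]:
    print(f"PV surplus {pv:+.1f} kW, SoC {soc_val:.0f}% -> action {schedule(pv, soc_val):+.2f}")
```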

Keywords: photovoltaic, power, fuzzy logic, distributed generators, state of charge, load shedding, membership functions

Procedia PDF Downloads 480
490 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models

Authors: Azadeh Jafari, Robert G. Owens

Abstract:

In this study, a geometrical multiscale approach, which couples the 2-D Navier-Stokes equations and constitutive equations with 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. In this study, we introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of our new scheme, the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure are compared with those obtained using coupling with the lumped parameter model. Comprehensive studies have been done on the sensitivity of the numerical scheme to the initial conditions, elasticity and number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluids problems whose application goes significantly beyond the one addressed in this work.
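
A 0-D lumped parameter component of the kind referred to above is often a Windkessel-type compartment whose pressure serves as the outflow boundary condition for the detailed flow solver. A minimal sketch of a two-element Windkessel, C dP/dt = Q(t) − P/R, driven by a prescribed flow waveform (the resistance, compliance and waveform values are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 1.0e8        # peripheral resistance [Pa*s/m^3] (illustrative)
C = 1.0e-8       # compliance [m^3/Pa] (illustrative)
T_cycle = 0.8    # cardiac period [s]

def q_in(t):
    """Prescribed inflow: a half-sine ejection during the first 0.3 s of each cycle."""
    tc = t % T_cycle
    return 4e-4 * np.sin(np.pi * tc / 0.3) if tc < 0.3 else 0.0

def windkessel(t, p):
    return [(q_in(t) - p[0] / R) / C]

t_eval = np.arange(0.0, 5 * T_cycle, 1e-3)
sol = solve_ivp(windkessel, (0.0, 5 * T_cycle), [10000.0], t_eval=t_eval, max_step=1e-3)

p_mmHg = sol.y[0] / 133.322
last_cycle = p_mmHg[-800:]            # final 0.8 s
print(f"pressure range over the last cycle: {last_cycle.min():.0f}-{last_cycle.max():.0f} mmHg")
```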

Keywords: geometrical multiscale models, haemorheology model, coupled 2-D navier-stokes 0-D lumped parameter modeling, computational fluid dynamics

Procedia PDF Downloads 361
489 Patient-Specific Design Optimization of Cardiovascular Grafts

Authors: Pegah Ebrahimi, Farshad Oveissi, Iman Manavi-Tehrani, Sina Naficy, David F. Fletcher, Fariba Dehghani, David S. Winlaw

Abstract:

Despite advances in modern surgery, congenital heart disease remains a medical challenge and a major cause of infant mortality. Cardiovascular prostheses are routinely used in surgical procedures to address congenital malformations, for example establishing a pathway from the right ventricle to the pulmonary arteries in pulmonary valvar atresia. Current off-the-shelf options, including human and adult products, have limited biocompatibility and durability, and their fixed size necessitates multiple subsequent operations to upsize the conduit to match the patient's growth over their lifetime. Non-physiological blood flow is another major problem, reducing the longevity of these prostheses. These limitations call for better designs that take into account the hemodynamic and anatomical characteristics of different patients. We have integrated tissue engineering techniques with modern medical imaging and image processing tools, along with mathematical modeling, to optimize the design of cardiovascular grafts in a patient-specific manner. Computational Fluid Dynamics (CFD) analysis is done according to models constructed from each individual patient's data. This allows for improved geometrical design and better hemodynamic performance. Tissue engineering strives to provide a material that grows with the patient and mimics the durability and elasticity of the native tissue. Simulations also give insight into the performance of the tissues produced in our lab and reduce the need for costly and time-consuming methods of evaluating the grafts. We are also developing a methodology for the fabrication of the optimized designs.

Keywords: computational fluid dynamics, cardiovascular grafts, design optimization, tissue engineering

Procedia PDF Downloads 243
488 Hands-off Parking: Deep Learning Gesture-based System for Individuals with Mobility Needs

Authors: Javier Romera, Alberto Justo, Ignacio Fidalgo, Joshue Perez, Javier Araluce

Abstract:

Nowadays, individuals with mobility needs face a significant challenge when parking vehicles. In many cases, after parking, they encounter insufficient space to exit, leading to two undesired outcomes: either avoiding parking in that spot or settling for an improperly placed vehicle. To address this issue, the following paper presents a parking control system employing gestural teleoperation. The system comprises three main phases: capturing body markers, interpreting gestures, and transmitting orders to the vehicle. The initial phase is centered around the MediaPipe framework, a versatile tool optimized for real-time gesture recognition. MediaPipe excels at detecting and tracing body markers, with a special emphasis on hand gestures. Hand detection is done by generating 21 reference points for each hand. Subsequently, after data capture, the project employs a multilayer perceptron (MLP) for in-depth gesture classification. This tandem of MediaPipe's extraction prowess and the MLP's analytical capability ensures that human gestures are translated into actionable commands with high precision. Furthermore, the system has been trained and validated on a purpose-built dataset. To demonstrate domain adaptation, a framework based on the Robot Operating System (ROS) as a communication backbone, alongside the CARLA simulator, is used. Following successful simulations, the system is transitioned to a real-world platform, marking a significant milestone in the project. This real vehicle implementation verifies the practicality and efficiency of the system beyond theoretical constructs.
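
A hedged sketch of the first phase described above: MediaPipe's hand-landmark solution extracts 21 (x, y, z) reference points per detected hand from a camera frame, which can then be flattened into the feature vector fed to the gesture classifier. The webcam index, confidence threshold and loop structure are assumptions, not the authors' implementation:

```python
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)                       # assumed webcam index
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.6) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        results = hands.process(rgb)
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            # 21 landmarks x (x, y, z) -> 63-element feature vector for the classifier
            features = np.array([[p.x, p.y, p.z] for p in lm]).flatten()
            print(features.shape)               # (63,)
        if cv2.waitKey(1) & 0xFF == 27:         # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```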

Keywords: gesture detection, MediaPipe, multilayer perceptron, robot operating system

Procedia PDF Downloads 100
487 CFD Study of Subcooled Boiling Flow at Elevated Pressure Using a Mechanistic Wall Heat Partitioning Model

Authors: Machimontorn Promtong, Sherman C. P. Cheung, Guan H. Yeoh, Sara Vahaji, Jiyuan Tu

Abstract:

The wide range of industrial applications involving boiling flows promotes the necessity of establishing fundamental knowledge of boiling flow phenomena. For this purpose, a number of experimental and numerical research studies have been performed to elucidate the underlying physics of this flow. In this paper, improved wall boiling models, implemented in ANSYS CFX 14.5, were introduced to study subcooled boiling flow at elevated pressure. At the heated wall boundary, a fractal model, a force balance approach and a mechanistic frequency model are used for predicting the nucleation site density, bubble departure diameter, and bubble departure frequency, respectively. The presented wall heat flux partitioning closures were modified to consider the influence of bubble sliding along the wall before lift-off, which usually happens in flow boiling. The simulation was performed based on the two-fluid model, where the k-ω SST model was selected for turbulence modelling. Existing experimental data at around 5 bar were chosen to evaluate the accuracy of the presented mechanistic approach. The void fraction and Interfacial Area Concentration (IAC) are in good agreement with the experimental data. However, the predicted bubble velocity and Sauter Mean Diameter (SMD) are over-predicted. This over-prediction may be caused by the consideration of only dispersed and spherical bubbles in the simulations. In future work, the important physical mechanisms of bubbles, such as merging and shrinking during sliding on the heated wall, will be incorporated into this mechanistic model to enhance its capability for a wider range of flow prediction.
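
For orientation, wall heat flux partitioning of the kind mentioned above follows an RPI-type decomposition into single-phase convection, quenching and evaporation components. A simplified sketch of that bookkeeping is given below, without the sliding-bubble modification and with illustrative closure values for the nucleation site density, departure diameter and frequency (not the paper's mechanistic predictions):

```python
import numpy as np

# illustrative local conditions and closure outputs
T_wall, T_liq = 440.0, 420.0              # wall and liquid temperatures [K]
rho_g, h_fg = 2.7, 2.1e6                  # vapour density [kg/m^3], latent heat [J/kg]
rho_l, cp_l, k_l = 900.0, 4500.0, 0.65    # liquid density, heat capacity, conductivity
h_conv = 8000.0                           # single-phase heat transfer coefficient [W/m^2K]
N_a = 5.0e5                               # active nucleation site density [1/m^2]
d_bw = 0.6e-3                             # bubble departure diameter [m]
f = 80.0                                  # bubble departure frequency [1/s]

dT = T_wall - T_liq
t_wait = 0.8 / f                          # assumed bubble waiting time

# area fraction influenced by departing bubbles (capped at 1)
A_b = min(1.0, 2.0 * N_a * np.pi * d_bw ** 2 / 4.0)

q_evap = N_a * f * (np.pi / 6.0) * d_bw ** 3 * rho_g * h_fg                  # evaporation
q_quench = A_b * 2.0 * f * np.sqrt(t_wait * k_l * rho_l * cp_l / np.pi) * dT  # quenching
q_conv = (1.0 - A_b) * h_conv * dT                                            # convection

q_total = q_conv + q_quench + q_evap
print(f"q_conv = {q_conv:.0f}, q_quench = {q_quench:.0f}, q_evap = {q_evap:.0f} W/m^2")
print(f"total wall heat flux ~ {q_total / 1000:.1f} kW/m^2")
```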

Keywords: subcooled boiling flow, computational fluid dynamics (CFD), mechanistic approach, two-fluid model

Procedia PDF Downloads 318