Search results for: computational neuroscience
188 Discourse Analysis: Where Cognition Meets Communication
Authors: Iryna Biskub
Abstract:
The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes complicates the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed productive and efficient methods for investigating the cognitive and communicative phenomena of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA, the position of the scholar is crucial, as it exemplifies his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a special, well-defined scientific method for each type of language phenomenon: cognitive methods are applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for the investigation of the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it makes little sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it has demonstrated that the cognitive and communicative levels can be mapped in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition meets communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on rules of strategic planning involving the cognitive models of CDA.
Keywords: cognition, communication, discourse, strategy
Procedia PDF Downloads 254
187 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor
Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro
Abstract:
Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles, and other areas. In such control systems, control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning, and that ultimately improve the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: PID position form (1 DOF) and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method that allows a compromise between the theoretical optimal control and the realization that most closely matches it. The performance of the compared control systems is evaluated through simulations in the Simulink platform, in which each of the system's hardware components is modeled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection.
In this investigation, accurate tracking of the reference signal is considered particularly important because of the frequency and suddenness with which the control signal can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected while reference tracking is maintained. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, given the advantages that the state-space formulation provides for modeling MIMO systems, such controllers are expected to be easy to tune for disturbance rejection, assuming that their designer is experienced. An in-depth, multi-dimensional analysis of preliminary research results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications.
Keywords: control, DC motor, discrete PID, discrete state feedback
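The position-form (1 DOF) controller compared in this abstract can be sketched in a few lines. This is a hedged illustration, not the authors' Simulink implementation: the gains, sample time, and the first-order motor model below are placeholder assumptions.

```python
# Minimal position-form discrete PID sketch (illustrative gains/plant, not
# the study's tuned values): u[k] = Kp*e[k] + Ki*T*sum(e) + Kd*(e[k]-e[k-1])/T
class DiscretePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt              # rectangular integration
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a crude first-order DC-motor model toward a 1.0 rad step reference.
pid = DiscretePID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
position, velocity = 0.0, 0.0
for _ in range(3000):                         # 30 s of simulated time
    u = pid.update(1.0, position)
    velocity += (u - 0.5 * velocity) * 0.01   # placeholder motor dynamics
    position += velocity * 0.01
```

A 2 DOF (speed-form) arrangement would additionally weight the setpoint separately in the proportional and derivative terms, which is what gives it the disturbance-rejection advantage the abstract reports.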
Procedia PDF Downloads 268
186 Robust Processing of Antenna Array Signals under Local Scattering Environments
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with a massive number of antenna sensors. To alleviate this difficulty, a partially adaptive GSC with fewer adaptive degrees of freedom and a faster adaptive response has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems arising in local scattering situations. In this paper, we present an effective GSC-based beamformer that counters the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required to obtain an appropriate steering vector. A matrix associated with the direction vectors of the signal sources is first created.
Then, projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms the existing robust techniques.
Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch
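As a hedged, minimal illustration of the two GSC ingredients named above, the quiescent weight vector and the signal-blocking matrix, the sketch below builds both for a 4-element uniform linear array with half-wavelength spacing. The array size, DOA, and blocking-matrix construction are invented for the demo; this is not the paper's estimation algorithm.

```python
# GSC building blocks for a 4-element ULA (illustrative, not the paper's code).
import cmath
import math

N = 4
theta = math.radians(20.0)               # presumed DOA of the desired signal
phi = math.pi * math.sin(theta)          # inter-element phase shift for d = lambda/2

# Presumed steering vector a(theta)
a = [cmath.exp(1j * m * phi) for m in range(N)]

# Quiescent weight vector: unit (distortionless) response toward theta
w_q = [x / N for x in a]

# Blocking matrix B: each row differences phase-aligned adjacent elements,
# so B @ a = 0 and the desired signal is removed from the adaptive branch.
B = []
for m in range(N - 1):
    row = [0j] * N
    row[m] = 1.0 + 0j
    row[m + 1] = -cmath.exp(-1j * phi)
    B.append(row)

# Verify the blocking property: each row of B annihilates a(theta)...
residual = max(abs(sum(r[k] * a[k] for k in range(N))) for r in B)
# ...while the quiescent branch passes the desired signal with unit gain.
gain = sum(w.conjugate() * x for w, x in zip(w_q, a))
```

A steering mismatch (using the wrong theta in `a`) breaks the blocking property, which is exactly the SOI-leakage mechanism the abstract's robust estimation step is designed to correct.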
Procedia PDF Downloads 113
185 The Impact of AI on Consumers’ Morality: An Empirical Evidence
Authors: Mingxia Zhu, Matthew Tingchi Liu
Abstract:
AI is gradually growing in the market thanks to its efficiency and accuracy, influencing people’s perceptions, attitudes, and even consequential behaviors. The current study extends prior research by focusing on AI’s impact on consumers’ morality. To identify the effect of AI on consumers’ morality, two studies were employed: study 1 is a within-subjects survey, while study 2 is an experimental study. Study 1 tested individuals’ beliefs about the moral perception of AI and humans, and people’s attribution of moral worth to AI and humans. Moral perception refers to the computational system an entity maintains to detect and identify moral violations, while moral worth here denotes whether individuals regard an entity as worthy of moral treatment. In study 1, one hundred and forty participants were recruited through an online survey company in China (M_age = 27.31 years, SD = 7.12 years; 65% female). The participants were asked to assign moral perception and moral worth to AI and humans. A paired-samples t-test reveals that people generally regard humans as having higher moral perception (M_Human = 6.03, SD = .86) than AI (M_AI = 2.79, SD = 1.19; t(139) = 27.07, p < .001; Cohen’s d = 1.41). In addition, another paired-samples t-test showed that people attributed higher moral worth to human personnel (M_Human = 6.39, SD = .56) than to AIs (M_AI = 5.43, SD = .85; t(139) = 12.96, p < .001; d = .88). In study 2, two hundred valid samples were recruited from a survey company in China (M_age = 27.87 years, SD = 6.68 years; 55% female), and the participants were randomly assigned to two conditions (AI vs. human). After viewing the stimuli of a human versus an AI inspector, participants were informed that an insurance company would determine the price purely based on their declaration. Their open-ended answers were then coded into ethical, honest behavior and unethical, dishonest behavior following the design of prior literature.
A chi-square analysis revealed that 64% of the participants would immorally lie to the AI insurance inspector, while 42% of participants reported deliberately understating mileage to the human inspector (χ^2 (1) = 9.71, p = .002). Similarly, the logistic regression results suggested that people were significantly more likely to report a fraudulent answer when facing an AI (β = .89, odds ratio = 2.45, Wald = 9.56, p = .002). It is demonstrated that people are more likely to behave unethically in front of non-human agents, such as an AI agent, than in front of humans. The research findings shed light on new practical ethical issues in human-AI interaction and underline the important role of human employees in service delivery in the new era of AI.
Keywords: AI agent, consumer morality, ethical behavior, human-AI interaction
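The reported statistics can be reproduced from the percentages alone if one assumes, as the abstract implies but does not state, 100 participants per condition. The counts below are that assumption, not data from the paper.

```python
# Reproducing the abstract's test statistics, assuming 100 participants
# per condition (an illustration; the split is not given in the abstract).
import math

ai_lie, ai_honest = 64, 36          # 64% lied to the AI inspector
human_lie, human_honest = 42, 58    # 42% lied to the human inspector

table = [[ai_lie, ai_honest], [human_lie, human_honest]]
n = 200
row = [sum(r) for r in table]
col = [ai_lie + human_lie, ai_honest + human_honest]

# Pearson chi-square without continuity correction
chi2 = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))

# Odds ratio and its log (the logistic-regression coefficient beta)
odds_ratio = (ai_lie * human_honest) / (ai_honest * human_lie)
beta = math.log(odds_ratio)
```

Under this assumption, `chi2` comes out at about 9.71 and `odds_ratio` at about 2.46, matching the reported χ²(1) = 9.71 and odds ratio ≈ 2.45.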
Procedia PDF Downloads 84
184 Identification of Potent and Selective SIRT7 Anti-Cancer Inhibitor via Structure-Based Virtual Screening and Molecular Dynamics Simulation
Authors: Md. Fazlul Karim, Ashik Sharfaraz, Aysha Ferdoushi
Abstract:
Background: Computational medicinal chemistry approaches are used for designing and identifying new drug-like molecules, predicting properties and pharmacological activities, and optimizing lead compounds in drug development. SIRT7, a nicotinamide adenine dinucleotide (NAD+)-dependent deacylase that regulates aging, is an emerging target for cancer therapy, with mounting evidence that SIRT7 downregulation plays an important role in reversing cancer phenotypes and suppressing tumor growth. Activation or altered expression of SIRT7 is associated with the progression and invasion of various cancers, including liver, breast, gastric, prostate, and non-small cell lung cancer. Objectives: The goal of this work was to identify potent and selective bioactive candidate inhibitors of SIRT7 by in silico screening of small-molecule compounds obtained from Nigella sativa (N. sativa). Methods: The SIRT7 structure was retrieved from the Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB), and its active site was identified using CASTp and metaPocket. Molecular docking simulation was performed with the PyRx 0.8 virtual screening software. Drug-likeness properties were tested using SwissADME and pkCSM. In silico toxicity was evaluated by Osiris Property Explorer. Bioactivity was predicted by Molinspiration software. Antitumor activity was screened by Prediction of Activity Spectra for Substances (PASS) using the Way2Drug web server. Molecular dynamics (MD) simulation was carried out with the Desmond v3.6 package. Results: A total of 159 bioactive compounds from N. sativa were screened against the SIRT7 enzyme. Five bioactive compounds, chrysin (CID:5281607), pinocembrin (CID:68071), nigellidine (CID:136828302), nigellicine (CID:11402337), and epicatechin (CID:72276), were identified as potent SIRT7 anti-cancer candidates after docking score evaluation and application of Lipinski's Rule of Five.
Finally, MD simulation identified chrysin as the top SIRT7 anti-cancer candidate molecule. Conclusion: Chrysin, which shows a potential inhibitory effect against SIRT7, can act as a possible anti-cancer drug candidate. This inhibitor warrants further evaluation of its pharmacokinetic and pharmacodynamic properties both in vitro and in vivo.
Keywords: SIRT7, antitumor, molecular docking, molecular dynamics simulation
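The Rule of Five filter applied in the screening step above is straightforward to sketch. The chrysin property values below are approximate public reference figures, not numbers taken from the paper.

```python
# Illustrative Lipinski's Rule of Five screen (not the paper's pipeline).
def passes_lipinski(mol_weight, logp, h_donors, h_acceptors):
    """Rule of Five: MW <= 500 Da, logP <= 5, H-bond donors <= 5,
    H-bond acceptors <= 10; classically at most one violation is allowed."""
    violations = sum([
        mol_weight > 500,
        logp > 5,
        h_donors > 5,
        h_acceptors > 10,
    ])
    return violations <= 1

# Chrysin (CID 5281607): MW ~254.24 Da, logP ~3.0, 2 donors, 4 acceptors
print(passes_lipinski(254.24, 3.0, 2, 4))   # -> True
```

In practice, tools such as SwissADME (used in the abstract) report these descriptors directly, so a filter like this only automates the final pass/fail decision.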
Procedia PDF Downloads 80
183 Numerical Erosion Investigation of Standalone Screen (Wire-Wrapped) Due to the Impact of Sand Particles Entrained in a Single-Phase Flow (Water Flow)
Authors: Ahmed Alghurabi, Mysara Mohyaldinn, Shiferaw Jufar, Obai Younis, Abdullah Abduljabbar
Abstract:
Erosion modeling equations are typically acquired from controlled experimental trials with solid particles entrained in single-phase or multi-phase flows. These equations are later employed to predict the erosion damage caused by the continuous impacts of solid particles entrained in a stream flow. It is also well known that the particle impact angle and velocity do not change drastically in gas-sand flows, and hence an accurate erosion prediction can be projected. On the contrary, high-density fluid flows, such as water flow, through complex geometries, such as sand screens, greatly affect the sand particles’ trajectories and consequently the erosion rate predictions. Particle tracking models and erosion equations are therefore frequently applied simultaneously to improve erosion visualization and estimation. In the present work, computational fluid dynamics (CFD)-based erosion modeling was performed using commercially available software, ANSYS Fluent. The continuous phase (water flow) was simulated using the realizable k-epsilon model, and the secondary phase (solid particles), at a 5% flow concentration, was tracked with the discrete phase model (DPM). To accomplish successful erosion modeling, three erosion equations from the literature were introduced into ANSYS Fluent to predict the velocity surge at the screen wire slots and estimate the maximum erosion rates on the screen surface. Results for turbulent kinetic energy, turbulence intensity, dissipation rate, total pressure on the screen, screen wall shear stress, and flow velocity vectors are presented and discussed. Moreover, the particle tracks and pathlines are demonstrated based on their residence time, velocity magnitude, and flow turbulence. On one hand, results from the utilized erosion equations show similarities in screen erosion patterns, locations, and DPM concentrations.
On the other hand, the model equations estimate slightly different values of the maximum erosion rate of the wire-wrapped screen. This is because the utilized erosion equations were developed under assumptions controlled by the respective experimental lab conditions.
Keywords: CFD simulation, erosion rate prediction, material loss due to erosion, water-sand flow
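The three literature erosion equations are not reproduced in the abstract; the sketch below only illustrates the generic velocity-power form ER = C · f(θ) · V^n that such correlations commonly share. The constant C, exponent n, and the sin(θ) angle function are placeholders, not the authors' models.

```python
# Generic erosion-correlation sketch (placeholder constants, not the three
# literature models used in the study): ER = C * f(theta) * V**n per unit
# mass of impacting sand, with f(theta) = sin(theta) as a stand-in angle
# function.
import math

def erosion_rate(velocity, impact_angle, c=2.0e-9, n=2.6):
    return c * math.sin(impact_angle) * velocity ** n

# A velocity surge at the wire slots amplifies erosion strongly: doubling
# the impact velocity multiplies the predicted rate by 2**n.
low = erosion_rate(5.0, math.radians(45.0))
high = erosion_rate(10.0, math.radians(45.0))
ratio = high / low
```

The strong velocity dependence (n typically between 2 and 3) is why the wire-slot velocity surge mentioned above dominates the predicted maximum erosion rate.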
Procedia PDF Downloads 163
182 Simulation of Wet Scrubbers for Flue Gas Desulfurization
Authors: Anders Schou Simonsen, Kim Sorensen, Thomas Condra
Abstract:
Wet scrubbers are used for flue gas desulfurization by injecting water directly into the flue gas stream from a set of sprayers. The water droplets flow freely inside the scrubber and flow down along the scrubber walls as a thin wall film while reacting with the gas phase to remove SO₂. This complex multiphase phenomenon can be divided into three main contributions: the continuous gas phase, the liquid droplet phase, and the liquid wall film phase. This study proposes a complete model in which all three main contributions are taken into account and resolved using OpenFOAM for the continuous gas phase and MATLAB for the liquid droplet and wall film phases. The 3D continuous gas phase is composed of five species: CO₂, H₂O, O₂, SO₂, and N₂, which are resolved along with momentum, energy, and turbulence. Source terms are present for four species, energy, and momentum, and these affect the steady-state solution. The liquid droplet phase experiences breakup, collisions, dynamics, internal chemistry, evaporation and condensation, species mass transfer, energy transfer, and wall film interactions. Numerous sub-models have been implemented and coupled to realize the above-mentioned phenomena. The liquid wall film experiences impingement, acceleration, atomization, separation, internal chemistry, evaporation and condensation, species mass transfer, and energy transfer, which have likewise been resolved using numerous sub-models. The continuous gas phase has been coupled with the liquid phases through source terms by an approach in which the two software packages are coupled via a link structure. The complete CFD model has been verified using 16 experimental tests from an existing scrubber installation, where a gradient-based pattern-search optimization algorithm was used to tune numerous model parameters to match the experimental results.
The CFD model needed to be fast to evaluate in order to apply this optimization routine, as approximately 1000 simulations were required. The results show that the complex multiphase phenomena governing wet scrubbers can be resolved in a single model. The optimization routine was able to tune the model to accurately predict the performance of an existing installation. Furthermore, the study shows that a coupling between OpenFOAM and MATLAB is realizable, with the data and source-term exchange increasing the computational requirements by approximately 5%. This allows the benefits of both software packages to be exploited.
Keywords: desulfurization, discrete phase, scrubber, wall film
Procedia PDF Downloads 268
181 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs
Authors: M. De Filippo, J. S. Kuang
Abstract:
In the construction industry, reinforced concrete (RC) slabs are fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early years of the last century, the yield-line method was proposed as an attempt to solve this problem. Problems with simple geometry could easily be solved by traditional hand analyses based on plasticity theories. Nowadays, advanced finite element (FE) analyses have found their way into applications in many engineering fields due to the wide range of geometries to which they can be applied. In such cases, the choice between an elastic and a plastic constitutive model completely changes the approach of the analysis itself. Elastic methods are popular due to their easy applicability in automated computations. However, elastic analyses are limited, since they do not consider any aspect of material behaviour beyond the yield limit, which turns out to be an essential aspect of RC structural performance. Non-linear plastic analyses, in turn, model plastic behaviour and give very reliable results; per contra, this type of analysis is computationally quite expensive, i.e. not well suited to solving daily engineering problems. In past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper proposes a numerical procedure through which a pseudo-lower bound solution, not violating the yield criterion, is achieved. The advantages of moment redistribution are taken into account; hence, the increase in strength provided by plastic behaviour is considered. The lower bound solution is improved by detecting over-yielded moments, which are used to artificially redistribute moments among the remaining non-yielded elements. The proposed technique obeys Nielsen’s yield criterion.
The outcome of this analysis provides a simple, accurate, and fast tool for predicting the lower-bound solution of the collapse load of RC slabs. Using this method, structural engineers can find the fracture patterns and the ultimate load-bearing capacity. The collapse-triggering mechanism is found by detecting yield lines. An application to the simple case of a square clamped slab is shown, and a good match was found with the exact values of the collapse load.
Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line
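The over-yield detection step can be illustrated with the commonly cited form of Nielsen's yield criterion for an orthogonally reinforced slab. The resistances and moment triads below are invented for the demo, and the exact form of the criterion used by the authors may differ.

```python
# Hedged sketch of a Nielsen yield check for a slab element with sagging
# resistances (mpx, mpy) and hogging resistances (mpx_n, mpy_n), all in
# kNm/m; the numbers in the example are illustrative.
def satisfies_nielsen(mx, my, mxy, mpx, mpy, mpx_n, mpy_n):
    """True if the moment triad (mx, my, mxy) lies inside the yield surface:
    mxy**2 <= (mpx - mx)*(mpy - my)       (sagging yield)
    mxy**2 <= (mpx_n + mx)*(mpy_n + my)   (hogging yield)."""
    sagging = mx <= mpx and my <= mpy and (mpx - mx) * (mpy - my) >= mxy ** 2
    hogging = mx >= -mpx_n and my >= -mpy_n and (mpx_n + mx) * (mpy_n + my) >= mxy ** 2
    return sagging and hogging

# A moment state safely inside the surface...
inside = satisfies_nielsen(10, 8, 3, mpx=20, mpy=20, mpx_n=20, mpy_n=20)
# ...and one pushed over the surface by the twisting moment.
outside = satisfies_nielsen(18, 18, 5, mpx=20, mpy=20, mpx_n=20, mpy_n=20)
```

In the procedure described above, elements for which such a check fails would be flagged as over-yielded, and their excess moments redistributed to the non-yielded elements.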
Procedia PDF Downloads 179
180 Application of Thermoplastic Microbioreactor to the Single Cell Study of Budding Yeast to Decipher the Effect of 5-Hydroxymethylfurfural on Growth
Authors: Elif Gencturk, Ekin Yurdakul, Ahmet Y. Celik, Senol Mutlu, Kutlu O. Ulgen
Abstract:
Yeast cells are generally used as a model system for eukaryotes due to their complex genetic structure, rapid growth under optimum conditions, easy replication, and well-defined genetic system. Studies in yeast have thus increased our knowledge of the principal pathways in humans. During fermentation, carbohydrates (hexoses and pentoses) degrade into toxic by-products such as 5-hydroxymethylfurfural (5-HMF or HMF) and furfural. HMF influences the ethanol yield and ethanol productivity; it interferes with microbial growth and is considered a potent inhibitor of bioethanol production. In this study, the single-cell behavior of yeast under HMF application was monitored using a continuous-flow, single-phase microfluidic platform. The microfluidic device in operation was fabricated by hot embossing and thermo-compression techniques from cyclo-olefin polymer (COP). COP is a biocompatible, transparent, and rigid material, and its low auto-fluorescence makes it suitable for observing the fluorescence of cells. The response of the yeast cells was recorded through the Red Fluorescent Protein (RFP)-tagged Nop56 gene product, an essential, evolutionarily conserved nucleolar protein and a member of the box C/D snoRNP complexes. Upon application of HMF, yeast cell proliferation initially continued but at a slower growth rate, and with continued HMF treatment, proliferation stopped. Upon addition of fresh nutrient medium, the yeast cells recovered after 6 hours of HMF exposure. Thus, HMF application suppresses the normal functioning of the cell cycle, but it does not cause the cells to die. Monitoring the Nop56 expression phases of individual cells shed light on the protein and ribosome synthesis cycles and their link to growth.
A further computational study revealed that the mechanisms underlying the inhibitory or inductive effects of HMF on growth are enriched in the functional categories of protein degradation, protein processing, DNA repair, and multidrug resistance. The present microfluidic device can successfully be used for studying the effects of inhibitory agents on growth by single-cell tracking, thus capturing cell-to-cell variations. By metabolic engineering techniques, engineered strains can be developed, and the metabolic network of the microorganism can be manipulated such that chemical overproduction of a target metabolite is achieved along with the maximum growth/biomass yield.
Keywords: COP, HMF, ribosome biogenesis, thermoplastic microbioreactor, yeast
Procedia PDF Downloads 171
179 PbLi Activation Due to Corrosion Products in WCLL BB (EU-DEMO) and Its Impact on Reactor Design and Recycling
Authors: Nicole Virgili, Marco Utili
Abstract:
The design of the breeding blanket (BB) in Tokamak fusion energy systems has to guarantee sufficient availability in addition to its functions, namely tritium breeding self-sufficiency, power extraction, and shielding (of the magnets and the vacuum vessel). All these functions must be fulfilled under extremely harsh operating conditions in terms of heat flux and neutron dose, as well as the chemical environment of the coolant and breeder, which challenge the structural materials (structural resistance and corrosion resistance). The movement and activation of fluids from the BB to the ex-vessel components in a fusion power plant are an important radiological consideration because flowing material can carry radioactivity to safety-critical areas. This includes gamma-ray emission from the activated fluid and activated corrosion products, and secondary activation resulting from neutron emission, with implications for the safety of maintenance personnel and damage to electrical and electronic equipment. In addition to the PbLi breeder activation, it is important to evaluate the contribution of the activated corrosion products (ACPs) dissolved in the lead-lithium eutectic alloy at different concentration levels. Therefore, the purpose of this study is to evaluate the PbLi activity utilizing the FISPACT-II inventory code. Emphasis is given to how the design of the EU-DEMO WCLL, and the potential recycling of the breeder material, will be impacted by the activation of PbLi and the associated ACPs. For this scope, the following computational tools, data, and geometry have been considered:
• Neutron source: EU-DEMO neutron flux < 10¹⁴/cm²/s.
• Neutron flux distribution in equatorial breeding blanket module (BBM) #13 in the WCLL BB outboard central zone, which is the most activated zone, computed with MCNP6 with the aim of introducing a conservative component.
• The recommended geometry model: the 2017 EU-DEMO CAD model.
• Blanket module material specifications (composition).
• Activation calculations for different ACP concentration levels in the PbLi breeder, with a given chemistry in stationary equilibrium conditions, using the FISPACT-II code.
Results suggest that there should be a waiting time of about 10 years from shutdown (SD) before the PbLi can be safely manipulated for recycling operations with simple shielding requirements. The dose rate is mainly given by the PbLi, and the ACP concentration (x1 or x100) does not change the result. In conclusion, the results show that there is no impact on PbLi activation due to ACP levels.
Keywords: activation, corrosion products, recycling, WCLL BB, PbLi
Procedia PDF Downloads 132
178 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination
Authors: Gilberto Goracci, Fabio Curti
Abstract:
This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice for performing real-time orbit determination without the need to add sensors to the spacecraft. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination to make up for GPS outages, yielding errors of a few kilometers in position and tens of meters per second in velocity. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimate in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimate. Deep learning models can, in fact, grasp nonlinear relations between the inputs, in this case the magnetometer data and the EKF state estimates, and the targets, namely the true position and velocity of the spacecraft. The model has been pre-trained on Sun-synchronous orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and levels of noise to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft, heavily reducing the onboard computational burden.
Once the spacecraft is launched, the model can use the GPS signal, if available, to fine-tune the parameters on the actual orbit in real time and work autonomously during GPS outages. In this way, the proposed module shows versatility, as it can be applied to any mission operating in SSO, while at the same time the training is completed, and eventually fine-tuned, on the specific orbit, increasing performance and reliability. The results of this study show an increase of one order of magnitude in the precision of the state estimate with respect to the use of the EKF alone. Tests on simulated and real data will be shown.
Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field
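A toy illustration of the EKF measurement update underlying this approach: a one-state filter estimating orbital radius r from the magnitude of a dipole field, |B| ≈ B0·(Re/r)³. All numbers are invented for the demo; the paper's filter uses the full IGRF model and a position/velocity state vector, not this scalar sketch.

```python
# Scalar EKF sketch: estimate orbital radius from a dipole field magnitude.
# Illustrative stand-in for the magnetometer-based EKF described above.
import math

B0, RE = 3.12e-5, 6371.0            # surface dipole field [T], Earth radius [km]

def h(r):                           # measurement model |B|(r)
    return B0 * (RE / r) ** 3

def H(r):                           # Jacobian dh/dr
    return -3.0 * h(r) / r

r_true = 7000.0
z = h(r_true)                       # noiseless measurement for the demo

r_est, P = 7200.0, 1.0e4            # initial state [km] and covariance [km^2]
R_noise = (1.0e-7) ** 2             # ~100 nT measurement noise variance [T^2]
for _ in range(5):
    P += 1.0                        # static "prediction": small process noise
    Hk = H(r_est)                   # linearize the measurement at the estimate
    K = P * Hk / (Hk * P * Hk + R_noise)        # Kalman gain
    r_est += K * (z - h(r_est))                 # innovation update
    P *= (1.0 - K * Hk)                         # covariance update
```

Each update pulls the 200 km initial error down toward the true radius; in the paper's scheme, the RNN then learns the residual nonlinear error the (linearized) EKF cannot remove.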
Procedia PDF Downloads 105
177 Identification of Peroxisome Proliferator-Activated Receptors α/γ Dual Agonists for Treatment of Metabolic Disorders, Insilico Screening, and Molecular Dynamics Simulation
Authors: Virendra Nath, Vipin Kumar
Abstract:
Background: TypeII Diabetes mellitus is a foremost health problem worldwide, predisposing to increased mortality and morbidity. Undesirable effects of the current medications have prompted the researcher to develop more potential drug(s) against the disease. The peroxisome proliferator-activated receptors (PPARs) are members of the nuclear receptors family and take part in a vital role in the regulation of metabolic equilibrium. They can induce or repress genes associated with adipogenesis, lipid, and glucose metabolism. Aims: Investigation of PPARα/γ agonistic hits were screened by hierarchical virtual screening followed by molecular dynamics simulation and knowledge-based structure-activity relation (SAR) analysis using approved PPAR α/γ dual agonist. Methods: The PPARα/γ agonistic activity of compounds was searched by using Maestro through structure-based virtual screening and molecular dynamics (MD) simulation application. Virtual screening of nuclear-receptor ligands was done, and the binding modes with protein-ligand interactions of newer entity(s) were investigated. Further, binding energy prediction, Stability studies using molecular dynamics (MD) simulation of PPARα and γ complex was performed with the most promising hit along with the structural comparative analysis of approved PPARα/γ agonists with screened hit was done for knowledge-based SAR. Results and Discussion: The silicone chip-based approach recognized the most capable nine hits and had better predictive binding energy as compared to the reference drug compound (Tesaglitazar). In this study, the key amino acid residues of binding pockets of both targets PPARα/γ were acknowledged as essential and were found to be associated in the key interactions with the most potential dual hit (ChemDiv-3269-0443). 
Stability studies using molecular dynamics (MD) simulation of the PPARα and γ complexes were performed with the most promising hit, and the root mean square deviation (RMSD) was found to stabilize at around 2 Å and 2.1 Å, respectively. Frequency distribution data also revealed that the key residues of both proteins showed maximum contacts with the potent hit during the MD simulation of 20 nanoseconds (ns). The knowledge-based SAR of PPARα/γ agonists was studied using the 2D structures of approved drugs such as aleglitazar and tesaglitazar, for the successful design and synthesis of candidate PPARα/γ agonists with anti-hyperlipidemic potential.
Keywords: computational, diabetes, PPAR, simulation
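The RMSD stability measure quoted above has a simple computational form. The sketch below is an illustration only, not the authors' simulation code; it assumes equal-weight atoms and frames that are already aligned:

```python
import numpy as np

def rmsd(frame, ref):
    """Root mean square deviation between two N x 3 coordinate arrays (angstroms)."""
    diff = frame - ref
    return np.sqrt((diff ** 2).sum() / len(ref))

# Toy trajectory frame: the reference shifted uniformly by 2 A along x
ref = np.zeros((5, 3))
frame = ref.copy()
frame[:, 0] += 2.0
print(round(rmsd(frame, ref), 2))  # 2.0
```

In a real analysis the frames would first be superposed onto the reference (e.g. by a Kabsch fit) before the deviation is measured.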
Procedia PDF Downloads 103
176 Geochemical Characterization for Identification of Hydrocarbon Generation: Implication of Unconventional Gas Resources
Authors: Yousif M. Makeen
Abstract:
This research addresses the processes of geochemical characterization and hydrocarbon generation occurring within hydrocarbon source and/or reservoir rocks. The geochemical characterization includes the organic-inorganic associations that influence the storage capacity of unconventional hydrocarbon resources (e.g. shale gas) and the migration of oil/gas through the petroleum source/reservoir rocks. Kerogen, i.e. the precursor of petroleum, occurs in various forms and types and may be oil-prone, gas-prone, or both. China has a number of petroleum-bearing sedimentary basins commonly associated with shale gas, oil sands, and oil shale. Taking the Sichuan Basin as the study area: the basin has recorded notable successful discoveries of shale gas, especially in its marine shale reservoirs. However, notable discoveries of lacustrine shale in the north-east Fuling area indicate the accumulation of shale gas within non-marine source rock. The objective of this study is to evaluate the hydrocarbon storage capacity, generation, and retention processes in the rock matrix of hydrocarbon source/reservoir rocks within the Sichuan Basin using advanced X-ray tomography 3D imaging computational technology, commonly referred to as Micro-CT, SEM (Scanning Electron Microscope), and optical microscopy, as well as organic geochemical facilities (e.g. vitrinite reflectance and UV light). The preliminary results of this study show that the lacustrine shales under investigation act as both source and reservoir rocks and are characterized by very fine grains and very low permeability and porosity. Three pore types have also been characterized in the lacustrine shales using X-ray computed tomography (CT): organic matter pores, interparticle pores, and intraparticle pores.
The benefits of this study would be more successful oil and gas exploration and a higher recovery factor, thus having a direct economic impact on China and the surrounding region. Methodologies: The SRA TOC/TPH or Rock-Eval technique is used to determine source rock richness (S1 and S2) and Tmax. TOC analysis is carried out using a multi N/C 3100 analyzer. The SRA and TOC results are used to calculate other parameters such as the hydrogen index (HI) and production index (PI). This analysis indicates the quantity of the organic matter. The minimum TOC limits generally accepted as essential for a source rock are 0.5% for shales and 0.2% for carbonates. Contributions: This research could resolve issues related to oil potential, provide targets, and serve as a pathfinder for future exploration activity in the Sichuan Basin.
Keywords: shale gas, unconventional resources, organic chemistry, Sichuan basin
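The derived parameters mentioned in the methodology follow standard Rock-Eval definitions. A minimal sketch, assuming S1 and S2 in mg HC/g rock and TOC in weight percent (the numbers are hypothetical):

```python
def hydrogen_index(s2_mg_g, toc_percent):
    """HI = S2 / TOC * 100, in mg HC per g TOC."""
    return s2_mg_g / toc_percent * 100.0

def production_index(s1_mg_g, s2_mg_g):
    """PI = S1 / (S1 + S2), dimensionless."""
    return s1_mg_g / (s1_mg_g + s2_mg_g)

# Hypothetical pyrolysis results for one sample
print(hydrogen_index(6.0, 2.0))              # 300.0
print(round(production_index(1.0, 6.0), 3))  # 0.143
```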
Procedia PDF Downloads 40
175 Numerical Simulation of Waves Interaction with a Free Floating Body by MPS Method
Authors: Guoyu Wang, Meilian Zhang, Chunhui LI, Bing Ren
Abstract:
In recent decades, a variety of floating structures have played a crucial role in ocean and marine engineering, such as ships, offshore platforms, floating breakwaters, fish farms, and floating airports. It is common for floating structures to suffer loadings under waves, and the responses of structures mounted in marine environments are closely related to wave impacts. The interaction between surface waves and floating structures is one of the important issues in ship and marine structure design for increasing performance and efficiency. With the progress of computational fluid dynamics, a number of numerical models based on the Navier-Stokes (NS) equations in the time domain have been developed to explore this problem, such as the finite difference method and the finite volume method. Those traditional numerical simulation techniques for moving bodies are grid-based and may encounter difficulties when treating a large free surface deformation and a moving boundary. In these models, the moving structures in a Lagrangian formulation need to be appropriately described on grids, and special treatment of the moving boundary is inevitable. Moreover, in mesh-based models, the movement of the grid near the structure, or the communication between the moving Lagrangian structure and the Eulerian meshes, increases the algorithmic complexity. Fortunately, these challenges can be avoided by meshless particle methods. In the present study, a moving particle semi-implicit (MPS) model is explored for the numerical simulation of fluid-structure interaction with surface flows, especially the coupling of fluid and a moving rigid body. The equivalent momentum transfer method is proposed and derived for this coupling. The structure is discretized into a group of solid particles, which are treated as fluid particles involved in solving the NS equations together with the surrounding fluid particles.
Momentum conservation is ensured by the transfer from those fluid particles to the corresponding solid particles. Then, the positions of the solid particles are updated to keep the initial shape of the structure. Using the proposed method, the motions of a free-floating body in regular waves are numerically studied. The wave surface elevation and the dynamic response of the floating body are presented. Good agreement is found when the numerical results, such as the sway, heave, and roll of the floating body, are compared with the experimental and other numerical data. It is demonstrated that the presented MPS model is effective for the numerical simulation of fluid-structure interaction.
Keywords: floating body, fluid structure interaction, MPS, particle method, waves
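The momentum transfer and shape-preserving update described above can be sketched in a heavily simplified, translation-only form: equal-mass solid particles are assumed, and rotation, which the full method also handles via angular momentum, is omitted here.

```python
import numpy as np

def rigid_body_update(solid_pos, solid_vel, dt):
    """Translation-only momentum transfer: provisional (fluid-solved) velocities
    of the solid particles are replaced by their common rigid-body velocity,
    and the body is moved as a whole so its shape is preserved."""
    v_body = solid_vel.mean(axis=0)   # equal-mass particles: momentum / total mass
    solid_vel[:] = v_body             # transfer momentum to the rigid body
    solid_pos += v_body * dt          # rigid translation keeps the initial shape
    return solid_pos, solid_vel

# Two solid particles 1 m apart with unequal provisional velocities
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.array([[1.0, 0.0], [0.0, 0.0]])
pos, vel = rigid_body_update(pos, vel, dt=1.0)
print(pos[1] - pos[0])   # spacing preserved: [1. 0.]
```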
Procedia PDF Downloads 76
174 Fuzzy Optimization for Identifying Anticancer Targets in Genome-Scale Metabolic Models of Colon Cancer
Authors: Feng-Sheng Wang, Chao-Ting Cheng
Abstract:
Developing a drug from conception to launch is costly and time-consuming. Computer-aided methods can reduce research costs and accelerate the development process during the early drug discovery and development stages. This study developed a fuzzy multi-objective hierarchical optimization framework for identifying potential anticancer targets in a metabolic model. First, RNA-seq expression data of colorectal cancer samples and their healthy counterparts were used to reconstruct tissue-specific genome-scale metabolic models. The aim of the optimization framework was to identify anticancer targets that lead to cancer cell death and to evaluate the metabolic flux perturbations in normal cells caused by cancer treatment. Four objectives were established in the optimization framework: to evaluate the mortality of cancer cells under treatment, and to minimize side effects, i.e. toxicity-induced tumorigenesis in normal cells and large metabolic perturbations. Through fuzzy set theory, the multiobjective optimization problem was converted into a trilevel maximizing decision-making (MDM) problem. Nested hybrid differential evolution was applied to solve the trilevel MDM problem using two nutrient media, respectively, to identify anticancer targets in the genome-scale metabolic model of colorectal cancer. Using Dulbecco’s Modified Eagle Medium (DMEM), the computational results reveal that the identified anticancer targets were mostly involved in cholesterol biosynthesis, pyrimidine and purine metabolism, the glycerophospholipid biosynthetic pathway, and the sphingolipid pathway. However, using Ham’s medium, the genes involved in cholesterol biosynthesis were unidentifiable. A comparison of the uptake reactions for DMEM and Ham’s medium revealed that no cholesterol uptake reaction is included in DMEM.
Two additional media, i.e. DMEM with a cholesterol uptake reaction added and Ham’s medium with it removed, were respectively used to investigate the relationship of tumor cell growth with nutrient components and anticancer target genes. The genes involved in cholesterol biosynthesis were revealed to be identifiable if a cholesterol uptake reaction was not included in the culture medium; however, they became unidentifiable if such a reaction was included.
Keywords: cancer metabolism, genome-scale metabolic model, constraint-based model, multilevel optimization, fuzzy optimization, hybrid differential evolution
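The fuzzy conversion of a multi-objective problem into a maximizing decision-making problem rests on membership functions and a max-min rule: each objective is mapped to a satisfaction degree in [0, 1], and the candidate maximizing the worst satisfaction wins. The toy sketch below uses hypothetical target names, objective values, and linear memberships; it only illustrates the decision rule, not the paper's trilevel formulation.

```python
def membership(value, worst, best):
    """Linear fuzzy membership: 0 at the worst value, 1 at the best."""
    if best == worst:
        return 1.0
    mu = (value - worst) / (best - worst)
    return max(0.0, min(1.0, mu))

def max_min_decision(candidates, bounds):
    """Pick the candidate maximizing the minimum membership over all objectives.
    candidates: {name: [obj1, obj2, ...]}; bounds: [(worst, best), ...]."""
    def satisfaction(objs):
        return min(membership(v, w, b) for v, (w, b) in zip(objs, bounds))
    return max(candidates, key=lambda name: satisfaction(candidates[name]))

# Hypothetical targets: objectives = (cancer-cell kill, normal-cell sparing), higher is better
cands = {"geneA": [0.9, 0.2], "geneB": [0.6, 0.7]}
print(max_min_decision(cands, [(0.0, 1.0), (0.0, 1.0)]))  # geneB
```

The max-min rule prefers geneB: geneA kills more cancer cells but its poor normal-cell sparing drags its minimum satisfaction down.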
Procedia PDF Downloads 81
173 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering
Authors: Hamza Benzerrouk, Alexander Nebylov
Abstract:
In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of coping with GNSS outliers/outages. The tightly coupled GNSS/INS navigation filter mixes the GNSS pseudorange and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs of the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF, or HCKF). The EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches, namely Cubature and High Degree Cubature Kalman Filtering methods. Building on previous results on state estimation based on INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature rule based Kalman Filter (GCKF). High degree cubature rules are the kernel of the new solution, giving more accurate estimation with less computational complexity than the Gauss-Hermite Quadrature Kalman Filter (GHQKF), which is not selected in this work because of its limited real-time applicability in high-dimensional state spaces. In the ultra-tightly (deeply) coupled GNSS/INS system dynamics, an EKF is used with transition matrix factorization together with GNSS block processing, which is well described in the paper; the intermediate frequency (IF) is assumed available, using correlator samples at a rate of 500 Hz in the presented approach.
GNSS (GPS+GLONASS) measurements are assumed available, and a modern SPKF and the Cubature Kalman Filter (CKF) are compared with new versions of the CKF, called high order CKFs, based on spherical-radial cubature rules developed at the fifth order in this work. The estimation accuracy of the high degree CKF is expected to be comparable to that of the GHKF; results of state estimation are then observed and discussed for different initialization parameters. Results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach is applied based on the High Degree Cubature Kalman Filter.
Keywords: GNSS, INS, Kalman filtering, ultra tight integration
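The third-degree spherical-radial rule at the heart of the CKF generates 2n equally weighted points from the state mean and covariance and propagates them through the nonlinearity. A minimal sketch of that rule follows; it is not the fifth-order rule developed in the paper, and the linear test function is only there to show the rule recovering the mean exactly.

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature: 2n points, each with weight 1/(2n)."""
    n = len(mean)
    L = np.linalg.cholesky(cov)           # matrix square root of the covariance
    pts = [mean + np.sqrt(n) * L[:, i] for i in range(n)]
    pts += [mean - np.sqrt(n) * L[:, i] for i in range(n)]
    return np.array(pts)

def propagate_mean(f, mean, cov):
    """Predicted mean of f(x) under the cubature rule (equal weights)."""
    pts = cubature_points(mean, cov)
    return np.array([f(p) for p in pts]).mean(axis=0)

# A linear map is recovered exactly by the rule
f = lambda x: 2.0 * x
m = propagate_mean(f, np.array([1.0, -1.0]), np.eye(2))
print(m)  # approximately [2, -2]
```

In a full CKF the same points also yield the predicted covariance and the cross-covariance used in the measurement update.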
Procedia PDF Downloads 284
172 Epigenetic and Archeology: A Quest to Re-Read Humanity
Authors: Salma A. Mahmoud
Abstract:
Epigenetics, or alteration in gene expression influenced by extragenetic factors, has emerged as one of the most promising areas for addressing gaps in our current knowledge of patterns of human variation. In the last decade, research investigating epigenetic mechanisms in many fields has flourished and witnessed significant progress. It paved the way for a new era of integrated research, especially between anthropology/archeology and the life sciences. Skeletal remains are considered the most significant source of information for studying human variation across history, and by utilizing these valuable remains, we can interpret past events, cultures, and populations. In addition to their archeological, historical, and anthropological importance, studying bones has great implications in other fields such as medicine and science. Bones can also hold within them the secrets of the future, as they can act as predictive tools for health, societal characteristics, and dietary requirements. Bones in their basic form are composed of cells (osteocytes) that are affected by both genetic and environmental factors; genetics alone can explain only a small part of their variability. The primary objective of this project is to examine the epigenetic landscape/signature within the bones of archeological remains as a novel marker that could reveal new ways to conceptualize chronological events, gender differences, social status, and ecological variations. We attempt here to address discrepancies in common variants such as the methylome, as well as in novel epigenetic regulators such as chromatin remodelers, which to the best of our knowledge have not yet been investigated by anthropologists/paleoepigenetists, using a plethora of techniques (biological, computational, and statistical).
Moreover, extracting epigenetic information from bones will highlight the importance of osseous material as a vector for studying human beings in several contexts (social, cultural, and environmental) and strengthen its essential role as a model system that can be used to investigate and reconstruct various cultural, political, and economic events. We also address all the steps required to plan and conduct an epigenetic analysis of bone materials (modern and ancient), as well as the key challenges facing researchers aiming to investigate this field. In conclusion, this project will serve as a primer for bioarcheologists/anthropologists and human biologists interested in incorporating epigenetic data into their research programs. Understanding the roles of epigenetic mechanisms in bone structure and function will be very helpful for a better comprehension of bone biology and will highlight its essentiality as an interdisciplinary vector and a key material in archeological research.
Keywords: epigenetics, archeology, bones, chromatin, methylome
Procedia PDF Downloads 108
171 Computational Analysis of Thermal Degradation in Wind Turbine Spars' Equipotential Bonding Subjected to Lightning Strikes
Authors: Antonio A. M. Laudani, Igor O. Golosnoy, Ole T. Thomsen
Abstract:
Rotor blades of large, modern wind turbines are highly susceptible to downward lightning strikes, as well as to triggering upward lightning; consequently, it is necessary to equip them with an effective lightning protection system (LPS) in order to avoid damage. The performance of existing LPSs is affected by carbon fibre reinforced polymer (CFRP) structures, which lead to lightning-induced damage in the blades, e.g. via electrical sparks. A solution to prevent internal arcing is to electrically bond the LPS and the composite structures so that they are at the same electric potential. Nevertheless, elevated temperatures arise at the joint interfaces because of high contact resistance, which melts and vaporises some of the epoxy resin matrix around the bond. The resulting high-pressure gases open up the bond and can ignite thermal sparks. The objective of this paper is to predict the current density distribution and the temperature field in the adhesive joint cross-section, in order to check whether the resin pyrolysis temperature is reached and any damage is expected. The finite element method has been employed to solve both the current and heat transfer problems, which are considered weakly coupled. The mathematical model for the electric current includes the Maxwell-Ampere equation for the induced electric field, solved together with current conservation, while the thermal field is found from the heat diffusion equation. In this way, the current sub-model calculates the Joule heat release for a chosen bonding configuration, whereas the thermal analysis determines the threshold voltage and current density that must not be exceeded in order to keep the temperature across the joint below the pyrolysis temperature, thereby preventing outgassing. In addition, it provides an indication of the minimal number of bonding points.
It is worth mentioning that the numerical procedures presented in this study can be tailored and applied to joints other than adhesive ones in wind turbine blades. For instance, they can be applied to the lightning protection of aerospace bolted joints. Furthermore, they can even be customized to predict the electromagnetic response under lightning strikes of other wind turbine systems, such as nacelle and hub components.
Keywords: carbon fibre reinforced polymer, equipotential bonding, finite element method, FEM, lightning protection system, LPS, wind turbine blades
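Before running a full FEM model, the order of magnitude of the interface heating can be checked with a lumped adiabatic estimate of the Joule heat release. The sketch below uses entirely hypothetical numbers and ignores conduction into the surrounding material, so it is an upper bound only, not the paper's coupled analysis.

```python
def contact_temperature_rise(current_a, contact_resistance_ohm, duration_s,
                             mass_kg, specific_heat_j_per_kg_k):
    """Adiabatic upper-bound temperature rise at a bonded joint:
    Q = I^2 * R * t, delta_T = Q / (m * c)."""
    q_joule = current_a ** 2 * contact_resistance_ohm * duration_s
    return q_joule / (mass_kg * specific_heat_j_per_kg_k)

# Hypothetical: 10 kA for 100 us through a 10 mOhm contact heating 1 g of resin (c ~ 1200 J/kg K)
dt = contact_temperature_rise(1e4, 1e-2, 1e-4, 1e-3, 1200.0)
print(round(dt, 1))  # ~83.3 K rise
```

Even this crude bound makes clear why a low contact resistance and multiple bonding points matter: the rise scales with I squared and R, and splitting the current over N bonds cuts each bond's heat release by roughly N squared.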
Procedia PDF Downloads 164
170 Investigation of Elastic Properties of 3D Full Five Directional (f5d) Braided Composite Materials
Authors: Apeng Dong, Shu Li, Wenguo Zhu, Ming Qi, Qiuyi Xu
Abstract:
The primary objective of this paper is the elastic properties of three-dimensional full five-directional (3Df5d) braided composites. A large body of research has focused on the 3D four-directional (4d) and 3D five-directional (5d) structures, but not much on the 3Df5d material. Generally, the influence of the yarn shape on the mechanical properties of braided materials tends to be ignored, which makes results too idealized. Besides, with the improvement of computational ability, people have become accustomed to using computers to predict material parameters, which fails to give an explicit and concise result facilitating production and application. Based on traditional mechanics, this paper first deduces the functional relation between the elastic properties and the braiding parameters. In addition, considering the actual shape of yarns after consolidation, the longitudinal modulus is modified and defined practically. The analytic model is established on the following assumptions, made for the sake of clarity: A: the cross-section of axial yarns is square; B: the cross-section of braiding yarns is hexagonal; C: the properties of braiding yarns and axial yarns are the same; D: the angle between the structure boundary and the projection of braiding yarns in the transverse plane is 45°; E: the filling factor ε of composite yarns is π/4; F: the deformation of the unit cell is under a constant-strain condition. Then, the functional relation between the material constants and the braiding parameters is systematically deduced for the yarn deformation mode. Finally, considering the actual shape of axial yarns after consolidation, the concept of a technology factor is proposed, and the longitudinal modulus of the material is modified based on energy theory. In this paper, the analytic solution for the material parameters is given for the first time, which provides a good reference for further research on and application of 3Df5d materials.
Although the analysis model is established on certain assumptions, the analysis method is also applicable to other braided structures. Meanwhile, the cross-section shape and straightness of the axial yarns play dominant roles in the longitudinal elastic property. So, in the braiding and solidifying process, the stability of the axial yarns should be guaranteed to increase the technology factor and reduce the dispersion of the material parameters. Overall, the elastic properties of this material are closely related to the braiding parameters and are highly designable, and although the longitudinal modulus of the material is greatly influenced by the technology factor, it can be defined to a certain extent.
Keywords: analytic solution, braided composites, elasticity properties, technology factor
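The role of a technology factor in the longitudinal modulus can be illustrated with a simple rule-of-mixtures estimate scaled by that factor. This is an illustrative form with hypothetical carbon/epoxy values, not the modified expression derived in the paper:

```python
def longitudinal_modulus(e_fiber, e_matrix, v_fiber, tech_factor=1.0):
    """Rule-of-mixtures estimate of the longitudinal modulus, scaled by a
    'technology factor' standing in for the real axial-yarn shape and
    straightness after consolidation (illustrative form only)."""
    return tech_factor * (e_fiber * v_fiber + e_matrix * (1.0 - v_fiber))

# Hypothetical carbon/epoxy values in GPa, 60% fiber volume fraction
print(longitudinal_modulus(230.0, 3.5, 0.6, tech_factor=0.9))  # ~125.5 GPa
```

A factor below 1 captures the stiffness lost to yarn waviness; stabilizing the axial yarns during braiding pushes the factor toward 1.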
Procedia PDF Downloads 238
169 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites
Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy
Abstract:
The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus per site): regions consisting of points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which there are efficient O(n log n) algorithms for n segments. The reduction also includes preprocessing (constructing segments from the polygons' sides) and postprocessing (constructing each polygon's locus by merging the loci of its sides). This approach does not take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon; the polygon is obviously included in its locus. Using these properties in the VD construction algorithm is a way to reduce computation. This article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be exploited effectively. The solution is performed via reduction. Preprocessing constructs the set of sites from the vertices and edges of the polygons. Each site is given an orientation such that the interior of the polygon lies to its left. The proposed algorithm constructs the VD for the set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. The improved efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved through the following fundamental solutions: 1. The algorithm constructs only those VD edges which are on the outside of the polygons.
The concept of oriented sites makes it possible to avoid constructing VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with “medium” polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time rather than in logarithmic time, as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. The high reliability and efficiency of the algorithm are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case: a set of sites formed by polygons.
Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites
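A computed VD of segment sites can be verified point-by-point against the defining distance property. The brute-force classifier below is a naive O(n)-per-query reference, useful only for testing, not the sweepline algorithm itself:

```python
def dist_point_segment(p, a, b):
    """Euclidean distance from point p to segment ab (2D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:                       # degenerate segment
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))                     # clamp projection onto the segment
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def nearest_site(p, sites):
    """Index of the segment site whose locus contains p (brute force)."""
    return min(range(len(sites)),
               key=lambda i: dist_point_segment(p, *sites[i]))

sites = [((0, 0), (0, 2)), ((3, 0), (3, 2))]      # two vertical segment sites
print(nearest_site((1.0, 1.0), sites))            # 0: closer to the left segment
```

Checking a sample of points this way against the loci output by the sweepline implementation is a cheap correctness test on the examples mentioned above.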
Procedia PDF Downloads 177
168 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties
Authors: E. Salem
Abstract:
Combustion has long been used as a means of energy extraction. However, in recent years, there has been a further increase in air pollution through pollutants such as nitrogen oxides, acids, etc. In order to solve this problem, there is a need to reduce carbon and nitrogen oxides through lean burning, modified combustors, and fuel dilution. A numerical investigation has been done to assess the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air mixtures, pure or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate 1D premixed flames based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen into the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions, and comparing them to experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube with dimensions of 40 cm in length and 2.5 cm in diameter. The mesh of the model included a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and changing the equivalence ratios from lean to rich in the fuel blends; the effects on flame temperature, shape, velocity, and the concentrations of radicals and emissions were observed.
It was determined that the reduced mechanisms provided results within an acceptable range. Variation of the inlet velocity and the geometry of the tube led to an increase in the temperature and CO2 emissions; the highest temperatures were obtained in lean conditions (equivalence ratio 0.5-0.9). Addition of hydrogen into the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the condition of the same laminar flow, while varying the equivalence ratio with hydrogen addition. The production of NO is reduced because the combustion happens in a leaner state, which helps in solving environmental problems.
Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames
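The lean/rich conditions quoted above are expressed through the equivalence ratio phi, the actual fuel-air ratio divided by the stoichiometric one. A minimal sketch for methane/air on a molar basis (the mixture numbers are hypothetical):

```python
def equivalence_ratio(fuel_moles, air_moles, stoich_air_per_fuel):
    """phi = (F/A) / (F/A)_stoichiometric, on a molar basis."""
    return (fuel_moles / air_moles) * stoich_air_per_fuel

# Methane: CH4 + 2 (O2 + 3.76 N2) -> 9.52 mol air per mol fuel at stoichiometry
STOICH_AIR_CH4 = 2.0 * (1.0 + 3.76)
phi = equivalence_ratio(1.0, 12.0, STOICH_AIR_CH4)
print(round(phi, 3))  # 0.793, a lean mixture (phi < 1)
```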
Procedia PDF Downloads 115
167 A Development of a Simulation Tool for Production Planning with Capacity-Booking at Specialty Store Retailer of Private Label Apparel Firms
Authors: Erika Yamaguchi, Sirawadee Arunyanrt, Shunichi Ohmori, Kazuho Yoshimoto
Abstract:
In this paper, we propose a simulation tool for making monthly production planning decisions that maximize the profit of specialty store retailer of private label apparel (SPA) firms. Most SPA firms are fabless and outsource production to the factories of their subcontractors. Every month, SPA firms book production lines and manpower in the factories. The booking is made a few months in advance, based on the demand prediction and the monthly production plan at that time. However, the demand prediction is updated month by month, and the monthly production plan changes to meet the latest demand prediction. SPA firms then have to change the capacities initially booked, within a certain range, to suit the monthly production plan. This booking system is called “capacity-booking”. These days, although precise monthly production planning is an issue for SPA firms, many firms still plan production by empirical rules. In addition, it is also a challenge for SPA firms to match their products and factories while considering their demand predictability and regulation ability. In this paper, we suggest a model that considers these two issues. The objective is to maximize the total profit over certain periods, which is sales minus the costs of production, inventory, and the capacity-booking penalty. To make a better monthly production plan at SPA firms, these points should be considered: demand predictability under random trends, the previous and next months' production plans relative to the target month, and the regulation ability of the capacity-booking. To match products and factories for outsourcing, it is important to consider the seasonality, volume, and predictability of each product, and the production possibility, size, and regulation ability of each factory. SPA firms have to consider these constraints and place orders with several factories per product.
We modeled these issues as a linear program. To validate the model, an example with several computational experiments for an SPA firm is presented. We suppose four typical product groups: basic, seasonal (Spring/Summer), seasonal (Fall/Winter), and spot products. As a result of the experiments, a monthly production plan was provided. In the plan, demand uncertainty from random trends is reduced by producing products of different types, and priority is given to producing high-margin products. In conclusion, we developed a simulation tool for monthly production planning decisions, which is useful when the production plan is set every month. We considered the features of capacity-booking and the matching of products and factories that have different features and conditions.
Keywords: capacity-booking, SPA, monthly production planning, linear programming
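The objective described above, sales minus production, inventory, and capacity-booking-deviation costs, can be evaluated for any candidate plan. The single-product sketch below uses hypothetical numbers and a simple deviation penalty; it illustrates the objective being maximized, not the paper's full linear program:

```python
def monthly_profit(plan, demand, price, unit_cost, holding_cost,
                   booked, penalty_per_unit):
    """Profit of a production plan over several months for one product:
    revenue from sales minus production cost, end-of-month inventory cost,
    and a penalty for deviating from the booked capacity."""
    profit, stock = 0.0, 0.0
    for produced, dem, cap in zip(plan, demand, booked):
        stock += produced
        sold = min(stock, dem)                       # unmet demand is lost
        stock -= sold
        profit += sold * price - produced * unit_cost
        profit -= stock * holding_cost               # carry cost on leftovers
        profit -= abs(produced - cap) * penalty_per_unit  # booking deviation
    return profit

# Hypothetical 3-month plan for one product
p = monthly_profit(plan=[100, 120, 80], demand=[90, 110, 100],
                   price=30.0, unit_cost=12.0, holding_cost=1.0,
                   booked=[100, 100, 100], penalty_per_unit=2.0)
print(p)  # 5290.0
```

A solver would search over `plan` (and, in the full model, over product-factory assignments) to maximize this quantity subject to the booking-adjustment range.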
Procedia PDF Downloads 520
166 Radar Cross Section Modelling of Lossy Dielectrics
Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit
Abstract:
The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, and coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, being more cost-effective and allowing a larger variety of targets to be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets against measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties; these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique and normal incidence scattering predictions to the material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated against measured data. A few dielectrics exhibiting different material properties were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep.
This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as with the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency and angle dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets will be presented and their validation discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results will be shown, and the importance of accurate dielectric material properties for validation purposes will be discussed.
Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation
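The physical optics (PO) method mentioned above reduces, for a flat plate at normal incidence, to the well-known high-frequency estimate σ = 4πA²/λ². As a minimal sketch of that closed form (the plate size and frequency below are illustrative, not taken from the study):

```python
import math

def po_flat_plate_rcs(area_m2, freq_hz, c=3e8):
    """Physical-optics RCS of a flat plate at normal incidence:
    sigma = 4 * pi * A^2 / lambda^2 (valid when the plate is large
    relative to the wavelength)."""
    wavelength = c / freq_hz
    return 4 * math.pi * area_m2 ** 2 / wavelength ** 2

# 0.1 m x 0.1 m plate at 10 GHz (within the 2-18 GHz band used in the study)
sigma = po_flat_plate_rcs(0.1 * 0.1, 10e9)
sigma_dbsm = 10 * math.log10(sigma)  # RCS is conventionally reported in dBsm
```

Note that this estimate breaks down away from specular incidence and for lossy dielectrics, which is precisely why full-wave methods such as MoM are needed for validation.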
Procedia PDF Downloads 242
165 Determination of the Relative Humidity Profiles in an Internal Micro-Climate Conditioned Using Evaporative Cooling
Authors: M. Bonello, D. Micallef, S. P. Borg
Abstract:
Driven by increased comfort standards, but at the same time by high energy consciousness, energy-efficient space cooling has become an essential aspect of building design. Its aim is simple: to provide satisfactory thermal comfort for individuals in an interior space using low-energy-consumption cooling systems. In this context, evaporative cooling is both an energy-efficient and an eco-friendly cooling process. In the past two decades, several academic studies have been performed to determine the thermal comfort produced by an evaporative cooling system, including studies on temperature profiles, air speed profiles, and the effects of clothing and occupant activity. To the best knowledge of the authors, no studies have yet considered the analysis of relative humidity (RH) profiles in a space cooled using evaporative cooling. Such a study will determine the effect of different humidity levels on a person's thermal comfort and aid in improving the design of future systems. Under this premise, the research objective is to characterise the different RH profiles that result in a chamber micro-climate cooled by an evaporative cooling system in which the inlet air speed, temperature, and humidity content are varied. The chamber shall be modelled using Computational Fluid Dynamics (CFD) in ANSYS Fluent. Relative humidity shall be modelled using a species transport model, while the k-ε RNG formulation is the proposed turbulence model. The model shall be validated against measurements taken in an identical test chamber in which tests are to be conducted under the different inlet conditions mentioned above, followed by verification of the model's mesh and time step. The verified and validated model will then be used to simulate other inlet conditions which would be impractical to reproduce in the actual chamber.
More details of the modelling and experimental approach will be provided in the full paper. The main conclusions from this work are two-fold: the micro-climatic spatial distribution of relative humidity within the room is important to consider when investigating comfort at occupant level; and a human being's thermal comfort (based on Predicted Mean Vote – Predicted Percentage Dissatisfied [PMV-PPD] values) varies with the local relative humidity. The study provides the necessary groundwork for investigating the micro-climatic RH conditions of environments cooled using evaporative cooling. Future work may also target the analysis of ways in which evaporative cooling systems may be improved to better the thermal comfort of human beings, specifically relating to the humidity content around a sedentary person.
Keywords: chamber micro-climate, evaporative cooling, relative humidity, thermal comfort
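Direct evaporative cooling is commonly characterised by its wet-bulb (saturation) effectiveness, which ties the outlet dry-bulb temperature to the inlet dry-bulb and wet-bulb temperatures. A minimal sketch of that standard relation follows; the temperatures and effectiveness used are illustrative assumptions, not values from the study:

```python
def direct_evap_outlet_temp(t_db, t_wb, effectiveness):
    """Dry-bulb temperature of air leaving a direct evaporative cooler:
    T_out = T_db - eps * (T_db - T_wb), where eps (0-1) is the
    wet-bulb (saturation) effectiveness of the cooling pad."""
    return t_db - effectiveness * (t_db - t_wb)

# e.g. 35 C inlet air with a 22 C wet-bulb through an 80%-effective pad
t_out = direct_evap_outlet_temp(35.0, 22.0, 0.8)
```

Because the process adds moisture as it cools, the outlet RH rises towards saturation as the effectiveness increases, which is what makes the spatial RH distribution worth resolving in the CFD model.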
Procedia PDF Downloads 157
164 Modeling of Anode Catalyst against CO in Fuel Cell Using Material Informatics
Authors: M. Khorshed Alam, H. Takaba
Abstract:
The catalytic properties of a metal usually change when it is alloyed with another metal in polymer electrolyte fuel cells. Pt-Ru alloy is one of the most widely discussed alloys for enhancing CO oxidation. In this work, we have investigated the CO coverage on the Pt2Ru3 nanoparticle with different atomic conformations of Pt and Ru using a combination of material informatics and computational chemistry. Density functional theory (DFT) calculations were used to describe the adsorption strength of CO and H for different conformations of the Pt:Ru ratio on the Pt2Ru3 slab surface. Then, through Monte Carlo (MC) simulations, we examined the segregation behaviour of Pt as a function of the surface atom ratio, subsurface atom ratio, and particle size of the Pt2Ru3 nanoparticle. We constructed a regression equation so as to reproduce the results of DFT from the structural descriptors alone. The descriptors selected for the regression equation are as follows: xa-b indicates the number of bonds between a targeted atom a and a neighboring atom b in the same layer (a, b = Pt or Ru); the terms xa-H2 and xa-CO represent the number of atoms a binding H2 and CO molecules, respectively; xa-S is the number of atoms a on the surface; and xa-b- is the number of bonds between atom a and a neighboring atom b located outside the layer. The surface segregation in alloy nanoparticles is influenced by their component elements, composition, crystal lattice, shape, size, and the nature of the adsorbents as well as their pressure, temperature, etc. Simulations were performed on nanoparticles of different sizes (2.0 nm, 3.0 nm), mixing Pt and Ru atoms in different conformations, at a temperature of 333 K. In addition to the Pt2Ru3 alloy, we also considered pure Pt and Ru nanoparticles to compare the surface coverage by adsorbates (H2, CO). We assumed that the pure and Pt-Ru alloy nanoparticles have an fcc crystal structure as well as a cubo-octahedron shape, which is bounded by (111) and (100) facets.
Simulations were performed for up to 50 million MC steps. The MC results show that, in the presence of gases (H2, CO), the surfaces are occupied by the gas molecules, and in the equilibrium structure the coverage of H and CO depends on the nature of the surface atoms. In the initial structure, the Pt/Ru ratios on the surfaces for the different cluster sizes were in the range of 0.50 - 0.95. The MC simulations were run with a partial pressure of H2 (PH2) of 70 kPa and CO concentrations (PCO) of 100-500 ppm. The Pt/Ru ratio decreases as the CO concentration increases, with only minor exceptions for the small nanoparticle. The adsorption strength of CO on the Ru site is higher than on the Pt site, which would be one reason for the decreasing Pt/Ru ratio on the surface. Therefore, our study identifies that the nanoparticle size, composition, conformation of the alloying atoms, and the concentration and chemical potential of the adsorbates all have an impact on the stability of nanoparticle alloys and, ultimately, on the overall catalytic performance during operation.
Keywords: anode catalysts, fuel cells, material informatics, Monte Carlo
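The Metropolis Monte Carlo procedure behind such segregation studies can be sketched schematically. The toy pair-energy model, 1-D ring lattice, and energy parameter below are illustrative assumptions, not the authors' actual model; only the acceptance rule and the kT value for 333 K follow the standard recipe:

```python
import math
import random

def metropolis_mc(atoms, n_steps, j_mix=0.05, kT=0.0287, rng=None):
    """Toy Metropolis Monte Carlo on a 1-D ring of 'Pt'/'Ru' sites.
    Energy penalises each unlike-neighbour bond by j_mix (eV);
    kT = 8.617e-5 eV/K * 333 K ~ 0.0287 eV. Trial moves swap two sites,
    so the Pt:Ru composition is conserved."""
    rng = rng or random.Random(0)
    atoms = list(atoms)
    n = len(atoms)

    def energy():
        return sum(j_mix for i in range(n) if atoms[i] != atoms[(i + 1) % n])

    e = energy()
    for _ in range(n_steps):
        i, j = rng.randrange(n), rng.randrange(n)
        atoms[i], atoms[j] = atoms[j], atoms[i]            # trial swap
        e_new = energy()
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / kT):
            e = e_new                                      # accept
        else:
            atoms[i], atoms[j] = atoms[j], atoms[i]        # reject: undo swap
    return atoms, e

final_atoms, final_e = metropolis_mc(['Pt'] * 10 + ['Ru'] * 15, 2000)
```

A production study would replace the toy energy with DFT-fitted descriptors (as the regression equation above does) and add adsorbate chemical potentials to the acceptance criterion.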
Procedia PDF Downloads 193
163 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems
Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue
Abstract:
Recent technological advances have made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation. Therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which digital precoders alone are sufficient to accomplish precoding, MIMO technology at mmWave is different because of digital precoding limitations: fully digital precoding requires a large number of radio frequency (RF) chains, each supporting signal mixers and analog-to-digital converters. As RF chains are costly and power-hungry, we need to resort to another alternative. Although the hybrid precoding architecture, based on a combination of a baseband precoder and an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders is still an open problem. According to the mapping strategy from RF chains to the antenna elements, there are two main categories of hybrid precoding architecture. The partially-connected structure, a hybrid precoding sub-array architecture, reduces hardware complexity by using fewer phase shifters, but sacrifices some beamforming gain. In this paper, we treat the hybrid precoder design in mmWave MIMO systems as a matrix factorization problem. Thus, we adopt the alternating minimization principle in order to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method.
Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms. We also show that the proposed approach significantly reduces the computational complexity. Furthermore, valuable design insights are provided when we use the proposed algorithm to make simulation comparisons between the hybrid precoding partially-connected structure and the fully-connected structure.
Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure
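The generic iterative hard thresholding recursion underlying the proposed method alternates a gradient step with a projection onto s-sparse vectors. A minimal sparse-recovery sketch of that recursion follows; it is not the paper's precoder-specific formulation, whose matrices and constraints differ, and the problem sizes are illustrative:

```python
import numpy as np

def iht(A, y, s, n_iters=100, step=None):
    """Iterative hard thresholding for min ||y - A x||_2 s.t. ||x||_0 <= s:
    x <- H_s(x + step * A^T (y - A x)), where H_s keeps the s
    largest-magnitude entries and zeros the rest."""
    n = A.shape[1]
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe gradient step size
    x = np.zeros(n)
    for _ in range(n_iters):
        g = x + step * A.T @ (y - A @ x)        # gradient step
        keep = np.argsort(np.abs(g))[-s:]       # indices of s largest entries
        x = np.zeros(n)
        x[keep] = g[keep]                       # hard thresholding
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40)) / np.sqrt(20)
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [1.0, -2.0, 1.5]
x_hat = iht(A, A @ x_true, s=3)  # s-sparse estimate of x_true
```

In the hybrid precoding setting, the sparsity projection is replaced by the structural constraint of the partially-connected RF precoder, inside an alternating minimization loop with the baseband precoder.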
Procedia PDF Downloads 323
162 Field Synergy Analysis of Combustion Characteristics in the Afterburner of Solid Oxide Fuel Cell System
Authors: Shing-Cheng Chang, Cheng-Hao Yang, Wen-Sheng Chang, Chih-Chia Lin, Chun-Han Li
Abstract:
The solid oxide fuel cell (SOFC) is a promising green technology which can achieve a high electrical efficiency. Due to the high operating temperature of the SOFC stack, the high-temperature off-gases from the anode and cathode outlets are introduced into an afterburner to convert their chemical energy into thermal energy by combustion. During operation of the SOFC power generation system, this heat is recovered to preheat the fresh air and fuel gases before they pass through the stack. For the afterburner of an SOFC system, temperature control with good thermal uniformity is important. A burner with a well-designed geometry can usually achieve satisfactory performance. Computational fluid dynamics (CFD) simulation is a suitable tool for designing an afterburner for an SOFC system. In this paper, the hydrogen combustion characteristics in an afterburner with a simple geometry are studied using CFD. The burner consists of a cylindrical chamber with a fuel gas inlet, an air inlet, and an exhaust outlet. The flow field and temperature distributions inside the afterburner under different fuel and air flow rates are analyzed. To improve the temperature uniformity of the afterburner during SOFC system operation, the flow paths of the anode/cathode off-gases are varied by changing the positions of the fuel and air inlet channels, improving the heat and flow field synergy in the burner furnace. Because the air flow rate is much larger than that of the fuel gas, the flow structure and heat transfer in the afterburner are dominated by the air flow path. The present work studied the effects of fluid flow structures on the combustion characteristics of an SOFC afterburner using three simulation models with a cylindrical combustion chamber and a tapered outlet. All walls in the afterburner are assumed to be no-slip and adiabatic. In each case, two sets of parameters are simulated to study the transport phenomena of hydrogen combustion.
The equivalence ratios are in the range of 0.08 to 0.1. Finally, the pattern factor for the simulation cases is calculated to investigate the effect of the gas inlet locations on the temperature uniformity of the SOFC afterburner. The results show that the temperature uniformity of the exhaust gas can be improved by simply adjusting the position of the gas inlet. The field synergy analysis indicates that the fluid flow paths should be designed so that they contribute significantly to the heat transfer, i.e. the field synergy angle should be as small as possible. In the study cases, the averaged synergy angle of the burner is about 85°, 84°, and 81°, respectively.
Keywords: afterburner, combustion, field synergy, solid oxide fuel cell
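The field synergy angle quoted above is the local angle β between the velocity vector U and the temperature gradient ∇T, with cos β = (U·∇T)/(|U||∇T|). A minimal sketch of that pointwise computation (a CFD post-processor would evaluate and average it over every cell):

```python
import math

def synergy_angle_deg(velocity, temp_gradient):
    """Local field synergy angle beta between the velocity vector U and
    the temperature gradient grad T:
    cos(beta) = (U . grad T) / (|U| |grad T|).
    Small angles mean convection contributes strongly to heat transfer."""
    dot = sum(u * g for u, g in zip(velocity, temp_gradient))
    norm_u = math.sqrt(sum(u * u for u in velocity))
    norm_g = math.sqrt(sum(g * g for g in temp_gradient))
    return math.degrees(math.acos(dot / (norm_u * norm_g)))

# flow perpendicular to the temperature gradient: no synergy (90 degrees)
beta = synergy_angle_deg((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

The averaged angles of about 85°, 84°, and 81° reported above are thus close to the worst case of 90°, which is typical of duct-like flows where the bulk velocity runs nearly parallel to the isotherms.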
Procedia PDF Downloads 137
161 Designing Metal Organic Frameworks for Sustainable CO₂ Utilization
Authors: Matthew E. Potter, Daniel J. Stewart, Lindsay M. Armstrong, Pier J. A. Sazio, Robert R. Raja
Abstract:
Rising CO₂ levels in the atmosphere mean that CO₂ is a highly desirable feedstock. This requires specific catalysts to be designed to activate this inert molecule, combining a catalytic site tailored for CO₂ transformations with a support that can readily adsorb CO₂. Metal organic frameworks (MOFs) are regularly used as CO₂ sorbents. The organic nature of the linker molecules connecting the metal nodes offers many post-synthesis modifications to introduce catalytically active sites into the frameworks. In addition, the metal nodes may be coordinatively unsaturated, allowing them to bind to organic moieties. Imidazoles have shown promise in catalyzing the formation of cyclic carbonates from epoxides with CO₂. Typically, this synthesis route employs toxic reagents such as phosgene, liberating HCl, so an alternative route using CO₂ is highly appealing. In this work we design active sites for CO₂ activation by tethering substituted-imidazole organocatalytic species to the available Cr3+ metal nodes of a Cr-MIL-101 MOF, for the first time, to create a tailored species for carbon capture utilization applications. Our tailored design strategy, combining a CO₂ sorbent, Cr-MIL-101, with an anchored imidazole, results in a highly active and selective multifunctional catalyst, achieving turnover frequencies of over 750 hr⁻¹. These findings demonstrate the synergy between the MOF framework and imidazoles for CO₂ utilization applications. Further, the effect of substrate variation has been explored, yielding mechanistic insights into this process. Through characterization, we show that the structural and compositional integrity of the Cr-MIL-101 is preserved on functionalizing with the imidazoles. Further, we show the binding of the imidazoles to the Cr3+ metal nodes. This can be seen in our EPR study, where the distortion of the Cr3+ on binding to the imidazole shows that the CO₂ binding site is close to the active imidazole.
This has a synergistic effect, improving catalytic performance. We believe the combination of MOF support and organocatalyst opens many possibilities for generating new multifunctional catalysts for CO₂ utilisation. In conclusion, we have validated our design procedure, combining a known CO₂ sorbent with an active imidazole species to create a unique tailored multifunctional catalyst for CO₂ utilization. This species achieves high activity and selectivity for the formation of cyclic carbonates and offers a sustainable alternative to traditional synthesis methods. This work represents a unique design strategy for CO₂ utilization while offering exciting possibilities for further work in characterization, computational modelling, and post-synthesis modification.
Keywords: carbonate, catalysis, MOF, utilisation
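The turnover frequency quoted above is conventionally defined as moles of product per mole of active sites per hour. A minimal sketch of that definition, with illustrative numbers chosen to land near the reported figure (they are not data from the study):

```python
def turnover_frequency(mol_product, mol_active_sites, time_h):
    """Turnover frequency (TOF) in hr^-1: moles of product formed per
    mole of active catalytic sites per hour."""
    return mol_product / (mol_active_sites * time_h)

# e.g. 0.075 mol cyclic carbonate over 1e-4 mol imidazole sites in 1 h
tof = turnover_frequency(0.075, 1e-4, 1.0)
```

In practice, quantifying the moles of accessible imidazole sites (e.g. by elemental analysis or titration) is the delicate step; the arithmetic itself is trivial.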
Procedia PDF Downloads 180
160 An Energy and Economic Comparison of Solar Thermal Collectors for Domestic Hot Water Applications
Authors: F. Ghani, T. S. O’Donovan
Abstract:
Today, the global solar thermal market is dominated by two collector types: the flat plate and the evacuated tube collector. In terms of the number of installations worldwide, the evacuated tube collector is the dominant variant, primarily due to the Chinese market, but the flat plate collector dominates both the Australian and European markets. The market share of the evacuated tube collector is, however, growing in Australia due to a common belief that this collector type is 'more efficient' and, therefore, the better choice for hot water applications. In this study, we investigate this issue further to assess the validity of this statement. This was achieved by methodically comparing the performance and economics of several solar thermal systems comprising a low-performance flat plate collector, a high-performance flat plate collector, and an evacuated tube collector, each coupled with a storage tank and pump. All systems were simulated using the commercial software package Polysun for four climate zones in Australia, to take different weather profiles into account, and subjected to a thermal load equivalent to a household of four people. Our study revealed that the energy savings and payback periods varied significantly for systems operating under specific environmental conditions. Solar fractions ranged between 58 and 100 per cent, while payback periods ranged between 3.8 and 10.1 years. Although the evacuated tube collector was found to operate with a marginally higher thermal efficiency than the selective surface flat plate collector, due to reduced ambient heat loss, the high-performance flat plate collector outperformed the evacuated tube collector in thermal yield. This is because the flat plate collector possesses a significantly higher absorber-to-gross-collector-area ratio than the evacuated tube collector.
Furthermore, it was found that, for Australian regions with a high average solar radiation intensity and ambient temperature, the lower-performance collector is the preferred choice due to favorable economics and a reduced stagnation temperature. Our study has provided additional insight into the thermal performance and economics of the two prevalent solar thermal collectors currently available. A computational investigation has been carried out specifically for the Australian climate due to its geographic size and significant variation in weather. For domestic hot water applications, where fluid temperatures between 50 and 60 degrees Celsius are sought, the flat plate collector is both technically and economically favorable over the evacuated tube collector. This research will be useful to system design engineers, solar thermal manufacturers, and those involved in policy to encourage the implementation of solar thermal systems into the hot water market.
Keywords: solar thermal, energy analysis, flat plate, evacuated tube, collector performance
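The payback periods and solar fractions quoted above follow from two standard definitions; a minimal sketch is given below. The capital cost, savings, and yield figures are illustrative assumptions (chosen to land within the reported ranges), not values from the study:

```python
def simple_payback_years(capital_cost, annual_energy_savings):
    """Simple (undiscounted) payback period: years for the cumulative
    energy-cost savings to repay the upfront system cost."""
    return capital_cost / annual_energy_savings

def solar_fraction(solar_yield_kwh, total_load_kwh):
    """Fraction of the annual hot-water load met by the solar system."""
    return solar_yield_kwh / total_load_kwh

payback = simple_payback_years(3800.0, 1000.0)   # within the 3.8-10.1 yr range
fraction = solar_fraction(2900.0, 5000.0)        # within the 58-100 % range
```

A fuller economic analysis would discount the savings stream and account for fuel price escalation, but the simple payback suffices for ranking collector options within one climate zone.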
Procedia PDF Downloads 211
159 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model
Authors: Yew Mun Yip, Dawei Zhang
Abstract:
Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, the machinery malfunctions and leads to misfolding diseases. Although in vitro experiments are able to conclude that mutations of the amino acid sequence lead to incorrectly folded protein structures, they are unable to decipher the folding process. Therefore, molecular dynamics (MD) simulations are employed to simulate the folding process, so that an improved understanding of folding will enable us to contemplate better treatments for misfolding diseases. MD simulations make use of force fields to simulate the folding of peptides. Secondary structures are formed via the hydrogen bonds formed between the backbone atoms (C, O, N, H). It is important that the hydrogen bond energy computed during the MD simulation is accurate in order to direct the folding process towards the native structure. Since the atoms involved in a hydrogen bond possess very dissimilar electronegativities, the more electronegative atom attracts greater electron density from the less electronegative atom towards itself. This is known as the polarization effect. Since the polarization effect changes the electron density of the two atoms in close proximity, the atomic charges of the two atoms should also vary with the strength of the polarization effect. However, the fixed atomic charge scheme in force fields does not account for the polarization effect. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulations by updating the atomic charges of the backbone hydrogen-bond atoms according to equations, derived from quantum-mechanical calculations, relating the amount of charge transferred to the atom to the length of the hydrogen bond.
Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the simulated peptide at every time step, yet it still updates the atomic charges dynamically, thereby reducing computational cost and time while accounting for the polarization effect. The PSBC model is applied to two different β-peptides: the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose structure has been folded in vitro and studied by NMR, and the trpzip peptides, double-stranded β-sheets for which a correlation is found between the type of amino acids that constitute the β-turn and the β-propensity.
Keywords: hydrogen bond, polarization effect, protein folding, PSBC
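The charge-update idea behind the PSBC model (atomic charges of hydrogen-bonded backbone atoms adjusted as a function of bond length) can be sketched as follows. The linear form and its coefficients here are placeholders for illustration only; in the actual model the relations are fitted to quantum-mechanical calculations:

```python
def update_hbond_charges(q_donor_h, q_acceptor_o, r_hbond, a=-0.1, b=0.3):
    """Sketch of a PSBC-style update: the charge dq transferred across a
    backbone hydrogen bond is taken as a linear function of the bond
    length r, dq = a * r + b (coefficients a, b are placeholders, not
    the fitted PSBC values). The transfer shortens with distance, and
    charge is shifted between the acceptor O and donor H so that the
    total charge is conserved."""
    dq = max(0.0, a * r_hbond + b)      # shorter bond -> larger transfer
    return q_donor_h + dq, q_acceptor_o - dq

# hypothetical starting charges (in units of e) and a 2.0 A H-bond
q_h, q_o = update_hbond_charges(0.31, -0.51, r_hbond=2.0)
```

Because the update is a cheap algebraic function of the instantaneous geometry, it can be applied at every MD step without the per-step quantum-mechanical calculations that other polarizable schemes require, which is the model's main computational advantage.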
Procedia PDF Downloads 270