Search results for: code blue simulation module
174 High Throughput Virtual Screening against NS3 Helicase of Japanese Encephalitis Virus (JEV)
Authors: Soma Banerjee, Aamen Talukdar, Argha Mandal, Dipankar Chaudhuri
Abstract:
Japanese Encephalitis is a major infectious disease, with nearly half the world's population living in areas where it is prevalent. Currently, treatment is limited to supportive care and symptom management, with vaccination available only as a preventive measure. Due to the lack of antiviral drugs against Japanese Encephalitis Virus (JEV), the quest for such agents remains a priority. For these reasons, simulation studies of drug targets against JEV are important. Towards this purpose, docking experiments with kinase inhibitors were performed against the chosen target, NS3 helicase, as it is a nucleoside-binding protein. Previous efforts in computational drug design against JEV revealed some lead molecules by virtual screening using public domain software. To find leads more specifically and accurately, in this study the proprietary software Schrödinger GLIDE has been used. The druggability of the pockets in the NS3 helicase crystal structure was first calculated with SITEMAP. The sites were then screened according to compatibility with ATP, and the site most compatible with ATP was selected as the target. Virtual screening was performed with GLIDE on ligands acquired from three databases: KinaseSARfari, KinaseKnowledgebase and the Published Inhibitor Set. The 25 ligands with the best docking scores from each database were re-docked in XP mode. Protein structure alignment of NS3 was performed using VAST against MMDB, and similar human proteins were docked to all the best-scoring ligands. The low-scoring ligands were chosen for further studies, and the high-scoring ligands were screened out. Seventy-three ligands were listed as the best-scoring ones after performing HTVS. Protein structure alignment of NS3 revealed three human proteins with RMSD values less than 2 Å. Docking results with these three proteins revealed the inhibitors that could interfere with and inhibit human proteins; those inhibitors were screened out. Among the ones left, those with docking scores worse than a threshold value were also removed to obtain the final hits. Analysis of the docked complexes through 2D interaction diagrams revealed the amino acid residues that are essential for ligand binding within the active site. Interaction analysis will help to find a strongly interacting scaffold among the hits. This experiment yielded 21 hits with the best docking scores, which could be investigated further for their drug-like properties. Aside from identifying suitable leads, specific NS3 helicase-inhibitor interactions were identified. Target modification strategies complementing the docking methodology, which can result in choosing better lead compounds, are in progress. Such enhanced leads can then proceed to better in vitro testing.
Keywords: antivirals, docking, glide, high-throughput virtual screening, Japanese encephalitis, ns3 helicase
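The post-processing logic described above (keep ligands that dock well to NS3 but poorly to the similar human proteins, then apply a score threshold) can be sketched as a simple table filter. The Python snippet below is illustrative only: the column names, the example ligands and the -7.0 kcal/mol cut-offs are assumptions, not values from the study.

```python
import pandas as pd

# Hypothetical post-processing of the screening results: GLIDE scores are
# more negative for stronger binding.
hits = pd.DataFrame({
    "ligand":           ["L001", "L002", "L003", "L004"],
    "ns3_xp_score":     [-9.4,   -8.1,   -7.3,   -6.2],   # XP re-docking against NS3 helicase
    "best_human_score": [-5.0,   -8.9,   -4.8,   -5.5],   # best score against the 3 human proteins
})

selective = hits[hits["best_human_score"] > -7.0]      # weak binding to human off-targets kept
final = selective[selective["ns3_xp_score"] <= -7.0]   # strong binding to NS3 required
print(final["ligand"].tolist())                        # -> ['L001', 'L003']
```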
Procedia PDF Downloads 232
173 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design
Authors: Mohammad Bagher Anvari, Arman Shojaei
Abstract:
Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new, simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage. In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.
Keywords: incremental launching, bridge construction, finite element model, optimization
Procedia PDF Downloads 104
172 God, The Master Programmer: The Relationship Between God and Computers
Authors: Mohammad Sabbagh
Abstract:
Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands by words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything around in six days, just like how we can program a virtual world on the computer. GOD did mention in the Quran that one day where GOD's throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count, and gave everything its functions, attributes, classes, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual or by thought, that is outputted by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator. If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, in 2022 you are going to require one of the strongest, fastest, most capable supercomputers of the world, with a theoretical speed of 50 petaFLOPS, to accomplish that. In other words, the ability to perform one quadrillion (10¹⁵) floating-point operations per second. A number a human cannot even fathom. To put it more into perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom and what is smaller than that in all the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us to keep up with what the computer is doing, and to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said that 'WE used to copy what you used to do'. Essentially, as the world is running, think of it as an interactive movie that is being played out in front of you, in a fully immersive, non-virtual reality setting. GOD is recording it, from every angle to every thought, to every action. This brings the idea of how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video when we will be getting and reading our book.
Keywords: programming, the Quran, object orientation, computers and humans, GOD
Procedia PDF Downloads 107
171 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measures based on the number of oracle iterations, but to be able to evaluate the real circuit and time costs of the quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time allows the decision problem to be transformed into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobabilistic superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including its special forms, the Feynman and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
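Since the abstract does not give numbers, here is a rough back-of-the-envelope sketch (Python, not taken from the paper; the function name and the single-solution assumption are ours) of the register size implied by the N log₂ K encoding and the corresponding optimal Grover iteration count, ignoring the ancilla qubits needed by the oracle blocks.

```python
import math

def grover_resources(n_nodes: int, k_degree: int, n_solutions: int = 1):
    """Rough resource estimate for the edge-per-node encoding.

    Each of the N nodes selects one of its K incident edges, so the register
    needs N * ceil(log2(K)) qubits. The optimal number of Grover iterations
    is roughly (pi/4) * sqrt(search_space / n_solutions).
    """
    qubits = n_nodes * math.ceil(math.log2(k_degree))
    search_space = 2 ** qubits
    iterations = math.floor((math.pi / 4) * math.sqrt(search_space / n_solutions))
    return qubits, iterations

# Example: 8 nodes, degree bounded by 4, assuming a single marked cycle
print(grover_resources(8, 4))  # -> (16, 201)
```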
Procedia PDF Downloads 192
170 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice
Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer
Abstract:
The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of the creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during the process of transient heat conduction increases the salinity of brine pockets until they reach a local equilibrium state. In this process, changing the sensible heat of the ice and brine pockets is not the only effect of passing heat through the medium; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. Properties of brine-spongy ice are obtained using the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to reach a set of ordinary differential equations. Boundary conditions are chosen using one of the applicable cases of this type of ice: one side is considered a thermally isolated surface, and the other side is assumed to be suddenly affected by a constant temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted using different salinities from 5 to 60 ppt. Time steps and space intervals are chosen properly to maintain the most stable and fast solution. The variation of temperature, volume fraction of brine and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine-pocket salinities, from the initial salinity up to 180 ppt. The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation; this rate decreases as time passes. Brine pockets are smaller at portions closer to the colder side than at the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities because of the sharp variation of temperature at the start of the process; adjusting the intervals improves this unstable situation. The analytical model using a numerical scheme is capable of predicting the thermal behavior of brine-spongy ice. This model and the numerical solutions are important for modeling the process of freezing of salt water and ice accretion on cold structures.
Keywords: method of lines, brine-spongy ice, heat conduction, salt water
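To make the discretization step concrete, the following is a minimal Method of Lines sketch in Python for 1-D transient conduction with one insulated face and one constant-temperature face. It deliberately omits the latent-heat and brine-salinity coupling that is central to the study, and all property values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 1.2e-6                 # thermal diffusivity, m^2/s (assumed)
L, N = 0.05, 50                # slab thickness (m) and number of spatial nodes
dx = L / (N - 1)
T_init, T_wall = -5.0, -20.0   # initial and imposed boundary temperatures, deg C

def rhs(t, T):
    dTdt = np.zeros_like(T)
    # interior nodes: central difference of the heat equation
    dTdt[1:-1] = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    # node 0: insulated (zero-flux) boundary via a mirror node
    dTdt[0] = alpha * 2 * (T[1] - T[0]) / dx**2
    # last node: held at the constant boundary temperature
    dTdt[-1] = 0.0
    return dTdt

T0 = np.full(N, T_init)
T0[-1] = T_wall
sol = solve_ivp(rhs, (0.0, 3600.0), T0, method="BDF", t_eval=[0, 900, 1800, 3600])
print(sol.y[:5, -1])   # temperatures near the insulated face after one hour
```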
Procedia PDF Downloads 217
169 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality
Authors: Qian Yi Ooi
Abstract:
At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, and there are still some slight deviations in terms of scale differences. However, insufficient parameters or poor surface mesh quality is likely to occur if these small deviations are carried over to a future civil aircraft whose size is quite different from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, this study examines the geometric similarity of airfoil parameters and the quality of the surface mesh in CFD calculations, in order to assess how different parameterization methods perform at different airfoil scales. The research objects are three airfoil scales, covering the wing root and wingtip of conventional civil aircraft and the wing root of a giant hybrid wing, each represented by three parameterization methods, to compare the calculation differences between airfoils of different sizes. In this study, the constants are the NACA 0012 section, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables include three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study also uses different numbers of edge mesh divisions and the same bias factor in the CFD simulation. The results show that as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of mesh divisions should be used to maintain the accuracy of the predicted aerodynamic performance of the wing. When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to support the accuracy of the airfoil's aerodynamic performance, which runs into the severe limitation of insufficient computer capacity. When using the B-spline curve method, the number of control points and mesh divisions must be set appropriately to obtain higher accuracy; however, this quantitative balance cannot be defined directly and has to be found repeatedly by adding and removing points. Lastly, when using the CST method, it is found that a limited number of control points is enough to accurately parameterize the larger-sized wing, and a higher degree of accuracy and stability can be obtained even on a lower-performance computer.
Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality
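For readers unfamiliar with the CST method mentioned above, the sketch below shows a generic Kulfan-style class/shape function transformation in Python. The coefficient values and the NACA-0012-like example are illustrative assumptions, not the parameter sets used in the study.

```python
import numpy as np
from math import comb

def cst_surface(x, coeffs, n1=0.5, n2=1.0, te_thickness=0.0):
    """Class/shape function transformation (CST) surface ordinate.

    x: chordwise stations normalised to [0, 1]; coeffs: Bernstein weights
    (the 'limited control points' referred to in the abstract).
    """
    x = np.asarray(x, dtype=float)
    n = len(coeffs) - 1
    class_fn = x**n1 * (1.0 - x)**n2                       # round nose, sharp trailing edge
    shape_fn = sum(
        a * comb(n, i) * x**i * (1.0 - x)**(n - i) for i, a in enumerate(coeffs)
    )
    return class_fn * shape_fn + x * te_thickness

x = np.linspace(0.0, 1.0, 201)
upper = cst_surface(x, [0.17, 0.16, 0.15, 0.15])    # example upper-surface weights
lower = -cst_surface(x, [0.17, 0.16, 0.15, 0.15])   # symmetric, NACA-0012-like section
print(round(upper.max(), 4))                        # maximum half-thickness of the section
```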
Procedia PDF Downloads 222
168 Robust Inference with a Skew T Distribution
Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici
Abstract:
There is a growing body of evidence that non-normal data is more prevalent in nature than the normal one. Examples can be quoted from, but not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data that exhibit inherent non-normal behavior is considered. This distribution has tails fatter than a normal distribution and it also exhibits skewness. Although maximum likelihood estimates can be obtained by solving iteratively the likelihood equations that are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates that are obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates that are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally and, hence, the well-known least square method is considered to be a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical researches have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors that follow a non-normal pattern. Through an extensive simulation it is shown that the modified maximum likelihood estimates of regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and are explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation, etc.
Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness
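The key linearization described above can be stated compactly. For an intractable nonlinear term g(z₍ᵢ₎) of the standardized ordered variates, the modified likelihood replaces it by the first two terms of its Taylor expansion about the corresponding population quantile t₍ᵢ₎ (a generic sketch of the idea; the paper's exact expressions for the skew t case are not reproduced here):

```latex
g\bigl(z_{(i)}\bigr) \;\approx\; g\bigl(t_{(i)}\bigr) + \bigl(z_{(i)} - t_{(i)}\bigr)\,g'\bigl(t_{(i)}\bigr)
\;=\; \alpha_i + \beta_i\, z_{(i)}, \qquad i = 1,\dots,n,
```

so that the modified likelihood equations become linear in the ordered observations and yield the closed-form modified maximum likelihood estimates.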
Procedia PDF Downloads 397
167 DIF-JACKET: A Thermal Protective Jacket for Firefighters
Authors: Gilda Santos, Rita Marques, Francisca Marques, João Ribeiro, André Fonseca, João M. Miranda, João B. L. M. Campos, Soraia F. Neves
Abstract:
Every year, an unacceptable number of firefighters are seriously burned during firefighting operations, with some of them eventually losing their lives. Although thermal protective clothing research and development has been searching for solutions to minimize firefighters' heat load and skin burns, currently available commercial solutions focus on solving isolated problems, for example, radiant heat or water-vapor resistance. Therefore, episodes of severe burns and heat strokes are still frequent. Taking this into account, a consortium composed of Portuguese entities has joined synergies to develop an innovative protective clothing system, following a procedure based on the application of numerical models to optimize the design and using a combination of protective clothing components disposed in different layers. Recently, it has been shown that Phase Change Materials (PCMs) can contribute to the reduction of potential heat hazards in fire extinguishing operations, and consequently, their incorporation into firefighting protective clothing has advantages. The greatest challenge is to integrate these materials without compromising garment ergonomics and, at the same time, complying with the International Standard for protective clothing for firefighters – laboratory test methods and performance requirements for wildland firefighting clothing. The incorporation of PCMs into the firefighter's protective jacket will result in the absorption of heat from the fire and consequently increase the time that the firefighter can be exposed to it. According to the project studies and developments, to favor a higher use of the PCM storage capacity and to take advantage of its high thermal inertia more efficiently, the PCM layer should be closer to the external heat source. Therefore, at this stage, to integrate PCMs into firefighting clothing, a mock-up of a vest specially designed to protect the torso (back, chest and abdomen) and to be worn over a fire-resistant jacket was envisaged. Different configurations of PCMs, as well as multilayer approaches, were studied using suitable joining technologies such as bonding, ultrasound, and radiofrequency. Concerning firefighters' protective clothing, it is important to balance heat protection and flame resistance with comfort parameters, namely, thermal and water-vapor resistances. The impact of the most promising solutions on thermal comfort was evaluated to refine the performance of the global solutions. Results obtained with an experimental bench-scale model and numerical simulation regarding the integration of PCMs in a vest designed as protective clothing for firefighters will be presented.
Keywords: firefighters, multilayer system, phase change material, thermal protective clothing
Procedia PDF Downloads 166
166 A Literature Review on the Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster
Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon
Abstract:
In a disaster event, sharing patient information between pre-hospital Emergency Medical Services (EMS) and hospital Emergency Departments (ED) is a complex process during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base in relation to communication between pre-hospital EMS and hospital ED professionals through the use of Information and Communication Technology (ICT). This study followed a systematic approach: six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and IEEE Xplore Digital Library) were comprehensively searched in January 2018, and a second search was completed in April 2020 to capture more recent publications. The study selection process was undertaken independently by the study authors. Both qualitative and quantitative studies were chosen that focused on factors which are positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analysed according to the key screening themes which emerged from the literature search. Twenty-two studies were included. Eleven studies employed quantitative methods, seven studies used qualitative methods, and four studies used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination. This theme reported that disaster plans are in place in hospitals, and in some cases, there are interagency agreements with pre-hospital and relevant stakeholders. However, the findings showed that the disaster plans highlighted in these studies lacked information regarding coordinated communication within and between the pre-hospital and hospital settings. (2) Communication systems used in disasters. This theme highlighted that although various communication systems are used between and within hospitals and pre-hospital services, technical issues have influenced communication between teams during disasters. (3) Integrated information management systems. This theme suggested the need for an integrated health information system which can help pre-hospital and hospital staff to record patient data and ensure the data are shared. (4) Disaster training and drills. While some studies analysed disaster drills and training, the majority of these studies were focused on hospital departments other than EMTs. These studies suggest the need for simulation-based disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of communication between EMS and hospital ED staff in relation to disaster response. The review shows that although different types of ICT are used, various issues remain which affect coordinated communication among the relevant professionals.
Keywords: communication, emergency communication services, emergency medical teams, emergency physicians, emergency nursing, paramedics, information and communication technology, communication systems
Procedia PDF Downloads 86
165 Numerical Modelling of the Influence of Meteorological Forcing on Water-Level in the Head Bay of Bengal
Authors: Linta Rose, Prasad K. Bhaskaran
Abstract:
Water-level information along the coast is very important for disaster management, navigation, shoreline management planning, coastal engineering and protection works, port and harbour activities, and for a better understanding of near-shore ocean dynamics. The water-level variation along a coast arises from various factors such as astronomical tides and meteorological and hydrological forcing. The study area is the Head Bay of Bengal, which is highly vulnerable to flooding events caused by monsoons, cyclones and sea-level rise. The study aims to explore the extent to which wind and surface pressure can influence water-level elevation, in view of the low-lying topography of the coastal zones in the region. The ADCIRC hydrodynamic model has been customized for the Head Bay of Bengal, discretized using flexible finite elements and validated against tide gauge observations. Monthly mean climatological wind and mean sea level pressure fields from ERA-Interim reanalysis data were used as input forcing to simulate water-level variation in the Head Bay of Bengal, in addition to tidal forcing. The output water-level was compared against that produced using tidal forcing alone, so as to quantify the contribution of meteorological forcing to the water-level. The average contribution of the meteorological fields to the water-level in January is 5.5% at a deep-water location and 13.3% at a coastal location. During the month of July, when the monsoon winds are strongest in this region, this increases to 10.7% and 43.1%, respectively, at the deep-water and coastal locations. The model output was tested by varying the input conditions of the meteorological fields in an attempt to quantify the relative significance of wind speed and wind direction on the water-level. Under uniform wind conditions, the results showed a higher contribution of the meteorological fields for south-west winds than for north-east winds when the wind speed was higher. A comparison of the spectral characteristics of the output water-level with that generated by tidal forcing alone showed additional modes with seasonal and annual signatures. Moreover, the non-linear monthly mode was found to be weaker than in the tide-only simulation, all of which points out that meteorological fields do not have much effect on the water-level at periods of less than a day and that they induce non-linear interactions between existing modes of oscillation. The study signifies the role of meteorological forcing under fair-weather conditions and points out that a combination of multiple forcing fields, including tides, wind, atmospheric pressure, waves, precipitation and river discharge, is essential for efficient and effective forecast modelling, especially during extreme weather events.
Keywords: ADCIRC, head Bay of Bengal, mean sea level pressure, meteorological forcing, water-level, wind
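One simple way to express the percentage contributions reported above is as the ratio of the meteorological residual (total run minus tide-only run) to the total signal. The Python sketch below uses that definition with toy series; the paper does not state its exact formula, so both the metric and the numbers are illustrative assumptions.

```python
import numpy as np

def met_contribution(eta_total, eta_tide):
    """Average percentage contribution of meteorological forcing to water level.

    eta_total: series simulated with tides + wind + pressure;
    eta_tide:  series from the tide-only run.
    """
    eta_total = np.asarray(eta_total)
    eta_tide = np.asarray(eta_tide)
    residual = eta_total - eta_tide
    return 100.0 * np.mean(np.abs(residual)) / np.mean(np.abs(eta_total))

# toy hourly series for one month (purely illustrative numbers)
t = np.arange(0, 30 * 24)
tide = 1.2 * np.sin(2 * np.pi * t / 12.42)           # dominant semidiurnal tide, m
surge = 0.15 * np.sin(2 * np.pi * t / (24 * 30))     # slow meteorological set-up, m
print(f"{met_contribution(tide + surge, tide):.1f}%")
```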
Procedia PDF Downloads 221
164 The Aspect of the Digital Formation in the Solar Community as One Prototype to Find the Algorithmic Sustainable Conditions in the Global Environment
Authors: Kunihisa Kakumoto
Abstract:
Purpose: Environmental problems are now raised at the global scale. The sprawl phenomenon beyond the natural limitation should be forecast beforehand in an algorithmic way so that the conditions of our social life can hopefully be kept within that natural limitation. Sustainable conditions for the globe must now be found that keep the balance between the capacity of nature and the demands of our social lives. The amount of water on the earth is limited; for this reason, sustainable conditions are strongly dependent on the capacity of water. The amount of water can be considered in relation to the area of green planting, because a certain volume of water can be obtained in forests where green planting is preserved. We can thus find the sustainable conditions for water in relation to the green planting area. The reduction of CO₂ by green planting is also possible. Possible Measures and Methods: Through many international conferences, the concept of the solar community as one prototype has been introduced in technical papers. An algorithmic trial calculation based on the basic concept of the solar community can be taken into consideration. The concept of the solar community is based on data collected from the solar model house. According to the algorithmic results for the prototype, simulation work at the global scale can be performed as algorithmic conversion results. This algorithmic study can be simulated through the amount of water, also in relation to the green planting area. Additionally, the emission of CO₂ in the solar community and the reduction of CO₂ by green planting can be calculated. On the basis of these calculations for the solar community, sustainable conditions for the globe can be simulated as conversion results in an algorithmic way. The digital formation of the solar community can also be taken into consideration on this occasion. Conclusion: To find sustainable conditions around the globe, the solar community has been taken into consideration as one prototype. The role of water is very important because the capacity of the water supply is very limited. At present, however, the cycle of the social community is not composed from the point of view of the natural mechanism. The simulative calculation of this study is framed by the limitation of the total water supply. According to this process, the total capacity of the water supply and the supportable number of residents and residential areas can be determined by the algorithmic calculation. For keeping enough water, the green planting areas are very important. The planting area is also very important for keeping the balance of CO₂. The simulative calculation can be performed for the relation between the emission and the reduction of CO₂ in the solar community. For finding this total balance and the sustainable conditions, the green planting area and the total amount of water can be recognized by the algorithmic simulative calculation. The study to find sustainable conditions can be performed by simulative calculations on the algorithmic model of the solar community as one prototype. The example of one prototype can be in balance. The activity of social life must remain within the capacity of the natural mechanism. The available capacity of the natural environment in our world is very limited.
Keywords: the solar community, the sustainable condition, the natural limitation, the algorithmic calculation
Procedia PDF Downloads 113
163 Experimental and Numerical Investigation of Fracture Behavior of Foamed Concrete Based on Three-Point Bending Test of Beams with Initial Notch
Authors: M. Kozłowski, M. Kadela
Abstract:
Foamed concrete is known for its low self-weight and excellent thermal and acoustic properties. For many years, it has been used worldwide for insulation of foundations and roof tiles, as backfill to retaining walls, for sound insulation, etc. However, in recent years it has also become a promising material for structural purposes, e.g. for the stabilization of weak soils. Due to the favorable properties of foamed concrete, much interest and many studies have been devoted to analyzing its strength and its mechanical, thermal and acoustic properties. However, these studies do not cover the investigation of fracture energy, which is the core factor governing the damage and fracture mechanisms; only a limited number of publications can be found in the literature. The paper presents the results of an experimental investigation and numerical campaign on foamed concrete based on three-point bending tests of beams with an initial notch. The first part of the paper presents the results of a series of static loading tests performed to investigate the fracture properties of foamed concrete of varying density. Beam specimens with dimensions of 100×100×840 mm with a central notch were tested in three-point bending. Subsequently, the remaining halves of the specimens, with dimensions of 100×100×420 mm, were tested again as un-notched beams in the same set-up with a reduced distance between supports. The tests were performed in a hydraulic displacement-controlled testing machine with a load capacity of 5 kN. Apart from measuring the load and mid-span displacement, the crack mouth opening displacement (CMOD) was monitored. Based on the load–displacement curves of the notched beams, the values of fracture energy and tensile stress at failure were calculated. The flexural tensile strength was obtained on the un-notched beams with dimensions of 100×100×420 mm. Moreover, cube specimens of 150×150×150 mm were tested in compression to determine the compressive strength. The second part of the paper deals with the numerical investigation of the fracture behavior of the beams with an initial notch presented in the first part of the paper. The Extended Finite Element Method (XFEM) was used to simulate and analyze the damage and fracture process. The influence of meshing and of the variation of mechanical properties on the results was investigated. The numerical models correctly simulate the behavior of the beams observed during three-point bending. The numerical results show that XFEM can be used to simulate different fracture toughnesses of foamed concrete and different fracture types. Using XFEM and computer simulation technology allows for reliable approximation of the load-bearing capacity and damage mechanisms of beams made of foamed concrete, which provides some foundations for realistic structural applications.
Keywords: foamed concrete, fracture energy, three-point bending, XFEM
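As a reminder of how fracture energy is typically extracted from such a test, the sketch below integrates a load-displacement curve and divides by the ligament area (work-of-fracture method). The curve shape, the 30 mm notch depth, and the omission of the self-weight correction are illustrative simplifications of ours, not the paper's data.

```python
import numpy as np

def fracture_energy(load_N, deflection_m, ligament_area_m2):
    """Work-of-fracture estimate G_F = W / A_lig from a notched bending test."""
    work = np.trapz(load_N, deflection_m)      # area under the P-delta curve, J
    return work / ligament_area_m2             # N/m (J/m^2)

# illustrative softening curve for a 100 x 100 mm beam with a 30 mm notch
delta = np.linspace(0.0, 2.0e-3, 200)                            # m
load = 800.0 * (delta / 2.0e-4) * np.exp(1.0 - delta / 2.0e-4)   # peak load ~800 N
A_lig = 0.100 * (0.100 - 0.030)                                  # ligament area, m^2
print(f"G_F ~ {fracture_energy(load, delta, A_lig):.1f} N/m")
```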
Procedia PDF Downloads 303
162 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can manipulate an expressive number of encapsulated elements. Despite the great potential of these SPICE-based simulators in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, they are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that for the field update, the stability of the complete FDTD procedure depends on factors other than just the stability of the Yee algorithm, because the FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are Absorbing Boundary Conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulations at ultra-wide frequencies. The models of the resistive source, the resistor, the capacitor, the inductor, and the diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells which discretize the lumped components. In this way, the aim is to find an ideal cell size so that the analysis in the FDTD environment is in greater agreement with the expected circuit behavior, while maintaining the stability conditions of this method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the computational implementation of the models is carried out in the Matlab® environment. The Mur boundary condition is used as the absorbing boundary of the FDTD method. The model is validated by comparing the electric field values and the currents in the components obtained by the FDTD method with the analytical results obtained using circuit parameters.
Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis
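For orientation, the canonical lumped-element modification for a resistor R filling one Yee cell is the semi-implicit E-field update commonly used in the LE-FDTD literature (the form below is quoted from memory as a generic sketch, with notation that is not taken from the paper):

```latex
E_z^{\,n+1} = \frac{1-\beta}{1+\beta}\,E_z^{\,n}
 + \frac{\Delta t/\varepsilon}{1+\beta}\,\bigl(\nabla\times\mathbf{H}\bigr)_z^{\,n+1/2},
\qquad
\beta = \frac{\Delta t\,\Delta z}{2R\,\varepsilon\,\Delta x\,\Delta y},
```

with analogous semi-implicit updates for the capacitor, inductor and the exponential diode characteristic. The β term is what couples the cell dimensions to the update, which is exactly the dependence the parametric analysis of the Yee cell size explores.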
Procedia PDF Downloads 155
161 Integration of ICF Walls as Diurnal Solar Thermal Storage with Microchannel Solar Assisted Heat Pump for Space Heating and Domestic Hot Water Production
Authors: Mohammad Emamjome Kashan, Alan S. Fung
Abstract:
In Canada, more than 32% of the total energy demand is related to the building sector. Therefore, there is a great opportunity for Greenhouse Gas (GHG) reduction by integrating solar collectors to provide the building heating load and domestic hot water (DHW). Despite the cold winter weather, Canada has a good number of sunny and clear days that can be considered for diurnal solar thermal energy storage. Due to the energy mismatch between building heating load and solar irradiation availability, relatively big storage tanks are usually needed to store solar thermal energy during the daytime and then use it at night. On the other hand, water tanks occupy huge space, and especially in big cities, space is relatively expensive. This project investigates the possibility of using a specific building construction material (ICF – Insulated Concrete Form) as diurnal solar thermal energy storage integrated with a heat pump and a microchannel solar thermal (MCST) collector. Not much literature has studied the application of pre-existing building walls as active solar thermal energy storage as a feasible and industrialized solution for the solar thermal mismatch. By using ICF walls that are integrated into the building envelope instead of big storage tanks, excess solar energy can be stored in the concrete core of the ICF wall, which has EPS insulation layers on both sides to retain the thermal energy. In this study, two solar-based systems are designed and simulated in the Transient System Simulation program (TRNSYS) to compare the thermal storage benefits of ICF walls against the system without ICF walls. In this study, the heating load and DHW of a Canadian single-family house located in London, Ontario, are provided by solar-based systems. The proposed system integrates the MCST collector, a water-to-water heat pump, a preheat tank, the main tank, fan coils (to deliver the building heating load), and ICF walls. During the day, excess solar energy is stored in the ICF walls (charging cycle). Thermal energy can be recovered from the ICF walls when the preheat tank temperature drops below that of the ICF wall (discharging process) to increase the COP of the heat pump. The evaporator of the heat pump is coupled with the preheat tank. The warm water provided by the heat pump is stored in the second tank. Fan coil units are in contact with the tank to provide the building heating load. DHW is also provided from the main tank. It is found that the system with ICF walls, with an average solar fraction of 82%-88%, can cover the whole heating demand plus DHW for nine months and has a 10-15% higher average solar fraction than the system without ICF walls. Sensitivity analysis for different parameters influencing the solar fraction is discussed in detail.
Keywords: net-zero building, renewable energy, solar thermal storage, microchannel solar thermal collector
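For reference, the solar fraction quoted above is commonly defined as the share of the load not met by auxiliary energy (one standard definition; the paper may aggregate it over different periods):

```latex
SF = \frac{Q_{\mathrm{load}} - Q_{\mathrm{aux}}}{Q_{\mathrm{load}}} = 1 - \frac{Q_{\mathrm{aux}}}{Q_{\mathrm{load}}},
```

where Q_load is the combined space-heating and DHW demand over the period of interest and Q_aux is the auxiliary (non-solar) energy supplied.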
Procedia PDF Downloads 121
160 Modulation of Receptor-Activation Due to Hydrogen Bond Formation
Authors: Sourav Ray, Christoph Stein, Marcus Weber
Abstract:
A new class of drug candidates, initially derived from mathematical modeling of ligand-receptor interactions, activates the μ-opioid receptor (MOR) preferentially at acidic extracellular pH levels, as present in injured tissues. This is of commercial interest because it may preclude the adverse effects of conventional MOR agonists like fentanyl, which include, but are not limited to, addiction, constipation, sedation, and apnea. Animal studies indicate the importance of taking the pH value of the chemical environment of the MOR into account when designing new drugs. Hydrogen bonds (HBs) play a crucial role in stabilizing protein secondary structure and molecular interactions, such as ligand-protein interactions. These bonds may depend on the pH value of the chemical environment. For the MOR, the antagonist naloxone and the agonist [D-Ala2,N-Me-Phe4,Gly5-ol]-enkephalin (DAMGO) form HBs with the ionizable residue HIS 297 at physiological pH to modulate signaling. However, such interactions were markedly reduced at acidic pH. Although fentanyl-induced signaling is also diminished at acidic pH, HBs with the HIS 297 residue are not observed at either acidic or physiological pH for this strong agonist of the MOR. Molecular dynamics (MD) simulations can provide greater insight into the interaction between the ligand of interest and the HIS 297 residue. Amino acid protonation states are adjusted to model the difference in system acidity. Unbiased and unrestrained MD simulations were performed, with the ligand in the proximity of the HIS 297 residue. Ligand-receptor complexes were embedded in a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC) bilayer to mimic the membrane environment. The occurrence of HBs between the different ligands and the HIS 297 residue of the MOR at acidic and physiological pH values was tracked across the various simulation trajectories. No HB formation was observed between fentanyl and the HIS 297 residue at either acidic or physiological pH. Naloxone formed some HBs with HIS 297 at pH 5, but no such HBs were noted at pH 7. Interestingly, DAMGO displayed an opposite, yet more pronounced, HB formation trend compared to naloxone. Whereas a marginal number of HBs could be observed even at pH 5, HBs with HIS 297 were more stable and widely present at pH 7. HB formation plays no role in the interaction of fentanyl, and only a marginal role in the interaction of naloxone, with the HIS 297 residue of the MOR. However, HBs play a significant role in the DAMGO and HIS 297 interaction. After DAMGO administration, these HBs might be crucial for the remediation of opioid tolerance and the restoration of opioid sensitivity. Although experimental studies concur with our observations regarding the influence of HB formation on the fentanyl and DAMGO interactions with HIS 297, the same could not be conclusively stated for naloxone. Therefore, some other supplementary interactions might be responsible for the modulation of MOR activity by naloxone binding at pH 7 but not at pH 5. Further elucidation of the mechanism of naloxone action on the MOR could assist in the formulation of cost-effective naloxone-based treatments for opioid overdose or opioid-induced side effects.
Keywords: effect of system acidity, hydrogen bond formation, opioid action, receptor activation
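Hydrogen-bond occurrence in MD trajectories is usually tracked with a geometric criterion of the kind sketched below (plain NumPy, per frame). The 3.5 Å / 150° cut-offs are common defaults and the coordinates are toy values; neither is taken from the study.

```python
import numpy as np

def is_hydrogen_bond(donor, hydrogen, acceptor,
                     dist_cutoff=3.5, angle_cutoff=150.0):
    """Geometric hydrogen-bond check for a single frame.

    donor, hydrogen, acceptor: xyz coordinates in Angstrom.
    Criterion: donor-acceptor distance < dist_cutoff and D-H...A angle > angle_cutoff.
    """
    donor, hydrogen, acceptor = map(np.asarray, (donor, hydrogen, acceptor))
    d_da = np.linalg.norm(acceptor - donor)
    v1 = donor - hydrogen
    v2 = acceptor - hydrogen
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return d_da < dist_cutoff and angle > angle_cutoff

# e.g. a ligand N-H donating to the HIS 297 imidazole nitrogen (toy coordinates)
print(is_hydrogen_bond([0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.3, 2.9]))
```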
Procedia PDF Downloads 176
159 Design Development and Qualification of a Magnetically Levitated Blower for CO₂ Scrubbing in Manned Space Missions
Authors: Larry Hawkins, Scott K. Sakakura, Michael J. Salopek
Abstract:
The Marshall Space Flight Center is designing and building a next-generation CO₂ removal system, the Four Bed Carbon Dioxide Scrubber (4BCO₂), which will use the International Space Station (ISS) as a testbed. The current ISS CO₂ removal system has faced many challenges in both performance and reliability. Given that CO₂ removal is an integral Environmental Control and Life Support System (ECLSS) subsystem, the 4BCO₂ scrubber has been designed to eliminate the shortfalls identified in the current ISS system. One of the key required upgrades was to improve the performance and reliability of the blower that provides the airflow through the CO₂ sorbent beds. A magnetically levitated blower, capable of higher airflow and pressure than the previous system, was developed to meet this need. The design and qualification testing of this next-generation blower are described here. The new blower features a high-efficiency permanent magnet motor, a five-axis active magnetic bearing system, and a compact controller containing both a variable speed drive and a magnetic bearing controller. The blower uses a centrifugal impeller to pull air from the inlet port and drive it through an annular space around the motor and magnetic bearing components to the exhaust port. Technical challenges of the blower and controller development include survival of the blower system under launch random vibration loads, operation in microgravity, packaging under strict size and weight requirements, and successful operation during 4BCO₂ operational changeovers. An ANSYS structural dynamics model of the controller was used to predict the response to the NASA-defined random vibration spectrum and to drive minor design changes. The simulation results are compared to measurements from qualification testing of the controller on a vibration table. Predicted blower performance is compared to flow-loop testing measurements. The dynamic response of the system to valve changeovers is presented and discussed using high-bandwidth measurements from dynamic pressure probes, magnetic bearing position sensors, and actuator coil currents. The results presented in the paper show that the blower controller will survive launch vibration levels, the blower flow meets the requirements, and the magnetic bearings have adequate load capacity and control bandwidth to maintain the desired rotor position during the valve changeover transients.
Keywords: blower, carbon dioxide removal, environmental control and life support system, magnetic bearing, permanent magnet motor, validation testing, vibration
Procedia PDF Downloads 136
158 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items
Authors: Wen-Chung Wang, Xue-Lan Qiu
Abstract:
Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format of ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements aim at the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because it has good measurement properties, in which the log-odds of preferring statement A to statement B are defined as a competition between two parts: the sum of a person's level on the latent trait that statement A measures and statement A's utility, and the sum of the person's level on the latent trait that statement B measures and statement B's utility. The RIM has been extended to polytomous responses, such as preferring statement A strongly, preferring statement A, preferring statement B, and preferring statement B strongly. To promote these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., my life is empty) many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate. In the first control method, items were frozen when their statements had been administered more than a prespecified number of times. In the second control method, a random component was added to control the contribution of the information at different stages of the MCAT. The second control method was found to outperform the first control method in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) the maximum Fisher information method without WPSE control, (c) the maximum Fisher information method with the first control method, and (d) the maximum Fisher information method with the second control method. These four methods were applied to two real tests: one was a work survey with dichotomous MPC items, and the other was a career interests survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, which was defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful and that there was no loss in measurement proficiency when the control methods were implemented; among the four methods, the last method performed the best.
Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison
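The competition described above can be written out explicitly. With θₐ and θ_b denoting the respondent's levels on the traits measured by statements A and B, and δ_A, δ_B the statement utilities (our notation; it simply restates the definition in the text), the RIM for a dichotomous MPC item is:

```latex
\log\frac{P(\text{A} \succ \text{B})}{P(\text{B} \succ \text{A})}
 = \bigl(\theta_a + \delta_A\bigr) - \bigl(\theta_b + \delta_B\bigr).
```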
Procedia PDF Downloads 247
157 Multiphase Equilibrium Characterization Model For Hydrate-Containing Systems Based On Trust-Region Method Non-Iterative Solving Approach
Authors: Zhuoran Li, Guan Qin
Abstract:
A robust and efficient compositional equilibrium characterization model for hydrate-containing systems is required, especially for time-critical simulations such as subsea pipeline flow assurance analysis, compositional simulation in hydrate reservoirs, etc. A multiphase flash calculation framework, which combines a Gibbs energy minimization function and the cubic plus association (CPA) EoS, is developed to describe the highly non-ideal phase behavior of hydrate-containing systems. A non-iterative eigenvalue-problem-solving approach for the trust-region sub-problem is selected to guarantee efficiency. The developed flash model is based on the state-of-the-art objective function proposed by Michelsen to minimize the Gibbs energy of the multiphase system. A hydrate-containing system always contains polar components (such as water and hydrate inhibitors), which introduce hydrogen bonds that influence phase behavior. Thus, the CPA EoS is utilized to compute the thermodynamic parameters. The solid solution theory proposed by van der Waals and Platteeuw is applied to represent the hydrate phase parameters. The trust-region method, combined with the non-iterative eigenvalue-problem-solving approach for the trust-region sub-problem, is utilized to ensure fast convergence. The accuracy of the developed multiphase flash model is validated against three available models (one published and two commercial models). Hundreds of published equilibrium experimental data points for hydrate-containing systems were collected to act as the reference group for the accuracy test. The accuracy comparison shows that our model performs better than two of the models and has calculation accuracy comparable to CSMGem. An efficiency test has also been carried out. Because the trust-region method determines the direction and size of the optimization step simultaneously, fast solution progress can be obtained. The comparison results show that fewer iterations are needed to optimize the objective function with trust-region methods than with line-search methods. The non-iterative eigenvalue-problem approach also computes faster than the conventional iterative solving algorithm for the trust-region sub-problem, further improving the calculation efficiency. A new thermodynamic framework for the multiphase flash model for hydrate-containing systems has been constructed in this work. Sensitivity analysis and numerical experiments have been carried out to prove the accuracy and efficiency of this model. Furthermore, this model is simple to implement on the basis of the thermodynamic models currently used in the oil and gas industry.
Keywords: equation of state, hydrates, multiphase equilibrium, trust-region method
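For context, the trust-region sub-problem referred to above is the standard constrained minimization of a local quadratic model (generic textbook form; here f, g and B are the value, gradient and Hessian approximation of the Gibbs-energy objective at the current iterate, and Δ is the trust radius):

```latex
\min_{\mathbf{p}\in\mathbb{R}^{n}} \; m(\mathbf{p}) = f + \mathbf{g}^{\mathsf T}\mathbf{p}
 + \tfrac{1}{2}\,\mathbf{p}^{\mathsf T}\mathbf{B}\,\mathbf{p}
\quad \text{subject to} \quad \lVert \mathbf{p}\rVert \le \Delta,
```

whose global minimizer can be characterized through an eigenvalue problem; this is what the non-iterative solver mentioned above exploits instead of the conventional iterative search.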
Procedia PDF Downloads 173
156 Passive Greenhouse Systems in Poland
Authors: Magdalena Grudzińska
Abstract:
Passive systems allow solar radiation to be converted into thermal energy thanks to appropriate building construction. Greenhouse systems are particularly worth attention due to the low cost of their realization and their strong architectural appeal. The paper discusses the energy effects of using passive greenhouse systems, such as glazed balconies, in an example residential building. The research was carried out for five localities in Poland belonging to climatic zones that differ in external air temperature and insolation: Koszalin, Poznań, Lublin, Białystok, and Zakopane. The analysed apartment had a floor area of approximately 74 m². Three thermal zones were distinguished in the flat: the balcony, the room adjacent to it, and the remaining space, for which different internal conditions were defined. Calculations of the energy demand were made using a dynamic simulation program based on the control volume method. The climatic data were represented by Typical Meteorological Years, prepared on the basis of source data collected from 1971 to 2000. In each locality, the introduction of a passive greenhouse system led to a lower heating demand in the apartment and a shortening of the heating season. The smallest effectiveness of passive solar energy systems was noted in Białystok: heating demand was reduced there by 14.5% and the heating season remained the longest, due to low external air temperatures and small sums of solar radiation intensity. In Zakopane, energy savings came to 21% and the heating season was reduced to 107 days, thanks to the greatest insolation during winter. The introduction of greenhouse systems caused an increase in cooling demand in the warmer part of the year, but total energy demand declined in each of the discussed places. However, potential energy savings are smaller if the building's annual life cycle is taken into consideration, ranging from 5.6% up to 14%. Koszalin and Zakopane are the localities in which the greenhouse system achieves the best energy results. It should be emphasized that favourable conditions for introducing greenhouse systems arise from different climatic conditions: in the seaside area (Koszalin) they result from high temperatures in the heating season and the smallest insolation in the summer period, while in the mountainous area (Zakopane) they result from high insolation in winter and low temperatures in summer. In central and central-eastern Poland, active systems (such as solar energy collectors or photovoltaic panels) could be more beneficial, due to high insolation during summer. It is assessed that passive systems do not eliminate the need for traditional heating in Poland. They can, however, substantially contribute to a lower use of non-renewable fuels and a shorter heating season. The calculations showed how the effectiveness of greenhouse systems varies with climatic conditions and made it possible to identify the areas most suitable for the passive use of solar radiation. Keywords: solar energy, passive greenhouse systems, glazed balconies, climatic conditions
Procedia PDF Downloads 368
155 Item-Trait Pattern Recognition of Replenished Items in Multidimensional Computerized Adaptive Testing
Authors: Jianan Sun, Ziwen Ye
Abstract:
Multidimensional computerized adaptive testing (MCAT) is a popular research topic in psychometrics. It is important for practitioners to know clearly the item-trait patterns of administered items when a test such as an MCAT is operated. Item-trait pattern recognition refers to detecting which latent traits in a psychological test are measured by each of the specified items. If the item-trait patterns of the replenished items in an MCAT item pool are well detected, the interpretability of the items improves, which in turn helps the abilities of the examinees taking the MCAT to be estimated accurately. This research addresses the item-trait pattern recognition problem for replenished items in an MCAT item pool from the perspective of statistical variable selection. The popular multidimensional item response theory model, the multidimensional two-parameter logistic model, is assumed to fit the MCAT response data. The proposed method uses the least absolute shrinkage and selection operator (LASSO) to detect the item-trait patterns of replenished items, based on the item responses and ability estimates of examinees collected from a designed MCAT procedure. Several advantages of the proposed method are outlined. First, the proposed method does not strictly depend on the relative order between the replenished items and the selected operational items, so replenished items can be mixed into the operational items in any reasonable order, for example to satisfy content constraints or other test requirements. Second, the LASSO used in this research improves the interpretability of multidimensional replenished items in MCAT. Third, the proposed method exploits the advantages of shrinkage-based variable selection, so it helps check the item quality and key dimensional features of replenished items and saves more time and labor in response data collection than the traditional factor analysis method. Moreover, the proposed method ensures that the recognized dimensions of replenished items are consistent with the dimensions of the operational items in the MCAT item pool. Simulation studies are conducted to investigate the performance of the proposed method under different conditions, varying the dimensionality of the item pool, latent trait correlation, item discrimination, test length, and item selection criterion in MCAT. Results show that the proposed method can accurately detect the item-trait patterns of replenished items in two-dimensional and three-dimensional item pools. Selecting enough operational items by Bayesian A-optimality from an item pool consisting of highly discriminating items improves the recognition accuracy of item-trait patterns of replenished items. The pattern recognition accuracy for conditions with correlated traits is better than for those with independent traits, especially for item pools consisting of comparatively low-discriminating items. To sum up, the proposed data-driven method based on the LASSO can accurately and efficiently detect the item-trait patterns of replenished items in MCAT. Keywords: item-trait pattern recognition, least absolute shrinkage and selection operator, multidimensional computerized adaptive testing, variable selection
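The core idea (regressing the responses to a replenished item on the examinees' ability estimates with an L1 penalty and reading the item-trait pattern off the non-zero coefficients) can be sketched as follows. This is a simplified stand-in, not the authors' procedure: the simulated abilities, item parameters, penalty strength C, and the zero threshold are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_persons, n_traits = 2000, 3

# Ability estimates of examinees (assumed known from the MCAT procedure).
theta = rng.normal(size=(n_persons, n_traits))

# True pattern of the replenished item: it loads on traits 0 and 2 only.
a_true = np.array([1.2, 0.0, 0.8])   # discriminations (assumed)
d_true = -0.3                        # intercept/easiness (assumed)

p = 1.0 / (1.0 + np.exp(-(theta @ a_true + d_true)))
y = rng.binomial(1, p)               # simulated responses to the replenished item

# LASSO (L1-penalized) logistic regression on the ability estimates:
# non-zero coefficients indicate which traits the item measures.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(theta, y)

pattern = (np.abs(lasso.coef_.ravel()) > 1e-3).astype(int)
print("estimated coefficients:", np.round(lasso.coef_.ravel(), 2))
print("detected item-trait pattern:", pattern)   # expected: [1, 0, 1]
```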
Procedia PDF Downloads 131
154 The Solid-Phase Sensor Systems for Fluorescent and SERS-Recognition of Neurotransmitters for Their Visualization and Determination in Biomaterials
Authors: Irina Veselova, Maria Makedonskaya, Olga Eremina, Alexandr Sidorov, Eugene Goodilin, Tatyana Shekhovtsova
Abstract:
Catecholamines such as dopamine, norepinephrine, and epinephrine are the principal neurotransmitters in the sympathetic nervous system. Catecholamines and their metabolites are considered important markers of socially significant diseases such as atherosclerosis, diabetes, coronary heart disease, carcinogenesis, and Alzheimer's and Parkinson's diseases. Currently, neurotransmitters can be studied via electrochemical and chromatographic techniques that allow their characterization and quantification, although these techniques provide only crude spatial information. Moreover, the determination of catecholamines in biological materials is complicated by their low normal concentrations (~1 nM), which may fall by a further order of magnitude in some disorders. In addition, in blood they are rapidly oxidized by monoamine oxidases from thrombocytes; for this reason, the determination of neurotransmitter metabolism indicators in an organism should be very rapid (15-30 min), especially in critical states. Unfortunately, modern instrumental analysis does not offer a complete solution to this problem: despite its high sensitivity and selectivity, HPLC-MS cannot provide sufficiently rapid analysis, while enzymatic biosensors and immunoassays for the determination of the considered analytes lack sufficient sensitivity and reproducibility. Fluorescent and SERS sensors remain a compelling technology for approaching the general problem of selective neurotransmitter detection. In recent years, a number of catecholamine sensors have been reported, including RNA aptamers, fluorescent ribonucleopeptide (RNP) complexes, and boronic acid-based synthetic receptors, as well as sensors operated in a turn-off mode. In this work we present fluorescent and SERS turn-on sensor systems based on bio- or chemorecognizing nanostructured films {chitosan/collagen-Tb/Eu/Cu-nanoparticles-indicator reagents} that provide selective recognition, visualization, and sensing of the above-mentioned catecholamines at nanomolar concentrations in biomaterials (cell cultures, tissues, etc.). We have (1) developed optically transparent porous films and gels of chitosan/collagen; (2) functionalized the surface with 'recognizer' molecules (by impregnation and immobilization of components of the indicator systems: biorecognizing and auxiliary reagents); and (3) performed computer simulations for the theoretical prediction and interpretation of some properties of the developed materials and of the analytical signals obtained in biomaterials. We are grateful for the financial support of this research from the Russian Foundation for Basic Research (grants no. 15-03-05064 a and 15-29-01330 ofi_m). Keywords: biomaterials, fluorescent and SERS-recognition, neurotransmitters, solid-phase turn-on sensor system
Procedia PDF Downloads 406
153 Modeling and Analysis of Drilling Operation in Shale Reservoirs with Introduction of an Optimization Approach
Authors: Sina Kazemi, Farshid Torabi, Todd Peterson
Abstract:
Drilling in shale formations is frequently time-consuming, challenging, and fraught with mechanical failures such as stuck pipes or the hole packing off when the cuttings removal rate is not sufficient to clean the bottom hole. Crossing heavy oil shale and sand reservoirs with active shale and microfractures is generally associated with severe fluid losses, causing a reduction in the rate of cuttings removal. These circumstances compromise a well's integrity and result in a lower rate of penetration (ROP). This study presents collective results of field studies and theoretical analysis conducted on data from South Pars and North Dome in an Iran-Qatar offshore field. Solutions to complications related to drilling in shale formations are proposed by systematically analyzing and applying modeling techniques to selected field mud-logging data. Field measurements during actual drilling operations indicate that in a shale formation where the return flow of polymer mud was almost lost in the upper dolomite layer, hole-cleaning performance and ROP progressively improve when higher string rotation speeds are initiated. Likewise, it was observed that this effect minimized the rotational torque and improved well integrity during the subsequent casing run. Given similar geologic conditions and drilling operations in reservoirs targeting shale as the producing zone, such as the Bakken formation within the Williston Basin and Lloydminster, Saskatchewan, a drill-bench dynamic modeling simulation was used to simulate borehole cleaning efficiency and mud optimization. The results obtained by altering RPM (string revolutions per minute) at the same pump rate and with optimized mud properties exhibit a positive correlation with the field measurements. The field investigation and the model developed in this report show that increasing the string rotation speed, as far as geomechanics and drill bit conditions permit, can minimize the risk of mechanically stuck pipe while reaching a higher-than-expected ROP in shale formations. Based on the modeling and field data analysis, optimized drilling parameters and hole-cleaning procedures are suggested for minimizing the risk of the hole packing off and for enhancing well integrity in shale reservoirs. Whereas optimizing ROP at a lower pump rate maintains wellbore stability, it also saves time for the operator while reducing carbon emissions and the fatigue of mud motors and power-supply engines. Keywords: ROP, circulating density, drilling parameters, return flow, shale reservoir, well integrity
Procedia PDF Downloads 87
152 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows
Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican
Abstract:
This paper outlines the design of a simulator that allows clinical workflows through a pathology laboratory to be optimised and improves the laboratory's efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty pinpointing and anticipating issues in the clinical workflow until tests are running late or in error; it can be difficult to identify the cause and even more difficult to predict issues before they arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or in a given hour. If scenarios could be modelled using past information and known variables, pathology laboratories could initiate resource preparations, e.g. printing specimen labels or activating a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, the current tests being performed, results being validated, and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic-light colour-coding system is used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow), allowing pathologists to see clearly where there are issues and bottlenecks in the process. Graphs are also used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late, or in error. Clicking on potentially late samples displays more detailed information about those samples, the tests that still need to be performed on them, and their urgency level. This allows any issues to be resolved quickly and, in the case of potentially late samples, helps ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages, and generate the status graphs in real time, with the live information extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. 'Bots' would be used to control the flow of specimens through each step of the process. Like existing software-agent technology, these bots would be configurable in order to simulate different situations that may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results. Keywords: laboratory-process, optimization, pathology, computer simulation, workflow
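The flow-and-status logic described above can be prototyped as a small discrete-event simulation. The Python/SimPy sketch below pushes specimens through three assumed stages (reception, testing, validation) and reports a traffic-light status from queue lengths; the stage names, capacities, service times, arrival rate, and colour thresholds are all illustrative assumptions, and the actual simulator is a JavaScript single-page application driven by an Oracle database.

```python
import random
import simpy

STAGES = ["reception", "testing", "validation"]          # assumed lab stages
CAPACITY = {"reception": 2, "testing": 3, "validation": 1}
SERVICE_MIN = {"reception": 2, "testing": 15, "validation": 5}

def traffic_light(queue_len):
    """Assumed thresholds for the colour-coded flow indicator."""
    if queue_len < 3:
        return "green"
    return "orange" if queue_len < 6 else "red"

def specimen(env, name, resources):
    # A specimen passes through every stage, waiting for a free technician/analyser.
    for stage in STAGES:
        with resources[stage].request() as req:
            yield req
            yield env.timeout(random.expovariate(1.0 / SERVICE_MIN[stage]))

def arrivals(env, resources, mean_interarrival=4):
    # Acts like a configurable 'bot' feeding specimens into the laboratory.
    i = 0
    while True:
        yield env.timeout(random.expovariate(1.0 / mean_interarrival))
        i += 1
        env.process(specimen(env, f"S{i}", resources))

def monitor(env, resources, period=30):
    while True:
        yield env.timeout(period)
        status = {s: traffic_light(len(resources[s].queue)) for s in STAGES}
        print(f"t={env.now:5.0f} min  flow status: {status}")

random.seed(0)
env = simpy.Environment()
res = {s: simpy.Resource(env, capacity=CAPACITY[s]) for s in STAGES}
env.process(arrivals(env, res))
env.process(monitor(env, res))
env.run(until=8 * 60)            # one 8-hour shift
```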
Procedia PDF Downloads 286
151 Fast and Non-Invasive Patient-Specific Optimization of Left Ventricle Assist Device Implantation
Authors: Huidan Yu, Anurag Deb, Rou Chen, I-Wen Wang
Abstract:
The use of left ventricle assist devices (LVADs) has been a proven and effective therapy for patients with severe end-stage heart failure. Due to the limited availability of suitable donor hearts, LVADs will probably become the alternative solution for patients with heart failure in the near future. While the LVAD is being continuously improved toward enhanced performance, increased device durability, and reduced size, a better understanding of implantation management becomes critical in order to achieve better long-term blood supply and fewer post-surgical complications such as thrombus generation. Important issues related to LVAD implantation include the location of the outflow grafting (OG), the angle of the OG, the combination of LVAD and native heart pumping, uniform or pulsatile flow at the OG, etc. We have hypothesized that the optimal implantation of an LVAD is patient specific. To test this hypothesis, we employ a novel in-house computational modeling technique, named InVascular, to conduct a systematic evaluation of cardiac output at the aortic arch, together with other pertinent hemodynamic quantities, for each patient under various implantation scenarios, aiming at an optimal implantation strategy. InVascular is a powerful computational modeling technique that integrates unified mesoscale modeling for both image segmentation and fluid dynamics with cutting-edge GPU parallel computing. It first segments the aorta from the patient's CT image, then seamlessly feeds the extracted morphology, together with the velocity waveform from echo ultrasound images of the same patient, to the computational model to quantify 4-D (time + space) velocity and pressure fields. Using one NVIDIA Tesla K40 GPU card, InVascular completes a computation from CT image to 4-D hemodynamics within 30 minutes; it therefore has great potential for massive numerical simulation and analysis. The systematic evaluation for one patient includes three OG anastomosis sites (ascending aorta, descending thoracic aorta, and subclavian artery), three combinations of LVAD and native heart pumping (1:1, 1:2, and 1:3), three angles of OG anastomosis (inclined upward, perpendicular, and inclined downward), and two LVAD inflow conditions (uniform and pulsatile). The optimal LVAD implantation is suggested through a comprehensive analysis of the cardiac output and related hemodynamics from the simulations over the fifty-four scenarios. To confirm the hypothesis, five randomly selected patient cases will be evaluated. Keywords: graphic processing unit (GPU) parallel computing, left ventricle assist device (LVAD), lumped-parameter model, patient-specific computational hemodynamics
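The fifty-four scenarios follow directly from the four factors listed above (3 x 3 x 3 x 2 = 54). A minimal sketch of how the scenario grid could be enumerated is shown below; the evaluate_scenario placeholder is hypothetical, since in the actual workflow each scenario would be quantified by an InVascular run, which is not reproduced here.

```python
from itertools import product

# The 3 x 3 x 3 x 2 = 54 implantation scenarios described in the abstract.
og_sites   = ["ascending aorta", "descending thoracic aorta", "subclavian artery"]
pump_ratio = ["1:1", "1:2", "1:3"]        # LVAD : native heart pumping
og_angles  = ["inclined upward", "perpendicular", "inclined downward"]
inflow     = ["uniform", "pulsatile"]

def evaluate_scenario(site, ratio, angle, flow):
    """Placeholder: in the actual workflow this would launch a patient-specific
    hemodynamics run (segmentation + GPU flow solver) and return cardiac output
    at the aortic arch and related quantities."""
    return {"cardiac_output_L_per_min": None}

scenarios = list(product(og_sites, pump_ratio, og_angles, inflow))
print(len(scenarios), "scenarios")        # -> 54
results = {s: evaluate_scenario(*s) for s in scenarios}
```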
Procedia PDF Downloads 134
150 Density Functional Theory Study of the Surface Interactions between Sodium Carbonate Aerosols and Fission Products
Authors: Ankita Jadon, Sidi Souvi, Nathalie Girault, Denis Petitprez
Abstract:
The interaction of fission products (FP) with sodium carbonate (Na₂CO₃) aerosols is of high safety concern because of the aerosols' potential role in mitigating the radiological source term by trapping FPs. In a sodium-cooled fast nuclear reactor (SFR) experiencing a severe accident, sodium (Na) aerosols can be formed after the ejection of the liquid Na coolant inside the containment. The surface interactions between these aerosols and different FP species have been investigated with ab initio density functional theory (DFT) calculations using the Vienna ab initio simulation package (VASP). In addition, an improved thermodynamic model has been proposed to treat the DFT-VASP energies and extrapolate them to the temperatures and pressures of interest in our study. A combined experimental and theoretical chemistry study has been carried out to gain both an atomistic and a macroscopic understanding of the chemical processes; the theoretical chemistry part of this approach is presented in this paper. The Perdew-Burke-Ernzerhof functional was applied in combination with Grimme's van der Waals correction to compute the exchange-correlation energy at 0 K. Seven different surface cleavages of the γ-Na₂CO₃ phase (stable at 603.15 K) were studied; for defect-free surfaces, the (001) facet was found to be the most stable. Furthermore, calculations were performed to study surface defects and reconstructions of the ideal surface; all the studied surface defects were found to be less stable than the ideal surface. More than one adsorbate configuration was found to be stable, confirming that FP vapors could be trapped on various adsorption sites. The calculated adsorption energies (Eads, in eV) for the three most stable adsorption sites for I₂ are -1.33, -1.088, and -1.085. Moreover, the adsorption of the first molecule of I₂ changes the surface in a way that favors stronger adsorption of a second molecule of I₂ (Eads = -1.261 eV). For HI adsorption, the most favored reactions have Eads values of -1.982, -1.790, and -1.683 eV, implying that HI would be more reactive than I₂. In addition to FP species, the adsorption of H₂O was also studied, as the hydrated surface can have a different reactivity from the bare surface; one thermodynamically favored site for H₂O adsorption was found, with an Eads of -0.754 eV. Finally, the calculations on hydrated Na₂CO₃ surfaces show that a layer of water adsorbed on the surface significantly reduces its affinity for iodine (Eads = -1.066 eV). According to the thermodynamic model built, the partial pressure required at 373 K for adsorption of the first layer of iodine is 4.57×10⁻⁴ bar. The second layer is adsorbed at partial pressures higher than 8.56×10⁻⁶ bar; a layer of water on the surface increases these pressures almost tenfold, to 3.71×10⁻³ bar. The surface interacts with elemental Cs with an Eads of -1.60 eV, and even more strongly with CsI, with an Eads of -2.39 eV. More results on the interactions between Na₂CO₃ (001) and cesium-based FPs will also be presented in this paper. Keywords: iodine uptake, sodium carbonate surface, sodium-cooled fast nuclear reactor, DFT calculations, fission products
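For orientation only, the sketch below converts an adsorption free energy into an equilibrium partial pressure via an assumed ideal Langmuir-type relation, p_eq = p° · exp(ΔG_ads / RT). The abstract does not specify the actual thermodynamic model, and the ΔG value used here is illustrative (the quoted Eads values are 0 K electronic energies, so thermal and entropic corrections would be needed before such a conversion).

```python
import math

R  = 8.314462618             # J mol^-1 K^-1
T  = 373.0                   # K, temperature used in the abstract
P0 = 1.0e5                   # Pa, reference pressure (1 bar)
EV_TO_J_PER_MOL = 96_485.33  # 1 eV per molecule expressed in J/mol

def threshold_pressure(delta_g_ads_ev):
    """Equilibrium partial pressure for monolayer adsorption under an assumed
    ideal Langmuir-type model: p_eq = p0 * exp(dG_ads / RT), where dG_ads is
    the adsorption free energy (negative for favourable adsorption)."""
    dg = delta_g_ads_ev * EV_TO_J_PER_MOL
    return P0 * math.exp(dg / (R * T))

# Illustrative value only: a free energy of about -0.25 eV gives a threshold
# in the 10^-4 bar range, the order of magnitude reported in the abstract.
print(threshold_pressure(-0.25) / 1e5, "bar")
```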
Procedia PDF Downloads 152
149 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model
Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle
Abstract:
In the workplace, the exposure level of airborne contaminants should be evaluated for health and safety reasons. This can be done with numerical models or experimental measurements, and the numerical approach is useful when experiments are challenging to perform. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness for predicting inhalation exposure in many situations. However, since the WMR model is limited to gases and vapors, it cannot be used directly to predict exposure to aerosols. The main objective is to modify the WMR model to extend its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition; three deposition models were implemented. The time-dependent airborne particle concentrations predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer at two different air change rates (ACH). The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables of the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, particles were generated until a steady-state condition was reached (emission period); generation was then stopped, and concentration measurements continued until the background concentration was reached (decay period). The tracer gas decay tests showed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results gave average mean friction velocities (with standard deviations) of (8.87 ± 0.36)×10⁻² m/s and (8.88 ± 0.38)×10⁻² m/s for the lowest and highest ACH, respectively. The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared for the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved compared with the classic WMR model, although a difference between the measured and predicted values remains. In the emission period, the modified WMR results closely follow the experimental data; however, the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainties related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the rate given by the deposition mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, affects the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated. Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model
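The modified balance underlying the model can be written as dC/dt = S/V - (λ + β)C, where λ is the ventilation loss rate (the ACH) and β the particle deposition loss rate; setting β = 0 recovers the classic WMR model. The sketch below evaluates the analytical solution for the emission and decay periods: the chamber volume and ACH come from the abstract, while the emission rate and deposition loss rate are assumed values.

```python
import numpy as np

V    = 0.512     # m^3, chamber volume (from the abstract)
ach  = 3.0       # 1/h, air change rate (one of the two tested values)
lam  = ach       # ventilation loss rate, 1/h
beta = 0.9       # 1/h, particle deposition loss rate (assumed value)
S    = 5.0e6     # particles/h, aerosol emission rate (assumed value)

def concentration(t_h, emitting):
    """Well-mixed room balance dC/dt = S/V - (lam + beta) * C.
    Analytical solution, starting from C = 0 (emission period) or from the
    steady-state value (decay period)."""
    k = lam + beta
    c_ss = S / (V * k)
    if emitting:
        return c_ss * (1.0 - np.exp(-k * t_h))
    return c_ss * np.exp(-k * t_h)

t = np.linspace(0.0, 2.0, 5)                      # hours
print("emission period:", concentration(t, True).round(0))
print("decay period:  ", concentration(t, False).round(0))
```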
Procedia PDF Downloads 103
148 Investigating the Flow Physics within Vortex-Shockwave Interactions
Authors: Frederick Ferguson, Dehua Feng, Yang Gao
Abstract:
Current CFD tools have many technical limitations, and active research is being done to overcome them. Areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions of the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One commonly implemented approach is direct numerical simulation (DNS). This approach requires a spatial grid fine enough to capture the smallest length scale of the turbulent fluid motion, known as the Kolmogorov scale. The Kolmogorov scale must be resolved throughout the domain of interest and with a correspondingly small time step. In typical problems of industrial interest, the ratio of the domain length scale to the Kolmogorov length scale is so great that the required grid becomes prohibitively large; as a result, the available computational resources are usually inadequate for DNS-related tasks, and at this time in its development DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible flow fields at both high and low Reynolds numbers. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the non-stationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error characteristics will be described. Further, the IDS will be used to solve the inviscid and viscous Burgers equations, analyzing their solutions over a considerable length of time and thereby demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. The plan is to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave-vortex interaction problem for low supersonic conditions, and the reflected oblique shock-vortex interaction problem. The IDS solutions obtained for each of these problems will be explored further to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effect of the Mach number on the intensity of vortex-shockwave interactions. Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme
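Since the IDS formulation itself is not given here, the sketch below solves the viscous Burgers test problem mentioned in the abstract with a plain explicit upwind/central finite-difference scheme, purely to illustrate the benchmark; it is not the IDS, and the initial condition, viscosity, and grid are assumptions.

```python
import numpy as np

# Viscous Burgers equation u_t + u u_x = nu u_xx on [0, 1], periodic BCs.
# Simple explicit finite-difference stand-in for the test problem; this is
# NOT the Integro-Differential Scheme (IDS) of the paper.
nx, nu = 200, 0.01
dx = 1.0 / nx
x = np.linspace(0.0, 1.0 - dx, nx)
u = np.sin(2.0 * np.pi * x) + 0.5          # assumed initial condition

t, t_end = 0.0, 0.5
while t < t_end:
    # CFL-limited time step for both convection and diffusion
    dt = 0.4 * min(dx / (np.abs(u).max() + 1e-12), dx**2 / (2.0 * nu))
    um, up = np.roll(u, 1), np.roll(u, -1)
    conv = np.where(u > 0.0, u * (u - um) / dx, u * (up - u) / dx)   # upwind
    diff = nu * (up - 2.0 * u + um) / dx**2                          # central
    u = u + dt * (diff - conv)
    t += dt

print("u range after t = 0.5:", u.min().round(3), u.max().round(3))
```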
Procedia PDF Downloads 139
147 Investigation of Residual Stress Relief by in-situ Rolling Deposited Bead in Directed Laser Deposition
Authors: Ravi Raj, Louis Chiu, Deepak Marla, Aijun Huang
Abstract:
Hybridization of the directed laser deposition (DLD) process with an in-situ micro-roller that imparts a vertical compressive load on the deposited bead at elevated temperatures can relieve the tensile residual stresses incurred in the process. To investigate this stress-relief mechanism and its relationship with the in-situ rolling parameters, a fully coupled dynamic thermo-mechanical model is presented in this study. A single-bead deposition of Ti-6Al-4V alloy, with an in-situ roller made of mild steel moving at constant speed and a fixed nominal bead reduction, is simulated using the explicit solver of the finite element software Abaqus. The thermal model includes the laser heating during deposition and the heat transfer between the roller and the deposited bead. The laser heating is modeled as a moving heat source with a Gaussian distribution, applied along the pre-formed bead's surface through the VDFLUX Fortran subroutine. The bead's cross-section is assumed to be semi-elliptical. The interfacial heat transfer between the roller and the bead is included in the model; in addition, the roller is cooled internally by axial water flow, represented through convective heat transfer. The mechanical model for the bead and substrate includes the effects of rolling as well as deposition, and their elastoplastic material behavior is captured using J2 plasticity theory. The model accounts for strain, strain-rate, and temperature effects on the yield stress based on Johnson-Cook's theory. These aspects of the material behavior are captured in the FE software through user subroutines: VUMAT for the elastoplastic behavior, VUHARD for the yield stress, and VUEXPAN for the thermal strain. The roller is assumed to be elastic and does not undergo plastic deformation, and contact friction at the roller-bead interface is included. Based on the thermal results for the bead, the distance between the roller and the deposition nozzle (roller offset) can be chosen so that rolling occurs around the beta-transus temperature of the Ti-6Al-4V alloy. Roller offset and nominal bead height reduction are identified as crucial parameters influencing the residual stresses in the hybrid process. The results of a simulation with a roller offset of 20 mm and a nominal bead height reduction of 7% reveal that in-situ rolling decreases the tensile residual stresses throughout the deposited bead to about 52%. This model can be used to optimize the rolling parameters to minimize the residual stresses in the hybrid DLD process with in-situ micro-rolling. Keywords: directed laser deposition, finite element analysis, hybrid in-situ rolling, thermo-mechanical model
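The Johnson-Cook flow stress referred to above has the usual form sigma_y = (A + B*eps_p^n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T*^m), with T* the homologous temperature. A minimal sketch is given below; the constants are placeholder Ti-6Al-4V-like values from the open literature, not the ones used in the paper's VUHARD subroutine.

```python
import math

def johnson_cook_yield(eps_p, eps_dot, T,
                       A=997.9, B=653.1, n=0.45, C=0.0198, m=0.7,
                       eps_dot0=1.0, T_room=298.0, T_melt=1878.0):
    """Johnson-Cook flow stress (MPa):
    sigma_y = (A + B*eps_p**n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T*^m),
    where T* = (T - T_room) / (T_melt - T_room) is the homologous temperature.
    The constants are placeholder Ti-6Al-4V-like values, not the paper's."""
    t_star = min(max((T - T_room) / (T_melt - T_room), 0.0), 1.0)
    strain_term = A + B * eps_p ** n
    rate_term = 1.0 + C * math.log(max(eps_dot / eps_dot0, 1e-12))
    temp_term = 1.0 - t_star ** m
    return strain_term * rate_term * temp_term

# Example: 5% plastic strain, strain rate 10 s^-1, at 900 K
print(round(johnson_cook_yield(0.05, 10.0, 900.0), 1), "MPa")
```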
Procedia PDF Downloads 111
146 Predicting Long-Term Performance of Concrete under Sulfate Attack
Authors: Elakneswaran Yogarajah, Toyoharu Nawa, Eiji Owaki
Abstract:
Cement-based materials have been used in various reinforced concrete structural components as well as in nuclear waste repositories. Sulfate attack has been an environmental issue for cement-based materials exposed to sulfate-bearing groundwater or soils, and it plays an important role in the durability of concrete structures. The reaction between penetrating sulfate ions and cement hydrates can result in swelling, spalling, and cracking of the cement matrix in concrete. These processes reduce the mechanical properties and decrease the service life of an affected structure. The precipitation of secondary sulfate-bearing phases such as ettringite, gypsum, and thaumasite has been identified as a cause of this damage. Furthermore, the crystallization of soluble salts such as sodium sulfate induces degradation through crystal formation and phase changes. Crystallization of mirabilite (Na₂SO₄·10H₂O) and thenardite (Na₂SO₄), or their phase changes (mirabilite to thenardite or vice versa) driven by temperature or sodium sulfate concentration, does not involve any chemical interaction with the cement hydrates. Over the past couple of decades, intensive work has been carried out on sulfate attack in cement-based materials; however, several uncertainties still exist regarding the damage mechanisms of concrete in sulfate environments. In this study, modelling work has been conducted to investigate the chemical degradation of cementitious materials in various sulfate environments, considering both internal and external sulfate attack. For internal sulfate attack, the hydrate assemblage and pore solution chemistry of Portland cement (PC) and slag co-hydrating with sodium sulfate solution are calculated to determine the degradation of PC and slag-blended cementitious materials. Pitzer interaction coefficients were used to calculate the activity coefficients of the solution chemistry at high ionic strength. The deterioration mechanism of cementitious materials co-hydrating with 25% Na₂SO₄ by weight is the formation of mirabilite crystals and ettringite; their formation strongly depends on the sodium sulfate concentration and temperature. For external sulfate attack, the deterioration of various types of cementitious materials under external sulfate ingress is simulated with a reactive transport model. The reactive transport model is verified against experimental data on the spatially resolved phase assemblages of various cementitious materials exposed to different sulfate solutions. Finally, the reactive transport model is used to predict the long-term performance of cementitious materials exposed to 10% Na₂SO₄ for 1000 years. The dissolution of cement hydrates and the secondary formation of sulfate-bearing products, mainly ettringite, are the dominant degradation mechanisms, rather than sodium sulfate crystallization. Keywords: thermodynamic calculations, reactive transport, radioactive waste disposal, PHREEQC
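To make the reactive-transport idea concrete, the sketch below advances a one-dimensional explicit diffusion step for sulfate ingress followed by a crude first-order sink standing in for the chemistry step. In the actual study the chemistry is handled by PHREEQC-type thermodynamic equilibrium calculations with Pitzer activity corrections, and all parameter values here (diffusivity, sink rate, boundary concentration) are assumptions.

```python
import numpy as np

# 1-D explicit diffusion of sulfate into a concrete profile, with a first-order
# "reaction" sink as a placeholder for the thermodynamic equilibrium step.
nx, L = 100, 0.10                 # 10 cm deep profile
dx = L / nx
D = 1.0e-11                       # m^2/s, effective diffusivity (assumed)
k_sink = 1.0e-8                   # 1/s, sulfate-binding rate constant (assumed)
c_boundary = 70.0                 # mol/m^3, exposure concentration (assumed)

c = np.zeros(nx)                  # free sulfate in the pore solution
bound = np.zeros(nx)              # sulfate bound into solid phases
dt = 0.4 * dx**2 / D              # stable explicit time step
years = 10.0
steps = int(years * 365.25 * 24 * 3600 / dt)

for _ in range(steps):
    c[0] = c_boundary                                   # exposed surface
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    c = c + dt * D * lap                                # transport step
    reacted = k_sink * c * dt                           # chemistry step (placeholder)
    c -= reacted
    bound += reacted

depth_cm = np.arange(nx) * dx * 100
front = depth_cm[np.argmax(c < 0.05 * c_boundary)]
print(f"approximate sulfate penetration front after {years:.0f} y: {front:.1f} cm")
```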
Procedia PDF Downloads 163
145 Coupling Strategy for Multi-Scale Simulations in Micro-Channels
Authors: Dahia Chibouti, Benoit Trouette, Eric Chenier
Abstract:
With the development of micro-electro-mechanical systems (MEMS), understanding fluid flow and heat transfer at the micrometer scale is crucial. When the characteristic length scale of the flow is narrowed to around ten times the mean free path of the gas molecules, the classical fluid mechanics and energy equations are still valid in the bulk flow, but particular attention must be paid to the gas/solid interface boundary conditions. Indeed, in the vicinity of the wall, within a thickness of about one mean free path, called the Knudsen layer, the gas molecules are no longer in local thermodynamic equilibrium. Therefore, macroscopic models based on velocity-slip, temperature-jump, and heat-flux jump conditions must be applied at the fluid/solid interface to take this non-equilibrium into account. Although these macroscopic models are widely used, the assumptions on which they depend are not necessarily verified in realistic cases. In order to relax these assumptions, simulations at the molecular scale are carried out to study how the interaction of molecules with the walls can change the fluid flow and heat transfer in the vicinity of the walls. The developed approach is based on a kind of heterogeneous multi-scale method: micro-domains overlap the continuum domain, and coupling is carried out through exchanges of information between the molecular and continuum descriptions. In practice, molecular dynamics describes the fluid flow and heat transfer in the micro-domains, while the Navier-Stokes and energy equations are used at larger scales. In this framework, two kinds of micro-simulation are performed: (i) in the bulk, to obtain the thermophysical properties (viscosity, conductivity, etc.) as well as the equation of state of the fluid, and (ii) close to the walls, to identify the relationships between the slip velocity and the shear stress, or between the temperature jump and the normal temperature gradient. The coupling strategy relies on an implicit formulation of the quantities extracted from the micro-domains. Using the results of the molecular simulations, a Bayesian regression is performed in order to build continuous laws describing the physical properties, the equation of state, and the slip relationships, together with their uncertainties. The latter make it possible to set up a learning strategy that optimizes the number of micro-simulations. In the present contribution, the first results regarding this coupling and the associated learning strategy are illustrated through parametric studies of the convergence criteria, the choice of basis functions, and the noise of the input data. Anisothermal flows of a Lennard-Jones fluid in micro-channels are finally presented. Keywords: multi-scale, microfluidics, micro-channel, hybrid approach, coupling
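A minimal sketch of the Bayesian-regression-with-uncertainty idea is given below, using a Gaussian process (scikit-learn) fitted to noisy slip-velocity versus shear-stress samples from a synthetic stand-in for the molecular micro-simulations; the next micro-simulation is placed where the predictive standard deviation is largest, mimicking the learning strategy. The data-generating function, kernel, and units are assumptions, not the paper's.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

def md_slip_velocity(shear_stress):
    """Stand-in for a molecular-dynamics micro-simulation near the wall:
    returns a noisy slip velocity for a given wall shear stress (all values
    are synthetic and dimensionless)."""
    return 0.8 * shear_stress + 0.05 * np.sin(6.0 * shear_stress) \
           + rng.normal(0.0, 0.02, np.shape(shear_stress))

# A few initial micro-simulations
tau_train = np.array([0.1, 0.4, 0.9])[:, None]
u_train = md_slip_velocity(tau_train.ravel())

kernel = 1.0 * RBF(length_scale=0.3) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

tau_grid = np.linspace(0.0, 1.0, 200)[:, None]
for it in range(5):
    gp.fit(tau_train, u_train)
    mean, std = gp.predict(tau_grid, return_std=True)
    tau_next = tau_grid[np.argmax(std)]            # largest predictive uncertainty
    print(f"iteration {it}: next micro-simulation at tau = {tau_next[0]:.2f}, "
          f"max std = {std.max():.3f}")
    tau_train = np.vstack([tau_train, tau_next[None, :]])
    u_train = np.append(u_train, md_slip_velocity(tau_next[0]))

# gp now provides a continuous slip law u_slip(tau) with uncertainty estimates.
```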
Procedia PDF Downloads 168