Search results for: prediction capability
626 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder
Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen
Abstract:
Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction; it is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on the discriminative Encoder-Decoder architecture have achieved noticeable results. However, we find that these discriminative generators usually produce explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. In fact, the same logic information exists in both the premise-hypothesis pair and the explanation, and the logic information explicitly contained in the target explanation is easy to extract. Hence, we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained under the guidance of the explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called Logic Supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark, explanation-Stanford Natural Language Inference (e-SNLI), demonstrate that the proposed VariationalEG achieves a significant improvement over previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.
Keywords: natural language inference, explanation generation, variational auto-encoder, generative model
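As an illustration of the Logic Supervision idea, the sketch below adds an auxiliary classifier on the latent variable so that it must predict the logic label (entailment/neutral/contradiction) alongside the usual ELBO terms. This is a minimal PyTorch sketch under assumed module names and sizes, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentLogicHead(nn.Module):
    """Auxiliary classifier that reads the logic label off the latent code z."""
    def __init__(self, latent_dim=64, n_labels=3):
        super().__init__()
        self.classifier = nn.Linear(latent_dim, n_labels)

    def forward(self, z):
        return self.classifier(z)

def variational_loss(recon_logits, targets, mu, logvar, z,
                     logic_head, logic_labels, beta=1.0, gamma=1.0):
    # Standard ELBO terms: token reconstruction + KL of q(z|x) from N(0, I)
    recon = F.cross_entropy(recon_logits.view(-1, recon_logits.size(-1)),
                            targets.view(-1))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Logic Supervision: the latent code must also predict the logic label,
    # which discourages posterior collapse (z being ignored by the decoder)
    logic = F.cross_entropy(logic_head(z), logic_labels)
    return recon + beta * kl + gamma * logic
```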
Procedia PDF Downloads 151
625 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks
Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer
Abstract:
New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a period of time. Once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging (HSI) to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) models showed limited efficacy in interpreting the chemical footprints due to the large non-linear relationships between predictor and predictand in a large sample set, likely caused by honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing the hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey from multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics
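For reference, the three figures of merit quoted above can be computed as below; RPD (ratio of performance to deviation) is the standard deviation of the reference values divided by the prediction RMSE, and RPD > 2 is conventionally read as good quantitative ability. A minimal sketch with synthetic inputs:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R-squared, RMSE and RPD for a calibration/validation set."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    rpd = y_true.std(ddof=1) / rmse           # reference SD over prediction error
    return {"R2": r2, "RMSE": rmse, "RPD": rpd}

rng = np.random.default_rng(0)
y = rng.normal(20, 6, 200)                    # reference chemistry values (synthetic)
y_hat = y + rng.normal(0, 2.4, 200)           # model predictions (synthetic)
print(regression_metrics(y, y_hat))
```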
Procedia PDF Downloads 139
624 Differences in Guilt, Shame, Self-Anger, and Suicide Cognitions Based on Recent Suicide Ideation and Lifetime Suicide Attempt History
Authors: E. H. Szeto, E. Ammendola, J. V. Tabares, A. Starkey, J. Hay, J. G. McClung, C. J. Bryan
Abstract:
Introduction: Suicide is a leading cause of death globally, accounting for more deaths annually than war, acquired immunodeficiency syndrome, homicides, and car accidents, while an estimated 140 million individuals have significant suicide ideation (SI) each year in the United States. Typical risk factors such as hopelessness, depression, and psychiatric disorders can predict suicide ideation but cannot distinguish those who ideate from those who attempt suicide (SA). The Fluid Vulnerability Theory of suicide posits that a person’s activation of the suicidal mode is predicated on one’s predisposition, triggers, baseline/acute risk, and protective factors. The current study compares self-conscious cognitive-affective states (including guilt, shame, anger towards the self, and suicidal beliefs) among patients based on the endorsement of recent SI (i.e., past two weeks; acute risk) and lifetime SA (i.e., baseline risk). Method: A total of 2,722 individuals in an outpatient primary care setting were included in this cross-sectional, observational study; data for 2,584 were valid and retained for analysis. The Differential Emotions Scale, measuring guilt, shame, and self-anger, and the Suicide Cognitions Scale, measuring suicide cognitions, were administered. Results: A total of 2,222 individuals reported no recent SI or lifetime SA (Group 1), 161 reported recent SI only (Group 2), 145 reported lifetime SA only (Group 3), and 56 reported both recent SI and lifetime SA (Group 4). The Kruskal-Wallis test showed that guilt, shame, self-anger, and suicide cognitions were highest for Group 4 (both recent SI and lifetime SA), followed by Group 2 (recent SI only), then Group 3 (lifetime SA only), and lastly Group 1 (no recent SI or lifetime SA). Conclusion: The results on recent SI only versus lifetime SA only contribute to the literature on the Fluid Vulnerability Theory of suicide by capturing SI and SA in two different time periods, which signify the acute risks and chronic baseline risks of the suicidal mode, respectively. It is also shown that: (a) people with a lifetime SA reported more severe symptoms than those without, (b) people with recent SI reported more severe symptoms than those without, and (c) people with both recent SI and lifetime SA were the most severely distressed. Future studies may replicate the findings here with other pertinent risk factors such as thwarted belongingness, perceived burdensomeness, and acquired capability, the last of which is consistently linked to attempting among ideators.
Keywords: suicide, guilt, shame, self-anger, suicide cognitions, suicide ideation, suicide attempt
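The four-group comparison reported above is a standard Kruskal-Wallis H-test; a minimal sketch with placeholder score vectors (not the study's data) using SciPy:

```python
from scipy.stats import kruskal

# Placeholder symptom scores for the four groups described in the abstract
group1 = [12, 14, 11, 13, 12]   # no recent SI or lifetime SA
group2 = [18, 20, 17, 19, 21]   # recent SI only
group3 = [15, 16, 14, 17, 15]   # lifetime SA only
group4 = [22, 25, 21, 24, 23]   # both recent SI and lifetime SA

h_stat, p_value = kruskal(group1, group2, group3, group4)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```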
Procedia PDF Downloads 162
623 Monolithic Integrated GaN Resonant Tunneling Diode Pair with Picosecond Switching Time for High-speed Multiple-valued Logic System
Authors: Fang Liu, JiaJia Yao, GuanLin Wu, ZuMao Li, XueYan Yang, HePeng Zhang, ZhiPeng Sun, JunShuai Xue
Abstract:
The explosively increasing needs of data processing and information storage strongly drive the advancement of the binary logic system to the multiple-valued logic system. The inherent negative differential resistance characteristic, ultra-high-speed switching time, and robust anti-irradiation capability make the III-nitride resonant tunneling diode one of the most promising candidates for multi-valued logic devices. Here we report the monolithic integration of GaN resonant tunneling diodes in series to realize multiple negative differential resistance regions, obtaining at least three stable operating states. A multiply-by-three circuit is achieved by this combination, increasing the frequency of the input triangular wave from f0 to 3f0. The resonant tunneling diodes are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates, comprising double barriers and a single quantum well, both controlled at the atomic level. A device with a peak current density of 183 kA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 2.07 is observed, which is the best result reported in nitride-based resonant tunneling diodes. A microwave oscillation at room temperature was observed with a fundamental frequency of 0.31 GHz and an output power of 5.37 μW, verifying the high repeatability and robustness of our devices. The switching behavior measurement was successfully carried out, featuring rise and fall times on the order of picoseconds, which can be used in high-speed digital circuits. Limited by the measuring equipment and the layer structure, the switching time can be further improved. In general, this article presents a novel nitride device with multiple negative differential resistance regions driven by the resonant tunneling mechanism, which can be used in the high-speed multiple-valued logic field with reduced circuit complexity, demonstrating a new solution for nitride devices to break through the limitations of binary logic.
Keywords: GaN resonant tunneling diode, negative differential resistance, multiple-valued logic system, switching time, peak-to-valley current ratio
Procedia PDF Downloads 100
622 Understanding How Posting and Replying Behaviors in Social Media Differentiate the Social Capital Cultivation Capabilities of Users
Authors: Jung Lee
Abstract:
This study identifies how the cultivation capabilities of social capital influence the overall attitudes of social media users and how these influences differ across user groups. First, the cultivation capabilities of social capital are identified from three aspects, namely, social capital accessibility, potentiality, and sensitivity. These three types of social capital acquisition capabilities collectively represent how social media users perceive the social media environment in terms of possibilities for social capital creation. These three capabilities are hypothesized to influence social media satisfaction and continuing use intention. Next, two essential activities in social media are identified, namely, posting and replying, to categorize social media users based on behavioral patterns. The various social media activities consist of combinations of these two basic activities: posting represents the broadcasting aspect of social media, whereas replying represents its communicative aspect. We categorize users into four groups, from communicators to observers, using these two behaviors to develop a usage pattern matrix. By applying the usage pattern matrix to the capability model, we argue that posting behavior generally has a positive moderating effect on the attitudes of social media users, whereas replying behavior occasionally exhibits a negative moderating effect. These different moderating effects of posting and replying behavior are explained by the different levels of social capital sensitivity and expectation of individuals. When a person highly expects social capital from social media, he or she will post actively. However, when one is highly sensitive to social capital, he or she will actively respond and reply to the postings of other people, because such an act creates a longer and more interactive relationship. A total of 512 social media users were invited to answer the survey. They were asked about their attitudes toward social media and how they expect social capital through its use, and they were asked to report their general social media usage pattern for user categorization. The results confirmed that most of the hypotheses were supported. The three types of social capital cultivation capabilities are significant determinants of social media attitudes, and the two social media activities (i.e., posting and replying) exhibited different moderating effects on attitudes. This study provides the following discussion points. First, three types of social capital cultivation capabilities were identified. Despite the numerous concerns about social media, such as whether it is a decent and real environment that produces social capital, this study confirms that people explicitly expect and experience social capital values from social media. Second, posting and replying activities are two building blocks of social media activities. These two activities are useful in explaining the different attitudes of social media users and in predicting future usage.
Keywords: social media, social capital, social media satisfaction, social media use intention
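The moderation hypotheses described above are typically tested with interaction terms in a regression model. A minimal sketch with synthetic data (variable names and effect sizes are illustrative, not the survey's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 512                                   # same sample size as the survey
df = pd.DataFrame({
    "capability": rng.normal(size=n),     # perceived social-capital cultivation capability
    "posting": rng.integers(0, 2, n),     # high/low posting activity
    "replying": rng.integers(0, 2, n),    # high/low replying activity
})
# Synthetic satisfaction: positive moderation by posting, negative by replying
df["satisfaction"] = (0.5 * df.capability
                      + 0.3 * df.capability * df.posting
                      - 0.2 * df.capability * df.replying
                      + rng.normal(scale=0.5, size=n))

# The capability:posting and capability:replying terms carry the moderation tests
model = smf.ols("satisfaction ~ capability * posting + capability * replying", df).fit()
print(model.params)
```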
Procedia PDF Downloads 191
621 Prediction of Positive Cloud-to-Ground Lightning Striking Zones for Charged Thundercloud Based on Line Charge Model
Authors: Surajit Das Barman, Rakibuzzaman Shah, Apurv Kumar
Abstract:
Bushfires are known to be one of the ascendant factors creating pyrocumulus thunderclouds, which cause the ignition of new fires by pyrocumulonimbus (pyroCb) lightning strikes and create major losses of lives and property worldwide. A conceptual model-based risk planning approach would be beneficial for predicting the lightning striking zones on the surface of the earth underneath a pyroCb thundercloud. A pyroCb thundercloud can generate both positive cloud-to-ground (+CG) and negative cloud-to-ground (-CG) lightning, of which +CG tends to ignite more bushfires and cause massive damage to nature and infrastructure. In this paper, a simple line-charge-structured thundercloud model is constructed in 2-D coordinates using the method of image charges to predict the probable +CG lightning striking zones on the earth’s surface for two conceptual thundercloud charge configurations: a tilted dipole and a conventional tripole structure with excessive lower positive charge regions that lead to producing +CG lightning. The electric potential and surface charge density along the earth’s surface are investigated for both structures by continuously adjusting the position and the charge density of their charge regions. Simulation results for the tilted dipole structure confirm the down-shear extension of the upper positive charge region in the direction of the cloud’s forward flank by 4 to 8 km, resulting in negative surface charge density, and +CG lightning would be expected to strike within 7.8 km to 20 km around the earth periphery in the direction of the cloud’s forward flank. On the other hand, the conceptual tripole charge structure with an enhanced lower positive charge region develops negative surface charge density on the earth’s surface in the range |x| < 6.5 km beneath the thundercloud and highly favors producing +CG lightning strikes.
Keywords: pyrocumulonimbus, cloud-to-ground lightning, charge structure, surface charge density, forward flank
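For a grounded plane, the method of image charges gives a closed form for the surface charge density induced by an infinite 2-D line charge at height h: each charge is mirrored with opposite sign below the plane, yielding sigma(x) = -lambda*h / (pi*((x - x0)^2 + h^2)) per line charge. The sketch below evaluates this for an illustrative tilted dipole; charge magnitudes and heights are assumptions, not the paper's values:

```python
import numpy as np

def surface_charge_density(x, charges):
    """sigma(x) on a grounded plane below infinite line charges.
    charges: iterable of (lambda in C/m, horizontal position x0 in m, height h in m)."""
    sigma = np.zeros_like(x, dtype=float)
    for lam, x0, h in charges:
        sigma += -lam * h / (np.pi * ((x - x0) ** 2 + h ** 2))
    return sigma

x = np.linspace(-20e3, 20e3, 2001)             # +/- 20 km along the ground
tilted_dipole = [(-0.5e-3, 0.0, 6e3),          # main negative charge region
                 (+0.5e-3, 6e3, 10e3)]         # upper positive region, shifted down-shear
sigma = surface_charge_density(x, tilted_dipole)
i = int(np.argmin(sigma))                      # most negative induced density
print(f"most negative surface charge density at x = {x[i] / 1e3:.1f} km")
```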
Procedia PDF Downloads 113
620 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion
Authors: Omran M. Kenshel, Alan J. O'Connor
Abstract:
Estimating the service life of Reinforced Concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners/engineers. Traditionally, bridge owners/engineers have relied on subjective engineering judgment, e.g., visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods of estimation are needed to aid in making reliable and more accurate predictions of the service life of RC structures, in order to direct funds to the bridges found to be the most critical. Criticality of the structure can be considered either from the Structural Capacity (i.e., Ultimate Limit State) or from the Serviceability viewpoint, whichever is adopted. This paper considers the service life of the structure only from the Structural Capacity viewpoint. Considering the great variability associated with the parameters involved in the estimation process, a probabilistic approach is most suited. The probabilistic modelling adopted here used the Monte Carlo simulation technique to estimate the Reliability (i.e., Probability of Failure) of the structure under consideration. In this paper, the authors used their own experimental data for the Correlation Length (CL) of the most important deterioration parameters. The CL is a parameter of the Correlation Function (CF), by which the spatial fluctuation of a given deterioration parameter is described. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load-carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability (SV) is only evident if the reliability of the structure is governed by flexure failure rather than by shear failure.
Keywords: chloride-induced corrosion, Monte Carlo simulation, reinforced concrete, spatial variability
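The probabilistic machinery described above can be sketched as follows: a deterioration parameter is modelled as a 1-D Gaussian random field along the girder with an exponential correlation function (parametrized by the correlation length), and the probability of failure is estimated by Monte Carlo sampling. The limit-state function and all numbers below are placeholders for the study's capacity model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_seg, corr_len, dx = 20, 2.0, 0.5           # segments along the girder; CL and spacing in m
xs = np.arange(n_seg) * dx
C = np.exp(-np.abs(xs[:, None] - xs[None, :]) / corr_len)   # exponential correlation
L = np.linalg.cholesky(C + 1e-10 * np.eye(n_seg))           # for correlated sampling

n_mc, mu, sd = 100_000, 2.0, 0.6             # chloride field mean/scatter (illustrative)
fields = mu + sd * (rng.standard_normal((n_mc, n_seg)) @ L.T)
capacity = 1.0 - 0.15 * fields               # placeholder capacity loss per segment
pf = np.mean(capacity.min(axis=1) < 0.5)     # fails if the weakest segment drops below demand
print(f"probability of failure ≈ {pf:.4f}")
```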
Procedia PDF Downloads 473
619 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings
Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian
Abstract:
Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automate building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate our platform’s capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms on the Oak Ridge National Laboratory (ORNL) main campus. Our platform is developed using an adaptive and flexible architecture design, rendering it generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.
Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM
Procedia PDF Downloads 110
618 An Experimental Investigation of the Surface Pressure on Flat Plates in Turbulent Boundary Layers
Authors: Azadeh Jafari, Farzin Ghanadi, Matthew J. Emes, Maziar Arjomandi, Benjamin S. Cazzolato
Abstract:
The turbulence within the atmospheric boundary layer induces highly unsteady aerodynamic loads on structures. These loads, if not accounted for in the design process, will lead to structural failure and are therefore important for the design of structures. For an accurate prediction of wind loads, understanding the correlation between atmospheric turbulence and the aerodynamic loads is necessary. The aim of this study is to investigate the effect of turbulence within the atmospheric boundary layer on the surface pressure on a flat plate over a wide range of turbulence intensities and integral length scales. The flat plate is chosen as a fundamental geometry representing structures such as solar panels and billboards. Experiments were conducted in the University of Adelaide large-scale wind tunnel. Two wind tunnel boundary layers with different intensities and length scales of turbulence were generated using two sets of spires with different dimensions and a fetch of roughness elements. Average longitudinal turbulence intensities of 13% and 26% were achieved in the two boundary layers, and the longitudinal integral length scale was between 0.4 m and 1.22 m. The pressure distributions on a square flat plate at different elevation angles between 30° and 90° were measured within the two boundary layers. It was found that the peak pressure coefficient on the flat plate increased with increasing turbulence intensity and integral length scale. For example, the peak pressure coefficient on a flat plate elevated at 90° increased from 1.2 to 3 with increasing turbulence intensity from 13% to 26%. Furthermore, both the mean and the peak pressure distributions on the flat plates varied with turbulence intensity and length scale. The results of this study can be used to provide a more accurate estimation of the unsteady wind loads on structures such as buildings and solar panels.
Keywords: atmospheric boundary layer, flat plate, pressure coefficient, turbulence
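The pressure coefficients quoted above follow the usual reduction Cp = (p - p_inf) / (0.5 * rho * U^2), with the peak taken from the tap time series. A minimal sketch on a synthetic pressure trace (density, wind speed, and signal statistics are assumed values):

```python
import numpy as np

rho, U = 1.225, 10.0                     # air density (kg/m^3) and mean wind speed (m/s)
q = 0.5 * rho * U ** 2                   # dynamic pressure, Pa

rng = np.random.default_rng(1)
p_gauge = 60.0 + 25.0 * rng.standard_normal(10_000)   # tap signal relative to p_inf, Pa

cp = p_gauge / q                         # gauge pressure already has p_inf removed
print(f"mean Cp = {cp.mean():.2f}, peak Cp = {cp.max():.2f}")
```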
Procedia PDF Downloads 140
617 Deorbiting Performance of Electrodynamic Tethers to Mitigate Space Debris
Authors: Giulia Sarego, Lorenzo Olivieri, Andrea Valmorbida, Carlo Bettanini, Giacomo Colombatti, Marco Pertile, Enrico C. Lorenzini
Abstract:
International guidelines recommend removing any artificial body in Low Earth Orbit (LEO) within 25 years of mission completion. Among disposal strategies, electrodynamic tethers appear to be a promising option for LEO, thanks to their limited storage mass and minimal interface requirements to the host spacecraft. In particular, recent technological advances make it feasible to deorbit large objects with tether lengths of a few kilometers or less. To further investigate such an innovative passive system, the European Union is currently funding the project E.T.PACK – Electrodynamic Tether Technology for Passive Consumable-less Deorbit Kit in the framework of the H2020 Future Emerging Technologies (FET) Open program. The project focuses on the design of an end-of-life disposal kit for LEO satellites. This kit aims to deploy a taped tether that can be activated at the spacecraft's end of life to perform an autonomous deorbit within the international guidelines. In this paper, the orbital performance of the E.T.PACK deorbiting kit is compared to other disposal methods. In addition, the orbital decay prediction is parametrized as a function of spacecraft mass and tether system performance. Different values of length, width, and thickness of the tether are evaluated for various scenarios (i.e., different initial orbital parameters), and the results are compared to other end-of-life disposal methods with similar allocated resources. The performance of the more innovative system, in which the tape is coated with a low work-function thermionic material (LWT) so that no active component is required for the cathode, is also briefly discussed. The results show that the electrodynamic tether option can be a competitive and performant solution for satellite disposal compared to other deorbit technologies.
Keywords: deorbiting performance, H2020, spacecraft disposal, space electrodynamic tethers
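As a back-of-envelope illustration of how such decay predictions scale with spacecraft mass and tether drag force: for a near-circular orbit under a small tangential deceleration F/m, the Gauss planetary equations reduce to da/dt = -2*sqrt(a^3/mu)*(F/m). The force level, mass, and altitudes below are assumptions for illustration, not E.T.PACK figures:

```python
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_E = 6.371e6            # mean Earth radius, m

def deorbit_days(alt0_km, alt_end_km, force_N, mass_kg, dt=1000.0):
    """Integrate the circular-orbit decay under a constant tangential drag force."""
    a = R_E + alt0_km * 1e3
    t = 0.0
    while a > R_E + alt_end_km * 1e3:
        a -= 2.0 * math.sqrt(a ** 3 / MU) * (force_N / mass_kg) * dt
        t += dt
    return t / 86400.0

# e.g. a 500 kg satellite with an average 2 mN electrodynamic drag, 800 -> 200 km
print(f"≈ {deorbit_days(800, 200, 2e-3, 500):.0f} days")
```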
Procedia PDF Downloads 177
616 Classifying Turbomachinery Blade Mode Shapes Using Artificial Neural Networks
Authors: Ismail Abubakar, Hamid Mehrabi, Reg Morton
Abstract:
Currently, extensive signal analysis is performed in order to evaluate the structural health of turbomachinery blades. This approach is constrained by time and the availability of qualified personnel. Thus, new approaches to blade dynamics identification that provide faster and more accurate results are sought. Generally, modal analysis is employed to acquire the dynamic properties of a vibrating turbomachinery blade and is widely adopted in condition monitoring of blades. The analysis provides useful information on the different modes of vibration and the natural frequencies by exploring the different shapes that can be taken up during vibration, since every mode shape has a corresponding natural frequency. Experimental modal testing and finite element analysis are the traditional methods used to evaluate mode shapes, but they have limited applicability to real-life scenarios and hence to a robust condition monitoring scheme. Real-time mode shape evaluation requires rapid evaluation and low computational cost, for which the traditional techniques are unsuitable. In this study, an artificial neural network is developed to evaluate the mode shape of a lab-scale rotating blade assembly, using results from finite element modal analysis as training data. The network performance evaluation shows that the artificial neural network (ANN) is capable of mapping the correlation between natural frequencies and mode shapes. This is achieved without the need for extensive signal analysis. The approach offers the advantages that the network can classify mode shapes and be employed in real time, together with simplicity of implementation and accuracy of prediction. The work paves the way for the further development of a robust condition monitoring system that incorporates real-time mode shape evaluation.
Keywords: modal analysis, artificial neural network, mode shape, natural frequencies, pattern recognition
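A minimal sketch of the idea: train a small feed-forward classifier to map natural-frequency features to a mode-shape class, with finite element modal results as training data. Synthetic frequencies and class rules stand in for the FE outputs here:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
n = 600
freqs = rng.uniform(50, 2000, size=(n, 3))     # first three natural frequencies, Hz
# Stand-in mode-shape classes derived from the frequencies (illustrative only)
labels = (freqs[:, 0] > 700).astype(int) + (freqs[:, 1] > 1200).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(freqs, labels, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```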
Procedia PDF Downloads 156
615 A Continuous Real-Time Analytic for Predicting Instability in Acute Care Rapid Response Team Activations
Authors: Ashwin Belle, Bryce Benson, Mark Salamango, Fadi Islim, Rodney Daniels, Kevin Ward
Abstract:
A reliable, real-time, and non-invasive system that can identify patients at risk of hemodynamic instability is needed to aid clinicians in their efforts to anticipate patient deterioration and initiate early interventions. The purpose of this pilot study was to explore the clinical capabilities of a real-time analytic from a single lead of an electrocardiograph to correctly distinguish between rapid response team (RRT) activations due to hemodynamic (H-RRT) and non-hemodynamic (NH-RRT) causes, as well as to predict H-RRT cases with actionable lead times. The study consisted of a single-center, retrospective cohort of 21 patients with RRT activations from step-down and telemetry units. Through electronic health record review, and blinded to the analytic’s output, clinicians categorized each patient into H-RRT and NH-RRT cases. The analytic output and the categorization were compared, and the prediction lead time prior to the RRT call was calculated. The analytic correctly distinguished between H-RRT and NH-RRT cases with 100% accuracy, demonstrating 100% positive and negative predictive values and 100% sensitivity and specificity. In H-RRT cases, the analytic detected hemodynamic deterioration with a median lead time of 9.5 hours prior to the RRT call (range 14 minutes to 52 hours). The study demonstrates that an electrocardiogram (ECG)-based analytic has the potential to provide clinical decision and monitoring support, helping caregivers identify at-risk patients within a clinically relevant timeframe and allowing increased vigilance and early interventional support to reduce the chances of continued patient deterioration.
Keywords: critical care, early warning systems, emergency medicine, heart rate variability, hemodynamic instability, rapid response team
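The reported figures all derive from the 2x2 confusion matrix of H-RRT vs NH-RRT classification. A small sketch of those definitions (the 10/11 split of the 21 activations is illustrative):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, predictive values and accuracy from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Perfect separation, as reported: every H-RRT and NH-RRT case classified correctly
print(diagnostic_metrics(tp=10, fp=0, tn=11, fn=0))
```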
Procedia PDF Downloads 143
614 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids
Authors: S. Gariani, I. Shyha
Abstract:
Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life, and high machining costs. This is due to the excessive generation of heat at the cutting zone and difficulties in heat dissipation caused by the relatively low heat conductivity of this metal. Cooling applications in machining processes are crucial, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral oil-based, synthetic, and semi-synthetic fluids are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial to industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, due to their combination of biodegradability, good lubricating properties, low toxicity, high flash points, low volatility, high viscosity indices, and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films. These strong lubricating films give the vegetable oil base stock a greater capability to absorb pressure and a high load-carrying capacity. This paper details preliminary experimental results from turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials, and working conditions was investigated. A full factorial experimental design involving 24 tests was employed to evaluate the influence of the process variables on average surface roughness (Ra), tool wear, and chip formation. In general, Ra varied between 0.5 and 1.56 µm, and the Vasco1000 cutting fluid presented performance comparable to the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds. All tool tips were subjected to uniform flank wear during the whole of the cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size at higher cutting speeds.
Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions
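A full factorial design simply enumerates every combination of factor levels; one assumed split that reproduces the 24 trials mentioned above (3 fluids x 2 tools x 2 speeds x 2 feeds) is sketched below. Factor names and levels are illustrative, not the study's exact settings:

```python
from itertools import product

factors = {
    "cutting_fluid": ["Vasco1000", "VO-blend-A", "VO-blend-B"],   # names illustrative
    "tool": ["uncoated coarse-grain WC", "coated WC"],
    "cutting_speed_m_min": [60, 90],
    "feed_mm_rev": [0.1, 0.2],
}
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs), "tests")        # 3 * 2 * 2 * 2 = 24
print(runs[0])                   # first run of the design
```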
Procedia PDF Downloads 279
613 Optimization of Dez Dam Reservoir Operation Using Genetic Algorithm
Authors: Alireza Nikbakht Shahbazi, Emadeddin Shirali
Abstract:
Since optimization issues of water resources are complicated by the variety of decision-making criteria and objective functions, it is sometimes impossible to resolve them through regular optimization methods, or doing so is time- and money-consuming. Therefore, the use of modern tools and methods is inevitable in resolving such problems. An accurate and sound utilization policy has to be determined in order to use natural resources such as water reservoirs optimally. Water reservoir programming studies aim to determine the final cultivated land area based on predefined agricultural models and water requirements; the dam utilization rule curve is also provided in such studies. The basic information applied in water reservoir programming studies generally includes meteorological, hydrological, agricultural, and water reservoir related data, and the geometric characteristics of the reservoir. The Dez dam water resource system was simulated using this basic information in order to determine the capability of its reservoir to meet the objectives of the proposed plan. As a metaheuristic method, a genetic algorithm was applied to derive the utilization rule curves (intersecting the reservoir volume). MATLAB software was used to solve the resulting model. Rule curves were first obtained through the genetic algorithm; then, the significance of using rule curves, and the reduction in the number of decision variables in the system, were determined through system simulation and comparison of the results with the optimization results (Standard Operating Procedure). One of the most essential issues in the optimization of a complicated water resource system is the growing number of variables: a lot of time is required to find an optimum answer, and in some cases no desirable result is obtained. In this research, intersecting the reservoir volume has been applied as a modern model to reduce the number of variables. Water reservoir programming studies were performed based on the basic information, general hypotheses, and standards, applying a monthly simulation technique over a statistical period of 30 years. The results indicated that the application of rule curves prevents extreme shortages and decreases the monthly shortages.
Keywords: optimization, rule curve, genetic algorithm method, Dez dam reservoir
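A minimal genetic-algorithm sketch for tuning twelve monthly rule-curve levels is given below. The fitness function (penalizing squared monthly shortages plus a storage cost) is a placeholder for the reservoir simulation used in the study; population size, operators, and rates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
MONTHS, POP, GENS = 12, 60, 200
demand = np.full(MONTHS, 0.6)                     # normalized monthly demand (placeholder)

def fitness(curve):
    shortage = np.clip(demand - curve, 0.0, None) # unmet demand per month
    return -np.sum(shortage ** 2) - 0.01 * np.sum(curve)

pop = rng.uniform(0.0, 1.0, size=(POP, MONTHS))
for _ in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]          # truncation selection
    cuts = rng.integers(1, MONTHS, size=POP // 2)          # one-point crossover
    kids = np.array([np.r_[parents[i, :c], parents[(i + 1) % len(parents), c:]]
                     for i, c in enumerate(cuts)])
    mutate = rng.random(kids.shape) < 0.1                  # 10% mutation rate
    kids = np.clip(kids + mutate * rng.normal(0.0, 0.05, kids.shape), 0.0, 1.0)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(c) for c in pop])]
print("best monthly rule-curve levels:", best.round(2))
```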
Procedia PDF Downloads 265
612 Temperature-Based Detection of Initial Yielding Point in Loading of Tensile Specimens Made of Structural Steel
Authors: Aqsa Jamil, Tamura Hiroshi, Katsuchi Hiroshi, Wang Jiaqi
Abstract:
The yield point represents the upper limit of the forces which can be applied to a specimen without causing any permanent deformation. After yielding, the behavior of the specimen suddenly changes, including the possibility of cracking or buckling, so the accumulation of damage or the type of fracture changes depending on this condition. As it is difficult to accurately detect the yield points of the several stress concentration points in structural steel specimens, an effort has been made in this research work to develop a convenient technique using thermography (temperature-based detection) during tensile tests for the precise detection of yield point initiation. To verify the applicability of the thermography camera, tests were conducted under different loading conditions, measuring the deformation with various strain gauges and monitoring the surface temperature with a thermography camera. The yield point of the specimens was estimated with the help of the temperature dip that occurs due to the thermoelastic effect during plastic deformation. The scattering of the data was checked by performing a repeatability analysis. The effects of temperature imperfection and the light source were checked by carrying out the tests during the daytime as well as at midnight; from the signal-to-noise ratio (SNR) of the noisy data from the infrared thermography camera, it can be concluded that the camera is independent of the testing time and the presence of a visible light source. Furthermore, a fully coupled thermal-stress analysis was performed using the Abaqus/Standard exact implementation technique to validate the temperature profiles obtained from the thermography camera and to check the feasibility of numerical simulation for predicting the results extracted with the thermographic technique.
Keywords: signal-to-noise ratio, thermoelastic effect, thermography, yield point
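Two of the signal-processing steps described above, locating the thermoelastic temperature dip that marks yield initiation and estimating the signal-to-noise ratio, can be sketched on a synthetic temperature trace (cooling/heating rates and noise level are assumed values):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 60.0, 6000)                          # time, s
temp = 23.0 - 0.004 * t                                   # elastic stage: thermoelastic cooling
temp = temp + np.where(t > 30.0, 0.02 * (t - 30.0), 0.0)  # plastic stage: heating after yield
signal = temp + 0.01 * rng.standard_normal(t.size)        # camera noise added

dip_idx = int(np.argmin(signal))                          # the dip marks yield initiation
snr_db = 10.0 * np.log10(np.var(temp) / np.var(signal - temp))
print(f"yield onset detected at t ≈ {t[dip_idx]:.1f} s, SNR ≈ {snr_db:.1f} dB")
```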
Procedia PDF Downloads 107
611 Artificial Membrane Comparison for Skin Permeation in Skin PAMPA
Authors: Aurea C. L. Lacerda, Paulo R. H. Moreno, Bruna M. P. Vianna, Cristina H. R. Serra, Airton Martin, André R. Baby, Vladi O. Consiglieri, Telma M. Kaneko
Abstract:
The modified Franz cell is the most widely used model for in vitro permeation studies; however, it still presents some disadvantages. Thus, alternative methods have been developed, such as Skin PAMPA, a bio-artificial membrane that has been applied for estimating the skin penetration of xenobiotics, based on a high-throughput (HT) permeability model. Skin PAMPA's greatest advantage is that it allows more tests to be carried out quickly and inexpensively. The membrane system mimics the characteristics of the stratum corneum, which is the primary skin barrier. The barrier properties are given by corneocytes embedded in a multilamellar lipid matrix. This layer is the main penetration route through the paracellular permeation pathway, and it consists of a mixture of cholesterol, ceramides, and fatty acids as the dominant components. However, there is no consensus on the membrane composition. The objective of this work was to compare the performance of different bio-artificial membranes for studying permeation in the Skin PAMPA system. Materials and methods: In order to mimic the lipid composition present in the human stratum corneum, six membranes were developed. The membrane composition was an equimolar mixture of cholesterol, ceramides 1-O-C18:1, C22, and C20, plus fatty acids C20 and C24. The membrane integrity assay was based on the transport of Brilliant Cresyl Blue, which has low permeability, and Lucifer Yellow, which has very poor permeability and should effectively be completely rejected. The membrane characterization was performed using Confocal Laser Raman Spectroscopy, with a stabilized laser at 785 nm, a 10-second integration time, and 2 accumulations. The behaviour of the membranes in the PAMPA system was statistically evaluated, and all of the compositions showed integrity and permeability. The confocal Raman spectra obtained in the region of 800-1200 cm⁻¹, which is associated with the C-C stretches of the carbon scaffold of the stratum corneum lipids, showed a similar pattern for all the membranes. The ceramides, long-chain fatty acids, and cholesterol in equimolar ratio made it possible to obtain lipid mixtures with a self-organization capability similar to that occurring in the stratum corneum. Conclusion: The artificial biological membranes studied for Skin PAMPA were shown to be similar, with properties comparable to the stratum corneum.
Keywords: bio-artificial membranes, comparison, confocal Raman, skin PAMPA
Procedia PDF Downloads 509
610 Probabilistic Building Life-Cycle Planning as a Strategy for Sustainability
Authors: Rui Calejo Rodrigues
Abstract:
Building refurbishing and maintenance is a major area of knowledge that is ultimately subject to user/occupant criteria. Optimizing the service life of a building needs a special background to be assessed, as it is one of those concepts that requires proficiency to be implemented. ISO 15686-2, Buildings and constructed assets - Service life planning - Part 2: Service life prediction procedures, specifies a factorial method based on deterministic data for the life span of building components. A deterministic approach has major consequences: users/occupants cannot perceive the end of a component's life span and simply act on fixed periods, so costly and resource-consuming solutions fail to meet global sustainability targets. With an estimated 2 thousand million conventional buildings in the world, submitting service life planning to a probabilistic method rather than a deterministic one would provide an immense amount of resource savings. Since 1989, the research team, today operating as CEES (Center for Building in Service Studies), has developed a methodology based on the Monte Carlo method for a probabilistic approach to the life spans of building components, their costs, and service life care time spans. The research question addressed here concerns the importance of a probabilistic approach to building life planning compared with deterministic methods. The mathematical model developed for the probabilistic building life span approach is presented, and experimental data are obtained and compared with deterministic data. Assuming that a building's life cycle depends largely on component replacement, this methodology allows conclusions on the global impact of fixed-replacement methodologies, such as those resulting from the use of deterministic models. Major conclusions based on estimates for conventional buildings are presented and evaluated from a sustainability perspective.
Keywords: building components life cycle, building maintenance, building sustainability, Monte Carlo simulation
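The contrast between the two approaches can be sketched in a few lines: instead of a fixed component life, sample the life from a distribution (lognormal here, as an assumption) and count replacements over the building's life by Monte Carlo simulation. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
BUILDING_LIFE, N_MC = 60, 50_000          # years; number of simulation runs
mean_life, cov = 15.0, 0.3                # component mean life (years) and scatter

# Lognormal parameters chosen so the sampled mean equals mean_life
sigma = np.sqrt(np.log(1.0 + cov ** 2))
mu = np.log(mean_life) - 0.5 * sigma ** 2

def replacements():
    t, count = 0.0, 0
    while True:
        t += rng.lognormal(mu, sigma)     # sampled service life of each replacement
        if t > BUILDING_LIFE:
            return count
        count += 1

counts = [replacements() for _ in range(N_MC)]
print("deterministic count :", int(BUILDING_LIFE // mean_life))
print("probabilistic mean  :", round(float(np.mean(counts)), 2))
```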
Procedia PDF Downloads 205
609 Development of a Model for Predicting Radiological Risks in Interventional Cardiology
Authors: Stefaan Carpentier, Aya Al Masri, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: During an Interventional Radiology (IR) procedure, the patient's skin dose may become high enough for a burn, necrosis, or ulceration to appear. In order to prevent these deterministic effects, predicting the patient's peak skin dose is important for improving the post-operative care to be given to the patient. The objective of this study is to estimate, before the intervention, the patient dose for Chronic Total Occlusion (CTO) procedures by selecting relevant clinical indicators. Materials and methods: 103 procedures were performed in the Interventional Cardiology (IC) department using a Siemens Artis Zee image intensifier that provides the air kerma of each IC exam. The Peak Skin Dose (PSD) was measured for each procedure using radiochromic films. Patient parameters such as sex, age, weight, and height were recorded. The complexity index (J-CTO score), specific to each intervention, was determined by the cardiologist. A correlation method applied to these indicators allowed their influence on the dose to be specified. A predictive model of the dose was created using multiple linear regression. Results: Of the 103 patients involved in the study, 5 were excluded for clinical reasons and 2 for placement of radiochromic films outside the exposure field; 96 2D dose maps were finally used. The influencing factors with the highest correlation with the PSD are the patient's diameter and the J-CTO score, and the predictive model is based on these parameters. The comparison between estimated and measured skin doses shows an average difference of 0.85 ± 0.55 Gy for doses of less than 6 Gy. The mean difference between air kerma and PSD is 1.66 ± 1.16 Gy. Conclusion: Using the developed method, a first estimate of the dose to the patient's skin is available before the start of the procedure, which helps the cardiologist in carrying out the intervention. This estimate is more accurate than that provided by the air kerma.
Keywords: chronic total occlusion procedures, clinical experimentation, interventional radiology, patient's peak skin dose
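The prediction step described, a multiple linear regression of peak skin dose on patient diameter and J-CTO score, can be sketched as follows. The coefficients are fitted to synthetic data here; the study's actual model is not reproduced:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(11)
n = 96                                            # number of dose maps analysed
diameter = rng.normal(28.0, 3.0, n)               # patient diameter, cm (illustrative)
jcto = rng.integers(0, 4, n).astype(float)        # J-CTO complexity score
psd = 0.12 * diameter + 0.8 * jcto + rng.normal(0.0, 0.6, n)   # Gy, synthetic

X = np.column_stack([diameter, jcto])
model = LinearRegression().fit(X, psd)
pred = model.predict(np.array([[30.0, 2.0]]))     # pre-procedure estimate for one patient
print(f"estimated peak skin dose ≈ {pred[0]:.1f} Gy")
```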
Procedia PDF Downloads 136
608 Chemical Kinetics and Computational Fluid-Dynamics Analysis of H2/CO/CO2/CH4 Syngas Combustion and NOx Formation in a Micro-Pilot-Ignited Supercharged Dual Fuel Engine
Authors: Ulugbek Azimov, Nearchos Stylianidis, Nobuyuki Kawahara, Eiji Tomita
Abstract:
A chemical kinetics and computational fluid dynamics (CFD) analysis was performed to evaluate the combustion of syngas derived from biomass and coke-oven solid feedstock in a micro-pilot-ignited supercharged dual-fuel engine under lean conditions. For this analysis, a new reduced syngas chemical kinetics mechanism was constructed and validated by comparing the ignition delay and laminar flame speed data with those obtained from experiments and other detailed chemical kinetics mechanisms available in the literature. The reaction sensitivity analysis was conducted for ignition delay at elevated pressures in order to identify the important chemical reactions that govern the combustion process. The chemical kinetics of NOx formation was analyzed for H2/CO/CO2/CH4 syngas mixtures using counterflow burner and premixed laminar flame speed reactor models. The new mechanism showed very good agreement with experimental measurements and accurately reproduced the effect of pressure, temperature, and equivalence ratio on NOx formation. In order to identify the species important for NOx formation, a sensitivity analysis was conducted for pressures of 4 bar, 10 bar, and 16 bar and a preheat temperature of 300 K. The results show that NOx formation is driven mostly by hydrogen-based species, while other species, such as N2, CO2, and CH4, also have important effects on combustion. Finally, the new mechanism was used in a multidimensional CFD simulation to predict the combustion of syngas in a micro-pilot-ignited supercharged dual-fuel engine, and the results were compared with experiments. The mechanism gave the closest prediction of the in-cylinder pressure and the rate of heat release (ROHR).
Keywords: syngas, chemical kinetics mechanism, internal combustion engine, NOx formation
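An ignition-delay calculation of the kind used to validate such mechanisms can be sketched with Cantera, here with GRI-Mech 3.0 standing in for the authors' reduced syngas mechanism, and with an assumed lean H2/CO/CO2/CH4 mixture. Ignition is taken at the steepest temperature rise in a constant-volume reactor:

```python
import numpy as np
import cantera as ct

gas = ct.Solution("gri30.yaml")                   # stand-in mechanism
# Lean syngas/air mixture at 1000 K and 16 bar (composition illustrative)
gas.TPX = 1000.0, 16e5, "H2:0.3, CO:0.3, CH4:0.2, CO2:0.2, O2:1.5, N2:5.64"

reactor = ct.IdealGasReactor(gas)
sim = ct.ReactorNet([reactor])

times, temps = [], []
while sim.time < 0.05:                            # integrate up to 50 ms
    sim.step()
    times.append(sim.time)
    temps.append(reactor.T)

tau = times[int(np.argmax(np.gradient(np.array(temps), np.array(times))))]
print(f"ignition delay ≈ {tau * 1e3:.2f} ms")
```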
Procedia PDF Downloads 409
607 An Empirical Exploration of Factors Influencing Lecturers' Acceptance of Open Educational Resources for Enhanced Knowledge Sharing in North-East Nigerian Universities
Authors: Bello, A., Muhammed Ibrahim Abba, Abdullahi, M., Dauda, Sabo, & Shittu, A. T.
Abstract:
This study investigated the predictors of lecturers' acceptance of knowledge sharing through Open Educational Resources (OER) in North-East Nigerian universities. The study population comprised 632 lecturers at federal universities in North-East Nigeria. The study sample covered 338 lecturers who were selected purposively from the federal universities of Adamawa, Bauchi, and Borno States in Nigeria. The study adopted a predictive correlational research design. The instrument used for data collection was a questionnaire. Experts in the field of educational technology validated the instrument, and its reliability was tested using Cronbach's alpha. The constructs on lecturers' acceptance to share OER yielded reliability coefficients of α = .956 for Performance Expectancy, α = .925 for Effort Expectancy, α = .955 for Social Influence, α = .879 for Facilitating Conditions, and α = .948 for acceptance to share OER. The researchers contacted the deaneries of the faculties of education and enlisted local coordinators to facilitate the data collection process at each university. The data were analysed using multiple sequential regression at a significance level of 0.05 with SPSS version 23.0. The findings of the study revealed that performance expectancy (β = 0.658; t = 16.001; p = 0.000), effort expectancy (β = 0.194; t = 3.802; p = 0.000), and social influence (β = 0.306; t = 5.246; p = 0.000) collectively have the predictive capacity to stimulate lecturers' acceptance to share their resources in an OER repository. However, the findings revealed that facilitating conditions (β = .053; t = .899; p = 0.369) do not have a predictive capacity to stimulate lecturers' acceptance to share their resources in an OER repository. Based on these findings, the study recommends, among others, that university management consider adjusting OER policy to be centered on actualizing lecturers' career progression.
Keywords: acceptance, lecturers, open educational resources, knowledge sharing
Procedia PDF Downloads 73
606 Decisional Regret in Men with Localized Prostate Cancer among Various Treatment Options and the Association with Erectile Functioning and Depressive Symptoms: A Moderation Analysis
Authors: Caren Hilger, Silke Burkert, Friederike Kendel
Abstract:
Men with localized prostate cancer (PCa) have to choose among different treatment options, such as active surveillance (AS) and radical prostatectomy (RP). All available treatment options may be accompanied by specific psychological or physiological side effects. Depending on the nature and extent of these side effects, patients are more or less likely to be satisfied with, or to struggle with, their treatment decision in the long term. Therefore, the aim of this study was to assess and explain decisional regret in men with localized PCa: the role of erectile functioning as one of the main physiological side effects of invasive PCa treatment, depressive symptoms as a common psychological side effect, and the association of erectile functioning and depressive symptoms with decisional regret were investigated. Men with localized PCa initially managed with AS or RP (N=292) were matched according to length of therapy (mean 47.9±15.4 months). Subjects completed mailed questionnaires assessing decisional regret, changes in erectile functioning, depressive symptoms, and sociodemographic variables. Clinical data were obtained from case report forms. Differences between the two treatment groups (AS and RP) were calculated using t-tests and χ²-tests, and the relationships of decisional regret with erectile functioning and depressive symptoms were computed using multiple regression. Men were on average 70±7.2 years old. The two treatment groups differed markedly regarding decisional regret (p<.001, d=.50), changes in erectile functioning (p<.001, d=1.2), and depressive symptoms (p=.01, d=.30), with men after RP reporting higher values, respectively. Regression analyses showed that after adjustment for age, tumor risk category, and changes in erectile functioning, depressive symptoms were still significantly associated with decisional regret (B=0.52, p<.001). Additionally, when predicting decisional regret, the interaction of changes in erectile functioning and depressive symptoms reached significance for men after RP (B=0.52, p<.001), but not for men under AS (B=-0.16, p=.14). With increasing changes in erectile functioning, the association of depressive symptoms with decisional regret became stronger in men after RP. Decisional regret is a phenomenon more prominent in men after RP than in men under AS. Erectile functioning and depressive symptoms interact in their prediction of decisional regret. Screening for and treating depressive symptoms might constitute a starting point for interventions aiming to reduce decisional regret in this target group.
Keywords: active surveillance, decisional regret, depressive symptoms, erectile functioning, prostate cancer, radical prostatectomy
Procedia PDF Downloads 218
605 Effect of Locally Injected Mesenchymal Stem Cells on Bone Regeneration of Rat Calvaria Defects
Authors: Gileade P. Freitas, Helena B. Lopes, Alann T. P. Souza, Paula G. F. P. Oliveira, Adriana L. G. Almeida, Paulo G. Coelho, Marcio M. Beloti, Adalberto L. Rosa
Abstract:
Bone tissue presents a great capacity to regenerate when injured by trauma, infectious processes, or neoplasia. However, the extent of the injury may exceed the inherent tissue regeneration capability, demanding some kind of additional intervention. In this scenario, cell therapy has emerged as a promising alternative to treat challenging bone defects. This study aimed at evaluating the effect of the local injection of bone marrow-derived mesenchymal stem cells (BM-MSCs) and adipose tissue-derived mesenchymal stem cells (AT-MSCs) on the bone regeneration of rat calvaria defects. BM-MSCs and AT-MSCs were isolated and characterized by the expression of surface markers; cell viability was evaluated after injection through a 21G needle. Defects 5 mm in diameter were created in the calvaria, and after two weeks a single injection of BM-MSCs, AT-MSCs, or vehicle (PBS without cells; Control) was carried out. Cells were tracked by bioluminescence, and at 4 weeks post-injection bone formation was evaluated by micro-computed tomography (μCT) and histology, by nanoindentation, and through the gene expression of bone remodeling markers. The data were evaluated by one-way analysis of variance (p≤0.05). BM-MSCs and AT-MSCs presented the characteristics of mesenchymal stem cells, retained viability after passing through a 21G needle, and remained in the defects until day 14. In general, injection of both BM-MSCs and AT-MSCs resulted in higher bone formation compared to the Control. Additionally, this bone tissue displayed an elastic modulus and hardness similar to the pristine calvaria bone. The expression of all evaluated genes involved in bone formation was upregulated in bone tissue formed by BM-MSCs compared to AT-MSCs, while genes involved in bone resorption were upregulated in AT-MSC-formed bone. We show that cell therapy based on the local injection of BM-MSCs or AT-MSCs is effective in delivering viable cells that displayed local engraftment and induced a significant improvement in bone healing. Despite the differences in molecular cues observed between BM-MSCs and AT-MSCs, both cells were capable of forming bone tissue in comparable amounts and with comparable properties. These findings may drive cell therapy approaches toward the complete bone regeneration of challenging sites.
Keywords: cell therapy, mesenchymal stem cells, bone repair, cell culture
Procedia PDF Downloads 184
604 Kinetics of Sugar Losses in Hot Water Blanching of Water Yam (Dioscorea alata)
Authors: Ayobami Solomon Popoola
Abstract:
Yam is mainly a carbohydrate food grown in most parts of the world. It can be boiled, fried, or roasted for consumption in a variety of ways. Blanching is an established heat pre-treatment given to fruits and vegetables prior to further processing such as dehydration, canning, or freezing. The loss of soluble solids during blanching has been a great problem, because a considerable quantity of the water-soluble nutrients is inevitably leached into the blanching water. Without blanching, the high residual levels of reducing sugars after extended storage produce a dark, bitter-tasting product because of the Maillard reactions of reducing sugars at frying temperature. Measurement and prediction of such losses are necessary for economic efficiency in production and to establish the level of effluent treatment of the blanching water. This paper addresses this problem by investigating the effects of cube size and temperature on the rate of diffusional losses of reducing sugars and total sugars during hot-water blanching of water yam. The study was carried out using four temperature levels (65, 70, 80, and 90 °C) and two cube sizes (0.02 m³ and 0.03 m³) at four time intervals (5, 10, 15, and 20 min). The obtained data were fitted to Fick's non-steady-state equation, from which diffusion coefficients (Da) were obtained. The Da values were subsequently fitted to an Arrhenius plot to obtain the activation energies (Ea values) for diffusional losses. The diffusion coefficients were independent of cube size and time but highly temperature-dependent. The diffusion coefficients were ≥ 1.0 × 10⁻⁹ m²s⁻¹ for reducing sugars and ≥ 5.0 × 10⁻⁹ m²s⁻¹ for total sugars. The Ea values ranged between 68.2 and 73.9 kJ mol⁻¹ for reducing sugars and between 7.2 and 14.3 kJ mol⁻¹ for total sugars. Predictive equations for estimating the amounts of reducing sugars and total sugars as functions of blanching time at various temperatures are also presented; these equations could be valuable in process design and optimization. However, the amounts of other soluble solids that might have leached into the water along with the reducing and total sugars during blanching were not investigated in this study.
Keywords: blanching, kinetics, sugar losses, water yam
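The Arrhenius step mentioned above, recovering the activation energy Ea from the temperature dependence of the diffusion coefficient, amounts to a straight-line fit of ln(D) against 1/T. A minimal sketch with illustrative D values in the reported ~10⁻⁹ m²/s range:

```python
import numpy as np

R = 8.314                                          # gas constant, J/(mol K)
T = np.array([65.0, 70.0, 80.0, 90.0]) + 273.15    # blanching temperatures, K
D = np.array([1.0e-9, 1.3e-9, 2.1e-9, 3.4e-9])     # diffusion coefficients, m^2/s (illustrative)

slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)   # ln D = ln D0 - Ea/(R T)
Ea = -slope * R
print(f"Ea ≈ {Ea / 1e3:.1f} kJ/mol")
```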
Procedia PDF Downloads 165
603 D-Wave Quantum Computing Ising Model: A Case Study for Forecasting of Heat Waves
Authors: Dmytro Zubov, Francesco Volponi
Abstract:
In this paper, a D-Wave quantum computing Ising model is used for forecasting positive extremes of daily mean air temperature. Forecast models are designed with two to five qubits, which represent 2-, 3-, 4-, and 5-day historical data, respectively. The Ising model's real-valued weights and dimensionless coefficients are calculated using daily mean air temperatures from 119 places around the world, as well as sea level (Aburatsu, Japan). In comparison with current methods, this approach is better suited to predicting heat wave values because it does not require the estimation of a probability distribution from scarce observations. The proposed quantum computing forecast algorithm is simulated on a traditional computer architecture, with combinatorial optimization of the Ising model parameters, for the Ronald Reagan Washington National Airport dataset with 1-day lead time on the learning sample (1975-2010). Analysis of the forecast accuracy (the ratio of successful predictions to the total number of predictions) on the validation sample (2011-2014) shows that the Ising model with three qubits has 100% accuracy, which is quite significant compared to other methods. However, the number of identified heat waves is small (only one out of nineteen in this case). The other models, with 2, 4, and 5 qubits, have 20%, 3.8%, and 3.8% accuracy, respectively. The presented three-qubit forecast model is applied to the prediction of heat waves at five other locations: Aurel Vlaicu, Romania (accuracy 28.6%); Bratislava, Slovakia (21.7%); Brussels, Belgium (33.3%); Sofia, Bulgaria (50%); and Akhisar, Turkey (21.4%). These predictions are not ideal, but they are not zeros; they can be used independently or together with predictions generated by other methods. The loss of human life, as well as the environmental, economic, and material damage from extreme air temperatures, could be reduced if some heat waves are predicted. Even a small success rate implies a large socio-economic benefit.
Keywords: heat wave, D-wave, forecast, Ising model, quantum computing
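A three-qubit Ising forecast model of the kind described can be written down with D-Wave's dimod package: each spin encodes whether one of the last three days exceeded the heat-wave threshold, and the lowest-energy configuration of the trained model is read out as the forecast. The biases and couplings below are placeholders, not the paper's fitted weights:

```python
import dimod

h = {0: -0.2, 1: -0.1, 2: 0.05}                    # linear biases per qubit (illustrative)
J = {(0, 1): -0.3, (1, 2): -0.25, (0, 2): 0.1}     # pairwise couplings (illustrative)

bqm = dimod.BinaryQuadraticModel.from_ising(h, J)
best = dimod.ExactSolver().sample(bqm).first       # brute force is fine for 3 spins
print("lowest-energy spin state:", dict(best.sample), "energy:", best.energy)
```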
Procedia PDF Downloads 500
602 Sensitivity Analysis of the Heat Exchanger Design in Net Power Oxy-Combustion Cycle for Carbon Capture
Authors: Hirbod Varasteh, Hamidreza Gohari Darabkhani
Abstract:
Global warming and its impact on climate change is one of the main challenges of the current century. Global warming is mainly due to the emission of greenhouse gases (GHG), and carbon dioxide (CO2) is known to be the major contributor to the GHG emission profile. While the energy sector is the primary source of CO2 emissions, Carbon Capture and Storage (CCS) is believed to be the solution for controlling them. Oxyfuel combustion (oxy-combustion) is one of the major technologies for capturing CO2 from power plants. For gas turbines, several oxy-combustion power cycles (oxyturbine cycles) have been investigated by means of thermodynamic analysis. The NetPower cycle is one of the leading oxyturbine power cycles, with almost full carbon capture capability from a natural-gas-fired power plant. In this manuscript, a sensitivity analysis of the heat exchanger design in the NetPower cycle is carried out by means of process modelling. Heat capacity variation and supercritical CO2 with gaseous admixtures are considered in a multi-zone analysis with Aspen Plus software. It is found that the heat exchanger design plays a major role in increasing the efficiency of the NetPower cycle. A pinch-point analysis is performed to extract the composite and grand composite curves for the heat exchanger. The relationship between the cycle efficiency and the minimum approach temperature (∆Tmin) of the heat exchanger is also evaluated: an increase in ∆Tmin causes a decrease in the temperature of the recycled flue gases (RFG) and an overall decrease in the power required by the recycled gas compressor. The main challenge in the design of heat exchangers in power plants is the trade-off between capital and operational costs: achieving a lower ∆Tmin requires a larger heat exchanger, which means a higher capital cost but better heat recovery and a lower operational cost. Accordingly, ∆Tmin is selected at the minimum point of the combined capital and operational cost curves. This study provides insight into the performance analysis and operating conditions of the NetPower oxy-combustion cycle as a function of its heat exchanger design. Keywords: carbon capture and storage, oxy-combustion, netpower cycle, oxy turbine cycles, zero emission, heat exchanger design, supercritical carbon dioxide, oxy-fuel power plant, pinch point analysis
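To make the capital/operational trade-off concrete, here is a minimal sketch that selects ∆Tmin at the minimum of a combined cost curve. The cost models (exchanger area, and hence capital cost, scaling as 1/∆Tmin for a fixed duty; operational cost rising linearly with ∆Tmin) and all coefficients are simplified assumptions for illustration, not values from the study.

```python
import numpy as np

# Candidate minimum approach temperatures, K
dT_min = np.linspace(2.0, 30.0, 200)

# Assumed annualized capital cost: for a fixed duty Q = U * A * dTmin,
# the required area A (and hence cost) grows roughly as 1 / dTmin.
capital_cost = 5.0e6 / dT_min          # arbitrary currency units / yr

# Assumed operational cost: poorer heat recovery at larger dTmin means
# more fuel / compression work, modeled here as linear in dTmin.
operational_cost = 8.0e4 * dT_min      # arbitrary currency units / yr

# Select dTmin at the minimum of the combined cost curve.
total_cost = capital_cost + operational_cost
best = dT_min[np.argmin(total_cost)]
print(f"cost-optimal dTmin ~ {best:.1f} K")
```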
Procedia PDF Downloads 204
601 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
At present, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are fragmented; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge: delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data, and with this increased availability, traditional database approaches are no longer sufficient for rapidly performing life-science queries involving the fusion of data types. Computing systems are now so powerful that researchers can consider modeling the folding of a protein or even simulating an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates the indispensability of HPC in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC to evaluate or solve limited but meaningful problem instances. The paper also outlines solutions to optimization problems and the mutual benefits of Big Data and computational biology, and surveys the current state of the art and future generations of HPC computing with Big Data. Keywords: high performance, big data, parallel computation, molecular data, computational biology
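As a toy illustration of why all-to-all comparisons become expensive, the sketch below scores every pair of sequences in parallel across processes; the similarity function and the sample sequences are invented placeholders, and a real pipeline would call an established alignment library instead.

```python
from itertools import combinations
from multiprocessing import Pool

# Placeholder sequences; real workloads involve millions of reads.
SEQS = ["ACGTACGT", "ACGTTCGT", "TTGTACCA", "ACGAACGT"]

def pair_score(pair):
    """Toy similarity: fraction of matching positions (equal-length toy data).
    A real pipeline would call an alignment routine instead."""
    i, j = pair
    a, b = SEQS[i], SEQS[j]
    matches = sum(x == y for x, y in zip(a, b))
    return (i, j, matches / min(len(a), len(b)))

if __name__ == "__main__":
    # n sequences -> n*(n-1)/2 comparisons: this quadratic growth is what
    # pushes all-to-all workloads onto HPC clusters.
    pairs = list(combinations(range(len(SEQS)), 2))
    with Pool() as pool:
        for i, j, s in pool.map(pair_score, pairs):
            print(f"seq{i} vs seq{j}: similarity = {s:.2f}")
```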
Procedia PDF Downloads 364
600 Numerical Simulation of Aeroelastic Influence Exerted by Kinematic and Geometrical Parameters on Oscillations' Frequencies and Phase Shift Angles in a Simulated Compressor of Gas Transmittal Unit
Authors: Liliia N. Butymova, Vladimir Y. Modorsky, Nikolai A. Shevelev
Abstract:
Prediction of vibration processes in gas transmittal units (GTU) is an urgent problem. Despite numerous scientific publications on the problem of vibrations in general, there are few works concerning FSI modeling of the interaction between several deformable blades in a gas-dynamic flow. Since it is very difficult to solve the problem in full scope, with all factors considered, a unidirectional dynamically coupled 1FSI model is suggested for the first stage; from symmetry considerations it includes two blades, and it can be regarded as the first step toward solving the more general bidirectional problem. The multiprocessor ANSYS CFX package was chosen as the numerical computation tool, and the problem was solved on the PNRPU high-capacity computer complex. At the first stage of the study, the blades were assumed to oscillate at the same frequency, although their oscillation phases could be either equal or different; the non-stationary distribution of gas-dynamic forces over the blade surfaces is calculated in the course of the simulation experiment. Oscillations in the "gas — structure" dynamic system are assumed to increase if the resultant of these gas-dynamic forces is in phase with the blade oscillation (phase shift φ = 0). If the oscillations occur with a phase shift, they may increase or decrease, depending on the phase shift value. The most important results are as follows: the phase shift between inter-blade oscillation and the gas-dynamic force depends on the flow velocity, the specific inter-blade gap, and the shaft rotation speed; and a phase shift in the oscillation of adjacent blades does not always correspond to a phase shift of the gas-dynamic forces affecting the blades. Thus, it was discovered that asynchronous oscillation of the blades may cause either attenuation or intensification of oscillation. It was also revealed that the clocking effect may depend not only on the mutual circumferential displacement of blade rows and the gap between the blades, but also on the nature of the blades' dynamic deformation. Keywords: aeroelasticity, ANSYS CFX, oscillation, phase shift, clocking effect, vibrations
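The role of the phase shift can be illustrated with a simple energy argument: the work done per cycle by a harmonic force F0·cos(ωt + φ) on a harmonic displacement x0·cos(ωt) determines whether energy is pumped into or out of the oscillation. The sketch below integrates this work numerically for several phase shifts; it is a generic single-degree-of-freedom illustration with invented amplitudes, not the authors' CFX model.

```python
import numpy as np

# Generic single-DOF illustration: work per cycle done by a harmonic
# gas-dynamic force on a harmonically oscillating blade. Amplitudes and
# frequency are invented for demonstration.
F0, x0, omega = 10.0, 1e-3, 2 * np.pi * 100.0   # N, m, rad/s
T = 2 * np.pi / omega
t = np.linspace(0.0, T, 10_001)

x_dot = -x0 * omega * np.sin(omega * t)          # blade velocity

for phi_deg in (0, 45, 90, 135, 180):
    phi = np.radians(phi_deg)
    F = F0 * np.cos(omega * t + phi)             # force, shifted by phi
    # Work per cycle W = integral of F dx = integral of F * x_dot dt.
    # W > 0 feeds energy into the oscillation (intensification),
    # W < 0 extracts energy (attenuation).
    W = np.trapz(F * x_dot, t)
    print(f"phi = {phi_deg:3d} deg -> W per cycle = {W:+.4e} J")
```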
Procedia PDF Downloads 269
599 The Impact of Intelligent Control Systems on Biomedical Engineering and Research
Authors: Melkamu Tadesse Getachew
Abstract:
Intelligent control systems have revolutionized biomedical engineering, advancing research and enhancing medical practice. This review paper examines the impact of intelligent control on various aspects of biomedical engineering. It analyzes how these systems enhance precision and accuracy in biomedical instrumentation, improving diagnostics, monitoring, and treatment; integration challenges are addressed, and potential solutions are proposed. The paper also investigates the optimization of drug delivery systems through intelligent control, exploring how intelligent systems contribute to precise dosing, targeted drug release, and personalized medicine; challenges related to controlled drug release and patient variability are discussed, along with potential avenues for overcoming them. The algorithms used in intelligent biomedical control systems are also compared. The implications of intelligent control in computational and systems biology are explored, showcasing how these systems enable enhanced analysis and prediction of complex biological processes; challenges such as interpretability, human-machine interaction, and machine reliability are examined, along with potential solutions. Intelligent control in biomedical engineering also plays a crucial role in risk management during surgical operations: this section demonstrates how intelligent systems improve patient safety and surgical outcomes when integrated into surgical robots, augmented reality, and preoperative planning, and the challenges associated with these implementations and their potential solutions are discussed in detail. In summary, this review comprehensively explores the widespread impact of intelligent control on biomedical engineering, discussing application areas, challenges, and potential solutions, and highlighting the transformative potential of these systems in advancing research, improving medical practice, and addressing human health issues. Keywords: intelligent control systems, biomedical instrumentation, drug delivery systems, robotic surgical instruments, computational monitoring and modeling
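As a sketch of the closed-loop dosing concept the review discusses, the example below drives a drug infusion rate with a PID controller; the one-compartment plant model (first-order clearance), the gains, and the setpoint are all invented for illustration and carry no clinical validity.

```python
# Minimal sketch of closed-loop drug dosing with a PID controller. The
# one-compartment plant model, gains, and setpoint are invented for
# demonstration and have no clinical validity.

def simulate_pid_infusion(setpoint=4.0, minutes=120, dt=0.1,
                          kp=2.0, ki=0.05, kd=0.5, clearance=0.08):
    """Track a target plasma concentration (mg/L) by adjusting infusion."""
    conc, integral, prev_error = 0.0, 0.0, setpoint
    for step in range(int(minutes / dt)):
        error = setpoint - conc
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error

        # PID law; the infusion rate cannot be negative.
        rate = max(0.0, kp * error + ki * integral + kd * derivative)

        # One-compartment kinetics: infusion in, first-order clearance out.
        conc += (rate - clearance * conc) * dt

        if step % int(20 / dt) == 0:
            print(f"t = {step * dt:6.1f} min, conc = {conc:5.2f} mg/L")
    return conc

simulate_pid_infusion()
```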
Procedia PDF Downloads 44
598 Test Method Development for Evaluation of Process and Design Effect on Reinforced Tube
Authors: Cathal Merz, Gareth O’Donnell
Abstract:
Coil-reinforced thin-walled (CRTW) tubes are used in medicine to treat problems affecting blood vessels within the body through minimally invasive procedures. The CRTW tube considered in this research forms part of such a device, which is inserted into the patient via the femoral or brachial arteries and manually navigated to the site in need of treatment. This procedure replaces the need for open surgery but is limited by the reduction in blood vessel lumen diameter and the increase in tortuosity of blood vessels deep in the brain. To maximize the capability of these procedures, CRTW tube devices are being manufactured with ever-decreasing wall thicknesses, in order to deliver treatment deeper into the body and to allow passage of other devices through their inner diameter. This introduces significant stresses in the device materials, which have resulted in an observed increase in the proximal segment of the device breaking into two separate pieces after it has failed by buckling. As there is currently no international standard for measuring the mechanical properties of these CRTW tube devices, it is difficult to analyze this problem accurately. The aim of the current work is to address this gap in the biomedical device industry by developing a measurement system that can quantify the effect of process and design changes on CRTW tube performance, aiding the development of better-performing, next-generation devices. Using materials testing frames, micro-computed tomography (micro-CT) imaging, experiment planning, analysis of variance (ANOVA), t-tests, and regression analysis, test methods have been developed for assessing the impact of process and design changes on the device. The major findings of this study are an insight into the suitability of buckle and three-point bend tests for measuring the effect of varying processing factors on device performance, and guidelines for interpreting the output data from these test methods. The findings are of significant interest for verifying and validating key process and design changes associated with the device structure and material condition. Test method integrity evaluation is explored throughout. Keywords: neurovascular catheter, coil reinforced tube, buckling, three-point bend, tensile
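As a sketch of the statistical step the abstract names, the example below runs a one-way ANOVA and a follow-up two-sample t-test on hypothetical buckling-load measurements from three process conditions; the data and group labels are invented, and a real analysis would use the study's measured loads.

```python
import numpy as np
from scipy import stats

# Hypothetical peak buckling loads (N) for tubes made under three
# process conditions; values are invented for illustration.
rng = np.random.default_rng(0)
condition_a = rng.normal(1.20, 0.05, size=10)
condition_b = rng.normal(1.15, 0.05, size=10)
condition_c = rng.normal(1.05, 0.05, size=10)

# One-way ANOVA: does at least one process condition shift the mean load?
f_stat, p_anova = stats.f_oneway(condition_a, condition_b, condition_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Follow-up two-sample t-test between the extreme conditions.
t_stat, p_t = stats.ttest_ind(condition_a, condition_c)
print(f"t-test A vs C: t = {t_stat:.2f}, p = {p_t:.4f}")
```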
Procedia PDF Downloads 117
597 The Implementation of Poisson Impedance Inversion to Improve Hydrocarbon Reservoir Characterization in Poseidon Field, Browse Basin, Australia
Authors: Riky Tri Hartagung, Mohammad Syamsu Rosid
Abstract:
Lithology prediction, together with prediction of fluid content, is the most important part of reservoir characterization. One of the methods used in this process is simultaneous seismic inversion. In the Poseidon field, Browse Basin, Australia, the parameters generated through simultaneous seismic inversion cannot characterize the reservoir accurately because the impedance values of hydrocarbon sand, water sand, and shale overlap, which causes a high level of ambiguity in the interpretation. Poisson Impedance (PI) inversion provides a solution to this problem by rotating the impedance axes by a few degrees; the rotation is obtained through the coefficient c. The coefficient c is obtained through Target Correlation Coefficient Analysis (TCCA), by finding the optimum correlation coefficient between Poisson Impedance and a target log, namely gamma ray, effective porosity, or resistivity. Correlating with each of these target logs yields, respectively, Lithology Impedance (LI), which is sensitive to sand lithology; Porosity Impedance (ϕI), which is sensitive to porous sand; and Fluid Impedance (FI), which is sensitive to fluid content. The results show that PI gives better results in separating hydrocarbon-saturated reservoir zones. Based on the LI–gamma ray crossplot, the ϕI–effective porosity crossplot, and the FI–Sw crossplot, with optimum correlations of 0.74, 0.91, and 0.82 respectively, the lithology of hydrocarbon-saturated porous sand corresponds to LI ≤ 2800 (m/s)(g/cc), ϕI ≤ 5500 (m/s)(g/cc), and FI ≤ 4000 (m/s)(g/cc). The presence of low LI, ϕI, and FI values correlates accurately with the presence of hydrocarbons in the well. Each value of c is then applied to the seismic data. The results show that PI inversion gives a good picture of the distribution of hydrocarbon-saturated porous sand lithology, which on the seismic inversion section trends northeast–southwest; this is interpreted as the direction of gas distribution. Keywords: reservoir characterization, poisson impedance, browse basin, poseidon field
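A common formulation of Poisson Impedance is PI = AI − c·GI (acoustic impedance minus scaled gradient impedance); under that assumption, the sketch below scans candidate c values and keeps the one whose PI correlates best with a target log, mirroring the TCCA step described above. The well-log arrays are invented placeholders, not Poseidon field data.

```python
import numpy as np

# Placeholder well logs; a real TCCA run would use measured AI, GI, and
# a target log (gamma ray, effective porosity, or resistivity).
rng = np.random.default_rng(1)
n = 500
AI = rng.normal(7000.0, 600.0, n)          # acoustic impedance
GI = 0.6 * AI + rng.normal(0.0, 200.0, n)  # gradient impedance
target = -0.004 * (AI - 1.4 * GI) + rng.normal(0.0, 0.5, n)  # e.g. gamma ray

def tcca_scan(ai, gi, target_log, c_grid):
    """Return the c maximizing |corr(PI, target)| with PI = AI - c * GI."""
    best_c, best_r = None, 0.0
    for c in c_grid:
        pi = ai - c * gi
        r = np.corrcoef(pi, target_log)[0, 1]
        if abs(r) > abs(best_r):
            best_c, best_r = c, r
    return best_c, best_r

c_opt, r_opt = tcca_scan(AI, GI, target, np.linspace(0.0, 3.0, 301))
print(f"optimum c = {c_opt:.2f}, correlation = {r_opt:.2f}")
```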
Procedia PDF Downloads 124