Search results for: experimental liver cirrhosis
191 Comparison of the Effect of Heart Rate Variability Biofeedback and Slow Breathing Training on Promoting Autonomic Nervous Function Related Performance
Authors: Yi Jen Wang, Yu Ju Chen
Abstract:
Background: Heart rate variability (HRV) biofeedback can promote autonomic nervous function and sleep quality and reduce psychological stress. In HRV biofeedback training, the patient is guided by machine video or audio to breathe slowly in accordance with his or her own heart rate changes so that the heart and lungs achieve resonance, thereby promoting autonomic nervous function; it has also been pointed out that slow breathing at 6 breaths per minute can likewise guide the patient to achieve cardiopulmonary resonance. However, no research has compared the effectiveness of achieving cardiopulmonary resonance through video- or audio-based HRV biofeedback training versus metronome-guided slow breathing. Purpose: To compare the promotion of autonomic nervous function between HRV biofeedback and slow breathing guided by a metronome. Method: This study used an experimental design with convenience sampling; the cases were randomly divided into an HRV biofeedback training group and a slow breathing training group. The HRV biofeedback training group conducted HRV biofeedback training in the laboratory for four weeks and used a home training device for autonomous training, while the slow breathing training group conducted slow breathing training in the laboratory for four weeks, guided by a mobile phone breathing-metronome app, and used the app for autonomous training at home. Autonomic nervous function-related performance was measured at enrollment and again four weeks after the intervention. The chi-square test, Student's t-test, and other statistical methods were used to analyze the results, with p < 0.05 as the threshold for statistical significance. Results: A total of 27 subjects were included in the analysis. After four weeks of training, the HRV biofeedback training group showed significant improvement in the HRV indexes (SDNN, RMSSD, HF, TP) and sleep quality; although the stress index also decreased, it did not reach statistical significance. In the slow breathing training group, only sleep quality improved significantly after four weeks of training; the HRV indexes (SDNN, RMSSD, TP) all increased, and the HF and stress indexes decreased, but these changes were not statistically significant. Comparing the two groups after training, the HF index improved significantly in the HRV biofeedback training group. Although sleep quality improved in both groups, the between-group difference was not statistically significant. Conclusion: HRV biofeedback training is more effective in promoting autonomic nervous function than slow breathing training, but the effects on reducing stress and promoting sleep quality need to be explored with a larger sample. The results of this study can provide a reference for clinical or community health promotion. In the future, HRV biofeedback training could be further integrated into the development of artificial intelligence (AI) wearable devices, making it more convenient for people to train independently and receive effective feedback in time.
Keywords: autonomic nervous function, HRV biofeedback, heart rate variability, slow breathing
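As an illustrative sketch of the within-group pre/post comparison described above, a paired t-test on an HRV index such as SDNN could be run as follows (the values are hypothetical, not the study's data):

```python
# Hedged sketch: paired pre/post comparison of an HRV index (e.g., SDNN).
# The values below are hypothetical, not the study's data.
import numpy as np
from scipy import stats

sdnn_pre = np.array([32.1, 41.5, 28.7, 35.0, 30.2, 38.4])   # before training (ms)
sdnn_post = np.array([39.8, 47.2, 33.1, 42.6, 36.0, 44.9])  # after 4 weeks (ms)

t_stat, p_value = stats.ttest_rel(sdnn_post, sdnn_pre)  # within-group paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant: {p_value < 0.05}")
```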
Procedia PDF Downloads 175
190 Simulation and Thermal Evaluation of Containers Using PCM in Different Weather Conditions of Chile: Energy Savings in Lightweight Constructions
Authors: Paula Marín, Mohammad Saffari, Alvaro de Gracia, Luisa F. Cabeza, Svetlana Ushak
Abstract:
Climate control represents an important issue with regard to the energy consumption of buildings and the associated expenses, both during installation and operation. The climate control of a building relies on several factors, among them localization, orientation, architectural elements, and the sources of energy used. In order to study the thermal behaviour of a building set-up, the present study proposes the use of the energy simulation program EnergyPlus. In recent years, energy simulation programs have become important tools for the evaluation of the thermal/energy performance of buildings and facilities. Moreover, finding new forms of passive conditioning in buildings is critical for energy saving. The use of phase change materials (PCMs) for heat storage applications has grown in importance due to its high efficiency. The climatic conditions of northern Chile, namely high solar radiation, extreme temperature fluctuations ranging from -10°C to 30°C (Calama city), and a low number of cloudy days during the year, are therefore appropriate for taking advantage of solar energy and using passive systems in buildings. Also, the extensive mining activities in northern Chile encourage the use of large numbers of containers to house workers during shifts. These containers are constructed with lightweight construction systems, requiring heating during the night and cooling during the day, increasing the HVAC electricity consumption. The use of PCM can improve thermal comfort and reduce the energy consumption. The objective of this study was to evaluate the thermal and energy performance of containers of 2.5×2.5×2.5 m³ located in four cities of Chile: Antofagasta, Calama, Santiago, and Concepción. Lightweight envelopes, typically used in these building prototypes, were evaluated considering a container without PCM as the reference building and another container with PCM-enhanced envelopes as a test case, both of which have a door and a window in the same wall, oriented in two directions: north and south. To see the thermal response of these containers in different seasons, the simulations were performed over a period of one year. The results show that, for all four cities studied, higher energy savings are obtained when the door and window of the container face north, because of the higher incidence of solar radiation. HVAC consumption and percentage energy savings for the north-facing door and window are summarised. Simulation results show that in the city of Antofagasta 47% of heating energy could be saved, and in the cities of Calama and Concepción the biggest savings in terms of cooling could be achieved, since PCM reduces almost all of the cooling demand. Currently, based on the simulation results, four containers have been constructed and sized with the same structural characteristics used in the simulations, that is, containers with/without PCM, with a door and window in one wall. Two of these containers will be placed in Antofagasta and two in a copper mine near Calama, and all of them will be monitored for a period of one year. The simulation results will be validated with experimental measurements and will be reported in the future.
Keywords: energy saving, lightweight construction, PCM, simulation
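A minimal sketch of the percentage-savings comparison between the reference and PCM containers, assuming hypothetical annual HVAC consumption values (not the study's simulation output):

```python
# Hedged sketch: percentage energy savings of the PCM container relative to the
# reference container, from annual simulated HVAC consumption (hypothetical values).
reference_kwh = {"Antofagasta": {"heating": 850.0, "cooling": 420.0},
                 "Calama":      {"heating": 1900.0, "cooling": 310.0}}
pcm_kwh =       {"Antofagasta": {"heating": 450.0, "cooling": 300.0},
                 "Calama":      {"heating": 1500.0, "cooling": 15.0}}

for city, ref in reference_kwh.items():
    for mode, e_ref in ref.items():
        saving = 100.0 * (e_ref - pcm_kwh[city][mode]) / e_ref
        print(f"{city} {mode}: {saving:.0f}% saved")
```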
Procedia PDF Downloads 284
189 The Impact of Professional Development on Teachers’ Instructional Practice
Authors: Karen Koellner, Nanette Seago, Jennifer Jacobs, Helen Garnier
Abstract:
Although studies of teacher professional development (PD) are prevalent, surprisingly most have produced only incremental shifts in teachers’ learning and their impact on students. There is a critical need to understand what teachers take up and use in their classroom practice after attending PD and why we often do not see greater changes in learning and practice. This paper is based on a mixed-methods efficacy study of the Learning and Teaching Geometry (LTG) video-based mathematics professional development materials. The extent to which the materials produce a beneficial impact on teachers’ mathematics knowledge, classroom practices, and their students’ knowledge in the domain of geometry is considered through a group-randomized experimental design. In this study, we examine a small group of teachers to better understand their interpretations of the workshops and their classroom uptake. The participants included 103 secondary mathematics teachers serving grades 6-12 from two states in different regions. Randomization was conducted at the school level, with 23 schools and 49 teachers assigned to the treatment group and 18 schools and 54 teachers assigned to the comparison group. The case study examination included twelve treatment teachers. PD workshops for treatment teachers began in Summer 2016. Nine full days of professional development were offered to teachers, beginning with a one-week institute (Summer 2016) and four days of PD throughout the academic year. The same facilitator led all of the workshops, after completing a facilitator preparation process that included a multi-faceted assessment of fidelity. The overall impact of the LTG PD program was assessed from multiple sources: two teacher content assessments, two PD-embedded assessments, pre-post-post videotaped classroom observations, and student assessments. Additional data were collected from the case study teachers, including additional videotaped classroom observations and interviews. Repeated measures ANOVA analyses were used to detect patterns of change in the treatment teachers’ content knowledge before and after completion of the LTG PD, relative to the comparison group. No significant effects were found across the two groups of teachers on the two teacher content assessments. Teachers were rated on the quality of their mathematics instruction captured in videotaped classroom observations using the Math in Common Observation Protocol. On average, teachers who attended the LTG PD intervention improved their ability to engage students in mathematical reasoning and to provide accurate, coherent, and well-justified mathematical content. In addition, both the LTG PD intervention and instruction that engaged students in mathematical practices positively and significantly predicted greater student knowledge gains. Teacher knowledge was not a significant predictor. Twelve treatment teachers self-selected to serve as case study teachers and provided additional videotapes in which they felt they were using something they had learned and experienced in the PD. Project staff analyzed the videos, compared them to previous videos, and interviewed the teachers regarding their uptake of the PD related to content knowledge, pedagogical knowledge, and resources used.
Keywords: teacher learning, professional development, pedagogical content knowledge, geometry
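A minimal sketch of the repeated measures ANOVA described above, using statsmodels on a hypothetical pre/post teacher-score layout (not the study's data; the treatment/comparison factor would enter a fuller mixed-design analysis):

```python
# Hedged sketch: repeated-measures ANOVA on teacher content-assessment scores.
# Data frame layout and scores are hypothetical, not the study's data.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "teacher": [1, 1, 2, 2, 3, 3, 4, 4],
    "time":    ["pre", "post"] * 4,          # before/after the LTG PD
    "score":   [55, 62, 48, 51, 60, 66, 52, 58],
})

# One within-subject factor (time); the treatment vs comparison group would be
# handled as a between-subject factor in a mixed-design analysis.
result = AnovaRM(data, depvar="score", subject="teacher", within=["time"]).fit()
print(result.anova_table)
```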
Procedia PDF Downloads 169
188 Numerical Study of Homogeneous Nanodroplet Growth
Authors: S. B. Q. Tran
Abstract:
Drop condensation is the phenomenon in which tiny drops form when oversaturated vapour present in the environment condenses on a substrate and the droplets subsequently grow. Recently, this subject has received much attention due to its applications in many fields such as thin film growth, heat transfer, recovery of atmospheric water, and polymer templating. In the literature, many papers have investigated macro droplet growth, with radii at the millimeter scale, both theoretically and experimentally. However, few papers on nanodroplet condensation, especially theoretical work, are found in the literature. In order to understand droplet growth at the nanoscale, we perform numerical simulations of nanodroplet growth. We investigate and discuss the role of the droplet shape and monomer diffusion in drop growth and their effect on the growth law. The effect of droplet shape is studied through parametric studies of contact angle and disjoining pressure magnitude. Besides, the effect of pinning and de-pinning behaviours is also studied. We investigate the axisymmetric homogeneous growth of a 10–100 nm single water nanodroplet on a substrate surface. The main mechanism of droplet growth is attributed to the accumulation of laterally diffusing water monomers, formed by the absorption of water vapour from the environment onto the substrate. Under assumptions of quasi-steady thermodynamic equilibrium, the nanodroplet evolves according to the augmented Young–Laplace equation. Using continuum theory, we model the dynamics of nanodroplet growth, including the coupled effects of disjoining pressure, contact angle, and monomer diffusion, with the assumption of a constant flux of water monomers at the far field. The simulation results are validated by comparison with published experimental results. For the case of nanodroplet growth with constant contact angle, our numerical results show that the initial droplet growth is transient, driven by monomer diffusion. When the flux at the far field is small, the droplet grows at the beginning by the diffusion of initially available water monomers on the substrate and after that by the flux at the far field. In the steady late stage, the growth of droplet radius and droplet height follows a power law with exponent 1/3, which is unaffected by the substrate disjoining pressure and contact angle. However, it is found that the droplet grows faster in the radial direction than in height when disjoining pressure and contact angle increase. The simulations also show the effect of the computational domain size in the transient growth period: when the computational domain is larger, more mass enters the free substrate domain, and hence more mass enters the droplet, so the droplet grows and reaches the steady state faster. For the case of pinning and de-pinning droplet growth, the simulations show that the disjoining pressure does not affect the 1/3 power law for droplet radius growth in the steady state. However, the disjoining pressure modifies the growth rate of the droplet height, which then follows a power law with exponent 1/4. We demonstrate how spatial depletion of monomers could lead to a growth arrest of the nanodroplet, as observed experimentally.
Keywords: augmented Young-Laplace equation, contact angle, disjoining pressure, nanodroplet growth
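A minimal sketch of how the 1/3 growth exponent could be extracted from radius-versus-time output by a log-log fit (synthetic data standing in for the simulation output):

```python
# Hedged sketch: extracting the late-stage growth exponent from simulated
# radius-vs-time data by a log-log linear fit (synthetic data shown).
import numpy as np

t = np.logspace(0, 3, 50)                 # time (arbitrary units)
radius = 12.0 * t ** (1.0 / 3.0)          # synthetic R ~ t^(1/3) growth

slope, intercept = np.polyfit(np.log(t), np.log(radius), 1)
print(f"fitted growth exponent: {slope:.3f} (expected 1/3 for steady late growth)")
```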
Procedia PDF Downloads 271
187 Design, Control and Implementation of 300Wp Single Phase Photovoltaic Micro Inverter for Village Nano Grid Application
Authors: Ramesh P., Aby Joseph
Abstract:
Micro inverters provide a module-embedded solution for harvesting energy from small-scale solar photovoltaic (PV) panels. In addition to higher modularity and reliability (25 years of life), the micro inverter has inherent advantages such as the avoidance of long DC cables, elimination of module mismatch losses, minimization of partial shading effects, and improved safety and flexibility in installations. Due to the above-stated benefits, renewable energy technology with solar photovoltaic (PV) micro inverters is becoming more widespread in village nano grid applications, ensuring grid independence for rural communities and areas without access to electricity. While the primary objective of this paper is to discuss the problems related to rural electrification, this concept can also be extended to urban installation with grid connectivity. This work presents a comprehensive analysis of the power circuit design, control methodologies, and prototyping of a 300Wₚ single phase PV micro inverter. This paper investigates two different topologies for PV micro inverters, based on the one hand on a single-stage flyback/forward PV micro inverter configuration and on the other hand on a double-stage configuration comprising a DC-DC converter and an H-bridge DC-AC inverter. This work covers power decoupling techniques to reduce the size of the input filter capacitor needed to buffer the double-line (100 Hz) ripple energy, and eliminates the use of electrolytic capacitors. The double-line oscillation propagated back to the PV module will affect the maximum power point tracking (MPPT) performance, and the grid current will be distorted. To mitigate this issue, an independent MPPT control algorithm is developed in this work to reject the propagation of this double-line ripple oscillation to the PV side, improving the MPPT performance, and to the grid side, improving current quality. Here, the power hardware topology accepts wide input voltage variation and consists of suitably rated MOSFET switches, galvanically isolated gate drivers, high-frequency magnetics, and film capacitors with a long lifespan. The digital controller hardware platform, with external peripheral interfaces, is developed using the floating point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the PV micro inverter is written in the C language and was developed using the Code Composer Studio integrated development environment (IDE). In this work, the prototype hardware for the single phase photovoltaic micro inverter with the double-stage configuration was developed, and a comparative analysis between the above-mentioned configurations, with experimental results, will be presented.
Keywords: double line oscillation, micro inverter, MPPT, nano grid, power decoupling
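For illustration only, a generic perturb-and-observe (P&O) hill-climbing step on a synthetic PV curve; this is not the ripple-rejecting MPPT algorithm developed in the paper, just the basic tracking idea:

```python
# Hedged sketch: a generic perturb-and-observe (P&O) MPPT loop on a synthetic
# PV curve. This is NOT the paper's ripple-rejecting MPPT algorithm.
def pv_current(v):
    """Synthetic PV I-V curve (illustrative only)."""
    return max(0.0, 10.0 * (1.0 - (v / 40.0) ** 5))

v, step = 20.0, 0.5
p_prev = v * pv_current(v)
for _ in range(100):
    v += step
    p = v * pv_current(v)
    if p < p_prev:          # power dropped: reverse the perturbation direction
        step = -step
    p_prev = p
print(f"operating point near MPP: V = {v:.1f} V, P = {p_prev:.1f} W")
```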
Procedia PDF Downloads 133
186 Supercritical Water Gasification of Organic Wastes for Hydrogen Production and Waste Valorization
Authors: Laura Alvarez-Alonso, Francisco Garcia-Carro, Jorge Loredo
Abstract:
Population growth and industrial development imply an increase in energy demands and in the problems caused by emissions of greenhouse gases, which has inspired the search for clean sources of energy. Hydrogen (H₂) is expected to play a key role in the world’s energy future by replacing fossil fuels. The properties of H₂ make it a green fuel that does not generate pollutants and supplies sufficient energy for power generation, transportation, and other applications. Supercritical water gasification (SCWG) represents an attractive alternative for the recovery of energy from wastes. SCWG allows the conversion of a wide range of raw materials into a fuel gas with a high content of hydrogen and light hydrocarbons through their treatment at conditions above those that define the critical point of water (temperature of 374°C and pressure of 221 bar). Methane, used as a transport fuel, is another important gasification product. The range of different uses of gas and energy forms that can be produced, depending on the kind of material gasified and the type of technology used to process it, shows the flexibility of SCWG. This feature allows it to be integrated with several industrial processes, as well as with power generation systems or waste-to-energy production systems. The final aim of this work is to study which conditions and equipment are the most efficient and advantageous for obtaining streams rich in H₂ from oily wastes, which represent a major problem for both the environment and human health throughout the world. In this paper, the relative complexity of the technology needed for feasible gasification process cycles is discussed, with particular reference to the different feedstocks that can be used as raw material, different reactors, and energy recovery systems. For this purpose, a review of the current status of SCWG technologies has been carried out by means of different classifications based on key features such as the feed treated or the type of reactor and other apparatus. This analysis makes it possible to improve the technology's efficiency through the study of model calculations and their comparison with experimental data, the establishment of kinetics for chemical reactions, the analysis of how the main reaction parameters affect the yield and composition of products, and the determination of the most common problems and risks that can occur. The results of this work show that SCWG is a promising method for the production of both hydrogen and methane. The most significant design choices are the reactor type and process cycle, which can be conveniently adopted according to waste characteristics. Regarding the future of the technology, the design of SCWG plants is still to be optimized to include energy recovery systems in order to reduce the equipment and operation costs derived from the high temperature and pressure conditions that are necessary to convert water to the supercritical state, as well as to find solutions to prevent corrosion and clogging of components of the reactor.
Keywords: hydrogen production, organic wastes, supercritical water gasification, system integration, waste-to-energy
Procedia PDF Downloads 147
185 Chiral Molecule Detection via Optical Rectification in Spin-Momentum Locking
Authors: Jessie Rapoza, Petr Moroshkin, Jimmy Xu
Abstract:
Chirality is omnipresent in nature, in life, and in the field of physics. One intriguing example is the homochirality that has remained a great secret of life. Another is the pairs of mirror-image molecules, the enantiomers. They are identical in atomic composition and therefore indistinguishable in their scalar physical properties. Yet, they can be either therapeutic or toxic, depending on their chirality. Recent studies suggest a potential link between abnormal levels of certain D-amino acids and some serious health impairments, including schizophrenia, amyotrophic lateral sclerosis, and potentially cancer. Although indistinguishable in its scalar properties, the chirality of a molecule reveals itself in interaction with a surrounding of a certain chirality or, more generally, a broken mirror symmetry. In this work, we report on a system for chiral molecule detection in which the mirror symmetry is doubly broken, first by asymmetrically structuring a nanopatterned plasmonic surface and then by the incidence of circularly polarized light (CPL). In this system, the incident circularly polarized light induces a surface plasmon polariton (SPP) wave propagating along the asymmetric plasmonic surface. This SPP field is itself chiral, evanescently bound to a near-field zone on the surface (~10 nm thick), but with an amplitude greatly intensified (by up to 10⁴) over that of the incident light. It hence probes just the molecules on the surface instead of those in the volume. In coupling to molecules along its path on the surface, the chiral SPP wave favors one chirality over the other, allowing for chirality detection via the change in an optical rectification current measured at the edges of the sample. The asymmetrically structured surface converts the high-frequency electron plasmonic oscillations in the SPP wave into a net DC drift current that can be measured at the edge of the sample via the mechanism of optical rectification. The measured results validate these design concepts and principles. The observed optical rectification current exhibits a clear differentiation between a pair of enantiomers. Experiments were performed by focusing 1064 nm CW laser light at the sample, a gold grating microchip submerged in an approximately 1.82 M solution of either L-arabinose or D-arabinose in water. A measurement of the current output was then recorded under both right and left circularly polarized light. Measurements were recorded at various angles of incidence to optimize the coupling between the spin momenta of the incident light and of the SPP, that is, spin-momentum locking. In order to suppress the background, the values of the photocurrent for right CPL are subtracted from those for left CPL. Comparison between the two arabinose enantiomers reveals a preferential signal response of one enantiomer to left CPL and of the other enantiomer to right CPL. In sum, this work reports the first experimental evidence of the feasibility of chiral molecule detection via optical rectification in a metal meta-grating. This nanoscale-interfaced electrical detection technology is advantageous over other detection methods due to its size, cost, ease of use, and ability to integrate with read-out electronic circuits for data processing and interpretation.
Keywords: chirality, detection, molecule, spin
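A minimal sketch of the background-suppression step described above, subtracting the photocurrent under right CPL from that under left CPL for each enantiomer (the readings are hypothetical):

```python
# Hedged sketch: background suppression by subtracting the rectification current
# under right CPL from that under left CPL, per enantiomer (hypothetical readings).
import numpy as np

# repeated photocurrent readings (nA) under left/right circular polarization
l_arabinose = {"LCP": np.array([4.1, 4.3, 4.2]), "RCP": np.array([2.0, 2.1, 1.9])}
d_arabinose = {"LCP": np.array([2.2, 2.0, 2.1]), "RCP": np.array([4.0, 4.2, 4.1])}

for name, meas in [("L-arabinose", l_arabinose), ("D-arabinose", d_arabinose)]:
    delta = meas["LCP"].mean() - meas["RCP"].mean()
    # opposite signs of the differential signal distinguish the enantiomers
    print(f"{name}: delta I (LCP - RCP) = {delta:+.2f} nA")
```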
Procedia PDF Downloads 92
184 Study of Biomechanical Model for Smart Sensor Based Prosthetic Socket Design System
Authors: Wei Xu, Abdo S. Haidar, Jianxin Gao
Abstract:
A prosthetic socket is a component that connects the residual limb of an amputee with an artificial prosthesis. It is widely recognized as the most critical component in determining the comfort of a patient when wearing the prosthesis in his/her daily activities. Through the socket, the body weight and its associated dynamic load are distributed and transmitted to the prosthesis during walking, running, or climbing. In order to achieve a good-fit socket for an individual amputee, it is essential to obtain the biomechanical properties of the residual limb. In current clinical practice, this is achieved by a touch-and-feel approach, which is highly subjective. Although there have been significant advancements in prosthetic technologies, such as microprocessor-controlled knee and ankle joints, in the last decade, the progress in designing a comfortable socket has been rather limited. This means that the current process of socket design is still very time-consuming and highly dependent on the expertise of the prosthetist. Supported by state-of-the-art sensor technologies and numerical simulations, a new socket design system is being developed to help prosthetists achieve rapid design of comfortable sockets for above-knee amputees. This paper reports the research work related to establishing biomechanical models for socket design. Through numerical simulation using the finite element method, comprehensive relationships between the pressure on the residual limb and the socket geometry were established. This allowed local topological adjustment of the socket so as to optimize the pressure distribution across the residual limb. When the full body weight of a patient is exerted on the residual limb, high pressures and shear forces occur between the residual limb and the socket. During the numerical simulations, various hyperelastic models, namely Ogden, Yeoh, and Mooney-Rivlin, were used, and their effectiveness in representing the biomechanical properties of the soft tissues of the residual limb was evaluated. This also involved reverse engineering, which resulted in an optimal representative model under compression testing. To validate the simulation results, a range of silicone models were fabricated. They were tested by an indentation device, which yielded the force-displacement relationships. Comparison of the results obtained from the FEA simulations and the experimental tests showed that the Ogden model did not fit the soft tissue material indentation data well, while the Yeoh model gave the best representation of the soft tissue mechanical behavior under indentation. Compared with the hyperelastic models, the linear elastic model also showed significant errors. In addition, the normal and shear stress distributions on the surface of the soft tissue model were obtained. The effect of friction in compression testing and the influence of soft tissue stiffness and testing boundary conditions were also analyzed. All of this has contributed to the overall goal of designing a good-fit socket for individual above-knee amputees.
Keywords: above knee amputee, finite element simulation, hyperelastic model, prosthetic socket
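A minimal sketch of fitting Yeoh model constants by least squares, assuming an incompressible material under uniaxial compression and synthetic stress-stretch points rather than the study's silicone measurements:

```python
# Hedged sketch: fitting Yeoh model constants to uniaxial compression data
# (synthetic stress-stretch points; not the study's silicone measurements).
import numpy as np
from scipy.optimize import curve_fit

def yeoh_uniaxial_stress(stretch, c10, c20, c30):
    """Cauchy stress for an incompressible Yeoh material in uniaxial loading."""
    i1 = stretch**2 + 2.0 / stretch
    dw_di1 = c10 + 2.0 * c20 * (i1 - 3.0) + 3.0 * c30 * (i1 - 3.0) ** 2
    return 2.0 * (stretch**2 - 1.0 / stretch) * dw_di1

stretch = np.linspace(0.7, 0.98, 10)                       # compression: stretch < 1
stress = yeoh_uniaxial_stress(stretch, 0.02, 0.01, 0.005)  # synthetic "data" (MPa)

params, _ = curve_fit(yeoh_uniaxial_stress, stretch, stress, p0=[0.01, 0.01, 0.01])
print("fitted C10, C20, C30 (MPa):", params)
```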
Procedia PDF Downloads 205
183 Phorbol 12-Myristate 13-Acetate (PMA)-Differentiated THP-1 Monocytes as a Validated Microglial-Like Model in Vitro
Authors: Amelia J. McFarland, Andrew K. Davey, Shailendra Anoopkumar-Dukie
Abstract:
Microglia are the resident macrophage population of the central nervous system (CNS), contributing to both innate and adaptive immune responses and brain homeostasis. Activation of microglia occurs in response to a multitude of pathogenic stimuli in their microenvironment; this induces morphological and functional changes, resulting in a state of acute neuroinflammation which facilitates injury resolution. Adequate microglial function is essential for the health of the neuroparenchyma, with microglial dysfunction implicated in numerous CNS pathologies. Given the critical role that these macrophage-derived cells play in CNS homeostasis, there is a high demand for microglial models suitable for use in neuroscience research. The isolation of primary human microglia, however, is both difficult and costly, with microglial activation an unwanted but inevitable result of the extraction process. Consequently, there is a need for the development of alternative experimental models which exhibit the morphological, biochemical, and functional characteristics of human microglia without the difficulties associated with primary cell lines. In this study, our aim was to evaluate whether THP-1 human peripheral blood monocytes would display microglial-like qualities following induced differentiation and, therefore, be suitable for use as surrogate microglia. To achieve this aim, THP-1 human peripheral blood monocytes from acute monocytic leukaemia were differentiated with a range of phorbol 12-myristate 13-acetate (PMA) concentrations (50-200 nM) using two different protocols: a 5-day continuous PMA exposure, or a 3-day continuous PMA exposure followed by a 5-day rest in normal media. In each protocol and at each PMA concentration, microglial-like cell morphology was assessed through crystal violet staining and the presence of the CD-14 microglial/macrophage cell surface marker. Lipopolysaccharide (LPS) from Escherichia coli (055:B5) was then added at a range of concentrations from 0-10 mcg/mL to activate the PMA-differentiated THP-1 cells. Functional microglial-like behavior was evaluated by quantifying the release of prostaglandin (PG)-E2 and the pro-inflammatory cytokines interleukin (IL)-1β and tumour necrosis factor (TNF)-α using mediator-specific ELISAs. Furthermore, production of global reactive oxygen species (ROS) and nitric oxide (NO) was determined fluorometrically using dichlorodihydrofluorescein diacetate (DCFH-DA) and diaminofluorescein diacetate (DAF-2-DA), respectively. Following PMA treatment, it was observed that both differentiation protocols resulted in cells displaying distinct microglial morphology from 10 nM PMA. Activation of differentiated cells using LPS significantly augmented IL-1β, TNF-α, and PGE2 release at all LPS concentrations under both differentiation protocols. Similarly, a significant increase in DCFH-DA and DAF-2-DA fluorescence was observed, indicative of increases in ROS and NO production. For all endpoints, the 5-day continuous PMA treatment protocol yielded significantly higher mediator levels than the 3-day treatment and 5-day rest protocol. Our data, therefore, suggest that the differentiation of THP-1 human monocyte cells with PMA yields a homogeneous microglial-like population which, following stimulation with LPS, undergoes activation to release a range of pro-inflammatory mediators associated with microglial activation. Thus, the use of PMA-differentiated THP-1 cells represents a suitable microglial model for in vitro research.
Keywords: differentiation, lipopolysaccharide, microglia, monocyte, neuroscience, THP-1
Procedia PDF Downloads 388
182 Comparison of Machine Learning-Based Models for Predicting Streptococcus pyogenes Virulence Factors and Antimicrobial Resistance
Authors: Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Diego Santibañez Oyarce, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
Streptococcus pyogenes is a gram-positive bacterium involved in a wide range of diseases and is a major human-specific bacterial pathogen. In Chile, the 'Ministerio de Salud' declared an alert this year due to an increase in strains throughout the year. This increase can be attributed to a multitude of factors, including antimicrobial resistance (AMR) and virulence factors (VF). Understanding these VF and AMR is crucial for developing effective strategies and improving public health responses. Moreover, experimental identification and characterization of these pathogenic mechanisms are labor-intensive and time-consuming. Therefore, new computational methods are required to provide robust techniques for accelerating this identification. Advances in machine learning (ML) algorithms represent an opportunity to refine and accelerate the discovery of VF associated with Streptococcus pyogenes. In this work, we evaluate the accuracy of various machine learning models in predicting the virulence factors and antimicrobial resistance of Streptococcus pyogenes, with the objective of providing new methods for identifying the pathogenic mechanisms of this organism. Our comprehensive approach involved the download of 32,798 GenBank files of S. pyogenes from the NCBI dataset, coupled with the incorporation of data from the Virulence Factor Database (VFDB) and the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. These datasets provided labeled examples of both virulent and non-virulent genes, enabling a robust foundation for feature extraction and model training. We employed preprocessing, characterization, and feature extraction techniques on primary nucleotide/amino acid sequences and selected the optimal features for model training. The feature set was constructed using sequence-based descriptors (e.g., k-mers and one-hot encoding) and functional annotations based on database prediction. The ML models compared are logistic regression, decision trees, support vector machines, and neural networks, among others. The results of this work show some differences in accuracy between the algorithms; these differences allow us to identify aspects that represent unique opportunities for a more precise and efficient characterization and identification of VF and AMR. This comparative analysis underscores the value of integrating machine learning techniques in predicting S. pyogenes virulence and AMR, offering potential pathways for more effective diagnostic and therapeutic strategies. Future work will focus on incorporating additional omics data, such as transcriptomics, and exploring advanced deep learning models to further enhance predictive capabilities.
Keywords: antibiotic resistance, Streptococcus pyogenes, virulence factors, machine learning
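A minimal sketch of the k-mer-based pipeline described above: frequency features from nucleotide sequences feeding a logistic-regression classifier (toy sequences and labels, not the VFDB/CARD training data):

```python
# Hedged sketch: k-mer feature extraction and a logistic-regression classifier
# for virulence-gene prediction. Sequences and labels are toy examples.
from itertools import product
import numpy as np
from sklearn.linear_model import LogisticRegression

K = 3
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
INDEX = {kmer: i for i, kmer in enumerate(KMERS)}

def kmer_counts(seq):
    """Normalized k-mer frequency vector for a nucleotide sequence."""
    vec = np.zeros(len(KMERS))
    for i in range(len(seq) - K + 1):
        if seq[i:i + K] in INDEX:
            vec[INDEX[seq[i:i + K]]] += 1
    return vec / max(1, len(seq) - K + 1)

seqs = ["ATGGCGTACGTTAGC", "ATGCCCGGGAAATTT", "ATGTTTAAACCCGGG", "ATGGCGGCGTACGTA"]
labels = [1, 0, 0, 1]  # 1 = virulence factor, 0 = non-virulent (toy labels)

X = np.array([kmer_counts(s) for s in seqs])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("predicted:", clf.predict(X))
```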
Procedia PDF Downloads 30
181 Effect of Toxic Metals Exposure on Rat Behavior and Brain Morphology: Arsenic, Manganese
Authors: Tamar Bikashvili, Tamar Lordkipanidze, Ilia Lazrishvili
Abstract:
Heavy metals remain one of the most serious environmental problems due to their toxic effects. The effect of arsenic and manganese compounds on rat behavior and neuromorphology was studied. Wistar rats were assigned to four groups: rats in the control group were given regular water, while rats in the other groups drank water with a final manganese concentration of 10 mg/L (group A) or 20 mg/L (group B), or a final arsenic concentration of 68 mg/L (group C), respectively, for a month. To study exploratory and anxiety behavior, rats were tested in the open field; aggressive performance was evaluated in the home cage; and a multi-branched maze was used to estimate learning and memory status. A statistically significant increase in motor and orienting-searching activity in the experimental groups was revealed by the open field test, expressed as increases in the number of lines crossed, rearing, and hole reflexes. The obtained results indicated a suppression of fear in rats exposed to manganese; specifically, this was estimated by the frequency of entering the central part of the open field. Experiments revealed that 30-day exposure to 10 mg/L manganese did not stimulate aggressive behavior in rats, while upon exposure to the higher dose (20 mg/L), 37% of initially non-aggressive animals manifested aggressive behavior. Furthermore, 25% of rats were extremely aggressive. The obtained data support the hypothesis that excess manganese in the body is one of the immediate causes of enhancement of interspecific predatory aggressive and violent behavior in rats. It was also discovered that manganese intoxication produces non-reversible severe learning disability and insignificant, reversible memory disturbances. Studies of rodents exposed to arsenic also revealed changes in the learning process. As is known, the distribution of metal ions differs in various brain regions. The principal manganese accumulation was observed in the hippocampus and in the neocortex, while arsenic was predominantly accumulated in the nucleus accumbens, striatum, and cortex. These brain regions play an important role in the regulation of emotional state and motor activity. Histopathological analyses of brain sections illustrated two morphologically distinct altered phenotypes of neurons: (1) shrunken cells with indications of apoptosis, in which the nucleus and cytoplasm were very difficult to distinguish, while the integrity of the neuronal cytoplasm was not disturbed; and (2) swollen cells with indications of necrosis. A pyknotic nucleus, plasma membrane disruption, and cytoplasmic vacuoles were observed in swollen neurons, and they were surrounded by activated gliocytes. It is worth mentioning that in the cortex the majority of damaged neurons were apoptotic, while in the subcortical nuclei neurons were mainly necrotic. Ultrastructural analyses demonstrated that all cell types in the cortex and the nucleus caudatus exhibited destroyed mitochondria, widened profiles of the neuronal vacuolar system, an increased number of lysosomes, and degeneration of axonal endings.
Keywords: arsenic, manganese, behavior, learning, neuron
Procedia PDF Downloads 359
180 Reinventing Business Education: Filling the Knowledge Gap on the Verge of the 4th Industrial Revolution
Authors: Elena Perepelova
Abstract:
As the world approaches the 4th industrial revolution, income inequality has become one of the major societal concerns. Displacement of workers by technology is becoming a reality, and in return, new skills and competencies are required. More important than ever, education needs to help individuals understand the wider world around them and make global connections. The author argues for the necessity of incorporating business, economics, and finance studies as a part of primary education and of offering access to business education to the general population, with the primary objective of understanding how the world functions. The paper offers a fresh look at existing business theory through an innovative program called 'Usefulnomics'. Realizing that the subjects of economics, finance, and business are perceived as overwhelming by a large part of the population, the author has taken a holistic approach and created a program that simplifies the definitions of existing concepts and shifts from the traditional breakdown into subjects and specialties to a teaching method based exclusively on real-life example case studies and group debates, in order to better grasp the concepts and put them into context. The paper findings are the result of a two-year project and experimental work with students from the UK, USA, Malaysia, Russia, and Spain. The author conducted extensive research through online and in-person classes and workshops as well as in-depth interviews of primary and secondary grade students to assess their understanding of what a business is, how businesses operate, and the role businesses play in their communities. The findings clearly indicate that students of all ages often understood business concepts and processes only in an intuitive way, which resulted in misconceptions and gaps in knowledge. While knowledge gaps were easier to identify and correct in primary school students, as students' age increased, the learning process became distorted by career choices, political views, and the students' actual (or perceived) economic status. While secondary school students recognized more concepts, their real understanding was often on par with that of upper primary school students. The research has also shown that a lack of correct vocabulary created a strong barrier to communication and to real-life application or further learning. Based on these findings, each key business concept was practiced and put into context with small groups of students in order to design content and a format that would be well accepted and understood by the target group. As a result, the final learning program package was based on case studies from modern daily life and used a wide range of examples: from popular brands and well-known companies to basic commodities. In the final stage, the content and format were put into practice in larger classrooms. The author would like to share the key findings from the research and the resulting learning program, as well as present new ideas on how the program could be further enriched and adapted so that schools and organizations can deliver it.
Keywords: business, finance, economics, lifelong learning, XXI century skills
Procedia PDF Downloads 118
179 Bacterial Community Diversity in Soil under Two Tillage Systems
Authors: Dalia Ambrazaitienė, Monika Vilkienė, Danute Karcauskienė, Gintaras Siaudinis
Abstract:
The soil is a complex ecosystem that is part of our biosphere. The ability of soil to provide ecosystem services is dependent on microbial diversity. Tillage is one of the major factors that affect soil properties. No-till systems or shallow ploughless tillage are the opposite of traditional deep ploughing: no-tillage systems, for instance, increase soil organic matter by reducing mineralization rates and stimulating litter concentrations in the top soil layer, whereas deep ploughing increases the biological activity of the arable soil layer and reduces the incidence of weeds. The role of soil organisms is central to soil processes. Although the number of microbial species in soil is still being debated, the metagenomic approach to estimating microbial diversity predicts about 2000 – 18 000 bacterial genomes in 1 g of soil. Despite the key role of bacteria in soil processes, there is still a lack of information about the bacterial diversity of soils as affected by tillage practices. This study focused on metagenomic analysis of bacterial diversity in long-term experimental plots of Dystric Epihypogleyic Albeluvisols in the western part of Lithuania. The experiment was set up in 2013 and had a split-plot design in which the whole-plot treatments were laid out in a randomized design with three replicates. The whole-plot treatments consisted of two tillage methods: deep ploughing (22-25 cm) (DP) and ploughless tillage (7-10 cm) (PT). Three subsamples (0-20 cm) were collected on October 22, 2015 for each of the three replicates. Subsamples from the DP and PT systems were pooled treatment-wise to make two composite samples, one representing deep ploughing (DP) and the other ploughless tillage (PT). Genomic DNA was extracted from approximately 200 mg of field-moist soil per sample by using the D6005 Fungal/Bacterial Miniprep set (Zymo Research®) following the manufacturer's instructions. To determine bacterial diversity and community composition, we employed a culture-independent approach of high-throughput pyrosequencing of the 16S rRNA gene. Metagenomic sequencing was performed on the Illumina MiSeq platform at the BaseClear company. The microbial component of soil plays a crucial role in the cycling of nutrients in the biosphere. Our study was a preliminary attempt at observing bacterial diversity in soil under two common but contrasting tillage practices. The number of sequenced reads obtained for PT (161,917) was higher than for DP (131,194). The 10 most abundant genera in the soil samples were the same (Arthrobacter, Candidatus Saccharibacteria, Actinobacteria, Acidobacterium, Mycobacterium, Bacillus, Alphaproteobacteria, Longilinea, Gemmatimonas, Solirubrobacter); only their shares of the community differed. In DP, Arthrobacter and Acidobacterium constituted 8.4% and 2.5% of the whole community, respectively, whereas in PT they accounted for just 5.8% and 2.1%. Nocardioides and Terrabacter were observed only in PT. This work was supported by the project VP1-3.1-ŠMM-01-V-03-001 NKPDOKT and the National Science Program: The effect of long-term, different-intensity management of resources on the soils of different genesis and on other components of the agro-ecosystems [grant number SIT-9/2015], funded by the Research Council of Lithuania.
Keywords: deep ploughing, metagenomics, ploughless tillage, soil community analysis
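A minimal sketch of the genus-level relative-abundance comparison reported above, from hypothetical classified read counts (not the study's counts):

```python
# Hedged sketch: comparing genus-level relative abundances between the two
# tillage treatments from classified read counts (toy numbers, not the study's).
import pandas as pd

counts = pd.DataFrame(
    {"DP": [11000, 3300, 0, 0], "PT": [9400, 3400, 250, 180]},
    index=["Arthrobacter", "Acidobacterium", "Nocardioides", "Terrabacter"],
)
rel = 100 * counts / counts.sum()  # percent of classified reads per treatment
print(rel.round(1))                # genera absent in DP appear as 0.0%
```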
Procedia PDF Downloads 246
178 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar
Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo
Abstract:
The future of solar energy in Qatar is evolving steadily. Hence, high-quality spatial solar radiation data is of the utmost importance for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data is developed with physical and statistical models. Ground data is collected by solar radiation measurement stations. The ground data is of high quality; however, it is limited to distributed point locations, with high installation and maintenance costs for the ground stations. On the other hand, satellite solar radiation data is continuous and available across geographical locations, but it is relatively less accurate than ground data. To utilize the advantages of both kinds of data, a product has been developed here which provides spatial continuity and higher accuracy than either dataset alone. The popular satellite database NSRDB (National Solar Radiation Database; PSM V3 model, spatial resolution: 4 km) was chosen here for merging with ground-measured solar radiation measurements in Qatar. The spatial distribution of ground solar radiation measurement stations is comprehensive in Qatar, with a network of 13 ground stations. The monthly average of the daily total global horizontal irradiation (GHI) component from ground and satellite data is used for the error analysis. Normalized root mean square error (NRMSE) values of 3.31%, 6.53%, and 6.63% were observed for October, November, and December 2019, respectively, when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS, ESRI. The workflow of the algorithm is based on a combination of regression and kriging methods. A regression model (OLS, ordinary least squares) is fitted between the ground and NSRDB data points. A semi-variogram model is fitted to the experimental semi-variogram obtained from the residuals. The kriging residuals obtained after fitting the semi-variogram model were added to the values predicted by the regression model from the NSRDB data to obtain the final predicted values. The NRMSE values obtained after merging are 1.84%, 1.28%, and 1.81% for October, November, and December 2019, respectively. One more explanatory variable, the ground elevation, has been incorporated in the regression and kriging methods to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps have been created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% have been observed for October, November, and December 2019, respectively. The proposed merging method has proven to be highly accurate. An additional method is also proposed here: to generate calibrated maps using the regression and kriging model, and then to use the calibrated model to generate solar radiation maps from the explanatory variable alone when not enough historical ground data is available for long-term analysis. The NRMSE values obtained after comparison of the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019, respectively.
Keywords: global horizontal irradiation, GIS, empirical Bayesian kriging regression prediction, NSRDB
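A minimal sketch of the regression-kriging workflow described above (an OLS fit of ground GHI on satellite GHI and elevation, ordinary kriging of the residuals, and an NRMSE check); coordinates and values are hypothetical, and the pykrige package stands in for the ArcGIS implementation:

```python
# Hedged sketch of the regression-kriging merge: OLS regression of ground GHI on
# satellite GHI (and elevation), followed by ordinary kriging of the residuals,
# which are added back to the regression prediction. Requires pykrige.
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

# 13 ground stations: lon, lat, elevation (m), monthly-mean daily GHI (kWh/m2)
rng = np.random.default_rng(0)
lon, lat = rng.uniform(50.8, 51.6, 13), rng.uniform(24.5, 26.2, 13)
elev = rng.uniform(0, 100, 13)
ghi_sat = rng.uniform(5.0, 6.5, 13)                  # NSRDB values at the stations
ghi_ground = ghi_sat * 0.97 + 0.1 + rng.normal(0, 0.05, 13)

X = np.column_stack([ghi_sat, elev])
ols = LinearRegression().fit(X, ghi_ground)
residuals = ghi_ground - ols.predict(X)

krig = OrdinaryKriging(lon, lat, residuals, variogram_model="spherical")
res_pred, _ = krig.execute("points", lon, lat)       # here: back at the stations

# Kriging interpolates exactly at the stations, so in practice the accuracy
# would be assessed by cross-validation rather than at the training points.
merged = ols.predict(X) + res_pred
nrmse = np.sqrt(np.mean((merged - ghi_ground) ** 2)) / ghi_ground.mean() * 100
print(f"NRMSE after merging: {nrmse:.2f}%")
```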
Procedia PDF Downloads 89
177 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments
Authors: David X. Dong, Qingming Zhang, Meng Lu
Abstract:
Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, it is desirable to develop an accurate and economical method to monitor nitrites in the environment. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites, and low-cost light-emitting diodes (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths to interfering ions, thus achieving precise and reliable measurements in the presence of various interference ions. The measured absorbance data is input to the trained model, which provides a nitrite concentration prediction for the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, each LED providing narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light. This simple optical design allows measuring the absorbance data of the sample at the three wavelengths. To train the regression model, the absorbances of nitrite ions and their combinations with various interference ions are first obtained at the three UV wavelengths using a conventional spectrophotometer. Then, the spectrophotometric data are input to different regression algorithm models, which are trained and evaluated for high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables instantaneous nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative error to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
Keywords: optical sensor, regression model, nitrites, water quality
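A minimal sketch of the regression step described above, mapping absorbances at the three UV wavelengths to nitrite concentration (synthetic calibration data, not the sensor's measurements):

```python
# Hedged sketch: predicting nitrite concentration from absorbances at the three
# UV wavelengths (295, 310, 357 nm) with a linear regression model.
# The calibration data below are synthetic, not the sensor's.
import numpy as np
from sklearn.linear_model import LinearRegression

conc = np.array([0.1, 1.0, 5.0, 10.0, 50.0, 100.0])        # nitrite, ppm
A = np.column_stack([                                       # absorbance per band
    0.010 * conc + 0.002,      # 295 nm
    0.008 * conc + 0.001,      # 310 nm
    0.012 * conc + 0.003,      # 357 nm
]) + np.random.default_rng(1).normal(0, 0.005, (6, 3))      # measurement noise

model = LinearRegression().fit(A, conc)
sample = np.array([[0.052, 0.041, 0.063]])                  # new absorbance reading
print(f"predicted nitrite: {model.predict(sample)[0]:.1f} ppm")
```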
Procedia PDF Downloads 72
176 Examining the Impact of De-Escalation Training among Emergency Department Nurses
Authors: Jonathan D. Recchi
Abstract:
Introduction: Workplace violence is a major concern for nurses throughout the United States and is a rising occupational health hazard that has been exacerbated by both the Covid-19 pandemic and increasing patient and family member incivility. De-escalation training has been found to be an evidence-based tool for emergency department nurses to help avoid or mitigate high-risk situations that could lead to workplace violence. Many healthcare organizations either do not provide de-escalation training to their staff or provide it only sparingly, such as during new employee orientation. There is limited research in the literature on the psychological benefits of de-escalation training. Purpose: The purpose of this study is to determine whether there are psychological and organizational advantages to providing emergency department nurses with de-escalation training. Equipping emergency department nurses with the skills essential to de-escalate violent or potentially violent patients may help prevent physical, mental, and/or psychological damage to the nurse as a result of violence and/or threatening acts. The hypothesis is that providing de-escalation training to emergency department nurses will lead to increased nurse confidence in dealing with aggressive patients, increased resiliency, increased professional quality of life, and increased intention to stay with their current organization. This study aims to show that organizations would benefit from providing de-escalation training to all nurses operating in high-risk areas on a regular basis. Significance: Showing the psychological benefits of providing evidence-based de-escalation training can help healthcare organizations retain a more resilient and prepared workforce. Method: This study uses a pre-experimental cross-sectional pre-/post-test design with a convenience sample of emergency department registered nurses employed across Jefferson Health Northeast (Jefferson Torresdale, Jefferson Bucks, and Jefferson Frankford). Inclusion criteria include registered nurses who work full or part-time, with 51% or more of their clinical time spent in direct clinical care. Excluded from participation are registered nurses in orientation, per-diem nurses, temporary and/or travel nurses, nurses who spend less than 51% of their time in direct patient care, and nurses who have received de-escalation training within the past two years. This study uses the Connor-Davidson Resilience Scale 10 (CD-RISC-10), the Clinician Confidence in Coping with Patient Aggression Scale, the Press Ganey Intention to Stay question, and the Professional Quality of Life Scale. Results: A paired t-test will be used to analyze the mean scores of the three scales and one question pre- and post-intervention to determine whether there is a statistically significant difference in RN resiliency, confidence in coping with patient aggression, intention to stay, and professional quality of life. Discussion and Conclusions: Upon completion, the outcomes of this intervention will show the importance of providing evidence-based de-escalation training to all nurses operating within the emergency department.
Keywords: de-escalation, nursing, emergency department, workplace violence
Procedia PDF Downloads 103
175 Nonequilibrium Effects in Photoinduced Ultrafast Charge Transfer Reactions
Authors: Valentina A. Mikhailova, Serguei V. Feskov, Anatoly I. Ivanov
Abstract:
In the last decade, nonequilibrium charge transfer has attracted considerable interest from the scientific community. Examples of such processes are charge recombination in excited donor-acceptor complexes and intramolecular electron transfer from the second excited electronic state. In these reactions the charge transfer proceeds predominantly in the nonequilibrium mode. In excited donor-acceptor complexes the nuclear nonequilibrium is created by the pump pulse; in intramolecular electron transfer from the second excited electronic state, it is created by the forward electron transfer. The kinetics of these nonequilibrium reactions demonstrate a number of peculiar properties. The most important of them are: (i) the absence of the Marcus normal region in the free energy gap law for charge recombination in excited donor-acceptor complexes, (ii) the extremely low quantum yield of the thermalized charge-separated state in ultrafast charge transfer from the second excited state, (iii) the nonexponential charge recombination dynamics in excited donor-acceptor complexes, and (iv) the dependence of the charge transfer rate constant on the excitation pulse frequency. This report shows that most of these kinetic features can be well reproduced in the framework of a stochastic point-transition multichannel model. The model involves an explicit description of nonequilibrium excited-state formation by the pump pulse and accounts for the reorganization of intramolecular high-frequency vibrational modes, for their relaxation, and for solvent relaxation. The model is able to quantitatively reproduce the complex nonequilibrium charge transfer kinetics observed in modern experiments. Interpreting the nonequilibrium effects from a unified point of view in terms of the multichannel point-transition stochastic model makes it possible to see the similarities and differences of the electron transfer mechanism in various molecular donor-acceptor systems and to formulate general regularities inherent in these phenomena. The nonequilibrium effects in photoinduced ultrafast charge transfer studied over the last 10 years are analyzed. Methods of suppressing ultrafast charge recombination, as well as similarities and dissimilarities of the electron transfer mechanism in different molecular donor-acceptor systems, are discussed. The extremely low quantum yield of the thermalized charge-separated state observed in ultrafast charge transfer from the second excited state in the complex consisting of 1,2,4-trimethoxybenzene and tetracyanoethylene in acetonitrile solution directly demonstrates that the effectiveness of the nonequilibrium channel can be close to unity. This experimental finding supports the idea that nonequilibrium charge recombination in excited donor-acceptor complexes can also be so effective that the fraction of thermalized complexes is negligible. The regularities inherent in equilibrium and nonequilibrium reactions are discussed, and their fundamental differences are analyzed, namely the opposite dependences of the charge transfer rates on the dynamical properties of the solvent: an increase in solvent viscosity decreases the thermal rate but increases the nonequilibrium rate. The dependences of the rates on the solvent reorganization energy and the free energy gap can also differ considerably. This work was supported by the Russian Science Foundation (Grant No. 16-13-10122).Keywords: Charge recombination, higher excited states, free energy gap law, nonequilibrium
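For orientation, the free energy gap law referred to in point (i) derives from the thermal (equilibrium) Marcus picture. A standard single-mode form of the nonadiabatic electron transfer rate constant, given here as textbook background rather than as the authors' multichannel model, is

```latex
k_{\mathrm{ET}} = \frac{2\pi}{\hbar}\,|V|^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
\exp\!\left[-\frac{(\Delta G + \lambda)^{2}}{4\lambda k_{B}T}\right]
```

where V is the electronic coupling, λ the reorganization energy, and ΔG the reaction free energy; the "normal region" is the regime −ΔG < λ, whose absence in nonequilibrium recombination is the peculiarity noted above.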
Procedia PDF Downloads 325
174 The Effects of Aging on Visuomotor Behaviors in Reaching
Authors: Mengjiao Fan, Thomson W. L. Wong
Abstract:
It is unavoidable that older adults may have to deal with aging-related motor problems, and aging is highly likely to affect motor learning and control as well. For example, older adults may suffer from poor motor function and reduced quality of life due to age-related eye changes; these adverse changes in vision result in impaired movement automaticity. Reaching is a fundamental component of various complex movements and is therefore a useful task for exploring changes and adaptation in visuomotor behaviors. The current study aims to explore how aging affects visuomotor behaviors by comparing motor performance and gaze behaviors between two age groups (i.e., young and older adults). Visuomotor behaviors in reaching, under conditions providing or blocking online visual feedback (simulated visual deficiency), were investigated in 60 healthy young adults (mean age = 24.49 years, SD = 2.12) and 37 older adults (mean age = 70.07 years, SD = 2.37) with normal or corrected-to-normal vision. Participants in each group were randomly allocated into two subgroups. Subgroup 1 was provided with online visual feedback of the hand-controlled mouse cursor, whereas in subgroup 2 visual feedback was blocked to simulate visual deficiency. The experimental task required participants to complete 20 reaching trials to a target by controlling the mouse cursor on the computer screen. In all 20 trials, the start position was at the center of the screen, and the target appeared at a position randomly selected by a tailor-made computer program. Primary outcomes of motor performance and gaze behavior data were recorded by the EyeLink II (SR Research, Canada). The results suggested that aging significantly affects reaching performance in both visual feedback conditions. In both age groups, blocking online visual feedback of the cursor resulted in longer hand movement time (p < .001), longer reaching distance away from the target center (p < .001), and poorer reaching motor accuracy (p < .001). Concerning gaze behaviors, blocking online visual feedback increased the first fixation duration in young adults (p < .001) but decreased it in older adults (p < .001). Besides, under the condition providing online visual feedback of the cursor, older adults showed a longer fixation dwell time on the target throughout reaching than young adults (p < .001), although the effect was not significant under the blocked visual feedback condition (p = .215). Therefore, the results suggest that different levels of visual feedback during movement execution can affect gaze behaviors differently in older and young adults, and differential effects of aging on visuomotor behaviors appear under the two visual feedback patterns (i.e., blocking or providing online visual feedback of the hand-controlled cursor in reaching). Several specific gaze behaviors among the older adults were found, which imply that blocking of visual feedback may induce extra perceptual load during movement execution and that age-related visual degeneration might further deteriorate the situation. This provides insight for the future development of potential rehabilitative training methods (e.g., well-designed errorless training) to enhance visuomotor adaptation in the aging population, improving movement automaticity by facilitating compensation for visual degeneration.Keywords: aging effect, movement automaticity, reaching, visuomotor behaviors, visual degeneration
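To make the outcome measures concrete, here is an illustrative sketch of how movement time and endpoint error could be computed per trial and compared between feedback conditions; the data layout and numbers are hypothetical, not the authors' pipeline.

```python
# Illustrative sketch (hypothetical data, not the study's analysis code).
import numpy as np
from scipy import stats

def trial_metrics(t, xy, target):
    """t: (n,) timestamps; xy: (n, 2) cursor path; target: (2,) position."""
    movement_time = t[-1] - t[0]                      # total reach duration
    endpoint_error = np.linalg.norm(xy[-1] - target)  # distance from target center
    return movement_time, endpoint_error

# made-up movement times for 20 trials under each feedback condition
rng = np.random.default_rng(0)
mt_feedback = rng.normal(0.9, 0.10, 20)   # online visual feedback provided
mt_blocked = rng.normal(1.2, 0.15, 20)    # online visual feedback blocked

# independent-samples t-test, as in the group comparisons reported above
t_stat, p_val = stats.ttest_ind(mt_blocked, mt_feedback)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```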
Procedia PDF Downloads 312
173 Optimal-Based Structural Vibration Attenuation Using Nonlinear Tuned Vibration Absorbers
Authors: Pawel Martynowicz
Abstract:
Vibrations are a crucial problem for slender structures such as towers, masts, chimneys, wind turbines, bridges, high buildings, etc., which is why most of them are equipped with vibration attenuation or fatigue reduction solutions. In this work, a slender structure (i.e., a wind turbine tower-nacelle model) equipped with nonlinear, semiactive tuned vibration absorber(s) is analyzed. For the purposes of this study, magnetorheological (MR) dampers are used as semiactive actuators. Several optimal-based approaches to structural vibration attenuation are investigated against the standard 'ground-hook' law and passive tuned vibration absorber implementations. The common approach to optimal control of nonlinear systems is offline computation of the optimal solution; however, the open-loop control so determined suffers from a lack of robustness to uncertainties (e.g., unmodelled dynamics, perturbations of external forces or initial conditions), and thus perturbation control techniques are often used. However, proper linearization may be an issue for highly nonlinear systems with implicit relations between state, co-state, and control. The main contribution of the author is the development, as well as numerical and experimental verification, of Pontryagin-maximum-principle-based vibration control concepts that produce the actuator control input directly (not the demanded force); the force-tracking algorithm, a source of control inaccuracy, is thus entirely omitted. These concepts, including one-step optimal control, quasi-optimal control, and an optimal-based modified 'ground-hook' law, can be directly implemented in online, real-time feedback control for periodic (or semi-periodic) disturbances with invariant or time-varying parameters, as well as for non-periodic, transient, or random disturbances, which is a limitation of some other known solutions. No offline calculation, excitation/disturbance assumption, or vibration frequency determination is necessary; moreover, all of the nonlinear actuator (MR damper) force constraints, i.e., no active forces, lower and upper saturation limits, hysteresis-type dynamics, etc., are embedded in the control technique, so the solution is optimal or suboptimal for the assumed actuator, respecting its limitations. Depending on the selected method variant, a moderate or decisive reduction in the computational load is possible compared to other methods of nonlinear optimal control, while assuring the quality and robustness of the vibration reduction system and addressing multiple operational aspects, such as minimization of the amplitude of the deflection and acceleration of the vibrating structure, its potential and/or kinetic energy, the required actuator force, the control input (e.g., electric current in the MR damper coil), and/or the stroke amplitude. The developed solutions are characterized by high vibration reduction efficiency: the obtained maximum values of the dynamic amplification factor are close to 2.0, while for the best of the passive systems these values exceed 3.5.Keywords: magnetorheological damper, nonlinear tuned vibration absorber, optimal control, real-time structural vibration attenuation, wind turbines
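As background for the baseline that the optimal-based variants are compared against, a minimal on-off version of the standard 'ground-hook' law for a semiactive damper might look as follows; this is an illustrative sketch, with variable names and current levels assumed rather than taken from the author's implementation.

```python
def groundhook_current(v_struct: float, v_rel: float,
                       i_max: float = 1.0, i_min: float = 0.0) -> float:
    """On-off ground-hook law for an MR damper (illustrative sketch).

    v_struct: absolute velocity of the protected (primary) structure
    v_rel:    relative velocity across the damper
    Returns the commanded coil current: high damping is demanded only when
    the damper force (which opposes v_rel) also opposes v_struct.
    """
    return i_max if v_struct * v_rel > 0.0 else i_min
```

In the optimal-based modified variant described above, this simple switching condition is replaced by one derived from the maximum principle, so the commanded current respects the MR damper's saturation and hysteresis constraints.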
Procedia PDF Downloads 124
172 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding
Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta
Abstract:
Chimneys are generally tall, slender structures with circular cross-sections, which makes them highly prone to wind forces. Wind exerts pressure on the wall of a chimney, producing unwanted forces, and vortex-induced oscillation is one such excitation that can lead to failure. Vortex-induced oscillation of chimneys is therefore of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects. Comparatively few prototype measurement data have been recorded to verify these models, and for this reason theoretical models developed with the help of experimental laboratory data are used to analyze chimneys for vortex-induced forces. This calls for a reliability analysis of the predicted responses of chimneys to the vortex shedding phenomenon. Although a considerable literature exists on the vortex-induced oscillation of chimneys, including code provisions, reliability analysis of chimneys against failure caused by vortex shedding is scanty. In the present study, a reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are hence ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency-domain spectral analysis using a matrix approach, for which both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for aero-elastic effects. The double-barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement. Assuming the annual distribution of the mean wind velocity to be a Gumbel Type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement is determined, and the reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a wall thickness of 0.3 m is taken as an illustrative example. The terrain condition is assumed to correspond to a city center. The expression for the PSDF of the vortex shedding force is taken from Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration
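For reference, the Gumbel Type-I distribution assumed for the annual mean wind velocity, and the standard way a conditional crossing probability is folded into an annual failure probability, can be written as follows (textbook forms; the location parameter u and scale parameter β are left symbolic):

```latex
F_V(v) = \exp\!\left[-\exp\!\left(-\frac{v-u}{\beta}\right)\right],
\qquad
P_f = \int_0^\infty P\!\left(\text{threshold crossing}\mid v\right) f_V(v)\,\mathrm{d}v
```

where f_V = dF_V/dv is the Gumbel density and the conditional crossing probability is supplied here by Vanmarcke's double-barrier formulation.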
Procedia PDF Downloads 159
171 Numerical Optimization of Cooling System Parameters for Multilayer Lithium Ion Cell and Battery Packs
Authors: Mohammad Alipour, Ekin Esen, Riza Kizilel
Abstract:
Lithium-ion batteries are a commonly used type of rechargeable battery because of their high specific energy and specific power. With the growing popularity of electric vehicles and hybrid electric vehicles, increasing attention has been paid to rechargeable lithium-ion batteries. However, safety problems, high cost, and poor performance at low ambient temperatures and high current rates are big obstacles to the commercial utilization of these batteries. With proper thermal management, most of these limitations can be eliminated. The temperature profile of Li-ion cells plays a significant role in the performance, safety, and cycle life of the battery; even a small temperature gradient can lead to a great loss in the performance of battery packs. In recent years, numerous researchers have been working on new techniques to implement better thermal management of Li-ion batteries, the main objective of which is keeping the battery cells within an optimum temperature range. Commercial Li-ion cells are composed of several electrochemical layers, each consisting of a negative current collector, negative electrode, separator, positive electrode, and positive current collector. Many researchers, however, have adopted a single-layer cell model to save computing time, on the hypothesis that the thermal conductivity of the layer elements is high and heat transfer fast enough that the cell can be modeled as one thick layer instead of several thin ones. In previous work, we showed that a single-layer model is insufficient to simulate the thermal behavior and temperature nonuniformity of high-capacity Li-ion cells, and we studied the effects of the number of layers on the thermal behavior of Li-ion batteries. In this work, the thermal and electrochemical behavior of a LiFePO₄ battery is first modeled with a 3D multilayer cell model. The model is validated against experimental measurements at different current rates and ambient temperatures, and the real-time heat generation rate is studied at different discharge rates. The results show a non-uniform temperature distribution along the cell, which requires a thermal management system. Therefore, aluminum plates with a mini-channel system were designed to control temperature uniformity. Design parameters such as the channel number and widths, inlet flow rate, and cooling fluid are optimized; water and air are compared as cooling fluids. Pressure drop and velocity profiles inside the channels are illustrated. Both surface and internal temperature profiles of single cells and battery packs are investigated with and without cooling systems. Our results show that using optimized mini-channel cooling plates effectively controls the temperature rise and uniformity of single cells and battery packs. With increasing inlet flow rate, a cooling efficiency of up to 60% could be reached.Keywords: lithium ion battery, 3D multilayer model, mini-channel cooling plates, thermal management
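To illustrate the basic energy balance behind the water-versus-air comparison, the sketch below estimates the coolant flow required to remove a given heat load for a fixed coolant temperature rise; the heat load and temperature rise are placeholders, not values from this study.

```python
# Required coolant flow to remove cell heat at a fixed coolant temperature
# rise: Q = m_dot * c_p * dT. Placeholder numbers, for illustration only.
Q = 20.0          # heat to remove [W]
dT = 5.0          # allowed coolant temperature rise [K]

cp = {"water": 4182.0, "air": 1006.0}   # specific heat [J/(kg K)]
rho = {"water": 998.0, "air": 1.2}      # density [kg/m^3]

for fluid in ("water", "air"):
    m_dot = Q / (cp[fluid] * dT)        # mass flow [kg/s]
    v_dot = m_dot / rho[fluid] * 6e4    # volume flow [L/min]
    print(f"{fluid}: {m_dot * 1000:.2f} g/s = {v_dot:.2f} L/min")
# water: ~0.96 g/s (~0.06 L/min); air: ~3.98 g/s (~199 L/min),
# showing why liquid mini-channels remove the same heat at far lower flow
```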
Procedia PDF Downloads 164
170 Polarization as a Proxy of Misinformation Spreading
Authors: Michela Del Vicario, Walter Quattrociocchi, Antonio Scala, Ana Lucía Schmidt, Fabiana Zollo
Abstract:
Information, rumors, and debates may shape and heavily impact public opinion. In recent years, several concerns have been expressed about social influence on the Internet and the outcome that online debates might have on real-world processes. Indeed, on online social networks users tend to select information that is coherent with their system of beliefs and to form groups of like-minded people –i.e., echo chambers– where they reinforce and polarize their opinions. In this way, the potential benefits of exposure to different points of view may be reduced dramatically, and individuals' views may become more and more extreme. Such a context fosters misinformation spreading, which has always represented a socio-political and economic risk. The persistence of unsubstantiated rumors –e.g., the hypothetical and hazardous link between vaccines and autism– suggests that social media do have the power to misinform, manipulate, or control public opinion. As an example, current approaches such as debunking efforts or algorithm-driven solutions based on the reputation of the source seem to prove ineffective against collective superstition. Indeed, experimental evidence shows that confirmatory information gets accepted even when it contains deliberately false claims, while dissenting information is mainly ignored, influences users' emotions negatively, and may even increase group polarization. Moreover, confirmation bias has been shown to play a pivotal role in information cascades, posing serious warnings about the efficacy of current debunking efforts. Nevertheless, mitigation strategies have to be adopted. To generalize the problem and to better understand the social dynamics behind information spreading, in this work we rely on a tight quantitative analysis to investigate the behavior of more than 300M users with respect to news consumption on Facebook over a time span of six years (2010-2015). Through a massive analysis of 920 news outlet pages, we are able to characterize the anatomy of news consumption on a global and international scale. We show that users tend to focus on a limited set of pages (selective exposure), eliciting a sharp and polarized community structure among news outlets. Moreover, we find similar patterns around the Brexit debate –the British referendum to leave the European Union– where we observe the spontaneous emergence of two well-segregated and polarized groups of users around news outlets. Our findings provide interesting insights into the determinants of polarization and the evolution of core narratives in online debating. Our main aim is to understand and map the information space on online social media by identifying non-trivial proxies for the early detection of massive informational cascades. Furthermore, by combining users' traces, we are finally able to draft the main concepts and beliefs of the core narrative of an echo chamber and its related perceptions.Keywords: information spreading, misinformation, narratives, online social networks, polarization
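One simple way to operationalize user polarization in this line of work is to score each user by the share of their activity devoted to one of two conflicting narratives; the sketch below uses an assumed scoring convention for illustration and is not necessarily the exact measure used in the paper.

```python
def polarization(likes_a: int, likes_b: int) -> float:
    """Polarization score in [-1, 1]: -1 = all activity on narrative A,
    +1 = all activity on narrative B, 0 = evenly split."""
    total = likes_a + likes_b
    if total == 0:
        raise ValueError("user has no activity on either narrative")
    return (likes_b - likes_a) / total

# A strongly bimodal distribution of such scores across many users
# (most near -1 or +1) is the signature of segregated echo chambers.
print(polarization(likes_a=48, likes_b=2))   # -> -0.92, strongly polarized
```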
Procedia PDF Downloads 288
169 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs
Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan
Abstract:
Unmanned aerial vehicle (UAV) technology offers cost efficiency and data retrieval time advantages. Technologies such as UAV, GNSS, and LiDAR can be combined into a single integrated system in which each covers the others' deficiencies. This integration aims to increase the accuracy of calculating the volume of the land stockpile of PT. Garam (Salt Company). UAV imagery is used to obtain geometric data and to capture textures that characterize the structure of objects; this study uses the Taror 650 Iron Man drone with four propellers, which can fly for 15 minutes. The image acquisitions are classified in software using photogrammetric and Structure-from-Motion point cloud principles, enabling the creation of point clouds, three-dimensional models, digital surface models, contours, and orthomosaics with high accuracy. LiDAR has the drawback that its coordinate data are positioned in a local reference frame. Therefore, the researchers use GNSS, LiDAR, and drone multi-sensor technology to map the salt stockpiles on open land and in warehouses, a survey PT. Garam carries out twice a year and previously performed with terrestrial methods and manual calculations with sacks. LiDAR needs to be combined with a UAV to overcome data acquisition limitations, because a ground-based scan only passes along the right and left sides of the object, particularly when applied to a salt stockpile. The UAV is flown to assist data acquisition with wide coverage, with the 200-gram LiDAR system integrated so that an optimal viewing angle can be maintained during the flight. Using LiDAR for low-cost mapping surveys makes it easier for surveyors and academics to obtain reasonably accurate data at a more economical price: as a survey tool, LiDAR is available at a low price, around 999 USD, and this device can produce detailed data, while the combined low-cost LiDAR, GNSS, and UAV system comes to around 638 USD, minimizing operational costs. The data generated by this sensor is a visualization of an object's shape in three dimensions. This study combines low-cost GPS measurements with low-cost LiDAR, processed using free software. The low-cost GPS generates latitude and longitude coordinates, from which X, Y, and Z values are derived to georeference the detected objects, and the LiDAR captures object heights across the entire surveyed environment. The data obtained are calibrated with pitch, roll, and yaw to get the vertical height of the existing contours. An experimental acquisition was conducted on the roof of a building, covering a radius of approximately 30 meters.Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour
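The georeferencing step described above amounts to rotating each locally-referenced LiDAR return by the platform attitude (roll, pitch, yaw) and translating it by the GNSS position. A minimal sketch follows; the frame conventions and variable names are assumptions, not the project's actual processing chain.

```python
import numpy as np

def rpy_matrix(roll, pitch, yaw):
    """Rotation from sensor/body frame to local level frame (Z-Y-X convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    return Rz @ Ry @ Rx

def georeference(points_body, roll, pitch, yaw, gnss_xyz):
    """points_body: (n, 3) LiDAR returns in the sensor frame;
    gnss_xyz: (3,) platform position in the mapping frame (e.g., local ENU).
    Returns (n, 3) georeferenced points."""
    R = rpy_matrix(roll, pitch, yaw)
    return points_body @ R.T + gnss_xyz
```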
Procedia PDF Downloads 93
168 A Spatial Perspective on the Metallized Combustion Aspect of Rockets
Authors: Chitresh Prasad, Arvind Ramesh, Aditya Virkar, Karan Dholkaria, Vinayak Malhotra
Abstract:
A solid propellant rocket utilises a combination of a solid oxidizer and a solid fuel. Success in solid rocket motor design and development depends significantly on knowledge of the burning rate behaviour of the selected solid propellant under all motor operating conditions and design limit conditions. Most solid-motor rockets consist of the main engine along with multiple boosters that provide additional thrust to the space-bound vehicle. Though widely used, they have been eclipsed by liquid propellant rockets because of the latter's better performance characteristics. The addition of a catalyst such as iron oxide, on the other hand, can drastically enhance the performance of a solid rocket. This investigation emulates the working of a solid rocket using sparklers and energized candles, with a central energized candle acting as the main engine and the surrounding sparklers acting as boosters. The energized candle is made of paraffin wax, with magnesium filings embedded in its wick. The sparkler is made up of 45% barium nitrate, 35% iron, 9% aluminium, and 10% dextrin, with the remaining composition being boric acid. The magnesium in the energized candle, and the combination of iron and aluminium in the sparkler, act as catalysts and enhance the burn rates of both materials. This combustion of metallized propellants influences the regression rate of the subject candle. The experimental parameters explored here are separation distance, systematically varied configuration, and layout symmetry. The major performance parameter under observation is the regression rate of the energized candle. The rate of regression is significantly affected by the orientation and configuration of the sparklers, which usually act as heat sources for the energized candle. The overall efficiency of any engine is the product of its thermal and propulsive efficiencies, and numerous efforts have been made to improve one or the other; this investigation focuses on the orientation of rocket motor design to maximize overall efficiency. The primary objective is to analyse the flame spread rate variations of the energized candle, which resembles the solid rocket propellant used in the first stage of rocket operation, thereby affecting the specific impulse of the rocket, which in turn has a deciding impact on time of flight. Another objective of this research venture is to determine the effectiveness of the key controlling parameters explored. This investigation also emulates the exhaust gas interactions of the solid rocket through concurrent ignition of the energized candle and sparklers, and their behaviour is analysed. Modern space programmes intend to explore the universe outside our solar system; to accomplish these goals, it is necessary to design a launch vehicle capable of providing incessant propulsion with better efficiency for vast durations. The main motivation of this study is to enhance rocket performance and overall efficiency through better design and optimization techniques, which will play a crucial role in this human conquest for knowledge.Keywords: design modifications, improving overall efficiency, metallized combustion, regression rate variations
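The regression rate tracked above is simply burned length per unit time; a small sketch of how it might be extracted from timed length measurements follows, with made-up sample numbers for illustration only.

```python
import numpy as np

# candle length [mm] measured at fixed intervals during a burn (made-up data)
t = np.array([0.0, 60.0, 120.0, 180.0])           # time [s]
length = np.array([150.0, 138.5, 127.2, 115.8])   # remaining length [mm]

# regression rate = burned length / elapsed time; a least-squares slope
# smooths measurement noise better than a single two-point difference
rate = -np.polyfit(t, length, 1)[0]               # [mm/s]
print(f"mean regression rate: {rate:.3f} mm/s")
```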
Procedia PDF Downloads 178
167 Numerical Model of Crude Glycerol Autothermal Reforming to Hydrogen-Rich Syngas
Authors: A. Odoom, A. Salama, H. Ibrahim
Abstract:
Hydrogen is a clean source of energy for power production and transportation. The main hydrogen feedstock in this research is a by-product of biodiesel production: glycerol, also called glycerine, obtained from the transesterification of vegetable oils and methanol. This is a more reliable and environmentally friendly source of hydrogen than fossil fuels. A typical composition of crude glycerol comprises glycerol, water, organic and inorganic salts, soap, methanol, and small amounts of glycerides. Crude glycerol has limited industrial application due to its low purity; thus, making use of crude glycerol can significantly enhance the sustainability and production of biodiesel. Reforming techniques are a common approach to hydrogen production, mainly steam reforming (SR), autothermal reforming (ATR), and partial oxidation reforming (POR). SR produces high hydrogen conversion and yield but is highly endothermic, whereas POR is exothermic; on the downside, POR yields less hydrogen along with a large number of side reactions. ATR, a fusion of partial oxidation reforming and steam reforming, is thermally neutral because the net reactor heat duty is zero; it has a relatively high hydrogen yield and selectivity and limits coke formation. The complex chemical processes that take place during the production phases make it relatively difficult to construct a reliable and robust numerical model, yet a numerical model is a tool to mimic reality and provide insight into the influence of the parameters. In this work, we introduce a finite-volume numerical study of an 'in-house' lab-scale ATR experiment. Previous numerical studies of this process have used either Comsol or nodal finite difference analysis. Since Comsol is a commercial package that is not readily available everywhere, and since the lab-scale experiment can be considered well mixed in the radial direction so that one spatial dimension suffices to capture the essential features of ATR, in this work we develop our own numerical approach using MATLAB. A continuum fixed-bed reactor is modelled in MATLAB with both pseudo-homogeneous and heterogeneous models. The drawback of a nodal finite difference formulation is that it is not locally conservative, which means that materials and momenta can be generated inside the domain as an artifact of the discretization. The control volume method, on the other hand, is locally conservative and suits very well problems where materials are generated and consumed inside the domain. In this work, the species mass balance, Darcy's equation, and the energy equations are solved using an operator splitting technique: diffusion-like terms are discretized implicitly, while advection-like terms are discretized explicitly. An upwind scheme is adopted for the advection term to ensure accuracy and positivity. Comparisons with the experimental data show very good agreement, which builds confidence in our modeling approach. The models obtained were validated and optimized for better results.Keywords: autothermal reforming, crude glycerol, hydrogen, numerical model
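To make the discretization choices concrete, here is a minimal finite-volume step for a 1D advection-diffusion species balance with explicit first-order upwind advection and implicit diffusion, in the spirit of the operator splitting described above. This is a Python illustration with made-up parameters and a simplified boundary treatment; the authors' implementation is in MATLAB and includes the reaction chemistry.

```python
import numpy as np

n, L = 100, 1.0                  # cells, reactor length [m]
dx = L / n
u, D = 0.05, 1e-4                # velocity [m/s], diffusivity [m^2/s] (made up)
dt = 0.5 * dx / u                # respects the explicit-advection CFL limit
c = np.zeros(n)
c[0] = 1.0                       # made-up initial slug of species at the inlet

# implicit diffusion operator (I - dt*D/dx^2 * Laplacian), zero-flux ends
k = dt * D / dx**2
A = np.eye(n) * (1 + 2 * k)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = -k
A[0, 0] = A[-1, -1] = 1 + k      # Neumann (zero-flux) boundary rows

for _ in range(200):
    # step 1: explicit first-order upwind advection (u > 0, flow left->right)
    c[1:] -= u * dt / dx * (c[1:] - c[:-1])
    # step 2: implicit diffusion solve (unconditionally stable)
    c = np.linalg.solve(A, c)
```

The upwind flux keeps the concentration non-negative, which is the positivity property noted above, while the implicit diffusion step removes the stiff time-step restriction.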
Procedia PDF Downloads 140
166 Experimental Research of Canine Mandibular Defect Construction with the Controlled Meshy Titanium Alloy Scaffold Fabricated by Electron Beam Melting Combined with BMSCs-Encapsulating Chitosan Hydrogel
Authors: Wang Hong, Liu Chang Kui, Zhao Bing Jing, Hu Min
Abstract:
Objective: We observed the repair effect on canine mandibular defects of a meshy Ti6Al4V scaffold fabricated by electron beam melting (EBM) combined with bone marrow mesenchymal stem cells (BMMSCs) encapsulated in a chitosan hydrogel. Method: Meshy titanium scaffolds were prepared by EBM of commercial Ti6Al4V powder. The length of the scaffolds was 24 mm, the width 5 mm, and the height 8 mm. The pore size and porosity were evaluated by scanning electron microscopy (SEM). Chitosan/Bio-Oss hydrogel was prepared from chitosan, β-sodium glycerophosphate, and Bio-Oss powder. BMMSCs were harvested from canine iliac crests, seeded in the titanium scaffolds, and encapsulated in the chitosan/Bio-Oss hydrogel. The viability of the BMMSCs was evaluated by cell counting kit-8 (CCK-8), and their osteogenic differentiation ability by alkaline phosphatase (ALP) activity and gene expression of OC, OPN, and Col I. The constructs were combined by injecting BMMSCs/chitosan/Bio-Oss hydrogel into the meshy Ti6Al4V scaffolds and allowing it to solidify. Box-shaped bone defects 24 mm long were made at the mid-portion of the mandible of adult beagles. The defects were randomly filled with BMMSCs/chitosan/Bio-Oss + titanium, chitosan/Bio-Oss + titanium, or titanium alone, with autogenous iliac crest graft as the control group in 3 beagles. Radionuclide bone imaging was used to monitor new bone tissue at 2, 4, 8, and 12 weeks after surgery. CT examination was performed on the day of surgery and at 4, 12, and 24 weeks after surgery. The animals were sacrificed at 4, 12, and 24 weeks after surgery, and bone formation was evaluated by histology and micro-CT. Results: The pores of the scaffolds were interconnected, with a pore size of about 1 mm and an average porosity of about 76%. The pore size of the hydrogel was 50-200 μm and its average porosity approximately 90%. The hydrogel solidified within 10 minutes at 37 ℃. The viability and osteogenic differentiation ability of the BMMSCs were not affected by the titanium scaffolds or the hydrogel. Radionuclide bone imaging showed an increasing tendency of revascularization and bone regeneration in all groups at 2, 4, and 8 weeks after the operation, with no further change at 12 weeks; the tendency was more obvious in the BMMSCs/chitosan/Bio-Oss + titanium group and the autogenous group. CT, micro-CT, and histology showed that new bone formed increasingly over time, with more new bone regenerated in the BMMSCs/chitosan/Bio-Oss + titanium group and the autogenous group than in the other two groups. At 24 weeks, the autogenous group achieved bone union, and the BMMSCs/chitosan/Bio-Oss group showed extensive new bone formation around the scaffolds and more new bone inside the central pores of the scaffolds than the chitosan/Bio-Oss + titanium and titanium groups; the difference was significant. Conclusion: Titanium scaffolds fabricated by EBM have a controlled porous structure, good bone conduction, and biocompatibility. The chitosan/Bio-Oss hydrogel has injectable plasticity, thermosensitivity, and good biocompatibility. The meshy Ti6Al4V scaffold produced by EBM, combined with BMMSCs encapsulated in chitosan hydrogel, has a good capacity for mandibular bone defect repair.Keywords: mandibular reconstruction, tissue engineering, electron beam melting, titanium alloy
Procedia PDF Downloads 445
165 Emotion and Risk Taking in a Casino Game
Authors: Yulia V. Krasavtseva, Tatiana V. Kornilova
Abstract:
Risk-taking behaviors are dictated not only by cognitive components but also involve emotional aspects. Anticipatory emotions, involving both cognitive and affective mechanisms, are involved in decision-making in general and risk-taking in particular, while affective reactions are prompted when an expectation or prediction is either validated or invalidated by the achieved result. This study aimed to combine predictions, anticipatory emotions, affective reactions, and personality traits in the context of risk-taking behaviors. An experimental online method, Emotion and Prediction In a Casino (EPIC), was used, based on a casino-like roulette game. In a series of choices, the participant is presented with progressively riskier roulette combinations, where the potential sums of wins and losses increase with each choice, and the participant is given a choice: to 'walk away' with the current sum of money or to 'play' the displayed roulette, thus accepting the implicit risk. Before and after the result is displayed, participants also rate their emotions using the Self-Assessment Manikin [Bradley, Lang, 1994], picking a picture representing the intensity of pleasure, arousal, and dominance. The following personality measures were used: 1) Personal Decision-Making Factors [Kornilova, 2003], assessing risk and rationality; 2) the I7 Impulsivity Questionnaire [Kornilova, 1995], assessing impulsiveness, risk readiness, and empathy; and 3) the Subjective Risk Intelligence Scale [Craparo et al., 2018], assessing negative attitude toward uncertainty, emotional stress vulnerability, imaginative capability, and problem-solving self-efficacy. Two groups of participants took part in the study: 1) 98 university students (Mage = 19.71, SD = 3.25; 72% female) and 2) 94 online participants (Mage = 28.25, SD = 8.25; 89% female), recruited via social media. Students with high rationality rated their pleasure and dominance before and after choices as lower (ρ from -2.6 to -2.7, p < 0.05). Those with high levels of impulsivity rated their arousal lower before finding out their result (ρ from 2.5 to 3.7, p < 0.05), while also rating their dominance as low (ρ from -3 to -3.7, p < 0.05). Students prone to risk rated their pleasure and arousal before and after higher (ρ from 2.5 to 3.6, p < 0.05). High empathy was positively correlated with arousal after learning the result. High emotional stress vulnerability positively correlates with arousal and pleasure after the choice (ρ from 3.9 to 5.7, p < 0.05). Negative attitude toward uncertainty correlates with high anticipatory and reactive arousal (ρ from 2.7 to 5.7, p < 0.05). High imaginative capability correlates negatively with anticipatory and reactive dominance (ρ from -3.4 to -4.3, p < 0.05). Pleasure (.492), arousal (.590), and dominance (.551) before and after the result were positively correlated. Higher predictions positively correlated with reactive pleasure and arousal. In a riskier scenario (6/8 chances to win), anticipatory arousal was negatively correlated with the pleasure emotion (-.326) and vice versa (-.265). These correlations occur regardless of the roulette outcome. In conclusion, risk-taking behaviors are linked not only to personality traits but also to anticipatory emotions and affect in a modeled casino setting. Acknowledgment: The study was supported by the Russian Foundation for Basic Research, project 19-29-07069.Keywords: anticipatory emotions, casino game, risk taking, impulsiveness
Procedia PDF Downloads 133
164 Gastroprotective Effect of Copper Complex On Indomethacin-Induced Gastric Ulcer In Rats. Histological and Immunohistochemical Study
Authors: Heba M. Saad Eldien, Ola Abdel-Tawab Hussein, Ahmed Yassein Nassar
Abstract:
Background: Indomethacin is a non-steroidal anti-inflammatory drug. Indomethacin induces injury to the gastrointestinal mucosa in experimental animals and humans, and its use is associated with a significant risk of hemorrhage, erosions, and perforation of both gastric and intestinal ulcers. The anti-inflammatory action of copper complexes is an important component of their anti-ulcer effect, achieved through their intermediary role as a transport form of copper that allows activation of several copper-dependent enzymes; therefore, several copper complexes have been synthesized and investigated as promising alternative anti-ulcer therapies. Aim of the work: The purpose of this study was to evaluate a copper-chelating complex consisting of egg albumin and copper, one of the copper peptides that can be used as an anti-inflammatory agent, for its ability to ameliorate the hazards of indomethacin on the histological structure of the gastric fundus; it could be added to raise the efficacy of the currently used simple and cheap gastric anti-inflammatory drug mucogel. Material & methods: This study was carried out on 40 adult male albino rats, divided equally into 4 groups: Group I (control group) received distilled water; Group II (indomethacin-treated group) received indomethacin (25 mg/kg body weight, oral intubation) once; Group III (mucogel-treated group) received 2 mL/rat once daily by oral intubation; and Group IV (copper complex group) received 1 mL/rat of a preparation in which 30 gm of copper-albumin complex was mixed uniformly with mucogel to 100 mL. Treatment started six hours after ulcer induction and continued until the 3rd day. The animals were then sacrificed, and tissue was processed for light microscopy, transmission electron microscopy (TEM), and immunostaining for inducible nitric oxide synthase (iNOS). Results: The fundic mucosa of Group II showed exfoliation of the epithelial cells lining the glands, discontinuity of the surface epithelial cells (ulcer formation), vacuolation and detachment of cells, eosinophilic infiltration, and congestion of blood vessels in the lamina propria and submucosa. There was thickening and disarrangement of the mucosa, a weak positive PAS reaction, and a marked increase in collagen fibers in the lamina propria and submucosa of the fundus. TEM revealed degeneration of chief and parietal cells, and there was a marked increase in the iNOS reaction in all cells of the fundic glands. Group III showed reconstruction of the gastric glands with cystic dilatation and vacuolation, a moderate decrease in collagen fibers, and reduced iNOS intensity. Group IV showed healthy mucosa with normal surface epithelium and fundic glands, a strong positive PAS reaction, a marked decrease in collagen fibers, and a positive iNOS reaction; TEM revealed regeneration of chief and parietal cells. Conclusion: Co-treatment with the copper-albumin complex seems useful for gastric ulcer treatment and ameliorates most of the hazards of indomethacin.Keywords: copper complex, gastric ulcer, indomethacin, rat
Procedia PDF Downloads 338
163 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak
Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi
Abstract:
This paper investigates the software capability and computer-aided engineering (CAE) method for modelling the transient heat transfer process occurring in the vehicle underhood region during the thermal soak phase. Heat retention from the soak period benefits the subsequent cold start, with reduced friction loss for the second 14°C worldwide harmonized light-duty vehicle test procedure (WLTP) cycle, and therefore provides benefits for both CO₂ emission reduction and fuel economy. When a vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is a key factor in the thermal simulation of the engine bay for obtaining accurate fluid and metal temperature cool-down trajectories and predicting the temperatures at the end of the soak period. In this study, a method was developed on a light-duty passenger vehicle using a coupled aerodynamic-heat transfer transient modelling approach for the full vehicle under 9 hours of thermal soak. The 3D underhood flow dynamics were solved in an inherently transient manner by the Lattice-Boltzmann Method (LBM) using the PowerFlow software, coupled with heat transfer modelling in the PowerTHERM software provided by Exa Corporation. The particle-based LBM is capable of accurately handling extremely complicated transient flow behavior on complex surface geometries, while the detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven convection, was solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with vehicle testing data for the key fluid (coolant, oil) and metal temperatures. The developed CAE method predicted the cool-down behaviour of the key fluids and components in agreement with the experimental data and also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained for the 9-hour soak provide vital information and a basis for the further development of reduced-order modelling studies in future work. This allows a fast-running model to be developed and embedded within a holistic study of vehicle energy modelling and thermal management. It is also found that the buoyancy effect plays an important part in the first stage of the 9-hour soak, and the flow development during this stage is vital for accurately predicting the heat transfer coefficients used in heat retention modelling. The developed method demonstrates the software integration for simulating buoyancy-driven heat transfer in a vehicle underhood region during thermal soak with satisfactory accuracy and efficient computing time. The CAE method will allow engine encapsulation designs for improving fuel consumption and reducing CO₂ emissions to be integrated in a timely and robust manner, aiding the development of low-carbon transport technologies.Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak
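The reduced-order modelling direction mentioned above often starts from a lumped cool-down description of each component, with the heat transfer coefficient informed by the detailed CFD. A minimal sketch follows; the component parameters are placeholders, not values from this study.

```python
import numpy as np

# lumped cool-down of one underhood component during soak:
# m*c*dT/dt = -h*A*(T - T_amb), giving an exponential trajectory
m, c = 8.0, 900.0        # component mass [kg], specific heat [J/(kg K)]
h, A = 6.0, 0.5          # soak-phase convective coefficient [W/(m^2 K)], area [m^2]
T0, T_amb = 90.0, 14.0   # initial and ambient temperatures [deg C]

tau = m * c / (h * A)                          # thermal time constant [s]
t = np.linspace(0.0, 9 * 3600.0, 10)           # the 9-hour soak window
T = T_amb + (T0 - T_amb) * np.exp(-t / tau)    # cool-down trajectory
print(f"tau = {tau / 3600:.1f} h, T(9 h) = {T[-1]:.1f} degC")
```

Because the buoyancy-driven flow sets h during the early soak, the CFD-derived coefficients are exactly what such a fast-running model would take as input.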
Procedia PDF Downloads 153
162 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine-learning-assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; yet many applications require high accuracy, for example, band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. To circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and bond angles. The key to accuracy in multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, comprising 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO levels.Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
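The Δ-ML strategy described above reduces to fitting a regressor to the difference between fidelities and adding the learned correction to the cheap result at prediction time. Here is a minimal sketch with a generic regressor; the descriptors, the synthetic data, and the model choice are placeholders for illustration, not the paper's GCN.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# hypothetical training set: molecular descriptors x, plus paired
# low-fidelity (e.g., DFT) and high-fidelity (e.g., coupled cluster) energies
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 16))            # placeholder descriptors
e_low = x.sum(axis=1)                     # stand-in low-fidelity energies
e_high = e_low + 0.1 * np.sin(x[:, 0])    # stand-in high-fidelity energies

# delta-ML: learn only the correction, not the full property
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(x, e_high - e_low)

# prediction for a new candidate = cheap low-fidelity value + learned correction
x_new = rng.normal(size=(1, 16))
e_low_new = x_new.sum(axis=1)
e_pred = e_low_new + model.predict(x_new)
```

The correction surface is typically much smoother than the property itself, which is why far fewer high-fidelity training points are needed; the GCN with semi-supervised learning described above replaces both the descriptors and the regressor in this sketch.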
Procedia PDF Downloads 39