Search results for: particle physics
1299 Effect of Different Contaminants on Mineral Insulating Oil Characteristics
Authors: H. M. Wilhelm, P. O. Fernandes, L. P. Dill, C. Steffens, K. G. Moscon, S. M. Peres, V. Bender, T. Marchesan, J. B. Ferreira Neto
Abstract:
Deterioration of insulating oil is a natural process that occurs during transformer operation. However, this process can be accelerated by some factors, such as oxygen, high temperatures, metals, and moisture, which rapidly reduce oil insulating capacity and favor transformer faults. Parts of the building materials of a transformer can be degraded and yield soluble compounds and insoluble particles that shorten the equipment life. Physicochemical tests, dissolved gas analysis (including propane, propylene, and butane), volatile and furanic compounds determination, besides quantitative and morphological analyses of particulate, are proposed in this study in order to correlate transformer building materials degradation with insulating oil characteristics. The present investigation involves tests of medium temperature overheating simulation by means of an electric resistance wrapped with the following materials immersed in mineral insulating oil: test I) copper, tin, lead, and paper (heated at 350-400 °C for 8 h); test II) only copper (at 250 °C for 11 h); and test III) only paper (at 250 °C for 8 h and at 350 °C for 8 h). A different experiment is the simulation of an electric arc involving copper, using an electric welding machine at two distinct energy sets (low and high). Analysis results showed that dielectric loss was higher in the sample of test I; a higher neutralization index and higher values of hydrogen and hydrocarbons, including propane and butane, were also observed. Test III oil presented a higher particle count; in addition, ferrographic analysis revealed contamination with fibers and carbonized paper. However, these particles had little influence on the oil physicochemical parameters (dielectric loss and neutralization index) and on the gas production, which was very low. Test II oil showed high levels of methane, ethane, and propylene, indicating the effect of metal on oil degradation. CO2 and CO gases were formed in the highest concentration in test III, as expected. Regarding volatile compounds, in test I acetone, benzene and toluene were detected, which are oil oxidation products. Regarding test III, methanol was identified due to cellulose degradation, as expected. The electric arc simulation test showed the highest oil oxidation in the presence of copper and at high temperature, since these samples had a huge concentration of hydrogen, ethylene, and acetylene. The particle count was also very high, showing the highest release of copper in such conditions. When comparing high and low energy, the former presented more hydrogen, ethylene, and acetylene. This sample had results more similar to test I, pointing out that the generation of different particles can be the cause of faults such as electric arc. Ferrography showed more evident copper and exfoliation particles than in other samples. Therefore, in this study, by using different combined analytical techniques, it was possible to correlate insulating oil characteristics with possible contaminants, which can lead to transformer failure.
Keywords: Ferrography, gas analysis, insulating mineral oil, particle contamination, transformer failures
Procedia PDF Downloads 225
1298 Numerical Simulation of a Single Cell Passing through a Narrow Slit
Authors: Lanlan Xiao, Yang Liu, Shuo Chen, Bingmei Fu
Abstract:
Most cancer-related deaths are due to metastasis. Metastasis is a complex, multistep process including the detachment of cancer cells from the primary tumor and the migration to distant targeted organs through blood and/or lymphatic circulations. During hematogenous metastasis, the emigration of tumor cells from the blood stream through the vascular wall into the tissue involves arrest in the microvasculature, adhesion to the endothelial cells forming the microvessel wall, and transmigration to the tissue through the endothelial barrier, termed extravasation. The narrow slit between endothelial cells that line the microvessel wall is the principal pathway for tumor cell extravasation to the surrounding tissue. To understand this crucial step for tumor hematogenous metastasis, we used the Dissipative Particle Dynamics method to investigate an individual cell passing through a narrow slit numerically. The cell membrane was simulated by a spring-based network model which can separate the internal cytoplasm and surrounding fluid. The effects of the cell elasticity, cell shape and cell surface area increase, and slit size on the cell transmigration through the slit were investigated. Under a fixed driving force, the cell with higher elasticity can be elongated more and pass faster through the slit. When the slit width decreases to 2/3 of the cell diameter, the spherical cell becomes jammed despite reducing its elasticity modulus by 10 times. However, transforming the cell from a spherical to ellipsoidal shape and increasing the cell surface area only by 3% can enable the cell to pass the narrow slit. Therefore, the cell shape and surface area increase play a more important role than the cell elasticity in cell passing through the narrow slit. In addition, the simulation results indicate that the cell migration velocity decreases during entry but increases during exit of the slit, which is qualitatively in agreement with the experimental observation.
Keywords: dissipative particle dynamics, deformability, surface area increase, cell migration
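For orientation, a minimal sketch of the standard pairwise forces used in Dissipative Particle Dynamics is given below in the common Groot-Warren form; the symbols and weight functions are conventional assumptions added here for context, not parameters reported by the authors.
```latex
% Standard DPD pairwise forces (Groot-Warren form), quoted as background only
\begin{aligned}
\mathbf{F}^{C}_{ij} &= a_{ij}\left(1-\frac{r_{ij}}{r_c}\right)\hat{\mathbf{r}}_{ij} \quad (r_{ij}<r_c), \\
\mathbf{F}^{D}_{ij} &= -\gamma\, w^{D}(r_{ij})\,(\hat{\mathbf{r}}_{ij}\cdot\mathbf{v}_{ij})\,\hat{\mathbf{r}}_{ij}, \\
\mathbf{F}^{R}_{ij} &= \sigma\, w^{R}(r_{ij})\,\theta_{ij}\,\Delta t^{-1/2}\,\hat{\mathbf{r}}_{ij},
\end{aligned}
\qquad\text{with } w^{D}(r)=\big[w^{R}(r)\big]^{2},\quad \sigma^{2}=2\gamma k_{B}T .
```
Here the conservative, dissipative and random forces act between particle pairs within the cutoff r_c, and the fluctuation-dissipation relation between σ and γ keeps the simulated fluid at temperature T.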
Procedia PDF Downloads 334
1297 Is It Important to Measure the Volumetric Mass Density of Nanofluids?
Authors: Z. Haddad, C. Abid, O. Rahli, O. Margeat, W. Dachraoui, A. Mataoui
Abstract:
The present study aims to measure the volumetric mass density of NiPd-heptane nanofluids synthesized using a one-step method known as thermal decomposition of metal-surfactant complexes. The particle concentration is up to 7.55 g/l, and the temperature range of the experiment is from 20°C to 50°C. The measured values were compared with the mixture theory, and good agreement between the theoretical equation and the measurements was obtained. Moreover, the available nanofluid volumetric mass density data in the literature are reviewed.
Keywords: NiPd nanoparticles, nanofluids, volumetric mass density, stability
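The mixture theory used for the comparison is, in its usual form, the volume-weighted rule sketched below; the notation is an assumption added here to make the comparison explicit, not the authors' own.
```latex
% Volume-weighted mixture rule for nanofluid density (standard form, notation assumed)
\rho_{nf} = \varphi\,\rho_{p} + (1-\varphi)\,\rho_{bf},
\qquad \varphi = \frac{m_{p}/\rho_{p}}{V_{nf}},
```
where φ is the particle volume fraction obtained from the dispersed particle mass m_p, ρ_p the NiPd particle density, and ρ_bf the base-fluid (heptane) density.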
Procedia PDF Downloads 401
1296 Study Secondary Particle Production in Carbon Ion Beam Radiotherapy
Authors: Shaikah Alsubayae, Gianluigi Casse, Carlos Chavez, Jon Taylor, Alan Taylor, Mohammad Alsulimane
Abstract:
Ensuring accurate radiotherapy with carbon therapy requires precise monitoring of radiation dose distribution within the patient's body. This monitoring is essential for targeted tumor treatment, minimizing harm to healthy tissues, and improving treatment effectiveness while lowering side effects. In our investigation, we employed a methodological approach to monitor secondary proton doses in carbon therapy using Monte Carlo simulations. Initially, Geant4 simulations were utilized to extract the initial positions of secondary particles formed during interactions between carbon ions and water. These particles included protons, gamma rays, alpha particles, neutrons, and tritons. Subsequently, we studied the relationship between the carbon ion beam and these secondary particles. Interaction Vertex Imaging (IVI) is valuable for monitoring dose distribution in carbon therapy. It provides details about the positions and amounts of secondary particles, particularly protons. The IVI method depends on charged particles produced during ion fragmentation to gather information about the range by reconstructing particle trajectories back to their point of origin, referred to as the vertex. In our simulations regarding carbon ion therapy, we observed a strong correlation between some secondary particles and the range of carbon ions. However, challenges arose due to the target's unique elongated geometry, which hindered the straightforward transmission of forward-generated protons. Consequently, the limited protons that emerged mostly originated from points close to the target entrance. The trajectories of fragments (protons) were approximated as straight lines, and a beam back-projection algorithm, using recorded interaction positions in Si detectors, was developed to reconstruct vertices. The analysis revealed a correlation between the reconstructed and actual positions.Keywords: radiotherapy, carbon therapy, monitoring of radiation dose, interaction vertex imaging
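As an illustration of the kind of straight-line back-projection described above, the sketch below estimates a vertex as the point on the beam axis closest to a proton track fitted through two silicon-detector hits; the geometry, coordinates and detector positions are placeholders, not the simulated IVI setup.
```python
# Sketch of a straight-line back-projection vertex estimate (illustrative geometry only;
# detector positions and the beam axis are placeholders, not the simulated setup).
import numpy as np

def closest_point_to_beam(track_point, track_dir,
                          beam_point=np.zeros(3), beam_dir=np.array([0.0, 0.0, 1.0])):
    """Return the point on the beam axis closest to the back-projected proton track."""
    d1 = track_dir / np.linalg.norm(track_dir)
    d2 = beam_dir / np.linalg.norm(beam_dir)
    r = track_point - beam_point
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b
    # Parameter along the beam axis at the point of closest approach between the two lines.
    t_beam = (a * e - b * d) / denom if abs(denom) > 1e-12 else 0.0
    return beam_point + t_beam * d2  # estimated vertex position along the beam

# Track fitted from two Si-detector hits (placeholder coordinates, cm):
hit1, hit2 = np.array([5.0, 0.5, 12.0]), np.array([9.0, 0.9, 20.0])
vertex = closest_point_to_beam(hit1, hit2 - hit1)
print("reconstructed vertex (cm):", vertex)
```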
Procedia PDF Downloads 84
1295 Analysis of the Torque Required for Mixing LDPE with Natural Fibre and DCP
Authors: A. E. Delgado, W. Aperador
Abstract:
This study evaluated the influence of the natural fibre concentration, as well as the effect of adding a crosslinking agent, on the torque when those components are mixed with low-density polyethylene (LDPE). The natural fibre has a particle size between 0.8 and 1.2 mm and a moisture content of 0.17%. An internal mixer was used to measure the torque required to mix the polymer with the fibre. The effect of the fibre content and crosslinking agent on the torque was also determined. A change in the morphology of the mixes was observed using scanning electron microscopy (SEM).
Keywords: WPC, DCP, LDPE, natural fibre, torque
Procedia PDF Downloads 419
1294 CFD Simulation of a Large Scale Unconfined Hydrogen Deflagration
Authors: I. C. Tolias, A. G. Venetsanos, N. Markatos
Abstract:
In the present work, CFD simulations of a large-scale open deflagration experiment are performed. A stoichiometric hydrogen-air mixture occupies a 20 m hemisphere. Two combustion models are compared and evaluated against the experiment: the Eddy Dissipation Model and a multi-physics combustion model which is based on Yakhot's equation for the turbulent flame speed. The values of the models' critical parameters are investigated. The effect of the turbulence model is also examined; the k-ε model and an LES approach were tested.
Keywords: CFD, deflagration, hydrogen, combustion model
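Yakhot's closure for the turbulent flame speed, on which the multi-physics model is based, is commonly quoted in the implicit form below; this is taken from the general literature and added only for orientation, not from the paper itself.
```latex
% Yakhot's implicit relation for the turbulent flame speed (general literature form)
\frac{S_T}{S_L} = \exp\!\left[\left(\frac{u'}{S_T}\right)^{2}\right],
```
where S_T is the turbulent flame speed, S_L the laminar flame speed, and u' the turbulent velocity fluctuation.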
Procedia PDF Downloads 502
1293 Li2O Loss of Lithium Niobate Nanocrystals during High-Energy Ball-Milling
Authors: Laura Kocsor, Laszlo Peter, Laszlo Kovacs, Zsolt Kis
Abstract:
The aim of our research is to prepare rare-earth-doped lithium niobate (LiNbO3) nanocrystals, having only a few dopant ions in the focal point of an exciting laser beam. These samples will be used to achieve individual addressing of the dopant ions by light beams in a confocal microscope setup. One method for the preparation of nanocrystalline materials is to reduce the particle size by mechanical grinding. High-energy ball-milling was used in several works to produce nano lithium niobate. Previously, it was reported that dry high-energy ball-milling of lithium niobate in a shaker mill results in the partial reduction of the material, which leads to a balanced formation of bipolarons and polarons yielding gray color together with oxygen release and Li2O segregation on the open surfaces. In the present work we focus on preparing LiNbO3 nanocrystals by high-energy ball-milling using a Fritsch Pulverisette 7 planetary mill. Every ball-milling process was carried out in a zirconia vial with zirconia balls of different sizes (from 3 mm to 0.1 mm), wet grinding with water, and grinding times of less than an hour. Gradually decreasing the ball size to 0.1 mm, an average particle size of about 10 nm could be obtained, as determined by dynamic light scattering and verified by scanning electron microscopy. High-energy ball-milling resulted in sample darkening, evidenced by optical absorption spectroscopy measurements, indicating that the material underwent partial reduction. The unwanted lithium oxide loss decreases the Li/Nb ratio in the crystal, strongly influencing the spectroscopic properties of lithium niobate. Zirconia contamination was found in the ground samples, as proved by energy-dispersive X-ray spectroscopy measurements; however, it cannot be explained based on the hardness properties of the materials involved in the ball-milling process. It can be understood by taking into account the presence of lithium hydroxide, formed from the segregated lithium oxide and water during the ball-milling process, which leads to chemically induced abrasion. The quantity of the segregated Li2O was measured by coulometric titration. During the wet milling process in the planetary mill, it was found that the lithium oxide loss increases linearly in the early phase of the milling process; then a saturation of the Li2O loss can be seen. This change goes along with the disappearance of the relatively large particles until a relatively narrow size distribution is achieved, in accord with the dynamic light scattering measurements. With the 3 mm ball size and 1100 rpm rotation rate, the mean particle size achieved is 100 nm, and the total Li2O loss is about 1.2 wt.% of the original LiNbO3. Further investigations have been done to minimize the Li2O segregation during the ball-milling process. Since the Li2O loss was observed to increase with the growing total surface area of the particles, the influence of the ball-milling parameters on its quantity has also been studied.
Keywords: high-energy ball-milling, lithium niobate, mechanochemical reaction, nanocrystals
Procedia PDF Downloads 135
1292 Solitons and Universes with Acceleration Driven by Bulk Particles
Authors: A. C. Amaro de Faria Jr, A. M. Canone
Abstract:
Considering a scenario where our universe is taken as a 3D domain wall embedded in a 5D Minkowski space-time, we explore the existence of a richer class of solitonic solutions and their consequences for accelerating universes driven by collisions of bulk particle excitations with the walls. In particular, it is shown that some of these solutions should play a fundamental role at the beginning of the expansion process. We present some of these solutions in cosmological scenarios that can be applied to models that describe the inflationary period of the Universe.
Keywords: solitons, topological defects, branes, kinks, accelerating universes in brane scenarios
Procedia PDF Downloads 137
1291 Pegylated Liposomes of Trans Resveratrol, an Anticancer Agent, for Enhancing Therapeutic Efficacy and Long Circulation
Authors: M. R. Vijayakumar, Sanjay Kumar Singh, Lakshmi, Hithesh Dewangan, Sanjay Singh
Abstract:
Trans resveratrol (RES) is a natural molecule with proven cancer-preventive and therapeutic activities, devoid of any potential side effects. However, the therapeutic application of RES in disease management is limited because of its rapid elimination from blood circulation and thereby its low biological half-life in mammals. Therefore, the main objective of this study is to enhance the circulation time as well as the therapeutic efficacy using PEGylated liposomes. D-α-tocopheryl polyethylene glycol 1000 succinate (vitamin E TPGS) was applied as a steric surface-decorating agent to prepare RES liposomes by the thin-film hydration method. The prepared nanoparticles were evaluated by various state-of-the-art techniques, such as dynamic light scattering (DLS) for particle size and zeta potential, TEM for shape, differential scanning calorimetry (DSC) for interaction analysis, and XRD for crystalline changes of the drug. Encapsulation efficiency and in vitro drug release were determined by the dialysis bag method. Cancer cell viability studies were performed by MTT assay. Pharmacokinetic studies were performed in Sprague Dawley rats. The prepared liposomes were found to be spherical in shape. Particle size and zeta potential of the prepared formulations varied from 64.5±3.16 to 262.3±7.45 nm and -2.1 to 1.76 mV, respectively. The DSC study revealed the absence of potential interactions. The XRD study revealed the presence of the amorphous form in the liposomes. Entrapment efficiency was found to be 87.45±2.14%, and the drug release was found to be controlled up to 24 hours. The minimized MEC in the MTT assay and the tremendous enhancement in the circulation time of RES PEGylated liposomes compared with the pristine form revealed that sterically stabilized PEGylated liposomes can be an alternative tool to commercialize this molecule for chemopreventive and therapeutic applications in cancer.
Keywords: trans resveratrol, cancer nanotechnology, long circulating liposomes, bioavailability enhancement, liposomes for cancer therapy, PEGylated liposomes
Procedia PDF Downloads 589
1290 Understanding the Fundamental Driver of Semiconductor Radiation Tolerance with Experiment and Theory
Authors: Julie V. Logan, Preston T. Webster, Kevin B. Woller, Christian P. Morath, Michael P. Short
Abstract:
Semiconductors, as the base of critical electronic systems, are exposed to damaging radiation while operating in space, nuclear reactors, and particle accelerator environments. What innate property allows some semiconductors to sustain little damage while others accumulate defects rapidly with dose is, at present, poorly understood. This limits the extent to which radiation tolerance can be implemented as a design criterion. To address this problem of determining the driver of semiconductor radiation tolerance, the first step is to generate a dataset of the relative radiation tolerance of a large range of semiconductors (exposed to the same radiation damage and characterized in the same way). To accomplish this, Rutherford backscatter channeling experiments are used to compare the displaced lattice atom buildup in InAs, InP, GaP, GaN, ZnO, MgO, and Si as a function of step-wise alpha particle dose. With this experimental information on radiation-induced incorporation of interstitial defects in hand, hybrid density functional theory electron densities (and their derived quantities) are calculated, and their gradient and Laplacian are evaluated to obtain key fundamental information about the interactions in each material. It is shown that simple, undifferentiated values (which are typically used to describe bond strength) are insufficient to predict radiation tolerance. Instead, the curvature of the electron density at bond critical points provides a measure of radiation tolerance consistent with the experimental results obtained. This curvature and associated forces surrounding bond critical points disfavors localization of displaced lattice atoms at these points, favoring their diffusion toward perfect lattice positions. With this criterion to predict radiation tolerance, simple density functional theory simulations can be conducted on potential new materials to gain insight into how they may operate in demanding high radiation environments.Keywords: density functional theory, GaN, GaP, InAs, InP, MgO, radiation tolerance, rutherford backscatter channeling
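The "bond critical point" language used above follows the quantum theory of atoms in molecules (QTAIM); the standard definitions are sketched below as background only and are not quoted from the paper.
```latex
% QTAIM definitions assumed here for context: a bond critical point r_b satisfies
\nabla\rho(\mathbf{r}_b) = 0, \qquad
\nabla^{2}\rho(\mathbf{r}_b) = \lambda_1 + \lambda_2 + \lambda_3,
```
where λ1 ≤ λ2 < 0 < λ3 are the eigenvalues of the Hessian of the electron density ρ at the bond critical point; the signs and magnitudes of these curvatures are the quantities the study correlates with radiation tolerance.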
Procedia PDF Downloads 174
1289 On Deterministic Chaos: Disclosing the Missing Mathematics from the Lorenz-Haken Equations
Authors: Meziane Belkacem
Abstract:
We aim at converting the original 3D Lorenz-Haken equations, which describe laser dynamics in terms of self-pulsing and chaos, into two second-order differential equations, out of which we extract the so-far missing mathematics and corroborations with respect to nonlinear interactions. Leaning on basic trigonometry, we pull out important outcomes; a fundamental result attributes chaos to forbidden periodic solutions inside some precisely delimited region of the control parameter space that governs the bewildering dynamics.
Keywords: physics, optics, nonlinear dynamics, chaos
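For reference, the single-mode Lorenz-Haken laser equations are isomorphic to the classical Lorenz system; one common dimensionless form is recalled below (the notation is ours and is added only for orientation, not quoted from the paper).
```latex
% Lorenz form of the Lorenz-Haken laser equations (assumed standard notation)
\dot{x} = \sigma\,(y - x), \qquad
\dot{y} = r\,x - y - x z, \qquad
\dot{z} = x y - b\,z,
```
with x proportional to the field amplitude, y to the atomic polarization, z to the population-inversion deficit, and (σ, b, r) the control parameters referred to above.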
Procedia PDF Downloads 157
1288 Inclusion Body Refolding at High Concentration for Large-Scale Applications
Authors: J. Gabrielczyk, J. Kluitmann, T. Dammeyer, H. J. Jördening
Abstract:
High-level expression of proteins in bacteria often causes production of insoluble protein aggregates, called inclusion bodies (IB). They contain mainly one type of protein and offer an easy and efficient way to get purified protein. On the other hand, proteins in IB are normally devoid of function and therefore need a special treatment to become active. Most refolding techniques aim at diluting the solubilizing chaotropic agents. Unfortunately, optimal refolding conditions have to be found empirically for every protein. For large-scale applications, a simple refolding process with high yields and high final enzyme concentrations is still missing. The constructed plasmid pASK-IBA63b containing the sequence of fructosyltransferase (FTF, EC 2.4.1.162) from Bacillus subtilis NCIMB 11871 was transformed into E. coli BL21 (DE3) Rosetta. The bacterium was cultivated in a fed-batch bioreactor. The produced FTF was obtained mainly as IB. For refolding experiments, five different amounts of IBs were solubilized in urea buffer with protein concentration of 0.2-8.5 g/L. Solubilizates were refolded with batch or continuous dialysis. The refolding yield was determined by measuring the protein concentration of the clear supernatant before and after the dialysis. Particle size was measured by dynamic light scattering. We tested the solubilization properties of fructosyltransferase IBs. The particle size measurements revealed that the solubilization of the aggregates is achieved at urea concentration of 5M or higher and confirmed by absorption spectroscopy. All results confirm previous investigations that refolding yields are dependent upon initial protein concentration. In batch dialysis, the yields dropped from 67% to 12% and 72% to 19% for continuous dialysis, in relation to initial concentrations from 0.2 to 8.5 g/L. Often used additives such as sucrose and glycerol had no effect on refolding yields. Buffer screening indicated a significant increase in activity but also temperature stability of FTF with citrate/phosphate buffer. By adding citrate to the dialysis buffer, we were able to increase the refolding yields to 82-47% in batch and 90-74% in the continuous process. Further experiments showed that in general, higher ionic strength of buffers had major impact on refolding yields; doubling the buffer concentration increased the yields up to threefold. Finally, we achieved corresponding high refolding yields by reducing the chamber volume by 75% and the amount of buffer needed. The refolded enzyme had an optimal activity of 12.5±0.3 x104 units/g. However, detailed experiments with native FTF revealed a reaggregation of the molecules and loss in specific activity depending on the enzyme concentration and particle size. For that reason, we actually focus on developing a process of simultaneous enzyme refolding and immobilization. The results of this study show a new approach in finding optimal refolding conditions for inclusion bodies at high concentrations. Straightforward buffer screening and increase of the ionic strength can optimize the refolding yield of the target protein by 400%. Gentle removal of chaotrope with continuous dialysis increases the yields by an additional 65%, independent of the refolding buffer applied. In general time is the crucial parameter for successful refolding of solubilized proteins.Keywords: dialysis, inclusion body, refolding, solubilization
Procedia PDF Downloads 294
1287 AI for Efficient Geothermal Exploration and Utilization
Authors: Velimir Monty Vesselinov, Trais Kliplhuis, Hope Jasperson
Abstract:
Artificial intelligence (AI) is a powerful tool in the geothermal energy sector, aiding in both exploration and utilization. Identifying promising geothermal sites can be challenging due to limited surface indicators and the need for expensive drilling to confirm subsurface resources. Geothermal reservoirs can be located deep underground and exhibit complex geological structures, making traditional exploration methods time-consuming and imprecise. AI algorithms can analyze vast datasets of geological, geophysical, and remote sensing data, including satellite imagery, seismic surveys, geochemistry, geology, etc. Machine learning algorithms can identify subtle patterns and relationships within this data, potentially revealing hidden geothermal potential in areas previously overlooked. To address these challenges, a SIML (Science-Informed Machine Learning) technology has been developed. SIML methods are different from traditional ML techniques. In both cases, the ML models are trained to predict the spatial distribution of an output (e.g., pressure, temperature, heat flux) based on a series of inputs (e.g., permeability, porosity, etc.). Traditional ML relies on deep and wide neural networks (NNs) based on simple algebraic mappings to represent complex processes. In contrast, the SIML neurons incorporate complex mappings (including constitutive relationships and physics/chemistry models). This results in ML models that have a physical meaning and satisfy physics laws and constraints. The prototype of the developed software, called GeoTGO, is accessible through the cloud. Our software prototype demonstrates how different data sources can be made available for processing, executes demonstrative SIML analyses, and presents the results in tabular and graphic form.
Keywords: science-informed machine learning, artificial intelligence, exploration, utilization, hidden geothermal
Procedia PDF Downloads 53
1286 Optical Flow Technique for Supersonic Jet Measurements
Authors: Haoxiang Desmond Lim, Jie Wu, Tze How Daniel New, Shengxian Shi
Abstract:
This paper outlines the development of a novel experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique involves using high-speed cameras to capture Schlieren images of the supersonic jet shear layers, before they are subjected to an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point-measurement or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical-flow-based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed with the supersonic jet operated in cold mode, at a stagnation pressure of 8.2 bar and an exit velocity of Mach 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As implementation of the optical flow technique for supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation test offer valuable insight into how the optical flow algorithm can be further improved in robustness and accuracy. Details of the methodology employed and challenges faced will be further elaborated in the final conference paper should the abstract be accepted. Despite these challenges, however, this novel supersonic flow measurement technique may potentially offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
Keywords: Schlieren, optical flow, supersonic jets, shock shear layer
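A minimal sketch of the classical Horn-Schunck iteration on which the adapted algorithm is based is given below; the derivative kernels, smoothness weight and synthetic frames are illustrative assumptions, not the authors' adapted implementation for Schlieren images.
```python
# Minimal Horn-Schunck optical flow sketch (illustrative; parameters and image
# handling are assumptions, not the authors' adapted implementation).
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Estimate a dense flow field (u, v) between two grayscale float images."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)

    # Spatio-temporal derivatives (simple averaged finite differences over both frames).
    kx = np.array([[-1.0, 1.0], [-1.0, 1.0]]) * 0.25
    ky = np.array([[-1.0, -1.0], [1.0, 1.0]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)

    # Kernel for the local flow average used in the iterative update.
    avg_kernel = np.array([[1/12, 1/6, 1/12],
                           [1/6,  0.0, 1/6],
                           [1/12, 1/6, 1/12]])

    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg = convolve(u, avg_kernel)
        v_avg = convolve(v, avg_kernel)
        # Update derived from the regularized brightness-constancy functional.
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v

if __name__ == "__main__":
    # Two synthetic frames shifted by one pixel, standing in for successive Schlieren images.
    rng = np.random.default_rng(0)
    frame1 = rng.random((128, 128))
    frame2 = np.roll(frame1, shift=1, axis=1)
    u, v = horn_schunck(frame1, frame2, alpha=1.0, n_iter=200)
    print("mean u, v:", u.mean(), v.mean())
```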
Procedia PDF Downloads 312
1285 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses
Authors: André Jesus, Yanjie Zhu, Irwanda Laory
Abstract:
Structural health monitoring is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. Namely, the widespread application of numerical models (model-based approaches) is accompanied by a widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods, the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and discrepancy function are approximated by Gaussian processes (surrogate model). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (modular Bayesian approach). The proposed methodology has been successfully applied in fields such as geoscience, biomedicine, and particle physics, but never in the SHM context. This approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, this formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters. Comparison of its performance with responses measured at different points of the structure and the associated degrees of identifiability is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered as a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability and systematic uncertainty were all recovered. For this example, the algorithm performance was stable and considerably quicker than Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
Keywords: Bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process
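The discrepancy-function formulation referred to above typically follows the Kennedy-O'Hagan structure sketched below; the notation is generic and added here only for orientation, as an assumption rather than the authors' exact statement.
```latex
% Generic Kennedy-O'Hagan calibration structure with a discrepancy term (notation assumed)
z(\mathbf{x}) = \eta(\mathbf{x}, \boldsymbol{\theta}) + \delta(\mathbf{x}) + \varepsilon,
\qquad \varepsilon \sim \mathcal{N}(0, \sigma^{2}),
\qquad \eta \sim \mathcal{GP}(m_{\eta}, k_{\eta}), \quad \delta \sim \mathcal{GP}(0, k_{\delta}),
```
where z are the measured responses, η the numerical-model surrogate with calibration parameters θ, δ the systematic discrepancy, and ε the observation error; in the modular approach the hyperparameters of the two Gaussian processes are estimated stage by stage rather than jointly.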
Procedia PDF Downloads 326
1284 Preparation and in vivo Assessment of Nystatin-Loaded Solid Lipid Nanoparticles for Topical Delivery against Cutaneous Candidiasis
Authors: Rawia M. Khalil, Ahmed A. Abd El Rahman, Mahfouz A. Kassem, Mohamed S. El Ridi, Mona M. Abou Samra, Ghada E. A. Awad, Soheir S. Mansy
Abstract:
Solid lipid nanoparticles (SLNs) have gained great attention for the topical treatment of skin-associated fungal infections as they facilitate the skin penetration of loaded drugs. Our work deals with the preparation of nystatin-loaded solid lipid nanoparticles (NystSLNs) using the hot homogenization and ultrasonication method. The prepared NystSLNs were characterized in terms of entrapment efficiency, particle size, zeta potential, transmission electron microscopy, differential scanning calorimetry, rheological behavior and in vitro drug release. A stability study for 6 months was performed. A microbiological study was conducted in male rats infected with Candida albicans, by counting the colonies and examining the histopathological changes induced on the skin of infected rats. The results showed that the SLN dispersions are spherical in shape with particle size ranging from 83.26±11.33 to 955.04±1.09 nm. The entrapment efficiencies range from 19.73±1.21 to 72.46±0.66%, with zeta potential ranging from -18.9 to -38.8 mV and shear-thinning rheological behavior. The stability studies done for 6 months showed that nystatin (Nyst) is a good candidate for topical SLN formulations. The lowest number of colony-forming units per ml (cfu/ml) was recorded for the selected NystSLN compared to the drug solution and the commercial Nystatin® cream present in the market. It can be concluded from this work that SLNs provide a good skin-targeting effect and may represent a promising carrier for topical delivery of Nyst, offering sustained release and maintaining a localized effect, resulting in an effective treatment of cutaneous fungal infection.
Keywords: candida infections, hot homogenization, nystatin, solid lipid nanoparticles, stability, topical delivery
Procedia PDF Downloads 393
1283 Dust Particle Removal from Air in a Self-Priming Submerged Venturi Scrubber
Authors: Manisha Bal, Remya Chinnamma Jose, B.C. Meikap
Abstract:
Dust particles suspended in air are a major source of air pollution. A self-priming submerged venturi scrubber, proven very effective in handling nuclear power plant accidents, is an efficient device to remove dust particles from the air and thus aids in pollution control. Venturi scrubbers are compact, have a simple mode of operation and no moving parts, are easy to install and maintain compared to other pollution control devices, and can handle high temperatures and corrosive and flammable gases and dust particles. In the present paper, fly ash, recognized as a major air pollutant emitted mostly from thermal power plants, is considered as the dust particle. Exposure to it through skin contact, inhalation and ingestion can lead to health risks and in severe cases can even lead to lung cancer. The main focus of this study is on the removal of fly ash particles from polluted air using a self-priming venturi scrubber in submerged conditions with water as the scrubbing liquid. The venturi scrubber, comprising three sections (converging section, throat and diverging section), is submerged inside a water tank. The liquid enters the throat due to the pressure difference composed of the hydrostatic pressure of the liquid and the static pressure of the gas. The high-velocity dust-laden gas atomizes the liquid into droplets at the throat, and this interaction leads to the absorption of the fly ash into water and thus its removal from the air. A detailed investigation of the scrubbing of fly ash has been carried out in this work. Experiments were conducted at different throat gas velocities, water levels and fly ash inlet concentrations to study the fly ash removal efficiency. From the experimental results, the highest fly ash removal efficiency of 99.78% is achieved at a throat gas velocity of 58 m/s and a water level of 0.77 m, with a fly ash inlet concentration of 0.3 x10⁻³ kg/Nm³, in the submerged condition. The effect of throat gas velocity, water level and fly ash inlet concentration on the removal efficiency has also been evaluated. Furthermore, the experimental results for removal efficiency are validated with the developed empirical model.
Keywords: dust particles, fly ash, pollution control, self-priming venturi scrubber
Procedia PDF Downloads 164
1282 Measurement and Simulation of Axial Neutron Flux Distribution in Dry Tube of KAMINI Reactor
Authors: Manish Chand, Subhrojit Bagchi, R. Kumar
Abstract:
A new dry tube (DT) has been installed in the tank of the KAMINI research reactor, Kalpakkam, India. This tube will be used for neutron activation analysis of small to large samples and testing of neutron detectors. The DT is 375 cm in height and 7.5 cm in diameter, located 35 cm away from the core centre. The experimental thermal flux at various axial positions inside the tube has been measured by irradiating the flux monitor (¹⁹⁷Au) at 20 kW reactor power. The measured activity of ¹⁹⁸Au and the thermal cross section of the ¹⁹⁷Au(n,γ)¹⁹⁸Au reaction were used for the experimental thermal flux measurement. The flux inside the tube varies from 10⁹ to 10¹⁰ n cm⁻²s⁻¹, and the maximum flux was (1.02 ± 0.023) x10¹⁰ n cm⁻²s⁻¹ at 36 cm from the bottom of the tube. Au and Zr foils, without and with a cadmium cover of 1 mm thickness, were irradiated at the maximum flux position in the DT to find out the irradiation-specific input parameters, namely the sub-cadmium to epithermal neutron flux ratio (f) and the epithermal neutron flux shape factor (α). The f value was 143 ± 5, indicating about a 99.3% thermal neutron component, and the α value was -0.2886 ± 0.0125, indicating a hard epithermal neutron spectrum due to insufficient moderation. The measured flux profile has been validated using a theoretical model of the KAMINI reactor through the Monte Carlo N-Particle (MCNP) code. In MCNP, the complex geometry of the entire reactor is modelled in 3D, ensuring minimum approximations for all the components. Continuous-energy cross-section data from ENDF/B-VII.1 as well as S(α, β) thermal neutron scattering functions are considered. The neutron flux has been estimated at the corresponding axial locations of the DT using a mesh tally. The thermal flux obtained from the experiment shows good agreement with the values theoretically predicted by MCNP; the agreement was within ±10%. It can be concluded that this MCNP model can be utilized for calculating other important parameters like neutron spectra, dose rate, etc., and multi-elemental analysis can be carried out by irradiating the sample at the maximum flux position, using the measured f and α parameters, by k₀-NAA standardization.
Keywords: neutron flux, neutron activation analysis, neutron flux shape factor, MCNP, Monte Carlo N-Particle Code
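A minimal sketch of how a thermal flux is commonly extracted from the measured ¹⁹⁸Au activity is given below (standard foil-activation formula; the symbols are assumptions added for illustration, not the exact expression used by the authors).
```latex
% Standard foil-activation relation (assumed form, for illustration only)
\varphi_{th} \simeq \frac{A}{N\,\sigma_{0}\,\left(1 - e^{-\lambda t_{irr}}\right) e^{-\lambda t_{d}}},
```
where A is the measured ¹⁹⁸Au activity, N the number of ¹⁹⁷Au atoms in the foil, σ₀ the thermal (n,γ) cross section, λ the ¹⁹⁸Au decay constant, t_irr the irradiation time, and t_d the decay time between the end of irradiation and counting.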
Procedia PDF Downloads 164
1281 Wood Dust and Nanoparticle Exposure among Workers during a New Building Construction
Authors: Atin Adhikari, Aniruddha Mitra, Abbas Rashidi, Imaobong Ekpo, Jefferson Doehling, Alexis Pawlak, Shane Lewis, Jacob Schwartz
Abstract:
Building constructions in the US involve numerous wooden structures. Wood is routinely used in walls, framing floors, framing stairs, and making landings in building constructions. Cross-laminated timbers are currently being used as construction materials for tall buildings. Numerous workers are involved in these timber-based constructions, and wood dust is one of the most common occupational exposures for them. Wood dust is a complex substance composed of cellulose, polyoses and other substances. According to US OSHA, exposure to wood dust is associated with a variety of adverse health effects among workers, including dermatitis, allergic respiratory effects, mucosal and nonallergic respiratory effects, and cancers. The amount and size of particles released as wood dust differ according to the operations performed on woods. For example, shattering of wood during sanding operations produces finer particles than does chipping in sawing and milling industries. To our knowledge, how shattering, cutting and sanding of woods and wood slabs during new building construction release fine particles and nanoparticles is largely unknown. The general belief is that the dust generated during timber cutting and sanding tasks consists mostly of large particles. Consequently, little attention has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study by a newly developed nanoparticle monitor and conventional particle counters. This study was conducted in a large new building construction site in southern Georgia primarily during the framing of wooden side walls, inner partition walls, and landings. Exposure levels of nanoparticles (n = 10) were measured by a newly developed nanoparticle counter (TSI NanoScan SMPS Model 3910) at four different distances (5, 10, 15, and 30 m) from the work location. Other airborne particles (number of particles/m3) including PM2.5 and PM10 were monitored using a 6-channel (0.3, 0.5, 1.0, 2.5, 5.0 and 10 µm) particle counter at 15 m, 30 m, and 75 m distances in both upwind and downwind directions. Mass concentrations of PM2.5 and PM10 (µg/m³) were measured using a DustTrak Aerosol Monitor. Temperature and relative humidity levels were recorded. Wind velocity was measured by a hot wire anemometer. Concentration ranges of nanoparticles of 13 particle sizes were: 11.5 nm: 221 – 816/cm³; 15.4 nm: 696 – 1735/cm³; 20.5 nm: 879 – 1957/cm³; 27.4 nm: 1164 – 2903/cm³; 36.5 nm: 1138 – 2640/cm³; 48.7 nm: 938 – 1650/cm³; 64.9 nm: 759 – 1284/cm³; 86.6 nm: 705 – 1019/cm³; 115.5 nm: 494 – 1031/cm³; 154 nm: 417 – 806/cm³; 205.4 nm: 240 – 471/cm³; 273.8 nm: 45 – 92/cm³; and 365.2 nm:
1280 Machine Learning Approaches to Water Usage Prediction in Kocaeli: A Comparative Study
Authors: Kasim Görenekli, Ali Gülbağ
Abstract:
This study presents a comprehensive analysis of water consumption patterns in Kocaeli province, Turkey, utilizing various machine learning approaches. We analyzed data from 5,000 water subscribers across residential, commercial, and official categories over an 80-month period from January 2016 to August 2022, resulting in a total of 400,000 records. The dataset encompasses water consumption records, weather information, weekends and holidays, previous months' consumption, and the influence of the COVID-19 pandemic. We implemented and compared several machine learning models, including Linear Regression, Random Forest, Support Vector Regression (SVR), XGBoost, Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). Particle Swarm Optimization (PSO) was applied to optimize hyperparameters for all models. Our results demonstrate varying performance across subscriber types and models. For official subscribers, Random Forest achieved the highest R² of 0.699 with PSO optimization. For commercial subscribers, Linear Regression performed best with an R² of 0.730 with PSO. Residential water usage proved more challenging to predict, with XGBoost achieving the highest R² of 0.572 with PSO. The study identified key factors influencing water consumption, with previous months' consumption, meter diameter, and weather conditions being among the most significant predictors. The impact of the COVID-19 pandemic on consumption patterns was also observed, particularly in residential usage. This research provides valuable insights for effective water resource management in Kocaeli and similar regions, considering Turkey's high water loss rate and below-average per capita water supply. The comparative analysis of different machine learning approaches offers a comprehensive framework for selecting appropriate models for water consumption prediction in urban settings.
Keywords: machine learning, water consumption prediction, particle swarm optimization, COVID-19, water resource management
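As a rough illustration of how PSO can be wrapped around one of the models above, the sketch below tunes two Random Forest hyperparameters against cross-validated R² on placeholder data; the feature set, bounds and swarm settings are assumptions, not the study's actual configuration.
```python
# Illustrative sketch of PSO-tuned Random Forest regression for monthly water use.
# Features, bounds and PSO settings are assumptions, not the study's actual setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Placeholder data: columns stand in for previous-month use, meter diameter,
# mean temperature and a holiday flag.
X = rng.random((500, 4))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 500)

def fitness(params):
    n_estimators, max_depth = int(params[0]), int(params[1])
    model = RandomForestRegressor(n_estimators=n_estimators, max_depth=max_depth, random_state=0)
    # Minimize the negative cross-validated R^2.
    return -cross_val_score(model, X, y, cv=3, scoring="r2").mean()

# Plain particle swarm over the two hyperparameters.
lb, ub = np.array([50, 2]), np.array([200, 16])
n_particles, n_iter, w, c1, c2 = 10, 10, 0.7, 1.5, 1.5
pos = lb + rng.random((n_particles, 2)) * (ub - lb)
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lb, ub)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best (n_estimators, max_depth):", gbest.astype(int), "CV R^2:", -pbest_val.min())
```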
Procedia PDF Downloads 16
1279 Basics of Gamma Ray Burst and Its Afterglow
Authors: Swapnil Kumar Singh
Abstract:
Gamma-ray bursts (GRBs), short and intense pulses of low-energy γ rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows, and they are associated with core-collapse supernovae. The detection of delayed emission in X-ray, optical, and radio wavelengths, or "afterglow," following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While it is fair to say that there is strong diversity amongst the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs do appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After an initial flash of gamma rays, a longer-lived "afterglow" is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio). It is a slowly fading emission at longer wavelengths created by collisions between the burst ejecta and interstellar gas. In X-ray wavelengths, the GRB afterglow fades quickly at first, then transitions to a less-steep drop-off (it shows further behaviour after that, which we ignore here). During these early phases, the X-ray afterglow has a spectrum that looks like a power law: flux F ∝ E^β, where E is energy and β is a number called the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta. In many ways, "reverse" shock can be misleading; this shock is still moving outward from the rest frame of the star at relativistic velocity but is ploughing backward through the ejecta in their frame and is slowing the expansion. This reverse shock can be dynamically important, as it can carry comparable energy to the forward shock. This description still holds for the early phases of the GRB afterglow even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow cooling regime, and the cooling break lies between the optical and the X-ray. Numerous observations support this broad picture, for example in the spectral energy distribution of the afterglow of a very bright GRB. The bluer light (optical and X-ray) appears to follow a typical synchrotron forward shock expectation (note that the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). We need more research in GRB and particle physics in order to unfold the mysteries of the afterglow.
Keywords: GRB, synchrotron, X-ray, isotropic energy
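For context, the slow-cooling synchrotron spectrum usually invoked for afterglows takes the broken power-law form below (textbook expressions, added here for orientation rather than quoted from the text above).
```latex
% Standard slow-cooling synchrotron afterglow spectrum (textbook form)
F_{\nu} \propto
\begin{cases}
\nu^{-(p-1)/2}, & \nu_{m} < \nu < \nu_{c},\\[2pt]
\nu^{-p/2}, & \nu > \nu_{c},
\end{cases}
```
where p is the power-law index of the shock-accelerated electrons and ν_m, ν_c are the injection and cooling frequencies; a cooling break ν_c lying between the optical and the X-ray is exactly the slow-cooling situation described above.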
Procedia PDF Downloads 88
1278 Time's Arrow and Entropy: Violations to the Second Law of Thermodynamics Disrupt Time Perception
Authors: Jason Clarke, Michaela Porubanova, Angela Mazzoli, Gulsah Kut
Abstract:
What accounts for our perception that time inexorably passes in one direction, from the past to the future, the so-called arrow of time, given that the laws of physics permit motion in one temporal direction to also happen in the reverse temporal direction? Modern physics says that the reason for time’s unidirectional physical arrow is the relationship between time and entropy, the degree of disorder in the universe, which is evolving from low entropy (high order; thermal disequilibrium) toward high entropy (high disorder; thermal equilibrium), the second law of thermodynamics. Accordingly, our perception of the direction of time, from past to future, is believed to emanate as a result of the natural evolution of entropy from low to high, with low entropy defining our notion of ‘before’ and high entropy defining our notion of ‘after’. Here we explored this proposed relationship between entropy and the perception of time’s arrow. We predicted that if the brain has some mechanism for detecting entropy, whose output feeds into processes involved in constructing our perception of the direction of time, presentation of violations to the expectation that low entropy defines ‘before’ and high entropy defines ‘after’ would alert this mechanism, leading to measurable behavioral effects, namely a disruption in duration perception. To test this hypothesis, participants were shown briefly-presented (1000 ms or 500 ms) computer-generated visual dynamic events: novel 3D shapes that were seen either to evolve from whole figures into parts (low to high entropy condition) or were seen in the reverse direction: parts that coalesced into whole figures (high to low entropy condition). On each trial, participants were instructed to reproduce the duration of their visual experience of the stimulus by pressing and releasing the space bar. To ensure that attention was being deployed to the stimuli, a secondary task was to report the direction of the visual event (forward or reverse motion). Participants completed 60 trials. As predicted, we found that duration reproduction was significantly longer for the high to low entropy condition compared to the low to high entropy condition (p=.03). This preliminary data suggests the presence of a neural mechanism that detects entropy, which is used by other processes to construct our perception of the direction of time or time’s arrow.Keywords: time perception, entropy, temporal illusions, duration perception
Procedia PDF Downloads 172
1277 Assessing the Mass Concentration of Microplastics and Nanoplastics in Wastewater Treatment Plants by Pyrolysis Gas Chromatography−Mass Spectrometry
Authors: Yanghui Xu, Qin Ou, Xintu Wang, Feng Hou, Peng Li, Jan Peter van der Hoek, Gang Liu
Abstract:
The level and removal of microplastics (MPs) in wastewater treatment plants (WWTPs) have been well evaluated in terms of particle number, while the mass concentrations of MPs and especially nanoplastics (NPs) remain unclear. In this study, microfiltration, ultrafiltration and hydrogen peroxide digestion were used to extract MPs and NPs in different size ranges (0.01−1, 1−50, and 50−1000 μm) across the whole treatment schemes in two WWTPs. By identifying specific pyrolysis products, pyrolysis gas chromatography−mass spectrometry was used to quantify the mass concentrations of six selected polymer types (i.e., polymethyl methacrylate (PMMA), polypropylene (PP), polystyrene (PS), polyethylene (PE), polyethylene terephthalate (PET), and polyamide (PA)). The mass concentrations of total MPs and NPs decreased from 26.23 and 11.28 μg/L in the influent to 1.75 and 0.71 μg/L in the effluent, with removal rates of 93.3 and 93.7% in plants A and B, respectively. Among them, PP, PET and PE were the dominant polymer types in wastewater, while PMMA, PS and PA only accounted for a small part. The mass concentrations of NPs (0.01−1 μm) were much lower than those of MPs (>1 μm), accounting for 12.0−17.9 and 5.6−19.5% of the total MPs and NPs, respectively. Notably, the removal efficiency differed with the polymer type and size range. The low-density MPs (e.g., PP and PE) had lower removal efficiency than the high-density PET in both plants. Since particles of smaller size could pass the tertiary sand filter or membrane filter more easily, the removal efficiency of NPs was lower than that of MPs with larger particle size. Based on the annual wastewater effluent discharge, it is estimated that about 0.321 and 0.052 tons of MPs and NPs, respectively, were released into the river each year. Overall, this study investigated the mass concentration of MPs and NPs over a wide size range of 0.01−1000 μm in wastewater, which provides valuable information regarding the pollution level and distribution characteristics of MPs, and especially NPs, in WWTPs. However, there are limitations and uncertainties in the current study, especially regarding the sample collection and MP/NP detection. The plastic items used (e.g., sampling buckets, ultrafiltration membranes, centrifugal tubes, and pipette tips) may introduce potential contamination. Additionally, the proposed method caused loss of MPs, especially NPs, which can lead to underestimation of MPs/NPs. Further studies are recommended to address these challenges regarding MPs/NPs in wastewater.
Keywords: microplastics, nanoplastics, mass concentration, WWTPs, Py-GC/MS
Procedia PDF Downloads 281
1276 Hierarchical Operation Strategies for Grid Connected Building Microgrid with Energy Storage and Photovoltaic Source
Authors: Seon-Ho Yoon, Jin-Young Choi, Dong-Jun Won
Abstract:
This paper presents hierarchical operation strategies that minimize the operation error between the day-ahead operation plan and real-time operation. Operating power systems between centralized and decentralized approaches can be represented as a hierarchical control scheme, comprising primary control, secondary control and tertiary control. Primary control is known as local control, featuring a fast response. Secondary control is referred to as the microgrid Energy Management System (EMS). Tertiary control is responsible for coordinating the operations of multiple microgrids. In this paper, we formulated three-stage microgrid operation strategies which are similar to the hierarchical control scheme. The first stage is to set the day-ahead scheduled output power of the Battery Energy Storage System (BESS), which is the only controllable source in the microgrid; it is optimized to minimize the cost of power exchanged with the main grid using the Particle Swarm Optimization (PSO) method. The second stage is to control the active and reactive power of the BESS so that it is operated according to the day-ahead scheduled plan in case a State of Charge (SOC) error occurs between real time and the scheduled plan. The third stage is to reschedule the system when the predicted error exceeds a limit value. The first stage can be compared with the secondary control in that it adjusts the active power. The second stage is comparable to the primary control in that it controls the error in a local manner. The third stage is compared with the secondary control in that it manages power balancing. The proposed strategies will be applied to one of the buildings in the Electronics and Telecommunications Research Institute (ETRI). The building microgrid is composed of photovoltaic (PV) generation, a BESS and load, and it will be interconnected with the main grid. The main purpose is to minimize the operation cost and to operate according to the scheduled plan. Simulation results support the validation of the proposed strategies.
Keywords: Battery Energy Storage System (BESS), Energy Management System (EMS), Microgrid (MG), Particle Swarm Optimization (PSO)
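As an illustration only, the first-stage problem can be written in the generic form below, where the PSO particles encode the 24 hourly BESS set-points; the notation and constraints are assumptions added here, not the authors' exact formulation.
```latex
% Generic day-ahead BESS scheduling problem (assumed formulation, for illustration)
\min_{P^{bess}_{t}} \sum_{t=1}^{24} c_{t}\,P^{grid}_{t}\,\Delta t,
\qquad P^{grid}_{t} = P^{load}_{t} - P^{pv}_{t} + P^{bess}_{t},
```
```latex
\text{s.t.}\quad SOC_{t+1} = SOC_{t} + \frac{\eta\,P^{bess}_{t}\,\Delta t}{E_{cap}},\qquad
SOC_{\min} \le SOC_{t} \le SOC_{\max},\qquad |P^{bess}_{t}| \le P_{\max},
```
where c_t is the hourly price of exchanged power, P^bess_t the BESS charging (positive) or discharging (negative) power, and E_cap the battery capacity.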
Procedia PDF Downloads 248
1275 Dual Duality for Unifying Spacetime and Internal Symmetry
Authors: David C. Ni
Abstract:
The current efforts for Grand Unification Theory (GUT) can be classified into General Relativity, Quantum Mechanics, String Theory and the related formalisms. In the geometric approaches for extending General Relativity, the efforts aim at establishing global and local invariance embedded into metric formalisms, whereby additional dimensions are constructed for unifying canonical formulations, such as Hamiltonian and Lagrangian formulations. The approaches for extending Quantum Mechanics adopt the symmetry principle to formulate algebra-group theories, which evolved from the Maxwell formulation to the Yang-Mills non-abelian gauge formulation, and thereafter manifested the Standard Model. This thread of efforts has been constructing supersymmetry for mapping fermion and boson as well as gluon and graviton. The efforts of String Theory have currently been evolving toward the so-called gauge/gravity correspondence, particularly the equivalence between type IIB string theory compactified on AdS5 × S5 and N = 4 supersymmetric Yang-Mills theory. Other efforts adopt cross-breeding approaches of the above three formalisms as well as competing formalisms; nevertheless, the related symmetries, dualities, and correspondences are outlined as principles and techniques even though these terminologies are defined diversely and often generally coined as duality. In this paper, we first classify these dualities from the perspective of physics. We then examine the hierarchical structure of the classes from a mathematical perspective, referring to the Coleman-Mandula theorem, Hidden Local Symmetry, Groupoid-Categorization and others. Based on the Fundamental Theorems of Algebra, we argue that, rather than imposing effective constraints on different algebras and the related extensions, which are mainly constructed by self-breeding or self-mapping methodologies for sustaining invariance, a new addition is needed; we propose momentum-angular momentum duality at the level of electromagnetic duality for rationalizing the duality algebras, and we then characterize this duality numerically in an attempt to address some unsolved problems in physics and astrophysics.
Keywords: general relativity, quantum mechanics, string theory, duality, symmetry, correspondence, algebra, momentum-angular-momentum
Procedia PDF Downloads 398
1274 Computer Software for Calculating Electron Mobility of Semiconductor Compounds; Case Study for n-GaN
Authors: Emad A. Ahmed
Abstract:
Computer software to calculate electron mobility with respect to different scattering mechanisms has been developed. This software fully adopts a Graphical User Interface (GUI), and its interface has been designed in Microsoft Visual Basic 6.0. As a case study, the electron mobility of n-GaN was calculated using this software. The behaviour of the mobility of n-GaN due to elastic scattering processes and its relation to temperature and doping concentration are discussed. The results agree with other available theoretical and experimental data.
Keywords: electron mobility, relaxation time, GaN, scattering, computer software, computational physics
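The usual way mobilities limited by different scattering mechanisms are combined is Matthiessen's rule, recalled below as background (an assumption about the general approach, not a statement quoted from the paper).
```latex
% Matthiessen's rule for combining scattering-limited mobilities (standard form)
\frac{1}{\mu_{tot}} = \sum_{i} \frac{1}{\mu_{i}},
\qquad \mu_{i} = \frac{e\,\langle \tau_{i} \rangle}{m^{*}},
```
where τ_i is the relaxation time of the i-th elastic scattering process and m* the electron effective mass in n-GaN.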
Procedia PDF Downloads 6711273 The Monitor for Neutron Dose in Hadrontherapy Project: Secondary Neutron Measurement in Particle Therapy
Authors: V. Giacometti, R. Mirabelli, V. Patera, D. Pinci, A. Sarti, A. Sciubba, G. Traini, M. Marafini
Abstract:
Particle therapy (PT) is a modern technique of non-invasive radiotherapy mainly devoted to the treatment of tumours that cannot be treated with surgery or conventional radiotherapy because they are localised close to organs at risk (OaR). Nowadays, PT is available in about 55 centres in the world, and only 20% of them are able to treat with carbon ion beams. However, the effectiveness of ion-beam treatments is so impressive that many new centres are under construction. The interest in this powerful technology lies in the main characteristic of PT: the high irradiation precision and conformity of the dose released to the tumour, with simultaneous preservation of the adjacent healthy tissue. However, the beam interactions with the patient produce a large component of secondary particles whose additional dose has to be taken into account during the definition of the treatment planning. Although the largest fraction of the dose is released to the tumour volume, a non-negligible amount is deposited in other body regions, mainly due to the scattering and nuclear interactions of neutrons within the patient's body. One of the main concerns in PT treatments is the possible occurrence of secondary malignant neoplasms (SMN). While SMNs can develop up to decades after treatment, their incidence directly impacts the quality of life of cancer survivors, in particular pediatric patients. Dedicated Treatment Planning Systems (TPS) are used to predict the normal tissue toxicity, including the risk of late complications induced by the additional dose released by secondary neutrons. However, no precise measurement of the secondary neutron flux is available, nor of its energy and angular distributions: an accurate characterization is needed in order to improve TPS and reduce safety margins. The MONDO (MOnitor for Neutron Dose in hadrOntherapy) project is devoted to the construction of a secondary neutron tracker tailored to the characterization of this secondary neutron component. The detector, based on the tracking of the recoil protons produced in double elastic scattering interactions, is a matrix of thin scintillating fibres arranged in alternating x-y oriented layers. The final size of the device is 10 × 10 × 20 cm³ (square 250 µm scintillating fibres, double cladding). The readout of the fibres is carried out with a dedicated SPAD Array Sensor (SBAM) realised in CMOS technology by FBK (Fondazione Bruno Kessler). Both the detector and the SBAM sensor are under development, and full construction is expected by the end of the year. MONDO will carry out data-taking campaigns at the TIFPA Proton Therapy Center in Trento, at CNAO (Pavia) and at HIT (Heidelberg) with carbon ions, in order to characterize the neutron component, predict the additional dose delivered to patients with much greater precision, and drastically reduce the current safety margins. Preliminary measurements with charged particle beams and Monte Carlo FLUKA simulations will be presented.Keywords: secondary neutrons, particle therapy, tracking detector, elastic scattering
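A minimal kinematic sketch of the recoil-proton technique such a tracker relies on (not the MONDO reconstruction software): in non-relativistic elastic n-p scattering the proton and neutron masses are nearly equal, so the recoil proton carries T_p = T_n cos²(θ_p), which lets a measured proton energy and angle be inverted into an estimate of the incident neutron energy. The numbers below are illustrative only.

```python
import numpy as np

def recoil_proton_energy(T_n, theta_p):
    """Non-relativistic elastic n-p scattering (equal masses): the recoil proton
    carries T_p = T_n * cos^2(theta_p), with theta_p measured from the incident
    neutron direction."""
    return T_n * np.cos(theta_p) ** 2

def neutron_energy_from_recoil(T_p, theta_p):
    """Invert the relation to estimate the incident neutron energy from a
    measured recoil-proton energy and angle."""
    return T_p / np.cos(theta_p) ** 2

# Hypothetical example: a 100 MeV neutron producing a recoil proton at 30 degrees
T_n = 100.0                 # MeV (placeholder)
theta = np.radians(30.0)
T_p = recoil_proton_energy(T_n, theta)
print(f"Recoil proton energy: {T_p:.1f} MeV")
print(f"Reconstructed neutron energy: {neutron_energy_from_recoil(T_p, theta):.1f} MeV")
```

With two recoil tracks from a double elastic scattering, the same relations constrain both the energy and the direction of the incoming neutron.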
Procedia PDF Downloads 2231272 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood
Authors: Randa Alharbi, Vladislav Vyshemirsky
Abstract:
Systems biology is an important field of science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their functions, and their interactions. A well-designed model requires selecting a suitable mechanism that can capture the main features of the system, defining its essential components, and representing an appropriate law for the interactions between those components. Complex biological systems exhibit stochastic behaviour; thus, probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model: it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging, as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian Computation (ABC) is a common approach for tackling inference which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking their computational time into account. We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.Keywords: Approximate Bayesian Computation (ABC), Continuous-Time Markov Chains, Sequential Monte Carlo, Particle Markov chain Monte Carlo (PMCMC)
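A minimal sketch of the likelihood-free (ABC rejection) idea described above, with a toy birth-death CTMC simulated exactly by the Gillespie algorithm standing in for the Repressilator model; the priors, summary statistic and tolerance are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(birth_rate, death_rate, x0, t_max):
    """Exact stochastic simulation (Gillespie SSA) of a toy birth-death CTMC."""
    t, x = 0.0, x0
    while t < t_max:
        rates = np.array([birth_rate, death_rate * x])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)      # time to next reaction
        if rng.random() < rates[0] / total:    # pick birth or death
            x += 1
        else:
            x -= 1
    return x

def abc_rejection(observed, n_samples, tolerance, x0=10, t_max=5.0):
    """ABC rejection: keep prior draws whose simulated summary is close to the data."""
    accepted = []
    while len(accepted) < n_samples:
        birth, death = rng.uniform(0.0, 5.0, size=2)   # illustrative uniform priors
        simulated = gillespie_birth_death(birth, death, x0, t_max)
        if abs(simulated - observed) <= tolerance:     # distance on a simple summary
            accepted.append((birth, death))
    return np.array(accepted)

posterior = abc_rejection(observed=25, n_samples=100, tolerance=5)
print("Approximate posterior mean (birth, death):", posterior.mean(axis=0))
```

PMCMC replaces the rejection step with a particle filter that estimates the likelihood inside a Metropolis-Hastings chain, which is typically more sample-efficient but costlier per iteration.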
Procedia PDF Downloads 2021271 Possible Sulfur Induced Superconductivity in Nano-Diamond
Authors: J. Mona, R. R. da Silva, C.-L.Cheng, Y. Kopelevich
Abstract:
We report on a possible occurrence of superconductivity in diamond powders of 5 nm particle size treated with sulfur (S) at 500 °C for 10 hours in ~10⁻² Torr vacuum. Superconducting-like magnetization hysteresis loops M(H) have been measured up to ~50 K by means of a SQUID magnetometer (Quantum Design). Both X-ray (Θ-2Θ geometry) and Raman spectroscopy analyses revealed no impurity or additional phases. Nevertheless, the measured Raman spectra are characteristic of diamond with embedded disordered carbon and/or graphitic fragments, suggesting a link to previous reports of local or surface superconductivity in graphite- and amorphous carbon–sulfur composites.Keywords: nanodiamond, sulfur, superconductivity, Raman spectroscopy
Procedia PDF Downloads 4931270 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine-learning-assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and a learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in SchNet and MEGNet for example. The graph incorporates information regarding the numbers, types and properties of atoms; the types of bonds; and the bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on four different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO.Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
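A minimal sketch of the Δ-ML strategy described above (a plain regressor on synthetic data rather than the authors' graph convolutional network with semi-supervised learning): a correction model is fitted to the difference between high- and low-fidelity outputs on the few high-fidelity points, and high-fidelity predictions are formed as the low-fidelity value plus the learned correction. The descriptors and both "fidelities" are synthetic placeholders, not quantum-chemistry data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: descriptors X, a cheap low-fidelity property and an
# expensive high-fidelity property (placeholders, not real materials data).
n, d = 500, 16
X = rng.normal(size=(n, d))
y_low = X @ rng.normal(size=d)                                  # "cheap" estimate
y_high = y_low + 0.3 * np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2     # "expensive" truth

# Only a small subset has high-fidelity labels, as in the multi-fidelity setting.
n_high = 50
idx = rng.choice(n, size=n_high, replace=False)

# Delta-ML: learn the correction (high - low) from the few high-fidelity points.
delta_model = RandomForestRegressor(n_estimators=200, random_state=0)
delta_model.fit(X[idx], (y_high - y_low)[idx])

# Predict high fidelity for all candidates as low fidelity + learned correction.
y_pred = y_low + delta_model.predict(X)
rmse = np.sqrt(np.mean((y_pred - y_high) ** 2))
print(f"RMSE of delta-ML prediction: {rmse:.3f}")
```

The same pattern carries over when the regressor is replaced by a GCN acting on molecular graphs and the low-fidelity labels come from, e.g., DFT or a semi-empirical method.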
Procedia PDF Downloads 41