Search results for: thermal imaging
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4757

347 Marzuq Basin Palaeozoic Petroleum System

Authors: M. Dieb, T. Hodairi

Abstract:

In the Southwest Libya area, the Palaeozoic deposits form an important petroleum system, with Silurian shale considered a hydrocarbon source rock and the Cambro-Ordovician recognized as a good reservoir. The Palaeozoic petroleum system has the greatest potential for conventional petroleum resources and is thought to represent a significant prospect for unconventional petroleum resources in Southwest Libya. Until now, the lateral and vertical heterogeneity of the source rock has not been well evaluated, and oil-source correlation is still a matter of debate. One source rock, considered the main source potential in the Marzuq Basin, was investigated for its uranium content using gamma-ray logs, Rock-Eval pyrolysis, and organic petrography for its bulk kinetic characteristics, to determine the petroleum potential qualitatively and quantitatively. Thirty source rock samples and fifteen oil samples from the Tannezzuft source rock were analyzed by Rock-Eval pyrolysis, microscopic investigation, GC, and GC-MS to detect acyclic isoprenoids and aliphatic, aromatic, and NSO biomarkers. Geochemical tools were applied to screen source- and age-significant biomarkers to highlight genetic relationships. A marked heterogeneity exists among source rock zones from different depth levels with varying uranium contents, according to gamma-ray logs, Rock-Eval pyrolysis results, and kinetic features. The uranium-rich Tannezzuft Formation (Hot Shale) produces oil and oil-to-gas hydrocarbons according to its richness, kerogen type, and thermal maturity. Biomarker results such as C₂₇, C₂₈, and C₂₉ sterane concentrations and C₂₄ tetracyclic terpane/C₂₉ tricyclic terpane ratios, together with sterane and hopane ratios, are considered the most promising biomarker information for differentiating within the Silurian shale Tannezzuft Formation and for correlating it with its expelled oils. 
The Tannezzuft Hot Shale is considered the main source rock for oil and gas accumulations in the Cambro-Ordovician reservoirs within the Marzuq Basin. Migration of the generated and expelled oil and gas from the Tannezzuft source rock to the reservoirs of the Cambro-Ordovician petroleum system is interpreted to have occurred along vertical and lateral pathways along faults in the Palaeozoic strata. The Upper Tannezzuft Formation (cold shale) is considered the primary seal in the Marzuq Basin.
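The sterane comparison described above reduces to normalizing the C₂₇, C₂₈, and C₂₉ concentrations to relative percentages before comparing source-rock extracts and oils (e.g., on a ternary plot). A minimal sketch; the concentrations below are hypothetical, not values from this study:

```python
def sterane_percentages(c27, c28, c29):
    """Normalize C27/C28/C29 sterane concentrations to relative percentages."""
    total = c27 + c28 + c29
    return tuple(round(100.0 * c / total, 1) for c in (c27, c28, c29))

# Hypothetical concentrations (arbitrary units) for a source-rock extract
# and an oil; closely matching compositions suggest a genetic relationship.
source = sterane_percentages(30.0, 20.0, 50.0)
oil = sterane_percentages(28.0, 22.0, 50.0)
print(source)  # (30.0, 20.0, 50.0)
print(oil)     # (28.0, 22.0, 50.0)
```

The percentages, rather than the raw concentrations, are what make rocks and oils of different overall richness directly comparable.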

Keywords: heterogeneity, hot shale, kerogen, Silurian, uranium

Procedia PDF Downloads 63
346 Oxidovanadium(IV) and Dioxidovanadium(V) Complexes: Efficient Catalyst for Peroxidase Mimetic Activity and Oxidation

Authors: Mannar R. Maurya, Bithika Sarkar, Fernando Avecilla

Abstract:

Peroxidases are successfully used in different industrial processes in medicine, the chemical industry, food processing and agriculture. However, they bear some intrinsic drawbacks associated with denaturation by proteases, their special storage requirements and their cost. Nowadays, artificial enzyme mimics are becoming a research interest because of their significant advantages over conventional enzymes: ease of preparation, low price and good stability in activity, overcoming the drawbacks of natural enzymes, e.g., serine proteases. At present, a large number of artificial enzymes have been synthesized by assimilating a catalytic center into a variety of Schiff base complexes, anchored ligands, supramolecular complexes, hematin, porphyrins and nanoparticles to mimic natural enzymes. Although a number of vanadium complexes have been reported in recent years, driven by a continuing increase in interest in bioinorganic chemistry, to the best of our knowledge the investigation of vanadium complexes as artificial enzyme mimics remains little explored. Recently, our group reported synthetic vanadium Schiff base complexes capable of mimicking peroxidases. Herein, we have synthesized oxidovanadium(IV) and dioxidovanadium(V) complexes of pyrazolone derivatives (extensively studied on account of their broad range of pharmacological applications). All these complexes were characterized by various spectroscopic techniques: FT-IR, UV-Visible, NMR (1H, 13C and 51V), elemental analysis, thermal studies and single-crystal analysis. The peroxidase mimetic activity has been studied towards the oxidation of pyrogallol to purpurogallin with hydrogen peroxide at pH 7, followed by measurement of the kinetic parameters. The Michaelis-Menten behavior shows an excellent catalytic activity compared with its natural counterparts, e.g. V-HPO and HRP. 
The obtained kinetic parameters (Vmax, Kcat) were also compared with those of peroxidase and haloperoxidase enzymes, making these complexes promising peroxidase mimics. The catalytic activity has also been studied towards the oxidation of 1-phenylethanol in the presence of H2O2 as an oxidant. Various parameters, such as the amounts of catalyst and oxidant, reaction time, reaction temperature and solvent, were taken into consideration to maximize the oxidative products of 1-phenylethanol.
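Kinetic parameters like the Vmax quoted above are typically extracted by fitting initial-rate data to the Michaelis-Menten equation v = Vmax·[S]/(Km + [S]). A minimal sketch using the classical Lineweaver-Burk linearization on synthetic, noise-free data (the values are illustrative, not from this work):

```python
def fit_michaelis_menten(S, v):
    """Estimate Vmax and Km via the Lineweaver-Burk double-reciprocal fit:
    1/v = (Km/Vmax)*(1/S) + 1/Vmax, solved by ordinary least squares."""
    xs = [1.0 / s for s in S]
    ys = [1.0 / vi for vi in v]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    vmax = 1.0 / intercept
    km = slope * vmax
    return vmax, km

# Synthetic noise-free rates generated from Vmax = 2.0, Km = 0.5 (arbitrary units)
S = [0.1, 0.25, 0.5, 1.0, 2.0, 5.0]
v = [2.0 * s / (0.5 + s) for s in S]
vmax, km = fit_michaelis_menten(S, v)
print(round(vmax, 3), round(km, 3))  # 2.0 0.5
```

With real (noisy) rate data, a direct nonlinear fit of the Michaelis-Menten form is usually preferred, since the double-reciprocal transform amplifies error at low substrate concentrations.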

Keywords: oxidovanadium(IV)/dioxidovanadium(V) complexes, NMR spectroscopy, crystal structure, peroxidase mimic activity towards oxidation of pyrogallol, oxidation of 1-phenylethanol

Procedia PDF Downloads 341
345 Deep Learning Based Polarimetric SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings rotational invariance to the geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area, or swath, of fully polarimetric images compared to that of dual or hybrid polarimetric images. Solutions that augment dual polarimetric data to fully polarimetric data would therefore allow full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. 
Although the improvements achieved by these reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
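The multi-term cost function described above can be illustrated schematically. The sketch below is not the authors' actual loss (the architecture and weighting are assumptions); it simply combines a pixel-wise reconstruction term with a term on the total backscattered power (span) of the polarimetric channels:

```python
import numpy as np

def polarimetric_loss(pred, target, alpha=1.0, beta=0.1):
    """Composite training loss for pseudo fully polarimetric reconstruction:
    channel-wise MSE plus an MSE on the span (sum of channel intensities).
    pred/target: arrays of shape (channels, H, W) holding channel intensities."""
    mse = np.mean((pred - target) ** 2)
    span_pred = pred.sum(axis=0)          # total backscattered power per pixel
    span_target = target.sum(axis=0)
    span_mse = np.mean((span_pred - span_target) ** 2)
    return alpha * mse + beta * span_mse

rng = np.random.default_rng(0)
target = rng.random((3, 8, 8))                       # toy 3-channel image
noisy = target + 0.01 * rng.standard_normal((3, 8, 8))
print(polarimetric_loss(target, target))  # 0.0
print(polarimetric_loss(noisy, target) > 0)  # True
```

In an actual training setup, each term would be weighted to steer the network towards preserving a particular scattering property, which is the role the cost-function terms play in the framework described above.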

Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry

Procedia PDF Downloads 90
344 Energy Efficiency Guidelines for School Buildings in Florence in a Postgraduate Master Course

Authors: Lucia Ceccherini Nelli, Alessandra Donato

Abstract:

The ABITA Master course of the University of Florence, offered by the Department of Architecture, covers nearly all the energy-relevant issues that can arise in public and private companies and sectors. The main purpose of the Master course, active since 2003, is to analyse the energy consumption of building technologies, components, and structures at the conceptual design stage, so it can be very helpful for designers when making decisions related to the selection of the most suitable design alternatives and the choice of the materials that will be used in an energy-efficient building. The training course provides a solid basis for increasing the knowledge and skills of energy managers and is developed with an emphasis on practical experience through case studies, measurements, and verification of energy-efficient solutions in buildings, in industry and in cities. The main objectives are: i) to raise the professional standards of those engaged in energy auditing; ii) to improve the practice of energy auditors by encouraging energy auditing professionals in a continuing education program of professional development; iii) to train participants in the use of instrumentation for typical measurements; iv) to propose an integrated methodology that links energy analysis tools with green building certification systems. This methodology will be applied at the early design stage of a project’s life. The final output of the practical training is an elevated professionalism in the study of environmental design and energy management in buildings. The result is a set of guidelines for the energy refurbishment of public schools in Florence. The school heritage of the Municipality of Florence requires interventions for the control of energy performance, as older buildings were often constructed without taking into account the necessary envelope performance. 
For this reason, every year, the Master's course aims to study groups of public schools to enable the Municipality to carry out energy redevelopment interventions on the existing building heritage. The future challenges of the education and training program are related to follow-up activities, the development of interactive tools and the curriculum's customization to meet the constantly growing needs of energy experts from industry.

Keywords: expert in energy, energy auditing, public buildings, thermal analysis

Procedia PDF Downloads 189
343 Optimizing the Field Emission Performance of SiNWs-Based Heterostructures: Controllable Synthesis, Core-Shell Structure, 3D ZnO/Si Nanotrees and Graphene/SiNWs

Authors: Shasha Lv, Zhengcao Li

Abstract:

Due to their CMOS compatibility, silicon-based field emission (FE) devices have attracted much attention as potential electron sources. The geometrical arrangement and dimensional features of aligned silicon nanowires (SiNWs) have a determining influence on the FE properties. We discuss a multistep template replication process of Ag-assisted chemical etching combined with polystyrene (PS) spheres to fabricate highly periodic and well-aligned silicon nanowires, whose diameter, aspect ratio and density were further controlled via dry oxidation and post chemical treatment. The FE properties related to proximity and aspect ratio were systematically studied. A remarkable improvement of the FE properties was observed as the average nanowire tip interspace increased from 80 to 820 nm. Beyond adjusting SiNW dimensions and morphology, the addition of a secondary material whose properties complement the SiNWs can yield combined characteristics. Three different nanoheterostructures were fabricated to control the FE performance: NiSi/Si core-shell structures, ZnO/Si nanotrees, and graphene/SiNWs. We successfully fabricated high-quality NiSi/Si heterostructured nanowires with excellent conformality. First, nickel nanoparticles were deposited onto the SiNWs; then a rapid thermal annealing process was utilized to form the NiSi shell. In addition, we demonstrate a new and simple method for creating 3D nanotree-like ZnO/Si nanocomposites with a spatially branched hierarchical structure. Compared with the as-prepared SiNRs and ZnO NWs, the high-density ZnO NWs on SiNRs exhibited predominant FE characteristics, and the FE enhancement factors were attributed to the band bending effect and the geometrical morphology. The FE efficiency of the flat sheet structure of graphene is low. We discuss an effective approach towards full control over the diameter of uniform SiNWs to adjust the protrusions of a large-scale graphene sheet deposited on SiNWs. 
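The dependence of emission current on tip geometry discussed above is conventionally modeled with the Fowler-Nordheim relation, in which a field enhancement factor β captures the effect of nanowire aspect ratio and tip interspace. A rough sketch with commonly quoted first-order constants; the numerical values are illustrative assumptions, not results from this work:

```python
import math

def fowler_nordheim_j(E, phi, beta):
    """Approximate Fowler-Nordheim current density (A/cm^2).
    E: applied macroscopic field (V/cm), phi: work function (eV),
    beta: dimensionless field enhancement factor at the emitter tip.
    a, b are the commonly quoted first-order constants (approximate)."""
    a = 1.54e-6   # A eV V^-2
    b = 6.83e7    # V cm^-1 eV^-3/2
    local = beta * E  # local field at the tip
    return a * local ** 2 / phi * math.exp(-b * phi ** 1.5 / local)

# Silicon-like work function; a larger beta (sharper, better-separated tips,
# less field screening) gives a much higher current density at the same field.
E = 1.0e5  # 10 V/um expressed in V/cm
print(fowler_nordheim_j(E, 4.05, 800) < fowler_nordheim_j(E, 4.05, 1500))  # True
```

This is why increasing the tip interspace from 80 to 820 nm, which reduces mutual field screening and raises the effective β, improves the measured FE properties.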
The FE performance regarding the uniformity and dimensional control of graphene protrusions supported on SiNWs was systematically clarified. Therefore, the hybrid SiNWs/graphene structures with protrusions provide a promising class of field emission cathodes.

Keywords: field emission, silicon nanowires, heterostructures, controllable synthesis

Procedia PDF Downloads 273
342 Detailed Sensitive Detection of Impurities in Waste Engine Oils Using Laser Induced Breakdown Spectroscopy, Rotating Disk Electrode Optical Emission Spectroscopy and Surface Plasmon Resonance

Authors: Cherry Dhiman, Ayushi Paliwal, Mohd. Shahid Khan, M. N. Reddy, Vinay Gupta, Monika Tomar

Abstract:

Laser-based high-resolution spectroscopic techniques such as Laser Induced Breakdown Spectroscopy (LIBS), Rotating Disk Electrode Optical Emission Spectroscopy (RDE-OES) and Surface Plasmon Resonance (SPR) have been used for the study of the composition and degradation of used engine oils. Engine oils are mainly composed of aliphatic and aromatic compounds, and their soot contains hazardous components in the form of fine, coarse and ultrafine particles consisting of wear metal elements. Such coarse particulate matter (PM) and toxic elements are extremely dangerous for human health and can cause respiratory and genetic disorders in humans. The combustible soot from thermal power plants, industry, aircraft, ships and vehicles can lead to environmental and climate destabilization. It contributes to global pollution of land, water and air and to global warming. The detection of such toxicants through elemental analysis is a very serious issue for the management of waste containing various organic and inorganic hydrocarbons and radioactive elements. In view of these points, the current study on used engine oils was performed. The fundamental characterization of the engine oils was conducted by measuring water content and kinematic viscosity, providing a crude analysis of the degradation of the used engine oil samples. The microscopic quantitative and qualitative analysis was provided by the RDE-OES technique, which confirmed the presence of elemental impurities through Pb, Al, Cu, Si, Fe, Cr, Na and Ba lines in the used waste engine oil samples at a few ppm. The presence of these elemental impurities was confirmed by LIBS spectral analysis at various atomic transition lines. The recorded transition lines of Pb confirm that the maximum degradation was found in used engine oil samples no. 3 and 4. 
Apart from the basic tests, the calculations for dielectric constants and refractive index of the engine oils were performed via SPR analysis.

Keywords: surface plasmon resonance, laser-induced breakdown spectroscopy, ICCD spectrometer, engine oil

Procedia PDF Downloads 143
341 Hyperspectral Imagery for Tree Speciation and Carbon Mass Estimates

Authors: Jennifer Buz, Alvin Spivey

Abstract:

The most common greenhouse gas emitted through human activities, carbon dioxide (CO2), is naturally consumed by plants during photosynthesis. This process is actively being monetized by companies wishing to offset their carbon dioxide emissions. For example, companies are now able to purchase protections for vegetated land due to be clear-cut or purchase barren land for reforestation. Therefore, by actively preventing the destruction/decay of plant matter or by introducing more plant matter (reforestation), a company can theoretically offset some of its emissions. One of the biggest issues in the carbon credit market is validating and verifying carbon offsets. There is a need for a system that can accurately and frequently ensure that the areas sold for carbon credits have the vegetation mass (and therefore the carbon offset capability) they claim. Traditional techniques for measuring vegetation mass and determining health are costly and require many person-hours. Orbital Sidekick offers an alternative approach that accurately quantifies carbon mass and assesses vegetation health through satellite hyperspectral imagery, a technique which enables us to remotely identify material composition (including plant species) and condition (e.g., health and growth stage). How much carbon a plant is capable of storing is ultimately tied to many factors, including material density (primarily species-dependent), plant size, and health (trees that are actively decaying are not effectively storing carbon). All of these factors can be observed through satellite hyperspectral imagery. This abstract focuses on speciation. To build a species classification model, we matched pixels in our remote sensing imagery to plants on the ground for which we know the species. To accomplish this, we collaborated with the researchers at the Teakettle Experimental Forest. 
Our remote sensing data come from our airborne “Kato” sensor, which flew over the study area and acquired hyperspectral imagery (400-2500 nm, 472 bands) at ~0.5 m/pixel resolution. Coverage of the entire Teakettle Experimental Forest required capturing dozens of individual hyperspectral images. In order to combine these images into a mosaic, we accounted for potential variations of atmospheric conditions throughout the data collection. To do this, we ran an open-source atmospheric correction routine called ISOFIT (Imaging Spectrometer Optimal FITting), which converted all of our remote sensing data from radiance to reflectance. A database of reflectance spectra for each of the tree species within the study area was acquired using the Teakettle stem map and the geo-referenced hyperspectral images. We found that a wide variety of machine learning classifiers were able to identify the species within our images with high (>95%) accuracy. For the most robust quantification of carbon mass and the best assessment of the health of a vegetated area, speciation is critical. Through the use of high-resolution hyperspectral data, ground-truth databases, and complex analytical techniques, we are able to determine the species present within a pixel to a high degree of accuracy. These species identifications will feed directly into our carbon mass model.
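The classification step described above can be sketched in miniature: match each pixel spectrum to the closest species signature built from ground-truth (stem-map) pixels. This toy version uses synthetic "spectra" and a simple nearest-centroid rule, rather than the classifiers actually benchmarked in the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n_bands = 50  # stand-in for the 472 hyperspectral bands

# Synthetic mean reflectance spectra for three hypothetical tree species
centroid_truth = rng.random((3, n_bands))

def make_pixels(labels, noise=0.02):
    """Simulate pixel spectra as a species signature plus sensor noise."""
    return centroid_truth[labels] + noise * rng.standard_normal((len(labels), n_bands))

train_labels = np.repeat(np.arange(3), 20)
train = make_pixels(train_labels)

# Build a per-species centroid spectrum from the labeled training pixels
centroids = np.stack([train[train_labels == k].mean(axis=0) for k in range(3)])

def classify(pixels):
    """Assign each pixel to the species with the nearest centroid spectrum."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

test_labels = np.repeat(np.arange(3), 10)
acc = (classify(make_pixels(test_labels)) == test_labels).mean()
print(acc)  # 1.0 on this well-separated synthetic data
```

Real spectra overlap far more than this synthetic example, which is why the study evaluates a variety of machine learning classifiers rather than a single distance rule.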

Keywords: hyperspectral, satellite, carbon, imagery, python, machine learning, speciation

Procedia PDF Downloads 130
340 Characterizing Solid Glass in Bending, Torsion and Tension: High-Temperature Dynamic Mechanical Analysis up to 950 °C

Authors: Matthias Walluch, José Alberto Rodríguez, Christopher Giehl, Gunther Arnold, Daniela Ehgartner

Abstract:

Dynamic mechanical analysis (DMA) is a powerful method to characterize viscoelastic properties and phase transitions for a wide range of materials. It is often used to characterize polymers and their temperature-dependent behavior, including thermal transitions like the glass transition temperature Tg, via determination of storage and loss moduli in tension (Young’s modulus, E) and shear or torsion (shear modulus, G) or other testing modes. While production and application temperatures for polymers are often limited to several hundred degrees, the material properties of glasses usually require characterization at temperatures exceeding 600 °C. This contribution highlights a high-temperature setup for rotational and oscillatory rheometry as well as for DMA in different modes. The implemented standard convection oven enables the characterization of glass in different loading modes at temperatures up to 950 °C. Three-point bending, tension and torsion measurements on different glasses, with E and G moduli as a function of frequency and temperature, are presented. Additional tests include superimposing several frequencies in a single temperature sweep (“multiwave”). This type of test considerably reduces the experiment time and allows evaluation of structural changes of the material and their frequency dependence. Furthermore, DMA in torsion and tension was performed to determine the complex Poisson’s ratio as a function of frequency and temperature within a single test definition. Tests were performed in a frequency range from 0.1 to 10 Hz and at temperatures up to the glass transition. While variations in frequency did not reveal significant changes in the complex Poisson’s ratio of the glass, a monotonic increase of this parameter was observed with increasing temperature. This contribution outlines the possibilities of DMA in bending, tension and torsion for an extended temperature range. 
It allows the precise mechanical characterization of material behavior from room temperature up to the glass transition and the softening temperature interval. Compared to other thermo-analytical methods, like Differential Scanning Calorimetry (DSC), where mechanical stress is neglected, the frequency dependence links measurement results (e.g. relaxation times) to real applications.
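For an isotropic viscoelastic solid, the complex Poisson's ratio follows directly from the complex moduli measured in tension and torsion, ν* = E*/(2G*) − 1. A minimal sketch; the modulus values below are illustrative of a silicate glass, not measured data from this study:

```python
def complex_poisson_ratio(E_storage, E_loss, G_storage, G_loss):
    """Complex Poisson's ratio nu* = E*/(2 G*) - 1 for an isotropic solid,
    from storage/loss moduli measured in tension (E*) and torsion (G*)."""
    E = complex(E_storage, E_loss)
    G = complex(G_storage, G_loss)
    nu = E / (2 * G) - 1
    return nu.real, nu.imag

# Illustrative glass-like moduli in GPa, purely elastic case (zero loss):
# E' = 72, G' = 29.5 gives nu of about 0.22, typical of a silicate glass.
nu_real, nu_imag = complex_poisson_ratio(72.0, 0.0, 29.5, 0.0)
print(round(nu_real, 3), nu_imag)  # 0.22 0.0
```

Measuring E* and G* at the same frequency and temperature, as in the single test definition described above, is what makes this per-point evaluation possible.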

Keywords: dynamic mechanical analysis, oscillatory rheometry, Poisson's ratio, solid glass, viscoelasticity

Procedia PDF Downloads 83
339 A Radiofrequency Based Navigation Method for Cooperative Robotic Communities in Surface Exploration Missions

Authors: Francisco J. García-de-Quirós, Gianmarco Radice

Abstract:

When considering small robots working as a cooperative community for Moon surface exploration, navigation and inter-node communication become critical issues for mission success. For this approach to succeed, it is necessary to deploy the required infrastructure for the robotic community to achieve efficient self-localization as well as relative positioning and communications between nodes. In this paper, an exploration mission concept in which two cooperative robotic systems co-exist is presented. This paradigm hinges on a community of reference agents that provide support in terms of communication and navigation to a second agent community tasked with exploration goals. The work focuses on the role of the agent community in charge of the overall support and, more specifically, on the positioning and navigation methods implemented in RF microwave bands, which are combined with the communication services. An analysis of the different methods for range and position calculation is presented, as well as the main limiting factors for precision and resolution, such as phase and frequency noise in RF reference carriers and drift mechanisms such as thermal drift and random walk. The effects of carrier frequency instability due to phase noise are categorized into different contributing bands, and the impact of these spectrum regions is considered both in terms of absolute position and relative speed. A mission scenario is finally proposed, and key metrics in terms of mass and power consumption for the required payload hardware are also assessed. For this purpose, an application case involving an RF communication network in the UHF band is described, in coexistence with a communications network used by the individual agents to communicate both within the exploring community and with the mission support agents. 
The proposed approach implements a substantial improvement in planetary navigation since it provides self-localization capabilities for robotic agents characterized by very low mass, volume and power budgets, thus enabling precise navigation capabilities to agents of reduced dimensions. Furthermore, a common and shared localization radiofrequency infrastructure enables new interaction mechanisms such as spatial arrangement of agents over the area of interest for distributed sensing.
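The ranging-to-position chain underlying the navigation methods above (a range from RF round-trip time, then a position from ranges to known reference agents) can be illustrated in two dimensions. The anchor coordinates and timings below are invented for illustration only:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(tau_s):
    """Two-way time-of-flight ranging: distance = c * tau / 2."""
    return C * tau_s / 2.0

def trilaterate_2d(anchors, ranges):
    """Solve for (x, y) from ranges to 3 reference agents by subtracting
    the first circle equation from the others, which linearizes the system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    # Linear system A [x, y]^T = b obtained from the circle differences
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Reference agents at known positions (m); exploring agent truly at (40, 25)
anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
true_pos = (40.0, 25.0)
ranges = [((true_pos[0] - x) ** 2 + (true_pos[1] - y) ** 2) ** 0.5 for x, y in anchors]
est = trilaterate_2d(anchors, ranges)
print(est)  # ~ (40.0, 25.0)
```

In practice the ranges carry the phase-noise and drift errors analyzed in the paper, so the position would be estimated by least squares over many noisy measurements rather than solved exactly.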

Keywords: cooperative robotics, localization, robot navigation, surface exploration

Procedia PDF Downloads 294
338 Syntheses in Polyol Medium of Inorganic Oxides with Various Smart Optical Properties

Authors: Shian Guan, Marie Bourdin, Isabelle Trenque, Younes Messaddeq, Thierry Cardinal, Nicolas Penin, Issam Mjejri, Aline Rougier, Etienne Duguet, Stephane Mornet, Manuel Gaudon

Abstract:

At the interface of the studies performed by three Ph.D. students, Shian Guan (2017-2020), Marie Bourdin (2016-2019) and Isabelle Trenque (2012-2015), a single synthesis route, the polyol-mediated process, was used with success for the preparation of different inorganic oxides. All of these inorganic oxides were elaborated for their potential application as smart optical compounds. This synthesis route has allowed us to develop nanoparticles of zinc oxide, vanadium oxide and tungsten oxide. The route is easy to implement, inexpensive, offers large-scale production potential and leads to materials of high purity. Obtaining nanometric yet perfectly crystalline particles by this route has notably made it possible to dope these matrix materials with high doping ion concentrations (high solubility limits). Thus, Al3+ or Ga3+ doped ZnO powders, with high doping rates in comparison with the literature, exhibit remarkable infrared absorption properties thanks to their high free carrier density. Note also that, due to the narrow particle size distribution of the as-prepared nanometric doped ZnO powders, an original correlation between crystallite size and unit-cell parameters has been established. Also, depending on the annealing atmosphere used to treat the vanadium precursors, VO2, V2O3 or V2O5 oxides with thermochromic or electrochromic properties can be obtained without any impurity, despite the versatility of the oxidation state of vanadium. This is of particular interest for vanadium dioxide, a relatively difficult-to-prepare oxide whose first-order metal-insulator phase transition is widely explored in the literature for its thermochromic behavior (in smart windows with optimal thermal insulation). Finally, the reducing nature of the polyol solvents ensures the production of oxygen-deficient tungsten oxide, thus conferring on the nano-powders exotic colorimetric properties, as well as optimized photochromic and electrochromic behaviors.

Keywords: inorganic oxides, electrochromic, photochromic, thermochromic

Procedia PDF Downloads 221
337 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (i.e., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by processes of combustion at high temperatures (i.e., car engines or thermal power stations). The same holds for industrial plants. What has to be investigated – and this is the topic of this paper – is whether or not there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are connected. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time, covering both week and weekend days, so it will be possible to see how the situation changes during the week. 
The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained with the sensors. To do so, the data will be converted to fit on a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help to choose the right mitigation solutions to be applied in the analysis area, because it will make it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, the noise and pollution mapping and the analysis.
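The comparison step described above (rescaling both pollutant series to a 0-100% scale, then checking co-variation) can be sketched as follows; the sample readings are invented, not field data from Mestre:

```python
def to_percent_scale(values):
    """Rescale a series so its maximum reading maps to 100%."""
    top = max(values)
    return [100.0 * v / top for v in values]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented hourly averages: noise level in dB(A), NO2 in ug/m3
noise = [55, 62, 70, 74, 68, 60, 58]
no2 = [28, 35, 48, 55, 44, 33, 30]
r = pearson_r(to_percent_scale(noise), to_percent_scale(no2))
print(round(r, 2))  # a value near 1 indicates strong co-variation
```

Note that the correlation coefficient itself is invariant to the percent rescaling; the 0-100% scale mainly serves the visual comparison on the maps and graphs described above.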

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 212
336 Quantum Chemical Investigation of Hydrogen Isotopes Adsorption on Metal Ion Functionalized Linde Type A and Faujasite Type Zeolites

Authors: Gayathri Devi V, Aravamudan Kannan, Amit Sircar

Abstract:

In the inner fuel cycle system of a nuclear fusion reactor, the Hydrogen Isotopes Removal System (HIRS) plays a pivotal role. It enables the effective extraction of the hydrogen isotopes from the breeder purge gas, which helps to maintain the tritium breeding ratio and sustain the fusion reaction. As one of the components of the HIRS, Cryogenic Molecular Sieve Bed (CMSB) columns with zeolite adsorbents are considered for the physisorption of hydrogen isotopes at 1 bar and 77 K. Even though zeolites have good thermal stability and reduced activation properties, making them ideal for use in nuclear reactor applications, their modest capacity for hydrogen isotope adsorption is a cause of concern. In order to enhance the adsorbent capacity in an informed manner, it is helpful to understand the adsorption phenomena at the quantum electronic structure level. Physicochemical modification of the adsorbent material enhances the adsorption capacity through the incorporation of active sites, which may be accomplished by introducing suitable metal ions into the zeolite framework. In this work, molecular hydrogen isotope adsorption on the active sites of functionalized zeolites is investigated in detail using a Density Functional Theory (DFT) study. This involves the use of a hybrid Generalized Gradient Approximation (GGA) with dispersion correction to account for the exchange and correlation functional of DFT. The electronic energies, adsorption enthalpy, adsorption free energy, and Highest Occupied Molecular Orbital (HOMO) and Lowest Unoccupied Molecular Orbital (LUMO) energies are computed on stable 8T zeolite clusters as well as on the periodic structure functionalized with different active sites. The characteristics of the dihydrogen bond with the active metal sites and the isotopic effects are also studied in detail. Validation studies with DFT are also presented for the adsorption of hydrogen on metal ion functionalized zeolites. 
The ab-inito screening analysis gave insights regarding the mechanism of hydrogen interaction with the zeolites under study and also the effect of the metal ion on adsorption. This detailed study provides guidelines for selection of the appropriate metal ions that may be incorporated in the zeolites framework for effective adsorption of hydrogen isotopes in the HIRS.
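As a hedged illustration of the energetics described above, the sketch below computes an electronic adsorption energy from DFT total energies and rescales a harmonic stretching frequency for the heavier isotopologue; all numerical values are hypothetical placeholders, not results from this study.

```python
import math

HARTREE_TO_KJMOL = 2625.5  # conversion factor for DFT total energies

def adsorption_energy(e_complex, e_cluster, e_h2):
    """Electronic adsorption energy in kJ/mol (negative = favourable binding)."""
    return (e_complex - e_cluster - e_h2) * HARTREE_TO_KJMOL

def isotope_frequency(freq_cm1, mass_ratio):
    """Harmonic frequency of an isotopologue: nu scales as 1/sqrt(reduced mass)."""
    return freq_cm1 / math.sqrt(mass_ratio)

# hypothetical DFT total energies (hartree): cluster + H2 complex, bare 8T cluster, free H2
d_e = adsorption_energy(-1234.5678, -1233.3300, -1.1700)

# D2 has twice the reduced mass of H2, so its stretch red-shifts by 1/sqrt(2);
# this zero-point-energy difference underlies the isotope effect on adsorption
nu_d2 = isotope_frequency(4160.0, 2.0)
```

The zero-point-energy gap between the H₂ and D₂ stretches is what makes the adsorption free energies isotope-dependent even when the electronic binding energy is identical.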

Keywords: adsorption enthalpy, functionalized zeolites, hydrogen isotopes, nuclear fusion, physisorption

Procedia PDF Downloads 179
335 Application of Lattice Boltzmann Method to Different Boundary Conditions in a Two Dimensional Enclosure

Authors: Jean Yves Trepanier, Sami Ammar, Sagnik Banik

Abstract:

The Lattice Boltzmann Method is advantageous for simulating complex boundary conditions and solving for fluid flow parameters through streaming and collision processes. This paper studies three different test cases in a confined domain using the Lattice Boltzmann model. 1. An SRT (Single Relaxation Time) approach in the Lattice Boltzmann model is used to simulate lid-driven cavity flow for different Reynolds numbers (100, 400, and 1000) with a domain aspect ratio of 1, i.e., a square cavity. A moment-based boundary condition is used for more accurate results. 2. A thermal lattice BGK (Bhatnagar-Gross-Krook) model is developed for Rayleigh-Bénard convection for two test cases, horizontal and vertical temperature difference, considered separately for a Boussinesq incompressible fluid. The Rayleigh number is varied for both test cases (10³ ≤ Ra ≤ 10⁶), keeping the Prandtl number at 0.71. A stability criterion with a precise forcing scheme is used for a greater level of accuracy. 3. The phase change problem governed by the heat-conduction equation is studied using the enthalpy-based Lattice Boltzmann model with a single iteration for each time step, thus reducing the computational time. A double distribution function approach with a D2Q9 (density) model and a D2Q5 (temperature) model is used for two test cases: conduction-dominated melting and convection-dominated melting. The solidification process is also simulated using the enthalpy-based method with a single distribution function (D2Q5 model) to provide a better understanding of the heat transport phenomenon. The domain for the test cases has an aspect ratio of 2, with some exceptions for a square cavity. An approximate velocity scale is chosen to ensure that the simulations remain within the incompressible regime. Different parameters such as velocities, temperature, and Nusselt number are calculated for a comparative study with the existing literature.
The simulated results demonstrate excellent agreement with the existing benchmark solutions within an error limit of ±0.05, which indicates the viability of this method for complex fluid flow problems.
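As a minimal, hedged sketch of the SRT/BGK collide-and-stream cycle described above (not the paper's actual solver, which adds moment-based boundaries and thermal coupling), the following D2Q9 code evolves a small periodic grid; the BGK collision conserves mass and momentum exactly.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """BGK equilibrium: f_i^eq = w_i rho (1 + 3 c.u + 4.5 (c.u)^2 - 1.5 u^2)."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

def step(f, tau):
    """One SRT cycle: relax toward equilibrium (collision), then periodic streaming."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau   # collision
    for i in range(9):                             # streaming along c_i
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

# tiny demo: fluid at rest with a small density bump, fully periodic domain
nx = ny = 8
rho0 = np.ones((nx, ny))
rho0[4, 4] = 1.05
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
mass0 = f.sum()
for _ in range(20):
    f = step(f, tau=0.8)
```

Replacing the periodic streaming with bounce-back or moment-based boundary treatments is what turns this kernel into the lid-driven cavity solver the abstract describes.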

Keywords: BGK, Nusselt, Prandtl, Rayleigh, SRT

Procedia PDF Downloads 128
334 Partial Discharge Characteristics of Free-Moving Particles in HVDC-GIS

Authors: Philipp Wenger, Michael Beltle, Stefan Tenbohlen, Uwe Riechert

Abstract:

The integration of renewable energy introduces new challenges to the transmission grid, as power generation is located far from load centers. The associated long-range power transmission increases the demand for high voltage direct current (HVDC) transmission lines and DC distribution grids. HVDC gas-insulated switchgears (GIS) are considered a key technology due to the combination of DC technology with the long operational experience of AC-GIS. To ensure the long-term reliability of such systems, insulation defects must be detected at an early stage. Operational experience with AC systems has shown that most failures attributable to breakdowns of the insulation system can be detected and identified via partial discharge (PD) measurements beforehand. In AC systems, the identification of defects relies on the phase-resolved partial discharge pattern (PRPD). Since there is no phase information in DC systems, this method cannot be transferred to DC PD diagnostics. Furthermore, the behaviour of, e.g., free-moving particles differs significantly at DC: under the influence of a constant direct electric field, charge carriers can accumulate on particle surfaces. As a result, a particle can lift off, oscillate between the inner conductor and the enclosure, or rapidly bounce at just one electrode, which is known as firefly motion. Depending on the motion and the relative position of the particle to the electrodes, broadband electromagnetic PD pulses are emitted, which can be recorded by ultra-high frequency (UHF) measuring methods. PDs are often accompanied by light emission at the particle's tip, which enables optical detection. This contribution investigates the PD characteristics of free-moving metallic particles in a commercially available 300 kV SF₆-insulated HVDC-GIS. The influences of various defect parameters on the particle motion and the PD characteristics are evaluated experimentally.
Several particle geometries, such as cylinders, lamellae, spirals, and spheres with different lengths, diameters, and weights, are examined. The applied DC voltage is increased stepwise from the inception voltage up to UDC = ±400 kV. Different physical detection methods are used simultaneously in a time-synchronized setup. Firstly, the electromagnetic waves emitted by the particle are recorded by a UHF measuring system. Secondly, a photomultiplier tube (PMT) detects light emission with wavelengths in the range λ = 185…870 nm. Thirdly, a high-speed camera (HSC) tracks the particle's motion trajectory with high accuracy. Furthermore, an electrically insulated electrode is attached to the grounded enclosure and connected to a current shunt in order to detect low-frequency ion currents; the shunt measuring system's sensitivity is in the range of 10 nA at a measuring bandwidth of bw = DC…1 MHz. Currents of charge carriers generated at the particle's tip migrate through the gas gap to the electrode and are recorded by the current shunt. All recorded PD signals are analyzed in order to identify characteristic properties of the different particles, including, e.g., repetition rates and amplitudes of successive pulses, characteristic frequency ranges, and the detected signal energy of single PD pulses. Concluding, an advanced understanding of the physical phenomena underlying particle motion in a direct electric field can be derived.

Keywords: current shunt, free moving particles, high-speed imaging, HVDC-GIS, UHF

Procedia PDF Downloads 160
333 A Computer-Aided System for Tooth Shade Matching

Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan

Abstract:

Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was carried out through dentists' visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers, and digital image analysing systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained more rapidly, simply, objectively, and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to measurement defects with these devices. Inconsistencies may also arise between results acquired by devices with different measurement principles. It is therefore necessary to search for new methods for the dental shade matching process. The digital camera, as a computer-aided system, has developed rapidly. Advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging, a much cheaper approach than traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show the morphology and color texture of the teeth. In recent decades, a method was proposed to compare the color of shade tabs captured by a digital camera using color features; it showed that visual and computer-aided shade matching systems should be used in combination. However, recent feature extraction techniques are based on shape description and do not use color information.
Color, however, is an essential property in depicting and extracting features from objects in the world around us. When a local feature descriptor is extended by concatenating a color descriptor with a shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Because the color descriptor is used in combination with a shape descriptor, it does not need to contain any spatial information, which leads us to use local histograms. This local color histogram method remains reliable under photometric changes, geometric changes, and variations in image quality. Accordingly, color-based local feature extraction methods are used to extract features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. After the combination of these descriptors, the state-of-the-art descriptor known as Color-SIFT is used in this study. Finally, the image feature vectors obtained from a quantization algorithm are fed to classifiers such as k-Nearest Neighbor (KNN), Naive Bayes, or Support Vector Machines (SVM) to determine the label(s) of the visual object category or matching. In this study, SVMs are used as classifiers for color determination and shade matching. Experimental results of this method are compared with other recent studies. It is concluded that the proposed method is a remarkable development in computer-aided tooth shade determination.
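To make the pipeline concrete, here is a minimal sketch of the local color histogram idea with a 1-NN matcher, a deliberate simplification of the full Color-SIFT + SVM pipeline described above; the synthetic "shade tab" patches, bin count, and distance metric are hypothetical choices for illustration.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Concatenated per-channel histogram, L1-normalised (illustrative color descriptor)."""
    h = [np.histogram(img[..., ch], bins=bins, range=(0, 256))[0] for ch in range(3)]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def knn_match(query, refs):
    """1-NN shade matching by L1 histogram distance; returns index of best shade tab."""
    d = [np.abs(query - r).sum() for r in refs]
    return int(np.argmin(d))

rng = np.random.default_rng(0)
# hypothetical shade tabs: dark, mid, and bright uniform RGB patches
tabs = [np.full((16, 16, 3), v, dtype=np.uint8) for v in (60, 130, 200)]
refs = [color_histogram(t) for t in tabs]

# a noisy "tooth" patch whose mean brightness matches the middle tab
tooth = np.clip(rng.normal(130, 5, (16, 16, 3)), 0, 255).astype(np.uint8)
best = knn_match(color_histogram(tooth), refs)
```

In the proposed method this histogram would be concatenated with a SIFT shape descriptor and fed to an SVM rather than a 1-NN matcher, but the normalisation and distance comparison work the same way.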

Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction

Procedia PDF Downloads 444
332 Severe Postoperative Gas Gangrene of the Liver: Off-Label Treatment by Percutaneous Radiofrequency Ablation

Authors: Luciano Tarantino

Abstract:

Gas gangrene is a rare, severe infection with a very high mortality rate caused by Clostridium species. The infection causes a localized, non-suppurative, gas-producing lesion that releases harmful toxins; these toxins impair the inflammatory response and cause vessel damage and multiple organ failure. Gas gangrene of the liver is very rare and develops suddenly, often as a complication of abdominal surgery or liver transplantation. The present paper deals with a case of gas gangrene of the liver that occurred after percutaneous MW ablation of hepatocellular carcinoma, resulting in progressive liver necrosis and multi-organ failure despite administration of specific antibiotics. The patient was successfully treated with percutaneous radiofrequency ablation. Case report: Female, 76 years old, Child A class cirrhosis, treated with synchronous insertion of 3 MW antennae for a large HCC (5.5 cm) in the VIII segment. 24 hours after treatment, the patient was asymptomatic and left the hospital. 2 days later, she complained of fever, weakness, abdominal swelling, and pain. Abdominal US detected a 2.3 cm gas-containing area, eccentric within the large (7 cm) ablated area. The patient was promptly hospitalized with a diagnosis of anaerobic liver abscess and started antibiotic therapy with imipenem/cilastatin + metronidazole + teicoplanin. On the fourth day, the patient was moved to the ICU because of dyspnea, congestive heart failure, atrial fibrillation, right pleural effusion, ascites, and renal failure. Blood tests demonstrated severe leukopenia and neutropenia, anemia, increased creatinine and blood nitrogen, high-level FDP, and a high INR. Blood cultures were negative. At US, unenhanced CT, and CEUS, a progressive enlargement of the infected liver lesion was observed. Percutaneous drainage was attempted, but only drops of non-suppurative brownish material could be obtained. Pleural and peritoneal drainages gave serosanguineous muddy fluid.
The surgeon and the anesthesiologist excluded any indication for surgical resection because of the high perioperative mortality risk. Therefore, we obtained the informed consent of the patient and her relatives to treat the gangrenous liver lesion by percutaneous ablation. Under conscious sedation, percutaneous RFA of the gas gangrene was performed by double insertion of 3 cool-tip needles (Covidien LTD, USA) into the infected area. The procedure was well tolerated by the patient. A dramatic improvement in the patient's condition was observed within the subsequent 24 hours and thereafter. Fever and dyspnea disappeared. Normalization of blood tests, including creatinine, was observed within 4 days. Heart performance improved, and 10 days after the RFA the patient left the hospital; she was followed up weekly as an outpatient for 2 months and every two months thereafter. At 18 months' follow-up, the patient is well compensated (Child-Pugh class B7), without any peritoneal or pleural effusion and without any HCC recurrence at imaging (US every 3 months, CT every 6 months). Percutaneous RFA could be a valuable therapy for focal gas gangrene of the liver in patients who do not respond to antibiotics and when surgery and liver transplantation are not feasible. A fast and early indication is needed in case of rapid worsening of the patient's condition.

Keywords: liver tumor ablation, interventional ultrasound, liver infection, gas gangrene, radiofrequency ablation

Procedia PDF Downloads 79
331 Valorization of Seafood and Poultry By-Products as Gelatin Source and Quality Assessment

Authors: Elif Tugce Aksun Tumerkan, Umran Cansu, Gokhan Boran, Fatih Ozogul

Abstract:

Gelatin is a mixture of peptides obtained from collagen by partial thermal hydrolysis. It is an important and useful biopolymer used in food, pharmaceutical, and photographic products. Generally, gelatins are sourced from pig skin and bones and from beef bone and hide, but within the last decade, alternative gelatin sources have attracted interest. In this study, the functional properties of gelatin extracted from seafood and poultry by-products were evaluated. For this purpose, the skins of skipjack tuna (Katsuwonus pelamis) and frog (Rana esculenta) were used as seafood by-products and chicken skin as a poultry by-product for gelatin extraction. Following extraction, all gelatin samples were lyophilized and stored in plastic bags at room temperature. To compare the gelatins obtained, the chemical composition and common quality parameters, including bloom value, gel strength, and viscosity, as well as melting and gelling temperatures, hydroxyproline content, and colorimetric parameters, were determined. The results showed that the highest protein content was obtained in frog gelatin (90.1%) and the highest hydroxyproline content in chicken gelatin (7.6%). Frog gelatin showed a significantly higher (P < 0.05) melting point (42.7°C) compared to fish (29.7°C) and chicken (29.7°C) gelatins. The bloom value of frog skin gelatin was higher (363 g) than that of chicken and fish gelatins (352 and 336 g, respectively) (P < 0.05). While fish gelatin had a higher lightness (L*) value (92.64) compared to chicken and frog gelatins, the redness/greenness (a*) value was significantly higher in frog skin gelatin. Based on these results, it can be concluded that the skins of different animals with high commercial value may be utilized as alternative sources to produce gelatin with high yield and desirable functional properties.
Functional and quality analyses of gelatin from frog, chicken, and tuna skin showed that poultry and seafood by-products can be used as alternatives to mammalian gelatin. The functional properties, including bloom strength, melting point, and viscosity, of frog skin gelatin were superior to those of chicken and tuna skin gelatins. Among the gelatin groups, significant differences in characteristics such as gel strength and physicochemical properties were observed, based not only on the raw material but also on the extraction method.

Keywords: chicken skin, fish skin, food industry, frog skin, gel strength

Procedia PDF Downloads 163
330 The Removal of Commonly Used Pesticides from Wastewater Using Gold-Activated Charcoal

Authors: Saad Mohamed Elsaid Onaizah

Abstract:

One of the reasons for the intensive use of pesticides is to protect agricultural crops and orchards from pests and agricultural worms. Pesticides are estimated to persist in the soil for about 2 to 12 weeks. Perhaps the most important cause of groundwater pollution is the easy leakage of these harmful pesticides from the soil into aquifers. This research aims to find the best way to use activated charcoal treated with gold nitrate solution to remove deadly pesticides from aqueous solution via the adsorption phenomenon. The pesticides most used in Egypt were selected: malathion, methomyl, abamectin, and thiamethoxam. Activated charcoal doped with gold ions was prepared by applying chemical and thermal treatments to activated charcoal using gold nitrate solution. Adsorption of the studied pesticides onto the activated carbon/Au was mainly chemical adsorption, forming complexes with the gold immobilised on the activated carbon surface. The gold atoms were also considered to act as a catalyst, cracking the pesticide molecules. Gold-activated charcoal is a low-cost material, owing to the use of very low concentrations of gold nitrate solution. The notable ability of the activated charcoal to remove the selected pesticides is attributed to the positive charge of the gold ion, in addition to other active groups such as oxygen functional groups and lignocellulose. The presence of pores of different sizes on the surface of the activated charcoal is the driving force behind the good adsorption efficiency for removing the pesticides under study. The surface area of the prepared char as well as its active groups were determined using infrared spectroscopy and scanning electron microscopy.
Factors affecting the capacity of the activated charcoal were studied in order to reach the highest adsorption capacity, namely the weight of the charcoal, the concentration of the pesticide solution, the experiment time, and the pH. The batch adsorption experiments showed that maximum adsorption of the selected insecticides occurred at a contact time of 80 minutes and pH 7.70. Equilibrium, kinetic, and thermodynamic studies confirmed these promising results and established the practical applicability of the developed system; the equilibrium data followed the Langmuir model, and the adsorbent showed adsorption capacities higher than most other adsorbents.
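The Langmuir analysis mentioned above can be sketched as follows; the isotherm parameters and concentration values below are hypothetical illustrations, not measurements from this study.

```python
import numpy as np

def langmuir_q(Ce, qmax, KL):
    """Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def fit_langmuir(Ce, qe):
    """Linearised fit: Ce/qe = Ce/qmax + 1/(KL*qmax); returns (qmax, KL)."""
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    qmax = 1.0 / slope
    return qmax, 1.0 / (intercept * qmax)

# hypothetical equilibrium batch data (Ce in mg/L, qe in mg/g)
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = langmuir_q(Ce, qmax=120.0, KL=0.05)

qmax_fit, KL_fit = fit_langmuir(Ce, qe)
```

With real batch data the fitted q_max quantifies the maximum monolayer capacity and K_L the affinity, which is how adsorbents are ranked against one another.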

Keywords: wastewater, pesticide pollution, adsorption, activated carbon

Procedia PDF Downloads 79
329 Syngas From Polypropylene Gasification in a Fluidized Bed

Authors: Sergio Rapagnà, Alessandro Antonio Papa, Armando Vitale, Andre Di Carlo

Abstract:

In recent years, the world population has enormously increased its use of plastic products for everyday needs, in particular for transporting and storing consumer goods such as food and beverages. Plastics are widely used in the automotive industry and in the construction of electronic equipment, clothing, and home furnishings. Over the last 70 years, the annual production of plastic products has increased from 2 million tons to 460 million tons, and about 20% of the latter quantity is mismanaged as waste. The consequence of this mismanagement is the release of plastic waste into terrestrial and marine environments, which represents a danger to human health and the ecosystem. Recycling all plastics is difficult because they are often made of mixtures of polymers that are incompatible with each other and contain different additives. The products obtained are always of lower quality, and after two or three recycling cycles they must be eliminated, either by thermal treatment to produce heat or by disposal in landfill. An alternative to these current solutions is to obtain a mixture of gases rich in H₂, CO, and CO₂ suitable for the production of chemicals, with consequent savings in fossil resources. A hydrogen-rich syngas can be obtained by a gasification process in a fluidized bed reactor, with steam as the fluidization medium. The fluidized bed reactor allows the gasification of plastics to be carried out at a constant temperature and permits the use of different plastics with different compositions and grain sizes. Furthermore, during the gasification process, the use of steam increases the gasification of the char produced by the initial pyrolysis/devolatilization of the plastic particles.
The bed inventory can be made of particles with catalytic properties, such as olivine, capable of catalysing the steam reforming reactions of the heavy hydrocarbons normally called tars, with a consequent increase in the quantity of gases produced. The plant comprises a fluidized bed reactor made of AISI 310 steel, with an internal diameter of 0.1 m, containing 3 kg of olivine particles as the bed inventory. The reactor is externally heated by an oven up to 1000 °C. The hot product gases exiting the reactor, after being cooled, are quantified using a mass flow meter. Gas analyzers instantly measure the volumetric composition of H₂, CO, CO₂, CH₄, and NH₃. At the conference, the results obtained from the continuous gasification of polypropylene (PP) particles in a steam atmosphere at temperatures of 840-860 °C will be presented.
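As a back-of-envelope upper bound on the hydrogen that steam gasification of PP could deliver, consider full steam reforming of the propylene repeat unit, C₃H₆ + 3 H₂O → 3 CO + 6 H₂; this is an illustrative stoichiometric ceiling, not a measured yield from the plant described above.

```python
# Molar mass of a C3H6 repeat unit of polypropylene (g/mol)
M_C3H6 = 3 * 12.011 + 6 * 1.008

# Full steam reforming: C3H6 + 3 H2O -> 3 CO + 6 H2 (stoichiometric ceiling)
mol_units_per_kg = 1000.0 / M_C3H6          # mol of repeat units per kg of PP
mol_h2_per_kg = 6 * mol_units_per_kg        # mol H2 per kg of PP
vol_h2_per_kg = mol_h2_per_kg * 22.414 / 1000.0  # Nm^3 H2 per kg PP at 0 °C, 1 atm
```

Real yields fall well below this ceiling because of methane formation, tars, and incomplete char conversion, while the water-gas shift (CO + H₂O → CO₂ + H₂) can push the H₂ fraction of the product gas higher.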

Keywords: gasification, fluidized bed, hydrogen, olivine, polypropylene

Procedia PDF Downloads 27
328 Formulation and Evaluation of Glimepiride (GMP)-Solid Nanodispersion and Nanodispersed Tablets

Authors: Ahmed. Abdel Bary, Omneya. Khowessah, Mojahed. al-jamrah

Abstract:

Introduction: The major challenge in the design of oral dosage forms lies in their poor bioavailability. The most frequent causes of low oral bioavailability are poor solubility and low permeability. The aim of this study was to develop a solid nanodispersed tablet formulation of glimepiride to enhance its solubility and bioavailability. Methodology: Solid nanodispersions of glimepiride (GMP) were prepared using two different ratios of two carriers, namely PEG 6000 and Pluronic F127, and by adopting two different techniques, namely the solvent evaporation technique and the fusion technique. A 2³ full factorial design was adopted to investigate the influence of formulation variables on the properties of the prepared nanodispersions. The best formula of nanodispersed powder was formulated into tablets by direct compression. Differential Scanning Calorimetry (DSC) and Fourier Transform Infra-Red (FTIR) analyses were conducted for thermal behavior and surface structure characterization, respectively. The zeta potential and particle size of the prepared glimepiride nanodispersions were determined. The prepared solid nanodispersions and solid nanodispersed tablets of GMP were evaluated in terms of pre-compression and post-compression parameters, respectively. Results: The DSC and FTIR studies revealed no interaction between GMP and the excipients used. Based on the values of the pre-compression parameters, the prepared solid nanodispersion powder blends showed poor to excellent flow properties. The values of the other evaluated pre-compression parameters of the prepared solid nanodispersions were within pharmacopoeial limits.
The drug content of the prepared nanodispersions ranged from 89.6 ± 0.3% to 99.9 ± 0.5%, with particle sizes ranging from 111.5 nm to 492.3 nm, and the zeta potential (ζ) values of the prepared GMP solid nanodispersion formulae (F1-F8) ranged from -8.28 ± 3.62 mV to -78 ± 11.4 mV. The in-vitro dissolution studies of the prepared solid nanodispersed tablets of GMP showed that the GMP-Pluronic F127 combination (F8) exhibited the best extent of drug release compared to the other formulations and to the marketed product. One-way ANOVA of the percentage of drug released from the prepared GMP nanodispersion formulae (F1-F8) after 20 and 60 minutes showed significant differences between the formulae (P < 0.05). Conclusion: Preparation of glimepiride as nanodispersed particles proved to be a promising tool for enhancing the poor solubility of glimepiride.

Keywords: glimepiride, solid nanodispersion, nanodispersed tablets, poorly water-soluble drugs

Procedia PDF Downloads 488
327 Ultrasonic Micro Injection Molding: Manufacturing of Micro Plates of Biomaterials

Authors: Ariadna Manresa, Ines Ferrer

Abstract:

Introduction: The ultrasonic moulding process (USM) is a recent injection technology used to manufacture micro components. It is able to melt small amounts of material, so material waste is considerably reduced compared to micro injection molding. This is an important advantage when the materials are expensive, as medical biopolymers are. Micro-scaled components are involved in a variety of applications, such as biomedical ones. Replication fidelity is required, so it is important to stabilize the process and minimize the variability of the responses. The aim of this research is to investigate the influence of the main process parameters on the filling behaviour, the dimensional accuracy, and the cavity pressure when a micro-plate is manufactured from biomaterials such as PLA and PCL. Methodology or Experimental Procedure: The specimens are manufactured using a Sonorus 1G Ultrasound Micro Molding Machine. The geometry used is a rectangular micro-plate of 15 × 5 mm with a thickness of 1 mm. The materials used for the investigation are PLA and PCL, owing to their biocompatibility and degradation properties. The experimentation is divided into two phases. In Phase 1, the influence of the process parameters (vibration amplitude, sonotrode velocity, ultrasound time, and compaction force) on the filling behavior is analysed. In Phase 2, once cavity filling is assured, the influence of both cooling time and compaction force on the cavity pressure, part temperature, and dimensional accuracy is investigated. Results and Discussion: Filling behavior depends on sonotrode velocity and vibration amplitude. When the ultrasonic time is longer, more ultrasonic energy is applied and the polymer temperature increases. Depending on the cooling time, the micro-plate may still be too warm when the mold is opened. Consequently, the polymer relieves its stored internal energy (ultrasonic and thermal) by expanding in the easiest direction.
This is reflected in the dimensional accuracy, resulting in micro-plates thicker than the mold. It has also been observed that the factor most affecting the cavity pressure is the compaction configuration during the manufacturing cycle. Conclusions: This research demonstrated the influence of the process parameters on the final manufactured micro-plate. Future work will focus on manufacturing other geometries and analysing the mechanical properties of the specimens.

Keywords: biomaterial, biopolymer, micro injection molding, ultrasound

Procedia PDF Downloads 284
326 An Experimental Determination of the Limiting Factors Governing the Operation of High-Hydrogen Blends in Domestic Appliances Designed to Burn Natural Gas

Authors: Haiqin Zhou, Robin Irons

Abstract:

The introduction of hydrogen into local networks may, in many cases, require the initial operation of those systems on natural gas/hydrogen blends, either because of a lack of sufficient hydrogen to allow a 100% conversion or because existing infrastructure imposes limitations on the percentage of hydrogen that can be burned before the end-use technologies are replaced. In many systems, the most numerous end-use technologies are small-scale appliances used for domestic and industrial heating and cooking. In such a scenario, it is important to understand exactly how much hydrogen can be introduced into these appliances before their performance becomes unacceptable, and what imposes that limitation. This study explores a range of significantly higher hydrogen blends and a broad range of factors that might limit operability or environmental acceptability. We will present tests from a burner designed for space heating and optimized for natural gas as blends of increasing hydrogen content (from 25% upward) were burned, and explore the range of parameters that might govern the acceptability of operation. These include gaseous emissions (particularly NOx and unburned carbon), temperature, flame length, stability, and general operational acceptability. Results will show emissions, temperature, and flame length as functions of thermal load and the percentage of hydrogen in the blend. The relevant application and regulation will ultimately determine the acceptability of these values, so it is important to understand the full operational envelope of the burners in question through the sort of extensive parametric testing we have carried out. The present dataset should represent a useful data source for designers interested in exploring appliance operability. In addition, we present data on two factors that may be absolute in determining allowable hydrogen percentages. The first of these is flame blowback.
Our results show that, for our system, the threshold between acceptable and unacceptable performance lies between 60 and 65 mol% hydrogen. Another factor that may limit operation, and which would be important in domestic applications, is the acoustic performance of these burners. We will describe a range of operational conditions in which hydrogen blend burners produce a loud and invasive 'screech'. It will be important for equipment designers and users to find ways to avoid or mitigate this if performance is to be deemed acceptable.
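One simple reason the hydrogen fraction matters so much for appliance thermal load is that hydrogen carries far less energy per unit volume than methane. The sketch below uses typical volumetric lower heating values (≈10.8 MJ/m³ for H₂ and ≈35.8 MJ/m³ for CH₄, assumed here as representative figures rather than values from this study) to estimate the blend's heating value around the reported threshold.

```python
def blend_lhv(x_h2, lhv_h2=10.8, lhv_ch4=35.8):
    """Mole-fraction-weighted volumetric lower heating value (MJ/m^3) of an H2/CH4 blend."""
    return x_h2 * lhv_h2 + (1.0 - x_h2) * lhv_ch4

# at 60 mol% hydrogen, the blend's volumetric energy content is only ~58% of pure methane's
lhv_60 = blend_lhv(0.60)
ratio = lhv_60 / blend_lhv(0.0)
```

This drop in energy density is why an appliance running near the 60-65 mol% threshold must pass a substantially larger gas volume to deliver the same thermal load, which in turn affects flame speed, stability, and the blowback behaviour discussed above.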

Keywords: blends, operability, domestic appliances, future system operation

Procedia PDF Downloads 24
325 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids

Authors: S. Gariani, I. Shyha

Abstract:

Cutting titanium alloys are usually accompanied with low productivity, poor surface quality, short tool life and high machining costs. This is due to the excessive generation of heat at the cutting zone and difficulties in heat dissipation due to relatively low heat conductivity of this metal. The cooling applications in machining processes are crucial as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral oil-based, synthetic and semi-synthetic are the most common cutting fluids in the machining industry. Although, these cutting fluids are beneficial in the industries, they pose a great threat to human health and ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, due to a combination of biodegradability, good lubricous properties, low toxicity, high flash points, low volatility, high viscosity indices and thermal stability. Fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films. These strong lubricating films give the vegetable oil base stock a greater capability to absorb pressure and high load carrying capacity. This paper details preliminary experimental results when turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials, working conditions was investigated. The full factorial experimental design was employed involving 24 tests to evaluate the influence of process variables on average surface roughness (Ra), tool wear and chip formation. In general, Ra varied between 0.5 and 1.56 µm and Vasco1000 cutting fluid presented comparable performance with other fluids in terms of surface roughness while uncoated coarse grain WC carbide tool achieved lower flank wear at all cutting speeds. 
On the other hand, all tool tips were subjected to uniform flank wear throughout the cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size at higher cutting speeds.
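The abstract does not break down how the 24 runs divide across the factors, but a full factorial design simply enumerates every combination of factor levels. A minimal sketch, assuming a hypothetical 4 fluids × 2 tool materials × 3 cutting speeds split (one of several splits that yields 24 runs; the fluid and tool names other than Vasco1000 and the speed values are placeholders, not taken from the paper):

```python
from itertools import product

# Hypothetical factor levels: 4 x 2 x 3 = 24 runs, matching the reported test count.
fluids = ["Vasco1000", "VO-A", "VO-B", "VO-C"]       # VO-A/B/C are placeholder names
tools = ["uncoated coarse-grain WC", "coated WC"]     # assumed two tool materials
speeds_m_min = [60, 90, 120]                          # assumed cutting speeds, m/min

# Cartesian product of all factor levels gives the full factorial run list.
design = list(product(fluids, tools, speeds_m_min))

for run_id, (fluid, tool, speed) in enumerate(design, start=1):
    print(f"Run {run_id:02d}: fluid={fluid}, tool={tool}, speed={speed} m/min")
```

Each run would then record the responses named in the abstract (Ra, flank wear, chip thickness) for analysis of factor effects.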

Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions

Procedia PDF Downloads 279
324 Performance of HVOF Sprayed Ni-20Cr and Cr₃C₂-NiCr Coatings on Fe-Based Superalloy in an Actual Industrial Environment of a Coal-Fired Boiler

Authors: Tejinder Singh Sidhu

Abstract:

Hot corrosion has been recognized as a severe problem in steam-powered electricity generation plants and industrial waste incinerators, as it consumes the material at an unpredictably rapid rate. Consequently, the load-carrying ability of the components reduces quickly, eventually leading to catastrophic failure. The inability to either totally prevent hot corrosion or at least detect it at an early stage has resulted in several accidents, leading to loss of life and/or destruction of infrastructure. A number of countermeasures are currently in use or under investigation to combat hot corrosion, such as using inhibitors, controlling the process parameters, designing a suitable industrial alloy, and depositing protective coatings. However, the protection system selected for a particular application must be practical, reliable, and economically viable. Due to the continuously rising cost of materials as well as increased material requirements, coating techniques have gained much more importance in recent times. Coatings can add value to products up to 10 times the cost of the coating. Among the different coating techniques, thermal spraying has grown into a well-accepted industrial technology for applying overlay coatings onto the surfaces of engineering components to allow them to function under extreme conditions of wear, erosion-corrosion, high-temperature oxidation, and hot corrosion. In this study, the hot corrosion performance of Ni-20Cr and Cr₃C₂-NiCr coatings developed by the High Velocity Oxy-Fuel (HVOF) process has been studied. The coatings were deposited on a Fe-based superalloy, and experiments were performed in an actual industrial environment of a coal-fired boiler. The cyclic study was carried out around the platen superheater zone, where the temperature was around 1000°C. The study was conducted for 10 cycles; each cycle consisted of 100 hours of heating followed by 1 hour of cooling at ambient temperature.
Both coatings deposited on the Fe-based superalloy imparted better hot corrosion resistance than the uncoated alloy. The Ni-20Cr-coated superalloy performed better than the Cr₃C₂-NiCr-coated one in the actual working conditions of the coal-fired boiler. It was found that the formation of chromium oxide at the boundaries of the Ni-rich splats of the coating blocks the inward permeation of oxygen and other corrosive species to the substrate.

Keywords: hot corrosion, coating, HVOF, oxidation

Procedia PDF Downloads 83
323 Evaluation of Air Movement, Humidity and Temperature Perceptions with the Occupant Satisfaction in Office Buildings in Hot and Humid Climate Regions by Means of Field Surveys

Authors: Diego S. Caetano, Doreen E. Kalz, Louise L. B. Lomardo, Luiz P. Rosa

Abstract:

The energy consumption of non-residential buildings in Brazil has a great impact on the national infrastructure. The growth in energy consumption plays a special role in building cooling systems, driven by people's increased requirements for hygrothermal comfort. This paper presents how the occupants of office buildings perceive and evaluate hygrothermal comfort with regard to temperature, humidity, and air movement, considering the cooling systems present in the buildings studied, as assessed by real occupants in areas of hot and humid climate. The paper presents results collected over an extended period from 3 office buildings in the cities of Rio de Janeiro and Niteroi (Brazil) in 2015 and 2016, from daily questionnaires with eight questions answered by 114 people over 3 to 5 weeks per building, twice a day (10 a.m. and 3 p.m.). The paper analyses 6 of the 8 questions, with emphasis on the perception of temperature, humidity, and air movement. Statistical analyses were performed by cross-referencing the participants' answers with high-time-resolution humidity and temperature data. Regressions comparing internal and external temperatures were then related to the participants' answers, and the results were presented in charts combining the temperature and air humidity statistics with the occupants' responses. The participants' perceptions of humidity and air movement were also analyzed. The hygrothermal comfort statistical model of the European standard DIN EN 15251 and that of the Brazilian standard NBR 16401 were compared against the participants' perceptions of hygrothermal comfort, with emphasis on air humidity, building on prior studies published from this same research. The studies point to a relative tolerance for temperatures higher than those specified by the standards, besides variation in the participants' perception of air humidity.
The paper presents a body of detailed information that makes it possible to improve the quality of buildings based on the perceptions of office building occupants, contributing to reduced electricity consumption for cooling without compromising occupant health or the necessary hygrothermal comfort.

Keywords: thermal comfort, energy consumption, energy standards, comfort models

Procedia PDF Downloads 323
322 Vertebral Artery Dissection Complicating Pregnancy and Puerperium: Case Report and Review of the Literature

Authors: N. Reza Pour, S. Chuah, T. Vo

Abstract:

Background: Vertebral artery dissection (VAD) is a rare complication of pregnancy. It can occur spontaneously or following a traumatic event. The pathogenesis is unclear. Predisposing factors include chronic hypertension, Marfan's syndrome, fibromuscular dysplasia, vasculitis, and cystic medial necrosis. Physiological changes of pregnancy have also been proposed as potential mechanisms of injury to the vessel wall. The clinical presentation varies, and it can present as a headache, neck pain, diplopia, a transient ischaemic attack, or an ischaemic stroke. Isolated cases of VAD in pregnancy and the puerperium have been reported in the literature. One case was found to have a posterior circulation stroke as a result of bilateral VAD, and labour was induced at 37 weeks gestation for preeclampsia. Another patient at 38 weeks had severe neck pain that persisted after induction for elevated blood pressure, and arteriography showed right VAD postpartum. A single case of lethal VAD in pregnancy with subsequent massive subarachnoid haemorrhage has been reported, which was confirmed at autopsy. Case Presentation: We report two cases of vertebral artery dissection in pregnancy. The first patient was a 32-year-old primigravida who presented at the 38th week of pregnancy with the onset of early labour and a blood pressure (BP) of 130/70 on arrival. After 2 hours, the patient developed a severe headache with blurry vision, and the BP was 238/120. Despite treatment with an intravenous antihypertensive, she had an eclamptic fit. Magnesium sulfate was started, and an emergency Caesarean section was performed under general anaesthesia. On the second day after the operation, she developed left-sided neck pain. Magnetic resonance imaging (MRI) angiography confirmed a short-segment left vertebral artery dissection at the level of C3. The patient was treated with aspirin and remained stable without any neurological deficit.
The second patient was a 33-year-old primigravida who was admitted to the hospital at 36 weeks gestation with a BP of 155/105, a constant headache, and visual disturbances. She was medicated with an oral antihypertensive agent. On day 4, she complained of right-sided neck pain. An MRI angiogram revealed a short-segment dissection of the right vertebral artery at the C2-3 level. The pregnancy was terminated on the same day by emergency Caesarean section, and anticoagulation was started subsequently. Post-operative recovery was complicated by a rectus sheath haematoma requiring evacuation. She was discharged home on aspirin without any neurological sequelae. Conclusion: Because of the collateral circulation, unilateral vertebral artery dissections may go unrecognized and may be more common than suspected. The outcome for most patients is benign, reflecting the adequacy of the collateral circulation in young patients. Spontaneous VAD is usually treated with anticoagulation or antiplatelet therapy for a minimum of 3-6 months to prevent future ischaemic events, allowing the dissection to heal on its own. We had two cases of VAD in the context of hypertensive disorders of pregnancy, both with an acceptable outcome. A high level of vigilance is required, particularly with preeclamptic patients presenting with head or neck pain, to allow an early diagnosis. As we hypothesize, early and aggressive management of vertebral artery dissection may potentially prevent further complications.

Keywords: eclampsia, preeclampsia, pregnancy, Vertebral Artery Dissection

Procedia PDF Downloads 279
321 Advancements in Arthroscopic Surgery Techniques for Anterior Cruciate Ligament (ACL) Reconstruction

Authors: Islam Sherif, Ahmed Ashour, Ahmed Hassan, Hatem Osman

Abstract:

Anterior Cruciate Ligament (ACL) injuries are common among athletes and individuals participating in sports with sudden stops, pivots, and changes in direction. Arthroscopic surgery is the gold standard for ACL reconstruction, aiming to restore knee stability and function. Recent years have witnessed significant advancements in arthroscopic surgery techniques, graft materials, and technological innovations, revolutionizing the field of ACL reconstruction. This presentation delves into the latest advancements in arthroscopic surgery techniques for ACL reconstruction and their potential impact on patient outcomes. Traditionally, autografts from the patellar tendon, hamstring tendon, or quadriceps tendon have been commonly used for ACL reconstruction. However, recent studies have explored the use of allografts, synthetic scaffolds, and tissue-engineered grafts as viable alternatives. This abstract evaluates the benefits and potential drawbacks of each graft type, considering factors such as graft incorporation, strength, and risk of graft failure. Moreover, the application of augmented reality (AR) and virtual reality (VR) technologies in surgical planning and intraoperative navigation has gained traction. AR and VR platforms provide surgeons with detailed 3D anatomical reconstructions of the knee joint, enhancing preoperative visualization and aiding in graft tunnel placement during surgery. We discuss the integration of AR and VR in arthroscopic ACL reconstruction procedures, evaluating their accuracy, cost-effectiveness, and overall impact on surgical outcomes. Beyond graft selection and surgical navigation, patient-specific planning has gained attention in recent research. Advanced imaging techniques, such as MRI-based personalized planning, enable surgeons to tailor ACL reconstruction procedures to each patient's unique anatomy. 
By accounting for individual variations in the femoral and tibial insertion sites, this personalized approach aims to optimize graft placement and potentially improve postoperative knee kinematics and stability. Furthermore, rehabilitation and postoperative care play a crucial role in the success of ACL reconstruction. This abstract explores novel rehabilitation protocols, emphasizing early mobilization, neuromuscular training, and accelerated recovery strategies. Integrating technology, such as wearable sensors and mobile applications, into postoperative care can facilitate remote monitoring and timely intervention, contributing to enhanced rehabilitation outcomes. In conclusion, this presentation provides an overview of the cutting-edge advancements in arthroscopic surgery techniques for ACL reconstruction. By embracing innovative graft materials, augmented reality, patient-specific planning, and technology-driven rehabilitation, orthopedic surgeons and sports medicine specialists can achieve superior outcomes in ACL injury management. These developments hold great promise for improving the functional outcomes and long-term success rates of ACL reconstruction, benefitting athletes and patients alike.

Keywords: arthroscopic surgery, ACL, autograft, allograft, graft materials, ACL reconstruction, synthetic scaffolds, tissue-engineered graft, virtual reality, augmented reality, surgical planning, intra-operative navigation

Procedia PDF Downloads 92
320 Spark Plasma Sintering/Synthesis of Alumina-Graphene Composites

Authors: Nikoloz Jalabadze, Roin Chedia, Lili Nadaraia, Levan Khundadze

Abstract:

Nanocrystalline materials in powder form can be manufactured by a number of different methods; however, manufacturing composite material products in the same nanocrystalline state is still a problem, because the compaction and synthesis of nanocrystalline powders are accompanied by intensive particle growth, a process that promotes the formation of pieces in an ordinary crystalline state rather than the desirable nanocrystalline state. To date, spark plasma sintering (SPS) has been considered the most promising and energy-efficient method for producing dense bodies of composite materials. An advantage of the SPS method in comparison with other methods is mainly the low temperature and short duration of the sintering procedure, which ultimately provides an opportunity to obtain dense material with a nanocrystalline structure. Graphene has recently garnered significant interest as a reinforcing phase in composite materials because of its excellent electrical, thermal, and mechanical properties. Graphene nanoplatelets (GNPs) in particular have attracted much interest as reinforcements for ceramic matrix composites (mostly Al2O3, Si3N4, TiO2, ZrB2, etc.). SPS has been shown to effectively densify a variety of ceramic systems, including Al2O3, often with improvements in mechanical and functional behavior. Alumina consolidated by SPS has been shown to have superior hardness, fracture toughness, plasticity, and optical translucency compared to conventionally processed alumina. Knowledge of how GNPs influence sintering behavior is important for effective processing and manufacturing. In this study, the effects of GNPs on the SPS processing of Al2O3 are investigated by systematically varying the sintering temperature, holding time, and pressure. Our experiments showed that the SPS process is also appropriate for the synthesis of nanocrystalline powders of alumina-graphene composites.
Depending on the size of the molds, it is possible to obtain different amounts of nanopowders. An investigation of the structure and of the physical-chemical, mechanical, and performance properties of the elaborated composite materials was performed. The results of this study provide a fundamental understanding of the effects of GNPs on sintering behavior, thereby providing a foundation for future optimization of the processing of these promising nanocomposite systems.

Keywords: alumina oxide, ceramic matrix composites, graphene nanoplatelets, spark-plasma sintering

Procedia PDF Downloads 376
319 Automated Facial Symmetry Assessment for Orthognathic Surgery: Utilizing 3D Contour Mapping and Hyperdimensional Computing-Based Machine Learning

Authors: Wen-Chung Chiang, Lun-Jou Lo, Hsiu-Hsia Lin

Abstract:

This study aimed to improve the evaluation of facial symmetry, which is crucial for planning and assessing outcomes in orthognathic surgery (OGS). Facial symmetry plays a key role in both aesthetic and functional aspects of OGS, making its accurate evaluation essential for optimal surgical results. To address the limitations of traditional methods, a different approach was developed, combining three-dimensional (3D) facial contour mapping with hyperdimensional (HD) computing to enhance precision and efficiency in symmetry assessments. The study was conducted at Chang Gung Memorial Hospital, where data were collected from 2018 to 2023 using 3D cone beam computed tomography (CBCT), a highly detailed imaging technique. A large and comprehensive dataset was compiled, consisting of 150 normal individuals and 2,800 patients, totaling 5,750 preoperative and postoperative facial images. These data were critical for training a machine learning model designed to analyze and quantify facial symmetry. The machine learning model was trained to process 3D contour data from the CBCT images, with HD computing employed to power the facial symmetry quantification system. This combination of technologies allowed for an objective and detailed analysis of facial features, surpassing the accuracy and reliability of traditional symmetry assessments, which often rely on subjective visual evaluations by clinicians. In addition to developing the system, the researchers conducted a retrospective review of 3D CBCT data from 300 patients who had undergone OGS. The patients’ facial images were analyzed both before and after surgery to assess the clinical utility of the proposed system. The results showed that the facial symmetry algorithm achieved an overall accuracy of 82.5%, indicating its robustness in real-world clinical applications. Postoperative analysis revealed a significant improvement in facial symmetry, with an average score increase of 51%. 
The mean symmetry score rose from 2.53 preoperatively to 3.89 postoperatively, demonstrating the system's effectiveness in quantifying improvements after OGS. These results underscore the system's potential for providing valuable feedback to surgeons and aiding in the refinement of surgical techniques. The study also led to the development of a web-based system that automates facial symmetry assessment. This system integrates HD computing and 3D contour mapping into a user-friendly platform that allows for rapid and accurate evaluations. Clinicians can easily access this system to perform detailed symmetry assessments, making it a practical tool for clinical settings. Additionally, the system facilitates better communication between clinicians and patients by providing objective, easy-to-understand symmetry scores, which can help patients visualize the expected outcomes of their surgery. In conclusion, this study introduced a valuable and highly effective approach to facial symmetry evaluation in OGS, combining 3D contour mapping, HD computing, and machine learning. The resulting system achieved high accuracy and offers a streamlined, automated solution for clinical use. The development of the web-based platform further enhances its practicality, making it a valuable tool for improving surgical outcomes and patient satisfaction in orthognathic surgery.
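The abstract does not disclose the authors' scoring algorithm, but the idea of quantifying facial symmetry from 3D geometry can be illustrated with a much simpler stand-in: reflect right-side landmarks across an assumed midsagittal plane (x = 0) and measure how far each lands from its left-side counterpart. This is a minimal sketch of the general principle, not the hospital's HD-computing system; the landmark coordinates and the choice of plane are illustrative assumptions.

```python
import math

def asymmetry_rms(pairs):
    """RMS distance between left landmarks and mirrored right landmarks.

    pairs: list of ((xl, yl, zl), (xr, yr, zr)) landmark pairs in a frame
    where the midsagittal plane is assumed to be x = 0.
    Returns 0.0 for a perfectly mirror-symmetric face; larger = more asymmetric.
    """
    sq = []
    for (xl, yl, zl), (xr, yr, zr) in pairs:
        # Mirror the right-side landmark across x = 0, then compare with the left.
        dx, dy, dz = xl - (-xr), yl - yr, zl - zr
        sq.append(dx * dx + dy * dy + dz * dz)
    return math.sqrt(sum(sq) / len(sq))

# A perfectly mirrored landmark pair scores 0.0.
perfect = [((-30.0, 10.0, 5.0), (30.0, 10.0, 5.0))]
print(asymmetry_rms(perfect))
```

A clinical system would add landmark detection, automatic midplane estimation, and dense contour correspondence on top of a distance measure of this kind.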

Keywords: facial symmetry, orthognathic surgery, facial contour mapping, hyperdimensional computing

Procedia PDF Downloads 27
318 Impact of Primary Care Telemedicine Consultations On Health Care Resource Utilisation: A Systematic Review

Authors: Anastasia Constantinou, Stephen Morris

Abstract:

Background: The adoption of synchronous and asynchronous telemedicine modalities for primary care consultations has increased exponentially since the COVID-19 pandemic. However, there is limited understanding of how virtual consultations influence healthcare resource utilization and other quality measures, including safety, timeliness, efficiency, patient and provider satisfaction, cost-effectiveness, and environmental impact. Aim: To quantify the rates of follow-up visits, emergency department visits, hospitalizations, requests for investigations, and prescriptions, and to comment on the effect on different quality measures associated with the different telemedicine modalities used for primary care services and for primary care referrals to secondary care. Design and setting: Systematic review in primary care. Methods: A systematic search was carried out across three databases (Medline, PubMed, and Scopus) between August and November 2023, using terms related to telemedicine, general practice, electronic referrals, follow-up, use, and efficiency, supported by citation searching. This was followed by screening according to pre-defined criteria, data extraction, and critical appraisal. Narrative synthesis and meta-analysis of quantitative data were used to summarize the findings. Results: The search identified 2230 studies; 50 studies are included in this review. Asynchronous modalities prevailed in both primary care services (68%) and referrals from primary care to secondary care (83%), and most of the study participants were female (63.3%), with a mean age of 48.2. The average follow-up rate for virtual consultations in primary care was 28.4% (eVisits: 36.8%, secure messages: 18.7%, videoconference: 23.5%), with no significant difference between them or compared with face-to-face (F2F) consultations.
There was an average annual reduction in primary care visits of 0.09/patient, an increase in telephone visits of 0.20/patient, an increase in ED encounters of 0.011/patient, an increase in hospitalizations of 0.02/patient, and an increase in out-of-hours visits of 0.019/patient. Laboratory testing was requested on average for 10.9% of telemedicine patients, imaging or procedures for 5.6%, and prescriptions for 58.7% of patients. Looking at referrals to secondary care, on average 36.7% of virtual referrals required a follow-up visit, with the average follow-up rate for electronic referrals being higher than for videoconferencing (39.2% vs 23%, p=0.167). Technical failures were reported on average for 1.4% of virtual consultations in primary care. Using carbon footprint estimates, we calculate that the use of telemedicine in primary care services can potentially provide a net decrease in carbon footprint of 0.592 kgCO2/patient/year. When follow-up rates are taken into account, we estimate that virtual consultations reduce the carbon footprint of primary care services by a factor of 2.3, and of secondary care referrals by a factor of 2.2. No major concerns regarding quality of care or patient satisfaction were identified. 5 of the 7 studies that addressed cost-effectiveness reported increased savings. Conclusions: Telemedicine provides quality, cost-effective, and environmentally sustainable care for patients in primary care, with inconclusive evidence regarding the rates of subsequent healthcare utilization. The evidence is limited by heterogeneous, small-scale studies and a lack of prospective comparative studies. Further research to identify the most appropriate telemedicine modality for different patient populations, clinical presentations, and forms of service provision (e.g., used to follow up patients rather than for initial diagnosis), as well as further education for patients and providers alike on how to make the best use of this service, is expected to improve outcomes and influence practice.
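The structure of a follow-up-adjusted carbon comparison like the one in the abstract can be sketched as follows. The review's underlying emission factors are not given in the abstract, so the per-consultation values below are placeholders and the output does not reproduce the reported 0.592 kgCO2/patient/year or 2.3x figures; only the 28.4% follow-up rate is taken from the text.

```python
# Placeholder per-consultation emission factors in kg CO2e (NOT from the review).
F2F_TRAVEL = 2.0        # assumed patient travel for one in-person visit
TELE_OVERHEAD = 0.05    # assumed data transfer / device use for one virtual visit

def net_saving_per_consult(follow_up_rate):
    """Net CO2e saved per virtual consultation, charging the follow-up
    fraction of patients an additional in-person trip."""
    virtual_cost = TELE_OVERHEAD + follow_up_rate * F2F_TRAVEL
    return F2F_TRAVEL - virtual_cost

# With the review's 28.4% average follow-up rate for primary care:
saving = net_saving_per_consult(0.284)
ratio = F2F_TRAVEL / (TELE_OVERHEAD + 0.284 * F2F_TRAVEL)
print(f"net saving = {saving:.3f} kg CO2e per consult; virtual is {ratio:.1f}x lower")
```

The key point the sketch captures is that follow-up visits erode, but in this review do not eliminate, the travel-related carbon advantage of virtual consultations.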

Keywords: telemedicine, healthcare utilisation, digital interventions, environmental impact, sustainable healthcare

Procedia PDF Downloads 57