Search results for: heat transport
375 A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers
Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi
Abstract:
Thermal Grill Illusion (TGI) elicits a strong and often painful burning sensation when interlaced warm and cold stimuli, each individually non-painful, excite thermoreceptors beneath the skin. Among several theories of TGI, the “disinhibition” theory is the most widely accepted in the literature. According to this theory, TGI is the result of the disinhibition or unmasking of the pain-sensitive HPC (Heat-Pinch-Cold) nerve fibers due to the inhibition of cold-sensitive nerve fibers that are responsible for masking HPC nerve fibers. Although researchers have focused on understanding TGI through experiments and models, none of them has investigated the prediction of TGI pain intensity through a computational model. Furthermore, the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. The prediction of pain intensity through a computational model of TGI can help in optimizing thermal displays and understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory and by utilizing the existing popular models of warm and cold receptors in the skin. The model aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between warm and cold grills (each repeated three times). All the participants rated the perceived TGI pain sensation on a scale of one to ten. For the range of temperature differences, the experimentally observed perceived intensity of TGI is compared with the neuronal activity of pain-sensitive HPC nerve fibers. The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers. Moreover, a similar monotonically increasing relationship is experimentally observed between temperature differences and the perceived TGI intensity. This shows the potential comparison of TGI pain intensity observed through the experimental study with the neuronal activity predicted through the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies in pain perception are needed to develop a more accurate version of the current model.
Keywords: thermal grill illusion, computational modelling, simulation, psychophysics, haptics
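The disinhibition mechanism can be illustrated with a minimal numerical sketch. The Python snippet below is not the authors' model: the sigmoidal receptor curves, temperature midpoints, and inhibition gain are hypothetical choices, used only to show how suppressing cold-fiber inhibition by warm stimulation yields a pain estimate that rises monotonically with the warm-cold temperature difference.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cold_fiber(t_cold):
    """Cold-sensitive (COLD) fiber drive: fires more as the cold bars get colder (hypothetical curve)."""
    return sigmoid((24.0 - t_cold) / 3.0)

def hpc_fiber(t_cold):
    """HPC (heat-pinch-cold) fiber drive from the cold bars (hypothetical curve)."""
    return sigmoid((28.0 - t_cold) / 4.0)

def warm_suppression(t_warm):
    """Fraction of COLD-fiber inhibition removed by the interlaced warm bars (hypothetical curve)."""
    return sigmoid((t_warm - 34.0) / 2.0)

def tgi_pain(t_warm, t_cold, k=1.0):
    """Toy disinhibition model: pain is the HPC drive left unmasked once COLD inhibition is suppressed."""
    masking = k * cold_fiber(t_cold) * (1.0 - warm_suppression(t_warm))
    return max(0.0, hpc_fiber(t_cold) - masking)

# Sweep the warm-cold temperature difference about a 30 °C mean grill temperature
for delta in (4, 8, 12, 16, 20):
    t_warm, t_cold = 30.0 + delta / 2, 30.0 - delta / 2
    print(f"dT = {delta:2d} °C -> relative TGI pain = {tgi_pain(t_warm, t_cold):.3f}")
```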
Procedia PDF Downloads 171
374 Port Miami in the Caribbean and Mesoamerica: Data, Spatial Networks and Trends
Authors: Richard Grant, Landolf Rhode-Barbarigos, Shouraseni Sen Roy, Lucas Brittan, Change Li, Aiden Rowe
Abstract:
Ports are critical for the US economy, connecting farmers, manufacturers, retailers, consumers, and an array of transport and storage operators. Port facilities vary widely in terms of their productivity, footprint, specializations, and governance. In this context, Port Miami is considered one of the busiest ports, providing both cargo and cruise services and connecting the wider region of the Caribbean and Mesoamerica to global networks. It is considered the “Cruise Capital of the World and Global Gateway of the Americas” and the “leading container port in Florida.” Furthermore, it has also been ranked as one of the top container ports in the world and the second most efficient port in North America. In this regard, Port Miami has made significant strategic and capital infrastructure investments of about US$1 billion, including increasing the channel depth and other onshore infrastructural enhancements. Therefore, this study involves a detailed analysis of Port Miami’s network, using multiple years of publicly available data on marine vessel traffic, cargo, and connectivity and performance indices from 2015-2021. Through the analysis of cargo and cruise vessels to and from Port Miami and its relative performance at the global scale from 2015 to 2021, this study examines the port’s long-term resilience and future growth potential. The main results of the analyses indicate that the top category for both inbound and outbound cargo is manufactured products and textiles. In addition, inbound cargo includes a large share of fresh fruits, vegetables, and produce, while outbound cargo includes processed food. Furthermore, the top ten port connections for Port Miami are all located in the Caribbean region, the Gulf of Mexico, and the Southeast USA. About half of the inbound cargo comes from Savannah, Saint Thomas, and Puerto Plata, while outbound cargo mainly goes to Puerto Corte, Freeport, and Kingston. Additionally, for cruise vessels, a significantly large number of vessels originate from Nassau, followed by Freeport. The number of passenger vessels pre-COVID was almost 1,000 per year, which dropped substantially in 2020 and 2021 to around 300 vessels. Finally, the resilience and competitiveness of Port Miami were also assessed in terms of its network connectivity by examining the inbound and outbound maritime vessel traffic. It is noteworthy that the most frequent port connections for Port Miami were Freeport and Savannah, followed by Kingston, Nassau, and New Orleans. However, several of these ports, Puerto Corte, Veracruz, Puerto Plata, and Santo Thomas, have low resilience and are highly vulnerable, which needs to be taken into consideration for the long-term resilience of Port Miami in the future.
Keywords: port, Miami, network, cargo, cruise
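As an illustration of the kind of network analysis described above, the sketch below builds a weighted directed graph of port-to-port vessel movements with the networkx library and reports Port Miami's strongest connections. The movement counts are invented placeholders, not the study's vessel-traffic data, and the weighted-degree measure shown is only one simple connectivity indicator among many.

```python
import networkx as nx

# Hypothetical vessel-movement counts (origin, destination, voyages) -- placeholders, not study data
movements = [
    ("Freeport", "Miami", 420), ("Miami", "Freeport", 410),
    ("Savannah", "Miami", 380), ("Miami", "Kingston", 250),
    ("Nassau", "Miami", 610), ("Miami", "New Orleans", 180),
    ("Puerto Plata", "Miami", 150), ("Miami", "Puerto Corte", 140),
]

G = nx.DiGraph()
for origin, destination, voyages in movements:
    G.add_edge(origin, destination, weight=voyages)

# Strongest connections into and out of Port Miami
inbound = sorted(G.in_edges("Miami", data="weight"), key=lambda edge: -edge[2])
outbound = sorted(G.out_edges("Miami", data="weight"), key=lambda edge: -edge[2])
print("Top inbound:", inbound[:3])
print("Top outbound:", outbound[:3])

# Simple connectivity indicator: weighted degree (total voyages touching each port)
strength = dict(G.degree(weight="weight"))
print("Port connectivity:", sorted(strength.items(), key=lambda item: -item[1]))
```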
Procedia PDF Downloads 79
373 Evolution of Microstructure through Phase Separation via Spinodal Decomposition in Spinel Ferrite Thin Films
Authors: Nipa Debnath, Harinarayan Das, Takahiko Kawaguchi, Naonori Sakamoto, Kazuo Shinozaki, Hisao Suzuki, Naoki Wakiya
Abstract:
Nowadays, spinel ferrite magnetic thin films have drawn considerable attention due to their interesting magnetic and electrical properties combined with enhanced chemical and thermal stability. Spinel ferrite magnetic films can be implemented in magnetic data storage, sensors, and spin filters or microwave devices. It is well established that the structural, magnetic, and transport properties of magnetic thin films depend on microstructure. Spinodal decomposition (SD) is a phase separation process whereby a material system spontaneously separates into two phases with distinct compositions. A periodic microstructure is the characteristic feature of SD. Thus, SD can be exploited to control the microstructure at the nanoscale level. In bulk spinel ferrites having the general formula MₓFe₃₋ₓO₄ (M = Co, Mn, Ni, Zn), phase separation via SD has been reported only for cobalt ferrite (CFO); however, long post-annealing times are required for spinodal decomposition to occur. We have found that SD occurs in CFO thin films without any post-deposition annealing process if a magnetic field is applied during thin film growth. Dynamic Aurora pulsed laser deposition (PLD) is a specially designed PLD system through which an in-situ magnetic field (up to 2000 G) can be applied during thin film growth. The in-situ magnetic field suppresses the recombination of ions in the plume. In addition, the intensity of the ion peaks in the plume spectra also increases when a magnetic field is applied to the plume. As a result, ions with high kinetic energy strike the substrate. Thus, ion impingement occurs under the magnetic field during thin film growth. The driving force of SD is this ion impingement towards the substrate induced by the in-situ magnetic field. In this study, we report the occurrence of phase separation through SD and the evolution of microstructure after phase separation in spinel ferrite thin films. The surface morphology of the phase-separated films shows a checkerboard-like domain structure. The cross-sectional microstructure of the phase-separated films reveals columnar-type phase separation. Herein, the decomposition wave propagates in the lateral direction, as confirmed by the lateral composition modulations in the spinodally decomposed films. Large magnetic anisotropy has been found in spinodally decomposed nickel ferrite (NFO) thin films. This approach confirms that a magnetic field is also an important thermodynamic parameter to induce phase separation through the enhancement of uphill diffusion in thin films. This thin film deposition technique could be a more efficient alternative for the fabrication of self-organized phase-separated thin films and could be employed to control the microstructure at the nanoscale level.
Keywords: Dynamic Aurora PLD, magnetic anisotropy, spinodal decomposition, spinel ferrite thin film
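For orientation, the continuum description usually invoked for spinodal decomposition (textbook background, not the specific model of this study) is the Cahn-Hilliard equation, in which uphill diffusion amplifies composition waves wherever the free-energy curvature is negative:

```latex
\begin{equation}
  \frac{\partial c}{\partial t}
  = \nabla \cdot \left[ M \, \nabla \!\left( \frac{\partial f}{\partial c} - \kappa \nabla^{2} c \right) \right],
  \qquad
  \text{spinodal (unstable) region: } \frac{\partial^{2} f}{\partial c^{2}} < 0,
\end{equation}
```

where $c$ is the local composition, $M$ the atomic mobility, $f(c)$ the bulk free-energy density, and $\kappa$ the gradient-energy coefficient. Inside the spinodal region, small composition fluctuations grow spontaneously into periodic, checkerboard-like modulations of the kind described above; in this study, the in-situ magnetic field (through ion impingement) is argued to enhance exactly this uphill-diffusion step.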
Procedia PDF Downloads 366
372 Nonlinear Optics of Dirac Fermion Systems
Authors: Vipin Kumar, Girish S. Setlur
Abstract:
Graphene has been recognized as a promising 2D material with many new properties. However, pristine graphene is gapless, which hinders its direct application in graphene-based semiconducting devices. Graphene is a zero-gap, linearly dispersing semiconductor. Massless charge carriers (quasi-particles) in graphene obey the relativistic Dirac equation. These Dirac fermions show very unusual electronic, optical, and transport properties. Graphene is analogous to two-level atomic systems and conventional semiconductors. We may therefore expect that graphene-based systems will also exhibit phenomena that are well known in two-level atomic systems and in conventional semiconductors. Rabi oscillation is a nonlinear optical phenomenon well known in the context of two-level atomic systems and also in conventional semiconductors. It is the periodic exchange of energy between the system of interest and the electromagnetic field. The present work describes the phenomenon of Rabi oscillations in graphene-based systems. Rabi oscillations have already been described theoretically and experimentally in the extensive literature available on this topic. To describe Rabi oscillations, these works use the rotating wave approximation (RWA), well known in studies of two-level systems. The RWA is valid only near conventional resonance (small detuning), when the frequency of the external field is nearly equal to the particle-hole excitation frequency. The Rabi frequency goes through a minimum close to conventional resonance as a function of detuning. Far from conventional resonance, the RWA becomes rather less useful, and some other technique is needed to describe the phenomenon of Rabi oscillation. In conventional systems, there is no second minimum - the only minimum is at conventional resonance. But in graphene we find anomalous Rabi oscillations far from conventional resonance, where the Rabi frequency goes through a minimum that is much smaller than the conventional Rabi frequency. This is known as the anomalous Rabi frequency and is unique to graphene systems. We have shown that this is attributable to the pseudo-spin degree of freedom in graphene systems. A new technique, an alternative to the RWA called the asymptotic RWA (ARWA), has been invoked by our group to discuss the phenomenon of Rabi oscillation. The experimentally accessible current density shows different types of threshold behaviour in the frequency domain close to the anomalous Rabi frequency, depending on the system chosen. For single layer graphene, the exponent at threshold is equal to 1/2, while in the case of bilayer graphene it is computed to be equal to 1. Bilayer graphene shows harmonic (anomalous) resonances absent in single layer graphene. The effect of asymmetry and trigonal warping (a weak direct inter-layer hopping in bilayer graphene) on these oscillations is also studied. Asymmetry has a remarkable effect only on anomalous Rabi oscillations, whereas the Rabi frequency near conventional resonance is not significantly affected by the asymmetry parameter. In the presence of asymmetry, these graphene systems show Rabi-like oscillations (offset oscillations) even for vanishingly small applied field strengths (less than the gap parameter). The frequency of offset oscillations may be identified with the asymmetry parameter.
Keywords: graphene, bilayer graphene, Rabi oscillations, Dirac fermion systems
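For comparison, the conventional two-level result referred to above can be stated compactly. This is the standard textbook expression under the rotating wave approximation, not the authors' asymptotic-RWA result for graphene:

```latex
\begin{equation}
  P_{e}(t) = \frac{\Omega_{R}^{2}}{\Omega_{R}^{2} + \Delta^{2}}
             \, \sin^{2}\!\left( \tfrac{1}{2} \sqrt{\Omega_{R}^{2} + \Delta^{2}} \; t \right),
  \qquad \Delta = \omega - \omega_{0},
\end{equation}
```

where $P_{e}$ is the excited-state population, $\Omega_{R}$ the bare Rabi frequency set by the field amplitude, and $\Delta$ the detuning of the drive frequency $\omega$ from the transition frequency $\omega_{0}$. The generalized Rabi frequency $\sqrt{\Omega_{R}^{2}+\Delta^{2}}$ has its single minimum at conventional resonance ($\Delta = 0$); the anomalous Rabi oscillations discussed above appear far from this resonance, precisely where the RWA itself ceases to be reliable.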
Procedia PDF Downloads 298
371 Reorientation of Sustainable Livestock Management: A Case Study Applied to Wastes Management in Faculty of Animal Husbandry, Padjadjaran University, Indonesia
Authors: Raka Rahmatulloh, Mohammad Ilham Nugraha, Muhammad Ifan Fathurrahman
Abstract:
The agricultural sector covers a wide area, and one of its parts is the livestock subsector, which supplies animal protein as a food source. Animal protein is provided by the main livestock products, such as meat, milk, and eggs. Besides these main products, livestock produce metabolic residues, so-called livestock wastes. Livestock wastes can be solid (feces), liquid (urine), or gaseous (methane), and they become useful and economically valuable when well processed and well controlled. Nowadays, livestock waste is considered a source of pollutants, especially of water pollution. If this source of pollutants is used in an integrated way, it will have a positive impact on organic farming and a healthy environment. Management of livestock wastes can be integrated with crop farming, where planting and crop care rely on fertilizers. Most Indonesian farmers still use chemical fertilizers, whose long-term use disturbs the ecological balance of the environment. One of the main efforts, conducted by the Faculty of Animal Husbandry, Padjadjaran University, is to use organic fertilizers instead of chemical fertilizers. The method converts solid livestock waste and agricultural wastes into liquid organic fertilizer, feed additive, biogas, and vermicompost through decomposition. The decomposition takes up to 14 days and includes aeration, an extraction process using water as a solvent medium for the nutrients contained in the decomposed material, and disinfection of the decomposed material to remove pathogenic microorganisms. The liquid organic fertilizer is highly beneficial for farmers, with a carbon/nitrogen (C/N) ratio of 25/1 to 30/1 and a near-neutral pH (6.5-7.5), which is good for plant growth. The feed additive may be given to improve feed digestibility so that nutrients can be easily absorbed by the body for production. Biogas contains methane (CH4), whose heat of combustion is high enough to produce electricity. Vermicompost is the product of reworking organic waste material and has excellent structure, porosity, aeration, drainage, and moisture-holding capacity. Based on the case study above, an integrated livestock waste management program strongly supports the Indonesian government in the achievement of sustainable livestock development.
Keywords: integrated, livestock wastes, organic fertilizer, sustainable livestock development
Procedia PDF Downloads 434
370 Mitochondrial DNA Defect and Mitochondrial Dysfunction in Diabetic Nephropathy: The Role of Hyperglycemia-Induced Reactive Oxygen Species
Authors: Ghada Al-Kafaji, Mohamed Sabry
Abstract:
Mitochondria are the site of cellular respiration and produce energy in the form of adenosine triphosphate (ATP) via oxidative phosphorylation. They are the major source of intracellular reactive oxygen species (ROS) and are also a direct target of ROS attack. Oxidative stress and ROS-mediated disruptions of mitochondrial function are major components involved in the pathogenicity of diabetic complications. In this work, the changes in mitochondrial DNA (mtDNA) copy number, biogenesis, gene expression of mtDNA-encoded subunits of electron transport chain (ETC) complexes, and mitochondrial function in response to hyperglycemia-induced ROS, and the effect of direct inhibition of ROS on mitochondria, were investigated in an in vitro model of diabetic nephropathy using human renal mesangial cells. The cells were exposed to normoglycemic and hyperglycemic conditions in the presence and absence of Mn(III)tetrakis(4-benzoic acid) porphyrin chloride (MnTBAP) or catalase for 1, 4 and 7 days. ROS production was assessed by confocal microscopy and flow cytometry. mtDNA copy number and PGC-1a, NRF-1, and TFAM, as well as ND2, CYTB, COI, and ATPase 6 transcripts, were all analyzed by real-time PCR. PGC-1a, NRF-1, and TFAM, as well as ND2, CYTB, COI, and ATPase 6 proteins, were analyzed by Western blotting. Mitochondrial function was determined by assessing mitochondrial membrane potential and ATP levels. Hyperglycemia induced a significant increase in the production of mitochondrial superoxide and hydrogen peroxide at day 1 (P < 0.05), and this increase remained significantly elevated at days 4 and 7 (P < 0.05). The copy number of mtDNA and the expression of PGC-1a, NRF-1, and TFAM, as well as ND2, CYTB, COI and ATPase 6, increased after one day of hyperglycemia (P < 0.05), with a significant reduction in all those parameters at 4 and 7 days (P < 0.05). The mitochondrial membrane potential decreased progressively from 1 to 7 days of hyperglycemia, with a parallel progressive reduction in ATP levels over time (P < 0.05). MnTBAP and catalase treatment of cells cultured under hyperglycemic conditions attenuated ROS production, reversed renal mitochondrial oxidative stress, and improved mtDNA, mitochondrial biogenesis, and function. These results show that hyperglycemia-induced ROS caused an early increase in mtDNA copy number, mitochondrial biogenesis, and mtDNA-encoded gene expression of the ETC subunits in human mesangial cells as a compensatory response to the decline in mitochondrial function; these changes precede the mtDNA defect and mitochondrial dysfunction that develop with the progressive oxidative response. Protection from ROS-mediated damage to renal mitochondria induced by hyperglycemia may be a novel therapeutic approach for the prevention/treatment of DN.
Keywords: diabetic nephropathy, hyperglycemia, reactive oxygen species, oxidative stress, mtDNA, mitochondrial dysfunction, manganese superoxide dismutase, catalase
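For readers unfamiliar with the copy-number measurement mentioned above, relative mtDNA content from real-time PCR is commonly expressed against a nuclear single-copy gene using the comparative Ct (2^-ΔΔCt) approach. The short sketch below is a generic illustration of that calculation, not the authors' analysis pipeline; the Ct values are hypothetical.

```python
def relative_mtdna_copy_number(ct_mt, ct_nuc, ct_mt_ref, ct_nuc_ref):
    """Comparative Ct (2^-ddCt) estimate of mtDNA copy number relative to a control sample.

    ct_mt, ct_nuc         -- Ct of a mitochondrial gene and a nuclear single-copy gene, test sample
    ct_mt_ref, ct_nuc_ref -- the same two Ct values in the reference (normoglycemic) sample
    """
    delta_ct_test = ct_mt - ct_nuc           # mtDNA relative to nuclear DNA, test sample
    delta_ct_ref = ct_mt_ref - ct_nuc_ref    # mtDNA relative to nuclear DNA, reference sample
    return 2.0 ** -(delta_ct_test - delta_ct_ref)

# Hypothetical Ct values for cells after 1 day of high glucose vs. a normoglycemic control
ratio = relative_mtdna_copy_number(ct_mt=15.8, ct_nuc=24.1, ct_mt_ref=16.4, ct_nuc_ref=24.0)
print(f"Relative mtDNA copy number: {ratio:.2f}")   # > 1 means increased copy number vs. control
```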
Procedia PDF Downloads 247
369 Quantum Mechanics as a Limiting Case of Relativistic Mechanics
Authors: Ahmad Almajid
Abstract:
The idea of unifying quantum mechanics with general relativity is still a dream for many researchers, as physics offers only two paths, no more. Einstein's path is mainly based on particle mechanics, while the path of Paul Dirac and others is based on wave mechanics; the incompatibility of the two approaches is due to the radical difference in the initial assumptions and the mathematical nature of each approach. Logical thinking in modern physics leads us to two problems: - In quantum mechanics, despite its success, the problem of measurement and the problem of wave function interpretation are still obscure. - In special relativity, despite the success of the equivalence of rest mass and energy, the fact that the energy becomes infinite at the speed of light is contrary to logic, because the speed of light is not infinite and the mass of the particle is not infinite either. These contradictions arise from the overlap of relativistic and quantum mechanics in the neighborhood of the speed of light, and in order to solve these problems, one must understand well how to move from relativistic mechanics to quantum mechanics, or rather, how to unify them in a way different from Dirac's method, in order to go along with God or Nature, since, as Einstein said, "God doesn't play dice." From De Broglie's hypothesis about wave-particle duality, Léon Brillouin's definition of the new proper time was deduced, and thus the quantum Lorentz factor was obtained. Finally, using the Euler-Lagrange equation, we come up with new equations in quantum mechanics. In this paper, the two problems in modern physics mentioned above are solved; it can be said that this new approach to quantum mechanics will enable us to unify it with general relativity quite simply. If experiments prove the validity of the results of this research, we will be able in the future to transport matter at speeds close to the speed of light. Finally, this research yielded three important results: 1- the Lorentz quantum factor; 2- Planck energy is a limiting case of Einstein energy; 3- real quantum mechanics, in which new equations for quantum mechanics match and exceed Dirac's equations; these equations have been reached in a completely different way from Dirac's method. These equations show that quantum mechanics is a limiting case of relativistic mechanics. At the Solvay Conference in 1927, the debate about quantum mechanics between Bohr, Einstein, and others reached its climax; while Bohr suggested that if particles are not observed they are in a probabilistic state, Einstein made his famous claim ("God does not play dice"). Thus, Einstein was right, especially when he did not accept the principle of indeterminacy in quantum theory, although experiments support quantum mechanics. However, the results of our research indicate that God really does not play dice; when the electron disappears, it turns into amicable particles or an elastic medium, according to the above obvious equations. Likewise, Bohr was also right when he indicated that there must be a science like quantum mechanics to monitor and study the motion of subatomic particles, but the picture in front of him was blurry and not clear, so he resorted to the probabilistic interpretation.
Keywords: Lorentz quantum factor, Planck’s energy as a limiting case of Einstein’s energy, real quantum mechanics, new equations for quantum mechanics
Procedia PDF Downloads 78
368 Coulomb-Explosion Driven Proton Focusing in an Arched CH Target
Authors: W. Q. Wang, Y. Yin, D. B. Zou, T. P. Yu, J. M. Ouyang, F. Q. Shao
Abstract:
The high-energy-density state, i.e., matter and radiation at energy densities in excess of 10^11 J/m^3, is relevant to materials science, nuclear physics, astrophysics, and geophysics. Laser-driven particle beams are better suited to heating matter as a trigger due to their unique properties of ultrashort duration and low emittance. Compared to X-ray and electron sources, it is easier to generate uniformly heated large-volume material with proton and ion beams because of their highly localized energy deposition. With the construction of state-of-the-art high power laser facilities, creating extreme conditions of high temperature and high density in the laboratory becomes possible. It has been demonstrated that on a picosecond time scale solid-density material can be isochorically heated to over 20 eV by the ultrafast proton beam generated from spherically shaped targets. For the above-mentioned technique, the proton energy density plays a crucial role in the formation of warm dense matter states. Recently, several methods have been devoted to realizing the focusing of the accelerated protons, involving externally applied static fields or specially designed targets interacting with single or multiple laser pulses. In previous works, two co-propagating or counter-propagating laser pulses were employed to strike a submicron plasma shell. However, ultra-high pulse intensities, accurate temporal synchronization, and undesirable transverse instabilities over long times remain intractable for current experimental implementations. Here, a mechanism for the focusing of laser-driven proton beams from two-ion-species arched targets is investigated by multi-dimensional particle-in-cell simulations. When an intense linearly-polarized laser pulse impinges on the thin arched target, all electrons are completely evacuated, leading to a Coulomb-explosive electric field mostly originating from the heavier carbon ions. The lighter protons, in the reference frame moving at the ion sound speed, are accelerated and effectively focused by this radially isotropic field. At a laser intensity of 2.42×10^21 W/cm^2, a ballistic proton bunch with an energy density as high as 2.15×10^17 J/m^3 is produced, and the highest proton energy and the focusing position agree well with theory.
Keywords: Coulomb explosion, focusing, high-energy-density, ion acceleration
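The Coulomb-explosion field driving the focusing can be estimated with elementary electrostatics. The slab expression below is a standard order-of-magnitude estimate for a fully electron-evacuated target layer, given for orientation only; it is not taken from the paper's simulations.

```latex
\begin{equation}
  E_{\max} \simeq \frac{\sigma}{2\varepsilon_{0}} = \frac{e \, Z \, n_{i} \, \ell}{2\,\varepsilon_{0}},
\end{equation}
```

where $Z n_{i}$ is the ion charge density of the exploding layer, $\ell$ its thickness, $\sigma$ the resulting areal charge density, and $\varepsilon_{0}$ the vacuum permittivity. Because this field points along the local target normal, the concave (arched) geometry directs it toward a common focal region, which is the ballistic focusing effect reported above.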
Procedia PDF Downloads 345
367 High Capacity SnO₂/Graphene Composite Anode Materials for Li-Ion Batteries
Authors: Hilal Köse, Şeyma Dombaycıoğlu, Ali Osman Aydın, Hatem Akbulut
Abstract:
Rechargeable lithium-ion batteries (LIBs) have become promising power sources for a wide range of applications, such as mobile communication devices, portable electronic devices and electrical/hybrid vehicles, due to their long cycle life, high voltage and high energy density. Graphite has been widely used as an anode material owing to its extraordinary electronic transport properties, large surface area, and high electrocatalytic activity, although its limited specific capacity (372 mAh g-1) cannot fulfil the increasing demand for lithium-ion batteries with higher energy density. To address this problem, many studies have investigated new electrode materials, and metal oxide/graphene composites have been selected as promising materials for lithium-ion batteries because their specific capacities are much higher than that of graphene. Among them, SnO₂, an n-type and wide band gap semiconductor, has attracted much attention as an anode material for new-generation lithium-ion batteries because of its high theoretical capacity (790 mAh g-1). However, it suffers from large volume changes and agglomeration associated with the Li-ion insertion and extraction processes, which bring about failure and loss of electrical contact of the anode. In addition, there is also a huge irreversible capacity during the first cycle due to the formation of an amorphous Li₂O matrix. To obtain high capacity anode materials, in this work we studied the synthesis and characterization of SnO₂-graphene nanocomposites and investigated the capacity of this free-standing anode material. For this aim, graphite oxide was first obtained from graphite powder using the Hummers method. To prepare the nanocomposites as a free-standing anode, graphite oxide particles were ultrasonicated in distilled water with SnO₂ nanoparticles (1:1, w/w). After vacuum filtration, the GO-SnO₂ paper was peeled off from the PVDF membrane to obtain a flexible, free-standing GO paper. Then, the GO structure was reduced in hydrazine solution. The produced SnO₂-graphene nanocomposites were characterized by scanning electron microscopy (SEM), energy dispersive X-ray spectrometry (EDS), and X-ray diffraction (XRD) analyses. CR2016 cells were assembled in a glove box (MBraun-Labstar). The cells were charged and discharged at 25°C between fixed voltage limits (2.5 V to 0.2 V) at a constant current density on a BST8-MA MTI model battery tester with a 0.2C charge-discharge rate. Cyclic voltammetry (CV) was performed at a scan rate of 0.1 mVs-1, and electrochemical impedance spectroscopy (EIS) measurements were carried out using a Gamry instrument applying a sine wave of 10 mV amplitude over a frequency range of 1000 kHz-0.01 Hz.
Keywords: SnO₂-graphene, nanocomposite, anode, Li-ion battery
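The two specific capacities quoted above follow directly from Faraday's law, which the short check below reproduces. It assumes the usual LiC6 stoichiometry for graphite and, for SnO2, counts only the commonly cited reversible alloying step Sn + 4.4 Li -> Li4.4Sn; it is a back-of-the-envelope verification, not part of the experimental work.

```python
F = 96485.0  # Faraday constant, C/mol

def theoretical_capacity_mah_per_g(electrons_per_formula, molar_mass_g_per_mol):
    """Theoretical gravimetric capacity from Faraday's law (1 mAh = 3.6 C)."""
    return electrons_per_formula * F / (3.6 * molar_mass_g_per_mol)

# Graphite: LiC6, i.e. one electron stored per 6 carbon atoms (6 x 12.011 g/mol)
print(f"graphite: {theoretical_capacity_mah_per_g(1.0, 6 * 12.011):.0f} mAh/g")      # ~372

# SnO2 (M = 150.71 g/mol), reversible alloying step only: 4.4 electrons per formula unit
print(f"SnO2 (alloying): {theoretical_capacity_mah_per_g(4.4, 150.71):.0f} mAh/g")   # ~780
```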
Procedia PDF Downloads 227
366 Uniform and Controlled Cooling of a Steel Block by Multiple Jet Impingement and Airflow
Authors: E. K. K. Agyeman, P. Mousseau, A. Sarda, D. Edelin
Abstract:
During the cooling of hot metals by the circulation of water in channels formed by boring holes in the metal, the rapid phase change of the water due to the high initial temperature of the metal leads to a non-homogeneous distribution of the phases within the channels. The liquid phase dominates towards the entrance of the channel, while the gaseous phase dominates towards the exit. As a result of the different thermal properties of the two phases, the metal is not uniformly cooled. This poses a problem during the cooling of moulds, where a uniform temperature distribution is needed in order to ensure the integrity of the part being formed. In this study, the simultaneous use of multiple water jets and an airflow for the uniform and controlled cooling of a steel block is investigated. A circular hole is bored at the centre of the steel block along its length, and a perforated steel pipe is inserted along the central axis of the hole. Water jets that impact the internal surface of the steel block are generated from the perforations in the steel pipe when the water within it is put under pressure. These jets are oriented in the direction opposite to gravity. An intermittent airflow is imposed in the annular space between the steel pipe and the surface of the hole bored in the steel block. The evolution of the external surface temperature of the block with respect to time is measured with the help of thermocouples and an infrared camera. Due to the high initial temperature of the steel block (350 °C), the water changes phase when it impacts the internal surface of the block. This leads to high heat fluxes. The strategy used to control the cooling speed of the block is the intermittent impingement of its internal surface by the jets. The intervals of impingement and of non-impingement are varied in order to achieve the desired result. An airflow is used during the non-impingement periods as an additional regulator of the cooling speed and to improve the temperature homogeneity of the impinged surface. After testing different jet positions, jet speeds and impingement intervals, it is observed that the external surface of the steel block has a uniform temperature distribution along its length. However, the temperature distribution along its width is not uniform, with the maximum temperature difference being between the centre of the block and its edge. Changing the positions of the jets has no significant effect on the temperature distribution on the external surface of the steel block. It is also observed that reducing the jet impingement interval and increasing the non-impingement interval slows down the cooling of the block and improves the temperature homogeneity of its external surface, while increasing the duration of jet impingement speeds up the cooling process.
Keywords: cooling speed, homogenous cooling, jet impingement, phase change
Procedia PDF Downloads 125
365 Plasma Arc Burner for Pulverized Coal Combustion
Authors: Gela Gelashvili, David Gelenidze, Sulkhan Nanobashvili, Irakli Nanobashvili, George Tavkhelidze, Tsiuri Sitchinava
Abstract:
The development of a new, highly efficient plasma arc combustion system for pulverized coal is presented. As is well known, coal is one of the main energy carriers by means of which electric and heat energy are produced in thermal power stations. The quality of the extracted coal is decreasing very rapidly. Therefore, difficulties associated with its firing and complete combustion arise, and thermo-chemical preparation of the pulverized coal becomes necessary. Usually, other organic fuels (mazut fuel oil or natural gas) are added to low-quality coal for this purpose. The fraction of additional organic fuels varies within the 35-40% range. This decreases dramatically the economic efficiency of such systems. At the same time, the emission of noxious substances into the environment increases. Because of all this, intense development of plasma combustion systems for pulverized coal is taking place throughout the world. These systems are equipped with non-transferred plasma arc torches. They allow practically complete combustion of pulverized coal (without organic additives) in boilers and increase energy and financial efficiency. At the same time, the emission of noxious substances into the environment decreases dramatically. However, non-transferred plasma torches have numerous drawbacks, e.g., complicated construction, low service life (especially in the case of high power), instability of the plasma arc and, most importantly, up to 30% energy loss due to anode cooling. For these reasons, intense development of new plasma technologies that are free from these shortcomings is taking place. In our proposed system, the pulverized coal-air mixture passes through the plasma arc region that burns between two carbon electrodes directly in the pulverized coal muffler burner. Consumption of the carbon electrodes is low and does not require a cooling system, but the main advantage of this method is that the radiation of the plasma arc acts directly on the coal-air mixture, which accelerates the process of thermo-chemical preparation of the coal for combustion. To ensure the stability of the plasma arc in such difficult conditions, we have developed a power source that maintains a fixed current during fluctuations in the arc resistance, automatically compensating through voltage changes, and that allows regulation of the plasma arc length over a wide range. Our combustion system, in which the plasma arc acts directly on the pulverized coal-air mixture, is simple. This should allow a significant improvement of pulverized coal combustion (especially of low-quality coal) and of its economic efficiency. Preliminary experiments demonstrated the successful functioning of the system.
Keywords: coal combustion, plasma arc, plasma torches, pulverized coal
Procedia PDF Downloads 161
364 A Comparison of Direct Water Injection with Membrane Humidifier for Proton Exchange Membrane Fuel Cells Humification
Authors: Flavien Marteau, Pedro Affonso Nóbrega, Pascal Biwole, Nicolas Autrusson, Iona De Bievre, Christian Beauger
Abstract:
Effective water management is essential for the optimal performance of fuel cells. For this reason, many vehicle systems use a membrane humidifier, a passive device that humidifies the air before the cathode inlet. Although they offer good performance, humidifiers are voluminous, costly, and fragile, hence the desire to find an alternative. Direct water injection (DWI) could be an option, although this method lacks maturity. It consists of injecting liquid water as a spray into the dry heated air coming out of the compressor. This work focuses on the evaluation of direct water injection and its performance compared to a membrane humidifier selected as a reference. Two architectures were experimentally tested to humidify an industrial 2 kW short stack made up of 20 cells of 150 cm² each. For the reference architecture, the inlet air is humidified with a commercial membrane humidifier. For the direct water injection architecture, a pneumatic nozzle was selected to generate a fine spray in the air flow with a Sauter mean diameter of about 20 μm. Initial performance was compared over the entire current range based on polarisation curves. Then, the influence of various parameters impacting water management was studied, such as the temperature, the gas stoichiometry, and the water injection flow rate. The experimental results obtained confirm the possibility of humidifying the fuel cell using direct water injection. This study, however, shows the limits of this humidification method, the mean cell voltage being significantly lower under some operating conditions with direct water injection than with the membrane humidifier. The voltage drop reaches 30 mV per cell (4 %) at 1 A/cm² (1.8 bara, 80 °C) and increases under more demanding humidification conditions. It is noteworthy that the heat of compression available is not enough to evaporate all the injected liquid water in the case of DWI, resulting in a mix of liquid water and vapour entering the fuel cell, whereas only vapour is present with the humidifier. Variation of the injection flow rate shows that part of the injected water is useless for humidification and seems to cross the channels without reaching the membrane. The stack was successfully humidified using direct water injection. Nevertheless, our work shows that its implementation requires substantial adaptations and may reduce fuel cell stack performance when compared to conventional membrane humidifiers, although opportunities for optimisation have been identified.
Keywords: cathode humidification, direct water injection, membrane humidifier, proton exchange membrane fuel cell
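To put the humidification duty in context, the water flow required to saturate the cathode air can be estimated from the stack current, the air stoichiometry, and the saturation pressure of water. The sketch below uses the Arden Buck correlation for saturation pressure and operating values close to those quoted above; it is an order-of-magnitude estimate under assumed conditions, not the authors' procedure.

```python
import math

F = 96485.0  # Faraday constant, C/mol

def p_sat_water_pa(temp_c):
    """Arden Buck correlation for the saturation pressure of water over liquid (Pa)."""
    return 611.21 * math.exp((18.678 - temp_c / 234.5) * (temp_c / (257.14 + temp_c)))

def water_for_saturation(current_a_cm2, area_cm2, n_cells, stoich_air, temp_c, pressure_pa):
    """Molar water flow (mol/s) needed to bring dry cathode air to 100% RH at the inlet."""
    o2_flow = current_a_cm2 * area_cm2 * n_cells / (4.0 * F)   # O2 consumed, mol/s
    air_flow = stoich_air * o2_flow / 0.21                     # dry air supplied, mol/s
    y_h2o = p_sat_water_pa(temp_c) / pressure_pa               # water mole fraction at saturation
    return air_flow * y_h2o / (1.0 - y_h2o)

# Assumed stack conditions: 1 A/cm2, 20 cells of 150 cm2, air stoichiometry 2, 1.8 bara, 80 °C
n_h2o = water_for_saturation(1.0, 150.0, 20, stoich_air=2.0, temp_c=80.0, pressure_pa=1.8e5)
print(f"Water required for saturation ≈ {n_h2o * 18.0:.2f} g/s")
```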
Procedia PDF Downloads 44
363 Peak Constituent Fluxes from Small Arctic Rivers Generated by Late Summer Episodic Precipitation Events
Authors: Shawn G. Gallaher, Lilli E. Hirth
Abstract:
As permafrost thaws with the continued warming of the Alaskan North Slope, a progressively thicker active thaw layer is evidently releasing previously sequestered nutrients, metals, and particulate matter to fluvial transport. In this study, we estimate material fluxes on the North Slope of Alaska during the 2019-2022 melt seasons. The watershed of the Alaskan North Slope can be categorized into three regions: mountains, tundra, and coastal plain. Precipitation and discharge data were collected from repeat visits to 14 sample sites for biogeochemical surface water samples, 7 point-discharge measurement sites, 3 project-deployed meteorology stations, and 2 U.S. Geological Survey (USGS) continuous discharge observation sites. The timing, intensity, and spatial distribution of precipitation determine the material flux composition in the Sagavanirktok and surrounding bodies of water, with geogenic constituents (e.g., dissolved inorganic carbon (DIC)) expected from mountain flushing events and biogenic constituents (e.g., dissolved organic carbon (DOC)) expected from transitional tundra precipitation events. Project goals include connecting late summer precipitation events to peak discharge to determine the responses of the watershed to localized atmospheric forcing. Field study measurements showed widespread precipitation in August 2019, generating an increase in total suspended solids, dissolved organic carbon, and iron fluxes from the tundra, shifting the main-stem mountain river biogeochemistry toward tundra source characteristics typically only observed during the spring floods. Intuitively, a large-scale precipitation event (defined by this study as exceeding 12.5 mm of precipitation on a single observation day) would dilute a body of water; however, in this study, concentrations increased with higher discharge responses on several occasions. These large-scale precipitation events continue to produce peak constituent fluxes as the thaw layer increases in depth and late summer precipitation increases, as evidenced by 6 large-scale events in July 2022 alone. This increase in late summer events is in sharp contrast to the 3 or fewer large events in July in each of the last 10 years. Changes in precipitation intensity, timing, and location have introduced late summer peak constituent flux events previously confined to the spring freshet.
Keywords: Alaska North Slope, arctic rivers, material flux, precipitation
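At its simplest, the constituent flux referred to above is the product of concentration and discharge integrated over the event. The sketch below shows that bookkeeping for a single hypothetical late-summer event; the discharge and concentration values are invented placeholders, not project data.

```python
# Hypothetical paired observations: (interval length in s, discharge in m^3/s, DOC in mg/L)
observations = [
    (3600, 12.0, 3.1),
    (3600, 18.5, 4.4),
    (3600, 25.0, 5.0),
    (3600, 16.0, 3.8),
]

# Event load = sum of C * Q * dt; 1 mg/L = 1 g/m^3, so divide by 1000 to convert grams to kilograms
load_kg = sum(discharge * concentration * dt for dt, discharge, concentration in observations) / 1000.0
print(f"Event DOC load ≈ {load_kg:.0f} kg")
```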
Procedia PDF Downloads 75
362 Application of Industrial Ecology to the INSPIRA Zone: Territory Planification and New Activities
Authors: Mary Hanhoun, Jilla Bamarni, Anne-Sophie Bougard
Abstract:
INSPIR’ECO is an 18-month research and innovation project that aims to specify and develop a tool offering new services for industrial actors and territorial planners/managers based on industrial ecology principles. The project is carried out on the territory of Salaise-Sablons, and the services are designed to be deployable on other territories. The Salaise-Sablons area is located at the boundary of 5 departments on a major European economic axis with multimodal traffic (river, rail and road). The perimeter of 330 ha includes 90 hectares occupied by 20 companies, with a total of 900 jobs, and represents a significant potential basin of development. The project involves five multi-disciplinary partners (Syndicat Mixte INSPIRA, ENGIE, IDEEL, IDEAs Laboratory and TREDI). The INSPIR’ECO project is based on the principle that local stakeholders need services to pool and share their activities/equipment/purchases/materials. These services aim to: 1. initiate and promote exchanges between existing companies and 2. identify synergies between pre-existing industries and future companies that could be implemented in INSPIRA. These eco-industrial synergies can be related to: the recovery/exchange of industrial flows (industrial wastewater, waste, by-products, etc.); the pooling of business services (collective waste management, stormwater collection and reuse, transport, etc.); the sharing of equipment (boiler, steam production, wastewater treatment unit, etc.) or resources (splitting job costs, etc.); and the creation of new activities (interface activities necessary for by-product recovery, development of products or services from a newly identified resource, etc.). These services are based on an IT tool used by the interested local stakeholders and intended to support their decision-making. This IT tool includes an economic and environmental assessment of each implantation or pooling/sharing scenario for existing or future industries; is meant for industrial and territorial managers/planners; and is designed to be used for each new industrial project. The specification of the IT tool is made through an agile process throughout the INSPIR’ECO project, fed with users’ expectations gathered in workshop sessions where mock-up interfaces are displayed, and with data availability based on a local and industrial data inventory. These inputs allow the tool to be specified not only with technical and methodological constraints (notably those from the economic and environmental assessments) but also with data availability and users’ expectations. A review of innovative resource management initiatives in port areas was carried out at the beginning of the project to feed the service design step.
Keywords: development opportunities, INSPIR’ECO, INSPIRA, industrial ecology, planification, synergy identification
Procedia PDF Downloads 163
361 Effect of Laser Ablation OTR Films on the Storability of Endive and Pak Choi by Baby Vegetables in Modified Atmosphere Condition
Authors: In-Lee Choi, Min Jae Jeong, Jun Pill Baek, Ho-Min Kang
Abstract:
As vegetable consumption trends change from the past, the use of more convenient forms, such as fresh-cut vegetables, sprouts, and baby vegetables, is increasing rather than the use of whole vegetables. The selected baby vegetables contain various functional compounds, but they have a short shelf life. This study was conducted to improve storability by using suitable laser ablation OTR (oxygen transmission rate) films. Baby vegetables of endive (Cichorium endivia L.) and pak choi (Brassica rapa chinensis), around 10 cm in height, were cultivated in a glass greenhouse for 3 weeks. Harvested endive and pak choi were stored at 8 ℃ for 5 days, packed in PP (polypropylene) containers covered with different laser ablation OTR films (DaeRyung Co., Ltd.) of 1,300 cc, 10,000 cc, 20,000 cc, and 40,000 cc/m²·day·atm, and a control (perforated film), sealed with a heat sealing machine (SC200-IP, Kumkang, Korea). All samples were replicated 5 times. Statistical analysis was carried out using Microsoft Excel 2010, and results were expressed with standard deviations. The fresh weight loss rate of both baby vegetables was less than 0.3% at most in the treated films. On the other hand, the control showed a weight loss rate of around 3.0% on the final storage day, with a corresponding decrease in quantity. Endive showed maximum carbon dioxide contents of less than 2.0% in the 20,000 cc and 40,000 cc treatments. Oxygen content was maintained between 17 and 20% in endive and between 19 and 20% in pak choi. The ethylene concentration of both vegetables was slightly lower in the 20,000 cc treatment than in the others on the final storage day, without statistical significance. In the case of hardness, the 40,000 cc film showed slightly higher values for both baby vegetables, without statistical significance. Visual quality was good in the 10,000 cc and 20,000 cc treatments for endive and pak choi, and no off-flavor appeared in either vegetable. The chlorophyll value (SPAD-502, Minolta, Japan) of endive was similar to the initial value in all treatments except 20,000 cc, which was slightly lower. The chlorophyll value of pak choi decreased in all treatments compared with the initial value, but the treatments did not differ significantly from each other. Leaf color (CR-400, Minolta, Japan) changed significantly in the 40,000 cc treatment for endive. In the case of pak choi, all treatments started yellowing, as indicated by increasing Hunter b values; among them, the control increased substantially. Based on these results, the 10,000 cc film was the most suitable packaging film for storing endive, and the 20,000 cc film for pak choi, with good quality.
Keywords: carbon dioxide, shelf-life, visual quality, pak choi
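The OTR figures above translate directly into an oxygen ingress rate once the lid film area and the oxygen partial-pressure difference across it are fixed. The short sketch below illustrates that conversion with a hypothetical package geometry; the area and pressure difference are assumptions, not values from the study.

```python
def oxygen_ingress_cc_per_day(otr_cc_per_m2_day_atm, film_area_m2, delta_p_o2_atm):
    """Daily oxygen ingress through the lid film, straight from the definition of OTR."""
    return otr_cc_per_m2_day_atm * film_area_m2 * delta_p_o2_atm

# Assumed package: 0.02 m^2 of lidding film and roughly 0.18 atm O2 partial-pressure difference
for otr in (1300, 10000, 20000, 40000):
    ingress = oxygen_ingress_cc_per_day(otr, film_area_m2=0.02, delta_p_o2_atm=0.18)
    print(f"OTR {otr:>6} cc/m2·day·atm -> about {ingress:.0f} cc O2 per day")
```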
Procedia PDF Downloads 789
360 Mass Flux and Forensic Assessment: Informed Remediation Decision Making at One of Canada’s Most Polluted Sites
Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer
Abstract:
Sydney Harbour, Nova Scotia, Canada has long been subject to effluent and atmospheric inputs of contaminants, including thousands of tons of PAHs from a large coking and steel plant which operated in Sydney for nearly a century. The contaminants comprised coal tar residues which were discharged from the coking ovens into a small tidal tributary, which became known as the Sydney Tar Ponds (STPs), and subsequently discharged into Sydney Harbour. An Environmental Impact Statement concluded that mobilization of contaminated sediments posed unacceptable ecological risks; therefore, immobilizing contaminants in the STPs using solidification and stabilization was identified as a primary source-control remediation option to mitigate against continued transport of contaminated sediments from the STPs into Sydney Harbour. Recent developments in contaminant mass flux techniques focus on understanding “mobile” vs. “immobile” contaminants at remediation sites. Forensic source evaluations are also increasingly used for understanding the origins of PAH contaminants in soils or sediments. Flux- and forensic-source-evaluation-informed remediation decision-making uses this information to develop remediation end point goals aimed at reducing off-site exposure and managing potential ecological risk. This study included a review of previous flux studies, calculation of current mass flux estimates, and a forensic assessment using PAH fingerprint techniques during remediation of the STPs, one of Canada’s most polluted sites. Historically, the STPs was thought to be the major source of PAH contamination in Sydney Harbour, with estimated discharges of nearly 800 kg/year of PAHs. However, during three years of remediation monitoring, only 17-97 kg/year of PAHs were discharged from the STPs, which was also corroborated by an independent PAH flux study during the first year of remediation that estimated 119 kg/year. The estimated mass efflux of PAHs from the STPs during remediation was in stark contrast to the ~2,000 kg loading thought necessary to cause a short-term increase in harbour sediment PAH concentrations. These mass flux estimates during remediation were also three to eight times lower than the PAH flux discharged from the STPs a decade prior to remediation, at a time when government studies demonstrated an ongoing reduction in PAH concentrations in harbour sediments. Flux results were also corroborated by forensic source evaluations using PAH fingerprint techniques, which found a common source of PAHs in urban soils and marine and aquatic sediments in and around Sydney. Coal combustion (from historical coking) and coal dust transshipment (from current coal transshipment facilities) are likely the principal sources of PAHs in these media, rather than migration of PAH-laden sediments from the STPs during a large-scale remediation project.
Keywords: contaminated sediment, mass flux, forensic source evaluations, remediation
Procedia PDF Downloads 239
359 The Invisibility of Production: A Comparative Study of the Marker of Modern Urban-Centric Economic Development
Authors: Arpita Banerjee
Abstract:
We now live in a world where half of the human population lives in cities. The migration of people from rural to urban areas is rising continuously. But the promise of higher wages and a better quality of life cannot keep up with the pace of migration. The rate of urbanization is much higher in developing countries. The UN predicts that 95 percent of this urban expansion will take place in the developing world in the next few decades. The population in the urban settlements of the developing nations is soaring, and megacities like Mumbai, Dhaka, Jakarta, Karachi, Manila, Shanghai, Rio de Janeiro, Lima, and Kinshasa are crammed with people, a majority of whom are migrants. Rural-urban migration has taken a new shape with the rising number of smaller cities. Apart from the increase in non-agricultural economic activities, growing demand for resources and energy, an increase in wastes and pollution, and a greater ecological footprint, there is another significant characteristic of the current wave of urbanization. This paper analyses that important marker of urbanization: the invisibility of production sites. The growing urban space ensures that the producers, the production sites, and the production process stay beyond urban visibility. In cities and towns, living is mainly about earning money in either the informal service and small-scale manufacturing sectors (a major part of which is food preparation) or the formal service sector. In both the informal service and small-scale manufacturing sectors and the formal service sector, commodity creation cannot be seen. The urban space happens to be the marketplace, where nature and its services, along with non-urban labour, cannot be seen unless they are sold in the market. Hence, consumers are increasingly becoming disengaged from producers. This paper compares the rate of increase in the size of, and employment in, the informal sector and/or the formal sector of some selected urban areas of India. Also, a comparison over the years of the aforementioned characteristics is presented in this paper, in order to find out how the anonymity of producers to urban consumers has grown as urbanization has risen. This paper also analyses the change in the transport cost of goods into the cities and towns of India and supports the claim made here that the invisibility of production is a crucial marker of modern-day urban-centric economic development. Such urbanization has an important ecological impact. The invisibility of the production site saves urban consumer society from dealing with the ethical and ecological aspects of the production process. Once real-sector production is driven out of the cities and towns, the invisible ethical and ecological impacts of growing urban consumption free the consumers from associating themselves with any responsibility for those impacts.
Keywords: ecological impact of urbanization, informal sector, invisibility of production, urbanization
Procedia PDF Downloads 131
358 A Brazilian Study Applied to the Regulatory Environmental Issues of Nanomaterials
Authors: Luciana S. Almeida
Abstract:
Nanotechnology has revolutionized the world of science and technology, bringing great expectations due to its great potential for application in a wide variety of industrial sectors. However, the same characteristics that make nanoparticles interesting from the point of view of technological application may be undesirable when they are released into the environment. The small size of nanoparticles facilitates their diffusion and transport in the atmosphere, water, and soil and facilitates the entry and accumulation of nanoparticles in living cells. The main objective of this study is to evaluate the environmental regulatory process of nanomaterials in the Brazilian scenario. Three specific objectives were outlined. The first is to carry out a global scientometric study, on a research platform, with the purpose of identifying the main lines of study of nanomaterials in the environmental area. The second is to verify how environmental agencies in other countries have been working on this issue by means of a bibliographic review. And the third is to carry out an assessment of the Brazilian Nanotechnology Draft Law 6741/2013 with the state environmental agencies. This last objective aims to identify the environmental agencies’ knowledge of the subject and the resources available in the country for the implementation of the policy. A questionnaire will be used as a tool for this evaluation, to identify the operational elements and build indicators, through the Environment of Evaluation Application, a computational application developed for building questionnaires. At the end, the need to propose changes to the Draft Law of the National Nanotechnology Policy will be assessed. Initial studies, in relation to the first specific objective, have already identified that Brazil stands out in the production of scientific publications in the area of nanotechnology, although only a minority are focused on environmental impact studies. Regarding the general panorama of other countries, some findings have also been raised. The United States has included the nanoform of substances in an existing EPA (Environmental Protection Agency) program, the TSCA (Toxic Substances Control Act). The European Union issued a draft document amending Regulation 1907/2006 of the European Parliament and Council to cover the nanoform of substances. Both programs are based on the study and identification of environmental risks associated with nanomaterials, taking into consideration the product life cycle. In relation to Brazil, regarding the third specific objective, it is notable that the country does not have any regulations applicable to nanostructures, although there is a Draft Law in progress. In this document, it is possible to identify some requirements related to the environment, such as environmental inspection and licensing; industrial waste management; notification of accidents; and application of sanctions. However, it is not known whether these requirements are sufficient for the prevention of environmental impacts and whether national environmental agencies will know how to apply them correctly. This study intends to serve as a basis for future actions regarding environmental management applied to the use of nanotechnology in Brazil.
Keywords: environment; management; nanotechnology; politics
Procedia PDF Downloads 122
357 Characterization of Aerosol Droplet in Absorption Columns to Avoid Amine Emissions
Authors: Hammad Majeed, Hanna Knuutila, Magne Hilestad, Hallvard Svendsen
Abstract:
Formation of aerosols can cause serious complications in industrial exhaust gas CO2 capture processes. SO3 present in the flue gas can cause aerosol formation in an absorption-based capture process. Small mist droplets and the fog formed can normally not be removed in conventional demisting equipment because their submicron size allows the particles or droplets to follow the gas flow. As a consequence of this, aerosol-based emissions on the order of grams per Nm3 have been identified from post-combustion CO2 capture (PCCC) plants. In absorption processes, aerosols are generated by spontaneous condensation or desublimation processes in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than what would be encountered in a mist-free gas phase in PCCC development. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. In order to understand the mechanics of a particle entering an absorber, an implementation of the model was created in Matlab. The model predicts the droplet size, the droplet internal variable profiles, and the mass transfer fluxes as functions of position in the absorber. The Matlab model is based on a subclass of the method of weighted residuals for boundary value problems, namely the orthogonal collocation method. The model comprises a set of mass transfer equations for the transferring components and the essential diffusion-reaction equations describing the droplet internal profiles for all relevant constituents. Also included is heat transfer across the interface and inside the droplet. This paper presents results describing the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns and gives examples of how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time. Below, some preliminary simulation results for aerosol droplet composition and temperature profiles are given. Results: As an example, a droplet with an initial size of 3 microns, initially containing a 5 M MEA solution, is exposed to an atmosphere free of MEA. The composition of the gas phase and the temperature change with respect to time throughout the absorber.
Keywords: amine solvents, emissions, global climate change, simulation and modelling, aerosol generation
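A drastically simplified version of the droplet growth described above can be illustrated with a single ordinary differential equation for diameter growth by water condensation in the continuum regime; the full model instead resolves coupled internal concentration and temperature profiles by orthogonal collocation, which this sketch does not attempt. All property values below are rough placeholders, not the paper's parameter set.

```python
from scipy.integrate import solve_ivp

# Rough placeholder properties (not the paper's values)
D_V = 2.5e-5     # diffusivity of water vapour in the gas, m^2/s
RHO_L = 1000.0   # liquid water density, kg/m^3
M_W = 0.018      # molar mass of water, kg/mol
R = 8.314        # gas constant, J/(mol K)
T = 313.0        # gas temperature, K

def droplet_growth(t, d, p_inf, p_surf):
    """Continuum-regime condensation law: dd/dt = 4 D_v M_w (p_inf - p_surf) / (rho_l R T d)."""
    return 4.0 * D_V * M_W * (p_inf - p_surf) / (RHO_L * R * T * d)

# A 3-micron droplet in a slightly supersaturated gas (bulk water partial pressure above surface value)
sol = solve_ivp(droplet_growth, (0.0, 0.5), [3e-6], args=(7400.0, 7380.0), max_step=1e-3)
print(f"Diameter after {sol.t[-1]:.2f} s: {sol.y[0, -1] * 1e6:.2f} micrometres")
```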
Procedia PDF Downloads 265356 Processes Controlling Release of Phosphorus (P) from Catchment Soils and the Relationship between Total Phosphorus (TP) and Humic Substances (HS) in Scottish Loch Waters
Authors: Xiaoyun Hui, Fiona Gentle, Clemens Engelke, Margaret C. Graham
Abstract:
Although past work has shown that phosphorus (P), an important nutrient, may form complexes with aqueous humic substances (HS), the principal component of natural organic matter, the nature of such interactions is poorly understood. Humic complexation may not only enhance P concentrations but also change its bioavailability within such waters and, in addition, influence its transport within catchment settings. This project is examining the relationships and associations of P, HS, and iron (Fe) in Loch Meadie, Sutherland, North Scotland, a mesohumic freshwater loch which has been assessed as reference condition with respect to P. The aim is to identify characteristic spectroscopic parameters which can enhance the performance of the model currently used to predict reference condition TP levels for highly coloured Scottish lochs under the Water Framework Directive. In addition to Loch Meadie, samples from other reference condition lochs in north Scotland and Shetland were analysed. Including different types of reference condition lochs (clear water, mesohumic and polyhumic water) allowed the relationship between total phosphorus (TP) and HS to be more fully explored. The pH, [TP], [Fe], UV/Vis absorbance/spectra, [TOC] and [DOC] for loch water samples were obtained using accredited methods. Loch waters were neutral to slightly acidic/alkaline (pH 6-8). [TP] in loch waters was lower than 50 µg L-1, and in Loch Meadie waters was typically <10 µg L-1. [Fe] in loch waters was mainly <0.6 mg L-1, but for some loch water samples [Fe] was in the range 1.0-1.8 mg L-1, and there was a positive correlation with [TOC] (R2=0.61). Lochs were classified as clear water, mesohumic or polyhumic based on water colour. The ranges of colour values of the sampled lochs in each category were 0.2–0.3, 0.2–0.5 and 0.5–0.8 a.u. (10 mm pathlength), respectively. There was also a strong positive correlation between [DOC] and water colour (R2=0.84). The UV/Vis spectra (200-700 nm) for water samples were featureless, with only a slight “shoulder” observed in the 270–290 nm region. Ultrafiltration was then used to separate colloidal and truly dissolved components from the loch waters and, since it contained the majority of aqueous P and Fe, the colloidal component was fractionated by gel filtration chromatography. Gel filtration chromatographic fractionation of the colloids revealed two brown-coloured bands which had distinctive UV/Vis spectral features. The first-eluting band had larger and more aromatic HS molecules than the second band, and both P and Fe were primarily associated with the larger, more aromatic HS. This result demonstrated that P was able to form complexes with Fe-rich components of HS, and thus provided a scientific basis for the significant correlation between [Fe] and [TP] shown by previous monitoring data for reference condition lochs from the Scottish Environment Protection Agency (SEPA). The distinctive features of the HS will be used as the basis for an improved spectroscopic tool. Keywords: total phosphorus, humic substances, Scottish loch water, WFD model
Procedia PDF Downloads 546355 An Investigation on the Sandwich Panels with Flexible and Toughened Adhesives under Flexural Loading
Authors: Emre Kara, Şura Karakuzu, Ahmet Fatih Geylan, Metehan Demir, Kadir Koç, Halil Aykul
Abstract:
Material selection in the design of sandwich structures is a crucial aspect because of the positive or negative influence of the base materials on the mechanical properties of the entire panel. In the literature, it has been shown that the selection of the skin and core materials plays a very important role in the behavior of the sandwich. Besides this, the use of the correct adhesive can make the whole structure show better mechanical results and behavior. In this way, the sandwich structures realized in this study were obtained from the combination of an aluminum foam core and three different glass fiber reinforced polymer (GFRP) skins, using two different commercial adhesives based on flexible polyurethane and toughened epoxy. Static and dynamic tests had already been applied to sandwiches with different types of adhesives. In the present work, static three-point bending tests were performed on sandwiches having an aluminum foam core with a thickness of 15 mm, skins made of three different types of fabrics ([0°/90°] cross-ply E-Glass Biaxial stitched, [0°/90°] cross-ply E-Glass Woven and [0°/90°] cross-ply S-Glass Woven, all with the same thickness of 1.75 mm) and two different commercial adhesives (flexible polyurethane and toughened epoxy based), at different support span distances (L = 55, 70, 80, 125 mm), with the aim of analyzing their flexural performance. The skins used in the study were produced via the Vacuum Assisted Resin Transfer Molding (VARTM) technique and were easily bonded onto the aluminum foam core with the flexible and toughened adhesives under a very low pressure, using a press machine with alignment tabs set to the total thickness of the whole panel. The main results of the flexural loading are: force-displacement curves obtained from the bending tests, peak force values, absorbed energy, collapse mechanisms, adhesion quality and the effect of the support span length and adhesive type. The experimental results showed that the sandwiches with the epoxy-based toughened adhesive and the skins made of S-Glass Woven fabric exhibited the best adhesion quality and mechanical properties. The sandwiches with the toughened adhesive exhibited higher peak force and energy absorption values compared to the sandwiches with the flexible adhesive. The core shear mode occurred in the sandwiches with the flexible polyurethane-based adhesive through the thickness of the core, while the same mode took place in the sandwiches with the toughened epoxy-based adhesive along the length of the core. The use of these sandwich structures can lead to a weight reduction of transport vehicles, providing adequate structural strength under operating conditions. Keywords: adhesive and adhesion, aluminum foam, bending, collapse mechanisms
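For orientation, the core shear stress and facing bending stress of a sandwich beam under three-point bending are commonly estimated with ASTM C393-type expressions; the following is a generic textbook approximation added here for context, not values reported by the authors:

\[
\tau_{\mathrm{core}} \approx \frac{P}{(d + c)\,b}, \qquad
\sigma_{\mathrm{face}} \approx \frac{P\,L}{2\,t\,(d + c)\,b},
\]

where \(P\) is the applied load, \(L\) the support span, \(b\) the specimen width, \(t\) the facing (skin) thickness, \(c\) the core thickness, and \(d = c + 2t\) the total sandwich thickness.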
Procedia PDF Downloads 328354 Alternative Fuel Production from Sewage Sludge
Authors: Jaroslav Knapek, Kamila Vavrova, Tomas Kralik, Tereza Humesova
Abstract:
The treatment and disposal of sewage sludge is one of the most important and critical problems of wastewater treatment plants. Currently, 180 thousand tonnes of sludge dry matter are produced in the Czech Republic, which corresponds to approximately 17.8 kg of stabilized sludge dry matter per year per inhabitant of the Czech Republic. Because sewage sludge contains a large amount of substances that are not beneficial for human health, the conditions for sludge management will be significantly tightened in the Czech Republic from 2023. One of the tested methods of sludge disposal is the production of alternative fuel from sludge from sewage treatment plants and paper production. The paper presents an analysis of the economic efficiency of alternative fuel production from sludge and its use in a fluidized bed boiler with a nominal consumption of 5 t of fuel per hour. The evaluation methodology includes the entire logistics chain, from sludge extraction, through mechanical moisture reduction to about 40%, transport to the pelletizing line and drying for pelleting, to the pelleting itself. For the economic analysis of sludge pellet production, a time horizon of 10 years, corresponding to the expected lifetime of the critical components of the pelletizing line, is chosen. The economic analysis of pelleting projects is based on a detailed analysis of reference pelleting technologies suitable for sludge pelleting. The analysis of the economic efficiency of the pellets is based on the simulation of cash flows associated with the implementation of the project over its lifetime. For a given required return on the invested capital, the price of the resulting product (in EUR/GJ or EUR/t) is sought such that the net present value of the project is zero over the project lifetime. The investor then realizes a return on the investment equal to the discount rate used to calculate the net present value. The calculations take place in a real business environment (taxes, tax depreciation, inflation, etc.) and the inputs work with market prices. At the same time, the opportunity cost principle is respected; the use of waste for alternative fuels is credited with the saved costs of waste disposal. The methodology also accounts for the emission allowances saved due to the displacement of coal by alternative (bio)fuel. Preliminary results of testing pellet production from sludge show that, after suitable modifications of the pelletizer, it is possible to produce sufficiently high-quality pellets from sludge. A mixture of sludge and paper waste has proved to be a more suitable material for pelleting. At the same time, preliminary results of the analysis of the economic efficiency of this sludge disposal method show that, despite the relatively low calorific value of the fuel produced (about 10-11 MJ/kg), this disposal method is economically competitive. This work has been supported by the Czech Technology Agency within the project TN01000048 Biorefining as circulation technology. Keywords: alternative fuel, economic analysis, pelleting, sewage sludge
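As a rough illustration of the price-search step described above (not the authors' model; the cash-flow figures, discount rate and plant size below are assumed placeholders), the break-even fuel price can be found as the price at which the discounted cash flows sum to zero:

```python
# Illustrative sketch: search for the pellet price that makes the project NPV
# zero over a 10-year lifetime. All cash-flow figures are assumed placeholders.
from scipy.optimize import brentq

YEARS = 10
DISCOUNT = 0.08               # required return on invested capital (assumed)
INVESTMENT = 1_500_000.0      # EUR, pelletizing line capex (assumed)
OUTPUT_T = 8_000.0            # t of pellets produced per year (assumed)
OPEX = 350_000.0              # EUR of operating costs per year (assumed)
AVOIDED_DISPOSAL = 120_000.0  # EUR/year of saved sludge-disposal cost (assumed)

def npv(price_eur_per_t):
    """Net present value of the pelleting project for a given pellet price."""
    cash_flows = [-INVESTMENT] + [
        price_eur_per_t * OUTPUT_T + AVOIDED_DISPOSAL - OPEX
        for _ in range(YEARS)
    ]
    return sum(cf / (1.0 + DISCOUNT) ** t for t, cf in enumerate(cash_flows))

# Price (EUR/t) at which NPV = 0, i.e. the investor just earns the discount rate.
break_even_price = brentq(npv, 0.0, 500.0)
print(f"break-even pellet price: {break_even_price:.1f} EUR/t")
```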
Procedia PDF Downloads 135353 ‘Green Gait’ – The Growing Relevance of Podiatric Medicine amid Climate Change
Authors: Angela Evans, Gabriel Gijon-Nogueron, Alfonso Martinez-Nova
Abstract:
Background: The health sector, whose mission is protecting health, also contributes to the climate crisis, the greatest health threat of the 21st century. The carbon footprint of healthcare exceeds 5% of emissions globally, surpassing 7% in the USA and Australia. Global recognition has led to the Paris Agreement, the United Nations Sustainable Development Goals, and the World Health Organization's Climate Change Action Plan. It is agreed that the majority of health impacts stem from energy and resource consumption, as well as the production of greenhouse gases in the environment and deforestation. Many professional medical associations and healthcare providers advocate for their members to take the lead in environmental sustainability. Objectives: To promote and expand 'Green Podiatry' via the three pillars of Exercise, Evidence and Everyday changes, and to highlight the benefits of physical activity and exercise for both human health and planetary health. Walking and running are beneficial for health, provide low-carbon transport, and have evidence-based health benefits. Podiatrists are key healthcare professionals in the physical activity space and can influence and guide their patients to increase physical activity and avert the many non-communicable diseases that are decimating public health, e.g., diabetes, arthritis, depression, cancer and obesity. Methods: Publications, conference presentations, and pilot projects pertinent to 'Green Podiatry' have been undertaken since 2021, and a survey of podiatrists' knowledge and awareness has been carried out. The survey assessed attitudes towards environmental sustainability in the work environment. The questions addressed commuting habits, hours of physical exercise per week, and attitudes in the clinic, such as prescribing unnecessary treatments or emphasizing sports as the primary treatment. Results: Teaching and learning modules have been developed for podiatric medicine students and graduates globally, and these will be made available. A pilot foot orthoses recycling project has been undertaken and will be reported, in addition to established footwear recycling. The preliminary survey found that almost 90% of respondents had no knowledge of green podiatry or footwear recycling. Only 30% prescribe sports/exercise as the primary treatment for patients, and 45% do not prescribe unnecessary treatments. Conclusions: Podiatrists are in a good position to lead in the crucial area of healthcare and its climate change implications. Sufficient education of podiatrists is essential for the profession to promote health and physical activity, to the benefit of all peoples and all communities. Keywords: climate change, gait, green, healthcare, sustainability
Procedia PDF Downloads 91352 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology
Authors: Sanjeev Kumar Appicharla
Abstract:
This paper presents the results of the modelling and analysis of a European Railway Traffic Management System (ERTMS) safety-critical incident on the Cambrian Railway in the UK, using RAIB 17/2019 as the primary input, in order to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published Report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors and underlying factors, together with recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The System for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyze the safety-critical incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identify latent failure conditions (potentially less-than-adequate conditions) by means of the management oversight and risk tree technique. The benefits of the SIRI methodology are threefold. First, it incorporates the "Heuristics and Biases" approach, advanced by Prof. Daniel Kahneman, the 2002 Nobel laureate in Economic Sciences, into the management oversight and risk tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role "optimism bias" plays in programme cost overruns and are aware of bow-tie (fault and event tree) model-based safety risk modelling techniques; however, the role of systematic errors due to "Heuristics and Biases" is not yet appreciated. This approach overcomes the problem of the omission of human and organizational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulatory and railway safety bodies, duty holders, signalling firms, transport planners, and front-line staff, so that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with evidence drawn from practitioners' and academic researchers' publications. This is done to discuss the role of systems thinking in improving decision-making and risk management processes and practices in the IEC 15288 systems engineering standard and in industrial contexts such as GB railways and artificial intelligence (AI). Keywords: accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach
Procedia PDF Downloads 188351 Interfacial Instability and Mixing Behavior between Two Liquid Layers Bounded in Finite Volumes
Authors: Lei Li, Ming M. Chai, Xiao X. Lu, Jia W. Wang
Abstract:
The mixing process of two liquid layers in a cylindrical container involves the upper liquid, with higher density, rushing into the lower liquid with lighter density, the lower liquid rising into the upper liquid, and the two liquid layers interacting with each other, forming vortices, spreading or dispersing into each other, and entraining or mixing with each other. It is a complex process composed of flow instability, turbulent mixing and other multiscale physical phenomena, and it evolves rapidly. In order to explore the mechanism of the process and make further investigations, experiments on the interfacial instability and mixing behavior between two liquid layers bounded in different volumes were carried out, applying the planar laser-induced fluorescence (PLIF) and high-speed camera (HSC) techniques. According to the results, the evolution of the interfacial instability between immiscible liquids develops faster than the theoretical rate given by Rayleigh-Taylor instability (RTI) theory. It is reasonable to conjecture that mechanisms other than the RTI play key roles in the mixing process of the two liquid layers. The results show that the invading velocity of the upper liquid into the lower liquid does not depend on the upper liquid's volume (height). Compared to the cases where the upper and lower containers have identical diameters, when the lower liquid volume occupies a larger geometric space the upper liquid spreads and expands into the lower liquid more quickly during the evolution of the interfacial instability, indicating that the container wall has an important influence on the mixing process. In the experiments on miscible liquid layers' mixing, the diffusion time and pattern of the interfacial mixing also do not depend on the upper liquid's volume, and when the lower liquid volume occupies a larger geometric space, the action of the bounding wall on the falling and rising liquid flow decreases and the interfacial mixing effects also attenuate. Therefore, it is also concluded that the weight of the upper, heavier liquid volume is not the reason for the fast evolution of the interfacial instability between the two liquid layers, and that the action of the bounding wall on the unstable and mixing flow is limited. Numerical simulations of the immiscible liquid layers' interfacial instability flow using the VOF method show that the typical flow pattern agrees with the experiments; however, the calculated instability development is much slower than the experimental measurement. The numerical simulation of the miscible liquids' mixing, which applies Fick's diffusion law in the components' transport equation, shows a much faster mixing rate than the experiments at the liquids' interface at the initial stage. It can be presumed that interfacial tension plays an important role in the interfacial instability between two liquid layers bounded in a finite volume. Keywords: interfacial instability and mixing, two liquid layers, Planar Laser Induced Fluorescence (PLIF), High Speed Camera (HSC), interfacial energy and tension, Cahn-Hilliard Navier-Stokes (CHNS) equations
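For reference, the theoretical rate mentioned above for the immiscible case is usually taken from the classical linear Rayleigh-Taylor dispersion relation; this is a standard textbook result quoted here for context, not a formula given by the authors:

\[
\sigma^2 = A\,g\,k - \frac{\gamma\,k^3}{\rho_h + \rho_l},
\qquad
A = \frac{\rho_h - \rho_l}{\rho_h + \rho_l},
\]

where \(\sigma\) is the growth rate of a small interface perturbation of wavenumber \(k\), \(g\) is the gravitational acceleration, \(A\) is the Atwood number, \(\rho_h\) and \(\rho_l\) are the heavy and light liquid densities, and \(\gamma\) is the interfacial tension (the last term stabilizes short wavelengths for immiscible liquids).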
Procedia PDF Downloads 248350 The Effect of Sea Buckthorn (Hippophae rhamnoides L.) Berries on Some Quality Characteristics of Cooked Pork Sausages
Authors: Anna M. Salejda, Urszula Tril, Grażyna Krasnowska
Abstract:
The aim of this study was to analyze selected quality characteristics of cooked pork sausages manufactured with the addition of sea buckthorn (Hippophae rhamnoides L.) berry preparations. The stuffings of the model sausages consisted of pork, backfat, water and additives such as curing salt and sodium isoascorbate. The functional additives used in the production process were two preparations obtained from dried sea buckthorn berries, in the form of a powder and a brew. The powder of dried berries was added in amounts of 1 and 3 g, while the water infusion replaced 50 and 100% of the ice water included in the meat product formula. Control samples were produced without functional additives. The experimental stuffings were heat treated in a water bath and stored for 4 weeks under chilled conditions (4±1ºC). Physical parameters of colour and texture profile and technological parameters such as acidity, weight losses and water activity were estimated. The effect of the sea buckthorn berry preparations on lipid oxidation during storage of the final products was determined by the TBARS method. The studies showed that the addition of sea buckthorn preparations to the meat-fatty batters significantly (P≤0.05) reduced the pH values of the sausage samples after thermal treatment. Moreover, the addition of berry powder caused significant differences (P≤0.05) in weight losses after the cooking process. Analysis of the texture profile results indicated that the use of an infusion prepared from dried sea buckthorn berries increased the springiness, gumminess and chewiness of the final meat products. At the same time, the highest amount of sea buckthorn berry powder in the recipe decreased all measured texture parameters. The use of the experimental preparations significantly decreased (P≤0.05) the lightness (L* parameter of colour) of the meat products. Simultaneously, the introduction of 1 and 3 grams of sea buckthorn berry powder into the meat-fatty batter increased the redness (a* parameter) of the samples under investigation. A higher content of substances reacting with thiobarbituric acid was observed in the meat products produced without functional additives. It was observed that sea buckthorn berry powder added to the meat-fatty batters gave greater protection against lipid oxidation in the cooked sausages. Keywords: sea buckthorn, meat products, texture, color parameters, lipid oxidation
Procedia PDF Downloads 296349 A Novel Harmonic Compensation Algorithm for High Speed Drives
Authors: Lakdar Sadi-Haddad
Abstract:
The study of very high-speed electrical drives has seen a resurgence of interest in the past few years, as an inventory of the scientific papers and patents dealing with the subject makes clear. In fact, the democratization of magnetic bearing technology is at the origin of recent developments in high-speed applications. The main advantage of these machines is a much higher power density than the state of the art. Nevertheless, particular attention should be paid to the design of the inverter as well as to control and command. The surface-mounted permanent magnet synchronous machine is the most appropriate technology to address high-speed issues. However, it has the drawback of using a carbon sleeve to retain the magnets, which could otherwise tear off because of the centrifugal forces generated at the rotor periphery. Carbon fiber is well known for its mechanical properties, but it has poor heat conduction, which results in very poor evacuation of the eddy current losses induced in the magnets by time and space stator harmonics. The three-phase inverter is the main harmonic source causing eddy currents in the magnets. In high-speed applications such harmonics are harmful because, on the one hand, the characteristic impedance is very low and, on the other hand, the ratio between the switching frequency and that of the fundamental is much lower than in the state of the art. To minimize the impact of these harmonics, a first lever is to use a modulation strategy producing low harmonic distortion, while a second is to introduce a sine filter between the inverter and the machine to smooth the voltage and current waveforms applied to the machine. Nevertheless, in very high-speed machines the interaction of the processes mentioned above may introduce particular harmonics that can irreversibly damage the system: harmonics at the resonant frequency, harmonics at the shaft mode frequency, subharmonics, etc. Some studies address these issues but treat these phenomena with separate solutions (a specific modulation strategy, active damping methods, etc.). The purpose of this paper is to present a complete new active harmonic compensation algorithm, based on an improvement of the standard vector control, as a global solution to all these issues. The presentation is based on a complete theoretical analysis of the processes leading to the generation of such undesired harmonics. A state of the art of available solutions is then provided before developing the content of the new active harmonic compensation algorithm. The study is completed by a validation using simulations and a practical case on a high-speed machine. Keywords: active harmonic compensation, eddy current losses, high speed machine
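As generic background (this is an illustration of a common selective compensation technique, not the algorithm proposed in the paper), harmonic compensation is often added to vector control as a resonant term tuned at the harmonic frequency to be suppressed. A minimal sketch of such a resonant compensator, discretized for a digital drive controller, could look as follows; all gains and frequencies are assumed example values:

```python
# Generic illustration (not the paper's algorithm): a resonant compensator
# G(s) = 2*Kr*wc*s / (s^2 + 2*wc*s + w0^2) tuned at a selected harmonic,
# discretized with the bilinear (Tustin) transform.
import numpy as np
from scipy.signal import bilinear, lfilter

F_SW = 40e3           # switching / sampling frequency, Hz (assumed)
F_FUND = 1000.0       # electrical fundamental of a high-speed machine, Hz (assumed)
H = 5                 # harmonic order to compensate (assumed)
KR = 200.0            # resonant gain (assumed)
WC = 2 * np.pi * 5.0  # resonance bandwidth, rad/s (assumed)
W0 = 2 * np.pi * H * F_FUND

# Continuous-time resonant term and its discrete-time equivalent
num_c = [2 * KR * WC, 0.0]
den_c = [1.0, 2 * WC, W0 ** 2]
num_d, den_d = bilinear(num_c, den_c, fs=F_SW)

# Example: apply the compensator to a current-error signal containing the 5th harmonic;
# the output would be added to the vector-control voltage reference.
t = np.arange(0, 0.02, 1 / F_SW)
error = 0.1 * np.sin(2 * np.pi * H * F_FUND * t)  # residual 5th-harmonic current error
compensation = lfilter(num_d, den_d, error)       # harmonic voltage correction
```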
Procedia PDF Downloads 395348 Experimental and Numerical Investigation on the Torque in a Small Gap Taylor-Couette Flow with Smooth and Grooved Surface
Authors: L. Joseph, B. Farid, F. Ravelet
Abstract:
Fundamental studies have been performed on bifurcation, instabilities and turbulence in Taylor-Couette flow and applied to many engineering applications, such as astrophysical accretion disk models, shrouded fans, and electric motors. Evaluating the performance of such rotating machinery requires a better understanding of the fluid flow distribution in order to quantify the power losses and the heat transfer distribution. The present investigation focuses on Taylor-Couette flow with a high gap ratio and high rotational speeds, for smooth and grooved surfaces. So far, little work has been done in a very narrow gap with very high rotation rates and, to the best of our knowledge, none with this combination and a grooved surface. We study numerically the turbulent flow between two coaxial cylinders, where R1 and R2 are the inner and outer radii respectively, and only the inner cylinder rotates. The gap between the rotor and the stator varies between 0.5 and 2 mm, which corresponds to a radius ratio η = R1/R2 between 0.96 and 0.99 and an aspect ratio Γ = L/d between 50 and 200, where L is the length of the rotor and d is the gap between the two cylinders. The scaling of the torque with the Reynolds number is determined at different gaps for different smooth and grooved surfaces (and also with different numbers of grooves). The fluid in the gap is air. Re varies between 8000 and 30000. Another dimensionless parameter that plays an important role in distinguishing the flow regime is the Taylor number, which corresponds to the ratio between the centrifugal and the viscous forces (from 6.7 × 10^5 to 4.2 × 10^7). The torque is first evaluated with RANS and U-RANS models and compared to empirical models and experimental results. A mesh convergence study has been done for each rotor-stator combination, and the torque results are compared for different meshes in 2D. For the smooth surfaces, the models used overestimate the torque compared to the empirical equations in the bibliography; the models closest to the empirical ones are those solving the equations near the wall. The greatest torque was achieved with the grooved surface. The tangential velocity in the gap was always higher between the rotor and the stator than at the rotor wall, and it was greatest inside the grooves, in the recirculation zones. In order to avoid endwall effects, long cylinders (100 mm) are used in our setup, and the torque is measured by a co-rotating torquemeter. The rotor is driven by the air turbine of an automotive turbo-compressor to reach high angular velocities. The experimental measurements cover rotational speeds of up to 50,000 rpm, and the first experimental results are in agreement with the numerical ones. Currently, a quantitative study is being performed on the grooved surface to determine the effect of the number of grooves on the torque, both experimentally and numerically. Keywords: Taylor-Couette flow, high gap ratio, grooved surface, high speed
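For orientation, the sketch below evaluates the non-dimensional parameters quoted above, together with the analytical laminar circular Couette torque often used as a reference for the turbulent torque scaling. The geometry and fluid properties are assumed example values consistent with the stated ranges, and the definitions of Re and Ta shown are common conventions that vary across the literature:

```python
# Illustrative sketch (assumed example values and conventions): geometric and
# dynamic parameters of a narrow-gap Taylor-Couette flow, plus the analytical
# laminar torque of circular Couette flow with the inner cylinder rotating.
import numpy as np

R2 = 0.050        # outer (stator) radius, m (assumed)
d = 0.5e-3        # radial gap, m (0.5-2 mm in the study)
L = 0.100         # rotor length, m (100 mm, as in the setup)
N_RPM = 50_000.0  # rotation speed, rpm
NU = 1.5e-5       # kinematic viscosity of air, m^2/s
RHO = 1.2         # air density, kg/m^3

R1 = R2 - d
omega = 2 * np.pi * N_RPM / 60.0

eta = R1 / R2                       # radius ratio
gamma = L / d                       # aspect ratio
Re = omega * R1 * d / NU            # gap Reynolds number
Ta = omega**2 * R1 * d**3 / NU**2   # one common Taylor-number definition

# Analytical torque of laminar circular Couette flow (inner cylinder rotating)
mu = RHO * NU
T_lam = 4 * np.pi * mu * L * omega * R1**2 * R2**2 / (R2**2 - R1**2)

print(f"eta={eta:.4f}, Gamma={gamma:.0f}, Re={Re:.0f}, Ta={Ta:.2e}, "
      f"T_lam={T_lam * 1e3:.2f} mN.m")
```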
Procedia PDF Downloads 407347 Mixing Enhancement with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure Micromixer Using Different Mixing Fluids
Authors: Ayalew Yimam Ali
Abstract:
The T-shaped microchannel is used to mix miscible or immiscible fluids with different viscosities. However, mixing at the entrance of the T-junction microchannel can be difficult because of the micro-scale laminar flow of the two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow conditions is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce a rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. The newly developed 3D trapezoidal triangular structure spine used in this study was fabricated with sophisticated CNC cutting tools, which were used to create a microchannel mold with the 3D trapezoidal triangular structure spine along the T-junction longitudinal mixing region. The molds for the 3D trapezoidal structure, with sharp edge tip angles of 30° and a trapezoidal triangular sharp edge tip depth of 0.3 mm, were machined from PMMA (polymethyl methacrylate) glass on an advanced CNC machine, and the channel was manufactured in PDMS (polydimethylsiloxane), built up longitudinally on the top surface of the Y-junction microchannel using soft lithography nanofabrication strategies. Flow visualization of the 3D rolling steady acoustic streaming and of the mixing enhancement with the high-viscosity miscible fluids was carried out with micro-particle image velocimetry (μPIV) techniques for different trapezoidal triangular structure longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes, in order to study the 3D acoustic streaming flow patterns and the mixing enhancement. The streaming velocity fields and vorticity fields show vorticity about 16 times higher than in the absence of acoustic streaming, and the mixing performance has been evaluated at various amplitudes, flow rates, and frequencies using the grayscale value of pixel intensity with MATLAB software. Mixing experiments were performed using a fluorescent green dye solution with de-ionized water on one inlet side of the channel and the de-ionized water-glycerol mixture on the other inlet side of the T-channel, and the degree of mixing was found to improve greatly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that mixing around the entrance junction mixing zone at high volume flow rates was enhanced by the formation of a new, intense, three-dimensional steady streaming rolling motion with the two miscible high-viscosity fluids, which are otherwise governed by laminar flow transport phenomena. Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement
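A common way to quantify the degree of mixing from grayscale pixel intensities is the normalized standard deviation of intensity over the mixing region. The sketch below is a minimal, assumed version of such a metric, not the authors' exact MATLAB procedure:

```python
# Minimal sketch (assumed, not the authors' exact MATLAB procedure):
# degree of mixing from the grayscale pixel intensities of a fluorescence image,
# using the normalized standard deviation of intensity over the mixing region.
import numpy as np

def mixing_index(image, unmixed_std):
    """Degree of mixing in [0, 1]: 1 = perfectly mixed, 0 = fully segregated.
    `image` is a grayscale array of the mixing region; `unmixed_std` is the
    intensity standard deviation of a reference, completely unmixed image."""
    intensities = image.astype(float).ravel()
    return 1.0 - np.std(intensities) / unmixed_std

# Example with synthetic data: a segregated (half dye / half water) reference
# frame and a partially mixed frame with intermediate intensities.
reference = np.concatenate([np.full(5000, 255.0), np.full(5000, 0.0)])
partially_mixed = np.random.normal(127.0, 20.0, 10000).clip(0, 255)
print(f"mixing degree: {mixing_index(partially_mixed, np.std(reference)):.2%}")
```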
Procedia PDF Downloads 20346 Effect of the Orifice Plate Specifications on Coefficient of Discharge
Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer
Abstract:
Because the orifice plate is relatively inexpensive, requires very little maintenance and is only calibrated during plant turnarounds, it has come into very widespread use in the gas industry. Measurement inaccuracy in fiscal metering stations may well be the most important factor behind mischarges in the natural gas industry in Libya. Even a trivial measurement error can add a rapidly escalating financial burden to custody transfer transactions. The unaccounted gas quantity transferred annually via orifice plates in Libya can be estimated at multi-million dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard. Hence, increasing the knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a rapid pace, CFD has become the most time- and cost-efficient versatile tool for in-depth analysis of the fluid mechanics and heat and mass transfer of various industrial applications. Gaining deeper insight into the underlying physical phenomena and predicting all relevant parameters and variables with high spatial and temporal resolution are the greatest advantages of CFD. In this paper, the flow phenomena for air passing through an orifice meter were numerically analyzed with CFD-based modeling, giving important information about the effect of the orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. The discharge coefficients were compared with the discharge coefficients estimated by ISO 5167. The influences of the orifice plate bore thickness, orifice plate thickness, bevel angle, and the perpendicularity and buckling of the orifice plate were all duly investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5 and a Reynolds number of 91100 was taken as a model case. The results highlighted that the discharge coefficients were highly responsive to the variation of the plate specifications and that, in all cases, the discharge coefficients for the D and D/2 tappings were very close to those of the vena contracta tappings, which are considered an ideal arrangement. Also, in a general sense, it was found that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, and thus further thorough consideration is still needed. Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications
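For context, the role of the discharge coefficient C in an orifice meter is captured by the ISO 5167 mass-flow relation, sketched below with assumed example values (C is taken here as a given input, not computed from the ISO 5167 correlation, and the differential pressure and density are placeholders):

```python
# Illustrative sketch: the orifice mass-flow relation used in ISO 5167,
#   q_m = C / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho),
# evaluated with assumed example values.
import math

D = 2 * 0.0254   # pipe diameter, m (2 in, as in the modeled case)
beta = 0.5       # beta ratio, as in the modeled case
d = beta * D     # orifice bore diameter, m
C = 0.61         # discharge coefficient (assumed typical value)
eps = 1.0        # expansibility factor (~1 for small dp, assumed)
dp = 5_000.0     # differential pressure across the tappings, Pa (assumed)
rho = 1.2        # upstream air density, kg/m^3 (assumed)

q_m = (C / math.sqrt(1 - beta**4)) * eps * (math.pi / 4) * d**2 * math.sqrt(2 * dp * rho)
print(f"mass flow rate: {q_m * 1000:.2f} g/s")
```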
Procedia PDF Downloads 119