Search results for: nuclear fuel rod
74 The U.S. Missile Defense Shield and Global Security Destabilization: An Inconclusive Link
Authors: Michael A. Unbehauen, Gregory D. Sloan, Alberto J. Squatrito
Abstract:
Missile proliferation and global stability are intrinsically linked. Missile threats continually appear at the forefront of global security issues. North Korea’s recently demonstrated nuclear and intercontinental ballistic missile (ICBM) capabilities, for the first time since the Cold War, renewed public interest in strategic missile defense capabilities. To protect from limited ICBM attacks from so-called rogue actors, the United States developed the Ground-based Midcourse Defense (GMD) system. This study examines if the GMD missile defense shield has contributed to a safer world or triggered a new arms race. Based upon increased missile-related developments and the lack of adherence to international missile treaties, it is generally perceived that the GMD system is a destabilizing factor for global security. By examining the current state of arms control treaties as well as existing missile arsenals and ongoing efforts in technologies to overcome U.S. missile defenses, this study seeks to analyze the contribution of GMD to global stability. A thorough investigation cannot ignore that, through the establishment of this limited capability, the U.S. violated longstanding, successful weapons treaties and caused concern among states that possess ICBMs. GMD capability contributes to the perception that ICBM arsenals could become ineffective, creating an imbalance in favor of the United States, leading to increased global instability and tension. While blame for the deterioration of global stability and non-adherence to arms control treaties is often placed on U.S. missile defense, the facts do not necessarily support this view. The notion of a renewed arms race due to GMD is supported neither by current missile arsenals nor by the inevitable development of new and enhanced missile technology, to include multiple independently targeted reentry vehicles (MIRVs), maneuverable reentry vehicles (MaRVs), and hypersonic glide vehicles (HGVs). The methodology in this study encapsulates a period of time, pre- and post-GMD introduction, while analyzing international treaty adherence, missile counts and types, and research in new missile technologies. The decline in international treaty adherence, coupled with a measurable increase in the number and types of missiles or research in new missile technologies during the period after the introduction of GMD, could be perceived as a clear indicator of GMD contributing to global instability. However, research into improved technology (MIRV, MaRV and HGV) prior to GMD, as well as a decline of various global missile inventories and testing of systems during this same period, would seem to invalidate this theory. U.S. adversaries have exploited the perception of the U.S. missile defense shield as a destabilizing factor as a pretext to strengthen and modernize their militaries and justify their policies. As a result, it can be concluded that global stability has not significantly decreased due to GMD; but rather, the natural progression of technological and missile development would inherently include innovative and dynamic approaches to target engagement, deterrence, and national defense.Keywords: arms control, arms race, global security, GMD, ICBM, missile defense, proliferation
Procedia PDF Downloads 143
73 Numerical and Experimental Comparison of Surface Pressures around a Scaled Ship Wind-Assisted Propulsion System
Authors: James Cairns, Marco Vezza, Richard Green, Donald MacVicar
Abstract:
Significant legislative changes are set to revolutionise the commercial shipping industry. Upcoming emissions restrictions will force operators to look at technologies that can improve the efficiency of their vessels, reducing fuel consumption and emissions. A device which may help in this challenge is the Ship Wind-Assisted Propulsion system (SWAP), an actively controlled aerofoil mounted vertically on the deck of a ship. The device functions in a similar manner to a sail on a yacht, whereby the aerodynamic forces generated by the sail reach an equilibrium with the hydrodynamic forces on the hull and a forward velocity results. Numerical and experimental testing of the SWAP device is presented in this study. Circulation control takes the form of a co-flow jet aerofoil, utilising both blowing from the leading edge and suction from the trailing edge. A jet at the leading edge uses the Coanda effect to energise the boundary layer in order to delay flow separation and create high lift with low drag. The SWAP concept was originated by the research and development team at SMAR Azure Ltd. The device will be retrofitted to existing ships so that a component of the aerodynamic forces acts forward and partially reduces the reliance on existing propulsion systems. Wind tunnel tests have been carried out at the de Havilland wind tunnel at the University of Glasgow on a 1:20 scale model of this system. The tests aim to understand the airflow characteristics around the aerofoil and investigate the approximate lift and drag coefficients that an early iteration of the SWAP device may produce. The data exhibit clear trends of increasing lift as injection momentum increases, with critical flow attachment points being identified at specific combinations of jet momentum coefficient, Cµ, and angle of attack, AOA. Various combinations of flow conditions were tested, with the jet momentum coefficient ranging from 0 to 0.7 and the AOA ranging from 0° to 35°. The Reynolds number across the tested conditions ranged from 80,000 to 240,000. Comparisons between 2D computational fluid dynamics (CFD) simulations and the experimental data are presented for multiple Reynolds-Averaged Navier-Stokes (RANS) turbulence models in the form of normalised surface pressure comparisons. These show good agreement for most of the tested cases. However, certain simulation conditions exhibited a well-documented shortcoming of RANS-based turbulence models for circulation control flows and over-predicted surface pressures and lift coefficients for fully attached flow cases. Work must be continued in finding an all-encompassing modelling approach which predicts surface pressures well for all combinations of jet injection momentum and AOA.
Keywords: CFD, circulation control, Coanda, turbo wing sail, wind tunnel
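For readers unfamiliar with the blowing parameter referred to above, the jet momentum coefficient Cµ normalises the momentum injected at the leading-edge slot by the freestream dynamic pressure and a reference area. The short sketch below evaluates Cµ for one hypothetical test point; the slot dimensions, jet velocity, model chord and freestream speed are illustrative assumptions, not data from the Glasgow wind-tunnel campaign.

```python
# Illustrative calculation of the jet momentum coefficient for a circulation-control
# (co-flow jet) aerofoil: C_mu = m_dot * V_jet / (q_inf * S_ref).
# All numbers below are assumed values for demonstration only.

rho = 1.225            # air density, kg/m^3
U_inf = 15.0           # freestream velocity, m/s (assumed)
chord = 0.15           # reference chord of a small-scale model, m (assumed; Re ~ 1.5e5)
span = 1.0             # reference span, m (assumed)
S_ref = chord * span   # reference area, m^2

slot_height = 0.5e-3   # blowing-slot height, m (assumed)
slot_width = span      # slot runs along the span (assumed)
V_jet = 60.0           # jet exit velocity, m/s (assumed)

m_dot = rho * slot_height * slot_width * V_jet   # jet mass flow, kg/s
q_inf = 0.5 * rho * U_inf ** 2                   # freestream dynamic pressure, Pa

C_mu = m_dot * V_jet / (q_inf * S_ref)
print(f"C_mu = {C_mu:.3f}")   # lands inside the 0-0.7 range scanned in the study
```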
Procedia PDF Downloads 134
72 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Differenced Normalized Burnt Ratio and Neural Network Approach
Authors: Sunil Chandra, Himanshu Rawat, Vikas Gusain, Triparna Barman
Abstract:
Forest fires are a recurrent phenomenon in the Himalayan region owing to the presence of vulnerable forest types, topographical gradients, climatic conditions, and anthropogenic pressure. The present study focuses on the identification of forest fire-affected areas in a small part of the West Himalayan region using the differenced normalized burnt ratio method and spectral unmixing methods. The study area has a rugged terrain with the presence of sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. The major reason for fires in this region is anthropogenic in nature, with the practice of human-induced fires for getting fresh leaves, scaring wild animals to protect agricultural crops, grazing practices within reserved forests, and igniting fires for cooking and other reasons. The fires caused by the above reasons affect a large area on the ground, necessitating its precise estimation for further management and policy making. In the present study, two approaches have been used for carrying out a burnt area analysis. The first approach uses a differenced normalized burnt ratio (dNBR) index that is computed from the Short-Wave Infrared (SWIR) and Near Infrared (NIR) bands of the Sentinel-2 image. The results of the dNBR have been compared with the outputs of the spectral unmixing methods. It has been found that the dNBR produces good results in fire-affected areas with a homogeneous forest stratum and slopes of less than 5 degrees. However, in rugged terrain where the landscape is shaped by topographical variation, vegetation type, and tree density, the results may be strongly influenced by the effects of topography, complexity in tree composition, fuel load composition, and soil moisture. Hence, burnt area assessment under such variable conditions may not be carried out effectively using the dNBR approach that is commonly followed for burnt area assessment over a large area. Hence, another approach attempted in the present study utilizes a spectral unmixing method where each individual pixel is tested before assigning an information class to it. The method uses a neural network approach utilizing Sentinel-2 bands. The training and testing data are generated from the Sentinel-2 data and the national field inventory, which are further used for generating outputs using ML tools. The analysis of the results indicates that the fire-affected regions and their severity can be better estimated using spectral unmixing methods, which have the capability to resolve the noise in the data and can classify each individual pixel to the precise burnt/unburnt class.
Keywords: categorical data, log linear modeling, neural network, shifting cultivation
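For readers who want to reproduce the index-based part of this workflow, the sketch below shows how a differenced Normalized Burn Ratio is typically computed from Sentinel-2 NIR (band 8) and SWIR (band 12) reflectance arrays. The band choice, the toy reflectance values and the severity threshold are common conventions assumed here for illustration, not values taken from the paper.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    nir = nir.astype(np.float64)
    swir = swir.astype(np.float64)
    return (nir - swir) / (nir + swir + 1e-9)   # small epsilon avoids division by zero

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: pre-fire NBR minus post-fire NBR."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Toy 2x2 reflectance arrays standing in for Sentinel-2 B8 (NIR) and B12 (SWIR).
nir_pre   = np.array([[0.45, 0.40], [0.42, 0.44]])
swir_pre  = np.array([[0.18, 0.20], [0.19, 0.17]])
nir_post  = np.array([[0.25, 0.38], [0.24, 0.43]])
swir_post = np.array([[0.30, 0.21], [0.29, 0.18]])

delta = dnbr(nir_pre, swir_pre, nir_post, swir_post)
burnt_mask = delta > 0.27   # an often-used moderate-severity threshold (assumption)
print(delta.round(3))
print(burnt_mask)
```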
Procedia PDF Downloads 54
71 Numerical Investigations of Unstable Pressure Fluctuations Behavior in a Side Channel Pump
Authors: Desmond Appiah, Fan Zhang, Shouqi Yuan, Wei Xueyuan, Stephen N. Asomani
Abstract:
The side channel pump has distinctive hydraulic performance characteristics over other vane pumps because of its generation of high pressure heads in only one impeller revolution. Hence, it is increasingly utilized and applied in the petrochemical, food processing, automotive and aerospace fuel pumping fields, where high heads are required at low flows. The side channel pump is characterized by unstable flow because, after fluid flows into the impeller passage, it moves into the side channel, comes back to the impeller again and then moves to the next circulation. Consequently, the flow leaves the side channel pump following a helical path. However, the pressure fluctuation exhibited in the flow contributes greatly to the unwanted noise and vibration associated with it. In this paper, a side channel pump prototype was examined thoroughly through numerical calculations based on the SST k-ω turbulence model to ascertain the pressure fluctuation behavior. The pressure fluctuation intensity of the 3D unsteady flow was carefully investigated under different working conditions: 0.8QBEP, 1.0QBEP and 1.2QBEP. The results showed that the pressure fluctuation distribution around the pressure side of the blade is greater than on the suction side at the impeller and side channel interface (z=0) for all three operating conditions. The part-load condition 0.8QBEP recorded the highest pressure fluctuation distribution because of the high circulation velocity, which causes an intense exchange flow between the impeller and side channel. Time- and frequency-domain spectra of the pressure fluctuation patterns in the impeller and the side channel were also analyzed at the best efficiency point, QBEP, using the solution from the numerical calculations. It was observed from the time-domain analysis that the pressure fluctuation in the impeller flow passage increased steadily until the flow reached the interrupter, which separates the low-pressure inflow from the high-pressure outflow. The pressure fluctuation amplitudes in the frequency-domain spectrum at the different monitoring points depicted a gently decreasing trend of the pressure amplitudes, which was common among the operating conditions. The frequency domain also revealed that the main excitation frequencies occurred at 600 Hz, 1200 Hz, and 1800 Hz and continued at integer multiples of the rotating shaft frequency. Also, the mass flow exchange plots indicated that the side channel pump is characterized by many vortex flows. Operating conditions 0.8QBEP and 1.0QBEP depicted fewer and similar vortex flows, while 1.2QBEP recorded many vortex flows around the inflow, middle and outflow regions. The results of the numerical calculations were finally verified experimentally. The performance characteristic curves from the simulated results showed that the 0.8QBEP working condition recorded a head increase of 43.03% and an efficiency decrease of 6.73% compared to 1.0QBEP. It can be concluded that, for industrial applications where high heads are mostly required, the side channel pump can be designed to operate at part-load conditions. This paper can serve as a source of information for optimizing reliable performance and widening the applications of side channel pumps.
Keywords: exchanged flow, pressure fluctuation, numerical simulation, side channel pump
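The frequency-domain analysis mentioned above (excitation at 600 Hz, 1200 Hz and 1800 Hz) can be illustrated with a few lines of signal processing. The sketch below builds a synthetic pressure signal at a monitoring point and recovers its dominant frequencies with an FFT; the shaft speed, blade count, amplitudes and sampling rate are assumptions for demonstration only, chosen so that the blade-passing frequency lands at 600 Hz, and are not the pump's actual specifications.

```python
import numpy as np

# Assumed operating point: 1500 rpm shaft (25 Hz) and 24 impeller blades,
# giving a blade-passing frequency of 25 * 24 = 600 Hz with harmonics at 1200/1800 Hz.
fs = 20000                       # sampling rate of the monitored pressure signal, Hz
t = np.arange(0, 0.5, 1.0 / fs)  # 0.5 s record
bpf = 25.0 * 24                  # blade-passing frequency, Hz

# Synthetic static-pressure fluctuation at a monitoring point (Pa), amplitudes assumed.
p = (800 * np.sin(2 * np.pi * bpf * t)
     + 300 * np.sin(2 * np.pi * 2 * bpf * t)
     + 120 * np.sin(2 * np.pi * 3 * bpf * t)
     + 50 * np.random.default_rng(0).standard_normal(t.size))

# One-sided amplitude spectrum.
spec = np.abs(np.fft.rfft(p)) * 2 / p.size
freqs = np.fft.rfftfreq(p.size, d=1.0 / fs)

top3 = np.sort(freqs[np.argsort(spec)[-3:]])
print("dominant frequencies [Hz]:", top3)   # ~600, 1200, 1800
```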
Procedia PDF Downloads 136
70 Adaptive Power Control of the City Bus Integrated Photovoltaic System
Authors: Piotr Kacejko, Mariusz Duk, Miroslaw Wendeker
Abstract:
This paper presents an adaptive controller to track the maximum power point of photovoltaic (PV) modules under fast irradiation changes on a city-bus roof. Photovoltaic systems have been a prominent option as an additional energy source for vehicles. The Municipal Transport Company (MPK) in Lublin has installed photovoltaic panels on its buses' roofs. The solar panels turn solar energy into electric energy and are used to supply the buses' electric equipment. This decreases the load on the buses' alternators, leading to lower fuel consumption and bringing both economic and ecological profits. A DC–DC boost converter is selected as the power conditioning unit to coordinate the operating point of the system. In addition to the conversion efficiency of a photovoltaic panel, the maximum power point tracking (MPPT) method also plays a major role in harvesting the most energy from the sun. The MPPT unit on a moving vehicle must keep tracking accuracy high in order to compensate for rapid irradiation changes due to the dynamic motion of the vehicle. Maximum power point tracking controllers should be used to increase the efficiency and power output of solar panels under changing environmental factors. There are several different control algorithms in the literature developed for maximum power point tracking. However, the energy performance of MPPT algorithms has not been clarified for vehicle applications, which cause rapid changes in environmental factors. In this study, an adaptive MPPT algorithm is examined under real ambient conditions. PV modules are mounted on a moving city bus designed to test solar systems on a moving vehicle. Some problems of a PV system associated with a moving vehicle are addressed. The proposed algorithm uses a scanning technique to determine the maximum power delivering capacity of the panel at a given operating condition and controls the PV panel accordingly. The aim of the control algorithm was to match the impedance of the PV modules by controlling the duty cycle of the internal switch, regardless of changes in the parameters of the controlled object and its outer environment. The presented algorithm was capable of reaching this aim. The structure of the adaptive controller was deliberately simplified; since even such a simple controller, armed only with an ability to learn, achieves the goal, a more complex algorithm structure can only improve the result. The presented adaptive control system of the PV system is a general solution and can be used for other types of PV systems of both high and low power. Experimental results obtained from a comparison of algorithms over a motion loop are presented and discussed. Experimental results are presented for fast changes in irradiation and partial shading conditions. The results obtained clearly show that the proposed method is simple to implement, with minimum tracking time and high tracking efficiency, proving superior to conventional methods. This work has been financed by the Polish National Centre for Research and Development, PBS, under Grant Agreement No. PBS 2/A6/16/2013.
Keywords: adaptive control, photovoltaic energy, city bus electric load, DC-DC converter
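To make the scanning idea above concrete, the following sketch mimics a controller that periodically sweeps the converter duty cycle, records the resulting PV power, and then holds the duty cycle that gave the maximum. The toy PV power model, the sweep step and the irradiance profile are simplified assumptions, not the controller implemented on the Lublin buses.

```python
import numpy as np

def pv_power(duty, irradiance):
    """Toy PV-array power curve seen through a boost converter (assumed model):
    a single broad maximum whose position shifts with irradiance."""
    d_opt = 0.35 + 0.2 * (1.0 - irradiance)   # optimum duty cycle moves with G
    return max(0.0, 300.0 * irradiance * (1.0 - 8.0 * (duty - d_opt) ** 2))

def scan_mppt(irradiance, d_min=0.05, d_max=0.95, step=0.02):
    """Scanning MPPT: sweep the duty cycle, keep the point of maximum power."""
    duties = np.arange(d_min, d_max + step, step)
    powers = [pv_power(d, irradiance) for d in duties]
    best = int(np.argmax(powers))
    return duties[best], powers[best]

# Rapid irradiance changes as the bus moves between sun and shade (assumed profile).
for g in [1.0, 0.6, 0.3, 0.8]:
    d, p = scan_mppt(g)
    print(f"G = {g:.1f} sun -> duty = {d:.2f}, P = {p:.1f} W")
```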
Procedia PDF Downloads 211
69 Implementation of Synthesis and Quality Control Procedures of ¹⁸F-Fluoromisonidazole Radiopharmaceutical
Authors: Natalia C. E. S. Nascimento, Mercia L. Oliveira, Fernando R. A. Lima, Leonardo T. C. do Nascimento, Marina B. Silveira, Brigida G. A. Schirmer, Andrea V. Ferreira, Carlos Malamut, Juliana B. da Silva
Abstract:
Tissue hypoxia is a common characteristic of solid tumors leading to decreased sensitivity to radiotherapy and chemotherapy. In the clinical context, tumor hypoxia assessment employing the positron emission tomography (PET) tracer ¹⁸F-fluoromisonidazole ([¹⁸F]FMISO) is helpful for physicians for planning and therapy adjusting. The aim of this work was to implement the synthesis of 18F-FMISO in a TRACERlab® MXFDG module and also to establish the quality control procedure. [¹⁸F]FMISO was synthesized at Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN/Brazil) using an automated synthesizer (TRACERlab® MXFDG, GE) adapted for the production of [¹⁸F]FMISO. The FMISO chemical standard was purchased from ABX. 18O- enriched water was acquired from Center of Molecular Research. Reagent kits containing eluent solution, acetonitrile, ethanol, 2.0 M HCl solution, buffer solution, water for injections and [¹⁸F]FMISO precursor (dissolved in 2 ml acetonitrile) were purchased from ABX. The [¹⁸F]FMISO samples were purified by Solid Phase Extraction method. The quality requirements of [¹⁸F]FMISO are established in the European Pharmacopeia. According to that reference, quality control of [¹⁸F]FMISO should include appearance, pH, radionuclidic identity and purity, radiochemical identity and purity, chemical purity, residual solvents, bacterial endotoxins, and sterility. The duration of the synthesis process was 53 min, with radiochemical yield of (37.00 ± 0.01) % and the specific activity was more than 70 GBq/µmol. The syntheses were reproducible and showed satisfactory results. In relation to the quality control analysis, the samples were clear and colorless at pH 6.0. The spectrum emission, measured by using a High-Purity Germanium Detector (HPGe), presented a single peak at 511 keV and the half-life, determined by the decay method in an activimeter, was (111.0 ± 0.5) min, indicating no presence of radioactive contaminants, besides the desirable radionuclide (¹⁸F). The samples showed concentration of tetrabutylammonium (TBA) < 50μg/mL, assessed by visual comparison to TBA standard applied in the same thin layer chromatographic plate. Radiochemical purity was determined by high performance liquid chromatography (HPLC) and the results were 100%. Regarding the residual solvents tested, ethanol and acetonitrile presented concentration lower than 10% and 0.04%, respectively. Healthy female mice were injected via lateral tail vein with [¹⁸F]FMISO, microPET imaging studies (15 min) were performed after 2 h post injection (p.i), and the biodistribution was analyzed in five-time points (30, 60, 90, 120 and 180 min) after injection. Subsequently, organs/tissues were assayed for radioactivity with a gamma counter. All parameters of quality control test were in agreement to quality criteria confirming that [¹⁸F]FMISO was suitable for use in non-clinical and clinical trials, following the legal requirements for the production of new radiopharmaceuticals in Brazil.Keywords: automatic radiosynthesis, hypoxic tumors, pharmacopeia, positron emitters, quality requirements
Procedia PDF Downloads 193
68 Phospholipid Cationic and Zwitterionic Compounds as Potential Non-Toxic Antifouling Agents: A Study of Biofilm Formation Assessed by Micro-titer Assays with Marine Bacteria and Eco-toxicological Effect on Marine Microalgae
Authors: D. Malouch, M. Berchel, C. Dreanno, S. Stachowski-Haberkorn, P-A. Jaffres
Abstract:
Biofouling is a complex natural phenomenon that involves biological, physical and chemical properties related to the environment, the submerged surface and the living organisms involved. Bio-colonization of artificial structures can cause various economic and environmental impacts. The increase in costs associated with the over-consumption of fuel by biocolonized vessels has been widely studied. Measurement drifts from submerged sensors, as well as obstructions in heat exchangers and deterioration of offshore structures, are major difficulties that industries are dealing with. Therefore, surfaces that inhibit biocolonization are required in different areas (water treatment, marine paints, etc.), and many efforts have been devoted to producing efficient and eco-compatible antifouling agents. The different steps of surface fouling are widely described in the literature. Studying the biofilm and its stages provides a better understanding of how to develop more efficient antifouling strategies. Several approaches are currently applied, such as the use of biocide anti-fouling paint (mainly with copper derivatives) and super-hydrophobic coatings. While these two processes are proving to be the most effective, they are not entirely satisfactory, especially in the context of changing legislation. Nowadays, the challenge is to prevent biofouling with non-biocide compounds, offering a cost-effective solution with no toxic effects on marine organisms. Since the micro-fouling phase plays an important role in the regulation of the following steps of biofilm formation, it is desirable to reduce or delay the biofouling of a given surface by inhibiting micro-fouling at its early stages. In our recent works, we reported that some amphiphilic compounds exhibited bacteriostatic or bactericidal properties at a concentration that did not affect mammalian eukaryotic cells. These remarkable properties invited us to assess this type of bio-inspired phospholipid to prevent the colonization of surfaces by marine bacteria. Of note, other studies reported that amphiphilic compounds interacted with bacteria, leading to a reduction in their development. An amphiphilic compound is a molecule consisting of a hydrophobic domain and a polar head (ionic or non-ionic). These compounds appear to have interesting antifouling properties: some ionic compounds have shown antimicrobial activity, and zwitterions can reduce nonspecific adsorption of proteins. Herein, we investigate the potential of amphiphilic compounds as inhibitors of bacterial growth and marine biofilm formation. The aim of this study is to compare the efficacy of four synthetic phospholipids that feature a cationic charge or a zwitterionic polar-head group in preventing microfouling by marine bacteria. The toxicity of these compounds was also studied in order to identify the most promising compounds that inhibit biofilm development and show low cytotoxicity on two links representative of coastal marine food webs: phytoplankton and oyster larvae.
Keywords: amphiphilic phospholipids, biofilm, marine fouling, non-toxic assays
Procedia PDF Downloads 134
67 Interactions between Sodium Aerosols and Fission Products: A Theoretical Chemistry and Experimental Approach
Authors: Ankita Jadon, Sidi Souvi, Nathalie Girault, Denis Petitprez
Abstract:
Safety requirements for Generation IV nuclear reactor designs, especially the new generation sodium-cooled fast reactors (SFR) require a risk-informed approach to model severe accidents (SA) and their consequences in case of outside release. In SFRs, aerosols are produced during a core disruptive accident when primary system sodium is ejected into the containment and burn in contact with the air; producing sodium aerosols. One of the key aspects of safety evaluation is the in-containment sodium aerosol behavior and their interaction with fission products. The study of the effects of sodium fires is essential for safety evaluation as the fire can both thermally damage the containment vessel and cause an overpressurization risk. Besides, during the fire, airborne fission product first dissolved in the primary sodium can be aerosolized or, as it can be the case for fission products, released under the gaseous form. The objective of this work is to study the interactions between sodium aerosols and fission products (Iodine, toxic and volatile, being the primary concern). Sodium fires resulting from an SA would produce aerosols consisting of sodium peroxides, hydroxides, carbonates, and bicarbonates. In addition to being toxic (in oxide form), this aerosol will then become radioactive. If such aerosols are leaked into the environment, they can pose a danger to the ecosystem. Depending on the chemical affinity of these chemical forms with fission products, the radiological consequences of an SA leading to containment leak tightness loss will also be affected. This work is split into two phases. Firstly, a method to theoretically understand the kinetics and thermodynamics of the heterogeneous reaction between sodium aerosols and fission products: I2 and HI are proposed. Ab-initio, density functional theory (DFT) calculations using Vienna ab-initio simulation package are carried out to develop an understanding of the surfaces of sodium carbonate (Na2CO3) aerosols and hence provide insight on its affinity towards iodine species. A comprehensive study of I2 and HI adsorption, as well as bicarbonate formation on the calculated lowest energy surface of Na2CO3, was performed which provided adsorption energies and description of the optimized configuration of adsorbate on the stable surface. Secondly, the heterogeneous reaction between (I2)g and Na2CO3 aerosols were investigated experimentally. To study this, (I2)g was generated by heating a permeation tube containing solid I2, and, passing it through a reaction chamber containing Na2CO3 aerosol deposit. The concentration of iodine was then measured at the exit of the reaction chamber. Preliminary observations indicate that there is an effective uptake of (I2)g on Na2CO3 surface, as suggested by our theoretical chemistry calculations. This work is the first step in addressing the gaps in knowledge of in-containment and atmospheric source term which are essential aspects of safety evaluation of SFR SA. In particular, this study is aimed to determine and characterize the radiological and chemical source term. These results will then provide useful insights for the developments of new models to be implemented in integrated computer simulation tool to analyze and evaluate SFR safety designs.Keywords: iodine adsorption, sodium aerosols, sodium cooled reactor, DFT calculations, sodium carbonate
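The adsorption energies referred to above are conventionally obtained from three separate total-energy calculations. Written out, the working definition (a standard DFT convention given here for orientation, not a formula quoted from the paper) is:

```latex
% Standard convention for the adsorption energy of I2 on a Na2CO3 surface slab
E_{\mathrm{ads}} \;=\; E_{\mathrm{slab+I_2}} \;-\; E_{\mathrm{slab}} \;-\; E_{\mathrm{I_2(g)}}
```

Here the first term is the total energy of the iodine molecule adsorbed on the relaxed Na2CO3 slab, the second is the clean slab, and the third is the isolated molecule computed in a large empty cell; a negative value indicates energetically favourable (exothermic) adsorption.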
Procedia PDF Downloads 215
66 Lifting Body Concepts for Unmanned Fixed-Wing Transport Aircrafts
Authors: Anand R. Nair, Markus Trenker
Abstract:
Lifting body concepts were conceived as early as 1917 and patented by Roy Scroggs. It was an idea of using the fuselage as a lift producing body with no or small wings. Many of these designs were developed and even flight tested between 1920’s to 1970’s, but it was not pursued further for commercial flight as at lower airspeeds, such a configuration was incapable to produce sufficient lift for the entire aircraft. The concept presented in this contribution is combining the lifting body design along with a fixed wing to maximise the lift produced by the aircraft. Conventional aircraft fuselages are designed to be aerodynamically efficient, which is to minimise the drag; however, these fuselages produce very minimal or negligible lift. For the design of an unmanned fixed wing transport aircraft, many of the restrictions which are present for commercial aircraft in terms of fuselage design can be excluded, such as windows for the passengers/pilots, cabin-environment systems, emergency exits, and pressurization systems. This gives new flexibility to design fuselages which are unconventionally shaped to contribute to the lift of the aircraft. The two lifting body concepts presented in this contribution are targeting different applications: For a fast cargo delivery drone, the fuselage is based on a scaled airfoil shape with a cargo capacity of 500 kg for euro pallets. The aircraft has a span of 14 m and reaches 1500 km at a cruising speed of 90 m/s. The aircraft could also easily be adapted to accommodate pilot and passengers with modifications to the internal structures, but pressurization is not included as the service ceiling envisioned for this type of aircraft is limited to 10,000 ft. The next concept to be investigated is called a multi-purpose drone, which incorporates a different type of lifting body and is a much more versatile aircraft as it will have a VTOL capability. The aircraft will have a wingspan of approximately 6 m and flight speeds of 60 m/s within the same service ceiling as the fast cargo delivery drone. The multi-purpose drone can be easily adapted for various applications such as firefighting, agricultural purposes, surveillance, and even passenger transport. Lifting body designs are not a new concept, but their effectiveness in terms of cargo transportation has not been widely investigated. Due to their enhanced lift producing capability, lifting body designs enable the reduction of the wing area and the overall weight of the aircraft. This will, in turn, reduce the thrust requirement and ultimately the fuel consumption. The various designs proposed in this contribution will be based on the general aviation category of aircrafts and will be focussed on unmanned methods of operation. These unmanned fixed-wing transport drones will feature appropriate cargo loading/unloading concepts which can accommodate large size cargo for efficient time management and ease of operation. The various designs will be compared in performance to their conventional counterpart to understand their benefits/shortcomings in terms of design, performance, complexity, and ease of operation. The majority of the performance analysis will be carried out using industry relevant standards in computational fluid dynamics software packages.Keywords: lifting body concept, computational fluid dynamics, unmanned fixed-wing aircraft, cargo drone
Procedia PDF Downloads 246
65 Estimated Heat Production, Blood Parameters and Mitochondrial DNA Copy Number of Nellore Bulls with High and Low Residual Feed Intake
Authors: Welder A. Baldassini, Jon J. Ramsey, Marcos R. Chiaratti, Amália S. Chaves, Renata H. Branco, Sarah F. M. Bonilha, Dante P. D. Lanna
Abstract:
With increased production costs, there is a need for animals that are more efficient in terms of meat production. In this context, the role of mitochondrial DNA (mtDNA) in physiological processes in liver, muscle and adipose tissues may account for inter-animal variation in energy expenditure and heat production. The purpose of this study was to investigate whether the amounts of mtDNA in liver, muscle and adipose tissue (subcutaneous and visceral depots) of Nellore bulls are associated with residual feed intake (RFI) and estimated heat production (EHP). Eighteen animals were individually fed in a feedlot for 90 days. RFI values were obtained by regression of dry matter intake (DMI) in relation to average daily gain (ADG) and mid-test metabolic body weight (BW). The animals were classified into low (more efficient) and high (less efficient) RFI groups. The bulls were then randomly distributed in individual pens where they were given excess feed twice daily to result in 5 to 10% orts for 90 d, with a diet containing 15% crude protein and 2.7 Mcal ME/kg DM. The heart rate (HR) of the bulls was monitored for 4 consecutive days and used for calculation of EHP. Electrodes were fitted to the bulls with stretch belts (POLAR RS400; Kempele, Finland). To calculate oxygen pulse (O2P), oxygen consumption was obtained using a facemask connected to a gas analyzer (EXHALYZER, ECOMedics, Zurich, Switzerland) while HR was simultaneously measured for a 15-minute period. Daily oxygen (O2) consumption was calculated by multiplying the volume of O2 per beat by the total daily beats. EHP was calculated by multiplying O2P by the average HR obtained during the 4 days, assuming 4.89 kcal/L of O2, and daily EHP was expressed in kilocalories/day/kilogram of metabolic BW (kcal/day/kg BW0.75). Blood samples were collected between days 45 and 90 after the beginning of the trial period in order to measure hemoglobin concentration and hematocrit. The bulls were slaughtered in an experimental slaughterhouse in accordance with current guidelines. Immediately after slaughter, a section of liver, a portion of longissimus thoracis (LT) muscle, a portion of subcutaneous fat (surrounding the LT muscle) and portions of visceral fat (kidney, pelvis and inguinal fat) were collected. Samples of liver, muscle and adipose tissues were used to quantify mtDNA copy number per cell. The number of mtDNA copies was determined by normalization of the mtDNA amount against a single-copy nuclear gene (B2M). Means of EHP, hemoglobin and hematocrit of high and low RFI bulls were compared using two-sample t-tests. Additionally, one-way ANOVA was used to compare mtDNA quantification considering the main effect of RFI group. We found lower EHP (83.047 vs. 97.590 kcal/day/kgBW0.75; P < 0.10), hemoglobin concentration (13.533 vs. 15.108 g/dL; P < 0.10) and hematocrit percentage (39.3 vs. 43.6 %; P < 0.05) in low compared to high RFI bulls, respectively, which may be useful traits to identify efficient animals. However, no differences were observed in the mtDNA content of liver, muscle and adipose tissue between Nellore bulls with high and low RFI.
Keywords: bioenergetics, Bos indicus, feed efficiency, mitochondria
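The oxygen-pulse method described above reduces to a short chain of arithmetic. The sketch below reproduces it for one hypothetical bull: the heart rate, oxygen pulse and body weight are invented numbers for illustration, while the 4.89 kcal per litre of O2 constant and the metabolic-weight exponent come from the abstract.

```python
def estimated_heat_production(o2_pulse_l_per_beat, mean_hr_bpm, body_weight_kg):
    """EHP in kcal/day per kg of metabolic body weight (BW^0.75).

    o2_pulse_l_per_beat: litres of O2 consumed per heart beat (from the 15-min facemask test)
    mean_hr_bpm:         average heart rate over the 4-day monitoring period, beats/min
    body_weight_kg:      live body weight, kg
    """
    beats_per_day = mean_hr_bpm * 60 * 24
    o2_per_day_l = o2_pulse_l_per_beat * beats_per_day   # daily O2 consumption, L
    heat_kcal_day = o2_per_day_l * 4.89                  # 4.89 kcal per litre of O2
    return heat_kcal_day / body_weight_kg ** 0.75

# Hypothetical example: 0.016 L O2 per beat, 70 beats/min, 400 kg bull.
ehp = estimated_heat_production(0.016, 70.0, 400.0)
print(f"EHP = {ehp:.1f} kcal/day/kg BW^0.75")   # on the order of the values reported above
```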
Procedia PDF Downloads 246
64 Catalytic Dehydrogenation of Formic Acid into H2/CO2 Gas: A Novel Approach
Authors: Ayman Hijazi, Witold Kwapinski, J. J. Leahy
Abstract:
Finding a sustainable energy alternative to fossil fuels is an urgent need as various environmental challenges arise around the world. Therefore, formic acid (FA) decomposition has been an attractive field that lies at the center of the biomass platform, comprising a potential pool of hydrogen energy that stands as a new energy vector. Liquid FA features a considerable volumetric energy density of 6.4 MJ/L and a specific energy density of 5.3 MJ/kg, which places it in the prime seat as an energy source for transportation infrastructure. Additionally, the increasing research interest in FA decomposition is driven by the need for in-situ H2 production, which plays a key role in the hydrogenation reactions of biomass into higher-value components. It is reported elsewhere in the literature that catalytic decomposition of FA is usually performed in poorly designed setups using simple glassware under magnetic stirring, thus demanding further energy investment to recover the used catalyst. This work suggests an approach that integrates the design of a novel catalyst featuring magnetic properties with a robust setup that minimizes experimental and measurement discrepancies. One of the most prominent active species for dehydrogenation/hydrogenation of biomass compounds is palladium. Accordingly, we investigate the potential of grafting palladium metal onto functionalized magnetic nanoparticles as a heterogeneous catalyst to favor the production of CO-free H2 gas from FA. Using an ordinary magnet to collect the spent catalyst makes core-shell magnetic nanoparticles the backbone of the process. Catalytic experiments were performed in a jacketed batch reactor equipped with an overhead stirrer under an inert medium. Through a novel approach, FA is charged into the reactor via a high-pressure positive displacement pump at steady-state conditions. The produced gas (H2+CO2) was measured by connecting the gas outlet to a measuring system based on the amount of displaced water. The novelty of this work lies in designing a very responsive catalyst, pumping a consistent amount of FA into a sealed reactor running at steady-state mild temperatures, continuous gas measurement, and collection of the used catalyst without the need for centrifugation. Catalyst characterization using TEM, XRD, SEM, and a CHN elemental analyzer provided details of catalyst preparation and opened new avenues to alter the nanostructure of the catalyst framework. Consequently, the introduction of amine groups led to appreciable improvements in the dispersion of the doped metals, eventually attaining nearly complete conversion (100%) of FA after 7 hours. The relative importance of the process parameters such as temperature (35-85°C), stirring speed (150-450 rpm), catalyst loading (50-200 mg), and Pd doping ratio (0.75-1.80 wt.%) on gas yield was assessed by a Taguchi design-of-experiment-based model. Experimental results showed that operating in the lower temperature range (35-50°C) yielded more gas, while the catalyst loading and Pd doping wt.% were found to be the most significant factors, with P-values of 0.026 and 0.031, respectively.
Keywords: formic acid decomposition, green catalysis, hydrogen, mesoporous silica, process optimization, nanoparticles
Procedia PDF Downloads 52
63 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel
Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler
Abstract:
The fuel cell vehicle has become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of composite layup design shows great potential in reducing the overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as of the influence of different design parameters on mechanical performance. Given the types of materials and manufacturing processes by which Type IV pressure vessels are made, their design and optimization are a nuanced subject. The manifold of stacking sequence and fiber orientation possibilities has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high dimensional. Each variation of the design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs under various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Notably, the model of the composite overwrap is automatically generated using the Abaqus-Python scripting interface. The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling; it is calculated and implemented using analytical methods. Subsequently, these different composite layups are simulated as axisymmetric models to reduce the computational complexity and the calculation time. Finally, the results are evaluated and compared with regard to the ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of the tank structure. As mentioned above, the mechanical performance of the pressure vessel is highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and indicate the optimum one. Moreover, this automation process can also be used to create a data bank of layups and their corresponding mechanical properties with few preliminary configuration steps for further case analysis. Subsequently, machine learning, for example, could be used to obtain the optimum directly from the data pool without running the simulation process.
Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process
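As a rough illustration of the analytical step mentioned above (predicting the winding angle and the layer thickness build-up on the dome), the sketch below evaluates the classical geodesic-winding relations commonly used for filament-wound domes. The polar-opening radius, cylinder radius and cylinder-layer thickness are placeholder numbers, and the actual model in the paper may use refined (e.g. non-geodesic) relations.

```python
import numpy as np

R_cyl = 0.175      # cylinder radius of the liner, m (assumed)
r_polar = 0.035    # polar-opening radius, m (assumed)
t_cyl = 0.8e-3     # layer thickness on the cylindrical section, m (assumed)

# Geodesic winding (Clairaut's relation): r * sin(alpha) = r_polar,
# so the winding angle grows from its cylinder value towards 90 deg at the polar opening.
alpha_cyl = np.arcsin(r_polar / R_cyl)

r = np.linspace(R_cyl, r_polar * 1.02, 8)             # parallel radii down the dome
alpha = np.arcsin(np.clip(r_polar / r, -1.0, 1.0))    # local winding angle

# Thickness build-up from fibre-volume conservation between parallels
# (a common approximation): t(r) * r * cos(alpha(r)) = t_cyl * R_cyl * cos(alpha_cyl).
t = t_cyl * R_cyl * np.cos(alpha_cyl) / (r * np.cos(alpha))

for ri, ai, ti in zip(r, np.degrees(alpha), t * 1e3):
    print(f"r = {ri:.3f} m   alpha = {ai:5.1f} deg   t = {ti:.2f} mm")
```

The printout reproduces the qualitative behaviour the abstract relies on: the winding angle steepens and the laminate thickens as the parallel radius approaches the polar opening.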
Procedia PDF Downloads 135
62 Impact Of Anthropogenic Pressures On The Water Quality Of Hammams In The Municipality Of Dar Bouazza, Morocco
Authors: Nihad Chakri, Btissam El Amrani, Faouzi Berrada, Halima Jounaid, Fouad Amraoui
Abstract:
Public baths or hammams play an essential role in the Moroccan urban and peri-urban fabric, constituting part of the cultural heritage. Urbanization in Morocco has led to a significant increase in the number of these traditional hammams: between 6,000 and 15,000 units (to be updated) operate with a traditional heating system. Numerous studies on energy consumption indicate that a hammam consumes between 60 and 120m3 of water and one to two tons of wood per day. On average, one ton of wood costs 650 Moroccan dirhams (approximately 60 Euros), resulting in a daily fuel cost of around 1300 Moroccan dirhams (about 120 Euros). These high consumptions result in significant environmental nuisances generated by: Wastewater: in the case of hammams located on the outskirts of Casablanca, such as our study area, the Municipality of Dar Bouazza, most of these waters are directly discharged into the receiving environment without prior treatment because they are not connected to the sanitation network. Emissions of black smoke and ashes produced by the often incomplete combustion of wood. Reducing the liquid and gas emissions generated by these hammams thus poses an environmental and sustainable development challenge that needs to be addressed. In this context, we initiated the Eco-hammam project with the objective of implementing innovative and locally adapted solutions to limit the negative impacts of hammams on the environment and reduce water and wood energy consumption. This involves treating and reusing wastewater through a compact system with heat recovery and using alternative energy sources to increase and enhance the energy efficiency of these traditional hammams. To achieve this, on-site surveys of hammams in the Dar Bouazza Municipality and the application of statistical approaches to the results of the physico-chemical and bacteriological characterization of incoming and outgoing water from these units were conducted. This allowed us to establish an environmental diagnosis of these entities. In conclusion, the analysis of well water used by Dar Bouazza's hammams revealed the presence of certain parameters that could be hazardous to public health, such as total germs, total coliforms, sulfite-reducing spores, chromium, nickel, and nitrates. Therefore, this work primarily focuses on prospecting upstream of our study area to verify if other sources of pollution influence the quality of well water.Keywords: public baths, hammams, cultural heritage, urbanization, water consumption, wood consumption, environmental nuisances, wastewater, environmental challenge, sustainable development, Eco-hammam project, innovative solutions, local adaptation, negative impacts, water conservation, wastewater treatment, heat recovery, alternative energy sources, on-site surveys, Dar Bouazza Municipality, statistical approaches, physico-chemical characterization, bacteriological characterization, environmental diagnosis, well water analysis, public health, pollution sources, well water quality
Procedia PDF Downloads 70
61 Readout Development of a LGAD-based Hybrid Detector for Microdosimetry (HDM)
Authors: Pierobon Enrico, Missiaggia Marta, Castelluzzo Michele, Tommasino Francesco, Ricci Leonardo, Scifoni Emanuele, Vincezo Monaco, Boscardin Maurizio, La Tessa Chiara
Abstract:
Clinical outcomes collected over the past three decades have suggested that ion therapy has the potential to be a treatment modality superior to conventional radiation for several types of cancer, including recurrences, as well as for other diseases. Although the results have been encouraging, numerous treatment uncertainties remain a major obstacle to the full exploitation of particle radiotherapy. To overcome therapy uncertainties and optimize treatment outcome, the best possible description of radiation quality is of paramount importance, as it links the physical dose to the biological effects. Microdosimetry was developed as a tool to improve the description of radiation quality. By recording the energy deposition at the micrometric scale (the typical size of a cell nucleus), this approach takes into account the non-deterministic nature of atomic and nuclear processes and creates a direct link between the dose deposited by radiation and the biological effect induced. Microdosimeters measure the spectrum of lineal energy y, defined as the energy deposition in the detector divided by the most probable track length travelled by the radiation. The latter is provided by the so-called "Mean Chord Length" (MCL) approximation, and it is related to the detector geometry. To improve the characterization of the radiation field quality, we define a new quantity replacing the MCL with the actual particle track length inside the microdosimeter. In order to measure this new quantity, we propose a two-stage detector consisting of a commercial Tissue Equivalent Proportional Counter (TEPC) and 4 layers of Low Gain Avalanche Detector (LGAD) strips. The TEPC records the energy deposition in a region equivalent to 2 µm of tissue, while LGADs are very suitable for particle tracking because they can be thinned down to tens of micrometers and respond fast to ionizing radiation. The concept of HDM has been investigated and validated with Monte Carlo simulations. Currently, a dedicated readout is under development. This two-stage detector will require two different systems joining complementary information for each event: the energy deposition in the TEPC and the respective track length recorded by the LGAD tracker. This challenge is being addressed by implementing SoC (System on Chip) technology, relying on Field Programmable Gate Arrays (FPGAs) based on the Zynq architecture. The TEPC readout consists of three different signal amplification legs and is carried out with 3 ADCs mounted on an FPGA board. The LGAD strip signals are processed by dedicated chips, and finally the activated strips are stored, relying again on FPGA-based solutions. In this work, we will provide a detailed description of the HDM geometry and the SoC solutions that we are implementing for the readout.
Keywords: particle tracking, ion therapy, low gain avalanche diode, tissue equivalent proportional counter, microdosimetry
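To show what replacing the mean chord length with a measured track length changes in practice, the sketch below computes the lineal energy y both ways for a single hypothetical event. The 2-µm tissue-equivalent site comes from the abstract; the spherical-site MCL convention (2/3 of the diameter, i.e. the 4V/S rule for a convex body) and the example event numbers are assumptions for illustration only.

```python
# Lineal energy y = (energy imparted in the site) / (track length), in keV/um.
# Conventional microdosimetry uses the mean chord length (MCL); HDM replaces it
# with the actual path length reconstructed by the LGAD tracker.

site_diameter_um = 2.0                    # tissue-equivalent site simulated by the TEPC
mcl_um = 2.0 / 3.0 * site_diameter_um     # MCL of a spherical site (4V/S = 2d/3)

energy_imparted_keV = 1.8                 # energy deposited in one event (assumed)
track_length_um = 1.1                     # path length from the LGAD tracker (assumed)

y_mcl = energy_imparted_keV / mcl_um
y_track = energy_imparted_keV / track_length_um

print(f"y (MCL convention)  = {y_mcl:.2f} keV/um")
print(f"y (measured track)  = {y_track:.2f} keV/um")
```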
Procedia PDF Downloads 175
60 Increasing System Adequacy Using Integration of Pumped Storage: Renewable Energy to Reduce Thermal Power Generations Towards RE100 Target, Thailand
Authors: Mathuravech Thanaphon, Thephasit Nat
Abstract:
The Electricity Generating Authority of Thailand (EGAT) is focusing on expanding its pumped storage hydropower (PSH) capacity to increase the reliability of the system during peak demand and allow for greater integration of renewables. To achieve this, Thailand will have to double its current renewable electricity production. To address the challenges of balancing supply and demand in the grid with increasing levels of RE penetration, as well as rising peak demand, EGAT has already been studying the potential for additional PSH capacity for several years to enable an increased share of RE and replace existing fossil fuel-fired generation. In addition, it has examined the role that pumped-storage hydropower would play in fulfilling multiple grid functions and in renewable integration. The proposed sites for new PSH would help increase the reliability of power generation in Thailand. However, most of the electricity generation will come from RE, chiefly wind and photovoltaics, and significant additional energy storage capacity will be needed. In this paper, the impact of integrating the PSH system on the adequacy of renewable-rich power generating systems, with the aim of reducing thermal power generation, is investigated. The variations of the system adequacy indices are analyzed for different PSH-renewables capacities and storage levels. Power Development Plan 2018 rev.1 (PDP2018 rev.1), modified by integrating six new PSH systems and the RE planning and development expected after 2030, is the central case under study. The system adequacy indices for power generation are obtained using Multi-Objective Genetic Algorithm (MOGA) optimization. MOGA is a probabilistic, heuristic and stochastic algorithm able to find global minima, with the advantage that the fitness function does not require a gradient. In this sense, the method is more flexible in solving reliability optimization problems for a composite power system. The optimization with an hourly time step covers a planning horizon of years, much larger than the weekly horizon that usually frames scheduling studies. The objective function is optimized to maximize RE generation, minimize energy imbalances, and minimize thermal power generation, using MATLAB. PDP2018 rev.1 was simulated based on its planned capacity stepping into 2030 and 2050. Therefore, four main scenario analyses are conducted according to the target renewables share: 1) Business-As-Usual (BAU), 2) National Targets (30% RE in 2030), 3) Carbon Neutrality Targets (50% RE in 2050), and 4) 100% RE or full decarbonization. According to the results, the generating system adequacy is significantly affected by both PSH-RE and thermal units. When a PSH is integrated, it can provide hourly capacity to the power system as well as better allocate renewable energy generation to reduce thermal generation and improve system reliability. These results show that a significant level of reliability improvement can be obtained by PSH, especially in renewable-rich power systems.
Keywords: pumped storage hydropower, renewable energy integration, system adequacy, power development planning, RE100, multi-objective genetic algorithm
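Since the abstract frames the dispatch problem as a three-objective optimization (maximize RE generation, minimize energy imbalance, minimize thermal generation), a minimal sketch of how candidate hourly dispatch schedules could be scored and compared by Pareto dominance is given below. The demand and RE-availability profiles, the decision variables and the scoring are simplified placeholders, not EGAT's actual model, and the random sampling stands in for the population handling of a full genetic algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
hours = 24
demand = 10.0 + 3.0 * np.sin(np.linspace(0, 2 * np.pi, hours))                   # GW, toy profile
re_available = np.clip(6.0 + 4.0 * np.sin(np.linspace(-1, 5, hours)), 0, None)   # GW, toy profile

def objectives(x):
    """x = fraction of available RE dispatched each hour (0..1).
    Returns (RE energy served [maximize], imbalance [minimize], thermal energy [minimize])."""
    re_used = x * re_available
    thermal = np.clip(demand - re_used, 0, None)        # thermal fills any remaining gap
    imbalance = np.abs(demand - re_used - thermal).sum()  # over-generation / curtailment
    return re_used.sum(), imbalance, thermal.sum()

def dominates(a, b):
    """Pareto dominance with objective 0 maximized and objectives 1, 2 minimized."""
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    strictly_better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and strictly_better

# Random population of dispatch schedules; keep the non-dominated (Pareto) ones.
population = [rng.uniform(0, 1, hours) for _ in range(200)]
scores = [objectives(x) for x in population]
pareto = [s for s in scores if not any(dominates(o, s) for o in scores)]
print(f"{len(pareto)} non-dominated schedules out of {len(population)}")
```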
Procedia PDF Downloads 57
59 A Novel Upregulated circ_0032746 on Sponging with MIR4270 Promotes the Proliferation and Migration of Esophageal Squamous Cell Carcinoma
Authors: Sachin Mulmi Shrestha, Xin Fang, Hui Ye, Lihua Ren, Qinghua Ji, Ruihua Shi
Abstract:
Background: Esophageal squamous cell carcinoma (ESCC) is a tumor arising from esophageal epithelial cells and is one of the major disease subtypes in Asian countries, including China. Esophageal cancer ranks 7th in incidence based on the 2020 GLOBOCAN data. The pathogenesis of this cancer is still not well understood, as much of the molecular and genetic basis of esophageal carcinogenesis has yet to be clearly elucidated. Circular RNAs are RNA molecules formed by back-splicing, with covalently joined 3′- and 5′-ends, rather than canonical splicing, and recent data suggest circular RNAs could sponge miRNAs and are enriched with functional miRNA binding sites. Hence, we studied the mechanism of a circular RNA, its biological function, and its relationship with a microRNA in the carcinogenesis of ESCC. Methods: 4 pairs of normal and esophageal cancer tissues were collected at Zhongda Hospital, affiliated to Southeast University, and high-throughput RNA sequencing was performed. The results revealed that circ_0032746 was upregulated, and thus we selected circ_0032746 for further study. The backsplice junction of the circRNA was validated by Sanger sequencing, and its stability was determined by an RNase R assay. The binding sites of the circRNA and the microRNA were predicted with the CircInteractome, miRanda and RNAhybrid databases. Furthermore, the circRNA was silenced by siRNA and then by lentivirus. The regulatory axis of circ0032746/miR4270 was validated by shRNA, mimic, and inhibitor transfection. Then, in vitro experiments were performed to assess the role of circ0032746 in the proliferation (CCK-8 assay and colony formation assay), migration and invasion (Transwell assay), and apoptosis of ESCC. Results: The upregulation of circ0032746 was validated in 9 pairs of tissues and 5 cell lines by qPCR, which showed high expression that was statistically significant (P<0.005). Upregulated circ0032746 was silenced by shRNA, which showed significant knockdown of expression in the KYSE-30 and TE-1 cell lines compared to control. A nuclear and cytoplasmic RNA fractionation experiment showed the cytoplasmic localization of circ0032746. The sponging of miR4270 was validated by co-transfection of sh-circ0032746 and mimic or inhibitor. Transfection with the mimic decreased the expression of circ_0032746, whereas the inhibitor reversed this effect. In vitro experiments showed that silencing of circ_0032746 inhibited proliferation, migration, and invasion compared to the negative control group. Apoptosis was higher in the knockdown group than in the control group. Furthermore, 11 common microRNA target mRNAs were predicted by the TargetScan, miRTarBase, and miRanda databases, which may play a further role in the pathogenesis. Conclusion: Our results showed that the novel circ_0032746 is upregulated in ESCC and plays a role in its oncogenicity. Silencing of circ_0032746 inhibits the proliferation and migration of ESCC, whereas it increases the apoptosis of cancer cells. Hence, circ0032746 acts as an oncogene in ESCC by sponging miR4270 and could be a potential biomarker for the diagnosis of ESCC in the future.
Keywords: circRNA, esophageal squamous cell carcinoma, microRNA, upregulated
Procedia PDF Downloads 113
58 Nephrotoxicity and Hepatotoxicity Induced by Chronic Aluminium Exposure in Rats: Impact of Nutrients Combination versus Social Isolation and Protein Malnutrition
Authors: Azza A. Ali, Doaa M. Abd El-Latif, Amany M. Gad, Yasser M. A. Elnahas, Karema Abu-Elfotuh
Abstract:
Background: Exposure to aluminium (Al) has increased recently. It is found in food products, food additives, drinking water, cosmetics and medicines. Chronic consumption of Al causes oxidative stress and has been implicated in several chronic disorders. The liver is considered the major site of detoxification, while the kidney is involved in the elimination of toxic substances and is a target organ of metal toxicity. Social isolation (SI) or protein malnutrition (PM) also causes oxidative stress and has a negative impact on Al-induced nephrotoxicity as well as hepatotoxicity. Coenzyme Q10 (CoQ10) is a powerful intracellular antioxidant with mitochondrial membrane stabilizing ability, wheatgrass is a natural product with antioxidant, anti-inflammatory and other protective activities, and cocoa is also a potent antioxidant that can protect against many diseases. They provide different degrees of protection from the impact of oxidative stress. Objective: To study the impact of social isolation together with protein malnutrition on nephro- and hepatotoxicity induced by chronic Al exposure in rats, as well as to investigate the postulated protection using a combination of CoQ10, wheatgrass and cocoa. Methods: Eight groups of rats were used; four served as protected groups and four as unprotected. The Al-toxicity model groups received AlCl3 (70 mg/kg, IP) daily for five weeks, while one group served as control. The Al-toxicity model groups were divided into Al-toxicity alone, SI-associated PM (10% casein diet) and Al-associated SI&PM groups. Protection was induced by oral co-administration of the CoQ10 (200 mg/kg), wheatgrass (100 mg/kg) and cocoa powder (24 mg/kg) combination together with Al. Biochemical changes in total bilirubin, lipids, cholesterol, triglycerides, glucose, proteins, creatinine and urea, as well as alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP) and lactate dehydrogenase (LDH), were measured in the serum of all groups. Specimens of kidney and liver were used for assessment of oxidative parameters (MDA, SOD, TAC, NO), inflammatory mediators (TNF-α, IL-6β, nuclear factor kappa B (NF-κB), caspase-3) and DNA fragmentation, in addition to evaluation of histopathological changes. Results: SI together with PM severely enhanced the nephro- and hepatotoxicity induced by chronic Al exposure. The CoQ10, wheatgrass and cocoa combination showed clear protection against the hazards of Al exposure, either alone or when associated with SI&PM. Their protection was indicated by the significant decrease in Al-induced elevations in total bilirubin, lipids, cholesterol, triglycerides, glucose, creatinine and urea levels, as well as ALT, AST, ALP and LDH. The liver and kidney of the treated groups also showed a significant decrease in MDA, NO, TNF-α, IL-6β, NF-κB, caspase-3 and DNA fragmentation, together with a significant increase in total proteins, SOD and TAC. The biochemical results were confirmed by the histopathological examinations. Conclusion: SI together with PM represents a risk factor enhancing the nephro- and hepatotoxicity induced by Al in rats. The CoQ10, wheatgrass and cocoa combination provides clear protection against nephro- and hepatotoxicity as well as the consequent degeneration induced by chronic Al exposure, even when associated with the risk of SI together with PM.
Keywords: aluminum, nephrotoxicity, hepatotoxicity, isolation and protein malnutrition, coenzyme Q10, wheatgrass, cocoa, nutrients combinations
Procedia PDF Downloads 24757 Pharmacognostical, Phytochemical and Biological Studies of Leaves and Stems of Hippophae Salicifolia
Authors: Bhupendra Kumar Poudel, Sadhana Amatya, Tirtha Maiya Shrestha, Bharatmani Pokhrel, Mohan Prasad Amatya
Abstract:
Background: H. salicifolia is a dense, branched, multipurpose, deciduous, nitrogen-fixing, thorny, willow-like small to moderate tree restricted to the Himalaya. Of the two species found in Nepal (Hippophae salicifolia and H. tibetana), it has traditionally been used as a food additive, an anticancer agent (bark), and a treatment for toothache, tooth inflammation (anti-inflammatory) and radiation injury, while people of Western Nepal have largely undervalued this hidden treasure by using it only for fuel, wood and soil stabilization. Therefore, the main objective of this study was to explore the biological properties (analgesic, antidiabetic, cytotoxic and anti-inflammatory) of this plant. Methodology: Transverse sections of leaves and stems were viewed under a microscope. Extracts obtained by Soxhlet extraction were subjected to phytochemical and biological tests. Data from rats (used to study antidiabetic and anti-inflammatory properties) and mice (used to study analgesic, CNS depressant, muscle relaxant and locomotor properties) were assumed to be normally distributed; ANOVA followed by a post hoc Tukey test was then used to assess significance. The data obtained were analyzed with SPSS 17 and Excel 2007. Results and Conclusion: Pharmacognostical analysis revealed the presence of long stellate trichomes, double-layered vascular bundles 5-6 in number, and double-layered compact sclerenchyma. Preliminary phytochemical screening of the extracts gave positive reactions for glycosides, steroids, tannins, flavonoids, saponins, coumarins and reducing sugars. The brine shrimp lethality bioassay, tested at 1000, 100 and 10 ppm, revealed cytotoxic activity in the methanol, water, chloroform and ethyl acetate extracts, with LC50 (μg/ml) values of 61.42, 99.77, 292.72 and 277.84, respectively. The cytotoxic activity may be due to the presence of tannins among the constituents. Antimicrobial screening of the extracts by the cup diffusion method using Staphylococcus aureus, Escherichia coli and Pseudomonas aeruginosa, with standard antibiotics (oxacillin, gentamicin and amikacin, respectively) as references, showed no activity against the microorganisms tested. The methanol extract of the stems and leaves showed antidiabetic, anti-inflammatory, analgesic (chemical writhing method), CNS depressant, muscle relaxant and locomotor activities in a dose-dependent fashion, indicating that different constituents in the stems and leaves may be responsible for these biological activities. All effects, when analyzed by the post hoc Tukey test, were significant at the 95% confidence level. The antidiabetic activity was presumed to be due to flavonoids present in the extract. Therefore, it can be concluded that this plant’s secondary metabolites possess strong antidiabetic, anti-inflammatory and cytotoxic activity and could be isolated for further investigation.Keywords: Hippophae salicifolia, constituents, antidiabetic, inflammatory, brine shrimp
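For readers unfamiliar with the statistical step described above, the following is a minimal sketch of a one-way ANOVA followed by a post hoc Tukey test, analogous in spirit to the SPSS analysis the authors report. The group names and writhing counts are illustrative assumptions, not the study's data.

```python
# Hedged sketch: one-way ANOVA followed by Tukey's HSD at alpha = 0.05.
# Group labels and values below are hypothetical placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical writhing counts for a control group and two extract doses.
groups = {
    "control":      rng.normal(30, 4, 8),
    "extract_low":  rng.normal(24, 4, 8),
    "extract_high": rng.normal(18, 4, 8),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # 95% confidence level
```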
Procedia PDF Downloads 34656 Peculiarities of Absorption near the Edge of the Fundamental Band of Irradiated InAs-InP Solid Solutions
Authors: Nodar Kekelidze, David Kekelidze, Elza Khutsishvili, Bela Kvirkvelia
Abstract:
Semiconductor devices are irreplaceable elements for investigations in space (artificial Earth satellites, interplanetary spacecraft, probes, rockets), for the study of elementary particles at accelerators, and for atomic power stations, nuclear reactors and robots operating in heavily radiation-contaminated territories (Chernobyl, Fukushima). Unfortunately, the most important parameters of semiconductors worsen dramatically under irradiation. The creation of radiation-resistant semiconductor materials for opto- and microelectronic devices is therefore a pressing problem, as is the investigation of the complicated processes that develop in irradiated solids. Homogeneous single crystals of InP-InAs solid solutions were grown by the zone melting method. The dependence of the optical absorption coefficient on photon energy near the fundamental absorption edge was studied; this dependence changes dramatically with irradiation. The experiments were performed on InP, InAs and InP-InAs solid solutions before and after irradiation with electrons and fast neutrons. Optical properties were investigated with an infrared spectrophotometer in the temperature range 10 K-300 K and the 1-50 µm spectral range. The fast-neutron fluence was 2·10¹⁸ neutron/cm², and electrons of 3 MeV and 50 MeV were used up to fluences of 6·10¹⁷ electron/cm². Under irradiation, an exponential dependence of the optical absorption coefficient on photon energy, with an energy deficiency, was revealed. This phenomenon takes place at both high and low temperatures, at different impurity concentrations, and in practically all cases of irradiation by electrons of various energies and by fast neutrons. We have developed a common mechanism for this phenomenon in unirradiated materials and carried out quantitative calculations of the characteristic parameter, which are in satisfactory agreement with experimental data. For irradiated crystals the picture becomes more complicated, and the corresponding analysis is carried out in this work. It has been shown that in the case of InP irradiated with electrons (Ф = 1·10¹⁷ el/cm²), the optical absorption curve is shifted to lower energies. This is caused by the appearance of density-of-states tails in the forbidden band due to local fluctuations of the ionized impurity (defect) concentration. The situation is more complicated in the case of InAs and of solid solutions with compositions close to InAs, where, in addition to the phenomenon noted above, the Burstein effect takes place, caused by an increase in electron concentration as a result of irradiation. We have shown that under certain conditions the Burstein effect can prevail, which causes the opposite effect: a shift of the optical absorption edge to higher energies. Thus, two oppositely directed processes take place in these solid solutions. By selecting the solid solution composition and the doping impurity, we obtained an InP-InAs solid solution in which, under irradiation, the displacements of the optical absorption curves mutually compensate. This result makes it possible to create radiation-resistant optical materials based on InP-InAs solid solutions. Conclusion: The nature of optical absorption near the fundamental edge in these semiconductor materials was established, and a radiation-resistant optical material was created.Keywords: InAs-InP, electrons concentration, irradiation, solid solutions
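The exponential absorption edge described above is commonly characterized by fitting ln(α) versus photon energy to extract a characteristic (Urbach-like) energy. The sketch below illustrates such a fit under assumed parameters; the authors' actual data and fitting procedure are not reproduced here.

```python
# Hedged sketch: extract the characteristic energy of an exponential absorption
# edge, alpha = alpha0 * exp((E - E0) / E_u), via a linear fit of ln(alpha) vs E.
# All parameter values and the synthetic data are illustrative assumptions.
import numpy as np

E = np.linspace(0.30, 0.42, 40)            # photon energy, eV (illustrative)
E_u_true, alpha0, E0 = 0.015, 1.0e3, 0.42  # assumed edge parameters
alpha = alpha0 * np.exp((E - E0) / E_u_true) * (1 + 0.02 * np.random.randn(E.size))

slope, intercept = np.polyfit(E, np.log(alpha), 1)
E_u_fit = 1.0 / slope                      # characteristic energy in eV
print(f"fitted characteristic energy: {E_u_fit * 1000:.1f} meV")
```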
Procedia PDF Downloads 20155 The Applications of Zero Water Discharge (ZWD) Systems for Environmental Management
Authors: Walter W. Loo
Abstract:
China declared “zero discharge” rules, which leave no toxics in our living environment and deliver blue sky, green land and clean water to many generations to come. Achieving ZWD will conserve water, soil and energy and provide a drastic increase in Gross Domestic Product (GDP). Our society’s engine is sputtering and needs a major tune-up. ZWD is already achieved in the world’s space stations: there are no toxic air emissions, the water is totally recycled, and solid wastes all come back to Earth, all powered by solar energy. This is achieved under extreme temperature, pressure and zero-gravity conditions in space; ZWD can be achieved on Earth under much smaller fluctuations in temperature and pressure and in a normal gravity environment. ZWD systems are not expensive and will have multiple beneficial returns on investment that are both financially and environmentally acceptable. The paper includes successful case histories since the mid-1970s. ZWD can be applied to the following types of projects: nuclear and coal-fired power plants, with a closed-loop system that eliminates thermal water discharge; residential communities, with a wastewater treatment sump that recycles water for use as a secondary water supply; wastewater treatment plants, with complete water recycling including distillation of water by a very economical 24-hour solar power plant. Landfill remediation is based on neutralizing landfill gas odor and preventing anaerobic leachate formation; maintaining aerobic conditions renders landfill gas emissions explosion-proof. Desert development involves recovering moisture from the soil and completing a closed-loop water cycle with solar energy within and underneath an enclosed greenhouse. Salt-alkali land development can be achieved by solar distillation of salty shallow water into distilled water; the distilled water can be used for soil washing and irrigation, completing a closed-loop water cycle with energy and water conservation. Heavy metals remediation can be achieved by precipitation of dissolved toxic metals below the plant or vegetation root zone using solar electricity, without pumping and treating. Soil and groundwater remediation: abandoned refineries and chemical and pesticide factories can be remediated by in-situ electrobiochemical and bioventing treatment methods without pumping or excavation. Toxic organic chemicals are oxidized into carbon dioxide, and heavy metals are precipitated below the plant and vegetation root zone. New water sources: low-temperature distilled water can be recycled for repeated use within a greenhouse environment by solar distillation; nanobubble water can be made from the distilled water with nanobubbles of oxygen, nitrogen and carbon dioxide from air (fertilizer water), and it can also eliminate the use of pesticides because the nano-oxygen breaks the insect growth chain at the larval stage. Three-dimensional high-yield greenhouses can be constructed with complete water recycling, using the vadose zone soil as a filter, with no farming wastewater discharge.Keywords: greenhouses, no discharge, remediation of soil and water, wastewater
Procedia PDF Downloads 34454 Transparency of Algorithmic Decision-Making: Limits Posed by Intellectual Property Rights
Authors: Olga Kokoulina
Abstract:
Today, algorithms are assuming a leading role in various areas of decision-making. Prompted by a promise to provide increased economic efficiency and fuel solutions for pressing societal challenges, algorithmic decision-making is often celebrated as an impartial and constructive substitute for human adjudication. But in the face of this implied objectivity and efficiency, the application of algorithms is also marred by mounting concerns about embedded biases, discrimination, and exclusion. In Europe, vigorous debates on the risks and adverse implications of algorithmic decision-making largely revolve around the potential of data protection laws to tackle some of the related issues. For example, one of the often-cited avenues to mitigate the impact of potentially unfair decision-making practices is a so-called 'right to explanation'. In essence, the overall right is derived from the provisions of the General Data Protection Regulation (‘GDPR’) ensuring data subjects' right of access and obliging data controllers to provide relevant information about the existence of automated decision-making and meaningful information about the logic involved. Taking the corresponding rights and obligations in the context of the specific provision on automated decision-making in the GDPR, the debates mainly focus on the efficacy and the exact scope of the 'right to explanation'. In essence, the underlying logic of the argued remedy lies in a transparency imperative. Allowing data subjects to acquire as much knowledge as possible about the decision-making process means empowering individuals to take control of their data and take action. In other words, forewarned is forearmed. The related discussions and debates are ongoing, comprehensive, and, often, heated. However, they are also frequently misguided and isolated: embracing data protection law as the ultimate and sole lens is often not sufficient. Mandating the disclosure of technical specifications of employed algorithms in the name of transparency for, and empowerment of, data subjects potentially encroaches on the interests and rights of IPR holders, i.e., the business entities behind the algorithms. The study aims at pushing the boundaries of the transparency debate beyond the data protection regime. By systematically analysing legal requirements and current judicial practice, it assesses the limits of the transparency requirement and right to access posed by intellectual property law, namely by copyrights and trade secrets. It is asserted that trade secrets, in particular, present an often-insurmountable obstacle to realising the potential of the transparency requirement. In reaching that conclusion, the study explores the limits of protection afforded by the European Trade Secrets Directive and contrasts them with the scope of the respective rights and obligations related to data access and portability enshrined in the GDPR. As shown, the far-reaching scope of the protection under trade secrecy is evidenced both through the assessment of its subject matter and through the exceptions from such protection. As a way forward, the study scrutinises several possible legislative solutions, such as flexible interpretation of the public interest exception in trade secrets as well as the introduction of a strict liability regime in the case of non-transparent decision-making.Keywords: algorithms, public interest, trade secrets, transparency
Procedia PDF Downloads 12453 Catalytic Decomposition of Formic Acid into H₂/CO₂ Gas: A Distinct Approach
Authors: Ayman Hijazi, Witold Kwapinski, J. J. Leahy
Abstract:
Finding a sustainable energy alternative to fossil fuels is an urgent need as environmental challenges around the world mount. Formic acid (FA) decomposition has therefore become an attractive field that lies at the center of the biomass platform, comprising a potential pool of hydrogen energy that stands as a distinct energy vector. Liquid FA features a considerable volumetric energy density of 6.4 MJ/L and a specific energy density of 5.3 MJ/kg, which places it in a prime position as an energy source for transportation infrastructure. Additionally, the increasing research interest in FA decomposition is driven by the need for in-situ H₂ production, which plays a key role in the hydrogenation reactions of biomass into higher-value components. It is reported elsewhere in the literature that catalytic decomposition of FA is usually performed in poorly designed setups using simple glassware under magnetic stirring, thus demanding further energy investment to retain the used catalyst. Our work suggests an approach that integrates the design of a distinct catalyst featuring magnetic properties with a robust setup that minimizes experimental and measurement discrepancies. One of the most prominent active species for the dehydrogenation/hydrogenation of biomass compounds is palladium. Accordingly, we investigate the potential of grafting palladium metal onto functionalized magnetic nanoparticles as a heterogeneous catalyst to favor the production of CO-free H₂ gas from FA. Using an ordinary magnet to collect the spent catalyst makes core-shell magnetic nanoparticles the backbone of the process. Catalytic experiments were performed in a jacketed batch reactor equipped with an overhead stirrer under an inert medium. In a distinct approach, FA is charged into the reactor via a high-pressure positive displacement pump at steady-state conditions. The produced gas (H₂+CO₂) was measured by connecting the gas outlet to a measuring system based on the amount of displaced water. The uniqueness of this work lies in designing a very responsive catalyst, pumping a consistent amount of FA into a sealed reactor running at steady-state mild temperatures, measuring the gas continuously, and collecting the used catalyst without the need for centrifugation. Catalyst characterization using TEM, XRD, SEM, and a CHN elemental analyzer provided details of the catalyst preparation and opened new avenues for altering the nanostructure of the catalyst framework. Consequently, the introduction of amine groups led to appreciable improvements in the dispersion of the doped metals, eventually attaining nearly complete conversion (100%) of FA after 7 hours. The relative importance of the process parameters, namely temperature (35-85°C), stirring speed (150-450 rpm), catalyst loading (50-200 mg), and Pd doping ratio (0.75-1.80 wt.%), on gas yield was assessed by a Taguchi design-of-experiment based model. Experimental results showed that operating at a lower temperature range (35-50°C) yielded more gas, while the catalyst loading and Pd doping wt.% were found to be the most significant factors, with P-values of 0.026 and 0.031, respectively.Keywords: formic acid decomposition, green catalysis, hydrogen, mesoporous silica, process optimization, nanoparticles
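As a rough illustration of the Taguchi design-of-experiment analysis mentioned above, the sketch below computes larger-is-better signal-to-noise ratios and factor-level means for a standard L9 orthogonal array with the four three-level factors named in the abstract. The gas-yield responses are hypothetical placeholders, not the measured data.

```python
# Hedged sketch: Taguchi "larger is better" S/N analysis over an L9 array
# (four factors, three levels each). Responses are illustrative assumptions.
import numpy as np

L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
factors = ["temperature", "stirring", "catalyst_loading", "Pd_wt%"]
yield_ml = np.array([420., 510., 580., 450., 530., 490., 400., 470., 520.])  # hypothetical

# Larger-is-better S/N ratio for a single replicate: -10*log10(1/y^2).
sn = -10 * np.log10(1.0 / yield_ml**2)
for j, name in enumerate(factors):
    means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
    print(f"{name:17s} level means = {np.round(means, 2)}  delta = {max(means) - min(means):.2f}")
```

A larger delta between level means flags a factor as more influential, mirroring the abstract's finding that catalyst loading and Pd doping dominate.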
Procedia PDF Downloads 5652 The Sense of Recognition of Muslim Women in Western Academia
Authors: Naima Mohammadi
Abstract:
The present paper critically reports on the emergence of Iranian international students at a large public university in Italy. Although the most sizeable diaspora of Iranians dates back to the 1979 revolution, a huge wave of Iranian female students travelled abroad after the Iranian Green Movement (2009) due to the intensification of gender discrimination and Islamization. To explore the experience of Iranian female students at an Italian public university, two complementary methods were adopted: a focus group and individual interviews. Focus groups yield detailed collective conversations and provide researchers with an opportunity to observe the interaction between participants, rather than between participant and researcher, which generates data. Semi-structured interviews allow participants to share their stories in their own words and speak about personal experiences and opinions. Participants were recruited through a public call in a Telegram group of Iranian students. Theoretical and purposive sampling was applied to select participants. All participants were assured of full anonymity and consented to take part in the research. A two-hour focus group was held in English, with some participants attending in person and some online. They were asked to share their motivations for studying in Italy and to talk about their experiences both within and outside the university context. Each of the interviews lasted from 45 to 60 minutes and was mostly carried out online and in Farsi. The focus group consisted of 8 Iranian female post-graduate students. In analyzing the data, a blended approach was adopted, with a combination of deductive and inductive coding. According to the research findings, although 9/11 was the beginning of the West’s challenges against Muslims, the nuclear threats of Islamic regimes prompted the toughest international sanctions against Iranians as a nation across the world. Accordingly, carrying an Iranian identity contributes to social, political, and economic exclusion. The findings show that geopolitical factors such as international sanctions and Islamophobia, and a lack of reciprocity in terms of recognition, have created a sense of stigmatization for veiled and unveiled Iranian female students, who constitute the largest group of ‘non-European Muslim international students’ enrolled in Italian universities. Participants addressed how their nationality has devalued their public image and negatively impacted their self-confidence and self-realization in academia. They highlighted experiences of an unwelcoming atmosphere created by different groups of people and institutions, such as marked student badges, rejected bank account requests, failed visa processes, selection for secondary security screening, and the hyper-visibility of veiled students. This study corroborates the need for institutions to pay attention to geopolitical factors and religious diversity in student recruitment and to provide support mechanisms and access to basic rights. Accordingly, it is suggested that Higher Education Institutions (HEIs) have a social and moral responsibility to address the discrimination and the social and academic exclusion of Iranian students.Keywords: Iranian diaspora, female students, recognition theory, inclusive university
Procedia PDF Downloads 7351 Quantified Metabolomics for the Determination of Phenotypes and Biomarkers across Species in Health and Disease
Authors: Miroslava Cuperlovic-Culf, Lipu Wang, Ketty Boyle, Nadine Makley, Ian Burton, Anissa Belkaid, Mohamed Touaibia, Marc E. Surrette
Abstract:
Metabolic changes are one of the major factors in the development of a variety of diseases in various species. The metabolism of agricultural plants is altered following infection with pathogens, sometimes contributing to resistance. At the same time, pathogens use metabolites for infection and progression. In humans, altered metabolism is a hallmark of cancer development, for example. Quantified metabolomics data, combined with other omics or clinical data and analyzed using various unsupervised and supervised methods, can lead to better diagnosis and prognosis. In this work, different methods for metabolomics quantification and analysis from Nuclear Magnetic Resonance (NMR) measurements, used to investigate disease development in wheat and human cells, are presented. One-dimensional 1H NMR spectra are used extensively for metabolic profiling due to their high reliability, wide range of applicability, speed, trivial sample preparation and low cost. This presentation describes a new method for metabolite quantification from NMR data that combines alignment of standard spectra to sample spectra with multivariate linear regression optimization of the spectra of assigned metabolites against the samples’ spectra. Several alignment methods were tested, and the multivariate linear regression results were compared with other quantification methods. Quantified metabolomics data can be analyzed in a variety of ways, and we present different clustering methods used for phenotype determination, network analysis providing knowledge about the relationships between metabolites through the metabolic network, and biomarker selection providing novel markers. These analysis methods have been utilized for the investigation of Fusarium head blight resistance in wheat cultivars as well as analysis of the effect of estrogen receptor and carbonic anhydrase activation and inhibition on breast cancer cell metabolism. Metabolic changes in spikelets of wheat cultivars FL62R1, Stettler, MuchMore and Sumai3 following Fusarium graminearum infection were explored. Extensive 1D 1H and 2D NMR measurements provided information for detailed metabolite assignment and quantification, leading to possible metabolic markers discriminating resistance level in wheat subtypes. The quantification data are compared to results obtained using other published methods. Fusarium infection-induced metabolic changes in different wheat varieties are discussed in the context of the metabolic network and resistance. Quantitative metabolomics has also been used to investigate the effect of targeted enzyme inhibition in cancer. In this work, the effect of 17β-estradiol and ferulic acid on the metabolism of ER+ breast cancer cells has been compared to their effect on ER- control cells. The effect of carbonic anhydrase inhibitors on the observed metabolic changes resulting from ER activation has also been determined. Metabolic profiles were studied using 1D and 2D metabolomic NMR experiments, combined with the identification and quantification of metabolites, and the annotation of the results is provided in the context of biochemical pathways.Keywords: metabolic biomarkers, metabolic network, metabolomics, multivariate linear regression, NMR quantification, quantified metabolomics, spectral alignment
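The quantification step described above, aligning standard spectra to a sample spectrum and then fitting them by multivariate linear regression, can be sketched as follows. This is a simplified illustration with synthetic Gaussian "spectra", a coarse cross-correlation alignment, and a non-negative least-squares fit; it is not the authors' implementation.

```python
# Hedged sketch: quantify metabolites by non-negative linear regression of
# reference (standard) spectra against a sample spectrum, after a crude
# shift-based alignment. All spectra below are synthetic placeholders.
import numpy as np
from scipy.optimize import nnls

def align(reference, sample, max_shift=20):
    """Shift a reference spectrum to best match the sample (coarse alignment)."""
    shifts = range(-max_shift, max_shift + 1)
    best = max(shifts, key=lambda s: np.dot(np.roll(reference, s), sample))
    return np.roll(reference, best)

npts = 2000
rng = np.random.default_rng(1)
x = np.arange(npts)
# Two synthetic "standard" spectra (single Gaussian peaks) and a mixture sample.
std1 = np.exp(-0.5 * ((x - 600) / 8.0) ** 2)
std2 = np.exp(-0.5 * ((x - 1300) / 8.0) ** 2)
sample = 2.5 * np.roll(std1, 5) + 0.8 * np.roll(std2, -3) + 0.01 * rng.standard_normal(npts)

basis = np.column_stack([align(std1, sample), align(std2, sample)])
concentrations, residual = nnls(basis, sample)   # non-negative regression coefficients
print("estimated relative concentrations:", np.round(concentrations, 3))
```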
Procedia PDF Downloads 33850 Philippine Site Suitability Analysis for Biomass, Hydro, Solar, and Wind Renewable Energy Development Using Geographic Information System Tools
Authors: Jara Kaye S. Villanueva, M. Rosario Concepcion O. Ang
Abstract:
For the past few years, the Philippines has depended on oil, coal, and other fossil fuels for most of its energy. According to the Department of Energy (DOE), the dominance of coal in the energy mix will continue until the year 2020. The expanding energy needs in the country have led to increasing efforts to promote and develop renewable energy. This research is part of a government initiative in preparation for renewable energy development and expansion in the country. The Philippine Renewable Energy Resource Mapping from Light Detection and Ranging (LiDAR) Surveys is a three-year government project which aims to assess and quantify the renewable energy potential of the country and to translate it into usable maps. This study focuses on the site suitability analysis of four renewable energy sources – biomass (coconut, corn, rice, and sugarcane), hydro, solar, and wind energy. Site assessment is a key component in determining the most suitable locations for the construction of renewable energy power plants. The approach combines technical resource assessment with environmental, social, and accessibility considerations in identifying potential sites by integrating two methods: Multi-Criteria Decision Analysis (MCDA) and Geographic Information System (GIS) tools. For the MCDA, the Analytic Hierarchy Process (AHP) is employed to determine the parameters needed for the suitability analysis. To structure these site suitability parameters, experts from different fields were consulted – scientists, policy makers, environmentalists, and industrialists. A well-represented group of consultees is needed to avoid bias in the resulting hierarchy levels and weight matrices. AHP pairwise matrix computation is used to derive weights per level from the experts’ feedback, while threshold values derived from related literature, international studies, and government regulations were reviewed with energy specialists from the DOE. Geospatial analysis using GIS tools translates these decision-support outputs into visual maps. In particular, this study uses Euclidean distance to compute the distance values for each parameter, a fuzzy membership algorithm to normalize the Euclidean distance output, and the weighted overlay tool to aggregate the layers. Using the natural breaks algorithm, the suitability ratings of each map are classified into five discrete categories of suitability index: (1) not suitable, (2) least suitable, (3) suitable, (4) moderately suitable, and (5) highly suitable. In this method, classes are formed by grouping similar values so that each subdivision is separated from the rest by large differences in boundary values. Results show that, over the entire Philippine area of responsibility, biomass has the highest suitability rating, with rice the most suitable at 75.76%, whereas wind has the lowest suitability percentage at 10.28%. Solar and hydro fall between the two, with suitability values of 28.77% and 21.27%, respectively.Keywords: site suitability, biomass energy, hydro energy, solar energy, wind energy, GIS
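A minimal sketch of the AHP weight derivation and weighted-overlay steps described above is given below. The 3x3 pairwise comparison matrix, the criteria, and the raster values are illustrative assumptions, not the study's expert judgments or LiDAR-derived layers.

```python
# Hedged sketch: AHP weights from a pairwise comparison matrix (principal
# eigenvector) with a consistency check, then a weighted overlay of
# fuzzy-normalized criterion layers. Matrix and rasters are illustrative.
import numpy as np

def ahp_weights(pairwise):
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)   # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index
    return w, ci / ri                      # weights, consistency ratio

# Hypothetical criteria: resource potential vs. distance-to-grid vs. slope.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(A)
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))

# Weighted overlay of three fuzzy-normalized (0..1) criterion "rasters".
layers = np.random.default_rng(2).random((3, 4, 4))
suitability = np.tensordot(weights, layers, axes=1)   # 4x4 suitability score
print(np.round(suitability, 2))
```

In practice, the resulting continuous score would then be classified into the five suitability categories (e.g., with a natural breaks routine), as described in the abstract.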
Procedia PDF Downloads 14949 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management
Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, further adding to their perennial global concern. Forest fires often lead to devastating consequences ranging from loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making in diverse applications, including disaster management. This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fires. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities
Procedia PDF Downloads 7248 Strategies for Drought Adpatation and Mitigation via Wastewater Management
Authors: Simrat Kaur, Fatema Diwan, Brad Reddersen
Abstract:
The unsustainable and injudicious use of natural renewable resources beyond the self-replenishment limits of our planet has proved catastrophic. Most of the Earth’s resources, including land, water, minerals, and biodiversity, have been overexploited. Owing to this, there has been a steep rise in global natural calamities of contrasting nature, such as torrential rains, storms, heat waves, rising sea levels, and megadroughts. These are all interconnected through common elements, namely oceanic currents and the land’s green cover. Deforestation fueled by ‘economic elites’ or global players has already cleared massive forests and ecological biomes in every region of the globe, including the Amazon. These were natural carbon sinks that had been performing CO2 sequestration for millions of years. Forest biomes have been turned into monoculture farms producing feedstock crops such as soybean, maize, and sugarcane, which are among the biggest greenhouse gas emitters. Such unsustainable agricultural practices only provide feedstock for livestock and food processing industries with huge carbon and water footprints. These are the two main factors that have a ‘cause and effect’ relationship in the context of climate change. In contrast to organic and sustainable farming, monoculture practices to produce food, fuel, and feedstock using chemicals deprive the soil of its fertility, abstract surface and ground waters beyond the limits of replenishment, emit greenhouse gases, and destroy biodiversity. There are numerous cases across the planet where, due to overuse, the levels of surface water reservoirs such as Lake Mead in the Southwestern USA and of groundwater such as in Punjab, India, have shrunk dramatically. Unlike the rain-fed food production systems on which the world’s poor communities rely, blue-water (surface and groundwater) dependent mono-cropping for industrial and processed food creates a water deficit that puts the burden on domestic users. Excessive abstraction of both surface and ground waters for high-water-demanding feedstock (soybean, maize, sugarcane), cereal crops (wheat, rice), and cash crops (cotton) has a dual and synergistic impact on global greenhouse gas emissions and the prevalence of megadroughts. Both these factors have elevated global temperatures, which cause cascading events such as soil water deficits, flash fires, and unprecedented burning of the woods, creating megafires on multiple continents, namely in the USA, South America, Europe, and Australia. Therefore, it is imperative to reduce the green and blue water footprints of the agricultural and industrial sectors through the recycling of black and gray waters. This paper explores various opportunities for the successful implementation of wastewater management for drought preparedness in high-risk communities.Keywords: wastewater, drought, biodiversity, water footprint, nutrient recovery, algae
Procedia PDF Downloads 10047 Decision Making on Smart Energy Grid Development for Availability and Security of Supply Achievement Using Reliability Merits
Authors: F. Iberraken, R. Medjoudj, D. Aissani
Abstract:
The development of the smart grid concept is built around two separate definitions, namely the European one, oriented towards sustainable development, and the American one, oriented towards reliability and security of supply. In this paper, we investigate reliability merits enabling decision-makers to provide a high quality of service. The analysis is based on system behavior, using interruption and failure modeling and forecasting on the one hand, and on the contribution of information and communication technologies (ICT) to mitigating catastrophic events such as blackouts on the other. It was found that this concept has been adopted by developing and emerging countries for short- and medium-term planning, followed by the sustainability concept in long-term planning. This work highlights the reliability merits, namely benefits, opportunities, costs, and risks, considered as consistent units for measuring power customer satisfaction. From the decision-making point of view, we use the analytic hierarchy process (AHP) to achieve customer satisfaction, based on the reliability merits and the contribution of the available energy resources. Nowadays, fossil and nuclear resources dominate energy production, but great advances have already been made towards cleaner ones, and it is demonstrated that these resources are not only environmentally but also economically and socially sustainable. The paper is organized as follows: Section one is devoted to the introduction, where an implicit review of smart grid development is given for the two main concepts (for the USA and European countries). The AHP method and the BOCR development of reliability merits against power customer satisfaction are presented in section two. The benefits were expressed by the high level of availability, the applicability of maintenance actions, and power quality. Opportunities were highlighted by the implementation of ICT in data transfer and processing, the mastering of peak demand control, the decentralization of production, and power system management under fault conditions. Costs were evaluated using cost-benefit analysis, including investment expenditures in network security (the grid becoming a target for hackers and terrorists) and the profits of operating as decentralized systems, with reduced energy not supplied thanks to the availability of storage units fed from renewable resources and to the current power lines (CPL) enabling the power dispatcher to manage load shedding optimally. For risks, we raised the question of citizens' willingness to contribute financially to the system and to utility restructuring: what is their degree of agreement with the guarantees proposed by managers about information integrity, and, from a technical point of view, do they have sufficient information and knowledge to adopt a smart home and a smart system? In section three, the AHP method is applied to achieve power customer satisfaction based on the main energy resources as alternatives, using knowledge drawn from a country that is well advanced in its energy transition. Results and discussion are given in section four. We conclude that the choice of a given resource depends on the attitude of the decision maker (prudent, optimistic, or pessimistic) and that the status quo is neither sustainable nor satisfactory.Keywords: reliability, AHP, renewable energy resources, smart grids
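As an illustration of how the BOCR merits can be synthesized into an overall ranking of energy alternatives, the sketch below applies the common multiplicative aggregation B·O/(C·R). The alternatives and priority values are assumed for illustration; the paper's actual AHP judgments are not reproduced here.

```python
# Hedged sketch: multiplicative BOCR synthesis, score = B*O / (C*R), for
# three energy alternatives. All priority vectors are illustrative assumptions.
import numpy as np

alternatives = ["fossil", "nuclear", "renewable"]
# Normalized priorities under each merit (assumed values, each vector sums to 1).
B = np.array([0.30, 0.30, 0.40])   # benefits: availability, power quality
O = np.array([0.20, 0.30, 0.50])   # opportunities: ICT, decentralization
C = np.array([0.45, 0.35, 0.20])   # costs: investment, energy not supplied
R = np.array([0.40, 0.40, 0.20])   # risks: acceptance, security concerns

score = (B * O) / (C * R)
score /= score.sum()
for name, s in sorted(zip(alternatives, score), key=lambda t: -t[1]):
    print(f"{name:10s} {s:.3f}")
```

Alternative synthesis formulas (e.g., additive bB + oO - cC - rR with merit weights) can be used to reflect a prudent, optimistic, or pessimistic decision-maker attitude, as discussed in the abstract.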
Procedia PDF Downloads 44246 Study on Aerosol Behavior in Piping Assembly under Varying Flow Conditions
Authors: Anubhav Kumar Dwivedi, Arshad Khan, S. N. Tripathi, Manish Joshi, Gaurav Mishra, Dinesh Nath, Naveen Tiwari, B. K. Sapra
Abstract:
In a nuclear reactor accident scenario, a large number of fission products may be released into the piping system of the primary heat transport circuit. The released fission products, mostly in the form of aerosols, are deposited on the inner surface of the piping system mainly due to gravitational settling and thermophoretic deposition. The removal processes in the complex piping system are controlled to a large extent by the thermal-hydraulic conditions, such as temperature, pressure, and flow rates. These parameters generally vary with time and therefore must be carefully monitored to predict aerosol behavior in the piping system. The aerosol removal process depends on particle size, which determines how many particles deposit or travel across the bends and reach the other end of the piping system. The released aerosol is deposited onto the inner surface of the piping system by various mechanisms such as gravitational settling, Brownian diffusion, thermophoretic deposition, and other deposition mechanisms. To obtain a correct estimate of deposition, the identification and understanding of these deposition mechanisms are of great importance. These mechanisms are significantly affected by different flow and thermodynamic conditions, and thermophoresis also plays a significant role in particle deposition. In the present study, a series of experiments was performed in the piping system of the National Aerosol Test Facility (NATF), BARC, using metal aerosols (zinc) in dry environments to study the spatial distribution of particle mass and number concentration and their depletion due to various removal mechanisms in the piping system. The experiments were performed at two different carrier gas flow rates. The commercial CFD software FLUENT is used to determine the distribution of temperature, velocity, pressure, and turbulence quantities in the piping system. In addition to the built-in models for turbulence, heat transfer and flow in the commercial CFD code (FLUENT), a new sub-model, a population balance model (PBM), is used to describe the coagulation process and to compute the number concentration along with the size distribution at different sections of the piping. In the sub-model, coagulation kernels are incorporated through a user-defined function (UDF). The experimental results are compared with the CFD-modeled results. It is found that most of the Zn particles (more than 35%) deposit near the inlet of the plenum chamber, while deposition in the piping sections is low. The MMAD decreases along the length of the test assembly, which shows that large particles are deposited or removed in the course of flow, and only fine particles travel to the end of the piping system. The effect of a bend is also observed, and it is found that the relative loss in mass concentration at bends is greater in the case of the high flow rate. The simulation results show that thermophoretic and depositional effects are more dominant for the small and large sizes compared to intermediate particle sizes. Both SEM and XRD analyses of the collected samples show that the samples are highly agglomerated, non-spherical, and composed mainly of ZnO. The coupled model framed in this work could be used as an important tool for predicting the size distribution and concentration of other aerosols released during a reactor accident scenario.Keywords: aerosol, CFD, deposition, coagulation
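To make the population balance step concrete, the following is a minimal sketch of a discrete (sectional) Smoluchowski coagulation model of the kind typically coupled to a CFD solver through a user-defined function. The bin structure, kernel value, and time step are assumptions for illustration, not the study's parameters.

```python
# Hedged sketch: discrete Smoluchowski population balance with a constant
# coagulation kernel. Bins represent aggregates of 1..5 monomers; aggregates
# larger than the largest bin are simply dropped (open system). All numbers
# are illustrative assumptions.
import numpy as np

nbins = 5
N = np.array([1e12, 0.0, 0.0, 0.0, 0.0])   # number concentration per bin (#/m^3)
K = 1e-15                                   # size-independent kernel (m^3/s), assumed
dt, steps = 1.0, 1000                       # time step (s) and number of steps

for _ in range(steps):
    dN = np.zeros(nbins)
    for i in range(nbins):
        for j in range(nbins):
            rate = K * N[i] * N[j]          # ordered-pair collision rate
            dN[i] -= rate                   # bin i loses a particle to each collision
            if i + j + 1 < nbins:           # (i+1)+(j+1) monomers -> bin index i+j+1
                dN[i + j + 1] += 0.5 * rate # factor 0.5 avoids double-counting pairs
    N = np.maximum(N + dt * dN, 0.0)

print("number concentration per bin:", N)
```

In the coupled framework described above, each CFD cell would carry such a set of bin concentrations, with the kernel evaluated from the local flow and temperature fields.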
Procedia PDF Downloads 14445 Bio-Hub Ecosystems: Investment Risk Analysis Using Monte Carlo Techno-Economic Analysis
Authors: Kimberly Samaha
Abstract:
In order to attract new types of investors into the emerging Bio-Economy, new methodologies to analyze investment risk are needed. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. This study looked at repurposing existing biomass-energy plants into Circular Zero-Waste Bio-Hub Ecosystems. The Bio-Hub model first targets a ‘whole-tree’ approach and then looks at the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of biomass power plant facilities. This study modeled the economics and risk strategies of cradle-to-cradle linkages to incorporate the value-chain effects on capital and operational expenditures and investment risk reductions, using a proprietary techno-economic model that incorporates investment risk scenarios through the Monte Carlo methodology. The study calculated the sequential increases in profitability for each additional co-host at an operating forestry-based biomass energy plant in West Enfield, Maine. Phase I starts with the baseline of forestry biomass to electricity only and is built up in stages to include a greenhouse and a land-based shrimp farm as co-hosts. Phase I incorporates CO2 and heat waste streams from the operating power plant in an analysis of lowering and stabilizing the operating costs of the agriculture and aquaculture co-hosts. The Phase II analysis incorporates a jet-fuel biorefinery and its secondary slip-stream of biochar, which would be developed into two additional bio-products: 1) a soil amendment compost for agriculture and 2) a biochar effluent filter for the aquaculture. The second part of the study applied the Monte Carlo risk methodology to illustrate how co-location de-risks investment in an integrated Bio-Hub versus individual investments in stand-alone projects in energy, agriculture or aquaculture. The analyzed scenarios compared reductions in both capital and operating expenditures, which stabilize profits and reduce the investment risk associated with projects in energy, agriculture, and aquaculture. The major findings of this techno-economic modeling using the Monte Carlo technique resulted in the masterplan for the first Bio-Hub to be built in West Enfield, Maine. In 2018, the site was designated an economic opportunity zone as part of a federal program, which allows capital gains tax benefits for investments on the site. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. The Bio-Hub Ecosystems techno-economic analysis model is a critical tool for expediting new standards for investments in circular zero-waste projects. Profitable projects will expedite adoption and advance the critical transition from the current ‘take-make-dispose’ paradigm inherent in the energy, forestry and food industries to a more sustainable Bio-Economy paradigm that supports local and rural communities.Keywords: bio-economy, investment risk, circular design, economic modelling
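The de-risking argument above can be illustrated with a simple Monte Carlo net-present-value comparison between a stand-alone plant and a co-located hub whose co-hosts stabilize operating costs. All cash-flow figures, distributions, and the discount rate below are assumptions for illustration, not the proprietary model's inputs.

```python
# Hedged sketch: Monte Carlo NPV risk comparison, stand-alone plant vs.
# co-located "bio-hub". All financial inputs are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n, years, rate = 10_000, 20, 0.08
discount = 1.0 / (1.0 + rate) ** np.arange(1, years + 1)

def npv(capex, revenue_mu, revenue_sd, opex_mu, opex_sd):
    """Simulate n NPV draws from normally distributed annual revenues and costs."""
    revenue = rng.normal(revenue_mu, revenue_sd, (n, years))
    opex = rng.normal(opex_mu, opex_sd, (n, years))
    return -capex + ((revenue - opex) * discount).sum(axis=1)

standalone = npv(capex=50e6, revenue_mu=9e6,  revenue_sd=2.0e6, opex_mu=5e6, opex_sd=1.0e6)
bio_hub    = npv(capex=65e6, revenue_mu=13e6, revenue_sd=2.0e6, opex_mu=6e6, opex_sd=0.6e6)

for name, dist in [("stand-alone", standalone), ("bio-hub", bio_hub)]:
    print(f"{name:12s} mean NPV = {dist.mean()/1e6:6.1f} M$, "
          f"P(NPV < 0) = {(dist < 0).mean():.2%}")
```

Comparing the downside probability P(NPV < 0) across the two configurations is one simple way to express the co-location de-risking effect that the abstract describes.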
Procedia PDF Downloads 101