Search results for: renewable energy technologies
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11522

92 Interactions between Sodium Aerosols and Fission Products: A Theoretical Chemistry and Experimental Approach

Authors: Ankita Jadon, Sidi Souvi, Nathalie Girault, Denis Petitprez

Abstract:

Safety requirements for Generation IV nuclear reactor designs, especially the new generation of sodium-cooled fast reactors (SFRs), require a risk-informed approach to model severe accidents (SAs) and their consequences in case of an outside release. In SFRs, aerosols are produced during a core disruptive accident when primary-system sodium is ejected into the containment and burns in contact with the air, producing sodium aerosols. One of the key aspects of safety evaluation is the behavior of in-containment sodium aerosols and their interaction with fission products. The study of the effects of sodium fires is essential for safety evaluation, as the fire can both thermally damage the containment vessel and pose an overpressurization risk. Besides, during the fire, fission products initially dissolved in the primary sodium can be aerosolized or, as is the case for some fission products, released in gaseous form. The objective of this work is to study the interactions between sodium aerosols and fission products (iodine, being toxic and volatile, is the primary concern). Sodium fires resulting from an SA would produce aerosols consisting of sodium peroxides, hydroxides, carbonates, and bicarbonates. In addition to being toxic (in oxide form), these aerosols will become radioactive. If such aerosols are leaked into the environment, they can pose a danger to the ecosystem. Depending on the chemical affinity of these chemical forms for fission products, the radiological consequences of an SA leading to a loss of containment leak-tightness will also be affected. This work is split into two phases. First, a method is proposed to theoretically understand the kinetics and thermodynamics of the heterogeneous reactions between sodium aerosols and the fission products I2 and HI.
Ab initio density functional theory (DFT) calculations using the Vienna Ab initio Simulation Package (VASP) are carried out to develop an understanding of the surfaces of sodium carbonate (Na2CO3) aerosols and hence provide insight into their affinity for iodine species. A comprehensive study of I2 and HI adsorption, as well as bicarbonate formation, on the calculated lowest-energy surface of Na2CO3 was performed, which provided adsorption energies and a description of the optimized configuration of the adsorbate on the stable surface. Second, the heterogeneous reaction between (I2)g and Na2CO3 aerosols was investigated experimentally. To study this, (I2)g was generated by heating a permeation tube containing solid I2 and passing the gas through a reaction chamber containing a Na2CO3 aerosol deposit. The iodine concentration was then measured at the exit of the reaction chamber. Preliminary observations indicate an effective uptake of (I2)g on the Na2CO3 surface, as suggested by our theoretical chemistry calculations. This work is a first step in addressing the gaps in knowledge of the in-containment and atmospheric source terms, which are essential aspects of the safety evaluation of SFR SAs. In particular, this study aims to determine and characterize the radiological and chemical source term. These results will then provide useful insights for the development of new models to be implemented in integrated computer simulation tools to analyze and evaluate SFR safety designs.
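The adsorption energies reported from slab calculations of this kind are conventionally obtained as the difference between the total energy of the surface with the adsorbate and the energies of the clean slab and the isolated molecule. A minimal sketch of that bookkeeping follows; the energy values are hypothetical, purely for illustration, and are not taken from the study:

```python
def adsorption_energy(e_slab_plus_adsorbate, e_clean_slab, e_isolated_molecule):
    """E_ads = E(slab + adsorbate) - E(slab) - E(molecule), in eV.

    Negative values indicate energetically favorable adsorption.
    """
    return e_slab_plus_adsorbate - e_clean_slab - e_isolated_molecule

# Hypothetical DFT total energies (eV), chosen only to illustrate the sign convention:
e_ads_i2 = adsorption_energy(-412.73, -410.95, -1.12)  # -0.66 eV: favorable uptake
```

A more negative E_ads for I2 on a given Na2CO3 facet would correspond to the stronger affinity the abstract describes.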

Keywords: iodine adsorption, sodium aerosols, sodium cooled reactor, DFT calculations, sodium carbonate

Procedia PDF Downloads 220
91 Upon Poly(2-Hydroxyethyl Methacrylate-Co-3, 9-Divinyl-2, 4, 8, 10-Tetraoxaspiro (5.5) Undecane) as Polymer Matrix Ensuring Intramolecular Strategies for Further Coupling Applications

Authors: Aurica P. Chiriac, Vera Balan, Mihai Asandulesa, Elena Butnaru, Nita Tudorachi, Elena Stoleru, Loredana E. Nita, Iordana Neamtu, Alina Diaconu, Liliana Mititelu-Tartau

Abstract:

The interest in studying ‘smart’ materials is entirely justified, and in this context investigations were carried out on poly(2-hydroxyethyl methacrylate-co-3,9-divinyl-2,4,8,10-tetraoxaspiro(5.5)undecane), a macromolecular compound with sensitivity to pH and temperature, gel-forming capacity, binding properties, amphiphilicity, and good oxidative and thermal stability. Physico-chemical characteristics in terms of molecular weight, temperature-sensitive abilities and thermal stability, as well as rheological, dielectric and spectroscopic properties, were evaluated in correlation with further coupling capabilities. Differential scanning calorimetry indicated a Tg at 36.6 °C and a melting point at Tm = 72.8 °C for the studied copolymer, and up to 200 °C two exothermic processes (at 99.7 °C and 148.8 °C) were registered with weight losses of about 4% and 19.27%, respectively, indicating thermal decomposition (rather than thermal transitions) owing to scission of the functional groups and breakage of the macromolecular chains. At the same time, the rheological studies (rotational tests) confirmed the non-Newtonian shear-thinning fluid behavior of the copolymer solution. The dielectric properties of the copolymer were evaluated in order to investigate the relaxation processes; two relaxation processes below the Tg were registered and attributed to localized motions of polar groups from the side chains of the macromolecules, or parts of them, without disturbing the main chains. According to the literature, and confirmed by our investigations, the β-relaxation is assigned to the rotation of the ester side group and the γ-relaxation corresponds to the rotation of hydroxymethyl side groups.
Fluorescence spectroscopy confirmed the copolymer structure, with the spiroacetal moiety adopting the more stable, lower-energy axial conformation, able to engage in specific interactions with molecules from the environment, a phenomenon underlined by the different shapes of the emission spectra of the copolymer. The copolymer was also used as a template for the incorporation of indomethacin as a model drug, and the biocompatible character of the complex was confirmed. The release behavior of the bioactive compound depended on the composition of the copolymer matrix, with increasing amounts of the 3,9-divinyl-2,4,8,10-tetraoxaspiro(5.5)undecane comonomer attenuating the drug release. At the same time, the in vivo studies showed no significant differences in leucocyte formula elements, GOT, GPT and LDH levels, or immune parameters (OC, PC, and BC) between the control mice group and the groups treated with the copolymer samples, with or without drug, attesting to the biocompatibility of the polymer samples. The investigation of the physico-chemical characteristics of poly(2-hydroxyethyl methacrylate-co-3,9-divinyl-2,4,8,10-tetraoxaspiro(5.5)undecane) in terms of temperature-sensitive abilities and rheological and dielectric properties provides useful information for further specific uses of this polymeric compound.

Keywords: bioapplications, dielectric and spectroscopic properties, dual sensitivity at pH and temperature, smart materials

Procedia PDF Downloads 284
90 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers

Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala

Abstract:

The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into their preferences. Instead of presenting plain information, classifying different aspects of browsing, such as bookmarks, history, and downloads, into useful categories would improve and enhance the user’s experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources, imposes security constraints, and may miss contextual data during classification. On-device classification solves many of these problems, but the challenge is to achieve accurate classification under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and better privacy and security. This approach provides more relevant results than current standalone solutions because it uses the content rendered by the browser, which is customized by the content provider based on the user’s profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser’s rendering engine. This DOM data is dynamic, contextual and secure data that cannot be replicated. The proposal extracts different features of the webpage, which are fed to an algorithm that classifies the page into multiple categories. A Naive Bayes based engine was chosen for its inherent advantage of using limited resources compared to other classification algorithms such as Support Vector Machines and Neural Networks: Naive Bayes classification requires a small memory footprint and little computation, making it suitable for the smartphone environment.
The solution also has a feature to partition the model into multiple chunks, which reduces memory usage by avoiding loading the complete model. Classification of webpages through the integrated engine is faster, more relevant and more energy efficient than other standalone on-device solutions. The classification engine was tested on Samsung Z3 Tizen hardware, integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. The cleaned dataset has 227.5K webpages divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with a standalone solution. The solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% was achieved across the 8 categories. The engine can be further extended to suggest dynamic tags and to use the classification in different use cases to enhance the browsing experience.
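A multinomial Naive Bayes classifier of the kind described reduces to counting word frequencies per category and scoring new pages with log-probabilities, which is what keeps the memory and compute footprint small. A minimal self-contained sketch follows; the toy documents and categories are invented for illustration and are not the dmoztools.net data or the authors' feature set:

```python
import math
from collections import Counter

class TinyNaiveBayes:
    """Minimal multinomial Naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        n = len(labels)
        # Log-priors from class frequencies.
        self.priors = {c: math.log(labels.count(c) / n) for c in self.classes}
        # Per-class word counts.
        self.word_counts = {c: Counter() for c in self.classes}
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(doc.split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, doc):
        def log_score(c):
            total = sum(self.word_counts[c].values())
            s = self.priors[c]
            for w in doc.split():
                # Add-one smoothing so unseen words do not zero out the score.
                s += math.log((self.word_counts[c][w] + 1) / (total + len(self.vocab)))
            return s
        return max(self.classes, key=log_score)

clf = TinyNaiveBayes().fit(
    ["goal match score team", "stock market shares price", "team win league"],
    ["sports", "news", "sports"])
```

In a browser-integrated engine, features extracted from the DOM tree (title, headings, anchor text) would replace the raw whitespace tokenization used here.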

Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification

Procedia PDF Downloads 165
89 Estimated Heat Production, Blood Parameters and Mitochondrial DNA Copy Number of Nellore Bulls with High and Low Residual Feed Intake

Authors: Welder A. Baldassini, Jon J. Ramsey, Marcos R. Chiaratti, Amália S. Chaves, Renata H. Branco, Sarah F. M. Bonilha, Dante P. D. Lanna

Abstract:

With increased production costs, there is a need for animals that are more efficient in terms of meat production. In this context, the role of mitochondrial DNA (mtDNA) in physiological processes in liver, muscle and adipose tissues may account for inter-animal variation in energy expenditure and heat production. The purpose of this study was to investigate whether the amounts of mtDNA in liver, muscle and adipose tissue (subcutaneous and visceral depots) of Nellore bulls are associated with residual feed intake (RFI) and estimated heat production (EHP). Eighteen animals were individually fed in a feedlot for 90 days. RFI values were obtained by regression of dry matter intake (DMI) on average daily gain (ADG) and mid-test metabolic body weight (BW). The animals were classified into low (more efficient) and high (less efficient) RFI groups. The bulls were then randomly distributed to individual pens, where they were given excess feed twice daily to result in 5 to 10% orts for 90 d, with a diet containing 15% crude protein and 2.7 Mcal ME/kg DM. The heart rate (HR) of the bulls was monitored for 4 consecutive days and used to calculate EHP. Electrodes were fitted to the bulls with stretch belts (Polar RS400; Kempele, Finland). To calculate the oxygen pulse (O2P), oxygen consumption was measured using a facemask connected to a gas analyzer (EXHALYZER, ECO Medics, Zurich, Switzerland) while HR was simultaneously recorded over a 15-minute period. Daily oxygen (O2) consumption was calculated by multiplying the volume of O2 per beat by the total daily beats. EHP was calculated by multiplying O2P by the average HR obtained during the 4 days, assuming 4.89 kcal/L of O2; daily EHP was expressed in kilocalories per day per kilogram of metabolic BW (kcal/day/kg BW0.75). Blood samples were collected between days 45 and 90 of the trial period to measure hemoglobin concentration and hematocrit.
The bulls were slaughtered in an experimental slaughterhouse in accordance with current guidelines. Immediately after slaughter, a section of liver, a portion of longissimus thoracis (LT) muscle, a portion of subcutaneous fat (surrounding the LT muscle) and portions of visceral fat (kidney, pelvis and inguinal fat) were collected. Samples of liver, muscle and adipose tissues were used to quantify the mtDNA copy number per cell. The number of mtDNA copies was determined by normalizing the amount of mtDNA against a single-copy nuclear gene (B2M). Means of EHP, hemoglobin and hematocrit of high and low RFI bulls were compared using two-sample t-tests. Additionally, one-way ANOVA was used to compare mtDNA quantification, considering the main effect of RFI group. We found lower EHP (83.047 vs. 97.590 kcal/day/kg BW0.75; P < 0.10), hemoglobin concentration (13.533 vs. 15.108 g/dL; P < 0.10) and hematocrit percentage (39.3 vs. 43.6%; P < 0.05) in low compared to high RFI bulls, respectively, which may be useful traits to identify efficient animals. However, no differences were observed between the mtDNA content in liver, muscle and adipose tissue of Nellore bulls with high and low RFI.
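The EHP computation described above (oxygen pulse times daily heart beats, converted at 4.89 kcal per litre of O2 and scaled by metabolic body weight) is simple arithmetic. A sketch follows; the input values in the example are made up for illustration and are not measurements from the study:

```python
def estimated_heat_production(o2_pulse_ml, mean_hr_bpm, body_weight_kg,
                              kcal_per_l_o2=4.89):
    """EHP in kcal/day/kg BW^0.75, following the O2-pulse method in the abstract.

    o2_pulse_ml: oxygen consumed per heart beat (mL/beat)
    mean_hr_bpm: average heart rate over the monitoring days (beats/min)
    """
    daily_beats = mean_hr_bpm * 60 * 24          # beats per day
    daily_o2_l = o2_pulse_ml * daily_beats / 1000.0  # litres of O2 per day
    return daily_o2_l * kcal_per_l_o2 / body_weight_kg ** 0.75

# Hypothetical inputs: 1.5 mL O2/beat, 70 bpm, 400 kg bull.
ehp = estimated_heat_production(1.5, 70.0, 400.0)
```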

Keywords: bioenergetics, Bos indicus, feed efficiency, mitochondria

Procedia PDF Downloads 249
88 Structural Molecular Dynamics Modelling of FH2 Domain of Formin DAAM

Authors: Rauan Sakenov, Peter Bukovics, Peter Gaszler, Veronika Tokacs-Kollar, Beata Bugyi

Abstract:

FH2 (formin homology-2) domains of several proteins, collectively known as formins, including DAAM, DAAM1 and mDia1, promote G-actin nucleation and elongation. The FH2 domains of these formins exist as oligomers: chain dimerization through ring-structure formation serves as the structural basis for the actin polymerization function of the FH2 domain. A proper single-chain configuration and specific interactions between its various regions are necessary for individual chains to form a dimer functional in G-actin nucleation and elongation. FH1 and WH2 domain-containing formins have been shown to behave as intrinsically disordered proteins. Thus, the aim of this research was to study the structural dynamics of the FH2 domain of DAAM. To investigate its structural features, molecular dynamics simulations of chain A of the FH2 domain of DAAM, solvated in a water box with 50 mM NaCl, were conducted at temperatures from 293.15 to 353.15 K with VMD 1.9.2, NAMD 2.14 and AmberTools 21, using the 2z6e and 1v9d PDB structures of DAAM obtained from the I-TASSER web server. The calcium- and ATP-bound G-actin structure (PDB 3hbt) was used as a reference protein with well-described denaturation dynamics. Topology and parameter information from the CHARMM 2012 additive all-atom force fields for proteins, carbohydrate derivatives, water and ions was used in NAMD 2.14, and the ff19SB force field for proteins in AmberTools 21. The systems were energy-minimized for the first 1000 steps, equilibrated, and production runs of 1 ns were performed in the NPT ensemble using stochastic Langevin dynamics and the particle mesh Ewald method. Our root-mean-square deviation (RMSD) analysis of the molecular dynamics of chain A of the FH2 domain of DAAM revealed only insignificant changes in the total molecular average RMSD values at temperatures from 293.15 to 353.15 K.
In contrast, the total molecular average RMSD values of G-actin showed a considerable increase at 328 K, which corresponds to the denaturation of the G-actin molecule at this temperature and its transition from the native, ordered state to the denatured, disordered state, which is well described in the literature. The RMSD values of the lasso and tail regions of chain A of the FH2 domain of DAAM were higher than the total molecular average RMSD at temperatures from 293.15 to 353.15 K. These regions are functional in intra- and interchain interactions and contain the highly conserved tryptophan residues of the lasso region, the highly conserved GNYMN sequence of the post region, and the amino acids forming the shell of the hydrophobic pocket of the salt bridge between Arg171 and Asp321, which are important for the structural stability and ordered state of the FH2 domain of DAAM and for its function in FH2 domain dimerization. In conclusion, the higher-than-average RMSD values of the lasso and post regions of chain A of the FH2 domain of DAAM may explain the disordered state of the FH2 domain of DAAM at temperatures from 293.15 to 353.15 K. Finally, the absence of a marked transition, in terms of significant changes in average molecular RMSD values between native and denatured states of the FH2 domain of DAAM at temperatures from 293.15 to 353.15 K, makes it possible to attribute these formins to the group of intrinsically disordered proteins rather than to the group of intrinsically ordered proteins such as G-actin.
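The RMSD metric underlying this analysis is the root-mean-square of atomic displacements between a trajectory frame and a reference structure. Tools such as VMD first superimpose the two structures optimally (e.g. with the Kabsch algorithm); the sketch below omits the alignment step and shows only the core formula, with coordinates as plain (x, y, z) tuples:

```python
import math

def rmsd(coords_a, coords_b):
    """RMSD between two equal-length lists of (x, y, z) atomic coordinates.

    Assumes the structures are already superimposed; real pipelines
    perform an optimal rigid-body alignment first.
    """
    assert len(coords_a) == len(coords_b), "coordinate sets must match"
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```

Per-region RMSD values, like those reported for the lasso and post regions, come from applying the same formula to the subset of atoms belonging to that region.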

Keywords: FH2 domain, DAAM, formins, molecular modelling, computational biophysics

Procedia PDF Downloads 137
87 Health Risk Assessment from Potable Water Containing Tritium and Heavy Metals

Authors: Olga A. Momot, Boris I. Synzynys, Alla A. Oudalova

Abstract:

Obninsk is situated in the Kaluga region, 100 km southwest of Moscow, on the left bank of the Protva River. Several enterprises utilizing nuclear energy operate in the town. In regions where radiation-hazardous facilities are located, special attention has traditionally been paid to radioactive gas and aerosol releases into the atmosphere, liquid waste discharges into the Protva River, and groundwater pollution. The municipal intakes involve 34 wells arranged over 15 km in a north-south sequence along the foot of the left slope of the Protva river valley. The northern and southern water intakes are upstream and downstream of the town, respectively. They are river-valley intakes with mixed feeding, i.e., precipitation infiltration accounts for the smaller part of the groundwater, with the greater part formed by inflow from the Protva. The water intakes are maintained by the Protva river runoff, the volume of which depends on the precipitation and the watershed area. Groundwater contamination with tritium was first detected in the sanitary-protective zone of the Institute of Physics and Power Engineering (SRC-IPPE) by Roshydromet researchers implementing the “Program of radiological monitoring in the territory of nuclear industry enterprises”. A comprehensive survey of the SRC-IPPE’s industrial site and adjacent territories revealed that research nuclear reactors and accelerators where tritium targets are used, as well as radioactive waste storages, could be considered potential sources of technogenic tritium. All the above sources are located within the sanitary controlled area of the intakes. Tritium activity in the water of springs and wells near the SRC-IPPE ranges from about 17.4 to 3200 Bq/l. The observed values of tritium activity are below the intervention levels (7600 Bq/l for inorganic compounds and 3300 Bq/l for organically bound tritium). A risk assessment has been performed to estimate the possible effect of these tritium concentrations on human health.
Data on tritium concentrations in piped drinking water were used for the calculations. The activity of 3H amounted to 10.6 Bq/l and corresponded to a risk from such water consumption of ~3·10⁻⁷ year⁻¹. This risk value is close in magnitude to the individual annual death risk for a population living near an NPP (1.6·10⁻⁸ year⁻¹) and at the same time corresponds to the level of tolerable risk (10⁻⁶), falling within the “risk optimization” range, i.e., the sphere in which economically sound measures for exposure risk reduction are planned. To estimate the chemical risk, a physical and chemical analysis was made of the waters from all springs and wells near the SRC-IPPE. The chemical risk from groundwater contamination was estimated according to the US EPA guidance. The risk of carcinogenic diseases from drinking water consumption amounts to 5·10⁻⁵. According to the accepted classification, the health risk in the case of spring water consumption is inadmissible. The compared assessments of the risk associated with tritium exposure, on the one hand, and with the dangerous chemical (e.g., heavy metal) contamination of Obninsk drinking water, on the other, have confirmed that it is the chemical pollutants that are responsible for the health risk.
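Risk figures of this kind follow the standard linear chain from activity concentration through annual intake and a dose coefficient to a risk coefficient. The sketch below shows only that generic chain; the intake volume, dose coefficient and risk coefficient used are generic illustrative assumptions, not the study's parameters, so the result is not expected to reproduce the abstract's ~3·10⁻⁷ value:

```python
def annual_risk(activity_bq_per_l, intake_l_per_year,
                dose_coeff_sv_per_bq, risk_coeff_per_sv):
    """Linearized ingestion risk chain: activity -> annual dose -> annual risk."""
    annual_dose_sv = activity_bq_per_l * intake_l_per_year * dose_coeff_sv_per_bq
    return annual_dose_sv * risk_coeff_per_sv

# Illustrative assumptions only: 730 L/year water intake, a tritiated-water
# ingestion dose coefficient of 1.8e-11 Sv/Bq, and 5.5e-2 risk per Sv.
r = annual_risk(10.6, 730, 1.8e-11, 5.5e-2)
```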

Keywords: radiation-hazardous facilities, water intakes, tritium, heavy metal, health risk

Procedia PDF Downloads 241
86 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada

Authors: Stefan W. Kienzle

Abstract:

The increasing occurrence of extreme weather and climate events has significant impacts on society and causes continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stresses on the public in coping with a changing climate. A climate index breaks daily climate time series down into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends in phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes and trends, the quantification of changes, and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain thresholds (0, ±10, ±20, +25, +30 °C), frost days and their timing, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, and snow and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends, and the slope of the trends was determined using the nonparametric Sen’s slope test.
A Google Maps interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4 - 5 °C in the south and 6 - 7 °C in the north, summers show the weakest warming during the same period, ranging from about 0.5 - 1.5 °C. New agricultural opportunities exist in central regions, where the numbers of heat units and growing degree days are increasing and the number of frost days is decreasing. While the number of days below -20 °C has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the numbers of days with heat waves and with cold spells have both increased two- to four-fold during the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of their region.
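The Mann-Kendall S statistic and Sen's slope used for the trend analysis are both defined over all ordered pairs of observations in a series, which makes them straightforward to sketch (significance testing via the variance of S and its normal approximation is omitted here):

```python
from itertools import combinations
from statistics import median

def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of sign(x_j - x_i) over all pairs i < j.

    Large positive S suggests an increasing trend, large negative a decreasing one.
    """
    return sum((x_j > x_i) - (x_j < x_i) for x_i, x_j in combinations(series, 2))

def sens_slope(series):
    """Sen's slope: median of all pairwise slopes, in units per time step."""
    return median((x_j - x_i) / (j - i)
                  for (i, x_i), (j, x_j) in combinations(enumerate(series), 2))
```

Applied per grid cell to an annual index such as frost days, these two functions yield exactly the trend direction and magnitude maps the abstract describes.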

Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes

Procedia PDF Downloads 93
85 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of effective surface area, pore size and porosity for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to extract information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high-surface-area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers; these additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so that the calculated monolayer uptake and equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants.
These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both the dynamic and measurement residual squared errors. This methodology has been validated by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modelling provides information on the variations of the adsorbent (accessible surface area and micropore volume), adsorbate (molecular areas and volumes) and thermodynamic (Gibbs free energies) properties of the adsorption sites.
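For reference, the constant-parameter Langmuir baseline that the pressure-varying approach generalizes is commonly fitted through the linearization p/q = 1/(K·q_max) + p/q_max. A sketch with synthetic noise-free data follows (the q_max and K values are arbitrary, chosen only to exercise the fit):

```python
def langmuir_uptake(p, q_max, k):
    """Langmuir isotherm: q = q_max * K * p / (1 + K * p)."""
    return q_max * k * p / (1.0 + k * p)

def fit_langmuir_linearized(pressures, uptakes):
    """Estimate (q_max, K) from the linear form p/q = 1/(K q_max) + p/q_max."""
    xs = pressures
    ys = [p / q for p, q in zip(pressures, uptakes)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    q_max = 1.0 / slope
    k = 1.0 / (intercept * q_max)
    return q_max, k

# Synthetic data from q_max = 2.0, K = 0.5 (arbitrary illustrative values):
pressures = [1.0, 2.0, 4.0, 8.0]
uptakes = [langmuir_uptake(p, 2.0, 0.5) for p in pressures]
q_max_fit, k_fit = fit_langmuir_linearized(pressures, uptakes)
```

In the FLS-PVLR setting, the single (q_max, K) pair recovered here is replaced by a smooth sequence of parameter vectors, one per pressure point.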

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 234
84 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit

Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili

Abstract:

Metamaterials cross classic physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulation can be achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Dennis Gabor’s invention: holography. The major difficulty here is the lack of a suitable recording medium, so some enhancements were essential, and the 2D version of bulk metamaterials, the so-called metasurface, was introduced. This new class of interfaces simplifies the problem of the recording medium with the capability of tuning the phase, amplitude and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell’s equations. In this context, integral methods are emerging as an important way to study electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations, but solving such equations becomes more complicated and time-consuming as the structural complexity increases. Here, the use of the equivalent circuits method offers the most scalable way to develop an integral-method formulation. In fact, to ease the resolution of Maxwell’s equations, the method of Generalised Equivalent Circuits was proposed to transfer the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electrical image of the studied structure using the discontinuity-plane paradigm while taking its environment into account, so that the electromagnetic state of the discontinuity plane is described by generalised test functions, which are modelled by virtual sources that do not store energy.
The environmental effects are included through the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements, which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface’s building block consists of a thin gold film, a SiO₂ dielectric spacer and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene’s chemical potential on the unit-cell input impedance. It was found that the variation of the complex conductivity of graphene allows the phase and amplitude of the reflection coefficient to be controlled at each element of the array. From the results obtained here, we determined that the phase modulation is realized by adjusting graphene’s complex conductivity. This modulation is a viable alternative to tuning the phase by varying the antenna length, because it offers full 2π reflection-phase control.
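The dependence of graphene's conductivity on chemical potential, which underlies this tuning mechanism, is usually modelled at terahertz frequencies by the intraband (Drude-like) term of the Kubo conductivity. The sketch below shows only that commonly used model, not the authors' MoM-GEC formulation; the relaxation time and temperature defaults are generic assumptions:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
HBAR = 1.054571817e-34       # reduced Planck constant, J*s
K_B = 1.380649e-23           # Boltzmann constant, J/K

def graphene_intraband_conductivity(omega, mu_c_ev, tau=1e-13, temp=300.0):
    """Intraband (Drude) term of graphene's Kubo sheet conductivity, in siemens.

    sigma = (e^2 kT / (pi hbar^2)) * [mu/kT + 2 ln(1 + exp(-mu/kT))]
            * i / (omega + i/tau)

    omega: angular frequency (rad/s); mu_c_ev: chemical potential (eV);
    tau, temp: assumed relaxation time and temperature.
    """
    kt = K_B * temp
    mu_c = mu_c_ev * E_CHARGE
    filling = mu_c / kt + 2.0 * math.log1p(math.exp(-mu_c / kt))
    prefactor = E_CHARGE ** 2 * kt / (math.pi * HBAR ** 2)
    return prefactor * filling * 1j / (omega + 1j / tau)

# Raising the chemical potential raises the sheet conductivity magnitude,
# which is the electrostatic knob used to shift each element's reflection phase.
omega_1thz = 2 * math.pi * 1e12
sigma_low = graphene_intraband_conductivity(omega_1thz, 0.1)
sigma_high = graphene_intraband_conductivity(omega_1thz, 0.5)
```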

Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain

Procedia PDF Downloads 177
83 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossroads of many issues, such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, when they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. There is therefore an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary covariates” that come from other available spatial products. The model is then generalized on grids where soil parameters are unknown in order to predict them, and the prediction performances are validated using various methods. With the growing demand for soil information at national and global scales and the increasing availability of spatial covariates, national and continental DSM initiatives are continuously multiplying.
This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are being developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and education, especially on the use of uncertainty. Overall, progress is substantial, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues still remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and also promising for providing tools to improve and monitor soil quality in individual countries, in the EU, and at the global level.
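The calibrate-then-predict DSM workflow described above can be sketched with a generic machine-learning regressor; the covariates and the soil property below are entirely synthetic stand-ins for real DSM inputs (climatic grids, elevation models, vegetation indices), used only to show the shape of the pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical covariates approximating soil-forming factors:
# climate, relief (elevation), organisms (NDVI), parent-material class.
climate = rng.uniform(5, 15, n)         # mean annual temperature [deg C]
elevation = rng.uniform(0, 2000, n)     # [m]
ndvi = rng.uniform(0.1, 0.9, n)         # vegetation index
parent = rng.integers(0, 4, n)          # lithology class code

# Synthetic "observed" soil property (e.g. topsoil organic carbon):
# an assumed relation chosen for illustration only.
soc = (20 + 1.5 * climate - 0.004 * elevation + 25 * ndvi
       + 2 * parent + rng.normal(0, 2, n))

X = np.column_stack([climate, elevation, ndvi, parent])
X_tr, X_te, y_tr, y_te = train_test_split(X, soc, random_state=0)

# Calibrate on observed points, then validate on held-out points --
# the same logic is applied to full national grids in practice.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("validation R^2:", round(model.score(X_te, y_te), 2))
```

In a real initiative the held-out validation would typically be replaced by cross-validation or independent probability sampling, and prediction uncertainty would be quantified (e.g. via quantile regression forests).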

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 185
82 Environmental Effect of Empty Nest Households in Germany: An Empirical Approach

Authors: Dominik Kowitzke

Abstract:

Housing construction has direct and indirect environmental impacts, especially those caused by soil sealing and the gray (embodied) energy consumption related to the use of construction materials. Accordingly, the German government introduced regulations limiting additional annual soil sealing. At the same time, in many regions, such as metropolitan areas, the demand for further housing is high and of current concern in the media and politics. It is argued that meeting this demand by making better use of the existing housing supply is more sustainable than the construction of new housing units. In this context, the phenomenon of so-called over-housing in empty nest households seems worthwhile to investigate for its potential to free up living space and thus reduce the need for new housing construction and the related environmental harm. Over-housing occurs if no space adjustment takes place in household lifecycle stages when children move out from home, and the space formerly created for the offspring is from then on under-utilized. Although in some cases the housing space consumption might actually meet households’ equilibrium preferences, space-wise adjustments to the living situation frequently do not take place due to transaction or information costs, habit formation, or government interventions that increase the costs of relocation, such as real estate transfer taxes or tenant protection laws keeping tenure rents below the market price. Moreover, many detached houses are not designed in a way that freed-up space could be rented out in the long term. Findings of this research, based on socio-economic survey data, indeed show a significant difference between the living space of empty nest households and a comparison group of households which never had children.
The approach used to estimate the average difference in living space is a linear regression model regressing the response variable, living space, on a two-category variable distinguishing the two groups of household types, plus further controls. This difference is assumed to be the under-utilized space and is extrapolated to the total number of empty nests in the population. Supporting this result, it is found that households that move after children have left home, despite market frictions impairing the relocation, tend to decrease their living space. In the next step, the total under-utilized space in empty nests is estimated only for areas in Germany with tight housing markets and high construction activity. Under the assumption of full substitutability between housing space in empty nests and space in new dwellings in these locations, it is argued that, in a perfect market with empty nest households consuming their equilibrium quantity of housing space, dwelling construction in the amount of the excess consumption of living space could be saved. This, in turn, would prevent environmental harm, quantified in carbon dioxide equivalence units related to the average construction of detached or multi-family houses. This study thus provides information on the amount of under-utilized space inside dwellings, which is missing from public data, and further estimates the external effect of over-housing in environmental terms.
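A minimal sketch of such a dummy-variable regression, on synthetic data with an assumed 25 m² group difference (the households, controls, and stock figure below are invented for illustration, not the study's actual estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
# Hypothetical household data: 1 = empty nest, 0 = never had children.
empty_nest = rng.integers(0, 2, n)
income = rng.normal(3000, 800, n)   # control: net household income
hh_size = rng.integers(1, 3, n)     # control: current household size

# Assumed data-generating process: empty nests occupy ~25 m^2 more.
space = (60 + 25 * empty_nest + 0.01 * income + 12 * hh_size
         + rng.normal(0, 8, n))

# OLS: living space ~ intercept + empty_nest dummy + controls
X = np.column_stack([np.ones(n), empty_nest, income, hh_size])
beta, *_ = np.linalg.lstsq(X, space, rcond=None)
under_utilised_per_hh = beta[1]   # estimated average difference [m^2]
print(f"estimated under-utilised space: {under_utilised_per_hh:.1f} m^2")

# Extrapolate to a hypothetical stock of 1.2 million empty nests.
total_m2 = under_utilised_per_hh * 1.2e6
print(f"total under-utilised space: {total_m2 / 1e6:.1f} million m^2")
```

The coefficient on the dummy recovers the group difference net of the controls, which is exactly the quantity the abstract extrapolates to the population of empty nests.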

Keywords: empty nests, environment, Germany, households, over housing

Procedia PDF Downloads 173
81 Assessment of Efficiency of Underwater Undulatory Swimming Strategies Using a Two-Dimensional CFD Method

Authors: Dorian Audot, Isobel Margaret Thompson, Dominic Hudson, Joseph Banks, Martin Warner

Abstract:

In competitive swimming, after dives and turns, athletes perform underwater undulatory swimming (UUS), copying marine mammals’ method of locomotion. The body, performing this wave-like motion, accelerates the fluid downstream in its vicinity, generating propulsion with minimal resistance. Through this technique, swimmers can maintain greater speeds than in surface swimming and take advantage of the overspeed granted by the dive (or push-off). Almost all previous work has considered UUS performed at maximum effort. Critical parameters to maximize UUS speed are frequently discussed; however, this does not apply to most races. In only 3 of the 16 individual competitive swimming events are athletes likely to attempt UUS at the greatest possible speed without considering the cost of locomotion. In the other cases, athletes will want to control the speed of their underwater swimming, attempting to maximize speed while keeping energy expenditure appropriate to the duration of the event. Hence, there is a need to understand how swimmers adapt their underwater strategies to optimize speed within the allocated energetic cost. This paper develops a consistent methodology that enables different sets of UUS kinematics to be investigated. These may have different propulsive efficiencies and force generation mechanisms (e.g., force distribution along the body and force magnitude). The developed methodology therefore needs to: (i) provide an understanding of the UUS propulsive mechanisms at different speeds; (ii) investigate the key performance parameters when UUS is not performed solely for maximizing speed; (iii) consistently determine the propulsive efficiency of a UUS technique. The methodology is separated into two distinct parts: kinematic data acquisition and computational fluid dynamics (CFD) analysis.
For the kinematic acquisition, the position of several joints along the body and their sequencing were either obtained by video digitization or by underwater motion capture (Qualisys system). During data acquisition, the swimmers were asked to perform UUS at a constant depth in a prone position (facing the bottom of the pool) at different speeds: maximum effort, 100m pace, 200m pace and 400m pace. The kinematic data were input to a CFD algorithm employing a two-dimensional Large Eddy Simulation (LES). The algorithm adopted was specifically developed in order to perform quick unsteady simulations of deforming bodies and is therefore suitable for swimmers performing UUS. Despite its approximations, the algorithm is applied such that simulations are performed with the inflow velocity updated at every time step. It also enables calculations of the resistive forces (total and applied to each segment) and the power input of the modeled swimmer. Validation of the methodology is achieved by comparing the data obtained from the computations with the original data (e.g.: sustained swimming speed). This method is applied to the different kinematic datasets and provides data on swimmers’ natural responses to pacing instructions. The results show how kinematics affect force generation mechanisms and hence how the propulsive efficiency of UUS varies for different race strategies.
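One common way to express the propulsive efficiency such a methodology computes is the ratio of useful thrust power to total mechanical power input; the numbers below are placeholders for illustration, not measured swimmer data or this study's actual definition:

```python
def propulsive_efficiency(mean_thrust, swim_speed, mean_power_input):
    """eta = useful (thrust) power / total mechanical power input.

    mean_thrust      : cycle-averaged net streamwise force [N]
    swim_speed       : sustained swimming speed [m/s]
    mean_power_input : cycle-averaged power the swimmer delivers [W]
    """
    return mean_thrust * swim_speed / mean_power_input

# Illustrative (assumed) values for one simulated UUS kinematic set:
eta = propulsive_efficiency(mean_thrust=40.0, swim_speed=2.0,
                            mean_power_input=200.0)
print(f"propulsive efficiency: {eta:.2f}")
```

Because the CFD algorithm yields both the segment-wise forces and the power input of the modeled swimmer, this ratio can be evaluated per kinematic dataset and compared across pacing strategies.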

Keywords: CFD, efficiency, human swimming, hydrodynamics, underwater undulatory swimming

Procedia PDF Downloads 221
80 Low-carbon Footprint Diluents in Solvent Extraction for Lithium-ion Battery Recycling

Authors: Abdoulaye Maihatchi Ahamed, Zubin Arora, Benjamin Swobada, Jean-yves Lansot, Alexandre Chagnes

Abstract:

The lithium-ion battery (LiB) is the technology of choice in the development of electric vehicles. But there are still many challenges, including the development of positive electrode materials exhibiting high cyclability, high energy density, and low environmental impact. For the latter, LiBs must be manufactured in a circular approach by developing appropriate strategies to reuse and recycle them. Presently, the recycling of LiBs is carried out by the pyrometallurgical route, but more and more processes implement, or will implement, the hydrometallurgical route or a combination of pyrometallurgical and hydrometallurgical operations. After producing the black mass by mineral processing, the hydrometallurgical process consists in leaching the black mass in order to take up the metals contained in the cathodic material. These metals are then extracted selectively by liquid-liquid extraction, solid-liquid extraction, and/or precipitation stages. Liquid-liquid extraction combined with precipitation/crystallization steps is the most widely implemented operation in LiB recycling processes; it selectively extracts copper, aluminum, cobalt, nickel, manganese, and lithium from the leaching solution and precipitates these metals as high-grade sulfate or carbonate salts. Liquid-liquid extraction consists in contacting an organic solvent with an aqueous feed solution containing several metals, including the targeted metal(s) to extract. The organic phase is immiscible with the aqueous phase. It is composed of an extractant, which extracts the target metals, and a diluent, which is usually aliphatic kerosene produced by the petroleum industry. Sometimes a phase modifier is added to the formulation of the extraction solvent to avoid third-phase formation. The extraction properties of the solvent do not depend only on the chemical structure of the extractant; they may also depend on the nature of the diluent.
Indeed, diluent-diluent interactions can influence, to a greater or lesser extent, the interactions between extractant molecules, in addition to the extractant-diluent interactions. Only a few studies in the literature have addressed the influence of the diluent on the extraction properties, while many have focused on the effect of the extractants. Recently, new low-carbon-footprint aliphatic diluents were produced by catalytic dearomatisation and distillation of bio-based oil. This study investigates the influence of the nature of the diluent on the extraction properties of three extractants towards cobalt, nickel, manganese, copper, aluminum, and lithium: Cyanex® 272 for nickel-cobalt separation, DEHPA for manganese extraction, and Acorga M5640 for copper extraction. The diluents used in the formulation of the extraction solvents are (i) low-odor aliphatic kerosenes produced by the petroleum industry (ELIXORE 180, ELIXORE 230, ELIXORE 205, and ISANE IP 175) and (ii) bio-sourced aliphatic diluents (DEV 2138, DEV 2139, DEV 1763, DEV 2160, DEV 2161 and DEV 2063). After discussing the effect of the diluents on the extraction properties, this contribution will address the development of a low-carbon-footprint process based on the best bio-sourced diluent for the production of high-grade cobalt sulfate, nickel sulfate, manganese sulfate, and lithium carbonate, as well as copper metal.
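The usual figures of merit when comparing diluents in such work are the distribution ratio D and the separation factor β; the equilibrium concentrations below are invented for illustration and do not come from this study:

```python
def distribution_ratio(c_org, c_aq):
    """D = metal concentration in organic phase / aqueous phase at equilibrium."""
    return c_org / c_aq

def separation_factor(d_target, d_impurity):
    """beta = D_target / D_impurity; beta >> 1 means a clean separation."""
    return d_target / d_impurity

# Illustrative (made-up) equilibrium concentrations in mol/L for a
# Cyanex 272-type Co/Ni separation in two hypothetical diluents.
d_co_a = distribution_ratio(0.045, 0.005)   # cobalt, diluent A
d_ni_a = distribution_ratio(0.002, 0.048)   # nickel, diluent A
print("beta(Co/Ni), diluent A:", round(separation_factor(d_co_a, d_ni_a), 1))

d_co_b = distribution_ratio(0.040, 0.010)   # cobalt, diluent B
d_ni_b = distribution_ratio(0.004, 0.046)   # nickel, diluent B
print("beta(Co/Ni), diluent B:", round(separation_factor(d_co_b, d_ni_b), 1))
```

Measuring how D and β shift between petroleum-derived and bio-sourced diluents, extractant by extractant, is one concrete way to quantify the diluent effect the abstract describes.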

Keywords: diluent, hydrometallurgy, lithium-ion battery, recycling

Procedia PDF Downloads 89
79 Recycling Biomass of Constructed Wetlands as Precursors of Electrodes for Removing Heavy Metals and Persistent Pollutants

Authors: Álvaro Ramírez Vidal, Martín Muñoz Morales, Francisco Jesús Fernández Morales, Luis Rodríguez Romero, José Villaseñor Camacho, Javier Llanos López

Abstract:

In recent times, environmental problems have led to the extensive use of biological systems to solve them. Among the different types of biological systems, the use of plants, such as aquatic macrophytes in constructed wetlands (CW) and terrestrial plant species for treating polluted soils and sludge, has gained importance. Though the use of constructed wetlands for wastewater treatment is a well-researched domain, the slowness of pollutant degradation and high biomass production pose some challenges. Plants used in CW participate in different mechanisms for the capture and degradation of pollutants and can also retain some pharmaceutical and personal care products (PPCPs) that are very persistent in the environment. These systems thus present advantages in line with the guidelines published for the transition towards ecological procedures: they are environmentally friendly, consume little energy, and capture atmospheric CO₂. However, the use of CW presents some drawbacks, such as the slowness of pollutant degradation and the production of large amounts of plant biomass, which must be harvested and managed periodically. With this opportunity in mind, it is important to highlight that this residual biomass (of lignocellulosic nature) could be used as the feedstock for the generation of carbonaceous materials using thermochemical transformations, such as slow pyrolysis or hydrothermal carbonization, to produce high-value biomass-derived carbons through sustainable processes (as adsorbents, catalysts, etc.), thereby improving the circular carbon economy. Thus, this work analyzed some PPCPs commonly found in urban wastewater, such as salicylic acid or ibuprofen, to evaluate the remediation carried out by Phragmites australis.
Then, after harvesting, this biomass can be used to synthesize electrodes through hydrothermal carbonization (HTC) and produce high-value biomass-derived carbons with electrocatalytic activity to remove heavy metals and persistent pollutants, promoting circular economy concepts. To do this, biomass was chosen both from a natural environment at high environmental risk, the Daimiel Wetlands National Park in the center of Spain, and from a CW specifically designed to remove pollutants. The research emphasizes the impact of the biomass waste composition and of the synthesis parameters applied during HTC on the electrocatalytic activity. Additionally, this activity can be related to the physicochemical properties of the electrodes, such as porosity, surface functionalization, conductivity, and mass transfer. The data revealed that the carbon materials synthesized have good surface properties (good conductivity and high specific surface area) that enhance the generation of electro-oxidants and promote the removal of PPCPs and of the chemical oxygen demand of polluted waters.

Keywords: constructed wetlands, carbon materials, heavy metals, pharmaceutical and personal care products, hydrothermal carbonization

Procedia PDF Downloads 96
78 Co2e Sequestration via High Yield Crops and Methane Capture for ZEV Sustainable Aviation Fuel

Authors: Bill Wason

Abstract:

143 Crude Palm Oil Coop mills on Sumatra Island are participating in a program to transfer land from defaulted estates to small farmers while improving the sustainability of palm production to allow for biofuel and food production. GCarbon will be working with farmers to transfer technology, fertilizer, and trees to double the yield from the current baseline of 3.5 tons to at least 7 tons of oil per ha (25 tons of fruit bunches). This will be measured via yield comparisons between participant and non-participant farms. We will also capture methane from palm oil mill effluent (POME) through belt press filtering. Residues will be weighed, and a formula will be used to estimate methane emission reductions based on methodologies developed by other researchers. GCarbon will also cover mill ponds with a non-permeable membrane and collect methane for energy or steam production. A system for accelerating methane production involving ozone and electro-flocculation will be tested to intensify methane generation and reduce the time for wastewater treatment. A meta-analysis of research on sweet potatoes and sorghum as rotation crops will look at work in Rio Grande do Sul, Brazil, where 5 ha of test plots of industrial sweet potato have achieved yields of 60 tons and 40 tons per ha from two harvests in one year (100 MT/ha/year). Field trials will be duplicated in Bom Jesus das Selvas, Maranhão, testing varieties of sweet potatoes to measure yields and evaluate disease risks in the very different soil and climate of NE Brazil. Hog methane will also be captured. GCarbon Brazil, Coop Sisal, and an Australian research partner will plant several varieties of agave and use agronomic procedures to obtain yields of 880 MT per ha over 5 years. They will also plant new varieties expected to yield 3500 MT of biomass after 5 years (176-700 MT per ha per year). The goal is to show that agave can adapt to Brazil’s climate without disease problems.
The study will include a field visit to growing sites in Australia where agave is being grown commercially for biofuel production. Researchers will measure the biomass per hectare at various stages in the growing cycle, the sugar content at harvest, and other metrics to confirm that the yield of sugar per ha is up to 10 times greater than that of sugar cane. The study will measure soil carbon and root accumulation in various plots in Australia to confirm the carbon sequestered over 5 years of production. The agave developer estimates that agave sequesters 60-80 MT per ha per year. The three study efforts in three different countries will define a feedstock pathway for jet fuel involving very high yield crops that can produce 2 to 10 times more biomass than current assumptions. This cost-effective and less land-intensive strategy would help meet global jet fuel demand and produce large quantities of food, supporting net-zero aviation and feeding 9-10 billion people by 2050.
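A first-order, IPCC-style estimate of the methane that pond covering and capture would avoid can be sketched as follows; the COD load, B0, and methane conversion factor below are assumed illustrative values, not the project's actual formula or data:

```python
def ch4_from_pome(cod_tonnes, b0=0.25, mcf=0.8):
    """First-order estimate of methane from anaerobic POME ponds.

    cod_tonnes : chemical oxygen demand load [t COD]
    b0         : max CH4-producing capacity [t CH4 / t COD] (IPCC default)
    mcf        : methane conversion factor for open anaerobic ponds (assumed)
    """
    return cod_tonnes * b0 * mcf

GWP_CH4 = 28.0  # 100-year global warming potential of methane (AR5)

cod = 1000.0                  # hypothetical annual COD load, tonnes
ch4 = ch4_from_pome(cod)      # t CH4 that capture would avoid
print(f"avoided CH4: {ch4:.0f} t -> {ch4 * GWP_CH4:.0f} t CO2e")
```

In a verified carbon project these factors would come from the applicable methodology and from measured COD, but the structure of the calculation (COD × B0 × MCF, converted to CO2e by GWP) is the standard one.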

Keywords: zero emission SAF, methane capture, food-fuel integrated refining, new crops for SAF

Procedia PDF Downloads 103
77 Subcontractor Development Practices and Processes: A Conceptual Model for LEED Projects

Authors: Andrea N. Ofori-Boadu

Abstract:

The purpose is to develop a conceptual model of subcontractor development practices and processes that strengthen the integration of subcontractors into construction supply chain systems for improved subcontractor performance on Leadership in Energy and Environmental Design (LEED) certified building projects. The construction management of a LEED project has the important objective of meeting sustainability certification requirements, in addition to the typical project management objectives of cost, time, quality, and safety for traditional projects, which increases the complexity of LEED projects. Considering that construction management organizations rely heavily on subcontractors, poor performance on complex projects such as LEED projects has been largely attributed to the unsatisfactory preparation of subcontractors. Furthermore, the extensive use of unique and non-repetitive short-term contracts limits the full integration of subcontractors into construction supply chains and hinders long-term cooperation and the benefits that could enhance performance on construction projects. Improved subcontractor development practices are needed to better prepare and manage subcontractors so that complex objectives can be met or exceeded. While supplier development and supply chain theories and practices in the manufacturing sector have been extensively investigated to address similar challenges, comparable investigations in the construction sector are scarce. Consequently, the objective of this research is to investigate effective subcontractor development practices and processes to guide construction management organizations in developing a strong network of high-performing subcontractors. Drawing from foundational supply chain and supplier development theories in the manufacturing sector, a mixed interpretivist and empirical methodology is utilized to assess the body of knowledge within the literature for conceptual model development.
A self-reporting survey with five-point Likert-scale items and open-ended questions is administered to 30 construction professionals to estimate their perceptions of the effectiveness of 37 practices, classified into five subcontractor development categories. Data analysis includes descriptive statistics, weighted means, and t-tests that guide the effectiveness ranking of practices and categories. The results inform the proposed three-phase LEED subcontractor development program model, which focuses on preparation, development and implementation, and monitoring. Highly ranked LEED subcontractor pre-qualification, commitment, incentive, evaluation, and feedback practices are perceived as more effective when compared to practices requiring more direct involvement and linkages between subcontractors and construction management organizations. This is attributed to unfamiliarity, conflicting interests, lack of trust, and resource-sharing challenges. With strategic modifications, the recommended practices can be extended to other non-LEED complex projects. Additional research is needed to guide the development of subcontractor development programs that strengthen direct involvement between construction management organizations and their networks of high-performing subcontractors. Insights from this research strengthen theoretical foundations to support future research towards more integrated construction supply chains. In the long term, this would lead to increased performance, profits, and client satisfaction.
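The descriptive-statistics step (means and t-tests used to rank Likert items) can be sketched as follows; the practice names and the 30 simulated responses below are invented stand-ins, not the survey's actual data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 5-point Likert responses from 30 professionals for three
# subcontractor-development practices (1 = not effective, 5 = very effective).
responses = {
    "pre-qualification": rng.integers(3, 6, 30),
    "incentives":        rng.integers(2, 6, 30),
    "resource sharing":  rng.integers(1, 4, 30),
}

MIDPOINT = 3.0  # neutral point of the five-point scale

def one_sample_t(x, mu0=MIDPOINT):
    """t statistic testing whether the mean rating differs from neutral."""
    x = np.asarray(x, float)
    return (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(len(x)))

# Rank practices by mean rating, reporting the t statistic alongside.
ranking = sorted(responses, key=lambda k: responses[k].mean(), reverse=True)
for practice in ranking:
    x = responses[practice]
    print(f"{practice:18s} mean={x.mean():.2f} t={one_sample_t(x):+.2f}")
```

A positive, large t indicates a practice rated significantly above neutral; the study's actual analysis additionally used weighted means and category-level tests.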

Keywords: construction management, general contractor, supply chain, sustainable construction

Procedia PDF Downloads 112
76 Company-Independent Standardization of Timber Construction to Promote Urban Redensification of Housing Stock

Authors: Andreas Schweiger, Matthias Gnigler, Elisabeth Wieder, Michael Grobbauer

Abstract:

Especially in the Alpine region, available areas for new residential development are limited. One possible solution is to exploit the potential of existing settlements. Urban redensification, especially the addition of floors to existing buildings, requires efficient, lightweight constructions with short construction times. This topic is being addressed in the five-year Alpine Building Centre. The focus of this cooperation between Salzburg University of Applied Sciences and RSA GH Studio iSPACE is on transdisciplinary research in the fields of building and energy technology, building envelopes and geoinformation, as well as on the transfer of research results to industry. One development objective is a wood panel construction system with a high degree of prefabrication to optimize construction quality, construction time, and applicability for small and medium-sized enterprises. The system serves as a reliable working basis for mastering the complex building task of redensification. The technical solution is the development of an open system in timber frame and solid wood construction, which is suitable for a maximum two-story addition to residential buildings. The applicability of the system is mainly influenced by the existing building stock. Therefore, timber frame and solid timber construction are combined where necessary to bridge large spans of the existing structure while keeping the dead weight as low as possible. Escape routes are usually constructed in reinforced concrete and are located outside the system boundary. Thus, within the framework of the legal and normative requirements of timber construction, a hybrid construction method for redensification was created. Component structure, load-bearing structure, and detail constructions are developed in accordance with the relevant requirements. The results are directly applicable in individual cases, with the exception of the required verifications.
In order to verify the practical suitability of the developed system, stakeholder workshops are held on the one hand, and the system is applied in the planning of a two-story extension on the other. A company-independent construction standard offers the possibility of cooperation and bundling of capacities in order to handle larger construction volumes in collaboration with several companies. Numerous further developments can take place on the basis of the system, which is published under an open license. The construction system will support planners and contractors from design to execution. In this context, open means publicly published and freely usable and modifiable for one's own purposes, as long as the authorship and any deviations are stated. The companies are provided with a system manual, which contains the system description and an application manual. This manual will facilitate the selection of the correct component cross-sections for specific construction projects by means of complete component and detail specifications. This presentation highlights the initial situation, the motivation, the approach, and especially the technical solution as well as the possibilities for application. After an explanation of the objectives and working methods, the component and detail specifications are presented as work results, together with their application.

Keywords: redensification, SME, urban development, wood building system

Procedia PDF Downloads 111
75 The Governance of Net-Zero Emission Urban Bus Transitions in the United Kingdom: Insight from a Transition Visioning Stakeholder Workshop

Authors: Iraklis Argyriou

Abstract:

The transition to net-zero emission urban bus (ZEB) systems is receiving increased attention in research and policymaking throughout the globe. Most studies in this area tend to address techno-economic aspects and the perspectives of a narrow group of stakeholders, while they largely overlook analysis of current bus system dynamics. This offers limited insight into the types of ZEB governance challenges and opportunities that are encountered in real-world contexts, as well as into some of the immediate actions that need to be taken to set off the transition over the longer term. This research offers a multi-stakeholder perspective into both the technical and non-technical factors that influence ZEB transitions within a particular context, the UK. It does so by drawing from a recent transition visioning stakeholder workshop (June 2023) with key public, private and civic actors of the urban bus transportation system. Using NVivo software to qualitatively analyze the workshop discussions, the research examines the key technological and funding aspects, as well as the short-term actions (over the next five years), that need to be addressed for supporting the ZEB transition in UK cities. It finds that ZEB technology has reached a mature stage (i.e., high efficiency of batteries, motors and inverters), but important improvements can be pursued through greater control and integration of ZEB technological components and systems. In this regard, telemetry, predictive maintenance and adaptive control strategies pertinent to the performance and operation of ZEB vehicles have a key role to play in the techno-economic advancement of the transition. Yet, more pressing gaps were identified in the current ZEB funding regime. Whereas the UK central government supports greater ZEB adoption through a series of grants and subsidies, the scale of the funding and its fragmented nature do not match the needs for a UK-wide transition. 
Funding devolution arrangements (i.e., stable funding settlement deals between the central government and the devolved administrations/local authorities), as well as locally-driven schemes (i.e., congestion charging/workplace parking levy), could then enhance the financial prospects of the transition. As for short-term action, three areas were identified as critical: (1) the creation of whole value chains around the supply, use and recycling of ZEB components; (2) the ZEB retrofitting of existing fleets; and (3) integrated transportation that prioritizes buses as a first-choice, convenient and reliable mode while it simultaneously reduces car dependency in urban areas. Taken together, the findings point to the need for place-based transition approaches that create a viable techno-economic ecosystem for ZEB development but at the same time adopt a broader governance perspective beyond a ‘net-zero’ and ‘bus sectoral’ focus. As such, multi-actor collaborations and the coordination of wider resources and agency, both vertically across institutional scales and horizontally across transport, energy and urban planning, become fundamental features of comprehensive ZEB responses. The lessons from the UK case can inform a broader body of empirical contextual knowledge of ZEB transition governance within domestic political economies of public transportation.

Keywords: net-zero emission transition, stakeholders, transition governance, UK, urban bus transportation

Procedia PDF Downloads 76
74 Analyzing the Heat Transfer Mechanism in a Tube Bundle Air-PCM Heat Exchanger: An Empirical Study

Authors: Maria De Los Angeles Ortega, Denis Bruneau, Patrick Sebastian, Jean-Pierre Nadeau, Alain Sommier, Saed Raji

Abstract:

Phase change materials (PCMs) present attractive features that make them a passive solution for thermal comfort in buildings during summertime. They show a large storage capacity per volume unit in comparison with other structural materials like bricks or concrete. If their use is matched with the peak load periods, they can contribute to the reduction of the primary energy consumption related to cooling applications. Despite these promising characteristics, they present some drawbacks. Commercial PCMs, such as paraffins, have a low thermal conductivity, which affects the overall performance of the system. In some cases, the material can be enhanced by adding other elements that improve the conductivity, but in general, a design of the unit that optimizes the thermal performance is sought. The material selection is the departing point during the design stage, and it does not leave much room for optimization. The PCM melting point depends highly on the atmospheric characteristics of the building location; the selection must lie between the maximum and the minimum temperatures reached during the day. The geometry of the PCM container and the geometrical distribution of these containers are design parameters as well. They significantly affect the heat transfer, and therefore these phenomena must be studied exhaustively. During its lifetime, an air-PCM unit in a building must cool down the place during daytime, while the melting of the PCM occurs. At night, the PCM must be regenerated to be ready for subsequent uses. When the system is not in service, a minimal amount of thermal exchange is desired. The aforementioned functions result in the presence of both sensible and latent heat storage and release; hence, different types of mechanisms drive the heat transfer phenomena. An experimental test was designed to study the heat transfer phenomena occurring in a circular tube bundle air-PCM exchanger.
An in-line arrangement was selected as the geometrical distribution of the containers. To allow visual identification, the container material and a section of the test bench were transparent. Instruments were placed on the bench for measuring temperature and velocity. The PCM properties were also available through differential scanning calorimetry (DSC) tests. The evolution of the temperature during both cycles, melting and solidification, was obtained. The results showed some phenomena at a local level (tubes) and at an overall level (exchanger). Conduction and convection appeared as the main heat transfer mechanisms. From these results, two approaches to analyze the heat transfer were followed. The first approach described the phenomena in a single tube as a series of thermal resistances, where purely conduction-controlled heat transfer was assumed in the PCM. For the second approach, the temperature measurements were used to find some significant dimensionless numbers and parameters, such as the Stefan, Fourier and Rayleigh numbers, and the melting fraction. These approaches allowed us to identify the heat transfer phenomena during both cycles. The presence of natural convection during melting could be inferred from the influence of the Rayleigh number on the correlations obtained.
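The dimensionless groups named above can be computed directly from the PCM's thermophysical properties. A minimal sketch follows; the property values are hypothetical placeholders, not the DSC-measured data from this study:

```python
import math

# Illustrative thermophysical properties for a paraffin-like PCM
# (hypothetical values, not the study's measured data).
cp = 2200.0      # specific heat, J/(kg*K)
L_f = 180e3      # latent heat of fusion, J/kg
k = 0.2          # thermal conductivity, W/(m*K)
rho = 800.0      # density, kg/m^3
beta = 1e-3      # thermal expansion coefficient, 1/K
nu = 5e-6        # kinematic viscosity of liquid PCM, m^2/s
g = 9.81         # gravitational acceleration, m/s^2

D = 0.04         # tube diameter (characteristic length), m
dT = 8.0         # driving temperature difference, K
t = 3600.0       # elapsed time, s

alpha = k / (rho * cp)                    # thermal diffusivity, m^2/s

Ste = cp * dT / L_f                       # Stefan: sensible vs latent heat
Fo = alpha * t / D**2                     # Fourier: dimensionless time
Ra = g * beta * dT * D**3 / (nu * alpha)  # Rayleigh: buoyancy vs diffusion

print(f"Ste = {Ste:.3f}, Fo = {Fo:.3f}, Ra = {Ra:.2e}")
```

A Rayleigh number in the 10⁶-10⁷ range, as here, is consistent with natural convection playing a role during melting.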

Keywords: phase change materials, air-PCM exchangers, convection, conduction

Procedia PDF Downloads 182
73 Electrochemical Activity of NiCo-GDC Cermet Anode for Solid Oxide Fuel Cells Operated in Methane

Authors: Kamolvara Sirisuksakulchai, Soamwadee Chaianansutcharit, Kazunori Sato

Abstract:

Solid oxide fuel cells (SOFCs) have been considered one of the most efficient large-unit power generators for household and industrial applications. The efficiency of the cell depends mainly on the electrochemical reactions in the anode. The development of anode materials has been intensely studied to achieve higher kinetic rates of the redox reactions and lower internal resistance. Recent studies have introduced an efficient cermet (ceramic-metallic) material for its ability in fuel oxidation and oxide conduction. This could expand the reactive site, also known as the triple-phase boundary (TPB), thus increasing the overall performance. In this study, a bimetallic catalyst Ni₀.₇₅Co₀.₂₅Oₓ was combined with Gd₀.₁Ce₀.₉O₁.₉₅ (GDC) to be used as a cermet anode (NiCo-GDC) for an anode-supported type SOFC. The synthesis of Ni₀.₇₅Co₀.₂₅Oₓ was carried out by ball milling NiO and Co₃O₄ powders in ethanol and calcining at 1000 °C. The Gd₀.₁Ce₀.₉O₁.₉₅ was prepared by a urea co-precipitation method. Precursors of Gd(NO₃)₃·6H₂O and Ce(NO₃)₃·6H₂O were dissolved in distilled water with the addition of urea and subsequently heated. The heated mixture product was filtered and rinsed thoroughly, then dried and calcined at 800 °C and 1500 °C, respectively. The two powders were combined, followed by pelletization and sintering at 1100 °C, to form an anode support layer. The electrolyte layer and cathode layer were then fabricated. The electrochemical performance was measured from 800 °C to 600 °C in H₂ and from 750 °C to 600 °C in CH₄. The maximum power density at 750 °C in H₂ was 13% higher than in CH₄. The difference in performance was due to higher polarization resistances, as confirmed by the impedance spectra. According to the standard enthalpies, the dissociation energy of the C-H bonds in CH₄ is slightly higher than that of the H-H bond in H₂. The dissociation of CH₄ could be the cause of resistance within the anode material.
The results at lower temperatures showed a descending trend of power density corresponding to the increased polarization resistance, due to lower conductivity as the temperature decreases. The long-term stability was measured at 750 °C in CH₄, monitored at 12-hour intervals. The maximum power density tended to increase gradually with time while the resistances were maintained. This suggests enhanced stability from charge transfer activities in doped ceria due to the Ce⁴⁺ ↔ Ce³⁺ transition at low oxygen partial pressure and high temperature. However, the power density started to drop after 60 h, and the cell potential also dropped from 0.3249 V to 0.2850 V. These phenomena were confirmed by a shifted impedance spectrum indicating a higher ohmic resistance. Observation by FESEM and EDX mapping suggests degradation due to mass transport of ions in the electrolyte, while the anode microstructure was still maintained. In summary, the electrochemical test and a 60 h stability test were achieved with the NiCo-GDC cermet anode. Coke deposition was not detected after operation in CH₄, which confirms the superior properties of the bimetallic cermet anode over typical Ni-GDC.

Keywords: bimetallic catalyst, ceria-based SOFCs, methane oxidation, solid oxide fuel cell

Procedia PDF Downloads 156
72 Modeling Thermal Changes of Urban Blocks in Relation to the Landscape Structure and Configuration in Guilan Province

Authors: Roshanak Afrakhteh, Abdolrasoul Salman Mahini, Mahdi Motagh, Hamidreza Kamyab

Abstract:

Urban Heat Islands (UHIs) are distinctive urban areas characterized by densely populated central cores surrounded by less densely populated peripheral lands. These areas experience elevated temperatures, primarily due to impermeable surfaces and specific land use patterns. The consequences of these temperature variations are far-reaching, impacting the environment and society negatively, leading to increased energy consumption, air pollution, and public health concerns. This paper emphasizes the need for simplified approaches to comprehend UHI temperature dynamics and explains how urban development patterns contribute to land surface temperature variation. To illustrate this relationship, the study focuses on the Guilan Plain, utilizing techniques like principal component analysis and generalized additive models. The research centered on mapping land use and land surface temperature in the low-lying area of Guilan province. Satellite data from Landsat sensors for three different time periods (2002, 2012, and 2021) were employed. Using eCognition software, a spatial unit known as a "city block" was utilized through object-based analysis. The study also applied the normalized difference vegetation index (NDVI) method to estimate land surface radiance. Predictive variables for urban land surface temperature within residential city blocks were identified and categorized as intrinsic (related to the block's structure) and neighboring (related to adjacent blocks) variables. Principal component analysis (PCA) was used to select significant variables, and a generalized additive model (GAM), implemented using R's mgcv package, modeled the relationship between urban land surface temperature and the predictor variables. Notable findings included variations in urban temperature across different years, attributed to environmental and climatic factors.
Block size, shared boundary, mother polygon area, and perimeter-to-area ratio were identified as main variables for the generalized additive regression model. This model showed non-linear relationships, with block size, shared boundary, and mother polygon area positively correlated with temperature, while the perimeter-to-area ratio displayed a negative trend. The discussion highlights the challenges of predicting urban surface temperature and the significance of block size in determining urban temperature patterns. It also underscores the importance of spatial configuration and unit structure in shaping urban temperature patterns. In conclusion, this study contributes to the growing body of research on the connection between land use patterns and urban surface temperature. Block size, along with block dispersion and aggregation, emerged as key factors influencing urban surface temperature in residential areas. The proposed methodology enhances our understanding of parameter significance in shaping urban temperature patterns across various regions, particularly in Iran.
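The NDVI used in the land-surface analysis above is a simple band ratio of near-infrared and red reflectance. A minimal sketch, using hypothetical Landsat reflectance values rather than the study's data:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared (NIR) and
    red surface reflectance; dimensionless, bounded in [-1, 1]."""
    return (nir - red) / (nir + red)

# Hypothetical reflectance values for two city blocks: a vegetated block
# and a built-up (impervious) block.
vegetated = ndvi(nir=0.45, red=0.08)   # dense vegetation -> NDVI near +0.7
built_up = ndvi(nir=0.22, red=0.18)    # impervious surfaces -> NDVI near 0.1

print(f"vegetated: {vegetated:.2f}, built-up: {built_up:.2f}")
```

In LST workflows, NDVI of this kind typically feeds an emissivity estimate before surface temperature is retrieved from the thermal band.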

Keywords: urban heat island, land surface temperature, LST modeling, GAM, Gilan province

Procedia PDF Downloads 77
71 Efficient Utilization of Negative Half Wave of Regulator Rectifier Output to Drive Class D LED Headlamp

Authors: Lalit Ahuja, Nancy Das, Yashas Shetty

Abstract:

LED lighting has been increasingly adopted for vehicles in both domestic and foreign automotive markets. Although this miniaturized technology gives the best light output and low energy consumption, a cost-efficient solution for driving it is the need of the hour. In this paper, we present a methodology for driving the highest-class two-wheeler headlamp with regulator and rectifier (RR) output. Unlike usual LED headlamps, which are battery-driven, a low-cost and highly efficient RR-driven LED driver module (LDM) is proposed. The positive half of the magneto output is regulated and used to charge the battery that powers various peripherals, while conventionally the negative half was used for operating bulb-based exterior lamps. With the advancement of LED-based headlamps, which are driven by the battery, this negative half pulse has remained unused in most vehicles. Our system uses the negative half-wave rectified DC output from the RR to provide constant light output at all engine RPMs. With the negative rectified DC output of the RR, we have the advantage of a pulsating DC input that periodically goes to zero, helping us to generate a constant DC output matched to the required LED load; with changes in RPM, an additional active thermal bypass circuit helps maintain efficiency and limit thermal rise. The methodology uses the negative half-wave output of the RR along with a linear constant-current driver with significantly higher efficiency. Although the RR output has varying frequency and duty cycle at different engine RPMs, the driver is designed to provide constant current to the LEDs with minimal ripple. In LED headlamps, a DC-DC switching regulator is usually used, which tends to be bulky. With linear regulators, we eliminate bulky components and improve the form factor; hence, the solution is both cost-efficient and compact.
Presently, output ripple-free amplitude drivers with fewer components and less complexity are limited to lower-power LED lamps, while current high-efficiency research often focuses on high-power LED applications. This paper presents a method of driving the LED load at both high beam and low beam using the negative half-wave rectified pulsating DC from the RR with a minimum of components, maintaining high efficiency within the thermal limitations. Linear regulators are ordinarily quite inefficient, with efficiencies typically about 40% and reaching as low as 14%, which leads to poor thermal performance. Although they do not require complex and bulky circuitry, powering high-power devices with them is difficult to realise. But with the input being negative half-wave rectified pulsating DC, this efficiency can be improved, since it helps us to generate a constant DC output matched to the LED load, minimising the voltage drop across the linear regulator. Losses are therefore significantly reduced, and efficiency as high as 75% is achieved. With a change in RPM, the DC voltage increases, which can be managed by the active thermal bypass circuitry, resulting in better thermal performance; hence, the use of bulky and expensive heat sinks can be avoided. The methodology thus utilizes the unused negative pulsating DC output of the RR to optimize the utilization of RR output power and provide a cost-efficient solution compared to costly DC-DC drivers.
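The efficiency figures quoted above follow directly from the series-pass behavior of an ideal linear driver: everything above the LED string voltage is dropped, and dissipated, across the pass element. A minimal sketch with hypothetical voltages (not the paper's measurements):

```python
def linear_driver_efficiency(v_led, v_in):
    """Ideal series-pass linear driver: all excess voltage (v_in - v_led)
    is dissipated in the pass element, so efficiency ~ v_led / v_in."""
    if v_in < v_led:
        raise ValueError("input must be at or above the LED string voltage")
    return v_led / v_in

# Illustrative operating points (hypothetical voltages):
print(f"{linear_driver_efficiency(12.0, 16.0):.0%}")  # well-matched input -> 75%
print(f"{linear_driver_efficiency(12.0, 30.0):.0%}")  # large headroom -> 40%
print(f"{linear_driver_efficiency(4.2, 30.0):.0%}")   # worst case -> 14%
```

The design goal described in the abstract is effectively to keep the average headroom (v_in - v_led) small at every RPM, which is what lifts the efficiency toward 75%.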

Keywords: class D LED headlamp, regulator and rectifier, pulsating DC, low cost and highly efficient, LED driver module

Procedia PDF Downloads 68
70 Solid Polymer Electrolyte Membranes Based on Siloxane Matrix

Authors: Natia Jalagonia, Tinatin Kuchukhidze

Abstract:

Polymer electrolytes (PEs) play an important part in electrochemical devices such as batteries and fuel cells. To achieve optimal performance, the PE must maintain high ionic conductivity and mechanical stability at both high and low relative humidity. The polymer electrolyte also needs excellent chemical stability for longevity and robustness. According to the prevailing theory, ionic conduction in polymer electrolytes is facilitated by the large-scale segmental motion of the polymer backbone and primarily occurs in the amorphous regions of the polymer electrolyte. Crystallinity restricts polymer backbone segmental motion and significantly reduces conductivity. Consequently, polymer electrolytes with high conductivity at room temperature have been sought among polymers which have highly flexible backbones and largely amorphous morphology. Interest in polymer electrolytes has also been increased by potential applications of solid polymer electrolytes in high-energy-density solid-state batteries, gas sensors and electrochromic windows. A conductivity of 10⁻³ S/cm is commonly regarded as the necessary minimum value for practical applications in batteries. At present, polyethylene oxide (PEO)-based systems are the most thoroughly investigated, reaching room-temperature conductivities of 10⁻⁷ S/cm in some cross-linked salt-in-polymer systems based on amorphous PEO-polypropylene oxide copolymers. It is widely accepted that amorphous polymers with low glass transition temperatures Tg and high segmental mobility are important prerequisites for high ionic conductivities. Another necessary condition for high ionic conductivity is high salt solubility in the polymer, which is most often achieved by donors such as ether oxygens or imide groups on the main chain or on the side groups of the PE.
It is also well established that lithium ion coordination takes place predominantly in the amorphous domain, and that the segmental mobility of the polymer is an important factor in determining the ionic mobility. Great attention has been paid to PEO-based amorphous electrolytes obtained by the synthesis of comb-like polymers, attaching short ethylene oxide unit sequences to an existing amorphous polymer backbone. The aim of the presented work is to obtain solid polymer electrolyte membranes using PMHS as a matrix. For this purpose, the hydrosilylation reactions of α,ω-bis(trimethylsiloxy)methylhydrosiloxane with allyl triethylene glycol monomethyl ether and vinyltriethoxysilane, at a 1:28:7 ratio of the initial compounds, have been studied in the presence of Karstedt's catalyst, platinum hydrochloric acid (0.1 M solution in THF) and platinum-on-carbon catalyst, in 50% solution in anhydrous toluene. The synthesized oligomers are vitreous liquid products which are well soluble in organic solvents, with specific viscosity ηsp ≈ 0.05-0.06. The synthesized oligomers were analysed by FTIR and ¹H, ¹³C and ²⁹Si NMR spectroscopy. The synthesized polysiloxanes were investigated by wide-angle X-ray, gel-permeation chromatography and DSC analyses. Via sol-gel processes, solid polymer electrolyte membranes have been obtained from the polymer systems doped with lithium trifluoromethylsulfonate (triflate) or lithium bis(trifluoromethylsulfonyl)imide. The dependence of ionic conductivity on temperature and salt concentration was investigated, and the activation energies of conductivity for all obtained compounds were calculated.
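The activation energy mentioned at the end is conventionally extracted from an Arrhenius fit of conductivity against inverse temperature. A minimal sketch with hypothetical conductivity data (not measurements on these membranes):

```python
import math

# Hypothetical ionic-conductivity measurements: (temperature in K, sigma in S/cm).
data = [(303.0, 2.1e-6), (313.0, 5.6e-6), (323.0, 1.4e-5), (333.0, 3.2e-5)]

kB = 8.617e-5  # Boltzmann constant, eV/K

# Arrhenius model: sigma = sigma0 * exp(-Ea / (kB * T))
# => ln(sigma) = ln(sigma0) - (Ea / kB) * (1 / T); fit a line by least squares.
xs = [1.0 / T for T, _ in data]
ys = [math.log(s) for _, s in data]
n = len(data)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
        / sum((x - x_mean) ** 2 for x in xs)

Ea = -slope * kB  # activation energy of conduction, eV
print(f"activation energy ~ {Ea:.2f} eV")
```

A Vogel-Tammann-Fulcher fit is often preferred for amorphous polymer electrolytes near Tg; the linear Arrhenius form above is the simpler first-pass analysis.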

Keywords: synthesis, PMHS, membrane, electrolyte

Procedia PDF Downloads 259
69 Functional Traits and Agroecosystem Multifunctionality in Summer Cover Crop Mixtures and Monocultures

Authors: Etienne Herrick

Abstract:

As an economically and ecologically feasible method for farmers to introduce greater diversity into their crop rotations, cover cropping presents a valuable opportunity for improving the sustainability of food production. Planted in between cash crop growing seasons, cover crops serve to enhance agroecosystem functioning rather than being destined for sale or consumption. In fact, cover crops may hold the capacity to deliver multiple ecosystem functions or services simultaneously (multifunctionality). Building upon this line of research will not only benefit society at present but also support its continued survival through its potential for restoring depleted soils and reducing the need for energy-intensive and harmful external inputs like fertilizers and pesticides. This study utilizes a trait-based approach to explore the influence of inter- and intra-specific interactions in summer cover crop mixtures and monocultures on functional trait expression and ecosystem services. Functional traits that enhance ecosystem services related to agricultural production include height, specific leaf area (SLA), root:shoot ratio, leaf C and N concentrations, and flowering phenology. Ecosystem services include biomass production, weed suppression, reduced N leaching, N recycling, and support of pollinators. Employing a trait-based approach may allow for the elucidation of mechanistic links between plant structure and the resulting delivery of ecosystem services. While relationships between some functional traits and the delivery of particular ecosystem services may be readily apparent through existing ecological knowledge (e.g.
height positively correlating with weed suppression), this study will begin to quantify those relationships so as to gain further understanding of whether and how measurable variation in functional trait expression across cover crop mixtures and monocultures can serve as a reliable predictor of variation in the types and abundances of ecosystem services delivered. Six cover crop species, including legume, grass, and broadleaf functional types, were selected for growth in six mixtures and their component monocultures based upon the principle of trait complementarity. The tricultures (three-way mixtures) each comprise a legume, grass, and broadleaf species, and include cowpea/sudex/buckwheat, sunnhemp/sudex/buckwheat, and chickling vetch/oat/buckwheat combinations; the dicultures contain the same legume and grass combinations as above, without the buckwheat broadleaf. By combining species with expectedly complementary traits (for example, legumes are N suppliers and grasses are N acquirers, creating a nutrient cycling loop), the cover crop mixtures may elicit a broader range of ecosystem services than that provided by a monoculture, though trade-offs could exist. Collecting functional trait data will enable the investigation of the types of interactions driving these ecosystem service outcomes. It also allows for generalizability across a broader range of species than just those selected for this study, which may aid in informing further research efforts exploring species and ecosystem functioning, as well as on-farm management decisions.

Keywords: agroecology, cover crops, functional traits, multifunctionality, trait complementarity

Procedia PDF Downloads 256
68 Simple Finite-Element Procedure for Modeling Crack Propagation in Reinforced Concrete Bridge Deck under Repetitive Moving Truck Wheel Loads

Authors: Rajwanlop Kumpoopong, Sukit Yindeesuk, Pornchai Silarom

Abstract:

Modeling cracks in concrete is complicated by its strain-softening behavior, which requires the use of sophisticated energy criteria of fracture mechanics to assure stable and convergent solutions in finite-element (FE) analysis, particularly for relatively large structures. However, for small-scale structures such as beams and slabs, a simpler approach, which relies on retaining some shear stiffness in the cracking plane, has been adopted in the literature to model the strain-softening behavior of concrete under monotonically increased loading. According to the shear-retaining approach, each element is assumed to be an isotropic material prior to cracking of the concrete. Once an element is cracked, the isotropic element is replaced with an orthotropic element in which the new orthotropic stiffness matrix is formulated with respect to the crack orientation, and a shear transfer factor of 0.5 is used parallel to the crack plane. The shear-retaining approach is adopted in this research to model cracks in an RC bridge deck, with some modifications to take into account the effect of repetitive moving truck wheel loads, as they cause fatigue cracking of concrete. The first modification is the introduction of fatigue tests of concrete and reinforcing steel, and of the Palmgren-Miner linear criterion of cumulative damage, into the conventional FE analysis. For a certain loading, the number of cycles to failure of each concrete or RC element can be calculated from the fatigue or S-N curves of concrete and reinforcing steel. The elements with the minimum number of cycles to failure are the failed elements. For the elements that do not fail, damage is accumulated according to the Palmgren-Miner linear criterion of cumulative damage. The stiffness of each failed element is modified and the procedure is repeated until the deck slab fails. The total number of load cycles to failure of the deck slab can then be obtained, from which the S-N curve of the deck slab can be simulated.
The second modification concerns the shear transfer factor. Moving loads cause continuous rubbing of the crack interfaces, which greatly reduces the shear transfer mechanism. It is therefore conservatively assumed in this study that the analysis is conducted with a shear transfer factor of zero for the case of moving loading. A customized FE program has been developed using the MATLAB software to accommodate these modifications. The developed procedure has been validated against the fatigue test of the 1/6.6-scale AASHTO bridge deck under both fixed-point repetitive loading and moving loading presented in the literature. Results are in good agreement, both for experimental vs. simulated S-N curves and for observed vs. simulated crack patterns. A significant contribution of the developed procedure is a series of S-N relations which can now be simulated at any desired level of cracking, in addition to the experimentally derived S-N relation at the failure of the deck slab. This permits the systematic investigation of crack propagation or deterioration of RC bridge decks, which appears to be useful information for highway agencies seeking to prolong the life of their bridge decks.
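The Palmgren-Miner accumulation step described above can be sketched in a few lines: each stress level contributes the ratio of applied cycles to cycles-to-failure, and the element fails when the sum reaches 1. The S-N values below are hypothetical placeholders, not the concrete or steel fatigue curves used in the study:

```python
def miner_damage(load_history, sn_curve):
    """Palmgren-Miner linear cumulative damage.
    load_history: list of (stress_level, applied_cycles) pairs;
    sn_curve: dict mapping stress_level -> cycles to failure N_i.
    Returns D = sum(n_i / N_i); failure is predicted when D >= 1."""
    return sum(n / sn_curve[level] for level, n in load_history)

# Hypothetical S-N data (cycles to failure at each stress level).
sn_curve = {"high": 1e5, "medium": 1e6, "low": 1e7}
history = [("high", 2e4), ("medium", 3e5), ("low", 1e6)]

D = miner_damage(history, sn_curve)
print(f"accumulated damage D = {D:.2f}")  # element fails when D >= 1
```

In the FE procedure, a check of this kind would run per element per load block, with the element stiffness degraded once D reaches 1.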

Keywords: bridge deck, cracking, deterioration, fatigue, finite-element, moving truck, reinforced concrete

Procedia PDF Downloads 258
67 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry and hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to such flows is investigated at various Reynolds numbers corresponding to different flow regimes. The use of this measuring technique in separated flows is rarely reported in the literature, and most studies evaluating the Reynolds number effect in separated flows rely on numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow in a recirculating laboratory flume at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density of the stream-wise horizontal velocity component.
The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out, for the measured variables, using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness. Besides, the errors obtained in the uncertainty analysis were relatively low, in general. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique and a good agreement was found. The ADV technique proved to be able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flows. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data, to obtain low noise levels, thus decreasing the uncertainty.
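The moving block bootstrap used in the uncertainty analysis resamples contiguous blocks of the record, so short-range serial correlation in the velocity signal is preserved in each replicate. A minimal sketch with a synthetic velocity series (not the Vectrino data):

```python
import random

def moving_block_bootstrap(series, block_len, n_boot, rng):
    """Resample a time series by concatenating randomly chosen overlapping
    blocks of length block_len; returns the bootstrap means."""
    n = len(series)
    blocks = [series[i:i + block_len] for i in range(n - block_len + 1)]
    means = []
    for _ in range(n_boot):
        resample = []
        while len(resample) < n:
            resample.extend(rng.choice(blocks))
        resample = resample[:n]          # trim to the original length
        means.append(sum(resample) / n)
    return means

rng = random.Random(42)
# Synthetic stream-wise velocity record (m/s) with mild serial correlation.
u = [0.30 + 0.02 * ((i % 7) - 3) for i in range(200)]
means = moving_block_bootstrap(u, block_len=10, n_boot=500, rng=rng)
ordered = sorted(means)
lo, hi = ordered[12], ordered[487]       # ~95% percentile interval
print(f"mean u 95% CI: [{lo:.3f}, {hi:.3f}]")
```

The block length is a tuning choice: it should exceed the integral time scale of the turbulence so dependence within blocks is retained.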

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 150
66 Development of Cost Effective Ultra High Performance Concrete by Using Locally Available Materials

Authors: Mohamed Sifan, Brabha Nagaratnam, Julian Thamboo, Keerthan Poologanathan

Abstract:

Ultra high performance concrete (UHPC) is a type of cementitious material known for its exceptional strength, ductility, and durability. However, its production is often associated with high costs due to the significant amount of cementitious materials required and the use of fine powders to achieve the desired strength. The aim of this research is to explore the feasibility of developing cost-effective UHPC mixes using locally available materials. Specifically, the study investigates the use of coarse limestone sand along with other sand types, namely basalt sand, dolomite sand, and river sand, for developing UHPC mixes and evaluating their performance. The study utilises the particle packing model to develop various UHPC mixes; this involves optimising the combination of coarse limestone sand, basalt sand, dolomite sand, and river sand to achieve the desired properties of UHPC. The developed UHPC mixes were then evaluated based on their workability (measured through slump flow and mini slump value), compressive strength (at 7, 28, and 90 days), splitting tensile strength, and microstructural characteristics analysed through scanning electron microscopy (SEM). The results of this study demonstrate that cost-effective UHPC mixes can be developed using locally available materials without the need for silica fume or fly ash. The UHPC mixes achieved impressive compressive strengths of up to 149 MPa at 28 days with a cement content of approximately 750 kg/m³. The mixes also exhibited varying levels of workability, with slump flow values ranging from 550 to 850 mm. Additionally, the inclusion of coarse limestone sand in the mixes effectively reduced the demand for superplasticizer and served as a filler material. By exploring the use of coarse limestone sand and other sand types, this study provides valuable insights into optimising the particle packing model for UHPC production.
The findings highlight the potential to reduce costs associated with UHPC production without compromising its strength and durability. The study collected data on the workability, compressive strength, splitting tensile strength, and microstructural characteristics of the developed UHPC mixes. Workability was measured using slump flow and mini slump tests, while compressive strength and splitting tensile strength were assessed at different curing periods. Microstructural characteristics were analysed through SEM and energy dispersive X-ray spectroscopy (EDS) analysis. The collected data were then analysed and interpreted to evaluate the performance and properties of the UHPC mixes. The research successfully demonstrates the feasibility of developing cost-effective UHPC mixes using locally available materials. The inclusion of coarse limestone sand, in combination with other sand types, shows promising results in achieving high compressive strengths and satisfactory workability. The findings suggest that the use of the particle packing model can optimise the combination of materials and reduce the reliance on expensive additives such as silica fume and fly ash. This research provides valuable insights for researchers and construction practitioners aiming to develop cost-effective UHPC mixes using readily available materials and an optimised particle packing approach.
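Particle packing proportioning for UHPC is often driven by a target cumulative gradation curve such as the modified Andreasen and Andersen equation; the abstract does not specify which formulation was used, so the sketch below, with hypothetical parameters, is illustrative only:

```python
# Modified Andreasen & Andersen target gradation:
#   P(d) = (d^q - d_min^q) / (d_max^q - d_min^q)
# where P(d) is the cumulative volume fraction of particles finer than size d.

def target_passing(d, d_min=0.0001, d_max=4.75, q=0.23):
    """Target cumulative passing fraction at particle size d (mm).
    d_min, d_max and the distribution modulus q are hypothetical values."""
    return (d**q - d_min**q) / (d_max**q - d_min**q)

# Evaluate the target curve at a few common sieve sizes (mm).
for d in (0.075, 0.6, 2.36, 4.75):
    print(f"d = {d:5.3f} mm -> target passing = {target_passing(d):.2f}")
```

Mix design then becomes an optimisation: choose blend proportions of the available sands and powders so the combined gradation deviates as little as possible from this target curve.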

Keywords: cost-effective, limestone powder, particle packing model, ultra high performance concrete

Procedia PDF Downloads 116
65 Promotion of Healthy Food Choices in School Children through Nutrition Education

Authors: Vinti Davar

Abstract:

Introduction: Childhood overweight increases the risk of certain medical and psychological conditions. Millions of school-age children worldwide are affected by serious yet easily treatable and preventable illnesses that inhibit their ability to learn. Healthier children stay in school longer, attend more regularly, learn more, and become healthier and more productive adults. Schools are an important setting for nutrition education because they reach most children, teachers, and parents, and the school years offer a key window for shaping lifetime habits that affect health throughout life. Against this background, an attempt was made to impart nutrition education to school children in the Haryana state of India to promote healthy food choices and to assess the effectiveness of the program. Methodology: The study was completed in two phases. In the first phase, a pre-intervention anthropometric and dietary survey was conducted, the teaching materials for the nutrition intervention program were developed and tested, and the questionnaire was validated. In the second phase, an intervention was implemented in two schools of Kurukshetra, Haryana, for six months through weekly personal visits. A total of 350 children aged 6-12 years were selected; of these, 279 children (153 boys and 126 girls) completed the study. The subjects were divided into four groups, namely underweight, normal, overweight, and obese, based on body mass index-for-age categories. A colorful PowerPoint presentation was used to improve the quality of tiffin, snacks, and meals, emphasizing the daily inclusion of all food groups, especially vegetables, and of fruits at least 3-4 days per week. An extra 20 minutes of daily aerobic exercise was likewise organized and a healthy school environment created. Provision of clean drinking water by school authorities was ensured.
The sale of soft drinks and energy-dense snacks in the school canteen, as well as advertisements for soft drinks and snacks on the school walls, was banned. Post intervention, anthropometric indices and food selections were reassessed. Results: The results of this study reiterate the critical role of nutrition education and promotion in improving healthy food choices among school children. Normal, overweight, and obese children participating in the nutrition education intervention program significantly (p≤0.05) increased their daily consumption of seasonal fruits and vegetables. Fat and oil consumption was significantly reduced among overweight and obese subjects, and fast food intake was controlled among obese children. The nutrition knowledge of school children improved significantly (p≤0.05) from pre- to post-intervention, and a highly significant increase (p≤0.001) was noted in the nutrition attitude score after the intervention in all four groups. Conclusion: This study has shown that a well-planned nutrition education program can improve nutrition knowledge and promote positive changes toward healthy food choices. Such a program inculcates wholesome eating and active lifestyle habits in children and adolescents, which could not only protect them from chronic diseases and early death but also reduce healthcare costs and enhance the quality of life of citizens and thereby nations.

Keywords: children, eating habits, healthy food, obesity, school going, fast foods

Procedia PDF Downloads 205
64 Selective Immobilization of Fructosyltransferase onto Glutaraldehyde Modified Support and Its Application in the Production of Fructo-Oligosaccharides

Authors: Milica B. Veljković, Milica B. Simović, Marija M. Ćorović, Ana D. Milivojević, Anja I. Petrov, Katarina M. Banjanac, Dejan I. Bezbradica

Abstract:

In recent decades, the scientific community has recognized the growing importance of prebiotics, and numerous studies have therefore focused on their economical production, given their low abundance in natural sources. It has been confirmed that prebiotics are a source of energy for probiotics in the gastrointestinal tract (GIT) and enable their proliferation, consequently supporting the normal functioning of the intestinal microbiota. Moreover, the products of their fermentation are short-chain fatty acids (SCFA), which play a key role in maintaining and improving the health not only of the GIT but of the whole organism. Among the confirmed prebiotics, fructooligosaccharides (FOS) are considered interesting candidates for use in a wide range of products in the food industry. They are characterized as low-calorie and non-cariogenic substances that represent an adequate sugar substitute and can be considered suitable for products intended for diabetics. The subject of this research is the production of FOS by transforming sucrose using a fructosyltransferase (FTase) present in the commercial preparation Pectinex® Ultra SP-L, with special emphasis on the development of an FTase immobilization method that enables selective isolation of the enzyme responsible for FOS synthesis from the complex enzymatic mixture. This would provide considerable enzyme purification and allow direct incorporation of the enzyme into different sucrose-based products without the risk that the action of the other hydrolytic enzymes may adversely affect the products' functional characteristics. Accordingly, the possibility of selective immobilization of the enzyme was investigated using a support with primary amino groups, Purolite® A109, previously activated and modified with glutaraldehyde (GA).
In the initial phase of the research, the effects of individual immobilization parameters such as pH, enzyme concentration, and immobilization time were investigated to optimize the process, using support chemically activated with 15% and 0.5% GA to form GA dimers and monomers, respectively. Highly active immobilized preparations (371.8 IU/g of support for the dimer form and 213.8 IU/g of support for the monomer form) were obtained under acidic conditions (pH 4) at an enzyme concentration of 50 mg/g of support after 7 h and 3 h, respectively. These activities indicate that the dimer form of the support was more reactive than the monomer form. Moreover, with the support modified using 15% GA, the ratio of the activity immobilization yields of FTase and pectinase (the dominant component of the enzyme mixture) was 16.45, demonstrating the high feasibility of selectively immobilizing FTase on the modified polystyrene resin. The immobilized preparations were then tested in the FOS synthesis reaction under the determined optimal conditions, and maximum FOS yields of approximately 50% of total carbohydrates in the reaction mixture were recorded after 21 h. Finally, it can be concluded that the examined immobilization method yielded a highly active, stable and, more importantly, refined enzyme preparation that can be further utilized on a larger scale for the development of continuous processes for FOS synthesis, as well as for the modification of different sucrose-based media.
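The selectivity figure quoted above can be expressed as a ratio of activity immobilization yields. The sketch below illustrates the arithmetic only; the individual offered and recovered activity values are hypothetical, since the abstract reports just the final ratio of 16.45.

```python
# Minimal sketch of how immobilization selectivity can be quantified: the
# activity immobilization yield of each enzyme (activity recovered on the
# support relative to activity offered), and the FTase/pectinase yield ratio
# as a selectivity index. All input activity values are hypothetical.

def immobilization_yield(activity_on_support, activity_offered):
    """Fraction of offered enzyme activity recovered on the carrier."""
    return activity_on_support / activity_offered

# Hypothetical activities (IU) before and after immobilization:
ftase_yield = immobilization_yield(329.0, 400.0)      # FTase: 82.25 %
pectinase_yield = immobilization_yield(50.0, 1000.0)  # pectinase: 5 %

selectivity = ftase_yield / pectinase_yield
print(f"FTase/pectinase yield ratio: {selectivity:.2f}")  # -> 16.45
```

A ratio well above 1 means the target enzyme binds preferentially, so the support purifies FTase away from the hydrolytic side activities during immobilization itself.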

Keywords: chemical modification, fructooligosaccharides, glutaraldehyde, immobilization of fructosyltransferase

Procedia PDF Downloads 191
63 Molecular Characterization and Arsenic Mobilization Properties of a Novel Strain IIIJ3-1 Isolated from Arsenic Contaminated Aquifers of Brahmaputra River Basin, India

Authors: Soma Ghosh, Balaram Mohapatra, Pinaki Sar, Abhijeet Mukherjee

Abstract:

The microbial role in arsenic (As) mobilization in the groundwater aquifers of the Brahmaputra river basin (BRB) in India, which are severely threatened by high concentrations of As, remains largely unknown. The present study therefore presents a molecular and ecophysiological characterization of an indigenous bacterial strain, IIIJ3-1, isolated from As-contaminated groundwater of the BRB, and its application in several microcosm set-ups differing in organic carbon (OC) source and terminal electron acceptor (TEA), to understand its role in As dissolution under aerobic and anaerobic conditions. Strain IIIJ3-1 was found to be a new facultatively anaerobic, gram-positive, endospore-forming strain capable of arsenite (As3+) oxidation and dissimilatory arsenate (As5+) reduction. The bacterium exhibited a low genomic (G+C) content (45 mol%). Although its 16S rRNA gene sequence showed a maximum similarity of 99% with Bacillus cereus ATCC 14579(T), the DNA-DNA relatedness of their genomic DNAs was only 49.9%, well below the value recommended to delimit different species. While the abundance of the fatty acids iC17:0 and iC15:0 and of menaquinone (MK) 7 corroborates its taxonomic affiliation with the B. cereus sensu lato group, the presence of hydroxy fatty acids (HFAs), C18:2, MK5, and MK6 marked its uniqueness. Besides being highly As resistant (MTC = 10 mM As3+, 350 mM As5+), metabolically diverse, and an efficient aerobic As3+ oxidizer, it exhibited near-complete dissimilatory reduction of As5+ (1 mM). Among the various carbon sources tested with As5+ as TEA, lactate served as the best electron donor. An aerobic biotransformation assay yielded a lower Km for As3+ oxidation than for As5+ reduction. Arsenic homeostasis was found to be conferred by the presence of the arr, arsB, aioB, and acr3(1) genes.
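The Km comparison above can be read through standard Michaelis-Menten kinetics: a lower Km for As3+ oxidation than for As5+ reduction means the oxidizing activity approaches saturation at lower substrate concentrations. The Km and Vmax values in the sketch below are hypothetical; the abstract reports only their relative order.

```python
# Illustration of what a lower Km implies, using the Michaelis-Menten rate
# equation v = Vmax * S / (Km + S). All numeric values are hypothetical.

def mm_rate(s, vmax, km):
    """Michaelis-Menten reaction rate at substrate concentration s."""
    return vmax * s / (km + s)

km_oxidation, km_reduction = 0.05, 0.50  # mM, hypothetical
vmax = 1.0                               # normalized maximum rate

for s in (0.05, 0.5, 5.0):  # substrate concentrations, mM
    v_ox = mm_rate(s, vmax, km_oxidation)
    v_red = mm_rate(s, vmax, km_reduction)
    print(f"[As] = {s:4.2f} mM: oxidation rate {v_ox:.2f}, reduction rate {v_red:.2f}")
```

At low substrate levels the lower-Km reaction runs proportionally faster, which is why Km is commonly used as an inverse proxy for substrate affinity.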
Scanning electron microscopy (SEM) coupled with energy-dispersive X-ray (EDX) analysis of this bacterium revealed a reduction in cell size upon exposure to As and the formation of As-rich electron-opaque dots following growth with As3+. Incubation of the strain with sterilized sediment collected from BRB aquifers under varying OC, TEA, and redox conditions revealed that it caused the highest As mobilization from the solid to the aqueous phase under anaerobic conditions with lactate and nitrate as electron donor and acceptor, respectively. The co-release of the highest concentrations of oxalic acid, a well-known bioweathering agent, a considerable fold increase in viable cell counts, and SEM-EDX and X-ray diffraction analyses of the sediment after incubation under this condition indicated that As release is a consequence of microbial bioweathering of the minerals. Statistical analysis of element co-release indicated that the release of As was decoupled from that of Fe and Zn. Principal component analysis also revealed a prominent role of nitrate under aerobic and/or anaerobic conditions in As release by strain IIIJ3-1. This study, therefore, is the first to isolate and characterize a strain of the Bacillus cereus sensu lato group from the highly As-contaminated aquifers of the Brahmaputra river basin and to reveal its As mobilization properties.
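The abstract does not detail how the principal component analysis of element release was performed; a minimal numpy-only sketch of such an analysis is given below. The data matrix (rows = microcosm set-ups, columns = released As, Fe, Zn) is entirely synthetic and stands in for the study's measurements.

```python
import numpy as np

# Minimal PCA sketch on a synthetic element-release matrix, assuming the
# usual center-then-eigendecompose workflow. None of these numbers come
# from the study; Fe and Zn are constructed to be correlated while As
# varies independently, mimicking a "decoupled" release pattern.

rng = np.random.default_rng(0)
as_release = rng.normal(10, 3, 20)                 # hypothetical As release
fe_release = rng.normal(5, 1, 20)                  # hypothetical Fe release
zn_release = 0.8 * fe_release + rng.normal(0, 0.1, 20)  # Zn tracks Fe
X = np.column_stack([as_release, fe_release, zn_release])

Xc = X - X.mean(axis=0)                 # center each element column
cov = np.cov(Xc, rowvar=False)          # 3x3 covariance of As, Fe, Zn
eigvals, eigvecs = np.linalg.eigh(cov)  # principal axes (ascending eigenvalues)
explained = eigvals[::-1] / eigvals.sum()
print("variance explained by PC1..PC3:", np.round(explained, 3))
```

Loadings on the leading components (columns of eigvecs) then show which elements vary together; an As axis nearly orthogonal to the Fe/Zn axis is what a decoupled release would look like.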

Keywords: anaerobic microcosm, arsenic-rich electron-opaque dots, arsenic release, Bacillus strain IIIJ3-1

Procedia PDF Downloads 128