Search results for: Laurent Mora
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 75

15 Fabric Softener Deposition on Cellulose Nanocrystals and Cotton Fibers

Authors: Evdokia K. Oikonomou, Nikolay Christov, Galder Cristobal, Graziana Messina, Giovani Marletta, Laurent Heux, Jean-Francois Berret

Abstract:

Fabric softeners are aqueous formulations that contain ~10 wt. % double-tailed cationic surfactants. Here, a formulation in which 50% of the surfactant was replaced with low quantities of natural guar polymers was developed. Thanks to the reduced surfactant quantity, this product has less environmental impact, while the presence of the guars was found to maintain the product’s performance. The objective of this work is to elucidate the effect of the guar polymers on softener deposition and on the adsorption mechanism at the cotton surface. The surfactants in these formulations assemble into broadly distributed (0.1 – 1 µm) vesicles that are stable in the presence of guars and upon dilution. The effect of guars on vesicle adsorption on cotton was first estimated by using cellulose nanocrystals (CNC) as a stand-in for cotton. Dispersing CNC in water makes it possible to follow the interaction between the vesicles, guars, and CNC in the bulk. It was found that guars enhance deposition on CNC and that the vesicles are deposited intact on the fibers, driven by electrostatics. The mechanism of vesicle/guar adsorption on cellulose fibers was identified by quartz crystal microbalance with dissipation monitoring. It was found that the guars increase the quantity of surfactant deposited, in agreement with the bulk results. The structure of the adsorbed surfactant on the fiber surfaces (vesicle or bilayer) was also influenced by the presence of the guars. Deposition studies on cotton fabrics were conducted as well. Attenuated total reflection and scanning electron microscopy were used to study the effect of the polymers on this deposition. Finally, fluorescence microscopy was used to follow the adsorption of surfactant vesicles, labeled with a fluorescent dye, on cotton fabrics in water. It was found that, with or without polymers, the surfactant vesicles adsorb on the fiber while maintaining their vesicular structure in water (supported vesicular bilayer structure).
The guars influence this process. However, upon drying, the vesicles are transformed into bilayers and eventually wrap the fibers (supported lipid bilayer structure). This mechanism is proposed for the adsorption of vesicular conditioners on cotton fibers and can be affected by the presence of polymers.

Keywords: cellulose nanocrystals, cotton fibers, fabric softeners, guar polymers, surfactant vesicles

Procedia PDF Downloads 152
14 Early Age Behavior of Wind Turbine Gravity Foundations

Authors: Janet Modu, Jean-Francois Georgin, Laurent Briancon, Eric Antoinet

Abstract:

The current practice during the repowering phase of wind turbines is deconstruction of existing foundations and construction of new foundations, either to accept larger wind loads or once the foundations have reached the end of their service lives. The ongoing research project FUI25 FEDRE (Fondations d’Eoliennes Durables et REpowering) therefore serves to propose scalable wind turbine foundation designs that allow reuse of the existing foundations. To undertake this research, numerical models and laboratory-scale models are currently being utilized and implemented in the GEOMAS laboratory at INSA Lyon, following instrumentation of a reference wind turbine situated in the northern part of France. Sensors placed within both the foundation and the underlying soil monitor the evolution of stresses from the foundation’s early age to stresses during service. The results from the instrumentation form the basis of validation for both the laboratory and numerical works conducted throughout the project. The study currently focuses on the effect of the coupled Thermo-Hydro-Mechanical-Chemical (THMC) mechanisms that induce stress during the early age of the reinforced concrete foundation, and on scale-factor considerations in the replication of the reference wind turbine foundation at laboratory scale. Using 3D THMC models in the COMSOL Multiphysics software, the numerical analysis performed on both the laboratory-scale and the full-scale foundations simulates thermal deformation, hydration, shrinkage (desiccation and autogenous) and creep, so as to predict the initial damage caused by internal processes during concrete setting and hardening. Results show a prominent effect of early-age properties on the damage potential in full-scale wind turbine foundations. However, a prediction of the damage potential at laboratory scale shows significant differences in early-age stresses in comparison to the full-scale model, depending on the spatial position in the foundation.
In addition to the well-known size effect phenomenon, these differences may contribute to inaccuracies encountered when predicting ultimate deformations of the on-site foundation using laboratory scale models.
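
The early-age thermal history that drives these THMC stresses is often summarized through an Arrhenius equivalent-age (maturity) function. The sketch below is not part of the FEDRE models; it is a minimal illustration of the concept, with a typical apparent activation energy assumed:

```python
import math

def equivalent_age(temps_c, dt_hours, ea=41000.0, t_ref_c=20.0):
    """Arrhenius equivalent age (maturity method, as in ASTM C1074).

    temps_c  : concrete temperature history [deg C], one value per step
    dt_hours : time-step duration [h]
    ea       : apparent activation energy [J/mol] (typical ~40-45 kJ/mol)
    t_ref_c  : reference curing temperature [deg C]
    """
    R = 8.314  # gas constant [J/(mol K)]
    t_ref = t_ref_c + 273.15
    age = 0.0
    for T_c in temps_c:
        T = T_c + 273.15
        age += math.exp(-ea / R * (1.0 / T - 1.0 / t_ref)) * dt_hours
    return age

# A warm early-age history (hydration heat peak) matures faster than
# the 20 degC reference clock:
history = [35.0] * 24              # 24 h held at 35 degC (illustrative)
print(equivalent_age(history, 1.0))  # > 24 h of equivalent age at 20 degC
```

This temperature sensitivity is one reason a small laboratory mock-up, which sheds hydration heat much faster than a massive full-scale foundation, ends up with a different early-age stress state.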

Keywords: cement hydration, early age behavior, reinforced concrete, shrinkage, THMC 3D models, wind turbines

Procedia PDF Downloads 145
13 Teacher-Child Interactions within Learning Contexts in Prekindergarten

Authors: Angélique Laurent, Marie-Josée Letarte, Jean-Pascal Lemelin, Marie-France Morin

Abstract:

This study aims at exploring teacher-child interactions within learning contexts in public prekindergartens in the province of Québec (Canada). It is based on previous research showing that teacher-child interactions in preschool have direct and determining effects on the quality of early childhood education and can directly or indirectly influence child development. However, throughout a typical preschool day, children experience different learning contexts that promote their learning opportunities. Depending on these specific contexts, teacher-child interactions may vary, for example, between free play and shared book reading. Indeed, some studies have found that teacher-directed or child-directed contexts lead to significant variations in teacher-child interactions. This study drew upon both the bioecological and the Teaching Through Interactions frameworks and was conducted through a descriptive and correlational design. Fifteen teachers were recruited to participate in the study. At Time 1, in October, they completed a diary to report the learning contexts they proposed in their classroom during a typical week. At Time 2, seven months later (May), they were videotaped three times during a typical morning class, with two weeks between each recording. The quality of teacher-child interactions was then coded with the Classroom Assessment Scoring System (CLASS) across the contexts identified. This tool measures three main domains of interactions (emotional support, classroom organization, and instructional support) and 10 dimensions scored on a scale from 1 (low quality) to 7 (high quality). Based on the teachers’ reports, five learning contexts were identified: 1) shared book reading, 2) free play, 3) morning meeting, 4) teacher-directed activity (such as craft), and 5) snack. Based on preliminary statistical analyses, little variation was observed across the learning contexts for each domain of the CLASS.
However, the instructional support domain showed lower scores during specific learning contexts, specifically free play and teacher-directed activity. Practical implications for how preschool teachers could foster specific domains of interactions depending on learning contexts to enhance children’s social and academic development will be discussed.

Keywords: teacher practices, teacher-child interactions, preschool education, learning contexts, child development

Procedia PDF Downloads 74
12 Laser Paint Stripping on Large Zones on AA 2024 Based Substrates

Authors: Selen Unaldi, Emmanuel Richaud, Matthieu Gervais, Laurent Berthe

Abstract:

Aircraft are painted with several layers to guarantee their protection from external attack. For aluminum AA 2024-T3 (the metallic structural part of the plane), a protective primer is applied to ensure corrosion protection. On top of this layer, the top coat is applied for aesthetic purposes. During the lifetime of an aircraft, top-coat stripping plays an essential role and must be carried out on average every four years. However, since conventional stripping processes create hazardous waste and require long hours of labor, alternative methods have been investigated. Among them, laser stripping appears as one of the most promising techniques, not only for the reasons mentioned above but also because it can be controlled and monitored. Applying a laser beam from the coated side provides stripping, but the depth of the process must be well controlled in order to prevent damage to the substrate and the anticorrosion primer. Apart from that, thermal effects on the painted layers must be taken into account. As an alternative, we worked on developing a process that uses shock-wave propagation to achieve stripping through mechanical effects, with the beam applied from the substrate side (back face) of the samples. Laser stripping was applied on thickness-specified samples with a thickness deviation of 10-20%. First, the stripping threshold, i.e., the power density at which the top coats first fly off, was determined. After obtaining threshold values, the same power densities were applied to specimens to create large stripped zones with a spot overlap of 10-40%. Layer characteristics were determined on specimens in terms of physicochemical properties and thickness range, both before and after laser stripping, in order to validate the health of the substrate material and the coating properties.
The substrate health is monitored by measuring the roughness of the laser-impacted zones and by free-surface-energy tests (both before and after laser stripping). The Hugoniot Elastic Limit (HEL) is also determined from the VISAR diagnostic on AA 2024-T3 substrates (from the back-face surface velocity measurements). In addition, the coating properties are investigated in terms of adhesion levels and anticorrosion properties (neutral salt spray test). The influence of the polyurethane top-coat thickness is studied in order to define the laser stripping process window for industrial aircraft applications.
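
The HEL extracted from a VISAR trace follows from the standard elastic-precursor relation σ_HEL = ½ ρ₀ c_L u_fs. The sketch below is a minimal illustration of that relation, not the study's processing chain; the density and longitudinal sound speed are typical handbook values for AA 2024, and the precursor free-surface velocity is hypothetical:

```python
def hugoniot_elastic_limit(rho0, c_l, u_fs):
    """Elastic precursor amplitude from a free-surface velocity trace.

    sigma_HEL = 1/2 * rho0 * c_L * u_fs
    rho0 : initial density [kg/m3]
    c_l  : longitudinal (elastic) sound speed [m/s]
    u_fs : free-surface velocity at the elastic precursor [m/s]
    """
    return 0.5 * rho0 * c_l * u_fs

# Typical values for AA 2024 (rho0 ~ 2780 kg/m3, c_L ~ 6400 m/s);
# the 68 m/s precursor velocity is hypothetical, not a measured value.
sigma = hugoniot_elastic_limit(2780.0, 6400.0, 68.0)
print(f"HEL ~ {sigma / 1e6:.0f} MPa")
```

The factor ½ accounts for the velocity doubling that occurs when the compression wave reflects off the free surface monitored by the VISAR.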

Keywords: aircraft coatings, laser stripping, laser adhesion tests, epoxy, polyurethane

Procedia PDF Downloads 45
11 The Interaction of Lay Judges and Professional Judges in French, German and British Labour Courts

Authors: Susan Corby, Pete Burgess, Armin Hoeland, Helene Michel, Laurent Willemez

Abstract:

In German 1st instance labour courts, lay judges always sit with a professional judge and in British and French 1st instance labour courts, lay judges sometimes sit with a professional judge. The lay judges’ main contribution is their workplace knowledge, but they act in a juridical setting where legal norms prevail. Accordingly, the research question is: does the professional judge dominate the lay judges? The research, funded by the Hans-Böckler-Stiftung, is based on over 200 qualitative interviews conducted in France, Germany and Great Britain in 2016-17 with lay and professional judges. Each interview lasted an hour on average, was audio-recorded, transcribed and then analysed using MaxQDA. Status theories, which argue that external sources of (perceived) status are imported into the court, and complementary notions of informational advantage suggest professional judges might exercise domination and control. Furthermore, previous empirical research on British and German labour courts, now some 30 years old, found that professional judges dominated. More recent research on lay judges and professional judges in criminal courts also found professional judge domination. Our findings, however, are more nuanced and distinguish between the hearing and deliberations, and also between the attitudes of judges in the three countries. First, in Germany and Great Britain the professional judge has specialist knowledge and expertise in labour law. In contrast, French professional judges do not study employment law and may only seldom adjudicate on employment law cases. Second, although the professional judge chairs and controls the hearing when he/she sits with lay judges in all three countries, exceptionally in Great Britain lay judges have some latent power as they have to take notes systematically due to the lack of recording technology. Such notes can be material if a party complains of bias, or if there is an appeal. 
Third, as to labour court deliberations: in France, the professional judge alone determines the outcome of the case, but only if the lay judges have been unable to agree at a previous hearing, which occurs in only 20% of cases. In Great Britain and Germany, although the two lay judges and the professional judge have equal votes, the contribution of British lay judges’ workplace knowledge is less important than that of their German counterparts. British lay judges essentially sit only on discrimination cases, where the law, the purview of the professional judge, is complex. They do not routinely sit on unfair dismissal cases, where workplace practices are often a key factor in the decision. Also, British professional judges are less reliant on their lay judges than German professional judges. Whereas the latter are career judges, the former only become professional judges after several years’ experience in the law, and many know, albeit indirectly through their clients, about a wide range of workplace practices. In conclusion, whether the professional judge dominates lay judges in labour courts varies by country, although this is mediated by the attitudes of those interacting.

Keywords: cross-national comparisons, labour courts, professional judges, lay judges

Procedia PDF Downloads 270
10 Dynamic Characterization of Shallow Aquifer Groundwater: A Lab-Scale Approach

Authors: Anthony Credoz, Nathalie Nief, Remy Hedacq, Salvador Jordana, Laurent Cazes

Abstract:

Groundwater monitoring is classically performed through a network of piezometers on industrial sites. Groundwater flow parameters, such as direction and velocity, are deduced from indirect measurements between two or more piezometers. Groundwater sampling is generally done over the whole water column inside each borehole to provide concentration values at each piezometer location. These flow and concentration values give a global ‘static’ image of the potential evolution of a contaminant plume in the shallow aquifer, with large uncertainties in time and space scales and in mass discharge dynamics. The TOTAL R&D Subsurface Environmental team is challenging this classical approach with an innovative, dynamic way of characterizing shallow aquifer groundwater. The current study aims at optimizing the tools and methodologies for (i) direct, multilevel measurement of groundwater velocities in each piezometer and (ii) calculation of the potential flux of dissolved contaminants in the shallow aquifer. Lab-scale experiments were designed to test commercial and R&D tools in a controlled sandbox. Multiphysics modeling was performed, taking into account the Darcy equation in the porous medium and the Navier-Stokes equations in the borehole. The first step of the study focused on groundwater flow at the porous medium/piezometer interface. Large discrepancies between direct flow-rate measurements in the borehole and the Darcy flow rate in the porous medium were characterized during experiments and modeling. The structure and location of the tools in the borehole also affected the results and the uncertainties of the velocity measurement. In parallel, a direct-push tool was tested and gave more accurate results. The second step of the study focused on the mass flux of dissolved contaminants in groundwater. Several active and passive commercial and R&D tools were tested in the sandbox, and reactive transport modeling was performed to validate the experiments at the lab scale.
Some tools will be selected and deployed in field assays to better assess the mass discharge of dissolved contaminants in an industrial site. The long-term subsurface environmental strategy is targeting an in-situ, real-time, remote and cost-effective monitoring of groundwater.
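
The ‘static’ quantities discussed above combine through Darcy’s law: the specific flux q = −K dh/dx, the average pore velocity v = q/nₑ, and the dissolved mass flux J = qC. A minimal sketch with hypothetical sandy-aquifer values (not site data):

```python
def darcy_flux(K, dh_dx):
    """Darcy (specific) flux q = -K * dh/dx  [m/s]."""
    return -K * dh_dx

def seepage_velocity(q, n_e):
    """Average linear (pore) velocity v = q / n_e  [m/s]."""
    return q / n_e

def mass_flux(q, c):
    """Dissolved mass flux per unit aquifer area J = q * C  [kg/(m2 s)]."""
    return q * c

# Illustrative sandy-aquifer values (hypothetical, not measured on site):
K = 1e-4      # hydraulic conductivity [m/s]
i = -0.005    # hydraulic gradient dh/dx (head decreasing along x)
n_e = 0.3     # effective porosity
c = 2e-3      # contaminant concentration [kg/m3]

q = darcy_flux(K, i)          # specific flux
v = seepage_velocity(q, n_e)  # what a multilevel probe would track
J = mass_flux(q, c)           # basis of a mass-discharge estimate
print(q, v, J)
```

Multiplying J by the plume cross-section crossed by the flow gives the mass discharge the field tools aim to monitor directly.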

Keywords: dynamic characterization, groundwater flow, lab-scale, mass flux

Procedia PDF Downloads 136
9 Learning Trajectories of Mexican Language Teachers: A Cross-Cultural Comparative Study

Authors: Alberto Mora-Vazquez, Nelly Paulina Trejo Guzmán

Abstract:

This study examines the learning trajectories of twelve language teachers who were former students of a BA in applied linguistics at a Mexican state university. In particular, the study compares the social, academic and professional trajectories of two groups of teachers: six locally raised and educated, and six repatriated from the U.S. Our interest in undertaking this research lies in the wide variety of student backgrounds we, as professors in the BA program, have witnessed over the years. Ever since the academic program started in 2006, the student population has been highly diverse in terms of English language proficiency, professional orientations and degree of cross-cultural awareness. Such diversity is further evidenced by the ongoing incorporation of transnational students who lived and studied in the United States for a significant period before enrolling in the BA program. This, however, is not an isolated event, as other researchers have reported the same phenomenon in other TESOL-related programs at Mexican universities. It therefore suggests that these students' social and educational experiences are quite different from those of their Mexican-born and educated counterparts. In addition, an informal comparison of the two groups' participation in formal teaching activities at the beginning of their careers suggested that significant differences in teacher training and development needs could also be identified. This issue raised questions about the need to examine the life and learning trajectories of these two groups of student teachers so as to develop an intervention plan aimed at supporting and encouraging their academic and professional advancement based on their particular needs. To achieve this goal, the study combines retrospective life-history research with the analysis of academic documents.
The first approach uses interviews for data collection. Through a narrative life-history interview protocol, teachers were asked about their childhood home context, their language learning and teaching experiences, their stories of studying applied linguistics, and their self-descriptions. For the analysis of participants’ educational outcomes, a wide range of academic records was used, including language proficiency exam results and language teacher training certificates. The analysis revealed marked differences between the two groups of teachers in terms of academic and professional orientations. The locally educated teachers tended to graduate first, to look for further educational opportunities after graduation, to enter the language teaching profession earlier, and to expand their professional development options more than their peers. It is argued that these differences can be explained by their identities, which are shaped by the interplay of influences such as their home context, their previous educational experiences and their cultural background. Implications for language teacher trainers and administrators of applied linguistics academic programs are provided.

Keywords: beginning language teachers, life-history research, Mexican context, transnational students

Procedia PDF Downloads 399
8 On-Farm Biopurification Systems: Fungal Bioaugmentation of Biomixtures For Carbofuran Removal

Authors: Carlos E. Rodríguez-Rodríguez, Karla Ruiz-Hidalgo, Kattia Madrigal-Zúñiga, Juan Salvador Chin-Pampillo, Mario Masís-Mora, Elizabeth Carazo-Rojas

Abstract:

One of the main causes of contamination linked to agricultural activities is the spillage and disposal of pesticides, especially during the loading, mixing or cleaning of agricultural spraying equipment. One improvement in the handling of pesticides is the use of biopurification systems (BPS), simple and cheap degradation devices in which the pesticides are biologically degraded at accelerated rates. The biologically active core of a BPS is the biomixture, which consists of soil pre-exposed to the target pesticide, a lignocellulosic substrate to promote the activity of ligninolytic fungi, and a humic component (peat or compost), mixed at a volumetric proportion of 50:25:25. Considering the known ability of ligninolytic fungi to degrade a wide range of organic pollutants, and the high amount of lignocellulosic waste used in biomixture preparation, the bioaugmentation of biomixtures with these fungi represents an interesting approach to improving them. The present work aimed at evaluating the effect of bioaugmenting rice husk-based biomixtures with the fungus Trametes versicolor on the removal of the insecticide/nematicide carbofuran (CFN), and at optimizing the composition of the biomixture to obtain the best performance in terms of CFN removal and mineralization, reduced formation of transformation products, and decreased residual toxicity of the matrix. The evaluation of several lignocellulosic residues (rice husk, wood chips, coconut fiber, sugarcane bagasse and newsprint) revealed the best colonization by T. versicolor on rice husk. Pre-colonized rice husk was then used in the bioaugmentation of biomixtures also containing soil pre-exposed to CFN and either peat (GTS biomixture) or compost (GCS biomixture).
After spiking with 10 mg/kg CFN, the efficiency of the biomixture was evaluated through a multi-component approach that included monitoring of CFN removal and of the production of CFN transformation products, mineralization of radiolabeled carbofuran (¹⁴C-CFN), and changes in the toxicity of the matrix after the treatment (Daphnia magna acute immobilization test). Estimated half-lives of CFN in the biomixtures were 3.4 d in GTS and 8.1 d in GCS. The transformation products 3-hydroxycarbofuran and 3-ketocarbofuran were detected from the moment of CFN application, but their concentrations subsequently declined. Mineralization of ¹⁴C-CFN was also faster in GTS than in GCS. The toxicological evaluation showed complete toxicity removal in the biomixtures after 48 d of treatment. The composition of the GCS biomixture was optimized using a central composite design and response surface methodology. The design variables were the volumetric content of fungally pre-colonized rice husk and the volumetric compost/soil ratio. According to the response models, maximization of the CFN removal and mineralization rates and minimization of the accumulation of transformation products were obtained with an optimized biomixture of composition 30:43:27 (pre-colonized rice husk:compost:soil), which differs from the 50:25:25 composition commonly employed in BPS. The results suggest that fungal bioaugmentation may enhance the performance of biomixtures in CFN removal. The optimization underlines the importance of assessing new biomixture formulations in order to maximize their performance.
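
Reported half-lives imply first-order removal kinetics, under which k = ln 2 / t₁/₂ and the residual fraction after time t is exp(−kt). A short sketch using the two half-lives quoted above, assuming first-order kinetics throughout the 48 d treatment:

```python
import math

def rate_constant_from_half_life(t_half):
    """First-order rate constant k = ln(2) / t_1/2  [1/d]."""
    return math.log(2.0) / t_half

def residual_fraction(k, t):
    """Remaining fraction C/C0 = exp(-k t) under first-order kinetics."""
    return math.exp(-k * t)

# Half-lives reported in the abstract: 3.4 d (GTS) and 8.1 d (GCS).
for name, t_half in [("GTS", 3.4), ("GCS", 8.1)]:
    k = rate_constant_from_half_life(t_half)
    print(name, round(k, 3), "1/d;",
          "fraction left at 48 d:", round(residual_fraction(k, 48.0), 6))
```

Under this assumption, essentially all the parent compound is gone well before the 48 d mark at which toxicity removal was observed, consistent with the transformation products, rather than CFN itself, governing the late-time toxicity.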

Keywords: bioaugmentation, biopurification systems, degradation, fungi, pesticides, toxicity

Procedia PDF Downloads 286
7 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches

Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys

Abstract:

Reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), which provides interconnection between components, is key to reliability. During the last decades, PCB technologies evolved to sustain and/or fulfill increased original equipment manufacturer (OEM) requirements and specifications: higher densities and better performances, faster time to market and longer lifetime, newer materials and mixed build-ups. From the very beginning of the PCB industry until recently, qualification, experiments, and trial and error were the most popular methods of assessing system (PCB) reliability. Nowadays, OEMs, PCB manufacturers and scientists are working together in a close relationship in order to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize the base materials (laminates, electrolytic copper, …) precisely, in order to understand failure mechanisms and simulate PCB aging under environmental constraints, for example by means of the finite element method. The laminates are woven composites and thus exhibit orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated in this way because of the thickness of the laminate (a few hundred microns). It should be noted that knowledge of the out-of-plane properties is fundamental to investigating the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them.
The methodology has been applied to a laminate used in hyperfrequency space applications in order to obtain its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through hole in a double-sided PCB are performed. The results show the major importance of the out-of-plane properties, and of their temperature dependency, for the lifetime of a printed circuit board. Acknowledgements: The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, and the support of CNES, Thales Alenia Space and Cimulec, are acknowledged.
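
The inverse step, adjusting the unknown resin modulus until the homogenized laminate property matches the measurement, can be illustrated in one dimension with an inverse rule of mixtures and a bisection search. This is a deliberately simplified stand-in for the paper's analytical/numerical homogenization, with hypothetical material values:

```python
def rule_of_mixtures_transverse(E_f, E_m, v_f):
    """Inverse rule of mixtures for the transverse modulus of a ply:
    1/E_T = v_f/E_f + (1 - v_f)/E_m   (all moduli in Pa)
    """
    return 1.0 / (v_f / E_f + (1.0 - v_f) / E_m)

def invert_resin_modulus(E_meas, E_f, v_f, lo=0.1e9, hi=20e9, tol=1e6):
    """Bisection on the resin modulus E_m so that the homogenized
    modulus matches the measured value (E_T grows with E_m)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rule_of_mixtures_transverse(E_f, mid, v_f) < E_meas:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical inputs: E-glass-like fibres (~73 GPa), 50 % fibre
# fraction, and a measured in-plane modulus of 8 GPa.
E_m = invert_resin_modulus(8e9, 73e9, 0.5)
print(f"estimated resin modulus ~ {E_m / 1e9:.2f} GPa")
```

The actual study identifies the resin properties against a full 3D finite element model of the weave; the bisection idea, matching a forward homogenization to a measurement, is the same.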

Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites

Procedia PDF Downloads 172
6 Shock-Induced Densification in Glass Materials: A Non-Equilibrium Molecular Dynamics Study

Authors: Richard Renou, Laurent Soulard

Abstract:

Lasers are widely used in glass material processing, from waveguide fabrication to channel drilling. The gradual damage of glass optics under UV lasers is also an important issue to be addressed. Glass materials (including metallic glasses) can undergo permanent densification under laser-induced shock loading. Despite increased interest in the interactions between lasers and glass materials, little is known about the structural mechanisms involved under shock loading. For example, the densification process in silica glasses occurs between 8 GPa and 30 GPa; above 30 GPa, the glass returns to its original density after relaxation. Investigating these unusual mechanisms in silica glass will provide an overall better understanding of glass behaviour. Non-Equilibrium Molecular Dynamics (NEMD) simulations were carried out in order to gain insight into the microscopic structure of silica glass under shock loading. The shock was generated by a piston impacting the glass material at high velocity (from 100 m/s up to 2 km/s). Periodic boundary conditions were used in the directions perpendicular to the shock propagation to model an infinite system; one-dimensional shock propagation was therefore studied. Simulations were performed with the STAMP code developed by the CEA. A very specific structure is observed in silica glass: oxygen atoms around silicon atoms are organized in tetrahedrons, and those tetrahedrons are linked and tend to form rings inside the structure. A significant number of empty cavities is also observed in glass materials. In order to understand how shock loading impacts the overall structure, the tetrahedrons, the rings and the cavities were thoroughly analysed. An elastic behaviour is observed when the shock pressure is below 8 GPa, consistent with the Hugoniot Elastic Limit (HEL) of 8.8 GPa estimated experimentally for silica glasses. Behind the shock front, the ring structure and the cavity distribution are affected.
The ring volume is smaller, and most cavities disappear with increasing shock pressure; the tetrahedral structure, however, is not affected. The elasticity of the glass structure is therefore related to ring shrinking and cavity closing. Above the HEL, the shock pressure is high enough to impact the tetrahedral structure: an increasing number of hexahedrons and octahedrons are formed with increasing pressure, and the large rings break to form smaller ones. The cavities are not impacted further, as most cavities are already closed under an elastic shock. After relaxation of the material, a significant number of hexahedrons and octahedrons is still observed, and most of the cavities remain closed. The overall ring distribution after relaxation is similar to the equilibrium distribution. The densification process is therefore related to two structural mechanisms: a change in the coordination of silicon atoms and cavity closing. To sum up, non-equilibrium molecular dynamics simulations were carried out to investigate silica behaviour under shock loading. Analysing the structure led to interesting conclusions about the elastic and densification mechanisms in glass materials. This work will be completed with a detailed study of the mechanisms occurring above 30 GPa, where no sign of densification is observed after relaxation of the material.
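
The piston velocities quoted above map to shock pressures through the Rankine–Hugoniot momentum jump P = ρ₀ Us up. The sketch below uses an illustrative linear Us–up fit; real silica glass has a markedly non-linear Hugoniot, so the numbers are indicative only:

```python
def shock_pressure(rho0, us, up):
    """Rankine-Hugoniot momentum jump P = rho0 * Us * up  [Pa]
    (pressure ahead of the shock taken as ~0)."""
    return rho0 * us * up

def shock_velocity_linear(c0, s, up):
    """Linear Us-up Hugoniot fit Us = c0 + s * up  [m/s]."""
    return c0 + s * up

# Hypothetical parameters for a silica-like glass (illustrative only;
# the true silica Hugoniot is strongly non-linear):
rho0 = 2200.0          # initial density [kg/m3]
c0, s = 5900.0, 1.0    # assumed Us-up fit coefficients
for up in (100.0, 1000.0, 2000.0):  # piston velocities used in the study
    us = shock_velocity_linear(c0, s, up)
    print(up, "m/s ->", round(shock_pressure(rho0, us, up) / 1e9, 2), "GPa")
```

Even with these rough numbers, the three piston velocities land below the ~8 GPa elastic limit, inside the 8-30 GPa densification window, and above 30 GPa, respectively, spanning the three regimes discussed in the abstract.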

Keywords: densification, molecular dynamics simulations, shock loading, silica glass

Procedia PDF Downloads 201
5 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators

Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy

Abstract:

Once-Through-Steam-Generators (OTSG) are commonly used in the oil-sand industry in the heavy-fuel-oil extraction process. They are composed of three main parts: the burner, and the radiant and convective sections. Natural gas is burned in staged diffusion flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then injected into the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially nitrogen oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ releases. To meet these constraints, OTSG constructors have to rely on more and more advanced tools to study and predict NOₓ emissions. With the increase in computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool for analyzing the combustion and pollutant formation processes. Moreover, to optimize the burner operating conditions with regard to NOₓ emissions, field characterization and measurements are usually carried out. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation schedule constraints. The application of CFD therefore seems more adequate for providing guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG: the commercial software ANSYS Fluent and the open-source software OpenFOAM.
RANS (Reynolds-Averaged Navier–Stokes) equations, closed by the k-epsilon turbulence model and combined with the Eddy Dissipation Concept combustion model, are solved. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two codes are compared and confronted with experimental data as a means of assessing the numerical modelling. Flame temperatures and chemical compositions are used as reference fields for this validation. Results show fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristic Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are pinpointed and correlated with the physics of the flow. CFD is therefore a useful tool for providing insight into the NOₓ emission phenomena in OTSG: sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristic Map can be produced and then used as a guide for field tune-up.
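The strong temperature dependence that such an Emission Characteristic Map captures can be illustrated with the thermal (Zeldovich) NOₓ pathway, whose rate-limiting first step is highly activated. The following minimal Python sketch is not taken from the paper: the Arrhenius rate constant is a commonly quoted fit for the first Zeldovich step, and the radical and N₂ concentrations are purely illustrative placeholders held fixed to isolate the temperature effect.

```python
import math

def zeldovich_no_rate(T, O_conc, N2_conc):
    """Thermal-NO production rate d[NO]/dt ~ 2*k1*[O][N2] (mol cm^-3 s^-1),
    keeping only the rate-limiting first Zeldovich step O + N2 -> NO + N.
    k1 is a commonly quoted Arrhenius fit (cm^3 mol^-1 s^-1), not a value
    from the paper."""
    k1 = 1.8e14 * math.exp(-38370.0 / T)
    return 2.0 * k1 * O_conc * N2_conc

# Illustrative concentrations (mol/cm^3), held fixed across temperatures
O, N2 = 1e-10, 1e-5
for T in (1600.0, 1800.0, 2000.0):
    print(f"T = {T:.0f} K  ->  d[NO]/dt = {zeldovich_no_rate(T, O, N2):.3e}")
```

A 400 K rise in flame temperature increases the thermal-NO rate by roughly two orders of magnitude here, which is why mapping NOₓ against burner operating conditions is so informative.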

Keywords: combustion, computational fluid dynamics, nitrogen oxides emission, once-through-steam-generators

Procedia PDF Downloads 89
4 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Their Degradation Products

Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet

Abstract:

All spent nuclear fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and a hydrocarbon solvent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution) are separated from fission products and minor actinides. During a normal extraction operation, uranium is extracted into the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture, called red oil, when it comes into contact with nitric acid. The formation of this unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives, and heavy-metal nitrate complexes on the other. The decomposition of red oil can lead to a violent explosive thermal runaway. These hazards are at the origin of several accidents, such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions involving TBP and all of its degradation products, and calls for a better knowledge of the underlying chemical phenomena. A simulation tool (Alambic) is currently being developed at IRSN that integrates thermal and kinetic functions related to the deterioration of uranyl nitrates in organic and aqueous phases, but not of the n-butyl phosphates. To include them in the modeling scheme, there is an urgent need to obtain the thermodynamic and kinetic functions governing the deterioration processes in the liquid phase. However, little is known about the thermodynamic properties, such as standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂ complexes. 
In this work, we propose to estimate these thermodynamic properties with quantum chemical methods. In the first part of our project, we focused on the mono-, di-, and tri-butyl phosphate species. Quantum chemical calculations have been performed to study several reactions leading to the formation of mono-(H₂MBP) and di-(HDBP) butyl phosphates and TBP in the gas and liquid phases. In the gas phase, the structures of all species were optimized using the B3LYP density functional with triple-ζ def2-TZVP basis sets on all atoms, and the corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 MPa. Accurate single-point energies were calculated using the efficient local LCCSD(T) method extrapolated to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP, and H₂MBP are finally predicted with an uncertainty of about 15 kJ mol⁻¹. In the second part of the project, we investigated the fundamental properties of the three complexes that contribute most to the thermal runaway, UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂, using the same quantum chemical methods in both the gas and the liquid phase. We will discuss the structures and thermodynamic properties of all these species.
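Once standard formation enthalpies of this kind are available, reaction enthalpies follow directly from Hess's law. The sketch below is purely illustrative: the formation enthalpies of TBP and HDBP are hypothetical placeholders (the actual values are the object of the study and are not reproduced here), and only the water and 1-butanol values are standard literature figures.

```python
def reaction_enthalpy(dHf_products, dHf_reactants):
    """Hess's law: dH_rxn = sum(dHf, products) - sum(dHf, reactants), in kJ/mol."""
    return sum(dHf_products) - sum(dHf_reactants)

# Formation enthalpies in kJ/mol. TBP and HDBP values are HYPOTHETICAL
# placeholders, NOT results from the paper; H2O(l) and 1-butanol(l) are
# standard literature values.
dHf = {"TBP": -1200.0, "HDBP": -1100.0, "H2O": -285.8, "BuOH": -327.3}

# Illustrative hydrolysis step: TBP + H2O -> HDBP + 1-butanol
dH = reaction_enthalpy([dHf["HDBP"], dHf["BuOH"]], [dHf["TBP"], dHf["H2O"]])
print(f"dH_rxn = {dH:.1f} kJ/mol (sign and size depend on the placeholder inputs)")
```

The sign and magnitude of such reaction enthalpies are exactly what a safety tool like Alambic needs to assess the exothermicity of the TBP degradation chain.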

Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis

Procedia PDF Downloads 165
3 Defining a Framework for Holistic Life Cycle Assessment of Building Components by Considering Parameters Such as Circularity, Material Health, Biodiversity, Pollution Control, Cost, Social Impacts, and Uncertainty

Authors: Naomi Grigoryan, Alexandros Loutsioli Daskalakis, Anna Elisse Uy, Yihe Huang, Aude Laurent (Webanck)

Abstract:

In response to the building and construction sectors accounting for a third of all energy demand and emissions, the European Union has introduced new laws and regulations in the construction sector that emphasize material circularity, energy efficiency, biodiversity, and social impact. Existing design tools assess sustainability in early-stage design for products or buildings; however, there is no standardized methodology for measuring the circularity performance of building components. Existing assessment methods for building components focus primarily on carbon footprint but lack the comprehensive analysis required to design for circularity. The research conducted in this paper covers the parameters needed to assess sustainability in the design process of architectural products such as doors, windows, and facades. It maps out a framework for a tool that assists designers with real-time sustainability metrics. Considering the life cycle of building components such as facades, windows, and doors involves the life cycle stages applied to product design and many of the methods used in the life cycle analysis of buildings. The current industry standards of sustainability assessment for metal building components follow cradle-to-grave life cycle assessment (LCA), track Global Warming Potential (GWP), and document the parameters used for an Environmental Product Declaration (EPD). Developed by the Ellen MacArthur Foundation, the Material Circularity Indicator (MCI) is a methodology that uses data from LCAs and EPDs to rate circularity, yielding a value between 0 and 1 where higher values indicate higher circularity. 
Expanding on the MCI with additional indicators such as a Water Circularity Index (WCI), an Energy Circularity Index (ECI), a Social Circularity Index (SCI), and Life Cycle Economic Value (EV), and by calculating biodiversity risk and uncertainty, the assessment of an architectural product's impact can be targeted more specifically based on product requirements, performance, and lifespan. Broadening the scope of LCA calculation for products to incorporate aspects of building design allows product designers to account for the disassembly of architectural components. For example, the Material Circularity Indicator for architectural products such as windows and facades is typically low due to the impact of glass, as 70% of glass ends up in landfills because of damage during the disassembly process. The low MCI can be countered by expanding beyond cradle-to-grave assessment and focusing the design process on disassembly, recycling, and repurposing with the help of real-time assessment tools. Design for Disassembly and urban mining have been integrated within the construction field on small scales as project-based exercises that do not address the entire supply chain of architectural products. By adopting more comprehensive sustainability metrics and incorporating uncertainty calculations, building components can be assessed more accurately with decarbonization and disassembly in mind, addressing the large-scale commercial markets within construction, some of the most significant contributors to climate change.
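As a rough illustration of how the MCI behaves, a simplified form of the Ellen MacArthur Foundation formula can be sketched as follows. This is an assumption-laden sketch, not the framework proposed in the paper: it ignores the recycling-efficiency correction to the linear flow index and assumes a product of average utility (F(X) = 0.9).

```python
def mci(M, V, W, X=1.0):
    """Simplified Material Circularity Indicator (Ellen MacArthur Foundation
    form): MCI = max(0, 1 - LFI * F(X)), with linear flow index
    LFI = (V + W) / (2*M) and utility factor F(X) = 0.9 / X.
    M: product mass, V: virgin feedstock mass, W: unrecoverable waste mass
    (all in the same units). Recycling-efficiency terms are omitted."""
    lfi = (V + W) / (2.0 * M)
    return max(0.0, 1.0 - lfi * 0.9 / X)

# Fully linear product: all virgin input, all landfilled
print(mci(M=100, V=100, W=100))  # -> 0.1, the floor for an average-utility product
# Window whose glass is largely landfilled (hypothetical mass figures)
print(mci(M=100, V=60, W=70))
```

The 0.1 floor for a fully linear product explains why glass-heavy components score so low, and why reclaiming glass at disassembly moves the indicator up directly by shrinking both V and W.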

Keywords: architectural products, early-stage design, life cycle assessment, material circularity indicator

Procedia PDF Downloads 46
2 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Radioanalytical Chemistry Process through Titration-On-A-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy-metal ions such as U(VI), U(IV) or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation with an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current ‘beaker’ method were to reduce the amount of radioactive substance handled by laboratory personnel, to ease the instrumentation adjustability within a glove-box environment, and to allow high-throughput analysis for more cost-effective operations. The measurement technique exploits Taylor-Aris dispersion to create, inside a 200 μm × 5 cm cylindrical microchannel, a linear concentration gradient in less than a second. The proposed analytical methodology relies on actinide complexation with a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; thanks to the addition of a pH-sensitive fluorophore, the neutralization boundary can be visualized in a detection range of 500–600 nm. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical microchannel. This feature simplifies the fabrication and ease of use of the microdevice, as it does not need a complex microchannel network or passive mixers to generate the chemical gradient. 
Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be generated in well under one second, making it a more time-efficient gradient generation process than other source-sink passive diffusion devices. The resulting linear gradient generator was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the microchannel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application which, in the short term, could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
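Once the hydrolysable actinide is masked by oxalate complexation, the endpoint arithmetic of the alkalimetric step reduces to simple 1:1 stoichiometry. A minimal sketch, with hypothetical micro-scale volumes chosen for illustration rather than taken from the paper:

```python
def free_acidity(c_naoh, v_naoh, v_sample):
    """Free nitric acid concentration (mol/L) from the alkalimetric endpoint.
    HNO3 + NaOH -> NaNO3 + H2O, so n(acid) = n(base) at neutralization:
    C_a = C_b * V_b / V_a. Volumes may be in any common unit (e.g. uL)."""
    return c_naoh * v_naoh / v_sample

# Hypothetical micro-scale figures (NOT from the paper): a 2 uL sample
# aliquot neutralized by 4 uL of 1 M NaOH
print(free_acidity(1.0, 4.0, 2.0))  # -> 2.0 M free HNO3
```

On the chip, the titrant "volume" is read off as the position of the neutralization boundary along the linear gradient rather than dispensed from a burette, but the underlying mole balance is the same.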

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 362
1 Transport Hubs as Loci of Multi-Layer Ecosystems of Innovation: Case Study of Airports

Authors: Carolyn Hatch, Laurent Simon

Abstract:

Urban mobility and the transportation industry are undergoing a transformation, shifting from the auto production-consumption model that has dominated since the early 20th century towards new forms of personal and shared multi-modality [1]. This shift is shaped by key forces such as climate change, which has induced changes in production and consumption patterns and efforts to decarbonize and improve transport services through, for instance, the integration of vehicle automation, electrification and mobility sharing [2]. Advanced innovation practices, and platforms for the experimentation and validation of new mobility products and services that are increasingly complex and multi-stakeholder-oriented, are shaping this new world of mobility. Transportation hubs, such as airports, are emblematic of these disruptive forces playing out in the mobility industry. Airports are emerging as the core of innovation ecosystems on and around contemporary mobility issues, and are increasingly recognized as complex public/private nodes operating in many societal dimensions [3,4]. These include urban development, sustainability transitions, digital experimentation, customer experience, infrastructure development and data exploitation (for instance, airports generate massive and often untapped data flows, with significant potential for use, commercialization and social benefit). Yet airport innovation practices have not been well documented in the innovation literature. This paper addresses this gap by proposing a model of airport innovation that aims to equip airport stakeholders to respond to these new and complex innovation needs in practice. 
The methodology involves: 1 – a literature review bringing together key research and theory on airport innovation management, open innovation, and innovation ecosystems in order to evaluate airport practices through an innovation lens; 2 – an international benchmarking of leading airports and their innovation practices, including examples such as Aéroports de Paris, Schiphol in Amsterdam, Changi in Singapore, and others; and 3 – semi-structured interviews with airport managers on key aspects of organizational practice, facilitated through a close partnership with the Airports Council International (ACI), a major stakeholder in this research project. Preliminary results find that the most successful airports are those that have shifted to a multi-stakeholder, platform-ecosystem model of innovation. The recent entrance of new actors into airports (Google, Amazon, Accor, Vinci, Airbnb and others) has forced the opening of organizational boundaries to share and exchange knowledge with a broader set of ecosystem players. This has also led to new forms of governance and intermediation by airport actors to connect complex, highly distributed knowledge, along with new kinds of inter-organizational collaboration, co-creation and collective ideation processes. Leading airports in the case study have demonstrated a unique capacity to bring traditionally siloed activities to “think together”, “explore together” and “act together”, to share data, contribute expertise, and pioneer new governance approaches and collaborative practices. In so doing, they have successfully integrated these many disruptive change pathways and steered their implementation and coordination towards innovative mobility outcomes, with positive societal, environmental and economic impacts. This research has implications for: 1 – innovation theory, 2 – urban and transport policy, and 3 – organizational practice, within the mobility industry and across the economy.

Keywords: airport management, ecosystem, innovation, mobility, platform, transport hubs

Procedia PDF Downloads 153