Search results for: Laurent Birry
22 Protein Feeding Pattern, Casein Feeding, or Milk-Soluble Protein Feeding Did Not Change the Evolution of Body Composition during a Short-Term Weight Loss Program
Authors: Solange Adechian, Michèle Balage, Didier Remond, Carole Migné, Annie Quignard-Boulangé, Agnès Marset-Baglieri, Sylvie Rousset, Yves Boirie, Claire Gaudichon, Dominique Dardevet, Laurent Mosoni
Abstract:
Studies have shown that the timing of protein intake, leucine content, and speed of digestion significantly affect postprandial protein utilization. Our aim was to determine whether lean body mass can be spared during energy restriction by varying the quality and the timing of protein intake. Obese volunteers followed a 6-wk energy-restricted diet. Four groups were compared: casein pulse, casein spread, milk-soluble protein (MSP, = whey) pulse, and MSP spread (n = 10-11 per group). Caseins were the only protein source in the casein groups, and MSP in the MSP groups. Proteins were distributed over four meals per day in the proportions 8:80:4:8% in the pulse groups and 25:25:25:25% in the spread groups. We measured weight, body composition, nitrogen balance, 3-methylhistidine excretion, perception of hunger, plasma parameters, adipose tissue metabolism, and whole-body protein metabolism. Volunteers lost 7.5 ± 0.4 kg of weight, 5.1 ± 0.2 kg of fat, and 2.2 ± 0.2 kg of lean mass, with no difference between groups. In adipose tissue, cell size and mRNA expression of various genes were reduced, with no difference between groups. Hunger perception also never differed between groups. In the last week, owing to a stronger inhibition of protein degradation and despite a weaker stimulation of protein synthesis, the postprandial balance between whole-body protein synthesis and degradation was better with caseins than with MSP. It seems likely that the positive effect of caseins on protein balance occurred only at the end of the experiment.
Keywords: lean body mass, fat mass, casein, whey, protein metabolism
Procedia PDF Downloads 72
21 Surveillance of Adverse Events Following Immunization during New Vaccines Introduction in Cameroon: A Cross-Sectional Study on the Role of Mobile Technology
Authors: Andreas Ateke Njoh, Shalom Tchokfe Ndoula, Amani Adidja, Germain Nguessan Menan, Annie Mengue, Eric Mboke, Hassan Ben Bachir, Sangwe Clovis Nchinjoh, Yauba Saidu, Laurent Cleenewerck De Kiev
Abstract:
Vaccines play a major role in protecting populations globally. Vaccine products are subject to rigorous quality control and approval before use to ensure safety. Even when all actors take the required precautions, some people may still experience adverse events following immunization (AEFI), caused either by the vaccine composition or by an error in its administration. AEFI underreporting is pronounced in low-income settings like Cameroon. The country introduced electronic platforms to strengthen surveillance. With the introduction of several novel vaccines, such as the COVID-19 vaccines and the novel Oral Polio Vaccine type 2 (nOPV2), there was a need to monitor AEFI in the country. A cross-sectional study was conducted from July to December 2022. Data on AEFI per region of Cameroon were reviewed for the past five years, analyzed with MS Excel, and presented as proportions. AEFI reporting was uncommon in Cameroon. With the introduction of novel vaccines in 2021, the health authorities deployed new tools and training to capture cases. The number of AEFI detected almost doubled using the Open Data Kit (ODK) compared with previous platforms, especially following the introduction of the nOPV2 and COVID-19 vaccines. The AEFI rates were 1.9 and 160 per 100,000 administered doses of nOPV2 and COVID-19 vaccines, respectively. This mobile tool captured individual information for people with AEFI from all regions, and the platform helped to identify common AEFI following the use of these new vaccines. The ODK mobile technology was vital in improving AEFI reporting and providing data for monitoring the use of new vaccines in Cameroon.
Keywords: adverse events following immunization, Cameroon, COVID-19 vaccines, nOPV, ODK
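The reported rates follow the standard surveillance calculation of events per 100,000 administered doses; a minimal sketch (the figures in the example are illustrative, not the study's raw counts):

```python
def aefi_rate_per_100k(reported_events: int, doses_administered: int) -> float:
    """AEFI reporting rate, expressed per 100,000 administered doses."""
    return reported_events / doses_administered * 100_000

# Illustrative only: 19 reported events over 1,000,000 doses -> rate of 1.9
rate = aefi_rate_per_100k(19, 1_000_000)
```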
Procedia PDF Downloads 88
20 Building a Comprehensive Repository for Montreal Gamelan Archives
Authors: Laurent Bellemare
Abstract:
After the showcase of traditional Indonesian performing arts at the Vancouver Expo 1986, Canadian universities inherited sets of Indonesian gamelan orchestras and soon began offering courses for music students interested in learning these diverse traditions. Among them, Université de Montréal was offered two sets of Balinese orchestras, a novelty that allowed a community of Montreal gamelan enthusiasts to form and engage with this music. A few generations later, a large body of archives has amassed, framing the history of this niche community’s achievements. This data, scattered across public and private archive collections, comes in various formats: Digital Audio Tape, audio cassettes, Video Home System videotape, digital files, photos, reel-to-reel audiotape, posters, concert programs, letters, TV shows, reports, and more. Attempting to study these documents in order to unearth a chronology of gamelan in Montreal has proven challenging, since no suitable platform for preservation, storage, and research currently exists. These files are therefore hard to find due to their decentralized locations. Additionally, most of the documents in older formats have yet to be digitized. In the case of recent digital files, such as pictures or rehearsal recordings, their locations can be even messier and their quantity overwhelming. Beyond the basic issue of choosing a suitable repository platform, questions of legal rights and methodology arise. For posterity, these documents should nonetheless be digitized, organized, and stored in an easily accessible online repository. This paper aims to underline the various challenges encountered in the early stages of such a project, as well as to suggest ways of overcoming the obstacles to a thorough archival investigation.
Keywords: archival work, archives, Balinese gamelan, Canada, gamelan, Indonesia, Javanese gamelan, Montreal
Procedia PDF Downloads 119
19 Control Strategy for a Solar Vehicle Race
Authors: Francois Defay, Martim Calao, Jean Francois Dassieu, Laurent Salvetat
Abstract:
Electric vehicles are a solution for reducing pollution by using green energy. The Shell Eco-marathon provides rules designed to minimize battery use during the race. The combination of a solar panel, efficient motor control, and race strategy allows a 60 kg vehicle with one pilot to be driven, in the best case, using only solar energy. This paper presents a complete model of a solar vehicle used for the Shell Eco-marathon. The project, called Helios, is a cooperation between undergraduate students, academic institutes, and industrial partners. The prototype is an ultra-energy-efficient vehicle based on a one-square-meter solar panel and a custom-built brushless motor controller to optimize the electrical part. The vehicle is equipped with sensors and an embedded system that provide all the data in real time, in order to evaluate the best strategy for the course. A complete Matlab/Simulink model is used to test the optimal strategy for increasing overall endurance. Experimental results are presented to validate the different parts of the model: mechanical, aerodynamic, electrical, and solar panel. The major contribution of this study is to provide solutions for identifying the model parameters (rolling resistance coefficient, drag coefficient, motor torque coefficient, etc.) by combining experimental results with identification techniques. Once the coefficients are validated, strategies to optimize consumption and average speed can be tested in simulation before being implemented for the race. The paper describes all the simulation and experimental parts and provides results for optimizing the global efficiency of the vehicle. This work was started four years ago, has involved many students in the experimental and theoretical parts, and has increased knowledge of electrically self-sufficient vehicles.
Keywords: electrical vehicle, endurance, optimization, shell eco-marathon
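One common identification technique for the rolling-resistance and drag coefficients mentioned above is a coast-down test: with no traction, the resistive force is linear in the unknowns, so they can be recovered by least squares. A minimal sketch of that idea (the function name, parameters, and numbers are illustrative assumptions, not taken from the Helios project):

```python
import numpy as np

def identify_coastdown(v, a, mass, g=9.81, rho=1.225, area=1.0):
    """Estimate rolling-resistance coefficient Crr and drag coefficient Cd
    from coast-down data, where m*a = -(Crr*m*g + 0.5*rho*Cd*A*v^2).
    The model is linear in (Crr, Cd), so ordinary least squares suffices."""
    v = np.asarray(v, dtype=float)
    f_resist = -mass * np.asarray(a, dtype=float)       # resistive force (N)
    X = np.column_stack([np.full(v.shape, mass * g),    # rolling-resistance term
                         0.5 * rho * area * v ** 2])    # aerodynamic term
    (crr, cd), *_ = np.linalg.lstsq(X, f_resist, rcond=None)
    return crr, cd
```

Given speed and deceleration samples logged by the embedded system, the regression returns both coefficients at once; noisy data simply averages out in the least-squares fit.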
Procedia PDF Downloads 266
18 High-Frequency Monitoring Results of a Piled Raft Foundation under Wind Loading
Authors: Laurent Pitteloud, Jörg Meier
Abstract:
Piled raft foundations represent an efficient and reliable technique for transferring high vertical and horizontal loads to the subsoil. They have been successfully implemented for several high-rise buildings worldwide over the last decades. For the structural design of this foundation type, the stiffnesses of both the piles and the raft have to be determined for the static load cases (e.g. dead load, live load) and the dynamic load cases (e.g. earthquake). In this context, the question often arises as to what proportion of the wind loads should be considered as dynamic loads. A piled raft foundation usually has to be monitored in order to verify the design hypotheses. As an additional benefit, the analysis of this monitoring data may lead to a better understanding of the behaviour of this foundation type for future projects in similar subsoil conditions. If the measurement frequency is high enough, one may also draw conclusions on the effect of wind loading on the piled raft foundation. For a 41-storey office building in Basel, Switzerland, the preliminary design showed that a piled raft foundation was the best solution to satisfy both the design requirements and economic aspects. High-frequency monitoring of the foundation, including pile loads, vertical stresses under the raft, and pore water pressures, was performed over five years. In windy conditions, the analysis of the measurements shows that the pile load increment due to wind consists of a static and a cyclic load term. As the piles and the raft react with different stiffnesses under static and dynamic loading, these measurements are useful for the correct definition of the stiffnesses of future piled raft foundations. This paper outlines the design strategy and the numerical modelling of the aforementioned piled raft foundation. The measurement results are presented and analysed.
Based on the findings, comments and conclusions on the definition of pile and raft stiffnesses for vertical and wind loading are proposed.
Keywords: design, dynamic, foundation, monitoring, pile, raft, wind load
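The static/cyclic split of the wind-induced pile load described above can be illustrated with a simple moving-average decomposition of a load time series (a generic sketch, not the project's actual signal processing):

```python
import numpy as np

def split_static_cyclic(load, window):
    """Split a high-frequency pile-load record into a slowly varying
    ('static') part and a zero-mean cyclic part, using a centred
    moving average of the given window length (in samples)."""
    load = np.asarray(load, dtype=float)
    kernel = np.ones(window) / window
    static = np.convolve(load, kernel, mode="same")  # low-pass component
    cyclic = load - static                           # remaining oscillation
    return static, cyclic
```

Away from the record edges, the static part tracks the mean wind-induced load while the cyclic part isolates the oscillating term that governs the dynamic pile stiffness.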
Procedia PDF Downloads 196
17 Murine Pulmonary Responses after Sub-Chronic Exposure to Environmental Ultrafine Particles
Authors: Yara Saleh, Sebastien Antherieu, Romain Dusautoir, Jules Sotty, Laurent Alleman, Ludivine Canivet, Esperanza Perdrix, Pierre Dubot, Anne Platel, Fabrice Nesslany, Guillaume Garcon, Jean-Marc Lo-Guidice
Abstract:
Air pollution is one of the leading causes of premature death worldwide. Among air pollutants, particulate matter (PM) is a major health risk factor through the induction of cardiopulmonary diseases and lung cancers. PM is composed of coarse, fine, and ultrafine particles (PM10, PM2.5, and PM0.1, respectively). Ultrafine particles are emerging, unregulated pollutants that might have greater toxicity than larger particles since, for a given mass, they are more numerous and consequently have a higher surface area per unit of mass. Our project aims to develop a relevant in vivo model of sub-chronic exposure to atmospheric particles in order to elucidate the specific respiratory impact of ultrafine particles compared with fine particulate matter. Quasi-ultrafine (PM0.18) and fine (PM2.5) particles were collected in the urban industrial zone of Dunkirk in northern France during a 7-month campaign and submitted to physico-chemical characterization. BALB/c mice were then exposed intranasally to 10 µg of PM0.18 or PM2.5 three times a week. After 1 or 3 months of exposure, bronchoalveolar lavages (BAL) were performed and lung tissues were harvested for histological and transcriptomic analyses. The physico-chemical study of the collected particles shows no major difference in elemental and surface chemical composition between PM0.18 and PM2.5. Furthermore, the cytological analyses show that both types of particulate fractions can be internalized in lung cells. However, the cell count in BAL and preliminary transcriptomic data suggest that PM0.18 could be more reactive and induce a stronger lung inflammation in exposed mice than PM2.5. Complementary studies are in progress to confirm these first data and to identify the metabolic pathways more specifically associated with the toxicity of ultrafine particles.
Keywords: environmental pollution, lung effects, mice, ultrafine particles
Procedia PDF Downloads 239
16 Identification of Rare Mutations in Genes Involved in Monogenic Forms of Obesity and Diabetes in Obese Guadeloupean Children through Next-Generation Sequencing
Authors: Lydia Foucan, Laurent Larifla, Emmanuelle Durand, Christine Rambhojan, Veronique Dhennin, Jean-Marc Lacorte, Philippe Froguel, Amelie Bonnefond
Abstract:
In the population of Guadeloupe Island (472,124 inhabitants, 80% of subjects of African descent), overweight and obesity were estimated at 23% and 9%, respectively, among children. A high prevalence of diabetes (~10%) has been reported in the adult population. Nevertheless, no study has investigated the contribution of gene mutations to childhood obesity in this population. We aimed to investigate rare genetic mutations in genes involved in monogenic obesity or diabetes in obese Afro-Caribbean children from Guadeloupe Island using next-generation sequencing. The present investigation included unrelated obese children from a previous study on overweight conducted in Guadeloupe Island in 2013. We sequenced the coding regions of 59 genes involved in monogenic obesity or diabetes. A total of 25 obese schoolchildren (with Z-scores of body mass index [BMI] from 2.0 to 2.8) were screened for rare mutations (non-synonymous, splice-site, or insertion/deletion) in these 59 genes. The mean age of the study population was 12.4 ± 1.1 years. Seventeen children (68%) had insulin resistance (HOMA-IR > 3.16). A family history of obesity (mother or father) was observed in eight children, and three of the accompanying parents presented with type 2 diabetes. None of the children had gonadotropic abnormalities or mental retardation. We detected five rare heterozygous mutations, in four genes involved in monogenic obesity, in five different obese children: the MC4R p.Ile301Thr and SIM1 p.Val326Thrfs*43 mutations, which were pathogenic; the SIM1 p.Ser343Pro and SH2B1 p.Pro90His mutations, which were likely pathogenic; and NTRK2 p.Leu140Phe, which was of uncertain significance. In parallel, we identified seven carriers of mutations in ABCC8 or KCNJ11 (involved in monogenic diabetes) that were of uncertain significance (KCNJ11 p.Val13Met, KCNJ11 p.Val151Met, ABCC8 p.Lys1521Asn, and ABCC8 p.Ala625Val).
Rare pathogenic or likely pathogenic mutations linked to severe obesity were detected in more than 15% of this Afro-Caribbean population at high risk of obesity and type 2 diabetes.
Keywords: childhood obesity, MC4R, monogenic obesity, SIM1
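The insulin-resistance cut-off used above (HOMA-IR > 3.16) rests on the standard HOMA-IR formula; a minimal sketch (the example glucose and insulin values are illustrative, not taken from the study):

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uu_ml: float) -> float:
    """HOMA-IR index: fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uu_ml / 22.5

# Illustrative: glucose 5.0 mmol/L with insulin 15 uU/mL exceeds the 3.16 cut-off
is_insulin_resistant = homa_ir(5.0, 15.0) > 3.16
```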
Procedia PDF Downloads 193
15 Towards Conservation and Recovery of Species at Risk in Ontario: Progress on Recovery Planning and Implementation and an Overview of Key Research Needs
Authors: Rachel deCatanzaro, Madeline Austen, Ken Tuininga, Kathy St. Laurent, Christina Rohe
Abstract:
In Canada, the federal Species at Risk Act (SARA) provides protection for wildlife species at risk and a national legislative framework for the conservation or recovery of species that are listed as endangered, threatened, or special concern under Schedule 1 of SARA. Key aspects of the federal species at risk program include the development of recovery documents (recovery strategies, action plans, and management plans) outlining threats, objectives, and broad strategies or measures for the conservation or recovery of the species; the identification and protection of critical habitat for threatened and endangered species; and working with groups and organizations to implement on-the-ground recovery actions. Environment Canada’s progress on the development of recovery documents and on the identification and protection of critical habitat in Ontario will be presented, along with successes and challenges associated with on-the-ground implementation of recovery actions. In Ontario, Environment Canada is currently involved in several recovery and monitoring programs for at-risk bird species such as the Loggerhead Shrike, Piping Plover, Golden-winged Warbler, and Cerulean Warbler, and each year provides funding for a wide variety of recovery actions targeting priority species at risk and geographic areas through stewardship programs including the Habitat Stewardship Program, the Aboriginal Fund for Species at Risk, and the Interdepartmental Recovery Fund. Key research needs relevant to the recovery of species at risk have been identified and include surveys and monitoring of population sizes and threats, population viability analyses, and addressing knowledge gaps identified for individual species (e.g., species biology and habitat needs).
The engagement of all levels of government, the local and international conservation communities, and the scientific research community plays an important role in the conservation and recovery of species at risk in Ontario – through surveying and monitoring, filling knowledge gaps, conducting public outreach, and restoring, protecting, or managing habitat – and will be critical to the continued success of the federal species at risk program.
Keywords: conservation biology, habitat protection, species at risk, wildlife recovery
Procedia PDF Downloads 452
14 Fabric Softener Deposition on Cellulose Nanocrystals and Cotton Fibers
Authors: Evdokia K. Oikonomou, Nikolay Christov, Galder Cristobal, Graziana Messina, Giovani Marletta, Laurent Heux, Jean-Francois Berret
Abstract:
Fabric softeners are aqueous formulations that contain ~10 wt.% double-tailed cationic surfactants. Here, a formulation in which 50% of the surfactant was replaced with low quantities of natural guar polymers was developed. Thanks to the reduced surfactant quantity, this product has a lower environmental impact, while the presence of the guars was found to maintain the product’s performance. The objective of this work is to elucidate the effect of the guar polymers on softener deposition and on the adsorption mechanism at the cotton surface. The surfactants in these formulations assemble into large vesicles with a broad size distribution (0.1-1 µm) that are stable in the presence of guars and upon dilution. The effect of guars on vesicle adsorption onto cotton was first estimated by using cellulose nanocrystals (CNC) as a stand-in for cotton. Dispersing CNC in water makes it possible to follow the interaction between the vesicles, guars, and CNC in the bulk. It was found that guars enhance deposition on CNC and that the vesicles are deposited intact on the fibers, driven by electrostatics. The mechanism of vesicle/guar adsorption on cellulose fibers was identified by quartz crystal microbalance with dissipation monitoring. The guars were found to increase the quantity of surfactant deposited, in agreement with the bulk results. The structure of the adsorbed surfactant on the fiber surfaces (vesicle or bilayer) was also influenced by the presence of the guars. Deposition studies on cotton fabrics were likewise conducted, using attenuated total reflection spectroscopy and scanning electron microscopy to study the effect of the polymers on this deposition. Finally, fluorescence microscopy was used to follow the adsorption of surfactant vesicles, labeled with a fluorescent dye, on cotton fabrics in water. It was found that, whether or not polymers are present, the surfactant vesicles adsorb on the fiber while maintaining their vesicular structure in water (supported vesicular bilayer structure).
The guars influence this process. However, upon drying, the vesicles are transformed into bilayers and eventually wrap the fibers (supported lipid bilayer structure). This mechanism is proposed for the adsorption of vesicular conditioner on cotton fiber and can be affected by the presence of polymers.
Keywords: cellulose nanocrystals, cotton fibers, fabric softeners, guar polymers, surfactant vesicles
Procedia PDF Downloads 179
13 Early Age Behavior of Wind Turbine Gravity Foundations
Authors: Janet Modu, Jean-Francois Georgin, Laurent Briancon, Eric Antoinet
Abstract:
Current practice during the repowering phase of wind turbines is to deconstruct the existing foundations and build new ones, either to accept larger wind loads or because the foundations have reached the end of their service lives. The ongoing research project FUI25 FEDRE (Fondations d’Eoliennes Durables et REpowering) therefore aims to propose scalable wind turbine foundation designs that allow reuse of the existing foundations. To undertake this research, numerical models and laboratory-scale models are being developed and implemented in the GEOMAS laboratory at INSA Lyon, following the instrumentation of a reference wind turbine situated in the northern part of France. Sensors placed within both the foundation and the underlying soil monitor the evolution of stresses from the foundation’s early age through to stresses in service. The results from the instrumentation form the basis of validation for both the laboratory and numerical work conducted throughout the project. The study currently focuses on the effect of the coupled Thermo-Hydro-Mechanical-Chemical (THMC) mechanisms that induce stress during the early age of the reinforced concrete foundation, and on scale-factor considerations in replicating the reference wind turbine foundation at laboratory scale. Using THMC 3D models in the COMSOL Multiphysics software, the numerical analysis performed on both the laboratory-scale and full-scale foundations simulates thermal deformation, hydration, shrinkage (desiccation and autogenous), and creep, so as to predict the initial damage caused by internal processes during concrete setting and hardening. Results show a prominent effect of early-age properties on the damage potential in full-scale wind turbine foundations. However, a prediction of the damage potential at laboratory scale shows significant differences in early-age stresses compared with the full-scale model, depending on the spatial position in the foundation.
In addition to the well-known size effect phenomenon, these differences may contribute to inaccuracies encountered when predicting the ultimate deformations of the on-site foundation using laboratory-scale models.
Keywords: cement hydration, early age behavior, reinforced concrete, shrinkage, THMC 3D models, wind turbines
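Early-age hydration models of the kind described above commonly account for temperature history through an Arrhenius equivalent-age (maturity) function; a minimal sketch of that standard ingredient (not the project's COMSOL implementation, and the default activation energy is a typical assumed value, not one reported by the authors):

```python
import math

def equivalent_age(temps_c, dt_hours, ea=33500.0, t_ref_c=20.0):
    """Arrhenius equivalent age (maturity) of early-age concrete, in hours.
    temps_c: concrete temperature history (deg C), one value per time step
    dt_hours: duration of each step; ea: apparent activation energy (J/mol)."""
    R = 8.314  # gas constant, J/(mol K)
    t_ref_k = t_ref_c + 273.15
    return sum(
        math.exp(-ea / R * (1.0 / (t + 273.15) - 1.0 / t_ref_k)) * dt_hours
        for t in temps_c
    )
```

At the reference temperature the equivalent age equals the elapsed time; the hydration exotherm in a massive gravity foundation raises the temperature and thus accelerates maturity relative to a small laboratory specimen, which is one driver of the scale differences noted above.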
Procedia PDF Downloads 175
12 Teacher-Child Interactions within Learning Contexts in Prekindergarten
Authors: Angélique Laurent, Marie-Josée Letarte, Jean-Pascal Lemelin, Marie-France Morin
Abstract:
This study aims to explore teacher-child interactions within learning contexts in the public prekindergartens of the province of Québec (Canada). It builds on previous research showing that teacher-child interactions in preschools have direct and determining effects on the quality of early childhood education and can directly or indirectly influence child development. Throughout a typical preschool day, however, children experience different learning contexts designed to promote their learning opportunities. Depending on the context, teacher-child interactions may vary, for example between free play and shared book reading. Indeed, some studies have found that teacher-directed versus child-directed contexts can lead to significant variations in teacher-child interactions. This study drew upon both the bioecological and the Teaching Through Interactions frameworks and was conducted through a descriptive and correlational design. Fifteen teachers were recruited to participate. At Time 1, in October, they completed a diary reporting the learning contexts they proposed in their classroom during a typical week. At Time 2, seven months later (May), they were videotaped three times in the morning (with two weeks between recordings) during a typical morning class. The quality of teacher-child interactions was then coded, within the contexts identified, using the Classroom Assessment Scoring System (CLASS). This tool measures three main domains of interactions (emotional support, classroom organization, and instructional support) and 10 dimensions scored on a scale from 1 (low quality) to 7 (high quality). Based on the teachers’ reports, five learning contexts were identified: 1) shared book reading, 2) free play, 3) morning meeting, 4) teacher-directed activity (such as craft), and 5) snack. Based on preliminary statistical analyses, little variation was observed across the learning contexts for each domain of the CLASS.
However, the instructional support domain showed lower scores during specific learning contexts, namely free play and teacher-directed activity. Practical implications for how preschool teachers could foster specific domains of interactions, depending on the learning context, to enhance children’s social and academic development will be discussed.
Keywords: teacher practices, teacher-child interactions, preschool education, learning contexts, child development
Procedia PDF Downloads 108
11 Laser Paint Stripping on Large Zones on AA 2024 Based Substrates
Authors: Selen Unaldi, Emmanuel Richaud, Matthieu Gervais, Laurent Berthe
Abstract:
Aircraft are painted with several layers to guarantee their protection from external attack. For aluminum AA 2024-T3 (a metallic structural part of the plane), a protective primer is applied to ensure corrosion protection. On top of this layer, the top coat is applied for aesthetic purposes. During the lifetime of an aircraft, top coat stripping plays an essential role and is performed on average every four years. However, since conventional stripping processes create hazardous waste and require long hours of labor, alternative methods have been investigated. Among them, laser stripping appears to be one of the most promising techniques, not only for the reasons mentioned above but also because it can be controlled and monitored. Applying a laser beam from the coated side provides stripping, but the depth of the process must be well controlled in order to prevent damage to the substrate and the anticorrosion primer. In addition, thermal effects on the painted layers must be taken into account. As an alternative, we worked on developing a process that uses shock wave propagation to achieve stripping through mechanical effects, with the beam applied from the substrate side (back face) of the samples. Laser stripping was applied to thickness-specified samples with a thickness deviation of 10-20%. First, the stripping threshold, i.e., the first fly-off of the top coat, was determined as a function of power density. After obtaining threshold values, the same power densities were applied to specimens to create large stripped zones with a spot overlap of 10-40%. Layer characteristics were determined in terms of physico-chemical properties and thickness range, both before and after laser stripping, in order to verify the health of the substrate material and the coating properties.
The substrate health is monitored by measuring the roughness of the laser-impacted zones and by free surface energy tests (both before and after laser stripping). The Hugoniot Elastic Limit (HEL) of the AA 2024-T3 substrates is also determined from the VISAR diagnostic (back-face surface velocity measurements). In addition, the coating properties are investigated in terms of adhesion levels and anticorrosion performance (neutral salt spray test). The influence of the polyurethane top-coat thickness is studied in order to verify the laser stripping process window for industrial aircraft applications.
Keywords: aircraft coatings, laser stripping, laser adhesion tests, epoxy, polyurethane
Procedia PDF Downloads 78
10 The Interaction of Lay Judges and Professional Judges in French, German and British Labour Courts
Authors: Susan Corby, Pete Burgess, Armin Hoeland, Helene Michel, Laurent Willemez
Abstract:
In German first-instance labour courts, lay judges always sit with a professional judge; in British and French first-instance labour courts, lay judges sometimes sit with a professional judge. The lay judges’ main contribution is their workplace knowledge, but they act in a juridical setting where legal norms prevail. Accordingly, the research question is: does the professional judge dominate the lay judges? The research, funded by the Hans-Böckler-Stiftung, is based on over 200 qualitative interviews conducted in France, Germany, and Great Britain in 2016-17 with lay and professional judges. Each interview lasted an hour on average, was audio-recorded, transcribed, and then analysed using MaxQDA. Status theories, which argue that external sources of (perceived) status are imported into the court, and complementary notions of informational advantage suggest that professional judges might exercise domination and control. Furthermore, previous empirical research on British and German labour courts, now some 30 years old, found that professional judges dominated, and more recent research on lay and professional judges in criminal courts also found professional judge domination. Our findings, however, are more nuanced and distinguish between the hearing and the deliberations, and also between the attitudes of judges in the three countries. First, in Germany and Great Britain the professional judge has specialist knowledge and expertise in labour law; in contrast, French professional judges do not study employment law and may only seldom adjudicate on employment law cases. Second, although the professional judge chairs and controls the hearing when he/she sits with lay judges in all three countries, British lay judges exceptionally have some latent power, as they have to take notes systematically due to the lack of recording technology. Such notes can be material if a party complains of bias, or if there is an appeal.
Third, as to labour court deliberations: in France, the professional judge alone determines the outcome of the case, but only if the lay judges have been unable to agree at a previous hearing, which occurs in only 20% of cases. In Great Britain and Germany, although the two lay judges and the professional judge have equal votes, the contribution of British lay judges’ workplace knowledge is less important than that of their German counterparts. British lay judges essentially sit only on discrimination cases, where the law, the purview of the professional judge, is complex; they do not sit routinely on unfair dismissal cases, where workplace practices are often a key factor in the decision. Also, British professional judges are less reliant on their lay judges than German professional judges: whereas the latter are career judges, the former only become professional judges after several years’ experience in the law, and many know, albeit indirectly through their clients, about a wide range of workplace practices. In conclusion, whether the professional judge dominates lay judges in labour courts varies by country, although this is mediated by the attitudes of those involved.
Keywords: cross-national comparisons, labour courts, professional judges, lay judges
Procedia PDF Downloads 292
9 Dynamic Characterization of Shallow Aquifer Groundwater: A Lab-Scale Approach
Authors: Anthony Credoz, Nathalie Nief, Remy Hedacq, Salvador Jordana, Laurent Cazes
Abstract:
Groundwater monitoring is classically performed using a network of piezometers at industrial sites. Groundwater flow parameters, such as direction, sense, and velocity, are deduced from indirect measurements between two or more piezometers. Groundwater sampling is generally done over the whole column of water inside each borehole to provide concentration values for each piezometer location. These flow and concentration values give a global ‘static’ image of the potential evolution of a contaminant plume in the shallow aquifer, with large uncertainties in time and space scales and in mass discharge dynamics. The TOTAL R&D Subsurface Environmental team is challenging this classical approach with an innovative, dynamic way of characterizing shallow aquifer groundwater. The current study aims at optimizing the tools and methodologies for (i) direct, multilevel measurement of groundwater velocities in each piezometer and (ii) calculation of the potential flux of dissolved contaminants in the shallow aquifer. Lab-scale experiments were designed to test commercial and R&D tools in a controlled sandbox. Multiphysics modeling was performed, taking into account the Darcy equation in the porous medium and the Navier-Stokes equations in the borehole. The first step of the study focused on groundwater flow at the porous medium/piezometer interface. Large discrepancies between direct flow rate measurements in the borehole and the Darcy flow rate in the porous medium were characterized during experiments and modeling. The structure and location of the tools in the borehole also affected the results and the uncertainties of the velocity measurement. In parallel, a direct-push tool was tested and gave more accurate results. The second step of the study focused on the mass flux of dissolved contaminants in groundwater. Several active and passive commercial and R&D tools were tested in the sandbox, and reactive transport modeling was performed to validate the experiments at lab scale.
Some tools will be selected and deployed in field assays to better assess the mass discharge of dissolved contaminants in an industrial site. The long-term subsurface environmental strategy targets in-situ, real-time, remote, and cost-effective monitoring of groundwater.
Keywords: dynamic characterization, groundwater flow, lab-scale, mass flux
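As a back-of-envelope sketch of the ‘classical’ quantities the study seeks to measure directly, the Darcy flux and the seepage (pore) velocity in a shallow sandy aquifer can be computed as follows. All parameter values are generic illustrative assumptions, not data from the study.

```python
# Illustrative sketch of Darcy's law in a shallow aquifer; the hydraulic
# conductivity, gradient, and porosity below are assumed generic values.

def darcy_flux(K, i):
    """Darcy flux q = K * i (m/s), with K the hydraulic conductivity
    and i = dh/dL the dimensionless hydraulic gradient."""
    return K * i

def seepage_velocity(q, n_e):
    """Average linear velocity of groundwater through the pore space."""
    return q / n_e

K = 1e-4    # hydraulic conductivity of a sand (m/s), assumed
i = 0.005   # hydraulic gradient (m of head per m), assumed
n_e = 0.3   # effective porosity, assumed

q = darcy_flux(K, i)          # volumetric flux per unit area
v = seepage_velocity(q, n_e)  # the velocity a tracer would actually travel at
print(f"Darcy flux: {q:.1e} m/s, seepage velocity: {v:.1e} m/s")
```

The gap between `q` (what a whole-borehole measurement approximates) and `v` (the true pore velocity) is one reason direct, multilevel in-borehole measurement is attractive.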
Procedia PDF Downloads 167
8 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches
Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys
Abstract:
Reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), providing interconnection between components, is key to reliability. During the last decades, PCB technologies evolved to sustain and/or fulfill increased original equipment manufacturer (OEM) requirements and specifications: higher densities and better performance, faster time to market and longer lifetime, newer materials and mixed build-ups. From the very beginning of the PCB industry up to recently, qualification, experimentation, and trial and error were the most popular methods to assess system (PCB) reliability. Nowadays, OEMs, PCB manufacturers, and scientists are working together in a close relationship to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize the base materials precisely (laminates, electrolytic copper, …) in order to understand failure mechanisms and simulate PCB aging under environmental constraints, by means of the finite element method for example. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing with digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated directly due to the thickness of the laminate (a few hundred microns). It has to be noted that knowledge of the out-of-plane properties is fundamental to investigating the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them.
The methodology has been applied to one laminate used in hyperfrequency space applications in order to obtain its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through-hole in a double-sided PCB are performed. Results show the major influence of the out-of-plane properties, and of their temperature dependence, on the lifetime of a printed circuit board. Acknowledgements—The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, and the support of CNES, Thales Alenia Space, and Cimulec, is acknowledged.
Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites
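The simplest analytical building blocks behind such homogenization schemes are the Voigt (iso-strain) and Reuss (iso-stress) bounds for a unidirectional fibre/resin ply; the paper's actual method works on the full 3D weave, so the sketch below is only the entry-level analytical step, with generic glass/epoxy property values assumed rather than the laminate characterized in the study.

```python
# Voigt and Reuss bounds for a unidirectional fibre/resin ply.
# E_fibre, E_resin, and vf are generic glass/epoxy assumptions, not
# properties of the PCB laminate studied in the paper.

def voigt_modulus(E_f, E_m, v_f):
    """Longitudinal modulus, rule of mixtures (iso-strain upper bound)."""
    return v_f * E_f + (1.0 - v_f) * E_m

def reuss_modulus(E_f, E_m, v_f):
    """Transverse modulus, inverse rule of mixtures (iso-stress lower bound)."""
    return 1.0 / (v_f / E_f + (1.0 - v_f) / E_m)

E_fibre, E_resin, vf = 72e9, 3.5e9, 0.55   # Pa, Pa, fibre volume fraction
E1 = voigt_modulus(E_fibre, E_resin, vf)
E2 = reuss_modulus(E_fibre, E_resin, vf)
print(f"E1 (longitudinal) = {E1/1e9:.1f} GPa, E2 (transverse) = {E2/1e9:.2f} GPa")
```

The large spread between the two bounds (roughly 41 GPa vs. 7 GPa here) illustrates why a detailed 3D numerical homogenization, rather than simple mixing rules, is needed for the out-of-plane properties.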
Procedia PDF Downloads 204
7 Shock-Induced Densification in Glass Materials: A Non-Equilibrium Molecular Dynamics Study
Authors: Richard Renou, Laurent Soulard
Abstract:
Lasers are widely used in glass material processing, from waveguide fabrication to channel drilling. The gradual damage of glass optics under UV lasers is also an important issue to be addressed. Glass materials (including metallic glasses) can undergo a permanent densification under laser-induced shock loading. Despite increased interest in interactions between lasers and glass materials, little is known about the structural mechanisms involved under shock loading. For example, the densification process in silica glass occurs between 8 GPa and 30 GPa; above 30 GPa, the material returns to its original density after relaxation. Investigating these unusual mechanisms in silica glass will provide a better overall understanding of glass behaviour. Non-equilibrium molecular dynamics (NEMD) simulations were carried out in order to gain insight into the microscopic structure of silica glass under shock loading. The shock was generated by a piston impacting the glass material at high velocity (from 100 m/s up to 2 km/s). Periodic boundary conditions were used in the directions perpendicular to the shock propagation to model an infinite system; one-dimensional shock propagation was therefore studied. Simulations were performed with the STAMP code developed by the CEA. Silica glass has a very specific structure: oxygen atoms around silicon atoms are organized in tetrahedrons, and those tetrahedrons are linked and tend to form rings inside the structure. A significant number of empty cavities is also observed in glass materials. In order to understand how shock loading impacts the overall structure, the tetrahedrons, the rings, and the cavities were thoroughly analysed. An elastic behaviour was observed when the shock pressure is below 8 GPa, consistent with the Hugoniot elastic limit (HEL) of 8.8 GPa estimated experimentally for silica glasses. Behind the shock front, the ring structure and the cavity distribution are impacted.
The ring volume is smaller, and most cavities disappear with increasing shock pressure. However, the tetrahedral structure is not affected. The elasticity of the glass structure is therefore related to ring shrinking and cavity closing. Above the HEL, the shock pressure is high enough to impact the tetrahedral structure: an increasing number of hexahedrons and octahedrons is formed with increasing pressure, and the large rings break to form smaller ones. The cavities are, however, not impacted, as most cavities are already closed under an elastic shock. After the material relaxation, a significant number of hexahedrons and octahedrons is still observed, and most of the cavities remain closed. The overall ring distribution after relaxation is similar to the equilibrium distribution. The densification process is therefore related to two structural mechanisms: a change in the coordination of silicon atoms and cavity closing. To sum up, non-equilibrium molecular dynamics simulations were carried out to investigate silica behaviour under shock loading. Analysing the structure led to interesting conclusions about the elastic and densification mechanisms in glass materials. This work will be completed with a detailed study of the mechanisms occurring above 30 GPa, where no sign of densification is observed after the material relaxation.
Keywords: densification, molecular dynamics simulations, shock loading, silica glass
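The pressure reached behind a piston-driven shock can be estimated from the Rankine-Hugoniot momentum jump condition together with a linear Us-up Hugoniot fit. The `c0` and `s` values below are generic assumptions for illustration only (the real Hugoniot of fused silica is notably non-linear), not parameters from the simulations.

```python
# Rough shock-pressure estimate: P = rho0 * Us * up, with a linear
# Hugoniot Us = c0 + s * up. c0 and s are assumed illustrative values;
# fused silica's actual Hugoniot is anomalous and non-linear.

def shock_pressure(rho0, c0, s, up):
    """Pressure (Pa) behind a shock driven at particle velocity up (m/s)."""
    Us = c0 + s * up          # shock velocity from the linear Hugoniot fit
    return rho0 * Us * up

rho0 = 2200.0                 # silica glass density, kg/m^3
c0, s = 1200.0, 1.6           # assumed Hugoniot parameters (illustrative)

for up in (100.0, 1000.0, 2000.0):   # span of piston velocities in the study
    P = shock_pressure(rho0, c0, s, up)
    print(f"up = {up:6.0f} m/s -> P = {P/1e9:6.2f} GPa")
```

With these assumed parameters, the lowest piston velocity stays well below the ~8 GPa elastic regime, while km/s-scale impacts reach the densification window, consistent with the pressure range discussed in the abstract.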
Procedia PDF Downloads 222
6 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators
Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy
Abstract:
Once-Through Steam Generators (OTSG) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner, the radiant section, and the convective section. Natural gas is burned in staged diffusion flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then injected into the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially nitrogen oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on increasingly advanced tools to study and predict NOₓ emission. With the increase in computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation process. Moreover, to optimize the burner operating conditions with regard to NOₓ emission, field characterization and measurements are usually carried out. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation-schedule constraints. Therefore, the application of CFD seems more adequate for providing guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG, namely the commercial software ANSYS Fluent and the open-source software OpenFOAM.
The RANS (Reynolds-Averaged Navier-Stokes) equations, combined with the Eddy Dissipation Concept to model combustion and closed with the k-epsilon turbulence model, are solved. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means of assessing the numerical modelling. Flame temperatures and chemical composition are used as reference fields for this validation. Results show a fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristic Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are pinpointed and correlated to the physics of the flow. CFD is, therefore, a useful tool for providing insight into the NOₓ emission phenomena in OTSGs. Sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for field tune-ups.
Keywords: combustion, computational fluid dynamics, nitrogen oxides emission, once-through steam generators
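The physical reason staged combustion reduces NOₓ is the extreme temperature sensitivity of thermal NO formation, whose rate-limiting step is the Zeldovich reaction N₂ + O → NO + N. A minimal sketch of that sensitivity, using commonly quoted textbook Arrhenius parameters (assumptions, not values taken from the study's chemistry model):

```python
import math

# Temperature sensitivity of the rate-limiting thermal-NO (Zeldovich) step,
# N2 + O -> NO + N. The pre-exponential factor and activation temperature
# are commonly quoted textbook values, used only for illustration.

def zeldovich_k1(T):
    """Forward rate constant of N2 + O -> NO + N (arbitrary consistent units)."""
    return 1.8e11 * math.exp(-38370.0 / T)

# A modest 200 K reduction of peak flame temperature, as achieved by
# staging the combustion air, cuts the rate constant several-fold:
ratio = zeldovich_k1(2100.0) / zeldovich_k1(1900.0)
print(f"Rate constant ratio (2100 K vs 1900 K): {ratio:.1f}x")
```

This steep exponential dependence is why the Emission Characteristic Map concentrates NOₓ sources in the hottest flow regions identified by the RANS simulations.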
Procedia PDF Downloads 113
5 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Its Degradation Products
Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet
Abstract:
All spent-nuclear-fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and a hydrocarbon solvent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution) are separated from fission products and minor actinides. During a normal extraction operation, uranium is extracted into the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture called red oil when it comes into contact with nitric acid. The formation of this unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives, and heavy-metal nitrate complexes on the other hand. The decomposition of red oil can lead to a violent explosive thermal runaway. These hazards are at the origin of several accidents, such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions that involve TBP and all the other degradation products, and calls for a better knowledge of the underlying chemical phenomena. A simulation tool (Alambic) is currently being developed at IRSN that integrates thermal and kinetic functions related to the deterioration of uranyl nitrates in organic and aqueous phases, but not of the n-butyl phosphates. To include them in the modeling scheme, there is an urgent need to obtain the thermodynamic and kinetic functions governing the deterioration processes in the liquid phase. However, little is known about the thermodynamic properties, such as standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂ complexes.
In this work, we propose to estimate these thermodynamic properties with quantum chemical methods (QM). Thus, in the first part of our project, we focused on the mono-, di-, and tri-butyl phosphates. Quantum chemical calculations have been performed to study several reactions leading to the formation of mono-(H₂MBP), di-(HDBP), and tri-butyl (TBP) phosphates in the gas and liquid phases. In the gas phase, the structures of all species were optimized using the B3LYP density functional, with triple-ζ def2-TZVP basis sets for all atoms. The corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 MPa. Accurate single-point energies were calculated with the efficient localized LCCSD(T) method, extrapolated to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP, and H₂MBP are finally predicted with an uncertainty of about 15 kJ mol⁻¹. In the second part of this project, we have investigated the fundamental properties of the three organic species that contribute most to the thermal runaway: UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂, using the same quantum chemical methods as for TBP and its derivatives, in both the gas and the liquid phase. We will discuss the structures and thermodynamic properties of all these species.
Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis
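The bookkeeping that turns computed reaction enthalpies into standard formation enthalpies is Hess's law: the unknown formation enthalpy is obtained by difference from the reaction enthalpy and the known formation enthalpies of the other species. The sketch below illustrates that arithmetic only; every numerical value is a placeholder, not a result from the study.

```python
# Hess's-law bookkeeping: dH_rxn = sum(dHf products) - sum(dHf reactants).
# Solving for one unknown product formation enthalpy. All numbers below
# are placeholders for illustration, not results from the study.

def formation_enthalpy(dH_rxn, known_products, reactants):
    """Return dHf of the single unknown product, given the computed
    reaction enthalpy, the dHf of the other products, and the reactants."""
    return dH_rxn - sum(known_products) + sum(reactants)

# Hypothetical reaction A + B -> X + C, with dH_rxn from quantum chemistry:
dH_rxn = -120.0   # kJ/mol, computed reaction enthalpy (placeholder)
dHf_X = formation_enthalpy(
    dH_rxn,
    known_products=[-250.0],        # dHf(C), kJ/mol (placeholder)
    reactants=[-300.0, -80.0],      # dHf(A), dHf(B), kJ/mol (placeholders)
)
print(f"Estimated dHf(X) = {dHf_X:.1f} kJ/mol")
```

In practice, the accuracy of the result is limited by the computed `dH_rxn` (hence the ~15 kJ mol⁻¹ uncertainty quoted in the abstract) and by the reference formation enthalpies used.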
Procedia PDF Downloads 188
4 Defining a Framework for Holistic Life Cycle Assessment of Building Components by Considering Parameters Such as Circularity, Material Health, Biodiversity, Pollution Control, Cost, Social Impacts, and Uncertainty
Authors: Naomi Grigoryan, Alexandros Loutsioli Daskalakis, Anna Elisse Uy, Yihe Huang, Aude Laurent (Webanck)
Abstract:
In response to the building and construction sectors accounting for a third of all energy demand and emissions, the European Union has placed new laws and regulations on the construction sector that emphasize material circularity, energy efficiency, biodiversity, and social impact. Existing design tools assess sustainability in early-stage design for products or buildings; however, there is no standardized methodology for measuring the circularity performance of building components. Existing assessment methods for building components focus primarily on carbon footprint but lack the comprehensive analysis required to design for circularity. The research conducted in this paper covers the parameters needed to assess sustainability in the design process of architectural products such as doors, windows, and facades. It maps a framework for a tool that assists designers with real-time sustainability metrics. Considering the life cycle of building components such as façades, windows, and doors involves the life cycle stages applied to product design and many of the methods used in the life cycle analysis of buildings. The current industry standards of sustainability assessment for metal building components follow cradle-to-grave life cycle assessment (LCA), track Global Warming Potential (GWP), and document the parameters used for an Environmental Product Declaration (EPD). Developed by the Ellen MacArthur Foundation, the Material Circularity Indicator (MCI) is a methodology utilizing the data from LCA and EPDs to rate circularity, with a "value between 0 and 1, where higher values indicate a higher circularity".
Expanding on the MCI with additional indicators such as the Water Circularity Index (WCI), the Energy Circularity Index (ECI), the Social Circularity Index (SCI), and Life Cycle Economic Value (EV), and by calculating biodiversity risk and uncertainty, the assessment of an architectural product's impact can be targeted more specifically based on product requirements, performance, and lifespan. Broadening the scope of LCA calculation for products to incorporate aspects of building design allows product designers to account for the disassembly of architectural components. For example, the Material Circularity Indicator for architectural products such as windows and facades is typically low due to the impact of glass, as 70% of glass ends up in landfills because of damage in the disassembly process. The low MCI can be countered by expanding beyond cradle-to-grave assessment and focusing the design process on disassembly, recycling, and repurposing with the help of real-time assessment tools. Design for Disassembly and Urban Mining have been integrated within the construction field at small scales as project-based exercises, not addressing the entire supply chain of architectural products. By adopting more comprehensive sustainability metrics and incorporating uncertainty calculations, building components can be assessed more accurately with decarbonization and disassembly in mind, addressing the large-scale commercial markets within construction, some of the most significant contributors to climate change.
Keywords: architectural products, early-stage design, life cycle assessment, material circularity indicator
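The core of the MCI calculation can be sketched in a few lines: a Linear Flow Index (LFI) built from the virgin-feedstock and unrecoverable-waste fractions, discounted by a utility factor F(X) = 0.9/X. This is a simplified reading of the Ellen MacArthur Foundation methodology (the recycling-process waste terms are taken as zero here), and all input values are illustrative assumptions, not data from the paper.

```python
# Simplified Material Circularity Indicator sketch, after the Ellen
# MacArthur Foundation methodology: MCI = max(0, 1 - LFI * F(X)),
# with F(X) = 0.9 / X. Recycling-process wastes are neglected here,
# and the product values below are illustrative assumptions.

def mci(mass, virgin_fraction, waste_fraction, utility_x=1.0):
    """Material Circularity Indicator in [0, 1]; higher is more circular."""
    V = virgin_fraction * mass        # virgin feedstock mass
    W = waste_fraction * mass         # unrecoverable waste mass
    lfi = (V + W) / (2.0 * mass)      # linear flow index
    return max(0.0, 1.0 - lfi * (0.9 / utility_x))

# A window unit with mostly virgin glass and 70% landfilled at end of life:
low = mci(mass=1.0, virgin_fraction=0.9, waste_fraction=0.7)
# The same unit redesigned for disassembly, reuse, and recycled content:
high = mci(mass=1.0, virgin_fraction=0.3, waste_fraction=0.1)
print(f"MCI as-is: {low:.2f}, designed for disassembly: {high:.2f}")
```

The jump between the two scenarios illustrates the abstract's point: designing glazed components for disassembly, rather than accepting the 70% landfill rate, is what moves the MCI.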
Procedia PDF Downloads 88
3 Heat Transfer Phenomena Identification of a Non-Active Floor in a Stack-Ventilated Building in Summertime: Empirical Study
Authors: Miguel Chen Austin, Denis Bruneau, Alain Sempey, Laurent Mora, Alain Sommier
Abstract:
An experimental study in a Plus Energy House (PEH) prototype was conducted in August 2016. It aimed to highlight the energy charge and discharge of a concrete-slab floor submitted to day-night-cycle heat exchanges in the southwestern part of France, and to identify the heat transfer phenomena that take place in both processes. The main features of this PEH relevant to this study are the following: (i) a non-active slab covering the major part of the floor surface of the house, which includes a 68 mm thick concrete layer as its upper layer; (ii) solar window shades located on the north and south facades, along with a large south-facing eave; (iii) large double-glazed windows covering the majority of the south facade; (iv) a natural ventilation system (NVS) composed of ten automated openings of different dimensions: four located on the south facade, four on the north facade, and two on the north-oriented shed roof. To highlight the energy charge and discharge processes of the non-active slab, heat flux and temperature measurement techniques were implemented, along with airspeed measurements. Ten measurement poles (MP) were distributed over the concrete-floor surface. Each MP represented a measurement zone where air and surface temperatures and convection and radiation heat fluxes were measured. The airspeed was measured only at two points over the slab surface, near the south facade. To identify the heat transfer phenomena taking part in the charge and discharge processes, relevant dimensionless parameters were used, along with statistical analysis; heat transfer phenomena were identified based on this analysis. Experimental data, after processing, showed that two periods could be identified at a glance: charge (heat gain, positive values) and discharge (heat losses, negative values).
During the charge period, radiation heat exchanges at the floor surface were significantly higher than convection. Conversely, during the discharge period, convection heat exchanges were significantly higher than radiation. Spatially, both convection and radiation heat exchanges are higher near the natural ventilation openings and smaller far from them, as expected. Experimental correlations have been determined using a linear regression model, relating the Nusselt number to relevant parameters: the Peclet, Rayleigh, and Richardson numbers. This led to the determination of the convective heat transfer coefficient and its comparison with the convective coefficient resulting from measurements. Results showed that forced and natural convection coexist during the discharge period; more accurate correlations were found with the Peclet number than with the Rayleigh number. This may suggest that forced convection is stronger than natural convection. Yet, the airspeed levels encountered suggest that natural convection should take place rather than forced convection; despite this, the Richardson number values encountered indicate otherwise. During the charge period, air-velocity levels might indicate that no air motion occurs, which would lead to heat transfer by diffusion instead of convection.
Keywords: heat flux measurement, natural ventilation, non-active concrete slab, plus energy house
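The two diagnostics at the heart of this analysis can be sketched directly: the Richardson number Ri = Gr/Re², which classifies the convection regime (Ri ≪ 1: forced dominates; Ri ≫ 1: natural dominates), and the convective coefficient recovered from a Nusselt correlation, h = Nu·k/L. All numerical values below (air properties, temperature difference, airspeed, Nusselt number) are illustrative assumptions, not the study's measurements.

```python
# Mixed-convection diagnostics over a slab: Richardson number Ri = Gr/Re^2
# and convective coefficient h = Nu * k / L. All inputs are illustrative
# assumptions, not measured values from the PEH experiment.

def richardson(g, beta, dT, L, U, nu):
    """Ri = Gr / Re^2 for characteristic length L and airspeed U."""
    Gr = g * beta * dT * L**3 / nu**2   # Grashof number (buoyancy vs viscosity)
    Re = U * L / nu                     # Reynolds number (inertia vs viscosity)
    return Gr / Re**2

def h_from_nusselt(Nu, k, L):
    """Convective heat transfer coefficient from a Nusselt correlation."""
    return Nu * k / L

# Air near 300 K (assumed properties), 2 K slab-air difference, 0.1 m/s draft:
Ri = richardson(g=9.81, beta=1/300.0, dT=2.0, L=1.0, U=0.1, nu=1.6e-5)
h = h_from_nusselt(Nu=50.0, k=0.026, L=1.0)   # Nu assumed for illustration
print(f"Ri = {Ri:.1f}, h = {h:.2f} W/m^2.K")
```

With these assumed inputs Ri comes out well above 1, i.e. buoyancy-dominated, which illustrates how low measured airspeeds and Richardson values can nonetheless point to different regimes, as the abstract reports.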
Procedia PDF Downloads 416
2 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip
Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas
Abstract:
A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy-metal ions such as U(VI), U(IV), or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation with an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current ‘beaker’ method were to reduce the amount of radioactive substance to be handled by laboratory personnel, to ease the instrumentation adjustability within a glove-box environment, and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion, creating a linear concentration gradient inside a 200 μm × 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; thanks to the addition of a pH-sensitive fluorophore, the neutralization boundary can be visualized in a detection range of 500-600 nm. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient.
Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, its generation can be fully achieved in less than one second, a more time-efficient gradient-generation process than other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousandfold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration
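The Taylor-Aris mechanism underlying the on-chip gradient can be sketched with its classical result for a circular channel of radius a: an axial plug spreads with an effective diffusivity D_eff = D(1 + Pe²/48), where Pe = u·a/D. The flow speed and molecular diffusivity below are illustrative assumptions (only the channel diameter follows the abstract); the radius interpretation of the 200 μm figure is also an assumption.

```python
# Taylor-Aris dispersion in a circular micro-channel of radius a:
# D_eff = D * (1 + Pe^2 / 48), with Pe = u * a / D. The diffusivity and
# mean flow speed are assumed illustrative values; a = 100 um assumes the
# 200 um channel dimension quoted in the abstract is a diameter.

def taylor_aris_deff(D, u, a):
    """Effective axial dispersion coefficient (m^2/s)."""
    Pe = u * a / D            # radial Peclet number
    return D * (1.0 + Pe**2 / 48.0)

D = 1e-9       # molecular diffusivity of a small ion in water, m^2/s (typical)
a = 100e-6     # channel radius, m (assumed: 200 um diameter)
u = 1e-3       # mean flow speed, m/s (assumed)

D_eff = taylor_aris_deff(D, u, a)
print(f"Pe = {u*a/D:.0f}, D_eff/D = {D_eff/D:.0f}")
```

The two-orders-of-magnitude enhancement of axial spreading over molecular diffusion is what lets the device stretch a sharp interface into a smooth, usable linear gradient in under a second.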
Procedia PDF Downloads 387
1 Transport Hubs as Loci of Multi-Layer Ecosystems of Innovation: Case Study of Airports
Authors: Carolyn Hatch, Laurent Simon
Abstract:
Urban mobility and the transportation industry are undergoing a transformation, shifting from an auto production-consumption model that has dominated since the early 20th century towards new forms of personal and shared multi-modality [1]. This is shaped by key forces such as climate change, which has induced a shift in production and consumption patterns and efforts to decarbonize and improve transport services through, for instance, the integration of vehicle automation, electrification, and mobility sharing [2]. Advanced innovation practices and platforms for experimentation and validation of new mobility products and services that are increasingly complex and multi-stakeholder-oriented are shaping this new world of mobility. Transportation hubs, such as airports, are emblematic of these disruptive forces playing out in the mobility industry. Airports are emerging as the core of innovation ecosystems on and around contemporary mobility issues, and are increasingly recognized as complex public/private nodes operating in many societal dimensions [3,4]. These include urban development, sustainability transitions, digital experimentation, customer experience, infrastructure development, and data exploitation (for instance, airports generate massive and often untapped data flows, with significant potential for use, commercialization, and social benefit). Yet airport innovation practices have not been well documented in the innovation literature. This paper addresses this gap by proposing a model of airport innovation that aims to equip airport stakeholders to respond to these new and complex innovation needs in practice.
The methodology involves: 1 – a literature review bringing together key research and theory on airport innovation management, open innovation, and innovation ecosystems in order to evaluate airport practices through an innovation lens; 2 – an international benchmarking of leading airports and their innovation practices, including examples such as Aéroports de Paris, Schiphol in Amsterdam, Changi in Singapore, and others; and 3 – semi-structured interviews with airport managers on key aspects of organizational practice, facilitated through a close partnership with the Airports Council International (ACI), a major stakeholder in this research project. Preliminary results indicate that the most successful airports are those that have shifted to a multi-stakeholder, platform-ecosystem model of innovation. The recent entrance of new actors into airports (Google, Amazon, Accor, Vinci, Airbnb, and others) has forced the opening of organizational boundaries to share and exchange knowledge with a broader set of ecosystem players. This has also led to new forms of governance and intermediation by airport actors to connect complex, highly distributed knowledge, along with new kinds of inter-organizational collaboration, co-creation, and collective ideation processes. The leading airports in the case study have demonstrated a unique capacity to bring traditionally siloed activities to “think together”, “explore together” and “act together”, to share data, contribute expertise, and pioneer new governance approaches and collaborative practices. In so doing, they have successfully integrated these many disruptive change pathways and steered their implementation and coordination towards innovative mobility outcomes, with positive societal, environmental, and economic impacts.
This research has implications for: 1 - innovation theory, 2 - urban and transport policy, and 3 - organizational practice - within the mobility industry and across the economy.
Keywords: airport management, ecosystem, innovation, mobility, platform, transport hubs
Procedia PDF Downloads 181