Search results for: Nicolas Brosse

48 An Approach to Automate the Modeling of Life Cycle Inventory Data: Case Study on Electrical and Electronic Equipment Products

Authors: Axelle Bertrand, Tom Bauer, Carole Charbuillet, Martin Bonte, Marie Voyer, Nicolas Perry

Abstract:

The complexity of Life Cycle Assessment (LCA) can be identified as the main obstacle to its massification. Because of this obstacle, the diffusion of eco-design and LCA methods in the manufacturing sectors may prove impossible. This article addresses the research question: how can the LCA method be adapted so that it can be generalized massively and its performance improved? This paper aims to develop an approach for automating LCA in order to carry out assessments on a massive scale. To answer this, we proceeded in three steps. First, the literature was analyzed to identify existing automation methods. Given the constraints of large-scale manual processing, it was necessary to define a new approach, drawing inspiration from some of these methods and combining them with new ideas and improvements. In a second part, our development of the automated construction is presented (reconciliation and implementation of data). Finally, the LCA case study of a conduit is presented to demonstrate the feature-based approach offered by the developed tool. A computerized environment supports effective and efficient decision-making related to materials and processes, facilitating data mapping and hence product modeling. The method is also able to complete the LCA process on its own within minutes: the calculations and the LCA report are generated automatically. The tool developed has shown that automation by code is a viable solution for meeting LCA's massification objectives. It has major advantages over the traditional LCA method and overcomes the complexity of LCA. Indeed, the case study demonstrated the time savings associated with this methodology and, therefore, the opportunity to increase the number of LCA reports generated in order to meet regulatory requirements. Moreover, this case study also illustrates the potential of the proposed method for a wide range of applications.
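
As a rough, purely illustrative sketch of the feature-based data-mapping step described above (the dataset names, emission factors and bill of materials below are hypothetical, not taken from the tool), such an automated mapping could look like this in Python:

```python
# Minimal sketch of feature-based LCI mapping (hypothetical datasets and factors,
# not the authors' tool): each bill-of-materials line is reconciled with an
# inventory dataset and a single impact score is aggregated from the mapped amounts.

# Hypothetical mapping from material keywords to (dataset name, kg CO2-eq per kg).
LCI_DATABASE = {
    "copper": ("market for copper, cathode", 4.2),
    "pvc": ("polyvinylchloride production", 2.4),
    "steel": ("steel production, low-alloyed", 1.9),
}

def map_feature(feature: dict) -> tuple[str, float]:
    """Reconcile one product feature with an inventory dataset and its impact."""
    for keyword, (dataset, factor) in LCI_DATABASE.items():
        if keyword in feature["material"].lower():
            return dataset, factor * feature["mass_kg"]
    raise KeyError(f"No dataset found for {feature['material']!r}")

def assess(product: list[dict]) -> float:
    """Sum the mapped impacts over all features of the product model."""
    return sum(map_feature(f)[1] for f in product)

# Hypothetical bill of materials for a conduit.
conduit = [
    {"material": "PVC body", "mass_kg": 0.35},
    {"material": "Copper conductor", "mass_kg": 0.12},
]
print(f"Climate change score: {assess(conduit):.2f} kg CO2-eq")
```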

Keywords: automation, EEE, life cycle assessment, life cycle inventory, massively

Procedia PDF Downloads 90
47 Effect of Environmental Parameters on the Water Solubility of Polycyclic Aromatic Hydrocarbons and Derivatives Using the Taguchi Experimental Design Methodology

Authors: Pranudda Pimsee, Caroline Sablayrolles, Pascale De Caro, Julien Guyomarch, Nicolas Lesage, Mireille Montréjaud-Vignoles

Abstract:

The MIGR’HYCAR research project was initiated to provide decision-support tools for risks connected to oil spill drifts in continental waters. These tools aim to serve in the decision-making process once oil spill pollution occurs and/or as reference tools to study scenarios of the potential impacts of pollution on a given site. This paper focuses on the distribution of polycyclic aromatic hydrocarbons (PAHs) and derivatives from oil spills in water as a function of environmental parameters. Eight petroleum oils covering a representative range of commercially available products were tested. 41 PAHs and derivatives, among them the 16 EPA priority pollutants, were studied by dynamic tests at laboratory scale. The chemical profile of the water-soluble fraction differed from the parent oil profile because of the varying water solubilities of the oil components. Semi-volatile compounds (naphthalenes) constitute the major part of the water-soluble fraction. A large variation in the composition of the water-soluble fraction was highlighted depending on oil type. Moreover, four environmental parameters (temperature, suspended solid quantity, salinity, and oil:water surface ratio) were investigated with the Taguchi experimental design methodology. The results showed that the oils fall into three groups: the solubility of domestic fuel and Jet A1 presented a high sensitivity to the parameters studied, meaning these must be taken into account; for gasoline (SP95-E10) and diesel fuel, a medium sensitivity to the parameters was observed; and the four other oils showed low sensitivity to the parameters studied. Finally, three parameters were found to be significant with respect to the water-soluble fraction.
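
As an illustration of the Taguchi analysis mentioned above, a minimal main-effects computation on a hypothetical L9 design (the factor levels and responses below are invented, not the project's measurements) could be sketched as:

```python
import pandas as pd

# Minimal sketch of a Taguchi main-effects analysis (hypothetical L9 design and
# responses, not the project's data): four factors at three levels each, with the
# measured water-soluble fraction (WSF) as the response.
runs = pd.DataFrame({
    "temperature":      [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "suspended_solids": [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "salinity":         [1, 2, 3, 2, 3, 1, 3, 1, 2],
    "oil_water_ratio":  [1, 2, 3, 3, 1, 2, 2, 3, 1],
    "wsf_mg_per_L":     [4.1, 3.8, 3.5, 5.0, 4.6, 4.9, 6.2, 5.8, 6.0],
})

# Main effect of a factor = mean response at each of its levels; the factor with
# the largest spread between level means is the most influential one.
for factor in ["temperature", "suspended_solids", "salinity", "oil_water_ratio"]:
    level_means = runs.groupby(factor)["wsf_mg_per_L"].mean()
    print(factor, level_means.round(2).to_dict(),
          "range =", round(level_means.max() - level_means.min(), 2))
```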

Keywords: monitoring, PAHs, water soluble fraction, SBSE, Taguchi experimental design

Procedia PDF Downloads 325
46 Women Mayors and Management of Spanish Councils: An Empirical Analysis

Authors: Carmen Maria Hernandez-Nicolas, Juan Francisco Martín-Ugedo, Antonio Mínguez-Vera

Abstract:

This paper analyses the influence of the gender of the mayors of Spanish local governments on different budget items, using a sample of 8,243 town councils between 2002 and 2010 (64,361 observations). The system Generalized Method of Moments (GMM) technique was employed to examine this panel data. This powerful methodology makes it possible to control for the endogeneity of the variables and the heterogeneity of the sample. Unlike previous works focused on the influence of gender on firm decisions, the present work analyzes the influence of the gender of the mayor on the council's decisions. Specifically, we examine the differences in financial liabilities, in expenses on security, protection and social promotion, and in income items relating to public management. In addition, the study focuses on the Spanish context, which is characterized by a greater decentralization of public responsibility than in neighboring countries, feeding the debate on the operational efficiency of local government together with an open debate on the importance of gender in public management. The results show that female mayors tend to have lower expenses in general, without significant differences between the incomes obtained by male and female mayors. We also find that female mayors incur fewer financial liabilities, one of the most important problems in the Spanish public sector. However, despite cuts in the public sector, these councils have higher expenditure on security, protection and social promotion. According to this evidence, the presence of women in politics may serve to improve the councils' economic situation, and it is necessary not only for social justice but also for economic efficiency. Besides, women mayors are more common in councils with more inhabitants, but women who have served for a very long time are less common.

Keywords: councils, gender, local budgets, public management, women mayors

Procedia PDF Downloads 401
45 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges

Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch

Abstract:

Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data make it possible to gain insights into the molecular differences between tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints specifically measuring a mechanism is relatively straightforward, the interpretation of big data is more complex and benefits from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions to verify systems biology data, methods, and conclusions. Computational challenges leveraging the wisdom of the crowd allow methods to be benchmarked for specific tasks, such as signature extraction and/or sample classification. Four challenges have already been conducted successfully and confirmed that the aggregation of predictions often leads to better results than individual predictions and that methods perform best in specific contexts. Whenever the scientific question of interest does not have a gold standard but may greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework to navigate through different applications, databases and services in biology and medicine. We will present the results we obtained when analyzing data with our network-based method, and introduce a datathon that will take place in Japan to encourage the analysis of the same datasets with other methods and allow the consolidation of conclusions.

Keywords: big data interpretation, datathon, systems toxicology, verification

Procedia PDF Downloads 278
44 Optimization of an Electro-Submersible Pump for Crude Oil Extraction Processes

Authors: Deisy Becerra, Nicolas Rios, Miguel Asuaje

Abstract:

The Electrical Submersible Pump (ESP), which consists of a serial arrangement of centrifugal pumps, has been one of the most widely used artificial lift methods in recent years. One of the main concerns when handling crude oil is the formation of O/W or W/O (oil/water or water/oil) emulsions inside the pump, due to the shear rate imparted and the presence of high-molecular-weight substances that act as natural surfactants. It is therefore important to analyze the flow patterns inside the pump in order to increase the percentage of oil recovered, using the centrifugal force and the density difference between oil and water to separate the liquid phases. For this study, a Computational Fluid Dynamics (CFD) model was developed in STAR-CCM+ based on the 3D geometry of a Franklin Electric 4400 4' four-stage ESP. The last stage was modified to improve the centrifugal effect inside the pump, and a perforated double tube was designed with three different hole configurations at the outlet section, through which the separated water flows. The hole arrangements have different geometrical configurations: circles, rectangles, and irregular grating-like shapes around the tube. The two-phase flow was modeled using an Eulerian approach with the Volume of Fluid (VOF) method, which predicts the distribution and movement of the larger interfaces between the immiscible phases. Different water-oil compositions were evaluated: 70-30% v/v, 80-20% v/v and 90-10% v/v. A greater recovery of oil was obtained: for the compositions evaluated, the volumetric oil fraction was greater than 0.55 at the pump outlet. Likewise, an inversely proportional relationship between the water/oil rate (WOR) and the volumetric flow can be shown. For the volumetric fractions evaluated, the oil flow increased by approximately 41%-10% for circular perforations and 49%-19% for rectangular perforations, with respect to the inlet flow. In addition, eliminating the pump diffuser in the last stage reduced the head by approximately 20%.
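
As a purely illustrative sketch of the Volume of Fluid idea used here (a 1D upwind advection of a volume fraction field, not the STAR-CCM+ setup; all values are arbitrary):

```python
import numpy as np

# Minimal 1D sketch of the Volume of Fluid idea (illustrative only, not the
# STAR-CCM+ model): advect an oil volume fraction field alpha with a uniform
# velocity using a first-order upwind scheme.
nx, L = 200, 1.0             # number of cells and domain length (arbitrary)
dx = L / nx
u = 0.5                      # m/s, uniform velocity (arbitrary)
dt = 0.4 * dx / u            # time step satisfying the CFL condition
alpha = np.zeros(nx)
alpha[40:80] = 1.0           # initial slug of oil (alpha = 1 is pure oil)

for _ in range(200):
    flux = u * alpha                          # upwind flux (u > 0: upstream cell)
    alpha[1:] -= dt / dx * (flux[1:] - flux[:-1])
    alpha[0] = 0.0                            # inflow of pure water

print("oil volume in the domain:", round(alpha.sum() * dx, 3))
```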

Keywords: computational fluid dynamic, CFD, electrical submersible pump, ESP, two phase flow, volume of fluid, VOF, water/oil rate, WOR

Procedia PDF Downloads 158
43 Biomolecules Based Microarray for Screening Human Endothelial Cells Behavior

Authors: Adel Dalilottojari, Bahman Delalat, Frances J. Harding, Michaelia P. Cockshell, Claudine S. Bonder, Nicolas H. Voelcker

Abstract:

Endothelial Progenitor Cell (EPC)-based therapies continue to be of interest for treating ischemic events, based on their proven role in promoting blood vessel formation and thus tissue re-vascularisation. Current strategies for the production of clinical-grade EPCs require the in vitro isolation of EPCs from peripheral blood followed by cell expansion to provide sufficient quantities of EPCs for cell therapy. This study examines the use of different biomolecules to significantly improve the current strategy of EPC capture and expansion on collagen type I (Col I). In this study, four different biomolecules were immobilised on a surface and investigated for their capacity to support EPC capture and proliferation. First, a cell microarray platform was fabricated by coating a glass surface with an epoxy-functional allyl glycidyl ether plasma polymer (AGEpp) to mediate biomolecule binding. The four candidate biomolecules tested were Col I, collagen type II (Col II), collagen type IV (Col IV) and vascular endothelial growth factor A (VEGF-A), which were arrayed on the epoxy-functionalised surface using a non-contact printer. The area surrounding the printed biomolecules was passivated with polyethylene glycol-bisamine (A-PEG) to prevent non-specific cell attachment. EPCs were seeded onto the microarray platform and cell numbers quantified after 1 h (to determine capture) and 72 h (to determine proliferation). All of the extracellular matrix (ECM) biomolecules printed demonstrated an ability to capture EPCs within 1 h of cell seeding, with Col II exhibiting the highest level of attachment compared to the other biomolecules. Interestingly, Col IV exhibited the highest increase in EPC expansion after 72 h compared to Col I, Col II and VEGF-A. These results provide information for significant improvements in the capture and expansion of human EPCs for further applications.

Keywords: biomolecules, cell microarray platform, cell therapy, endothelial progenitor cells, high throughput screening

Procedia PDF Downloads 292
42 Assessing the Impact of Heatwaves on Intertidal Mudflat Colonized by an Exotic Mussel

Authors: Marie Fouet, Olivier Maire, Cécile Masse, Hugues Blanchet, Salomé Coignard, Nicolas Lavesque, Guillaume Bernard

Abstract:

Exacerbated by global change, extreme climatic events such as atmospheric and marine heatwaves may interact with the spread of non-indigenous species and their associated impacts on marine ecosystems. Since the 1970s, introductions of non-indigenous species through oyster exchanges have been numerous. Among them, the Asian date mussel Arcuatula senhousia has colonized a large number of ecosystems worldwide (e.g., California, New Zealand, Italy). In these places, A. senhousia has led to important habitat modifications in the benthic compartment through the physical, biological, and biogeochemical effects associated with the development of dense mussel populations. In Arcachon Bay (France), a coastal lagoon on the French Atlantic coast and a hotspot of oyster farming, abundances of A. senhousia have recently increased, following a lag time of ca. 20 years since the first record of the species in 2002. Here, we addressed the potential effects of the interaction between the A. senhousia invasion and heatwave intensity on ecosystem functioning within an intertidal mudflat. More precisely, two realistic intensities ("High" and "Severe") of combined marine and atmospheric heatwaves were simulated in an experimental tidal mesocosm system, to which naturally varying densities of A. senhousia and the associated benthic communities were exposed in sediment cores collected in situ. Following a six-day exposure, community-scale responses were assessed by measuring benthic metabolism (oxygen and nutrient fluxes) in each core. The results show that, besides a significantly enhanced benthic metabolism with increasing heatwave intensity, mussel density clearly mediated the magnitude of the community-scale response, highlighting the importance of understanding the interactive effects of environmental stressors co-occurring with non-indigenous species for a better assessment of their impacts.

Keywords: arcuatula senhousia, benthic habitat, ecosystem functioning, heatwaves, metabolism

Procedia PDF Downloads 68
41 Modeling of Conjugate Heat Transfer including Radiation in a Kerosene/Air Certification Burner

Authors: Lancelot Boulet, Pierre Benard, Ghislain Lartigue, Vincent Moureau, Nicolas Chauvet, Sheddia Didorally

Abstract:

International aeronautic standards demand a fire certification for engines that demonstrates their resistance. This demonstration relies on tests performed with prototype engines in the late stages of development. The hardest tests require placing a standardized kerosene flame in front of the engine casing for a given time with imposed temperature and heat flux. The purpose of this work is to provide a better characterization of a kerosene/air certification burner in order to minimize the risks of test failure. A first Large-Eddy Simulation (LES) study made it possible to model and simulate this burner, including both adiabatic and Conjugate Heat Transfer (CHT) computations. Carried out on unstructured grids with 40 million tetrahedral cells, using the finite-volume YALES2 code, spray combustion, forced convection on the walls and conduction in the solid parts of the burner were coupled to achieve a detailed description of heat transfer. It highlighted the fact that conduction inside the solid has a real impact on the flame topology and the combustion regime. However, in the absence of radiative heat transfer, unrealistic equipment temperatures were obtained. The aim of the present study is to include radiative heat transfer in order to reach the temperatures given by experimental measurements. First, various test cases are conducted to validate the coupling between the different heat solvers. Then, the adiabatic case, the CHT case, and the CHT case including radiative transfer are studied and compared. The LES model is finally applied to investigate the heat transfer in a flame impaction configuration. The aim is to progress in fire test modeling so as to reach a good confidence level as far as the success of the certification test is concerned.

Keywords: conjugate heat transfer, fire resistance test, large-eddy simulation, radiative transfer, turbulent combustion

Procedia PDF Downloads 223
40 Self-Assembled Laser-Activated Plasmonic Substrates for High-Throughput, High-Efficiency Intracellular Delivery

Authors: Marinna Madrid, Nabiha Saklayen, Marinus Huber, Nicolas Vogel, Christos Boutopoulos, Michel Meunier, Eric Mazur

Abstract:

Delivering material into cells is important for a diverse range of biological applications, including gene therapy, cellular engineering and imaging. We present a plasmonic substrate for delivering membrane-impermeable material into cells at high throughput and high efficiency while maintaining cell viability. The substrate fabrication is based on an affordable and fast colloidal self-assembly process. When illuminated with a femtosecond laser, the light interacts with the electrons at the surface of the metal substrate, creating localized surface plasmons that form bubbles via energy dissipation in the surrounding medium. These bubbles come into close contact with the cell membrane to form transient pores and enable entry of membrane-impermeable material via diffusion. We use fluorescence microscopy and flow cytometry to verify delivery of membrane-impermeable material into HeLa CCL-2 cells. We show delivery efficiency and cell viability data for a range of membrane-impermeable cargo, including dyes and biologically relevant material such as siRNA. We estimate the effective pore size by determining the delivery efficiency for hard fluorescent spheres with diameters ranging from 20 nm to 2 µm. To provide insight into the cell poration mechanism, we relate the poration data to pump-probe measurements of micro- and nano-bubble formation on the plasmonic substrate. Finally, we investigate substrate stability and reusability by using scanning electron microscopy (SEM) to inspect the substrate for damage after laser treatment. SEM images show no visible damage. Our findings indicate that self-assembled plasmonic substrates are an affordable tool for high-throughput, high-efficiency delivery of material into mammalian cells.

Keywords: femtosecond laser, intracellular delivery, plasmonic, self-assembly

Procedia PDF Downloads 531
39 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”

Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen

Abstract:

Exoplanet atmospheric parameter retrieval is a complex, computationally intensive, inverse modeling problem in which an exoplanet's atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, involving algorithms that compare large numbers of known atmospheric models to the input spectral data. Runtimes are directly proportional to the number of parameters under consideration. These increased power and runtime requirements are difficult to accommodate in space missions, where model size, speed, and power consumption are of particular importance. The use of traditional Bayesian sampling methods therefore compromises model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on the speed and accuracy of previous models. We demonstrate the efficacy of artificial intelligence to quickly and reliably predict atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to function on low-power application-specific integrated circuits. The application of edge computing to atmospheric retrievals allows for real or near-real-time quantification of atmospheric constituents at the instrument level. Additionally, edge computing provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions. With the edge computing chip implementation, ARcGAN serves as a strong basis for the development of a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.
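
As a generic illustration of the conditional-GAN structure such a retrieval network builds on (layer sizes, spectrum length and parameter count below are placeholders, not the ARcGAN architecture), a minimal PyTorch sketch might be:

```python
import torch
import torch.nn as nn

# Minimal conditional GAN sketch (placeholder sizes, not the ARcGAN architecture):
# the generator proposes atmospheric parameters conditioned on an observed
# spectrum; the discriminator judges (spectrum, parameters) pairs.
SPEC_LEN, N_PARAMS, NOISE = 64, 5, 16   # hypothetical dimensions

gen = nn.Sequential(
    nn.Linear(SPEC_LEN + NOISE, 128), nn.ReLU(),
    nn.Linear(128, N_PARAMS), nn.Sigmoid(),      # parameters scaled to [0, 1]
)
disc = nn.Sequential(
    nn.Linear(SPEC_LEN + N_PARAMS, 128), nn.ReLU(),
    nn.Linear(128, 1), nn.Sigmoid(),             # probability that the pair is real
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

# One illustrative training step on a random batch.
spectra = torch.rand(32, SPEC_LEN)               # stand-in for observed spectra
true_params = torch.rand(32, N_PARAMS)           # stand-in for known retrievals
noise = torch.randn(32, NOISE)
fake_params = gen(torch.cat([spectra, noise], dim=1))

# Discriminator: real pairs -> 1, generated pairs -> 0.
d_loss = bce(disc(torch.cat([spectra, true_params], dim=1)), torch.ones(32, 1)) + \
         bce(disc(torch.cat([spectra, fake_params.detach()], dim=1)), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator: fool the discriminator into labelling generated pairs as real.
g_loss = bce(disc(torch.cat([spectra, fake_params], dim=1)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```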

Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval

Procedia PDF Downloads 171
38 Multiphase Flow Regime Detection Algorithm for Gas-Liquid Interface Using Ultrasonic Pulse-Echo Technique

Authors: Serkan Solmaz, Jean-Baptiste Gouriet, Nicolas Van de Wyer, Christophe Schram

Abstract:

The efficiency of the cooling process for cryogenic propellants boiling in engine cooling channels in space applications is strongly affected by the phase change that occurs during boiling. The effectiveness of the cooling process strongly depends on the type of boiling regime, such as nucleate or film boiling. Geometric constraints, such as a non-transparent cooling channel, make it impossible to use visualization methods. The ultrasonic (US) technique, as a non-destructive testing (NDT) method, has therefore been applied in almost every engineering field for different purposes. Basically, discontinuities emerge at the boundaries between mediums, such as different phases. The sound wave emitted by the US transducer is both transmitted and reflected at a gas-liquid interface, which makes it possible to detect different phases. Due to thermal and structural concerns, it is impractical to maintain direct contact between the US transducer and the working fluid. Hence the transducer should be located outside of the cooling channel, which introduces additional interfaces and creates ambiguities regarding the applicability of the present method. In this work, exploratory research was carried out to determine the detection ability and applicability of the US technique to the cryogenic boiling process in a cooling cycle where the US transducer is placed outside of the channel. Boiling of cryogenics is a complex phenomenon whose thermal properties bring several hindrances to the experimental protocol. Thus, substitute materials were purposefully selected on the basis of such parameters to simplify the experiments. In addition, the nucleate and film boiling regimes emerging during the boiling process were simply simulated using non-deformable stainless steel balls, air-bubble injection apparatuses and air clearances instead of conducting a real-time boiling process. A versatile detection algorithm was then developed on the basis of these exploratory studies. With the algorithm developed, the phases can be distinguished with 99% accuracy as no-phase, air-bubble, and air-film presences. The results show the detection ability and applicability of the US technique for this exploratory purpose.
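
As a simplified illustration of the kind of pulse-echo classification described above (the synthetic echo signal and thresholds are assumptions, not the algorithm developed in this work):

```python
import numpy as np
from scipy.signal import hilbert

# Minimal sketch of a pulse-echo regime classifier (synthetic signal and
# thresholds are illustrative, not the algorithm developed in the paper).
fs = 10e6                               # 10 MHz sampling rate (arbitrary)
t = np.arange(0, 200e-6, 1 / fs)

def echo(amplitude, delay_us):
    """A damped 1 MHz burst arriving after delay_us microseconds."""
    tt = t - delay_us * 1e-6
    return amplitude * np.exp(-(tt / 5e-6) ** 2) * np.sin(2e6 * np.pi * tt) * (tt > 0)

signal = echo(1.0, 20) + echo(0.6, 90)   # wall echo + interface echo (synthetic)

envelope = np.abs(hilbert(signal))       # echo envelope
# Ignore the first (wall) echo and look at the strongest later reflection.
peak = envelope[t > 50e-6].max()

if peak < 0.1:
    regime = "no phase change (liquid only)"
elif peak < 0.8:
    regime = "air bubbles at the interface"
else:
    regime = "air film (strong reflection)"
print(regime, round(peak, 2))
```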

Keywords: ultrasound, ultrasonic, multiphase flow, boiling, cryogenics, detection algorithm

Procedia PDF Downloads 170
37 Cognitive Dissonance in Robots: A Computational Architecture for Emotional Influence on the Belief System

Authors: Nicolas M. Beleski, Gustavo A. G. Lugo

Abstract:

Robotic agents are taking on more numerous and increasingly important roles in society. In order to make these robots and agents more autonomous and efficient, their systems have grown considerably complex and convoluted. This growth in complexity has led researchers to investigate ways to explain the AI behavior behind these systems, in search of more trustworthy interactions. A current problem in explainable AI concerns the inner workings of the logic inference process and how to conduct a sensitivity analysis of the process of valuation and alteration of beliefs. In a social HRI (human-robot interaction) setup, theory of mind is crucial to ease the intentionality gap, and to achieve that we should be able to make inferences over observed human behaviors, such as cases of cognitive dissonance. One specific case, inspired by human cognition, is the role emotions play in our belief system and the effects caused when observed behavior does not match the expected outcome. In such scenarios, emotions can make a person wrongly assume the antecedent P for an observed consequent Q and, as a result, incorrectly assert that P is true. This form of cognitive dissonance, where an unproven cause is taken as truth, induces changes in the belief base which can directly affect future decisions and actions. If we aim to draw inspiration from human thought in order to endow these artificial agents with levels of theory of mind, we must find the conditions to replicate these observable cognitive mechanisms. To achieve this, a computational architecture is proposed to model the modulating effect emotions have on the belief system, how it affects the logic inference process, and consequently the decision making of an agent. To validate the model, an experiment based on the prisoner's dilemma is currently under development. The hypothesis to be tested involves two main points: how emotions, modeled as internal argument strength modulators, can alter inference outcomes, and how explainable outcomes can be produced under specific forms of cognitive dissonance.
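
As a toy illustration of the modulation idea described above, an emotion raising the internal strength of an argument until the agent wrongly asserts the antecedent P from an observed consequent Q (purely illustrative, not the proposed architecture):

```python
# Toy sketch of emotion-modulated abductive inference (illustrative only, not the
# proposed cognitive architecture): observing Q, the agent considers asserting P
# from the rule "P implies Q"; an emotion modulates the argument's strength.

BELIEF_THRESHOLD = 0.7      # strength needed to add a belief to the belief base

def infer_antecedent(base_strength: float, emotion_modulator: float) -> bool:
    """Return True if the agent (ab)ductively asserts P after observing Q."""
    strength = min(1.0, base_strength * emotion_modulator)
    return strength >= BELIEF_THRESHOLD

# Calm agent: the evidence alone (P explains Q with strength 0.5) is not enough.
print(infer_antecedent(base_strength=0.5, emotion_modulator=1.0))   # False

# Anxious agent: the emotion amplifies the argument and P is wrongly asserted,
# i.e. a cognitive-dissonance-like error of affirming the consequent.
print(infer_antecedent(base_strength=0.5, emotion_modulator=1.6))   # True
```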

Keywords: cognitive architecture, cognitive dissonance, explainable AI, sensitivity analysis, theory of mind

Procedia PDF Downloads 132
36 Application of Raman Spectroscopy for Ovarian Cancer Detection: Comparative Analysis of Fresh, Formalin-Fixed, and Paraffin-Embedded Samples

Authors: Zeinab Farhat, Nicolas Errien, Romuald Wernert, Véronique Verriele, Frédéric Amiard, Philippe Daniel

Abstract:

Ovarian cancer, also known as the silent killer, is the fifth most common cancer among women worldwide, and its death rate is higher than that of other gynecological cancers. The low survival rate of women with high-grade serous ovarian carcinoma highlights the critical need for new methods for early detection and diagnosis of the disease. The aim of this study was to evaluate whether Raman spectroscopy combined with chemometric methods such as Principal Component Analysis (PCA) could differentiate between cancerous and normal tissues across different sample types: paraffin-embedded, chemically deparaffinized, formalin-fixed and fresh samples of the same normal and malignant ovarian tissue. The method was applied specifically to two critical spectral regions: the signature region (860-1000 cm⁻¹) and the high-frequency region (2800-3100 cm⁻¹). The mean spectra of paraffin-embedded normal and malignant tissues showed almost similar intensities. On the other hand, the mean spectra of normal and cancer tissues from chemically deparaffinized, formalin-fixed, and fresh samples showed significant intensity differences. These spectral differences reflect variations in the molecular composition of the tissues, particularly lipids and proteins. PCA, applied to distinguish between cancer and normal tissues, was performed on the whole spectra and on the selected regions. The PCA score plot of the paraffin-embedded samples shows considerable overlap between the two groups. However, the PCA scores of the chemically deparaffinized, formalin-fixed, and fresh samples showed good discrimination of the tissue types. Our findings were validated by analyzing a set of samples whose status (normal or cancerous) was not previously known. The results of this study suggest that Raman spectroscopy associated with PCA methods has the capacity to provide clinically significant differentiation between normal and cancerous ovarian tissues.
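
As an illustration of the PCA step applied to spectra over a selected wavenumber window (synthetic spectra standing in for the measured Raman data):

```python
import numpy as np
from sklearn.decomposition import PCA

# Minimal sketch of PCA-based discrimination of spectra (synthetic data standing
# in for measured Raman spectra of normal vs. cancerous tissue).
rng = np.random.default_rng(0)
wavenumbers = np.linspace(2800, 3100, 300)          # high-frequency region, cm^-1

def spectrum(band_intensity):
    """Synthetic spectrum: a CH-stretch band whose intensity differs by class."""
    band = band_intensity * np.exp(-((wavenumbers - 2930) / 25) ** 2)
    return band + 0.05 * rng.normal(size=wavenumbers.size)

normal = np.array([spectrum(1.0) for _ in range(20)])
cancer = np.array([spectrum(0.7) for _ in range(20)])
X = np.vstack([normal, cancer])

scores = PCA(n_components=2).fit_transform(X)       # project onto first two PCs
print("mean PC1 score, normal:", scores[:20, 0].mean().round(2),
      "| cancer:", scores[20:, 0].mean().round(2))  # the groups separate along PC1
```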

Keywords: Raman spectroscopy, ovarian cancer, signal processing, Principal Component Analysis, classification

Procedia PDF Downloads 30
35 Multi-omics Integrative Analysis with Genome-Scale Metabolic Model Simulation Reveals Reaction Essentiality Data in Human Astrocytes Under the Lipotoxic Effect of Palmitic Acid

Authors: Janneth Gonzalez, Andres Pinzon Velasco, Maria Angarita, Nicolas Mendoza

Abstract:

Astrocytes play an important role in various processes in the brain, including pathological conditions such as neurodegenerative diseases. Recent studies have shown that an increase in saturated fatty acids such as palmitic acid (PA) triggers pro-inflammatory pathways in the brain. The use of synthetic neurosteroids such as tibolone has demonstrated neuro-protective mechanisms. However, there are few studies on the neuro-protective mechanisms of tibolone, especially at the systemic (omic) level. In this study, we integrated multi-omic data (transcriptome and proteome) into a human astrocyte genome-scale metabolic model to study the astrocytic response during palmitate treatment. We evaluated metabolic fluxes in three scenarios (healthy, inflammation induced by PA, and tibolone treatment under PA-induced inflammation). We also used control theory to identify the reactions that control the astrocytic system. Our results suggest that PA modulates central and secondary metabolism, showing a change in energy source use through inhibition of the folate cycle and fatty acid β-oxidation and upregulation of ketone body formation. We found 25 metabolic switches under PA-mediated cellular regulation, 9 of which were critical only in the inflammatory scenario but not in the protective tibolone one. Within these reactions, inhibitory, total, and directional coupling profiles were key findings, playing a fundamental role in the (de)regulation of metabolic pathways that increase neurotoxicity and representing potential treatment targets. Finally, this study framework facilitates the understanding of metabolic regulation strategies, and it can be used to explore in silico the mechanisms of astrocytic cell regulation, directing more complex future experimental work on neurodegenerative diseases.
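
The reaction-essentiality screening mentioned above is typically performed by deleting reactions in the genome-scale model and re-optimizing it; a hedged sketch with the COBRApy package (not necessarily the tool used in the study, and with a placeholder model file) could look like:

```python
import cobra
from cobra.flux_analysis import single_reaction_deletion

# Hedged sketch of reaction-essentiality screening on a genome-scale metabolic
# model with COBRApy (the SBML file name is a placeholder; the actual study also
# constrains the model with transcriptomic and proteomic data before this step).
model = cobra.io.read_sbml_model("astrocyte_model.xml")   # placeholder path

wild_type_growth = model.optimize().objective_value

# Knock out each reaction in turn and record the predicted objective value.
deletions = single_reaction_deletion(model)
essential = deletions[deletions["growth"].fillna(0.0) < 0.05 * wild_type_growth]
print(f"{len(essential)} reactions are essential under this condition")
```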

Keywords: astrocytes, data integration, palmitic acid, computational model, multi-omics, control theory

Procedia PDF Downloads 121
34 Maresin Like 1 Treatment: Curbing the Pathogenesis of Behavioral Dysfunction and Neurodegeneration in Alzheimer's Disease Mouse Model

Authors: Yan Lu, Song Hong, Janakiraman Udaiyappan, Aarti Nagayach, Quoc-Viet A. Duong, Masao Morita, Shun Saito, Yuichi Kobayashi, Yuhai Zhao, Hongying Peng, Nicholas B. Pham, Walter J. Lukiw, Christopher A. Vuong, Nicolas G. Bazan

Abstract:

Aims: Neurodegeneration and behavioral dysfunction occur in patients with Alzheimer's Disease (AD), and as the disease progresses many patients develop cognitive impairment. The 5XFAD mouse model of AD is widely used to study AD pathogenesis and treatment. This study aimed to investigate the effect of maresin-like 1 (MaR-L1) treatment on AD pathology using 5XFAD mice. Methods: We tested 12-month-old male 5XFAD mice and wild-type control mice treated with MaR-L1 in a battery of behavioral tasks. We performed the open field test, beam walking test, clasping test, inverted grid test, acetone test, marble burying test, elevated plus maze test, cross maze test and novel object recognition test. We also studied neuronal loss, amyloid β burden, and inflammation in the brains of 5XFAD mice using immunohistology and Western blotting. Results: MaR-L1 treatment improved the cognitive function of 5XFAD mice. MaR-L1-treated mice showed decreased anxiety behavior in the open field and marble burying tests and increased muscular strength in the beam walking, clasping and inverted grid tests. Cognitive function was improved in MaR-L1-treated 5XFAD mice in the novel object recognition test. MaR-L1 prevented neuronal loss and aberrant inflammation. Conclusion: Our findings suggest that behavioral abnormalities were normalized by the administration of MaR-L1 and that MaR-L1 plays a neuroprotective role in AD. They also indicate that MaR-L1 treatment is able to prevent and/or ameliorate neuronal loss and aberrant inflammation. Further experiments to validate these results using other AD models are warranted.

Keywords: Alzheimer's disease, motor and cognitive behavior, 5XFAD mice, Maresin Like 1, microglial cell, astrocyte, neurodegeneration, inflammation, resolution of inflammation

Procedia PDF Downloads 179
33 Evaluation of the Surveillance System for Rift Valley Fever in Ruminants in Mauritania, 2019

Authors: Mohamed El Kory Yacoub, Ahmed Bezeid El Mamy Beyatt, Djibril Barry, Yanogo Pauline, Nicolas Meda

Abstract:

Introduction: Rift Valley Fever (RVF) is a zoonotic arbovirosis that severely affects ruminants, as well as humans. It causes abortions in pregnant females and deaths in young animals. The disease occurs during heavy rains, which are followed by large numbers of mosquito vectors. The objective of this work is to evaluate the surveillance system for Rift Valley Fever. Methods: We conducted an evaluation of the RVF surveillance system. Data were collected from the analysis of the national database of the Mauritanian Network of Animal Disease Epidemiological Surveillance at the Ministry of Rural Development, from RVF cases notified across the whole national territory, and from questionnaires and interviews with all persons involved in RVF surveillance at the central level. The quality of the system was assessed by analyzing the quantitative attributes defined by the Centers for Disease Control and Prevention. Results: In 2019, 443 cases of RVF were notified by the surveillance system, of which 36 were positive. Among the notified cases, the 0-to-3-year-old age group of small ruminants was the most represented, with 49.21% of cases, followed by the 0-to-7-year-old age group of large ruminants with 33.33%; 11.11% of cases were older than seven years. The completeness of the data varied between 14.2% (age) and 100% (species). Most positive cases were recorded between October and November 2019 in seven different regions. The attribute analysis showed that 87% of the respondents were able to use the case definition correctly, and 78.8% said they were familiar with the reporting and feedback loop of the RVF data. 90.3% of the respondents found the system easy to use, and 95% responded that it was easy for them to transmit their data to the next level. Conclusions: The epidemiological surveillance system for Rift Valley Fever in Mauritania is simple and representative. However, data quality, stability, and responsiveness are average, as the diagnosis of the disease requires laboratory confirmation and the average delay for this confirmation is long (13 days). Consequently, the lack of completeness of the recorded data and of the description of cases in terms of time, place and animal, together with the delays between the stages of the surveillance system, can make prevention, early detection of epidemics, and the initiation of measures for an adequate response difficult.

Keywords: evaluation, epidemiological surveillance system, rift valley fever, mauritania, ruminants

Procedia PDF Downloads 149
32 Seismic Retrofit of Tall Building Structure with Viscous, Visco-Elastic, Visco-Plastic Damper

Authors: Nicolas Bae, Theodore L. Karavasilis

Abstract:

Increasingly, a large number of new and existing tall buildings are required to improve their resilience against strong winds and earthquakes in order to minimize direct, as well as indirect, damage to society. A loss of the essential functions of tall building structures in metropolitan regions can be severely hazardous in socio-economic terms, which further increases the requirement for advanced seismic performance. To achieve these progressive requirements, seismic reinforcement of some old, conventional buildings has become enormously costly. The methods for increasing buildings' resilience against wind or earthquake loads have also become more advanced. Up to now, vibration control devices, such as passive damper systems, are still regarded as an effective and easy-to-install option for improving the seismic resilience of buildings at an affordable price. The main purpose of this paper is to examine 1) the optimization of the shape of the visco-plastic brace damper (VPBD) system, a hybrid damper system, so that it maximizes its energy dissipation capacity in tall buildings against wind and earthquakes, and 2) the verification of the seismic performance of the visco-plastic brace damper system in tall buildings, up to forty-storey steel frame buildings, by comparing the results of Non-Linear Response History Analysis (NLRHA) with and without the damper system. The most significant contribution of this research is to introduce an optimized hybrid damper system that is adequate for high-rise buildings. The efficiency of this visco-plastic brace damper system and the advantages of its use in tall buildings can be verified, since tall buildings tend to be governed by wind load in their normal state and by earthquake load after yielding of the steel plates. The prototype tall building is modeled using the OpenSees software. Three types of models were used to verify the performance of the damper (MRF only, MRF with visco-elastic dampers, and MRF with visco-plastic dampers), 22 seismic records were used, and the scaling procedure followed the FEMA code. It is shown that the MRF with viscous or visco-elastic dampers is markedly more effective in reducing inelastic deformations, such as roof displacement, maximum story drift, and roof velocity, than the MRF alone.
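
As a simple, self-contained illustration of why supplemental viscous damping reduces peak response (a single-degree-of-freedom idealization with arbitrary properties, not the forty-storey OpenSees model used in the study):

```python
import numpy as np
from scipy.integrate import odeint

# SDOF illustration of supplemental viscous damping (arbitrary properties, not the
# forty-storey OpenSees model): compare peak displacement under a pulse-like
# harmonic ground acceleration for 2% and 20% of critical damping.
m, k = 1.0e5, 4.0e6                 # kg, N/m (hypothetical storey-like values)
ag = lambda t: 3.0 * np.sin(2 * np.pi * 1.0 * t) * (t < 5.0)   # ground accel., m/s^2

def peak_drift(zeta):
    c = 2 * zeta * np.sqrt(k * m)   # damping constant for the given damping ratio
    def rhs(y, t):
        x, v = y
        return [v, -(c * v + k * x) / m - ag(t)]
    t = np.linspace(0, 20, 4000)
    x = odeint(rhs, [0.0, 0.0], t)[:, 0]
    return np.abs(x).max()

print("peak displacement, 2% damping :", round(peak_drift(0.02), 4), "m")
print("peak displacement, 20% damping:", round(peak_drift(0.20), 4), "m")
```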

Keywords: tall steel building, seismic retrofit, viscous, viscoelastic damper, performance based design, resilience based design

Procedia PDF Downloads 193
31 An Analysis of Emmanuel Macron's Campaign Discourse

Authors: Robin Turner

Abstract:

In the context of strengthening conservative movements such as “Brexit” and the election of US President Donald Trump, the global political stage was shaken up by the election of Emmanuel Macron to the French presidency, defeating the far-right candidate Marine Le Pen. The election itself was a first for the Fifth Republic in which neither final candidate came from the traditional two major political parties: the left-wing Parti Socialiste (PS) and the right-wing Les Républicains (LR). Macron, who served as the Minister of Finance under his predecessor, founded the centrist liberal political party En Marche! in April 2016 before resigning from his post in August to launch his bid for the presidency. Between the party's creation and the first round of elections a year later, Emmanuel Macron and En Marche! had garnered enough support to reach the run-off election, finishing far ahead of many seasoned national political figures. Now months into his presidency, the youngest President of the Republic shows no sign of losing momentum anytime soon. His unprecedented success raises many questions with respect to international relations, economics, and the evolving relationship between the French government and its citizens. The effectiveness of Macron's campaign, of course, relies on many factors, one of which is his manner of communicating his platform to French voters. Using data from oral discourse and primary material from Macron and En Marche! in sources such as party publications and Twitter, the study categorizes linguistic instruments – address, lexicon, tone, register, and syntax – to identify prevailing patterns of speech and communication. The linguistic analysis in this project is two-fold. First, in addition to their stand-alone value, these discourse patterns are contextualized against the comparable discourse of other 2017 presidential candidates, with particular emphasis on that of Marine Le Pen. Second, to provide an alternative approach, the study contextualizes Macron's discourse using that of two immediate predecessors representing the traditional stronghold political parties, François Hollande (PS) and Nicolas Sarkozy (LR). These comparative methods produce an analysis that gives insight not only into a contributing factor to Macron's successful 2017 campaign but also into how Macron's platform presents itself differently from previous presidential platforms. Furthermore, this study supplies data that contributes to a wider analysis of the defeat of “traditional” French political parties by the “start-up” movement En Marche!.

Keywords: Emmanuel Macron, French, discourse analysis, political discourse

Procedia PDF Downloads 262
30 Syntheses in Polyol Medium of Inorganic Oxides with Various Smart Optical Properties

Authors: Shian Guan, Marie Bourdin, Isabelle Trenque, Younes Messaddeq, Thierry Cardinal, Nicolas Penin, Issam Mjejri, Aline Rougier, Etienne Duguet, Stephane Mornet, Manuel Gaudon

Abstract:

At the interface of the studies performed by three Ph.D. students, Shian Guan (2017-2020), Marie Bourdin (2016-2019) and Isabelle Trenque (2012-2015), a single synthesis route, the polyol-mediated process, was used successfully for the preparation of different inorganic oxides. All of these inorganic oxides were elaborated for their potential application as smart optical compounds. This synthesis route has allowed us to develop nanoparticles of zinc oxide, vanadium oxide and tungsten oxide. The route is easy to implement, inexpensive and suitable for large-scale production, and it leads to materials of high purity. Obtaining nanometric yet perfectly crystalline particles by this route has notably made it possible to dope these matrix materials with high doping ion concentrations (high solubility limits). Thus, Al3+- or Ga3+-doped ZnO powder, with a high doping rate in comparison with the literature, exhibits remarkable infrared absorption properties thanks to its high free carrier density. Note also that, due to the narrow particle size distribution of the as-prepared nanometric doped ZnO powder, an original correlation between crystallite size and unit-cell parameters has been established. Also, depending on the annealing atmosphere used to treat the vanadium precursors, VO2, V2O3 or V2O5 oxides with thermochromic or electrochromic properties can be obtained without any impurity, despite the versatility of the oxidation state of vanadium. This is of particular interest for vanadium dioxide, a relatively difficult-to-prepare oxide whose first-order metal-insulator phase transition is widely explored in the literature for its thermochromic behavior (in smart windows with optimal thermal insulation). Finally, the reducing nature of the polyol solvents ensures the production of oxygen-deficient tungsten oxide, thus conferring on the nano-powders exotic colorimetric properties as well as optimized photochromic and electrochromic behaviors.

Keywords: inorganic oxides, electrochromic, photochromic, thermochromic

Procedia PDF Downloads 221
29 Music Listening in Dementia: Current Developments and the Potential for Automated Systems in the Home: Scoping Review and Discussion

Authors: Alexander Street, Nina Wollersberger, Paul Fernie, Leonardo Muller, Ming Hung HSU, Helen Odell-Miller, Jorg Fachner, Patrizia Di Campli San Vito, Stephen Brewster, Hari Shaji, Satvik Venkatesh, Paolo Itaborai, Nicolas Farina, Alexis Kirke, Sube Banerjee, Eduardo Reck Miranda

Abstract:

Escalating neuropsychiatric symptoms (NPS) in people with dementia may lead to earlier care home admission. Music listening has been reported to stimulate cognitive function, potentially reducing agitation in this population. We present a scoping review reporting on current developments and discussing the potential for music listening with related technology in managing agitation in dementia care. Two searches for music listening studies were run: the first focused on older people or people living with dementia, where music listening interventions, including technology, were delivered in participants' homes or in institutions to address neuropsychiatric symptoms, quality of life and independence; the second included any population and focused on the use of music technology for health and wellbeing. In search one, 70 of 251 full texts were included. The majority reported either statistical significance (6, 8.5%), significance (17, 24.2%) or improvements (26, 37.1%). Agitation was specifically reported in 36 (51.4%). The second search included 51 of 99 full texts, reporting improvement (28, 54.9%), significance (11, 21.5%), statistical significance (1, 1.9%) and no difference compared to the control (6, 11.7%). The majority of studies in the first search focused on mood and agitation, and in the second on mood and psychophysiological responses. Five studies used AI or machine learning systems to select music, all involving healthy controls and reporting benefits. Most studies in both searches were not conducted in a home environment (review 1: 12, 17.1%; review 2: 11, 21.5%). Preferred music listening may help manage NPS in care home settings. Based on these and other data extracted in the review, a reasonable progression would be to co-design and test music listening systems and protocols for NPS in all settings, including people's homes. Machine learning and automated technology for music selection and arousal adjustment, driven by live biodata, have not been explored in dementia care. Such approaches may help deliver the right music at the appropriate time in the required dosage, reducing the use of medication and improving quality of life.

Keywords: music listening, dementia, agitation, scoping review, technology

Procedia PDF Downloads 115
28 A Comparison of Direct Water Injection with a Membrane Humidifier for Proton Exchange Membrane Fuel Cell Humidification

Authors: Flavien Marteau, Pedro Affonso Nóbrega, Pascal Biwole, Nicolas Autrusson, Iona De Bievre, Christian Beauger

Abstract:

Effective water management is essential for the optimal performance of fuel cells. For this reason, many vehicle systems use a membrane humidifier, a passive device that humidifies the air before the cathode inlet. Although they offer good performance, humidifiers are voluminous, costly, and fragile, hence the desire to find an alternative. Direct water injection could be an option, although this method lacks maturity. It consists of injecting liquid water as a spray into the dry, heated air coming out of the compressor. This work focuses on the evaluation of direct water injection and its performance compared to a membrane humidifier selected as a reference. Two architectures were experimentally tested to humidify an industrial 2 kW short stack made up of 20 cells of 150 cm² each. For the reference architecture, the inlet air is humidified with a commercial membrane humidifier. For the direct water injection architecture, a pneumatic nozzle was selected to generate a fine spray in the air flow with a Sauter mean diameter of about 20 μm. Initial performance was compared over the entire current range on the basis of polarisation curves. Then, the influence of various parameters impacting water management was studied, such as the temperature, the gas stoichiometry, and the water injection flow rate. The experimental results obtained confirm the possibility of humidifying the fuel cell using direct water injection. This study, however, shows the limits of this humidification method, the mean cell voltage being significantly lower under some operating conditions with direct water injection than with the membrane humidifier. The voltage drop reaches 30 mV per cell (4%) at 1 A/cm² (1.8 bara, 80 °C) and increases under more demanding humidification conditions. It is noteworthy that the available heat of compression is not enough to evaporate all the injected liquid water in the case of direct water injection, resulting in a mix of liquid and vapour water entering the fuel cell, whereas only vapour is present with the humidifier. Variation of the injection flow rate shows that part of the injected water is useless for humidification and seems to cross the channels without reaching the membrane. The stack was successfully humidified thanks to direct water injection. Nevertheless, our work shows that its implementation requires substantial adaptations and may reduce fuel cell stack performance compared to conventional membrane humidifiers, although opportunities for optimisation have been identified.
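
The limitation noted above, namely that the heat of compression cannot evaporate all of the injected water, can be checked with a simple energy balance; the sketch below uses round illustrative numbers (air flow, temperatures and injection rate are assumptions, not the test bench conditions):

```python
# Back-of-the-envelope energy balance for direct water injection (all numbers are
# illustrative assumptions, not the test-bench conditions): compare the sensible
# heat released by cooling the compressed air with the heat needed to evaporate
# the injected water.
cp_air = 1005.0          # J/(kg.K), specific heat of air
h_evap = 2.26e6          # J/kg, latent heat of vaporization of water

air_flow = 0.004         # kg/s of cathode air (assumed)
t_compressor_out = 170   # C, air temperature after compression (assumed)
t_cathode_in = 80        # C, target stack inlet temperature (assumed)
water_injected = 0.5e-3  # kg/s of liquid water sprayed (assumed)

heat_available = air_flow * cp_air * (t_compressor_out - t_cathode_in)
heat_needed = water_injected * h_evap

print(f"heat available from the hot air   : {heat_available:8.0f} W")
print(f"heat needed to evaporate the water: {heat_needed:8.0f} W")
print("fraction of water that can evaporate:",
      round(min(1.0, heat_available / heat_needed), 2))
```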

Keywords: cathode humidification, direct water injection, membrane humidifier, proton exchange membrane fuel cell

Procedia PDF Downloads 45
27 Drying Shrinkage of Concrete: Scale Effect and Influence of Reinforcement

Authors: Qier Wu, Issam Takla, Thomas Rougelot, Nicolas Burlion

Abstract:

In the framework of the French underground disposal of intermediate-level radioactive waste, concrete is widely used as a construction material for containers and tunnels. Drying shrinkage is one of the most disadvantageous phenomena for concrete structures. Cracks generated by differential shrinkage can impair the mechanical behavior, increase the permeability of concrete and act as preferential paths for aggressive species, hence leading to an overall decrease in durability and serviceability. It is of great interest to understand the drying shrinkage phenomenon in order to predict and even control the strains of concrete. The question is whether the results obtained from laboratory samples are in accordance with measurements on a real structure. Another question concerns the influence of reinforcement on the drying shrinkage of concrete. As part of a global project with Andra (the French National Radioactive Waste Management Agency), the present study aims to experimentally investigate the scale effect as well as the influence of reinforcement on the development of drying shrinkage in two high-performance concretes (based on CEM I and CEM V cements, according to European standards). Various sizes of samples were chosen, from ordinary laboratory specimens up to real-scale specimens: prismatic specimens with different volume-to-surface (V/S) ratios, thin slices (2 mm thick), cylinders of different diameters (37 and 160 mm), hollow cylinders, cylindrical columns (1000 mm high) and square columns (320×320×1000 mm). The square columns were manufactured with different reinforcement ratios and can be considered mini-structures that approximate the behavior of a real voussoir from the waste disposal facility. All samples are kept, in a first stage, at 20°C and 50% relative humidity (the initial conditions in the tunnel) in a specific climatic chamber developed by the Laboratory of Mechanics of Lille. The mass evolution and drying shrinkage are monitored regularly. The results obtained show that specimen size has a great impact on the water loss and drying shrinkage of concrete: specimens with a smaller V/S ratio and a smaller size exhibit greater drying shrinkage. The correlation between mass variation and drying shrinkage follows the same trend for all specimens in spite of the size differences. However, the influence of the reinforcement ratio on drying shrinkage is not clear from the present results. The second stage of conservation (50°C and 30% relative humidity) could provide additional results on these influences.

Keywords: concrete, drying shrinkage, mass evolution, reinforcement, scale effect

Procedia PDF Downloads 185
26 The Production of Reinforced Insulation Bricks out of the Concentration of Ganoderma lucidum Fungal Inoculums and Cement Paste

Authors: Jovie Esquivias Nicolas, Ron Aldrin Lontoc Austria, Crisabelle Belleza Bautista, Mariane Chiho Espinosa Bundalian, Owwen Kervy Del Rosario Castillo, Mary Angelyn Mercado Dela Cruz, Heinrich Theraja Recana De Luna, Chriscell Gipanao Eustaquio, Desiree Laine Lauz Gilbas, Jordan Ignacio Legaspi, Larah Denise David Madrid, Charles Linelle Malapote Mendoza, Hazel Maxine Manalad Reyes, Carl Justine Nabora Saberdo, Claire Mae Rendon Santos

Abstract:

In response to the global race to discover the next advanced sustainable material that will reduce our ecological footprint, the researchers aimed to create a masonry unit suitable for physical edifices and other construction applications. Previous research has shown that mycelium, when dried, can be used as a robust and waterproof building material that can be grown into explicit forms, thus reducing processing requirements. To test fungi's structural qualities and absorbency, the researchers performed comparative analyses of mycelium bricks made from mushroom spores of G. lucidum. Three treatments were designed to identify the most suitable concentration of clay and substrate fixings. The substrate bags fixed with 30% clay and 70% mixings showed the highest frequency of full colonization by fungal mycelia. Subsequently, sorted white portions from this treatment were settled in a thermoplastic mold and burnt. Three proportional concentrations of cultivated substrate and cement were also prepared in order to compare the weights of the bricks in the water absorption test and the durability test. Fungal inoculums combined with cement solutions showed only small to moderate decreases and increases in load. This shows that the treatments did not differ significantly in strength, efficiency or absorption capacity: each concentration is equally valid and could be used to support the worldwide demand for bricks while also taking the recovery of our environment into consideration.

Keywords: mycelium, fungi, fungal mycelia, durability test, water absorption test

Procedia PDF Downloads 136
25 Data Analysis Tool for Predicting Water Scarcity in Industry

Authors: Tassadit Issaadi Hamitouche, Nicolas Gillard, Jean Petit, Valerie Lavaste, Celine Mayousse

Abstract:

Water is a fundamental resource for industry. It is taken from the environment, either from municipal distribution networks or from various natural water sources such as the sea, the ocean, rivers, aquifers, etc. Once used, the water is discharged into the environment or reprocessed at the plant or at treatment plants. These withdrawals and discharges have a direct impact on natural water resources. The impacts can concern the quantity of water available or the quality of the water used, or be more complex to measure and less direct, such as the health of the population downstream of the watercourse, for example. Based on the analysis of data (meteorological data, river characteristics, physicochemical substances), we wish to predict water stress episodes and anticipate prefectoral decrees, which can affect plant performance, and to propose improvement solutions, help industrialists choose the location of a new plant, visualize possible interactions between companies to optimize exchanges and encourage the pooling of water treatment solutions, and set up circular economies around the issue of water. The development of a system for the collection, processing, and use of data related to water resources requires its specific functional constraints to be made explicit. The system will thus have to be able to store a large amount of data from sensors (the main type of data in plants and their environment). In addition, manufacturers need 'near-real-time' processing of information in order to make the best decisions (to be rapidly notified of an event that would have a significant impact on water resources). Finally, the visualization of data must be adapted to its temporal and geographical dimensions. In this study, we set up an infrastructure centered on the TICK application stack (Telegraf, InfluxDB, Chronograf, and Kapacitor), a set of loosely coupled but tightly integrated open source projects designed to manage huge amounts of time-stamped information. The software architecture is coupled with the Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology. The robust architecture and the methodology used demonstrated their effectiveness on a study case: learning the level of a river with a 7-day horizon. The management of water and of the activities within the plants that depend on this resource should be considerably improved thanks, on the one hand, to the learning that allows periods of water stress to be anticipated, and on the other hand, to an information system able to warn decision-makers with alerts created from the formalization of prefectoral decrees.
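
As a hedged sketch of the forecasting step described above (learning a river level with a 7-day horizon from lagged observations), with synthetic data and a model choice that are placeholders rather than the project's actual pipeline:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hedged sketch of 7-day-ahead river level forecasting from lagged observations
# (synthetic data and model choice are placeholders, not the project's pipeline,
# which runs on the TICK stack and follows the CRISP-DM methodology).
rng = np.random.default_rng(0)
days = pd.date_range("2018-01-01", periods=1000, freq="D")
rain = rng.gamma(shape=0.5, scale=4.0, size=days.size)           # synthetic rainfall
level = 1.0 + pd.Series(rain).rolling(10, min_periods=1).mean()  # level follows recent rain
df = pd.DataFrame({"rain": rain, "level": level.values}, index=days)

for lag in range(1, 8):                        # last 7 days of level and rainfall
    df[f"level_lag{lag}"] = df["level"].shift(lag)
    df[f"rain_lag{lag}"] = df["rain"].shift(lag)
df["target"] = df["level"].shift(-7)           # river level 7 days ahead
df = df.dropna()

features = [c for c in df.columns if "lag" in c]
split = int(len(df) * 0.8)                     # simple chronological split
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(df[features].iloc[:split], df["target"].iloc[:split])
print("R^2 on the held-out period:",
      round(model.score(df[features].iloc[split:], df["target"].iloc[split:]), 2))
```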

Keywords: data mining, industry, machine learning, shortage, water resources

Procedia PDF Downloads 122
24 Modeling Diel Trends of Dissolved Oxygen for Estimating the Metabolism in Pristine Streams in the Brazilian Cerrado

Authors: Wesley A. Saltarelli, Nicolas R. Finkler, Adriana C. P. Miwa, Maria C. Calijuri, Davi G. F. Cunha

Abstract:

The metabolism of streams is an indicator of ecosystem disturbance due to the influence of the catchment on the structure of the water bodies. The study of respiration and photosynthesis allows the estimation of energy fluxes through food webs and the analysis of autotrophic and heterotrophic processes. We aimed to evaluate the metabolism of streams located in the Brazilian savannah, the Cerrado (Sao Carlos, SP), by determining and modeling the daily changes of dissolved oxygen (DO) in the water over one year. Three water bodies with minimal anthropogenic interference in their surroundings were selected: Espraiado (ES), Broa (BR) and Canchim (CA). Every two months, water temperature, pH and conductivity are measured with a multiparameter probe. Nitrogen and phosphorus forms are determined according to standard methods. Canopy cover percentages are also estimated in situ with a spherical densiometer. Stream flows are quantified through the conservative tracer (NaCl) method. For the metabolism study, DO (PME-MiniDOT) and light (Odyssey Photosynthetically Active Radiation) sensors log data every ten minutes for at least three consecutive days. The reaeration coefficient (k2) is estimated through the tracer gas (SF6) method. Finally, we model the variations in DO concentrations and calculate the rates of gross and net primary production (GPP and NPP) and respiration based on the one-station method described in the literature. Three sampling campaigns were carried out in October and December 2015 and in February 2016 (the next are planned for April, June and August 2016). The results from the first two periods are already available. The mean water temperatures in the streams were 20.0 +/- 0.8 °C (Oct) and 20.7 +/- 0.5 °C (Dec). In general, electrical conductivity values were low (ES: 20.5 +/- 3.5 µS/cm; BR: 5.5 +/- 0.7 µS/cm; CA: 33 +/- 1.4 µS/cm). The mean pH values were 5.0 (BR), 5.7 (ES) and 6.4 (CA). The mean concentrations of total phosphorus were 8.0 µg/L (BR), 66.6 µg/L (ES) and 51.5 µg/L (CA), whereas soluble reactive phosphorus concentrations were always below 21.0 µg/L. The BR stream had the lowest concentration of total nitrogen (0.55 mg/L) compared to CA (0.77 mg/L) and ES (1.57 mg/L). The average discharges were 8.8 +/- 6 L/s (ES), 11.4 +/- 3 L/s (BR) and 2.4 +/- 0.5 L/s (CA). The average percentages of canopy cover were 72% (ES), 75% (BR) and 79% (CA). Significant daily changes were observed in the DO concentrations, reflecting predominantly heterotrophic conditions (respiration exceeded gross primary production, with negative net primary production). The GPP varied from 0 to 0.4 g/m2.d (in Oct and Dec), and the respiration (R) varied from 0.9 to 22.7 g/m2.d (Oct) and from 0.9 to 7 g/m2.d (Dec). The predominance of heterotrophic conditions suggests increased vulnerability of these ecosystems to artificial inputs of organic matter that would demand oxygen. The investigation of the metabolism of pristine streams can help define natural reference conditions of trophic state.
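
To make the one-station idea concrete, here is a minimal Python sketch that corrects the observed rate of change of DO for reaeration, yielding a net metabolism estimate; negative values indicate that respiration exceeds gross primary production. The k2 value, the sampling interval, and the DO-saturation polynomial (a common freshwater approximation at sea-level pressure) are illustrative assumptions, not the study's calibration.

import numpy as np

def do_saturation(temp_c):
    # Common polynomial approximation of DO saturation (mg/L, freshwater, 760 mmHg).
    t = np.asarray(temp_c, dtype=float)
    return 14.652 - 0.41022 * t + 0.0079910 * t**2 - 0.000077774 * t**3

def net_metabolism(do_mg_l, temp_c, k2_per_day, dt_days=10.0 / 1440.0):
    # NEP = dDO/dt - k2 * (DOsat - DO), in mg O2 per liter per day.
    do = np.asarray(do_mg_l, dtype=float)
    d_do_dt = np.gradient(do, dt_days)
    reaeration = k2_per_day * (do_saturation(temp_c) - do)
    return d_do_dt - reaeration

# Example with synthetic 10-minute data over three days (hypothetical k2 = 15 per day):
# t = np.arange(0.0, 3.0, 10.0 / 1440.0)
# nep = net_metabolism(7.0 + 0.3 * np.sin(2 * np.pi * t), 20.5, k2_per_day=15.0)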

Keywords: low-order streams, metabolism, net primary production, trophic state

Procedia PDF Downloads 258
23 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification

Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi

Abstract:

Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of the original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection focuses on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches prefer to use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories: i) feature clustering-based ranking algorithms use feature clustering as an analysis step that precedes feature ranking; after dividing the feature set into groups, these approaches perform a feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that comes before the search, bound to and combined with the search, or as an alternative that replaces the search. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set thanks to a class-correlation measure. Then, introducing a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features are eliminated at each step. The proposed algorithm uses feature clustering bound to and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex and USPtex), demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
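
A minimal Python sketch of the three-step idea follows: a relevance filter, clustering of the surviving features, and a sequential search in which selecting a feature discards the rest of its cluster. The relevance score, clustering method, and separability criterion used here (ANOVA F-score, k-means on standardized feature columns, cross-validated nearest-neighbor accuracy) are illustrative stand-ins, not the measures used in the paper.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

def cluster_sequential_selection(X, y, n_clusters=10, relevance_quantile=0.5, max_features=15):
    # Step 1: drop the least class-correlated features (relevance filter).
    scores, _ = f_classif(X, y)
    keep = np.where(scores >= np.quantile(scores, relevance_quantile))[0]
    # Step 2: cluster the remaining features (columns) to group redundant ones.
    Xs = StandardScaler().fit_transform(X[:, keep])
    labels = KMeans(n_clusters=min(n_clusters, len(keep)), n_init=10, random_state=0).fit_predict(Xs.T)
    # Step 3: sequential forward search; picking a feature removes its whole cluster.
    selected, candidates = [], list(range(len(keep)))
    while candidates and len(selected) < max_features:
        best, best_acc = None, -np.inf
        for j in candidates:
            acc = cross_val_score(KNeighborsClassifier(), X[:, keep[selected + [j]]], y, cv=3).mean()
            if acc > best_acc:
                best, best_acc = j, acc
        selected.append(best)
        candidates = [j for j in candidates if labels[j] != labels[best]]
    return keep[selected]  # indices of the selected, non-redundant features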

Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix

Procedia PDF Downloads 138
22 Innovative Fabric Integrated Thermal Storage Systems and Applications

Authors: Ahmed Elsayed, Andrew Shea, Nicolas Kelly, John Allison

Abstract:

In northern European climates, domestic space heating and hot water represent a significant proportion of total primary energy use, and meeting these demands from a national electricity grid supplied by renewable energy sources provides an opportunity for a significant reduction in EU CO2 emissions. However, in order to adapt to the intermittent nature of renewable energy generation and to avoid coincident peak electricity usage from consumers that may exceed current capacity, the demand for heat must be decoupled from its generation. Storage of heat within the fabric of dwellings for use some hours, or days, later provides a route to complete decoupling of demand from supply and facilitates a greatly increased use of renewable energy generation in a local or national electricity network. The integration of thermal energy storage into the building fabric for retrieval at a later time requires careful evaluation of many competing thermal, physical, and practical considerations, such as the profile and magnitude of heat demand, the duration of storage, charging and discharging rates, storage media, space allocation, etc. In this paper, the authors report investigations of thermal storage in building fabric using concrete, present an evaluation of several factors that impact performance, including heating pipe layout, heating fluid flow velocity, storage geometry, and thermo-physical material properties, and also present an investigation of alternative storage materials and alternative heat transfer fluids. Reducing the heating pipe spacing from 200 mm to 100 mm enhances the stored energy by 25%, and high-performance vacuum insulation results in a heat-loss flux of less than 3 W/m2, compared to 22 W/m2 for the more conventional EPS insulation. Dense concrete achieved the greatest storage capacity relative to medium- and light-weight alternatives, although a material thickness of 100 mm required more than 5 hours to charge fully. Layers of 25 mm and 50 mm thickness can be charged in 2 hours or less, facilitating a fast response that could, aggregated across multiple dwellings, provide a significant and valuable reduction in demand for grid-generated electricity in expected periods of high demand and potentially eliminate the need for additional new generating capacity from conventional sources such as gas, coal, or nuclear.
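
As a rough illustration of the quantities involved, the following Python sketch estimates the sensible heat stored per square meter of a concrete layer and the steady-state heat-loss flux through an insulation layer. The material properties and temperature differences are generic, assumed values, not the paper's measured data.

def stored_energy_per_m2(thickness_m, delta_t_k, density=2300.0, cp=880.0):
    # Sensible heat stored per unit area (J/m^2); rho and cp assumed for dense concrete.
    return density * thickness_m * cp * delta_t_k

def heat_loss_flux(k_w_mk, thickness_m, delta_t_k):
    # Steady-state conduction flux (W/m^2) through a single layer: q = k * dT / d.
    return k_w_mk * delta_t_k / thickness_m

# print(stored_energy_per_m2(0.1, 20.0) / 3.6e6)   # about 1.1 kWh/m^2 for 100 mm charged by 20 K
# print(heat_loss_flux(0.007, 0.02, 20.0))         # about 7 W/m^2 for a thin panel with assumed k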

Keywords: fabric integrated thermal storage, FITS, demand side management, energy storage, load shifting, renewable energy integration

Procedia PDF Downloads 166
21 Problem Based Learning and Teaching by Example in Dimensioning of Mechanisms: Feedback

Authors: Nicolas Peyret, Sylvain Courtois, Gaël Chevallier

Abstract:

This article outlines the development of Project Based Learning (PBL) in the final year of a Bachelor's Degree. This form of pedagogy aims to involve the students more fully from the beginning of the module: the theoretical content is introduced during the project, as needed to solve a technological problem. The module in question is the mechanical dimensioning module of Supméca, a French engineering school that issues a Master's Degree. While the teaching methods used in primary and secondary education are frequently renewed in France at the instigation of teachers and inspectors, higher education remains relatively traditional in its practices. Recently, some colleagues have felt the need to put application back at the heart of their theoretical teaching. This need is induced by the difficulty of covering all the knowledge deductively before applying it. It is therefore tempting to make the students 'learn by doing', even if this does not cover some parts of the theoretical knowledge. The other argument that supports this type of learning is the students' lack of motivation for lecture-based courses. Role-play allows scenarios that favor interaction between students and teachers. However, this pedagogical form, known as 'pedagogy by project', is difficult to apply in the first years of university studies because of the low level of autonomy and individual responsibility of the students. What the student actually learns of the initial program, as well as the evaluation of the competences acquired in this type of pedagogy, also remains an open problem. We therefore propose to add to the project-based format a gradually withdrawn form of teacher intervention based on pedagogy by example. This pedagogical scenario is based on cognitive load theory and Bruner's constructivist theory. It was built on the six points of the scaffolding process defined by Bruner, with a concrete objective: to allow the students to go beyond the basic skills of dimensioning and acquire the more global skills of engineering. The implementation of project-based teaching coupled with pedagogy by example makes it possible to compensate for the lack of experience and autonomy of first-year students, while strongly involving them in the first few minutes of the module. In this project, students were confronted with real dimensioning problems and were able to understand the links and influences between parameter variations and dimensioning, an objective that we did not reach with classical teaching. This form of pedagogy accelerates the mastery of basic skills and so leaves more time for engineering skills, namely the convergence of each dimensioning step in order to obtain a validated mechanism. A self-evaluation of the project skills acquired by the students is also presented.

Keywords: Bruner's constructivist theory, mechanisms dimensioning, pedagogy by example, problem based learning

Procedia PDF Downloads 190
20 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in heavy oil fields. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed experimentally and with a Computational Fluid Dynamics (CFD) approach, using the DCAB031 model installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate, with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump's rotational speed and power input were controlled using an Invertek Optidrive E3 frequency drive. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for mesh validation, obtaining a value of 2.5%. Three different rotational speeds were evaluated (200, 300, 400 rpm), and a directly proportional relationship was observed between the rotational speed of the rotor and the calculated flow rate. The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oil tested, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the more viscous oils showed the lowest pressure rise and the lowest volumetric flow pumped, with a degradation of around 30% in pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, it decreased between fluids as viscosity increased.
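
For readers unfamiliar with the mesh-validation step, here is a minimal Python sketch of a Roache-style Grid Convergence Index computed from solutions on three successively refined meshes with a constant refinement ratio. The solution values in the usage comment are hypothetical, not the study's results.

import math

def grid_convergence_index(f1, f2, f3, r=2.0, safety_factor=1.25):
    # f1 is the finest-mesh result, f3 the coarsest; r is the refinement ratio.
    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)   # observed order of accuracy
    rel_error = abs((f1 - f2) / f1)                           # fine-to-medium relative difference
    gci = safety_factor * rel_error / (r**p - 1.0)
    return p, 100.0 * gci                                     # GCI expressed as a percentage

# Example with hypothetical pressure-rise values on fine, medium and coarse meshes:
# p, gci_pct = grid_convergence_index(54.0, 53.2, 50.9)
# print(f"observed order p = {p:.2f}, GCI = {gci_pct:.1f} %")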

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 128
19 Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise

Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou

Abstract:

Concern about the negative impacts of anthropogenic noise on the ocean's ecosystems has increased over recent decades. This concern has led to a similarly increased willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires not only knowledge about the noise from individual ships, but also knowledge of how that noise is distributed in time and space within the habitats of concern. Marine mammals, as well as fish, sea turtles, larvae, and invertebrates, depend largely on sound to hunt, feed, avoid predators, socialize and communicate during reproduction, and defend a territory. In the marine environment, sight is only useful up to a few tens of meters, whereas sound can propagate over hundreds or even thousands of kilometers. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, known as the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities in order to achieve and maintain a good environmental status of the marine environment. Ocean Planner is a web-based platform that provides regulators, managers of protected or sensitive areas, and other stakeholders with a decision support tool that enables them to anticipate and quantify the effectiveness of management measures in terms of reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Based on the operational sound modelling tool Quonops Online Service, Ocean Planner allows the user, via an intuitive geographical interface, to define management measures at local (Marine Protected Area, Natura 2000 site, harbor) or global (Particularly Sensitive Sea Area) scales, seasonal (regulation over a period of time) or permanent, partial (focused on some maritime activities) or complete (all maritime activities). Speed limits, exclusion areas, traffic separation schemes (TSS), and vessel sound level limitations are among the measures supported by the tool. Ocean Planner helps decide on the most effective measure to apply in order to maintain or restore the biodiversity and the functioning of coastal seabed ecosystems, maintain a good conservation status of sensitive areas, and maintain or restore the populations of marine species.
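
As a toy illustration of how a measure's effect on received noise might be quantified, the sketch below compares received levels for a baseline source and a quieter, speed-limited source under a simple geometric spreading-loss law. It is not the Quonops propagation model; the spreading exponent and the source levels are assumptions.

import math

def received_level_db(source_level_db, range_m, spreading_coeff=20.0):
    # RL = SL - N * log10(r): a basic geometric spreading-loss model (N = 20 for spherical spreading).
    return source_level_db - spreading_coeff * math.log10(max(range_m, 1.0))

# baseline = received_level_db(185.0, 5000.0)    # hypothetical cargo vessel heard at 5 km
# mitigated = received_level_db(179.0, 5000.0)   # assumed quieter source after a speed limit
# print(f"reduction at 5 km: {baseline - mitigated:.1f} dB")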

Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction

Procedia PDF Downloads 123