Search results for: Jean Akiana
Paper Count: 278

68 Evaluation of the Conditions of Managed Aquifer Recharge in the West African Basement Area

Authors: Palingba Aimé Marie Doilkom, Mahamadou Koïta, Jean-Michel Vouillamoz, Angelbert Biaou

Abstract:

Most rural African populations rely on groundwater for their consumption. In the face of climate change and strong demographic growth, groundwater, particularly in basement areas, is increasingly in demand, and the sustainability of water resources in this type of environment is therefore becoming a major issue. Groundwater recharge can be natural or artificial. Unlike natural recharge, which often results from the natural infiltration of surface water (e.g., a share of rainfall), artificial recharge consists of inducing water infiltration through appropriate structures in order to artificially replenish the water stock of an aquifer. Artificial recharge is therefore one of the measures that can be implemented to secure water supply, combat the effects of climate change, and, more generally, improve the quantitative status of groundwater bodies. It is in this context that the present research is conducted, with the aim of developing artificial recharge to contribute to the sustainability of basement aquifers under climatic variability and constantly increasing water needs. To achieve the expected results, it is important to determine the characteristics of the infiltration basins and to identify the areas suitable for their implementation. The geometry of the aquifer was reproduced, and its hydraulic properties were collected and characterized, including boundary conditions, hydraulic conductivity, effective porosity, recharge, Van Genuchten parameters, and saturation indices. The aquifer of the Sanon experimental site is made up of three layers, namely the saprolite, the fissured horizon, and the fresh basement; only the saprolite and the fissured medium were considered for the simulations. The first results with the FEFLOW model show that the water table reacts continuously for the first 100 days before stabilizing. The hydraulic head increases by an average of 1 m, and the further away from the basin, the less the water table reacts. However, if a variable hydraulic head is imposed on the basins, the response of the water table is not uniform over time: the lower the basin hydraulic head, the less it affects the water table. These simulations must be continued by refining the basin characteristics in order to identify those suitable for a good recharge.
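
For readers unfamiliar with the Van Genuchten parameters cited among the aquifer properties, the retention model they parameterize is commonly written as follows (standard form, given here for context; the abstract does not state the exact formulation used in FEFLOW):

\[
\theta(h) = \theta_r + \frac{\theta_s - \theta_r}{\left[1 + (\alpha \lvert h \rvert)^{n}\right]^{m}}, \qquad m = 1 - \frac{1}{n},
\]

where \(\theta_r\) and \(\theta_s\) are the residual and saturated water contents, \(h\) is the pressure head, and \(\alpha\) and \(n\) are the Van Genuchten shape parameters fitted for the saprolite and fissured horizons.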

Keywords: basement area, FEFLOW, infiltration basin, MAR

Procedia PDF Downloads 76
67 Development and Compositional Analysis of Functional Bread and Biscuit from Soybean, Peas and Rice Flour

Authors: Jean Paul Hategekimana, Bampire Claudine, Niyonsenga Nadia, Irakoze Josiane

Abstract:

Peas, soybeans and rice are crops grown in Rwanda and available in rural and urban local markets, and they contribute to reducing health problems, especially malnutrition and food insecurity. Several research activities have been conducted on how cereal flours can be mixed with legume flours to develop baked products that are rich in protein, fiber and minerals, as these are found in legumes. However, such work had not yet been well studied in Rwanda. The aim of the present study was to develop bread and biscuit products from peas, soybeans and rice as functional ingredients combined with wheat flour, and then to analyze the nutritional content and consumer acceptability of the newly developed products. The malnutrition problem can be reduced by producing bread and biscuits that are rich in protein and accessible to every individual. The bread and biscuits were processed by mixing pea flour, soybean flour and rice flour with wheat flour and other ingredients, preparing a dough, and baking. For bread, two kinds of products were processed; for each product, one control and three experimental samples in three different ratios were prepared. These ratios were 95:5, 90:10 and 80:20 for bread from peas, and 85:5:10, 80:10:10 and 70:10:20 for bread from peas and rice. For biscuits, two kinds of products were also processed; for each product, one control sample and three experimental samples in three different ratios were prepared. These ratios were 90:5:5, 80:10:10 and 70:10:20 for biscuits from peas and rice, and 90:5:5, 80:10:10 and 70:10:20 for biscuits from soybean and rice. All samples, including the control samples, were analyzed for consumer acceptability (sensory attributes) and nutritional composition. In the sensory analysis, bread from pea and rice flour with wheat flour at a ratio of 85:5:10, bread from peas only as a functional ingredient with wheat flour at a ratio of 95:5, biscuits made from soybeans and rice at a ratio of 90:5:5, and biscuits made from peas and rice at a ratio of 90:5:5 were the most acceptable compared to the control and to the other ratios. The moisture, protein, fat, fiber and mineral (sodium and iron) contents were analyzed; bread from peas in all ratios was found to be richer in protein and fiber than the control sample, and biscuits from soybean and rice in all ratios were found to be richer in protein and fiber than the control sample.

Keywords: bakery products, peas and rice flour, wheat flour, sensory evaluation, proximate composition

Procedia PDF Downloads 64
66 Process Modeling in an Aeronautics Context

Authors: Sophie Lemoussu, Jean-Charles Chaudemar, Robertus A. Vingerhoeds

Abstract:

Many innovative projects exist in the field of aeronautics, each addressing specific areas such as reducing weight, increasing autonomy, or reducing CO2 emissions. In many cases, such innovative developments are carried out by very small enterprises (VSEs) or small and medium-sized enterprises (SMEs). A good example concerns airships, which are being studied as a real alternative for passenger and cargo transportation. Today, no international regulations propose a precise and sufficiently detailed framework for the development and certification of airships. The absence of such a regulatory framework requires very close contact with regulatory instances. However, VSEs/SMEs do not always have sufficient resources and internal knowledge to handle this complexity and to discuss these issues. This poses an additional challenge for those VSEs/SMEs, in particular those that have system integration responsibilities and that must provide all the necessary evidence to demonstrate their ability to design, produce, and operate airships with the expected level of safety and reliability. The main objective of this research is to provide a methodological framework enabling VSEs/SMEs with limited resources to organize the development of airships while taking into account the constraints of safety, cost, time and performance. This paper contributes to this problem by proposing a Model-Based Systems Engineering approach. Through a comprehensive process modeling approach applied to the development processes, the regulatory constraints, existing best practices, etc., a good image can be obtained of the process landscape that may influence the development of airships. To this effect, not only is the necessary regulatory information taken on board, but other international standards and norms on systems engineering and project management are also modeled and taken into account. In a next step, the model can be used to analyze the specific situation of a given development, derive critical paths for the development, identify possible conflicts between the norms, standards, and regulatory expectations, or identify areas where not enough information is available. Once critical paths are known, optimization approaches and decision support techniques can be applied so as to better support VSEs/SMEs in their innovative developments. This paper reports on the adopted modeling approach, the retained modeling languages, and how they all fit together.

Keywords: aeronautics, certification, process modeling, project management, regulation, SME, systems engineering, VSE

Procedia PDF Downloads 161
65 Broadband Ultrasonic and Rheological Characterization of Liquids Using Longitudinal Waves

Authors: M. Abderrahmane Mograne, Didier Laux, Jean-Yves Ferrandis

Abstract:

Rheological characterization of complex liquids like polymer solutions is of great scientific interest to researchers in many fields, such as biology, the food industry, and chemistry. In order to establish master curves (elastic moduli vs. frequency), which can give information about the microstructure, classical rheometers or viscometers (such as Couette systems) are used. For broadband characterization of the sample, the temperature is varied over a very large range, leading to equivalent frequency modifications through the Time-Temperature Superposition principle. For many liquids undergoing phase transitions, this approach is not applicable. That is why the development of broadband spectroscopic methods around room temperature has become a major concern. Many solutions have been proposed in the literature but, to our knowledge, there is no experimental bench giving the whole rheological characterization from a few hertz (Hz) to many megahertz (MHz). Consequently, our goal is to investigate, in a nondestructive way and over a very broad frequency band (a few Hz to hundreds of MHz), the rheological properties using longitudinal ultrasonic waves (L waves), a single experimental bench, and a specific container for the liquid: a test tube. More specifically, we aim to estimate the three viscosities (longitudinal, shear and bulk) and the complex elastic moduli (M*, G* and K*), i.e., the longitudinal, shear and bulk moduli, respectively. We have decided to use only L waves conditioned in two ways: bulk L waves in the liquid or guided L waves in the test tube walls. In this paper, we first present results for very low frequencies using the ultrasonic tracking of a ball falling in the test tube. This leads to the estimation of shear viscosity from a few mPa.s to a few Pa.s (pascal second). Corrections due to the small dimensions of the tube are applied and discussed with respect to the size of the falling ball. Then, the use of bulk L wave propagation in the liquid and the development of specific signal processing to assess longitudinal velocity and attenuation lead to the evaluation of the longitudinal viscosity in the MHz frequency range. Finally, the first results concerning the propagation, generation and processing of guided compressional waves in the test tube walls are discussed. All these approaches and results are compared to standard methods available and already validated in our lab.
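
For the low-frequency falling-ball measurement described above, the shear viscosity is classically obtained from Stokes' law, corrected for the finite tube radius; a standard form, given here for context (the Ladenburg wall correction is one common choice, not necessarily the one used by the authors), is:

\[
\eta = \frac{2\, g\, r^{2} (\rho_s - \rho_f)}{9\, v_\infty}, \qquad v_\infty \approx v_m \left(1 + 2.4\,\frac{r}{R}\right),
\]

where \(r\) is the ball radius, \(\rho_s\) and \(\rho_f\) the ball and fluid densities, \(v_m\) the terminal velocity measured ultrasonically in the tube, and \(R\) the inner radius of the test tube.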

Keywords: nondestructive measurement for liquid, piezoelectric transducer, ultrasonic longitudinal waves, viscosities

Procedia PDF Downloads 265
64 Agronomic Test to Determine the Efficiency of Hydrothermally Treated Alkaline Igneous Rocks and Their Potassium Fertilizing Capacity

Authors: Aaron Herve Mbwe Mbissik, Lotfi Khiari, Otmane Raji, Abdellatif Elghali, Abdelkarim Lajili, Muhammad Ouabid, Martin Jemo, Jean-Louis Bodinier

Abstract:

Potassium (K) is an essential macronutrient for plant growth, helping to regulate several physiological and metabolic processes. Evaporite-related potash salts, mainly sylvite (potassium chloride, KCl), are the principal source of K for the fertilizer industry. However, given the high potash-supply risk associated with considerable price fluctuations and an uneven geographic distribution for most agriculture-based developing countries, the development of alternative sources of fertilizer K is imperative to maintain adequate crop yields, reduce yield gaps, and ensure food security. Alkaline igneous rocks containing significant amounts of K-rich silicate minerals such as K-feldspar are increasingly seen as the best available alternative. However, these rocks may require hydrothermal treatment to enhance the release of potassium. In this study, we evaluate the fertilizing capacity of raw and hydrothermally treated K-bearing silicate rocks from different areas of Morocco. The effectiveness of the rock powders was tested in a greenhouse experiment using ryegrass (Lolium multiflorum), comparing them to a control (no K added) and to a conventional fertilizer (muriate of potash, MOP or KCl). The trial was conducted in a randomized complete block design with three replications, and plants were grown on K-depleted soils for three growing cycles. In addition to the analysis of the muriate response curve and the different biomasses, we also examined three coefficients, namely K uptake, apparent K recovery (AKR), and relative K efficiency (RKE). The results showed that, based on the optimum economic rate of MOP (230 kg.K.ha⁻¹) and the optimum yield (44 000 kg.K.ha⁻¹), the efficiency of the K silicate rocks was as high as that of MOP. Although the plants took up only half of the K supplied by the powdered rock, the hydrothermal material was found to be satisfactory, with a biomass value reaching the optimum economic limit until the second crop cycle. In comparison, the AKR of MOP (98.6%) and its RKE in the first cycle were higher than those of our materials (39% and 38%, respectively). Therefore, based on the obtained results, a mixture of raw and hydrothermal materials could be an appropriate solution for long-term agronomic use.
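
The recovery coefficients mentioned above are not defined in the abstract; the forms commonly used in fertilizer trials, given here only as a reference point (the exact definitions used by the authors may differ), are:

\[
\mathrm{AKR}\,(\%) = \frac{U_{\text{fertilized}} - U_{\text{control}}}{K_{\text{applied}}} \times 100,
\qquad
\mathrm{RKE}\,(\%) = \frac{\mathrm{AKR}_{\text{rock}}}{\mathrm{AKR}_{\text{MOP}}} \times 100,
\]

where \(U\) denotes plant K uptake (kg K ha⁻¹) and \(K_{\text{applied}}\) the K dose supplied by the rock powder or by MOP.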

Keywords: K-uptake, AKR, RKE, K-bearing silicate rock, MOP

Procedia PDF Downloads 89
63 Application of Lattice Boltzmann Method to Different Boundary Conditions in a Two Dimensional Enclosure

Authors: Jean Yves Trepanier, Sami Ammar, Sagnik Banik

Abstract:

The Lattice Boltzmann Method is advantageous for simulating complex boundary conditions and solving for fluid flow parameters through streaming and collision processes. This paper studies three different test cases in a confined domain using the Lattice Boltzmann model. 1. An SRT (Single Relaxation Time) approach in the Lattice Boltzmann model is used to simulate lid-driven cavity flow for different Reynolds numbers (100, 400 and 1000) with a domain aspect ratio of 1, i.e., a square cavity. A moment-based boundary condition is used for more accurate results. 2. A thermal lattice BGK (Bhatnagar-Gross-Krook) model is developed for Rayleigh-Benard convection for two test cases, horizontal and vertical temperature differences, considered separately for a Boussinesq incompressible fluid. The Rayleigh number is varied for both test cases (10^3 ≤ Ra ≤ 10^6), keeping the Prandtl number at 0.71. A stability criterion with a precise forcing scheme is used for a greater level of accuracy. 3. The phase change problem governed by the heat-conduction equation is studied using the enthalpy-based Lattice Boltzmann model with a single iteration for each time step, thus reducing the computational time. A double distribution function approach with a D2Q9 (density) model and a D2Q5 (temperature) model is used for two different test cases: conduction-dominated melting and convection-dominated melting. The solidification process is also simulated using the enthalpy-based method with a single distribution function using the D2Q5 model to provide a better understanding of the heat transport phenomenon. The domain for the test cases has an aspect ratio of 2, with some exceptions for a square cavity. An approximate velocity scale is chosen to ensure that the simulations are within the incompressible regime. Different parameters like velocities, temperature, Nusselt number, etc. are calculated for a comparative study with existing works in the literature. The simulated results demonstrate excellent agreement with the existing benchmark solutions within an error limit of ±0.05, which indicates the viability of this method for complex fluid flow problems.
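
To make the streaming-and-collision idea concrete, the sketch below implements a minimal D2Q9 single-relaxation-time (BGK) update in Python with periodic boundaries. The lattice size, relaxation time and initial perturbation are arbitrary illustrative choices; the moment-based boundaries, thermal coupling and enthalpy models used in the paper are not reproduced here.

```python
import numpy as np

nx, ny, tau = 64, 64, 0.8                                   # lattice size and relaxation time (assumed)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                    # D2Q9 weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])          # discrete lattice velocities

def equilibrium(rho, u):
    """Second-order equilibrium distribution for each of the nine directions."""
    cu = np.einsum('qd,xyd->qxy', c, u)                     # c_i . u
    usq = np.einsum('xyd,xyd->xy', u, u)                    # |u|^2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny))                                     # uniform density
u = np.zeros((nx, ny, 2))
u[:, :, 0] = 0.01 * np.sin(2*np.pi*np.arange(ny)/ny)        # small shear-wave perturbation
f = equilibrium(rho, u)

for step in range(1000):
    rho = f.sum(axis=0)                                     # macroscopic density
    u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]     # macroscopic velocity
    f += -(f - equilibrium(rho, u)) / tau                   # collision: SRT/BGK relaxation
    for q in range(9):                                      # streaming along each direction
        f[q] = np.roll(f[q], shift=c[q], axis=(0, 1))
```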

Keywords: BGK, Nusselt, Prandtl, Rayleigh, SRT

Procedia PDF Downloads 128
62 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate between various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA and stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach, metagenome2vec, that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which each sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
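
As a rough illustration of steps (i)-(ii) of a pipeline like the one described, the sketch below tokenizes raw reads into overlapping k-mers and builds read embeddings by pooling k-mer vectors. The value of k, the embedding dimension and the random embedding matrix are placeholders standing in for learned (e.g. skip-gram) embeddings; the actual metagenome2vec implementation may differ.

```python
import numpy as np

def kmers(read: str, k: int = 6):
    """Split a DNA read into overlapping k-mers (the 'words' of the sequence)."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# (i) build a k-mer vocabulary from a small batch of reads
reads = ["ACGTACGTGGCT", "TTGACGTACGAA"]          # toy reads; real fastq files hold millions
vocab = {km: idx for idx, km in enumerate(sorted({km for r in reads for km in kmers(r)}))}

# toy embedding matrix standing in for learned k-mer embeddings
rng = np.random.default_rng(0)
E = rng.normal(size=(len(vocab), 64))             # 64-dimensional k-mer vectors (assumed)

# (ii) a read embedding as the mean of its k-mer vectors; the bag of read embeddings per
# patient then feeds the genome identification and multiple-instance-learning steps (iii)-(iv)
def embed_read(read: str) -> np.ndarray:
    ids = [vocab[km] for km in kmers(read) if km in vocab]
    return E[ids].mean(axis=0)

patient_bag = np.stack([embed_read(r) for r in reads])   # shape: (n_reads, 64)
```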

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 125
61 Segmented Pupil Phasing with Deep Learning

Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan

Abstract:

Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows a large telescope to fit within a reduced volume (JWST) or an even smaller one (a standard CubeSat). CubeSats have tight constraints on the available computational budget and the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NN) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: In this paper, we study the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without a dedicated wavefront sensor. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution. To reach the diffraction-limited regime in the visible, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, is also used as a wavefront sensor. In this work, we study a point source, i.e., the Point Spread Function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual wavefront error, i.e., below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates into a small computational burden. These results call for further study with higher aberrations and noise.
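
As an illustration of the regression setup described above, the sketch below runs one training step of a small convolutional network mapping a simulated focal-plane PSF to segment phasing coefficients. The architecture, the number of segments (6), the modes per segment (piston, tip, tilt), the placeholder tensors and the use of PyTorch are all assumptions made for illustration; the paper itself uses a VGG-net trained on its own simulated dataset.

```python
import torch
import torch.nn as nn

n_segments, n_modes = 6, 3            # piston, tip, tilt per segment (assumption)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, n_segments * n_modes),        # regression output: phasing coefficients
)

psf = torch.randn(8, 1, 64, 64)                  # batch of simulated 64x64 PSFs (placeholder)
target = torch.randn(8, n_segments * n_modes)    # known injected aberrations (placeholder)

optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                           # RMS wavefront-error style objective
optim.zero_grad()
loss = loss_fn(model(psf), target)
loss.backward()
optim.step()
```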

Keywords: wavefront sensing, deep learning, deployable telescope, space telescope

Procedia PDF Downloads 104
60 Cancer Burden and Policy Needs in the Democratic Republic of the Congo: A Descriptive Study

Authors: Jean Paul Muambangu Milambo, Peter Nyasulu, John Akudugu, Leonidas Ndayisaba, Joyce Tsoka-Gwegweni, Lebwaze Massamba Bienvenu, Mitshindo Mwambangu Chiro

Abstract:

In 2018, non-communicable diseases (NCDs) were responsible for 48% of deaths in the Democratic Republic of Congo (DRC), with cancer contributing to 5% of these deaths. There is a notable absence of cancer registries, capacity-building activities, budgets, and treatment roadmaps in the DRC. Current cancer estimates are primarily based on mathematical modeling with limited data from neighboring countries. This study aimed to assess cancer subtype prevalence in Kinshasa hospitals and compare these findings with WHO model estimates. Methods: A retrospective observational study was conducted from 2018 to 2020 at HJ Hospitals in Kinshasa. Data were collected using American Cancer Society (ACS) questionnaires and physician logs. Descriptive analysis was performed using STATA version 16 to estimate cancer burden and provide evidence-based recommendations. Results: The results from the chart review at HJ Hospitals in Kinshasa (2018-2020) indicate that out of 6,852 samples, approximately 11.16% were diagnosed with cancer. The distribution of cancer subtypes in this cohort was as follows: breast cancer (33.6%), prostate cancer (21.8%), colorectal cancer (9.6%), lymphoma (4.6%), and cervical cancer (4.4%). These figures are based on histopathological confirmation at the facility and may not fully represent the broader population due to potential selection biases related to geographic and financial accessibility to the hospital. In contrast, the World Health Organization (WHO) model estimates for cancer prevalence in the DRC show different proportions. According to WHO data, the distribution of cancer types is as follows: cervical cancer (15.9%), prostate cancer (15.3%), breast cancer (14.9%), liver cancer (6.8%), colorectal cancer (5.9%), and other cancers (41.2%) (WHO, 2020). Conclusion: The data indicate a rising cancer prevalence in DRC but highlight significant gaps in clinical, biomedical, and genetic cancer data. The establishment of a population-based cancer registry (PBCR) and a defined cancer management pathway is crucial. The current estimates are limited due to data scarcity and inconsistencies in clinical practices. There is an urgent need for multidisciplinary cancer management, integration of palliative care, and improvement in care quality based on evidence-based measures.

Keywords: cancer, risk factors, DRC, gene-environment interactions, survivors

Procedia PDF Downloads 20
59 Impact of Non-Parental Early Childhood Education on Digital Friendship Tendency

Authors: Sheel Chakraborty

Abstract:

Modern society in developed countries has distanced itself from the earlier norm of joint-family living, and with increasing economic pressure, parents' availability for their children during the infant years has been consistently decreasing over the past three decades. During the same period, the pre-primary education system, built mainly on the developmental psychology frameworks of Jean Piaget and Lev Vygotsky, has been promoted in the US through legislation and funding. Early care and education may have a positive impact on young minds, but a growing number of kids facing social challenges in making friendships in their teenage years raises serious concerns about its effectiveness. The survey-based primary research presented here shows that a statistically significant number of millennials between the ages of 10 and 25 prefer to build friendships virtually rather than through face-to-face interactions. Moreover, many teenagers depend more on virtual friends whom they have never met. Contrary to the belief that early social interactions in a non-home setup make kids confident and more prepared for the real world, many shy-natured kids seem to develop a sense of shakiness in forming social relationships, resulting in loneliness by the time they are young adults. Reflecting on George Mead's theory of the self, made up of the "I" and the "Me", most functioning homes provide the freedom and the forgiving, congenial environment needed for building the "I" of a toddler; daycare or preschools can barely match that. Social images created from the expectations perceived by a preschooler's "Me" in a non-home setting may interfere with, and greatly overpower, the formation of a confident "I", creating a crisis around the inability to form friendships face to face when they grow older. Though the pervasive nature of social media cannot be ignored, the non-parental early care and education practices adopted largely by the urban population have created a favorable platform of teen psychology on which social media popularity thrived, especially providing refuge to shy Gen-Z teenagers. This can explain why young adults today perceive social media as their preferred outlet of expression and a place to form dependable friendships, despite the risk of being cyberbullied.

Keywords: digital socialization, shyness, developmental psychology, friendship, early education

Procedia PDF Downloads 127
58 Study of Durability of Porous Polymer Materials, Glass-Fiber-Reinforced Polyurethane Foam (R-PUF) in MarkIII Containment Membrane System

Authors: Florent Cerdan, Anne-Gaëlle Denay, Annette Roy, Jean-Claude Grandidier, Éric Laine

Abstract:

The insulation of the MarkIII membrane of Liquefied Natural Gas Carriers (LNGC) consists of a load-bearing system made of panels of reinforced polyurethane foam (R-PUF). During shipping, the cargo containment may be subject to risk events such as water leakage through the ballast tank wall. The aim of the present work is to further develop the understanding of water transfer mechanisms and of the effect of water on the properties of R-PUF. This multi-scale approach contributes to improving durability. Macroscale / mesoscale: Firstly, the gravimetric technique was used to determine, at room temperature, the water transfer mechanisms and diffusion kinetics in the R-PUF. The water uptake follows a first fast kinetic stage associated with water absorption by the micro-porosity, and then evolves slowly and linearly; this second stage is associated with molecular diffusion and dissolution of water in the dense polyurethane membranes. Secondly, with the purpose of improving the understanding of the transfer mechanism, the evolution of the buoyant force was studied. It allowed identification of the effect of the balance between the total and partial pressures of the gas mixture contained in the pores. Mesoscale / microscale: Differential scanning calorimetry (DSC) and dynamic mechanical analysis (DMA) were used to investigate the hydration of the hard and soft segments of the polyurethane matrix, the purpose being to identify the sensitivity of these two phases. It has been shown that the glass transition temperatures shift towards lower temperatures when the water solubility increases. These observations point to a plasticization of the polymer matrix. Microscale: Fourier transform infrared (FTIR) spectroscopy was used to characterize the functional groups at the edge, the center, and mid-way through the sample as a function of submersion time. The more water there is in the material, the more it binds to the urethane groups and, more specifically, to the amide groups. The urethane C=O peak shifts quickly to lower frequencies during the first 24 hours of submersion and then more slowly, while its intensity decreases more gradually thereafter.
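
Gravimetric uptake curves of the kind described above are often analysed with a Fickian diffusion model; one standard relation for a plane sheet of thickness L, given here only as a generic reference (the two-stage behaviour reported by the authors implies deviations from purely Fickian uptake), is the short-time approximation

\[
\frac{M_t}{M_\infty} \approx \frac{4}{L}\sqrt{\frac{D\,t}{\pi}}, \qquad \frac{M_t}{M_\infty} \lesssim 0.6,
\]

from which the apparent diffusion coefficient \(D\) is obtained from the initial slope of the uptake plotted against \(\sqrt{t}\).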

Keywords: porous materials, water sorption, glass transition temperature, DSC, DMA, FTIR, transfer mechanisms

Procedia PDF Downloads 529
57 A Targeted Maximum Likelihood Estimation for a Non-Binary Causal Variable: An Application

Authors: Mohamed Raouf Benmakrelouf, Joseph Rynkiewicz

Abstract:

Targeted maximum likelihood estimation (TMLE) is a well-established method for causal effect estimation with desirable statistical properties. TMLE is a doubly robust, maximum-likelihood-based approach that includes a secondary targeting step optimizing the target statistical parameter. A causal interpretation of the statistical parameter requires the assumptions of the Rubin causal framework. The causal effect of a binary variable, E, on an outcome, Y, is defined in terms of a comparison between two potential outcomes, E[Y_{E=1} − Y_{E=0}]. Our aim in this paper is to present an adaptation of the TMLE methodology to estimate the causal effect of a non-binary categorical variable, together with a large application. We propose a coding of the initial data in order to binarize the variable of interest. For each category, the non-binary variable of interest is transformed into a binary variable taking the value 1 to indicate the presence of the category (or group of categories) for an individual, and 0 otherwise. Such a dummy variable makes it possible to have a pair of potential outcomes and to oppose one category (or group of categories) to another category (or group of categories). Let E be a non-binary variable of interest. We propose a complete disjunctive coding of the variable E: the initial variable is transformed into a set of binary vectors (dummy variables), E = (Ee : e ∈ {1, ..., |E|}), where each vector (variable) Ee takes the value 0 when its category is absent and 1 when its category is present, which allows us to compute a pairwise TMLE comparing the difference in the outcome between one category and all remaining categories. To illustrate the application of our strategy, we first present the implementation of TMLE to estimate the causal effect of a non-binary variable on an outcome using simulated data. Secondly, we apply our TMLE adaptation to survey data from the French Political Barometer (CEVIPOF) to estimate the causal effect of education level (a five-level variable) on a potential vote in favor of the French extreme-right candidate Jean-Marie Le Pen. Counterfactual reasoning requires us to consider some additional causal assumptions, leading to a different coding of E as a set of binary vectors, E = (Ee : e ∈ {2, ..., |E|}), where each vector (variable) Ee takes the value 0 when the first category (the reference category) is present and 1 when its own category is present, which allows us to apply a pairwise TMLE comparing the difference in the outcome between the first (fixed) level and each remaining level. We confirmed that an increase in the level of education decreases the voting rate for the extreme-right party.
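
The coding step described above can be illustrated with the short sketch below: a complete disjunctive (one-hot) coding of a non-binary exposure, followed by the two contrast schemes (each category versus the rest, and each category versus a fixed reference). The toy data and variable names are assumptions; the TMLE targeting step itself (e.g. fitted with a super learner) is not shown.

```python
import pandas as pd

df = pd.DataFrame({
    "education": ["primary", "secondary", "tertiary", "secondary", "primary"],
    "vote_far_right": [1, 0, 0, 1, 0],
})

# complete disjunctive coding: one dummy E_e per category of the exposure
dummies = pd.get_dummies(df["education"], prefix="E").astype(int)

# coding 1: each category versus all remaining categories (E_e = 1 if category e is present)
for col in dummies.columns:
    df[f"{col}_vs_rest"] = dummies[col]

# coding 2: each category versus a fixed reference level (rows of other categories dropped)
reference = "primary"
for level in [c for c in df["education"].unique() if c != reference]:
    subset = df[df["education"].isin([reference, level])].copy()
    subset["E"] = (subset["education"] == level).astype(int)
    # 'subset' now holds a binary exposure suitable for a standard (binary) TMLE fit
```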

Keywords: statistical inference, causal inference, super learning, targeted maximum likelihood estimation

Procedia PDF Downloads 103
56 Effects of Nutrients Supply on Milk Yield, Composition and Enteric Methane Gas Emissions from Smallholder Dairy Farms in Rwanda

Authors: Jean De Dieu Ayabagabo, Paul A. Onjoro, Karubiu P. Migwi, Marie C. Dusingize

Abstract:

This study investigated the effects of feed on milk yield and quality, through feed monitoring and quality assessment, and the consequent enteric methane emissions from smallholder dairy farms in the drier areas of Rwanda, using the Tier II approach over four seasons in three zones, namely Mayaga and peripheral Bugesera (MPB), Eastern Savanna and Central Bugesera (ESCB), and Eastern Plateau (EP). The study was carried out on 186 dairy cows with a mean live weight of 292 kg in three communal cowsheds. The milk quality analysis was carried out on 418 samples. Methane emission was estimated using prediction equations. Data collected were subjected to ANOVA. Dry matter intake was lower (p<0.05) in the long dry season (7.24 kg), with the ESCB zone having the highest value of 9.10 kg, explained by the crop-livestock integration practised in that zone. Dry matter digestibility varied between seasons and zones, ranging from 52.5 to 56.4% across seasons and from 51.9 to 57.5% across zones. The daily protein supply was higher (p<0.05) in the long rain season, at 969 g. The mean daily milk production of lactating cows was 5.6 L, with a lower value (p<0.05) during the long dry season (4.76 L) and the MPB zone having the lowest value of 4.65 L. The yearly milk production per cow was 1179 L. Milk fat varied from 3.79 to 5.49% with seasonal and zone variation; no variation was observed for milk protein. The seasonal daily methane emission varied from 150 g in the long dry season to 174 g in the long rain season (p<0.05). The rain season had the highest methane emission, as it is associated with high forage intake. The mean emission factor was 59.4 kg of methane per year. The present EFs were higher than the default IPCC Tier I value of 41 kg for livestock in Africa, the Middle East, and other tropical developing regions, owing to the higher live weight in the current study. The methane emission per unit of milk production was lower in the EP zone (46.8 g/L) due to the feed efficiency observed in that zone. Farmers should use high-quality feeds to increase milk yield and reduce the methane produced per unit of milk. For an accurate assessment of the methane produced from dairy farms, there is a need to use a Life Cycle Assessment approach that considers all sources of emissions.
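
The Tier II emission factors discussed above are conventionally derived from the gross energy intake of the animal; the standard IPCC (2006) form of the calculation, given here for context (the exact prediction equations used in the study are not stated in the abstract), is

\[
\mathrm{EF} = \frac{\mathrm{GE} \times \frac{Y_m}{100} \times 365}{55.65},
\]

where EF is the emission factor (kg CH4 head⁻¹ yr⁻¹), GE the gross energy intake (MJ head⁻¹ day⁻¹), \(Y_m\) the methane conversion factor (% of gross energy converted to CH4), and 55.65 MJ (kg CH4)⁻¹ the energy content of methane.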

Keywords: footprint, forage, girinka, tier

Procedia PDF Downloads 205
55 Intergenerational Succession within Family Businesses: The Role of Sharing and Creation Knowledge

Authors: Wissal Ben Arfi, Jean-Michel Sahut

Abstract:

The purpose of this paper is to provide a deeper understanding of the succession process from a knowledge management perspective. In doing so, the succession process in family businesses was explored as an environment for creating and sharing knowledge. Design/Methodology/Approach: To support our reasoning, we collected qualitative data through 16 in-depth interviews conducted with all decision makers involved in the succession process of family businesses in France. These open-ended responses were subsequently subjected to thematic discourse analysis. Findings: Central to this exhibit is the nature and magnitude of knowledge creation and sharing among the actors within the family succession context, and how tacit knowledge sharing can facilitate the succession process. We also identified factors that inhibit the knowledge creation and sharing processes. The sharing and creation of knowledge among members of a family business appear to be a complex process that must be part of a strategy for change. This implies that it requires trust and takes a certain amount of time, because it requires organizational change and a clear, coherent strategic vision that is accepted and assimilated by all members. Professional and leadership skills are of particular importance in knowledge sharing and creation processes. In most cases, tacit knowledge is crucial when it is shared and accumulated collectively. Our findings reveal that managers should find ways of implementing knowledge sharing and creation processes while acknowledging the succession process within family firms. This study highlights the importance of generating knowledge strategies in order to enhance the performance and success of intergenerational succession. The empirical outcomes contribute to enriching the field of succession management and enhance the role of knowledge in shaping family firm performance and longevity. To a large extent, the lesson learned from studying succession processes in family-owned businesses is that when there is a deliberate effort to introduce a knowledge-based approach, this action becomes a seminal event in the life of the organization. Originality/Value: The paper contributes to a deep understanding of the interactions among actors by examining the knowledge creation and sharing processes, since current research on family succession has focused on aspects such as the personal development of potential successors, intra-family succession intention, and decision-making processes in family businesses. Besides, as succession is one of the key factors that determine the longevity and performance of family businesses, the paper also contributes to the literature by examining how tacit knowledge is transferred, shared and created in family businesses and how this can facilitate the intergenerational succession process.

Keywords: family-owned businesses, succession process, knowledge, performance

Procedia PDF Downloads 208
54 Stilbenes as Sustainable Antimicrobial Compounds to Control Vitis Vinifera Diseases

Authors: David Taillis, Oussama Becissa, Julien Gabaston, Jean-Michel Merillon, Tristan Richard, Stephanie Cluzet

Abstract:

Nowadays, there is strong pressure to reduce synthetic phytosanitary inputs in vineyards. It is therefore necessary to find viable alternatives to protect the vine against its major diseases. For this purpose, we propose the use of a plant extract enriched in antimicrobial compounds. Because it is produced from vine trunks and roots, which are co-products of wine production, the extract is part of a circular economy. The antimicrobial molecules present in this plant material are polyphenols and, more particularly, stilbenes, which are derived from a common base, the resveratrol unit, and are well-known vine phytoalexins. The stilbenoids were extracted from trunks and roots (30/70, w/w) by a double extraction with ethyl acetate followed by enrichment by liquid-liquid extraction. The extract was characterized by UHPLC-MS, and its antimicrobial activities were tested on Plasmopara viticola and Botrytis cinerea in the laboratory and/or in the greenhouse and vineyard. The major compounds were purified, and their antimicrobial activity was evaluated on B. cinerea. Moreover, after spraying, the effect of the stilbene extract on the plant defence status was evaluated by analysis of defence gene expression. UHPLC-MS analysis revealed that the extract contains 50% stilbenes, with resveratrol, ε-viniferin and r-viniferin as the major compounds. The extract showed antimicrobial activity against P. viticola, with IC₅₀ and IC₁₀₀ values of 90 and 300 mg/L, respectively, in the laboratory. In addition, it inhibited 40% of downy mildew development in the greenhouse. However, probably because of the sensitivity of stilbenes to the environment, such as UV degradation, no activity was observed against P. viticola in the vineyard. For B. cinerea, the extract IC₅₀ was 123 mg/L, with resveratrol and ε-viniferin being the most active stilbenes (IC₅₀ of 88 and 142 mg/L, respectively). The analysis of defence gene expression revealed that the extract can induce the expression of some defence genes 24, 48, and 72 hours after treatment, meaning that the extract has a defence-stimulating effect at least for the first three days after treatment. In conclusion, we produced a plant extract enriched in stilbenes with antimicrobial properties against two major grapevine pathogens, P. viticola and B. cinerea. In addition, we showed that this extract displays an eliciting activity on plant defences. After formulation development, this extract could therefore represent a viable eco-friendly alternative for vineyard protection. Subsequently, the effect of the stilbenoid extract on primary metabolism will be evaluated by quantitative NMR.

Keywords: antimicrobial, bioprotection, grapevine, Plasmopara viticola, stilbene

Procedia PDF Downloads 218
53 Technical and Economic Potential of Partial Electrification of Railway Lines

Authors: Rafael Martins Manzano Silva, Jean-Francois Tremong

Abstract:

Electrification of railway lines allows an increase in the speed, power, capacity and energy efficiency of rolling stock. However, this electrification process is complex and costly. An electrification project is not just about designing the catenary; it also includes the installation of the structures surrounding electrification, such as substations, electrical isolation, signalling, telecommunications and civil engineering structures. France has more than 30,000 km of railways, of which only 53% are electrified. The other 47% are operated with diesel locomotives and represent only 10% of the traffic (tonne-km). For this reason, a new type of electrification, less expensive than the conventional one, is required to enable the modernization of these railways. One solution could be the use of hybrid trains. This technology opens up new opportunities for less expensive infrastructure development, such as the partial electrification of railway lines. On a partially electrified railway, these hybrid trains could be supplied either by the catenary or by an on-board energy storage system (ESS). The on-board ESS would thus cover the energy needs of the train along the non-electrified zones, while in electrified zones the catenary would feed the train and recharge the on-board ESS. This paper deals with identifying the technical and economic potential of partial electrification of railway lines. The study provides different electrification scenarios in which the most expensive places to electrify are covered by the on-board ESS instead. The target is to reduce the cost of new electrification projects, i.e., to reduce the cost of the electrification infrastructure without increasing the cost of the rolling stock. In this study, scenarios are constructed as a function of the electrification cost of each structure. This cost varies considerably because the installation of catenary supports in tunnels, bridges and viaducts is much more expensive than in other zones of the railway. These scenarios are used to describe the power supply system and to choose between the catenary and the on-board energy storage depending on the position of the train on the railway. To identify the influence of each partial electrification scenario on the sizing of the on-board ESS, a model of the railway line and of the rolling stock is developed for a real case, a railway line located in the south of France. The energy consumption and the power demanded at each point of the line for each power supply (catenary or on-board ESS) are provided at the end of the simulation. Finally, the cost of a partial electrification is obtained by adding the civil engineering costs of the zones to be electrified to the cost of the on-board ESS. The study of the technical and economic potential ends with the identification of the most economically interesting electrification scenario.
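
As an illustration of the scenario logic described above, the toy sketch below decides, segment by segment, whether a stretch of line is electrified or covered by the on-board ESS, sizes the ESS on the worst contiguous non-electrified stretch, and sums the resulting scenario cost. All figures (costs per km, energy demand, storage cost, threshold) are invented placeholders, not values from the study.

```python
segments = [
    # (name, length_km, electrification_cost_per_km_eur, energy_need_kwh_per_km)
    ("open track", 40.0, 1.0e6, 12.0),
    ("tunnel",      5.0, 4.0e6, 12.0),
    ("viaduct",     3.0, 3.5e6, 12.0),
]
ess_cost_per_kwh = 600.0          # assumed storage cost
cost_threshold = 2.0e6            # electrify only where cheaper than this (scenario knob)

catenary_cost, run_energy, ess_energy = 0.0, 0.0, 0.0
for name, length, cost_per_km, energy_per_km in segments:
    if cost_per_km <= cost_threshold:
        catenary_cost += length * cost_per_km     # electrified: catenary feeds train, ESS recharges
        run_energy = 0.0                          # contiguous non-electrified run ends here
    else:
        run_energy += length * energy_per_km      # ESS feeds the train on this stretch
        ess_energy = max(ess_energy, run_energy)  # ESS must cover the worst contiguous stretch

total_cost = catenary_cost + ess_energy * ess_cost_per_kwh
print(f"ESS to size: {ess_energy:.0f} kWh, scenario cost: {total_cost / 1e6:.1f} MEUR")
```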

Keywords: electrification, hybrid, railway, storage

Procedia PDF Downloads 431
52 Educating through Design: Eco-Architecture as a Form of Public Awareness

Authors: Carmela Cucuzzella, Jean-Pierre Chupin

Abstract:

Eco-architecture today is increasingly assessed and judged on the basis of its environmental performance and its dedication to urgent issues of sustainability. Architects have responded to environmental imperatives in novel ways since the 1960s. In the last two decades, however, different forms of eco-architecture practice have emerged that seem to be as dedicated to the issues of sustainability as to their ability to 'communicate' their ecological features. The hypothesis is that some contemporary eco-architecture has been developing a characteristic 'explanatory discourse', which can be identified in buildings around the world. Some eco-architecture practices do not simply demonstrate their alignment with pressing ecological issues; rather, these buildings also seem to be driven by the urgent need to explain their 'greenness'. The design aims specifically to teach visitors about the eco-qualities. These types of architectural practices are referred to in this paper as eco-didactic. The aim of this paper is to identify and assess this distinctive form of environmental architecture practice that aims to teach. These buildings constitute an entirely new form of design practice that places eco-messages squarely in the public realm. These eco-messages appear to have a variety of purposes: (i) to raise awareness of unsustainable quotidian habits, (ii) to become a means of behavioral change, (iii) to publicly announce responsibility through the designed eco-features, or (iv) to engage the patrons of the building in some form of sustainable interaction. To do this, a comprehensive review of Canadian eco-architecture since 1998 is conducted. Its potential eco-didactic aspects are analysed through the lens of three vectors: (1) cognitive visitor experience, between the desire to inform and the poetics of form (are parts of the design dedicated to informing visitors of the environmental aspects?); (2) formal architectural qualities, between the visibility and invisibility of environmental features (are these eco-features clearly visible to visitors?); and (3) the communicative method for delivering the eco-message, where the transmission of knowledge is accomplished somewhere between consensus and dissensus (do visitors question the eco-features, or do they accept them as environmental features?). These architectural forms distinguish themselves by crossing disciplines, specifically architecture, environmental design, and art. They also differ from other architectural practices in how they aim to mobilize different publics within various urban landscapes. The diversity of such buildings, from how and what they aim to communicate to the audiences they wish to engage, are all key parameters for better understanding their means of knowledge transfer. Cases from major cities across Canada are analysed, aiming to illustrate this growing worldwide phenomenon.

Keywords: eco-architecture, public awareness, community engagement, didacticism, communication

Procedia PDF Downloads 124
51 Imputation of Incomplete Large-Scale Monitoring Count Data via Penalized Estimation

Authors: Mohamed Dakki, Genevieve Robin, Marie Suet, Abdeljebbar Qninba, Mohamed A. El Agbani, Asmâa Ouassou, Rhimou El Hamoumi, Hichem Azafzaf, Sami Rebah, Claudia Feltrup-Azafzaf, Nafouel Hamouda, Wed a.L. Ibrahim, Hosni H. Asran, Amr A. Elhady, Haitham Ibrahim, Khaled Etayeb, Essam Bouras, Almokhtar Saied, Ashrof Glidan, Bakar M. Habib, Mohamed S. Sayoud, Nadjiba Bendjedda, Laura Dami, Clemence Deschamps, Elie Gaget, Jean-Yves Mondain-Monval, Pierre Defos Du Rau

Abstract:

In biodiversity monitoring, large datasets are becoming more and more widely available and are increasingly used globally to estimate species trends and conservation status. These large-scale datasets challenge existing statistical analysis methods, many of which are not adapted to their size, incompleteness and heterogeneity. The development of scalable methods to impute missing data in incomplete large-scale monitoring datasets is crucial to balance sampling in time or space and thus better inform conservation policies. We developed a new method based on penalized Poisson models to impute and analyse incomplete monitoring data in a large-scale framework. The method allows parameterization of (a) space and time factors, (b) the main effects of predictor covariates, as well as (c) space-time interactions. It also benefits from robust statistical and computational capability in large-scale settings. The method was tested extensively on both simulated and real-life waterbird data, with the findings revealing that it outperforms six existing methods in terms of missing data imputation errors. Applying the method to 16 waterbird species, we estimated their long-term trends for the first time at the scale of the entire North African region, where monitoring data suffer from many gaps in space and time series. This new approach opens promising perspectives for increasing the accuracy of species-abundance trend estimations. We have made it freely available in the R package 'lori' (https://CRAN.R-project.org/package=lori) and recommend its use for large-scale count data, particularly in citizen science monitoring programmes.
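
For readers unfamiliar with penalized Poisson imputation, one generic form of such a model, consistent with the space/time main effects, covariate effects and space-time interactions described above (given only as an orientation; the exact objective implemented in 'lori' may differ), is

\[
y_{ij} \sim \mathrm{Poisson}\!\left(\exp\left(\alpha_i + \beta_j + x_{ij}^{\top}\varepsilon + \Theta_{ij}\right)\right),
\qquad
\min_{\alpha,\beta,\varepsilon,\Theta}\; -\ell(\alpha,\beta,\varepsilon,\Theta) + \lambda_1 \lVert\Theta\rVert_{*} + \lambda_2 \lVert\varepsilon\rVert_{1},
\]

where \(\alpha_i\) and \(\beta_j\) are site and time main effects, \(\varepsilon\) the covariate effects, \(\Theta\) a space-time interaction matrix kept low-rank by the nuclear-norm penalty \(\lVert\cdot\rVert_{*}\), and \(\ell\) the Poisson log-likelihood; missing counts are then imputed from the fitted means.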

Keywords: biodiversity monitoring, high-dimensional statistics, incomplete count data, missing data imputation, waterbird trends in North-Africa

Procedia PDF Downloads 156
50 Semi-Empirical Modeling of Heat Inactivation of Enterococci and Clostridia During the Hygienisation in Anaerobic Digestion Process

Authors: Jihane Saad, Thomas Lendormi, Caroline Le Marechal, Anne-Marie Pourcher, Celine Druilhe, Jean-Louis Lanoiselle

Abstract:

Agricultural anaerobic digestion consists of converting animal slurry and manure into biogas and digestate. According to European regulations (EC No 1069/2009 and EU No 142/2011), these materials must, however, be treated at 70 ºC for 60 min before anaerobic digestion. The impact of such heat treatment on the fate of bacteria has been poorly studied up to now. Moreover, a recent study¹ has shown that enterococci and clostridia are still detected despite the application of this thermal treatment, questioning the relevance of this approach for the hygienisation of digestate. The aim of this study is to establish the heat inactivation kinetics of two species of enterococci (Enterococcus faecalis and Enterococcus faecium) and two species of clostridia (Clostridioides difficile and Clostridium novyi, as a non-toxic model for Clostridium botulinum of group III). A pure culture of each strain was prepared in a specific sterile medium at concentrations of 10⁴-10⁷ MPN/mL (most probable number), depending on the bacterial species. Bacterial suspensions were then filled into sterilized capillary tubes and placed in a water or oil bath at the desired temperature for a specific period of time. Each bacterial suspension was enumerated using an MPN approach, and tests were repeated three times for each temperature/time couple. The inactivation kinetics of the four indicator bacteria are described using the Weibull model and the classical Bigelow model of first-order kinetics. The Weibull model takes biological variation with respect to thermal inactivation into account and is basically a statistical model of the distribution of inactivation times; the classical first-order approach is a special case of the Weibull model. The heat treatment at 70 ºC / 60 min achieves a reduction greater than 5 log10 for E. faecium and E. faecalis. However, it results only in a reduction of about 0.7 log10 for C. difficile and an increase of 0.5 log10 for C. novyi. Treatments at higher temperatures are required to reach a reduction greater than or equal to 3 log10 for C. novyi (such as 30 min / 100 ºC, 13 min / 105 ºC, 3 min / 110 ºC, and 1 min / 115 ºC), raising the question of the relevance of the 70 ºC / 60 min heat treatment for these spore-forming bacteria. To conclude, the heat treatment (70 ºC / 60 min) defined by the European regulation is sufficient to inactivate non-sporulating bacteria, whereas higher temperatures (> 100 ºC) are required for spore-forming bacteria to reach a 3 log10 reduction (sporicidal activity).
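
The two survival models mentioned above are commonly written as follows (standard parameterizations, stated here for context; the abstract does not give the exact notation used by the authors):

\[
\log_{10}\frac{N(t)}{N_0} = -\left(\frac{t}{\delta}\right)^{p}
\quad\text{(Weibull)},
\qquad
\log_{10}\frac{N(t)}{N_0} = -\frac{t}{D}
\quad\text{(first-order, Bigelow)},
\]

where \(N(t)\) is the surviving population, \(\delta\) the time to the first decimal reduction, \(p\) a shape parameter describing the biological variability of heat resistance, and \(D\) the decimal reduction time; the first-order model is recovered from the Weibull model when \(p = 1\) (with \(\delta = D\)).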

Keywords: heat treatment, enterococci, clostridia, inactivation kinetics

Procedia PDF Downloads 113
49 Experimental Recovery of Gold, Silver and Palladium from Electronic Wastes Using Ionic Liquids BmimHSO4 and BmimCl as Solvents

Authors: Lisa Shambare, Jean Mulopo, Sehliselo Ndlovu

Abstract:

One of the major challenges of sustainable development is promoting an industry that is both ecologically durable and economically viable. This requires processes that are material- and energy-efficient while also limiting the production of waste and toxic effluents through effective methods of process synthesis and intensification. In South Africa and globally, miniaturisation and technological advances have substantially increased the amount of electronic waste (e-waste) generated annually, with only a minute quantity being recycled officially. The demand for electronic devices cannot ignore the scarcity and the mining cost of the noble metal resources that contribute significantly to the efficiency of most electronic devices. It has hence become imperative, especially in an African context, that sustainable and environmentally friendly strategies be developed for recycling noble metals from e-waste. This paper investigates the recovery of gold, silver and palladium from electronic wastes, which consist of a vast array of metals, using ionic liquids, which have the potential to reduce the gaseous and aqueous emissions associated with existing hydrometallurgical and pyrometallurgical technologies while maintaining the economy of the overall recycling scheme through solvent recovery. The ionic liquid 1-butyl-3-methylimidazolium hydrogen sulphate (BmimHSO4), which behaves like a protic acid, was used in the present research for the selective leaching of gold and silver from e-waste. Different concentrations of the aqueous ionic liquid, ranging from 10% to 50%, were used in the experiments. Thiourea was used as the complexing agent, with Fe3+ as the oxidant. The pH of the reaction was maintained in the range of 0.8 to 1.5. The preliminary investigations succeeded in leaching silver and palladium at room temperature, with optimum results at 48 h. These leaching results could not be fully explained, since palladium was leached while gold was not; hence no conclusion could be drawn, and further experiments are needed. The leaching of palladium was carried out with hydrogen peroxide as the oxidant and 1-butyl-3-methylimidazolium chloride (BmimCl) as the solvent. These experiments were carried out at a temperature of 60 °C and a very low pH, with the chloride ion used to complex the palladium. From the preliminary results, it could be concluded that pretreatment of the e-waste is necessary to improve the efficiency of the metal recovery process, and that no firm conclusion can yet be drawn from the leaching experiments.

Keywords: BmimCl, BmimHSO4, gold, palladium, silver

Procedia PDF Downloads 290
48 Li2S Nanoparticles Impact on the First Charge of Li-ion/Sulfur Batteries: An Operando XAS/XES Coupled With XRD Analysis

Authors: Alice Robba, Renaud Bouchet, Celine Barchasz, Jean-Francois Colin, Erik Elkaim, Kristina Kvashnina, Gavin Vaughan, Matjaz Kavcic, Fannie Alloin

Abstract:

With their high theoretical energy density (~2600 Wh.kg-1), lithium/sulfur (Li/S) batteries are highly promising, but these systems are still poorly understood due to the complex mechanisms and equilibria involved. Replacing S8 by Li2S as the active material allows the use of safer negative electrodes, such as silicon, instead of lithium metal. S8 and Li2S have different conductivity and solubility properties, resulting in a profoundly changed activation process during the first cycle. In particular, a high polarization and a lack of reproducibility between tests are observed during the first charge. Differences observed between the raw Li2S material (micron-sized) and that electrochemically produced in a battery (nano-sized) may indicate that the electrochemical process depends on the particle size. The major focus of the presented work is therefore to deepen the understanding of the charge mechanism of the Li2S material, and more precisely to characterize the effect of the initial Li2S particle size both on the mechanism and on the electrode preparation process. To do so, Li2S nanoparticles were synthesized in two ways, a liquid-phase synthesis and dissolution in ethanol, allowing Li2S nanoparticle/carbon composites to be made. Preliminary chemical and electrochemical tests show that starting with Li2S nanoparticles can effectively suppress the high initial polarization but also influences the electrode slurry preparation. Indeed, it has been shown that the classical formulation process, a slurry composed of polyvinylidene fluoride (PVDF) dissolved in N-methyl-2-pyrrolidone, cannot be used with Li2S nanoparticles. This reveals a completely different behavior of the Li2S material towards polymers and organic solvents at the nanometric scale. The coupling of two operando characterizations, X-ray diffraction (XRD) and X-ray absorption and emission spectroscopy (XAS/XES), was then carried out in order to interpret the poorly understood first charge. This study discloses that the initial particle size of the active material has a great impact on the working mechanism and particularly on the different equilibria involved during the first charge of Li2S-based Li-ion batteries. These results explain the electrochemical differences, and particularly the polarization differences, observed during the first charge between micrometric and nanometric Li2S-based electrodes. Finally, this work could lead to better active material design and thus to more efficient Li2S-based batteries.

Keywords: Li-ion/Sulfur batteries, Li2S nanoparticles effect, Operando characterizations, working mechanism

Procedia PDF Downloads 266
47 Influence of Controlled Retting on the Quality of the Hemp Fibres Harvested at the Seed Maturity by Using a Designed Lab-Scale Pilot Unit

Authors: Brahim Mazian, Anne Bergeret, Jean-Charles Benezet, Sandrine Bayle, Luc Malhautier

Abstract:

Hemp fibres are increasingly used as reinforcements in polymer matrix composites due to their competitive performance (low density, good mechanical properties and biodegradability) compared to conventional fibres such as glass fibres. However, the large variation in their biochemical, physical and mechanical properties limits the use of these natural fibres in structural applications, where high consistency and homogeneity are required. In the hemp industry, a traditional process termed field retting is commonly used to facilitate the extraction and separation of stem fibres. This retting treatment consists of spreading the stems out on the ground for a duration ranging from a few days to several weeks. Microorganisms (fungi and bacteria) grow on the stem surface and produce enzymes that degrade the pectic substances of the middle lamellae surrounding the fibres. This operation depends on the weather conditions and is currently carried out very empirically in the fields, resulting in large variability in hemp fibre quality (mechanical properties, colour, morphology, chemical composition, etc.). Nonetheless, if controlled, retting might be favourable to the properties of hemp fibres and hence of hemp-fibre-reinforced composites. The present study therefore aims to investigate the influence of controlled retting, within a designed environmental chamber (lab-scale pilot unit), on the quality of hemp fibres harvested at the seed maturity growth stage. Various assessments were applied directly to the fibres: colour observations, morphological (optical microscopy) and surface (ESEM) analyses, biochemical (gravimetric) and spectrocolorimetric (pectin content) measurements, thermogravimetric analysis (TGA) and tensile testing. The results reveal that controlled retting leads to a rapid change of colour from yellow to dark grey due to the development of microbial communities (fungi and bacteria) at the stem surface. An increase in the thermal stability of the fibres, due to the removal of non-cellulosic components during retting, is also observed. Separation of the bast fibre bundles into elementary fibres occurred, with an evolution of the chemical composition (degradation of pectins) and a rapid decrease in tensile properties (from 380 MPa to 170 MPa after 3 weeks) due to the accelerated retting process. The influence of controlled retting on the properties of the biocomposite material (PP/hemp fibres) is under investigation.

Keywords: controlled retting, hemp fibre, mechanical properties, thermal stability

Procedia PDF Downloads 155
46 Adaptation of the Scenario Test for Greek-speaking People with Aphasia: Reliability and Validity Study

Authors: Marina Charalambous, Phivos Phylactou, Thekla Elriz, Loukia Psychogios, Jean-Marie Annoni

Abstract:

Background: Evidence-based practices for the evaluation and treatment of people with aphasia (PWA) in Greek are mainly impairment-based. Functional and multimodal communication is usually under-assessed and neglected by clinicians. This study explores the adaptation and psychometric testing of the Greek (GR) version of The Scenario Test. The Scenario Test assesses the everyday functional communication of PWA in an interactive multimodal communication setting, with the support of an active communication facilitator. Aims: To establish the reliability and validity of The Scenario Test-GR and discuss its clinical value. Methods & Procedures: The Scenario Test-GR was administered to 54 people with chronic stroke (6+ months post-stroke): 32 PWA and 22 people with stroke without aphasia. Participants were recruited from Greece and Cyprus, and all measures were administered in an interview format. Standard psychometric criteria were applied to evaluate the reliability (internal consistency, test-retest and interrater reliability) and validity (construct and known-groups validity) of The Scenario Test-GR. Video analysis was performed for the qualitative examination of the communication modes used. Outcomes & Results: The Scenario Test-GR shows high levels of reliability and validity. High internal consistency (Cronbach’s α = .95), test-retest reliability (ICC = .99) and interrater reliability (ICC = .99) were found. Interrater agreement on individual item scores ranged from good to excellent. Correlations with a tool measuring language function in aphasia (the Aphasia Severity Rating Scale of the Boston Diagnostic Aphasia Examination), a measure of functional communication (the Communicative Effectiveness Index), and two instruments examining the psychosocial impact of aphasia (the Stroke and Aphasia Quality of Life questionnaire and the Aphasia Impact Questionnaire) revealed good convergent validity (all ps < .05). Results showed good known-groups validity (Mann-Whitney U = 96.5, p < .001), with significantly higher scores for participants without aphasia than for those with aphasia. Conclusions: The psychometric qualities of The Scenario Test-GR support the reliability and validity of the tool for assessing the functional communication of Greek-speaking PWA. The Scenario Test-GR can be used to assess multimodal functional communication, to orient aphasia rehabilitation goal setting towards the activity and participation level, and as an outcome measure of everyday communication. Future studies will focus on measuring sensitivity to change in PWA with severe non-fluent aphasia.
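
For readers wishing to reproduce the style of psychometric analysis reported above, the minimal sketch below shows how internal consistency (Cronbach's alpha) and the known-groups comparison (Mann-Whitney U) might be computed; the item matrix, item count and group scores are simulated placeholders, not study data, and the scoring range is assumed for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu


def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)


rng = np.random.default_rng(0)
# Simulated item scores for 32 PWA (placeholder values, not study data).
pwa_items = rng.integers(0, 4, size=(32, 18)).astype(float)
print(f"Cronbach's alpha: {cronbach_alpha(pwa_items):.2f}")

# Known-groups validity: totals for stroke participants without aphasia vs. PWA.
pwa_totals = pwa_items.sum(axis=1)
control_totals = rng.normal(loc=pwa_totals.mean() + 15.0, scale=3.0, size=22)
u_stat, p_value = mannwhitneyu(control_totals, pwa_totals, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4g}")
```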

Keywords: the scenario test GR, functional communication assessment, people with aphasia (PWA), tool validation

Procedia PDF Downloads 128
45 Data Analysis Tool for Predicting Water Scarcity in Industry

Authors: Tassadit Issaadi Hamitouche, Nicolas Gillard, Jean Petit, Valerie Lavaste, Celine Mayousse

Abstract:

Water is a fundamental resource for industry. It is taken from the environment, either from municipal distribution networks or from various natural water sources such as the sea, ocean, rivers, aquifers, etc. Once used, water is discharged into the environment, reprocessed at the plant, or sent to treatment plants. These withdrawals and discharges have a direct impact on natural water resources. The impacts may concern the quantity of water available, the quality of the water used, or effects that are less direct and more complex to measure, such as the health of the population downstream of the watercourse. Based on the analysis of data (meteorology, river characteristics, physicochemical substances), we aim to predict water stress episodes and anticipate prefectoral decrees that can affect plant performance, to propose improvement solutions, to help industrialists choose the location of a new plant, to visualize possible interactions between companies in order to optimize exchanges and encourage the pooling of water treatment solutions, and to set up circular economies around the issue of water. The development of a system for the collection, processing and use of data related to water resources requires the functional constraints specific to these data to be made explicit. The system must be able to store a large amount of sensor data, the main type of data in plants and their environment. In addition, manufacturers need near-real-time processing of information in order to make the best decisions, for example, to be rapidly notified of an event that would have a significant impact on water resources. Finally, the visualization of the data must be adapted to its temporal and geographical dimensions. In this study, we set up an infrastructure centered on the TICK application stack (Telegraf, InfluxDB, Chronograf and Kapacitor), a set of loosely coupled but tightly integrated open-source projects designed to manage huge amounts of time-stamped information. The software architecture is coupled with the Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology. The robust architecture and the methodology used have demonstrated their effectiveness on the case study of learning the level of a river with a 7-day horizon. The management of water, and of the plant activities that depend on this resource, should be considerably improved thanks, on the one hand, to the learning that allows periods of water stress to be anticipated and, on the other hand, to the information system, which is able to warn decision-makers with alerts created from the formalization of prefectoral decrees.
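
As an illustration of the kind of learning task mentioned above (a 7-day-ahead river level forecast from daily sensor data), the sketch below builds lagged features and fits a gradient-boosted regressor with a chronological train/test split. The CSV file and column names are hypothetical, and the system described in the abstract relies on the TICK stack and the CRISP-DM methodology rather than on this exact pipeline.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

HORIZON = 7  # forecast horizon in days

# Hypothetical daily series: river level plus a meteorological driver.
df = pd.read_csv("river_daily.csv", parse_dates=["date"], index_col="date")

# Lagged features: past levels and rainfall over the last 14 days.
features = {}
for lag in range(1, 15):
    features[f"level_lag{lag}"] = df["level"].shift(lag)
    features[f"rain_lag{lag}"] = df["rainfall"].shift(lag)
X = pd.DataFrame(features)
y = df["level"].shift(-HORIZON).rename("target")  # level 7 days ahead

data = pd.concat([X, y], axis=1).dropna()
split = int(len(data) * 0.8)  # chronological split, no shuffling
train, test = data.iloc[:split], data.iloc[split:]

model = GradientBoostingRegressor(random_state=0)
model.fit(train.drop(columns="target"), train["target"])
pred = model.predict(test.drop(columns="target"))
print("MAE at 7-day horizon:", mean_absolute_error(test["target"], pred))
```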

Keywords: data mining, industry, machine learning, shortage, water resources

Procedia PDF Downloads 121
44 Sintering of YNbO3:Eu3+ Compound: Correlation between Luminescence and Spark Plasma Sintering Effect

Authors: Veronique Jubera, Ka-Young Kim, U-Chan Chung, Amelie Veillere, Jean-Marc Heintz

Abstract:

Emitting materials and all-solid-state lasers are widely used in optical applications and materials science, as excitation sources and for instrumental measurements, medical applications, metal shaping, etc. Recently, promising optical efficiencies have been recorded on ceramics, which offer a cheaper and faster route to crystallized materials. The choice and optimization of the sintering process is the key point in fabricating transparent ceramics. It requires tight control of the powder preparation, with the choice of an adequate synthesis route and pre-heat-treatment, the reproducibility of the sintering cycle, and the polishing and post-annealing of the ceramic. Densification is the main factor needed to reach satisfactory transparency, and many technologies are now available. The symmetry of the unit cell plays a crucial role in the diffusion rate of the material; cubic-symmetry compounds, which have an isotropic refractive index, are therefore preferred. The cubic Y3NbO7 matrix is an interesting host which can accept a high concentration of rare-earth doping elements, and it has been demonstrated that spark plasma sintering (SPS) is an efficient way to sinter this material. Minimizing diffusion losses requires a fine ceramic microstructure, generally below one hundred nanometers; in this case, grain growth is not an obstacle to transparency. The ceramic properties are then isotropic, which frees the shaping step from the need to orient the ceramics, as is the case for compounds of lower symmetry. After optimization of the synthesis route, several SPS parameters, such as heating rate, holding/dwell time and pressure, were adjusted in order to increase the densification of the Eu3+-doped Y3NbO7 pellets. The luminescence data, coupled with X-ray diffraction analysis and electron diffraction microscopy, highlight the existence of several distorted environments of the doping element in the studied defective fluorite-type host lattice. Indeed, the fast and high crystallization rate obtained revealed a lack of miscibility in the phase diagram, the final composition of the pellet being driven by the ratio between the niobium and yttrium elements. By following the luminescence properties, we demonstrate the direct impact of the SPS process on this material.

Keywords: emission, rare-earth niobate, spark plasma sintering, lack of miscibility

Procedia PDF Downloads 268
43 Management of Caverno-Venous Leakage: A Series of 133 Patients with Symptoms, Hemodynamic Workup, and Results of Surgery

Authors: Allaire Eric, Hauet Pascal, Floresco Jean, Beley Sebastien, Sussman Helene, Virag Ronald

Abstract:

Background: Caverno-venous leakage (CVL) is a devastating, though barely known, disease. It is the first cause of major physical impairment in men under 25 and is responsible for 50% of resistance to phosphodiesterase-5 inhibitors (PDE5-I), affecting 30 to 40% of users of this medication class. In this condition, premature blood drainage from the corpora cavernosa prevents penile rigidity and penetration during sexual intercourse. The role of conservative surgery in this disease remains controversial. Aim: To assess the complications and results of combined open surgery and embolization for CVL. Method: Between June 2016 and September 2021, 133 consecutive patients underwent surgery in our institution for CVL causing severe erectile dysfunction (ED) resistant to oral medical treatment. Procedures combined vein embolization and ligation with microsurgical techniques. We performed pre- and post-operative clinical (Erection Hardness Score: EHS) and hemodynamic (penile duplex sonography) evaluations in all patients. Before surgery, the CVL network was visualized by computed tomography cavernography. Penile EMG was performed in cases of diabetes or other suspected neurological conditions. All patients were optimized for hormonal status, and data were prospectively recorded. Results: Clinical signs suggesting CVL were ED starting before the age of 25, loss of erection when changing position, and penile rigidity varying according to position. The main complications were minor pulmonary embolism in 2 patients (one after air travel, one with a heterozygous Factor V Leiden mutation), one infection, three hematomas requiring reoperation, and one case of decreased glans sensitivity lasting more than one year. The mean pre-operative pharmacologic EHS was 2.37 ± 0.64 and the mean post-operative pharmacologic EHS was 3.21 ± 0.60, p < 0.0001 (paired t-test); the mean EHS change was 0.87 ± 0.74. After surgery, 81.5% of patients had a pharmacologic EHS equal to or over 3, allowing intercourse with penetration. Three patients (2.2%) experienced a lower post-operative EHS. The main cause of failure was leakage from the deep dorsal aspect of the corpora cavernosa. At 14 months of follow-up, 83.2% of patients had a clinical EHS equal to or over 3, allowing sexual intercourse with penetration, one-third of them without any medication. Five patients received a penile implant after unsuccessful conservative surgery. Conclusion: Open surgery combined with embolization is an efficient approach to CVL causing severe erectile dysfunction.
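
The pre- versus post-operative EHS comparison reported above is a paired t-test; a minimal sketch of that analysis is shown below, with simulated scores standing in for the patient data (the means and standard deviations are only used to generate placeholder values).

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
# Simulated pharmacologic EHS scores on a 1-4 scale (placeholders, not patient data).
pre_ehs = np.clip(rng.normal(2.37, 0.64, size=133), 1, 4)
post_ehs = np.clip(pre_ehs + rng.normal(0.87, 0.74, size=133), 1, 4)

t_stat, p_value = ttest_rel(post_ehs, pre_ehs)
print(f"mean change = {np.mean(post_ehs - pre_ehs):.2f}, t = {t_stat:.2f}, p = {p_value:.2e}")
print(f"EHS >= 3 after surgery: {np.mean(post_ehs >= 3):.1%}")
```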

Keywords: erectile dysfunction, cavernovenous leakage, surgery, embolization, treatment, result, complications, penile duplex sonography

Procedia PDF Downloads 149
42 Randomized Controlled Trial for the Management of Pain and Anxiety Using Virtual Reality During the Care of Older Hospitalized Patients

Authors: Corbel Camille, Le Cerf Flora, Capriz Françoise, Vaillant-Ciszewicz Anne-Julie, Breaud Jean, Guerin Olivier, Corveleyn Xavier

Abstract:

Background: The medical environment can generate stressful and anxiety-provoking situations for patients, particularly during painful care procedures for the older population. These stressful environments have deleterious effects on the quality of care, can put the patient at risk, and set the care team up for failure. The search for a solution is therefore imperative, and the development of new technologies, such as virtual reality (VR), seems to offer an answer. Objectives: The objective of this study is to compare the effects of virtual reality with those of usual care on pain and anxiety during the care of older hospitalized people. More precisely, different individual factors (age, cognitive level, individual preferences, etc.) and different virtual reality universes (personalized or non-personalized) are studied to understand their role in reducing pain and anxiety during care procedures. The aim is to improve the quality of life of patients and of caregivers in their work environment. Method: This single-center randomized controlled study was conducted from September 2023 to September 2024 on 120 participants recruited from the geriatric departments of the Cimiez Hospital, Nice, France. Participants were randomized into three groups: a control group, a personalized-VR group and a non-personalized-VR group. Each participant was followed during a painful care session. Data were collected before, during and after the care, using measures of pain (Algoplus and a numerical scale) and anxiety (Hospital Anxiety scale and a numerical scale). Physiological assessments with an oximeter were also performed to collect heart rate and respiratory rate measurements. The implementation of the care was assessed among healthcare providers to evaluate its effects on the difficulty and fatigue associated with the care. Additionally, a questionnaire (System Usability Scale) will be administered at the conclusion of the study to determine the willingness of healthcare providers to integrate VR into their daily care practices. Results: The preliminary results indicate significant effects on anxiety (p = .001) and pain (p < .001) following the VR intervention during care, compared to the control group. Conclusion: The preliminary results suggest that the VR intervention is a suitable and effective method for reducing anxiety and pain in older hospitalized individuals compared with standard care. Finally, the experiences of the healthcare professionals involved will also be considered in order to assess the impact of these interventions on working conditions and patient support.

Keywords: anxiety, care, pain, older adults, virtual reality

Procedia PDF Downloads 73
41 Combining Nitrocarburisation and Dry Lubrication for Improving Component Lifetime

Authors: Kaushik Vaideeswaran, Jean Gobet, Patrick Margraf, Olha Sereda

Abstract:

Nitrocarburisation is a surface hardening technique often applied to improve the wear resistance of steel surfaces. It is considered a promising solution compared with other processes, such as flame spraying, owing to the formation of a diffusion layer, which provides mechanical integrity, and to its cost-effectiveness. To improve other tribological properties of the surface, such as the coefficient of friction (COF), dry lubricants are utilized. Currently, the lifetime of steel components in many applications using either of these techniques individually is limited by their respective drawbacks: a high COF for nitrocarburized surfaces and low wear resistance for dry lubricant coatings. To this end, the current study involves the creation of a hybrid surface by impregnating a nitrocarburized surface with a dry lubricant. The mechanical strength and hardness of Gerster SA's nitrocarburized surfaces, combined with the impregnation of the porous outermost layer with a solid lubricant, create a hybrid surface possessing both outstanding wear resistance and a low friction coefficient, with high adherence to the substrate. Gerster SA has state-of-the-art technology for the surface hardening of various steels. Through this expertise, the nitrocarburizing process parameters (atmosphere, temperature, dwell time) were optimized to obtain samples with a distinct porous structure (in terms of pore size, shape and density), as observed by metallographic and microscopic analyses. The porosity thus obtained is suitable for the impregnation of a dry lubricant. A commercially available dry lubricant with a thermoplastic matrix was employed for the impregnation process, which was optimized to obtain a void-free interface with the surface of the nitrocarburized layer (henceforth called the hybrid surface). In parallel, metallic samples without nitrocarburisation were treated with the same dry lubricant as a reference (henceforth called the reference surface). The reference and nitrocarburized surfaces, with and without dry lubricant, were tested for their tribological behavior by sliding against a quenched steel ball using a nanotribometer. Without any lubricant, the nitrocarburized surface showed a wear rate 5x lower than that of the reference metal. In the presence of a thin film of dry lubricant (< 2 micrometers) and under high loads (500 mN, corresponding to ~800 MPa contact pressure), the COF of the reference surface increased from ~0.1 to > 0.3 within 120 m of sliding, whereas the hybrid surface retained a COF < 0.2 for over 400 m. In addition, the steel ball sliding against the reference surface showed heavy wear, while the corresponding ball sliding against the hybrid surface showed very limited wear. Electron microscopy observations of the sliding tracks on the hybrid surface show the presence of the nitrocarburized nodules as well as the lubricant, whereas no traces of lubricant were found in the sliding track on the reference surface. In this manner, the clear advantage of combining nitrocarburisation with dry-lubricant impregnation to form a hybrid surface has been demonstrated.
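
For context, the quoted equivalence between a 500 mN load and a contact pressure of the order of 800 MPa follows from standard Hertzian ball-on-flat contact; the relations below are the textbook ones, with the ball radius R and the effective modulus E* not reported in the abstract:

\[ a = \left( \frac{3 F R}{4 E^{*}} \right)^{1/3}, \qquad p_{\mathrm{mean}} = \frac{F}{\pi a^{2}}, \qquad p_{\mathrm{max}} = \frac{3F}{2\pi a^{2}}, \qquad \frac{1}{E^{*}} = \frac{1-\nu_1^{2}}{E_1} + \frac{1-\nu_2^{2}}{E_2} \]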

Keywords: dry lubrication, hybrid surfaces, improved wear resistance, nitrocarburisation, steels

Procedia PDF Downloads 122
40 Argos System: Improvements and Future of the Constellation

Authors: Sophie Baudel, Aline Duplaa, Jean Muller, Stephan Lauriol, Yann Bernard

Abstract:

Argos is the main satellite telemetry system used by the wildlife research community since its creation in 1978, for animal tracking and scientific data collection all around the world, in order to analyze and understand animal migrations and behavior. Marine mammal biology is one of the major disciplines that has benefited from Argos telemetry, and conversely, the marine mammal biologist community has contributed greatly to the growth and development of Argos use cases. The Argos constellation, with 6 satellites in orbit in 2017 (Argos 2 payloads on NOAA 15 and NOAA 18, Argos 3 payloads on NOAA 19, SARAL, METOP A and METOP B), is being extended in the following years with an Argos 3 payload on METOP C (launch in October 2018) and Argos 4 payloads on Oceansat 3 (launch in 2019), CDARS in December 2021 (to be confirmed), METOP SG B1 in December 2022, and METOP SG B2 in 2029. Argos 4 will bring a wider frequency band (600 kHz for Argos4NG, instead of 110 kHz for Argos 3), a new modulation dedicated to animal (sea turtle) tracking that allows very low-power transmitters (50 to 100 mW) with very low data rates (124 bps), enhanced high data rates (1200-4800 bps), and improved downlink performance, all contributing to an enhanced system capacity (50,000 active beacons per month instead of 20,000 today). In parallel with this 'institutional Argos' constellation, and in the context of a miniaturization trend in the space industry aimed at reducing costs and multiplying satellites to serve more and more societal needs, the French space agency CNES, which designs the Argos payloads, is innovating with the Argos ANGELS project (Argos NEO Generic Economic Light Satellites). ANGELS will lead to a nanosatellite prototype carrying an Argos NEO instrument (30 cm x 30 cm x 20 cm) that will be launched in 2019. In the meantime, the design of the renewal of the Argos constellation, called Argos For Next Generations (Argos4NG), is on track and will be operational in 2022. Based on Argos 4 and benefiting from the feedback of the ANGELS project, this constellation will allow a revisit time of less than 20 minutes on average between two satellite passes and will also bring more frequency bands to improve the overall capacity of the system. The presentation will give an overview of the Argos system, present and future, and of the new capacities coming with it. On top of that, use cases of two Argos hardware modules will be presented: the goniometer pathfinder, which allows Argos beacons to be recovered at sea or on the ground within a horizon-free circle of 100 km radius around the beacon location, and the new Argos 4 chipset called 'Artic', already available and tested by several manufacturers.
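
To give a rough feel for the data-rate figures quoted above, the sketch below computes the on-air time of a fixed-size message at the low-power (124 bps) and high-rate (1200-4800 bps) modes; the 31-byte payload is an assumed illustrative size, not an Argos specification.

```python
# Back-of-envelope airtime at the data rates quoted for Argos 4.
# The payload size below is an assumption for illustration only.
PAYLOAD_BYTES = 31
payload_bits = PAYLOAD_BYTES * 8

for rate_bps in (124, 1200, 4800):
    airtime_s = payload_bits / rate_bps
    print(f"{rate_bps:>5} bps -> {airtime_s:6.2f} s on air per {PAYLOAD_BYTES}-byte message")
```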

Keywords: Argos satellite telemetry, marine protected areas, oceanography, maritime services

Procedia PDF Downloads 181
39 The French Ekang Ethnographic Dictionary. The Quantum Approach

Authors: Henda Gnakate Biba, Ndassa Mouafon Issa

Abstract:

Dictionaries modeled on the Western model [designed for languages with a tonic accent] are not suitable for tonal languages and do not account for them phonologically, which is why this [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker of the language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary finds its correspondent in the ekaη language, and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to a dialogue between the social and cognitive sciences, artistic creation, and the question of modeling in the human sciences: mathematics, computer science, automated translation and artificial intelligence. When this theory is applied to the text of a folk song in any tonal language, one reconstructs not only the exact melody, rhythm and harmonies of that song, as if it were known in advance, but also the exact speech of the language. The author believes that the issue of the disappearance and preservation of tonal languages has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. The experimentation confirming the theory led to the design of a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect data extracted from his mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
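
To make the idea of writing tonal words on a musical staff concrete, here is a toy sketch that maps the tone marks of a syllable string to pitches; the two-tone inventory, the pitch assignments and the example word are illustrative placeholders, not the mapping used by the authors.

```python
# Toy illustration: map syllable tone marks to pitches so that a tonal word
# can be rendered as a short melodic phrase. The tone inventory and the
# chosen pitches are placeholders, not the authors' actual mapping.
TONE_TO_PITCH = {
    "high": "G4",
    "low": "C4",
}


def syllables_to_staff(syllables):
    """Return (syllable, pitch) pairs for a tone-annotated word."""
    return [(text, TONE_TO_PITCH[tone]) for text, tone in syllables]


# Hypothetical tone-annotated word (not an attested ekaη transcription).
word = [("é", "high"), ("kà", "low"), ("ŋ", "low")]
for text, pitch in syllables_to_staff(word):
    print(f"{text}\t{pitch}")
```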

Keywords: music, language, entanglement, science, research

Procedia PDF Downloads 69