Search results for: Serge Desmarais
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26

26 Challenging Heteronormativity and Mononormativity in Academia: Incorporating Consensual Non-Monogamy into Psychological Romantic Relationship Research

Authors: Jessica Wood, Serge Desmarais

Abstract:

There has been a recent resurgence of popular and academic interest in consensual non-monogamous (CNM) relationships, an umbrella term for relationships in which each partner has openly agreed to engage in additional romantic and/or sexual relationships outside of their primary partnership. Despite this increase, little psychological attention has been paid to CNM; consideration of these relationships has occurred mainly within related social science disciplines such as sociology and anthropology. As a discipline, psychology has a long history of research on intimate relationships, and psychologists have amassed a wealth of theoretical knowledge in this field. Historically, however, individuals who engage in "alternative" sexual and romantic behaviours, such as non-heterosexual sex or sex with multiple partners, have been pathologized within psychological research. Individuals in CNM relationships, or individuals identifying as lesbian, gay, bisexual, trans or queer, have often been excluded from research or "othered" in psychological interpretations of what healthy relationships entail. Thus, our current theoretical understanding of romantic relationships is largely limited to heterosexual, monogamous relationships. The goal of this presentation is to examine commonly cited components of relationship satisfaction (e.g., commitment, communication) and to critically assess how CNM experiences are presented in, or missing from, the psychological literature on romantic relationships. The presentation will also consider how CNM relationships may add to, or enhance, traditional psychological theories, and will address issues related to heteronormativity and mononormativity within the discipline. Finally, we close with a consideration of additional theoretical perspectives that may aid our understanding of CNM relationships and suggest directions for future research.

Keywords: heteronormativity, mononormativity, psychological research, diverse relationships, gender, sexuality, feminism, queer theory

Procedia PDF Downloads 348
25 An Extension of the Generalized Extreme Value Distribution

Authors: Serge Provost, Abdous Saboor

Abstract:

A q-analogue of the generalized extreme value distribution which includes the Gumbel distribution is introduced. The additional parameter q allows for increased modeling flexibility. The resulting distribution can have a finite, semi-infinite or infinite support. It can also produce several types of hazard rate functions. The model parameters are determined by making use of the method of maximum likelihood. It will be shown that it compares favourably to three related distributions in connection with the modeling of a certain hydrological data set.
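The q-extended family itself is not available in standard libraries, but the classical GEV that it generalizes can be fitted by maximum likelihood in a few lines. The following sketch, with illustrative parameter values, uses SciPy's genextreme, whose shape parameter c = 0 recovers the Gumbel case mentioned above:

```python
import numpy as np
from scipy.stats import genextreme

# Simulate block maxima from a GEV with shape c = 0 (the Gumbel case in
# SciPy's parameterization), then recover the parameters by maximum likelihood.
rng = np.random.default_rng(0)
data = genextreme.rvs(c=0.0, loc=10.0, scale=2.0, size=2000, random_state=rng)

c_hat, loc_hat, scale_hat = genextreme.fit(data)
```

With a sample of this size, the fitted shape estimate stays close to zero, consistent with the Gumbel sub-model.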

Keywords: extreme value theory, generalized extreme value distribution, goodness-of-fit statistics, Gumbel distribution

Procedia PDF Downloads 304
24 A Methodology for Characterising the Tail Behaviour of a Distribution

Authors: Serge Provost, Yishan Zang

Abstract:

Following a review of various approaches that are utilized for classifying the tail behavior of a distribution, an easily implementable methodology that relies on an arctangent transformation is presented. The classification criterion is based on the difference between two specific quantiles of the transformed distribution. The resulting categories enable one to classify distributional tails as distinctly short, short, nearly medium, medium, extended medium, and somewhat long, provided that at least two moments exist. Distributions possessing a single moment are said to be long-tailed, while those failing to have any finite moments are classified as having an extremely long tail. Several illustrative examples will be presented.
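A minimal sketch of the transformation step, with an illustrative quantile pair (not the authors' calibrated criterion):

```python
import numpy as np

def tail_gap(sample, p_lo=0.90, p_hi=0.99):
    """Spread between two upper quantiles of the arctan-transformed sample.
    The quantile pair is an illustrative choice, not the paper's calibrated
    criterion; heavier right tails push the transformed quantiles toward
    pi/2, widening the gap."""
    y = np.arctan(np.asarray(sample))
    return np.quantile(y, p_hi) - np.quantile(y, p_lo)

rng = np.random.default_rng(1)
gap_normal = tail_gap(rng.normal(size=100_000))           # all moments finite
gap_cauchy = tail_gap(rng.standard_cauchy(size=100_000))  # no finite moments
```

Because the arctangent maps the whole real line into a bounded interval, the gap remains well defined and stable even for distributions without finite moments.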

Keywords: arctangent transformation, tail classification, heavy-tailed distributions, distributional moments

Procedia PDF Downloads 84
23 On Modeling Data Sets by Means of a Modified Saddlepoint Approximation

Authors: Serge B. Provost, Yishan Zhang

Abstract:

A moment-based adjustment to the saddlepoint approximation is introduced in the context of density estimation. First applied to univariate distributions, this methodology is extended to the bivariate case. It then entails estimating the density function associated with each marginal distribution by means of the saddlepoint approximation and applying a bivariate adjustment to the product of the resulting density estimates. The connection to the distribution of empirical copulas will be pointed out. In addition, a novel approach is proposed for estimating the support of a distribution. As these results rely solely on sample moments and empirical cumulant-generating functions, they are particularly well suited to modeling massive data sets. Several illustrative applications will be presented.
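As a point of reference, the unadjusted saddlepoint approximation can be illustrated on a target with a known cumulant-generating function; the moment-based adjustment and empirical CGFs of the paper are not reproduced in this sketch:

```python
import numpy as np
from math import gamma as gamma_fn

# Unadjusted saddlepoint density approximation for a Gamma(alpha, 1) target,
# whose cumulant-generating function K(s) = -alpha*log(1 - s) is known in
# closed form (an illustrative choice, not a data set from the paper).
alpha = 5.0

def saddlepoint_pdf(x):
    s_hat = 1.0 - alpha / x                  # saddlepoint: K'(s_hat) = x
    K = -alpha * np.log(1.0 - s_hat)
    K2 = alpha / (1.0 - s_hat) ** 2          # K''(s_hat)
    return np.exp(K - s_hat * x) / np.sqrt(2.0 * np.pi * K2)

x = np.linspace(1.0, 15.0, 50)
exact = x ** (alpha - 1.0) * np.exp(-x) / gamma_fn(alpha)
rel_err = np.max(np.abs(saddlepoint_pdf(x) / exact - 1.0))
```

For the gamma target the unadjusted approximation is off by a constant factor (about 1.7% for alpha = 5), which is precisely the kind of discrepancy a moment-based renormalizing adjustment is meant to absorb.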

Keywords: empirical cumulant-generating function, endpoints identification, saddlepoint approximation, sample moments, density estimation

Procedia PDF Downloads 121
22 The Assessment of Bilingual Students: How Bilingual Can It Really Be?

Authors: Serge Lacroix

Abstract:

This study examines the psychoeducational assessment of bilingual students, in English and French in this case. It provides an opportunity to examine the language of assessment, and specifically how certain tests can be administered in one language and others in another. It also questions the validity of the test scores obtained, as well as the quality and generalizability of the conclusions that can be drawn from them. Bilingualism and multiculturalism, although in constant expansion, are not considered in norms development and remain poorly understood factors when at play in the context of a psychoeducational assessment. Student placement, diagnoses, and accurate measures of intelligence and achievement are all affected by the quality of the assessment procedure. The same is true for questionnaires administered to parents and self-reports completed by bilingual students who, more often than not, are assessed in a language that is not their primary one or are compared to monolinguals who do not face the same challenges or possess the same skills. Results show that when offered the option of working in a bilingual fashion, a significant proportion of students choose to do so. Recommendations will be offered to support educators seeking to expand their skills when working with multilingual students in an assessment context.

Keywords: psychoeducational assessment, bilingualism, multiculturalism, intelligence, achievement

Procedia PDF Downloads 422
21 Corrosion Behavior of Induced Stress Duplex Stainless Steel in Chloride Environment

Authors: Serge Mudinga Lemika, Samuel Olukayode Akinwamide, Aribo Sunday, Babatunde Abiodun Obadele, Peter Apata Olubambi

Abstract:

The use of duplex stainless steel has become predominant in applications where excellent corrosion resistance is of utmost importance. The corrosion behavior of duplex stainless steel induced with varying stress was studied in a chloride medium. As-received 2205 duplex stainless steel was characterized to reveal its structure and properties. A tensile specimen produced from the duplex stainless steel was first subjected to a tensile test to obtain the yield strength. Stresses corresponding to various percentages (20, 40, 60 and 80%) of the yield strength were then induced in DSS samples. Corrosion tests were carried out in magnesium chloride solution at room temperature. Morphologies of the cracks, observed by optical and scanning electron microscopy, showed that the austenite and ferrite grains of the samples induced with higher stress were affected by pitting.

Keywords: duplex stainless steel, hardness, nanoceramics, spark plasma sintering

Procedia PDF Downloads 265
20 Nonparametric Copula Approximations

Authors: Serge Provost, Yishan Zang

Abstract:

Copulas are currently utilized in finance, reliability theory, machine learning, signal processing, geodesy, hydrology and biostatistics, among several other fields of scientific investigation. It follows from Sklar's theorem that the joint distribution function of a multidimensional random vector can be expressed in terms of its associated copula and marginals. Since marginal distributions can easily be determined by making use of a variety of techniques, we address the problem of securing the distribution of the copula. This will be done by using several approaches. For example, we will obtain bivariate least-squares approximations of the empirical copulas, modify the kernel density estimation technique and propose a criterion for selecting appropriate bandwidths, differentiate linearized empirical copulas, secure Bernstein polynomial approximations of suitable degrees, and apply a corollary to Sklar's result. Illustrative examples involving actual observations will be presented. The proposed methodologies will also be applied to a sample generated from a known copula distribution in order to validate their effectiveness.
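The empirical copula at the heart of these approximations can be sketched directly from ranks; the sample size and seed below are illustrative:

```python
import numpy as np

def empirical_copula(u, v, x, y):
    """C_n(u, v): proportion of observations whose normalized ranks fall
    at or below (u, v)."""
    n = len(x)
    rx = np.argsort(np.argsort(x)) + 1     # ranks 1..n of x
    ry = np.argsort(np.argsort(y)) + 1     # ranks 1..n of y
    return np.mean((rx / n <= u) & (ry / n <= v))

rng = np.random.default_rng(2)
x = rng.normal(size=20_000)
y = rng.normal(size=20_000)                # generated independently of x
c_mid = empirical_copula(0.5, 0.5, x, y)   # independence copula: C(u,v) = u*v
```

For an independent pair, the empirical copula at (0.5, 0.5) should approach the independence value 0.25, which provides a quick sanity check for any of the approximation schemes listed above.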

Keywords: copulas, Bernstein polynomial approximation, least-squares polynomial approximation, kernel density estimation, density approximation

Procedia PDF Downloads 35
19 Neural Network Based Compressor Flow Estimator in an Aircraft Vapor Cycle System

Authors: Justin Reverdi, Sixin Zhang, Serge Gratton, Said Aoues, Thomas Pellegrini

Abstract:

In vapor cycle systems, the flow sensor plays a key role in various monitoring and control tasks. However, physical sensors can be expensive, inaccurate, heavy, cumbersome, or highly sensitive to vibrations, which is especially problematic when embedded in an aircraft. The design of a virtual sensor based on other standard sensors is a good alternative. In this paper, a data-driven model using a Convolutional Neural Network is proposed to estimate the flow of the compressor. To fit the model to our dataset, we tested different loss functions. We show in our application that a Dynamic Time Warping based loss function called DILATE leads to better dynamical performance than the vanilla mean squared error (MSE) loss function. DILATE also allows a trade-off to be chosen between static and dynamic performance.
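DILATE itself relies on a smooth, differentiable formulation, but the intuition can be illustrated with classic dynamic time warping: a one-step temporal shift that MSE penalizes is absorbed by the warped alignment. A minimal sketch:

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance with squared-error step cost
    (a minimal, non-differentiable illustration; DILATE itself uses a
    smooth formulation suitable for gradient-based training)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

target = np.array([0.0, 0.0, 1.0, 1.0])
shifted = np.array([0.0, 1.0, 1.0, 1.0])   # same step, one sample early
mse = np.mean((target - shifted) ** 2)     # penalizes the temporal shift
warped = dtw(target, shifted)              # the warped alignment absorbs it
```

Here the pointwise MSE is nonzero while the warped distance is zero, which is why a DTW-based loss tracks dynamic behavior better than MSE alone.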

Keywords: deep learning, dynamic time warping, vapor cycle system, virtual sensor

Procedia PDF Downloads 111
18 Characterization of the Queuine Salvage Pathway From Bacteria in the Human Parasite Entamoeba Histolytica

Authors: Lotem Sarid, Meirav Trebicz-Geffen, Serge Ankri

Abstract:

Queuosine (Q) is a naturally occurring modified nucleoside that occurs in the first position of transfer RNA anticodons such as Asp, Asn, His, and Tyr. As eukaryotes lack pathways to synthesize queuine, the nucleobase of queuosine, they must obtain it from their diet or gut microbiota. Our previous work investigated the effects of queuine on the physiology of the eukaryotic parasite Entamoeba histolytica and identified EhTGT as the enzyme responsible for its incorporation into tRNA. To the best of our knowledge, it is unknown how E. histolytica salvages Q from gut bacteria. We used N-acryloyl-3-aminophenylboronic acid (APB) PAGE analysis to demonstrate that E. histolytica trophozoites can salvage queuine from Q or E. coli K12, but not from the modified E. coli QueC strain, which cannot produce queuine. Next, we examined the role of EhDUF2419, a protein with homology to DNA glycosylase, as a queuine salvage enzyme in E. histolytica. Silencing EhDUF2419 expression inhibits the conversion of Q to queuine, resulting in decreased Q-tRNA levels. We also observed that Q protects control trophozoites from oxidative stress (OS), but not siEhDUF2419 trophozoites. Overall, our data reveal that EhDUF2419 is central to the salvaging of queuine from bacteria and to the resistance of the parasite to OS.

Keywords: entamoeba histolytica, epitranscriptomics, gut microbiota, queuine, queuosine, response to oxidative stress, tRNA modification

Procedia PDF Downloads 82
17 CNN-Based Compressor Mass Flow Estimator in Industrial Aircraft Vapor Cycle System

Authors: Justin Reverdi, Sixin Zhang, Saïd Aoues, Fabrice Gamboa, Serge Gratton, Thomas Pellegrini

Abstract:

In vapor cycle systems, the mass flow sensor plays a key role for different monitoring and control purposes. However, physical sensors can be inaccurate, heavy, cumbersome, expensive, or highly sensitive to vibrations, which is especially problematic when embedded into an aircraft. The conception of a virtual sensor, based on other standard sensors, is a good alternative. This paper has two main objectives. Firstly, a data-driven model using a convolutional neural network is proposed to estimate the mass flow of the compressor. We show that it significantly outperforms the standard polynomial regression model (thermodynamic maps) in terms of the standard MSE metric and engineer performance metrics. Secondly, a semi-automatic segmentation method is proposed to compute the engineer performance metrics for real datasets, as the standard MSE metric may pose risks in analyzing the dynamic behavior of vapor cycle systems.

Keywords: deep learning, convolutional neural network, vapor cycle system, virtual sensor

Procedia PDF Downloads 12
16 A Theorem Related to Sample Moments and Two Types of Moment-Based Density Estimates

Authors: Serge B. Provost

Abstract:

Numerous statistical inference and modeling methodologies are based on sample moments rather than the actual observations. A result justifying the validity of this approach is introduced. More specifically, it will be established that given the first n moments of a sample of size n, one can recover the original n sample points. This implies that a sample of size n and its first associated n moments contain precisely the same amount of information. However, it is efficient to make use of a limited number of initial moments as most of the relevant distributional information is included in them. Two types of density estimation techniques that rely on such moments will be discussed. The first one expresses a density estimate as the product of a suitable base density and a polynomial adjustment whose coefficients are determined by equating the moments of the density estimate to the sample moments. The second one assumes that the derivative of the logarithm of a density function can be represented as a rational function. This gives rise to a system of linear equations involving sample moments, the density estimate is then obtained by solving a differential equation. Unlike kernel density estimation, these methodologies are ideally suited to model ‘big data’ as they only require a limited number of moments, irrespective of the sample size. What is more, they produce simple closed form expressions that are amenable to algebraic manipulations. They also turn out to be more accurate as will be shown in several illustrative examples.
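The recovery result can be illustrated concretely: the first n power sums determine the elementary symmetric polynomials through Newton's identities, and the sample points are then the roots of the corresponding monic polynomial. A small numerical sketch with an arbitrary four-point sample:

```python
import numpy as np

def recover_sample(power_sums):
    """Recover n sample points from their first n power sums
    p_k = sum(x_i**k), via Newton's identities: the points are the roots of
    the monic polynomial whose coefficients are the elementary symmetric
    polynomials e_k with alternating signs."""
    n = len(power_sums)
    e = [1.0]                                    # e_0, e_1, ..., e_n
    for k in range(1, n + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * power_sums[i - 1]
                for i in range(1, k + 1))        # k*e_k from Newton's identity
        e.append(s / k)
    coeffs = [(-1) ** k * e[k] for k in range(n + 1)]
    return np.sort(np.roots(coeffs).real)

x = np.array([1.0, 2.5, 4.0, 7.0])               # arbitrary sample
p = [float(np.sum(x ** k)) for k in range(1, x.size + 1)]
x_back = recover_sample(p)                       # recovers the sample points
```

The round trip recovers the original points up to numerical root-finding precision, illustrating that the first n moments of a sample of size n carry exactly the same information as the sample itself.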

Keywords: density estimation, log-density, polynomial adjustments, sample moments

Procedia PDF Downloads 120
15 Improving Student Programming Skills in Introductory Computer and Data Science Courses Using Generative AI

Authors: Genady Grabarnik, Serge Yaskolko

Abstract:

Generative Artificial Intelligence (AI) has significantly expanded its applicability with the incorporation of Large Language Models (LLMs) and has become a technology that promises to automate areas that were previously very difficult to automate. The paper describes the introduction of generative AI into introductory computer and data science courses and analyzes the effect of that introduction. Generative AI is incorporated into the educational process in two ways. For instructors, we create prompt templates for generating tasks and for grading students' work, including feedback on submitted assignments. For students, we introduce basic prompt engineering, which is in turn used to generate test cases from problem descriptions, to generate code snippets for single-block programming tasks, and to partition programs of average complexity into such blocks. The classes are run using Large Language Models, and feedback from instructors and students, as well as course outcomes, is collected. The analysis shows a statistically significant positive effect and a preference for the approach among both groups of stakeholders.

Keywords: introductory computer and data science education, generative AI, large language models, application of LLMS to computer and data science education

Procedia PDF Downloads 26
14 Mechanical Characterization of Porcine Skin with the Finite Element Method Based Inverse Optimization Approach

Authors: Djamel Remache, Serge Dos Santos, Michael Cliez, Michel Gratton, Patrick Chabrand, Jean-Marie Rossi, Jean-Louis Milan

Abstract:

Skin tissue is an inhomogeneous and anisotropic material. Uniaxial tensile testing is one of the primary techniques for the mechanical characterization of skin at large scales. Direct or inverse analytical approaches are often used to predict the mechanical behavior of materials. However, for an inhomogeneous and anisotropic material such as skin tissue, analytical approaches cannot provide solutions, and numerical simulation is thus necessary. In this work, uniaxial tensile tests and a FEM (finite element method) based inverse method were used to identify the anisotropic mechanical properties of porcine skin tissue. The uniaxial tensile experiments were performed using an Instron 8800® tensile machine. The uniaxial tensile test was simulated with FEM, and the inverse optimization approach (or inverse calibration) was then used to identify the mechanical properties of the samples. Experimental results were compared to finite element solutions. The results showed that the finite element model predictions of the mechanical behavior of the tested skin samples correlated well with the experimental results.
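The inverse calibration loop can be sketched with a closed-form stand-in for the forward model (in the paper this role is played by the full FEM simulation); the stress-strain law and parameter values below are purely illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

# Inverse-calibration sketch: the forward model here is a made-up closed-form
# stress-strain law standing in for an FEM simulation, and the parameter
# values are illustrative, not measured skin properties.
def forward(params, strain):
    E, k = params                        # linear stiffness + cubic stiffening
    return E * strain + k * strain ** 3

strain = np.linspace(0.0, 0.3, 30)
measured = forward((2.0, 15.0), strain)  # synthetic "experimental" curve

fit = least_squares(lambda p: forward(p, strain) - measured, x0=[1.0, 1.0])
E_hat, k_hat = fit.x
```

The optimizer adjusts the model parameters until the simulated response matches the measured one, which is the essence of the inverse optimization approach, with the FEM solver replacing the closed-form model in practice.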

Keywords: mechanical skin tissue behavior, uniaxial tensile test, finite element analysis, inverse optimization approach

Procedia PDF Downloads 371
13 Yawning Computing Using Bayesian Networks

Authors: Serge Tshibangu, Turgay Celik, Zenzo Ncube

Abstract:

Road crashes kill over a million people every year and leave millions more injured or permanently disabled. Various annual reports reveal that the percentage of fatal crashes due to fatigue or to the driver falling asleep comes directly after the percentage of fatal crashes due to intoxicated drivers. This percentage is higher than the combined percentage of fatal crashes due to illegal/unsafe U-turns and illegal/unsafe reversing. Although a relatively small percentage of police reports on road accidents highlights drowsiness and fatigue, the importance of these factors is greater than we might think, hidden by the undercounting of such events. Some scenarios show that these factors are significant in accidents involving fatalities and injuries. Hence the need for an automatic driver-fatigue detection system to considerably reduce the number of accidents caused by fatigue. This research approaches the driver-fatigue detection problem in an innovative way by combining cues collected from both temporal analysis of drivers' faces and the driving environment. Monotony in the driving environment is inter-related with visual symptoms of fatigue on drivers' faces to achieve fatigue detection. Optical and infrared (IR) sensors are used to analyse the monotony of the driving environment and to detect the visual symptoms of fatigue on the human face. Internal cues from drivers' faces and external cues from the environment are combined using machine learning algorithms to automatically detect fatigue.
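The fusion of an internal cue with an external cue can be illustrated with a minimal two-evidence Bayesian update; all probabilities below are made-up placeholders, not values from the study:

```python
def posterior_fatigue(yawning, monotonous, prior=0.1):
    """Naive-Bayes fusion of one internal cue (yawning detected on the face)
    and one external cue (monotonous driving environment). All probabilities
    are illustrative assumptions, not values from the study."""
    p_yawn = {True: 0.7, False: 0.2}   # P(yawning | fatigued) / P(yawning | alert)
    p_mono = {True: 0.6, False: 0.3}   # P(monotony | fatigued) / P(monotony | alert)

    def likelihood(fatigued):
        py = p_yawn[fatigued] if yawning else 1.0 - p_yawn[fatigued]
        pm = p_mono[fatigued] if monotonous else 1.0 - p_mono[fatigued]
        return py * pm

    num = prior * likelihood(True)
    return num / (num + (1.0 - prior) * likelihood(False))
```

Observing both cues raises the fatigue posterior well above the prior, while observing neither lowers it, which is the basic mechanism a full Bayesian network generalizes to many cues.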

Keywords: intelligent transportation systems, Bayesian networks, yawning computing, machine learning algorithms

Procedia PDF Downloads 430
12 Battery Energy Storage System Economic Benefits Assessment on a Network Frequency Control

Authors: Kréhi Serge Agbli, Samuel Portebos, Michaël Salomon

Abstract:

A methodology is presented for evaluating the economic benefit of providing primary frequency control with a Battery Energy Storage System (BESS). Two control types (basic and hysteresis) are implemented, and the corresponding minimum energy storage system power that maintains the frequency drop inside a given threshold under a given contingency is identified and compared using DigSilent's PowerFactory software. Following this step, the corresponding energy storage capacity (in MWh) is calculated. As PowerFactory is dedicated to dynamic simulation for transient analysis, a first-order model of the IEEE 9-bus grid used for the PowerFactory analysis is characterized and implemented in MATLAB-Simulink. Primary frequency control is simulated using the two control types on this Simulink model over one month of grid frequency deviation data. This simulation yields the energy throughput of both the basic and hysteresis BESSs. It emerges that a 15-minute operating band of the battery capacity allocated to frequency control is sufficient under the considered disturbances. A sensitivity analysis on the width of the control deadband is then performed for the two control types. Varying the deadband width leads to an identical sizing, with the hysteresis control providing better frequency control at the cost of a higher delivered throughput compared to the basic control. An economic analysis comparing the cost of the sized BESS to the potential revenues is then performed.
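The basic-versus-hysteresis throughput comparison can be sketched on a synthetic frequency deviation signal; the control logic and all parameter values below are illustrative simplifications, not the paper's calibrated controllers:

```python
import numpy as np

def throughput(freq_dev, mode, deadband=0.05, p_max=1.0, droop=20.0):
    """Per-step energy throughput of a droop-controlled BESS (arbitrary
    units). 'basic' responds only while the deviation exceeds the deadband;
    'hysteresis' keeps responding, once triggered, until the deviation
    changes sign. All parameter values are illustrative assumptions."""
    power = np.zeros_like(freq_dev)
    active = False
    for i, df in enumerate(freq_dev):
        if mode == "basic":
            respond = abs(df) > deadband
        else:  # hysteresis
            if abs(df) > deadband:
                active = True
            elif i > 0 and np.sign(df) != np.sign(freq_dev[i - 1]):
                active = False
            respond = active
        if respond:
            power[i] = np.clip(-droop * df, -p_max, p_max)
    return np.sum(np.abs(power))

t = np.linspace(0.0, 10.0, 1000)
freq_dev = 0.08 * np.sin(2 * np.pi * 0.2 * t)   # synthetic deviation signal (Hz)
e_basic = throughput(freq_dev, "basic")
e_hyst = throughput(freq_dev, "hysteresis")
```

On this synthetic signal the hysteresis control keeps delivering power while the deviation decays back toward zero, so its throughput exceeds that of the basic control, mirroring the trade-off reported above.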

Keywords: battery energy storage system, electrical network frequency stability, frequency control unit, PowerFactory

Procedia PDF Downloads 97
11 Gender-Specific Vulnerability to Climate Change and Food Security Status - A Catchment Approach on Agroforestry Systems - A Multi-Country Case Study

Authors: Zerihun Yohannes Amare Id, Bernhard Freyer, Ky Serge Stephane, Ouéda Adama, Blessing Mudombi, Jean Nzuma, Mekonen Getachew Abebe, Adane Tesfaye, Birtukan Atinkut Asmare, Tesfahun Asmamaw Kassie

Abstract:

The study was conducted in Ethiopia (Zege Catchment, ZC), Zimbabwe (Upper Save Catchment, USC), and Burkina Faso (Nakambe Catchment, NC). The study used a quantitative approach with 180 participants, complemented by qualitative methods including 33 key informant interviews and 6 focus group discussions. Households in ZC (58%), NC (55%), and USC (40%) do not cover their household food consumption from crop production; they rely heavily on perennial cash crops rather than annual crop production. Exposure indicators in ZC (0.758), USC (0.774), and NC (0.944), and sensitivity indicators in ZC (0.849) and NC (0.937), show a statistically significant and high correlation with vulnerability. In the USC, adaptive capacity (0.746) and exposure (0.774) are also statistically significant and highly correlated with vulnerability. Vulnerability levels in the NC are very high (0.75; 0.85 for female and 0.65 for male participants) compared to the USC (0.66; 0.69 female and 0.61 male) and ZC (0.47; 0.34 female and 0.58 male). Female-headed households had a statistically significantly lower vulnerability index than male-headed households in ZC, while male-headed households had a statistically significantly lower vulnerability index than female-headed households in USC and NC. The reason is that land certification coverage in ZC (80%) is higher than in USC (10%) and NC (8%). Agroforestry practice variables across the study catchments made statistically significant contributions to households' adaptive capacity. We conclude that agroforestry practices have substantial benefits in increasing women's adaptive capacity and reducing their vulnerability to climate change and food insecurity.

Keywords: climate change vulnerability, agroforestry, gender, food security, Sub-Saharan Africa

Procedia PDF Downloads 49
10 The Dressing Field Method of Gauge Symmetries Reduction: Presentation and Examples

Authors: Jeremy Attard, Jordan François, Serge Lazzarini, Thierry Masson

Abstract:

Gauge theories are the natural framework for describing fundamental interactions geometrically, using principal and associated fiber bundles as dynamical entities. The central notion of these theories is their local gauge symmetry, implemented by the local action of a Lie group H. Several methods are used to reduce the symmetry of a gauge theory, such as gauge fixing, the bundle reduction theorem, or the spontaneous symmetry breaking mechanism (SSBM). This paper presents another method of gauge symmetry reduction, distinct from those three. Given a symmetry group H acting on a fiber bundle and its naturally associated fields (Ehresmann (or Cartan) connection, curvature, matter fields, etc.), there sometimes exists a way to erase (in whole or in part) the H-action by reconfiguring these fields, i.e., by making a mere change of field variables, so as to obtain new ('composite') fields on which H (in whole or in part) no longer acts. Two examples will be discussed: the reinterpretation of the BEHGHK (Higgs) mechanism on the one hand, and the top-down construction of Tractor and Penrose's Twistor spaces and connections in the framework of conformal Cartan geometry on the other. They have, of course, nothing to do with each other, but the dressing field method can be applied to both to gain new insight. In the first example, it turns out that the generation of masses in the Standard Model can be separated from the symmetry breaking, the latter being a mere change of field variables, i.e. a dressing. This offers an interpretation opposed to the one usually found in textbooks. In the second case, the dressing field method applied to conformal Cartan geometry offers a way of understanding the deep geometric nature of the so-called Tractors and Twistors.
The dressing field method, distinct from a gauge transformation (even if it can apparently take the same form), is a systematic way of finding and erasing artificial symmetries of a theory by a mere change of field variables which redistributes the degrees of freedom of the theory.
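In schematic terms, and under the assumptions usual in the dressing-field literature (a dressing field u valued in a suitable subgroup of H, with the appropriate equivariance property), the composite fields obtained by the change of variables take the form:

```latex
% Schematic form of the dressed (composite) fields, for a dressing field u
% with the appropriate equivariance property:
A^{u} = u^{-1} A\, u + u^{-1}\,\mathrm{d}u, \qquad
F^{u} = u^{-1} F\, u, \qquad
\varphi^{u} = \rho(u^{-1})\,\varphi .
```

Although these expressions look formally like gauge transformations, u is constructed from the fields themselves rather than being a gauge group element, so the composites are (partially) H-invariant variables rather than gauge-transformed fields.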

Keywords: BEHGHK (Higgs) mechanism, conformal gravity, gauge theory, spontaneous symmetry breaking, symmetry reduction, twistors and tractors

Procedia PDF Downloads 200
9 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as the AFCI (arc fault circuit interrupter) have been used successfully in electrical networks to prevent damage and catastrophic incidents such as fires. However, these devices do not allow series arc faults to be located on the line while it is operating. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V-50 Hz home network. The method is validated through simulations using the MATLAB software. The fault location method uses electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists essentially of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at different distances. In a second step, a fault map trace is created using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the Discrete Fast Fourier Transform of the currents and voltages as well as the fault distance value. These parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure is employed to calculate the signature coefficients, but this time for hypothetical distances at which the fault could appear; in this step the fault distance is unknown.
The iterative calculation from the Kirchhoff equations, with stepped variations of the fault distance, yields a curve with a linear trend. Finally, the fault location is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing the currents registered in simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the map trace generation. By carrying out the complete simulation, the performance of the method and the perspectives of this work will be presented.
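The final intersection step can be illustrated with made-up linear trends standing in for the two fault-map traces (in the actual method, the trend coefficients come from the Kirchhoff-equation signature computation):

```python
import numpy as np

# Illustrative stand-in for the location step: two fault-map traces
# (signature coefficient vs. distance), one from known fault positions
# (step 2) and one from hypothetical distances (step 3), intersect at the
# estimated fault location. The linear trends below are made up.
d = np.linspace(0.0, 49.0, 50)          # position along the 49 m line
trace_known = 0.8 * d + 3.0             # hypothetical step-2 trend
trace_hypo = -0.5 * d + 40.0            # hypothetical step-3 trend

a1, b1 = np.polyfit(d, trace_known, 1)  # fit linear trend to each trace
a2, b2 = np.polyfit(d, trace_hypo, 1)
d_fault = (b2 - b1) / (a1 - a2)         # intersection of the two trends
```

With these illustrative coefficients, the estimated fault distance falls at the intersection point, about 28.5 m along the line.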

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 105
8 IFN-γ and IL-2 Assess the Therapeutic Response in Anti-Tuberculosis Patients at Jamot Hospital Yaounde, Cameroon

Authors: Alexandra Emmanuelle Membangbi, Jacky Njiki Bikoï, Esther Del-florence Moni Ndedi, Marie Joseph Nkodo Mindimi, Donatien Serge Mbaga, Elsa Nguiffo Makue, André Chris Mikangue Mbongue, Martha Mesembe, George Ikomey Mondinde, Eric Walter Perfura-yone, Sara Honorine Riwom Essama

Abstract:

Background: Tuberculosis (TB) is one of the most lethal infectious diseases worldwide. In recent years, interferon-γ (IFN-γ) release assays (IGRAs) have been established as routine tests for diagnosing TB infection. However, assessment of produced IFN-γ fails to distinguish active TB (ATB) from latent TB infection (LTBI), especially in TB-epidemic areas. In addition to IFN-γ, interleukin-2 (IL-2), another cytokine secreted by activated T cells, is also involved in the immune response against Mycobacterium tuberculosis. The aim of this study was to assess the capacity of IFN-γ and IL-2 to evaluate the therapeutic response of patients on anti-tuberculosis treatment. Material and Methods: We conducted a cross-sectional study in the Pneumology Department of the Jamot Hospital in Yaoundé between May and August 2021. After informed consent was signed, sociodemographic data were collected and 5 mL of blood was drawn from the crook of the elbow of each participant. Sixty-one subjects were selected (n = 61) and divided into 4 groups as follows: group 1, resistant tuberculosis (n = 13); group 2, active tuberculosis (n = 19); group 3, cured tuberculosis (n = 16); and group 4, presumed healthy persons (n = 13). The cytokines of interest were measured using an indirect Enzyme-Linked Immuno-Sorbent Assay (ELISA) according to the manufacturer's recommendations. P-values < 0.05 were interpreted as statistically significant. All statistical calculations were performed using SPSS version 22.0. Results: The results showed that men were more frequently infected (14/61; 31.8%), with a high presence in the active and resistant TB groups. The mean age was 41.3±13.1 years (95% CI = [38.2-44.7]); the age group with the highest infection rate was 31 to 40 years.
The mean IL-2 and IFN-γ levels were respectively 327.6±160.6 pg/mL and 26.6±13.0 pg/mL in active tuberculosis patients, 251.1±30.9 pg/mL and 21.4±9.2 pg/mL in patients with resistant tuberculosis, 149.3±93.3 pg/mL and 17.9±9.4 pg/mL in cured patients, and 15.1±8.4 pg/mL and 5.3±2.6 pg/mL in presumed healthy participants (p < 0.0001). Significant differences in IFN-γ and IL-2 levels were observed between the different groups. Conclusion: Monitoring the serum levels of IFN-γ and IL-2 would be useful to evaluate the therapeutic response of anti-tuberculosis patients, particularly when the two cytokines are considered jointly, which could improve the accuracy of routine examinations.

Keywords: antibiotic therapy, interferon gamma, interleukin 2, tuberculosis

Procedia PDF Downloads 66
7 The Quantum Theory of Music and Human Languages

Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and around the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories of language. It is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to a dialogue between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, machine translation, and artificial intelligence. When this theory is applied to any text of a folk song in a tonal language, one reconstructs not only the exact melody, rhythm, and harmonies of that song, as if it were known in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. With the experimentation confirming the theorization, I designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book).
The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you request from the machine a melody of blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.

Keywords: language, music, sciences, quantum entanglement

Procedia PDF Downloads 40
6 Control of Helminthosporiosis in Oryza sativa Varieties Treated with 24-Epibrassinolide

Authors: Kuate Tueguem William Norbert, Ngoh Dooh Jules Patrice, Kone Sangou Abdou Nourou, Mboussi Serge Bertrand, Chewachang Godwill Mih, Essome Sale Charles, Djuissi Tohoto Doriane, Ambang Zachee

Abstract:

The objectives of this study were to evaluate the effects of foliar application of 24-epibrassinolide (EBR) on the development of rice helminthosporiosis caused by Bipolaris oryzae, and its influence on the improvement of growth parameters and the induction of defense-substance synthesis in rice plants. The experimental setup was a multifactorial split-plot with two varieties (NERICA 3 and the local variety KAMKOU) and five treatments (T0: control; T1: EBR; T2: BANKO PLUS (fungicide); T3: NPK (chemical fertilizer); T4: mixture NPK + BANKO PLUS + EBR), with three replications. Agro-morphological and epidemiological parameters, as well as plant-resistance substances, were evaluated over two growing seasons. The application of EBR induced significant growth of the rice plants in both the 2015 and 2016 growing seasons for the two varieties tested compared to the T0 treatment. At 74 days after sowing (DAS), NERICA 3 showed plant heights of 58.9 ± 5.4, 83.1 ± 10.4, 86.01 ± 9.4, 69.4 ± 11.1, and 87.12 ± 7.4 cm in T0, T1, T2, T3, and T4, respectively. Plant height for the variety KAMKOU ranged from 87.12 ± 8.1, 88.1 ± 8.1, and 92.02 ± 6.3 cm in T1, T2, and T4 down to 74.1 ± 8.6 and 74.21 ± 11.4 cm in T0 and T3. Consistent with the low rate of spread of helminthosporiosis in the experimental plots, EBR (T1) significantly reduced the development of the disease, with severities of 0.0, 1.29, and 2.04% at 78, 92, and 111 DAS on the variety NERICA 3, compared with 1, 3.15, and 3.79% in the control T0. The reduction of disease development and severity following the application of EBR is attributed to the induction of acquired resistance in the rice varieties through increased phenol (13.73 eqAG/mg/PMF) and total protein (117.89 eqBSA/mg/PMF) contents in the T1 treatment, against 5.37 eqAG/mg/PMF and 104.97 eqBSA/mg/PMF in T0 for the NERICA 3 variety. Similarly, in the KAMKOU variety, total protein reached 148.53 eqBSA/mg/PMF and phenol 6.10 eqAG/mg/PMF in T1.
In summary, the results show a significant effect of EBR on plant growth, yield, synthesis of secondary metabolites and defense proteins, and disease resistance. EBR significantly reduced rice grain losses, yielding an average gain of about 1.55 t/ha over the control and 1.00 t/ha over the NPK-based treatment for the two varieties studied. Furthermore, the enzymatic activities of PPOs, POXs, and PR2s were higher in leaves from EBR-treated plants. These results show that 24-epibrassinolide can be used in the control of rice helminthosporiosis to reduce disease and increase yields.
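The severity time series reported for NERICA 3 can be summarized with the area under the disease progress curve (AUDPC), a standard epidemiological measure. A minimal sketch, using the severity values quoted in the abstract (the control value at 78 DAS is read as 1%):

```python
def audpc(times, severities):
    """Area under the disease progress curve (trapezoidal rule)."""
    return sum((severities[i] + severities[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

days = [78, 92, 111]        # days after sowing
t0 = [1.00, 3.15, 3.79]     # % severity, untreated control (NERICA 3)
t1 = [0.00, 1.29, 2.04]     # % severity, EBR-treated (NERICA 3)

print(f"AUDPC control: {audpc(days, t0):.1f} %-days")
print(f"AUDPC EBR:     {audpc(days, t1):.1f} %-days")
```

The EBR treatment roughly halves the cumulative disease pressure over the scoring window, in line with the reduction in point severities reported above.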

Keywords: Oryza sativa, 24-epibrassinolide, helminthosporiosis, secondary metabolites, PR proteins, acquired resistance

Procedia PDF Downloads 153
5 Tuberculosis Disease Characteristics Associated with Mortality, Severe Morbidity and Unsuccessful Treatment in People Living with HIV Treated for Tuberculosis – A Secondary Analysis of the ANRS 12300 Reflate TB2 Trial

Authors: Robert Akpata, Jean-Baptiste Ntakpe, Eugène Messou, Nathalie de Castro, Corine Chazallon, Isabel Timana, Rodrigo Escada, Sandra Wagner Cardoso, Nilesh Bhatt, Celso Khosa, Didier Laureillard, Giang Do Chau, Frédéric Ello Nogbou, Donald Diomande Glao, Valdilea Veloso, Jean-Michel Molina, Beatriz Grinsztejn, Marcel Zannou, Serge Eholie, Olivier Marcy

Abstract:

Background: Tuberculosis is a severe disease, owing not only to its lethality but also to the significant morbidity it causes in people living with HIV (PLWH). While host-related factors associated with mortality, severe morbidity, and unsuccessful treatment are well identified in PLWH, there is scarce knowledge of factors related to the disease itself, such as bacillary load, the extent of lung involvement, and disease dissemination to other organs. We sought to assess whether tuberculosis-related factors were associated with key patient outcomes in PLWH using data from an international clinical trial. Methods: We conducted a secondary analysis of the ANRS 12300 Reflate TB2 international phase III open-label randomized trial, which assessed different antiretroviral regimens in PLWH treated for tuberculosis. We evaluated whether bacillary load (smear positivity grade), extent of lung involvement (cavitation on chest X-ray), and disease dissemination (urine LAM positivity) were associated with mortality using Cox proportional hazards models, and with severe morbidity and unsuccessful tuberculosis treatment using logistic regressions. Results: Of the 457 participants included in this study, 90 (20.4%) had grade 2+ or 3+ smear positivity, 39 (10.8%) had cavitation on chest X-ray, and 147 (32.2%) had a positive urinary LAM. Overall, 19 (4.2%) participants died, 113 (24.7%) presented severe morbidity, and 33 (7.2%) had unsuccessful tuberculosis treatment. The factors that remained independently associated with mortality were cavitation on chest X-ray (aHR = 7.92, 95% CI 1.74-35.94, p = .0073) and LAM positivity (aHR = 5.53, 95% CI 1.09-28.06, p = .0389). The only factor that remained significantly associated with severe morbidity was LAM positivity (aOR = 2.04, 95% CI 1.06-3.92, p = .0323). No factor remained significantly associated with unsuccessful tuberculosis treatment.
Conclusions: In PLWH with tuberculosis enrolled in a trial, the tuberculosis disease characteristics related to disease severity were cavitation on chest X-ray and urine LAM positivity. Early identification of these factors could help improve the management of PLWH with tuberculosis and improve their survival.
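The adjusted odds ratios above come from multivariable logistic regression, which cannot be reproduced without the trial data. As an illustration of the underlying measure, here is a minimal sketch computing an unadjusted odds ratio with a Wald 95% CI from a 2x2 table; the counts are hypothetical and chosen only to be consistent with the marginal totals quoted in the abstract (147 LAM-positive, 113 with severe morbidity, 457 overall):

```python
import math

def odds_ratio_wald(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with 95% Wald CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts (NOT from the trial): severe morbidity by urine-LAM status.
or_, lo, hi = odds_ratio_wald(a=60, b=87, c=53, d=257)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An adjusted OR such as the reported aOR = 2.04 additionally conditions on covariates, so it will generally differ from the crude value computed this way.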

Keywords: tuberculosis, HIV, mortality, severe morbidity, unsuccessful treatment, bacillary load, extent of lung involvement, disease dissemination

Procedia PDF Downloads 5
4 Medial Temporal Tau Predicts Memory Decline in Cognitively Unimpaired Elderly

Authors: Angela T. H. Kwan, Saman Arfaie, Joseph Therriault, Zahra Azizi, Firoza Z. Lussier, Cecile Tissot, Mira Chamoun, Gleb Bezgin, Stijn Servaes, Jenna Stevenon, Nesrine Rahmouni, Vanessa Pallen, Serge Gauthier, Pedro Rosa-Neto

Abstract:

Alzheimer’s disease (AD) can be detected in living people using in vivo biomarkers of amyloid-β (Aβ) and tau, even in the absence of cognitive impairment during the preclinical phase. [¹⁸F]-MK-6240 is a high-affinity positron emission tomography (PET) tracer that quantifies tau neurofibrillary tangles, but its ability to predict cognitive changes associated with early AD symptoms, such as memory decline, is unclear. Here, we assess the prognostic accuracy of baseline [¹⁸F]-MK-6240 tau PET for predicting longitudinal memory decline in asymptomatic elderly individuals. In a longitudinal observational study, we evaluated a cohort of cognitively normal elderly participants (n = 111) from the Translational Biomarkers in Aging and Dementia (TRIAD) study (data collected between October 2017 and July 2020, with a follow-up period of 12 months). All participants underwent tau PET with [¹⁸F]-MK-6240 and Aβ PET with [¹⁸F]-AZD-4694. The exclusion criteria included the presence of head trauma, stroke, or other neurological disorders. The 111 eligible participants were chosen based on the availability of Aβ PET, tau PET, magnetic resonance imaging (MRI), and APOE ε4 genotyping. Among these participants, the mean (SD) age was 70.1 (8.6) years; 20 (18%) were tau PET positive, and 71 of 111 (63.9%) were women. A significant association between baseline Braak I-II [¹⁸F]-MK-6240 SUVR positivity and change in composite memory score was observed at the 12-month follow-up, after correcting for age, sex, and years of education (Logical Memory and RAVLT; standardized beta = -0.52 (-0.82 to -0.21), p < 0.001, for dichotomized tau PET, and -1.22 (-1.84 to -0.61), p < 0.0001, for continuous tau PET).
Moderate cognitive decline was observed for the A+T+ group over the follow-up period, whereas no significant change was observed for A-T+, A+T-, or A-T-, though it should be noted that the A-T+ group was small. Our results indicate that baseline tau neurofibrillary tangle pathology is associated with longitudinal changes in memory function, supporting the use of [¹⁸F]-MK-6240 PET to predict the likelihood of asymptomatic elderly individuals experiencing future memory decline. Overall, [¹⁸F]-MK-6240 PET is a promising tool for predicting memory decline in older adults without cognitive impairment at baseline. This is of critical relevance as the field shifts towards a biological model of AD defined by the aggregation of pathologic tau. Early detection of tau pathology with [¹⁸F]-MK-6240 PET thus offers the hope that people with AD may be diagnosed during the preclinical phase, before it is too late.
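The reported standardized betas come from a multivariable model (adjusted for age, sex, and education) fitted to the TRIAD data, which are not available here. As a simplified illustration of what a standardized coefficient measures: with a single predictor and no covariates, the standardized regression coefficient reduces to the Pearson correlation. A minimal sketch on hypothetical toy data:

```python
from statistics import mean, stdev

# Hypothetical toy data (NOT TRIAD data): baseline tau-PET SUVR vs
# 12-month change in a composite memory z-score.
suvr   = [0.9, 1.0, 1.1, 1.3, 1.5, 1.8, 2.0, 2.3]
memory = [0.2, 0.1, 0.0, -0.1, -0.3, -0.6, -0.8, -1.1]

def standardized_beta(x, y):
    """Single-predictor, no-covariate case: standardized beta == Pearson r."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

beta = standardized_beta(suvr, memory)
print(f"standardized beta = {beta:.2f}")  # negative: higher tau, steeper decline
```

In the study, the adjusted coefficient additionally partials out age, sex, and education, so it is not simply a correlation; the sketch only conveys the sign and scale-free interpretation.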

Keywords: alzheimer’s disease, braak I-II, in vivo biomarkers, memory, PET, tau

Procedia PDF Downloads 48
3 The Quantum Theory of Music and Languages

Authors: Mballa Abanda Serge, Henda Gnakate Biba, Romaric Guemno Kuate, Akono Rufine Nicole, Petfiang Sidonie, Bella Sidonie

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and around the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories of language. It is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to a dialogue between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, machine translation, and artificial intelligence. When this theory is applied to any text of a folk song in a tonal language, one reconstructs not only the exact melody, rhythm, and harmonies of that song, as if it were known in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music.
With the experimentation confirming the theorization, the author designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect the data extracted from his mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you request from the machine a melody of blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.

Keywords: music, entanglement, language, science

Procedia PDF Downloads 45
2 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications

Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky

Abstract:

InAs quantum dots (QDs) in a GaAs matrix are a well-documented luminescent material with high light yield, as well as thermal and ionizing-radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room-temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators such as LYSO or BaF₂. The high refractive index of the GaAs matrix (n = 3.4) ensures that light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using molecular beam epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, which separates the scintillator film so it can be transferred to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness along the dimension of radiation collection. Therefore, the luminescence properties of very thick (4-20 micron) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, temperature-dependent photoluminescence (PL) (77-450 K) was measured and fitted using a kinetic model. The PL intensity degrades by only 40% at room temperature, with an activation energy for electron escape from the QDs to the barrier of ~60 meV.
Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground-state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with a FWHM of 28 meV (33 nm) and showed a distinct red shift due to self-absorption in the QDs. Attenuation stabilized at about 3 cm⁻¹ after the light had traveled over 1 mm through the WG. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small-area integrated PD, the collected charge averaged 8.4×10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response showed an 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. These data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
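The ~7% collection-efficiency figure can be checked from the numbers quoted in the abstract, and the quoted attenuation coefficient translates into a simple Beer-Lambert transmission estimate. A back-of-envelope sketch (ideal light yield assumed; real conversion losses are not modeled):

```python
import math

# Back-of-envelope check of the reported ~7% collection efficiency,
# using the figures quoted in the abstract.
alpha_energy_mev = 5.5        # alpha-particle energy (MeV)
light_yield = 240_000         # photons per MeV (quoted potential yield)
collected_electrons = 8.4e4   # average collected charge (electrons)

emitted_photons = alpha_energy_mev * light_yield   # total photons generated
efficiency = collected_electrons / emitted_photons
print(f"collection efficiency ~ {efficiency:.1%}")

# Waveguide attenuation: I(x) = I0 * exp(-alpha * x), with alpha ~ 3 cm^-1
alpha_cm = 3.0
transmission_1mm = math.exp(-alpha_cm * 0.1)       # after 1 mm of travel
print(f"transmission after 1 mm ~ {transmission_1mm:.2f}")
```

The computed 6.4% rounds to the "about 7%" quoted above, and the ~3 cm⁻¹ attenuation implies roughly a quarter of the guided light is lost per millimeter, which is indeed the limiting factor for the lateral detector size.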

Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor

Procedia PDF Downloads 228
1 Fully Instrumented Small-Scale Fire Resistance Benches for Aeronautical Composites Assessment

Authors: Fabienne Samyn, Pauline Tranchard, Sophie Duquesne, Emilie Goncalves, Bruno Estebe, Serge Bourbigot

Abstract:

Stringent fire-safety regulations are enforced in the aeronautical industry due to the consequences that a potential fire event on an aircraft might entail, so much so that the fire issue is considered right from the design of the aircraft structure. Because an increasing amount of polymer-matrix composites is being incorporated in replacement of more conventional materials like metals, the nature of the fire risks is changing. The choice of materials is consequently of prime importance, as is the evaluation of their resistance to fire. Fire testing is mostly done using the so-called certification tests according to standards such as ISO 2685:1998(E). The latter describes a protocol to evaluate the fire resistance of structures located in a fire zone (the ability to withstand fire for 5 min). The test consists in exposing a sample of at least 300 x 300 mm² to an 1100 °C propane flame with a calibrated heat flux of 116 kW/m². This type of test is time-consuming and expensive, and it gives access to limited information on the fire behavior of the materials (pass or fail). Consequently, it can barely be used for material-development purposes. In this context, the laboratory UMET, in collaboration with industrial partners, has developed horizontal and vertical small-scale instrumented fire benches for the characterization of the fire behavior of composites. The benches use smaller samples (no more than 150 x 150 mm²), which cuts down costs and hence increases sampling throughput. However, the main added value of our benches is the instrumentation used to collect information essential to understanding the behavior of the materials. Indeed, measurements of the sample backside temperature are performed using an IR camera in both configurations. In addition, for the vertical setup, a complete characterization of the degradation process can be achieved via mass-loss measurements and quantification of the gases released during the tests.
These benches have been used to characterize and study the fire behavior of aeronautical carbon/epoxy composites. The horizontal setup has been used in particular to study the performance and durability of a protective intumescent coating on 2 mm thick 2D laminates. The efficiency of this approach has been validated, and the optimized coating thickness, as well as the performance after aging, has been determined. Reductions in performance after aging were attributed to the migration of some of the coating additives. The vertical setup has enabled investigation of the degradation process of composites under fire. Isotropic and unidirectional 4 mm thick laminates were characterized using the bench and post-fire analyses. The mass-loss measurements and gas-phase analyses of the two composites do not show significant differences, unlike the temperature profiles through the thickness of the samples. The differences were attributed to differences in thermal conductivity, as well as to delamination, which is much more pronounced for the isotropic composite (observed in the IR images) and was confirmed by X-ray microtomography. The developed benches have proven to be valuable tools to develop fire-safe composites.
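The certification-test conditions quoted above fix the thermal load on a specimen. As a rough order-of-magnitude sketch (nominal flux and the minimum sample area, taken from the abstract; real exposure depends on flame coverage and calibration):

```python
# Rough estimate of the thermal load in the ISO 2685-type certification test:
# 116 kW/m2 calibrated flux on a 300 x 300 mm2 sample for 5 min.
heat_flux_w_m2 = 116e3       # calibrated heat flux (W/m^2)
sample_area_m2 = 0.3 * 0.3   # minimum sample size (m^2)
duration_s = 5 * 60          # fire-resistance hold time (s)

power_w = heat_flux_w_m2 * sample_area_m2   # incident power on the sample
energy_j = power_w * duration_s             # total incident energy over the test
print(f"incident power: {power_w / 1e3:.2f} kW")
print(f"total incident energy: {energy_j / 1e6:.2f} MJ")
```

About 10 kW over the sample face for five minutes, i.e. roughly 3 MJ of incident energy, which illustrates why a smaller 150 x 150 mm² bench specimen (a quarter of the area, hence a quarter of the power) substantially reduces test cost.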

Keywords: aeronautical carbon/epoxy composite, durability, intumescent coating, small-scale ‘ISO 2685 like’ fire resistance test, X-ray microtomography

Procedia PDF Downloads 242