Search results for: Laurent Simon

75 Prevalence of Urinary Tract Infections and Risk Factors among Pregnant Women Attending Antenatal Clinics in Government Primary Health Care Centres in Akure

Authors: Adepeju Simon-Oke, Olatunji Odeyemi, Mobolanle Oniya

Abstract:

Urinary tract infection (UTI) has become one of the most common bacterial infections in humans, in both community and hospital settings; it has been reported in all age groups and in both sexes. This study was carried out to determine the prevalence of UTIs among pregnant women in Akure, Ondo State, Nigeria, to evaluate the current drug susceptibility pattern of the isolated organisms, and to identify the associated risk factors. A cross-sectional study was conducted on the urine of pregnant women, and socio-demographic information of the women was collected. A total of 300 clean midstream urine samples were collected, and general urine microscopic examination and culture were carried out; the Microbact identification system was used to identify gram-negative bacteria. Of the 300 urine samples cultured, 183 (61.0%) yielded significant growth of urinary pathogens, while 117 (39.0%) yielded either insignificant growth or no growth of any urinary pathogen. Prevalence of UTI was significantly associated with the type of toilet used, symptoms of UTI, and a previous history of urinary tract infection (p<0.05). Escherichia coli, 58 (31.7%), was the dominant pathogen isolated, and the least isolated uropathogens were Citrobacter freundii and Providencia rettgeri, 2 (1.1%) each. Gram-negative bacteria showed 77.6%, 67.9%, and 61.2% susceptibility to ciprofloxacin, augmentin, and chloramphenicol, respectively. Resistance to septrin, chloramphenicol, sparfloxacin, amoxicillin, augmentin, gentamycin, pefloxacin, trivid, and streptomycin was observed in the range of 23.1% to 70.1%. Gram-positive uropathogens isolated showed high resistance to amoxicillin (68.4%) and high susceptibility to the remaining nine antibiotics, in the range of 65.8% to 89.5%. This study confirms that pregnant women are at high risk of UTI. Screening of pregnant women during antenatal clinics should therefore be considered very important to avoid complications. Health education, together with regular antenatal care and personal hygiene, is recommended as a precautionary measure against UTI.
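
As a rough illustration of the statistics reported here, the sketch below computes the prevalence estimate and a chi-square test of association; the 183/300 split comes from the abstract, while the toilet-type contingency table is purely hypothetical.

```python
# Prevalence estimate and chi-square test of association, as a sketch of the
# kind of analysis reported in the abstract. The 2x2 table below is hypothetical.
from scipy.stats import chi2_contingency

positive, total = 183, 300
prevalence = positive / total
print(f"UTI prevalence: {prevalence:.1%}")  # 61.0%

# Hypothetical contingency table: rows = toilet type (pit latrine, water closet),
# columns = UTI status (positive, negative).
table = [[95, 40],
         [88, 77]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # association significant if p < 0.05
```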

Keywords: pregnant women, prevalence, risk factor, UTIs

Procedia PDF Downloads 147
74 Risk Factors for Severe Typhoid Fever in Children: A French Retrospective Study of 78 Cases from 2000 to 2017 in Six Parisian Hospitals

Authors: Jonathan Soliman, Thomas Cavasino, Virginie Pommelet, Lahouari Amor, Pierre Mornand, Simon Escoda, Nina Droz, Soraya Matczak, Julie Toubiana, François Angoulvant, Etienne Carbonnelle, Albert Faye, Loic de Pontual, Luu-Ly Pham

Abstract:

Background: Typhoid and paratyphoid fever are systemic infections caused by Salmonella enterica serovar Typhi or Paratyphi (A, B, C). Children traveling to tropical areas are at risk of contracting these diseases, which can become complicated. Methods: Clinical, biological, and bacteriological data were collected from 78 pediatric cases reported between 2000 and 2017 in six Parisian hospitals. Children aged 0 to 18 years old, with a diagnosis of typhoid or paratyphoid fever confirmed by bacteriological exams, were included. Epidemiologic, clinical, and biological features and the presence of multidrug-resistant (MDR) bacteria or intermediate susceptibility to ciprofloxacin (nalidixic acid resistance) were examined by univariate analysis and by logistic regression analysis to identify risk factors for severe typhoid in children. Results: 84.6% of the children were imported cases of typhoid fever (n=66/78) and 15.4% were autochthonous cases (n=12/78). 89.7% of cases were caused by S. Typhi (n=70/78) and 12.8% by S. Paratyphi (n=10/78), including 2 co-infections. 19.2% were intrafamilial cases (n=15/78). Median age at diagnosis was 6.4 years [6 months-17.9 years]. 28.2% of the cases were complicated forms (n=22/78): digestive (n=8; 10.3%), neurological (n=7; 9%), and pulmonary complications (n=4; 5.1%), and hemophagocytic syndrome (n=4; 5.1%). Only 5% of the children had prior immunization with a non-conjugated typhoid vaccine (n=4/78). 28% of the cases (n=22/78) were caused by resistant bacteria. Thrombocytopenia and diagnostic delay were significantly associated with severe infection (p=0.029 and p=0.01, respectively). Complicated forms were more common with MDR bacteria (p=0.1) and were not statistically associated with young age or sex in this study. Conclusions: Typhoid and paratyphoid fever are not rare in children returning from tropical areas. This multicentric pediatric study suggests that thrombocytopenia, diagnostic delay, and multidrug-resistant bacteria are associated with severe typhoid fever and complicated forms in children.
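
A minimal sketch of the logistic-regression step described above, run on synthetic data; the variable names are illustrative stand-ins, not the study's actual covariates.

```python
# Sketch of a logistic regression for severe-typhoid risk factors on synthetic
# data; coefficients and variables are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 78
thrombocytopenia = rng.integers(0, 2, n)
diagnosis_delay_days = rng.poisson(7, n)
mdr = rng.integers(0, 2, n)
# Synthetic outcome loosely tied to the predictors.
logit = -2 + 1.2 * thrombocytopenia + 0.15 * diagnosis_delay_days + 0.8 * mdr
severe = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([thrombocytopenia, diagnosis_delay_days, mdr]))
model = sm.Logit(severe, X).fit(disp=0)
print(model.summary(xname=["const", "thrombocytopenia", "delay_days", "mdr"]))
```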

Keywords: antimicrobial resistance, children, Salmonella enterica typhi and paratyphi, severe typhoid

Procedia PDF Downloads 181
73 Investigation of the Mechanical and Thermal Properties of a Silver Oxalate Nanoporous Structured Sintered Joint for Micro-joining in Relation to the Sintering Process Parameters

Authors: L. Vivet, L. Benabou, O. Simon

Abstract:

With highly demanding applications in the field of power electronics, there is an increasing need for interconnection materials with properties that can ensure both good mechanical assembly and high thermal/electrical conductivity. So far, lead-free solders have been considered an attractive solution, but recently, sintered joints based on nano-silver paste have been used for die attach and have proved to be a promising solution offering increased performance in high-temperature applications. In this work, the main parameters of the bonding process using silver oxalates, chiefly the heating rate and the bonding pressure, are studied. Their effects on both the mechanical and thermal properties of the sintered layer are evaluated following an experimental design. Pairs of copper substrates with gold metallization are assembled through the sintering process to produce the samples, which are tested using a micro-traction machine. In addition, the obtained joints are examined through microscopy to identify the important microstructural features in relation to the measured properties. The formation of an intermetallic compound at the junction between the sintered silver layer and the gold metallization deposited on copper is also analyzed. Microscopy analysis reveals a nanoporous structure of the sintered material. It is found that higher temperature and bonding pressure result in higher densification of the sintered material, giving the joint higher thermal conductivity but less mechanical flexibility to accommodate the thermo-mechanical stresses arising during service. The experimental design hence allows determination of the optimal process parameters to reach sufficient thermal/mechanical properties for a given application. It is also found that the interphase formed between the silver and the gold metallization is where fracture occurred after mechanical testing, suggesting that the inter-diffusion mechanism between the different elements of the assembly leads to the formation of a relatively brittle compound.
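
The sketch below illustrates the kind of two-factor experimental-design analysis mentioned here, estimating the main effects of heating rate and bonding pressure on joint strength; all values are hypothetical placeholders, not measured data.

```python
# Minimal two-factor full-factorial sketch: heating rate x bonding pressure ->
# joint shear strength. All numbers are hypothetical placeholders.
import numpy as np

# Coded factor levels (-1 = low, +1 = high) and hypothetical responses (MPa).
heating_rate = np.array([-1, +1, -1, +1])
pressure     = np.array([-1, -1, +1, +1])
strength     = np.array([22.0, 18.5, 30.2, 25.9])

effect_rate = strength[heating_rate == +1].mean() - strength[heating_rate == -1].mean()
effect_pressure = strength[pressure == +1].mean() - strength[pressure == -1].mean()
print(f"Main effect of heating rate: {effect_rate:+.2f} MPa")
print(f"Main effect of bonding pressure: {effect_pressure:+.2f} MPa")
```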

Keywords: nanoporous structure, silver oxalate, sintering, mechanical strength, thermal conductivity, microelectronic packaging

Procedia PDF Downloads 93
72 An Unexpected Helping Hand: Consequences of Redistribution on Personal Ideology

Authors: Simon B.A. Egli, Katja Rost

Abstract:

Literature on redistributive preferences has proliferated in recent decades. A core assumption behind it is that variation in redistributive preferences can explain different levels of redistribution. In contrast, this paper considers the reverse: what if it is redistribution that changes redistributive preferences? The core assumption behind the argument is that if self-interest (which we label concrete preferences) and ideology (which we label abstract preferences) come into conflict, the former will prevail and lead to an adjustment of the latter. To test this hypothesis, data from a survey conducted in Switzerland during the first wave of the COVID-19 crisis are used. A significant portion of the workforce at the time unexpectedly received state money through the short-time working program. Short-time work was used as a proxy for self-interest, and its effect was tested (1) on the support given to hypothetical, ailing firms during the crisis and (2) on the prioritization of justice principles guiding state action. In a first step, several models using OLS regressions on political orientation were estimated to test our hypothesis and to check for non-linear effects. We expected support for ailing firms to be independent of ideology, but only for people on short-time work. The results both confirm our hypothesis and suggest a non-linear effect: far-right individuals on short-time work were disproportionately supportive compared to moderate ones. In a second step, ordered logit models were estimated to test the impact of short-time work and political orientation on the ranking of the distributive justice principles of need, performance, entitlement, and equality. The results show that being on short-time work significantly alters the prioritization of justice principles: right-wing individuals are much more likely to prioritize need and equality over performance and entitlement when they receive government assistance, whereas no such effect is found among left-wing individuals. In conclusion, we provide moderate to strong evidence that unexpectedly finding oneself at the receiving end changes redistributive preferences when personal ideology is antithetical to redistribution. The implications of our findings for the study of populism, personal ideologies, and political change are discussed.
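
A compact sketch of the two estimation steps on synthetic data, assuming statsmodels' OrderedModel for the ordered logit; variable names and data are illustrative only, not the survey's.

```python
# Sketch of (1) OLS of firm support on political orientation x short-time work
# and (2) an ordered logit for the ranking of a justice principle.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "orientation": rng.uniform(0, 10, n),  # 0 = far left, 10 = far right
    "short_time": rng.integers(0, 2, n),
})
df["support"] = 5 - 0.3 * df.orientation + 2.5 * df.short_time + rng.normal(0, 1, n)

# Step 1: OLS with an interaction, allowing a differential slope for short-time work.
ols = smf.ols("support ~ orientation * short_time", data=df).fit()
print(ols.params)

# Step 2: ordered logit on an ordinal ranking (1 = lowest, 4 = highest priority).
df["need_rank"] = pd.cut(df.support + rng.normal(0, 1, n), 4, labels=False) + 1
olog = OrderedModel(df.need_rank, df[["orientation", "short_time"]],
                    distr="logit").fit(method="bfgs", disp=0)
print(olog.params)
```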

Keywords: COVID-19, ideology, redistribution, redistributive preferences, self-interest

Procedia PDF Downloads 140
71 Modelling of Solidification in a Latent Thermal Energy Storage with a Finned Tube Bundle Heat Exchanger Unit

Authors: Remo Waser, Simon Maranda, Anastasia Stamatiou, Ludger J. Fischer, Joerg Worlitschek

Abstract:

In latent heat storage, a phase change material (PCM) is used to store thermal energy. The heat transfer rate during solidification is limited, which is considered a key challenge in the development of latent heat storages. Thus, finned heat exchangers (HEX) are often utilized to increase the heat transfer rate of the storage system. In this study, a new modeling approach for calculating the heat transfer rate in latent thermal energy storages with complex HEX geometries is presented. This model allows for an optimization of the HEX design in terms of costs and thermal performance of the system. Modeling solidification processes requires the calculation of time-dependent heat conduction with moving boundaries. Commonly used computational fluid dynamics (CFD) methods enable the analysis of the heat transfer in complex HEX geometries. If applied to the entire storage, the drawback of this approach is the high computational effort due to the small time steps and fine computational grids required for accurate solutions. An alternative way to describe the process of solidification is the so-called temperature-based approach. In order to minimize the computational effort, a quasi-stationary assumption can be applied. This approach provides highly accurate predictions for tube heat exchangers; however, it shows unsatisfactory results for more complex geometries such as finned tube heat exchangers. The presented simulation model uses a temporal and spatial discretization of the heat exchanger tube. The spatial discretization is based on the smallest possible symmetric segment of the HEX. The heat flow in each segment is calculated using the finite volume method. Since the heat transfer fluid temperature can be derived using energy conservation equations, the boundary conditions at the inner tube wall are dynamically updated for each time step and segment. The model allows a prediction of the thermal performance of latent thermal energy storage systems using complex HEX geometries with considerably low computational effort.
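
For illustration, here is a minimal 1D finite-volume sketch of the enthalpy/temperature approach to solidification. Geometry and material values are placeholders; the actual model discretizes the smallest symmetric segment of the finned tube and dynamically updates the wall boundary condition.

```python
# 1D explicit finite-volume solidification of a PCM slab cooled from one side,
# using an enthalpy formulation with an isothermal phase change.
import numpy as np

N, L = 50, 0.02                  # cells, domain thickness [m]
dx = L / N
k, rho, cp = 0.2, 800.0, 2000.0  # PCM conductivity, density, heat capacity
Lh, Tm = 200e3, 300.0            # latent heat [J/kg], melting temperature [K]
T_wall = 280.0                   # fixed cold boundary (heat transfer fluid side)

H = np.full(N, cp * (Tm + 5) + Lh)  # start fully liquid, 5 K superheat

def temperature(H):
    # Invert the enthalpy-temperature relation (isothermal phase change).
    T = np.where(H < cp * Tm, H / cp, Tm)
    return np.where(H > cp * Tm + Lh, (H - Lh) / cp, T)

dt = 0.2 * rho * cp * dx**2 / k     # stable explicit time step
for step in range(20000):
    T = temperature(H)
    Tl = np.concatenate(([2 * T_wall - T[0]], T[:-1]))  # ghost cell at the wall
    Tr = np.concatenate((T[1:], [T[-1]]))               # adiabatic far side
    H += dt * k * (Tl - 2 * T + Tr) / (rho * dx**2)

frac_solid = np.clip((cp * Tm + Lh - H) / Lh, 0, 1).mean()
print(f"Average solid fraction after {20000 * dt:.0f} s: {frac_solid:.2f}")
```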

Keywords: modelling of solidification, finned tube heat exchanger, latent thermal energy storage

Procedia PDF Downloads 268
70 Shock-Induced Densification in Glass Materials: A Non-Equilibrium Molecular Dynamics Study

Authors: Richard Renou, Laurent Soulard

Abstract:

Lasers are widely used in glass material processing, from waveguide fabrication to channel drilling. The gradual damage of glass optics under UV lasers is also an important issue to be addressed. Glass materials (including metallic glasses) can undergo a permanent densification under laser-induced shock loading. Despite increased interest in interactions between lasers and glass materials, little is known about the structural mechanisms involved under shock loading. For example, the densification process in silica glasses occurs between 8 GPa and 30 GPa; above 30 GPa, the glass material returns to its original density after relaxation. Investigating these unusual mechanisms in silica glass will provide an overall better understanding of glass behaviour. Non-equilibrium molecular dynamics (NEMD) simulations were carried out in order to gain insight into the silica glass microscopic structure under shock loading. The shock was generated by a piston impacting the glass material at high velocity (from 100 m/s up to 2 km/s). Periodic boundary conditions were used in the directions perpendicular to the shock propagation to model an infinite system; one-dimensional shock propagation was therefore studied. Simulations were performed with the STAMP code developed by the CEA. A very specific structure is observed in a silica glass: oxygen atoms around silicon atoms are organized in tetrahedrons, and those tetrahedrons are linked and tend to form rings inside the structure. A significant number of empty cavities is also observed in glass materials. In order to understand how shock loading impacts the overall structure, the tetrahedrons, the rings, and the cavities were thoroughly analysed. An elastic behaviour is observed when the shock pressure is below 8 GPa. This is consistent with the Hugoniot elastic limit (HEL) of 8.8 GPa estimated experimentally for silica glasses. Behind the shock front, the ring structure and the cavity distribution are impacted: the ring volume is smaller, and most cavities disappear with increasing shock pressure. However, the tetrahedral structure is not affected. The elasticity of the glass structure is therefore related to ring shrinking and cavity closing. Above the HEL, the shock pressure is high enough to impact the tetrahedral structure: an increasing number of hexahedrons and octahedrons are formed with increasing pressure, and the large rings break to form smaller ones. The cavities, however, are not impacted, as most cavities are already closed under an elastic shock. After the material relaxation, a significant number of hexahedrons and octahedrons is still observed, and most of the cavities remain closed. The overall ring distribution after relaxation is similar to the equilibrium distribution. The densification process is therefore related to two structural mechanisms: a change in the coordination of silicon atoms and cavity closing. To sum up, non-equilibrium molecular dynamics simulations were carried out to investigate silica behaviour under shock loading. Analysing the structure led to interesting conclusions about the elastic and densification mechanisms in glass materials. This work will be completed with a detailed study of the mechanisms occurring above 30 GPa, where no sign of densification is observed after the material relaxation.
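
As a back-of-the-envelope companion to the piston-driven shocks described here, the sketch below estimates the shock pressure from the Rankine-Hugoniot jump condition with a linear Us-up fit; the c0 and s values are assumed placeholders, not fitted silica data (fused silica is notoriously anomalous in this regime).

```python
# Rankine-Hugoniot estimate of shock pressure: P = rho0 * Us * up,
# with an assumed linear shock-velocity fit Us = c0 + s * up.
rho0 = 2200.0        # initial density of silica glass [kg/m^3]
c0, s = 4000.0, 1.3  # assumed linear Us-up parameters [m/s], [-]

for up in (100.0, 500.0, 1000.0, 2000.0):  # piston velocities [m/s]
    Us = c0 + s * up                       # shock velocity
    P = rho0 * Us * up                     # pressure behind the shock front
    print(f"up = {up:6.0f} m/s -> Us = {Us:5.0f} m/s, P = {P/1e9:5.2f} GPa")
```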

Keywords: densification, molecular dynamics simulations, shock loading, silica glass

Procedia PDF Downloads 222
69 Characterizing Nasal Microbiota in COVID-19 Patients: Insights from Nanopore Technology and Comparative Analysis

Authors: David Pinzauti, Simon De Jaegher, Maria D'Aguano, Manuele Biazzo

Abstract:

The COVID-19 pandemic has left an indelible mark on global health, leading to a pressing need for understanding the intricate interactions between the virus and the human microbiome. This study focuses on characterizing the nasal microbiota of patients affected by COVID-19, with a specific emphasis on the comparison with unaffected individuals, to shed light on the crucial role of the microbiome in the development of this viral disease. To achieve this objective, Nanopore technology was employed to analyze the full-length bacterial 16S rRNA gene present in nasal swabs collected in Malta between January 2021 and August 2022. A comprehensive dataset consisting of 268 samples (126 SARS-negative and 142 SARS-positive samples) was subjected to a comparative analysis using an in-house, custom pipeline. The findings from this study revealed that individuals affected by COVID-19 possess a nasal microbiota that is significantly less diverse, as evidenced by lower α diversity, and is characterized by distinct microbial communities compared to unaffected individuals. The β diversity analyses were carried out at different taxonomic resolutions. At the phylum level, Bacteroidota was found to be more prevalent in SARS-negative samples, suggesting a potential decrease during the course of viral infection. At the species level, the identification of several specific biomarkers further underscores the critical role of the nasal microbiota in COVID-19 pathogenesis. Notably, species such as Finegoldia magna, Moraxella catarrhalis, and others exhibited higher relative abundance in SARS-positive samples, potentially serving as significant indicators of the disease. This study presents valuable insights into the relationship between COVID-19 and the nasal microbiota. The identification of distinct microbial communities and potential biomarkers associated with the disease offers promising avenues for further research and therapeutic interventions aimed at enhancing public health outcomes in the context of COVID-19.
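
A minimal sketch of the kind of alpha-diversity comparison described above, using the Shannon index on toy abundance tables; a real analysis would take the per-sample taxa counts from the 16S pipeline.

```python
# Shannon alpha diversity per sample, compared between groups with a
# Mann-Whitney U test. Counts below are toy values for illustration.
import numpy as np
from scipy.stats import mannwhitneyu

def shannon(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

# Toy taxa count vectors (rows = samples).
sars_negative = [[30, 25, 20, 15, 10], [28, 22, 18, 20, 12]]
sars_positive = [[70, 20, 5, 3, 2], [80, 10, 6, 2, 2]]

h_neg = [shannon(s) for s in sars_negative]
h_pos = [shannon(s) for s in sars_positive]
stat, p = mannwhitneyu(h_neg, h_pos)
print(f"median H: neg = {np.median(h_neg):.2f}, pos = {np.median(h_pos):.2f}, p = {p:.3f}")
```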

Keywords: COVID-19, nasal microbiota, nanopore technology, 16S rRNA gene, biomarkers

Procedia PDF Downloads 68
68 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection Using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables the securing of product quality through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of production conditions in the production of hydraulic valves within certain time periods can be identified by applying the concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected, and accurate quality predictions are achieved.
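
A minimal sketch of the classification step described above: an AdaBoost model on synthetic features standing in for the gauge-block, assembly, and end-of-line measurements, with impurity-based feature importances as a crude feature-selection signal.

```python
# AdaBoost classification of an imbalanced synthetic "leakage" label, plus
# a feature-importance ranking for feature selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")

# Keep only the most important features, as a crude selection step.
top = np.argsort(clf.feature_importances_)[::-1][:10]
print("top-10 feature indices:", top)
```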

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 162
67 Multiscale Modeling of Damage in Textile Composites

Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese

Abstract:

Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in the aerospace, automotive, and maritime industries. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account in order for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered; further, it can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the mesoscale approach to modeling textile composites explicitly considers the internal geometry of the reinforcing tows, and thus their interaction and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites. For the mesoscale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used, one based on the finite element method and the other making use of an embedded semi-analytical approach. The goal will be the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.
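
As a minimal illustration of the continuum damage idea used for the tows at the mesoscale, the sketch below implements a 1D scalar damage law, sigma = (1 - D) * E * eps, with damage growing linearly between an onset and a failure strain; all parameter values are illustrative, not from the paper.

```python
# 1D scalar continuum damage sketch: stiffness degrades as damage D grows.
import numpy as np

E, eps0, eps_f = 70e3, 0.01, 0.03  # modulus [MPa], onset and failure strains

def stress(eps):
    D = np.clip((eps - eps0) / (eps_f - eps0), 0.0, 1.0)  # damage in [0, 1]
    return (1.0 - D) * E * eps

for eps in (0.005, 0.015, 0.025, 0.03):
    print(f"eps = {eps:.3f} -> sigma = {stress(eps):7.1f} MPa")
```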

Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites

Procedia PDF Downloads 354
66 Health Psychology Intervention: Identifying Early Symptoms in Neurological Disorders

Authors: Simon B. N. Thompson

Abstract:

An early indicator of neurological disease has been proposed by the expanded Thompson Cortisol Hypothesis, which suggests that yawning is linked to rises in cortisol levels. Cortisol is essential to the regulation of the immune system, and pathological yawning is a symptom of multiple sclerosis (MS). Electromyographic (EMG) activity in the jaw muscles typically rises when the muscles are moved – extended or flexed – and yawning has been shown to be highly correlated with cortisol levels in healthy people. It is likely that these elevated cortisol levels are also seen in people with MS. The possible link between EMG in the jaw muscles and rises in saliva cortisol levels during yawning was investigated in a randomized controlled trial of 60 volunteers aged 18-69 years who were exposed to conditions designed to elicit the yawning response. Saliva samples were collected at the start and after yawning, or at the end of the presentation of yawning-provoking stimuli in the absence of a yawn, and EMG data were additionally collected during rest and yawning phases. The Hospital Anxiety and Depression Scale, Yawning Susceptibility Scale, General Health Questionnaire, and demographic and health details were collected, and the following exclusion criteria were adopted: chronic fatigue, diabetes, fibromyalgia, heart condition, high blood pressure, hormone replacement therapy, multiple sclerosis, and stroke. Significant differences were found between the saliva cortisol samples for the yawners, t(23) = -4.263, p < 0.001, whereas the difference for the non-yawners between rest and post-stimuli was non-significant. There were also significant differences between yawners and non-yawners for the EMG potentials, with the yawners having higher rest and post-yawning potentials. Significant evidence was found to support the Thompson Cortisol Hypothesis, suggesting that rises in cortisol levels are associated with the yawning response. Further research is underway to explore the use of cortisol as a potential diagnostic tool to assist the early diagnosis of symptoms related to neurological disorders. Bournemouth University Research & Ethics approval granted: JC28/1/13-KA6/9/13. Professional code of conduct, confidentiality, and safety issues have been addressed and approved in the Ethics submission. Trials identification number: ISRCTN61942768. http://www.controlled-trials.com/isrctn/
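
A small sketch of the statistical comparison reported above: a paired t-test on synthetic pre/post cortisol values for the yawners (the reported t(23) implies 24 paired observations). The data here are invented for illustration.

```python
# Paired t-test on synthetic pre/post saliva cortisol values.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
pre = rng.normal(5.0, 1.0, 24)         # baseline cortisol (arbitrary units)
post = pre + rng.normal(0.8, 0.5, 24)  # rise after yawning

t, p = ttest_rel(pre, post)
print(f"t({len(pre) - 1}) = {t:.3f}, p = {p:.4f}")
```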

Keywords: cortisol, electromyography, neurology, yawning

Procedia PDF Downloads 590
65 A Factor-Analytical Approach on Identities in Environmentally Significant Behavior

Authors: Alina M. Udall, Judith de Groot, Simon de Jong, Avi Shankar

Abstract:

There are many ways in which environmentally significant behavior can be explained. The dominant psychological theories, namely the theory of planned behavior, the norm-activation theory and its extension, the value-belief-norm theory, and the theory of habit, do not explain large parts of environmentally significant behavior. A new and rapidly growing approach is to focus on how consumers' identities predict environmentally significant behavior. Identity may be relevant because consumers have many identities that are assumed to guide their behavior; therefore, we assume that many identities will guide environmentally significant behavior. A review of the literature shows that over 200 identities have been studied, making it difficult to establish the key identities for explaining environmentally significant behavior. Therefore, this paper first aims to establish the key identities previously used for explaining environmentally significant behavior. Second, the aim is to test which key identities explain environmentally significant behavior. To address these aims, an online survey study (n = 578) was conducted. First, the exploratory factor analysis reveals 15 identity factors, namely: environmentally concerned identity, anti-environmental self-identity, environmental place identity, connectedness with nature identity, green space visitor identity, active ethical identity, carbon off-setter identity, thoughtful self-identity, close community identity, anti-carbon off-setter identity, environmental group member identity, national identity, identification with developed countries, cyclist identity, and thoughtful organisation identity. Furthermore, to help researchers understand and operationalize the identities, the article provides theoretical definitions for each of the identities, in line with identity theory, social identity theory, and place identity theory. Second, the hierarchical regression shows that only 10 factors significantly and uniquely explain the variance in environmentally significant behavior. In order of predictive power, these are: environmentally concerned identity, anti-environmental self-identity, thoughtful self-identity, environmental group member identity, anti-carbon off-setter identity, carbon off-setter identity, connectedness with nature identity, national identity, and green space visitor identity. The identities explain over 60% of the variance in environmentally significant behavior, a large effect size. Based on this finding, the article presents a new theoretical framework showing the key identities explaining environmentally significant behavior, to help improve and align the field.
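
A minimal sketch of the exploratory-factor-analysis step on synthetic survey data; the item construction is invented, and a real analysis would additionally involve rotation and factor-retention diagnostics.

```python
# Exploratory factor analysis on synthetic Likert-style survey items.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n, items = 578, 12
latent = rng.normal(size=(n, 3))                  # three underlying identities
loadings = rng.normal(scale=0.8, size=(3, items))
X = latent @ loadings + rng.normal(scale=0.5, size=(n, items))

fa = FactorAnalysis(n_components=3, random_state=0).fit(X)
print("loadings (factors x items):")
print(np.round(fa.components_, 2))
```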

Keywords: environmentally significant behavior, factor analysis, place identity, social identity

Procedia PDF Downloads 451
64 Quercetin Nanoparticles and Their Hypoglycemic Effect in a CD1 Mouse Model with Type 2 Diabetes Induced by Streptozotocin and a High-Fat and High-Sugar Diet

Authors: Adriana Garcia-Gurrola, Carlos Adrian Peña Natividad, Ana Laura Martinez Martinez, Alberto Abraham Escobar Puentes, Estefania Ochoa Ruiz, Aracely Serrano Medina, Abraham Wall Medrano, Simon Yobanny Reyes Lopez

Abstract:

Type 2 diabetes mellitus (T2DM) is a metabolic disease characterized by elevated blood glucose levels. Quercetin is a natural flavonoid with a hypoglycemic effect, but reported data are inconsistent, mainly due to the structural instability and low solubility of quercetin. Nanoencapsulation is a distinct strategy to overcome these intrinsic limitations. Therefore, this work aims to develop a quercetin nano-formulation based on biopolymeric starch nanoparticles to enhance the release and hypoglycemic effect of quercetin in a T2DM-induced mouse model. Starch-quercetin nanoparticles were synthesized using high-intensity ultrasonication, and their structural and colloidal properties were determined by FTIR and DLS. For in vivo studies, male CD1 mice (n=25) were divided into five groups (n=5 each). T2DM was induced using a high-fat and high-sugar diet for 32 weeks and streptozotocin injection. Group 1 consisted of healthy mice fed a normal diet with water ad libitum; Group 2 consisted of diabetic mice treated with saline solution; Group 3 of diabetic mice treated with glibenclamide; Group 4 of diabetic mice treated with empty nanoparticles; and Group 5 of diabetic mice treated with quercetin nanoparticles. Quercetin nanoparticles had a hydrodynamic size of 232 ± 88.45 nm, a PDI of 0.310 ± 0.04, and a zeta potential of -4 ± 0.85 mV. The encapsulation efficiency of the nanoparticles was 58 ± 3.33%. No significant differences (p > 0.05) were observed in biochemical parameters (lipids, insulin, and C-peptide). Groups 3 and 5 showed a similar hypoglycemic effect, but quercetin nanoparticles showed a longer-lasting effect. Histopathological studies revealed that the T2DM mouse groups showed degenerated and fatty liver tissue; however, the group treated with quercetin nanoparticles showed liver tissue similar to that of the healthy group. These results demonstrate that quercetin nano-formulations based on starch nanoparticles are effective alternatives with hypoglycemic effects.
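
A tiny sketch of the encapsulation-efficiency calculation behind the reported 58 ± 3.33%; the concentrations below are hypothetical.

```python
# EE% = encapsulated quercetin / total quercetin * 100, estimated from the
# total and free (non-encapsulated) drug amounts; values are hypothetical.
def encapsulation_efficiency(total_mg, free_mg):
    return (total_mg - free_mg) / total_mg * 100

print(f"EE = {encapsulation_efficiency(10.0, 4.2):.1f} %")  # -> 58.0 %
```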

Keywords: quercetin, type 2 diabetes mellitus, in vivo study, nanoparticles

Procedia PDF Downloads 33
63 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through Steam Generators

Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy

Abstract:

Once-through steam generators (OTSG) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner and the radiant and convective sections. Natural gas is burned through staged diffusive flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then directed into the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially nitrogen oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on increasingly advanced tools to study and predict NOₓ emission. With the increase in computational resources, computational fluid dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation processes. Moreover, to optimize the burner operating conditions with regard to NOₓ emission, field characterization and measurements are usually carried out. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation schedule constraints. Therefore, the application of CFD seems more adequate for providing guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG, namely the commercial software ANSYS Fluent and the open-source software OpenFOAM. The RANS (Reynolds-averaged Navier-Stokes) equations, closed by the k-epsilon turbulence model and combined with the Eddy Dissipation Concept to model the combustion, are solved. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means of assessing the numerical modelling. Flame temperatures and chemical composition are used as reference fields to perform this validation, and the results show a fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristics Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are pinpointed and correlated to the physics of the flow. CFD is, therefore, a useful tool for providing insight into the NOₓ emission phenomena in OTSGs: sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for a field tune-up.
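
A minimal sketch of how an Emission Characteristics Map could be assembled from a handful of individual RANS runs: interpolating NOₓ over the operating-condition plane. The operating points, axes, and NOₓ values below are placeholders, not simulation results.

```python
# Interpolate NOx over a (load fraction, excess-air ratio) plane from sparse
# simulated operating points, as an illustrative emission map.
import numpy as np
from scipy.interpolate import griddata

# (load fraction, excess-air ratio) -> NOx [ppm] from individual simulations.
points = np.array([[0.5, 1.05], [0.5, 1.25], [0.8, 1.05], [0.8, 1.25], [1.0, 1.15]])
nox = np.array([38.0, 25.0, 62.0, 41.0, 55.0])

load, ear = np.meshgrid(np.linspace(0.5, 1.0, 6), np.linspace(1.05, 1.25, 5))
grid = griddata(points, nox, (load, ear), method="linear")
print(np.round(grid, 1))  # NaN outside the convex hull of the sampled points
```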

Keywords: combustion, computational fluid dynamics, nitrogen oxides emission, once-through steam generators

Procedia PDF Downloads 113
62 Attitude to the Types of Organizational Change

Authors: O. Y. Yurieva, O. V. Yurieva, O. V. Kiselkina, A. V. Kamaseva

Abstract:

Since the early 2000s, there have been innovative changes in the civil service in Russia due to administrative reform. The prospects of the civil service reform include a fundamental change in the personnel component, increasing the level of professionalism of officials and increasing their capacity for self-organization and self-regulation. In order to achieve this, the civil service must be able to change continuously. Organizational change has long been a subject of scientific inquiry; research in this field has focused on the methodological aspects of implementing change, the specifics of change in different types of organizations (business, government, and so on), and the design of change in organizations, including change based on organizational culture. Organizational change in the civil service, however, is among the least studied areas; research on the problems of its transformation has been carried out only in fragments. According to Herbert Simon's theory of resistance, the root of opposition to and rejection of change lies in the person, who will resist any change if it threatens to undermine his or her degree of satisfaction as a member of the organization (regardless of the reasons for the change). Thus, the condition for successful adaptation to change in an organization is the ability of its staff to accept innovation. As part of this problem, the study sought to identify the innovativeness of civil servants and to determine their readiness, with a view to developing proposals for the implementation of organizational change in the public service. To identify attitudes toward organizational change, a case study was carried out using I. Motovilina's 'Attitudes to Organizational Change' method, which made it possible to predict the type of resistance to change and to reveal contradictions and hidden results. The advantages of I. Motovilina's method are its brevity, its simplicity, the analysis of responses to each question, and its use of 'overlapping' questions to uncover potentially conflicting factors. Based on the study, it was found that respondents have a more positive attitude toward local changes than toward those taking place in reality, such as 'increased opportunities for professional growth', 'increased requirements for the level of professionalism', and 'the emergence of possible manifestations of initiative from below'. The diagnostics of organizational change in the public service implemented by the authors revealed specific problem areas, rooted in a lack of understanding of the importance of personnel innovation amid the bureaucratization of innovation in public service organizations.

Keywords: innovative changes, self-organization, self-regulation, civil service

Procedia PDF Downloads 459
61 Analysis of the Brazilian Trade Balance in Relation to Mercosur: A Comparison between the Periods 1989-1994 and 1994-2012

Authors: Luciana Aparecida Bastos, Tatiana Diair L. F. Rosa, Jesus Creapldi

Abstract:

The idea of Latin American integration arose from the ideals of Simón Bolívar, who in 1824 called the Ibero-American nations to the Amphictyonic Congress of Panama, held on June 22, 1826, where he defended the importance of Latin American unity. However, this congress proved frustrating, and Bolívar's idea went no further. It was only after Europe started its own integration process, driven by the end of World War II, that the subject re-emerged in Latin America. Thus, supported by the European integration process (begun in 1957, building on the excellent result of the ECSC, the European Coal and Steel Community, itself an outgrowth of the 1948 customs union of the BENELUX countries, Belgium, the Netherlands, and Luxembourg), LAFTA, the Latin American Free Trade Association, was created in Latin America in 1960. In 1980, LAFTA was replaced by LAIA, the Latin American Integration Association, both with the same goal: to integrate Latin America, its economy, and its trade. Most researchers of this period agree that the regional market would be expanded through integration. The creation of one or more economic blocs in the region would provide for the union of Latin American countries through a fusion of common interests and their geographical proximity; the countries would try to develop common projects to promote mutual growth and economic development, tariff reductions, and increased trade, among many other goals set together. Thus, taking into account Mercosur, the main Latin American bloc, created in 1994, the aim of this paper is to make a brief analysis of the trade balance performance of Brazil (the largest economy of the bloc) in Mercosur in two periods: 1989-1994 and 1994-2012. These periods were chosen in order to compare the situation before and after Brazil's integration into Mercosur. The methodologies used were a literature review and descriptive statistics. The results showed that after Brazil's integration into Mercosur, exports and imports grew within the bloc, and the country became the leading importer from the other economies of Mercosur; that is, after integration, Brazil was largely responsible for promoting the expansion of regional trade through the import of products from the other members of the bloc.

Keywords: Brazil, Mercosur, integration, trade balance, comparison

Procedia PDF Downloads 324
60 Establishment of Diagnostic Reference Levels for Computed Tomography Examination at the University of Ghana Medical Centre

Authors: Shirazu Issahaku, Isaac Kwesi Acquah, Simon Mensah Amoh, George Nunoo

Abstract:

Introduction: Diagnostic Reference Levels are important indicators for monitoring and optimizing protocols and procedures in medical imaging across facilities and equipment. They help to evaluate whether, under routine clinical conditions, the median value obtained for a representative group of patients from a specified procedure is unusually high or low for that procedure. This study aimed to propose Diagnostic Reference Levels for the most common routine Computed Tomography examinations of the head, chest, and abdominopelvic regions at the University of Ghana Medical Centre. Methods: The Diagnostic Reference Levels were determined based on the investigation of the most common routine examinations, including head Computed Tomography with and without contrast, abdominopelvic Computed Tomography with and without contrast, and chest Computed Tomography without contrast. The study was based on two dose indicators: the volumetric Computed Tomography Dose Index and the Dose-Length Product. Results: The estimated median values for head Computed Tomography with contrast were 38.33 mGy for the volumetric Computed Tomography Dose Index and 829.35 mGy.cm for the Dose-Length Product, while without contrast they were 38.90 mGy and 860.90 mGy.cm, respectively. For an abdominopelvic Computed Tomography examination with contrast, the estimated values were 40.19 mGy and 2096.60 mGy.cm; in the absence of contrast, they were 14.65 mGy and 800.40 mGy.cm, respectively. Additionally, for a chest Computed Tomography examination, the estimated values were 12.75 mGy and 423.95 mGy.cm, respectively. These median values represent the proposed diagnostic reference values for the head, chest, and abdominopelvic regions. Conclusions: The proposed Diagnostic Reference Levels are comparable to those recommended by the International Atomic Energy Agency and International Commission on Radiological Protection Publication 135, and to other regional published data from the European Commission and regional national Diagnostic Reference Levels in Africa. These reference levels will serve as benchmarks to guide clinicians in optimizing radiation dose levels while ensuring adequate diagnostic image quality at the facility.
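
A small sketch of how facility-level DRLs are typically derived, taking the median (and quartiles) of CTDIvol and DLP over a patient sample per protocol; the values below are synthetic, not the study's measurements.

```python
# Median and interquartile range of dose indicators for one protocol.
import numpy as np

rng = np.random.default_rng(4)
ctdi_head = rng.normal(38.5, 4.0, 120)   # CTDIvol [mGy] for head CT (synthetic)
dlp_head = rng.normal(850.0, 90.0, 120)  # DLP [mGy.cm] for head CT (synthetic)

for name, data in [("CTDIvol [mGy]", ctdi_head), ("DLP [mGy.cm]", dlp_head)]:
    q25, med, q75 = np.percentile(data, [25, 50, 75])
    print(f"{name}: median = {med:.1f} (IQR {q25:.1f}-{q75:.1f})")
```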

Keywords: diagnostic reference levels, computed tomography dose index, computed tomography, radiation exposure, dose-length product, radiation protection

Procedia PDF Downloads 50
59 Acetic Acid Adsorption and Decomposition on Pt(111): Comparisons to Ni(111)

Authors: Lotanna Ezeonu, Jason P. Robbins, Ziyu Tang, Xiaofang Yang, Bruce E. Koel, Simon G. Podkolzin

Abstract:

The interaction of organic molecules with metal surfaces is of interest in numerous technological applications, such as catalysis, bone replacement, and biosensors. Acetic acid is one of the main products of bio-oils produced from the pyrolysis of hemicellulosic feedstocks. However, the high oxygen content of bio-oils makes them unsuitable for use as fuels. Hydrodeoxygenation is a proven technique for the catalytic deoxygenation of bio-oils. An understanding of the energetics and control of the bond-breaking sequences of biomass-derived oxygenates on metal surfaces will enable a guided optimization of existing catalysts and the development of more active and selective processes for biomass transformation to fuels. Such investigations have been carried out with the aid of ultrahigh vacuum and its concomitant techniques. The high catalytic activity of platinum in biomass-derived oxygenate transformations has sparked a lot of interest. We herein exploit infrared reflection absorption spectroscopy (IRAS), temperature-programmed desorption (TPD), and density functional theory (DFT) to study the adsorption and decomposition of acetic acid on a Pt(111) surface, which is then compared with Ni(111), a model non-noble metal. We found that acetic acid adsorbs molecularly on the Pt(111) surface at 90 K, interacting through the lone pair of electrons of one oxygen atom. At 140 K, the molecular form is still predominant, with some dissociative adsorption (in the form of acetate and hydrogen). Annealing to 193 K led to complete dehydrogenation of the molecular acetic acid species, leaving adsorbed acetate. At 440 K, decomposition of the acetate species occurs via decarbonylation and decarboxylation, as evidenced by desorption peaks for H₂, CO, CO₂, and CHₓ fragments (x = 1, 2) in the TPD. The assignments of the experimental IR peaks were made using visualization of the DFT-calculated vibrational modes. The results showed that acetate adsorbs in a bridged bidentate (μ²η²(O,O)) configuration. The coexistence of linear and bridge-bonded CO was also predicted by the DFT results. A similar molecular acid adsorption energy was predicted in the case of Ni(111), whereas a significant difference was found for acetate adsorption.
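
As an illustration of how such surface models are commonly set up, here is a sketch using ASE to build a Pt(111) slab with an acetic acid adsorbate; the placement is only a crude starting geometry, and a real study would relax a proper bidentate acetate configuration with a DFT calculator attached.

```python
# Build a Pt(111) slab and place an acetic acid molecule on an fcc site.
from ase.build import fcc111, molecule, add_adsorbate

slab = fcc111("Pt", size=(3, 3, 4), vacuum=10.0)  # 4-layer Pt(111) slab
acetic_acid = molecule("CH3COOH")
add_adsorbate(slab, acetic_acid, height=2.2, position="fcc")
print(f"{len(slab)} atoms; cell lengths: {slab.cell.lengths().round(2)}")
```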

Keywords: acetic acid, platinum, nickel, infrared reflection absorption spectroscopy, temperature-programmed desorption, density functional theory

Procedia PDF Downloads 106
58 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through the application of predictive quality, the great potential for reducing necessary quality control can be exploited through the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is made all the more difficult by this data situation. The implementation of a machine learning application can be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the costs of eliminating errors increase significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
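
A minimal sketch of the business-understanding comparison described above: the same synthetic "leakage" data framed once as regression (predict the leakage volume flow) and once as classification (pass/fail against a limit). Data, limit, and models are illustrative stand-ins.

```python
# Compare regression-then-threshold against direct classification on the
# decision that matters: the pass/fail inspection outcome.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(3000, 15))
leakage = 0.5 + X[:, 0] * 0.2 + X[:, 1] * 0.1 + rng.normal(0, 0.05, 3000)
limit = 0.75                       # hypothetical acceptance limit
y_class = (leakage > limit).astype(int)

X_tr, X_te, l_tr, l_te, c_tr, c_te = train_test_split(X, leakage, y_class,
                                                      random_state=0)

reg = GradientBoostingRegressor(random_state=0).fit(X_tr, l_tr)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, c_tr)

reg_decision = (reg.predict(X_te) > limit).astype(int)
print(f"regression->threshold accuracy: {(reg_decision == c_te).mean():.3f}")
print(f"direct classification accuracy: {clf.score(X_te, c_te):.3f}")
```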

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 144
57 Public Procurement and Innovation: A Municipal Approach

Authors: M. Moso-Diez, J. L. Moragues-Oregi, K. Simon-Elorz

Abstract:

Innovation procurement, which is designed to steer the development of solutions towards concrete public sector needs as a driver for innovation from the demand side (in public services as well as in market opportunities for companies), is horizontally emerging as a new policy instrument. In 2014, the new EU public procurement directives 2014/24/EU and 2014/25/EU reinforced the support for Public Procurement for Innovation, dedicating funding instruments that can be used across all areas supported by Horizon 2020 and targeting potential buyers of innovative solutions: groups of public procurers with similar needs. As these initiatives are still in progress, research related to them is scarce. We argue that Innovation Public Procurement can arise as an innovative policy instrument for public procurement in different policy domains, in spite of existing institutional and cultural barriers (legal guarantee versus innovation). The presentation combines insights from public procurement and supply chain management in a sustainability and innovation policy arena, as a means of providing an understanding of: (1) the circumstances that emerge; (2) the relationship between public and private actors; and (3) the emerging capacities in the definition of the agenda. The policy adopters are the contracting authorities, which are mainly at the municipal level, where they interact with the supply chain, interconnecting sustainability and climate measures with other policy priorities such as innovation and urban planning, and through the Competitive Dialogue procedure. We found that geography and territory affect both the level of the municipal budget (due to municipal income per capita) and its institutional competencies (due to demographic reasons). In spite of the relevance of institutional determinants for public procurement, other factors play an important role, such as human factors as well as both public policy and private intervention. The experience is a 'city project' (Bilbao) in the field of brownfield decontamination. Brownfield sites typically refer to abandoned or underused industrial and commercial properties, such as old process plants, mining sites, and landfills, that are available but contain low levels of environmental contaminants that may complicate reuse or redevelopment of the land. This article concludes that Innovation Public Procurement in sustainability and climate issues should be further developed, both as a policy instrument and as a policy research line that could enable further relevant changes in public procurement as well as in climate innovation.

Keywords: innovation, city projects, public policy, public procurement

Procedia PDF Downloads 309
56 Reducing Flood Risk through Value Capture and Risk Communication: A Case Study in Cocody-Abidjan

Authors: Dedjo Yao Simon, Takahiro Saito, Norikazu Inuzuka, Ikuo Sugiyama

Abstract:

Abidjan city (Republic of Ivory Coast) is an emerging megacity and an urban coastal area where the number of reported floods is increasing rapidly due to climate change and unplanned urbanization. However, comprehensive disaster mitigation plans, policies, and financial resources are still lacking, as the population is unaware of the extent and location of the flood zones, leaving them unprepared to mitigate the damage. Considering the existing conditions, this paper aims to discuss an approach for flood risk reduction in Cocody Commune through a value capture strategy and flood risk communication. Using geospatial techniques and hydrological simulation, we start our study by delineating flood zones and depths under several return periods in the study area. Then, a field survey using a questionnaire is conducted in order to validate the flood maps, to estimate the flood risk, and to collect a sample of residents' opinions on how flood risk information disclosure could affect the values of property located inside and outside the flood zones. The results indicate that the study area is highly vulnerable to 5-year floods and larger, which can cause serious harm to human lives and to property, as demonstrated by the extent of the 5-year flood of 2014. It is also revealed that there is a high probability that the values of property located within flood zones would decline, and the values of surrounding property in the safe area would increase, when risk information disclosure commences. However, in order to raise public awareness of flood disasters and to prevent future housing development in prospective high-risk areas, flood risk information should be disseminated through the establishment of an early warning system. In order to reduce the effect of risk information disclosure and to protect the values of property within the high-risk zone, we propose that property tax increments in flood-free zones be captured and utilized for infrastructure development and to maintain the early warning system that will benefit people living in flood-prone areas. Through this case study, it is shown that the combination of a value capture strategy and risk communication could be an effective tool to educate citizens and to invest in flood risk reduction in emerging countries.

Keywords: Cocody-Abidjan, flood, geospatial techniques, risk communication, value capture

Procedia PDF Downloads 273
55 Machine Learning Model to Predict Drug-Resistant TB Bacteria from TB Isolates

Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe

Abstract:

Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with the proper treatment. Drug-resistant TB occurs when the bacteria become resistant to the drugs that are used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based, and it takes a long time to identify the drug-resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science can offer new approaches to the problem. In this study, we propose to develop an ML-based model to predict the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study used whole-genome sequences (WGS) of TB isolates as training data, extracted from the NCBI repository and containing samples from different countries. Samples from different countries were included in order to generalize across the large group of TB isolates from different regions of the world; this exposes the models to different behaviors of the TB bacteria and makes them robust. The model training considered three types of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and resistance-associated gene information for the particular drug only. Two major datasets were constructed using this information: F1 and F2 were treated as two independent datasets, and the third type of information was used as the class to label both datasets. Five machine learning algorithms were considered to train the models: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost. The models were trained on the datasets F1, F2, and F1F2, that is, the F1 and F2 datasets merged. Additionally, an ensemble approach was used: the F1 and F2 datasets were run through the gradient boosting algorithm, and the outputs were used as one dataset, called the F1F2 ensemble dataset, on which models were trained with the five algorithms. As the experiments show, the ensemble-approach model combining the gradient boosting outputs with a random forest outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + gradient boosting model, to predict the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
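
A compact sketch of the ensemble idea described above: gradient boosting is run on two feature sets (F1, F2) separately, and its probability outputs form a new dataset on which a random forest is trained. The data are synthetic stand-ins for WGS-derived features, and for simplicity the stacker is trained on in-sample predictions (a real pipeline would use out-of-fold predictions).

```python
# F1/F2 -> gradient boosting -> "F1F2 ensemble dataset" -> random forest.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=60, n_informative=12,
                           random_state=0)
F1, F2 = X[:, :40], X[:, 40:]  # stand-ins for the two variant feature sets

F1_tr, F1_te, F2_tr, F2_te, y_tr, y_te = train_test_split(F1, F2, y, random_state=0)

gb1 = GradientBoostingClassifier(random_state=0).fit(F1_tr, y_tr)
gb2 = GradientBoostingClassifier(random_state=0).fit(F2_tr, y_tr)

# The ensemble dataset: the two boosted models' probability outputs.
Z_tr = np.column_stack([gb1.predict_proba(F1_tr)[:, 1], gb2.predict_proba(F2_tr)[:, 1]])
Z_te = np.column_stack([gb1.predict_proba(F1_te)[:, 1], gb2.predict_proba(F2_te)[:, 1]])

rf = RandomForestClassifier(random_state=0).fit(Z_tr, y_tr)
print(f"RF + gradient boosting accuracy: {rf.score(Z_te, y_te):.3f}")
```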

Keywords: machine learning, MTB, WGS, drug resistant TB

Procedia PDF Downloads 51
54 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity

Authors: Justus Enninga

Abstract:

Enthusiasts and skeptics of economic growth have little in common in their preferred institutional arrangements for resolving ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School's concept of polycentricity. Growth-enthusiasts, referred to here as Simons after the economist Julian Simon, and growth-skeptics, referred to as Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce, and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth, and what the arguments for growth-enhancing or degrowth policies are on each side. Second, the paper presents the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, setting the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. The third and fourth parts show how the concept of polycentricity is of crucial importance for growth-enthusiasts and growth-skeptics alike. The shorter third part surveys the literature on growth-enhancing policies and argues that large parts of it already accept that polycentric forms of governance, such as markets, the rule of law, and federalism, are an important part of economic growth. Part four delves into the more nuanced question of why a stagnant steady-state economy, or even an economy that de-grows, will still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach requiring direct governmental control, a contrasting bottom-up approach is advanced here. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and provides an institutionalized discovery process for new solutions to the problem of ecological collective action, no matter whether one belongs to the Simons or the Ehrlichs in a green political economy.

Keywords: degrowth, green political theory, polycentricity, institutional robustness

Procedia PDF Downloads 183
53 Developing an Intervention Program to Promote Healthy Eating in a Catering System Based on Qualitative Research Results

Authors: O. Katz-Shufan, T. Simon-Tuval, L. Sabag, L. Granek, D. R. Shahar

Abstract:

Meals provided by catering systems are a common source of workers' nutrition and have been found to contribute high amounts of calories and fat. Eating catering food daily can thus lead to overweight and chronic diseases. On the other hand, the institutional dining room may be an ideal environment for implementing intervention programs that promote healthy eating, which could improve diners' lifestyles and reduce the prevalence of overweight, obesity, and chronic diseases. The significance of this study lies in developing an intervention program based on the diners' dietary habits and preferences and their attitudes towards various intervention programs. In addition, a successful catering-based intervention program may affect a large group of diners simultaneously, leading to improved nutrition, a healthier lifestyle, and disease prevention on a large scale. To develop the intervention program, we conducted a qualitative study, interviewing 13 diners who eat regularly at catering systems using a semi-structured interview. The interviews were recorded, transcribed, and then analyzed using the thematic method, which identifies, analyzes, and reports themes within the data. The interviews revealed several major themes: diners expect to be provided with healthy food choices; they request the involvement of nutrition experts in planning the meals; and they feel there is a conflict between the sensory attractiveness of the food and its nutritional quality. Regarding catering-based intervention programs, the diners prefer scientific and clear messages that label only the healthy dishes, as opposed to labeling the unhealthy ones, and they were interested in a nutrition education program to accompany the intervention. Based on these findings, we developed an intervention program that includes changes in the food served, such as replacing several menu items and nutritionally improving some of the recipes, as well as environmental changes, such as relocating some food items presented on the buffet, placing positive nutritional labels on healthy dishes, and running an ongoing healthy nutrition campaign, all accompanied by a nutrition education program. The intervention program is currently being tested for its impact on health outcomes and its cost-effectiveness.

Keywords: catering system, food services, intervention, nutrition policy, public health, qualitative research

Procedia PDF Downloads 194
52 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Its Degradation Products

Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet

Abstract:

All spent nuclear fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and a hydrocarbon diluent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution) are separated from fission products and minor actinides. During normal extraction operation, uranium is extracted into the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture called red oil when it comes into contact with nitric acid. This unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives, and heavy metal nitrate complexes on the other. The decomposition of red oil can lead to violent explosive thermal runaway. These hazards are at the origin of several accidents, such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions involving TBP and all other degradation products, and calls for a better knowledge of the underlying chemical phenomena. A simulation tool (Alambic) currently being developed at IRSN integrates thermal and kinetic functions related to the deterioration of uranyl nitrates in organic and aqueous phases, but not yet of the n-butyl phosphates. To include them in the modeling scheme, there is an urgent need for the thermodynamic and kinetic functions governing the deterioration processes in the liquid phase. However, little is known about the thermodynamic properties, such as standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂ complexes. In this work, we propose to estimate these thermodynamic properties with quantum chemical methods (QM). Thus, in the first part of our project, we focused on the mono-, di-, and tri-butyl phosphates. Quantum chemical calculations have been performed to study several reactions leading to the formation of mono-(H₂MBP) and di-(HDBP) butyl phosphates and TBP in the gas and liquid phases. In the gas phase, the structures of all species were optimized with the B3LYP density functional using triple-ζ def2-TZVP basis sets for all atoms, and the corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 MPa. Accurate single-point energies were calculated with the efficient local coupled-cluster method LCCSD(T), extrapolated to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP, and H₂MBP are finally predicted with an uncertainty of about 15 kJ mol⁻¹; the Hess's-law bookkeeping behind such estimates is sketched below. In the second part of the project, we investigated the fundamental properties of the three organic species that contribute most to the thermal runaway: UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂, using the same quantum chemical methods as for TBP and its derivatives in both the gas and liquid phases. We will discuss the structures and thermodynamic properties of all these species.
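
As a minimal illustration of how a computed reaction enthalpy is converted into a standard formation enthalpy, the Hess's-law bookkeeping can be written as follows. The reaction, the species names, and all numerical values below are hypothetical placeholders, not results from this work.

# Hypothetical reaction A + B -> C + D, with the reaction enthalpy obtained
# from quantum chemistry (e.g., LCCSD(T)/CBS energies plus thermal corrections).
delta_H_rxn = -42.0  # kJ/mol, placeholder value

# Known standard formation enthalpies of the reference species (placeholders).
dHf = {"A": -250.0, "B": -120.0, "D": -300.0}  # kJ/mol

# delta_H_rxn = dHf(C) + dHf(D) - dHf(A) - dHf(B), solved for the unknown dHf(C):
dHf_C = delta_H_rxn - dHf["D"] + dHf["A"] + dHf["B"]
print(f"Estimated standard formation enthalpy of C: {dHf_C:.1f} kJ/mol")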

Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis

Procedia PDF Downloads 188
51 Defining a Framework for Holistic Life Cycle Assessment of Building Components by Considering Parameters Such as Circularity, Material Health, Biodiversity, Pollution Control, Cost, Social Impacts, and Uncertainty

Authors: Naomi Grigoryan, Alexandros Loutsioli Daskalakis, Anna Elisse Uy, Yihe Huang, Aude Laurent (Webanck)

Abstract:

In response to the building and construction sectors accounting for a third of all energy demand and emissions, the European Union has introduced new laws and regulations in the construction sector that emphasize material circularity, energy efficiency, biodiversity, and social impact. Existing design tools assess sustainability in early-stage design for products or buildings; however, there is no standardized methodology for measuring the circularity performance of building components. Existing assessment methods for building components focus primarily on carbon footprint but lack the comprehensive analysis required to design for circularity. The research in this paper covers the parameters needed to assess sustainability in the design process of architectural products such as doors, windows, and facades, and maps a framework for a tool that assists designers with real-time sustainability metrics. Assessing the life cycle of building components such as façades, windows, and doors involves both the life cycle stages applied to product design and many of the methods used in the life cycle analysis of buildings. The current industry standards of sustainability assessment for metal building components follow cradle-to-grave life cycle assessment (LCA), track Global Warming Potential (GWP), and document the parameters used for an Environmental Product Declaration (EPD). Developed by the Ellen MacArthur Foundation, the Material Circularity Indicator (MCI) is a methodology that uses data from LCA and EPDs to rate circularity with a value between 0 and 1, where higher values indicate higher circularity (a simplified MCI calculation is sketched below). By expanding the MCI with additional indicators such as a Water Circularity Index (WCI), an Energy Circularity Index (ECI), a Social Circularity Index (SCI), and Life Cycle Economic Value (EV), and by calculating biodiversity risk and uncertainty, the assessment of an architectural product's impact can be targeted more specifically to product requirements, performance, and lifespan. Broadening the scope of product LCA calculations to incorporate aspects of building design allows product designers to account for the disassembly of architectural components. For example, the MCI for architectural products such as windows and facades is typically low due to the impact of glass, as 70% of glass ends up in landfills owing to damage in the disassembly process. The low MCI can be combated by expanding beyond cradle-to-grave assessment and focusing the design process on disassembly, recycling, and repurposing with the help of real-time assessment tools. Design for Disassembly and Urban Mining have been integrated within the construction field on small scales as project-based exercises that do not address the entire supply chain of architectural products. By adopting more comprehensive sustainability metrics and incorporating uncertainty calculations, building components can be assessed more accurately with decarbonization and disassembly in mind, addressing the large-scale commercial markets within construction, some of the most significant contributors to climate change.
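
As an illustration, a simplified version of the MCI calculation can be sketched in Python as follows. This reduced form ignores the recycling-process waste terms of the full Ellen MacArthur methodology, and all input values are illustrative placeholders.

def mci(mass, frac_recycled_input, frac_recycled_at_eol, utility):
    """Simplified Material Circularity Indicator for one product.
    mass in kg; fractions in [0, 1]; utility X = (L/Lav) * (U/Uav)."""
    virgin = mass * (1.0 - frac_recycled_input)   # V: virgin feedstock
    waste = mass * (1.0 - frac_recycled_at_eol)   # W: unrecoverable waste
    lfi = (virgin + waste) / (2.0 * mass)         # linear flow index
    f_x = 0.9 / utility                           # utility factor F(X)
    return max(0.0, 1.0 - lfi * f_x)              # MCI, bounded to [0, 1]

# Example: a window with 30% recycled content, 70% of its material lost at end
# of life (cf. the glass figure above), and an average service life (X = 1).
print(f"MCI = {mci(25.0, 0.3, 0.3, 1.0):.2f}")  # -> 0.37, a fairly linear product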

Keywords: architectural products, early-stage design, life cycle assessment, material circularity indicator

Procedia PDF Downloads 88
50 Towards Creative Movie Title Generation Using Deep Neural Models

Authors: Simon Espigolé, Igor Shalyminov, Helen Hastie

Abstract:

Deep machine learning techniques, including deep neural networks (DNNs), have been used to model language and dialogue for conversational agents performing tasks such as giving technical support, as well as for general chit-chat. They have been shown to be capable of generating long, diverse, and coherent sentences in end-to-end dialogue systems and natural language generation. However, these systems tend to imitate their training data and will only generate concepts and language within the scope of what they have been trained on. This work explores how deep neural networks can be used in a task that would normally require human creativity, whereby a human would read the movie description and/or watch the movie and come up with a compelling, interesting movie title. This task differs from simple summarization in that the movie title may not necessarily be derivable from the content or semantics of the movie description. Here, we train a type of DNN called a sequence-to-sequence model (seq2seq) that takes as input a short textual movie description and some additional information, such as the genre of the movie, and learns to output a movie title (a minimal sketch of such a model is given below). The idea is that the DNN will learn techniques and approaches that a human movie titler may deploy but that may not be immediately obvious to the human eye. To give an example of a generated movie title, for the movie synopsis ‘A hitman concludes his legacy with one more job, only to discover he may be the one getting hit.’, the original, true title is ‘The Driver’, and the one generated by the model is ‘The Masquerade’. A human evaluation was conducted in which the DNN output was compared to the true human-generated title, as well as a number of baselines, on three 5-point Likert scales: ‘creativity’, ‘naturalness’, and ‘suitability’. Subjects were also asked which of the two systems they preferred. The scores of the DNN model were comparable to those of the human-generated movie titles, with means m=3.11 and m=3.12, respectively. There is room for improvement in these models, as they were rated significantly less ‘natural’ and ‘suitable’ than the human titles. In addition, the human-generated title was preferred overall 58% of the time when pitted against the DNN model. These results are nonetheless encouraging given the comparison with carefully considered, well-crafted human-generated movie titles. Movie titles go through a rigorous process of assessment by experts and focus groups who have watched the movie; this process is in place due to the large amount of money at stake and the importance of creating an effective title that captures the audience's attention. Our work shows progress towards automating this process, which in turn may lead to a better understanding of creativity itself.
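
A minimal sketch of such an encoder-decoder model in PyTorch follows. Vocabulary size, dimensions, and data are toy placeholders, and this is a generic seq2seq skeleton under those assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

VOCAB, EMB, HID = 1000, 64, 128  # placeholder sizes

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src, tgt):
        _, h = self.encoder(self.emb(src))       # encode the description
        dec, _ = self.decoder(self.emb(tgt), h)  # teacher-forced decoding
        return self.out(dec)                     # logits over title tokens

model = Seq2Seq()
src = torch.randint(0, VOCAB, (8, 50))  # batch of tokenized descriptions (plus genre tokens)
tgt = torch.randint(0, VOCAB, (8, 6))   # batch of tokenized titles
logits = model(src, tgt)
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), tgt.reshape(-1))
loss.backward()  # gradients for one training step; an optimizer step would follow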

Keywords: creativity, deep machine learning, natural language generation, movies

Procedia PDF Downloads 326
49 Heat Transfer Phenomena Identification of a Non-Active Floor in a Stack-Ventilated Building in Summertime: Empirical Study

Authors: Miguel Chen Austin, Denis Bruneau, Alain Sempey, Laurent Mora, Alain Sommier

Abstract:

An experimental study in a Plus Energy House (PEH) prototype was conducted in August 2016. It aimed to highlight the energy charge and discharge of a concrete-slab floor subjected to day-night-cycle heat exchanges in the southwestern part of France, and to identify the heat transfer phenomena that take place in both processes. The main features of this PEH relevant to this study are the following: (i) a non-active slab covering the major part of the floor surface of the house, which includes a 68 mm thick concrete layer as its upper layer; (ii) solar window shades located on the north and south facades, along with a large eave facing south; (iii) large double-glazed windows covering the majority of the south facade; (iv) a natural ventilation system (NVS) composed of ten automated openings of different dimensions: four on the south facade, four on the north facade, and two on the north-oriented shed roof. To highlight the energy charge and discharge processes of the non-active slab, heat flux and temperature measurement techniques were implemented, along with airspeed measurements. Ten “measurement poles” (MP) were distributed over the concrete-floor surface; each MP represented a measurement zone where air and surface temperatures and convection and radiation heat fluxes were measured. The airspeed was measured at only two points over the slab surface, near the south facade. To identify the heat transfer phenomena taking part in the charge and discharge processes, relevant dimensionless parameters were used, along with statistical analysis; heat transfer phenomena were identified on the basis of this analysis (the dimensionless-number bookkeeping is sketched below). The processed experimental data showed that two periods could be identified at a glance: charge (heat gain, positive values) and discharge (heat losses, negative values). During the charge period, radiation heat exchanges at the floor surface were significantly higher than convection; conversely, convection heat exchanges were significantly higher than radiation during the discharge period. Spatially, both convection and radiation heat exchanges are higher near the natural ventilation openings and smaller far from them, as expected. Experimental correlations were determined using a linear regression model, relating the Nusselt number to relevant parameters: the Peclet, Rayleigh, and Richardson numbers. This led to the determination of the convective heat transfer coefficient and its comparison with the coefficient obtained directly from measurements. The results showed that forced and natural convection coexist during the discharge period; more accurate correlations were found with the Peclet number than with the Rayleigh number. This may suggest that forced convection is stronger than natural convection; yet the airspeed levels encountered suggest that natural convection should dominate, while the Richardson number values encountered indicate otherwise. During the charge period, air-velocity levels might indicate that no air motion occurs, which might lead to heat transfer by diffusion instead of convection.
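
The dimensionless-number bookkeeping behind such correlations can be sketched as follows. Property values, measurements, and the characteristic length are illustrative placeholders; the power-law fit in log space mirrors the linear-regression approach described above, not the study's actual coefficients.

import numpy as np

# Approximate air properties near room temperature.
nu_air = 1.6e-5   # kinematic viscosity, m^2/s
alpha = 2.2e-5    # thermal diffusivity, m^2/s
k_air = 0.026     # thermal conductivity, W/(m K)
g, beta = 9.81, 1.0 / 300.0  # gravity; thermal expansion coefficient (1/T)
L = 1.0           # characteristic length of the slab, m (assumed)

def rayleigh(dT):      return g * beta * dT * L**3 / (nu_air * alpha)
def peclet(u):         return u * L / alpha
def richardson(dT, u): return g * beta * dT * L / u**2  # Gr/Re^2

# Hypothetical measured points: airspeed u (m/s), surface-air dT (K), Nusselt Nu.
u  = np.array([0.05, 0.10, 0.20, 0.35])
dT = np.array([2.0, 3.0, 4.0, 5.0])
Nu = np.array([15.0, 22.0, 33.0, 47.0])

# Linear regression of log(Nu) on log(Pe) gives a power-law correlation Nu = a * Pe^b.
b, log_a = np.polyfit(np.log(peclet(u)), np.log(Nu), 1)
h = Nu * k_air / L  # convective heat transfer coefficient, W/(m^2 K)
print(f"Nu = {np.exp(log_a):.2f} * Pe^{b:.2f}; h = {h.round(1)} W/(m^2 K)")
print("Ri:", richardson(dT, u).round(2))  # Ri >> 1 suggests natural convection dominates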

Keywords: heat flux measurement, natural ventilation, non-active concrete slab, plus energy house

Procedia PDF Downloads 416
48 Evaluation of Dry Matter Yield of Panicum maximum Intercropped with Pigeonpea and Sesbania sesban

Authors: Misheck Musokwa, Paramu Mafongoya, Simon Lorentz

Abstract:

Seasonal shortages of fodder during the dry season are a major constraint for smallholder livestock farmers in South Africa. To mitigate the shortage of fodder, legume trees can be intercropped with pastures, which can diversify the sources of feed and increase the amount of protein available to grazing animals. The objective was to evaluate the dry matter yield of Panicum maximum and land productivity under different fodder production systems during the 2016/17-2017/18 seasons at Empangeni (28.6391° S, 31.9400° E). A randomized complete block design, replicated three times, was used; the treatments were sole P. maximum, P. maximum + Sesbania sesban, P. maximum + pigeonpea, sole S. sesban, and sole pigeonpea. Three-month-old S. sesban seedlings were transplanted, whilst pigeonpea was direct-seeded, both at a spacing of 1 m x 1 m. P. maximum seed was drilled at a rate of 7.5 kg/ha with an inter-row spacing of 0.25 m, including between the tree rows. Harvests of dry matter yield were six months apart. A 0.25 m² quadrat randomly placed at three points per plot was used as the sampling area when harvesting P. maximum. There were significant differences (p < 0.05) across the three harvests and in total dry matter. P. maximum had a higher dry matter yield than both intercrops at the first harvest and in total, while the second and third harvests did not differ significantly from the pigeonpea intercrop. The results for the three harvests were in this order: P. maximum (541.2c, 1209.3b, and 1557b kg ha⁻¹) ≥ P. maximum + pigeonpea (157.2b, 926.7b, and 1129b kg ha⁻¹) > P. maximum + S. sesban (36.3a, 282a, and 555a kg ha⁻¹). Total accumulated dry matter yield: P. maximum (3307c kg ha⁻¹) > P. maximum + pigeonpea (2212 kg ha⁻¹) ≥ P. maximum + S. sesban (874 kg ha⁻¹). There was a significant difference (p < 0.05) in tree seed yield: pigeonpea (1240.3 kg ha⁻¹) ≥ pigeonpea + P. maximum (862.7 kg ha⁻¹) > S. sesban (391.9 kg ha⁻¹) ≥ S. sesban + P. maximum. The Land Equivalent Ratio (LER) was in the following order: P. maximum + pigeonpea (1.37) > P. maximum + S. sesban (0.84) > pigeonpea (0.59) ≥ S. sesban (0.57) > P. maximum (0.26); the LER arithmetic is sketched below. The results indicate that it is beneficial to intercrop P. maximum with pigeonpea because of the higher land productivity. Planting the grass with pigeonpea was more beneficial than S. sesban with grass or sole cropping in terms of easing the shortage of arable land: an LER of 1.37 means that sole crops would require 37% more land to match the intercrop's output, land which can subsequently be used for other crop production. Pigeonpea is therefore recommended as an intercrop with P. maximum due to its higher LER and the combined production of livestock feed, human food, and firewood. Panicum grass is low in crude protein though high in carbohydrates, so there is a need to intercrop it with legume trees; a farmer who buys concentrates can reduce costs by combining P. maximum with pigeonpea, as this provides a balanced diet at low cost.
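
The LER arithmetic behind these figures is a simple ratio sum, sketched below using the pigeonpea intercrop values reported above (total grass dry matter and tree seed yields).

grass_inter = 2212.0  # P. maximum total DM yield intercropped with pigeonpea, kg/ha
grass_sole = 3307.0   # sole P. maximum total DM yield, kg/ha
pp_inter = 862.7      # pigeonpea seed yield when intercropped, kg/ha
pp_sole = 1240.3      # sole pigeonpea seed yield, kg/ha

# LER = sum over component crops of (intercrop yield / sole-crop yield)
ler = grass_inter / grass_sole + pp_inter / pp_sole
print(f"LER = {ler:.2f}")  # ~1.36, close to the 1.37 reported; LER > 1 means the
                           # intercrop is more land-productive than the sole crops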

Keywords: fodder, livestock, productivity, smallholder farmers

Procedia PDF Downloads 149
47 Velma-ARC’s Rehabilitation of Repentant Cybercriminals in Nigeria

Authors: Umukoro Omonigho Simon, Ashaolu David ‘Diya, Aroyewun-Olaleye Temitope Folashade

Abstract:

The VELMA Action to Reduce Cybercrime (ARC) is an initiative, the first of its kind in Nigeria, designed to identify, rehabilitate, and empower repentant cybercrime offenders, popularly known as ‘yahoo boys’ in Nigerian parlance. Velma ARC provides social inclusion boot camps with the goal of rehabilitating cybercriminals via psychotherapeutic interventions, improving their IT skills, and empowering them to make constructive contributions to society. This report highlights the psychological interventions provided for participants of the maiden edition of the Velma ARC boot camp and presents the outcomes of these interventions. The boot camp was set up on hotel premises booked solely for the one-month event. The participants were selected and invited via the Velma online recruitment portal, based on an objective double-blind selection process, from a pool of potential participants who had signified interest via the registration portal. The participants first underwent psychological profiling (personality, symptomology, and psychopathology) before the individual and group sessions began. They were profiled using the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF), the latest version in its series. Individual psychotherapy sessions were conducted for all participants based on the interpretation of their profiles. A focus group discussion was later held on the movie ‘Catch Me If You Can’, directed by Steven Spielberg and featuring Leonardo DiCaprio and Tom Hanks. The movie is based on the true life story of Frank Abagnale, a notorious scammer and con artist in his youth. Emergent themes from the movie were discussed as psycho-educative material for the participants. The overall evaluation of outcomes from the Velma ARC rehabilitation boot camp stemmed from a disaggregated assessment of observed changes, summarized in the clinical psychologist's final report, which was detailed enough to infer genuine repentance and a positive change in attitude towards cybercrime among the participants. Follow-up services were incorporated to validate the initial observations. This gives credence to the potency of the psycho-educative intervention provided during the Velma ARC boot camp. It is recommended that support and collaboration from the government and other agencies and individuals assist the VELMA foundation in expanding the scope and quality of the Velma ARC initiative as an additional requirement for cybercrime offenders following incarceration.

Keywords: Velma-ARC, cybercrime offenders, rehabilitation, Nigeria

Procedia PDF Downloads 153
46 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly

Authors: Alex Eldo Simon, Abhishek Yadav

Abstract:

This study aimed to evaluate the utility of postmortem computed tomography (PMCT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed PMCT data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The cardiothoracic ratio (CTR) was measured from coronal CT images, and the actual cardiac weight was determined by weighing the heart during autopsy. The inclusion criteria were cases of sudden death suspected to be of cardiac origin; exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and decomposed bodies. Sensitivity, specificity, and diagnostic accuracy were calculated, and receiver operating characteristic (ROC) curves were generated to evaluate the accuracy of the CTR in detecting an enlarged heart. The CTR is a radiological tool for assessing cardiomegaly that relates the maximum cardiac diameter to the maximum transverse diameter of the chest wall. The clinical CTR threshold of 0.50 has been modified to 0.57 for postmortem settings, where abnormalities can be detected by comparing CTR values to this threshold; a CTR of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR greater than 0.50 and up to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was 0.52 ± 0.06, and the mean heart weight was 369.4 ± 99.9 grams. Twelve cases were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. The sensitivity of the hypertrophy test was 55.56% (95% CI: 26.66, 81.12), the specificity was 84.44% (95% CI: 71.22, 92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1, 88.23); the underlying 2x2 table is reconstructed below. The limitation of the study was its small sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the CTR with heart weight in this study suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, the low sensitivity of the test (55.56%) may limit its diagnostic accuracy, and further studies with larger sample sizes and more diverse populations are needed to validate these findings.
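
The reported sensitivity, specificity, and accuracy are consistent with the 2x2 table reconstructed below (counts inferred from the percentages in the abstract; autopsy heart weight > 450 g is the reference standard, CTR > 0.57 the index test).

TP, FN = 5, 4   # of the 9 cases with hypertrophy at autopsy, 5 were CTR-positive
FP, TN = 7, 38  # of the 45 cases without hypertrophy, 7 were falsely CTR-positive

sensitivity = TP / (TP + FN)                 # 5/9   = 55.56%
specificity = TN / (TN + FP)                 # 38/45 = 84.44%
accuracy = (TP + TN) / (TP + TN + FP + FN)   # 43/54 = 79.63%
print(f"Sensitivity {sensitivity:.2%}, specificity {specificity:.2%}, accuracy {accuracy:.2%}")
# Note: TP + FP = 12 matches the 12 PMCT-defined hypertrophy cases above.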

Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio

Procedia PDF Downloads 81