Search results for: Simon Schmidt
76 Risk Factors for Severe Typhoid Fever in Children: A French Retrospective Study about 78 Cases from 2000-2017 in Six Parisian Hospitals
Authors: Jonathan Soliman, Thomas Cavasino, Virginie Pommelet, Lahouari Amor, Pierre Mornand, Simon Escoda, Nina Droz, Soraya Matczak, Julie Toubiana, François Angoulvant, Etienne Carbonnelle, Albert Faye, Loic de Pontual, Luu-Ly Pham
Abstract:
Background: Typhoid and paratyphoid fever are systemic infections caused by Salmonella enterica serovar Typhi or Paratyphi (A, B, C). Children traveling to tropical areas are at risk of contracting these diseases, which can be complicated. Methods: Clinical, biological, and bacteriological data were collected from 78 pediatric cases reported between 2000 and 2017 in six Parisian hospitals. Children aged 0 to 18 years with a diagnosis of typhoid or paratyphoid fever confirmed by bacteriological exams were included. Epidemiologic, clinical, and biological features and the presence of multidrug-resistant (MDR) bacteria or intermediate susceptibility to ciprofloxacin (nalidixic acid resistance) were examined by univariate analysis and by logistic regression analysis to identify risk factors for severe typhoid in children. Results: 84.6% of the children were imported cases of typhoid fever (n=66/78) and 15.4% were autochthonous cases (n=12/78). 89.7% were caused by S. Typhi (n=70/78) and 12.8% by S. Paratyphi (n=10/78), including 2 co-infections. 19.2% were intrafamilial cases (n=15/78). Median age at diagnosis was 6.4 years [6 months-17.9 years]. 28.2% of the cases were complicated forms (n=22/78): digestive (n=8; 10.3%), neurological (n=7; 9%), pulmonary complications (n=4; 5.1%), and hemophagocytic syndrome (n=4; 5.1%). Only 5% of the children had prior immunization with a non-conjugated typhoid vaccine (n=4/78). 28% of the cases (n=22/78) were caused by resistant bacteria. Thrombocytopenia and diagnostic delay were significantly associated with severe infection (p=0.029 and p=0.01). Complicated forms were more common with MDR bacteria (p=0.1) and were not statistically associated with young age or sex in this study. Conclusions: Typhoid and paratyphoid fever are not rare in children returning from tropical areas.
This multicentric pediatric study suggests that thrombocytopenia, diagnostic delay, and multidrug-resistant bacteria are associated with severe typhoid fever and complicated forms in children.
Keywords: antimicrobial resistance, children, Salmonella enterica Typhi and Paratyphi, severe typhoid
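The univariate screening step behind such a risk-factor analysis can be sketched as a 2x2-table odds ratio with a Woolf confidence interval; the counts below are hypothetical and are not the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and 95% CI (Woolf's method) from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: thrombocytopenia among severe vs. non-severe cases
or_value, ci = odds_ratio(12, 10, 10, 46)  # OR = (12*46)/(10*10) = 5.52
```

A confidence interval excluding 1 would mark the exposure as a candidate for the multivariable logistic regression step.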
Procedia PDF Downloads 181
75 Investigation of the Mechanical and Thermal Properties of a Silver Oxalate Nanoporous Structured Sintered Joint for Micro-joining in Relation to the Sintering Process Parameters
Authors: L. Vivet, L. Benabou, O. Simon
Abstract:
With highly demanding applications in the field of power electronics, there is an increasing need for interconnection materials with properties that can ensure both good mechanical assembly and high thermal/electrical conductivity. So far, lead-free solders have been considered an attractive solution, but recently, sintered joints based on nano-silver paste have been used for die attach and have proved to be a promising solution offering increased performance in high-temperature applications. In this work, the main parameters of the bonding process using silver oxalates are studied, i.e., mainly the heating rate and the bonding pressure. Their effects on both the mechanical and thermal properties of the sintered layer are evaluated following an experimental design. Pairs of copper substrates with gold metallization are assembled through the sintering process to produce the samples, which are tested using a micro-traction machine. In addition, the obtained joints are examined through microscopy to identify the important microstructural features in relation to the measured properties. The formation of an intermetallic compound at the junction between the sintered silver layer and the gold metallization deposited on copper is also analyzed. Microscopy analysis reveals a nanoporous structure of the sintered material. It is found that higher temperature and bonding pressure result in higher densification of the sintered material, with higher thermal conductivity of the joint but less mechanical flexibility to accommodate the thermo-mechanical stresses arising during service. The experimental design hence allows the determination of the optimal process parameters to reach sufficient thermal/mechanical properties for a given application.
It is also found that the interphase formed between silver and the gold metallization is the location where fracture occurred after mechanical testing, suggesting that the inter-diffusion mechanism between the different elements of the assembly leads to the formation of a relatively brittle compound.
Keywords: nanoporous structure, silver oxalate, sintering, mechanical strength, thermal conductivity, microelectronic packaging
Procedia PDF Downloads 93
74 An Unexpected Helping Hand: Consequences of Redistribution on Personal Ideology
Authors: Simon B.A. Egli, Katja Rost
Abstract:
Literature on redistributive preferences has proliferated in past decades. A core assumption behind it is that variation in redistributive preferences can explain different levels of redistribution. In contrast, this paper considers the reverse: what if it is redistribution that changes redistributive preferences? The core assumption behind the argument is that if self-interest - which we label concrete preferences - and ideology - which we label abstract preferences - come into conflict, the former will prevail and lead to an adjustment of the latter. To test the hypothesis, data from a survey conducted in Switzerland during the first wave of the COVID-19 crisis are used. A significant portion of the workforce at the time unexpectedly received state money through the short-time working program. Short-time work was used as a proxy for self-interest and was tested (1) on the support given to hypothetical, ailing firms during the crisis and (2) on the prioritization of justice principles guiding state action. In a first step, several models using OLS regressions on political orientation were estimated to test our hypothesis as well as to check for non-linear effects. We expected support for ailing firms to be the same regardless of ideology, but only for people on short-time work. The results both confirm our hypothesis and suggest a non-linear effect: far-right individuals on short-time work were disproportionately supportive compared to moderate ones. In a second step, ordered logit models were estimated to test the impact of short-time work and political orientation on the rankings of the distributive justice principles need, performance, entitlement, and equality. The results show that being on short-time work significantly alters the prioritization of justice principles. Right-wing individuals are much more likely to prioritize need and equality over performance and entitlement when they receive government assistance.
No such effect is found among left-wing individuals. In conclusion, we provide moderate to strong evidence that unexpectedly finding oneself at the receiving end changes redistributive preferences if personal ideology is antithetical to redistribution. The implications of our findings for the study of populism, personal ideologies, and political change are discussed.
Keywords: COVID-19, ideology, redistribution, redistributive preferences, self-interest
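The first modeling step rests on a univariate OLS fit, which can be written directly; the orientation coding and support scores below are invented for the sketch and are not the survey's data:

```python
def ols(x, y):
    """Least-squares fit y = a + b*x, the univariate core of the OLS models."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Invented data: political orientation (0 = far left ... 10 = far right) vs.
# support for ailing firms among respondents NOT on short-time work
orientation = [1, 2, 4, 5, 7, 9]
support = [8, 7, 6, 5, 3, 2]
a, b = ols(orientation, support)  # negative slope: support falls to the right
```

Under the paper's hypothesis, rerunning the fit on short-time workers would flatten this slope, since support is expected to be high regardless of ideology in that group.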
Procedia PDF Downloads 140
73 Modelling of Solidification in a Latent Thermal Energy Storage with a Finned Tube Bundle Heat Exchanger Unit
Authors: Remo Waser, Simon Maranda, Anastasia Stamatiou, Ludger J. Fischer, Joerg Worlitschek
Abstract:
In latent heat storage, a phase change material (PCM) is used to store thermal energy. The heat transfer rate during solidification is limited and considered a key challenge in the development of latent heat storages. Thus, finned heat exchangers (HEX) are often utilized to increase the heat transfer rate of the storage system. In this study, a new modeling approach to calculating the heat transfer rate in latent thermal energy storages with complex HEX geometries is presented. This model allows for an optimization of the HEX design in terms of costs and thermal performance of the system. Modeling solidification processes requires the calculation of time-dependent heat conduction with moving boundaries. Commonly used computational fluid dynamics (CFD) methods enable the analysis of heat transfer in complex HEX geometries. If applied to the entire storage, the drawback of this approach is the high computational effort due to the small time steps and fine computational grids required for accurate solutions. An alternative way to describe the process of solidification is the so-called temperature-based approach. In order to minimize the computational effort, a quasi-stationary assumption can be applied. This approach provides highly accurate predictions for tube heat exchangers; however, it shows unsatisfactory results for more complex geometries such as finned tube heat exchangers. The presented simulation model uses a temporal and spatial discretization of the heat exchanger tube. The spatial discretization is based on the smallest possible symmetric segment of the HEX. The heat flow in each segment is calculated using the finite volume method. Since the heat transfer fluid temperature can be derived using energy conservation equations, the boundary conditions at the inner tube wall are dynamically updated for each time step and segment.
The model allows a prediction of the thermal performance of latent thermal energy storage systems using complex HEX geometries with considerably low computational effort.
Keywords: modelling of solidification, finned tube heat exchanger, latent thermal energy storage
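A minimal version of such a finite-volume solidification calculation is the explicit 1-D enthalpy method below; the geometry, material values, and planar (rather than radial) mesh are illustrative assumptions, not the authors' model:

```python
# Explicit 1-D finite-volume enthalpy method: PCM layer cooled from one wall.
N = 20                  # control volumes across the PCM layer
dx = 0.001              # m, control volume width
k = 0.2                 # W/(m K), PCM thermal conductivity (assumed)
rho, c = 800.0, 2000.0  # kg/m^3, J/(kg K) (assumed)
L = 200e3               # J/kg, latent heat of fusion (assumed)
Tm, Twall = 25.0, 15.0  # degC, melting temperature and cooled wall temperature
dt = 0.05               # s, explicit time step (within the stability limit)

def temperature(h):
    """Temperature from specific enthalpy h (J/kg), zero = solid at Tm."""
    if h <= 0:           # fully solid: sensible cooling below Tm
        return Tm + h / c
    if h >= L:           # fully liquid
        return Tm + (h - L) / c
    return Tm            # inside the phase change: isothermal

h = [L] * N              # start as liquid exactly at the melting point
for step in range(20000):
    T = [temperature(hi) for hi in h]
    for i in range(N):
        Tl = Twall if i == 0 else T[i - 1]     # fixed-temperature wall (left)
        Tr = T[i] if i == N - 1 else T[i + 1]  # adiabatic outer boundary
        q = k * (Tl - 2 * T[i] + Tr) / dx**2   # net conduction, W/m^3
        h[i] += dt * q / rho
solid_fraction = sum(1 for hi in h if hi <= 0) / N
```

The moving solidification front appears naturally as the set of cells whose enthalpy has dropped through the latent-heat interval, without explicit front tracking.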
Procedia PDF Downloads 268
72 RAD-Seq Data Reveals Evidence of Local Adaptation between Upstream and Downstream Populations of Australian Glass Shrimp
Authors: Sharmeen Rahman, Daniel Schmidt, Jane Hughes
Abstract:
Paratya australiensis Kemp (Decapoda: Atyidae) is a widely distributed indigenous freshwater shrimp, highly abundant in eastern Australia. This species has been considered a model stream organism for studying genetics, dispersal, biology, behaviour, and evolution in atyids. Paratya has a filter-feeding and scavenging habit, which plays a significant role in the formation of lotic community structure. It has been shown to reduce periphyton and sediment on hard substrates of coastal streams and hence acts as a strongly interacting ecosystem macroconsumer. In addition, Paratya is one of the major food sources for stream-dwelling fishes. Paratya australiensis is a cryptic species complex consisting of 9 highly divergent mitochondrial DNA lineages. Among them, one lineage has been observed to favour upstream sites at higher altitudes, with cooler water temperatures. This study aims to identify local adaptation in upstream and downstream populations of this lineage in three streams in the Conondale Range, north-east of Brisbane, Queensland, Australia. Two populations (upstream and downstream) from each stream were chosen to test for local adaptation, and a parallel pattern of adaptation is expected across all streams. Six populations, each consisting of 24 individuals, were sequenced using the Restriction Site Associated DNA sequencing (RAD-seq) technique. Genetic markers (SNPs) were developed using double digest RAD sequencing (ddRAD-seq). These were used for de novo assembly of the Paratya genome. De novo assembly was done using the Stacks program and produced 56,344 loci for 47 individuals from one stream. Among these individuals, 39 shared 5819 loci, and these markers are being used to test for local adaptation between upstream and downstream populations using Fst outlier tests (Arlequin) and Bayesian analysis (BayeScan). The Fst outlier test detected 27 loci likely to be under selection, and the Bayesian analysis likewise detected 27 loci under selection.
Among these 27 loci, 3 showed significant evidence of selection in the BayeScan analysis. In contrast, the upstream and downstream populations are strongly diverged at neutral loci, with Fst = 0.37. Similar analyses will be done with all six populations to determine whether there is a parallel pattern of adaptation across all streams. Furthermore, multi-locus among-population covariance analysis will be done to identify potential markers under selection, as well as to compare single-locus versus multi-locus approaches for detecting local adaptation. Adaptive genes identified in this study can be used in future studies to design primers and test for adaptation in related crustacean species.
Keywords: Paratya australiensis, rainforest streams, selection, single nucleotide polymorphisms (SNPs)
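The per-locus Fst logic behind such outlier scans can be sketched directly from allele frequencies (Nei's Gst form); the SNP frequencies and the simple mean-based threshold below are invented illustrations, unlike the Arlequin and BayeScan tests actually used:

```python
def fst(p1, p2):
    """Per-locus Fst from allele frequencies in two populations (Nei's Gst form)."""
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)                      # pooled expected heterozygosity
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-population value
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

# Invented SNP allele frequencies for an upstream and a downstream population
loci = {"snp_1": (0.10, 0.90), "snp_2": (0.50, 0.55), "snp_3": (0.20, 0.25)}
per_locus = {name: fst(p1, p2) for name, (p1, p2) in loci.items()}
# flag loci whose Fst greatly exceeds the mean background as candidate outliers
background = sum(per_locus.values()) / len(per_locus)
outliers = [name for name, f in per_locus.items() if f > 2 * background]
```

Dedicated tools replace the crude threshold with a null distribution (coalescent simulation in Arlequin, posterior odds in BayeScan), but the per-locus statistic is the same idea.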
Procedia PDF Downloads 255
71 Characterizing Nasal Microbiota in COVID-19 Patients: Insights from Nanopore Technology and Comparative Analysis
Authors: David Pinzauti, Simon De Jaegher, Maria D'Aguano, Manuele Biazzo
Abstract:
The COVID-19 pandemic has left an indelible mark on global health, leading to a pressing need to understand the intricate interactions between the virus and the human microbiome. This study focuses on characterizing the nasal microbiota of patients affected by COVID-19, with a specific emphasis on the comparison with unaffected individuals, to shed light on the crucial role of the microbiome in the development of this viral disease. To achieve this objective, Nanopore technology was employed to analyze the full-length bacterial 16S rRNA gene present in nasal swabs collected in Malta between January 2021 and August 2022. A comprehensive dataset consisting of 268 samples (126 SARS-negative samples and 142 SARS-positive samples) was subjected to a comparative analysis using an in-house, custom pipeline. The findings from this study revealed that individuals affected by COVID-19 possess a nasal microbiota that is significantly less diverse, as evidenced by lower α diversity, and is characterized by distinct microbial communities compared to unaffected individuals. The beta diversity analyses were carried out at different taxonomic resolutions. At the phylum level, Bacteroidota was found to be more prevalent in SARS-negative samples, suggesting a potential decrease during the course of viral infection. At the species level, the identification of several specific biomarkers further underscores the critical role of the nasal microbiota in COVID-19 pathogenesis. Notably, species such as Finegoldia magna, Moraxella catarrhalis, and others exhibited higher relative abundance in SARS-positive samples, potentially serving as significant indicators of the disease. This study presents valuable insights into the relationship between COVID-19 and the nasal microbiota.
The identification of distinct microbial communities and potential biomarkers associated with the disease offers promising avenues for further research and therapeutic interventions aimed at enhancing public health outcomes in the context of COVID-19.
Keywords: COVID-19, nasal microbiota, nanopore technology, 16S rRNA gene, biomarkers
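The α-diversity comparison rests on indices such as Shannon's H'; a minimal sketch on invented genus-level read counts (the study's own pipeline and data differ):

```python
import math

def shannon(counts):
    """Shannon diversity H' from per-taxon read counts of one sample."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# Invented genus-level counts for one SARS-negative and one SARS-positive swab
negative = [120, 90, 60, 40, 30, 20, 10]  # more taxa, more evenly spread
positive = [400, 30, 10, 5]               # dominated by a single taxon
h_neg, h_pos = shannon(negative), shannon(positive)
```

A community dominated by one or two taxa, as sketched for the positive swab, yields a lower H', which is the pattern the study reports for COVID-19 patients.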
Procedia PDF Downloads 68
70 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables the securing of product quality through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of production conditions within certain time periods can be identified by applying the concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing.
In addition, the most suitable methods are selected, and accurate quality predictions are achieved.
Keywords: classification, machine learning, predictive quality, feature selection
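The feature-selection idea can be illustrated with a simple filter-style ranking by absolute correlation with the target; the valve features and leakage scores below are invented, and the study itself uses richer methods (concept drift detection and classifier-based feature importance) rather than this sketch:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented per-valve features: gauge-block size, mating force, test pressure
features = {
    "gauge_mm": [9.98, 10.02, 10.05, 9.97, 10.08, 10.01],
    "mating_N": [52, 48, 51, 49, 50, 53],
    "pressure": [1.0, 1.4, 1.8, 0.9, 2.1, 1.2],
}
leakage = [0.1, 0.5, 0.9, 0.0, 1.2, 0.3]  # invented leakage score (target)

# rank features by strength of association with leakage, keep the strongest
ranked = sorted(features, key=lambda f: abs(pearson(features[f], leakage)),
                reverse=True)
selected = ranked[:2]
```

The reduced feature set would then feed the boosting classifier; in scikit-learn terms this corresponds to fitting `AdaBoostClassifier` on the selected columns only.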
Procedia PDF Downloads 162
69 Multiscale Modeling of Damage in Textile Composites
Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese
Abstract:
Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in the aerospace, automotive, and maritime industries. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account in order for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered. Further, this approach can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the meso-scale approach to modeling textile composites explicitly considers the internal geometry of the reinforcing tows; thus, their interaction and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites.
For the meso-scale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used: one based on the finite element method, while the other makes use of an embedded semi-analytical approach. The goal is the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.
Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites
Procedia PDF Downloads 354
68 Health Psychology Intervention: Identifying Early Symptoms in Neurological Disorders
Authors: Simon B. N. Thompson
Abstract:
An early indicator of neurological disease has been proposed by the expanded Thompson Cortisol Hypothesis, which suggests that yawning is linked to rises in cortisol levels. Cortisol is essential to the regulation of the immune system, and pathological yawning is a symptom of multiple sclerosis (MS). Electromyography (EMG) activity in the jaw muscles typically rises when the muscles are moved - extended or flexed - and yawning has been shown to be highly correlated with cortisol levels in healthy people. It is likely that these elevated cortisol levels are also seen in people with MS. The possible link between EMG in the jaw muscles and rises in saliva cortisol levels during yawning was investigated in a randomized controlled trial of 60 volunteers aged 18-69 years who were exposed to conditions designed to elicit the yawning response. Saliva samples were collected at the start and after yawning, or at the end of the presentation of yawning-provoking stimuli in the absence of a yawn, and EMG data were additionally collected during rest and yawning phases. The Hospital Anxiety and Depression Scale, Yawning Susceptibility Scale, General Health Questionnaire, and demographic and health details were collected, and the following exclusion criteria were adopted: chronic fatigue, diabetes, fibromyalgia, heart condition, high blood pressure, hormone replacement therapy, multiple sclerosis, and stroke. Significant differences were found between the saliva cortisol samples for the yawners, t(23) = -4.263, p < 0.001, whereas the corresponding rest versus post-stimuli difference for the non-yawners was non-significant. There were also significant differences between yawners and non-yawners in the EMG potentials, with the yawners having higher rest and post-yawning potentials. Significant evidence was found to support the Thompson Cortisol Hypothesis, suggesting that rises in cortisol levels are associated with the yawning response.
Further research is underway to explore the use of cortisol as a potential diagnostic tool to assist the early diagnosis of symptoms related to neurological disorders. Bournemouth University Research & Ethics approval granted: JC28/1/13-KA6/9/13. Professional code of conduct, confidentiality, and safety issues have been addressed and approved in the ethics submission. Trials identification number: ISRCTN61942768. http://www.controlled-trials.com/isrctn/
Keywords: cortisol, electromyography, neurology, yawning
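The reported within-yawner comparison is a paired-samples t test; a minimal sketch on invented cortisol values (not the trial's data):

```python
import math

def paired_t(before, after):
    """Paired-samples t statistic and degrees of freedom for pre/post values."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# Invented saliva cortisol values (nmol/L) at rest and after a yawn
rest = [4.1, 3.8, 5.0, 4.4, 3.9, 4.6]
post = [5.2, 4.6, 5.9, 5.1, 4.8, 5.3]
t, df = paired_t(rest, post)  # positive t here: cortisol rose after yawning
```

With 24 yawners, as the reported t(23) implies, the same statistic would be compared against the t distribution with 23 degrees of freedom.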
Procedia PDF Downloads 590
67 A Factor-Analytical Approach on Identities in Environmentally Significant Behavior
Authors: Alina M. Udall, Judith de Groot, Simon de Jong, Avi Shankar
Abstract:
There are many ways in which environmentally significant behavior can be explained. Dominant psychological theories - namely, the theory of planned behavior, the norm-activation theory, its extension, the value-belief-norm theory, and the theory of habit - do not explain large parts of environmentally significant behavior. A new and rapidly growing approach is to focus on how consumers' identities predict environmentally significant behavior. Identity may be relevant because consumers have many identities that are assumed to guide their behavior; therefore, many identities can be expected to guide environmentally significant behavior. A review of the literature shows that over 200 identities have been studied, making it difficult to establish the key identities for explaining environmentally significant behavior. Therefore, this paper first aims to establish the key identities previously used for explaining environmentally significant behavior. Second, the aim is to test which key identities explain environmentally significant behavior. To address these aims, an online survey study (n = 578) was conducted. First, the exploratory factor analysis reveals 15 identity factors, namely: environmentally concerned identity, anti-environmental self-identity, environmental place identity, connectedness with nature identity, green space visitor identity, active ethical identity, carbon off-setter identity, thoughtful self-identity, close community identity, anti-carbon off-setter identity, environmental group member identity, national identity, identification with developed countries, cyclist identity, and thoughtful organisation identity. Furthermore, to help researchers understand and operationalize these identities, the article provides theoretical definitions for each of them, in line with identity theory, social identity theory, and place identity theory.
Second, the hierarchical regression shows that only 10 factors significantly and uniquely explain variance in environmentally significant behavior. In order of predictive power, these are: environmentally concerned identity, anti-environmental self-identity, thoughtful self-identity, environmental group member identity, anti-carbon off-setter identity, carbon off-setter identity, connectedness with nature identity, national identity, and green space visitor identity. Together, the identities explain over 60% of the variance in environmentally significant behavior, a large effect size. Based on this finding, the article presents a new theoretical framework of the key identities explaining environmentally significant behavior, to help improve and align the field.
Keywords: environmentally significant behavior, factor analysis, place identity, social identity
Procedia PDF Downloads 451
66 Quercetin Nanoparticles and Their Hypoglycemic Effect in a CD1 Mouse Model with Type 2 Diabetes Induced by Streptozotocin and a High-Fat and High-Sugar Diet
Authors: Adriana Garcia-Gurrola, Carlos Adrian Peña Natividad, Ana Laura Martinez Martinez, Alberto Abraham Escobar Puentes, Estefania Ochoa Ruiz, Aracely Serrano Medina, Abraham Wall Medrano, Simon Yobanny Reyes Lopez
Abstract:
Type 2 diabetes mellitus (T2DM) is a metabolic disease characterized by elevated blood glucose levels. Quercetin is a natural flavonoid with a hypoglycemic effect, but reported data are inconsistent, mainly due to the structural instability and low solubility of quercetin. Nanoencapsulation is a distinct strategy to overcome these intrinsic limitations. Therefore, this work aims to develop a quercetin nano-formulation based on biopolymeric starch nanoparticles to enhance the release and hypoglycemic effect of quercetin in a T2DM-induced mouse model. Starch-quercetin nanoparticles were synthesized using high-intensity ultrasonication, and structural and colloidal properties were determined by FTIR and DLS. For in vivo studies, CD1 male mice (n=25) were divided into five groups (n=5 each). T2DM was induced using a high-fat and high-sugar diet for 32 weeks and streptozotocin injection. Group 1 consisted of healthy mice fed a normal diet and water ad libitum; Group 2 consisted of diabetic mice treated with saline solution; Group 3 consisted of diabetic mice treated with glibenclamide; Group 4 consisted of diabetic mice treated with empty nanoparticles; and Group 5 consisted of diabetic mice treated with quercetin nanoparticles. Quercetin nanoparticles had a hydrodynamic size of 232 ± 88.45 nm, a PDI of 0.310 ± 0.04, and a zeta potential of -4 ± 0.85 mV. The encapsulation efficiency of the nanoparticles was 58 ± 3.33%. No significant differences (p > 0.05) were observed in biochemical parameters (lipids, insulin, and C-peptide). Groups 3 and 5 showed a similar hypoglycemic effect, but quercetin nanoparticles showed a longer-lasting effect. Histopathological studies revealed that the T2DM mouse groups showed degenerated and fatty liver tissue; however, the group treated with quercetin nanoparticles showed liver tissue similar to that of the healthy mouse group.
These results demonstrate that quercetin nano-formulations based on starch nanoparticles are effective alternatives with hypoglycemic effects.
Keywords: quercetin, type 2 diabetes mellitus, in vivo study, nanoparticles
Procedia PDF Downloads 33
65 Attitude to the Types of Organizational Change
Authors: O. Y. Yurieva, O. V. Yurieva, O. V. Kiselkina, A. V. Kamaseva
Abstract:
Since the early 2000s, there have been innovative changes in the civil service in Russia due to administrative reform. Perspectives of the reform of the civil service include a fundamental change in the personnel component: increasing the level of professionalism of officials and increasing their capacity for self-organization and self-regulation. In order to achieve this, the civil service must be able to change continuously. Organizational change has long been a subject of scientific study; research in the field covers the methodological aspects of implementing change, the specifics of change in different types of organizations (business, government, and so on), and the design of change in organizations, including change based on organizational culture. Organizational change in the civil service, however, remains among the least studied areas; research on the problems of its transformation has been fragmentary. According to Herbert Simon's theory of resistance, the root of opposition to and rejection of change lies in the person, who will resist any change if it threatens to undermine their degree of satisfaction as a member of the organization (regardless of the reasons for the change). Thus, the condition for successful adaptation to change in an organization is the ability of its staff to perceive innovation. As part of this problem, the study sought to identify the attitudes of civil servants toward innovation and to determine their readiness to develop proposals for the implementation of organizational change in the public service. To identify attitudes toward organizational change, a case study was carried out using I. Motovilina's "Attitudes to Organizational Change" method, which allowed predicting the type of resistance to change and revealing contradictions and hidden results. The advantages of I. Motovilina's method are its brevity, its simplicity, the analysis of responses to each question, and the use of "overlapping" questions on potentially conflicting factors. Based on the study, the authors found that respondents have a more positive attitude to local changes than to those that take place in reality, such as "increased opportunities for professional growth", "increased requirements for the level of professionalism", and "the emergence of possible manifestations of initiative from below". The authors' diagnostics of attitudes to organizational change in the public service showed the presence of specific problem areas rooted in a lack of understanding of the importance of personnel innovation amid the bureaucratization of innovation in public service organizations.
Keywords: innovative changes, self-organization, self-regulation, civil service
Procedia PDF Downloads 460
64 Analysis of the Brazilian Trade Balance in Relation to Mercosur: A Comparison between the Period 1989-1994 and 1994-2012
Authors: Luciana Aparecida Bastos, Tatiana Diair L. F. Rosa, Jesus Creapldi
Abstract:
The idea of Latin American integration arose from the ideals of Simón Bolívar, who in 1824 called the Ibero-American nations to the Amphictyonic Congress of Panama (June 22, 1826), where he defended the importance of Latin American unity. However, this congress was frustrating, and Bolívar's idea went no further. It was only after European integration began, driven by the end of World War II, that the subject re-emerged in Latin America. Thus, in 1960, supported by the European integration process - started in 1957 with the excellent result of the ECSC (European Coal and Steel Community), itself building on the customs union of BENELUX (the integration of Belgium, the Netherlands, and Luxembourg) in 1948 - LAFTA, the Latin American Free Trade Association, was created in Latin America. In 1980, LAFTA was replaced by LAIA, the Latin American Integration Association, both with the same goal: to integrate Latin America, its economy, and its trade. Most researchers of this period agree that the regional market would be expanded through integration. The creation of one or more economic blocs in the region would unite the Latin American countries through a fusion of common interests and their geographical proximity, allowing them to develop common projects to promote mutual growth and economic development, tariff reductions, and increased trade, among many other goals set together. Thus, taking into account Mercosur, the main Latin American bloc, created in 1994, the aim of this paper is to provide a brief analysis of the trade balance performance of Brazil (the largest economy of the bloc) in Mercosur in two periods: 1989-1994 and 1994-2012. This period was chosen because the objective is to compare the periods before and after the integration of Brazil into Mercosur. The methodologies used were a literature review and descriptive statistics.
The results showed that after Brazil's integration into Mercosur, exports and imports grew within the bloc, and the country became the leading importer from the other Mercosur economies; that is, after integration, Brazil was largely responsible for promoting the expansion of regional trade through the import of products from the other members of the bloc.
Keywords: Brazil, mercosur, integration, trade balance, comparison
Procedia PDF Downloads 324
63 Establishment of Diagnostic Reference Levels for Computed Tomography Examination at the University of Ghana Medical Centre
Authors: Shirazu Issahaku, Isaac Kwesi Acquah, Simon Mensah Amoh, George Nunoo
Abstract:
Introduction: Diagnostic Reference Levels are important indicators for monitoring and optimizing protocols and procedures in medical imaging across facilities and equipment. They help evaluate whether, under routine clinical conditions, the median dose obtained for a representative group of patients for a specified procedure is unusually high or low. This study aimed to propose Diagnostic Reference Levels for the most common routine Computed Tomography examinations of the head, chest, and abdominopelvic regions at the University of Ghana Medical Centre. Methods: The Diagnostic Reference Levels were determined from the most common routine examinations: head Computed Tomography with and without contrast, abdominopelvic Computed Tomography with and without contrast, and chest Computed Tomography without contrast. The study was based on two dose indicators: the volumetric Computed Tomography Dose Index and the Dose-Length Product. Results: The estimated median values for head Computed Tomography with contrast were 38.33 mGy (volumetric Computed Tomography Dose Index) and 829.35 mGy.cm (Dose-Length Product), while without contrast they were 38.90 mGy and 860.90 mGy.cm, respectively. For abdominopelvic Computed Tomography with contrast, the estimated values were 40.19 mGy and 2096.60 mGy.cm; without contrast, they were 14.65 mGy and 800.40 mGy.cm, respectively. For chest Computed Tomography without contrast, the estimated values were 12.75 mGy and 423.95 mGy.cm, respectively. These median values represent the proposed diagnostic reference levels for the head, chest, and abdominopelvic regions.
Conclusions: The proposed Diagnostic Reference Levels are comparable to those recommended by the International Atomic Energy Agency and the International Commission on Radiological Protection (Publication 135), as well as to other regional published data from the European Commission and regional national Diagnostic Reference Levels in Africa. These reference levels will serve as benchmarks to guide clinicians in optimizing radiation dose while ensuring accurate diagnostic image quality at the facility.
Keywords: diagnostic reference levels, computed tomography dose index, computed tomography, radiation exposure, dose-length product, radiation protection
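The median-based procedure described in this abstract can be sketched in a few lines; the survey values below are hypothetical illustrations, not the study's data:

```python
from statistics import median

def propose_drl(dose_surveys):
    """Propose local Diagnostic Reference Levels as the median CTDIvol (mGy)
    and DLP (mGy.cm) over a representative patient sample per protocol."""
    drls = {}
    for protocol, records in dose_surveys.items():
        drls[protocol] = {
            "ctdi_vol": median(r["ctdi_vol"] for r in records),
            "dlp": median(r["dlp"] for r in records),
        }
    return drls

# Hypothetical head-CT survey (illustrative values only)
surveys = {
    "head_no_contrast": [
        {"ctdi_vol": 36.1, "dlp": 810.0},
        {"ctdi_vol": 38.9, "dlp": 860.9},
        {"ctdi_vol": 41.2, "dlp": 905.3},
    ],
}
drls = propose_drl(surveys)
print(drls["head_no_contrast"])  # median of each dose indicator
```

In practice the median is taken over a much larger representative sample and then compared against the published national and international reference values.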
62 Acetic Acid Adsorption and Decomposition on Pt(111): Comparisons to Ni(111)
Authors: Lotanna Ezeonu, Jason P. Robbins, Ziyu Tang, Xiaofang Yang, Bruce E. Koel, Simon G. Podkolzin
Abstract:
The interaction of organic molecules with metal surfaces is of interest in numerous technological applications, such as catalysis, bone replacement, and biosensors. Acetic acid is one of the main products of bio-oils produced from the pyrolysis of hemicellulosic feedstocks. However, the high oxygen content of these bio-oils makes them unsuitable for use as fuels. Hydrodeoxygenation is a proven technique for the catalytic deoxygenation of bio-oils. An understanding of the energetics and control of the bond-breaking sequences of biomass-derived oxygenates on metal surfaces will enable guided optimization of existing catalysts and the development of more active and selective processes for transforming biomass to fuels. Such investigations have been carried out with the aid of ultrahigh vacuum and its concomitant techniques. The high catalytic activity of platinum in biomass-derived oxygenate transformations has sparked considerable interest. We herein exploit infrared reflection absorption spectroscopy (IRAS), temperature-programmed desorption (TPD), and density functional theory (DFT) to study the adsorption and decomposition of acetic acid on a Pt(111) surface, which was then compared with Ni(111), a model non-noble metal. We found that acetic acid adsorbs molecularly on the Pt(111) surface at 90 K, interacting through the lone pair of electrons of one oxygen atom. At 140 K, the molecular form is still predominant, with some dissociative adsorption (in the form of acetate and hydrogen). Annealing to 193 K led to complete dehydrogenation of the molecular acetic acid species, leaving adsorbed acetate. At 440 K, decomposition of the acetate species occurs via decarbonylation and decarboxylation, as evidenced by desorption peaks for H₂, CO, CO₂, and CHₓ fragments (x = 1, 2) in the TPD. The assignments of the experimental IR peaks were made by visualizing the DFT-calculated vibrational modes. The results showed that acetate adsorbs in a bridged bidentate (μ²η²(O,O)) configuration.
The coexistence of linear and bridge-bonded CO was also predicted by the DFT results. A similar molecular acid adsorption energy was predicted for Ni(111), whereas a significant difference was found for acetate adsorption.
Keywords: acetic acid, platinum, nickel, infrared reflection absorption spectroscopy, temperature-programmed desorption, density functional theory
61 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the great potential for saving necessary quality control can be exploited via the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. Training prediction models requires the highest possible generalisability, which this limited data variability makes more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of a data science project. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In this work, the initial phase of CRISP-DM, business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification.
The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Notably, classification proves clearly superior to regression and achieves promising accuracies.
Keywords: classification, CRISP-DM, machine learning, predictive quality, regression
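The two framings compared in this abstract, regressing the leakage value and thresholding it versus classifying the inspection decision directly, can be sketched on synthetic data. All feature names, limits, and values below are illustrative assumptions, not Bosch Rexroth data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical process features and leakage volume flow for 500 valves
X = rng.normal(size=(500, 5))
leakage = np.abs(X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500))
limit = np.median(leakage)                 # illustrative pass/fail limit
y_class = (leakage > limit).astype(int)    # inspection decision label

X_tr, X_te, leak_tr, leak_te, y_tr, y_te = train_test_split(
    X, leakage, y_class, random_state=0)

# Option A: regress the leakage value, then threshold for the decision
reg = GradientBoostingRegressor(random_state=0).fit(X_tr, leak_tr)
acc_via_regression = float(np.mean((reg.predict(X_te) > limit) == y_te))

# Option B: classify pass/fail directly
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc_direct = clf.score(X_te, y_te)

print(f"regression-then-threshold accuracy: {acc_via_regression:.2f}")
print(f"direct classification accuracy:     {acc_direct:.2f}")
```

Which option wins on real test-bench data is exactly the business-understanding question the paper examines; the sketch only shows how the comparison can be set up.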
60 A Nonlinear Feature Selection Method for Hyperspectral Image Classification
Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo
Abstract:
For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, given the difficulty of collecting training samples. Hence, many studies have developed feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we proposed a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability was formed by a generalized RBF kernel with different bandwidths for different features. Moreover, it considered both the within-class separability and the between-class separability. A genetic algorithm was applied to tune these bandwidths so as to minimize the within-class separability and maximize the between-class separability simultaneously. This indicates that the corresponding feature space is more suitable for classification; in addition, the corresponding nonlinear classification boundary can separate the classes very well. These optimal bandwidths also show the importance of bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as band weights. The smaller the bandwidth, the larger the weight of the band and the more important it is for classification. Hence, sorting the reciprocals of the bandwidths in descending order gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset.
All non-background samples were used to form the testing dataset. A support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies achieved by the proposed method, F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracies increase dramatically using only the first few features: the accuracies for feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84164) is close to the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, we obtain similar results. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method first to determine a suitable feature subset for a specific purpose; researchers can then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This not only improves classification performance but also reduces the cost of obtaining hyperspectral images.
Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine
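A minimal sketch of the band-weighting idea, assuming a generalized RBF kernel of the form k(x, y) = exp(-Σ_d ((x_d - y_d)/s_d)²) with one bandwidth s_d per band; the bandwidth values below are made up for illustration, and tuning them would be done by the genetic algorithm described above:

```python
import numpy as np

def generalized_rbf_kernel(X, Y, bandwidths):
    """Generalized RBF kernel with one bandwidth per band:
    k(x, y) = exp(-sum_d ((x_d - y_d) / s_d) ** 2)."""
    diff = (X[:, None, :] - Y[None, :, :]) / bandwidths
    return np.exp(-np.sum(diff ** 2, axis=-1))

def band_order(bandwidths):
    """Order bands by weight 1/s_d, descending: the smaller the bandwidth,
    the larger the weight and the more important the band."""
    weights = 1.0 / np.asarray(bandwidths, dtype=float)
    return np.argsort(-weights)

bw = np.array([0.5, 4.0, 1.0])  # illustrative tuned bandwidths for 3 bands
order = band_order(bw)
print(order)  # band 0 first (smallest bandwidth), band 1 last
```

Truncating `order` at increasing lengths yields the nested feature subsets (10, 20, 50, ... bands) whose accuracies the experiments report.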
59 Adopting Data Science and Citizen Science to Explore the Development of African Indigenous Agricultural Knowledge Platform
Authors: Steven Sam, Ximena Schmidt, Hugh Dickinson, Jens Jensen
Abstract:
The goal of this study is to explore the potential of data science and citizen science approaches to develop an interactive, digital, open infrastructure that pulls together African indigenous agriculture and food systems data from multiple sources, making it accessible and reusable for policy, research, and practice in modern food production efforts. The World Bank has recognised that African Indigenous Knowledge (AIK) is innovative and unique among local and subsistence smallholder farmers, and it is central to sustainable food production and to enhancing biodiversity and natural resources in many poor, rural societies. AIK refers to tacit knowledge held in different languages, cultures, and skills passed down from generation to generation by word of mouth. AIK is a key driver of food production, preservation, and consumption for more than 80% of citizens in Africa and can therefore assist modern efforts to reduce food insecurity and hunger. However, the documentation and dissemination of AIK remain a big challenge confronting librarians and other information professionals in Africa, and there is a risk of losing AIK owing to urban migration, modernisation, land grabbing, and the emergence of relatively small-scale commercial farming businesses. There is also a clear disconnect between AIK and scientific knowledge and modern efforts for sustainable food production. The study combines data science and citizen science approaches through active community participation to generate and share AIK, facilitating learning and promoting knowledge that is relevant for policy intervention and sustainable food production through a curated digital platform based on FAIR principles. The study adopts key informant interviews along with a participatory photo and video elicitation approach, in which farmers are given digital devices (mobile phones) to record and document their everyday practices involving agriculture, food production, processing, and consumption by traditional means.
Data collected are analysed using the UK Science and Technology Facilities Council's proven citizen science (Zooniverse) and data science methodologies. Outcomes are presented in participatory stakeholder workshops, where the researchers outline plans for creating the platform and developing the knowledge-sharing standard framework and copyright agreement. Overall, the study shows that learning from AIK, by investigating what local communities know and have, can improve understanding of food production and consumption, in particular in times of stress or shocks affecting food systems and communities. Thus, the platform can be useful for local populations, research, and policy-makers, and it could lead to transformative innovation in the food system, creating a fundamental shift in the way the North supports sustainable, modern food production efforts in Africa.
Keywords: Africa indigenous agriculture knowledge, citizen science, data science, sustainable food production, traditional food system
58 Public Procurement and Innovation: A Municipal Approach
Authors: M. Moso-Diez, J. L. Moragues-Oregi, K. Simon-Elorz
Abstract:
Innovation procurement, designed to steer the development of solutions towards concrete public sector needs and to act as a driver for innovation from the demand side (in public services as well as in market opportunities for companies), is horizontally emerging as a new policy instrument. In 2014, the new EU public procurement directives 2014/24/EC and 2014/25/EC reinforced the support for Public Procurement for Innovation, dedicating funding instruments that can be used across all areas supported by Horizon 2020 and targeting potential buyers of innovative solutions: groups of public procurers with similar needs. Under this programme, new policy adopters and networks emerge, aiming to embed innovation criteria into new procurement processes. As these initiatives are still in progress, related research is scarce. We argue that Innovation Public Procurement can arise as an innovative policy instrument for public procurement in different policy domains, in spite of existing institutional and cultural barriers (legal guarantee versus innovation). The presentation combines insights from public procurement and supply chain management in a sustainability and innovation policy arena, as a means of providing an understanding of: (1) the circumstances that emerge; (2) the relationship between public and private actors; and (3) the emerging capacities in the definition of the agenda. The policy adopters are the contracting authorities, mainly at the municipal level, where they interact with the supply chain, interconnecting sustainability and climate measures with other policy priorities such as innovation and urban planning, and through the Competitive Dialogue procedure. We found that geography and territory affect both the level of the municipal budget (due to municipal income per capita) and its institutional competencies (due to demographic reasons).
In spite of the relevance of institutional determinants for public procurement, other factors play an important role, such as human factors as well as both public policy and private intervention. The experience studied is a 'city project' (Bilbao) in the field of brownfield decontamination. Brownfield sites typically refer to abandoned or underused industrial and commercial properties, such as old process plants, mining sites, and landfills, that are available but contain low levels of environmental contaminants that may complicate reuse or redevelopment of the land. This article concludes that Innovation Public Procurement in sustainability and climate issues should be further developed, both as a policy instrument and as a policy research line that could enable further relevant changes in public procurement as well as in climate innovation.
Keywords: innovation, city projects, public policy, public procurement
57 Examining Gender Bias in the Sport Concussion Assessment Tool 3 (SCAT3): A Differential Item Functioning Analysis in NCAA Sports
Authors: Rachel M. Edelstein, John D. Van Horn, Karen M. Schmidt, Sydney N. Cushing
Abstract:
As a consequence of sports-related concussions, female athletes have been documented as reporting more symptoms than their male counterparts, in addition to incurring longer periods of recovery. However, the role of sex and its potential influence on symptom reporting and recovery outcomes in concussion management has not been fully explored. The present study aims to investigate the relationship between female concussion symptom severity and the presence of assessment bias. The Sport Concussion Assessment Tool 3 (SCAT3), collected by the NCAA and DoD CARE Consortium, was quantified at five different time points post-concussion. The sample comprised N = 1,258 NCAA athletes: n = 473 female (soccer, rugby, lacrosse, ice hockey) and n = 785 male athletes (football, rugby, lacrosse, ice hockey). A polytomous Item Response Theory (IRT) Graded Response Model (GRM) was used to assess the relationship between sex and symptom reporting. Differential Item Functioning (DIF) and Differential Group Functioning (DGF) were used to examine potential group-level bias. DIF interactions were utilized to explore the impact of sex on symptom reporting among NCAA male and female athletes throughout and after their concussion recovery. DIF was detected in only a limited number of items after Benjamini-Hochberg corrections; however, one symptom, "Pressure in Head" (-0.29, p = 0.04 vs. -0.20, p = 0.04), was statistically significant at both < 6 hours and 24-48 hours. This implies that at < 6 hours, males were 29% less likely to endorse "Pressure in Head" compared to female athletes, and 20% less likely at 24-48 hours. Overall, the DGF suggested significant group differences, indicating that male athletes might be at higher risk of returning to play prematurely (logits = -0.38, p < 0.001). However, after analyzing the SCAT3, a clinically relevant trend was discovered: twelve of the twenty-two symptoms suggest higher difficulty in female athletes at three or more of the five time points.
These symptoms include Balance Problems, Blurry Vision, Confusion, Dizziness, Don't Feel Right, Feel in Fog, Feel Slowed Down, Low Energy, Neck Pain, Sensitivity to Light, Sensitivity to Noise, and Trouble Falling Asleep. Despite the lack of statistical significance, this tendency runs contrary to current literature stating that males may be unclear about their symptoms while females may be more honest in reporting them. Further research, including possible modifying socioecological factors, is needed to determine whether females consistently experience more symptoms and require longer recovery times or whether, more parsimoniously, males tend to present their symptoms and readiness for play differently than females. Such research will help improve the validity of current assumptions concerning male as compared to female head injuries and optimize individualized treatments for sports-related head injuries.
Keywords: female athlete, sports-related concussion, item response theory, concussion assessment
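A common screening approach related to the DIF analysis above can be sketched with ordinary logistic regression: regress a dichotomized item response on a matching severity score plus group membership, and inspect the group coefficient. Everything below (the simulated severity scores, the 0.3-logit group shift, the sex coding) is a hypothetical illustration, not the CARE Consortium data or the IRT-based GRM actually used in the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
total_score = rng.normal(size=n)      # proxy for overall symptom severity
sex = rng.integers(0, 2, size=n)      # 0 = male, 1 = female (hypothetical coding)

# Simulate one dichotomized item with a 0.3-logit group shift (uniform DIF)
logit = 1.5 * total_score + 0.3 * sex
p = 1.0 / (1.0 + np.exp(-logit))
item = (rng.random(n) < p).astype(int)

# DIF screen: does group membership predict the item response
# beyond the matching variable (total severity)?
X = np.column_stack([total_score, sex])
model = LogisticRegression().fit(X, item)
dif_coef = model.coef_[0][1]
print(f"group coefficient (log-odds): {dif_coef:.2f}")
```

A group coefficient meaningfully different from zero, after controlling for the matching score, flags the item for DIF; the GRM/DGF machinery in the study generalizes this idea to polytomous items and whole-test functioning.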
56 Reducing Flood Risk through Value Capture and Risk Communication: A Case Study in Cocody-Abidjan
Authors: Dedjo Yao Simon, Takahiro Saito, Norikazu Inuzuka, Ikuo Sugiyama
Abstract:
Abidjan city (Republic of Ivory Coast) is an emerging megacity and an urban coastal area where the number of reported floods is rising rapidly due to climate change and unplanned urbanization. However, comprehensive disaster mitigation plans, policies, and financial resources are still lacking, and the population is unaware of the extent and location of the flood zones, leaving them unprepared to mitigate the damage. Considering this situation, this paper discusses an approach to flood risk reduction in Cocody Commune through a value capture strategy and flood risk communication. Using geospatial techniques and hydrological simulation, we start by delineating flood zones and depths under several return periods in the study area. Then, a questionnaire-based field survey is conducted in order to validate the flood maps, to estimate the flood risk, and to sample residents' opinions on how disclosure of flood risk information could affect the values of property located inside and outside the flood zones. The results indicate that the study area is highly vulnerable to floods of a 5-year return period and above, which can cause serious harm to human lives and property, as demonstrated by the extent of the 5-year flood of 2014. It is also revealed that there is a high probability that the values of property located within flood zones will decline, while the values of surrounding property in the safe area will increase, once risk information disclosure commences. However, in order to raise public awareness of flood disaster and to prevent future housing development in high-risk prospective areas, flood risk information should be disseminated through the establishment of an early warning system.
In order to reduce the effect of risk information disclosure and to protect the values of property within the high-risk zone, we propose that property tax increments in flood-free zones be captured and utilized for infrastructure development and for maintaining the early warning system that will benefit people living in flood-prone areas. Through this case study, it is shown that the combination of a value capture strategy and risk communication can be an effective tool to educate citizens and to invest in flood risk reduction in emerging countries.
Keywords: Cocody-Abidjan, flood, geospatial techniques, risk communication, value capture
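The value capture mechanism proposed above amounts to simple increment arithmetic; the figures, tax rate, and capture share in this sketch are invented for illustration and carry no relation to Cocody's actual fiscal data:

```python
def captured_increment(base_value, new_value, tax_rate, capture_share=1.0):
    """Tax increment available for flood infrastructure: the extra property
    tax raised when values in the flood-free zone appreciate after risk
    information is disclosed. All figures are illustrative."""
    increment = max(new_value - base_value, 0.0)
    return increment * tax_rate * capture_share

# Hypothetical zone: assessed values rise from 200M to 230M after disclosure,
# 1.2% property tax rate, 80% of the increment earmarked for flood measures
funds = captured_increment(200e6, 230e6, 0.012, 0.8)
print(f"annual funds for the early warning system: {funds:,.0f}")
```

The captured amount scales with the disclosure-driven appreciation in the safe zone, which is what lets the scheme cross-subsidize protection for the flood-prone zone.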
55 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates
Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe
Abstract:
Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with the proper treatment. Drug-resistant TB occurs when the bacteria become resistant to the drugs used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based, and it takes a long time to identify the drug-resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science can offer new approaches to the problem. In this study, we propose to develop an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study used whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository and containing samples from different countries, as training data to build the ML models. Samples from different countries were included in order to generalize over the large group of TB isolates from different regions of the world; this allows the model to learn different behaviors of the TB bacteria and makes the model robust. The model training considered three pieces of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and the resistance-associated gene information for the particular drug. Two major datasets were constructed from this information: F1 and F2 were treated as two independent datasets, and the third piece of information was used as the class label for both. Five machine learning algorithms were considered to train the model: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost.
The models were trained on the datasets F1, F2, and F1F2, the latter being the F1 and F2 datasets merged. Additionally, an ensemble approach was used: the F1 and F2 datasets were each run through the gradient boosting algorithm, the outputs were combined into a so-called F1F2 ensemble dataset, and models were trained on this dataset with the five algorithms. As the experiments show, the ensemble model built on the Gradient Boosting outputs outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + Gradient Boosting model, for predicting the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
Keywords: machine learning, MTB, WGS, drug resistant TB
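The ensemble described above (one gradient-boosting model per feature set, whose outputs are merged and re-learned by a second-stage model) can be sketched as two-stage stacking. The variant matrices, toy resistance phenotype, and model settings below are illustrative assumptions, not the NCBI-derived isolate data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Hypothetical binary variant matrices: F1 = candidate-gene variants,
# F2 = predetermined resistance-associated variants
F1 = rng.integers(0, 2, size=(n, 40))
F2 = rng.integers(0, 2, size=(n, 15))
y = (F1[:, 0] | F2[:, 0]).astype(int)   # toy resistance phenotype label

F1_tr, F1_te, F2_tr, F2_te, y_tr, y_te = train_test_split(
    F1, F2, y, random_state=0)

# Stage 1: one gradient-boosting model per feature set
gb1 = GradientBoostingClassifier(random_state=0).fit(F1_tr, y_tr)
gb2 = GradientBoostingClassifier(random_state=0).fit(F2_tr, y_tr)

# Stage 2: merge the two predicted probabilities into the "F1F2 ensemble"
# dataset and train a final classifier (here RF) on it
stack_tr = np.column_stack([gb1.predict_proba(F1_tr)[:, 1],
                            gb2.predict_proba(F2_tr)[:, 1]])
stack_te = np.column_stack([gb1.predict_proba(F1_te)[:, 1],
                            gb2.predict_proba(F2_te)[:, 1]])
final = RandomForestClassifier(random_state=0).fit(stack_tr, y_tr)
acc = final.score(stack_te, y_te)
print(f"ensemble accuracy: {acc:.2f}")
```

The study's finding is that this stacked combination generalizes better than any single model trained on F1, F2, or the naively merged F1F2 matrix.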
54 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity
Authors: Justus Enninga
Abstract:
Enthusiasts and skeptics about economic growth do not have much in common in their preferences for institutional arrangements that solve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School's concept of polycentricity. Growth-enthusiasts, who will be referred to as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce, and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth, and what the arguments for growth-enhancing or degrowth policies are on each side. Secondly, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth-enthusiasts and growth-skeptics alike. The shorter third part surveys the literature on growth-enhancing policies and argues that large parts of it already accept that polycentric forms of governance, such as markets, the rule of law, and federalism, are an important part of economic growth.
Part four delves into the more nuanced question of why a stagnant steady-state economy, or even an economy that de-grows, will still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach requiring direct governmental control, a contrasting bottom-up approach is advanced here. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and provides an institutionalized discovery process for new solutions to the problem of ecological collective action, no matter whether one belongs to the Simons or the Ehrlichs in a green political economy.
Keywords: degrowth, green political theory, polycentricity, institutional robustness
53 Antibacterial Bioactive Glasses in Orthopedic Surgery and Traumatology
Authors: V. Schmidt, L. Janovák, N. Wiegand, B. Patczai, K. Turzó
Abstract:
Large bone defects are not able to heal spontaneously. Bioactive glasses seem to be appropriate (bio)materials for bone reconstruction: they are osteoconductive and osteoinductive and therefore play a useful role in bony regeneration and repair. Because of their suboptimal mechanical properties (e.g., brittleness, low bending strength, and low fracture toughness), their applications are limited. Bioactive glass can, however, be used as a coating material applied to metal surfaces. In this way, when using them as implants, the excellent mechanical properties of metals and the biocompatibility and bioactivity of glasses are both utilized. Furthermore, ion release effects of bioactive glasses on osteogenic and angiogenic responses have been demonstrated. Silicate bioactive glasses (45S5 Bioglass) induce the release and exchange of soluble Si, Ca, P, and Na ions at the material surface. This leads to specific cellular responses inducing bone formation, which is favorable for the biointegration of orthopedic prostheses. The incorporation of additional elements into the silicate network, such as fluorine, magnesium, iron, silver, potassium, or zinc, has also been shown to be beneficial, as the local delivery of these ions can enhance specific cell functions. Although hip and knee prostheses have a high success rate, bacterial infections, mainly implant-associated, are serious and frequent complications. Infection can also develop after implantation of hip prostheses, and its elimination means more surgeries for the patient and additional costs for the clinic. Prosthesis-related infection is a severe complication of orthopedic surgery, which often causes prolonged illness, pain, and functional loss. While international efforts are made to reduce the risk of these infections, orthopedic surgical site infections (SSIs) continue to occur in high numbers.
It is currently estimated that up to 2.5% of primary hip and knee surgeries and up to 20% of revision arthroplasties are complicated by periprosthetic joint infections (PJIs). According to some authors, these numbers are underestimated, and they are also increasing. Staphylococcus aureus is the leading cause of both SSIs and PJIs, and the prevalence of methicillin-resistant S. aureus (MRSA) is on the rise, particularly in the United States. These deep infections lead to implant removal and consequently increase morbidity and mortality. The study targets this clinical problem using our experience so far with Ag-doped polymer coatings on titanium implants. Non-modified or modified (e.g., doped with antibacterial agents such as Ag) bioactive glasses could play a role in the prevention of infections or in the therapy of infected tissues. Bioactive glasses have excellent biocompatibility, as proven by in vitro cell culture studies of human osteoblast-like MG-63 cells. Ag-doped bioactive glass scaffolds have good antibacterial activity against Escherichia coli and other bacteria. It may be concluded that these scaffolds have great potential in the prevention and therapy of implant-associated bone infection.
Keywords: antibacterial agents, bioactive glass, hip and knee prosthesis, medical implants
52 Developing an Intervention Program to Promote Healthy Eating in a Catering System Based on Qualitative Research Results
Authors: O. Katz-Shufan, T. Simon-Tuval, L. Sabag, L. Granek, D. R. Shahar
Abstract:
Meals provided by catering systems are a common source of workers' nutrition and have been found to contribute high amounts of calories and fat. Thus, eating catering food daily can lead to overweight and chronic diseases. On the other hand, the institutional dining room may be an ideal environment for implementing intervention programs that promote healthy eating. This may improve diners' lifestyle and reduce the prevalence of overweight, obesity, and chronic diseases among them. The significance of this study lies in developing an intervention program based on the diners' dietary habits, preferences, and attitudes towards various intervention programs. In addition, a successful catering-based intervention program may affect a large group of diners simultaneously, leading to improved nutrition, a healthier lifestyle, and disease prevention on a large scale. In order to develop the intervention program, we conducted a qualitative study. We interviewed 13 diners who eat regularly at catering systems, using semi-structured interviews. The interviews were recorded, transcribed, and then analyzed by the thematic method, which identifies, analyzes, and reports themes within the data. The interviews revealed several major themes, including the diners' expectation to be provided with healthy food choices; their request for nutrition-expert involvement in planning the meals; and their feeling that there is a conflict between the sensory attractiveness of the food and its nutritional quality. In the context of catering-based intervention programs, the diners preferred scientific and clear messages focusing on labeling healthy dishes only, as opposed to labeling unhealthy dishes; they were also interested in a nutrition education program to accompany the intervention program.
Based on these findings, we developed an intervention program that includes changes in the food served, such as replacing several menu items and nutritionally improving some of the recipes, as well as environmental changes, such as relocating some food items presented on the buffet, placing positive nutritional labels on healthy dishes, and running an ongoing healthy nutrition campaign, all accompanied by a nutrition education program. The intervention program is currently being tested for its impact on health outcomes and its cost-effectiveness.
Keywords: catering system, food services, intervention, nutrition policy, public health, qualitative research
51 Towards Creative Movie Title Generation Using Deep Neural Models
Authors: Simon Espigolé, Igor Shalyminov, Helen Hastie
Abstract:
Deep machine learning techniques, including deep neural networks (DNNs), have been used to model language and dialogue for conversational agents performing tasks such as giving technical support, as well as for general chit-chat. They have been shown to be capable of generating long, diverse, and coherent sentences in end-to-end dialogue systems and natural language generation. However, these systems tend to imitate the training data and will only generate concepts and language within the scope of what they have been trained on. This work explores how deep neural networks can be used in a task that would normally require human creativity, whereby a human would read the movie description and/or watch the movie and come up with a compelling, interesting movie title. This task differs from simple summarization in that the movie title may not necessarily be derivable from the content or semantics of the movie description. Here, we train a type of DNN called a sequence-to-sequence (seq2seq) model that takes as input a short textual movie description and some metadata, e.g., the genre of the movie, and learns to output a movie title. The idea is that the DNN will learn certain techniques and approaches that a human movie titler may deploy and that may not be immediately obvious to the human eye. To give an example of a generated movie title, for the movie synopsis 'A hitman concludes his legacy with one more job, only to discover he may be the one getting hit.', the original, true title is 'The Driver' and the one generated by the model is 'The Masquerade'. A human evaluation was conducted in which the DNN output was compared to the true human-generated title, as well as a number of baselines, on three 5-point Likert scales: 'creativity', 'naturalness', and 'suitability'. Subjects were also asked which of the two systems they preferred. The scores of the DNN model were comparable to the scores of the human-generated movie title, with means of m=3.11 and m=3.12, respectively.
There is room for improvement in these models, as they were rated significantly less 'natural' and 'suitable' than the human title. In addition, the human-generated title was preferred overall 58% of the time when pitted against the DNN model. These results, however, are encouraging given the comparison with a highly considered, well-crafted human-generated movie title. Movie titles go through a rigorous process of assessment by experts and focus groups who have watched the movie. This process is in place due to the large amount of money at stake and the importance of creating an effective title that captures the audience's attention. Our work shows progress towards automating this process, which in turn may lead to a better understanding of creativity itself.
Keywords: creativity, deep machine learning, natural language generation, movies
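The description-plus-genre input to such a seq2seq model can be encoded along the following lines. This is a minimal sketch only: the whitespace tokenizer, the special symbols (`<sos>`, `<eos>`, `<pad>`), and the genre control token are illustrative assumptions, not the authors' actual scheme (real systems would use subword units and learned embeddings).

```python
# Minimal sketch of input/target encoding for a description-to-title
# seq2seq model. All special-symbol names are illustrative assumptions.

def build_vocab(texts):
    """Map every whitespace token to an integer id; ids 0-2 are reserved."""
    vocab = {"<pad>": 0, "<sos>": 1, "<eos>": 2}
    for text in texts:
        for tok in text.lower().split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode_pair(description, genre, title, vocab):
    """Return (source, target) id sequences for one training example.
    The genre is prepended to the source as a control token."""
    src = [vocab[f"<{genre}>"]] + [vocab[t] for t in description.lower().split()]
    tgt = [vocab["<sos>"]] + [vocab[t] for t in title.lower().split()] + [vocab["<eos>"]]
    return src, tgt

desc = "a hitman concludes his legacy with one more job"
title = "the driver"
vocab = build_vocab(["<thriller>", desc, title])
src, tgt = encode_pair(desc, "thriller", title, vocab)
```

The encoder would consume `src`, and the decoder would be trained with teacher forcing to emit `tgt` shifted by one position, with greedy or beam-search decoding producing a new title at inference time.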
50 Evaluation of Dry Matter Yield of Panicum maximum Intercropped with Pigeonpea and Sesbania sesban
Authors: Misheck Musokwa, Paramu Mafongoya, Simon Lorentz
Abstract:
Seasonal fodder shortages during the dry season are a major constraint for smallholder livestock farmers in South Africa. To mitigate the shortage of fodder, legume trees can be intercropped with pastures, which can diversify the sources of feed and increase the amount of protein available to grazing animals. The objective was to evaluate the dry matter yield of Panicum maximum and land productivity under different fodder production systems during the 2016/17-2017/18 seasons at Empangeni (28.6391° S, 31.9400° E). A randomized complete block design, replicated three times, was used; the treatments were sole P. maximum, P. maximum + Sesbania sesban, P. maximum + pigeonpea, sole S. sesban, and sole pigeonpea. Three-month-old S. sesban seedlings were transplanted, whilst pigeonpea was direct-seeded, both at a spacing of 1 m x 1 m. P. maximum seeds were drilled between the tree rows at a rate of 7.5 kg/ha with an inter-row spacing of 0.25 m. Harvests for dry matter yield were taken six months apart. A 0.25 m² quadrat, randomly placed at three points in each plot, was used as the sampling area when harvesting P. maximum. There was a significant difference (P < 0.05) across the three harvests and in total dry matter. Sole P. maximum had a higher dry matter yield than both intercrops at the first harvest and in total; the second and third harvests did not differ significantly from the pigeonpea intercrop. The results for the three harvests were in this order: P. maximum (541.2c, 1209.3b, and 1557b kg ha⁻¹) ≥ P. maximum + pigeonpea (157.2b, 926.7b, and 1129b kg ha⁻¹) > P. maximum + S. sesban (36.3a, 282a, and 555a kg ha⁻¹). Total accumulated dry matter yield: P. maximum (3307c kg ha⁻¹) > P. maximum + pigeonpea (2212 kg ha⁻¹) ≥ P. maximum + S. sesban (874 kg ha⁻¹). There was a significant difference (P < 0.05) in tree seed yield: pigeonpea (1240.3 kg ha⁻¹) ≥ pigeonpea + P. maximum (862.7 kg ha⁻¹) > S. sesban (391.9 kg ha⁻¹) ≥ S. sesban + P. maximum.
The Land Equivalent Ratio (LER) was in the following order: P. maximum + pigeonpea (1.37) > P. maximum + S. sesban (0.84) > pigeonpea (0.59) ≥ S. sesban (0.57) > P. maximum (0.26). The results indicate that it is beneficial to intercrop P. maximum with pigeonpea because of the higher land productivity. Planting grass with pigeonpea was more beneficial than S. sesban with grass or sole cropping in terms of easing the shortage of arable land: P. maximum + pigeonpea saves a substantial amount of land (37%), which can subsequently be used for other crop production. Pigeonpea is recommended as an intercrop with P. maximum due to its higher LER and the combined production of livestock feed, human food, and firewood. Panicum grass is low in crude protein though high in carbohydrates, so there is a need to intercrop it with legume trees. A farmer who buys concentrates can reduce costs by combining P. maximum with pigeonpea, as this will provide a balanced diet at low cost.
Keywords: fodder, livestock, productivity, smallholder farmers
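As a check on the headline figure, the LER of the P. maximum + pigeonpea system can be recomputed from the yields reported above. This is a worked sketch; the small gap between the result and the published 1.37 is presumably rounding in the published yield totals.

```python
# Land Equivalent Ratio: sum over component crops of
# (yield when intercropped) / (yield when grown as a sole crop).
def land_equivalent_ratio(intercrop_a, sole_a, intercrop_b, sole_b):
    return intercrop_a / sole_a + intercrop_b / sole_b

# P. maximum dry matter (kg/ha): 2212 intercropped vs. 3307 sole.
# Pigeonpea seed yield (kg/ha): 862.7 intercropped vs. 1240.3 sole.
ler = land_equivalent_ratio(2212, 3307, 862.7, 1240.3)
# ler ≈ 1.36, i.e. the intercrop needs roughly a third less land
# than growing the two sole crops separately.
```

An LER above 1 means intercropping uses land more efficiently than sole cropping; a value around 1.36-1.37 is consistent with the roughly 37% land saving quoted in the conclusion.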
49 Polarization as a Proxy of Misinformation Spreading
Authors: Michela Del Vicario, Walter Quattrociocchi, Antonio Scala, Ana Lucía Schmidt, Fabiana Zollo
Abstract:
Information, rumors, and debates may shape and impact public opinion heavily. In recent years, several concerns have been expressed about social influence on the Internet and the effect that online debates might have on real-world processes. Indeed, on online social networks users tend to select information that is coherent with their system of beliefs and to form groups of like-minded people –i.e., echo chambers– where they reinforce and polarize their opinions. In this way, the potential benefits coming from exposure to different points of view may be reduced dramatically, and individuals' views may become more and more extreme. Such a context fosters the spreading of misinformation, which has always represented a socio-political and economic risk. The persistence of unsubstantiated rumors –e.g., the hypothetical and hazardous link between vaccines and autism– suggests that social media do have the power to misinform, manipulate, or control public opinion. Current approaches such as debunking efforts or algorithm-driven solutions based on the reputation of the source seem to prove ineffective against collective superstition. Indeed, experimental evidence shows that confirmatory information gets accepted even when it contains deliberately false claims, while dissenting information is mainly ignored, influences users' emotions negatively, and may even increase group polarization. Moreover, confirmation bias has been shown to play a pivotal role in information cascades, posing serious warnings about the efficacy of current debunking efforts. Nevertheless, mitigation strategies have to be adopted. To generalize the problem and to better understand the social dynamics behind information spreading, in this work we rely on a tight quantitative analysis to investigate the behavior of more than 300M users with respect to news consumption on Facebook over a time span of six years (2010-2015).
Through a massive analysis of 920 news outlet pages, we are able to characterize the anatomy of news consumption on a global and international scale. We show that users tend to focus on a limited set of pages (selective exposure), eliciting a sharp and polarized community structure among news outlets. Moreover, we find similar patterns around the Brexit debate –the British referendum to leave the European Union– where we observe the spontaneous emergence of two well-segregated and polarized groups of users around news outlets. Our findings provide interesting insights into the determinants of polarization and the evolution of core narratives in online debating. Our main aim is to understand and map the information space on online social media by identifying non-trivial proxies for the early detection of massive informational cascades. Furthermore, by combining users' traces, we are finally able to draft the main concepts and beliefs of the core narrative of an echo chamber and its related perceptions.
Keywords: information spreading, misinformation, narratives, online social networks, polarization
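A user-level polarization proxy of the kind used in this line of work can be sketched as follows. The exact formula here is an illustrative assumption (a like-share rescaled to [-1, 1]), not necessarily the definition the authors use.

```python
# Toy polarization score: the fraction of a user's likes that fall on
# content category A, rescaled from [0, 1] to [-1, 1]. Values near -1 or
# +1 indicate an echo-chamber user; values near 0 indicate mixed
# consumption across the two categories.
def polarization(likes_cat_a, likes_cat_b):
    total = likes_cat_a + likes_cat_b
    if total == 0:
        raise ValueError("user has no likes in either category")
    return 2 * likes_cat_a / total - 1

# A user with 3 likes on category A and 1 on category B leans toward A:
# polarization(3, 1) → 0.5
```

Plotting this score over a large user sample would be expected to show the sharply bimodal distribution that signals the segregated, polarized communities described above.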
48 Velma-ARC's Rehabilitation of Repentant Cybercriminals in Nigeria
Authors: Umukoro Omonigho Simon, Ashaolu David ‘Diya, Aroyewun-Olaleye Temitope Folashade
Abstract:
The VELMA Action to Reduce Cybercrime (ARC) is an initiative, the first of its kind in Nigeria, designed to identify, rehabilitate, and empower repentant cybercrime offenders, popularly known as 'yahoo boys' in Nigerian parlance. Velma ARC provides social inclusion boot camps with the goal of rehabilitating cybercriminals via psychotherapeutic interventions, improving their IT skills, and empowering them to make constructive contributions to society. This report highlights the psychological interventions provided for participants of the maiden edition of the Velma ARC boot camp and presents the outcomes of these interventions. The boot camp was set up on hotel premises booked solely for the one-month event. The participants were selected and invited via the Velma online recruitment portal, based on an objective double-blind selection process applied to the pool of potential participants who had signified interest through the portal. The participants first underwent psychological profiling (personality, symptomology, and psychopathology) before the individual and group sessions began. They were profiled using the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF), the latest version in its series. Individual psychotherapy sessions were conducted for all participants based on the interpretation of their profiles. A focus group discussion was held later to discuss the movie 'Catch Me If You Can', directed by Steven Spielberg and featuring Leonardo DiCaprio and Tom Hanks. The movie was based on the true life story of Frank Abagnale, a notorious scammer and con artist in his youth. Emergent themes from the movie were discussed as psycho-educative material for the participants.
The overall evaluation of outcomes from the VELMA ARC rehabilitation boot camp stemmed from a disaggregated assessment of observed changes, summarized in the clinical psychologist's final report, which was detailed enough to infer genuine repentance and a positive change in attitude towards cybercrime among the participants. Follow-up services were incorporated to validate the initial observations. This gives credence to the potency of the psycho-educative intervention provided during the Velma ARC boot camp. It was recommended that support and collaboration from the government and other agencies and individuals be provided to assist the VELMA foundation in expanding the scope and quality of the Velma ARC initiative as an additional requirement for cybercrime offenders following incarceration.
Keywords: Velma-ARC, cybercrime offenders, rehabilitation, Nigeria
47 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly
Authors: Alex Eldo Simon, Abhishek Yadav
Abstract:
This study aimed to evaluate the utility of postmortem computed tomography (PMCT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed PMCT data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The study involved measuring the cardiothoracic ratio (CTR) from coronal CT images and determining the actual cardiac weight by weighing the heart during the autopsy. The inclusion criteria were cases of sudden death suspected to be caused by cardiac pathology, while the exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and to evaluate the accuracy of using the CTR to detect an enlarged heart, receiver operating characteristic (ROC) curves were generated. The CTR is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter in relation to the maximum transverse diameter of the chest wall. The clinically used CTR criterion has been modified from 0.50 to 0.57 for use in postmortem settings, where abnormalities can be detected by comparing CTR values to this threshold: a CTR value of 0.57 or higher is suggestive of hypertrophy but is not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR above 0.50 and up to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was 0.52 ± 0.06.
Among the 54 cases evaluated, the mean heart weight was 369.4 ± 99.9 grams. Twelve cases were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. The sensitivity of the test was 55.56% (95% CI: 26.66-81.12), the specificity was 84.44% (95% CI: 71.22-92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1-88.23). A limitation of the study was the low sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight in this study suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, it should be noted that the low sensitivity of the test (55.56%) may limit its diagnostic accuracy, and further studies with larger sample sizes and more diverse populations are therefore needed to validate these findings.
Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio
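The reported sensitivity, specificity, and accuracy are mutually consistent with a single 2x2 table. The cell counts used below (TP=5, FN=4, FP=7, TN=38) are reconstructed from the published figures (9 autopsy-positive and 12 PMCT-positive cases out of 54), an inference rather than data stated in the abstract.

```python
# Standard diagnostic-test metrics from a 2x2 confusion matrix.
# Positive = hypertrophy on PMCT (CTR > 0.57); reference standard =
# autopsy heart weight > 450 g. Cell counts are inferred, not reported.
def diagnostic_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)               # true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, accuracy

sens, spec, acc = diagnostic_metrics(tp=5, fn=4, fp=7, tn=38)
# round(100 * sens, 2) → 55.56
# round(100 * spec, 2) → 84.44
# round(100 * acc, 2)  → 79.63
```

As a consistency check, these cells give 5 + 7 = 12 PMCT-positive cases, 5 + 4 = 9 autopsy-positive cases, and a total of 54, matching the counts reported in the abstract.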