Search results for: Michal Simon

82 Risk Factors for Severe Typhoid Fever in Children: A French Retrospective Study about 78 Cases from 2000-2017 in Six Parisian Hospitals

Authors: Jonathan Soliman, Thomas Cavasino, Virginie Pommelet, Lahouari Amor, Pierre Mornand, Simon Escoda, Nina Droz, Soraya Matczak, Julie Toubiana, François Angoulvant, Etienne Carbonnelle, Albert Faye, Loic de Pontual, Luu-Ly Pham

Abstract:

Background: Typhoid and paratyphoid fever are systemic infections caused by Salmonella enterica serovar Typhi or Paratyphi (A, B, C). Children traveling to tropical areas are at risk of contracting these diseases, which can be complicated. Methods: Clinical, biological, and bacteriological data were collected from 78 pediatric cases reported between 2000 and 2017 in six Parisian hospitals. Children aged 0 to 18 years, with a diagnosis of typhoid or paratyphoid fever confirmed by bacteriological testing, were included. Epidemiologic, clinical, and biological features, as well as the presence of multidrug-resistant (MDR) bacteria or intermediate susceptibility to ciprofloxacin (nalidixic acid resistance), were examined by univariate analysis and by logistic regression analysis to identify risk factors for severe typhoid in children. Results: 84.6% of the children were imported cases of typhoid fever (n=66/78) and 15.4% were autochthonous cases (n=12/78). 89.7% of cases were caused by S. Typhi (n=70/78) and 12.8% by S. Paratyphi (n=10/78), including 2 co-infections. 19.2% were intrafamilial cases (n=15/78). Median age at diagnosis was 6.4 years [range: 6 months to 17.9 years]. 28.2% of the cases were complicated forms (n=22/78): digestive (n=8; 10.3%), neurological (n=7; 9%), pulmonary complications (n=4; 5.1%), and hemophagocytic syndrome (n=4; 5.1%). Only 5% of the children had prior immunization with the non-conjugated typhoid vaccine (n=4/78). 28% of the cases (n=22/78) were caused by resistant bacteria. Thrombocytopenia and diagnostic delay were significantly associated with severe infection (p=0.029 and p=0.01, respectively). Complicated forms were more common with MDR bacteria (p=0.1) and were not statistically associated with young age or sex in this study. Conclusions: Typhoid and paratyphoid fever are not rare in children returning from tropical areas. This multicentric pediatric study suggests that thrombocytopenia, diagnostic delay, and multidrug-resistant bacteria are associated with severe typhoid fever and complicated forms in children.
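
The abstract reports univariate and logistic regression analyses of risk factors for severe typhoid. A minimal sketch of such a model is shown below; the file name, column names, and predictors (thrombocytopenia, diagnostic delay, MDR infection, age) are illustrative assumptions, not the study's actual dataset or specification.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical case-level table: one row per child, binary outcome "severe"
    df = pd.read_csv("typhoid_cases.csv")   # illustrative file name, not the study data

    predictors = ["thrombocytopenia", "diagnostic_delay_days", "mdr_infection", "age_years"]
    X = sm.add_constant(df[predictors])     # add intercept term
    y = df["severe"]

    # Logistic regression of severe typhoid on candidate risk factors
    model = sm.Logit(y, X).fit()
    print(model.summary())
    print(np.exp(model.params).round(2))    # odds ratios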

Keywords: antimicrobial resistance, children, Salmonella enterica typhi and paratyphi, severe typhoid

Procedia PDF Downloads 147
81 Investigation of the Mechanical and Thermal Properties of a Silver Oxalate Nanoporous Structured Sintered Joint for Micro-joining in Relation to the Sintering Process Parameters

Authors: L. Vivet, L. Benabou, O. Simon

Abstract:

With highly demanding applications in the field of power electronics, there is an increasing need for interconnection materials with properties that can ensure both good mechanical assembly and high thermal/electrical conductivities. So far, lead-free solders have been considered an attractive solution, but recently, sintered joints based on nano-silver paste have been used for die attach and have proved to be a promising solution offering increased performance in high-temperature applications. In this work, the main parameters of the bonding process using silver oxalates, chiefly the heating rate and the bonding pressure, are studied. Their effects on both the mechanical and thermal properties of the sintered layer are evaluated following an experimental design. Pairs of copper substrates with gold metallization are assembled through the sintering process to produce the samples, which are tested using a micro-traction machine. In addition, the obtained joints are examined through microscopy to identify the important microstructural features in relation to the measured properties. The formation of an intermetallic compound at the junction between the sintered silver layer and the gold metallization deposited on copper is also analyzed. Microscopy analysis reveals a nanoporous structure of the sintered material. It is found that higher temperature and bonding pressure result in higher densification of the sintered material, with higher thermal conductivity of the joint but less mechanical flexibility to accommodate the thermo-mechanical stresses arising during service. The experimental design hence allows the determination of the optimal process parameters to reach sufficient thermal/mechanical properties for a given application. It is also found that the interphase formed between the silver and the gold metallization is the location where fracture occurred after mechanical testing, suggesting that the inter-diffusion mechanism between the different elements of the assembly leads to the formation of a relatively brittle compound.

Keywords: nanoporous structure, silver oxalate, sintering, mechanical strength, thermal conductivity, microelectronic packaging

Procedia PDF Downloads 69
80 Functional Surfaces and Edges for Cutting and Forming Tools Created Using Directed Energy Deposition

Authors: Michal Brazda, Miroslav Urbanek, Martina Koukolikova

Abstract:

This work focuses on the development of functional surfaces and edges for cutting and forming tools created through Directed Energy Deposition (DED) technology. In the context of growing challenges in modern engineering, additive technologies, especially DED, present an innovative approach to manufacturing tools for forming and cutting. One of the key features of DED is its ability to precisely and efficiently deposit fully dense metal from powder feedstock, enabling the creation of complex geometries and optimized designs. It is gradually becoming an increasingly attractive choice for tool production due to its ability to achieve high precision while simultaneously minimizing waste and material costs. Tools created using DED technology gain significant durability through the utilization of high-performance materials such as nickel alloys and tool steels. For high-temperature applications, Nimonic 80A alloy is applied, while for cold applications, M2 tool steel is used. The addition of ceramic materials, such as tungsten carbide, can significantly increase the tool's resistance. The introduction of functionally graded materials is a significant contribution, opening up new possibilities for gradual changes in the mechanical properties of the tool and optimizing its performance in different sections according to specific requirements. This work provides an overview of individual applications and their use in industry. Microstructural analyses have been conducted, providing detailed insights into the structure of individual components, alongside examinations of the mechanical properties and tool life. These analyses offer a deeper understanding of the efficiency and reliability of the created tools, which is a key element for successful development in the field of cutting and forming tools. The production of functional surfaces and edges using DED technology can result in financial savings, as the entire tool does not have to be manufactured from expensive special alloys. The tool can be made from common steel, onto which a functional surface of special materials is applied. Additionally, DED allows for tool repair after wear, eliminating the need to produce a new part, contributing to overall cost reduction, and reducing the environmental footprint. Overall, the combination of DED technology, functionally graded materials, and verified technologies sets a new standard for innovative and efficient development of cutting and forming tools in the modern industrial environment.

Keywords: additive manufacturing, directed energy deposition, DED, laser, cutting tools, forming tools, steel, nickel alloy

Procedia PDF Downloads 19
79 An Unexpected Helping Hand: Consequences of Redistribution on Personal Ideology

Authors: Simon B.A. Egli, Katja Rost

Abstract:

Literature on redistributive preferences has proliferated in past decades. A core assumption behind it is that variation in redistributive preferences can explain different levels of redistribution. In contrast, this paper considers the reverse: what if it is redistribution that changes redistributive preferences? The core assumption behind the argument is that if self-interest - which we label concrete preferences - and ideology - which we label abstract preferences - come into conflict, the former will prevail and lead to an adjustment of the latter. To test the hypothesis, data from a survey conducted in Switzerland during the first wave of the COVID-19 crisis are used. A significant portion of the workforce at the time unexpectedly received state money through the short-time working program. Short-time work was used as a proxy for self-interest and was tested (1) on the support given to hypothetical, ailing firms during the crisis and (2) on the prioritization of justice principles guiding state action. In a first step, several models using OLS regressions on political orientation were estimated to test our hypothesis as well as to check for non-linear effects. We expected support for ailing firms to be the same regardless of ideology, but only for people on short-time work. The results both confirm our hypothesis and suggest a non-linear effect: far-right individuals on short-time work were disproportionately supportive compared to moderate ones. In a second step, ordered logit models were estimated to test the impact of short-time work and political orientation on the rankings of the distributive justice principles need, performance, entitlement, and equality. The results show that being on short-time work significantly alters the prioritization of justice principles. Right-wing individuals are much more likely to prioritize need and equality over performance and entitlement when they receive government assistance. No such effect is found among left-wing individuals. In conclusion, we provide moderate to strong evidence that unexpectedly finding oneself at the receiving end changes redistributive preferences if personal ideology is antithetical to redistribution. The implications of our findings for the study of populism, personal ideologies, and political change are discussed.
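
A minimal sketch of one of the regression models described above (support for ailing firms regressed on political orientation and short-time work status) is given below; the file and column names are hypothetical, and the interaction plus quadratic term is one simple way to test for the moderating and non-linear effects the abstract mentions, not necessarily the authors' exact specification.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey_wave1.csv")    # illustrative file name

    # support: support for state aid to hypothetical ailing firms
    # orientation: left-right self-placement; short_time_work: 1 if on short-time work
    model = smf.ols("support ~ orientation * short_time_work + I(orientation ** 2)",
                    data=df).fit()
    print(model.summary())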

Keywords: COVID-19, ideology, redistribution, redistributive preferences, self-interest

Procedia PDF Downloads 112
78 Socioeconomic Disparities in the Prevalence of Obesity in Adults with Diabetes in Israel

Authors: Yael Wolff Sagy, Yiska Loewenberg Weisband, Vered Kaufman Shriqui, Michal Krieger, Arie Ben Yehuda, Ronit Calderon Margalit

Abstract:

Background: Obesity is both a risk factor for and a common comorbidity of diabetes. Obesity impedes the achievement of glycemic control and enhances the damage caused by hyperglycemia to blood vessels; thus, it increases diabetes-related complications. This study assessed the prevalence of obesity and morbid obesity among Israeli adults with diabetes and estimated disparities associated with sex and socioeconomic position (SEP). Methods: A cross-sectional study was conducted in the setting of the Israeli National Program for Quality Indicators in Community Healthcare. Data on the entire Israeli population are retrieved from the electronic medical records of the four health maintenance organizations (HMOs). The study population included all Israeli patients with diabetes aged 20-64 with documented body mass index (BMI) in 2016 (N=180,451). Diabetes was defined as the existence of one or more of the following criteria: (a) plasma glucose level >200 mg% in at least two tests conducted at least one month apart in the previous year; (b) HbA1c>6.5% at least once in the previous year; (c) at least three prescriptions of diabetes medications dispensed during the previous year. Two measures were included: the prevalence of obesity (defined as last BMI≥30 kg/m2 and <35 kg/m2) and the prevalence of morbid obesity (defined as last BMI≥35 kg/m2) in individuals aged 20-64 with diabetes. The cut-off value for morbid obesity was set in accordance with the eligibility criteria for bariatric surgery in diabetics. Data were collected by the HMOs and aggregated by age, sex, and SEP. SEP was based on statistical area rankings by the Israeli Central Bureau of Statistics and divided into 4 categories, from 1 (lowest) to 4 (highest). Results: BMI documentation among adults with diabetes was 84.9% in 2016. The prevalence of obesity in the study population was 30.5%. Although the overall rate was similar in both sexes (30.8% in females, 30.3% in males), SEP disparities were stronger in females (32.7% in SEP level 1 vs. 27.7% in SEP level 4; 18.1% relative difference) compared to males (30.6% in SEP level 1 vs. 29.3% in SEP level 4; 4.4% relative difference). The overall prevalence of morbid obesity in this population was 20.8% in 2016. The rate among females was almost double the rate in males (28.1% and 14.6%, respectively). In both sexes, the prevalence of morbid obesity was strongly associated with lower SEP. However, in females, disparities between SEP levels were much stronger (34.3% in SEP level 1 vs. 18.7% in SEP level 4; 83.4% relative difference) compared to SEP disparities in males (15.7% in SEP level 1 vs. 12.3% in SEP level 4; 27.6% relative difference). Conclusions: The overall prevalence of BMI≥30 kg/m2 among adults with diabetes in Israel exceeds 50%, and the prevalence of morbid obesity suggests that 20% meet the BMI criteria for bariatric surgery. Prevalence rates show major SEP and sex disparities, with especially strong SEP disparities in morbid obesity among females. These findings highlight the need for greater consideration of different population groups when implementing interventions.
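
The SEP disparities quoted above are expressed as relative differences between the lowest and highest SEP levels. The small sketch below reproduces that arithmetic from the percentages given in the abstract; the helper function itself is illustrative.

    def relative_difference(low_sep_pct, high_sep_pct):
        """Relative difference between the lowest and highest SEP levels, in percent."""
        return 100 * (low_sep_pct - high_sep_pct) / high_sep_pct

    # Obesity among females: 32.7% (SEP 1) vs. 27.7% (SEP 4) -> about 18.1%
    print(round(relative_difference(32.7, 27.7), 1))
    # Morbid obesity among females: 34.3% vs. 18.7% -> about 83.4%
    print(round(relative_difference(34.3, 18.7), 1))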
Keywords: diabetes, health disparities, health policy, obesity, socio-economic position

Procedia PDF Downloads 181
77 Modelling of Solidification in a Latent Thermal Energy Storage with a Finned Tube Bundle Heat Exchanger Unit

Authors: Remo Waser, Simon Maranda, Anastasia Stamatiou, Ludger J. Fischer, Joerg Worlitschek

Abstract:

In latent heat storage, a phase change material (PCM) is used to store thermal energy. The heat transfer rate during solidification is limited and is considered a key challenge in the development of latent heat storages. Thus, finned heat exchangers (HEX) are often utilized to increase the heat transfer rate of the storage system. In this study, a new modeling approach to calculating the heat transfer rate in latent thermal energy storages with complex HEX geometries is presented. This model allows for an optimization of the HEX design in terms of cost and thermal performance of the system. Modeling solidification processes requires the calculation of time-dependent heat conduction with moving boundaries. Commonly used computational fluid dynamics (CFD) methods enable the analysis of heat transfer in complex HEX geometries. If applied to the entire storage, the drawback of this approach is the high computational effort due to the small time steps and fine computational grids required for accurate solutions. An alternative way to describe the process of solidification is the so-called temperature-based approach. In order to minimize the computational effort, a quasi-stationary assumption can be applied. This approach provides highly accurate predictions for tube heat exchangers; however, it shows unsatisfactory results for more complex geometries such as finned tube heat exchangers. The presented simulation model uses a temporal and spatial discretization of the heat exchanger tube. The spatial discretization is based on the smallest possible symmetric segment of the HEX. The heat flow in each segment is calculated using the finite volume method. Since the heat transfer fluid temperature can be derived from energy conservation equations, the boundary conditions at the inner tube wall are dynamically updated for each time step and segment. The model allows a prediction of the thermal performance of latent thermal energy storage systems using complex HEX geometries with considerably low computational effort.
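
A minimal sketch of the segment-wise bookkeeping described above is given below: the tube is split into axial segments, a per-segment heat flow is evaluated, and the heat transfer fluid (HTF) temperature entering the next segment is updated from an energy balance. The per-segment heat-transfer model is the part the paper actually develops; the placeholder function, parameter values, and units below are purely illustrative.

    def segment_heat_flow(t_htf, t_pcm, ua_segment):
        """Placeholder for the per-segment finite-volume model: signed heat flow (W)
        from the HTF to the PCM (negative while the solidifying PCM releases heat)."""
        return ua_segment * (t_htf - t_pcm)

    def update_htf_along_tube(t_in, t_pcm, m_dot, cp, ua_segment, n_segments):
        """March the HTF temperature through the axial segments for one time step."""
        t_htf = t_in
        heat_flows = []
        for _ in range(n_segments):
            q = segment_heat_flow(t_htf, t_pcm, ua_segment)
            heat_flows.append(q)
            # Energy balance on the fluid: the outlet temperature of this segment
            # becomes the inlet boundary condition of the next one.
            t_htf = t_htf - q / (m_dot * cp)
        return t_htf, heat_flows

    # Example: 20 segments, HTF entering at 15 C, PCM solidifying at 58 C
    t_out, q_per_segment = update_htf_along_tube(
        t_in=15.0, t_pcm=58.0, m_dot=0.05, cp=3600.0, ua_segment=12.0, n_segments=20)
    print(round(t_out, 2), round(sum(q_per_segment), 1))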

Keywords: modelling of solidification, finned tube heat exchanger, latent thermal energy storage

Procedia PDF Downloads 240
76 Characterizing Nasal Microbiota in COVID-19 Patients: Insights from Nanopore Technology and Comparative Analysis

Authors: David Pinzauti, Simon De Jaegher, Maria D'Aguano, Manuele Biazzo

Abstract:

The COVID-19 pandemic has left an indelible mark on global health, leading to a pressing need to understand the intricate interactions between the virus and the human microbiome. This study focuses on characterizing the nasal microbiota of patients affected by COVID-19, with a specific emphasis on the comparison with unaffected individuals, to shed light on the crucial role of the microbiome in the development of this viral disease. To achieve this objective, Nanopore technology was employed to analyze the full-length bacterial 16S rRNA gene in nasal swabs collected in Malta between January 2021 and August 2022. A comprehensive dataset consisting of 268 samples (126 SARS-negative samples and 142 SARS-positive samples) was subjected to a comparative analysis using an in-house, custom pipeline. The findings from this study revealed that individuals affected by COVID-19 possess a nasal microbiota that is significantly less diverse, as evidenced by lower alpha diversity, and is characterized by distinct microbial communities compared to unaffected individuals. The beta diversity analyses were carried out at different taxonomic resolutions. At the phylum level, Bacteroidota was found to be more prevalent in SARS-negative samples, suggesting a potential decrease during the course of viral infection. At the species level, the identification of several specific biomarkers further underscores the critical role of the nasal microbiota in COVID-19 pathogenesis. Notably, species such as Finegoldia magna, Moraxella catarrhalis, and others exhibited increased relative abundance in SARS-positive samples, potentially serving as significant indicators of the disease. This study presents valuable insights into the relationship between COVID-19 and the nasal microbiota. The identification of distinct microbial communities and potential biomarkers associated with the disease offers promising avenues for further research and therapeutic interventions aimed at enhancing public health outcomes in the context of COVID-19.
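
As a rough illustration of the alpha diversity comparison described above, the sketch below computes per-sample Shannon diversity from a taxa-by-sample count table and compares the two groups with a Mann-Whitney test; the file names, the metadata column, and the test choice are assumptions for illustration and do not reproduce the study's in-house pipeline.

    import numpy as np
    import pandas as pd
    from scipy.stats import mannwhitneyu

    counts = pd.read_csv("asv_counts.csv", index_col=0)   # rows: taxa, columns: samples
    meta = pd.read_csv("metadata.csv", index_col=0)       # column "sars_status": positive/negative

    def shannon(sample_counts):
        p = sample_counts / sample_counts.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    alpha = counts.apply(shannon, axis=0)                 # one diversity value per sample
    pos = alpha[meta["sars_status"] == "positive"]
    neg = alpha[meta["sars_status"] == "negative"]
    stat, p_value = mannwhitneyu(pos, neg, alternative="two-sided")
    print(f"median alpha diversity: positive={pos.median():.2f}, "
          f"negative={neg.median():.2f}, p={p_value:.3g}")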

Keywords: COVID-19, nasal microbiota, nanopore technology, 16s rRNA gene, biomarkers

Procedia PDF Downloads 37
75 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables the securing of product quality through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of production conditions in the production of hydraulic valves within certain time periods can be identified by applying the concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected and accurate quality predictions are achieved.
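
A minimal sketch of the workflow described above, reducing the feature set and then training a boosting classifier to predict leakage, is shown below; the data file, column names, and the particular feature-selection step are illustrative assumptions, not the authors' pipeline.

    import pandas as pd
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.feature_selection import SelectFromModel
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    df = pd.read_csv("hydraulic_valves.csv")   # illustrative file name
    X = df.drop(columns=["leak"])              # gauge, mating and end-of-line features
    y = df["leak"]                             # 1 = leaking valve, 0 = tight valve

    # Feature selection via the importances of a first boosting model,
    # followed by an AdaBoost classifier trained on the reduced feature set.
    pipe = make_pipeline(
        SelectFromModel(AdaBoostClassifier(n_estimators=100, random_state=0)),
        AdaBoostClassifier(n_estimators=200, random_state=0),
    )
    scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
    print("mean ROC AUC:", round(scores.mean(), 3))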

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 136
74 Multiscale Modeling of Damage in Textile Composites

Authors: Jaan-Willem Simon, Bertram Stier, Brett Bednarcyk, Evan Pineda, Stefanie Reese

Abstract:

Textile composites, in which the reinforcing fibers are woven or braided, have become very popular in numerous applications in the aerospace, automotive, and maritime industries. These textile composites are advantageous due to their ease of manufacture, damage tolerance, and relatively low cost. However, physics-based modeling of the mechanical behavior of textile composites is challenging. Compared to their unidirectional counterparts, textile composites introduce additional geometric complexities, which cause significant local stress and strain concentrations. Since these internal concentrations are primary drivers of nonlinearity, damage, and failure within textile composites, they must be taken into account in order for the models to be predictive. The macro-scale approach to modeling textile-reinforced composites treats the whole composite as an effective, homogenized material. This approach is very computationally efficient, but it cannot be considered predictive beyond the elastic regime because the complex microstructural geometry is not considered. Further, this approach can, at best, offer a phenomenological treatment of nonlinear deformation and failure. In contrast, the meso-scale approach to modeling textile composites explicitly considers the internal geometry of the reinforcing tows, so that their interaction and the effects of their curved paths can be modeled. The tows are treated as effective (homogenized) materials, requiring the use of anisotropic material models to capture their behavior. Finally, the micro-scale approach goes one level lower, modeling the individual filaments that constitute the tows. This paper will compare meso- and micro-scale approaches to modeling the deformation, damage, and failure of textile-reinforced polymer matrix composites. For the meso-scale approach, the woven composite architecture will be modeled using the finite element method, and an anisotropic damage model for the tows will be employed to capture the local nonlinear behavior. For the micro-scale, two different models will be used, one based on the finite element method and the other on an embedded semi-analytical approach. The goal will be the comparison and evaluation of these approaches to modeling textile-reinforced composites in terms of accuracy, efficiency, and utility.

Keywords: multiscale modeling, continuum damage model, damage interaction, textile composites

Procedia PDF Downloads 323
73 Health Psychology Intervention: Identifying Early Symptoms in Neurological Disorders

Authors: Simon B. N. Thompson

Abstract:

An early indicator of neurological disease has been proposed by the expanded Thompson Cortisol Hypothesis, which suggests that yawning is linked to rises in cortisol levels. Cortisol is essential to the regulation of the immune system, and pathological yawning is a symptom of multiple sclerosis (MS). Electromyographic (EMG) activity in the jaw muscles typically rises when the muscles are moved - extended or flexed - and yawning has been shown to be highly correlated with cortisol levels in healthy people. It is likely that these elevated cortisol levels are also seen in people with MS. The possible link between EMG activity in the jaw muscles and rises in saliva cortisol levels during yawning was investigated in a randomized controlled trial of 60 volunteers aged 18-69 years who were exposed to conditions designed to elicit the yawning response. Saliva samples were collected at the start and after yawning, or at the end of the presentation of yawning-provoking stimuli in the absence of a yawn, and EMG data were additionally collected during rest and yawning phases. The Hospital Anxiety and Depression Scale, Yawning Susceptibility Scale, General Health Questionnaire, and demographic and health details were collected, and the following exclusion criteria were adopted: chronic fatigue, diabetes, fibromyalgia, heart condition, high blood pressure, hormone replacement therapy, multiple sclerosis, and stroke. Significant differences were found between the saliva cortisol samples for the yawners, t(23) = -4.263, p < 0.001, whereas the corresponding rest versus post-stimuli comparison for the non-yawners was non-significant. There were also significant differences between yawners and non-yawners for the EMG potentials, with the yawners having higher rest and post-yawning potentials. Significant evidence was found to support the Thompson Cortisol Hypothesis, suggesting that rises in cortisol levels are associated with the yawning response. Further research is underway to explore the use of cortisol as a potential diagnostic tool to assist the early diagnosis of symptoms related to neurological disorders. Bournemouth University Research & Ethics approval granted: JC28/1/13-KA6/9/13. Professional code of conduct, confidentiality, and safety issues have been addressed and approved in the Ethics submission. Trial identification number: ISRCTN61942768. http://www.controlled-trials.com/isrctn/
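
The key statistic quoted above, t(23) = -4.263 for 24 yawners, is a paired comparison of cortisol before and after yawning. The sketch below shows how such a paired t-test is computed; the simulated cortisol values are made up for illustration and are not the study's data.

    import numpy as np
    from scipy.stats import ttest_rel

    # Hypothetical saliva cortisol values (nmol/L) for 24 yawners, before and after yawning
    rng = np.random.default_rng(0)
    baseline = rng.normal(10.0, 2.0, size=24)
    post_yawn = baseline + rng.normal(1.2, 1.0, size=24)   # simulated rise after yawning

    t_stat, p_value = ttest_rel(baseline, post_yawn)
    print(f"t({len(baseline) - 1}) = {t_stat:.3f}, p = {p_value:.4f}")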

Keywords: cortisol, electromyography, neurology, yawning

Procedia PDF Downloads 558
72 A Factor-Analytical Approach on Identities in Environmentally Significant Behavior

Authors: Alina M. Udall, Judith de Groot, Simon de Jong, Avi Shankar

Abstract:

There are many ways in which environmentally significant behavior can be explained. Dominant psychological theories, namely the theory of planned behavior, the norm-activation theory, its extension the value-belief-norm theory, and the theory of habit, do not explain large parts of environmentally significant behavior. A new and rapidly growing approach is to focus on how consumers' identities predict environmentally significant behavior. Identity may be relevant because consumers have many identities that are assumed to guide their behavior; we therefore assume that many identities will guide environmentally significant behavior. A review of the literature shows that over 200 identities have been studied, making it difficult to establish the key identities for explaining environmentally significant behavior. Therefore, this paper first aims to establish the key identities previously used for explaining environmentally significant behavior. Second, the aim is to test which key identities explain environmentally significant behavior. To address these aims, an online survey study (n = 578) was conducted. First, the exploratory factor analysis reveals 15 identity factors, namely environmentally concerned identity, anti-environmental self-identity, environmental place identity, connectedness with nature identity, green space visitor identity, active ethical identity, carbon off-setter identity, thoughtful self-identity, close community identity, anti-carbon off-setter identity, environmental group member identity, national identity, identification with developed countries, cyclist identity, and thoughtful organisation identity. Furthermore, to help researchers understand and operationalize the identities, the article provides theoretical definitions for each of the identities, in line with identity theory, social identity theory, and place identity theory. Second, the hierarchical regression shows that only 10 factors significantly and uniquely explain the variance in environmentally significant behavior. In order of predictive power, these identities include environmentally concerned identity, anti-environmental self-identity, thoughtful self-identity, environmental group member identity, anti-carbon off-setter identity, carbon off-setter identity, connectedness with nature identity, national identity, and green space visitor identity. The identities explain over 60% of the variance in environmentally significant behavior, a large effect size. Based on this finding, the article presents a new theoretical framework showing the key identities explaining environmentally significant behavior, to help improve and align the field.
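
A minimal sketch of the two analysis steps described above (exploratory factor analysis of identity items, then a regression of behavior on the resulting factor scores) is shown below; the file names, column names, and the use of sklearn's unrotated FactorAnalysis are illustrative assumptions and do not reproduce the study's instrument or exact procedure.

    import pandas as pd
    import statsmodels.api as sm
    from sklearn.decomposition import FactorAnalysis

    items = pd.read_csv("identity_items.csv")       # Likert-scale identity items (illustrative)
    behavior = pd.read_csv("behavior.csv")["esb"]   # environmentally significant behavior score

    # Step 1: exploratory factor analysis of the identity items
    fa = FactorAnalysis(n_components=15, random_state=0)
    factor_scores = pd.DataFrame(
        fa.fit_transform(items),
        columns=[f"identity_factor_{i + 1}" for i in range(15)],
    )

    # Step 2: regress behavior on the factor scores
    X = sm.add_constant(factor_scores)
    model = sm.OLS(behavior, X).fit()
    print(model.summary())
    print("R-squared:", round(model.rsquared, 3))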

Keywords: environmentally significant behavior, factor analysis, place identity, social identity

Procedia PDF Downloads 419
71 Big Data for Local Decision-Making: Indicators Identified at International Conference on Urban Health 2017

Authors: Dana R. Thomson, Catherine Linard, Sabine Vanhuysse, Jessica E. Steele, Michal Shimoni, Jose Siri, Waleska Caiaffa, Megumi Rosenberg, Eleonore Wolff, Tais Grippa, Stefanos Georganos, Helen Elsey

Abstract:

The Sustainable Development Goals (SDGs) and Urban Health Equity Assessment and Response Tool (Urban HEART) identify dozens of key indicators to help local decision-makers prioritize and track inequalities in health outcomes. However, presentations and discussions at the International Conference on Urban Health (ICUH) 2017 suggested that additional indicators are needed to make decisions and policies. A local decision-maker may realize that malaria or road accidents are a top priority. However, s/he needs additional health determinant indicators, for example about standing water or traffic, to address the priority and reduce inequalities. Health determinants reflect the physical and social environments that influence health outcomes often at community- and societal-levels and include such indicators as access to quality health facilities, access to safe parks, traffic density, location of slum areas, air pollution, social exclusion, and social networks. Indicator identification and disaggregation are necessarily constrained by available datasets – typically collected about households and individuals in surveys, censuses, and administrative records. Continued advancements in earth observation, data storage, computing and mobile technologies mean that new sources of health determinants indicators derived from 'big data' are becoming available at fine geographic scale. Big data includes high-resolution satellite imagery and aggregated, anonymized mobile phone data. While big data are themselves not representative of the population (e.g., satellite images depict the physical environment), they can provide information about population density, wealth, mobility, and social environments with tremendous detail and accuracy when combined with population-representative survey, census, administrative and health system data. The aim of this paper is to (1) flag to data scientists important indicators needed by health decision-makers at the city and sub-city scale - ideally free and publicly available, and (2) summarize for local decision-makers new datasets that can be generated from big data, with layperson descriptions of difficulties in generating them. We include SDGs and Urban HEART indicators, as well as indicators mentioned by decision-makers attending ICUH 2017.

Keywords: health determinant, health outcome, mobile phone, remote sensing, satellite imagery, SDG, urban HEART

Procedia PDF Downloads 179
70 The Gut Microbiome in Cirrhosis and Hepatocellular Carcinoma: Characterization of Disease-Related Microbial Signature and the Possible Impact of Life Style and Nutrition

Authors: Lena Lapidot, Amir Amnon, Rita Nosenko, Veitsman Ella, Cohen-Ezra Oranit, Davidov Yana, Segev Shlomo, Koren Omry, Safran Michal, Ben-Ari Ziv

Abstract:

Introduction: Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related mortality worldwide. Liver cirrhosis is the main predisposing risk factor for the development of HCC. The factor(s) influencing disease progression from cirrhosis to HCC remain unknown. The gut microbiota has recently emerged as a major player in different liver diseases; however, its association with HCC is still poorly understood. Moreover, there might be an important association between the gut microbiota, nutrition, lifestyle, and the progression of cirrhosis and HCC. The aim of our study was to characterize the gut microbial signature, in association with lifestyle and nutrition, of patients with cirrhosis, HCC-cirrhosis, and healthy controls. Design: Stool samples were collected from 95 individuals (30 patients with HCC, 38 patients with cirrhosis, and 27 age-, gender-, and BMI-matched healthy volunteers). All participants answered lifestyle and food frequency questionnaires. 16S rRNA sequencing of fecal DNA was performed (MiSeq Illumina). Results: There was a significant decrease in alpha diversity in patients with cirrhosis (q-value=0.033) and in patients with HCC-cirrhosis (q-value=0.032) compared to healthy controls. The microbiota of patients with HCC-cirrhosis, compared to patients with cirrhosis, was characterized by a significant overrepresentation of Clostridium (p-value=0.024) and CF231 (p-value=0.010) and lower abundance of Alphaproteobacteria (p-value=0.039) and Verrucomicrobia (p-value=0.036) at several taxonomic levels: Verrucomicrobiae, Verrucomicrobiales, Verrucomicrobiaceae, and the genus Akkermansia (p-value=0.039). Furthermore, we performed an analysis of predicted metabolic pathways (KEGG level 2) that showed a significant decrease in the diversity of metabolic pathways in patients with HCC-cirrhosis (q-value=0.015) compared to controls, one of which was amino acid metabolism. Investigating the lifestyle and nutrition habits of patients with HCC-cirrhosis, we found correlations between intake of artificial sweeteners and Verrucomicrobia (q-value=0.12), high sugar intake and Synergistetes (q-value=0.021), and high BMI and the pathogen Campylobacter (q-value=0.066). Furthermore, overweight in patients with HCC-cirrhosis modified bacterial diversity (q-value=0.023) and composition (q-value=0.033). Conclusions: To the best of our knowledge, we present the first report of the gut microbial composition in patients with HCC-cirrhosis, compared with cirrhotic patients and healthy controls. We have demonstrated that there are significant differences in the gut microbiome of patients with HCC-cirrhosis compared to cirrhotic patients and healthy controls. Our findings are all the more notable because the significantly increased bacteria Clostridium and CF231 in HCC-cirrhosis were not influenced by diet and lifestyle, implying that this change is due to the development of HCC. Further studies are needed to confirm these findings and assess causality.

Keywords: Cirrhosis, Hepatocellular carcinoma, life style, liver disease, microbiome, nutrition

Procedia PDF Downloads 93
69 Attitude to the Types of Organizational Change

Authors: O. Y. Yurieva, O. V. Yurieva, O. V. Kiselkina, A. V. Kamaseva

Abstract:

Since the early 2000s, there have been innovative changes in the civil service in Russia due to administrative reform. The prospects of civil service reform include a fundamental change in the personnel component, raising the level of professionalism of officials, and increasing their capacity for self-organization and self-regulation. In order to achieve this, the civil service must be able to change continuously. Organizational change has long been a subject of scientific inquiry; research in this field covers the methodological aspects of implementing change, the specifics of change in different types of organizations (business, government, and so on), and the design of change in organizations, including change based on organizational culture. Organizational change in the civil service, however, is among the least studied areas, and research on the problems of its transformation has been carried out only in fragments. According to Herbert Simon's theory of resistance, the root of opposition to and rejection of change lies in the person, who will resist any change if it threatens to undermine his or her degree of satisfaction as a member of the organization (regardless of the reasons for the change). Thus, the condition for successful adaptation to change in an organization is the ability of its staff to accept innovation. Within this problem, the study sought to identify the innovativeness of civil servants and to determine their readiness to develop proposals for the implementation of organizational change in the public service. To identify attitudes toward organizational change, a case study was carried out using I. Motovilina's 'Attitudes to Organizational Change' method, which makes it possible to predict the type of resistance to change and to reveal contradictions and hidden results. The advantages of Motovilina's method are its brevity, its simplicity, the analysis of the responses to each question, and the use of 'overlapping' questions on potentially conflicting factors. Based on the study conducted by the authors, it was found that respondents have a more positive attitude toward local changes than toward those that actually take place, such as 'an increase in opportunities for professional growth', 'an increase in the requirements for the level of professionalism', and 'the emergence of possible manifestations of initiative from below'. The diagnostics of attitudes to organizational change in the public service carried out by the authors revealed specific problem areas, rooted in personnel's insufficient understanding of the importance of innovation amid the bureaucratization of innovation in public service organizations.

Keywords: innovative changes, self-organization, self-regulation, civil service

Procedia PDF Downloads 425
68 Analysis of the Brazilian Trade Balance in Relation to Mercosur: A Comparison between the Period 1989-1994 and 1994-2012

Authors: Luciana Aparecida Bastos, Tatiana Diair L. F. Rosa, Jesus Creapldi

Abstract:

The idea of Latin American integration grew out of the ideals of Simón Bolívar, who in 1824 called the Ibero-American nations to the Amphictyonic Congress of Panama, held on June 22, 1826, where he defended the importance of Latin American unity. However, this congress proved frustrating and Bolívar's idea went no further. It was only after Europe started its own integration process, driven by the end of World War II, that the subject re-emerged in Latin America. Thus, in 1960, LAFTA, the Latin American Free Trade Association, was created in Latin America, encouraged by the European integration process started in 1957, which built on the excellent results of the ECSC (European Coal and Steel Community), itself an outgrowth of the 1948 BENELUX customs union between Belgium, the Netherlands, and Luxembourg. In 1980, LAFTA was replaced by LAIA, the Latin American Integration Association, both with the same goal: to integrate Latin America, its economy, and its trade. Most researchers of this period agree that the regional market would be expanded through integration. The creation of one or more economic blocs in the region was expected to unite the Latin American countries through a fusion of common interests and their geographical proximity, allowing them to develop common projects to promote mutual growth and economic development, tariff reductions, and increased trade, among many other jointly defined goals. Thus, taking into account Mercosur, the main Latin American bloc, created in 1994, the aim of this paper is to briefly analyze the trade balance performance of Brazil (the largest economy of the bloc) within Mercosur in two periods: 1989-1994 and 1994-2012. These periods were chosen in order to compare the years before and after Brazil's integration into Mercosur. The methodologies used were a literature review and descriptive statistics. The results showed that after Brazil's integration into Mercosur, exports and imports grew within the bloc and the country became the leading importer from the other Mercosur economies; that is, after integration, Brazil was largely responsible for promoting the expansion of regional trade through the import of products from the other members of the bloc.

Keywords: Brazil, mercosur, integration, trade balance, comparison

Procedia PDF Downloads 296
67 Nudging the Criminal Justice System into Listening to Crime Victims in Plea Agreements

Authors: Dana Pugach, Michal Tamir

Abstract:

Most criminal cases end with a plea agreement, an issue whose many aspects have been discussed extensively in the legal literature. One important feature, however, has gained little notice: crime victims' place in plea agreements following the federal Crime Victims' Rights Act of 2004. This law has provided victims some meaningful and potentially revolutionary rights, including the right to be heard in the proceeding and a right to appeal against a decision made while ignoring the victim's rights. While the victims' rights literature has always emphasized the importance of such a right, references to this provision in the general literature about plea agreements are sparse, if they exist at all. Furthermore, only a few cases mention this right. This article aims to bridge these two bodies of legal thinking, the vast literature concerning plea agreements and victims' rights research, by using behavioral economics. The article will, firstly, trace the possible structural reasons for the failure of this right to materialize. The relevant incentives of all actors involved will be identified, as well as the inherent consequential processes that lead to the malfunction of victims' rights. Secondly, the article will use nudge theory in order to suggest solutions that will enhance incentives for the repeat players in the system (prosecution, judges, defense attorneys) and lead to the strengthening of the weaker group's interests, those of the crime victims. The behavioral psychology literature recognizes that the framework in which an individual confronts a decision can significantly influence that decision. Richard Thaler and Cass Sunstein developed the idea of 'choice architecture', 'the context in which people make decisions', which can be manipulated to make particular decisions more likely. Choice architectures can be changed by adjusting 'nudges', influential factors that help shape human behavior without negating free choice. Nudges require decision makers to make choices instead of providing a familiar default option. In accordance with this theory, we suggest a rule whereby a judge should inquire into the victim's view prior to accepting the plea. This suggestion leaves the judge's discretion intact while at the same time nudging her not to go directly to the default decision, i.e., automatically accepting the plea. Creating nudges that force actors to make choices is particularly significant when an actor intends to deviate from routine behaviors but experiences significant time constraints, as in the case of judges and plea bargains. The article finally recognizes some far-reaching possible results of the suggestion. These include meaningful changes to the earlier stages of the criminal process, even before reaching court, in line with the current criticism of the plea agreement machinery.

Keywords: plea agreements, victims' rights, nudge theory, criminal justice

Procedia PDF Downloads 300
66 Carbonyl Iron Particles Modified with Pyrrole-Based Polymer and Electric and Magnetic Performance of Their Composites

Authors: Miroslav Mrlik, Marketa Ilcikova, Martin Cvek, Josef Osicka, Michal Sedlacik, Vladimir Pavlinek, Jaroslav Mosnacek

Abstract:

Magnetorheological elastomers (MREs) are a unique type of material consisting of two components, a magnetic filler and an elastomeric matrix. Their properties can be tailored upon application of an external magnetic field. The change in the viscoelastic properties (viscoelastic moduli, complex viscosity) is influenced by two crucial factors. The first is the magnetic performance of the particles, and the second is the off-state stiffness of the elastomeric matrix. The former factor strongly depends on the intended application; however, the general rule is that higher magnetic performance of the particles provides higher MR performance of the MRE. Since magnetic particles possess low stability against temperature and acidic environments, several methods to overcome these drawbacks have been developed. In most cases, the preparation of core-shell structures has been employed as a suitable method for protecting the magnetic particles against thermal and chemical oxidation. However, if the shell material is not a single-layer substance but a polymer, the magnetic performance is significantly suppressed, because with the in situ polymerization technique it is very difficult to control the polymerization rate and the polymer shell becomes too thick. The second factor is the off-state stiffness of the elastomeric matrix. Since the MR effect is calculated as the relative value of the elastic modulus upon magnetic field application divided by the elastic modulus in the absence of the external field, tuneability of the cross-linking reaction is also highly desirable. Therefore, this study focuses on the controllable modification of magnetic particles using a novel monomeric system based on 2-(1H-pyrrol-1-yl)ethyl methacrylate. In this case, short polymer chains of different chain lengths and low polydispersity index will be prepared, and thus tailorable stability properties can be achieved. Since the relatively thin polymer chains will be grafted onto the surface of the magnetic particles, their magnetic performance will be affected only slightly. Furthermore, the cross-linking density will also be affected, due to the presence of the short polymer chains. From an application point of view, such MREs can be utilized in magneto-resistors, piezoresistors, or pressure sensors, especially when a conducting shell is created on the magnetic particles. Therefore, the selection of the pyrrole-based monomer is crucial, and a controllably thin layer of conducting polymer can be prepared. Finally, such composite particles, consisting of a magnetic core and a conducting shell and dispersed in an elastomeric matrix, can also find use in the shielding of electromagnetic waves.
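
The MR effect mentioned above (field-on elastic modulus relative to the field-off modulus) can be written compactly as below; the symbols G'_B and G'_0 are our own shorthand for the storage modulus with and without the applied field, and the relative-increase form on the right is a common alternative expression, not a formula quoted from the paper.

    e_{\mathrm{MR}} = \frac{G'_{B}}{G'_{0}},
    \qquad
    \Delta e_{\mathrm{MR}} = \frac{G'_{B} - G'_{0}}{G'_{0}}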

Keywords: atom transfer radical polymerization, core-shell, particle modification, electromagnetic waves shielding

Procedia PDF Downloads 181
65 Acetic Acid Adsorption and Decomposition on Pt(111): Comparisons to Ni(111)

Authors: Lotanna Ezeonu, Jason P. Robbins, Ziyu Tang, Xiaofang Yang, Bruce E. Koel, Simon G. Podkolzin

Abstract:

The interaction of organic molecules with metal surfaces is of interest in numerous technological applications, such as catalysis, bone replacement, and biosensors. Acetic acid is one of the main products of bio-oils produced from the pyrolysis of hemicellulosic feedstocks. However, the high oxygen content of these bio-oils makes them unsuitable for use as fuels. Hydrodeoxygenation is a proven technique for the catalytic deoxygenation of bio-oils. An understanding of the energetics and control of the bond-breaking sequences of biomass-derived oxygenates on metal surfaces will enable a guided optimization of existing catalysts and the development of more active and selective processes for biomass transformations to fuels. Such investigations have been carried out with the aid of ultrahigh vacuum and its concomitant techniques. The high catalytic activity of platinum in biomass-derived oxygenate transformations has sparked a lot of interest. We herein exploit infrared reflection absorption spectroscopy (IRAS), temperature-programmed desorption (TPD), and density functional theory (DFT) to study the adsorption and decomposition of acetic acid on a Pt(111) surface, which is then compared with Ni(111), a model non-noble metal. We found that acetic acid adsorbs molecularly on the Pt(111) surface at 90 K, interacting through the lone pair of electrons of one oxygen atom. At 140 K, the molecular form is still predominant, with some dissociative adsorption (in the form of acetate and hydrogen). Annealing to 193 K led to complete dehydrogenation of the molecular acetic acid species, leaving adsorbed acetate. At 440 K, decomposition of the acetate species occurs via decarbonylation and decarboxylation, as evidenced by desorption peaks for H₂, CO, CO₂, and CHₓ fragments (x = 1, 2) in the TPD. The assignments of the experimental IR peaks were made by visualization of the DFT-calculated vibrational modes. The results showed that acetate adsorbs in a bridged bidentate (μ²η²(O,O)) configuration. The coexistence of linear and bridge-bonded CO was also predicted by the DFT results. A similar molecular acid adsorption energy was predicted in the case of Ni(111), whereas a significant difference was found for acetate adsorption.

Keywords: acetic acid, platinum, nickel, infrared reflection absorption spectroscopy, temperature-programmed desorption, density functional theory

Procedia PDF Downloads 77
64 Comparison of Quality of Life One Year after Bariatric Intervention: Systematic Review of the Literature with Bayesian Network Meta-Analysis

Authors: Piotr Tylec, Alicja Dudek, Grzegorz Torbicz, Magdalena Mizera, Natalia Gajewska, Michael Su, Tanawat Vongsurbchart, Tomasz Stefura, Magdalena Pisarska, Mateusz Rubinkiewicz, Piotr Malczak, Piotr Major, Michal Pedziwiatr

Abstract:

Introduction: Quality of life after bariatric surgery is an important factor when evaluating the final result of treatment. Considering the many surgical options, we aimed to compare the available methods globally in terms of quality of life following surgery. The aim of the study was to compare quality of life one year after bariatric intervention using network meta-analysis methods. Material and Methods: We performed a systematic review according to the PRISMA guidelines with a Bayesian network meta-analysis. The inclusion criteria were: studies comparing at least two methods of weight loss treatment, of which at least one was surgical, with assessment of quality of life one year after surgery by validated questionnaires. The primary outcome was quality of life one year after the bariatric procedure. The following aspects of quality of life were analyzed: physical, emotional, general health, vitality, role physical, social, mental, and bodily pain. All questionnaires were standardized and pooled to a single scale. Lifestyle intervention was used as the reference. Results: An initial reference search yielded 5636 articles; 18 studies were evaluated. In the comparison of the total quality of life score, we observed that laparoscopic sleeve gastrectomy (LSG) (median (M): 3.606, Credible Interval 97.5% (CrI): 1.039; 6.191), laparoscopic Roux-en-Y gastric bypass (LRYGB) (M: 4.973, CrI: 2.627; 7.317), and open Roux-en-Y gastric bypass (RYGB) (M: 9.735, CrI: 6.708; 12.760) had better results than the other bariatric interventions relative to lifestyle intervention. In the analysis of the physical aspects of quality of life, we noted better results than the control intervention for LSG (M: 3.348, CrI: 0.548; 6.147) and LRYGB (M: 5.070, CrI: 2.896; 7.208), and the worst results for open RYGB (M: -9.212, CrI: -11.610; -6.844). Analyzing the emotional aspects, we found better results than the control intervention for LSG, LRYGB, open RYGB, and laparoscopic gastric plication. In general health, better results were observed for LSG (M: 9.144, CrI: 4.704; 13.470), LRYGB (M: 6.451, CrI: 10.240; 13.830), and single-anastomosis gastric bypass (M: 8.671, CrI: 1.986; 15.310), and the worst results for open RYGB (M: -4.048, CrI: -7.984; -0.305). In the social and vitality aspects of quality of life, better results than the control intervention were observed for LSG and LRYGB. We did not find any differences between bariatric interventions in the role-physical, mental, and bodily pain aspects of quality of life. Conclusion: The network meta-analysis revealed that the best total quality of life scores one year after bariatric intervention were achieved after LSG, LRYGB, and open RYGB. In the physical and general health aspects, the worst quality of life was observed after the open RYGB procedure. The other interventions did not significantly affect quality of life after one year compared to dietary intervention.
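
The pooling step mentioned above ("all questionnaires were standardized and pooled to a single scale") can be illustrated with the simple rescaling below, which maps each instrument's score range onto a common 0-100 scale before the network meta-analysis; the function and the example instrument ranges are illustrative, not the authors' exact transformation.

    def rescale_to_100(score, scale_min, scale_max):
        """Map a questionnaire score onto a common 0-100 scale (higher = better quality of life)."""
        return 100 * (score - scale_min) / (scale_max - scale_min)

    # Illustrative examples with hypothetical instrument ranges
    print(round(rescale_to_100(90, 0, 126), 1))   # e.g. a 0-126 total score
    print(round(rescale_to_100(4.2, 1, 6), 1))    # e.g. a 1-6 mean item score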

Keywords: bariatric surgery, network meta-analysis, quality of life, one year follow-up

Procedia PDF Downloads 121
63 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the great potential for reducing necessary quality control can be exploited via data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is made more difficult by this data availability. The implementation of a machine learning application can be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In this work, the initial phase of CRISP-DM, business understanding, is critically examined for the use case at Bosch Rexroth with regard to regression versus classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods are applied for leakage volume flow regression and for classification of the inspection decision. Impressively, classification is clearly superior to regression and achieves promising accuracies.
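
A minimal sketch of the business-understanding question above, framing the same leakage measurement once as a regression target and once as a thresholded pass/fail classification target, is given below; the data file, column names, leakage threshold, and choice of gradient boosting models are hypothetical assumptions, not the authors' setup.

    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("valve_test_steps.csv")   # illustrative file name
    X = df.drop(columns=["leakage_flow"])
    y_reg = df["leakage_flow"]                 # measured leakage volume flow
    y_cls = (y_reg > 0.5).astype(int)          # hypothetical inspection threshold

    r2 = cross_val_score(GradientBoostingRegressor(), X, y_reg, cv=5, scoring="r2")
    auc = cross_val_score(GradientBoostingClassifier(), X, y_cls, cv=5, scoring="roc_auc")
    print("regression R^2:", round(r2.mean(), 3))
    print("classification ROC AUC:", round(auc.mean(), 3))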

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 116
62 Public Procurement and Innovation: A Municipal Approach

Authors: M. Moso-Diez, J. L. Moragues-Oregi, K. Simon-Elorz

Abstract:

Innovation procurement, which is designed to steer the development of solutions towards concrete public sector needs and to drive innovation from the demand side (in public services as well as in market opportunities for companies), is emerging horizontally as a new policy instrument. In 2014, the new EU public procurement directives 2014/24/EC and 2014/25/EC reinforced the support for Public Procurement for Innovation, dedicating funding instruments that can be used across all areas supported by Horizon 2020 and targeting potential buyers of innovative solutions: groups of public procurers with similar needs. Under this programme, new policy adopters and networks are emerging, aiming to embed innovation criteria into new procurement processes. As these initiatives are still in progress, related research is scarce. We argue that Innovation Public Procurement can emerge as an innovative policy instrument for public procurement in different policy domains, in spite of existing institutional and cultural barriers (legal guarantees versus innovation). The presentation combines insights from public procurement and supply chain management in a sustainability and innovation policy arena, as a means of providing an understanding of: (1) the circumstances that emerge; (2) the relationship between public and private actors; and (3) the emerging capacities in the definition of the agenda. The policy adopters are the contracting authorities, mainly at the municipal level, where they interact with the supply chain, interconnecting sustainability and climate measures with other policy priorities such as innovation and urban planning, and doing so through the Competitive Dialogue procedure. We found that geography and territory affect both the level of the municipal budget (due to municipal income per capita) and its institutional competencies (due to demographic reasons). In spite of the relevance of institutional determinants for public procurement, other factors, such as human factors and both public policy and private intervention, play an important role. The experience examined is a 'city project' (Bilbao) in the field of brownfield decontamination. Brownfield sites typically refer to abandoned or underused industrial and commercial properties, such as old process plants, mining sites, and landfills, that are available but contain low levels of environmental contaminants that may complicate reuse or redevelopment of the land. This article concludes that Innovation Public Procurement in sustainability and climate issues should be further developed both as a policy instrument and as a policy research line that could enable further relevant changes in public procurement as well as in climate innovation.

Keywords: innovation, city projects, public policy, public procurement

Procedia PDF Downloads 281
61 Reducing Flood Risk through Value Capture and Risk Communication: A Case Study in Cocody-Abidjan

Authors: Dedjo Yao Simon, Takahiro Saito, Norikazu Inuzuka, Ikuo Sugiyama

Abstract:

Abidjan city (Republic of Ivory Coast) is an emerging megacity and an urban coastal area where the number of reported floods is increasing rapidly due to climate change and unplanned urbanization. However, comprehensive disaster mitigation plans, policies, and financial resources are still lacking, and the population is unaware of the extent and location of the flood zones, leaving them unprepared to mitigate the damage. Considering the existing conditions, this paper discusses an approach for flood risk reduction in Cocody Commune through a value capture strategy and flood risk communication. Using geospatial techniques and hydrological simulation, we start our study by delineating flood zones and depths under several return periods in the study area. A field survey is then conducted through a questionnaire in order to validate the flood maps, to estimate the flood risk, and to sample residents’ opinions on how the disclosure of flood risk information could affect the values of property located inside and outside the flood zones. The results indicate that the study area is highly vulnerable to floods with a return period of five years or more, which can cause serious harm to human lives and property, as demonstrated by the extent of the 5-year flood of 2014. It is also revealed that there is a high probability that the values of property located within flood zones could decline, while the values of surrounding property in the safe area could increase, once risk information disclosure commences. However, in order to raise public awareness of flood disaster and to prevent future housing promotion in prospective high-risk areas, flood risk information should be disseminated through the establishment of an early warning system. In order to reduce the effect of risk information disclosure and to protect the values of property within the high-risk zone, we propose that property tax increments in flood-free zones should be captured and utilized for infrastructure development and to maintain the early warning system that will benefit people living in flood-prone areas. Through this case study, it is shown that the combination of a value capture strategy and risk communication could be an effective tool to educate citizens and to invest in flood risk reduction in emerging countries.
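As a rough, hypothetical illustration of the value capture mechanism proposed above (all figures below are invented and are not survey results from Cocody), the property tax increment arising in flood-free zones after risk disclosure could be computed and earmarked for the early warning system as follows:

```python
# Hypothetical value capture arithmetic (assumption: every figure is invented for
# illustration; the tax rate, property values, and uplift are not study results).
tax_rate = 0.015                      # assumed annual property tax rate
safe_zone_value_before = 500_000_000  # assumed aggregate property value in flood-free zones
uplift_after_disclosure = 0.06        # assumed value increase once flood maps are published

value_after = safe_zone_value_before * (1 + uplift_after_disclosure)
tax_before = tax_rate * safe_zone_value_before
tax_after = tax_rate * value_after
captured_increment = tax_after - tax_before  # increment earmarked for the early warning system

print(f"Annual captured increment: {captured_increment:,.0f} (same currency units as the inputs)")
```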

Keywords: Cocody-Abidjan, flood, geospatial techniques, risk communication, value capture

Procedia PDF Downloads 241
60 In Search of Innovation: Exploring the Dynamics of Innovation

Authors: Michal Lysek, Mike Danilovic, Jasmine Lihua Liu

Abstract:

HMS Industrial Networks AB has been recognized as one of the most innovative companies in the industrial communication industry worldwide. The creation of their Anybus innovation during the 1990s contributed considerably to the company’s success. From inception, HMS’ employees were innovating for the purpose of creating new business (the creation phase). After the Anybus innovation, they began the process of internationalization (the commercialization phase), which in turn led them to concentrate on cost reduction, product quality, delivery precision, operational efficiency, and increasing growth (the growth phase). As a result of this transformation, performing new radical innovations has become more complicated. The purpose of our research was to explore the dynamics of innovation at HMS from the aspect of key actors, activities, and events over the three phases, in order to understand what led to the creation of their Anybus innovation, and why it has become increasingly challenging for HMS to create new radical innovations for the future. Our research methodology was based on a longitudinal, retrospective study from the inception of HMS in 1988 to 2014, a single case study inspired by the grounded theory approach. We conducted 47 interviews and collected 1,024 historical documents for our research. Our analysis has revealed that HMS’ success in creating the Anybus, and in developing a successful business around the innovation, was based on three main capabilities: cultivating customer relations on different managerial and organizational levels, inspiring business relations, and balancing complementary human assets for the purpose of business creation. The success of HMS has turned the management’s attention away from the past activities of key actors, from their behavior, and from how they influenced and stimulated the creation of radical innovations. Nowadays, they are rhetorically focusing on creativity and innovation, while their real actions put emphasis on growth, cost reduction, product quality, delivery precision, operational efficiency, and moneymaking. In the process of becoming an international company, HMS gradually refocused. In so doing they became profitable and successful, but they also forgot what made them innovative in the first place. Fortunately, HMS’ management has come to realize that this is the case, and they are now in search of recapturing innovation once again. Our analysis indicates that HMS’ management is facing several barriers to innovation related to path dependency and other lock-in phenomena. HMS’ management has been captured, trapped in their mindset and actions, by the success of the past. But now their future has to be secured, and they have come to realize that moneymaking is not everything. In recent years, HMS’ management have begun to search for innovation once more in order to recapture their past capabilities for creating radical innovations, to unlock their managerial perceptions of customer needs and their counter-innovation driven activities and events, to utilize the full potential of their employees, and to capture the innovation opportunity for the future.

Keywords: barriers to innovation, dynamics of innovation, in search of excellence and innovation, radical innovation

Procedia PDF Downloads 351
59 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates

Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe

Abstract:

Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with the proper treatment. Drug-resistant TB occurs when the bacteria become resistant to the drugs that are used to treat TB. Current strategies to identify drug-resistant TB bacteria are laboratory-based, and it takes a long time to identify the drug-resistant bacteria and treat the patient accordingly. Machine learning (ML) and data science, however, can offer new approaches to the problem. In this study, we propose to develop an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study uses whole genome sequences (WGS) of TB isolates, extracted from the NCBI repository and containing samples from different countries, as training data to build the ML models. Samples from different countries were included in order to generalize over the large group of TB isolates from different regions of the world; this exposes the model to different behaviors of the TB bacteria and makes it robust. Model training considers three pieces of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and the resistance-associated gene information for the particular drug. Two major datasets have been constructed from this information: F1 and F2 are treated as two independent datasets, and the third piece of information is used as the class label for both. Five machine learning algorithms have been considered to train the models: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost. The models have been trained on the datasets F1, F2, and F1F2, which is the F1 and F2 datasets merged. Additionally, an ensemble approach has been used: the F1 and F2 datasets are each run through the Gradient Boosting algorithm, the outputs are combined into a single dataset called the F1F2 ensemble dataset, and a model is then trained on this dataset with each of the five algorithms. As the experiments show, the ensemble-approach model built on the Gradient Boosting outputs outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + Gradient Boosting model, to predict the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
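A hedged sketch of the ensemble approach described above could look like this in scikit-learn; random stand-in matrices replace the actual WGS-derived F1/F2 features, the shapes are invented, and Random Forest is used as the final learner to match the RF + Gradient Boosting combination named in the conclusion:

```python
# Sketch of the ensemble approach (assumption: random stand-in data replaces the
# real F1/F2 WGS feature matrices; dimensions and labels are invented).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 500
F1 = rng.integers(0, 2, size=(n, 300))  # variants within candidate genes (stand-in)
F2 = rng.integers(0, 2, size=(n, 60))   # predetermined resistance-associated variants (stand-in)
y = rng.integers(0, 2, size=n)          # resistance phenotype label for one drug (stand-in)

F1_tr, F1_te, F2_tr, F2_te, y_tr, y_te = train_test_split(
    F1, F2, y, test_size=0.3, random_state=0)

# Step 1: run F1 and F2 separately through Gradient Boosting
gb_f1 = GradientBoostingClassifier().fit(F1_tr, y_tr)
gb_f2 = GradientBoostingClassifier().fit(F2_tr, y_tr)

# Step 2: combine their outputs into the "F1F2 ensemble dataset"
ens_tr = np.column_stack([gb_f1.predict_proba(F1_tr)[:, 1], gb_f2.predict_proba(F2_tr)[:, 1]])
ens_te = np.column_stack([gb_f1.predict_proba(F1_te)[:, 1], gb_f2.predict_proba(F2_te)[:, 1]])

# Step 3: train the final model on the ensemble dataset (Random Forest here,
# as one of the five candidate algorithms)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(ens_tr, y_tr)
print("Ensemble accuracy on held-out isolates:", accuracy_score(y_te, rf.predict(ens_te)))
```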

Keywords: machine learning, MTB, WGS, drug resistant TB

Procedia PDF Downloads 23
58 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity

Authors: Justus Enninga

Abstract:

Enthusiasts and skeptics about economic growth do not have much in common in their preferences for institutional arrangements that solve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School’s concept of polycentricity. Growth-enthusiasts, who will be referred to as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce, and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth and what the arguments for and against growth-enhancing or degrowth policies are, both for them and for the other side. Secondly, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth-enthusiasts and growth-skeptics alike. The shorter third part reviews the literature on growth-enhancing policies and argues that large parts of it already accept that polycentric forms of governance such as markets, the rule of law, and federalism are an important part of economic growth. Part four delves into the more nuanced question of why a stagnant steady-state economy, or even an economy that de-grows, will still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach requiring direct governmental control, a contrasting bottom-up approach is advanced here. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and provides an institutionalized discovery process for new solutions to the problem of ecological collective action, no matter whether one belongs to the Simons or the Ehrlichs in a green political economy.

Keywords: degrowth, green political theory, polycentricity, institutional robustness

Procedia PDF Downloads 150
57 Developing an Intervention Program to Promote Healthy Eating in a Catering System Based on Qualitative Research Results

Authors: O. Katz-Shufan, T. Simon-Tuval, L. Sabag, L. Granek, D. R. Shahar

Abstract:

Meals provided by catering systems are a common source of workers’ nutrition and have been found to contribute high amounts of calories and fat. Thus, eating catering food daily can lead to overweight and chronic diseases. On the other hand, the institutional dining room may be an ideal environment for the implementation of intervention programs that promote healthy eating. This may improve diners’ lifestyle and reduce the prevalence of overweight, obesity, and chronic diseases. The significance of this study lies in developing an intervention program based on the diners’ dietary habits, preferences, and their attitudes towards various intervention programs. In addition, a successful catering-based intervention program may have a significant simultaneous effect on a large group of diners, leading to improved nutrition, a healthier lifestyle, and disease prevention on a large scale. In order to develop the intervention program, we conducted a qualitative study. We interviewed 13 diners who eat regularly at catering systems, using a semi-structured interview. The interviews were recorded, transcribed, and then analyzed by the thematic method, which identifies, analyzes, and reports themes within the data. The interviews revealed several major themes, including the diners’ expectation to be provided with healthy food choices; their request for the involvement of nutrition experts in planning the meals; and the diners’ feeling that there is a conflict between the sensory attractiveness of the food and its nutritional quality. In the context of catering-based intervention programs, the diners prefer scientific and clear messages focusing on labeling healthy dishes only, as opposed to labeling unhealthy dishes; they were also interested in a nutrition education program to accompany the intervention program. Based on these findings, we have developed an intervention program that includes changes in the food served, such as replacing several menu items and nutritionally improving some of the recipes, as well as environmental changes, such as changing the location of some food items presented on the buffet, placing positive nutritional labels on healthy dishes, and running an ongoing healthy nutrition campaign, all accompanied by a nutrition education program. The intervention program is currently being tested for its impact on health outcomes and its cost-effectiveness.

Keywords: catering system, food services, intervention, nutrition policy, public health, qualitative research

Procedia PDF Downloads 163
56 Bedouin Dispersion in Israel: Between Sustainable Development and Social Non-Recognition

Authors: Tamir Michal

Abstract:

The subject of Bedouin dispersion has accompanied the State of Israel from the day of its establishment. From a legal point of view, this subject has offered a launchpad for creative judicial decisions. Thus, for example, the first court decision in Israel to recognize affirmative action (Avitan) dealt with a petition submitted by a Jew appealing the refusal of the State to recognize the Petitioner’s entitlement to the long-term lease of a plot designated for Bedouins. The Supreme Court dismissed the petition, holding that there existed a public interest in assisting Bedouin to establish permanent urban settlements, an interest which justifies giving them preference by selling them plots at subsidized prices. In another case (The Forum for Coexistence in the Negev), the Supreme Court extended equitable relief for the purpose of constructing a bridge, even though the construction infringed the law, in order to allow the children of dispersed Bedouin to reach school. Against this background, the recent verdict, delivered during the Protective Edge military campaign, which dismissed a petition aimed at forcing the State to deploy Protective Structures in Bedouin villages in the Negev against the risk of being hit by missiles launched from Gaza (Abu Afash), is disappointing. Even if, in arguendo, no selective discrimination was involved in the State’s decision not to provide such protection, the decision, and its affirmation by the Court, is problematic when examined through the prism of the Theory of Recognition. The article analyses the issue using the tools of the Theory of Recognition, according to which people develop their identities through mutual relations of recognition in different fields. In the social context, the path to recognition is cognitive respect, which is provided by means of legal rights. By seeing other participants in society as bearers of rights and obligations, the individual develops an understanding of his legal condition as reflected in the attitude of others. Consequently, even if the Court’s decision may be justified on strict legal grounds, the fact that Jewish settlements were protected during the military operation, whereas Bedouin villages were not, is a setback in the struggle to make the Bedouin citizens with equal rights in Israeli society. As the Court held, ‘Beyond their protective function, the Migunit [Protective Structures] may make a moral and psychological contribution that should not be undervalued’. This contribution is one that the Bedouin did not receive in the Abu Afash verdict. The basic thesis is that the Court’s verdict analyzed above clearly demonstrates that reliance on classical liberal instruments (e.g., equality) cannot secure full appreciation of all aspects of Bedouin life, and hence can in fact prejudice them. Therefore, elements of recognition theory should be added in order to find the channel for cognitive respect, thereby advancing the Bedouins’ ability to perceive themselves as equal human beings in Israeli society.

Keywords: bedouin dispersion, cognitive respect, recognition theory, sustainable development

Procedia PDF Downloads 327
55 The Location of Park and Ride Facilities Using the Fuzzy Inference Model

Authors: Anna Lower, Michal Lower, Robert Masztalski, Agnieszka Szumilas

Abstract:

Contemporary cities are facing serious congestion and parking problems. In urban transport policy, the introduction of the park and ride system (P&R) is an increasingly popular way of limiting vehicular traffic. Determining the location of P&R facilities is a key aspect of the system. Criteria for assessing the quality of a selected location are usually formulated generally and descriptively. Research outsourced to specialists is expensive and time-consuming, and most of the attention is given to the examination of only a few selected places. Practice has shown that choosing the location of these sites intuitively, without a detailed analysis of all the circumstances, often gives negative results; the resulting facilities are then not used as expected. Location methods are also widely treated as a research topic in the scientific literature, but the mathematical models built often do not address the problem comprehensively, e.g., by assuming that the city is linear and developed along one important transport corridor. The paper presents a new method in which expert knowledge is applied to a fuzzy inference model. With such a system, even less experienced users, e.g., urban planners and officials, can benefit from it. The analysis result is obtained in a very short time, so a large number of proposed locations can also be verified quickly. The proposed method is intended for testing car park locations in a city. The paper shows selected examples of locations of P&R facilities in cities planning to introduce the P&R system. The analysis of existing facilities is also presented and confronted with the opinions of system users, with particular emphasis on unpopular locations. The research is carried out using the fuzzy inference model, which was built and described in more detail in an earlier paper by the authors. The results of the analyses are compared with P&R location studies commissioned by the city and with the opinions of existing facility users expressed on social networking sites. The study of existing facilities was conducted by means of the fuzzy model, and the results are consistent with actual user feedback. The proposed method proves to be effective, yet it does not require the involvement of a large team of experts or large financial contributions for complicated research. The method also provides an opportunity to indicate alternative locations for P&R facilities. The studies performed confirm the method, which can be applied in the urban planning of P&R facility locations in relation to the accompanying functions. Although the results of the method are approximate, they are no worse than the results of analyses by employed experts. The advantage of this method is its ease of use, which simplifies professional expert analysis, and the ability to analyze a large number of alternative locations gives a broader view of the problem. It is valuable that the arduous analysis by a team of people can be replaced by the model’s calculations. According to the authors, the proposed method is also suitable for implementation on a GIS platform.
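A minimal sketch of how expert rules can be encoded in a fuzzy inference system is shown below, using the scikit-fuzzy library; the input criteria (distance to a transit stop, road congestion), membership functions, and rules are invented for illustration and are not the authors' actual model:

```python
# Sketch of a fuzzy inference system for scoring a candidate P&R location.
# Assumptions: criteria, membership functions, and rules are hypothetical.
# Requires the scikit-fuzzy package (pip install scikit-fuzzy).
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

distance = ctrl.Antecedent(np.arange(0, 1001, 1), 'distance_to_transit_m')
congestion = ctrl.Antecedent(np.arange(0, 11, 1), 'road_congestion')
suitability = ctrl.Consequent(np.arange(0, 101, 1), 'location_suitability')

# Triangular membership functions encoding the expert's linguistic terms
distance['near'] = fuzz.trimf(distance.universe, [0, 0, 400])
distance['far'] = fuzz.trimf(distance.universe, [300, 1000, 1000])
congestion['low'] = fuzz.trimf(congestion.universe, [0, 0, 5])
congestion['high'] = fuzz.trimf(congestion.universe, [4, 10, 10])
suitability['poor'] = fuzz.trimf(suitability.universe, [0, 0, 50])
suitability['good'] = fuzz.trimf(suitability.universe, [50, 100, 100])

# Expert rules: a P&R site close to transit on a congested road is promising
rules = [
    ctrl.Rule(distance['near'] & congestion['high'], suitability['good']),
    ctrl.Rule(distance['far'] | congestion['low'], suitability['poor']),
]

system = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
system.input['distance_to_transit_m'] = 250   # hypothetical candidate site
system.input['road_congestion'] = 8
system.compute()
print("Suitability score:", round(system.output['location_suitability'], 1))
```

In this scheme, each candidate location can be scored in milliseconds, which is what allows a large number of alternative sites to be compared without commissioning a separate expert study for each one.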

Keywords: fuzzy logic inference, park and ride system, P&R facilities, P&R location

Procedia PDF Downloads 307
54 Towards Creative Movie Title Generation Using Deep Neural Models

Authors: Simon Espigolé, Igor Shalyminov, Helen Hastie

Abstract:

Deep machine learning techniques, including deep neural networks (DNN), have been used to model language and dialogue for conversational agents to perform tasks such as giving technical support, as well as for general chit-chat. They have been shown to be capable of generating long, diverse and coherent sentences in end-to-end dialogue systems and natural language generation. However, these systems tend to imitate the training data and will only generate the concepts and language within the scope of what they have been trained on. This work explores how deep neural networks can be used in a task that would normally require human creativity, whereby the human would read the movie description and/or watch the movie and come up with a compelling, interesting movie title. This task differs from simple summarization in that the movie title may not necessarily be derivable from the content or semantics of the movie description. Here, we train a type of DNN called a sequence-to-sequence model (seq2seq) that takes as input a short textual movie description and some information on, e.g., the genre of the movie. It then learns to output a movie title. The idea is that the DNN will learn certain techniques and approaches that the human movie titler may deploy that may not be immediately obvious to the human eye. To give an example of a generated movie title, for the movie synopsis ‘A hitman concludes his legacy with one more job, only to discover he may be the one getting hit.’, the original, true title is ‘The Driver’ and the one generated by the model is ‘The Masquerade’. A human evaluation was conducted in which the DNN output was compared to the true human-generated title, as well as to a number of baselines, on three 5-point Likert scales: ‘creativity’, ‘naturalness’ and ‘suitability’. Subjects were also asked which of the two systems they preferred. The scores of the DNN model were comparable to the scores of the human-generated movie title, with means m=3.11 and m=3.12, respectively. There is room for improvement in these models, as they were rated significantly less ‘natural’ and ‘suitable’ when compared to the human title. In addition, the human-generated title was preferred overall 58% of the time when pitted against the DNN model. These results, however, are encouraging given the comparison with a highly-considered, well-crafted human-generated movie title. Movie titles go through a rigorous process of assessment by experts and focus groups, who have watched the movie. This process is in place due to the large amount of money at stake and the importance of creating an effective title that captures the audience’s attention. Our work shows progress towards automating this process, which in turn may lead to a better understanding of creativity itself.
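To illustrate the seq2seq setup described above, a hedged, toy-scale sketch in PyTorch is given below; the vocabulary, dimensions, and random stand-in data are invented, and the authors' actual architecture, corpus, and hyperparameters are not specified here:

```python
# Minimal seq2seq sketch (assumption: toy vocabulary and random stand-in data;
# not the authors' model or training corpus).
import torch
import torch.nn as nn

PAD, SOS, EOS = 0, 1, 2
vocab_size, emb_dim, hid_dim = 50, 32, 64   # invented sizes for illustration

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=PAD)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt):
        # Encode the movie description (genre information could be prepended as extra tokens)
        _, hidden = self.encoder(self.emb(src))
        # Decode the title conditioned on the encoder's final state (teacher forcing)
        dec_out, _ = self.decoder(self.emb(tgt), hidden)
        return self.out(dec_out)

model = Seq2Seq()
criterion = nn.CrossEntropyLoss(ignore_index=PAD)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: random token ids stand in for tokenized (description, title) pairs
src = torch.randint(3, vocab_size, (4, 20))    # 4 descriptions of 20 tokens
tgt_in = torch.randint(3, vocab_size, (4, 6))  # title tokens fed to the decoder
tgt_out = torch.cat([tgt_in[:, 1:], torch.full((4, 1), EOS, dtype=torch.long)], dim=1)

logits = model(src, tgt_in)
loss = criterion(logits.reshape(-1, vocab_size), tgt_out.reshape(-1))
loss.backward()
optimizer.step()
print("toy training-step loss:", float(loss))
```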

Keywords: creativity, deep machine learning, natural language generation, movies

Procedia PDF Downloads 301
53 Evaluation of Dry Matter Yield of Panicum maximum Intercropped with Pigeonpea and Sesbania Sesban

Authors: Misheck Musokwa, Paramu Mafongoya, Simon Lorentz

Abstract:

Seasonal shortage of fodder during the dry season is a major constraint for smallholder livestock farmers in South Africa. To mitigate the shortage of fodder, legume trees can be intercropped with pastures, which can diversify the sources of feed and increase the amount of protein available to grazing animals. The objective was to evaluate the dry matter yield of Panicum maximum and land productivity under different fodder production systems during the 2016/17-2017/18 seasons at Empangeni (28.6391° S and 31.9400° E). A randomized complete block design, replicated three times, was used; the treatments were sole Panicum maximum, P. maximum + Sesbania sesban, P. maximum + pigeonpea, sole S. sesban, and sole pigeonpea. Three-month-old S. sesban seedlings were transplanted, whilst pigeonpea was direct-seeded at a spacing of 1 m x 1 m. P. maximum seeds were drilled at a rate of 7.5 kg/ha with an inter-row spacing of 0.25 m; in the intercrops, seeds were drilled between the rows of trees. Dry matter harvests were separated by six-month intervals. A 0.25 m² quadrat, randomly placed at three points in each plot, was used as the sampling area when harvesting P. maximum. There was a significant difference (P < 0.05) across the three harvests and in total dry matter. P. maximum had a higher dry matter yield compared to both intercrops at the first harvest and in total, while the second and third harvests did not differ significantly from the pigeonpea intercrop. The results for the three harvests were in this order: P. maximum (541.2c, 1209.3b and 1557b kg ha⁻¹) ≥ P. maximum + pigeonpea (157.2b, 926.7b and 1129b kg ha⁻¹) > P. maximum + S. sesban (36.3a, 282a and 555a kg ha⁻¹). Total accumulated dry matter yield: P. maximum (3307c kg ha⁻¹) > P. maximum + pigeonpea (2212 kg ha⁻¹) ≥ P. maximum + S. sesban (874 kg ha⁻¹). There was also a significant difference (P < 0.05) in tree seed yield: pigeonpea (1240.3 kg ha⁻¹) ≥ pigeonpea + P. maximum (862.7 kg ha⁻¹) > S. sesban (391.9 kg ha⁻¹) ≥ S. sesban + P. maximum. The Land Equivalent Ratio (LER) was in the following order: P. maximum + pigeonpea (1.37) > P. maximum + S. sesban (0.84) > pigeonpea (0.59) ≥ S. sesban (0.57) > P. maximum (0.26). The results indicate that it is beneficial to intercrop P. maximum with pigeonpea because of the higher land productivity. Planting grass with pigeonpea was more beneficial than S. sesban with grass or sole cropping in terms of easing the shortage of arable land; P. maximum + pigeonpea saves a substantial amount of land (37%), which can subsequently be used for other crop production. Pigeonpea is recommended as an intercrop with P. maximum due to its higher LER and the combined production of livestock feed, human food, and firewood. Panicum grass is low in crude protein though high in carbohydrates, so there is a need to intercrop it with legume trees. A farmer who buys concentrates can reduce costs by combining P. maximum with pigeonpea, as this will provide a balanced diet at low cost.
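The LER quoted for the P. maximum + pigeonpea intercrop can be approximately reproduced from the yields reported above using the standard formula LER = Σ(intercrop yield of a component / sole yield of that component); a small worked check, assuming the grass dry matter and pigeonpea seed yields are the component yields used:

```python
# Worked check of the Land Equivalent Ratio (LER) for the P. maximum + pigeonpea
# intercrop, using the yields reported in the abstract (kg/ha).
# LER = (intercrop yield of A / sole yield of A) + (intercrop yield of B / sole yield of B)
grass_intercrop, grass_sole = 2212.0, 3307.0          # total P. maximum dry matter
pigeonpea_intercrop, pigeonpea_sole = 862.7, 1240.3   # pigeonpea seed yield

ler = grass_intercrop / grass_sole + pigeonpea_intercrop / pigeonpea_sole
print(f"LER = {ler:.2f}")  # ~1.36, close to the reported 1.37; values > 1 indicate a land-use advantage
```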

Keywords: fodder, livestock, productivity, smallholder farmers

Procedia PDF Downloads 123