Search results for: heterogeneous catalysts
75 Thermo-Mechanical Processing Scheme to Obtain Micro-Duplex Structure Favoring Superplasticity in an As-Cast and Homogenized Medium Alloyed Nickel Base Superalloy
Authors: K. Sahithya, I. Balasundar, Pritapant, T. Raghua
Abstract:
Ni-based superalloy with a nominal composition Ni-14% Cr-11% Co-5.8% Mo-2.4% Ti-2.4% Nb-2.8% Al-0.26% Fe-0.032% Si-0.069% C (all in wt%) is used for turbine discs in a variety of aero engines. Like any other superalloy, the primary processing of the as-cast material poses a major challenge due to its complex alloy chemistry. The challenge was circumvented by characterizing the different phases present in the material, optimizing the homogenization treatment, and identifying a suitable thermomechanical processing window using dynamic materials modeling. The as-cast material was subjected to homogenization at 1200°C for a soaking period of 8 hours and quenched using different media. Water quenching (WQ) after homogenization resulted in very fine spherical γꞌ precipitates 30-50 nm in size, whereas furnace cooling (FC) after homogenization resulted in a bimodal distribution of precipitates (primary gamma prime of size 300 nm and secondary gamma prime of size 5-10 nm). MC-type primary carbides that are stable up to the melting point of the material were found in both WQ and FC samples. The deformation behaviour of both materials below (1000-1100°C) and above (1100-1175°C) the gamma prime solvus was evaluated by subjecting the material to a series of compression tests at different constant true strain rates (0.0001/s to 1/s). A detailed TEM examination of the precipitate-dislocation interaction mechanisms revealed precipitate shearing and Orowan looping as the mechanisms governing deformation in WQ and FC, respectively. Incoherent/semi-coherent gamma prime precipitates in the FC material facilitate better workability, whereas the coherent precipitates in the WQ material contribute to its higher resistance to deformation. Both materials exhibited discontinuous dynamic recrystallization (DDRX) above the gamma prime solvus temperature. The recrystallization kinetics were slower in the WQ material.
Very fine grain boundary carbides (≤ 300 nm) retarded the recrystallization kinetics in the WQ material, whereas coarse carbides (1-5 µm) facilitated particle-stimulated nucleation in the FC material. The FC material was cogged (primary hot working) at 1120°C and 0.03/s, resulting in significant grain refinement, i.e., from 3000 μm to 100 μm. The primary processed material was subsequently subjected to intensive thermomechanical deformation by reducing the temperature by 50°C in each processing step, with intermittent heterogenization treatment at selected temperatures aimed at simultaneous coarsening of the gamma prime precipitates and refinement of the gamma matrix grains. The heterogeneous annealing treatment resulted in gamma grains of 10 μm and gamma prime precipitates of 1-2 μm. Further thermomechanical processing was carried out at 1025°C to increase the homogeneity of the obtained micro-duplex structure.
Keywords: superalloys, dynamic material modeling, nickel alloys, dynamic recrystallization, superplasticity
Procedia PDF Downloads 121
74 Bacterial Diversity in Human Intestinal Microbiota and Correlations with Nutritional Behavior, Physiology, Xenobiotics Intake and Antimicrobial Resistance in Obese, Overweight and Eutrophic Individuals
Authors: Thais O. de Paula, Marjorie R. A. Sarmiento, Francis M. Borges, Alessandra B. Ferreira-Machado, Juliana A. Resende, Dioneia E. Cesar, Vania L. Silva, Claudio G. Diniz
Abstract:
Obesity is currently a worldwide public health threat, being considered a pandemic multifactorial disease related to the human gut microbiota (GM). Added to that, the GM is considered an important reservoir of antimicrobial resistance genes (ARG), and little is known about GM and ARG in obesity, considering the altered physiology and xenobiotics intake. As regional and social behavior may play important roles in GM modulation, and as most studies are based on small sample sizes and various methodological approaches, making data comparison difficult, this study focused on the investigation of GM bacterial diversity in obese (OB), overweight (OW) and eutrophic (ET) individuals considering their nutritional, clinical and social characteristics, and on the comparative screening of ARG related to their physiology and xenobiotics intake. The microbial community was assessed by FISH, considering phylum as the taxonomic level, and by PCR-DGGE followed by dendrogram evaluation (UPGMA method) of the fecal metagenome of 72 volunteers classified according to their body mass index (BMI). Nutritional, clinical and social parameters and xenobiotics intake were recorded for correlation analysis. The fecal metagenome was also used as template for PCR targeting 59 different ARG. Overall, 62% of OB individuals were hypertensive, compared with 12% of OW and 4% of ET individuals. Most of the OB individuals were rated as low income (80%). Lower relative bacterial densities were observed in OB compared to ET for almost all studied taxa (p < 0.05), with the Firmicutes/Bacteroidetes ratio increased in the OB group. OW individuals showed bacterial densities in their GM more similar to the OB group. All participants clustered into 3 different groups based on the PCR-DGGE fingerprint patterns (C1, C2, C3), with OB mostly grouped in C1 (83.3%) and ET mostly grouped in C3 (50%). Cluster C2 appeared to be transitional. Among the 27 ARG detected, a cluster of 17 was observed in all groups, suggesting a common core.
In general, ARG were observed mostly in OB individuals, followed by OW and ET. The ratio between ARG and bacterial groups may suggest that ARG were mostly related to enterobacteria. Positive correlations were observed between ARG and BMI, calorie and xenobiotics intake (especially the use of sweeteners). As with the nutritional and clinical characteristics, our data may suggest that the GM of OW individuals behaves in a heterogeneous pattern, occasionally more similar to OB or to ET. Regardless of the regional and social behaviors of our population, the methodological approaches in this study were complementary and confirmatory. The imbalance of GM over the health-disease interface in obesity is a matter of fact, but its influence on the host's physiology is still to be clearly elucidated to help in understanding the multifactorial etiology of obesity. Although the results agree with observations that GM is altered in obesity, the altered physiology in OB individuals also seems to be associated with increased xenobiotics intake, which may push the GM towards antimicrobial resistance, as observed by the fecal metagenome and ARG screening. Support: FAPEMIG, CNPQ, CAPES, PPGCBIO/UFJF.
Keywords: antimicrobial resistance, bacterial diversity, gut microbiota, obesity
Procedia PDF Downloads 170
73 Hydrogen Production Using an Anion-Exchange Membrane Water Electrolyzer: Mathematical and Bond Graph Modeling
Authors: Hugo Daneluzzo, Christelle Rabbat, Alan Jean-Marie
Abstract:
Water electrolysis is one of the most advanced technologies for producing hydrogen and can easily be combined with electricity from different sources. Under the influence of electric current, water molecules are split into oxygen and hydrogen. The production of hydrogen by water electrolysis favors the integration of renewable energy sources into the energy mix by compensating for their intermittence: the energy produced is stored when production exceeds demand and released during off-peak production periods. Among the various electrolysis technologies, anion exchange membrane (AEM) electrolyser cells are emerging as a reliable technology for water electrolysis. Modeling and simulation are effective tools to save time, money, and effort during the optimization of operating conditions and the investigation of designs. Modeling and simulation become even more important when dealing with multiphysics dynamic systems. One such system is the AEM electrolysis cell, which involves complex physico-chemical reactions. Once developed, models may be utilized to understand the underlying mechanisms, control the systems, and detect flaws. Several modeling methods have been proposed, which can be separated into two main approaches, namely equation-based modeling and graph-based modeling. The former approach is less user-friendly and difficult to update, as it represents the system with ordinary or partial differential equations. The latter approach is more user-friendly and allows a clear representation of physical phenomena: the system is depicted by connecting subsystems, so-called blocks, through ports based on their physical interactions, making it suitable for multiphysics systems. Among the graphical modelling methods, the bond graph is receiving increasing attention for being domain-independent and for relying on the energy exchange between the components of the system.
At present, few studies have investigated the modelling of AEM systems. A mathematical model and a bond graph model were used in previous studies to model electrolysis cell performance. In this study, experimental data from the literature were simulated in OpenModelica using both bond graph and mathematical approaches. The polarization curves obtained by both approaches at different operating conditions were compared with experimental ones. Both models predicted the polarization curves satisfactorily, with error margins lower than 2% for the equation-based model and lower than 5% for the bond graph model. The activation polarization of the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) was behind the voltage loss in the AEM electrolyzer, whereas ion conduction through the membrane resulted in the ohmic loss. Therefore, highly active electro-catalysts are required for both HER and OER, while high-conductivity AEMs are needed to effectively lower the ohmic losses. The bond graph simulation of the polarisation curve at various temperatures illustrated that voltage increases with temperature, owing to the technology of the membrane. Simulating the polarisation curve allows designs to be tested virtually, reducing the cost and time involved in experimental testing and improving design optimization. Further improvements can be made by implementing the bond graph model in a real power-to-gas-to-power scenario.
Keywords: hydrogen production, anion-exchange membrane, electrolyzer, mathematical modeling, multiphysics modeling
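The voltage decomposition described in this abstract (reversible voltage plus HER/OER activation overpotentials plus ohmic loss through the membrane) can be sketched as a minimal equation-based model. All numerical values below (exchange current densities, charge-transfer coefficient, membrane resistance) are illustrative assumptions, not parameters from the study.

```python
import math

def cell_voltage(j, T=333.15,
                 E_rev=1.23,                 # reversible voltage [V] (assumed constant)
                 j0_her=1e-3, j0_oer=1e-6,   # exchange current densities [A/cm^2] (assumed)
                 alpha=0.5,                  # charge-transfer coefficient (assumed)
                 r_mem=0.2):                 # area-specific membrane resistance [ohm*cm^2] (assumed)
    """Polarization curve: V = E_rev + activation (Tafel) losses + ohmic loss."""
    F, R = 96485.0, 8.314
    b = R * T / (alpha * F)                  # Tafel slope [V]
    # Activation overpotentials of HER and OER (Tafel approximation)
    eta_act = b * (math.log(j / j0_her) + math.log(j / j0_oer))
    eta_ohm = r_mem * j                      # ohmic loss through the membrane
    return E_rev + eta_act + eta_ohm

# Voltage rises with current density, as in a measured polarization curve:
curve = [(j, round(cell_voltage(j), 3)) for j in (0.1, 0.5, 1.0)]
```

With these assumed parameters, the activation terms dominate at low current density and the ohmic term grows linearly, which is the loss structure the abstract attributes to the AEM cell.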
Procedia PDF Downloads 93
72 The Problem of the Use of Learning Analytics in Distance Higher Education: An Analytical Study of the Open and Distance University System in Mexico
Authors: Ismene Ithai Bras-Ruiz
Abstract:
Learning Analytics (LA) is employed by universities not only as a tool but as a specialized field to support students and professors. However, not all academic programs apply LA with the same goals or use the same tools. In fact, LA comprises five main fields of study (academic analytics, action research, educational data mining, recommender systems, and personalized systems). These fields can help not only to inform academic authorities about the situation of a program, but also to detect at-risk students, professors with needs, or general problems. At the highest level, Artificial Intelligence techniques are applied to support learning practices. LA has adopted different techniques: statistics, ethnography, data visualization, machine learning, natural language processing, and data mining. It is expected that each academic program decides which field it wants to utilize on the basis of its academic interests, but also of its capacities in terms of professors, administrators, systems, logistics, data analysts, and academic goals. The Open and Distance University System (SUAYED in Spanish) of the National Autonomous University of Mexico (UNAM) has been working for forty years as an alternative to traditional programs; one of its main supports has been the use of new information and communications technologies (ICT). Today, UNAM has one of the largest distance higher education networks, with twenty-six academic programs in different faculties. This means that every faculty works with heterogeneous populations and academic problems. Accordingly, every program has developed its own Learning Analytics techniques to address academic issues. In this context, an investigation was carried out to determine the state of LA application across the academic programs in the different faculties.
The premise of the study was that not all faculties have utilized advanced LA techniques, and it is probable that they do not know which field of study is closest to their program goals. Consequently, not all programs know about LA as such, but this does not mean they do not work with LA in a veiled or less explicit sense. It is very important to know the degree of knowledge about LA for two reasons: 1) it makes it possible to appreciate the work of the administration to improve the quality of teaching, and 2) it shows whether other LA techniques could be adopted. For this purpose, three instruments were designed to determine experience and knowledge of LA. These were applied to ten faculty coordinators and their personnel; thirty members were consulted (academic secretaries, systems managers or data analysts, and program coordinators). The final report showed that almost all programs work with basic statistical tools and techniques; this helps the administration only to know what is happening inside the academic program, but they are not ready to move up to the next level, that is, applying Artificial Intelligence or Recommender Systems to reach a personalized learning system. This situation is related not to knowledge of LA, but to the clarity of long-term goals.
Keywords: academic improvements, analytical techniques, learning analytics, personnel expertise
Procedia PDF Downloads 128
71 Pricing Effects on Equitable Distribution of Forest Products and Livelihood Improvement in Nepalese Community Forestry
Authors: Laxuman Thakuri
Abstract:
Despite the large number of in-depth case studies focused on policy analysis, institutional arrangements, and collective action in common property resource management, the questions of how local institutions take pricing decisions on forest products in community forest management, and what effects those decisions produce, remain largely unanswered among policy-makers and researchers alike. The study examined how local institutions take pricing decisions on forest products in the lowland community forestry of Nepal and how those decisions affect the equitable distribution of benefits and livelihood improvement, which are also objectives of Nepalese community forestry. The study assumes that forest product pricing decisions have multiple effects on equitable distribution and livelihood improvement in areas with heterogeneous socio-economic conditions. The dissertation work was carried out at four community forests of lowland Nepal characterized by high-value species, mature experience of community forest management, and a good record-keeping system for forest product production, pricing, and distribution. Questionnaire surveys, individual and group discussions, and direct field observation were applied for data collection, and Lorenz curves, the Gini coefficient, the χ²-test, and SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis were performed for data analysis and interpretation of results. The dissertation demonstrates that the low pricing strategy for high-value forest products was assumed to be crucial for increasing the access of socio-economically weak households to, and their control over, important forest products such as timber, but it proved counterproductive, as the strategy increased the access of socio-economically better-off households at a higher rate. In addition, the strategy worked against collecting a large-scale community fund and carrying out livelihood improvement activities in line with the community forestry objectives.
A crucial finding of the study is that, despite the low pricing strategy, timber alone contributed a large part of the community fund collection. The results revealed a close relation between pricing decisions and livelihood objectives. The action research results show that positive price discrimination can slightly reduce the prevailing inequality and increase the fund. However, it falls short of harnessing the full price of forest products and collecting a large-scale community fund. For broader outcomes of common property resource management in terms of resource sustainability, equity, and livelihood opportunity, the study suggests that local institutions harness the full price of resource products with respect to the local market.
Keywords: community, equitable, forest, livelihood, socioeconomic, Nepal
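The equity analysis described in this abstract rests on the Lorenz curve and Gini coefficient over household benefit shares. A minimal sketch of that computation follows; the household benefit values are hypothetical illustrations, not data from the study.

```python
def gini(benefits):
    """Gini coefficient of a list of household benefit values.

    0 means perfect equality; values approaching 1 mean one household
    captures nearly all benefits. Uses the sorted-weights formulation
    G = sum_i (2i - n - 1) * x_i / (n * sum(x)).
    """
    xs = sorted(benefits)
    n = len(xs)
    total = sum(xs)
    weighted = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

# Hypothetical distributions of timber benefits across four households:
equal = [10, 10, 10, 10]       # every household receives the same share
skewed = [1, 2, 3, 34]         # better-off households capture most benefits
# gini(equal) == 0.0; gini(skewed) == 0.625
```

Comparing the coefficient before and after a pricing change is one way to quantify the study's claim that low pricing increased the access of better-off households at a higher rate.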
Procedia PDF Downloads 537
70 Photocatalytic Properties of Pt/Er-KTaO3
Authors: Anna Krukowska, Tomasz Klimczuk, Adriana Zaleska-Medynska
Abstract:
Photoactive materials have attracted attention due to their potential application in degrading environmental pollutants to non-hazardous compounds by an eco-friendly route. Among semiconductor photocatalysts, tantalates such as potassium tantalate (KTaO3) are excellent functional photomaterials. However, tantalate-based materials are less active under visible-light irradiation; their photoactivity can be enhanced by modifying the opto-electronic properties of KTaO3 through doping with a rare earth metal (Er) and further photodeposition of noble metal nanoparticles (Pt). Inclusion of a rare earth element in the orthorhombic tantalate structure can generate one high-energy photon by absorbing two or more incident low-energy photons, converting visible and infrared light into the ultraviolet light required by KTaO3 photocatalysts. On the other hand, noble metal nanoparticles deposited on the surface of the semiconductor strongly absorb visible light due to their surface plasmon resonance, in which their conduction electrons undergo a collective oscillation induced by the electric field of the visible light. Furthermore, the high dispersion of Pt nanoparticles obtained by the photodeposition process is an additional important factor for improving the photocatalytic activity. The present work studies the photocatalytic behaviour of the prepared Er-doped KTaO3 before and after incorporation of Pt nanoparticles by photodeposition. Moreover, the research also examines correlations between photocatalytic activity and the physico-chemical properties of the obtained Pt/Er-KTaO3 samples. The Er-doped KTaO3 microcomposites were synthesized by a hydrothermal method. A photodeposition method was then used for Pt loading onto Er-KTaO3.
The structural and optical properties of the Pt/Er-KTaO3 photocatalysts were characterized using scanning electron microscopy (SEM), X-ray diffraction (XRD), the volumetric adsorption method (BET), UV-Vis absorption measurements, Raman spectroscopy and luminescence spectroscopy. The photocatalytic properties of the Pt/Er-KTaO3 microcomposites were investigated by degradation of phenol in the aqueous phase as a model pollutant under visible and ultraviolet light irradiation. The results show that all the prepared photocatalysts exhibit a low BET surface area, although doping the bare KTaO3 with the rare earth element (Er) produces a slight increase in this value. The crystalline structure of the Pt/Er-KTaO3 powders exhibited nearly identical positions for the main peak at about 22.8°, and the XRD pattern could be assigned to an orthorhombically distorted perovskite structure. The Raman spectra of the obtained semiconductors confirmed the perovskite-like structure. The optical absorption spectra of the Pt nanoparticles exhibited plasmon absorption bands with main peaks at about 216 and 264 nm. The addition of Pt nanoparticles increased photoactivity compared to Er-KTaO3 and pure KTaO3. In summary, the optical properties of KTaO3 change with Er doping and further photodeposition of Pt nanoparticles.
Keywords: heterogeneous photocatalysis, KTaO3 photocatalysts, Er3+ ion doping, Pt photodeposition
Procedia PDF Downloads 361
69 Disability in the Course of a Chronic Disease: The Example of People Living with Multiple Sclerosis in Poland
Authors: Milena Trojanowska
Abstract:
Disability is a phenomenon whose meanings and definitions have evolved over the decades. This became the trigger to start a project answering the question of what disability constitutes in the course of an incurable chronic disease. The chosen research group is people living with multiple sclerosis. The contextual phase of the research was participant observation at the Polish Multiple Sclerosis Society, the largest NGO in Poland supporting people living with MS and their relatives. The research techniques used in the project are (in order of implementation): group interviews with people living with MS and their relatives, narrative interviews, an asynchronous technique, and participant observation during events organised for people living with MS and their relatives. The researcher is currently conducting follow-up interviews, as inaccuracies in the respondents' narratives were identified during data analysis. Interviews and supplementary research techniques were used over the four years of the research, and the researcher also benefited from experience gained over 12 years of working with NGOs (diaries, notes). The research was carried out in Poland with the participation of people living in this country only. The research is based on grounded theory methodology in the constructivist perspective developed by Kathy Charmaz. The goal was to follow the idea that research must be reliable, original, and useful. The aim was to construct an interpretive theory that assumes the temporality and processuality of social life. The Atlas.ti software was used to collect the research material and analyse it.
It is a program from the CAQDAS (Computer-Assisted Qualitative Data Analysis Software) group. Several key factors influencing the construction of a disability identity by people living with multiple sclerosis were identified:
- the course of interactions with significant relatives,
- the expectation of identification with disability (expressed by close relatives),
- economic profitability (pension, allowances),
- institutional advantages (e.g. a parking card),
- independence and autonomy (equated not with physical condition, but with access to adapted infrastructure and resources supporting daily functioning),
- the way a person with MS construes the meaning of disability,
- physical and mental state,
- medical diagnosis of the illness.
In addition, it was shown that assuming the experience of disability in the course of MS is a form of cognitive reductionism leading to further phenomena, such as the expectation that the person with MS construct a social identity as a person with a disability (e.g. giving up work) and the occurrence of institutional inequalities. It can also be a determinant of the choice of a life strategy that limits social and individual functioning, even if this necessity is not dictated by the person's physical or psychological condition. The results of the research are important for the development of knowledge about the phenomenon of disability. They indicate the contextuality and complexity of the disability phenomenon, which in the light of the research is a set of different phenomena of heterogeneous nature and multifaceted causality. This knowledge can also be useful for institutions and organisations in the non-governmental sector supporting people with disabilities and people living with multiple sclerosis.
Keywords: disability, multiple sclerosis, grounded theory, Poland
Procedia PDF Downloads 108
68 Narratives of Self-Renewal: Looking for A Middle Earth In-Between Psychoanalysis and the Search for Consciousness
Authors: Marilena Fatigante
Abstract:
Contemporary psychoanalysis is increasingly acknowledging the existential demands of clients in psychotherapy. A significant aspect of the personal crises that patients face today is often rooted in the difficulty of finding meaning in their own existence, even after working through or resolving traumatic memories and experiences. Tracing back to the correspondence between Freud and Romain Rolland (1927), psychoanalysis could not ignore that investigation of the psyche also encompasses the encounter with deep, psycho-sensory experiences, which involve a sense of "being one with the external world as a whole", the well-known "oceanic feeling", as Rolland put it. Despite the recognition of Non-ordinary States of Consciousness (NSC) as catalysts for transformation in clinical practice, highlighted by neuroscience and by results from psychedelic-assisted therapies, there is little research on how psychoanalytic knowledge can be integrated with other treatment traditions. These traditions, commonly rooted in non-Western, unconventional, and non-formal psychological knowledge, emphasize the individual's innate tendency toward existential integrity and transcendence of self-boundaries. Inspired by an autobiographical account, this paper examines the narratives of 12 individuals who engaged in psychoanalytic therapy and also underwent treatment involving a non-formal helping relationship with an expert guide in consciousness, which included experiences of this nature. The guide relies on 35 years of experience in psychological, multidisciplinary studies in the Human Sciences and Art, and demonstrates knowledge of many wisdom traditions, ranging from Eastern to Western philosophy, including psychoanalysis and its development in a cultural perspective (e.g., ethnopsychiatry).
The analyses focused primarily on two dimensions that research has identified as central in assessing the degree of treatment "success" in patients' narrative accounts of their therapies: agency and coherence, defined respectively as the increase, expressed in language, of the client's perceived ability to manage his or her own challenges, and the capacity, inherent in narrative itself as a resource for meaning making (Bruner, 1990), to provide the subject with a sense of unity, endowing his or her life experience with temporal and logical sequentiality. The present study reports that, in all the participants' narratives, agency and coherence are described differently than in "common" psychotherapy narratives. Although the participants consistently identified themselves as responsible, agentic subjects, the sense of agency derived from the non-conventional guidance pathway is never reduced to a personal, individual accomplishment. Rather, the more a new, fuller sense of "Life" (more than "Self") develops out of the guidance pathway they engage in with the expert guide, the more they "surrender" their own sense of autonomy and self-containment, something which Safran (2016) also identified when discussing the sense of surrender and "grace" in psychoanalytic sessions. Secondly, the narratives of individuals engaging with the expert guide describe coherence not as repairing or enforcing continuity but as enhancing their ability to navigate dramatic discontinuities, falls, abrupt leaps, and passages marked by feelings of loss and bereavement. The paper ultimately explores whether valid criteria can be established to analyze experiences of non-conventional paths of self-evolution. These paths are not opposed or alternative to conventional ones, and should not be simplistically dismissed as exotic or magical.
Keywords: oceanic feeling, non-conventional guidance, consciousness, narratives, treatment outcomes
Procedia PDF Downloads 39
67 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands
Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé
Abstract:
The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process, since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been proved in the field. Numerical flow modeling in a vertical, variably saturated CW is here carried out by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, together with the van Genuchten-Mualem parametrization. For validation purposes, the MHFEM results were compared to those of HYDRUS (software based on a finite element discretization). As the van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation is the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads, during both saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention should also be paid to boundary condition modeling (surface ponding or evaporation) in order to handle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of storm events, we propose a simple, robust and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied.
To that end, the variational data assimilation technique is implemented by applying automatic differentiation (AD) to augment the computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code using the on-line Tapenade AD engine. Field data were collected over several months for a three-layered CW located in Strasbourg (Alsace, France) at the water's edge of the urban stream Ostwaldergraben. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis
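The inverse-modeling idea in this abstract, fitting van Genuchten shape parameters by minimizing a least-squares mismatch between measured and computed quantities, can be sketched as follows. This is a toy illustration only: the residual soil parameters, the synthetic observations, and the use of water contents instead of the study's piezometric heads are all simplifying assumptions.

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water content theta(h) for pressure head h (van Genuchten, m = 1 - 1/n)."""
    if h >= 0:
        return theta_s                       # saturated conditions
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(h)) ** n) ** m

def least_squares_objective(params, observations):
    """Sum of squared residuals between observed and computed water contents.

    Only the shape parameters (alpha, n) are identified here; the residual
    and saturated water contents are held fixed at assumed values.
    """
    alpha, n = params
    return sum((theta_obs - van_genuchten_theta(h, 0.05, 0.40, alpha, n)) ** 2
               for h, theta_obs in observations)

# Synthetic observations generated with a hypothetical "true" parameter set:
true = (0.02, 2.0)
obs = [(h, van_genuchten_theta(h, 0.05, 0.40, *true)) for h in (-10, -50, -100, -500)]
# The objective vanishes at the true parameters and grows away from them,
# which is what a descent method (driven by AD gradients) exploits.
```

In the study itself, the gradient of such an objective is obtained by differentiating the flow code with Tapenade rather than by hand; the sketch only shows the shape of the objective being minimized.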
Procedia PDF Downloads 164
66 A Lexicographic Approach to Obstacles Identified in the Ontological Representation of the Tree of Life
Authors: Sandra Young
Abstract:
The biodiversity literature is vast and heterogeneous. In today's data age, a number of data integration and standardisation initiatives aim to facilitate simultaneous access to all the literature across biodiversity domains for research and forecasting purposes. Ontologies are increasingly being used to organise this information, but the rationalisation intrinsic to ontologies can hit obstacles when faced with the fluidity and inconsistency found in the domains comprising biodiversity. Essentially the problem is a conceptual one: biological taxonomies are formed on the basis of specific, physical specimens, yet nomenclatural rules are used to provide labels to describe these physical objects, and these labels are ambiguous representations of the physical specimens. An example is the genus name Melpomene, which is the scientific nomenclatural representation of a genus of ferns but also of a genus of spiders. The physical specimens for each of these are vastly different, but they have been assigned the same nomenclatural reference. While there is much research into the conceptual stability of taxonomic concepts versus the nomenclature used, to the best of our knowledge no research has yet looked empirically at the literature to see the conceptual plurality or singularity of the use of these species' names, the linguistic representation of a physical entity. Language itself uses words as symbols to represent real-world concepts, whether physical entities or otherwise, and as such lexicography has a well-founded history in the conceptual mapping of words in context for dictionary making. This makes it an ideal candidate to explore this problem. The lexicographic approach uses corpus-based analysis to look at word use in context, with a specific focus on collocated word frequencies (the frequencies of words used in specific grammatical and collocational contexts).
It allows for inconsistencies and contradictions in the source data and in fact includes these in the word characterisation, so that 100% of the available evidence is counted. Corpus analysis is indeed suggested as one of the ways to identify concepts for ontology building, because of its ability to look empirically at data and show patterns in language usage, which can indicate conceptual ideas that go beyond the words themselves. In this sense it could potentially be used to identify whether the hierarchical structures present within the empirical body of literature match those identified in the ontologies created to represent them. The first stages of this research have revealed a hierarchical structure that becomes apparent in the biodiversity literature when annotating scientific species’ names, common names and more general names as classes, which will be the focus of this paper. The next step in the research focuses on a larger corpus in which specific words can be analysed and then compared with existing ontological structures covering the same material, to evaluate the methods by means of an alternative perspective. This research aims to provide evidence as to the validity of current methods in knowledge representation for biological entities, and also to shed light on the way scientific nomenclature is used within the literature.
Keywords: ontology, biodiversity, lexicography, knowledge representation, corpus linguistics
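As an illustration of the corpus method described above (our own sketch, not part of the original study), counting collocates of a node word takes only a few lines of Python; the toy corpus and window size here are invented:

```python
from collections import Counter

def collocates(sentences, node, window=3):
    """Count words co-occurring with `node` within +/- `window` tokens."""
    counts = Counter()
    for sent in sentences:
        tokens = sent.lower().split()
        for i, tok in enumerate(tokens):
            if tok == node:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(tokens[lo:i] + tokens[i + 1:hi])
    return counts

# Toy corpus illustrating the ambiguous genus name Melpomene
corpus = [
    "the fern genus melpomene grows in cloud forests",
    "the spider genus melpomene was described later",
]
print(collocates(corpus, "melpomene").most_common(3))
```

Ranking collocates by frequency (or by an association measure such as log-likelihood) is what surfaces the divergent contexts, "fern" versus "spider", behind a single nomenclatural label.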
Procedia PDF Downloads 138

65 Injunctions, Disjunctions, Remnants: The Reverse of Unity
Authors: Igor Guatelli
Abstract:
The universe of aesthetic perception entails impasses about the sensitive divergences to which any text or visual object may be subjected. If approached through an intertextuality not based on the misleading notion of kinships or similarities admissible a priori, the possibility of anachronistic, heterogeneous - and non-diachronic - assemblies can enhance the emergence of interval movements, intermediate and conflicting, conducive to a method of reading, interpreting, and assigning meaning that escapes the rigid antinomies of the mere being and non-being of things. In the negative, these movements operate in a relationship built on the lack of an adjusted meaning set by their positive existences, with no remainders; the generated interval becomes the remnant of each of them; it is the opening that obscures the stable positions of each one. Without the negative of absence, of that which is always missing or must be missing in a text, concept, or image made positive by history, nothing is perceived beyond what has already been given. Pairings or binary oppositions cannot lead only to functional syntheses; on the contrary, methodological disturbances accumulated by the approximation of signs and entities can initiate a process of becoming as an opening to an unforeseen other, a transformation lasting until the moment when the difficulties of [re]conciliation become the mainstay of a future for that sign/entity, not envisioned a priori. A counter-history can emerge from these unprecedented, misadjusted approaches - beginnings of unassigned injunctions and disjunctions, in short, difficult alliances that open cracks in a supposedly cohesive history, chained in its apparent linearity with no remains, understood as a categorical historical imperative. Interstices are minority fields that, because of their opening, are capable of causing opacity in that which apparently presents itself with irreducible clarity.
Resulting from an incomplete and maladjusted [at the least dual] marriage between the signs/entities that originate them, this interval may destabilize and cause disorder in these entities and their own meanings. The interstitial offers a hyphenated relationship: a simultaneous union and separation, a spacing between the entity’s identity and its otherness, or alterity. One and the other may no longer be seen without the crack or fissure that now separates them, uniting them across a space-time lapse. Ontological and semantic shifts are caused by this fissure, an absence between one and the other, one with and against the other. Based on an improbable approximation between certain conceptual and semantic shifts within the design production of the architect Rem Koolhaas and the textual production of the philosopher Jacques Derrida, this article questions the notions of unity, coherence, affinity, and complementarity in the process of construction of thought, starting from these ontological, epistemological, and semiological fissures that rattle the signs/entities and their stable meanings. Fissures in a thought considered coherent, cohesive, formatted are the negativity that constitutes the interstices that allow us to move towards what still remains as non-identity, which allows us to begin another story.
Keywords: clearing, interstice, negative, remnant, spectrum
Procedia PDF Downloads 135

64 Computational Approaches to Study Lineage Plasticity in Human Pancreatic Ductal Adenocarcinoma
Authors: Almudena Espin Perez, Tyler Risom, Carl Pelz, Isabel English, Robert M. Angelo, Rosalie Sears, Andrew J. Gentles
Abstract:
Pancreatic ductal adenocarcinoma (PDAC) is one of the deadliest malignancies. The role of the tumor microenvironment (TME) is gaining significant attention in cancer research. Despite ongoing efforts, the nature of the interactions between tumors, immune cells, and stromal cells remains poorly understood. The cell-intrinsic properties that govern cell lineage plasticity in PDAC, and the extrinsic influences of immune populations, require technically challenging approaches due to the inherently heterogeneous nature of PDAC. Understanding the cell lineage plasticity of PDAC will improve the development of novel strategies that could be translated to the clinic. Members of the team have demonstrated that the acquisition of ductal-to-neuroendocrine lineage plasticity in PDAC confers therapeutic resistance and is a biomarker of poor outcomes in patients. Our approach combines computational methods for deconvolving bulk transcriptomic cancer data using CIBERSORTx with high-throughput single-cell imaging using Multiplexed Ion Beam Imaging (MIBI) to study lineage plasticity in PDAC and its relationship to the infiltrating immune system. The CIBERSORTx algorithm uses signature matrices built from sorted and single-cell immune and stromal data in order to 1) infer the fractions of different immune cell types and stromal cells in bulk gene expression data and 2) impute a representative transcriptome profile for each cell type. We studied a unique set of 300 genomically well-characterized primary PDAC samples with rich clinical annotation. We deconvolved the PDAC transcriptome profiles using CIBERSORTx, leveraging publicly available single-cell RNA-seq data from normal pancreatic tissue and PDAC to estimate cell type proportions in PDAC, and digitally reconstructed cell-specific transcriptional profiles from our study dataset. We built signature matrices and optimized them by simulations and comparison to ground truth data.
We identified cell-type-specific transcriptional programs that contribute to cancer cell lineage plasticity, especially in the ductal compartment. We also studied cell differentiation hierarchies using CytoTRACE and predicted cell lineage trajectories for acinar and ductal cells that we believe pinpoint relevant information on PDAC progression. Our collaborators (Angelo lab, Stanford University) have led the development of the Multiplexed Ion Beam Imaging (MIBI) platform for spatial proteomics. In the near future, we will apply MIBI to a tissue microarray of 40 PDAC samples to understand the spatial relationship between cancer cell lineage plasticity and stromal cells, with a focus on infiltrating immune cells, using the relevant markers of PDAC plasticity identified from the RNA-seq analysis.
Keywords: deconvolution, imaging, microenvironment, PDAC
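To make the deconvolution step concrete: given a signature matrix S (marker genes × cell types), CIBERSORTx-style methods estimate a fraction vector f from a bulk profile b ≈ S·f. The sketch below is a deliberate simplification (ordinary least squares with clipping and renormalisation, rather than the ν-SVR CIBERSORTx actually uses), and the gene and cell-type values are invented:

```python
import numpy as np

# Toy signature matrix: rows = marker genes, columns = cell types
# (ductal, acinar, immune). Expression values are illustrative only.
S = np.array([
    [10.0, 1.0, 0.5],   # ductal marker
    [ 1.0, 9.0, 0.5],   # acinar marker
    [ 0.5, 0.5, 8.0],   # immune marker
    [ 4.0, 2.0, 1.0],   # shared gene
])

true_f = np.array([0.6, 0.3, 0.1])
bulk = S @ true_f  # simulated noise-free bulk expression profile

# Estimate fractions: least squares, then clip negatives and
# renormalise so the estimated fractions sum to 1.
f_hat, *_ = np.linalg.lstsq(S, bulk, rcond=None)
f_hat = np.clip(f_hat, 0, None)
f_hat /= f_hat.sum()
print(np.round(f_hat, 3))  # recovers [0.6, 0.3, 0.1]
```

With noisy real data, constrained or robust solvers replace the plain least-squares step, which is why dedicated tools exist.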
Procedia PDF Downloads 128

63 Correlation between the Levels of Some Inflammatory Cytokines/Haematological Parameters and Khorana Scores of Newly Diagnosed Ambulatory Cancer Patients
Authors: Angela O. Ugwu, Sunday Ocheni
Abstract:
Background: Cancer-associated thrombosis (CAT) is a cause of morbidity and mortality among cancer patients. Several risk factors for developing venous thromboembolism (VTE), such as chemotherapy and immobilization, coexist in cancer patients, contributing to their higher risk of VTE compared to non-cancer patients. This study aimed to determine whether there is any correlation between levels of some inflammatory cytokines/haematological parameters and the Khorana scores of newly diagnosed chemotherapy-naïve ambulatory cancer patients (CNACP). Methods: This was a cross-sectional analytical study carried out from June 2021 to May 2022. Eligible newly diagnosed cancer patients aged 18 years and above (case group) were enrolled consecutively from the adult Oncology Clinics of the University of Nigeria Teaching Hospital, Ituku/Ozalla (UNTH). The control group comprised blood donors at the UNTH Ituku/Ozalla, Enugu blood bank, and healthy members of the Medical and Dental Consultants Association of Nigeria (MDCAN), UNTH Chapter. Blood samples collected from the participants were assayed for IL-6, TNF-α, and haematological parameters such as haemoglobin, white blood cell count (WBC), and platelet count. Data were entered into an Excel worksheet and analyzed using Statistical Package for Social Sciences (SPSS) software version 21.0 for Windows. A P value of < 0.05 was considered statistically significant. Results: A total of 200 participants (100 cases and 100 controls) were included in the study. The overall mean age of the participants was 47.42 ± 15.1 years (range 20-76). The sociodemographic characteristics of the two groups, including age, sex, educational level, body mass index (BMI), and occupation, were similar (P > 0.05). Following one-way ANOVA, there were significant differences between the mean levels of interleukin-6 (IL-6) (P = 0.036) and tumor necrosis factor-α (TNF-α) (P = 0.001) across the three Khorana score groups of the case group.
Pearson’s correlation analysis showed a significant positive correlation between the Khorana scores and IL-6 (r = 0.28, P = 0.031), TNF-α (r = 0.254, P = 0.011), and PLR (r = 0.240, P = 0.016). The mean serum levels of IL-6 were significantly higher in CNACP than in the healthy controls [8.98 (8-12) pg/ml vs. 8.43 (2-10) pg/ml, P = 0.0005]. There were also significant differences in the mean haemoglobin (Hb) level (P < 0.001), white blood cell (WBC) count (P < 0.001), and platelet (PL) count (P = 0.005) between the two groups of participants. Conclusion: There is a significant positive correlation between the serum levels of IL-6, TNF-α, and PLR and the Khorana scores of CNACP. The mean serum levels of IL-6, TNF-α, PLR, WBC, and PL count were significantly higher in CNACP than in the healthy controls. Ambulatory cancer patients with high-risk Khorana scores may benefit from anti-inflammatory drugs because of the positive correlation with inflammatory cytokines. Recommendations: Ambulatory cancer patients with Khorana scores of 2 or more may benefit from thromboprophylaxis. A multicenter study with a heterogeneous population and a larger sample size is recommended to further elucidate the relationship between IL-6, TNF-α, PLR, and the Khorana scores among cancer patients in the Nigerian population.
Keywords: thromboprophylaxis, cancer, Khorana scores, inflammatory cytokines, haematological parameters
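For context, the Khorana score referenced above is computed from cancer site and pre-chemotherapy blood values. The sketch below encodes the standard scoring rules as we understand them, with invented patient values; it is our illustration, not code or data from the study:

```python
def khorana_score(site, platelets, hb, wbc, bmi, esa=False):
    """Khorana VTE risk score.
    platelets and wbc in 10^9/L, hb in g/dL, bmi in kg/m^2;
    esa = use of erythropoiesis-stimulating agents."""
    very_high_risk = {"stomach", "pancreas"}
    high_risk = {"lung", "lymphoma", "gynecologic", "bladder", "testicular"}
    score = 2 if site in very_high_risk else (1 if site in high_risk else 0)
    score += platelets >= 350   # pre-chemo platelet count
    score += hb < 10 or esa     # low haemoglobin or ESA use
    score += wbc > 11           # leukocytosis
    score += bmi >= 35          # obesity
    return score

# 0 = low risk, 1-2 = intermediate risk, >=3 = high risk
print(khorana_score("pancreas", platelets=360, hb=9.5, wbc=12, bmi=30))  # 5
```

Grouping patients by the resulting integer score is what produces the "three Khorana score groups" compared in the ANOVA above.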
Procedia PDF Downloads 82

62 Patterns and Predictors of Intended Service Use among Frail Older Adults in Urban China
Authors: Yuanyuan Fu
Abstract:
Background and Purpose: With social and economic change, the traditional family-based support for older people has gradually weakened in contemporary China. Acknowledging this situation, and to better meet older people’s needs for formal services and improve the quality of later life, this study seeks to identify patterns of intended service use among frail older people living in the community and to examine the determinants that explain heterogeneous variations in those patterns. Additionally, this study tests the relationship between cultural values and intended service use patterns, and the mediating role of enabling factors between the two. Methods: Participants were recruited from Haidian District, Beijing, China in 2015. A multi-stage sampling method was adopted to select sub-districts, communities and older people aged 70 years or older. After screening, 577 older people with limitations in daily life were successfully interviewed. After data cleaning, 550 cases were included for analysis. This study establishes a conceptual framework based on the Andersen model (including predisposing factors, enabling factors and need factors) and further develops it by adding cultural value factors (attitudes towards filial piety and attitudes towards social face). Using latent class analysis (LCA), this study classifies overall patterns of older people’s formal service utilization. Fourteen types of formal services were taken into account, including housework, voluntary support, transportation, home-delivered meals, home-delivery medical care, elderly canteens, and day-care/respite care. Structural equation modeling (SEM) was used to examine the direct effect of cultural values on service use patterns, and the mediating effect of the enabling factors.
Results: The LCA identified a hierarchical structure of service use patterns: multiple intended service use (N=69, 13%), selective intended service use (N=129, 23%), and light intended service use (N=352, 64%). In the SEM, after controlling for predisposing and need factors, the results showed a significant direct effect of cultural values on older people’s intended service use patterns. Enabling factors had a partial mediation effect on the relationship between cultural values and the patterns. Conclusions and Implications: Differentiation of formal services may be important for meeting frail older people’s service needs and for distributing program resources by identifying target populations, informing specific interventions to better support frail older people. Additionally, cultural values had a unique direct effect on the intended service use patterns of frail older people in China, enriching our theoretical understanding of the sources of cultural values and their impacts. The findings also highlight the mediating effect of enabling factors on the relationship between cultural value factors and intended service use patterns. This study suggests that researchers and service providers should pay more attention to the important role of cultural value factors in shaping intended service use patterns, and be sensitive to the mediating effect of enabling factors when discussing the relationship between cultural values and the patterns.
Keywords: frail old people, intended service use pattern, culture value, enabling factors, contemporary China, latent class analysis
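Latent class analysis of the kind used above fits a mixture of independent Bernoulli variables to binary service-use indicators. A minimal EM sketch on simulated data (our illustration; the study used 14 indicators and dedicated LCA software, and class counts here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def lca_em(X, k, n_iter=200):
    """EM for a latent class (Bernoulli mixture) model.
    X: (n, d) binary matrix of service-use indicators."""
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                  # class proportions
    theta = rng.uniform(0.25, 0.75, (k, d))   # item-response probabilities
    for _ in range(n_iter):
        # E-step: posterior class responsibilities (log domain for stability)
        log_p = (X @ np.log(theta).T
                 + (1 - X) @ np.log(1 - theta).T
                 + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update proportions and item probabilities
        pi = r.mean(axis=0)
        theta = np.clip((r.T @ X) / r.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, r

# Simulate two classes over 5 service types: "light" users
# (low use probabilities) and "multiple" users (high probabilities).
true_theta = np.array([[0.1, 0.1, 0.2, 0.1, 0.1],
                       [0.8, 0.9, 0.7, 0.8, 0.9]])
z = rng.integers(0, 2, 400)
X = (rng.random((400, 5)) < true_theta[z]).astype(float)
pi, theta, r = lca_em(X, k=2)
print(np.round(np.sort(pi), 2))
```

Each respondent is then assigned to the class with the highest responsibility, yielding the kind of pattern groups reported above.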
Procedia PDF Downloads 226

61 Detection, Analysis and Determination of the Origin of Copy Number Variants (CNVs) in Intellectual Disability/Developmental Delay (ID/DD) Patients and Autistic Spectrum Disorders (ASD) Patients by Molecular and Cytogenetic Methods
Authors: Pavlina Capkova, Josef Srovnal, Vera Becvarova, Marie Trkova, Zuzana Capkova, Andrea Stefekova, Vaclava Curtisova, Alena Santava, Sarka Vejvalkova, Katerina Adamova, Radek Vodicka
Abstract:
ASDs are heterogeneous and complex developmental diseases with a significant genetic background. Recurrent CNVs are known to be a frequent cause of ASD. These CNVs can, however, have variable expressivity, which results in a spectrum of phenotypes from asymptomatic to ID/DD/ASD. ASD is associated with ID in ~75% of individuals. Various platforms are used to detect pathogenic mutations in the genomes of these patients. This study determined the frequency of pathogenic mutations in a group of ASD patients and a group of ID/DD patients using various strategies, and compared their detection rates. The possible role of the origin of these mutations in the aetiology of ASD was assessed. The study included 35 individuals with ASD and 68 individuals with ID/DD (64 males and 39 females in total), who underwent rigorous genetic, neurological and psychological examinations. Screening for pathogenic mutations involved karyotyping, screening for FMR1 mutations and for metabolic disorders, a targeted MLPA test with probe mixes Telomeres 3 and 5, Microdeletion 1 and 2, Autism 1, MRX, and chromosomal microarray analysis (CMA) (Illumina or Affymetrix). Chromosomal aberrations were revealed by karyotyping in 7 individuals (1 in the ASD group). FMR1 mutations were discovered in 3 individuals (1 in the ASD group). The detection rate of pathogenic mutations in ASD patients with a normal karyotype was 15.15% by MLPA and CMA. The corresponding rates in ID/DD patients with a normal karyotype were 25.0% by MLPA and 35.0% by CMA. CNVs inherited from asymptomatic parents were more abundant than de novo changes in ASD patients (11.43% vs. 5.71%), in contrast to the ID/DD group, where de novo mutations prevailed over inherited ones (26.47% vs. 16.18%). ASD patients shared their mutations with their fathers more frequently than patients from the ID/DD group (8.57% vs. 1.47%).
Maternally inherited mutations predominated in the ID/DD group in comparison with the ASD group (14.7% vs. 2.86%). CNVs of unknown significance were found in 10 patients by CMA and in 3 patients by MLPA. Although the detection rate is highest when using CMA, recurrent CNVs can be easily detected by MLPA. CMA proved to be more efficient in the ID/DD group, where a larger spectrum of rare pathogenic CNVs was revealed. This study determined that maternally inherited highly penetrant mutations and de novo mutations more often resulted in ID/DD without ASD. The paternally inherited mutations could, however, be a source of greater variability in the genomes of ASD patients and contribute to the polygenic character of the inheritance of ASD. As the number of subjects in the group is limited, a larger cohort is needed to confirm this conclusion. Inherited CNVs have a role in the aetiology of ASD, possibly in combination with additional genetic factors - mutations elsewhere in the genome. The identification of these interactions constitutes a challenge for the future. Supported by MH CZ – DRO (FNOl, 00098892), IGA UP LF_2016_010, TACR TE02000058 and NPU LO1304.
Keywords: autistic spectrum disorders, copy number variant, chromosomal microarray, intellectual disability, karyotyping, MLPA, multiplex ligation-dependent probe amplification
Procedia PDF Downloads 352

60 Unfolding Architectural Assemblages: Mapping Contemporary Spatial Objects' Affective Capacity
Authors: Panagiotis Roupas, Yota Passia
Abstract:
This paper aims to establish an index of design mechanisms - immanent in spatial objects - based on the affective capacity of their material formations. While spatial objects (design objects, buildings, urban configurations, etc.) are regarded, within the premises of assemblage theory, as systems composed of interacting parts, their ability to affect and to be affected has not yet been mapped or sufficiently explored. This ability lies in excess, a latent potentiality they contain, not transcendental but immanent in their pre-subjective aesthetic power. As spatial structures are theorized as assemblages - composed of heterogeneous elements that enter into relations with one another - and since all assemblages are parts of larger assemblages, their components' ability to engage is contingent. We thus seek to unfold the mechanisms inherent in spatial objects that allow the constituent parts of design assemblages to perpetually enter into new assemblages. To map an architectural assemblage's affective ability, spatial objects are analyzed along two axes. The first axis focuses on the relations that the assemblage's material and expressive components develop in order to enter assemblages. Material components refer to those material elements that an assemblage requires in order to exist, while expressive components include non-linguistic elements (sense impressions) as well as linguistic ones (beliefs). The second axis records the processes known as a-signifying signs, or a-signs, which are the triggering mechanisms able to territorialize or deterritorialize, stabilize or destabilize the assemblage and thus allow it to assemble anew. As a-signs cannot be isolated from matter, we point to their resulting effects, which, without entering the linguistic level, are expressed in terms of intensity fields: modulations, movements, speeds, rhythms, spasms, etc. They belong to a molecular level where they operate in the pre-subjective world of perceptions, affects, drives, and emotions.
A-signs have been introduced as intensities that transform the object beyond meaning, beyond fixed or known cognitive procedures. To that end, from an archive of more than 100 spatial objects by contemporary architects and designers, we have created an affective mechanisms index, in which each a-sign is connected with the list of effects it triggers, which thoroughly defines it. Vice versa, the same effect can be triggered by different a-signs, allowing the design object to lie in a perpetual state of becoming. To define spatial objects, a-signs are categorized in terms of their aesthetic power to affect and to be affected, on the basis of the general categories of form, structure and surface. Thus, each part's degree of contingency is evaluated and measured; finally, a-signs are introduced as material information that is immanent in the spatial object while conferring no meaning: they only convey information without semantic content. Through this index, we are able to analyze and direct the final form of the spatial object while establishing a mechanism to measure its continuous transformation.
Keywords: affective mechanisms index, architectural assemblages, a-signifying signs, cartography, virtual
Procedia PDF Downloads 130

59 Urban Stratification as a Basis for Analyzing Political Instability: Evidence from Syrian Cities
Authors: Munqeth Othman Agha
Abstract:
The historical formation of urban centres in the eastern Arab world was shaped by rapid urbanization and a sudden transformation from a pre-industrial to a post-industrial economy, coupled with uneven development, informal urban expansion, and constant surges in unemployment and poverty rates. The city was accordingly stratified into overlapping layers of division and inequality built on top of each other, creating complex horizontal and vertical divisions on economic, social, political, and ethno-sectarian bases. This was further exacerbated during the neoliberal era, which transformed the city into a sort of dual city inhabited by heterogeneous and often antagonistic social groups. Economic deprivation combined with a growing sense of marginalization and inequality across the city planted the seeds of the political instability that broke out in 2011. Unlike other popular uprisings that occupied central squares, as in Egypt and Tunisia, the Syrian uprising in 2011 took place mainly within inner streets and neighborhood squares, mobilizing more or less along the lines of stratification. This has emphasized the role of micro-urban and social settings in shaping mobilization and resistance tactics, which requires us to understand how the city was stratified and to place this at the center of the city-conflict nexus analysis. This research aims to understand to what extent pre-conflict urban stratification lines played a role in determining the different trajectories of three cities' neighborhoods (Homs, Dara'a and Deir-ez-Zor). The main argument of the paper is that the way the Syrian city has been stratified creates various social groups within the city who have enjoyed different levels of access to life chances, material resources and social status. This determines their relationship with other social groups in the city and, more importantly, their relationship with the state.
The advent of a political opportunity is perceived differently across the city's different social groups according to their perceived interests and threats, which consequently leads to either political mobilization or demobilization. Several factors, including the type of social structures, the built environment, and the state response, determine the ability of social actors to transform the repertoire of contention into collective action, or to move from being social actors to political actors. The research uses urban stratification lines as the basis for understanding the different patterns of political upheaval in urban areas, while explaining why neighborhoods with different social and urban environment settings had different abilities and capacities to mobilize, resist state repression and then descend into military conflict. It particularly traces the transformation from social groups to social actors and political actors by applying the explaining-outcome process-tracing method to depict the causal mechanisms that led to including or excluding different neighborhoods from each stage of the uprising, namely mobilization (M1), response (M2), and control (M3).
Keywords: urban stratification, Syrian conflict, social movement, process tracing, divided city
Procedia PDF Downloads 73

58 Fabrication of Zeolite Modified Cu Doped ZnO Films and Their Response towards Nitrogen Monoxide
Authors: Irmak Karaduman, Tugba Corlu, Sezin Galioglu, Burcu Akata, M. Ali Yildirim, Aytunç Ateş, Selim Acar
Abstract:
Breath analysis represents a promising non-invasive, fast and cost-effective alternative to well-established diagnostic and monitoring techniques such as blood analysis, endoscopy, and ultrasonic and tomographic monitoring. Portable, non-invasive, and low-cost breath analysis devices are becoming increasingly desirable for monitoring different diseases, especially asthma. Because of this, NO gas sensing at low concentrations has attracted growing attention for the clinical analysis of asthma. Recently, nanomaterial-based sensors have been considered a promising clinical and laboratory diagnostic tool because of their large surface-to-volume ratio, controllable structure, and easily tailored chemical and physical properties, which bring high sensitivity, fast dynamic processes and even increasing specificity. Among various nanomaterials, semiconducting metal oxides are extensively studied gas-sensing materials and are potential sensing elements for breath analyzers due to their high sensitivity, simple design, low cost and good stability. The sensitivities of metal oxide semiconductor gas sensors can be enhanced by adding noble metals. Doping content, distribution, and size of metallic or metal oxide catalysts are key parameters for enhancing gas selectivity as well as sensitivity. By doping MOS structures, it is possible to develop more efficient sensing layers. Zeolites are perhaps the most widely employed group of silicon-based nanoporous solids. Their well-defined pores of sub-nanometric size have earned them the name of molecular sieves, meaning that operation in the size-exclusion regime is possible by selecting, among the over 170 structures available, the zeolite whose pores allow the passage of the desired molecule while keeping larger molecules outside. In fact, it is selective adsorption, rather than molecular sieving, that explains most of the successful gas separations achieved with zeolite membranes.
In view of their molecular sieving and selective adsorption properties, it is not surprising that zeolites have found use in a number of works dealing with gas sensing devices. In this study, a Cu doped ZnO nanostructure film was produced by the SILAR method and its NO gas sensing properties were investigated. To assess the selectivity of the sample, the responses to CO, NH3, H2 and CH4 were compared with that to NO. The maximum response was obtained at 85 °C for 20 ppb NO gas. The sensor shows a high response to NO gas; acceptable responses were also calculated for CO and NH3, whereas no response was obtained for H2 and CH4. To enhance selectivity, the Cu doped ZnO nanostructure film was coated with a zeolite A thin film. The coated sample was found to possess an acceptable response towards NO while hardly responding to CO, NH3, H2 and CH4 at room temperature. This difference in response can be expressed in terms of differences in molecular structure, dipole moment, strength of the electrostatic interaction and dielectric constant. The as-synthesized thin film is considered one of the most promising candidate materials for electronic nose applications. This work is supported by The Scientific and Technological Research Council of Turkey (TUBİTAK) under Project No. 115M658 and the Gazi University Scientific Research Fund under project no. 05/2016-21.
Keywords: Cu doped ZnO, electrical characterization, gas sensing, zeolite
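For readers outside the field: a chemiresistive sensor's response is typically reported as the relative resistance change between clean air and the target gas. The helper below uses one common definition, with invented resistance values (the abstract reports responses, not raw resistances):

```python
def sensor_response(r_air, r_gas):
    """Percentage response of a chemiresistive sensor; one common
    definition is |R_gas - R_air| / R_air * 100."""
    return abs(r_gas - r_air) / r_air * 100.0

# Hypothetical resistances (ohms) for a film held at the working temperature
baseline = 1.2e6
print(f"{sensor_response(baseline, 3.0e6):.1f}%")   # large change: target gas
print(f"{sensor_response(baseline, 1.25e6):.1f}%")  # small change: interferent
```

Selectivity is then judged by comparing this figure across gases, as done above for NO versus CO, NH3, H2 and CH4.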
Procedia PDF Downloads 286

57 Feasibility and Acceptability of Mindfulness-Based Cognitive Therapy in People with Depression and Cardiovascular Disorders: A Feasibility Randomised Controlled Trial
Authors: Modi Alsubaie, Chris Dickens, Barnaby Dunn, Andy Gibson, Obioha Ukoumunned, Alison Evans, Rachael Vicary, Manish Gandhi, Willem Kuyken
Abstract:
Background: Depression co-occurs in 20% of people with cardiovascular disorders, can persist for years, and predicts worse physical health outcomes. While psychosocial treatments have been shown to effectively treat acute depression in those with comorbid cardiovascular disorders, to date there has been no evaluation of approaches aiming to prevent relapse and treat residual depression symptoms in this group. Therefore, the current study aimed to examine the feasibility and acceptability of a randomised controlled trial design evaluating an adapted version of mindfulness-based cognitive therapy (MBCT) designed specifically for people with comorbid depression and cardiovascular disorders. Methods: A 3-arm feasibility randomised controlled trial was conducted, comparing MBCT adapted for people with cardiovascular disorders plus treatment as usual (TAU), mindfulness-based stress reduction (MBSR) plus TAU, and TAU alone. Participants completed a set of self-report measures of depression severity, anxiety, quality of life, illness perceptions, mindfulness, self-compassion and affect, and had their blood pressure taken immediately before, immediately after, and three months following the intervention. Those in the adapted-MBCT arm additionally underwent a qualitative interview to gather their views about the adapted intervention. Results: 3400 potentially eligible participants were approached when attending an outpatient appointment at a cardiology clinic or via a GP letter following a case-note search. 242 (7.1%) were interested in taking part, 59 (1.7%) were screened as suitable, and 33 (<1%) were eventually randomised to the three groups. The sample was heterogeneous in terms of whether participants reported current depression or a history of depression, and in the time since the onset of cardiovascular disease (one to 25 years).
Of the 11 participants randomised to adapted MBCT, seven completed the full course, levels of home mindfulness practice were high, and positive qualitative feedback about the intervention was given. Twenty-nine of the 33 randomised participants completed all the assessment measures at all three time points. With regard to the primary outcome (depression), five of the seven people who completed the adapted MBCT and three of the five under MBSR showed significant clinical change, while in TAU no one showed any clinical change at the three-month follow-up. Conclusions: The adapted MBCT intervention was feasible and acceptable to participants. However, aspects of the trial design were not feasible. In particular, low recruitment rates were achieved, and there was a high withdrawal rate between screening and randomisation. Moreover, the heterogeneity of the sample was high, meaning the adapted intervention was unlikely to be well tailored to all participants' needs. This suggests that if the decision is made to move to a definitive trial, study recruitment procedures will need to be revised to more successfully recruit a target sample that optimally matches the adapted intervention.
Keywords: mindfulness-based cognitive therapy (MBCT), depression, cardiovascular disorders, feasibility, acceptability
Procedia PDF Downloads 219
56 Influence of Packing Density of Layers Placed in Specific Order in Composite Nonwoven Structure for Improved Filtration Performance
Authors: Saiyed M Ishtiaque, Priyal Dixit
Abstract:
Objectives: An approach is suggested to design filter media that maximize filtration efficiency at the minimum possible pressure drop in a composite nonwoven, by incorporating layers of different packing densities, induced by fibres of different deniers and by punching parameters, placed in a specific order using a sequential punching technique. X-ray computed tomography is used to measure the packing density along the thickness of the layered nonwoven structure, composed by placing layers of differently oriented fibres, as influenced by the considered variables, in various combinations. Methodology: This work involves the preparation of needle-punched layered structures from batts of 100 g/m2 basis weight each, with fibre denier, punch density and needle penetration depth as variables, to produce composite nonwovens of 300 g/m2 basis weight. For the first set of experiments, batts made of fibres of different deniers, each of 100 g/m2 basis weight, were placed in various combinations to develop the layered nonwoven fabrics. For the second set of experiments, composite nonwoven fabrics were prepared from 3 denier circular cross-section polyester fibre of 64 mm length on a needle-punching machine using the sequential punching technique. In this technique, three semi-punched fabrics of 100 g/m2 each, having either different punch densities or different needle penetration depths, were prepared in the first phase of fabric preparation.
These fabrics were later punched together to obtain an overall basis weight of 300 g/m2. The total punch density of the composite nonwoven fabric was kept at 200 punches/cm2 with a needle penetration depth of 10 mm. The layered structures so formed were subcategorised into two groups: homogeneous layered structures, in which all three batts comprising the nonwoven fabric were made from the same fibre denier, punch density and needle penetration depth and were placed in different positions in the respective fabric, and heterogeneous layered structures, in which the batts were made from fibres of different deniers, punch densities and needle penetration depths and were placed in different positions. Contributions: The results show that the reduction in pressure drop is not governed by the overall packing density of the layered nonwoven fabric; rather, the sequencing of layers of specific packing density within the layered structure decides the pressure drop. Accordingly, creating an inverse gradient of packing density in the layered structure provided maximum filtration efficiency with the least pressure drop. This study paves the way for customising composite nonwoven fabrics, through the incorporation of differently oriented fibres in the constituent layers induced by the considered variables, for desired filtration properties. Keywords: filtration efficiency, layered nonwoven structure, packing density, pressure drop
Procedia PDF Downloads 76
55 Nutritional Genomics Profile Based Personalized Sport Nutrition
Authors: Eszter Repasi, Akos Koller
Abstract:
Our genetic information determines our appearance, physiology, sports performance and all our other features. Efforts to maximize athletes' performance have adopted a science-based approach to nutritional support. Genetic studies have recently blended with nutritional sciences, and a dynamically evolving research field has appeared that nutritional experts need to draw on. Nutritional genomics is a recent field of nutritional science which can help reach the best sport performance by exploiting correlations between the athlete's genome, nutrition and individual molecules, including the human microbiome (the links between food, the microbiome and epigenetics), nutrigenomics and nutrigenetics. Nutritional genomics has tremendous potential to change the future of dietary guidelines and personal recommendations. Experts need to use new technologies to obtain information about athletes, such as a nutritional genomics profile (including determination of the oral and gut microbiome and DNA-coded reactions to food components), which can modify the preparation period and sports performance. The influence of nutrients on gene expression is called nutrigenomics; the heterogeneous response of gene variants to nutrients and dietary components is called nutrigenetics. The human microbiome plays a critical role in health and well-being, and there are many links between food or nutrition and the composition of the human microbiome, which can drive disease and epigenetic changes as well. A nutritional genomics-based profile of an athlete can be the best basis for a dietitian to devise a unique sports nutrition diet plan. Using functional foods and the right food components can affect health status and thus sports performance. Scientists need to determine the best response, given that nutrients affect health by altering the genome, promoting metabolites and producing changes in physiology.
Nutritional biochemistry explains why polymorphisms in genes for the absorption, circulation, or metabolism of essential nutrients (such as n-3 polyunsaturated fatty acids or epigallocatechin-3-gallate) would affect the efficacy of those nutrients. Nutritional deficiencies and failures that are controlled, deteriorations of health state that are prevented, or a newly discovered food intolerance, all monitored by a proper medical team, can support better sports performance. It is important that the dietetics profession is informed about gene-diet interactions, which may lead to optimal health and a reduced risk of injury or disease. A dedicated medical application for documenting and monitoring health-state data and risk factors can support the medical team, warn it so that early action can be taken, and help deliver proper health services in time. This model can provide personalized nutrition advice from status control, through recovery, to monitoring. However, more studies are needed to understand the mechanisms and to be able to change the composition of the microbiome and the environmental and genetic risk factors in athletes. Keywords: gene-diet interaction, multidisciplinary team, microbiome, diet plan
Procedia PDF Downloads 172
54 Anti-proliferative Activity and HER2 Receptor Expression Analysis of MCF-7 (Breast Cancer Cell) Cells by Plant Extract Coleus Barbatus (Andrew)
Authors: Anupalli Roja Rani, Pavithra Dasari
Abstract:
Background: Breast cancer has emerged as the most common female cancer in developing countries and is the most common cause of cancer-related deaths among women worldwide. It is a molecularly and clinically heterogeneous disease. Moreover, it is a hormone-dependent tumor in which estrogens can regulate the growth of breast cells by binding to estrogen receptors (ERs). The use of natural products in cancer therapeutics rests on their biocompatibility and low toxicity, and plants are vast reservoirs of bioactive compounds. Coleus barbatus (Lamiaceae) shows anticancer activity against several cancer cell lines. Method: In the present study, an attempt is made to extend knowledge of the anticancer activity of pure compounds extracted from Coleus barbatus (Andrew) on the human breast cancer cell line MCF-7. Herein, we assess the antiproliferative activity of Coleus barbatus (Andrew) extracts against MCF-7 and also evaluate their toxicity in normal human mammary cell lines such as Human Mammary Epithelial Cells (HMEC). The active fraction of the plant extract was further purified using flash chromatography, Medium Pressure Liquid Chromatography (MPLC) and preparative High-Performance Liquid Chromatography (HPLC). The structures of the pure compounds will be elucidated using modern spectroscopic methods such as Nuclear Magnetic Resonance (NMR) and Electrospray Ionisation Mass Spectrometry (ESI-MS). Subsequently, growth inhibition, morphological assessment of the cancer cells and cell-cycle analysis for the purified compounds were assessed using FACS. The growth and progression of the signaling molecules HER2 and GRP78 were studied by secretion assay using ELISA and by expression analysis using flow cytometry.
Result: IC50 values for the cytotoxic effect against MCF-7 were derived from dose-response curves, using six concentrations of twofold serially diluted samples, with SOFTMax Pro software (Molecular Devices); ellipticine and 0.5% DMSO were used as positive and negative controls, respectively. Conclusion: The present study shows the significance of the various bioactive compounds extracted from Coleus barbatus (Andrew) root material. They act as anti-proliferatives and show cytotoxic effects on the human breast cancer cell line MCF-7. The plant extracts are pharmacologically important: the whole plant has been used in traditional medicine for decades, and the studies done here substantiate that practice. As described earlier, the plant has been used in ayurvedic and homeopathic medicine. However, more clinical and pathological studies must be conducted to investigate the unexploited potential of the plant. These studies will be very useful for drug design in the future. Keywords: Coleus barbatus, HPLC, MPLC, NMR, MCF-7, flash chromatography, ESI-MS, FACS, ELISA
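The IC50 determination described in the abstract, six twofold serial dilutions read off a dose-response curve, can be sketched in a few lines. The viability data below are hypothetical, and a simple log-linear interpolation stands in for the four-parameter fit performed by SOFTMax Pro:

```python
import math

def ic50_from_curve(concentrations, viabilities):
    """Estimate the IC50 by log-linear interpolation between the two
    measured points that bracket 50% viability.
    concentrations: ascending doses; viabilities: % of untreated control."""
    points = list(zip(concentrations, viabilities))
    for (c_lo, v_lo), (c_hi, v_hi) in zip(points, points[1:]):
        if v_lo >= 50.0 >= v_hi:  # response falls through 50% in this interval
            frac = (v_lo - 50.0) / (v_lo - v_hi)
            log_ic50 = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_ic50
    return None  # 50% inhibition not reached within the tested range

# Hypothetical viability data for six twofold serial dilutions (e.g. ug/mL)
doses = [3.125, 6.25, 12.5, 25.0, 50.0, 100.0]
viability = [92.0, 81.0, 63.0, 41.0, 22.0, 9.0]
ic50 = ic50_from_curve(doses, viability)  # falls between 12.5 and 25 ug/mL
```

Interpolation is done on log-transformed doses because serial dilutions are evenly spaced on a logarithmic, not linear, concentration axis.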
Procedia PDF Downloads 114
53 Development and Adaptation of a LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During Covid-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)
Authors: Eric Pla Erra, Mariana Jimenez Martinez
Abstract:
While aggregated loads at the community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behavior patterns have undergone due to the COVID-19 pandemic have modified our daily electrical consumption curves and, therefore, further complicated the methods used to forecast short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers who rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) is evaluated across the transition between the pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves on standard decision-tree models in both speed and memory consumption while retaining high accuracy. Even though LGBM has complex non-linear modelling capabilities, it has proven to be competitive under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model, called "resilient LGBM", is also tested, incorporating a concept drift detection technique for time series analysis, in order to evaluate its ability to improve the model's accuracy during extreme events such as COVID-19 lockdowns.
The results for LGBM and resilient LGBM will be compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models' performance will be evaluated over a set of real households' hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d'Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence. Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)
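The RMSE comparison at the heart of the evaluation can be illustrated with a minimal sketch. The hourly loads below are hypothetical, and a naive persistence forecast stands in for the LGBM models actually being compared:

```python
import math

def rmse(actual, predicted):
    """Root Mean Squared Error, the study's main performance metric."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def persistence_forecast(history, horizon):
    """Naive day-ahead baseline: repeat the most recent `horizon` hours."""
    return history[-horizon:]

# Hypothetical hourly household loads (kWh) for two consecutive days
day1 = [0.4, 0.3, 0.3, 0.3, 0.4, 0.6, 0.9, 1.2]
day2 = [0.5, 0.3, 0.4, 0.3, 0.5, 0.7, 1.0, 1.1]
baseline_error = rmse(day2, persistence_forecast(day1, 8))
```

Any candidate model (LGBM, resilient LGBM, or a baseline like the one above) is scored the same way, so RMSE values are directly comparable across the pre-pandemic and lockdown periods.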
Procedia PDF Downloads 106
52 Enhancement to Green Building Rating Systems for Industrial Facilities by Including the Assessment of Impact on the Landscape
Authors: Lia Marchi, Ernesto Antonini
Abstract:
The impact of industrial sites on people's living environment involves both detrimental effects on the ecosystem and perceptual-aesthetic interference with the scenery. These, in turn, affect the economic and social value of the landscape, as well as the wellbeing of workers and local communities. Given the diffusion of the phenomenon and the relevance of its effects, the need emerges for a joint approach to assess, and thus mitigate, the impact of factories on the landscape, the landscape here being understood as the result of the action and interaction of natural and human factors. However, the impact assessment tools suitable for this purpose are quite heterogeneous and mostly monodisciplinary. On the one hand, green building rating systems (GBRSs) are increasingly used to evaluate the performance of manufacturing sites, mainly through quantitative indicators focused on environmental issues. On the other hand, methods to detect the visual and social impact of factories on the landscape are gradually emerging in the literature, but they generally adopt only qualitative gauges. The research addresses the integration of environmental impact assessment with the perceptual-aesthetic interference of factories on the landscape. The GBRS model is taken as a reference since it is adequate for simultaneously investigating the different topics which affect sustainability, returning a global score. A critical analysis of GBRSs relevant to industrial facilities led to the selection of the U.S. GBC LEED protocol as the most suitable for this scope. A revision of LEED v4 Building Design+Construction was then produced by including specific indicators to measure the interference of manufacturing sites with the perceptual-aesthetic and social aspects of the territory.
To this end, a new impact category was defined, namely 'PA - Perceptual-aesthetic aspects', comprising eight new credits specifically designed to assess how well the buildings harmonize with their surroundings: they investigate, for example, the morphological and chromatic harmonization of the facility with the scenery, or the site's receptiveness and attractiveness. The credit weighting table was revised accordingly, following the LEED points allocation system. Like all LEED credits, each new PA credit is thoroughly described in a sheet setting out its aim, its requirements, and the available options for gauging the interference and obtaining a score. Each credit is also related to mitigation tactics drawn from a catalogue of exemplary case studies, likewise developed within the research. The result is a modified LEED scheme which includes compatibility with the landscape within the sustainability assessment of industrial sites. The whole system consists of 10 evaluation categories containing 62 credits in total. Finally, the tool was tested on an Italian factory, allowing the comparison of three mitigation scenarios with increasing levels of compatibility. The study proposes a holistic and viable approach to the environmental impact assessment of factories through a tool that integrates the multiple aspects involved within a worldwide recognized rating protocol. Keywords: environmental impact, GBRS, landscape, LEED, sustainable factory
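The points-allocation logic of such a rating scheme can be sketched as a capped per-category sum. The category names, caps and awarded points below are hypothetical illustrations, not the revised LEED weighting table itself:

```python
def project_score(awarded, caps):
    """Total a LEED-style score: sum awarded credit points,
    clipping each category at its maximum allocation."""
    return sum(min(points, caps[category]) for category, points in awarded.items())

# Hypothetical caps for two categories, including the proposed PA category
caps = {
    "EA - Energy and Atmosphere": 33,
    "PA - Perceptual-aesthetic aspects": 8,
}
awarded = {
    "EA - Energy and Atmosphere": 20,
    "PA - Perceptual-aesthetic aspects": 10,  # clipped to the 8-point cap
}
total = project_score(awarded, caps)
```

Re-running the same scoring over the three mitigation scenarios is what makes their compatibility levels directly comparable.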
Procedia PDF Downloads 114
51 Carbon Nanotubes Functionalization via Ullmann-Type Reactions Yielding C-C, C-O and C-N Bonds
Authors: Anna Kolanowska, Anna Kuziel, Sławomir Boncel
Abstract:
Carbon nanotubes (CNTs) combine lightness and nanoscopic size with high tensile strength and excellent thermal and electrical conductivity. To date, CNTs have been used as a support in heterogeneous catalysis (CuCl anchored to pre-functionalized CNTs) in Ullmann-type couplings with aryl halides toward the formation of C-N and C-O bonds; the results indicated that the stability of the catalyst was much improved and that the catalytic system was efficient and recyclable. However, CNTs have not been considered as the substrate itself in Ullmann-type reactions. If successful, this functionalization would open new areas of CNT chemistry, leading to enhanced in-solvent/matrix nanotube individualization. The copper-catalyzed Ullmann-type reaction is an attractive method for the formation of carbon-heteroatom and carbon-carbon bonds in organic synthesis. This condensation reaction is usually conducted at temperatures as high as 200 °C, often in the presence of stoichiometric amounts of a copper reagent and with activated aryl halides. However, a small amount of an organic additive (e.g. diamines, amino acids, diols, 1,10-phenanthroline) can be applied to increase the solubility and stability of the copper catalyst and, at the same time, to allow the reaction to proceed under mild conditions. The copper (pre-)catalyst is prepared by mixing the copper salt and the appropriate chelator in situ. Our research focuses on the application of the Ullmann-type reaction to the covalent functionalization of CNTs. Firstly, CNTs were chlorinated using iodine trichloride (ICl3) in carbon tetrachloride (CCl4). This treatment involves the formation of several chemical species (ICl, Cl2 and I2Cl6), of which the dimer is the most reactive. Since the dimer is the main species present in CCl4, high reactivity and potentially high functionalization levels of the CNTs can be expected. This method indeed introduced a notable amount of chlorine onto the MWCNT surface.
The next step was the reaction of CNT-Cl with three substrates, aniline, iodobenzene and phenol, for the formation of C-N, C-C and C-O bonds, respectively, in the presence of 1,10-phenanthroline and with cesium carbonate (Cs2CO3) as a base. As the CNT substrates, two multi-wall CNT (MWCNT) types were used: commercially available Nanocyl NC7000™ (9.6 nm diameter, 1.5 µm length, 90% purity) and thicker in-house MWCNTs synthesized in our laboratory by catalytic chemical vapour deposition (c-CVD). The in-house CNTs had diameters between 60 and 70 nm and lengths up to 300 µm. Since the classical Ullmann reaction was found to suffer from poor yields, we investigated the effect of various solvents (toluene, acetonitrile, dimethyl sulfoxide and N,N-dimethylformamide) on the coupling of the substrates. Given that aryl halides show the reactivity order I>Br>Cl>F, we also investigated the effect of iodine on the CNT surface on the reaction yield; in this case, iodine monochloride was used in the first step instead of iodine trichloride. Finally, we used the optimized reaction conditions with p-bromophenol and 1,2,4-trihydroxybenzene to control CNT dispersion. Keywords: carbon nanotubes, coupling reaction, functionalization, Ullmann reaction
Procedia PDF Downloads 168
50 Deep Learning Framework for Predicting Bus Travel Times with Multiple Bus Routes: A Single-Step Multi-Station Forecasting Approach
Authors: Muhammad Ahnaf Zahin, Yaw Adu-Gyamfi
Abstract:
Bus transit is a crucial component of transportation networks, especially in urban areas. Any intelligent transportation system must have accurate real-time information on bus travel times, since it minimizes waiting times for passengers at stations along a route, improves service reliability, and significantly optimizes travel patterns. Bus agencies must enhance the quality of their information service to serve their passengers better and attract more travelers, since people waiting at bus stops are frequently anxious about when the bus will arrive at their starting point and when it will reach their destination. Various models have recently been developed for predicting bus travel times, but most of them focus on smaller road networks because of their relatively subpar performance on vast, high-density urban networks. This paper develops a deep learning-based architecture using a single-step, multi-station forecasting approach to predict average bus travel times for numerous routes, stops, and trips on a large-scale network, using heterogeneous bus transit data collected from the GTFS database. Data was gathered from multiple bus routes in Saint Louis, Missouri, over one week. In this study, a Gated Recurrent Unit (GRU) neural network was employed to predict the mean vehicle travel times for different hours of the day for multiple stations along multiple routes. The number of historical time steps and the prediction horizon were set to 5 and 1, respectively, meaning that five hours of historical average travel time data were used to predict the average travel time for the following hour. Spatial and temporal information and the historical average travel times were extracted from the dataset as model input parameters. Station distances and sequence numbers were used as adjacency matrices for the spatial inputs, and the time of day (hour) was considered for the temporal inputs.
Other inputs, including volatility information such as the standard deviation and variance of journey durations, were also included to make the model more robust. The model's performance was evaluated using the mean absolute percentage error (MAPE). The observed prediction errors for the various routes, trips, and stations remained consistent throughout the day. The results showed that the developed model could predict travel times more accurately during peak traffic hours, with a MAPE of around 14%, and performed less accurately during the latter part of the day. In the context of a complicated transportation network in a high-density urban area, the model demonstrated its applicability to real-time travel time prediction for public transportation and ensured the high quality of its predictions. Keywords: gated recurrent unit, mean absolute percentage error, single-step forecasting, travel time prediction
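The windowing and evaluation described above (five historical hours predicting the next hour, scored by MAPE) can be sketched as follows. The travel times are hypothetical, and the GRU itself is omitted; only the data preparation and metric are shown:

```python
def make_windows(series, n_steps=5, horizon=1):
    """Split a series into (input window, target) pairs:
    n_steps past hours predict the value `horizon` hours ahead."""
    pairs = []
    for i in range(len(series) - n_steps - horizon + 1):
        pairs.append((series[i:i + n_steps], series[i + n_steps + horizon - 1]))
    return pairs

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical mean travel times (minutes) for nine consecutive hours
times = [12.0, 13.5, 15.0, 18.2, 20.1, 19.4, 16.8, 14.2, 13.0]
windows = make_windows(times)        # yields 4 (window, target) pairs
error = mape([20.0, 16.0], [17.2, 18.0])
```

Each (window, target) pair would be one training sample for the GRU; MAPE is then computed per hour of day to expose the peak-hour versus late-day accuracy difference reported above.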
Procedia PDF Downloads 73
49 Adapting an Accurate Reverse-time Migration Method to USCT Imaging
Authors: Brayden Mi
Abstract:
Reverse time migration (RTM) has been widely used in the petroleum exploration industry since the early 1980s to reveal subsurface images and to detect rock and fluid properties. The seismic technology involves the construction of a velocity model, through interpretive model building, seismic tomography, or full waveform inversion, followed by the reverse-time propagation of the acquired seismic data together with the wavelet used in the acquisition. The methodology has matured from 2D imaging of simple media to present-day full 3D imaging under extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) uses travel-time inversion to reconstruct the velocity structure of an organ. With that velocity structure, USCT data can be migrated with the "bend-ray" method, whose counterpart in seismic applications is Kirchhoff depth migration, in which the source of reflective energy is traced by ray tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. RTM, on the other hand, fully accounts for wave phenomena, including multiple arrivals and turning rays caused by complex velocity structure, and can fully reconstruct any image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to the real-time imaging normally required in day-to-day medical operations. However, as computing technology improves, this computational bottleneck may cease to be a challenge in the near future. Present-day RTM algorithms are typically implemented from a flat datum for the seismic industry, but they can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided.
Such flexibility makes RTM convenient to implement for USCT imaging, provided the spatial coordinates of the transmitters and receivers are known and enough data is collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging that produces an accurate 3D acoustic image, based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time and a known ultrasound wavelet is propagated forward in time, with PSPI wavefield extrapolation and a piecewise-constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each partial image is subject to the limitation of its own illumination aperture, the stack of multiple partial images produces a full image of the organ, with a much-reduced noise level compared with the individual partial images. Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation
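The per-shot imaging condition and the stacking step can be sketched in miniature. The toy wavefields below are hypothetical stand-ins for the PSPI-extrapolated source and receiver wavefields, and the zero-lag cross-correlation shown is a common choice of imaging condition, not necessarily the paper's exact variant:

```python
def imaging_condition(src_wavefield, rcv_wavefield):
    """Zero-lag cross-correlation image: I(x) = sum over t of S(x, t) * R(x, t).
    Wavefields are indexed as wavefield[t][x] (time step, grid point)."""
    n_x = len(src_wavefield[0])
    image = [0.0] * n_x
    for s_t, r_t in zip(src_wavefield, rcv_wavefield):
        for x in range(n_x):
            image[x] += s_t[x] * r_t[x]
    return image

def stack(partial_images):
    """Stack partial images over shots; energy common to many apertures
    reinforces while per-shot noise is relatively suppressed."""
    return [sum(values) for values in zip(*partial_images)]

# Toy example: 2 time steps, 3 grid points, 2 shots
shot1 = imaging_condition([[1, 0, 0], [0, 1, 0]], [[1, 0, 0], [0, 2, 0]])
shot2 = imaging_condition([[0, 0, 1], [0, 1, 0]], [[0, 0, 1], [0, 2, 0]])
full_image = stack([shot1, shot2])
```

The reflector shared by both shots (the middle grid point) accumulates the largest stacked amplitude, which is exactly the mechanism by which stacking suppresses aperture-limited noise.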
Procedia PDF Downloads 75
48 Measuring Enterprise Growth: Pitfalls and Implications
Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić
Abstract:
Enterprise growth is generally considered a key driver of competitiveness, employment, economic development and social inclusion. As such, it is perceived by scholars and decision makers to be a highly desirable outcome of entrepreneurship. An extensive academic debate has produced a multitude of theoretical frameworks focused on explaining growth stages, determinants and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal and related to a variety of factors reflecting the individual, firm, organizational, industry or environmental determinants of growth. However, the factors that affect growth are not easily captured, the instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, a vast number of measurement constructs assessing growth are used interchangeably. Differences among the various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the main purpose of this paper is threefold: firstly, to compare the structure and performance of three growth prediction models based on the main growth measures, namely revenue, employment and asset growth; secondly, to explore the prospects of financial indicators, as exact, visible, standardized and accessible variables, to serve as determinants of enterprise growth; and finally, to contribute to the understanding of how the choice of growth measure shapes research results and recommendations for growth. The models include a range of financial indicators as lagged determinants of enterprise performance during 2008-2013, extracted from the national register of financial statements of SMEs in Croatia.
The design and testing stages of the modeling used logistic regression procedures. Findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between a particular predictor and a growth measure is inconsistent: the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy for growth and yield adequate predictive power in the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but are, unlike them, accessible, available, exact and free of perceptual nuances in building up the model. The selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers. Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises
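A fitted logistic regression model of the kind described scores a firm as follows. The coefficients and indicator values below are hypothetical illustrations, not those estimated from the Croatian SME register:

```python
import math

def growth_probability(indicators, coefficients, intercept):
    """Logistic regression score:
    P(growth) = 1 / (1 + exp(-(b0 + sum over i of b_i * x_i)))."""
    z = intercept + sum(coefficients[name] * value for name, value in indicators.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical lagged financial indicators for one SME
firm = {"liquidity_ratio": 1.8, "debt_to_assets": 0.45, "profit_margin": 0.07}
coef = {"liquidity_ratio": 0.6, "debt_to_assets": -1.2, "profit_margin": 4.0}
p = growth_probability(firm, coef, intercept=-0.5)
```

Fitting three such models, one per growth measure (revenue, employment, assets), and comparing their coefficient sets is how the inconsistency between predictors reported above becomes visible.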
Procedia PDF Downloads 253
47 Review of Concepts and Tools Applied to Assess Risks Associated with Food Imports
Authors: A. Falenski, A. Kaesbohrer, M. Filter
Abstract:
Introduction: Risk assessments can be performed in various ways and at different degrees of complexity. In order to assess risks associated with imported foods, additional information needs to be taken into account compared to a risk assessment of regional products. The present review gives an overview of currently available best-practice approaches and data sources used for food import risk assessments (IRAs). Methods: A literature review was performed. PubMed was searched for articles about food IRAs published between 2004 and 2014 (English and German texts only, search string "(English [la] OR German [la]) (2004:2014 [dp]) import [ti] risk"). Titles and abstracts were screened for import risks in the context of IRAs. The finally selected publications were analysed according to a predefined questionnaire extracting the following information: risk assessment guidelines followed, modelling methods used, data and software applied, and the existence of an analysis of uncertainty and variability. IRAs cited in these publications were also included in the analysis. Results: The PubMed search returned 49 publications, 17 of which contained information about import risks and risk assessments. Within these, 19 cross-references were identified as of interest for the present study; these included original articles, reviews and guidelines. At least one of the guidelines of the World Organisation for Animal Health (OIE) or of the Codex Alimentarius Commission was referenced in each of the IRAs, for imports of animals or for imports concerning foods, respectively. Interestingly, a combination of both was also used to assess the risk associated with the import of live animals serving as a source of food. Methods ranged from fully quantitative IRAs using probabilistic models and dose-response models to qualitative IRAs in which decision trees or severity tables were set up using parameter estimates based on expert opinion. Calculations were done using @Risk, R or Excel.
The type of data used was the most heterogeneous aspect, ranging from general information on imported goods (food, live animals) to pathogen prevalence in the country of origin. These data were either publicly available in databases or lists (e.g., OIE WAHID and Handistatus II, FAOSTAT, Eurostat, TRACES), accessible at a national level (e.g., herd information), or open only to a small group of people (flight passenger import data at a national airport customs office). An uncertainty analysis was mentioned in some of the IRAs, but calculations were performed in only a few cases. Conclusion: The current state of the art in the assessment of risks of imported foods is characterized by great heterogeneity in both general methodology and the data used. Information is often gathered on a case-by-case basis and reformatted by hand in order to perform the IRA. This analysis therefore illustrates the need for a flexible, modular framework supporting the connection of existing data sources with data analysis and modelling tools. Such an infrastructure could pave the way to IRA workflows applicable ad hoc, e.g. in a crisis situation. Keywords: import risk assessment, review, tools, food import
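The quantitative end of the spectrum described above can be sketched with a minimal binomial risk calculation of the kind a probabilistic IRA builds on. The prevalence and trade volumes below are hypothetical:

```python
def expected_contaminated(consignments, units_per_consignment, prevalence):
    """Expected number of contaminated units in a year's imports."""
    return consignments * units_per_consignment * prevalence

def p_at_least_one(consignments, units_per_consignment, prevalence):
    """Probability that at least one imported unit is contaminated,
    assuming units are independent: 1 - (1 - p)^n."""
    n = consignments * units_per_consignment
    return 1.0 - (1.0 - prevalence) ** n

# Hypothetical inputs: 200 consignments/year of 50 units each,
# 0.1% pathogen prevalence in the country of origin
expected = expected_contaminated(200, 50, 0.001)
risk = p_at_least_one(200, 50, 0.001)
```

A full quantitative IRA would replace the point-value prevalence with a distribution and propagate it (e.g. by Monte Carlo simulation in @Risk or R), which is precisely where the uncertainty analysis noted above enters.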
Procedia PDF Downloads 302
46 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer
Authors: Binder Hans
Abstract:
Cancer is no longer seen solely as a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning that can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unravelling the interplay between different omics levels such as the genome, transcriptome and epigenome is indispensable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to chart the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and to elucidate cancer genesis, progression and heterogeneity. Basic challenges and tasks arise ‘beyond sequencing’ because of the size of the data, their complexity, the need to search for hidden structures in the data, the need for knowledge mining to discover biological function, and the need for systems-biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOMs) represent one interesting option for tackling these bioinformatics tasks. The SOM method enables the recognition of complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options.
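The portrayal idea can be illustrated with a minimal self-organizing map: each sample's feature vector is projected onto a small 2-D grid, so that distinct molecular subtypes occupy distinct map regions. This is a toy sketch only, with simulated "expression" data standing in for real omics profiles; the actual pipeline is far richer.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 6    # 6x6 map of nodes
DIM = 20    # number of features (toy stand-in for gene expression values)
weights = rng.normal(size=(GRID, GRID, DIM))

# Two toy "subtypes" with clearly separated mean expression profiles.
data = np.vstack([
    rng.normal(loc=+1.0, size=(50, DIM)),
    rng.normal(loc=-1.0, size=(50, DIM)),
])

def best_matching_unit(x):
    """Grid coordinates of the node closest to sample x."""
    d = np.linalg.norm(weights - x, axis=2)      # distance to every node
    return np.unravel_index(np.argmin(d), d.shape)

# Classic online SOM training: shrinking neighbourhood and learning rate.
coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID),
                              indexing="ij"), axis=2)
for t in range(2000):
    x = data[rng.integers(len(data))]
    bmu = np.array(best_matching_unit(x))
    sigma = 3.0 * np.exp(-t / 1000)              # neighbourhood radius
    lr = 0.5 * np.exp(-t / 1000)                 # learning rate
    h = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma ** 2))
    weights += lr * h[:, :, None] * (x - weights)

# Samples from different subtypes should land on different map regions.
bmu_a = best_matching_unit(data[0])
bmu_b = best_matching_unit(data[-1])
print(bmu_a, bmu_b)
```

The trained weight grid, coloured per node, is exactly the kind of image-like "portrait" the method produces, one per sample or data type.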
Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the analysis of complex diseases such as gliomas, melanomas and colon cancer at the molecular level. As an important new challenge, we address the combined portrayal of different omics data such as genome-wide genomic, transcriptomic and methylomic data. The integrative-omics portrayal approach is based on joint training on the data, and it provides separate personalized data portraits for each patient and data type, which can be analysed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact. Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas
Procedia PDF Downloads 149