Search results for: specific methane production
750 In vitro Antimicrobial Resistance Pattern of Bovine Mastitis Bacteria in Ethiopia
Authors: Befekadu Urga Wakayo
Abstract:
Introduction: Bacterial infections represent major human and animal health problems in Ethiopia. In the face of poor antibiotic regulatory mechanisms, the development of antimicrobial resistance (AMR) to commonly used drugs has become a growing health and livelihood threat in the country. Monitoring and control of AMR demand close collaboration between human and veterinary services as well as other relevant stakeholders. However, the risk of AMR transfer from animal to human populations remains poorly explored in Ethiopia. This systematic literature review attempted to give an overview of the AMR challenges of bovine mastitis bacteria in Ethiopia. Methodology: A web-based literature search and analysis strategy was used. The databases considered included PubMed, Google Scholar, the Ethiopian Veterinary Association (EVA) and the Ethiopian Society of Animal Production (ESAP). The key search terms and phrases were: Ethiopia, dairy, cattle, mastitis, bacteria isolation, antibiotic sensitivity and antimicrobial resistance. Ultimately, 15 research reports were used for the current analysis. Data extraction was performed using a structured Microsoft Excel format. AMR prevalence (%) was recorded directly or calculated from reported values. Statistical analysis was performed in SPSS 16. Variables were summarized using frequencies (n or %), mean ± SE and box plots. One-way ANOVA and independent t-tests were used to evaluate variation in AMR prevalence estimates (ln-transformed). Statistical significance was determined at p < 0.05. Results: AMR in bovine mastitis bacteria was investigated in a total of 592 in vitro antibiotic sensitivity trials involving 12 different mastitis bacteria (1126 Gram-positive and 77 Gram-negative isolates) and 14 antibiotics. Bovine mastitis bacteria exhibited AMR to most of the antibiotics tested. Gentamicin had the lowest average AMR in both Gram-positive (2%) and Gram-negative (1.8%) bacteria. 
Gram-negative mastitis bacteria showed higher mean in vitro resistance levels to Erythromycin (72.6%), Tetracycline (56.65%), Amoxicillin (49.6%), Ampicillin (47.6%), Clindamycin (47.2%) and Penicillin (40.6%). Among Gram-positive mastitis bacteria, higher mean in vitro resistance was observed for Ampicillin (32.8%), Amoxicillin (32.6%), Penicillin (24.9%), Streptomycin (20.2%), penicillinase-resistant penicillins (15.4%) and Tetracycline (14.9%). More specifically, S. aureus exhibited high mean AMR against Penicillin (76.3%) and Ampicillin (70.3%), followed by Amoxicillin (45%), Streptomycin (40.6%), Tetracycline (24.5%) and Clindamycin (23.5%). E. coli showed high mean AMR to Erythromycin (78.7%), Tetracycline (51.5%), Ampicillin (49.25%), Amoxicillin (43.3%), Clindamycin (38.4%) and Penicillin (33.8%). Streptococcus spp. demonstrated higher (p = 0.005) mean AMR against Kanamycin (> 20%) and full sensitivity (100%) to Clindamycin. Overall, mean Tetracycline (p = 0.013), Gentamicin (p = 0.001), Polymyxin (p = 0.034), Erythromycin (p = 0.011) and Ampicillin (p = 0.009) resistance was higher in the 2010s than in the 2000s. Conclusion: The review indicated a rising AMR challenge among bovine mastitis bacteria in Ethiopia. The corresponding public health implications demand a deeper, integrated investigation.
Keywords: antimicrobial resistance, dairy cattle, Ethiopia, mastitis bacteria
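As a quick illustration of the decade comparison described above, here is a minimal sketch using entirely hypothetical prevalence values (the abstract does not report per-trial data): prevalences are ln-transformed and compared with a pooled two-sample t-test, with the df = 8 critical value standing in for an exact p-value.

```python
# Sketch of the review's decade comparison. The prevalence values below are
# hypothetical, chosen only to illustrate the method.
import math
import statistics

prev_2000s = [8.0, 12.5, 10.2, 15.0, 9.8]    # hypothetical trials, 2000s (%)
prev_2010s = [18.4, 25.0, 22.1, 30.5, 19.9]  # hypothetical trials, 2010s (%)

# Ln-transform the prevalence estimates, as in the abstract
ln_a = [math.log(p) for p in prev_2000s]
ln_b = [math.log(p) for p in prev_2010s]

# Pooled-variance independent two-sample t statistic
n_a, n_b = len(ln_a), len(ln_b)
pooled_var = ((n_a - 1) * statistics.variance(ln_a)
              + (n_b - 1) * statistics.variance(ln_b)) / (n_a + n_b - 2)
t = (statistics.mean(ln_b) - statistics.mean(ln_a)) / math.sqrt(
    pooled_var * (1 / n_a + 1 / n_b))

# Two-tailed critical value for df = 8 at alpha = 0.05 is about 2.306,
# so |t| above that threshold corresponds to p < 0.05.
print(f"t = {t:.2f}, significant: {abs(t) > 2.306}")  # → t = 5.32, significant: True
```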
Procedia PDF Downloads 245
749 Comparison and Validation of a dsDNA biomimetic Quality Control Reference for NGS based BRCA CNV analysis versus MLPA
Authors: A. Delimitsou, C. Gouedard, E. Konstanta, A. Koletis, S. Patera, E. Manou, K. Spaho, S. Murray
Abstract:
Background: There remains a lack of international standard control reference materials for next generation sequencing-based approaches or device calibration. We have designed and validated dsDNA biomimetic reference materials for such targeted approaches, incorporating proprietary motifs (patent pending) for device/test calibration. They enable internal single-sample calibration, avoiding comparison of samples against pooled historical population-based data assemblies or statistical modelling approaches. We have validated such an approach for BRCA copy number variation analysis using iQRS™-CNVSUITE versus multiplex ligation-dependent probe amplification (MLPA). Methods: Standard BRCA copy number variation analysis was compared between multiplex ligation-dependent probe amplification and next generation sequencing using a cohort of 198 breast/ovarian cancer patients. Next generation sequencing-based copy number variation analyses of samples spiked with iQRS™ dsDNA biomimetics were performed using the proprietary CNVSUITE software. Multiplex ligation-dependent probe amplification analyses were performed on an ABI-3130 sequencer and analysed with the Coffalyser software. Results: Concordance of BRCA copy number variation events between multiplex ligation-dependent probe amplification and CNVSUITE indicated an overall sensitivity of 99.88% and specificity of 100% for iQRS™-CNVSUITE. The negative predictive value of iQRS™-CNVSUITE for BRCA was 100%, allowing for accurate exclusion of any event. The positive predictive value was 99.88%, with no discrepancy between multiplex ligation-dependent probe amplification and iQRS™-CNVSUITE. For device calibration purposes, precision was 100%; spiking of patient DNA demonstrated linearity to 1% (±2.5%) and a range from 100 copies. Following training of iQRS™-CNVSUITE using spiked iQRS™ calibrator and control mocks, traditional training was supplemented by predefining the calibrator-to-sample cut-off (lock-down) for amplicon gain or loss based upon a relative ratio threshold. 
BRCA copy number variation analysis using iQRS™-CNVSUITE was successfully validated and ISO 15189 accredited, and now enters CE-IVD performance evaluation. Conclusions: The inclusion of a reference control competitor (iQRS™ dsDNA mimetic) in next generation sequencing-based testing offers a more robust, sample-independent approach for the assessment of copy number variation events compared to multiplex ligation-dependent probe amplification. The approach simplifies data analyses, improves independent sample data analyses, and allows for direct comparison to an internal reference control for sample-specific quantification. Our iQRS™ biomimetic reference materials allow for single-sample copy number variation analytics and further decentralisation of diagnostics to single patient sample assessment.
Keywords: validation, diagnostics, oncology, copy number variation, reference material, calibration
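The reported concordance figures all derive from a standard 2×2 confusion table. A short sketch with illustrative counts (not the study's actual tallies) shows how sensitivity, specificity, PPV and NPV are computed:

```python
# Concordance metrics from a 2x2 confusion table. Counts are illustrative
# placeholders, not the study's actual event tallies.
def concordance_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV versus a reference method."""
    return {
        "sensitivity": tp / (tp + fn),  # true events correctly called
        "specificity": tn / (tn + fp),  # non-events correctly excluded
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical CNV calls versus the MLPA reference
m = concordance_metrics(tp=999, fp=0, tn=500, fn=1)
print({k: round(v, 3) for k, v in m.items()})
# → {'sensitivity': 0.999, 'specificity': 1.0, 'ppv': 1.0, 'npv': 0.998}
```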
Procedia PDF Downloads 66
748 The Predictive Power of Successful Scientific Theories: An Explanatory Study on Their Substantive Ontologies through Theoretical Change
Authors: Damian Islas
Abstract:
Debates on realism in science concern two different questions: (I) whether the unobservable entities posited by theories can be known; and (II) whether any knowledge we have of them is objective or not. Question (I) arises from the doubt that since observation is the basis of all our factual knowledge, unobservable entities cannot be known. Question (II) arises from the doubt that since scientific representations are inextricably laden with the subjective, idiosyncratic, and a priori features of human cognition and scientific practice, they cannot convey any reliable information on how their objects are in themselves. One way of understanding scientific realism (SR) is through three lines of inquiry: ontological, semantic, and epistemological. Ontologically, scientific realism asserts the existence of a world independent of the human mind. Semantically, scientific realism assumes that theoretical claims about reality have truth values and, thus, should be construed literally. Epistemologically, scientific realism holds that theoretical claims offer us knowledge of the world. Nowadays, the literature on scientific realism has proceeded rather far beyond the realism versus antirealism debate. Structural realism represents a middle-ground position between the two, according to which science can attain justified true beliefs concerning relational facts about the unobservable realm but cannot attain justified true beliefs concerning the intrinsic nature of any objects occupying that realm. That is, the structural content of scientific theories about the unobservable can be known, but facts about the intrinsic nature of the entities that figure as place-holders in those structures cannot be known. There are two possible versions of structural realism: Epistemological Structural Realism (ESR) and Ontic Structural Realism (OSR). 
On ESR, an agnostic stance is preserved with respect to the natures of unobservable entities, but the possibility of knowing the relations obtaining between those entities is affirmed. OSR includes the rather striking claim that when it comes to the unobservables theorized about within fundamental physics, relations exist, but objects do not. Focusing on ESR, questions arise concerning its ability to explain the empirical success of a theory. Empirical success certainly involves predictive success, and predictive success implies a theory’s power to make accurate predictions. But a theory’s power to make any predictions at all seems to derive precisely from its core axioms or laws concerning unobservable entities and mechanisms, and not simply from the sort of structural relations often expressed in equations. The specific challenge to ESR concerns its ability to explain the explanatory and predictive power of successful theories without appealing to their substantive ontologies, which are often not preserved by their successors. The response to this challenge will depend on the various and subtly different versions of the ESR and OSR stances, which show a sort of progression, through eliminativist OSR to moderate OSR, of gradual increase in the ontological status accorded to objects. Knowing the relations between unobserved entities is methodologically identical to asserting that these relations between unobserved entities exist.
Keywords: eliminativist ontic structural realism, epistemological structuralism, moderate ontic structural realism, ontic structuralism
Procedia PDF Downloads 118
747 Aspects Concerning the Use of Recycled Concrete Aggregates
Authors: Ion Robu, Claudiu Mazilu, Radu Deju
Abstract:
Natural aggregates (gravel and crushed stone) are essential non-renewable resources used for infrastructure works and civil engineering. In the European Union member states of Southeast Europe, it is estimated that the construction industry will grow by 4.2%, further complicating aggregate supply management. In addition, a significant problem associated with the aggregates industry is the wasting of potential resources through the dumping of inert waste, especially waste from construction and demolition activities. In 2012, in Romania, less than 10% of construction and demolition waste (including concrete) was valorized, while the European Union requires that by 2020 this proportion should be at least 70% (Directive 2008/98/EC on waste, transposed into Romanian legislation by Law 211/2011). Depending on the efficiency of waste processing and the quality of the recycled concrete aggregate (RCA) obtained, poor-quality aggregate can be used as foundation material for roads, while high-quality aggregate can be used in new concrete for construction. To obtain good-quality concrete using recycled aggregate, it is necessary to meet the minimum requirements defined by the rules for the manufacture of concrete with natural aggregate. The properties of the recycled aggregate (density, granulometry, granule shape, water absorption, weight loss in the Los Angeles test, attached mortar content, etc.) are the basis for concrete quality; establishing appropriate proportions between components and the concrete production methods are also extremely important for its quality. This paper presents a study on the use of recycled aggregates, from a concrete of specified class, to produce new cement concrete with different percentages of recycled aggregates. To obtain the recycled aggregates, several batches of concrete of classes C16/20, C25/30 and C35/45 were made, the composition calculations being made according to NE 012/2007 and CP 012/2007. 
Tests for producing recycled aggregate were carried out using concrete samples of the three established classes after 28 days of storage under the above conditions. Cubes with 150 mm sides were crushed in a first stage with a Liebherr jaw crusher set at 50 mm nominal size. The resulting material was separated by sieving into granulometric fractions, and the 10–50 mm fraction was used for preliminary crushing tests in the second stage with a Retsch BB 200 jaw crusher and a Buffalo Shuttle WA-12-H hammer crusher, respectively. The influence of the type of crusher used to obtain recycled aggregates on granulometry and granule shape was highlighted, as was the influence of the attached mortar on density, water absorption, behavior in the Los Angeles test, etc. The proportion of attached mortar was determined and correlated with the provenance concrete class of the recycled aggregates and their granulometric fraction. The aim of characterizing the recycled aggregates is their valorization in new concrete used in construction. In this regard, a series of concretes was made in which the recycled aggregate content was varied from 0 to 100%. The new concretes were characterized in terms of the change in density and compressive strength with the proportion of recycled aggregates. It has been shown that an increase in recycled aggregate content does not necessarily mean a reduction in compressive strength, the quality of the aggregate having a decisive role.
Keywords: recycled concrete aggregate, characteristics, recycled aggregate concrete, properties
Procedia PDF Downloads 213
746 Flexural Performance of the Sandwich Structures Having Aluminum Foam Core with Different Thicknesses
Authors: Emre Kara, Ahmet Fatih Geylan, Kadir Koç, Şura Karakuzu, Metehan Demir, Halil Aykul
Abstract:
The structures obtained with the use of sandwich technologies combine low weight with high energy absorbing capacity and load carrying capacity. Hence, there is growing and marked interest in the use of sandwiches with an aluminum foam core because of their very good properties, such as flexural rigidity and energy absorption capability. Static (bending and penetration) and dynamic (dynamic bending and low velocity impact) tests were already performed on aluminum foam cored sandwiches with different types of outer skins by some of the authors. In the current investigation, static three-point bending tests were carried out on sandwiches with an aluminum foam core and glass fiber reinforced polymer (GFRP) skins at different values of support span distance (L = 55, 70, 80, 125 mm), with the aim of analyzing their flexural performance. The influence of the core thickness and the GFRP skin type is reported in terms of peak load, energy absorption capacity and energy efficiency. For this purpose, skins with two different types of fabric ([0°/90°] cross-ply E-Glass woven and [0°/90°] cross-ply S-Glass woven, both with a thickness of 1.5 mm) and aluminum foam cores with two different thicknesses (h = 10 and 15 mm) were bonded with a commercial polyurethane-based flexible adhesive to assemble the composite sandwich panels. The GFRP skins, fabricated via the Vacuum Assisted Resin Transfer Molding (VARTM) technique, can be easily bonded to the aluminum foam core, and it is possible to configure the base materials (skin, adhesive and core), fiber angle orientation and number of layers for a specific application. The main results of the bending tests are: force-displacement curves, peak force values, absorbed energy, energy efficiency, collapse mechanisms and the effect of the support span length and core thickness. 
The results of the experimental study showed that the sandwiches with skins made of S-Glass woven fabric and with the thicker foam core presented higher mechanical values, such as load carrying and energy absorption capacities. Increasing the support span distance decreased the mechanical values for each type of panel, as expected, because of the inverse proportion between force and span length. The most common failure types of the sandwiches are debonding of the upper or lower skin and core shear. The obtained results have particular importance for applications that require lightweight structures with a high capacity of energy dissipation, such as the transport industry (automotive, aerospace, shipbuilding and marine), where collision and crash problems have increased in recent years.
Keywords: aluminum foam, composite panel, flexure, transport application
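The bending metrics named above can be computed directly from a force-displacement curve. Below is a minimal sketch on a synthetic curve, assuming the common definition of energy efficiency as absorbed energy relative to an ideal absorber that holds the peak force over the full displacement:

```python
# Peak load, absorbed energy and energy efficiency from a (synthetic)
# three-point-bending force-displacement curve. Units assumed: N and mm.
def bending_metrics(displacement, force):
    peak = max(force)
    # Absorbed energy: area under the force-displacement curve (trapezoidal rule)
    energy = sum((force[i] + force[i + 1]) / 2 * (displacement[i + 1] - displacement[i])
                 for i in range(len(force) - 1))
    # Efficiency relative to an ideal absorber holding the peak force throughout
    efficiency = energy / (peak * (displacement[-1] - displacement[0]))
    return peak, energy, efficiency

# Synthetic curve for illustration only (not measured data)
d = [0.0, 1.0, 2.0, 3.0, 4.0]          # mm
f = [0.0, 800.0, 1200.0, 900.0, 700.0]  # N
peak, energy, eff = bending_metrics(d, f)
print(f"peak = {peak} N, energy = {energy} N·mm, efficiency = {eff:.3f}")
```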
Procedia PDF Downloads 338
745 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem
Authors: Nan Xu
Abstract:
In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with off days, training and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the rostering objective consists of two major components. The first is to minimize the number of unassigned pairings, and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly-hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function. Since several small deviations are preferred over one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem, and exactly one roster is selected for each crew member such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible crew rosters for each crew member. 
A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in the graph. A labeling algorithm is used to solve it. Since the penalization is quadratic, a method to deal with the non-additive shortest path problem using a labeling algorithm is proposed, and the corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows some soft rules to be relaxed, which can improve the coverage rate; 3) multi-thread techniques are used to improve the efficiency of the algorithm when generating lines of work for crew members. In summary, a column generation based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm we propose in this paper has been put into production at a major airline in China, and numerical experiments show that it has good performance.
Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC
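The quadratic fairness penalty described above can be sketched in a few lines; the weights and rosters below are illustrative placeholders, not the airline's actual parameters. Squaring the deviations is what makes several small deviations cheaper than one large deviation:

```python
# Quadratic fairness penalty: squared deviations of each crew member's
# overnight duties and fly-hours from the fleet average. Weights are
# illustrative assumptions, not values from the paper.
def fairness_penalty(rosters, w_night=1.0, w_fly=0.1):
    """rosters: list of (overnight_duties, fly_hours) per crew member."""
    n = len(rosters)
    avg_night = sum(r[0] for r in rosters) / n
    avg_fly = sum(r[1] for r in rosters) / n
    return sum(w_night * (night - avg_night) ** 2
               + w_fly * (fly - avg_fly) ** 2
               for night, fly in rosters)

# Several small deviations cost less than one large deviation:
balanced = [(4, 80.0), (5, 82.0), (4, 81.0), (5, 81.0)]
skewed   = [(2, 70.0), (7, 94.0), (4, 80.0), (5, 80.0)]
print(fairness_penalty(balanced) < fairness_penalty(skewed))  # → True
```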
Procedia PDF Downloads 146
744 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics
Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere
Abstract:
Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, thereby exhibiting strong fluctuations at all time scales and requiring a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique that is part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals; it consists of decomposing a signal in an auto-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), and thereby acts as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data driven and provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled the Empirical Wavelet Transform (EWT), which consists of building a bank of filters from the segmentation of the original signal's Fourier spectrum. The method is based on the idea utilized in the construction of both Littlewood-Paley and Meyer wavelets. The heart of the method lies in the segmentation of the Fourier spectrum based on local maxima detection, in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore makes it possible to overcome the mode-mixing problem. 
On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not allow the detected frequencies to be associated with a specific mode of variability, as the EMD technique does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on the coupling of the EMD and EWT techniques, which uses the spectral content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, then the EAWD technique is presented. A comparison of the results obtained respectively by the EMD, EWT and EAWD techniques on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet
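The local-maxima segmentation step at the heart of EWT can be sketched in a few lines. This is a deliberately simplified version: real EWT implementations refine the boundary placement (for example, by using the lowest minimum between consecutive maxima), but placing boundaries at the midpoints conveys the idea:

```python
# Simplified EWT-style spectrum segmentation: find local maxima of a Fourier
# magnitude spectrum, then place filter-bank boundaries midway between
# consecutive maxima. The spectrum below is a synthetic toy example.
def segment_boundaries(spectrum):
    # Indices where the magnitude is strictly larger than both neighbours
    maxima = [i for i in range(1, len(spectrum) - 1)
              if spectrum[i - 1] < spectrum[i] > spectrum[i + 1]]
    # One boundary between each pair of consecutive maxima
    return [(maxima[k] + maxima[k + 1]) / 2 for k in range(len(maxima) - 1)]

# Toy magnitude spectrum with three peaks (at bins 2, 7 and 11)
mag = [0, 1, 5, 1, 0, 0, 2, 8, 2, 0, 1, 4, 1, 0]
print(segment_boundaries(mag))  # → [4.5, 9.0]
```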
Procedia PDF Downloads 137
743 Reflections on the Trajectory of an Online Literature Cafe through Its Music and Arts Activities
Authors: Mariko Hara, Mari Aoki, Takako Ito, Masao Sugita
Abstract:
Social distancing measures due to the COVID-19 crisis had a severe impact on music and art practices based in community settings. Practitioners had to re-think how to connect with their dispersed communities using online tools. As social distancing continues, there is an urgent need to investigate the possibilities of online community music and art practices: are they sustainable actions that can have positive impacts on the community and on people's quality of life over time? The Online Lindgren Café (hereafter ‘OLC’) is a monthly online literature event which started in June 2020. In the OLC, up to 14 members meet online to discuss the works of Astrid Lindgren and similar authors. Members come from various places in Japan and Norway, with a variety of expertise spanning music therapy, music education, psychotherapy, music sociology, storytelling and theatre, and their family members join them. In these meetings, music and arts activities emerged in response to interests among the members; the resources and experiences of the members helped to develop these activities further. This paper first introduces one of the music and art activities in one specific event, a collaborative picture book-making with music, which was initiated and led by the second author. The third author chose the music, and the activity itself was recorded. This is followed by the description of a reflecting event, where the recording of the collaborative picture book-making activity was shared to facilitate further creations (drawings, haiku and fabric weaving) as well as group reflections on the trajectory of the Online Lindgren Café. Finally, we discuss the preliminary findings using the data collected at the reflecting event. Key findings suggest that the resource-driven approach of the OLC leveled the relationships among the intergenerational, multi-cultural and interdisciplinary members. 
This enabled the members to set aside their professional and/or predominant identities, which allowed them to discover their own and others’ resources. The relaxed, unstructured, and liminal phenomenon at the OLC can be regarded as a form of communitas, where members gain a sense of liberation and belonging in a different way from in-person communication. Participation from one’s home, as well as a video conferencing function that allowed the members to position themselves among the other participants in equal-sized windows, seems to have enabled members to feel safe to express themselves openly while at the same time feeling a sense of belonging. Furthermore, in the OLC, music and arts activities acted to inclusively connect and re-connect dispersed, intergenerational members with each other. For instance, in a music and drawing activity, music acted as a means for each member to engage in their own ‘drawing space’ while still feeling connected with the others. The positive experiences from these activities inspired the members to use similar approaches outside of the OLC. The findings suggest that, because of its resource-driven approach supported by music and arts activities, the OLC could be developed further as a permeable and sustainable action even after current social distancing measures are lifted.
Keywords: communitas, COVID-19, musical affordances, online community of practices, resource-driven approach
Procedia PDF Downloads 134
742 Writing the Roaming Female Self: Identity and Romantic Selfhood in Mary Wollstonecraft’s Letters Written during a Short Stay in Sweden, Denmark, and Norway (1796)
Authors: Kalyani Gandhi
Abstract:
The eighteenth century in Britain saw a great burst of activity in writing (letters, journals, newspapers, essays); often these modes of writing had a public-spirited bent, in step with the prevailing intellectual atmosphere. Mary Wollstonecraft was one of the leading intellectuals of the period who utilized letter writing to convey her thoughts on the exciting political developments of the late eighteenth century. Fusing together her anxieties and concerns about humanity in general and herself in particular, Wollstonecraft’s views of the world around her are filtered through the lens of her subjectivity. Wollstonecraft’s letters thus covered a wide range of topics on both the personal and the political level (for the two are often entwined in Wollstonecraft’s characteristic style of analysis), such as sentiment, gender, nature, the peasantry, the class system, the legal system, the political duties and rights of both rulers and subjects, death, immortality, religion, family and education. This paper therefore examines the manner in which Wollstonecraft utilizes letter-writing to constitute and develop Romantic selfhood, understand the world around her and illustrate her ideas on the political and social happenings in Europe. The primary text analyzed is Mary Wollstonecraft’s Letters Written During a Short Stay in Sweden, Denmark and Norway (1796), and the analysis of this text is supplemented by research into eighteenth-century British letter-writing culture, with a special emphasis on the epistolary habits of women. Within this larger framework, the paper examines the manner in which this hybrid of travel and epistolary writing aided Mary Wollstonecraft’s expression of Romantic selfhood, and how that expression was complicated by ideas of gender. 
This paper reveals Wollstonecraft’s text to be fraught with anxiety about the world around her and within her; thus, the personal-public nature of the epistolary format particularly suits her characteristic point of view that looks within and without. That is to say, Wollstonecraft’s anxieties about gender and self are as much about the women she sees in the world around her as they are about her young daughter and herself. Wollstonecraft constantly explores and examines this anxiety within the different but interconnected realms of politics, economics, history and society. In fact, it is her complex technique of entwining these concerns with a closer look at interpersonal relationships among men and women (she often mentions specific anecdotes and instances) that makes Wollstonecraft’s Letters so engaging and insightful. Wollstonecraft’s Letters is thus an exemplar of British Romantic writing in the manner in which it explores the bond between the individual and society. Wollstonecraft nuances this exploration by incorporating her concerns about women and the playing out of gender in society. Her Letters is therefore an invaluable contribution to the field of British Romanticism, particularly as it offers crucial insight into female Romantic writing that can broaden and enrich the current academic understanding of the field.
Keywords: British romanticism, letters, feminism, travel writing
Procedia PDF Downloads 215
741 Academic Knowledge Transfer Units in the Western Balkans: Building Service Capacity and Shaping the Business Model
Authors: Andrea Bikfalvi, Josep Llach, Ferran Lazaro, Bojan Jovanovski
Abstract:
Due to the continuous need to foster university-business cooperation in both developed and developing countries, some higher education institutions face the challenge of designing, piloting, operating, and consolidating knowledge and technology transfer units. University-business cooperation is at different maturity stages worldwide: some higher education institutions excel in these practices, but many others could be qualified as intermediate, or are situated at the very beginning of their knowledge transfer adventure. The latter face the imminent necessity of formally creating a technology transfer unit and drawing up its roadmap. The complexity of this operation is due to the various aspects that need to align and coordinate, including a major change in mission, vision, structure, priorities, and operations. Qualitative in approach, this study presents five case studies of higher education institutions located in the Western Balkans (two in Albania, two in Bosnia and Herzegovina, one in Montenegro) fully immersed in the entrepreneurial journey of creating their knowledge and technology transfer units. The empirical evidence is developed in a pan-European project, illustratively called KnowHub (reconnecting universities and enterprises to unleash regional innovation and entrepreneurial activity), which is being implemented in three countries and has resulted in at least 15 pilot cooperation agreements between academia and business. Based on a peer-mentoring approach involving more experienced and more mature technology transfer models of European partners located in Spain, Finland, and Austria, a series of initial lessons learned is already available. The findings show that each unit developed its own tailor-made approach to engaging with internal and external stakeholders and offering value to academic staff, students, and business partners. 
The technology underpinning KnowHub services and institutional commitment are found to be key success factors. Although specific strategies and plans differ, they are based on a general strategy developed jointly using common tools and methods of strategic planning and business modelling. The main output consists of good practices for designing, piloting, and initially operating units that aim to fully valorise the knowledge and expertise available in academia. Policymakers can also find valuable hints on key aspects considered vital for initial operations. The value of this contribution lies in its focus on the intersection of three perspectives (service orientation, organisational innovation, business model), since previous research has relied on a single topic or dual approaches, most frequently in the business context and less frequently in higher education.
Keywords: business model, capacity building, entrepreneurial education, knowledge transfer
Procedia PDF Downloads 141
740 Molecular Characterization, Host Plant Resistance and Epidemiology of Bean Common Mosaic Virus Infecting Cowpea (Vigna unguiculata L. Walp)
Authors: N. Manjunatha, K. T. Rangswamy, N. Nagaraju, H. A. Prameela, P. Rudraswamy, M. Krishnareddy
Abstract:
The identification of viruses infecting cowpea, especially potyviruses, is confusing. Even though there are several studies on viruses causing diseases in cowpea, they are difficult to distinguish based on symptoms and serological detection. Considering the differentiation of potyviruses as a constraint, the present study was initiated for the molecular characterization, host plant resistance and epidemiology of BCMV infecting cowpea. The etiological agent causing cowpea mosaic was identified as Bean Common Mosaic Virus (BCMV) on the basis of RT-PCR and electron microscopy: a PCR product of approximately 750 bp corresponding to the coat protein (CP) region of the virus was obtained, and long flexuous filamentous particles measuring about 952 nm, typical of the genus Potyvirus, were observed under the electron microscope. The genome of the characterized virus isolate had 10,054 nucleotides, excluding the 3’ terminal poly(A) tail. Comparison of the polyprotein of the virus with those of other potyviruses showed a similar genome organization, with 9 cleavage sites yielding 10 functional proteins. In pairwise sequence comparisons of individual genes, P1 was the most divergent, while the CP gene was the least divergent at both the nucleotide and amino acid levels. A phylogenetic tree constructed from multiple sequence alignments of the polyprotein nucleotide and amino acid sequences of cowpea BCMV and other potyviruses showed that the virus is closely related to BCMV-HB, whereas the soybean variant from China (KJ807806) and the NL1 isolate (AY112735) showed 93.8% (5’UTR) and 94.9% (3’UTR) homology, respectively, with other BCMV isolates. The virus was transmitted to different leguminous plant species and produced systemic symptoms under greenhouse conditions. Out of 100 cowpea genotypes screened, three genotypes, viz., IC 8966, V 5 and IC 202806, showed an immune reaction under both field and greenhouse conditions.
Single marker analysis (SMA) revealed that, out of 4 SSR markers linked to BCMV resistance, marker M135 explained 28.2% of the phenotypic variation (R²); the polymorphic information content (PIC) of these markers ranged from 0.23 to 0.37. Correlation and regression analysis showed that rainfall and minimum temperature had a significant negative impact on, and a strong relationship with, the aphid population, whereas only a weak correlation was observed with disease incidence. Path coefficient analysis revealed that most of the weather parameters, except minimum temperature, contributed indirectly to the aphid population and disease incidence. This study helps to identify specific gaps in knowledge for researchers who may wish to further analyse the science behind the complex interactions between vector, virus and host in relation to the environment. The resistant genotypes identified could be used effectively in resistance breeding programmes.
Keywords: cowpea, epidemiology, genotypes, virus
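For context, PIC values like those reported for the SSR markers are conventionally computed from allele frequencies with the Botstein et al. formula; the sketch below illustrates the calculation. The allele frequencies used here are invented for illustration and are not taken from the study.

```python
from itertools import combinations

def pic(allele_freqs):
    """Polymorphic information content (Botstein et al. formula):
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    assert abs(sum(allele_freqs) - 1.0) < 1e-9, "frequencies must sum to 1"
    homozygosity = sum(p ** 2 for p in allele_freqs)
    correction = sum(2 * (p * q) ** 2 for p, q in combinations(allele_freqs, 2))
    return 1.0 - homozygosity - correction

# A hypothetical biallelic SSR marker with allele frequencies 0.7 / 0.3:
print(round(pic([0.7, 0.3]), 3))  # 0.332, within the 0.23-0.37 range reported
```

A marker with more, evenly distributed alleles yields a higher PIC, which is why SSR markers are favoured for diversity and linkage studies.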
Procedia PDF Downloads 236
739 The Connection between Qom Seminaries and Interpretation of Sacred Sources in Ja‘farī Jurisprudence
Authors: Sumeyra Yakar, Emine Enise Yakar
Abstract:
Iran presents itself as Islamic, first and foremost, and thus it can be said that sharī’a is the political and social centre of the state. However, actual practice reveals distinct interpretations and understandings of the sharī’a. The research can be categorised within the framework of logic in Islamic law and theology. The first task of this paper is to identify how the sharī’a is understood in Iran by mapping out how judges apply the law in their respective jurisdictions. The attention then moves from a simple description of the diversity of sharī’a understandings to the question of how that diversity relates to social concepts and cultures. This, of course, necessitates a brief exploration of Iran’s historical background, which also allows for an understanding of sectarian influences and the significance of certain events. The main purpose is to reach an understanding of the process of applying sources to formulate solutions in accordance with sharī’a, and of how religious education is pursued in order to become an official judge. Ultimately, this essay explores these practices by linking them to the secondary sources of Islamic law. It is important to emphasise that the cultural components of Islamic law must be compatible with the aims of Islamic law and its fundamental sources. The sharī’a consists of more than just legal doctrines (fiqh) and interpretive activities (ijtihād). Its contextual and theoretical framework reveals a close relationship with the cultural and historical elements of society. This has meant that its traditional reproduction over time has relied on being embedded in a highly particular form of life. Thus, as acknowledged by pre-modern jurists, the sharī’a encompasses a comprehensive approach to the requirements of justice in legal, historical and political contexts.
In theological and legal areas that carry the specific authority of tradition, Iran adheres to Shīa’ doctrine, and this explains why the Shīa’ religious establishment maintains a dominant position in matters relating to law and the interpretation of sharī’a. The statements and interpretations of the tradition are distinctly different from sunnī interpretations, and so the use of different sources can be understood as the main reason for the discrepancies in the application of sharī’a between Iran and other Muslim countries. The sharī’a has often accommodated prevailing customs; moreover, it has developed legal mechanisms to allow for its adaptation to particular needs and circumstances in society. While jurists may operate within the realm of governance and politics, the moral authority of the sharī’a ensures that these actors legitimate their actions with reference to God’s commands. The Iranian regime enshrines the principle of vilāyāt-i faqīh (guardianship of the jurist), which enables jurists to resolve the conflict between law as an ideal system, in theory, and law in practice. The paper aims to show how the religious educational system in Iran works in harmony with the governmental authorities through the concept of vilāyāt-i faqīh and contributes to the creation of religious custom in society.
Keywords: guardianship of the jurist (vilāyāt-i faqīh), imitation (taqlīd), seminaries (hawza), Shi’i jurisprudence
Procedia PDF Downloads 223
738 Gathering Space after Disaster: Understanding the Communicative and Collective Dimensions of Resilience through Field Research across Time in Hurricane Impacted Regions of the United States
Authors: Jack L. Harris, Marya L. Doerfel, Hyunsook Youn, Minkyung Kim, Kautuki Sunil Jariwala
Abstract:
Organizational resilience refers to the ability to sustain business or general work functioning despite wide-scale interruptions. We focus on organizations and businesses as pillars of their communities and on how they attempt to sustain work when a natural disaster impacts their surrounding regions and economies. While it may be more common to think of resilience as a trait possessed by an organization, an emerging area of research recognizes that, for organizations and businesses, resilience is a set of processes constituted through communication, social networks, and organizing. Indeed, five processes (robustness, rapidity, resourcefulness, redundancy, and external availability through social media) have been identified as critical to organizational resilience. These organizing mechanisms involve multi-level coordination, where individuals intersect with groups, organizations, and communities. Because such interactions are often networks of people and organizations coordinating material resources, information, and support, they necessarily require some way to coordinate despite being displaced. Little is known, however, about whether physical and digital spaces can substitute for one another. We are thus guided by the question: is digital space sufficient when disaster creates a scarcity of physical space? This study presents a cross-case comparison based on field research from four different regions of the United States that were impacted by Hurricanes Katrina (2005), Sandy (2012), Maria (2017), and Harvey (2017). These four cases are used to extend the science of resilience by examining multi-level processes enacted by individuals, communities, and organizations that together contribute to the resilience of disaster-struck organizations, businesses, and their communities.
Using field research about organizations and businesses impacted by the four hurricanes, we coded data from interviews, participant observations, field notes, and document analysis drawn from New Orleans (post-Katrina), coastal New Jersey (post-Sandy), Houston, Texas (post-Harvey), and the lower keys of Florida (post-Maria). This paper identifies an additional organizing mechanism, networked gathering spaces, where citizens and organizations alike coordinate and facilitate information sharing, material resource distribution, and social support. Findings show that digital space alone is not a sufficient substitute for effectively sustaining organizational resilience during a disaster. Because the data are qualitative, we expand on this finding with specific ways in which organizations and the people who lead them worked around the problem of scarce space. We propose that gatherings after disaster are a sixth mechanism that contributes to organizational resilience.
Keywords: communication, coordination, disaster management, information and communication technologies, interorganizational relationships, resilience, work
Procedia PDF Downloads 171
737 Information and Communication Technology (ICT) Education Improvement for Enhancing Learning Performance and Social Equality
Authors: Heichia Wang, Yalan Chao
Abstract:
Social inequality is a persistent problem. One of the ways to address this problem is through education. At present, vulnerable groups often have less geographical access to educational resources. Compared with educational resources, however, communication equipment is easier for vulnerable groups to obtain. Now that information and communication technology (ICT) has entered the field of education, we can take advantage of the convenience that ICT provides, and the mobility that it brings makes learning independent of time and place. With mobile learning, teachers and students can start discussions in an online chat room without the limitations of time or place. However, precisely because mobile learning is so convenient, people tend to discuss problems in short online texts that lack detailed information, in an online environment that is not well suited to expressing ideas fully. The ICT education environment may therefore cause misunderstandings between teachers and students. In order to help teachers and students better understand each other's views, this study aims to analyse students' short texts and classify the students into several types of learning problems. In addition, this study attempts to compensate for possible omissions in short texts by extending them with external resources prior to classification. In short, by applying short text classification, this study can point out each student's learning problems and inform the instructor where the main focus of the future course should be, thus improving the ICT education environment. To achieve these goals, this research uses a convolutional neural network (CNN) to analyse short discussion content between teachers and students in an ICT education environment, dividing students into several main learning-problem groups to facilitate answering their questions.
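As an illustration of the classification step, the sketch below shows the forward pass of a minimal 1D-CNN text classifier in NumPy (embed tokens, convolve over the sequence, ReLU, max-pool over time, then a linear layer). The vocabulary, dimensions, and random weights are purely illustrative assumptions; they do not reproduce the study's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and untrained weights (assumptions, for illustration only).
vocab = {"teacher": 0, "student": 1, "error": 2, "loop": 3, "syntax": 4, "chat": 5}
EMB_DIM, N_FILTERS, KERNEL, N_CLASSES = 8, 4, 3, 3
embeddings = rng.normal(size=(len(vocab), EMB_DIM))
conv_filters = rng.normal(size=(N_FILTERS, KERNEL, EMB_DIM))
dense = rng.normal(size=(N_FILTERS, N_CLASSES))

def classify(tokens):
    """Forward pass of a 1D-CNN text classifier:
    embed -> convolve over time -> ReLU + max-pool -> linear -> class scores."""
    x = embeddings[[vocab[t] for t in tokens]]        # (seq_len, EMB_DIM)
    seq_len = x.shape[0]
    feature_maps = np.array([
        [np.sum(f * x[i:i + KERNEL]) for i in range(seq_len - KERNEL + 1)]
        for f in conv_filters
    ])                                                # (N_FILTERS, windows)
    pooled = np.maximum(feature_maps, 0).max(axis=1)  # max-over-time pooling
    return pooled @ dense                             # (N_CLASSES,) scores

scores = classify(["student", "syntax", "error", "chat", "loop"])
print(scores.shape, int(scores.argmax()))
```

In practice the filters and embeddings would be learned from labelled chat records, and the argmax over the scores would give the predicted learning-problem group.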
In addition, this study will further cluster sub-categories within each major learning type to indicate specific problems for each student. Unlike most neural network approaches, this study attempts to extend short texts with external resources before classifying them, in order to improve classification performance. The empirical work will pre-process the chat records between teachers and students, together with the course materials. An action system will be set up to compare the most similar parts of the teaching material with each student's chat history, to improve future classification performance. The short text classification step then uses the CNN to classify the enriched chat records into several major learning problems based on theory-driven categories. By applying these modules, this research hopes to clarify the main learning problems of students and inform teachers where they should focus future teaching.
Keywords: ICT education improvement, social equality, short text analysis, convolutional neural network
Procedia PDF Downloads 128
736 Intriguing Modulations in the Excited State Intramolecular Proton Transfer Process of Chrysazine Governed by Host-Guest Interactions with Macrocyclic Molecules
Authors: Poojan Gharat, Haridas Pal, Sharmistha Dutta Choudhury
Abstract:
Tuning the photophysical properties of guest dyes through host-guest interactions with macrocyclic hosts has been an attractive research area for the past few decades, as the resulting changes can be directly implemented in chemical sensing, molecular recognition, fluorescence imaging and dye laser applications. Excited state intramolecular proton transfer (ESIPT) is an intramolecular prototautomerization process displayed by certain dyes, and it is quite amenable to tuning by the presence of different macrocyclic hosts. The present study explores the interesting effects of p-sulfonatocalix[n]arene (SCXn) and cyclodextrin (CD) hosts on the excited-state prototautomeric equilibrium of Chrysazine (CZ), a model antitumour drug. CZ exists exclusively in its normal form (N) in the ground state. In the excited state, however, the excited N* form undergoes ESIPT along its pre-existing intramolecular hydrogen bonds, giving the excited-state prototautomer (T*). Accordingly, CZ shows a single absorption band due to the N form but two emission bands due to the N* and T* forms. Facile prototautomerization of CZ is considerably inhibited when the dye binds to SCXn hosts. However, in spite of its lower binding affinity, the inhibition is more profound with the SCX6 host than with the SCX4 host. For the CD-CZ system, the prototautomerization process is hindered by the presence of βCD but remains unaffected in the presence of γCD. The reduction of the prototautomerization of CZ by the SCXn and βCD hosts is unusual, because the T* form is less dipolar than N*; hence binding of CZ within the relatively hydrophobic host cavities should have enhanced the prototautomerization process. At the same time, given the similar chemical nature of the two CD hosts, their effects on the prototautomerization of CZ would also have been expected to be similar.
The atypical effects of the studied hosts on the prototautomerization of CZ are suggested to arise from the partial inclusion or external binding of CZ to the hosts. As a result, there is a strong possibility of intermolecular H-bonding between the CZ dye and the functional groups present at the portals of the SCXn and βCD hosts. Formation of these intermolecular H-bonds weakens the pre-existing intramolecular H-bonding network within the CZ molecule, and this consequently reduces the prototautomerization of the dye. Our results suggest that, rather than the binding affinity between dye and host, it is the orientation of CZ in the case of the SCXn-CZ complexes and the binding stoichiometry in the case of the CD-CZ complexes that play the predominant role in influencing the prototautomeric equilibrium of CZ. In the case of the SCXn-CZ complexes, the experimental findings are well supported by quantum chemical calculations. Similarly, for the CD-CZ systems, the binding stoichiometries obtained from geometry optimization studies of the complexes between CZ and the CD hosts correlate nicely with the experimental results: geometry optimization reveals a 1:1 stoichiometry for the βCD-CZ complexes, and 1:1, 1:2 and 2:2 stoichiometries for the γCD-CZ complexes, in good accordance with the observed effects of the βCD and γCD hosts on the ESIPT process of the CZ dye.
Keywords: intermolecular proton transfer, macrocyclic hosts, quantum chemical studies, photophysical studies
Procedia PDF Downloads 121
735 Antibacterial Activity of Rosmarinus officinalis (Rosemary) and Murraya koenigii (Curry Leaves) against Multidrug Resistant S. aureus and Coagulase Negative Staphylococcus Species
Authors: Asma Naim, Warda Mushtaq
Abstract:
Staphylococcus species are among the most versatile and adaptive organisms. They are widespread and naturally found on the skin, mucosa and nose in humans. Among them, Staphylococcus aureus is the most important species. These organisms act as opportunistic pathogens and can infect various organs of the host, causing anything from minor skin infections to severe toxin-mediated diseases and life-threatening nosocomial infections. Staphylococcus aureus has acquired resistance against β-lactam antibiotics through the production of β-lactamase, and Methicillin-Resistant Staphylococcus aureus (MRSA) strains have been reported with increasing frequency. MRSA strains have been associated with nosocomial as well as community-acquired infections. Medicinal plants have enormous potential as sources of antimicrobial substances and have long been used in traditional medicine. The search for medicinally valuable plants with antimicrobial activity is being emphasized because of increasing antibiotic resistance in bacteria. In the present study, the antibacterial potential of Rosmarinus officinalis (rosemary) and Murraya koenigii (curry leaves), two common household herbs used in food to enhance flavor and aroma, was evaluated. The crude aqueous infusions, decoctions and ethanolic extracts of curry leaves and rosemary, and the essential oil of rosemary, were investigated for antibacterial activity against multidrug-resistant Staphylococcus strains using the well diffusion method. Sixty multidrug-resistant clinical isolates of S. aureus (43) and coagulase-negative staphylococci (CoNS) (17) were screened against different concentrations of the crude extracts of Rosmarinus officinalis and Murraya koenigii. Out of these 60 isolates, 43 were sensitive to the aqueous infusion of rosemary, 23 to the aqueous decoction and 58 to the ethanolic extract, whereas 24 isolates were sensitive to the essential oil.
In the case of the curry leaves, no antibacterial activity was observed with the aqueous infusion and decoction, while only 14 isolates were sensitive to the ethanolic extract. The aqueous infusion of rosemary (50% concentration) exhibited a zone of inhibition of 21 (±5.69) mm against CoNS and 17 (±4.77) mm against S. aureus; the zone of inhibition of the 50% aqueous decoction of rosemary was likewise larger against CoNS, 17 (±5.78) mm, than against S. aureus, 13 (±6.91) mm; and the 50% ethanolic extract showed almost identical zones of inhibition for S. aureus, 22 (±3.61) mm, and CoNS, 21 (±7.64) mm. The essential oil of rosemary, in contrast, showed a greater zone of inhibition against S. aureus, 16 (±4.67) mm, than against CoNS, 15 (±6.94) mm. These results show that the ethanolic extract of rosemary has significant antibacterial activity. The aqueous infusion and decoction of curry leaves revealed no significant antibacterial potential against any of the Staphylococcus species, and the ethanolic extract showed only a weak response. Staphylococcus strains were susceptible to the crude extracts and essential oil of rosemary in a dose-dependent manner; the aqueous infusion showed the largest zones of inhibition, and the ethanolic extract also demonstrated antistaphylococcal activity. These results demonstrate that rosemary possesses antistaphylococcal activity.
Keywords: antibacterial activity, curry leaves, multidrug resistant, rosemary, S. aureus
Procedia PDF Downloads 248
734 Visuospatial Perspective Taking and Theory of Mind in a Clinical Approach: Development of a Task for Adults
Authors: Britt Erni, Aldara Vazquez Fernandez, Roland Maurer
Abstract:
Visuospatial perspective taking (VSPT) is a process that allows us to integrate spatial information from different points of view, and to transform the mental images we have of the environment so as to properly orient our movements and anticipate the location of landmarks during navigation. VSPT is also related to egocentric perspective transformations (imagined rotations or translations of one's point of view) and to inferring the visuospatial experiences of another person (e.g., whether and how another person sees objects). This process is deeply related to a wide-ranging capacity called theory of mind (ToM), an essential cognitive function that allows us to regulate our social behaviour by attributing mental representations to individuals in order to make behavioural predictions. VSPT is often considered in the literature as the starting point of the development of theory of mind. VSPT and ToM include several levels of knowledge that have to be assessed by specific tasks. Unfortunately, the lack of tasks assessing these functions in clinical neuropsychology leads to underestimating, in brain-damaged patients, deficits in functions that are essential in everyday life for regulating our social behaviour (ToM) and for navigating in known and unknown environments (VSPT). This study therefore aims to create and standardize a VSPT task in order to explore the cognitive requirements of VSPT and ToM, and to specify their relationship in healthy adults and thereafter in brain-damaged patients. Two versions of a computerized VSPT task were administered to healthy participants (mean age 28.18 years, SD 4.8). In both versions the environment was a 3D representation of 10 different geometric shapes placed on a circular base. Two sets of eight pictures were generated from it: pictures of the environment with an avatar somewhere on its periphery (locations), and pictures of what the avatar sees from that place (views).
Two types of questions were asked: a) identify the location from the view, and b) identify the view from the location. Twenty participants completed version 1 of the task and 20 completed the second version, in which the views were offset by ±15° (i.e., clockwise or counterclockwise) and participants were asked to choose the closest location or the closest view. The preliminary findings revealed that version 1 is significantly easier than version 2 in terms of accuracy (with ceiling scores for version 1). In version 2, participants responded significantly more slowly when they had to infer the avatar's view from the latter's location, probably because they spent more time visually exploring the different views (responses). Furthermore, men performed significantly better than women in version 1 but not in version 2. Most importantly, a sensitive task (version 2) has been created in which participants do not seem to compute easily and automatically what someone is looking at, yet which does not rely heavily on other cognitive functions. The study is being extended with analyses of non-clinical participants with low and high degrees of schizotypy and different socio-educational statuses, and of a range of older adults, to examine age-related and other differences in VSPT processing.
Keywords: mental transformation, spatial cognition, theory of mind, visuospatial perspective taking
Procedia PDF Downloads 203
733 Foslip Loaded and CEA-Affimer Functionalised Silica Nanoparticles for Fluorescent Imaging of Colorectal Cancer Cells
Authors: Yazan S. Khaled, Shazana Shamsuddin, Jim Tiernan, Mike McPherson, Thomas Hughes, Paul Millner, David G. Jayne
Abstract:
Introduction: There is a need for real-time imaging of colorectal cancer (CRC) to allow surgery tailored to the disease stage. Fluorescence-guided laparoscopic imaging of primary colorectal cancer and the draining lymphatics would potentially bring stratified surgery into clinical practice and realign future CRC management to the needs of patients. Fluorescent nanoparticles can offer many advantages in terms of intra-operative imaging and therapy (theranostics) in comparison with traditional soluble reagents. Nanoparticles can be functionalised with diverse reagents and then targeted to the correct tissue using an antibody or Affimer (artificial binding protein). We aimed to develop and test fluorescent silica nanoparticles targeted against CRC using an anti-carcinoembryonic antigen (CEA) Affimer (Aff). Methods: Anti-CEA and control myoglobin Affimer binders were subcloned into the expression vector pET11, followed by transformation into E. coli BL21 Star™ (DE3). Expression of the Affimer binders was induced using 0.1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG). Cells were harvested and lysed, and the Affimers purified using nickel-chelating affinity chromatography. The photosensitiser Foslip (a soluble analogue of 5,10,15,20-tetra(m-hydroxyphenyl)chlorin) was incorporated into the core of silica nanoparticles using a water-in-oil microemulsion technique. Anti-CEA or control Affs were conjugated to the silica nanoparticle surface using the sulfosuccinimidyl-4-(N-maleimidomethyl)cyclohexane-1-carboxylate (sulfo-SMCC) chemical linker. Binding of CEA-Aff or control nanoparticles to colorectal cancer cells (LoVo, LS174T and HCT116) was quantified in vitro using confocal microscopy. Results: The molecular weight of the obtained Affimer bands was ~12.5 kDa, while the diameter of the functionalised silica nanoparticles was ~80 nm.
CEA-Affimer-targeted nanoparticles demonstrated 9.4, 5.8 and 2.5 fold greater fluorescence than control in LoVo, LS174T and HCT116 cells, respectively (p < 0.002), for the single-slice analysis. A similar pattern of successful CEA-targeted fluorescence was observed in the maximum image projection analysis, with CEA-targeted nanoparticles demonstrating 4.1, 2.9 and 2.4 fold greater fluorescence than control particles in LoVo, LS174T, and HCT116 cells, respectively (p < 0.0002). There was no significant difference in fluorescence between CEA-Affimer- and CEA-antibody-targeted nanoparticles. Conclusion: We are the first to demonstrate that Foslip-doped silica nanoparticles conjugated to anti-CEA Affimers via SMCC allow tumour cell-specific fluorescent targeting in vitro, and they have shown sufficient promise to justify testing in an animal model of colorectal cancer. The CEA-Affimer appears to be a suitable targeting molecule to replace the CEA antibody. Targeted silica nanoparticles loaded with the Foslip photosensitiser are now being optimised to drive photodynamic killing via reactive oxygen generation.
Keywords: colorectal cancer, silica nanoparticles, Affimers, antibodies, imaging
Procedia PDF Downloads 240
732 Distributed Energy Resources in Low-Income Communities: a Public Policy Proposal
Authors: Rodrigo Calili, Anna Carolina Sermarini, João Henrique Azevedo, Vanessa Cardoso de Albuquerque, Felipe Gonçalves, Gilberto Jannuzzi
Abstract:
The diffusion of Distributed Energy Resources (DER) has caused structural changes in the relationship between consumers and electrical systems. Photovoltaic Distributed Generation (PVDG), in particular, is an essential strategy for achieving the 2030 Agenda goals, especially SDG 7 and SDG 13. However, most projects involving this technology in Brazil are restricted to the wealthiest classes of society and have not yet reached the low-income population, a concern central to theories of energy justice. In the pursuit of energy equality, one of the policies adopted by governments is the social electricity tariff (SET), which provides discounts on energy tariffs/bills. Merely granting this benefit, however, may not be effective, and it is possible to combine it with DER technologies such as PVDG. Thus, this work aims to evaluate the economic viability of replacing the social electricity tariff (the current policy aimed at the low-income population in Brazil) with PVDG projects. To this end, a proprietary methodology was developed that included mapping the stakeholders, identifying critical variables, simulating policy options, and carrying out an analysis in the Brazilian context. The simulation answered two key questions: in which municipalities would low-income consumers have lower bills with PVDG than with SET, and which consumers in a given city would receive increased subsidies (subsidies are currently provided in Brazil both for solar energy and for the social tariff). An economic model was created to verify the feasibility of the proposed policy in each municipality in the country, considering geographic factors (the tariff of the local distribution utility, the solar radiation at the specific location, etc.). To validate the results, four sensitivity analyses were performed: variation of the simultaneity factor between generation and consumption, variation of the tariff readjustment rate, zeroing CAPEX, and exemption from state tax.
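The core bill comparison such a model performs can be sketched as below. All figures (consumption, tariff, SET discount, PV yield, simultaneity factor) are hypothetical and not taken from the study, and CAPEX amortization and compensation for exported energy are deliberately omitted for brevity.

```python
def annual_bill_set(monthly_kwh, tariff, set_discount):
    """Annual bill under the social electricity tariff (SET) discount."""
    return 12 * monthly_kwh * tariff * (1 - set_discount)

def annual_bill_pv(monthly_kwh, tariff, pv_kwh_month, simultaneity):
    """Annual bill with behind-the-meter PV: on-site generation offsets
    consumption; the simultaneity factor is the share of PV output
    consumed on-site (exported energy is ignored in this sketch)."""
    offset = min(monthly_kwh, pv_kwh_month * simultaneity)
    return 12 * (monthly_kwh - offset) * tariff

# Hypothetical consumer: 150 kWh/month, tariff 0.80 BRL/kWh,
# 40% SET discount, PV system producing 120 kWh/month.
set_bill = annual_bill_set(150, 0.80, 0.40)
pv_bill = annual_bill_pv(150, 0.80, 120, simultaneity=0.6)
print(set_bill, pv_bill, pv_bill < set_bill)
```

Running this over every municipality, with local tariffs and solar radiation, is what lets the model map where PVDG beats the SET, and the simultaneity factor is exactly the quantity varied in the first sensitivity analysis.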
The behind-the-meter modality of generation proved more promising than the construction of a shared plant. However, it is also more complex to adopt, owing to the infrastructure of the most vulnerable communities (e.g., precarious electrical networks and roofs that need reinforcement). For the shared power plant modality, many opportunities are still envisaged, since the risk of investing in such a policy can be mitigated. Furthermore, this modality reduces the risk of default, as it allows greater control of users and facilitates operation and maintenance. Finally, it was also found that in some regions of Brazil, continuing the SET delivers more economic benefit than replacing it with PVDG; nonetheless, the proposed policy offers many opportunities. In future work, the model may include other parameters, such as the cost of engaging low-income populations and business risk. In addition, other renewable sources of distributed generation can be studied for this purpose.
Keywords: low income, subsidy policy, distributed energy resources, energy justice
Procedia PDF Downloads 112
731 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling
Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci
Abstract:
Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and use changes resulting from the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful for simulating the physical and chemical processes that govern the dispersion and reaction of chemical species in the atmosphere. However, the modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, an updated LC is an important parameter to be considered in atmospheric models, since it accounts for changes of the Earth’s surface due to natural and anthropic actions and regulates the exchange of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) the CLC (Corine Land Cover) and specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of the LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km, 5 km and 1 km horizontal resolution). 
Based on the 33-class LC approach, particular emphasis was given to Portugal, given the detail and higher spatial resolution of its LC data (100 m x 100 m) compared with the CLC data (5000 m x 5000 m). Regarding air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, in particular during spring/summer, and few research works relate this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, better model performance was achieved at rural stations: moderate correlation (0.4 – 0.7), BIAS (10 – 21 µg.m-3) and RMSE (20 – 30 µg.m-3), where higher average ozone concentrations were also estimated. Comparing both simulations, small differences grounded on the Leaf Area Index and air temperature values were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of the LC in the exchange of atmospheric fluxes and stresses the need to consider a high-resolution LC characterization combined with other detailed model inputs, such as the emission inventory, to improve air quality assessment.
Keywords: land use, spatial resolution, WRF-Chem, air quality assessment
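The validation statistics quoted in this abstract (correlation, BIAS, RMSE between observed and modelled ozone) are standard model evaluation metrics. A minimal sketch of their computation, with invented ozone values for illustration only:

```python
import math

def validation_stats(obs, mod):
    """Pearson correlation, BIAS (model minus observation) and RMSE between
    observed and modelled concentrations (e.g., ozone in ug/m3)."""
    n = len(obs)
    mo, mm = sum(obs) / n, sum(mod) / n
    bias = mm - mo
    rmse = math.sqrt(sum((m - o) ** 2 for o, m in zip(obs, mod)) / n)
    cov = sum((o - mo) * (m - mm) for o, m in zip(obs, mod))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in mod))
    r = cov / (so * sm)
    return r, bias, rmse

# Hypothetical hourly ozone values, not data from the study:
obs = [60.0, 75.0, 90.0, 80.0, 70.0]
mod = [72.0, 85.0, 105.0, 95.0, 78.0]
r, bias, rmse = validation_stats(obs, mod)
```

A positive BIAS, as in this toy example, would indicate systematic model overestimation at the station.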
Procedia PDF Downloads 156
730 Antimicrobial and Anti-Biofilm Activity of Non-Thermal Plasma
Authors: Jan Masak, Eva Kvasnickova, Vladimir Scholtz, Olga Matatkova, Marketa Valkova, Alena Cejkova
Abstract:
Microbial colonization of medical instruments, catheters, implants, etc. is a serious problem in the spread of nosocomial infections. Biofilms exhibit enormous resistance to environmental stresses; the resistance of biofilm populations to antibiotics or biocides is often two to three orders of magnitude higher than that of suspension populations. Of particular interest are substances or physical processes that primarily cause the destruction of the biofilm, so that the released cells can be killed by existing antibiotics. In addition, agents that do not have a strong lethal effect do not exert such significant selection pressure toward further enhanced resistance. Non-thermal plasma (NTP) is a neutral, ionized gas composed of particles (photons, electrons, positive and negative ions, free radicals, and excited or non-excited molecules) in permanent interaction. In this work, the effect of NTP generated by a cometary corona with a metallic grid on the formation and stability of biofilm and on the metabolic activity of cells in biofilm was studied. NTP was applied to biofilm populations of Staphylococcus epidermidis DBM 3179, Pseudomonas aeruginosa DBM 3081, DBM 3777, ATCC 15442 and ATCC 10145, Escherichia coli DBM 3125 and Candida albicans DBM 2164, grown on solid media in Petri dishes and on the surface of the titanium alloy (Ti6Al4V) used for the production of joint replacements. Erythromycin (for S. epidermidis), polymyxin B (for E. coli and P. aeruginosa), amphotericin B (for C. albicans) and ceftazidime (for P. aeruginosa) were used to study the combined effect of NTP and antibiotics. Biofilms were quantified by the crystal violet assay. The metabolic activity of cells in biofilm was measured using the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) colorimetric test, based on the reduction of MTT into formazan by the dehydrogenase system of living cells. 
Fluorescence microscopy was applied to visualize the biofilm on the surface of the titanium alloy; SYTO 13 was used as a fluorescent probe to stain cells in the biofilm. It was shown that biofilm populations of all studied microorganisms are very sensitive to the type of NTP used. The inhibition zone of biofilm recorded after 60 minutes of exposure to NTP exceeded 20 cm², except for P. aeruginosa DBM 3777 and ATCC 10145, where it was about 9 cm². The metabolic activity of cells in biofilm also differed among individual microbial strains. High sensitivity to NTP was observed in S. epidermidis, whose biofilm metabolic activity decreased to 15% after 30 minutes of NTP exposure and to 1% after 60 minutes. In contrast, the metabolic activity of C. albicans cells decreased only to 53% after 30 minutes of NTP exposure; nevertheless, this result can still be considered very good. Suitable combinations of NTP exposure time and antibiotic concentration achieved in most cases a remarkable synergic effect on the reduction of the metabolic activity of biofilm cells. For example, in the case of P. aeruginosa DBM 3777, a combination of 30 minutes of NTP with 1 mg/l of ceftazidime resulted in a decrease in metabolic activity to below 4%.
Keywords: anti-biofilm activity, antibiotic, non-thermal plasma, opportunistic pathogens
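The residual metabolic activity percentages reported above (15%, 1%, 53%, below 4%) come from MTT absorbance readings normalized to an untreated control. A minimal sketch of that normalization, with hypothetical absorbance values:

```python
def metabolic_activity_pct(a_treated, a_control, a_blank=0.0):
    """Residual metabolic activity (%) from MTT/formazan absorbance readings,
    relative to an untreated control, with optional blank subtraction."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

# Hypothetical absorbance readings (not data from the study):
# untreated control 1.20, NTP-treated 0.18, medium blank 0.04
residual = metabolic_activity_pct(0.18, 1.20, 0.04)
```

Lower formazan absorbance in the treated well thus maps directly to a lower residual activity percentage.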
Procedia PDF Downloads 184
729 Proposals for the Practical Implementation of the Biological Monitoring of Occupational Exposure for Antineoplastic Drugs
Authors: Mireille Canal-Raffin, Nadege Lepage, Antoine Villa
Abstract:
Context: Most antineoplastic drugs (AD) have potential carcinogenic, mutagenic and/or reprotoxic effects and are classified as 'hazardous to handle' by the National Institute for Occupational Safety and Health (NIOSH). Their handling is increasing with the rise in cancer incidence. AD contamination of workers who handle AD and/or care for treated patients is, therefore, a major concern for occupational physicians. As part of the process of evaluating and preventing chemical risks for professionals exposed to AD, Biological Monitoring of Occupational Exposure (BMOE) is the tool of choice. BMOE allows identification of at-risk groups, monitoring of exposures, assessment of poorly controlled exposures and of the effectiveness and/or wearing of protective equipment, and documentation of occupational AD exposure incidents. This work aims to make proposals for the practical implementation of BMOE for AD. The proposed strategy is based on the French good practice recommendations for BMOE, issued in 2016 by three French learned societies and adapted here to occupational exposure to AD. Results: AD contamination of professionals is a sensitive topic, and BMOE requires the establishment of a working group and information meetings within the health establishment concerned, to explain the approach, objectives, and purpose of monitoring. Occupational exposure to AD is often discontinuous, and two steps are essential upstream: a study of the nature and frequency of the AD used, to select the Biological Exposure Index (BEI) most representative of the activity; and a study of the path of AD through the institution, to target exposed professionals and to adapt the medico-professional information sheet (MPIS). The MPIS is essential for gathering the elements necessary for interpreting results. Currently, 28 urinary BEIs specific to AD exposure have been identified, and the corresponding analytical methods have been published: 11 BEIs are AD metabolites, and 17 are the ADs themselves. 
Results are interpreted by groups of homogeneous exposure (GHE). There is no threshold biological limit value for interpretation. Contamination is established when an AD is detected in trace concentration or in a urine concentration equal to or greater than the limit of quantification (LOQ) of the analytical method. Results can only be compared with the LOQs of these methods, which must be as low as possible. For 8 of the 17 AD BEIs, the LOQ is very low, with values between 0.01 and 0.05 µg/l; for the other BEIs, the LOQ values are higher, between 0.1 and 30 µg/l. Restitution of results by occupational physicians to workers should be both individual and collective. Given the dangerousness of AD, corrective measures must be put in place in cases of worker contamination. In addition, implementing prevention and awareness measures for those exposed to this risk is a priority. Conclusion: This work is an aid for occupational physicians engaging in a process of prevention of occupational risks related to AD exposure. With the current analytical tools, effective and available, BMOE for AD should now be possible to develop in routine occupational medicine practice. BMOE may be complemented by surface sampling to determine workers' contamination modalities.
Keywords: antineoplastic drugs, urine, occupational exposure, biological monitoring of occupational exposure, biological exposure index
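The contamination criterion stated above (trace detection, or quantification at or above the method's LOQ) can be expressed as a small decision rule. The function name and sample values are illustrative only:

```python
def classify_sample(conc_ug_l, loq_ug_l, trace_detected=False):
    """Classify a urinary AD measurement against the analytical method's LOQ.
    Per the criterion in the abstract, contamination is established when the
    AD is detected in trace amounts or quantified at or above the LOQ."""
    if conc_ug_l is not None and conc_ug_l >= loq_ug_l:
        return "contaminated (quantified)"
    if trace_detected:
        return "contaminated (trace)"
    return "not detected"

# Hypothetical samples against a 0.05 ug/l LOQ:
results = [
    classify_sample(0.20, 0.05),                       # quantified above LOQ
    classify_sample(None, 0.05, trace_detected=True),  # trace detection only
    classify_sample(None, 0.05),                       # nothing detected
]
```

The rule makes explicit that interpretation depends on the method's LOQ, which is why the abstract stresses keeping LOQs as low as possible.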
Procedia PDF Downloads 136
728 Double Liposomes Based Dual Drug Delivery System for Effective Eradication of Helicobacter pylori
Authors: Yuvraj Singh Dangi, Brajesh Kumar Tiwari, Ashok Kumar Jain, Kamta Prasad Namdeo
Abstract:
The potential use of liposomes as drug carriers by i.v. injection is limited by their low stability in the blood stream. Phospholipid exchange and transfer to lipoproteins, mainly HDL, destabilize and disintegrate liposomes, with subsequent loss of content. To avoid the pain associated with injection and to obtain better patient compliance, various other dosage forms have been developed. Conventional liposomes (unilamellar and multilamellar) have certain drawbacks, such as low entrapment efficiency, poor stability, and release of the drug after a single breach in the external membrane, which have led to a new type of liposomal system. The challenge has been successfully met in the form of Double Liposomes (DL). DL is a recently developed type of liposome consisting of smaller liposomes enveloped in an outer lipid bilayer. The outer lipid layer of DL can protect the inner liposomes against various enzymes; therefore, DL are thought to be more effective than ordinary liposomes. This concept is also supported by in vitro release characteristics, i.e., DL formation inhibits the release of drugs encapsulated in the inner liposomes. Because DL consist of several small liposomes encapsulated in large liposomes, i.e., multivesicular vesicles (MVV), DL should be distinguished from the ordinary classification of multilamellar vesicles (MLV), large unilamellar vesicles (LUV) and small unilamellar vesicles (SUV). However, for these liposomes, the volume of the inner phase is small and the loading volume of water-soluble drugs is low. In the present study, the potential of phosphatidylethanolamine (PE) lipid-anchored double liposomes (DL) to incorporate two drugs in a single system is exploited as a tool to augment the H. pylori eradication rate. Preparation of DL involves two steps: first, formation of primary (inner) liposomes containing one drug by the thin film hydration method, then addition of the suspension of inner liposomes onto a thin lipid film containing the other drug. 
The success of DL formation was characterized by optical and transmission electron microscopy. The DL-bacterial interaction was quantified in terms of percent growth inhibition (%GI) on the reference strain H. pylori ATCC 26695. To confirm the specific binding efficacy of DL to the H. pylori PE surface receptor, an agglutination assay was performed. Agglutination in the DL-treated H. pylori suspension suggested selectivity of DL towards the PE surface receptor of H. pylori. Monotherapy is generally not recommended for the treatment of H. pylori infection due to the danger of development of resistance and unacceptably low eradication rates. Therefore, combination therapy with amoxicillin trihydrate (AMOX) as the anti-H. pylori agent and ranitidine bismuth citrate (RBC) as the antisecretory agent was selected for the study, with the expectation that this dual-drug delivery approach will exert acceptable anti-H. pylori activity.
Keywords: Helicobacter pylori, amoxicillin trihydrate, ranitidine bismuth citrate, phosphatidylethanolamine, multivesicular systems
Procedia PDF Downloads 207
727 Numerical Investigation of Thermal Energy Storage Panel Using Nanoparticle Enhanced Phase Change Material for Micro-Satellites
Authors: Jelvin Tom Sebastian, Vinod Yeldho Baby
Abstract:
In space, electronic devices are constantly bombarded by radiation, which causes certain parts to fail or behave in unpredictable ways. To advance thermal controllability for microsatellites, a new approach is needed: a thermal control system that is smaller than those on conventional satellites and that demands no electric power. Heat exchange inside microsatellites is not as easy as in conventional satellites because of the smaller size. With only a slight mass gain and no electric power, accommodating heat using phase change materials (PCMs) is a strong candidate for solving the thermal difficulties of microsatellites. PCMs absorb or release heat in the form of latent heat while changing phase, minimizing the temperature fluctuation around the phase change point. The main restriction of these systems is the low thermal conductivity of common PCMs, which increases melting and solidification times and is unsuitable for applications such as electronics cooling. To increase the thermal conductivity, nanoparticles are introduced into the base PCM; the thermal conductivity rises with nanoparticle weight concentration. This paper numerically investigates a thermal energy storage panel with nanoparticle-enhanced phase change material (NePCM). Silver nanostructures improve the thermal properties of the base PCM, eicosane. Different weight concentrations of silver (1, 2, 3.5, 5, 6.5, 8, and 10%) were considered. Both steady-state and transient analyses were performed to compare the characteristics of the NePCM at different heat loads. Results showed that in the steady state, the temperature near the front panel decreased and the temperature on the NePCM panel increased as the weight concentration increased: with the increase in thermal conductivity, more heat was absorbed into the NePCM panel. 
In the transient analysis, it was found that the effect of nanoparticle concentration on the maximum temperature of the system was reduced, as the melting point of the material decreases with increasing weight concentration. For a maximum heat load of 20 W, however, the model with NePCM did not reach the melting point temperature, showing that it is capable of holding a larger heat load. To study the heat load capacity, double the load was applied: a maximum of 40 W during the first half of the cycle and a constant 0 W during the other half. A higher temperature was obtained compared with the lower heat load, and the panel maintained a constant temperature for a long duration according to the NePCM melting point. In both analyses, the uniformity of the TESP temperature was shown. Using Ag-NePCM allows a constant peak temperature to be maintained near the melting point; therefore, by altering the weight concentration of the Ag-NePCM it is possible to create the optimum operating temperature required for the effective working of the electronic components.
Keywords: carbon-fiber-reinforced polymer, micro/nano-satellite, nanoparticle phase change material, thermal energy storage
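The conductivity enhancement from nanoparticle loading discussed above is often first estimated with the classical Maxwell effective-medium model; this is a sketch of that estimate, not the numerical model used in the paper, and the property values are approximate handbook figures used only for illustration:

```python
def vol_fraction(w, rho_p, rho_m):
    """Convert nanoparticle weight fraction to volume fraction."""
    return (w / rho_p) / (w / rho_p + (1 - w) / rho_m)

def maxwell_k_eff(k_m, k_p, phi):
    """Classical Maxwell model for the effective thermal conductivity of a
    dilute particle suspension: a common first estimate for NePCMs."""
    num = k_p + 2 * k_m + 2 * phi * (k_p - k_m)
    den = k_p + 2 * k_m - phi * (k_p - k_m)
    return k_m * num / den

# Approximate, illustrative property values:
K_EICOSANE, K_SILVER = 0.25, 429.0           # W/(m K)
RHO_EICOSANE, RHO_SILVER = 790.0, 10490.0    # kg/m3

k_eff = {}
for w in (0.01, 0.05, 0.10):  # weight concentrations as in the study
    phi = vol_fraction(w, RHO_SILVER, RHO_EICOSANE)
    k_eff[w] = maxwell_k_eff(K_EICOSANE, K_SILVER, phi)
```

Because silver is so much denser than eicosane, even a 10% weight loading corresponds to a small volume fraction, which is why dilute-suspension models such as Maxwell's are a plausible starting point.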
Procedia PDF Downloads 203
726 Integrated Manufacture of Polymer and Conductive Tracks for Functional Objects Fabrication
Authors: Barbara Urasinska-Wojcik, Neil Chilton, Peter Todd, Christopher Elsworthy, Gregory J. Gibbons
Abstract:
The recent increase in the application of Additive Manufacturing (AM) of products has resulted in new demands on capability. The ability to integrate both form and function within printed objects is the next frontier in the 3D printing area. To move beyond prototyping into low-volume production, we demonstrate a UK-designed and built AM hybrid system that combines polymer-based structural deposition with digital deposition of electrically conductive elements. This hybrid manufacturing system is based on a multi-planar build approach to improve on many of the limitations associated with AM, such as poor surface finish, low geometric tolerance, and poor robustness. Specifically, the approach involves a multi-planar Material Extrusion (ME) process in which separated build stations with up to 5 axes of motion replace traditional horizontally-sliced layer modeling. The construction of multi-material architectures also involved using multiple print systems in order to combine both ME and digital deposition of conductive material. To demonstrate multi-material 3D printing, three thermoplastics, acrylonitrile butadiene styrene (ABS), polyamide 6,6/6 copolymer (CoPA) and polyamide 12 (PA), were used to print specimens, on top of which our high-viscosity Ag-particulate ink was printed in a non-contact process, during which drop characteristics such as shape, velocity, and volume were assessed using a drop watching system. Spectroscopic analysis of these 3D printed materials in the IR region helped to determine the optimum in-situ curing system for implementation in the AM system to achieve improved adhesion and surface refinement. Thermal analyses were performed to determine the printed materials' glass transition temperature (Tg), stability and degradation behavior, in order to find the optimum annealing conditions post-printing. 
Electrical analysis of printed conductive tracks on polymer surfaces during mechanical testing (static tensile, 3-point bending and dynamic fatigue) was performed to assess the robustness of the electrical circuits. The tracks on CoPA, ABS, and PA exhibited low electrical resistance, and in the case of PA the resistance values of the tracks remained unchanged across hundreds of repeated tensile cycles up to 0.5% strain amplitude. Our developed AM printer has the ability to fabricate fully functional objects in one build, including complex electronics. It enables product designers and manufacturers to produce functional saleable electronic products from a small-format modular platform. It will make 3D printing better, faster and stronger.
Keywords: additive manufacturing, conductive tracks, hybrid 3D printer, integrated manufacture
Procedia PDF Downloads 166
725 Two-Wavelength High-Energy Cr:LiCaAlF6 MOPA Laser System for Medical Multispectral Optoacoustic Tomography
Authors: Radik D. Aglyamov, Alexander K. Naumov, Alexey A. Shavelev, Oleg A. Morozov, Arsenij D. Shishkin, Yury P. Brodnikovsky, Alexander A. Karabutov, Alexander A. Oraevsky, Vadim V. Semashko
Abstract:
The development of medical optoacoustic tomography using human blood as an endogenous contrast agent is constrained by the lack of reliable, easy-to-use and inexpensive sources of high-power pulsed laser radiation in the spectral region of 750-900 nm [1-2]. Currently used titanium-sapphire and alexandrite lasers or optical parametric oscillators do not provide the required stable output characteristics, are structurally complex, and their cost can reach half the price of diagnostic optoacoustic systems. Here we develop lasers based on Cr:LiCaAlF6 crystals, which are free of the abovementioned disadvantages and provide intense tens-of-ns tunable laser radiation at the specific absorption bands of oxyhemoglobin (~840 nm) and deoxyhemoglobin (~757 nm) in the blood. Cr:LiCAF (c=3 at.%) crystals were grown at Kazan Federal University by vertical directional crystallization (Bridgman technique) in graphite crucibles in a fluorinating atmosphere at argon overpressure (P=1500 hPa) [3]. The laser elements are cylindrical, 8 mm in diameter and 90 mm in length. The optical axis of the crystal is normal to the cylinder generatrix, which provides the π-polarized laser action corresponding to the maximal stimulated emission cross-section. The flat working surfaces of the active elements were polished and parallel to each other within an error of less than 10”. No antireflection coating was applied. A Q-switched master oscillator-power amplifier (MOPA) laser system with a dual Xenon flashlamp pumping scheme in a diffuse-reflectivity close-coupled head was realized. A specially designed laser cavity, consisting of dielectric highly reflective mirrors with a 2 m curvature radius, a flat output mirror, a polarizer and a Q-switch cell, makes it possible to operate sequentially in a circle (50 ns pulses, one after another) at wavelengths of 757 and 840 nm. 
The programmable pumping system from Tomowave Laser LLC (Russia) provided independent pumping for each pulse (up to 250 J over 180 μs) to equalize the laser radiation intensity at these wavelengths. The MOPA laser operates at a 10 Hz pulse repetition rate with an output energy of up to 210 mJ. Taking into account the limitations associated with physiological movements and other characteristics of patient tissues, the duration and energy of the laser pulses allow molecular and functional high-contrast imaging to depths of 5-6 cm with a spatial resolution of at least 1 mm. Further comprehensive design of the laser will most likely improve the output properties and enable better spatial resolution in medical multispectral optoacoustic tomography systems.
Keywords: medical optoacoustics, endogenous contrast agent, multiwavelength tunable pulse lasers, MOPA laser system
Procedia PDF Downloads 101
724 The Quantitative SWOT-Analysis of Service Blood Activity of Kazakhstan
Authors: Alua Massalimova
Abstract:
A situation analysis of the Blood Service revealed that the strengths dominate over the weaknesses by a factor of 1.4, and the opportunities dominate over the threats by a factor of 1.1. It follows that by exploiting its opportunities in a timely manner, the Service can reinforce its strengths and avoid threats. Priority directions arising from the analysis are the use of subjective factors, such as the personal management capacity of Blood Center managers within the legal scope of administrative decisions, and the mobilization of a stable staff under general market conditions. We retrospectively studied indicators of the Blood Service of Kazakhstan for the period 2011-2015. Strengths of the Blood Service of RK (Ps = 4.5): 1) the donation rate per 1000 people is higher than in some CIS countries (Russia: 14; Kazakhstan: 17); 2) a functioning science centre of transfusiology; 3) the legal possibility of additional financing of blood centers in the form of paid services; 4) the absence of competitors; 5) training in the specialty of transfusiology; 6) stable management staff of blood centers with a high level of competence; 7) an increase in the incidence of diseases requiring transfusion therapy (oncohematology); 8) equipment upgrades; 9) the opening of a reference laboratory; 10) growth in the proportion of issued high-quality blood components; 11) the governmental organization 'Drop of Life'; 12) a functioning bone marrow register; 13) an HLA laboratory equipped with modern equipment; 14) high categorization of mid-level medical workers; 15) availability of its own specialized scientific journal; 16) a vivarium. 
The weaknesses (Ps = 3.5): 1) incomplete equipping of blood centers and blood transfusion cabinets according to standards; 2) low specific weight of paid services of the CC; 3) low categorization of doctors; 4) high staff turnover; 5) low scientific potential of industrial and clinical transfusiology; 6) low wages; 7) slight growth in harvested donor blood; 8) weak continuity with blood transfusion offices; 9) lack of promotional (donor recruitment) work; 10) a formally functioning Transfusion Association; 11) the absence of scientific laboratories; 12) high standard deviation from the republic's average donation rate. The opportunities (Ps = 2.7): 1) international grants; 2) organization of international seminars on clinical transfusiology; 3) cross-sectoral cooperation; 4) increased scientific research in the field of clinical transfusiology; 5) reduction of the share of donations unsuitable for transfusion and processing; 6) strengthened marketing management in the development of fee-based services; 7) advertising of paid services; 8) strengthened publishing of teaching aids; 9) team-building of staff. The threats (Ps = 2.1): 1) an increase in staff turnover; 2) the risk of litigation; 3) reduction of blood product use based on evidence-based medicine; 4) regression of scientific capacity; 5) organization of marketing; 6) transfusiologist marketing; 7) reduction in the quality of the evidence base for transfusions.
Keywords: blood service, healthcare, Kazakhstan, quantitative SWOT analysis
Procedia PDF Downloads 228
723 Terrorism in German and Italian Press Headlines: A Cognitive Linguistic Analysis of Conceptual Metaphors
Authors: Silvia Sommella
Abstract:
Islamic terrorism has gained a lot of media attention in recent years, also because of the striking increase in terror attacks since 2014. The main aim of this paper is to illustrate the phenomenon of Islamic terrorism by applying frame semantics and metaphor analysis to German and Italian press headlines of the two online weekly publications Der Spiegel and L’Espresso between 2014 and 2019. This study focuses on how media discourse, through the use of conceptual metaphors, gives rise to a particular reception of the phenomenon of Islamic terrorism and leads people to accept governmental strategies and policies, perceiving terrorists as evildoers, members of an uncivilised group 'other' opposed to the civilised group 'we'. The press headlines are analyzed on the basis of cognitive linguistics, namely Lakoff and Johnson's conceptualization of metaphor, distinguishing between abstract conceptual metaphors and specific metaphorical expressions. The study focuses on contexts, frames, and metaphors. The method adopted in this study is Konerding's frame semantics (1993). In a pilot lexicological study, Konerding carried out a hyperonym reduction of substantives on the basis of dictionaries, in particular the Duden Deutsches Universalwörterbuch (Duden Universal German Dictionary), working exclusively with nouns because hyperonyms usually occur in dictionary meaning explanations as the main elements of nominal phrases. The result of Konerding's hyperonym type reduction is a small set of German nouns corresponding to the highest hyperonyms, the so-called categories or matrix frames: 'object', 'organism', 'person/actant', 'event', 'action/interaction/communication', 'institution/social group', 'surroundings', 'part/piece', 'totality/whole', 'state/property'. 
The second step of Konerding's pilot study consists in determining the potential reference points of each category, so that conventionally expectable routinized predications arise as predictors: Konerding established which predicators the ascertained noun types can be linked to. For the purpose of this study, metaphorical expressions are listed and categorized under the conceptual metaphors and the matrix frames to which each conceptual metaphor corresponds. All corpus analyses are carried out using the AntConc corpus software. The research verifies some previously analyzed metaphors, such as TERRORISM AS WAR, A CRIME, A NATURAL EVENT and A DISEASE, and identifies new conceptualizations and metaphors of Islamic terrorism, especially in Italian, such as TERRORISM AS A GAME, WARES, and A DRAMATIC PLAY. Through the identification of particular frames and their construction, the research seeks to understand the public reception of, and ways of handling, the discourse about Islamic terrorism in the above-mentioned online weekly publications, through a contrastive analysis of German and Italian.
Keywords: cognitive linguistics, frame semantics, Islamic terrorism, media
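The corpus step described above (assigning headlines to conceptual metaphors via their metaphorical expressions) can be sketched as a simple cue-word count. The mini-corpus and the metaphor lexicons below are invented illustrations, not data from the Der Spiegel/L'Espresso corpus:

```python
from collections import Counter

# Hypothetical mini-corpus of German/Italian headlines (illustrative only):
headlines = [
    "Der Krieg gegen den Terror geht weiter",      # TERRORISM AS WAR
    "Una nuova ondata di attentati",               # TERRORISM AS A NATURAL EVENT
    "Il virus del terrore si diffonde in Europa",  # TERRORISM AS A DISEASE
    "Schlacht um die Sicherheit der Städte",       # TERRORISM AS WAR
]

# Illustrative cue-word lexicons for three conceptual metaphors:
lexicon = {
    "TERRORISM AS WAR": {"krieg", "schlacht", "guerra", "battaglia"},
    "TERRORISM AS A NATURAL EVENT": {"ondata", "welle", "tsunami"},
    "TERRORISM AS A DISEASE": {"virus", "epidemie", "epidemia", "contagio"},
}

counts = Counter()
for h in headlines:
    tokens = {t.strip(".,!?").lower() for t in h.split()}
    for metaphor, cues in lexicon.items():
        if tokens & cues:  # headline contains at least one cue word
            counts[metaphor] += 1
```

In practice, concordance software such as AntConc is used for this lookup, with each hit then checked manually for genuinely metaphorical use.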
Procedia PDF Downloads 173
722 42CrMo4 Steel Flow Behavior Characterization for High Temperature Closed Dies Hot Forging in Automotive Components Applications
Authors: O. Bilbao, I. Loizaga, F. A. Girot, A. Torregaray
Abstract:
The current energy situation and the high competitiveness in industrial sectors such as the automotive one have made the development of new manufacturing processes with lower energy and raw material consumption a real necessity. As a consequence, new forming processes related to high-temperature hot forging in closed dies have emerged in recent years as solutions to expand the possibilities of hot forging and iron casting in the automotive industry. These technologies are midway between hot forging and semi-solid metal processes, working at temperatures higher than hot forging but below the solidus temperature or the semi-solid range, where no liquid phase is expected. This is an advantage compared with semi-solid forming processes such as thixoforging, because such high temperatures need not be reached for high-melting-point alloys such as steels, reducing the manufacturing costs and the difficulties associated with their semi-solid processing. Compared with hot forging, these technologies allow the production of parts with as-forged properties and more complex, near-net shapes (thinner sidewalls), enhancing the possibility of designing lightweight components. From the process viewpoint, the forging forces are significantly decreased, and a significant reduction in raw material, energy consumption, and forging steps has been demonstrated. Despite these advantages, from the material behavior point of view, the expansion of these technologies has shown the necessity of developing new material flow behavior models in the process working temperature range, to make the simulation or prediction of these new forming processes feasible. Moreover, knowledge of the material flow behavior in the working temperature range also enables the design of the new closed-die concepts required. 
In this work, the flow behavior of the 42CrMo4 steel, widely used in commercial automotive components, has been characterized in the mentioned temperature range. Hot compression tests were carried out in a thermomechanical tester over a temperature range that covers material behavior from hot forging up to the NDT (Nil Ductility Temperature): 1250 ºC, 1275 ºC, 1300 ºC, 1325 ºC, 1350 ºC, and 1375 ºC. As for the strain rates, three different orders of magnitude were considered (0.1 s-1, 1 s-1, and 10 s-1). The results of the hot compression tests were then used to adapt, or rewrite, the Spittel model, widely used in commercial automotive software such as FORGE®, whose existing models are restricted to temperatures up to 1250 ºC. Finally, the new flow behavior model was validated by simulating the process for a commercial automotive component and comparing the simulation results with experimental tests already performed in a laboratory cell of the new technology. As a conclusion of the study, a new flow behavior model for the 42CrMo4 steel in the new working temperature range, together with the new process simulation for its application in commercial automotive components, has been achieved and will be shown.
Keywords: 42CrMo4 high temperature flow behavior, high temperature hot forging in closed dies, simulation of automotive commercial components, Spittel flow behavior model
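The Spittel model referred to above is commonly written in the basic Hensel-Spittel form, in which flow stress depends multiplicatively on temperature, strain and strain rate. A minimal sketch of that law follows; the coefficients are purely illustrative placeholders, not the fitted 42CrMo4 values from the study:

```python
import math

def hensel_spittel_stress(T, strain, strain_rate, A, m1, m2, m3, m4):
    """Basic Hensel-Spittel flow stress law (the form behind the Spittel
    model in forming software):
        sigma = A * exp(m1*T) * eps^m2 * epsdot^m3 * exp(m4/eps)
    T in degrees C, strain/strain rate dimensionless and 1/s, sigma in MPa."""
    return (A * math.exp(m1 * T) * strain ** m2
            * strain_rate ** m3 * math.exp(m4 / strain))

# Illustrative coefficients for demonstration only (not fitted values):
coef = dict(A=3000.0, m1=-0.0028, m2=0.15, m3=0.14, m4=-0.05)

sigma_hot = hensel_spittel_stress(T=1250.0, strain=0.4, strain_rate=1.0, **coef)
sigma_hotter = hensel_spittel_stress(T=1350.0, strain=0.4, strain_rate=1.0, **coef)
```

With m1 negative and m3 positive, the law reproduces the expected trends in the tested range: flow stress falls as temperature rises and grows with strain rate, which is what refitting the coefficients against the 1250-1375 ºC compression data would capture.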
Procedia PDF Downloads 129
721 The Philosophical Hermeneutics Contribution to Form a Highly Qualified Judiciary in Brazil
Authors: Thiago R. Pereira
Abstract:
Philosophical hermeneutics can change the Brazilian judiciary because of its understanding of the characteristics of the human being. It is impossible for a human invested with the function of judge to make absolutely neutral decisions, but philosophical hermeneutics can help the judge make impartial decisions based on the Federal Constitution. Normative legal positivism imagined a neutral judge, one able to judge without any preconceived ideas and without allowing his or her background to be an influence. When a judge rules on the basis of clear legal rules, the problem is smaller; but when there are no clear legal rules and the judge must rule on the basis of principles, there is a risk that the decision rests on what he or she personally believes. Taken solipsistically, this issue gains a huge dimension. Today the Brazilian judiciary is independent, but it needs greater knowledge of philosophy and of the philosophy of law, partly because the bigger problem is the unpredictability of its decisions. At present, when a lawsuit is filed, the result of the judgment is absolutely unpredictable; it is almost a gamble. There must be at least minimal legal certainty and predictability of judicial decisions, so that people with similar cases do not receive opposite sentences. Relativism, since classical antiquity, has accepted the possibility of multiple answers. From the Greeks of the sixth century before Christ, through the Germans of the eighteenth century, and even today, the constitution has been established as the supreme law, the Grundnorm, and thus the relativism of life can be greatly reduced when the hermeneut uses the Constitution as an interpretive North, where all interpretation must pass through the constitutional hermeneutic filter. For a current philosophy of law, within a legal system with a Federal Constitution, there is a single correct answer to a specific case. The challenge is how to find this right answer.
The first answer to this question would be to use the constitutional principles. But in many cases a collision between principles will take place, and to resolve it the judge or hermeneut may take a solipsistic path, choosing what he or she personally believes to be the right answer. For obvious reasons, that conduct is not safe. Thus a theory of decision is necessary in the search for justice, and hermeneutic philosophy and the linguistic turn will be necessary to find the right answer. To help with this difficult mission, philosophical hermeneutics must be used to find the right answer, which is the constitutionally most appropriate response. The constitutionally appropriate response will not always be the answer individuals agree with, but we must put aside our preferences and defend the answer the Constitution gives us. Therefore, hermeneutics applied to law, in search of the constitutionally appropriate response, should be the safest way to avoid individual judicial decisions. The aim of this paper is to present the science of law starting from the linguistic turn and philosophical hermeneutics, moving away from legal positivism. The methodology used is qualitative, academic, and theoretical, employing philosophical hermeneutics to propose a new way of thinking about the science of law. The research sought to demonstrate the difficulty the Brazilian courts have in departing from the secular influence of legal positivism, as well as the need to think the science of law from a contemporary perspective, in which the linguistic turn and philosophical hermeneutics will be the surest way to conduct the science of law in the present century.
Keywords: hermeneutic, right answer, solipsism, Brazilian judiciary
Procedia PDF Downloads 350