Search results for: hypergeometric functions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2408

98 Empirical Study of Innovative Development of Shenzhen Creative Industries Based on Triple Helix Theory

Authors: Yi Wang, Greg Hearn, Terry Flew

Abstract:

In order to understand how cultural innovation occurs, this paper explores the interaction between universities, creative industries, and government in the creative economy of Shenzhen, China, using the Triple Helix framework. During the past two decades, the Triple Helix has been recognized as a new theory of innovation to inform and guide policy-making in national and regional development. Universities and governments around the world, especially in developing countries, have taken actions to strengthen connections with creative industries to develop regional economies. To date, research based on the Triple Helix model has focused primarily on science and technology collaborations, largely ignoring other fields. Hence, there is an opportunity for work to be done in seeking to better understand how the Triple Helix framework might apply in the field of creative industries and what knowledge might be gleaned from such an undertaking. Since the late 1990s, the concept of ‘creative industries’ has been introduced into policy and academic discourse. The development of creative industries policy by city agencies has improved city wealth creation and economic capital. It is claimed to generate a ‘new economy’ of enterprise dynamics and activities for urban renewal through the arts and digital media, via knowledge transfer in knowledge-based economies. Creative industries also channel commercial inputs into the creative economy, dynamically reshaping the city into an innovative culture. In particular, this paper will concentrate on creative spaces (incubators, digital tech parks, maker spaces, art hubs) where academia, industry and government interact. In its cultural policy, China has sought to enhance the brand of its manufacturing industry. It aims to shift the image of ‘Made in China’ to ‘Created in China’ as well as to give Chinese brands more international competitiveness in a global economy.
Shenzhen is a notable example in China of an international knowledge-based city following this path. In 2009, the Shenzhen Municipal Government proposed the city slogan ‘Build a Leading Cultural City’ to show the government’s strong will to develop Shenzhen’s cultural capacity and creativity. The vision of Shenzhen is to become a cultural innovation center, a regional cultural center and an international cultural city. However, there has been a lack of attention to the triple helix interactions in the creative industries in China. In particular, there is limited knowledge about how co-location and interactions in creative spaces within triple helix networks influence city-based innovation. That is, the roles of the participating institutions need to be better understood. Thus, this paper discusses the interplay between university, creative industries and government in Shenzhen. Secondary analysis and documentary analysis will be used as methods in an effort to practically ground and illustrate this theoretical framework. Furthermore, this paper explores how creative spaces are being used to implement the Triple Helix in the creative industries, in particular the new combinations of resources generated from the consolidation of, and interactions among, these institutions. This study will thus provide an innovative lens to understand the components, relationships and functions that exist within creative spaces by applying the Triple Helix framework to the creative industries.

Keywords: cultural policy, creative industries, creative city, triple helix

Procedia PDF Downloads 173
97 Biomimetic Dinitrosyl Iron Complexes: A Synthetic, Structural, and Spectroscopic Study

Authors: Lijuan Li

Abstract:

Nitric oxide (NO) has become a fascinating entity in biological chemistry over the past few years. It is a gaseous lipophilic radical molecule that plays important roles in several physiological and pathophysiological processes in mammals, including activating the immune response, serving as a neurotransmitter, regulating the cardiovascular system, and acting as an endothelium-derived relaxing factor. NO functions in eukaryotes both as a signal molecule at nanomolar concentrations and as a cytotoxic agent at micromolar concentrations. The latter arises from the ability of NO to react readily with a variety of cellular targets, leading to thiol S-nitrosation, amino acid N-nitrosation, and nitrosative DNA damage. Nitric oxide can readily bind to metals to give metal-nitrosyl (M-NO) complexes. Some of these species are known to play roles in biological NO storage and transport. These complexes have different biological, photochemical, or spectroscopic properties due to distinctive structural features. These recent discoveries have spawned great interest in the development of transition metal complexes containing NO, particularly iron complexes, which are central to the role of nitric oxide in the body. Spectroscopic evidence would appear to implicate species of the “Fe(NO)2+” type in a variety of processes ranging from polymerization and carcinogenesis to nitric oxide storage. Our research focuses on the isolation and structural study of non-heme iron nitrosyls that mimic biologically active compounds and can potentially be used for anticancer drug therapy. We have shown that reactions between Fe(NO)2(CO)2 and a series of imidazoles generate new non-heme iron nitrosyls of the form Fe(NO)2(L)2 [L = imidazole, 1-methylimidazole, 4-methylimidazole, benzimidazole, 5,6-dimethylbenzimidazole, and L-histidine], and that a tetrameric cluster, [Fe(NO)2(L)]4 (L = Im, 4-MeIm, BzIm, and Me2BzIm), resulted from the interaction of the Fe(NO)2 unit with these substituted imidazoles.
Recently, a series of sulfur-bridged iron dinitrosyl complexes with the general formula [Fe(µ-RS)(NO)2]2 (R = n-Pr, t-Bu, 6-methyl-2-pyridyl, and 4,6-dimethyl-2-pyrimidyl) were synthesized by the reaction of Fe(NO)2(CO)2 with thiols or thiolates. Their structures and properties were studied by IR, UV-vis, 1H-NMR, EPR, electrochemistry, X-ray diffraction analysis and DFT calculations. IR spectra of these complexes display one weak and two strong NO stretching frequencies (νNO) in solution, but only two strong νNO in the solid state. DFT calculations suggest that two spatial isomers of these complexes differ in energy by about 3 kcal in solution. The paramagnetic complexes [Fe2(µ-RS)2(NO)4]- have also been investigated by EPR spectroscopy. Interestingly, the EPR spectra of these complexes exhibit an isotropic signal of g = 1.998 - 2.004 without hyperfine splitting. The observations are consistent with the results of the calculations, which reveal that the unpaired electron delocalizes predominantly over the two sulfur and two iron atoms. The difference in g values between the reduced form of iron-sulfur clusters and the typical monomeric dinitrosyl iron complexes is explained, for the first time, by the difference in unpaired electron distributions between the two types of complexes, which provides the theoretical basis for the use of the g value as a spectroscopic tool to differentiate these biologically active complexes.
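The g values quoted above follow from the standard EPR resonance condition h·ν = g·μB·B. A minimal sketch of solving that relation for g is shown below; the microwave frequency and resonance field are illustrative X-band assumptions, not the actual measurement conditions of this study.

```python
# EPR g-factor from the resonance condition  h * nu = g * mu_B * B.
# Physical constants are CODATA values; the frequency and field are
# assumed illustrative X-band numbers, not reported conditions.

PLANCK = 6.62607015e-34           # Planck constant, J*s
BOHR_MAGNETON = 9.2740100783e-24  # Bohr magneton, J/T

def g_factor(freq_hz, field_tesla):
    """Solve h * nu = g * mu_B * B for the g value."""
    return PLANCK * freq_hz / (BOHR_MAGNETON * field_tesla)

freq = 9.5e9       # microwave frequency, Hz (assumed, X-band)
field = 0.3394     # resonance field, T (assumed)
print(f"g = {g_factor(freq, field):.3f}")
```

At these assumed conditions the computed g lands near the free-electron value of about 2.00, the region in which the isotropic signals above were observed.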

Keywords: dinitrosyl iron complex, metal nitrosyl, non-heme iron, nitric oxide

Procedia PDF Downloads 278
96 Presence and Severity of Language Deficits in Comprehension, Production and Pragmatics in a Group of ALS Patients: Analysis with Demographic and Neuropsychological Data

Authors: M. Testa, L. Peotta, S. Giusiano, B. Lazzolino, U. Manera, A. Canosa, M. Grassano, F. Palumbo, A. Bombaci, S. Cabras, F. Di Pede, L. Solero, E. Matteoni, C. Moglia, A. Calvo, A. Chio

Abstract:

Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease of adulthood, which primarily affects the central nervous system and is characterized by progressive bilateral degeneration of motor neurons. The degeneration processes in ALS extend far beyond the neurons of the motor system and affect cognition, behaviour and language. The aim was to outline the prevalence of language deficits in an ALS cohort and to explore their profile along with demographic and neuropsychological data. A full neuropsychological battery and a language assessment were administered to 56 ALS patients. The neuropsychological assessment included tests of executive functioning, verbal fluency, social cognition and memory. Language was assessed using tests of verbal comprehension, production and pragmatics. Patients were cognitively classified following the Revised Consensus Criteria and divided into three groups showing different levels of language deficits: group 1 - no language deficit; group 2 - one language deficit; group 3 - two or more language deficits. Chi-square tests of independence and non-parametric group comparisons were applied. Nearly half of the ALS-CN patients (48%) scored below the clinical cut-off on one language test, and only 13% of patients classified as ALS-CI showed no language deficits, while the remaining 87% of ALS-CI reported two or more language deficits. ALS-BI and ALS-CBI cases all reported two or more language deficits. Deficits in production and in comprehension appeared more frequent in ALS-CI patients (p=0.011 and p=0.003, respectively), with a higher percentage of comprehension deficits (83%). Nearly all ALS-CI reported at least one deficit in pragmatic abilities (96%), and all ALS-BI and ALS-CBI patients showed pragmatic deficits. Males showed a higher percentage of pragmatic deficits (97%, p=0.007). No significant differences in language deficits were found between bulbar and spinal onset.
Months from onset and level of impairment at testing (ALS-FRS total score) were not significantly different across levels and types of language impairment. Age and education were significantly higher for cases showing no deficits in comprehension and pragmatics and in the group showing no language deficits. Comparing performances on neuropsychological tests among the three levels of language deficits, no significant differences were found between groups 1 and 2; compared to group 1, group 3 showed specific decline in executive testing, verbal/visuospatial learning, and social cognition. Compared to group 2, group 3 showed worse performances specifically in tests of working memory and attention. Language deficits were found to be widespread in our sample, encompassing verbal comprehension, production and pragmatics. Our study reveals that even cognitively intact patients (ALS-CN) showed at least one language deficit in 48% of cases. The pragmatic domain was the most compromised (84% of the total sample), present in nearly all ALS-CI (96%), likely due to the influence of executive impairment. Lower age and higher education seem to protect comprehension and pragmatics and to limit the presence of language deficits. Finally, executive functions, verbal/visuospatial learning and social cognition differentiate the group with no language deficits from the group with a clinical language impairment (group 3), while attention and working memory differentiate the group with one language deficit from the clinically impaired group.
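The chi-square test of independence named in the methods can be sketched in a few lines of plain Python. The contingency counts below are hypothetical placeholders chosen only to illustrate the computation; they are not the study's data.

```python
# Chi-square test of independence for a contingency table of
# cognitive group (rows) x presence of a language deficit (columns).
# The counts are illustrative placeholders, not the study's data.

def chi_square_independence(table):
    """Return the chi-square statistic and degrees of freedom."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, dof

# Hypothetical 2x2 table: rows = two cognitive groups,
# columns = deficit present / deficit absent.
table = [[27, 29],
         [20, 3]]
stat, dof = chi_square_independence(table)
print(f"chi2 = {stat:.2f}, dof = {dof}")
```

The statistic is then compared against the chi-square critical value for the resulting degrees of freedom (3.84 at the 0.05 level for a 2x2 table).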

Keywords: amyotrophic lateral sclerosis, language assessment, neuropsychological assessment, language deficit

Procedia PDF Downloads 119
95 Numerical and Experimental Comparison of Surface Pressures around a Scaled Ship Wind-Assisted Propulsion System

Authors: James Cairns, Marco Vezza, Richard Green, Donald MacVicar

Abstract:

Significant legislative changes are set to revolutionise the commercial shipping industry. Upcoming emissions restrictions will force operators to look at technologies that can improve the efficiency of their vessels, reducing fuel consumption and emissions. A device which may help in this challenge is the Ship Wind-Assisted Propulsion system (SWAP), an actively controlled aerofoil mounted vertically on the deck of a ship. The device functions in a similar manner to a sail on a yacht, whereby the aerodynamic forces generated by the sail reach an equilibrium with the hydrodynamic forces on the hull and a forward velocity results. Numerical and experimental testing of the SWAP device is presented in this study. Circulation control takes the form of a co-flow jet aerofoil, utilising both blowing from the leading edge and suction at the trailing edge. A jet at the leading edge uses the Coanda effect to energise the boundary layer in order to delay flow separation and create high lift with low drag. The SWAP concept originated with the research and development team at SMAR Azure Ltd. The device will be retrofitted to existing ships so that a component of the aerodynamic forces acts forward and partially reduces the reliance on existing propulsion systems. Wind tunnel tests have been carried out in the de Havilland wind tunnel at the University of Glasgow on a 1:20 scale model of this system. The tests aim to understand the airflow characteristics around the aerofoil and to investigate the approximate lift and drag coefficients that an early iteration of the SWAP device may produce. The data exhibit clear trends of increasing lift as injection momentum increases, with critical flow attachment points identified at specific combinations of jet momentum coefficient, Cµ, and angle of attack, AOA. Various combinations of flow conditions were tested, with the jet momentum coefficient ranging from 0 to 0.7 and the AOA ranging from 0° to 35°.
The Reynolds number across the tested conditions ranged from 80,000 to 240,000. Comparisons between 2D computational fluid dynamics (CFD) simulations and the experimental data are presented for multiple Reynolds-Averaged Navier-Stokes (RANS) turbulence models in the form of normalised surface pressure comparisons. These show good agreement for most of the tested cases. However, certain simulation conditions exhibited a well-documented shortcoming of RANS-based turbulence models for circulation control flows, over-predicting surface pressures and lift coefficients for fully attached flow cases. Work must continue on finding an all-encompassing modelling approach which predicts surface pressures well for all combinations of jet injection momentum and AOA.
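For reference, the two non-dimensional parameters quoted above are conventionally defined as Re = ρU∞c/μ and Cµ = ṁVjet/(q∞S), with dynamic pressure q∞ = ½ρU∞². The sketch below evaluates both; every numerical input is an assumed placeholder for illustration, not a reported test condition of the de Havilland campaign.

```python
# Non-dimensional parameters used in circulation-control testing.
# All numerical values are illustrative assumptions, not the actual
# conditions of the 1:20 scale SWAP wind tunnel tests.

def reynolds_number(rho, u_inf, chord, mu):
    """Re = rho * U_inf * c / mu, based on aerofoil chord length."""
    return rho * u_inf * chord / mu

def jet_momentum_coefficient(m_dot, v_jet, rho, u_inf, ref_area):
    """C_mu = (m_dot * V_jet) / (0.5 * rho * U_inf**2 * S)."""
    q_inf = 0.5 * rho * u_inf ** 2
    return m_dot * v_jet / (q_inf * ref_area)

rho = 1.225        # air density, kg/m^3 (sea level)
mu = 1.81e-5       # dynamic viscosity of air, kg/(m*s)
u_inf = 12.0       # freestream velocity, m/s (assumed)
chord = 0.15       # model chord length, m (assumed)
print(f"Re   = {reynolds_number(rho, u_inf, chord, mu):.0f}")

m_dot = 0.02       # jet mass flow rate, kg/s (assumed)
v_jet = 40.0       # jet exit velocity, m/s (assumed)
ref_area = 0.06    # reference area S, m^2 (assumed)
print(f"C_mu = {jet_momentum_coefficient(m_dot, v_jet, rho, u_inf, ref_area):.3f}")
```

With these assumed inputs the Reynolds number falls inside the tested 80,000-240,000 range and Cµ inside the tested 0-0.7 range.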

Keywords: CFD, circulation control, Coanda, turbo wing sail, wind tunnel

Procedia PDF Downloads 110
94 Identification and Characterization of Novel Genes Involved in Quinone Synthesis in the Odoriferous Defensive Stink Glands of the Red Flour Beetle, Tribolium castaneum

Authors: B. Atika, S. Lehmann, E. Wimmer

Abstract:

Defensive strategies are very common in the insect world. Defensive substances serve a wide variety of functions for beetles, such as repellents, toxicants, insecticides, and antimicrobials. Beetles react to predators, invaders, and parasitic microbes with the release of toxic and repellent substances. Defensive substances are directed against a large array of potential target organisms or may function in boiling bombardment or as surfactants. Usually, Coleoptera biosynthesize and store their defensive compounds in a complex secretory organ known as the odoriferous defensive stink gland. The red flour beetle, Tribolium castaneum (Coleoptera: Tenebrionidae), uses these glands to produce antimicrobial p-benzoquinones and 1-alkenes. In the past, the morphology of the stink glands has been studied in detail in tenebrionid beetles; however, very little is known about the genes involved in the production of the gland secretion. In this study, we characterized a subset of genes that are essential for benzoquinone production in the red flour beetle. In the first phase, we selected 74 potential candidate genes from a genome-wide RNA interference (RNAi) knockdown screen named 'iBeetle.' All 74 candidate genes were functionally characterized by RNAi-mediated gene knockdown and selected for a subsequent gas chromatography-mass spectrometry (GC-MS) analysis of secretion volatiles in the respective knockdown glands. 33 of them were observed to alter the stink gland phenotype. In the GC-MS analysis, 7 candidate genes were noted to display a strongly altered gland upon knockdown, in terms of secretion color and chemical composition, showing their key roles in the biosynthesis of the gland secretion. Morphologically altered stink glands were found for odorant receptor and protein kinase superfamily genes.
Subsequent GC-MS analysis of secretion volatiles revealed reduced benzoquinone levels in LIM domain, PDZ domain and PBP/GOBP family knockdowns, and a complete lack of benzoquinones in the knockdowns of sulfatase-modifying factor enzyme 1 and a sulfate transporter family gene. Based on stink gland transcriptome data, we analyzed the function of these two genes via RNAi-mediated gene knockdowns, GC-MS, in situ hybridization, and enzymatic activity assays. Morphologically altered stink glands were noted in the knockdowns of both genes. Furthermore, GC-MS analysis of secretion volatiles showed a complete lack of benzoquinones in the knockdowns of these two genes. In situ hybridization showed that the two genes are expressed around the vesicles of a certain subgroup of secretory stink gland cells. Enzymatic activity assays on stink gland tissue showed that these genes are involved in p-benzoquinone biosynthesis. These results suggest that sulfatase-modifying factor enzyme 1 and the sulfate transporter family gene play specific roles in benzoquinone biosynthesis in the red flour beetle.

Keywords: red flour beetle, defensive stink gland, benzoquinones, sulfate transporter, sulfatase-modifying factor enzyme 1

Procedia PDF Downloads 124
93 Re-Framing Resilience Turn in Risk and Management with Anti-Positivistic Perspective of Holling's Early Work

Authors: Jose Cañizares

Abstract:

In the last decades, resilience has received much attention in relation to understanding and managing new forms of risk, especially in the context of urban adaptation to climate change. There are abundant concerns, however, about how best to interpret resilience and related ideas, and about whether they can guide ethically appropriate risk-related or adaptation efforts. Narrative creation and framing are critical steps in shaping public discussion and policy in large-scale interventions, since they favor or inhibit early decision and interpretation habits, which can be morally sensitive and then become persistent over time. This article adds to this framing process by contesting a conventional narrative on resilience and offering an alternative one. Conventionally, present ideas on resilience are traced to the work of ecologist C. S. Holling, especially to his 1973 article 'Resilience and Stability of Ecological Systems'. This article is usually portrayed as a contribution of complex systems thinking to theoretical ecology, where Holling appeals to resilience in order to challenge received views on ecosystem stability and the diversity-stability hypothesis. In this regard, resilience is construed as a “purely scientific”, precise and descriptive concept, denoting a complex property that allows ecosystems to persist, or to maintain functions, after disturbance. Yet, these formal features of resilience supposedly changed with Holling’s later work in the 90s, where, it is argued, Holling began to use resilience as a more pragmatic “boundary term”, aimed at unifying transdisciplinary research about risks, ecological or otherwise, and at articulating public debate and governance strategies on the issue. In the conventional story, increased vagueness and degrees of normativity are the price to pay for this conceptual shift, which has made the term more widely usable, but also incompatible with scientific purposes and morally problematic (if not completely objectionable).
This paper builds on a detailed analysis of Holling’s early work to propose an alternative narrative. The study will show that the “complexity turn” has often entangled theoretical and pragmatic aims. Accordingly, Holling’s primary aim was to fight what he termed “pathologies of natural resource management” or “pathologies of command and control management”, and so, the terms of his reform of ecosystem science are partly subordinate to the details of his proposal for reforming the management sciences. As regards resilience, Holling used it as a polysemous, ambiguous and normative term: sometimes, as an instrumental value that is closely related to various stability concepts; other times, and more crucially, as an intrinsic value and a tool for attacking efficiency and instrumentalism in management. This narrative reveals the limitations of its conventional alternative and has several practical advantages. It captures well the structure and purposes of Holling’s project, and the various roles of resilience in it. It helps to link Holling’s early work with other philosophical and ideological shifts at work in the 70s. It highlights the currency of Holling’s early work for present research and action in fields such as risk and climate adaptation. And it draws attention to morally relevant aspects of resilience that the conventional narrative neglects.

Keywords: resilience, complexity turn, risk management, positivistic, framing

Procedia PDF Downloads 140
92 Applying an Automatic Speech Intelligent System to the Health Care of Patients Undergoing Long-Term Hemodialysis

Authors: Kuo-Kai Lin, Po-Lun Chang

Abstract:

Research Background and Purpose: With the development of the Internet and multimedia, information technology has become a crucial avenue of modern communication and knowledge acquisition. The advantages of using mobile devices for learning include making learning borderless and accessible. Mobile learning has become a trend in disease management and health promotion in recent years. End-stage renal disease (ESRD) is an irreversible chronic disease, and patients who do not receive kidney transplants can only rely on hemodialysis or peritoneal dialysis to survive. Due to the complexities of caregiving for patients with ESRD, stemming from their advanced age and other comorbidities, the patients’ incapacity for self-care increases their reliance on families or primary caregivers, and whether the primary caregivers adequately understand and implement patient care is a topic of concern. Therefore, this study explored whether primary caregivers’ health care provision can be improved through the intervention of an automatic speech intelligent system, thereby improving the objective health outcomes of patients undergoing long-term dialysis. Method: This study developed an automatic speech intelligent system with healthcare functions such as health information voice prompts, two-way feedback, real-time push notifications, and health information delivery. Convenience sampling was adopted to recruit eligible patients from a hemodialysis center at a regional teaching hospital as research participants. A one-group pretest-posttest design was adopted. Descriptive and inferential statistics were calculated from the demographic information collected from questionnaires answered by patients and primary caregivers, and from a medical record review, a health care scale (recorded six months before and after the implementation of the intervention measures), a subjective health assessment, and a report of objective physiological indicators.
The changes in health care behaviors, subjective health status, and physiological indicators before and after the intervention of the proposed automatic speech intelligent system were then compared. Conclusion and Discussion: The preliminary automatic speech intelligent system developed in this study was tested with 20 pretest patients at the recruitment location, and their health care capacity scores improved from 59.1 to 72.8; comparisons through a nonparametric test indicated a significant difference (p < .01). The average score for their subjective health assessment rose from 2.8 to 3.3. A survey of their objective physiological indicators discovered that the compliance rate for the blood potassium level was the most significant indicator; its average compliance rate increased from 81% to 94%. The results demonstrated that this automatic speech intelligent system yielded a higher efficacy for chronic disease care than did conventional health education delivered by nurses. Therefore, future efforts will continue to increase the number of recruited patients and to refine the intelligent system. Future improvements to the intelligent system can be expected to enhance its effectiveness even further.
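The pretest-posttest comparison above (scores rising from 59.1 to 72.8, p < .01) used an unspecified nonparametric test; a sign test is one simple choice for a one-group paired design. The sketch below implements it on hypothetical paired scores; both the choice of test and the data are assumptions for illustration only.

```python
from math import comb

# Two-sided sign test for paired pretest/posttest scores.
# The study reports a nonparametric comparison on 20 patients;
# the exact test used and the scores below are assumptions.

def sign_test(pre, post):
    """Two-sided sign test p-value for paired samples (ties dropped)."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    pos = sum(d > 0 for d in diffs)
    k = min(pos, n - pos)
    # Two-sided binomial tail probability under the null p = 0.5.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

pre = [55, 60, 58, 62, 57, 59, 61, 56, 60, 63,
       54, 58, 62, 59, 57, 61, 60, 55, 59, 58]
post = [s + 13 for s in pre]   # uniform improvement, hypothetical
print(f"p = {sign_test(pre, post):.2g}")
```

When every patient improves, as in this hypothetical data, the two-sided p-value is far below 0.01, matching the direction of the reported result.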

Keywords: automatic speech intelligent system for health care, primary caregiver, long-term hemodialysis, health care capabilities, health outcomes

Procedia PDF Downloads 88
91 The Analgesic Effect of Electroacupuncture in a Murine Fibromyalgia Model

Authors: Bernice Jeanne Lottering, Yi-Wen Lin

Abstract:

Introduction: Chronic pain conditions such as fibromyalgia (FM) show a definitive lack of objective parameters for measurement and for gauging treatment efficacy. Persistent widespread pain and generalized tenderness are the characteristic symptoms, affecting a substantial proportion of the global population, particularly females. The disease tends to be refractory to conventional treatments, largely because its etiological and pathogenic development is incompletely understood. Emerging evidence indicates that the central nervous system (CNS) plays a critical role in the amplification of pain signals and in the neurotransmitters associated therewith. Various stimuli have been found to activate the channels existent on nociceptor terminals, thereby actuating nociceptive impulses along the pain pathways. The transient receptor potential vanilloid 1 (TRPV1) channel functions as a molecular integrator for numerous sensory inputs, such as nociception, and was explored in the current study. Current intervention approaches face a multitude of challenges, ranging from devising effective therapeutic interventions to the limitations of pathognomonic criteria, which result from incomplete understanding of and partial evidence on the mechanisms of action of FM. It remains unclear whether electroacupuncture (EA) plays an integral role in the functioning of the TRPV1 pathway, and whether or not it can reduce the chronic pain induced by FM. Aims: The aim of this study was to explore the mechanisms underlying the activation and modulation of the TRPV1 channel pathway in a murine cold stress model of FM. Furthermore, the effect of EA in the treatment of the mechanical and thermal pain expressed in FM was also investigated. Methods: 18 C57BL/6 wild-type and 6 TRPV1 knockout (KO) mice, aged 8-12 weeks, were exposed to an intermittent cold stress-induced fibromyalgia-like pain model, with or without EA treatment at ZusanLi ST36 (2 Hz/20 min) on days 3 to 5.
Von Frey and Hargreaves behaviour tests were implemented in order to analyze the mechanical and thermal pain thresholds on days 0, 3 and 5 in the control group (C), the FM group (FM), the EA-treated FM group (FM + EA) and the FM KO group. Results: An increase in mechanical and thermal hyperalgesia was observed in the FM, EA and KO groups when compared to the control group. This initial increase was reduced in the EA group, which directs focus at the treatment efficacy of EA against nociceptive sensitization and the analgesic effect EA has in attenuating FM-associated pain. Discussion: An increase in nociceptive sensitization was observed in the withdrawal thresholds of the von Frey mechanical test and the Hargreaves thermal test. TRPV1 function in mice has been scientifically associated with these nociceptive conduits, and the behaviour test results suggest that TRPV1 upregulation is central to the FM-induced hyperalgesia. These data were supported by the decrease in sensitivity observed in the results of the TRPV1 KO group. Moreover, EA treatment decreased this FM-induced nociceptive sensitization, suggesting that TRPV1 upregulation and overexpression can be attenuated by EA at bilateral ST36. This evidence compellingly implies that the analgesic effect of EA is associated with TRPV1 downregulation.

Keywords: fibromyalgia, electroacupuncture, TRPV1, nociception

Procedia PDF Downloads 119
90 Characterization of Platelet Mitochondrial Metabolism in COVID-19 caused Acute Respiratory Distress Syndrome (ARDS)

Authors: Anna Höfer, Johannes Herrmann, Patrick Meybohm, Christopher Lotz

Abstract:

Mitochondria are pivotal for energy supply and the regulation of cellular functions. Deficiencies of mitochondrial metabolism have been implicated in diverse stressful conditions, including infections. Platelets are key mediators of thrombo-inflammation during the development and resolution of acute respiratory distress syndrome (ARDS). Previous data point to an exhausted platelet phenotype in critically ill patients with coronavirus disease 2019 (COVID-19) impacting the course of disease. The objective of this work was to characterize platelet mitochondrial metabolism in patients suffering from COVID-19 ARDS. A longitudinal analysis of platelet mitochondrial metabolism in 24 patients with COVID-19 induced ARDS compared to 35 healthy controls (ctrl) was performed. Blood samples were analyzed at two time points (t1 = day 1; t2 = day 5-7 after study inclusion). The activity of mitochondrial citrate synthase was photometrically measured. The impact of oxidative stress on mitochondrial permeability was assessed by a photometric calcium-induced swelling assay, and the activity of superoxide dismutase (SOD) by a SOD assay kit. The amount of protein carbonylation and the activity of mitochondrial complexes I-IV were photometrically determined. Levels of interleukin (IL)-1α, IL-1β and tumor necrosis factor α (TNF-α) were measured by a multiplex assay kit. Median age was 54 years, 63% were male and BMI was 29.8 kg/m2. SOFA (12; IQR: 10-15) and APACHE II (27; IQR: 24-30) scores indicated critical illness. Median Murray Score was 3.4 (IQR: 2.8-3.4); 21/24 (88%) required mechanical ventilation and 14/24 (58%) V-V ECMO support. Platelet counts in ARDS did not change during the ICU stay (t1: 212 vs. t2: 209 x109/L). However, mean platelet volume (MPV) significantly increased (t1: 10.6 vs. t2: 11.9 fL; p<0.0001). Citrate synthase activity showed no significant differences between ctrl and ARDS patients.
Calcium-induced swelling was more pronounced in patients at t1 compared to t2 and to ctrl (50 µM; t1: 0.006 vs. ctrl: 0.016 ΔOD; p=0.001). The amount of protein carbonylation, a marker of irreversible proteomic modification, constantly increased during the ICU stay and compared to ctrl, without reaching significance. In parallel, superoxide dismutase activity gradually declined during ICU treatment vs. ctrl (t2: -29 vs. ctrl: -17%; p=0.0464). Complex I analysis revealed significantly stronger activity in ARDS vs. ctrl (t1: 0.633 vs. ctrl: 0.415 ΔOD; p=0.0086). There were no significant differences in complex II, III or IV activity in platelets from ARDS patients compared to ctrl. IL-18 constantly increased during the observation period without reaching significance. IL-1α and TNF-α did not differ from ctrl. However, IL-1β levels were significantly elevated in ARDS (t1: 16.8; t2: 16.6 vs. ctrl: 12.4 pg/mL; p1=0.0335, p2=0.0032). This study reveals new insights into platelet mitochondrial metabolism during COVID-19-caused ARDS. The data point towards enhanced platelet activity with a pronounced turnover rate. We found increased activity of mitochondrial complex I and evidence of enhanced oxidative stress. In parallel, protective mechanisms against oxidative stress were diminished, with elevated levels of IL-1β likely creating a pro-apoptotic environment. These mechanisms may contribute to platelet exhaustion in ARDS.

Keywords: acute respiratory distress syndrome (ARDS), coronavirus disease 2019 (COVID-19), oxidative stress, platelet mitochondrial metabolism

Procedia PDF Downloads 18
89 Climate Change Impact on Mortality from Cardiovascular Diseases: Case Study of Bucharest, Romania

Authors: Zenaida Chitu, Roxana Bojariu, Liliana Velea, Roxana Burcea

Abstract:

A number of studies show that extreme air temperature affects mortality related to cardiovascular diseases, particularly among elderly people. In Romania, summer thermal discomfort, expressed by the Universal Thermal Climate Index (UTCI), is highest in the southern part of the country, where Bucharest, the largest Romanian urban agglomeration, is also located. Urban characteristics such as high building density and reduced green areas amplify the increase of air temperature during summer. In Bucharest, as in many other large cities, the urban heat island effect is present and raises air temperature compared to surrounding areas. This increase is particularly important during summer heat waves. In this context, the researchers performed a temperature-mortality analysis based on daily deaths related to cardiovascular diseases recorded between 2010 and 2019 in Bucharest. The temperature-mortality relationship was modeled by applying a distributed lag non-linear model (DLNM) that includes a bi-dimensional cross-basis function and flexible natural cubic spline functions with three internal knots at the 10th, 75th and 90th percentiles of the temperature distribution, for modelling both the exposure-response and lagged-response dimensions. This analysis was first applied to the present climate. Extrapolation of the exposure-response associations beyond the observed data allowed us to estimate future effects on mortality due to temperature changes under climate change scenarios and specific assumptions. We used future projections of air temperature from five numerical experiments with regional climate models included in the EURO-CORDEX initiative under the relatively moderate (RCP 4.5) and pessimistic (RCP 8.5) concentration scenarios. For RCP 8.5, the results show an ensemble-averaged increase of 6.1% in the heat-attributable mortality fraction in the future compared with the present climate (2090-2100 vs. 2010-2019), corresponding to an increase of 640 deaths/year, while the mortality fraction due to cold conditions will be reduced by 2.76%, corresponding to a decrease of 288 deaths/year. When mortality data are stratified according to age, the ensemble-averaged increase of the heat-attributable mortality fraction for elderly people (> 75 years) in the future is even higher (6.5%). These findings reveal the necessity to carefully plan urban development in Bucharest to face the public health challenges raised by climate change. Paper Details: This work is financed by the project URCLIM, which is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by the Ministry of Environment, Romania, with co-funding by the European Union (Grant 690462). A part of this work performed by one of the authors has received funding from the European Union’s Horizon 2020 research and innovation programme through the project EXHAUSTION under grant agreement No 820655.
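The attributable-fraction bookkeeping behind figures such as "6.1% of heat-attributable mortality" can be sketched as follows. This is a minimal illustration, not the study's actual DLNM fit: the relative-risk curve `rr`, the minimum-mortality temperature of 22 °C, and the daily data are all hypothetical assumptions.

```python
import numpy as np

def attributable_fraction(temps, deaths, rr, mmt, heat=True):
    """Fraction of deaths attributable to heat (or cold), given a fitted
    relative-risk curve rr(T) and a minimum-mortality temperature mmt,
    using forward attribution: d * (RR - 1) / RR on exposed days."""
    temps = np.asarray(temps, float)
    deaths = np.asarray(deaths, float)
    mask = temps > mmt if heat else temps < mmt
    rr_vals = np.array([rr(t) for t in temps])
    attr = np.where(mask, deaths * (rr_vals - 1.0) / rr_vals, 0.0)
    return attr.sum() / deaths.sum()

# hypothetical log-linear risk increase above the minimum-mortality temperature
rr = lambda t: np.exp(0.03 * max(t - 22.0, 0.0))
temps = [18, 20, 25, 30, 33]     # daily mean temperatures (°C), illustrative
deaths = [40, 42, 45, 50, 55]    # daily cardiovascular deaths, illustrative
af = attributable_fraction(temps, deaths, rr, mmt=22.0)
```

In the study itself the relative-risk curve comes from the fitted cross-basis, and the future fractions are recomputed under the projected EURO-CORDEX temperature series.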

Keywords: cardiovascular diseases, climate change, extreme air temperature, mortality

Procedia PDF Downloads 97
88 Production of Medicinal Bio-active Amino Acid Gamma-Aminobutyric Acid In Dairy Sludge Medium

Authors: Farideh Tabatabaee Yazdi, Fereshteh Falah, Alireza Vasiee

Abstract:

Introduction: Gamma-aminobutyric acid (GABA) is a non-protein amino acid that is widely present in organisms. GABA is a pharmacologically and biologically active component with wide and useful applications. Several important physiological functions of GABA have been characterized, such as neurotransmission and induction of hypotension. GABA is also a strong secretagogue of insulin from the pancreas, effectively inhibits small airway-derived lung adenocarcinoma, and acts as a tranquilizer. Many microorganisms can produce GABA, and lactic acid bacteria have been a focus of research in recent years because they possess special physiological activities and are generally regarded as safe. Among them, Lb. brevis produces the highest amounts of GABA. The major factors affecting GABA production have been characterized, including carbon sources and glutamate concentration. The use of food industry waste to produce valuable products such as amino acids seems to be a good way to reduce production costs and prevent the waste of food resources. In a dairy factory, a high volume of sludge is produced from the separator; it contains useful compounds such as growth factors, carbon, nitrogen, and organic matter that can be used by microorganisms such as Lb. brevis as carbon and nitrogen sources. Therefore, it is a good source for GABA production. GABA is primarily formed by the irreversible α-decarboxylation of L-glutamic acid or its salts, catalysed by the GAD enzyme. In the present study, this aim was achieved by fast growth of Lb. brevis and production of GABA, using dairy industry sludge as the growth medium. Lactobacillus brevis strains obtained from the Microbial Type Culture Collection (MTCC) were used as model strains. To prepare dairy sludge as a medium, it was sterilized at 121 °C for 15 minutes. Lb. brevis was inoculated into the sludge medium at pH 6 and incubated for 120 hours at 30 °C.
After fermentation, the supernatant was centrifuged, and the GABA produced was analyzed qualitatively by thin-layer chromatography (TLC) and quantitatively by high-performance liquid chromatography (HPLC). By increasing the percentage of dairy sludge in the culture medium, the amount of GABA increased. Evaluation of bacterial growth in this medium also showed a positive effect of dairy sludge on the growth of Lb. brevis, which resulted in the production of more GABA. GABA-producing LAB offer the opportunity to develop naturally fermented, health-oriented products. Although some GABA-producing LAB have been isolated to find strains suitable for different fermentations, further screening of various GABA-producing strains of LAB, especially high-yielding strains, is necessary. The production of gamma-aminobutyric acid by lactic acid bacteria is safe and eco-friendly, and the use of dairy industry waste enhances environmental safety while enabling the production of valuable compounds such as GABA. In general, dairy sludge is a suitable medium for the growth of lactic acid bacteria and the production of this amino acid, and by providing carbon and nitrogen sources it can reduce the final cost.

Keywords: GABA, Lactobacillus, HPLC, dairy sludge

Procedia PDF Downloads 98
87 Connectomic Correlates of Cerebral Microhemorrhages in Mild Traumatic Brain Injury Victims with Neural and Cognitive Deficits

Authors: Kenneth A. Rostowsky, Alexander S. Maher, Nahian F. Chowdhury, Andrei Irimia

Abstract:

The clinical significance of cerebral microbleeds (CMBs) due to mild traumatic brain injury (mTBI) remains unclear. Here we use magnetic resonance imaging (MRI), diffusion tensor imaging (DTI) and connectomic analysis to investigate the statistical association between mTBI-related CMBs, post-TBI changes to the human connectome and neurological/cognitive deficits. This study was undertaken in agreement with US federal law (45 CFR 46) and was approved by the Institutional Review Board (IRB) of the University of Southern California (USC). Two groups, one consisting of 26 (13 females) mTBI victims and another comprising 26 (13 females) healthy control (HC) volunteers were recruited through IRB-approved procedures. The acute Glasgow Coma Scale (GCS) score was available for each mTBI victim (mean µ = 13.2; standard deviation σ = 0.4). Each HC volunteer was assigned a GCS of 15 to indicate the absence of head trauma at the time of enrollment in our study. Volunteers in the HC and mTBI groups were matched according to their sex and age (HC: µ = 67.2 years, σ = 5.62 years; mTBI: µ = 66.8 years, σ = 5.93 years). MRI [including T1- and T2-weighted volumes, gradient recalled echo (GRE)/susceptibility weighted imaging (SWI)] and gradient echo (GE) DWI volumes were acquired using the same MRI scanner type (Trio TIM, Siemens Corp.). Skull-stripping and eddy current correction were implemented. DWI volumes were processed in TrackVis (http://trackvis.org) and 3D Slicer (http://www.slicer.org). Tensors were fit to DWI data to perform DTI, and tractography streamlines were then reconstructed using deterministic tractography. A voxel classifier was used to identify image features as CMB candidates using Microbleed Anatomic Rating Scale (MARS) guidelines. 
For each peri-lesional DTI streamline bundle, the null hypothesis was formulated as the statement that there was no neurological or cognitive deficit associated with between-scan differences in the mean FA of DTI streamlines within each bundle. The statistical significance of each hypothesis test was calculated at the α = 0.05 level, subject to the family-wise error rate (FWER) correction for multiple comparisons. Results: In HC volunteers, the along-track analysis failed to identify statistically significant differences in the mean FA of DTI streamline bundles. In the mTBI group, significant differences in the mean FA of peri-lesional streamline bundles were found in 21 out of 26 volunteers. In those volunteers where significant differences had been found, these differences were associated with an average of ~47% of all identified CMBs (σ = 21%). In 12 out of the 21 volunteers exhibiting significant FA changes, cognitive functions (memory acquisition and retrieval, top-down control of attention, planning, judgment, cognitive aspects of decision-making) were found to have deteriorated over the six months following injury (r = -0.32, p < 0.001). Our preliminary results suggest that acute post-TBI CMBs may be associated with cognitive decline in some mTBI patients. Future research should attempt to identify mTBI patients at high risk for cognitive sequelae.
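The family-wise error rate control described above can be sketched with a Bonferroni-style correction. This is a minimal illustration under the assumption of one p-value per peri-lesional streamline bundle; the abstract does not specify which FWER procedure the authors used, and the p-values below are hypothetical.

```python
import numpy as np

def bonferroni(p_values, alpha=0.05):
    """FWER control: reject H0_i only if p_i <= alpha / m,
    where m is the number of simultaneous comparisons."""
    p = np.asarray(p_values, float)
    m = len(p)
    return p <= alpha / m  # boolean rejection mask

# hypothetical per-bundle p-values from the along-track FA comparisons
p_vals = [0.001, 0.02, 0.004, 0.30]
reject = bonferroni(p_vals)  # only bundles surviving correction count
```

With four comparisons the per-test threshold becomes 0.0125, so the raw p = 0.02 no longer counts as significant, which is exactly the protection the FWER correction buys.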

Keywords: traumatic brain injury, magnetic resonance imaging, diffusion tensor imaging, connectomics

Procedia PDF Downloads 148
86 Evaluation: Developing An Appropriate Survey Instrument For E-Learning

Authors: Brenda Ravenscroft, Ulemu Luhanga, Bev King

Abstract:

A comprehensive evaluation of online learning needs to include a blend of educational design, technology use, and online instructional practices that integrate technology appropriately for developing and delivering quality online courses. Research shows that classroom-based evaluation tools do not adequately capture the dynamic relationships between content, pedagogy, and technology in online courses. Furthermore, studies suggest that using classroom evaluations for online courses yields lower than normal scores for instructors, and may affect faculty negatively in terms of administrative decisions. In 2014, the Faculty of Arts and Science at Queen’s University responded to this evidence by seeking an alternative to the university-mandated evaluation tool, which is designed for classroom learning. The Faculty is deeply engaged in e-learning, offering a large variety of online courses and programs in the sciences, social sciences, humanities and arts. This paper describes the process by which a new student survey instrument for online courses was developed and piloted, the methods used to analyze the data, and the ways in which the instrument was subsequently adapted based on the results. It concludes with a critical reflection on the challenges of evaluating e-learning. The Student Evaluation of Online Teaching Effectiveness (SEOTE), developed by Arthur W. Bangert in 2004 to assess constructivist-compatible online teaching practices, provided the starting point. Modifications were made in order to allow the instrument to serve the two functions required by the university: student survey results provide the instructor with feedback to enhance their teaching, and also provide the institution with evidence of teaching quality in personnel processes.
Changes were therefore made to the SEOTE to distinguish more clearly between evaluation of the instructor’s teaching and evaluation of the course design, since, in the online environment, the instructor is not necessarily the course designer. After the first pilot phase, involving 35 courses, the results were analyzed using Stobart's validity framework as a guide. This process included statistical analyses of the data to test for reliability and validity, student and instructor focus groups to ascertain the tool’s usefulness in terms of the feedback it provided, and an assessment of the utility of the results by the Faculty’s e-learning unit responsible for supporting online course design. A set of recommendations led to further modifications to the survey instrument prior to a second pilot phase involving 19 courses. Following the second pilot, statistical analyses were repeated, and more focus groups were used, this time involving deans and other decision makers to determine the usefulness of the survey results in personnel processes. As a result of this inclusive process and robust analysis, the modified SEOTE instrument is currently being considered for adoption as the standard evaluation tool for all online courses at the university. Audience members at this presentation will be stimulated to consider factors that differentiate effective evaluation of online courses from classroom-based teaching. They will gain insight into strategies for introducing a new evaluation tool in a unionized institutional environment, and methodologies for evaluating the tool itself.
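The reliability testing mentioned above can be illustrated with Cronbach's alpha, a common internal-consistency statistic for multi-item survey instruments. The abstract does not say which reliability statistic was computed, so this is a hedged sketch of one plausible choice, with made-up rating data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Internal-consistency reliability of a k-item survey.
    scores: (n_respondents, k_items) array of item ratings."""
    X = np.asarray(scores, float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)     # variance of each respondent's total
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# hypothetical responses: 3 students rating 2 survey items on a 1-5 scale
alpha = cronbach_alpha([[4, 5], [3, 3], [5, 5]])
```

Values near 1 indicate that the items move together (high internal consistency); instruments are often expected to reach roughly 0.7 or higher before the scale scores are trusted.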

Keywords: evaluation, online courses, student survey, teaching effectiveness

Procedia PDF Downloads 244
85 Identification and Characterization of Small Peptides Encoded by Small Open Reading Frames using Mass Spectrometry and Bioinformatics

Authors: Su Mon Saw, Joe Rothnagel

Abstract:

Short open reading frames (sORFs) located in the 5’UTR of mRNAs are known as uORFs. Characterization of uORF-encoded peptides (uPEPs), i.e., a subset of short open reading frame encoded peptides (sPEPs), and of their translational regulation leads to understanding of the causes of genetic disease, of proteome complexity, and of the development of treatments. The existence of uORF products within the cellular proteome can be detected by LC-MS/MS. Demonstrating that a uORF is translated into a uPEP, and successfully identifying that uPEP, will allow characterization of uPEPs: their structures, functions, subcellular localization, evolutionary maintenance (conservation in human and other species) and abundance in cells. It is hypothesized that a subset of sORFs are translatable and that their encoded sPEPs are functional and endogenously expressed, contributing to the complexity of the eukaryotic cellular proteome. This project aimed to investigate whether sORFs encode functional peptides. Liquid chromatography-mass spectrometry (LC-MS) and bioinformatics were thus employed. Because sPEPs are probably of low abundance and small size, efficient peptide enrichment strategies that enrich small proteins and deplete the sub-proteome of large and abundant proteins are crucial for identifying sPEPs. Low molecular weight proteins were extracted using SDS-PAGE from Human Embryonic Kidney (HEK293) cells and by Strong Cation Exchange Chromatography (SCX) from secreted HEK293 proteins. Extracted proteins were digested by trypsin into peptides, which were detected by LC-MS/MS. The MS/MS data obtained were searched against Swiss-Prot using MASCOT version 2.4 to filter out known proteins, and all unmatched spectra were re-searched against the human RefSeq database. ProteinPilot v5.0.1 was used to identify sPEPs by searching against the human RefSeq, Vanderperre and Human Alternative Open Reading Frame (HaltORF) databases. Potential sPEPs were analyzed by bioinformatics.
Since SDS-PAGE could not separate proteins <20 kDa, this approach could not identify sPEPs. All MASCOT-identified peptide fragments were parts of main open reading frames (mORFs) according to ORF Finder and blastp searches. No sPEP was detected, and the existence of sPEPs could not be confirmed in this study. Thirteen sORFs shown to be translated in HEK293 cells by mass spectrometry in previous studies were characterized by bioinformatics. The sPEPs identified in those previous studies were <100 amino acids and <15 kDa. Bioinformatics results showed that sORFs are translated into sPEPs and contribute to proteome complexity. The uPEP translated from the uORF of SLC35A4 was strongly conserved in human and mouse, while the uPEP translated from the uORF of MKKS was strongly conserved in human and Rhesus monkey. Cross-species conservation of uORFs associated with protein translation strongly suggests evolutionary maintenance of the coding sequence and indicates probable functional expression of the peptides encoded within these uORFs. Translation of sORFs was confirmed by mass spectrometry, and sPEPs were characterized by bioinformatics.

Keywords: bioinformatics, HEK293 cells, liquid chromatography-mass spectrometry, ProteinPilot, Strong Cation Exchange Chromatography, SDS-PAGE, sPEPs

Procedia PDF Downloads 160
84 Interpretation of Time Series Groundwater Monitoring Data Using Analytical Impulse Response Function Method to Understand Groundwater Processes Along the Murray River Floodplain at Gunbower Forest, Victoria, Australia

Authors: Mark Hocking

Abstract:

There is concern about the potential impact environmental flooding may have on groundwater levels and salinity processes in the Murray-Darling Basin. A study was undertaken to determine whether environmental flooding of Gunbower Forest, in Victoria, Australia, has an impact on groundwater level and salinity. To assess the impact, Impulse Response Functions (IRFs) were applied to time series groundwater monitoring well data in the area surrounding Gunbower Forest. It is found that rainfall is the primary driver of seasonal water table fluctuation, and the Murray River water level is a secondary contributor to the water table fluctuations. The dominant process that influences the long-term water table level and salinity conditions is associated with pressure changes in the deep regional aquifer. The study demonstrates that groundwater level fluctuations in the vicinity of Gunbower Forest do not correlate with flooding (natural or managed). Groundwater recharge is calculated by applying the bore hydrograph method to the rainfall-attributed forcing function fluctuations. Data collected from thirty-three bores between 1990 and 2020 were processed to determine a 30-year average groundwater recharge rate. A 5% specific yield for the unconfined aquifer is assumed based on previously published data. It is found that the rainfall-attributed mean annual groundwater recharge varied between 2 mm/year and 189 mm/year, with a median of 33.6 mm/year. Surface water recharge is also calculated by analysing the surface-water-attributed forcing function fluctuations and is found to be as high as 37 mm/year, with most of the high values in the vicinity of rivers or agricultural land. There is a long-term regional aquifer declining trend, with most water table bores showing an average falling trend of 20 cm/year, independent of rainfall, over the past 30 years. It is found that the groundwater level beneath Gunbower Forest is dominated by groundwater evapotranspiration.
Evapotranspiration lowers the water table by as much as 0.5 m within the forest, thereby causing a relative groundwater level depression under Gunbower Forest. Historical data show that groundwater salinity in the area varies and has an electrical conductivity of up to 45,000 µS/cm (comparable to seawater). High groundwater salinity occurs both within and outside Gunbower Forest, as well as adjacent to the Murray River. Available groundwater salinity data suggest trends are generally stable; however, data quality and collection frequency could be improved. This study shows that at the majority of locations analyzed, groundwater recharge occurred due to both rainfall and water loss from the Murray River. It is found that deep groundwater pressures determine the base groundwater level, and the fluctuation of the deeper aquifer pressures determines the environmental interaction at the water surface. Local groundwater processes, such as high evapotranspiration rates in Gunbower Forest, have the capacity to lower the water table locally. The rise or fall of the regional aquifer water level has the greatest influence on the groundwater salinity in and around Gunbower Forest.
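The bore hydrograph (water-table fluctuation) recharge estimate described above reduces to multiplying the rainfall-attributed head rises by the assumed specific yield. A minimal sketch, using the study's 5% specific yield but hypothetical head rises:

```python
def recharge_wtf(head_rises_m, specific_yield=0.05):
    """Water-table fluctuation estimate of annual recharge:
    R = Sy * (sum of rainfall-attributed head rises over the year),
    converted from metres to mm/year."""
    return specific_yield * sum(head_rises_m) * 1000.0

# hypothetical rainfall-attributed rises (m) read from one bore's hydrograph
rises = [0.25, 0.40, 0.10]
r = recharge_wtf(rises)  # 0.05 * 0.75 m = 37.5 mm/year
```

This makes clear why the reported recharge range (2 to 189 mm/year) scales linearly with both the assumed specific yield and the magnitude of the identified head rises.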

Keywords: groundwater data interpretation, groundwater monitoring, hydrogeology, impulse response function

Procedia PDF Downloads 29
83 Increasing System Adequacy Using Integration of Pumped Storage: Renewable Energy to Reduce Thermal Power Generations Towards RE100 Target, Thailand

Authors: Mathuravech Thanaphon, Thephasit Nat

Abstract:

The Electricity Generating Authority of Thailand (EGAT) is focusing on expanding its pumped storage hydropower (PSH) capacity to increase the reliability of the system during peak demand and allow for greater integration of renewables. To achieve this requirement, Thailand will have to double its current renewable electricity production. To address the challenges of balancing supply and demand in the grid under increasing levels of RE penetration and rising peak demand, EGAT has been studying the potential for additional PSH capacity for several years, both to enable an increased share of RE and replace existing fossil fuel-fired generation, and to assess the role pumped storage hydropower would play in fulfilling multiple grid functions and renewable integration. The proposed sites for new PSH would help increase the reliability of power generation in Thailand. However, most of the electricity generation will come from RE, chiefly wind and photovoltaics, and significant additional energy storage capacity will be needed. In this paper, the impact of integrating the PSH system on the adequacy of renewable-rich power generating systems, with the aim of reducing thermal power generating units, is investigated. The variations of system adequacy indices are analyzed for different PSH-renewables capacities and storage levels. The study addresses Power Development Plan 2018 rev.1 (PDP2018 rev.1), modified by integrating six new PSH systems together with the RE planning and development expected after 2030. The system adequacy indices through power generation are obtained using Multi-Objective Genetic Algorithm (MOGA) optimization. MOGA is a probabilistic, heuristic, stochastic algorithm that is able to find global minima and has the advantage that the fitness function does not require gradient information. In this sense, the method is more flexible for solving reliability optimization problems for a composite power system.
The optimization with an hourly time step covers a planning horizon of years, much larger than the weekly horizon usually set in scheduling studies. The objective function is optimized to maximize RE energy generation, minimize energy imbalances, and minimize thermal power generation, using MATLAB. PDP2018 rev.1 was simulated based on its planned capacity through 2030 and 2050. Four main scenario analyses are therefore conducted, according to the target share of renewables: 1) Business-As-Usual (BAU), 2) National Targets (30% RE in 2030), 3) Carbon Neutrality Targets (50% RE in 2050), and 4) 100% RE, or full decarbonization. According to the results, generating system adequacy is significantly affected by both PSH-RE and thermal units. When a PSH is integrated, it can provide hourly capacity to the power system as well as better allocate renewable energy generation to reduce thermal generation and improve system reliability. These results show that a significant level of reliability improvement can be obtained by PSH, especially in renewable-rich power systems.
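The gradient-free, population-based search that MOGA performs can be sketched in miniature. This toy keeps a Pareto (non-dominated) front and refills the population by mutating survivors; it is not EGAT's model, and the single decision variable (the share of demand served by RE plus PSH) and both objectives are illustrative assumptions.

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse than b in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def moga(objectives, bounds, pop=40, gens=60, seed=1):
    """Toy multi-objective GA: keep the non-dominated set each generation
    and refill by mutating random survivors; no gradients are needed."""
    rng = random.Random(seed)
    lo, hi = bounds
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        scored = [(x, objectives(x)) for x in population]
        front = [x for x, fx in scored
                 if not any(dominates(fy, fx) for _, fy in scored)]
        population = [min(hi, max(lo, rng.choice(front) + rng.gauss(0.0, 0.05)))
                      for _ in range(pop)]
    scored = [(x, objectives(x)) for x in population]
    return [x for x, fx in scored
            if not any(dominates(fy, fx) for _, fy in scored)]

# hypothetical objectives (both minimized): thermal share, and the
# imbalance between the chosen RE share and the RE actually available
re_available = 0.6
objs = lambda x: (1.0 - x, abs(x - re_available))
pareto = moga(objs, bounds=(0.0, 1.0))
```

The returned front traces the trade-off the abstract describes: pushing thermal generation down while keeping energy imbalances small, with no gradient of the fitness function ever computed.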

Keywords: pumped storage hydropower, renewable energy integration, system adequacy, power development planning, RE100, multi-objective genetic algorithm

Procedia PDF Downloads 30
82 Against the Philosophical-Scientific Racial Project of Biologizing Race

Authors: Anthony F. Peressini

Abstract:

The concept of race has recently come prominently back into discussion in the context of medicine and medical science, along with renewed efforts to biologize racial concepts. This paper argues that these renewed efforts to biologize race by way of medicine and population genetics fail on their own terms and, more importantly, that the philosophical project of biologizing race ought to be recognized for what it is, a retrograde racial project, and abandoned. There is clear agreement that standard racial categories and concepts cannot be grounded in the old way of racial naturalism, which understands race as a real, interest-independent biological/metaphysical category whose members share “physical, moral, intellectual, and cultural characteristics.” But equally clear is the very real and pervasive presence of racial concepts in individual and collective consciousness and behavior, and so race remains a pressing area in which to seek deeper understanding. Recent philosophical work has endeavored to reconcile these two observations by developing a “thin” conception of race, grounded in scientific concepts but without the moral and metaphysical content. Such “thin,” science-based analyses take the “commonsense” or “folk” sense of race as it functions in contemporary society as the starting point for their philosophic-scientific projects to biologize racial concepts. A “philosophic-scientific analysis” is a special case of the cornerstone of analytic philosophy: a conceptual analysis, that is, a rendering of a concept into the more perspicuous concepts that constitute it. Thus a philosophic-scientific account of a concept is an attempt to work out an analysis that makes use of empirical science’s insights to ground, legitimate and explicate the target concept in terms of clearer concepts informed by empirical results.
The focus in this paper is on three recent philosophic-scientific cases for retaining “race” that all share this general analytic schema, but that make use of “medical necessity,” population genetics, and human genetic clustering, respectively. After arguing that each of these three approaches suffers from internal difficulties, the paper considers the general analytic schema employed by such biologizations of race. While such endeavors are inevitably prefaced with the disclaimer that the theory to follow is non-essentialist and non-racialist, the case will be made that such efforts are not neutral scientific or philosophical projects but rather are what sociologists call a racial project, that is, one of many competing efforts that conjoin a representation of what race means to specific efforts to determine social and institutional arrangements of power, resources, authority, etc. Accordingly, philosophic-scientific biologizations of race, since they begin from and condition their analyses on “folk” conceptions, cannot pretend to be “prior to” other disciplinary insights, nor to transcend the social-political dynamics involved in formulating theories of race. As a result, such traditional philosophical efforts can be seen to be disciplinarily parochial and to address only a caricature of a large and important human problem—and thereby further contributing to the unfortunate isolation of philosophical thinking about race from other disciplines.

Keywords: population genetics, ontology of race, race-based medicine, racial formation theory, racial projects, racism, social construction

Procedia PDF Downloads 232
81 Effects of the Exit from Budget Support on Good Governance: Findings from Four Sub-Saharan Countries

Authors: Magdalena Orth, Gunnar Gotz

Abstract:

Background: Domestic accountability, budget transparency and public financial management (PFM) are considered vital components of good governance in developing countries. The aid modality budget support (BS) promotes these governance functions in developing countries. BS engages in political decision-making and provides financial and technical support to the poverty reduction strategies of partner countries. Nevertheless, many donors have withdrawn their support from this modality due to cases of corruption, fraud or human rights violations. This exit from BS leaves a finance and governance vacuum in the countries concerned. The evaluation team analyzed the consequences of terminating the use of this modality and found particularly negative effects for good governance outcomes. Methodology: The evaluation uses a qualitative (theory-based) approach consisting of a comparative case study design, complemented by a process-tracing approach. For the case studies, the team conducted over 100 semi-structured interviews in Malawi, Uganda, Rwanda and Zambia and used four country-specific, tailor-made budget analyses. In combination with a previous DEval evaluation synthesis on the effects of BS, the team was able to create a before-and-after comparison that allows causal effects to be identified. Main Findings: In all four countries, domestic accountability and budget transparency declined when other forms of pressure did not replace BS's mutual accountability mechanisms. In Malawi, a fraud scandal created pressure from society and from donors, so that accountability improved. In the other countries, these pressure mechanisms were absent, so domestic accountability declined. BS enables donors to actively participate in the political processes of the partner country, as donors transfer funds into the treasury of the partner country and conduct a high-level political dialogue.
The results confirm that the exit from BS created a governance vacuum that, if not compensated through external/internal pressure, leads to a deterioration of good governance. For example, in the case of highly aid-dependent Malawi, the possibility of a relaunch of BS provided sufficient incentives to push for governance reforms. Overall, the results show that the three good governance areas are negatively affected by the exit from BS. This stands in contrast to the positive effects found before the exit. The team concludes that the relationship is causal, because the before-and-after comparison coherently shows that the presence of BS correlates with positive effects and its absence with negative effects. Conclusion: These findings strongly suggest that BS is an effective modality to promote governance and that its abolishment is likely to cause governance disruptions. Donors and partner governments should find ways to re-engage in closely coordinated policy-based aid modalities. In addition, a coordinated and carefully managed exit strategy should be in place before an exit from similar modalities is considered. In particular, a continued framework of mutual accountability and a high-level political dialogue should be maintained to provide the pressure and oversight required to achieve good governance.

Keywords: budget support, domestic accountability, public financial management and budget transparency, Sub-Sahara Africa

Procedia PDF Downloads 114
80 Improved Approach to the Treatment of Resistant Breast Cancer

Authors: Lola T. Alimkhodjaeva, Lola T. Zakirova, Soniya S. Ziyavidenova

Abstract:

Background: Breast cancer (BC) is still one of the urgent problems in oncology. An essential obstacle to the full implementation of anti-tumor therapy is the development of drug resistance. Taking into account the fact that chemotherapy is the main antitumor treatment in BC patients, the important task is to improve treatment results. Certain success in overcoming this situation has been associated with the use of methods of extracorporeal blood treatment (ECBT), such as plasmapheresis. Materials and Methods: We examined 129 women with resistant BC stages 3-4, aged 56 to 62 years, who had previously received 2 courses of CAF chemotherapy. All patients additionally underwent 2 courses of CAF chemotherapy, but against the background of ECBT with ultrasonic exposure. We studied the following parameters: 1. The highlights of peripheral blood before and after therapy. 2. The state of cellular immunity; identification of the activation markers CD23+, CD25+, CD38+ and CD95+ on lymphocytes was performed using monoclonal antibodies, and humoral immunity was evaluated by the serum levels of the main immunoglobulin classes IgG, IgA and IgM. 3. The degree of tumor regression, assessed by the 4 gradations recommended by the WHO (complete: 100%; partial: regression of more than 50% of the initial size; stabilization: regression of less than 50% of the initial size; and progression). 4. Medical pathomorphism in the tumor, determined by Lavnikova. 5. Immediate and long-term results, up to 3 years and beyond. Results and Discussion: After extracorporeal blood treatment, anemia occurred in 38.9%, leukopenia in 36.8%, thrombocytopenia in 34.6%, and hypolymphemia in 26.8%. Studies of immunoglobulin fractions in blood serum established a certain relationship between the immunoglobulin classes A, G and M and their functions. The results showed that after treatment the values of the main immunoglobulins in the patients’ serum approached normal.
Analysis of the expression of the activation markers CD25+ (cells bearing receptors for IL-2, the IL-2Rα chain) and CD95+ (lymphocytes mediating physiological apoptosis) showed a tendency to increase, apparently due to activation of cellular immunity by cytokines released under ultrasonic treatment. Carrying out ECBT against the background of ultrasonic treatment improved the parameters of the immune system, expressed in stimulation of cellular immunity and correction of imbalances in humoral immunity. The key indicator of treatment efficiency is the immediate result measured by the degree of tumor regression. After ECBT, complete regression was observed in 10.3%, partial response in 55.5%, and process stabilization in 34.5%; no tumor progression was observed. Morphological investigation of the tumors showed therapeutic pathomorphism grade 2 in 15%, grade 3 in 25%, and grade 4 in 60% of patients. One of the main criteria of treatment effect is the duration of remission in the postoperative period (up to 3 years or more). Remission of up to 3 years with ECBT was achieved in 34.5%, and 5-year survival was 54%. This research suggests that a comprehensive study of the immunological and clinical course of breast cancer allows a differentiated approach to the choice of methods for effective treatment.

Keywords: breast cancer, immunoglobulins, extracorporeal blood treatment, chemotherapy

Procedia PDF Downloads 244
79 Single Pass Design of Genetic Circuits Using Absolute Binding Free Energy Measurements and Dimensionless Analysis

Authors: Iman Farasat, Howard M. Salis

Abstract:

Engineered genetic circuits reprogram cellular behavior to act as living computers, with applications in detecting cancer, creating self-controlling artificial tissues, and dynamically regulating metabolic pathways. Phenomenological models are often used to simulate genetic circuits and steer their design towards a desired behavior. While such models assume that each circuit component’s function is modular and independent, even small changes in a circuit (e.g. a new promoter, a change in transcription factor expression level, or even a new medium) can have significant effects on the circuit’s function. Here, we use statistical thermodynamics to account for the several factors that control transcriptional regulation in bacteria, and experimentally demonstrate the model’s accuracy across 825 measurements in several genetic contexts and hosts. We then employ our first-principles model to design, experimentally construct, and characterize a family of signal-amplifying genetic circuits (genetic OpAmps) that expand the dynamic range of cell sensors. To develop these models, we needed a new approach to measuring the in vivo binding free energies of transcription factors (TFs), a key ingredient of statistical thermodynamic models of gene regulation. We developed a new high-throughput assay to measure RNA polymerase and TF binding free energies, requiring the construction and characterization of only a few constructs and data analysis (Figure 1A). We experimentally verified the assay on 6 TetR-homolog repressors and a CRISPR/dCas9 guide RNA. We found that our binding free energy measurements quantitatively explain why changing TF expression levels alters circuit function. Altogether, by combining these measurements with our biophysical model of translation (the RBS Calculator) as well as other measurements (Figure 1B), our model can account for changes in TF binding sites, TF expression levels, circuit copy number, host genome size, and host growth rate (Figure 1C). 
Model predictions correctly accounted for how these 8 factors control a promoter’s transcription rate (Figure 1D). Using the model, we developed a design framework for engineering multi-promoter genetic circuits that greatly reduces the number of degrees of freedom (8 factors per promoter) to a single dimensionless unit. We propose the Ptashne (Pt) number to encapsulate the 8 co-dependent factors that control transcriptional regulation into a single number. Therefore, a single number controls a promoter’s output rather than these 8 co-dependent factors, and designing a genetic circuit with N promoters requires specification of only N Pt numbers. We demonstrate how to design genetic circuits in Pt number space by constructing and characterizing 15 2-repressor OpAmp circuits that act as signal amplifiers when within an optimal Pt region. We experimentally show that OpAmp circuits using different TFs and TF expression levels will only amplify the dynamic range of input signals when their corresponding Pt numbers are within the optimal region. Thus, the use of the Pt number greatly simplifies the genetic circuit design, particularly important as circuits employ more TFs to perform increasingly complex functions.
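The statistical-thermodynamic core of such a model can be illustrated in a few lines. The sketch below is purely illustrative: the function, energies, and copy numbers are assumptions, not values or code from the paper; it only shows why TF expression level and binding free energy co-determine a promoter's output, the kind of co-dependence the Pt number is meant to collapse into one quantity.

```python
import math

# Illustrative statistical-thermodynamic promoter model (not the authors' code).
# The promoter output is the equilibrium probability that RNA polymerase (RNAP)
# occupies the promoter, with a repressor (TF) competing for the site.

RT = 0.593  # kcal/mol at ~298 K

def promoter_activity(dG_rnap, dG_tf, n_rnap, n_tf):
    """Fraction of time RNAP is bound, with a competing repressor.

    dG_rnap, dG_tf : binding free energies (kcal/mol; more negative = tighter)
    n_rnap, n_tf   : copy numbers of free RNAP and TF molecules (assumed values)
    """
    w_rnap = n_rnap * math.exp(-dG_rnap / RT)  # Boltzmann weight, RNAP-bound state
    w_tf = n_tf * math.exp(-dG_tf / RT)        # Boltzmann weight, TF-bound state
    return w_rnap / (1.0 + w_rnap + w_tf)      # occupancy of the RNAP-bound state

# Raising TF expression lowers the promoter's output, as the abstract describes:
low_tf = promoter_activity(-5.0, -8.0, n_rnap=1000, n_tf=10)
high_tf = promoter_activity(-5.0, -8.0, n_rnap=1000, n_tf=1000)
```

Any change that shifts either the free energies or the copy numbers moves the occupancy, which is why a single dimensionless grouping of these factors is a natural design variable.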

Keywords: transcription factor, synthetic biology, genetic circuit, biophysical model, binding energy measurement

Procedia PDF Downloads 446
78 Anti-tuberculosis, Resistance Modulatory, Anti-pulmonary Fibrosis and Anti-silicosis Effects of Crinum Asiaticum Bulbs and Its Active Metabolite, Betulin

Authors: Theophilus Asante, Comfort Nyarko, Daniel Antwi

Abstract:

Drug-resistant tuberculosis, together with associated comorbidities like pulmonary fibrosis and silicosis, has been one of the most serious global public health threats and requires immediate action to curb or mitigate it. It prolongs hospital stays, increases the cost of medication, and increases the death toll recorded annually. Crinum asiaticum bulb extract (CAE) and betulin (BET) are known for their biological and pharmacological effects. Pharmacological effects reported for CAE include antimicrobial, anti-inflammatory, anti-pyretic, analgesic, and anti-cancer effects. Betulin has exhibited a multitude of powerful pharmacological properties ranging from antitumor, anti-inflammatory, and anti-parasitic to anti-microbial and anti-viral activities. This work sought to investigate their anti-tuberculosis and resistance-modulatory effects and also to assess their effects in mitigating pulmonary fibrosis and silicosis. In the anti-tuberculosis and resistance-modulation assays, both CAE and BET showed strong antimicrobial activities (31.25 µg/ml ≤ MIC ≤ 500 µg/ml) against the studied microorganisms, produced significant anti-efflux-pump and biofilm-inhibitory effects (p < 0.0001), and exhibited resistance-modulatory and synergistic effects when combined with standard antibiotics. Crinum asiaticum bulb extract and betulin were also shown to possess anti-pulmonary-fibrosis effects. There was an increased survival rate in the CAE and BET treatment groups compared to the BLM-induced group, and a marked decrease in the levels of hydroxyproline and collagen I and III in the CAE and BET treatment groups compared to the BLM-treated group. The CAE and BET treatment groups significantly downregulated pro-fibrotic and pro-inflammatory cytokine concentrations such as TGF-β1, MMP9, IL-6, IL-1β and TNF-α, compared to an increase in the BLM-treated groups. 
The histological findings of the lungs suggested curative effects of CAE and BET following BLM-induced pulmonary fibrosis in mice. The study showed improved lung function, with a wide focal area of viable alveolar spaces and little collagen fiber deposition in the lungs of the treatment groups. Regarding the anti-silicosis and pulmonoprotective effects of CAE and BET, the levels of NF-κB, TNF-α, IL-1β, IL-6, hydroxyproline, and collagen types I and III were significantly reduced by CAE and BET (p < 0.0001). Both CAE and BET significantly (p < 0.0001) inhibited the levels of hydroxyproline and collagen I and III when compared with the negative control group. CAE and BET also significantly (p < 0.0001) reduced the levels of BALF biomarkers such as macrophages, lymphocytes, monocytes, and neutrophils. When examined for antioxidant activity, CAE and BET were shown to raise the levels of catalase (CAT) and superoxide dismutase (SOD) while lowering the level of malondialdehyde (MDA). Histological examination of lung tissues likewise showed an improvement in lung function. Crinum asiaticum bulb extract and betulin were thus found to exhibit anti-tubercular and resistance-modulatory properties, as well as the capacity to minimize TB comorbidities such as pulmonary fibrosis and silicosis. In addition, CAE and BET may act as protective agents, facilitating the preservation of the lung's physiological integrity. The outcomes of this study might pave the way for the development of leads for single medications for the management of drug-resistant tuberculosis and its accompanying comorbidities.

Keywords: fibrosis, crinum, tuberculosis, anti-inflammation, drug resistance

Procedia PDF Downloads 53
77 How Obesity Sparks the Immune System and Lessons from the COVID-19 Pandemic

Authors: Husham Bayazed

Abstract:

Purpose of Presentation: Obesity and overweight are among the biggest health challenges of the 21st century, according to the WHO. Obese individuals experience different courses of disease, from infections and allergies to cancer, and even respond differently to some treatment options. Of note, obesity often seems to predispose to and trigger several secondary diseases such as diabetes, arteriosclerosis, or heart attacks. For decades, immunological signals have appeared to drive inflammatory processes in obese individuals with the aforementioned conditions. This review aims to shed light on how obesity sparks or rewires the immune system and predisposes to such unpleasant health outcomes. Moreover, lessons from the COVID-19 pandemic confirm that people living with pre-existing conditions such as obesity can develop severe acute respiratory syndrome (SARS); how obesity and its associated distortion of inflammatory processes contributes to severe COVID-19 outcomes still needs to be elucidated. Recent Findings: Recent clinical studies have linked obesity to alterations of the immune system in several ways. Adipose tissue (AT) is a reservoir of different tissue-resident immune cells that release mediators, making it a secondary immune organ. Adipocytes per se secrete several pro-inflammatory cytokines (IL-6, IL-4, MCP-1, and TNF-α) involved in the activation of macrophages, resulting in chronic low-grade inflammation. The correlation between obesity and T-cell dysregulation is pivotal in rewiring the immune system. Of note, autophagy in adipose tissue further rewires the immune system through an outburst of leptin and adiponectin, adipokines that influence pro-inflammatory immune functions. 
These immune alterations in obese individuals are collectively implicated in triggering several metabolic disorders, increasing cancer incidence, and raising susceptibility to different infections. During the COVID-19 pandemic, it was verified that patients with pre-existing obesity were at greater risk of severe and fatal clinical outcomes. Besides the increased airway resistance and reduced lung volume seen in obese people, ACE2 expression in adipose tissue appears to be high, even higher than that in the lungs, which may increase the incidence of infection. In essence, obesity with pre-existing pro-inflammatory cytokines such as IL-6 is a risk factor for cytokine storm and coagulopathy in COVID-19 patients. Summary: It is well documented that obesity is associated with chronic systemic low-grade inflammation, which alters different pillars of the immune system, triggers different metabolic disorders, and increases susceptibility to infections and cancer incidence. The pre-existing chronic inflammation in obese patients, combined with the augmented inflammatory response against the viral infection, seems to increase the susceptibility of these patients to developing severe COVID-19. Although new weight-loss drugs and bariatric surgery are considered breakthroughs in obesity treatment, preventing obesity is easier than treating it once it has taken hold. Nevertheless, new insights into the link between obesity and the immune system raise the question of what role immunotherapy and the regulation of immune cells might play in treating diet-induced obesity.

Keywords: immunity, metabolic disorders, cancer, COVID-19

Procedia PDF Downloads 47
76 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow

Authors: Masood Otarod, Ronald M. Supkowski

Abstract:

This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be utilized to conduct kinetic studies in packed-bed tubular catalytic reactors over a broad range of Reynolds numbers. The model was tested by 13CO isotope transient tracing of the CO adsorption of the Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over a Pd-Al2O3 catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of axial velocity, which is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. But ideal plug flow is impossible to achieve, and flow regimes approximating plug flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. The factorization theorem is derived from the observation that a cross section of a catalytic bed consists of a solid phase across which the reaction takes place and a void or porous phase across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentration of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function. 
The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and suffer the same variability but in the reverse order of the concentrations of mobile-phase compounds. Factorability is a property of packed beds which transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentration of the mobile-phase compounds and the mean cross-sectional concentration of adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa, and Ωr, which are respectively denominated the convection coefficient cofactor, the axial dispersion coefficient cofactor, and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation as compensation for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa, and Ωr are monotonically correlated with the Reynolds number, which is expected based on the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
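In generic notation (the symbols below are illustrative and are not the authors' own), the factorization and the shape of the resulting reduced equation can be sketched as:

```latex
% Sketch only: generic symbols, not the authors' notation.
\[
  C(z,r) \;=\; \bar{C}(z)\,\phi(r),
\]
% where \bar{C}(z) is the mean radial cup-mixing concentration and \phi(r)
% a contingent dimensionless radial function. Radial averaging of the
% dispersion model then yields a second-order linear ODE with constant,
% cofactor-adjusted coefficients of the generic form
\[
  \Omega_a\, D \,\frac{d^{2}\bar{C}}{dz^{2}}
  \;-\; \Omega_c\, \bar{u}\,\frac{d\bar{C}}{dz}
  \;-\; R\!\left(\bar{C}\right) \;=\; 0,
\]
% with the cofactors \Omega_c, \Omega_a, \Omega_r compensating for the
% unknown radial profile of the axial velocity.
```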

Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations

Procedia PDF Downloads 239
75 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data

Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann

Abstract:

Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials is generally dependent on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is, therefore, crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes the identification of the dynamic material parameters under critical temperature and frequency conditions along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part aims at applying an inverse viscoelastic material-characterization approach for a wide frequency range and under different temperature conditions. For this sake, dynamic measurements are carried out on a single lap joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRFs) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee an on-line detection of the damage, i.e., delamination in the viscoelastic bonding of the described specimen during frequency-monitored end-of-life testing. 
For this purpose, an inverse technique is presented which determines the damage location and size based on the modal frequency shift and on the change of the mode shapes. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters, and a subsequent experimental validation achieved through dynamic measurements of specimens with different, pre-generated crack scenarios compared to the virgin specimen. The main advantage of the inverse characterization approach presented in the first part resides in its ability to adequately identify the damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis are usually subject to limitations under critical temperature and frequency conditions due to the material behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction of not only the damage size but also its location using a simple test setup, and therefore outlines the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.
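As a rough illustration of the material model such an FRF fit identifies, the snippet below evaluates the frequency-domain complex modulus of a generalized Maxwell (Prony series) model; the branch stiffness and time constant are invented for illustration and are not the paper's identified parameters.

```python
# Illustrative generalized Maxwell model (not the authors' identified data):
# E*(w) = E_inf + sum_k E_k * (i w tau_k) / (1 + i w tau_k)

def maxwell_modulus(omega, E_inf, branches):
    """Complex modulus E*(omega) of a generalized Maxwell model.

    E_inf    : long-term (equilibrium) stiffness
    branches : list of (E_k, tau_k) pairs, one per Maxwell branch
    """
    E = complex(E_inf, 0.0)
    for E_k, tau_k in branches:
        iwt = 1j * omega * tau_k
        E += E_k * iwt / (1.0 + iwt)  # each branch relaxes with time constant tau_k
    return E

# With one branch (assumed: 2 MPa stiffness, 1 ms time constant), the storage
# modulus stiffens and damping grows as frequency crosses 1/tau:
branches = [(2.0e6, 1e-3)]
E_low = maxwell_modulus(1.0, 1.0e6, branches)     # well below 1/tau
E_high = maxwell_modulus(1.0e5, 1.0e6, branches)  # well above 1/tau
```

An optimizer fitting measured FRFs would adjust `E_inf` and the `(E_k, tau_k)` pairs until the simulated response matches the measurement, which is the essence of the inverse characterization described above.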

Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers

Procedia PDF Downloads 175
74 Carlos Guillermo 'Cubena' Wilson's Literary Texts as Platforms for Social Commentary and Critique of Panamanian Society

Authors: Laverne Seales

Abstract:

When most people think of Panama, they immediately think of the Canal; however, its construction and the people who made it possible are often omitted and seldom acknowledged. The reality is that the construction of this waterway was achieved through forced migration and discriminatory practices toward people of African descent, specifically Black people from the Caribbean. From the colonial period to the opening and subsequent operation of the Panama Canal by the United States, this paper goes through the rich layers of Panamanian history to examine the lives of Afro-Caribbeans and their descendants in Panama. It also considers the role of the United States in Panama, exploring how the United States forged a racially complex country that made the integration of Afro-Caribbeans and their descendants difficult. After laying this historical foundation, the experiences of Afro-Caribbean people and Panamanians of Afro-Caribbean descent are analyzed through Afro-Panamanian writer Carlos Guillermo ‘Cubena’ Wilson's novels, short stories, and poetry. This study focuses on how Cubena addresses racism, discrimination, inequality, and social justice issues concerning the Afro-Caribbeans and their descendants who traveled to Panama to construct the Canal. Content analysis methodology can yield several significant contributions, and analyzing Carlos Guillermo Wilson's literature under this framework allows us to consider its social commentary and critique of Panamanian society. It identifies the social issues and concerns of Afro-Caribbeans and people of Afro-Caribbean descent, such as inequality, corruption, racism, political oppression, and cultural identity. This methodology also allows us to explore how Cubena's literature engages with questions of cultural identity and belonging in Panamanian society. 
By examining themes related to race, ethnicity, language, and heritage, this research uncovers the complexities of Panamanian cultural identity, allowing us to interrogate power dynamics and social hierarchies in Panamanian society. Analyzing the portrayal of different social groups, institutions, and power structures helps uncover how power is wielded, contested, and resisted; Cubena's fictional world allows us to see how it functions in Panama. Content analysis methodology also enables a critique of political systems and governance in Panama. By examining the representation of political figures, institutions, and events in Cubena's literature, we uncover his commentary on corruption, authoritarianism, governance, and the role of the United States in Panama. Content analysis highlights how Wilson's literature amplifies the voices and experiences of marginalized individuals and communities in Panamanian society. By centering the narratives of Afro-Panamanians and other marginalized groups, this research uncovers Cubena's commitment to social justice and inclusion in his writing and helps the reader engage with historical narratives and collective memory in Panama. Overall, analyzing Carlos Guillermo ‘Cubena’ Wilson's literature as a platform for social commentary and critique of Panamanian society using content analysis methodology provides valuable insights into the cultural, social, and political dimensions of Afro-Panamanians during and after the construction of the Panama Canal.

Keywords: Afro-Caribbean, Panama Canal, race, Afro-Panamanian, identity, history

Procedia PDF Downloads 11
73 Structural Molecular Dynamics Modelling of FH2 Domain of Formin DAAM

Authors: Rauan Sakenov, Peter Bukovics, Peter Gaszler, Veronika Tokacs-Kollar, Beata Bugyi

Abstract:

FH2 (formin homology-2) domains of several proteins, collectively known as formins, including DAAM, DAAM1 and mDia1, promote G-actin nucleation and elongation. FH2 domains of these formins exist as oligomers; chain dimerization by ring-structure formation serves as the structural basis for the actin polymerization function of the FH2 domain. Proper single-chain configuration and specific interactions between its various regions are necessary for individual chains to form a dimer functional in G-actin nucleation and elongation. FH1 and WH2 domain-containing formins were shown to behave as intrinsically disordered proteins. Thus, the aim of this research was to study the structural dynamics of the FH2 domain of DAAM. To investigate its structural features, a molecular dynamics simulation of chain A of the FH2 domain of DAAM, solvated in a water box in 50 mM NaCl, was conducted at temperatures from 293.15 to 353.15 K with VMD 1.9.2, NAMD 2.14 and Amber Tools 21, using the 2z6e and 1v9d PDB structures of DAAM obtained via the I-TASSER web server. The calcium- and ATP-bound G-actin structure (PDB 3hbt) was used as a reference protein with well-described structural dynamics of denaturation. Topology and parameter information of the CHARMM 2012 additive all-atom force fields for proteins, carbohydrate derivatives, water and ions was used in NAMD 2.14, and the ff19SB force field for proteins in Amber Tools 21. The systems were energy-minimized for the first 1000 steps, then equilibrated and run in production in the NPT ensemble for 1 ns using stochastic Langevin dynamics and the particle mesh Ewald method. Our root-mean-square deviation (RMSD) analysis of the molecular dynamics of chain A of the FH2 domain of DAAM revealed similarly insignificant changes in the total molecular average RMSD values at temperatures from 293.15 to 353.15 K. 
In contrast, the total molecular average RMSD values of G-actin showed a considerable increase at 328 K, which corresponds to the denaturation of the G-actin molecule at this temperature and its transition from the native, ordered state to the denatured, disordered state, which is well described in the literature. RMSD values of the lasso and tail regions of chain A of the FH2 domain of DAAM exceeded the total molecular average RMSD at temperatures from 293.15 to 353.15 K. These regions are functional in intra- and interchain interactions and contain the highly conserved tryptophan residues of the lasso region, the highly conserved GNYMN sequence of the post region, and the amino acids of the shell of the hydrophobic pocket of the salt bridge between Arg171 and Asp321, which are important for the structural stability and ordered state of the FH2 domain of DAAM and for its functions in FH2 domain dimerization. In conclusion, the higher-than-average RMSD values of the lasso and post regions of chain A may explain the disordered state of the FH2 domain of DAAM at temperatures from 293.15 to 353.15 K. Finally, the absence of a marked transition, in terms of significant changes in average molecular RMSD values, between the native and denatured states of the FH2 domain of DAAM at temperatures from 293.15 to 353.15 K suggests that these formins can be attributed to the group of intrinsically disordered proteins rather than to the group of intrinsically ordered proteins such as G-actin.
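The per-frame metric underlying this analysis is straightforward. A minimal sketch (assuming the frames are already superposed on the reference; production MD tools additionally perform a Kabsch alignment first):

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two N x 3 coordinate sets (Å).

    Sketch of the per-frame metric used in MD trajectory analysis; assumes
    the two structures are already optimally superposed.
    """
    a = np.asarray(coords_a, dtype=float)
    b = np.asarray(coords_b, dtype=float)
    # mean over atoms of the squared per-atom displacement, then square root
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# A frame in which every atom is displaced by 1 Å along x has RMSD exactly 1 Å:
ref = np.zeros((5, 3))
frame = ref + np.array([1.0, 0.0, 0.0])
```

Per-region RMSD values, such as those reported for the lasso and post regions, are obtained by restricting the atom selection before applying the same formula.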

Keywords: FH2 domain, DAAM, formins, molecular modelling, computational biophysics

Procedia PDF Downloads 107
72 An Engineer-Oriented Life Cycle Assessment Tool for Building Carbon Footprint: The Building Carbon Footprint Evaluation System in Taiwan

Authors: Hsien-Te Lin

Abstract:

The purpose of this paper is to introduce the BCFES (building carbon footprint evaluation system), an LCA (life cycle assessment) tool developed by the Low Carbon Building Alliance (LCBA) in Taiwan. A qualified BCFES for the building industry should fulfill the function of evaluating carbon footprint throughout all stages of the life cycle of building projects, including the production, transportation and manufacturing of materials, construction, daily energy usage, renovation and demolition. However, many existing BCFESs are too complicated and not very designer-friendly, creating obstacles to the implementation of carbon reduction policies. One of the greatest obstacles is the misapplication of the carbon footprint inventory standards PAS 2050 and ISO 14067, which are designed for mass-produced goods rather than building projects. When these product-oriented rules are applied to building projects, one must compute a tremendous amount of data for raw materials and the transportation of construction equipment throughout the construction period, based on purchasing lists and construction logs. This verification method is cumbersome by nature and unhelpful to the promotion of low-carbon design. With a view to providing an engineer-oriented BCFES with pre-diagnosis functions, a component input/output (I/O) database system and a scenario simulation method for building energy are proposed herein. Most existing BCFESs base their calculations on a product-oriented carbon database for raw materials like cement, steel, glass, and wood. However, data on raw materials is of little use for encouraging carbon-reduction design without a feedback mechanism, because an engineering project is designed not in terms of raw materials but of building components, such as flooring, walls, roofs, ceilings, roads or cabinets. The LCBA Database has been compiled from existing carbon footprint databases for raw materials and architectural graphic standards. 
Project designers can now use the LCBA Database to conduct low carbon design in a much more simple and efficient way. Daily energy usage throughout a building's life cycle, including air conditioning, lighting, and electric equipment, is very difficult for the building designer to predict. A good BCFES should provide a simplified and designer-friendly method to overcome this obstacle in predicting energy consumption. In this paper, the author has developed a simplified tool, the dynamic Energy Use Intensity (EUI) method, to accurately predict energy usage with simple multiplications and additions using EUI data and the designed efficiency levels for the building envelope, AC, lighting and electrical equipment. Remarkably simple to use, it can help designers pre-diagnose hotspots in building carbon footprint and further enhance low carbon designs. The BCFES-LCBA offers the advantages of an engineer-friendly component I/O database, simplified energy prediction methods, pre-diagnosis of carbon hotspots and sensitivity to good low carbon designs, making it an increasingly popular carbon management tool in Taiwan. To date, about thirty projects have been awarded BCFES-LCBA certification and the assessment has become mandatory in some cities.
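The "simple multiplications and additions" of the dynamic EUI method can be illustrated as follows; the subsystem names, baseline EUI values, and efficiency factors below are invented placeholders, not LCBA data.

```python
# Hypothetical sketch of the dynamic-EUI idea: annual energy use is a sum of
# per-subsystem EUI baselines scaled by designed efficiency levels and floor
# area. All names and numbers are illustrative assumptions.

BASELINE_EUI = {               # kWh per m^2 per year, illustrative only
    "air_conditioning": 60.0,
    "lighting": 25.0,
    "equipment": 30.0,
}

def annual_energy_use(floor_area_m2, efficiency):
    """Multiply-and-add estimate of yearly energy use (kWh).

    efficiency: per-subsystem multipliers (< 1.0 means better than baseline;
    subsystems not listed default to the baseline level).
    """
    return sum(
        BASELINE_EUI[system] * efficiency.get(system, 1.0) * floor_area_m2
        for system in BASELINE_EUI
    )

# A 1000 m^2 design with lighting 20% better than baseline:
use = annual_energy_use(1000.0, {"lighting": 0.8})
```

Because each design decision maps to one multiplier, a designer can see immediately which subsystem dominates the footprint, which is the pre-diagnosis behavior the paper describes.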

Keywords: building carbon footprint, life cycle assessment, energy use intensity, building energy

Procedia PDF Downloads 118
71 Teaching Children about Their Brains: Evaluating the Role of Neuroscience Undergraduates in Primary School Education

Authors: Clea Southall

Abstract:

Many children leave primary school having formed preconceptions about their relationship with science. Thus, primary school represents a critical window for stimulating scientific interest in younger children. Engagement relies on the provision of hands-on activities coupled with an ability to capture a child’s innate curiosity. This requires children to perceive science topics as interesting and relevant to their everyday life. Teachers and pupils alike have suggested the school curriculum be tailored to help stimulate scientific interest. Young children are naturally inquisitive about the human body; the brain is one topic which frequently engages pupils, although it is not currently included in the UK primary curriculum. Teaching children about the brain could have wider societal impacts such as increasing knowledge of neurological disorders. However, many primary school teachers do not receive formal neuroscience training and may feel apprehensive about delivering lessons on the nervous system. This is exacerbated by a lack of educational neuroscience resources. One solution is for undergraduates to form partnerships with schools - delivering engaging lessons and supplementing teacher knowledge. The aim of this project was to evaluate the success of a short lesson on the brain delivered by an undergraduate neuroscientist to primary school pupils. Prior to entering schools, semi-structured online interviews were conducted with teachers to gain pedagogical advice, and relevant websites were searched for neuroscience resources. Subsequently, a single lesson plan was created comprising four hands-on activities. The activities were devised in a top-down manner, beginning with learning about the brain as an entity, before focusing on individual neurons. Students were asked to label a ‘brain map’ to assess prior knowledge of brain structure and function. 
They viewed animal brains and created ‘pipe-cleaner neurons’ which were later used to depict electrical transmission. The same session was delivered by an undergraduate student to 570 key stage 2 (KS2) pupils across five schools in Leeds, UK. Post-session surveys, designed for teachers and pupils respectively, were used to evaluate the session. Children in all year groups had relatively poor knowledge of brain structure and function at the beginning of the session. When asked to label four brain regions with their respective functions, older pupils labeled a mean of 1.5 (± 1.0) brain regions compared to 0.8 (± 0.96) for younger pupils (p=0.002). However, by the end of the session, 95% of pupils felt their knowledge of the brain had increased. Hands-on activities were rated most popular by pupils and were considered the most successful aspect of the session by teachers. Although only half the teachers were aware of neuroscience educational resources, nearly all (95%) felt they would have more confidence in teaching a similar session in the future. All teachers felt the session was engaging and that the content could be linked to the current curriculum. Thus, a short fifty-minute session can successfully enhance pupils’ knowledge of a new topic: the brain. Partnerships with an undergraduate student can provide an alternative method for supplementing teacher knowledge, increasing their confidence in delivering future lessons on the nervous system.

Keywords: education, neuroscience, primary school, undergraduate

Procedia PDF Downloads 185
70 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit

Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili

Abstract:

Metamaterials cross classic physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations are achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Denis Gabor’s invention: holography. But the major difficulty here is the lack of a suitable recording medium. Some enhancements were therefore essential, and the 2D version of bulk metamaterials, the so-called metasurface, was introduced. This new class of interfaces simplifies the problem of the recording medium, with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell’s equations. In this context, integral methods are emerging as an important approach to studying electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. But solving this kind of equation becomes more complicated and time-consuming as the structural complexity increases. Here, the equivalent circuits method offers the most scalable way to develop an integral-method formulation. In fact, to simplify the resolution of Maxwell’s equations, the method of the Generalised Equivalent Circuit was proposed to carry the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists in creating an electric image of the studied structure using the discontinuity plane paradigm, taking into account its environment, so that the electromagnetic state of the discontinuity plane is described by generalised test functions, which are modelled by virtual sources that do not store energy. 
The environmental effects are included through the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements, which combines the advantages of the reflectarray concept with graphene as a pillar constituent element at terahertz frequencies. The metasurface’s building block consists of a thin gold film, a dielectric SiO₂ spacer, and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene’s chemical potential on the unit-cell input impedance. It was found that the variation of the complex conductivity of graphene allows controlling the phase and amplitude of the reflection coefficient at each element of the array. From the results obtained here, we determined that the phase modulation is realized by adjusting graphene’s complex conductivity. This modulation is a viable solution compared to tuning the phase by varying the antenna length, because it offers a full 2π reflection-phase control.
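The tuning mechanism described above rests on the dependence of graphene's complex conductivity on its chemical potential. A minimal sketch of that dependence, using the standard intraband (Drude-like) term of the Kubo formula, is given below; the temperature, relaxation time, and 1 THz operating frequency are illustrative assumptions, not values taken from the abstract.

```python
# Sketch: intraband Kubo conductivity of graphene as a function of chemical
# potential - the quantity that, per the abstract, controls the reflection
# phase of each metasurface element. tau_s and temp_k are assumed defaults.
import cmath
import math

# Physical constants (SI units)
E_CHARGE = 1.602176634e-19   # elementary charge, C
HBAR = 1.054571817e-34       # reduced Planck constant, J*s
KB = 1.380649e-23            # Boltzmann constant, J/K

def graphene_sigma_intra(freq_hz, mu_c_ev, tau_s=1e-13, temp_k=300.0):
    """Intraband Kubo conductivity (in siemens) of a graphene sheet."""
    omega = 2.0 * math.pi * freq_hz
    mu_c = mu_c_ev * E_CHARGE          # chemical potential, J
    kt = KB * temp_k                   # thermal energy, J
    prefactor = 1j * E_CHARGE**2 * kt / (math.pi * HBAR**2 * (omega + 1j / tau_s))
    # Bracket term grows with mu_c, so both Re and Im parts are tunable
    return prefactor * (mu_c / kt + 2.0 * math.log(math.exp(-mu_c / kt) + 1.0))

if __name__ == "__main__":
    freq = 1.0e12  # 1 THz, within the terahertz band the abstract targets
    for mu in (0.1, 0.3, 0.5):
        sigma = graphene_sigma_intra(freq, mu)
        print(f"mu_c = {mu:.1f} eV -> sigma = {sigma.real:.3e} + {sigma.imag:.3e}j S")
```

Raising the chemical potential (e.g. via a gate bias across the SiO₂ spacer) increases both parts of the conductivity, which in a full MoM-GEC analysis would shift the unit-cell input impedance and hence the reflection phase.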

Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain

Procedia PDF Downloads 151
69 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossroads of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, when they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and often not properly used by end-users. There is therefore an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary covariates” derived from other available spatial products. The model is then generalized onto grids where soil parameters are unknown in order to predict them, and the prediction performance is validated using various methods. With the growing demand for soil information at national and global scales and the increasing availability of spatial covariates, national and continental DSM initiatives are continuously multiplying. 
This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and the databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to differing laws in different countries. Other issues relate to communication with end-users and to education, especially on the use of uncertainty. Overall, the progress is very important, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues still remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and also promising, providing tools to improve and monitor soil quality in countries, in the EU, and at the global level.
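The calibrate-then-generalize workflow described in this abstract can be sketched in a few lines. A k-nearest-neighbour regression in covariate space stands in here for the machine-learning models (e.g. random forests) the review mentions; the covariates (elevation, rainfall, slope), the target property (topsoil organic carbon), and all data values are invented for illustration.

```python
# Minimal DSM sketch: calibrate on point soil observations with spatially
# exhaustive covariates, then predict the soil property at an unsampled
# grid cell. All numbers below are hypothetical.
import math

def knn_predict(train, target_covs, k=3):
    """Inverse-distance-weighted k-nearest-neighbour prediction in
    covariate space. train: list of (covariate_vector, observed_value)."""
    dists = sorted((math.dist(covs, target_covs), value) for covs, value in train)
    neighbours = dists[:k]
    # Inverse-distance weights; a tiny epsilon avoids division by zero
    weights = [1.0 / (d + 1e-9) for d, _ in neighbours]
    total = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, neighbours)) / total

if __name__ == "__main__":
    # Calibration points: covariates (elevation_m, rainfall_mm, slope_deg)
    # paired with an observed topsoil organic carbon content (%)
    calibration = [
        ((120.0, 700.0, 2.0), 2.1),
        ((450.0, 1100.0, 9.0), 4.0),
        ((300.0, 900.0, 5.0), 3.2),
        ((80.0, 650.0, 1.0), 1.8),
    ]
    # Grid cell with known covariates but no soil sample
    print(round(knn_predict(calibration, (280.0, 880.0, 4.5)), 2))
```

In a real national DSM product the calibration set would hold thousands of profiles, the covariates would come from climatic grids, remote sensing, and digital elevation models, and the prediction would be run over every cell of a national grid, with cross-validation providing the uncertainty estimate the abstract calls for.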

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 157