Search results for: controlled balanced boolean function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7444


124 The Analgesic Effect of Electroacupuncture in a Murine Fibromyalgia Model

Authors: Bernice Jeanne Lottering, Yi-Wen Lin

Abstract:

Introduction: Chronic pain lacks objective parameters for measuring treatment efficacy in diseases such as Fibromyalgia (FM). Persistent widespread pain and generalized tenderness are the characteristic symptoms, affecting a large proportion of the global population, particularly females. The disease tends to be refractory to conventional treatment, largely because the etiology and pathogenesis of its development remain poorly understood. Emerging evidence indicates that the central nervous system (CNS) plays a critical role in the amplification of pain signals and the neurotransmitters associated therewith. Various stimuli can activate the channels present on nociceptor terminals, thereby triggering nociceptive impulses along the pain pathways. The transient receptor potential vanilloid 1 (TRPV1) channel functions as a molecular integrator for numerous sensory inputs, such as nociception, and was explored in the current study. Current intervention approaches face a multitude of challenges, ranging from effective therapeutic interventions to the limitation of pathognomonic criteria resulting from incomplete understanding and partial evidence on the mechanisms of action of FM. It remains unclear whether electroacupuncture (EA) plays an integral role in the functioning of the TRPV1 pathway, and whether it can reduce the chronic pain induced by FM. Aims: The aim of this study was to explore the mechanisms underlying the activation and modulation of the TRPV1 channel pathway in an intermittent cold stress murine model of FM. Furthermore, the effect of EA in the treatment of the mechanical and thermal pain expressed in FM was also investigated. Methods: 18 C57BL/6 wild type and 6 TRPV1 knockout (KO) mice, aged 8-12 weeks, were exposed to an intermittent cold stress-induced fibromyalgia-like pain model, with or without EA treatment at Zusanli (ST36; 2 Hz/20 min) on days 3 to 5. 
Von Frey and Hargreaves behaviour tests were used to analyze the mechanical and thermal pain thresholds on days 0, 3 and 5 in the control group (C), FM group (FM), FM mice with EA treatment (FM + EA) and FM in the KO group. Results: An increase in mechanical and thermal hyperalgesia was observed in the FM, EA and KO groups when compared to the control group. This initial increase was reduced in the EA group, which directs attention to the treatment efficacy of EA in nociceptive sensitization and the analgesic effect EA has in attenuating FM-associated pain. Discussion: An increase in nociceptive sensitization was observed through higher withdrawal thresholds in the von Frey mechanical test and the Hargreaves thermal test. TRPV1 function in mice has been associated with these nociceptive conduits, and the behaviour test results suggest that TRPV1 upregulation is central to the FM-induced hyperalgesia. These data were supported by the decrease in sensitivity observed in the TRPV1 KO group. Moreover, EA treatment decreased this FM-induced nociceptive sensitization, suggesting that TRPV1 upregulation and overexpression can be attenuated by EA at bilateral ST36. This evidence implies that the analgesic effect of EA is associated with TRPV1 downregulation.

Keywords: fibromyalgia, electroacupuncture, TRPV1, nociception

Procedia PDF Downloads 119
123 Theoretical and Experimental Investigation of Structural, Electrical and Photocatalytic Properties of K₀.₅Na₀.₅NbO₃ Lead-Free Ceramics Prepared via Different Synthesis Routes

Authors: Manish Saha, Manish Kumar Niranjan, Saket Asthana

Abstract:

The K₀.₅Na₀.₅NbO₃ (KNN) system has emerged over the years as one of the most promising lead-free piezoelectrics. In this work, we perform a comprehensive investigation of the electronic structure, lattice dynamics and dielectric/ferroelectric properties of the room-temperature phase of KNN by combining ab-initio DFT-based theoretical analysis and experimental characterization. We assign symmetry labels to the KNN vibrational modes and obtain ab-initio polarized Raman spectra, infrared (IR) reflectivity, Born effective charge tensors, oscillator strengths, etc. The computed Raman spectrum is found to agree well with the experimental spectrum. In particular, the results suggest that the mode in the range ~840-870 cm⁻¹ reported in the experimental studies is longitudinal optical (LO) with A_1 symmetry. The Raman mode intensities are calculated for different light polarization set-ups, which suggests the observation of different symmetry modes in different polarization set-ups. The electronic structure of KNN is investigated, and an optical absorption spectrum is obtained. Further, the performances of DFT semi-local, meta-GGA and hybrid exchange-correlation (XC) functionals in the estimation of KNN band gaps are investigated. The KNN band gaps computed using the GGA-1/2 and HSE06 hybrid functional schemes are found to be in excellent agreement with the experimental value. COHP, electron localization function and Bader charge analyses are also performed to deduce the nature of chemical bonding in KNN. The solid-state reaction and hydrothermal methods are used to prepare the KNN ceramics, and the effects of grain size on the physical characteristics of these ceramics are examined. A comprehensive study is presented of the impact of different synthesis techniques on the structural, electrical, and photocatalytic properties of the ferroelectric ceramic KNN. 
The KNN-S ceramics prepared by the solid-state method have a significantly larger grain size than the KNN-H ceramics prepared by the hydrothermal method. Furthermore, KNN-S is found to exhibit superior dielectric, piezoelectric and ferroelectric properties compared to KNN-H. On the other hand, higher photocatalytic activity is observed in KNN-H than in KNN-S. Compared to hydrothermal synthesis, solid-state synthesis increases the relative dielectric permittivity (ε′) from 2394 to 3286, the remnant polarization (P_r) from 15.38 to 20.41 μC/cm², the planar electromechanical coupling factor (k_p) from 0.19 to 0.28 and the piezoelectric coefficient (d_33) from 88 to 125 pC/N. The KNN-S ceramics are also found to have a lower leakage current density and higher grain resistance than the KNN-H ceramics. The enhanced photocatalytic activity of KNN-H is attributed to its relatively smaller particle size. The KNN-S and KNN-H samples are found to have RhB solution degradation efficiencies of 20% and 65%, respectively. The experimental study highlights the importance of synthesis methods and how these can be exploited to tailor the dielectric, piezoelectric and photocatalytic properties of KNN. Overall, our study provides several important benchmark results on KNN that have not been reported so far.

Keywords: lead-free piezoelectric, Raman intensity spectrum, electronic structure, first-principles calculations, solid state synthesis, photocatalysis, hydrothermal synthesis

Procedia PDF Downloads 19
122 Climate Change Impact on Mortality from Cardiovascular Diseases: Case Study of Bucharest, Romania

Authors: Zenaida Chitu, Roxana Bojariu, Liliana Velea, Roxana Burcea

Abstract:

A number of studies show that extreme air temperature affects mortality related to cardiovascular diseases, particularly among elderly people. In Romania, the summer thermal discomfort expressed by the Universal Thermal Climate Index (UTCI) is highest in the southern part of the country, where Bucharest, the largest Romanian urban agglomeration, is located. Urban characteristics such as high building density and limited green areas amplify the rise in air temperature during summer. In Bucharest, as in many other large cities, the urban heat island effect raises air temperature compared to surrounding areas, particularly during summer heat wave periods. In this context, we performed a temperature-mortality analysis based on daily deaths related to cardiovascular diseases recorded between 2010 and 2019 in Bucharest. The temperature-mortality relationship was modeled by applying a distributed lag non-linear model (DLNM) that includes a bi-dimensional cross-basis function and flexible natural cubic spline functions with three internal knots at the 10th, 75th and 90th percentiles of the temperature distribution, for modelling both the exposure-response and lagged-response dimensions. Firstly, this analysis was applied to the present climate. Extrapolation of the exposure-response associations beyond the observed data allowed us to estimate future effects on mortality due to temperature changes under climate change scenarios and specific assumptions. We used future projections of air temperature from five numerical experiments with regional climate models included in the EURO-CORDEX initiative under the relatively moderate (RCP 4.5) and pessimistic (RCP 8.5) concentration scenarios. For RCP 8.5, the results of this analysis show an ensemble-averaged increase of 6.1% in the heat-attributable mortality fraction in the future compared with the present climate (2090-2100 vs. 2010-2019), corresponding to an increase of 640 deaths/year, while the mortality fraction due to cold conditions will be reduced by 2.76%, corresponding to a decrease of 288 deaths/year. When mortality data are stratified by age, the ensemble-averaged increase in the heat-attributable mortality fraction for elderly people (>75 years) in the future is even higher (6.5%). These findings reveal the necessity of carefully planning urban development in Bucharest to face the public health challenges raised by climate change. Paper Details: This work is financed by the project URCLIM, which is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by the Ministry of Environment, Romania, with co-funding by the European Union (Grant 690462). A part of this work performed by one of the authors has received funding from the European Union’s Horizon 2020 research and innovation programme through the project EXHAUSTION under grant agreement No 820655.
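
The knot placement described above can be made concrete with a short sketch. This is a minimal illustration, not the authors' code: it only shows how the three internal knots of the exposure-response spline would be placed at the 10th, 75th and 90th percentiles of an observed temperature series; the temperature data below are synthetic stand-ins, not the Bucharest record.

```python
import numpy as np

def internal_knots(daily_mean_temps, percentiles=(10, 75, 90)):
    """Place the internal knots of the exposure-response natural cubic
    spline at the given percentiles of the temperature distribution."""
    return np.percentile(np.asarray(daily_mean_temps, dtype=float), percentiles)

# Synthetic stand-in for ten years of daily mean temperatures (hypothetical values).
rng = np.random.default_rng(0)
temps = rng.normal(loc=11.5, scale=9.0, size=3652)

knots = internal_knots(temps)
print(knots.round(1))  # three ascending knot locations
```

In practice the full DLNM cross-basis (exposure and lag dimensions together) is typically built with a dedicated package such as the R dlnm library; the percentile-based knot placement is the only step the abstract specifies exactly.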

Keywords: cardiovascular diseases, climate change, extreme air temperature, mortality

Procedia PDF Downloads 100
121 Comparison of Nutritional Status of Asthmatic vs Non-asthmatic Adults

Authors: Ayesha Mushtaq

Abstract:

Asthma is a pulmonary disease in which airway obstruction occurs due to inflammation in response to certain allergens. Breathing difficulty, cough, and dyspnea are among its symptoms. Several studies have indicated a significant effect on asthma of changes in dietary routines. Certain food items, such as oily foods, are known to aggravate the symptoms of asthma, while low dietary intake of fruits and vegetables may be important in relation to asthma prevalence. The objective of this study is to assess and compare the nutritional status of asthmatic and non-asthmatic patients. The significance of this study lies in the fact that it will help nutritionists arrange a feasible dietary routine for asthmatic patients. This research was conducted at the Pulmonology Department of the Pakistan Institute of Medical Sciences (PIMS), Islamabad. About 334 million people are affected by asthma worldwide. Pakistan's population is rapidly urbanizing, and asthma cases are increasingly common. Several studies suggest an increase in the asthmatic patient population due to improper diet, and research on similar topics conducted at other institutions has suggested that there is a substantial difference in the nutritional status of asthmatic and non-asthmatic patients. This is a cross-sectional study aimed at assessing the nutritional status of asthmatic and non-asthmatic patients. The research included asthmatic and non-asthmatic patients, aged between 20 and 60 years, attending the pulmonology department clinic at PIMS. A questionnaire was developed to estimate the dietary patterns of these patients. The methodology comprised several sections. 
The first section was the socio-demographic profile, which included age, gender, monthly income and occupation. The second section covered anthropometric measurements: the weight, height and body mass index (BMI) of the individual. The third section concerned biochemical attributes; for biochemical profiling, pulmonary function testing (PFT) was performed. The fourth section assessed dietary habits using a food frequency questionnaire (FFQ) covering food habits and consumption patterns. The fifth section collected lifestyle data, in which the person's level of physical activity, sleep and smoking habits were assessed. Finally, all the data obtained from the study were statistically analyzed and assessed. Most of the asthma patients were female, overweight or even obese. Body mass index (BMI) was higher in asthma patients than in non-asthmatic ones. When nutritional values were assessed, these patients were found to be low in certain nutrients, and their diet included more junk and oily food than healthy vegetables and fruits. Beverage intake was also included in the same assessment. It is evident from this study that nutritional status has a contributory effect on asthma. Patients on the verge of developing asthma, or those who have already developed it, should therefore focus on their diet, maintain good eating habits and take healthy diets, including fruits and vegetables rather than oily foods. Proper sleep may also contribute to the control of asthma.
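
The anthropometric section rests on a single formula, BMI = weight (kg) / height (m)². A minimal sketch follows; the category cut-offs are the commonly used WHO adult thresholds, assumed here for illustration since the abstract does not state which cut-offs the study applied.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def who_category(b: float) -> str:
    """Commonly used WHO adult cut-offs (assumed; studies may adapt them)."""
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal"
    if b < 30.0:
        return "overweight"
    return "obese"

print(round(bmi(70.0, 1.75), 2))  # 22.86
```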

Keywords: NUTRI, BMI, asthma, food

Procedia PDF Downloads 48
120 A Long-Standing Methodology Quest Regarding Commentary of the Qur’an: Modern Debates on Function of Hermeneutics in the Quran Scholarship in Turkey

Authors: Merve Palanci

Abstract:

This paper aims to reveal and analyze methodology debates on Qur’an commentary in Turkish scholarship and to draw sound conclusions about the current situation, with reference to the literature evolving around the credibility of hermeneutics in Qur’an commentary and its methodological connotations, together with other modern approaches to the Qur’an. It is fair to say that Tafseer, constituting one of the main branches of the basic Islamic sciences, has long drawn great attention from both Muslim and non-Muslim scholars. With the emergence of a sharp divide between the natural and social sciences in the post-Enlightenment period, this interest paved the way for methodology discussions conducted in theological circles, which occupy a noticeable place in the Tafseer literature as well. A panoramic glance at the classical treatises on the methodology of Tafseer, namely Usul al-Tafseer, leads the reader to the conclusion that these classics are intrinsically aimed at introducing the Qur’an and the early history of its formation as a corpus and at providing a better understanding of its content. To illustrate, the earliest extant methodology work on Qur’an commentary, al-Aql wa’l Fahm al-Qur’an by Harith al-Muhasibi, covers the Qur’an’s rhetoric, its muhkam and mutashabih, abrogation, etc. Most of the themes in question evidently share a common ground: understanding the Scripture and producing an accurate commentary built on this preliminary phenomenon of understanding. The content of other renowned works in the vein of Tafseer methodology, such as Funun al-Afnan, al-Iqsir fi Ilm al-Tafseer, and the succeeding al-Itqan and al-Burhan, is also rich in hints related to the preliminary phenomenon of understanding. However, these works cannot be classified as full-fledged methodology manuals assuring a true understanding of the Qur’an. 
Hermeneutics, by contrast, is believed to supply substantial data applicable to Qur’an commentary, as it deals with the nature of understanding itself. Referring to the latest tendencies in Tafseer methodology, this paper seeks to centralize hermeneutical debates in the modern scholarship of Qur’an commentary and the incentives that lead scholars to turn to hermeneutics in the Tafseer literature. Guided by these incentives, the study involves three parts. In the introduction, the paper presents the key features of classical methodology works in general terms and traces the main methodological shifts of modern times in Qur’an commentary. To this end, the revisionist school, scientific Qur’an commentary ventures, and thematic Qur’an commentary are included and analysed briefly; historical-critical commentary on the Qur’an, as it bears a close relationship with hermeneutics, is handled predominantly. The second part is based on the hermeneutical nature of understanding the Scripture, establishing a timeline for the beginning of hermeneutics debates in Tafseer, and Fazlur Rahman’s (d. 1988) influence is presented as a theoretical bridge. In the final part, reactions against the application of hermeneutics in Tafseer activity and pro-hermeneutics works are examined through cross-references to the prominent figures of both camps, and the literature in question in theology scholarship in Turkey is explored critically.

Keywords: hermeneutics, Tafseer, methodology, Ulum al-Qur’an, modernity

Procedia PDF Downloads 47
119 Isolation and Transplantation of Hepatocytes in an Experimental Model

Authors: Inas Raafat, Azza El Bassiouny, Waldemar L. Olszewsky, Nagui E. Mikhail, Mona Nossier, Nora E. I. El-Bassiouni, Mona Zoheiry, Houda Abou Taleb, Noha Abd El-Aal, Ali Baioumy, Shimaa Attia

Abstract:

Background: Orthotopic liver transplantation is an established treatment for patients with severe acute and end-stage chronic liver disease. The shortage of donor organs continues to be the rate-limiting factor for liver transplantation throughout the world. Hepatocyte transplantation is a promising treatment for several liver diseases and can also be used as a "bridge" to liver transplantation in cases of liver failure. Aim of the work: This study was designed to develop a highly efficient protocol for the isolation and transplantation of hepatocytes in an experimental Lewis rat model, to provide satisfactory guidelines for future application in humans. Materials and Methods: Hepatocytes were isolated from the liver by the double perfusion technique, and bone marrow cells were isolated by centrifugation of the shafts of the tibia and femur of donor Lewis rats. Recipient rats were subjected to a sub-lethal dose of irradiation 2 days before transplantation. In a laparotomy operation, the spleen was injected with freshly isolated hepatocytes, and bone marrow cells were injected intravenously. The animals were sacrificed 45 days later, and splenic sections were prepared and stained with H&E, PAS, AFP and Prox1. Results: The data obtained from this study showed that the double perfusion technique is successful in the separation of hepatocytes with respect to cell number and viability. The method used for bone marrow cell separation also gave excellent results regarding cell number and viability. Intrasplenic engraftment of hepatocytes and liver tissue formation within the splenic tissue were found in 70% of cases. Hematoxylin and eosin stained splenic sections from 7 rats showed sheets and clusters of cells among the splenic tissues. Periodic Acid Schiff stained splenic sections from 7 rats showed clusters of hepatocytes with intensely stained pink cytoplasmic granules, denoting the presence of glycogen. 
Splenic sections from 7 rats stained with anti-α-fetoprotein antibody showed brownish cytoplasmic staining of the hepatocytes, denoting positive expression of AFP. Splenic sections from 7 rats stained with anti-Prox1 showed brownish nuclear staining of the hepatocytes, denoting positive expression of the Prox1 gene in these cells. Positive expression of the Prox1 gene was also detected in lymphocyte aggregations in the spleens. Conclusions: Isolation of liver cells by the double perfusion technique using collagenase buffer is a reliable method with a very satisfactory yield regarding cell number and viability. The intrasplenic route of transplantation of freshly isolated liver cells in an immunocompromised model was found to give good results regarding cell engraftment and tissue formation. Further studies are needed to assess the function of engrafted hepatocytes by measuring prothrombin time and serum albumin and bilirubin levels.

Keywords: Lewis rats, hepatocytes, BMCs, transplantation, AFP, Prox1

Procedia PDF Downloads 287
118 The Gender Criteria of Film Criticism: Creating the ‘Big’, Avoiding the Important

Authors: Eleni Karasavvidou

Abstract:

Social and anthropological research, parallel to Gender Studies, has highlighted the relationship between social structures and symbolic forms as an important field for the interaction and recording of 'social trends,' since the study of representations can contribute to the understanding of the social functions and power relations they encompass. This 'mirage,' however, has not only to do with the representations themselves but also with the ways they are received and with the film or critical narratives that are established as dominant or alternative. Cinema and the criticism of its cultural products are no exception. Even in the rapidly changing media landscape of the 21st century, movies remain an integral and widespread part of popular culture, making films an extremely powerful means of 'legitimizing' or 'delegitimizing' visions of domination and commonsensical gender stereotypes throughout society. And yet it is film criticism, the 'language per se,' that legitimizes, reinforces, rewards and reproduces (or at least ignores) the stereotypical depictions of female roles that remain common in the realm of film images. Hence the need for this issue to be raised in academic research as well, questioning the gender criteria of film reviews as part of the effort for an inclusive art and society. Qualitative content analysis is used to examine female roles in selected Oscar-nominated films against their reviews from leading websites and newspapers. This method was chosen because of the complex nature of the depictions in the films and the narratives they evoke. The films were divided into basic scenes depicting social functions, such as love and work relationships and positions of power and their function, which were analyzed by content analysis, with borrowings from structuralism (Genette) and the local/universal images of intercultural philology (Wierlacher). 
In addition to measuring overall 'representation time' by gender, other qualitative characteristics were analyzed, such as speaking time, key sayings or actions, and the overall quality of the character's action in relation to the development of the scenario and social representations in general, as well as quantitative ones (the insufficient number of female lead roles, fewer key supporting roles, relatively few female directors and people in the production chain, and how these might affect screen representations). The quantitative analysis in this study was used to complement the qualitative content analysis. The focus then shifted to the criteria of film criticism and to the rhetorical narratives that exclude or highlight in relation to gender identities and functions. In the criteria and language of film criticism, stereotypes are often reproduced or allegedly overturned within the framework of apolitical "identity politics," which mainly addresses the surface of a self-referential cultural-consumer product without connecting it more deeply with material and cultural life. One prime example of this failure is the Bechdel Test, which tracks whether female characters speak to each other in a film regardless of whether women's stories are represented in the films analyzed. If seemingly unbiased male filmmakers still fail to tell truly feminist stories, the same is the case with the criteria of criticism and the related interventions.
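
The Bechdel Test's limitation noted above (a single qualifying exchange is enough to pass, regardless of how women's stories are treated) can be made concrete with a toy sketch. The scene data structure and gender annotations here are entirely hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    speakers: list  # named characters talking in the scene
    genders: dict   # character name -> "f" / "m" (toy annotation)
    topic: str      # coarse topic label for the conversation

def passes_bechdel(scenes) -> bool:
    """Toy Bechdel check: at least one scene in which two named women
    talk to each other about something other than a man.  Note how
    little this requires: one short exchange suffices, regardless of
    whether women's stories are represented at all."""
    for s in scenes:
        women = [c for c in s.speakers if s.genders.get(c) == "f"]
        if len(women) >= 2 and s.topic != "a man":
            return True
    return False

film = [
    Scene(["Ann", "Bea"], {"Ann": "f", "Bea": "f"}, "work"),
    Scene(["Ann", "Carl"], {"Ann": "f", "Carl": "m"}, "a man"),
]
print(passes_bechdel(film))  # True: one short exchange is enough
```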

Keywords: representations, content analysis, reviews, sexist stereotypes

Procedia PDF Downloads 55
117 Harnessing Nature's Fury: Hyptis Suaveolens Loaded Bioactive Liposome for Photothermal Therapy of Lung Cancer

Authors: Sajmina Khatun, Monika Pebam, Aravind Kumar Rengan

Abstract:

Photothermal therapy, a subset of nanomedicine, takes advantage of light-absorbing agents to generate localized heat, selectively eradicating cancer cells. This innovative approach minimizes damage to healthy tissues and offers a promising avenue for targeted cancer treatment. Unlike conventional therapies, photothermal therapy harnesses the power of light to combat malignancies precisely and effectively, showcasing its potential to revolutionize cancer treatment paradigms. The combined strengths of nanomedicine and photothermal therapy signify a transformative shift toward more effective, targeted, and tolerable cancer treatments in the medical landscape. Utilizing natural products becomes instrumental in formulating diverse bioactive medications owing to their various pharmacological properties attributed to the existence of phenolic structures, triterpenoids, and similar compounds. Hyptis suaveolens, commonly known as pignut, stands as an aromatic herb within the Lamiaceae family and represents a valuable therapeutic plant. Flourishing in swamps and alongside tropical and subtropical roadsides, these noxious weeds impede the development of adjacent plants. Hyptis suaveolens ranks among the most globally distributed alien invasive species. The present investigation revealed that a versatile, biodegradable liposome nanosystem (HIL NPs), incorporating bioactive molecules from Hyptis suaveolens, exhibits effective bioavailability to cancer cells, enabling tumor ablation upon near-infrared (NIR) laser exposure. The components within the nanosystem, specifically the bioactive molecules from Hyptis, function as anticancer agents, aiding in the photothermal ablation of highly metastatic lung cancer cells. Despite being a prolific weed impeding neighboring plant growth, Hyptis suaveolens showcases therapeutic benefits through its bioactive compounds. 
The obtained HIL NPs, characterized as a photothermally active liposome nanosystem, demonstrate a pronounced absorption peak in the NIR range and achieve a high photothermal conversion efficiency under NIR laser irradiation. Transmission electron microscopy (TEM) and particle size analysis reveal that HIL NPs possess a spherical shape with a size of 141 ± 30 nm. Moreover, in vitro assessments of HIL NPs against lung cancer cell lines (A549) indicate effective anticancer activity through a combined cytotoxic effect and hyperthermia. Tumor ablation is facilitated by apoptosis induced by the overexpression of γ-H2AX, arresting cancer cell proliferation. Consequently, the multifunctional and biodegradable nanosystem (HIL NPs), incorporating bioactive compounds from Hyptis, provides valuable perspectives for developing an innovative therapeutic strategy originating from a challenging weed. This approach holds promise for potential applications in both bioimaging and the combined use of phyto-photothermal therapy for cancer treatment.
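
Photothermal conversion efficiency of this kind is usually extracted from a laser on/off heating-cooling cycle via a Roper-type energy balance: the heat the particles dissipate at the steady-state temperature, minus the solvent baseline, divided by the laser power actually absorbed at the irradiation wavelength. The abstract does not state the exact method used, so the following is a hedged sketch of that standard calculation with entirely hypothetical numbers:

```python
def photothermal_efficiency(h_A, t_max, t_amb, q_solvent, laser_power, absorbance):
    """Roper-type energy-balance estimate of photothermal conversion efficiency.

    h_A: heat-transfer coefficient times surface area (W/degC), from the cooling fit
    t_max, t_amb: steady-state and ambient temperatures (degC)
    q_solvent: baseline heat input from the solvent/container alone (W)
    laser_power: incident laser power (W)
    absorbance: sample absorbance at the laser wavelength
    """
    absorbed = laser_power * (1.0 - 10.0 ** (-absorbance))  # power actually absorbed
    return (h_A * (t_max - t_amb) - q_solvent) / absorbed

# Hypothetical illustrative values, not measurements from the study.
eta = photothermal_efficiency(h_A=0.01, t_max=55.0, t_amb=25.0,
                              q_solvent=0.05, laser_power=1.0, absorbance=1.0)
print(round(eta, 3))  # 0.278
```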

Keywords: bioactive liposome, hyptis suaveolens, photothermal therapy, lung cancer

Procedia PDF Downloads 56
116 COVID-19’s Impact on the Use of Media, Educational Performance, and Learning in Children and Adolescents with ADHD Who Engaged in Virtual Learning

Authors: Christina Largent, Tazley Hobbs

Abstract:

Objective: A literature review was performed to examine the existing research on the COVID-19 lockdown as it relates to children and adolescents with ADHD, media use, and the impact on educational performance and learning. It was surmised that with the COVID-19 shutdown and transition to remote learning, a less structured learning environment and increased screen time, in addition to potential difficulty accessing school resources, would impair the performance and learning of individuals with ADHD. A resulting increase in the number of youths diagnosed with and treated for ADHD would be expected. As yet, there has been little to no published data on the incidence of ADHD as it relates to COVID-19, outside of reports from several nonprofit agencies such as CHADD (Children and Adults with Attention-Deficit/Hyperactivity Disorder), which reported an increased number of calls to its helpline; the New York-based Child Mind Institute, which reported an increased number of appointments to discuss medications; and research released by Athenahealth showing an increase in the number of patients receiving a new diagnosis of ADHD and new prescriptions for ADHD medications. Methods: A literature search for articles published between 2020 and 2021 in PubMed, Google Scholar and PsycINFO was performed. Search phrases and keywords included "covid, adhd, child, impact, remote learning, media, screen". Results: Studies primarily utilized parental reports, with very few from the perspective of the individuals with ADHD themselves. Most findings thus far show that with the COVID-19 quarantine and transition to online learning, individuals with ADHD experienced a decreased ability to stay focused or adhere to a daily routine, as well as increased inattention-related problems, such as careless mistakes or lack of completion in homework, which in turn translated into overall more difficulty with remote learning. 
Compounding the injury, one study showed (on evaluation of just two different sites within the US) that school-based services for these individuals decreased with the shift to online learning. Increased screen time, television, social media, and gaming were noted among individuals with ADHD. One study further differentiated the degree of digital media use, identifying individuals with "problematic" or "non-problematic" use. Children with ADHD and problematic digital media use suffered from more severe core symptoms of ADHD, negative emotions, executive function deficits, damage to the family environment, pressure from life events, and a lower motivation to learn. Conclusions and Future Considerations: Studies found not only that online learning was difficult for individuals with ADHD but that it, in addition to greater use of digital media, was associated with worsening ADHD symptoms impairing schoolwork, along with secondary findings of worsening mood and behavior. Currently, data on the number of new ADHD cases, as well as on the prescription and usage of stimulants during COVID-19, have not been well documented or studied; such study would be well warranted out of concern for overdiagnosing or overprescribing in our youth. It would also be well worth studying how reversible or long-lasting these negative impacts may be.

Keywords: COVID-19, remote learning, media use, ADHD, child, adolescent

Procedia PDF Downloads 108
115 Formulation and Optimization of Self Nanoemulsifying Drug Delivery System of Rutin for Enhancement of Oral Bioavailability Using QbD Approach

Authors: Shrestha Sharma, Jasjeet K. Sahni, Javed Ali, Sanjula Baboota

Abstract:

Introduction: Rutin is a naturally occurring, strong antioxidant molecule belonging to the bioflavonoid category. Due to its free radical scavenging properties, it has been found to be beneficial in the treatment of various diseases, including inflammation, cancer, diabetes, allergy, cardiovascular disorders and various types of microbial infections. Despite its beneficial effects, it suffers from low aqueous solubility, which is responsible for its low oral bioavailability. The aim of our study was to optimize and characterize a self-nanoemulsifying drug delivery system (SNEDDS) of rutin using a Box-Behnken design (BBD) combined with a desirability function. Further, various antioxidant, pharmacokinetic and pharmacodynamic studies were performed on the optimized rutin SNEDDS formulation. Methodologies: Selection of oil, surfactant and co-surfactant was done on the basis of solubility/miscibility studies. Sefsol + Vitamin E, Solutol HS 15 and Transcutol P were selected as the oil phase, surfactant and co-surfactant, respectively. Optimization of the SNEDDS formulations was done by a three-factor, three-level (3³) BBD. The independent factors were Sefsol + Vitamin E, Solutol HS 15, and Transcutol P. The dependent variables were globule size, self-emulsification time (SEF), % transmittance and cumulative percentage of drug released. Various response surface graphs and contour plots were constructed to understand the effect of the different factors, their levels and combinations on the responses. The optimized rutin SNEDDS formulation was characterized for various parameters such as globule size, zeta potential, viscosity, refractive index, % transmittance and in vitro drug release. Ex vivo permeation studies and pharmacokinetic studies were performed on the optimized formulation. Antioxidant activity was determined by DPPH and reducing power assays. Anti-inflammatory activity was determined by the carrageenan-induced rat paw oedema method. 
Permeation of rutin across the small intestine was assessed using confocal laser scanning microscopy (CLSM). Major findings: The optimized SNEDDS formulation, consisting of Sefsol + Vitamin E, Solutol HS 15 and Transcutol HP at proportions of 25:35:17.5 (w/w), was prepared, and the predicted and experimental values were found to be in close agreement. The globule size and PDI of the optimized SNEDDS formulation were found to be 16.08 ± 0.02 nm and 0.124 ± 0.01, respectively. A significant (p < 0.05) increase in percentage drug release was achieved with the optimized SNEDDS formulation (98.8%) as compared to rutin suspension. Furthermore, the pharmacokinetic study showed a 2.3-fold increase in relative oral bioavailability compared with that of the suspension. Antioxidant assay results indicated better efficacy of the developed formulation than the pure drug, comparable with ascorbic acid. The anti-inflammatory studies showed 72.93% inhibition for the SNEDDS formulation, significantly higher than for the drug suspension (46.56%). The CLSM results indicated that the absorption of the SNEDDS formulation was considerably higher than that from rutin suspension. Conclusion: Rutin SNEDDS were successfully prepared and can serve as an effective tool for enhancing the oral bioavailability and efficacy of rutin.
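For readers unfamiliar with the design, the three-factor, three-level Box-Behnken design used above can be sketched by generating its coded design matrix. The factor names follow the abstract; the number of center points (three) is an assumption, since the abstract does not state it.

```python
# Sketch: coded (-1/0/+1) design matrix of a three-factor Box-Behnken design.
# Factor names follow the abstract; three center points are assumed.
from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    """Return coded runs: edge midpoints for each factor pair, plus center points."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[i], run[j] = a, b     # two factors at +/-1, the rest at 0
            runs.append(run)
    runs += [[0] * n_factors for _ in range(n_center)]
    return runs

factors = ["Sefsol + Vitamin E", "Solutol HS 15", "Transcutol P"]
design = box_behnken(len(factors))
print(len(design))  # 12 edge runs + 3 center points = 15 runs
```

Each run in the design matrix corresponds to one formulation batch; fitting a quadratic response surface to the measured responses (globule size, SEF, etc.) over these 15 runs is what the desirability function then optimizes.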

Keywords: rutin, oral bioavailability, pharmacokinetics, pharmacodynamics

Procedia PDF Downloads 475
114 Anti-tuberculosis, Resistance Modulatory, Anti-pulmonary Fibrosis and Anti-silicosis Effects of Crinum Asiaticum Bulbs and Its Active Metabolite, Betulin

Authors: Theophilus Asante, Comfort Nyarko, Daniel Antwi

Abstract:

Drug-resistant tuberculosis, together with associated comorbidities such as pulmonary fibrosis and silicosis, is one of the most serious global public health threats and requires immediate action to curb or mitigate it. Drug resistance prolongs hospital stays, increases the cost of medication, and increases the death toll recorded annually. Crinum asiaticum bulb extract (CAE) and betulin (BET) are known for their biological and pharmacological effects. Pharmacological effects reported for CAE include antimicrobial, anti-inflammatory, anti-pyretic, analgesic, and anti-cancer effects. Betulin has exhibited a multitude of powerful pharmacological properties, ranging from antitumor, anti-inflammatory and anti-parasitic to anti-microbial and anti-viral activities. This work sought to investigate the anti-tuberculosis and resistance-modulatory effects of both agents and to assess their capacity to mitigate pulmonary fibrosis and silicosis. In the anti-tuberculosis and resistance-modulation studies, both CAE and BET showed strong antimicrobial activities (31.25 µg/ml ≤ MIC ≤ 500 µg/ml) against the studied microorganisms, produced significant anti-efflux pump and biofilm inhibitory effects (p < 0.0001), and exhibited resistance-modulatory and synergistic effects when combined with standard antibiotics. Crinum asiaticum bulb extract and betulin were also shown to possess anti-pulmonary fibrosis effects. There was an increased survival rate in the CAE and BET treatment groups compared to the BLM-induced group, and a marked decrease in the levels of hydroxyproline and collagen I and III in the CAE and BET treatment groups compared to the BLM-treated group. The CAE and BET treatment groups significantly downregulated pro-fibrotic and pro-inflammatory mediators such as TGF-β1, MMP9, IL-6, IL-1β and TNF-α, compared to an increase in the BLM-treated groups.
The histological findings of the lungs suggested curative effects of CAE and BET following BLM-induced pulmonary fibrosis in mice. The study showed improved lung function, with wide focal areas of viable alveolar spaces and little collagen fiber deposition in the lungs of the treatment groups. In the anti-silicosis and pulmonoprotective studies, the levels of NF-κB, TNF-α, IL-1β, IL-6, hydroxyproline, and collagen types I and III were significantly reduced by CAE and BET (p < 0.0001). Both CAE and BET significantly (p < 0.0001) reduced the levels of hydroxyproline and collagen I and III compared with the negative control group. CAE and BET also significantly (p < 0.0001) reduced the levels of BALF biomarkers such as macrophages, lymphocytes, monocytes, and neutrophils. When examined for antioxidant activity, CAE and BET raised the levels of catalase (CAT) and superoxide dismutase (SOD) while lowering the level of malondialdehyde (MDA). There was an improvement in lung function when lung tissues were examined histologically. Crinum asiaticum bulb extract and betulin were thus found to exhibit anti-tubercular and resistance-modulatory properties, as well as the capacity to minimize TB comorbidities such as pulmonary fibrosis and silicosis. In addition, CAE and BET may act protectively, facilitating the preservation of the lung's physiological integrity. The outcomes of this study might pave the way for the development of leads for single medications for the management of drug-resistant tuberculosis and its accompanying comorbidities.

Keywords: fibrosis, crinum, tuberculosis, anti-inflammation, drug resistance

Procedia PDF Downloads 56
113 Impact of Urban Migration on Caste: Rohinton Mistry’s A Fine Balance and Rural-to-Urban Caste Migration in India

Authors: Mohua Dutta

Abstract:

The primary aim of this research paper is to investigate the forced urban migration of Dalits in India who are fleeing caste persecution in rural areas. This paper examines the relationship between caste and rural-to-urban internal migration in India through a literary text, Rohinton Mistry’s A Fine Balance, highlighting the challenges faced by Dalits in rural areas that force them to migrate to urban areas. Despite the prevalence of such discussions in Dalit autobiographies written in vernacular languages, there is a lack of discussion of caste migration in Indian English Literature, including in criticism of the present text, as evidenced by the existing critical interpretations of the novel; this paper seeks to rectify that gap. The primary research questions are how urban migration affects the caste system in India and why rural-to-urban caste migration occurs. The purpose of this paper is to better understand the reasons for Dalit migration, the challenges Dalits face in rural and urban areas, and the lingering influence of caste in both. The study reveals that the promise of mobility and emancipation offered by class operations drives rural-to-urban caste migration in India, but also that caste marginalization in rural areas is closely linked to class marginalization and other forms of subalternity in urban areas. Moreover, the caste system persists in urban areas as well, making Dalit migrants more vulnerable to social, political, and economic discrimination. The reason is that, despite changes in profession and urban migration, the trapped structure of caste capital and family networks exposes migrants to caste and class oppressions. To reach its conclusion, this study employs a variety of methodologies. Discourse analysis is used to investigate the current debates and narratives surrounding caste migration.
Critical race theory, specifically intersectional theory and social constructivism, aids in comprehending the complexities of caste, class, and migration. Mistry's novel is subjected to textual analysis in order to identify and interpret references to caste migration. Secondary data, such as theoretical understanding of the caste system in operation and scholarly works on caste migration, are also used to support and strengthen the findings and arguments presented in the paper. The study concludes that rural-to-urban caste migration in India is primarily motivated by the promise of socioeconomic mobility and emancipation offered by urban spaces. However, the caste system persists in urban areas, resulting in the continued marginalisation and discrimination of Dalit migrants. The study also highlights the limitations of urban migration in providing true emancipation for Dalit migrants, as they remain trapped within caste and family network structures. Overall, the study raises awareness of the complexities surrounding caste migration and its impact on the lives of India's marginalised communities. This study contributes to the field of Migration Studies by shedding light on an often-overlooked issue: Dalit migration. It challenges existing literary critical interpretations by emphasising the significance of caste migration in Indian English Literature. The study also emphasises the interconnectedness of caste and class, broadening understanding of how these systems function in both rural and urban areas.

Keywords: rural-to-urban caste migration in India, internal migration in India, caste system in India, dalit movement in India, rooster coop of caste and class, urban poor as subalterns

Procedia PDF Downloads 37
112 Relationship Between Brain Entropy Patterns Estimated by Resting State fMRI and Child Behaviour

Authors: Sonia Boscenco, Zihan Wang, Euclides José de Mendoça Filho, João Paulo Hoppe, Irina Pokhvisneva, Geoffrey B.C. Hall, Michael J. Meaney, Patricia Pelufo Silveira

Abstract:

Entropy can be described as a measure of the number of states of a system, and when applied to physiological time-based signals, it serves as a measure of complexity. In functional connectivity data, entropy can account for the moment-to-moment variability that is neglected in traditional functional magnetic resonance imaging (fMRI) analyses. While brain fMRI resting-state entropy has been associated with some pathological conditions such as schizophrenia, no investigations have explored the association between brain entropy measures and individual differences in child behavior in healthy children. We describe a novel exploratory approach to evaluate brain fMRI resting-state data in two child cohorts, MAVAN (N = 54, 4.5 years, 48% males) and GUSTO (N = 206, 4.5 years, 48% males), and its associations with child behavior, which can be used in future research in the context of child exposures and long-term health. Following rs-fMRI data pre-processing and Shannon entropy calculation across 32 network regions of interest, yielding 496 unique functional connections, partial correlation coefficient analysis adjusted for sex was performed to identify associations between the entropy data and Strengths and Difficulties Questionnaire scores in MAVAN and Child Behavior Checklist domains in GUSTO. Significance was set at p < 0.01, and we found eight significant associations in GUSTO. Negative associations were found between two frontoparietal-cerebellar posterior connections and oppositional defiant problems (r = -0.212, p = 0.006; r = -0.200, p = 0.009). Positive associations were identified between somatic complaints and four default mode connections: salience insula (r = 0.202, p < 0.01), dorsal attention intraparietal sulcus (r = 0.231, p = 0.003), language inferior frontal gyrus (r = 0.207, p = 0.008) and language posterior superior temporal gyrus (r = 0.210, p = 0.008).
Positive associations were also found between an insula-frontoparietal connection and attention deficit/hyperactivity problems (r = 0.200, p < 0.01), and between an insula-default mode connection and pervasive developmental problems (r = 0.210, p = 0.007). In MAVAN, ten significant associations were identified. Two positive associations were found with prosocial scores: the salience prefrontal cortex and dorsal attention connection (r = 0.474, p = 0.005) and the salience supramarginal gyrus and dorsal attention intraparietal sulcus connection (r = 0.447, p = 0.008). The insula-prefrontal connection was negatively associated with peer problems (r = -0.437, p < 0.01). Conduct problems were negatively associated with six separate connections, including the left salience insula and right salience insula (r = -0.449, p = 0.008), the left salience insula and right salience supramarginal gyrus (r = -0.512, p = 0.002), the default mode and visual network (r = -0.444, p = 0.009), the dorsal attention and language network (r = -0.490, p = 0.003), and the default mode and posterior parietal cortex (r = -0.546, p = 0.001). Entropy measures of resting-state functional connectivity can thus be used to identify individual differences in brain function that correlate with variation in behavioral problems in healthy children. Further studies applying this marker in the context of environmental exposures are warranted.
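As a minimal sketch of the entropy computation described above, a histogram-based Shannon entropy of a single time series might look as follows; the bin count is an assumption, since the study's exact estimator is not specified.

```python
# Sketch: histogram-based Shannon entropy (in bits) of a 1-D time series.
# The 10-bin discretization is an assumption for illustration only.
import math
from collections import Counter

def shannon_entropy(signal, n_bins=10):
    """Discretize the signal into equal-width bins, then compute -sum p*log2(p)."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_bins or 1.0   # constant signal -> everything in one bin
    bins = [min(int((x - lo) / width), n_bins - 1) for x in signal]
    n = len(signal)
    return -sum((c / n) * math.log2(c / n) for c in Counter(bins).values())

flat = [1.0] * 100              # no variability -> zero entropy
spread = list(range(100))       # evenly filled bins -> maximal entropy
print(round(shannon_entropy(spread), 3))  # 3.322, i.e. log2(10) bits
```

In the study's setting a value like this would be computed per connection (or per regional signal) and then correlated, with sex partialled out, against the behavioral questionnaire scores.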

Keywords: child behaviour, functional connectivity, imaging, Shannon entropy

Procedia PDF Downloads 171
111 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines

Authors: Kamyar Tolouei, Ehsan Moosavi

Abstract:

In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem involving many constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally constrains the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed important advances in long-term production scheduling and optimization algorithms as researchers have become highly cognizant of the issue. Even so, the LTPSOP cannot be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply a single estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most such mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to incorporate grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value while simultaneously minimizing the risk of deviation from the production targets under grade uncertainty, subject to all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to capture grade uncertainty in the strategic mine schedule and to produce a more profitable and risk-aware production schedule.
A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model combining the augmented Lagrangian relaxation (ALR) method with a metaheuristic algorithm, Harris Hawks optimization (HHO), to solve the LTPSOP under grade uncertainty. In this study, HHO is employed to update the Lagrange multipliers. In addition, a machine learning method, Random Forest, is applied to estimate gold grade in a mineral deposit, and the Monte Carlo method is used as the simulation method with 20 realizations. The results indicate that the proposed versions improve considerably on the traditional methods. The outcomes were also compared with the ALR-genetic algorithm and ALR-subgradient approaches. To demonstrate the applicability of the model, a case study on an open-pit gold mining operation is implemented. The framework shows the capability to minimize risk and to improve the expected net present value and financial profitability for the LTPSOP, and it controls geological risk more effectively than the traditional procedure by accounting for grade uncertainty within the hybrid model framework.
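A toy sketch of the multiplier update at the core of an ALR scheme like the one described above: each Lagrange multiplier grows with the violation of its production target. The fixed step size, targets, and tonnage values are illustrative assumptions; the paper updates the multipliers with HHO rather than this projected-subgradient rule.

```python
# Illustrative ALR-style multiplier update for production-target constraints.
# Toy constraint direction: production must not exceed the target, so only
# over-production is penalized; all numbers below are assumptions.
def update_multipliers(lmbda, production, targets, step=0.5):
    """lambda_{k+1} = max(0, lambda_k + step * (production - target)), per period."""
    return [max(0.0, l + step * (p - t))
            for l, p, t in zip(lmbda, production, targets)]

targets = [100.0, 80.0]      # ore tonnage targets per period (toy values)
production = [110.0, 70.0]   # tonnages of the current candidate schedule
lmbda = update_multipliers([0.0, 0.0], production, targets)
print(lmbda)  # [5.0, 0.0] -- only the violated first-period target is penalized
```

In the hybrid model, the inner scheduling subproblem would be re-solved with the updated multipliers, and HHO searches over multiplier values instead of following a fixed step.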

Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization

Procedia PDF Downloads 73
110 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make staying globally competitive more difficult without a shift in focus on how science is taught in US classrooms. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is needed to design resources that enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students’ representational competence, the visualization skills necessary to process these science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students’ visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. This data was previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of STEM Education scholars from three US universities (NSF award 1540888), utilizing mental rotation tasks to assess student visual literacy.
Hemodynamic response data from fNIRsoft were exported as an Excel file, with 80 items each of 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner’s TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of its predictions were accurate. The model determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, a region important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements analyzed with an ANN, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, and therefore science literacy.
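To make the boosting idea concrete, here is a from-scratch sketch of gradient boosting with regression stumps on toy 1-D data; the RapidMiner model itself used 140 trees of maximum depth 7 on fNIR features, so everything below (data, tree count, learning rate) is illustrative.

```python
# Sketch of gradient boosting: each regression stump is fit to the residuals
# of the current ensemble (squared loss). Toy 1-D data; all settings assumed.
def fit_stump(x, residual):
    best = None
    for thr in sorted(set(x))[:-1]:  # last value would leave the right side empty
        left = [r for xi, r in zip(x, residual) if xi <= thr]
        right = [r for xi, r in zip(x, residual) if xi > thr]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    return best[1:]  # (threshold, left value, right value)

def boost(x, y, n_trees=50, lr=0.2):
    base = sum(y) / len(y)
    pred, stumps = [base] * len(y), []
    for _ in range(n_trees):
        residual = [yi - pi for yi, pi in zip(y, pred)]
        thr, lv, rv = fit_stump(x, residual)
        stumps.append((thr, lv, rv))
        pred = [p + lr * (lv if xi <= thr else rv) for p, xi in zip(pred, x)]
    return base, stumps

def predict(base, stumps, xi, lr=0.2):
    return base + sum(lr * (lv if xi <= thr else rv) for thr, lv, rv in stumps)

x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]  # step function
base, stumps = boost(x, y)
print(round(predict(base, stumps, 1), 3), round(predict(base, stumps, 6), 3))
```

The real model replaces the stumps with depth-7 trees over many features, but the sequential fit-to-residuals loop is the same mechanism.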

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 92
109 Tales of Two Cities: 'Motor City' Detroit and 'King Cotton' Manchester: Transatlantic Transmissions and Transformations, Flows of Communications, Commercial and Cultural Connections

Authors: Dominic Sagar

Abstract:

Manchester, ‘King Cotton’, the first truly industrial city of the nineteenth century, passed the baton to Detroit, ‘Motor City’, the first truly modern city. We explore the tales of the two cities, their rise, fall and subsequent post-industrial decline, their transitions and transformations, while paralleling their corresponding commercial, cultural, industrial and even agricultural, artistic and musical transactions and connections. The paper will briefly contextualize how technologies of the industrial and modern ages have been instrumental in the development of these cities and of other similar cities, including New York. However, the main focus of the study will be the present and, more importantly, the future: how globalisation and advancements in digital technologies and industries have shaped the cities’ development, from Alan Turing and the making of the first programmable computer to the effects of digitalisation and digital initiatives. Manchester now has a thriving creative digital infrastructure of Digilabs, FabLabs, MadLabs and hubs; the study will reference the Smart Project and the Manchester Digital Development Association, while paralleling similar digital and creative industrial initiatives now starting to happen in Detroit. The paper will explore other topics, including the need to allow for zones of experimentation, areas to play, think and create, in order to develop and instigate new initiatives and ideas of production, carrying on the tradition of influential inventions throughout the history of these key cities. Other topics will be briefly touched on, such as urban farming, citing the Biospheric Foundation in Manchester and similar projects in Detroit. However, the main thread will focus on the music industries and how they are contributing to the regeneration of cities.
Musically and artistically, Manchester and Detroit have been closely connected by the flow and transmission of information and the transfer of ideas, via ‘cars and trains and boats and planes’ through to the new ‘superhighway’. From Detroit to Manchester, often via New York and Liverpool, and back again, these musical and artistic connections and flows have greatly influenced both cities, and advances in technology are still connecting them. In summary, two hugely important industrial cities both subsequently experienced a massive decline in fortunes, having had their large industrial hearts ripped out and ravaged, leaving dying industrial carcasses and car crashes of despair, dereliction, desolation and post-industrial wastelands vacated by a massive exodus of the cities’ inhabitants. The aim is to examine the affinity, similarity and differences between Manchester and Detroit, from their industrial importance to their post-industrial decline and their current transmutations and transformations as cities in transition; to contrast how they have dealt with these problems and how they can learn from each other; and to frame these topics in terms of how various communities have shaped these cities and how the creative industries and design (the new cotton and car manufacturing industries) are reinventing post-industrial cities, speculating on the future development of these themes in relation to globalisation, digitalisation and how cities can function to develop solutions to communal living in the cities of the future.

Keywords: cultural capital, digital developments, musical initiatives, zones of experimentation

Procedia PDF Downloads 164
108 Structural Molecular Dynamics Modelling of FH2 Domain of Formin DAAM

Authors: Rauan Sakenov, Peter Bukovics, Peter Gaszler, Veronika Tokacs-Kollar, Beata Bugyi

Abstract:

FH2 (formin homology-2) domains of several proteins, collectively known as formins, including DAAM, DAAM1 and mDia1, promote G-actin nucleation and elongation. FH2 domains of these formins exist as oligomers. Chain dimerization through ring-structure formation serves as the structural basis for the actin polymerization function of the FH2 domain. Proper single-chain configuration and specific interactions between its various regions are necessary for individual chains to form a dimer functional in G-actin nucleation and elongation. FH1 and WH2 domain-containing formins have been shown to behave as intrinsically disordered proteins. Thus, the aim of this research was to study the structural dynamics of the FH2 domain of DAAM. To investigate its structural features, molecular dynamics simulations of chain A of the FH2 domain of DAAM, solvated in a water box with 50 mM NaCl, were conducted at temperatures from 293.15 to 353.15 K with VMD 1.9.2, NAMD 2.14 and Amber Tools 21, using the 2z6e and 1v9d PDB structures of DAAM obtained from the I-TASSER web server. The calcium- and ATP-bound G-actin structure (PDB 3hbt) was used as a reference protein with well-described denaturation dynamics. Topology and parameter information from the CHARMM 2012 additive all-atom force fields for proteins, carbohydrate derivatives, water and ions was used in NAMD 2.14, and the ff19SB force field for proteins in Amber Tools 21. The systems were energy-minimized for the first 1000 steps, equilibrated, and production runs were performed in the NPT ensemble for 1 ns using stochastic Langevin dynamics and the particle mesh Ewald method. Our root-mean-square deviation (RMSD) analysis of the molecular dynamics of chain A of the FH2 domain of DAAM revealed similarly insignificant changes in the total molecular average RMSD values at temperatures from 293.15 to 353.15 K.
In contrast, the total molecular average RMSD values of G-actin showed a considerable increase at 328 K, which corresponds to the denaturation of the G-actin molecule at this temperature and its transition from the native, ordered state to the denatured, disordered state, which is well described in the literature. The RMSD values of the lasso and tail regions of chain A of the FH2 domain of DAAM were higher than the total molecular average RMSD at temperatures from 293.15 to 353.15 K. These regions are functional in intra- and interchain interactions and contain the highly conserved tryptophan residues of the lasso region, the highly conserved GNYMN sequence of the post region, and the amino acids forming the shell of the hydrophobic pocket of the salt bridge between Arg171 and Asp321, which are important for the structural stability and ordered state of the FH2 domain of DAAM and for its functions in FH2 domain dimerization. In conclusion, the higher-than-average RMSD values of the lasso and post regions of chain A may explain the disordered state of the FH2 domain of DAAM at temperatures from 293.15 to 353.15 K. Finally, the absence of a marked transition, in terms of significant changes in average molecular RMSD values, between native and denatured states of the FH2 domain of DAAM over this temperature range suggests that these formins can be attributed to the group of intrinsically disordered proteins rather than to intrinsically ordered proteins such as G-actin.
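The RMSD quantity underlying this analysis can be sketched directly; the snippet assumes frames have already been aligned to the reference, since the fitting step is handled by the simulation tools.

```python
# Sketch: per-frame RMSD against a reference structure,
# RMSD = sqrt( mean_i |r_i(frame) - r_i(ref)|^2 ) over atom coordinates.
# Assumes the frame is already superposed onto the reference.
import math

def rmsd(frame, reference):
    """frame, reference: equal-length lists of (x, y, z) atom coordinates."""
    assert len(frame) == len(reference)
    sq = sum((a - b) ** 2
             for atom, ref in zip(frame, reference)
             for a, b in zip(atom, ref))
    return math.sqrt(sq / len(frame))

ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
shifted = [(0.0, 3.0, 4.0), (1.0, 3.0, 4.0)]  # every atom displaced by 5 units
print(rmsd(shifted, ref))  # 5.0
```

A per-region RMSD, as reported for the lasso, tail and post regions, is the same computation restricted to the atoms of that region.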

Keywords: FH2 domain, DAAM, formins, molecular modelling, computational biophysics

Procedia PDF Downloads 108
107 An Engineer-Oriented Life Cycle Assessment Tool for Building Carbon Footprint: The Building Carbon Footprint Evaluation System in Taiwan

Authors: Hsien-Te Lin

Abstract:

The purpose of this paper is to introduce the BCFES (building carbon footprint evaluation system), a life cycle assessment (LCA) tool developed by the Low Carbon Building Alliance (LCBA) in Taiwan. A qualified BCFES for the building industry should fulfill the function of evaluating carbon footprint throughout all stages of the life cycle of a building project, including the production, transportation and manufacturing of materials, construction, daily energy usage, renovation and demolition. However, many existing BCFESs are too complicated and not very designer-friendly, creating obstacles to the implementation of carbon reduction policies. One of the greatest obstacles is the misapplication of the carbon footprint inventory standards PAS 2050 or ISO 14067, which are designed for mass-produced goods rather than building projects. When these product-oriented rules are applied to building projects, one must compute a tremendous amount of data for raw materials and the transportation of construction equipment throughout the construction period based on purchasing lists and construction logs. This verification method is cumbersome by nature and unhelpful to the promotion of low carbon design. With a view to providing an engineer-oriented BCFES with pre-diagnosis functions, a component input/output (I/O) database system and a scenario simulation method for building energy are proposed herein. Most existing BCFESs base their calculations on a product-oriented carbon database of raw materials such as cement, steel, glass, and wood. However, data on raw materials are of little use for encouraging carbon reduction design without a feedback mechanism, because an engineering project is not designed in terms of raw materials but rather of building components, such as flooring, walls, roofs, ceilings, roads or cabinets. The LCBA Database has been compiled from existing carbon footprint databases for raw materials and architectural graphic standards.
Project designers can now use the LCBA Database to conduct low carbon design in a much simpler and more efficient way. Daily energy usage throughout a building's life cycle, including air conditioning, lighting, and electric equipment, is very difficult for the building designer to predict. A good BCFES should provide a simplified and designer-friendly method to overcome this obstacle. In this paper, the author has developed a simplified tool, the dynamic Energy Use Intensity (EUI) method, to accurately predict energy usage with simple multiplications and additions using EUI data and the designed efficiency levels for the building envelope, air conditioning, lighting and electrical equipment. Remarkably simple to use, it can help designers pre-diagnose hotspots in a building's carbon footprint and further enhance low carbon design. The BCFES-LCBA offers the advantages of an engineer-friendly component I/O database, simplified energy prediction methods, pre-diagnosis of carbon hotspots and sensitivity to good low carbon designs, making it an increasingly popular carbon management tool in Taiwan. To date, about thirty projects have been awarded BCFES-LCBA certification, and the assessment has become mandatory in some cities.
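The dynamic EUI arithmetic described above can be sketched as a sum of per-use products; all numbers below are illustrative assumptions, not values from the BCFES-LCBA database.

```python
# Sketch of the dynamic EUI idea: annual energy use as simple multiplications
# and additions of a baseline EUI per end use, floor area, and a designed
# efficiency level. All figures are illustrative assumptions.
def annual_energy(area_m2, eui_by_use, efficiency_by_use):
    """Sum over end uses of baseline EUI (kWh/m2/yr) x area x efficiency factor."""
    return sum(eui_by_use[use] * area_m2 * efficiency_by_use.get(use, 1.0)
               for use in eui_by_use)

eui = {"ac": 60.0, "lighting": 24.0, "equipment": 16.0}   # kWh/m2/yr (toy values)
eff = {"ac": 0.75, "lighting": 0.5, "equipment": 1.0}     # designed efficiency levels
print(annual_energy(1000.0, eui, eff))  # 73000.0 kWh/yr
```

Comparing each end use's product against the others is what lets a designer pre-diagnose carbon hotspots before detailed simulation.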

Keywords: building carbon footprint, life cycle assessment, energy use intensity, building energy

Procedia PDF Downloads 118
106 Michel Foucault’s Docile Bodies and The Matrix Trilogy: A Close Reading Applied to the Human Pods and Growing Fields in the Films

Authors: Julian Iliev

Abstract:

The recent release of The Matrix Resurrections persuaded many film scholars that The Matrix trilogy had lost its appeal and that its concepts were largely outdated. This study examines the human pods and growing fields in the trilogy. Their functionality is compared to Michel Foucault’s concept of docile bodies, linking the fictional and contemporary worlds, and this paradigm is scrutinized through the surveillance literature. The analogy brings to light common elements of hidden surveillance practices in technologies, and the comparison illustrates the effects of the body manipulation portrayed in the movies and their relevance to contemporary surveillance practices. Many scholars have utilized a close reading methodology in film studies (J. Bizzocchi, J. Tanenbaum, P. Larsen, S. Herbrechter, and Deacon et al.). The use of a particular lens through which a media text is examined is an indispensable factor that needs to be incorporated into the methodology. The study spotlights the scenes from the trilogy depicting the human pods and growing fields. The functionality of the pods and fields compares directly with Foucault’s concept of docile bodies; by utilizing Foucault’s study as a lens, the research unearths hidden components of, and insights into, the films. Foucault recognizes three disciplines that produce docile bodies: 1) manipulation and the interchangeability of individual bodies, 2) elimination of unnecessary movements and management of time, and 3) a command system guaranteeing constant supervision and continuity protection. These disciplines can be found in the pods and growing fields. Each body occupies a single pod, aiding easier manipulation and fast interchangeability. The movement of the bodies in the pods is reduced to the absolute minimum; the body is thus transformed into the ultimate object of control, with minimum movement correlating to maximum energy generation. Supervision is exercised by wiring the body with numerous types of cables.
This ultimate supervision of body activity reduces the body’s purpose to mere functioning. If a body does not function as an energy source, it is unplugged, ejected, and liquefied. The command system secures the constant supervision and continuity of the process. To Foucault, the disciplines are distinctly different from slavery because they stop short of a total takeover of the bodies. This is a clear difference from the slave system implemented in the films: even though the machines’ system might lack sophistication, it makes up for it in elevated functionality. Further, surveillance literature illustrates the connection between the generation of body energy in The Matrix trilogy and the generation of individual data in contemporary society. This study found that the three disciplines producing docile bodies were present in the portrayal of the pods and fields in The Matrix trilogy. The above comparison, combined with surveillance literature, yields insights into analogous processes and contemporary surveillance practices. Thus, the constant generation of energy in The Matrix trilogy can be equated to the consistent generation of data in contemporary society. This essay shows the relevance of the body manipulation concept in the Matrix films to contemporary surveillance practices.

Keywords: docile bodies, film trilogies, matrix movies, michel foucault, privacy loss, surveillance

Procedia PDF Downloads 61
105 Characterizing the Spatially Distributed Differences in the Operational Performance of Solar Power Plants Considering Input Volatility: Evidence from China

Authors: Bai-Chen Xie, Xian-Peng Chen

Abstract:

China has become the world's largest energy producer and consumer, and its development of renewable energy is of great significance to global energy governance and the fight against climate change. The rapid growth of solar power in China could help it achieve its ambitious carbon peak and carbon neutrality targets early. However, the non-technical costs of solar power in China are much higher than international levels, meaning that inefficiencies are rooted in poor management and improper policy design and that efficiency distortions have become a serious challenge to the sustainable development of the renewable energy industry. Unlike fossil generation technologies, the output of solar power is closely related to the volatile solar resource, and the spatial unevenness of solar resource distribution leads to potential spatial differences in efficiency. It is therefore necessary to develop an efficiency evaluation method that accounts for the volatility of solar resources and to explore how natural geography and the social environment shape the spatially varying distribution of efficiency, so as to uncover the root causes of managerial inefficiency. The study treats solar resources as stochastic inputs, introduces a chance-constrained data envelopment analysis model combined with the directional distance function, and measures the solar resource utilization efficiency of 222 solar power plants in representative photovoltaic bases in northwestern China. Through meta-frontier analysis, we measured the characteristics of different power plant clusters and compared the differences among groups, discussed the mechanisms by which environmental factors influence inefficiency, and performed statistical tests through the system generalized method of moments.
Rational localization of power plants is a systematic project that requires careful consideration of the full utilization of solar resources, low transmission costs, and guaranteed power consumption. Suitable temperature, precipitation, and wind speed can improve the working performance of photovoltaic modules; a reasonable terrain inclination can reduce land costs; and proximity to cities strongly guarantees the consumption of electricity. The density of electricity demand and of high-tech industries matters more than resource abundance, because it triggers the clustering of power plants, producing a demonstration and competitive effect. To ensure renewable energy consumption, increased support for rural grids and the encouragement of direct trading between generators and neighboring users will provide solutions. The study provides proposals for improving the full life-cycle operational activities of solar power plants in China, reducing high non-technical costs and improving competitiveness against fossil energy sources.
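The chance-constrained formulation itself is not given in the abstract; as a rough sketch of the underlying machinery, the deterministic directional-distance DEA score can be computed with a small linear program. The toy data, direction vectors, and function name below are invented for illustration, and the stochastic, chance-constrained extension is omitted:

```python
import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, o, gx, gy):
    """Directional-distance DEA inefficiency score for unit o.

    X: (n_units, n_inputs), Y: (n_units, n_outputs). Maximizes beta such
    that the projected point (x_o - beta*gx, y_o + beta*gy) still lies in
    the production set spanned by all observed units (CRS technology).
    """
    n = X.shape[0]
    # decision vector: [beta, lambda_1..lambda_n]; maximize beta -> minimize -beta
    c = np.zeros(1 + n)
    c[0] = -1.0
    # inputs:  sum_j lam_j * x_ij + beta * gx_i <= x_oi
    A_in = np.hstack([np.asarray(gx, float).reshape(-1, 1), X.T])
    b_in = X[o]
    # outputs: -sum_j lam_j * y_rj + beta * gy_r <= -y_or
    A_out = np.hstack([np.asarray(gy, float).reshape(-1, 1), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]  # beta: 0 = efficient, larger = more inefficient

# toy example: 4 plants, inputs = (capacity, solar resource), output = generation
X = np.array([[2.0, 5.0], [3.0, 4.0], [4.0, 6.0], [2.5, 5.5]])
Y = np.array([[10.0], [11.0], [12.0], [8.0]])
scores = [directional_distance(X, Y, o, gx=X[o], gy=Y[o]) for o in range(4)]
```

With the direction set to each unit's own observed input-output mix, a dominated plant (here, the fourth, which uses more of both inputs than the first while generating less) receives a strictly positive inefficiency score.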

Keywords: solar power plants, environmental factors, data envelopment analysis, efficiency evaluation

Procedia PDF Downloads 59
104 The 10,000-Fold Effect of Retrograde Neurotransmission: A New Concept for Cerebral Palsy Revival by the Use of Nitric Oxide Donors

Authors: V. K. Tewari, M. Hussain, H. K. D. Gupta

Abstract:

Background: Nitric oxide donors (NODs), here intrathecal sodium nitroprusside (ITSNP) followed by oral tadalafil 20 mg, have been studied in cerebral palsy patients for fast recovery. This work proposes two mechanisms for acute cases and one mechanism for chronic cases, all interrelated, for physiological recovery. a) Retrograde neurotransmission (acute cases): 1) Normal excitatory impulse: at the synaptic level, glutamate activates NMDA receptors, with nitric oxide synthase (NOS) on the postsynaptic membrane, for further propagation by the calcium-calmodulin complex. Nitric oxide (NO, produced by NOS) travels backward across the chemical synapse and binds the axon-terminal NO receptor/sGC of the presynaptic neuron, regulating anterograde neurotransmission (ANT) via retrograde neurotransmission (RNT). Heme is the ligand-binding site of the NO receptor/sGC; it exhibits a >10,000-fold higher affinity for NO than for oxygen (the 10,000-fold effect), and binding is complete within 20 msec. 2) Pathological conditions: normal synaptic activity, including both ANT and RNT, is absent. A NO donor (SNP) releases NO in the postsynaptic region; NO travels backward across the chemical synapse to bind to the heme of the NO receptor in the axon terminal of the presynaptic neuron, generating an impulse as under normal conditions. b) Vasospasm (acute cases): perforators show vasospastic activity, and NO vasodilates the perforators via the NO-cGMP pathway. c) Long-term potentiation (LTP) (chronic cases): the NO-cGMP pathway plays a role in LTP at many synapses throughout the CNS and at the neuromuscular junction. LTP has been reviewed both generally and with respect to brain regions specific for memory and learning.
Aims/Study Design: The principles of “generation of impulses from the presynaptic region to the postsynaptic region by very potent RNT (the 10,000-fold effect)” and “vasodilation of arteriolar perforators” form the basis of the authors’ hypothesis for treating cerebral palsy. Case-control prospective study. Materials and Methods: The study population included 82 cerebral palsy patients (10 control patients given 5% dextrose superfusion without NOD, and 72 patients in the NOD group). The mean time of superfusion was 5 months post-cerebral palsy. Pre- and post-NOD status was monitored by the Gross Motor Function Classification System for Cerebral Palsy (GMFCS), MRI, and TCD studies. Results: After 7 days in the NOD group, the mean change in the GMFCS score was an increase of 1.2 points; after 3 months, the mean increase was 3.4 points, compared to a control-group increase of 0.1 points at 3 months. MRI and TCD documented the improvements. Conclusions: NOD therapy (ITSNP boosts the recovery and oral tadalafil maintains it at the desired level) acts swiftly in the treatment of CP, acting within 7 days even at a mean of 5 months post-cerebral palsy, through one or more of the three mechanisms described.

Keywords: cerebral palsy, intrathecal sodium nitroprusside, oral tadalafil, perforators, vasodilation, retrograde neurotransmission, the 10,000-fold effect, long-term potentiation

Procedia PDF Downloads 341
103 A Novel Upregulated circ_0032746 on Sponging with MIR4270 Promotes the Proliferation and Migration of Esophageal Squamous Cell Carcinoma

Authors: Sachin Mulmi Shrestha, Xin Fang, Hui Ye, Lihua Ren, Qinghua Ji, Ruihua Shi

Abstract:

Background: Esophageal squamous cell carcinoma (ESCC) is a tumor arising from esophageal epithelial cells and is one of the major disease subtypes in Asian countries, including China. Esophageal cancer has the seventh-highest incidence among cancers according to the 2020 GLOBOCAN data. The pathogenesis of the cancer is still not well understood, as much of the molecular and genetic basis of esophageal carcinogenesis has yet to be clearly elucidated. Circular RNAs are RNA molecules formed by back-splicing, in which the 3′ and 5′ ends are covalently joined, rather than by canonical splicing; recent data suggest that circular RNAs can sponge miRNAs and are enriched with functional miRNA binding sites. Hence, we studied the mechanism of a circular RNA, its biological function, and its relationship with a microRNA in the carcinogenesis of ESCC. Methods: Four pairs of normal and esophageal cancer tissues were collected at Zhongda Hospital, affiliated to Southeast University, and high-throughput RNA sequencing was performed. The results revealed that circ_0032746 was upregulated, so we selected it for further study. The back-splice junction of the circRNA was validated by Sanger sequencing, and its stability was determined by an RNase R assay. The binding site between the circRNA and the microRNA was predicted using the CircInteractome, miRanda, and RNAhybrid databases. The circRNA was silenced first by siRNA and then by lentivirus. The regulatory axis of circ_0032746/miR4270 was validated by shRNA, mimic, and inhibitor transfection. In vitro experiments were then performed to assess the role of circ_0032746 in the proliferation (CCK-8 and colony formation assays), migration and invasion (Transwell assay), and apoptosis of ESCC. Results: The upregulation of circ_0032746 was validated in 9 pairs of tissues and 5 cell lines by qPCR, which showed high, statistically significant expression (P<0.005).
The upregulated circ_0032746 was silenced by shRNA, which produced a significant knockdown of expression in the KYSE30 and TE-1 cell lines compared to the control. A nuclear and cytoplasmic mRNA fractionation experiment demonstrated the cytoplasmic localization of circ_0032746. The sponging of miR4270 was validated by co-transfection of sh-circ_0032746 with the mimic or inhibitor: transfection with the mimic decreased the expression of circ_0032746, whereas the inhibitor reversed this effect. In vitro experiments showed that silencing circ_0032746 inhibited proliferation, migration, and invasion compared with the negative control group, and apoptosis was higher in the knockdown group than in the control group. Furthermore, 11 common microRNA target mRNAs were predicted by the TargetScan, miRTarBase, and miRanda databases, which may play a further role in the pathogenesis. Conclusion: Our results show that the novel circ_0032746 is upregulated in ESCC and contributes to its oncogenicity. Silencing circ_0032746 inhibits the proliferation and migration of ESCC while increasing the apoptosis of cancer cells. Hence, circ_0032746 acts as an oncogene in ESCC by sponging miR4270 and could be a potential biomarker for the diagnosis of ESCC in the future.

Keywords: circRNA, esophageal squamous cell carcinoma, microRNA, upregulated

Procedia PDF Downloads 82
102 Applications of Polyvagal Theory for Trauma in Clinical Practice: Auricular Acupuncture and Herbology

Authors: Aurora Sheehy, Caitlin Prince

Abstract:

Within current orthodox medical protocols, trauma and mental health issues are deemed to reside within the realm of cognitive or psychological therapy and are otherwise marginalised, in part due to the limited drug options available, which mostly manipulate neurotransmitters or sedate patients to reduce symptoms. By contrast, this research presents examples from clinical practice of how trauma can be assessed and treated physiologically. The Adverse Childhood Experiences (ACE) score is a tally of different types of abuse and neglect. It has been used as a measurable and reliable predictor of the likelihood of developing autoimmune disease, and it is a direct way to demonstrate reliably the health impact of traumatic life experiences. A second assessment tool is allostatic load, which refers to the cumulative effects that chronic stress has on mental and physical health. It records the decline of an individual’s physiological capacity to cope with their experience, using a specific grouping of serum tests and physical measures that includes an assessment of the neuroendocrine, cardiovascular, immune and metabolic systems. Allostatic load demonstrates the health impact that trauma has throughout the body. It forms part of an initial intake assessment in clinical practice and could also be used in research to evaluate treatment. Examining medicinal plants for their physiological, neurological and somatic effects through the lens of Polyvagal theory offers new opportunities for trauma treatments. In situations where Polyvagal theory recommends activities and exercises to enable parasympathetic activation, many herbs that affect Effector Memory T (TEM) cells also enact these responses. Traditional and Indigenous European herbs show the potential to support polyvagal tone through multiple mechanisms.
As the ventral vagal nerve reaches almost every major organ, plants that act on these tissues can be understood via their polyvagal actions: monoterpenes as agents to improve respiratory vagal tone, cyanogenic glycosides to reset polyvagal tone, volatile oils rich in phenyl methyl esters to improve both sympathetic and parasympathetic tone, and bitters to activate gut function and strongly promote parasympathetic regulation. Auricular acupuncture uses a somatotopic mapping of the auricular surface, overlaid with an image of an inverted foetus in which each body organ and system is featured. Given that the concha of the auricle is the only place on the body where vagus nerve fibres reach the surface of the skin, several investigators have evaluated non-invasive, transcutaneous electrical nerve stimulation (TENS) at auricular points. Drawn from an interdisciplinary evidence base and developed through clinical practice, these assessment and treatment tools are examples of practitioners in the field innovating out of necessity for the best outcomes for patients. This paper draws on case studies to direct future research.

Keywords: polyvagal, auricular acupuncture, trauma, herbs

Procedia PDF Downloads 54
101 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric modeling algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for the high-resolution images used for topographic mapping. Most of these satellites carry push-broom sensors: optical scanners equipped with linear arrays of CCDs, which have been deployed on most Earth observation satellites. In addition, the Lunar Reconnaissance Orbiter Camera (LROC) is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5-meter-scale panchromatic images over a 5 km swath of the Moon. HiRISE, carried by the Mars Reconnaissance Orbiter (MRO), and the HRSC, carried by Mars Express (MEX), are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we develop a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopy packages with the developed model. We start by defining an image reference coordinate system to unify the image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within a linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line.
The rotation angles for each CCD array at epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t), where t is the time at a certain epoch measured from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in different situations, and the unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the Planetary Data System (PDS). The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, and the model can be used in different environments with various sensors, although the implementation process is more cost- and effort-consuming.
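As a minimal illustration of the two ingredients described above, the collinearity condition plus polynomial interpolation of the exterior orientation in time, the following sketch projects a ground point into a scan line. The function names, the polynomial coefficients, and the toy numbers are invented for the example; a real model also needs the interior orientation and the per-array rotations:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    # sequential rotations about X, Y, Z (common photogrammetric convention)
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def eo_at(t, coeffs):
    # each exterior orientation parameter is a low-order polynomial in time t
    return {name: np.polyval(c, t) for name, c in coeffs.items()}

def project(ground, t, coeffs, f):
    # collinearity: image point, exposure center, and ground point on one line
    eo = eo_at(t, coeffs)
    R = rotation_matrix(eo["omega"], eo["phi"], eo["kappa"])
    d = R.T @ (np.asarray(ground, float) - np.array([eo["Xc"], eo["Yc"], eo["Zc"]]))
    return -f * d[0] / d[2], -f * d[1] / d[2]

# toy EO: camera fixed at 500 m altitude, level attitude (constant polynomials)
coeffs = {"Xc": [0.0], "Yc": [0.0], "Zc": [500.0],
          "omega": [0.0], "phi": [0.0], "kappa": [0.0]}
x, y = project([50.0, 0.0, 0.0], t=0.0, coeffs=coeffs, f=0.1)
```

In a bundle adjustment, the polynomial coefficients in `coeffs` would be the unknowns estimated from tie-point observations rather than fixed constants.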

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 39
100 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform that addresses this task by automatically extracting the features for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details, to break the symmetry of the produced information; in effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We develop this work further by introducing a dynamic soft-margin SoftMax. The conventional SoftMax converges to the gold labels too quickly, which leads the model to over-fitting, because it cannot determine adequately discriminative feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than a static shape of the input tensor in the SoftMax layer, together with a specified soft margin; the margin acts as a controller of how hard the model must work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different-class labels in the normalized log domain: we penalize predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false-prediction tensors, which means we assign more weight to classes that lie close to one another (namely, “hard labels to learn”).
By doing this, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on addressing the weak convergence of the Adam optimizer on non-convex problems. Our optimizer works through an alternative gradient-updating procedure with an exponentially weighted moving average function for faster convergence, and it exploits a weight decay method that drastically reduces the learning rate near optima to help reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013, a 16% improvement over the first rank after 10 years; 90.73% on RAF-DB; and a 100% k-fold average accuracy on the CK+ dataset. The network is thus shown to provide top performance relative to other networks that require much larger training datasets.
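The dynamic soft-margin SoftMax is not specified in detail in the abstract; a fixed additive-margin variant (in the spirit of AM-Softmax) can sketch the core idea, with the "dynamic" scheduling of the margin omitted. The function name, margin value, and toy logits below are invented for the illustration:

```python
import numpy as np

def soft_margin_softmax_ce(logits, labels, margin=0.35):
    """Cross-entropy with an additive margin subtracted from the true-class
    logit before the softmax: the model must separate classes by at least
    the margin, pushing dissimilar embeddings further apart."""
    z = np.asarray(logits, dtype=float).copy()
    rows = np.arange(len(labels))
    z[rows, labels] -= margin                  # handicap the gold-class logit
    z -= z.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()

logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
labels = np.array([0, 1])
loss_plain = soft_margin_softmax_ce(logits, labels, margin=0.0)
loss_margin = soft_margin_softmax_ce(logits, labels, margin=0.35)
```

Because the margin handicaps the true class, the margin loss is always at least as large as the plain cross-entropy on the same logits, which is what keeps the optimization from settling as soon as the gold labels are merely the argmax.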

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 52
99 Financial Policies in the Process of Global Crisis: Case Study Kosovo

Authors: Shpetim Rezniqi

Abstract:

The current crisis has swept the world, affecting most severely the developed countries, those that account for most of the world's gross product and enjoy a high standard of living. Even non-experts can describe the visible consequences of the crisis, but how far it will go is impossible to predict. Even the greatest experts offer only conjecture, with wide divergence among them, yet they agree on one thing: the devastating effects of this crisis will be more severe than ever before and cannot be predicted. For a long time, the world was dominated by the economic theory of free market laws, with the belief that the market is the regulator of all economic problems: like river water, the market will flow to find the best path and the necessary solution. Hence, fewer state barriers to the market, less state intervention, and the market itself as an economic self-regulator. The free market economy became the model of global economic development and progress; it transcended national barriers and became the law of development of the entire world economy. Globalization and global market freedom were the principles of development and international cooperation. International organizations such as the World Bank, along with economically powerful states, laid down the free market economy and the elimination of state intervention as the principles of development and cooperation. The less state intervention, the more freedom of action: this was the leading international principle. We now live in an era of financial tragedy. Financial markets, and banking in particular, are in a dire state: US stock markets fell about 40%, making this one of the darkest moments since 1920.
Its only rivals are the Wall Street crash of 1929, the technological collapse of 2000, the crisis of 1973 after the Yom Kippur War, when the price of oil quadrupled, and the famous collapse of 1937/38, when Europe was entering World War II. In 2000, even though it seemed the end of the world was around the corner, the world economy survived almost intact; of course, there were small recessions in the United States, Europe, and Japan. The situation was much more difficult in the crises of the 1930s and 1970s, yet the world pulled through. The recent financial crisis, however, shows every sign of being much sharper and of carrying more consequences. The decline in stock prices is more a byproduct of what is really happening. Financial markets began their dance of death with the credit crisis, which came as a result of the large increase in real estate prices and household debt. These last two phenomena match very well the gains of the 1920s, a period during which people spent as if there were no tomorrow. The word recession is now on everyone's lips, and that fact is no longer sudden or abrupt. But the more the financial markets melt, the greater the risk of a problematic economy for years to come. Thus, for example, the banking crisis in Japan proved to be much more severe than initially expected, partly because the assets on which many loans were based, especially land, kept falling in value; land prices in Japan have continued to fall for about 15 years (Adri Nurellari, published in the newspaper "Classifieds"). At this moment, it is still difficult to assess to what extent the crisis has affected the economy and what its consequences will be. What we do know is that many banks will restrict the granting of credit for some time to come; since lending is a bank's primary function, this means huge losses.

Keywords: globalisation, finance, crisis, recommendation, bank, credits

Procedia PDF Downloads 358
98 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the base of the cloud services industry, and keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has received few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks the parameters describing the transition between models; our model uses queuing theory parameters to relate these transitions. It associates the MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests an instance can handle per unit of time, the number of incoming requests at a given instant, and a function describing the acceleration of the service's ability to handle more requests. The model is later used as a solution for horizontally auto-scaling time-sensitive services composed of microservices, reevaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests in order to keep response time constrained; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The exposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds.
Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests if they cannot finish in time, preventing resource saturation. When the load decreases, instances with lower load are kept in a backlog where no further requests are assigned to them. If the load grows and an instance in the backlog is required, it returns to the running state; if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns suffice to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption, and business costs. The first is a burst-load scenario: all methodologies will discard requests if the burst is rapid enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add different numbers of instances can handle the load with lower business cost. The exposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
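The abstract does not give the model's equations, but the basic reactive step it builds on — sizing a fleet from the observed arrival rate, per-instance capacity, and a cooldown period — can be sketched as follows. All parameter names and values are invented for the illustration; the paper's full model also folds in the MAPE-K loop times, sampling frequency, and a bound on request-growth acceleration, omitted here:

```python
import math

def desired_instances(req_rate, per_instance_rate, target_util=0.7,
                      current=1, last_scale_t=0.0, now=0.0, cooldown=60.0):
    """One reactive scaling decision: size the fleet so that steady-state
    utilization stays below target_util, deferring scale-ups that fall
    inside the cooldown window (scale-downs are applied immediately)."""
    needed = math.ceil(req_rate / (per_instance_rate * target_util))
    if now - last_scale_t < cooldown and needed > current:
        return current              # still cooling down; defer the scale-up
    return max(1, needed)

# 100 req/s, 10 req/s per instance, 70% target utilization
a = desired_instances(100, 10, current=5, last_scale_t=0.0, now=120.0)  # scale up
b = desired_instances(100, 10, current=5, last_scale_t=0.0, now=30.0)   # cooldown
c = desired_instances(10, 10, current=5, last_scale_t=0.0, now=30.0)    # scale down
```

Keeping utilization strictly below 1 is what bounds the queueing delay; the cooldown trades reaction speed against oscillation, which is exactly the tension the burst and drop-then-burst scenarios above are designed to expose.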

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 68
97 Rheological Properties of Thermoresponsive Poly(N-Vinylcaprolactam)-g-Collagen Hydrogel

Authors: Serap Durkut, A. Eser Elcin, Y. Murat Elcin

Abstract:

Stimuli-sensitive polymeric hydrogels have received extensive attention in the biomedical field due to their sensitivity to physical and chemical stimuli (temperature, pH, ionic strength, light, etc.). This study describes the rheological properties of a novel thermoresponsive poly(N-vinylcaprolactam)-g-collagen hydrogel. We first synthesized, via facile free radical polymerization, a novel carboxyl-group-terminated thermoresponsive poly(N-vinylcaprolactam) (PNVCL-COOH). This compound was then effectively grafted with native collagen, forming PNVCL-g-Col through covalent bonds between the carboxylic acid groups at the chain ends and the amine groups of collagen, using the cross-linking agents EDC/NHS. The newly formed hybrid hydrogel displayed novel properties, such as increased mechanical strength and thermoresponsive characteristics. PNVCL-g-Col showed a lower critical solution temperature (LCST) at 38ºC, which is very close to body temperature. Rheological studies determine the structural-mechanical properties of materials and serve as a valuable characterization tool. The rheological properties of hydrogels are described in terms of two dynamic mechanical quantities: the elastic modulus G′ (also known as dynamic rigidity), representing the reversibly stored energy of the system, and the viscous modulus G″, representing the irreversible energy loss. To characterize PNVCL-g-Col, the rheological properties were measured as a function of temperature and time during the phase transition. Below the LCST, favorable interactions allowed the dissolution of the polymer in water via hydrogen bonding. At temperatures above the LCST, the PNVCL molecules within PNVCL-g-Col aggregated due to dehydration, causing the hydrogel structure to become dense. When the temperature reached ~36ºC, the G′ and G″ values crossed over, indicating that PNVCL-g-Col underwent a sol-gel transition and formed an elastic network.
Following the temperature plateau at 38ºC, near human body temperature, the sample displayed stable elastic network characteristics. The G′ and G″ values of the PNVCL-g-Col solutions increased sharply in the 6-9 minute interval, due to the rapid transformation into a gel-like state and the formation of elastic networks. Copolymerization with collagen leads to an increase in G′, as the collagen structure contains flexible polymer chains, which bestow elastic properties. The elasticity of the proposed structure correlates with the number of intermolecular cross-links in the hydrogel network, increasing viscosity. By contrast, at 8 minutes, the G′ and G″ values of pure collagen solutions decreased sharply due to the decomposition of the elastic and viscous network. Complex viscosity is related to the mechanical performance of the hydrogel and its resistance to deformation. The complex viscosity of the PNVCL-g-Col hydrogel changed drastically with temperature, and the mechanical performance of the PNVCL-g-Col network increased, exhibiting less deformation. Rheological assessment of the novel thermoresponsive PNVCL-g-Col hydrogel showed that the network has stronger mechanical properties due to both permanent, stable covalent bonds and temperature-dependent physical interactions, such as hydrogen bonds and hydrophobic interactions.
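The sol-gel transition described above is read off a temperature sweep as the G′/G″ crossover. A small sketch of that reduction step, on synthetic toy moduli (the function name, data shapes, and the exponential toy curve are invented; real rheometer data would replace them):

```python
import numpy as np

def gel_point(T, G_storage, G_loss):
    """Estimate the sol-gel transition temperature as the first G'/G''
    crossover: where G' rises above G'', with linear interpolation
    between the two bracketing samples."""
    diff = np.asarray(G_storage, float) - np.asarray(G_loss, float)
    idx = np.where((diff[:-1] < 0) & (diff[1:] >= 0))[0]
    if idx.size == 0:
        return None                      # no crossover in the sweep
    i = idx[0]
    frac = -diff[i] / (diff[i + 1] - diff[i])
    return T[i] + frac * (T[i + 1] - T[i])

# synthetic sweep: G' overtakes G'' near 36 C, as reported for PNVCL-g-Col
T = np.linspace(30.0, 40.0, 101)
Gp = 10 ** ((T - 36.0) * 0.4)   # toy storage modulus rising through the transition
Gpp = np.ones_like(T)           # toy loss modulus, roughly flat
Tc = gel_point(T, Gp, Gpp)
```

The same helper applied to a time sweep at fixed temperature would locate the 6-9 minute gelation window reported above.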

Keywords: poly(N-vinylcaprolactam)-g-collagen, thermoresponsive polymer, rheology, elastic modulus, stimuli-sensitive

Procedia PDF Downloads 219
96 Snake Locomotion: From Sinusoidal Curves and Periodic Spiral Formations to the Design of a Polymorphic Surface

Authors: Ennios Eros Giogos, Nefeli Katsarou, Giota Mantziorou, Elena Panou, Nikolaos Kourniatis, Socratis Giannoudis

Abstract:

In the context of the postgraduate course Productive Design, Department of Interior Architecture of the University of West Attica in Athens, under the guidance of Professors Nikolaos Kourniatis and Socratis Giannoudis, kinetic mechanisms with parametric models were examined for their further application in the design of objects. In the first phase, the students studied a motion mechanism chosen from daily experience and analyzed its geometric structure in relation to the geometric transformations involved. In the second phase, they designed it as a parametric model in the Grasshopper3D algorithmic processor for Rhino and planned its application in an everyday object. For the project presented, our team began by studying the movement of living beings, specifically the snake. By studying the snake and the role the environment plays in its movement, four basic typologies were recognized: serpentine, concertina, sidewinding and rectilinear locomotion, as well as its ability to perform spiral formations. Most typologies are characterized by ripples, a series of sinusoidal curves. For the application of the snake movement to a polymorphic space divider, the use of a coil-type joint was studied. In Grasshopper, the desired motion of the polymorphic surface was simulated by applying a coil to a sinusoidal curve and to a spiral curve. Throughout the process it was important that the points corresponding to the nodes of the real object remain constant in number, as do the distances between them, and that the elasticity of the construction be achieved through the modular movement of the coil rather than through an elastic element (material) at the nodes. Using a mesh (a repeating coil), the whole construction becomes a supporting body and combines functionality with aesthetics. The set of elements functions as a vertical spatial network in which each element contributes to its coherence and stability.
Depending on the positions of the elements relative to the plane of support, different perspectives are created in the visual perception of the adjacent space. For the implementation of the model at scale (1:3), (0.50 m x 2.00 m), the load-bearing structure uses aluminum rods of Φ6 mm for the basic pillars and Φ2.50 mm for the secondary columns. The filling elements and nodes are of similar material and were made of MDF surfaces. During the design process, four trapezoidal patterns were picked to function as filling elements, and to support their assembly a different engraved facet was made on each. The nodes have holes through which the rods pass, while their connection point with the patterns has a half-carved recess; the patterns have a corresponding recess. The nodes are of two different types, depending on the column that passes through them. The patterns and nodes were designed to be cut and engraved with a laser cutter, and the patterns were attached to the nodes using glue. The parameters participate in the design as mechanisms that generate complex forms and structures through the repetition of constantly changing versions of the parts that compose the object.
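The coil-on-sinusoid operation at the heart of the Grasshopper model can be illustrated outside that environment. The following sketch generates a point cloud for a helical coil wound along a sinusoidal guide curve, keeping a fixed number of node points at uniform parameter spacing, as the project requires; all dimensions and turn counts are assumed for illustration, and the winding uses a simplified fixed frame rather than a true Frenet frame along the guide:

```python
import numpy as np

# Assumed parameters, for illustration only (not taken from the project)
n_nodes = 200       # fixed number of node points along the coil
guide_amp = 0.5     # amplitude of the sinusoidal guide curve (m)
guide_waves = 3     # sine waves over the full height
coil_radius = 0.05  # radius of the coil around the guide (m)
coil_turns = 40     # turns of the coil over the full length
height = 2.0        # overall length of the divider (m)

# Uniform parameter: constant node count and constant parametric spacing
t = np.linspace(0.0, 1.0, n_nodes)

# Sinusoidal guide curve in the x-z plane
guide = np.column_stack([
    guide_amp * np.sin(2 * np.pi * guide_waves * t),
    np.zeros(n_nodes),
    height * t,
])

# Wind the coil around the guide (simplified fixed x-y frame)
phase = 2 * np.pi * coil_turns * t
coil = guide + np.column_stack([
    coil_radius * np.cos(phase),
    coil_radius * np.sin(phase),
    np.zeros(n_nodes),
])

print(coil.shape)  # one 3D point per node of the coil
```

Replacing the sinusoidal guide with a spiral (e.g. x = r·cos(θ), y = r·sin(θ) with growing r) reproduces the project's second test case, the coil on a spiral curve.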

Keywords: polymorphic, locomotion, sinusoidal curves, parametric

Procedia PDF Downloads 73
95 Providing Leadership in Nigerian University Education Research Enterprise: The Imperative of Research Ethics

Authors: O. O. Oku, K. S. Jerry-Alagbaoso

Abstract:

It is universally acknowledged that the primary function of universities is the generation and dissemination of knowledge. This mission is pursued through the research component of the university programme, especially at the postgraduate level. The senior academic staff teach, supervise and provide general academic leadership to postgraduate students, who are expected to carry out research leading to the presentation of a dissertation as a requirement for the award of a doctoral degree in their various disciplines. Carrying out the research enterprise involves a great deal of collaboration among individuals and communities. The need to safeguard the interest of everyone involved in the enterprise makes the development of ethical standards in research imperative. Ensuring the development and effective application of such ethical standards falls within the leadership role of vice-chancellors, deans of postgraduate schools/faculties, heads of departments and supervisors. It is the relevance and application of such ethical standards in Nigerian university research efforts that this study discussed. The study adopted the descriptive research design. A researcher-made 4-point rating scale was used to elicit information from postgraduate dissertation supervisors sampled from one university in each of the six geo-political zones in Nigeria, using the purposive sampling technique. The data collected were analysed using the mean score and standard deviation. The findings of the study include, among others, that there are several cases of unethical practice by Ph.D dissertation students in Nigerian universities. Prominent among these are duplicating research topics, making unauthorized copies of data, papers or computer programmes, failing to acknowledge the contributions of relevant people and authors, and rigging an experiment to pre-empt the result.
According to the respondents, the causes of these unethical practices include inadequate funding of universities resulting in inadequate remuneration for university teachers, inadequate equipment and infrastructure, poor supervision of Ph.D students, poverty on the part of the student researchers and the non-application of sanctions on violators. Improved funding of the Nigerian university system with emphasis on both staff and student research efforts, admitting academically oriented students into the Ph.D programme and ensuring the application of appropriate sanctions in cases of unethical conduct in research featured prominently among the needed leadership imperatives. Based on the findings of the study, the researchers recommend the development of university research policies that are closely tied to each university's strategic plan. Such a plan should explain the research focus that will attract more funding and direct students' interest towards it without violating the principle of academic freedom. The plan should also incorporate the establishment of a research administration office to provide the necessary link between students and funding agencies and to organise training for supervisors on the leadership activities expected of them, while educating students on the processes involved in carrying out a qualitative and acceptable research study. Such an exercise should cover the ethical principles and guidelines that apply to all parts of research, from the research topic through the literature review to the design and the truthful reporting of results.
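The descriptive analysis reported here, mean score and standard deviation on a 4-point rating scale, can be sketched in a few lines. The responses below are hypothetical, and the 2.50 cut-off for treating an item mean as agreement is a common convention on 4-point scales that the abstract does not itself state:

```python
import statistics

# Hypothetical supervisor responses to one questionnaire item,
# on a 4-point scale (4 = strongly agree ... 1 = strongly disagree)
responses = [4, 3, 3, 4, 2, 3, 4, 3, 3, 2]

mean_score = statistics.mean(responses)
std_dev = statistics.stdev(responses)  # sample standard deviation

# Assumed decision rule: a mean of 2.50 or above counts as agreement
verdict = "agree" if mean_score >= 2.50 else "disagree"
print(f"mean={mean_score:.2f}, sd={std_dev:.2f}, verdict={verdict}")
```

Each item of the instrument would be summarized this way, with the standard deviation indicating how much the supervisors' ratings varied around the item mean.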

Keywords: academic leadership, ethical standards, research stakeholders, research enterprise

Procedia PDF Downloads 214