Search results for: two-parameter criterion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 660

30 Neuroanatomical Specificity in Reporting & Diagnosing Neurolinguistic Disorders: A Functional & Ethical Primer

Authors: Ruairi J. McMillan

Abstract:

Introduction: This critical analysis aims to ascertain how well neuroanatomical aetiologies are communicated within 20 case reports of aphasia. Neuroanatomical visualisations based on dissected brain specimens were produced and combined with white matter tract and vascular taxonomies of function in order to address the most consistently underreported features found within the aphasic case study reports. Together, these approaches are intended to integrate aphasiological knowledge from the past 20 years with aphasiological diagnostics, and to act as prototypal resources for both researchers and clinical professionals. The medico-legal precedent for aphasia diagnostics under Canadian, US and UK case law, and the neuroimaging/neurological diagnostics relative to the functional capacity of aphasic patients, are discussed in relation to the major findings of the literature analysis, neuroimaging protocols in clinical use today, and the neuroanatomical aetiologies of different aphasias. Basic Methodology: Literature searches of relevant scientific databases (e.g., OVID Medline) were carried out using search terms such as aphasia case study (year) and stroke induced aphasia case study. A series of 7 diagnostic reporting criteria were formulated, and the resulting case studies were scored out of 7 alongside clinical stroke criteria. In order to focus on the diagnostic assessment of the patient's condition, only the case report proper (not the discussion) was used to quantify results. Statistical testing established whether specific reporting criteria were associated with higher overall scores and potentially inferable increases in quality of reporting. Whether criterion scores were associated with an unclear/adjusted diagnosis was also tested, as was the probability of a given criterion deviating from an expected estimate. Major Findings: The quantitative analysis of neuroanatomically driven diagnostics in case studies of aphasia revealed particularly low scores in the connection of neuroanatomical functions to aphasiological assessment (10%), and in the inclusion of white matter tracts within neuroimaging or assessment diagnostics (30%). Case studies which included clinical mention of white matter tracts within the report itself were distributed among higher scoring cases, as were case studies which (as clinically indicated) related the affected vascular region to the brain parenchyma of the language network. Concluding Statement: These findings indicate that certain neuroanatomical functions are integrated less often within the patient report than others, despite a precedent for well-integrated neuroanatomical aphasiology also being found among the case studies sampled, and despite these functions being clinically essential in diagnostic neuroimaging and aphasiological assessment. Ultimately, the integration and specificity of aetiological neuroanatomy may contribute positively to the capacity and autonomy of aphasic patients as well as their clinicians. The integration of a full aetiological neuroanatomy within the reporting of aphasias may improve patient outcomes and sustain autonomy in the event of medico-ethical investigation.
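
As a hedged illustration of the scoring and deviation testing described above (a minimal sketch, not the authors' code: the criterion scores and the 50% expected rate are made up), testing whether a single reporting criterion deviates from an expected estimate might look like:

```python
# Hypothetical sketch: scoring case reports against 7 reporting criteria and
# testing whether a given criterion deviates from an expected reporting rate.
from scipy.stats import binomtest

# Each row: one case report; each column: one of the 7 criteria (1 = reported).
scores = [
    [1, 1, 0, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0, 1],
    [0, 1, 0, 1, 1, 1, 0],
]

totals = [sum(report) for report in scores]          # score out of 7 per report
print("Scores /7:", totals)

# Test whether criterion 3 (e.g., white matter tract inclusion) deviates from
# an assumed expected reporting rate of 50%.
k = sum(report[2] for report in scores)              # reports meeting criterion 3
n = len(scores)
result = binomtest(k, n, p=0.5)
print(f"Criterion 3 reported in {k}/{n} cases, p = {result.pvalue:.3f}")
```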

Keywords: aphasia, language network, functional neuroanatomy, aphasiological diagnostics, medico-legal ethics

Procedia PDF Downloads 67
29 Inferring Influenza Epidemics in the Presence of Stratified Immunity

Authors: Hsiang-Yu Yuan, Marc Baguelin, Kin O. Kwok, Nimalan Arinaminpathy, Edwin Leeuwen, Steven Riley

Abstract:

Traditional syndromic surveillance for influenza has substantial public health value in characterizing epidemics. Because the relationship between syndromic incidence and the true infection events can vary from one population to another and from one year to another, recent studies combine serological test results with syndromic data from traditional surveillance in epidemic models to make inference on epidemiological processes of influenza. However, despite the widespread availability of serological data, epidemic models have thus far not explicitly represented antibody titre levels and their correspondence with immunity. Most studies use dichotomized data with a threshold (typically a titre of 1:40) to define individuals as likely recently infected and likely immune, and further estimate the cumulative incidence. Such dichotomization can result in underestimation of the influenza attack rate. In order to improve the use of serosurveillance data, a refinement of the concept of stratified immunity within an epidemic model for influenza transmission is proposed here, such that all individual antibody titre levels are enumerated explicitly and mapped onto a variable scale of susceptibility in different age groups. Haemagglutination inhibition titres were collected from 523 individuals and 465 individuals during the pre- and post-pandemic phases, respectively, of the 2009 pandemic in Hong Kong. The model was fitted to serological data in an age-structured population using a Bayesian framework and was able to reproduce key features of the epidemics. The effects of age-specific antibody boosting and protection were explored in greater detail. RB was defined as the effective reproductive number in the presence of stratified immunity, and its temporal dynamics were compared to those of the traditional epidemic model using dichotomized seropositivity data. The Deviance Information Criterion (DIC) was used to measure the fit of the model to serological data under different mechanisms of the serological response. The results demonstrated that a differential antibody response with age was present (ΔDIC = -7.0). Age-specific mixing patterns with child-specific transmissibility, rather than pre-existing immunity, were most likely to explain the high serological attack rates in children and low serological attack rates in the elderly (ΔDIC = -38.5). Our results suggest that the disease dynamics and herd immunity of a population can be described more accurately for influenza when the distribution of immunity is explicitly represented, rather than relying only on the dichotomous states 'susceptible' and 'immune' defined by the threshold titre (1:40) (ΔDIC = -11.5). During the outbreak, RB declined slowly from 1.22 [1.16-1.28] over the first four months after 1 May, and dropped rapidly below 1 during September and October, consistent with the observed epidemic peak in late September. One of the most important challenges for infectious disease control is to monitor disease transmissibility in real time with statistics such as the effective reproduction number. Once early estimates of antibody boosting and protection are obtained, disease dynamics can be reconstructed, which is valuable for infectious disease prevention and control.
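
A minimal sketch of the DIC computation used for the model comparisons above, assuming the standard definition DIC = D_bar + pD with pD = D_bar - D(theta_bar); the likelihood and posterior draws below are placeholders, not the study's transmission model:

```python
# Minimal sketch of the Deviance Information Criterion (DIC): the posterior
# mean deviance plus the effective number of parameters pD.
import numpy as np

def log_likelihood(theta, data):
    # Placeholder likelihood: data ~ Normal(theta, 1). The real model would be
    # the stratified-immunity transmission model fitted to titre data.
    return -0.5 * np.sum((data - theta) ** 2 + np.log(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=100)
posterior_draws = rng.normal(data.mean(), 0.1, size=2000)  # stand-in for MCMC output

deviances = np.array([-2 * log_likelihood(t, data) for t in posterior_draws])
d_bar = deviances.mean()                                   # posterior mean deviance
d_hat = -2 * log_likelihood(posterior_draws.mean(), data)  # deviance at posterior mean
p_d = d_bar - d_hat                                        # effective number of parameters
dic = d_bar + p_d
print(f"DIC = {dic:.1f}, pD = {p_d:.1f}")
# Lower DIC indicates better fit; the abstract reports differences (ΔDIC)
# between competing mechanisms of the serological response.
```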

Keywords: effective reproductive number, epidemic model, influenza epidemic dynamics, stratified immunity

Procedia PDF Downloads 260
28 Testing Two-Actor Contextual Interaction Theory in a Multi-Actor Context: Case of COVID-19 Disease Prevention and Control Policy

Authors: Muhammad Fayyaz Nazir, Ellen Wayenberg, Shahzadaah Faahed Qureshi

Abstract:

Introduction: The study is based on the Contextual Interaction Theory (CIT) constructs to explore the role of policy actors in implementing the COVID-19 Disease Prevention and Control (DP&C) Policy. The study analyzes the role of healthcare workers' contextual factors, such as cognition, motives, and resources, and their interactions in implementing Social Distancing (SD). In this way, we test a two-actor policy implementation theory, i.e., the CIT, in a three-actor context. Methods: Data were collected through document analysis and semi-structured interviews. For a qualitative study design, interviews were conducted with questions on cognition, motives, and resources from the healthcare workers involved in implementing SD in the local context in Multan, Pakistan. The possible interactions resulting from the contextual factors of the policy actors (healthcare workers) were identified through a framework analysis protocol guided by CIT and supported by trustworthiness criteria and data saturation. Results: This inquiry resulted in theory application, addition, and enrichment. The theoretical application in the three-actor context illustrates the different levels of motives, cognition, and resources of healthcare workers: senior administrators, managers, and healthcare professionals. The senior administrators working in the National Command and Operations Center (NCOC), Provincial Technical Committees (PTCs), and District COVID Teams (DCTs) were playing their role with high motivation. They were fully informed about the policy and moderately resourceful. The policy implementors, healthcare managers working on implementing SD within their respective hospitals, were playing their role with high motivation and were fully informed about the policy. However, they lacked the required resources to implement SD. The target medical and allied healthcare professionals were moderately motivated but lacked resources and information. The interaction resulted in cooperation and the need for learning to manage future healthcare crises. However, the lack of resources created opposition to the implementation of SD. Objectives of the Study: The study aimed to apply a two-actor theory in a multi-actor context. We take this as an opportunity to qualitatively test the theory in the novel situation of the COVID-19 pandemic and make way for its quantitative application by designing a survey instrument, so that implementation researchers can apply CIT through multivariate analyses or higher-order statistical modeling. Conclusion: Applying a two-actor implementation theory to explore a complex case of healthcare intervention in a three-actor context is, to the best of our knowledge, a unique undertaking. The work will thus contribute to policy implementation studies by applying, extending, and enriching an implementation theory in the novel case of the COVID-19 pandemic, ultimately filling a gap in the implementation literature. Policy institutions and other low- or middle-income countries can learn from this research and improve SD implementation by working on the variables with weak significance levels.

Keywords: COVID-19, disease prevention and control policy, implementation, policy actors, social distancing

Procedia PDF Downloads 58
27 A Convolutional Neural Network PM10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma, and acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour after hour. The evaluation of the learning process for the investigated models was mostly based upon the mean square error criterion; however, during model validation, a number of other methods of quantitative evaluation were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 hours, served as input data. Due to the specificity of CNN-type networks, this data is transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for each hour of the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected, and a comparison was then made with other models based on linear regression. The numerical tests carried out, using real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. Moreover, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
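
An illustrative sketch of the input-to-output shape described above (not the authors' architecture: the layer sizes, window of 24 hours, and 6 input channels are assumptions), showing a 1-D CNN trained under the mean square error criterion with a 24-element output vector:

```python
# Hedged sketch: a 1-D CNN mapping the last 24 hours of sensor inputs (PM2.5,
# PM10, temperature, wind, plus external forecasts) to 24 hourly PM10 values.
import numpy as np
import tensorflow as tf

n_hours, n_features = 24, 6                     # assumed input window and channels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_hours, n_features)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu", padding="same"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(24),                  # PM10 forecast, hour after hour
])
model.compile(optimizer="adam", loss="mse")     # mean square error criterion

# Dummy tensors standing in for Airly sensor data.
X = np.random.rand(256, n_hours, n_features)
y = np.random.rand(256, 24)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```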

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolutional neural networks

Procedia PDF Downloads 149
26 Characteristics-Based LQ-Control of a Cracking Reactor by Integral Reinforcement

Authors: Jana Abu Ahmada, Zaineb Mohamed, Ilyasse Aksikas

Abstract:

A linear quadratic control design for systems of first-order hyperbolic partial differential equations (PDEs) is presented. The aim of this research is to control chemical reactions. This is achieved by converting the PDE system to ordinary differential equations (ODEs) using the method of characteristics, and then controlling the reduced system using integral reinforcement learning. The designed controller is applied to a catalytic cracking reactor. Background: Transport-reaction systems cover a large class of chemical and biochemical processes. They are best described by nonlinear PDEs derived from mass and energy balances. The main application considered in this work is the catalytic cracking reactor. Indeed, the cracking reactor is widely used to convert high-boiling, high-molecular-weight hydrocarbon fractions of petroleum crude oils into more valuable gasoline, olefinic gases, and other products. On the other hand, control of PDE systems is an important and rich area of research. One of the main control techniques is feedback control. This type of control utilizes information coming from the system to correct its trajectories and drive it to a desired state. Moreover, feedback control rejects disturbances and reduces the effects of variation in the plant parameters. Linear-quadratic control is a feedback control, since the developed optimal input is expressed as feedback on the system state to exponentially stabilize and drive a linear plant to the steady state while minimizing a cost criterion. The integral reinforcement learning policy iteration technique is a strong method that solves the linear quadratic regulator problem for continuous-time systems online in real time, using only partial information about the system dynamics (i.e., the drift dynamics A of the system need not be known) and without requiring measurements of the state derivative. This is, in effect, a direct (i.e., no system identification procedure is employed) adaptive control scheme for partially unknown linear systems that converges to the optimal control solution. Contribution: The goal of this research is to develop a characteristics-based optimal controller for a class of hyperbolic PDEs and apply the developed controller to a catalytic cracking reactor model. In the first part, an algorithm to control a class of hyperbolic PDE systems is developed. The method of characteristics is employed to convert the PDE system into a system of ODEs; the control problem is then solved along the characteristic curves, and the reinforcement technique is implemented to find the state-feedback matrix. In the second part, the developed algorithm is applied to the important application of a catalytic cracking reactor. The main objective is to use the inlet fraction of gas oil as a manipulated variable to drive the process state towards desired trajectories. The outcome of this challenging research would yield the potential to provide a significant technological innovation for the gas industries, since the catalytic cracking reactor is one of the most important conversion processes in petroleum refineries.
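
A minimal sketch of the policy-iteration structure that integral reinforcement learning implements for the LQ problem, on a generic second-order example rather than the cracking reactor model. Offline, each policy-evaluation step solves a Lyapunov equation for the closed loop; IRL replaces that model-based step with a least-squares fit to measured trajectory data, which is why the drift matrix A need not be known:

```python
# Kleinman-style policy iteration converging to the Riccati solution; IRL
# performs the same iteration online from data. All matrices are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # example ODE system (from characteristics)
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

K = np.zeros((1, 2))                        # initial stabilizing policy (A is stable)
for _ in range(10):
    Acl = A - B @ K
    # Policy evaluation: (A - BK)^T P + P (A - BK) = -(Q + K^T R K)
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # Policy improvement: K = R^{-1} B^T P
    K = np.linalg.solve(R, B.T @ P)

P_are = solve_continuous_are(A, B, Q, R)    # Riccati solution for comparison
print("max|P - P_are| =", np.abs(P - P_are).max())
```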

Keywords: PDEs, reinforcement iteration, method of characteristics, Riccati equation, cracking reactor

Procedia PDF Downloads 91
25 The Influence of Perinatal Anxiety and Depression on Breastfeeding Behaviours: A Qualitative Systematic Review

Authors: Khulud Alhussain, Anna Gavine, Stephen Macgillivray, Sushila Chowdhry

Abstract:

Background: Estimates show that by the year 2030, mental illness will account for more than half of the global economic burden, second to non-communicable diseases. Often, the perinatal period is characterised by psychological ambivalence and a mixed anxiety-depressive condition. Maternal mental disorder is associated with perinatal anxiety and depression and affects breastfeeding behaviours. Studies also indicate that maternal mental health can considerably influence a baby's health in numerous aspects and impact newborn health through lack of adequate breastfeeding. However, studies reporting factors associated with breastfeeding behaviours are predominantly quantitative. Therefore, it is not clear what literature is available to understand the factors affecting breastfeeding and perinatal women's perspectives and experiences. Aim: This review aimed to explore the perceptions and experiences of women with perinatal anxiety and depression, as well as how these experiences influence their breastfeeding behaviours. Methods: A systematic literature review of qualitative studies was conducted in line with the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) statement. Four electronic databases (CINAHL, PsycINFO, Embase, and Google Scholar) were explored for relevant studies using a search strategy. The search was restricted to studies published in the English language between 2000 and 2022. Findings from the literature were screened using a pre-defined screening criterion, and the quality of eligible studies was appraised using the Walsh and Downe (2006) checklist. Findings were extracted and synthesised following Braun and Clarke. The review protocol was registered on PROSPERO (Ref: CRD42022319609). Results: A total of 4947 studies were identified from the four databases. Following duplicate removal and screening, 16 studies met the inclusion criteria. The studies included 87 pregnant and 302 post-partum women from 12 countries. The participants were from a variety of economic, regional, and religious backgrounds, mainly aged 18 to 45 years. Three main themes were identified: barriers to breastfeeding, breastfeeding facilitators, and emotional disturbance and breastfeeding. Seven subthemes emerged from the data: expectation versus reality, uncertainty about maternal competencies, body image and breastfeeding, lack of sufficient breastfeeding support, how family and caregivers' support influences positive breastfeeding practices, breastfeeding education, and causes of mental strain among breastfeeding women. Breastfeeding duration is affected in women with mental health disorders, irrespective of their desire to breastfeed. Conclusion: There is significant empirical evidence that breastfeeding behaviour and perinatal mental disturbance are linked. However, there is a lack of evidence to apply the findings to Saudi women, due to the lack of empirical qualitative information. To improve the psychological well-being of mothers, it is crucial to explore and recognise any concerns with their mental, physical, and emotional well-being. Therefore, robust research is needed so that breastfeeding intervention researchers and policymakers can focus on specifically what needs to be done to help mentally distressed perinatal women and their newborns.

Keywords: pregnancy, perinatal period, anxiety, depression, emotional disturbance, breastfeeding

Procedia PDF Downloads 98
24 Negative Perceptions of Ageing Predict Greater Dysfunctional Sleep-Related Cognition Among Adults Aged 60+

Authors: Serena Salvi

Abstract:

Ageist stereotypes and practices have become a normal and therefore pervasive phenomenon in various aspects of everyday life. Over the past years, renewed awareness of self-directed age stereotyping in older adults has given rise to a line of research focused on the potential role of attitudes towards ageing in seniors' health and functioning. This set of studies has shown how a negative internalisation of ageist stereotypes discourages older adults from seeking medical advice, in addition to being associated with negative subjective health evaluation. An important dimension of mental health that is often affected in older adults is sleep quality. Self-reported sleep quality among older adults has often been shown to be unreliable when compared to objective sleep measures. Investigations focused on self-reported sleep quality among older adults have suggested that this portion of the population tends to accept disrupted sleep if it is believed to be up to standard for their age. On the other hand, unrealistic expectations and dysfunctional beliefs about sleep in ageing might prompt older adults to report sleep disruption even in the absence of objectively disrupted sleep. The objective of this study is to examine an association between personal attitudes towards ageing in adults aged 60+ and dysfunctional sleep-related cognition. More specifically, this study aims to investigate a potential association between personal attitudes towards ageing, sleep locus of control, and dysfunctional beliefs about sleep among this portion of the population. Data were statistically analysed using SPSS. Participants were recruited through the online participant recruitment system Prolific. Attention check questions were included throughout the questionnaire, and consistency of responses was examined. Prior to the commencement of this study, ethical approval was granted (ref. 39396). Descriptive statistics were used to determine the frequencies, means, and SDs of the variables. Pearson's coefficient was used for interval variables, independent t-tests for comparing means between two independent groups, analysis of variance (ANOVA) for comparing means across several independent groups, and hierarchical linear regression models for predicting criterion variables from predictor variables. In this study, self-perceptions of ageing were assessed using the APQ-B subscales, while dysfunctional sleep-related cognition was operationalised using the SLOC and DBAS-16 scales. Of the subscales in the brief version of the APQ questionnaire, Emotional Representations (ER), Control Positive (PC), and Control and Consequences Negative (NC) proved to be of particular relevance for the remit of this study. Regression analyses show that an increase in the APQ-B subscale Emotional Representations (ER) predicts an increase in dysfunctional beliefs and attitudes about sleep in this sample, after controlling for subjective sleep quality, level of depression, and chronological age. A second regression analysis showed that the APQ-B subscales Control Positive (PC) and Control and Consequences Negative (NC) were significant predictors of the change in variance of SLOC, after controlling for subjective sleep quality, level of depression, and dysfunctional beliefs about sleep.
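
A hedged sketch of the hierarchical regression structure described above (covariates entered first, then the ER subscale); the column names and data are hypothetical stand-ins, not the study's dataset:

```python
# DBAS-16 regressed on controls (step 1), then adding APQ-B ER (step 2);
# the change in R² quantifies the contribution of ER.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "sleep_quality": rng.normal(0, 1, n),   # subjective sleep quality (control)
    "depression": rng.normal(0, 1, n),      # depression level (control)
    "age": rng.integers(60, 85, n),         # chronological age (control)
    "ER": rng.normal(0, 1, n),              # APQ-B Emotional Representations
})
df["DBAS16"] = 0.4 * df["ER"] + 0.3 * df["depression"] + rng.normal(0, 1, n)

step1 = sm.OLS(df["DBAS16"],
               sm.add_constant(df[["sleep_quality", "depression", "age"]])).fit()
step2 = sm.OLS(df["DBAS16"],
               sm.add_constant(df[["sleep_quality", "depression", "age", "ER"]])).fit()

print(f"R² step 1 = {step1.rsquared:.3f}, R² step 2 = {step2.rsquared:.3f}")
print(f"ΔR² attributable to ER = {step2.rsquared - step1.rsquared:.3f}")
```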

Keywords: sleep-related cognition, perceptions of aging, older adults, sleep quality

Procedia PDF Downloads 103
23 Feasibility of Applying a Hydrodynamic Cavitation Generator as a Method for Intensification of Methane Fermentation Process of Virginia Fanpetals (Sida hermaphrodita) Biomass

Authors: Marcin Zieliński, Marcin Dębowski, Mirosław Krzemieniewski

Abstract:

The anaerobic degradation of substrates is limited especially by the rate and effectiveness of the first (hydrolytic) stage of fermentation. This stage may be intensified through pre-treatment of the substrate aimed at disintegration of the solid phase and destruction of substrate tissues and cells. The most frequently applied criterion for evaluating disintegration outcomes is the increase in biogas recovery, owing to the possibility of its use for energy purposes and, simultaneously, recovery of the input energy consumed for pre-treatment of the substrate before fermentation. Hydrodynamic cavitation is one of the methods for organic substrate disintegration that has a high implementation potential. Cavitation is the phenomenon of the formation of discontinuity cavities filled with vapour or gas in a liquid, induced by a pressure drop to a critical value. It is induced by a varying pressure field: a void occurs in the flow where the pressure first drops to a value close to the saturated vapour pressure and then increases. The process of cavitation conducted under controlled conditions was found to significantly improve the effectiveness of anaerobic conversion of organic substrates having various characteristics. This phenomenon allows effective damage and disintegration of cellular and tissue structures. Disintegration of structures and release of organic compounds into the dissolved phase has a direct effect on the intensification of biogas production in the process of anaerobic fermentation, on reduced dry matter content in the post-fermentation sludge, as well as on a high degree of its hygienization and its increased susceptibility to dehydration. One device whose efficiency has been confirmed both under laboratory conditions and in systems operating at technical scale is the hydrodynamic cavitation generator. Cavitators, agitators and emulsifiers constructed and tested worldwide so far have been characterized by low efficiency and high energy demand. Many of them proved effective under laboratory conditions but failed under industrial ones. The only task successfully realized by these appliances and utilized on a wider scale is the heating of liquids; for this reason, their usability has been limited to heating installations. The design of the presented cavitation generator achieves satisfactory energy efficiency and enables its use under industrial conditions in depolymerization processes of biomass with various characteristics. Investigations conducted on the laboratory and industrial scale confirmed the effectiveness of applying cavitation in the process of biomass destruction. The use of the cavitation generator in laboratory studies for disintegration of sewage sludge allowed biogas production to be increased by ca. 30% and the treatment process to be shortened by ca. 20-25%. The shortening of the technological process and the increase in wastewater treatment plant effectiveness may delay investments aimed at increasing system output. The use of a mechanical cavitator and application of a repeated cavitation process (4-6 times) enables significant acceleration of the biogas production process. In addition, mechanical cavitation accelerates increases in COD and VFA levels.

Keywords: hydrodynamic cavitation, pretreatment, biomass, methane fermentation, Virginia fanpetals

Procedia PDF Downloads 434
22 Migration as a Trigger Causing Change to the Levant Literary Modernism

Authors: Aathira Peedikaparambil Somasundaran

Abstract:

The beginning of the 20th century marked the period when a new generation of Lebanese radicals sowed the seeds for the second phase of Levant literary modernism, situated in the Levant. During this era, Beirut fit every radical writer's criterion owing to its weakened censorship and political control, despite the absence of a protective womb for the development of literary modernism, caused by the natively prevalent political instability. The third stage of literary modernization, in which scholars used Western-inspired critical techniques to better understand their own cultures, coincides with the time period examined in this paper; it involved internationally inspired critical analysis of native cultural stimulants, which raised questions among Arab freethinking intellectuals. Locals who ventured outside recognised the difference between the West's progress and their own nations' stagnation. Awareness of such a 'gap of success' aroused ambition among journalists, authors, and proletarian revolutionaries who had studied in Europe and finally developed enlightened ideas. Some Middle Eastern authors and artists adopted current social and political frameworks only after discovering Western modernity. After learning about the upheavals that were happening in the West, these thinkers aspired to bring about equally broad and drastic developments in their own countries' social, political, and cultural milieux. These occurrences illustrate the increased power of migration to alter the cultural and literary scene in the Levant. The paper intends to discuss the different effects of migration that contributed to Levant literary modernism. The exploration of these factors as causes begins by addressing the politically influenced activism that has always been a relevant part of Beirut, and then dives into the psychological effects of migration on individuals within the society, which might have induced, over time, an accommodation of alien thoughts and ideas as a coping mechanism. Nature and environmental stimuli, common triggers for any creative output, often having the highest influence during travel, will be identified and analysed to inspect the extent of their impact on the exchange of ideas that resulted in Levant modernism. The efficiency of both the stimulating component of travel and the diaspora of the indigenous, a by-product of travel, in catalysing modernism in the Levant has to be demonstrated in order to understand how migration indirectly affected the transmission and adoption of ideas in Levant literature. The paper will revisit the events revolving around these key players and platforms, such as Shir, to understand how Lebanese literature, rooted in poetry, drastically mutated under the leadership of Adonis, Yusuf al-Khal, and other pioneers of Levant literary modernism. The conclusion will identify the triggers that helped authors overcome personal and geographical barriers to unite the West and the Levant, and investigate the extent to which bi-directional migration prompted a transformation in local poetry. Consequently, the paper aims to shed light on the unique factors that provoked the shift in the twentieth-century literary scene of the Middle East.

Keywords: literature, modernism, Middle East, levant, Beirut

Procedia PDF Downloads 81
21 Targeting Peptide Based Therapeutics: Integrated Computational and Experimental Studies of Autophagic Regulation in Host-Parasite Interaction

Authors: Vrushali Guhe, Shailza Singh

Abstract:

Cutaneous leishmaniasis is a neglected tropical disease, present worldwide, caused by the protozoan parasite Leishmania major. The therapeutic armamentarium for leishmaniasis shows several limitations, as the drugs have toxic effects and face increasing resistance from the parasite. Thus, identification of novel therapeutic targets is of paramount importance. Previous studies have shown that autophagy, a cellular process, can either facilitate infection or aid in the elimination of the parasite, depending on the specific parasite species and host background in leishmaniasis. In the present study, our objective was to target the essential autophagy protein ATG8, which plays a crucial role in the survival, infection dynamics, and differentiation of the Leishmania parasite. ATG8 in Leishmania major and its homologue LC3 in Homo sapiens act as autophagic markers. The present study demonstrates the crucial role of the ATG8 protein as a potential target for combating Leishmania major infection. Through bioinformatics analysis, we identified non-conserved motifs within the ATG8 protein of Leishmania major which are not present in LC3 of Homo sapiens. Against these two non-conserved motifs, we generated a library of 60 peptides on the basis of physicochemical properties. These peptides underwent a filtering process based on various parameters, including feasibility of synthesis and purification, compatibility with Selective Reaction Monitoring (SRM)/Multiple Reaction Monitoring (MRM), hydrophobicity, hydropathy index, average molecular weight (Mw average), monoisotopic molecular weight (Mw monoisotopic), theoretical isoelectric point (pI), and half-life. Further filtering shortlisted three peptides using molecular docking and molecular dynamics simulations. The direct interaction between ATG8 and the shortlisted peptides was confirmed through Surface Plasmon Resonance (SPR) experiments. Notably, these peptides exhibited the remarkable ability to penetrate the parasite membrane and exert profound effects on Leishmania major. Treatment with these peptides significantly impacted parasite survival, leading to alterations in the cell cycle and morphology. Furthermore, the peptides were found to modulate autophagosome formation, particularly under starved conditions, suggesting their involvement in disrupting the regulation of autophagy within Leishmania major. In vitro studies demonstrated that the selected peptides effectively reduced the parasite load within infected host cells. Encouragingly, these findings were corroborated by in vivo experiments, which showed a reduction in parasite burden upon peptide administration. Additionally, the peptides were observed to affect the levels of LC3-II within host cells. In conclusion, our findings highlight the efficacy of these novel peptides in targeting Leishmania major's ATG8 and disrupting parasite survival. These results provide valuable insights into the development of innovative therapeutic strategies against leishmaniasis via targeting the autophagy protein ATG8 of Leishmania major.
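
An illustrative sketch of the physicochemical filtering step, using Biopython's ProtParam module; the peptide sequence and thresholds below are hypothetical, not the peptides or cut-offs reported in the study:

```python
# Compute candidate-peptide properties (Mw, pI, GRAVY hydropathy, stability)
# and apply an example filter.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

candidate = "MKWVTFISLLFLFSSAYS"               # hypothetical peptide sequence
pa = ProteinAnalysis(candidate)

props = {
    "Mw (average)": pa.molecular_weight(),
    "theoretical pI": pa.isoelectric_point(),
    "hydropathy (GRAVY)": pa.gravy(),
    "instability index": pa.instability_index(),
}
for name, value in props.items():
    print(f"{name}: {value:.2f}")

# Example filter: keep peptides that are not too hydrophobic and predicted stable
# (thresholds are assumptions for illustration).
keep = props["hydropathy (GRAVY)"] < 0.5 and props["instability index"] < 40
print("passes filter:", keep)
```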

Keywords: ATG8, leishmaniasis, surface plasmon resonance, MD simulation, molecular docking, peptide designing, therapeutics

Procedia PDF Downloads 80
20 Challenging Conventions: Rethinking Literature Review Beyond Citations

Authors: Hassan Younis

Abstract:

Purpose: The objective of this study is to review influential papers in the sustainability and supply chain domain, leveraging insights from this review to develop a structured framework for academics and researchers. This framework aims to assist scholars in identifying the most impactful publications for their scholarly pursuits. Subsequently, the study applies the developed framework to selected scholarly articles within the sustainability and supply chain domain to evaluate its efficacy, practicality, and reliability. Design/Methodology/Approach: Utilizing the "Publish or Perish" tool, a search was conducted to locate papers incorporating "sustainability" and "supply chain" in their titles. After rigorous filtering steps, a panel of university professors identified five crucial criteria for evaluating research robustness: average yearly citation counts (25%), scholarly contribution (25%), alignment of findings with objectives (15%), methodological rigor (20%), and journal impact factor (15%). These five evaluation criteria are abbreviated as the "ACMAJ" framework. Each paper then received a tiered score (1-3) for each criterion, normalized within its category, and the scores were summed using weighted averages to calculate a Final Normalized Score (FNS). This systematic approach allows for objective comparison and ranking of research based on its impact, novelty, rigor, and publication venue. Findings: The study's findings highlight the lack of structured frameworks for assessing influential sustainability research in supply chain management, which often results in a dependence on citation counts. A complete model that incorporates five essential criteria has been suggested as a response. Through a methodical trial on selected academic articles in the field of sustainability and supply chain studies, the model demonstrated its effectiveness as a tool for identifying and selecting influential research papers that warrant additional attention. This work aims to fill a significant deficiency in existing techniques by providing a more comprehensive approach to identifying and ranking influential papers in the field. Practical Implications: The developed framework helps scholars identify the most influential sustainability and supply chain publications. Its validation serves the academic community by offering a credible tool and helping researchers, students, and practitioners find and choose influential papers. This approach aids field literature reviews and study recommendations. Analysis of major trends and topics deepens our grasp of this critical study area's changing terrain. Originality/Value: The framework stands as a unique contribution to academia, offering scholars an important new tool to identify and validate influential publications. Its distinctive capacity to efficiently guide scholars, learners, and professionals in selecting noteworthy publications, coupled with the examination of key patterns and themes, adds depth to our understanding of the evolving landscape in this critical field of study.
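
A minimal sketch of the ACMAJ scoring arithmetic described above, using the stated weights; the example papers and tiered scores are made up, and normalization by the within-criterion maximum is an assumption (the abstract does not specify the normalization rule):

```python
# Tiered scores (1-3) per criterion -> normalize within category -> weighted
# sum -> Final Normalized Score (FNS).
WEIGHTS = {
    "citations": 0.25,      # average yearly citation counts
    "contribution": 0.25,   # scholarly contribution
    "alignment": 0.15,      # alignment of findings with objectives
    "rigor": 0.20,          # methodological rigor
    "impact_factor": 0.15,  # journal impact factor
}

papers = {
    "Paper A": {"citations": 3, "contribution": 2, "alignment": 3, "rigor": 2, "impact_factor": 3},
    "Paper B": {"citations": 1, "contribution": 3, "alignment": 2, "rigor": 3, "impact_factor": 2},
    "Paper C": {"citations": 2, "contribution": 1, "alignment": 1, "rigor": 1, "impact_factor": 1},
}

def fns(paper_scores, all_papers):
    total = 0.0
    for criterion, weight in WEIGHTS.items():
        col = [p[criterion] for p in all_papers.values()]
        normalized = paper_scores[criterion] / max(col)   # normalize within category
        total += weight * normalized
    return total

for name in sorted(papers, key=lambda n: fns(papers[n], papers), reverse=True):
    print(f"{name}: FNS = {fns(papers[name], papers):.3f}")
```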

Keywords: supply chain management, sustainability, framework, model

Procedia PDF Downloads 52
19 Populism as a Society-Dividing Discourse in Lithuania: The Case of the Elections of the Parliament of the Republic of Lithuania of 2024

Authors: Vaicekauskiene G., Nabazaite E.

Abstract:

This study analyses the rise of global populism in Western democracies, focusing primarily on populist rhetoric. Populist rhetoric is based on anti-pluralist ideas, opposing a "homogeneous nation" to "dangerous others" who are pushed out of the nation by populists and can be citizens of both in-groups and out-groups. This study examines the case of the elections to the Parliament of the Republic of Lithuania of 2024. Fifteen candidate lists of parties and coalitions participated in the 2024 elections to the Lithuanian Parliament. Focus group methodology will be used to analyse the narratives of party supporters actively engaged in politics, seeking to identify public support for or opposition to populism. Liberal democracy is experiencing a crisis both in the US and in Western democracies in Europe. The election results of recent years increasingly announce populist victories or the creation of new populist parties. Far-right parties lead the governments in three countries (Hungary, Slovakia, and Italy) and are part of the ruling coalition in Sweden, Finland, and the Netherlands. It will become clear in the USA whether Donald Trump will be re-elected as president in November of this year. Trump's victory in 2016 was named by political scientists as the apotheosis of populism. Influential politicians consolidate bad manners and social categorization in the digital era of demagoguery. Research shows that a significant proportion of democratic societies also supports this divisive discourse. Citizens, as consumers of information, often approve of populist communication themselves. New parliamentary elections were held in Lithuania in October 2024. Ideas that polarize society were amplified in the public space, negativism increased, and with it distrust towards the state, its institutions, and democratically elected politicians; "enemies" were sought and conspiracy theories were created. Problem of the Study: This study analyses the global rise of populism from the perspective of Lithuania, engaging with various groups of society and trying to understand citizens' relationship with democracy through their belief in populists and their approval or disapproval of expressions of populism. Opinions are an important challenge when trying to find the truth in the age of populism, because democratic societies are based on a culture of discussion and the idea of consensus. Methodology: This study will deconstruct the narratives of Lithuanian citizens from the point of view of populism. Fifteen focus group discussions will be held during November-December 2024 with supporters of the party lists that participated in the elections to the Parliament of the Republic of Lithuania. The main unifying criterion for focus group participants is their political activity, while the distinguishing criteria are age, gender, and place of residence. Fifteen focus groups were chosen because fifteen candidate lists of parties and coalitions participated in the elections, and to ensure a variety of participants. This study aims to emphasize populism as a communication phenomenon in Lithuania. Public testimonies and experiences will reveal new meanings about the understanding of populism and support for or opposition towards it.

Keywords: democracy, narratives in populist rhetoric, populist rhetoric, populism

Procedia PDF Downloads 15
18 Balloon Analogue Risk Task (BART) Performance Indicators Help Predict Outcomes of Matched Savings Program

Authors: Carlos M. Parra, Matthew Sutherland, Ranjita Poudel

Abstract:

Reduced mental bandwidth related to low socioeconomic status (low SES) might lead to impulsivity and risk-taking behavior, which poses a major hurdle to asset-building (savings) behavior. Understanding the relationship between risk-related personality metrics, as well as laboratory risk behavior, and real-life savings behavior can help facilitate the development of effective asset-building programs, which are vital for mitigating financial vulnerability and income inequality. As such, this study explored the relationship between personality metrics, laboratory behavior in a risky decision-making task, and real-life asset-building (savings) behaviors among individuals with low SES from Miami, Florida (FL). Study participants (12 male, 15 female) included racially and ethnically diverse adults (mean age 41.22 ± 12.65 years), with incomplete higher education (18% had a High School Diploma, 30% an Associate degree, and 52% Some College), and low annual income (mean $13,872 ± $8,020.43). Participants completed eight self-report surveys and played a widely used risky decision-making paradigm called the Balloon Analogue Risk Task (BART). Specifically, participants played three runs of BART (20 trials in each run; 60 trials in total). In addition, asset-building behavior data were collected for the 24 participants who opened and used savings accounts and completed a 6-month savings program that involved monthly matches and a final reward for completing the program without any interim withdrawals. Each participant's total savings at the end of this program was the main asset-building indicator considered. In addition, a new effective use of average pump bet (EUAPB) indicator was developed to characterize each participant's ability to place winning bets. This indicator takes the ratio of each participant's total BART earnings to the average pump bet (APB) across all 60 trials. Our findings indicated that EUAPB explained more than a third of the variation in total savings among participants. Moreover, participants who managed to obtain BART earnings of at least 30 cents per average pump bet also tended to exhibit better asset-building (savings) behavior. In particular, using this criterion to separate participants into high and low EUAPB groups, the nine participants with high EUAPB (mean BART earnings of 35.64 cents per APB) ended up with higher mean total savings ($255.11), while the 15 participants with low EUAPB (mean BART earnings of 22.50 cents per APB) obtained lower mean total savings ($40.01). All mean differences are statistically significant (two-tailed p < .0001), indicating that the relation between higher EUAPB and higher total savings is robust. Overall, these findings can help refine asset-building interventions implemented by policymakers and practitioners interested in reducing financial vulnerability among low-SES populations, specifically by helping identify individuals who are likely to readily take advantage of savings opportunities (such as matched savings programs) and avoiding the stipulation of unnecessary and expensive financial coaching programs for these individuals. This study was funded by J.P. Morgan Chase (JPMC) and carried out by scientists from Florida International University (FIU) in partnership with Catalyst Miami.
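
A hedged sketch of the EUAPB indicator and group comparison described above; the data below are simulated stand-ins (not the study's), and the units are illustrative:

```python
# EUAPB = total BART earnings / average pump bet (APB) over 60 trials, then a
# two-sample t-test on savings between high- and low-EUAPB groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
earnings = rng.uniform(5, 25, size=24)          # total BART earnings (illustrative)
apb = rng.uniform(0.3, 0.8, size=24)            # average pump bet (illustrative)
savings = rng.uniform(0, 300, size=24)          # total savings at 6 months

euapb = earnings / apb                           # effective use of average pump bet
high = euapb >= 30                               # the study's 30-cents-per-APB cut-off

t, p = stats.ttest_ind(savings[high], savings[~high])
print(f"high-EUAPB n={high.sum()}, low-EUAPB n={(~high).sum()}")
print(f"mean savings: high={savings[high].mean():.2f}, low={savings[~high].mean():.2f}")
print(f"two-tailed t-test: t={t:.2f}, p={p:.4f}")
```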

Keywords: balloon analogue risk task (BART), matched savings programs, asset building capability, low-SES participants

Procedia PDF Downloads 145
17 Dynamic Simulation of IC Engine Bearings for Fault Detection and Wear Prediction

Authors: M. D. Haneef, R. B. Randall, Z. Peng

Abstract:

Journal bearings used in IC engines are prone to premature failures and are likely to fail earlier than the rated life due to highly impulsive and unstable operating conditions and frequent starts/stops. Vibration signature extraction and wear debris analysis techniques are prevalent in industry for condition monitoring of rotary machinery. However, both techniques involve a great deal of technical expertise, time, and cost. Limited literature is available on the application of these techniques for fault detection in reciprocating machinery, due to the complex nature of the impact forces that confound the extraction of fault signals for vibration-based analysis and wear prediction. This work is an extension of a previous study, in which an engine simulation model was developed using a MATLAB/SIMULINK program, whereby the engine parameters used in the simulation were obtained experimentally from a Toyota 3SFE 2.0-litre petrol engine. Simulated hydrodynamic bearing forces were used to estimate vibration signals, and envelope analysis was carried out to analyze the effect of speed, load, and clearance on the vibration response. Three different loads (50/80/110 N·m), three different speeds (1500/2000/3000 rpm), and three different clearances (normal, 2 times, and 4 times the normal clearance) were simulated to examine the effect of wear on bearing forces. The magnitude of the squared envelope of the generated vibration signals was not affected by load, but was observed to rise significantly with increasing speed and clearance, indicating the likelihood of augmented wear. In the present study, the simulation model was extended further to investigate bearing wear behavior resulting from different operating conditions, to complement the vibration analysis. In the current simulation, the dynamics of the engine was established first, based on which the hydrodynamic journal bearing forces were evaluated by numerical solution of the Reynolds equation. The essential outputs of interest in this study, critical to determining wear rates, are the tangential velocity and the oil film thickness between the journal and bearing sleeve, which, if not maintained appropriately, have a detrimental effect on the bearing performance. Archard's wear prediction model was used in the simulation to calculate the wear rate of the bearings with specific location information, as all determinative parameters were obtained with reference to crank rotation. The oil film thickness obtained from the model was used as a criterion to determine whether the lubrication is sufficient to prevent contact between the journal and bearing, which would otherwise cause accelerated wear. A limiting value of 1 µm was used as the minimum oil film thickness needed to prevent contact. The increase in wear rate with growing severity of operating conditions is analogous and comparable to the rise in amplitude of the squared envelope of the referenced vibration signals. Thus, on one hand, the developed model demonstrated its capability to explain wear behavior, and on the other hand, it also helps to establish a correlation between wear-based and vibration-based analysis. The model therefore provides a cost-effective and quick approach to predicting impending wear in IC engine bearings under various operating conditions.
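
A hedged sketch of the wear-evaluation logic described above: Archard's model gives wear proportional to load and sliding distance, and wear is only accumulated where the oil film falls below the 1 µm contact threshold. All numbers are illustrative, not the study's engine parameters:

```python
import numpy as np

K_WEAR = 1e-7          # dimensionless Archard wear coefficient (assumed)
HARDNESS = 3.0e8       # bearing surface hardness, Pa (assumed)
H_MIN = 1e-6           # minimum oil film thickness before contact, m (from the study)

theta = np.linspace(0, 4 * np.pi, 720)                  # two crank revolutions
load = 5e3 + 4e3 * np.abs(np.sin(theta / 2))            # bearing force, N (illustrative)
film = 2.5e-6 - 2.0e-6 * np.abs(np.sin(theta))          # oil film thickness, m
velocity = 3.0                                          # journal tangential velocity, m/s
dt = (theta[1] - theta[0]) / (2 * np.pi) * (60 / 2000)  # time step at 2000 rpm, s

# Archard: dV = K * F * ds / H, applied only when the film cannot separate
# the journal and the bearing sleeve.
contact = film < H_MIN
wear_volume = np.sum(K_WEAR * load[contact] * velocity * dt / HARDNESS)
print(f"crank positions in contact: {contact.sum()}/{theta.size}")
print(f"accumulated wear volume: {wear_volume:.3e} m^3")
```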

Keywords: condition monitoring, IC engine, journal bearings, vibration analysis, wear prediction

Procedia PDF Downloads 310
16 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessary alternative, given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audios and their respective transcriptions. The DNN was initialized with one hidden layer, and the number of hidden layers was increased during training to five. A refinement of the weight matrices and bias terms, together with Stochastic Gradient Descent (SGD) training, was also performed; the objective function was the cross-entropy criterion. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores based on the decoding lattice (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is measured by means of the Word Error Rate (WER). The test dataset was renewed in order to remove the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result was a 6% relative WER improvement. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
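
A hedged sketch of the confidence-based selection step (c): decoded utterances are kept for retraining only if a combined lattice score clears a threshold. The field names, weighting, and threshold are hypothetical, not the paper's exact metrics:

```python
# Select pseudo-labeled utterances by a combined graph + acoustic lattice cost.
decoded = [
    {"utt": "utt_001", "text": "hola mundo", "graph_cost": 1.2, "acoustic_cost": 3.4},
    {"utt": "utt_002", "text": "buenos dias", "graph_cost": 4.8, "acoustic_cost": 9.1},
    {"utt": "utt_003", "text": "gracias", "graph_cost": 0.9, "acoustic_cost": 2.2},
]

ALPHA = 0.5          # weight combining the two costs (assumed)
THRESHOLD = 3.0      # lower combined cost = higher confidence (assumed)

def combined_cost(hyp):
    return ALPHA * hyp["graph_cost"] + (1 - ALPHA) * hyp["acoustic_cost"]

selected = [h for h in decoded if combined_cost(h) < THRESHOLD]
for h in selected:
    print(f"{h['utt']}: '{h['text']}' (cost {combined_cost(h):.2f}) -> add to training set")
# The retained transcriptions are appended to the labeled set and the seed DNN
# is retrained, after removing these utterances from the test pool.
```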

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 339
15 Wideband Performance Analysis of C-FDTD Based Algorithms in the Discretization Impoverishment of a Curved Surface

Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves

Abstract:

In this work, the wideband performance under mesh discretization impoverishment of the Conformal Finite Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey and Wenhua Yu for the Finite Difference Time-Domain (FDTD) method is analyzed. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for dielectric and Perfect Electric Conducting (PEC) structures in the FDTD method, since curved surfaces otherwise require dense meshes to reduce the error introduced by surface staircasing. Referred to in this work as D-FDTD-Diel and D-FDTD-PEC, these approaches are well known in the literature, but the improvement upon their application has not been quantified broadly with regard to wide frequency bands and poorly discretized meshes. Both approaches improve the accuracy of the simulation without requiring dense meshes, also making it possible to exploit poorly discretized meshes, which reduce simulation time and computational expense while retaining a desired accuracy. However, their application presents limitations regarding mesh impoverishment and the desired frequency range. Therefore, the goal of this work is to explore both the wideband and the mesh impoverishment performance of the approaches, to give a wider insight into these aspects of FDTD applications. The D-FDTD-Diel approach consists in modifying the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material within the edges of the mesh cells. By taking the intersections into account, D-FDTD-Diel provides an accuracy improvement at the cost of computational preprocessing, which is a fair trade-off, since the update modification is quite simple. Likewise, the D-FDTD-PEC approach consists in modifying the magnetic field update, taking into account the PEC curved surface intersections within the mesh cells and, considering a PEC structure in vacuum, the air portion that fills the intersected cells when updating the magnetic field values. Like D-FDTD-Diel, D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, although with the drawback of having to meet stability criterion requirements. The algorithms are formulated and applied to PEC and dielectric spherical scattering surfaces with meshes presenting different levels of discretization, with Polytetrafluoroethylene (PTFE) as the dielectric, this being a very common material in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing the drop in the approaches' wideband performance as the mesh is impoverished. The benefits in computational efficiency, simulation time, and accuracy are also shown and discussed according to the desired frequency range, showing that poorly discretized mesh FDTD simulations can be exploited more efficiently while retaining the desired accuracy. The results obtained provide a broader insight into the limitations of applying the C-FDTD approaches in poorly discretized and wide-frequency-band simulations for dielectric and PEC curved surfaces, limitations which are not clearly defined or detailed in the literature and are, therefore, a novelty of this work. These approaches are also expected to be applied to the modeling of curved RF components for wideband and high-speed communication devices in future work.
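
A simplified 1-D illustration of the D-FDTD-Diel idea: the electric-field update in a cell cut by a dielectric interface uses a fill-fraction-weighted effective permittivity instead of a staircase assignment. This is a didactic sketch under assumed parameters, not the 3-D conformal scheme of Mittra, Dey and Yu:

```python
import numpy as np

nz, nt = 200, 400
c0, dz = 3e8, 1e-3
dt = 0.99 * dz / c0                      # Courant-stable time step (1-D)
eps_r = np.ones(nz)
eps_r[120:] = 2.1                        # PTFE half-space (relative permittivity ~2.1)

# Conformal treatment: the interface cell gets a fill-fraction-weighted
# permittivity (the interface is assumed to cut cell 119 at 40% fill).
fill = 0.4
eps_r[119] = fill * 2.1 + (1 - fill) * 1.0

ex = np.zeros(nz)
hy = np.zeros(nz)
for n in range(nt):
    hy[:-1] += dt / (4e-7 * np.pi * dz) * (ex[1:] - ex[:-1])        # H update (mu0)
    ex[1:-1] += dt / (8.854e-12 * eps_r[1:-1] * dz) * (hy[1:-1] - hy[:-2])  # E update
    ex[100] += np.exp(-((n - 60) / 15.0) ** 2)                      # soft Gaussian source

print("field transmitted into dielectric:", float(np.abs(ex[150:]).max()))
```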

Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment

Procedia PDF Downloads 134
14 Assessment and Forecasting of the Impact of Negative Environmental Factors on Public Health

Authors: Nurlan Smagulov, Aiman Konkabayeva, Akerke Sadykova, Arailym Serik

Abstract:

Introduction. Adverse environmental factors do not immediately lead to pathological changes in the body. They can instead promote the growth of pre-pathology, characterized by shifts in physiological, biochemical, immunological, and other indicators of the body's state. These disorders are unstable, reversible, and indicative of the body's reactions, and they make it possible to judge objectively the internal structure of adaptive reactions at the level of individual organs and systems. For the body to show a stable response to chronic exposure to unfavorable environmental factors of low intensity (compared to factors of the production environment), a period called the «lag time» is needed. Results obtained without considering this factor distort reality and, for the most part, cannot reliably support the main conclusions of any work. A technique is needed that reduces methodological errors and combines mathematical logic, statistical methods, and the medical point of view, which ultimately affects the results obtained and avoids false correlations. Objective. Development of a methodology for assessing and predicting the impact of environmental factors on population health, considering the «lag time». Methods. Research objects: environmental indicators and population morbidity indicators. The database on the environmental state was compiled from the monthly newsletters of Kazhydromet. Data on population morbidity were obtained from regional statistical yearbooks. When processing the statistical data, a time interval (lag) was determined for each «argument-function» pair, i.e., the interval after which the effect of the harmful factor (argument) fully manifests itself in the indicators of the organism's state (function). The lag value was determined from the cross-correlation functions of the arguments (environmental indicators) with the functions (morbidity). Correlation coefficients (r) and their reliability (t), Fisher's criterion (F), and the influence share (R²) of the main factor (argument) per indicator (function), expressed as a percentage, were calculated. Results. The ecological situation of an industrially developed region has an impact on health indicators, but with some nuances. Fundamentally different results were obtained when the mathematical data processing considered the «lag time»: namely, a pronounced correlation was revealed after the two databases (ecology-morbidity) were shifted. For example, the lag period was 4 years for dust concentration and general morbidity, and 3 years for childhood morbidity. These periods accounted for the maximum values of the correlation coefficients and the largest percentage influence of the acting factor. Similar results were observed for the concentrations of soot, dioxide, etc. Comprehensive statistical processing using multiple correlation-regression and variance analysis confirms the correctness of the above statement. This method provided an integrated approach to predicting the degree of pollution of the main environmental components and to identifying the most dangerous combinations of concentrations of the leading negative environmental factors. Conclusion. The method of assessing the «environment-public health» system considering the «lag time» is qualitatively different from the traditional one (without the «lag time»). The results differ significantly and are more amenable to a logical explanation of the obtained dependencies. The method allows the quantitative and qualitative dependencies within the «environment-public health» system to be presented in a different way.
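
A hedged sketch of the lag-selection step for a single «argument-function» pair is shown below; the annual series are synthetic stand-ins for the Kazhydromet and yearbook data, and the candidate lag range is an assumption of the example.

```python
# Cross-correlate an environmental argument with a morbidity function at
# several lags and keep the lag with the strongest reliable correlation.
import numpy as np
from scipy.stats import pearsonr

dust      = np.array([38., 41., 45., 43., 47., 52., 50., 55., 58., 61., 60., 64.])
morbidity = np.array([510., 520., 515., 530., 540., 560., 555., 575., 590., 610., 605., 625.])

n = len(dust)
best_lag, best_r = 0, 0.0
for lag in range(0, 6):                      # candidate lags, in years
    r, p = pearsonr(dust[:n - lag], morbidity[lag:])
    print(f"lag {lag}: r = {r:+.3f}, p = {p:.4f}, influence share R^2 = {100 * r * r:.1f}%")
    if abs(r) > abs(best_r):
        best_lag, best_r = lag, r
print(f"selected lag: {best_lag} years")
```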

Keywords: ecology, morbidity, population, lag time

Procedia PDF Downloads 81
13 Multi-Criteria Assessment of Biogas Feedstock

Authors: Rawan Hakawati, Beatrice Smyth, David Rooney, Geoffrey McCullough

Abstract:

Targets have been set in the EU to increase the share of renewable energy consumption to 20% by 2020, but developments have not occurred evenly across the member states. Northern Ireland is almost 90% dependent on imported fossil fuels, and with such high energy dependency it is particularly susceptible to security-of-supply issues. Linked to fossil fuels are greenhouse gas emissions, which the EU plans to reduce by 20% by 2020. The use of indigenously produced biomass could reduce both greenhouse gas emissions and external energy dependence. With a wide range of both crop and waste feedstock potentially available in Northern Ireland, anaerobic digestion has been put forward as a possible solution for renewable energy production, waste management, and greenhouse gas reduction. Not all feedstock, however, is the same, and an understanding of feedstock suitability is important for both plant operators and policy makers. The aim of this paper is to investigate biomass suitability for anaerobic digestion in Northern Ireland, and it is important that decisions are based on solid scientific evidence. For this reason, the methodology used is multi-criteria decision matrix analysis, which takes multiple criteria into account simultaneously and ranks alternatives accordingly. The model uses the weighted sum method, with weights derived via the entropy method (which measures uncertainty using probability theory), and the TOPSIS method is utilized to carry out the mathematical analysis and provide the final scores. Feedstock currently available in Northern Ireland was classified into two categories: wastes (manure, sewage sludge, and food waste) and energy crops, specifically grass silage. To select the most suitable feedstock, methane yield, feedstock availability, feedstock production cost, biogas production, calorific value, produced kilowatt-hours, dry matter content, and carbon-to-nitrogen ratio were assessed. The highest weight (0.249) corresponded to production cost, reflecting the variation from a £41/tonne gate fee to a £22/tonne cost. With these weights, grass silage emerged as the most suitable feedstock. A sensitivity analysis was then conducted to investigate the impact of the weights. This analysis used the Pugh matrix method, which relies upon the Analytic Hierarchy Process and pairwise comparisons to determine a weighting for each criterion. The results showed that the highest weight (0.193) then corresponded to biogas production, indicating that grass silage and manure are the most suitable feedstocks. Introducing co-digestion of two or more substrates can boost the biogas yield through a synergistic effect of the feedstock mix that favors positive biological interactions; a further benefit of co-digesting manure is that the anaerobic digestion process also acts as a waste management strategy. From the research, it was concluded that energy from agricultural biomass is highly advantageous in Northern Ireland because it would increase the country's production of renewable energy, manage waste production, and limit the production of greenhouse gases (the current contribution from the agriculture sector is 26%). Decision-making methods based on scientific evidence aid policy makers in weighing multiple criteria in a logical, mathematical manner in order to reach a resolution.
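
The entropy weighting step can be sketched as follows; the feedstock scores are illustrative placeholders rather than the study's data, and the resulting weights would then feed the TOPSIS ranking.

```python
# Entropy-method criteria weights: criteria that differentiate the
# alternatives more strongly (lower entropy) receive larger weights.
import numpy as np

# rows: manure, sewage sludge, food waste, grass silage (hypothetical scores)
# cols: methane yield, availability, cost score, biogas production
X = np.array([[25., 7., 9., 30.],
              [30., 5., 6., 35.],
              [55., 4., 5., 60.],
              [50., 9., 4., 55.]])

P = X / X.sum(axis=0)                        # normalize each criterion column
k = 1.0 / np.log(X.shape[0])
E = -k * (P * np.log(P)).sum(axis=0)         # entropy of each criterion
weights = (1.0 - E) / (1.0 - E).sum()        # degree of divergence -> weights
print("entropy weights:", np.round(weights, 3))
```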

Keywords: anaerobic digestion, biomass as feedstock, decision matrix, renewable energy

Procedia PDF Downloads 462
12 Study of the Biological Activity of a Ganglioside-Containing Drug (Cronassial) in an Experimental Model of Multiple Sclerosis

Authors: Hasmik V. Zanginyan, Gayane S. Ghazaryan, Laura M. Hovsepyan

Abstract:

Experimental autoimmune encephalomyelitis (EAE) is an inflammatory demyelinating disease of the central nervous system that is induced in laboratory animals by provoking an immune response against myelin epitopes. The typical clinical course is ascending palsy, which correlates with inflammation and tissue damage in the thoracolumbar spinal cord, although the optic nerves and brain (especially the subpial white matter and brainstem) are also often affected. In multiple sclerosis, lipid metabolism in myelin is disturbed. When membrane lipids (glycosphingolipids, phospholipids) are disturbed, their metabolites not only play a structural role in membranes but also serve as sources of secondary mediators that transmit multiple cellular signals. The purpose of this study was to investigate the effect of gangliosides as a therapeutic agent in experimental multiple sclerosis. The biological activity of a ganglioside-containing medicinal preparation (Cronassial) was evaluated in an experimental model of multiple sclerosis in laboratory animals. The experimental model of multiple sclerosis in rats was obtained by immunization with myelin basic protein (MBP) together with a homogenate of the spinal cord or brain: EAE was induced by administering an encephalitogenic mixture (EGM) with Complete Freund's Adjuvant. The mitochondrial fraction was isolated in a medium containing 0.25 M sucrose and 0.01 M Tris buffer, pH 7.4, by differential centrifugation on a K-24 centrifuge. Glutathione peroxidase activity was assessed by the reduction of hydrogen peroxide (H₂O₂) and lipid hydroperoxides (ROOH) in the presence of GSH. Lipid peroxidation (LPO) activity was assessed by the amount of malondialdehyde (MDA), measured by reaction with thiobarbituric acid, in the total homogenate and mitochondrial fraction of the spinal cord and brain of control and EAE rats. For statistical data analysis, the SPSS (Statistical Package for the Social Sciences) package was used. The nature of the distribution of the obtained data was determined by the Kolmogorov-Smirnov criterion, comparative analysis was performed using the nonparametric Mann-Whitney test, and differences were considered statistically significant at p ≤ 0.05 or p ≤ 0.01; correlation analysis was conducted using the nonparametric Spearman test. A refrigerated centrifuge, an LKB Biochrom ULTROSPEC II spectrophotometer (Sweden), a PL-600 mrc pH meter (Israel), and guanosine and ATP (Sigma) were used in the work. The study of lipid peroxidation in the total homogenate of the brain and spinal cord of experimental animals revealed an increase in the content of malondialdehyde, and administration of Cronassial normalized the lipid peroxidation processes. Reactive oxygen species, which drive lipid peroxidation, can be toxic both to neurons and to the oligodendrocytes that form myelin, disrupting their lipid composition. The high lipid content of the brain and the uniqueness of its lipid structure determine how LPO processes develop. The lipid layer of cellular and intracellular membranes performs two main functions, barrier and matrix (structural), and damage to the barrier leads to dysregulation of intracellular processes and severe disorders of cellular functions.
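
The nonparametric group comparison reported above can be sketched as follows; the MDA values are hypothetical stand-ins for the measured data.

```python
# Mann-Whitney U comparison of control vs. EAE groups, two-sided,
# with significance declared at p <= 0.05 (values are hypothetical).
from scipy.stats import mannwhitneyu

control = [2.1, 2.4, 1.9, 2.2, 2.0, 2.3]   # MDA, control animals
eae     = [3.5, 3.9, 3.2, 3.8, 3.6, 4.1]   # MDA, EAE animals

u, p = mannwhitneyu(control, eae, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}, significant: {p <= 0.05}")
```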

Keywords: experimental autoimmune encephalomyelitis, multiple sclerosis, neuroinflammation, therapy

Procedia PDF Downloads 92
11 Holistic Approach to Teaching Mathematics in Secondary School as a Means of Improving Students’ Comprehension of Study Material

Authors: Natalia Podkhodova, Olga Sheremeteva, Mariia Soldaeva

Abstract:

Creating favorable conditions for students' comprehension of mathematical content is one of the primary problems in teaching mathematics in secondary school. Psychology research has demonstrated that successful comprehension becomes possible when new information becomes part of a student's subjective experience and when linkages between the attributes of notions and the various ways of presenting them can be established. Comprehension includes the ability to build a working situational model and thus becomes an important means of solving mathematical problems. The article describes the implementation of a holistic approach to teaching mathematics designed to address the primary challenges of such teaching, specifically, the challenge of students' comprehension. This approach consists of (1) establishing links between the attributes of a notion: the sense, the meaning, and the term; (2) taking into account the components of the student's subjective experience (emotional and value, contextual, procedural, communicative) during the educational process; (3) establishing links between different ways of presenting mathematical information; and (4) identifying and leveraging the relationships between real, perceptual, and conceptual (scientific) mathematical spaces by applying real-life situational modeling. The article describes approaches to the practical use of these foundational concepts. The research's primary goal was to identify how the proposed methods and technology influence understanding of the material used in teaching mathematics. The research included an experiment in which 256 secondary school students took part: 142 in the experimental group and 114 in the control group. All students in these groups had similar levels of achievement in math and studied math under the same curriculum. In the course of the experiment, comprehension of two topics, 'Derivative' and 'Trigonometric functions', was evaluated. Control group participants were taught using traditional methods. Students in the experimental group were taught using the holistic method: under the teacher's guidance, they carried out problems designed to establish linkages between a notion's characteristics and to convert information from one mode of presentation to another, as well as problems that required the ability to operate with all modes of presentation. The use of the technology that forms inter-subject notions based on linkages between perceptual, real, and conceptual mathematical spaces proved to be of special interest to the students. Results of the experiment were analyzed by presenting students in each of the groups with a final test on each of the studied topics. The test included problems that required building real situational models. Statistical analysis was used to aggregate the test results: the Pearson criterion was used to reveal the statistical significance of the results (pass/fail on the modeling test). A significant difference in results was revealed (p < 0.001), which allowed the authors to conclude that students in the study group showed better comprehension of mathematical information than those in the control group. Student's t-test also revealed that the students of the experimental group reliably solved more problems than those in the control group (p = 0.0001). The results obtained allow us to conclude that improved comprehension and assimilation of the study material took place as a result of applying the implemented methods and techniques.
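
The pass/fail comparison by the Pearson criterion amounts to a chi-squared test on a 2x2 contingency table, as in the sketch below; the cell counts are hypothetical and chosen only to match the reported group sizes.

```python
# Pearson chi-squared test of group (experimental/control) vs.
# modeling-test outcome (pass/fail); counts are hypothetical.
from scipy.stats import chi2_contingency

#            pass  fail
table = [[118,  24],    # experimental group (n = 142)
         [ 61,  53]]    # control group      (n = 114)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2e}")
```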

Keywords: comprehension of mathematical content, holistic approach to teaching mathematics in secondary school, subjective experience, technology of the formation of inter-subject notions

Procedia PDF Downloads 176
10 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation

Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N. Lukashenko, Elena G. Gubitskaya

Abstract:

The estimation of accumulated radiation doses in people professionally exposed to ionizing radiation was performed using methods of biological (chromosomal aberration frequency in lymphocytes) and physical (radionuclide analysis in urine, whole-body counter, individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was investigated after their work in the territory of the former Semipalatinsk test site (Kazakhstan), where the dose rate in some funnels exceeds 40 μSv/h. After radionuclide determination in urine using radiochemical and whole-body-counter (WBC) methods, it was shown that the total effective dose of the personnel's internal exposure did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3-1.406 µSv. The cytogenetic examination showed that the chromosomal aberration frequency in staff was 4.27±0.22%, significantly higher than in people from the non-polluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and in citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. Cytogenetic analysis of group radiosensitivity among the «professionals», stratified by age, sex, ethnic group, and epidemiological data, revealed no significant differences between the compared values. Based on the frequency of dicentrics and centric rings, and using various techniques, the average cumulative radiation dose for the group was calculated to be 0.084-0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the overall frequency of chromosomal aberrations, obtained after irradiation of blood samples with gamma radiation at a dose rate of 0.1 Gy/min, were used. Assuming individual variation of chromosomal aberration frequency of 1-10%, the accumulated radiation dose varied from 0 to 0.3 Gy. The main problem in interpreting individual dosimetry results comes down to the differing reactions of subjects to irradiation, i.e., radiosensitivity, which dictates the need to quantify this individual reaction and take it into account when calculating the received radiation dose. The entire examined contingent was assigned to groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals, at the lowest received dose in a year, showed the highest frequency of chromosomal aberrations (5.72%); in contrast, radioresistant individuals showed the lowest frequency (2.8%). By the criterion of radiosensitivity, the cohort was distributed as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). The dispersion was 2.3 for radioresistant individuals, 3.3 for the group with medium radiosensitivity, and 9 for the radiosensitive group; these data indicate the highest variation of the characteristic (reaction to radiation) in the radiosensitive group. People with medium radiosensitivity show a significant long-term correlation (0.66; n=48, β ≥ 0.999) between dose values determined from the cytogenetic analysis and the external radiation dose obtained with thermoluminescent dosimeters. Mathematical models relating the received radiation dose to the professionals' radiosensitivity level were proposed.
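
As a hedged illustration of the biological dose reconstruction, the sketch below inverts a linear-quadratic calibration curve for the aberration yield; the coefficients are assumed placeholders, not the authors' own calibration.

```python
# Solve Y = c + alpha*D + beta*D**2 for dose D given an observed yield
# of dicentrics + centric rings per cell (coefficients are assumed).
import math

c, alpha, beta = 0.001, 0.02, 0.06

def dose_from_yield(y):
    """Positive root of the linear-quadratic calibration curve, in Gy."""
    disc = alpha ** 2 + 4.0 * beta * (y - c)
    return (-alpha + math.sqrt(disc)) / (2.0 * beta)

yield_per_cell = 0.0027
print(f"estimated dose: {dose_from_yield(yield_per_cell):.3f} Gy")
```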

Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity

Procedia PDF Downloads 184
9 A Study on the Relation among Primary Care Professionals Serving Disadvantaged Community, Socioeconomic Status, and Adverse Health Outcome

Authors: Chau-Kuang Chen, Juanita Buford, Colette Davis, Raisha Allen, John Hughes, James Tyus, Dexter Samuels

Abstract:

During the post-Civil War era, the city of Nashville, Tennessee, had the highest mortality rate in the country. The elevated death and disease among ex-slaves were attributable to the unavailability of healthcare. To address the paucity of healthcare services, the College, an institution with the mission of educating minority professionals and serving the underserved population, was established in 1876. This study was designed to assess whether the College has accomplished its mission of serving underserved communities and has contributed to the elimination of health disparities in the United States. The study objective was to quantify the impact of socioeconomic status and adverse health outcomes on primary care professionals serving disadvantaged communities, which, in turn, was significantly associated with the health professional shortage score partly designated by the U.S. Department of Health and Human Services. Various statistical methods were used to analyze the alumni data for the years 1975-2013. K-means cluster analysis was utilized to classify individual medical and dental graduates into cluster groups by practice community (Disadvantaged or Non-disadvantaged Communities). Discriminant analysis was implemented to verify the classification accuracy of the cluster analysis. The independent t-test was performed to detect significant mean differences in the clustering and criterion variables between Disadvantaged and Non-disadvantaged Communities, which confirms the “content” validity of the cluster analysis model. A chi-square test was used to assess whether the proportion of cluster groups (Disadvantaged vs. Non-disadvantaged Communities) was consistent with that of practicing specialties (primary care vs. non-primary care). Finally, a partial least squares (PLS) path model was constructed to explore the “construct” validity of the analytics model by providing the magnitude of the effects of socioeconomic status and adverse health outcome on primary care professionals serving disadvantaged communities. Social ecological theory, along with the statistical models mentioned, was used to establish the relationship between medical and dental graduates (primary care professionals serving disadvantaged communities) and their social environments (socioeconomic status, adverse health outcome, health professional shortage score). Based on the social ecological framework, it was hypothesized that the impact of socioeconomic status and adverse health outcomes on primary care professionals serving disadvantaged communities could be quantified, and that the relation of primary care professionals serving disadvantaged communities to the health professional shortage score could be measured. Adverse health outcome (adult obesity rate, age-adjusted premature mortality rate, and percent of people diagnosed with diabetes) could be affected by the latent variable socioeconomic status (unemployment rate, poverty rate, percent of children in free lunch programs, and percent of uninsured adults). The study results indicated that approximately 83% (3,192/3,864) of the College's medical and dental graduates from 1975 to 2013 were practicing in disadvantaged communities. In addition, the PLS path modeling demonstrated that primary care professionals serving disadvantaged communities were significantly associated with socioeconomic status and adverse health outcome (p < .001). In summary, the majority of medical and dental graduates from the College provide primary care services to disadvantaged communities with low socioeconomic status and high adverse health outcomes, demonstrating that the College has fulfilled its mission.
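
A simplified sketch of the clustering and association steps is given below; it is not the authors' pipeline, and the feature matrix and specialty labels are synthetic.

```python
# K-means separation into two practice-community clusters, followed by a
# chi-squared test of cluster membership vs. primary-care specialty.
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
# columns: standardized poverty, uninsured and premature-mortality scores
X = np.vstack([rng.normal(+0.8, 0.4, (60, 3)),    # disadvantaged-like profile
               rng.normal(-0.8, 0.4, (40, 3))])   # non-disadvantaged-like

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

p_pc = np.where(labels == labels[0], 0.8, 0.5)    # cluster-dependent rate (synthetic)
primary_care = (rng.random(len(labels)) < p_pc).astype(int)

table = [[np.sum((labels == c) & (primary_care == s)) for s in (0, 1)]
         for c in (0, 1)]
chi2, p, *_ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```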

Keywords: disadvantaged community, K-means cluster analysis, PLS path modeling, primary care

Procedia PDF Downloads 550
8 Developing an Integrated Clinical Risk Management Model

Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei

Abstract:

Introduction: Improving patient safety is one of the main priorities in healthcare systems, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool that compensates for the limitations of each risk assessment and management tool by drawing on the advantages of the others. Methods: The procedure was determined in two main stages: development of an initial model during meetings with professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This is a mixed quantitative-qualitative study. For the qualitative dimension, the focus group method with an inductive approach was used. To evaluate the results of the qualitative study, a quantitative assessment of the two parts of the fourth phase and of the seven phases of the research was conducted. Purposive and stratified sampling of the various teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through the application of the activity breakdown structure, failure mode and effects analysis (FMEA), healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT), and Eindhoven Classification Model (ECM) tools. The model was applied to patients admitted to a day-clinic ward of a public hospital for surgery from October 2012 to June. Statistical Analysis Used: Qualitative data analysis was done through content analysis, and quantitative analysis through checklists and edited RPN tables. Results: After verification of the final model in eight steps, the patient admission process for surgery was broken down by the focus discussion group (FDG) members into five main phases. Then, following the adopted FMEA methodology, 85 failure modes, along with their causes, effects, and preventive capabilities, were set out in tables. The tables developed to calculate the RPN index contain three criteria for severity, two criteria for probability, and two criteria for preventability. Three failure modes were above the threshold defined for significant risk (RPN > 250). Over a 3-month period, patient misidentification incidents were the most frequently reported events. The RPN criteria of the misidentification events were compared, and it was found that the RPN numbers for the three reported misidentification events could be determined against the scores predicted in the previous phase. Root causes identified through the fault tree were categorized with the ECM. The wrong-side surgery event was selected by the focus discussion group for a proposed improvement action; its most important causes were a lack of planning of the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system within the health information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the healthcare industry requires risk managers to have a multifaceted vision. Applying only retrospective or only prospective tools for risk management therefore does not work, and each organization must provide conditions for the potential application of these methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance.
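
The RPN bookkeeping described above can be sketched as follows; the failure modes and scores are illustrative, not the study's edited RPN tables.

```python
# RPN = severity x probability x preventability; modes with RPN > 250
# are flagged as significant risks (scores are illustrative).
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int        # aggregated from the three severity criteria
    probability: int     # aggregated from the two probability criteria
    preventability: int  # aggregated from the two preventability criteria

    @property
    def rpn(self) -> int:
        return self.severity * self.probability * self.preventability

modes = [
    FailureMode("patient misidentification", 9, 6, 7),
    FailureMode("wrong-side surgery", 10, 4, 7),
    FailureMode("incomplete consent form", 5, 5, 4),
]

for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    flag = "SIGNIFICANT" if m.rpn > 250 else "acceptable"
    print(f"{m.name:28s} RPN = {m.rpn:4d}  ({flag})")
```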

Keywords: failure mode and effects analysis, risk management, root cause analysis, model

Procedia PDF Downloads 249
7 Adapting to College: Exploration of Psychological Well-Being, Coping, and Identity as Markers of Readiness

Authors: Marit D. Murry, Amy K. Marks

Abstract:

The transition to college is a critical period that affords abundant opportunities for growth in conjunction with novel challenges for emerging adults. During this time, emerging adults are garnering experiences and acquiring hosts of new information that they must synthesize and use to inform life-shaping decisions. This stage is characterized by instability and exploration, which necessitates a diverse set of coping skills to successfully navigate and positively adapt to an evolving environment. However, important sociocultural factors result in developmental differences for minority emerging adults (i.e., emerging adults with an identity that has been or is marginalized). While the transition to college holds vast potential, not all are afforded the same chances, and individuals enter this stage at varying degrees of readiness. Understanding the nuance and diversity of student preparedness for college, and contextualizing these factors, will better equip systems to support incoming students. Emerging adulthood for ethnic and racial minority students presents itself as an opportunity for growth and resiliency in the face of systemic adversity. Ethnic-racial identity (ERI) is defined as an identity that develops as a function of one's ethnic-racial group membership. Research continues to demonstrate that ERI is a resilience factor that promotes positive adjustment in young adulthood. Adaptive coping responses (e.g., engaging in help-seeking behavior, drawing on personal and community resources) have been identified as possible mechanisms through which ERI buffers youth against stressful life events, including discrimination. Additionally, trait mindfulness has been identified as a significant predictor of general psychological health, and mindfulness practice has been shown to be a self-regulatory strategy that promotes healthy stress responses and adaptive coping strategy selection. The current study employed a person-centered approach to explore emerging patterns across ethnic identity development and psychological well-being criterion variables among college freshmen. Data from 283 incoming college freshmen at Northeastern University were analyzed. The Brief COPE Acceptance and Emotional Support scales, the Five Facet Mindfulness Questionnaire, and the MEIM Exploration and Affirmation measures were used to inform the cluster profiles. The TwoStep auto-clustering algorithm revealed an optimal three-cluster solution (BIC = 848.49), which classified 92.6% (n = 262) of participants in the sample into one of three clusters, characterized as ‘Mixed Adjustment’, ‘Lowest Adjustment’, and ‘Moderate Adjustment’. Cluster composition varied significantly by ethnicity, X² (2, N = 262) = 7.74 (p = .021), and gender, X² (2, N = 259) = 10.40 (p = .034). The ‘Lowest Adjustment’ cluster contained the highest proportion of students of color, 41% (n = 32), and of male-identifying students, 44.2% (n = 34). Follow-up analyses showed higher ERI exploration among ‘Moderate Adjustment’ cluster members, who also reported higher levels of psychological distress, with significantly elevated depression scores (p = .011) and more psychological diagnoses of depression (p = .013), anxiety (p = .005), and psychiatric disorders (p = .025). Supporting prior research, students engaged in identity exploration processes often endure more psychological distress. These results indicate that students undergoing identity development may require additional socialization and services beyond the usual support strategies.
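
The TwoStep algorithm itself is specific to SPSS, but its BIC-guided choice of the number of clusters can be approximated with Gaussian mixture models, as in this sketch on synthetic scores.

```python
# Fit Gaussian mixtures for k = 1..6 on standardized scale scores and
# keep the k with the lowest BIC (data are synthetic).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(m, 0.5, (90, 4)) for m in (-1.0, 0.0, 1.2)])

bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 7)}
best_k = min(bics, key=bics.get)
print({k: round(v, 1) for k, v in bics.items()})
print(f"optimal cluster solution by BIC: k = {best_k}")
```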

Keywords: adjustment, coping, college, emerging adulthood, ethnic-racial identity, psychological well-being, resilience

Procedia PDF Downloads 110
6 Effect of Preoxidation on the Effectiveness of Gd₂O₃ Nanoparticles Applied as a Source of Active Element in the Crofer 22 APU Coated with a Protective-Conducting Spinel Layer

Authors: Łukasz Mazur, Kamil Domaradzki, Maciej Bik, Tomasz Brylewski, Aleksander Gil

Abstract:

Interconnects used in solid oxide fuel cells and solid oxide electrolyzer cells (SOFCs/SOECs) serve several important functions, and interconnect materials must therefore exhibit certain properties. Their thermal expansion coefficient needs to match that of the ceramic components of these devices: the electrolyte, anode, and cathode. Interconnects also provide structural rigidity to the entire device, which is why interconnect materials must exhibit sufficient mechanical strength at high temperatures. Gas-tightness is a further prerequisite, since interconnects separate gas reagents, and they must also provide very good electrical contact between neighboring cells over the entire operating time. High-chromium ferritic steels meet these requirements to a high degree but are affected by the formation of a Cr₂O₃ scale, which leads to increased electrical resistance. The final criterion for interconnect materials is chemical inertness with respect to the remaining cell components. In the case of ferritic steels, this has proved difficult due to the formation of volatile and reactive chromium oxyhydroxides, observed when Cr₂O₃ is exposed to oxygen and water vapor. This process is particularly harmful on the cathode side in SOFCs and the anode side in SOECs. To mitigate it, protective-conducting ceramic coatings can be deposited on the interconnect's surface. The area-specific resistance (ASR) of a single interconnect cannot exceed 0.1 Ω·cm² at any point of the device's operation. The rate at which the Cr₂O₃ scale grows on ferritic steels can be reduced significantly via the so-called reactive element effect (REE). Research has shown that the deposition of Gd₂O₃ nanoparticles on the surface of Crofer 22 APU steel already modified with a protective-conducting spinel layer further improves its oxidation resistance. However, the deposition of the manganese-cobalt spinel layer is a rather complex process performed at high temperatures in reducing and oxidizing atmospheres, so there was reason to believe that this process may reduce the effectiveness of the Gd₂O₃ nanoparticles added as an active element source. The objective of the present study was therefore to determine any such impact by introducing a preoxidation stage after the nanoparticle deposition and before the steel is coated with the spinel, which should allow the nanoparticles to become incorporated into the scale forming on the steel. Different samples were oxidized for 7000 h in air at 1073 K under quasi-isothermal conditions. The phase composition, chemical composition, and microstructure of the oxidation products formed on the samples were determined using X-ray diffraction, Raman spectroscopy, and scanning electron microscopy combined with energy-dispersive X-ray spectroscopy. A four-point, two-probe DC method was applied to measure the ASR. It was found that coating deposition does indeed reduce the beneficial effect of the Gd₂O₃ addition, since the smallest mass gain and the lowest ASR value were determined for the sample for which the additional preoxidation stage had been performed. It can be assumed that during this stage gadolinium incorporates into, and segregates at, the grain boundaries of the thin Cr₂O₃ scale that is forming, which allows the Gd₂O₃ nanoparticles to act as a more effective source of the active element.

Keywords: interconnects, oxide nanoparticles, reactive element effect, SOEC, SOFC

Procedia PDF Downloads 84
5 Detection of Mustard Traces in Food by an Official Food Safety Laboratory

Authors: Clara Tramuta, Lucia Decastelli, Elisa Barcucci, Sandra Fragassi, Samantha Lupi, Enrico Arletti, Melissa Bizzarri, Daniela Manila Bianchi

Abstract:

Introduction. Food allergy affects, in the Western world, 2% of adults and up to 8% of children. The protection of allergic consumers is guaranteed, in Europe, by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens that must be indicated on the label. Among these, mustard is a popular spice added to enhance the flavour and taste of foods. It is frequently present as an ingredient in spice blends, marinades, salad dressings, sausages, and other products. Hypersensitivity to mustard is a public health problem, since the ingestion of even low amounts can trigger severe allergic reactions. In order to protect the allergic consumer, high-performance methods are required for the detection of allergenic ingredients, and food safety laboratories rely on validated methods that detect hidden allergens in food to ensure the safety and health of allergic consumers. Here we present the test results for the validation and accreditation of a real-time PCR assay (RT-PCR: SPECIALfinder MC Mustard, Generon) for the detection of mustard traces in food. Materials and Methods. The method was tested on five classes of food matrices: bakery and pastry products (chocolate cookies), meats (ragù), ready-to-eat foods (mixed salad), dairy products (yogurt), and grains and milling products (rice and barley flour). Blank samples were spiked with mustard (Sinapis alba) samples, lyophilized and stored at -18 °C, at a concentration of 1000 ppm. Serial dilutions were then prepared down to a final concentration of 0.5 ppm, using the DNA extracted from the blank samples by ION Force FAST (Generon). The real-time PCR reaction was performed with RT-PCR SPECIALfinder MC Mustard (Generon) using the CFX96 System (BioRad). Results. Real-time PCR showed a limit of detection (LOD) of 0.5 ppm in grains and milling products, ready-to-eat foods, meats, bakery and pastry products, and dairy products (Ct range 25-34). To determine the exclusivity parameter of the method, the ragù matrix was contaminated with Prunus dulcis (almond), peanut (Arachis hypogaea), Glycine max (soy), Apium graveolens (celery), Allium cepa (onion), Pisum sativum (pea), Daucus carota (carrot), and Theobroma cacao (cocoa), and no cross-reactions were observed. Discussion. In terms of sensitivity, the real-time PCR confirmed, even in complex matrices, an LOD of 0.5 ppm in the five classes of food matrices tested; these values are compatible with the current regulatory situation, which, at the international level, does not establish a quantitative criterion for the allergen considered in this study. The real-time PCR SPECIALfinder kit for the detection of mustard proved easy to use and was particularly appreciated for its rapid response times, considering that the amplification and detection phase lasts less than 50 minutes. Method accuracy was rated satisfactory for sensitivity (100%) and specificity (100%), and the method was fully validated and accredited; it was found adequate for the needs of the laboratory, as it met the purpose for which it was applied. This study was funded in part within a project of the Italian Ministry of Health (IZS PLV 02/19 RC).

Keywords: allergens, food, mustard, real time PCR

Procedia PDF Downloads 166
4 Environmental Fate and Toxicity of Aged Titanium Dioxide Nano-Composites Used in Sunscreen

Authors: Danielle Slomberg, Jerome Labille, Riccardo Catalano, Jean-Claude Hubaud, Alexandra Lopes, Alice Tagliati, Teresa Fernandes

Abstract:

In the assessment and management of cosmetics and personal care products, sunscreens are of emerging concern for both human and environmental health. Organic UV blockers in many sunscreens have been shown to undergo rapid photodegradation, to induce dermal allergic reactions due to skin penetration, and to cause adverse effects on marine systems. While mineral UV blockers may offer a safer alternative, their fate and impact, and the resulting regulation, are still under consideration, largely in relation to the potential influence of nanotechnology-based products on both consumers and the environment. Nanometric titanium dioxide (TiO₂) UV blockers have many advantages in terms of sun protection and aesthetics (i.e., transparency). These UV blockers typically consist of rutile nanoparticles coated with a primary mineral layer (silica or alumina) aimed at blocking the nanomaterial's photoactivity, and they can include a secondary organic coating (e.g., stearic acid, methicone) aimed at favouring dispersion of the nanomaterial in the sunscreen formulation. The nanomaterials contained in the sunscreen can leave the skin through bathing during everyday use, with subsequent release into rivers, lakes, seashores, and/or sewage treatment plants. The nanomaterial's behaviour, fate, and impact in these different systems are largely determined by its surface properties (e.g., the coating type) and lifetime. The present work aims to develop the eco-design of sunscreens by minimising the risks associated with the nanomaterials incorporated into the formulation. All stages of the sunscreen's life cycle must be considered in this respect, from its manufacture to its end-of-life, through its use by the consumer, to its impact on the exposed environment. Reducing the potential release and/or toxicity of the nanomaterial from the sunscreen is a decisive criterion for its eco-design. TiO₂ UV blockers of varied size and surface coating (e.g., stearic acid and silica) were selected for this study. Hydrophobic (stearic acid-coated) TiO₂ UV blockers were incorporated into a typical water-in-oil (w/o) formulation, while hydrophilic, silica-coated TiO₂ UV blockers were dispersed into an oil-in-water (o/w) formulation. The resulting sunscreens were characterised in terms of nanomaterial localisation, sun protection factor, and photo-passivation. The risk to the immediate aquatic environment was assessed by evaluating the release of nanomaterials from the sunscreen through a simulated laboratory aging procedure. The size distribution, surface charge, and degradation state of the nano-composite by-products, as well as their nanomaterial concentration and colloidal behaviour, were determined in a variety of aqueous environments (e.g., seawater and freshwater). Release of the hydrophobic nano-composites into the aqueous environment was driven by oil droplet formation, while the hydrophilic nano-composites were readily dispersed. The ecotoxicity of the sunscreen by-products (from both w/o and o/w formulations) and their risk to marine organisms were assessed using coral symbionts and tropical corals, evaluating both lethal and sublethal toxicities. The data dissemination and risk knowledge provided by the present work will help guide regulation related to nanomaterials in sunscreen, provide better information for consumers, and allow easier decision-making for manufacturers.

Keywords: alteration, environmental fate, sunscreens, titanium dioxide nanoparticles

Procedia PDF Downloads 262
3 Multiaxial Stress Based High Cycle Fatigue Model for Adhesive Joint Interfaces

Authors: Martin Alexander Eder, Sergei Semenov

Abstract:

Many glass-epoxy composite structures, such as large utility wind turbine rotor blades (WTBs), comprise adhesive joints, typically with thick bond lines, used to connect the different components during assembly. Optimizing rotor blade performance to increase power output while maintaining high stiffness-to-mass ratios entails intricate geometries in conjunction with complex anisotropic material behavior. Consequently, adhesive joints in WTBs are subject to multiaxial stress states with significant stress gradients, depending on the local joint geometry. Moreover, the dynamic aero-elastic interaction of the WTB with the airflow generates non-proportional, variable-amplitude stress histories in the material. Experience shows that a prominent failure type in WTBs is high cycle fatigue failure of adhesive bond line interfaces, which has over time developed into a design driver as WTB sizes increase rapidly. Structural optimization employed at an early design stage therefore places high demands on computationally efficient interface fatigue models capable of predicting the critical locations prone to interface failure. The numerical stress-based interface fatigue model presented in this work uses the Drucker-Prager criterion to compute three different damage indices corresponding to the two interface shear tractions and the outward normal traction. The two-parameter Drucker-Prager model was chosen because of its ability to capture shear strength enhancement under compression and shear strength reduction under tension. The governing interface damage index is taken as the maximum of the three. The damage indices are computed with the well-known linear Palmgren-Miner rule after separate rainflow counting of the equivalent shear stress history and the equivalent pure normal stress history. The equivalent stress signals are obtained by self-similar scaling of the Drucker-Prager surface, whose shape is defined by the uniaxial tensile strength and the shear strength, such that it intersects the stress point at every time step. This approach implicitly assumes that the damage caused by the prevailing multiaxial stress state is the same as the damage caused by an amplified equivalent uniaxial stress state in each of the three interface directions. The model was implemented as a Python plug-in for the commercially available finite element code Abaqus for use with solid elements. The model was used to predict the interface damage of an adhesively bonded, tapered glass-epoxy composite cantilever I-beam tested by LM Wind Power under constant-amplitude compression-compression tip load in the high cycle fatigue regime. Results show that the model was able to predict the location of debonding in the adhesive interface between the web foot and the cap. Moreover, with a set of two different constant life diagrams, namely in shear and in tension, it was possible to predict both the fatigue lifetime and the failure mode of the sub-component with reasonable accuracy. It can be concluded that the fidelity, robustness, and computational efficiency of the proposed model make it especially suitable for rapid fatigue damage screening of large 3D finite element models subject to complex dynamic load histories.
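
The damage-accumulation bookkeeping can be sketched as follows, assuming the Drucker-Prager scaling has already produced an equivalent uniaxial stress signal; the S-N parameters, the stress history, and the use of the third-party rainflow package are assumptions of this illustration, not the authors' plug-in.

```python
# Rainflow-count an equivalent stress history, then accumulate damage with
# the linear Palmgren-Miner rule against a Basquin-type S-N curve.
import numpy as np
import rainflow  # pip install rainflow

def miner_damage(history, sigma_ref=50.0, n_ref=1e6, m=9.0):
    """Sum n_i / N_i over rainflow cycles; N = n_ref * (sigma_ref / amp)**m."""
    damage = 0.0
    for rng_, _mean, count, *_ in rainflow.extract_cycles(history):
        amp = 0.5 * rng_                    # amplitude = half the cycle range
        if amp > 0.0:
            damage += count / (n_ref * (sigma_ref / amp) ** m)
    return damage

t = np.linspace(0.0, 100.0, 5000)
signal = 30.0 * np.sin(2.0 * np.pi * t) + 8.0 * np.sin(9.0 * np.pi * t)
print(f"accumulated damage index: {miner_damage(signal):.3e}")
```

The governing index of the paper would then be the maximum of three such damage sums, one per interface traction direction.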

Keywords: adhesive, fatigue, interface, multiaxial stress

Procedia PDF Downloads 169
2 Nigeria Rural Water Supply Management: Participatory Process as the Best Option

Authors: E. O. Aluta, C. A. Booth, D. G. Proverbs, T. Appleby

Abstract:

Challenges in the effective management of potable water have attracted global attention in recent years and remain a major priority in many world regions. Scarcity and unavailability of potable water can escalate poverty, stifle the democratic expression of views, and militate against inter-sectoral development. These challenges stand in contrast to the inherent potential of the resource. Thus, while the creation of poverty may be regarded as a broad-based problem, it can manifest in life-shortening diseases, frictions of interest that escalate into threats and warfare, the relegation of democratic principles in favour of authoritarian rule, and human rights abuses. These problems may be seen as manifestations of ineffective management of the potable water resource and are therefore regarded as major problems in environmental protection. In response, some nations have re-examined their laws and policies, while others have developed innovative projects that seek to ameliorate the difficulties of providing sustainable potable water. The problems resonate in Nigeria, where the legal framework supporting the supply and management of potable water has been criticized as ineffective. This has had the greatest impact on rural community members, often regarded as ‘voiceless’. At that level, the participation of non-state actors has been identified as an effective strategy that can improve water supply. However, there are indications that this is not applied pragmatically, resulting in over-centralization and top-down management. This study therefore focuses on how the participatory process may enable the development of a participatory water governance framework for use in Nigerian rural communities. The Rural Advisory Board (RAB) is proposed as a governing body to promote proximal relationships and to institute the democratisation born of participation, while enabling effective accountability and information flow. The RAB establishes mechanisms for effectiveness that take into consideration Transparency, Accountability and Participation (TAP), advocated as guiding principles for decision-makers. Other tools that may be explored in achieving these ends are laws and policies supporting the water sector, under the direction of the ministries and the law courts, which ensure that laws are not violated. Community norms and values, consisting of the Nigerian traditional belief system, perceptions, attitudes, and reality (often undermined in favour of legislation), are relied upon to pave the way for enforcement. Task forces consist of community members with specific designations of duties to ensure compliance and enforceability, and a cross-section of community members is assigned duties, so the principle of participation is pragmatically reflected. A review of the literature provided information on the potential of the participatory process in potable water governance. A qualitative methodology was adopted, with the semi-structured interview as the strategy of inquiry. A purposive sampling strategy, combining homogeneous, heterogeneous, and criterion techniques, was applied. The samples, drawn from diverse walks of life, came from the study area of Delta State, Nigeria, involving the three local governments of Oshimili South, Uvwie, and Warri South. The findings indicate that the application of the participatory process empowers rural community members to make legitimate demands for TAP, including an end to mono-decision-making in the supply and management of potable water. This is capable of restructuring top-down management into a combined top-down/bottom-up system.

Keywords: participation, participatory process, participatory water governance, rural advisory board

Procedia PDF Downloads 385
1 Evaluation of Academic Research Projects Using the AHP and TOPSIS Methods

Authors: Murat Arıbaş, Uğur Özcan

Abstract:

Due to the increasing number of universities and academics, university funds for research activities and the grants/supports given by government institutions have increased the number and quality of academic research projects. Although every academic research project has a specific purpose and importance, limited resources (money, time, manpower, etc.) require choosing the best ones from among all of them (Amiri, 2010). It is a hard process to compare projects and determine which is better when they serve different purposes. In addition, the evaluation process becomes complicated when there is more than one evaluator and there are multiple evaluation criteria (Dodangeh, Mojahed and Yusuff, 2009). Mehrez and Sinuany-Stern (1983) defined the project selection problem as a Multi-Criteria Decision Making (MCDM) problem. If a decision problem involves multiple criteria and objectives, it is called a Multi-Attribute Decision Making problem (Ömürbek & Kınay, 2013). There are many MCDM methods in the literature for the solution of such problems, including AHP (Analytic Hierarchy Process), ANP (Analytic Network Process), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation), UTADIS (Utilités Additives Discriminantes), ELECTRE (Élimination et Choix Traduisant la Réalité), MAUT (Multi-Attribute Utility Theory), GRA (Grey Relational Analysis), etc. Each method has some advantages compared with the others (Ömürbek, Blacksmith & Akalın, 2013). Hence, to decide which MCDM method will be used for the solution of a problem, factors like the nature of the problem, the types of choices, the measurement scales, the type of uncertainty, the dependency among the attributes, the expectations of the decision maker, and the quantity and quality of the data should be considered (Tavana & Hatami-Marbini, 2011). This study aims to develop a systematic decision process for grant support applications that are expected to be evaluated according to their scientific adequacy by multiple evaluators under certain criteria. In this context, the project evaluation process applied by The Scientific and Technological Research Council of Turkey (TÜBİTAK), one of the leading institutions in the country, was investigated. First, the criteria to be used in the project evaluation were decided; the main criteria were selected from among the TÜBİTAK evaluation criteria: originality of the project, methodology, project management/team and research opportunities, and the extensive impact of the project. Moreover, for each main criterion, 2-4 sub-criteria were defined, so that projects were evaluated over 13 sub-criteria in total. Because the AHP method is well suited to determining criteria weights, and the TOPSIS method provides the opportunity to rank a great number of alternatives, the two were used together. The AHP method, developed by Saaty (1977), is based on selection by pairwise comparisons. Because of its simple structure and ease of understanding, AHP is a very popular method in the literature for determining criteria weights in MCDM problems. The TOPSIS method, developed by Hwang and Yoon (1981) as an MCDM technique, is an alternative to the ELECTRE method and is used in many areas; in this method, the distance from each decision point to the ideal and to the negative-ideal solution points is calculated using the Euclidean distance approach.
In the study, the main criteria and sub-criteria were compared pairwise using questionnaires, developed on an importance scale, completed by four groups of respondents (TÜBİTAK specialists, TÜBİTAK managers, academics, and individuals from the business world). After these pairwise comparisons, the weight of each main criterion and sub-criterion was calculated using the AHP method. These calculated criteria weights were then used as input to the TOPSIS method, and a sample consisting of 200 projects was ranked on its merits. This new system made it possible to incorporate the views of the people taking part in the project process (preparation, evaluation, and implementation) into the evaluation of academic research projects. Moreover, instead of evaluating projects using four equally weighted main criteria, a systematic decision-making process was developed using 13 weighted sub-criteria and each decision point's distance from the ideal solution. Through this evaluation process, a new approach was created for determining the importance of academic research projects.
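
A self-contained sketch of the AHP-to-TOPSIS pipeline, reduced to four hypothetical main criteria and three hypothetical projects (the study used 13 sub-criteria and 200 projects), might look like this.

```python
# AHP principal-eigenvector weights from a pairwise comparison matrix,
# then a TOPSIS ranking by closeness to the ideal solution. The pairwise
# judgments and project scores are illustrative only.
import numpy as np

A = np.array([[1.0, 3.0, 2.0, 5.0],
              [1/3, 1.0, 1/2, 2.0],
              [1/2, 2.0, 1.0, 3.0],
              [1/5, 1/2, 1/3, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                                  # AHP criteria weights

X = np.array([[7.0, 9.0, 9.0, 8.0],              # project scores per criterion
              [8.0, 7.0, 8.0, 7.0],
              [9.0, 6.0, 8.0, 9.0]])
R = X / np.sqrt((X ** 2).sum(axis=0))            # vector normalization
V = R * w                                        # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)       # all criteria treated as benefits
d_pos = np.linalg.norm(V - ideal, axis=1)        # Euclidean distances
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print("weights:", np.round(w, 3))
print("ranking (best project first):", np.argsort(-closeness) + 1)
```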

Keywords: academic projects, AHP method, research projects evaluation, TOPSIS method

Procedia PDF Downloads 589