1163 Assessment of Very Low Birth Weight Neonatal Tracking and a High-Risk Approach to Minimize Neonatal Mortality in Bihar, India
Authors: Aritra Das, Tanmay Mahapatra, Prabir Maharana, Sridhar Srikantiah
Abstract:
In the absence of adequate, well-equipped neonatal-care facilities serving rural Bihar, India, the practice of essential home-based newborn care remains critically important for the reduction of neonatal and infant mortality, especially among pre-term and small-for-gestational-age (low-birth-weight) newborns. To improve child health parameters in Bihar, a ‘Very-Low-Birth-Weight (vLBW) Tracking’ intervention has been conducted by CARE India since 2015, targeting newborns delivered in public facilities weighing ≤2000g at birth, to improve their identification and the provision of immediate post-natal care. To assess the effectiveness of the intervention, 200 public health facilities were randomly selected from all functional public-sector delivery points in Bihar, and various outcomes were tracked among the neonates born there. Thus far, one pre-intervention (neonates born Feb-Apr 2015) and three post-intervention (children born Sep-Oct 2015, Sep-Oct 2016, and Sep-Oct 2017) follow-up studies have been conducted. In each round, interviews were conducted with the mothers/caregivers of successfully tracked children to understand outcomes, service coverage, and care-seeking during the neonatal period. Data from 171 matched facilities common across all rounds were analyzed using SAS 9.4. Identification of neonates with birth weight ≤2000g improved from 2% at baseline to 3.3%-4% during the post-intervention rounds. All indicators pertaining to post-natal home visits by frontline workers (FLWs) improved. Significant improvements between the baseline and post-intervention rounds were also noted regarding mothers being informed about a ‘weak’ child at the facility (R1 = 25% to R4 = 50%) and at home by an FLW (R1 = 19% to R4 = 30%). The practice of ‘Kangaroo Mother Care (KMC)’, an important component of essential newborn care, showed significant improvement in the post-intervention period compared to baseline at both the facility (R1 = 15% to R4 = 31%) and home (R1 = 10% to R4 = 29%). Detection and birth-weight recording of extremely low-birth-weight newborns (<1500g) also showed an increasing trend. Moreover, there was a downward trend in mortality across rounds in each birth-weight stratum (<1500g, 1500-1799g, and ≥1800g). After adjustment for the differential distribution of birth weights, mortality was found to decline significantly from R1 (22.11%) to R4 (11.87%). A significantly declining trend was also observed for both early and late neonatal mortality and morbidities. Multiple regression analysis identified birth during the immediate post-intervention phase as well as during the maintenance phase, birth weight >1500g, low maternal parity, receiving a visit from an FLW in the first week, and/or receiving advice on extra care from an FLW as predictors of survival during the neonatal period among vLBW newborns. vLBW tracking was found to be a successful and sustainable intervention and has already been handed over to the Government.
Keywords: weak newborn tracking, very low birth weight babies, newborn care, community response
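As a minimal illustration of the adjustment step reported above, the sketch below applies direct standardization of mortality over the three birth-weight strata. All figures are invented placeholders, not the study's data, and the stratum weights are an assumed standard population:

```python
# Direct standardization of neonatal mortality across birth-weight strata.
# All numbers are illustrative placeholders, not data from the study.

# Stratum-specific mortality (deaths / tracked neonates) per round
rates = {
    "R1": {"<1500g": 0.40, "1500-1799g": 0.22, ">=1800g": 0.12},
    "R4": {"<1500g": 0.25, "1500-1799g": 0.11, ">=1800g": 0.06},
}

# Fixed standard-population weights: share of each stratum in a reference
# round, so rounds are compared on the same birth-weight mix (assumed values).
standard = {"<1500g": 0.30, "1500-1799g": 0.35, ">=1800g": 0.35}

for r, strata in rates.items():
    adjusted = sum(strata[s] * standard[s] for s in standard)
    print(f"{r}: adjusted mortality = {adjusted:.2%}")
```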
Procedia PDF Downloads 162
1162 Using Virtual Reality Exergaming to Improve Health of College Students
Authors: Juanita Wallace, Mark Jackson, Bethany Jurs
Abstract:
Introduction: Exergames, VR games used as a form of exercise, are being used to reduce sedentary lifestyles in a vast number of populations. However, there is a distinct lack of research comparing the physiological response during VR exergaming to that of traditional exercise. The purpose of this study was to create a foundational investigation establishing changes in physiological responses resulting from VR exergaming in a college-aged population. Methods: In this IRB-approved study, college-aged students were recruited to play a virtual reality exergame (Beat Saber) on the Oculus Quest 2 (Facebook, 2021) in either a control group (CG) or a training group (TG). Both groups consisted of subjects who were not habitual users of virtual reality. The CG played VR one time per week for three weeks, and the TG played 150 min/week for three weeks. Each group played the same nine Beat Saber songs, in a randomized order, during 30-minute sessions. Song difficulty was increased during play based on song performance. Subjects completed pre- and posttests at which the following was collected: • Beat Saber Game Metrics: song level played, song score, number of beats completed per song, and accuracy (beats completed/total beats) • Physiological Data: heart rate (max and avg.), active calories • Demographics. Results: A total of 20 subjects completed the study; nine in the CG (3 males, 6 females) and 11 (5 males, 6 females) in the TG. • Beat Saber Song Metrics: The TG improved from a normal/hard difficulty to hard/expert, while the CG stayed at the normal/hard difficulty. At the pretest, there was no difference in game accuracy between groups; however, at the posttest the CG had a higher accuracy. • Physiological Data (Table 1): Average heart rates were similar between the TG and CG at both the pre- and posttest; however, the TG expended more total calories. Discussion: Due to the lack of peer-reviewed literature on exergaming using Beat Saber, the results of this study cannot be directly compared. However, they can be compared with the previously established trends for traditional exercise. In traditional exercise, an increase in training volume equates to increased efficiency at the activity; the TG should naturally increase in difficulty at a faster rate than the CG because they played 150 minutes per week. Heart rate and caloric responses also increase during traditional exercise as load increases (i.e., speed or resistance); the TG reported an increase in total calories due to a higher difficulty of play. The decrease in song accuracy in the TG can be explained by the increased difficulty of play. Conclusion: VR exergaming is comparable to traditional exercise for loads within 50-70% of maximum heart rate. The ability to use VR for health could motivate individuals who do not engage in traditional exercise. In addition, individuals in the health professions can and should promote VR exergaming as a viable way to increase physical activity and improve health in their clients/patients.
Keywords: virtual reality, exergaming, health, heart rate, wellness
Procedia PDF Downloads 188
1161 The Applicability of General Catholic Canon Law during the Ongoing Migration Crisis in Hungary
Authors: Lorand Ujhazi
Abstract:
The vast majority of existing canonical studies about migration focus on examining the general pastoral and legal regulations of the Catholic Church. The weakness of this approach is that it ignores a number of important factors, such as the financial, legal, and personal circumstances of a particular church or the canonical position of the organizations which actually look after the immigrants. This paper is a case study, which analyses the current and historical migration-related policies and activities of the Catholic Church in Hungary. To achieve this goal, the study uses canon law, historical publications, various instructions and communications issued by church superiors, Hungarian and foreign media reports, and the relevant Hungarian legislation. The paper first examines how the Hungarian Catholic Church assisted migrants such as Armenians fleeing from the Ottoman Empire, Poles escaping during the Second World War, East German and Romanian citizens in the 1980s, and refugees from the former Yugoslavia in the 1990s. These events underline the importance of past historical experience in the development of the contemporary pastoral and humanitarian policy of the Catholic Church in Hungary. The paper then turns to the events of the ongoing crisis by describing the unique challenges faced by churches in transit countries like Hungary, and contrasts these findings with the typical responsibilities of churches in countries which are popular destinations for immigrants. The next part of the case study focuses on the changes to the pre-crisis legal and canonical framework which influenced the actions of hierarchical and charity organizations in Hungary. Afterwards, the paper illustrates the dangers of operating in an unclear legal environment, where some charitable activities of the church, like a fundraising campaign, may be interpreted as a national security risk by state authorities. The paper then presents the reactions of Hungarian academics to the current migration crisis, and finally it offers some proposals on how to improve the parts of canon law which govern immigration. The conclusion of the paper is that during the formulation of the central refugee policy of the Catholic Church, decision-makers must take into consideration the peculiar circumstances of its particular churches. This approach may prevent disharmony between the existing central regulations, the policy of the Vatican, and the operations of the local church organizations.
Keywords: canon law, Catholic Church, civil law, Hungary, immigration, national security
Procedia PDF Downloads 309
1160 Urban Accessibility of Historical Cities: The Venetian Case Study
Authors: Valeria Tatano, Francesca Guidolin, Francesca Peltrera
Abstract:
The preservation of Italian historical heritage, at the urban and architectural scales, has to reconcile the restrictions and requirements connected with conservation with usability needs, which are often at odds with heritage preservation. Recent decades have been marked by the search for increased accessibility not only of public and private buildings but of the whole historical city, including for people with disabilities. Moreover, in recent years the concepts of the Smart City and the Healthy City have sought to improve accessibility both in terms of mobility (independent or assisted) and fruition of goods and services, also for historical cities. The principles of Inclusive Design have introduced new criteria for the improvement of public urban space, between current regulations and best practices, and have contributed to transforming “special needs” into an opportunity for social innovation. These considerations find a field of research and analysis in the historical city of Venice, which is at the same time a UNESCO World Heritage site, a mass-tourism destination bringing in visitors from all over the world, and a city inhabited by an aging population. Due to its conformation, the Venetian urban fabric is only partially accessible: about four hundred bridges divide over a hundred islands, making it almost impossible to move independently. These urban characteristics and difficulties have been the basis, over the last 20 years, for several research efforts, experimentations, and solutions aimed at eliminating architectural barriers, in particular for the usability of bridges. The Venetian Municipality, with the EBA Office and some external consultants, has realized several devices (e.g., the “stepped ramp” and the new accessible ramps for the Venice Marathon) that should mark an innovation for the city, passing from the use of replicable mechanical devices to specific architectural projects in order to guarantee autonomy in use. This paper intends to present the state of the art in bridge accessibility, through an analysis based on Inclusive Design principles and on the current national and regional regulations. The purpose is to evaluate some possible strategies that could improve performance, weighing the limits and possibilities of intervention. The aim of the research is to lay the foundations for the development of a strategic program for the City of Venice that could successfully bring together conservation and improvement requirements.
Keywords: accessibility of historical cities, historical heritage preservation, inclusive design, technological and social innovation
Procedia PDF Downloads 283
1159 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English
Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista
Abstract:
The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English and to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with word-for-word Old English-English comparison that provides the Old English segment with inflectional-form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation, and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available, while the average amount of corpus annotation is low. With this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information that will be used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as the references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented in FileMaker software. It is based on a concordance and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words. In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms on an automatic basis, by searching the index and the concordance for prefixes, stems, and inflectional endings. The conclusions of this presentation stress the limits of the automatisation of dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
Keywords: corpus linguistics, historical linguistics, Old English, parallel corpus
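The prefix-stem-ending search that Norna performs can be illustrated with a deliberately simplified sketch. The prefixes, endings, and lemma index below are invented toy entries, not the contents of Nerthus or the Dictionary of Old English Corpus:

```python
# Toy dictionary-based lemmatiser in the spirit of Norna: strip known
# prefixes and inflectional endings, then look candidate stems up in a
# lemma index. All entries are invented for illustration only.

PREFIXES = ["ge", "be", "for"]
ENDINGS = ["um", "as", "es", "an", "e", "a"]
LEMMA_INDEX = {"cyning": "cyning", "stan": "stān", "luf": "lufian"}

def lemmatise(form: str) -> str | None:
    candidates = {form}
    for p in PREFIXES:                       # strip possible prefixes
        if form.startswith(p):
            candidates.add(form[len(p):])
    for c in list(candidates):               # strip possible endings
        for e in sorted(ENDINGS, key=len, reverse=True):
            if c.endswith(e):
                candidates.add(c[: -len(e)])
    for c in sorted(candidates, key=len, reverse=True):
        if c in LEMMA_INDEX:                 # prefer the longest match
            return LEMMA_INDEX[c]
    return None                              # unresolved: manual review

print(lemmatise("cyningas"))  # -> cyning
print(lemmatise("gelufe"))    # -> lufian (via hypothetical stem 'luf')
```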
Procedia PDF Downloads 213
1158 Copyright Infringement for Academic Authorship in Uganda: Implications on Exemptions of Fair Use for Educational Purposes in Universities
Authors: Elisam Magara
Abstract:
Like any other property, Intellectual Property (IP) must be regarded, respected, and remunerated to address the historical, ethical, economic, and informational needs of society. Article 26 of the Constitution of the Republic of Uganda 1995, the Copyright and Neighbouring Rights (CNR) Act 2006, and the CNR Regulations 2010 guide copyright protection in Uganda. However, an unpredictable environment has negatively impacted certain author/intellectual freedoms, and infringements of academic works affect the economic rights of authors and limit them from fully enjoying the benefits of authorship. Notwithstanding the different licensing systems and copyright protection avenues, educational institutions and custodians of copyright works (libraries, archives) have continued to advocate for open access to information resources under the legal exceptions of fair use for educational purposes. Thus, a study was conducted in educational institutions, libraries, and archives in Uganda to assess the state of copyright infringement in Uganda amid the increased use of academically authored works. The study attempted to establish the nature and forms of copyright infringement and the circumstances under which it occurs, and assessed the custodians' opinions on strategies for balancing copyright protection, for the economic and moral gains of authors, with increased access to information for educational purposes and fair use. Through a survey using a self-administered questionnaire, interviews, and physical visits, the study was conducted in higher education institutions, libraries, and archives among the officers that manage and keep copyright works. It established that the uncontrolled reproduction of copyright works in educational and information institutions has contributed to copyright infringement, robbing authors of their potential economic earnings and limiting their academic innovativeness and creativity. The study also established that a lack of consciousness and awareness of copyright issues among lecturers, universities, and libraries has made copyright works in universities highly susceptible to copyright infringement. Thus, increased access to materials without restrictions has resulted in copyright infringement among educational institutions, libraries, and archives. A strategic alliance of the collecting society (Uganda Reproduction Rights Organisation, URRO), government, universities, and rights-holder organisations (UTANA), working together to institute a programme addressing copyright protection and access to information, is pertinently required.
Keywords: access to information, academic writing, copyright, copyright infringement, copyright protection, exemptions of fair use, intellectual property rights
Procedia PDF Downloads 456
1157 Population Pharmacokinetics of Levofloxacin and Moxifloxacin, and the Probability of Target Attainment in Ethiopian Patients with Multi-Drug Resistant Tuberculosis
Authors: Temesgen Sidamo, Prakruti S. Rao, Eleni Akllilu, Workineh Shibeshi, Yumi Park, Yong-Soon Cho, Jae-Gook Shin, Scott K. Heysell, Stellah G. Mpagama, Ephrem Engidawork
Abstract:
The fluoroquinolones (FQs) are used off-label for the treatment of multidrug-resistant tuberculosis (MDR-TB) and are under evaluation for shortening the duration of treatment of drug-susceptible TB in recently prioritized regimens. Within the class, levofloxacin (LFX) and moxifloxacin (MXF) play a substantial role in ensuring successful treatment outcomes. However, sub-therapeutic plasma concentrations of either LFX or MXF may drive unfavorable treatment outcomes. To the best of our knowledge, the pharmacokinetics of LFX and MXF in Ethiopian patients with MDR-TB have not yet been investigated. Therefore, the aim of this study was to develop a population pharmacokinetic (PopPK) model of LFX and MXF and to assess the percent probability of target attainment (PTA), defined by the ratio of the area under the plasma concentration-time curve over 24 h (AUC0-24) to the in vitro minimum inhibitory concentration (MIC) (AUC0-24/MIC), in Ethiopian MDR-TB patients. Steady-state plasma samples were collected from 39 MDR-TB patients enrolled in the programmatic treatment course, and drug concentrations were determined using optimized liquid chromatography-tandem mass spectrometry. In addition, the in vitro MIC of the patients' pretreatment clinical isolates was determined. PopPK modeling and simulations were run at various doses, and PK parameters were estimated. The effect of covariates on the PK parameters and the PTA for maximum mycobacterial kill and resistance prevention were also investigated. LFX and MXF both fit one-compartment models with adjustments. The apparent volume of distribution (V) and clearance (CL) of LFX were influenced by serum creatinine (Scr), whereas the absorption constant (Ka) and V of MXF were influenced by Scr and BMI, respectively. The PTA for LFX maximal mycobacterial kill at the critical MIC of 0.5 mg/L was 29%, 62%, and 95% with the simulated 750 mg, 1000 mg, and 1500 mg doses, respectively, whereas the PTA for resistance prevention at 1500 mg was only 4.8%, with none of the lower doses achieving this target. At the critical MIC of 0.25 mg/L, there was no difference in the PTA (94.4%) for maximum bacterial kill among the simulated doses of MXF (600 mg, 800 mg, and 1000 mg), but the PTA for resistance prevention improved proportionately with dose. Standard LFX and MXF doses may not provide adequate drug exposure. LFX PopPK is more predictable for maximum mycobacterial kill, whereas MXF's resistance-prevention target attainment increases with dose. Scr and BMI are likely to be important covariates in dose optimization or therapeutic drug monitoring (TDM) studies in Ethiopian patients.
Keywords: population PK, PTA, moxifloxacin, levofloxacin, MDR-TB patients, Ethiopia
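The PTA computation can be sketched as a Monte Carlo simulation over a one-compartment model at steady state. The typical clearance, its variability, and the AUC0-24/MIC target below are assumed placeholder values, not the estimates reported by this study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000                 # simulated patients
dose_mg = 1000           # LFX dose to evaluate
mic = 0.5                # critical MIC, mg/L
target = 146             # AUC0-24/MIC target for maximal kill (assumed)

# Log-normal population clearance (placeholder typical value and variability)
cl = rng.lognormal(mean=np.log(7.0), sigma=0.3, size=n)   # clearance, L/h
f = 0.99                                                  # oral bioavailability (assumed)

# At steady state with once-daily dosing, AUC0-24 = F * Dose / CL
auc = f * dose_mg / cl
pta = np.mean(auc / mic >= target)
print(f"PTA at MIC {mic} mg/L, dose {dose_mg} mg: {pta:.1%}")
```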
Procedia PDF Downloads 120
1156 Approaches to Inducing Obsessional Stress in Obsessive-Compulsive Disorder (OCD): An Empirical Study with Patients Undergoing Transcranial Magnetic Stimulation (TMS) Therapy
Authors: Lucia Liu, Matthew Koziol
Abstract:
Obsessive-compulsive disorder (OCD), a long-lasting anxiety disorder involving recurrent, intrusive thoughts, affects over 2 million adults in the United States. Transcranial magnetic stimulation (TMS) stands out as a noninvasive, cutting-edge therapy that has been shown to reduce symptoms in patients with treatment-resistant OCD. The Food and Drug Administration (FDA)-approved protocol pairs TMS sessions with individualized symptom provocation, aiming to improve the susceptibility of brain circuits to stimulation. However, limited standardization or guidance exists on how to conduct symptom provocation and which methods are most effective. This study aims to compare the effects of internal versus external techniques for inducing obsessional stress in a clinical setting during TMS therapy. Two symptom provocation methods, (i) asking patients thought-provoking questions about their obsessions (internal) and (ii) requesting patients to perform obsession-related tasks (external), were employed in a crossover design with repeated measurements. Thirty-six treatments of NeuroStar TMS were administered to each of two patients over 8 weeks in an outpatient clinic. Patient One received 18 sessions of internal provocation followed by 18 sessions of external provocation, while Patient Two received 18 sessions of external provocation followed by 18 sessions of internal provocation. The primary outcome was the level of self-reported obsessional stress on a visual analog scale from 1 to 10. The secondary outcome was self-reported OCD severity, collected biweekly on a four-level Likert scale (1 to 4) of bad, fair, good, and excellent. Outcomes were compared and tested between provocation arms through repeated-measures ANOVA, accounting for intra-patient correlations. Ages were 42 for Patient One (male, White) and 57 for Patient Two (male, White). Both patients had similar moderate symptoms at baseline, as determined through the Yale-Brown Obsessive Compulsive Scale (YBOCS). When comparing obsessional stress induced across the two arms of internal and external provocation methods, the mean (SD) was 6.03 (1.18) for internal and 4.01 (1.28) for external strategies (P=0.0019); ranges were 3 to 8 for internal and 2 to 8 for external strategies. Internal provocation yielded 5 (31.25%) bad, 6 (33.33%) fair, 3 (18.75%) good, and 2 (12.5%) excellent responses for OCD status, while external provocation yielded 5 (31.25%) bad, 9 (56.25%) fair, 1 (6.25%) good, and 1 (6.25%) excellent responses (P=0.58). Internal symptom provocation tactics had a significantly stronger impact on inducing obsessional stress and led to better OCD status, although the latter difference was not significant. This could be attributed to the fact that answering questions may prompt patients to reflect more on their lived experiences and struggles with OCD. In the future, clinical trials with larger sample sizes are warranted to validate this finding. The results support the increased integration of internal methods into structured provocation protocols, potentially reducing the time required for provocation and achieving greater treatment response to TMS.
Keywords: obsessive-compulsive disorder, transcranial magnetic stimulation, mental health, symptom provocation
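A minimal sketch of the repeated-measures comparison described above, using statsmodels' AnovaRM on invented session-level scores (the study's raw data are not reproduced here); averaging sessions within each patient-by-method cell respects the intra-patient correlation:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Illustrative session-level data: two patients, 18 sessions per arm.
# Stress scores are invented, not the study's actual values.
rng = np.random.default_rng(0)
rows = []
for patient in ["P1", "P2"]:
    for method, mu in [("internal", 6.0), ("external", 4.0)]:
        for _session in range(18):
            rows.append({"patient": patient, "method": method,
                         "stress": float(rng.normal(mu, 1.2))})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA; sessions are averaged within patient x method.
res = AnovaRM(df, depvar="stress", subject="patient",
              within=["method"], aggregate_func="mean").fit()
print(res)
```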
Procedia PDF Downloads 57
1155 Intracranial Hypotension: A Brief Review of the Pathophysiology and Diagnostic Algorithm
Authors: Ana Bermudez de Castro Muela, Xiomara Santos Salas, Silvia Cayon Somacarrera
Abstract:
The aim of this review is to explain what intracranial hypotension is and what its main causes are, and to approach the diagnostic management of the different clinical situations, with an understanding of the radiological findings and the physiopathological substrate. An approach to the diagnostic management is presented: the guidelines to follow, the different tests available, and the typical findings. We reviewed the myelo-CT and myelo-MRI studies performed in patients with suspected CSF fistula or hypotension of unknown cause during the last 10 years in three centers. Signs of intracranial hypotension (subdural hygromas/hematomas, pachymeningeal enhancement, venous sinus engorgement, pituitary hyperemia, and lowering of the brain) that are evident on baseline CT and MRI were also sought. Intracranial hypotension is defined as an opening pressure lower than 6 cmH₂O. It is a relatively rare disorder, with an annual incidence of 5 per 100,000 and a female-to-male ratio of 2:1. The clinical hallmark is orthostatic headache, which is defined as the development or aggravation of headache when patients move from a supine to an upright position, and which typically disappears or is relieved after lying down. The etiology is a decrease in the amount of cerebrospinal fluid (CSF), usually through its loss, either spontaneous or secondary (post-traumatic, post-surgical, systemic disease, post-lumbar puncture, etc.); rhinorrhea and/or otorrhea may be present. The pathophysiological mechanisms of hypotension and CSF hypertension are interrelated, as a situation of hypertension may lead to hypotension secondary to spontaneous CSF leakage. The diagnostic management of intracranial hypotension in our center includes, in the case of spontaneous hypotension without rhinorrhea and/or otorrhea, and according to necessity, a range of available tests performed from less to more complex: cerebral CT, cerebral and spinal MRI without contrast, and CT/MRI with intrathecal contrast. In a situation of intracranial hypotension with the presence of rhinorrhea/otorrhea, a sample can be obtained for the detection of β2-transferrin, which is found physiologically in the CSF, and sinus CT and cerebral MRI, including constructive interference in steady state (CISS) sequences, can be performed. If necessary, cisternography studies are performed to locate the exact point of leakage. It is important to emphasize the significance of myelo-CT/MRI in establishing the diagnosis and location of the CSF leak, which is indispensable for therapeutic planning (whether surgical or not) in patients with more than one lesion or with doubts in the baseline tests.
Keywords: cerebrospinal fluid, neuroradiology brain, magnetic resonance imaging, fistula
Procedia PDF Downloads 127
1154 Characterization and Modelling of Groundwater Flow towards a Public Drinking Water Well Field: A Case Study of Ter Kamerenbos Well Field
Authors: Buruk Kitachew Wossenyeleh
Abstract:
Groundwater is the largest freshwater reservoir in the world. Like the other reservoirs of the hydrologic cycle, it is a finite resource. This study focused on groundwater modeling of the Ter Kamerenbos well field to understand the groundwater flow system and the impact of different scenarios. The study area covers 68.9 km² in the Brussels Capital Region and is situated in two river catchments, i.e., the Zenne River and the Woluwe Stream. The aquifer system has three layers, but in the modeling they are considered as one layer due to their hydrogeological properties. The catchment aquifer system is replenished by direct recharge from rainfall. The groundwater recharge of the catchment is determined using the spatially distributed water balance model WetSpass, and it varies annually from zero to 340 mm. This groundwater recharge is used as the top boundary condition for the groundwater modeling of the study area. During the groundwater modeling with Processing MODFLOW, constant-head boundary conditions are used at the north and south boundaries of the study area, and head-dependent flow boundary conditions at the east and west boundaries. The groundwater model was calibrated manually and automatically using observed hydraulic heads in 12 observation wells. The model performance evaluation showed a root mean square error of 1.89 m and an NSE of 0.98. The head contour map of the simulated hydraulic heads indicates the flow direction in the catchment, mainly from the Woluwe to the Zenne catchment. The simulated head in the study area varies from 13 m to 78 m. The higher hydraulic heads are found in the southwest of the study area, which has forest as its land-use type. This calibrated model was run for a climate change scenario and a well operation scenario. Climate change may cause the groundwater recharge to increase by 43% or decrease by 30% in 2100 relative to current conditions under the high and low climate change scenarios, respectively. The groundwater head varies from 13 m to 82 m under the high climate change scenario, and from 13 m to 76 m under the low scenario. If a doubling of the pumping discharge is assumed, the groundwater head varies from 13 m to 76.5 m; however, if a shutdown of the pumps is assumed, the head varies in the range of 13 m to 79 m. It is concluded that the groundwater model performs satisfactorily, with some limitations, and the model output can be used to understand the aquifer system under steady-state conditions. Finally, some recommendations are made for the future use and improvement of the model.
Keywords: Ter Kamerenbos, groundwater modelling, WetSpass, climate change, well operation
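The two fit statistics reported for the calibration follow their standard definitions; a short sketch with invented observed and simulated heads at the 12 observation wells:

```python
import numpy as np

# Observed vs. simulated heads at the 12 observation wells (values invented)
obs = np.array([14.2, 20.1, 33.5, 41.0, 47.8, 52.3,
                58.9, 61.2, 66.4, 70.0, 74.5, 77.6])
sim = np.array([13.0, 21.5, 32.0, 42.8, 46.1, 53.9,
                57.2, 63.0, 65.0, 71.8, 73.9, 79.0])

rmse = np.sqrt(np.mean((obs - sim) ** 2))
# Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean
nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
print(f"RMSE = {rmse:.2f} m, NSE = {nse:.2f}")
```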
Procedia PDF Downloads 153
1153 A Systematic Map of the Research Trends in Wildfire Management in Mediterranean-Climate Regions
Authors: Renata Martins Pacheco, João Claro
Abstract:
Wildfires are becoming an increasing concern worldwide, causing substantial social, economic, and environmental disruptions. This situation is especially relevant in Mediterranean-climate regions, present on all five continents, in which fire is not only a natural component of the environment but also perhaps one of the most important evolutionary forces. The rise in wildfire occurrences and their associated impacts suggests the need to identify knowledge gaps and enhance the basis of scientific evidence on how managers and policymakers may act effectively to address them. Considering that the main goal of a systematic map is to collate and catalog a body of evidence to describe the state of knowledge for a specific topic, it is a suitable approach for this purpose. In this context, the aim of this study is to systematically map the research trends in wildfire management practices in Mediterranean-climate regions. A total of 201 wildfire management studies were analyzed and systematically mapped in terms of their: year of publication; place of study; scientific outlet; research area (Web of Science) or research field (Scopus); wildfire phase; central research topic; main objective of the study; research methods; and main conclusions or contributions. The results indicate that an increasing number of studies is being developed on the topic (most from the last 10 years), but more than half of them were conducted in a few Mediterranean countries (60% of the analyzed studies were conducted in Spain, Portugal, Greece, Italy, or France), and more than 50% focus on pre-fire issues, such as prevention and fuel management. In contrast, only 12% of the studies focused on “Economic modeling” or “Human factors and issues,” which suggests that the triple bottom line of the sustainability argument (social, environmental, and economic) is not being fully addressed by fire management research. More than one-fourth of the studies had objectives related to testing new approaches in fire or forest management, suggesting that new knowledge is being produced in the field. Nevertheless, the results indicate that most studies (about 84%) employed quantitative research methods, and only 3% used research methods that tackled social issues or addressed expert and practitioner knowledge. Perhaps this lack of multidisciplinary studies is one of the factors hindering more progress from being made in terms of reducing wildfire occurrences and their impacts.
Keywords: wildfire, Mediterranean-climate regions, management, policy
Procedia PDF Downloads 124
1152 Estimating Multidimensional Water Poverty Index in India: The Alkire Foster Approach
Authors: Rida Wanbha Nongbri, Sabuj Kumar Mandal
Abstract:
The Sustainable Development Goals (SDGs) for 2016-2030 were adopted as a follow-up to the Millennium Development Goals (MDGs) and include a focus on access to sustainable water and sanitation. For over a decade, water has been a significant subject explored in various facets of life. Our day-to-day life is significantly impacted by water poverty at the socio-economic level. Reducing water poverty is an important policy challenge, particularly in emerging economies like India, owing to population growth and huge variation in topography and climatic factors. To design appropriate water policies and assess their effectiveness, a proper measurement of water poverty is essential. Against this backdrop, this study uses the Alkire-Foster (AF) methodology to estimate a multidimensional water poverty index for India at the household level. The methodology captures several attributes to understand the complex issues related to households' water deprivation. The study employs two rounds of Indian Human Development Survey data (IHDS 2005 and 2012) and focuses on four dimensions of water poverty, namely water access, water quantity, water quality, and water capacity, with seven indicators capturing these four dimensions. In order to quantify water deprivation at the household level, the AF dual cut-off counting method is applied, and the Multidimensional Water Poverty Index (MWPI) is calculated as the product of the headcount ratio (incidence) and the average share of weighted deprivations among the poor (intensity). The results identify deprivation across all dimensions at the country level and show that a large proportion of households in India is deprived of quality water and suffers from poor water access in both the 2005 and 2012 survey rounds. The comparison between rural and urban households shows that a higher proportion of rural households is multidimensionally water poor compared to their urban counterparts. Among the four dimensions of water poverty, water quality is found to be the most significant for both rural and urban households. In the 2005 round, almost 99.3% of households were water poor in at least one of the four dimensions, and among the water-poor households, the intensity of water poverty was 54.7%. These values do not change significantly in the 2012 round, but significant differences can be observed across the dimensions. States like Bihar, Tamil Nadu, and Andhra Pradesh rank highest in terms of MWPI, whereas Sikkim, Arunachal Pradesh, and Chandigarh rank lowest in the 2005 round. Similarly, in the 2012 round, Bihar, Uttar Pradesh, and Orissa rank highest in terms of MWPI, whereas Goa, Nagaland, and Arunachal Pradesh rank lowest. The policy implications of this study can be multifaceted. It can urge policymakers either to focus on impoverished households with lower intensity levels of water poverty, to minimize the total number of water-poor households, or to focus on those households with a high intensity of water poverty, to achieve an overall reduction in MWPI.
Keywords: Alkire-Foster (AF) methodology, deprivation, dual cut-off, multidimensional water poverty index (MWPI)
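The dual cut-off computation of MWPI = H × A can be sketched directly. The deprivation matrix, the equal indicator weights, and the poverty cut-off k below are illustrative assumptions, not the study's calibration:

```python
import numpy as np

# Rows = households, columns = 7 indicators (1 = deprived); invented data.
g0 = np.array([
    [1, 1, 0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 0, 1],
    [0, 1, 0, 0, 0, 0, 0],
])
w = np.full(7, 1 / 7)    # equal indicator weights (an assumption)
k = 1 / 3                # poverty cut-off on the weighted deprivation share

score = g0 @ w           # each household's weighted deprivation score
poor = score >= k        # dual cut-off: indicator-level cut-offs, then k

H = poor.mean()                                 # incidence (headcount ratio)
A = score[poor].mean() if poor.any() else 0.0   # intensity among the poor
MWPI = H * A                                    # adjusted headcount M0
print(f"H = {H:.2f}, A = {A:.2f}, MWPI = {MWPI:.3f}")
```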
Procedia PDF Downloads 70
1151 Data Collection in Protected Agriculture for Subsequent Big Data Analysis: Methodological Evaluation in Venezuela
Authors: Maria Antonieta Erna Castillo Holly
Abstract:
During the last decade, data analysis, strategic decision-making, and the use of artificial intelligence (AI) tools in Latin American agriculture have been a challenge. In some countries, the availability, quality, and reliability of historical data, in addition to the current data-recording methodology in the field, make it difficult to use information systems, perform complete data analysis, and support the right strategic decisions. This is essential in Agriculture 4.0, where the increase in global demand for fresh agricultural products of tropical origin during all seasons of the year requires a change in the production model and greater agility in responding to consumer market demands for quality, quantity, traceability, and sustainability, all of which require extensive data. Having quality information available and updated in real time on what, how much, how, when, where, and at what cost production takes place, together with compliance with production quality standards, represents the greatest challenge for sustainable and profitable agriculture in the region. The objective of this work is to present a methodological proposal for the collection of georeferenced data from the protected agriculture sector, specifically in production units (UP) with tall structures (greenhouses), initially for Venezuela, taking the state of Mérida as the geographical framework and horticultural products as target crops. The document presents some background information and explains the methodology and tools used in the three phases of the work: diagnosis, data collection, and analysis. As a result, an evaluation of the process is carried out, relevant data and dashboards are displayed, and the first satellite maps integrated with layers of information in a geographic information system are presented. Finally, some improvement proposals and tentatively recommended applications are added to the process, with the objective of providing better-qualified and traceable georeferenced data for subsequent analysis of the information and more agile and accurate strategic decision-making. One of the main points of this study is the lack of quality data treatment in Latin America, and especially in the Caribbean basin; one of the most important issues is how to manage the lack of complete official data. The methodology has been tested with horticultural products, but it can be extended to other tropical crops.
Keywords: greenhouses, protected agriculture, data analysis, geographic information systems, Venezuela
Procedia PDF Downloads 133
1150 Climate Change and Rural-Urban Migration in Brazilian Semiarid Region
Authors: Linda Márcia Mendes Delazeri, Dênis Antônio Da Cunha
Abstract:
Over the past few years, the evidence that human activities have altered the concentration of greenhouse gases in the atmosphere has become stronger, indicating that this accumulation is the most likely cause of the climate change observed so far. The risks associated with climate change, although uncertain, have the potential to increase social vulnerability, exacerbating existing socioeconomic challenges. Developing countries are potentially the most affected by climate change, since they have less capacity to adapt and are the most dependent on agricultural activities, one of the sectors in which the major negative impacts are expected. In Brazil, specifically, the localities that form the semiarid region are expected to be among the most affected, due to the existing irregularity of rainfall and high temperatures, in addition to economic and social factors endemic to the region. Given the strategic limitations in handling the environmental shocks caused by climate change, an alternative adopted in response to these shocks is migration. Understanding the specific features of migration flows, such as duration, destination, and composition, is essential to understand the impacts of migration on origin and destination locations and to develop appropriate policies. Thus, this study aims to examine whether climatic factors have contributed to rural-urban migration in semiarid municipalities in the recent past and how these migration flows will be affected by future scenarios of climate change. The study was based on the microeconomic theory of utility maximization, in which, in deciding whether to leave the countryside and move to an urban area, the individual seeks to maximize his or her utility. Analytically, we estimated an econometric model using fixed-effects modeling, and the results confirmed the expectation that climate drivers are crucial to the occurrence of rural-urban migration. Other drivers of the migration process, such as economic, social, and demographic factors, were also important. Additionally, predictions of rural-urban migration motivated by variations in temperature and precipitation under the climate change scenarios RCP 4.5 and RCP 8.5 were made for the periods 2016-2035 and 2046-2065, as defined by the Intergovernmental Panel on Climate Change (IPCC). The results indicate that there will be increased rural-urban migration in the semiarid region in both scenarios and in both periods. In general, the results of this study reinforce the need to formulate public policies that avert migration for climatic reasons, such as policies that support income-generating productive activities in rural areas. By providing greater incentives for family agriculture and expanding sources of credit, farmers will be in a better position to face climatic adversities and to settle in rural areas. Ultimately, if migration becomes necessary, policies should be adopted that seek an organized and planned development of urban areas, considering migration as an adaptation strategy to adverse climate effects. Thus, policies that act to absorb migrants in urban areas and ensure that they have access to the basic services offered to the urban population would contribute to reducing the social costs of climate variability.
Keywords: climate change, migration, rural productivity, semiarid region
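A minimal sketch of a two-way fixed-effects specification of the kind described, estimated on an invented municipality-year panel; the covariates, simulated coefficients, and clustering choice are illustrative assumptions, not the study's actual specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented municipality-year panel for illustration
rng = np.random.default_rng(1)
n_mun, n_years = 50, 10
df = pd.DataFrame({
    "municipality": np.repeat(np.arange(n_mun), n_years),
    "year": np.tile(np.arange(2000, 2000 + n_years), n_mun),
})
df["temp"] = rng.normal(27, 1.5, len(df))       # mean temperature, degC
df["precip"] = rng.normal(600, 150, len(df))    # annual rainfall, mm
df["gdp_pc"] = rng.normal(10, 2, len(df))       # income proxy
df["migration"] = (0.8 * df["temp"] - 0.01 * df["precip"]
                   - 0.5 * df["gdp_pc"] + rng.normal(0, 2, len(df)))

# Two-way fixed effects via municipality and year dummies,
# with standard errors clustered by municipality
fe = smf.ols("migration ~ temp + precip + gdp_pc + C(municipality) + C(year)",
             data=df).fit(cov_type="cluster",
                          cov_kwds={"groups": df["municipality"]})
print(fe.params[["temp", "precip", "gdp_pc"]])
```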
Procedia PDF Downloads 352
1149 Farmers’ Perception, Willingness and Capacity in Utilization of Household Sewage Sludge as Organic Resources for Peri-Urban Agriculture around Jos Nigeria
Authors: C. C. Alamanjo, A. O. Adepoju, H. Martin, R. N. Baines
Abstract:
Peri-urban agriculture in Jos, Nigeria, serves as a major means of livelihood for both the urban and peri-urban poor and constitutes a substantial commercial activity with a target market that spans beyond Plateau State. Yet the sustainability of this sector is threatened by the intensive application of urban refuse ash contaminated with heavy metals, a result of the highly heterogeneous materials used in ash production. Hence, this research aimed to understand the fertilizers currently employed by farmers, their perception of and willingness to use household sewage sludge for agricultural purposes, and their capacity to mitigate the risks associated with such practice. A mixed-methods approach was adopted, and the data collection tools included a survey questionnaire, focus group discussions with farmers, and participant and field observation. The study identified that farmers maintain a complex mixture of organic and chemical fertilizers, with a mixture composition that depends on fertilizer availability and affordability. Farmers have also decreased their rate of utilization of urban refuse ash due to labor and increased logistics costs, and are keen to utilize household sewage sludge for soil fertility improvement but are mainly constrained by the accessibility of this waste product. Nevertheless, farmers near sewage disposal points have commenced utilizing household sewage sludge to improve soil fertility. Farmers were knowledgeable about composting but found their strategic method of dewatering and sun-drying more convenient. Irrigation farmers were not enthusiastic about treatment, as they desired both the water and the sludge. Secondly, the household sewage sludge observed in the field is heterogeneous due to the proximity of its disposal point to that of urban refuse, which raises concern about possible cross-contamination with pollutants and also reveals a lack of extension guidance regarding the treatment and management of household sewage sludge for agricultural purposes. Hence, farmers' concerns need to be addressed, particularly by providing extension advice and establishing decentralized household sewage sludge collection centers for the continuous availability of liquid and concentrated sludge. There is also an urgent need for the Federal Government of Nigeria to increase its commitment to empowering its subsidiary agencies for the efficient discharge of their corporate responsibilities.
Keywords: ash, farmers, household, peri-urban, refuse, sewage, sludge, urban
Procedia PDF Downloads 140
1148 Hui as Religious over Ethnic Identity: A Case Study of Muslim Ethnic Interaction in Central Northwest China
Authors: Hugh Battye
Abstract:
In recent years, Muslim identity in China has strengthened against the backdrop of a worldwide Islamic revival. One discussion arising from this has focused on the Hui, an ethnicity created by the Communist government in the 1950s covering the Chinese-speaking 'Sino-Muslims', as opposed to those Muslim groups with their own languages. While the term Hui in Chinese has traditionally meant 'Muslim', the strengthening of Hui identity in recent decades has led to a debate among scholars as to whether this identity is primarily ethnically or religiously driven. This article looks at the case of a mixed ethnic community in rural Gansu Province, Central Northwest China, which contains not only the official Hui ethnicity but also members of the smaller Muslim Salar and Bonan minority groups. In analyzing the close interaction between these groups, the paper argues that, despite government attempts to promote the Hui as an ethnicity within its modern ethnic paradigm, in rural Gansu and the wider region Hui is still essentially seen as a religious identity. Having provided an overview of the historical evolution of the Hui ethnonym in China and presented the views of some of the important scholars involved in the discussion, the paper then offers its findings based on participant observation and survey work in Gansu. The results show, firstly, that for the local Muslims religious identity clearly dominates ethnic identity. On the ground, the term Hui continues to be used as a catch-all term for Muslims, whether they belong to the official 'Hui' nationality or not, and against this backdrop the ethnic importance of being 'Hui', 'Bonan', or 'Salar' within the Muslim community itself is by contrast minimal. Secondly, however, this local Muslim solidarity does not at present point towards some kind of national pan-ethnic Islamic movement that could potentially set itself up in opposition to the Chinese government; rather, it is better seen as part of an ongoing negotiation by local Muslims with the state in the context of its ascribed ethnic categories. The findings of this study, from a region where many of the Muslims are more conservative in their beliefs, are not necessarily replicated in other contexts, such as urban areas and eastern and southern China; hence, reification of the term Hui as one idea extending all across China should be avoided, whether in terms of a united religious 'ummah' or of a real or imagined 'ethnic group.' Rather, this localized case study seeks to demonstrate ways in which Muslims of rural Central Northwest China are 'being Hui,' as a contribution to the broader discussion on what it means to be Muslim and Chinese in the reform era.
Keywords: China, ethnicity, Hui, identity, Muslims
Procedia PDF Downloads 127
1147 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA and stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple-instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to each genome's influence on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to processed data through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
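Step (i) is the easiest to make concrete: reads are tokenized into overlapping k-mers, which then play the role of words for an embedding model. A minimal sketch with invented reads and k = 4 (the pipeline's actual k and embedding model may differ):

```python
from collections import Counter

def kmer_tokenize(read: str, k: int = 4) -> list[str]:
    """Split a read into overlapping k-mers, the 'words' of the DNA corpus."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# Hypothetical short reads standing in for fastq records
reads = ["ACGTACGGA", "TTACGTACG", "GGACGTTAC"]

corpus = [kmer_tokenize(r) for r in reads]   # one 'sentence' per read
vocab = Counter(kmer for sentence in corpus for kmer in sentence)
print(vocab.most_common(5))
# The sentences in `corpus` can now be fed to a word-embedding model
# (e.g. gensim's Word2Vec) to learn numerical k-mer representations.
```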
Procedia PDF Downloads 126
1146 Flood Mapping Using Height above the Nearest Drainage Model: A Case Study in Fredericton, NB, Canada
Authors: Morteza Esfandiari, Shabnam Jabari, Heather MacGrath, David Coleman
Abstract:
Flooding is a severe issue in many places in the world, including the city of Fredericton, New Brunswick, Canada. The downtown area of Fredericton is close to the Saint John River, which is susceptible to flooding around May every year. Recently, the frequency of flooding seems to have increased, especially after the downtown area and surrounding urban/agricultural lands were flooded in two consecutive years, 2018 and 2019. In order to have an explicit view of the flood extent and the damage to affected areas, it is necessary to use either flood inundation modelling or satellite data. Due to the contingent availability and weather dependency of optical satellites, and the limited existing data owing to the high cost of hydrodynamic models, it is not always feasible to rely on these data sources to generate quality flood maps during or after the catastrophe. Height Above the Nearest Drainage (HAND), a state-of-the-art topo-hydrological index, normalizes the elevation of a basin relative to the stream network and specifies the gravitational, or relative, drainage potential of an area. HAND is the relative height difference between the stream network and each cell of a Digital Terrain Model (DTM). The stream layer is produced through a multi-step, time-consuming process which, depending on the topographic complexity of the region, does not always result in an optimal representation of the river centerline. HAND has been used in numerous case studies with quite acceptable, though sometimes unexpected, results caused by natural and human-made features on the surface of the earth. Some of these features can disturb the generated model, and consequently the model might not predict the flow accurately. We propose to include a pre-existing stream layer generated by the Province of New Brunswick and to benefit from culvert maps in order to improve the water flow simulation and, accordingly, the accuracy of the HAND model. By considering these parameters in our processing, we were able to increase the accuracy of the model from nearly 74% to almost 92%. The improved model can be used to generate highly accurate flood maps, which are necessary for future urban planning and flood damage estimation, without any need for satellite imagery or hydrodynamic computations.
Keywords: HAND, DTM, rapid floodplain, simplified conceptual models
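The core definition can be sketched on a toy grid: subtract from each DTM cell the elevation of its associated stream cell. The sketch below uses a Euclidean nearest-neighbour shortcut for brevity, whereas production HAND implementations trace flow paths (e.g., D8 directions) over the DTM; all grid values are invented:

```python
import numpy as np
from scipy import ndimage

# Toy DTM (m) and a stream mask (True where a cell lies on the drainage network)
dtm = np.array([[10., 9., 8., 7.],
                [11., 9., 7., 6.],
                [12., 10., 8., 5.],
                [13., 11., 9., 4.]])
stream = np.zeros_like(dtm, dtype=bool)
stream[:, 3] = True                     # a river running down the last column

# Indices of the nearest stream cell for every grid cell (Euclidean shortcut;
# a real HAND implementation follows flow directions instead)
_, (iy, ix) = ndimage.distance_transform_edt(~stream, return_indices=True)
hand = dtm - dtm[iy, ix]                # height above the nearest drainage
print(hand)
```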
Procedia PDF Downloads 153
1145 A Method to Evaluate and Compare Web Information Extractors
Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman
Abstract:
Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data that are gathered from the Web. Sometimes these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components that are typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents; b) it provides a method that relies on statistically sound tests to support the conclusions drawn; previous work does not provide clear guidelines or recommend statistically sound tests, but rather a survey that collects many features to take into account as well as related work; c) we provide a novel method to compute the performance measures for unsupervised proposals, which would otherwise require the intervention of a user to compute them by using the annotations on the evaluation sets and the information extracted. Our contributions will definitely help researchers in this area make sure that they have advanced the state of the art not only conceptually, but from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
Keywords: web information extractors, information extraction evaluation method, Google Scholar, web
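One statistically sound comparison of the kind advocated here can be sketched as per-document F1 scores for two extractors, compared with a paired non-parametric test. The counts and the choice of the Wilcoxon signed-rank test are illustrative assumptions, not the paper's prescribed procedure:

```python
import numpy as np
from scipy.stats import wilcoxon

def f1(tp: int, fp: int, fn: int) -> float:
    p = tp / (tp + fp) if tp + fp else 0.0   # precision
    r = tp / (tp + fn) if tp + fn else 0.0   # recall
    return 2 * p * r / (p + r) if p + r else 0.0

# Per-document (tp, fp, fn) counts for two extractors on the same
# evaluation set -- invented numbers for illustration.
ex_a = [(8, 2, 1), (5, 1, 3), (9, 0, 2), (7, 3, 1), (6, 2, 2), (8, 1, 1)]
ex_b = [(7, 3, 2), (4, 2, 4), (8, 1, 3), (6, 4, 2), (5, 3, 3), (7, 2, 2)]

f1_a = np.array([f1(*c) for c in ex_a])
f1_b = np.array([f1(*c) for c in ex_b])

# Paired, non-parametric test on the per-document F1 differences
stat, pval = wilcoxon(f1_a, f1_b)
print(f"mean F1 A = {f1_a.mean():.3f}, B = {f1_b.mean():.3f}, p = {pval:.3f}")
```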
Procedia PDF Downloads 248
1144 Study of Oxidative Processes in Blood Serum in Patients with Arterial Hypertension
Authors: Laura M. Hovsepyan, Gayane S. Ghazaryan, Hasmik V. Zanginyan
Abstract:
Hypertension (HD) is the most common cardiovascular pathology that causes disability and mortality in the working population. Most often, heart failure (HF), which is based on myocardial remodeling, leads to death in hypertension. Recently, endothelial dysfunction (EDF) or a violation of the functional state of the vascular endothelium has been assigned a significant role in the structural changes in the myocardium and the occurrence of heart failure in patients with hypertension. It has now been established that tissues affected by inflammation form increased amounts of superoxide radical and NO, which play a significant role in the development and pathogenesis of various pathologies. They mediate inflammation, modify proteins and damage nucleic acids. The aim of this work was to study the processes of oxidative modification of proteins (OMP) and the production of nitric oxide in hypertension. In the experimental work, the blood of 30 donors and 33 patients with hypertension was used. For the quantitative determination of OMP products, the based on the reaction of the interaction of oxidized amino acid residues of proteins and 2,4-dinitrophenylhydrazine (DNPH) with the formation of 2,4-dinitrophenylhydrazones, the amount of which was determined spectrophotometrically. The optical density of the formed carbonyl derivatives of dinitrophenylhydrazones was recorded at different wavelengths: 356 nm - aliphatic ketone dinitrophenylhydrazones (KDNPH) of neutral character; 370 nm - aliphatic aldehyde dinirophenylhydrazones (ADNPH) of neutral character; 430 nm - aliphatic KDNFG of the main character; 530 nm - basic aliphatic ADNPH. Nitric oxide was determined by photometry using Grace's solution. Adsorption was measured on a Thermo Scientific Evolution 201 SF at a wavelength of 546 nm. Thus, the results of the studies showed that in patients with arterial hypertension, an increased level of nitric oxide in the blood serum is observed, and there is also a tendency to an increase in the intensity of oxidative modification of proteins at a wavelength of 270 nm and 363 nm, which indicates a statistically significant increase in aliphatic aldehyde and ketone dinitrophenylhydrazones. The increase in the intensity of oxidative modification of blood plasma proteins in the studied patients, revealed by us, actually reflects the general direction of free radical processes and, in particular, the oxidation of proteins throughout the body. A decrease in the activity of the antioxidant system also leads to a violation of protein metabolism. The most important consequence of the oxidative modification of proteins is the inactivation of enzymes.Keywords: hypertension (HD), oxidative modification of proteins (OMP), nitric oxide (NO), oxidative stress
Procedia PDF Downloads 1111143 Online Monitoring and Control of Continuous Mechanosynthesis by UV-Vis Spectrophotometry
Authors: Darren A. Whitaker, Dan Palmer, Jens Wesholowski, James Flaherty, John Mack, Ahmad B. Albadarin, Gavin Walker
Abstract:
Traditional mechanosynthesis has been performed by either ball milling or manual grinding. However, neither of these techniques allow the easy application of process control. The temperature may change unpredictably due to friction in the process. Hence the amount of energy transferred to the reactants is intrinsically non-uniform. Recently, it has been shown that the use of Twin-Screw extrusion (TSE) can overcome these limitations. Additionally, TSE enables a platform for continuous synthesis or manufacturing as it is an open-ended process, with feedstocks at one end and product at the other. Several materials including metal-organic frameworks (MOFs), co-crystals and small organic molecules have been produced mechanochemically using TSE. The described advantages of TSE are offset by drawbacks such as increased process complexity (a large number of process parameters) and variation in feedstock flow impacting on product quality. To handle the above-mentioned drawbacks, this study utilizes UV-Vis spectrophotometry (InSpectroX, ColVisTec) as an online tool to gain real-time information about the quality of the product. Additionally, this is combined with real-time process information in an Advanced Process Control system (PharmaMV, Perceptive Engineering) allowing full supervision and control of the TSE process. Further, by characterizing the dynamic behavior of the TSE, a model predictive controller (MPC) can be employed to ensure the process remains under control when perturbed by external disturbances. Two reactions were studied; a Knoevenagel condensation reaction of barbituric acid and vanillin and, the direct amidation of hydroquinone by ammonium acetate to form N-Acetyl-para-aminophenol (APAP) commonly known as paracetamol. Both reactions could be carried out continuously using TSE, nuclear magnetic resonance (NMR) spectroscopy was used to confirm the percentage conversion of starting materials to product. This information was used to construct partial least squares (PLS) calibration models within the PharmaMV development system, which relates the percent conversion to product to the acquired UV-Vis spectrum. Once this was complete, the model was deployed within the PharmaMV Real-Time System to carry out automated optimization experiments to maximize the percentage conversion based on a set of process parameters in a design of experiments (DoE) style methodology. With the optimum set of process parameters established, a series of PRBS process response tests (i.e. Pseudo-Random Binary Sequences) around the optimum were conducted. The resultant dataset was used to build a statistical model and associated MPC. The controller maximizes product quality whilst ensuring the process remains at the optimum even as disturbances such as raw material variability are introduced into the system. To summarize, a combination of online spectral monitoring and advanced process control was used to develop a robust system for optimization and control of two TSE based mechanosynthetic processes.Keywords: continuous synthesis, pharmaceutical, spectroscopy, advanced process control
Procedia PDF Downloads 1791142 Wind Energy Harvester Based on Triboelectricity: Large-Scale Energy Nanogenerator
Authors: Aravind Ravichandran, Marc Ramuz, Sylvain Blayac
Abstract:
With the rapid development of wearable electronics and sensor networks, batteries cannot meet the sustainable energy requirement due to their limited lifetime, size and degradation. Ambient energies such as wind have been considered as an attractive energy source due to its copious, ubiquity, and feasibility in nature. With miniaturization leading to high-power and robustness, triboelectric nanogenerator (TENG) have been conceived as a promising technology by harvesting mechanical energy for powering small electronics. TENG integration in large-scale applications is still unexplored considering its attractive properties. In this work, a state of the art design TENG based on wind venturi system is demonstrated for use in any complex environment. When wind introduces into the air gap of the homemade TENG venturi system, a thin flexible polymer repeatedly contacts with and separates from electrodes. This device structure makes the TENG suitable for large scale harvesting without massive volume. Multiple stacking not only amplifies the output power but also enables multi-directional wind utilization. The system converts ambient mechanical energy to electricity with 400V peak voltage by charging of a 1000mF super capacitor super rapidly. Its future implementation in an array of applications aids in environment friendly clean energy production in large scale medium and the proposed design performs with an exhaustive material testing. The relation between the interfacial micro-and nano structures and the electrical performance enhancement is comparatively studied. Nanostructures are more beneficial for the effective contact area, but they are not suitable for the anti-adhesion property due to the smaller restoring force. Considering these issues, the nano-patterning is proposed for further enhancement of the effective contact area. By considering these merits of simple fabrication, outstanding performance, robust characteristic and low-cost technology, we believe that TENG can open up great opportunities not only for powering small electronics, but can contribute to large-scale energy harvesting through engineering design being complementary to solar energy in remote areas.Keywords: triboelectric nanogenerator, wind energy, vortex design, large scale energy
Procedia PDF Downloads 2151141 Glucose Measurement in Response to Environmental and Physiological Challenges: Towards a Non-Invasive Approach to Study Stress in Fishes
Authors: Tomas Makaras, Julija Razumienė, Vidutė Gurevičienė, Gintarė Sauliutė, Milda Stankevičiūtė
Abstract:
Stress responses represent animal’s natural reactions to various challenging conditions and could be used as a welfare indicator. Regardless of the wide use of glucose measurements in stress evaluation, there are some inconsistencies in its acceptance as a stress marker, especially when it comes to comparison with non-invasive cortisol measurements in the fish challenging stress. To meet the challenge and to test the reliability and applicability of glucose measurement in practice, in this study, different environmental/anthropogenic exposure scenarios were simulated to provoke chemical-induced stress in fish (14-days exposure to landfill leachate) followed by a 14-days stress recovery period and under the cumulative effect of leachate fish subsequently exposed to pathogenic oomycetes (Saprolegnia parasitica) to represent a possible infection in fish. It is endemic to all freshwater habitats worldwide and is partly responsible for the decline of natural freshwater fish populations. Brown trout (Salmo trutta fario) and sea trout (Salmo trutta trutta) juveniles were chosen because of a large amount of literature on physiological stress responses in these species was known. Glucose content in fish by applying invasive and non-invasive glucose measurement procedures in different test mediums such as fish blood, gill tissues and fish-holding water were analysed. The results indicated that the quantity of glucose released in the holding water of stressed fish increased considerably (approx. 3.5- to 8-fold) and remained substantially higher (approx. 2- to 4-fold) throughout the stress recovery period than the control level suggesting that fish did not recover from chemical-induced stress. The circulating levels of glucose in blood and gills decreased over time in fish exposed to different stressors. However, the gill glucose level in fish showed a decrease similar to the control levels measured at the same time points, which was found to be insignificant. The data analysis showed that concentrations of β-D glucose measured in gills of fish treated with S. parasitica differed significantly from the control recovery, but did not differ from the leachate recovery group showing that S. parasitica presence in water had no additive effects. In contrast, a positive correlation between blood and gills glucose were determined. Parallel trends in blood and water glucose changes suggest that water glucose measurement has much potency in predicting stress. This study demonstrated that measuring β-D-glucose in fish-holding water is not stressful as it involves no handling and manipulation of an organism and has critical technical advantages concerning current (invasive) methods, mainly using blood samples or specific tissues. The quantification of glucose could be essential for studies examining the stress physiology/aquaculture studies interested in the assessment or long-term monitoring of fish health.Keywords: brown trout, landfill leachate, sea trout, pathogenic oomycetes, β-D-glucose
Procedia PDF Downloads 1741140 Engineering Topology of Construction Ecology in Urban Environments: Suez Canal Economic Zone
Authors: Moustafa Osman Mohammed
Abstract:
Integration sustainability outcomes give attention to construction ecology in the design review of urban environments to comply with Earth’s System that is composed of integral parts of the (i.e., physical, chemical and biological components). Naturally, exchange patterns of industrial ecology have consistent and periodic cycles to preserve energy flows and materials in Earth’s System. When engineering topology is affecting internal and external processes in system networks, it postulated the valence of the first-level spatial outcome (i.e., project compatibility success). These instrumentalities are dependent on relating the second-level outcome (i.e., participant security satisfaction). Construction ecology approach feedback energy from resources flows between biotic and abiotic in the entire Earth’s ecosystems. These spatial outcomes are providing an innovation, as entails a wide range of interactions to state, regulate and feedback “topology” to flow as “interdisciplinary equilibrium” of ecosystems. The interrelation dynamics of ecosystems are performing a process in a certain location within an appropriate time for characterizing their unique structure in “equilibrium patterns”, such as biosphere and collecting a composite structure of many distributed feedback flows. These interdisciplinary systems regulate their dynamics within complex structures. These dynamic mechanisms of the ecosystem regulate physical and chemical properties to enable a gradual and prolonged incremental pattern to develop a stable structure. The engineering topology of construction ecology for integration sustainability outcomes offers an interesting tool for ecologists and engineers in the simulation paradigm as an initial form of development structure within compatible computer software. This approach argues from ecology, resource savings, static load design, financial other pragmatic reasons, while an artistic/architectural perspective, these are not decisive. The paper described an attempt to unify analytic and analogical spatial modeling in developing urban environments as a relational setting, using optimization software and applied as an example of integrated industrial ecology where the construction process is based on a topology optimization approach.Keywords: construction ecology, industrial ecology, urban topology, environmental planning
Procedia PDF Downloads 1321139 Reconceptualizing “Best Practices” in Public Sector
Authors: Eftychia Kessopoulou, Styliani Xanthopoulou, Ypatia Theodorakioglou, George Tsiotras, Katerina Gotzamani
Abstract:
Public sector managers frequently herald that implementing best practices as a set of standards, may lead to superior organizational performance. However, recent research questions the objectification of best practices, highlighting: a) the inability of public sector organizations to develop innovative administrative practices, as well as b) the adoption of stereotypical renowned practices inculcated in the public sector by international governance bodies. The process through which organizations construe what a best practice is, still remains a black box that is yet to be investigated, given the trend of continuous changes in public sector performance, as well as the burgeoning interest of sharing popular administrative practices put forward by international bodies. This study aims to describe and understand how organizational best practices are constructed by public sector performance management teams, like benchmarkers, during the benchmarking-mediated performance improvement process and what mechanisms enable this construction. A critical realist action research methodology is employed, starting from a description of various approaches on best practice nature when a benchmarking-mediated performance improvement initiative, such as the Common Assessment Framework, is applied. Firstly, we observed the benchmarker’s management process of best practices in a public organization, so as to map their theories-in-use. As a second step we contextualized best administrative practices by reflecting the different perspectives emerged from the previous stage on the design and implementation of an interview protocol. We used this protocol to conduct 30 semi-structured interviews with “best practice” process owners, in order to examine their experiences and performance needs. Previous research on best practices has shown that needs and intentions of benchmarkers cannot be detached from the causal mechanisms of the various contexts in which they work. Such causal mechanisms can be found in: a) process owner capabilities, b) the structural context of the organization, and c) state regulations. Therefore, we developed an interview protocol theoretically informed in the first part to spot causal mechanisms suggested by previous research studies and supplemented it with questions regarding the provision of best practice support from the government. Findings of this work include: a) a causal account of the nature of best administrative practices in the Greek public sector that shed light on explaining their management, b) a description of the various contexts affecting best practice conceptualization, and c) a description of how their interplay changed the organization’s best practice management.Keywords: benchmarking, action research, critical realism, best practices, public sector
Procedia PDF Downloads 1291138 Anticancer Potentials of Aqueous Tinospora cordifolia and Its Bioactive Polysaccharide, Arabinogalactan on Benzo(a)Pyrene Induced Pulmonary Tumorigenesis: A Study with Relevance to Blood Based Biomarkers
Authors: Vandana Mohan, Ashwani Koul
Abstract:
Aim: To evaluate the potential of Aqueous Tinospora cordifolia stem extract (Aq.Tc) and Arabinogalactan (AG) on pulmonary carcinogenesis and associated tumor markers. Background: Lung cancer is one of the most frequent malignancy with high mortality rate due to limitation of early detection resulting in low cure rates. Current research effort focuses on identifying some blood-based biomarkers like CEA, ctDNA and LDH which may have potential to detect cancer at an early stage, evaluation of therapeutic response and its recurrence. Medicinal plants and their active components have been widely investigated for their anticancer potentials. Aqueous preparation of T. Cordifolia extract is enriched in the polysaccharide fraction i.e., AG when compared with other types of extract. Moreover, reports are available of polysaccharide fraction of T. Cordifolia in in vitro lung cancer models which showed profound anti-metastatic activity against these cell lines. However, not much has been explored about its effect in in vivo lung cancer models and the underlying mechanism involved. Experimental Design: Mice were randomly segregated into six groups. Group I animals served as control. Group II animals were administered with Aq. Tc extract (200 mg/kg b.w.) p.o.on the alternate days. Group III animals were fed with AG (7.5 mg/kg b.w.) p.o. on the alternate days (thrice a week). Group IV animals were installed with Benzo(a)pyrene (50 mg/kg b.w.), i.p. twice within an interval of two weeks. Group V animals received Aq. Tc extract as in group II along with it B(a)P was installed after two weeks of Aq. Tc administration following the same protocol as for group IV. Group VI animals received AG as in group III along with it B(a)P was installed after two weeks of AG administration. Results: Administration of B(a)P to mice resulted in increased tumor incidence, multiplicity and pulmonary somatic index with concomitant increase in serum/plasma markers like CEA, ctDNA, LDH and TNF-α.Aq.Tc and AG supplementation significantly attenuated these alterations at different stages of tumorigenesis thereby showing potent anti-cancer effect in lung cancer. A pronounced decrease in serum/plasma markers were observed in animals treated with Aq.Tc as compared to those fed with AG. Also, extensive hyperproliferation of alveolar epithelium was prominent in B(a)P induced lung tumors. However, treatment of Aq.Tc and AG to lung tumor bearing mice exhibited reduced alveolar damage evident from decreased number of hyperchromatic irregular nuclei. A direct correlation between the concentration of tumor markers and the intensity of lung cancer was observed in animals bearing cancer co-treated with Aq.Tc and AG. Conclusion: These findings substantiate the chemopreventive potential of Aq.Tc and AG against lung tumorigenesis. Interestingly, Aq.Tc was found to be more effective in modulating the cancer as reflected by various observations which may be attributed to the synergism offered by various components of Aq.Tc. Further studies are in progress to understand the underlined mechanism in inhibiting lung tumorigenesis by Aq.Tc and AG.Keywords: Arabinogalactan, Benzo(a)pyrene B(a)P, carcinoembryonic antigen (CEA), circulating tumor DNA (ctDNA), lactate dehydrogenase (LDH), Tinospora cordifolia
Procedia PDF Downloads 1851137 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture
Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger
Abstract:
3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through thickness reinforcement and near net shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. As 3D weaving is a dry preforming technology it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf) and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture which can often lead to under-performance or changes of the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures makes it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness and thickness as well as analysing the accuracy of available software to predict how 3D woven preforms behave under compression. To achieve this, 3D preforms are modelled and compression simulated in Wisetex with varying architectures of binder style, pick density, thickness and tow size. These architectures have then been woven with samples dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross sectioned, polished and analysed using microscopy to investigate changes in architecture and crimp. Data from dry fabric compression and composite samples were then compared alongside the Wisetex models to determine accuracy of the prediction and identify architecture parameters that can affect the preform compressibility and stability. Results indicate that binder style/pick density, tow size and thickness have a significant effect on compressibility of 3D woven preforms with lower pick density allowing for greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to preform architecture where orthogonal binders experienced highest level of deformation, but highest overall stability, with compression while layer to layer indicated a reduction in fibre crimp of the binder. In general, simulations showed a relative comparison to experimental results; however, deviation is evident due to assumptions present within the modelled results.Keywords: 3D woven composites, compression, preforms, textile composites
Procedia PDF Downloads 1361136 Facial Behavior Modifications Following the Diffusion of the Use of Protective Masks Due to COVID-19
Authors: Andreas Aceranti, Simonetta Vernocchi, Marco Colorato, Daniel Zaccariello
Abstract:
Our study explores the usefulness of implementing facial expression recognition capabilities and using the Facial Action Coding System (FACS) in contexts where the other person is wearing a mask. In the communication process, the subjects use a plurality of distinct and autonomous reporting systems. Among them, the system of mimicking facial movements is worthy of attention. Basic emotion theorists have identified the existence of specific and universal patterns of facial expressions related to seven basic emotions -anger, disgust, contempt, fear, sadness, surprise, and happiness- that would distinguish one emotion from another. However, due to the COVID-19 pandemic, we have come up against the problem of having the lower half of the face covered and, therefore, not investigable due to the masks. Facial-emotional behavior is a good starting point for understanding: (1) the affective state (such as emotions), (2) cognitive activity (perplexity, concentration, boredom), (3) temperament and personality traits (hostility, sociability, shyness), (4) psychopathology (such as diagnostic information relevant to depression, mania, schizophrenia, and less severe disorders), (5) psychopathological processes that occur during social interactions patient and analyst. There are numerous methods to measure facial movements resulting from the action of muscles, see for example, the measurement of visible facial actions using coding systems (non-intrusive systems that require the presence of an observer who encodes and categorizes behaviors) and the measurement of electrical "discharges" of contracting muscles (facial electromyography; EMG). However, the measuring system invented by Ekman and Friesen (2002) - "Facial Action Coding System - FACS" is the most comprehensive, complete, and versatile. Our study, carried out on about 1,500 subjects over three years of work, allowed us to highlight how the movements of the hands and upper part of the face change depending on whether the subject wears a mask or not. We have been able to identify specific alterations to the subjects’ hand movement patterns and their upper face expressions while wearing masks compared to when not wearing them. We believe that finding correlations between how body language changes when our facial expressions are impaired can provide a better understanding of the link between the face and body non-verbal language.Keywords: facial action coding system, COVID-19, masks, facial analysis
Procedia PDF Downloads 801135 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCI) uses event-related (de)synchronization (ERS/ ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and noise measurements on EEG signals, methods based on band-pass filters defined by a specific frequency band (i.e., 8 – 30Hz), such as the Infinity Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variations of the filtered signal and extract features that define the imagined motion. The CSP effectiveness depends on the subject's discriminative frequency, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested to EEG signals classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in IM-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the IM-based BCI filtering stage that implements SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on the representation of EEG signals in a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The structure of the SBCSP contemplates dividing the band of interest, initially defined between 0 and 40Hz, into a set of 33 sub-bands spanning specific frequency bands which are processed in parallel each by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, and then used as a training vector of an SVM global classifier. Initially, the public EEG data set IIa of the BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact, because it has a 68% smaller dimension than the original signal, the resulting FFT matrix maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in the computational cost in relation to the application of filtering methods based on IIR filters, suggesting FFT efficiency when applied in the filtering step. Finally, the frequency decomposition approach improves the overall system classification rate significantly compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the computational cost reduction denote the potential of FFT in EEG signal filtering applied to the context of IM-based BCI implementing SBCSP. Tests with other data sets are currently being performed to reinforce such conclusions.Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
Procedia PDF Downloads 1291134 Changes in Rainfall and Temperature and Its Impact on Crop Production in Moyamba District, Southern Sierra Leone
Authors: Keiwoma Mark Yila, Mathew Lamrana Siaffa Gboku, Mohamed Sahr Lebbie, Lamin Ibrahim Kamara
Abstract:
Rainfall and temperature are the important variables which are often used to trace climate variability and change. A perception study and analysis of climatic data were conducted to assess the changes in rainfall and temperature and their impact on crop production in Moyamba district, Sierra Leone. For the perception study, 400 farmers were randomly selected from farmer-based organizations (FBOs) in 4 chiefdoms, and 30 agricultural extension workers (AWEs) in the Moyamba district were purposely selected as respondents. Descriptive statistics and Kendall’s test of concordance was used to analyze the data collected from the farmers and AEWs. Data for the analysis of variability and trends of rainfall and temperature from 1991 to 2020 were obtained from the Sierra Leone Meteorological Agency and Njala University and grouped into monthly, seasonal and annual time series. Regression analysis was used to determine the statistical values and trend lines for the seasonal and annual time series data. The Mann-Kendall test and Sen’s Slope Estimator were used to analyze the trends' significance and magnitude, respectively. The results of both studies show evidence of climate change in the Moyamba district. A substantial number of farmers and AEWs perceived a decrease in the annual rainfall amount, length of the rainy season, a late start and end of the rainy season, an increase in the temperature during the day and night, and a shortened harmattan period over the last 30 years. Analysis of the meteorological data shows evidence of variability in the seasonal and annual distribution of rainfall and temperature, a decreasing and non-significant trend in the rainy season and annual rainfall, and an increasing and significant trend in seasonal and annual temperature from 1991 to 2020. However, the observed changes in rainfall and temperature by the farmers and AEWs partially agree with the results of the analyzed meteorological data. The majority of the farmers perceived that; adverse weather conditions have negatively affected crop production in the district. Droughts, high temperatures, and irregular rainfall are the three major adverse weather events that farmers perceived to have contributed to a substantial loss in the yields of the major crops cultivated in the district. In response to the negative effects of adverse weather events, a substantial number of farmers take no action due to their lack of knowledge and technical or financial capacity to implement climate-sensitive agricultural (CSA) practices. Even though few farmers are practising some CSA practices in their farms, there is an urgent need to build the capacity of farmers and AEWs to adapt to and mitigate the negative impacts of climate change. The most priority support needed by farmers is the provision of climate-resilient crop varieties, whilst the AEWs need training on CSA practices.Keywords: climate change, crop productivity, farmer’s perception, rainfall, temperature, Sierra Leone
Procedia PDF Downloads 74