Search results for: standard uptake value (SUV)
Analysis of Distance Travelled by Plastic Consumables Used in the First 24 Hours of an Intensive Care Admission: Impacts and Methods of Mitigation
Authors: Aidan N. Smallwood, Celestine R. Weegenaar, Jack N. Evans
Abstract:
The intensive care unit (ICU) is a particularly resource-heavy environment in terms of the staff, drugs and equipment required. Whilst many areas of the hospital are attempting to cut down on plastic use and minimise their impact on the environment, this has proven challenging within the confines of intensive care. Concurrently, as globalization has progressed over recent decades, there has been a tendency towards centralised manufacturing with international distribution networks for products, often covering large distances. In this study, we have modelled the standard consumption of plastic single-use items over the course of the first 24 hours of an average individual patient’s stay in a 12-bed ICU in the United Kingdom (UK). We have identified the country of manufacture and calculated the minimum possible distance travelled by each item from factory to patient. We have assumed direct transport via the shortest possible straight line from country of origin to the UK and have not accounted for transport within either country. Assuming an intubated patient with invasive haemodynamic monitoring and central venous access, there are a total of 52 distinct, largely plastic, disposable products which would reasonably be required in the first 24 hours after admission. Each product type has been counted only once, to account for multiple items being shipped as one package. Travel distances from origin were summed to give the combined total distance for all 52 products. The minimum possible total distance travelled from country of origin to the UK for all types of product was 273,353 km, equivalent to 6.82 circumnavigations of the globe, or 71% of the way to the moon. The mean distance travelled was 5,256 km, approximately the distance from London to Mecca. With individual packaging for each item, the total weight of consumed products was 4.121 kg.
The CO2 produced by shipping these items by air freight would equate to 30.1 kg, whereas doing the same by sea would produce 0.2 kg CO2. Extrapolating these results to the 211,932 annual UK ICU admissions (2018-2019), even with the underestimates of distance and weight inherent in our assumptions, air freight would account for 6,586 tons of CO2 emitted annually, approximately 130 times that of sea freight. Given the drive towards cost saving within the UK health service, and the decline of the local manufacturing industry, buying from intercontinental manufacturers is inevitable. However, transporting all consumables by sea where feasible would be environmentally beneficial, as well as less costly than air freight. At present, the NHS supply chain purchases from medical device companies, and there is no freely available information as to the transport mode used to deliver the product to the UK. This must be made available to purchasers in order to give a fuller picture of life cycle impact and allow for informed decision making in this regard.
Keywords: CO2, intensive care, plastic, transport
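The headline figures above can be re-derived from the reported totals; a quick sketch in Python (the Earth circumference of 40,075 km and the mean Earth-Moon distance of 384,400 km are standard reference values assumed here, not taken from the abstract):

```python
# Sanity check of the reported distance figures from the study's totals.
TOTAL_KM = 273_353                # summed minimum factory-to-UK distance, all 52 products
N_PRODUCTS = 52
EARTH_CIRCUMFERENCE_KM = 40_075   # assumed reference value
EARTH_MOON_KM = 384_400           # assumed reference value

mean_km = TOTAL_KM / N_PRODUCTS                        # ~5,257 km per product type
circumnavigations = TOTAL_KM / EARTH_CIRCUMFERENCE_KM  # ~6.82
fraction_to_moon = TOTAL_KM / EARTH_MOON_KM            # ~0.71, i.e. 71%

print(f"{mean_km:,.0f} km, {circumnavigations:.2f} laps, {fraction_to_moon:.0%} to the moon")
```

The small rounding differences (e.g. the abstract's mean of 5,256 km versus 5,256.8 km here) appear to reflect truncation rather than rounding in the reported figures.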
Procedia PDF Downloads 178

Bulbar Conjunctival Kaposi's Sarcoma Unmasked by Immune Reconstitution Syndrome
Authors: S. Mohd Afzal, R. O'Connell
Abstract:
Kaposi's sarcoma (KS) is the most common HIV-related cancer, and ocular manifestations constitute at least 25% of all KS cases. However, ocular presentations often occur in the context of systemic KS, and isolated lesions are rare. We report a unique case of ocular KS masquerading as subconjunctival haemorrhage, which developed systemic manifestations only after the initiation of HIV treatment. Case: A 49-year-old man with a previous hypertensive stroke and newly diagnosed HIV infection presented with an acutely red left eye following repeated bouts of coughing. Given the convincing history of poorly controlled hypertension and cough, a diagnosis of subconjunctival haemorrhage was made. Over the next week, his ocular lesion began to improve, and he subsequently started anti-retroviral therapy. Prior to receiving anti-retroviral therapy, his CD4+ lymphocyte count was 194 cells/mm3, with an HIV viral load greater than 1 million copies/ml. This rapidly improved to a viral load of 150 copies/ml within 2 weeks of starting treatment. However, a few days after starting HIV treatment, his ocular lesion recurred. Ophthalmic examination was otherwise normal. He also developed widespread lymphadenopathy and multiple dark lesions on his torso. Histology and virology confirmed KS, systemically triggered by immune reconstitution syndrome (KS-IRIS). The patient has since successfully undergone chemotherapy. Discussion: Kaposi's sarcoma is an atypical tumour caused by human herpesvirus 8 (HHV-8), also known as Kaposi’s sarcoma-associated herpesvirus (KSHV). In immunosuppressed patients, KSHV can also cause lymphoproliferative disorders such as primary effusion lymphoma and Castleman's disease (in our patient’s case, this was excluded through histological analysis of lymph nodes). KSHV is one of the seven currently known human oncoviruses, and its pathogenesis is poorly understood.
Up to 13% of patients with HIV-related KS experience worsening of the disease after starting anti-retroviral treatment, due to a sudden increase in CD4 cell counts. Histology remains the diagnostic gold standard. Current British HIV Association (BHIVA) guidelines recommend treatment using anti-retroviral drugs, with either intralesional vinblastine for local disease or systemic chemotherapy for disseminated KS. Conclusion: This case is unique, as ocular KS as an initial presentation is rare, and our patient's diagnosis was made only after systemic lesions were triggered by immune reconstitution. KS should be considered an important differential diagnosis for red eye in all patients at risk of acquiring HIV infection.
Keywords: human herpesvirus 8, human immunodeficiency virus, immune reconstitution syndrome, Kaposi’s sarcoma, Kaposi’s sarcoma-associated herpesvirus
Procedia PDF Downloads 334

An Integrative Review on the Experiences of Integration of Quality Assurance Systems in Universities
Authors: Laura Mion
Abstract:
Concepts of quality assurance and management are now part of the organizational culture of universities. Quality Assurance (QA) systems are, in large part, provided for by national regulatory dictates or supranational indications (such as, at the European level, the ESG, the "European Standards and Guidelines"), but their specific definition, in terms of guiding principles, requirements and methodologies, is often delegated to the national evaluation agencies or to the autonomy of individual universities. For this reason, the experiences of implementing QA systems in different countries and in different universities are an interesting source of information for understanding how quality in universities is understood, pursued and verified. The literature often treats the experiences of implementing QA systems in the individual areas in which the university's activity is carried out - teaching, research, third mission - but only rarely considers quality systems with a systemic and integrated approach, one which allows subjects, actions, and performance to be correlated in a virtuous circuit of continuous improvement. In particular, it is interesting to understand how to relate the results and uses of QA across the triple distinction of university activities, identifying how each can drive the performance of the others as part of an integrated whole, rather than as an exploit of specific activities or processes conceived in an abstractly atomistic way. The aim of the research is, therefore, to investigate which experiences of "integrated" QA systems are present on the international scene: starting from the experience of European countries that have long shared the Bologna Process for the creation of a European Higher Education Area (EHEA), but also considering experiences from emerging countries that use QA processes to develop their higher education systems and keep them up to date with international levels.
The concept of "integration" in this research is understood in a double meaning: i) integration between the different areas of activity, in particular between the didactic and research areas, and possibly with the so-called "third mission"; ii) functional integration between those involved in quality assessment and management and the governance of the university. The paper will present the results of a systematic review conducted according to an integrative review method aimed at identifying best practices of quality assurance systems, in individual countries or individual universities, with a high level of integration. The analysis of the material thus obtained has made it possible to grasp common and transversal elements of QA system integration practices, as well as particularly interesting elements and strengths of these experiences that can therefore be considered winning aspects of a QA practice. The paper will present the method of analysis carried out and the characteristics of the experiences identified, highlighting their structural elements (level of integration, areas considered, organizational levels included, etc.) and the elements for which these experiences can be considered best practices.
Keywords: quality assurance, university, integration, country
Procedia PDF Downloads 86

The Moderation Effect of Financial Distress on the Relationship Between Market Power and Earnings Management of Firms
Authors: Shazia Ali, Yves Mard, Éric Severin
Abstract:
To the best of our knowledge, this is the first study to have analyzed the impact of a) firm-specific product-market power and b) industry competition on the earnings management behavior of European firms in distress versus healthy years while controlling for firm-level characteristics. We predicted a significant relationship between firms’ product market power and earnings management tools, and their trade-off, under the moderation effect of financial distress. We found that firm-level market power, hereinafter referred to as MP (proxied by the industry-adjusted Lerner Index), is positively associated with both real and accrual earnings management. However, MP is associated with a higher level of real earnings management than accrual earnings management in distress years compared to healthy years. Industry product market power (representing low competition and proxied by the inverse of the total number of firms in an industry, hereinafter referred to as NUMB) and firm product market power (proxied by firm market share, hereinafter referred to as MS) are associated with lower inflationary accruals and higher deflationary accruals, respectively; they are, however, found to be linked with higher real earnings management in distress versus healthy years. When we divided the sample into small and big firms based on their respective industry-year median total assets, we found that all three measures of industry competition (the Industry Median Lerner Index, hereinafter referred to as IMLI; NUMB; and the Herfindahl–Hirschman Index, hereinafter referred to as HHI) indicate that small firms in low-competition industries in financial distress are more likely to inflate their earnings through discretionary accruals.
Big firms in this situation are instead more likely to reduce their use of both inflationary and deflationary discretionary accruals, as indicated by IMLI and HHI, and to trade off accruals earnings management for real earnings management, as indicated by NUMB. Moreover, IMLI and HHI did not show any interesting results when we divided the sample based on the firm Lerner Index/market power. However, distressed firms with high market power (MP > industry median) are found to engage in income-decreasing discretionary accruals in low-competition industries (high NUMB), whereas firms with low market power in the same industries use downward discretionary accruals but inflate income using real activities (abnCFO). Our findings are robust across alternate measures of discretionary accruals and financial distress, such as the Altman Z-Score. The findings of the study are valuable for accounting standard setters, competition authorities, policymakers, and investors alike, supporting informed decision-making.
Keywords: financial distress, earnings management, market competition
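The competition and market-power proxies named in the abstract have standard textbook definitions; as a minimal illustration (all shares, prices and costs below are hypothetical, not data from the study, and the study uses an industry-adjusted Lerner Index whereas this sketch shows the basic form):

```python
# Illustrative computation of the competition/market-power proxies named above.
# All numbers are hypothetical.
shares = [0.40, 0.30, 0.20, 0.10]   # hypothetical market shares of firms in one industry

# Herfindahl-Hirschman Index: sum of squared market shares
# (0-1 scale here; regulators often use percent shares, giving a 0-10,000 scale).
hhi = sum(s ** 2 for s in shares)

# NUMB proxy as described in the abstract: inverse of the number of firms.
numb = 1 / len(shares)

# Basic Lerner index of market power: (price - marginal cost) / price.
price, marginal_cost = 10.0, 7.0
lerner = (price - marginal_cost) / price

print(round(hhi, 2), numb, round(lerner, 2))  # 0.3 0.25 0.3
```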
Procedia PDF Downloads 119

On Copular Constructions in Yemeni Arabic and the Cartography of Subjects
Authors: Ameen Alahdal
Abstract:
This paper investigates copular constructions in Raimi Yemeni Arabic (RYA). The aim of the paper is twofold. First, it explores the types of copular constructions in Raimi Yemeni Arabic, a variety of Arabic that has not attracted much attention. In this connection, the paper shows that RYA manifests ‘bare’, verbal and pronominal/PRON copular constructions, just like other varieties of Arabic and indeed other Semitic languages like Hebrew. The sentences below from RYA represent the three constructions, respectively.

(1) a. nada Hilwah
       Nada pretty.3sf
       ‘Nada is pretty’
    b. kan al-banat hina
       was the-girls here
       ‘The girls were here’
    c. ali hu-l mudiir
       Ali he-the manager
       ‘Ali is the manager’

Interestingly, in addition to these common types of copular constructions, RYA seems to exhibit dual copula sentences, a construction that features both a pronominal copula and a verbal copula. Such a construction is attested neither in Standard Arabic nor in other modern varieties of Arabic such as Lebanese, Moroccan, Egyptian, or Jordanian. Remarkably, dual copular sentences do not appear even in other dialects of Yemeni Arabic such as Sanaani, Adeni and Tehami. (2) is an example.

(2) maha kan-ih mudarrisah
    maha was-she teacher.3sf
    ‘Maha was a teacher’

Second, the paper considers the cartography of subject positions in copular constructions proposed by Shlonsky and Rizzi (2018). Different copular constructions seem to involve different subject positions (which might eventually correlate with different interpretations - not our concern in this paper). Here, it is argued that in a bare copular sentence, as in (1a), RYA might exploit two criterial subject positions (in Rizzi’s sense), in addition to the canonical Spec,TP position. Under mainstream minimalist assumptions, a copular sentence is analyzed as a PredP. Thus, in addition to the PredP-related thematic subject position, a criterial subject position is posited outside of PredP.
(3) below represents the cartography of subject positions in a bare copular construction.

(3) [ …… DP subj PredP DP Pred DP/AP/PP ]

In PRON sentences, as exemplified in (1c), another two subject positions are postulated high in the clause, particularly above PolP. (4) illustrates the hierarchy of the subject positions in a PRON copular construction; the subject resides in Spec,SUBJ2P.

(4) … DP SUBJ2 … DP SUBJ1 … Pol … DP subj PredP

Another related phenomenon in RYA, which sets it apart from languages like Hebrew, is the negative bare copular construction. This construction involves a PRON which is not found in its affirmative counterpart. PRON, however, is hosted neither by SUBJ2⁰ nor by SUBJ1⁰. Rather, PRON occurs below Neg⁰ (Pol⁰ in the hierarchy). This situation raises interesting issues for the hierarchy of subjects in copular constructions as well as for the syntax of the left periphery in general. With regard to what causes the subject to move, there are different potential triggers. For instance, movement of the subject at the base, i.e., out of PredP, is triggered by a labeling failure. Other movements of the subject can be driven by a formal feature like EPP, or a criterial feature like [subj].
Keywords: Yemeni Arabic, copular constructions, cartography of subjects, labeling, criterial positions
Procedia PDF Downloads 110

Disaggregating Communities and the Making of Factional States: Evidence from Joint Forest Management in Sundarban, India
Authors: Amrita Sen
Abstract:
In the face of a growing insurgent movement and the perceived failure of the state and the market towards sustainable resource management, a range of decentralized forest management policies was formulated in the last two decades, which recognized the need for community representation within the statutory methods of forest management. The recognition rested on the virtues of ecological sustainability and traditional environmental knowledge, of which the forest-dependent communities were considered to be the principal repositories. The present study, in the light of empirical insights, reflects on the contemporary disjunctions between the preconceived communitarian ethic in environmentalism and the lived reality of forest-based life-worlds. Many of the popular as well as dominant ideologies which have historically shaped the conceptual and theoretical understanding of sociology need further perusal in the context of the emerging contours of empirical knowledge, which lends opportunities for substantive reworking and analysis. The image of the community appears to be one of those concepts, an identity which has long defined perspectives and processes associated with people living together harmoniously in small physical spaces. Through an ethnographic account of the implementation of Joint Forest Management (JFM) in a forest fringe village in Sundarban, the study explores the ways in which the idea of ‘community’ gets transformed through the process of state-making, rendering necessary its departure from the standard, conventional definition of homogeneity and internal equity. The study necessitates attention towards the anthropology of micro-politics, disaggregating an essentially constructivist anthropology of ‘collective identities’, which can render the visibility of political mobilizations plausible within the seemingly culturalist production of communities.
The two critical questions that the paper seeks to ask in this context are: how is the ‘local’ constituted within community-based conservation practices? And, within the efforts of collaborative forest management, how accurately does the depiction of ‘indigenous environmental knowledge’ subscribe to its role in sustainable conservation practices? Reflecting on the execution of JFM in Sundarban, the study critically explores the ways in which the state ceases to be ‘trans-national’ and interacts with rural life-worlds through its local factions. Simultaneously, the study attempts to articulate the scope for constructing a competing representation of community, shaped by increasing political negotiations and bureaucratic alignments, which strains against the usual preoccupations with tradition, primordiality and non-material culture, as well as the amorphous construction of indigeneity.
Keywords: community, environmentalism, JFM, state-making, identities, indigenous
Procedia PDF Downloads 197

Providing Health Promotion Information by Digital Animation to International Visitors in Japan: A Factorial Design View of Nurses
Authors: Mariko Nishikawa, Masaaki Yamanaka, Ayami Kondo
Abstract:
Background: International visitors to Japan are at risk of travel-related illnesses or injuries that could result in hospitalization in a country where the language and customs are unique. Over twelve million international visitors came to Japan in 2015, and more are expected leading up to the Tokyo Olympics. One consequence of this is a potentially greater demand on healthcare services by foreign visitors. Nurses who take care of them have anxieties and concerns about their knowledge of the Japanese health system. Objectives: An effective distribution of travel-health information is vital for facilitating care for international visitors. Our research investigates whether a four-minute digital animation (Mari Info Japan), designed and developed by the authors and applied in a survey of 513 nurses who take care of foreigners daily, could clarify travel-health procedures and reduce anxieties, while making it enjoyable to learn. Methodology: Respondents to the survey were divided into two groups. The intervention group watched Mari Info Japan. The control group read a standard guidebook. The participants were requested to fill in a two-page questionnaire called Mari Meter-X and the STAI-Y in English, and to mark a face scale, before and after the interventions. The questions dealt with knowledge of health promotion, the Japanese healthcare system, cultural concerns, anxieties, and attitudes in Japan. Data were collected from an intervention group (n=83) and a control group (n=83) of nurses in a hospital for foreigners in Japan from February to March 2016. We analyzed the data using Text Mining Studio for the open-ended questions and JMP for statistical significance. Results: We found that the intervention group displayed more confidence and less anxiety about taking care of foreign patients compared to the control group. The intervention group indicated greater comfort after watching the animation.
However, both groups were most likely to be concerned about language, the cost of medical expenses, informed consent, and choice of hospital. Conclusions: From the viewpoint of nurses, providing travel-health information by digital animation to international visitors to Japan was more effective than traditional methods, as it helped them be better prepared to treat travel-related diseases and injuries among international visitors. This study was registered under number UMIN000020867. Funding: Grant-in-Aid for Challenging Exploratory Research 2010-2012 & 2014-16, Japanese Government.
Keywords: digital animation, health promotion, international visitor, Japan, nurse
Procedia PDF Downloads 306

Reaching the Goals of Routine HIV Screening Programs: Quantifying and Implementing an Effective HIV Screening System in Northern Nigeria Facilities Based on Optimal Volume Analysis
Authors: Folajinmi Oluwasina, Towolawi Adetayo, Kate Ssamula, Penninah Iutung, Daniel Reijer
Abstract:
Objective: Routine HIV screening has been promoted as an essential component of efforts to reduce incidence, morbidity, and mortality. The objectives of this study were to identify the optimal annual volume needed to realize the public health goals of HIV screening in the AIDS Healthcare Foundation supported hospitals and to establish an implementation process to realize that optimal annual volume. Methods: Starting in 2011, a program was established to routinize HIV screening within communities and government hospitals. In 2016, five years of HIV screening data were reviewed to identify the optimal annual proportions of age-eligible patients screened to realize the public health goals of reducing new diagnoses and ending late-stage diagnosis (tracked as concurrent HIV/AIDS diagnosis). Analysis demonstrated that rates of new diagnoses level off when 42% of age-eligible patients are screened, providing a baseline for routine screening efforts, and that concurrent HIV/AIDS diagnoses reach statistical zero at screening rates of 70%. Annual facility-based targets were re-structured to meet these new target volumes. Restructuring efforts focused on right-sizing HIV screening programs to align and transition programs to integrated HIV screening within standard medical care and treatment. Results: Over one million patients were screened for HIV during the five years; there were 16,033 new HIV diagnoses, of which 82% (13,206) were successfully linked to care and treatment, and concurrent diagnosis rates went from 32.26% to 25.27%. While screening rates increased by 104.7% over the 5 years, volume analysis demonstrated that rates need to increase by a further 62.52% to reach the desired 20% baseline and more than double to reach the optimal annual screening volume. In 2011, facility targets for HIV screening were increased to reflect the volume analysis, and in the third year, 12 of the 19 facilities reached or exceeded the new baseline targets.
Conclusions and Recommendation: Quantifying targets against routine HIV screening goals identified the optimal annual screening volume and allowed facilities to scale their program size and allocate resources accordingly. The program transitioned from utilizing non-evidence-based annual volume increases to establishing annual targets based on optimal volume analysis. This has allowed efforts to be evaluated on their ability to realize quantified goals related to the public health value of HIV screening. Optimal volume analysis helps to determine the size of an HIV screening program. It is a public health tool, not a tool to determine whether an individual patient should receive screening.
Keywords: HIV screening, optimal volume, HIV diagnosis, routine
Procedia PDF Downloads 263

Biomechanical Evaluation for Minimally Invasive Lumbar Decompression: Unilateral Versus Bilateral Approaches
Authors: Yi-Hung Ho, Chih-Wei Wang, Chih-Hsien Chen, Chih-Han Chang
Abstract:
Numerous studies have reported that unilateral laminotomy and bilateral laminotomies are successful decompression methods for managing spinal stenosis. However, unilateral laminotomy is rated as technically much more demanding than bilateral laminotomies, whereas bilateral laminotomies are associated with a benefit in reducing complications, including incidental durotomy, increased radicular deficit, and epidural hematoma. To date, though, no comparative biomechanical analysis has evaluated spinal instability treated with unilateral versus bilateral laminotomies. Therefore, the purpose of this study was to compare the outcomes of the different decompression methods by experimental and finite element analysis. Three porcine lumbar spines were biomechanically evaluated for their range of motion (ROM), and the results were compared following unilateral or bilateral laminotomies. The experimental protocol included flexion and extension in the following procedures: intact, unilateral, and bilateral laminotomies (L2–L5). The specimens in this study were tested in flexion (8 Nm) and extension (6 Nm) under pure moment. Spinal segment kinematic data were captured using a motion tracking system. A 3D finite element lumbar spine model (L1–S1) containing vertebral bodies, discs, and ligaments was constructed. This model was used to simulate unilateral and bilateral laminotomies at L3–L4 and L4–L5. The bottom surface of the S1 vertebral body was fully geometrically constrained in this study. A 10 Nm pure moment was applied on the top surface of the L1 vertebral body to drive the lumbar spine through the different motions, flexion and extension. The experimental results showed that in flexion, the ROMs (± standard deviation) of L3–L4 were 1.35±0.23, 1.34±0.67, and 1.66±0.07 degrees for the intact, unilateral, and bilateral laminotomy conditions, respectively. The ROMs of L4–L5 were 4.35±0.29, 4.06±0.87, and 4.2±0.32 degrees, respectively.
No statistical significance was observed among these three groups (P>0.05). In extension, the ROMs of L3–L4 were 0.89±0.16, 1.69±0.08, and 1.73±0.13 degrees, respectively. At L4–L5, the ROMs were 1.4±0.12, 2.44±0.26, and 2.5±0.29 degrees, respectively. Significant differences were observed among all trials, except between the unilateral and bilateral laminotomy groups. The simulation results were similar to the experimental findings. No significant differences were found at L4–L5 in either flexion or extension in any group; only 0.02 and 0.04 degrees of variation were observed during flexion and extension between the unilateral and bilateral laminotomy groups. In conclusion, the present finite element and experimental results reveal no significant differences during flexion and extension between unilateral and bilateral laminotomies in short-term follow-up. From a biomechanical point of view, bilateral laminotomies seem to exhibit a similar stability to unilateral laminotomy. In clinical practice, bilateral laminotomies are likely to reduce technical difficulties and prevent perioperative complications; this study demonstrated this benefit through biomechanical analysis. The results may provide some recommendations for surgeons to make the final decision.
Keywords: unilateral laminotomy, bilateral laminotomies, spinal stenosis, finite element analysis
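As a plausibility check on the flexion comparison at L3–L4, a Welch's t statistic can be reconstructed from the quoted summary statistics (mean ± SD, n = 3 specimens per condition). The abstract does not state which test the authors used, so this is an assumed, purely illustrative re-analysis:

```python
import math

def welch_t(m1: float, s1: float, n1: int, m2: float, s2: float, n2: int) -> float:
    """Welch's t statistic computed from summary statistics (means, SDs, sample sizes)."""
    return (m2 - m1) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Intact vs bilateral laminotomy, L3-L4 flexion ROM (degrees), n = 3 spines each.
t = welch_t(1.35, 0.23, 3, 1.66, 0.07, 3)
print(f"t = {t:.2f}")  # ~2.23, below the two-tailed 5% critical value (~4.3 at df ~ 2),
                       # consistent with the reported P > 0.05
```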
Procedia PDF Downloads 398

Diagnostic Delays and Treatment Dilemmas: A Case of Drug-Resistant HIV and Tuberculosis
Authors: Christi Jackson, Chuka Onaga
Abstract:
Introduction: We report a case of delayed diagnosis of extra-pulmonary INH-mono-resistant tuberculosis (TB) in a South African patient with drug-resistant HIV. Case Presentation: A 36-year-old male was initiated on 1st-line (NNRTI-based) anti-retroviral therapy (ART) in September 2009 and switched to 2nd-line (PI-based) ART in 2011, according to local guidelines. He was following up at the outpatient wellness unit of a public hospital, where he was diagnosed with protease inhibitor-resistant HIV in March 2016. He had an HIV viral load (HIVVL) of 737,000 copies/mL and a CD4 count of 10 cells/µL, and presented with complaints of productive cough, weight loss, chronic diarrhoea and a septic buttock wound. Several investigations were done on sputum, stool and pus samples, but all were negative for TB. The patient was treated with antibiotics, and the cough and the buttock wound improved. He was subsequently started on a 3rd-line ART regimen of Darunavir, Ritonavir, Etravirine, Raltegravir, Tenofovir and Emtricitabine in May 2016. He continued losing weight, became too weak to stand unsupported and started complaining of abdominal pain. Further investigations were done in September 2016, including a urine specimen for Line Probe Assay (LPA), which showed M. tuberculosis sensitive to Rifampicin but resistant to INH. A lymph node biopsy also showed histological confirmation of TB. Management and outcome: He was started on Rifabutin, Pyrazinamide and Ethambutol in September 2016, and Etravirine was discontinued. After 6 months on ART and 2 months on TB treatment, his HIVVL had dropped to 286 copies/mL, his CD4 count had improved to 179 cells/µL, and he showed clinical improvement. The pharmacy supply of his individualised drugs was unreliable and presented some challenges to continuity of treatment. He successfully completed his treatment in June 2017 while still maintaining virological suppression.
Discussion: Several laboratory-related factors delayed the diagnosis of TB, including the unavailability of urine-lipoarabinomannan (LAM) and urine-GeneXpert (GXP) tests at this facility. Once the diagnosis was made, it presented a treatment dilemma due to the expected drug-drug interactions between his 3rd-line ART regimen and his INH-resistant TB regimen, and specialist input was required. Conclusion: TB is more difficult to diagnose in patients with severe immunosuppression, therefore additional tests like urine-LAM and urine-GXP can be helpful in expediting the diagnosis in these cases. Patients with non-standard drug regimens should always be discussed with a specialist in order to avoid potentially harmful drug-drug interactions.
Keywords: drug-resistance, HIV, line probe assay, tuberculosis
Procedia PDF Downloads 169

Effect of Progressive Muscle Relaxation on the Postpartum Depression and General Comfort Levels
Authors: İlknur Gökşin, Sultan Ayaz Alkaya
Abstract:
Objective: Progressive muscle relaxation (PMR) includes the deliberate stretching and relaxation of the major muscle groups of the human body. This study was conducted to evaluate the effect of PMR on postpartum depression and general comfort levels in women. Methods: The study population of this quasi-experimental study with pre-test, post-test and control group consisted of primiparous women who had a vaginal delivery in the obstetric service of a university hospital. The experimental and control groups consisted of 35 women each. The data were collected by questionnaire, the Edinburgh Postnatal Depression Scale (EPDS) and the General Comfort Questionnaire (GCQ). The women were matched according to age and education level and divided into the experimental and control groups by simple random selection. Postpartum depression risk and general comfort were evaluated on the 2nd and 5th days, the 10th and 15th days, and at the fourth and eighth weeks after birth. The experimental group was visited at home, and PMR was applied. After the first visit, the women were asked to apply PMR regularly three times a week for eight weeks. During the application, the researcher called the participants twice a week to follow up on the continuity of the application. No intervention was performed in the control group. For data analysis, descriptive statistics such as number, percentage, mean and standard deviation, the significance test of the difference between two means, and ANOVA were used. Ethics committee approval and institutional permission were obtained for the study. Results: There were no significant differences between the women in the experimental and control groups in terms of age, education status and employment status (p>0.05). There was no statistically significant difference between the experimental and control groups in terms of EPDS pre-test, 1st, 2nd and 3rd follow-up mean scores (p>0.05).
There was a statistically significant difference between the EPDS pre-test and 3rd follow-up scores of the experimental group (p<0.05), whereas there was no such difference in the control group (p>0.05). There was no statistically significant difference between the experimental and control groups in terms of mean GCQ pre-test scores (p>0.05), whereas in the 1st, 2nd and 3rd follow-ups there was a statistically significant difference between the mean GCQ scores (p<0.05). There was a significant increase in the GCQ physical, psychospiritual and sociocultural comfort sub-scales and in the relief and relaxation levels of the experimental group between the pre-test and 3rd follow-up scores (p<0.05). In contrast, a significant decrease was found between the pre-test and 3rd follow-up in the GCQ psychospiritual, environmental and sociocultural comfort sub-scales and in the relief, relaxation and superiority levels (p<0.05). Conclusion: Progressive muscle relaxation was effective in reducing postpartum depression risk and increasing general comfort. It is recommended that progressive muscle relaxation training be provided to women in the postpartum period and that the continuity of this practice be ensured. Keywords: general comfort, postpartum depression, postpartum period, progressive muscle relaxation
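The within-group pre-test vs. 3rd follow-up comparison above is a paired-samples design; a minimal sketch of the paired t statistic using only the Python standard library, with hypothetical EPDS-like scores (illustrative only, not the study's data):

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic (positive t = scores decreased)."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1  # t statistic and degrees of freedom

# Hypothetical EPDS scores for illustration only (not the study's data).
pre =  [12, 10, 14, 9, 11, 13, 10, 12]
post = [ 9,  8, 10, 9,  8, 10,  9,  9]
t, df = paired_t(pre, post)
print(f"t({df}) = {t:.2f}")  # compare |t| to the critical value for df at alpha = 0.05
```

The resulting |t| is compared against the critical value of the t distribution for the given degrees of freedom to decide significance.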
Procedia PDF Downloads 255526 Comparative Histological, Immunohistochemical and Biochemical Study on the Effect of Vit. C, Vit. E, Gallic Acid and Silymarin on Carbon Tetrachloride Model of Liver Fibrosis in Rats
Authors: Safaa S. Hassan, Mohammed H. Elbakry, Safwat A. Mangoura, Zainab M. Omar
Abstract:
Background: Liver fibrosis is the main reason for increased mortality in chronic liver disease, and it has no standard treatment. Antioxidants from a variety of sources are capable of slowing or preventing the oxidation of other molecules. Aim: To evaluate the hepatoprotective effect of vit. C, vit. E and gallic acid in comparison with silymarin in the rat model of carbon tetrachloride (CCl4)-induced liver fibrosis, and their possible mechanisms of action. Materials & Methods: A total of 60 adult male albino rats (160-200 g) were divided into six equal groups and received subcutaneous (s.c.) injections for 8 weeks. Group I served as control. Group II received 1.5 mL/kg of CCl4. Group III: CCl4 with co-treatment with silymarin 100 mg/kg p.o. daily. Group IV: CCl4 with co-treatment with vit. C 50 mg/kg p.o. daily. Group V: CCl4 with co-treatment with vit. E 200 mg/kg p.o. Group VI: CCl4 with co-treatment with gallic acid 100 mg/kg p.o. daily. The liver was processed for histological and immunohistochemical examination. Levels of AST, ALT, ALP, reduced GSH, MDA, SOD and hydroxyproline concentration were measured and evaluated statistically. Results: Light and electron microscopic examination of the liver of group II exhibited foci of altered cells with dense nuclei and vacuolated, granular cytoplasm; mononuclear cell infiltration in the portal areas; profuse collagen fiber deposits around the portal tracts; more intensely stained α-SMA-positive cells occupying most of the fibrotic liver tissue; electron-lucent areas in the cytoplasm of the hepatocytes; and margination of nuclear chromatin. Treatment with any of the antioxidants variably reduced the hepatic structural changes induced by CCl4. Biochemical analysis showed that carbon tetrachloride significantly increased the levels of serum AST, ALT and ALP and the hepatic malondialdehyde and hydroxyproline content, and decreased the activities of superoxide dismutase and glutathione. Treatment with silymarin, gallic acid, vit. C or vit. E significantly decreased the AST, ALT and ALP levels in plasma and the MDA and hydroxyproline content, and increased the activities of SOD and glutathione in liver tissue. The effects of CCl4 administration were ameliorated by the antioxidants used, to variable degrees: the most efficient antioxidant was silymarin, followed by gallic acid, vit. C and then vit. E. This is possibly due to their antioxidant effect, free radical scavenging properties, and the reduction of oxidant-dependent activation and proliferation of HSCs. Conclusion: These antioxidants can be promising drug candidates for ameliorating liver fibrosis, potentially avoiding the side effects of conventional drugs. Keywords: antioxidant, CCl4, gallic acid, liver fibrosis
Procedia PDF Downloads 271525 Landscape Pattern Evolution and Optimization Strategy in Wuhan Urban Development Zone, China
Abstract:
With the rapid development of the urbanization process in China, environmental protection is under severe pressure, so analyzing and optimizing the landscape pattern is an important measure to ease the pressure on the ecological environment. This paper takes the Wuhan Urban Development Zone as its research object and studies its landscape pattern evolution and quantitative optimization strategy. First, remote sensing image data from 1990 to 2015 were interpreted using Erdas software. Next, landscape pattern indices at the landscape, class and patch levels were computed with Fragstats. Then, five ecological environment indicators based on the National Environmental Protection Standard of China were selected to evaluate the impact of landscape pattern evolution on the ecological environment. In addition, the cost distance analysis of ArcGIS was applied to simulate wildlife migration, thus indirectly measuring the improvement of ecological environment quality. The results show that the area of construction land increased by 491%, while bare land, sparse grassland, forest, farmland and water decreased by 82%, 47%, 36%, 25% and 11%, respectively; they were mainly converted into construction land. At the landscape level, all the landscape indices showed a downward trend: the number of patches (NP), landscape shape index (LSI), connection index (CONNECT), Shannon's diversity index (SHDI) and aggregation index (AI) decreased by 2778, 25.7, 0.042, 0.6 and 29.2%, respectively, indicating that the NP, the degree of aggregation and the landscape connectivity all declined. At the class level, for construction land and forest, CPLAND, TCA, AI and LSI increased, but the Distribution Statistics Core Area (CORE_AM) decreased. For farmland, water, sparse grassland and bare land, CPLAND, TCA, DIVISION, the patch density (PD) and LSI decreased, yet patch fragmentation and CORE_AM increased.
At the patch level, the patch area, patch perimeter and shape index of water, farmland and bare land continued to decline. The three indices of forest patches increased overall, those of sparse grassland decreased as a whole, and those of construction land increased. It is obvious that urbanization greatly influenced the landscape evolution: the ecological diversity and landscape heterogeneity of ecological patches clearly dropped, and the Habitat Quality Index continuously declined, by 14%. Therefore, an optimization strategy based on greenway network planning is raised for discussion. This paper contributes to the study of landscape pattern evolution in planning and design and to research on the spatial layout of urbanization. Keywords: landscape pattern, optimization strategy, ArcGIS, Erdas, landscape metrics, landscape architecture
Procedia PDF Downloads 163524 Effect of Tooth Bleaching Agents on Enamel Demineralisation
Authors: Najlaa Yousef Qusti, Steven J. Brookes, Paul A. Brunton
Abstract:
Background: Tooth discoloration can be an aesthetic problem, and tooth whitening using carbamide peroxide bleaching agents is a popular treatment option. However, there are concerns about possible adverse effects, such as demineralisation of the bleached enamel, the cause of which is unclear. Introduction: Teeth can become stained or discoloured over time, and tooth whitening is an aesthetic solution for tooth discoloration. Bleaching solutions of 10% carbamide peroxide (CP) have become the standard agent used in dentist-prescribed, home-applied 'vital bleaching' techniques. These materials release hydrogen peroxide (H₂O₂), the active whitening agent. However, there is controversy in the literature regarding the effect of bleaching agents on enamel integrity and enamel mineral content. The purpose of this study was to establish whether carbamide peroxide bleaching agents affect the acid solubility of enamel (i.e., make teeth more prone to demineralisation). Materials and Methods: Twelve human premolar teeth were sectioned longitudinally along the midline and varnished to leave the natural enamel surface exposed. The baseline behaviour of each tooth half with respect to demineralisation in acid was established by sequential exposure to 4 vials containing 1 mL of 10 mM acetic acid (1 minute/vial). This was followed by exposure to 10% CP for 8 hours. After washing in distilled water, the tooth half was sequentially exposed to 4 further vials containing acid to test whether the acid susceptibility of the enamel had been affected. The corresponding tooth half acted as a control and was exposed to distilled water instead of CP. The mineral loss was determined by measuring the [Ca²⁺] and [PO₄³⁻] released in each vial using a calcium ion-selective electrode and the phosphomolybdenum blue method, respectively. The effect of bleaching on the tooth surfaces was also examined using SEM.
Results: Exposure to carbamide peroxide did not significantly alter the susceptibility of enamel to acid attack, and SEM of the enamel surface revealed only a slight alteration in surface appearance. SEM images of the control enamel surface showed a flat surface with some shallow pits, whereas the bleached enamel showed increased surface porosity and some areas of mild erosion. Conclusions: Exposure to H₂O₂ equivalent to 10% CP does not significantly increase the subsequent acid susceptibility of enamel as determined by Ca²⁺ release from the enamel surface; the effects of bleaching on mineral loss were indistinguishable from those of distilled water in the experimental system used, although some surface differences were observed by SEM. The phosphomolybdenum blue method for phosphate is compromised by peroxide bleaching agents due to their oxidising properties; however, the Ca²⁺ electrode is unaffected by oxidising agents and can be used to determine mineral loss in the presence of peroxides. Keywords: bleaching, carbamide peroxide, demineralisation, teeth whitening
Procedia PDF Downloads 125523 The Comparison of Bird’s Population between Naturally Regenerated Acacia Forest with Adjacent Secondary Indigenous Forest in Universiti Malaysia Sabah
Authors: Jephte Sompud, Emily A. Gilbert, Andy Russel Mojiol, Cynthia B. Sompud, Alim Biun
Abstract:
Naturally regenerated acacia forest and secondary indigenous forest form some of the urban forests in Sabah, with naturally regenerated acacia trees often seen along roads, existing as forest islands. Acacia is not an indigenous tree species in Sabah; it was introduced in the 1960s as a fire breaker and eventually became one of the preferred plantation trees for paper and pulp production. Due to its ability to survive even in impoverished soils and poorly irrigated land, the species has rapidly spread throughout Sabah through natural regeneration. There is currently a lack of studies investigating the bird population in naturally regenerated acacia forest. This study is important because it sheds light on the role of naturally regenerated acacia forest for bird populations, as birds are known to be good bioindicators of forest health. The aim of this study was to compare the bird population in naturally regenerated acacia forest with that of the adjacent secondary indigenous forest. The study site was the Universiti Malaysia Sabah (UMS) campus, where two forest types were chosen: naturally regenerated acacia forest and the adjacent secondary indigenous forest located at UMS Hill. A total of 21 sampling days were conducted in each forest type. The method used was solely mist netting with three-pocket nets. Whenever a bird was caught, it was extracted from the net to be identified, and measurements were recorded on a standard data sheet. Mist netting was conducted from 6 in the morning until 5 in the evening. The study was conducted between February and August 2014. Birds that were caught were ring-banded to initiate a long-term study of the understory bird population on the campus. The data were analyzed using descriptive analysis, diversity indices and t-tests.
The bird population diversity at the naturally regenerated acacia forest and the secondary indigenous forest was calculated using two common indices, the Shannon-Wiener and Simpson diversity indices. A total of 33 species from 18 families were recorded across both sites: 26 species at the naturally regenerated acacia forest and 19 species at the secondary indigenous forest. The Shannon diversity indices for the naturally regenerated acacia forest and the secondary indigenous forest were 2.87 and 2.46, respectively. The results show that species diversity was very significantly higher at the naturally regenerated acacia forest than at the secondary indigenous forest (p<0.001), suggesting that naturally regenerated acacia forest plays an important role in urban bird conservation. It is recommended that naturally regenerated acacia forests be considered for establishment as urban forest conservation areas, as they do play a role in biodiversity conservation. More future studies in naturally regenerated acacia forest should be encouraged to determine the status and value of biodiversity conservation in this ecosystem. Keywords: naturally regenerated acacia forest, bird population diversity, Universiti Malaysia Sabah, biodiversity conservation
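The two diversity indices above can be computed directly from capture counts; a minimal sketch with hypothetical mist-net data (species names and counts are illustrative only, not the study's records):

```python
import math
from collections import Counter

def shannon(counts):
    """Shannon-Wiener index H' = -sum(p_i * ln p_i) over species proportions."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def simpson(counts):
    """Simpson's diversity expressed as 1 - D = 1 - sum(p_i^2)."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

# Hypothetical mist-net captures (species -> individuals), for illustration only.
captures = Counter({"Yellow-vented Bulbul": 12, "Olive-winged Bulbul": 8,
                    "Little Spiderhunter": 5, "Pin-striped Tit-babbler": 3})
h = shannon(captures.values())
d = simpson(captures.values())
```

Higher H' and 1 - D values indicate a more diverse (richer and more even) assemblage; the study's site-level comparison applies these formulas to the capture totals per forest type.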
Procedia PDF Downloads 424522 Cardiac Arrest after Cardiac Surgery
Authors: Ravshan A. Ibadov, Sardor Kh. Ibragimov
Abstract:
Objective. The aim of the study was to optimize the protocol of cardiopulmonary resuscitation (CPR) after cardiovascular surgical interventions. Methods. The experience of CPR conducted on patients after cardiovascular surgical interventions in the Department of Intensive Care and Resuscitation (DIR) of the Republican Specialized Scientific-Practical Medical Center of Surgery named after Academician V. Vakhidov is presented. The key to the new approach is the rapid elimination of reversible causes of cardiac arrest, followed by either defibrillation or electrical cardioversion (depending on the situation) before external chest compression, which may damage the sternotomy. Careful use of adrenaline is emphasized due to the potential for severe hypertension, and timely resternotomy (within 5 minutes) is performed to ensure optimal cerebral perfusion through direct massage. Of 32 patients, cardiac arrest in the form of asystole was observed in 16 (50%), with hypoxemia as the cause, while the remaining 16 (50%) experienced ventricular fibrillation caused by arrhythmogenic reactions. The age of the patients ranged from 6 to 60 years. All patients were evaluated before the operation using the ASA and EuroSCORE scales, falling into the moderate-risk group (3-5 points). CPR was conducted for restoration of cardiac activity according to the American Heart Association and European Resuscitation Council guidelines (Ley SJ. Standards for Resuscitation After Cardiac Surgery. Critical Care Nurse. 2015;35(2):30-38). The duration of CPR ranged from 8 to 50 minutes. The APACHE II scale was used to assess the severity of patients' conditions after CPR, and the Glasgow Coma Scale was employed to evaluate patients' consciousness after the restoration of cardiac activity and sedation withdrawal. Results. In all patients, immediate chest compressions of the necessary depth (4-5 cm) at a frequency of 100-120 compressions per minute were initiated upon detection of cardiac arrest.
Regardless of the type of cardiac arrest, defibrillation with a manual defibrillator was performed 3-5 minutes later, and adrenaline was administered in doses ranging from 100 to 300 mcg. Persistent ventricular fibrillation was also treated with antiarrhythmic therapy (amiodarone, lidocaine). If necessary, infusion of inotropes and vasopressors was used, and for the prevention of brain edema and the restoration of adequate neurological status within 1-3 days, sedation, a magnesium-lidocaine mixture, mechanical intranasal cooling of the brain stem, and neuroprotective drugs were employed. A coordinated effort by the resuscitation team and proper role allocation within the team were essential for effective CPR. All these measures contributed to the improvement of CPR outcomes. Conclusion. Successful CPR following cardiac surgical interventions involves interdisciplinary collaboration. The application of an optimized CPR standard leads to a reduction in mortality rates and favorable neurological outcomes. Keywords: cardiac surgery, cardiac arrest, resuscitation, critically ill patients
Procedia PDF Downloads 52521 Principal Well-Being at Hong Kong: A Quantitative Investigation
Authors: Junjun Chen, Yingxiu Li
Abstract:
The occupational well-being of school principals plays a vital role in the pursuit of individual and school wellness and success. However, principals' well-being worldwide is under increasing threat because of the challenging and complex nature of their work and growing demands for school standardisation and accountability. Pressure is particularly acute in the post-pandemic future as principals attempt to deal with the impact of the pandemic on top of more regular demands. This is particularly true in Hong Kong, as school principals are increasingly wedged between unparalleled political, social, and academic responsibilities. Recognizing the semantic breadth of well-being, scholars have not settled on a single, mutually agreeable definition but agree that the concept of well-being has multiple dimensions across various disciplines. The multidimensional approach promises more precise assessments of the relationships between well-being and other concepts than the 'affect-only' approach or other single domains for capturing the essence of principal well-being, and this multidimensional conception of well-being is adopted in the present study. The study aimed to understand the situation of principal well-being and its influential drivers with a sample of 670 principals from Hong Kong and Mainland China. An online survey was sent to the participants by the researchers after the outbreak of COVID-19. All participants were well informed about the purposes and procedure of the project and the confidentiality of the data prior to filling in the questionnaire. Confirmatory factor analysis and structural equation modelling performed with Mplus were employed to analyse the dataset. The data analysis procedure involved three steps. First, descriptive statistics (e.g., mean and standard deviation) were calculated.
Second, confirmatory factor analysis (CFA) with maximum likelihood estimation was used to trim the principal well-being measurement. Third, structural equation modelling (SEM) was employed to test the influential factors of principal well-being. The results indicated that overall principal well-being was above the average mean score. The highest ranking was given by the principals to their psychological and social well-being (M = 5.21). This was followed by spiritual (M = 5.14; SD = .77), cognitive (M = 5.14; SD = .77), emotional (M = 4.96; SD = .79), and physical well-being (M = 3.15; SD = .73); participants ranked their physical well-being the lowest. Moreover, professional autonomy, supervisor and collegial support, school physical conditions, professional networking, and social media showed a significant impact on principal well-being. The findings of this study can potentially enhance not only principal well-being but also the functioning of individual principals and schools, without sacrificing principal well-being for quality education in the process. This would move us one step closer to a new future: a wellness society, as advocated by the OECD. Importantly, well-being is an inside job that begins with choosing wellness, while supports for becoming a wellness principal are also imperative. Keywords: well-being, school principals, quantitative, influential factors
Procedia PDF Downloads 82520 An Effective Modification to Multiscale Elastic Network Model and Its Evaluation Based on Analyses of Protein Dynamics
Authors: Weikang Gong, Chunhua Li
Abstract:
Dynamics plays an essential role in the exertion of protein function. The elastic network model (ENM), a harmonic-potential-based and cost-effective computational method, is a valuable and efficient tool for characterizing the intrinsic dynamical properties encoded in biomacromolecule structures and has been widely used to detect the large-amplitude collective motions of proteins. The Gaussian network model (GNM) and the anisotropic network model (ANM) are the two most often used ENM models, and many ENM variants have been proposed in recent years. Here, we propose a small but effective modification (denoted modified mENM) to the multiscale ENM (mENM), in which the least-squares fitting of the weights of the Kirchhoff/Hessian matrices is modified, since the original fitting neglects the details of pairwise interactions. We then compare the modified mENM with the original mENM, the traditional ENM, and the parameter-free ENM (pfENM) on reproducing the dynamical properties of six representative proteins whose molecular dynamics (MD) trajectories are available in http://mmb.pcb.ub.es/MoDEL/. For B-factor prediction, mENM achieves the best performance among the four ENM models. Additionally, with the weights of the multiscale Kirchhoff/Hessian matrices modified, the modified mGNM/mANM still performs much better than the corresponding traditional ENM and pfENM models. As for the dynamical cross-correlation map (DCCM) calculation, taking the data obtained from MD trajectories as the standard, mENM performs the worst, while the results produced by the modified mENM and pfENM models are close to those from the MD trajectories, with the latter slightly better than the former. Generally, the ANMs perform better than the corresponding GNMs, except for the mENM. Thus, pfANM and the modified mANM, especially the former, show excellent performance in dynamical cross-correlation calculation.
Compared with the GNMs (except for mGNM), the corresponding ANMs can capture quite a number of positive correlations for residue pairs separated by nearly the largest distances, which may be due to the consideration of anisotropy in the ANMs. Furthermore, and encouragingly, the modified mANM displays the best performance in capturing the functional motional modes, followed by the pfANM and traditional ANM models, while mANM fails in all the cases. This suggests that the consideration of long-range interactions is critical for ANM models to produce protein functional motions. Based on these analyses, the modified mENM is a promising method for capturing the multiple dynamical characteristics encoded in protein structures. This work is helpful for strengthening the understanding of the elastic network model and provides a valuable guide for researchers to utilize the model to explore protein dynamics. Keywords: elastic network model, ENM, multiscale ENM, molecular dynamics, parameter-free ENM, protein structure
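As background to the B-factor comparison above: in a GNM, residue fluctuations follow from the Kirchhoff (connectivity) matrix built from Cα contacts within a distance cutoff, and predicted B-factors are proportional to the diagonal of its pseudo-inverse. A minimal pure-Python sketch of the matrix construction, using hypothetical Cα coordinates and a conventional 7 Å cutoff (real ENM codes parse PDB structures and use NumPy for the pseudo-inverse):

```python
import math

def kirchhoff(coords, cutoff=7.0):
    """Build the GNM Kirchhoff matrix: off-diagonal entries are -1 for
    residue pairs within the cutoff (Angstroms); each diagonal entry is
    the number of contacts of that residue."""
    n = len(coords)
    gamma = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) <= cutoff:
                gamma[i][j] = gamma[j][i] = -1.0
                gamma[i][i] += 1.0
                gamma[j][j] += 1.0
    return gamma

# Four hypothetical C-alpha positions (Angstroms), for illustration only.
ca = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0), (11.4, 0.0, 0.0)]
g = kirchhoff(ca)
# Predicted B-factors would be proportional to the diagonal of the
# pseudo-inverse of g (computed by eigendecomposition in practice).
```

The mENM discussed above generalizes this single matrix to a weighted combination of Kirchhoff matrices built at several cutoffs, which is where the fitting of the weights enters.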
Procedia PDF Downloads 120519 Measurement of Magnetic Properties of Grainoriented Electrical Steels at Low and High Fields Using a Novel Single
Authors: Nkwachukwu Chukwuchekwa, Joy Ulumma Chukwuchekwa
Abstract:
Magnetic characteristics of grain-oriented electrical steel (GOES) are usually measured at the high flux densities typical of its applications in power transformers. There are limited magnetic data at low flux densities, which are relevant for the characterization of GOES for applications in metering instrument transformers and in low-frequency magnetic shielding for magnetic resonance imaging medical scanners. Magnetic properties such as coercivity, B-H loop, AC relative permeability and specific power loss of conventional grain-oriented (CGO) and high-permeability grain-oriented (HGO) electrical steels were measured and compared at high and low flux densities at power magnetising frequency. Forty strips (20 CGO and 20 HGO), each 305 mm x 30 mm x 0.27 mm, from one supplier were tested; the HGO and CGO strips had average grain sizes of 9 mm and 4 mm, respectively. Each strip was singly magnetised under sinusoidal peak flux density from 8.0 mT to 1.5 T at a magnetising frequency of 50 Hz. The novel single sheet tester comprises a personal computer in which LabVIEW version 8.5 from National Instruments (NI) was installed, an NI 4461 data acquisition (DAQ) card, an impedance matching transformer to match the 600 Ω minimum load impedance of the DAQ card with the 5 to 20 Ω low impedance of the magnetising circuit, and a 4.7 Ω shunt resistor. A double vertical yoke made of GOES, 290 mm long and 32 mm wide, is used. A 500-turn secondary winding, about 80 mm in length, was wound around a plastic former, 270 mm x 40 mm, housing the sample, while a 100-turn primary winding, covering the entire length of the plastic former, was wound over the secondary winding. A standard Epstein strip to be tested is placed between the yokes. The magnetising voltage was generated by the LabVIEW program through a voltage output from the DAQ card.
The voltage drop across the shunt resistor and the secondary voltage were acquired by the card for calculation of the magnetic field strength and flux density, respectively. A feedback control system implemented in LabVIEW was used to control the flux density and to make the induced secondary voltage waveforms sinusoidal, so as to obtain repeatable and comparable measurements. The low-noise NI 4461 card, with 24-bit resolution, a 204.8 kHz sampling rate and 92 kHz bandwidth, was chosen to minimize the influence of thermal noise. In order to reduce environmental noise, the yokes, sample and search coil carrier were placed in a noise-shielding chamber. HGO was found to have better magnetic properties in both the high and low magnetisation regimes. This is because of the larger grain size of HGO and the higher grain-to-grain misorientation of CGO. HGO is thus better than CGO in both low and high magnetic field applications. Keywords: flux density, electrical steel, LabVIEW, magnetization
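The flux density calculation described above follows Faraday's law: the induced secondary voltage is v2 = N*A*dB/dt, so B(t) is recovered by numerically integrating v2 over time. A minimal pure-Python sketch, with illustrative parameters taken loosely from the abstract (500 secondary turns, a 30 mm x 0.27 mm strip cross-section, 50 Hz, 204.8 kHz sampling); the feedback control and the field-strength channel are omitted:

```python
import math

def flux_density(v2, fs, turns, area):
    """Integrate the induced secondary voltage v2 (V), sampled at fs (Hz),
    to recover B(t) = (1/(N*A)) * integral(v2 dt), removing the mean so
    the waveform is centred on zero."""
    dt = 1.0 / fs
    b, integral = [0.0], 0.0
    for k in range(1, len(v2)):
        integral += 0.5 * (v2[k] + v2[k - 1]) * dt   # trapezoidal rule
        b.append(integral / (turns * area))
    offset = sum(b) / len(b)
    return [x - offset for x in b]

# Illustrative values only: one 50 Hz cycle of a sinusoidal induced voltage
# whose amplitude corresponds to a 1.5 T peak flux density.
N, A, fs, f = 500, 30e-3 * 0.27e-3, 204800, 50.0
v_peak = 2 * math.pi * f * N * A * 1.5
v2 = [v_peak * math.cos(2 * math.pi * f * k / fs) for k in range(int(fs / f) + 1)]
b = flux_density(v2, fs, N, A)
peak_b = max(b)  # recovers approximately 1.5 T
```

The same integration applied to the shunt-resistor voltage (scaled by the primary turns and magnetic path length) yields the field strength H, from which the B-H loop is plotted.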
Procedia PDF Downloads 290518 Endemic Asteraceae from Mauritius Islands as Potential Phytomedicines
Authors: S. Kauroo, J. Govinden Soulange, D. Marie
Abstract:
Psiadia species of the Asteraceae are traditionally used in the folk medicine of Mauritius to treat cutaneous and bronchial infections. The present study aimed at validating the phytomedicinal properties of selected species from the Asteraceae family, namely Psiadia arguta, Psiadia viscosa, Psiadia lithospermifolia, and Distephanus populifolius. Dried hexane, ethyl acetate, and methanol leaf extracts were studied for their antioxidant properties using the DPPH (1,1-diphenyl-2-picrylhydrazyl), FRAP (ferric reducing ability of plasma), and deoxyribose assays. Antibacterial activity against the human pathogenic bacteria Escherichia coli (ATCC 27853), Staphylococcus aureus (ATCC 29213), Enterococcus faecalis (ATCC 29212), Klebsiella pneumoniae (ATCC 27853), Pseudomonas aeruginosa (ATCC 27853), and Bacillus cereus (ATCC 11778) was measured using the broth microdilution assay. Qualitative phytochemical screening using standard methods revealed the presence of coumarins, tannins, leucoanthocyanins, and steroids in all the tested extracts. The measured phenolics level of the selected plant extracts varied from 24.0 to 231.6 mg GAE/g, with the maximum level in the methanol extracts of all four species. The highest flavonoid and proanthocyanidin contents were noted in Psiadia arguta methanolic extracts, with 65.7±1.8 mg QE/g and 5.1±0.0 mg CAT/g dry weight (DW) extract, respectively. The maximum free radical scavenging activity was measured in Psiadia arguta methanol and ethyl acetate extracts, with IC50 values of 11.3±0.2 and 11.6±0.2 µg/mL, respectively, followed by Distephanus populifolius methanol extracts with an IC50 of 11.3±0.8 µg/mL. The maximum ferric reducing antioxidant potential was noted in Psiadia lithospermifolia methanol extracts, with a FRAP value of 18.8±0.4 µmol Fe2+/L/g DW.
The antioxidant capacities based on the DPPH and deoxyribose assays were negatively related to the total phenolics, flavonoid and proanthocyanidin contents, while the ferric reducing antioxidant potential was strongly correlated with the total phenolics, flavonoid and proanthocyanidin contents. All four species exhibited antimicrobial activity against the tested bacteria (both Gram-negative and Gram-positive). Interestingly, the hexane and ethyl acetate extracts of Psiadia viscosa and Psiadia lithospermifolia were more active than the control antibiotic chloramphenicol. The minimum inhibitory concentration (MIC) values for the hexane and ethyl acetate extracts of Psiadia viscosa and Psiadia lithospermifolia against the tested bacteria ranged from 62.5 to 500 µg/mL. These findings validate the use of these Asteraceae in the traditional medicine of Mauritius and highlight their pharmaceutical potential as prospective phytomedicines. Keywords: antibacterial, antioxidant, DPPH, flavonoids, FRAP, Psiadia spp.
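The IC50 values above are the extract concentrations at which 50% of the DPPH radical is scavenged; a common way to estimate them from assay data is linear interpolation between the two concentrations that bracket 50% inhibition. A minimal sketch with hypothetical scavenging data (illustrative only, not the study's measurements):

```python
def ic50(concs, inhibitions):
    """Estimate IC50 by linear interpolation between the two measured
    concentrations whose % inhibition brackets 50%.  Assumes concs are
    sorted ascending with monotonically increasing inhibition."""
    points = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 < 50 <= i2:
            return c1 + (50 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical DPPH scavenging data, for illustration only.
concs = [2.5, 5.0, 10.0, 20.0, 40.0]    # extract concentration, ug/mL
inhib = [14.0, 27.0, 46.0, 68.0, 85.0]  # % DPPH radical scavenged
v = ic50(concs, inhib)
print(f"IC50 ~ {v:.1f} ug/mL")
```

A lower IC50 means a more potent radical scavenger, which is why IC50 is expected to correlate negatively with total phenolic content.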
Procedia PDF Downloads 528517 Learning to Translate by Learning to Communicate to an Entailment Classifier
Authors: Szymon Rutkowski, Tomasz Korbak
Abstract:
We present a reinforcement-learning-based method of training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, its learning procedure lacks psychological plausibility: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language's 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and the premise, which are then passed to the classifier. The translator is rewarded for the classifier's performance in determining entailment between the sentences translated into the classifier's native language. The translator's performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, there are a number of improvements we introduce. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality.
It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferrable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as its analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) to solve the problem of the translator's policy optimization and found that our attempts yield some promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation. Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning
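The policy-gradient optimization mentioned above can be illustrated with a minimal REINFORCE loop on a toy stand-in problem: a softmax policy (the "translator") chooses among three candidate outputs, and a stand-in "classifier" rewards only one of them. Everything here is hypothetical and merely sketches the update rule; the actual system uses neural sequence models and a pretrained entailment classifier.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy stand-in for the entailment classifier: only action 0 (the "correct
# translation") earns reward 1; everything else earns 0.
def classifier_reward(action):
    return 1.0 if action == 0 else 0.0

random.seed(0)
logits = [0.0, 0.0, 0.0]   # translator's policy parameters over 3 candidates
lr, baseline = 0.5, 0.0

for step in range(500):
    probs = softmax(logits)
    action = random.choices(range(3), weights=probs)[0]
    r = classifier_reward(action)
    baseline += 0.05 * (r - baseline)   # running-average baseline reduces variance
    advantage = r - baseline
    # REINFORCE update: d/d_logit_i log pi(action) = 1[i == action] - pi(i)
    for i in range(3):
        grad = (1.0 if i == action else 0.0) - probs[i]
        logits[i] += lr * advantage * grad

final = softmax(logits)
# After training, the policy strongly prefers the rewarded action.
```

Actor-critic methods, also evaluated in the abstract, replace the running-average baseline with a learned value estimate, which typically lowers gradient variance further.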
Procedia PDF Downloads 127
516 A Study of the Prevalence of Trichinellosis in Domestic and Wild Animals for the Region of Sofia, Bulgaria
Authors: Valeria Dilcheva, Svetlozara Petkova, Ivelin Vladov
Abstract:
Nematodes of the genus Trichinella are zoonotic parasites with a cosmopolitan distribution. More than 100 species of mammals, birds and reptiles are involved in the natural cycle of this nematode. At present, T. spiralis, T. pseudospiralis, and T. britovi have been found in Bulgaria. The existence of natural wildlife and domestic reservoirs of Trichinella spp. can be a serious threat to human health. Three Trichinella isolates that caused human trichinellosis outbreaks in three regions of Sofia City Province were used for the research: sample No. 1 - Rattus norvegicus, sample No. 2 - domestic pig (Sus scrofa domestica), sample No. 3 - domestic pig (Sus scrofa domestica). Trichinella larvae of the studied species were isolated via the digestive method (pepsin, hydrochloric acid, water) at 37ºC by standard procedure and were determined by gender (male and female) based on their morphological characteristics. T. spiralis, T. pseudospiralis, T. nativa and T. britovi were used as reference Trichinella species. Single male and female larvae of the three isolates were crossed with single male and female larvae of the reference species, as well as reciprocally. As a result of cross-breeding, offspring of muscular larvae were obtained with T. spiralis and T. britovi, while in experiments with T. pseudospiralis and T. nativa, Trichinella larvae were not found in the laboratory mice. The results obtained in the control groups indicate that the larvae used from the isolates and the four reference species are infective. The infective ability of the F1 offspring from the successful cross-breeding between isolates and reference species was also investigated. The data obtained in the experiment showed that isolates No. 1 and No. 2 belong to the species T. spiralis, and isolate No. 3 belongs to the species T. britovi. The results were confirmed by PCR and real-time PCR analysis. Thus the presence and circulation of the species T. spiralis and T. britovi in Bulgaria were confirmed. Rodents (rats) are probably involved in the distribution of T. spiralis in the urban environment. The species T. britovi found in a domestic pig suggests some contact with wild animals, for which T. britovi is characteristic. The probable reason is that a large number of farmers in Bulgaria practice free-range breeding of domestic pigs. Some farmers also used waste products from game animals (foxes, jackals, bears, wolves) as food for domestic pigs, and the infection was probably obtained this way. The distribution range of Trichinella species in Bulgaria is not strictly outlined. It is believed that T. spiralis is most common in domestic animals, while T. britovi and T. pseudospiralis are characteristic of wildlife. To answer the question of whether wild and synanthropic animals are infected with the same or different Trichinella species, which species predominate in nature, and what their distribution among different hosts is, further research is required.
Keywords: cross-breeding, Sofia, trichinellosis, Trichinella britovi, Trichinella spiralis
Procedia PDF Downloads 187
515 Building an Opinion Dynamics Model from Experimental Data
Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics will show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people’s opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as numbers). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, we repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. 
This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This is a strong violation of what is suggested by common models, where people starting at, for example, +8 will first move towards 0 instead of directly jumping to -8. We also observed social influence, meaning that people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, is different from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule
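The encoding and the qualitative findings above can be sketched in a few lines. The simulation below is a hypothetical illustration, not the study's fitted model: the influence strength and noise level are made-up values chosen only so that noise dominates influence, as reported.

```python
import numpy as np

rng = np.random.default_rng(1)

# Encode opinion as agree = +1 / disagree = -1, with certainty in 1..10
opinion = rng.choice([-1, 1], size=200)
certainty = rng.integers(1, 11, size=200)
continuous = opinion * certainty            # "continuous opinion" in [-10, +10]

# Hypothetical update rule mirroring the findings: a small pull toward the
# partner's displayed (binary) opinion plus a larger random fluctuation.
influence, noise_sd = 0.4, 1.5              # illustrative values: noise > influence
partner = rng.permutation(continuous)       # each agent sees one other opinion
updated = continuous + influence * np.sign(partner) + rng.normal(0, noise_sd, 200)
updated = np.clip(updated, -10, 10)         # keep scores on the measured scale
```

Note that the update acts directly on the signed continuous opinion, so an agent at +8 can in principle flip sign without first drifting through 0, consistent with the observation that certainty is largely preserved across opinion changes.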
Procedia PDF Downloads 108
514 Social Ties and the Prevalence of Single Chronic Morbidity and Multimorbidity among the Elderly Population in Selected States of India
Authors: Sree Sanyal
Abstract:
Research in ageing often highlights the age-related health dimension more than the psycho-social characteristics of the elderly, which also influence and challenge health outcomes. Multimorbidity is defined as a person having more than one chronic non-communicable disease, and its prevalence increases with ageing. The study aims to evaluate the influence of social ties on the self-reported prevalence of multimorbidity (selected chronic non-communicable diseases) among the elderly population in selected states of India. The data is accessed from Building Knowledge Base on Population Ageing in India (BKPAI), collected in 2011, covering self-reported chronic non-communicable diseases like arthritis, heart disease, diabetes, lung disease with asthma, hypertension, cataract, depression, dementia, Alzheimer’s disease, and cancer. The data on the above diseases were taken together and categorized as: ‘no disease’, ‘one disease’ and ‘multimorbidity’. The predictor variables were demographic, socio-economic, and residential types, and the variables of social ties include social support, social engagement, perceived support, connectedness, and importance of the elderly. Predicted probabilities from multiple logistic regression were used to determine the background characteristics of the old in association with chronic morbidities showing multimorbidity. The findings suggest that 24.35% of the elderly are suffering from multimorbidity. Research shows that, with reference to ‘no disease’ and according to the socio-economic characteristics of the old, the female oldest old (80+) who are of other castes and religions, widowed, never had any formal education, ever worked in their life, come from the second wealth quintile, and live in rural Maharashtra are more prone to ‘one disease’. 
From the social ties background, the elderly who perceive that they are important to the family, whose decision-making status has changed after getting older, who prefer to stay with son and spouse only, and who are satisfied with the communication from their children are less likely to have a single morbidity, and the results are significant. Again, with respect to ‘no disease’, the female oldest old (80+) who are of other castes, Christian in religion, widowed, have less than 5 years of completed education, ever worked, come from the highest wealth quintile, and reside in urban Kerala are more associated with multimorbidity. The elderly who are more socially connected through family visits and public gatherings, get support in decision making, and prefer to spend their later years with son and spouse only but stay alone show a lower prevalence of multimorbidity. In conclusion, received and perceived social integration and support from the associated neighborhood in the older days, and knowing about their own needs in life, facilitate better health and wellbeing of the elderly population in selected states of India.
Keywords: morbidity, multi-morbidity, prevalence, social ties
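The three-way outcome coding used in the study can be made concrete with a short sketch; the condition list mirrors the diseases named in the abstract, while the function itself is only an illustration of the categorization step, not the study's code.

```python
# Chronic non-communicable conditions covered by the BKPAI survey (as listed
# in the abstract); respondents are grouped by how many they self-report.
CONDITIONS = {
    "arthritis", "heart disease", "diabetes", "lung disease with asthma",
    "hypertension", "cataract", "depression", "dementia",
    "alzheimer's disease", "cancer",
}

def morbidity_category(reported):
    """Map a respondent's self-reported conditions to the outcome classes."""
    n = len(set(reported) & CONDITIONS)   # count recognized conditions only
    if n == 0:
        return "no disease"
    if n == 1:
        return "one disease"
    return "multimorbidity"               # more than one chronic condition
```

For example, a respondent reporting both diabetes and hypertension falls into the 'multimorbidity' class, which is the outcome the logistic regression contrasts against 'no disease'.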
Procedia PDF Downloads 120
513 Development of a Quick On-Site Pass/Fail Test for the Evaluation of Fresh Concrete Destined for Application as Exposed Concrete
Authors: Laura Kupers, Julie Piérard, Niki Cauberg
Abstract:
The use of exposed concrete (sometimes referred to as architectural concrete) keeps gaining popularity. Exposed concrete has the advantage of combining the structural properties of concrete with an aesthetic finish. However, for a successful aesthetic finish, much attention needs to be paid to the execution (formwork, release agent, curing, weather conditions…), the concrete composition (choice of the raw materials and mix proportions) as well as to its fresh properties. For the latter, a simple on-site pass/fail test could halt the casting of concrete not suitable for architectural concrete and thus avoid expensive repairs later. When architects opt for exposed concrete, they usually want a smooth, uniform and nearly blemish-free surface. For this, a standard ‘construction’ concrete does not suffice. An aesthetic surface finish requires the concrete to contain a minimum content of fines to minimize the risk of segregation and to allow complete filling of more complex shaped formworks. The concrete may neither be too viscous, as this makes it more difficult to compact and increases the risk of blow holes blemishing the surface. On the other hand, too much bleeding may cause color differences on the concrete surface. An easy pass/fail test, which can be performed on site just before casting, could avoid these problems. In case the fresh concrete fails the test, the concrete can be rejected; only if the fresh concrete passes the test would it be cast. The pass/fail tests are intended for a concrete with consistency class S4. Five tests were selected as possible on-site pass/fail tests. Two of these tests already exist: the K-slump test (ASTM C1362) and the Bauer Filter Press Test. 
The remaining three tests were developed by the BBRI in order to test the segregation resistance of fresh concrete on site: the ‘dynamic sieve stability test’, the ‘inverted cone test’ and an adapted ‘visual stability index’ (VSI) for the slump and flow test. These tests were inspired by existing tests for self-compacting concrete, for which segregation resistance is of great importance. The suitability of the fresh concrete mixtures was also tested by means of a laboratory reference test (resistance to segregation) and by visual inspection (blow holes, structure…) of small test walls. More than fifteen concrete mixtures of different quality were tested. The results of the pass/fail tests were compared with the results of this laboratory reference test and the test walls. The preliminary laboratory results indicate that concrete mixtures ‘suitable’ for placing as exposed concrete (containing sufficient fines, a balanced grading curve, etc.) can be distinguished from ‘inferior’ concrete mixtures. Additional laboratory tests, as well as tests on site, will be conducted to confirm these preliminary results and to set appropriate pass/fail values.
Keywords: exposed concrete, testing fresh concrete, segregation resistance, bleeding, consistency
Procedia PDF Downloads 423
512 Evaluation of Cooperative Hand Movement Capacity in Stroke Patients Using the Cooperative Activity Stroke Assessment
Authors: F. A. Thomas, M. Schrafl-Altermatt, R. Treier, S. Kaufmann
Abstract:
Stroke is the main cause of adult disability. Upper limb function in particular is affected in most patients. Recently, cooperative hand movements have been shown to be a promising type of upper limb training in stroke rehabilitation. In these movements, which are frequently found in activities of daily living (e.g. opening a bottle, winding up a blind), the force of one upper limb has to be equally counteracted by the other limb to successfully accomplish a task. The use of standardized and reliable clinical assessments is essential to evaluate the efficacy of therapy and the functional outcome of a patient. Many assessments for upper limb function or impairment are available. However, the evaluation of cooperative hand movement tasks is rarely included in them. Thus, the aim of this study was (i) to develop a novel clinical assessment (CASA - Cooperative Activity Stroke Assessment) for the evaluation of patients’ capacity to perform cooperative hand movements and (ii) to test its intra- and interrater reliability. Furthermore, CASA scores were compared to current gold standard assessments for the upper extremity in stroke patients (i.e. Fugl-Meyer Assessment, Box & Blocks Test). The CASA consists of five cooperative activities of daily living: (1) opening a jar, (2) opening a bottle, (3) opening and closing a zip, (4) unscrewing a nut and (5) opening a clipbox. Here, the goal is to accomplish the tasks as fast as possible. In addition to the quantitative rating (i.e. time), which is converted to a 7-point scale, the quality of the movement is also rated on a 4-point scale. To test the reliability of the CASA, fifteen stroke subjects were tested twice within a week by the same two raters. Intra- and interrater reliability was calculated using the intraclass correlation coefficient (ICC) for the total CASA score and single items. 
Furthermore, Pearson correlation was used to compare the CASA scores to the scores of the Fugl-Meyer upper limb assessment and the Box & Blocks test, which were assessed in every patient in addition to the CASA. ICC scores indicated excellent intra- and interrater reliability for the total CASA score and good to excellent reliability for single items. Furthermore, the CASA score was significantly correlated with the Fugl-Meyer and Box & Blocks scores. The CASA provides a reliable assessment for cooperative hand movements, which are crucial for many activities of daily living. Due to its inexpensive setup and easy, fast implementation, we suggest it is well suited for clinical application. In conclusion, the CASA is a useful tool for assessing the functional status and therapy-related recovery of cooperative hand movement capacity in stroke patients.
Keywords: activities of daily living, clinical assessment, cooperative hand movements, reliability, stroke
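The intraclass correlation used for reliability here can be computed from a two-way ANOVA decomposition. The sketch below implements the single-measure, two-way random-effects form ICC(2,1) for an n-subjects x k-raters score matrix; the score data at the bottom is purely illustrative, not the study's.

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single measure."""
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)              # per-subject means
    col_means = Y.mean(axis=0)              # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subjects MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-raters MS
    sse = ((Y - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative scores from two raters over five subjects (hypothetical data)
scores = np.array([[24., 25.], [18., 17.], [30., 30.], [12., 14.], [27., 26.]])
icc = icc2_1(scores)
```

As a sanity check, perfectly agreeing raters give an ICC of exactly 1, while a constant offset between raters lowers it, since ICC(2,1) penalizes absolute disagreement, not just inconsistency.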
Procedia PDF Downloads 318
511 Bituminous Geomembranes: Sustainable Products for Road Construction and Maintenance
Authors: Ines Antunes, Andrea Massari, Concetta Bartucca
Abstract:
The role of greenhouse gases (GHG) in the atmosphere has been well known since the 19th century; however, researchers began to relate them to climate change only in the second half of the following century. From that moment, scientists started to correlate the presence of GHGs such as CO₂ with global warming phenomena. This has raised the awareness not only of experts in the field but also of public opinion, which is becoming more and more sensitive to environmental pollution and sustainability issues. Nowadays, the reduction of GHG emissions is one of the principal objectives of EU nations. The target is an 80% reduction of emissions by 2050, to reach the important goal of carbon neutrality. The road sector is responsible for an important share of those emissions (about 20%). Most of this is due to traffic, but a good contribution is also made, directly or indirectly, by road construction and maintenance. Raw material choice and the reuse of post-consumer plastic, as well as cleverer road design, can contribute significantly to reducing the carbon footprint. Bituminous membranes can be successfully used as reinforcement systems in asphalt layers to improve road pavement performance against cracking. Composite materials coupling membranes with grids and/or fabrics should be able to combine the improved tensile properties of the reinforcement with the stress-absorbing and waterproofing effects of membranes. Polyglass, with its brand dedicated to road construction and maintenance called Polystrada, has done more than this. The company's target was not only to focus sustainability on the final application but also to implement a greener mentality from the cradle to the grave. Starting from production, Polyglass has made important improvements aimed at increasing efficiency and minimizing waste. 
The installation of a trigeneration plant and the use of selected production scraps inside the products, as well as the reduction of emissions into the environment, are among the company's main efforts to reduce impact during final product build-up. Moreover, installing Polystrada products brings a significant improvement in road lifetime. This has an impact not only on the number of maintenance or renewal operations needed (build less) but also on traffic density due to works and road deviations during operations. At the end of a road's life, Polystrada products can be 100% recycled and milled with the classical systems used, without changing the normal maintenance procedures. In this work, all these contributions were quantified in terms of CO₂ emissions thanks to an LCA analysis. The data obtained were compared with a classical system and with the standard production of a membrane. The results show that the use of Polyglass products for road maintenance and construction gives a significant reduction of emissions when the membrane is installed under the road wearing course.
Keywords: CO₂ emission, LCA, maintenance, sustainability
Procedia PDF Downloads 65
510 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data
Authors: Martin Pellon Consunji
Abstract:
Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and speculative investment, there is an ever-growing demand for automated trading tools, such as bots, to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to make a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data-processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be credited to the inherent advantages that bots have over humans in terms of processing large amounts of data, lacking emotions of fear or greed, and predicting market prices using past data and artificial intelligence; hence, a growing number of approaches have been brought forward to tackle this task. However, the general limitation of these approaches still comes down to the fact that limited historical data does not always determine the future, and that many market participants are still human, emotion-driven traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most other well-established markets. Because of this, some human traders have gone back to tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data involved in making market predictions. This paper proposes a method that uses neuroevolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data in order to obtain a more accurate forecast of future market behavior and account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies. 
This study’s approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, by using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and a real Bitcoin live trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results over a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking standard trading fees into consideration.
Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms
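The evolutionary loop described above can be sketched in miniature. Everything below is a hypothetical stand-in for the paper's system: a synthetic price series, a made-up daily sentiment score, a one-neuron "bot" instead of a full network, and mutation-with-elitism instead of the full neuroevolution scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical market inputs: a random-walk price series plus an invented
# daily sentiment score, standing in for the real technical/sentiment feeds.
days = 200
prices = 100 + np.cumsum(rng.normal(0, 1, days))
momentum = np.concatenate([[0.0], np.diff(prices)])  # simple technical signal
sentiment = rng.normal(0, 1, days)
features = np.stack([momentum, sentiment, np.ones(days)])  # 3 x days, with bias

def fitness(w):
    """Net profit of a one-neuron bot: the sign of a linear signal sets the
    daily position (+1 long / -1 short), earning the next day's price move."""
    position = np.sign(np.tanh(w @ features))
    return float((position[:-1] * np.diff(prices)).sum())

pop = rng.normal(0, 1, (30, 3))       # population of bot weight vectors
history = []                          # best fitness per generation
for gen in range(40):
    scores = np.array([fitness(w) for w in pop])
    history.append(scores.max())
    elite = pop[np.argsort(scores)[-10:]]          # elitism: keep the 10 best
    children = elite[rng.integers(10, size=20)] + rng.normal(0, 0.2, (20, 3))
    pop = np.vstack([elite, children])             # next generation

best = pop[np.argmax([fitness(w) for w in pop])]
```

Because the elite survive unchanged, the best fitness in the population can never decrease across generations, which is the mechanism behind the "increasingly improved populations of bots" in the abstract.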
Procedia PDF Downloads 122
509 STML: Service Type-Checking Markup Language for Services of Web Components
Authors: Saqib Rasool, Adnan N. Mian
Abstract:
Web components are introduced as the latest standard of HTML5 for writing modular web interfaces, ensuring maintainability through the isolated scope of each web component. Reusability can also be achieved by sharing plug-and-play web components that can be used as off-the-shelf components by other developers. A web component encapsulates all the required HTML, CSS and JavaScript code as a standalone package which must be imported to integrate the web component within an existing web interface. This is then followed by integrating the web component with web services for dynamically populating its content. Since web components are reusable as off-the-shelf components, they must be equipped with some mechanism for ensuring their proper integration with web services. The consistency of a service's behavior can be verified through type checking, one of the popular solutions for improving code quality in many programming languages. However, HTML does not provide type checking, as it is a markup language and not a programming language. The contribution of this work is to introduce a new extension of HTML called Service Type-checking Markup Language (STML), adding support for type checking of JSON-based REST services in HTML. STML can be used for defining the expected data types of responses from JSON-based REST services, which are used for populating the content of a web component's HTML elements. Although JSON has six data types, namely string, number, boolean, null, object and array, STML supports only string, number and boolean. This is because both object and array are considered as strings when populated in HTML elements. In order to define the data type of any HTML element, the developer just needs to add the custom STML attributes st-string, st-number and st-boolean for string, number and boolean, respectively. 
All these STML annotations are written by the developer of a web component, enabling other developers to use automated type checking to ensure the proper integration of their REST services with the same web component. Two utilities have been written for developers using STML-based web components. The first is used for automated type checking during the development phase; it uses the browser console to show an error description if an integrated web service does not return a response with the expected data type. The other is a Gulp-based command line utility for removing the STML attributes before going into production, ensuring the delivery of STML-free web pages in the production environment. Both of these utilities have been tested for type checking of REST services through STML-based web components, and the results have confirmed the feasibility of evaluating service behavior through HTML only. Currently, STML is designed for automated type checking of integrated REST services, but it can be extended to introduce a complete service testing suite based on HTML only, which would transform STML from a Service Type-checking Markup Language into a Service Testing Markup Language.
Keywords: REST, STML, type checking, web component
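The check performed by the development-phase utility can be illustrated outside the browser. The Python sketch below is a hypothetical mirror of STML's behavior, not the actual utility: the `expected` dictionary plays the role of the st-* attributes a developer would place on a component's elements, and the field names are invented.

```python
import json

# Hypothetical st-* annotations for three fields of a component's content
expected = {"title": "st-string", "price": "st-number", "inStock": "st-boolean"}
PYTHON_TYPES = {"st-string": str, "st-number": (int, float), "st-boolean": bool}

def type_check(response_body, expected):
    """Return a list of error strings; an empty list means the response passes."""
    data = json.loads(response_body)
    errors = []
    for field, attr in expected.items():
        value = data.get(field)
        ok = isinstance(value, PYTHON_TYPES[attr])
        if attr != "st-boolean" and isinstance(value, bool):
            ok = False  # bool is a subclass of int in Python; reject it here
        if not ok:
            errors.append(f"{field}: expected {attr}, got {type(value).__name__}")
    return errors
```

A conforming response such as `{"title": "Widget", "price": 9.99, "inStock": true}` yields no errors, while a response with mismatched types produces one message per offending field, which is what the browser-console utility would surface during development.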
Procedia PDF Downloads 251
508 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material - that is, to determine its composition, shape, size, and the number of layers and crystals. To contribute to this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, on the test set. 
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater data handling effort. All in all, the method developed is a significant time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
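The measurement-extraction step, which runs after the U-net segmentation and object delimitation have assigned each crystal a distinct label, can be sketched as follows. The tiny labeled array is a toy stand-in for a real segmented SEM image, and the chosen statistics (pixel area plus bounding-box lateral measures) are illustrative of, not identical to, the paper's metric set.

```python
import numpy as np

# Toy labeled segmentation: 0 = background, 1..N = distinct crystals, as
# produced by the object delimitation step after U-net segmentation.
labels = np.zeros((8, 8), dtype=int)
labels[1:4, 1:5] = 1          # crystal 1: a 3 x 4 pixel block
labels[5:7, 2:4] = 2          # crystal 2: a 2 x 2 pixel block

def crystal_stats(labels):
    """Per-crystal pixel area and bounding-box lateral measures."""
    stats = {}
    for cid in np.unique(labels):
        if cid == 0:
            continue                      # skip background
        ys, xs = np.nonzero(labels == cid)
        stats[int(cid)] = {
            "area_px": int(ys.size),
            "height_px": int(ys.max() - ys.min() + 1),
            "width_px": int(xs.max() - xs.min() + 1),
        }
    return stats

stats = crystal_stats(labels)
```

Running this over every segmented image populates the crystal-dimension database from which the area and perimeter frequency distributions are plotted.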
Procedia PDF Downloads 158