2111 Prevalence of Breast Cancer Molecular Subtypes at a Tertiary Cancer Institute
Authors: Nahush Modak, Meena Pangarkar, Anand Pathak, Ankita Tamhane
Abstract:
Background: Breast cancer is a leading cause of cancer and cancer-related mortality among women. This study presents the statistical analysis of a cohort of over 250 patients diagnosed with breast cancer and subtyped by oncologists using immunohistochemistry (IHC). IHC was performed using ER, PR, HER2, and Ki-67 antibodies. Materials and Methods: Formalin-fixed, paraffin-embedded tissue samples were obtained surgically, and standard protocols were followed for fixation, grossing, tissue processing, embedding, sectioning, and IHC. The Ventana BenchMark XT machine was used for automated IHC of the samples. The antibodies used were supplied by F. Hoffmann-La Roche Ltd. Statistical analysis was performed using SPSS for Windows; the tests performed were the chi-squared test and correlation tests at p < .01. The raw data were collected and provided by the National Cancer Institute, Jamtha, India. Results: Luminal B was the most prevalent molecular subtype of breast cancer at our institute; a chi-squared test of homogeneity was performed to test for equality of distribution and confirmed this. A worse prognosis in breast cancer depends upon the expression of Ki-67 and HER2 protein in cancerous cells; our analysis, performed at p < .01, observed significant dependence. There is no dependence of molecular subtype of breast cancer on age; similarly, age is an independent variable with respect to Ki-67 expression. A chi-squared test performed on patients' human epidermal growth factor receptor 2 (HER2) statuses showed strong dependence between the percentage of Ki-67 expression and HER2 (+/-) status, indicating that the Ki-67 value depends upon HER2 expression in cancerous cells (p < .01). Surprisingly, dependence was also observed between Ki-67 and PR at p < .01, which shows that progesterone receptor proteins (PR) are over-expressed when there is an elevation in the expression of Ki-67 protein. Conclusion: We conclude that Luminal B is the most prevalent molecular subtype at the National Cancer Institute, Jamtha, India. No significant correlation was found between age and Ki-67 expression in any molecular subtype, and no dependence or correlation exists between patients' age and molecular subtype. We also found that, among the Luminal A diagnoses in the cohort of 257 patients, no patient showed a Ki-67 value >14%. Statistically, extremely significant values were observed for the dependence of PR+HER2- and PR-HER2+ scores on Ki-67 expression (p < .01). HER2 is an important prognostic factor in breast cancer, and the chi-squared test for HER2 and Ki-67 shows that Ki-67 expression depends upon HER2 status. Moreover, Ki-67 cannot be used as a standalone prognostic factor for breast cancer. Keywords: breast cancer molecular subtypes, correlation, immunohistochemistry, Ki-67 and HR, statistical analysis
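The dependence tests described above follow the standard chi-squared procedure; a minimal sketch is given below, assuming a hypothetical 2x2 contingency table of HER2 status against dichotomised Ki-67 expression (the counts are illustrative placeholders, not the cohort data).

```python
# Illustrative chi-squared test of independence for HER2 status vs. Ki-67 category.
# The contingency table below is a hypothetical placeholder, not the study data.
import numpy as np
from scipy.stats import chi2_contingency

#                 Ki-67 <= 14%   Ki-67 > 14%
table = np.array([[60,            25],   # HER2-negative
                  [15,            57]])  # HER2-positive

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
print("Dependence at p < .01:", p_value < 0.01)
```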
Procedia PDF Downloads 123
2110 Staying When Everybody Else Is Leaving: Coping with High Out-Migration in Rural Areas of Serbia
Authors: Anne Allmrodt
Abstract:
Regions of South-East Europe have been characterised by high out-migration for decades. The reasons for leaving range from the hope of a better work situation to a better health care system and beyond. In Serbia, this high out-migration hits rural areas in particular, so that population numbers are repeatedly in the red. It is not hard to guess that this negative population growth has the potential to create different challenges for those who stay in rural areas. So how are they coping with the, statistically proven, high out-migration? With this in mind, the study investigates people's individual awareness of the social phenomenon of high out-migration and their daily life strategies in rural areas. Furthermore, the study seeks to identify people's resilience skills in that context. Is the condition of high out-migration conducive to resilience? The methodology combines a quantitative and a qualitative approach (mixed methods). For the quantitative part, a standardised questionnaire was developed, including a multiple-choice section and a choice experiment. The questionnaire was handed out to people living in rural areas of Serbia only (n = 100). The sheet included questions about people's awareness of high out-migration, their own daily life strategies or challenges, and their social network situation (data about the social network were necessary here since the network is supposed to be an influencing variable for resilience). Furthermore, respondents were asked to make different choices for coping with high out-migration in a self-designed choice experiment. Additionally, the study included qualitative interviews with citizens from rural areas of Serbia. The topics covered during the interviews focused on their awareness of high out-migration, their daily life strategies and challenges, as well as their social network situation. The results show the following major findings. Awareness of high out-migration is not the same among all respondents: some regard it as something positive for their own life, others as negative or as having no effect at all. The way of coping generally depended, perhaps not surprisingly, on people's social networks. However, and this might be the most important finding, not everybody with a certain number of contacts had better coping strategies and was therefore more resilient. Here the results show that especially people with high affiliation and proximity inside their network were able to cope better and showed higher resilience skills. The study takes one step forward in terms of knowledge about societal resilience as well as coping strategies of societies in rural areas. It shows part of the other side of today's migration coin and offers pointers towards more sustainable rural development and community empowerment. Keywords: coping, out-migration, resilience, rural development, social networks, south-east Europe
Procedia PDF Downloads 129
2109 Digimesh Wireless Sensor Network-Based Real-Time Monitoring of ECG Signal
Authors: Sahraoui Halima, Dahani Ameur, Tigrine Abedelkader
Abstract:
DigiMesh technology represents a pioneering advancement in wireless networking, offering cost-effective and energy-efficient capabilities. Its inherent simplicity and adaptability facilitate the seamless transfer of data between network nodes, extending the range and ensuring robust connectivity through autonomous self-healing mechanisms. In light of these advantages, this study introduces a medical platform built on DigiMesh wireless network technology, characterized by low power consumption, immunity to interference, and user-friendly operation. The primary application of this platform is the real-time, long-distance monitoring of electrocardiogram (ECG) signals, with the added capacity for simultaneous monitoring of ECG signals from multiple patients. The experimental setup comprises key components such as a Raspberry Pi, the e-Health Sensor Shield, and XBee DigiMesh modules. The platform is composed of multiple ECG acquisition devices, labeled Sensor Node 1 and Sensor Node 2, with a Raspberry Pi serving as the central hub (sink node). Two communication approaches are proposed: single-hop and multi-hop. In the single-hop approach, ECG signals are directly transmitted from a sensor node to the sink node through the XBee3 DigiMesh RF module, establishing peer-to-peer connections. This approach was tested in the first experiment to assess the feasibility of deploying wireless sensor networks (WSNs). In the multi-hop approach, two sensor nodes communicate with the server (sink node) in a star configuration; this setup was tested in the second experiment. The primary objective of this research is to evaluate the performance of both the single-hop and multi-hop approaches in diverse scenarios, including open areas and obstructed environments. Experimental results indicate the DigiMesh network's effectiveness in single-hop mode, with reliable communication over distances of approximately 300 meters in open areas. In the multi-hop configuration, the network demonstrated robust performance across approximately three floors, even in the presence of obstacles, without the need for additional router devices. This study offers valuable insights into the capabilities of DigiMesh wireless technology for real-time ECG monitoring in healthcare applications, demonstrating its potential for use in diverse medical scenarios. Keywords: DigiMesh protocol, ECG signal, real-time monitoring, medical platform
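To make the single-hop data path concrete, the following is a minimal sketch of a sensor node streaming timestamped ECG samples to the sink over an XBee module attached to a serial port. It assumes the DigiMesh module is configured for transparent serial operation; the port name, baud rate, and the read_ecg_sample() stub are hypothetical placeholders, not the platform's actual code.

```python
# Minimal single-hop sketch: stream ECG samples over a serially attached XBee module.
import time
import random
import serial  # pyserial

def read_ecg_sample():
    """Stand-in for reading one sample from the e-Health ECG front end."""
    return 512 + int(60 * random.random())

def stream_ecg(port="/dev/ttyUSB0", baud=9600, rate_hz=100):
    with serial.Serial(port, baud, timeout=1) as xbee:
        while True:
            sample = read_ecg_sample()
            # One timestamped sample per line; the sink node parses these lines.
            xbee.write(f"{time.time():.3f},{sample}\n".encode("ascii"))
            time.sleep(1.0 / rate_hz)

if __name__ == "__main__":
    stream_ecg()
```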
Procedia PDF Downloads 79
2108 Clustering-Based Computational Workload Minimization in Ontology Matching
Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris
Abstract:
In order to build a matching pattern for each class correspondence between ontologies, a set of attribute correspondences across the two corresponding classes must be specified by clustering. Clustering reduces the number of potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, every attribute of a class would have to be compared with every attribute of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching; this makes the ontology matching activity computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the number of potential element correspondences during mapping and thereby reduces the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on their value features using the K-medoids clustering technique. Discovering attribute correspondences is essential for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets should be compared on their attribute values so that they can be judged to be the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Moreover, any two instances that hold more similar attribute values are more likely to be matched than those with less similar attribute values, and most of the time, similar attribute values exist in two instances across which there is an attribute correspondence. This work presents how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences that is applied to generate a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs considered when comparing instances, as only attribute pairs whose coverage probability reaches 100% and attributes above the specified threshold are considered as potential attributes for a match. Using clustering reduces the number of potential element correspondences to be considered during the mapping activity, which in turn reduces the computational workload significantly; otherwise, all elements of a class in the source ontology would have to be compared with all elements of the corresponding classes in the target ontology. K-medoids can ably cluster the attributes of each class, so that a proportion of non-corresponding attribute pairs is not considered when constructing the matching pattern. Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching
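The clustering step the abstract describes can be sketched with a small K-medoids routine over attribute value features. The sketch below assumes each attribute of a class is summarised by a few statistical value features (e.g., mean value length, numeric ratio, distinct-value ratio); the feature matrix and attribute names are hypothetical placeholders, not the authors' data.

```python
# Minimal K-medoids (Voronoi-iteration) sketch over attribute value features.
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)               # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue
            # pick the member minimising total distance to the rest of its cluster
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

# Rows = attributes of one class, columns = simple statistical value features.
attribute_features = np.array([
    [4.0, 0.9, 0.1],   # e.g., "age"
    [5.0, 1.0, 0.2],   # e.g., "weight"
    [20.0, 0.0, 0.9],  # e.g., "description"
    [18.0, 0.1, 0.8],  # e.g., "comment"
])
labels, medoids = k_medoids(attribute_features, k=2)
print("cluster labels:", labels, "medoid indices:", medoids)
```

Only attributes falling in mutually mapped clusters would then be compared across ontologies, which is what cuts the number of attribute pairs considered.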
Procedia PDF Downloads 248
2107 Anaerobic Digestion of Green Wastes at Different Solids Concentrations and Temperatures to Enhance Methane Generation
Authors: A. Bayat, R. Bello-Mendoza, D. G. Wareham
Abstract:
Two major categories of green waste are fruit and vegetable (FV) waste and garden and yard (GY) waste. Although anaerobic digestion (AD) is able to manage FV waste, there is less confidence in the conditions under which AD can handle GY wastes (grass, leaves, tree and bush trimmings), mainly because GY waste contains lignin and other recalcitrant organics. GY waste in the dry state (TS ≥ 15 %) can be digested at mesophilic temperatures; however, little methane data have been reported under thermophilic conditions, where conceivably better methane yields could be achieved. In addition, it is suspected that at lower solids concentrations the methane yield could be increased. As such, the aim of this research is to find the temperature and solids concentration conditions that produce the most methane, under two temperature regimes (mesophilic, thermophilic) and three solids states ('dry', 'semi-dry' and 'wet'). Twenty liters of GY waste were collected from a public park located in the northern district of Tehran. The clippings consisted of freshly cut grass as well as dry branches and leaves. The GY waste was chopped before being fed into a mechanical blender that reduced it to a paste-like consistency; an initial TS concentration of approximately 38 % was achieved. Four hundred mL of anaerobic inoculum (average total solids (TS) concentration of 2.03 ± 0.131 %, of which 73.4 % were volatile solids (VS); soluble chemical oxygen demand (sCOD) of 4.59 ± 0.3 g/L) was mixed with the GY waste substrate paste (along with distilled water) to achieve a TS content of approximately 20 %. For comparative purposes, approximately 20 liters of FV waste were ground in the same manner as the GY waste. Since FV waste has a much higher natural water content than GY waste, it was dewatered to obtain a starting TS concentration in the dry solid-state range (TS ≥ 15 %); three samples were dewatered to an average starting TS concentration of 32.71 %. The inoculum was added (along with distilled water) to dilute the initial FV TS concentrations down to semi-dry conditions (10-15 %) and wet conditions (below 10 %). Twelve 1-L batch bioreactors were loaded simultaneously with either GY or FV waste at TS concentrations ranging from 3.85 ± 1.22 % to 20.11 ± 1.23 %. The reactors were sealed and operated for 30 days while immersed in water baths to maintain a constant temperature of 37 ± 0.5 °C (mesophilic) or 55 ± 0.5 °C (thermophilic). A maximum methane yield of 115.42 L methane/kg VS added was obtained for the GY thermophilic-wet AD combination; this methane yield was enhanced by 240 % compared to the GY waste mesophilic-dry condition. The results confirm that high temperature regimes and low solids concentrations are conditions that enhance methane yield from GY waste, and a similar trend was observed for the anaerobic digestion of FV waste. Furthermore, maximum VS (53 %) and sCOD (84 %) reductions were achieved during the AD of GY waste under the thermophilic-wet condition. Keywords: anaerobic digestion, thermophilic, mesophilic, total solids concentration
Procedia PDF Downloads 141
2106 The Existential in a Practical Phenomenology Research: A Study on the Political Participation of Young Women
Authors: Amanda Aliende da Matta, Maria del Pilar Fogueiras Bertomeu, Valeria de Ormaechea Otalora, Maria Paz Sandin Esteban, Miriam Comet Donoso
Abstract:
This communication presents proposed questions about the existentials in research on the political participation of young women. The study follows a qualitative methodology, in particular the applied hermeneutic phenomenological (AHP) method, and the general objective of the research is to give an account of the experience of political participation as a young woman. The study participants are women aged 18 to 35 who have experience of political participation. The data collection techniques are the descriptive story and the phenomenological interview. Hermeneutic phenomenology (HP) as a research approach is based on phenomenological philosophy and applied hermeneutics. The ultimate objective of HP is to gain access to the meaning structures of lived experience by appropriating them, clarifying them, and reflectively making them explicit. Human experiences are always lived through existentials: fundamental themes that are useful in exploring meaningful aspects of our life worlds. Everyone experiences the world through the existentials of lived relationships, the lived body, lived space, lived time, and lived things. Phenomenological research, then, also tacitly asks about the existentials. Existentials are universal themes useful for exploring significant aspects of our life world and of the particular phenomena under study. Four main existentials prove especially helpful as guides for reflection in the research process: relationship, body, space, and time. In our case, for example, we may ask how the existentials of relationship, body, space, and time can guide us in exploring the structures of meaning in the lived experience of political participation as a woman and a young person. The study is not yet finished, as we are currently conducting a phenomenological thematic analysis of the collected stories of lived experience. Yet we have already identified some fragments of text that show the existentials in the participants' experiences, which we transcribe below. 1) Relationality - the experienced I-Other. This concerns how relationships are experienced in the narratives about political participation as a young woman. One example would be: “As we had known each other for a long time, we understood each other with our eyes; we were all a little bit on the same page, thinking the same thing.” 2) Corporeality - the lived body. This concerns how the lived body is experienced in activities of political participation as a young woman. Examples would be: “My blood was boiling, but it was not the time to throw anything in their face, we had to look for solutions.”; “I had a lump in my throat and I wanted to cry.” 3) Spatiality - the lived space. This concerns how lived space is experienced in political participation activities as a young woman. One example would be: “And the feeling I got when I saw [it] it's like watching everybody going into a mousetrap.” 4) Temporality - lived time. This concerns how lived time is experienced in political participation activities as a young woman. One example would be: “Then, there were also meetings that went on forever…” Keywords: applied hermeneutic phenomenology, existentials, hermeneutics, phenomenology, political participation
Procedia PDF Downloads 94
2105 The Use of Gender-Fair Language in CS National Exams
Authors: Moshe Leiba, Doron Zohar
Abstract:
Computer Science (CS) and programming are still considered a boys' club and a male-dominated profession. This is also the case in high schools and higher education. In Israel, no different from the rest of the world, fewer than 35% of the students in CS studies who take the matriculation exams are female. The Israeli matriculation exams are written in masculine-form language. Gender-fair language (GFL) aims at reducing gender stereotyping and discrimination. Several strategies can be employed to make languages gender-fair and to treat women and men symmetrically (especially in languages with grammatical gender), among them neutralization and using the plural form. This research aims at exploring computer science teachers' beliefs regarding the use of gender-fair language in exams. An exploratory quantitative research methodology was employed to collect the data. A questionnaire was administered to 353 computer science teachers, 58% female and 42% male. 86% have been teaching for at least 3 years, and 59% of them have 7 years of teaching experience. 71% of the teachers teach in high school, and 82% of them prepare students for the matriculation exam in computer science. The questionnaire contained 2 matriculation exam questions from previous years and open-ended questions. Teachers were asked which form they think is most suitable: (a) the existing (masculine) form, (b) both gender full forms (e.g., he/she), (c) both gender short forms, (d) the plural form, (e) the neutral form, and (f) the female form. 84% of the teachers recognized the need to change the existing masculine form in the matriculation exams, and about 50% of them thought that using the plural form was the best-suited option. When comparing the teachers who are pro-change and those who are against it, no differences in gender or teaching experience were found. The teachers who are pro gender-fair language justified it as making the exams more personal and motivating for female students. Those who thought that the masculine form should remain argued that the female students do not complain and that a change in form will not influence or affect female students' choice to study computer science. Some even argued that the change will not affect the students but can only improve their sense of identity or feeling toward the profession (which seems like a misconception). This research suggests that the teachers are pro-change and believe that re-formulating the matriculation exams is the right step towards encouraging more female students to choose computer science as their major study track and towards bridging the gap for gender equality. This indicates a bottom-up approach, as not long after this research was conducted, the Israeli Ministry of Education decided to change the matriculation exams to gender-fair language using the plural form. In the coming years, with the transition to web-based examination, it is suggested to use personalization and adjust the language form in accordance with the student's gender. Keywords: computer science, gender-fair language, teachers, national exams
Procedia PDF Downloads 112
2104 Effects of Bleaching Procedures on Dentine Sensitivity
Authors: Suhayla Reda Al-Banai
Abstract:
Problem Statement: Tooth whitening has been used for over one hundred and fifty years. The question concerning the whiteness of teeth is a complex one, since tooth whiteness varies from individual to individual, dependent on age, culture, etc. Tooth whitening outcomes following treatment may depend on the type of whitening system used. There are a few side-effects to the process, including tooth sensitivity and gingival irritation, although some individuals may experience no pain or sensitivity following the procedure. Purpose: To systematically review the available literature published up to 31st December 2021, to identify all relevant studies for inclusion, and to determine whether there is any evidence demonstrating that the application of whitening procedures results in tooth sensitivity. Aim: To systematically review the available published literature to identify all relevant studies for inclusion and to determine any evidence demonstrating that the application of 10% and 15% carbamide peroxide in tooth whitening procedures results in tooth sensitivity. Material and Methods: Following a review of 70 relevant papers identified by searching electronic databases (OVID MEDLINE and PubMed) and by hand searching relevant journals, 49 studies were identified, 42 papers were subsequently excluded, and 7 studies were finally accepted for inclusion. The extraction of data for inclusion was conducted by two reviewers. The main outcome measures were the methodology and assessment used by investigators to evaluate tooth sensitivity in tooth whitening studies. Results: The reported evaluation of tooth sensitivity during tooth whitening procedures was based on the subjective response of subjects rather than a recognized evaluation methodology. One of the problems in the evaluation was the lack of homogeneity in study design. Seven studies were included. The included studies featured randomized groups, placebo controls, and double-blind or single-blind designs. Drop-out data were obtained from two of the included studies. Three of the included studies reported sensitivity at the baseline visit, and two of the included studies mentioned the exclusion criteria. Conclusions: The results were inconclusive due to the limited number of included studies, the study methodologies, and the way dentine sensitivity was evaluated and reported. Tooth whitening procedures adversely affect both hard and soft tissues in the oral cavity; side-effects are mild and transient in nature. Whitening solutions with greater than 10% carbamide peroxide cause more tooth sensitivity. Studies using nightguard vital bleaching with 10% carbamide peroxide reported two side effects, tooth sensitivity and gingival irritation, although tooth sensitivity was more prevalent than gingival irritation. Keywords: dentine, sensitivity, bleaching, carbamide peroxide
Procedia PDF Downloads 70
2103 Three Foci of Trust as Potential Mediators in the Association Between Job Insecurity and Dynamic Organizational Capability: A Quantitative, Exploratory Study
Authors: Marita Heyns
Abstract:
Job insecurity is a distressing phenomenon which has far-reaching consequences for both employees and their organizations. Previously, much attention has been given to the link between job insecurity and individual-level performance outcomes, while less is known about how subjectively perceived job insecurity might transfer beyond the individual level to affect performance of the organization on an aggregated level. Research focusing on how employees' fear of job loss might affect the organization's ability to respond proactively to volatility and drastic change through applying its capabilities of sensing, seizing, and reconfiguring appears to be practically non-existent. Equally little is known about the potential underlying mechanisms through which job insecurity might affect the dynamic capabilities of an organization. This study examines how job insecurity might affect dynamic organizational capability through trust as an underlying process. More specifically, it considers the simultaneous roles of trust at an impersonal (organizational) level as well as trust at an interpersonal level (in leaders and co-workers) as potential underlying mechanisms through which job insecurity might affect the organization's dynamic capability to respond to opportunities and imminent, drastic change. A quantitative research approach and a stratified random sampling technique enabled the collection of data among 314 managers at four different plant sites of a large South African steel manufacturing organization undergoing dramatic changes. To assess the study hypotheses, the following statistical procedures were employed: structural equation modelling was performed in Mplus to evaluate the measurement and structural models. The chi-square test for absolute fit, as well as alternative fit indices such as the Comparative Fit Index, the Tucker-Lewis Index, the Root Mean Square Error of Approximation and the Standardized Root Mean Square Residual, were used as indicators of model fit. Composite reliabilities were calculated to evaluate the reliability of the factors. Finally, interaction effects were tested by using PROCESS and the construction of two-sided 95% confidence intervals. The findings indicate that job insecurity had a lower-than-expected detrimental effect on evaluations of the organization's dynamic capability, owing to the conducive buffering effects of trust in the organization and trust in its leaders, respectively. In contrast, trust in colleagues did not seem to have any noticeable facilitative effect. The study proposes that both job insecurity and dynamic capability can be managed more effectively by also paying attention to factors that could promote trust in the organization and its leaders; some practical recommendations are given in this regard. Keywords: dynamic organizational capability, impersonal trust, interpersonal trust, job insecurity
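The confidence-interval step described above (PROCESS-style two-sided 95% intervals for a conditional effect) can be illustrated with a percentile bootstrap of a job insecurity x organizational trust interaction coefficient in a plain OLS model. This is only a sketch of the interval-construction logic; the data are simulated placeholders, and the study itself used Mplus and the PROCESS macro.

```python
# Percentile-bootstrap 95% CI for an interaction (buffering) coefficient via OLS.
import numpy as np

rng = np.random.default_rng(42)
n = 314
insecurity = rng.normal(0, 1, n)
org_trust = rng.normal(0, 1, n)
capability = (-0.3 * insecurity + 0.4 * org_trust
              + 0.2 * insecurity * org_trust + rng.normal(0, 1, n))

def interaction_coef(ins, tru, cap):
    X = np.column_stack([np.ones_like(ins), ins, tru, ins * tru])
    beta, *_ = np.linalg.lstsq(X, cap, rcond=None)
    return beta[3]                      # coefficient of the interaction term

boot = np.empty(5000)
for b in range(boot.size):
    idx = rng.integers(0, n, n)         # resample cases with replacement
    boot[b] = interaction_coef(insecurity[idx], org_trust[idx], capability[idx])

low, high = np.percentile(boot, [2.5, 97.5])
print(f"interaction estimate = {interaction_coef(insecurity, org_trust, capability):.3f}")
print(f"two-sided 95% bootstrap CI: [{low:.3f}, {high:.3f}]")
```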
Procedia PDF Downloads 91
2102 The Use of Technology in Theatrical Performances as a Tool of Audience’s Engagement
Authors: Chrysoula Bousiouta
Abstract:
Throughout the history of theatre, technology has played an important role both in influencing the relationship between performance and audience and in offering different kinds of experiences. The use of technology dates back to ancient times, when artifacts such as the “deus ex machina” of ancient Greek theatre were introduced. Taking into account the key techniques and experiences used throughout history, this paper investigates how technology, through new media, influences contemporary theatre. In the context of this research, technology is defined as projections, audio environments, video-projections, sensors, and tele-connections, all alongside the performance, challenging the audience's participation. The theoretical framework of the research covers, besides the history of theatre, the theory of the “experience economy” that took over from the service and goods economy. The research is based on the qualitative and comparative analysis of two case studies, Contact Theatre in Manchester (United Kingdom) and Bios in Athens (Greece). The data selection includes desk research and is complemented with semi-structured interviews. Building on the results of the research, one could claim that the intended experience of modern/contemporary theatre is that of engagement. In this context, technology, as defined above, plays a leading role in creating it. This experience passes through and exists in the middle of the realms of entertainment, education, estheticism and escapism. Furthermore, it is observed that nowadays theatre is not only about acting but also about performing; it is the one where performances are unfinished without the participation of the audience. Both case studies try to achieve the experience of engagement through practices that promote the attraction of attention, the increase of imagination, interaction, intimacy and true activity. These practices are achieved through the script, the scenery, the language and the environment of a performance. Contact and Bios consider technology an intimate tool for accomplishing the above, and they make extensive use of it. The research compiles a notable record of the technological techniques that modern theatres use. The use of technology, inside or outside the limits of film techniques, helps to rivet the attention of the audience, to make performances enjoyable, to give the sense of the “unfinished”, or to serve things that take place around the spectators and force them to take action, becoming spect-actors. The advantage of technology is that it can be used as a hook for interaction in all stages of a performance. Further research in the field could involve exploring alternative ways of binding technology and theatre or analyzing how the performance is perceived through the use of technological artifacts. Keywords: experience of engagement, interactive theatre, modern theatre, performance, technology
Procedia PDF Downloads 250
2101 TARF: Web Toolkit for Annotating RNA-Related Genomic Features
Abstract:
Genomic features, i.e., genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods that are capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed TARF, a dedicated web app that serves as a web toolkit for annotating RNA-related genomic features. The TARF web tool intends to provide an easy, web-based way to annotate and visualize RNA-related genomic features. Once a user has uploaded the features in BED format and specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool fulfills its three main functions. First, it adds annotation on gene and RNA transcript components. For every feature provided by the user, the overlaps with RNA transcript components are identified, and the information is combined into one table, which is available for copying and download; summary statistics about ambiguous assignments are also provided. Second, the tool provides a convenient visualization of the features at the single gene/transcript level. For a selected gene, the tool shows the features together with the gene model in a genome-based view, and it also maps the features to transcript-based coordinates and shows their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated utilizing the Guitar R/Bioconductor package. The distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with 3 different types of genomic features related to chromatin H3K4me3, RNA N6-methyladenosine (m6A) and RNA 5-methylcytosine (m5C), which were obtained from ChIP-Seq, MeRIP-Seq and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A and m5C are enriched near transcription start sites, stop codons and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features, and it should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications. Keywords: RNA-related genomic features, annotation, visualization, web server
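A minimal sketch of the genome-to-transcript coordinate conversion underlying this kind of annotation is shown below, assuming 0-based, half-open exon intervals (as in BED); it is not TARF's implementation, and the example exon structure is a hypothetical placeholder.

```python
# Map a genomic position onto the spliced transcript coordinate system.
def genome_to_transcript(pos, exons, strand="+"):
    """Map a 0-based genomic position to a 0-based transcript coordinate.

    exons: list of (start, end) half-open intervals, sorted by genomic start.
    Returns None when the position falls in an intron or outside the transcript.
    """
    offset = 0                      # exonic bases seen so far, in genomic order
    hit = None
    for start, end in exons:
        if start <= pos < end:
            hit = offset + (pos - start)
            break
        offset += end - start
    if hit is None:
        return None
    if strand == "+":
        return hit
    total = sum(end - start for start, end in exons)
    return total - hit - 1          # reverse orientation for minus-strand transcripts

exons = [(100, 200), (300, 360), (500, 650)]   # hypothetical exons of one transcript
print(genome_to_transcript(150, exons, "+"))   # 50: inside exon 1
print(genome_to_transcript(310, exons, "+"))   # 110: 100 exonic bases + 10
print(genome_to_transcript(250, exons, "+"))   # None: intronic position
```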
Procedia PDF Downloads 208
2100 Radical Scavenging Activity of Protein Extracts from Pulse and Oleaginous Seeds
Authors: Silvia Gastaldello, Maria Grillo, Luca Tassoni, Claudio Maran, Stefano Balbo
Abstract:
Antioxidants are nowadays attractive not only for their countless benefits to human and animal health, but also for the prospect of their use as food preservatives instead of synthetic chemical molecules. In this study, the radical scavenging activity of six protein extracts from pulse and oleaginous seeds was evaluated. The selected matrices are Pisum sativum (yellow pea from two different origins), Carthamus tinctorius (safflower), Helianthus annuus (sunflower), Lupinus luteus cv Mister (lupin) and Glycine max (soybean), since they are economically interesting for both human and animal nutrition. The seeds were ground, and proteins were extracted from 20 mg of powder with a specific vegetal-extraction kit. Proteins were quantified using the Bradford protocol, and scavenging activity was assessed using the DPPH assay, based on the decrease in absorbance of the DPPH radical (2,2-diphenyl-1-picrylhydrazyl) in the presence of antioxidant molecules. Different concentrations of the protein extract (1, 5, 10, 50, 100, 500 µg/ml) were mixed with DPPH solution (DPPH 0.004% in ethanol 70% v/v). Ascorbic acid was used as a scavenging activity standard reference, at the same six concentrations as the protein extracts, while DPPH solution was used as the control. Samples and standard were prepared in triplicate and incubated for 30 minutes in the dark at room temperature, and the absorbance was read at 517 nm (ABS30). The average and standard deviation of the absorbance values were calculated for each concentration of samples and standard. Statistical analysis using Student's t-test and p-values was performed to assess the statistical significance of the difference in scavenging activity between the samples (or standard) and the control (ABSctrl). The percentage of antioxidant activity was calculated using the formula [(ABSctrl - ABS30)/ABSctrl]*100. The obtained results demonstrate that all matrices showed antioxidant activity. Ascorbic acid, used as the standard, exhibited 96% scavenging activity at a concentration of 500 µg/ml. Under the same conditions, sunflower, safflower and yellow peas revealed the highest antioxidant performance among the matrices analyzed, with activities of 74%, 68% and 70%, respectively (p < 0.005). Although lupin and soybean exhibited lower antioxidant activity compared to the other matrices, they showed activities of 46% and 36%, respectively. All these data suggest the possibility of using undervalued edible matrices as an antioxidant source. However, further studies are necessary to investigate a possible synergistic effect of several matrices as well as the impact of industrial processes for a large-scale approach. Keywords: antioxidants, DPPH assay, natural matrices, vegetal proteins
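The scavenging-activity formula quoted above is straightforward to apply; a minimal sketch follows, with hypothetical absorbance readings used purely for illustration (they are not the study's measurements).

```python
# Percentage radical scavenging activity: [(ABSctrl - ABS30) / ABSctrl] * 100.
def scavenging_activity(abs_ctrl, abs_sample):
    return (abs_ctrl - abs_sample) / abs_ctrl * 100.0

# Hypothetical absorbance readings at 517 nm, for illustration only.
abs_ctrl = 0.820      # DPPH control
abs_30 = 0.213        # sample after 30 min incubation
print(f"Scavenging activity: {scavenging_activity(abs_ctrl, abs_30):.1f}%")
```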
Procedia PDF Downloads 433
2099 Cosmetic Recommendation Approach Using Machine Learning
Authors: Shakila N. Senarath, Dinesh Asanka, Janaka Wijayanayake
Abstract:
The necessity of cosmetic products is arising to fulfill consumer needs for personal appearance and hygiene. A cosmetic product consists of various chemical ingredients which may help to keep the skin healthy or may lead to damage. Not every chemical ingredient in a cosmetic product performs the same way on every person. The most appropriate way to select a healthy cosmetic product is to identify one's skin type first and then select the most suitable product with safe ingredients; the selection process of cosmetic products is therefore complicated. Consumer surveys have shown that most of the time, the selection of cosmetic products is done improperly by consumers. In this study, a content-based system is proposed that recommends cosmetic products based on human factors; to this end, skin type, gender and price range are considered as the human factors. The proposed system is implemented using machine learning. Consumer skin type, gender and price range are taken as inputs to the system. The consumer's skin type is derived using the Baumann Skin Type Questionnaire, a value-based approach that includes a number of questions to assign the user's skin type to one of the 16 skin types according to the Baumann Skin Type Indicator (BSTI). Two datasets are collected for the research: the user dataset and the cosmetic dataset. The user dataset was collected using a questionnaire given to the public. Product details are included in the cosmetic dataset, which covers 5 different product categories (moisturizer, cleanser, sun protector, face mask, eye cream). A TF-IDF (Term Frequency - Inverse Document Frequency) approach is applied to vectorize the cosmetic ingredients in the generic cosmetic products dataset and the user-preferred dataset. Using the TF-IDF vectors, each user-preferred product and each generic cosmetic product can be represented as a sparse vector. The similarity between each user-preferred product and each generic cosmetic product is calculated using the cosine similarity method. For the recommendation process, a similarity matrix can be used: the higher the similarity, the better the match for the consumer. By sorting a user's column of the similarity matrix in descending order, the top recommended products can be retrieved. Since user information such as gender and the price range for product purchasing has also been gathered, further optimization can be done by considering and weighting those parameters once a set of recommended products for a user has been retrieved. Keywords: content-based filtering, cosmetics, machine learning, recommendation system
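A minimal sketch of the ingredient-based matching step described above is given below, assuming each product is represented by a comma-separated ingredient list; the product names and ingredient lists are hypothetical placeholders, not the study's datasets.

```python
# TF-IDF over ingredient lists + cosine similarity between user-preferred and catalog products.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = {
    "moisturizer_a": "glycerin, hyaluronic acid, ceramide np, squalane",
    "cleanser_b": "cocamidopropyl betaine, glycerin, niacinamide",
    "sunscreen_c": "zinc oxide, titanium dioxide, squalane",
}
user_preferred = ["glycerin, squalane, ceramide np"]  # ingredients of a product the user liked

# Treat each comma-separated ingredient as a single token.
vectorizer = TfidfVectorizer(tokenizer=lambda s: [t.strip() for t in s.split(",")],
                             token_pattern=None)
catalog_vecs = vectorizer.fit_transform(catalog.values())
user_vecs = vectorizer.transform(user_preferred)

# Similarity matrix: rows = user-preferred products, columns = catalog products.
sim = cosine_similarity(user_vecs, catalog_vecs)

# Rank catalog products for the user-preferred product, highest similarity first.
ranking = sorted(zip(catalog.keys(), sim[0]), key=lambda kv: kv[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

Gender and price-range filters could then be applied as weights over this ranked list, as the abstract suggests.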
Procedia PDF Downloads 134
2098 Signaling Theory: An Investigation on the Informativeness of Dividends and Earnings Announcements
Authors: Faustina Masocha, Vusani Moyo
Abstract:
For decades, dividend announcements have been presumed to contain important signals about the future prospects of companies, and the same has been presumed about management earnings announcements. Despite both dividend and earnings announcements being considered informative, a number of researchers have questioned their credibility and found both to contain only short-term signals. Pertaining to dividend announcements, some authors argued that although they might contain important information that can result in changes in share prices, and consequently in the accumulation of abnormal returns, their degree of informativeness is lower compared to other signaling tools such as earnings announcements. Yet this claim has been refuted by other researchers, who found the effect of earnings to be transitory and of little value to shareholders, as indicated by the small abnormal returns earned during the period surrounding earnings announcements. Considering the above, it is apparent that both dividends and earnings have been hypothesized to have a signaling impact, which prompts the question of which of these two signaling tools is more informative. To answer this question, two follow-up questions were asked. The first sought to determine which event has the greatest effect on share prices, while the second focused on which event influences trading volume the most. To answer the first question and evaluate the effect that each of these events had on share prices, an event study methodology was employed on a sample made up of the top 10 JSE-listed companies, using data collected from 2012 to 2019, to determine whether shareholders gained abnormal returns (ARs) around announcement dates. The event that resulted in the most persistent and highest ARs was considered more informative. For the second follow-up question, an investigation was conducted to determine whether dividend or earnings announcements influenced trading patterns, resulting in abnormal trading volumes (ATV) around announcement time; the event that resulted in the most ATV was considered more informative. Using an estimation period of 20 days, an event window of 21 days, and hypothesis testing, it was found that announcements pertaining to an increase in earnings resulted in the most ARs and cumulative abnormal returns (CARs) and had a lasting effect, in comparison to dividend announcements, whose effect lasted only until day +3. This solidifies the empirical argument that the signaling effect of dividends is diminishing. It was also found that when reported earnings declined in comparison to the previous period, there was an increase in trading volume, resulting in ATV. Although dividend announcements did result in abnormal returns, these were smaller than those earned around earnings announcements, which refutes a number of theoretical and empirical arguments that found dividends to be more informative than earnings announcements. Keywords: dividend signaling, event study methodology, information content of earnings, signaling theory
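The event study calculation described above can be sketched with a simple market model: abnormal returns are the gap between realised returns and the returns predicted from the market over an estimation period, and CARs are their running sum over the event window. The window lengths below follow the abstract (20-day estimation period, 21-day event window); the return series themselves are simulated placeholders, not JSE data.

```python
# Market-model event study: abnormal returns (AR) and cumulative abnormal returns (CAR).
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.0005, 0.01, 250)                     # hypothetical daily market returns
stock = 0.0002 + 1.1 * market + rng.normal(0, 0.01, 250)   # hypothetical daily stock returns

event_idx = 200                                  # position of the announcement day
est = slice(event_idx - 30, event_idx - 10)      # 20-day estimation period
evt = slice(event_idx - 10, event_idx + 11)      # 21-day event window (days -10 .. +10)

# Market model: R_stock = alpha + beta * R_market + e, fitted on the estimation period.
beta, alpha = np.polyfit(market[est], stock[est], 1)

expected = alpha + beta * market[evt]            # normal (expected) returns
ar = stock[evt] - expected                       # abnormal returns, day by day
car = np.cumsum(ar)                              # cumulative abnormal returns

for day, (a, c) in enumerate(zip(ar, car), start=-10):
    print(f"day {day:+d}: AR={a:+.4f}  CAR={c:+.4f}")
```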
Procedia PDF Downloads 173
2097 Technology for Good: Deploying Artificial Intelligence to Analyze Participant Response to Anti-Trafficking Education
Authors: Ray Bryant
Abstract:
3Strands Global Foundation (3SGF), a non-profit with a mission to mobilize communities to combat human trafficking through prevention education and reintegration programs, launched a groundbreaking study that highlights the use and benefits of artificial intelligence in the war against human trafficking. Having gathered more than 30,000 stories from counselors and school staff who have gone through its PROTECT Prevention Education program, 3SGF sought to develop a methodology to measure the effectiveness of the training, which helps educators and school staff identify physical signs and behaviors indicating a student is being victimized. The program further illustrates how to recognize and respond to trauma, teaches the steps to take to report human trafficking, and shows how to connect victims with the proper professionals. 3SGF partnered with Levity, a leader in no-code artificial intelligence (AI) automation, to create the research study, utilizing natural language processing, a branch of artificial intelligence, to measure the effectiveness of the prevention education program. By applying the logic created for the study, the platform analyzed and categorized each story: if the story, coming directly from the educator, demonstrated one or more of the desired outcomes (increased awareness, increased knowledge, or intended behavior change), a label was applied, and the system then added a confidence level for each identified label. The study results were generated with a 99% confidence level. Preliminary results from the 30,000 stories gathered show overwhelmingly that a significant majority of the participants now have increased awareness of the issue, demonstrate better knowledge of how to help prevent the crime, and express an intention to change how they approach what they do daily. In addition, it was observed that approximately 30% of the stories involved comments by educators expressing that they wished they had had this knowledge sooner, as they could think of many students they would have been able to help. Objectives of research: To solve the problem of needing to analyze and accurately categorize more than 30,000 data points of participant feedback in order to evaluate the success of a human trafficking prevention program, using AI and natural language processing. Methodologies used: In conjunction with our strategic partner, Levity, we have created our own NLP analysis engine specific to our problem. Contributions to research: The intersection of AI and human rights, and how to utilize technology to combat human trafficking. Keywords: AI, technology, human trafficking, prevention
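The labeling workflow described above (one or more outcome labels per free-text story, each with a confidence score) can be illustrated with a generic multi-label text classifier. This sketch uses an off-the-shelf TF-IDF plus logistic-regression pipeline and is not the Levity platform or its API; the example stories and labels are hypothetical placeholders.

```python
# Generic multi-label story classification with per-label confidence scores.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

train_stories = [
    "I now notice warning signs in students I never picked up on before.",
    "The training taught me the exact steps for reporting a suspected case.",
    "I plan to change how I check in with students who seem withdrawn.",
]
train_labels = [
    {"increased_awareness"},
    {"increased_knowledge"},
    {"intended_behavior_change", "increased_awareness"},
]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(train_labels)      # binary indicator matrix, one column per label

model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(train_stories, y)

new_story = ["After the session I understand what behaviors to look for."]
confidences = model.predict_proba(new_story)[0]
for label, conf in zip(mlb.classes_, confidences):
    print(f"{label}: {conf:.2f}")
```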
Procedia PDF Downloads 59
2096 The Association between C-Reactive Protein and Hypertension with Different US Participants Ethnicity-Findings from National Health and Nutrition Examination Survey 1999-2010
Authors: Ghada Abo-Zaid
Abstract:
The main objective of this study was to examine the association between elevated levels of CRP and the incidence of hypertension, before and after adjusting for age, BMI, gender, SES, smoking, diabetes, LDL cholesterol and HDL cholesterol, and to determine whether the association differed by race. Method: Cross-sectional data for participants aged 17 to 74 years included in the National Health and Nutrition Examination Survey (NHANES) from 1999 to 2010 were analysed. CRP level was classified into three categories (> 3 mg/L, between 1 mg/L and 3 mg/L, and < 1 mg/L). Blood pressure categorization was done using the JNC 7 algorithm: hypertension was defined as either systolic blood pressure (SBP) of 140 mmHg or more or diastolic blood pressure (DBP) of 90 mmHg or greater, or otherwise a self-reported prior diagnosis by a physician. Pre-hypertension was defined as 139 > SBP > 120 or 89 > DBP > 80. A multinomial regression model was used to measure the association between CRP level and hypertension. Results: In univariable models, CRP concentrations > 3 mg/L were associated with a 73% greater risk of incident hypertension compared with CRP concentrations < 1 mg/L (hypertension: odds ratio [OR] = 1.73; 95% confidence interval [CI], 1.50-1.99). Ethnic comparisons showed that Mexican Americans had the highest risk of incident hypertension (OR = 2.39; 95% CI, 2.21-2.58). This risk was statistically insignificant, however, either after controlling for other variables (hypertension: OR = 0.75; 95% CI, 0.52-1.08) or when categorized by race (Mexican American: OR = 1.58; 95% CI, 0.58-4.26; Other Hispanic: OR = 0.87; 95% CI, 0.19-4.42; non-Hispanic White: OR = 0.90; 95% CI, 0.50-1.59; non-Hispanic Black: OR = 0.44; 95% CI, 0.22-0.87). The same results were found for pre-hypertension, with non-Hispanic Blacks showing the highest significant risk of pre-hypertension (OR = 1.60; 95% CI, 1.26-2.03). When CRP concentrations were between 1.0 and 3.0 mg/L, prehypertension was associated with a higher likelihood of elevated CRP in unadjusted models (OR = 1.37; 95% CI, 1.15-1.62). The same relationship was maintained for non-Hispanic Whites, non-Hispanic Blacks, and other races (non-Hispanic White: OR = 1.24; 95% CI, 1.03-1.48; non-Hispanic Black: OR = 1.60; 95% CI, 1.27-2.03; other race: OR = 2.50; 95% CI, 1.32-4.74), while the association was insignificant for Mexican Americans and Other Hispanics. In the adjusted model, the relationship between CRP and prehypertension was no longer present. In contrast, hypertension was not independently associated with elevated CRP, and the results were the same after grouping by race or adjusting for the confounding variables. The same results were obtained when SBP or DBP were treated as continuous measures. Conclusions: This study confirmed the existence of an association between hypertension, prehypertension and elevated levels of CRP; however, this association was no longer present after adjusting for other variables. Ethnic group differences were statistically significant in the univariable models but disappeared after controlling for other variables. Keywords: CRP, hypertension, ethnicity, NHANES, blood pressure
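The modelling step described above, a multinomial regression of blood-pressure category on CRP category with adjustment covariates, can be sketched as follows. The data here are simulated placeholders rather than NHANES records, and only two of the adjustment covariates are shown for brevity.

```python
# Multinomial logistic regression of blood-pressure category on CRP category,
# reporting odds ratios with 95% confidence intervals.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
crp = rng.integers(0, 3, n)              # 0: CRP < 1, 1: 1-3, 2: > 3 mg/L
X = pd.DataFrame({
    "crp_mid": (crp == 1).astype(float),
    "crp_high": (crp == 2).astype(float),
    "age": rng.normal(45, 15, n),
    "bmi": rng.normal(27, 5, n),
})
bp_cat = rng.integers(0, 3, n)           # 0: normal, 1: pre-hypertension, 2: hypertension

X = sm.add_constant(X)
result = sm.MNLogit(bp_cat, X).fit(disp=False)

odds_ratios = np.exp(result.params)      # one column per non-reference outcome
conf_int = np.exp(result.conf_int())     # 95% CIs on the odds-ratio scale
print(odds_ratios)
print(conf_int)
```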
Procedia PDF Downloads 414
2095 The Unspoken Truth of Female Domestic Violence: An Integrative Review
Authors: Glenn Guira
Abstract:
Domestic violence is an international pandemic that has affected women from all walks of life. The World Health Organization (2016) announced that recent global prevalence figures for violence against women indicate that 1 in 3 (35%) women worldwide have experienced either physical and/or sexual intimate partner violence or non-partner violence in their lifetime. It further stated that violence against women is a major public health problem and a violation of women's human rights. Furthermore, the agency stated that the factors associated with an increased risk of experiencing intimate partner and sexual violence include low education, child maltreatment or exposure to violence between parents, abuse during childhood, attitudes accepting violence, and gender inequality. This is an integrative review of domestic violence focusing on four themes, namely: types of domestic violence against women, predictors of domestic violence against women, effects of domestic violence against women, and strategies for addressing domestic violence against women. This integrative study was conducted to identify relevant themes in the research on domestic violence that has been conducted and published, and it is geared toward further understanding domestic violence as a public health concern. Using the keyword domestic violence, Google Scholar, MEDLINE Plus, and Ingenta Connect were searched to identify relevant studies. This resulted in 3,467 studies falling within the copyright years 2006-2016. The studies were delimited to domestic violence against women, because there are other types of violence that can be committed, such as senior citizen abuse, child abuse, violence against males, and gay/lesbian abuse. The significant findings of the research study are the following: the forms of domestic violence against women include physical, sexual, psychological, emotional, economic, spiritual, and conflict-related violence; the predictors of domestic violence against women include demographic, health-related, psychological, behavioral, partner-related and social-stress factors; the effects of domestic violence against women include victim-related and child-related factors; and the strategies addressing domestic violence against women include personal-related, education-related, health-related, legal-related and judicial-related strategies. Consequent to the foregoing findings, the researcher concludes that there is published research presenting different forms, predictors, effects and strategies addressing domestic violence committed by perpetrators against women. The researcher recommends that the summarized comprehensive data be used to educate people who are potential victims of domestic violence, and that future researchers continue to conduct research for the development of pragmatic programs aimed at reducing domestic violence. Keywords: domestic violence, physical abuse, intimate partner violence, sexual violence
Procedia PDF Downloads 267
2094 Indicators of Sustainable Intensification: Views from British Stakeholders
Authors: N. Mahon, I. Crute, M. Di Bonito, E. Simmons, M. M. Islam
Abstract:
Growing interest in the concept of the sustainable intensification (SI) of agriculture has been shown by national governments, transnational agribusinesses, intergovernmental organizations and research institutes, amongst others. This interest may be because SI is seen as a 'third way' for agricultural development, between the seemingly disparate paradigms of 'intensive' agriculture and more 'sustainable' forms of agriculture. However, there is a lack of consensus as to what SI means in practice and how it should be measured using indicators of change. This has led to growing confusion, disagreement and skepticism regarding the concept, especially amongst civil society organizations, both in the UK and in other countries, and has prompted the need for bottom-up, participatory approaches to identify indicators of SI. Our aim is to identify the views of British stakeholders regarding the areas of agreement and disagreement as to what SI is and how it should be measured in the UK using indicators of change. Data for this investigation came from 32 semi-structured interviews, conducted between 2015 and 2016, with stakeholders from throughout the UK food system. In total, 110 indicators of SI were identified. These indicators covered a wide variety of subjects, including biophysical, social and political considerations. A number of indicators appeared to be widely applicable and were similar to those suggested in the global literature. These include indicators related to the management of the natural resources on which agriculture relies, e.g., 'Soil organic matter', 'Number of pollinators per hectare' and 'Depth of water table', as well as those related to agricultural externalities, e.g., 'Greenhouse gas emissions' and 'Concentrations of agro-chemicals in waterways'. However, many of the indicators were much more specific to the context of the UK. These included 'Areas of high nature value farmland', 'Length of hedgerows per hectare' and 'Age of farmers'. Furthermore, tensions could be seen when participants considered the relative importance of agricultural mechanization versus levels of agricultural employment, the pros and cons of intensive, housed livestock systems, and the value of wild biodiversity versus the desire to increase agricultural yields. These areas of disagreement suggest the need to carefully consider the trade-offs inherent in the concept. Our findings indicate that, in order to begin to resolve the confusion surrounding SI, it needs to be considered in a context-specific manner rather than as a single uniform concept. Furthermore, both the environmental and the social parameters in which agriculture operates need to be considered in order to operationalize SI in a meaningful way. We suggest that participatory approaches are key to this process, facilitating dialogue and collaborative learning between all the stakeholders, allowing them to reach a shared vision for the future of agricultural development. Keywords: agriculture, indicators, participatory approach, sustainable intensification
Procedia PDF Downloads 224
2093 Bioleaching of Metals Contained in Spent Catalysts by Acidithiobacillus thiooxidans DSM 26636
Authors: Andrea M. Rivas-Castillo, Marlenne Gómez-Ramirez, Isela Rodríguez-Pozos, Norma G. Rojas-Avelizapa
Abstract:
Spent catalysts are considered hazardous residues of major concern, mainly due to the simultaneous presence of several metals in elevated concentrations. Although hydrometallurgical, pyrometallurgical, and chelating agent methods are available to remove and recover some metals contained in spent catalysts, these procedures generate potentially hazardous wastes and the emission of harmful gases. Thus, biotechnological treatments are currently gaining importance to avoid the negative impacts of chemical technologies. To this end, diverse microorganisms have been used to assess the removal of metals from spent catalysts, comprising bacteria, archaea, and fungi, whose resistance and metal uptake capabilities differ depending on the microorganism tested. Acidophilic sulfur-oxidizing bacteria, namely Acidithiobacillus thiooxidans and Acidithiobacillus ferrooxidans, have been used to investigate the biotreatment and extraction of valuable metals from spent catalysts, as they are able to produce leaching agents such as sulfuric acid and sulfur oxidation intermediates. In the present work, the ability of A. thiooxidans DSM 26636 to bioleach the metals contained in five different spent catalysts was assessed by growing the culture in modified Starkey mineral medium (with elemental sulfur at 1%, w/v) and 1% (w/v) pulp density of each residue for up to 21 days at 30 °C and 150 rpm. Sulfur-oxidizing activity was periodically evaluated by determining sulfate concentration in the supernatants according to the NMX-K-436-1977 method. The production of sulfuric acid was assessed in the supernatants as well, by titration with 0.5 M NaOH using bromothymol blue as the acid-base indicator, and by measuring pH with a digital potentiometer. In addition, Inductively Coupled Plasma - Optical Emission Spectrometry was used to analyze metal removal from the five different spent catalysts by A. thiooxidans DSM 26636. The results show that, as could be expected, sulfuric acid production is directly related to the decrease in pH and also to the highest metal removal efficiencies. It was observed that Al and Fe are recurrently removed from refinery spent catalysts regardless of their origin and previous usage, although these removals may vary from 9.5 ± 2.2 to 439 ± 3.9 mg/kg for Al, and from 7.13 ± 0.31 to 368.4 ± 47.8 mg/kg for Fe, depending on the spent catalyst tested. Besides, bioleaching of metals like Mg, Ni, and Si was also obtained from automotive spent catalysts, with removals of up to 66 ± 2.2, 6.2 ± 0.07, and 100 ± 2.4, respectively. Hence, the data presented here exhibit the potential of A. thiooxidans DSM 26636 for the simultaneous bioleaching of metals contained in spent catalysts of diverse provenance. Keywords: bioleaching, metal removal, spent catalysts, Acidithiobacillus thiooxidans
Procedia PDF Downloads 140
2092 Asthma Nurse Specialist Improves the Management of Acute Asthma in a University Teaching Hospital: A Quality Improvement Project
Authors: T. Suleiman, C. Mchugh, H. Ranu
Abstract:
Background: Asthma continues to be associated with poor patient outcomes, including mortality. An audit of the management of acute asthma admissions in our hospital in 2020 found poor compliance with National Asthma and COPD Audit Project (NACAP) standards, which set out to improve inpatient asthma care. Clinical nurse specialists have been shown to improve patient care across a range of specialties. In September 2021, an Asthma Nurse Specialist (ANS) was employed in our hospital. Aim: To re-audit the management of acute asthma admissions using NACAP standards and assess for quality improvement following the employment of the ANS. Methodology: NACAP standards are wide-reaching; therefore, we focused on ‘specific elements of good practice’ in addition to the provision of inhaled corticosteroids (ICS) on discharge. Medical notes were retrospectively requested from the hospital coding department and selected as per NACAP inclusion criteria. Data collection and entry into the NACAP database were carried out. As this was a clinical audit, ethics approval was not required. Results: Cycles 1 (pre-ANS) and 2 (post-ANS) of the audit included 20 and 32 patients, respectively, with comparable baseline demographics. No patients had a discharge bundle completed on discharge in cycle 1 vs. 84% of cases in cycle 2. Regarding specific components of the bundle, 25% of patients in cycle 1 had their inhaler technique checked vs. 91% in cycle 2. Furthermore, 80% of patients had maintenance medications reviewed in cycle 1 vs. 97% in cycle 2. Medication adherence was addressed in 20% of cases in cycle 1 vs. 88% of cases in cycle 2. Personalized asthma action plans were not issued or reviewed in any cases in cycle 1, as compared with 84% of cases in cycle 2. Triggers were discussed in 30% of cases in cycle 1 vs. 88% of cases in cycle 2. Tobacco dependence was addressed in 44% of cases in cycle 1 vs. 100% of cases in cycle 2. No patients in cycle 1 had community follow-up requested within 2 days vs. 81% of the patients in cycle 2. Similarly, 20% of the patients in cycle 1 vs. 88% of the patients in cycle 2 had a 4-week asthma clinic follow-up requested. 75% of patients in cycle 1 received ICS on discharge compared with 94% of patients in cycle 2. Conclusion: Our quality improvement project demonstrates the utility of an ANS in improving the management of acute asthma admissions, evidenced here through concordance with NACAP standards. Asthma is a complex condition with biological, psychological, and sociological components; therefore, an ANS is a suitable intervention to improve concordance with guidelines. The ANS likely impacted performance directly, for example, by checking inhaler technique, and indirectly by acting as a safety net ensuring doctors included ICS on discharge. Keywords: asthma, nurse specialist, clinical audit, quality improvement
Procedia PDF Downloads 379
2091 Risk Factors of Hospital Acquired Infection Mortality in a Tunisian Intensive Care Unit
Authors: Ben Cheikh Asma, Bouafia Nabiha, Ammar Asma, Ezzi Olfa, Meddeb Khaoula, Chouchène Imed, Boussarsar Hamadi, Njah Mansour
Abstract:
Background: Hospital Acquired Infection (HAI) constitutes an important worldwide health problem. It is associated with high mortality rates in intensive care units (ICUs). This study aimed to determine the HAI mortality rate in a Tunisian intensive care unit and to identify its risk factors. Methods: We conducted a prospective observational cohort study over a 12-month period (September 15th, 2015 to September 15th, 2016) in the adult medical ICU of University Hospital Farhat Hached (Sousse, Tunisia). All patients admitted to the ICU for more than 48 hours were included in the study. Data were collected by a medical hygienist assisted by an intensivist, using an anonymous standardized survey record form. We adopted the definitions of the Centers for Disease Control and Prevention (Atlanta) to detect HAI, and used Kaplan-Meier survival analysis and Cox proportional hazards regression to identify independent risk factors of HAI mortality. Results: Of 171 patients, 67 developed an ICU-acquired infection (global incidence rate = 39.2%). The mean age of patients was 59 ± 21.2 years, and 60.8% were male. The most frequently identified infections were pulmonary acquired infections (ventilator-associated pneumonia (VAP) and infected atelectasis, with density rates of 21.4 VAP/1000 days of mechanical ventilation and 9.4 infected atelectasis/1000 days of mechanical ventilation, respectively) and central venous catheter-associated infection (CVC-AI), with a density rate of 28.4 CVC-AI/1000 CVC-days. The HAI mortality rate was 66.7% (n=44). The median survival was 20 ± 3.36 days, 95% Confidence Interval [13.39–26.60]. Specific mortality rates according to infection site were 65.5%, 36.4%, and 4.5%, respectively, for VAP, CVC-associated infection, and infected atelectasis. In univariate analysis, significant associations between mortality and cardiovascular history (p=0.04), tracheotomy (p<0.01), peripheral venous catheterization (p=0.04), VAP (p=0.04), and infected atelectasis (p=0.04) were detected. Independent risk factors for HAI mortality were VAP (Hazard Ratio = 3.14, 95% Confidence Interval [1.63–6.05], p=0.001) and tracheotomy (Hazard Ratio = 0.22, 95% Confidence Interval [0.10–0.44], p<0.001). Conclusions: In the present study, the hospital-acquired infection mortality rate was relatively high. We need to intensify the fight against these infections, especially ventilator-associated pneumonia, which is associated with a higher risk of mortality in many studies. Thus, more effective infection control interventions are necessary in our hospital. Keywords: hospital acquired infection, intensive care unit, mortality, risk factors
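The survival-analysis approach described above can be illustrated with a minimal sketch; it assumes a hypothetical patient table with columns such as duration (days of follow-up), event (1 = death), vap, tracheotomy, and cardio_history, and it uses the open-source lifelines library rather than whatever software the authors actually employed.

```python
# Minimal sketch of the Kaplan-Meier / Cox proportional hazards analysis described above.
# The file name and column names (duration, event, vap, tracheotomy, cardio_history) are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("icu_hai_cohort.csv")  # hypothetical cohort table

# Kaplan-Meier estimate of survival after ICU-acquired infection
km = KaplanMeierFitter()
km.fit(durations=df["duration"], event_observed=df["event"])
print("Median survival (days):", km.median_survival_time_)

# Cox proportional hazards model for independent risk factors of mortality
cph = CoxPHFitter()
cph.fit(df[["duration", "event", "vap", "tracheotomy", "cardio_history"]],
        duration_col="duration", event_col="event")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```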
Procedia PDF Downloads 485
2090 Ways Management of Foods Not Served to Consumers in Food Service Sector
Authors: Marzena Tomaszewska, Beata Bilska, Danuta Kolozyn-Krajewska
Abstract:
Food loss and food waste are a global problem of the modern economy. The research undertaken aimed to analyze how food is handled in catering establishments with regard to food waste and to identify the main ways of managing foods/dishes not served to consumers. A survey study was conducted from January to June 2019. The selection of catering establishments participating in the study was deliberate (purposive sampling). The study included establishments located only in Mazowieckie Voivodeship (Poland). In total, 42 completed questionnaires were collected. In some questions, answers were based on a 5-point scale of 1 to 5 (from 'always'/'every day' to 'never'). The survey also included closed questions with a suggested set of response options. The respondents stated that in their workplaces, dishes served cold and hot ready meals are discarded every day or almost every day (23.7% and 20.5% of answers, respectively). The procedure most frequently used for dealing with dishes not served to consumers on a given day is their storage at a cool temperature until the following day. In the research, one-fifth of respondents admitted that consumers 'always' or 'usually' leave uneaten meals on their plates, and over 41% 'sometimes' do so. It was additionally found that food not used in the food service sector is most often thrown into a public container for rubbish. Most often thrown into the public container (with communal trash) were expired products (80.0%), plate waste (80.0%), and inedible products (fruit and vegetable peels, eggshells) (77.5%). The item most frequently thrown into the container dedicated only to food waste was used deep-frying oil (62.5%). Ten percent of respondents indicated that inedible products in their workplaces are allocated to animal feed. Food waste in the food service sector still remains an insufficiently studied issue, as owners of these establishments are often unwilling to disclose data pertaining to the subject. Incorrect practices for managing foods not served to consumers were observed. There is a need to develop educational activities for employees and management in the context of food waste management in the food service sector. This publication has been developed under contract No. Gospostrateg1/385753/1/NCBR/2018 with the National Center for Research and Development for carrying out and funding a project implemented as part of the 'The social and economic development of Poland in the conditions of globalizing markets - GOSPOSTRATEG' program, entitled 'Developing a system for monitoring wasted food and an effective program to rationalize losses and reduce food wastage' (acronym PROM). Keywords: food waste, inedible products, plate waste, used deep-frying oil
Procedia PDF Downloads 119
2089 Exploration of Building Information Modelling Software to Develop Modular Coordination Design Tool for Architects
Authors: Muhammad Khairi bin Sulaiman
Abstract:
The utilization of Building Information Modelling (BIM) in the construction industry has provided an opportunity for designers in the Architecture, Engineering and Construction (AEC) industry to move from the conventional method of manual drafting to an approach that creates alternative designs quickly and produces more accurate, reliable, and consistent outputs. By using BIM software, designers can create digital content that manipulates data through BIM's parametric model. With BIM software, more alternative designs can be created quickly, and design problems can be explored further to produce a better design faster than with conventional design methods. Generally, BIM is used as a documentation mechanism, and its capabilities as a design tool have not been fully explored and utilised. Relative to the current issue, Modular Coordination (MC) design is encouraged as a sustainable design practice, since MC design reduces material wastage through standard dimensioning, pre-fabrication, and repetitive, modular construction and components. However, MC design involves a complex process of rules and dimensions. Therefore, a tool is needed to make this process easier. Since the parameters in BIM can easily be manipulated to follow MC rules and dimensioning, the integration of BIM software with MC design is proposed for architects during the design stage. With this tool, the acceptance and effective application of MC design are expected to improve. Consequently, this study will analyse and explore the function and customization of BIM objects and the capability of BIM software to expedite the application of MC design during the design stage for architects. With this application, architects will be able to create building models and locate objects within reference modular grids that adhere to MC rules and dimensions. The parametric modeling capabilities of BIM will also act as a visual tool that will further enhance the automation of the 3-dimensional space planning modeling process. Method: The study will first analyze and explore the parametric modeling capabilities of rule-based BIM objects in order to customize a reference grid within the rules and dimensioning of MC. The approach will further enhance the architect's overall design process and enable architects to automate complex modeling, which was nearly impossible before. A prototype using a residential quarter will be modeled. A set of reference grids guided by specific MC rules and dimensions will be used to develop a variety of space planning configurations. With the use of this tool, the design process will be expedited, encouraging the use of MC design in the construction industry. Keywords: building information modeling, modular coordination, space planning, customization, BIM application, MC space planning
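To give a rough idea of how a rule-based modular reference grid and grid-snapping could work, here is a minimal sketch; it assumes the commonly used 100 mm basic module (M) of modular coordination, while the 3M planning-grid spacing, the function names, and the example coordinates are hypothetical and are not drawn from the study or from any particular BIM software's API.

```python
# Minimal sketch: generate a modular reference grid and snap object origins to it.
# Assumes a 100 mm basic module (M); the 3M planning grid is a hypothetical choice.
BASIC_MODULE_MM = 100                      # 1M
GRID_SPACING_MM = 3 * BASIC_MODULE_MM      # 3M planning grid (hypothetical)

def snap_to_grid(value_mm: float, spacing_mm: int = GRID_SPACING_MM) -> int:
    """Round a coordinate to the nearest modular grid line."""
    return int(round(value_mm / spacing_mm)) * spacing_mm

def modular_grid(width_mm: int, depth_mm: int, spacing_mm: int = GRID_SPACING_MM):
    """Return the (x, y) intersections of a modular reference grid."""
    xs = range(0, width_mm + 1, spacing_mm)
    ys = range(0, depth_mm + 1, spacing_mm)
    return [(x, y) for x in xs for y in ys]

# Example: a wall origin sketched at (412.7, 2895.0) mm is pulled onto the 3M grid
print(snap_to_grid(412.7), snap_to_grid(2895.0))  # -> 300 3000
```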
Procedia PDF Downloads 84
2088 Oral Health of Tobacco Chewers: A Cross-Sectional Study in Karachi, Pakistan
Authors: Warsi A. Ibrahim, Qureshi A. Ambrina, Younus M. Anjum
Abstract:
Introduction: Oral lesions related to commercially available smokeless tobacco (ST) products, such as Pan, Gutka, Mahwa, and Naswar, are considered a serious challenge for dental health care providers in Pakistan. A majority of the laboring Pakistani population consumes ST, and public transporters and drivers are no exception. It was necessary to identify individuals of this particular population group and screen their oral health and early signs of pre-cancerous lesions so that appropriate preventive measures could be taken to reduce the burden on health providers. Aim of Study: To estimate the prevalence of ST consumption and perceptions of use, and to evaluate oral health status among public drivers in Karachi. Material & Methods: A cross-sectional survey was conducted over a duration of 2 months through convenient sampling. A sample (n=615) of public drivers (age > 18 years) from all over Karachi was gathered. A structured proforma was used to record the socio-demographics, addiction profile, perception of use, and oral health status (oral lesions, oral sub-mucosal fibrosis, and dental caries) of study participants. Data were entered and analyzed in SPSS version 16.0 using descriptive statistics only. Results: The prevalence of ST consumption among the study participants was 92.5%. Of these, almost 70% suffered from one or another form of oral lesion. Four major types of ST consumption were observed, and 60% of oral lesions were related to Gutka chewers showing early signs of oral cancer. In addition, the occurrence of oral sub-mucosal fibrosis (OSF) was found to be significantly high, at around 54.8%. Overall dental caries status was also high, showing that, on average, 5 teeth per individual were decayed, missing, or filled, deviating from the WHO normal criteria (mean < 3). The study also showed that public drivers rely on oral tobacco consumption because it helps them 'improve consciousness' (p-value < 0.01; chi-square test). Multivariate analysis showed a higher prevalence of smokeless tobacco use among highway drivers versus local drivers (A.O.R.: 2.82 [0.83-9.61], p-value < 0.01). Conclusion: Smokeless tobacco (ST) consumption has a direct effect on oral health; however, the type of ST and the duration of consumption are factors directly related to severity. Moreover, Gutka may be considered to have the most lethal effects on oral health, potentially leading to oral cancer and affecting an individual's quality of life. Specific preventive programs must be undertaken to reduce the consumption of Gutka among public transporters and drivers. Keywords: smokeless tobacco, oral lesions, drivers, public transporters
Procedia PDF Downloads 308
2087 Application of Forensic Entomology to Estimate the Post Mortem Interval
Authors: Meriem Taleb, Ghania Tail, Fatma Zohra Kara, Brahim Djedouani, T. Moussa
Abstract:
Forensic entomology has grown immensely as a discipline in the past thirty years. The main purpose of forensic entomology is to establish the post-mortem interval, or PMI. Three days after death, insect evidence is often the most accurate, and sometimes the only, method of determining the elapsed time since death. This work presents the estimation of the PMI in an experiment to test the reliability of the accumulated degree days (ADD) method, and the application of this method in a real case. The study was conducted at the Laboratory of Entomology at the National Institute for Criminalistics and Criminology of the National Gendarmerie, Algeria. The domestic rabbit Oryctolagus cuniculus L. was selected as the animal model. On 8th July 2012, the animal was killed. Larvae were collected and raised to adulthood. The oviposition time was estimated by summing the average daily temperatures minus the minimum development temperature (which is specific to each species); the day on which the required thermal sum is reached corresponds to the oviposition day. Weather data were obtained from the nearest meteorological station. After rearing was accomplished, three species emerged: Lucilia sericata, Chrysomya albiceps, and Sarcophaga africa. For Chrysomya albiceps, an accumulation of 186°C is necessary. The emergence of adults occurred on 22nd July 2012, and a value of 193.4°C was reached on 9th August 2012. Lucilia sericata requires an accumulation of 207°C. The emergence of adults occurred on 23rd July 2012, and a value of 211.35°C was reached on 9th August 2012. We should also consider that oviposition may occur more than 12 hours after death. Thus, the obtained PMI is in agreement with the actual time of death. We illustrate the use of this method during the investigation of a case of a decaying human body found on 3rd March 2015 in Bechar, in the southwest of the Algerian desert. Maggots were collected and sent to the Laboratory of Entomology. Lucilia sericata adults were identified after emergence on 24th March 2015. A sum of 211.6°C was reached on 1st March 2015, which corresponds to the estimated day of oviposition. Therefore, the estimated date of death is 1st March 2015 ± 24 hours. The PMI estimated by the accumulated degree days (ADD) method appears to be very precise. Entomological evidence should always be used in homicide investigations when the time of death cannot be determined by other methods. Keywords: forensic entomology, accumulated degree days, postmortem interval, diptera, Algeria
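The accumulated-degree-day backtracking described above can be sketched as follows. This is only an illustration: the 207°C thermal requirement for Lucilia sericata is taken from the text, but the 8°C minimum development temperature and the daily mean temperatures are hypothetical stand-ins, since the abstract does not state the base temperature or the weather record used.

```python
# Minimal sketch of backtracking the oviposition date with accumulated degree days (ADD).
# The 207 degree-day requirement comes from the abstract; the 8 C base temperature and
# the constant 25 C daily means are hypothetical example values.
from datetime import date, timedelta

THERMAL_REQUIREMENT = 207.0   # degree-days from oviposition to adult emergence (L. sericata)
BASE_TEMP = 8.0               # assumed minimum development temperature (hypothetical)

def estimate_oviposition(emergence: date, daily_means: dict) -> date:
    """Walk backwards from adult emergence, accumulating (mean - base) per day,
    until the species' thermal requirement is reached."""
    accumulated = 0.0
    day = emergence
    while accumulated < THERMAL_REQUIREMENT:
        day -= timedelta(days=1)
        accumulated += max(daily_means[day] - BASE_TEMP, 0.0)
    return day

# Hypothetical weather record: 25 C mean on every day preceding emergence
emergence_day = date(2012, 7, 23)
weather = {emergence_day - timedelta(days=i): 25.0 for i in range(1, 40)}
print(estimate_oviposition(emergence_day, weather))  # ~13 days earlier at 17 degree-days/day
```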
Procedia PDF Downloads 294
2086 Formulation and Test of a Model to explain the Complexity of Road Accident Events in South Africa
Authors: Dimakatso Machetele, Kowiyou Yessoufou
Abstract:
Whilst several studies have indicated that road accident events might be more complex than previously thought, we have a limited scientific understanding of this complexity in South Africa. The present project proposes and tests a more comprehensive metamodel that integrates multiple causality relationships among variables previously linked to road accidents. This was done by fitting a structural equation model (SEM) to data collected from various sources. The study also fitted a GARCH (Generalized Auto-Regressive Conditional Heteroskedasticity) model to forecast future road accidents in the country. The analysis shows that the number of road accidents has been increasing since 1935. The road fatality rate follows a polynomial curve given by the equation y = -0.0114x² + 1.2378x - 2.2627 (R² = 0.76), where y is the death rate and x is the year. This trend results in an average death rate of 23.14 deaths per 100,000 people. Furthermore, the analysis shows that the number of crashes could be significantly explained by the total number of vehicles (P < 0.001), the number of registered vehicles (P < 0.001), the number of unregistered vehicles (P = 0.003), and the population of the country (P < 0.001). Contrary to expectation, the number of driver licenses issued and the total distance traveled by vehicles do not correlate significantly with the number of crashes (P > 0.05). Furthermore, the analysis reveals that the number of casualties could be linked significantly to the number of registered vehicles (P < 0.001) and the total distance traveled by vehicles (P = 0.03). As for the number of fatal crashes, the analysis reveals that the total number of vehicles (P < 0.001), the number of registered (P < 0.001) and unregistered vehicles (P < 0.001), the population of the country (P < 0.001), and the total distance traveled by vehicles (P < 0.001) correlate significantly with the number of fatal crashes. However, the number of casualties and, again, the number of driver licenses do not seem to determine the number of fatal crashes (P > 0.05). Finally, the number of crashes is predicted to be roughly constant over time at 617,253 accidents for the next 10 years, with the worst-case scenario suggesting that this number may reach 1,896,667. The number of casualties was also predicted to be roughly constant over time at 93,531, although this number may reach 661,531 in the worst-case scenario. However, although the number of fatal crashes may decrease over time, it is forecast to reach 11,241 fatal crashes within the next 10 years, with the worst-case scenario estimated at 19,034 within the same period. Finally, the number of fatalities is also predicted to be roughly constant at 14,739 but may reach 172,784 in the worst-case scenario. Overall, the present study reveals the complexity of road accidents and allows us to propose several recommendations aimed at reducing road accidents, casualties, fatal crashes, and deaths in South Africa. Keywords: road accidents, South Africa, statistical modelling, trends
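As a small worked illustration of the fitted fatality-rate curve quoted above, the sketch below evaluates y = -0.0114x² + 1.2378x - 2.2627 and locates its peak. Note that the abstract does not state how the year variable x is coded (for example, years since 1935), so any interpretation of the peak's position is an assumption; only the coefficients themselves come from the text.

```python
# Evaluate the reported road-fatality-rate polynomial (R^2 = 0.76).
# How x is coded is not stated in the abstract; "years since the start of the series" is an assumption.
def fatality_rate(x: float) -> float:
    """Death rate predicted by y = -0.0114*x^2 + 1.2378*x - 2.2627."""
    return -0.0114 * x**2 + 1.2378 * x - 2.2627

# The vertex of this downward parabola marks the year index with the highest predicted rate.
x_peak = 1.2378 / (2 * 0.0114)                              # ~54.3
print(round(x_peak, 1), round(fatality_rate(x_peak), 1))    # ~54.3, ~31.3 deaths per 100,000
```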
Procedia PDF Downloads 161
2085 Seasonal Short-Term Effect of Air Pollution on Cardiovascular Mortality in Belgium
Authors: Natalia Bustos Sierra, Katrien Tersago
Abstract:
It is well established that both extremes of temperature are associated with increased mortality and that air pollution is associated with temperature. This relationship is complex, and in countries with important seasonal variations in weather, such as Belgium, some effects can appear non-significant when the analysis is done over the entire year. We therefore analyzed the effect of short-term outdoor air pollution exposure on cardiovascular (CV) mortality during the warmer and colder months separately. We used daily cardiovascular deaths from acute cardiovascular diagnoses according to the International Classification of Diseases, 10th Revision (ICD-10: I20-I24, I44-I49, I50, I60-I66) during the period 2008-2013. The environmental data were population-weighted concentrations of particulates with an aerodynamic diameter less than 10 µm (PM₁₀) and less than 2.5 µm (PM₂.₅) (daily average), nitrogen dioxide (NO₂) (daily maximum of the hourly average), and ozone (O₃) (daily maximum of the 8-hour running mean). A generalized linear model was applied, adjusting for the confounding effects of season, temperature, dew point temperature, day of the week, public holidays, and the incidence of influenza-like illness (ILI) per 100,000 inhabitants. The relative risks (RR) were calculated for an increase of one interquartile range (IQR) of the air pollutant (μg/m³). These were presented for the four hottest months (June, July, August, September) and coldest months (November, December, January, February) in Belgium. We applied both individual lag models and an unconstrained distributed lag model. The cumulative effect of a four-day exposure (the day of exposure and the three following days) was calculated from the unconstrained distributed lag model. The IQRs for PM₁₀, PM₂.₅, NO₂, and O₃ were, respectively, 8.2, 6.9, 12.9, and 25.5 µg/m³ during warm months and 18.8, 17.6, 18.4, and 27.8 µg/m³ during cold months. The association with CV mortality was statistically significant for all four pollutants during warm months and only for NO₂ during cold months. During the warm months, the cumulative effect of an IQR increase of ozone for the age groups 25-64, 65-84, and 85+ was 1.066 (95% CI: 1.002-1.135), 1.041 (1.008-1.075), and 1.036 (1.013-1.058), respectively. The cumulative effect of an IQR increase of NO₂ for the age group 65-84 was 1.066 (1.020-1.114) during warm months and 1.096 (1.030-1.166) during cold months. The cumulative effect of an IQR increase of PM₁₀ during warm months reached 1.046 (1.011-1.082) and 1.038 (1.015-1.063) for the age groups 65-84 and 85+, respectively. Similar results were observed for PM₂.₅. The short-term effect of air pollution on cardiovascular mortality is greater during warm months, at lower pollutant concentrations, than during cold months. Spending more time outside during warm months increases population exposure to air pollution and can, therefore, be a confounding factor for this association. Age can also affect the length of time spent outdoors and the type of physical activity exercised. This study supports a deleterious effect of air pollution on CV mortality, which varies according to season and age group in Belgium. Public health measures should, therefore, be adapted to seasonality. Keywords: air pollution, cardiovascular, mortality, season
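The relative-risk-per-IQR calculation described above can be sketched as follows. This is a minimal illustration, assuming a hypothetical daily time-series table (CV deaths, NO₂, temperature, and so on) and a Poisson family for the GLM; Poisson regression is a common choice for daily death counts but is not explicitly stated in the abstract, and the column names are invented for the example.

```python
# Minimal sketch: RR for a one-IQR pollutant increase from a daily time-series GLM.
# The file, column names, and the Poisson family are assumptions made for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("daily_cv_deaths_warm_months.csv")  # hypothetical warm-months subset

model = smf.glm(
    "cv_deaths ~ no2 + temperature + dew_point + C(day_of_week) + C(holiday) + ili_rate",
    data=df,
    family=sm.families.Poisson(),
).fit()

iqr = df["no2"].quantile(0.75) - df["no2"].quantile(0.25)
beta = model.params["no2"]
ci_low, ci_high = model.conf_int().loc["no2"]

# RR for an IQR increase = exp(beta * IQR); the same transformation gives its 95% CI
print("RR per IQR:", np.exp(beta * iqr),
      "95% CI:", np.exp(ci_low * iqr), "-", np.exp(ci_high * iqr))
```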
Procedia PDF Downloads 165
2084 Other Cancers in Patients With Head and Neck Cancer
Authors: Kim Kennedy, Daren Gibson, Stephanie Flukes, Chandra Diwakarla, Lisa Spalding, Leanne Pilkington, Andrew Redfern
Abstract:
Introduction: Head and neck cancers (HNC) are often associated with the development of non-HNC primaries, as the risk factors that predispose patients to HNC are often risk factors for other cancers. Aim: We sought to evaluate whether there is an increased risk of smoking- and alcohol-related cancers, as well as other cancers, in HNC patients, and to evaluate whether there is a difference between the rates of non-HNC primaries in Aboriginal compared with non-Aboriginal HNC patients. Methods: We performed a retrospective cohort analysis of 320 HNC patients from a single center in Western Australia, identifying 80 Aboriginal and 240 non-Aboriginal patients matched at a 1:3 ratio by site, histology, rurality, and age. We collected data on patient characteristics, tumour features, treatments, outcomes, and past and subsequent HNCs and non-HNC primaries. Results: In the overall study population, there were 86 patients (26.9%) with a metachronous or synchronous non-HNC primary. Non-HNC primaries were actually significantly more common in the non-Aboriginal population compared with the Aboriginal population (30% vs. 17.5%, p=0.02); however, half of these were patients with cutaneous squamous or basal cell carcinomas (cSCC/BCC) only. When cSCC/BCCs were excluded, non-Aboriginal patients had a similar rate to Aboriginal patients (16.7% vs. 15%, p=0.73). There were clearly more cSCC/BCCs in non-Aboriginal patients compared with Aboriginal patients (16.7% vs. 2.5%, p=0.001) and more patients with melanoma (2.5% vs. 0%, p not significant). Rates of most cancers were similar between non-Aboriginal and Aboriginal patients, including prostate (2.9% vs. 3.8%), colorectal (2.9% vs. 2.5%), and kidney (1.2% vs. 1.2%), and these rates appeared comparable to Australian Age-Standardised Incidence Rates (ASIRs) in the general community. Oesophageal cancer occurred at double the rate in Aboriginal patients (3.8%) compared with non-Aboriginal patients (1.7%), which was far in excess of ASIRs, which estimate a lifetime risk of 0.59% in the general population. Interestingly, lung cancer rates did not appear to be significantly increased in our cohort, with 2.5% of Aboriginal patients and 3.3% of non-Aboriginal patients having lung cancer, which is in line with ASIRs estimating a lifetime risk of 5% (by age 85). Interestingly, the rate of glioma in the non-Aboriginal population was higher than the ASIR, with 0.8% of non-Aboriginal patients developing glioma, compared with Australian averages predicting a 0.6% lifetime risk in the general population. As these are small numbers, this finding may well be due to chance. Unsurprisingly, second HNCs occurred at an increased incidence in our cohort, in 12.5% of Aboriginal patients and 11.2% of non-Aboriginal patients, compared to an ASIR of 17 cases per 100,000 persons, which estimates a lifetime risk of 1.70%. Conclusions: Overall, 26.9% of patients had a non-HNC primary. When cSCC/BCCs were excluded, Aboriginal and non-Aboriginal patients had similar rates of non-HNC primaries, although non-Aboriginal patients had a significantly higher rate of cSCC/BCCs. Aboriginal patients had double the rate of oesophageal primaries; however, this was not statistically significant, possibly due to small case numbers. Keywords: head and neck cancer, synchronous and metachronous primaries, other primaries, Aboriginal
Procedia PDF Downloads 76
2083 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, assessing these parameters requires human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm were used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests whose protein value was determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The problem to solve is protein regression in the first dataset and variety classification in the second. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and these results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested, including centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type. Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
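To make two of the chemometric preprocessing steps mentioned above concrete, here is a minimal sketch of standard normal variate (SNV) scaling and Savitzky-Golay filtering applied per spectrum; the array shape and the filter parameters (window length 11, polynomial order 2, first derivative) are illustrative choices, not the settings used in the study.

```python
# Minimal sketch of SNV and Savitzky-Golay preprocessing for NIR spectra.
# Spectra are assumed to be rows of a 2D array (n_samples, n_wavelengths);
# the window length and polynomial order are illustrative, not the study's settings.
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra: np.ndarray) -> np.ndarray:
    """Standard normal variate: center and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def sg_derivative(spectra: np.ndarray, window: int = 11, polyorder: int = 2,
                  deriv: int = 1) -> np.ndarray:
    """Savitzky-Golay smoothing / derivative along the wavelength axis."""
    return savgol_filter(spectra, window_length=window, polyorder=polyorder,
                         deriv=deriv, axis=1)

# Example with random data standing in for 900-1700 nm reflectance spectra
rng = np.random.default_rng(0)
raw = rng.random((4, 256))
processed = sg_derivative(snv(raw))
print(processed.shape)  # (4, 256)
```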
Procedia PDF Downloads 99
2082 Electronic Waste Analysis And Characterization Study: Management Input For Highly Urbanized Cities
Authors: Jilbert Novelero, Oliver Mariano
Abstract:
In a world where technological evolution and competition to create innovative products are at their peak, problems with Electronic Waste (E-waste) are becoming a global concern. E-waste refers to any electrical or electronic device that has reached the end of its useful life. The major issues are the volume of E-waste and the raw materials used to manufacture it, which are non-biodegradable and contain hazardous substances that are toxic to human health and the environment. The objective of this study is to gather baseline data on the composition of E-waste in the solid waste stream and to determine the top 5 E-waste categories in a highly urbanized city. Recommendations for managing and reducing these wastes were provided, which may serve as a guide for acceptance and implementation in the locality. Pasig City was the chosen beneficiary of the research output, and through the collaboration of the City Government of Pasig and its Solid Waste Management Office (SWMO), the researcher successfully conducted the Electronic Waste Analysis and Characterization Study (E-WACS) to achieve the objectives. The E-WACS conducted in April 2019 showed that E-waste ranked 4th, comprising 10.39% of the overall solid waste volume. Out of 345,127.24 kg of total daily domestic waste generated in the city, E-waste accounts for 35,858.72 kg. Moreover, an average of 40 grams was determined to be the E-waste generation per person per day. The top 5 E-waste categories were then classified after the analysis. The category that ranked first was office and telecommunications equipment, which comprised 63.18% of the total generated E-waste. Second was the household appliances category with 21.13%, third was the lighting devices category with 8.17%, fourth was the consumer electronics and batteries category with 5.97%, and fifth was the wires and cables category, which comprised 1.41% of the average generated E-waste samples. One of the recommendations provided in this research is the implementation of the Pasig City Waste Advantage Card. The card can be used as a privilege card, and earned points can be converted to avail of services such as haircuts, massages, dental services, and medical check-ups. Another recommendation is for the LGU to encourage communication and dialogue with technology and electronics manufacturers, distributors, and international and local companies to plan the retrieval and disposal of E-waste in accordance with the Extended Producer Responsibility (EPR) policy, under which producers are given significant responsibility for the treatment and disposal of post-consumer products. Keywords: E-waste, E-WACS, E-waste characterization, electronic waste, electronic waste analysis
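As a quick arithmetic check of the figures reported above, the short sketch below recomputes the E-waste share of the daily waste stream and the population size implied by the 40 g per-person-per-day figure; the implied population is a derived illustration only, not a number reported in the study.

```python
# Quick check of the reported E-WACS figures.
total_daily_waste_kg = 345_127.24   # total daily domestic waste generation, Pasig City
ewaste_daily_kg = 35_858.72         # daily E-waste fraction reported

share = ewaste_daily_kg / total_daily_waste_kg
print(f"E-waste share: {share:.2%}")          # ~10.39%, matching the reported value

per_capita_g = 40                    # reported average E-waste generation per person per day
implied_population = ewaste_daily_kg * 1000 / per_capita_g
print(f"Implied population: {implied_population:,.0f}")  # ~896,468 (derived, not reported)
```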
Procedia PDF Downloads 118