Search results for: labeling cues
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 310

40 Getting It Right Before Implementation: Using Simulation to Optimize Recommendations and Interventions After Adverse Event Review

Authors: Melissa Langevin, Natalie Ward, Colleen Fitzgibbons, Christa Ramsey, Melanie Hogue, Anna Theresa Lobos

Abstract:

Description: Root Cause Analysis (RCA) is used by health care teams to examine adverse events (AEs) and identify causes, which then lead to recommendations for prevention. Despite widespread use, RCA has limitations. Best practices have not been established for implementing recommendations or tracking the impact of interventions after AEs. During phase 1 of this study, we used simulation to analyze two fictionalized AEs that occurred in hospitalized paediatric patients, to identify and understand how the errors occurred and to generate recommendations to mitigate and prevent recurrences. Scenario A involved an error of commission (inpatient drug error), and Scenario B involved detecting an error that had already occurred (critical care drug infusion error). The recommendations generated were: improved drug labeling, specialized drug kits, alert signs, and clinical checklists. Aim: Use simulation to optimize interventions recommended after critical event analysis prior to implementation in the clinical environment. Methods: Suggested interventions from Phase 1 were designed and tested through scenario simulation in the clinical environment (medicine ward or pediatric intensive care unit). Each scenario was simulated 8 times. Recommendations were tested using different, voluntary teams, and each scenario was debriefed to understand why the error was repeated despite interventions and how interventions could be improved. Interventions were modified with subsequent simulations until recommendations were felt to have an optimal effect and data saturation was achieved. Along with concrete suggestions for design and process change, qualitative data pertaining to employee communication and hospital standard work were collected and analyzed. Results: Each scenario had a total of three interventions to test. In Scenario 1, the error was reproduced in the initial two iterations and mitigated following key intervention changes. In Scenario 2, the error was identified immediately in all cases where the intervention checklist was utilized properly. Independently of intervention changes and improvements, the simulation was beneficial in identifying which of these should be prioritized for implementation, and it highlighted that even the potential solutions most frequently suggested by participants did not always translate into error prevention in the clinical environment. Conclusion: We conclude that interventions that help to change process (epinephrine kit or mandatory checklist) were more successful at preventing errors than passive interventions (signage, change in memory aids). Given that even the most successful interventions needed modifications and subsequent re-testing, simulation is key to optimizing suggested changes. Simulation is a safe, practice-changing modality for institutions to use prior to implementing recommendations from RCA following AE reviews.

Keywords: adverse events, patient safety, pediatrics, root cause analysis, simulation

Procedia PDF Downloads 124
39 Tertiary Level Teachers' Beliefs about Codeswitching

Authors: Hoa Pham

Abstract:

Code switching, which can be described as the use of students' first language in second language classrooms, has long been a controversial topic in the area of language teaching and second language acquisition. While this has been widely investigated across different contexts, little empirical research has been undertaken in Vietnam. The findings of this study contribute to our understanding of bilingual discourse and code switching practices in content and language integrated classrooms, which has significant implications for language teaching and learning in general and for language pedagogy at tertiary level in Vietnam in particular. This study examines the accounts the teachers articulated for their code switching practices in content-based Business English classes in Vietnam. Data were collected from five teachers through stimulated recall interviews facilitated by video data to garner the teachers' cognitive reflection, which allowed them to vocalise the motivations behind their code switching behaviour in particular contexts. The literature has suggested that when participants are provided with a large number of stimuli or cues, they will re-experience the original situation in their imagination with great accuracy. This technique can also provide a valuable "insider" perspective on the phenomenon under investigation, which complements the researcher's "outsider" observation. It can create a relaxed atmosphere during the interview process, which in turn promotes the collection of rich and diverse data. Also, participants can be empowered by this technique as they can raise their own concerns and discuss instances which they find important or interesting. The data generated through this study were analysed using a constant comparative approach. The study found that the teachers indicated their support for the use of code switching in their pedagogical practices. In particular, as a pedagogical resource, the teachers saw code switching to the L1 as playing a key role in facilitating the students' comprehension of both content knowledge and the target language. They believed the use of the L1 accommodates the students' current language competence and content knowledge. They also expressed positive opinions about the role that code switching plays in stimulating students' schematic language and content knowledge, encouraging retention and interest in learning, and promoting a positive affective environment in the classroom. The teachers perceived that their use of code switching to the L1 helps them meet the students' language needs, prepares them for their study in subsequent courses, and addresses functional needs so that students can cope with English language use outside the classroom. Several factors shaped the teachers' perceptions of their code switching practices, including their accumulated teaching experience, their previous experience as language learners, their theoretical understanding of language teaching and learning, and their knowledge of the teaching context. Code switching was a typical phenomenon in the observed classes and was supported by the teachers in certain contexts. This study reinforces the call in the literature to recognise this practice as a useful instructional resource.

Keywords: codeswitching, language teaching, teacher beliefs, tertiary level

Procedia PDF Downloads 412
38 Measuring Biobased Content of Building Materials Using Carbon-14 Testing

Authors: Haley Gershon

Abstract:

The transition from fossil fuel-based building materials to eco-friendly, biobased building materials plays a key role in sustainable building. The growing global demand for biobased materials in the building and construction industries heightens the importance of carbon-14 testing, an analytical method used to determine the percentage of biobased content in a material's ingredients. This presentation will focus on the use of carbon-14 analysis within the building materials sector. Carbon-14, also known as radiocarbon, is a weakly radioactive isotope present in all living organisms. Fossil material older than about 50,000 years contains no detectable carbon-14. The radiocarbon method is thus used to determine the amount of carbon-14 present in a given sample. Carbon-14 testing is performed according to ASTM D6866, a standard test method developed specifically for biobased content determination of materials in solid, liquid, or gaseous form using radiocarbon analysis. Samples are combusted, converted into solid graphite, pressed onto a metal disc, and mounted onto the wheel of an accelerator mass spectrometer (AMS) for analysis. The AMS instrument is used to count the amount of carbon-14 present. By submitting samples for carbon-14 analysis, manufacturers of building materials can confirm the biobased content of the ingredients used. Biobased testing through carbon-14 analysis reports results as percent biobased content, indicating the percentage of ingredients derived from biomass-sourced carbon versus fossil carbon. The analysis is performed according to standardized methods such as ASTM D6866, ISO 16620, and EN 16640. Products sourced 100% from plant, animal, or microbiological material are therefore 100% biobased, while products sourced only from fossil fuel material are 0% biobased. Any result between 0% and 100% biobased indicates a mixture of both biomass-derived and fossil fuel-derived sources. Furthermore, biobased testing for building materials allows manufacturers to submit eligible material for certification and eco-label programs such as the United States Department of Agriculture (USDA) BioPreferred Program. This program includes a voluntary labeling initiative for biobased products, in which companies may apply to receive and display the USDA Certified Biobased Product label, attesting third-party verification and displaying a product's percentage of biobased content. The USDA program includes a specific category for building materials. In order to qualify for biobased certification under this product category, examples of product criteria that must be met include a minimum of 62% biobased content for wall coverings, a minimum of 25% biobased content for lumber, and a minimum of 91% biobased content for floor coverings (non-carpet). As a result, consumers can easily identify plant-based products in the marketplace.
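
As a rough illustration of how such a result is derived, the sketch below converts a measured percent-modern-carbon (pMC) value into a percent-biobased-content figure in the spirit of ASTM D6866; the atmospheric reference factor used here is an assumption for illustration only, since the standard prescribes the current correction value to apply.

    # Hedged sketch: deriving percent biobased content from a measured pMC value.
    # REF is an assumed atmospheric correction factor for illustration; ASTM D6866
    # specifies the value to use for a given reporting period.
    def biobased_content(pmc: float, ref: float = 100.0) -> float:
        """Return percent biobased carbon, clamped to the 0-100 range."""
        return max(0.0, min(100.0, 100.0 * pmc / ref))

    # Example: a sample measuring 62 pMC reports roughly 62% biobased content,
    # which would meet the wall-coverings threshold cited above.
    print(biobased_content(62.0))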

Keywords: carbon-14 testing, biobased, biobased content, radiocarbon dating, accelerator mass spectrometry, AMS, materials

Procedia PDF Downloads 136
37 Intracommunity Attitudes Toward the Gatekeeping of Asexuality in the LGBTQ+ Community on Tumblr

Authors: A.D. Fredline, Beverly Stiles

Abstract:

This is a qualitative investigation that examines the social media site Tumblr with the goal of analyzing the controversy regarding the inclusion of asexuality in the LGBTQ+ community. As platforms such as Tumblr permit the development of communities for marginalized groups, social media serves as a core site of exclusionary practices and boundary negotiations for community membership. This research is important because there is a paucity of research on the topic and a significant gap in the literature with regard to intracommunity gatekeeping, even though discourse on the topic is blatantly apparent on social media platforms. The objective is to begin to bridge this gap by examining attitudes towards the inclusion of asexuality within the LGBTQ+ community. In order to analyze these attitudes, eight publicly available blogs on Tumblr.com were selected from both the "inclusionist" and "exclusionist" perspectives. The blogs were found through a basic search for "inclusionist" and "exclusionist" on the Tumblr website. Out of the first twenty blogs listed for each set of results, those centrally focused on asexuality discourse were selected. For each blog, the fifty most recent postings were collected. Analysis of the collected postings exposed three central themes for the exclusionist perspective as well as for the inclusionist perspective. Findings indicate that, from the inclusionist perspective, asexuality belongs in the LGBTQ+ community. One primary argument from this perspective is that asexual individuals face opposition for their identity just as other identities included in the community do. This opposition is said to take a variety of forms, such as verbal shaming, assumption of illness, and corrective rape. Another argument is that the LGBTQ+ community and asexuals face a common opponent in cisheterosexism, as asexuals struggle with assumed and expected sexualization. A final central theme is that denying asexual inclusion leads to the assumption of heteronormativity. Findings also indicate that, from the exclusionist perspective, asexuality does not belong in the LGBTQ+ community. One central theme from this perspective is the equation of cisgender heteroromantic asexuals with cisgender heterosexuals: as straight individuals are not allowed in the community, exclusionists argue that asexuals engaged in opposite-gender partnerships should not be included. Another argument is that including asexuality in the community sexualizes all other identities by assuming sexual orientation is inherently sexual rather than romantic. Finally, exclusionists also argue that asexuality encourages childhood labeling and forces sexual identities on children, something not promoted by the LGBTQ+ community. The conclusion drawn from analyzing both perspectives is that integration may be possible, but complexities add another layer of discourse. For example, both inclusionists and exclusionists agree that privileged identities do not belong in the LGBTQ+ community; the focus of discourse is whether or not asexuals are privileged. Clearly, both sides of the debate have the same vision of what binds the community together. The question that remains is who belongs to that community.

Keywords: asexuality, exclusionists, inclusionists, Tumblr

Procedia PDF Downloads 162
36 Therapy Finding and Perspectives on Limbic Resonance in Gifted Adults

Authors: Andreas Aceranti, Riccardo Dossena, Marco Colorato, Simonetta Vernocchi

Abstract:

By the term "limbic resonance," we usually refer to a state of deep connection, both emotional and physiological, between people who, when in resonance, find their limbic systems in tune with one another. Limbic resonance is not only about sharing emotions but also about sharing physiological states: people in such resonance can influence each other's heart rate, blood pressure, and breathing. Limbic resonance is fundamental for human beings to connect and create deep bonds within a group, and it underpins our social skills. A relationship between gifted, resonant subjects is perceived as safe, and the relationship is experienced as an isle of serenity where it is possible to recharge, to communicate without words, to understand each other without giving explanations, and to strengthen the balance of each member of the group. Within the circle, self-esteem is consolidated, and members become stronger in facing what is outside: others and reality. The idea that gifted people who are together may be unfit for the world does not correspond to the truth. The circle made up of people with high cognitive potential characterized by limbic resonance is, in general, experienced as a solid platform from which one can safely move away and to which one can return to recover strength. We studied 8 adults (between 21 and 47 years old), all with IQs higher than 130. We monitored their brain wave frequencies (alpha, beta, theta, gamma, delta) by means of a biosensing tracker, along with their physiological states (heart rate, blood pressure, breathing frequency, pO2, pCO2) and selected blood work (5-HT, dopamine, catecholamines, cortisol). The subjects of the study were asked to adhere to a protocol involving bonding activities (such as team building), role plays, meditation sessions, and group therapy. All these activities were carried out together. We observed that after about 4 months of activities, their brain wave frequencies tended to tune to one another more and more quickly. After 9 months, the bond among them was so strong that they could "sense" each other's inner states and sometimes also guess each other's thoughts. According to our findings, it may be hypothesized that large synchronized outbursts of cortical neurons produce not only brain waves but also electromagnetic fields that may influence the activity of cortical neurons in other people's brains by inducing action potentials in large groups of neurons, and it is reasonably conceivable that this could transmit information such as emotions and cognitive cues to the other's brain. We also believe that upcoming research should focus on clarifying the role of brain magnetic particles in brain-to-brain communication, and that further investigations should be carried out on the presence and role of cryptochromes to evaluate their potential roles in direct brain-to-brain communication.

Keywords: limbic resonance, psychotherapy, brain waves, emotion regulation, giftedness

Procedia PDF Downloads 65
35 Physical Contact Modulation of Macrophage-Mediated Anti-Inflammatory Response in Osteoimmune Microenvironment by Pollen-Like Nanoparticles

Authors: Qing Zhang, Janak L. Pathak, Marco N. Helder, Richard T. Jaspers, Yin Xiao

Abstract:

Introduction: Nanomaterial-based bone regeneration is greatly influenced by the immune microenvironment. Tissue-engineered nanomaterials mediate the inflammatory response of macrophages to regulate bone regeneration. Silica nanoparticles have been widely used in tissue engineering-related preclinical studies. However, the effect of topological features on the surface of silica nanoparticles on the immune response of macrophages remains unknown. Purpose: The aims of this research are to compare the influences of normal and pollen-like silica nano-surface topography on macrophage immune responses and to obtain insight into their potential regulatory mechanisms. Methods: Macrophages (RAW 264.7 cells) were exposed to mesoporous silica nanoparticles with normal morphology (MSNs) and pollen-like morphology (PMSNs). RNA-seq, RT-qPCR, and LSCM were used to assess changes in the expression of immune response-related genes and proteins. SEM and TEM were performed to evaluate the contact and adherence of silica nanoparticles with macrophages. For the assessment of the immunomodulation-mediated osteogenic potential, BMSCs were cultured with conditioned medium (CM) from LPS pre-stimulated macrophage cultures treated with MSNs or PMSNs. The osteoimmunomodulatory potential of MSNs and PMSNs in vivo was tested in a mouse cranial bone osteolysis model. Results: The RNA-seq, RT-qPCR, and LSCM assays showed that PMSNs inhibited the expression of pro-inflammatory genes and proteins in macrophages. SEM images showed distinct binding patterns of MSNs and PMSNs on the macrophage membrane surface: MSNs were more evenly dispersed across the macrophage cell membrane, while PMSNs were aggregated. The PMSN-induced macrophage anti-inflammatory response was associated with upregulation of the cell surface receptor CD28 and inhibition of ERK phosphorylation. TEM images showed that both MSNs and PMSNs could be phagocytosed by macrophages, and inhibiting nanoparticle phagocytosis did not affect the expression of anti-inflammatory genes and proteins. Moreover, conditioned medium from PMSN-treated macrophages enhanced BMP-2 expression and osteogenic differentiation of mBMSCs. Similarly, PMSNs prevented LPS-induced bone resorption via downregulation of the inflammatory reaction. Conclusions: PMSNs can promote bone regeneration by modulating osteoimmunological processes through surface topography. The study offers insights into how surface physical contact cues can modulate osteoimmunology and provides a basis for the application of nanoparticles with pollen-like morphology in immunomodulation for bone tissue engineering and regeneration.

Keywords: physical contact, osteoimmunology, macrophages, silica nanoparticles, surface morphology, membrane receptor, osteogenesis, inflammation

Procedia PDF Downloads 30
34 Chemical vs Visual Perception in Food Choice Ability of Octopus vulgaris (Cuvier, 1797)

Authors: Al Sayed Al Soudy, Valeria Maselli, Gianluca Polese, Anna Di Cosmo

Abstract:

Cephalopods are considered model organisms with a rich behavioral repertoire. Sophisticated behaviors have been widely studied and described in different species such as Octopus vulgaris, which has evolved the largest and most complex nervous system among invertebrates. In O. vulgaris, cognitive abilities in problem-solving tasks and learning abilities are associated with long-term memory and spatial memory, mediated by highly developed sensory organs. These animals are equipped with sophisticated eyes able to discriminate colors even with a single photoreceptor type, a vestibular system, a 'lateral line analogue', a primitive 'hearing' system, and olfactory organs. They can recognize chemical cues either through direct contact with odor sources using their suckers or at a distance through the olfactory organs. Cephalopods are able to detect widespread waterborne molecules with the olfactory organs. However, many volatile odorant molecules are insoluble or have a very low solubility in water and must be perceived by direct contact. O. vulgaris, equipped with many chemosensory neurons located in its suckers, exhibits a peculiar behavior that can be provocatively described as 'smell by touch'. The aim of this study is to establish the priority given to chemical vs. visual perception in food choice. Materials and methods: Three different types of food (anchovies, clams, and mussels) were used, and all sessions were recorded with a digital camera. During the acclimatization period, octopuses were exposed to the three types of food to test their natural food preferences. Later, to verify whether the food preference was maintained, food was provided in transparent screw-jars with pierced lids to allow both visual and chemical recognition of the food inside. Subsequently, we alternately tested octopuses with food in sealed transparent screw-jars and food in opaque screw-jars with pierced lids. As a control, we used opaque sealed jars with the same lid color to verify a random choice among food types. Results and discussion: During the acclimatization period, O. vulgaris showed a higher preference for anchovies (60%), followed by clams (30%) and then mussels (10%). After acclimatization, using the transparent pierced screw-jars, the octopuses' food choices were split 50-50 between anchovies and clams, avoiding mussels. Later, guided by the visual sense alone, with transparent but unpierced jars, their food preference was 100% anchovies. With pierced but opaque jars, their first food choice was anchovies in 100% of cases, with clams as the second choice (33.3%). With no possibility of selecting food either by vision or by chemoreception, the results were 20% anchovies, 20% clams, and 60% mussels. We conclude that O. vulgaris uses both the chemical and visual senses in an integrative way in food choice, but when one of them is excluded, it appears clear that its food preference relies on the chemical sense more than on visual perception.

Keywords: food choice, Octopus vulgaris, olfaction, sensory organs, visual sense

Procedia PDF Downloads 192
33 Multimodal Rhetoric in the Wildlife Documentary, “My Octopus Teacher”

Authors: Visvaganthie Moodley

Abstract:

While rhetoric goes back as far as Aristotle, who defined it as the "art of persuasion", most scholars have focused on the elocutio and dispositio canons, neglecting the rhetorical impact of multimodal texts such as documentaries. Film documentaries are increasingly rhetorical, often used by wildlife conservationists to influence people to become more mindful about humanity's connection with nature. This paper examines the award-winning film documentary "My Octopus Teacher", which depicts naturalist Craig Foster's unique discovery of and relationship with a female octopus at the southern tip of Africa, the Cape of Storms in South Africa. It is anchored in Leech and Short's (2007) framework of linguistic and stylistic categories – comprising lexical items, grammatical features, figures of speech and other rhetorical features, and cohesiveness – with particular foci on diction, anthropomorphic language, metaphors, and symbolism. It also draws on Kress and van Leeuwen's (2006) multimodal analysis to show how verbal cues (the narrator's commentary), visual images in motion, visual images as metaphors and symbols, and aural sensory images such as music and sound synergise for rhetorical effect. In addition, the analysis of "My Octopus Teacher" is guided by Nichols' (2010) narrative theory; by the features of a documentary, which foreground the credibility of the narrative as a text that represents real events with real people; and by its modes of construction, viz. the poetic, expository, observational and participatory modes and their integration – forging documentaries as multimodal texts. This paper presents a multimodal rhetorical discussion of the sequence of salient episodes captured in the slow-moving one-and-a-half-hour documentary. These are: (i) The prologue: on the brink of something extraordinary; (ii) The day it all started; (iii) The narrator's turmoil: getting back into the ocean; (iv) The incredible encounter with the octopus; (v) Establishing a relationship; (vi) Outwitting the predatory pyjama shark; (vii) The cycle of life; and (viii) The conclusion: lessons from an octopus. The paper argues that wildlife documentaries, characterized by plausibility and providing researchers a lens to examine ideologies about animals and humans, offer an assimilation of the various senses – vocal, visual and aural – that engages viewers in a stylized, compelling way; they have the ability to persuade people to think and act in particular ways. As multimodal texts, with their use of lexical items; diction; anthropomorphic language; linguistic, visual and aural metaphors and symbolism; and depictions of anthropocentrism, wildlife documentaries are powerful resources for promoting wildlife conservation and conscientizing people of the need to establish a harmonious relationship with nature and fellow humans alike.

Keywords: documentaries, multimodality, rhetoric, style, wildlife, conservation

Procedia PDF Downloads 70
32 Contextual Toxicity Detection with Data Augmentation

Authors: Julia Ive, Lucia Specia

Abstract:

Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use "toxicity" as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets) in the absence versus presence of context, in this case previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist terms, etc.), so that context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labeling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious without context (i.e., covert cases), or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). Regarding the contextual detection models, we posit that their poor performance is due to limitations in both the data they are trained on (the problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on this, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
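
As an illustration of what such a conversation-structure-aware architecture might look like (a minimal sketch under our own assumptions, not the authors' implementation), each utterance in a thread can be encoded separately, and a second, utterance-level encoder can then model the thread before the target tweet is classified:

    # Hedged sketch of a hierarchical contextual toxicity classifier in PyTorch.
    # Names, dimensions and the input layout are assumptions for illustration.
    import torch
    import torch.nn as nn

    class HierarchicalToxicityClassifier(nn.Module):
        def __init__(self, vocab_size=30000, emb_dim=128, utt_dim=256, num_classes=2):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            # word-level encoder, shared across all utterances in the thread
            self.word_encoder = nn.GRU(emb_dim, utt_dim, batch_first=True)
            # utterance-level encoder, modelling the conversational hierarchy
            self.context_encoder = nn.GRU(utt_dim, utt_dim, batch_first=True)
            self.classifier = nn.Linear(utt_dim, num_classes)

        def forward(self, thread):  # thread: (batch, n_utterances, n_tokens) of token ids
            b, n_utt, n_tok = thread.shape
            words = self.embedding(thread.view(b * n_utt, n_tok))
            _, utt_vec = self.word_encoder(words)            # (1, b*n_utt, utt_dim)
            utt_seq = utt_vec.squeeze(0).view(b, n_utt, -1)  # one vector per utterance
            _, thread_vec = self.context_encoder(utt_seq)    # final state summarises the context
            return self.classifier(thread_vec.squeeze(0))    # logits for the target tweet

    # Toy usage: a batch of 2 threads, each with 3 utterances of 10 token ids.
    logits = HierarchicalToxicityClassifier()(torch.randint(1, 30000, (2, 3, 10)))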

Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing

Procedia PDF Downloads 141
31 Enhancing Food Quality and Safety Management in Ethiopia's Food Processing Industry: Challenges, Causes, and Solutions

Authors: Tuji Jemal Ahmed

Abstract:

Food quality and safety challenges are prevalent in Ethiopia's food processing industry, and these challenges can have adverse effects on consumers' health and well-being. The country is known for its diverse range of agricultural products, which are essential to its economy. However, poor food quality and safety policies and management systems in the food processing industry have led to several health problems, foodborne illnesses, and economic losses. This paper aims to highlight the causes and effects of food safety and quality issues in the food processing industry of Ethiopia and to discuss potential solutions to address them. One of the main causes of poor food quality and safety in Ethiopia's food processing industry is the lack of adequate regulations and enforcement mechanisms. The absence of comprehensive food safety and quality policies and guidelines has led to substandard practices in the food manufacturing process. Moreover, the lack of monitoring and enforcement of existing regulations has created a conducive environment for unscrupulous businesses to engage in unsafe practices that endanger the public's health. The effects of poor food quality and safety are significant, ranging from the loss of human lives to increased healthcare costs and the loss of consumer confidence in the food processing industry. Foodborne illnesses, such as diarrhea, typhoid fever, and cholera, are prevalent in Ethiopia, and poor food quality and safety practices contribute significantly to their prevalence. Additionally, food recalls due to contamination or mislabeling often result in significant economic losses for businesses in the food processing industry. To address these challenges, the Ethiopian government has begun to take steps to improve food quality and safety in the food processing industry. One of the most notable initiatives is the Ethiopian Food and Drug Administration (EFDA), which was established in 2010 to regulate and monitor the quality and safety of food and drug products in the country. The EFDA has implemented several measures to enhance food safety, such as conducting routine inspections, monitoring the importation of food products, and enforcing strict labeling requirements. Another potential solution to improve food quality and safety in Ethiopia's food processing industry is the implementation of food safety management systems (FSMS). An FSMS is a set of procedures and policies designed to identify, assess, and control food safety hazards throughout the food manufacturing process. Implementing an FSMS can help businesses in the food processing industry identify and address potential hazards before they cause harm to consumers. Additionally, the implementation of an FSMS can help businesses comply with existing food safety regulations and guidelines. In conclusion, improving food quality and safety policies and management systems in Ethiopia's food processing industry is critical to protecting public health and enhancing the country's economy. Addressing the root causes of poor food quality and safety and implementing effective solutions, such as the establishment of regulatory agencies and the implementation of food safety management systems, can help to improve the overall safety and quality of the country's food supply.

Keywords: food quality, food safety, policy, management system, food processing industry

Procedia PDF Downloads 53
30 Walking in a Web of Animality: An Animality Informed Ethnography for an Inclusive Coexistence With (Other) Animals

Authors: Francesco De Giorgio

Abstract:

As different groups of wild animals move from natural to more anthropic environments, the need to overcome the human-animal gap for ethical coexistence becomes a public concern. Ethnology and ethnography play fundamental roles in understanding the dynamics, perspective, and movement in our interaction with (other) animals. In this effort, the Animality perspective provides an essential ethical lens and quality guidance for ethnography. It deconstructs the human/animal distinction and creates an inclusive approach to society. It further transgresses the rigid lines of normalizing images in human cultures, in which individuals are easily marginalized as 'different'. Just as with labeling an animal with species-specific behavior, judging and categorizing humans according to culture-specific expectations is easier than recognizing subjectivity. A fusion of anti-speciesist ethnology and the ethnography of the natural and social sciences can redress the shortcomings of current practices of multispecies ethnography, which largely remain within an exclusively normalized human perspective. Empirically, the paper is based on current research on wild urban animals and human movement in Genoa (Italy), collecting data from systematic field observations of wild boars and from ethnographic data collection over a period of 18 months in which the humans involved are educated in a changing perspective of coexistence. An "animality-ethnography" starts from observing our own animal movement: how much and when we move, how we intersect our movement with that of other animals cohabiting with us, how we can observe and know others by moving, and our ways of walking. The research will show how (interspecies) socio-cognition implies motion and movement and animal journeys between nature and the city, but also within the cities themselves, where a web of motion becomes the basic cultural matrix for cohabiting spaces, places, and systems. Here, the term "cognition" does not refer just to the brain, mind, or intelligence. Indeed, cognition has a lot to do with movement, space, motion, proprioception, and the body: the ability to be informed not only through what you see but also through the information you get from being in tune with the motion of a shared dynamic, and to be an informative presence instead of an active stimulus or a passive expectation, the latter of which leaves too much space for projections and interpretations. The result of breaking down your own culturally prescribed way of moving in ethnographic research is breaking the barrier of limited options for observation and comprehension of the Other. Walking in the same way results in seeing others in the same way, studying them through only one channel of perception, and causes a one-dimensional life instead of a multidimensional web. Returning to an understanding of our Animality and our animal movement, being in tune to improve a socio-cognitive context of cohabitation with both domestic and wild animals, whether in a forest or in a metropolis, represents the challenge of the coming years and the evolution of the next centuries: to both preserve and share cultures beyond the boundaries of species.

Keywords: antispeciesist ethology, interspecies coexistence, socio-cognition, intersectionality, animality

Procedia PDF Downloads 43
29 The Effectiveness of Using Dramatic Conventions as the Teaching Strategy on Self-Efficacy for Children With Autism Spectrum Disorder

Authors: Tso Sheng-Yang, Wang Tien-Ni

Abstract:

Introduction and Purpose: Previous researchers have documented that children with ASD (Autism Spectrum Disorder) prefer to escape internal and external private events when they face difficult conditions that they cannot control or do not like. In particular, when children with ASD need to learn challenging tasks, such as the Chinese language, inappropriate behaviors become apparent. Recently, researchers have applied positive behavior support strategies with children with ASD to enhance their self-efficacy and therefore reduce these adverse behaviors. Thus, the purpose of this research was to design a series of lessons based on art therapy and to evaluate their effectiveness on the child's self-efficacy. Method: This research was a single-case design study that recruited a high school boy with ASD. The research can be separated into three conditions. First, in the baseline condition, the researcher collected the participant's self-efficacy scores every session, before the class started and after it ended. In the intervention condition, the researcher used dramatic conventions to teach the child Chinese language twice a week. When the data were stable across three data points, the study entered the maintenance condition. In the maintenance condition, the researcher only collected the self-efficacy score, without further intervention, five times a month to assess the maintenance of the effect. The timing and frequency of data collection across the three conditions were identical. Concerning art therapy, the common approach (e.g., music, drama, or painting) is to use the art medium as the independent variable. Owing to the visual cues of the art medium, children with ASD can more easily gain joint attention with teachers. In addition, children with ASD have difficulties in understanding abstract objectives. Thus, using dramatic conventions is helpful for children with ASD to construct the environment and understand the context of Classical Chinese. Through actual enactment, it can help children with ASD understand the context and construct prior knowledge. Results: Based on the 10-point Likert scale, we produced the following results. (a) In the baseline condition, the average self-efficacy score was 1.12 points, ranging from 1 to 2 points, and the level change was 0 points. (b) In the intervention condition, the average self-efficacy score was 7.66 points, ranging from 7 to 9 points, and the level change was 1 point. (c) In the maintenance condition, the average self-efficacy score was 6.66 points, ranging from 6 to 7 points, and the level change was 1 point. Concerning immediacy of change, the difference between the baseline and intervention conditions was 5 points, and no overlaps were found between these two conditions. Conclusion: According to the results, using dramatic conventions as a teaching strategy for children with ASD is effective. The self-efficacy score increased immediately when the dramatic conventions commenced. Thus, we suggest that teachers can use this approach, adjusted to the student's traits, to teach children with ASD difficult tasks.
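
For readers unfamiliar with single-case metrics, the sketch below shows how condition means, immediacy of change, level change, and overlap between conditions can be computed; the score series used here are hypothetical and only illustrate the calculation, they are not the study's data.

    # Hedged illustration with made-up 10-point Likert scores: computing the
    # kinds of single-case design metrics reported above.
    baseline     = [1, 1, 2, 1, 1, 1]        # hypothetical baseline sessions
    intervention = [7, 8, 7, 9, 8, 7]        # hypothetical intervention sessions

    mean_a = sum(baseline) / len(baseline)
    mean_b = sum(intervention) / len(intervention)
    immediacy = intervention[0] - baseline[-1]                 # change at the condition boundary
    level_change_b = abs(intervention[-1] - intervention[0])   # simplified within-condition level change
    overlap = sum(x <= max(baseline) for x in intervention)    # 0 means no overlapping data points

    print(f"baseline mean={mean_a:.2f}, intervention mean={mean_b:.2f}, "
          f"immediacy={immediacy}, level change={level_change_b}, overlap={overlap}")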

Keywords: dramatic conventions, autism spectrum disorder, self-efficacy, teaching strategy

Procedia PDF Downloads 62
28 Cockpit Integration and Piloted Assessment of an Upset Detection and Recovery System

Authors: Hafid Smaili, Wilfred Rouwhorst, Paul Frost

Abstract:

The trend in recent accident and incident cases worldwide shows that state-of-the-art automation and operations, for current and future demanding operational environments, do not provide the desired level of operational safety under crew peak workload conditions, specifically in complex situations such as loss-of-control in-flight (LOC-I). Today, the short-term focus is on preparing crews to recognise and handle LOC-I situations through upset recovery training. This paper describes the cockpit integration aspects and piloted assessment of both a manually assisted and an automatic upset detection and recovery system that has been developed and demonstrated within the European Advanced Cockpit for Reduction Of StreSs and workload (ACROSS) programme. The proposed system is a function that continuously monitors the aircraft, intervenes when it enters an upset, and either provides manually pilot-assisted guidance or takes over full control of the aircraft to recover from the upset. In order to mitigate the high physical and psychological impact of aircraft upset events, the system provides new cockpit functionalities to support the pilot in recovering from any upset, both manually assisted and automatically. A piloted simulator assessment was made in Oct-Nov 2015 with ten pilots in a representative large civil fly-by-wire transport aircraft, addressing the preference among the tested upset detection and recovery system configurations with respect to reducing pilot workload, increasing situation awareness, and ensuring safe interaction with the manually assisted or automated modes. The piloted simulator evaluation of the upset detection and recovery system showed that the functionalities of the system are able to support pilots during an upset. The experiment showed that pilots are willing to rely on the guidance provided by the system during an upset. In this regard, it is important for pilots to see and understand what the aircraft is doing and trying to do, especially in automatic modes. Comparing the manually assisted and the automatic recovery modes, the pilots' opinion was that an automatic recovery reduces the workload so that they can perform a proper screening of the primary flight display. The results further show that the manually assisted recoveries, with recovery guidance cues on the cockpit primary flight display, reduced workload for severe upsets compared to today's situation. The level of situation awareness was improved for automatic upset recoveries where the pilot could monitor what the system was trying to accomplish, compared to automatic recovery modes without any guidance. An improvement in situation awareness was also noticeable with the manually assisted upset recovery functionalities as compared to the current non-assisted recovery procedures. This study shows that automatic upset detection and recovery functionalities are likely to have a positive impact on operational safety by means of reduced workload, improved situation awareness, and crew stress reduction. It is thus believed that future developments for upset recovery guidance and loss-of-control prevention should focus on automatic recovery solutions.

Keywords: aircraft accidents, automatic flight control, loss-of-control, upset recovery

Procedia PDF Downloads 184
27 Combating the Practice of Open Defecation through Appropriate Communication Strategies in Rural India

Authors: Santiagomani Alex Parimalam

Abstract:

Lack of awareness of the consequences of open defecation, along with myths and misconceptions related to the use of toilets, has led to the continued practice of open defecation in India. The Government of India initiated a multi-pronged intensive communication campaign against the practice of open defecation in the last few years. The primary vision of this communication campaign was to create increased demand for toilets and to ensure that all have access to safe sanitation. The campaign strategy included the use of mass media, group and folk media, and interpersonal communication to expedite achieving its objectives. The campaign included the use of various media such as posters, wall writings, slides in cinema theatres, kiosks, pamphlets, newsletters, flip charts, and folk media to bring about behavioural change in the communities. The author carried out concurrent monitoring and process documentation of the campaigns initiated by the state of Tamilnadu, India, between 2013 and 2016, commissioned by UNICEF India. The study was carried out to assess the effectiveness of the communication campaigns in combating the practice of open defecation and promoting the construction of toilets in the state of Tamilnadu, India. Initial findings revealed gaps in understanding the audience and in the use of appropriate media. The first phase of the communication campaign, named Chi Chi Chollapa (a shaming concept), also revealed that the use of interpersonal communication and group and community media was the most effective strategy for reaching the rural masses. The failure of various other media, especially print media (posters, handbills, newsletters, kiosks), provides insight into where the government needs to invest its resources to bring about health-seeking behaviour in the community. The findings shared with the government enabled it to strengthen the campaign, resulting in an improved response. Taking cues from the study, the government understood the potency of women, school children, youth, and community leaders as effective carriers of the message. The government narrowed its focus and invested in voluntary workers in the community (village poverty reduction committee workers, VPRCs). The effectiveness of interpersonal communication and peer education by credible community workers threw light on the need to localise the content and the communicator. From this study, we could conclude that only community and group media are preferred by people in the rural community, and that children, youth, women, and credible local leaders proved to be ambassadors in behaviour change communication. This study discloses the lacunae in the communication campaign and points out that the state should have carried out a proper communication needs analysis and piloting. The study used a survey method with random sampling, employing both quantitative and qualitative tools such as interview schedules, in-depth interviews, and focus group discussions in rural areas of Tamilnadu in phases. The findings of the study would provide direction for future campaigns concerning health and rural development.

Keywords: appropriate, communication, combating, open defecation

Procedia PDF Downloads 102
26 A Foucauldian Analysis of Child Play: Case Study of a Preschool in the United States

Authors: Meng Wang

Abstract:

Historically, young members of society (children) have been oppressed by adults through direct violent acts. Direct violence was evident in rampant child labor and child maltreatment cases. Since the United Nations' acknowledgement of the rights of children, it has been widely believed that children are protected against direct physical violence. Nevertheless, this paper argues from Foucauldian and disability studies standpoints that, as in earlier times, children are oppressed objects in the context of child play, which is constructed by adults as a substitute for direct violence in regulating children. In particular, this paper suggests that, on the one hand, preschool play is a new way for adults to oppress preschoolers and regulate society as a whole; on the other hand, preschoolers are taught how to play as an acquired skill and master self-regulation through play. There is a line of contemporary research that centers on child play from a social constructivist perspective. Yet current teaching practices pertaining to child play, including guided play and free play, in fact serve the interests of adults and society at large. By acknowledging and deconstructing the prevalence of 'evidence-based best practice' in the early childhood education field within Western society, the reconstruction of child-adult power relations could be achieved and alternative truths could be found in early childhood education. To support the argument of this paper, an ongoing observational case study is being conducted in a preschool setting in the United States. The age range of the children is 2.5 to 4 years. Approximately 10 children (5 boys) are participating in this case study. Observation is conducted throughout the weekdays as children follow the classroom routine with a lead and an assistant teacher. Classroom teachers are interviewed about their classroom management strategies. Preliminary findings of this case study suggest that preschool teachers tend to utilize scenarios from preschoolers' dramatic play to impart core cultural values, pre-determined by adults, to young children. In addition, if young children fail to follow teachers' guidance on playing in the 'correct' way, they run the risk of being excluded from the play scenario by peers and adults. Furthermore, this study tends to indicate that, through child play, preschoolers are obliged to develop an internal violence system, that is, a self-regulation skill to regulate their own behavior; and if this internal system is judged to be unestablished on the basis of various assessments by adults, then young children potentially face the consequences of negative labeling and disabling intended by adults. In conclusion, this paper applies a Foucauldian analysis to the context of child play. At present, within the preschool, child play is not as free as it seems to be. Young children are expected to perform cultural tasks through their play activities designed by adults. Adults utilize child play as a technology of governmentality to further predict and regulate future society at large.

Keywords: child play, developmentally appropriate practice, DAP, poststructuralism, technologies of governmentality

Procedia PDF Downloads 130
25 Effects of Exposure to a Language on Perception of Non-Native Phonologically Contrastive Duration

Authors: Chuyu Huang, Itsuki Minemi, Kuanlin Chen, Yuki Hirose

Abstract:

It remains unclear how language speakers are able to perceive phonological contrasts that do not exist in their own language. This experiment uses the vowel-length distinction in Japanese, which is phonologically contrastive and co-occurs with tonal change in some cases. For speakers whose first language does not distinguish vowel length, such as Mandarin speakers, contrastive duration is usually misperceived. Two alternative hypotheses about how Mandarin speakers would perceive a phonological contrast that does not exist in their language make different predictions. The stress parameter model does not have a clear prediction about the impact of tonal type: Mandarin speakers will likely not be able to perceive vowel length as well as native Japanese speakers do, but their performance might not correlate with tonal type, because the prosody of their language is distinctive, which requires users to encode lexical prosody and notice subtle differences in word prosody. By contrast, cue-based phonetic models predict that Mandarin speakers may rely on pitch differences, a secondary cue, to perceive vowel length. Two groups of Mandarin speakers, naive non-Japanese speakers and beginner learners, were recruited to participate in an AX discrimination task involving two Japanese sound stimuli that contain a phonologically contrastive environment. Participants were asked to indicate whether the two stimuli containing a vowel-length contrast (e.g., maapero vs. mapero) sounded the same. The experiment was bifactorial. The first factor contrasted three syllabic positions (syllable position: initial/medial/final), as this is likely to affect perceptual difficulty, as seen in previous studies. The second factor contrasted two pitch types (accent type): one with an accentual change that can be distinguished using the lexical tones of Mandarin (the 'different' condition), and the other with no tonal distinction, differing only in vowel length (the 'same' condition). The overall results showed a significant main effect of accent type in a linear mixed-effects model (β = 1.48, SE = 0.35, p < 0.05), which implies that Mandarin speakers tend to recognize vowel-length differences more successfully when the long-vowel counterpart takes on a tone that exists in Mandarin. The interaction between accent type and syllabic position was also significant (β = 2.30, SE = 0.91, p < 0.05), showing that vowel lengths in the 'different' condition are more difficult to recognize in word-final position relative to word-initial position. The second statistical model, which compares naive speakers to beginners, was conducted with logistic regression to test the effect of participant group. A significant difference was found between the two groups (β = 1.06, 95% CI = [0.36, 2.03], p < 0.05). This study shows that: (1) Mandarin speakers are likely to use pitch cues to perceive vowel length in a non-native language, which is consistent with cue-based approaches; and (2) an exposure effect was observed: the beginner group achieved higher accuracy in long-vowel perception despite their short period of language learning experience.
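
For readers who want to see the shape of such an analysis, the sketch below fits a mixed-effects model of discrimination accuracy with accent type and syllable position as fixed effects and participant as a grouping factor, plus a logistic regression comparing the two participant groups; the column names and input file are assumptions for illustration, not the authors' code or data.

    # Hedged analysis sketch with an assumed data layout:
    # columns: correct (0/1), accent_type, position, group, participant
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("ax_discrimination.csv")  # hypothetical file name

    # Linear mixed-effects model: accent type x syllable position,
    # with a random intercept per participant
    mixed = smf.mixedlm("correct ~ accent_type * position",
                        data=df, groups=df["participant"]).fit()
    print(mixed.summary())

    # Logistic regression: naive speakers vs. beginner learners
    group_model = smf.logit("correct ~ group", data=df).fit()
    print(group_model.params)
    print(group_model.conf_int())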

Keywords: cue-based perception, exposure effect, prosodic perception, vowel duration

Procedia PDF Downloads 202
24 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to perform question-answering tasks that parse real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable 'memory' elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons. First, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory, as well as considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science, researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seems psychologically appropriate for reasoning systems, it may actually incur difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
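
To make the contrast concrete, the following minimal sketch (our own illustration, not code from any of the cited models) shows memory in the attractor-network style: patterns are not stored as retrievable records in an array but are encoded in the weights so that they become stable equilibrium points of the network dynamics.

    # Hedged sketch of a Hopfield-style attractor network: memories as stable
    # equilibria of the dynamics rather than stored, addressable elements.
    import numpy as np

    patterns = np.array([[ 1, -1,  1, -1,  1,  1, -1, -1],
                         [-1, -1,  1,  1, -1,  1,  1, -1]])  # desired memories (+/-1)
    n = patterns.shape[1]

    # Hebbian weights: each pattern shapes the energy landscape; no pattern
    # occupies an addressable storage location.
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)

    def energy(state):
        return -0.5 * state @ W @ state

    def recall(cue, steps=20):
        state = cue.copy()
        for _ in range(steps):          # synchronous updates, kept simple for illustration
            state = np.sign(W @ state)
            state[state == 0] = 1
        return state

    noisy = patterns[0].copy()
    noisy[:2] *= -1                     # corrupt the cue
    settled = recall(noisy)
    print(settled, energy(settled))     # the state settles into the nearest attractor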

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 236
23 Agri-Food Transparency and Traceability: A Marketing Tool to Satisfy Consumer Awareness Needs

Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli

Abstract:

The link between man and food plays a central role in the social and economic system, where cultural and multidisciplinary aspects intertwine: food is not only nutrition, but also communication, culture, politics, environment, science, ethics, fashion. This multi-dimensionality has many implications in the food economy. In recent years, the consumer has become more conscious about his food choices, leading to a consistent change in consumption models. This change concerns several aspects: awareness of food system issues, socially and environmentally conscious decision-making, and food choices based on characteristics other than nutritional ones, i.e., the origin of food, how it is produced, and who is producing it. In this frame, 'consumption choices' and the 'interests of the citizen' become intertwined. The figure of the 'Citizen Consumer' is born: an individual responsible and ethically motivated to change his lifestyle in pursuit of sustainable consumption. At the same time, branding, which was once a guarantee of product quality, is now questioned. In order to meet these needs, Agri-Food companies are developing specific product lines that follow two main philosophies: 'Back to basics' and 'Less is more'. However, the issue of ethical behavior does not seem to find an adequate answer in the market offer, most likely due to a lack of attention to the communication strategy used, which is very often based on market logic and rarely on an ethical one. The label in its classic concept of 'clean labeling' can no longer be the only instrument through which to convey product information, and its evolution towards a concept of 'clear label' is necessary to embrace ethical and transparent concepts and advance the process of democratization of the Food System. The implementation of a voluntary traceability path, relying on the technological models of the Internet of Things or Industry 4.0, would enable the Agri-Food Supply Chain to collect data that, if properly treated, could satisfy the information needs of consumers. A change of approach is therefore proposed: Agri-Food traceability is no longer intended as a tool used to respond to the legislator, but rather as a promotional tool useful to present the company in a transparent manner and thus reach the market segment of food citizens. The use of mobile technology can also facilitate this information transfer. However, in order to guarantee maximum efficiency, an appropriate communication model based on ethical communication principles should be used, one which aims to overcome the pipeline communication model and to offer the listener a new way of telling the story of the food product, based on real data collected through traceability processes. The Citizen Consumer is therefore placed at the center of the new communication model, in which he has the opportunity to choose what to know and how. The new label creates a virtual access point capable of presenting the product from different points of view, following personal interests and offering several content modalities to support different situations and usages.

Keywords: agri food traceability, agri-food transparency, clear label, food system, internet of things

Procedia PDF Downloads 131
22 Vehicle Timing Motion Detection Based on Multi-Dimensional Dynamic Detection Network

Authors: Jia Li, Xing Wei, Yuchen Hong, Yang Lu

Abstract:

Detecting vehicle behavior has always been a focus of intelligent transportation, but with the explosive growth in the number of vehicles and the complexity of the road environment, vehicle behavior videos captured by traditional surveillance are no longer sufficient for the study of vehicle behavior. The traditional method of manually labeling vehicle behavior is too time-consuming and labor-intensive, while existing object detection and tracking algorithms have poor practicability and a low behavioral location detection rate. This paper proposes a vehicle behavior detection algorithm based on a dual-stream convolution network and a multi-dimensional video dynamic detection network. In the videos, the straight-line behavior of the vehicle defaults to background behavior; changing lanes, turning, and turning around are set as target behaviors. The purpose of this model is to automatically mark the target behavior of the vehicle in untrimmed videos. First, the target behavior proposals in the long video are extracted through the dual-stream convolution network. The model uses the dual-stream convolutional network to generate a one-dimensional action score waveform and then extracts segments with scores above a given threshold M as preliminary vehicle behavior proposals. Second, the preliminary proposals are pruned and identified using the multi-dimensional video dynamic detection network. Drawing on hierarchical reinforcement learning, the multi-dimensional network includes a Timer module and a Spacer module, where the Timer module mines temporal information in the video stream and the Spacer module extracts spatial information in the video frame. The Timer and Spacer modules are implemented with Long Short-Term Memory (LSTM) and start from an all-zero hidden state. The Timer module uses the Transformer mechanism to extract timing information from the video stream and extracts features by linear mapping and other methods. Finally, the model fuses temporal and spatial information and obtains the location and category of the behavior through the softmax layer. This paper uses recall and precision to measure the performance of the model. Extensive experiments show that, on the dataset of this paper, the proposed model has obvious advantages over existing state-of-the-art behavior detection algorithms. When the Time Intersection over Union (TIoU) threshold is 0.5, the Average Precision (AP) reaches 36.3% (the AP of the baselines is 21.5%). In summary, this paper proposes a vehicle behavior detection model based on a multi-dimensional dynamic detection network. It introduces spatial and temporal information to extract vehicle behaviors from long videos. Experiments show that the proposed algorithm is advanced and accurate in vehicle timing behavior detection. In the future, the focus will be on simultaneously detecting the timing behavior of multiple vehicles in complex traffic scenes (such as a busy street) while ensuring accuracy.
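
The proposal-generation step described above (thresholding a one-dimensional action-score waveform at M and keeping the segments that exceed it) can be illustrated with a short sketch. The code below is our own simplified illustration, not the authors' implementation; the frame rate, threshold, and minimum-length values are arbitrary assumptions.

```python
# Illustrative sketch (assumptions, not the authors' code): turning a
# one-dimensional per-frame action-score waveform into preliminary proposals
# by keeping contiguous runs of frames whose score exceeds a threshold M.
import numpy as np

def extract_proposals(scores, M=0.5, fps=25.0, min_len=5):
    """Return (start_s, end_s) segments where score > M for >= min_len frames."""
    above = scores > M
    proposals, start = [], None
    for t, flag in enumerate(above):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            if t - start >= min_len:
                proposals.append((start / fps, t / fps))
            start = None
    if start is not None and len(scores) - start >= min_len:
        proposals.append((start / fps, len(scores) / fps))
    return proposals

# Toy waveform: background activity with one high-scoring lane-change segment.
scores = np.concatenate([np.full(100, 0.1), np.full(40, 0.9), np.full(60, 0.1)])
print(extract_proposals(scores, M=0.5))   # -> [(4.0, 5.6)]
```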

Keywords: vehicle behavior detection, convolutional neural network, long short-term memory, deep learning

Procedia PDF Downloads 99
21 Conceptualizing Health-Seeking Behavior among Adolescents and Youth with Substance Use Disorder in Urban KwaZulu-Natal: A Candidacy Framework Analysis

Authors: Siphesihle Hlongwane

Abstract:

Background: Globally, alcohol consumption, smoking, and the use of illicit drugs kill more than 11.8 million people each year. In sub-Saharan Africa, substance abuse is responsible for more than 6.4% of all deaths recorded and about 4.7% of all Disability Adjusted Life Years (DALYs), with numbers still expected to grow if no drastic measures are taken to curb and address drug use. In a setting where substance use is rife, understanding the contextual factors that influence an individual's perceived eligibility to seek rehabilitation is paramount. Using the candidacy framework, we unpack how situational factors influence an individual's perceived eligibility for healthcare uptake among adolescents and youth with substance use disorder (SUD). Methods: The candidacy framework is concerned with how people consider their eligibility for accessing a health service. The study collected and analyzed primary qualitative data to answer the research question. Data were collected between January and July 2022 from participants aged between 18 and 35 for drug users and 18 to 60 for family members. Participants included 20 former and current drug users and 20 family members who experience the effects of addiction. A pre-drafted semi-structured interview guide was administered to a convenience-sampled population, supplemented with a referral sampling method. Data were thematically analyzed, using NVivo 12 Pro software to manage the data. Findings: Our findings show that people with substance use disorders are aware of their drug use habits and acknowledge their candidacy for health services. Candidacy for health services is also acknowledged by those around them, such as family members and peers; information on navigating health services for drug users is shared by those who have attended health services and those affected by drug use, and this includes health service research by family members to identify accessible health services. While participants reported willingness to quit drug use if assistance is provided, the permeability of health care services is hindered both by individual determination to quit drug use after long-time use and by the availability of health services for drug users, such as rehabilitation centers. Our findings also show that drug users are conscious and can articulate their ailments; however, the hunt for the next dose of drugs and long waiting queues for health services overshadow their claim to health services. Participants reported a mixture of prescribed treatments, some more gruesome than others, which served as both a facilitator of and a barrier to health service uptake. Despite some unorthodox forms of treatment prescribed in health care, the majority of those who enter treatment complete the process, although some are met with setbacks and sometimes relapse after treatment has finished. Conclusion: Drug users are able to ascertain their candidacy for health services; however, individual and environmental characteristics relating to drug use hinder the use of health services. Drug use interventions need to encourage health service uptake as a way to improve candidacy for health care use.

Keywords: substance use disorder, rehabilitation, drug use, relapse, South Africa, candidacy framework

Procedia PDF Downloads 67
20 Investigation of Linezolid, 127I-Linezolid and 131I-Linezolid Effects on Slime Layer of Staphylococcus with Nuclear Methods

Authors: Hasan Demiroğlu, Uğur Avcıbaşı, Serhan Sakarya, Perihan Ünak

Abstract:

Implanted devices are increasingly used in modern medicine to relieve pain or improve a compromised function. Implant-associated infections represent an emerging complication, caused by organisms which adhere to the implant surface and grow embedded in a protective extracellular polymeric matrix, known as a biofilm. In addition, the microorganisms within biofilms enter a stationary growth phase and become phenotypically resistant to most antimicrobials, frequently causing treatment failure. In such cases, surgical removal of the implant is often required, causing high morbidity and substantial healthcare costs. Staphylococcus aureus is the most common pathogen causing implant-associated infections. Successful treatment of these infections includes early surgical intervention and antimicrobial treatment with bactericidal drugs that also act on the surface-adhering microorganisms. Linezolid is a promising antimicrobial with anti-staphylococcal activity, used for the treatment of MRSA infections. Linezolid is a synthetic antimicrobial and a member of the oxazolidinone group, with a dose-dependent bacteriostatic or bactericidal antimicrobial mechanism against gram-positive bacteria. Intensive use of antibiotics has led to the emergence of multi-resistant organisms over the years, and major problems have arisen in the treatment of the infections they cause. While new drugs have been developed worldwide, infections caused by microorganisms that have gained resistance against these drugs continue to be reported, and the scale of the problem increases gradually. Scientific studies on the production of bacterial biofilm have increased in recent years. For this purpose, we investigated the activity of Lin, Lin radiolabeled with 131I (131I-Lin) and cold iodinated Lin (127I-Lin) against clinical strains of Staphylococcus aureus DSM 4910 in biofilm. In the first stage, radio- and cold-labeling studies were performed. Quality-control studies of Lin and iodo (radio and cold) Lin derivatives were carried out using TLC (Thin Layer Radiochromatography) and HPLC (High Pressure Liquid Chromatography). In this context, the binding yield was found to be about 86±2% for 131I-Lin. The minimal inhibitory concentration (MIC) of Lin, 127I-Lin and 131I-Lin for the Staphylococcus aureus DSM 4910 strain was found to be 1 µg/mL. In time-kill studies, Lin, 127I-Lin and 131I-Lin produced ≥ 3 log10 decreases in viable counts (cfu/mL) within 6 h at 2- and 4-fold MIC, respectively. No viable bacteria were observed within 24 h of the experiments. Biofilm eradication of S. aureus started at 64 µg/mL of Lin, 127I-Lin and 131I-Lin, with OD630 values of 0.507±0.092, 0.589±0.058 and 0.266±0.047, respectively. The media control of biofilm-producing Staphylococcus was 1.675±0.01 (OD630). 131I and 127I alone did not have any effect on biofilms. Lin and 127I-Lin were found to be less effective than 131I-Lin at killing cells in biofilm and at biofilm eradication. Our results demonstrate that 131I-Lin has potent anti-biofilm activity against S. aureus compared to Lin, 127I-Lin and the media control. This suggests that 131I may have a harmful effect on biofilm structure.

Keywords: iodine-131, linezolid, radiolabeling, slime layer, Staphylococcus

Procedia PDF Downloads 537
19 A Qualitative Study of Newspaper Discourse and Online Discussions of Climate Change in China

Authors: Juan Du

Abstract:

Climate change is one of the most crucial issues of this era, with contentious debates on it among scholars, but studies on climate change discourse in China are sparse. Including China in the study of climate change is essential for a sociological understanding of climate change. China -- as a developing country and an essential player in tackling climate change -- offers an ideal case for scholars studying climate change to move beyond developed countries and enrich their understandings of climate change by including diverse social settings. This project contrasts the macro- and micro-level understandings of climate change in China, which helps scholars move beyond a focus on climate skepticism and denialism and enriches the sociology of climate change knowledge. The macro-level understanding of climate change is obtained by analyzing over 4,000 newspaper articles from various official outlets in China. State-controlled newspapers play an essential role in transmitting essential and high-quality information and promoting broader public understanding of climate change and its anthropogenic nature. Thus, newspaper articles can be seen as tools employed by governments to mobilize the public to support a strategic shift from economic growth to an ecological civilization. However, media is just one of the significant factors influencing an individual's climate change concern. Extreme weather events, access to accurate scientific information, elite cues, and movement/countermovement advocacy also influence an individual's perceptions of climate change. Hence, there are differences in the ways that newspaper articles and the public frame the issues. The online forum is an informative channel for scholars to understand public opinion. The micro-level data come from Zhihu, China's equivalent of Quora, where users can propose, answer, and comment on questions. This project analyzes the questions related to climate change which have over 20 answers. By open-coding both the macro- and micro-level data, this project will depict the differences between ideology as presented in government-controlled newspapers and how people talk and act with respect to climate change in cyberspace, which may provide an idea about any existing disconnect between public behavior and the public's willingness to change daily activities to facilitate a greener society. The contemporary Yellow Vest protests in France illustrate that a large gap between governmental climate change mitigation policies and the public's understanding may lead to social movement activity and social instability. Effective environmental policy is impossible without the public's support. Finding existing gaps in understanding may help policy-makers develop effective ways of framing climate change and obtain more supporters of climate change related policies. Overall, this qualitative project provides answers to the following research questions: 1) How do different state-controlled newspapers transmit their ideology on climate change to the public, and in what ways? 2) How do individuals frame climate change online? 3) What are the differences between the newspapers' framing and individuals' framing?

Keywords: climate change, China, framing theory, media, public’s climate change concern

Procedia PDF Downloads 110
18 Mathematical Modeling of Avascular Tumor Growth and Invasion

Authors: Meitham Amereh, Mohsen Akbari, Ben Nadler

Abstract:

Cancer has been recognized as one of the most challenging problems in biology and medicine. Aggressive tumors are a lethal type of cancer characterized by high genomic instability, rapid progression, invasiveness, and therapeutic resistance. Their behavior involves complicated molecular biology and consequential dynamics. Although tremendous effort has been devoted to developing therapeutic approaches, there is still a huge need for new insights into the poorly understood aspects of tumors. As one of the key requirements for better understanding the complex behavior of tumors, mathematical modeling, and continuum physics in particular, plays a pivotal role. Mathematical modeling can provide quantitative predictions of biological processes and help interpret complicated physiological interactions in the tumor microenvironment. The pathophysiology of aggressive tumors is strongly affected by extracellular cues such as the stresses produced by mechanical forces between the tumor and the host tissue. During tumor progression, the growing mass displaces the surrounding extracellular matrix (ECM), and, owing to the stiffness of the tissue, stress accumulates inside the tumor. The produced stress can influence the tumor by breaking adherens junctions. During this process, the tumor stops rapid proliferation and begins to remodel its shape to preserve the homeostatic equilibrium state. To achieve this, the tumor in turn upregulates epithelial-to-mesenchymal transition-inducing transcription factors (EMT-TFs). These EMT-TFs are involved in various signaling cascades, which are often associated with tumor invasiveness and malignancy. In this work, we modeled the tumor as a growing hyperplastic mass and investigated the effects of mechanical stress from the surrounding ECM on tumor invasion. The invasion is modeled as a volume-preserving inelastic evolution. In this framework, principal balance laws are considered for tumor mass, linear momentum, and diffusion of nutrients. Mechanical interactions between the tumor and the ECM are modeled using the Ciarlet constitutive strain energy function, and a dissipation inequality is used to model the volumetric growth rate. System parameters, such as the rate of nutrient uptake and cell proliferation, were obtained experimentally. To validate the model, human glioblastoma multiforme (hGBM) tumor spheroids were embedded in a Matrigel/Alginate composite hydrogel and injected into a microfluidic chip to mimic the tumor's natural microenvironment. The invasion structure was analyzed by imaging the spheroid over time, and the expression of transcription factors involved in invasion was measured by immunostaining the tumor. The volumetric growth, stress distribution, and inelastic evolution of the tumors were predicted by the model. Results showed that the level of invasion is in direct correlation with the level of predicted stress within the tumor. Moreover, the invasion length measured by fluorescence imaging was shown to be related to the inelastic evolution of the tumors obtained by the model.
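
For readers unfamiliar with this class of models, a generic form of the balance laws mentioned above (mass with a growth source, quasi-static linear momentum, and nutrient diffusion with uptake) is sketched below. The symbols and source terms are illustrative assumptions, not the authors' exact formulation.

```latex
% Illustrative (generic) form of the balance laws mentioned above; the symbols
% and source terms are assumptions, not the authors' exact formulation.
\begin{align}
  % Tumor mass balance with a nutrient-dependent growth source \Gamma_g
  \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \, \mathbf{v}) &= \Gamma_g(\rho, c), \\
  % Quasi-static linear momentum balance; \boldsymbol{\sigma} follows from the strain energy function
  \nabla \cdot \boldsymbol{\sigma} + \rho \, \mathbf{b} &= \mathbf{0}, \\
  % Nutrient diffusion with cellular uptake
  \frac{\partial c}{\partial t} &= D \, \nabla^{2} c - \lambda \, \rho \, c .
\end{align}
```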

Keywords: cancer, invasion, mathematical modeling, microfluidic chip, tumor spheroids

Procedia PDF Downloads 89
17 Defective Autophagy Disturbs Neural Migration and Network Activity in hiPSC-Derived Cockayne Syndrome B Disease Models

Authors: Julia Kapr, Andrea Rossi, Haribaskar Ramachandran, Marius Pollet, Ilka Egger, Selina Dangeleit, Katharina Koch, Jean Krutmann, Ellen Fritsche

Abstract:

It is widely acknowledged that animal models do not always represent human disease. Human brain development in particular is difficult to model in animals due to a variety of structural and functional species-specificities. This causes significant discrepancies between predicted and apparent drug efficacies in clinical trials and their subsequent failure. Emerging alternatives based on 3D in vitro approaches, such as human brain spheres or organoids, may in the future reduce and ultimately replace animal models. Here, we present a human induced pluripotent stem cell (hiPSC)-based 3D neural in vitro disease model for Cockayne Syndrome B (CSB). CSB is a rare hereditary disease accompanied by severe neurologic defects, such as microcephaly, ataxia and intellectual disability, with currently no treatment options. Therefore, the aim of this study is to investigate the molecular and cellular defects found in neural hiPSC-derived CSB models. Understanding the underlying pathology of CSB enables the development of treatment options. The two CSB models used in this study comprise a patient-derived hiPSC line with its isogenic control, as well as a CSB-deficient cell line based on a healthy hiPSC line (IMR90-4) background, thereby excluding genetic background-related effects. Neurally induced and differentiated brain sphere cultures were characterized via RNA sequencing, western blot (WB), immunocytochemistry (ICC) and multielectrode arrays (MEAs). CSB deficiency leads to altered gene expression of markers for autophagy, focal adhesion and neural network formation. Cell migration was significantly reduced and electrical activity was significantly increased in the disease cell lines. These data hint at the cellular pathologies possibly underlying CSB. By induction of autophagy, the migration phenotype could be partially rescued, suggesting a crucial role of disturbed autophagy in the defective neural migration of the disease lines. Altered autophagy may also lead to inefficient mitophagy. Accordingly, the disease cell lines were shown to have a lower mitochondrial base activity and a higher susceptibility to mitochondrial stress induced by rotenone. Since mitochondria play an important role in neurotransmitter cycling, we suggest that defective mitochondria may lead to the altered electrical activity in the disease cell lines. Failure to clear the defective mitochondria by mitophagy, and thus missing initiation cues for new mitochondrial production, could potentiate this problem. With our data, we aim at establishing a disease adverse outcome pathway (AOP), thereby adding to the in-depth understanding of this multi-faceted disorder and subsequently contributing to alternative drug development.

Keywords: autophagy, disease modeling, in vitro, pluripotent stem cells

Procedia PDF Downloads 98
16 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction

Authors: Yan Zhang

Abstract:

Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated from field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study from four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. When evolving to the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production/maintenance efficiency via any maintenance-related tasks. It covers a variety of topics, including but not limited to: failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine/device to transmit data all the way through to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices. For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network. The former may transfer data into the Cloud via WiFi directly. The latter usually uses radio communication inherent to the network, and the data is stored in a staging data node before it can be transmitted into the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults/failures. By showing a step-by-step process of data labeling, feature engineering, model construction and evaluation, we share the following experiences: (1) what specific data quality issues have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contains inter-dependent records. Fourth, we review the tools available to build a data pipeline that digests the data and produces insights. We show the tools we use, including data ingestion, streaming data processing, machine learning model training, and the tool that coordinates/schedules different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study. (1) It summarizes the landscape and challenges of predictive maintenance applications. (2) It takes an example in aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
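
The labeling and modeling step described in the third contribution can be sketched as follows, using a generic run-to-failure table of the kind the study mentions. The file name, column names, label horizon, and choice of classifier are assumptions for illustration; the grouped train/test split reflects the point above about inter-dependent records.

```python
# Hedged sketch of the data-labeling and model-training step described above,
# on a run-to-failure style table (file and column names are assumptions).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import classification_report

HORIZON = 30  # label a cycle "1" if the engine fails within the next 30 cycles

df = pd.read_csv("engine_run_to_failure.csv")   # assumed columns: unit, cycle, s1..sN
max_cycle = df.groupby("unit")["cycle"].transform("max")
df["rul"] = max_cycle - df["cycle"]             # remaining useful life per row
df["label"] = (df["rul"] <= HORIZON).astype(int)

features = [c for c in df.columns if c.startswith("s")]

# Records within one engine are inter-dependent, so split train/test by unit
# rather than by row to avoid leakage between correlated records.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=42)
train_idx, test_idx = next(splitter.split(df, groups=df["unit"]))
train, test = df.iloc[train_idx], df.iloc[test_idx]

model = GradientBoostingClassifier().fit(train[features], train["label"])
print(classification_report(test["label"], model.predict(test[features])))
```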

Keywords: Internet of Things, machine learning, predictive maintenance, streaming data

Procedia PDF Downloads 360
15 Construction and Cross-Linking of Polyelectrolyte Multilayers Based on Polysaccharides as Antifouling Coatings

Authors: Wenfa Yu, Thuva Gnanasampanthan, John Finlay, Jessica Clarke, Charlotte Anderson, Tony Clare, Axel Rosenhahn

Abstract:

Marine biofouling is a worldwide problem with vast economic and ecological costs. Historically it was combated with toxic coatings such as tributyltin. As those coatings are now banned, finding environmentally friendly antifouling solutions has become an urgent topic. In this study, antifouling coatings consisting of the naturally occurring polysaccharides hyaluronic acid (HA), alginic acid (AA) and chitosan (Ch), together with the polyelectrolyte polyethylenimine (PEI), are constructed into polyelectrolyte multilayers (PEMs) by a Layer-by-Layer (LbL) method. LbL PEM construction is a straightforward way to assemble biomacromolecular coatings on surfaces. Advantages of PEMs include ease of handling, highly diverse PEM composition, precise control over the thickness, and so on. PEMs have been widely employed in medical applications, and there are numerous studies regarding their protein adsorption, elasticity and cell adhesive properties. By adjusting the coating composition, termination layer charge, coating morphology and cross-linking method, it is possible to prepare PEM coatings with low marine biofouling. In this study, using spin coating technology, PEM construction yielded smooth multilayers with roughness as low as 2 nm rms and highly reproducible thicknesses of around 50 nm. To obtain stability in seawater, the multilayers were covalently cross-linked either thermally or chemically. The cross-linking method affected the surface energy, which was reflected in the water contact angle: thermal cross-linking led to hydrophobic surfaces and chemical cross-linking generated hydrophilic surfaces. The coatings were then evaluated regarding their protein resistance and biological species resistance. While the hydrophobic, thermally cross-linked PEM had low resistance towards proteins, the resistance of the chemically cross-linked PEM strongly depended on the PEM termination layer and the charge of the protein: opposite charge caused high adsorption and the same charge low adsorption, indicating that electrostatic interaction plays a crucial role in the protein adsorption process. Ulva linza was chosen as the biological species for antifouling performance evaluation. Despite its poor resistance towards protein adsorption, the thermally cross-linked PEM showed good resistance against Ulva spore settlement, whereas the chemically cross-linked multilayers showed poor resistance regardless of the termination layer. Marine species adhesion is a complex process: although it involves proteins as bioadhesives, protein resistance on its own is not a full indicator of antifouling performance. The species pre-select the surface, responding to cues such as surface energy, chemistry, or charge, making it difficult for any single factor to determine antifouling performance. Preparing a PEM coating is a comprehensive task involving the choice of polyelectrolyte combination, termination layer and cross-linking method. These decisions affect PEM properties such as surface energy and charge, which is crucial, since biofouling is a process that responds to surface properties in a highly sensitive and dynamic way.

Keywords: hyaluronic acid, polyelectrolyte multilayers, protein resistance, Ulva linza zoospores

Procedia PDF Downloads 139
14 Investigating Role of Autophagy in Cispaltin Induced Stemness and Chemoresistance in Oral Squamous Cell Carcinoma

Authors: Prajna Paramita Naik, Sujit Kumar Bhutia

Abstract:

Background: Despite the development of multimodal treatment strategies, oral squamous cell carcinoma (OSCC) is often associated with a high rate of recurrence, metastasis and chemo- and radio-resistance. The present study inspected the relevance of CD44, ABCB1 and ADAM17 expression as a putative stem cell compartment in oral squamous cell carcinoma (OSCC) and deciphered the role of autophagy in regulating the expression of the aforementioned proteins, stemness and chemoresistance. Methods: A retrospective analysis of CD44, ABCB1 and ADAM17 expression with respect to the various clinicopathological factors of sixty OSCC patients was performed via immunohistochemistry. The correlation among CD44, ABCB1 and ADAM17 expression was established. Sphere formation assays, flow cytometry and fluorescence microscopy were conducted to elucidate the stemness and chemoresistant nature of established cisplatin-resistant oral cancer cells (FaDu). The pattern of expression of CD44, ABCB1 and ADAM17 in parental (FaDu-P) and resistant FaDu cells (FaDu-CDDP-R) was investigated through fluorescence microscopy. Western blot analysis of autophagy marker proteins was performed to compare the status of autophagy in parental and resistant FaDu cells. To investigate the role of autophagy in chemoresistance and stemness, sphere formation assays, immunofluorescence and Western blot analysis were performed post transfection with siATG14, and the expression levels of autophagic proteins, mitochondrial proteins and stemness-associated proteins were analyzed. The statistical analysis was performed with GraphPad Prism 4.0 software. p-values were defined as follows: not significant (n.s.): p > 0.05; *: p ≤ 0.05; **: p ≤ 0.01; ***: p ≤ 0.001; ****: p ≤ 0.0001 were considered statistically significant. Results: In OSCC, high CD44, ABCB1 and ADAM17 expression was significantly correlated with higher tumor grades and poor differentiation. However, the expression of these proteins was not related to the age and sex of OSCC patients. Moreover, the expression of CD44, ABCB1 and ADAM17 were positively correlated with each other. In vitro and OSCC tissue double-labeling experiment data showed that CD44+ cells were highly associated with ABCB1 and ADAM17 expression. Further, FaDu-CDDP-R cells showed higher sphere-forming capacity along with an increased fraction of the CD44+ population and β-catenin expression. FaDu-CDDP-R cells also showed accelerated expression of CD44, ABCB1 and ADAM17. A comparatively higher autophagic flux was observed in FaDu-CDDP-R cells compared with FaDu-P cells. The expression of mitochondrial proteins was noticeably reduced in resistant cells compared to parental cells, indicating the occurrence of autophagy-mediated mitochondrial degradation in oral cancer. Moreover, inhibition of autophagy was coupled with decreased formation of orospheres, suggesting autophagy-mediated stemness in oral cancer. Blockade of autophagy was also found to induce the restoration of mitochondrial proteins in FaDu-CDDP-R cells, indicating the involvement of mitophagy in chemoresistance. Furthermore, reduced expression of CD44, ABCB1 and ADAM17 was also observed in ATG14-deficient FaDu-P and FaDu-CDDP-R cells. Conclusion: CD44+/ABCB1+/ADAM17+ expression in OSCC might be associated with chemoresistance and a putative CSC compartment. Further, the present study highlights the contribution of mitophagy to chemoresistance and confirms the potential involvement of autophagic regulation in the acquisition of stem-like characteristics in OSCC.

Keywords: ABCB1, ADAM17, autophagy, CD44, chemoresistance, mitophagy, OSCC, stemness

Procedia PDF Downloads 175
13 Treatment of Neuronal Defects by Bone Marrow Stem Cells Differentiation to Neuronal Cells Cultured on Gelatin-PLGA Scaffolds Coated with Nano-Particles

Authors: Alireza Shams, Ali Zamanian, Atefehe Shamosi, Farnaz Ghorbani

Abstract:

Introduction: Although the application of new strategies remains a remarkable challenge for the treatment of disabilities due to neuronal defects, progress in nanomedicine and tissue engineering suggests new medical methods. One promising strategy for the reconstruction and regeneration of nervous tissue is the replacement of lost or damaged cells using specific scaffolds after compressive, ischemic and traumatic injuries of the central nervous system. Furthermore, the ultrastructure, composition, and arrangement of tissue scaffolds affect cell grafts. We followed the implantation and differentiation of mesenchymal stem cells into neural cells on gelatin/poly(lactic-co-glycolic acid) (PLGA) scaffolds coated with iron nanoparticles. The aim of this study was to evaluate the capability of stem cells to differentiate into motor neuron-like cells under topographical cues and morphogenic factors. Methods and Materials: Bone marrow mesenchymal stem cells (BMMSCs) were obtained by primary cell culture of adult rat bone marrow harvested from the femur by a flushing method. BMMSCs were incubated with DMEM/F12 (Gibco), 15% FBS and 100 U/ml pen/strep as media. BMMSCs were then seeded on Gel/PLGA scaffolds and tissue culture polystyrene (TCP) embedded and incorporated with Fe nanoparticles (FeNPs) (Fe3O4; Mw = 270.30 g/mol). For neuronal differentiation, 2×10^5 BMMSCs were seeded on Gel/PLGA/FeNPs scaffolds and cultured for 7 days, and 0.5 µmol retinoic acid, 100 µmol ascorbic acid, 10 ng/ml basic fibroblast growth factor (Sigma, USA), 250 μM isobutylmethylxanthine, 100 μM 2-mercaptoethanol, and 0.2% B27 (Invitrogen, USA) were added to the media. Proliferation of BMMSCs was assessed using the MTT assay for cell survival. The morphology of BMMSCs and scaffolds was investigated by scanning electron microscopy analysis. Expression of neuron-specific markers was studied by immunohistochemistry. Data were analyzed by analysis of variance, and statistical significance was determined by Tukey's test. Results: Our results revealed that differentiation and survival of BMMSCs into motor neuron-like cells on Gel/PLGA/FeNPs, as a biocompatible and biodegradable scaffold, were better than in those cultured on Gel/PLGA in the absence of FeNPs and on TCP scaffolds. FeNPs increased the physical strength of the scaffolds but decreased their absorption capacity. Well-defined oriented pores in the scaffolds due to FeNPs may activate differentiation and synchronize cells as a mechanoreceptor. The induction effects of the magnetic FeNPs, through the one-way flow of channels in the scaffolds, help guide the cells and can facilitate the direction of their growth processes. Discussion: The progression of the biological properties of BMMSCs and the effects of FeNPs spreading under a magnetic field were evaluated in this investigation. The in vitro study showed that the Gel/PLGA/FeNPs scaffold provided a suitable structure for motor neuron-like cell differentiation. This could be a promising candidate for enhancing repair and regeneration in neural defects. Dynamic and static magnetic fields for inducing and directing cells may provide better results in further experimental studies.

Keywords: differentiation, mesenchymal stem cells, nano particles, neuronal defects, Scaffolds

Procedia PDF Downloads 144
12 A Single Cell Omics Experiments as Tool for Benchmarking Bioinformatics Oncology Data Analysis Tools

Authors: Maddalena Arigoni, Maria Luisa Ratto, Raffaele A. Calogero, Luca Alessandri

Abstract:

The presence of tumor heterogeneity, where distinct cancer cells exhibit diverse morphological and phenotypic profiles, including gene expression, metabolism, and proliferation, poses challenges for molecular prognostic markers and patient classification for targeted therapies. Understanding the causes and progression of cancer requires research efforts aimed at characterizing heterogeneity, which can be facilitated by evolving single-cell sequencing technologies. However, analyzing single-cell data necessitates computational methods that often lack objective validation. Therefore, the establishment of benchmarking datasets is necessary to provide a controlled environment for validating bioinformatics tools in the field of single-cell oncology. Benchmarking bioinformatics tools for single-cell experiments can be costly. Therefore, datasets used for benchmarking are typically sourced from publicly available experiments, which often lack a comprehensive cell annotation. This limitation can affect the accuracy and effectiveness of such experiments as benchmarking tools. To address this issue, we introduce omics benchmark experiments designed to evaluate bioinformatics tools that depict heterogeneity in single-cell tumor experiments. We conducted single-cell RNA sequencing on six lung cancer tumor cell lines that display resistant clones upon treatment of EGFR mutated tumors and are characterized by driver genes, namely ROS1, ALK, HER2, MET, KRAS, and BRAF. These driver genes are associated with downstream networks controlled by EGFR mutations, such as JAK-STAT, PI3K-AKT-mTOR, and MEK-ERK. The experiment also featured an EGFR-mutated cell line. Using the 10XGenomics platform with cellplex technology, we analyzed the seven cell lines together with a pseudo-immunological microenvironment consisting of PBMC cells labeled with the Biolegend TotalSeq™-B Human Universal Cocktail (CITEseq). This technology allowed for independent labeling of each cell line and single-cell analysis of the pooled seven cell lines and the pseudo-microenvironment. The data generated from the aforementioned experiments are available as part of an online tool, which allows users to define cell heterogeneity and generates count tables as an output. The tool provides the cell line derivation for each cell and cell annotations for the pseudo-microenvironment based on CITEseq data annotated by an experienced immunologist. Additionally, we created a range of pseudo-tumor tissues using different ratios of the aforementioned cells embedded in Matrigel. These tissues were analyzed using the 10XGenomics (FFPE samples) and Curio Bioscience (fresh frozen samples) platforms for spatial transcriptomics, further expanding the scope of our benchmark experiments. The benchmark experiments we conducted provide a unique opportunity to evaluate the performance of bioinformatics tools for detecting and characterizing tumor heterogeneity at the single-cell level. Overall, our experiments provide a controlled and standardized environment for assessing the accuracy and robustness of bioinformatics tools for studying tumor heterogeneity at the single-cell level, which can ultimately lead to more precise and effective cancer diagnosis and treatment.
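
As an illustration of how a user of such a benchmark might recover the cell-line label for each cell from multiplexing-tag counts, the sketch below assigns every barcode to its best-matching tag. The file name, tag layout, normalization choice, and doublet threshold are assumptions for illustration and are not the benchmark tool's actual interface.

```python
# Illustrative sketch (assumed file and tag names, not the benchmark tool's
# API): assigning each cell barcode to a cell line from multiplexing-tag counts.
import numpy as np
import pandas as pd

# Hypothetical matrix: rows = cell barcodes, columns = one tag per cell line.
tags = pd.read_csv("cellplex_tag_counts.csv", index_col=0)

# Simple centred log-ratio-style normalization per cell, a common choice for
# hashing/multiplexing count data.
logc = np.log1p(tags)
clr = logc.sub(logc.mean(axis=1), axis=0)

calls = pd.DataFrame(index=tags.index)
calls["assignment"] = clr.idxmax(axis=1)          # best-matching cell line tag
top2 = np.sort(clr.values, axis=1)[:, -2:]        # two highest normalized tags
calls["margin"] = top2[:, 1] - top2[:, 0]
calls.loc[calls["margin"] < 1.0, "assignment"] = "doublet_or_ambiguous"

print(calls["assignment"].value_counts())
```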

Keywords: single cell omics, benchmark, spatial transcriptomics, CITEseq

Procedia PDF Downloads 77
11 Machine Learning Approach for Automating Electronic Component Error Classification and Detection

Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski

Abstract:

Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight and understanding of their discipline. Due to rapid technological advancements and the current COVID-19 outbreak, traditional labs have been transforming into virtual learning environments. Aim: To better understand the limitations of the physical laboratory, this research study aims to use a Machine Learning (ML) algorithm that interfaces with the Augmented Reality HoloLens and processes the captured images to classify and detect electronic components. The automated electronic component error classification and detection system automatically detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without any supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices carried out virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using HoloLens 2 and stored in a database. The collected image dataset will then be used for training a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis for component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard. The images are forwarded to the server for detection in the background. A hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) algorithm will be used to train on the dataset for object recognition and classification. The convolution layers extract image features, which are then classified using a Support Vector Machine (SVM). By adequately labeling and classifying the training data, the model will predict, categorize, and assess whether students place components correctly. As a result, the data acquired through the HoloLens include images of students assembling electronic components. The system constantly checks whether students appropriately position components on the breadboard and connect the components so that the circuit functions. When students misplace any components, the HoloLens predicts the error before the user places the components in the incorrect position and prompts students to correct their mistakes. This hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time. The study determines the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks.
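
A minimal sketch of the hybrid CNN plus SVM idea described above is given below: a pretrained convolutional backbone extracts image features and an SVM classifies them. This is our illustration under assumed folder and class layouts, not the authors' pipeline, which also involves HoloLens capture and server-side inference.

```python
# Hedged sketch of the hybrid CNN + SVM idea (our illustration, not the
# authors' pipeline): a pretrained CNN backbone extracts image features and
# an SVM classifies the component classes.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from sklearn.svm import SVC

# Assumed folder layout: component_images/<class_name>/*.jpg
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
dataset = ImageFolder("component_images", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=False)

# Convolutional feature extractor: ResNet-18 with its classifier head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

features, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        features.append(backbone(images))
        labels.append(targets)
X = torch.cat(features).numpy()
y = torch.cat(labels).numpy()

# The SVM is trained on the CNN features (convolution extracts, SVM classifies).
svm = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", svm.score(X, y))
```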

Keywords: augmented reality, machine learning, object recognition, virtual laboratories

Procedia PDF Downloads 112