Search results for: special standards for planing
542 Finite Element Study of Coke Shape Deep Beam to Column Moment Connection Subjected to Cyclic Loading
Authors: Robel Wondimu Alemayehu, Sihwa Jung, Manwoo Park, Young K. Ju
Abstract:
Following the 1994 Northridge earthquake, intensive research on beam to column connections was conducted, leading to the current design basis. The current design codes require the use of either a prequalified connection or a connection that passes the requirements of a large-scale cyclic qualification test prior to use in intermediate or special moment frames. The second alternative is expensive in terms of both money and time. On the other hand, the maximum beam depth in most of the prequalified connections is limited to 900 mm due to the reduced rotation capacity of deeper beams. However, for long-span beams the need to use deeper beams may arise. In this study, a beam to column connection detail suitable for deep beams is presented. The connection detail comprises a thicker, tapered beam flange adjacent to the beam to column connection. Within the thicker-tapered flange region, two reduced beam sections are provided with the objective of forming two plastic hinges within the tapered-thicker flange region. In addition, the length, width, and thickness of the tapered-thicker flange region are proportioned in such a way that a third plastic hinge forms at the end of the tapered-thicker flange region. As a result, the total rotation demand is distributed over three plastic zones, making the detail suitable for deeper beams that have lower rotation capacity at a single plastic hinge. The effectiveness of this connection detail is studied through finite element analysis. For the study, a beam with a depth of 1200 mm is used. Additionally, a comparison with the welded unreinforced flange-welded web (WUF-W) moment connection and the reduced beam section moment connection is made. The results show that the rotation capacity of a WUF-W moment connection is increased from 2.0% to 2.2% by applying the proposed moment connection detail. Furthermore, the maximum moment capacity, energy dissipation capacity, and stiffness of the WUF-W moment connection are increased by up to 58%, 49%, and 32%, respectively. In contrast, applying the reduced beam section detail to the same WUF-W moment connection reduced the rotation capacity from 2.0% to 1.50%, while the maximum moment capacity and stiffness of the connection were reduced by 22% and 6%, respectively. The proposed connection develops three plastic hinge regions as intended, and it shows improved performance compared to both the WUF-W moment connection and the reduced beam section moment connection. Moreover, the achieved rotation capacity satisfies the minimum required for use in intermediate moment frames.
Keywords: connections, finite element analysis, seismic design, steel intermediate moment frame
Procedia PDF Downloads 168
541 Rethinking Propaganda Discourse: Convergence and Divergence Unveiled
Authors: Mandy Tao Benec
Abstract:
Propaganda, understood as a ‘deliberate attempt to persuade people to think and behave in a desired way’, contributes to the fabric of mass media discourse as an important component, albeit often under various alternative expressions other than ‘propaganda’. When the word ‘propaganda’ does appear in the mainstream media of the West, it is often selectively applied to undesired parties such as China, North Korea, Russia’s Putin, or terrorists. This attitude reveals an ‘us versus them’ mentality and a presupposition that propaganda is something only ‘they’ do whilst ‘we’ do not. This phenomenon not only runs the risk of generating political naivety but also calls for a re-examination of propaganda, which will benefit from analysing it in contrasting social and political environments. Therefore, this paper aims to compare how propaganda has been understood and put into practice both in the Anglo-American context and by the Chinese Communist Party (CCP). Revealing the convergence and divergence of the propaganda discourses of China and the West will help clarify the misconception and misunderstanding of the term. Historical narrative analysis and critical discourse analysis are the main methodologies. By carefully examining data from academic research on propaganda in both English and Chinese, the landscape of how propaganda has been defined throughout different eras is mapped, with special attention paid to analysing the parallelism and/or correspondence between China and the West where applicable. Meanwhile, critically analysing official documents such as speeches and guidelines for propaganda administration given by top-rank CCP leaders will help reveal that, in contrast to the West’s ‘us-them’ mentality, China sees itself as no different from the Western democracies where propaganda is concerned. The major findings of this study identify a series of convergences and divergences between Chinese and Western propaganda discourses, and in the relationship between propaganda the ‘signified’ (its essence) and propaganda the ‘signifier’ (the term itself), including (yet not limited to): 1) convergence in China catching up with the West in acknowledging the perceived pejorative connotation of the term; 2) divergence in propaganda activities being disassociated from the term in the West, and convergence in adopting such practice when China follows suit in its external propaganda towards the West; 3) convergence in utilising alternative notions to replace ‘propaganda’, first by the West, then imported and incorporated enthusiastically by China into its propaganda discourse; 4) divergence between China’s internal and external propaganda and the subsequent differentiation between the contexts in which the CCP sees fit to utilise the concept; 5) convergence between China and the West in their English-language propaganda discourses, whilst simultaneously diverging in their presuppositions: ‘us-them’ by the West and ‘we are the same’ by China. To conclude, this paper will contribute to the study of propaganda and its discourse by analysing how propaganda is understood and utilised in both worlds, and hence uncover the discourse power struggle between the two, which itself contributes to the propaganda discourse, and so untie the misconception of propaganda.
Keywords: China, discourse, power, propaganda
Procedia PDF Downloads 86
540 Care as a Situated Universal: Defining Care as a Practical Phenomenology Study
Authors: Amanda Aliende da Matta
Abstract:
This communication presents an aspect of phenomenon selection in an applied hermeneutic phenomenology study on care and vulnerability: the need to consider the phenomenon as a situated universal. For that, we will first present the study and its methodology. Secondly, we will expose the need to understand phenomena as situation-defined, incorporating feminist thought. In an informatics class for 14-year-olds, we explained the exercise: students have to make a 5-slide presentation about a topic of their choice. A does it on streetwear, B on Cristiano Ronaldo, C on Marvel, but J did his on Down Syndrome. Introducing it to the class, J explains the physical and cognitive differences caused by trisomy; when asked to explain it further, he says: "they are angels, teacher," and shows us a poster on his cellphone that says: if you laugh at a different child he will laugh with you because his innocence outweighs your ignorance. The anecdote shows, better than any theoretical explanation, something that some vulnerable people have; something beautiful and special but difficult to define. Let's call this something caring. The research has the main objective of accounting for the experience of caregiving in vulnerability, and it will be carried out with Applied Hermeneutic Phenomenology (AHP). The method's objective is to investigate lived human experience in its pre-reflexive dimension in order to know its meaning structures. Unlike other research methods, AHP does not produce theory about a specific context but seeks the meaning of the lived experience in its character as human experience. However, it is necessary that we understand care as defined in a concrete situation. We cannot start the research with an a priori definitive concept of care, or we would fall into the mistake of closing ourselves to only what we already know, as explained by Levinas. We incorporate, then, the notion of situated universals. Loyal to phenomenology, the definition of the phenomenon should start with an investigation of the word's etymology: the word cura, in its etymological root, means care. And care comes from the Latin word cogitātus/cōgĭto, which means "to pursue something in mind" and "to consider thoroughly." The verb cōgĭto, meanwhile, is composed of co- (altogether) and agitare (to deal with or think committedly about something, to concern oneself with) / ăgĭto (to set in motion, to move). Care, therefore, has in its origin a meditation on something, a concern about something, a verb that carries a sense of action and movement. To care is to act out of concern for something or someone. This etymology, though, is not the final definition of the phenomenon, but only its skeleton. It needs to be embodied in the concrete situation to become a possible lived experience. And that means that the lived experience descriptions (LEDs) should be selected by taking into consideration how, and whether, care was engendered in that concrete experience. Defining the phenomenon has to take situated knowledge into consideration.
Keywords: applied hermeneutic phenomenology, care ethics, hermeneutics, phenomenology, situated universalism
Procedia PDF Downloads 92
539 Recognizing Juxtaposition Patterns of the Dwelling Units in Housing Cluster: The Case Study of Aghayan Complex: An Example of Rural Residential Development in Qajar Era in Iran
Authors: Outokesh Fatemeh, Jourabchi Keivan, Talebi Maryam, Nikbakht Fatemeh
Abstract:
Mayamei is a small town in Iran located between the cities of Shahrud and Sabzevar, on the Silk Road. It enjoys a history of approximately 1000 years. An alley entitled ‘Aghayan’ in this town comprises the residential buildings of a famous family. A bathhouse, mosque, telegraph center, and cistern are all associated with this alley. This architectural complex belongs to Sadat Mousavi, one of Mayamei's major grandees and a religious household. Since its construction, the alley has been inherited from generation to generation within the family. The purpose of this study, which was conducted on Aghayan alley and its associated complex, was to elucidate the Iranian vernacular domestic architecture of the Qajar era in small towns and villages. We searched for large, medium, and small architectural patterns in the complex and tried to elaborate their evolution from the past to the present. The other objective of this project was finding a correlation between changes in the lifestyle of the alley’s inhabitants and the form of the buildings' architecture. Our investigation methods included a literature review, especially of historical travelogues; peer site visits; mapping; interviews with the elderly people of the Mousavi family (the owners); and examination of the available documents, especially the 4-meter scroll-type testament written 150 years ago. For the analysis of the aforementioned data, an effort was made to discover (1) the patterns of placement of the different buildings in respect of one another, (2) the relation between the function of the buildings and their relative location in the complex, as considered in the original design, and (3) possible changes in the functions of the buildings over time. In this investigation, special attention was paid to the chronological changes in the lifestyles of the residents. In addition, we tried to take all the different activities of the residents into account, including their daily life activities, religious ceremonies, etc. By combining such methods, we were able to obtain a picture of the buildings in their original (construction) state, along with a knowledge of the temporal evolution of the architecture. An interesting finding is that the Aghayan complex appears to be a large structure of horizontal-type apartments placed next to each other. The houses made in this way are connected to their adjacent neighbors both through the bifacial rooms and from the roofs.
Keywords: Iran, Qajar period, vernacular domestic architecture, life style, residential complex
Procedia PDF Downloads 168
538 OER on Academic English, Educational Research and ICT Literacy, Promoting International Graduate Programs in Thailand
Authors: Maturos Chongchaikit, Sitthikorn Sumalee, Nopphawan Chimroylarp, Nongluck Manowaluilou, Thapanee Thammetha
Abstract:
The 2015 Kasetsart University Research Plan, funded by the national research institutes (TRF and NRCT), comprises four sub-research projects on the development of three OER websites and on a study of their usage by students in international programs. The goals were to develop open educational resources (OER) in the form of websites that promote three key skills for quality learning and achievement: Academic English, Educational Research, and ICT Literacy, for graduate students in international programs in Thailand. Statistics from the Office of Higher Education showed that the number of foreign students who come to study in international higher education in Thailand has increased by 25 percent per year, indicating that the international education system and institutes of Thailand are already recognized regionally and globally as meeting the standards. The output of the plan (the OER websites and their materials) and its outcome (students' learning improvement due to lecturers' readiness for open educational media) will ultimately lead the country to higher business capabilities for international education services in the ASEAN Community in the future. The OER innovation is aimed at sharing quality knowledge with the world, with the adoption of Creative Commons licenses that make sharing freely possible (5Rs openness), without charge, and that lead to self-directed and lifelong learning. The research responded to the problem of low usage of existing English-language OER by developing OER on three specific skills and trying them out with a sample of 100 students randomly selected from the international graduate programs of the top 10 Thai universities, according to the QS Asia University Rankings 2014. The R&D process was used for product evaluation in 2 stages: the development stage and the usage study stage. The research tools were questionnaires for content and OER experts, questionnaires for the sample group, and open-ended interviews for the focus group discussions. The data were analyzed using frequency, percentage, mean, and SD. The findings revealed that the developed websites were judged fully qualified as OER by the experts. The students' opinions and satisfaction were at the highest levels for both the content and the technology used for presentation. The usage manual and self-assessment guide were finalized during the focus group discussions. Direct participation in the 5Rs openness activities through the tools provided by OER models such as MERLOT and OER COMMONS, as well as the development of the usage manual and self-assessment guide, emerged as a key approach for further extending the output widely and sustainably to the network of users in various higher education institutions.
Keywords: open educational resources, international education services business, academic English, educational research, ICT literacy, international graduate program, OER
Procedia PDF Downloads 225
537 Analysis on the Converged Method of Korean Scientific and Mathematical Fields and Liberal Arts Programme: Focusing on the Intervention Patterns in Liberal Arts
Authors: Jinhui Bak, Bumjin Kim
Abstract:
The purpose of this study is to analyze how the scientific and mathematical fields (STEM) and the liberal arts (A) work together in the STEAM program. In future STEAM programs, the humanities will act not just as a 'tool' for science, technology, and mathematics, but as 'core' content with equivalent status. STEAM was first introduced to the Republic of Korea in 2011, when the Ministry of Education emphasized fostering creative convergence talent. Many programs have since been developed under the name STEAM, but with the majority of programs focusing on technology education, the arts and humanities are considered secondary. As a result, the arts component is most likely to be accepted as an option that can be excluded by the teachers who run the STEAM program. If what we ultimately pursue through STEAM education is the fostering of STEAM literacy, we should no longer turn the arts into a tool area for STEM. Based on this awareness, this study analyzed over 160 STEAM programs in middle and high schools, which were produced and distributed by the Ministry of Education and the Korea Science and Technology Foundation from 2012 to 2017. The framework of analysis referenced two criteria presented in related prior studies: normative convergence and technological convergence. In addition, we divided the arts into fine arts and liberal arts, focused on the Korean language course within the liberal arts, and analyzed which curriculum standards were selected and through what kind of process the Korean language department participated in teaching and learning. In this study, to ensure the reliability of the analysis results, the individual analysis results of the two researchers were cross-checked and accepted only if they were consistent. We also conducted a reliability check on the analysis results with three middle and high school teachers involved in the STEAM education program. For 10 programs selected randomly from the analyzed programs, a Cronbach's α of .853 showed a reliable level. The results of this study are summarized as follows. First, the convergence ratio of the liberal arts was lowest in moral education, at 14.58%. Second, normative convergence stood at 28.19%, which is lower than technological convergence. Third, the language achievement criteria selected for the programs were limited to functional areas such as listening, speaking, reading, and writing. This means that the convergence of Korean language departments is achieved only through the tools necessary to communicate opinions or promote scientific products. In this study, we intend to compare these results with STEAM programs in the United States and abroad to explore what elements or key concepts are required in the achievement criteria for the Korean language curriculum. This is meaningful in that the humanities field (A), including Korean, provides basic data that can be fused into 'equivalent qualifications' with science (S), technical engineering (TE), and mathematics (M).
Keywords: Korean STEAM Programme, liberal arts, STEAM curriculum, STEAM Literacy, STEM
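For reference, the reliability statistic reported above is Cronbach's α in its standard form (stated generically here, not re-derived from this study's data):

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{Y_i}^{2}}{\sigma_{X}^{2}}\right)
```

where k is the number of items (here, coded programs), σ²_{Y_i} is the variance of item i, and σ²_X is the variance of the total score; a value of .853 therefore indicates high internal consistency among the researchers' codings.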
Procedia PDF Downloads 160
536 Semi-Empirical Modeling of Heat Inactivation of Enterococci and Clostridia During the Hygienisation in Anaerobic Digestion Process
Authors: Jihane Saad, Thomas Lendormi, Caroline Le Marechal, Anne-Marie Pourcher, Céline Druilhe, Jean-Louis Lanoiselle
Abstract:
Agricultural anaerobic digestion consists of the conversion of animal slurry and manure into biogas and digestate. These inputs must, however, be treated at 70 °C for 60 min before anaerobic digestion according to the European regulation (EC n°1069/2009 & EU n°142/2011). The impact of such heat treatment on the outcome of bacteria has been poorly studied up to now. Moreover, a recent study¹ has shown that enterococci and clostridia are still detected despite the application of such thermal treatment, questioning the relevance of this approach for the hygienisation of digestate. The aim of this study is to establish the heat inactivation kinetics of two species of enterococci (Enterococcus faecalis and Enterococcus faecium) and two species of clostridia (Clostridioides difficile and Clostridium novyi, as a non-toxic model for Clostridium botulinum of group III). A pure culture of each strain was prepared in a specific sterile medium at a concentration of 10⁴ – 10⁷ MPN/mL (most probable number), depending on the bacterial species. Bacterial suspensions were then filled into sterilized capillary tubes and placed in a water or oil bath at the desired temperature for a specific period of time. Each bacterial suspension was enumerated using an MPN approach, and tests were repeated three times for each temperature/time couple. The inactivation kinetics of the four indicator bacteria are described using the Weibull model and the classical Bigelow model of first-order kinetics. The Weibull model takes biological variation with respect to thermal inactivation into account and is essentially a statistical model of the distribution of inactivation times; the classical first-order approach is a special case of the Weibull model. The heat treatment at 70 °C / 60 min contributes to a reduction greater than 5 log10 for E. faecium and E. faecalis. However, it results only in a reduction of about 0.7 log10 for C. difficile and an increase of 0.5 log10 for C. novyi. Application of treatments at higher temperatures is required to reach a reduction greater than or equal to 3 log10 for C. novyi (such as 30 min / 100 °C, 13 min / 105 °C, 3 min / 110 °C, and 1 min / 115 °C), raising the question of the relevance of the heat treatment at 70 °C / 60 min for these spore-forming bacteria. To conclude, the heat treatment (70 °C / 60 min) defined by the European regulation is sufficient to inactivate non-sporulating bacteria. Higher temperatures (> 100 °C) are required for spore-forming bacteria to reach a 3 log10 reduction (sporicidal activity).
Keywords: heat treatment, enterococci, clostridia, inactivation kinetics
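As a rough illustration of the two models named above, the sketch below (Python) compares the decimal reduction predicted by the classical first-order Bigelow model, log10(N/N0) = -t/D, with the Weibull model in the Mafart parameterization, log10(N/N0) = -(t/δ)^p. The D, δ, and p values are placeholder assumptions for illustration only, not the parameters fitted in this study.

```python
def log10_reduction_bigelow(t_min: float, d_min: float) -> float:
    """Classical first-order (Bigelow) model: log10(N/N0) = -t / D."""
    return -t_min / d_min

def log10_reduction_weibull(t_min: float, delta_min: float, p: float) -> float:
    """Weibull model (Mafart parameterization): log10(N/N0) = -(t / delta)**p."""
    return -((t_min / delta_min) ** p)

# Hypothetical parameters at 70 degC (illustrative only, not fitted values from the study)
t = 60.0               # treatment time, min
D = 10.0               # decimal reduction time of a heat-sensitive strain, min
delta, p = 12.0, 0.8   # Weibull scale (min) and shape parameters

print(f"Bigelow: {log10_reduction_bigelow(t, D):.2f} log10")        # -6.00 log10
print(f"Weibull: {log10_reduction_weibull(t, delta, p):.2f} log10")  # about -3.6 log10
```

A shape parameter p < 1 gives the concave-up survival curves (tailing) typical of spore-forming bacteria, which is one reason the Weibull form is preferred over a single first-order rate when fitting such data.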
Procedia PDF Downloads 117
535 Utilization of Fly Ash Amended Sewage Sludge as Sustainable Building Material
Authors: Kaling Taki, Rohit Gahlot, Manish Kumar
Abstract:
Disposal of Sewage Sludge (SS) is a major issue, especially in a developing nation like India, where there is little control over the quantity of SS produced. The present research work demonstrates the potential application of SS amended with varying percentages (0-100%) of Fly Ash (FA) for brick manufacturing as an alternative form of SS management. SS samples were collected from the Jaspur sewage treatment plant (Ahmedabad, India) and subjected to different preconditioning treatments: (i) atmospheric drying, (ii) pulverization, and (iii) heat treatment in an oven (110°C, moisture removal) and a muffle furnace (440°C, organic content removal). Geotechnical parameters of the SS were obtained as liquid limit (52%), plastic limit (24%), shrinkage limit (10%), plasticity index (28%), differential free swell index (DFSI, 47%), silt (68%), clay (27%), organic content (5%), optimum moisture content (OMC, 20%), maximum dry density (MDD, 1.55 gm/cc), specific gravity (2.66), swell pressure (57 kPa), and unconfined compressive strength (UCS, 207 kPa). For FA, the liquid limit, plastic limit, and specific gravity were 44%, 0%, and 2.2, respectively. Initially, for brick casting, the pulverized SS sample was heat treated in a muffle furnace at around 440°C (5 hours) for removal of organic matter. SS, FA, and water were then mixed by weight at OMC. A 7 x 7 x 7 cm³ sample mold was used for casting bricks at MDD. Brick samples were first dried at room temperature for 24 hours, then in an oven at 100°C (24 hours), and finally fired in a muffle furnace at 1000°C (10 hours). The fired brick samples were then cured for 3 days according to the Indian Standard (IS) specification for common burnt clay building bricks (5th revision). The compressive strengths of brick samples with 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100% FA were 0.45, 0.76, 1.89, 1.83, 4.02, 3.74, 3.42, 3.19, 2.87, 0.78, and 4.95 MPa when evaluated through a compression testing machine (CTM) at a stress rate of 14 MPa/min. The highest strength among the SS-FA mixtures was obtained at 40% FA, i.e. 4.02 MPa, which is much higher than that of the pure SS brick sample. According to IS 1077: 1992, this combination gives a strength of more than 3.5 MPa and can be utilized for common building bricks. The loss in weight after firing was much higher than after the oven treatment; this might be due to degradation at temperatures higher than 100°C. The thermal conductivity of the fired brick was obtained as 0.44 W m⁻¹ K⁻¹, indicating better insulation properties than in other reported studies. TCLP (toxicity characteristic leaching procedure) test values of Cr, Cu, Co, Fe, and Ni in raw SS were found to be 69, 70, 21, 39502, and 47 mg/kg. The study positively concludes that SS and FA at the optimum ratio can be utilized for common building bricks, such as partition walls and other works with small strength requirements. The uniqueness of the work is its emphasis on the utilization of FA for stabilizing SS as a construction material, as a replacement for natural clay as reported in existing studies.
Keywords: Compressive strength, Curing, Fly Ash, Sewage Sludge.
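For reference, cube compressive strength is simply the peak load divided by the loaded cross-sectional area. The short sketch below back-calculates, from the reported strengths, the peak loads that a 7 x 7 x 7 cm cube would have carried in the CTM; these load values are derived for illustration only and are not measured data from the study.

```python
# Cube compressive strength: sigma (MPa) = peak load (N) / loaded area (mm^2)
side_mm = 70.0
area_mm2 = side_mm * side_mm   # 4900 mm^2 for a 7 x 7 x 7 cm cube

fa_percent = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
strength_mpa = [0.45, 0.76, 1.89, 1.83, 4.02, 3.74, 3.42, 3.19, 2.87, 0.78, 4.95]

for fa, sigma in zip(fa_percent, strength_mpa):
    peak_load_kn = sigma * area_mm2 / 1000.0   # back-calculated peak load in kN
    print(f"{fa:3d}% FA: {sigma:4.2f} MPa  ->  ~{peak_load_kn:5.1f} kN")

# e.g. the 40% FA mix (4.02 MPa) corresponds to a peak load of roughly 19.7 kN
```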
Procedia PDF Downloads 114
534 Oat Beta Glucan Attenuates the Development of Atherosclerosis and Improves the Intestinal Barrier Function by Reducing Bacterial Endotoxin Translocation in ApoE-/- Mice
Authors: Dalal Alghawas, Jetty Lee, Kaisa Poutanen, Hani El-Nezami
Abstract:
Oat β-glucan, a water-soluble non-starch linear polysaccharide, has been approved as a cholesterol-lowering agent by various food safety administrations and is commonly used to reduce the risk of heart disease. The molecular weight of oat β-glucan can vary depending on the extraction and fractionation methods. It is not clear whether the molecular weight has a significant impact on reducing the acceleration of atherosclerosis. The aim of this study was to investigate the effects of three different oat β-glucan fractions on the development of atherosclerosis in vivo, with special focus on plaque stability and intestinal barrier function. To test this, ApoE-/- female mice were fed a high-fat diet supplemented with oat bran, a high molecular weight (HMW) oat β-glucan fractionate, or a low molecular weight (LMW) oat β-glucan fractionate for 16 weeks. Atherosclerosis risk markers were measured in the plasma, heart, and aortic tree. Plaque size was measured in the aortic root and aortic tree. ICAM-1, VCAM-1, E-Selectin, and P-Selectin protein levels were assessed in the aortic tree to determine plaque stability at 16 weeks. The expression of p22phox at the aortic root was evaluated to study the NADPH oxidase complex involved in nitric oxide bioavailability and vascular elasticity. The tight junction proteins E-cadherin and beta-catenin were analysed by western blot as a test of intestinal barrier function. Plasma LPS, intestinal D-lactate levels, and hepatic FMO gene expression were determined to confirm whether the compromised intestinal barrier led to endotoxemia. The oat bran and HMW oat β-glucan diet groups were more effective than the LMW β-glucan diet group at reducing plaque size and showed marked improvements in plaque stability. The intestinal barrier was compromised in all the experimental groups; however, the endotoxemia levels were higher in the LMW β-glucan diet group. The oat bran and HMW oat β-glucan diet groups were more effective at attenuating the development of atherosclerosis. Reasons for this could be the low viscosity of the LMW oat β-glucan diet in the gut and its inability to block the reabsorption of cholesterol. Furthermore, the low viscosity may allow more bacterial endotoxin translocation through the impaired intestinal barrier. In the future, food technologists should carefully consider how to incorporate LMW oat β-glucan into health-promoting foods.
Keywords: Atherosclerosis, beta glucan, endotoxemia, intestinal barrier function
Procedia PDF Downloads 427
533 Teachers of the Pandemic: Retention, Resilience, and Training
Authors: Theoni Soublis
Abstract:
The COVID-19 pandemic created a severe interruption in teaching and learning in K-12 schools. It is essential that educational researchers, teachers, and administrators understand the long-term effects that COVID-19 had on a variety of stakeholders in education. This investigation aims to analyze the research conducted since the beginning of the pandemic that focuses specifically on teacher retention, resilience, and training. The results of this investigation will help to inform future research in order to better understand how the institution of education can continue to be prepared, and to better prepare, for future significant shifts in the modalities of instruction. The results of this analysis will directly impact the field of education, as they will broaden the scope of understanding regarding how COVID-19 impacted teaching and learning. The themes that emerge from the data analysis will directly inform policy makers, administrators, and researchers about how to best implement training and curriculum design in order to support teacher effectiveness in the classroom. Educational researchers have written about how teacher morale plummeted and how many teachers reported early burnout and higher stress levels. Teachers' stress and anxiety soared during the COVID-19 pandemic, but so did their resilience and dedication to the field of education. This research aims to understand how public-school teachers overcame the teaching obstacles presented to them during COVID-19. Research has been conducted to identify a variety of information regarding the impact the pandemic has had on K-12 teachers, students, and families. This research aims to understand how teachers continued to pursue their teaching objectives without significant training in effective online instruction methods. Not many educators had even heard of the video conferencing platform Zoom before the spring of 2020. Researchers are interested in understanding how teachers used their expertise, prior knowledge, and training to institute immediate and effective online learning environments; what types of relationships teachers built with students while teaching 100% remotely; and how relationships with students changed while teaching remotely. Furthermore, did the teacher-student relationship propel teachers' resolve to be successful while teaching during a pandemic? Recent world events have significantly impacted the field of public-school teaching. The pandemic forced teachers to shift their paradigm about how to maintain high academic expectations, meet state curriculum standards, and assess students' learning gains to make data-informed decisions, while simultaneously adapting modes of instruction through multiple outlets with little to no training on remote, synchronous, asynchronous, virtual, and hybrid teaching. While it would be very interesting to study how teaching positively impacted students' learning during the pandemic, I am more interested in understanding how teachers stayed the course and maintained their mental health while dealing with the stress and pressure of teaching during COVID-19.
Keywords: teacher retention, COVID-19, teacher education, teacher morale
Procedia PDF Downloads 88
532 Analyze the Properties of Different Surgical Sutures
Authors: Doaa H. Elgohary, Tamer F. Khalifa, Mona M. Salem, M. A. Saad, Ehab Haider Sherazy
Abstract:
Textiles have conquered new areas over the past three decades, including agriculture, transportation, filtration, the military, and medicine. The use of textiles in the medical field has increased significantly in recent years and covers almost everything. Medical textiles represent a huge market, as they are widely used not only in hospitals, hygiene, and healthcare but also in hotels and other environments where hygiene is required. However, not all fibers are suitable for the manufacture of medical textile products. Some special properties are required of the manufactured materials, e.g., strength, elasticity, spinnability, etc. The usual desirable properties of medical fibers also include non-toxicity, sterilizability, biocompatibility, biodegradability, good absorbability, softness, and freedom from additives and impurities. Suturing is one of the most common practices in the medical field, as the suture is a biomaterial device, either natural or synthetic, used to connect blood vessels and join tissues. In addition to being very strong, suture material should easily dissolve in bodily fluids and lose strength as the tissue gains strength. In this work, a study was made to select the most used suture materials; silk, VICRYL, and polypropylene were found to be the most used materials, in varying numbers. The research involved the analysis of 36 samples of the three most commonly used materials; the tests were carried out on 36 imported samples from four different companies. Each company supplied three different materials (silk, VICRYL, and polypropylene) with three different gauges (4, 3.5, and 3 metric). The results of the study were tabulated, presented, and discussed. Practical statistics were used to support the analysis of the experimental results and the various relationships between variables, in order to achieve the best sampling performance with respect to the intended function. The analysis of the imported sutures shows that VICRYL sutures had the highest tensile strength, toughness, knot tensile strength, and knot toughness, followed by polypropylene and silk. As yarn count, weight, and diameter increase, tensile strength and toughness increase, while elongation and knot tension decrease. The multifilament yarn constructions (silk and VICRYL) score higher compared to the monofilament construction (polypropylene), resulting in increases in tenacity, toughness, knot tensile strength, and knot toughness.
Keywords: biodegradable yarns, braided sutures, irritation, knot tying, medical textiles, surgical sutures, wound healing
Procedia PDF Downloads 61
531 Health Communication and the Diabetes Narratives of Key Social Media Influencers in the UK
Authors: Z. Sun
Abstract:
Health communication is essential in promoting healthy lifestyles, managing disease conditions, and eventually reducing health disparities. The key elements of successful health communication always include the development of communication strategies to engage people in thinking about their health, inform them about healthy choices, persuade them to adopt safe and healthy behaviours, and eventually achieve public health objectives. The use of 'narrative' is recognised as a health communication strategy for enhancing personal and public health due to its potential persuasive effect in motivating and supporting individuals to change their beliefs and behaviours by inviting them into a narrative world, breaking down their cognitive and emotional resistance, and enhancing their acceptance of the ideas portrayed in narratives. Meanwhile, the popularity of social media has provided a novel means of communication for healthcare stakeholders, and a special group of active social media users (influencers) has started playing a pivotal role in providing health 'solutions'. Such individuals are often referred to as 'influencers' because of their central position in the online communication system and the persuasive effect their actions may have on audiences. They may have established a positive rapport with their audience and earned trust and credibility in a specific area, and thus their audience considers the information they deliver to be authentic and influential. To the best of our knowledge, to date there is no published research that examines the effect of diabetes narratives presented by social media influencers and their impact on health-related outcomes. The primary aim of this study is to investigate the diabetes narratives presented by social media influencers in the UK because of the new dimension they bring to health communication and the potential impact they may have on audiences' health outcomes. This study is situated within the interpretivist and narrative paradigms. A mixed methodology combining both quantitative and qualitative approaches has been adopted. Qualitative data have been derived to provide a better understanding of influencers' personal experiences and how they construct meanings and make sense of their world, while quantitative data have been accumulated to identify key social media influencers in the UK and measure the impact of diabetes narratives on audiences. Twitter has been chosen as the social media platform to initially identify key influencers. The two groups of participants are the top 10 key social media influencers in the UK and 100 audience members of each influencer, which means a total of 1000 audience members have been invited. This paper is going to discuss, first of all, the background of the research in the context of health communication; secondly, the necessity and contribution of this research; then, the major research questions being explored; and finally, the methods to be used.
Keywords: diabetes, health communication, narratives, social media influencers
Procedia PDF Downloads 107
530 Spatial and Temporal Analysis of Forest Cover Change with Special Reference to Anthropogenic Activities in Kullu Valley, North-Western Indian Himalayan Region
Authors: Krisala Joshi, Sayanta Ghosh, Renu Lata, Jagdish C. Kuniyal
Abstract:
Throughout the world, monitoring and estimating the changing pattern of forests across diverse landscapes through remote sensing is instrumental in understanding the interactions of human activities and the ecological environment with the changing climate. Forest change detection using satellite imagery has emerged as an important means of gathering information on a regional scale. Kullu valley in Himachal Pradesh, India, is situated in a transitional zone between the lesser and the greater Himalayas. It thus presents a typically rugged mountainous terrain of moderate to high altitude, varying from 1200 meters to over 6000 meters. Due to changes in agricultural cropping patterns, urbanization, industrialization, hydropower generation, climate change, tourism, and anthropogenic forest fire, it has undergone a tremendous transformation in forest cover over the past three decades. The loss and degradation of forest cover result in soil erosion, loss of biodiversity including damage to wildlife habitats, degradation of watershed areas, and deterioration of the overall quality of nature and life. Supervised classification of LANDSAT satellite data was performed to assess the changes in forest cover in Kullu valley over the years 2000 to 2020. The Normalized Burn Ratio (NBR) was calculated to discriminate between burned and unburned areas of the forest. Our study reveals that in Kullu valley the number of forest fire incidents, specifically those due to anthropogenic activities, has risen each subsequent year. The main objective of the present study is, therefore, to estimate the change in the forest cover of Kullu valley and to address the various social aspects responsible for the anthropogenic forest fires, as well as to assess its impact on the significant changes in regional climatic factors, specifically temperature, humidity, and precipitation, over three decades with the help of satellite imagery and ground data. The main outcome of the paper, we believe, will help the administration make a quantitative assessment of the forest cover area changes due to anthropogenic activities and devise long-term measures for creating awareness among the local people of the area.
Keywords: Anthropogenic Activities, Forest Change Detection, Normalized Burn Ratio (NBR), Supervised Classification
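The Normalized Burn Ratio mentioned above is computed from the near-infrared and shortwave-infrared bands, NBR = (NIR - SWIR) / (NIR + SWIR), and burned areas are usually mapped from the pre-fire minus post-fire difference (dNBR). The sketch below is a minimal illustration assuming Landsat 8 band numbering (NIR = band 5, SWIR2 = band 7; Landsat 5/7 scenes from the earlier part of the 2000-2020 period would use bands 4 and 7), with synthetic reflectance arrays and a commonly cited dNBR threshold, not the thresholds used in this study.

```python
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    nir = nir.astype("float64")
    swir = swir.astype("float64")
    return (nir - swir) / (nir + swir + 1e-10)  # small epsilon avoids division by zero

# Hypothetical surface-reflectance pixels for a pre-fire and a post-fire scene
pre_nir,  pre_swir  = np.array([[0.45, 0.50]]), np.array([[0.15, 0.18]])
post_nir, post_swir = np.array([[0.25, 0.48]]), np.array([[0.30, 0.20]])

dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
burned = dnbr > 0.27   # a commonly cited dNBR threshold for at least moderate burn severity
print(dnbr, burned)
```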
Procedia PDF Downloads 175
529 Pulmonary Complications of Dengue Infection
Authors: Shilpa Avarebeel
Abstract:
Background: India is one of the seven identified countries in the South-East Asia region regularly reporting dengue infection and may soon transform into a major niche for dengue epidemics. Objective: To study the clinical profile of dengue in our setting, with special reference to respiratory complications. Study design: A descriptive and exploratory study conducted over one year in 2014. All patients confirmed as having dengue infection were followed, and their clinical profile along with outcome was determined. The study proforma was designed based on the objective of the study; it was pretested and used after modification. Data were analyzed using the statistical software SPSS version 16. Data were expressed as mean ± SD for parametric variables and actual frequencies or percentages for non-parametric data. Comparison between groups was done using Student's t-test for independent groups, the Chi-square test, one-way ANOVA, and Karl Pearson's correlation test. Statistical significance was taken at P < 0.05. Results: The study included 134 dengue-positive cases. 81% had dengue fever, 18% had dengue hemorrhagic fever, and one had dengue shock syndrome. Most of the cases were reported during the month of June. The maximum number of cases was in the age group of 26-35 years. The average duration of hospital stay was less than seven days. Fever and myalgia were present in all 134 patients; 16 had bleeding manifestations. 38 had respiratory symptoms, 24 had breathlessness, and 14 had breathlessness and dry cough. On clinical examination of the patients with respiratory symptoms, all twenty-eight had features of hypoxia, twenty-four had signs of pleural effusion, and four had ARDS features. Chest X-ray confirmed the same. Among the patients with respiratory symptoms, the mean platelet count was 26,537 cells/cmm. There was no statistically significant difference in the platelet count between those with ARDS and those with other dengue complications. An average of four units of platelets was transfused to all those who had ARDS, in view of bleeding tendency. Mechanical ventilator support was provided for ARDS patients. Those with pleural effusion and pulmonary oedema were given NIV (non-invasive ventilation) support along with supportive care. However, steroids were given to patients with ARDS and to 10 patients with signs of respiratory distress. 100% mortality was seen in patients with ARDS. Conclusion: Dengue has to be checked for in those presenting with fever and breathlessness. Supportive treatment remains the cornerstone of management. Platelet transfusion has to be given only on clinical judgment. Steroids have no role except in early ARDS, which is controversial.
Keywords: adult respiratory distress syndrome, dengue fever, non-invasive ventilation, pulmonary complication
Procedia PDF Downloads 434
528 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration
Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu
Abstract:
Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards, such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is crucial for the efficiency of the downstream processes. In order to maximize process efficiency, the determination of distillate quality should be as fast, reliable, and cost-effective as possible. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are named, after separation, by the number of carbons they contain: LSRN consists of hydrocarbons containing five to six carbons, HSRN consists of six to ten, and kerosene consists of sixteen to twenty-two carbon-containing hydrocarbons. The physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years. Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery
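A minimal sketch of this kind of calibration workflow is given below, using Savitzky-Golay preprocessing and PLS regression from scipy/scikit-learn with cross-validated predictions. The spectra and reference values are synthetic placeholders, the window size, polynomial order, and number of latent variables are assumed choices, and the EMSC correction, the GILS model, and the ensemble step described in the abstract are not reproduced here.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: FT-NIR absorbance spectra (n_samples x n_wavenumbers); y: reference lab values
# (e.g. an ASTM-measured physical property). Synthetic placeholders are used here.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3001))            # 10000-4000 cm-1 at 2 cm-1 resolution ~ 3001 points
y = rng.normal(loc=0.74, scale=0.02, size=400)

# Preprocessing: Savitzky-Golay first derivative along the spectral axis
X_sg = savgol_filter(X, window_length=15, polyorder=2, deriv=1, axis=1)

# Multivariate calibration: PLS with a placeholder number of latent variables
pls = PLSRegression(n_components=8)
y_cv = cross_val_predict(pls, X_sg, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.4f}")
```

In practice, the number of latent variables and the derivative settings would be tuned per property, and the cross-validation error compared against the reproducibility of the corresponding ASTM reference method, as the abstract describes.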
Procedia PDF Downloads 137
527 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping
Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert
Abstract:
Determining soil elemental content and its distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be used directly for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consisted of an MP320 Neutron Generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated. These data can be combined with geographical coordinates in a geographical information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours were needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper will briefly describe the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and modes of work when conducting field surveys. Soil elemental distribution maps resulting from field surveys will be presented and discussed. Comparison of these maps with maps created on the basis of chemical analysis and of soil moisture measurements determined by soil electrical conductivity showed that they were similar. The maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron stimulated soil gamma spectroscopy paired with a GPS system is fully applicable for soil elemental agricultural field mapping.
Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy
Procedia PDF Downloads 140
526 Implementing a Structured, yet Flexible Tool for Critical Information Handover
Authors: Racheli Magnezi, Inbal Gazit, Michal Rassin, Joseph Barr, Orna Tal
Abstract:
An effective process for transmitting critical patient information is essential for patient safety and for improving communication among healthcare staff. Previous studies have discussed handover tools such as SBAR (Situation, Background, Assessment, Recommendation) or SOFI (Short Observational Framework for Inspection). Yet these formats lack flexibility and require special training. In addition, nurses and physicians have different procedures for handing over information. The objectives of this study were to establish a universal, structured tool for handover, for both physicians and nurses, based on parameters that were defined as 'important' and 'appropriate' by the medical team, and to implement this tool in various hospital departments, with flexibility for each ward. A questionnaire, based on established procedures and on the literature, was developed to assess attitudes towards the most important information for effective handover between shifts (Cronbach's alpha 0.78). It was distributed to 150 senior physicians and nurses in 62 departments. Among the senior medical staff, 12 physicians and 66 nurses responded to the questionnaire (52% response rate). Based on the responses, a handover form suitable for all hospital departments was designed and implemented. Important information for all staff included: patient demographics (full name and age); health information (diagnosis or patient complaint, changes in hemodynamic status, new medical treatment or equipment required); and social information (suspicion of violence, mental or behavioral changes, and guardianship). Additional information relevant to each unit included the treatment provided, laboratory or imaging required, and changes in scheduled surgery in surgical departments. The ICU required information on background illnesses, Pediatrics required information on diet and food provided, and Obstetrics required the number of days after cesarean section. Based on the model described, a flexible tool was developed that enables handover of both common and unique information. In addition, it includes general logistic information that must be transmitted to the next shift, such as planned disruptions in service or operations, staff training, etc. Development of a simple, clear, comprehensive, universal, yet flexible tool designed for all medical staff for transmitting critical information between shifts was challenging. Physicians and nurses found it useful, and it was widely implemented. Ongoing research is needed to examine the efficiency of this tool and whether the enthusiasm that accompanied its initial use is maintained.
Keywords: handover, nurses, hospital, critical information
Procedia PDF Downloads 251
525 Reuse of Historic Buildings for Tourism: Policy Gaps
Authors: Joseph Falzon, Margaret Nelson
Abstract:
Background: Regeneration and re-use of abandoned historic buildings present a continuous challenge for policy makers and stakeholders in the tourism and leisure industry. Obsolete historic buildings offer great potential for tourism and leisure accommodation, presenting unique heritage experiences to travellers and host communities. Contemporary demands in the hospitality industry continuously require higher standards, some of which are in conflict with heritage conservation principles. Objective: The aim of this research paper is to critically discuss regeneration policies with stakeholders of the tourism and leisure industry and to examine current practices in policy development and the resultant impact of policies on the Maltese tourism and leisure industry. Research Design: Six stakeholders involved in the tourism and leisure industry participated in the research through semi-structured interviews. A number of measures were taken to reduce bias and thus improve trustworthiness. Clear statements of the purpose of the research study were provided at the start of each interview to reduce expectancy bias. The interviews were semi-structured to minimise interviewer bias. Interviewees were allowed to expand and elaborate as necessary, with only necessary probing questions, to allow free expression of opinions and practices. The interview guide was submitted to participants at least two weeks before the interview to allow participants to prepare and to prevent recall bias during the interview as much as possible. Interview questions and probes contained both positive and negative aspects to prevent interviewer bias. Policy documents were available during the interview to prevent recall bias. Interview recordings were transcribed 'intelligent' verbatim. Analysis was carried out using thematic analysis, with the coding frame developed independently by two researchers. All phases of the study were governed by research ethics. Findings: Findings were grouped into main themes: financing of regeneration, governance, legislation, and policies. Other key issues included the value of historic buildings and approaches to regeneration. Whilst regeneration of historic buildings was noted, participants discussed a number of barriers that hindered regeneration. Stakeholders identified gaps in policies and gaps at the policy implementation stage. European Union funding policies facilitated regeneration initiatives, but funding criteria based on economic deliverables left a gap with respect to intangible heritage. Stakeholders identified niche markets for heritage tourism accommodation. A lack of research-based policies was also identified. Conclusion: The potential of regeneration is hindered by an inadequate legal framework supporting the contemporary needs of the tourism industry. Policies should be developed through active stakeholder participation. Adequate funding schemes have to support the tangible and intangible components of the built heritage.
Keywords: governance, historic buildings, policy, tourism
Procedia PDF Downloads 239
524 Outcomes-Based Qualification Design and Vocational Subject Literacies: How Compositional Fallacy Short-Changes School-Leavers' Literacy Development
Authors: Rose Veitch
Abstract:
Learning outcomes-based qualifications have been heralded as the means to raise vocational education and training (VET) standards, meet the needs of the changing workforce, and establish equivalence with existing academic qualifications. Characterized by explicit, measurable performance statements and atomistically specified assessment criteria, the outcomes model has been adopted by many VET systems worldwide since its inception in the United Kingdom in the 1980s. Debate to date centers on how the outcomes model treats knowledge. Flaws have been identified in terms of the overemphasis of end-points, the neglect of process, and a failure to treat curricula coherently. However, much of this censure has evaluated the outcomes model from a theoretical perspective; to date, there has been scant empirical research to support these criticisms. Various issues therefore remain unaddressed. This study investigates how the outcomes model impacts the teaching of subject literacies. This is of particular concern for subjects on the academic-vocational boundary such as Business Studies, since many of these students progress to higher education in the United Kingdom. This study also explores the extent to which the outcomes model is compatible with borderline vocational subjects. To fully understand if this qualification model is fit for purpose in the 16-18 year-old phase, it is necessary to investigate how teachers interpret their qualification specifications in terms of curriculum, pedagogy, and assessment. Of particular concern is the nature of the interaction between the outcomes model and teachers' understandings of their subject-procedural knowledge, and how this affects their capacity to embed literacy into their teaching. This present study is part of a broader doctoral research project which seeks to understand if and how content-area, disciplinary literacy, and genre approaches can be adapted to outcomes-based VET qualifications. This qualitative research investigates the 'what' and 'how' of literacy embedding from the perspective of in-service teacher development in the 16-18 phase of education. Using ethnographic approaches, it is based on fieldwork carried out in one Further Education college in the United Kingdom. Emergent findings suggest that the outcomes model is not fit for purpose in the context of borderline vocational subjects. It is argued that the outcomes model produces inferior qualifications due to compositional fallacy: the sum of a subject's components does not add up to the whole. Findings indicate that procedural knowledge, largely unspecified by some outcomes-based qualifications, is where subject literacies are situated, and that this often gets lost in 'delivery'. It seems that the outcomes model provokes an atomistic treatment of knowledge amongst teachers, along with the privileging of propositional knowledge over procedural knowledge. In other words, outcomes-based VET is a hostile environment for subject-literacy embedding. It is hoped that this research will produce useful suggestions for how this problem can be ameliorated and will provide an empirical basis for the potential reforms required to address these issues in vocational education.
Keywords: literacy, outcomes-based, qualification design, vocational education
Procedia PDF Downloads 19523 Psychological Variables Predicting Academic Achievement in Argentinian Students: Scales Development and Recent Findings
Authors: Mercedes Fernandez Liporace, Fabiana Uriel
Abstract:
Academic achievement in high school and college students is currently a matter of concern. National and international assessments show high schoolers as low achievers, and local statistics indicate alarming dropout percentages at this educational level. Even so, 80% of those students intend to attend higher education. On the other hand, applications to Public National Universities are free and non-selective, with no examination procedures. Though initial registrations are massive (307,894 students), only 50% of freshmen pass their first-year classes, and 23% achieve a degree. Low performance tends to be a common problem. Hence, freshmen adaptation, their adjustment, dropout and low academic achievement arise as topics on the agenda. Besides, the hinge between high school and college must be examined in depth, in order to achieve an integrated and successful path from one educational stratum to the other. Psychological research addresses the situation through two main lines. The first concerns psychometric scales: designing and/or adapting tests and examining their technical properties and their theoretical validity (e.g., academic motivation, learning strategies, learning styles, coping, perceived social support, parenting styles and parental consistency, paradoxical personality as correlated to creative skills, psychopathological symptomatology). The second research line emphasizes relationships among the variables measured by the former scales, addressing the formulation and testing of predictive models of academic achievement and establishing differences by sex, age, educational level (high school vs college), and career. Pursuing these goals, several studies were carried out in recent years, reporting findings and producing assessment technology useful to detect students academically at risk as well as good achievers. Multiple samples were analysed, totalling more than 3,500 participants (2,500 from college and 1,000 from high school), including descriptive, correlational, group-difference and explicative designs. A summary of the most relevant results is presented. Providing information to design specific interventions according to every learner's features and his/her educational environment emerges as a mid-term goal. Furthermore, that information might be helpful to adapt curricula by career, as well as for implementing special didactic strategies differentiated by sex and personal characteristics.Keywords: academic achievement, higher education, high school, psychological assessment
Procedia PDF Downloads 371522 A Post-Colonial Reading of Maria Edgeworth's Anglo-Irish Novels: Castle Rackrent and the Absentee
Authors: Al. Harshan, Hazamah Ali Mahdi
Abstract:
The Big House literature embodies Irish history. It acquires a special dimension of moral and social significance in relation to its owners. The Big House is a metaphor for the decline of the Protestant Ascendancy that ruled in a Catholic country and oppressed a native people. In the tradition of Big House fiction, Maria Edgeworth's Castle Rackrent and The Absentee explore the effect of the Anglo-Irish Protestant Ascendancy as it governed and misgoverned Ireland. Edgeworth illustrates the tradition of the Big House as a symbol of both a personal and a historical theme. This paper provides a reading of Castle Rackrent and The Absentee from a post-colonial perspective. The paper maintains that Edgeworth's novels contain elements of a radical critique of the colonialist enterprise. In our postcolonial reading of Maria Edgeworth's novels, one that goes beyond considering the works, as Sir Walter Scott did, as regional novels, evidence has been found of Edgeworth's colonial ideology. The significance of Castle Rackrent lies mainly in the fact that it is the first English novel to speak in the voice of the colonized Irish. What is more important is that the irony and the comic aspect of the novel come from its Irish narrator (Thady Quirk) and its Irish setting. Edgeworth reveals the geographical 'other' to her English reader by placing her colonized Irish narrator and his son, Jason Quirk, in a position of inferiority to emphasize the gap between Englishness and Irishness. Furthermore, this satirical aspect is a political one. It works to create and protect the superiority of the domestic English reader over the Irish subject. In other words, the implication of the colonial system of the novel, and of its structure of dominance and subordination, is obscured by its comic dimension. The matrimonial plot in The Absentee functions as an imperial plot, constructing Ireland as a complementary but ever unequal partner in the family of Great Britain. This imperial marriage works hegemonically to produce the domestic stability considered so crucial to national and colonial stability. Moreover, in order to achieve her imperial plot, Edgeworth's reconciliation of England and Ireland is seen in the marriage of the Anglo-Irish (hero/Colambre) with the Irish (heroine/Grace Nugent) and the happy bourgeois family; consequently, it becomes the model for colonizer-colonized relationships. Edgeworth must establish modes of legitimate behavior for women and men. The Absentee explains more purposefully how familial reorganization is dependent on the restitution of masculine authority and advantage, particularly for the Irish community.Keywords: Maria Edgeworth, post-colonial, reading, Irish
Procedia PDF Downloads 548521 Data Mining in Healthcare for Predictive Analytics
Authors: Ruzanna Muradyan
Abstract:
Medical data mining is a crucial field in contemporary healthcare that offers cutting-edge tactics with enormous potential to transform patient care. This abstract examines how sophisticated data mining techniques could transform the healthcare industry, with a special focus on how they might improve patient outcomes. Healthcare data repositories have dynamically evolved, producing a rich tapestry of different, multi-dimensional information that includes genetic profiles, lifestyle markers, electronic health records, and more. By utilizing data mining techniques inside this vast library, a variety of prospects for precision medicine, predictive analytics, and insight production become visible. Predictive modeling for illness prediction, risk stratification, and therapy efficacy evaluations are important points of focus. Healthcare providers may use this abundance of data to tailor treatment plans, identify high-risk patient populations, and forecast disease trajectories by applying machine learning algorithms and predictive analytics. Better patient outcomes, more efficient use of resources, and early treatments are made possible by this proactive strategy. Furthermore, data mining techniques act as catalysts to reveal complex relationships between apparently unrelated data pieces, providing enhanced insights into the cause of disease, genetic susceptibilities, and environmental factors. Healthcare practitioners can get practical insights that guide disease prevention, customized patient counseling, and focused therapies by analyzing these associations. The abstract explores the problems and ethical issues that come with using data mining techniques in the healthcare industry. In order to properly use these approaches, it is essential to find a balance between data privacy, security issues, and the interpretability of complex models. Finally, this abstract demonstrates the revolutionary power of modern data mining methodologies in transforming the healthcare sector. Healthcare practitioners and researchers can uncover unique insights, enhance clinical decision-making, and ultimately elevate patient care to unprecedented levels of precision and efficacy by employing cutting-edge methodologies.Keywords: data mining, healthcare, patient care, predictive analytics, precision medicine, electronic health records, machine learning, predictive modeling, disease prognosis, risk stratification, treatment efficacy, genetic profiles, precision health
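The risk-stratification workflow sketched in this abstract follows a standard supervised-learning pattern. The snippet below illustrates that pattern with scikit-learn on a small synthetic table of electronic-health-record style features; the column names, the synthetic outcome and the risk-tier thresholds are assumptions made for illustration and are not drawn from the study described here.

```python
# Minimal risk-stratification sketch (assumed feature names, synthetic data).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
ehr = pd.DataFrame({
    "age": rng.integers(20, 90, n),
    "bmi": rng.normal(27, 5, n),
    "hba1c": rng.normal(6.0, 1.2, n),
    "prior_admissions": rng.poisson(1.0, n),
})
# Synthetic outcome: hospital readmission within one year (illustration only).
logit = 0.03 * ehr["age"] + 0.4 * ehr["prior_admissions"] + 0.3 * (ehr["hba1c"] - 6) - 3.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(ehr, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))
# Stratify patients into low/medium/high risk tiers by predicted probability.
tiers = pd.cut(risk, bins=[0, 0.2, 0.5, 1.0], labels=["low", "medium", "high"])
print(pd.Series(tiers).value_counts())
```

In practice the feature set would come from curated clinical records, and calibration, privacy and fairness checks of the kind raised in the abstract would precede any clinical use of such tiers.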
Procedia PDF Downloads 66520 Electronic Structure Studies of Mn Doped La₀.₈Bi₀.₂FeO₃ Multiferroic Thin Film Using Near-Edge X-Ray Absorption Fine Structure
Authors: Ghazala Anjum, Farooq Hussain Bhat, Ravi Kumar
Abstract:
Multiferroic materials are vital for new applications and memory devices, not only because of the presence of multiple types of domains but also as a result of the cross-correlation between coexisting forms of magnetic and electrical order. In spite of the wide range of studies on bulk multiferroic ceramics, their realization in thin-film form is still limited due to some crucial problems. During the last few years, special attention has been devoted to the synthesis of thin films such as BiFeO₃, as they allow direct integration of the material into device technology. Therefore, as part of the exploration of new multiferroic thin films, the preparation and characterization of a La₀.₈Bi₀.₂Fe₀.₇Mn₀.₃O₃ (LBFMO₃) thin film on a LaAlO₃ (LAO) substrate with LaNiO₃ (LNO) as the buffer layer has been carried out. The fact that all the electrical and magnetic properties are closely related to the electronic structure makes it essential to study the electronic structure of the system under study. Without this knowledge, one can never be sure about the mechanism responsible for the different properties exhibited by the thin film. A literature review reveals that studies on changes in the atomic and hybridization states in multiferroic samples are still insufficient, with few exceptions. The technique of X-ray absorption spectroscopy (XAS) has made great strides towards the goal of providing such information, as it provides a unique signature of a given material. In this context, it is timely to carry out an electronic structure study of the elements present in the LBFMO₃ multiferroic thin film on the LAO substrate with an LNO buffer layer, synthesized by the RF sputtering technique. We report the electronic structure studies of a well-characterized LBFMO₃ multiferroic thin film on an LAO substrate with LNO as the buffer layer using near-edge X-ray absorption fine structure (NEXAFS). The present exploration has been performed to find out the valence state and crystal-field symmetry of the ions present in the system. NEXAFS data of the O K-edge spectra reveal a slight shift in peak position along with growth in the intensities of the low-energy feature. Studies of the Mn L₃,₂-edge spectra indicate the presence of a Mn³⁺/Mn⁴⁺ network, apart from a very small contribution from Mn²⁺ ions in the system, which substantiates the magnetic properties exhibited by the thin film. The Fe L₃,₂-edge spectra, along with the spectra of reference compounds, reveal that Fe ions are present in the +3 state. The electronic structure and valence states are found to be in accordance with the magnetic properties exhibited by the LBFMO/LNO/LAO thin film.Keywords: magnetic, multiferroic, NEXAFS, x-ray absorption fine structure, XMCD, x-ray magnetic circular dichroism
Procedia PDF Downloads 161519 Magnetic Navigation in Underwater Networks
Authors: Kumar Divyendra
Abstract:
Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring, marine wildlife management, etc. A typical UWSN system consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links. RF communication does not work underwater, and GPS is likewise unavailable. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from some special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward them to the AUVs using optical links when an AUV is in range. This helps reduce the number of hops covered by data packets and helps conserve energy. We consider the three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater. They attach themselves to the surface using a rod and can only move upwards or downwards using a pump and bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds out its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation. An AUV intending to move closer to a node with given coordinates moves hop by hop through nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple different approaches, such as the Inertial Navigation System (INS), Doppler Velocity Log (DVL), computer vision-based navigation, etc., have been proposed. These systems have their own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that makes use of the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN. The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field, which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates. We make use of this model within our work. We combine it with the hop-by-hop movement described earlier so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to prove the effectiveness of our model with respect to other methods described in the literature.Keywords: clustering, deep learning, network backbone, parallel computing
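The hop-distance coordinate scheme described above can be sketched as follows: hop counts from each surface landmark are obtained by breadth-first search over the acoustic connectivity graph, and an AUV heading for a target node moves hop by hop to the neighbour whose hop-coordinate vector is closest to the target's. The topology, node names and distance metric below are illustrative assumptions; the paper's MATLAB simulations and the IGRF-based deep-learning position predictor are not reproduced here.

```python
# Hop-distance coordinates from surface landmark nodes (illustrative sketch).
from collections import deque

# Assumed acoustic connectivity graph: node -> neighbours; S1, S2 are surface nodes.
graph = {
    "S1": ["A", "B"], "S2": ["B", "C"],
    "A": ["S1", "B", "D"], "B": ["S1", "S2", "A", "C"],
    "C": ["S2", "B", "E"], "D": ["A", "E"], "E": ["C", "D"],
}
landmarks = ["S1", "S2"]

def hop_distances(source):
    """BFS hop counts from one landmark to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Coordinate of a node = tuple of its hop distances to each landmark.
per_landmark = [hop_distances(s) for s in landmarks]
coord = {node: tuple(d[node] for d in per_landmark) for node in graph}

def next_hop(current, target):
    """Greedy hop-by-hop step: move to the neighbour closest to the target."""
    def distance(node):
        return sum(abs(a - b) for a, b in zip(coord[node], coord[target]))
    return min(graph[current], key=distance)

print(coord)
print(next_hop("D", "C"))  # one step of an AUV heading toward node C
```

The greedy step uses a simple L1 distance over hop coordinates; any monotone metric over the landmark vector would serve the same routing purpose.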
Procedia PDF Downloads 100518 Suicide Wrongful Death: Standard of Care Problems Involving the Inaccurate Discernment of Lethal Risk When Focusing on the Elicitation of Suicide Ideation
Authors: Bill D. Geis
Abstract:
Suicide wrongful death forensic cases are the fastest-rising tort in mental health law. It is estimated that suicide-related cases have accounted for 15% of U.S. malpractice claims since 2006. Most suicide-related personal injury claims fall into the legal category of 'wrongful death.' Though mental health experts may be called on to address a range of forensic questions in wrongful death cases, the central consultation that most experts provide is about the negligence element, specifically the issue of whether the clinician met the clinical standard of care in assessing, treating, and managing the deceased person's mental health care. Standards of care, varying from U.S. state to state, are broad and address what a reasonable clinician might do in a similar circumstance. This leaves it to forensic experts to put forth a reasoned estimate of what the standard of care should have been in the specific case under litigation. Because the general state guidelines for the standard of care are broad, forensic experts are readily retained to provide scientific and clinical opinions about whether or not a clinician met the standard of care in their suicide assessment, treatment, and management of the case. In the past and in much of current practice, the assessment of suicide has centered on the elicitation of verbalized suicide ideation. Research in recent years, however, has indicated that the majority of persons who end their lives do not say they are suicidal at their last medical or psychiatric contact. Near-term risk assessment that goes beyond verbalized suicide ideation is needed. Our previous research employed structural equation modeling to predict lethal suicide risk: eight negative thought patterns (feeling like a burden on others, hopelessness, self-hatred, etc.), mediated by nine transdiagnostic clinical factors (mental torment, insomnia, substance abuse, PTSD intrusions, etc.), were combined to predict acute lethal suicide risk. This structural equation model, the Lethal Suicide Risk Pattern (LSRP), Acute model, had excellent goodness of fit [χ²(47) = 94.25, p < .001; CFI = .98; RMSEA = .05, 90% CI = .03-.06, p(RMSEA ≤ .05) = .63; AIC = 340.25]. A further SEM analysis was completed for this paper, adding a measure of Acute Suicide Ideation to the previous model. Acceptable prediction model fit was no longer achieved [χ²/df = 3.571, CFI = .953, RMSEA = .075, 90% CI = .065-.085, AIC = 529.550]. This finding suggests that, in this additional study, immediate verbalized suicide ideation information was unhelpful in the assessment of lethal risk. The LSRP and other dynamic, near-term risk models (such as the Acute Suicidal Affective Disturbance model and the Suicide Crisis Syndrome model), which go beyond elicited suicide ideation, need to be incorporated into current clinical suicide assessment training. Without this training, the standard of care for suicide assessment is out of sync with current research, an emerging dilemma for the forensic evaluation of suicide wrongful death cases.Keywords: forensic evaluation, standard of care, suicide, suicide assessment, wrongful death
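The abstract reports conventional SEM fit indices (χ², CFI, RMSEA, AIC). The sketch below shows how such fit statistics are typically obtained in Python using the semopy package; the model specification, variable names and synthetic data are hypothetical placeholders, not the LSRP model or the study's data, and the original analyses were not necessarily run with this software.

```python
# Illustrative SEM fit check with semopy; model and variables are hypothetical.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 500
torment = rng.normal(size=n)  # hypothetical latent "mental torment" factor
df = pd.DataFrame({
    "insomnia": torment + rng.normal(scale=0.8, size=n),
    "intrusions": torment + rng.normal(scale=0.8, size=n),
    "substance": torment + rng.normal(scale=0.8, size=n),
    "self_hatred": rng.normal(size=n),
})
df["risk"] = 0.6 * torment + 0.3 * df["self_hatred"] + rng.normal(scale=0.7, size=n)

# lavaan-style model description: one latent factor and one structural regression.
desc = """
Torment =~ insomnia + intrusions + substance
risk ~ Torment + self_hatred
"""
model = semopy.Model(desc)
model.fit(df)
# calc_stats returns the usual fit statistics (chi2, CFI, RMSEA, AIC, among others).
print(semopy.calc_stats(model).T)
```

A poorly fitting addition to the model, analogous to the ideation measure described above, would show up here as a worsening of CFI, RMSEA and AIC relative to the baseline model.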
Procedia PDF Downloads 71517 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards
Authors: Golnush Masghati-Amoli, Paul Chin
Abstract:
Over recent years, with the rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model has been developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards in sparse-data cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concerns about the difficulty of explaining the models for regulatory purposes.Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering
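One way to read the WoE-matching idea is that, instead of computing WoE from observed good/bad counts per bin, the bin-level average of an ML model's predicted log-odds is used as the WoE estimate. The sketch below contrasts the two calculations on synthetic data; the single feature, the binning, the classifier and the centring on the prior log-odds are illustrative assumptions, not Dun and Bradstreet's implementation.

```python
# Sketch: traditional WoE vs. ML-score-based WoE per bin (illustrative only).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)                         # one explanatory variable
p_bad = 1 / (1 + np.exp(-(0.9 * x - 1.0)))     # synthetic default probability
bad = rng.random(n) < p_bad

df = pd.DataFrame({"x": x, "bad": bad})
df["bin"] = pd.qcut(df["x"], 5)

# Traditional WoE per bin: ln(share of goods in bin / share of bads in bin).
goods_total = (~df["bad"]).sum()
bads_total = df["bad"].sum()
grouped = df.groupby("bin", observed=True)["bad"]
goods_in_bin = grouped.size() - grouped.sum()
bads_in_bin = grouped.sum()
woe_traditional = np.log((goods_in_bin / goods_total) / (bads_in_bin / bads_total))

# Hybrid-style WoE: per-bin mean of the ML model's predicted good/bad log-odds,
# centred on the portfolio prior so the scale matches the traditional WoE.
ml = GradientBoostingClassifier().fit(df[["x"]], df["bad"])
p = ml.predict_proba(df[["x"]])[:, 1]
df["log_odds_good"] = np.log((1 - p) / p)
prior = np.log(goods_total / bads_total)
woe_hybrid = df.groupby("bin", observed=True)["log_odds_good"].mean() - prior

print(pd.DataFrame({"traditional": woe_traditional, "hybrid": woe_hybrid}))
```

The ML-based estimate remains usable in bins with very few observed bads, which is where the abstract notes traditional WoE scorecards break down.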
Procedia PDF Downloads 140516 Geospatial Modeling Framework for Enhancing Urban Roadway Intersection Safety
Authors: Neeti Nayak, Khalid Duri
Abstract:
Despite the many advances made in transportation planning, the number of injuries and fatalities in the United States which involve motorized vehicles near intersections remains largely unchanged year over year. Data from the National Highway Traffic Safety Administration for 2018 indicates that accidents involving motorized vehicles at traffic intersections accounted for 8,245 deaths and 914,811 injuries. Furthermore, collisions involving pedal cyclists killed 861 people (38% at intersections) and injured 46,295 (68% at intersections), while accidents involving pedestrians claimed 6,247 lives (25% at intersections) and injured 71,887 (56% at intersections), the highest tallies registered in nearly 20 years. Some of the causes attributed to the rising number of accidents relate to increasing populations and the associated changes in land and traffic usage patterns, insufficient visibility conditions, and inadequate applications of traffic controls. Intersections that were initially designed with a particular land use pattern in mind may be rendered obsolete by subsequent developments. Many pedestrian accidents occur at locations that should have been designed with safe crosswalks. Conventional solutions for evaluating intersection safety often require costly deployment of engineering surveys and analysis, which limits the capacity of resource-constrained administrations to satisfy their community's needs for safe roadways adequately, effectively relegating mitigation efforts for high-risk areas to post-incident responses. This paper demonstrates how geospatial technology can identify high-risk locations and evaluate the viability of specific intersection management techniques. GIS is used to simulate relevant real-world conditions: the presence of traffic controls, zoning records, locations of interest for human activity, design speed of roadways, topographic details and immovable structures. The proposed methodology provides a low-cost mechanism for empowering urban planners to reduce the risks of accidents using 2-dimensional data representing multi-modal street networks, parcels, crosswalks and demographic information alongside 3-dimensional models of buildings, elevation, slope and aspect surfaces to evaluate visibility and lighting conditions and estimate probabilities for jaywalking and risks posed by blind or uncontrolled intersections. The proposed tools were developed using sample areas of Southern California, but the model will scale to other cities which conform to similar transportation standards, given the availability of relevant GIS data.Keywords: crosswalks, cyclist safety, geotechnology, GIS, intersection safety, pedestrian safety, roadway safety, transportation planning, urban design
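A small part of the screening described above can be illustrated with GeoPandas: buffering intersection points, joining them against traffic-control and activity layers, and flagging uncontrolled intersections surrounded by human activity. The layer file names, the projected metre-based CRS, the radii and the thresholds are all assumptions for illustration; the visibility, lighting and 3-D analyses described in the abstract are not reproduced here.

```python
# Sketch: flag candidate high-risk intersections with GeoPandas (assumed inputs).
import geopandas as gpd

# Hypothetical point layers, all in a projected CRS with metre units.
intersections = gpd.read_file("intersections.shp")
signals = gpd.read_file("traffic_controls.shp")       # signals and stop signs
activity = gpd.read_file("points_of_interest.shp")    # schools, shops, transit stops

def count_within(points, targets, radius_m):
    """Count target features within radius_m of each point."""
    buffered = points.copy()
    buffered["geometry"] = points.geometry.buffer(radius_m)
    joined = gpd.sjoin(buffered, targets, how="left", predicate="intersects")
    return joined.groupby(joined.index)["index_right"].count()

intersections["n_controls_30m"] = count_within(intersections, signals, 30)
intersections["n_activity_200m"] = count_within(intersections, activity, 200)

# Simple screening rule with illustrative thresholds: busy surroundings, no control.
flagged = intersections[
    (intersections["n_controls_30m"] == 0) & (intersections["n_activity_200m"] >= 5)
]
print(len(flagged), "intersections flagged for further visibility/lighting analysis")
```

Flagged locations would then feed the finer-grained 3-D visibility and lighting checks that the paper performs before any intervention is recommended.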
Procedia PDF Downloads 111515 Cut-Off of CMV Cobas® Taqman® (CAP/CTM Roche®) for Introduction of Ganciclovir Pre-Emptive Therapy in Allogeneic Hematopoietic Stem Cell Transplant Recipients
Authors: B. B. S. Pereira, M. O. Souza, L. P. Zanetti, L. C. S. Oliveira, J. R. P. Moreno, M. P. Souza, V. R. Colturato, C. M. Machado
Abstract:
Background: The introduction of prophylactic or preemptive therapies has effectively decreased CMV mortality rates after hematopoietic stem cell transplantation (HSCT). CMV antigenemia (pp65) and quantitative PCR are methods currently approved for CMV surveillance in pre-emptive strategies. Commercial assays are preferred, as cut-off levels defined by in-house assays may vary among different protocols and in general show low reproducibility. Moreover, comparison of published data among different centers is only possible if international standards of quantification are included in the assays. Recently, the World Health Organization (WHO) established the first international standard for CMV detection. The real-time PCR COBAS AmpliPrep/COBAS TaqMan (CAP/CTM) assay (Roche®) was developed using the WHO standard for CMV quantification. However, the cut-off for the introduction of antiviral therapy has not yet been determined. Methods: We conducted a retrospective study to determine: 1) the sensitivity and specificity of the new CMV CAP/CTM test in comparison with pp65 antigenemia to detect episodes of CMV infection/reactivation, and 2) the cut-off viral load for the introduction of ganciclovir (GCV). Pp65 antigenemia was performed, and the corresponding plasma samples were stored at -20°C for further CMV detection by CAP/CTM. Comparison of the tests was performed by kappa index. The appearance of positive antigenemia was considered the state variable to determine the cut-off CMV viral load by ROC curve analysis. Statistical analysis was performed using SPSS software version 19 (SPSS, Chicago, IL, USA). Results: Thirty-eight patients were included and followed from August 2014 through May 2015. The antigenemia test detected 53 episodes of CMV infection in 34 patients (89.5%), while CAP/CTM detected 37 episodes in 33 patients (86.8%). AG and PCR results were compared in 431 samples, and the kappa index was 30.9%. The median time to first AG detection was 42 (28-140) days, while CAP/CTM detected CMV a median of 7 days earlier (34 days, ranging from 7 to 110 days). The optimum cut-off value of CMV DNA to detect positive antigenemia was 34.25 IU/mL, with 88.2% sensitivity, 100% specificity and an AUC of 0.91. This cut-off value is below the limit of detection and quantification of the equipment, which is 56 IU/mL. According to the CMV recurrence definition, 16 episodes of CMV recurrence were detected by antigenemia (47.1%) and 4 (12.1%) by CAP/CTM. The duration of viremia as detected by antigenemia was shorter (60.5% of the episodes lasted ≤ 7 days) in comparison to CAP/CTM (57.9% of the episodes lasting 15 days or more). These data suggest that the use of antigenemia to define the duration of GCV therapy might prompt early interruption of antivirals, which may favor CMV reactivation. The CAP/CTM PCR could possibly provide safer information concerning the duration of GCV therapy. As prolonged treatment may increase the risk of toxicity, this hypothesis should be confirmed in prospective trials. Conclusions: Even though CAP/CTM by Roche showed good qualitative correlation with the antigenemia technique, the fully automated CAP/CTM did not demonstrate increased sensitivity. The cut-off value below the limit of detection and quantification may result in delayed introduction of pre-emptive therapy.Keywords: antigenemia, CMV COBAS/TAQMAN, cytomegalovirus, antiviral cut-off
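The study derived its cut-off with ROC analysis in SPSS, using positive antigenemia as the state variable. A minimal sketch of the same procedure in Python is shown below, selecting the viral-load threshold that maximizes Youden's J; the viral-load values here are synthetic placeholders, not the study's data.

```python
# ROC-based cut-off selection (illustrative; synthetic viral loads, not study data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
# Synthetic CMV DNA loads (IU/mL): higher in samples with positive antigenemia.
neg = rng.lognormal(mean=2.5, sigma=0.8, size=300)   # antigenemia-negative samples
pos = rng.lognormal(mean=4.5, sigma=0.8, size=100)   # antigenemia-positive samples
viral_load = np.concatenate([neg, pos])
antigenemia = np.concatenate([np.zeros(300), np.ones(100)])

# ROC curve with antigenemia as the state variable and viral load as the predictor.
fpr, tpr, thresholds = roc_curve(antigenemia, viral_load)
youden_j = tpr - fpr
best = np.argmax(youden_j)

print("AUC:", round(roc_auc_score(antigenemia, viral_load), 2))
print("Cut-off (IU/mL):", round(thresholds[best], 2))
print("Sensitivity:", round(tpr[best], 3), "Specificity:", round(1 - fpr[best], 3))
```

As the abstract notes, a statistically optimal cut-off is only clinically usable if it sits above the assay's limits of detection and quantification.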
Procedia PDF Downloads 194514 Study of Palung Granite in Central Nepal with Special Reference to Field Occurrence, Petrography and Mineralization
Authors: Narayan Bhattarai, Arjun Bhattarai, Kabi Raj Paudyal, Lalu Paudel
Abstract:
Palung granite is a leucocratic, alkali feldspar granite, one of the six major granite bodies of the Lesser Himalaya of Nepal. The Cambro-Ordovician granite body has intruded the Palaeozoic metasedimentary rocks of the Kathmandu Complex in Central Nepal. The granite crystallized from magma that was mainly generated by anatexis of the Precambrian continental crust. The magma is heterogeneous with respect to the primary ages and/or metamorphic histories of the magma source rocks. This indicates either derivation from (meta-)sediments or intense mixing of different crustally derived magmas. The genesis of the Palung granite is possibly related to an orogeny which affected the Indian shield in Lower Paleozoic times. The granite body has been mapped into different zones through visual inspection and petrographic study: i. Quartz-rich granite: Quartz is smoky to grayish, euhedral to subhedral, 0.2 to 0.7 cm, and constitutes 30 to 40%. Feldspar is white to brownish, subhedral to euhedral, more than 3 cm, and constitutes 20–30%. Tourmaline is black, 0.1 to 0.2 cm in size, and constitutes 10 to 20%. Biotite occurs as black flakes up to 0.2 cm, representing 5-8%. ii. Feldspar-rich granite: white to grayish, medium to coarse-grained, containing feldspar, quartz, biotite, muscovite and tourmaline. Feldspar porphyritic crystals up to 2.5 cm, subhedral, represent 50–60%; quartz is smoky, transparent, and represents 30–40%; biotite is dark brown to black, with irregular crystals of up to 0.5 cm, and represents 8–20%; tourmaline is black, fractured, in small needles, and represents 5–10%; and muscovite is white to brown and represents 1-4%. iii. Biotite granite: grey to white, medium to coarse-grained, containing quartz, feldspar, biotite and tourmaline. Feldspar crystals up to 2.5 cm represent 40–50%; quartz is smoky, representing 30–40%; biotite is dark brown to black, with a crystal size of 0.5 cm, representing 10–20%; tourmaline is black, in small needles, 5–10%; and muscovite is white to brown, representing 3-5%. iv. Muscovite granite: medium to coarse-grained, brown and gray, containing quartz, feldspar, muscovite and tourmaline. Feldspar is white to brown, with crystal sizes of 0.2–0.4 cm, representing 40–50%; quartz is brown and white, transparent, with crystals up to 1 cm, representing 35–50%; tourmaline is black, opaque, needle-shaped, representing 7–20%; and muscovite is brownish to white, with flakes up to 0.3 cm, representing 5–10%. Xenoliths are very common and are not genetically related to the granite. They are composed mostly of fine-grained, grayish quartz-biotite (muscovite) schist and garnetiferous quartz-mica schist.Keywords: leucocratic granite, cambro-ordovician granite, lesser himalayan granite, pegmatite
Procedia PDF Downloads 77513 Changing from Crude (Rudimentary) to Modern Method of Cassava Processing in the Ngwo Village of Njikwa Sub Division of North West Region of Cameroon
Authors: Loveline Ambo Angwah
Abstract:
The processing of cassava tubers or roots into food using crude and rudimentary methods (hand peeling, grating, frying and sun drying) is a very cumbersome and difficult process. The crude methods are time-consuming and labour-intensive. Modern processing methods, on the other hand, which use machines to perform the various processes such as washing, peeling, grinding, oven drying, fermentation and frying, are easier, less time-consuming and less labour-intensive. Using rudimentary methods, cassava roots are processed into numerous products and utilized in various ways according to local customs and preferences. For the people of Ngwo village, cassava is transformed locally into a flour or powder form called 'cumcum'. It is also soaked in water to give a kind of food called 'water fufu' and fried to give 'garri'. The leaves are consumed as vegetables. In addition, its relatively high yields and its ability to stay underground after maturity for long periods give cassava a considerable advantage as a commodity used by poor rural households in the community to fight poverty. It plays a major role in efforts to alleviate the food crisis because of its efficient production of food energy, year-round availability, tolerance to extreme stress conditions, and suitability to present farming and food systems in Africa. Improvement of cassava processing and utilization techniques would greatly increase labour efficiency, incomes, and living standards of cassava farmers and the rural poor, as well as enhance the shelf life of products, facilitate their transportation, increase marketing opportunities, and help improve human and livestock nutrition. This paper presents a general overview of the crude cassava processing and utilization methods now used by subsistence and small-scale farmers in Ngwo village of the North West region of Cameroon, and examines the opportunities for improving processing technologies. Cassava needs processing because the roots cannot be stored for long; they rot within 3-4 days of harvest. They are bulky, with about 70% moisture content, and therefore transportation of the tubers to markets is difficult and expensive. The roots and leaves contain varying amounts of cyanide, which is toxic to humans and animals, while raw cassava roots and uncooked leaves are not palatable. Therefore, cassava must be processed into various forms in order to increase the shelf life of the products, facilitate transportation and marketing, reduce cyanide content and improve palatability.Keywords: cassava roots, crude ways, food system, poverty
Procedia PDF Downloads 169