Search results for: standardized
136 Radiation Induced DNA Damage and Its Modification by Herbal Preparation of Hippophae rhamnoides L. (SBL-1): An in vitro and in vivo Study in Mice
Authors: Anuranjani Kumar, Madhu Bala
Abstract:
Ionising radiation exposure induces the generation of free radicals and oxidative DNA damage. SBL-1, a radioprotective extract prepared from the leaves of Hippophae rhamnoides L. (common name: sea buckthorn), conferred > 90% survival in mice treated with a lethal dose (10 Gy) of ⁶⁰Co gamma irradiation. In this study, the early effects of pre-treatment with or without SBL-1 in peripheral blood lymphocytes (PBMCs) were investigated by cell viability assays (trypan blue and MTT). A quantitative in vitro study with Hoechst/PI staining was performed to assess apoptosis/necrosis in PBMCs irradiated at 2 Gy with or without SBL-1 pre-treatment (at different concentrations) up to 24 and 48 h. The comet assay was performed in vivo to detect DNA strand breaks and their repair in peripheral blood lymphocytes at the lethal dose (10 Gy). For this study, male mice (wt. 28 ± 2 g) were administered a radioprotective dose (30 mg/kg body weight) of SBL-1, 30 min prior to irradiation. Animals were sacrificed at 24 h and 48 h. Blood was drawn through cardiac puncture, and blood lymphocytes were separated using a Histopaque column. Both neutral and alkaline comet assays were performed using a standardized technique. In irradiated animals, the alkaline comet assay revealed single-strand breaks (SSBs), with a significant (p < 0.05) increase in percent DNA in tail and Olive tail moment (OTM) at 24 h, while at 48 h the percent DNA in tail increased further significantly (p < 0.02). Double-strand breaks (DSBs) increased significantly (p < 0.01) at 48 h in the neutral assay, in comparison to the untreated control. Animals pre-treated with SBL-1 before irradiation showed significantly (p < 0.05) fewer DSBs at 48 h in comparison to the irradiated group. The group treated with SBL-1 alone showed no toxicity. 
The antioxidant potential of SBL-1 was also investigated by in vitro biochemical assays such as DPPH (p < 0.05), ABTS, reducing ability (p < 0.09), hydroxyl radical scavenging (p < 0.05), ferric reducing antioxidant power (FRAP), superoxide radical scavenging activity (p < 0.05), and hydrogen peroxide scavenging activity (p < 0.05). SBL-1 showed strong free radical scavenging power, which plays an important role in studies of radiation-induced injuries. SBL-1-treated PBMCs showed significantly (p < 0.02) higher viability in the trypan blue assay at 24-hour incubation. Keywords: radiation, SBL-1, SSBs, DSBs, FRAP, PBMCs
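The Olive tail moment (OTM) reported above is conventionally computed as the distance between the head and tail intensity centroids weighted by the fraction of DNA in the tail. A minimal sketch of that computation (the numeric values are illustrative, not from the study):

```python
def olive_tail_moment(head_cg, tail_cg, pct_dna_in_tail):
    """Olive tail moment = |tail centre of gravity - head centre of gravity|
    multiplied by the fraction of DNA in the tail (percent / 100)."""
    return abs(tail_cg - head_cg) * (pct_dna_in_tail / 100.0)

# Hypothetical comet profile: head CG at 10 um, tail CG at 35 um, 40% DNA in tail
print(olive_tail_moment(10.0, 35.0, 40.0))  # -> 10.0
```

Because OTM folds both tail displacement and tail intensity into one number, it can rise either when breaks migrate further or when more DNA migrates, which is why the abstract reports it alongside percent DNA in tail.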
Procedia PDF Downloads 154
135 The Risk of Bleeding in Knee or Shoulder Injections in Patients on Warfarin Treatment
Authors: Muhammad Yasir Tarar
Abstract:
Background: Intraarticular steroid injections are an effective option for alleviating the symptoms of conditions like osteoarthritis, rheumatoid arthritis, crystal arthropathy, and rotator cuff tendinopathy. Most of these injections are performed in the elderly, who are often on polypharmacy, at times including anticoagulants. Up to 6% of patients aged 80-84 years have been reported to be taking warfarin. The literature on the safety of intraarticular injections in patients on warfarin is scarce, and it has remained debatable over the years which approach is safe for these patients: continuing warfarin carries a theoretical bleeding risk, while stopping it can lead to severe, life-threatening thromboembolic events in high-risk patients. Objectives: To evaluate the risk of bleeding complications in patients on warfarin undergoing intraarticular injections or arthrocentesis. Study Design & Methods: A literature search of the MEDLINE (1946 to present), EMBASE (1974 to present), and Cochrane CENTRAL (1988 to present) databases was conducted in November 2020 using any combination of the keywords Injection, Knee, Shoulder, Joint, Intraarticular, Arthrocentesis, Warfarin, and Anticoagulation, for articles published in any language with no publication year limit. The inclusion criteria required reporting on the rate of bleeding complications following injection of the knee or shoulder in patients on warfarin treatment. Randomized controlled trials and prospective and retrospective study designs were included. A standardized electronic proforma for data extraction was created. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was used. The Cochrane Risk of Bias Tool was used to assess the risk of bias in included RCTs, and the MINORS (methodological index for non-randomized studies) tool was used to appraise bias in observational studies. 
Results: The database search yielded a total of 852 articles. Relevant articles were shortlisted as per the inclusion criteria, and 7 articles were deemed suitable for inclusion. The pooled sample comprised 1033 joints, of which 820 were specified as knee or shoulder joints. Only 6 joints had bleeding complications: 5 with early bleeding at the time of injection or aspiration and one with a late bleeding complication at an INR of 5. Additionally, 2 patients complained of bruising, 3 of pain, and 1 was managed for infection. Conclusions: The results of the meta-analysis show that it is relatively safe to perform intraarticular injections in patients on warfarin regardless of the INR range. Keywords: arthrocentesis, warfarin, bleeding, injection
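As a rough sanity check on the headline figure, the observed rate of 6 bleeding complications in 1033 joints can be given an approximate confidence interval. The sketch below uses a Wilson score interval, which is our choice of method and is not stated in the abstract:

```python
from math import sqrt

def wilson_ci(events, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

rate = 6 / 1033                 # observed bleeding rate, about 0.58%
lo, hi = wilson_ci(6, 1033)     # approximately 0.3% to 1.3%
```

Even the upper bound of this interval stays in the low single-digit percent range, which is consistent with the abstract's conclusion that the procedure is relatively safe in this population.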
Procedia PDF Downloads 77
134 Threading Professionalism Through Occupational Therapy Curriculum: A Framework and Resources
Authors: Ashley Hobson, Ashley Efaw
Abstract:
Professionalism is an essential skill for clinicians, particularly for Occupational Therapy Providers (OTPs). The World Federation of Occupational Therapists (WFOT) Guiding Principles for Ethical Occupational Therapy Practice and the American Occupational Therapy Association (AOTA) Code of Ethics establish expectations for professionalism among OTPs, emphasizing its importance in the field. However, the teaching and assessment of professionalism vary across OTP programs. The flexibility provided by country standards allows programs to determine their own approaches to meeting these standards, resulting in inconsistency. Educators in both academic and fieldwork settings face challenges in objectively assessing and providing feedback on student professionalism. Although they observe instances of unprofessional behavior, there is no standardized assessment measure to evaluate professionalism in OTP students. While most students are committed to learning and applying professionalism skills, they enter OTP programs with varying levels of proficiency in this area. Consequently, they lack a uniform understanding of professionalism and an objective means to self-assess their current skills and identify areas for growth. It is crucial to explicitly teach professionalism, have students self-assess their professionalism skills, and have OTP educators assess student professionalism. This approach is necessary for fostering students' professionalism journeys. Traditionally, there has been no objective way for students to self-assess their professionalism or for educators to provide objective assessments and feedback. To establish a uniform approach to professionalism, the authors incorporated professionalism content into their curriculum. Utilizing an operational definition of professionalism, the authors integrated professionalism into didactic, fieldwork, and capstone courses. 
The complexity of the content and the professionalism skills expected of students increase each year to ensure students graduate with the skills to practice in accordance with the WFOT Guiding Principles for Ethical Occupational Therapy Practice and the AOTA Code of Ethics. Two professionalism assessments were developed based on the expectations outlined in both documents. The Professionalism Self-Assessment allows students to evaluate their professionalism, reflect on their performance, and set goals. The Professionalism Assessment for Educators is a modified version of the same tool designed for educators. The purpose of this workshop is to provide educators with a framework and tools for assessing student professionalism. The authors discuss how to integrate professionalism content into an OTP curriculum and utilize professionalism assessments to provide constructive feedback and equitable learning opportunities for OTP students in academic, fieldwork, and capstone settings. By adopting these strategies, educators can enhance the development of professionalism among OTP students, ensuring they are well prepared to meet the demands of the profession. Keywords: professionalism, assessments, student learning, student preparedness, ethical practice
Procedia PDF Downloads 41
133 Medial Temporal Tau Predicts Memory Decline in Cognitively Unimpaired Elderly
Authors: Angela T. H. Kwan, Saman Arfaie, Joseph Therriault, Zahra Azizi, Firoza Z. Lussier, Cecile Tissot, Mira Chamoun, Gleb Bezgin, Stijn Servaes, Jenna Stevenon, Nesrine Rahmouni, Vanessa Pallen, Serge Gauthier, Pedro Rosa-Neto
Abstract:
Alzheimer’s disease (AD) can be detected in living people using in vivo biomarkers of amyloid-β (Aβ) and tau, even in the absence of cognitive impairment during the preclinical phase. [¹⁸F]-MK-6240 is a high-affinity positron emission tomography (PET) tracer that quantifies tau neurofibrillary tangles, but its ability to predict cognitive changes associated with early AD symptoms, such as memory decline, is unclear. Here, we assess the prognostic accuracy of baseline [¹⁸F]-MK-6240 tau PET for predicting longitudinal memory decline in asymptomatic elderly individuals. In a longitudinal observational study, we evaluated a cohort of cognitively normal elderly participants (n = 111) from the Translational Biomarkers in Aging and Dementia (TRIAD) study (data collected between October 2017 and July 2020, with a follow-up period of 12 months). All participants underwent tau PET with [¹⁸F]-MK-6240 and Aβ PET with [¹⁸F]-AZD-4694. The exclusion criteria included the presence of head trauma, stroke, or other neurological disorders. The 111 eligible participants were chosen based on the availability of Aβ PET, tau PET, magnetic resonance imaging (MRI), and APOEε4 genotyping. Among these participants, the mean (SD) age was 70.1 (8.6) years; 20 (18%) were tau PET positive, and 71 of 111 (63.9%) were women. A significant association between baseline Braak I-II [¹⁸F]-MK-6240 SUVR positivity and change in composite memory score was observed at the 12-month follow-up, after correcting for age, sex, and years of education (Logical Memory and RAVLT; standardized beta = -0.52 (-0.82 to -0.21), p < 0.001, for dichotomized tau PET, and -1.22 (-1.84 to -0.61), p < 0.0001, for continuous tau PET). 
Moderate cognitive decline was observed for the A+T+ group over the follow-up period, whereas no significant change was observed for A-T+, A+T-, and A-T-, though it should be noted that the A-T+ group was small. Our results indicate that baseline tau neurofibrillary tangle pathology is associated with longitudinal changes in memory function, supporting the use of [¹⁸F]-MK-6240 PET to predict the likelihood of asymptomatic elderly individuals experiencing future memory decline. Overall, [¹⁸F]-MK-6240 PET is a promising tool for predicting memory decline in older adults without cognitive impairment at baseline. This is of critical relevance as the field shifts towards a biological model of AD defined by the aggregation of pathologic tau. Early detection of tau pathology using [¹⁸F]-MK-6240 PET therefore offers hope that patients with AD may be diagnosed during the preclinical phase, before it is too late. Keywords: alzheimer’s disease, braak I-II, in vivo biomarkers, memory, PET, tau
Procedia PDF Downloads 76
132 Biosorption of Nickel by Penicillium simplicissimum SAU203 Isolated from Indian Metalliferous Mining Overburden
Authors: Suchhanda Ghosh, A. K. Paul
Abstract:
Nickel, an industrially important metal, is not mined in India due to the lack of primary mining resources. However, the chromite deposits occurring in the Sukinda and Baula-Nuasahi regions of Odisha, India, are reported to contain around 0.99% nickel entrapped in the goethite matrix of the lateritic, iron-rich ore. Weathering of the dumped chromite mining overburden often leads to contamination of the ground water as well as the surface water with toxic nickel. Microbes inherent to this metal-contaminated environment are reported to be capable of removal as well as detoxification of various metals, including nickel. Nickel-resistant fungal isolates obtained in pure form from the metal-rich overburden were evaluated for their potential to biosorb nickel using their dried biomass. Penicillium simplicissimum SAU203 was the best nickel biosorbent among the 20 fungi tested and was capable of sorbing 16.85 mg Ni/g biomass from a solution containing 50 mg/l of Ni. The identity of the isolate was confirmed using 18S rRNA gene analysis. The sorption behaviour of the isolate was further characterized using the Langmuir and Freundlich adsorption isotherm models, and the results reflected energy-efficient sorption. Fourier-transform infrared spectroscopy studies comparing nickel-loaded and control biomass revealed the involvement of hydroxyl, amine, and carboxylic groups in Ni binding. The sorption process was also optimized for several standard parameters, such as initial metal ion concentration, initial sorbent concentration, incubation temperature and pH, presence of additional cations, and pre-treatment of the biomass with different chemicals. Optimization led to significant improvements in nickel biosorption onto the fungal biomass: P. simplicissimum SAU203 could sorb 54.73 mg Ni/g biomass with an initial Ni concentration of 200 mg/l in solution, and 21.8 mg Ni/g biomass with an initial biomass concentration of 1 g/l of solution. 
The optimum temperature and pH for biosorption were recorded as 30°C and 6.5, respectively. The presence of Zn and Fe ions improved the sorption of Ni(II), whereas cobalt had a negative impact. Pre-treatment of the biomass with various chemical and physical agents affected the efficiency of Ni sorption by P. simplicissimum SAU203 biomass: autoclaving, as well as treatment of the biomass with 0.5 M sulfuric acid or acetic acid, reduced sorption compared to the untreated biomass, whereas biomass treated with NaOH, Na₂CO₃, or Tween 80 (0.5 M) showed augmented metal sorption. Hence, on the basis of the present study, it can be concluded that P. simplicissimum SAU203 has the potential for the removal as well as detoxification of nickel from contaminated environments in general, and particularly from the chromite mining areas of Odisha, India. Keywords: nickel, fungal biosorption, Penicillium simplicissimum SAU203, Indian chromite mines, mining overburden
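The abstract reports fitting sorption data to the Langmuir and Freundlich isotherm models. A minimal sketch of a Langmuir fit via its common linearised form (Ce/qe versus Ce), using hypothetical equilibrium data rather than the study's measurements:

```python
def langmuir(C, qmax, b):
    """Langmuir isotherm: q = qmax * b * C / (1 + b * C)."""
    return qmax * b * C / (1 + b * C)

# Hypothetical equilibrium data generated from qmax = 60 mg/g, b = 0.05 l/mg
Ce = [10, 25, 50, 100, 200]                     # equilibrium concentrations, mg/l
qe = [langmuir(c, 60.0, 0.05) for c in Ce]      # sorbed amounts, mg/g

# Linearised form: Ce/qe = Ce/qmax + 1/(qmax*b), fitted by ordinary least squares
x, y = Ce, [c / q for c, q in zip(Ce, qe)]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
intercept = ybar - slope * xbar
qmax_fit, b_fit = 1 / slope, slope / intercept  # recover Langmuir parameters
print(round(qmax_fit, 1), round(b_fit, 3))      # -> 60.0 0.05
```

With real data the points scatter around the line, and the quality of the linear fit (alongside the analogous log-log fit for the Freundlich model) is what indicates which isotherm better describes the sorption.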
Procedia PDF Downloads 191
131 Progress Toward More Resilient Infrastructures
Authors: Amir Golalipour
Abstract:
In recent years, resilience has emerged as an important topic in transportation infrastructure practice, planning, and design to address the myriad stressors of the future climate facing the Nation. Climate change has increased the frequency of extreme weather events and causes climate and weather patterns to diverge from historic trends, culminating in circumstances where transportation infrastructure and assets operate outside the scope of their design. To design and maintain transportation infrastructure that can continue meeting objectives over its design life, these systems must be made adaptable to the changing climate by incorporating resilience wherever practically and financially feasible. This study focuses on adaptation strategies and the incorporation of resilience into infrastructure construction, maintenance, rehabilitation, and preservation processes, and includes highlights from recent FHWA activities on resilience. It describes existing resilience planning and decision-making practices related to transportation infrastructure; mechanisms to identify, analyze, and prioritize adaptation options; and the strain that future climate and extreme weather pressures place on existing transportation assets, for both single and combined stressor scenarios. Results of two case studies from the Transportation Engineering Approaches to Climate Resiliency (TEACR) project, focused on temperature and precipitation impacts on transportation infrastructure, will be presented. These case studies examined infrastructure performance under future temperature and precipitation conditions compared to traditional climate design parameters. The research team used the adaptation decision-making assessment and the Coupled Model Intercomparison Project (CMIP) processing tool to determine which solution is best to pursue. 
The CMIP processing tool provided projected climate data for temperature and precipitation, which could then be incorporated into the design procedure to estimate performance. As a result, using the future climate scenarios would change the design. These changes were noted to incur only a slight increase in costs; however, it is acknowledged that, network-wide, these costs could be significant. This study will also focus on lessons learned from recent storms, floods, and climate-related events that will help us be better prepared to ensure our communities have a resilient transportation network. It should be highlighted that standardized mechanisms to incorporate resilience practices are required to encourage widespread implementation, mitigate the effects of climate stressors, and ensure the continuance of transportation systems and assets in an evolving climate. Keywords: adaptation strategies, extreme events, resilience, transportation infrastructure
Procedia PDF Downloads 3
130 The Emerging Role of Cannabis as an Anti-Nociceptive Agent in the Treatment of Chronic Back Pain
Authors: Josiah Damisa, Michelle Louise Richardson, Morenike Adewuyi
Abstract:
Lower back pain is a significant cause of disability worldwide and carries great implications for the well-being of affected individuals and society as a whole due to its undeniable socio-economic impact. With its prevalence increasing as a result of an aging global population, the need for novel forms of pain management is ever more pressing. This review aims to provide further insight into current research on the endocannabinoid signaling pathway as a target in the treatment of chronic pain, with particular emphasis on its potential use in the treatment of lower back pain. Potential advantages and limitations of cannabis-based medicines over other forms of analgesia currently licensed for medical use are discussed, in addition to areas that require ongoing consideration and research. To evaluate the efficacy of cannabis-based medicines in chronic pain, studies on the role of medical cannabis in chronic disease were reviewed. Standard searches of the PubMed, Google Scholar, and Web of Science databases were undertaken, with peer-reviewed journal articles reviewed based on the indication for pain management, the cannabis treatment modality used, and study outcomes. Multiple studies suggest an emerging role for cannabis-based medicines as therapeutic agents in the treatment of chronic back pain. A potential synergistic effect has also been proposed when these medicines are co-administered with opiate analgesia, due to the similarity of the opiate and endocannabinoid signaling pathways. However, whilst recent changes to legislation in the United Kingdom mean that cannabis is now licensed for medicinal use on NHS prescription for a number of chronic health conditions, concerns remain as to the efficacy and safety of cannabis-based medicines. Research is lacking into both their side effect profiles and the long-term effects of cannabis use. 
Legal and ethical considerations around the use of these products in standardized medical practice also persist due to the notoriety of cannabis as a drug of abuse. Despite this, cannabis is beginning to gain traction as an alternative or even complementary drug to opiates, with some preclinical studies showing opiate-sparing effects. Whilst there is a paucity of clinical trials in this field, there is scope for cannabinoids to be successful anti-nociceptive agents in managing chronic back pain. The ultimate aim would be to utilize cannabis-based medicines as alternative or complementary therapies, thereby reducing opiate over-reliance and providing hope to individuals who have exhausted all other forms of standard treatment. Keywords: endocannabinoids, cannabis-based medicines, chronic pain, lower back pain
Procedia PDF Downloads 200
129 Predicting Reading Comprehension in Spanish: The Evidence for the Simple View Model
Authors: Gabriela Silva-Maceda, Silvia Romero-Contreras
Abstract:
Spanish is a more transparent language than English, given that it has more direct correspondences between sounds and letters. It has therefore become important to understand how decoding and linguistic comprehension contribute to reading comprehension within the framework of the widely known Simple View Model. This study aimed to identify the predictive power of these two components in a sample of 1st to 4th grade children attending two schools in central Mexico (one public and one private). Within each school, ten children were randomly selected at each grade level, and their parents were asked about reading habits and socioeconomic information. In total, 79 children completed three standardized tests measuring decoding (pseudo-word reading), linguistic comprehension (understanding of paragraphs), and reading comprehension, using subtests from the Clinical Evaluation of Language Fundamentals-Spanish, Fourth Edition, and the Test de Lectura y Escritura en Español (LEE). The data were analyzed using hierarchical regression, with decoding entered in a first step and linguistic comprehension in a second step. Results showed that decoding accounted for 19.2% of the variance in reading comprehension, while linguistic comprehension accounted for an additional 10%, adding up to 29.2% of variance explained: F(2, 75) = 15.45, p < .001. Socioeconomic status derived from the parental questionnaires showed a statistically significant association with the type of school attended, χ²(3, N = 79) = 14.33, p = .002. Nonetheless, when analyzing the Simple View components, only decoding differences were statistically significant (t = -6.92, df = 76.81, p < .001, two-tailed); reading comprehension differences were also significant (t = -3.44, df = 76, p = .001, two-tailed). When socioeconomic status was included in the model, it predicted 5.9% of unique variance, even when already accounting for the Simple View components, adding up to 35.1% of total variance explained. 
This three-predictor model was also significant: F(3, 72) = 12.99, p < .001. In addition, socioeconomic status was significantly correlated with the number of non-textbook books parents reported having at home, both for adults (rho = .61, p < .001) and for children (rho = .47, p < .001). These results converge with a large body of literature finding socioeconomic differences in reading comprehension; in addition, this study suggests that such differences were also present in decoding skills. Although linguistic comprehension differences between schools were expected, it is argued that the test used to measure this variable was not sensitive to linguistic differences, since it came from a test designed to diagnose clinical language disabilities. Even with this caveat, the results show that the components of the Simple View Model predict less than a third of the variance in reading comprehension in Spanish. The results also suggest that a fuller model of reading comprehension is obtained when the family’s socioeconomic status is considered, given the potential differences indicated by the association of socioeconomic status with books at home, a factor that is particularly important in countries where inequality gaps are relatively large. Keywords: decoding, linguistic comprehension, reading comprehension, simple view model, socioeconomic status, Spanish
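The hierarchical regression described above, in which decoding enters the model first, linguistic comprehension enters second, and the increment in R² is read off, can be sketched as follows. The data here are synthetic and the coefficients are illustrative; they are not the study's data:

```python
import random

def ols_r2(X, y):
    """R^2 from OLS with an intercept, via the normal equations
    solved by Gaussian elimination (no external libraries)."""
    rows = [[1.0] + list(x) for x in X]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                      # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * b for a, b in zip(A[j], A[i])]
            c[j] -= f * c[i]
    beta = [0.0] * k
    for i in reversed(range(k)):            # back substitution
        beta[i] = (c[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    yhat = [sum(b * v for b, v in zip(beta, r)) for r in rows]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Synthetic sample of 79 "children": reading depends on both predictors plus noise
random.seed(1)
decoding = [random.gauss(0, 1) for _ in range(79)]
ling = [random.gauss(0, 1) for _ in range(79)]
reading = [0.5 * d + 0.4 * l + random.gauss(0, 1) for d, l in zip(decoding, ling)]

r2_step1 = ols_r2([[d] for d in decoding], reading)        # decoding only
r2_step2 = ols_r2(list(zip(decoding, ling)), reading)      # decoding + linguistic comp.
print(round(r2_step2 - r2_step1, 3))  # unique variance added by linguistic comprehension
```

The difference r2_step2 - r2_step1 is the "additional variance explained" quoted in the abstract (10% for linguistic comprehension); adding socioeconomic status as a third step would be computed the same way.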
Procedia PDF Downloads 328
128 Adaptive Environmental Control System Strategy for Cabin Air Quality in Commercial Aircraft
Authors: Paolo Grasso, Sai Kalyan Yelike, Federico Benzi, Mathieu Le Cam
Abstract:
The cabin air quality (CAQ) in commercial aircraft is of prime interest, especially in the context of the COVID-19 pandemic. Current Environmental Control Systems (ECS) rely on a prescribed fresh airflow per passenger to dilute contaminants. An adaptive ECS strategy is proposed, leveraging air sensing and filtration technologies to ensure better CAQ. This paper investigates the CAQ level achieved in a commercial aircraft cabin during various flight scenarios. The modeling and simulation analysis is performed in a Modelica-based environment describing the dynamic behavior of the system. The model includes three main systems: the cabin, the recirculation loop, and the air-conditioning pack. The cabin model evaluates the thermo-hygrometric conditions and the air quality in the cabin depending on the number of passengers and crew members, the outdoor conditions, and the conditions of the air supplied to the cabin. The recirculation loop includes models of the recirculation fan, ordinary and novel filtration technology, the mixing chamber, and the outflow valve. The air-conditioning pack includes models of the heat exchangers and turbomachinery needed to condition the hot pressurized air bled from the engine, as well as selected contaminants originating from outside or bled from the engine. Different ventilation control strategies are modeled and simulated. Currently, a limited understanding of contaminant concentrations in the cabin and the lack of standardized, systematic methods to collect and record data constitute a challenge in establishing a causal relationship between CAQ and passengers' comfort. As a result, contaminants are neither measured nor filtered during flight, and the current sub-optimal way to avoid their accumulation is dilution with the fresh air flow. However, the use of a prescribed amount of fresh air comes at a cost, making the ECS the most energy-demanding non-propulsive system within an aircraft. 
In this context, this study shows that an ECS based on a reduced, adaptive fresh air flow, relying on air sensing and filtration technologies, provides promising results in terms of CAQ control. The comparative simulation results demonstrate that the proposed adaptive ECS brings substantial improvements to the CAQ, both in controlling the asymptotic values of contaminant concentrations and in mitigating hazardous scenarios such as fume events. Novel architectures allowing adaptive control of the inlet air flow rate based on monitored CAQ will change the requirements for filtration systems and redefine ECS operation. Keywords: cabin air quality, commercial aircraft, environmental control system, ventilation
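The dilution principle underlying both the prescribed and the adaptive ventilation strategies can be illustrated with a single-zone, perfectly mixed mass balance, V·dC/dt = S − Q·C. The sketch below is a deliberately simplified stand-in for the paper's Modelica model; all parameter values are hypothetical:

```python
# Perfectly mixed single-zone cabin: V * dC/dt = S - Q * C
# (S = contaminant generation, Q = fresh airflow; outside air assumed clean).
# Euler integration; the adaptive strategy raises the fresh-air flow only when
# the sensed concentration exceeds a setpoint, instead of running high always.
V = 150.0                   # cabin volume, m^3 (hypothetical)
S = 0.05                    # contaminant generation rate, mg/s (hypothetical)
Q_LOW, Q_HIGH = 0.3, 1.0    # fresh airflow levels, m^3/s (hypothetical)
SETPOINT = 0.1              # sensed-concentration threshold, mg/m^3

def simulate(adaptive, t_end=3600, dt=1.0):
    C = 0.0
    for _ in range(int(t_end / dt)):
        if adaptive:
            Q = Q_HIGH if C > SETPOINT else Q_LOW   # sensor-driven switching
        else:
            Q = Q_HIGH                              # prescribed high flow
        C += dt * (S - Q * C) / V
    return C

fixed = simulate(adaptive=False)   # settles near S / Q_HIGH = 0.05 mg/m^3
adapt = simulate(adaptive=True)    # held near the 0.1 mg/m^3 setpoint
```

The adaptive run keeps the concentration bounded at the setpoint while spending most of the time at the lower airflow, which is the energy argument made in the abstract; the asymptotic value S/Q is exactly the "dilution with fresh air" trade-off being controlled.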
Procedia PDF Downloads 101
127 Moving Target Defense against Various Attack Models in Time Sensitive Networks
Authors: Johannes Günther
Abstract:
Time Sensitive Networking (TSN), standardized in the IEEE 802.1 family of standards, has received increasing attention in the context of mission-critical systems. Such mission-critical systems, e.g., in the automotive, aviation, industrial, and smart factory domains, are responsible for coordinating complex functionalities in real time. In many of these contexts, reliable data exchange fulfilling hard time constraints and quality of service (QoS) conditions is of critical importance. TSN standards are able to provide guarantees for deterministic communication behaviour, in contrast to common best-effort approaches. Therefore, the superior QoS guarantees of TSN may aid the development of new technologies that rely on low latencies and specific bandwidth demands being fulfilled. TSN extends existing Ethernet protocols with numerous standards, providing means for synchronization, management, and overall real-time-focused capabilities. These additional QoS guarantees, as well as management mechanisms, lead to an increased attack surface for potential malicious attackers. As TSN guarantees certain deadlines for priority traffic, an attacker may degrade the QoS by delaying a packet beyond its deadline, or even execute a denial-of-service (DoS) attack if the delays lead to packets being dropped. However, thus far, security concerns have not played a major role in the design of these standards. Thus, while TSN adds valuable characteristics to existing Ethernet protocols, it also introduces new attack vectors and allows for a range of potential attacks. One answer to these security risks is to deploy defense mechanisms according to a moving target defense (MTD) strategy. The core idea relies on reducing the attacker's knowledge about the network. Typically, mission-critical systems suffer from an asymmetric disadvantage. 
DoS or QoS-degradation attacks may be preceded by long periods of reconnaissance, during which the attacker may learn about the network topology, its characteristics, traffic patterns, priorities, bandwidth demands, periodic characteristics of links and switches, and so on. Here, we implemented and tested several MTD-like defense strategies against attacker models of varying capabilities and budgets, as well as collaborative attacks by multiple attackers within a network, all within the context of TSN networks. We modelled the networks and tested our defense strategies on an OMNeT++ testbench, with networks of different sizes and topologies, ranging from a couple of dozen hosts and switches to significantly larger set-ups. Keywords: network security, time sensitive networking, moving target defense, cyber security
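The core MTD intuition, that randomising what reconnaissance has revealed devalues the attacker's knowledge, can be shown with a toy slot-schedule model. This is an illustrative sketch only, not the OMNeT++ experiments described above:

```python
import random

# Toy model: a periodic TSN-like flow transmits in one of N time slots each
# cycle.  With a static schedule, an eavesdropper who observed one cycle hits
# the critical slot every time; randomising the slot each cycle (the "moving
# target") drives the attacker's hit rate down toward 1/N.
random.seed(42)
N_SLOTS, CYCLES = 8, 10000

def attack_success(moving_target):
    observed = 3                      # slot learned during reconnaissance
    hits = 0
    for _ in range(CYCLES):
        slot = random.randrange(N_SLOTS) if moving_target else observed
        hits += (slot == observed)
    return hits / CYCLES

static_rate = attack_success(moving_target=False)  # -> 1.0
mtd_rate = attack_success(moving_target=True)      # -> approx 1/8
```

A real TSN deployment cannot randomise freely, since the schedule must still satisfy the hard deadlines the standard guarantees; the defense strategies evaluated above operate within that constraint, which is what makes the problem non-trivial.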
Procedia PDF Downloads 73
126 A Qualitative Assessment of the Internal Communication of the College of Communication: Basis for a Strategic Communication Plan
Authors: Edna T. Bernabe, Joshua Bilolo, Sheila Mae Artillero, Catlicia Joy Caseda, Liezel Once, Donne Ynah Grace Quirante
Abstract:
Internal communication is essential for an organization to function to its full extent. A strategic communication plan builds an organization’s structure and makes it more systematic. Information is a vital part of communication inside the organization, as it shapes every possible outcome, be it positive or negative. It is, therefore, imperative to assess the communication structure of a particular organization to secure a better and more harmonious communication environment. Thus, this research was intended to identify the internal communication channels used in the Polytechnic University of the Philippines-College of Communication (PUP-COC) as an organization; to identify the flow of information, specifically in downward, upward, and horizontal communication; to assess the accuracy, consistency, and timeliness of its internal communication channels; and to propose a strategic communication plan for information dissemination to improve the existing communication flow in the college. The researchers formulated a framework from the Input-Throughput-Output-Feedback-Goal elements of General Systems Theory and gathered data to assess the PUP-COC’s internal communication. The communication model links the objectives of the study to knowledge of the internal organization of the college. A qualitative approach and a case study as the tradition of inquiry were used to gain a deeper understanding of the internal organizational communication in PUP-COC, with interviews as the primary method for the study. This was supported with quantitative data gathered through a survey of the students of the college. The researchers interviewed 17 participants: the college dean, the 4 chairpersons of the college departments, 11 faculty members and staff, and the acting Student Council president. An interview guide and a standardized questionnaire were formulated as instruments to generate the data. 
After a thorough analysis, it was found that a two-way communication flow exists in PUP-COC. The type of communication channel the internal stakeholders use varies according to whom a particular person is communicating with. The members of the PUP-COC community also use different types of communication channels depending on the flow of communication being used. The most common types of internal communication are letters and memoranda for downward communication, while letters, text messages, and interpersonal communication are often used in upward communication. Various forms of social media were found to be in use in horizontal communication. Accuracy, consistency, and timeliness play a significant role in information dissemination within the college. However, some problems were also found in the communication system. The most common problems are delays in the dissemination of memoranda and letters and the uneven distribution of information and instructions to faculty, staff, and students. This has led the researchers to formulate a strategic communication plan which aims to propose strategies that will solve the communication problems being experienced by the internal stakeholders.
Keywords: communication plan, downward communication, internal communication, upward communication
Procedia PDF Downloads 518
125 Student Feedback of a Major Curricular Reform Based on Course Integration and Continuous Assessment in Electrical Engineering
Authors: Heikki Valmu, Eero Kupila, Raisa Vartia
Abstract:
A major curricular reform was implemented in Metropolia UAS in 2014. The teaching was to be based on larger course entities and collaborative pedagogy. The most thorough reform was conducted in the department of electrical engineering and automation technology. It has already been shown that the reform has been extremely successful with respect to student progression and drop-out rate. The improvement of the results has been much more significant in this department compared to the other engineering departments, which made only minor pedagogical changes. At the beginning of the spring term of 2017, a thorough student feedback project was conducted in the department. The study consisted of thirty questions about the implementation of the curriculum, the student workload, and other matters related to student satisfaction. The reply rate was more than 40%. The students were divided into four categories: first year students [cat. 1] and students of each of the three different majors [categories 2-4]. These categories were found valid since all the students have the same course structure in the first two semesters, after which they may freely select the major. All staff members are likewise divided into four teams. The curriculum consists of consecutive 15 credit (ECTS) courses, each taught by a group of teachers (3-5). There are to be no end exams, and continuous assessment is to be employed. In 2014 the different teacher groups were encouraged to innovatively employ different assessment methods within the given specs. One of these methods has since been used in categories 1 and 2. These students have to complete a number of compulsory tasks each week to pass the course, and the actual grade is defined by a smaller number of tests throughout the course. The tasks vary from homework assignments, reports, and laboratory exercises to larger projects, and the smaller tests are usually organized during the regular lecture hours. 
The teachers of the other two majors have been pedagogically more conservative. Student progression has been better in categories 1 and 2 compared to categories 3 and 4. One of the main goals of this survey was to analyze the reasons for the difference and the assessment methods in detail, besides general student satisfaction. The results show that in the categories following the specified assessment model more strictly, much more versatile assessment methods are used and the basic spirit of the new pedagogy is followed. Also, student satisfaction is significantly better in categories 1 and 2. It may be clearly stated that continuous assessment and teacher cooperation improve the learning outcomes, student progression, and student satisfaction. Too much academic freedom seems to lead to worse results [cat. 3 and 4]. A standardized assessment model is launched for all students in autumn 2017. This model is different from the one used so far in categories 1 and 2, allowing more flexibility to teacher groups, but it will force all the teacher groups to follow the general rules in order to further improve the results and student satisfaction.
Keywords: continuous assessment, course integration, curricular reform, student feedback
Procedia PDF Downloads 203
124 Nuclear Materials and Nuclear Security in India: A Brief Overview
Authors: Debalina Ghoshal
Abstract:
Nuclear security is the ‘prevention and detection of, and response to, unauthorised removal, sabotage, unauthorised access, illegal transfer or other malicious acts involving nuclear or radiological material or their associated facilities.’ Ever since the end of the Cold War, nuclear materials security has remained a concern for global security, and with the increase in terrorist attacks, in India especially, the security of nuclear materials remains a priority. India has therefore made continued efforts to tighten security over its nuclear materials to prevent nuclear theft and radiological terrorism. Nuclear security is different from nuclear safety. Physical security is also a serious concern, and India has been careful about the physical security of its nuclear materials. This is all the more important since India is expanding its nuclear power capability to generate electricity for economic development. As India targets 60,000 MW of electricity production by 2030, it has a range of reactors to help it achieve this goal: indigenous Pressurised Heavy Water Reactors, now standardized at 700 MW per reactor; Light Water Reactors; and indigenous Fast Breeder Reactors that can generate more fuel for the future and enable the country to utilise its abundant thorium resource. Nuclear materials security can be enhanced in two important ways. One is through proliferation-resistant technologies and diplomatic efforts towards non-proliferation initiatives. The other is by developing technical means to prevent any leakage of nuclear materials into the hands of asymmetric organisations. New Delhi has already implemented IAEA safeguards on its civilian nuclear installations. Moreover, India has ratified the IAEA Additional Protocol in order to enhance the transparency of its nuclear material and strengthen nuclear security. 
India is a party to the IAEA conventions on nuclear safety and security, in particular the 1980 Convention on the Physical Protection of Nuclear Material and its 2005 amendment, and the 2006 Code of Conduct on the Safety and Security of Radioactive Sources, which enable the country to provide for the highest international standards on nuclear and radiological safety and security. India's nuclear security approach is driven by five key components: governance, nuclear security practice and culture, institutions, technology, and international cooperation. However, there is still scope for further improvement in nuclear materials and nuclear security. According to the NTI Report, ‘India’s improvement reflects its first contribution to the IAEA Nuclear Security Fund etc. In the future, India’s nuclear materials security conditions could be further improved by strengthening its laws and regulations for security and control of materials, particularly for control and accounting of materials, mitigating the insider threat, and for the physical security of materials during transport. India’s nuclear materials security conditions also remain adversely affected due to its continued increase in its quantities of nuclear material, and high levels of corruption among public officials.’ This paper briefly studies the progress made by India in nuclear and nuclear materials security and the steps ahead for India to strengthen it further.
Keywords: India, nuclear security, nuclear materials, non-proliferation
Procedia PDF Downloads 352
123 A Comparison and Discussion of Modern Anaesthetic Techniques in Elective Lower Limb Arthroplasties
Authors: P. T. Collett, M. Kershaw
Abstract:
Introduction: Which method of anaesthesia provides better results for lower limb arthroplasty remains a continuing debate. Multiple meta-analyses have been performed with no clear consensus. The current recommendation is to use neuraxial anaesthesia for lower limb arthroplasty; however, the evidence to support this decision is weak. The Enhanced Recovery After Surgery (ERAS) society has recommended that either technique can be used as part of a multimodal anaesthetic regimen. A local study was performed to see whether current anaesthetic practice correlates with the current recommendations and to evaluate the efficacy of the different techniques utilized. Method: 90 patients who underwent total hip or total knee replacement at Nevill Hall Hospital between February 2019 and July 2019 were reviewed. Data collected included the anaesthetic technique, day 1 opiate use, pain score, and length of stay. The data were collected from anaesthetic charts and the pain team follow-up forms. Analysis: The average age of patients undergoing lower limb arthroplasty was 70. Of those, 83% (n=75) received a spinal anaesthetic and 17% (n=15) received a general anaesthetic. For patients undergoing knee replacement under general anaesthetic, the average day 1 pain score was 2.29, versus 1.94 if a spinal anaesthetic was performed. For hip replacements, the scores were 1.87 and 1.8, respectively. There was no statistically significant difference between these scores. Day 1 opiate usage was significantly higher in knee replacement patients who were given a general anaesthetic (45.7 mg IV morphine equivalent) than in those operated on under spinal anaesthetic (19.7 mg). This difference was not noticeable in hip replacement patients. There was no significant difference in length of stay between the two anaesthetic techniques. Discussion: There was no significant difference in the day 1 pain score between patients who received a general or spinal anaesthetic for either knee or hip replacements. 
The higher pain scores in the knee replacement group overall are consistent with this being a more painful procedure. This is a small patient population, which means any difference between the two groups is unlikely to be representative of a larger population. The pain scale has 4 points, which makes it difficult to identify significant differences between pain scores. Conclusion: There is currently little standardization between the different anaesthetic approaches utilized in Nevill Hall Hospital, likely due to the lack of adherence to a standardized anaesthetic regimen. A standard anaesthetic protocol is a core component of the ERAS recommendations. The results of this study and the guidance from the ERAS society will support the implementation of a new health-board-wide ERAS protocol.
Keywords: anaesthesia, orthopaedics, intensive care, patient centered decision making, treatment escalation
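The day 1 opiate comparison described in this abstract can be illustrated with a small statistical sketch. The doses below are hypothetical stand-ins, not the study's data, and the abstract does not state which test the authors used; a rank-based (non-parametric) test such as the Mann-Whitney U is a common choice for small, skewed opioid-dose samples.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical day 1 IV morphine-equivalent doses (mg) for knee replacement
# patients; illustrative values only, NOT the study's data.
general_knee = np.array([52.0, 40.0, 48.0, 39.0, 45.0, 50.0, 44.0])  # general anaesthetic
spinal_knee = np.array([22.0, 18.0, 15.0, 25.0, 20.0, 17.0, 21.0])   # spinal anaesthetic

# Opiate doses in small samples are typically skewed, so a rank-based test
# avoids assuming normality.
stat, p = mannwhitneyu(general_knee, spinal_knee, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```

With samples this clearly separated the test reports a very small p-value; with real clinical data the overlap, and hence the p-value, would of course be larger.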
Procedia PDF Downloads 127
122 Impact of Alkaline Activator Composition and Precursor Types on Properties and Durability of Alkali-Activated Cements Mortars
Authors: Sebastiano Candamano, Antonio Iorfida, Patrizia Frontera, Anastasia Macario, Fortunato Crea
Abstract:
Alkali-activated materials are promising binders obtained by alkaline attack on fly ashes, metakaolin, or blast furnace slag, among others. In order to guarantee the highest ecological and cost efficiency, a proper selection of precursors and alkaline activators has to be carried out. These choices deeply affect the microstructure, chemistry, and performance of this class of materials. Even though much research in recent years has focused on mix designs and curing conditions, the lack of exhaustive activation models, of standardized mix designs and curing conditions, and of sufficient investigation of shrinkage behavior, efflorescence, additives, and durability prevents these materials from being perceived as an effective and reliable alternative to Portland cement. The aim of this study is to develop alkali-activated cement mortars containing high amounts of industrial by-products and waste, such as ground granulated blast furnace slag (GGBFS) and ashes obtained from the combustion of forest biomass in thermal power plants. The experimental campaign was performed in two steps. In the first step, research focused on elucidating how the workability, mechanical properties, and shrinkage behavior of the produced mortars are affected by the type and fraction of each precursor as well as by the composition of the activator solutions. In order to investigate the microstructures and reaction products, SEM and diffractometric analyses were carried out. In the second step, durability in harsh environments was evaluated. Mortars obtained using only GGBFS as the binder showed mechanical property development and shrinkage behavior strictly dependent on the SiO2/Na2O molar ratio of the activator solutions. Compressive strengths were in the range of 40-60 MPa after 28 days of curing at ambient temperature. 
Mortars obtained by partial replacement of GGBFS with metakaolin and forest biomass ash showed lower compressive strengths (≈35 MPa) and lower shrinkage values when higher amounts of ash were used. By varying the activator solutions and binder composition, compressive strengths up to 70 MPa, associated with shrinkage values of about 4200 microstrains, were measured. Durability tests were conducted to assess the acid and thermal resistance of the different mortars. They all showed good resistance in a solution of 5 wt% H2SO4, also after 60 days of immersion, while they showed a decrease in mechanical properties in the range of 60-90% when exposed to thermal cycles up to 700°C.
Keywords: alkali activated cement, biomass ash, durability, shrinkage, slag
Procedia PDF Downloads 325
121 Cognitive Linguistic Features Underlying Spelling Development in a Second Language: A Case Study of L2 Spellers in South Africa
Authors: A. Van Staden, A. Tolmie, E. Vorster
Abstract:
Research confirms the multifaceted nature of spelling development and underscores the importance of the cognitive and linguistic skills that affect sound spelling development, such as working and long-term memory, phonological and orthographic awareness, mental orthographic images, semantic knowledge, and morphological awareness. This has clear implications for the many South African English second language (L2) spellers who attempt to become proficient spellers. Since English has an opaque orthography, with irregular spelling patterns and insufficient sound/grapheme correspondences, L2 spellers can neither rely nor draw on the phonological awareness skills of their first language (for example, Sesotho and many other African languages) to spell the majority of English words. Epistemologically, this research is informed by social constructivism. In addition, the researchers hypothesized that the principles of the Overlapping Waves Theory were an appropriate lens through which to investigate whether L2 spellers could significantly improve their spelling skills via the implementation of an alternative route to spelling development, namely the orthographic route, and more specifically via the application of visual imagery. Post-test results confirmed previous research arguing for the interactive nature of different cognitive and linguistic systems, such as working memory and its subsystems and long-term memory, as learners were systematically guided to store visual orthographic images of words in their long-term lexicons. Moreover, the results showed that L2 spellers in the experimental group (n = 9) significantly outperformed L2 spellers in the control group (n = 9), whose intervention involved phonological awareness (and coding), including the teaching of spelling rules. 
Consequently, L2 learners in the experimental group improved significantly on all the post-test measures included in this investigation, namely the four sub-tests of short-term memory as well as two spelling measures (i.e., diagnostic and standardized measures). Against this background, the findings of this study look promising and have shown that, within a social-constructivist learning environment, learners can be systematically guided to apply higher-order thinking processes such as visual imagery to successfully store and retrieve mental images of spelling words from their output lexicons. Moreover, results from the present study could play an important role in directing research into this under-researched aspect of L2 literacy development within the South African education context.
Keywords: English second language spellers, phonological and orthographic coding, social constructivism, visual imagery as spelling strategy
Procedia PDF Downloads 359
120 Sceletium Tortuosum: A Review on its Phytochemistry, Pharmacokinetics, Biological and Clinical Activities
Authors: Tomi Lois Olatunji, Frances Siebert, Ademola Emmanuel Adetunji, Brian Harvey, Johane Gericke, Josias Hamman, Frank Van Der Kooy
Abstract:
Ethnopharmacological relevance: Sceletium tortuosum (L.) N.E.Br., the most sought-after and widely researched species in the genus Sceletium, is a succulent forb endemic to South Africa. Traditionally, this medicinal plant is mainly masticated or smoked and used for the relief of toothache and abdominal pain, and as a mood elevator, analgesic, hypnotic, anxiolytic, thirst and hunger suppressant, and for its intoxicating/euphoric effects. Sceletium tortuosum is currently of widespread scientific interest due to its clinical potential in treating anxiety and depression, relieving stress in healthy individuals, and enhancing cognitive functions. These pharmacological actions are attributed to its phytochemical constituents, referred to as mesembrine-type alkaloids. Aim of the review: The aim of this review was to comprehensively summarize and critically evaluate recent research advances on the phytochemistry, pharmacokinetics, and biological and clinical activities of the medicinal plant S. tortuosum. Additionally, current ongoing research and future perspectives are discussed. Methods: All relevant scientific articles, books, and MSc and Ph.D. dissertations on the botany, behavioral pharmacology, traditional uses, and phytochemistry of S. tortuosum were retrieved from different databases (including Science Direct, PubMed, Google Scholar, Scopus, and Web of Science). For the pharmacokinetics and pharmacological effects of S. tortuosum, the focus fell on relevant publications published between 2009 and 2021. Results: Twenty-five alkaloids belonging to four structural classes, viz. mesembrine, Sceletium A4, joubertiamine, and tortuosamine, have been identified from S. tortuosum, of which the mesembrine class is predominant. The crude extracts and commercially available standardized extracts of S. tortuosum have displayed a wide spectrum of biological activities (e.g. 
antimalarial, anti-oxidant, immunomodulatory, anti-HIV, neuroprotection, enhancement of cognitive function) in in vitro or in vivo studies. The plant has not yet been studied in a clinical population but has potential for enhancing cognitive function and managing anxiety and depression. Conclusion: As an important South African medicinal plant, S. tortuosum has seen many research advances in its phytochemistry and biological activities over the last decade. These scientific studies have shown that S. tortuosum has various bioactivities. The findings have further established the link between the phytochemistry and pharmacological application, and they support the traditional use of S. tortuosum in the indigenous medicine of South Africa.
Keywords: Aizoaceae, mesembrine, serotonin, Sceletium tortuosum, Zembrin®, psychoactive, antidepressant
Procedia PDF Downloads 215
119 Validation of an Acuity Measurement Tool for Maternity Services
Authors: Cherrie Lowe
Abstract:
The TrendCare Patient Dependency System is currently utilized by a large number of maternity services across Australia, New Zealand, and Singapore. In 2012, 2013, and 2014, validation studies were initiated in all three countries to validate the acuity tools used for women in labour and for postnatal mothers and babies. This paper will present the findings of the validation study. Aim: The aims of this study were to identify whether the care hours provided by the TrendCare acuity system were an accurate reflection of the care required by women and babies, and to obtain evidence of changes required to acuity indicators and/or category timings to ensure the TrendCare acuity system remains reliable and valid across a range of maternity care models in three countries. Method: A non-experimental action research methodology was used across four District Health Boards in New Zealand, two large public Australian maternity services, and a large tertiary maternity service in Singapore. Standardized data collection forms and timing devices were used to record midwife contact times with the women and babies included in the study. Rejection processes excluded samples where care was not completed or was rationed. The variances between actual timed midwife/mother/baby contact and TrendCare acuity times were identified and investigated. Results: 87.5% (18) of TrendCare acuity category timings matched the actual timings recorded for midwifery care; 12.5% (3) of TrendCare night duty categories provided fewer minutes of care than the actual timings. 100% of labour ward TrendCare categories matched the actual timings for midwifery care. The actual times for assistance given to New Zealand independent midwives in the labour ward showed a significant deviation from previous studies, demonstrating the need for additional time allocations in TrendCare. 
Conclusion: The results demonstrated the importance of regularly validating the TrendCare category timings against the care hours required, as variances in models of care and length of stay in maternity units have increased midwifery workloads on the night shift. The level of assistance provided by the core labour ward staff to the independent midwife has increased substantially. Outcomes: As a consequence of this study, changes were made to the night duty TrendCare maternity categories, additional acuity indicators were developed, and times for assisting independent midwives were increased. The updated TrendCare version was delivered to maternity services in 2014.
Keywords: maternity, acuity, research, nursing workloads
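The validation logic in this abstract, checking how many acuity category timings match the actually timed care, can be sketched as a simple tolerance comparison. All category names, minute values, and the 10% tolerance below are hypothetical illustrations, not TrendCare's actual categories, timings, or matching rule.

```python
# Hypothetical acuity categories with allocated care minutes per shift and
# the mean observed (timed) midwife contact minutes; NOT TrendCare's values.
allocated = {"category_1": 60, "category_2": 90, "category_3": 120, "category_4": 180}
observed = {"category_1": 58, "category_2": 95, "category_3": 150, "category_4": 175}

def match_rate(allocated, observed, tolerance=0.10):
    """Share of categories whose observed mean care time falls within
    +/- tolerance (relative) of the allocated category timing."""
    hits = [abs(observed[c] - allocated[c]) <= tolerance * allocated[c]
            for c in allocated]
    return sum(hits) / len(hits)

# category_3 deviates by 25%, outside the 10% tolerance, so 3 of 4 match.
print(match_rate(allocated, observed))
```

A category falling outside the tolerance, like the night duty categories in the study, would then flag the timing for re-investigation rather than automatic adjustment.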
Procedia PDF Downloads 378
118 Interdisciplinary Evaluations of Children with Autism Spectrum Disorder in a Telehealth Arena
Authors: Janice Keener, Christine Houlihan
Abstract:
Over the last several years, there has been an increase in children identified as having Autism Spectrum Disorder (ASD). Specialists across several disciplines, both mental health and medical professionals, have been tasked with ensuring accurate and timely evaluations for children with suspected ASD. Due to the nature of ASD symptom presentation, an interdisciplinary assessment and treatment approach best addresses the needs of the whole child. During the unprecedented COVID-19 pandemic, clinicians were faced with how to continue interdisciplinary assessments in a telehealth arena. Instruments previously used to assess ASD in person were no longer appropriate measures because of safety restrictions. For example, the Autism Diagnostic Observation Schedule requires examiners and children to be in very close proximity to each other, and if masks or face shields are worn, the evaluation is rendered invalid. Similar issues arose with the various cognitive measures used to assess children, such as the Wechsler intelligence scales and the Differential Ability Scales. Thus the need arose to identify measures that can be administered safely and accurately under safety guidelines. The incidence of ASD continues to rise over time; currently, the Centers for Disease Control and Prevention estimates that 1 in 59 children meets the criteria for a diagnosis of ASD. The reasons for this increase are likely multifold, including changes in diagnostic criteria, public awareness of the condition, and other environmental and genetic factors. The rise in the incidence of ASD has led to a greater need for diagnostic and treatment services across the United States. The uncertainty of the diagnostic process can lead to an increased level of stress for families of children with suspected ASD. Along with this increase, there is a need for diagnostic clarity to avoid both under- and over-identification of this condition. 
Interdisciplinary assessment is ideal for children with suspected ASD, as it allows for an assessment of the whole child over time and across multiple settings. Clinicians such as psychologists and developmental pediatricians play important roles in the initial evaluation of autism spectrum disorder. An ASD assessment may consist of several types of measures; standardized checklists, structured interviews, and direct assessments such as the ADOS-2 are just a few examples. With the advent of telehealth, clinicians were asked to continue providing meaningful interdisciplinary assessments via an electronic platform, in a sense going into the family home, evaluating the clinical symptom presentation remotely, and confidently making an accurate diagnosis. This poster presentation will review the benefits, limitations, and interpretation of these various instruments. The roles of other medical professionals will also be addressed, including medical providers, speech pathology, and occupational therapy.
Keywords: Autism Spectrum Disorder assessments, interdisciplinary evaluations, tele-assessment with Autism Spectrum Disorder, diagnosis of Autism Spectrum Disorder
Procedia PDF Downloads 209
117 Analysis and Quantification of Historical Drought for Basin Wide Drought Preparedness
Authors: Joo-Heon Lee, Ho-Won Jang, Hyung-Won Cho, Tae-Woong Kim
Abstract:
Drought is a recurrent climatic feature that occurs in virtually every climatic zone around the world. Korea experiences drought almost every year at the regional scale, mainly during the winter and spring seasons. Moreover, extremely severe droughts at the national scale have occurred at a frequency of six to seven years. Various drought indices have been developed as tools to quantitatively monitor different types of drought and are utilized in the field of drought analysis. Since drought is closely related to the climatological and topographic characteristics of drought-prone areas, basins where droughts occur frequently need separate drought preparedness and contingency plans. In this study, a statistical analysis was carried out for the historical droughts in the five major river basins in Korea so that drought characteristics could be quantitatively investigated. It also aimed to provide information with which differentiated and customized drought preparedness plans can be established based on the basin-level analysis results. Conventional methods quantify drought by applying various drought indices. However, the evaluation results for the same drought event differ according to the analysis technique; in particular, the evaluation of a drought event depends on how its severity or duration is viewed in the evaluation process. Therefore, a drought history was drawn for the five most severely affected major river basins of Korea by investigating a drought magnitude that simultaneously considers severity, duration, and the damaged areas, applying drought run theory with the SPI (Standardized Precipitation Index), which efficiently quantifies meteorological drought. Further, quantitative analysis of the historical extreme droughts from various viewpoints, such as average severity, duration, and magnitude, was attempted. 
At the same time, the historical drought events were quantitatively analyzed by estimating return periods from SDF (severity-duration-frequency) curves derived for the five major river basins through parametric regional drought frequency analysis. Analysis results showed that the extremely severe drought years were 1962, 1988, 1994, and 2014 in the Han river basin; 1982 and 1988 in the Nakdong river basin; 1994 in the Geum river basin; 1988 and 1994 in the Youngsan river basin; and 1988, 1994, 1995, and 2000 in the Seomjin river basin. The extremely severe drought years at the national level in the Korean Peninsula were 1988 and 1994. The most damaging droughts were in 1981~1982 and 1994~1995, which lasted for longer than two years. The return period of the most severe drought in each river basin turned out to be at a frequency of 50~100 years.
Keywords: drought magnitude, regional frequency analysis, SPI, SDF (severity-duration-frequency) curve
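The two building blocks of this analysis, computing the SPI from accumulated precipitation and then applying run theory to extract drought duration, severity, and magnitude, can be sketched as follows. This is an illustrative implementation under standard assumptions (gamma-distributed accumulations with a mixed term for zero-rainfall periods, and a common -1.0 SPI drought threshold), not the authors' code, and the precipitation series is synthetic.

```python
import numpy as np
from scipy import stats

def spi(monthly_precip, window=3):
    """Standardized Precipitation Index for a given accumulation window.

    Fits a gamma distribution to the accumulated totals (with a mixed term
    for zero-rainfall periods) and maps the fitted CDF onto the standard
    normal distribution.
    """
    x = np.convolve(monthly_precip, np.ones(window), mode="valid")  # rolling sums
    nonzero = x[x > 0]
    q = 1.0 - len(nonzero) / len(x)            # probability of a zero total
    shape, loc, scale = stats.gamma.fit(nonzero, floc=0)
    cdf = q + (1.0 - q) * stats.gamma.cdf(x, shape, loc=loc, scale=scale)
    cdf = np.clip(cdf, 1e-6, 1.0 - 1e-6)       # keep the normal quantile finite
    return stats.norm.ppf(cdf)

def drought_runs(spi_values, threshold=-1.0):
    """Run theory: each contiguous spell below the threshold is one drought
    event with a duration, a severity (accumulated deficit below the
    threshold), and a magnitude (severity per unit duration)."""
    events, duration, severity = [], 0, 0.0
    for v in spi_values:
        if v < threshold:
            duration += 1
            severity += threshold - v
        elif duration:
            events.append({"duration": duration, "severity": severity,
                           "magnitude": severity / duration})
            duration, severity = 0, 0.0
    if duration:
        events.append({"duration": duration, "severity": severity,
                       "magnitude": severity / duration})
    return events

# Synthetic monthly precipitation series for demonstration only.
rng = np.random.default_rng(42)
series = rng.gamma(shape=2.0, scale=40.0, size=600)
events = drought_runs(spi(series, window=3))
```

Fitting event severities or durations extracted this way across many basins is then the input to the regional frequency analysis from which the SDF curves and 50~100-year return periods are estimated.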
Procedia PDF Downloads 406
116 Effect of Plant Growth Regulators on in vitro Biosynthesis of Antioxidative Compounds in Callus Culture and Regenerated Plantlets Derived from Taraxacum officinale
Authors: Neha Sahu, Awantika Singh, Brijesh Kumar, K. R. Arya
Abstract:
Taraxacum officinale Weber, or dandelion (Asteraceae), is an important traditional Indian herb used for liver detoxification and for treating digestive problems and spleen, hepatic, and kidney disorders, among others. The plant is well known to possess important phenolics and flavonoids and to serve as a potential source of antioxidative and chemoprotective agents. Biosynthesis of bioactive compounds through in vitro cultures is a requisite for natural resource conservation and provides an alternative source for pharmaceutical applications. Thus, an efficient and reproducible protocol was developed for the in vitro biosynthesis of bioactive antioxidative compounds from leaf-derived callus and in vitro regenerated cultures of Taraxacum officinale using MS media fortified with various combinations of auxins and cytokinins. MS media containing 0.25 mg/l 2,4-D (2,4-dichlorophenoxyacetic acid) with 0.05 mg/l 2-iP [N6-(2-isopentenyl)adenine] was found to be an effective combination for the establishment of callus, with a 92% callus induction frequency. Moreover, 2.5 mg/l NAA (α-naphthalene acetic acid) with 0.5 mg/l BAP (6-benzylaminopurine), and 1.5 mg/l NAA, showed the optimal response for in vitro plant regeneration (80% regeneration frequency) and rooting, respectively. In vitro regenerated plantlets were further transferred to soil and acclimatized. The quantitative variability of the accumulated bioactive compounds in cultures (in vitro callus, plantlets, and acclimatized plants) was determined through UPLC-MS/MS (ultra-performance liquid chromatography-triple quadrupole-linear ion trap mass spectrometry) and compared with wild plants. The phytochemical determination of in vitro and wild-grown samples showed the accumulation of 6 compounds. In in vitro callus cultures and regenerated plantlets, two major antioxidative compounds, i.e., chlorogenic acid (14950.0 µg/g and 4086.67 µg/g) and umbelliferone (10400.00 µg/g and 2541.67 µg/g), were found, respectively. 
Scopoletin was found to be highest in in vitro regenerated plants (83.11 µg/g) compared to wild plants (52.75 µg/g). Notably, scopoletin was not detected in callus and acclimatized plants, but quinic acid (6433.33 µg/g) and protocatechuic acid (92.33 µg/g) accumulated at the highest levels in acclimatized plants compared to the other samples. Wild-grown plants contained the highest content (948.33 µg/g) of the flavonoid glycoside luteolin-7-O-glucoside. Our data suggest that in vitro callus and regenerated plants biosynthesize higher contents of antioxidative compounds under controlled conditions than wild-grown plants. These standardized culture conditions may be explored as a sustainable source of plant material for the enhanced production and adequate supply of oxidative polyphenols.
Keywords: anti-oxidative compounds, in vitro cultures, Taraxacum officinale, UPLC-MS/MS
Procedia PDF Downloads 201
115 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap
Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui
Abstract:
As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool to reach energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usages and users' behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study discusses results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over one and a half years and cover building energy behavior (thermal and electrical), indoor environment, inhabitants' comfort, occupancy, occupant behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades), where the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end-use. These features are then compared with the collected post-occupancy data.
Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. Results of this study provide an analysis of the energy performance gap on an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, and building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights on the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
Keywords: calibration, building energy modeling, performance gap, sensor network
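The sensitivity-analysis step described above can be sketched in code. The snippet below is a minimal one-at-a-time (OAT) screen over a deliberately crude steady-state heating model; it is not the Pleiades model, and every parameter name and value in it is a hypothetical stand-in chosen only to illustrate how energy-driving inputs are ranked before being replaced with field data.

```python
# Illustrative OAT sensitivity screen over a toy heating-demand model.
# All inputs and constants below are assumptions, not study values.

def heating_demand_kwh(setpoint_c, occupancy, ach, u_value, appliance_w):
    """Crude steady-state seasonal heating demand for one apartment (illustrative)."""
    t_out = 5.0                # mean outdoor temperature, degC (assumed)
    area = 70.0                # floor area, m2 (assumed)
    hours = 24 * 180           # heating-season hours (assumed)
    # Envelope + ventilation losses minus internal gains from people/appliances
    losses = (u_value * area + 0.33 * ach * area * 2.5) * (setpoint_c - t_out)
    gains = occupancy * 80.0 + appliance_w
    return max(losses - gains, 0.0) * hours / 1000.0

baseline = dict(setpoint_c=20.0, occupancy=2.0, ach=0.6, u_value=1.2, appliance_w=150.0)
ref = heating_demand_kwh(**baseline)

# Perturb each input by +/-10% and record the normalized spread in demand
sensitivity = {}
for name, value in baseline.items():
    lo = dict(baseline, **{name: value * 0.9})
    hi = dict(baseline, **{name: value * 1.1})
    sensitivity[name] = abs(heating_demand_kwh(**hi) - heating_demand_kwh(**lo)) / ref

for name, s in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {s:.3f}")
```

In this toy model the temperature setpoint dominates, consistent with the abstract's finding that setpoints are among the key drivers of the performance gap; the ranking is what matters, not the absolute numbers.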
Procedia PDF Downloads 159
114 Evaluation of Cooperative Hand Movement Capacity in Stroke Patients Using the Cooperative Activity Stroke Assessment
Authors: F. A. Thomas, M. Schrafl-Altermatt, R. Treier, S. Kaufmann
Abstract:
Stroke is the main cause of adult disability. Upper limb function in particular is affected in most patients. Recently, cooperative hand movements have been shown to be a promising type of upper limb training in stroke rehabilitation. In these movements, which are frequently found in activities of daily living (e.g., opening a bottle, winding up a blind), the force of one upper limb has to be equally counteracted by the other limb to successfully accomplish a task. The use of standardized and reliable clinical assessments is essential to evaluate the efficacy of therapy and the functional outcome of a patient. Many assessments for upper limb function or impairment are available; however, the evaluation of cooperative hand movement tasks is rarely included in them. Thus, the aim of this study was (i) to develop a novel clinical assessment (CASA - Cooperative Activity Stroke Assessment) for the evaluation of patients' capacity to perform cooperative hand movements and (ii) to test its intra- and interrater reliability. Furthermore, CASA scores were compared to current gold standard assessments for the upper extremity in stroke patients (i.e., Fugl-Meyer Assessment, Box & Blocks Test). The CASA consists of five cooperative activities of daily living: (1) opening a jar, (2) opening a bottle, (3) opening and closing a zip, (4) unscrewing a nut, and (5) opening a clipbox. Here, the goal is to accomplish the tasks as fast as possible. In addition to the quantitative rating (i.e., time), which is converted to a 7-point scale, the quality of the movement is rated on a 4-point scale. To test the reliability of the CASA, fifteen stroke subjects were tested twice within a week by the same two raters. Intra- and interrater reliability was calculated using the intraclass correlation coefficient (ICC) for the total CASA score and single items.
Furthermore, Pearson correlation was used to compare the CASA scores to the scores of the Fugl-Meyer upper limb assessment and the Box & Blocks test, which were assessed in every patient in addition to the CASA. ICC scores indicated excellent intra- and interrater reliability for the total CASA score and good to excellent reliability for single items. Furthermore, the CASA score was significantly correlated to the Fugl-Meyer and Box & Blocks scores. The CASA provides a reliable assessment of cooperative hand movements, which are crucial for many activities of daily living. Due to its low-cost setup and easy and fast implementation, we suggest it is well suited for clinical application. In conclusion, the CASA is a useful tool for assessing the functional status and therapy-related recovery of cooperative hand movement capacity in stroke patients.
Keywords: activities of daily living, clinical assessment, cooperative hand movements, reliability, stroke
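The abstract reports ICCs without specifying the model used. As a hedged illustration, the sketch below computes ICC(3,1) (two-way mixed model, consistency), one common choice for a fixed pair of raters; the fifteen paired scores are fabricated for demonstration and are not the study's data.

```python
# ICC(3,1) from a two-way ANOVA decomposition; illustrative data only.
import numpy as np

def icc_3_1(scores):
    """scores: (n_subjects, k_raters) array -> ICC(3,1), consistency form."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)          # per-subject means
    col_means = scores.mean(axis=0)          # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_error = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Two raters scoring 15 subjects (fabricated CASA-like totals)
rater_a = [28, 31, 22, 35, 30, 18, 25, 33, 27, 21, 29, 34, 24, 26, 32]
rater_b = [27, 32, 23, 34, 29, 19, 26, 33, 26, 22, 30, 33, 25, 27, 31]
icc = icc_3_1(np.column_stack([rater_a, rater_b]))
print(f"ICC(3,1) = {icc:.3f}")
```

With closely agreeing raters, as here, the ICC lands above 0.9, the range conventionally read as excellent reliability.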
Procedia PDF Downloads 319
113 On-Ice Force-Velocity Modeling Technical Considerations
Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming -Chang Tsai, Marc Klimstra
Abstract:
Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward skating performance assessment. While preliminary data have been collected on ice, distance constraints of the on-ice test restrict the ability of athletes to reach their maximal velocity, which limits the model's ability to effectively estimate athlete performance. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between each trial. For overground sprints, participants completed a dynamic warm-up similar to that of the on-ice trials. Following the warm-up, participants completed three 40 m overground sprint trials. For each trial (on-ice and overground), a radar device (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modeled with a custom Python (version 3.2) script using a mono-exponential function, similar to previous work. To determine whether on-ice trials were achieving a maximum velocity (plateau), minimum acceleration values of the modeled data at the end of the sprint were compared (using a paired t-test) between on-ice and overground trials. Significant differences (p < 0.001) between overground and on-ice minimum accelerations were observed.
It was found that on-ice trials consistently reported higher final acceleration values, indicating that a maximum maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. Therefore, these findings suggest that a greater on-ice sprint distance may be required, or that other velocity modeling techniques are needed in which maximal velocity is not required for a complete profile.
Keywords: ice-hockey, sprint, skating, power
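The mono-exponential model named in the abstract, v(t) = v_max(1 - e^(-t/τ)), can be fitted as sketched below. The synthetic "radar" trace stands in for the Stalker data, so the v_max and τ values are assumptions for illustration; the final line shows the plateau check the study relies on, since the model's terminal acceleration a(t) = (v_max/τ)e^(-t/τ) only approaches zero when the athlete actually plateaus within the test distance.

```python
# Mono-exponential sprint-velocity fit; synthetic data, assumed parameters.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, v_max, tau):
    return v_max * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(0)
t = np.linspace(0.1, 6.0, 120)                   # ~6 s sprint window
v_true = mono_exp(t, 8.5, 1.2)                   # assumed v_max = 8.5 m/s, tau = 1.2 s
v_meas = v_true + rng.normal(0.0, 0.15, t.size)  # simulated radar noise

(v_max, tau), _ = curve_fit(mono_exp, t, v_meas, p0=(8.0, 1.0))

# Terminal acceleration: near zero only if a velocity plateau was reached
a_end = (v_max / tau) * np.exp(-t[-1] / tau)
print(f"v_max={v_max:.2f} m/s, tau={tau:.2f} s, end acceleration={a_end:.3f} m/s^2")
```

A short on-ice track truncates the trace before the exponential flattens, which inflates the end-of-sprint acceleration and biases the fitted v_max, which is exactly the failure mode the abstract reports.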
Procedia PDF Downloads 100
112 Kitchen Bureaucracy: The Preparation of Banquets for Medieval Japanese Royalty
Authors: Emily Warren
Abstract:
Despite the growing body of research on Japanese food history, little has been written about the attitudes and perspectives premodern Japanese people held about their food, even on special celebratory days. In fact, the overall image that arises from the literature is one of ambivalence: that the medieval nobility of the Heian and Kamakura periods (794-1333) did not much care about what they ate, and for that reason food seems relatively scarce in certain historical records. This study challenges this perspective by analyzing the manuals written to guide palace management and feast preparation for royals, introducing two of the sources into English for the first time. This research is primarily based on three manuals that address different aspects of royal food culture and preparation. The Chûjiruiki, or Record of the Palace Kitchens (1295), is a fragmentary manual written by a bureaucrat in charge of the main palace kitchen office. This document details the utensils, furnishings, and courses that officials organized for the royals' two daily meals, in the morning (asagarei gozen) and in the afternoon (hiru gozen), when they enjoyed seven courses, each one carefully cooked and plated. The orchestration of daily meals and frequent banquets would have been a complicated affair for those preparing the tableware and food, thus requiring texts like the Chûjiruiki, as well as another manual, the Nicchûgyôji (11th c.), or The Daily Functions. Because of the complex coordination between various kitchen-related bureaucratic offices, kitchen officials endeavored to standardize the menus and place settings depending on the time of year, religious abstinence days, and the ingredients flowing into the capital as taxes. For the most important annual banquets and rites celebrating deities and the royal family, kitchen officials would likely refer to the Engi Shiki (927), or Protocols of the Engi Era, for details on offerings, servant payments, and menus.
This study proposes that many of the great feast events, and indeed even daily meals at the palace, were so standardized and carefully planned for repetition that there would have been little need for the contents of such feasts to be detailed in diaries or novels, places where historians have noted a lack of food descriptions. These descriptions were omitted not for lack of interest on the part of the nobility, but because knowledge of what would be served at banquets and feasts was a matter of course, in the same way that a modern American would likely not need to state the menu of a traditional Thanksgiving meal to an American audience. Where food was concerned, novelty more so than tradition prompted a response in personal records, like diaries.
Keywords: banquets, bureaucracy, Engi shiki, Japanese food
Procedia PDF Downloads 111
111 Robust Inference with a Skew T Distribution
Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici
Abstract:
There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of economics, finance, and actuarial science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or least squares estimates, which are known to be biased and inefficient in such cases.
Furthermore, in conventional regression analysis it is assumed that the error terms are normally distributed, and hence the well-known least squares method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least squares estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least squares estimates. Several examples are provided from the areas of economics and finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness
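The closed-form modified maximum likelihood estimators are not reproduced here, but the motivating fact the abstract states, that least squares loses efficiency under fat-tailed errors, can be illustrated with a small simulation. Theil-Sen stands in below as a generic robust estimator purely for illustration; it is not the MML method, and the data are synthetic Student-t regressions.

```python
# Simulation: slope-estimate variability under fat-tailed (t, df=3) errors.
# Theil-Sen is used only as an illustrative robust comparator, not as MML.
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(42)
n, reps = 50, 500
x = np.linspace(0, 10, n)
ols_slopes, robust_slopes = [], []

for _ in range(reps):
    y = 2.0 + 3.0 * x + rng.standard_t(df=3, size=n)  # fat-tailed errors
    ols_slopes.append(np.polyfit(x, y, 1)[0])         # least squares slope
    robust_slopes.append(theilslopes(y, x)[0])        # robust slope

sd_ols = np.std(ols_slopes)
sd_rob = np.std(robust_slopes)
print(f"slope SD: least squares {sd_ols:.4f}, robust {sd_rob:.4f}")
```

Both estimators are centered on the true slope of 3, but the robust slope's spread across replications is visibly smaller, the kind of efficiency gap the abstract reports between least squares and MML-based inference.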
Procedia PDF Downloads 397
110 Effect of Fresh Concrete Curing Methods on Its Compressive Strength
Authors: Xianghe Dai, Dennis Lam, Therese Sheehan, Naveed Rehman, Jie Yang
Abstract:
Concrete is one of the most used construction materials: it may be made on site as fresh concrete and then placed in formwork to produce the desired shapes of structures. It has been recognized that the raw materials and mix proportions of concrete dominate the mechanical characteristics of hardened concrete, and that the curing method and environment applied to the concrete in the early stages of hardening significantly influence concrete properties such as compressive strength, durability, and permeability. In construction practice, there are various curing methods to maintain the presence of mixing water throughout the early stages of concrete hardening. They are also beneficial to concrete in hot weather conditions as they provide cooling and prevent the evaporation of water. Such methods include ponding or immersion, spraying or fogging, and saturated wet covering. There are also curing methods intended to reduce the loss of water from the concrete surface, such as covering the concrete with a layer of impervious paper, plastic sheeting, or a membrane. In the concrete materials laboratory, accelerated strength gain methods supply the concrete with heat and additional moisture by applying live steam, heating coils, or electrically warmed pads. Currently, when determining the mechanical parameters of a concrete, the concrete is usually sampled from fresh concrete on site and then cured and tested in laboratories where standardized curing procedures are adopted. However, in engineering practice, curing procedures on construction sites after the placing of concrete might be very different from the laboratory criteria, since some standard curing procedures adopted in the laboratory cannot be applied on site. Sometimes the contractor compromises the curing methods in order to reduce construction costs.
Obviously, the difference between curing procedures adopted in the laboratory and those used on construction sites might over- or under-estimate the real concrete quality. This paper presents the effect of three typical curing methods (air curing, water immersion curing, plastic film curing) and of maintaining concrete in steel moulds on the compressive strength development of normal concrete. In this study, Portland cement with 30% fly ash was used, and curing periods of 7 days, 28 days, and 60 days were applied. The highest compressive strength was observed in concrete samples to which 7-day water immersion curing was applied and in samples maintained in steel moulds up to the testing date. The results imply that concrete used as infill in steel tubular members might develop a higher strength than predicted by design assumptions based on air curing methods. Wrapping concrete with plastic film as a curing method might delay concrete strength development in the early stages, while water immersion curing for 7 days might significantly increase the concrete compressive strength.
Keywords: compressive strength, air curing, water immersion curing, plastic film curing, maintaining in steel mould, comparison
Procedia PDF Downloads 293
109 Prevalence of Fast-Food Consumption on Overweight or Obesity on Employees (Age Between 25-45 Years) in Private Sector; A Cross-Sectional Study in Colombo, Sri Lanka
Authors: Arosha Rashmi De Silva, Ananda Chandrasekara
Abstract:
This study seeks to comprehensively examine the influence of fast-food consumption and physical activity levels on the body weight of young employees within the private sector of Sri Lanka. The escalating popularity of fast food has raised concerns about its nutritional content and associated health ramifications. To investigate this phenomenon, a cohort of 100 individuals aged between 25 and 45, employed in Sri Lanka's private sector, participated in this research. These participants provided socio-demographic data through a standardized questionnaire, enabling the characterization of their backgrounds. Additionally, participants disclosed their frequency of fast-food consumption and engagement in physical activities, utilizing validated assessment tools. The collected data was meticulously compiled into an Excel spreadsheet and subjected to rigorous statistical analysis. Descriptive statistics, such as percentages and proportions, were employed to delineate the body weight status of the participants. Employing chi-square tests, our study identified significant associations between fast-food consumption, levels of physical activity, and body weight categories. Furthermore, through binary logistic regression analysis, potential risk factors contributing to overweight and obesity within the young employee cohort were elucidated. Our findings revealed a disconcerting trend, with 6% of participants classified as underweight, 32% within the normal weight range, and a substantial 62% categorized as overweight or obese. These outcomes underscore the alarming prevalence of overweight and obesity among young private-sector employees, particularly within the bustling urban landscape of Colombo, Sri Lanka. The data strongly imply a robust correlation between fast-food consumption, sedentary behaviors, and higher body weight categories, reflective of the evolving lifestyle patterns associated with the nation's economic growth. 
This study emphasizes the urgent need for effective interventions to counter the detrimental effects of fast-food consumption. The implementation of awareness campaigns elucidating the adverse health consequences of fast food, coupled with comprehensive nutritional education, can empower individuals to make informed dietary choices. Workplace interventions, including the provision of healthier meal alternatives and the facilitation of physical activity opportunities, are essential in fostering a healthier workforce and mitigating the escalating burden of overweight and obesity in Sri Lanka.
Keywords: fast food consumption, obese, overweight, physical activity level
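The chi-square association step the abstract describes can be sketched as below. The 2x2 contingency counts are invented for illustration and are not the study's data (which reported only the marginal split of 6% underweight, 32% normal, and 62% overweight or obese among 100 participants).

```python
# Chi-square test of independence on a hypothetical 2x2 table.
from scipy.stats import chi2_contingency

# Rows: fast-food frequency (low, high); columns: weight status
# (normal-or-under, overweight-or-obese). Counts are hypothetical.
table = [[25, 13],
         [13, 49]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```

A p-value below the chosen significance level, as this hypothetical table produces, is what would justify the abstract's claim of a significant association between fast-food consumption and body weight category; the subsequent binary logistic regression then quantifies the risk factors.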
Procedia PDF Downloads 50
108 The GRIT Study: Getting Global Rare Disease Insights Through Technology Study
Authors: Aneal Khan, Elleine Allapitan, Desmond Koo, Katherine-Ann Piedalue, Shaneel Pathak, Utkarsh Subnis
Abstract:
Background: Disease management of metabolic, genetic disorders is long-term and can be cumbersome to patients and caregivers. Patient-Reported Outcome Measures (PROMs) have been a useful tool in capturing patient perspectives to help enhance treatment compliance and engagement with health care providers, reduce utilization of emergency services, and increase satisfaction with their treatment choices. Currently, however, PROMs are collected during infrequent and decontextualized clinic visits, which makes translation of patient experiences challenging over time. The GRIT study aims to evaluate a digital health journal application called Zamplo that provides a personalized health diary to record self-reported health outcomes accurately and efficiently in patients with metabolic, genetic disorders. Methods: This is a randomized controlled trial (RCT) (1:1) that assesses the efficacy of Zamplo to increase patient activation (primary outcome), improve healthcare satisfaction and confidence to manage medications (secondary outcomes), and reduce costs to the healthcare system (exploratory). Using standardized online surveys, assessments will be collected at baseline, 1 month, 3 months, 6 months, and 12 months. Outcomes will be compared between patients who were given access to the application versus those with no access. Results: Seventy-seven patients were recruited as of November 30, 2021. Recruitment for the study commenced in November 2020 with a target of n=150 patients. The accrual rate was 50% from those eligible and invited for the study, with the majority of patients having Fabry disease (n=48) and the remaining having Pompe disease and mitochondrial disease. Real-time clinical responses, such as pain, are being measured and correlated to disease-modifying therapies, supportive treatments like pain medications, and lifestyle interventions. Engagement with the application, along with compliance metrics of surveys and journal entries, are being analyzed. 
An interim analysis of the engagement data, along with preliminary findings from this pilot RCT and qualitative patient feedback, will be presented. Conclusions: The digital self-care journal provides a unique approach to disease management, allowing patients direct access to their progress and enabling them to participate actively in their care. Findings from the study can help serve the virtual care needs of patients with metabolic, genetic disorders in North America and the world over.
Keywords: eHealth, mobile health, rare disease, patient outcomes, quality of life (QoL), pain, Fabry disease, Pompe disease
Procedia PDF Downloads 151
107 Effect of Antimony on Microorganisms in Aerobic and Anaerobic Environments
Authors: Barrera C. Monserrat, Sierra-Alvarez Reyes, Pat-Espadas Aurora, Moreno Andrade Ivan
Abstract:
Antimony is a toxic and carcinogenic metalloid considered a pollutant of priority interest by the United States Environmental Protection Agency. It is present in the environment in two oxidation states: antimonite (Sb(III)) and antimonate (Sb(V)). Sb(III) is toxic to several aquatic organisms, but the potential inhibitory effect of Sb species on microorganisms has not been extensively evaluated. The fate and possible toxic impact of antimony on aerobic and anaerobic wastewater treatment systems are unknown. For this reason, the objective of this study was to evaluate the microbial toxicity of Sb(V) and Sb(III) in aerobic and anaerobic environments. Sb(V) and Sb(III) were supplied as potassium hexahydroxoantimonate(V) and potassium antimony tartrate, respectively (Sigma-Aldrich). The toxic effect of both Sb species in anaerobic environments was evaluated on the methanogenic activity and the hydrogen production of microorganisms from a wastewater treatment bioreactor. For the methanogenic activity, batch experiments were carried out in 160 mL serological bottles; each bottle contained basal mineral medium (100 mL), inoculum (1.5 g VSS/L), acetate (2.56 g/L) as substrate, and variable concentrations of Sb(V) or Sb(III). Duplicate bioassays were incubated at 30 ± 2°C on an orbital shaker (105 rpm) in the dark. Methane production was monitored by gas chromatography. The hydrogen production inhibition tests were carried out in glass bottles with a working volume of 0.36 L, with glucose (50 g/L) as substrate, pretreated inoculum (5 g VSS/L), mineral medium, and varying concentrations of the two antimony species. The bottles were kept under stirring at 35°C in an AMPTS II device that recorded hydrogen production. The toxicity of Sb to aerobic microorganisms (from a wastewater activated sludge treatment plant) was tested with a standardized Microtox toxicity test and respirometry.
The results showed that Sb(III) is more toxic than Sb(V) to methanogenic microorganisms. Sb(V) caused a 50% decrease in methanogenic activity at 250 mg/L. In contrast, exposure to Sb(III) resulted in 50% inhibition at a concentration of only 11 mg/L and almost complete inhibition (95%) at 25 mg/L. For hydrogen-producing microorganisms, Sb(III) and Sb(V) caused 50% inhibition of hydrogen production at 12.6 mg/L and 87.7 mg/L, respectively. The results for aerobic environments showed that 500 mg/L of Sb(V) inhibited neither Aliivibrio fischeri (Microtox) activity nor the specific oxygen uptake rate of activated sludge. Sb(III), in contrast, reduced the respiration of the microorganisms by 50% at concentrations below 40 mg/L. These results indicate that the toxicity of antimony depends on the speciation of this metalloid and that Sb(III) has a significantly higher inhibitory potential than Sb(V). It was also shown that anaerobic microorganisms can reduce Sb(V) to Sb(III). Acknowledgments: This work was funded in part by grants from the UA-CONACYT Binational Consortium for the Regional Scientific Development and Innovation (CAZMEX), the National Institutes of Health (NIH ES-04940), and PAPIIT-DGAPA-UNAM (IN105220).
Keywords: aerobic inhibition, antimony reduction, hydrogen inhibition, methanogenic toxicity
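The 50%-inhibition concentrations quoted above are typically read off a fitted dose-response curve. The sketch below fits a two-parameter log-logistic model to invented activity-versus-concentration points; the data are hypothetical stand-ins, not the study's measurements, and the abstract does not state which curve model was used.

```python
# Log-logistic dose-response fit to estimate an IC50; illustrative data only.
import numpy as np
from scipy.optimize import curve_fit

def dose_response(c, ic50, hill):
    """Fraction of uninhibited activity remaining at concentration c (mg/L)."""
    return 1.0 / (1.0 + (c / ic50) ** hill)

conc = np.array([0.5, 2.0, 5.0, 10.0, 25.0, 50.0])         # mg/L (hypothetical)
activity = np.array([0.98, 0.90, 0.70, 0.52, 0.18, 0.05])  # fraction of control

(ic50, hill), _ = curve_fit(dose_response, conc, activity,
                            p0=(10.0, 1.0),
                            bounds=([0.1, 0.1], [100.0, 10.0]))
print(f"IC50 ~= {ic50:.1f} mg/L (Hill slope {hill:.2f})")
```

With data shaped like these, the fitted IC50 falls near 10 mg/L, the same order as the 11 mg/L Sb(III) value the study reports for methanogenic activity; the Hill slope describes how sharply inhibition sets in around that concentration.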
Procedia PDF Downloads 167