Search results for: up/down draft routing tool
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5346

486 The Trumping of Science: Exploratory Study into Discrepancy between Politician and Scientist Sources in American Covid-19 News Coverage

Authors: Wafa Unus

Abstract:

Science journalism has been vanishing from America’s national newspapers for decades. Reportage on scientific topics is limited to only a handful of newspapers, and of those, few employ dedicated science journalists to cover stories that require this specialized expertise. News organizations' lack of readiness to convey complex scientific concepts to a mass populace becomes particularly problematic when events like the Covid-19 pandemic occur. The lack of coverage of Covid-19 prior to its onset in the United States suggests something more troubling: that the deprioritization of reporting on hard science as an educational tool, in favor of political frames of coverage, places dangerous blinders on the American public. This research looks at the disparity between the voices of health and science experts in news articles and the voices of political figures, in order to better understand the approach of American newspapers in conveying expert opinion on Covid-19. A content analysis of 300 articles on Covid-19 published by major newspapers in the United States between January 1st, 2020 and April 30th, 2020 illuminates this investigation. The Boston Globe, the New York Times, and the Los Angeles Times are included in the content analysis. Initial findings reveal a significant disparity between the number of articles that mention Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases, and the number that make reference to political figures. Covid-related articles in the New York Times that focused on health topics (as opposed to economic or social issues) contained the voices of 54 different politicians who were mentioned a total of 608 times. Only five members of the scientific community were mentioned, a total of 24 times (out of 674 articles). In the Boston Globe, 36 different politicians were mentioned a total of 147 times, and only two members of the scientific community, one being Anthony Fauci, were mentioned a total of nine times (out of 423 articles). In the Los Angeles Times, 52 different politicians were mentioned a total of 600 times, and only six members of the scientific community were included, mentioned a total of 82 times, with Fauci being mentioned 48 times (out of 851 articles). Results provide a better understanding of the frames in which American journalists in Covid hotspots conveyed expert analysis on Covid-19 during one of the most pressing news events of the century. Ultimately, the objective of this study is to utilize the exploratory data to evaluate the nature, extent and impact of Covid-19 reporting in the context of trustworthiness and scientific expertise. Secondarily, this data will illuminate the degree to which Covid-19 reporting focused on politics over science.
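
For readers who want to see what the tallying step of such a content analysis looks like in practice, the minimal Python sketch below counts named political versus scientific sources across article texts. The name lists and sample sentences are hypothetical illustrations, not the study's coding frame or corpus.

```python
# Minimal sketch of the source-counting step of the content analysis: tallying
# how often named political vs. scientific sources appear across articles.
# The name lists and sample texts are illustrative, not the study's data.
from collections import Counter

politicians = {"Donald Trump", "Andrew Cuomo", "Nancy Pelosi"}
scientists = {"Anthony Fauci", "Deborah Birx"}

articles = [
    "Anthony Fauci warned that cases would rise as Donald Trump pushed to reopen.",
    "Andrew Cuomo and Nancy Pelosi discussed relief funding; Donald Trump responded.",
]

mentions = Counter()
for text in articles:
    for name in politicians | scientists:
        mentions[name] += text.count(name)

politician_total = sum(mentions[n] for n in politicians)
scientist_total = sum(mentions[n] for n in scientists)
print(f"politician mentions: {politician_total}, scientist mentions: {scientist_total}")
```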

Keywords: science reporting, science journalism, covid, misinformation, news

Procedia PDF Downloads 216
485 Using a Card Game as a Tool for Developing a Design

Authors: Matthias Haenisch, Katharina Hermann, Marc Godau, Verena Weidner

Abstract:

Over the past two decades, international music education has been characterized by a growing interest in informal learning for formal contexts and a "compositional turn" that has moved from closed to open forms of composing. This change occurs under social and technological conditions that permeate 21st-century musical practices. This forms the background of Musical Communities in the (Post)Digital Age (MusCoDA), a four-year joint research project of the University of Erfurt (UE) and the University of Education Karlsruhe (PHK), funded by the German Federal Ministry of Education and Research (BMBF). Both partners explore songwriting processes as an example of collective creativity in (post)digital communities, one in formal and the other in informal learning contexts. Collective songwriting will be studied from a network perspective that allows us to view boundaries between online and offline as well as formal, informal and hybrid contexts as permeable, and to reconstruct musical learning practices. By comparing these songwriting processes, possibilities for a pedagogical-didactic interweaving of different educational worlds are highlighted. The subproject of the University of Erfurt therefore investigates school music lessons with the help of interviews, videography, and network maps, analyzing new digital pedagogical and didactic possibilities. In the first step, the international literature on songwriting in the music classroom was examined for design development. The analysis focused on the question of which methods and practices are circulating in the current literature. Results from this stage of the project form the basis for the first instructional design, which will help teachers plan regular music classes and subsequently allow us to reconstruct musical learning practices under these conditions. In analyzing the literature, we noticed certain structural methods and concepts that recur, such as the Building Blocks method and the pre-structuring of the songwriting process. From these findings, we developed a deck of cards that both captures the current state of research and serves as a method for design development. With this deck of cards, both teachers and students themselves can plan their individual songwriting lessons by independently selecting and arranging topic, structure, and action cards. In terms of science communication, music educators' interactions with the card game provide us with essential insights for developing the first design. The overall goal of MusCoDA is to develop an empirical model of collective musical creativity and learning and an instructional design for teaching music in the postdigital age.

Keywords: card game, collective songwriting, community of practice, network, postdigital

Procedia PDF Downloads 64
484 A Dynamic Mechanical Thermal T-Peel Test Approach to Characterize Interfacial Behavior of Polymeric Textile Composites

Authors: J. R. Büttler, T. Pham

Abstract:

Basic understanding of interfacial mechanisms is of importance for the development of polymer composites. For this purpose, we need techniques to analyze the quality of interphases, their chemical and physical interactions and their strength and fracture resistance. In order to investigate the interfacial phenomena in detail, advanced characterization techniques are favorable. Dynamic mechanical thermal analysis (DMTA) using a rheological system is a sensitive tool. T-peel tests were performed with this system to investigate the temperature-dependent peel behavior of woven textile composites. A model system was made of polyamide (PA) woven fabric laminated with films of polypropylene (PP) or PP modified by grafting with maleic anhydride (PP-g-MAH). Firstly, control measurements were performed with the PP matrices alone. Polymer melt investigations, as well as the extensional stress, extensional viscosity and extensional relaxation modulus at -10 °C, 100 °C and 170 °C, demonstrate similar viscoelastic behavior for films made of PP-g-MAH and the non-modified PP control. Frequency sweeps have shown that PP-g-MAH has a zero-shear viscosity of around 1600 Pa·s and the PP control has a similar zero-shear viscosity of 1345 Pa·s. Also, the gelation points are similar at 2.42×10⁴ Pa (118 rad/s) and 2.81×10⁴ Pa (161 rad/s) for the PP control and PP-g-MAH, respectively. Secondly, the textile composite was analyzed. The extensional stress of PA66 fabric laminated with either the PP control or PP-g-MAH at -10 °C, 25 °C and 170 °C for strain rates of 0.001 – 1 s⁻¹ was investigated. The laminates containing the modified PP need more stress for T-peeling. However, the strengthening effect due to the modification decreases with increasing temperature, and at 170 °C, just above the melting temperature of the matrix, the difference disappears. Independent of the matrix used in the textile composite, there is a decrease of extensional stress with increasing temperature. It appears that the more viscous the matrix, the weaker the laminar adhesion. Possibly, the measurement is influenced by the fact that the laminate becomes stiffer at lower temperatures. Adhesive lap-shear testing at room temperature supports the findings obtained with the T-peel test. Additional analysis of the textile composite at the microscopic level ensures that the fibers are well embedded in the matrix. Atomic force microscopy (AFM) imaging of a cross section of the composite shows no gaps between the fibers and the matrix. Measurements of the water contact angle show that the MAH-grafted PP is more polar than the virgin PP, which suggests a more favorable chemical interaction of PP-g-MAH with PA compared to the non-modified PP. In fact, this study indicates that T-peel testing by DMTA is a suitable technique to gain deeper insight into polymeric textile composites.
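
For context on the gelation-point values quoted above: in a frequency sweep the gel point is commonly taken as the crossover where the storage modulus G' equals the loss modulus G''. The sketch below locates that crossover on synthetic power-law moduli; the exponents and prefactors are illustrative assumptions, not the measured PP or PP-g-MAH curves.

```python
import numpy as np

# Synthetic frequency sweep: power-law G'' and G' chosen only to illustrate
# how the crossover (gel point) is located; not the measured data.
omega = np.logspace(0, 3, 400)      # angular frequency, rad/s
g_loss = 275.0 * omega**0.9         # loss modulus G'' (Pa)
g_storage = 8.2 * omega**1.6        # storage modulus G' (Pa), rises faster

# crossover = frequency where G' and G'' are closest on a log scale
idx = np.argmin(np.abs(np.log10(g_storage) - np.log10(g_loss)))
print(f"gel point: ~{omega[idx]:.0f} rad/s at a crossover modulus of ~{g_storage[idx]:.2e} Pa")
```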

Keywords: dynamic mechanical thermal analysis, interphase, polyamide, polypropylene, textile composite

Procedia PDF Downloads 129
483 Organic Matter Distribution in Bazhenov Source Rock: Insights from Sequential Extraction and Molecular Geochemistry

Authors: Margarita S. Tikhonova, Alireza Baniasad, Anton G. Kalmykov, Georgy A. Kalmykov, Ralf Littke

Abstract:

There is a high complexity in the pore structure of organic-rich rocks, caused by the combination of inter-particle porosity from inorganic mineral matter and ultrafine intra-particle porosity from both organic matter and clay minerals. Fluids are retained in that pore space, but there are major uncertainties in how and where the fluids are stored and to what extent they are accessible or trapped in 'closed' pores. A large degree of tortuosity may lead to fractionation of organic matter, so that the lighter and more flexible compounds diffuse to the reservoir whereas more complex compounds may be locked in place. Additionally, part of the hydrocarbons could be bound to solid organic matter (kerogen) and the mineral matrix during expulsion and migration. Larger compounds can occupy thin channels so that clogging or oil and gas entrapment will occur. Sequential extraction applying different solvents is a powerful tool to provide more information about the distribution of trapped organic matter. The Upper Jurassic – Lower Cretaceous Bazhenov shale is one of the most petroliferous source rocks, extending across West Siberia, Russia. Given its variable mineral composition, pore space distribution and thermal maturation, there are high uncertainties in the distribution and composition of organic matter in this formation. In order to address this issue, geological and geochemical properties of 30 samples, including mineral composition (XRD and XRF), structure and texture (thin-section microscopy), organic matter content, type and thermal maturity (Rock-Eval), as well as the molecular composition (GC-FID and GC-MS) of the different materials extracted during sequential extraction, were considered. Sequential extraction was performed with a Soxhlet apparatus using different solvents, i.e., n-hexane, chloroform and ethanol-benzene (1:1 v:v), first on core plugs and later on pulverized materials. The results indicate that the studied samples are mainly composed of type II kerogen with TOC contents varying from 5 to 25%. The thermal maturity ranges from immature to late oil window. Whereas the clay content decreases with increasing maturity, the amount of silica increases in the studied samples. According to the molecular geochemistry, hydrocarbons stored in open and closed pore space reveal different geochemical fingerprints. The results improve our understanding of hydrocarbon expulsion and migration in the organic-rich Bazhenov shale and therefore allow a better estimation of the hydrocarbon potential of this formation.

Keywords: Bazhenov formation, bitumen, molecular geochemistry, sequential extraction

Procedia PDF Downloads 170
482 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique

Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina

Abstract:

The presented research is related to the development of a recently proposed technique for the formation of composite materials, like optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on control of the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained at the beginning of the 2000s, while a related theoretical description was only given in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics that provide given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulation which allow determining these parameters. It is shown that these parameters can be deduced from data on the spatial distributions of diffusant concentration and average crystalline grain size in glass-ceramics samples subjected to ion-exchange treatment. Measurements at least at two temperatures and two processing times at each temperature are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li₂O·SiO₂. Cubic samples of the glass-ceramics (6×6×6 mm³) underwent the ion exchange process in a NaNO₃ salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h). The ion exchange processing resulted in glass-ceramics vitrification in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find the profiles of diffusant concentration and average crystalline grain size. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of the average size of the crystalline grains were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all above-mentioned ion exchange conditions. As a result, the temperature dependences of the parameters which provided a reliable coincidence of the simulation and experimental data were found. This ensured adequate modeling of the process of glass-ceramics decrystallization in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely, the concentration and average size of the crystalline grains.
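
To make the role of the two dimensionless parameters concrete, the sketch below couples a 1D ion-exchange diffusion step to a simple grain-dissolution law controlled by a solubility parameter beta and a time-scale ratio gamma. The grid, the linear dissolution rule, the parameter values and the boundary conditions are assumptions for demonstration only, not the authors' published model.

```python
import numpy as np

# Illustrative 1D sketch of ion-exchange diffusion coupled to grain dissolution.
# beta ~ solubility of the crystalline phase, gamma ~ ratio of diffusion and
# grain-dissolution time scales; all values here are illustrative assumptions.

nx, nt = 100, 25000          # grid points, time steps
dx, dt = 1.0 / nx, 1.0 / nt  # dimensionless steps (dt/dx^2 = 0.4, explicit scheme stable)
beta = 0.3                   # solubility threshold of the crystalline phase
gamma = 5.0                  # diffusion time / grain-dissolution time

c = np.zeros(nx)             # diffusant (e.g. Na+) concentration profile
r = np.ones(nx)              # normalized average grain-size profile

for _ in range(nt):
    c[0] = 1.0                                              # salt-melt boundary condition
    lap = (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                                  # keep boundary values fixed
    c += dt * lap                                           # explicit Fickian diffusion step
    # grains dissolve only where the local diffusant exceeds the solubility threshold beta
    r -= dt * gamma * np.clip(c - beta, 0.0, None)
    r = np.clip(r, 0.0, 1.0)

print("surface grain size:", round(float(r[0]), 3), "| bulk grain size:", round(float(r[-1]), 3))
```

Running such a toy model for several (beta, gamma) pairs and comparing the resulting grain-size profiles with measured ones mirrors, in spirit, how the temperature dependences of the parameters can be fitted.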

Keywords: diffusion, glass-ceramics, ion exchange, vitrification

Procedia PDF Downloads 269
481 The Invaluable Contributions of Radiography and Radiotherapy in Modern Medicine

Authors: Sahar Heidary

Abstract:

Radiography and radiotherapy have emerged as crucial pillars of modern medical practice, revolutionizing diagnostics and treatment for a myriad of health conditions. This abstract highlights the pivotal role of radiography and radiotherapy for healthcare and society. Radiography, a non-invasive imaging technique, has significantly advanced medical diagnostics by enabling the visualization of internal structures and abnormalities within the human body. With the advent of digital radiography, clinicians can obtain high-resolution images promptly, leading to faster diagnoses and informed treatment decisions. Radiography plays a pivotal role in detecting fractures, tumors, infections, and various other conditions, allowing for timely interventions and improved patient outcomes. Moreover, its widespread accessibility and cost-effectiveness make it an indispensable tool in healthcare settings worldwide. Radiotherapy, in turn, a branch of medical science that utilizes high-energy radiation, has become an integral component of cancer treatment and management. By precisely targeting and damaging cancerous cells, radiotherapy offers a potent strategy to control tumor growth and, in many cases, leads to cancer eradication. Additionally, radiotherapy is often used in combination with surgery and chemotherapy, providing a multifaceted approach to combat cancer comprehensively. The continuous advancement of radiotherapy techniques, such as intensity-modulated radiotherapy and stereotactic radiosurgery, has further improved treatment precision while minimizing damage to surrounding healthy tissues. Furthermore, radiography and radiotherapy have demonstrated their worth beyond oncology. Radiography is instrumental in guiding various medical procedures, including catheter placement, joint injections, and dental evaluations, reducing complications and enhancing procedural accuracy. Radiotherapy, for its part, finds applications in non-cancerous conditions like benign tumors, vascular malformations, and certain neurological disorders, offering therapeutic options for patients who may not benefit from traditional surgical interventions. In conclusion, radiography and radiotherapy stand as indispensable tools in modern medicine, driving transformative improvements in patient care and treatment outcomes. Their ability to diagnose, treat, and manage a wide array of medical conditions underscores their value in medical practice. As technology continues to advance, radiography and radiotherapy will undoubtedly play an ever more significant role in shaping the future of healthcare, ultimately saving lives and enhancing the quality of life for countless individuals worldwide.

Keywords: radiology, radiotherapy, medical imaging, cancer treatment

Procedia PDF Downloads 69
480 Assessment of Serum Osteopontin, Osteoprotegerin and Bone-Specific ALP as Markers of Bone Turnover in Patients with Disorders of Thyroid Function in Nigeria, Sub-Saharan Africa

Authors: Oluwabori Emmanuel Olukoyejo, Ogra Victor Ogra, Bosede Amodu, Tewogbade Adeoye Adedeji

Abstract:

Background: Disorders of thyroid function are the second most common endocrine disorders worldwide, with a direct relationship with metabolic bone diseases. These metabolic bone complications are often subtle but manifest as bone pains and an increased risk of fractures. The gold standard for diagnosis, Dual-Energy X-ray Absorptiometry (DEXA), is of limited use in this environment due to unavailability, cumbersomeness and cost. However, bone biomarkers have shown promise in assessing alterations in bone remodeling, which has not been studied in this environment. Aim: This study evaluates serum levels of bone-specific alkaline phosphatase (bone-specific ALP), osteopontin and osteoprotegerin as biomarkers of bone turnover in patients with disorders of thyroid function. Methods: This is a cross-sectional study carried out over a period of one and a half years. Forty patients with thyroid dysfunction, aged 20 to 50 years, and thirty-eight age- and sex-matched healthy euthyroid controls were included in this study. Patients were further stratified into hyperthyroid and hypothyroid groups. Bone-specific ALP, osteopontin, and osteoprotegerin, alongside serum total calcium, ionized calcium and inorganic phosphate, were assayed for all patients and controls. A self-administered questionnaire was used to obtain data on sociodemographic characteristics and medical history. Then, 5 ml of blood was collected in a plain bottle and serum was harvested following clotting and centrifugation. Serum samples were assayed for B-ALP, osteopontin, and osteoprotegerin using the ELISA technique. Total calcium and ionized calcium were assayed using an ion-selective electrode, while inorganic phosphate was assayed with automated photometry. Results: The hyperthyroid and hypothyroid patient groups had significantly higher median serum B-ALP (30.40 and 26.50 ng/ml) and significantly lower median OPG (0.80 and 0.80 ng/ml) than the controls (10.81 and 1.30 ng/ml, respectively), p < 0.05. Serum osteopontin, however, was significantly higher in the hyperthyroid group and significantly lower in the hypothyroid group when compared with the controls (11.00 and 2.10 vs 3.70 ng/ml, respectively), p < 0.05. Both the hyperthyroid and hypothyroid groups had significantly higher mean serum total calcium, ionized calcium and inorganic phosphate than the controls (2.49 ± 0.28, 1.27 ± 0.14 and 1.33 ± 0.33 mmol/l and 2.41 ± 0.04, 1.20 ± 0.04 and 1.15 ± 0.16 mmol/l vs 2.27 ± 0.11, 1.17 ± 0.06 and 1.08 ± 0.16 mmol/l, respectively), p < 0.05. Conclusion: Patients with disorders of thyroid function have metabolic imbalances in all the studied bone markers, suggesting a higher bone turnover. The routine bone markers will be an invaluable tool for monitoring bone health in patients with thyroid dysfunction, while the less readily available markers can be introduced as supplementary tools. Moreover, bone-specific ALP, osteopontin and osteoprotegerin were found to be the strongest independent predictors of metabolic bone marker derangements in patients with thyroid dysfunction.

Keywords: metabolic bone diseases, biomarker, bone turnover, hyperthyroid, hypothyroid, euthyroid

Procedia PDF Downloads 36
479 Participatory Approach: A Tool for Improving Food Security and Empowering a Local Community in Chitima, Mozambique

Authors: Matias Hargreaves, Martin Del Valle, Diego Rodriguez, Riveros Jose Luis

Abstract:

Through the years, all kinds of social development projects have tried to solve social problems such as hunger, poverty, malnutrition and food insecurity, among others, with poor success. Both private and state initiatives have invested resources in several countries and communities. Nevertheless, most of these initiatives are centered on scientists or external developers, with a lack of local participation. This compromises the sustainability of any intervention and also leads to poor empowerment of the local community. The participatory approach aims to rescue and enhance local knowledge, since it recognizes that these kinds of problems are best known by local actors. The objective of the study was to describe the role played by community empowerment in food security improvement at the NGO “O Viveiro” (15°43'37.77"S; 32°46'27.53"E) and Barrio Broma village (15°43'58.78"S; 32°46'7.27"E) in Chitima, Mozambique. A center for training in goat livestock and orchard keeping was built. A community orchard was co-constructed by foreign technicians and local actors. The prototype was installed in February 2016 by the technician team and the local community, with 16 m² as a nursery garden. Two orchard workshops were conducted in order to design a sustainable productive model which mixes both local and technological approaches. Two goat meat workshops were conducted in order to describe local methods and train the community to conduct their own techniques with high sanitary and productive standards. The technician team stayed in Mozambique until May 2016. Attendance at the orchard workshops was 20 and 14 persons, respectively, which represents 100% and 70% of the total requested quorum (20). For the goat meat workshops it was 4 and 5 persons, which represents 80% and 100% of the total requested quorum (5). By August 2016, the orchard had grown to 3,219 m² and it grows several vegetables such as beans, chili pepper, garlic, onion, tomatoes, lettuce, sweet potato, yuca potato, cabbage, eggplant, papaya trees, mango, and cassava. The process of increasing the size and diversifying the vegetables grown was led entirely by the local community. In connection with this, the local community started to harvest and began to sell the vegetable products at the local market. At the goat meat workshops, local participants rescued local knowledge by describing and practicing a traditional way to process goat meat by drying it outdoors and then smoking it. This information might contribute to describing the level of empowerment of this community, and thus gives evidence of acceptance of foreign intervention for improving their own practices and traditions.

Keywords: child malnutrition, food security, local community, participatory approach

Procedia PDF Downloads 276
478 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme

Authors: Eleanor Nel

Abstract:

Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further utilised to obtain research funding. Resultantly, academic institutions face significant pressure from governing bodies to provide evidence on the return for research investments. Research output has thus become a substantial discourse within institutions, mainly due to the processes linked to evaluating research output and the associated allocation of research funding. This focus on research outputs often surpasses the development of robust, widely accepted tools to additionally measure research impact at institutions. A publication writing programme, for enhancing research output, was launched at a South African university in 2011. Significant amounts of time, money, and energy have since been invested in the programme. Although participants provided feedback after each session, no formal review was conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and the effect on society have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. While publishing does not occur immediately, achieving a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme for the period of 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period. To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from a particular faculty as a convenience sample. The purpose of the research was to collect information to develop a comprehensive framework for impact evaluation that could be used to enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme to improve the impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results more effectively from publication writing programmes to institutional strategic objectives to improve research performance and quality, as well as what should be included in a comprehensive evaluation framework.

Keywords: evaluation, framework, impact, research output

Procedia PDF Downloads 76
477 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)

Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg

Abstract:

One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride are responsible for the development of dental and crippling skeletal fluorosis. In poor urban and rural settings, the provision of drinking water free of geogenic contamination can be a major challenge. In order to apply limited resources efficiently in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables of various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of the predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables that they wish to incorporate in their models. In addition, users can upload additional predictor variable datasets either as features or coverages. Such models can represent an improvement over the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, thereby resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data in a secure environment as well as the possibility to collaborate through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.
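
The hazard-mapping step described above can be sketched as a logistic regression that outputs the probability of exceeding the WHO limit at a location. The predictor names, the synthetic data and the fitted values below are illustrative assumptions and bear no relation to the actual GAP datasets or model coefficients.

```python
# Minimal sketch of a logistic-regression exceedance model of the kind GAP uses.
# Predictors and labels are synthetic placeholders, not GAP data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# hypothetical predictors: soil pH, aridity index, depth to groundwater (m), clay fraction
X = np.column_stack([
    rng.normal(7.0, 0.8, n),
    rng.uniform(0.1, 1.0, n),
    rng.uniform(2, 80, n),
    rng.uniform(0, 0.6, n),
])
# synthetic exceedance labels (1 = arsenic > 10 ug/L), loosely driven by aridity and shallow depth
risk = 1.2 * X[:, 1] - 0.02 * X[:, 2] + rng.normal(0, 0.3, n)
y = (risk > 0.0).astype(int)

model = LogisticRegression().fit(X, y)
# probability of exceedance for a new location -> the value mapped on a hazard map
p_exceed = model.predict_proba([[7.5, 0.3, 15.0, 0.2]])[0, 1]
print(f"P(As > 10 ug/L) = {p_exceed:.2f}")
```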

Keywords: arsenic, fluoride, groundwater contamination, logistic regression

Procedia PDF Downloads 348
476 Neuropsychiatric Outcomes of Intensive Music Therapy in Stroke Rehabilitation: A Preliminary Investigation

Authors: Honey Bryant, Elvina Chu

Abstract:

Stroke is the leading cause of disability in adults in Canada and is directly related to depression, anxiety, and sleep disorders, with an estimated annual cost of $50 billion in health care. Strokes impact not only the individual but society as a whole. Current stroke rehabilitation does not include music therapy, although it has shown success in clinical research on stroke rehabilitation. This study examines the use of neurologic music therapy (NMT) in conjunction with stroke rehabilitation to improve sleep quality, reduce stress levels, and promote neurogenesis. Existing research on NMT in stroke is limited, which means any conclusive information gathered during this study will be significant. Our novel hypotheses are that a) stroke patients will become less depressed and less anxious, with improved sleep, following NMT; b) NMT will reduce stress levels and promote neurogenesis in stroke patients admitted for rehabilitation; c) the beneficial effects of NMT will be sustained at least short-term following treatment. Participants were recruited from the in-patient stroke rehabilitation program at Providence Care Hospital in Kingston, Ontario, Canada. All participants maintained stroke rehabilitation treatment as normal. The study was split into two groups, the first being Passive Music Listening (PML) and the second Neurologic Music Therapy (NMT). Each group underwent 10 sessions of intensive music therapy lasting 45 minutes for 10 consecutive days, excluding weekends. Psychiatric assessments, the Epworth Sleepiness Scale (ESS), the Hospital Anxiety and Depression Scale (HADS), and the Music Engagement Questionnaire (MusEQ) were completed, followed by a general feedback interview. Physiological markers of stress were measured through blood pressure measurements and heart rate variability. Serum collections assessed neurogenesis via brain-derived neurotrophic factor (BDNF) and stress via cortisol levels. As this study is still ongoing, a formal analysis of the data has not been fully completed, although trends are following our hypotheses. A decrease in sleepiness and anxiety is seen in the first cohort of PML. Feedback interviews have indicated that most participants subjectively felt more relaxed and thought PML was useful in their recovery. If the hypotheses are supported, larger external funding will be sought, which will allow for greater investigation of the use of NMT in stroke rehabilitation. As we know, NMT is not covered under the Ontario Health Insurance Plan (OHIP), so there is limited scientific data surrounding its use as a clinical tool. This research will provide detailed findings on the treatment of neuropsychiatric aspects of stroke. Concurrently, a passive music listening study is being designed to further review the use of PML in rehabilitation.

Keywords: music therapy, psychotherapy, neurologic music therapy, passive music listening, neuropsychiatry, counselling, behavioural, stroke, stroke rehabilitation, rehabilitation, neuroscience

Procedia PDF Downloads 113
475 Landslide Susceptibility Analysis in the St. Lawrence Lowlands Using High Resolution Data and Failure Plane Analysis

Authors: Kevin Potoczny, Katsuichiro Goda

Abstract:

The St. Lawrence lowlands extend from Ottawa to Quebec City and are known for large deposits of sensitive Leda clay. Leda clay deposits are responsible for many large landslides, such as the 1993 Lemieux and 2010 St. Jude (4 fatalities) landslides. Due to the large extent and sensitivity of Leda clay, regional hazard analysis for landslides is an important tool in risk management. A 2018 regional study by Farzam et al. on the susceptibility of Leda clay slopes to landslide hazard uses 1 arc second topographical data. A qualitative method known as Hazus is used to estimate susceptibility by checking for various criteria at a location and determining a susceptibility rating on a scale of 0 (no susceptibility) to 10 (very high susceptibility). These criteria are slope angle, geological group, soil wetness, and distance from waterbodies. Given the flat nature of the St. Lawrence lowlands, the current assessment fails to capture local slopes, such as the St. Jude site. Additionally, the data did not allow one to analyze failure planes accurately. This study substantially improves the analysis performed by Farzam et al. in two respects. First, regional assessment with high-resolution data allows for the identification of local sites that may previously have been classified as low susceptibility. This then provides the opportunity to conduct a more refined analysis of the failure plane of the slope. Slopes derived from 1 arc second data are relatively gentle (0-10 degrees) across the region; however, the 1- and 2-meter resolution 2022 HRDEM provided by NRCAN shows that short, steep slopes are present. At a regional level, 1 arc second data can underestimate the susceptibility of short, steep slopes, which can be dangerous as Leda clay landslides behave retrogressively and travel upwards into flatter terrain. At the location of the St. Jude landslide, the slope differences are significant: 1 arc second data shows a maximum slope of 12.80 degrees and a mean slope of 4.72 degrees, while the HRDEM data shows a maximum slope of 56.67 degrees and a mean slope of 10.72 degrees. This equates to a difference of three susceptibility levels when the soil is dry and one susceptibility level when wet. GIS software is used to create a regional susceptibility map across the St. Lawrence lowlands at 1- and 2-meter resolutions. Failure planes are necessary to differentiate between small and large landslides, which have so far been ignored in regional analysis. Leda clay failures can only retrogress as far as their failure planes, so the regional analysis must be able to transition smoothly into a more robust local analysis. It is expected that slopes within the region previously assessed with low susceptibility scores contain local areas of high susceptibility. The goal is to create opportunities for local failure plane analysis to be undertaken, which has not been possible before. Due to the low resolution of previous regional analyses, any slope near a waterbody could be considered hazardous; high-resolution regional analysis, however, allows for a more precise determination of hazard sites.
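
The slope statistics quoted above follow directly from how slope is derived from the DEM at a given resolution. The sketch below shows that derivation and a simplified Hazus-style scoring step; the synthetic terrain, cell size, class thresholds and scoring rule are illustrative assumptions, not the actual Hazus criteria tables.

```python
# Minimal sketch: slope from a DEM grid plus a toy susceptibility rating.
# Terrain, thresholds and scoring are illustrative assumptions only.
import numpy as np

def slope_degrees(dem: np.ndarray, cell_size: float) -> np.ndarray:
    """Slope angle from a DEM using central-difference gradients."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def susceptibility(slope_deg: np.ndarray, wet: bool) -> np.ndarray:
    """Toy 0-10 rating: steeper and wetter terrain scores higher."""
    base = np.digitize(slope_deg, [5, 10, 15, 20, 25, 30, 35, 40])  # 0..8
    return np.clip(base + (2 if wet else 0), 0, 10)

# synthetic 2 m DEM tile: a short, steep bank cut into otherwise flat terrain
dem = np.fromfunction(lambda i, j: np.where(j > 50, 0.0, (50 - j) * 1.0), (100, 100))
s = slope_degrees(dem, cell_size=2.0)
print("max slope:", s.max().round(1), "deg; susceptibility (wet):", susceptibility(s, wet=True).max())
```

Resampling the same terrain to a coarser cell size flattens the computed gradients, which is exactly why the 1 arc second product misses short, steep Leda clay banks.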

Keywords: hazus, high-resolution DEM, leda clay, regional analysis, susceptibility

Procedia PDF Downloads 76
474 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow

Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat

Abstract:

Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability issues. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% accuracy for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater reliability, does not rely on human observation, and is not dependent on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
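
The evaluation protocol described above, leave-one-out cross-validation of a random forest over the nine extracted features, can be sketched as follows. The feature matrix and labels are randomly generated placeholders standing in for the real eye-gaze, EEG, pose and interaction features.

```python
# Minimal sketch of the classification set-up: leave-one-out cross-validation
# of a random forest on multimodal feature vectors. Data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)
n_samples, n_features = 59, 9            # 59 sessions, 9 extracted features
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)   # 1 = engaged, 0 = disengaged (objective CPT label)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.3f}")

# feature importances indicate which sensor mode drives the prediction (e.g. eye gaze)
clf.fit(X, y)
print("most important feature index:", int(np.argmax(clf.feature_importances_)))
```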

Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement

Procedia PDF Downloads 94
473 The Effects of Alpha-Lipoic Acid Supplementation on Post-Stroke Patients: A Systematic Review and Meta-Analysis of Randomized Controlled Trials

Authors: Hamid Abbasi, Neda Jourabchi, Ranasadat Abedi, Kiarash Tajernarenj, Mehdi Farhoudi, Sarvin Sanaie

Abstract:

Background: Alpha-lipoic acid (ALA), a fat- and water-soluble, sulfur-containing coenzyme, has received considerable attention for its potential therapeutic role in diabetes, cardiovascular diseases, cancers, and central nervous system disease. This investigation aims to evaluate the probable protective effects of ALA in stroke patients. Methods: This meta-analysis was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The PICO criteria for this meta-analysis were as follows: Population/Patients (P: stroke patients); Intervention (I: ALA); Comparison (C: control); Outcome (O: blood glucose, lipid profile, oxidative stress, inflammatory factors). Studies excluded from the analysis were in vitro, in vivo, and ex vivo studies, case reports, and quasi-experimental studies. The Scopus, PubMed, Web of Science, and EMBASE databases were searched up to August 2023. Results: Of the 496 records that were screened at the title/abstract stage, 9 studies were included in this meta-analysis. The sample sizes in the included studies vary between 28 and 90. Risk of bias was assessed with the second version of the Cochrane risk-of-bias (RoB) assessment tool for randomized controlled trials (RCTs); 8 studies had a definitely high risk of bias. Discussion: To the best of our knowledge, the present meta-analysis is the first study addressing the effectiveness of ALA supplementation in enhancing post-stroke metabolic markers, including lipid profile, oxidative stress, and inflammatory indices. It is imperative to acknowledge certain potential limitations inherent in this study. First of all, the type of treatment (oral or intravenous infusion) could alter the bioavailability of ALA. Our study had restricted evidence regarding the impact of ALA supplementation on the included outcomes. Therefore, further research is warranted to delve into the effects of ALA specifically on inflammation and oxidative stress. Funding: The research protocol was approved and supported by the Student Research Committee, Tabriz University of Medical Sciences (grant number: 72825). Registration: This study was registered in the International Prospective Register of Systematic Reviews (PROSPERO ID: CR42023461612).

Keywords: alpha-lipoic acid, lipid profile, blood glucose, inflammatory factors, oxidative stress, meta-analysis, post-stroke

Procedia PDF Downloads 63
472 Salmonella Emerging Serotypes in Northwestern Italy: Genetic Characterization by Pulsed-Field Gel Electrophoresis

Authors: Clara Tramuta, Floris Irene, Daniela Manila Bianchi, Monica Pitti, Giulia Federica Cazzaniga, Lucia Decastelli

Abstract:

This work presents the results obtained by the Regional Reference Centre for Salmonella Typing (CeRTiS) in a retrospective study aimed at investigating, through Pulsed-Field Gel Electrophoresis (PFGE) analysis, the genetic relatedness of emerging Salmonella serotypes of human origin circulating in the North-West of Italy. A further goal of this work was to create a regional database to facilitate foodborne outbreak investigations and to monitor outbreaks at an earlier stage. A total of 112 strains, isolated from 2016 to 2018 in hospital laboratories, were included in this study. The isolates were previously identified as Salmonella according to standard microbiological techniques, and serotyping was performed according to ISO 6579-3 and the Kauffmann-White scheme using O and H antisera (Statens Serum Institut®). All strains were characterized by PFGE: the analysis was conducted according to a standardized PulseNet protocol. The restriction enzyme XbaI was used to generate several distinguishable genomic fragments on the agarose gel. PFGE was performed on a CHEF Mapper system, separating large fragments and generating comparable genetic patterns. The agarose gel was then stained with GelRed® and photographed under ultraviolet transillumination. The PFGE patterns obtained from the 112 strains were compared using BioNumerics version 7.6 software with the Dice coefficient, with 2% band tolerance and 2% optimization. For each serotype, the data obtained with PFGE were compared according to the geographical origin and the year in which the strains were isolated. The Salmonella strains were identified as follows: S. Derby n. 34; S. Infantis n. 38; S. Napoli n. 40. All the isolates had appreciable restriction digestion patterns ranging from approximately 40 to 1100 kb. In general, a fairly heterogeneous distribution of pulsotypes emerged in the different provinces. Cluster analysis indicated high genetic similarity (≥ 83%) among strains of S. Derby (n. 30; 88%), S. Infantis (n. 36; 95%) and S. Napoli (n. 38; 95%) circulating in north-western Italy. The study underlines the genomic similarities shared by the emerging Salmonella strains in north-western Italy and allowed the creation of a database to detect outbreaks at an early stage. The results therefore confirmed that PFGE is a powerful and discriminatory tool to investigate the genetic relationships among strains in order to monitor and control the spread of salmonellosis outbreaks. Pulsed-field gel electrophoresis (PFGE) still represents one of the most suitable approaches to characterize strains, in particular for laboratories where NGS techniques are not available.
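
The similarity values quoted above come from pairwise Dice comparisons of band patterns. A minimal sketch of that comparison is shown below, using hypothetical XbaI band positions and ignoring the band tolerance and optimization settings applied in BioNumerics.

```python
# Minimal sketch of the band-pattern comparison behind the cluster analysis:
# Dice coefficient between PFGE profiles encoded as sets of matched band positions.
# The band positions are hypothetical examples, not measured profiles.
def dice_similarity(bands_a: set, bands_b: set) -> float:
    """2*|A∩B| / (|A| + |B|), expressed as a percentage."""
    shared = len(bands_a & bands_b)
    return 100.0 * 2 * shared / (len(bands_a) + len(bands_b))

# hypothetical XbaI band positions (kb) for three isolates
profile_1 = {40, 90, 150, 310, 460, 700, 1100}
profile_2 = {40, 90, 150, 310, 460, 680, 1100}
profile_3 = {60, 120, 150, 400, 520, 800}

print(f"isolate 1 vs isolate 2: {dice_similarity(profile_1, profile_2):.1f}%")
print(f"isolate 1 vs isolate 3: {dice_similarity(profile_1, profile_3):.1f}%")
```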

Keywords: emerging Salmonella serotypes, genetic characterization, human strains, PFGE

Procedia PDF Downloads 105
471 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning

Authors: Hossein Havaeji, Tony Wong, Thien-My Dao

Abstract:

1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) is a system that uses BT to drive SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. These costs must be estimated, as they can impact existing cost control strategies. To account for system and deployment costs, it is necessary to overcome the following hurdle: in most cases, the costs of developing and running BT in an SCS are not yet clear. Many industries aiming to use BT pay special attention to the importance of the BT installation cost, which has a direct impact on the total costs of the SCS. Predicting the BT installation cost in the SCS may help managers decide whether BT offers an economic advantage. The first purpose of the research is to identify the main BT installation cost components in the SCS needed for a deeper cost analysis. We then identify and categorize the main groups of cost components in more detail to utilize them in the prediction process. The second objective is to determine the suitable Supervised Learning technique for predicting the costs of developing and running BT in the SCS in a particular case study. The last aim is to investigate how the running BT cost can be incorporated into the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, Supervised Learning is a method to frame the data, pre-process it, and train the selected model. It is a learning model aimed at predicting an outcome measurement based on a set of previously unseen input data. The following steps are conducted to pursue the objectives of our subject. The first step is a literature review to identify the different cost components of BT installation in the SCS. Based on the literature review, we choose Supervised Learning methods which are suitable for BT installation cost prediction in the SCS. According to the literature review, some Supervised Learning algorithms which provide us with a powerful tool to classify BT installation components and predict the BT installation cost are the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models constitutes the third step. Finally, we will propose the best predictive performance to find the minimum BT installation costs in the SCS. 3. Expected Results and Conclusion: This study aims to propose a cost prediction of BT installation in the SCS with the help of Supervised Learning algorithms. As a first attempt, we will select a case study in the field of BT-enabled SCS and then use some Supervised Learning algorithms to predict the BT installation cost in the SCS. We continue to find the best predictive performance for developing and running BT in the SCS. Finally, the paper will be presented at the conference.
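
As an illustration of the prediction step, the sketch below trains a Support Vector Regression model on a few hypothetical installation-cost components. The feature set, synthetic data and hyperparameters are assumptions for demonstration, not results from the planned case study.

```python
# Minimal sketch of SVR-based cost prediction on cost-component features.
# Feature names, data and the cost relationship are synthetic assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n = 120
# hypothetical cost components: node count, integration effort (person-days),
# annual transaction volume (k), platform/licence fees (k$)
X = np.column_stack([
    rng.integers(3, 50, n),
    rng.uniform(20, 400, n),
    rng.uniform(10, 5000, n),
    rng.uniform(5, 200, n),
])
# synthetic total installation cost (k$) with noise
y = 15 + 2.5 * X[:, 0] + 0.8 * X[:, 1] + 0.02 * X[:, 2] + 1.1 * X[:, 3] + rng.normal(0, 20, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=5.0))
model.fit(X, y)
predicted = model.predict([[10, 120, 800, 40]])[0]
print("predicted installation cost (k$):", round(float(predicted), 1))
```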

Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning

Procedia PDF Downloads 122
470 Link People from Different Age Together: Attitude and Behavior Changes in Inter-Generational Interaction Program

Authors: Qian Sun, Dannie Dai, Vivian Lou

Abstract:

Background: Changes in population structure and modernization have left traditional channels of achieving intergenerational solidarity in crisis. Policies and projects that purposefully structure intergenerational interaction are regarded as effective ways to enhance positive attitude changes between generations. However, few intergenerational interaction programs have put equal emphasis on promoting positive changes in both attitude and behavior across generational groups. Objective: This study evaluated the effectiveness of an intergenerational interaction program which aims to facilitate positive attitude changes and behavioral interaction between young and old individuals in Hong Kong. Method: A quasi-experimental design was adopted with a sample of 150 older participants and 161 young participants. Of these, 73 older and 78 young participants belonged to the experimental groups, while 77 older participants and 84 young participants belonged to the control groups. The Age Group Evaluation and Description scale (AGED) was adopted to measure older participants' attitudes toward young people, and the Chinese version of Kogan's Attitude towards Older People scale (KAOP) as well as Polizzi's refined version of the Ageing Semantic Differential Scale (ASD) were used to measure the younger generation's attitudes toward older people. The interpersonal behaviour of participants was assessed using Belgrave's behavioural observation tool. Six primary verbal or non-verbal interpersonal behaviours, including smiles, looks, touches, encourages, initiated conversations and assists, were identified and observed. Findings: Positive attitude and behavior changes among both younger and older participants were confirmed in the results. Compared with participants from the control group, older participants in the experimental group showed significant positive changes in attitudes toward the younger generation as assessed by the AGED (F=138.34, p < .001). Moreover, older participants showed significant positive changes in three out of six behaviours (visual attention: t=2.26, p<0.05; initiated conversation: t=3.42, p<0.01; and touch: t=2.28, p<0.05). For younger participants, those in the experimental group showed significant positive changes in attitude toward older people (with F-scores of 47.22 for KAOP and 72.75 for ASD, p<.001). Young participants also showed significant positive changes in two out of six behaviours (visual attention: t=3.70, p<0.01; initiated conversation: t=2.04, p<0.001). There was no significant relationship between attitude change and behaviour change in either the older (p=0.86) or the younger (p=0.22) group. Conclusion: This study has practical implications for social work. The effective model of this program could assist social workers and allied professionals in designing relevant projects to nurture intergenerational solidarity. Furthermore, the non-significant relationship between attitude and behavior changes revealed that attitude change was not a strong predictor of behavior change; hence, intergenerational programs countering age stereotypes should put equal emphasis on both attitudinal and behavioral aspects.

Keywords: attitude and behaviour changes, intergenerational interaction, intergenerational solidarity, program design

Procedia PDF Downloads 243
469 Advancing Women's Participation in SIDS' Renewable Energy Sector: A Multicriteria Evaluation Framework

Authors: Carolina Mayen Huerta, Clara Ivanescu, Paloma Marcos

Abstract:

Due to their unique geographic challenges and the imperative to combat climate change, Small Island Developing States (SIDS) are experiencing rapid growth in the renewable energy (RE) sector. However, women's representation in formal employment within this burgeoning field remains significantly lower than their male counterparts. Conventional methodologies often overlook critical geographic data that influence women's job prospects. To address this gap, this paper introduces a Multicriteria Evaluation (MCE) framework designed to identify spatially enabling environments and restrictions affecting women's access to formal employment and business opportunities in the SIDS' RE sector. The proposed MCE framework comprises 24 key factors categorized into four dimensions: Individual, Contextual, Accessibility, and Place Characterization. "Individual factors" encompass personal attributes influencing women's career development, including caregiving responsibilities, exposure to domestic violence, and disparities in education. "Contextual factors" pertain to the legal and policy environment, influencing workplace gender discrimination, financial autonomy, and overall gender empowerment. "Accessibility factors" evaluate women's day-to-day mobility, considering travel patterns, access to public transport, educational facilities, RE job opportunities, healthcare facilities, and financial services. Finally, "Place Characterization factors" enclose attributes of geographical locations or environments. This dimension includes walkability, public transport availability, safety, electricity access, digital inclusion, fragility, conflict, violence, water and sanitation, and climatic factors in specific regions. The analytical framework proposed in this paper incorporates a spatial methodology to visualize regions within countries where conducive environments for women to access RE jobs exist. In areas where these environments are absent, the methodology serves as a decision-making tool to reinforce critical factors, such as transportation, education, and internet access, which currently hinder access to employment opportunities. This approach is designed to equip policymakers and institutions with data-driven insights, enabling them to make evidence-based decisions that consider the geographic dimensions of disparity. These insights, in turn, can help ensure the efficient allocation of resources to achieve gender equity objectives.
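
The spatial combination of factors described above is, at its core, a weighted overlay of normalized raster layers. The sketch below illustrates that step with four hypothetical layers and weights; the full framework uses 24 factors, and the layer names, weights and threshold here are placeholders, not values from the paper.

```python
# Minimal sketch of a multicriteria evaluation (MCE) overlay: normalized factor
# layers combined with weights into a 0-1 "enabling environment" score per cell.
# Layers, weights and the 0.6 threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
shape = (50, 50)  # raster grid over a hypothetical island

layers = {
    "access_to_public_transport": rng.random(shape),
    "proximity_to_RE_jobs":       rng.random(shape),  # already normalized, closer = higher
    "electricity_access":         rng.random(shape),
    "digital_inclusion":          rng.random(shape),
}
weights = {
    "access_to_public_transport": 0.30,
    "proximity_to_RE_jobs":       0.30,
    "electricity_access":         0.20,
    "digital_inclusion":          0.20,
}

score = sum(weights[name] * layer for name, layer in layers.items())
enabling = score >= 0.6  # cells classified as enabling environments for women's RE employment
print(f"share of enabling cells: {enabling.mean():.1%}")
```

In practice each layer would be rescaled to a common 0-1 range before weighting, and cells below the threshold point to where investments in transport, education or connectivity would do the most to remove barriers.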

Keywords: gender, women, spatial analysis, renewable energy, access

Procedia PDF Downloads 69
467 Evaluation Method for Fouling Risk Using Quartz Crystal Microbalance

Authors: Natsuki Kishizawa, Keiko Nakano, Hussam Organji, Amer Shaiban, Mohammad Albeirutty

Abstract:

One of the most important tasks in operating desalination plants that use reverse osmosis (RO) is preventing fouling of the RO membrane by foulants found in seawater. Optimal design of the pre-treatment process upstream of the RO stage enables the reduction of foulants. A quantitative evaluation of the fouling risk of the pre-treated water fed to the RO stage is therefore required for optimal design. Water-quality measures such as the silt density index (SDI) and total organic carbon (TOC) have conventionally been applied for this evaluation. However, these methods are not always effective at evaluating the fouling risk of RO feed water. Furthermore, if the fouling risk of RO feed water could be monitored inline, stable plant management would be possible through alerts and appropriate control of the pre-treatment process. The purpose of this study is to develop a method to evaluate the fouling risk of RO feed water. We applied a quartz crystal microbalance (QCM) to measure the amount of foulants in seawater using a sensor coated with a polyamide thin film, the main material of RO membranes. The increase in the weight of the sensor after sample water has passed over it for a given time directly indicates the fouling risk of the sample; we refer to these values as the fouling potential (FP). The method measures very small amounts of substances in seawater within a short time (< 2 h) and from a small sample volume (< 50 mL). In laboratory-scale tests using RO cell filtration units, FP showed a higher correlation with the pressure increase caused by RO fouling than either SDI or TOC. Then, to establish this correlation in an actual bench-scale RO membrane module, and to confirm the feasibility of the monitoring system as a control tool for the pre-treatment process, we started a long-term test at an experimental desalination site on the Red Sea in Jeddah, Kingdom of Saudi Arabia. Inline equipment implementing the method made it possible to measure FP intermittently (four times per day) and automatically. Moreover, over two 3-month operations, the RO operating pressure was compared among feed waters of different quality. A pressure increase across the RO membrane module was observed in the high-FP unit, whose feed water was treated by a cartridge filter only, whereas no pressure increase was observed in the low-FP unit, whose feed water was treated by an ultrafilter. The correlation in an actual-scale RO membrane was thus established in two runs with two types of feed water. These results suggest that the FP method enables the evaluation of the fouling risk of RO feed water.
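The abstract does not state how the QCM signal is converted into FP. Purely as an illustration, the sketch below assumes the standard Sauerbrey relation between resonance-frequency shift and deposited mass and defines a hypothetical FP as mass gained per unit volume of sample water passed; neither the constant chosen (a 5 MHz crystal) nor this FP definition is the authors'.

```python
# Illustrative only: converts a QCM frequency shift to deposited mass with the
# Sauerbrey equation and defines a hypothetical fouling potential (FP).
SAUERBREY_C_NG_PER_CM2_HZ = 17.7   # mass sensitivity of a 5 MHz AT-cut crystal

def deposited_mass_ng_per_cm2(delta_f_hz: float) -> float:
    """Sauerbrey: delta_m = -C * delta_f (a negative shift means mass gain)."""
    return -SAUERBREY_C_NG_PER_CM2_HZ * delta_f_hz

def fouling_potential(delta_f_hz: float, filtered_volume_ml: float) -> float:
    """Hypothetical FP: mass gained on the polyamide-coated sensor per mL of
    sample water passed (ng cm^-2 mL^-1); not the authors' exact definition."""
    return deposited_mass_ng_per_cm2(delta_f_hz) / filtered_volume_ml

# Example: a -40 Hz shift after passing 50 mL of pre-treated seawater.
print(fouling_potential(-40.0, 50.0))   # ~14.2 ng/cm^2 per mL
```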

Keywords: fouling, monitoring, QCM, water quality

Procedia PDF Downloads 212
466 Nutritional Education in Health Resort Institutions in the Face of Demographic and Epidemiological Changes in Poland

Authors: J. Woźniak-Holecka, T. Holecki, S. Jaruga

Abstract:

Spa treatment is an important area of the health care system in Poland owing to the increasing needs of the population and the historical tradition of this form of therapy. It extends the range of financing possibilities for spa facilities and increases the potential of spa services, which is very important in the context of demographic and epidemiological change. The main advantages of spa treatment services include relatively wide availability, a low risk of side effects, good patient tolerance, a long-lasting curative effect, and relatively low cost. In addition, patients should be provided with a proper diet and be enabled to participate in health education and health promotion classes aimed at health problems consistent with the treatment profile. Challenges for global health care systems include a sharp increase in spending on benefits, the dynamic development of health technologies, and growing social expectations; this requires extending the competences of health resort facilities towards health promotion. Within each type of health resort institution in Poland, nutritional education services are implemented with the aim of creating and consolidating proper eating habits. Choosing the right diet can speed up recovery or become one of the methods of alleviating the symptoms of chronic diseases. During spa treatment, the patient learns the principles of rational nutrition and the diet therapy appropriate to his or her diseases. The aim of the project is to assess the frequency and quality of nutritional education provided to patients in health resort facilities from a nationwide perspective. The material for the study will be data obtained through in-depth interviews conducted with the Heads of Nutrition Departments of selected institutions. The use of nutritional education in health resorts may be an important goal of state health policy as a useful tool for reducing the risk of diet-related diseases. Recognizing nutritional education in health resort institutions as a full-value health service could provide effective system support for health policy, including for seniors, given the demographic changes currently occurring in the Polish population. Furthermore, it is necessary to increase patients' interest in and motivation to follow nutritional education recommendations, because this will bring tangible benefits for the long-term effects of therapy, and care should be taken with the form and methodology of the nutritional education implemented in health resort institutions. Finally, it is necessary to construct an educational offer for the patient groups with the greatest health needs: the elderly and the disabled. In conclusion, the system of nutritional education implemented in Polish health resort institutions requires comprehensive change and strong systemic correction.

Keywords: health care system, nutritional education, public health, spa and treatment

Procedia PDF Downloads 114
465 Assessing Children’s Probabilistic and Creative Thinking in a Non-formal Learning Context

Authors: Ana Breda, Catarina Cruz

Abstract:

Daily, we face unpredictable events that are often attributed to chance, as there is no other justification for their occurrence. Chance, understood as a source of uncertainty, is present in several aspects of human life, such as weather forecasts, dice rolling, and lotteries. Surprisingly, humans and some animals can quickly adjust their behavior to handle doubly stochastic processes efficiently (random events with two layers of randomness, like unpredictable weather affecting dice rolling). This ability to adjust suggests that the human brain has built-in mechanisms for perceiving, understanding, and responding to simple probabilities. It also explains why current trends in mathematics education include probability concepts in official curricula from the third year of primary education onwards. In the first years of schooling, children learn to use specific vocabulary, such as never, always, rarely, perhaps, likely, and unlikely; these keywords are of crucial importance for their perception and understanding of probabilities. The development of probabilistic concepts arises from facts and cause-effect sequences resulting from the subject's actions, as well as from the notion of chance and intuitive estimates based on everyday experience. As part of a junior summer school program that took place at a Portuguese university, a non-formal learning experiment was carried out with 18 children in the 5th and 6th grades. This experiment was designed to be implemented within the dynamic of a serious ice-breaking game, to assess the children's levels of probabilistic, critical, and creative thinking in understanding impossible, certain, equally probable, likely, and unlikely events, and to gain insight into how the non-formal learning context influenced their achievements. The criteria used to evaluate probabilistic thinking included the creative ability to conceive events in the specified categories, the ability to properly justify the categorization, the ability to critically assess the events classified by other children, and the ability to make predictions based on a given probability. The data analysis employs a qualitative, descriptive, and interpretative approach based on students' written productions, audio recordings, and researchers' field notes. This methodology allowed us to conclude that such an approach is an appropriate and helpful formative assessment tool. The promising results of this initial exploratory study call for a future study with children at these levels of education, from different regions and attending public or private schools, to validate and expand our findings.

Keywords: critical and creative thinking, non-formal mathematics learning, probabilistic thinking, serious game

Procedia PDF Downloads 27
464 A Numerical Study for Improving the Performance of a Vertical Axis Wind Turbine by a Wind Power Tower

Authors: Soo-Yong Cho, Chong-Hyun Cho, Chae-Whan Rim, Sang-Kyu Choi, Jin-Gyun Kim, Ju-Seok Nam

Abstract:

Recently, vertical axis wind turbines (VAWTs) have been widely used to produce electricity, even in urban areas. They have several merits, such as low noise, easy installation of the generator, and a simple structure without a yaw-control mechanism. However, their blades operate under the influence of the trailing vortices generated by the preceding blades. This phenomenon deteriorates output power and makes it difficult to predict performance correctly. In order to improve the performance of a VAWT, a wind power tower can be applied. Usually, the wind power tower is constructed as a multi-story building to increase the frontal area presented to the wind stream. Hence, multiple sets of VAWTs can be installed within the wind power tower and operated at high elevation. Many different types of wind power tower can be used in the field. In this study, a wind power tower with a circular column shape was applied, and the VAWT was installed at the center of the tower. Seven guide walls were used as struts between the floors of the tower. These guide walls were utilized not only to increase the wind velocity within the tower but also to adjust the wind direction so as to provide better working conditions for the VAWT. Hence, several important design variables, such as the distance between the wind turbine and the guide wall, the outer diameter of the wind power tower, and the orientation of the guide wall relative to the wind direction, should be considered to enhance the output power of the VAWT. A numerical analysis was conducted to find the optimal values of these design variables using computational fluid dynamics (CFD), which can be a more accurate prediction method than stream-tube methods. Obtaining accurate CFD results requires transient analysis and full three-dimensional (3-D) computation. However, such full 3-D CFD is hardly practical as a design tool because it requires an enormous amount of computation time. Therefore, a reduced computational domain was applied as a practical alternative. In this study, the computations were conducted in the reduced computational domain and compared with experimental results in the literature, and the mechanism behind the differences between the experimental and computational results was examined. The computed results showed that this computational method can be effective within a design methodology that uses an optimization algorithm. After validation of the numerical method, CFD analyses of the wind power tower were conducted with the important design variables affecting the performance of the VAWT. The results showed that the output power of the VAWT obtained using the wind power tower was increased compared to that obtained without the wind power tower. In addition, they showed that the increase in output power depends greatly on the dimensions of the guide wall.
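As a small worked example of the performance metrics typically extracted from such CFD runs, the snippet below computes the tip-speed ratio and power coefficient of an H-type VAWT from rotor torque and free-stream velocity; all numerical values are illustrative and are not results from the study.

```python
# Illustrative post-processing of a VAWT CFD run (all values are made up).
rho = 1.225          # air density, kg/m^3
V = 8.0              # free-stream wind speed, m/s
R = 1.0              # rotor radius, m
H = 2.0              # blade height, m
omega = 24.0         # rotor angular speed, rad/s
torque = 9.5         # mean torque from CFD, N*m

A = 2.0 * R * H                       # swept area of an H-type VAWT (diameter x height)
tsr = omega * R / V                   # tip-speed ratio
power = torque * omega                # mechanical power, W
cp = power / (0.5 * rho * A * V**3)   # power coefficient

print(f"TSR = {tsr:.2f}, P = {power:.1f} W, Cp = {cp:.3f}")
```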

Keywords: CFD, performance, VAWT, wind power tower

Procedia PDF Downloads 387
463 Suggestion of Methodology to Detect Building Damage Level Collectively with Flood Depth Utilizing Geographic Information System at Flood Disaster in Japan

Authors: Munenari Inoguchi, Keiko Tamura

Abstract:

In 2019, Japan suffered earthquake, typhoon, and flood disasters. In particular, 38 of the 47 prefectures were affected by Typhoon #1919, which occurred in October 2019. This disaster left 99 people dead, three missing, and 484 injured. Furthermore, 3,081 buildings totally collapsed and 24,998 were half-collapsed. Once a disaster occurs, local responders have to inspect the damage level of each building themselves in order to certify building damage for survivors, so that survivors can start their life reconstruction process. In that disaster, the total number of buildings to be inspected was extremely high. Based on this situation, the Cabinet Office of Japan approved an efficient way of detecting building damage level, namely collective detection. However, it provided only a guideline, and local responders had to establish a concrete and reliable method by themselves. To address this issue, we decided to establish an effective and efficient methodology for detecting building damage level collectively from flood depth. Because flood depth depends on land elevation, we decided to utilize a geographic information system (GIS) to analyze elevation spatially. We focused on spatial interpolation, an analysis tool usually used to survey groundwater levels. In establishing the methodology, we considered four key points: 1) how to satisfy the conditions defined in the guideline approved by the Cabinet Office for detecting building damage level, 2) how to satisfy survivors with the resulting building damage levels, 3) how to maintain equitability and fairness, because the detection of building damage level is executed by a public institution, and 4) how to reduce time and human-resource costs, because responders do not have enough of either for disaster response. We then proposed a five-step methodology for detecting building damage level collectively from flood depth utilizing GIS. The first step is to obtain the boundary of the flooded area. The second is to collect actual flood depths as samples across the flooded area. The third is to execute spatial interpolation with the sampled flood depths to derive the two-dimensional extent of flood depth. The fourth is to divide the area into blocks, following road lines, according to four categories of flood depth (non-flooded, above floor level to 100 cm, 100 cm to 180 cm, and over 180 cm), in order to obtain the acceptance of survivors. The fifth is to assign a flood depth level to each building. We proposed the collective detection methodology described above to Koriyama city in Fukushima prefecture, and local responders decided to adopt it for Typhoon #1919 in 2019. Together with the local responders, we then collectively detected the building damage level of over 1,000 buildings. We received positive feedback that the methodology was simple and that it reduced time and human-resource costs.
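A minimal sketch of steps two to five is shown below, using inverse-distance weighting as the spatial interpolation and the four flood-depth categories named in the abstract; the sample points, building coordinates, and the assumed 50 cm floor height are hypothetical.

```python
# Sketch of steps 2-5: interpolate sampled flood depths (IDW) and assign a
# flood-depth category to each building. Coordinates, depths, and the assumed
# 50 cm floor height are hypothetical.
import numpy as np

samples = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
depths_cm = np.array([0.0, 60.0, 120.0, 200.0])      # sampled flood depths

def idw(point, pts, vals, power=2.0):
    """Inverse-distance-weighted estimate of flood depth at `point`."""
    d = np.linalg.norm(pts - point, axis=1)
    if np.any(d < 1e-9):                              # point coincides with a sample
        return float(vals[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * vals) / np.sum(w))

FLOOR_CM = 50.0                                       # assumed floor height

def damage_category(depth_cm):
    if depth_cm <= FLOOR_CM:
        return "non-flooded (below floor level)"
    if depth_cm <= 100.0:
        return "above floor level to 100 cm"
    if depth_cm <= 180.0:
        return "100 cm to 180 cm"
    return "over 180 cm"

buildings = {"bldg_A": (20.0, 30.0), "bldg_B": (90.0, 95.0)}
for name, xy in buildings.items():
    d = idw(np.array(xy), samples, depths_cm)
    print(name, round(d, 1), damage_category(d))
```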

Keywords: building damage inspection, flood, geographic information system, spatial interpolation

Procedia PDF Downloads 124
462 Managed Aquifer Recharge (MAR) for the Management of Stormwater on the Cape Flats, Cape Town

Authors: Benjamin Mauck, Kevin Winter

Abstract:

The city of Cape Town in South Africa has shown consistent economic and population growth in the last few decades, and that growth is expected to continue into the future. These projected economic and population growth rates are set to place additional pressure on the city's already strained water supply system. Thus, given Cape Town's water scarcity, increasing water demands, and stressed water supply system, coupled with global awareness of the issues of sustainable development, environmental protection, and climate change, alternative water management strategies are required to ensure that water is sustainably managed. Water Sensitive Urban Design (WSUD) is an approach to sustainable urban water management that attempts to assign a resource value to all forms of water in the urban context, viz. stormwater, wastewater, potable water, and groundwater. WSUD employs a wide range of strategies to improve the sustainable management of urban water, such as water reuse, developing alternative supply sources, sustainable stormwater management, and enhancing the aesthetic and recreational value of urban water. Managed Aquifer Recharge (MAR) is one WSUD strategy that has proven successful in a number of places around the world. MAR is the process whereby an aquifer is intentionally or artificially recharged, providing a valuable means of water storage while enhancing the aquifer's supply potential. This paper investigates the feasibility of implementing MAR in the sandy, unconfined Cape Flats Aquifer (CFA) in Cape Town. The main objective of the study is to assess whether MAR is a viable strategy for stormwater management on the Cape Flats, helping to prevent or mitigate the seasonal flooding that occurs there while also improving the supply potential of the aquifer. This involves infiltrating stormwater into the CFA during the wet winter months and, in turn, abstracting from the CFA during the dry summer months for fit-for-purpose uses, in order to optimise the recharge and storage capacity of the CFA. The fully integrated MIKE SHE model is used in this study to simulate both surface water and groundwater hydrology. This modelling approach enables the testing of the various potential recharge and abstraction scenarios required for implementation of MAR on the Cape Flats. Further MIKE SHE scenario analysis under projected future climate scenarios provides insight into the performance of MAR as a stormwater management strategy under climate change conditions. Scenario analysis using an integrated model such as MIKE SHE is a valuable tool for evaluating the feasibility of MAR as a stormwater management strategy and its potential to contribute to improving Cape Town's water security into the future.
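The seasonal recharge-abstraction logic tested with MIKE SHE can be illustrated, in a greatly simplified form, by a monthly storage-balance sketch such as the one below; the storage capacity and the recharge and abstraction volumes are hypothetical and do not represent the CFA or the study's scenarios.

```python
# Toy monthly storage balance for a MAR scenario (not a MIKE SHE model).
# Wet winter months recharge the aquifer with stormwater; dry summer months
# abstract for fit-for-purpose use. All volumes (Mm^3) are hypothetical.

CAPACITY = 60.0          # usable storage of the aquifer cell, Mm^3
storage = 25.0           # starting storage, Mm^3

recharge    = {m: 4.0 for m in ["May", "Jun", "Jul", "Aug"]}   # winter recharge
abstraction = {m: 5.0 for m in ["Dec", "Jan", "Feb", "Mar"]}   # summer abstraction

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

for m in months:
    inflow = recharge.get(m, 0.0)
    demand = abstraction.get(m, 0.0)
    supplied = min(demand, storage + inflow)          # cannot abstract more than is stored
    storage = min(CAPACITY, storage + inflow - supplied)
    note = " (at capacity)" if storage >= CAPACITY else ""
    print(f"{m}: supplied {supplied:.1f} Mm^3, storage {storage:.1f} Mm^3{note}")
```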

Keywords: managed aquifer recharge, stormwater management, cape flats aquifer, MIKE SHE

Procedia PDF Downloads 248
461 Comparison of Two Methods of Cryopreservation of Testicular Tissue from Prepubertal Lambs

Authors: Rensson Homero Celiz Ygnacio, Marco Aurélio Schiavo Novaes, Lucy Vanessa Sulca Ñaupas, Ana Paula Ribeiro Rodrigues

Abstract:

The cryopreservation of testicular tissue has emerged as an alternative for preserving the reproductive potential of individuals who cannot yet produce sperm but will undergo treatments that may affect their fertility (e.g., chemotherapy). Therefore, the present work aims to compare two cryopreservation methods (slow freezing and vitrification) for the testicular tissue of prepubertal lambs. To obtain the testicular tissue, the animals were castrated and the testes were immediately collected in a physiological solution supplemented with antibiotics. In the laboratory, the testes were cut into small fragments of 3 × 3 × 1 mm³, which were placed in a dish containing Minimum Essential Medium (MEM-HEPES). The fragments were randomly allocated to three groups: non-cryopreserved (fresh control), slow freezing (SF), and vitrification. For the SF procedure, two fragments from a given male were placed in a 2.0 mL cryogenic vial containing 1.0 mL MEM-HEPES supplemented with 20% fetal bovine serum (FBS) and 20% dimethyl sulfoxide (DMSO). Tubes were placed in a Mr. Frosty™ freezing container with isopropyl alcohol and transferred to a -80 °C freezer for overnight storage. On the next day, each tube was plunged into liquid nitrogen (LN). For vitrification, the ovarian tissue cryosystem (OTC) device was used. Testicular fragments were placed in the OTC device and exposed for four minutes to a first vitrification solution (VS1) composed of MEM-HEPES supplemented with 10 mg/mL bovine serum albumin (BSA), 0.25 M sucrose, 10% ethylene glycol (EG), 10% DMSO, and 150 μM alpha-lipoic acid. The VS1 was discarded and the fragments were then submerged in a second vitrification solution (VS2) of the same composition as VS1 but with 20% EG and 20% DMSO. VS2 was then discarded, and each OTC device containing up to four testicular fragments was closed and immersed in LN. After the storage period, the fragments were removed from the LN, kept at room temperature for one minute, and then immersed in a 37 °C water bath for 30 s. Samples were warmed by sequential immersion in solutions of MEM-HEPES supplemented with 3 mg/mL BSA and decreasing concentrations of sucrose. Hematoxylin-eosin staining was used to analyze the tissue architecture. Scores ranged from 0 to 3, with 0 representing normal morphology and 3 severe alteration. The histomorphological evaluation of the testicular tissue showed that, for nuclear alteration (distinction of nucleoli and condensation of nuclei), there were no differences between slow freezing and the control, whereas vitrification produced greater damage (p < 0.05). On the other hand, for epithelial alteration, freezing showed scores statistically equal to the control for variables such as retraction of the basement membrane, formation of gaps, and organization of the peritubular cells. The results of the study demonstrate that cryopreservation using the slow freezing method is an excellent tool for the preservation of prepubertal testicular tissue.

Keywords: cryopreservation, slow freezing, vitrification, testicular tissue, lambs

Procedia PDF Downloads 174
460 Simultaneous Optimization of Design and Maintenance through a Hybrid Process Using Genetic Algorithms

Authors: O. Adjoul, A. Feugier, K. Benfriha, A. Aoussat

Abstract:

In general, issues related to design and maintenance are considered independently. However, the decisions made in these two areas influence each other. Design for maintenance is considered an opportunity to optimize the life cycle cost of a product, particularly in the nuclear or aeronautical fields, where maintenance expenses represent more than 60% of life cycle costs. The design of large-scale systems starts with the product architecture, a choice of components in terms of cost, reliability, weight, and other attributes corresponding to the specifications. On the other hand, the design must take maintenance into account by improving, in particular, real-time monitoring of equipment through the integration of new technologies such as connected sensors and intelligent actuators. We observed that the different approaches used in Design for Maintenance (DFM) methods are limited to the simultaneous characterization of the reliability and maintainability of a multi-component system. This article proposes a DFM method that assists designers in proposing dynamic maintenance for multi-component industrial systems. The term "dynamic" refers to the ability to integrate available monitoring data to adapt the maintenance decision in real time. The goal is to maximize the availability of the system at a given life cycle cost. This paper presents an approach for the simultaneous optimization of the design and maintenance of multi-component systems. Here, the design is characterized by four decision variables for each component (reliability level, maintainability level, redundancy level, and level of monitoring data), and the maintenance is characterized by two decision variables (the dates of the maintenance stops and the maintenance operations to be performed on the system during these stops). The DFM model helps designers choose technical solutions for large-scale industrial products. Large-scale refers to complex multi-component industrial systems with long life cycles, such as trains and aircraft. The method is based on a two-level hybrid algorithm for the simultaneous optimization of design and maintenance, using genetic algorithms. The first level selects a design solution for a given system, considering life cycle cost and reliability. The second level determines a dynamic and optimal maintenance plan to be deployed for that design solution. This level is based on the Maintenance Free Operating Period (MFOP) concept, which takes into account decision criteria such as total reliability, maintenance cost, and maintenance time. Depending on the life cycle duration, the desired availability, and the desired business model (sale or rental), this tool provides visibility of overall costs and the optimal product architecture.
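A compressed sketch of this two-level structure is given below: an outer genetic-algorithm loop searches the four design variables per component, and an inner search chooses the maintenance-stop interval for each candidate design. The cost, failure, and availability expressions are hypothetical placeholders, not the authors' model.

```python
# Sketch of a two-level hybrid optimization: an outer GA searches design
# variables per component; an inner search picks maintenance stop dates for
# each candidate design. All cost/availability models are hypothetical.
import random

random.seed(1)
N_COMPONENTS, HORIZON = 3, 120          # components and months of operation

def random_design():
    # (reliability level, maintainability level, redundancy, monitoring level)
    return [(random.randint(1, 3), random.randint(1, 3),
             random.randint(0, 1), random.randint(0, 2))
            for _ in range(N_COMPONENTS)]

def inner_maintenance(design):
    """Inner level: crude search over the interval between maintenance stops."""
    best = None
    for interval in (12, 24, 36, 48):
        stops = list(range(interval, HORIZON, interval))
        downtime = sum(2 - 0.4 * d[1] for d in design) * len(stops)
        failures = sum((4 - d[0]) * (1 - 0.3 * d[2]) for d in design) * HORIZON / interval
        cost = 50 * len(stops) + 300 * failures + sum(100 * d[0] + 40 * d[3] for d in design)
        availability = 1 - (downtime + 5 * failures) / HORIZON
        fitness = availability - 1e-4 * cost          # trade availability against LCC
        if best is None or fitness > best[0]:
            best = (fitness, stops)
    return best

def evolve(pop_size=20, generations=30):
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda d: inner_maintenance(d)[0], reverse=True)
        parents = scored[: pop_size // 2]             # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_COMPONENTS)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.3:                 # mutation of one component
                child[random.randrange(N_COMPONENTS)] = random_design()[0]
            children.append(child)
        pop = parents + children
    best = max(pop, key=lambda d: inner_maintenance(d)[0])
    return best, inner_maintenance(best)

design, (fitness, stops) = evolve()
print("best design:", design)
print("maintenance stops (months):", stops, "fitness:", round(fitness, 3))
```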

Keywords: availability, design for maintenance (DFM), dynamic maintenance, life cycle cost (LCC), maintenance free operating period (MFOP), simultaneous optimization

Procedia PDF Downloads 118
459 Evaluating the Knowledge and Skill of Final Year Pharmacy Students in Maternal and Child Health at a University in South Africa

Authors: E. O. Egieyeh, N. Butler, R. Coetzee, M. Van Huyssteen, A. Bheekie

Abstract:

Background: The high rate of maternal and child mortality is a global concern. Nationally, it constitutes one of South Africa's quadruple burden of diseases. Pharmacists have a crucial role in maternal and child health care delivery and as such should be equipped with the knowledge and skill required to contribute to maternal and child well-being. The International Pharmaceutical Federation's statement of policy (2013) outlines pharmacist-led interventions in accordance with the World Health Organisation's interventions in maternal, newborn and child health care. The South African Pharmacy Council's guideline on Good Pharmacy Practice (2010) also stipulates the minimum standards required to participate in reproductive, maternal and child care. Pharmacy schools are obliged to train pharmacy students to meet the priority health needs of the population so that graduates are 'fit for purpose'. The purpose of this study is to evaluate the knowledge and skill of final year pharmacy students at a university in South Africa in order to determine their preparedness to contribute effectively to maternal and child health care. Method: A quantitative, descriptive, non-randomized baseline study was conducted among the final year students at the School of Pharmacy. Data were collected using a questionnaire designed in sections to assess knowledge of contraception and of maternal and child health directed at the primary care level, framed within the scope of practice required of an entry-level generalist pharmacist. Participants' skill in infant growth assessment was assessed in a section of the questionnaire in a written format. Participants ticked the topics they had been exposed to on a curriculum content assessment tool, which was not graded. A pilot study examined the clarity and suitability of the question items and the time needed to complete the questionnaire. A score of 50% in each section of the questionnaire indicated a pass. The questionnaire was administered in a campus lecture venue. Results: Of the 102 students in the final year, 53 (52%) consented to participate in the study. Only 13.2% of participants scored above 50% in every section. Forty-five (85%) participants scored above 50% in the contraception section, while 40 (75%) scored less than 50% in the skills assessment. Less than half (45.3%) of the participants had a total score above 50%. Being a parent or working part-time as a pharmacist's assistant had no influence on participants' performance. Evaluation of participants' curriculum content exposure showed differences in exposure to the various topics, with contraception teaching receiving the most recognition. Conclusion: The maternal and child health curriculum content should be reviewed at the university to enhance the knowledge and skill of pharmacy graduates.

Keywords: final year pharmacy students, knowledge and skill, maternal and child health, South Africa

Procedia PDF Downloads 152
458 Hypoglossal Nerve Stimulation (Baseline vs. 12 months) for Obstructive Sleep Apnea: A Meta-Analysis

Authors: Yasmeen Jamal Alabdallat, Almutazballlah Bassam Qablan, Hamza Al-Salhi, Salameh Alarood, Ibraheem Alkhawaldeh, Obada Abunar, Adam Abdallah

Abstract:

Obstructive sleep apnea (OSA) is a disorder caused by the repeated collapse of the upper airway during sleep. It is the most common cause of sleep-related breathing disorder; OSA can cause loud snoring and daytime fatigue, or more severe problems such as high blood pressure, cardiovascular disease, coronary artery disease, insulin-resistant diabetes, and depression. The hypoglossal nerve stimulator (HNS) is an implantable medical device that reduces the occurrence of obstructive sleep apnea by electrically stimulating the hypoglossal nerve in rhythm with the patient's breathing, causing the tongue to move. This stimulation helps keep the patient's airway clear during sleep. This systematic review and meta-analysis aimed to assess the clinical outcomes of hypoglossal nerve stimulation as a treatment for obstructive sleep apnea. A computerized literature search of PubMed, Scopus, Web of Science, and the Cochrane Central Register of Controlled Trials was conducted from inception until August 2022. Studies assessing the following clinical outcomes were pooled in the meta-analysis using Review Manager software: Apnea-Hypopnea Index (AHI), Epworth Sleepiness Scale (ESS), Functional Outcomes of Sleep Questionnaire (FOSQ), Oxygen Desaturation Index (ODI), and oxygen saturation (SaO2). We assessed the quality of studies according to the Cochrane risk-of-bias tool for randomized trials (RoB 2), the Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I) tool, and a modified version of the NOS for the non-comparative cohort studies. Thirteen studies (six clinical trials and seven prospective cohort studies) with a total of 817 patients were included in the meta-analysis. AHI results were reported in 11 studies examining 696 OSA patients; there was a significant improvement in the AHI after 12 months of HNS (MD = 18.2, 95% CI 16.7 to 19.7; I² = 0%; P < 0.00001). Further, 12 studies reported ESS results after 12 months of intervention, with a significant improvement in sleepiness among the 757 OSA patients examined (MD = 5.3, 95% CI 4.75 to 5.86; I² = 65%; P < 0.0001). Moreover, nine studies involving 699 participants reported FOSQ results after 12 months of HNS, with a significant improvement (MD = -3.09, 95% CI -3.41 to -2.77; I² = 0%; P < 0.00001). In addition, ten studies reported ODI results, with a significant improvement after 12 months of HNS among the 817 patients examined (MD = 14.8, 95% CI 13.25 to 16.32; I² = 0%; P < 0.00001). Hypoglossal nerve stimulation thus showed a significant positive impact on obstructive sleep apnea patients after 12 months of therapy in terms of the apnea-hypopnea index, oxygen desaturation index, manifestations of the behavioral morbidity associated with obstructive sleep apnea, and the functional status resulting from sleepiness.
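As an illustration of the pooling behind such results, the snippet below computes a fixed-effect inverse-variance pooled mean difference, its 95% CI, and I² from study-level estimates; the study values are invented and are not those of the included trials.

```python
# Illustrative fixed-effect inverse-variance pooling of mean differences (MD)
# with an I^2 heterogeneity estimate. Study-level values are invented.
import math

# (MD, standard error) for a handful of hypothetical studies
studies = [(17.5, 1.2), (19.0, 1.5), (18.4, 0.9), (16.8, 2.0)]

weights = [1 / se**2 for _, se in studies]                       # inverse-variance weights
pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Cochran's Q and I^2
q = sum(w * (md - pooled) ** 2 for (md, _), w in zip(studies, weights))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled MD = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}), I^2 = {i2:.0f}%")
```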

Keywords: apnea, meta-analysis, hypoglossal, stimulation

Procedia PDF Downloads 114
457 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students' Learning

Authors: Ioanna Taouki, Marie Lallier, David Soto

Abstract:

Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectory in early childhood, when children begin to receive formal education in reading. Here, we evaluate the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged between 6 and 7 (N=60) who performed three two-alternative forced-choice tasks (two linguistic: a lexical decision task and a visual attention span task; one non-linguistic: an emotion recognition task), each including trial-by-trial confidence judgements. Our study has three aims. First, we investigated how metacognitive ability (i.e., how well confidence ratings track accuracy in the task) relates to performance in standardized tasks measuring students' reading and general cognitive abilities, using Spearman and Bayesian correlation analyses. Second, we assessed whether young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations between metacognitive measures across task domains and by evaluating cross-task covariance with a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability at this early stage is related to children's longitudinal learning in a linguistic and a non-linguistic task. Notably, we did not observe any association between students' reading skills and metacognitive processing at this early stage of reading acquisition. Some evidence consistent with domain-general metacognition was found, with significant positive correlations in metacognitive efficiency between the lexical and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance in the visual attention span task and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in the linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities, and they further stress the importance of creating educational programs that foster students' metacognitive ability as a tool for long-term learning. More research is needed to understand whether such programs can enhance metacognitive ability as a transferable skill across distinct domains or whether individual domains should be targeted separately.
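To make "how confidence ratings track accuracy" concrete, the snippet below computes a simple type-2 AUROC from trial-by-trial accuracy and confidence; it is a generic illustration and not the hierarchical Bayesian model estimated in the study.

```python
# Generic illustration of metacognitive sensitivity: the type-2 AUROC, i.e. the
# probability that a correct trial carries higher confidence than an incorrect
# one (ties count 1/2). Not the model used in the study.
def type2_auroc(accuracy, confidence):
    correct = [c for a, c in zip(accuracy, confidence) if a == 1]
    wrong = [c for a, c in zip(accuracy, confidence) if a == 0]
    if not correct or not wrong:
        return float("nan")          # undefined without both outcome types
    wins = sum((c > w) + 0.5 * (c == w) for c in correct for w in wrong)
    return wins / (len(correct) * len(wrong))

# Toy data: 1 = correct, 0 = error; confidence on a 1-4 scale.
acc  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
conf = [4, 3, 2, 4, 1, 3, 2, 2, 4, 1]
print(round(type2_auroc(acc, conf), 3))   # > 0.5 means confidence tracks accuracy
```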

Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition

Procedia PDF Downloads 150