Search results for: Service Provided
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7116

216 Adaptive Programming for Indigenous Early Learning: The Early Years Model

Authors: Rachel Buchanan, Rebecca LaRiviere

Abstract:

Context: The ongoing effects of colonialism continue to be experienced through paternalistic policies and funding processes that cause disjuncture between and across Indigenous early childhood programming on-reserve and in urban and Northern settings in Canada. While various educational organizations and social service providers have risen to address these challenges in the short, medium and long term, there continues to be a lack of nation-wide cohesive, culturally grounded, and meaningful early learning programming for Indigenous children in Canada. Indigenous-centered early learning programs tend to face one of two scaling dilemmas: their program goals are too prescriptive to enable the program to be meaningfully replicated in different cultural/community settings, or their program goals are too broad to be meaningfully adapted to the unique cultural and contextual needs and desires of Indigenous communities (the “franchise approach”). There are over 600 First Nations communities in Canada representing more than 50 Nations and languages. Consequently, Indigenous early learning programming cannot be applied with a universal or “one size fits all” approach. Sustainable and comprehensive programming must be responsive to each community context, building upon existing strengths and assets to avoid program duplication and irrelevance. Thesis: Community-driven and culturally adapted early childhood programming is critical but cannot be achieved on a large scale within traditional program models that are constrained by prescriptive overarching program goals. Principles, rather than goals, are an effective way to navigate and evaluate complex and dynamic systems. Principles guide an intervention to be adaptable, flexible and scalable. The Martin Family Initiative’s (MFI) Early Years program engages a principles-based approach to programming.
As will be discussed in this paper, this approach enables the program to catalyze existing community-based strengths and organizational assets toward bridging gaps across, and disjuncture between, Indigenous early learning programs, as well as to scale programming in sustainable, context-responsive and dynamic ways. This paper argues that by using a principles-driven and adaptive scaling approach, the Early Years model establishes important learnings for culturally adapted Indigenous early learning programming in Canada. Methodology: The Early Years has leveraged this approach to develop an array of programming with partner organizations and communities across the country. The Early Years began as a singular pilot project in one First Nation. In just three years, it has expanded to five different regions and community organizations. In each context, the program supports the partner organization through different means and to different ends, the extent of which is determined in partnership with each community-based organization: in some cases, this means supporting the organization to build home visiting programming from the ground up; in others, it means offering organization-specific culturally adapted early learning resources to support the programming that already exists in communities. Principles underpin but do not define the practices of the program in each of these relationships. This paper will explore numerous examples of principles-based adaptability within the context of the Early Years, concluding that the program model offers the adaptability and dynamism necessary to respond to the unique and ever-evolving community contexts and needs of Indigenous children today.

Keywords: culturally adapted programming, indigenous early learning, principles-based approach, program scaling

Procedia PDF Downloads 151
215 The Role of Serum Fructosamine as a Monitoring Tool in Gestational Diabetes Mellitus Treatment in Vietnam

Authors: Truong H. Le, Ngoc M. To, Quang N. Tran, Luu T. Cao, Chi V. Le

Abstract:

Introduction: In Vietnam, the current monitoring and treatment of ordinary diabetic patients is mostly based on glucose monitoring with an HbA1c test every three months (the recommended goal is HbA1c < 6.5%~7%). For diabetes in pregnant women, or gestational diabetes mellitus (GDM), glycemic control until the time of delivery is extremely important because it can significantly reduce medical complications for both the mother and the child. Besides, GDM requires continuous glucose monitoring at least every two weeks, and therefore an alternative marker of glycemia for short-term control is considered a potential tool for healthcare providers. Published studies have indicated that glycosylated serum protein is a better indicator than glycosylated hemoglobin in GDM monitoring. Based on actual practice in Vietnam, this study was designed to evaluate the role of serum fructosamine as a monitoring tool in GDM treatment and its correlations with fasting blood glucose (G0), 2-hour postprandial glucose (G2) and glycosylated hemoglobin (HbA1c). Methods: A cohort study on pregnant women diagnosed with GDM by the 75-gram oral glucose tolerance test was conducted at the Endocrinology Department, Cho Ray hospital, Vietnam, from June 2014 to March 2015. Cho Ray hospital is the final destination for GDM patients in southern Vietnam; the study population comes from many other provinces, and the researchers therefore believe that this demographic characteristic helps the study results reflect the whole area. In this study, diabetic patients received a continuous glucose monitoring regimen consisting of bi-weekly on-site visits with glycosylated serum protein, fasting blood glucose and 2-hour postprandial glucose tests; an HbA1c test every 3 months; and nutrition consultations for a daily diet program. The subjects still received routine treatment at the hospital, with tight follow-up from their healthcare providers.
Researchers recorded bi-weekly health conditions, serum fructosamine levels and delivery outcomes from the pregnant women, using the Stata 13 program for the analysis. Results: A total of 500 pregnant women were enrolled and followed up in this study. Serum fructosamine level was found to have a weak correlation with G0 (r=0.3458, p < 0.001) and HbA1c (r=0.3544, p < 0.001), and was moderately correlated with G2 (r=0.4379, p < 0.001). During the study timeline, the delivery outcomes of 287 women were recorded, with an average gestational age of 38.5 ± 1.5 weeks; 9% of the newborns had macrosomia, 2.8% were premature births before week 35 and 9.8% were premature births before week 37; 64.8% were cesarean sections, and there was no perinatal or neonatal mortality. The study provides a reference interval of serum fructosamine for GDM patients of 112.9 ± 20.7 μmol/dL. Conclusion: The present results suggest that serum fructosamine is as effective as HbA1c as a reflection of blood glucose control in GDM patients, with a positive delivery outcome (0% perinatal or neonatal mortality). The reference value of serum fructosamine measurement provides a potential monitoring utility in GDM treatment for hospitals in Vietnam. Healthcare providers at Cho Ray hospital are considering conducting more studies to test this reference as a target value in their GDM treatment and monitoring.
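The correlation coefficients reported above (r = 0.3458, 0.3544 and 0.4379) are standard Pearson correlations. As an illustrative sketch only, using synthetic values rather than the study's patient records, the same statistic can be computed as follows:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Synthetic example: fructosamine (umol/dL) vs. fasting glucose (mmol/L)
# for ten hypothetical patients -- not data from the study.
fructosamine = [105.2, 118.7, 132.4, 98.1, 121.9, 140.3, 110.5, 126.8, 115.0, 134.6]
fasting_glucose = [4.8, 5.4, 6.1, 4.5, 5.6, 6.5, 5.0, 5.9, 5.2, 6.2]

r = pearson_r(fructosamine, fasting_glucose)
print(f"r = {r:.4f}")
```

A p-value for each coefficient, as reported in the abstract, would additionally require a significance test such as `scipy.stats.pearsonr`.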

Keywords: gestational diabetes mellitus, monitoring tool, serum fructosamine, Vietnam

Procedia PDF Downloads 256
214 Avoidance of Brittle Fracture in Bridge Bearings: Brittle Fracture Tests and Initial Crack Size

Authors: Natalie Hoyer

Abstract:

Bridges in both roadway and railway systems depend on bearings to ensure extended service life and functionality. These bearings enable proper load distribution from the superstructure to the substructure while permitting controlled movement of the superstructure. The design of bridge bearings, according to Eurocode DIN EN 1337 and the relevant sections of DIN EN 1993, increasingly requires the use of thick plates, especially for long-span bridges. However, these plate thicknesses exceed the limits specified in the national annex of DIN EN 1993-2. Furthermore, compliance with DIN EN 1993-1-10 regulations regarding material toughness and through-thickness properties necessitates further modifications. Consequently, these standards cannot be directly applied to the selection of bearing materials without supplementary guidance and design rules. In this context, a recommendation was developed in 2011 to regulate the selection of appropriate steel grades for bearing components. Prior to the initiation of the research project underlying this contribution, this recommendation had only been available as a technical bulletin. Since July 2023, it has been integrated into guideline 804 of the German railway. However, recent findings indicate that certain bridge-bearing components are exposed to high fatigue loads, which must be considered in structural design, material selection, and calculations. Therefore, the German Centre for Rail Traffic Research commissioned a research project with the objective of proposing an extension of the current standards so that suitable steel materials can be chosen for bridge bearings to avoid brittle fracture, even for thick plates and components subjected to specific fatigue loads. The results obtained from theoretical considerations, such as finite element simulations and analytical calculations, are validated through large-scale component tests.
Additionally, experimental observations are used to calibrate the calculation models and modify the input parameters of the design concept. Within the large-scale component tests, a brittle failure is artificially induced in a bearing component. For this purpose, an artificially generated initial defect is introduced into the specimen at the previously defined hotspot using spark erosion. Then, a dynamic load is applied until the crack initiation process occurs, to achieve realistic conditions in the form of a sharp notch similar to a fatigue crack. This initiation process continues until the crack length reaches a predetermined size. Afterward, the actual test begins, which requires cooling the specimen with liquid nitrogen until a temperature is reached at which brittle fracture is expected. In the next step, the component is subjected to a quasi-static tensile test until brittle failure occurs. The proposed paper will present the latest research findings, including the results of the conducted component tests and the derived definition of the initial crack size in bridge bearings.

Keywords: bridge bearings, brittle fracture, fatigue, initial crack size, large-scale tests

Procedia PDF Downloads 4
213 Analyzing the Heat Transfer Mechanism in a Tube Bundle Air-PCM Heat Exchanger: An Empirical Study

Authors: Maria De Los Angeles Ortega, Denis Bruneau, Patrick Sebastian, Jean-Pierre Nadeau, Alain Sommier, Saed Raji

Abstract:

Phase change materials (PCM) present attractive features that make them a passive solution for thermal comfort in buildings during summertime. They show a large storage capacity per unit volume in comparison with other structural materials like bricks or concrete. If their use is matched with the peak load periods, they can contribute to the reduction of the primary energy consumption related to cooling applications. Despite these promising characteristics, they present some drawbacks. Commercial PCMs, such as paraffins, have a low thermal conductivity, which affects the overall performance of the system. In some cases, the material can be enhanced by adding other elements that improve the conductivity, but in general, a design of the unit that optimizes the thermal performance is sought. Material selection is the starting point of the design stage, and it does not leave much room for optimization. The PCM melting point depends highly on the atmospheric characteristics of the building location: the selected value must lie between the maximum and the minimum temperatures reached during the day. The geometry of the PCM container and the geometrical distribution of these containers are design parameters as well. They significantly affect the heat transfer, and therefore these phenomena must be studied exhaustively. During its lifetime, an air-PCM unit in a building must cool down the space during the daytime, while the melting of the PCM occurs. At night, the PCM must be regenerated to be ready for the next use. When the system is not in service, a minimal amount of thermal exchange is desired. The aforementioned functions result in the presence of sensible and latent heat storage and release; hence, different types of mechanisms drive the heat transfer phenomena. An experimental test was designed to study the heat transfer phenomena occurring in a circular tube bundle air-PCM exchanger.
An in-line arrangement was selected as the geometrical distribution of the containers. To allow visual observation, the container material and a section of the test bench were transparent. Instruments were placed on the bench for measuring temperature and velocity. The PCM properties were also available through differential scanning calorimetry (DSC) tests. The evolution of the temperature during both cycles, melting and solidification, was obtained. The results showed some phenomena at a local level (tubes) and at an overall level (exchanger). Conduction and convection appeared as the main heat transfer mechanisms. From these results, two approaches to analyze the heat transfer were followed. The first approach described the phenomena in a single tube as a series of thermal resistances, where purely conduction-controlled heat transfer was assumed in the PCM. For the second approach, the temperature measurements were used to find some significant dimensionless numbers and parameters, such as the Stefan, Fourier and Rayleigh numbers, and the melting fraction. These approaches allowed us to identify the heat transfer phenomena during both cycles. The presence of natural convection during melting could be inferred from the influence of the Rayleigh number on the correlations obtained.
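The dimensionless groups named above have standard textbook definitions. The abstract does not report the property values used, so the figures below are hypothetical, order-of-magnitude values for a paraffin-like PCM, intended only as a sketch of how the numbers are formed:

```python
def stefan(cp, delta_T, latent_heat):
    """Stefan number: sensible-to-latent heat ratio, Ste = cp * dT / L."""
    return cp * delta_T / latent_heat

def fourier(alpha, t, length):
    """Fourier number: dimensionless time, Fo = alpha * t / Lc**2."""
    return alpha * t / length**2

def rayleigh(g, beta, delta_T, length, nu, alpha):
    """Rayleigh number: buoyancy vs. diffusion, Ra = g * beta * dT * L**3 / (nu * alpha)."""
    return g * beta * delta_T * length**3 / (nu * alpha)

# Hypothetical paraffin-like properties (SI units), not the study's data:
ste = stefan(cp=2100.0, delta_T=8.0, latent_heat=180e3)   # sensible vs. latent heat
fo = fourier(alpha=1.0e-7, t=3600.0, length=0.02)         # one hour, 2 cm tube
ra = rayleigh(g=9.81, beta=8e-4, delta_T=8.0, length=0.02,
              nu=4e-6, alpha=1.0e-7)                      # molten-PCM convection
print(ste, fo, ra)
```

A large Rayleigh number relative to the critical value is what would signal the natural convection during melting discussed in the abstract.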

Keywords: phase change materials, air-PCM exchangers, convection, conduction

Procedia PDF Downloads 153
212 Interactions between Sodium Aerosols and Fission Products: A Theoretical Chemistry and Experimental Approach

Authors: Ankita Jadon, Sidi Souvi, Nathalie Girault, Denis Petitprez

Abstract:

Safety requirements for Generation IV nuclear reactor designs, especially the new generation of sodium-cooled fast reactors (SFR), require a risk-informed approach to model severe accidents (SA) and their consequences in case of outside release. In SFRs, aerosols are produced during a core disruptive accident when primary system sodium is ejected into the containment and burns in contact with the air, producing sodium aerosols. One of the key aspects of safety evaluation is the in-containment behavior of sodium aerosols and their interaction with fission products. The study of the effects of sodium fires is essential for safety evaluation, as the fire can both thermally damage the containment vessel and cause an overpressurization risk. Besides, during the fire, fission products initially dissolved in the primary sodium can be aerosolized or, as can be the case for volatile fission products, released in gaseous form. The objective of this work is to study the interactions between sodium aerosols and fission products (iodine, toxic and volatile, being the primary concern). Sodium fires resulting from an SA would produce aerosols consisting of sodium peroxides, hydroxides, carbonates, and bicarbonates. In addition to being toxic (in oxide form), this aerosol will then become radioactive. If such aerosols leak into the environment, they can pose a danger to the ecosystem. Depending on the chemical affinity of these chemical forms with fission products, the radiological consequences of an SA leading to a loss of containment leak-tightness will also be affected. This work is split into two phases. First, a method is proposed to theoretically understand the kinetics and thermodynamics of the heterogeneous reactions between sodium aerosols and the fission products I2 and HI.
Ab initio density functional theory (DFT) calculations using the Vienna Ab initio Simulation Package (VASP) are carried out to develop an understanding of the surfaces of sodium carbonate (Na2CO3) aerosols and hence provide insight into their affinity towards iodine species. A comprehensive study of I2 and HI adsorption, as well as bicarbonate formation, on the calculated lowest-energy surface of Na2CO3 was performed, which provided adsorption energies and a description of the optimized configuration of the adsorbate on the stable surface. Second, the heterogeneous reaction between (I2)g and Na2CO3 aerosols was investigated experimentally. To study this, (I2)g was generated by heating a permeation tube containing solid I2 and passing it through a reaction chamber containing a Na2CO3 aerosol deposit. The concentration of iodine was then measured at the exit of the reaction chamber. Preliminary observations indicate that there is an effective uptake of (I2)g on the Na2CO3 surface, as suggested by our theoretical chemistry calculations. This work is the first step in addressing the gaps in knowledge of the in-containment and atmospheric source terms, which are essential aspects of the safety evaluation of SFR SAs. In particular, this study aims to determine and characterize the radiological and chemical source term. These results will then provide useful insights for the development of new models to be implemented in integrated computer simulation tools to analyze and evaluate SFR safety designs.
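The adsorption energies mentioned above follow the usual DFT convention, E_ads = E(surface + adsorbate) - E(surface) - E(adsorbate), with negative values indicating favorable binding. As a sketch only, with hypothetical placeholder energies rather than the study's VASP results:

```python
def adsorption_energy(e_total, e_surface, e_molecule):
    """Adsorption energy from DFT total energies (all in eV):
    E_ads = E(surface + adsorbate) - E(surface) - E(adsorbate).
    Negative values indicate favorable (exothermic) adsorption."""
    return e_total - e_surface - e_molecule

# Hypothetical total energies in eV for an I2 molecule on a Na2CO3 slab
# (illustrative numbers only, not actual calculation outputs):
e_ads = adsorption_energy(e_total=-512.74, e_surface=-508.21, e_molecule=-3.05)
print(f"E_ads = {e_ads:.2f} eV")
```

In a real workflow each of the three energies would come from a separately relaxed VASP calculation of the combined system, the clean slab, and the isolated molecule.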

Keywords: iodine adsorption, sodium aerosols, sodium cooled reactor, DFT calculations, sodium carbonate

Procedia PDF Downloads 191
211 Method of Nursing Education: History Review

Authors: Cristina Maria Mendoza Sanchez, Maria Angeles Navarro Perán

Abstract:

Introduction: Nursing as a profession, from its initial formation and after its development in practice, has been built and identified mainly through its technical competence and professionalization within the positivist approach of the XIX century, which provides a conception of disease built on the basis of the biomedical paradigm, where the care provided is more focused on physiological processes and the disease than on the suffering person understood as a whole. The main issue in need of study here is a review of the nursing profession's history to learn what the nursing profession was like before the XIX century. It is unclear whether there were organizations or people with knowledge about looking after others, or whether many people survived by chance. Holistic care, in which the appearance of disease directly affects all of a person's dimensions (physical, emotional, cognitive, social and spiritual), is not a concept from the 21st century; it has been common practice, most probably since life was established in this world, with the final purpose of covering all these perspectives through quality care. Objective: In this paper, we describe and analyze the history of nursing education, reviewing and analyzing the theoretical foundations of clinical teaching and learning in nursing, with the final purpose of determining and describing the development of the nursing profession throughout history. Method: We have conducted a descriptive systematic review, systematically searching for manuscripts and articles in the following health science databases: PubMed, Scopus, Web of Science, Temperamentvm and CINAHL. The selection of articles was made according to PRISMA criteria, with a critical reading of the full text using the CASPe method.
To complement this, we have read a range of historical and contemporary sources to support the review, such as the manuals of Florence Nightingale and Saint John of God, as primary manuscripts, to establish the origin of modern nursing and its professionalization. We have considered and applied ethical principles of data processing. Results: After applying inclusion and exclusion criteria in our search of PubMed, Scopus, Web of Science, Temperamentvm and CINAHL, we obtained 51 research articles. We analyzed them in such a way that we distinguished them by year of publication and type of study. With the articles obtained, we can see the importance of our background as a profession in public health before modern times, and the value of reviewing our past to face challenges in the near future. Discussion: The important influence of key figures other than Nightingale has been overlooked, and it emerges that nursing management and the development of the professional body have a longer and more complex history than is generally accepted. Conclusions: There is a paucity of studies on the subject of this review, making it difficult to extract very precise evidence and recommendations about nursing before modern times. Even so, and more representatively, an increase in research on nursing history has been observed. In light of the aspects analyzed, the need for new research into the history of nursing emerges from this perspective, in order to cultivate studies of the historical construction of care before the XIX century and of the theories created then. We can affirm that knowledge and ways of care were taught before the XIX century, but they were not called theories, as these concepts were created in modern times.

Keywords: nursing history, nursing theory, Saint John of God, Florence Nightingale, learning, nursing education

Procedia PDF Downloads 78
210 Gamification of eHealth Business Cases to Enhance Rich Learning Experience

Authors: Kari Björn

Abstract:

The introduction of games has expanded the application area of computer-aided learning tools to a wide variety of age groups of learners. Serious games engage the learners in a real-world type of simulation and potentially enrich the learning experience. The institutional background of a Bachelor's-level engineering program in Information and Communication Technology is introduced, with detailed focus on one of its majors, Health Technology. As part of a Customer Oriented Software Application thematic semester, one particular course, “eHealth Business and Solutions”, is described and reflected upon in a gamified framework. Building a consistent view of the vast literature on business management, strategy, marketing and finance in a very limited time forces a selection of the topics most relevant to the industry. Health Technology is a novel and growing industry with an expanding sector in consumer wearable devices and homecare applications. The business sector is attracting new entrepreneurs and impatient investor funds. From an engineering education point of view, the sector is driven by miniaturized electronics, sensors and wireless applications. However, the market is highly consumer-driven, and usability, safety and data integrity requirements are extremely high. When the same technology is used in the analysis or treatment of patients, very strict regulatory measures are enforced. The paper introduces a course structure using gamification as a tool to learn what is most essential in a new market: customer value proposition design, followed by a market entry game. Students analyze the existing market size and pricing structure of the eHealth web-service market and enter the market as the steering group of their company, competing against the legacy players and with each other. The market is growing but has its rules of demand and supply balance. New products can be developed with an R&D investment and targeted to the market with unique quality and price combinations.
The product cost structure can be improved by investing in enhanced production capacity. Investments can optionally be funded by foreign capital. Students make management decisions and face the dynamics of market competition in the form of an income statement and balance sheet after each decision cycle. The focus of the learning outcome is to understand that customer value creation is the source of cash flow. The benefit of gamification is to enrich the learning experience of the structure and meaning of financial statements. The paper describes the gamification approach and discusses outcomes after two course implementations. Alongside the case description of learning challenges, some unexpected misconceptions are noted. Improvements to the game and to the semi-gamified teaching pedagogy are discussed. The case description serves as additional support for a new game coordinator, as well as helping to improve the method. Overall, the gamified approach has helped to engage engineering students in business studies in an energizing way.
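The decision cycle described above, in which pricing, cost and R&D choices feed an income statement, can be sketched as a minimal simulation. The game's actual rules are not published in the abstract, so the function shape, parameter names and figures here are all hypothetical:

```python
def income_statement(units_sold, price, unit_cost, fixed_costs, rd_investment):
    """One decision cycle of a simplified market-entry game:
    compute revenue, gross profit and operating profit."""
    revenue = units_sold * price
    cost_of_sales = units_sold * unit_cost
    gross_profit = revenue - cost_of_sales
    operating_profit = gross_profit - fixed_costs - rd_investment
    return {"revenue": revenue,
            "gross_profit": gross_profit,
            "operating_profit": operating_profit}

# Hypothetical round: a team prices its eHealth web service at 29 EUR/month.
result = income_statement(units_sold=1200, price=29.0, unit_cost=11.0,
                          fixed_costs=15000.0, rd_investment=4000.0)
print(result)
```

In the classroom game, demand (units sold) would itself respond to each team's quality and price combination relative to competitors; here it is fixed for clarity.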

Keywords: engineering education, integrated curriculum, learning experience, learning outcomes

Procedia PDF Downloads 220
209 Law of the River and Indigenous Water Rights: Reassessing the International Legal Frameworks for Indigenous Rights and Water Justice

Authors: Sultana Afrin Nipa

Abstract:

Life on Earth cannot thrive or survive without water. Water is intimately tied to community, culture, spirituality, identity, socio-economic progress, security, self-determination, and livelihood. Thus, access to water is a United Nations-recognized human right due to its significance in these realms. However, there is often conflict between those who consider water a spiritual and cultural value and those who consider it an economic value; the former is threatened by economic development, corporate exploitation, government regulation, and increased privatization, highlighting the complex relationship between water and culture. The Colorado River basin is home to over 29 federally recognized tribal nations. To these tribes, the river holds cultural, economic, and spiritual significance that often extends to deep human-to-non-human connections frequently precluded by Westphalian regulations and settler laws. Despite the recognition of access to rivers as a fundamental human right by the United Nations, tribal communities and their water rights have been historically disregarded through, inter alia, colonization and the dispossession of their resources. The Law of the River, comprising instruments such as the ‘Winters Doctrine’, the ‘Bureau of Reclamation (BOR)’ and the ‘Colorado River Compact’, has shaped water governance among the stakeholders. However, tribal communities have been systematically excluded from these key agreements. While the Winters Doctrine acknowledged that tribes have the right to withdraw water from the rivers that pass through their reservations for self-sufficiency, the establishment of the BOR led to the construction of dams without tribal consultation, denying the ‘Winters’ regulation and violating these rights. The Colorado River Compact, which granted only 20% of the water to the tribes, diminishes the significance of international legal frameworks that prioritize indigenous self-determination and the free pursuit of socio-economic and cultural development.
The denial of this basic water right is the denial of the ‘recognition’ of their sovereignty and self-determination, which calls the effectiveness of international law into question. This review assesses the international legal frameworks concerning indigenous rights and water justice and aims to pinpoint the gaps hindering the effective recognition and protection of Indigenous water rights in the Colorado River Basin. This study draws on a combination of historical and qualitative data sets. The historical data encompass the case settlements provided by the Bureau of Reclamation (BOR), specifically the notable Native American water rights settlements in the lower Colorado basin related to Arizona from 1979 to 2008. This material serves to substantiate the context of the promises made to the Indigenous people and establishes connections between existing entities. The qualitative data consist of observations of recorded meetings of the Central Arizona Project (CAP), used to evaluate how the promises made previously are reflected now. The study finds a significant inconsistency in participation in the decision-making process and a lack of representation of Native American tribes in water resource management discussions. It highlights the ongoing challenges faced by indigenous people in achieving their self-determination goals despite the legal arrangements.

Keywords: colorado river, indigenous rights, law of the river, water governance, water justice

Procedia PDF Downloads 12
208 Problem-Based Learning for Hospitality Students: The Case of Madrid Luxury Hotels and the Recovery after the COVID Pandemic

Authors: Caridad Maylin-Aguilar, Beatriz Duarte-Monedero

Abstract:

Problem-based learning (PBL) is a useful tool for adult and practice-oriented audiences, such as university students. As a consequence of the huge disruption caused by the COVID pandemic in the hospitality industry, hotels of all categories in Spain closed down from March 2020. Until that moment, the luxury segment had been blooming, with optimistic prospects for new openings; hence, hospitality students were expecting a positive situation in terms of employment and career development. By the beginning of the 2020-21 academic year, these expectations were seriously harmed. By October 2020, only 9 of the 32 hotels in the luxury segment were open, with an occupancy rate of 9%. Shortly after, the evidence of a second wave especially affecting Spain and the homelands of incoming visitors bitterly smashed all forecasts. In accordance with the situation, a team of four professors and practitioners from four different subject areas developed a real case inspired by one of these hotels, the 5-star Emperatriz by Barceló. Second-year students were provided with real information such as marketing plans, profit and loss and operational accounts, employee profiles and employment costs. The challenge for them was to act as consultants, identifying potential courses of action related to the best, base and worst cases. In order to do that, they were organized in teams and supported by fourth-year students. Each professor deployed the problem in their subject; thus, research on customers' behavior and feelings was necessary to review, as part of the marketing plan, whether the hotel's current offering was clear enough to guarantee and communicate a safe environment, as well as the ranking of other basic, supporting and facilitating services. Also, continuous monitoring of competitors' activity was necessary to understand the behavior of the open outlets.
The actions designed after the diagnosis were ranked in accordance with their impact and feasibility in terms of time and resources. They also had to be actionable by the current staff of the hotel and its managers, and a vision of internal marketing was appreciated. After a process of refinement, seven teams presented their conclusions to the Emperatriz general manager and the rest of the professors. Four main ideas were chosen, and all the teams, irrespective of authorship, were asked to develop them to the state of a minimum viable product, with estimations of impacts and costs. As the process continues, students are nowadays accompanying the hotel and its staff in the prudent reopening of facilities, almost one year after the closure. From a professor's point of view, the key learnings were: (1) when facing a real problem, a holistic view is needed, and therefore the vision of subjects as silos collapses; (2) when educating new professionals, providing them with the necessary resilience and resistance to deal with a problem is always mandatory, but now seems more relevant; and (3) collaborative work and contact with real practitioners in such an uncertain and changing environment is a challenge, but it is worthwhile when considering the learning result and its potential.

Keywords: problem-based learning, hospitality recovery, collaborative learning, resilience

Procedia PDF Downloads 162
207 The Lifecycle of a Heritage Language: A Comparative Case Study of Volga German Descendants in North America

Authors: Ashleigh Dawn Moeller

Abstract:

This comparative case study examines the language attitudes and behaviors of descendants of Volga German immigrants in North America and how these attitudes, combined with surrounding social conditions, have caused their heritage language to develop differently within each community. Of particular interest for this study are the accounts of second- and third-generation descendants in Oregon, Kansas, and North Dakota regarding their parents’ and grandparents’ attitudes toward their language and how this correlates with the current sentiment as well as visibility of their heritage language and culture. This study discusses the point at which cultural identity could diverge from language identity and what elements play a role in this development, establishing the potential for environments (linguistic landscapes) which uphold their heritage yet have detached from the language itself. Emigrating from Germany in the 1700s, these families settled for over a hundred years along the Volga region of Imperial Russia. Subsequently, many descendants of these settlers immigrated to the Americas in the 1800s and 1900s. Identifying neither as German nor Russian, they called themselves Wolgadeutsche (Volga Germans). During their time in Russia, the German language was maintained relatively homogenously, yet the use and status of their heritage language diverged considerably upon settlement across the Americas. Data shows that specific conditions, such as community isolation, size, religion, and location, as well as language policy established prior to and following the Volga German immigration to North America, have had a substantial impact on the maintenance of their heritage language—causing complete loss in some areas and peripheral use or even full rebirth in others. These past conditions, combined with the family accounts, correlate directly with the general attitudes and ideologies of the descendants toward their heritage language.
Data also shows that in many locations, despite a strong presence of German within the linguistic landscape, minimal to no German is spoken nor understood; the attitude toward the language is indifferent while a staunch holding to the heritage is maintained and boasted. Data for this study was gathered from historical accounts, archived records and newspapers, and published biographies as well as from formal interviews with second- and third-generation descendants of Volga German immigrants conducted in Oregon and Kansas. Through the interviews, members of the community have shared and provided their family genealogies as well as biographies published by family members. These have helped to trace their relatives back to specific locations, thus allowing for comparisons within the same families residing in distinctly different areas of North America. This study is part of a larger ongoing project which researches the immigration of Volga and Black Sea Germans to North America and diachronically examines the over-arching sociological factors which have directly impacted the maintenance, loss, or rebirth of their heritage language. This project follows specific families who settled in areas of Colorado, Kansas, Nebraska, Illinois, Minnesota, North and South Dakota, Saskatchewan, and Manitoba, and who later had relatives move west to areas of Oregon and Washington State. Interviews for the larger project will continue into the following year.

Keywords: heritage language, immigrant language, language change, language contact, linguistic landscape, Volga Germans, Wolgadeutsche

Procedia PDF Downloads 100
206 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make it harder for the US to stay globally competitive without a shift in how science is taught in US classrooms. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is necessary to design resources which enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students’ representational competence, the visualization skills necessary to process these science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students’ visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. This data was previously collected by Project LENS (Leveraging Expertise in NeuroTechnologies), a Science of Learning Collaborative Network (SL-CN) of scholars of STEM Education from three US universities (NSF award 1540888), utilizing mental rotation tasks to assess student visual literacy.
Hemodynamic response data from fNIRSoft was exported as an Excel file, with 80 items each for the 2D wedge-and-dash (dash) and 3D stick-and-ball (BL) models. Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner’s TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees model consisting of 140 trees with a maximum depth of 7 branches was developed; 99.7% of its predictions were accurate. The model determined that the strongest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, a region important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements and machine-learning analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, and therefore science literacy.
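The gradient-boosted trees classifier described (140 trees, maximum depth 7) can be sketched with scikit-learn. This is a minimal illustration only: the fNIR optode signals, response times, and RapidMiner workflow are not reproduced, so synthetic features stand in for the real data, and the feature indices are assumptions.

```python
# Sketch of a gradient-boosted trees classifier configured like the one
# described (140 trees, max depth 7). Synthetic stand-ins replace the real
# fNIR optode signals and response times; this is not the study's pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 18))  # hypothetical: 16 optode channels + response time + item number
# Hypothetical label: success driven by one optode channel plus response time.
y = (X[:, 15] + 0.5 * X[:, 16] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(n_estimators=140, max_depth=7, random_state=0)
model.fit(X_tr, y_tr)

acc = model.score(X_te, y_te)                         # held-out accuracy
top = np.argsort(model.feature_importances_)[::-1][:3]  # most influential features
```

Feature importances from the fitted ensemble play the role of the "biggest predictors" analysis reported above.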

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 92
205 Catalytic Decomposition of Formic Acid into H₂/CO₂ Gas: A Distinct Approach

Authors: Ayman Hijazi, Witold Kwapinski, J. J. Leahy

Abstract:

Finding a sustainable alternative energy to fossil fuel is an urgent need as various environmental challenges arise in the world. Therefore, formic acid (FA) decomposition has been an attractive field that lies at the center of the biomass platform, comprising a potential pool of hydrogen energy that stands as a distinct energy vector. Liquid FA features a considerable volumetric energy density of 6.4 MJ/L and a specific energy density of 5.3 MJ/kg, qualifying it as a prime candidate energy source for transportation infrastructure. Additionally, the increasing research interest in FA decomposition is driven by the need for in-situ H₂ production, which plays a key role in the hydrogenation reactions of biomass into higher-value components. It is reported elsewhere in the literature that catalytic decomposition of FA is usually performed in poorly designed setups using simple glassware under magnetic stirring, thus demanding further energy investment to retain the used catalyst. Our work suggests an approach that integrates the design of a distinct catalyst featuring magnetic properties with a robust setup that minimizes experimental and measurement discrepancies. One of the most prominent active species for the dehydrogenation/hydrogenation of biomass compounds is palladium. Accordingly, we investigate the potential of engrafting palladium metal onto functionalized magnetic nanoparticles as a heterogeneous catalyst to favor the production of CO-free H₂ gas from FA. Using an ordinary magnet to collect the spent catalyst makes core-shell magnetic nanoparticles the backbone of the process. Catalytic experiments were performed in a jacketed batch reactor equipped with an overhead stirrer under an inert medium. Through a distinct approach, FA is charged into the reactor via a high-pressure positive displacement pump at steady-state conditions.
The produced gas (H₂+CO₂) was measured by connecting the gas outlet to a measuring system based on the amount of displaced water. The uniqueness of this work lies in designing a very responsive catalyst, pumping a consistent amount of FA into a sealed reactor running at steady-state mild temperatures, and continuous gas measurement, along with collecting the used catalyst without the need for centrifugation. Catalyst characterization using TEM, XRD, SEM, and a CHN elemental analyzer provided us with details of catalyst preparation and opened new avenues to alter the nanostructure of the catalyst framework. Consequently, the introduction of amine groups has led to appreciable improvements in terms of dispersion of the doped metals, eventually attaining nearly complete conversion (100%) of FA after 7 hours. The relative importance of the process parameters such as temperature (35-85°C), stirring speed (150-450 rpm), catalyst loading (50-200 mg), and Pd doping ratio (0.75-1.80 wt.%) on gas yield was assessed by a Taguchi design-of-experiments model. Experimental results showed that operating at a lower temperature range (35-50°C) yielded more gas, while the catalyst loading and Pd doping wt.% were found to be the most significant factors, with p-values of 0.026 and 0.031, respectively.
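The Taguchi screening step can be illustrated with a minimal main-effects calculation over a standard L9(3⁴) orthogonal array for the four factors named above. The yield responses below are invented for illustration (chosen so that catalyst loading and Pd wt.% dominate, echoing the reported finding); they are not the paper's data.

```python
# Minimal Taguchi-style main-effects analysis for four 3-level factors
# (temperature, stirring speed, catalyst loading, Pd wt.%) on an L9
# orthogonal array. The yield column is illustrative, not measured data.
import numpy as np

# Standard L9(3^4) orthogonal array: each row is one run, entries are levels 0-2.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
factors = ["temperature", "stirring", "catalyst_loading", "pd_wt"]
yield_ml = np.array([200, 267, 334, 275, 297, 214, 305, 222, 244], dtype=float)

def main_effect(col):
    # Main effect = spread of the mean response across the factor's levels.
    means = [yield_ml[L9[:, col] == lv].mean() for lv in range(3)]
    return max(means) - min(means)

effects = {f: main_effect(i) for i, f in enumerate(factors)}
ranked = sorted(effects, key=effects.get, reverse=True)  # most influential first
```

Ranking factors by the spread of their level means is the screening logic behind the Taguchi approach; significance testing (the quoted p-values) would then follow via ANOVA on the same layout.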

Keywords: formic acid decomposition, green catalysis, hydrogen, mesoporous silica, process optimization, nanoparticles

Procedia PDF Downloads 18
204 Cultural Intelligence for the Managers of Tomorrow: A Data-Based Analysis of the Antecedents and Training Needs of Today’s Business School Students

Authors: Justin Byrne, Jose Ramon Cobo

Abstract:

The growing importance of cross- or intercultural competencies (used here interchangeably) for business and management professionals is now commonplace in both the academic and professional literature. This reflects two parallel developments. On the one hand, it is a consequence of the increased attention paid to a whole range of 'soft skills', now seen as fundamental to both individual and corporate success. On the other hand, and more specifically, the increasing demand for interculturally competent professionals is a corollary of ongoing processes of globalization, which multiply and intensify encounters between individuals and companies from different cultural backgrounds. Business schools have, for some decades, responded to the needs of the job market and their own students by providing training in intercultural skills, as they are encouraged to do by the major accreditation agencies on both sides of the Atlantic. Adapting Earley and Ang's (2003) formulation of Cultural Intelligence (CQ), this paper aims to help fill the lacunae in the current literature on intercultural training in three main ways. First, it offers an in-depth analysis of the CQ of a little-studied group: contemporary Millennial and 'Generation Z' business school students. The analysis distinguishes between the four dimensions of CQ—cognition, metacognition, motivation and behaviour—and thereby provides a detailed picture of the strengths and weaknesses in CQ of the group as a whole, as well as of different sub-groups and profiles of students.
Secondly, by crossing these individual-level findings with respondents' socio-cultural and educational data, this paper also proposes and tests hypotheses regarding the relative impact and importance of four possible antecedents of intercultural skills identified in the literature: prior international experience; intercultural training; foreign language proficiency; and experience of cultural diversity in the habitual country of residence. Third, we use this analysis to suggest data-based intercultural training priorities for today's management students. These conclusions are based on the statistical analysis of the individual responses of some 300 Bachelor's and Master's students at a major European business school to two online surveys: Ang, Van Dyne, et al.'s (2007) standard 20-question self-reporting CQ Scale, and an original questionnaire designed by the authors to collate information on respondents' socio-demographic and educational profiles relevant to our four hypotheses and explanatory variables. The data from both instruments were crossed in both descriptive statistical analysis and regression analysis. This research shows that there is no statistically significant positive relationship between any of the four antecedents, taken individually, and overall CQ level. The exception in this respect is the statistically significant correlation between international experience and the cognitive dimension of CQ. In contrast, the results show that the combination of international experience and foreign language skills, acting together, does have a strong overall impact on CQ levels. These results suggest that selecting and/or training students with strong foreign language skills and providing them with international experience (through multinational programmes, academic exchanges or international internships) constitutes one effective way of training the culturally intelligent managers of tomorrow.
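The reported pattern—weak individual antecedents but a strong combined effect—corresponds to an interaction term in a regression model. A minimal NumPy sketch on simulated respondents (not the survey data; the coefficients and coding are assumptions chosen to reproduce that pattern):

```python
# OLS with an interaction term, the kind of model that would detect
# "international experience x language skills acting together".
# Respondent data here is simulated, not the study's survey data.
import numpy as np

rng = np.random.default_rng(1)
n = 300
intl = rng.integers(0, 2, n).astype(float)   # prior international experience (0/1)
lang = rng.integers(0, 2, n).astype(float)   # foreign language proficiency (0/1)
# Simulated overall CQ: weak main effects, strong interaction, plus noise.
cq = 4.0 + 0.05 * intl + 0.05 * lang + 0.8 * intl * lang + rng.normal(0, 0.3, n)

# Design matrix: intercept, two main effects, and their interaction.
X = np.column_stack([np.ones(n), intl, lang, intl * lang])
beta, *_ = np.linalg.lstsq(X, cq, rcond=None)
intercept, b_intl, b_lang, b_interact = beta
```

In this setup the interaction coefficient dominates the two main effects, mirroring the conclusion that neither antecedent alone predicts overall CQ.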

Keywords: business school, cultural intelligence, millennial, training

Procedia PDF Downloads 131
203 Establishing Feedback Partnerships in Higher Education: A Discussion of Conceptual Framework and Implementation Strategies

Authors: Jessica To

Abstract:

Feedback is one of the most powerful levers for enhancing students’ performance. However, some students are under-engaged with feedback because they lack responsibility for feedback uptake. To resolve this conundrum, recent literature proposes feedback partnerships, in which students and teachers share the power and the responsibility to co-construct feedback. During feedback co-construction, students express feedback needs to teachers, and teachers respond to individuals’ needs in return. Though this approach can increase students’ feedback ownership, its application is lagging, as the field lacks conceptual clarity and an implementation guide. This presentation aims to discuss the conceptual framework of feedback partnerships and feedback co-construction strategies. It identifies the components of feedback partnerships and the strategies which could facilitate feedback co-construction. A systematic literature review was conducted to answer these questions. The literature search was performed using ERIC, PsycINFO, and Google Scholar with the keywords “assessment partnership”, “student as partner,” and “feedback engagement”. No time limit was set for the search. The inclusion criteria encompassed (i) student-teacher partnerships in feedback, (ii) feedback engagement in higher education, (iii) peer-reviewed publications, and (iv) English as the language of publication. Publications that did not address conceptual understanding or implementation strategies were excluded. Finally, 65 publications were identified and analysed using thematic analysis. For the procedure, the texts relating to the questions were first extracted. Then, codes were assigned to summarise the ideas of the texts. Upon subsuming similar codes into themes, four themes emerged: students’ responsibilities, teachers’ responsibilities, conditions for partnership development, and strategies. Their interrelationships were examined iteratively for framework development.
Establishing feedback partnerships requires different responsibilities of students and teachers during feedback co-construction. Students need to self-evaluate performance against task criteria, identify inadequacies, and communicate their needs to teachers. During feedback exchanges, they interpret teachers’ comments, generate self-feedback through reflection, and co-develop improvement plans with teachers. Teachers have to increase students’ understanding of criteria and evaluation skills and create opportunities for students’ expression of feedback needs. In feedback dialogue, teachers respond to students’ needs and advise on the improvement plans. Feedback partnerships are best grounded in an environment with trust and psychological safety. Four strategies could facilitate feedback co-construction. First, students’ understanding of task criteria could be increased by rubric explanation and exemplar analysis. Second, students could sharpen evaluation skills if they participated in peer review and received teacher feedback on the quality of their peer feedback. Third, provision of self-evaluation checklists and prompts, together with teacher modeling of the self-assessment process, could aid students in articulating feedback needs. Fourth, trust could be fostered when teachers explained the benefits of feedback co-construction, showed empathy, and provided personalised comments in dialogue. Some strategies were applied in interactive cover sheets, in which students performed self-evaluation and made feedback requests on a cover sheet during assignment submission, followed by teachers’ responses to individual requests. The significance of this presentation lies in unpacking the conceptual framework of feedback partnerships and outlining feedback co-construction strategies. With a solid foundation in theory and practice, researchers and teachers could better enhance students’ engagement with feedback.

Keywords: conceptual framework, feedback co-construction, feedback partnerships, implementation strategies

Procedia PDF Downloads 58
202 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow

Authors: Masood Otarod, Ronald M. Supkowski

Abstract:

This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be utilized to conduct kinetic studies in packed-bed tubular catalytic reactors over a broad range of Reynolds numbers. The model was tested by ¹³CO isotope transient tracing of CO adsorption in the Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over a Pd-Al₂O₃ catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of axial velocity, which is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. But ideal plug flow is impossible to achieve, and flow regimes approximating plug flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. The factorization theorem is derived from the observation that a cross section of a catalytic bed consists of a solid phase, across which the reaction takes place, and a void or porous phase, across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentration of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function.
The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and suffer the same variability, but in the reverse order of the concentrations of mobile-phase compounds. Factorability is a property of packed beds which transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentration of the mobile-phase compounds and the mean cross-sectional concentration of adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa, and Ωr, which are respectively denominated the convection coefficient cofactor, the axial dispersion coefficient cofactor, and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation as compensation for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa, and Ωr are monotonically correlated with the Reynolds number, which is expected based on the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
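The structure of the reduction can be sketched in generic notation (the symbols below are assumptions for illustration, not the authors' exact formulation):

```latex
% Factorization of the mobile-phase concentration: the product of the mean
% radial cup-mixing concentration \bar{C}(z) and a dimensionless radial factor:
C(r,z) \;=\; \bar{C}(z)\,\psi(r)
% Substituting into the dispersion equation and absorbing the unknown radial
% velocity profile into the cofactors yields, at steady state, a second-order
% linear ODE with constant coefficients in \bar{C}, where S collects the
% reaction and radial-exchange terms (including the \Omega_r contribution):
\Omega_a D_{ax}\,\frac{d^{2}\bar{C}}{dz^{2}}
\;-\;\Omega_c\,\bar{u}\,\frac{d\bar{C}}{dz}
\;+\;S(\bar{C}) \;=\; 0
```

The cofactors Ωa and Ωc rescale the nominal dispersion coefficient and mean velocity, which is why the radial velocity profile itself is not needed.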

Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations

Procedia PDF Downloads 240
201 Emergency Department Utilisation of Older People Presenting to Four Emergency Departments

Authors: M. Fry, L. Fitzpatrick, Julie Considine, R. Z. Shaban, Kate Curtis

Abstract:

Introduction: The vast majority of older Australians live independently and are self-managing at home, despite a growing number living with a chronic illness that requires health intervention. Evidence shows that between 50% and 80% of people presenting to the emergency department (ED) are in pain. Australian EDs manage 7.2 million attendances every year, and 1.4 million of these are people aged 65 years or more. Research shows that 28% of ED patients aged 65 years or more have cognitive impairment (CI) associated with dementia, delirium, and neurological conditions. Background: Traditional ED service delivery may not be suitable for older people who present with multiple, complex, and ongoing illnesses. Likewise, ED clinical staff often perceive that their role should be focused more on immediate and potentially life-threatening illnesses and conditions which are episodic in nature. Therefore, the needs of older people and their families/carers may not be adequately addressed in the context of an ED presentation. Aim: We aimed to explore the utilisation and characteristics of older people presenting to four metropolitan EDs. Method: The findings being presented are part of a program of research exploring pain management practices for older persons with long bone fractures. The study was conducted across four metropolitan emergency departments with older patients (65 years and over) and involved a 12-month randomised medical record audit (n=255). Results: ED presentations across the four ED sites in 2012 numbered 168021, with 44778 (26.6%) patients aged 65 and over. Of the 44778 patients, the average age was 79.1 years (SD 8.54), and there were more females (23932: 53.5%). The majority (26925: 85.0%) of older persons self-referred to the ED and lived independently. Many arrived by ambulance (n=18553: 41.4%) and were allocated triage category 3 (n=19507: 43.65%) or triage category 4 (n=15389: 34.43%).
The top five triage symptom presentations involved pain (n=8088; 18.25%), dyspnoea (n=4735; 10.7%), falls (n=4032; 9.1%), other (n=3984; 9.0%), and cardiac symptoms (n=2987; 6.7%). The top five system-based diagnostic presentations involved musculoskeletal (n=8902; 20.1%), cardiac (n=6704; 15.0%), respiratory (n=4933; 11.0%), neurological (n=4909; 11.0%), and gastroenterological (n=4321; 9.7%) conditions. On review of one tertiary hospital database, average vital signs at time of triage were: systolic blood pressure 143.6 mmHg; heart rate 83.4 beats/minute; respiratory rate 18.5 breaths/minute; oxygen saturation 97.0%; tympanic temperature 36.7°C; and blood glucose level 7.4 mmol/litre. The majority presented with a Glasgow Coma Score of 14 or higher. On average, the older person stayed in the ED for 4:56 hours (SD 3:28), and the average time to be seen was 39 minutes (SD 48 minutes). The majority of older persons were admitted (n=27562; 61.5%) or discharged home (n=16256; 36.0%), while some did not wait for treatment (n=8879; 0.02%). Conclusion: The vast majority of older persons are living independently, although many require admission on arrival to the ED. Many arrived in pain and with musculoskeletal injuries or conditions. New models of care need to be considered, which may better support self-management and independent living of the older person and the National Emergency Access Targets.

Keywords: chronic, older person, aged care, emergency department

Procedia PDF Downloads 208
200 Developing Primary Care Datasets for a National Asthma Audit

Authors: Rachael Andrews, Viktoria McMillan, Shuaib Nasser, Christopher M. Roberts

Abstract:

Background and objective: The National Review of Asthma Deaths (NRAD) found that asthma management and care was inadequate in 26% of the cases reviewed. Major shortfalls were identified in adherence to national guidelines and standards and, particularly, in the organisation of care, including supervision and monitoring in primary care, with 70% of cases reviewed having at least one avoidable factor in this area. 5.4 million people in the UK are diagnosed with and actively treated for asthma, and approximately 60,000 are admitted to hospital with acute exacerbations each year. The majority of people with asthma receive management and treatment solely in primary care. This has created concern that many people within the UK are receiving sub-optimal asthma care, resulting in unnecessary morbidity and risk of adverse outcome. NRAD concluded that a national asthma audit programme should be established to measure and improve the processes, organisation, and outcomes of asthma care. Objective: To develop a primary care dataset enabling extraction of information from GP practices in Wales and providing robust data from which results and lessons could be drawn to drive service development and improvement. Methods: A multidisciplinary group of experts, including general practitioners, primary care organisation representatives, and asthma patients, was formed and used as a source of governance and guidance. A review of asthma literature, guidance, and standards took place and was used to identify areas of asthma care which, if improved, would lead to better patient outcomes. Modified Delphi methodology was used to gain consensus from the expert group on which of the areas identified were to be prioritised, and an asthma patient and carer focus group was held to seek views and feedback on areas of asthma care that were important to them.
Areas of asthma care identified by both groups were mapped to asthma guidelines and standards to inform and develop primary and secondary care datasets covering both adult and paediatric care. Dataset development consisted of expert review and a targeted consultation process in order to seek broad stakeholder views and feedback. Results: The areas of asthma care identified as requiring prioritisation by the National Asthma Audit were: (i) prescribing, (ii) asthma diagnosis, (iii) asthma reviews, (iv) Personalised Asthma Action Plans (PAAPs), and (v) primary care follow-up after discharge from hospital. Methodologies and primary care queries were developed to cover each of the areas of poor and variable asthma care identified, with the queries designed to extract information directly from patients’ electronic records. Conclusion: This paper describes the methodological approach followed to develop primary care datasets for a National Asthma Audit. It sets out the principles behind the establishment of a National Asthma Audit programme in response to a national asthma mortality review and describes the development activities undertaken. Key process elements included: (i) mapping identified areas of poor and variable asthma care to national guidelines and standards, (ii) early engagement of experts, including clinicians and patients, in the process, and (iii) targeted consultation on the queries to provide further insight into measures that were collectable, reproducible and relevant.

Keywords: asthma, primary care, general practice, dataset development

Procedia PDF Downloads 145
199 Development of Knowledge Discovery Based Interactive Decision Support System on Web Platform for Maternal and Child Health System Strengthening

Authors: Partha Saha, Uttam Kumar Banerjee

Abstract:

Maternal and Child Healthcare (MCH) has always been regarded as one of the most important issues globally. Reduction of maternal and child mortality rates and increased healthcare service coverage were declared among the targets of the Millennium Development Goals to 2015, and thereafter as an important component of the Sustainable Development Goals. Over the last decade, worldwide MCH indicators have improved but have not matched the expected levels. Progress in both maternal and child mortality rates has been monitored by several researchers, whose studies indicate that fewer than 26% of low- and middle-income countries (LMICs) were on track to achieve the targets prescribed by MDG4. As of 2011, the average worldwide annual rates of reduction of the under-five mortality rate and the maternal mortality rate were 2.2% and 1.9% respectively, whereas rates of at least 4.4% and 5.5% annually were needed to achieve the targets. In spite of having proven healthcare interventions for both mothers and children, these could not be scaled up to the required volume due to fragmented health systems, especially in developing and under-developed countries. In this research, a knowledge-discovery-based interactive Decision Support System (DSS) has been developed on a web platform to assist healthcare policy makers in developing evidence-based policies. To achieve desirable results in MCH, efficient resource planning is essential, and in most LMICs resources are a major constraint. Knowledge generated through this system would help healthcare managers develop strategic resource planning to combat issues such as large inequities and low coverage in MCH. This system would help healthcare managers accomplish the following four tasks.
These are: (a) comprehending region-wise conditions of variables related to MCH; (b) identifying relationships among variables; (c) segmenting regions based on variable status; and (d) finding segment-wise key influential variables which have a major impact on healthcare indicators. The whole system development process was divided into three phases: (i) identifying contemporary issues related to MCH services and policy making; (ii) development of the system; and (iii) verification and validation of the system. More than 90 variables under three categories, namely (a) educational, social, and economic parameters; (b) MCH interventions; and (c) health system building blocks, have been included in this web-based DSS, and five separate modules have been developed under the system. The first module has been designed for analysing the current healthcare scenario. The second module helps healthcare managers understand correlations among variables. The third module reveals frequently occurring incidents along with different MCH interventions. The fourth module segments regions based on the three categories mentioned previously, and in the fifth module, segment-wise key influential interventions are identified. India has been considered as the case study area in this research, and data from 601 districts of India have been used to inspect the effectiveness of the developed modules. The system has been developed by implementing various statistical and data mining techniques on a web platform. Policy makers are able to generate different scenarios from the system before drawing any inference, aided by its interactive capability.
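The segmentation step (fourth module) followed by inspection of key variables (fifth module) can be sketched with a standard clustering pass. The district table below is synthetic, the variable names are hypothetical, and scikit-learn's KMeans stands in for whatever mining technique the system actually uses.

```python
# Sketch of the segmentation module: cluster districts on normalised MCH
# indicators, then inspect which variable most separates the segments.
# District data is synthetic; KMeans is an assumed stand-in technique.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Synthetic districts: [female literacy %, institutional delivery %, immunisation %]
low = rng.normal([45, 40, 50], 5, size=(30, 3))    # under-served profile
high = rng.normal([80, 85, 90], 5, size=(30, 3))   # well-served profile
districts = np.vstack([low, high])

# Standardise so no indicator dominates purely by scale, then segment.
X = StandardScaler().fit_transform(districts)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# The variable whose cluster means differ most is a candidate
# "key influential variable" for distinguishing the segments.
centers = km.cluster_centers_
key_var = int(np.argmax(np.abs(centers[0] - centers[1])))
```

In a real deployment the 90+ variables would replace the three synthetic columns, and the segment profiles would feed the interactive scenario views.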

Keywords: maternal and child healthcare, decision support systems, data mining techniques, low and middle income countries

Procedia PDF Downloads 222
198 Readout Development of a LGAD-based Hybrid Detector for Microdosimetry (HDM)

Authors: Pierobon Enrico, Missiaggia Marta, Castelluzzo Michele, Tommasino Francesco, Ricci Leonardo, Scifoni Emanuele, Vincezo Monaco, Boscardin Maurizio, La Tessa Chiara

Abstract:

Clinical outcomes collected over the past three decades have suggested that ion therapy has the potential to be a treatment modality superior to conventional radiation for several types of cancer, including recurrences, as well as for other diseases. Although the results have been encouraging, numerous treatment uncertainties remain a major obstacle to the full exploitation of particle radiotherapy. To overcome these uncertainties and optimize treatment outcome, the best possible description of radiation quality is of paramount importance for linking the physical dose to biological effects. Microdosimetry was developed as a tool to improve the description of radiation quality. By recording the energy deposition at the micrometric scale (the typical size of a cell nucleus), this approach takes into account the non-deterministic nature of atomic and nuclear processes and creates a direct link between the dose deposited by radiation and the biological effect induced. Microdosimeters measure the spectrum of the lineal energy y, defined as the energy deposition in the detector divided by the most probable track length travelled by the radiation. The latter is provided by the so-called "Mean Chord Length" (MCL) approximation and is related to the detector geometry. To improve the characterization of radiation field quality, we define a new quantity that replaces the MCL with the actual particle track length inside the microdosimeter. In order to measure this new quantity, we propose a two-stage detector consisting of a commercial Tissue Equivalent Proportional Counter (TEPC) and 4 layers of Low Gain Avalanche Detector (LGAD) strips. The TEPC records the energy deposition in a region equivalent to 2 um of tissue, while the LGADs are well suited for particle tracking because they can be thinned down to tens of micrometres and respond quickly to ionizing radiation. The concept of HDM has been investigated and validated with Monte Carlo simulations.
Currently, a dedicated readout is under development. This two-stage detector requires two different systems whose complementary information must be joined for each event: the energy deposition in the TEPC and the corresponding track length recorded by the LGAD tracker. This challenge is being addressed by implementing System on Chip (SoC) technology, relying on Field Programmable Gate Arrays (FPGAs) based on the Zynq architecture. The TEPC readout consists of three different signal-amplification legs and is carried out by three ADCs mounted on an FPGA board. The LGAD strip signals are processed by dedicated chips, and the identity of the activated strip is then stored, again relying on FPGA-based solutions. In this work, we provide a detailed description of the HDM geometry and of the SoC solutions that we are implementing for the readout.
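To make the lineal-energy definition concrete, the sketch below compares the MCL-based value with event-by-event values using actual chord lengths for straight tracks crossing a spherical site; the spherical geometry, the chord-sampling scheme, and the 5 keV deposition are illustrative assumptions, not HDM specifics:

```python
import numpy as np

rng = np.random.default_rng(1)

d = 2.0              # site diameter in micrometres (2 um tissue-equivalent, as for the TEPC)
mcl = 2.0 * d / 3.0  # mean chord length of a sphere under uniform isotropic chords (4V/S)

# Straight tracks crossing the sphere: an impact parameter b distributed
# uniformly over the cross-sectional area gives chord length 2*sqrt(r^2 - b^2).
r = d / 2.0
b = r * np.sqrt(rng.uniform(0.0, 1.0, 100_000))
chords = 2.0 * np.sqrt(r**2 - b**2)

eps = 5.0               # keV, a hypothetical energy deposition in one event

y_mcl = eps / mcl       # conventional lineal energy (one number for all events)
y_track = eps / chords  # event-by-event values using the true track length

print(np.mean(chords))  # converges to 2d/3, i.e. the MCL itself
```

The average of the sampled chords recovers the MCL, but individual events deviate from it substantially, which is the motivation for measuring the true path length with the LGAD tracking layers.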

Keywords: particle tracking, ion therapy, low gain avalanche diode, tissue equivalent proportional counter, microdosimetry

Procedia PDF Downloads 138
197 Creation of a Test Machine for the Scientific Investigation of Chain Shot

Authors: Mark McGuire, Eric Shannon, John Parmigiani

Abstract:

Timber harvesting increasingly involves mechanized equipment. This has increased the efficiency of harvesting but has also introduced worker-safety concerns. One such concern arises from the use of harvesters. During operation, harvesters subject saw chain to large dynamic mechanical stresses. These stresses can, under certain conditions, cause the saw chain to fracture. The high speed of harvester saw chain can cause the resulting open chain loop to fracture a second time due to the dynamic loads placed upon it as it travels through space. If a second fracture occurs, it can produce a projectile consisting of one to several chain links. This projectile is referred to as a chain shot. It has speeds similar to a bullet but typically has greater mass, and it is a significant safety concern; numerous examples exist of chain shots penetrating bullet-proof barriers and causing severe injury and death. Improved harvester-cab barriers can help prevent injury; however, a comprehensive scientific understanding of chain shot is required to consistently reduce or prevent it. Obtaining this understanding requires a test machine with the capability to cause chain shot under carefully controlled conditions and to accurately measure the response. Worldwide, few such test machines exist, and those that do focus on validating the ability of barriers to withstand a chain shot impact rather than on obtaining a scientific understanding of the chain shot event itself. The purpose of this paper is to describe the design, fabrication, and use of a test machine capable of a comprehensive scientific investigation of chain shot. The machine can test all commercially available saw chains and bars at chain tensions and speeds meeting and exceeding those typically encountered in harvester use, and it accurately measures the corresponding key technical parameters. The test machine was constructed inside a standard shipping container.
This provides space for both an operator station and a test chamber. In order to contain the chain shot under any possible test conditions, the test chamber was lined with a base layer of AR500 steel followed by an overlay of HDPE. To accommodate varying bar orientations and fracture-initiation sites, the entire saw chain drive unit and bar mounting system is modular and can be located anywhere in the test chamber. The drive unit consists of a high-speed electric motor with a flywheel. Standard Ponsse harvester head components are used for bar mounting and chain tensioning. Chain lubrication is provided by a separate peristaltic pump. Chain fracture is initiated in accordance with ISO standard 11837. Measured parameters include shaft speed, motor vibration, bearing temperatures, motor temperature, motor current draw, hydraulic fluid pressure, chain force at fracture, and high-speed camera images. Results show that the machine is capable of consistently causing chain shot. Measurement output shows the fracture location and the force associated with fracture as a function of saw chain speed and tension. Use of this machine will result in a scientific understanding of chain shot and, consequently, improved products and greater harvester operator safety.

Keywords: chain shot, safety, testing, timber harvesters

Procedia PDF Downloads 124
196 Integrating Data Mining within a Strategic Knowledge Management Framework: A Platform for Sustainable Competitive Advantage within the Australian Minerals and Metals Mining Sector

Authors: Sanaz Moayer, Fang Huang, Scott Gardner

Abstract:

In the highly leveraged business world of today, an organisation's success depends on how it manages and organises its traditional and intangible assets. In the knowledge-based economy, knowledge as a valuable asset gives enduring capability to firms competing in rapidly shifting global markets. It can be argued that the ability to create unique knowledge assets by configuring ICT and human capabilities will be a defining factor for international competitive advantage in the mid-21st century. The concept of KM is recognized in the strategy literature, and increasingly by senior decision-makers (particularly in large firms, which can achieve scalable benefits), as an important vehicle for stimulating innovation and organisational performance in the knowledge economy. This thinking has been evident in professional services and other knowledge-intensive industries for over a decade. It highlights the importance of social capital and the value of the intellectual capital embedded in social and professional networks, complementing the traditional focus on the creation of intellectual property assets. Despite the growing interest in KM within professional services, there has been limited discussion in relation to multinational resource-based industries such as mining and petroleum, where the focus has been principally on global portfolio optimization through economies of scale, process efficiencies and cost reduction. The Australian minerals and metals mining industry, although traditionally viewed as capital-intensive, employs a significant number of knowledge workers, notably engineers, geologists, highly skilled technicians, and legal, finance, accounting, ICT and contracts specialists working in projects or functions, representing potential knowledge silos within the organisation. This silo effect arguably inhibits knowledge sharing and retention by disaggregating corporate memory, with increased operational and project-continuity risk.
It may also limit the potential for process, product, and service innovation. In this paper, the strategic application of knowledge management, incorporating contemporary ICT platforms and data mining practices, is explored as an important enabler of knowledge discovery, risk reduction, and retention of corporate knowledge in resource-based industries. With reference to the relevant strategy, management, and information systems literature, this paper highlights possible connections (currently undergoing empirical testing) between a Strategic Knowledge Management (SKM) framework incorporating supportive Data Mining (DM) practices and competitive advantage for multinational firms operating within the Australian resource sector. Based on a review of the relevant literature, we also propose that more effective management of soft and hard systems knowledge is crucial for major Australian firms in all sectors seeking to improve organisational performance through the human and technological capability captured in organisational networks.

Keywords: competitive advantage, data mining, mining organisation, strategic knowledge management

Procedia PDF Downloads 383
195 Typology of Fake News Dissemination Strategies in Social Networks in Social Events

Authors: Mohadese Oghbaee, Borna Firouzi

Abstract:

The emergence of the Internet, and more specifically the formation of social media, has opened the way for new types of content dissemination. In recent years, social media users have shared information, communicated with others, and exchanged opinions on social events in this space. Much of the information published there is suspect and produced with the intention of deceiving others; such content is often called "fake news". Fake news, by mixing fabricated material into the circulation of accurate information and misleading public opinion, can endanger the security of countries and deprive audiences of the basic right of free access to real information. Competing governments, opposition elements, profit-seeking individuals and even rival organizations, aware of this capacity, act on a large scale to distort and overturn facts in the virtual space of target countries and communities and to steer public opinion towards their own goals. This extensive de-truthing of societies' information space has created a wave of harm and worry all over the world, and these concerns have opened a new line of research into the timely containment and reduction of the destructive effects of fake news on public opinion. In addition, the phenomenon has the potential to create serious problems for societies, and its impact on events such as the 2016 American elections, Brexit, the 2017 French elections and the 2019 Indian elections has prompted concern and a range of responses. A simple look at the growth trend of research in Scopus shows a sharp rise in publications with the keyword "false information", peaking in 2020 at 524 items, whereas in 2015 only 30 scientific-research items were published in this field.
Considering that one of the capabilities of social media is to provide a context for the dissemination of news and information, both true and false, this article investigates a classification of strategies for spreading fake news on social networks during social events. To achieve this goal, the thematic analysis research method was chosen. First, an extensive library study of global sources was conducted. Then, in-depth interviews were conducted with 18 well-known specialists and experts in the field of news and media in Iran, selected by purposeful sampling. The data were then analysed using thematic analysis to derive strategies. The strategies identified so far (the research is ongoing) include: unrealistically amplifying or downplaying the speed and content of an event; stimulating psycho-media movements; targeting emotionally receptive audiences such as women, teenagers and young people; fuelling public hatred; framing reactions to events as legitimate or illegitimate; inciting physical conflict; trivialising violent protests; and the targeted publication of images and interviews.

Keywords: fake news, social network, social events, thematic analysis

Procedia PDF Downloads 35
194 Explosive Clad Metals for Geothermal Energy Recovery

Authors: Heather Mroz

Abstract:

Geothermal fluids can provide a nearly unlimited source of renewable energy but are often highly corrosive due to dissolved carbon dioxide (CO2), hydrogen sulphide (H2S), ammonia (NH3) and chloride ions. The corrosive environment drives material selection for many components, including piping, heat exchangers and pressure vessels, towards higher alloys of stainless steel, nickel-based alloys and titanium. The use of these alloys is cost-prohibitive and does not offer the pressure rating of carbon steel. One solution, explosion cladding, has been proven to reduce the capital cost of geothermal equipment while retaining the mechanical and corrosion properties of both the base metal and the cladded surface metal. Explosion cladding is a solid-state welding process that uses precision explosions to bond two dissimilar metals while retaining their mechanical, electrical and corrosion properties. The process is commonly used to clad steel with a thin layer of corrosion-resistant alloy, such as stainless steel, brass, nickel, silver, titanium, or zirconium. Additionally, explosion welding can join a wide array of compatible and non-compatible metals, with more than 260 metal combinations possible. The explosion weld is achieved in milliseconds; therefore, no bulk heating occurs, and the metals experience no dilution. By adhering to a strict set of manufacturing requirements, both the shear strength and tensile strength of the bond will exceed the strength of the weaker metal, ensuring the reliability of the bond. For over 50 years, explosion cladding has been used in the oil and gas and chemical processing industries and has provided significant economic benefit through reduced maintenance and lower capital costs compared with solid construction. The focus of this paper will be on the many benefits of using explosion clad in process equipment instead of more expensive solid alloy construction.
The paper will describe the method of clad-plate production by explosion welding as well as the methods employed to ensure sound bonding of the metals. It will also cover the origins of explosion cladding and recent technological developments. Traditionally, explosion-clad plate was formed into vessels, tube sheets and heads, but recent advances include explosion-welded piping. The final portion of the paper will give examples of the use of explosion-clad metals in geothermal energy recovery. The classes of materials used for geothermal brine will be discussed, including stainless steels, nickel alloys and titanium. The examples will include heat exchangers (tube sheets), high-pressure and horizontal separators, standard-pressure crystallizers, piping and well casings. It is important to educate engineers and designers on material options as they develop equipment for geothermal resources. Explosion cladding is a niche technology that can be successful in many situations, like geothermal energy recovery, where high-temperature, high-pressure and corrosive environments are typical. Applications for explosion-clad metals include vessel and heat exchanger components as well as piping.

Keywords: clad metal, explosion welding, separator material, well casing material, piping material

Procedia PDF Downloads 136
193 Chemopreventive Efficacy of Andrographolide in Rat Colon Carcinogenesis Model Using Aberrant Crypt Foci (ACF) as Endpoint Marker

Authors: Maryam Hajrezaie, Mahmood Ameen Abdulla, Nazia Abdul Majid, Hapipa Mohd Ali, Pouya Hassandarvish, Maryam Zahedi Fard

Abstract:

Background: Colon cancer is one of the most prevalent cancers in the world and the third leading cause of cancer death in both males and females. The incidence of colon cancer is ranked fourth among all cancers but varies in different parts of the world. Cancer chemoprevention is defined as the use of natural or synthetic compounds capable of inducing the biological mechanisms necessary to preserve genomic fidelity. Andrographolide is the major labdane diterpenoid constituent of the plant Andrographis paniculata (family Acanthaceae), used extensively in traditional medicine. Extracts of the plant and their constituents are reported to exhibit a wide spectrum of biological activities of therapeutic importance. Laboratory animal model studies have provided evidence that andrographolide plays a role in inhibiting the risk of certain cancers. Objective: Our aim was to evaluate the chemopreventive efficacy of andrographolide in the AOM-induced rat model. Methods: To evaluate the inhibitory properties of andrographolide on colonic aberrant crypt foci (ACF), five groups of 7-week-old male rats were used. Group 1 (control) was fed 10% Tween 20 once a day; Group 2 (cancer control) rats were intra-peritoneally injected with 15 mg/kg azoxymethane; Group 3 (drug control) rats were injected with 15 mg/kg azoxymethane and 5-fluorouracil; Groups 4 and 5 (experimental groups) were fed 10 and 20 mg/kg andrographolide, respectively, once a day. After 1 week, the treatment-group rats received subcutaneous injections of azoxymethane, 15 mg/kg body weight, once weekly for 2 weeks. Control rats continued on Tween 20 once a day, and the experimental groups received 10 and 20 mg/kg andrographolide once a day for 8 weeks. All rats were sacrificed 8 weeks after the azoxymethane treatment. Colons were evaluated grossly and histopathologically for ACF.
Results: Administration of 10 mg/kg and 20 mg/kg andrographolide was found to be effectively chemoprotective, as evidenced microscopically and biochemically. Andrographolide suppressed total colonic ACF formation by up to 40% and 60%, respectively, compared with the cancer-control group. Pre-treatment with andrographolide significantly reduced the impact of AOM toxicity on plasma protein and urea levels as well as on plasma aspartate aminotransferase (AST), alanine aminotransferase (ALT), lactate dehydrogenase (LDH) and gamma-glutamyl transpeptidase (GGT) activities. Grossly, colorectal specimens revealed that andrographolide treatment decreased the mean number of crypts in AOM-treated rats. Importantly, rats fed andrographolide showed 75% inhibition of foci containing four or more aberrant crypts. The results also showed a significant increase in glutathione (GSH), superoxide dismutase (SOD), nitric oxide (NO), and prostaglandin E2 (PGE2) activities and a decrease in the malondialdehyde (MDA) level. Histologically, all treatment groups showed a significant decrease in dysplasia compared to the control group. Immunohistochemical staining showed up-regulation of Hsp70 and down-regulation of Bax proteins. Conclusion: The current study demonstrated that andrographolide reduces the number of ACF. According to these data, andrographolide may have promising chemopreventive activity in the AOM-induced ACF model.
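The inhibition percentages quoted above follow the usual control-relative formula; a trivial sketch (the ACF counts below are hypothetical placeholders, not the study's raw data):

```python
def inhibition_pct(control_acf: float, treated_acf: float) -> float:
    """Percent inhibition of ACF formation relative to the cancer-control group."""
    return 100.0 * (control_acf - treated_acf) / control_acf

# Hypothetical mean ACF counts per colon, chosen to reproduce the reported orders:
print(inhibition_pct(150, 90))  # 40.0 -> the order reported for 10 mg/kg
print(inhibition_pct(150, 60))  # 60.0 -> the order reported for 20 mg/kg
```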

Keywords: chemopreventive, andrographolide, colon cancer, aberrant crypt foci (ACF)

Procedia PDF Downloads 408
192 Stuck Spaces as Moments of Learning: Uncovering Threshold Concepts in Teacher Candidate Experiences of Teaching in Inclusive Classrooms

Authors: Joy Chadwick

Abstract:

There is no doubt that classrooms today are more complex and diverse than ever before. Preparing teacher candidates to meet these challenges is essential to ensure the retention of teachers within the profession and to ensure that graduates begin their teaching careers with the knowledge and understanding of how to effectively meet the diversity of students they will encounter. Creating inclusive classrooms requires teachers to have a repertoire of effective instructional skills and strategies. Teachers must also have the mindset to embrace diversity and value the uniqueness of the individual students in their care. This qualitative study analyzed teacher candidates' experiences as they completed a fourteen-week teaching practicum while simultaneously completing a university course focused on inclusive pedagogy. The research investigated the challenges and successes teacher candidates had in translating theory related to inclusive pedagogy into their teaching practice. Applying threshold concept theory as a framework, the research explored the troublesome concepts, liminal spaces, and transformative experiences connected to inclusive practices. Threshold concept theory suggests that within every disciplinary field there exist particular threshold concepts that serve as gateways or portals into previously inaccessible ways of thinking and practicing. It is in these liminal spaces that conceptual shifts in thinking and understanding, and deep learning, can occur. The threshold concept framework provided a lens to examine teacher candidates' struggles and successes with the inclusive education course content and the application of this content to their practicum experiences. A qualitative research approach was used, which included analyzing twenty-nine reflective course journals and six follow-up one-to-one semi-structured interviews. The journals and interview transcripts were coded and themed using NVivo software.
Threshold concept theory was then applied to the data to uncover the liminal or stuck spaces of learning and the ways in which the teacher candidates navigated those challenging places of teaching. The research also sought to uncover potential transformative shifts in teacher candidate understanding as connected to teaching in an inclusive classroom. The findings suggested that teacher candidates experienced difficulties when they did not feel they had the knowledge, skill, or time to meet the needs of the students in the way they envisioned they should. To navigate the frustration of this thwarted vision, they relied on present and previous course content and experiences, collaborative work with other teacher candidates and their mentor teachers, and a proactive approach to planning for students. Transformational shifts were most evident in their ability to reframe their perceptions of children from a deficit or disability lens to a strength-based belief in the potential of students. It was evident that through their course work and practicum experiences, their beliefs regarding struggling students shifted as they saw the value of embracing neurodiversity, the importance of relationships, and planning for and teaching through a strength-based approach. Research findings have implications for teacher education programs and for understanding threshold concepts theory as connected to practice-based learning experiences.

Keywords: inclusion, inclusive education, liminal space, teacher education, threshold concepts, troublesome knowledge

Procedia PDF Downloads 39
191 Online Faculty Professional Development: An Approach to the Design Process

Authors: Marie Bountrogianni, Leonora Zefi, Krystle Phirangee, Naza Djafarova

Abstract:

Faculty development is critical for any institution, as it impacts students' learning experiences and faculty performance with regard to course delivery. With that in mind, The Chang School at Ryerson University embarked on an initiative to develop a comprehensive, relevant faculty development program for online faculty and instructors. Teaching Adult Learners Online (TALO) is a professional development program designed to build capacity among online teaching faculty, to enhance communication and facilitation skills for online instruction, and to establish a Community of Practice that gives online faculty opportunities to network and exchange ideas and experiences. TALO comprises four online modules, each providing three hours of learning materials. The topics focus on the online teaching and learning experience, principles and practices, opportunities and challenges in online assessment, and course design and development. TALO offers a unique experience for online instructors, who take on the role of both a student and an instructor through interactive activities involving discussions, hands-on assignments, and peer mentoring, while experimenting with the technological tools available for their online teaching. Through these exchanges and informal peer mentoring, a small interdisciplinary community of practice has started to take shape. Successful participants must meet four requirements for completion: i) participate actively in online discussions and activities; ii) develop a communication plan for the course they are teaching; iii) design one learning activity or media component; and iv) design one online learning module. This study adopted a mixed-methods exploratory sequential design. For the qualitative phase, a thorough literature review was conducted on what constitutes effective faculty development programs. Based on that review, the design team identified desired competencies for online teaching/facilitation and course design.
Once the competencies were identified, a focus group interview with The Chang School teaching community was conducted as a needs assessment and to validate the competencies. In the quantitative phase, questionnaires were distributed to instructors and faculty after the program was launched, to continue ongoing evaluation and revision in the hope of further improving the program to meet the teaching community's needs. Four faculty members participated in a one-hour focus group interview. Major findings from the focus group interview revealed that, from the training program, faculty wanted: i) to better engage students online; ii) to enhance their online teaching with specific strategies; and iii) to explore different ways to assess students online. Ninety-one faculty members completed the questionnaire, and the findings indicated that: i) the majority of faculty stated that they gained the necessary skills to demonstrate instructor presence through communication and use of the technological tools provided; ii) faculty confidence with course management strategies increased; and iii) learning from peers is most effective, and the Community of Practice is strengthened and valued even more as program alumni become facilitators. Although this professional development program is not mandatory for online instructors, over 152 online instructors have successfully completed it since its launch in Fall 2014. A Community of Practice emerged as a result of the program, and participants continue to exchange thoughts and ideas about online teaching and learning.

Keywords: community of practice, customized, faculty development, inclusive design

Procedia PDF Downloads 147
190 Early Initiation of Breastfeeding and Its Determinants among Non-Caesarean Deliveries at Primary and Secondary Health Facilities: A Case Observational Study

Authors: Farhana Karim, Abdullah N. S. Khan, Mohiuddin A. K. Chowdhury, Nabila Zaka, Alexander Manu, Shams El Arifeen, Sk Masum Billah

Abstract:

Breastfeeding, an integral part of newborn care, can reduce all-cause neonatal mortality and morbidity by 55-87%. Early initiation of breastfeeding within 1 hour of birth can avert 22% of newborn mortality. Only 45% of the world's newborns, and 42% of newborns in South Asia, are put to the breast within one hour of birth. In Bangladesh, only half of mothers practice early initiation of breastfeeding, and it is less likely to be practiced if the baby is born in a health facility. This study aims to generate strong evidence on early initiation of breastfeeding practices in government health facilities and to explore the associated factors influencing the practice. The study was conducted in selected health facilities in three neighbouring districts of Northern Bangladesh. A total of 249 normal vaginal delivery cases were observed for 24 hours from the time of birth. The outcome variable was initiation of breastfeeding within 1 hour, while the explanatory variables included type of health facility, privacy, presence of a support person, stage of labour at admission, need for augmentation of labour, complications during delivery, need for episiotomy, spontaneous cry of the newborn, skin-to-skin contact with the mother, post-natal contact with the service provider, receiving a post-natal examination, and counselling on breastfeeding during the postnatal contact. Simple descriptive statistics were employed to show the distribution of samples according to socio-demographic characteristics. The Kruskal-Wallis test was carried out to test the equality of medians among two or more categories of each variable, and P-values are reported. A series of simple logistic regressions were conducted with all potential explanatory variables to identify the determining factors for breastfeeding within 1 hour in a health facility. Finally, multiple logistic regression was conducted including the variables found significant in the bivariate analyses.
Almost 90% of participants initiated breastfeeding at the health facility, and the median time to initiation was 38 minutes. However, delivering in a sub-district hospital significantly delayed breastfeeding initiation in comparison to delivering in a district hospital. Maintenance of adequate privacy and the presence of separate staff to take care of the newborn significantly reduced the time to initiation. Initiation was delayed if the mother had augmented labour or obstetric complications, or if the newborn needed resuscitation. However, initiation was significantly earlier if the baby was placed skin-to-skin on the mother's abdomen and received a postnatal examination by a provider. After controlling for potential confounders, the odds of initiating breastfeeding within one hour of birth were higher if the mother gave birth in a district hospital (AOR 3.0: 95% CI 1.5, 6.2), if privacy was well-maintained (AOR 2.3: 95% CI 1.1, 4.5), if the baby cried spontaneously (AOR 7.7: 95% CI 3.3, 17.8), if the baby was put in skin-to-skin contact with the mother (AOR 4.6: 95% CI 1.9, 11.2), and if the baby was examined by a provider in the facility (AOR 4.4: 95% CI 1.4, 14.2). The evidence generated by this study will hopefully direct policymakers to identify and prioritize opportunities for creating and supporting early initiation of breastfeeding in health facilities.
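The adjusted odds ratios (AORs) reported above come from multiple logistic regression; the sketch below shows that analysis pattern on synthetic data, where the covariates, the effect sizes, and the use of scikit-learn are illustrative assumptions rather than the study's actual data or code:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 249  # matches the number of observed deliveries

# Synthetic binary covariates standing in for the study's explanatory variables.
df = pd.DataFrame({
    "district_hospital": rng.integers(0, 2, n),
    "privacy_maintained": rng.integers(0, 2, n),
    "skin_to_skin": rng.integers(0, 2, n),
})

# Simulate the outcome (early initiation) from an assumed logistic model.
logit = (0.2 + 1.1 * df["district_hospital"]
         + 0.8 * df["privacy_maintained"]
         + 1.5 * df["skin_to_skin"])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Near-unpenalised fit (large C); exponentiated coefficients are the AORs.
model = LogisticRegression(C=1e6, max_iter=1000).fit(df, y)
aor = pd.Series(np.exp(model.coef_[0]), index=df.columns, name="AOR")
print(aor)
```

Confidence intervals like those quoted (e.g. AOR 3.0, 95% CI 1.5-6.2) would typically be derived from the coefficient standard errors, for example with a statistics package such as statsmodels.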

Keywords: Bangladesh, early initiation of breastfeeding, health facility, normal vaginal delivery, skin to skin contact

Procedia PDF Downloads 120
189 Scoping Review of the Potential to Embed Mental Health Impact in Global Challenges Research

Authors: Netalie Shloim, Brian Brown, Siobhan Hugh-Jones, Jane Plastow, Diana Setiyawati, Anna Madill

Abstract:

In June 2021, the World Health Organization launched its guidance and technical packages on community mental health services, stressing a human rights-based approach to care. This initiative stems from an increasing acknowledgment of the role mental health plays in achieving the Sustainable Development Goals. Nevertheless, mental health remains a relatively neglected research area, and estimates of untreated mental disorders in low- and middle-income countries (LMICs) are as high as 78% for adults. Moreover, the development sector and research programs too often sideline mental health as a privilege in the face of often immediate threats to life and livelihood. To address this problem, this study examined past and ongoing GCRF projects to see whether there were opportunities where mental health impact could have been achieved without compromising a study's main aim and without overburdening a project. Projects funded by the UKRI Global Challenges Research Fund (GCRF) were analyzed. This program was initiated in 2015 to support cutting-edge research that addresses the challenges faced by developing countries. By the end of May 2020, a total of 15,279 projects had been funded, of which only 3% had an explicit mental health focus. A sample of 36 non-mental-health-focused projects was then selected for diversity across research council, challenge portfolio, and world region. Each of these 36 projects was coded by two coders for opportunities to embed mental health impact. To facilitate coding, the literature was inspected for dimensions relevant to LMIC settings. Three main psychological and three main social dimensions were identified. Psychological: promote a positive sense of self; promote positive emotions, safe expression and regulation of challenging emotions, coping strategies, and help-seeking; and facilitate skills development. Social: facilitate community building; preserve sociocultural identity; and support community mobilization. 
Coding agreement was strong on missed opportunities for mental health impact on the three social dimensions: support community mobilization (92%), facilitate community building (83%), and preserve sociocultural identity (70%). Coding agreement was reasonably strong on missed opportunities on the three psychological dimensions: promote positive emotions (67%), facilitate skills development (61%), and promote a positive sense of self (58%). In order of frequency, the agreed perceived opportunities from highest to lowest were: support community mobilization, facilitate community building, facilitate skills development, promote a positive sense of self, promote positive emotions, and preserve sociocultural identity. All projects were considered to have an opportunity to support community mobilization and to facilitate skills development by at least one coder. The findings show that there were opportunities to embed mental health impact in research across the range of development sectors, and they identify which kinds of missed opportunities are most frequent. Hence, mainstreaming mental health has huge potential to tackle the lack of priority and funding it has traditionally attracted. The next steps are to understand the barriers to mainstreaming mental health and to work together to overcome them.
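The per-dimension agreement percentages above can be computed as simple percent agreement: the share of projects on which the two coders gave the same rating (this is the raw proportion, not a chance-corrected statistic such as Cohen's kappa). The ratings below are hypothetical, for illustration only.

```python
def percent_agreement(coder_a, coder_b):
    """Share of items rated identically by two coders, as a percentage."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same items")
    matches = sum(x == y for x, y in zip(coder_a, coder_b))
    return 100 * matches / len(coder_a)

# Hypothetical yes/no codes for 'missed opportunity' on one dimension
a = ["y", "y", "n", "y", "n", "y"]
b = ["y", "n", "n", "y", "n", "y"]
print(f"{percent_agreement(a, b):.0f}%")  # → 83%
```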

Keywords: GCRF, mental health, psychosocial wellbeing, LMIC

Procedia PDF Downloads 149
188 Expanding Access and Deepening Engagement: Building an Open Source Digital Platform for Restoration-Based Stem Education in the Largest Public-School System in the United States

Authors: Lauren B. Birney

Abstract:

This project focuses on the expansion of the existing "Curriculum and Community Enterprise for the Restoration of New York Harbor in New York City Public Schools" (NSF EHR DRL 1440869, NSF EHR DRL 1839656, and NSF EHR DRL 1759006), recognized locally as "Curriculum and Community Enterprise for Restoration Science," or CCERS. CCERS is a comprehensive model of ecological restoration-based STEM education for urban public-school students. Following an accelerated rollout, CCERS is now being implemented in more than 120 Title 1 funded NYC Department of Education middle schools, led by two cohorts of 250 teachers and serving more than 11,000 students in total. Initial results and baseline data suggest that the CCERS model, with the Billion Oyster Project (BOP) as its local restoration ecology-based STEM curriculum, is having profound impacts on students, teachers, school leaders, and the broader community of CCERS participants and stakeholders. Students and teachers report being receptive to the CCERS model and deeply engaged in the initial phase of curriculum development, citizen science data collection, and student-centered, problem-based STEM learning. The BOP CCERS Digital Platform will serve as the central technology hub for all research, data, data analysis, resources, materials, and student data, promoting global interactions between communities. Research conducted included qualitative and quantitative data analysis. We continue to work internally on edits and changes to accommodate a dynamic society. The STEM Collaboratory NYC® at Pace University New York City continues to act as the prime institution for the BOP CCERS project, as it has since the project’s inception in 2014. The project continues to strive to provide opportunities in STEM for underrepresented and underserved populations in New York City. 
The replicable model offers an opportunity for other entities to create this type of collaboration within their own communities and to bring a community together around a notable local issue. Providing opportunities for young students to engage in community initiatives creates a more cohesive set of stakeholders, enables young people to network, and provides additional support, resources, and structure for students who need them. The project has planted more than 47 million oysters across 12 acres and 15 reef sites, with the help of more than 8,000 students and 10,000 volunteers. Additional enhancements and features of the BOP CCERS Digital Platform will continue over the next three years through funding provided by the National Science Foundation (NSF DRL EHR 1759006/1839656, Principal Investigator Dr. Lauren Birney, Professor, Pace University). Early results indicate that the new version of the Platform is gaining traction both nationally and internationally among community stakeholders and constituents. This project continues to focus on new collaborative partners that will support underrepresented students in STEM education. The advanced Digital Platform will allow us to connect with other countries and networks on a larger global scale.

Keywords: STEM education, environmental restoration science, technology, citizen science

Procedia PDF Downloads 63
187 Treatment of Neuronal Defects by Bone Marrow Stem Cells Differentiation to Neuronal Cells Cultured on Gelatin-PLGA Scaffolds Coated with Nano-Particles

Authors: Alireza Shams, Ali Zamanian, Atefehe Shamosi, Farnaz Ghorbani

Abstract:

Introduction: Although the application of new strategies remains a remarkable challenge for the treatment of disabilities due to neuronal defects, progress in nanomedicine and tissue engineering suggests new medical methods. One promising strategy for the reconstruction and regeneration of nervous tissue is replacing lost or damaged cells via specific scaffolds after compressive, ischemic, and traumatic injuries of the central nervous system. Furthermore, the ultrastructure, composition, and arrangement of tissue scaffolds affect cell grafts. We followed the implantation and differentiation of mesenchymal stem cells into neural cells on gelatin poly(lactic-co-glycolic acid) (PLGA) scaffolds coated with iron nanoparticles. The aim of this study was to evaluate the capability of stem cells to differentiate into motor neuron-like cells under topographical cues and morphogenic factors. Methods and Materials: Bone marrow mesenchymal stem cells (BMMSCs) were obtained by primary cell culture of adult rat bone marrow flushed from the femur. BMMSCs were incubated in DMEM/F12 (Gibco) with 15% FBS and 100 U/ml pen/strep as media. BMMSCs were then seeded on Gel/PLGA scaffolds and tissue culture polystyrene (TCP) embedded and incorporated with iron nanoparticles (FeNPs; Fe3O4, Mw = 270.30 g/mol). For neuronal differentiation, 2×10^5 BMMSCs were seeded on Gel/PLGA/FeNPs scaffolds and cultured for 7 days, with 0.5 µmol retinoic acid, 100 µmol ascorbic acid, 10 ng/ml basic fibroblast growth factor (Sigma, USA), 250 μM isobutylmethylxanthine, 100 μM 2-mercaptoethanol, and 0.2% B27 (Invitrogen, USA) added to the media. Proliferation of BMMSCs was assessed using the MTT cell survival assay. The morphology of BMMSCs and scaffolds was investigated by scanning electron microscopy. Expression of neuron-specific markers was studied by immunohistochemistry. 
Data were analyzed by analysis of variance, and statistical significance was determined by Tukey’s test. Results: Our results revealed that differentiation and survival of BMMSCs into motor neuron-like cells were better on the biocompatible, biodegradable Gel/PLGA/FeNPs scaffolds than on Gel/PLGA scaffolds without FeNPs or on TCP. FeNPs increased the mechanical strength of the scaffolds but decreased their absorption capacity. The well-defined, oriented pores created in the scaffolds by the FeNPs may activate differentiation and synchronize cells by acting as mechanoreceptors. The inductive effect of the magnetic FeNPs, together with the one-way flow of the channels in the scaffolds, helps guide the cells and can direct their growth processes. Discussion: This investigation evaluated the biological properties of BMMSCs and the effects of FeNPs under a magnetic field. The in vitro study showed that the Gel/PLGA/FeNPs scaffold provided a suitable structure for the differentiation of motor neuron-like cells and could be a promising candidate for enhancing repair and regeneration of neural defects. Dynamic and static magnetic fields for inducing and organizing cells may provide better results in further experimental studies.
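The analysis-of-variance step can be sketched as a one-way ANOVA F statistic computed from scratch for several treatment groups; the post-hoc pairwise comparisons reported in the abstract would follow once the overall F test is significant. The viability readings below are hypothetical, not the study's data.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)       # mean square between, df = k - 1
    ms_within = ss_within / (n_total - k)   # mean square within, df = N - k
    return ms_between / ms_within

# Hypothetical MTT viability readings for three scaffold conditions
tcp        = [1.0, 2.0, 3.0]
gel_plga   = [2.0, 3.0, 4.0]
with_fenps = [7.0, 8.0, 9.0]
print(one_way_anova_f([tcp, gel_plga, with_fenps]))  # → 31.0
```

In practice a library routine (e.g. scipy's `f_oneway`) and a Tukey HSD implementation would be used rather than this hand-rolled version.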

Keywords: differentiation, mesenchymal stem cells, nanoparticles, neuronal defects, scaffolds

Procedia PDF Downloads 143