Search results for: risk-based PMO development
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16169

1169 Children's Literature with Mathematical Dialogue for Teaching Mathematics at Elementary Level: An Exploratory First Phase about Students’ Difficulties and Teachers’ Needs in Third and Fourth Grade

Authors: Goulet Marie-Pier, Voyer Dominic, Simoneau Victoria

Abstract:

In a previous research project (2011-2019) funded by the Quebec Ministry of Education, an educational approach was developed based on the teaching and learning of place value through children's literature. Subsequently, the effect of this approach on first graders' (6-7 years old) conceptual understanding of the concept was studied. The current project aims to create a series of children's literature to help older elementary school students (8-10 years old) develop a conceptual understanding of complex mathematical concepts taught at their grade level, rather than the more typical procedural understanding. Since no educational materials or children's books exist to achieve our goals, four stories, accompanied by mathematical activities, will be created to support students, and their teachers, in the learning and teaching of mathematical concepts that can be challenging within their mathematics curriculum. The stories will also introduce a mathematical dialogue into the characters' discourse, with the aim of addressing various mathematical foundations about which erroneous statements are common among students and occasionally among teachers. In other words, the stories aim to empower students seeking a real understanding of difficult mathematical concepts, as well as teachers seeking a way to teach these concepts that goes beyond memorizing rules and procedures. In order to choose the concepts that will be part of the stories, it is essential to understand the current landscape regarding the main difficulties experienced by students in third and fourth grade (8-10 years old) and their teachers' needs. From this perspective, the preliminary phase of the study, as discussed in the presentation, will provide critical insight into the mathematical concepts with which the target grade levels struggle the most.
From this data, the research team will select the concepts and develop the stories in the second phase of the study. Two questions precede the implementation of our approach, namely (1) what mathematical concepts are considered the most “difficult to teach” by teachers in the third and fourth grades? and (2) according to teachers, what are the main difficulties encountered by their students in numeracy? Self-administered online questionnaires built with the SimpleSondage software will be sent to all third- and fourth-grade teachers in nine school service centers in the Quebec region, representing approximately 300 schools. The data, collected in the fall of 2022, will be used to compare the difficulties identified by the teachers with those prevalent in the scientific literature. Because it ensures consistency between the proposed approach and the true needs of the educational community, this preliminary phase is essential to the relevance of the rest of the project. It is also an essential first step toward the two ultimate goals of the research project: improving the numeracy learning of elementary school students and contributing to the professional development of elementary school teachers.

Keywords: children’s literature, conceptual understanding, elementary school, learning and teaching, mathematics

Procedia PDF Downloads 88
1168 Translating Creativity to an Educational Context: A Method to Augment the Professional Training of Newly Qualified Secondary School Teachers

Authors: Julianne Mullen-Williams

Abstract:

This paper will provide an overview of a three-year mixed-methods research project that explores whether methods from the supervision of dramatherapy can augment the occupational psychology of newly qualified secondary school teachers. It will consider how creativity and the use of metaphor, as applied in the supervision of dramatherapists, can be translated to an educational context in order to explore the explicit/implicit dynamics between the teacher trainee or newly qualified teacher and the organisation, and so support the super objective in training for teaching: how to ‘be a teacher.’ There is growing evidence that attrition rates among teachers are rising after only five years of service, owing to too many national initiatives, an unmanageable curriculum, and deteriorating student discipline. The fieldwork conducted entailed facilitating a reflective space for newly qualified teachers from all subject areas, using methods from the supervision of dramatherapy, to explore the social and emotional aspects of teaching and learning with the ultimate aim of improving the occupational psychology of teachers. Clinical supervision is a formal process of professional support and learning which permits individual practitioners in frontline service jobs (counsellors, psychologists, dramatherapists, social workers, and nurses) to expand their knowledge and proficiency, take responsibility for their own practice, and improve client protection and safety of care in complex clinical situations. It is deemed integral to continued professional practice, both to safeguard vulnerable people and to reduce practitioner burnout. Dramatherapy supervision incorporates all of the above but utilises creative methods as a tool to gain insight and a deeper understanding of the situation. Creativity and the use of metaphor enable the supervisee to gain an aerial view of the situation they are exploring.
The word metaphor in Greek means to ‘carry across’, indicating a transfer of meaning from one frame of reference to another. The supervision support was incorporated into each group's induction training programme. The first year group attended fortnightly one-hour sessions; the second group received two one-hour sessions every term. The existing literature on the supervision and mentoring of secondary school teacher trainees calls for changes in pre-service teacher education and in the induction period. There is a particular emphasis on the need to include reflective and experiential learning, within training programmes and within the induction period, in order to help teachers manage the interpersonal dynamics and emotional impact within a high-pressure environment.

Keywords: dramatherapy supervision, newly qualified secondary school teachers, professional development, teacher education

Procedia PDF Downloads 387
1167 Exploration into Bio Inspired Computing Based on Spintronic Energy Efficiency Principles and Neuromorphic Speed Pathways

Authors: Anirudh Lahiri

Abstract:

Neuromorphic computing, inspired by the intricate operations of biological neural networks, offers a revolutionary approach to overcoming the limitations of traditional computing architectures. This research proposes the integration of spintronics with neuromorphic systems, aiming to enhance computational performance, scalability, and energy efficiency. Traditional computing systems, based on the Von Neumann architecture, struggle with scalability and efficiency due to the segregation of memory and processing functions. In contrast, the human brain exemplifies high efficiency and adaptability, processing vast amounts of information with minimal energy consumption. This project explores the use of spintronics, which utilizes the electron's spin rather than its charge, to create more energy-efficient computing systems. Spintronic devices, such as magnetic tunnel junctions (MTJs) manipulated through spin-transfer torque (STT) and spin-orbit torque (SOT), offer a promising pathway to reducing power consumption and enhancing the speed of data processing. The integration of these devices within a neuromorphic framework aims to replicate the efficiency and adaptability of biological systems. The research is structured into three phases: an exhaustive literature review to build a theoretical foundation, laboratory experiments to test and optimize the theoretical models, and iterative refinements based on experimental results to finalize the system. The initial phase focuses on understanding the current state of neuromorphic and spintronic technologies. The second phase involves practical experimentation with spintronic devices and the development of neuromorphic systems that mimic synaptic plasticity and other biological processes. The final phase focuses on refining the systems based on feedback from the testing phase and preparing the findings for publication. The expected contributions of this research are twofold. 
Firstly, it aims to significantly reduce the energy consumption of computational systems while maintaining or increasing processing speed, addressing a critical need in the field of computing. Secondly, it seeks to enhance the learning capabilities of neuromorphic systems, allowing them to adapt more dynamically to changing environmental inputs, thus better mimicking the human brain's functionality. The integration of spintronics with neuromorphic computing could revolutionize how computational systems are designed, making them more efficient, faster, and more adaptable. This research aligns with the ongoing pursuit of energy-efficient and scalable computing solutions, marking a significant step forward in the field of computational technology.

Keywords: material science, biological engineering, mechanical engineering, neuromorphic computing, spintronics, energy efficiency, computational scalability, synaptic plasticity

Procedia PDF Downloads 41
1166 Dry Reforming of Methane Using Metal Supported and Core Shell Based Catalyst

Authors: Vinu Viswanath, Lawrence Dsouza, Ugo Ravon

Abstract:

Syngas, typically an intermediary gas product, has a wide range of applications in the production of various chemicals, such as mixed alcohols, hydrogen, ammonia, and Fischer-Tropsch products (methanol, ethanol, aldehydes, other alcohols, etc.). Several technologies are available for syngas production. As an alternative to the conventional processes, an attractive route has been developed that utilizes carbon dioxide and methane in an equimolar ratio to generate syngas with a ratio close to one; this is termed Dry Reforming of Methane (DRM). It also offers the opportunity to utilize the greenhouse gases CO2 and CH4. The dry reforming process is highly endothermic: ΔG becomes negative only if the temperature is higher than 900 K, and practically, the reaction occurs at 1000-1100 K. At this temperature, sintering of the metal particles occurs, which deactivates the catalyst. Moreover, with this strategy, the methane is only partially oxidized, and some coke deposition occurs, also causing catalyst deactivation. The current research work focused on mitigating the main challenges of the dry reforming process, namely coke deposition and metal sintering at high temperature. To achieve these objectives, we employed three different strategies of catalyst development: 1) use of bulk catalysts such as olivine and pyrochlore type materials; 2) use of metal-doped support materials, like spinel and clay type materials; 3) use of a core-shell model catalyst. In this approach, a thin layer (shell) of redox metal oxide is deposited over the MgAl2O4/Al2O3 based support material (core). For the core-shell approach, an active metal is deposited on the surface of the shell. The shell structure formed is a doped metal oxide that can undergo reduction and oxidation (redox) reactions, and the core is an alkaline earth aluminate having a high affinity towards carbon dioxide.
In the case of the metal-doped support catalyst, the enhanced redox properties of the doped CeO2 oxide and the CO2 affinity of the alkaline earth aluminates collectively help to overcome coke formation. For all three of the mentioned strategies, a systematic screening of the metals was carried out to optimize the efficiency of the catalyst. To evaluate their performance, activity and stability tests were carried out under reaction conditions of temperatures ranging from 650 to 850 °C and operating pressures ranging from 1 to 20 bar. The results indicate that the core-shell model catalysts showed high activity and better stability as DRM catalysts under atmospheric as well as high-pressure conditions. In this presentation, we will show the results related to these strategies.

Keywords: carbon dioxide, dry reforming, supports, core shell catalyst

Procedia PDF Downloads 174
1165 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings

Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey

Abstract:

Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in the safety database. Additionally, the majority of such cases are rare, idiosyncratic, highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities, in turn, make for a pharmacovigilance monitoring process that is often tedious and time-consuming. Objective: Develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI from the sponsor’s safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs, based on Hy’s law criteria and the pattern of injury (R-value). These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of these criteria were verified by comparing a manual review of all monthly cases with system-generated monthly listings over six months. Results: On average, over a period of six months, the algorithm accurately identified 92% of DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing the screening time to under 15 minutes, as opposed to the multiple hours consumed by a cognitively laborious manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved to be extremely useful in detecting potential DILI cases, while heightening the vigilance of the drug safety department.
Additionally, the application of this algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). This algorithm also carries the potential for universal application, due to its product-agnostic data and keyword mining features. Plans for the tool include improving it into a fully automated application, thereby completely eliminating a manual screening process.
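The abstract's selection criteria are implemented in SQL and Spotfire; purely as an illustration, the arithmetic behind the two clinical constructs the algorithm relies on, the R-value pattern of injury and Hy's law, can be sketched in Python. The thresholds below follow common clinical convention and are not the sponsor's exact criteria:

```python
def r_value(alt, alt_uln, alp, alp_uln):
    """Pattern-of-injury R-value: ratio of the ALT elevation to the ALP
    elevation, each expressed as a multiple of its upper limit of normal."""
    return (alt / alt_uln) / (alp / alp_uln)

def meets_hys_law(alt, alt_uln, bili, bili_uln, alp, alp_uln):
    """Conventional Hy's law screen: ALT >= 3x ULN and total bilirubin
    >= 2x ULN, without marked cholestasis (ALP < 2x ULN)."""
    return (alt >= 3 * alt_uln
            and bili >= 2 * bili_uln
            and alp < 2 * alp_uln)

def classify_injury(r):
    """Conventional DILI pattern classification by R-value."""
    if r >= 5:
        return "hepatocellular"
    if r <= 2:
        return "cholestatic"
    return "mixed"

# example case: ALT 200 U/L (ULN 40), bilirubin 3.0 mg/dL (ULN 1.2),
# ALP 120 U/L (ULN 120)
print(classify_injury(r_value(200, 40, 120, 120)))   # hepatocellular
print(meets_hys_law(200, 40, 3.0, 1.2, 120, 120))    # True
```

A production screen, as the abstract notes, additionally has to cope with non-standard test names and missing or mis-entered values before any of this arithmetic applies.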

Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing

Procedia PDF Downloads 149
1164 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients

Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho

Abstract:

Multiple Sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness of the information it provides, is the gold-standard exam for diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, nearly 0.5-1.35% per year, far beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for subsequent analysis of the occurrence of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information. This manual analysis is prone to errors and is time-consuming due to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation have been extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. Thus, the purpose of this work was to evaluate the brain volume in MRI scans of MS patients. We used MRI scans, each with 30 slices, of five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational analysis of the images was carried out in two steps: segmentation of the brain and brain volume quantification. The first image processing step was to perform brain extraction by skull stripping from the original image. In the skull stripper for MRI images of the brain, the algorithm registers a grayscale atlas image to the grayscale patient image. The associated brain mask is propagated using the registration transformation. Then this mask is eroded and used for a refined brain extraction based on level sets (the edge of the brain-skull border, with dedicated expansion, curvature, and advection terms).
In the second step, the brain volume was quantified by counting the voxels belonging to the segmentation mask and converting the count to cubic centimeters (cc). We observed an average brain volume of 1469.5 cc. We conclude that the automatic method applied in this work can be used for the brain extraction process and brain volume quantification in MRI. The development and use of computer programs can help health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future work, we expect to implement more automated methods for the assessment of cerebral atrophy and the quantification of brain lesions, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5).
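The voxel-counting step described above has a simple closed form: volume equals the number of mask voxels times the physical voxel volume. A minimal sketch, assuming a binary mask and voxel dimensions read from the image header (this is an illustration, not the authors' actual pipeline):

```python
import numpy as np

def brain_volume_cc(mask, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Volume enclosed by a binary brain mask, in cubic centimeters.

    mask          : 3-D boolean/int array (nonzero = brain voxel)
    voxel_size_mm : physical voxel dimensions, typically from the MRI header
    """
    voxel_mm3 = float(np.prod(voxel_size_mm))
    n_voxels = int(np.count_nonzero(mask))
    return n_voxels * voxel_mm3 / 1000.0  # 1 cc = 1000 mm^3

# toy example: a 10x10x10 block of "brain" voxels at 1 mm isotropic resolution
mask = np.zeros((30, 30, 30), dtype=np.uint8)
mask[10:20, 10:20, 10:20] = 1
print(brain_volume_cc(mask))  # 1.0 (cc)
```

With anisotropic voxels (common for 30-slice acquisitions), only the voxel_size_mm argument changes; the counting logic is identical.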

Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper

Procedia PDF Downloads 145
1163 Efficiency of Different Types of Addition onto the Hydration Kinetics of Portland Cement

Authors: Marine Regnier, Pascal Bost, Matthieu Horgnies

Abstract:

Some of the problems to be solved by the concrete industry are linked to the use of low-reactivity cement, the hardening of concrete in cold weather, and the manufacture of precast concrete without a costly heating step. These applications require accelerating the hydration kinetics in order to decrease the setting time and to obtain significant compressive strengths as soon as possible. The mechanisms enhancing the hydration kinetics of alite or Portland cement (e.g., the creation of nucleation sites) have already been studied in the literature (e.g., by using distinct additions such as titanium dioxide nanoparticles, calcium carbonate fillers, water-soluble polymers, C-S-H, etc.). However, the goal of this study was to establish a clear ranking of the efficiency of several types of additions by using a robust and reproducible methodology based on isothermal calorimetry (performed at 20°C). The cement was a CEM I 52.5N PM-ES (Blaine fineness of 455 m²/kg). To ensure the reproducibility of the experiments and avoid any decrease of reactivity before use, the cement was stored in waterproof, sealed bags to avoid any contact with moisture and carbon dioxide. The experiments were performed on Portland cement pastes with a water-to-cement ratio of 0.45, incorporating different compounds (industrially available or laboratory-synthesized) that were selected according to their main composition and their specific surface area (SSA, calculated using the Brunauer-Emmett-Teller (BET) model and nitrogen adsorption isotherms performed at 77 K). The intrinsic effects of (i) dry powders (e.g., fumed silica, activated charcoal, nano-precipitates of calcium carbonate, afwillite germs, nanoparticles of iron and iron oxides, etc.) and (ii) aqueous solutions (e.g., containing calcium chloride, hydrated Portland cement, or Master X-SEED 100) were investigated.
The influence of the amount of addition, calculated relative to the dry extract of each addition compared to cement (while keeping the same water-to-cement ratio), was also studied. The results demonstrated that the X-SEED®, the hydrated calcium nitrate, and the calcium chloride (and, to a lesser extent, a solution of hydrated Portland cement) were able to accelerate the hydration kinetics of Portland cement, even at low concentration (e.g., 1 wt.% of dry extract compared to cement). At higher rates of addition, the fumed silica, the precipitated calcium carbonate, and the titanium dioxide can also accelerate the hydration. In the case of the nano-precipitates of calcium carbonate, a correlation was established between the SSA and the accelerating effect. On the contrary, the nanoparticles of iron or iron oxides, the activated charcoal, and the dried crystallised hydrates did not show any accelerating effect. Future experiments are scheduled to establish the ranking of these additions, in terms of accelerating effect, using low-reactivity cements and other water-to-cement ratios.
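Ranking additions by isothermal calorimetry typically comes down to comparing heat-flow curves, e.g. via the cumulative heat released over the first day or two of hydration. A hedged numpy sketch of such a comparison, with synthetic Gaussian-shaped curves standing in for real calorimetry data (the shapes and magnitudes below are illustrative only):

```python
import numpy as np

def cumulative_heat(time_h, heat_flow_mw_per_g):
    """Integrate an isothermal-calorimetry heat-flow curve (mW/g vs. hours)
    by the trapezoidal rule; 1 mW*h/g = 3.6 J/g."""
    t = np.asarray(time_h, dtype=float)
    q = np.asarray(heat_flow_mw_per_g, dtype=float)
    return float(np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(t))) * 3.6

# synthetic curves over 48 h: the "accelerated" paste peaks earlier and higher
t = np.linspace(0.0, 48.0, 481)
reference   = 2.0 * np.exp(-((t - 10.0) / 4.0) ** 2)  # plain paste
accelerated = 2.6 * np.exp(-((t - 7.0) / 4.0) ** 2)   # paste with accelerator
print(cumulative_heat(t, accelerated) > cumulative_heat(t, reference))  # True
```

In practice, the time of the main hydration peak and the cumulative heat at fixed ages (e.g. 24 h) are the quantities compared when ranking accelerators.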

Keywords: acceleration, hydration kinetics, isothermal calorimetry, Portland cement

Procedia PDF Downloads 255
1162 Flood Vulnerability Zoning for Blue Nile Basin Using Geospatial Techniques

Authors: Melese Wondatir

Abstract:

Flooding ranks among the most destructive natural disasters, impacting millions of individuals globally and resulting in substantial economic, social, and environmental repercussions. This study's objective was to create a comprehensive model that assesses the Nile River basin's susceptibility to flood damage and improves existing flood risk management strategies. Authorities responsible for enacting policies and implementing measures may benefit from this research to acquire essential information about floods, including their scope and the susceptible areas. The identification of severe flood damage locations and efficient mitigation techniques was made possible by the use of geospatial data. Slope, elevation, distance from the river, drainage density, topographic wetness index, rainfall intensity, distance from roads, NDVI, soil type, and land use type were all used to determine vulnerability to flood damage. The Analytic Hierarchy Process (AHP) and geospatial approaches were used to rank the factors according to their significance in predicting flood damage risk. The analysis finds that the most important parameters determining the region's vulnerability are distance from the river, topographic wetness index, rainfall, and elevation, respectively. The consistency ratio (CR) value obtained in this case is 0.000866 (<0.1), which signifies the acceptance of the derived weights. Furthermore, 10.84 m², 83331.14 m², 476987.15 m², 24247.29 m², and 15.83 m² of the region show varying degrees of vulnerability to flooding: very low, low, medium, high, and very high, respectively. Due to their close proximity to the river, the northwestern regions of the Nile River basin, especially those close to Sudanese cities such as Khartoum, are more vulnerable to flood damage, according to the research findings. Furthermore, the AUC-ROC curve demonstrates that the classified vulnerability map achieves an accuracy rate of 91.0% based on 117 sample points.
By implementing strategies that account for the topographic wetness index, rainfall patterns, elevation fluctuations, and distance from the river, vulnerable settlements in the area can be protected, and the impact of future flood occurrences can be greatly reduced. Furthermore, the research findings highlight the urgent requirement for infrastructure development and effective flood management strategies in the northern and western regions of the Nile River basin, particularly in proximity to major towns such as Khartoum. Overall, the study recommends prioritizing high-risk locations and developing a complete flood risk management plan based on the vulnerability map.
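The AHP weighting and consistency check reported above (CR = 0.000866 < 0.1) follow a standard computation: the priority weights are the principal eigenvector of the pairwise-comparison matrix, and CR = CI/RI with CI = (λmax − n)/(n − 1) and RI taken from Saaty's random-index table. A sketch using an illustrative 3-criterion matrix, not the study's actual ten-factor judgments:

```python
import numpy as np

# Saaty's random index for matrices of order 1..10
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights_and_cr(pairwise):
    """Priority weights (normalized principal eigenvector) and consistency
    ratio CR = CI / RI for an AHP pairwise-comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                      # normalize weights to sum to 1
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1) if n > 2 else 0.0
    cr = ci / RI[n] if RI[n] > 0 else 0.0
    return w, cr

# example: 3 criteria with nearly consistent judgments
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights_and_cr(A)
print(w.round(3))
print(cr)  # well below the 0.1 acceptance threshold
```

With CR below 0.1 the judgments are conventionally accepted; otherwise the pairwise comparisons are revisited before the weights are used in the overlay.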

Keywords: analytic hierarchy process, Blue Nile Basin, geospatial techniques, flood vulnerability, multi-criteria decision making

Procedia PDF Downloads 66
1161 X-Ray Detector Technology Optimization in CT Imaging

Authors: Aziz Ikhlef

Abstract:

Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of routing runs and connections required by front-illuminated diodes. In back-illuminated diodes, the electronic noise is already improved because of the reduction of the load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging across a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples.
In addition, this paper will present an overview of detector technologies and image-chain improvements that have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners, both in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation, to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed, and crosstalk, to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise, and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 269
1160 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) for future events, given source characteristics, source-to-site distance, and local site conditions. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as the statistical method in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.)
and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
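A minimal sketch of the Random Forest ground-motion modeling described above, trained on a synthetic dataset generated from a toy attenuation relation (the functional form, coefficients, and feature set below are illustrative, not the authors' data or model):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# synthetic records: magnitude M, source-to-site distance R (km), Vs30 (m/s)
n = 2000
M = rng.uniform(4.0, 7.5, n)
R = rng.uniform(5.0, 200.0, n)
vs30 = rng.uniform(200.0, 800.0, n)
# toy attenuation relation for ln(PGA), with aleatory noise
ln_pga = (1.2 * M - 1.6 * np.log(R + 10.0)
          - 0.3 * np.log(vs30 / 500.0)
          + rng.normal(0.0, 0.3, n))

# fit the forest directly on (M, R, Vs30) -> ln(PGA); no functional form needed
X = np.column_stack([M, R, vs30])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ln_pga)

# the fitted model should reproduce distance attenuation without being told it
near = model.predict([[6.5, 20.0, 400.0]])[0]
far = model.predict([[6.5, 150.0, 400.0]])[0]
print(near > far)  # True: predicted shaking decays with distance
```

This is the sense in which such algorithms "satisfy physically sound characteristics" when sufficient data are available: magnitude scaling and distance dependency emerge from the data rather than from pre-defined coefficients.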

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 103
1159 Empowering Learners: From Augmented Reality to Shared Leadership

Authors: Vilma Zydziunaite, Monika Kelpsiene

Abstract:

In early childhood and preschool education, play has an important role in learning and cognitive processes. In the context of a changing world, personal autonomy and the use of technology are becoming increasingly important for the development of a wide range of learner competencies. By integrating technology into learning environments, the educational reality is changed, promoting novel learning experiences for children through play-based activities. Alongside this, teachers are challenged to develop encouragement and motivation strategies that empower children to act independently. The aim of the study was to reveal the changes in teachers' roles and experiences when applying AR technology to enrich the learning process. A quantitative research approach was used to conduct the study. The data were collected through an electronic questionnaire. Participants: 319 teachers of 5-6-year-old children using AR technology tools in their educational process. Methods of data analysis: Cronbach's alpha, descriptive statistical analysis, normal distribution analysis, correlation analysis, and regression analysis (SPSS software). Results. The results of the study show a significant relationship between children's learning and the educational process modeled by the teacher. The strongest predictor of child learning was found to be the role of the educator. Other predictors, such as pedagogical strategies, the concept of AR technology, and areas of children's education, showed no significant relationship with child learning. The role of the educator was found to be a strong determinant of the child's learning process. Conclusions. The greatest potential for integrating AR technology into the teaching-learning process is revealed in collaborative learning.
Teachers identified that when integrating AR technology into the educational process, they encourage children to learn from each other, develop problem-solving skills, and create inclusive learning contexts. A significant relationship also emerged between the changing role of the teacher and the child's learning style, as well as the child's aspiration for personal leadership and responsibility for their own learning. Teachers identified the following key roles: observer of the learning process, proactive moderator, and creator of the educational context. All these roles enable the learner to become an autonomous and active participant in the learning process. This helps to understand and explain why it is crucial to empower the learner to experiment, explore, discover, actively create, and engage in collaborative learning during the design and implementation of educational content, and why it is equally crucial for teachers to integrate AR technologies and apply the principles of shared leadership. No statistically significant relationship was found between teachers' understanding of the definition of AR technology and their choice of role in the learning process. However, teachers reported that their understanding of the definition of AR technology influences their choice of role, which in turn has an impact on children's learning.
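The regression step described in the abstract can be sketched with ordinary least squares. The data below are simulated and purely hypothetical (not the study's questionnaire responses), constructed so that only the educator-role predictor carries a true effect, mirroring the reported result:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 319  # sample size matching the number of respondents in the study

# Hypothetical standardized predictor scores (illustration only):
# educator role, pedagogical strategies, AR-concept understanding.
role = rng.normal(size=n)
strategies = rng.normal(size=n)
ar_concept = rng.normal(size=n)

# Simulate an outcome in which only the educator's role matters.
learning = 0.6 * role + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), role, strategies, ar_concept])
beta, *_ = np.linalg.lstsq(X, learning, rcond=None)
print(dict(zip(["intercept", "role", "strategies", "ar_concept"], beta.round(2))))
```

Fitting recovers a coefficient near 0.6 for the role predictor and coefficients near zero for the others, the pattern the abstract reports.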

Keywords: teacher, learner, augmented reality, collaboration, shared leadership, preschool education

Procedia PDF Downloads 37
1158 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods was the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing these diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger data sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15135 total images, 8491 of these are images of abnormal cells, and 5398 images are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger data sets. The proposed diagnostic system has the function of detecting and classifying leukemia. Different from other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. 
Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture that employs transfer learning techniques to extract features from each input image. In our approach, features fused from specific abstraction layers serve as auxiliary features and lead to a further improvement of the classification accuracy. Features extracted from the lower levels are combined into higher-dimension feature maps to improve the discriminative capability of intermediate features and to mitigate the problem of vanishing or exploding network gradients. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model's performance and their pros and cons will be presented at the conference.
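The late-fusion idea, concatenating feature vectors from two backbones before classification, can be sketched without a deep learning framework. The random-projection "backbones" below are stand-ins introduced purely for illustration; in the actual system they would be pretrained VGG19 and ResNet50 feature extractors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for pretrained backbones: each maps a flattened image to a
# feature vector, as VGG19 and ResNet50 would after global pooling.
W_vgg = rng.normal(size=(64, 512))   # hypothetical 512-dim "VGG19" features
W_res = rng.normal(size=(64, 2048))  # hypothetical 2048-dim "ResNet50" features

def extract_fused_features(images):
    flat = images.reshape(len(images), -1)          # (batch, 64)
    f_vgg = np.maximum(flat @ W_vgg, 0.0)           # (batch, 512)
    f_res = np.maximum(flat @ W_res, 0.0)           # (batch, 2048)
    return np.concatenate([f_vgg, f_res], axis=1)   # fused: (batch, 2560)

batch = rng.normal(size=(4, 8, 8))  # four toy 8x8 "blood-smear" images
fused = extract_fused_features(batch)
print(fused.shape)  # (4, 2560)
```

A classifier head would then be trained on the fused vectors; the fusion step itself is only a concatenation along the feature axis.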

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 186
1157 From Clients to Colleagues: Supporting the Professional Development of Survivor Social Work Students

Authors: Stephanie Jo Marchese

Abstract:

This oral presentation is a reflective piece regarding current social work teaching methods that value and devalue the lived experiences of survivor students. This presentation grounds the term 'survivor' in feminist frameworks. A survivor-defined approach to feminist advocacy assumes an individual's agency, considers each case and its needs independently of generalizations, and provides resources and support to empower victims. Feminist ideologies offer fertile ground for updating and strengthening the rapport that schools of social work build with these students. Survivor-based frameworks are rooted in nuanced understandings of intersectional realities, staunchly combat both conscious and unconscious deficit lenses wielded against victims, elevate lived experiences to the realm of experiential expertise, and offer alternatives to traditional power structures and knowledge exchanges. Actively importing a survivor framework into the methodology of social work teaching breaks down barriers that many survivor students, this author included, have faced in institutional settings. The profession of social work is at an important crux of change, both in the United States and globally. The United States is currently undergoing a radical change in its citizenry, and outlier communities have taken to the streets again in opposition to their othered-ness. New waves of students are entering this field, emboldened by their survival of personal and systemic oppressions and heavily influenced by third-wave feminism, critical race theory, and queer theory, among other post-structuralist ideologies. Traditional models of sociological and psychological study are actively being challenged. The profession of social work was not founded on the diagnosis of disorders but rather on grassroots-level activism that heralded and demanded resources for oppressed communities. 
Institutional and classroom acceptance and celebration of survivor narratives can catapult the resurgence of these values needed in the profession's service-delivery models and put social workers back in the driver's seat of social change (a combined advocacy and policy perspective), moving away from outsider-based intervention models. Survivor students should be viewed as agents of change, not solely as former victims and clients. The ideas of this presentation proposal are supported by various qualitative interviews, as well as by reviews of 'best practices' in the field of education that incorporate feminist methods of inclusion and empowerment. Curriculum and policy recommendations are also offered.

Keywords: deficit lens bias, empowerment theory, feminist praxis, inclusive teaching models, strengths-based approaches, social work teaching methods

Procedia PDF Downloads 288
1156 Fuel Cells Not Only for Cars: Technological Development in Railways

Authors: Marita Pigłowska, Beata Kurc, Paweł Daszkiewicz

Abstract:

Railway vehicles are divided into two groups: traction (powered) vehicles and wagons. The traction vehicles include locomotives (line and shunting), railcars (sometimes referred to as railbuses), and multiple units (electric and diesel), consisting of several or a dozen carriages. In vehicles with diesel traction, fuel energy (petrol, diesel, or compressed gas) is converted into mechanical energy directly in the internal combustion engine or via electricity. In the latter case, the combustion engine generator produces electricity that is then used to drive the vehicle (diesel-electric drive or electric transmission). In Poland, such a solution dominates both in heavy linear and shunting locomotives. The classic diesel drive is available for the lightest shunting locomotives, railcars, and passenger diesel multiple units. Vehicles with electric traction do not have their own source of energy - they use pantographs to obtain electricity from the traction network. To determine the competitiveness of the hydrogen propulsion system, it is essential to understand how it works. The basic elements of the drive system of a railway vehicle that uses hydrogen as a source of traction force are fuel cells, batteries, fuel tanks, traction motors, and main and auxiliary converters. The compressed hydrogen is stored in tanks usually located on the roof of the vehicle and is replenished using specialized infrastructure while the vehicle is stationary. Hydrogen is supplied to the fuel cell, where it is oxidized. The products of this chemical reaction are electricity and water (in two forms: liquid and vapor). The electricity is stored in batteries (so far, lithium-ion batteries are used) and is then used to drive the traction motors and supply the onboard equipment. 
The current generated by the fuel cell passes through the main converter, whose task is to adjust it to the values required by the consumers, i.e., the batteries and the traction motor. This work will attempt to construct a fuel cell with unique electrodes, as part of a research trend that connects industry with science. The first goal is to obtain hydrogen on a large scale in tube furnaces, to thoroughly analyze the obtained structures (IR), and to apply the method in fuel cells. The second goal is to create a low-energy storage and distribution station for hydrogen and electric vehicles. The scope of the research includes obtaining a carbon variety and oxide systems on a large scale using a tubular furnace and then supplying vehicles. Acknowledgments: This work is supported by the Polish Ministry of Science and Education, project "The best of the best! 4.0", number 0911/MNSW/4968 (M.P.) and grant 0911/SBAD/2102 (B.K.).

Keywords: railway, hydrogen, fuel cells, hybrid vehicles

Procedia PDF Downloads 186
1155 Teachers’ Instructional Decisions When Teaching Geometric Transformations

Authors: Lisa Kasmer

Abstract:

Teachers' instructional decisions shape the structure and content of mathematics lessons and influence the mathematics that students are given the opportunity to learn. Therefore, it is important to better understand how teachers make instructional decisions and thus find new ways to help practicing and future teachers give their students a more effective and robust learning experience. Understanding the relationship between teachers' instructional decisions and their goals, resources, and orientations (beliefs) is important given the heightened focus on geometric transformations in the middle school mathematics curriculum. This work is significant because current and future teachers need more effective ways to teach geometry to their students. The following research questions frame this study: (1) As middle school mathematics teachers plan and enact instruction related to teaching transformations, what thinking processes do they engage in to make decisions about teaching transformations with or without a coordinate system, and (2) How do the goals, resources, and orientations of these teachers impact their instructional decisions, and what do these decisions reveal about their understanding of teaching transformations? Teachers and students alike struggle with understanding transformations; many teachers skip or hurriedly teach transformations at the end of the school year. However, transformations are an important mathematical topic, as this topic supports students' understanding of geometric and spatial reasoning. Geometric transformations are a foundational concept in mathematics, not only for understanding congruence and similarity but also for proofs, algebraic functions, and calculus. Geometric transformations also underpin the secondary mathematics curriculum, as features of transformations transfer to other areas of mathematics. 
Teachers' instructional decisions, in terms of the goals, orientations, and resources that support them, were analyzed using open coding. Open coding is recognized as an initial step in qualitative analysis, where comparisons are made and preliminary categories are considered. Initial codes and categories from current research on teachers' thinking processes related to the decisions they make while planning and reflecting on lessons were also noted. Emerging ideas and additional themes common across teachers were compared and analyzed while seeking patterns. Finally, attributes of teachers' goals, orientations, and resources were identified in order to begin to build a picture of the reasoning behind their instructional decisions. These categories became the basis for the organization and conceptualization of the data. Preliminary results suggest that teachers often rely on their own orientations about teaching geometric transformations. These beliefs are underpinned by the teachers' own mathematical knowledge related to teaching transformations. When teachers do not have a robust understanding of transformations, they are limited by this lack of knowledge, which impacts students' opportunities to learn and thus disadvantages students' understanding of transformations. Teachers' goals are also limited by their paucity of knowledge regarding transformations, as these goals do not fully represent the range of comprehension a teacher needs to teach this topic well.

Keywords: coordinate plane, geometric transformations, instructional decisions, middle school mathematics

Procedia PDF Downloads 87
1154 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. First, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to insufficient resolution of the SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. Employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated. 
The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
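The approximate-deconvolution idea behind such models can be illustrated on a toy 1D problem. The sketch below is an illustration under stated assumptions, not the authors' solver: it applies a Gaussian filter in Fourier space on a periodic domain and recovers the unfiltered field with van Cittert iterations, i.e., a truncated expansion of the inverse filter:

```python
import numpy as np

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(5 * x)          # "true" unfiltered field

# Gaussian filter applied in Fourier space (periodic domain).
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # integer wavenumbers
delta = 8 * (2 * np.pi / n)                  # filter width: 8 grid spacings
G = np.exp(-(k * delta) ** 2 / 24)           # Gaussian transfer function

def apply_filter(v):
    return np.real(np.fft.ifft(G * np.fft.fft(v)))

u_bar = apply_filter(u)                      # filtered (resolved) field

# Van Cittert iterations: u_{m+1} = u_m + (u_bar - G u_m), which sums the
# geometric series for the inverse filter, mode by mode.
u_rec = u_bar.copy()
for _ in range(5):
    u_rec = u_rec + (u_bar - apply_filter(u_rec))

err0 = np.linalg.norm(u_bar - u)             # error before deconvolution
err5 = np.linalg.norm(u_rec - u)             # error after 5 iterations
```

Because the Gaussian transfer function satisfies 0 < G <= 1, each iteration shrinks the per-mode residual by a factor (1 - G), so the reconstruction error drops rapidly below the filtering error.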

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 75
1153 Beyond Sexual Objectification: Moderation Analysis of Trauma and Overexcitability Dynamics in Women

Authors: Ritika Chaturvedi

Abstract:

Introduction: Sexual objectification, characterized by the reduction of an individual to a mere object of sexual desire, remains a pervasive societal issue with profound repercussions for individual well-being. Such experiences, often rooted in systemic and cultural norms, have long-lasting implications for mental and emotional health. This study aims to explore the intricate relationship between experiences of sexual objectification and insidious trauma, further investigating the potential moderating effects of overexcitability as proposed by Dabrowski's theory of positive disintegration. Methodology: The research involved a comprehensive cohort of 204 women, spanning ages from 18 to 65 years. Participants completed self-administered questionnaires designed to capture their experiences of sexual objectification. Additionally, the questionnaire assessed symptoms indicative of insidious trauma and explored overexcitability across five distinct domains: emotional, intellectual, psychomotor, sensual, and imaginational. Employing advanced statistical techniques, including multiple regression and moderation analysis, the study sought to decipher the intricate interplay among these variables. Findings: The study's results revealed a compelling positive correlation between experiences of sexual objectification and the onset of symptoms indicative of insidious trauma. This correlation underscores the profound and detrimental effects of sexual objectification on an individual's psychological well-being. Interestingly, the moderation analyses introduced a nuanced understanding, highlighting the differential roles of the various overexcitabilities. Specifically, emotional, intellectual, and sensual overexcitability were found to exacerbate trauma symptomatology. In contrast, psychomotor overexcitability emerged as a protective factor, demonstrating a mitigating influence on the relationship between sexual objectification and trauma. 
Implications: The study's findings hold significant implications for a diverse array of stakeholders, encompassing mental health practitioners, educators, policymakers, and advocacy groups. The identified moderating effects of overexcitability emphasize the need for tailored interventions that consider individual differences in coping and resilience mechanisms. By recognizing the pivotal role of overexcitability in modulating the traumatic consequences of sexual objectification, this research advocates for the development of more nuanced and targeted support frameworks. Moreover, the study underscores the importance of continued research endeavors to unravel the intricate mechanisms and dynamics underpinning these relationships. Such endeavors are crucial for fostering the evolution of informed, evidence-based interventions and strategies aimed at mitigating the adverse effects of sexual objectification and promoting holistic well-being.

Keywords: sexual objectification, insidious trauma, emotional overexcitability, intellectual overexcitability, sensual overexcitability, psychomotor overexcitability, imaginational overexcitability

Procedia PDF Downloads 53
1152 Simplified Modeling of Post-Soil Interaction for Roadside Safety Barriers

Authors: Charly Julien Nyobe, Eric Jacquelin, Denis Brizard, Alexy Mercier

Abstract:

The performance of roadside safety barriers depends largely on the dynamic interactions between post and soil. These interactions play a key role in the response of barriers to crash testing. In the literature, soil-post interaction is modeled in crash test simulations using three approaches. Many researchers have initially used the finite element approach, in which the post is embedded in a continuum soil modelled by solid finite elements. This method represents a more comprehensive and detailed approach, employing a mesh-based continuum to model the soil's behavior and its interaction with the post. Although this method takes all soil properties into account, it is very costly in terms of simulation time. In the second approach, all the points of the post located at a predefined depth are fixed. Although this approach reduces CPU computing time, it overestimates soil-post stiffness. The third approach involves modeling the post as a beam supported by a set of nonlinear springs in the horizontal directions; for support in the vertical direction, the post is constrained at a node at ground level. This approach is less costly, but the literature does not provide a simple procedure to determine the constitutive law of the springs. The aim of this study is to propose a simple and low-cost procedure to obtain the constitutive law of the nonlinear springs that model the soil-post interaction. To achieve this objective, we first present a procedure to obtain the constitutive law of the nonlinear springs from the simulation of a soil compression test. The test consists of compressing the soil contained in a tank with a rigid solid, up to a vertical displacement of 200 mm. The resultant force exerted by the soil on the rigid solid and the solid's vertical displacement are extracted, and a force-displacement curve is determined. The proposed procedure for replacing the soil with springs must be tested against a reference model. 
The reference model consists of a wooden post embedded in the ground and struck by an impactor. Two simplified models with springs are studied. In the first model, called the Kh-Kv model, the springs are attached to the post in the horizontal and vertical directions. The second, the Kh model, is the one described in the literature. The two simplified models are compared with the reference model according to several criteria: the vertical and horizontal displacement of a node located at the top of the post, the displacement of the post's center of rotation, and the impactor velocity. The results given by both simplified models are very close to the reference model results. Notably, the Kh-Kv model is slightly better than the Kh model; further, the former is more interesting than the latter, as it involves fewer arbitrary conditions. The simplified models also reduce the simulation time by a factor of 4. The Kh-Kv model can therefore be used as a reliable tool to represent the soil-post interaction in future research and development of road safety barriers.
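One simple way to turn the compression-test curve into a spring constitutive law is piecewise-linear interpolation with no tensile resistance. The force-displacement samples below are hypothetical (not the study's data), chosen only to illustrate a stiffening soil response:

```python
import numpy as np

# Hypothetical samples from the simulated soil compression test
# (displacement in mm, resultant force in kN), up to the 200 mm stroke.
disp = np.array([0.0, 25.0, 50.0, 100.0, 150.0, 200.0])
force = np.array([0.0, 3.0, 7.0, 18.0, 33.0, 52.0])

def spring_force(u_mm):
    """Piecewise-linear nonlinear-spring law fitted to the test curve.
    Tension (u < 0) carries no load, mimicking soil-post separation;
    beyond the last sample the force is held at the final value."""
    return 0.0 if u_mm < 0 else float(np.interp(u_mm, disp, force))
```

For example, spring_force(75.0) returns 12.5 kN, halfway between the 50 mm and 100 mm samples; a crash solver would evaluate this law at each spring along the embedded length of the post.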

Keywords: crash tests, nonlinear springs, soil-post interaction modeling, constitutive law

Procedia PDF Downloads 28
1151 A Holistic View of Microbial Community Dynamics during a Toxic Harmful Algal Bloom

Authors: Shi-Bo Feng, Sheng-Jie Zhang, Jin Zhou

Abstract:

The relationship between microbial diversity and algal blooms has received considerable attention for decades. Microbes undoubtedly affect annual bloom events and impact the physiology of both partners, as well as shape ecosystem diversity. However, knowledge about the interactions and network correlations among the broader spectrum of microbes that drive the dynamics of a complete bloom cycle is limited. In this study, pyrosequencing and network approaches simultaneously assessed the association patterns among bacteria, archaea, and microeukaryotes in surface water and sediments in response to a natural dinoflagellate (Alexandrium sp.) bloom. In surface water, among the bacterial community, Gamma-Proteobacteria and Bacteroidetes dominated in the initial bloom stage, while Alpha-Proteobacteria, Cyanobacteria, and Actinobacteria became the most abundant taxa during the post-bloom stage. The archaeal community clustered predominantly with methanogenic members in the early pre-bloom period, while the majority of species identified in the late-bloom stage were ammonia-oxidizing archaea and Halobacteriales. Among the eukaryotes, the dinoflagellate (Alexandrium sp.) dominated in the onset stage, whereas multiple species (such as microzooplankton, diatoms, green algae, and rotifers) coexisted in the bloom collapse stage. In the sediments, microbial biomass and species richness were much higher than in the water body, and only Flavobacteriales and Rhodobacterales showed a slight response to bloom stages. Unlike the bacteria, the archaeal and eukaryotic structure in the sediment showed only small fluctuations. The network analyses of inter-specific associations show that bacteria (Alteromonadaceae, Oceanospirillaceae, Cryomorphaceae, and Piscirickettsiaceae) and some microeukaryotes (Mediophyceae, Mamiellophyceae, Dictyochophyceae, and Trebouxiophyceae) have a stronger impact on the structuring of phytoplankton communities than the archaea. 
The changes in population were also significantly shaped by water temperature and substrate availability (N and P resources). The results suggest that clades are specialized at different time periods, and that the pre-bloom succession was mainly bottom-up controlled, while the late-bloom period was controlled by top-down patterns. Additionally, the phytoplankton and prokaryotic communities correlated better with each other, which indicates that interactions among microorganisms are critical in controlling plankton dynamics and fates. Our results supply a wider view (across temporal and spatial scales) for understanding microbial ecological responses and their network associations during algal blooms. They offer a potential multidisciplinary explanation for algal-microbe interactions and help us move beyond the traditional view of the patterns of algal bloom initiation, development, decline, and biogeochemistry.
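Co-occurrence networks of the kind used here are commonly built by thresholding rank correlations between taxon abundances across samples. A minimal sketch with simulated, hypothetical abundances (not the study's sequencing data):

```python
import numpy as np

rng = np.random.default_rng(3)
samples, taxa = 30, 6

# Hypothetical abundance table (time points x taxa): the first three taxa
# track a shared environmental driver; the last three vary independently.
driver = rng.normal(size=samples)
abund = np.column_stack(
    [driver + 0.3 * rng.normal(size=samples) for _ in range(3)]
    + [rng.normal(size=samples) for _ in range(3)]
)

# Spearman correlation = Pearson correlation of ranks (no ties here).
ranks = abund.argsort(axis=0).argsort(axis=0)
corr = np.corrcoef(ranks.T)

# Keep only strong associations as network edges.
edges = [(i, j) for i in range(taxa) for j in range(i + 1, taxa)
         if abs(corr[i, j]) >= 0.6]
```

The three driver-linked taxa end up fully connected, while the independent taxa rarely pass the threshold; real pipelines additionally correct for multiple testing and compositionality.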

Keywords: microbial community, harmful algal bloom, ecological process, network

Procedia PDF Downloads 113
1150 Effect of Vitrification on Embryos Euploidy Obtained from Thawed Oocytes

Authors: Natalia Buderatskaya, Igor Ilyin, Julia Gontar, Sergey Lavrynenko, Olga Parnitskaya, Ekaterina Ilyina, Eduard Kapustin, Yana Lakhno

Abstract:

Introduction: It is known that cryopreservation of oocytes has peculiar features due to the complex structure of the oocyte. One of the most important features is that mature oocytes contain the meiotic division spindle, which is very sensitive to even the slightest variation in temperature. Thus, the main objective of this study is to analyse the euploid embryos obtained from thawed oocytes in comparison with the preimplantation genetic screening (PGS) data of fresh embryo cycles. Material and Methods: The study was conducted at 'Medical Centre IGR' from January to July 2016. Data were analysed for 908 donor oocytes obtained in 67 cycles of assisted reproductive technologies (ART), of which 693 oocytes were used in 51 'fresh' cycles (group A) and 215 oocytes in 16 ART programs with vitrified female gametes (group B). The average donor ages in the groups were 27.3±2.9 and 27.8±6.6 years, respectively. Stimulation of superovulation was conducted in the standard way. Vitrification was performed 1-2 hours after transvaginal puncture, and thawing of oocytes was carried out in accordance with the standard Cryotech (Japan) protocol. ICSI was performed 4-5 hours after transvaginal follicle puncture for fresh oocytes, or after thawing for vitrified female gametes. For PGS, an embryonic biopsy was done on the third or the fifth day after fertilization. Diagnostic procedures were performed using fluorescence in situ hybridization, studying chromosomes 13, 16, 18, 21, 22, X, and Y. Only morphologically high-quality blastocysts, graded according to the Gardner criteria, were used for transfer. The statistical hypotheses were tested using the t and χ² criteria at significance levels of p<0.05, p<0.01, and p<0.001. Results: The mean number of mature oocytes per cycle was 13.58±6.65 in group A and 13.44±6.68 in group B. 
The survival of oocytes after thawing totaled 95.3% (n=205), indicating a highly effective vitrification procedure. The proportion of zygotes was 91.1% (n=631) in group A and 80.5% (n=165) in group B, a statistically significant difference (p<0.001) explained by the elimination of non-viable oocytes after vitrification. This is confirmed by the fact that on the fifth day of embryo development there was no statistically significant difference in the number of blastocysts (p>0.05), which constituted 61.6% (n=389) and 63.0% (n=104) in the respective groups. For PGS, 250 embryos were analyzed in group A and 72 embryos in group B. The results showed that 40.0% (n=100) of embryos in group A and 41.7% (n=30) in group B were euploid for the studied chromosomes, with no statistically significant difference (p>0.05). The clinical pregnancy rates in the groups were 64.7% (22 pregnancies per 34 embryo transfers) and 61.5% (8 pregnancies per 13 embryo transfers), respectively, also without a significant difference between the groups (p>0.05). Conclusions: The results showed that vitrification does not affect the proportion of euploid embryos obtained in assisted reproductive technologies and is not reflected in their morphological characteristics in ART programs.
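The reported zygote-rate comparison can be checked with a pooled two-proportion z-test (for a 2x2 table this is equivalent to the χ² criterion the authors used), taking only the counts quoted in the abstract:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Pooled two-proportion z-test; two-sided p-value via the normal CDF."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p

# Zygote rates from the abstract: 631/693 (fresh) vs 165/205 (vitrified,
# counting only oocytes that survived thawing).
z, p = two_proportion_z(631, 693, 165, 205)
```

With these counts the test gives z of roughly 4.2, consistent with the reported p<0.001.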

Keywords: euploid embryos, preimplantation genetic screening, thawing oocytes, vitrification

Procedia PDF Downloads 332
1149 Debriefing Practices and Models: An Integrative Review

Authors: Judson P. LaGrone

Abstract:

Simulation-based education in curricula was once a luxurious component of nursing programs but now serves as a vital element of an individual's learning experience. A debriefing occurs after the simulation scenario or clinical experience is completed to allow the instructor(s) or trained professional(s) to act as a debriefer and guide a reflection, with the purpose of acknowledging, assessing, and synthesizing the thought process, decision-making process, and actions/behaviors performed during the scenario or clinical experience. Debriefing is a vital component of the simulation process and educational experience, allowing the learner(s) to progressively build upon past experiences and current scenarios within a safe and welcoming environment through a guided dialog that enhances future practice. The aim of this integrative review was to assess current practices of debriefing models in simulation-based education for health care professionals and students. The following databases were utilized for the search: CINAHL Plus, Cochrane Database of Systematic Reviews, EBSCO (ERIC), PsycINFO (Ovid), and Google Scholar. The advanced search option was used to narrow the search (full text, Boolean operators, English language, peer-reviewed, published in the past five years). Key terms included debrief, debriefing, debriefing model, debriefing intervention, psychological debriefing, simulation, simulation-based education, simulation pedagogy, health care professional, nursing student, and learning process. Included studies focus on debriefing after clinical scenarios of nursing students, medical students, and interprofessional teams conducted between 2015 and 2020. Common themes were identified after the analysis of articles matching the search criteria. Several debriefing models are addressed in the literature, with similar effectiveness for participants in clinical simulation-based pedagogy. 
Themes identified included (a) the importance of debriefing in simulation-based pedagogy, (b) the environment in which debriefing takes place, (c) the individuals who should conduct the debrief, (d) the length of the debrief, and (e) the methodology of the debrief. Debriefing models supported by theoretical frameworks and facilitated by trained staff are vital for a successful debriefing experience. Models ranged from self-debriefing, facilitator-led debriefing, video-assisted debriefing, and rapid cycle deliberate practice to reflective debriefing. A recurring finding was the emphasis on continued research into systematic tool development and analysis of the validity and effectiveness of current debriefing practices. There is a lack of consistency in debriefing models across nursing curricula, along with an increasing rate of faculty who are ill-prepared to facilitate the debriefing phase of the simulation.

Keywords: debriefing model, debriefing intervention, health care professional, simulation-based education

Procedia PDF Downloads 141
1148 Evaluation of Batch Splitting in the Context of Load Scattering

Authors: S. Wesebaum, S. Willeke

Abstract:

Production companies are faced with an increasingly turbulent business environment, which demands very high flexibility in production volumes and delivery dates. If decoupling by storage stages is not possible (e.g., at a contract manufacturing company) or is undesirable from a logistical point of view, load scattering affects the production processes. ‘Load’ characterizes the timing and quantity of production orders (e.g., in work content hours) arriving at workstations, which results in specific capacity requirements. Insufficient coordination between load (capacity demand) and capacity supply results in heavy load scattering, which can be described by deviations and uncertainties in the input behavior of a capacity unit. In order to respond to fluctuating loads, companies try to implement consistent and realizable input behavior using the capacity supply available. For example, a uniform and high level of equipment capacity utilization keeps production costs down. In contrast, strong load scattering at workstations leads to performance loss or disproportionately fluctuating WIP, whereby the logistics objectives are affected negatively. Options for reducing load scattering include shifting the start and end dates of orders, batch splitting, and outsourcing of operations or shifting them to other workstations. This leads to an adjustment of load to capacity supply, and thus to a reduction of load scattering. If the adaptation of load to capacity cannot be achieved completely, flexible capacity may have to be used to ensure that the performance of a workstation does not decrease for a given load. Whereas the use of flexible capacity normally raises costs, an adjustment of load to capacity supply reduces load scattering and, in consequence, costs. The literature mostly offers qualitative statements describing load scattering; quantitative evaluation methods that describe load mathematically are rare.
In this article, the authors discuss existing approaches for calculating load scattering and their various disadvantages, such as the lack of a possibility for normalization. These approaches are the basis for the development of our mathematical quantification approach for describing load scattering, which compensates for the disadvantages of the current approaches. After presenting this quantification approach, the method of batch splitting is described. Batch splitting allows the adaptation of load to capacity in order to reduce load scattering. The method is then explicitly analyzed in the context of the logistic curve theory by Nyhuis, using the stretch factor α1, in order to evaluate its impact on load scattering and on the logistic curves. The article concludes by showing how the methods and approaches presented can help companies in a turbulent environment to accurately quantify the load scattering that occurs and to apply an efficient method for adjusting load to capacity supply. In this way, the achievement of the logistical objectives is improved without causing additional costs.
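The abstract does not reproduce the authors' quantification formula. As a purely illustrative stand-in, a normalized scattering measure in the spirit described, here the coefficient of variation of a workstation's per-period input load, can be sketched as follows (the function name and the choice of measure are assumptions, not the paper's actual approach):

```python
# Illustrative sketch: quantify load scattering at a workstation as the
# coefficient of variation (CV) of its per-period input work content.
# Dividing by the mean normalizes the measure, so workstations with
# different absolute loads can be compared -- one of the properties the
# authors call for. NOTE: this is an assumed stand-in, not the
# quantification approach developed in the paper.

def load_scattering_cv(loads):
    """Coefficient of variation of a sequence of per-period work
    content values (e.g. hours of work arriving per shift)."""
    n = len(loads)
    mean = sum(loads) / n
    if mean == 0:
        return 0.0
    variance = sum((x - mean) ** 2 for x in loads) / n
    return (variance ** 0.5) / mean

# A perfectly levelled input has zero scattering ...
print(load_scattering_cv([8, 8, 8, 8]))    # 0.0
# ... while a strongly fluctuating input scores higher.
print(load_scattering_cv([2, 14, 2, 14]))  # 0.75
```

Batch splitting would then show up in such a measure as a flattening of the input series: splitting a 14-hour order into two 7-hour portions arriving in consecutive periods reduces the CV.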

Keywords: batch splitting, production logistics, production planning and control, quantification, load scattering

Procedia PDF Downloads 397
1147 Jungle Justice on Emotional Health Challenges of Residents in Lagos Metropolis

Authors: Aaron Akinloye

Abstract:

This research focuses on the impact of jungle justice on the emotional health challenges experienced by residents of the Lagos metropolis in Nigeria. Jungle justice refers to the practice of individuals taking the law into their own hands and administering punishment without proper legal procedures. The aim of this study is to investigate the influence of jungle justice on the emotional challenges faced by residents of Lagos; the specific objectives are to examine its effects on trauma, pressure, fear, and depression. The study adopted a descriptive survey research design with a questionnaire as the research instrument. The population of the study consisted of residents of the three senatorial districts that make up Lagos State. Simple random sampling was used to select two Local Government Areas (Yaba and Shomolu) from each of the three senatorial districts, and fifty (50) residents were then selected from each of the chosen Local Government Areas, giving a sample of three hundred (300) residents. Data on the variables of interest were collected using a self-developed questionnaire, which was validated through face, content, and construct validation processes; its reliability coefficient was found to be 0.84. The study reveals that jungle justice significantly influences trauma, pressure, fear, and depression among residents of the Lagos metropolis. The statistical analysis shows significant relationships between jungle justice and these emotional health challenges (df(298) t = 2.33, p < 0.05; df(298) t = 2.16, p < 0.05; df(298) t = 2.20, p < 0.05; df(298) t = 2.14, p < 0.05).
This study contributes to the literature by highlighting the negative effects of jungle justice on the emotional well-being of residents, and it emphasizes the importance of addressing this issue and implementing measures to prevent such vigilante actions. Data were collected through the administration of the self-developed questionnaire to the selected residents and then analyzed using inferential statistics, specifically mean analysis, to examine the relationships between jungle justice and the emotional health challenges experienced by the residents. The main question addressed in this study is how jungle justice affects the emotional health challenges faced by residents of the Lagos metropolis. The study concludes that jungle justice has a significant influence on trauma, pressure, fear, and depression among residents. To address this issue, recommendations are made, including the implementation of comprehensive awareness campaigns, improvement of law enforcement agencies, development of support systems for victims, and revision of the legal framework to address jungle justice effectively. Overall, this research contributes to the understanding of the consequences of jungle justice and provides recommendations for interventions to protect the emotional well-being of residents of the Lagos metropolis.
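The reported degrees of freedom (298) are consistent with, for example, an independent-samples comparison of two groups of 150 respondents. A sketch of such a test on synthetic data follows; the grouping, score scale, and all values are illustrative assumptions, since the study's raw scores are not published:

```python
# Illustrative sketch of the kind of inferential test behind results
# reported as "df(298) t = 2.33, p < 0.05": with two groups of 150
# respondents each, a pooled-variance independent-samples t-test has
# df = 150 + 150 - 2 = 298. All data below are synthetic.
import random

def t_statistic(a, b):
    """Pooled-variance independent-samples t statistic and its df."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2

random.seed(0)
exposed = [random.gauss(3.4, 0.9) for _ in range(150)]    # hypothetical trauma scores
unexposed = [random.gauss(3.1, 0.9) for _ in range(150)]

t, df = t_statistic(exposed, unexposed)
print(f"df({df}) t = {t:.2f}")  # compare |t| against the two-tailed 1.96 critical value
```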

Keywords: jungle justice, emotional health, depression, anger

Procedia PDF Downloads 74
1146 Examining the Relationship Between Green Procurement Practices and Firm’s Performance in Ghana

Authors: Clement Yeboah

Abstract:

Prior research concludes that environmental commitment positively drives organisational performance. Nonetheless, the nexus and the conditions under which environmental commitment capabilities contribute to a firm’s performance are less understood. The purpose of this quantitative relational study was to examine the relationship between environmental commitment and firm performance among 500 firms in Ghana. The researchers drew insights from the resource-based view to conceptualize environmental commitment and green procurement practices as resource capabilities that enhance firm performance, and from the contingent resource-based view to examine the green leadership orientation conditions under which environmental commitment capability contributes to firm performance through green procurement practices. The study’s conceptual framework was tested on primary data from firms in the Ghanaian market, with PROCESS Macro used to test the study’s hypotheses. The study examined whether green procurement practices mediate the association between environmental commitment capabilities and firm performance, and whether green leadership orientation positively moderates this indirect relationship. While conventional wisdom suggests that improved environmental commitment capabilities help improve a firm’s performance, this study tested this presumed relationship and provides theoretical arguments and empirical evidence on how green procurement practices, uniquely and in synergy with green leadership orientation, transform it. The study results indicated a positive correlation between environmental commitment and firm performance, and green procurement practices mediated the association between environmental commitment capabilities and firm performance.
This result suggests that firms that prioritize environmental sustainability and demonstrate a strong commitment to environmentally responsible practices tend to experience better overall performance, including financial gains, operational efficiency, an enhanced reputation, and improved relationships with stakeholders. The study’s findings can inform policy formulation in Ghana related to environmental regulations, incentives, and support mechanisms: policymakers can use the insights to design policies that encourage and reward firms for their environmental commitments, thereby fostering a more sustainable and environmentally responsible business environment. The findings can also influence the design and development of educational programs in Ghana, specifically in fields related to sustainability, environmental management, and corporate social responsibility (CSR); institutions may consider integrating environmental and sustainability topics into their business and management courses to create awareness and promote responsible practices among future business professionals. Finally, the results can promote the adoption of environmental accounting practices in Ghana: by recognizing and measuring the environmental impacts and costs associated with business activities, firms can better understand the financial implications of their environmental commitments and develop strategies for improved performance.
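The PROCESS macro runs in SPSS; the percentile-bootstrap indirect-effect test it automates for simple mediation can be sketched in Python as follows. All variable values are synthetic, and the b path is estimated here from a simple regression of performance on procurement for brevity (PROCESS additionally controls for the predictor), so this is an assumed simplification rather than the paper's analysis:

```python
# Sketch of a percentile-bootstrap test of a mediated effect:
# environmental commitment (X) -> green procurement (M) -> performance (Y).
# The indirect effect is a*b, where a is the X->M slope and b the M->Y
# slope; a 95% bootstrap interval excluding zero supports mediation.
import random

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

random.seed(1)
n_firms = 500  # matches the study's sample size; values are synthetic
commitment = [random.gauss(0, 1) for _ in range(n_firms)]          # X
procurement = [0.5 * x + random.gauss(0, 1) for x in commitment]   # M
performance = [0.4 * m + random.gauss(0, 1) for m in procurement]  # Y

boot = []
for _ in range(1000):
    idx = [random.randrange(n_firms) for _ in range(n_firms)]  # resample firms
    xs = [commitment[i] for i in idx]
    ms = [procurement[i] for i in idx]
    ys = [performance[i] for i in idx]
    boot.append(ols_slope(xs, ms) * ols_slope(ms, ys))
boot.sort()
ci_low, ci_high = boot[24], boot[974]  # 95% percentile interval
print(f"indirect effect 95% CI: [{ci_low:.3f}, {ci_high:.3f}]")
```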

Keywords: firm’s performance, green procurement practice, environmental commitment, environmental management, sustainability

Procedia PDF Downloads 85
1145 Digital Twins: Towards an Overarching Framework for the Built Environment

Authors: Astrid Bagireanu, Julio Bros-Williamson, Mila Duncheva, John Currie

Abstract:

Digital Twins (DTs) have entered the built environment from more established industries like aviation and manufacturing, although there has never been a common goal for utilising DTs at scale. Defined as the cyber-physical integration of data between an asset and its virtual counterpart, the DT has been identified in the literature mainly from an operational standpoint, for monitoring the performance of a built asset. However, this has never been translated into how DTs should be implemented in a project and what responsibilities each project stakeholder holds in the realisation of a DT. What is needed is an approach to translate these requirements into actionable DT dimensions. This paper presents a foundation for an overarching framework specific to the built environment. For the purposes of this research, the UK's widely used Royal Institute of British Architects (RIBA) Plan of Work 2020 is used as a basis for itemising project stages. The RIBA Plan of Work consists of eight stages designed to inform the definition, briefing, design, coordination, construction, handover, and use of a built asset. Similar project stages are utilised in other countries; therefore, the recommendations from the interviews presented in this paper are applicable internationally. At the same time, there is no single mainstream software resource that leverages DT capabilities. This ambiguity meets an unparalleled ambition from governments and industries worldwide to achieve a national grid of interconnected DTs. For the construction industry to access these benefits, a defined starting point is necessary. This research aims to provide a comprehensive understanding of the potential applications and ramifications of DTs in the context of the built environment. This paper is an integral part of a larger research project aimed at developing a conceptual framework for the Architecture, Engineering, and Construction (AEC) sector following a conventional project timeline.
Therefore, this paper plays a pivotal role in providing practical insights and a tangible foundation for developing a stage-by-stage approach to assimilating the potential of DTs within the built environment. First, the research presents a review of the relevant literature, acknowledging the inherent constraint of the limited sources available. Secondly, a qualitative study compiling the views of 14 DT experts is presented, concluding with an inductive analysis of the interview findings that highlights the barriers and strengths of DTs in the context of framework development. As parallel developments aim to progress net-zero-centred design and improve project efficiencies across the built environment, the limited resources available to support DTs should be leveraged to propel the industry into its digitalisation era, and AEC stakeholders have a fundamental role in understanding this from the earliest stages of a project.

Keywords: digital twins, decision-making, design, net-zero, built environment

Procedia PDF Downloads 120
1144 Investigating the Flow Physics within Vortex-Shockwave Interactions

Authors: Frederick Ferguson, Dehua Feng, Yang Gao

Abstract:

Current CFD tools undoubtedly have many technical limitations, and active research is being done to overcome them. Areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions to the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One approach commonly implemented is known as ‘direct numerical simulation’, DNS. This approach requires a spatial grid fine enough to capture the smallest length scale of the turbulent fluid motion, known as the ‘Kolmogorov scale’. Note that the Kolmogorov scale must be resolved throughout the domain of interest, and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks, and at this time in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique that is capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible flow fields, at both high and low Reynolds numbers. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error characteristics will be described.
Further, the IDS will be used to solve the inviscid and viscous Burgers equations, with the goal of analyzing their solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave–vortex interaction problem for low supersonic conditions, and the reflected oblique shock–vortex interaction problem. The IDS solutions obtained for each of these problems will be explored further in efforts to determine the density gradient and vorticity distributions, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of vortex-shockwave interactions.
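For readers unfamiliar with the benchmark, the viscous Burgers equation u_t + u u_x = ν u_xx can be solved with a conventional explicit finite-difference scheme, as sketched below on a periodic grid. This is a standard reference scheme (upwind convection, central diffusion), not the IDS, whose formulation is not reproduced in the abstract; grid size, viscosity, and initial condition are illustrative choices:

```python
import math

# Explicit finite-difference scheme for the viscous Burgers equation
# u_t + u*u_x = nu*u_xx on a periodic grid: first-order upwind for the
# convective term, central differences for the diffusive term.
def burgers_step(u, dx, dt, nu):
    """Advance the solution by one explicit time step."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        um, up = u[i - 1], u[(i + 1) % n]   # periodic neighbours
        if u[i] >= 0:
            conv = u[i] * (u[i] - um) / dx  # backward (upwind) difference
        else:
            conv = u[i] * (up - u[i]) / dx  # forward difference
        diff = nu * (up - 2 * u[i] + um) / dx ** 2
        new[i] = u[i] + dt * (diff - conv)
    return new

n, nu = 200, 0.05
dx = 2 * math.pi / n
dt = 0.2 * dx ** 2 / nu  # satisfies the diffusive stability limit nu*dt/dx^2 <= 0.5
u = [math.sin(i * dx) for i in range(n)]  # smooth initial wave

for _ in range(2000):
    u = burgers_step(u, dx, dt, nu)

# The initially smooth sine wave steepens and then decays under viscosity,
# so the peak amplitude drops below its initial value of 1.
peak = max(u)
print(f"peak amplitude after {2000 * dt:.2f} time units: {peak:.3f}")
```

Tracking how such a solution evolves over long integration times is the kind of unsteady behaviour the abstract proposes to study with the IDS.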

Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme

Procedia PDF Downloads 136
1143 3D Nanostructured Assembly of 2D Transition Metal Chalcogenide/Graphene as High Performance Electrocatalysts

Authors: Sunil P. Lonkar, Vishnu V. Pillai, Saeed Alhassan

Abstract:

Design and development of highly efficient, inexpensive, and long-term stable earth-abundant electrocatalysts hold tremendous promise for the hydrogen evolution reaction (HER) in water electrolysis. 2D transition metal dichalcogenides, especially molybdenum disulfide, have attracted a great deal of interest due to their high electrocatalytic activity. However, their poor electrical conductivity and limited exposed active sites restrict the performance of these catalysts. In this context, a facile and scalable synthesis method for fabricating nanostructured electrocatalysts composed of 3D porous graphene aerogels supporting MoS₂ and WS₂ is highly desired. Here we developed a highly active and stable electrocatalyst for the HER by growing it into a 3D porous architecture on conducting graphene. The resulting nanohybrids were thoroughly investigated by means of several characterization techniques to understand their structure and properties. Moreover, the HER performance of these 3D catalysts is expected to greatly improve compared to other well-known catalysts, benefiting mainly from the improved electrical conductivity provided by graphene and the porous structure of the support. This technologically scalable process can afford efficient electrocatalysts for the HER and hydrodesulfurization catalysts for sulfur-rich petroleum fuels. Owing to their lower cost and higher performance, the resulting materials hold high potential for various energy and catalysis applications. In a typical hydrothermal synthesis, a sonicated GO aqueous dispersion (5 mg mL⁻¹) mixed with ammonium tetrathiomolybdate (ATTM) and tungsten molybdate was treated in a sealed Teflon autoclave at 200 °C for 4 h. After cooling, a black solid macroporous hydrogel was recovered and washed under running de-ionized water to remove any by-products and metal ions.
The obtained hydrogels were then freeze-dried for 24 h and further subjected to thermal-annealing-driven crystallization at 600 °C for 2 h to ensure complete thermal reduction of RGO into graphene and the formation of highly crystalline MoS₂ and WS₂ phases. The resulting 3D nanohybrids were characterized to understand their structure and properties. SEM-EDS clearly reveals the formation of a highly porous material with a uniform distribution of the MoS₂ and WS₂ phases. In conclusion, a novel strategy for the fabrication of 3D nanostructured MoS₂-WS₂/graphene is presented. The characterizations revealed that the in-situ-formed promoters are uniformly dispersed onto few-layered MoS₂-WS₂ nanosheets that are well supported on the graphene surface. The resulting 3D hybrids hold high promise as potential electrocatalysts and hydrodesulfurization catalysts.

Keywords: electrocatalysts, graphene, transition metal chalcogenide, 3D assembly

Procedia PDF Downloads 132
1142 Reactors with Effective Mixing as a Solutions for Micro-Biogas Plant

Authors: M. Zielinski, M. Debowski, P. Rusanowska, A. Glowacka-Gil, M. Zielinska, A. Cydzik-Kwiatkowska, J. Kazimierowicz

Abstract:

Technologies for micro-biogas plants with heating and mixing systems are presented as part of the Research Coordination for a Low-Cost Biomethane Production at Small and Medium Scale Applications (Record Biomap) project. The main objective of the Record Biomap project is to build a network of operators and scientific institutions interested in cooperation and in the development of promising technologies in the sector of small and medium-sized biogas plants. The activities carried out in the project will bridge the gap between research and market and reduce the time needed to implement new, efficient technological and technical solutions. The reactor with a simultaneous mixing and heating system is a concrete tank with a rectangular cross-section, in which heating is integrated with the mixing of substrate and anaerobic sludge. This reactor is a solution dedicated to substrates with high solids content, which cannot be introduced to the reactor with pumps, even positive displacement pumps. Substrates are poured into the reactor and then mixed with anaerobic sludge by a screw pump. The pumped sludge, flowing through the screw pump, is simultaneously heated by a heat exchanger. The level of the fermentation sludge inside the reactor chamber is above the bottom edge of the cover. The cover of the reactor carries the screw pump drive: an electric motor installed inside the reactor drives the screw pump, and the heated sludge circulates in the digester. The post-fermented sludge is collected using a drain well, whose inlet is below the level of the sludge in the digester. The biogas is discharged from the reactor by the biogas intake valve located on the cover. The technology is very useful for the fermentation of lignocellulosic biomass and substrates with a high dry mass content (organic wastes). The second technology is a reactor for a micro-biogas plant with a pressure mixing system.
This reactor takes the form of a plastic or concrete tank with a circular cross-section. Effective mixing of the sludge is ensured by the tank bottom, which is profiled at 90°. Substrates for fermentation are supplied through an inlet well equipped with a cover that eliminates odour release. The introduction of a new portion of substrates is preceded by pumping the digestate to the disposal well; optionally, the digestate can flow gravitationally to a digestate storage tank. The obtained biogas is discharged into the separator, from which a valve supplies it to the blower. The blower presses the biogas from the fermentation chamber in such a way as to facilitate the introduction of a new portion of substrates. Biogas is discharged from the reactor by a valve that enables biogas removal but prevents suction of air from outside the reactor.

Keywords: biogas, digestion, heating system, mixing system

Procedia PDF Downloads 151
1141 Progressing Institutional Quality Assurance and Accreditation of Higher Education Programmes

Authors: Dominique Parrish

Abstract:

Globally, higher education institutions are responsible for the quality assurance and accreditation of their educational programmes (Courses). The primary purpose of these activities is to ensure that the educational standards of the governing higher education authority are met and that the quality of the education provided to students is assured. Despite policies and frameworks being established in many countries to improve the veracity and accountability of quality assurance and accreditation processes, there are reportedly still mistakes, gaps, and deficiencies in these processes. An analysis of Australian universities’ quality assurance and accreditation processes noted that significant improvements were needed in managing these processes and ensuring that review recommendations were implemented. It has also been suggested that the following principles are critical for higher education quality assurance and accreditation to be effective and sustainable: academic standards and performance outcomes must be defined, attainable, and monitored; those involved in providing the higher education must assume responsibility for the associated quality assurance and accreditation; potential academic risks must be identified and management solutions developed; and the expectations of the public, governments, and students should be considered and incorporated into Course enhancements. This phenomenological study, conducted in a Faculty of Science, Medicine and Health at an Australian university, sought to systematically and iteratively develop an effective quality assurance and accreditation process that integrated these evidence-based principles of success and promoted meaningful and sustainable change. Qualitative evaluative feedback was gathered over a period of eleven months (January-November 2014) from faculty staff engaged in the quality assurance and accreditation of forty-eight undergraduate and postgraduate Courses.
Reflexive analysis was used to analyse the data and inform ongoing modifications and developments to the assurance and accreditation process, as well as to the associated supporting resources. The study resulted in the development of a formal quality assurance and accreditation process together with a suite of targeted resources that were identified as critical for success. The research findings also provided insights into the institutional enablers that were antecedents to successful quality assurance and accreditation processes, as well as to meaningful change in the educational practices of academics. While longitudinal data will be collected to further assess the impact of the assurance and accreditation process on educational quality, early indicators are that there has been a change in the pedagogical perspectives and activities of academic staff and growing momentum to explore opportunities to further enhance and develop Courses. This presentation will explain the formal quality assurance and accreditation process and its component parts, which resulted from this study. The targeted resources that were developed will be described, the pertinent factors that contributed to the success of the process will be discussed, and early indicators of sustainable academic change, as well as suggestions for future research, will be outlined.

Keywords: academic standards, quality assurance and accreditation, phenomenological study, process, resources

Procedia PDF Downloads 377
1140 Creation and Evaluation of an Academic Blog of Tools for the Self-Correction of Written Production in English

Authors: Imelda Katherine Brady, Iria Da Cunha Fanego

Abstract:

Today's university students are considered digital natives, and the use of Information Technologies (ITs) forms a large part of their study and learning. In the context of language studies, applications that help with revisions of grammar or vocabulary are particularly useful, especially if they are open access. Studies show the effectiveness of this type of application in the learning of English as a foreign language, and using IT can help learners become more autonomous in foreign language acquisition, given that these applications can enhance awareness of the learning process; this means that learners are less dependent on the teacher for corrective feedback. We also propose that exploiting these technologies enhances the work of the language instructor wishing to incorporate IT into his/her practice. In this context, the aim of this paper is to present the creation of a repository of tools that provide support in the writing and correction of texts in English, and the assessment of its usefulness by university students enrolled in the English Studies Degree. The project seeks to encourage the development of autonomous learning through the acquisition of skills linked to the self-correction of written work in English. To this end, our methodology follows five phases. First, a selection is made of the main open-access online applications available for the correction of written texts in English: AutoCrit, Hemingway, Grammarly, LanguageTool, OutWrite, PaperRater, ProWritingAid, Reverso, Slick Write, Spell Check Plus and Virtual Writing Tutor. Secondly, the functionalities of each of these tools (spelling, grammar, style correction, etc.) are analyzed. Thirdly, explanatory materials (texts and video tutorials) are prepared for each tool. Fourthly, these materials are uploaded to a repository of our university in the form of an institutional blog, which is made available to students and the general public.
Finally, a survey was designed to collect students’ feedback. The survey aimed to analyse the usefulness of the blog and the quality of the explanatory materials, as well as the degree of usefulness that students assigned to each of the tools offered. In this paper, we present the results of the analysis of data received from 33 students in the first semester of the 2021-22 academic year. One result we highlight is that the students rated this resource very highly, in addition to offering very valuable information on the perceived usefulness of the applications provided for them to review. Our work, carried out within the framework of a teaching innovation project funded by our university, emphasizes that teachers need to design methodological strategies that help their students improve the quality of their written production in English and, by extension, their linguistic competence.

Keywords: academic blog, open access tools, online self-correction, written production in English, university learning

Procedia PDF Downloads 101