Search results for: managing code blue
92 Surface Roughness in the Incremental Forming of Drawing Quality Cold Rolled CR2 Steel Sheet
Authors: Zeradam Yeshiwas, A. Krishnaia
Abstract:
The aim of this study is to verify the resulting surface roughness of parts formed by the Single-Point Incremental Forming (SPIF) process for an ISO 3574 Drawing Quality Cold Rolled CR2 Steel. The chemical composition of drawing quality Cold Rolled CR2 steel comprises 0.12 percent carbon, 0.5 percent manganese, 0.035 percent sulfur, 0.04 percent phosphorus, and the remaining percentage is iron with negligible impurities. The experiments were performed on a 3-axis vertical CNC milling machining center equipped with a tool setup comprising a fixture and forming tools specifically designed and fabricated for the process. The CNC milling machine translated the tool path code generated in the Mastercam 2017 environment into three-dimensional motions through the linear incremental progress of the spindle. Blanks of Drawing Quality Cold Rolled CR2 steel sheet, 1 mm thick, were fixed along their periphery by a fixture, and hardened high-speed steel (HSS) tools with hemispherical tips of 8, 10 and 12 mm diameter were employed to fabricate the sample parts. To investigate the surface roughness, hyperbolic-cone specimens were fabricated according to the chosen experimental design. The effect of process parameters on the surface roughness was studied using three important process parameters, i.e., tool diameter, feed rate, and step depth. The Taylor-Hobson Surtronic 3+ profilometer was used to determine the surface roughness of the fabricated parts in terms of the arithmetic mean deviation (Rₐ); in this instrument, a small tip is dragged across a surface while its deflection is recorded. Finally, the optimum process parameters and the main factor affecting surface roughness were found using the Taguchi design of experiments and ANOVA. A Taguchi design with three factors and three levels for each factor was adopted, and the standard orthogonal array L9 (3³) was selected for the study using the array selection table. Since the lowest value of surface roughness is desirable, the ‘‘smaller-the-better’’ equation was used for the calculation of the S/N ratio. The roughness parameter Rₐ was measured for each combination of the control factors in the Taguchi experimental design; four roughness measurements were taken on each component and averaged. The effect of each control factor on the surface roughness was analysed with an ‘‘S/N response table’’. Optimum surface roughness was obtained at a feed rate of 1500 mm/min, with a tool diameter of 12 mm, and with a step depth of 0.5 mm. The ANOVA result shows that step depth is the essential factor affecting surface roughness (91.1%).
Keywords: incremental forming, SPIF, drawing quality steel, surface roughness, roughness behavior
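For reference, the ‘‘smaller-the-better’’ signal-to-noise ratio named in this abstract is the standard Taguchi expression below, where yᵢ is the i-th of n replicate roughness measurements (n = 4 per component in this study):

```latex
\mathrm{S/N} = -10 \,\log_{10}\!\left(\frac{1}{n}\sum_{i=1}^{n} y_i^{2}\right)
```

The factor-level combination that maximises this ratio across the L9 array gives the optimum parameter levels.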
Procedia PDF Downloads 62
91 Sugarcane Trash Biochar: Effect of the Temperature in the Porosity
Authors: Gabriela T. Nakashima, Elias R. D. Padilla, Joao L. Barros, Gabriela B. Belini, Hiroyuki Yamamoto, Fabio M. Yamaji
Abstract:
Biochar can be an alternative use for sugarcane trash. Biochar is a solid material obtained from pyrolysis, that is, the thermal degradation of biomass at low or no O₂ concentration. Pyrolysis transforms the carbon commonly found in other organic structures into a more stable carbon that can resist microbial decomposition. Biochar has a versatility of uses, such as soil fertility, carbon sequestration, energy generation, ecological restoration, and soil remediation. Biochar has a great ability to retain water and nutrients in the soil, so this material can improve the efficiency of irrigation and fertilization. The aim of this study was to characterize biochar produced from sugarcane trash at three different pyrolysis temperatures and to determine the lowest temperature giving high yield and carbon content. Physical characterization of the biochar was performed to help identify the best production conditions. Sugarcane (Saccharum officinarum) trash was collected at Corredeira Farm, located in Ibaté, São Paulo State, Brazil. The farm has 800 hectares of planted area with an average yield of 87 t·ha⁻¹. The sugarcane varieties planted on the farm are RB 855453, RB 867515, RB 855536, SP 803280, and SP 813250. Sugarcane trash was dried and crushed into 50 mm pieces. Crucibles and lids were used to hold the sugarcane trash samples, and each crucible was filled with as much trash as possible to limit the O₂ available. Biochar was produced at three different pyrolysis temperatures (200°C, 325°C, 450°C) with a 2-hour residence time in a muffle furnace. The gravimetric yield of biochar was obtained. Proximate analysis of the biochar was done using ASTM E-872 and ABNT NBR 8112: volatile matter and ash content were calculated by direct weight loss, and fixed carbon content was calculated by difference. Porosity was evaluated using an automatic gas adsorption device, Autosorb-1, with CO₂, as described by Nakatani. Approximately 0.5 g of biochar at 2 mm particle size was used for each measurement. Vacuum outgassing was performed as a pre-treatment under different conditions for each biochar temperature. The pore size distribution of micropores was determined using the Horváth-Kawazoe method. The biochar presented a different color for each treatment. The 200°C biochar contained more pieces of 10 mm or larger and did not show the dark black color of the other treatments after the 2 h residence time in the muffle furnace; this treatment also had the highest content of volatiles and the lowest amount of fixed carbon. In the porosity analysis, as the treatment temperature increased, the number of pores also increased. The increase in temperature therefore resulted in a better-quality biochar, whose pores can help in soil aeration, adsorption, and water retention. Acknowledgment: This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brazil – PROAP-CAPES, PDSE and CAPES - Finance Code 001.
Keywords: proximate analysis, pyrolysis, soil amendment, sugarcane straw
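As a minimal illustration of the proximate-analysis arithmetic described above (fixed carbon obtained by difference from volatile matter and ash), the sketch below uses hypothetical percentages; the function name and values are illustrative, not the authors' data.

```python
def fixed_carbon_by_difference(volatile_pct: float, ash_pct: float) -> float:
    """Fixed carbon (%, dry basis) by difference, in the style of ASTM E-872 proximate analysis."""
    return 100.0 - volatile_pct - ash_pct

# Hypothetical example: a low-temperature biochar with high volatiles
volatile = 62.5   # % volatile matter from direct weight loss
ash = 6.8         # % ash from direct weight loss
print(f"Fixed carbon: {fixed_carbon_by_difference(volatile, ash):.1f} %")  # prints 30.7 %
```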
Procedia PDF Downloads 214
90 Benefits of Environmental Aids to Chronobiology Management and Its Impact on Depressive Mood in an Operational Setting
Authors: M. Trousselard, D. Steiler, C. Drogou, P. van-Beers, G. Lamour, S. N. Crosnier, O. Bouilland, P. Dubost, M. Chennaoui, D. Léger
Abstract:
According to published data, undersea navigation for long periods (nuclear-powered ballistic missile submarine, SSBN) constitutes an extreme environment in which crews are subjected to multiple stresses, including the absence of natural light, illuminance below 1,000 lux, and watch schedules that do not respect natural chronobiological rhythms, for a period of 60-80 days. These stresses seem clearly detrimental to the submariners’ sleep, with consequences for their affective (seasonal affective disorder-like) and cognitive functioning. In the long term, there are abundant publications regarding the consequences of sleep disruption for the occurrence of organic cardiovascular, metabolic, immunological or malignant diseases. It seems essential to propose countermeasures for the duration of the patrol in order to reduce the negative physiological effects on the sleep and mood of submariners. Light therapy, the preferred treatment for dysfunctions of the internal biological clock and the resulting seasonal depression, cannot be used without data on submariners’ chronobiology (melatonin secretion curve) during patrols, given the unusual characteristics of their working environment. These data are not available in the literature. The aim of this project was to assess, in the course of two studies, the benefits of two environmental techniques for managing chronobiological stress: techniques for optimizing potential (TOP; study 1), an existing programme to help in the psychophysiological regulation of stress and sleep in the armed forces, and dawn and dusk simulators (DDS; study 2). For each experiment, psychological, physiological (sleep) or biological (melatonin secretion) data were collected on D20 and D50 of patrol. In the first experiment, we studied sleep and depressive distress in 19 submariners in an operational setting on board an SSBN during a first patrol, and assessed the impact of TOP on the quality of sleep and depressive distress in these same submariners over the course of a second patrol. The submariners were trained in TOP between the two patrols for a 2-month period, at a rate of 1 h of training per week, and assigned daily informal exercises. Results show moderate disruptions in sleep pattern and duration associated with the intensity of depressive distress. The use of TOP during the following patrol improved sleep and depressive mood only in submariners who regularly practiced the techniques. In light of these limited benefits, we assessed, in a second experiment, the benefits of DDS on chronobiology (daily secretion of melatonin) and depressive distress. Ninety submariners were randomly allocated to two groups, group 1 using DDS daily, and group 2 constituting the control group. Although the placebo effect was not controlled, results showed a beneficial effect on chronobiology and depressive mood for submariners with a morning chronotype. Conclusions: These findings demonstrate the difficulty of practicing the tools of psychophysiological management in real life. They raise the question of the subjects’ autonomy with respect to using aids that involve regular practice. It seems important to study autonomy in future studies, as a cognitive resource resulting from the interaction between internal positive resources and “coping” resources, to gain a better understanding of compliance problems.
Keywords: chronobiology, light therapy, seasonal affective disorder, sleep, stress, stress management, submarine
Procedia PDF Downloads 456
89 Big Data Applications for Transportation Planning
Authors: Antonella Falanga, Armando Cartenì
Abstract:
"Big data" refers to extremely vast and complex sets of data, encompassing extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represent a transformative force reshaping the industry worldwide. Their pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data impacts across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications encompass a wide variety, spanning across optimization in vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of the overall transportation systems, but also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data better provide to enhance rational decision-making for mobility choices and is imperative for adeptly planning and allocating investments in transportation infrastructures and services.Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning
Procedia PDF Downloads 60
88 Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the Context of Agile, Lean and Hybrid Project Management Approaches
Authors: Maria Ledinskaya
Abstract:
This paper examines the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the context of Agile, Lean, and Hybrid project management. It is part case study and part literature review. To date, relatively little has been written about non-traditional project management approaches in heritage conservation. This paper seeks to introduce Agile, Lean, and Hybrid project management concepts from business, software development, and manufacturing fields to museum conservation, by referencing their practical application on a recent museum-based conservation project. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre for Visual Arts by private collectors Michael and Joyce Morris. The first part introduces the chronological timeline and key elements of the project. It describes a medium-size conservation project of moderate complexity, which was planned and delivered in an environment with multiple known unknowns – unresearched collection, unknown condition and materials, unconfirmed budget. The project was also impacted by the unknown unknowns of the COVID-19 pandemic, such as indeterminate lockdowns, and the need to accommodate social distancing and remote communications. The author, a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Collection Conservation Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. Subsequent sections examine the project from the point of view of Traditional, Agile, Lean, and Hybrid project management. The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment, due to its over-reliance on prediction-based planning and its low tolerance to change. In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Collection Conservation Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, as well as the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics, particularly with respect to change management, bespoke ethics, shared decision-making, and value-based cost-benefit conservation strategy. The author concludes that the Morris Collection Conservation Project had multiple Agile and Lean features which were instrumental to the successful delivery of the project. These key features are identified as distributed decision making, a co-located cross-disciplinary team, servant leadership, focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and emphasis on reducing procedural, financial, and logistical waste. 
Overall, the author’s findings point largely in favour of a Hybrid model which combines traditional and alternative project processes and tools to suit the specific needs of the project.
Keywords: project management, conservation, waterfall, agile, lean, hybrid
Procedia PDF Downloads 99
87 Boost for Online Language Course through Peer Evaluation
Authors: Kirsi Korkealehto
Abstract:
The purpose of this research was to investigate how the peer evaluation concept was perceived by language teachers developing online language courses. The online language courses in question were developed in language teacher teams within the nationwide KiVAKO-project funded by the Finnish Ministry of Education and Culture. The participants of the project were 86 language teachers from 26 higher education institutions in Finland. The KiVAKO-project aims to strengthen the language capital at higher education institutions by building a nationwide online language course offering on a shared platform. All higher education students can study the courses regardless of their home institutions. The project covers the following languages: Chinese, Estonian, Finnish Sign Language, French, German, Italian, Japanese, Korean, Portuguese, Russian, and Spanish at CEFR levels A1-C1. The courses were piloted in the autumn term of 2019, and an online peer evaluation session was organised for all project-participating teachers in spring 2020. The peer evaluation utilised the quality criteria for online implementations, which were developed earlier within the eAMK-project. The eAMK-project was also funded by the Finnish Ministry of Education and Culture, with the aim of improving higher education teachers’ digital and pedagogical competences. In the online peer evaluation session, the teachers were divided into Zoom breakout rooms, in each of which two pilot courses were presented dialogically by their teachers. The other language teachers provided feedback on the courses on the basis of the quality criteria. Thereafter, good practices and ideas were gathered in an online document. The breakout rooms were facilitated by one teacher who was instructed and provided with a slide set prior to the online session. After the online peer evaluation sessions, the language teachers were asked to respond to an online feedback questionnaire. The questionnaire included three multiple-choice questions using Likert-scale ratings and two open-ended questions. The questionnaire was answered immediately after the sessions: the link and a QR code to it were on the last slide, and it was completed on site. The data comprise the online questionnaire responses from the peer evaluation sessions and the researcher’s observations during the sessions. The data were analysed with a qualitative content analysis method with the help of the Atlas.ti programme, and the Likert-scale answers provided results per se. The observations were used as complementary data to support the primary data. The findings indicate that the work in the breakout rooms was successful and the workshops proceeded smoothly. The workshops were perceived as beneficial in terms of improving the piloted courses and developing the participants’ own work as teachers. Further, the language teachers stated that the collegial discussions and sharing of ideas were fruitful. Suggested improvements to the workshops were to give more time for free discussion and the opportunity to familiarize oneself with the quality criteria and the presented language courses beforehand. The quality criteria were considered to provide a suitable frame for self- and peer evaluations.
Keywords: higher education, language learning, online learning, peer-evaluation
Procedia PDF Downloads 126
86 A Proposed Framework for Better Managing Small Group Projects on an Undergraduate Foundation Programme at an International University Campus
Authors: Sweta Rout-Hoolash
Abstract:
Each year, selected students from around 20 countries begin their degrees at Middlesex University with the International Foundation Programme (IFP), developing the skills required for academic study at a UK university. The IFP runs for 30 learning/teaching weeks at Middlesex University Mauritius Branch Campus, which is an international campus of UK’s Middlesex University. Successful IFP students join their degree courses already settled into life at their chosen campus (London, Dubai, Mauritius or Malta) and confident that they understand what is required for degree study. Although part of the School of Science and Technology, in Mauritius it prepares students for undergraduate level across all Schools represented on campus, including disciplines such as Accounting, Business, Computing, Law, Media and Psychology. The researcher has critically reviewed the framework and resources in the curriculum for a particular six-week period of IFP study (the dedicated group work phase). Despite working together closely for 24 weeks, IFP students approach the final six-week small group project phase with largely apprehensive feelings. It was observed that students did not engage effectively in the group work exercise. Additionally, groups which seemed to be working well did not necessarily produce results reflecting effective collaboration, nor individual results better than prior efforts. The researcher identified scope for change and innovation in the IFP curriculum and in how group work is introduced and facilitated. The study explores the challenges of group work in the context of the Mauritius campus, though it is clear that the implications of the project are not restricted to one campus only. The presentation offers a reflective review of the previous structure put in place for the management of small group assessed projects on the programme, from both the student and tutor perspectives. The focus of the research is the student voice, taking into consideration past and present IFP students’ experiences as written in their learning journals. Further, it proposes the introduction of a revised framework to help students take greater ownership of the group work process in order to engage more effectively with the learning outcomes of this crucial phase of the programme. The study has critically reviewed recent and seminal literature on how to achieve greater student ownership during this phase, especially in an environment of assessed multicultural group work. The presentation proposes several new approaches for encouraging students to take more control of the collaboration process. Detailed consideration is given to how the proposed changes impact the work of other stakeholders, or partners in student learning. Clear proposals are laid out for evaluation of the different approaches intended to be implemented during the upcoming academic year (the student voice through their own submitted reflections, focus group interviews, and the assessment results). The proposals presented are all realistic and have the potential to transform students’ learning. Furthermore, the study has engaged with the UK Professional Standards Framework for teaching and supporting learning in higher education, and demonstrates practice at the level of Fellow of the Higher Education Academy (HEA).
Keywords: collaborative peer learning, enhancing learning experiences, group work assessment, learning communities, multicultural diverse classrooms, studying abroad
Procedia PDF Downloads 327
85 Healthcare Utilization and Costs of Specific Obesity Related Health Conditions in Alberta, Canada
Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach
Abstract:
Obesity-related health conditions impose a substantial economic burden on payers due to increased healthcare use. Estimates of healthcare resource use and costs associated with obesity-related comorbidities are needed to inform policies and interventions targeting these conditions. Methods: Adults living with obesity were identified (a procedure-related body mass index code for class 2/3 obesity between 2012 and 2019 in Alberta, Canada; excluding those with bariatric surgery), and outcomes were compared over 1 year (2019/2020) between those who had and did not have specific obesity-related comorbidities. The probability of using a healthcare service (based on the odds ratio of a zero [OR-zero] cost) was compared; 95% confidence intervals (CI) were reported. Logistic regression and a generalized linear model with log link and gamma distribution were used for total healthcare cost comparisons ($CDN); cost ratios and estimated cost differences (95% CI) were reported. Potential socio-demographic and clinical confounders were adjusted for, and incremental cost differences were representative of a referent case. Results: A total of 220,190 adults living with obesity were included; 44% had hypertension, 25% had osteoarthritis, 24% had type-2 diabetes, 17% had cardiovascular disease, 12% had insulin resistance, 9% had chronic back pain, and 4% of females had polycystic ovarian syndrome (PCOS). The probability of hospitalization, ED visit, and ambulatory care was higher in those with each of the following obesity-related comorbidities versus those without: chronic back pain (hospitalization: 1.8-times [OR-zero: 0.57 [0.55/0.59]] / ED visit: 1.9-times [OR-zero: 0.54 [0.53/0.56]] / ambulatory care visit: 2.4-times [OR-zero: 0.41 [0.40/0.43]]), cardiovascular disease (2.7-times [OR-zero: 0.37 [0.36/0.38]] / 1.9-times [OR-zero: 0.52 [0.51/0.53]] / 2.8-times [OR-zero: 0.36 [0.35/0.36]]), osteoarthritis (2.0-times [OR-zero: 0.51 [0.50/0.53]] / 1.4-times [OR-zero: 0.74 [0.73/0.76]] / 2.5-times [OR-zero: 0.40 [0.40/0.41]]), type-2 diabetes (1.9-times [OR-zero: 0.54 [0.52/0.55]] / 1.4-times [OR-zero: 0.72 [0.70/0.73]] / 2.1-times [OR-zero: 0.47 [0.46/0.47]]), hypertension (1.8-times [OR-zero: 0.56 [0.54/0.57]] / 1.3-times [OR-zero: 0.79 [0.77/0.80]] / 2.2-times [OR-zero: 0.46 [0.45/0.47]]), PCOS (not significant / 1.2-times [OR-zero: 0.83 [0.79/0.88]] / not significant), and insulin resistance (1.1-times [OR-zero: 0.88 [0.84/0.91]] / 1.1-times [OR-zero: 0.92 [0.89/0.94]] / 1.8-times [OR-zero: 0.56 [0.54/0.57]]). After fully adjusting for potential confounders, the total healthcare cost ratio was higher in those with each of the following obesity-related comorbidities versus those without: chronic back pain (1.54-times [1.51/1.56]), cardiovascular disease (1.45-times [1.43/1.47]), osteoarthritis (1.36-times [1.35/1.38]), type-2 diabetes (1.30-times [1.28/1.31]), hypertension (1.27-times [1.26/1.28]), PCOS (1.08-times [1.05/1.11]), and insulin resistance (1.03-times [1.01/1.04]). Conclusions: Adults with obesity who have specific obesity-related health conditions have a higher probability of healthcare use and incur greater costs than those without specific comorbidities; incremental costs are larger when other obesity-related health conditions are not adjusted for. In a specific referent case, hypertension was costliest (44% had this condition, with an additional annual cost of $715 [$678/$753]).
If these findings hold for the Canadian population, hypertension in persons with obesity represents an estimated additional annual healthcare cost of $2.5 billion among adults living with obesity (based on an adult obesity rate of 26%). Results of this study can inform decision making on investment in interventions that are effective in treating obesity and its complications.
Keywords: administrative data, healthcare cost, obesity-related comorbidities, real world evidence
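A minimal sketch of the two cost models named in the methods (logistic regression for the probability of any healthcare use, i.e., a non-zero cost, and a generalized linear model with log link and gamma distribution for positive costs) is given below using statsmodels in Python; the variable names and synthetic data are assumptions for illustration, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
# Hypothetical cohort: a comorbidity flag, age, and annual cost (with many zeros)
df = pd.DataFrame({
    "comorbidity": rng.integers(0, 2, n),
    "age": rng.normal(55, 10, n),
})
df["cost"] = np.where(rng.random(n) < 0.3, 0.0,
                      rng.gamma(2.0, 1500 * (1 + 0.4 * df["comorbidity"])))

X = sm.add_constant(df[["comorbidity", "age"]])

# Part 1: logistic regression on any use (complement of a zero cost)
logit = sm.Logit((df["cost"] > 0).astype(int), X).fit(disp=0)
print("Odds ratio, any use:", np.exp(logit.params["comorbidity"]))

# Part 2: gamma GLM with log link on positive costs; exp(coef) is a cost ratio
pos = df["cost"] > 0
gamma = sm.GLM(df.loc[pos, "cost"], X.loc[pos],
               family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print("Cost ratio for comorbidity:", np.exp(gamma.params["comorbidity"]))
```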
Procedia PDF Downloads 149
84 Internet of Things, Edge and Cloud Computing in Rock Mechanical Investigation for Underground Surveys
Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo
Abstract:
Rock mechanical investigation is one of the most crucial activities in underground operations, especially in surveys related to hydrocarbon exploration and production, geothermal reservoirs, energy storage, mining, and geotechnics. There is a wide range of traditional methods for deriving, collecting, and analyzing rock mechanics data. However, these approaches may not be suitable or work perfectly in some situations, such as fractured zones. Cutting-edge technologies have emerged to solve and optimize these issues. Internet of Things (IoT), Edge, and Cloud Computing technologies (ECt and CCt, respectively) are among the most widely used new artificial intelligence methods employed for geomechanical studies. IoT devices act as sensors and cameras for real-time monitoring and mechanical-geological data collection of rocks, such as temperature, movement, pressure, or stress levels. Structural integrity assessment, especially for cap rocks within hydrocarbon systems, and rock mass behavior assessment, in support of further activities such as enhanced oil recovery (EOR) and underground gas storage (UGS) or to improve safety risk management (SRM) and potential hazard identification (PHI), are other benefits of IoT technologies. ECt can process, aggregate, and analyze data collected by IoT immediately, on a real-time scale, providing detailed insights into the behavior of rocks in various situations (e.g., stress, temperature, and pressure), establishing patterns quickly, and detecting trends. Therefore, this state-of-the-art and useful technology can support autonomous systems in rock mechanical surveys, such as drilling and production (in hydrocarbon wells) or excavation (in the mining and geotechnics industries). Besides, ECt allows all rock-related operations to be controlled remotely and enables operators to apply changes or make adjustments; this feature is very important for environmental goals. More often than not, rock mechanical studies comprise different data, such as laboratory tests, field operations, and indirect information like seismic or well-logging data. CCt provides a useful platform for storing and managing large volumes of heterogeneous information, which can be very useful in fractured zones. Additionally, CCt supplies powerful tools for predicting, modeling, and simulating rock mechanical behavior, especially in fractured zones within vast areas. It is also a suitable means of sharing extensive rock mechanics information, such as the direction and size of fractures in a large oil field or mine. The comprehensive review findings demonstrate that digital transformation through integrated IoT, Edge, and Cloud solutions is revolutionizing traditional rock mechanical investigation. These advanced technologies have empowered real-time monitoring, predictive analysis, and data-driven decision-making, culminating in noteworthy enhancements in safety, efficiency, and sustainability. Therefore, by employing IoT, CCt, and ECt, underground operations have experienced a significant boost, allowing timely and informed actions using real-time data insights. The successful implementation of IoT, CCt, and ECt has led to optimized and safer operations, optimized processes, and environmentally conscious approaches in underground geological endeavors.
Keywords: rock mechanical studies, internet of things, edge computing, cloud computing, underground surveys, geological operations
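As a toy illustration of the edge-processing pattern described above (aggregating raw IoT sensor readings locally and flagging anomalies in real time, so only events rather than raw streams reach the cloud tier), the sketch below processes a simulated stress-sensor stream; the values and threshold are hypothetical.

```python
from collections import deque
from statistics import mean

def edge_monitor(readings, window=10, threshold=2.5):
    """Flag readings that deviate strongly from the recent rolling mean.

    Only flagged events would be forwarded to the cloud tier, saving bandwidth.
    """
    recent = deque(maxlen=window)
    alerts = []
    for t, value in enumerate(readings):
        if len(recent) == window and abs(value - mean(recent)) > threshold:
            alerts.append((t, value))  # candidate event, e.g. a sudden stress jump
        recent.append(value)
    return alerts

# Hypothetical stream: stable stress values (MPa) with one abrupt excursion
stream = [30.1, 30.3, 29.9, 30.2, 30.0, 30.1, 30.4, 29.8, 30.2, 30.0, 36.7, 30.1]
print(edge_monitor(stream))  # [(10, 36.7)]
```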
Procedia PDF Downloads 63
83 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure facilities, reliable condition assessment of concrete structures is becoming of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting the serviceability and, eventually, structural performance. The determination of the quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure, but the data obtained can also be used for the prediction of its future development and associated risks. At present, wet chemical analysis of ground concrete samples by a laboratory is the most common test procedure for the determination of the chloride content. As the chloride content is expressed relative to the mass of the binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly. The chloride profiles obtained are based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are correlated directly to the mass of the binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented. An example of the visualization of Li transport in concrete is also shown. These examples show the potential of the method for a fast, reliable, and automated two-dimensional investigation of transport processes. Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in only one measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows two-dimensional quantitative element mapping. Results show the quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared and verified with laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure, the wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete.
Keywords: chemical analysis, concrete, LIBS, spectroscopy
Procedia PDF Downloads 105
82 Sea Level Rise and Sediment Supply Explain Large-Scale Patterns of Saltmarsh Expansion and Erosion
Authors: Cai J. T. Ladd, Mollie F. Duggan-Edwards, Tjeerd J. Bouma, Jordi F. Pages, Martin W. Skov
Abstract:
Salt marshes are valued for their role in coastal flood protection, carbon storage, and for supporting biodiverse ecosystems. As a biogeomorphic landscape, marshes evolve through the complex interactions between sea level rise, sediment supply and wave/current forcing, as well as socio-economic factors. Climate change and direct human modification could lead to a global decline in marsh extent if left unchecked. Whilst the processes of saltmarsh erosion and expansion are well understood, empirical evidence on the key drivers of long-term lateral marsh dynamics is lacking. In a GIS, saltmarsh areal extent in 25 estuaries across Great Britain was calculated from historical maps and aerial photographs, at intervals of approximately 30 years between 1846 and 2016. Data on the key perceived drivers of lateral marsh change (namely sea level rise rates, suspended sediment concentration, bedload sediment flux rates, and frequency of both river flood and storm events) were collated from national monitoring centres. Continuous datasets did not extend beyond 1970; therefore, the predictor variables that best explained the rate of change of marsh extent between 1970 and 2016 were identified using a Partial Least Squares Regression model. Information about the spread of Spartina anglica (an invasive marsh plant responsible for marsh expansion around the globe) and coastal engineering works that may have impacted marsh extent was also recorded from historical documents, and their impacts on long-term, large-scale marsh extent change were assessed. Results showed that salt marshes in the northern regions of Great Britain expanded by an average of 2.0 ha/yr, whilst marshes in the south eroded by an average of -5.3 ha/yr. Spartina invasion and coastal engineering works could not explain these trends, since a trend of either expansion or erosion preceded these events. Results from the Partial Least Squares Regression model indicated that the rate of relative sea level rise (RSLR) and the availability of suspended sediment concentration (SSC) best explained the patterns of marsh change. RSLR increased from 1.6 to 2.8 mm/yr as SSC decreased from 404.2 to 78.56 mg/l along the north-to-south gradient of Great Britain, resulting in the shift from marsh expansion to erosion. Regional differences in RSLR and SSC are due to isostatic rebound since deglaciation and to tidal amplitudes, respectively. Marshes exposed to low RSLR and high SSC likely accumulate sediment at the coast suitable for colonisation by marsh plants, and thus expand laterally. In contrast, under high RSLR deposition is unlikely to keep pace when SSC is low; average water depth at the marsh edge therefore increases, allowing larger wind-waves to trigger marsh erosion. Current global declines in sediment flux to the coast are likely to diminish the resilience of salt marshes to RSLR. Monitoring and managing suspended sediment supply is not commonplace, but may be critical to mitigating coastal impacts from climate change.
Keywords: lateral saltmarsh dynamics, sea level rise, sediment supply, wave forcing
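A minimal sketch of the Partial Least Squares Regression step described above, using scikit-learn; the predictors follow the drivers listed in the abstract, but the synthetic values and the component count are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(42)
n = 25  # one observation per estuary
# Hypothetical predictors: RSLR (mm/yr), SSC (mg/l), bedload flux, flood and storm frequency
X = np.column_stack([
    rng.uniform(1.6, 2.8, n),   # relative sea level rise
    rng.uniform(78, 404, n),    # suspended sediment concentration
    rng.normal(0, 1, n),        # bedload sediment flux (standardised)
    rng.poisson(3, n),          # river flood events
    rng.poisson(5, n),          # storm events
])
# Hypothetical response: rate of marsh extent change (ha/yr)
y = 0.02 * X[:, 1] - 3.0 * X[:, 0] + rng.normal(0, 1, n)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print("R^2:", pls.score(X, y))
print("First-component loadings (which drivers matter):", pls.x_loadings_[:, 0])
```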
Procedia PDF Downloads 134
81 Concealing Breast Cancer Status: A Qualitative Study in India
Authors: Shradha Parsekar, Suma Nair, Ajay Bailey, Binu V. S.
Abstract:
Background: Concealing of cancer-related information is seen in many low- and middle-income countries and may be associated with multiple factors. Comparatively, there is a lack of information about how women diagnosed with breast cancer disclose cancer-related information to their social contacts and vice versa. To gain more insights into participants’ experiences, opinions, expectations, and attitudes, a qualitative study is a suitable approach. Therefore, this study involving in-depth interviews was planned to lessen this gap. Methods: Interviews were conducted separately with breast cancer patients and their caregivers using a semi-structured qualitative interview guide. Purposive and convenient sampling was used to recruit patients and caregivers, respectively. Ethical clearance and permission from the tertiary hospital were obtained, and participants were selected from the Udupi district, Karnataka, India. After obtaining a list of diagnosed breast cancer cases, participants were contacted in person and their willingness to take part in the study was sought. About 39 caregivers and 35 patients at different breast cancer stages were recruited. Interviews were recorded with prior permission. Data were managed with the Atlas.ti 8 software. The recordings were transcribed, translated, and coded in two cycles. Most of the patients had stage II or III cancer. Codes were grouped according to whom the breast cancer status was concealed from and the underlying reasons. Main findings: The following codes and code families emerged from the data. 1) Concealing the breast cancer status from social contacts other than close family members (such as extended family, neighbours, and friends). Participants perceived the reasons as: a) to avoid the probing questions people ask (which have no answers); b) to avoid people paying courtesy visits (to enquire about the patient’s health, as it is Indian culture to visit a sick person), which inconveniences the patient and obliges the caregivers to host and talk to the visitors; c) to avoid people getting shocked (reacting as if cancer is different from other diseases), getting emotional or sad, or expressing fear of death; d) to avoid negative suggestions or careless talk in front of the patient that may affect the patient negatively; e) to avoid being stigmatized; f) to avoid obstacles to a child’s marriage. 2) Participants concealed the breast cancer status from young children, as they perceived that it may a) affect their studies, b) affect them emotionally, or c) scare them. 3) Concealing the breast cancer status from the patients themselves, as the caregivers feared a) worsening the patient’s health, b) the patient getting tense, c) the patient getting shocked, and d) the patient getting scared. However, some participants stressed the importance of disclosing the cancer status to social contacts or the patient to make people aware of the disease. Conclusion: The news of breast cancer spreads like electricity through a wire; therefore, the patient or family conceals it for many reasons. Globally, owing to physicians’ ethical obligations, there is an inclination towards greater disclosure of the cancer diagnosis and prognosis to the patient. However, whether the patient and social contacts should know the status remains an ongoing argument, especially in a country like India.
Keywords: breast cancer, concealing cancer status, India, qualitative study
Procedia PDF Downloads 135
80 User-Centered Design in the Development of Patient Decision Aids
Authors: Ariane Plaisance, Holly O. Witteman, Patrick Michel Archambault
Abstract:
Upon admission to an intensive care unit (ICU), all patients should discuss their wishes concerning life-sustaining interventions (e.g., cardiopulmonary resuscitation (CPR)). Without such discussions, interventions that prolong life at the cost of decreasing its quality may be used without appropriate guidance from patients. We employed user-centered design to adapt an existing decision aid (DA) about CPR to create a novel wiki-based DA adapted to the context of a single ICU and tailored to individual patients’ risk factors. During Phase 1, we conducted three weeks of ethnography of the decision-making context in our ICU to identify clinician and patient needs for a decision aid. During this time, we observed five dyads of intensivists and patients discussing their wishes concerning life-sustaining interventions. We also conducted semi-structured interviews with the attending intensivists in this ICU. During Phase 2, we conducted three rounds of rapid prototyping involving 15 patients and 11 other allied health professionals. We recorded discussions between intensivists and patients and used a standardized observation grid to collect patients’ comments and sociodemographic data. We applied content analysis to field notes, verbatim transcripts and the completed observation grids. Each round of observations and rapid prototyping iteratively informed the design of the next prototype. We also used the programming architecture of a wiki platform to embed the GO-FAR prediction rule programming code, which we linked to risk graphics software to better illustrate the calculated outcome risks. During Phase 1, we identified the need to add a section to our DA concerning invasive mechanical ventilation in addition to CPR, because both life-sustaining interventions were often discussed together by physicians. During Phase 2, we produced a context-adapted decision aid about CPR and mechanical ventilation that includes a values clarification section, questions about the patient’s functional autonomy prior to admission to the ICU and the functional decline that they would judge acceptable upon hospital discharge, risks and benefits of CPR and invasive mechanical ventilation, population-level statistics about CPR, a synthesis section to help patients come to a final decision, and an online calculator based on the GO-FAR prediction rule. Even though the three rounds of rapid prototyping led to simplifying the information in our DA, 60% (n = 3/5) of the patients involved in the last cycle still did not understand the purpose of the DA. We also identified gaps in the discussion and documentation of patients’ preferences concerning life-sustaining interventions (e.g., CPR, invasive mechanical ventilation). The final version of our DA and our online wiki-based GO-FAR risk calculator using the IconArray.com risk graphics software are available online at www.wikidecision.org and are ready to be adapted to other contexts. Our results inform producers of decision aids about the use of wikis and user-centered design to develop DAs that are better adapted to users’ needs. Further work is needed on the creation of a video version of our DA. Physicians will also need training to use our DA and to develop shared decision-making skills about goals of care.
Keywords: ethnography, intensive care units, life-sustaining therapies, user-centered design
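To illustrate how a prediction rule such as GO-FAR can be embedded as executable code behind a wiki page and linked to a risk-graphics front end, the sketch below scores a patient from boolean risk factors. The item names echo typical GO-FAR variables, but the point values and risk bands are placeholders, not the published coefficients; any real deployment would use the rule's validated weights.

```python
# Illustrative only: placeholder points, NOT the published GO-FAR weights.
SCORE_POINTS = {
    "neurologically_intact_at_admission": -10,  # favourable factor lowers the score
    "major_trauma": 10,
    "acute_stroke": 8,
    "metastatic_or_hematologic_cancer": 7,
    "septicemia": 7,
    "hepatic_insufficiency": 6,
    "hypotension": 5,
    "age_75_to_79": 5,
    "pneumonia": 1,
}

def score(patient: dict) -> int:
    """Sum points for each risk factor flagged True in the patient record."""
    return sum(pts for factor, pts in SCORE_POINTS.items() if patient.get(factor))

def risk_band(total: int) -> str:
    """Map a score to an illustrative band for the risk-graphics display."""
    if total < 0:
        return "above-average chance of good outcome"
    return "average" if total < 14 else "low or very low"

patient = {"age_75_to_79": True, "pneumonia": True}
print(score(patient), risk_band(score(patient)))
```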
Procedia PDF Downloads 354
79 Ethical Decision-Making by Healthcare Professionals during Disasters: Izmir Province Case
Authors: Gulhan Sen
Abstract:
Disasters could result in many deaths and injuries. In these difficult times, available resources are limited, the balance between demand and supply is distorted, and there is a need to make urgent interventions. The imbalance between available resources and intervention capacity makes triage a necessity in every stage of disaster response. Healthcare professionals, who are in charge of triage, have to evaluate swiftly and make ethical decisions about which patients need priority and urgent intervention given the limited available resources. For such critical times in disaster triage, ‘doing the greatest good for the greatest number of casualties’ is adopted as a code of practice. But there is no guide for healthcare professionals on ethical decision-making during disasters, and this study is expected to be used as a source in the preparation of such a guide. This study aimed to examine whether the qualities of healthcare professionals in Izmir related to disaster triage were adequate and whether these qualities influence their capacity to make ethical decisions. The researcher used a survey developed for data collection. The survey included two parts. In part one, 14 questions solicited information about socio-demographic characteristics and knowledge levels of the respondents on ethical principles of disaster triage and allocation of scarce resources. Part two included four disaster scenarios adopted from the existing literature, and respondents were asked to make ethical triage decisions based on the provided scenarios. The survey was completed by 215 healthcare professionals working in Emergency-Medical Stations, National Medical Rescue Teams and Search-Rescue-Health Teams in Izmir. The data were analyzed with SPSS software. The Chi-Square Test, Mann-Whitney U Test, Kruskal-Wallis Test and Linear Regression Analysis were utilized. According to the results, 51.2% of the participants had an inadequate knowledge level of the ethical principles of disaster triage and allocation of scarce resources. It was also found that participants did not tend to make ethical decisions in the four disaster scenarios, which included ethical dilemmas: they remained caught in dilemmas over performing cardio-pulmonary resuscitation, managing limited resources, and making end-of-life decisions. Results also showed that participants who had more experience in disaster triage teams were more likely to make ethical decisions on disaster triage than those with little or no experience in disaster triage teams (p < 0.01). Moreover, as their knowledge level of the ethical principles of disaster triage and allocation of scarce resources increased, their tendency to make ethical decisions also increased (p < 0.001). In conclusion, having an inadequate knowledge level of ethical principles and being inexperienced affect healthcare professionals' ethical decision-making during disasters. The results of this study therefore suggest that more training on disaster triage should be provided in the pre-impact phase of disasters. In addition, the ethical dimension of disaster triage should be included in the syllabi of ethics classes in vocational training for healthcare professionals. Drills, simulations, and board exercises can be used to improve the ethical decision-making abilities of healthcare professionals. Disaster scenarios in which ethical dilemmas are faced should be prepared for such applied training programs.
Keywords: disaster triage, medical ethics, ethical principles of disaster triage, ethical decision-making
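A minimal sketch of the group comparisons named in the methods (Mann-Whitney U for two experience groups, Kruskal-Wallis for three or more) using scipy; the scores below are synthetic stand-ins for the survey's ethical decision scores, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical ethical-decision scores by disaster-triage experience level
no_experience = rng.normal(50, 10, 60)
some_experience = rng.normal(55, 10, 80)
much_experience = rng.normal(62, 10, 75)

u_stat, p_u = stats.mannwhitneyu(no_experience, much_experience)    # two groups
h_stat, p_h = stats.kruskal(no_experience, some_experience, much_experience)  # three groups
print(f"Mann-Whitney U p={p_u:.4f}, Kruskal-Wallis p={p_h:.4f}")
```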
Procedia PDF Downloads 245
78 Using Business Interactive Games to Improve Management Skills
Authors: Nuno Biga
Abstract:
Continuous process improvement is a permanent challenge for the managers of any organization. Lean management means that efficiency gains can be obtained through a systematic framework able to explore synergies between processes and eliminate waste of time and other resources. Leadership in organizations determines the efficiency of teams through its influence on collaborators, their motivation, and the consolidation of a feeling of (group) ownership. The “organization health” depends on the leadership style, which is directly influenced by the intrinsic characteristics of each personality and by leadership ability (leadership competencies). Therefore, it is important that managers can correct in advance any deviation from the expected exercise of leadership. Top management teams must assume the role of regulatory agents of leadership within the organization, ensuring the monitoring of actions and the alignment of managers in accordance with humanist standards anchored in a visible Code of Ethics and Conduct. This article is built around an innovative model of “Business Interactive Games” (BI GAMES) that simulates a real-life management environment. It shows that the strategic management of operations depends on a complex set of variables, endogenous and exogenous to the intervening agents, that require specific skills and a set of critical processes to monitor. BI GAMES are designed for each management reality and have already been applied successfully in several contexts over the last five years, comprising educational and enterprise settings. Results from these experiences are used to demonstrate how serious games in working living labs contributed to improving the organizational environment by focusing on the evaluation of players’ (agents’) skills, empowering their capabilities, and identifying the critical factors that create value in each context. The implementation of the BI GAMES simulator highlights that leadership skills are decisive for the performance of teams, regardless of the sector of activity and the specificities of each organization whose operation is intended to be simulated. The players in BI GAMES can be managers or employees in different roles in the organization, or students in the learning context. They interact with each other and are asked to decide/make choices in the presence of several options for the follow-up operation, for example, when the costs and benefits are not fully known but depend on the actions of external parties (e.g., subcontracted enterprises and actions of regulatory bodies). Each team must evaluate the resources used/needed in each operation, identify bottlenecks in the system of operations, assess the performance of the system through a set of key performance indicators, and set a coherent strategy to improve efficiency. Through gamification and the serious games approach, organizational managers will be able to confront the scientific approach in strategic decision-making with their real-life approach based on experience. Considering that each BI GAMES team has a leader (chosen by draw), the performance of this player has a direct impact on the results obtained. Leadership skills are thus put to the test during the simulation of the functioning of each organization, allowing conclusions to be drawn at the end of the simulation, including its discussion amongst participants.
Keywords: business interactive games, gamification, management empowerment skills, simulation living labs
Procedia PDF Downloads 112
77 Design, Control and Implementation of 300Wp Single Phase Photovoltaic Micro Inverter for Village Nano Grid Application
Authors: Ramesh P., Aby Joseph
Abstract:
Micro inverters provide a module-embedded solution for harvesting energy from small-scale solar photovoltaic (PV) panels. In addition to higher modularity and reliability (25 years of life), the micro inverter has inherent advantages: it avoids long DC cables, eliminates module mismatch losses, minimizes the partial shading effect, and improves safety and flexibility in installations. Due to the above-stated benefits, renewable energy technology with solar photovoltaic (PV) micro inverters is becoming more widespread in village nano grid applications, ensuring grid independence for rural communities and areas without access to electricity. While the primary objective of this paper is to discuss the problems related to rural electrification, the concept can also be extended to urban installations with grid connectivity. This work presents a comprehensive analysis of the power circuit design, control methodologies and prototyping of a 300Wₚ single phase PV micro inverter. This paper investigates two different topologies for PV micro inverters, based on the one hand on a single stage flyback/forward PV micro inverter configuration and on the other hand on a double stage configuration comprising a DC-DC converter and an H-bridge DC-AC inverter. This work covers power decoupling techniques to reduce the input filter capacitor size needed to buffer the double line (100 Hz) ripple energy, and eliminates the use of electrolytic capacitors. The double line oscillation propagated back to the PV module will affect the Maximum Power Point Tracking (MPPT) performance, and the grid current will be distorted. To mitigate this issue, an independent MPPT control algorithm is developed in this work to reject the propagation of this double line ripple oscillation to the PV side, improving MPPT performance, and to the grid side, improving current quality. Here, the power hardware topology accepts wide input voltage variation and consists of suitably rated MOSFET switches, galvanically isolated gate drivers, high-frequency magnetics and film capacitors with a long lifespan. The digital controller hardware platform, with external peripheral interfaces, is developed using the floating point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the PV micro inverter is written in C and was developed using the Code Composer Studio Integrated Development Environment (IDE). In this work, the prototype hardware for the single phase photovoltaic micro inverter with the double stage configuration was developed, and a comparative analysis between the above-mentioned configurations with experimental results will be presented.
Keywords: double line oscillation, micro inverter, MPPT, nano grid, power decoupling
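For context, a generic perturb-and-observe MPPT loop is sketched below in Python; this is the textbook baseline, not the authors' ripple-rejecting algorithm, and the step size, sensor callbacks, and toy P-V curve are all hypothetical.

```python
def mppt_perturb_observe(read_voltage, read_current, set_voltage_ref,
                         v_ref=30.0, step=0.2):
    """Textbook perturb-and-observe MPPT: nudge the PV voltage reference and
    keep moving in whichever direction power last increased."""
    prev_power = read_voltage() * read_current()
    direction = +1
    while True:  # one iteration per control period in firmware
        v_ref += direction * step
        set_voltage_ref(v_ref)
        power = read_voltage() * read_current()
        if power < prev_power:
            direction = -direction  # passed the peak, reverse the perturbation
        prev_power = power
        yield v_ref, power  # yield so a simulation can step the loop

# Minimal simulation against a toy P-V curve peaking at 34 V with 8 W maximum
state = {"v": 30.0}
def pv_current():
    p = max(0.0, 8.0 - 0.1 * (state["v"] - 34.0) ** 2)
    return p / state["v"]

loop = mppt_perturb_observe(lambda: state["v"], pv_current,
                            lambda v: state.update(v=v))
for _ in range(60):
    v, p = next(loop)
print(f"Settled near {v:.1f} V, {p:.2f} W")  # oscillates around the 34 V peak
```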
Procedia PDF Downloads 13476 Facilitating Primary Care Practitioners to Improve Outcomes for People With Oropharyngeal Dysphagia Living in the Community: An Ongoing Realist Review
Authors: Caroline Smith, Professor Debi Bhattacharya, Sion Scott
Abstract:
Introduction: Oropharyngeal dysphagia (OD) affects around 15% of older people; however, it is often unrecognised and underdiagnosed until patients are hospitalised. There is a need for primary care healthcare practitioners (HCPs) to assume a proactive role in identifying and managing OD to prevent adverse outcomes such as aspiration pneumonia. Understanding the determinants of primary care HCPs undertaking this new behaviour provides the targets for intervention. This realist review, underpinned by the Theoretical Domains Framework (TDF), aims to synthesise the relevant literature and develop programme theories to understand which interventions work, how they work and under what circumstances, in facilitating HCPs to prevent harm from OD. Combining realist methodology with behavioural science permits the conceptualisation of intervention components as theoretical behavioural constructs, thus informing the design of a future behaviour change intervention. Furthermore, through the TDF’s linkage to a taxonomy of behaviour change techniques, we will identify corresponding behaviour change techniques to include in this intervention. Methods & analysis: We are following the five steps for undertaking a realist review: 1) clarifying the scope, 2) searching the literature, 3) appraising and extracting data, 4) synthesising evidence, and 5) evaluation. We have searched the Medline, Google Scholar, PubMed, EMBASE, CINAHL, AMED, Scopus and PsycINFO databases. We are obtaining additional evidence through grey literature, snowball sampling, lateral searching and consulting the stakeholder group. Literature is being screened, evaluated and synthesised in Excel and NVivo. We will appraise evidence in relation to its relevance and rigour. Data will be extracted and synthesised according to their relation to the initial programme theories (IPTs). IPTs were constructed after the preliminary literature search, informed by the TDF and with input from a stakeholder group of patient and public involvement advisors, general practitioners, speech and language therapists, geriatricians and pharmacists. We will follow the Realist and Meta-narrative Evidence Syntheses: Evolving Standards (RAMESES) quality and publication standards to report the study results. Results: In this ongoing review, our search has identified 1417 manuscripts, with approximately 20% progressing to full-text screening. We inductively generated 10 IPTs that hypothesise that practitioners require: the knowledge to spot the signs and symptoms of OD; the skills to provide initial advice and support; and access to resources in their working environment to support them in conducting these new behaviours. We mapped the 10 IPTs to 8 TDF domains and then generated a further 12 IPTs deductively, using domain definitions to fulfil the remaining 6 TDF domains. Deductively generated IPTs broadened our thinking to consider domains such as ‘Emotion’, ‘Optimism’ and ‘Social Influence’, e.g., if practitioners perceive that patients, carers and relatives expect initial advice and support, then they will be more likely to provide this, because they will feel obligated to do so. After prioritisation with stakeholders using a modified nominal group technique approach, a maximum of 10 IPTs will progress to testing against the literature.Keywords: behaviour change, deglutition disorders, primary healthcare, realist review
Procedia PDF Downloads 8575 Innocent Victims and Immoral Women: Sex Workers in the Philippines through the Lens of Mainstream Media
Authors: Sharmila Parmanand
Abstract:
This paper examines dominant media representations of prostitution in the Philippines and interrogates sex workers’ interactions with the media establishment. This analysis of how sex workers are constituted in media, often as both innocent victims and immoral actors, contributes to an understanding of public discourse on sex work in the Philippines, where decriminalisation has recently been proposed and sex workers are currently classified as potential victims under anti-trafficking laws but also as criminals under the penal code. The first part is an analysis of media coverage of two prominent themes on prostitution: first, raid and rescue operations conducted by law enforcement; and second, prostitution on military bases and tourism hotspots. As a result of pressure from activists and international donors, these two themes often define the policy conversations on sex work in the Philippines. The discourses in written and televised news reports and documentaries from established local and international media sources that address these themes are explored through content analysis. Conclusions are drawn based on specific terms commonly used to refer to sex workers, how sex workers are seen as performing their cultural roles as mothers and wives, how sex work is depicted, associations made between sex work and public health, representations of clients and managers and ‘rescuers’ such as the police, anti-trafficking organisations, and faith-based groups, and which actors are presumed to be issue experts. Images of how prostitution is used as a metaphor for relations between the Philippines and foreign nations are also deconstructed, along with common tropes about developing world female subjects. In general, sex workers are simultaneously portrayed as bad mothers who endanger their family’s morality but also as long-suffering victims who endure exploitation for the sake of their children. They are also depicted as unclean, drug-addicted threats to public health. Their managers and clients are portrayed as cold, abusive, and sometimes violent, and their rescuers as moral and altruistic agents who are essential for sex workers’ rehabilitation and restoration as virtuous citizens. The second part explores sex workers’ own perceptions of their interactions with media, through interviews with members of the Philippine Sex Workers Collective, a loose organisation of sex workers around the Philippines. They reveal that they are often excluded by media practitioners and that they do not feel that they have space for meaningful self-revelation about their work when they do engage with journalists, who seem to have an overt agenda of depicting them as either victims or women of loose morals. In their assessment, media narratives do not necessarily reflect their lived experiences, and in some cases, coverage of rescues and raid operations endangers their privacy and instrumentalises their suffering. Media representations of sex workers may produce subject positions such as ‘victims’ or ‘criminals’ and legitimize specific interventions while foreclosing other ways of thinking. Further, in light of media’s power to reflect and shape public consciousness, it is a valuable academic and political project to examine whether sex workers are able to assert agency in determining how they are represented.Keywords: discourse analysis, news media, sex work, trafficking
Procedia PDF Downloads 39374 Harnessing the Benefits and Mitigating the Challenges of Neurosensitivity for Learners: A Mixed Methods Study
Authors: Kaaryn Cater
Abstract:
People vary in how they perceive, process, and react to internal, external, social, and emotional environmental factors; some are more sensitive than others. Highly sensitive people have a highly reactive nervous system and are more impacted by positive and negative environmental conditions (Differential Susceptibility). Further, some sensitive individuals are disproportionately able to benefit from positive and supportive environments without necessarily suffering negative impacts in less supportive environments (Vantage Sensitivity). Environmental sensitivity is underpinned by physiological, genetic, and personality/temperamental factors, and the phenotypic expression of high sensitivity is Sensory Processing Sensitivity. The hallmarks of Sensory Processing Sensitivity are deep cognitive processing, emotional reactivity, high levels of empathy, noticing environmental subtleties, a tendency to observe new and novel situations, and a propensity to become overwhelmed when over-stimulated. Educational advantages associated with high sensitivity include creativity, enhanced memory, divergent thinking, giftedness, and metacognitive monitoring. High sensitivity can also lead to some educational challenges, particularly managing multiple conflicting demands and negotiating low sensory thresholds. A mixed methods study was undertaken. In the first, quantitative study, participants completed the Perceived Success in Study Survey (PSISS) and the Highly Sensitive Person Scale (HSPS-12). The inclusion criterion was current or previous postsecondary education experience. The survey was presented on social media, and snowball recruitment was employed (n=365). The Excel spreadsheets were uploaded to the Statistical Package for the Social Sciences (SPSS) v26, and descriptive statistics found normal distribution. T-tests and analysis of variance (ANOVA) calculations found no difference between the responses of demographic groups, and Principal Components Analysis and post-hoc Tukey calculations identified positive associations between high sensitivity and three of the five PSISS factors. Further ANOVA calculations found positive associations between the PSISS and two of the three sensitivity subscales. The study included a response field to register interest in further research. Respondents who scored in the 70th percentile on the HSPS-12 were invited to participate in a semi-structured interview. Thirteen interviews were conducted remotely (12 female). Reflexive inductive thematic analysis was employed to analyse the data, and a descriptive approach was employed to present data reflective of participant experience. The results of this study found that highly sensitive students prioritize work-life balance; employ a range of practical metacognitive study and self-care strategies; value independent learning; connect with learning that is meaningful; and are bothered by aspects of the physical learning environment, including lighting, noise, and indoor environmental pollutants. There is a dearth of research investigating sensitivity in the educational context, and these studies highlight the need to promote widespread education-sector awareness of environmental sensitivity, and the need to include sensitivity in sector and institutional diversity and inclusion initiatives.Keywords: differential susceptibility, highly sensitive person, learning, neurosensitivity, sensory processing sensitivity, vantage sensitivity
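A minimal sketch of the quantitative pipeline reported above (ANOVA across demographic groups and associations between HSPS-12 scores and PSISS factors). The study used SPSS; this Python fragment is only a hypothetical equivalent, and the file and column names ('survey.csv', 'hsps', 'psiss_f1', 'group') are assumptions for illustration.

```python
import pandas as pd
from scipy import stats

# Hypothetical file and columns: 'hsps' (HSPS-12 mean), 'psiss_f1'
# (one PSISS factor), 'group' (a demographic category).
df = pd.read_csv("survey.csv")

# One-way ANOVA: does the PSISS factor differ across demographic groups?
samples = [g["psiss_f1"].to_numpy() for _, g in df.groupby("group")]
f_stat, p_anova = stats.f_oneway(*samples)

# Association between sensitivity and perceived study success
r, p_r = stats.pearsonr(df["hsps"], df["psiss_f1"])
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}; correlation: r={r:.2f}, p={p_r:.3f}")
```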
Procedia PDF Downloads 6573 Risking Injury: Exploring the Relationship between Risk Propensity and Injuries among an Australian Rules Football Team
Authors: Sarah A. Harris, Fleur L. McIntyre, Paola T. Chivers, Benjamin G. Piggott, Fiona H. Farringdon
Abstract:
Australian Rules Football (ARF) is an invasion-based contact field sport with over one million participants. The contact nature of the game increases exposure to all injuries, including head trauma. Evidence suggests that both concussion and sub-concussive traumas such as head knocks may damage the brain, in particular the prefrontal cortex. The prefrontal cortex may not reach full maturity until a person is in their early twenties, with males taking longer to mature than females. Repeated trauma to the prefrontal cortex during maturation may lead to negative social, cognitive and emotional effects. It is also during this period that males exhibit high levels of risk-taking behaviours. The relationship between risk propensity and the incidence of injury is an unexplored area of research. Little research has considered whether players’ (especially younger players’) levels of risk propensity in everyday life place them at an increased risk of injury. Hence, the current study investigated whether a relationship exists between risk propensity and self-reported injuries, including diagnosed concussion and head knocks, among male ARF players aged 18 to 31 years. Method: The study was conducted over 22 weeks with one West Australian Football League (WAFL) club during the 2015 competition. Pre-season risk propensity was measured using the 7-item self-report Risk Propensity Scale. Possible scores ranged from 9 to 63, with higher scores indicating higher risk propensity. Players reported their self-perceived injuries (concussion, head knocks, upper-body and lower-body injuries) fortnightly using the WAFL Injury Report Survey (WIRS). A unique ID code was used to ensure player anonymity, which also enabled linkage of survey responses and injury data tracking over the season. A General Linear Model (GLM) was used to analyse whether there was a relationship between risk propensity score and total number of injuries for each injury type. Results: Seventy-one players (N=71) with an age range of 18.40 to 30.48 years and a mean age of 21.92 years (±2.96 years) participated in the study. Players’ mean risk propensity score was 32.73, SD ±8.38. Four hundred and ninety-five (495) injuries were reported. The most frequently reported injury was head knocks, representing 39.19% of total reported injuries. The GLM identified a significant relationship between risk propensity and head knocks (F=4.17, p=.046). No other injury types were significantly related to risk propensity. Discussion: A positive relationship between risk propensity and head trauma in contact sports (specifically WAFL) was discovered. Assessing players’ risk propensity may therefore identify those more at risk of head injuries, potentially leading to greater monitoring and education of these players throughout the season regarding self-identification of head knocks and symptoms that may indicate trauma to the brain. This is important because many players involved in WAFL are in their late teens or early 20s and hence may be at greater risk of negative outcomes if they experience repeated head trauma. Continued education and research into the risks associated with head injuries has the potential to improve player well-being.Keywords: football, head injuries, injury identification, risk
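The core analysis, relating the Risk Propensity Scale score to the seasonal count of each injury type, can be sketched as follows. The abstract reports a General Linear Model; the fragment below substitutes a Poisson GLM, a common choice for count outcomes, so it should be read as a plausible reconstruction rather than the authors' exact model, and the file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical columns: 'risk' (Risk Propensity Scale total) and
# 'head_knocks' (count of self-reported head knocks over the season).
df = pd.read_csv("wirs_season.csv")

X = sm.add_constant(df["risk"])
# Poisson GLM for the injury count, used here purely for illustration
model = sm.GLM(df["head_knocks"], X, family=sm.families.Poisson()).fit()
print(model.summary())
```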
Procedia PDF Downloads 33372 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding
Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta
Abstract:
Chimneys are generally tall and slender structures with circular cross-sections, which makes them highly prone to wind forces. Wind exerts pressure on the wall of the chimney, producing unwanted forces. Vortex-induced oscillation is one such excitation, which can lead to the failure of chimneys. Vortex-induced oscillation of chimneys is therefore of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects. Comparatively few prototype measurement data have been recorded to verify the proposed theoretical models. For this reason, the theoretical models developed with the help of experimental laboratory data are utilized for analyzing chimneys for vortex-induced forces. This calls for a reliability analysis of the predicted responses of chimneys to the vortex shedding phenomenon. Although a considerable literature exists on the vortex-induced oscillation of chimneys, including code provisions, reliability analysis of chimneys against failure caused by vortex shedding is scanty. In the present study, the reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are hence ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency-domain spectral analysis using a matrix approach. For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for the aero-elastic effects. The double-barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement of the chimney. Assuming the annual distribution of the mean wind velocity to be a Gumbel type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement of the chimney is determined. The reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a wall thickness of 0.3 m has been taken as an illustrative example. The terrain condition is assumed to be that corresponding to a city center. The expression for the PSDF of the vortex shedding force is taken as that used by Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement of the chimney is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration
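A minimal sketch of the final step of the procedure described above: integrating a conditional threshold-crossing probability over the assumed Gumbel type-I distribution of the annual mean wind velocity to obtain one point of the fragility curve. The conditional crossing model below is a toy placeholder, not Vanmarcke's double-barrier formula, and the Gumbel parameters are illustrative assumptions.

```python
import numpy as np

def gumbel_pdf(v, mu, beta):
    """Gumbel type-I density for the annual mean wind velocity."""
    z = (v - mu) / beta
    return np.exp(-(z + np.exp(-z))) / beta

def annual_crossing_probability(threshold, cond_prob, mu=25.0, beta=4.0):
    """Integrate P(crossing | v) over the annual Gumbel wind distribution.

    cond_prob(threshold, v) would come from the spectral analysis and
    Vanmarcke's double-barrier formula; here it is supplied by the caller.
    """
    v = np.linspace(0.0, 80.0, 2000)
    integrand = cond_prob(threshold, v) * gumbel_pdf(v, mu, beta)
    return np.trapz(integrand, v)

# Toy conditional crossing model (a placeholder, not Vanmarcke's formula)
toy = lambda b, v: 1.0 - np.exp(-np.maximum(v - 20.0, 0.0) / (5.0 * b))
print(annual_crossing_probability(1.0, toy))
```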
Procedia PDF Downloads 16071 Concentration of Droplets in a Transient Gas Flow
Authors: Timur S. Zaripov, Artur K. Gilfanov, Sergei S. Sazhin, Steven M. Begg, Morgan R. Heikal
Abstract:
The calculation of the concentration of inertial droplets in complex flows is encountered in the modelling of numerous engineering and environmental phenomena; for example, fuel droplets in internal combustion engines and airborne pollutant particles. The results of recent research, focused on the development of methods for calculating concentration and their implementation in the commercial CFD code ANSYS Fluent, are presented here. The study is motivated by the investigation of mixture preparation processes in internal combustion engines with direct injection of fuel sprays. Two methods are used in our analysis: the Fully Lagrangian method (also known as the Osiptsov method) and the Eulerian approach. The Osiptsov method predicts droplet concentrations along path lines by solving the equations for the components of the Jacobian of the Eulerian-Lagrangian transformation. This method significantly decreases the computational requirements, as it does not require the tracking of large numbers of droplets as in the conventional Lagrangian approach. In the Eulerian approach, the average droplet velocity is expressed as a function of the carrier-phase velocity via an expansion over the droplet response time, and the transport equation can be solved in Eulerian form. The advantage of this method is that the droplet velocity can be found without solving additional partial differential equations for the droplet velocity field. The predictions of the two approaches were compared in the analysis of the problem of a dilute gas-droplet flow around an infinitely long circular cylinder. The concentrations of inertial droplets, with Stokes numbers of 0.05, 0.1 and 0.2, in steady-state and transient laminar flow conditions, were determined at various Reynolds numbers. In the steady-state case, flows with Reynolds numbers of 1, 10, and 100 were investigated. It has been shown that the results predicted by the two methods are almost identical at small Reynolds and Stokes numbers. For larger values of these numbers (Stokes — 0.1, 0.2; Reynolds — 10, 100), the Eulerian approach predicted a wider spread in concentration in the perturbations caused by the cylinder, which can be attributed to the averaged droplet velocity field. The transient droplet flow case was investigated for a Reynolds number of 200. Both methods predicted a high droplet concentration in the zones of high strain rate and low concentrations in zones of high vorticity. The maxima of droplet concentration predicted by the Osiptsov method were up to two orders of magnitude greater than those predicted by the Eulerian method; a significant variation for an approach widely used in engineering applications. Based on these comparisons, the Osiptsov method provides a more precise description of the local properties of the inertial droplet flow. The method has been applied to the analysis of experimental observations of a liquid gasoline spray at representative fuel injection pressure conditions. The preliminary results show good qualitative agreement between the predictions of the model and the experimental data.Keywords: internal combustion engines, Eulerian approach, fully Lagrangian approach, gasoline fuel sprays, droplets and particle concentrations
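The Fully Lagrangian (Osiptsov) idea, augmenting each droplet trajectory with ODEs for the components of the Jacobian of the Eulerian-Lagrangian transformation so that concentration follows algebraically from n/n0 = 1/|det J|, can be sketched for a simple analytic carrier flow. The stagnation-point flow, response time, and initial conditions below are illustrative assumptions, not the cylinder or engine-spray configurations of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

tau = 0.1  # droplet response time (Stokes time), illustrative value

def u(x):
    """Carrier velocity: a steady 2-D stagnation-point flow (illustrative)."""
    return np.array([x[0], -x[1]])

def grad_u(x):
    """Velocity gradient du_i/dx_k of the stagnation flow."""
    return np.array([[1.0, 0.0], [0.0, -1.0]])

def rhs(t, y):
    x, v = y[0:2], y[2:4]
    J = y[4:8].reshape(2, 2)   # J_ij = dx_i/dx0_j (Jacobian of the mapping)
    W = y[8:12].reshape(2, 2)  # W_ij = dv_i/dx0_j (its velocity counterpart)
    dx = v
    dv = (u(x) - v) / tau                  # Stokes drag
    dJ = W
    dW = (grad_u(x) @ J - W) / tau         # differentiated drag equation
    return np.concatenate([dx, dv, dJ.ravel(), dW.ravel()])

# Droplet injected with the local fluid velocity, so W(0) = grad_u(x0)
x0 = np.array([0.5, 1.0])
y0 = np.concatenate([x0, u(x0), np.eye(2).ravel(), grad_u(x0).ravel()])
sol = solve_ivp(rhs, (0.0, 2.0), y0, dense_output=True)

J_end = sol.y[4:8, -1].reshape(2, 2)
n_ratio = 1.0 / abs(np.linalg.det(J_end))  # n(t)/n0 along the trajectory
print(f"concentration ratio n/n0 = {n_ratio:.3f}")
```

Because the concentration comes from one determinant per trajectory, a handful of path lines suffices where a conventional Lagrangian estimate would need very many tracked droplets per cell.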
Procedia PDF Downloads 25770 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms
Authors: Abdul Rehman, Bo Liu
Abstract:
Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses, which contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce secondary flow loss. In this paper, non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments and the optimization. All flow simulations were conducted using steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves. Each cut, having multiple control points, was created along the virtual streamlines in the blade channel. For the design of experiments, each sample was generated based on values automatically chosen for the control points defined during parameterization. The optimization was performed using two algorithms: a stochastic algorithm and a gradient-based algorithm. For the stochastic algorithm, a genetic algorithm based on an artificial neural network was used as the optimization method in order to achieve the global optimum; the evaluation of successive design iterations was performed using the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it requires derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant, and the performance was quantified using a multi-objective function. Within these two classes of optimization method, four optimization cases were considered: the hub only, the shroud only, the combination of hub and shroud, and a sequential case in which the shroud endwall was optimized using the already-optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub was increased. The shroud optimization resulted in an increase in efficiency, with total pressure loss and entropy reduced. The combination of hub and shroud did not match the results achieved in the individual hub and shroud cases, which may be caused by the fact that there were too many control variables. The fourth case showed the best result because the optimized hub was used as the initial geometry for optimizing the shroud: the efficiency increased more than in the individual optimization cases, with a mass flow rate equal to that of the baseline turbine design. The results of the artificial neural network and the conjugate gradient method were compared.Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization
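The endwall construction described above, perturbation heights defined by Bezier curves along cuts that follow virtual streamlines, reduces to evaluating Bernstein polynomials over the control points varied in the design of experiments. A minimal sketch, with illustrative control values and the endpoints pinned to zero so each cut blends into the unperturbed annulus:

```python
import numpy as np
from math import comb

def bezier(control_points, t):
    """Evaluate a Bezier curve from its control points (Bernstein form)."""
    n = len(control_points) - 1
    t = np.asarray(t)[:, None]
    basis = np.array([comb(n, k) for k in range(n + 1)]) * \
            t ** np.arange(n + 1) * (1.0 - t) ** (n - np.arange(n + 1))
    return basis @ np.asarray(control_points)

# One endwall cut along a virtual streamline: the control values are the
# perturbation heights (the DoE design variables); numbers are illustrative.
heights = [0.0, 1.5, -0.8, 2.0, 0.0]   # mm, endpoints pinned to zero
s = np.linspace(0.0, 1.0, 50)          # normalized streamwise coordinate
dz = bezier(heights, s)                # endwall height perturbation along the cut
print(dz[:5])
```

Stacking several such cuts across the pitch and interpolating between them yields the full non-axisymmetric surface that the optimizer perturbs.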
Procedia PDF Downloads 22569 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy
Authors: M. Regina Carreira-Lopez
Abstract:
Lightweight ontologies seek to make concrete the union relationship between a parent node and a secondary node, also called the "child node". This logical relation (L) can be formally defined as a triple ontological relation LO ≡ ⟨LN, LE, LC⟩, where LN represents a finite set of nodes (N); LE is a set of entities (E), each of which represents a relationship between nodes, forming a rooted tree ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) to the end of the Spanish Civil War (1939). The model aims to increase documentary relevance by applying an inverse frequency of co-occurrent hypernymy phenomena to a concrete dataset of textual corpora, with the RMySQL package. Mondoc profiles archival utilities implementing SQL programming code and allows data export to XML schemas, achieving semantic and faceted analysis of speech by analyzing keywords in context (KWIC). The methodology applies random and unrestricted sampling techniques with RMySQL to verify the resonance phenomenon of inverse documentary relevance with respect to the number of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research also evidences that co-associations between (t) and its corresponding synonyms and antonyms (synsets) are also inverse. The results from grouping facets, or polysemic words with synsets, in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) indicate how to proceed with the semantic indexing of hypernymy phenomena for subject-heading lists and authority lists for documentary and archival purposes. Mondoc contributes to the development of web directories and achieves a more selective search of e-documents (classification ontology). It can also foster the production of online catalogs for semantic authorities, or concepts, through XML schemas, because its applications could be used for implementing data models, after prior adaptation of the base ontology to structured meta-languages such as OWL and RDF (descriptive ontology). Mondoc serves the classification of concepts and applies a semantic indexing approach based on facets. It enables information retrieval, as well as quantitative and qualitative data interpretation. The model reproduces a tuple ⟨LN, LE, LT, LCF, L, BKF⟩ where LN is a set of nodes that connect with other nodes to form a rooted tree in ⟨LN, LE⟩; LT specifies a set of terms; and LCF acts as a finite set of concepts, encoded in a formal language, L. Mondoc resolves only partial problems of linguistic ambiguity (in the cases of synonymy and antonymy); neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming programming developments should target meta-languages oriented to structured documents in XML.Keywords: hypernymy, information retrieval, lightweight ontology, resonance
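The inverse-frequency weighting of co-occurrent terms that Mondoc applies (implemented in the study with SQL via the RMySQL package) follows the familiar inverse-document-frequency pattern: a term found in more documents of the set D receives a lower weight. A minimal Python sketch with a toy corpus, purely for illustration:

```python
import math
from collections import defaultdict

def inverse_document_frequency(docs):
    """IDF-style weighting: terms spread across more documents weigh less."""
    df = defaultdict(int)
    for doc in docs:
        for term in set(doc.lower().split()):
            df[term] += 1
    n = len(docs)
    return {t: math.log(n / c) for t, c in df.items()}

corpus = [
    "migration across the atlantic ports",
    "atlantic migration records and ports",
    "civil war archives",
]
idf = inverse_document_frequency(corpus)
# Terms appearing in two or more documents (the co-occurrence condition)
shared = {t: w for t, w in idf.items() if w < math.log(len(corpus))}
print({t: round(w, 3) for t, w in shared.items()})
```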
Procedia PDF Downloads 12568 Clinico-pathological Study of Xeroderma Pigmentosa: A Case Series of Eight Cases
Authors: Kakali Roy, Sahana P. Raju, Subhra Dhar, Sandipan Dhar
Abstract:
Introduction: Xeroderma pigmentosa (XP) is a rare inherited (autosomal recessive) disease resulting from impairment in DNA repair, involving the recognition and repair of ultraviolet radiation (UVR) induced DNA damage in the nucleotide excision repair pathway. This results in increased photosensitivity, UVR-induced damage to skin and eye, increased susceptibility to skin and ocular cancer, and progressive neurodegeneration in some patients. XP is present worldwide, with a higher incidence in areas having frequent consanguinity. Because the disease is extremely rare, there is limited literature on XP and its associated complications. Here, the clinico-pathological experience (spectrum of clinical presentation, histopathological findings of malignant skin lesions, and progression) of managing 8 cases of XP is presented. Methodology: A retrospective study was conducted in a pediatric tertiary care hospital in eastern India during a ten-year period from 2013 to 2022. A clinical diagnosis was made based on severe sunburn or premature photo-aging and/or the onset of cutaneous malignancies at an early age (first decade) against a background of consanguinity and an autosomal recessive inheritance pattern in the family. Results: The mean age of presentation was 1.2 years (range 7 months to 3 years), and three children presented during their infancy. The male-to-female ratio was 5:3, and all were born of consanguineous marriage. Patients presented with dermatological manifestations (100%), followed by ophthalmic (75%) and/or neurological symptoms (25%). Patients had normal skin at birth but soon developed extreme sensitivity to UVR in the form of exaggerated sun tanning, burning, and blistering on minimal sun exposure, followed by abnormal skin pigmentation such as freckles and lentiginosis. Subsequently, over time there was progressive xerosis, atrophy, wrinkling, and poikiloderma. Six patients had varied degrees of ocular involvement, while three of them had severe manifestations, including madarosis, tylosis, ectropion, lagophthalmos, phthisis bulbi, clouding and scarring of the cornea with complete or partial loss of vision, and ophthalmic malignancies. 50% (n=4) of the cases had pre-malignant (actinic keratosis) and malignant lesions of the skin and eye, including melanoma and non-melanoma skin cancer (NMSC) such as squamous cell carcinoma (SCC) and basal cell carcinoma (BCC), in their early childhood. One patient had the simultaneous occurrence of multiple malignancies (SCC, BCC, and melanoma). Subnormal intelligence was noticed as a neurological feature, and none had sensorineural hearing loss, microcephaly, neuroregression, or neurological deficit. All the patients were managed by a multidisciplinary team of pediatricians, dermatologists, ophthalmologists, neurologists and psychiatrists. Conclusion: Although to date there is no complete cure for XP and the disease is ultimately fatal, increased awareness, early diagnosis followed by persistent, vigorous protection from UVR, and regular screening for early detection of malignancies, along with psychological support, can drastically improve patients’ quality of life and life expectancy. Further research is required on formulating the optimal management of XP, specifically the role and possibilities of gene therapy in XP.Keywords: childhood malignancies, dermato-pathological findings, eastern India, Xeroderma pigmentosa
Procedia PDF Downloads 7667 Regulatory and Economic Challenges of AI Integration in Cyber Insurance
Authors: Shreyas Kumar, Mili Shangari
Abstract:
Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. 
This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware
Procedia PDF Downloads 3366 Effect of a Chatbot-Assisted Adoption of Self-Regulated Spaced Practice on Students' Vocabulary Acquisition and Cognitive Load
Authors: Ngoc-Nguyen Nguyen, Hsiu-Ling Chen, Thanh-Truc Lai Huynh
Abstract:
In foreign language learning, vocabulary acquisition has consistently posed challenges to learners, especially those at lower levels. Conventional approaches often fail both to promote vocabulary learning and to ensure engaging experiences. The emergence of mobile learning, particularly the integration of chatbot systems, has offered alternative ways to facilitate this practice. Chatbots have proven effective in educational contexts by offering interactive learning experiences in a constructivist manner, and these tools have attracted attention in the field of mobile-assisted language learning (MALL) in recent years. This research is conducted in an English for Specific Purposes (ESP) course at the A2 level of the CEFR, designed for non-English majors. Participants are first-year Vietnamese students aged 18 to 20 at a university. This quasi-experimental study follows a pretest-posttest control group design over five weeks, with two classes randomly assigned as the experimental and control groups. The experimental group engages in chatbot-assisted spaced practice with SRL components, while the control group uses the same spaced practice without SRL. The two classes are taught by the same lecturer. Data are collected through pre- and post-tests, cognitive load surveys, and semi-structured interviews. The combination of self-regulated learning (SRL) and distributed practice, grounded in the spacing effect, forms the basis of the present study. SRL elements, which concern goal setting and strategy planning, are integrated into the system. The spaced practice method, similar to those used in widely recognized learning platforms such as Duolingo and Anki flashcards, spreads learning out over multiple sessions. This study’s design features quizzes that progressively increase in difficulty, targeting both the Recognition-Recall and Comprehension-Use dimensions for comprehensive vocabulary acquisition. The mobile-based chatbot system is built using Golang, an open-source programming language developed by Google. It follows a structured flow that guides learners through a series of four quizzes in each week of teacher-led learning. The quizzes start with less cognitively demanding tasks, such as multiple-choice questions, before moving on to more complex exercises. The integration of SRL elements allows students to self-evaluate the difficulty of vocabulary items, predict their scores, and choose appropriate strategies. This research is part one of a two-part project; the initial findings will inform the development of an upgraded chatbot system in part two, in which adaptive features responding to the integration of SRL components will be introduced. The research objectives are to assess the effectiveness of the chatbot-assisted approach, based on the combination of spaced practice and SRL, in improving vocabulary acquisition and managing cognitive load, as well as to understand students’ perceptions of this learning tool. The insights from this study will contribute to the growing body of research on mobile-assisted language learning and offer practical implications for integrating chatbot systems with spaced practice into educational settings to enhance vocabulary learning.Keywords: mobile learning, mobile-assisted language learning, MALL, chatbots, vocabulary learning, spaced practice, spacing effect, self-regulated learning, SRL, self-regulation, EFL, cognitive load
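The study's system is built in Golang; the Python fragment below sketches only the scheduling idea, reviewing an item over expanding intervals while stepping through progressively harder quiz types, from recognition toward use. The interval values and quiz labels are illustrative assumptions, not the study's configuration.

```python
from datetime import date, timedelta

# Expanding review intervals in days; the spacing pattern itself is an
# illustrative assumption, not the intervals used in the study.
INTERVALS = [1, 3, 7, 14]
DIFFICULTY = ["multiple-choice", "gap-fill", "sentence-completion", "free-use"]

def schedule(first_seen: date):
    """Map each spaced session to a progressively harder quiz type."""
    return [(first_seen + timedelta(days=d), quiz)
            for d, quiz in zip(INTERVALS, DIFFICULTY)]

for when, quiz in schedule(date(2025, 1, 6)):
    print(when, quiz)
```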
Procedia PDF Downloads 2065 Information Pollution: Exploratory Analysis of Subs-Saharan African Media’s Capabilities to Combat Misinformation and Disinformation
Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi
Abstract:
The role of information in societal development and growth cannot be over-emphasized. Harnessing the flow of information to build an egalitarian society is an age-old strategy; the same flow has also become a tool for throwing society into chaos and anarchy. Information has been adopted as a weapon of war and a veritable instrument of psychological warfare with a variety of uses. That is why some scholars posit that information can be deployed as a weapon to wreak “Mass Destruction” or promote “Mass Development”. When used as a tool of destruction, its effect on society is like that of an atomic bomb, which, when released, pollutes the air and suffocates the people. Technological advancement has further exposed the latent power of information, and many societies seem to be overwhelmed by its negative effects. While information remains a bedrock of democracy, the information ecosystem across the world is currently facing a more difficult battle than ever before due to information pluralism and technological advancement. The more the agents involved try to combat its menace, the more difficult and complex it proves to curb. In a region like Africa, where fragile democracies are entangled with the complexities of multiple religions, cultures, and tribes, and where many issues are yet to be resolved, it is important to pay critical attention to information disorder and find appropriate ways to curb or mitigate its effects. The media, as the middleman in the distribution of information, needs to build the capacities and capabilities to separate the chaff of misinformation and disinformation from the grain of truthful data. From a quasi-statistical sense, it has been observed that the efforts aimed at fighting information pollution have not considered the resilience media organisations have built against this disorder. Apparently, the efforts, resources and technologies devoted to the conception, production and spread of information pollution are much more sophisticated than the approaches to suppress, or even reduce, its effects on society. Thus, this study seeks to interrogate the phenomenon of information pollution and the capabilities of select media organisations in Sub-Saharan Africa. In doing this, the following questions are probed: what actions do the media take to curb the menace of information pollution? Which of these actions are working, and how effective are they? And which of the actions are not working, and why are they not working? Adopting quantitative and qualitative approaches and anchored in Dynamic Capability Theory, the study aims to dig up insights to further understand the complexities of information pollution, media capabilities and the strategic resources for managing misinformation and disinformation in the region. The quantitative approach involves surveys and the use of questionnaires to gather data from journalists on their understanding of misinformation/disinformation and their capabilities to gate-keep. Case analysis of select media and content analysis of their strategic resources for managing misinformation and disinformation are adopted in the study, while the qualitative approach involves in-depth interviews for a more robust analysis. The study is critical in the fight against information pollution for a number of reasons. One, it is a novel attempt to document the level of media capabilities to fight the phenomenon of information disorder. 
Two, the study will enable the region to have a clear understanding of the capabilities of existing media organizations to combat misinformation and disinformation in the countries that make up the region. Recommendations emanating from the study could be used to initiate, intensify or review existing approaches to combat the menace of information pollution in the region.Keywords: disinformation, information pollution, misinformation, media capabilities, sub-Saharan Africa
Procedia PDF Downloads 16164 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems
Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana
Abstract:
Large-scale critical industrial scheduling problems are based on Resource-Constrained Project Scheduling Problems (RCPSP) that necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions (i.e., modular, computationally efficient, and yielding feasible solutions). To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that answers the issues exhibited by the delivery of complex projects. With three interlinked entities (project, task, resources), each having its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can easily be integrated with other optimization problems, already-existing industrial tools, and unique constraints as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), the planning of a future NPP maintenance operation, and an application in the defense industry to supply chain and factory relocation. In the first use case, the solution, in addition to resource availability and the logical relationships between tasks, also integrates several project-specific constraints for outage management, such as handling resource incompatibility, updating task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer. This time-effectiveness corresponds with the nature of the problem and the requirement to run several scenarios (30-40 simulations) before finalizing the schedules. The second use case is a factory relocation project in which production lines must be moved to a new site while ensuring the continuity of their production. This generates the challenge of merging job shop scheduling and the RCPSP with location constraints. Our solution allows the automation of the production tasks while considering the expected production rate. The simulation algorithm manages the use and movement of resources and products to respect a given relocation scenario. The last use case concerns a future maintenance operation in an NPP. The project contains complex and hard constraints, such as strict Finish-Start precedence relationships (i.e., successor tasks have to start immediately after their predecessors while respecting all constraints), shareable coactivity for managing workspaces, and requirements on the state of "cyclic" resources (resources with multiple possible states, only one of which is active at a time) needed to perform tasks, which can require unique combinations of several cyclic resources. Our solution satisfies the requirement of minimizing the state changes of cyclic resources coupled with makespan minimization; it solves a case with 80 cyclic resources and 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast, feasible, and modular approach to various industrial scheduling problems, validated by domain experts and compatible with existing industrial tools. 
This approach can be further enhanced by the use of machine learning techniques on historically repeated tasks to gain further insights for delay risk mitigation measures.Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP
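A minimal sketch of the core mechanism, a greedy heuristic that at each time step starts the precedence-eligible task with the lowest dynamic cost, provided resources permit. The task data, single renewable resource, and cost function are illustrative assumptions; the industrial solution additionally handles features such as task pausing, priority updates, and cyclic resource states.

```python
# Toy RCPSP instance: durations, resource requirements, predecessors, priority
tasks = {
    "A": {"dur": 3, "req": 2, "preds": [],         "prio": 1},
    "B": {"dur": 2, "req": 3, "preds": ["A"],      "prio": 2},
    "C": {"dur": 4, "req": 2, "preds": ["A"],      "prio": 1},
    "D": {"dur": 1, "req": 1, "preds": ["B", "C"], "prio": 3},
}
CAPACITY = 4  # single renewable resource

def dynamic_cost(name, t):
    # Situational assessment: favour high-priority tasks, break ties by duration
    return (-tasks[name]["prio"], tasks[name]["dur"])

t, running, done, start = 0, {}, set(), {}
while len(done) < len(tasks):
    # Retire finished tasks
    for name in [n for n, end in running.items() if end <= t]:
        done.add(name)
        running.pop(name)
    used = sum(tasks[n]["req"] for n in running)
    # Precedence-eligible tasks, cheapest dynamic cost first
    eligible = [n for n in tasks
                if n not in done and n not in running
                and all(p in done for p in tasks[n]["preds"])]
    for name in sorted(eligible, key=lambda n: dynamic_cost(n, t)):
        if used + tasks[name]["req"] <= CAPACITY:
            running[name] = t + tasks[name]["dur"]
            start[name] = t
            used += tasks[name]["req"]
    t += 1

print(start)  # feasible start times, here {'A': 0, 'B': 3, 'C': 5, 'D': 9}
```

Because every decision is local to the current time step, the heuristic scales to very large instances and can absorb extra use-case constraints as additional terms in the cost function or additional feasibility checks.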
Procedia PDF Downloads 20063 Exploring a Cross-Sectional Analysis Defining Social Work Leadership Competencies in Social Work Education and Practice
Authors: Trevor Stephen, Joshua D. Aceves, David Guyer, Jona Jacobson
Abstract:
As a profession, social work has much to offer individuals, groups, and organizations. A multidisciplinary approach to understanding and solving complex challenges and a commitment to developing and training ethical practitioners outline the characteristics of a profession embedded with leadership skills. This presentation provides an overview of the historical context of social work leadership; examines social work as a unique leadership model, composed of qualities and theories that inform effective leadership capability as it relates to our code of ethics; reflects critically on leadership theories and their foundational comparison; and, finally, looks at recommendations and their implementation in social work education and practice. As with defining leadership in general, there is no universally accepted definition of social work leadership; however, some distinct traits and characteristics are essential. Recent studies help set the stage for this research proposal because they measure views on effective social work leadership among social work and non-social-work leaders and followers. However, this research is interested in working backward from that approach and examining social workers' perspectives on leadership preparedness based solely on social work training, competencies, values, and ethics. Social workers understand how to change complex structures and challenge resistance to change to improve the well-being of organizations and those they serve. Furthermore, previous studies align with the idea of practitioners assessing their skill and capacity to engage in leadership but not to lead. In addition, this research is significant because it explores aspiring social work leaders' competence to translate social work practice into direct leadership skills. The research question seeks to answer whether social work training and competencies are sufficient for social workers to believe they possess the capacity and skill to engage in leadership practice. Aim 1: Assess whether social workers have the capacity and skills to assume leadership roles. Aim 2: Evaluate whether the development of social workers is sufficient in defining leadership. This research intends to reframe the misconception that social workers do not possess the capacity and skills to be effective leaders. On the contrary, social work encompasses a framework dedicated to lifelong development and growth. Social workers must be skilled, competent, ethical, supportive, and empathic. These are all qualities and traits of effective leadership, and leaders stand in relation to others, embodying partnership and collaboration with followers and stakeholders. The proposed study is a cross-sectional quasi-experimental survey design that will include the distribution of a multi-level social work leadership model and assessment tool. The assessment tool aims to help define leadership in social work using a Likert-scale model. A cross-sectional research design is appropriate for answering the research questions because the measurement survey will help gather data using a structured tool. Other than the proposed social work leadership measurement tool, there is no other mechanism based on social work theory and designed to measure the capacity and skill of social work leadership.Keywords: leadership competencies, leadership education, multi-level social work leadership model, social work core values, social work leadership, social work leadership education, social work leadership measurement tool
Procedia PDF Downloads 172