Search results for: sensory integration procedure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5328

78 Advancements in Arthroscopic Surgery Techniques for Anterior Cruciate Ligament (ACL) Reconstruction

Authors: Islam Sherif, Ahmed Ashour, Ahmed Hassan, Hatem Osman

Abstract:

Anterior Cruciate Ligament (ACL) injuries are common among athletes and individuals participating in sports with sudden stops, pivots, and changes in direction. Arthroscopic surgery is the gold standard for ACL reconstruction, aiming to restore knee stability and function. Recent years have witnessed significant advancements in arthroscopic surgery techniques, graft materials, and technological innovations, revolutionizing the field of ACL reconstruction. This presentation delves into the latest advancements in arthroscopic surgery techniques for ACL reconstruction and their potential impact on patient outcomes. Traditionally, autografts from the patellar tendon, hamstring tendon, or quadriceps tendon have been commonly used for ACL reconstruction. However, recent studies have explored the use of allografts, synthetic scaffolds, and tissue-engineered grafts as viable alternatives. This abstract evaluates the benefits and potential drawbacks of each graft type, considering factors such as graft incorporation, strength, and risk of graft failure. Moreover, the application of augmented reality (AR) and virtual reality (VR) technologies in surgical planning and intraoperative navigation has gained traction. AR and VR platforms provide surgeons with detailed 3D anatomical reconstructions of the knee joint, enhancing preoperative visualization and aiding in graft tunnel placement during surgery. We discuss the integration of AR and VR in arthroscopic ACL reconstruction procedures, evaluating their accuracy, cost-effectiveness, and overall impact on surgical outcomes. Beyond graft selection and surgical navigation, patient-specific planning has gained attention in recent research. Advanced imaging techniques, such as MRI-based personalized planning, enable surgeons to tailor ACL reconstruction procedures to each patient's unique anatomy. By accounting for individual variations in the femoral and tibial insertion sites, this personalized approach aims to optimize graft placement and potentially improve postoperative knee kinematics and stability. Furthermore, rehabilitation and postoperative care play a crucial role in the success of ACL reconstruction. This abstract explores novel rehabilitation protocols, emphasizing early mobilization, neuromuscular training, and accelerated recovery strategies. Integrating technology, such as wearable sensors and mobile applications, into postoperative care can facilitate remote monitoring and timely intervention, contributing to enhanced rehabilitation outcomes. In conclusion, this presentation provides an overview of the cutting-edge advancements in arthroscopic surgery techniques for ACL reconstruction. By embracing innovative graft materials, augmented reality, patient-specific planning, and technology-driven rehabilitation, orthopedic surgeons and sports medicine specialists can achieve superior outcomes in ACL injury management. These developments hold great promise for improving the functional outcomes and long-term success rates of ACL reconstruction, benefitting athletes and patients alike.

Keywords: arthroscopic surgery, ACL, autograft, allograft, graft materials, ACL reconstruction, synthetic scaffolds, tissue-engineered graft, virtual reality, augmented reality, surgical planning, intra-operative navigation

Procedia PDF Downloads 92
77 The Use of Modern Technologies and Computers in the Archaeological Surveys of Sistan in Eastern Iran

Authors: Mahyar MehrAfarin

Abstract:

The Sistan region in eastern Iran is a significant archaeological area in Iran and the Middle East, encompassing 10,000 square kilometers. Previous archaeological field surveys have identified 1662 ancient sites dating from prehistoric periods to the Islamic period. Research Aim: This article aims to explore the utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, and the benefits derived from their implementation. Methodology: The research employs a descriptive-analytical approach combined with field methods. New technologies and software, such as GPS, drones, magnetometers, equipped cameras, satellite images, and software programs like GIS, MapSource, and Excel, were utilized to collect information and analyze data. Findings: The use of modern technologies and computers in archaeological field surveys proved to be essential. Traditional archaeological activities, such as excavation and field surveys, are time-consuming and costly. Employing modern technologies helps in preserving ancient sites, accurately recording archaeological data, reducing errors and mistakes, and facilitating correct and accurate analysis. Creating a comprehensive and accessible database, generating statistics, and producing graphic designs and diagrams are additional advantages derived from the use of efficient technologies in archaeology. Theoretical Importance: The integration of computers and modern technologies in archaeology contributes to interdisciplinary collaborations and facilitates the involvement of specialists from various fields, such as geography, history, art history, anthropology, laboratory sciences, and computer engineering. The utilization of computers in archaeology spans diverse areas, including database creation, statistical analysis, graphics implementation, laboratory and engineering applications, and even artificial intelligence, which remains an unexplored area in Iranian archaeology. Data Collection and Analysis Procedures: Information was collected using modern technologies and software, capturing geographic coordinates, aerial images, archaeogeophysical data, and satellite images. This data was then entered into various software programs for analysis, including GIS, MapSource, and Excel. The research employed both descriptive and analytical methods to present findings effectively. Question Addressed: The primary question addressed in this research is how the use of modern technologies and computers in archaeological field surveys in Sistan, Iran, can enhance archaeological data collection, preservation, analysis, and accessibility. Conclusion: The utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, has proven to be necessary and beneficial. These technologies aid in preserving ancient sites, accurately recording archaeological data, reducing errors, and facilitating comprehensive analysis. The creation of accessible databases, statistics generation, graphic designs, and interdisciplinary collaborations are further advantages observed. It is recommended to explore the potential of artificial intelligence in Iranian archaeology as an unexplored area. The research has implications for cultural heritage organizations, archaeology students, and universities involved in archaeological field surveys in Sistan and Baluchistan province. Additionally, it contributes to enhancing the understanding and preservation of Iran's archaeological heritage.
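
As an illustration of the data-handling step described above, a minimal Python sketch of such a survey-database workflow is given below; the file name and column layout are hypothetical, and the authors' actual toolchain (GIS, MapSource, Excel) is not Python-based.

```python
import pandas as pd
import geopandas as gpd

# Hypothetical survey export: one row per recorded site, with the GPS
# coordinates and dating assigned during the field walk.
sites = pd.read_csv("sistan_sites.csv")  # assumed columns: site_id, lat, lon, period

# Summary statistics: how many of the 1662 sites fall in each period.
print(sites.groupby("period").size().sort_values(ascending=False))

# Build a georeferenced point layer (WGS84, as reported by handheld GPS
# units) that opens directly in GIS software for distribution mapping.
gdf = gpd.GeoDataFrame(
    sites,
    geometry=gpd.points_from_xy(sites["lon"], sites["lat"]),
    crs="EPSG:4326",
)
gdf.to_file("sistan_sites.gpkg", layer="sites", driver="GPKG")
```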

Keywords: Iran, sistan, archaeological surveys, computer use, modern technologies

Procedia PDF Downloads 79
76 Wind Turbine Scaling for the Investigation of Vortex Shedding and Wake Interactions

Authors: Sarah Fitzpatrick, Hossein Zare-Behtash, Konstantinos Kontis

Abstract:

Traditionally, the focus of horizontal axis wind turbine (HAWT) blade aerodynamic optimisation studies has been the outer working region of the blade. However, recent works seek to better understand, and thus improve upon, the performance of the inboard blade region to enhance power production, maximise load reduction and better control the wake behaviour. This paper presents the design considerations and characterisation of a wind turbine wind tunnel model devised to further the understanding and fundamental definition of horizontal axis wind turbine root vortex shedding and interactions. Additionally, the application of passive and active flow control mechanisms – vortex generators and plasma actuators – to allow for the manipulation and mitigation of unsteady aerodynamic behaviour at the blade inboard section is investigated. A static, modular blade wind turbine model has been developed for use in the University of Glasgow’s de Havilland closed return, low-speed wind tunnel. The model components – a half-span blade, hub, nacelle and tower – are scaled using the equivalent full span radius, R, for appropriate Mach and Strouhal numbers, and to achieve a Reynolds number in the range of 1.7×10⁵ to 5.1×10⁵ for operational speeds up to 55 m/s. The half blade is constructed to be modular and fully dielectric, allowing for the integration of flow control mechanisms with a focus on plasma actuators. Investigations of root vortex shedding and the subsequent wake characteristics using qualitative methods – smoke visualisation, tufts and china clay flow – and quantitative methods – including particle image velocimetry (PIV), hot wire anemometry (HWA), and laser Doppler anemometry (LDA) – were conducted over a range of blade pitch angles (0 to 15 degrees) and Reynolds numbers. This allowed for the identification of vortical structures shed from the maximum chord position, the transitional region where the blade aerofoil blends into a cylindrical joint, and the blade-nacelle connection. Analysis of the trailing vorticity interactions between the wake core and the freestream shows that vortex meander and diffusion are notably affected by the Reynolds number. It is hypothesized that the vorticity shed from the blade root region directly influences and exacerbates the nacelle wake expansion in the downstream direction. As the form of the inboard blade region is, by necessity, driven by function rather than aerodynamic optimisation, a study is undertaken of the application of flow control mechanisms to manipulate the observed vortex phenomena. The designed model allows for the effective investigation of shed vorticity and wake interactions, with a focus on the accurate geometry of a root region representative of small to medium power commercial HAWTs. The studies undertaken allow for an enhanced understanding of the interplay of shed vortices and their subsequent effect in the near and far wake. This highlights areas of interest within the inboard blade area for the potential use of passive and active flow control devices which aim to produce a more desirable wake quality in this region.
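
To make the scaling argument concrete, the following sketch evaluates the chord-based Reynolds number Re = ρVc/μ; the air properties and chord values are illustrative assumptions rather than figures from the paper, but they bracket the quoted Re range at the stated top speed of 55 m/s.

```python
# Chord-based Reynolds number for the scaled blade: Re = rho * V * c / mu.
# Air properties and chord values below are illustrative assumptions, not
# figures from the paper.
RHO = 1.225    # air density at sea level, kg/m^3
MU = 1.81e-5   # dynamic viscosity of air, Pa*s

def reynolds(v: float, chord: float) -> float:
    """Reynolds number for flow speed v (m/s) and local chord (m)."""
    return RHO * v * chord / MU

# At 55 m/s, chords of roughly 0.045 m and 0.14 m bracket the quoted range
# of 1.7e5 to 5.1e5, consistent with the chord variation along the inboard
# blade region.
for c in (0.045, 0.10, 0.14):
    print(f"chord {c:5.3f} m -> Re = {reynolds(55.0, c):.2e}")
```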

Keywords: vortex shedding, wake interactions, wind tunnel model, wind turbine

Procedia PDF Downloads 235
75 Comparing Implications of Manual and ROSA-assisted Total Knee Replacements on Patients and Physicians: A Scoping Review

Authors: Bassem M. Darwish, Robert H. Ablove

Abstract:

Introduction: Total knee arthroplasty (TKA) is a commonly performed procedure in patients with end-stage osteoarthritis, and inaccurate component alignment in TKA has been shown to have many adverse post-operative outcomes, such as accelerated implant wear, reduced functional outcomes, and shorter overall implant survival. Robotic surgical systems have been introduced to try to improve joint alignment and functional outcomes in knee arthroplasty; one recent iteration is the ROSA knee system, released to the market in 2019. The objective of this scoping review is to map the available evidence, identify the current types of evidence, and identify knowledge gaps to guide future studies on patient outcomes following ROSA-assisted total knee arthroplasties. Methods: An electronic search was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for scoping reviews. Search terms included ROSA, knee arthroplasty, osteoarthritis, robotic, and malalignment. Eligible participants included patients with osteoarthritis, aged 18 and older, male or female, who received manual TKA (mTKA) or ROSA-assisted TKA (rTKA), in human patients or cadavers. Published, peer-reviewed controlled trials, observational studies, and case series were included; case reports were excluded. Resulting articles were first screened based on title and abstract. Articles meeting inclusion criteria based on title and abstract review then underwent full-text review by the same reviewer. Results: This scoping review identified 11 studies (3 prospective and 8 retrospective observational studies), totalling 970 rTKA patients and 1745 mTKA patients. There were no case series or randomized controlled trials comparing rTKA and mTKA. Patient-centered outcomes showed promise for rTKA, which frequently showed significantly favorable functional outcomes, measured via KOOS-JR, VAS, KSS, OKS, FJS, and PROMIS scores, at various times postoperatively; however, studies disagreed on which score reached significance at which postoperative follow-up. Complication rates, reoperation rates, and length of stay (LOS) were very similar between the mTKA and rTKA groups. Studies also showed rTKA achieved more accurate joint alignment within the 0° ± 3° corridor and significantly higher rates of achieving postoperative joint angles matching the preoperative plan. Finally, there was broad agreement that rTKA cases initially take significantly longer; however, there is a rapid learning curve, after which rTKA cases are performed in a similar time to mTKA with reduced physician stress and strain. Conclusion: The ROSA knee system represents a promising option for the management of osteoarthritis via total knee arthroplasty. The studies reviewed in this paper favor the patient-centered functional outcomes, joint alignment, and physician health implications of the ROSA knee system over conventional total knee arthroplasty. Further study is warranted, however, to better understand recovery periods, longer-term functional outcomes, operative fatigue, and reduction in radiation exposure.

Keywords: arthroplasty, knee, robotics, malalignment

Procedia PDF Downloads 29
74 Development of DEMO-FNS Hybrid Facility and Its Integration in Russian Nuclear Fuel Cycle

Authors: Yury S. Shpanskiy, Boris V. Kuteev

Abstract:

Development of a fusion-fission hybrid facility based on the superconducting conventional tokamak DEMO-FNS has been underway in Russia since 2013. The main design goal is to reach technical feasibility and outline the prospects of industrial hybrid technologies providing the production of neutrons, fuel nuclides, tritium, high-temperature heat, electricity and subcritical transmutation in fusion-fission hybrid systems. The facility should operate in a steady-state mode at a fusion power of 40 MW and a fission power of 400 MW. Major tokamak parameters are the following: major radius R=3.2 m, minor radius a=1.0 m, elongation 2.1, triangularity 0.5. The design provides a neutron wall loading of ~0.2 MW/m² and a lifetime neutron fluence of ~2 MWa/m², with the surface area of the active cores and tritium breeding blanket ~100 m². Core plasma modelling showed that the neutron yield of ~10¹⁹ n/s is maximal if the tritium/deuterium density ratio is 1.5-2.3. The design of the electromagnetic system (EMS) defined its basic parameters, accounting for coil strength and stability, and identified the most problematic nodes in the toroidal field coils and the central solenoid. The EMS generates the toroidal, poloidal and correcting magnetic fields necessary for plasma shaping and confinement inside the vacuum vessel. The EMS consists of eighteen superconducting toroidal field coils, eight poloidal field coils, five sections of a central solenoid, correction coils, and in-vessel coils for vertical plasma control. Supporting structures, the thermal shield, and the cryostat maintain its operation. The EMS operates with a pulse duration of up to 5000 hours at a plasma current of up to 5 MA. The vacuum vessel (VV) is an all-welded two-layer toroidal shell placed inside the EMS. The free space between the vessel shells is filled with water and boron steel plates, which form the neutron protection of the EMS. The VV volume is 265 m³; its mass with manifolds is 1800 tons. The nuclear blanket of the DEMO-FNS facility was designed to provide the functions of minor actinide transmutation, tritium production and enrichment of spent nuclear fuel. Vertical overloading of the subcritical active cores with minor actinides was chosen as the prospective option. Analysis of the device neutronics and the hybrid blanket thermal-hydraulic characteristics has been performed for the system with functions covering transmutation of minor actinides, production of tritium and enrichment of spent nuclear fuel. A study of the role of FNS facilities in the Russian closed nuclear fuel cycle was performed. It showed that during ~100 years of operation, three FNS facilities with a fission power of 3 GW controlled by a fusion neutron source with a power of 40 MW can burn 98 tons of minor actinides, and 198 tons of Pu-239 can be produced for the startup loading of 20 fast reactors. Instead of Pu-239, up to 25 kg of tritium per year may be produced for the startup of fusion reactors by using blocks with lithium orthosilicate instead of fissile breeder blankets.
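
The quoted figures can be cross-checked with a back-of-envelope calculation; the sketch below uses only numbers from the abstract plus the standard thin-torus area formula and the usual assumption that about 80% of D-T fusion power is carried by the 14 MeV neutrons, so it is indicative only.

```python
import math

# Back-of-envelope cross-check of the quoted DEMO-FNS figures. Inputs are
# taken from the abstract; the 80% neutron fraction of D-T fusion power and
# the thin-torus area formula are standard textbook assumptions.
P_FUS = 40.0        # fusion power, MW
F_NEUTRON = 0.8     # fraction of D-T fusion power carried by 14 MeV neutrons
R, a = 3.2, 1.0     # major and minor radius, m

p_neutron = F_NEUTRON * P_FUS          # ~32 MW carried by neutrons
wall_area = 4 * math.pi**2 * R * a     # ~126 m^2 for a circular cross-section
loading = p_neutron / wall_area        # MW/m^2

print(f"neutron power ~{p_neutron:.0f} MW")
print(f"torus surface ~{wall_area:.0f} m^2")
# ~0.25 MW/m^2, the same order as the quoted ~0.2 MW/m^2; the design value
# presumably uses the larger surface of the elongated plasma-facing wall.
print(f"wall loading  ~{loading:.2f} MW/m^2")

# Full-power years implied by the quoted lifetime fluence of ~2 MWa/m^2:
print(f"years to 2 MWa/m^2 at 0.2 MW/m^2: ~{2.0 / 0.2:.0f}")
```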

Keywords: fusion-fission hybrid system, conventional tokamak, superconducting electromagnetic system, two-layer vacuum vessel, subcritical active cores, nuclear fuel cycle

Procedia PDF Downloads 147
73 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection

Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément

Abstract:

The need for single-cell detection and analysis techniques has increased in the past decades because the heterogeneity of individual living cells increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection and high-precision medicine and therapy, the technologies most used today for sensitive detection of target analytes and monitoring the variation of these species mainly fall into two types. One is based on the identification of molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics, and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example, the dielectrophoresis technique, microfluidic micropost-based chips, and electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. In this work, a new technique is presented, based on an array of 20-nm-thick nanopillars that supports cells and keeps them at the ideal recognition distance from the redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only to suppress the false-positive signal arising from the pressure exerted by all (including non-target) cells pushing down on the aptamers, but also to stabilize the aptamer in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, an LOD of 13 cells (with 5.4 μL of cell suspension) was estimated. The nanosupported cell technology using redox-labeled aptasensors has since been pushed forward and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling over millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with an LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to their challenging implementation at a large scale. Here, the introduced nanopillar array technology combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM) is perfectly suited for such implementation. Combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.
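
For scale, the reported LOD can be restated as a concentration using only the figures quoted above; this is a rough arithmetic sketch, not an analysis from the paper.

```python
# Restating the reported limit of detection as a concentration, using only
# the figures quoted in the abstract.
lod_cells = 13        # estimated LOD, cells
volume_ul = 5.4       # volume of cell suspension, microlitres

cells_per_ml = lod_cells / (volume_ul * 1e-3)   # 1 mL = 1000 uL
# ~2.4e3 cells/mL, versus the 1 cell/mL figure cited in the abstract for
# other electrochemical devices; the later array implementation reduced
# the LOD by more than an order of magnitude.
print(f"LOD ~ {cells_per_ml:.0f} cells/mL")
```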

Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars

Procedia PDF Downloads 117
72 Design Thinking and Project-Based Learning: Opportunities, Challenges, and Possibilities

Authors: Shoba Rathilal

Abstract:

High unemployment rates and a shortage of experienced and qualified employees appear to be a paradox that currently plagues most countries worldwide. In a developing country like South Africa, the rate of unemployment is reported to be approximately 35%, the highest recorded globally. At the same time, a countrywide deficit in experienced and qualified potential employees is reported in South Africa, causing fierce rivalry among firms. Employers report that graduates are very rarely able to meet the demands of the job, as there are gaps in their knowledge, conceptual understanding, and other 21st-century competencies, attributes, and dispositions required to successfully negotiate the multiple responsibilities of employees in organizations. In addition, the rates of unemployment and suitability of graduates appear to be skewed by race and social class, the continued effects of a legacy of inequitable educational access. Higher education in the current technologically advanced and dynamic world needs to serve as an agent of transformation, aspiring to develop graduates who are creative, flexible, critical, and possessed of entrepreneurial acumen. This demands a re-envisioning of the selection, sequencing, and pacing of learning, teaching, and assessment in higher education curricula and pedagogy. At a particular higher education institution in South Africa, Design Thinking (DT) and Project-Based Learning (PBL) are being adopted as two approaches that aim to enhance the student experience through the provision of a “distinctive education” that brings together disciplinary knowledge, professional engagement, technology, innovation, and entrepreneurship. Using these methodologies forces students to solve real-time applied problems using various forms of knowledge and to find innovative solutions that can result in new products and services. The intention is to promote the development of skills for self-directed learning, facilitate the development of self-awareness, and contribute to students being active partners in the application and production of knowledge. These approaches emphasize active and collaborative learning, teamwork, conflict resolution, and problem-solving through effective integration of theory and practice. In principle, both these approaches are extremely impactful. However, at the institution in this study, the implementation of PBL and DT was not as “smooth” as anticipated. This presentation reports on an analysis of the implementation of these two approaches within higher education curricula at a particular university in South Africa. The study adopts a qualitative case study design. Data were generated through surveys, evaluation feedback at workshops, and content analysis of project reports. Data were analyzed using document analysis and content and thematic analysis. Initial analysis shows that the forces constraining the implementation of PBL and DT range from the capacity of both staff and students to engage with DT and PBL, to the educational contextual realities of higher education institutions, administrative processes, and resources. At the same time, the implementation of DT and PBL was enabled through the allocation of strategic funding and capacity development workshops. These enablers, however, could not achieve maximum impact. In addition, recommendations on how DT and PBL could be adapted for differing contexts will be explored.

Keywords: design thinking, project based learning, innovative higher education pedagogy, student and staff capacity development

Procedia PDF Downloads 77
71 Secure Texting Used in a Post-Acute Pediatric Skilled Nursing Inpatient Setting: A Multidisciplinary Care Team Driven Communication System with Alarm and Alert Notification Management

Authors: Bency Ann Massinello, Nancy Day, Janet Fellini

Abstract:

Background: The use of an appropriate mode of communication among multidisciplinary care team members regarding coordination of care is an extremely complicated yet important patient safety initiative. Effective communication among the team members (nursing staff, medical staff, respiratory therapists, rehabilitation therapists, patient-family services team…) becomes essential to develop a culture of trust and collaboration to deliver the highest quality care to patients and their families. The inpatient post-acute pediatric setting, where children and their caregivers come for continuity of care, is no exception to the increasing use of text messages as a means of communication among clinicians. One such platform, Vocera Edge (the Vocera Communications smart mobile app), allows teams to share sensitive patient information through an encrypted platform using company-provided shared and assigned iOS mobile devices. Objective: This paper discusses the quality initiative of implementing the transition from the Vocera Smartbadge to the Vocera Edge mobile app, the technology advantages, use case expansion, and lessons learned about a secure alternative modality that allows sending and receiving secure text messages in a pediatric post-acute setting using an iOS device. The implementation process included all direct care staff, ancillary teams, and administrative teams on the clinical units. Methods: Our institution launched the transition from the voice-prompted, hands-free Vocera Smartbadge to the Vocera Edge mobile-based app for secure care team texting using a big-bang approach during the first PDSA cycle. Pre- and post-implementation data were gathered using a qualitative survey of about 500 multidisciplinary team members to determine the ease of use of the application and its efficiency in care coordination. The technology was further expanded by implementing clinical alert and alarm notification using middleware integration with the patient monitoring (Masimo) and life safety (nurse call) systems. Additional uses of the smart mobile iPhone include pushing out apps like Lexicomp and UpToDate so they are readily available to users for evidence-based practice in medication and disease management. Results: The communication system was successfully implemented in a shared and assigned model with all of the multidisciplinary teams in our pediatric post-acute setting. In just a 3-month period post-implementation, we noticed a 14% increase, from 7,993 messages in 6 days in December 2020 to 9,116 messages in March 2021. This confirmed that all clinical and non-clinical teams were using this mode of communication for coordinating the care of their patients. System-generated data analytics were used in addition to the pre- and post-implementation staff survey for process evaluation. Conclusion: A secure texting option using a mobile device is a safe and efficient mode for real-time care team communication and collaboration using technology. This allows settings like post-acute pediatric care areas to be in line with the widespread use of mobile apps and technology in mainstream healthcare.

Keywords: nursing informatics, mobile secure texting, multidisciplinary communication, pediatrics post acute care

Procedia PDF Downloads 196
70 Exploring Participatory Research Approaches in Agricultural Settings: Analyzing Pathways to Enhance Innovation in Production

Authors: Michele Paleologo, Marta Acampora, Serena Barello, Guendalina Graffigna

Abstract:

Introduction: In the face of increasing demands for higher agricultural productivity with minimal environmental impact, participatory research approaches emerge as promising means to promote innovation. However, the complexities and ambiguities surrounding these approaches in both theory and practice present challenges. This Scoping Review seeks to bridge these gaps by mapping participatory approaches in agricultural contexts, analyzing their characteristics, and identifying indicators of success. Methods: Following PRISMA guidelines, we conducted a systematic Scoping Review, searching Scopus and Web of Science databases. Our review encompassed 34 projects from diverse geographical regions and farming contexts. Thematic analysis was employed to explore the types of innovation promoted and the categories of participants involved. Results: The identified innovation types encompass technological advancements, sustainable farming practices, and market integration, forming 5 main themes: climate change, cultivar, irrigation, pest and herbicide, and technical improvement. These themes represent critical areas where participatory research drives innovation to address pressing agricultural challenges. Participants were categorized as citizens, experts, NGOs, private companies, and public bodies. Understanding their roles is vital for designing effective participatory initiatives that embrace diverse stakeholders. The review also highlighted 27 theoretical frameworks underpinning participatory projects. Clearer guidelines and reporting standards are crucial for facilitating the comparison and synthesis of findings across studies, thereby enhancing the robustness of future participatory endeavors. Furthermore, we identified three main categories of barriers and facilitators: pragmatic/behavioral, emotional/relational, and cognitive. These insights underscore the significance of participant engagement and collaborative decision-making for project success beyond theoretical considerations. Regarding participation, projects were classified as contributory (5 cases), where stakeholders contributed insights; collaborative (10 cases), with active co-designing of solutions; and co-created (19 cases), featuring deep stakeholder involvement from ideation to implementation, resulting in joint ownership of outcomes. Such diverse participation modes highlight the adaptability of participatory approaches to varying agricultural contexts. Discussion: In conclusion, this Scoping Review demonstrates the potential of participatory research in driving transformative changes in farmers' practices, fostering sustainability and innovation in agriculture. Understanding the diverse landscape of participatory approaches, theoretical frameworks, and participant engagement strategies is essential for designing effective and context-specific interventions. Collaborative efforts among researchers, practitioners, and stakeholders are pivotal in harnessing the full potential of participatory approaches and driving positive change in agricultural settings worldwide. The identified themes of innovation and participation modes provide valuable insights for future research and targeted interventions in agricultural innovation.

Keywords: participatory research, co-creation, agricultural innovation, stakeholders' engagement

Procedia PDF Downloads 65
69 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought

Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan

Abstract:

Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, magnifying its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. These encompass discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, understood as the necessity to elucidate the effects AI generates. Accountability comprises two integral aspects: adherence to legal and ethical standards, and the imperative to explain the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this "accountability" posed by the complexity of artificial intelligence systems and their effects. The article then proposes to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to point out the challenges posed by the complexity of AI, with accountability fractured among a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and are multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The challenge of the non-neutrality of algorithmic systems, as actors that are not ethically neutral, is put forward by a revealing-ethics approach that calls for assigning responsibilities to these systems. The challenge of the dilution of responsibility is induced by the multiplicity of, and distance between, the actors: decision-making is split between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. Finally, accountability is confronted with the challenge of the transparency of complex and scalable algorithmic systems, non-human actors that self-learn via big data. A second contribution involves leveraging E. Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging that algorithmic systems are not ethically neutral but inherently imbued with the values and biases of their creators and of society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizing recursiveness, akin to the "transparency" of the system, promotes a systemic analysis to account for the induced effects and guides the incorporation of modifications into the system to rectify its deviations and drifts. In conclusion, this contribution serves as an inception for contemplating the accountability of artificial intelligence systems despite the evident ethical implications and potential deviations. Edgar Morin's principles, providing a lens to contemplate this complexity, offer valuable perspectives to address these challenges concerning accountability.

Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin

Procedia PDF Downloads 63
68 The Impacts of New Digital Technology Transformation on Singapore Healthcare Sector: Case Study of a Public Hospital in Singapore from a Management Accounting Perspective

Authors: Junqi Zou

Abstract:

As one of the world's most tech-ready countries, Singapore has initiated the Smart Nation plan to harness the full power and potential of digital technologies to transform the way people live and work, through more efficient government and business processes, to make the economy more productive. The key evolutions of digital technology transformation in healthcare and the increasing deployment of the Internet of Things (IoT), Big Data, AI/cognitive computing, Robotic Process Automation (RPA), Electronic Health Record Systems (EHR), Electronic Medical Record Systems (EMR), and Warehouse Management Systems (WMS) in the most recent decade have significantly stepped up the move towards an information-driven healthcare ecosystem. The advances in information technology not only bring benefits to patients but also act as a key force in changing management accounting in the healthcare sector. The aim of this study is to investigate the impacts of digital technology transformation on Singapore's healthcare sector from a management accounting perspective. Adopting a Balanced Scorecard (BSC) analysis approach, this paper conducted an exploratory case study of a newly launched Singapore public hospital, which has been recognized as among the most digitally advanced healthcare facilities in the Asia-Pacific region. Specifically, this study gains insights into how the new technology is changing healthcare organizations' management accounting from the four perspectives of the Balanced Scorecard approach: 1) Financial Perspective, 2) Customer (Patient) Perspective, 3) Internal Processes Perspective, and 4) Learning and Growth Perspective. Based on a thorough review of archival records from the government and public sources, and interview reports with the hospital's CIO, this study finds improvements from all four perspectives under the Balanced Scorecard framework, as follows: 1) Learning and Growth Perspective: the government (Ministry of Health) works with the hospital to open up multiple training pathways that upgrade and develop new IT skills among the healthcare workforce to support the transformation of healthcare services. 2) Internal Process Perspective: the hospital achieved digital transformation through Project OneCare, integrating clinical, operational, and administrative information systems (e.g., EHR, EMR, WMS, EPIB, RTLS) to enable the seamless flow of data, and implementing a JIT system to help the hospital operate more effectively and efficiently. 3) Customer Perspective: the fully integrated EMR suite enhances the patient experience by achieving the 5 Rights (Right Patient, Right Data, Right Device, Right Entry and Right Time). 4) Financial Perspective: cost savings are achieved from improved inventory management and effective supply chain management, and the use of process automation also results in reductions of manpower and logistics costs. To summarize, these improvements identified under the Balanced Scorecard framework confirm the success of utilizing the integration of advanced ICT to enhance a healthcare organization's customer service, productivity, efficiency, and cost savings. Moreover, the Big Data generated from the integrated EMR system can be particularly useful in aiding the management control system to optimize decision making and strategic planning. To conclude, the new digital technology transformation has extended the usefulness of management accounting to both financial and non-financial dimensions, reaching new heights in the area of healthcare management.

Keywords: balanced scorecard, digital technology transformation, healthcare ecosystem, integrated information system

Procedia PDF Downloads 161
67 Environmental Fate and Toxicity of Aged Titanium Dioxide Nano-Composites Used in Sunscreen

Authors: Danielle Slomberg, Jerome Labille, Riccardo Catalano, Jean-Claude Hubaud, Alexandra Lopes, Alice Tagliati, Teresa Fernandes

Abstract:

In the assessment and management of cosmetics and personal care products, sunscreens are of emerging concern regarding both human and environmental health. Organic UV blockers in many sunscreens have been shown to undergo rapid photodegradation, to induce dermal allergic reactions due to skin penetration, and to cause adverse effects on marine systems. While mineral UV blockers may offer a safer alternative, their fate and impact, and the resulting regulation, are still under consideration, largely related to the potential influence of nanotechnology-based products on both consumers and the environment. Nanometric titanium dioxide (TiO₂) UV blockers have many advantages in terms of sun protection and aesthetics (i.e., transparency). These UV blockers typically consist of rutile nanoparticles coated with a primary mineral layer (silica or alumina) aimed at blocking the nanomaterial's photoactivity, and can include a secondary organic coating (e.g., stearic acid, methicone) aimed at favouring dispersion of the nanomaterial in the sunscreen formulation. The nanomaterials contained in the sunscreen can leave the skin through bathing or everyday usage, with subsequent release into rivers, lakes, seashores, and/or sewage treatment plants. The nanomaterial's behaviour, fate and impact in these different systems are largely determined by its surface properties (e.g., the coating type) and lifetime. The present work aims to develop the eco-design of sunscreens through the minimisation of risks associated with the nanomaterials incorporated into the formulation. All stages of the sunscreen's life cycle must be considered in this respect, from its manufacture to its end-of-life, through its use by the consumer, to its impact on the exposed environment. Reducing the potential release and/or toxicity of the nanomaterial from the sunscreen is a decisive criterion for its eco-design. TiO₂ UV blockers of varied size and surface coating (e.g., stearic acid and silica) were selected for this study. Hydrophobic TiO₂ UV blockers (i.e., stearic acid-coated) were incorporated into a typical water-in-oil (w/o) formulation, while hydrophilic, silica-coated TiO₂ UV blockers were dispersed into an oil-in-water (o/w) formulation. The resulting sunscreens were characterised in terms of nanomaterial localisation, sun protection factor, and photo-passivation. The risk to the directly exposed aquatic environment was assessed by evaluating the release of nanomaterials from the sunscreen through a simulated laboratory aging procedure. The size distribution, surface charge, and degradation state of the nano-composite by-products, as well as their nanomaterial concentration and colloidal behaviour, were determined in a variety of aqueous environments (e.g., seawater and freshwater). Release of the hydrophobic nano-composites into the aqueous environment was driven by oil droplet formation, while hydrophilic nano-composites were readily dispersed. Ecotoxicity of the sunscreen by-products (from both w/o and o/w formulations) and their risk to marine organisms were assessed using coral symbionts and tropical corals, evaluating both lethal and sublethal toxicities. The data dissemination and risk knowledge provided by the present work will help guide regulation related to nanomaterials in sunscreen, provide better information for consumers, and allow for easier decision-making by manufacturers.

Keywords: alteration, environmental fate, sunscreens, titanium dioxide nanoparticles

Procedia PDF Downloads 262
65 Characterizing and Developing the Clinical Grade Microbiome Assay with a Robust Bioinformatics Pipeline for Supporting Precision Medicine Driven Clinical Development

Authors: Danyi Wang, Andrew Schriefer, Dennis O'Rourke, Brajendra Kumar, Yang Liu, Fei Zhong, Juergen Scheuenpflug, Zheng Feng

Abstract:

Purpose: It has been recognized that the microbiome plays critical roles in disease pathogenesis, including cancer, autoimmune disease, and multiple sclerosis. To develop a clinical-grade assay for exploring microbiome-derived clinical biomarkers across disease areas, a two-phase approach is implemented: 1) identification of the optimal sample preparation reagents using pre-mixed bacteria and healthy donor stool samples, coupled with the proprietary Sigma-Aldrich® bioinformatics solution, and 2) exploratory analysis of patient samples for enabling precision medicine. Study Procedure: In the phase 1 study, we first compared the 16S sequencing results of two ATCC® microbiome standards (MSA-2002 and MSA-2003) across five extraction kits (kits A, B, C, D and E). Both microbiome standards were extracted in triplicate with each extraction kit. Following isolation, DNA quantity was determined by Qubit assay. DNA quality was assessed to determine purity and to confirm that the extracted DNA was of high molecular weight. Bacterial 16S ribosomal ribonucleic acid (rRNA) amplicons were generated via amplification of the V3/V4 hypervariable region of the 16S rRNA gene. Sequencing was performed using a 2x300 bp paired-end configuration on the Illumina MiSeq. FASTQ files were analyzed using the Sigma-Aldrich® Microbiome Platform, a cloud-based service that offers best-in-class 16S-seq and WGS analysis pipelines and databases. The Platform and its methods have been extensively benchmarked using microbiome standards generated internally by MilliporeSigma and by other external providers. Data Summary: The DNA yield using extraction kits D and E was below the limit of detection (100 pg/µL) of the Qubit assay, as both kits are intended for samples with low bacterial counts; the pre-mixed bacterial pellets at high concentrations, with an input of 2 × 10⁶ cells for MSA-2002 and 1 × 10⁶ cells for MSA-2003, were not compatible with these kits. Among the remaining three extraction kits, kit A produced the greatest yield, whereas kit B provided the least (Kit-A/MSA-2002: 174.25 ± 34.98; Kit-A/MSA-2003: 179.89 ± 30.18; Kit-B/MSA-2002: 27.86 ± 9.35; Kit-B/MSA-2003: 23.14 ± 6.39; Kit-C/MSA-2002: 55.19 ± 10.18; Kit-C/MSA-2003: 35.80 ± 11.41 (mean ± SD)). The 3D PCoA visualization of weighted UniFrac beta diversity shows that kits A and C cluster closely together, while kit B appears as an outlier; the kit A sequencing samples cluster more closely together than those of the other kits. The taxonomic profiles of kit B have lower recall when compared to the known mixture profiles, indicating that kit B was inefficient at detecting some of the bacteria. Conclusion: Our data demonstrate that the DNA extraction method impacts the DNA concentration, purity, and microbial communities detected by next-generation sequencing analysis. A further performance comparison using healthy stool samples is underway, and colorectal cancer patient samples will be acquired to further explore clinical utility. Collectively, our comprehensive qualification approach, including the evaluation of optimal DNA extraction conditions, the inclusion of positive controls, and the implementation of a robust, qualified bioinformatics pipeline, assures accurate characterization of the microbiota in a complex matrix for deciphering the deep biology and enabling precision medicine.
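
For reference, the reported kit yields can be tabulated and ranked with a few lines of Python; this is purely illustrative and is not part of the Sigma-Aldrich® pipeline.

```python
import pandas as pd

# DNA yields as quoted in the abstract (mean and SD per kit and ATCC
# standard; units are not stated in the abstract). Purely illustrative.
yields = pd.DataFrame(
    [
        ("Kit-A", "MSA-2002", 174.25, 34.98),
        ("Kit-A", "MSA-2003", 179.89, 30.18),
        ("Kit-B", "MSA-2002", 27.86, 9.35),
        ("Kit-B", "MSA-2003", 23.14, 6.39),
        ("Kit-C", "MSA-2002", 55.19, 10.18),
        ("Kit-C", "MSA-2003", 35.80, 11.41),
    ],
    columns=["kit", "standard", "mean_yield", "sd"],
)

# Rank kits by average yield across the two standards. Kits D and E fell
# below the Qubit limit of detection and are therefore not listed.
print(yields.groupby("kit")["mean_yield"].mean().sort_values(ascending=False))
```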

Keywords: 16S rRNA sequencing, analytical validation, bioinformatics pipeline, metagenomics

Procedia PDF Downloads 170
64 High Cycle Fatigue Analysis of a Lower Hopper Knuckle Connection of a Large Bulk Carrier under Dynamic Loading

Authors: Vaso K. Kapnopoulou, Piero Caridis

Abstract:

The fatigue of ship structural details is of major concern in the maritime industry, as it can generate fracture issues that may compromise structural integrity. In the present study, a fatigue analysis of the lower hopper knuckle connection of a bulk carrier was conducted using the Finite Element Method by means of the ABAQUS/CAE software. The fatigue life was calculated using Miner's rule, with the long-term distribution of stress range represented by a two-parameter Weibull distribution. The cumulative damage ratio was estimated from the fatigue damage caused by the stress range occurring at each loading condition. For this purpose, a cargo hold model was first generated, which extends over the length of two holds (the mid-hold and half of each of the adjacent holds) and transversely over the full breadth of the hull girder. Following that, a submodel of the area of interest was extracted in order to calculate the hot spot stress of the connection and to estimate the fatigue life of the structural detail. Two hot spot locations were identified: one at the top layer of the inner bottom plate and one at the top layer of the hopper plate. The IACS Common Structural Rules (CSR) require that specific dynamic load cases be assessed for each loading condition. The dynamic load case that causes the highest stress range at each loading condition should then be used in the fatigue analysis for the calculation of the cumulative fatigue damage ratio. Each load case has a different effect on the ship hull response. Of main concern when assessing the fatigue strength of the lower hopper knuckle connection was the determination of the maximum, i.e., the critical, value of the stress range, which acts in a direction normal to the weld toe line, that is, in the transverse direction, perpendicular to the ship's centerline axis. The load cases were explored both theoretically and numerically in order to establish the one that causes the highest damage at the location examined. The most severe was identified to be the load case induced by the beam sea condition with the encountered wave coming from starboard. The cargo hold model was assumed to be simply supported at its ends. A coarse mesh was generated in order to represent the overall stiffness of the structure, using quadrilateral shell elements, each having four integration points. A linear elastic analysis was performed, because linear elastic material behavior can be presumed since only localized yielding is allowed by most design codes. At the submodel level, the displacements from the cargo hold model analysis were applied to the outer-region nodes of the submodel as boundary conditions and loading. In order to calculate the hot spot stress at the hot spot locations, a very fine mesh zone was generated and used. The fatigue life of the detail was found to be 16.4 years, which is lower than the design fatigue life of the structure (25 years), making this location vulnerable to fatigue fracture issues. Moreover, the loading conditions inducing the most damage at the location were found to be the various ballasting conditions.
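
A minimal sketch of the damage summation described above is given below, combining Miner's rule, a single-slope S-N curve N(S) = K·S⁻ᵐ, and a two-parameter Weibull long-term stress-range distribution; all numerical values are placeholders for illustration, not the parameters of the paper's analysis.

```python
import math

# Sketch of the damage summation described above: Miner's rule with a
# single-slope S-N curve N(S) = K * S**(-m) and a two-parameter Weibull
# long-term distribution of stress ranges. All numbers are placeholders
# for illustration, not the values used in the paper.
m, K = 3.0, 1.46e12    # S-N curve slope and capacity (placeholder values)
q, h = 20.0, 1.0       # Weibull scale (MPa) and shape of the stress ranges
n_total = 0.7e8        # wave-induced stress cycles over the 25-year design life

# For Weibull-distributed stress ranges, Miner's sum has the closed form
# D = (n_total / K) * q**m * Gamma(1 + m/h).
damage = n_total / K * q**m * math.gamma(1.0 + m / h)
life_years = 25.0 / damage  # assuming damage accrues uniformly over time

print(f"cumulative damage D = {damage:.2f}")
print(f"implied fatigue life ~ {life_years:.1f} years")
```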

Keywords: dynamic load cases, finite element method, high cycle fatigue, lower hopper knuckle

Procedia PDF Downloads 418
63 A Novel Paradigm in the Management of Pancreatic Trauma

Authors: E. Tan, O. McKay, T. Clarnette, D. Croagh

Abstract:

Background: Historically with pancreatic trauma, complete disruption of the main pancreatic duct (MPD), classified as Grade IV-V by the American Association for the Surgery of Trauma (AAST), necessitated a damage-control laparotomy. This was to avoid mortality, shorten diet upgrade timeframe, and hence shorter length of stay. However, acute pancreatic resection entailed complications of pancreatic fistulas and leaks. With the advance of imaging-guided interventions, non-operative management such as percutaneous and transpapillary drainage of traumatic peripancreatic collections have been trialled favourably. The aim of this case series is to evaluate the efficacy of endoscopic ultrasound-guided (EUS) transmural drainage in managing traumatic peripancreatic collections as a less invasive alternative to traditional approaches. This study also highlights the importance of anatomical knowledge regarding peripancreatic collection’s common location in the lesser sac, the pancreas relationship to adjacent organs, and the formation of the main pancreatic duct in regards to the feasibility of therapeutic internal drainage. Methodology: A retrospective case series was conducted at a single tertiary endoscopy unit, analysing patient data over a 5-year period. Inclusion criteria outlined patients age 5 to 80-years-old, traumatic pancreatic injury of at least Grade IV and haemodynamic stability. Exclusion criteria involved previous episodes of pancreatitis or abdominal trauma. Patient demographics and clinicopathological characteristics were retrospectively collected. Results: The study identified 7 patients with traumatic pancreatic injuries that were managed from 2018-2022; age ranging from 5 to 34 years old, with majority being female (n=5). Majority of the mechanisms of trauma were a handlebar injury (n=4). Diagnosis was confirmed with an elevated lipase and computerized tomotography (CT) confirmation of proximal pancreatic transection with MPD disruption. All patients sustained an isolated single organ grade IV pancreatic injury, except case 4 and 5 with other intra-abdominal visceral Grade 1 injuries. 6 patients underwent early ERCP-guided transpapillary drainage with 1 being unsuccessful for pancreatic duct stent insertion (case 1) and 1 complication of stent migration (case 2). Surveillance imaging post ERCP showed the stents were unable to bridge the disrupted duct and development of symptomatic collections with an average size of 9.9cm. Hence, all patients proceeded to EUS-guided transmural drainage, with 2/7 patients requiring repeat drainages (case 6 and 7). Majority (n=6) had a cystogastrostomy, whilst 1 (case 6) had a cystoenterostomy due to feasibility of the peripancreatic collection being adjacent to duodenum rather than stomach. However, case 6 subsequently required repeat EUS-guided drainage with cystogastrostomy for ongoing collections. Hence all patients avoided initial laparotomy with an average index length of stay of 11.7 days. Successful transmural drainage was demonstrated, with no long-term complications of pancreatic insufficiency; except for 1 patient requiring a distal pancreatectomy at 2 year follow-up due to chronic pain. Conclusion: The early results of this series support EUS-guided transmural drainage as a viable management option for traumatic peripancreatic collections, showcasing successful outcomes, minimal complications, and long-term efficacy in avoiding surgical interventions. 
More studies are required before this procedure can be adopted as a less invasive, lower-complication management approach for traumatic peripancreatic collections.

Keywords: endoscopic ultrasound, cystogastrostomy, pancreatic trauma, traumatic peripancreatic collection, transmural drainage

Procedia PDF Downloads 47
62 Comparing Practices of Swimming in the Netherlands against a Global Model for Integrated Development of Mass and High Performance Sport: Perceptions of Coaches

Authors: Melissa de Zeeuw, Peter Smolianov, Arnold Bohl

Abstract:

This study was designed to help improve international performance as well as increase swimming participation in the Netherlands. Over 200 sources of literature on sport delivery systems from 28 Australasian, North and South American, Western and Eastern European countries were analyzed to construct a globally applicable model of high performance swimming integrated with mass participation, comprising the following seven elements across three levels: Micro level (operations, processes, and methodologies for development of individual athletes): 1. Talent search and development, 2. Advanced athlete support. Meso level (infrastructures, personnel, and services enabling sport programs): 3. Training centers, 4. Competition systems, 5. Intellectual services. Macro level (socio-economic, cultural, legislative, and organizational): 6. Partnerships with supporting agencies, 7. Balanced and integrated funding and structures of mass and elite sport. This model emerged from the integration of instruments that have been used to analyse and compare national sport systems. The model has received scholarly validation and has been shown to provide a framework for program analysis that is not culturally bound. It has recently been accepted as a model for further understanding North American sport systems, including (in chronological order of publication) US rugby, tennis, soccer, swimming and volleyball. The above model was used to design a questionnaire of 42 statements reflecting desired practices. The statements were validated by 12 international experts, including executives from sport governing bodies, academics who have published on high performance and sport development, and swimming coaches and administrators. Both highly structured and open-ended qualitative analysis tools were used in this study, including a survey of swim coaches in which open responses accompanied structured questions. After collection of the surveys, semi-structured discussions with Federation coaches were conducted to add triangulation to the findings. Lastly, a content analysis of Dutch Swimming’s website and organizational documentation was conducted. The questionnaire was distributed to a representative sample of 1,600 Dutch swim coaches and administrators via email addresses from the Royal Dutch Swimming Federation’s database. Fully completed questionnaires were returned by 122 coaches from all of the country’s key regions, for a response rate of 7.63% - higher than the response rates of the previously mentioned US studies, which used the same model and method. Results suggest possible enhancements at the macro level (e.g., greater public and corporate support to prepare and hire more coaches and to address the lack of facilities, funding and publicity at the mass participation level in order to make swimming affordable for all), at the meso level (e.g., comprehensive education for all coaches and a full spectrum of swimming pools, particularly 50-metre pools), and at the micro level (e.g., better preparation of athletes for a future outside swimming and better use of swimmers to stimulate swimming development). Best Dutch swimming management practices (e.g., comprehensive support to the most talented swimmers, who win Olympic medals) as well as relevant international practices available for transfer to the Netherlands (e.g., high school competitions) are discussed.

Keywords: sport development, high performance, mass participation, swimming

Procedia PDF Downloads 205
61 Supporting 'vulnerable' Students to Complete Their Studies During the Economic Crisis in Greece: The Umbrella Program of International Hellenic University

Authors: Rigas Kotsakis, Nikolaos Tsigilis, Vasilis Grammatikopoulos, Evridiki Zachopoulou

Abstract:

During the last decade, Greece has faced an unprecedented financial crisis, affecting various aspects and functionalities of Higher Education. Besides the restricted funding of academic institutions, students and their families have encountered economic difficulties that undoubtedly influence the effective completion of studies. In this context, a fairly large number of students on the Alexander campus of International Hellenic University (IHU) delay, interrupt, or even abandon their studies, especially when they come from low-income families, belong to sensitive social or special needs groups, have different cultural origins, etc. For this reason, a European project named “Umbrella” was initiated, aiming to provide the necessary psychological support and counseling, especially to disadvantaged students, towards the completion of their studies. To this end, a network of academic members (academic staff and students) from IHU, named iMentor, was formed, with members engaged in different roles. Specifically, experienced academic staff trained students to serve as intermediate links for the integration and educational support of students who fall into the aforementioned sensitive social groups and face problems completing their studies. The main idea of the project rests upon its person-centered character, which facilitates direct student-to-student communication without the intervention of the teaching staff. The backbone of the iMentor network is senior students who face no problems in their academic life and volunteered for this project. It should be noted that the Umbrella structure provides substantial and ethical rewards for their engagement. In this context, a well-defined, stringent methodology was implemented for evaluating the extent of the problem in IHU and detecting the profile of the “candidate” disadvantaged students. The first phase included two steps: (a) data collection and (b) data cleansing/preprocessing. The first step involved collecting data from the Secretary Services of all Schools in IHU, from 1980 to 2019, which resulted in 96,418 records. The data set included the School name, the semester of studies, student enrollment criteria, nationality, and the graduation year or the current, up-to-date academic state (still studying, delayed, dropped out, etc.). The second step of the methodology involved data cleansing/preprocessing because of the existence of “noisy” data, missing and erroneous values, etc. Furthermore, several assumptions and grouping actions were imposed to achieve data homogeneity and an easy-to-interpret subsequent statistical analysis. Specifically, the 40-year recording period was limited to the last 15 years (2004-2019), since in 2004 the Greek Technological Institutions were transformed into Higher Education Universities, leading to a stable and unified frame of graduate studies. In addition, data concerning active students were excluded from the analysis, since the initial processing effort focused on detecting the factors/variables that differentiated students who graduated from those who dropped out. The final working dataset included 21,432 records with only two categories of students: those who hold a degree and those who abandoned their studies. Findings of the first phase are presented across faculties and further discussed.
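
The cleansing and grouping step can be sketched in a few lines of pandas. The file name, column names, and category labels below are hypothetical stand-ins, since the abstract does not disclose the registry schema; this is an illustrative sketch, not the project's actual pipeline.

```python
import pandas as pd

# Hedged sketch of the described cleansing step; "secretariat_records.csv",
# the column names, and the category labels are hypothetical stand-ins.
records = pd.read_csv("secretariat_records.csv")  # 96,418 raw records, 1980-2019

# Limit the 40-year record to the unified university era (2004-2019).
df = records[records["enrolment_year"].between(2004, 2019)]

# Exclude still-active students: keep only graduates and drop-outs.
df = df[df["academic_state"].isin(["graduated", "abandoned"])]

# Drop rows with missing or erroneous key fields.
df = df.dropna(subset=["school", "academic_state", "nationality"])

print(len(df))  # the working dataset reported in the abstract has 21,432 records
```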

Keywords: higher education, students support, economic crisis, mentoring

Procedia PDF Downloads 115
60 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data

Authors: M. Mueller, M. Kuehn, M. Voelker

Abstract:

In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization and planning due to a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology whose objective is an SME-appropriate approach for efficient, temporarily feasible data collection and evaluation in flexible production and logistics systems as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth low energy (BLE)-based transmitters, so-called beacons, and smart mobile devices (SMDs), e.g. smartphones, as receivers, between which distance data can be measured and motion profiles derived. The distance is determined using the Received Signal Strength Indicator (RSSI), a measure of the signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for the interpretation of relative movements of transmitters and receivers based on distance data. The main research concerns the selection and implementation of pattern recognition methods for automatic process recognition as well as methods for the visualization of relative distance data. Since the database is already categorized by process type, classification methods (e.g. Support Vector Machines) from the field of supervised learning are used. The necessary data quality requires the selection of suitable methods as well as filters for smoothing the signal variations of the RSSI, the integration of methods for determining correction factors depending on possible sources of signal interference (columns, pallets), and the configuration of the technology used. The parameter settings on which the respective algorithms are based have a further significant influence on the result quality of the classification methods, correction models and methods used for visualizing the position profiles. Studies have already shown that the accuracy of classification algorithms can be improved by up to 30% through selected parameter variation; similar potential can be observed with parameter variation of the methods and filters for signal smoothing. Thus, there is increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized in a modular software architecture consisting of independent modules for data acquisition, data preparation and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including methods for signal smoothing, is Python-based, with the possibility to vary parameter settings and store them in the database (SQLite). The evaluation is divided into two separate software modules with database connection: the automated assignment of defined process classes to distance data using selected classification algorithms, and the visualization and reporting in terms of a graphical user interface (GUI).
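
As a concrete illustration of the distance derivation and signal smoothing the methodology relies on, the sketch below converts RSSI readings to distances with a log-distance path-loss model and damps signal variations with an exponential moving average. The path-loss parameters and sample readings are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: RSSI (dBm) -> distance (m).
    tx_power is the reference RSSI at 1 m; n is the path-loss exponent.
    Both values are illustrative assumptions, not calibrated ones."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def ema(x, alpha=0.2):
    """Exponential moving average filter to smooth RSSI fluctuations."""
    y = np.empty(len(x))
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * x[i] + (1 - alpha) * y[i - 1]
    return y

rssi_raw = np.array([-61.0, -65.0, -58.0, -70.0, -63.0, -62.0, -59.0])
distances = rssi_to_distance(ema(rssi_raw))  # smoothed motion-profile input
```

Features derived from such smoothed distance series would then feed the supervised classifiers (e.g., a Support Vector Machine) mentioned above.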

Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing

Procedia PDF Downloads 131
59 Translation, Cross-Cultural Adaption, and Validation of the Vividness of Movement Imagery Questionnaire 2 (VMIQ-2) to Classical Arabic Language

Authors: Majid Alenezi, Abdelbare Algamode, Amy Hayes, Gavin Lawrence, Nichola Callow

Abstract:

The purpose of this study was to translate and culturally adapt the Vividness of Movement Imagery Questionnaire-2 (VMIQ-2) from English to produce a new Arabic version (VMIQ-2A), and to evaluate the reliability and validity of the translated questionnaire. The questionnaire assesses how vividly and clearly individuals are able to imagine themselves performing everyday actions. Its purpose is to measure individuals’ ability to conduct movement imagery, which can be defined as “the cognitive rehearsal of a task in the absence of overt physical movement.” Movement imagery has been introduced in physiotherapy as a promising intervention technique, especially when physical exercise is not possible (e.g., pain, immobilisation). Considerable evidence indicates that movement imagery interventions improve physical function, but to maximize efficacy it is important to know the imagery abilities of the individuals being treated. Given the increase in the global sharing of knowledge, it is desirable to use standard measures of imagery ability across languages and cultures, which motivated this project. The translation procedure followed guidelines from the Translation and Cultural Adaptation group of the International Society for Pharmacoeconomics and Outcomes Research and involved the following phases: Preparation; the original VMIQ-2 was adapted slightly to provide additional information and simplified grammar. Forward translation; three native speakers resident in Saudi Arabia translated the original VMIQ-2 from English to Arabic, following instruction to preserve meaning (not literal translation) and cultural relevance. Reconciliation; the project manager (first author), the primary translator and a physiotherapist reviewed the three independent translations to produce a reconciled first Arabic draft of the VMIQ-2A. Backward translation; a fourth translator (a native Arabic speaker fluent in English) literally translated the reconciled first Arabic draft back into English. The project manager and two study authors compared the English back translation to the original VMIQ-2 and produced the second Arabic draft. Cognitive debriefing; to assess participants’ understanding of the second Arabic draft, 7 native Arabic speakers resident in the UK completed the questionnaire, rated the clarity of the questions, specified difficult words or passages, and wrote in their own words their understanding of key terms. Following review of this feedback, a final Arabic version was created. A total of 142 native Arabic speakers completed the questionnaire in community meeting places or at home; a subset of 44 participants completed the questionnaire a second time 1 week later. Results showed the translated questionnaire to be valid and reliable. Correlation coefficients indicated good test-retest reliability, and Cronbach’s α indicated high internal consistency. Construct validity was tested in two ways. Imagery ability scores have been found to be invariant across gender; this result was replicated within the current study, assessed by independent-samples t-test. Additionally, experienced sports participants have higher imagery ability than those less experienced; this result was also replicated within the current study, assessed by analysis of variance, supporting construct validity. Results provide preliminary evidence that the VMIQ-2A is reliable and valid for use with a general population of native Arabic speakers.
Future research will include validation of the VMIQ-2A in a larger sample, and testing validity in specific patient populations.
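
For readers unfamiliar with the internal-consistency statistic reported, the sketch below computes Cronbach's α from an item-score matrix. The simulated 142-respondent, 12-item, five-point data are a stand-in driven by a shared latent factor, not the VMIQ-2A responses.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) score matrix.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated stand-in data: 142 respondents, 12 five-point items driven by a
# shared latent imagery-ability factor (not the actual VMIQ-2A responses).
rng = np.random.default_rng(0)
latent = rng.normal(size=(142, 1))
scores = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(142, 12))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```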

Keywords: motor imagery, physiotherapy, translation and validation, imagery ability

Procedia PDF Downloads 334
58 Unveiling the Dynamics of Preservice Teachers’ Engagement with Mathematical Modeling through Model Eliciting Activities: A Comprehensive Exploration of Acceptance and Resistance Towards Modeling and Its Pedagogy

Authors: Ozgul Kartal, Wade Tillett, Lyn D. English

Abstract:

Despite its global significance in curricula, mathematical modeling encounters persistent disparities in recognition and emphasis within regular mathematics classrooms and teacher education across countries with diverse educational and cultural traditions, including variations in the perceived role of mathematical modeling. Over the past two decades, increased attention has been given to the integration of mathematical modeling into national curriculum standards in the U.S. and other countries. Therefore, the mathematics education research community has dedicated significant efforts to investigate various aspects associated with the teaching and learning of mathematical modeling, primarily focusing on exploring the applicability of modeling in schools and assessing students', teachers', and preservice teachers' (PTs) competencies and engagement in modeling cycles and processes. However, limited attention has been directed toward examining potential resistance hindering teachers and PTs from effectively implementing mathematical modeling. This study focuses on how PTs, without prior modeling experience, resist and/or embrace mathematical modeling and its pedagogy as they learn about models and modeling perspectives, navigate the modeling process, design and implement their modeling activities and lesson plans, and experience the pedagogy enabling modeling. Model eliciting activities (MEAs) were employed due to their high potential to support the development of mathematical modeling pedagogy. The mathematical modeling module was integrated into a mathematics methods course to explore how PTs embraced or resisted mathematical modeling and its pedagogy. The module design included reading, reflecting, engaging in modeling, assessing models, creating a modeling task (MEA), and designing a modeling lesson employing an MEA. Twelve senior undergraduate students participated, and data collection involved video recordings, written prompts, lesson plans, and reflections. An open coding analysis revealed acceptance and resistance toward teaching mathematical modeling. The study identified four overarching themes, including both acceptance and resistance: pedagogy, affordance of modeling (tasks), modeling actions, and adjusting modeling. In the category of pedagogy, PTs displayed acceptance based on potential pedagogical benefits and resistance due to various concerns. The affordance of modeling (tasks) category emerged from instances when PTs showed acceptance or resistance while discussing the nature and quality of modeling tasks, often debating whether modeling is considered mathematics. PTs demonstrated both acceptance and resistance in their modeling actions, engaging in modeling cycles as students and designing/implementing MEAs as teachers. The adjusting modeling category captured instances where PTs accepted or resisted maintaining the qualities and nature of the modeling experience or converted modeling into a typical structured mathematics experience for students. While PTs displayed a mix of acceptance and resistance in their modeling actions, limitations were observed in embracing complexity and adhering to model principles. The study provides valuable insights into the challenges and opportunities of integrating mathematical modeling into teacher education, emphasizing the importance of addressing pedagogical concerns and providing support for effective implementation. 
In conclusion, this research offers a comprehensive understanding of PTs' engagement with modeling, advocating for a more focused discussion on the distinct nature and significance of mathematical modeling in the broader curriculum to establish a foundation for effective teacher education programs.

Keywords: mathematical modeling, model eliciting activities, modeling pedagogy, secondary teacher education

Procedia PDF Downloads 65
57 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks, dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses, and a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Time-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same patterns through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are first made to fire in a particular order. The membrane potentials are then reset, and the five input neurons are fired in the reverse order. As the regular LIF neuron linearly integrates its inputs at the soma, its membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron’s capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. Just as it is possible to learn synaptic strengths with STDP to make a neuron more sensitive to its input, it is possible to learn dendritic relationships with STDP to make the neuron more sensitive to spatiotemporal input sequences.
Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
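
A minimal discrete-time sketch of the proposed mechanism is given below, assuming a simplified LIF update rather than the authors' Bindsnet implementation: each incoming spike's impact is scaled by the recent activity of the other synapses through a pairwise relation matrix. All parameter values are illustrative.

```python
import numpy as np

def simulate_lif(spike_trains, w, R, tau_m=20.0, tau_s=5.0, dt=1.0, v_th=1.0):
    """spike_trains: (T, n_syn) binary array; w: (n_syn,) synaptic weights;
    R: (n_syn, n_syn) synaptic-relation matrix (zero diagonal). All
    parameters here are illustrative assumptions."""
    T, n_syn = spike_trains.shape
    v = 0.0                              # membrane potential
    trace = np.zeros(n_syn)              # short-term activity trace per synapse
    out_spikes = []
    for t in range(T):
        s = spike_trains[t]
        gain = 1.0 + R @ trace           # potentiation/depression by neighbours
        v += float(np.sum(w * gain * s)) # modulated synaptic input
        v -= dt * v / tau_m              # leak
        trace = trace * np.exp(-dt / tau_s) + s
        if v >= v_th:                    # threshold crossing -> output spike
            out_spikes.append(t)
            v = 0.0
    return out_spikes
```

With R set to zero this reduces to the regular LIF neuron, whose response to a sequence and its reverse is similar in magnitude; a non-zero, asymmetric R makes the response order-dependent, which is the discrimination effect described above.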

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 133
56 Distributed Listening in Intensive Care: Nurses’ Collective Alarm Responses Unravelled through Auditory Spatiotemporal Trajectories

Authors: Michael Sonne Kristensen, Frank Loesche, James Foster, Elif Ozcan, Judy Edworthy

Abstract:

Auditory alarms play an integral role in intensive care nurses’ daily work. Most medical devices in the intensive care unit (ICU) are designed to produce alarm sounds in order to make nurses aware of immediate or prospective safety risks. The utilisation of sound as a carrier of crucial patient information is highly dependent on nurses’ presence - both physically and mentally. For ICU nurses, especially those who work with stationary alarm devices at the patient bed space, it is a challenge to display ‘appropriate’ alarm responses at all times, as they have to navigate with great flexibility in a complex work environment. While being primarily responsible for a small number of allocated patients, they are often required to engage with other nurses’ patients, relatives, and colleagues at different locations inside and outside the unit. This work explores the social strategies used by a team of nurses to comprehend and react to the information conveyed by the alarms in the ICU. Two main research questions guide the study: To what extent do alarms from a patient bed space reach the relevant responsible nurse by direct auditory exposure? By which means do responsible nurses get informed about their patients’ alarms when not directly exposed to them? A comprehensive video-ethnographic field study was carried out to capture and evaluate alarm-related events in an ICU. The study involved close collaboration with four nurses who wore eye-level cameras and ear-level binaural audio recorders during several work shifts. At all times, the entire unit was monitored by multiple video and audio recorders. From a data set of hundreds of hours of recorded material, information about the nurses’ location, social interaction, and alarm exposure at any point in time was coded in a multi-channel replay interface. The data show that responsible nurses’ direct exposure to and awareness of the alarms of their allocated patients vary significantly depending on workload, social relationships, and the location of the patient’s bed space. Distributed listening is deliberately employed by the nursing team as a social strategy to respond adequately to alarms, but the patterns of information flow prompted by alarm-related events are not uniform. Auditory Spatiotemporal Trajectory (AST) is proposed as a methodological label to designate the integration of temporal, spatial and auditory load information. As a mixed-methods metric, it provides tangible evidence of how nurses’ individual alarm-related experiences differ from one another and from stationary points in the ICU. Furthermore, it is used to demonstrate how alarm-related information reaches the individual nurse through principles of social and distributed cognition, and how that information relates to the actual alarm event. Thereby it bridges a long-standing gap in the literature on medical alarm utilisation between, on the one hand, initiatives to measure objective data about the medical sound environment without consideration of human experience and, on the other, initiatives to study subjective experiences of the medical sound environment without detailed evidence of its objective characteristics.

Keywords: auditory spatiotemporal trajectory, medical alarms, social cognition, video-ethnography

Procedia PDF Downloads 190
55 The BETA Module in Action: An Empirical Study on Enhancing Entrepreneurial Skills through Kearney's and Bloom's Guiding Principles

Authors: Yen Yen Tan, Lynn Lam, Cynthia Lam, Angela Koh, Edwin Seng

Abstract:

Entrepreneurial education plays a crucial role in nurturing future innovators and change-makers. Over time, significant progress has been made in refining instructional approaches to develop the necessary skills among learners effectively. Two highly valuable frameworks, Kearney's "4 Principles of Entrepreneurial Pedagogy" and Bloom's "Three Domains of Learning," serve as guiding principles in entrepreneurial education. Kearney's principles align with experiential and student-centric learning, which are crucial for cultivating an entrepreneurial mindset. The potential synergies between these frameworks hold great promise for enhancing entrepreneurial acumen among students, yet their integration remains largely unexplored. This study aims to bridge this gap by building upon the Business Essentials through Action (BETA) module and investigating its contributions to nurturing the entrepreneurial mindset. The study employs a quasi-experimental mixed-methods approach, combining quantitative and qualitative elements to ensure comprehensive and insightful data. A cohort of 235 students participated, with 118 enrolled in the BETA module and 117 in a traditional curriculum. Their Personal Entrepreneurial Competencies (PECs) were assessed before admission (pre-Y1) and one year into the course (post-Y1) using a comprehensive 55-item PEC questionnaire, enabling measurement of critical traits such as opportunity-seeking, persistence, and risk-taking. Individual entrepreneurial competencies and overall PEC scores were rigorously computed, including a correction factor to mitigate potential self-assessment bias. How Kearney's principles and Bloom's domains operate within the BETA module is examined in detail through structured interviews aligned with contemporary research methodologies, which document the transformative journey undertaken by students. The study also explores the BETA module's influence on students' entrepreneurial competencies from the perspective of faculty members: focus group discussions with six lecturers capture their perceptions, experiences, and reflections on the pedagogical practices embedded within the module. Preliminary findings from ongoing data analysis indicate promising results, showing a substantial improvement in entrepreneurial skills among students participating in the BETA module. This study promises not only to elevate students' entrepreneurial competencies but also to clarify the broader applicability of Kearney's principles and Bloom's domains. Combining quantitative analyses, which provide precise competency metrics, with qualitative accounts of students' transformative journeys yields a holistic understanding of this educational endeavour. Through a rigorous quasi-experimental mixed-methods approach, this research aims to establish the BETA module's effectiveness in fostering entrepreneurial acumen among students at Singapore Polytechnic, thereby contributing valuable insights to the broader discourse on educational methodologies.

Keywords: entrepreneurial education, experiential learning, pedagogical frameworks, innovative competencies

Procedia PDF Downloads 64
54 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects

Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm

Abstract:

Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming the well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model, based on Euler-Bernoulli beam theory, for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot. The robot nominally lies flat on the ground when unpowered; when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, which is necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to exactly determine which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations. (ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot, and from this the spatial distribution of friction forces between ground and robot. (iii) By combining this spatial friction distribution with the shape of the soft robot as a function of time, as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm thick steel foil, assuming the piezoelectric properties of commercially available thin-film piezoelectric actuators. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each consisting of two rigid arms with appropriate mass connected by a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown for both the full deformation shape and the motion of the robot.
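
To illustrate the kinematic core of such a model, the sketch below integrates voltage-proportional curvatures along a segmented sheet to obtain its 2D shape. Gravity and the self-consistent ground-contact solution, which are the central contributions of the paper, are deliberately omitted, and all parameter values are illustrative assumptions.

```python
import numpy as np

def beam_shape(voltages, seg_len=0.01, kappa_per_volt=0.5, n_sub=20):
    """Gravity-free shape of a sheet whose actuators each impose a curvature
    proportional to their voltage (kappa_per_volt, in 1/(m*V), is an
    illustrative assumption). Returns x, y coordinates in metres."""
    theta = 0.0
    x, y = [0.0], [0.0]
    ds = seg_len / n_sub
    for v in voltages:
        kappa = kappa_per_volt * v           # curvature set by this actuator
        for _ in range(n_sub):
            theta += kappa * ds              # integrate the bending angle
            x.append(x[-1] + ds * np.cos(theta))
            y.append(y[-1] + ds * np.sin(theta))
    return np.array(x), np.array(y)

x, y = beam_shape([0.0, 12.0, -8.0, 12.0, 0.0])  # hypothetical 5-actuator pattern
```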

Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology

Procedia PDF Downloads 180
53 Interpretable Deep Learning Models for Medical Condition Identification

Authors: Dongping Fang, Lian Duan, Xiaojing Yuan, Mike Xu, Allyn Klunder, Kevin Tan, Suiting Cao, Yeqing Ji

Abstract:

Accurate prediction of a medical condition backed by direct clinical evidence is a long-sought goal in the medical management and health insurance field. Although great progress has been made with machine learning algorithms, the medical community is still, to a certain degree, suspicious about model accuracy and interpretability. This paper presents an innovative hierarchical attention deep learning model that achieves good prediction and clear interpretability that can be easily understood by medical professionals. The deep learning model uses a hierarchical attention structure that matches the medical history data structure naturally and reflects the member’s encounter (date of service) sequence. The attention structure consists of 3 levels: (1) attention on the medical code types (diagnosis codes, procedure codes, lab test results, and prescription drugs), (2) attention on the sequential medical encounters within a type, and (3) attention on the medical codes within an encounter and type. The model is applied to predict the occurrence of stage 3 chronic kidney disease (CKD3), using three years’ medical history of Medicare Advantage (MA) members from a top health insurance company. The model takes members’ medical events, from both claims and electronic medical record (EMR) data, as input, predicts CKD3, and calculates the contribution of individual events to the predicted outcome. The model outcome can be easily explained with the clinical evidence identified by the model algorithm. For example: Member A had 36 medical encounters in the past three years: multiple office visits, lab tests and medications. The model predicts that member A has a high risk of CKD3 based on the following well-contributing clinical events - multiple high ‘Creatinine in Serum or Plasma’ tests and multiple low ‘Glomerular filtration rate’ (kidney function) tests. Among the abnormal lab tests, more recent results contributed more to the prediction. The model also indicates that regular office visits, no abnormal findings in medical examinations, and taking proper medications decreased the CKD3 risk. Member B had 104 medical encounters in the past 3 years and was predicted to have a low risk of CKD3, because the model identified no diagnoses, procedures, or medications related to kidney disease, and many lab test results, including ‘Glomerular filtration rate’, were within the normal range. The model accurately predicts members A and B and provides interpretable clinical evidence that is validated by clinicians. Without extra effort, the interpretation is generated directly from the model and presented together with the occurrence date. Our model uses the medical data in its most raw format without any further data aggregation, transformation, or mapping. This greatly simplifies data preparation, mitigates the chance of error, and eliminates the post-modeling work needed for traditional model explanation. To our knowledge, this is the first paper on an interpretable deep-learning model that uses a 3-level attention structure, sources both EMR and claims data covering all 4 types of medical data, applies to the entire Medicare population of a big insurance company, and, more importantly, directly generates model interpretation to support user decisions. In the future, we plan to enrich the model input by adding patients’ demographics and information from free-text physician notes.
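
The pooling pattern behind such a hierarchy can be sketched in a few lines of numpy, shown below with random embeddings. The actual architecture, dimensions, and trained parameters are not disclosed in the abstract, so every name and value here is an illustrative assumption.

```python
import numpy as np

def attention_pool(H, W, v):
    """H: (n, d) embeddings -> pooled (d,) vector plus (n,) attention weights."""
    scores = np.tanh(H @ W) @ v
    a = np.exp(scores - scores.max())
    a /= a.sum()                                  # softmax attention weights
    return a @ H, a

rng = np.random.default_rng(0)
d = 16                                            # illustrative embedding size
W, v = rng.normal(size=(d, d)), rng.normal(size=d)

# Hypothetical member history: code embeddings per encounter, per code type.
history = {t: [rng.normal(size=(rng.integers(2, 6), d)) for _ in range(3)]
           for t in ["diagnosis", "procedure", "lab", "drug"]}

type_vecs = []
for t, encounters in history.items():
    enc_vecs = np.stack([attention_pool(E, W, v)[0] for E in encounters])  # codes -> encounter
    type_vec, _ = attention_pool(enc_vecs, W, v)                           # encounters -> type
    type_vecs.append(type_vec)
member_vec, type_attn = attention_pool(np.stack(type_vecs), W, v)          # types -> member
risk = 1 / (1 + np.exp(-rng.normal(size=d) @ member_vec))                  # logistic head
```

The attention weights at each level are what make such a prediction explainable: the codes, encounters, and code types with the largest weights are the clinical evidence surfaced to the user.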

Keywords: deep learning, interpretability, attention, big data, medical conditions

Procedia PDF Downloads 91
52 Challenges to Developing a Trans-European Programme for Health Professionals to Recognize and Respond to Survivors of Domestic Violence and Abuse

Authors: June Keeling, Christina Athanasiades, Vaiva Hendrixson, Delyth Wyndham

Abstract:

Recognition and education in violence, abuse, and neglect for medical and healthcare practitioners (REVAMP) is a trans-European project aiming to introduce a training programme specifically developed by partners across seven European countries to meet the needs of medical and healthcare practitioners. Amalgamating the knowledge and experience of clinicians, researchers, and educators from interdisciplinary and multi-professional backgrounds, REVAMP has tackled the under-resourced and underdeveloped area of domestic violence and abuse. The team designed an online training programme to support medical and healthcare practitioners to recognise and respond appropriately to survivors of domestic violence and abuse at their point of contact with a health provider. The REVAMP partnership spans Europe: France, Lithuania, Germany, Greece, Iceland, Norway, and the UK. The training is delivered through a series of interactive online modules, adapting evidence-based pedagogical approaches to learning. Capturing and addressing the complexities of the project shaped the methodological decisions and approaches to evaluation. The challenge was to find an evaluation methodology that captured valid data across all partner languages to demonstrate the extent of the change in knowledge and understanding. Co-development by all team members was a lengthy iterative process, challenged by a lack of consistency in terminology. A mixed-methods approach enabled both qualitative and quantitative data to be collected at the start, during, and at the conclusion of the training for the purposes of evaluation. The module content and evaluation instrument were accessible in each partner country's language. Collecting both types of data provided a high-level snapshot of attainment via the quantitative dataset and an in-depth understanding of the impact of the training from the qualitative dataset. The analysis was mixed methods, with integration at multiple interfaces, and its primary focus was to support the overall project evaluation for the funding agency. A key project outcome was identifying the challenges posed by the trans-European approach. Firstly, the project partners did not share a first language or a common legal or professional approach to domestic abuse and neglect. This was negotiated through complex, systematic, and iterative interaction between team members so that consensus could be achieved. Secondly, the context of data collection across several different cultural, educational, and healthcare systems in Europe challenged the development of a robust evaluation. Participants in the pilot evaluation reported that the training was contemporary, well designed, and of great relevance to practice. Initial results from the evaluation indicated that participants were drawn from more than eight countries due to the online nature of the training, and showed a high level of engagement with the content and achievement in the online assessment. The main finding was that participants perceived the impact of domestic abuse and neglect in very different ways in their individual professional contexts. Most significantly, participants recognised the need for the training and the gap that existed previously. A mixed-methods evaluation of a trans-European project at this scale is notably rare.

Keywords: domestic violence, e-learning, health professionals, trans-European

Procedia PDF Downloads 83
51 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition

Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan

Abstract:

Indian traffic can be considered mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on the availability of gaps. The choice of lateral positioning is an important component in representing and characterizing mixed traffic. Field data provide evidence that the trajectories of vehicles on Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway, which is widely used for homogeneous traffic simulation, is not well defined in conditions lacking lane discipline. Field data make clear that following is not as strict as in homogeneous, lane-disciplined conditions, and that neighbouring vehicles ahead of a given vehicle, as well as those adjacent to it, can influence the subject vehicle's choice of position, speed and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable, and the approach needs to be modified appropriately. To address these issues, this paper analyzes the time gap distribution between consecutive vehicles (in a time sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of composition, manoeuvre types, and lateral placement characteristics on the time gap distribution is quantified. The developed model is used for evaluating various performance measures such as link speed, midblock delay and intersection delay, which further helps to characterize vehicular fuel consumption and emissions on the urban roads of India. Identifying and analyzing the exact interactions between various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling. In this regard, this study aims to develop and analyze time gap distribution models, quantified by lead-lag pair, manoeuvre type and lateral position characteristics, in heterogeneous non-lane-based traffic. Once the modelling scheme is developed, it can be used for estimating the vehicle kilometres travelled for the entire traffic system, which helps to determine vehicular fuel consumption and emissions. The approach to this objective involves: data collection, statistical modelling and parameter estimation, simulation using the calibrated time-gap distribution and its validation, empirical analysis of the simulation results and associated traffic flow parameters, and application to analyze illustrative traffic policies. In particular, videographic methods are used for data extraction from urban midblock sections in Chennai, where the data comprise vehicle type, vehicle position (both longitudinal and lateral), speed and time gap. Statistical tests are carried out to compare the simulated data with the actual data, and the model performance is evaluated. The effect of integrating the above-mentioned factors into vehicle generation is studied by comparing performance measures such as density, speed, flow, capacity, area occupancy, etc. under various traffic conditions and policies. The implications of the quantified distributions and the simulation model for estimating the PCU (Passenger Car Units), capacity and level of service of the system are also discussed.
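
As a sketch of the calibration-and-validation loop described, the snippet below fits a lognormal time-gap distribution for one lead-lag class pair and checks simulated gaps against the observations with a two-sample Kolmogorov-Smirnov test. The lognormal choice and the stand-in data are assumptions, not the paper's fitted results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
observed_gaps = rng.lognormal(mean=0.3, sigma=0.6, size=500)  # stand-in field data (s)

# Parameter estimation for one lead-lag pair / manoeuvre / lateral-position class.
shape, loc, scale = stats.lognorm.fit(observed_gaps, floc=0)

# Simulation with the calibrated distribution, then validation against the data.
simulated_gaps = stats.lognorm.rvs(shape, loc=loc, scale=scale,
                                   size=500, random_state=rng)
ks_stat, p_value = stats.ks_2samp(observed_gaps, simulated_gaps)
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
```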

Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models

Procedia PDF Downloads 342
50 Organization Structure of Towns and Villages System in County Area Based on Fractal Theory and Gravity Model: A Case Study of Suning, Hebei Province, China

Authors: Liuhui Zhu, Peng Zeng

Abstract:

With rapid development, urbanization in China has entered a stage of transformation and promotion, and its direction has shifted to overall regional synergy. China has a large number of towns and villages of comparatively small scale and scattered distribution, which have long supported and provided resources to cities, leading to urban-rural opposition; it is therefore difficult to achieve common development in a single town or village. In this context, regional development should focus more on towns and villages forming a synergetic system that joins the regional association with cities. Thus, the paper raises the question of how to effectively organize the towns and villages system to regulate resource allocation and improve the comprehensive value of the region. To answer the question, it is necessary to find a suitable research unit and analyze the present situation of its towns and villages system for optimal development. A review of relevant research and theoretical models shows that the county is the most basic administrative unit in China that can directly guide and regulate the development of towns and villages, so the paper takes the county as the research unit. Following the theoretical concept of ‘three structures and one network’, the paper develops a research framework to analyze the present situation of a towns and villages system, covering scale structure, functional structure, spatial structure, and organization network. The analytical methods draw on fractal theory and the gravity model, using statistical and spatial data. The scale structure analysis examines rank-size dimensions and uses the principal component method to calculate the comprehensive scale of towns and villages. The functional structure analysis examines the functional types and industrial development of towns and villages. The spatial structure analysis examines the aggregation dimension, network dimension, and correlation dimension of spatial elements to represent the overall spatial relationships. For the organization network, from the perspective of entity and non-entity networks, the paper analyzes the transportation network and the gravitational network. Based on the analysis of the present situation, optimization strategies are proposed to achieve a synergetic relationship between towns and villages in the county area. The paper uses Suning county in the Beijing-Tianjin-Hebei region as a case study to apply the research framework and methods, and then proposes optimization orientations. The analysis results indicate that: (1) Suning county lacks medium-scale towns to transfer effects from towns to villages. (2) The distribution of gravitational centers is uneven, and the gravitational effect is limited to nearby towns and villages; the gravitational network is incomplete, leaving economic activities scattered and isolated. (3) The overall development of the towns and villages system is immature, remaining at the ‘single heart and multi-core’ stage, and specific optimization strategies are proposed accordingly. This study provides a regional view of the development of towns and villages and establishes a research framework and methods for forming an effective synergetic relationship between them, contributing to organizing resources, stimulating endogenous motivation, and forming counter-magnets to support urban-rural integration.
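
The gravitational-network step can be made concrete with a short sketch: the interaction strength between towns i and j is taken as G_ij = (M_i * M_j) / d_ij^b, with M a comprehensive scale score. The masses, coordinates, and distance-friction exponent below are hypothetical, not Suning data.

```python
import numpy as np

def gravity_matrix(mass, xy, b=2.0):
    """mass: (n,) comprehensive scale scores; xy: (n, 2) coordinates (km);
    b: distance-friction exponent (illustrative). Returns (n, n) strengths."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                # no self-interaction
    return np.outer(mass, mass) / d**b

mass = np.array([8.2, 3.1, 2.7, 1.4, 1.1])     # hypothetical town scale scores
xy = np.array([[0, 0], [6, 2], [3, 8], [10, 5], [7, 9]], dtype=float)
G = gravity_matrix(mass, xy)
i, j = np.unravel_index(np.argmax(G), G.shape) # strongest town-town link
```

Summing each row of G gives a town's total gravitational reach, from which gravitational centers and missing links in the network can be read off.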

Keywords: towns and villages system, organization structure, county area, fractal theory, gravity model

Procedia PDF Downloads 136
49 Hydrogen Production Using an Anion-Exchange Membrane Water Electrolyzer: Mathematical and Bond Graph Modeling

Authors: Hugo Daneluzzo, Christelle Rabbat, Alan Jean-Marie

Abstract:

Water electrolysis is one of the most advanced technologies for producing hydrogen and can be easily combined with electricity from different sources. Under the influence of electric current, water molecules can be split into oxygen and hydrogen. The production of hydrogen by water electrolysis favors the integration of renewable energy sources into the energy mix by compensating for their intermittence: energy produced when production exceeds demand is stored and released during off-peak production periods. Among the various electrolysis technologies, anion exchange membrane (AEM) electrolyser cells are emerging as a reliable technology for water electrolysis. Modeling and simulation are effective tools to save time, money, and effort during the optimization of operating conditions and the investigation of designs. They become even more important when dealing with multiphysics dynamic systems, one of which is the AEM electrolysis cell, involving complex physico-chemical reactions. Once developed, models may be utilized to comprehend the underlying mechanisms and to control and detect flaws in the systems. Several modeling methods have been initiated by scientists. These methods can be separated into two main approaches, namely equation-based modeling and graph-based modeling. The former approach is less user-friendly and difficult to update, as it represents the system through ordinary or partial differential equations. The latter approach is more user-friendly and allows a clear representation of physical phenomena: the system is depicted by connecting subsystems, so-called blocks, through ports based on their physical interactions, making it suitable for multiphysics systems. Among graphical modelling methods, the bond graph is receiving increasing attention for being domain-independent and relying on the energy exchange between the components of the system. At present, few studies have investigated the modelling of AEM systems; a mathematical model and a bond graph model were used in previous studies to model electrolysis cell performance. In this study, experimental data from the literature were simulated in OpenModelica using both bond graph and mathematical approaches. The polarization curves at different operating conditions obtained by both approaches were compared with experimental ones. Both models predicted the polarization curves satisfactorily, with error margins lower than 2% for the equation-based model and lower than 5% for the bond graph model. The activation polarization of the hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) was behind the voltage loss in the AEM electrolyzer, whereas ion conduction through the membrane resulted in the ohmic loss. Therefore, highly active electro-catalysts are required for both HER and OER, while high-conductivity AEMs are needed to effectively lower the ohmic losses. The bond graph simulation of the polarisation curve at various operating temperatures illustrated that voltage increases with temperature, owing to the technology of the membrane. Polarisation curves can be tested virtually through simulation, reducing the cost and time of experimental testing and improving design optimization. Further improvements can be made by implementing the bond graph model in a real power-to-gas-to-power scenario.
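
The equation-based side of such a model reduces, in its simplest form, to summing the reversible voltage, Tafel-type activation overpotentials for HER and OER, and an ohmic term. The sketch below uses generic textbook parameters rather than the paper's fitted values.

```python
import numpy as np

def cell_voltage(j, E_rev=1.23, j0_oer=1e-7, j0_her=1e-4,
                 alpha=0.5, T=333.15, R_ohm=0.15):
    """j: current density (A/cm^2) -> cell voltage (V).
    Exchange current densities, charge-transfer coefficient alpha, and the
    ohmic resistance (ohm*cm^2) are illustrative, not fitted, values."""
    F, Rg = 96485.0, 8.314
    b = Rg * T / (alpha * F)                   # Tafel slope (natural-log form)
    eta_act = b * np.log(j / j0_oer) + b * np.log(j / j0_her)  # OER + HER
    eta_ohm = R_ohm * j                        # membrane/contact ohmic loss
    return E_rev + eta_act + eta_ohm

j = np.linspace(0.05, 2.0, 40)                 # sweep to trace a polarization curve
V = cell_voltage(j)
```

Plotting V against j reproduces the characteristic polarization-curve shape: activation losses dominate at low current density, ohmic losses at high current density.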

Keywords: hydrogen production, anion-exchange membrane, electrolyzer, mathematical modeling, multiphysics modeling

Procedia PDF Downloads 91