Search results for: type-2 fuzzy sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1865

35 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks

Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi

Abstract:

Brain-computer interfaces are a growing research field whose implementations find use in many domains, for both research and practical purposes. Despite the popularity of implementations based on non-invasive neuroimaging methods, radical improvement of the channel bandwidth and, thus, decoding accuracy is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, effective analysis of which requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that learn representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out during which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. The multichannel ECoG signals were then used to track the finger movement trajectory characterized by the accelerometer signal. This was done both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained using 1-second segments of ECoG data from the training dataset as input.
To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After hyperparameter optimization and training, the deep learning model allowed reasonably accurate causal decoding of finger movement with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach achieved only 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that combining a minimally invasive neuroimaging technique such as ECoG with advanced machine learning approaches allows decoding movement with high accuracy. Such a setup provides means for controlling devices with a large number of degrees of freedom as well as for exploratory studies of the complex neural processes underlying movement execution.
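As an illustrative sketch (not the authors' implementation), the reported accuracy metric, the Pearson correlation r between the decoded trajectory and the accelerometer signal, can be computed as follows; the toy signals are assumptions for demonstration:

```python
import numpy as np

def decoding_accuracy(decoded, measured):
    """Pearson correlation r between the decoded trajectory and the measured signal."""
    decoded = np.asarray(decoded, dtype=float)
    measured = np.asarray(measured, dtype=float)
    # Pearson r: covariance normalized by the product of standard deviations
    return np.corrcoef(decoded, measured)[0, 1]

# toy example: a decoder output that tracks the measured signal with added noise
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
measured = np.sin(2 * np.pi * t)                      # stand-in accelerometer trace
decoded = measured + 0.5 * rng.standard_normal(t.size)  # noisy decoder output
r = decoding_accuracy(decoded, measured)
```

A noisier decoder output pushes r toward 0, while a perfect reconstruction gives r = 1, which is why the metric is a convenient single-number summary of trajectory tracking quality.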

Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex

Procedia PDF Downloads 131
34 A Case for Strategic Landscape Infrastructure: South Essex Estuary Park

Authors: Alexandra Steed

Abstract:

Alexandra Steed URBAN was commissioned to undertake the South Essex Green and Blue Infrastructure Study (SEGBI) on behalf of the Association of South Essex Local Authorities (ASELA), a partnership of seven neighboring councils within the Thames Estuary. Located on London’s doorstep, the 70,000-hectare region is under extraordinary pressure for regeneration, further development, and economic expansion, yet faces extreme challenges: sea-level rise and inadequate flood defenses, stormwater flooding and threatened infrastructure, loss of internationally important habitats, significant existing community deprivation, and lack of connectivity and access to green space. The brief was to embrace these challenges in a document that would form a key part of ASELA’s Joint Strategic Framework and feed into local plans and master plans, thus helping to tackle climate change, ecological collapse, and social inequity at a regional scale while fostering a relationship between urban communities and the surrounding landscapes and nature. The SEGBI project applied a ‘land-based’ methodology, combined with a co-design approach involving numerous stakeholders, to explore how living infrastructure can address these significant issues, reshape future planning and development, and create thriving places for the whole community of life. It comprised three key stages: a Baseline Review, a Green and Blue Infrastructure Assessment, and the final Green and Blue Infrastructure Report. The resulting proposals frame an ambitious vision for the delivery of a new regional South Essex Estuary (SEE) Park: 24,000 hectares of protected and connected landscapes. This unified parkland system will drive effective place-shaping and “leveling up” for the most deprived communities while providing large-scale nature recovery and biodiversity net gain.
Comprehensive analysis and policy recommendations ensure best practices will be embedded within the planning documents and decisions guiding future development. Furthermore, a Natural Capital Account was undertaken as part of the strategy, showing the tremendous economic value of the natural assets. The strategy sets a pioneering precedent demonstrating how the prioritisation of living infrastructure can address climate change and ecological collapse while also supporting sustainable housing, healthier communities, and resilient infrastructure. It was only achievable through a collaborative, cross-boundary approach to strategic planning and growth, with a shared vision of place and a strong commitment to delivery. With joined-up thinking and a joined-up region, a more impactful plan for South Essex was developed that will deliver numerous environmental, social, and economic benefits across the region and enhance the landscape and natural environs on the periphery of one of the largest cities in the world.

Keywords: climate change, green and blue infrastructure, landscape architecture, master planning, regional planning, social equity

Procedia PDF Downloads 67
33 Optimization and Coordination of Organic Product Supply Chains under Competition: An Analytical Modeling Perspective

Authors: Mohammadreza Nematollahi, Bahareh Mosadegh Sedghy, Alireza Tajbakhsh

Abstract:

The last two decades have witnessed substantial attention to organic and sustainable agricultural supply chains. Motivated by real-world practices, this paper addresses two main challenges observed in organic product supply chains: the decentralized decision-making process between farmers and their retailers, and the competition between organic products and their conventional counterparts. To this end, an agricultural supply chain consisting of two farmers, a conventional farmer and an organic farmer who offers an organic version of the same product, is considered. Both farmers distribute their products through a single retailer, and the organic and conventional products compete. The retailer, as the market leader, sets the wholesale price, after which the farmers make their production quantity decisions. The paper first models the demand functions of the conventional and organic products by incorporating the effect of asymmetric brand equity, which captures the fact that consumers usually pay a premium for organic products due to positive perceptions of their health and environmental benefits. Profit functions are then modeled with consideration of characteristics of organic farming, including the crop yield gap and the organic cost factor. The research also considers both economies and diseconomies of scale in farming production, as well as the effect of the organic subsidy paid by the government to support organic farming. The supply chain is explored under three scenarios: decentralized, centralized, and coordinated decision-making structures. In the decentralized scenario, the conventional and organic farmers and the retailer maximize their own profits individually; the interaction between the farmers is modeled as Bertrand competition, while the interaction between the retailer and the farmers is analyzed under a Stackelberg game structure.
In the centralized model, the optimal production strategies are obtained from the perspective of the entire supply chain. Analytical models are developed to derive closed-form optimal solutions, and analytical sensitivity analyses are conducted to explore the effects of the main parameters, such as the crop yield gap, organic cost factor, organic subsidy, and the percent price premium of the organic product, on the farmers’ and retailer’s optimal strategies. A coordination scenario is then proposed to convince the three supply chain members to shift from the decentralized to the centralized decision-making structure. The results indicate that the proposed coordination scenario provides a win-win-win situation for all three members compared to the decentralized model. Moreover, the coordinated model increases the production and decreases the price of organic produce, which in turn motivates the consumption of organic products in the market. The proposed coordination model also helps the organic farmer better handle the challenges of organic farming, including the additional cost and the crop yield gap. Last but not least, the results highlight the active role of the government-paid organic subsidy as a means of promoting sustainable organic product supply chains: although the amount of the subsidy plays a significant role in the production and sales price of organic products, the allocation of the subsidy between the organic farmer and the retailer is of little importance.
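The sequential structure described above can be illustrated with a deliberately simplified toy model, not the paper's actual formulation: linear substitutable demand, quadratic farming costs (a stand-in for diseconomies of scale), a retailer leading by setting wholesale prices, and farmers following with quantity decisions. All functional forms and parameter values are assumptions for illustration:

```python
import sympy as sp

a, c, g, k = sp.symbols('a c g k', positive=True)
q1, q2, w1, w2 = sp.symbols('q1 q2 w1 w2', positive=True)

# Hypothetical linear inverse demand for two substitutable products:
# the retail price of product i falls with its own quantity and, via g, the rival's.
p1 = a - q1 - g * q2
p2 = a - q2 - g * q1

# Followers' stage: given wholesale price w_i, farmer i chooses quantity q_i.
# Quadratic production cost (k captures diseconomies of scale).
farm1 = (w1 - c) * q1 - k * q1**2 / 2
farm2 = (w2 - c) * q2 - k * q2**2 / 2
q1_br = sp.solve(sp.diff(farm1, q1), q1)[0]   # best response q1 = (w1 - c)/k
q2_br = sp.solve(sp.diff(farm2, q2), q2)[0]

# Leader's stage: the retailer picks w1, w2 anticipating the farmers' responses.
retail = (p1 - w1) * q1 + (p2 - w2) * q2
retail = retail.subs({q1: q1_br, q2: q2_br})
w_star = sp.solve([sp.diff(retail, w1), sp.diff(retail, w2)], [w1, w2])
q_dec = sp.simplify(q1_br.subs(w_star))       # decentralized equilibrium quantity

# Centralized benchmark: choose quantities to maximize chain-wide profit.
total = (p1 - c) * q1 - k * q1**2 / 2 + (p2 - c) * q2 - k * q2**2 / 2
q_cen = sp.solve([sp.diff(total, q1), sp.diff(total, q2)], [q1, q2])[q1]

# Symmetric numeric example: double marginalization pushes the decentralized
# quantity below the centralized one, which is what coordination aims to fix.
vals = {a: 10, c: 2, g: sp.Rational(1, 2), k: 1}
```

With these toy parameters the decentralized quantity is 8/5 versus a centralized 2, reproducing in miniature the paper's finding that coordination raises organic output.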

Keywords: analytical game-theoretic model, product competition, supply chain coordination, sustainable organic supply chain

Procedia PDF Downloads 83
32 Emotional State and Cognitive Workload during a Flight Simulation: Heart Rate Study

Authors: Damien Mouratille, Antonio R. Hidalgo-Muñoz, Nadine Matton, Yves Rouillard, Mickael Causse, Radouane El Yagoubi

Abstract:

Background: Monitoring the physiological activity related to mental workload (MW) in pilots will be useful for improving aviation safety by anticipating human performance degradation. The electrocardiogram (ECG) can reveal MW fluctuations due to cognitive workload and/or emotional state, since this measure reflects autonomic nervous system modulations. Arguably, heart rate (HR) is one of its most intuitive and reliable parameters. It is particularly interesting to analyze the interaction between cognitive requirements and emotion in ecological settings such as a flight simulator. This study aims to explore, by means of HR, the relation between cognitive demands and emotional activation. Presumably, the effects of cognitive and emotional overload are not necessarily cumulative. Methodology: Eight healthy volunteers holding a Private Pilot License were recruited (male; 20.8±3.2 years). The ECG signal was recorded throughout the experiment by placing two electrodes on the clavicle and left pectoral of the participants. The HR was computed within 4-minute segments. NASA-TLX and Big Five inventories were used to assess subjective workload and to account for individual personality differences. The experiment consisted of completing two dual-tasks of approximately 30 minutes each in an AL50 flight simulator. Each dual-task required the simultaneous accomplishment of a pre-established flight plan and an additional task based on target stimulus discrimination inserted between Air Traffic Control instructions. This secondary task allowed the cognitive workload to be varied from low (LC) to high (HC) levels by combining auditory and visual numerical stimuli that required responses when specific criteria were met. Regarding the emotional condition, the two dual-tasks were designed to ensure analogous difficulty in terms of the cognitive demands solicited. The former was performed by the pilot alone, i.e., the low arousal (LA) condition.
In contrast, the latter generated high arousal (HA), since the pilot was supervised by two evaluators, filmed, and involved in a mock competition with the rest of the participants. Results: Performance on the secondary task showed significantly faster reaction times (RT) for the HA compared to the LA condition (p=.003). Moreover, faster RT was found for LC compared to HC (p < .001). No interaction was found. Concerning the HR measure, despite the lack of main effects, an interaction between emotion and cognition was evidenced (p=.028). Post hoc analysis showed smaller HR for the HA compared to the LA condition only for LC (p=.049). Conclusion: The control of an aircraft is a very complex task involving strong cognitive demands and depends on the emotional state of pilots. According to the behavioral data, the experimental setup successfully generated distinct emotional and cognitive levels. As suggested by the interaction found in the HR measure, these two factors do not seem to have a cumulative impact on the sympathetic nervous system. Apparently, low cognitive workload makes pilots more sensitive to emotional variations. These results hint at the independence of data processing and emotional regulation. Further physiological data are necessary to confirm and disentangle this relation. This procedure may be useful for objectively monitoring pilots’ mental workload.
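As a minimal sketch of the HR computation described above (mean heart rate within fixed 4-minute segments), assuming R-peak timestamps have already been extracted from the ECG, the per-segment heart rate could be derived as follows; the steady 75 bpm toy rhythm is an assumption for demonstration:

```python
import numpy as np

def mean_hr_per_segment(r_peak_times_s, segment_s=240.0):
    """Mean heart rate (bpm) within consecutive fixed-length segments,
    computed from R-peak timestamps (in seconds) of an ECG recording."""
    r = np.asarray(r_peak_times_s, dtype=float)
    rr = np.diff(r)                      # RR intervals in seconds
    mid = (r[:-1] + r[1:]) / 2           # assign each interval to its midpoint
    n_seg = int(np.ceil(r[-1] / segment_s))
    hr = []
    for i in range(n_seg):
        mask = (mid >= i * segment_s) & (mid < (i + 1) * segment_s)
        # mean RR in the segment -> beats per minute; NaN if no beats fell here
        hr.append(60.0 / rr[mask].mean() if mask.any() else np.nan)
    return np.array(hr)

# toy example: a steady rhythm for 8 minutes yields two 4-minute segments
peaks = np.arange(0, 480, 0.8)           # one beat every 0.8 s = 75 bpm
hr = mean_hr_per_segment(peaks)
```

Averaging RR intervals per segment, rather than counting beats, keeps the estimate stable when a segment boundary splits an interval.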

Keywords: cognitive demands, emotion, flight simulator, heart rate, mental workload

Procedia PDF Downloads 247
31 Enhancing Strategic Counter-Terrorism: Understanding How Familial Leadership Influences the Resilience of Terrorist and Insurgent Organizations in Asia

Authors: Andrew D. Henshaw

Abstract:

The research examines the influence of familial and kinship-based leadership on the resilience of politically violent organizations. Organizations of this type frequently fight in the same conflicts yet are called 'terrorist' or 'insurgent' depending on the political focus of the time, and thus different approaches are used to combat them. The research considers them correlated phenomena with significant overlap and identifies strengths and vulnerabilities in resilience processes. The research employs paired case studies to examine resilience in organizations under significant external pressure, measuring three variables: 1. organizational robustness in terms of leadership and governance; 2. bounce-back response efficiency to external pressures and adaptation to endogenous and exogenous shock; 3. perpetuity of operational and attack capability, and political legitimacy. The research advances three hypotheses. First, familial/kinship leadership groups have a significant effect on organizational resilience in terms of informal operations. Second, non-familial/kinship organizations suffer from heightened security transaction costs and the social economics surrounding recruitment, retention, and replacement. Third, resilience in non-familial organizations likely stems from critical external supports such as state sponsorship or powerful patrons, rather than organic resilience dynamics. The case studies pair familial organizations with non-familial organizations. Set 1: the Haqqani Network (HQN), paired with Lashkar-e-Toiba (LeT). Set 2: Jemaah Islamiyah (JI), paired with the Abu Sayyaf Group (ASG). Case studies were selected on three requirements: contrasting governance types, exposure to significant external pressures, and geographical similarity. The case study sets were examined over 24 months following periods of significantly heightened operational activity.
This enabled empirical measurement of the variables as substantial external pressures came into force. The rationale for the research is clear: nearly all organizations have some nexus of familial interconnectedness. Examining familial leadership networks does not further our understanding of how terrorism and insurgency originate; the central focus of the research instead addresses how they persist. The sparse attention to this in the existing literature presents an unexplored yet important area of security studies. Furthermore, social capital in familial systems is largely automatic and organic, conferred at birth or through kinship. It reduces security vetting costs for recruits, fighters, and supporters, which lowers liabilities and entry costs while raising organizational efficiency and exit costs. A better understanding of these processes is needed to turn such strengths into weaknesses. The outcomes and implications of the research have critical relevance to future operational policy development. Increased clarity about internal trust dynamics, social capital, and power flows is essential to fracturing and manipulating kinship nexuses. This is highly valuable to external pressure mechanisms such as counter-terrorism, counterinsurgency, and strategic intelligence methods that aim to penetrate, manipulate, degrade, or destroy the resilience of politically violent organizations.

Keywords: Counterinsurgency (COIN), counter-terrorism, familial influence, insurgency, intelligence, kinship, resilience, terrorism

Procedia PDF Downloads 283
30 Digitization and Morphometric Characterization of Botanical Collection of Indian Arid Zones as Informatics Initiatives Addressing Conservation Issues in Climate Change Scenario

Authors: Dipankar Saha, J. P. Singh, C. B. Pandey

Abstract:

The Indian Thar Desert, the seventh largest in the world, is the country's main hot sand desert; it occupies nearly 385,000 km2, about 9% of the area of the country, and harbours a flora of 682 species (63 introduced) belonging to 352 genera and 87 families. The degree of endemism of plant species in the Thar Desert is 6.4 percent, relatively higher than that of the Sahara Desert, which is very significant for conservationists to envisage. The advent and development of computer technology for digitization and database management, coupled with the rapidly increasing importance of biodiversity conservation, resulted in the emergence of biodiversity informatics as a discipline of basic science with multiple applications. Aichi Target 19, an outcome of the Convention on Biological Diversity (CBD), specifically mandates the development of an advanced and shared biodiversity knowledge base. Information on species distributions in space is the crux of effective management of biodiversity in a rapidly changing world. The efficiency of biodiversity management is being increased rapidly by various stakeholders, such as researchers, policymakers, and funding agencies, through the knowledge and application of biodiversity informatics. Herbarium specimens are a vital repository for biodiversity conservation, especially in a climate change scenario; the digitization process usually aims to improve access and preserve delicate specimens, in doing so creating large sets of images as part of the existing repository, an arid plant information facility, for long-term future use. Leaf characters are important for describing taxa and distinguishing between them, and they can be measured from herbarium specimens as well.
As a part of this activity, laminar characterization (leaves being among the most important characters for assessing climate change impact) initially resulted in the classification of more than a thousand collections belonging to ten families: Acanthaceae, Aizoaceae, Amaranthaceae, Asclepiadaceae, Anacardiaceae, Apocynaceae, Asteraceae, Aristolochiaceae, Burseraceae, and Bignoniaceae. Taxonomic diversity indices have also been worked out, this being one of the important domains of biodiversity informatics approaches. The digitization process also encompasses workflows that incorporate automated systems, enabling the digitisation process to be expanded and sped up. The digitisation workflows are built on a modular system with the potential to be scaled up. They are being developed with a geo-referencing tool and additional quality control elements, finally placing specimen images and data into a fully searchable, web-accessible database. Our effort in this paper is to elucidate the role of biodiversity informatics and the present effort of database development for the institute's existing botanical collection. This effort is expected to form part of various global initiatives toward an effective biodiversity information facility. It will enable access to plant biodiversity data that are fit for use by scientists and decision makers working on biodiversity conservation and sustainable development in the region and in iso-climatic situations of the world.
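One common taxonomic diversity measure of the kind mentioned above is the Shannon index; a minimal sketch follows, where the family abundances are hypothetical and not the collection's actual figures:

```python
import math
from collections import Counter

def shannon_index(specimens_by_taxon):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxon abundances."""
    counts = Counter(specimens_by_taxon)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# toy example: specimen records labelled by family (hypothetical counts)
records = (["Acanthaceae"] * 40 + ["Amaranthaceae"] * 30
           + ["Asteraceae"] * 20 + ["Apocynaceae"] * 10)
h = shannon_index(records)
```

The index rises with both the number of taxa and the evenness of their abundances, peaking at ln(S) when all S taxa are equally represented.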

Keywords: biodiversity informatics, climate change, digitization, herbarium, laminar characters, web accessible interface

Procedia PDF Downloads 201
29 Big Data Applications for Transportation Planning

Authors: Antonella Falanga, Armando Cartenì

Abstract:

"Big data" refers to extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins such as sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, and efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represents a transformative force reshaping industries worldwide. Its pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data has an impact across multiple sectors, such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment, as well as mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications span optimization of vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of overall transportation systems, and also mitigation of pollutant emissions, contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments.
Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges: data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies that balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data can enhance rational decision-making for mobility choices and are imperative for adeptly planning and allocating investments in transportation infrastructure and services.

Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning

Procedia PDF Downloads 32
28 Investigation of Chemical Effects on the Lγ2,3 and Lγ4 X-ray Production Cross Sections for Some Compounds of 66Dy at Photon Energies Close to L1 Absorption-Edge Energy

Authors: Anil Kumar, Rajnish Kaur, Mateusz Czyzycki, Alessandro Migilori, Andreas Germanos Karydas, Sanjiv Puri

Abstract:

The radiative decay of Li (i = 1-3) sub-shell vacancies produced through photoionization results in the production of a characteristic emission spectrum comprising several X-ray lines, whereas non-radiative vacancy decay results in an Auger electron spectrum. Accurate and reliable data on the Li (i = 1-3) sub-shell X-ray production (XRP) cross sections are of considerable importance for the investigation of atomic inner-shell ionization processes as well as for quantitative elemental analysis of different types of samples employing the energy dispersive X-ray fluorescence (EDXRF) analysis technique. At incident photon energies in the vicinity of the absorption edge energies of an element, many-body effects, including electron correlation, core relaxation, inter-channel coupling, and post-collision interactions, become significant in the photoionization of atomic inner shells. Further, in the case of compounds, the characteristic emission spectrum of a specific element is expected to be influenced by the chemical environment (coordination number, oxidation state, nature of the ligands/functional groups attached to the central atom, etc.). These chemical effects on L X-ray fluorescence parameters have previously been investigated by performing measurements at incident photon energies much higher than the Li (i = 1-3) sub-shell absorption edge energies using EDXRF spectrometers. In the present work, the cross sections for production of the Lk (k = γ2,3, γ4) X-rays have been measured for some compounds of 66Dy (Dy2O3, Dy2(CO3)3, Dy2(SO4)3·8H2O, and DyI2) and for Dy metal by tuning the incident photon energies a few eV above the L1 absorption-edge energy, in order to investigate the influence of chemical effects on these cross sections in the presence of the many-body effects that become significant at photon energies close to the absorption-edge energies.
The present measurements were performed under vacuum at the IAEA end-station of the X-ray fluorescence beamline (10.1L) of the ELETTRA synchrotron radiation facility (Trieste, Italy) using self-supporting pressed-pellet targets (1.3 cm diameter, nominal thickness ~176 mg/cm2) of the 66Dy compounds (procured from Sigma Aldrich) and a metallic foil of 66Dy (nominal thickness ~3.9 mg/cm2, procured from Goodfellow, UK). The measured cross sections have been compared with theoretical values calculated using Dirac-Hartree-Slater (DHS) model based fluorescence and Coster-Kronig yields, Dirac-Fock (DF) model based X-ray emission rates, and two sets of L1 sub-shell photoionization cross sections: one based on the non-relativistic Hartree-Fock-Slater (HFS) model and one deduced from the self-consistent Dirac-Hartree-Fock (DHF) model based total photoionization cross sections. The measured XRP cross sections for the Lγ2,3 and Lγ4 X-rays, for 66Dy as well as for its compounds, are found to be higher by ~14-36% than the two sets of calculated values. It is worth mentioning that the Lγ2,3 and Lγ4 X-ray lines originate from the filling of L1 sub-shell vacancies by outer sub-shell (N2,3 and O2,3) electrons, which are much more sensitive to the chemical environment around the central atom. The observed differences between the measured and theoretical values are expected to arise from the combined influence of the many-body effects and the chemical effects.

Keywords: chemical effects, L X-ray production cross sections, Many body effects, Synchrotron radiation

Procedia PDF Downloads 107
27 Being Chinese Online: Discursive (Re)Production of Internet-Mediated Chinese National Identity

Authors: Zhiwei Wang

Abstract:

Much emphasis has been placed on the political dimension of digitised Chinese national(ist) discourses and their embodied national identities, neglecting other important dimensions constitutive of their discursive nature. A further investigation into how Chinese national(ist) discourses are daily (re)shaped online by diverse socio-political actors (especially ordinary users) is crucial; it can contribute not only deeper understandings of Chinese national sentiments on China’s Internet, beyond the excessive focus on their passionate, politically charged facet, but also richer insights into the socio-technical ecology of the contemporary Chinese digital (and physical) world. This research adopts an ethnographic methodology in which the ‘fieldsites’ are Sina Weibo and bilibili. The primary data collection method is virtual ethnographic observation of everyday national(ist) discussions on both platforms. Where data obtained via observations do not suffice to answer the research questions, in-depth online qualitative interviews will be conducted with ‘key actors’ identified from those observations as discursively (re)producing Chinese national identity on each ‘fieldsite’, to complement the observational data. Critical discourse analysis is employed to analyse the data, and NVivo is utilised during data coding. From November 2021 to December 2022, 35 weeks of digital ethnographic observations were conducted, yielding 35 sets of fieldnotes. The strategy adopted for the initial stage of observations was keyword searching: typing into the search box on Sina Weibo and bilibili keywords related to China as a nation and then observing the search results. Throughout the 35 weeks of observations, six keywords were employed on Sina Weibo and two on bilibili, and textual content created by ordinary users was the main focus.
Based on the fieldnotes of the first week’s observations, multifarious national(ist) discourses on Sina Weibo and bilibili have been found: targeted both at national ‘Others’ and at ‘Us’; on both historical and real-world dimensions; both aligning with and differing from, or even conflicting with, official discourses; and comprising both direct national(ist) expressions and articulations framed as national(ist) attachment but serving other purposes. Second, Sina Weibo and bilibili users have agency in interpreting and deploying concrete national(ist) discourses, despite the leading role played by the government and the two platforms in setting the basic framework of national expression. There are also disputes, and even quarrels, between users over the meaning of concrete components of ‘nation-ness’, and some (in)direct dissent from officially defined ‘mainstream’ discourses, though often expressed in much more mundane, discursive, and playful ways. Third, the (re)production of national(ist) discourses on Sina Weibo and bilibili depends not only upon the technical affordances and limitations of the two sites but also, to a larger degree, upon established socio-political mechanisms and conventions in offline China, e.g., the authorities’ acquiescence in citizens’ freedom to understand and explain concrete elements of national discourses, while setting the basic framework of national narratives, so long as citizens’ own national(ist) expressions do not cross political bottom lines or develop into a mobilising power that could shake social stability.

Keywords: national identity, national(ist) discourse(s), everyday nationhood/nationalism, Chinese nationalism, digital nationalism

Procedia PDF Downloads 60
26 Utilization of Informatics to Transform Clinical Data into a Simplified Reporting System to Examine the Analgesic Prescribing Practices of a Single Urban Hospital’s Emergency Department

Authors: Rubaiat S. Ahmed, Jemer Garrido, Sergey M. Motov

Abstract:

Clinical informatics (CI) enables the transformation of data into a systematic organization that improves the quality of care and the generation of positive health outcomes. Innovative technology that compiles accurate data on analgesic utilization in the emergency department (ED) can enhance pain management in this important clinical setting. We aim to establish a simplified reporting system through CI to examine and assess analgesic prescribing practices in the ED while executing a U.S. federal grant project on opioid reduction initiatives. Queried data points of interest from a level-one trauma ED’s electronic medical records were used to create data sets and develop informational/visual reporting dashboards (in Microsoft Excel and Google Sheets) covering analgesic usage across several pre-defined parameters and performance metrics. The data were then qualitatively analyzed by departmental clinicians and leadership to evaluate ED analgesic prescribing trends. During a 12-month reporting period (Dec. 1, 2020 – Nov. 30, 2021) for the ongoing project, about 41% of all ED patient visits (N = 91,747) were for pain conditions, of which 81.6% received analgesics in the ED and at discharge (D/C). Of those treated with analgesics, 24.3% received opioids and 75.7% received opioid alternatives, including non-pharmacological modalities, in the ED and at D/C. Among patients receiving analgesics, 56.7% were aged 18-64, 51.8% were male, 51.7% were white, and 66.2% had government-funded health insurance. Ninety-one percent of all opioids were prescribed in the ED, with intravenous (IV) morphine, IV fentanyl, and morphine sulfate immediate release (MSIR) tablets accounting for 88.0% of ED-dispensed opioids. Of the 9.3% of all opioids prescribed at D/C, MSIR was dispensed 72.1% of the time; hydrocodone, oxycodone, and tramadol each accounted for only 10-15%, and hydromorphone for 0%.
Of the opioid alternatives, non-steroidal anti-inflammatory drugs were utilized 60.3% of the time, local anesthetics and ultrasound-guided nerve blocks 23.5%, and acetaminophen 7.9%, as the primary non-opioid drug categories prescribed by ED providers. Non-pharmacological analgesia included virtual reality and other modalities. An average of 18.5 ED opioid orders and 1.9 opioid D/C prescriptions per 102.4 daily ED patient visits was observed for the period. Compared to other specialties within our institution, ED providers account for 2.0% of opioid D/C prescriptions, versus a national average of 4.8%. Opioid alternatives accounted for 69.7% and 30.3% of usage, versus 90.7% and 9.3% for opioids, in the ED and at D/C, respectively. There is a pressing need for concise, relevant, and reliable clinical data on analgesic utilization so that ED providers and leadership can evaluate prescribing practices and make data-driven decisions. Basic computer software can be used to create effective visual reporting dashboards with indicators that convey relevant and timely information in an easy-to-digest manner. Using CI and dashboard reporting, we accurately examined our ED's analgesic prescribing practices. Such reporting tools can quickly identify key performance indicators and prioritize data to enhance pain management and promote safe prescribing practices in the emergency setting.
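The kind of dashboard aggregation described above can be sketched with basic tooling. The following is a minimal illustration in Python with pandas, using invented records and hypothetical column names, not the study's actual EMR schema or figures:

```python
import pandas as pd

# Hypothetical synthetic ED visit records; the column names and values
# are illustrative only.
visits = pd.DataFrame({
    "visit_id":   [1, 2, 3, 4, 5, 6],
    "pain_visit": [True, True, True, True, False, False],
    "analgesic":  ["opioid", "nsaid", "nerve_block", None, None, None],
})

pain = visits[visits["pain_visit"]]            # visits for pain conditions
treated = pain[pain["analgesic"].notna()]      # pain visits that got analgesics

# Dashboard-style indicators: share of pain visits, treatment rate,
# and the opioid vs. opioid-alternative split.
summary = {
    "pct_pain_visits": 100 * len(pain) / len(visits),
    "pct_treated": 100 * len(treated) / len(pain),
    "pct_opioid": 100 * (treated["analgesic"] == "opioid").mean(),
}
print(summary)
```

From a summary like this, per-drug and per-period breakdowns can be added with `groupby`, and the resulting table exported to the spreadsheet serving as the dashboard.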

Keywords: clinical informatics, dashboards, emergency department, health informatics, healthcare informatics, medical informatics, opioids, pain management, technology

Procedia PDF Downloads 117
25 European Food Safety Authority (EFSA) Safety Assessment of Food Additives: Data and Methodology Used for the Assessment of Dietary Exposure for Different European Countries and Population Groups

Authors: Petra Gergelova, Sofia Ioannidou, Davide Arcella, Alexandra Tard, Polly E. Boon, Oliver Lindtner, Christina Tlustos, Jean-Charles Leblanc

Abstract:

Objectives: To assess chronic dietary exposure to food additives in different European countries and population groups. Method and Design: The European Food Safety Authority’s (EFSA) Panel on Food Additives and Nutrient Sources added to Food (ANS) estimates chronic dietary exposure to food additives in order to re-evaluate food additives previously authorized in Europe. For this, EFSA uses concentration values (usage and/or analytical occurrence data) reported by the food industry and European countries through regular public calls for data. These are combined, at the individual level, with national food consumption data from the EFSA Comprehensive European Food Consumption Database, which includes data from 33 dietary surveys in 19 European countries and considers six population groups (infants, toddlers, children, adolescents, adults, and the elderly). The EFSA ANS Panel estimates dietary exposure for each individual in the EFSA Comprehensive Database by combining the occurrence levels per food group with the corresponding consumption amount per kg body weight. An individual average exposure per day is calculated, resulting in distributions of individual exposures per survey and population group. Based on these distributions, the average and the 95th percentile of exposure are calculated per survey and per population group. Dietary exposure is assessed based on two different sets of data: (a) maximum permitted levels (MPLs) of use set down in EU legislation (the regulatory maximum level exposure assessment scenario) and (b) usage levels and/or analytical occurrence data (the refined exposure assessment scenario). The refined exposure assessment scenario is sub-divided into the brand-loyal consumer scenario and the non-brand-loyal consumer scenario.
For the brand-loyal consumer scenario, the consumer is considered to be exposed on a long-term basis to the highest reported usage/analytical level for one food group and to the mean level for the remaining food groups. For the non-brand-loyal consumer scenario, the consumer is considered to be exposed on a long-term basis to the mean reported usage/analytical level for all food groups. Additional exposure from sources other than the direct addition of food additives (i.e., natural presence, contaminants, and use as carriers of food additives) is also estimated, as appropriate. Results: Since 2014, this methodology has been applied in about 30 food additive exposure assessments conducted as part of scientific opinions of the EFSA ANS Panel. For example, under the non-brand-loyal scenario, the highest 95th percentile of exposure to α-tocopherol (E 307) and ammonium phosphatides (E 442) was estimated in toddlers, at up to 5.9 and 8.7 mg/kg body weight/day, respectively. The same estimates under the brand-loyal scenario in toddlers were 8.1 and 20.7 mg/kg body weight/day, respectively; under the regulatory maximum level exposure assessment scenario, they were 11.9 and 30.3 mg/kg body weight/day, respectively. Conclusions: Detailed and up-to-date information on food additive concentration values (usage and/or analytical occurrence data) and on food consumption enables chronic dietary exposure to food additives to be assessed at more realistic levels.
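The per-individual aggregation behind the two refined scenarios can be sketched as follows. This is a simplified illustration with invented consumption and occurrence figures, not EFSA's data or implementation; for brevity, the brand-loyal scenario here fixes the food group with the largest mean contribution at its highest reported level, rather than selecting it per individual:

```python
import numpy as np

# Invented inputs: per-individual consumption (g/kg bw/day) for three food
# groups, plus mean and highest reported occurrence levels (mg/kg food).
rng = np.random.default_rng(0)
n_individuals, n_groups = 200, 3
consumption = rng.gamma(2.0, 1.5, size=(n_individuals, n_groups))
mean_level = np.array([10.0, 5.0, 2.0])   # mean occurrence, mg/kg food
max_level = np.array([25.0, 12.0, 6.0])   # highest reported level, mg/kg food

def exposure(levels):
    # mg additive / kg bw / day = sum over groups of level * consumption / 1000
    return consumption @ levels / 1000.0

# Non-brand-loyal: mean reported level for every food group.
non_brand = exposure(mean_level)

# Brand-loyal: highest reported level for one food group (here, the group
# contributing most on average), mean levels for the rest.
dominant = np.argmax(mean_level * consumption.mean(axis=0))
brand_levels = mean_level.copy()
brand_levels[dominant] = max_level[dominant]
brand = exposure(brand_levels)

# Summary statistics per scenario: the average and the 95th percentile,
# as reported per survey and population group.
for name, exp in [("non-brand-loyal", non_brand), ("brand-loyal", brand)]:
    print(name, round(exp.mean(), 4), round(np.percentile(exp, 95), 4))
```

In the real assessment the same calculation is repeated per dietary survey and population group, and occurrence levels come from the calls for data rather than being drawn at random.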

Keywords: α-tocopherol, ammonium phosphatides, dietary exposure assessment, European Food Safety Authority, food additives, food consumption data

Procedia PDF Downloads 288
24 The Stem Cell Transcription Co-Factor ZNF521 Sustains the MLL-AF9 Fusion Protein in Acute Myeloid Leukemias by Altering the Gene Expression Landscape

Authors: Emanuela Chiarella, Annamaria Aloisio, Nisticò Clelia, Maria Mesuraca

Abstract:

ZNF521 is a stem cell-associated transcription co-factor that plays a crucial role in the homeostatic regulation of the stem cell compartment in the hematopoietic, osteo-adipogenic, and neural systems. In normal hematopoiesis, primary human CD34+ hematopoietic stem cells typically display high expression of ZNF521, while its mRNA levels rapidly decrease as these progenitors progress towards erythroid, granulocytic, or B-lymphoid differentiation. However, most acute myeloid leukemias (AMLs) and leukemia-initiating cells retain high ZNF521 expression. AMLs are often characterized by chromosomal translocations involving the Mixed Lineage Leukemia (MLL) gene, which gives rise to a variety of fusion oncogenes derived from genes normally required during hematopoietic development; once fused, these promote epigenetic and transcription-factor dysregulation. The chromosomal translocation t(9;11)(p21-22;q23), which fuses the MLL gene with the AF9 gene, results in a monocytic immunophenotype with an aggressive course, frequent relapses, and short survival. To better understand the dysfunctional transcriptional networks related to these genetic aberrations, AML gene expression profile datasets were queried for ZNF521 expression and its correlations with specific gene rearrangements and mutations. The results showed that ZNF521 mRNA levels are associated with specific genetic aberrations: the highest expression levels were observed in AMLs with t(11q23) MLL rearrangements in two distinct datasets (MILE and den Boer); elevated ZNF521 mRNA levels were also found in AMLs with t(7;12) or with internal rearrangements of chromosome 16. By contrast, relatively low ZNF521 expression was associated with the t(8;21) translocation, which correlates with the AML1-ETO fusion gene, with the t(15;17) translocation, and with AMLs carrying FLT3-ITD, NPM1, or CEBPα double mutations.
In vitro, we found that enforced co-expression of ZNF521 in cord blood-derived CD34+ cells induced a significant proliferative advantage, enhancing the effects of MLL-AF9 on the induction of proliferation and the expansion of leukemic progenitor cells. Transcriptome profiling of CD34+ cells transduced with MLL-AF9, ZNF521, or a combination of the two transgenes highlighted specific sets of up- or down-regulated genes involved in the leukemic phenotype, including those encoding transcription factors, epigenetic modulators, and cell cycle regulators, as well as those engaged in the transport or uptake of nutrients. These data underscore the functional cooperation between ZNF521 and MA9 in the development, maintenance, and clonal expansion of leukemic cells. Finally, silencing of ZNF521 in MLL-AF9-transformed primary CD34+ cells inhibited their proliferation and led to their extinction, while ZNF521 silencing in the MLL-AF9+ THP-1 cell line impaired their growth and clonogenicity. Taken together, our data highlight the role of ZNF521 in the control of self-renewal in the immature compartment of malignant hematopoiesis, where, by altering the gene expression landscape, it contributes to the development and/or maintenance of AML in concert with the MLL-AF9 fusion oncogene.

Keywords: AML, human zinc finger protein 521 (hZNF521), mixed lineage leukemia gene (MLL) AF9 (MLLT3 or LTG9), cord blood-derived hematopoietic stem cells (CB-CD34+)

Procedia PDF Downloads 77
23 Construction of an Assessment Tool for Early Childhood Development in the World of Discovery™ Curriculum

Authors: Divya Palaniappan

Abstract:

Early childhood assessment tools must measure the quality and appropriateness of a curriculum with respect to the culture and age of the children. Many preschool assessment tools lack established psychometric properties and were developed to measure only a few areas of development, such as specific skills in music, art, and adaptive behavior. Existing preschool assessment tools in India are predominantly informal and fraught with the judgmental bias of observers. The World of Discovery™ curriculum focuses on accelerating the physical, cognitive, language, social, and emotional development of pre-schoolers in India through various activities. The curriculum caters to every child irrespective of their dominant intelligence, as per Gardner’s Theory of Multiple Intelligences, which concluded that "even students as young as four years old present quite distinctive sets and configurations of intelligences". The curriculum introduces a new theme every week, in which concepts are explained through various activities so that children with different dominant intelligences can understand them. For example, the ‘Insects’ theme is explained through rhymes, craft, and a counting corner, so children whose dominant intelligence is musical, bodily-kinesthetic, or logical-mathematical can each grasp the concept. The child’s progress is evaluated using an assessment tool that measures a cluster of inter-dependent developmental areas, physical, cognitive, language, social, and emotional development, which for the first time renders a multi-domain approach. The assessment tool is a 5-point rating scale covering these developmental aspects: cognitive, language, physical, social, and emotional. Each activity strengthens one or more of the developmental aspects. During the cognitive corner, for instance, the child’s perceptual reasoning, pre-math abilities, hand-eye co-ordination, and fine motor skills can be observed and evaluated.
The tool differs from traditional assessment methodologies by providing a framework that allows teachers to assess a child’s continuous development objectively, with respect to specific activities, in real time. A pilot study of the tool was conducted with a sample of 100 children aged 2.5 to 3.5 years. The data were collected over a period of 3 months across 10 centers in Chennai, India, scored by the class teacher once a week. The teachers were trained by psychologists on age-appropriate developmental milestones to minimize observer bias. The norms were calculated from the mean and standard deviation of the observed data. The results indicated high internal consistency among the parameters and showed that cognitive development improved with physical development; a significant positive relationship between physical and cognitive development was likewise observed among children in a study by Sibley and Etnier. The children's ‘Comprehension’ ability was found to be greater than their ‘Reasoning’ and pre-math abilities, as expected from the preoperational stage of Piaget’s theory of cognitive development. The average scores of the various parameters obtained through the tool corroborate psychological theories of child development, offering strong face validity. The study provides a comprehensive mechanism to assess a child’s development and differentiate high performers from the rest. Based on the average scores, the difficulty level of activities can be increased or decreased to nurture the development of pre-schoolers, and appropriate teaching methodologies can be devised.
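Deriving norms from the mean and standard deviation of observed scores, as described above, can be sketched as follows; the scores and the ±1 SD banding rule here are invented for illustration, not the study's actual data or cut-offs:

```python
import statistics

# Invented scores on the 5-point rating scale for one developmental aspect.
scores = [3.2, 3.8, 4.1, 2.9, 3.5, 4.4, 3.1, 3.7, 3.9, 3.3]

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)  # population SD of the observed sample

def band(score):
    """Classify a child's score against hypothetical mean +/- 1 SD norm bands."""
    if score < mean - sd:
        return "below expected range"
    if score > mean + sd:
        return "above expected range"
    return "within expected range"

print(round(mean, 2), round(sd, 2), band(4.5))
```

In practice such norms would be computed per age band and per developmental aspect, so that a child's weekly score can be read against peers of the same age.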

Keywords: child development, early childhood assessment, early childhood curriculum, quantitative assessment of preschool curriculum

Procedia PDF Downloads 331
22 The Use of Artificial Intelligence in the Context of a Space Traffic Management System: Legal Aspects

Authors: George Kyriakopoulos, Photini Pazartzis, Anthi Koskina, Crystalie Bourcha

Abstract:

The need to secure safe access to and return from outer space, as well as to ensure the viability of outer space operations, keeps alive the debate over organizing space traffic through a Space Traffic Management (STM) system. The proliferation of outer space activities in recent years, together with the dynamic emergence of the private sector, has gradually produced a diverse universe of actors operating in outer space. These developments have an increasingly adverse impact on outer space sustainability, as the growing number of space debris clearly demonstrates. This landscape poses considerable threats to the outer space environment and its operators, which need to be addressed by a combination of scientific-technological measures and regulatory interventions. In this context, recourse to recent technological advancements, in particular Artificial Intelligence (AI) and machine learning systems, could achieve exponential results in promoting space traffic management with respect to collision avoidance as well as launch and re-entry procedures/phases. New technologies can support the prospects of a successful space traffic management system at an international scale by enabling, inter alia, timely, accurate, and analytical processing of large data sets, rapid decision-making, more precise space debris identification and tracking, and overall minimization of collision risks and reduction of operational costs. Moreover, a significant part of space activities (i.e., the launch and/or re-entry phase) takes place in airspace rather than in outer space, hence the overall discussion also involves the highly developed, both technically and legally, international (and national) Air Traffic Management (ATM) system. Nonetheless, from a regulatory perspective, the use of AI for the purposes of space traffic management puts forward implications that merit particular attention.
Key issues in this regard include the characterization of AI-based activities as space activities, the designation of the applicable legal regime (international space or air law, national law), the assessment of the nature and extent of international legal obligations regarding space traffic coordination, and the appropriate liability regime applicable to AI-based technologies operating for space traffic coordination, taking into particular consideration the dense regulatory developments at the EU level. In addition, the prospects of institutionalizing international cooperation and promoting an international governance system, together with the challenges of establishing a comprehensive international STM regime, are revisited in light of the intervention of AI technologies. This paper aims to examine, in toto, the regulatory implications of the use of AI technology in the context of space traffic management operations and its key correlating concepts (SSA, space debris mitigation), drawing in particular on international and regional considerations in the field of STM (e.g., UNCOPUOS, the International Academy of Astronautics, and the European Space Agency, among other actors), the promising advancements of the EU approach to AI regulation, and, last but not least, national approaches regarding the use of AI in the context of space traffic management. Acknowledgment: The present work was co-funded by the European Union and Greek national funds through the Operational Program "Human Resources Development, Education and Lifelong Learning" (NSRF 2014-2020), under the call "Supporting Researchers with an Emphasis on Young Researchers – Cycle B" (MIS: 5048145).

Keywords: artificial intelligence, space traffic management, space situational awareness, space debris

Procedia PDF Downloads 214
21 Developing Primal Teachers beyond the Classroom: The Quadrant Intelligence (Q-I) Model

Authors: Alexander K. Edwards

Abstract:

Introduction: The moral dimension of teacher education globally has assumed a new paradigm of thinking based on public gain (return on investment), value creation (quality), professionalism (practice), and business strategy (innovation). Abundant literature reveals an interesting revolutionary trend in complementing the raising of teachers and academic performance. Because of global competition in knowledge creation and services, the C21st teacher at all levels is expected to be resourceful, a strategic thinker, socially intelligent, relationally adept, and entrepreneurially astute. This study is a significant contribution to practice and innovation in raising exemplary, or primal, teachers. The qualities needed were framed as a ‘Quadrant Intelligence (Q-i)’ model for primal teacher leadership beyond the classroom. The researcher started by examining the issue that the majority of teachers in the Ghana Education Service (GES) need this Q-i to be effective and efficient, with the conceptual framing supplying the determinants of such Q-i. This is significant for global employability and versatility in teacher education to create premium and primal teacher leadership, which is again gaining high attention in scholarship due to failing schools. The moral aspect of teachers failing learners is a highly important discussion: in GES, some schools score zero percent at the basic education certificate examination (BECE). The questions are: what will make any professional teacher highly productive, marketable, and entrepreneurial? What will give teachers the moral consciousness to do their best to succeed? Method: This study set out to develop a model for primal teachers in GES as an innovative way to highlight premium development for C21st business-education acumen through desk reviews.
The study is conceptually framed around certain skill sets, namely strategic thinking, social intelligence, relational and emotional intelligence, and entrepreneurship, to answer three main burning questions and related hypotheses. The study then applied a causal-comparative methodology with a purposive sampling technique (N=500) from CoE, GES, NTVI, and other teacher associations. Participants responded to a 30-item, researcher-developed questionnaire. Data were analyzed on the quadrant constructs and reported as ex post facto analyses of variance and regressions. Multiple associations were tested for statistical significance (p=0.05), and causes and effects are postulated for scientific discussion. Findings: These quadrants were found to be very significant in teacher development, with significant variations across demographic groups. However, most teachers lack considerable skills in entrepreneurship, leadership in teaching and learning, and business thinking strategies, and this has a significant effect on practices and outcomes. Conclusion and Recommendations: It is quite conclusive, therefore, that GES teachers may need further instruction in innovation and creativity to transform knowledge creation into business ventures. In-service training (INSET) has to be comprehensive, and teacher education curricula at colleges may have to be revisited. Teachers have the potential to raise their social capital, to be entrepreneurs, and to exhibit professionalism beyond their community service. Their primal leadership focus will benefit many clienteles, including students and social circles. The recommendations examine the policy implications for curriculum design, practice, innovation, and educational leadership.

Keywords: emotional intelligence, entrepreneurship, leadership, quadrant intelligence (q-i), primal teacher leadership, strategic thinking, social intelligence

Procedia PDF Downloads 275
20 Effects of Combined Lewis Acid and Ultrasonic Pretreatment on the Physicochemical Properties of Heat-Treated Moso Bamboo

Authors: Tianfang Zhang, Luxi He, Zhengbin He, Songlin Yi

Abstract:

Moso bamboo is a common non-wood forest resource in Asia that is widely used in construction, furniture, and other fields. Owing to the heterogeneous structure and numerous hygroscopic groups of bamboo, deformation occurs through moisture absorption and desorption as ambient temperature and humidity change. Thermal modification is a well-established commercial technology for improving the dimensional stability of bamboo; however, its high energy consumption and carbon emissions limit further development. Previous studies have indicated that inorganic salt-assisted thermal modification can significantly reduce moisture absorption and energy consumption. Metal chlorides, a representative class, show Lewis acid properties when dissolved in water, generating metal ion ligand complexes. In addition, ultrasonic treatment, an efficient and environmentally friendly physical treatment, improves the accessibility of impregnation agents and intensifies mass and heat transfer during reactions. To save energy and reduce deformation, this study elucidates the influence of zinc chloride-ultrasonic treatment on the physicochemical properties of heat-treated bamboo and explains the mechanism of bamboo deformation with Lewis acid. Three sets of parameters (inorganic salt concentration, ultrasonic frequency, and heat treatment temperature) were investigated, and an optimized process was identified: 5% (w/w) zinc chloride solution, 40 kHz ultrasonic waves, and heat treatment at 160 °C. The samples were characterized by different means to analyze changes in their macroscopic features, pore structure, chemical structure, and chemical composition. The results suggested that the maximum weight loss rate was reduced by at least 19.75%, and the maximum thermal degradation peak of hemicellulose was shifted significantly forward.
Under the optimal condition, hygroscopicity was reduced by 10.15%, relative crystallinity was increased by 4.4%, the surface contact angle was increased by 25.2%, and the color change was increased by 23.60. Electron microscopy showed that the treated surface became rougher and cracks appeared in some weaker areas, accelerating starch loss and removing granular attachments around the pits. Through ion diffusion, zinc ions penetrated the hemicellulose and part of the amorphous region of cellulose; parts of the cell wall structure underwent swelling and degradation, leaving parenchyma cells ruptured. The Raman spectra indicated that, compared with conventional thermal modification, hemicellulose thermal degradation and lignin migration were promoted by the Lewis acid under dilute-acid thermal conditions. As shown in this work, combined Lewis acid and ultrasonic pretreatment is an environmentally friendly, safe, and efficient physicochemical pretreatment that improved the dimensional stability of Moso bamboo and lowered the thermal degradation conditions. This method has great potential in the field of bamboo heat treatment and might provide guidance for producing dark bamboo flooring.

Keywords: Moso bamboo, Lewis acid, ultrasound, heat treatment

Procedia PDF Downloads 35
19 A Copula-Based Approach for the Assessment of Severity of Illness and Probability of Mortality: An Exploratory Study Applied to Intensive Care Patients

Authors: Ainura Tursunalieva, Irene Hudson

Abstract:

Continuous improvement of both the quality and safety of health care is an important goal in Australia and internationally. The intensive care unit (ICU) receives patients with illnesses of wide variety and severity. Accurately identifying patients at risk of developing complications or dying is crucial to increasing healthcare efficiency, so it is essential for clinicians and researchers to have a robust framework capable of evaluating the risk profile of a patient. ICU scoring systems provide such a framework. The Acute Physiology and Chronic Health Evaluation III (APACHE III) and the Simplified Acute Physiology Score II (SAPS II) are ICU scoring systems frequently used to assess the severity of acute illness. These scoring systems collect multiple risk factors for each patient, including physiological measurements, and distill the assessment outcomes of the individual risk factors into a single numerical value, with a higher score indicating a more severe patient condition. The Mortality Probability Model II (MPM II), in turn, uses logistic regression on independent risk factors to predict a patient’s probability of mortality. An important, overlooked limitation of SAPS II and MPM II is that, to date, they do not include interaction terms between a patient’s vital signs. This is a prominent oversight, as there is likely an interplay among vital signs: the co-existence of certain conditions may pose a greater health risk than when those conditions exist independently. One barrier to including such interaction terms in predictive models is dimensionality, which makes variable selection difficult. We propose an innovative scoring system that takes into account the dependence structure among a patient’s vital signs, such as systolic and diastolic blood pressure, heart rate, pulse interval, and peripheral oxygen saturation. Copulas will capture the dependence among both normally distributed and skewed variables, as some of the vital sign distributions are skewed.
The estimated dependence parameter will then be incorporated into the traditional scoring systems to adjust the points allocated for the individual vital sign measurements. The same dependence parameter will also be used to create an alternative copula-based model for predicting a patient’s probability of mortality. The new copula-based approach will accommodate not only a patient’s trajectories of vital signs but also the joint dependence probabilities among the vital signs. We hypothesise that this approach will produce more stable assessments and lead to more time efficient and accurate predictions. We will use two data sets: (1) 250 ICU patients admitted once to the Chui Regional Hospital (Kyrgyzstan) and (2) 37 ICU patients’ agitation-sedation profiles collected by the Hunter Medical Research Institute (Australia). Both the traditional scoring approach and our copula-based approach will be evaluated using the Brier score to indicate overall model performance, the concordance (or c) statistic to indicate the discriminative ability (or area under the receiver operating characteristic (ROC) curve), and goodness-of-fit statistics for calibration. We will also report discrimination and calibration values and establish visualization of the copulas and high dimensional regions of risk interrelating two or three vital signs in so-called higher dimensional ROCs.
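Estimating a copula dependence parameter between two vital signs can be sketched as below for the Gaussian-copula case. The simulated heart-rate and blood-pressure data and the choice of a Gaussian copula are illustrative assumptions, not the study's actual method or ICU records:

```python
import numpy as np
from scipy import stats

# Simulated vital signs: roughly normal heart rate and a skewed systolic
# blood pressure that partially depends on it. Values are invented.
rng = np.random.default_rng(42)
n = 500
heart_rate = rng.normal(80, 12, n)
systolic = 100 + 0.4 * (heart_rate - 80) + rng.gamma(2.0, 5.0, n)

def to_normal_scores(x):
    # Probability integral transform via ranks, then Gaussian quantiles.
    # This strips the (possibly skewed) marginals, leaving only dependence.
    ranks = stats.rankdata(x)
    u = ranks / (len(x) + 1)          # pseudo-observations in (0, 1)
    return stats.norm.ppf(u)

z1 = to_normal_scores(heart_rate)
z2 = to_normal_scores(systolic)
rho = np.corrcoef(z1, z2)[0, 1]       # Gaussian copula dependence parameter
print(round(rho, 3))
```

Because the marginals are removed before the correlation is taken, the estimate is unaffected by the skewness of the blood-pressure distribution; the resulting parameter is the kind of quantity that could then adjust the points allocated to individual vital-sign measurements.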

Keywords: copula, intensive unit scoring system, ROC curves, vital sign dependence

Procedia PDF Downloads 127
18 Integrated Mathematical Modeling and Advance Visualization of Magnetic Nanoparticle for Drug Delivery, Drug Release and Effects to Cancer Cell Treatment

Authors: Norma Binti Alias, Che Rahim Che The, Norfarizan Mohd Said, Sakinah Abdul Hanan, Akhtar Ali

Abstract:

This paper discusses the transport of magnetically targeted drugs through blood within vessels, tissues, and cells. Three integrated mathematical models are discussed and analyzed for the concentration of drug and blood flow through magnetic nanoparticles. Cell therapy has brought advances in the field of nanotechnology for fighting tumors, and the systematic therapeutic effect on single cells can reduce the growth of cancer tissue. This nanoscale system can be measured and modeled by identifying key parameters and applying fundamental principles of mathematical modeling and simulation. The mathematical modeling of single-cell growth depends on three types of cell densities: proliferative, quiescent, and necrotic cells. The aim of this paper is to enhance the simulation of these three types of models. The first model represents the transport of drugs by coupled partial differential equations (PDEs) of 3D parabolic type in a cylindrical coordinate system. This model is integrated with non-Newtonian flow equations, treating the flowing blood as the transport medium, and with the magnetic force on the magnetic nanoparticles. The interaction between the magnetic force and the drug's magnetic properties produces induced currents, and the applied magnetic field yields forces that tend to slow the movement of blood and bring the drug to the cancer cells. Nanoscale devices allow the drug to leave the blood vessels, spread through the tissue, and reach the cancer cells. The second model describes the transport of drug nanoparticles from the vascular system to a single cell; its treatment requires identifying parameters such as magnetic nanoparticle targeted delivery, blood flow, momentum transport, the density and viscosity of the drug and blood medium, the intensity of the magnetic field, and the radius of the capillary.
Based on two discretization techniques, the finite difference method (FDM) and the finite element method (FEM), the set of integrated models is transformed into a series of grid points yielding a large system of equations. The third model is a single-cell density model involving three sets of first-order PDEs describing how proliferating, quiescent, and necrotic cell densities change over time and space in Cartesian coordinates under different rates of nutrient consumption. In this model, proliferative and quiescent cell growth depends on parameter changes, and the necrotic cells emerge as the tumor core. Several numerical schemes for solving the system of equations are compared and analyzed. Simulation and computation of the discretized model are supported by Matlab and C programs on a single processing unit. Numerical results and analysis of the algorithms are presented as tables, multiple graphs, and multidimensional visualizations. In conclusion, the integration of the three mathematical models and the comparison of numerical performance indicate the superior tool and analysis for solving the complete magnetic drug delivery system, which has significant effects on the growth of the targeted cancer cells.
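To illustrate the finite-difference step described above, here is a minimal, hypothetical sketch of an explicit FDM scheme for a 1D parabolic drug-concentration equation. It is not the paper's actual 3D cylindrical model; all parameter values are illustrative only.

```python
import numpy as np

# Explicit finite-difference scheme for a simplified parabolic transport PDE:
#   dC/dt = D * d2C/dx2   (1D stand-in for the paper's 3D cylindrical model)
D = 1e-3          # diffusion coefficient of the drug (illustrative units)
nx, nt = 51, 500  # grid points in space, number of time steps
dx, dt = 1.0 / (nx - 1), 0.1
r = D * dt / dx**2          # this scheme is stable only for r <= 0.5
assert r <= 0.5

C = np.zeros(nx)
C[nx // 2] = 1.0            # initial bolus of drug at the domain centre

for _ in range(nt):
    # second-order central difference in space, forward Euler in time
    C[1:-1] = C[1:-1] + r * (C[2:] - 2 * C[1:-1] + C[:-2])
    C[0] = C[-1] = 0.0      # absorbing (Dirichlet) boundaries

print(C.max())  # peak concentration decays as the drug spreads and is absorbed
```

The same stencil extends to the cylindrical 3D case by adding the radial and angular difference terms, at the cost of a stricter stability limit on the time step.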

Keywords: mathematical modeling, visualization, PDE models, magnetic nanoparticle drug delivery model, drug release model, single cell effects, avascular tumor growth, numerical analysis

Procedia PDF Downloads 395
17 Exploring the Effect of Nursing Students’ Self-Directed Learning and Technology Acceptance through the Use of Digital Game-Based Learning in Medical Terminology Course

Authors: Hsin-Yu Lee, Ming-Zhong Li, Wen-Hsi Chiu, Su-Fen Cheng, Shwu-Wen Lin

Abstract:

Background: The use of medical terminology is essential to professional nurses in clinical practice. However, most nursing students consider traditional lecture-based teaching of medical terminology boring and overly conceptual, and they lack motivation to learn. How to enhance nursing students’ self-directed learning and improve their learning outcomes in medical terminology is thus an issue worth discussing. Digital game-based learning is a learner-centered way of learning. Past literature shows that the most common game-based learning approaches for language education have been immersive games and teaching games. This study therefore selected role-playing games (RPG) and digital puzzle games for observation and comparison. It is interesting to explore whether digital game-based learning has a positive impact on nursing students’ learning of medical terminology and whether students can adapt well to this type of learning. The results can serve as references for institutes and teachers on teaching medical terminology. Objective: The purpose of this research is to explore the respective impacts of RPG and puzzle games on nursing students’ self-directed learning and technology acceptance. The study further discusses whether different game types bring about different influences on students’ self-directed learning and technology acceptance.
Methods: A quasi-experimental design was adopted so that repeated measures between two groups could be conveniently conducted. 103 nursing students from a nursing college in Northern Taiwan participated in the study. During the three-week experiment, the experimental group (n=52) received “traditional teaching + RPG” while the control group (n=51) received “traditional teaching + puzzle games”. Results: 1. On self-directed learning: For each game type, there were significant differences on the delayed tests of both groups as compared to each group’s pre- and post-tests. However, there were no significant differences between the two game types. 2. On technology acceptance: For the experimental group, after the intervention of RPG, there were no significant differences in technology acceptance. For the control group, after the intervention of puzzle games, there were significant differences in technology acceptance. Pearson correlation coefficients and path analysis conducted on the results of the two groups revealed that the dimensions were highly correlated and reached statistical significance. Yet, the comparison of technology acceptance between the two game types did not reach statistical significance. Conclusion and Recommendations: This study found that through using different digital games for learning, nursing students effectively improved their self-directed learning. Students’ technology acceptance was also high for the two digital game types, and each dimension was significantly correlated. The results of the experimental group showed that through the scenarios of RPG, students gained a deeper understanding of medical terminology, reaching the ‘Understand’ dimension of Bloom’s taxonomy. The results of the control group indicated that digital puzzle games could help students memorize and review medical terminology, reaching the ‘Remember’ dimension of Bloom’s taxonomy.
The findings suggest that teachers of medical terminology could use digital games to assist their teaching according to their goals on cognitive learning. Adequate use of those games could help improve students’ self-directed learning and further enhance their learning outcome on medical terminology.

Keywords: digital game-based learning, medical terminology, nursing education, self-directed learning, technology acceptance model

Procedia PDF Downloads 139
16 Bringing Together Student Collaboration and Research Opportunities to Promote Scientific Understanding and Outreach Through a Seismological Community

Authors: Michael Ray Brunt

Abstract:

China has been the site of some of the most significant earthquakes in history; however, earthquake monitoring has long been the province of universities and research institutions. The China Digital Seismographic Network (CDSN) was initiated in 1983 and improved significantly during 1992-1993. Data from the CDSN is widely used by government and research institutions, but this data is generally not readily accessible to middle and high school students. An educational seismic network in China is needed to provide collaboration and research opportunities for students, engage students around the country in a scientific understanding of earthquake hazards and risks, and promote community awareness. In 2022, the Tsinghua International School (THIS) Seismology Team, made up of enthusiastic students and facilitated by two experienced teachers, was established. The team’s objective is to install seismographs in schools throughout China, thus creating an educational seismic network that shares data from the THIS Educational Seismic Network (THIS-ESN) and facilitates collaboration. The THIS-ESN initiative will enhance education and outreach in China about earthquake risks and hazards, introduce seismology to a wider audience, stimulate interest in research among students, and develop students’ programming, data collection, and analysis skills. It will also encourage and inspire young minds to pursue careers in science, technology, engineering, the arts, and math (STEAM). The THIS-ESN utilizes small, low-cost RaspberryShake seismographs as a powerful tool linked into a global network, giving schools and the public access to real-time seismic data from across China, increasing earthquake monitoring capabilities in the areas covered and adding to the available data sets regionally and worldwide, helping create a denser seismic network.
The RaspberryShake seismograph is compatible with free seismic data viewing platforms such as SWARM, and the RaspberryShake web programs and mobile apps are designed specifically for teaching seismology and seismic data interpretation, providing opportunities to enhance understanding. The RaspberryShake is powered by an operating system embedded in the Raspberry Pi, which makes it an easy platform for teaching students basic computer communication concepts by utilizing processing tools to investigate, plot, and manipulate data. The THIS Seismology Team believes strongly in creating opportunities for committed students to become part of the seismological community by engaging in the analysis of real-time scientific data with tangible outcomes. Students will feel proud of the important work they are doing to understand the world around them and become advocates spreading their knowledge back into their homes and communities, helping to improve overall community resilience. We trust that, in studying the results the seismograph stations yield, students will grasp how subjects like physics and computer science apply in real life; by spreading this information, we hope students across the country will appreciate how and why earthquakes bear on their lives, develop practical skills in STEAM, and engage in the global seismic monitoring effort. By providing such an opportunity to schools across the country, we are confident that we will be an agent of change for society.
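As a sketch of the kind of waveform processing students could practice on seismograph output, the following example band-pass filters a synthetic trace to isolate local-earthquake frequencies. It assumes only standard NumPy/SciPy tools, not the network's actual software; the sample rate and frequency band are illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100.0                       # typical seismograph sample rate, Hz
t = np.arange(0, 60, 1 / fs)     # one minute of synthetic data
signal = np.sin(2 * np.pi * 5 * t)           # 5 Hz "earthquake" energy
noise = 0.5 * np.sin(2 * np.pi * 0.05 * t)   # long-period background noise
trace = signal + noise

# 1-10 Hz Butterworth band-pass, applied forward and backward (zero phase)
sos = butter(4, [1.0, 10.0], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, trace)
# The long-period noise is removed; the 5 Hz signal passes nearly unchanged.
```

On real data, the same filter helps students separate local events from oceanic and instrumental noise before picking arrival times.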

Keywords: collaboration, outreach, education, seismology, earthquakes, public awareness, research opportunities

Procedia PDF Downloads 31
15 Remote BioMonitoring of Mothers and Newborns for Temperature Surveillance Using a Smart Wearable Sensor: Techno-Feasibility Study and Clinical Trial in Southern India

Authors: Prem K. Mony, Bharadwaj Amrutur, Prashanth Thankachan, Swarnarekha Bhat, Suman Rao, Maryann Washington, Annamma Thomas, N. Sheela, Hiteshwar Rao, Sumi Antony

Abstract:

The disease burden among mothers and newborns is caused mostly by a handful of avoidable conditions occurring around the time of childbirth and within the first month following delivery. Real-time monitoring of vital parameters of mothers and neonates offers a potential opportunity to improve both access to care and the quality of care in vulnerable populations. We describe the design, development, and testing of an innovative wearable device for remote biomonitoring (RBM) of body temperatures in mothers and neonates in a hospital in southern India. The architecture consists of: [1] a low-cost, wearable sensor tag; [2] a gateway device for a ‘real-time’ communication link; [3] piggy-backing on a commercial GSM communication network; and [4] an algorithm-based data analytics system. Requirements for the device were: long battery life of up to 28 days (with a sampling frequency of 5/hr); robustness; IP 68 hermetic sealing; and human-centric design. We undertook pre-clinical laboratory testing followed by clinical trial phases I & IIa for evaluation of safety and efficacy in the following sequence: seven healthy adult volunteers; 18 healthy mothers; and three sets of babies – 3 healthy babies, 10 stable babies in the Neonatal Intensive Care Unit (NICU), and 1 baby with hypoxic ischaemic encephalopathy (HIE). The pebble-design sensor, about three coins thick and weighing about 8 g, was secured onto the abdomen for the babies and over the upper arm for the adults. In the laboratory setting, the response time of the sensor device to attain thermal equilibrium with the surroundings was 4 minutes, versus 3 minutes observed with a precision-grade digital thermometer used as the reference standard. The accuracy was ±0.1°C of the reference standard within the temperature range of 25-40°C. The adult volunteers, aged 20 to 45 years, contributed a total of 345 hours of readings over a 7-day period, and the postnatal mothers provided a total of 403 paired readings.
The mean skin temperatures measured in the adults by the sensor were about 2°C lower than the axillary temperature readings (sensor = 34.1 vs digital = 36.1); this difference was statistically significant (t-test = 13.8; p < 0.001). The healthy neonates provided a total of 39 paired readings; the mean difference in temperature was 0.13°C (sensor = 36.9 vs digital = 36.7; p = 0.2). The neonates in the NICU provided a total of 130 paired readings. Their mean skin temperature measured by the sensor was 0.6°C lower than that measured by the radiant warmer probe (sensor = 35.9 vs warmer probe = 36.5; p < 0.001). The neonate with HIE provided a total of 25 paired readings, with the mean sensor reading not differing from the radiant warmer probe reading (sensor = 33.5 vs warmer probe = 33.5; p = 0.8). No major adverse events were noted in either the adults or the neonates; four adult volunteers reported mild sweating under the device/arm band, and one volunteer developed a mild skin allergy. This proof-of-concept study shows that real-time monitoring of temperatures is technically feasible and that this innovation appears promising in terms of both safety and accuracy (with appropriate calibration) for improved maternal and neonatal health.
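The paired comparisons above can be reproduced with a paired t-test. The sketch below runs one on simulated NICU readings; the data is synthetic (drawn to mimic the reported means), not the trial's actual measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 130                                    # number of NICU paired readings
warmer = rng.normal(36.5, 0.3, n)          # radiant-warmer probe (reference)
sensor = warmer - 0.6 + rng.normal(0, 0.2, n)  # sensor reads ~0.6 C lower

# Paired (dependent-samples) t-test on the sensor-vs-reference readings
t_stat, p_value = stats.ttest_rel(sensor, warmer)
mean_diff = (sensor - warmer).mean()
print(f"mean difference {mean_diff:.2f} C, p = {p_value:.3g}")
```

With 130 pairs, even a 0.6°C systematic offset is detected at p < 0.001, matching the pattern of significance reported for the NICU group.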

Keywords: public health, remote biomonitoring, temperature surveillance, wearable sensors, mothers and newborns

Procedia PDF Downloads 177
14 Classical Improvisation Facilitating Enhanced Performer-Audience Engagement and a Mutually Developing Impulse Exchange with Concert Audiences

Authors: Pauliina Haustein

Abstract:

Improvisation was part of Western classical concert culture and performers’ skill sets until the early 20th century. Historical accounts, as well as recent studies, indicate that improvisatory elements in the programme may contribute specifically towards the audience’s experience of enhanced emotional engagement during the concert. This paper presents findings from the author’s artistic practice research, which explored re-introducing improvisation to Western classical performance practice as a musician (cellist and ensemble partner/leader). In an investigation of four concert cycles, the performer-researcher sought to gain solo and chamber music improvisation techniques (both related to and independent of repertoire), conduct ensemble improvisation rehearsals, design concerts with an improvisatory approach, and reflect on interactions with audiences after each concert. Data was collected through the use of a reflective diary, video recordings, measurement of sound parameters, questionnaires, a focus group, and interviews. The performer’s empirical experiences and findings from the audience research components were juxtaposed and interrogated to better understand (1) the rehearsal and planning processes that enable improvisatory elements to return to the Western classical concert experience and (2) the emotional experience and type of engagement that occur throughout the concert experience for both performer and audience members. This informed the development of a concert model in which a programme of solo and chamber music repertoire and improvisations was combined according to historically evidenced performance practice (including free formal solo and ensemble improvisations based on audience suggestions).
Inspired by historical concert culture, where elements of risk-taking, spontaneity, and audience involvement (such as proposing themes for fantasies) were customary, this concert model invited musicians to contribute to the process personally and creatively at all stages, from programme planning throughout the live concert. The type of democratic, personal, creative, and empathetic collaboration that emerged as a result appears unique in Western classical contexts, rather finding resonance in jazz ensemble, drama, or interdisciplinary settings. The research identified features of ensemble improvisation, such as empathy, emergence, mutual engagement, and collaborative creativity, that became mirrored in audiences’ responses, generating higher levels of emotional engagement, empathy, inclusivity, and a participatory, co-creative experience. It appears that during improvisatory moments in the concert programme, audience members started feeling more like active participants in a creative, collaborative exchange and became stakeholders in a deeper phenomenon of meaning-making and narrativization. Examining interactions between all involved during the concert revealed that performer-audience impulse exchange occurred on multiple levels of awareness and seemed to build upon itself, resulting in particularly strong experiences of engagement for both performer and audience. This impact appeared especially meaningful for audience members who were infrequent concertgoers and reported little familiarity with classical music. The study found that re-introducing improvisatory elements to Western classical concert programmes has strong potential to increase audiences’ emotional engagement with the musical performance, to enable audience members to connect more personally with the individual performers, and to reach new-to-classical-music audiences.

Keywords: artistic research, audience engagement, audience experience, classical improvisation, ensemble improvisation, emotional engagement, improvisation, improvisatory approach, musical performance, practice research

Procedia PDF Downloads 107
13 Even When the Passive Resistance Is Obligatory: Civil Intellectuals’ Solidarity Activism in Tea Workers Movement

Authors: Moshreka Aditi Huq

Abstract:

This study shows how a progressive section of civil intellectuals in Bangladesh contributed as solidarity activists in a movement of tea workers that became the symbol of their unique moral struggle. Their passive yet sharp form of resistance, integrated with the mass tea workers of a tea estate, was demonstrated against certain private companies and government officials who sought to establish a special economic zone inside the tea garden without offering any compensation or rehabilitation for the poor tea workers. Due to massive protests and rebellion, the authorized entrepreneurs had to step back and call off the project immediately. The extraordinary features of this movement arose from the deep social needs of indigenous tea workers who are still imprisoned in the colonial cage. Following an anthropological and ethnographic perspective, this study adopted three main techniques to extract empirical data: intensive interviews, focus group discussions, and sustained observation. The intensive interviews were undertaken informally using a mostly conversational approach. Focus group discussions were conducted among various representative groups, where observation prevailed as part of the regular documentation process. These were conducted among civil intellectuals, tea workers, tea estate authorities, civil service authorities, and business officials to obtain a holistic view of the situation. The fieldwork was carried out in the capital, Dhaka, along with northern areas such as the Chandpur-Begumkhan Tea Estate of Chunarughat Upazilla and Habiganj city of Habiganj District of Bangladesh. Correspondingly, secondary data were accessed through books, scholarly papers, archives, newspapers, reports, leaflets, posters, blogs, and electronic pages of social media.
The study finds that: (1) civil intellectuals opposed state-sponsored business impositions by producing counter-discourse and struggled against state hegemony through the phases of the movement; (2) rather than active physical resistance, civil intellectuals’ strength lay in a passive form, expressed through their intellectual labor; (3) the combined movement of tea workers and civil intellectuals reflected on the social security of ethnic worker communities, in contrast to the state’s pseudo-development motives, which ultimately support offensive and oppressive neoliberal economic growth; (4) civil intellectuals are revealed as having certain functional limitations in the processes of movement organization and resource mobilization; (5) in specific contexts, the genuine need of the indigenous subaltern to protest can overshadow intellectual elitism and helps to raise the voices of ‘subjugated knowledge’. This study represents two sets of apparent protagonists in the discussion of social injustice and oppressive development intervention. On the one hand, it may help us to identify the basic functional characteristics of civil intellectuals in Bangladesh when they are in a passive mode of resistance on social movement issues. On the other hand, it represents the community ownership and inherent protest tendencies of indigenous workers when they feel threatened and insecure. The study has the potential to illuminate the conditions of the ‘subjugated knowledge’ of subalterns. Furthermore, as memory and narrative, these ‘activism mechanisms’ of social entities broaden the path to understanding ‘power’ and ‘resistance’ in more illuminating ways.

Keywords: civil intellectuals, resistance, subjugated knowledge, indigenous

Procedia PDF Downloads 102
12 Implementation of Building Information Modelling to Monitor, Assess, and Control the Indoor Environmental Quality of Higher Education Buildings

Authors: Mukhtar Maigari

Abstract:

The landscape of Higher Education (HE) institutions, especially following the COVID-19 pandemic, necessitates advanced approaches to managing Indoor Environmental Quality (IEQ), which is crucial for the comfort, health, and productivity of students and staff. This study investigates the application of Building Information Modelling (BIM) as a multifaceted tool for monitoring, assessing, and controlling IEQ in HE buildings, aiming to bridge the gap between traditional management practices and the innovative capabilities of BIM. Central to the study is a comprehensive literature review, which lays the foundation by examining current knowledge and technological advancements in both IEQ and BIM. This review sets the stage for a deeper investigation into the practical application of BIM in IEQ management. The methodology consists of Post-Occupancy Evaluation (POE), encompassing physical monitoring, questionnaire surveys, and interviews under the umbrella of case studies. The physical data collection focuses on vital IEQ parameters such as temperature, humidity, and CO2 levels, collected using equipment including data loggers to ensure accuracy. Complementing this, questionnaire surveys gather perceptions and satisfaction levels from students, providing valuable insights into the subjective aspects of IEQ. The interview component, targeting facilities management teams, offers an in-depth perspective on IEQ management challenges and strategies. The research then develops a conceptual BIM-based framework, informed by the findings from the case studies and empirical data. This framework is designed to demonstrate the critical functions necessary for effective IEQ monitoring, assessment, control, and automation, with real-time data-handling capabilities. The framework in turn guides the development and testing of a BIM-based prototype tool.
This prototype leverages software such as Autodesk Revit with its visual programming tool, Dynamo, together with an Arduino-based sensor network, thereby allowing a real-time flow of IEQ data for monitoring, control, and even automation. By harnessing the capabilities of BIM technology, the study presents a forward-thinking approach that aligns with current sustainability and wellness goals, particularly vital in the post-COVID-19 era. The integration of BIM in IEQ management promises not only to enhance the health, comfort, and energy efficiency of educational environments but also to transform them into more conducive spaces for teaching and learning. Furthermore, this research could influence the future of HE buildings by prompting universities and government bodies to re-evaluate and improve teaching and learning environments. It demonstrates how the synergy between IEQ and BIM can empower stakeholders to monitor IEQ conditions more effectively and make informed decisions in real time. Moreover, the developed framework has broader applications: it can serve as a tool for other sustainability assessments, such as energy analysis in HE buildings, leveraging measured data synchronized with the BIM model. In conclusion, this study bridges the gap between theoretical research and real-world application by demonstrating in practice how advanced technologies like BIM can be integrated to enhance environmental quality in educational institutions.
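The real-time monitoring-and-alerting loop described above can be sketched in a few lines. This is a hypothetical illustration only: the comfort bands, parameter names, and simulated sample are the author of this sketch's assumptions, and the actual prototype runs through Revit/Dynamo with an Arduino sensor feed rather than plain Python.

```python
# Illustrative IEQ comfort bands (lower, upper); values are assumptions,
# not the study's thresholds.
IEQ_LIMITS = {"temperature_c": (20.0, 26.0),
              "humidity_pct": (30.0, 60.0),
              "co2_ppm": (0.0, 1000.0)}

def check_reading(reading):
    """Return the list of parameters outside their comfort band."""
    return [p for p, (lo, hi) in IEQ_LIMITS.items()
            if not lo <= reading.get(p, lo) <= hi]

# Simulated sensor sample standing in for the Arduino serial feed
sample = {"temperature_c": 24.5, "humidity_pct": 65.0, "co2_ppm": 1250.0}
alerts = check_reading(sample)
print(alerts)  # → ['humidity_pct', 'co2_ppm']
```

In the BIM context, each alert would be pushed back to the model element representing the affected room, flagging it for the facilities team.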

Keywords: BIM, POE, IEQ, HE-buildings

Procedia PDF Downloads 18
11 Multiaxial Stress Based High Cycle Fatigue Model for Adhesive Joint Interfaces

Authors: Martin Alexander Eder, Sergei Semenov

Abstract:

Many glass-epoxy composite structures, such as large utility wind turbine rotor blades (WTBs), comprise adhesive joints with typically thick bond lines used to connect the different components during assembly. Performance optimization of rotor blades to increase power output while maintaining high stiffness-to-low-mass ratios entails intricate geometries in conjunction with complex anisotropic material behavior. Consequently, adhesive joints in WTBs are subject to multiaxial stress states with significant stress gradients depending on the local joint geometry. Moreover, the dynamic aero-elastic interaction of the WTB with the airflow generates non-proportional, variable-amplitude stress histories in the material. Experience shows that a prominent failure type in WTBs is high cycle fatigue failure of adhesive bond line interfaces, which has over time developed into a design driver as WTB sizes increase rapidly. Structural optimization employed at an early design stage therefore sets high demands on computationally efficient interface fatigue models capable of predicting the critical locations prone to interface failure. The numerical stress-based interface fatigue model presented in this work uses the Drucker-Prager criterion to compute three different damage indices corresponding to the two interface shear tractions and the outward normal traction. The two-parameter Drucker-Prager model was chosen for its ability to account for shear strength enhancement under compression and shear strength reduction under tension. The governing interface damage index is taken as the maximum of the triple. The damage indices are computed through the well-known linear Palmgren-Miner rule after separate rainflow counting of the equivalent shear stress history and the equivalent pure normal stress history.
The equivalent stress signals are obtained by self-similar scaling of the Drucker-Prager surface, whose shape is defined by the uniaxial tensile strength and the shear strength such that it intersects the stress point at every time step. This approach implicitly assumes that the damage caused by the prevailing multiaxial stress state is the same as the damage caused by an amplified equivalent uniaxial stress state in the three interface directions. The model was implemented as a Python plug-in for the commercially available finite element code Abaqus for use with solid elements. The model was used to predict the interface damage of an adhesively bonded, tapered glass-epoxy composite cantilever I-beam tested by LM Wind Power under constant-amplitude compression-compression tip load in the high cycle fatigue regime. Results show that the model was able to predict the location of debonding in the adhesive interface between the webfoot and the cap. Moreover, with a set of two different constant life diagrams, one in shear and one in tension, it was possible to predict both the fatigue lifetime and the failure mode of the sub-component with reasonable accuracy. It can be concluded that the fidelity, robustness, and computational efficiency of the proposed model make it especially suitable for rapid fatigue damage screening of large 3D finite element models subject to complex dynamic load histories.
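The final damage-summation step, linear Palmgren-Miner accumulation applied to each counted stress history, can be sketched as follows. The Basquin-type S-N curve parameters and the stress spectrum are illustrative assumptions, not values from the paper, and the rainflow-counting step is taken as already done.

```python
# Linear Palmgren-Miner damage accumulation over a counted stress spectrum.
def cycles_to_failure(stress_amplitude, c=1e16, m=8.0):
    """Basquin-type S-N curve: N = c * S^(-m); c and m are illustrative."""
    return c * stress_amplitude ** (-m)

# (stress amplitude [MPa], applied cycle count) pairs, as produced by
# rainflow counting of one equivalent uniaxial stress history
spectrum = [(10.0, 1e5), (15.0, 2e4), (20.0, 5e3)]

# Miner's rule: damage is the sum of n_i / N_i over all amplitude bins
damage = sum(n / cycles_to_failure(s) for s, n in spectrum)
print(f"damage index D = {damage:.3f}")  # failure is predicted when D >= 1
```

In the paper's model this sum is evaluated separately for the two shear tractions and the normal traction, and the governing index is the maximum of the three.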

Keywords: adhesive, fatigue, interface, multiaxial stress

Procedia PDF Downloads 140
10 A Model for Analysing Argumentative Structures and Online Deliberation in User-Generated Comments to the Website of a South African Newspaper

Authors: Marthinus Conradie

Abstract:

The conversational dynamics of democratically orientated deliberation continue to stimulate critical scholarship for their potential to bolster robust engagement between different sections of pluralist societies. Axes of deliberation that have attracted academic attention include face-to-face vs. online interaction, and citizen-to-citizen communication vs. engagement between citizens and political elites. In all these areas, numerous researchers have explored deliberative procedures aimed at achieving instrumental goals such as securing consensus on policy issues, against procedures that prioritise expressive outcomes such as broadening the range of argumentative repertoires that discursively construct and mediate specific political issues. The study that informs this paper works in the latter stream. Drawing its data from the reader-comments section of a South African broadsheet newspaper, the study investigates online, citizen-to-citizen deliberation by analysing the discursive practices through which competing understandings of social problems are articulated and contested. To advance this agenda, the paper deals specifically with user-generated comments posted in response to news stories on questions of race and racism in South Africa. The analysis works to discern and interpret the various sets of discourse practices that shape how citizens deliberate contentious political issues, especially racism. Since the website in question is designed to encourage the critical comparison of divergent interpretations of news events, without feeding directly into national policymaking, the study adopts an analytic framework that traces how citizens articulate arguments, rather than the instrumental effects that citizen deliberations might exert on policy.
The paper starts from the argument that such expressive interactions are particularly crucial to current trends in South African politics, given that the precise nature of race and racism remains contested and uncertain. Centred on a sample of 2358 conversational moves in 814 posts to 18 news stories dealing with issues of race and racism, the analysis proceeds in two steps. The first stage conducts a qualitative content analysis that offers insights into the levels of reciprocity among commenters (do readers engage with each other or simply post isolated opinions?), as well as the structures of argumentation (do readers support opinions by citing evidence?). The second stage involves a more fine-grained discourse analysis, based on a theorisation of argumentation that delineates it into three components: opinions/conclusions, evidence/data to support opinions/conclusions, and warrants that explicate precisely how evidence/data buttress opinions/conclusions. By tracing the manifestation and frequency of specific argumentative practices, this study contributes to the archive of research currently aggregating around the practices that characterise South Africans’ engagement with provocative political questions, especially racism and racial inequity. Additionally, the study contributes to recent scholarship on the affordances of Web 2.0 software by eschewing a simplistic bifurcation between cyber-optimism and cyber-pessimism, in favour of a more nuanced and context-specific analysis of the patterns that structure online deliberation.

Keywords: online deliberation, discourse analysis, qualitative content analysis, racism

Procedia PDF Downloads 151
9 The Usefulness of Medical Scribes in the Emergency Department

Authors: Victor Kang, Sirene Bellahnid, Amy Al-Simaani

Abstract:

Efficient documentation and completion of clerical tasks are pillars of efficient patient-centered care in acute settings such as the emergency department (ED). Medical scribes aid physicians with documentation, navigation of electronic health records, results gathering, and communication coordination with other healthcare teams. However, the use of medical scribes is not widespread, and some hospitals have even discontinued their programs. One reason for this could be the lack of studies that have outlined concrete improvements in efficiency and in patient and provider satisfaction in emergency departments before and after incorporating scribes. Methods: We conducted a review of the literature concerning the implementation of medical scribe programs and emergency department performance. For this review, a narrative synthesis accompanied by textual commentaries was chosen to present the selected papers. PubMed was searched exclusively. Initially, no date limits were set, but seeing as the electronic medical record was officially implemented in Canada in 2013, studies published after this date were preferred, as they provided insight into the interplay between its implementation and scribes on quality improvement. Results: Throughput, efficiency, and cost-effectiveness were the most commonly used parameters in evaluating scribes in the emergency department. Important throughput metrics, specifically door-to-doctor and disposition time, were significantly decreased in emergency departments that utilized scribes. Of note, this was shown to be the case in community hospitals, where the burden of documentation and clerical tasks falls directly upon the attending physician. Academic centers differ in that they rely heavily on residents and students, so the implementation of scribes has been shown to have limited effect on these metrics. 
However, unique to academic centers was the providers’ perception of increased time for teaching. Consequently, providers express increased work satisfaction in relation to time spent with patients and in teaching. Patients, on the other hand, did not demonstrate a decrease in satisfaction with regard to the care provided, but no significant increase was observed either. Of the studies we reviewed, one of the biggest limitations was the lack of significance in the data. While many individual studies reported that medical scribes in emergency rooms improved relative value units, patient satisfaction, provider satisfaction, and the number of patients seen, there was no statistically significant improvement in these criteria when compiled in a systematic review. There is also a clear publication bias; very few studies with negative results were published. To prove significance, data would need to be compiled from more emergency rooms with scribe programs, including those that did not report noticeable benefits. Furthermore, most data sets focused only on scribes in academic centers. Conclusion: Ultimately, the literature suggests that while emergency room physicians who have access to medical scribes report higher satisfaction due to lower clerical burdens and can see more patients per shift, there is still variability in patient and provider satisfaction. Whether this variability arises from differences in training (in-house trainees versus contractors), population profile (adult versus pediatric), setting (academic versus community), or the shifts scribes work cannot be determined from the existing studies. More scribe programs need to be evaluated to determine whether these variables affect outcomes and whether scribes significantly improve emergency room efficiency.

Keywords: emergency medicine, medical scribe, scribe, documentation

Procedia PDF Downloads 68
8 Analysis of Fine Motor Skills in Chronic Neurodegenerative Models of Huntington’s Disease and Amyotrophic Lateral Sclerosis

Authors: T. Heikkinen, J. Oksman, T. Bragge, A. Nurmi, O. Kontkanen, T. Ahtoniemi

Abstract:

Motor impairment is an inherent phenotypic feature of several chronic neurodegenerative diseases, and pharmacological therapies aimed at counterbalancing the motor disability have great market potential. Animal models of chronic neurodegenerative diseases display a deteriorating motor phenotype during disease progression. There is a wide array of behavioral tools to evaluate motor functions in rodents. However, currently existing methods are often limited to evaluating gross motor functions, and only at advanced stages of the disease phenotype. The most commonly applied traditional motor assays used in CNS rodent models lack the sensitivity to capture fine motor impairments or improvements. Fine motor skill characterization in rodents provides a more sensitive tool to capture subtler motor dysfunctions and therapeutic effects. Importantly, a similar approach, kinematic movement analysis, is also used in the clinic, applied both in diagnosis and in determining therapeutic responses to pharmacological interventions. The aim of this study was to apply kinematic gait analysis, a novel and automated high-precision movement analysis system, to characterize phenotypic deficits in three chronic neurodegenerative animal models: a transgenic mouse model (SOD1 G93A) of amyotrophic lateral sclerosis (ALS), and the R6/2 and Q175KI mouse models of Huntington’s disease (HD). The readouts from walking behavior included gait properties with kinematic data and body movement trajectories, including analysis of various points of interest such as the movement and position of landmarks in the torso, tail, and joints. Mice (transgenic and wild-type) from each model were analyzed for fine motor kinematic properties at young ages, prior to the age when gross motor deficits are clearly pronounced. Fine motor kinematic evaluation was continued in the same animals until clear motor dysfunction was evident with conventional motor assays. 
Time course analysis revealed clear fine motor skill impairments in each transgenic model earlier than what is seen with conventional gross motor tests. Motor changes were quantitatively analyzed for up to ~80 parameters, and the largest data sets from the HD models were further processed with principal component analysis (PCA) to transform the pool of individual parameters into a smaller, focused set of mutually uncorrelated gait parameters showing a strong genotype difference. Kinematic fine motor analysis of the transgenic animal models described in this presentation shows that this method is a sensitive, objective, and fully automated tool that allows earlier and more sensitive detection of progressive neuromuscular and CNS disease phenotypes. As a result of the analysis, a comprehensive set of fine motor parameters is created for each model, and these parameters provide a better understanding of disease progression and enhanced sensitivity of this assay for therapeutic testing compared to classical motor behavior tests. In SOD1 G93A, R6/2, and Q175KI mice, the alterations in gait were evident several weeks earlier than with traditional gross motor assays. Kinematic testing can be applied to a wider set of motor readouts beyond gait in order to study whole-body movement patterns, such as those of joints and various body parts, longitudinally, providing a sophisticated and translatable method for dissecting motor components in rodent disease models and evaluating therapeutic interventions.
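The dimensionality-reduction step described above can be sketched as follows. This is an illustrative PCA on a hypothetical animals-by-parameters matrix (random numbers standing in for the ~80 gait readouts), not the authors' actual pipeline or data:

```python
import numpy as np

# Hypothetical cohort: 40 animals x 80 gait parameters (random stand-in data).
rng = np.random.default_rng(0)
n_animals, n_params = 40, 80
X = rng.normal(size=(n_animals, n_params))

# Standardize each parameter, then extract principal components via SVD.
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = s**2 / np.sum(s**2)   # fraction of variance per component
scores = Xc @ Vt[:3].T            # projection onto the first 3 components

print(scores.shape)  # (40, 3): a small set of mutually uncorrelated readouts
```

In a real analysis, genotype differences would then be assessed on these component scores rather than on the 80 correlated raw parameters.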

Keywords: gait analysis, kinematics, motor impairment, inherent feature

Procedia PDF Downloads 330
7 Implementation of Green Deal Policies and Targets in Energy System Optimization Models: The TEMOA-Europe Case

Authors: Daniele Lerede, Gianvito Colucci, Matteo Nicoli, Laura Savoldi

Abstract:

The European Green Deal is the first internationally agreed set of measures to counter climate change and environmental degradation. Besides the main target of reducing emissions by at least 55% by 2030, it sets the target of accompanying European countries through an energy transition to make the European Union a modern, resource-efficient, and competitive net-zero-emissions economy by 2050, decoupling growth from the use of resources and ensuring a fair adaptation of all social categories to the transformation process. While the general framework of the Green Deal dates back to 2019, strategies and policies keep being developed to cope with recent circumstances and achievements. General long-term measures, like the Circular Economy Action Plan, the proposals to shift from fossil natural gas to renewable and low-carbon gases (in particular biomethane and hydrogen), and the proposal to end the sale of gasoline and diesel cars by 2035, will all have significant effects on the evolution of energy supply and demand across the next decades. The interactions between energy supply and demand over long-term time frames are usually assessed via energy system models to derive useful insights for policymaking and to address technological choices and research and development. TEMOA-Europe is a newly developed energy system optimization model instance based on the minimization of the total cost of the system under analysis, adopting a technologically integrated, detailed, and explicit formulation and considering the evolution of the system in partial equilibrium in competitive markets with perfect foresight. TEMOA-Europe is developed on the TEMOA platform, an open-source modeling framework entirely implemented in Python, thereby enabling third-party verification even of large and complex models. 
TEMOA-Europe is based on a single-region representation of the European Union and EFTA countries on a time scale between 2005 and 2100, relying on a set of assumptions for socio-economic developments based on projections by the International Energy Outlook and a large technological dataset covering 7 sectors: the upstream and power sectors for the production of all energy commodities, and the end-use sectors, including industry, transport, residential, commercial, and agriculture. TEMOA-Europe also includes an updated hydrogen module considering its production, storage, transportation, and utilization. In addition, it can rely on a wide set of innovative technologies, ranging from nuclear fusion and electricity plants equipped with CCS in the power sector to electrolysis-based steel production processes in the industrial sector (with a techno-economic characterization based on public literature), to produce insightful energy scenarios and, especially, to cope with the very long analyzed time scale. The aim of this work is to examine in detail the scheme of measures and policies adopted to realize the goals of the Green Deal and to translate them into a set of constraints and new socio-economic development pathways. Based on them, TEMOA-Europe will be used to produce and comparatively analyze scenarios to assess the consequences of Green Deal-related measures on the future evolution of the energy mix over the whole energy system in an economic optimization environment.
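The core logic of a cost-minimizing energy system model with a policy constraint can be illustrated with a deliberately tiny linear program: two hypothetical technologies meet a fixed demand under an emissions cap standing in for a Green Deal-style target. All numbers are invented for illustration, and this is in no way TEMOA's actual formulation:

```python
from scipy.optimize import linprog

# Toy least-cost dispatch: meet 100 TWh of demand from two technologies,
# x = [gas, wind], subject to an emissions cap. All values are hypothetical.
cost = [60.0, 70.0]              # cost per TWh: gas is cheaper than wind
emissions = [[0.4, 0.0]]         # emissions per TWh: gas emits, wind does not
cap = [20.0]                     # total emissions allowed

res = linprog(
    c=cost,
    A_ub=emissions, b_ub=cap,            # emissions <= cap
    A_eq=[[1.0, 1.0]], b_eq=[100.0],     # total generation == demand
    bounds=[(0, None), (0, None)],
)

# Gas is used up to the point where the cap binds; wind covers the rest.
print(res.x)  # -> approximately [50., 50.]
```

Tightening the cap in such a model shifts the optimal mix toward the clean technology, which is the basic mechanism by which policy targets reshape model-generated energy scenarios.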

Keywords: European Green Deal, energy system optimization modeling, scenario analysis, TEMOA-Europe

Procedia PDF Downloads 79
6 The Development of the Geological Structure of the Bengkulu Fore Arc Basin, Western Edge of Sundaland, Sumatra, and Its Relationship to Hydrocarbon Trapping Mechanism

Authors: Lauti Dwita Santy, Hermes Panggabean, Syahrir Andi Mangga

Abstract:

The Bengkulu Basin is part of the Sunda Arc system, a classic convergent-type margin that occurs around the southern rim of the Eurasian continental (Sundaland) plate. The basin is located between the deep-sea trench (Mentawai outer-arc high) and the volcanic/magmatic arc of the Barisan Mountains Range. To the northwest it is bounded by the Padang High, to the northeast by the Barisan Mountains (Sumatra Fault Zone), to the southwest by the Mentawai Fault Zone, and to the southeast by the Semangko High/Sunda Strait. The stratigraphic succession and tectonic development can be broadly divided into five stages/periods, i.e., Late Jurassic-Early Cretaceous, Late Eocene-Early Oligocene, Late Oligocene-Early Miocene, Middle Miocene-Late Miocene, and Pliocene-Pleistocene, which are mainly controlled by the development of subduction activity. The Pre-Tertiary basement consists of sedimentary rocks and shallow-water limestone, calcareous mudstone, cherts, and tholeiitic volcanic rocks of Late Jurassic to Early Cretaceous age. Sedimentation in this basin depends on the relief of the Pre-Tertiary basement (Woyla Terrane) and occurred in two stages, i.e., a transgressive stage during the latest Oligocene-early Middle Miocene (Seblat Formation) and a regressive stage during the latest Middle Miocene-Pleistocene (Lemau, Simpangaur, and Bintunan Formations). Faulting of the Pre-Tertiary basement was more intensive than that of the overlying Tertiary cover. Two main fault trends can be distinguished: Northwest-Southeast faults and Northeast-Southwest faults. The NW-SE (Ketaun) faults are commonly laterally persistent and are interpreted as part of the Sumatran Fault System. They commonly form the boundaries of the Pre-Tertiary basement highs and are therefore one of the fault elements controlling the geometry and development of the Tertiary sedimentary basins. The Northeast-Southwest faults form a conjugate set to the Northwest-Southeast faults. 
These faults formed in the earliest Tertiary and were reactivated during the Plio-Pleistocene in a compressive mode with subsequent dextral displacement. Block faulting across these two sets of faults is related to approximately North-South compression in Paleogene time and produced a series of elongate basins separated by basement highs in the back-arc and fore-arc regions. The Bengkulu Basin is interpreted as having evolved from a pull-apart feature in the area southwest of the main Sumatra Fault System, related to NW-SE-trending dextral shear. A Pyrolysis Yield (PY) vs. Total Organic Carbon (TOC) diagram shows that the Seblat and Lemau Formations are oil- and gas-prone, with source-rock quality ranging from excellent to good (Lemau Formation) and from fair to poor (Seblat Formation). The fine-grained carbonaceous sediments of the Seblat and Lemau Formations act as source rocks, the coarse-grained and carbonate sediments of the same formations as reservoir rocks, and claystone beds in the Seblat and Lemau Formations as cap rock. Source-rock maturation is late immature to early mature, with kerogen types II and III (Seblat Formation), and late immature to post-mature, with kerogen types I and III (Lemau Formation). The burial history shows burial to 2500 m depth, with paleotemperatures reaching 80°C. Trapping occurred during the Oligo-Miocene and Middle Miocene, mainly in block-faulting systems.
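The TOC side of the source-rock screening described above can be sketched as a simple classification. The cutoffs below are commonly cited general petroleum-geochemistry values, not the exact criteria used in this study:

```python
def toc_richness(toc_wt_pct: float) -> str:
    """Classify source-rock richness from total organic carbon (wt%).

    Thresholds are commonly cited textbook values (illustrative only;
    not this study's screening criteria).
    """
    if toc_wt_pct < 0.5:
        return "poor"
    if toc_wt_pct < 1.0:
        return "fair"
    if toc_wt_pct < 2.0:
        return "good"
    if toc_wt_pct < 4.0:
        return "very good"
    return "excellent"

print(toc_richness(0.8))  # fair
print(toc_richness(4.5))  # excellent
```

In practice, such a TOC cutoff is cross-plotted against pyrolysis yield, as in the PY vs. TOC diagram mentioned above, so that both richness and generative potential enter the assessment.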

Keywords: fore arc, Bengkulu, Sumatra, Sundaland, hydrocarbon, trapping mechanism

Procedia PDF Downloads 534