Search results for: William Loudon
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 203

23 Strategies of Translation: Unlocking the Secret of 'Locksley Hall'

Authors: Raja Lahiani

Abstract:

'Locksley Hall' is a poem that Lord Alfred Tennyson (1809-1892) published in 1842. It is believed to be his first attempt to face as a poet some of the most painful of his experiences, as it is a study of his rising out of sickness into health, conquering his selfish sorrow by faith and hope. So far, in Victorian scholarship as in modern criticism, 'Locksley Hall' has been studied and approached as a canonical Victorian English poem. The aim of this project is to prove that some strategies of translation were used in this poem in such a way as to guarantee its assimilation into the English canon and hence efface to a large extent its Arabic roots. In its relationship with its source text, 'Locksley Hall' is at the same time mimetic and imitative. As part of the terminology used in translation studies, ‘imitation’ means almost the exact opposite of what it means in ordinary English. By adopting an imitative procedure, a translator would do something totally different from the original author, wandering far and freely from the words and sense of the original text. An imitation is thus aimed at an audience which wants the work of the particular translator rather than the work of the original poet. Hallam Tennyson, the poet’s biographer, asserts that 'Locksley Hall' is a simple invention of place, incidents, and people, though he notes that he remembers the poet claiming that Sir William Jones’ prose translation of the Mu‘allaqat (pre-Islamic poems) gave him the idea of the poem. A comparative work would prove that 'Locksley Hall' mirrors a great deal of Tennyson’s biography and hence is not a simple invention of details as asserted by his biographer. This work takes up the challenge of proving that 'Locksley Hall' shares so many details with the Mu‘allaqat, as declared by Tennyson himself, that it needs to be studied as an imitation of the Mu‘allaqat of Imru’ al-Qays and ‘Antara in addition to its being a poem in its own right. Thus, the main aim of this work is to unveil the imitative and mimetic strategies used by Tennyson in his composition of 'Locksley Hall.' It is equally important that this project researches the acculturating assimilative tools used by the poet to root his poem in its Victorian English literary, cultural and spatiotemporal settings. This work adopts a comparative methodology. Comparison is done at different levels. The poem will be contextualized in its Victorian English literary framework. Alien details related to structure, socio-spatial setting, imagery and sound effects shall be compared to Arabic poems from the Mu‘allaqat collection. This would determine whether the poem is a translation, an adaptation, an imitation or a genuine work. The ultimate objective of the project is to unveil in this canonical poem a new dimension that has for long been either marginalized or ignored. By proving that 'Locksley Hall' is an imitation of classical Arabic poetry, the project aspires to consolidate its literary value and open up new avenues of access to it.

Keywords: comparative literature, imitation, Locksley Hall, Lord Alfred Tennyson, translation, Victorian poetry

Procedia PDF Downloads 201
22 Queer Anti-Urbanism: An Exploration of Queer Space Through Design

Authors: William Creighton, Jan Smitheram

Abstract:

Queer discourse has been tied to a middle-class, urban-centric, white approach to the discussion of queerness. In doing so, the multilayeredness of queer existence has been washed away in favour of palatable queer occupation. This paper uses design to explore a queer anti-urbanist approach to facilitate a more egalitarian architectural occupancy. Scott Herring’s work on queer anti-urbanism is key to this approach. Herring redeploys anti-urbanism from its historical understanding of open hostility, rejection and desire to destroy the city towards a mode of queer critique that counters normative ideals of homonormative, metronormative gay lifestyles. He questions how queer identity has been closed down into a more diminutive frame where those who do not fit within this frame are subjected to persecution or silenced through their absence. We extend these ideas through design to ask how a queer anti-urbanist approach facilitates a more egalitarian architectural occupancy. Following a “design as research” methodology, the design outputs act as a vehicle to ask how we might live, otherwise, in architectural space. A design-as-research methodology – a process of questioning, designing and reflecting in a non-linear, iterative approach – establishes itself here through three projects, each increasing in scale and complexity. Each of the three scales tackled a different body relationship: the projects explore, in turn, the relations of body to body, body to known others, and body to unknown others. Moving through increasing scales was not to privilege the objective, the public and the large scale; instead, ‘intra-scaling’ acts as a tool to re-think how scale reproduces normative ideas of the identity of space. There was a queering of scale. Through this approach, the first result was an installation that brings two people together to co-author space: the installation distorts the sensory experience and forces a more intimate and interconnected experience, challenging our socialized proxemics – knees might touch. To queer the home, the installation was used as a drawing device, a tool to study and challenge spatial perception and drawing convention, and a way to process practical information about the site and existing house – the device became a tool to embrace the spontaneous. The final design proposal operates as a multi-scalar boundary-crossing through “private” and “public” to support kinship through communal labour, queer relationality and mooring. The resulting design works to set adrift bodies in a sea of sensations through a mix of pleasure programmes. To conclude, through three design proposals, this design research creates a relationship between queer anti-urbanism and design. It asserts that queering the design process and outcome allows a more inclusive way to consider place, space and belonging. The projects lend themselves to a queer relationality and interdependence by making spaces that support the unsettled and out-of-place – but is it queer enough?

Keywords: queer, queer anti-urbanism, design as research, design

Procedia PDF Downloads 176
21 Educational Institutional Approach for Livelihood Improvement and Sustainable Development

Authors: William Kerua

Abstract:

The PNG University of Technology (Unitech) has a mandate for teaching, research and extension education. Given this function, the Agriculture Department established the ‘South Pacific Institute of Sustainable Agriculture and Rural Development (SPISARD)’ in 2004. SPISARD was established as a vehicle to improve the farming systems practiced in selected villages by undertaking a pluralistic extension method through an ‘Educational Institutional Approach’. Unlike other models, SPISARD’s educational institutional approach stresses improving whole farming systems in a holistic manner and has a two-fold focus. The first is to understand the farming communities and improve the productivity of the farming systems in a sustainable way to increase income, improve nutrition and food security, and provide livelihood enhancement trainings. The second is to enrich the Department’s curriculum through teaching, research, extension and inputs from the farming community. SPISARD has established a number of model villages in various provinces in Papua New Guinea (PNG), with many positive outcomes and success stories. Adoption of the ‘educational institutional approach’ thus binds research, extension and training into one package, with students and academic staff delivering development and extension to communities through the establishment of model villages. This centre (SPISARD) coordinates the activities of the model village programs and linkages. The key to the development of the farming systems is establishing and coordinating linkages and collaboration, and developing partnerships both within and with external institutions, organizations and agencies. SPISARD has a six-point step strategy for the development of sustainable agriculture and rural development. These steps are (i) establish contact and identify model villages, (ii) develop model village resource centres for research and trainings, (iii) conduct baseline surveys to identify problems/needs of model villages, (iv) develop solution strategies, (v) implement and (vi) evaluate the impact of solution programs. SPISARD envisages that the farming systems practiced will be improved if the villages can be made the centre of SPISARD activities. Therefore, SPISARD has developed a model village approach to channel rural development. The model villages, when established, become the conduit points where teaching, training, research, and technology transfer take place. This approach is again different and unique from existing ones in that the development process takes place in the farmers’ environment with immediate ‘real time’ feedback mechanisms based on the farmers’ perspective and satisfaction. So far, we have developed 14 model villages and have conducted 75 trainings in 21 different areas/topics in 8 provinces, reaching a total of 2,832 participants of both sexes. The aim of these trainings is to participate directly with farmers in the pursuit of improving their farming systems to increase productivity and income and to secure food security and nutrition, thus improving their livelihood.

Keywords: development, educational institutional approach, livelihood improvement, sustainable agriculture

Procedia PDF Downloads 154
20 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device AnacondaTM (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The AnacondaTM device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that despite its column stiffness is flexible enough to be used in very tortuous geometries. For the purposes of this study, a FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell and surface elements; the choice of these building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed the comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as a limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model allows confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model ran in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure which combines thin scaffolding and fabric has been demonstrated to be feasible. Furthermore, the ability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
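The ring-position comparison described above reduces to per-direction offset statistics checked against the 5 mm clinical bound. Below is a minimal sketch of that post-processing step, assuming each model's ring centroids have already been extracted as (longitudinal, transverse) coordinates in millimetres; the function name, array layout and sample values are illustrative, not taken from the published model.

```python
import numpy as np

def ring_distance_report(experimental, computational, tolerance_mm=5.0):
    """Per-direction offsets between bench-top and FE-predicted ring centroids."""
    exp = np.asarray(experimental, dtype=float)  # shape (n_rings, 2): (longitudinal, transverse), mm
    fem = np.asarray(computational, dtype=float)
    diff = np.abs(exp - fem)                     # per-ring, per-direction offset
    return {
        "mean_longitudinal_mm": float(diff[:, 0].mean()),
        "max_longitudinal_mm": float(diff[:, 0].max()),
        "mean_transverse_mm": float(diff[:, 1].mean()),
        "max_transverse_mm": float(diff[:, 1].max()),
        "within_bound": bool(diff.max() <= tolerance_mm),
    }

# Hypothetical centroids for a 4-ring segment (mm).
experimental = [(0.0, 1.2), (15.1, 2.0), (30.4, 2.9), (45.2, 3.1)]
computational = [(0.6, 1.0), (14.5, 2.4), (31.0, 2.5), (44.1, 3.6)]
print(ring_distance_report(experimental, computational))
```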

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 191
19 Innovation Management in E-Health Care: The Implementation of New Technologies for Health Care in Europe and the USA

Authors: Dariusz M. Trzmielak, William Bradley Zehner, Elin Oftedal, Ilona Lipka-Matusiak

Abstract:

The use of new technologies should create new value for all stakeholders in the healthcare system. The article focuses on demonstrating that technologies or products typically enable new functionality, a higher standard of service, or a higher level of knowledge and competence for clinicians. It also highlights the key benefits that can be achieved through the use of artificial intelligence, such as relieving clinicians of many tasks and enabling the expansion and greater specialisation of healthcare services. The comparative analysis allowed the authors to create a classification of new technologies in e-health according to health needs and benefits for patients, doctors, and healthcare systems, i.e., the main stakeholders in the implementation of new technologies and products in healthcare. The added value of the development of new technologies in healthcare is assessed. The work is both theoretical and practical in nature. The primary research methods are bibliographic analysis and analysis of research data and the market potential of new solutions for healthcare organisations. The bibliographic analysis is complemented by the authors' case studies of implemented technologies, mostly based on artificial intelligence or telemedicine. In the past, patients were often passive recipients, the end point of the service delivery system, rather than stakeholders in the system. One of the dangers of powerful new technologies is that patients may become even more marginalised, with healthcare provided and delivered in an increasingly administrative, programmed way. The doctor may also become a robot, carrying out programmed activities - using 'non-human services'. An alternative approach is to put the patient at the centre, using technologies, products, and services that allow them to design and control technologies based on their own needs. An important contribution to the discussion is to open up the different dimensions of the user (carer and patient) and to make healthcare units implementing new technologies aware of them. The authors of this article outline the importance of three types of patients in the successful implementation of new medical solutions. The impact of implemented technologies is analysed based on: 1) "Informed users", who are able to use the technology based on a better understanding of it; 2) "Engaged users", who play an active role in the broader healthcare system as a result of the technology; 3) "Innovative users", who bring their own ideas to the table based on a deeper understanding of healthcare issues. The authors' research hypothesis is that the distinction between informed, engaged, and innovative users has an impact on the perceived and actual quality of healthcare services. The analysis is based on case studies of new solutions implemented in different medical centres. In addition, based on the observations of the Polish author, who is a manager at the largest medical research institute in Poland, with analytical input from American and Norwegian partners, the added value of the implementations for patients, clinicians, and the healthcare system will be demonstrated.

Keywords: innovation, management, medicine, e-health, artificial intelligence

Procedia PDF Downloads 20
18 Sensory Interventions for Dementia: A Review

Authors: Leigh G. Hayden, Susan E. Shepley, Cristina Passarelli, William Tingo

Abstract:

Introduction: Sensory interventions are popular therapeutic and recreational approaches for people living with all stages of dementia. However, it is unknown which sensory interventions are used to achieve which outcomes across all subtypes of dementia. Methods: To address this gap, we conducted a scoping review of sensory interventions for people living with dementia. We searched the literature for any article published in English from 1 January 1990 to 1 June 2019, on any sensory or multisensory intervention targeted at people living with any kind of dementia, which reported on patient health outcomes. We did not include complex interventions where only a small aspect was related to sensory stimulation. We searched the databases Medline, CINAHL, and Psych Articles using our institutional discovery layer. We conducted all screening in duplicate to reduce Type 1 and Type 2 errors. The data from all included papers were extracted by one team member and audited by another, to ensure consistency of extraction and completeness of data. Results: Our initial search captured 7654 articles; after the removal of duplicates (n=5329), those that did not pass title and abstract screening (n=1840), and those that did not pass full-text screening (n=281), 174 articles were included. The countries with the highest publication in this area were the United States (n=59), the United Kingdom (n=26) and Australia (n=15). The most common types of intervention were music therapy (n=36), multisensory rooms (n=27) and multisensory therapies (n=25). Seven articles were published in the 1990s, 55 in the 2000s, and the remainder since 2010 (n=112). Discussion: Multisensory rooms have been present in the literature since the early 1990s. More recently, nature/garden therapy, art therapy, and light therapy have emerged in the literature since 2008, an indication of the increasingly diverse scholarship in the area. The least popular type of intervention is a traditional food intervention. Taste as a sensory intervention is generally avoided for safety reasons; however, it shows potential for increasing quality of life. Agitation, behavior, and mood are common outcomes for all sensory interventions, whereas light therapy commonly targets sleep. The majority (n=110) of studies have very small sample sizes (n=20 or fewer), an indicator of the lack of robust data in the field. Additional small-scale studies of the known sensory interventions will likely do little to advance the field. However, there is a need for multi-armed studies which directly compare sensory interventions, and more studies which investigate the layering of sensory interventions (for example, adding an aromatherapy component to a lighting intervention). In addition, large-scale studies which enroll people at early stages of dementia will help us better understand the potential of sensory and multisensory interventions to slow the progression of the disease.

Keywords: sensory interventions, dementia, scoping review

Procedia PDF Downloads 134
17 Memorializing the Holocaust in the Present Century

Authors: Mehak Burza

Abstract:

As we pause to observe Holocaust Remembrance Day each year on 27 January, it becomes important to consider how the Holocaust is witnessed and how its education is perceived across the globe. The dissemination of knowledge of the Holocaust becomes more pertinent in the countries that were not directly affected by it. Holocaust education is not widespread in Asian countries and is thus not mandatory as an academic discipline for school and university students. One such Asian country that often considers the Holocaust an isolated event is India. Though the struggle for freedom began with the 1857 mutiny (the first war of Indian independence), the freedom revolts gained momentum specifically during the years 1944-1947, when India was steeped in a battery of rebellions. However, freedom for the Indian subcontinent from the domination of the British Raj came at the cost of the partition of India, which resulted in widespread bloodshed and migration. For India, it is this backdrop of her freedom struggle that always outweighs the incidents of the Second World War, including the catastrophic event of the Holocaust. As a result, knowledge about the Holocaust is available through secondary sources such as Holocaust documentaries and movies. Besides Anne Frank’s diary, knowledge about the Holocaust is disseminated through course readings in the universities. The most common literary acquaintances with the Jewish faith for university students come through the Jewish characters in their course readings: the Prioress’s Tale in Geoffrey Chaucer’s Canterbury Tales, the character of Shylock in William Shakespeare’s The Merchant of Venice, and the Jewish protagonist, Barabas, in Christopher Marlowe’s The Jew of Malta. Apart from this, the school textbooks include a detailed chapter on the Holocaust and Hitler, which is an encouraging turn. However, there still exists a yawning gap between the dissemination of Holocaust education and sensitization to it, owing to different geographical locales. My paper presentation aims to trace the intersectional elements between India and the Holocaust that can serve as the pivotal springboard to foster sensitization towards Holocaust education in the Indian subcontinent. For instance, Maharaja Jam Saheb Digvijaysinhji Ranjitsinhji, the ruler of Nawanagar, a princely state in British India, helped save a thousand Polish Jewish children in 1945, at a time when India herself was steeped in her struggle for freedom. Famously known as the ‘Indian Oskar Schindler’, he has had a street named after him by the Polish government in Krakow, Poland. Another example that deserves mention is the spy princess, Noor Inayat Khan, a descendant of Tipu Sultan, who became the most celebrated British spy and fought against the Nazis. Additionally, by offering refuge to Jews, India has proved to be a distant haven for them. Researching further the domain of Jewish refugees in India will not only illuminate a dull/gray zone of investigation but also enable educators to provide appropriate entry points for introducing the subject of the Shoah/Holocaust in India, a subject which unfortunately hitherto is either seldom discussed or is equated with the Partition of India.

Keywords: awareness, dissemination, holocaust, India

Procedia PDF Downloads 137
16 Basic Life Support Training in Rural Uganda: A Mixed Methods Study of Training and Attitudes towards Resuscitation

Authors: William Gallagher, Harriet Bothwell, Lowri Evans, Kevin Jones

Abstract:

Background: Worldwide, a third of adult deaths are caused by cardiovascular disease, with a high proportion occurring in the developing world. Contributing to these poor outcomes are suboptimal assessment, treatment and monitoring of the acutely unwell patient. Successful training in trauma and neonatal care is recognised in the developing world, but there is little literature supporting adult resuscitation. As far as the authors are aware, no literature has been published on resuscitation training in Uganda since 2000, when a resuscitation training officer ran sessions in neonatal and paediatric resuscitation. The aim of this project was to offer training in Basic Life Support (BLS) to staff and healthcare students based at Villa Maria Hospital in the Kalungu District, Central Uganda. This project was undertaken as a student selected component (SSC) offered by Swindon Academy, based at the Great Western Hospital, to medical students in their fourth year of the undergraduate programme. Methods: Semi-structured, informal interviews and focus groups were conducted with different clinicians in the hospital. These interviews were designed to focus on the level of training and understanding of BLS. A training session was devised which focused on BLS (excluding the use of an automated external defibrillator), involving pre- and post-training questionnaires and clinical assessments. Three training sessions were run for different cohorts: a pilot session for 5 Ugandan medical students, a second session for a group of 8 nursing and midwifery students and, finally, a third for physicians. The data collected were analysed in Excel. Paired t-tests determined statistical significance between pre- and post-test scores and between confidence before and after the sessions. Average clinical skill assessment scores were converted to percentages based on the area of BLS being assessed. Results: 27 participants were included in the analysis. 14 received ‘small group training’ whilst 13 received ‘large group training’. 88% of all participants had received some form of resuscitation training. Of these, 46% had received theory training, 27% practical training and only 15% both. 12% had received no training. On average, participants demonstrated a significant increase of 5.3 points in self-assessed confidence (p < 0.05), and participants thought the session was very useful. Analysis of qualitative data from clinician interviews is ongoing, but themes identified include rescue breaths being considered the most important aspect of resuscitation and doubts about a ‘good’ outcome from resuscitation. Conclusions: The results of this small study reflect the need for regular formal training in BLS in low-resource settings. The active engagement and positive opinions concerning the utility of the training are promising, as is the evidence of improvement in knowledge.
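As a sketch of the statistical step named in the Methods, a paired t-test on pre- and post-training confidence scores can be run as follows; the scores are invented for illustration, and only the analysis type (paired t-test on pre/post scores) comes from the abstract.

```python
from scipy import stats

# Hypothetical pre/post self-assessed confidence scores (0-10 scale),
# one pair per participant; the real study analysed 27 participants.
pre = [3, 4, 2, 5, 3, 4, 2, 3, 5, 4]
post = [8, 9, 7, 9, 8, 9, 8, 9, 10, 9]

t_stat, p_value = stats.ttest_rel(post, pre)
mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"mean gain = {mean_gain:.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```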

Keywords: basic life support, education, resuscitation, sub-Saharan Africa, training, Uganda

Procedia PDF Downloads 148
15 Co-Smoldered Digestate Ash as Additive for Anaerobic Digestion of Berry Fruit Waste: Stability and Enhanced Production Rate

Authors: Arinze Ezieke, Antonio Serrano, William Clarke, Denys Villa-Gomez

Abstract:

Berry cultivation results in the discharge of putrescible solid waste of high organic strength, which potentially contributes to environmental degradation, making it imperative to assess options for its complete management. Anaerobic digestion (AD) could be an ideal option when the target is energy generation; however, due to the high carbohydrate composition characteristic of berry fruit, the technology could be limited by its high alkalinity requirement, which suggests dosing of additives such as buffers and trace element supplements. Overcoming this limitation in an economically viable way could entail replacing synthetic additives with a recycled by-product. Consequently, ash from the co-smoldering of high-COD AD digestate and coco-coir could be a promising material with which to enhance the AD of berry fruit waste, given its characteristically high pH, alkalinity and metal concentrations, typical of synthetic additives. Therefore, the aim of this research was to evaluate the stability and process performance of the AD of berry fruit waste (BFW) when ash from co-smoldered digestate and coir is supplemented as an alkalinity and trace element (TE) source. A series of batch experiments was performed to ascertain the necessity for alkalinity addition and to see whether the alkalinity and metals in the co-smoldered digestate ash can provide the necessary buffer and TEs for AD of berry fruit waste. Triplicate assays were performed in batch systems at an I/S ratio of 2 (in VS), using serum bottles (160 mL) sealed and placed in a heated room (35±0.5 °C) after creating anaerobic conditions. Control experiments contained inoculum and substrate only, while the assays for the optimal total alkalinity concentration and for TEs additionally contained NaHCO3. Total alkalinity concentration refers to the alkalinity of the inoculum and the additives. The alkalinity and TE potential of the ash were evaluated by supplementing ash (22.574 g/kg) with a total alkalinity concentration equivalent to the pre-determined optimum from NaHCO3, and by dosing ash (0.012-7.574 g/kg) at varying concentrations of specific essential TEs (Co, Fe, Ni, Se), respectively. The results showed a stable process at all examined conditions. Supplementation of NaHCO3 at 745 mg/L as CaCO3 resulted in an optimum total alkalinity concentration of 2000 mg/L as CaCO3. An equivalent ash supplementation of 22.574 g/kg achieved this pre-determined optimum total alkalinity concentration, resulting in a stable process with a 92% increase in the methane production rate (323 versus 168 mL CH4/(gVS.d)) but a 36% reduction in the cumulative methane production (103 versus 161 mL CH4/gVS). Addition of ash at incremental dosages as a TE source resulted in a reduction in the cumulative methane production, with the highest dosage of 7.574 g/kg having the largest effect (-23.5%); however, the seemingly immediate bioavailability of TEs at this high dosage allowed a 15% increase in the methane production rate. Given the increased methane production rate, the results demonstrate that the ash at high dosages could be an effective supplementary material for either a buffered or a non-buffered berry fruit waste AD system.
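The headline comparisons above are simple percentage changes, which a few lines can reproduce; the four input values are those reported in the abstract, and the helper function itself is illustrative.

```python
def percent_change(new, baseline):
    """Signed percentage change of `new` relative to `baseline`."""
    return 100.0 * (new - baseline) / baseline

# Reported values: methane production rate in mL CH4/(gVS.d) and
# cumulative methane production in mL CH4/gVS, ash-dosed vs. control.
print(f"rate change:  {percent_change(323, 168):+.0f}%")   # about +92%
print(f"yield change: {percent_change(103, 161):+.0f}%")   # about -36%
```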

Keywords: anaerobic digestion, alkalinity, co-smoldered digestate ash, trace elements

Procedia PDF Downloads 122
14 The Budget Impact of the DISCERN™ Diagnostic Test for Alzheimer’s Disease in the United States

Authors: Frederick Huie, Lauren Fusfeld, William Burchenal, Scott Howell, Alyssa McVey, Thomas F. Goss

Abstract:

Alzheimer’s Disease (AD) is a degenerative brain disease characterized by memory loss and cognitive decline that presents a substantial economic burden for patients and health insurers in the US. This study evaluates the payer budget impact of the DISCERN™ test in the diagnosis and management of patients with symptoms of dementia evaluated for AD. DISCERN™ comprises three assays that assess critical factors related to AD that regulate memory, formation of synaptic connections among neurons, and levels of amyloid plaques and neurofibrillary tangles in the brain, and it can provide a quicker, more accurate diagnosis than tests in the current diagnostic pathway (CDP). An Excel-based model with a three-year horizon was developed to assess the budget impact of DISCERN™ compared with the CDP in a Medicare Advantage plan with 1M beneficiaries. Model parameters were identified through a literature review and were verified through consultation with clinicians experienced in the diagnosis and management of AD. The model assesses direct medical costs/savings for patients based on the following categories: 1) Diagnosis: costs of diagnosis using DISCERN™ and the CDP. 2) False Negative (FN) diagnosis: incremental cost of care avoidable with a correct AD diagnosis and appropriately directed medication. 3) True Positive (TP) diagnosis: AD medication costs; cost from a later TP diagnosis with the CDP versus DISCERN™ in the year of diagnosis, and savings from the delay in AD progression due to appropriate AD medication in patients who are correctly diagnosed after a FN diagnosis. 4) False Positive (FP) diagnosis: cost of AD medication for patients who do not have AD. A one-way sensitivity analysis was conducted to assess the effect of varying key clinical and cost parameters ±10%. An additional scenario analysis was developed to evaluate the impact of individual inputs. In the base scenario, DISCERN™ is estimated to decrease costs by $4.75M over three years, equating to approximately $63.11 saved per test per year for a cohort followed over three years. While the diagnosis cost is higher with DISCERN™ than with CDP modalities, this cost is offset by the higher overall costs associated with the CDP due to the longer time needed to receive a TP diagnosis and the larger number of patients who receive a FN diagnosis and progress more rapidly than if they had received appropriate AD medication. The sensitivity analysis shows that the three parameters with the greatest impact on savings are: reduced sensitivity of DISCERN™, improved sensitivity of the CDP, and a reduction in the percentage of disease progression that is avoided with appropriate AD medication. A scenario analysis in which DISCERN™ reduces the utilization of computed tomography from 21% in the base case to 16%, magnetic resonance imaging from 37% to 27%, and cerebrospinal fluid biomarker testing, positron emission tomography, electroencephalograms, and polysomnography testing from 4%, 5%, 10%, and 8%, respectively, in the base case to 0%, results in an overall three-year net savings of $14.5M. DISCERN™ improves the rate of accurate, definitive diagnosis of AD earlier in the disease and may generate savings for Medicare Advantage plans.
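The cost structure described above (testing cost plus outcome-dependent downstream costs for the TP, FN and FP categories) can be sketched as a simple cohort calculation. The category layout follows the abstract, but every figure below is a placeholder, not a model input from the study.

```python
def three_year_cost(cohort, cost_per_test, outcome_rates, outcome_costs):
    """Testing cost plus downstream cost of each diagnostic outcome category."""
    total = cohort * cost_per_test
    for outcome, rate in outcome_rates.items():
        total += cohort * rate * outcome_costs[outcome]
    return total

# Placeholder inputs for a hypothetical tested cohort of 10,000 patients.
downstream = {"TP": 12_000, "FN": 30_000, "FP": 4_000, "TN": 0}
cdp = three_year_cost(10_000, 1_500,
                      {"TP": 0.45, "FN": 0.25, "FP": 0.10, "TN": 0.20}, downstream)
discern = three_year_cost(10_000, 2_000,
                          {"TP": 0.60, "FN": 0.10, "FP": 0.10, "TN": 0.20}, downstream)
print(f"net savings with the faster, more accurate test: ${cdp - discern:,.0f}")
```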

Keywords: Alzheimer’s disease, budget, dementia, diagnosis

Procedia PDF Downloads 138
13 A Negotiation Model for Understanding the Role of International Law in Foreign Policy Crises

Authors: William Casto

Abstract:

Studies that consider the actual impact of international law upon foreign affairs crises are flawed by an unrealistic model of decision making. The common, unexamined assumption is that a nation has a unitary executive or ruler who weighs a wide variety of considerations, including international law, in attempting to resolve a crisis. To the extent that negotiation theory is considered, the focus is on negotiations between or among nations. The unsettling result is a shallow focus that concentrates on each country’s public posturing about international law. The country-to-country model ignores governments’ internal negotiations that lead to their formal position in a crisis. The model for foreign policy crises needs to be supplemented to include a model of internal negotiations. Important foreign policy decisions come from groups within a government: committees, advisers, etc. Within these groups, participants may have differing agendas and resort to international law to bolster their positions. To understand the influence of international law in international crises, these internal negotiations must be considered. These negotiations are crucial to creating a foreign policy agenda or recommendations. External negotiations between the two nations are significant, but the internal negotiations provide a better understanding of the actual influence of international law upon international crises. Discovering the details of specific internal negotiations is quite difficult but not necessarily impossible. The present proposal will use a specific crisis to illustrate the role of international law. In 1861, during the American Civil War, a United States Navy captain stopped a British mail ship and removed two ambassadors of the rebelling southern states. The result was what is commonly called the Trent Affair. In the wake of the captain’s unauthorized and rash action, Great Britain seriously considered going to war against the United States. A detailed analysis of the Trent Affair is possible using the available and extensive internal British correspondence and memoranda to reach an understanding of the effect of international law upon decision making. The extensive trove of internal British documents is particularly valuable because in 1861 the only effective means of communication was face-to-face or through letters. Telephones did not exist, and travel by horse and carriage was tedious. The British documents tell us how individual participants viewed the process. We can approach an accurate understanding of what actually happened as the British government strove to resolve the crisis. For example, British law officers initially concluded that the American captain’s rash act was permissible under international law. Later, the law officers revised their opinion. A model of internal negotiation is particularly valuable because it strips away nations’ public posturing about disputed international law principles. In internal decision making, there is room for meaningful debate over the relevant principles. This fluid debate tells us how international law is used to develop a hard, public bargaining position. The Trent Affair indicates that international law had an actual influence upon the crisis and that law was not mere window dressing for the government’s public position.

Keywords: foreign affairs crises, negotiation, international law, Trent affair

Procedia PDF Downloads 127
12 Factors Affecting Early Antibiotic Delivery in Open Tibial Shaft Fractures

Authors: William Elnemer, Nauman Hussain, Samir Al-Ali, Henry Shu, Diane Ghanem, Babar Shafiq

Abstract:

Introduction: The incidence of infection in open tibial shaft injuries varies depending on the severity of the injury, with rates ranging from 1.8% for Gustilo-Anderson type I to 42.9% for type IIIB fractures. The timely administration of antibiotics upon presentation to the emergency department (ED) is an essential component of fracture management, and evidence indicates that prompt delivery of antibiotics is associated with improved outcomes. The objective of this study is to identify factors that contribute to the expedient administration of antibiotics. Methods: This is a retrospective study of open tibial shaft fractures at an academic Level I trauma center. Current Procedural Terminology (CPT) codes identified all patients treated for open tibial shaft fractures between 2015 and 2021. Open fractures were identified by reviewing ED and provider notes, and ballistic fractures were considered open. Chart reviews were performed to extract demographics, fracture characteristics, postoperative outcomes, time to operating room, and time to antibiotic order and delivery. Univariate statistical analysis compared patients who received early antibiotics (EA), delivered within one hour of ED presentation, and those who received late antibiotics (LA), delivered beyond one hour of ED presentation. A multivariate analysis was performed to investigate patient, fracture, and transport/ED characteristics contributing to faster delivery of antibiotics. The multivariate analysis included the independent variables: ballistic fracture, activation of Delta Trauma, Gustilo-Anderson classification (Type III vs. Types I and II), AO-OTA classification (Type C vs. Types A and B), arrival between 7 am and 11 pm, and arrival via Emergency Medical Services (EMS) or walk-in. Results: Seventy ED patients with open tibial shaft fractures were identified. Of these, 39 patients (55.7%) received EA, while 31 patients (44.3%) received LA. Univariate analysis shows that arrival via EMS as opposed to walk-in (97.4% vs. 74.2%, p = 0.01) and activation of Delta Trauma (89.7% vs. 51.6%, p < 0.001) were significantly more frequent in the EA group than in the LA group. Additionally, EA cases had significantly shorter intervals between the antibiotic order and delivery than LA cases (0.02 hours vs. 0.35 hours, p = 0.007). No other significant differences were found in terms of postoperative outcomes or fracture characteristics. Multivariate analysis shows that a Delta Trauma response, arrival via EMS, and presentation between 7 am and 11 pm were independent predictors of a shorter time to antibiotic administration (Odds Ratio = 11.9, 30.7, and 5.4; p = 0.001, 0.016, and 0.013, respectively). Discussion: Earlier antibiotic delivery is associated with arrival to the ED between 7 am and 11 pm, arrival via EMS, and a coordinated Delta Trauma activation. Our findings indicate that in cases where administering antibiotics is critical to achieving positive outcomes, it is advisable to employ a coordinated Delta Trauma response. Hospital personnel should be attentive to the rapid administration of antibiotics to patients with open fractures who arrive via walk-in or during late-night hours.
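A minimal sketch of the multivariate step described above: logistic regression of early antibiotic delivery on the candidate predictors, with odds ratios read off the exponentiated coefficients. The synthetic data, effect sizes and variable names are illustrative; only the predictor set mirrors the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # synthetic cohort (the real study had 70 patients)

# Binary predictors mirroring the abstract's multivariate model.
X = np.column_stack([
    rng.integers(0, 2, n),  # Delta Trauma activation
    rng.integers(0, 2, n),  # arrival via EMS
    rng.integers(0, 2, n),  # arrival between 7 am and 11 pm
])
logit = -1.5 + 2.0 * X[:, 0] + 1.8 * X[:, 1] + 1.0 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)  # early antibiotics

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
for name, beta in zip(["delta_trauma", "ems_arrival", "daytime"], fit.params[1:]):
    print(f"{name}: OR = {np.exp(beta):.1f}")
```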

Keywords: antibiotics, emergency department, fracture management, open tibial shaft fractures, orthopaedic surgery, time to OR, trauma fractures

Procedia PDF Downloads 65
11 Associated Factors of Hypercholesterolemia, Hyperuricemia and the Double Burden of Hyperuricemia-Hypercholesterolemia in Gout Patients: A Hospital-Based Study

Authors: Pierre Mintom, Armel Assiene Agamou, Leslie Toukem, William Dakam, Christine Fernande Nyangono Biyegue

Abstract:

Context: Hyperuricemia, the presence of high levels of uric acid in the blood, is a known precursor to the development of gout. Recent studies have suggested a strong association between hyperuricemia and disorders of lipoprotein metabolism, specifically hypercholesterolemia. Understanding the factors associated with these conditions in gout patients is essential for effective treatment and management. Research Aim: The objective of this study was to determine the prevalence of hyperuricemia, hypercholesterolemia, and the double burden of hyperuricemia-hypercholesterolemia in the gouty population, and to identify the factors associated with these conditions. Methodology: The study utilized a database from a survey of 150 gouty patients recruited at the Laquintinie Hospital in Douala between August 2017 and February 2018. The database contained information on anthropometric parameters, biochemical markers, and the foods and drugs consumed by the patients. Hyperuricemia and hypercholesterolemia were defined based on specific serum uric acid and total cholesterol thresholds, and the double burden was defined as the co-occurrence of hyperuricemia and hypercholesterolemia. Findings: The study found that the prevalence rates for hyperuricemia, hypercholesterolemia, and the double burden were 61.3%, 76%, and 50.7%, respectively. Factors associated with these conditions included hypertriglyceridemia, the TC/HDL atherogenicity index, the LDL/HDL atherogenicity index, family history, and the consumption of specific foods and drinks. Theoretical Importance: The study highlights the strong association between hyperuricemia and dyslipidemia, providing important insights for guiding treatment strategies in gout patients. Additionally, it emphasizes the significance of nutritional education in managing these metabolic disorders, suggesting the need to address eating habits in gout patients. Data Collection and Analysis Procedures: Data were collected through surveys and the medical records of gouty patients. Information on anthropometric parameters, biochemical markers, and dietary habits was recorded. Prevalence rates and associated factors were determined through statistical analysis, employing odds ratios to assess the risks. Question Addressed: The study aimed to determine the prevalence rates and associated factors of hyperuricemia, hypercholesterolemia, and the double burden in gouty patients, and to understand the relationships between these conditions and their implications for treatment and nutritional education. Conclusion: The findings show that an association exists between hyperuricemia and hypercholesterolemia in gout patients, thus creating a double burden. They underscore the importance of considering family history and eating habits in addressing the double burden of hyperuricemia-hypercholesterolemia. This study provides valuable insights for guiding treatment approaches and emphasizes the need for nutritional education in gout patients. The study focused specifically on the sick population; a case-control study between gouty and non-gouty populations would be useful to better compare and explain the observed results.
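As a sketch of the odds-ratio analysis mentioned under Data Collection and Analysis Procedures, the odds ratio for one candidate factor can be computed from a 2x2 table as follows; the counts are invented for illustration.

```python
from math import exp, log, sqrt

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table with a Woolf 95% CI:
    a/b = exposed cases/controls, c/d = unexposed cases/controls."""
    oratio = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return oratio, (exp(log(oratio) - 1.96 * se), exp(log(oratio) + 1.96 * se))

# Hypothetical counts: hypertriglyceridemia vs. the double burden.
or_value, ci = odds_ratio(45, 20, 31, 54)
print(f"OR = {or_value:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```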

Keywords: gout, hyperuricemia, hypercholesterolemia, double burden

Procedia PDF Downloads 61
10 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires the optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model beforehand, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Afterwards, the top-rated wells, ranked by highest achieved ROP, are distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering live adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data are then consolidated into a heat-map as a function of ROP. A more optimal ROP performance is identified through the heat-map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
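A minimal sketch of the IDW step in phase one, assuming each offset-well record carries a parameter value (here WOB) at a known surface location; the function, locations and values are illustrative, not the authors' implementation.

```python
import numpy as np

def idw(query_xy, sample_xy, sample_values, power=2.0, eps=1e-12):
    """Conditioned mean of offset-well values, weighted by 1/distance**power."""
    d = np.linalg.norm(np.asarray(sample_xy, dtype=float) - np.asarray(query_xy, dtype=float), axis=1)
    w = 1.0 / (d + eps) ** power
    return float(np.dot(w, sample_values) / w.sum())

# Hypothetical offset-well locations (km) and their best-practice WOB values (klbf).
wells = [(0.0, 0.0), (1.2, 0.4), (0.3, 2.1), (2.5, 2.0)]
wob = [22.0, 25.5, 19.8, 27.0]
print(f"recommended WOB at (0.8, 0.9): {idw((0.8, 0.9), wells, wob):.1f} klbf")
```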

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 131
9 Genetically Informed Precision Drug Repurposing for Rheumatoid Arthritis

Authors: Sahar El Shair, Laura Greco, William Reay, Murray Cairns

Abstract:

Background: Rheumatoid arthritis (RA) is a chronic, systemic, inflammatory, autoimmune disease that involves damage to the joints and erosion of the associated bone and cartilage, resulting in reduced physical function and disability. RA is a multifactorial disorder influenced by heterogeneous genetic and environmental factors. Whilst different medications have proven successful in reducing the inflammation associated with RA, they often come with significant side effects and limited efficacy. To address this, the novel pharmagenic enrichment score (PES) algorithm was tested in self-reported RA patients from the UK Biobank (UKBB), a cohort of predominantly European ancestry, to identify individuals with a high genetic risk in clinically actionable biological pathways and thereby reveal novel opportunities for precision interventions and drug repurposing to treat RA. Methods and Materials: Genetic association data for rheumatoid arthritis were derived from publicly available genome-wide association study (GWAS) summary statistics (N=97,173). The PES framework exploits competitive gene set enrichment to identify pathways that are associated with RA and to explore novel treatment opportunities. These data are then integrated with the WebGestalt, Drug Interaction database (DGIdb) and DrugBank databases to identify compounds with existing use or potential for repurposed use. The PES for each of these candidates was then profiled in individuals with RA in the UKBB (Ncases = 3,719, Ncontrols = 333,160). Results: A total of 209 pathways with known drug targets were identified after multiple-testing correction. Several pathways, including interferon gamma signaling and the TID pathway (which relates to a chaperone that modulates interferon signaling), were significantly associated with self-reported RA in the UKBB when adjusting for age, sex, assessment centre month and location, RA polygenic risk and 10 principal components. These pathways have a major role in RA pathogenesis, including autoimmune attacks against certain citrullinated proteins, synovial inflammation, and bone loss. Encouragingly, many also relate to the mechanism of action of existing RA medications. The analyses also revealed a statistically significant association between RA polygenic scores and self-reported RA alongside individual PES scores, highlighting the potential utility of the PES algorithm in uncovering additional genetic insights that could aid the identification of individuals at risk for RA and provide opportunities for more targeted interventions. Conclusions: In this study, pharmacologically annotated genetic risk was explored through the PES framework to overcome inter-individual heterogeneity and enable precision drug repurposing in RA. The results showed a statistically significant association between RA polygenic scores, self-reported RA and individual PES scores for 3,719 RA patients. Interestingly, several enriched PES pathways were targeted by already approved RA drugs. In addition, the analysis revealed genetically supported drug repurposing opportunities for the future treatment of RA with a relatively safe profile.
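As a rough sketch of the pathway-level scoring idea, a PES-style score can be approximated as a polygenic score restricted to variants mapped to one pathway's genes. This is an assumption-laden illustration, not the published PES algorithm; all names and values below are hypothetical.

```python
import numpy as np

def pathway_score(genotypes, effect_sizes, in_pathway):
    """Polygenic score restricted to variants in one pathway:
    risk-allele dosages weighted by GWAS effect sizes."""
    mask = np.asarray(in_pathway, dtype=bool)
    return np.asarray(genotypes)[:, mask] @ np.asarray(effect_sizes)[mask]

rng = np.random.default_rng(1)
genotypes = rng.integers(0, 3, size=(5, 8))  # 5 individuals x 8 variants, dosage 0-2
betas = rng.normal(0.0, 0.1, 8)              # hypothetical GWAS effect sizes
in_ifn_gamma = [1, 0, 1, 1, 0, 0, 1, 0]      # hypothetical variant-to-pathway map
print(pathway_score(genotypes, betas, in_ifn_gamma))
```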

Keywords: rheumatoid arthritis, precision medicine, drug repurposing, system biology, bioinformatics

Procedia PDF Downloads 76
8 Organizational Resilience in the Perspective of Supply Chain Risk Management: A Scholarly Network Analysis

Authors: William Ho, Agus Wicaksana

Abstract:

Anecdotal evidence from the last decade shows that the occurrence of disruptive events and uncertainties in the supply chain is increasing. The coupling of these events with the nature of an increasingly complex and interdependent business environment leads to devastating impacts that quickly propagate within and across organizations. For example, the recent COVID-19 pandemic increased the global supply chain disruption frequency by at least 20% in 2020 and is projected to have a cumulative cost of $13.8 trillion by 2024. This crisis draws attention to organizational resilience as a means to weather business uncertainty. However, the concept has been criticized for being vague and lacking a consistent definition, thus reducing its significance for practice and research. This study is intended to address that issue by providing a comprehensive review of the conceptualization, measurement, and antecedents of organizational resilience as discussed in the supply chain risk management (SCRM) literature. We performed a hybrid scholarly network analysis, combining citation-based and text-based approaches, on 252 articles published from 2000 to 2021 in top-tier journals selected on three parameters: AJG and ABS ranking, the UT Dallas and FT50 lists, and editorial board review. Specifically, we employed a bibliographic coupling analysis in the research cluster formation stage and a co-words analysis in the research cluster interpretation and analysis stage. Our analysis reveals three major research clusters of resilience research in the SCRM literature, namely (1) supply chain network design and optimization, (2) organizational capabilities, and (3) digital technologies. We portray the research process of the last two decades in terms of the exemplar studies, problems studied, commonly used approaches and theories, and solutions provided in each cluster. We then provide a conceptual framework on the conceptualization and antecedents of resilience based on studies in these clusters and highlight potential areas that need to be studied further. Finally, we leverage the concept of abnormal operating performance to propose a new measurement strategy for resilience. This measurement overcomes the limitation of most current measurements, which are event-dependent and focus on the resistance or recovery stage without capturing the growth stage. In conclusion, this study provides a robust literature review through a scholarly network analysis that increases the completeness and accuracy of research cluster identification and analysis to understand the conceptualization, antecedents, and measurement of resilience. It also enables a comprehensive review of resilience research in the SCRM literature by including research articles published during the pandemic and connecting this development with the plethora of articles published in the last two decades. From the managerial perspective, this study provides practitioners with clarity on the conceptualization and critical success factors of firm resilience from the SCRM perspective.
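Bibliographic coupling, used here in the cluster formation stage, links two articles by the number of references they share. A minimal sketch with invented reference lists:

```python
from itertools import combinations

# Hypothetical reference lists per article.
refs = {
    "Article A": {"r1", "r2", "r3"},
    "Article B": {"r2", "r3", "r4"},
    "Article C": {"r5"},
}

# Coupling strength = number of shared references; pairs above a
# threshold are linked when forming research clusters.
for a, b in combinations(refs, 2):
    strength = len(refs[a] & refs[b])
    if strength > 0:
        print(f"{a} -- {b}: coupling strength {strength}")
```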

Keywords: supply chain risk management, organizational resilience, scholarly network analysis, systematic literature review

Procedia PDF Downloads 74
7 On the Bias and Predictability of Asylum Cases

Authors: Panagiota Katsikouli, William Hamilton Byrne, Thomas Gammeltoft-Hansen, Tijs Slaats

Abstract:

An individual who demonstrates a well-founded fear of persecution or faces a real risk of being subjected to torture is eligible for asylum. In Danish law, the exact legal thresholds reflect those established by international conventions, notably the 1951 Refugee Convention and the 1950 European Convention on Human Rights. These international treaties, however, remain largely silent when it comes to how states should assess asylum claims. As a result, national authorities are typically left to determine an individual’s legal eligibility on a narrow basis consisting of an oral testimony, which may itself be hampered by several factors, including imprecise language interpretation, insecurity or lacking trust towards the authorities among applicants. The shaky ground on which authorities must base their subjective perceptions of asylum applicants' credibility raises the question of whether, in all cases, adjudicators make the correct decision. Moreover, the subjective element in these assessments raises the question of whether individual asylum cases could be afflicted by implicit biases or stereotyping amongst adjudicators. In fact, recent studies have uncovered significant correlations between decision outcomes and the experience and gender of the assigned judge, as well as correlations between asylum outcomes and entirely external events such as weather and political elections. In this study, we analyze a publicly available dataset containing approximately 8,000 summaries of asylum cases, initially rejected and then re-tried by the Refugee Appeals Board (RAB) in Denmark. First, we look for variations in the recognition rates with regard to a number of applicants’ features: their country of origin/nationality, their identified gender, their identified religion, their ethnicity, whether torture was mentioned in their case and, if so, whether the claim was supported or not, and the year the applicant entered Denmark. In order to extract those features, as well as the final decision of the RAB, from the text summaries, we applied natural language processing and regular expressions, adjusting for the Danish language. We observed interesting variations in recognition rates related to the applicants’ country of origin, ethnicity, year of entry and the support or not of torture claims, whenever those were made in the case. The appearance (or not) of significant variations in the recognition rates does not by itself imply (or rule out) bias in the decision-making process. None of the considered features, with the possible exception of the torture claims, should be decisive factors for an asylum seeker’s fate. We therefore investigate whether the decision can be predicted on the basis of these features and, consequently, whether biases are likely to exist in the decision-making process. We employed a number of machine learning classifiers and found that, when using the applicant’s country of origin, religion, ethnicity and year of entry with a random forest classifier or a decision tree, the prediction accuracy is as high as 82% and 85%, respectively. These features thus appear to have potentially predictive properties with regard to the outcome of an asylum case. Our analysis and findings call for further investigation of the predictability of the outcome on a larger dataset of 17,000 cases, which is ongoing.
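A minimal sketch of the classification step, assuming the four features named above have already been extracted from the case summaries and integer-encoded. The data below are synthetic, so the accuracy will not reproduce the reported 82-85%.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000  # stand-in for the ~8,000 RAB case summaries

# Integer-encoded features: country of origin, religion, ethnicity, year of entry.
X = np.column_stack([
    rng.integers(0, 50, n),
    rng.integers(0, 6, n),
    rng.integers(0, 20, n),
    rng.integers(2000, 2020, n),
])
y = rng.integers(0, 2, n)  # asylum granted on appeal (yes/no)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```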

Keywords: asylum adjudications, automated decision-making, machine learning, text mining

Procedia PDF Downloads 95
6 Effectiveness of an Intervention to Increase Physics Students' STEM Self-Efficacy: Results of a Quasi-Experimental Study

Authors: Stephanie J. Sedberry, William J. Gerace, Ian D. Beatty, Michael J. Kane

Abstract:

Increasing the number of US university students who attain degrees in STEM and enter the STEM workforce is a national priority. Demographic groups vary in their rates of participation in STEM, and the US produces just 10% of the world's science and engineering degrees (2014 figures). To address these gaps, we have developed and tested a practical, 30-minute, single-session, classroom-based intervention to improve students' self-efficacy and academic performance in university STEM courses. Self-efficacy is an internal psychosocial construct, strongly correlated with academic success, that relates to the social, emotional, and psychological aspects of student motivation and performance. A compelling body of research demonstrates that university students' self-efficacy beliefs are strongly related to their selection of STEM as a major, their aspirations for STEM-related careers, and their persistence in science. The development of an intervention to increase students' self-efficacy is motivated by research showing that short social-psychological interventions in education can lead to large gains in student achievement. Our intervention addresses STEM self-efficacy via two strong, but previously separate, lines of research into attitudinal/affect variables that influence student success. The first is 'attributional retraining', in which students learn to attribute their successes and failures to internal rather than external factors. The second is 'mindset' about fixed vs. growable intelligence, in which students learn that the brain remains plastic throughout life and that they can, with conscious effort and attention to thinking skills and strategies, become smarter. Extant interventions for both of these constructs have significantly increased academic performance in the classroom. We developed a 34-item Likert-scale questionnaire to measure STEM Self-Efficacy, Perceived Academic Control, and Growth Mindset in a university STEM context, and validated it with exploratory factor analysis, Rasch analysis, and multi-trait multi-method comparison to coded interviews. Four iterations of our 42-week research protocol were conducted across two academic years (2017-2018) at three universities in North Carolina, USA (UNC-G, NC A&T SU, and NCSU) with varied student demographics. We utilized a quasi-experimental, prospective, multiple-group time-series research design with both experimental and control groups, and we are employing linear modeling to estimate the impact of the intervention on Self-Efficacy, Growth Mindset, Perceived Academic Control, and final course grades (the performance measure). Preliminary results indicate statistically significant effects of treatment vs. control on Self-Efficacy, Growth Mindset, and Perceived Academic Control. Analyses are ongoing and final results are pending. This intervention may have the potential to increase student success in the STEM classroom, and students' ownership of that success, supporting their continuation into STEM careers. Additionally, we have learned a great deal about the complex components and dynamics of self-efficacy, their link to performance, and the ways they can be influenced to improve students' academic performance.
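
For readers who want the shape of the analysis, here is a minimal sketch of the kind of linear model described, assuming a hypothetical per-student data file and column names; it is an illustration, not the authors' actual model specification.

    # Hedged sketch of the linear-modeling step; variable names and data
    # layout are illustrative assumptions.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("intervention_study.csv")  # hypothetical per-student records
    # treated: 1 = intervention, 0 = control; pre/post scores on the questionnaire;
    # C(university) adds fixed effects for the three sites.
    model = smf.ols(
        "post_self_efficacy ~ treated + pre_self_efficacy + C(university)",
        data=df,
    ).fit()
    print(model.summary())  # the coefficient on 'treated' estimates the intervention effect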

Keywords: academic performance, affect variables, growth mindset, intervention, perceived academic control, psycho-social variables, self-efficacy, STEM, university classrooms

Procedia PDF Downloads 127
5 Improving the Utility of Social Media in Pharmacovigilance: A Mixed Methods Study

Authors: Amber Dhoot, Tarush Gupta, Andrea Gurr, William Jenkins, Sandro Pietrunti, Alexis Tang

Abstract:

Background: The COVID-19 pandemic has driven pharmacovigilance towards a new paradigm. More people than ever before are recognising and reporting adverse reactions to medications, treatments, and vaccines. With over 3.8 billion users, social media has become the most accessible medium for people to voice their opinions, and so it provides an opportunity for more patient-centric and accessible pharmacovigilance. However, the pharmaceutical industry has been slow to incorporate social media into its modern pharmacovigilance strategy. This project aims to make social media a more effective tool in pharmacovigilance, and so reduce drug costs, improve drug safety, and improve patient outcomes. This will be achieved by first uncovering and categorising the barriers to the widespread adoption of social media in pharmacovigilance, then exploring the opportunities social media offers, and finally proposing realistic, practical recommendations to make social media a more effective pharmacovigilance tool. Methodology: A comprehensive systematic literature review was conducted to produce a categorised summary of these barriers. This was followed by 11 semi-structured interviews with pharmacovigilance experts to confirm the literature review findings while also exploring the unpublished, real-life challenges faced by those in the pharmaceutical industry. Finally, a survey of the general public (n = 112) ascertained public knowledge, perception, and opinion regarding the use of their social media data for pharmacovigilance purposes. This project stands out by offering perspectives from the public and the pharmaceutical industry that fill the research gaps identified in the literature review. Results: Our results give rise to several key points. First, inadequacies of current natural language processing algorithms hinder effective pharmacovigilance data extraction from social media, and where extraction is possible, there are significant questions over data quality. Social media also contains a variety of biases towards common drugs, mild adverse drug reactions, and the younger generation. Additionally, outdated regulations for social media pharmacovigilance do not align with the newer General Data Protection Regulation (GDPR), creating ethical ambiguity about data privacy and levels of access. This leads to an underlying mindset of avoidance within the pharmaceutical industry, as firms are disincentivised by the legal, financial, and reputational risks associated with breaking ambiguous regulations. Conclusion: Our project uncovered several barriers that prevent effective pharmacovigilance on social media. As such, social media should complement traditional sources of pharmacovigilance data rather than serve as a sole source. This project adds further value by proposing five practical recommendations to improve the effectiveness of social media pharmacovigilance: prioritising health-orientated social media; improving technical capabilities through investment and strategic partnerships; setting clear regulatory guidelines through multi-stakeholder processes; creating an adverse drug reaction reporting interface built into social media platforms; and developing educational campaigns to raise awareness of the use of social media in pharmacovigilance. Implementation of these recommendations would speed up the efficient, ethical, and systematic adoption of social media in pharmacovigilance.

Keywords: adverse drug reaction, drug safety, pharmacovigilance, social media

Procedia PDF Downloads 82
4 Exploring the Relationship between Mediolateral Center of Pressure and Galvanic Skin Response during Balance Tasks

Authors: Karlee J. Hall, Mark Laylor, Jessy Varghese, Paula Polastri, Karen Van Ooteghem, William McIlroy

Abstract:

Balance training is a common part of physiotherapy treatment and often involves a set of proprioceptive exercises that the patient carries out in the clinic and as part of their exercise program. Understanding all factors contributing to altered balance is of utmost importance to the clinical success of treatment of balance dysfunctions. A critical role for the autonomic nervous system (ANS) in the control of balance reactions has been proposed previously, with evidence for potential involvement inferred from the observation of phasic galvanic skin responses (GSR) evoked by external balance perturbations. The current study explored whether the coupling between ANS reactivity and balance reactions would be observed during spontaneously occurring instability while standing, including standard positions typical of physiotherapy balance assessments. It was hypothesized that time-varying changes in GSR (ANS reactivity) would be associated with time-varying changes in the mediolateral center of pressure (ML-COP) (somatomotor reactivity). Nine individuals (5 females, 4 males, aged 19-37 years) were recruited. To induce varying balance demands during standing, the study compared ML-COP and GSR data across task conditions that varied the availability of vision and the width of the base of support. Subjects completed three 30-second trials for each of the following stance conditions: standard, narrow, and tandem with eyes closed; tandem with eyes open; tandem with eyes open with a dome to shield visual input; and restricted peripheral visual field. ANS activity was evaluated by measures of GSR recorded from Ag-AgCl electrodes on the middle phalanges of digits 2 and 4 of the left hand; balance measures included ML-COP excursion frequency and amplitude recorded from two force plates embedded in the floor underneath each foot. Subjects were instructed to stand as still as possible with arms crossed in front of their chest. When comparing mean task differences across subjects, there was an expected increase in postural sway from tasks with a wide stance and no sensory restrictions (least challenging) to those with a narrow stance and no vision (most challenging). The correlation analysis revealed a significant positive relationship between ML-COP variability and GSR variability across tasks (r=0.94, df=5, p < 0.05). In addition, within-subject analyses revealed a significant positive correlation in 7 participants (r=0.47, 0.57, 0.62, 0.62, 0.81, 0.64, 0.69, respectively; df=19, p < 0.05) and no significant relationship in 2 participants (r=0.36, 0.29, respectively; df=19, p > 0.05). The current study thus revealed a significant relationship between ML-COP and GSR during balance tasks, demonstrating ANS reactivity associated with the naturally occurring instability of quiet standing that is proportional to the degree of instability. Understanding the link between ANS activity and control of COP is an important step toward better assessment of the factors contributing to poor balance and treatment of balance dysfunctions. The next steps will explore the temporal association between the time-varying changes in COP and GSR to establish whether ANS reactivity leads or lags the evoked motor reactions, as well as explore potential biomarkers for clinical screening of ANS activity as a contributing factor to altered balance control.
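
As a rough sketch of the across-task correlation analysis, the snippet below computes a Pearson correlation between task-level ML-COP and GSR variability; all numbers are placeholders, not the study's data.

    # Hedged sketch of the across-task correlation; the arrays hold
    # placeholder per-task values (one per stance condition), not study data.
    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical task-level variability (e.g., SD) of each signal,
    # averaged across subjects.
    ml_cop_sd = np.array([2.1, 2.4, 3.0, 3.2, 3.5, 3.9, 4.6])
    gsr_sd = np.array([0.8, 0.9, 1.1, 1.3, 1.2, 1.5, 1.7])

    r, p = pearsonr(ml_cop_sd, gsr_sd)  # with 7 points, df = n - 2 = 5, as in the abstract
    print(f"r = {r:.2f}, p = {p:.4f}")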

Keywords: autonomic nervous system, balance control, center of pressure, somatic nervous system

Procedia PDF Downloads 168
3 The Influence of Screen Translation on Creative Audiovisual Writing: A Corpus-Based Approach

Authors: John D. Sanderson

Abstract:

The popularity of American cinema worldwide has contributed to the development of sociolects related to specific film genres in other cultural contexts by means of screen translation, in many cases eluding norms of usage in the target language, a process whose result has come to be known as 'dubbese'. A consequence for the reception in countries where local audiovisual fiction consumption is far lower than that of American imported productions is that this linguistic construct is preferred, even though it differs from common everyday speech. The iconography of film genres such as science-fiction, western or sword-and-sandal films, for instance, generates linguistic expectations in international audiences, who will more easily accept the sociolects assimilated through the continuous reception of American productions, even if the themes, locations, characters, etc., portrayed on screen may belong in origin to other cultures. And the non-normative language (e.g., calques, semantic loans) used in the preferred mode of linguistic transfer, whether translation for dubbing or subtitling, has in many cases diachronically evolved into the status of a canonized sociolect, not only accepted but also required by foreign audiences of American films. However, a remarkable step forward is taken when this typology of artificial linguistic constructs starts being used creatively by nationals of these target cultural contexts. In the case of Spain, the success of American sitcoms such as Friends in the 1990s led Spanish television scriptwriters to include in national productions lexical and syntactical indirect borrowings (Anglicisms not formally identifiable as such because they include elements from their own language) in order to target audiences of the former. However, this commercial strategy had already taken place decades earlier, when Spain became a favored location for the shooting of foreign films in the early 1960s. The international popularity of the then newly developed sub-genre known as the Spaghetti Western encouraged Spanish investors to produce their own movies, and local scriptwriters made use of the dubbese developed nationally since the advent of sound in film instead of using normative language. As a result, direct Anglicisms, as well as lexical and syntactical borrowings, made up the creative writing of these Spanish productions, which also became commercially successful. Interestingly enough, some of these films were even marketed in English-speaking countries as original westerns (some of the names of actors and directors were anglified for that purpose) dubbed into English. The analysis of these 'back translations' will also foreground some semantic distortions that arose in the process. In order to perform the research on these issues, a wide corpus of American films has been used, which chronologically ranges from Stagecoach (John Ford, 1939) to Django Unchained (Quentin Tarantino, 2012), together with a shorter corpus of Spanish films produced during the golden age of Spaghetti Westerns, from Una tumba para el sheriff (Mario Caiano; in English Lone and Angry Man, 'William Hawkins') to Tu fosa será la exacta, amigo (Juan Bosch, 1972; in English My Horse, My Gun, Your Widow, 'John Wood'). The methodology of analysis and the conclusions reached could be applied to other genres and other cultural contexts.

Keywords: dubbing, film genre, screen translation, sociolect

Procedia PDF Downloads 171
2 Linguistic Insights Improve Semantic Technology in Medical Research and Patient Self-Management Contexts

Authors: William Michael Short

Abstract:

'Semantic Web' technologies such as the Unified Medical Language System Metathesaurus, SNOMED-CT, and MeSH have been touted as transformational for the way users access online medical and health information, enabling both the automated analysis of natural-language data and the integration of heterogeneous health-related resources distributed across the Internet through the use of standardized terminologies that capture concepts and the relationships between concepts that are expressed differently across datasets. However, the approaches that have so far characterized 'semantic bioinformatics' have not yet fulfilled the promise of the Semantic Web for medical and health information retrieval applications. This paper argues, from the perspective of cognitive linguistics and cognitive anthropology, that four features of human meaning-making must be taken into account before the potential of semantic technologies can be realized for this domain. First, many semantic technologies operate exclusively at the level of the word. However, texts convey meanings in ways beyond lexical semantics. For example, transitivity patterns (distributions of active or passive voice) and modality patterns (configurations of modal constituents like may, might, could, would, should) convey experiential and epistemic meanings that are not captured by single words. Language users also naturally associate stretches of text with discrete meanings, so that whole sentences can be ascribed senses similar to the senses of words (so-called 'discourse topics'). Second, natural language processing systems tend to operate according to the principle of 'one token, one tag'. For instance, occurrences of the word sound must be disambiguated for part of speech: in context, is sound a noun, a verb, or an adjective? In syntactic analysis, deterministic annotation methods may be acceptable. But because natural language utterances are typically characterized by polyvalency and ambiguities of all kinds (including intentional ambiguities), such methods leave the meanings of texts highly impoverished. Third, ontologies tend to be disconnected from everyday language use and so struggle in cases where single concepts are captured through complex lexicalizations that involve profile shifts or other embodied representations. More problematically, concept graphs tend to capture 'expert' technical models rather than 'folk' models of knowledge, and so may not match users' common-sense intuitions, which organize concepts in prototypical structures rather than Aristotelian categories. Fourth, and finally, most ontologies do not recognize the pervasively figurative character of human language. Yet the widespread use of metaphor in the linguistic usage of both medical professionals and lay persons has been recognized since the time of Galen; in particular, metaphor is a well-documented linguistic tool for communicating experiences of pain. Because semantic medical knowledge-bases are designed to capture variation within technical vocabularies, rather than the kinds of conventionalized figurative semantics that practitioners as well as patients actually use in clinical description and diagnosis, they fail to capture this dimension of linguistic usage. The failure of semantic technologies in these respects degrades the efficiency and efficacy not only of medical research, where information retrieval inefficiencies can lead to direct financial costs to organizations, but also of care provision, especially in contexts of patients' self-management of complex medical conditions.
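
The 'one token, one tag' point can be made concrete with a short sketch; it assumes spaCy and its small English model are installed, and the example sentences are invented for illustration. A deterministic tagger must commit each occurrence of sound to exactly one part of speech, discarding any residual ambiguity.

    # Hedged illustration of 'one token, one tag'. Assumes:
    #   pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    sentences = (
        "The sound of the machine worried her.",  # intended sense: noun
        "Doctors sound the chest during exams.",  # intended sense: verb
        "The patient is of sound mind.",          # intended sense: adjective
    )
    for text in sentences:
        doc = nlp(text)
        token = next(t for t in doc if t.text.lower() == "sound")
        # Whatever the context, the tagger emits exactly one POS label.
        print(f"{text!r}: sound -> {token.pos_}")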

Keywords: ambiguity, bioinformatics, language, meaning, metaphor, ontology, semantic web, semantics

Procedia PDF Downloads 132
1 A Generative Pretrained Transformer-Based Question-Answer Chatbot and Phantom-Less Quantitative Computed Tomography Bone Mineral Density Measurement System for Osteoporosis

Authors: Mian Huang, Chi Ma, Junyu Lin, William Lu

Abstract:

Introduction: Bone health has attracted growing attention recently, and an intelligent question-and-answer (QA) chatbot for osteoporosis is helpful for science popularization. With Generative Pretrained Transformer (GPT) technology developing, we build an osteoporosis corpus dataset and then fine-tune LLaMA, a well-known open-source GPT foundation large language model (LLM), on our self-constructed osteoporosis corpus. Evaluated by clinical orthopaedic experts, our fine-tuned model outperforms vanilla LLaMA on the osteoporosis QA task in Chinese. Bone mineral density (BMD) measured by three-dimensional quantitative computed tomography (QCT) has in recent years come to be considered more accurate than dual-energy X-ray absorptiometry (DXA). We develop an automatic phantom-less QCT (PL-QCT) system that is more efficient for BMD measurement since it requires no external phantom for calibration. Combined with the LLM on osteoporosis, our PL-QCT provides efficient and accurate BMD measurement for our chatbot users. Material and Methods: We build an osteoporosis corpus containing about 30,000 Chinese literature items whose titles are related to osteoporosis. The whole process is automatic, including crawling literature in .pdf format, localizing text/figure/table regions with a layout-segmentation algorithm, and recognizing text with an OCR algorithm. We train our model by continuous pre-training with Low-Rank Adaptation (LoRA, rank=10) to adapt the LLaMA-7B model to the osteoporosis domain; the basic principle is to mask the next word in the text and make the model predict it, with the loss defined as the cross-entropy between the predicted and ground-truth word. The experiment ran on a single NVIDIA A800 GPU for 15 days. Our automatic PL-QCT BMD measurement adopts an AI-assisted region-of-interest (ROI) generation algorithm that localizes a vertebra-parallel cylinder in cancellous bone. Because no phantom is available for BMD calibration, we calculate ROI BMD from the CT-BMD of the patient's own muscle and fat. Results & Discussion: Clinical orthopaedic experts were invited to design 5 osteoporosis questions in Chinese to evaluate the performance of vanilla LLaMA and our fine-tuned model. Our model outperforms LLaMA on over 80% of these questions, understanding 'Expert Consensus on Osteoporosis', 'QCT for osteoporosis diagnosis', and 'Effect of age on osteoporosis'. Detailed results are shown in the appendix. Future work may train a larger LLM on the whole of orthopaedics with more high-quality domain data, or a multi-modal GPT that combines and understands X-ray images and medical text for orthopaedic computer-aided diagnosis. However, GPT models sometimes give unexpected outputs, such as repetitive text or seemingly plausible but wrong answers ('hallucinations'). Even when GPT gives correct answers, they cannot be considered valid clinical diagnoses and are no substitute for clinical doctors. The PL-QCT BMD system provided by Bone's QCT (Bone's Technology (Shenzhen) Limited) achieves mean absolute errors (MAE) of 0.1448 mg/cm² (spine) and 0.0002 mg/cm² (hip) and linear correlation coefficients of R² = 0.9970 (spine) and R² = 0.9991 (hip) (compared to QCT-Pro (Mindways)) on 155 patients in a three-center clinical trial in Guangzhou, China. Conclusion: This study builds a Chinese osteoporosis corpus and develops a fine-tuned, domain-adapted LLM as well as a PL-QCT BMD measurement system. Our fine-tuned GPT model shows better capability than the LLaMA model on most testing questions on osteoporosis. Combined with our PL-QCT BMD system, we look forward to providing science popularization and early screening for potential osteoporotic patients.
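
For concreteness, the sketch below shows one plausible way to set up LoRA (rank = 10) continual pre-training of LLaMA-7B with the Hugging Face peft library; the checkpoint name, alpha/dropout values, and target modules are assumptions for illustration, not the authors' exact configuration.

    # Hedged sketch of LoRA continual pre-training as described (rank = 10).
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, TaskType, get_peft_model

    base = "huggyllama/llama-7b"  # assumed LLaMA-7B checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    lora = LoraConfig(
        task_type=TaskType.CAUSAL_LM,  # next-word prediction, cross-entropy loss
        r=10,                          # rank reported in the abstract
        lora_alpha=32,                 # assumed
        lora_dropout=0.05,             # assumed
        target_modules=["q_proj", "v_proj"],  # assumed attention projections
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()
    # Training would then proceed with the standard causal-LM objective over the
    # osteoporosis corpus, e.g., via transformers.Trainer with
    # DataCollatorForLanguageModeling(tokenizer, mlm=False).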

Keywords: GPT, phantom-less QCT, large language model, osteoporosis

Procedia PDF Downloads 71