Search results for: Scott Hayward
35 An Emergence of Pinus taeda Needle Defoliation and Tree Mortality in Alabama, USA
Authors: Debit Datta, Jeffrey J. Coleman, Scott A. Enebak, Lori G. Eckhardt
Abstract:
Pinus taeda, commonly known as loblolly pine, is a crucial timber species native to the southeastern USA. An emerging problem has been encountered over the past few years, which has come to be known as loblolly pine needle defoliation (LPND), and it is threatening the ecological health of southeastern forests and the economic vitality of the region’s timber industry. Currently, more than 1,000 hectares of loblolly plantations in Alabama are affected by similar symptoms, creating concern among southeastern landowners and forest managers. However, it is still uncertain whether LPND results from one fungal pathogen or a combination of several. Therefore, the objectives of the study were to identify and characterize the fungi associated with LPND in the southeastern USA and to document the damage being done to loblolly pine as a result of repeated defoliation. Identification of fungi was confirmed using classical morphological methods (microscopic examination of the infected needles), conventional and species-specific priming (SSPP) PCR, and ITS sequencing. To date, 17 species of fungi, either cultured from pine needles or forming fruiting bodies on pine needles, have been identified based on morphology and genetic sequence data. Among them, the brown-spot pathogen Lecanosticta acicola has been frequently recovered from pine needles in both spring and summer. Moreover, ophiostomatoid fungi such as Leptographium procerum and L. terebrantis, which are associated with pine decline, have also been recovered from root samples of the infected stands. Trees were increasingly and repeatedly chlorotic and defoliated from 2019 to 2020. Based on morphological observations and molecular data, emerging loblolly pine needle defoliation is due in large part to the brown-spot pathogen L. acicola, followed by the pine decline pathogens L. procerum and L. terebrantis. Root pathogens are suspected to emerge later, and their cumulative effects contribute to the widespread mortality of the trees. It is likely that longer wet springs and warmer temperatures are favorable to disease development and may be important in the disease ecology of LPND. Therefore, the outbreak is expected to expand over a large geographical area under changing climatic conditions.
Keywords: brown-spot fungi, emerging disease, defoliation, loblolly pine
Procedia PDF Downloads 139
34 Assessing the Theoretical Suitability of Sentinel-2 and Worldview-3 Data for Hydrocarbon Mapping of Spill Events, Using Hydrocarbon Spectral Slope Model
Authors: K. Tunde Olagunju, C. Scott Allen, Freek Van Der Meer
Abstract:
Identification of hydrocarbon oil in remote sensing images is often the first step in monitoring oil during spill events. Most remote sensing methods adopt techniques for hydrocarbon identification to achieve detection in order to model an appropriate cleanup program. Identification on optical sensors allows not only detection but also characterization and quantification. Until recently, in optical remote sensing, quantification and characterization were only potentially possible using high-resolution laboratory and airborne imaging spectrometers (hyperspectral data). Unlike multispectral data, hyperspectral data are not freely available, as this data category is at present mainly obtained via airborne survey. In this research, two operational high-resolution multispectral satellites (WorldView-3 and Sentinel-2) are theoretically assessed for their suitability for hydrocarbon characterization, using the hydrocarbon spectral slope model (HYSS). This method utilizes the two most persistent hydrocarbon diagnostic/absorption features, at 1.73 µm and 2.30 µm, for hydrocarbon mapping on multispectral data. Spectral measurements of seven different hydrocarbon oils (crude and refined) taken on ten different substrates with a laboratory ASD FieldSpec spectrometer were convolved to Sentinel-2 and WorldView-3 resolution using each band's full width at half maximum (FWHM) parameter. The resulting hydrocarbon slope values obtained from the studied samples enable clear qualitative discrimination of most hydrocarbons, despite the presence of different background substrates, particularly on WorldView-3. Due to the close conformity of central wavelengths and narrow bandwidths to the key hydrocarbon bands used in HYSS, the qualitative analysis on WorldView-3 was statistically significant at the 95% confidence level (p ˂ 0.01) for all studied hydrocarbon oils except diesel. Using multivariate analysis of variance (MANOVA), the discriminating power of HYSS is statistically significant for most hydrocarbon-substrate combinations at Sentinel-2 and WorldView-3 FWHM, revealing the potential of these two operational multispectral sensors as rapid response tools for hydrocarbon mapping. One notable exception is highly transmissive hydrocarbons on Sentinel-2 data, due to the non-conformity of spectral bands with key hydrocarbon absorptions and the relatively coarse bandwidth (> 100 nm).
Keywords: hydrocarbon, oil spill, remote sensing, hyperspectral, multispectral, hydrocarbon-substrate combination, Sentinel-2, WorldView-3
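The band-convolution step described above can be sketched as follows, assuming Gaussian spectral response functions built from each band's center wavelength and FWHM; the band centers, widths, and the random placeholder spectrum are illustrative assumptions, not the actual WorldView-3 or Sentinel-2 band parameters:

```python
import numpy as np

def convolve_to_band(wavelengths_nm, reflectance, center_nm, fwhm_nm):
    """Resample a lab spectrum to one multispectral band by weighting it with
    a Gaussian spectral response function derived from the band's FWHM."""
    sigma = fwhm_nm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> Gaussian sigma
    srf = np.exp(-0.5 * ((wavelengths_nm - center_nm) / sigma) ** 2)
    return np.sum(srf * reflectance) / np.sum(srf)

def hydrocarbon_slope(r_shoulder, r_feature, w_shoulder_nm, w_feature_nm):
    """Spectral slope across a hydrocarbon absorption feature, used here as a
    simplified stand-in for the HYSS index."""
    return (r_feature - r_shoulder) / (w_feature_nm - w_shoulder_nm)

wl = np.arange(350.0, 2501.0)            # ASD FieldSpec range, 1 nm sampling
spectrum = np.random.rand(wl.size)       # placeholder for a measured oil spectrum
b_shoulder = convolve_to_band(wl, spectrum, 1660.0, 60.0)  # hypothetical band
b_feature = convolve_to_band(wl, spectrum, 1730.0, 40.0)   # hypothetical band
print(hydrocarbon_slope(b_shoulder, b_feature, 1660.0, 1730.0))
```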
Procedia PDF Downloads 216
33 The Silent Tuberculosis: A Case Study to Highlight Awareness of a Global Health Disease and Difficulties in Diagnosis
Authors: Susan Scott, Dina Hanna, Bassel Zebian, Gary Ruiz, Sreena Das
Abstract:
Although the number of cases of TB in England has fallen over the last 4 years, it remains an important public health burden, with 1 in 20 cases dying annually. The vast majority of cases present in non-UK-born individuals with social risk factors. We present a case of non-pulmonary TB in a healthy child born in the UK to professional parents. A healthy 10-year-old boy developed acute back pain during school PE. Over the next 5 months, he was seen by various health and allied professionals with worsening back pain and kyphosis. He became increasingly unsteady, and in the 10 days prior to admission to our hospital, he developed fevers. He was admitted to his local hospital for tonsillitis, where he suffered two falls on account of his leg weakness. A spinal X-ray revealed a pathological fracture and gibbus formation. He was transferred to our unit for further management. On arrival, the patient had lower motor neurone signs in his left leg. He underwent spinal fixation, laminectomy and decompression. Microbiology samples taken intra-operatively confirmed Mycobacterium tuberculosis. He had a positive Mantoux and T-SPOT, and treatment was commenced. There was no evidence of immune compromise. The patient was born in the UK, had a BCG scar, and his only travel history was a short holiday in the Philippines two years prior to presentation. The patient continues to have issues around neuropathic pain, mobility, pill burden and mild liver side effects from treatment. Discussion: There is a paucity of case reports on spinal TB in paediatrics, and diagnosis is often difficult due to the non-specific symptomatology. Although prognosis on treatment is good, a delayed diagnosis can have devastating consequences. This case highlights the continued need for a higher index of suspicion in a world with changing patterns of migration and increased global travel. Surgical intervention is limited to the most serious cases to minimise further neurological damage and improve prognosis. There remains the need for a multi-disciplinary approach to deal with the challenges of treatment and rehabilitation.
Keywords: tuberculosis, non-pulmonary TB, public health burden, diagnostic challenge
Procedia PDF Downloads 194
32 The Maps of Meaning (MoM) Consciousness Theory
Authors: Scott Andersen
Abstract:
Perhaps simply and rather unadornedly, consciousness is having multiple goals for action and the continuous adjudication of such goals to implement action, referred to as the Maps of Meaning (MoM) Consciousness Theory. The MoM theory triangulates through three parallel corollaries: action (behavior), mechanism (morphology/pathophysiology), and goals (teleology). (1) An organism’s consciousness contains fluid, nested goals. These goals are not intentionality but intersectionality, embodiment meeting the world, i.e., Darwinian inclusive fitness or randomization, then survival of the fittest. These goals form via gradual descent under inclusive fitness, the goals being the abstraction of a ‘match’ between the evolutionary environment and the organism. Human consciousness implements the brain efficiency hypothesis: genetics, epigenetics, and experience crystallize efficiencies, necessitating not what is best or objective but fitness, i.e., perceived efficiency based on one’s adaptive environment. These efficiencies are objectively arbitrary but determine the operation and level of one’s consciousness, termed extreme thrownness. Since inclusive fitness drives efficiencies in physiologic mechanism, morphology and behavior (action) and originates one’s goals, embodiment is necessarily entangled with human consciousness, as it is the intersection of mechanism or action (both necessitating embodiment) occurring in the world that determines fitness. Perception is the operant process of consciousness and is the consciousness’ de facto goal adjudication process. Goal operationalization is fundamentally efficiency-based via one’s unique neuronal mapping as a byproduct of genetics, epigenetics, and experience. Perception involves information intake and information discrimination, equally underpinned by efficiencies of inclusive fitness via extreme thrownness. Perception isn’t a ‘frame rate’ but Bayesian priors of efficiency based on one’s extreme thrownness. Consciousness, including human consciousness, is modular (i.e., a scalar level of richness, which builds up like building blocks) and dimensionalized (i.e., cognitive abilities become possibilities as emergent phenomena at various modularities, like stratified factors in factor analysis). The meta-dimensions of human consciousness seemingly include intelligence quotient, personality (five-factor model), richness of perception intake, and richness of perception discrimination, among other potentialities. Future consciousness research should utilize factor analysis to parse modularities and dimensions of human consciousness and animal models.
Keywords: consciousness, perception, prospection, embodiment
Procedia PDF Downloads 62
31 Population Pharmacokinetics of Levofloxacin and Moxifloxacin, and the Probability of Target Attainment in Ethiopian Patients with Multi-Drug Resistant Tuberculosis
Authors: Temesgen Sidamo, Prakruti S. Rao, Eleni Akllilu, Workineh Shibeshi, Yumi Park, Yong-Soon Cho, Jae-Gook Shin, Scott K. Heysell, Stellah G. Mpagama, Ephrem Engidawork
Abstract:
The fluoroquinolones (FQs) are used off-label for the treatment of multidrug-resistant tuberculosis (MDR-TB) and are under evaluation for shortening the duration of drug-susceptible TB in recently prioritized regimens. Within the class, levofloxacin (LFX) and moxifloxacin (MXF) play a substantial role in ensuring successful treatment outcomes. However, sub-therapeutic plasma concentrations of either LFX or MXF may drive unfavorable treatment outcomes. To the best of our knowledge, the pharmacokinetics of LFX and MXF in Ethiopian patients with MDR-TB have not yet been investigated. Therefore, the aim of this study was to develop a population pharmacokinetic (PopPK) model of LFX and MXF and assess the percent probability of target attainment (PTA), defined by the ratio of the area under the plasma concentration-time curve over 24 h (AUC0-24) to the in vitro minimum inhibitory concentration (MIC) (AUC0-24/MIC), in Ethiopian MDR-TB patients. Steady-state plasma was collected from 39 MDR-TB patients enrolled in the programmatic treatment course, and drug concentrations were determined using optimized liquid chromatography-tandem mass spectrometry. In addition, the in vitro MICs of the patients' pretreatment clinical isolates were determined. PopPK modeling and simulations were run at various doses, and PK parameters were estimated. The effect of covariates on the PK parameters and the PTA for maximum mycobacterial kill and resistance prevention were also investigated. LFX and MXF both fit one-compartment models with adjustments. The apparent volume of distribution (V) and clearance (CL) of LFX were influenced by serum creatinine (Scr), whereas the absorption constant (Ka) and V of MXF were influenced by Scr and BMI, respectively. The PTA for LFX maximal mycobacterial kill at the critical MIC of 0.5 mg/L was 29%, 62%, and 95% with the simulated 750 mg, 1000 mg, and 1500 mg doses, respectively, whereas the PTA for resistance prevention at 1500 mg was only 4.8%, with none of the lower doses achieving this target. At the critical MIC of 0.25 mg/L, there was no difference in the PTA (94.4%) for maximum bacterial kill among the simulated doses of MXF (600 mg, 800 mg, and 1000 mg), but the PTA for resistance prevention improved proportionately with dose. Standard LFX and MXF doses may not provide adequate drug exposure. LFX PopPK is more predictable for maximum mycobacterial kill, whereas MXF's resistance prevention target attainment increases with dose. Scr and BMI are likely to be important covariates in dose optimization or therapeutic drug monitoring (TDM) studies in Ethiopian patients.
Keywords: population PK, PTA, moxifloxacin, levofloxacin, MDR-TB patients, Ethiopia
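As an illustration of how PTA figures such as those above can be generated, the following Monte Carlo sketch uses a one-compartment oral model at steady state, where AUC0-24 = dose/CL; the typical clearance, between-subject variability, and AUC0-24/MIC target are assumed placeholder values, not the fitted Ethiopian PopPK estimates:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_pta(dose_mg, mic_mg_l, target_auc_mic, n=10_000):
    """Probability of target attainment for once-daily dosing: the fraction of
    simulated subjects whose AUC0-24/MIC meets the efficacy target."""
    cl_typical_l_h = 7.0   # hypothetical typical clearance (L/h)
    omega_cl = 0.35        # hypothetical between-subject SD of log(CL)
    cl = cl_typical_l_h * np.exp(rng.normal(0.0, omega_cl, size=n))
    auc_0_24 = dose_mg / cl   # mg*h/L, assuming complete absorption (F = 1)
    return np.mean(auc_0_24 / mic_mg_l >= target_auc_mic)

for dose in (750, 1000, 1500):   # simulated levofloxacin doses from the study
    print(dose, simulate_pta(dose, mic_mg_l=0.5, target_auc_mic=146))
```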
Procedia PDF Downloads 120
30 Internet of Things in Higher Education: Implications for Students with Disabilities
Authors: Scott Hollier, Ruchi Permvattana
Abstract:
The purpose of this abstract is to share the findings of a recently completed disability-related Internet of Things (IoT) project undertaken at Curtin University in Australia. The project focused on identifying how IoT could support people with disabilities with their educational outcomes. To achieve this, the research consisted of an analysis of current literature and interviews conducted with students with vision, hearing, mobility and print disabilities. While the research acknowledged that the ability to collect data with IoT is now a fairly common occurrence, its benefits and applicability still need to be grounded back into real-world applications. Furthermore, it is important to consider if there are sections of our society that may benefit from these developments and if those benefits are being fully realised in a rush by large companies to achieve IoT dominance for their particular product or digital ecosystem. In this context, it is important to consider a group which, to our knowledge, has had little specific mainstream focus in the IoT area – people with disabilities. For people with disabilities, the ability for every device to interact with us and with each other has the potential to yield significant benefits. In terms of engagement, the arrival of smart appliances is already offering benefits such as the ability for a person in a wheelchair to give verbal commands to an IoT-enabled washing machine if the buttons are out of reach, or for a blind person to receive a notification on a smartphone when dinner has finished cooking in an IoT-enabled microwave. With clear benefits of IoT being identified for people with disabilities, it is important to also identify what implications there are for education. With higher education being a critical pathway for many people with disabilities in finding employment, the question as to whether such technologies can support the educational outcomes of people with disabilities was what ultimately led to this research project. This research will discuss several significant findings that have emerged from the research in relation to how consumer-based IoT can be used in the classroom to support the learning needs of students with disabilities, how industrial-based IoT sensors and actuators can be used to monitor and improve the real-time learning outcomes for the delivery of lectures and student engagement, and a proposed method for students to gain more control over their learning environment. The findings shared in this presentation are likely to have significant implications for the use of IoT in the classroom through the implementation of affordable and accessible IoT solutions and will provide guidance as to how policies can be developed as the implications of both benefits and risks continue to be considered by educators.
Keywords: disability, higher education, internet of things, students
Procedia PDF Downloads 119
29 Comics as an Intermediary for Media Literacy Education
Authors: Ryan C. Zlomek
Abstract:
The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time, researchers had begun to implement comics into daily lesson plans and, in some instances, had started the development process for comics-supported curriculum. In the mid-1950s, this type of research was cut short due to the work of psychiatrist Fredric Wertham, whose research seemingly discovered a correlation between comic readership and juvenile delinquency. Since Wertham’s allegations, the comics medium has had a hard time finding its way back to education. Now, over fifty years later, the definition of literacy is in mid-transition as the world has become more visually oriented and students require the ability to interpret images as often as words. Through this transition, comics have found a place in the field of literacy education research as the shift focuses from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource in bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to determine the exact medium that is being examined. The different conventions that the medium utilizes are also discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw on real-world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored parallel to the core principles of media literacy education. Each principle is explained, and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines comics use in his computer science and technology classroom. He lays out different theories he utilizes from Scott McCloud’s text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics have positively impacted classrooms around the United States. It is stated that integrating comics into the classroom will not solve all issues related to literacy education but, rather, that comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.
Keywords: comics, graphic novels, mass communication, media literacy, metacognition
Procedia PDF Downloads 300
28 Best Practice for Post-Operative Surgical Site Infection Prevention
Authors: Scott Cavinder
Abstract:
Surgical site infections (SSI) are a known complication of any surgical procedure and are one of the most common nosocomial infections. Globally, an estimated 300 million surgical procedures take place annually, with an estimated 11 of every 100 surgical patients developing an infection within 30 days after surgery. The specific purpose of the project is to address the PICOT (Problem, Intervention, Comparison, Outcome, Time) question: In patients who have undergone cardiothoracic or vascular surgery (P), does implementation of a post-operative care bundle based on current EBP (I), as compared to current clinical agency practice standards (C), result in a decrease of SSI (O) over a 12-week period (T)? Synthesis of Supporting Evidence: A literature search of five databases, including citation chasing, was performed, which yielded fourteen pieces of evidence ranging from high to good quality. Four common themes were identified for the prevention of SSIs: use and removal of surgical dressings; use of topical antibiotics and antiseptics; implementation of evidence-based care bundles; and implementation of surveillance through auditing and feedback. The Iowa Model was selected as the framework to help guide this project, as it is a multiphase change process which encourages clinicians to recognize opportunities for improvement in healthcare practice. Practice/Implementation: The process for this project will include recruiting postsurgical participants who have undergone cardiovascular or thoracic surgery prior to discharge at a Northwest Indiana hospital. The patients will receive education, verbal instruction, and return demonstration. The patients will be followed for 12 weeks, with wounds assessed utilizing the National Healthcare Safety Network/Centers for Disease Control (NHSN/CDC) assessment tool and compared to the SSI rate of 2021. Key stakeholders will include two cardiovascular surgeons, four physician assistants, two advanced practice nurses, a medical assistant, and patients. Method of Evaluation: Chi-square analysis will be utilized to establish statistical significance and similarities between the two groups. Main Results/Outcomes: The proposed outcome is the prevention of SSIs in post-operative cardiothoracic and vascular patients. Implication/Recommendation(s): Implementation of standardized post-operative care bundles in the prevention of SSI in cardiovascular and thoracic surgical patients.
Keywords: cardiovascular, evidence-based practice, infection, post-operative, prevention, thoracic, surgery
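A minimal sketch of the planned Chi-square evaluation, assuming hypothetical SSI counts for the 2021 baseline and the care-bundle group (the counts are placeholders for illustration only):

```python
from scipy.stats import chi2_contingency

# Rows: groups; columns: [SSI, no SSI]. Counts are hypothetical.
table = [[9, 141],   # 2021 baseline: 9 SSIs among 150 patients
         [3, 147]]   # care bundle:   3 SSIs among 150 patients
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```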
Procedia PDF Downloads 83
27 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space
Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson
Abstract:
Glioblastoma multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid-attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds with vasogenic edema; however, there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly indicate tumor cells, they do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity and resulting MRI features of GBMs complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), we explored a multi-compartmental MRI signal equation that takes into account tissue compartments and their associated volumes, with input coming from a mathematical model of glioma growth that incorporates edema formation. The reasonableness of two possible extracellular space schema was evaluated by varying the T2 of the edema compartment and calculating the possible resulting T2s in tumor and peripheral edema. In the mathematical model, gliomas were comprised of vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model to generate simulated T2/FLAIR MR images. Individual compartments’ T2 values in the signal equation were either taken from literature or estimated, and the T2 for edema specifically was varied over a wide range (200 ms – 9200 ms). T2 maps were calculated from simulated images. T2 values based on simulated images were evaluated for regions of interest (ROIs) in normal-appearing white matter, tumor, and peripheral edema. The ROI T2 values were compared to T2 values reported in literature. The expanding extracellular space scheme had T2 values similar to the literature-calculated values. The static extracellular space scheme had much lower T2 values, and no matter what T2 was associated with edema, the intensities did not come close to literature values. Expanding the extracellular space is necessary to achieve simulated edema intensities commensurate with acquired MRIs.
Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling
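A minimal sketch of the multi-compartmental signal idea, assuming a volume-weighted sum of mono-exponential T2 decays; the compartment volumes and T2 values below are illustrative, not the study's inputs:

```python
import numpy as np

def t2w_signal(te_ms, volumes, t2s_ms):
    """Multi-compartment T2-weighted signal: S(TE) = sum_i v_i * exp(-TE/T2_i),
    with the fractional volumes v_i summing to one."""
    volumes = np.asarray(volumes, dtype=float)
    t2s_ms = np.asarray(t2s_ms, dtype=float)
    return float(np.sum(volumes * np.exp(-te_ms / t2s_ms)))

v = [0.3, 0.4, 0.3]        # hypothetical tumor / normal tissue / edema fractions
t2 = [90.0, 80.0, 1500.0]  # T2 (ms); the edema T2 was swept 200-9200 ms in the study
for te in (80.0, 120.0):   # representative echo times
    print(te, t2w_signal(te, v, t2))
```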
Procedia PDF Downloads 235
26 Use of Satellite Altimetry and Moderate Resolution Imaging Technology of Flood Extent to Support Seasonal Outlooks of Nuisance Flood Risk along United States Coastlines and Managed Areas
Authors: Varis Ransibrahmanakul, Doug Pirhalla, Scott Sheridan, Cameron Lee
Abstract:
U.S. coastal areas and ecosystems are facing multiple sea level rise threats and effects: heavy rain events, cyclones, and changing wind and weather patterns all influence coastal flooding, sedimentation, and erosion along critical barrier islands and can strongly impact habitat resiliency and water quality in protected habitats. These impacts are increasing over time and have accelerated the need for new tracking techniques, models, and tools of flood risk to support enhanced preparedness for coastal management and mitigation. To address this issue, NOAA National Ocean Service (NOS) evaluated new metrics from AVISO/Copernicus satellite altimetry and MODIS IR flood extents to isolate nodes of atmospheric variability indicative of elevated sea level and nuisance flood events. Using de-trended time series of cross-shelf sea surface heights (SSH), we identified specific Self-Organizing Map (SOM) nodes and transitions having the strongest regional association with oceanic spatial patterns (e.g., heightened downwelling-favorable wind stress and enhanced southward coastal transport) indicative of elevated coastal sea levels. Results show the impacts of the inverted barometer effect as well as the effects of surface wind forcing and Ekman-induced transport along broad expanses of the U.S. eastern coastline. Higher sea levels and corresponding localized flooding are associated with either pattern indicative of enhanced on-shore flow, deepening cyclones, or local-scale winds, generally coupled with increased local to regional precipitation. These findings will support an integration of satellite products and will inform seasonal outlook model development supported through NOAA's Climate Program Office and the NOS Center for Operational Oceanographic Products and Services (CO-OPS). Overall results will prioritize ecological areas and coastal lab facilities at risk based on the number of nuisance floods projected and will inform coastal management of flood risk around low-lying areas subject to bank erosion.
Keywords: AVISO satellite altimetry SSHA, MODIS IR flood map, nuisance flood, remote sensing of flood
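A minimal sketch of the SOM classification step, assuming de-trended SSH anomalies arranged as (days x stations); MiniSom and the 3x4 grid are assumptions standing in for whatever SOM implementation and topology were actually used:

```python
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
ssha = rng.normal(size=(1000, 20))  # placeholder de-trended SSH anomalies

som = MiniSom(3, 4, input_len=20, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(ssha)
som.train_random(ssha, num_iteration=5000)

# Map each day to its best-matching node; node occupancy and transitions can
# then be cross-referenced against observed nuisance-flood days.
nodes = np.array([som.winner(day) for day in ssha])
print(np.unique(nodes, axis=0, return_counts=True))
```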
Procedia PDF Downloads 145
25 The Use of Social Media in a UK School of Pharmacy to Increase Student Engagement and Sense of Belonging
Authors: Samantha J. Hall, Luke Taylor, Kenneth I. Cumming, Jakki Bardsley, Scott S. P. Wildman
Abstract:
Medway School of Pharmacy – a joint collaboration between the University of Kent and the University of Greenwich – is a large school of pharmacy in the United Kingdom. The school primarily delivers the accredited Master of Pharmacy (MPharm) degree programme. Reportedly, some students may feel isolated from the larger student body that extends across four separate campuses, where a diverse range of academic subjects is delivered. In addition, student engagement has been noted as being limited in some areas, as evidenced in some cases by poor attendance at some lectures. In January 2015, the University of Kent launched a new initiative dedicated to Equality, Diversity and Inclusivity (EDI). As part of this project, Medway School of Pharmacy employed ‘Student Success Project Officers’ to analyse past and present school data. As a result, initiatives have been implemented to i) negate disparities in attainment and ii) increase engagement, particularly for Black, Asian and Minority Ethnic (BAME) students, who make up more than 80% of the pharmacy student cohort. Social media platforms are prevalent, with global statistics suggesting that they are most commonly used by females between the ages of 16-34. Student focus groups held throughout the academic year brought to light the school’s need to use social media much more actively. Prior to the EDI initiative, social media usage at Medway School of Pharmacy was scarce. Platforms including Facebook, Twitter, Instagram, YouTube, The Student Room and university blogs were either introduced or rejuvenated. This action was taken with the primary aim of increasing student engagement. By using a number of varied social media platforms, the university is able to capture a large range of students by appealing to different interests. Social media is being used to disseminate important information, promote equality and diversity, recognise and celebrate student success, and also to allow students to explore student life outside of Medway School of Pharmacy. Early data suggest an increase in lecture attendance, as well as greater evidence of student engagement highlighted by recent focus group discussions. In addition, students have communicated that active social media accounts were imperative when choosing universities for 2015/16, as they allow students to understand more about the university and its community prior to beginning their studies. By having a lively presence on social media, the university can use a multi-faceted approach to succeed in early engagement, as well as fostering the long-term engagement of continuing students.
Keywords: engagement, social media, pharmacy, community
Procedia PDF Downloads 327
24 The Diagnostic Utility and Sensitivity of the Xpert® MTB/RIF Assay in Diagnosing Mycobacterium tuberculosis in Bone Marrow Aspirate Specimens
Authors: Nadhiya N. Subramony, Jenifer Vaughan, Lesley E. Scott
Abstract:
In South Africa, the World Health Organisation estimated 454,000 new cases of Mycobacterium tuberculosis (M.tb) infection (MTB) in 2015. Disseminated tuberculosis arises from haematogenous spread and seeding of the bacilli in extrapulmonary sites. The gold standard for the detection of MTB in bone marrow is TB culture, which has an average turnaround time of 6 weeks. Histological examination of trephine biopsies to diagnose MTB also involves a time delay, owing mainly to the 5-7 day processing period prior to microscopic examination. Adding to the diagnostic delay is the non-specific nature of granulomatous inflammation, which is the hallmark of MTB involvement of the bone marrow. A Ziehl-Neelsen stain (which highlights acid-fast bacilli) is therefore mandatory to confirm the diagnosis but can take up to 3 days for processing and evaluation. Owing to this delay in diagnosis, many patients are lost to follow-up or remain untreated whilst results are awaited, thus encouraging the spread of undiagnosed TB. The Xpert® MTB/RIF (Cepheid, Sunnyvale, CA) is the molecular test used in the South African national TB programme as the initial diagnostic test for pulmonary TB. This study investigates the optimisation and performance of the Xpert® MTB/RIF on bone marrow aspirate (BMA) specimens, a first since the introduction of the assay in the diagnosis of extrapulmonary TB. BMA received for immunophenotypic analysis, as part of the investigation into disseminated MTB or in the evaluation of cytopenias in immunocompromised patients, were used. Processing BMA on the Xpert® MTB/RIF was optimised to ensure bone marrow in EDTA and heparin did not inhibit the PCR reaction. Inactivated M.tb was spiked into the clinical bone marrow specimens and into distilled water (as a control). A volume of 500 µl and an incubation time of 15 minutes with sample reagent were investigated as the processing protocol. A total of 135 BMA specimens had sufficient residual volume for Xpert® MTB/RIF testing; however, 22 specimens (16.3%) were not included in the final statistical analysis, as an adequate trephine biopsy and/or TB culture was not available. Xpert® MTB/RIF testing was not affected by BMA material in the presence of heparin or EDTA, but the overall detection of MTB in BMA was low compared to histology and culture. Sensitivity of the Xpert® MTB/RIF compared to both histology and culture was 8.7% (95% confidence interval (CI): 1.07-28.04%), and sensitivity compared to histology only was 11.1% (95% CI: 1.38-34.7%). Specificity of the Xpert® MTB/RIF was 98.9% (95% CI: 93.9-99.7%). Although the Xpert® MTB/RIF generates a faster result than histology and TB culture and is less expensive than culture and drug susceptibility testing, the low sensitivity of the Xpert® MTB/RIF precludes its use for the diagnosis of MTB in bone marrow aspirate specimens and warrants alternative/additional testing to optimise the assay.
Keywords: bone marrow aspirate, extrapulmonary TB, low sensitivity, Xpert® MTB/RIF
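The sensitivity figures and exact (Clopper-Pearson) confidence intervals above can be approximately reproduced from the implied counts, as in this sketch (the counts of 2/23 and 2/18 are inferred from the reported percentages, not stated in the abstract):

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial (Clopper-Pearson) confidence interval for k/n."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

print("Sensitivity vs. histology + culture:", 2 / 23, clopper_pearson(2, 23))
print("Sensitivity vs. histology only:     ", 2 / 18, clopper_pearson(2, 18))
```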
Procedia PDF Downloads 172
23 Exploring Faculty Attitudes about Grades and Alternative Approaches to Grading: Pilot Study
Authors: Scott Snyder
Abstract:
Grading approaches in higher education have not changed meaningfully in over 100 years. While there is variation in the types of grades assigned across countries, most use approaches based on simple ordinal scales (e.g., letter grades). While grades are generally viewed as an indication of a student's performance, challenges arise regarding the clarity, validity, and reliability of letter grades. Research about grading in higher education has primarily focused on grade inflation, student attitudes toward grading, impacts of grades, and benefits of plus-minus letter grade systems. Little research is available about alternative approaches to grading, varying approaches used by faculty within and across colleges, and faculty attitudes toward grades and alternative approaches to grading. To begin to address these gaps, a survey was conducted of faculty in a sample of departments at three diverse colleges in a southeastern state in the US. The survey focused on faculty experiences with and attitudes toward grading, the degree to which faculty innovate in teaching and grading practices, and faculty interest in alternatives to the point system approach to grading. Responses were received from 104 instructors (21% response rate). The majority reported that teaching accounted for 50% or more of their academic duties. Almost all respondents (92%) reported using point and percentage systems for their grading. While all respondents agreed that grades should reflect the degree to which objectives were mastered, half indicated that grades should also reflect effort or improvement. Over 60% felt that grades should be predictive of success in subsequent courses or real-life applications. Most respondents disagreed that grades should compare students to other students. About 42% worried about their own grade inflation and grade inflation in their college. Only 17% disagreed that grades mean different things based on the instructor, while 75% thought it would be good if there was agreement. Less than 50% of respondents felt that grades were directly useful for identifying students who should or should not continue, identifying strengths and weaknesses, predicting which students will be most successful, or contributing to program monitoring of student progress. Instructors were less willing to modify assessment than they were to modify instruction and curriculum. Most respondents (76%) were interested in learning about alternative approaches to grading (e.g., specifications grading). The factors most associated with willingness to adopt a new grading approach were clarity to students and simplicity of adoption. Follow-up studies are underway to investigate implementations of alternative grading approaches, expand the study to universities and departments not involved in the initial study, examine student attitudes about alternative approaches, and refine the survey's measure of attitude toward adoption of alternative grading practices. Workshops about the challenges of using percentage and point systems for determining grades and workshops regarding alternative approaches to grading are being offered.
Keywords: alternative approaches to grading, grades, higher education, letter grades
Procedia PDF Downloads 96
22 Time to Retire Rubber Crumb: How Soft Fall Playgrounds are Threatening Australia’s Great Barrier Reef
Authors: Michelle Blewitt, Scott P. Wilson, Heidi Tait, Juniper Riordan
Abstract:
Rubber crumb is a physical and chemical pollutant of concern for the environment and human health, warranting immediate investigation into its pathways to the environment and potential impacts. This emerging microplastic is created by shredding end-of-life tyres into ‘rubber crumb’ particles of 1-5 mm, used on synthetic turf fields and soft-fall playgrounds as a solution to intensifying tyre waste worldwide. Despite rubber crumb having known toxic and carcinogenic properties, studies into the transportation pathways and movement patterns of rubber crumb from these surfaces remain in their infancy. To address this deficit, AUSMAP, the Australian Microplastic Assessment Project, in partnership with the Tangaroa Blue Foundation, conducted a study to quantify crumb loss from soft-fall surfaces. To the best of our knowledge, this is the first study of its kind; funding for the audits was provided by the Australian Government’s Reef Trust. Sampling occurred at 12 soft-fall playgrounds within the Great Barrier Reef Catchment Area on Australia’s north-east coast, in close proximity to the United Nations World Heritage-listed Reef. Samples were collected over a 12-month period using randomized sediment cores at 0, 2 and 4 meters from the playground edge along a 20-meter transect. This approach served two objectives pertaining to particle movement: to establish that crumb loss is occurring and that it decreases with distance from the soft-fall surface. Rubber crumb abundance was expressed as a total value and used to determine an expected average of rubber crumb loss per m2. An analysis of variance (ANOVA) was used to compare the differences in crumb abundance at each interval from the playground. Site characteristics, including surrounding sediment type, playground age, degree of ultraviolet exposure and amount of foot traffic, were additionally recorded for the comparison. Preliminary findings indicate that crumb is being lost at considerable rates from soft-fall playgrounds in the region, emphasizing an urgent need to further examine it as a potential source of aquatic pollution, soil contamination and threat to individuals who regularly utilize these surfaces. Additional implications for the future of rubber crumb as a fit-for-purpose recycling initiative will be discussed with regard to industry, governments and the economic burden of surface maintenance and/or replacement.
Keywords: microplastics, toxic rubber crumb, litter pathways, marine environment
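A minimal sketch of the distance comparison, assuming hypothetical crumb counts per sediment core at the three sampling intervals:

```python
from scipy.stats import f_oneway

# Hypothetical rubber crumb counts (particles per core) at each distance.
at_0m = [42, 37, 51, 45, 39]
at_2m = [21, 18, 25, 23, 19]
at_4m = [9, 12, 7, 11, 10]
f_stat, p_value = f_oneway(at_0m, at_2m, at_4m)  # one-way ANOVA across distances
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```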
Procedia PDF Downloads 91
21 Regional Analysis of Freight Movement by Vehicle Classification
Authors: Katerina Koliou, Scott Parr, Evangelos Kaisar
Abstract:
The surface transportation of freight is particularly vulnerable to storm and hurricane disasters, while at the same time it is the primary transportation mode for delivering medical supplies, fuel, water, and other essential goods. To better plan for commercial vehicles during an evacuation, it is necessary to understand how these vehicles travel during an evacuation and determine whether this travel is different from that of the general public; while the literature on auto-based evacuations is extensive, the consideration of freight travel is lacking. The goal of this research was to investigate the movement of vehicles by classification, with an emphasis on freight, during two major evacuation events: hurricanes Irma (2017) and Michael (2018). The research used Florida's statewide continuous-count station traffic volumes, which were compared between years to identify locations where traffic was moving differently during the evacuation and days on which traffic was significantly different between years. The methodology of the research was divided into three phases: data collection and management, spatial analysis, and temporal comparisons. Data collection and management obtained continuous-count station data from the state of Florida for both 2017 and 2018 by vehicle classification; the data were then processed into a manageable format. The second phase used geographic information systems (GIS) to display where and when traffic varied across the state. The third and final phase was a quantitative investigation into which vehicle classifications were statistically different and on which dates statewide. This phase used a two-sample, two-tailed t-test to compare sensor volume by classification on similar days between years. Overall, increases in freight movement between years prevented a more precise paired analysis. This research sought to identify where and when different classes of vehicles were traveling leading up to hurricane landfall and during post-storm reentry. Among the more significant findings, the results showed that commercial-use vehicles may have underutilized rest areas during the evacuation, or perhaps these rest areas were closed. This may suggest that truckers are driving longer distances and possibly longer hours before hurricanes. Another significant finding was that changes in traffic patterns for commercial-use vehicles occurred earlier and lasted longer than changes for personal-use vehicles, suggesting that commercial vehicles evacuate in a fashion different from personal-use vehicles. This paper may serve as the foundation for future research into commercial travel during evacuations and explore additional factors that may influence freight movements during evacuations.
Keywords: evacuation, freight, travel time
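A minimal sketch of the phase-three comparison, assuming hypothetical daily truck volumes at one continuous-count station on comparable days in the two study years:

```python
from scipy.stats import ttest_ind

# Hypothetical daily freight volumes on matched days (same weekday/week).
volumes_2017 = [812, 840, 795, 905, 1130, 1210, 1055]
volumes_2018 = [980, 1010, 955, 1100, 1420, 1510, 1305]
t_stat, p_value = ttest_ind(volumes_2017, volumes_2018)  # two-sample, two-tailed
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```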
Procedia PDF Downloads 70
20 Ethical, Legal and Societal Aspects of Unmanned Aircraft in Defence
Authors: Henning Lahmann, Benjamyn I. Scott, Bart Custers
Abstract:
Suboptimal adoption of AI in defence organisations carries risks for the protection of the freedom, safety, and security of society. Despite the vast opportunities that defence AI technology presents, there are also a variety of ethical, legal, and societal concerns. To ensure the successful use of AI technology by the military, ethical, legal, and societal aspects (ELSA) need to be considered, and the concerns they raise continuously addressed at all levels. This includes ELSA considerations during the design, manufacturing and maintenance of AI-based systems, as well as their utilisation via appropriate military doctrine and training. This raises the question of how defence organisations can remain strategically competitive and at the edge of military innovation while respecting the values of their citizens. This paper will explain the set-up and share preliminary results of a 4-year research project commissioned by the National Research Council in the Netherlands on the ethical, legal, and societal aspects of AI in defence. The project plans to develop a future-proof, independent, and consultative ecosystem for the responsible use of AI in the defence domain. In order to achieve this, the lab shall devise a context-dependent methodology that focuses on the ‘analysis’, ‘design’ and ‘evaluation’ of ELSA of AI-based applications within the military context, including, inter alia, unmanned aircraft. The Lab also recognises and complements existing methods regarding human-machine teaming, explainable algorithms, and value-sensitive design. Such methods will be modified for the military context and applied to pertinent case studies. These case studies include, among others, the application of autonomous (including semi-autonomous) robots and AI-based methods against cognitive warfare. As the perception of the application of AI in the military context, by both society and defence personnel, is important, the Lab will study how these perceptions evolve and vary in different contexts. Furthermore, the Lab will monitor developments in the global technological, military and societal spheres, as these may influence people’s perceptions. Although the emphasis of the research project is on different forms of AI in defence, it focuses on several case studies. One of these case studies is on unmanned aircraft, which will also be the focus of this paper. Hence, ethical, legal, and societal aspects of unmanned aircraft in the defence domain will be discussed in detail, including but not limited to privacy issues. Typical other issues concern security (for people, objects, data or other aircraft), privacy (sensitive data, hindrance, annoyance, data collection, function creep), chilling effects, PlayStation mentality, and PTSD.
Keywords: autonomous weapon systems, unmanned aircraft, human-machine teaming, meaningful human control, value-sensitive design
Procedia PDF Downloads 93
19 Queer Anti-Urbanism: An Exploration of Queer Space Through Design
Authors: William Creighton, Jan Smitheram
Abstract:
Queer discourse has been tied to a middle-class, urban-centric, white approach to the discussion of queerness. In doing so, the multilayeredness of queer existence has been washed away in favour of palatable queer occupation. This paper uses design to explore a queer anti-urbanist approach to facilitate a more egalitarian architectural occupancy. Scott Herring’s work on queer anti-urbanism is key to this approach. Herring redeploys anti-urbanism from its historical understanding of open hostility, rejection and desire to destroy the city towards a mode of queer critique that counters normative ideals of homonormative, metronormative gay lifestyles. He questions how queer identity has been closed down into a more diminutive frame where those who do not fit within this frame are subjected to persecution or silenced through their absence. We extend these ideas through design to ask how a queer anti-urbanist approach facilitates a more egalitarian architectural occupancy. Following a ‘design as research’ methodology – a non-linear, iterative process of questioning, designing and reflecting – the work establishes itself through three projects, each increasing in scale and complexity, with the design outputs acting as a vehicle to ask how we might live otherwise in architectural space. Each of the three scales tackled a different body relationship: the project began by exploring the relations between body and body, body and known others, and body and unknown others. Moving through increasing scales was not to privilege the objective, the public and the large scale; instead, ‘intra-scaling’ acts as a tool to rethink how scale reproduces normative ideas of the identity of space. There was a queering of scale. Through this approach, the results were an installation that brings two people together to co-author space, where the installation distorts the sensory experience and forces a more intimate and interconnected experience challenging our socialized proxemics: knees might touch. To queer the home, the installation was used as a drawing device, a tool to study and challenge spatial perception and drawing convention, and as a way to process practical information about the site and existing house – the device became a tool to embrace the spontaneous. The final design proposal operates as a multi-scalar boundary-crossing through ‘private’ and ‘public’ to support kinship through communal labour, queer relationality and mooring. The resulting design works to set adrift bodies in a sea of sensations through a mix of pleasure programmes. To conclude, through three design proposals, this design research creates a relationship between queer anti-urbanism and design. It asserts that queering the design process and outcome allows a more inclusive way to consider place, space and belonging. The projects lend themselves to a queer relationality and interdependence by making spaces that support the unsettled and out-of-place – but is it queer enough?
Keywords: queer, queer anti-urbanism, design as research, design
Procedia PDF Downloads 178
18 Design Development and Qualification of a Magnetically Levitated Blower for C0₂ Scrubbing in Manned Space Missions
Authors: Larry Hawkins, Scott K. Sakakura, Michael J. Salopek
Abstract:
The Marshall Space Flight Center is designing and building a next-generation CO₂ removal system, the Four Bed Carbon Dioxide Scrubber (4BCO₂), which will use the International Space Station (ISS) as a testbed. The current ISS CO₂ removal system has faced many challenges in both performance and reliability. Given that CO₂ removal is an integral Environmental Control and Life Support System (ECLSS) subsystem, the 4BCO₂ scrubber has been designed to eliminate the shortfalls identified in the current ISS system. One of the key required upgrades was to improve the performance and reliability of the blower that provides the airflow through the CO₂ sorbent beds. A magnetically levitated blower, capable of higher airflow and pressure than the previous system, was developed to meet this need. The design and qualification testing of this next-generation blower are described here. The new blower features a high-efficiency permanent magnet motor, a five-axis active magnetic bearing system, and a compact controller containing both a variable speed drive and a magnetic bearing controller. The blower uses a centrifugal impeller to pull air from the inlet port and drive it through an annular space around the motor and magnetic bearing components to the exhaust port. Technical challenges of the blower and controller development include survival of the blower system under launch random vibration loads, operation in microgravity, packaging under strict size and weight requirements, and successful operation during 4BCO₂ operational changeovers. An ANSYS structural dynamic model of the controller was used to predict the response to the NASA-defined random vibration spectrum and to drive minor design changes. The simulation results are compared to measurements from qualification testing of the controller on a vibration table. Predicted blower performance is compared to flow loop testing measurements. Dynamic response of the system to valve changeovers is presented and discussed using high-bandwidth measurements from dynamic pressure probes, magnetic bearing position sensors, and actuator coil currents. The results presented in the paper show that the blower controller will survive launch vibration levels, the blower flow meets the requirements, and the magnetic bearings have adequate load capacity and control bandwidth to maintain the desired rotor position during the valve changeover transients.
Keywords: blower, carbon dioxide removal, environmental control and life support system, magnetic bearing, permanent magnet motor, validation testing, vibration
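A standard first-order check consistent with the random-vibration analysis described above is Miles' equation for the RMS response of a single-degree-of-freedom mode to a flat base-input PSD; the natural frequency, amplification Q, and input level below are assumed placeholders, not the controller's actual values:

```python
import math

def miles_grms(f_n_hz, q_factor, input_psd_g2_hz):
    """Miles' equation: Grms = sqrt((pi/2) * fn * Q * PSD(fn)) for a
    single-DOF system under broadband random base excitation."""
    return math.sqrt((math.pi / 2.0) * f_n_hz * q_factor * input_psd_g2_hz)

# Hypothetical controller mode: 180 Hz, Q = 10, 0.04 g^2/Hz input at fn.
print(f"{miles_grms(180.0, 10.0, 0.04):.1f} Grms")
```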
Procedia PDF Downloads 136
17 Mega Sporting Events and Branding: Marketing Implications for the Host Country’s Image
Authors: Scott Wysong
Abstract:
Qatar will spend billions of dollars to host the 2022 World Cup. While football fans around the globe get excited to cheer on their favorite team every four years, critics debate the merits of a country hosting such an expensive and large-scale event. That is, host countries spend billions of dollars on stadiums and infrastructure to attract these mega sporting events in the hope of equitable returns in economic impact and job creation. Yet, in many cases, the host countries are left in debt with decaying venues. There are, however, benefits beyond the economic impact of hosting mega-events. For example, citizens are often proud of their city or country hosting these famous events. Yet, often overlooked in the literature is the proposition that serving as the host for a mega-event may enhance the country’s brand image, not only as a tourist destination but for the products made in that country of origin. This research aims to explore this phenomenon by taking an exploratory look at consumer perceptions of three host countries of a mega-event in sports. In 2014, U.S., Chinese and Finnish consumer attitudes toward Brazil and its products were measured before and after the World Cup via surveys (n=89). An analysis of variance (ANOVA) revealed that there were no statistically significant differences in the pre- and post-World Cup perceptions of Brazil’s brand personality or country-of-origin image. After the World Cup in 2018, qualitative interviews were held with U.S. sports fans (n=17) in an effort to further explore consumer perceptions of products made in the host country: Russia. A consistent theme of distrust and corruption associated with Russian products emerged despite the country’s hosting of this prestigious global event. In late 2021, U.S. football (soccer) fans (n=42) and non-fans (n=37) were surveyed about the upcoming 2022 World Cup. A regression analysis revealed that how much an individual indicated that they were a soccer fan did not significantly influence their desire to visit Qatar or try products from Qatar in the future, even though the country was hosting the World Cup. In the end, hosting a mega-event as grand as the World Cup showcases the country to the world, but it seems to have little impact on consumer perceptions of the country as a whole or its brands. That is, the World Cup appeared to enhance already pre-existing stereotypes about Brazil (e.g., beaches, partying and fun, yet with crime and poverty), Russia (e.g., cold weather, vodka and business corruption) and Qatar (desert and oil). Moreover, across all three countries, respondents could rarely name a brand from the host country. Because mega-events cost a great deal of time and money, countries need to do more to market their country and its brands when hosting. In addition, these countries would be wise to measure the impact of the event from different perspectives. Hence, we put forth a comprehensive future research agenda to further the understanding of how countries, and their brands, can benefit from hosting a mega sporting event.
Keywords: branding, country-of-origin effects, mega sporting events, return on investment
Procedia PDF Downloads 282
16 Persistent Ribosomal In-Frame Mis-Translation of Stop Codons as Amino Acids in Multiple Open Reading Frames of a Human Long Non-Coding RNA
Authors: Leonard Lipovich, Pattaraporn Thepsuwan, Anton-Scott Goustin, Juan Cai, Donghong Ju, James B. Brown
Abstract:
Two-thirds of human genes do not encode any known proteins. Aside from long non-coding RNA (lncRNA) genes with recently-discovered functions, the ~40,000 non-protein-coding human genes remain poorly understood, and a role for their transcripts as de-facto unconventional messenger RNAs has not been formally excluded. Ribosome profiling (Riboseq) predicts translational potential, but without independent evidence of proteins from lncRNA open reading frames (ORFs), ribosome binding of lncRNAs does not prove translation. Previously, we mass-spectrometrically documented translation of specific lncRNAs in human K562 and GM12878 cells. We now examined lncRNA translation in human MCF7 cells, integrating strand-specific Illumina RNAseq, Riboseq, and deep mass spectrometry in biological quadruplicates performed at two core facilities (BGI, China; City of Hope, USA). We excluded known-protein matches. UCSC Genome Browser-assisted manual annotation of imperfect (tryptic-digest-peptides)-to-(lncRNA-three-frame-translations) alignments revealed three peptides hypothetically explicable by 'stop-to-nonstop' in-frame replacement of stop codons by amino acids in two ORFs of the lncRNA MMP24-AS1. To search for this phenomenon genomewide, we designed and implemented a novel pipeline, matching tryptic-digest spectra to wildcard-instead-of-stop versions of repeat-masked, six-frame, whole-genome translations. Along with singleton putative stop-to-nonstop events affecting four other lncRNAs, we identified 24 additional peptides with stop-to-nonstop in-frame substitutions from multiple positive-strand MMP24-AS1 ORFs. Only UAG and UGA, never UAA, stop codons were impacted. All MMP24-AS1-matching spectra met the same significance thresholds as high-confidence known-protein signatures. Targeted resequencing of MMP24-AS1 genomic DNA and cDNA from the same samples did not reveal any mutations, polymorphisms, or sequencing-detectable RNA editing. This unprecedented apparent gene-specific violation of the genetic code highlights the importance of matching peptides to whole-genome, not known-genes-only, ORFs in mass-spectrometry workflows, and suggests a new mechanism enhancing the combinatorial complexity of the proteome.
Funding: NIH Director’s New Innovator Award 1DP2-CA196375 to LL.
Keywords: genetic code, lncRNA, long non-coding RNA, mass spectrometry, proteogenomics, ribo-seq, ribosome, RNAseq
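A minimal sketch of the wildcard-translation idea, assuming Biopython and using 'X' as the stop-codon wildcard so that peptide-spectrum matching can tolerate stop-to-nonstop substitutions (repeat masking and the actual spectrum-matching engine are omitted):

```python
from Bio.Seq import Seq

def six_frame_wildcard_translations(dna: str):
    """Translate a DNA sequence in all six reading frames, replacing each
    stop codon ('*') with the wildcard residue 'X'."""
    seq = Seq(dna)
    frames = []
    for strand in (seq, seq.reverse_complement()):
        for offset in range(3):
            sub = strand[offset:]
            sub = sub[: len(sub) - len(sub) % 3]  # trim to a codon multiple
            frames.append(str(sub.translate()).replace("*", "X"))
    return frames

print(six_frame_wildcard_translations("ATGGCCTGATTTAAACGT"))
```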
Procedia PDF Downloads 235
15 Predictors of Sexually Transmitted Infection of Korean Adolescent Females: Analysis of Pooled Data from Korean Nationwide Survey
Authors: Jaeyoung Lee, Minji Je
Abstract:
Objectives: Adolescents are curious about sex, but sexual experience before adulthood carries a high risk of sexually transmitted infection (STI). Preventing STIs is therefore very important if adolescents are to grow up healthy. Female adolescents, especially, show sexual behavior distinct from that of male adolescents, and protecting their reproductive health is even more important since it is directly related to the childbirth of the next generation. This study thus investigated the predictors of sexually transmitted infection in adolescent females with sexual experience, based on the National Health Statistics in Korea. Methods: This study was based on the National Health Statistics in Korea. The 11th Korea Youth Behavior Web-based Survey in 2016 was administered as an anonymous self-reported survey of adolescent health behavior. The target population was middle and high school students nationwide as of April 2016, and 65,528 students from a total of 800 middle and high schools participated. The present analysis was conducted on 537 female high school students (Grades 10–12) among them. The collected data were analyzed under a complex sampling design using SPSS Statistics 22. The strata, cluster, weight, and finite population correction provided by the Korea Centers for Disease Control & Prevention (KCDC) were reflected in the complex sample design files used in the statistical analysis. The analysis methods included the Rao-Scott chi-square test, complex-samples general linear model, and complex-samples multiple logistic regression analysis. Results: Of the 537 female adolescents, 11.9% (53 adolescents) had experienced a sexually transmitted infection. The predictors of STI among the subjects were 'age at first intercourse' and 'sexual intercourse after drinking'. Relative to subjects whose first sexual experience occurred at elementary school age or earlier, the odds of STI were 0.31 times as high (p=.006, 95%CI=0.13-0.71) when first intercourse occurred in middle school and 0.13 times as high (p<.001, 95%CI=0.05-0.32) when it occurred in high school. In addition, the odds of STI were 3.54 times higher (p<.001, 95%CI=1.76-7.14) among subjects who had experienced sexual intercourse after drinking alcohol, compared to those without such experience. Conclusions: Female adolescents had a higher probability of sexually transmitted infection when their age at first sexual experience was low. Therefore, female adolescents who begin sexual activity earlier should receive practical sex education appropriate to their developmental stage. In addition, since sexually transmitted infections increase when adolescents have sexual relations after drinking alcohol, prevention of alcohol use or targeted sex education interventions are required. Future health education interventions promoting the health of female adolescents should reflect the results of this study.
Keywords: adolescent, coitus, female, sexually transmitted diseases
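As a rough illustration of the final modelling step, here is a weighted logistic regression in Python on simulated data. All variables, weights, and coefficients below are invented; the authors used SPSS complex-samples procedures, and the plain survey weights shown here do not reproduce strata/cluster variance corrections.

# A simplified sketch (hypothetical data): survey-weighted logistic
# regression of STI status on timing of first intercourse and
# drinking-related intercourse, reporting odds ratios with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 537
df = pd.DataFrame({
    "first_sex_high_school": rng.integers(0, 2, n),
    "sex_after_drinking": rng.integers(0, 2, n),
    "weight": rng.uniform(0.5, 2.0, n),   # stand-in survey weights
})
logit = -2.0 - 2.0 * df["first_sex_high_school"] + 1.25 * df["sex_after_drinking"]
df["sti"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(df[["first_sex_high_school", "sex_after_drinking"]])
model = sm.GLM(df["sti"], X, family=sm.families.Binomial(),
               var_weights=df["weight"]).fit()
print(np.exp(model.params))         # odds ratios
print(np.exp(model.conf_int()))     # 95% confidence intervals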
Procedia PDF Downloads 192
14 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes
Authors: Nadarajah I. Ramesh
Abstract:
Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent development on this topic and presents the results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Amongst the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator, together with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite state irreducible Markov process X(t). Since the likelihood function of this process can be obtained, by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip-times to rainfall depths prior to fitting the models. One advantage of this approach was that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when they were fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model
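The core of the DSPP construction can be illustrated in a few lines of simulation. Below is a sketch of a two-state Markov-modulated Poisson process in which bucket-tip arrivals occur at a rate governed by a hidden Markov state; the rates are illustrative placeholders, not fitted values from the paper.

# A minimal sketch of a two-state doubly stochastic (Markov-modulated)
# Poisson process: tips arrive at a low rate in state 0 ("dry") and a
# high rate in state 1 ("rain cell"), with exponential state switching.
import numpy as np

def simulate_mmpp(t_end, switch_rates=(0.2, 1.0), tip_rates=(0.01, 5.0), seed=1):
    """Return bucket-tip arrival times of an MMPP on [0, t_end] (hours)."""
    rng = np.random.default_rng(seed)
    t, state, tips = 0.0, 0, []
    while t < t_end:
        total = switch_rates[state] + tip_rates[state]
        t += rng.exponential(1.0 / total)     # time to next event
        if t >= t_end:
            break
        if rng.random() < tip_rates[state] / total:
            tips.append(t)                    # event is a bucket tip
        else:
            state = 1 - state                 # event is a hidden state switch
    return np.array(tips)

tips = simulate_mmpp(t_end=48.0)
print(f"{tips.size} tips in 48 h; first few: {np.round(tips[:5], 2)}")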
Procedia PDF Downloads 279
13 HyDUS Project; Seeking a Wonder Material for Hydrogen Storage
Authors: Monica Jong, Antonios Banos, Tom Scott, Chris Webster, David Fletcher
Abstract:
Hydrogen, as a clean alternative to methane, is relatively easy to make, either from water using electrolysis or from methane using steam reformation. However, hydrogen is much trickier to store than methane, and without effective storage, it simply won't pass muster as a suitable methane substitute. Physical storage of hydrogen is quite inefficient. Storing hydrogen as a compressed gas at pressures up to 900 times atmospheric is volumetrically inefficient and carries safety implications, whilst storing it as a liquid requires costly and constant cryogenic cooling to minus 253°C. This is where depleted uranium (DU) steps in as a possible solution. Across the periodic table, there are many metallic elements that will react with hydrogen to form a chemical compound known as a hydride (or metal hydride). From a chemical perspective, the 'king' of the hydride-forming metals is palladium, because it offers the highest volumetric hydrogen storage capacity. However, this material is simply too expensive and scarce to be used in a scaled-up bulk hydrogen storage solution. Depleted uranium is the second most volumetrically efficient hydride-forming metal after palladium. The UK has accrued a significant stockpile of DU as a by-product of manufacturing nuclear fuel over many decades, and it is currently without real commercial use. Uranium trihydride (UH3) contains three hydrogen atoms for every uranium atom and can chemically store hydrogen at ambient pressure and temperature at more than twice the density of pure liquid hydrogen for the same volume. To release the hydrogen from the hydride, all you do is heat it up: at temperatures above 250°C, the hydride starts to thermally decompose, releasing hydrogen as a gas and leaving the uranium as a metal again. The reversible nature of this reaction allows the hydride to be formed and unformed again and again, enabling its use as a high-density hydrogen storage material that is already available in large quantities as a stockpiled 'waste' by-product. Whilst the tritium storage credentials of uranium have been rigorously proven at the laboratory scale and at the fusion demonstrator JET for over 30 years, there is a need to prove the concept for depleted uranium hydrogen storage (HyDUS) at scales approaching those needed to flexibly supply our national power grid with energy. This is exactly the purpose of the HyDUS project, a collaborative venture involving EDF as the interested energy vendor, Urenco as the owner of the waste DU, and the University of Bristol with the UKAEA as the architects of the technology. The team will embark on building and proving the world's first pilot-scale demonstrator of bulk chemical hydrogen storage using depleted uranium. Within 24 months, the team will attempt to prove both the technical and commercial viability of this technology as a longer-duration energy storage solution for the UK. The HyDUS project seeks to enable a true by-product-to-wonder-material story for depleted uranium, demonstrating that we can think sustainably about unlocking the potential value trapped inside nuclear waste materials.
Keywords: hydrogen, long duration storage, storage, depleted uranium, HyDUS
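The volumetric claim can be sanity-checked with a short calculation. The density and molar-mass figures below are assumed literature values, not numbers from the project; with these inputs the ratio lands close to a factor of two, in the same ballpark as the abstract's claim.

# Back-of-envelope check: hydrogen mass stored per cm^3 of UH3 versus
# the density of liquid hydrogen. All input values are assumed
# literature figures.
M_U, M_H = 238.03, 1.008            # g/mol
rho_UH3 = 10.9                      # g/cm^3, beta-UH3 (assumed value)
rho_LH2 = 0.0708                    # g/cm^3, liquid H2 at ~20 K

h_mass_fraction = 3 * M_H / (M_U + 3 * M_H)
h_density_UH3 = rho_UH3 * h_mass_fraction   # g of H per cm^3 of hydride
print(f"H stored in UH3: {h_density_UH3:.3f} g/cm^3")
print(f"Liquid H2:       {rho_LH2:.3f} g/cm^3")
print(f"Ratio:           {h_density_UH3 / rho_LH2:.1f}x")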
Procedia PDF Downloads 160
12 Identification of Clinical Characteristics from Persistent Homology Applied to Tumor Imaging
Authors: Eashwar V. Somasundaram, Raoul R. Wadhwa, Jacob G. Scott
Abstract:
The use of radiomics in measuring geometric properties of tumor images, such as size, surface area, and volume, has been invaluable in assessing cancer diagnosis, treatment, and prognosis. In addition to analyzing geometric properties, radiomics would benefit from measuring topological properties using persistent homology. Intuitively, features uncovered by persistent homology may correlate with tumor structural features. One example is necrotic cavities (corresponding to 2-dimensional topological features), which are markers of very aggressive tumors. We developed a data pipeline in R that clusters tumor images based on persistent homology and used it to identify meaningful clinical distinctions between tumors, and possibly new relationships not captured by established clinical categorizations. A preliminary analysis was performed on 16 Magnetic Resonance Imaging (MRI) breast tissue segments downloaded from the 'Investigation of Serial Studies to Predict Your Therapeutic Response with Imaging and Molecular Analysis' (I-SPY TRIAL or ISPY1) collection in The Cancer Imaging Archive. Each segment represents a patient's breast tumor prior to treatment. The ISPY1 dataset also provided estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) status data. A persistent homology matrix, up to 2-dimensional features, was calculated for each MRI segmentation. Wasserstein distances were then calculated between all pairs of tumor image persistent homology matrices to create a distance matrix for each feature dimension. Since Wasserstein distances were calculated for 0-, 1-, and 2-dimensional features, three hierarchical clusterings were constructed. The adjusted Rand index was used to assess how well the clusters corresponded to the ER/PR/HER2 status of the tumors. Triple-negative cancers (negative status for all three receptors) significantly clustered together in the 2-dimensional features dendrogram (adjusted Rand index of .35, p = .031). It is known that having a triple-negative breast tumor is associated with aggressive tumor growth and poor prognosis compared to non-triple-negative breast tumors. The aggressive growth associated with triple-negative tumors may produce a distinctive structure in an MRI segmentation, which persistent homology is able to identify. This preliminary analysis shows promising results for the use of persistent homology on tumor imaging to assess the severity of breast tumors. The next step is to apply this pipeline to other tumor segment images from The Cancer Imaging Archive at different sites, such as the lung, kidney, and brain. In addition, we will assess whether other clinical parameters, such as overall survival, tumor stage, and tumor genotype, are captured well in persistent homology clusters. If analyzing tumor MRI segments using persistent homology consistently identifies clinical relationships, this could enable clinicians to use persistent homology data as a noninvasive way to inform clinical decision making in oncology.
Keywords: cancer biology, oncology, persistent homology, radiomics, topological data analysis, tumor imaging
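The authors built their pipeline in R; the sketch below is a minimal Python analogue (assuming the ripser, persim, scipy, and scikit-learn packages are installed). Random point clouds stand in for MRI-derived segmentations, and H1 features are used here for robustness on small clouds, whereas the study's key result concerned H2 (cavities).

# A minimal Python analogue of the pipeline: persistence diagrams per
# point cloud, pairwise Wasserstein distances, hierarchical clustering,
# and agreement with labels via the adjusted Rand index.
import numpy as np
from ripser import ripser
from persim import wasserstein
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
clouds = [rng.random((80, 3)) for _ in range(8)]   # stand-in segmentations
labels = [0, 0, 0, 0, 1, 1, 1, 1]                  # e.g. triple-negative or not

# H1 diagrams (loops); the paper's key finding used H2 (cavities)
dgms = [ripser(c, maxdim=1)["dgms"][1] for c in clouds]

n = len(dgms)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = wasserstein(dgms[i], dgms[j])

Z = linkage(squareform(D), method="average")       # hierarchical clustering
clusters = fcluster(Z, t=2, criterion="maxclust")
print("adjusted Rand index:", adjusted_rand_score(labels, clusters))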
Procedia PDF Downloads 136
11 State and Benefit: Delivering the First State of the Bays Report for Victoria
Authors: Scott Rawlings
Abstract:
Victoria's first State of the Bays report is an historic baseline study of the health of Port Phillip Bay and Western Port. The report includes 50 assessments of 36 indicators across a broad array of topics, from the nitrogen cycle and water quality to key marine species and habitats. This paper discusses the processes for determining and assessing the indicators and comments on future priorities identified to maintain and improve the health of these waterways. Victoria's population is now six million and growing at a rate of over 100,000 people per year, the highest increase in Australia, and the population of greater Melbourne is over four million. Port Phillip Bay and Western Port are vital marine assets at the centre of this growth and will require adaptive strategies if they are to remain in good condition and continue to deliver environmental, economic and social benefits. In 2014, it was in recognition of these pressures that the incoming Victorian Government committed to reporting on the state of the bays every five years. The inaugural State of the Bays report was issued by the independent Victorian Commissioner for Environmental Sustainability. The report brought together what is known about both bays, based on existing research. It is a baseline on which future reports will build and, over time, will include more of Victoria's marine environment. Port Phillip Bay and Western Port generally demonstrate healthy systems, but specific threats linked to population growth are a significant pressure. Impacts are more significant where human activity is more intense and where nutrients are transported to the bays around the mouths of creeks and drainage systems. The transport of high loads of nutrients and pollutants to the bays from peak rainfall events is likely to increase with climate change, as will sea level rise. Marine pests are also a threat: more than 100 introduced marine species have become established in Port Phillip Bay and can compete with native species, alter habitat, reduce important fish stocks and potentially disrupt nitrogen cycling processes. This study confirmed that the data collection regime is better within the Marine Protected Areas of Port Phillip Bay than in other parts. The State of the Bays report is a positive and practical example of what can be achieved through collaboration and cooperation between environmental reporters, government agencies, academic institutions, data custodians, and NGOs. State of the Bays 2016 provides an important foundation by identifying knowledge gaps and research priorities for future studies and reports on the bays. It builds a strong evidence base to effectively manage the bays and support an adaptive management framework. The report proposes a set of indicators for future reporting that will support a step-change in our approach to monitoring and managing the bays: a shift from reporting only on what we do know to reporting on what we need to know.
Keywords: coastal science, marine science, Port Phillip Bay, state of the environment, Western Port
Procedia PDF Downloads 210
10 The Budget Impact of the DISCERN™ Diagnostic Test for Alzheimer's Disease in the United States
Authors: Frederick Huie, Lauren Fusfeld, William Burchenal, Scott Howell, Alyssa McVey, Thomas F. Goss
Abstract:
Alzheimer's Disease (AD) is a degenerative brain disease characterized by memory loss and cognitive decline that presents a substantial economic burden for patients and health insurers in the US. This study evaluates the payer budget impact of the DISCERN™ test in the diagnosis and management of patients with symptoms of dementia evaluated for AD. DISCERN™ comprises three assays that assess critical factors related to AD that regulate memory, formation of synaptic connections among neurons, and levels of amyloid plaques and neurofibrillary tangles in the brain, and can provide a quicker, more accurate diagnosis than tests in the current diagnostic pathway (CDP). An Excel-based model with a three-year horizon was developed to assess the budget impact of DISCERN™ compared with the CDP in a Medicare Advantage plan with 1M beneficiaries. Model parameters were identified through a literature review and verified through consultation with clinicians experienced in the diagnosis and management of AD. The model assesses direct medical costs/savings for patients based on the following categories:
• Diagnosis: costs of diagnosis using DISCERN™ and the CDP.
• False Negative (FN) diagnosis: incremental cost of care avoidable with a correct AD diagnosis and appropriately directed medication.
• True Positive (TP) diagnosis: AD medication costs; costs from a later TP diagnosis with the CDP versus DISCERN™ in the year of diagnosis; and savings from the delay in AD progression due to appropriate AD medication in patients who are correctly diagnosed after a FN diagnosis.
• False Positive (FP) diagnosis: cost of AD medication for patients who do not have AD.
A one-way sensitivity analysis was conducted to assess the effect of varying key clinical and cost parameters ±10%. An additional scenario analysis was developed to evaluate the impact of individual inputs. In the base scenario, DISCERN™ is estimated to decrease costs by $4.75M over three years, equating to approximately $63.11 saved per test per year for a cohort followed over three years. While the diagnosis cost is higher with DISCERN™ than with CDP modalities, this cost is offset by the higher overall costs associated with the CDP due to the longer time needed to receive a TP diagnosis and the larger number of patients who receive a FN diagnosis and progress more rapidly than if they had received appropriate AD medication. The sensitivity analysis shows that the three parameters with the greatest impact on savings are: reduced sensitivity of DISCERN™, improved sensitivity of the CDP, and a reduction in the percentage of disease progression that is avoided with appropriate AD medication. A scenario analysis in which DISCERN™ reduces the utilization of computed tomography from 21% in the base case to 16%, magnetic resonance imaging from 37% to 27%, and cerebrospinal fluid biomarker testing, positron emission tomography, electroencephalograms, and polysomnography testing from 4%, 5%, 10%, and 8%, respectively, in the base case to 0%, results in an overall three-year net savings of $14.5M. DISCERN™ improves the rate of accurate, definitive diagnosis of AD earlier in the disease course and may generate savings for Medicare Advantage plans.
Keywords: Alzheimer's disease, budget, dementia, diagnosis
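The four cost categories translate directly into a small cohort model. The skeleton below uses entirely hypothetical parameter values (the prevalence, sensitivities, specificities, and unit costs are invented, not the study's inputs) and exists only to show the structure of the comparison.

# A toy budget-impact skeleton: each diagnostic pathway's total cost is
# diagnosis plus FN, TP, and FP downstream costs for a tested cohort.
cohort = 10_000                      # members evaluated for dementia

def pathway_cost(diag_cost, sens, spec, prevalence=0.55,
                 fn_extra_care=9_000, tp_med=4_500, fp_med=4_500):
    pos = cohort * prevalence
    neg = cohort - pos
    tp, fn = pos * sens, pos * (1 - sens)
    fp = neg * (1 - spec)
    return (cohort * diag_cost        # diagnostic work-up
            + fn * fn_extra_care      # misdirected care for missed AD
            + tp * tp_med             # AD medication for true positives
            + fp * fp_med)            # unnecessary medication

cdp = pathway_cost(diag_cost=1_200, sens=0.77, spec=0.77)
discern = pathway_cost(diag_cost=1_600, sens=0.94, spec=0.94)
print(f"Net impact per member tested: ${(discern - cdp) / cohort:,.2f}")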
Procedia PDF Downloads 139
9 Budget Impact Analysis of a Stratified Treatment Cascade for Hepatitis C Direct Acting Antiviral Treatment in an Asian Middle-Income Country through the Use of Compulsory and Voluntary Licensing Options
Authors: Amirah Azzeri, Fatiha H. Shabaruddin, Scott A. McDonald, Rosmawati Mohamed, Maznah Dahlui
Abstract:
Objective: A scaled-up treatment cascade with direct-acting antiviral (DAA) therapy is necessary to achieve global WHO targets for hepatitis C virus (HCV) elimination in Malaysia. Recently, limited access to Sofosbuvir/Daclatasvir (SOF/DAC) has become available through compulsory licensing, with future access to Sofosbuvir/Velpatasvir (SOF/VEL) expected through voluntary licensing due to recent agreements. SOF/VEL has superior clinical outcomes, particularly for cirrhotic stages, but higher drug acquisition costs compared to SOF/DAC. It has been proposed that a stratified treatment cascade might be the most cost-efficient approach for Malaysia, whereby all HCV patients are treated with SOF/DAC except for patients with cirrhosis, who are treated with SOF/VEL. This study aimed to conduct a five-year budget impact analysis, from the provider perspective, of the proposed stratified treatment cascade for HCV treatment in Malaysia. Method: A disease progression model developed from model-predicted HCV epidemiology data in Malaysia was used for the analysis. In scenario A, all HCV patients were treated with SOF/DAC at all disease stages, while in scenario B, SOF/DAC was used only for non-cirrhotic patients and SOF/VEL was used for cirrhotic patients. The model projections estimated the annual numbers of patients in care and the numbers of patients to be initiated on DAA treatment nationally. Healthcare costs associated with DAA therapy and disease stage monitoring were included to estimate the downstream cost implications. For scenario B, the estimated treatment uptake of SOF/VEL for cirrhotic patients was 25%, 50%, 75%, 100% and 100% for 2018, 2019, 2020, 2021 and 2022, respectively. Healthcare costs were estimated based on standard clinical pathways for DAA treatment described in recent guidelines. All costs are reported in US dollars (conversion rate US$1=RM4.09, price year 2018). Scenario analysis was conducted for 5% and 10% reductions in the SOF/VEL acquisition cost anticipated from competitive market pricing of generic DAAs in Malaysia. Results: The stratified treatment cascade with SOF/VEL in scenario B was found to be cost-saving compared to scenario A. A substantial portion of the cost reduction was due to the costs associated with DAA therapy, which yielded savings of USD 40 thousand (year 1) to USD 443 thousand (year 5) annually, with cumulative savings of USD 1.1 million after 5 years. Cost reductions for disease stage monitoring were seen from year three onwards, resulting in cumulative savings of USD 1.1 thousand. Scenario analysis estimated cumulative savings of USD 1.24 to USD 1.35 million when the acquisition cost of SOF/VEL was reduced. Conclusion: A stratified treatment cascade with SOF/VEL is expected to be cost-saving and can result in a reduction in overall healthcare expenditure in Malaysia compared to treatment with SOF/DAC alone. The better clinical efficacy of SOF/VEL is expected to halt patients' HCV disease progression and may reduce the downstream costs of treating advanced disease stages. The findings of this analysis may be useful to inform healthcare policies for HCV treatment in Malaysia.
Keywords: Malaysia, direct acting antiviral, compulsory licensing, voluntary licensing
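To show the shape of the scenario comparison, here is a schematic five-year calculation. The patient counts, drug prices, cirrhotic share, and avoided-downstream saving are all invented placeholders; only the SOF/VEL uptake schedule is taken from the abstract.

# Schematic comparison: scenario A (SOF/DAC for all) versus scenario B
# (SOF/DAC for non-cirrhotic, SOF/VEL for cirrhotic with phased uptake).
initiated = [3_000, 4_000, 5_000, 6_000, 7_000]   # patients/year (assumed)
cirrhotic_share = 0.20                            # assumed
uptake = [0.25, 0.50, 0.75, 1.00, 1.00]           # SOF/VEL uptake (abstract)
cost_sof_dac, cost_sof_vel = 300.0, 500.0         # per course, USD (assumed)
avoided_downstream = 500.0   # monitoring/progression cost avoided per
                             # SOF/VEL-treated cirrhotic patient (assumed)

total_a = total_b = 0.0
for n, u in zip(initiated, uptake):
    cirr = n * cirrhotic_share
    total_a += n * cost_sof_dac
    drug_b = ((n - cirr) * cost_sof_dac
              + cirr * (u * cost_sof_vel + (1 - u) * cost_sof_dac))
    total_b += drug_b - cirr * u * avoided_downstream
print(f"Scenario B minus scenario A over 5 years: ${total_b - total_a:,.0f}")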
Procedia PDF Downloads 165
8 Healthcare Utilization and Costs of Specific Obesity Related Health Conditions in Alberta, Canada
Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach
Abstract:
Obesity-related health conditions impose a substantial economic burden on payers due to increased healthcare use. Estimates of healthcare resource use and costs associated with obesity-related comorbidities are needed to inform policies and interventions targeting these conditions. Methods: Adults living with obesity were identified (a procedure-related body mass index code for class 2/3 obesity between 2012 and 2019 in Alberta, Canada; excluding those with bariatric surgery), and outcomes were compared over 1 year (2019/2020) between those who had and did not have specific obesity-related comorbidities. The probability of using a healthcare service (based on the odds ratio of a zero cost [OR-zero]) was compared; 95% confidence intervals (CI) are reported. Logistic regression and a generalized linear model with log link and gamma distribution were used for total healthcare cost comparisons ($CDN); cost ratios and estimated cost differences (95% CI) are reported. Potential socio-demographic and clinical confounders were adjusted for, and incremental cost differences are representative of a referent case. Results: A total of 220,190 adults living with obesity were included; 44% had hypertension, 25% had osteoarthritis, 24% had type-2 diabetes, 17% had cardiovascular disease, 12% had insulin resistance, 9% had chronic back pain, and 4% of females had polycystic ovarian syndrome (PCOS). The probability of hospitalization, ED visit, and ambulatory care was higher in those with each of the following obesity-related comorbidities versus those without: chronic back pain (hospitalization: 1.8-times [OR-zero: 0.57 [0.55/0.59]] / ED visit: 1.9-times [OR-zero: 0.54 [0.53/0.56]] / ambulatory care visit: 2.4-times [OR-zero: 0.41 [0.40/0.43]]), cardiovascular disease (2.7-times [OR-zero: 0.37 [0.36/0.38]] / 1.9-times [OR-zero: 0.52 [0.51/0.53]] / 2.8-times [OR-zero: 0.36 [0.35/0.36]]), osteoarthritis (2.0-times [OR-zero: 0.51 [0.50/0.53]] / 1.4-times [OR-zero: 0.74 [0.73/0.76]] / 2.5-times [OR-zero: 0.40 [0.40/0.41]]), type-2 diabetes (1.9-times [OR-zero: 0.54 [0.52/0.55]] / 1.4-times [OR-zero: 0.72 [0.70/0.73]] / 2.1-times [OR-zero: 0.47 [0.46/0.47]]), hypertension (1.8-times [OR-zero: 0.56 [0.54/0.57]] / 1.3-times [OR-zero: 0.79 [0.77/0.80]] / 2.2-times [OR-zero: 0.46 [0.45/0.47]]), PCOS (not significant / 1.2-times [OR-zero: 0.83 [0.79/0.88]] / not significant), and insulin resistance (1.1-times [OR-zero: 0.88 [0.84/0.91]] / 1.1-times [OR-zero: 0.92 [0.89/0.94]] / 1.8-times [OR-zero: 0.56 [0.54/0.57]]). After full adjustment for potential confounders, the total healthcare cost ratio was higher in those with each of the following obesity-related comorbidities versus those without: chronic back pain (1.54-times [1.51/1.56]), cardiovascular disease (1.45-times [1.43/1.47]), osteoarthritis (1.36-times [1.35/1.38]), type-2 diabetes (1.30-times [1.28/1.31]), hypertension (1.27-times [1.26/1.28]), PCOS (1.08-times [1.05/1.11]), and insulin resistance (1.03-times [1.01/1.04]). Conclusions: Adults with obesity who have specific obesity-related health conditions have a higher probability of healthcare use and incur greater costs than those without these comorbidities; incremental costs are larger when other obesity-related health conditions are not adjusted for. In the referent case, hypertension was costliest (44% had this condition, with an additional annual cost of $715 [$678/$753]). If these findings hold for the Canadian population, hypertension represents an estimated additional annual healthcare cost of $2.5 billion among adults living with obesity (based on an adult obesity rate of 26%). The results of this study can inform decision making on investment in interventions that are effective in treating obesity and its complications.
Keywords: administrative data, healthcare cost, obesity-related comorbidities, real world evidence
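A minimal sketch of the cost model on simulated data follows (the covariates and coefficients are invented; the paper handled zero-cost users separately via logistic regression, whereas the gamma GLM below assumes strictly positive costs).

# Gamma GLM with log link for right-skewed healthcare costs: the
# exponentiated coefficient is an adjusted cost ratio, as in the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5_000
df = pd.DataFrame({
    "hypertension": rng.integers(0, 2, n),
    "age": rng.uniform(30, 70, n),
})
mu = np.exp(6.0 + 0.24 * df["hypertension"] + 0.01 * df["age"])
df["cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)   # positive, right-skewed

model = smf.glm("cost ~ hypertension + age", data=df,
                family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(np.exp(model.params["hypertension"]))   # adjusted cost ratio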
Procedia PDF Downloads 149
7 Optimization of Perfusion Distribution in Custom Vascular Stent-Grafts Through Patient-Specific CFD Models
Authors: Scott M. Black, Craig Maclean, Pauline Hall Barrientos, Konstantinos Ritos, Asimina Kazakidi
Abstract:
Aortic aneurysms and dissections are leading causes of death in cardiovascular disease. Both inevitably lead to hemodynamic instability without surgical intervention in the form of vascular stent-graft deployment. An accurate description of the aortic geometry and blood flow in patient-specific cases is vital for treatment planning and the long-term success of such grafts, as they must generate physiological branch perfusion and in-stent hemodynamics. The aim of this study was to create patient-specific computational fluid dynamics (CFD) models through a multi-modality, multi-dimensional approach with boundary condition optimization to predict branch flow rates and in-stent hemodynamics in custom stent-graft configurations. Three-dimensional (3D) thoracoabdominal aortae were reconstructed from four-dimensional flow magnetic resonance imaging (4D Flow-MRI) and computed tomography (CT) medical images. The former employed a novel approach to generate and enhance vessel lumen contrast via through-plane velocity at discrete, user-defined cardiac time steps post-hoc. To produce patient-specific boundary conditions (BCs), the aortic geometry was reduced to a one-dimensional (1D) model. Thereafter, a zero-dimensional (0D) three-element Windkessel model (3EWM) was coupled to each terminal branch to represent the distal vasculature. In this coupled 0D-1D model, the 3EWM parameters were optimized to yield branch flow waveforms representative of the 4D Flow-MRI-derived in-vivo data. Thereafter, a 0D-3D CFD model was created, utilizing the optimized 3EWM BCs and a 4D Flow-MRI-obtained inlet velocity profile. A sensitivity analysis on the effects of stent-graft configuration and BC parameters was then undertaken using multiple stent-graft configurations and a range of distal vasculature conditions. 4D Flow-MRI granted unparalleled visualization of blood flow throughout the cardiac cycle in both the pre- and post-surgical states. Segmentation and reconstruction of healthy and stented regions from retrospective 4D Flow-MRI images also generated 3D models whose geometries were successfully validated against their CT-derived counterparts. 0D-1D coupling efficiently captured branch flow and pressure waveforms, while 0D-3D models also enabled 3D flow visualization and quantification of clinically relevant hemodynamic parameters for in-stent thrombosis and graft limb occlusion. It was apparent that changes in 3EWM BC parameters had a pronounced effect on perfusion distribution and near-wall hemodynamics. Results show that the 3EWM parameters can be iteratively changed to simulate a range of graft limb diameters and distal vasculature conditions for a given stent-graft, to determine the optimal configuration prior to surgery. To conclude, this study outlined a methodology to aid in the prediction of post-surgical branch perfusion and in-stent hemodynamics in patient-specific cases for the implementation of custom stent-grafts.
Keywords: 4D flow-MRI, computational fluid dynamics, vascular stent-grafts, windkessel
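The 0D outlet model at the heart of the coupling can be written down compactly. Below is a sketch of a single three-element Windkessel driven by an idealized pulsatile inflow; all parameter values are illustrative, not the patient-optimized ones from the study.

# Three-element Windkessel (3EWM): characteristic resistance Rc in
# series with a parallel compliance C and distal resistance Rd.
# Inlet pressure P = Rc*Q + Pc, with dPc/dt = (Q - Pc/Rd) / C.
import numpy as np
from scipy.integrate import solve_ivp

Rc, Rd, C = 0.05, 1.0, 1.5        # mmHg*s/mL, mmHg*s/mL, mL/mmHg (assumed)
T = 1.0                           # cardiac period, s

def q_in(t):
    """Idealized pulsatile inflow (mL/s): half-sine systolic ejection."""
    tau = t % T
    return 400.0 * np.sin(np.pi * tau / 0.3) if tau < 0.3 else 0.0

def rhs(t, y):
    pc = y[0]                     # pressure across the compliance
    return [(q_in(t) - pc / Rd) / C]

sol = solve_ivp(rhs, (0.0, 10 * T), [70.0], max_step=1e-3, dense_output=True)
t = np.linspace(9 * T, 10 * T, 200)           # final beat, near-periodic
p_inlet = Rc * np.array([q_in(ti) for ti in t]) + sol.sol(t)[0]
print(f"P range over final beat: {p_inlet.min():.1f}-{p_inlet.max():.1f} mmHg")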
Procedia PDF Downloads 181
6 A Resilience-Based Approach for Assessing Social Vulnerability in New Zealand's Coastal Areas
Authors: Javad Jozaei, Rob G. Bell, Paula Blackett, Scott A. Stephens
Abstract:
In the last few decades, Social Vulnerability Assessment (SVA) has been a favoured means of evaluating the susceptibility of social systems to drivers of change, including climate change and natural disasters. However, the application of SVA to inform responsive and practical strategies for dealing with uncertain climate change impacts has always been challenging, and agencies typically resort to conventional risk/vulnerability assessment. These challenges include the complex nature of social vulnerability concepts, which influences their applicability; complications in identifying and measuring the determinants of social vulnerability; transitory social dynamics in a changing environment; and the unpredictability of the scenarios of change that shape the vulnerability regime (including contention over when these impacts might emerge). Research suggests that conventional quantitative approaches to SVA cannot appropriately address these problems; hence, the outcomes can be misleading and unfit for addressing the ongoing, uncertain rise in risk. The second phase of New Zealand's Resilience to Nature's Challenges (RNC2) is developing a forward-looking vulnerability assessment framework and methodology that informs decision-making and policy development for changing coastal systems and accounts for the complex dynamics of New Zealand's coastal systems (socio-economic, environmental and cultural). RNC2 also requires the new methodology to consider plausible drivers of incremental and unknowable changes, to create mechanisms that enhance social and community resilience, and to fit New Zealand's multi-layer governance system. This paper aims to analyse the conventional approaches and methodologies in SVA and to offer recommendations for more responsive approaches that inform adaptive decision-making and policy development in practice. The research adopts a qualitative design to examine different aspects of conventional SVA processes; the methods include a systematic review of the literature and case studies. We found that the conventional quantitative, reductionist and deterministic mindset in SVA processes, with its focus on the impacts of rapid stressors (i.e., tsunamis, floods), shows deficiencies in accounting for the complex dynamics of social-ecological systems (SES) and the uncertain, long-term impacts of incremental drivers. The paper addresses the links between resilience and vulnerability and suggests how resilience theory and its underpinning notions, such as the adaptive cycle, panarchy, and system transformability, could address these issues and thereby influence the perception of the vulnerability regime and its assessment processes. In this regard, it will be argued that a shift of paradigm from 'specific resilience', which focuses on adaptive capacity associated with the notion of 'bouncing back', to 'general resilience', which accounts for system transformability, regime shift, and 'bouncing forward', can deliver more effective strategies in an era characterised by ongoing change and deep uncertainty.
Keywords: complexity, social vulnerability, resilience, transformation, uncertain risks
Procedia PDF Downloads 104