Search results for: uncertain QoS

7 Prospective Museum Visitor Management Based on Prospect Theory: A Pragmatic Approach

Authors: Athina Thanou, Eirini Eleni Tsiropoulou, Symeon Papavassiliou

Abstract:

The problem of museum visitor experience and congestion management, in its various forms, has come increasingly under the spotlight over the last few years, since overcrowding can significantly decrease the quality of visitors’ experience. Evidence suggests that on busy days the amount of time a visitor spends inside a crowded house museum can fall by up to 60% compared to a quiet mid-week day. In this paper we consider the aforementioned problem by treating museums as evolving social systems that induce constraints. However, in a cultural heritage space, as opposed to the majority of social environments, the momentum of the experience is primarily controlled by the visitor himself. Visitors typically behave selfishly, seeking to maximize their own Quality of Experience (QoE), commonly expressed through a utility function that takes several parameters into consideration, with crowd density and waiting/visiting time among the key ones. In such a setting, congestion occurs either when the utility of one visitor decreases due to the behavior of others, or when the cost of undertaking an activity rises due to the presence of others. We initially investigate how visitors’ behavioral risk attitudes, as captured and represented by prospect theory, affect their decisions in resource sharing settings, where visitors’ decisions and experiences are strongly interdependent. In contrast to the majority of existing studies, we highlight that visitors are not risk-neutral utility maximizers but demonstrate risk-aware behavior according to their personal risk characteristics. In our work, exhibits are organized into two groups: a) “safe exhibits”, the less congested ones, where visitors receive guaranteed satisfaction in accordance with the visiting time invested, and b) common pool of resources (CPR) exhibits, the most popular exhibits, with possibly increased congestion and an uncertain outcome in terms of visitor satisfaction. A key difference is that the satisfaction gained from a CPR exhibit depends strongly not only on the time invested by a specific visitor, but also on that invested by the rest of the visitors. In the latter case, over-investment of time, or equivalently increased congestion, potentially leads to “exhibit failure”, interpreted as visitors gaining no satisfaction from observing the exhibit due to high congestion. We present a framework where each visitor, in a distributed manner, determines his time investment in safe or CPR exhibits to optimize his QoE. Based on this framework, we analyze and evaluate how visitors, acting as prospect-theoretic decision-makers, respond and react to the various pricing policies imposed by the museum curators. Based on detailed evaluation results and experiments, we present interesting observations regarding the impact of several parameters and characteristics, such as visitor heterogeneity and the use of alternative pricing policies, on scalability, user satisfaction, museum capacity, resource fragility, and operating point stability. Furthermore, we study the effectiveness of alternative pricing mechanisms, when used as implicit tools, to deal with the congestion management problem in museums and potentially decrease the exhibit failure probability (fragility), while accounting for visitor risk preferences.
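
The prospect-theoretic setup described above can be illustrated with a short numerical sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes the standard Kahneman-Tversky value function and a simple failure probability that grows with the aggregate time invested in the CPR exhibit; all parameter values, the pricing term, and the failure rule are hypothetical.

```python
import numpy as np

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave over gains, convex and
    loss-averse over losses (reference point at zero)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

def cpr_failure_prob(total_time, capacity=10.0):
    """Hypothetical fragility rule: the CPR exhibit 'fails' (delivers no
    satisfaction) more often as the aggregate time invested exceeds its capacity."""
    return 1.0 - np.exp(-max(total_time - capacity, 0.0) / capacity)

def expected_prospect_utility(own_time, others_time, budget=5.0, price=0.2):
    """Prospect-theoretic expected utility of putting `own_time` hours into the
    CPR exhibit and the rest of the time budget into safe exhibits."""
    safe_gain = budget - own_time                  # guaranteed return from safe exhibits
    cpr_gain = 1.5 * own_time - price * own_time   # return if the CPR exhibit does not fail
    cpr_loss = -price * own_time                   # cost lost if the exhibit fails
    p_fail = cpr_failure_prob(own_time + others_time)
    return (prospect_value(safe_gain)
            + (1.0 - p_fail) * prospect_value(cpr_gain)
            + p_fail * prospect_value(cpr_loss))

# Best response of one visitor, given the (fixed) time invested by everyone else.
others_time = 8.0
grid = np.linspace(0.0, 5.0, 51)
best = grid[np.argmax([expected_prospect_utility(t, others_time) for t in grid])]
print(f"best CPR time investment given the others' load: {best:.2f} h")
```

With these assumed parameters, the best response shrinks as the load created by other visitors grows, which is the qualitative behavior the framework exploits when pricing policies are used to steer congestion.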

Keywords: museum resource and visitor management, congestion management, prospect theory, cyber-physical social systems

Procedia PDF Downloads 184
6 Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the Context of Agile, Lean and Hybrid Project Management Approaches

Authors: Maria Ledinskaya

Abstract:

This paper examines the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts in the context of Agile, Lean, and Hybrid project management. It is part case study and part literature review. To date, relatively little has been written about non-traditional project management approaches in heritage conservation. This paper seeks to introduce Agile, Lean, and Hybrid project management concepts from business, software development, and manufacturing fields to museum conservation, by referencing their practical application on a recent museum-based conservation project. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre for Visual Arts by private collectors Michael and Joyce Morris. The first part introduces the chronological timeline and key elements of the project. It describes a medium-size conservation project of moderate complexity, which was planned and delivered in an environment with multiple known unknowns – unresearched collection, unknown condition and materials, unconfirmed budget. The project was also impacted by the unknown unknowns of the COVID-19 pandemic, such as indeterminate lockdowns, and the need to accommodate social distancing and remote communications. The author, a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Collection Conservation Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. Subsequent sections examine the project from the point of view of Traditional, Agile, Lean, and Hybrid project management. The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment, due to its over-reliance on prediction-based planning and its low tolerance to change. In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Collection Conservation Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, as well as the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics, particularly with respect to change management, bespoke ethics, shared decision-making, and value-based cost-benefit conservation strategy. The author concludes that the Morris Collection Conservation Project had multiple Agile and Lean features which were instrumental to the successful delivery of the project. These key features are identified as distributed decision making, a co-located cross-disciplinary team, servant leadership, focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and emphasis on reducing procedural, financial, and logistical waste. 
Overall, the author’s findings point largely in favour of a Hybrid model which combines traditional and alternative project processes and tools to suit the specific needs of the project.

Keywords: project management, conservation, waterfall, agile, lean, hybrid

Procedia PDF Downloads 99
5 Modelling Spatial Dynamics of Terrorism

Authors: André Python

Abstract:

To this day, terrorism persists as a worldwide threat, exemplified by the recent deadly attacks in January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. In order to increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models that are able to capture the complex spatial dynamics of terrorism occurring at a local scale. Although empirical research carried out at the country level has confirmed theories explaining the diffusion processes of terrorism across space and time, scholars have failed to assess these diffusion theories on a local scale. Moreover, since scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models that are accurate in both space and time. In an effort to address these shortcomings, this research suggests a novel approach to systematically assess the theories of terrorism’s diffusion on a local scale and provide a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocalised data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), the surface of which is discretised in the form of Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through an integrated nested Laplace approximation (INLA), a recent fitting approach that computes fast and accurate estimates of posterior marginals. Hence, for each location in the world, the model provides a probability of encountering a lethal terrorist attack and measures of volatility, which inform on the model’s predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former process describes an expansion from high-concentration areas of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic, and demographic variables, the results of the model suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors that operate on a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.
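
The discretisation step can be made concrete with a short sketch. This is not the authors' R-INLA implementation; it is a simplified stand-in that builds the spherical Delaunay mesh from event coordinates (the convex hull of points on the unit sphere is exactly their spherical Delaunay triangulation) and fits a plain binomial GLM to an illustrative covariate. The event table, its column names, and the covariate values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.spatial import ConvexHull

def lonlat_to_xyz(lon_deg, lat_deg):
    """Convert longitude/latitude in degrees to unit vectors on the sphere."""
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    return np.column_stack([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])

# Hypothetical event table: one row per observed location, with a lethal-attack
# indicator and an illustrative covariate (names and values are made up).
events = pd.DataFrame({
    "lon":    [2.35, 44.40, 43.70, 13.20, -0.10],
    "lat":    [48.85, 33.30, 33.20, 32.90, 51.50],
    "lethal": [1, 1, 1, 0, 0],
    "gdp_pc": [38.0, 4.5, 4.6, 6.1, 41.0],
})

# Spherical Delaunay mesh: the convex hull of points lying on the unit sphere
# yields their Delaunay triangulation on the sphere.
xyz = lonlat_to_xyz(events["lon"].to_numpy(), events["lat"].to_numpy())
triangles = ConvexHull(xyz).simplices        # (n_triangles, 3) vertex indices
print(f"{len(triangles)} Delaunay triangles on the sphere")

# Simplified stand-in for the spatio-temporal model: a binomial (logistic) GLM.
# The original work instead fits a Bayesian spatio-temporal point process via INLA.
X = sm.add_constant(events[["gdp_pc"]])
fit = sm.GLM(events["lethal"], X, family=sm.families.Binomial()).fit()
print(fit.params)
```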

Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling

Procedia PDF Downloads 350
4 Assessing Organizational Resilience Capacity to Flooding: Index Development and Application to Greek Small & Medium-Sized Enterprises

Authors: Antonis Skouloudis, Konstantinos Evangelinos, Walter Leal-Filho, Panagiotis Vouros, Ioannis Nikolaou

Abstract:

Organizational resilience capacity to extreme weather events (EWEs) has attracted growing scholarly attention over the past decade as an essential aspect of business continuity management, with supporting evidence suggesting that it plays a key role in successful responses to adverse situations, crises, and shocks. Small and medium-sized enterprises (SMEs) are more vulnerable to floods than their larger counterparts and are therefore disproportionately affected by such extreme weather events. The limited resources at their disposal and the lack of time and skills all contribute to inadequate preparedness for the challenges posed by floods. SMEs tend to plan in the short term, reacting to circumstances as they arise and focussing on their very survival. Likewise, they have less formalised structures and fewer codified policies, and they are usually owner-managed, resulting in a command-and-control management culture. Such characteristics leave them with limited opportunities to recover from flooding and to quickly turn their operation around from a loss-making to a profit-making one. Scholars frame the capacity of business entities to be resilient upon an EWE disturbance (such as flash floods) as the rate of recovery and restoration of organizational performance to pre-disturbance conditions, the amount of disturbance (i.e. threshold level) a business can absorb before losing structural and/or functional components that will alter or cease operation, as well as the extent to which the organization maintains its function (i.e. impact resistance) before performance levels are driven to zero. Nevertheless, while resilience capacity seems to be accepted as an essential trait of firms effectively transcending uncertain conditions, research deconstructing the enabling conditions and/or inhibitory factors of SMEs’ resilience capacity to natural hazards is still sparse, fragmentary, and mostly fuelled by anecdotal evidence or normative assumptions. Focusing on the individual level of analysis, i.e. the individual enterprise and its endeavours to succeed, the emergent picture from this relatively new research strand delineates the specification of variables, conceptual relationships, or dynamic boundaries of resilience capacity components in an attempt to provide prescriptions for policy-making as well as business management. This study presents the development of a flood resilience capacity index (FRCI) and its application to Greek SMEs. The proposed composite indicator pertains to cognitive, behavioral/managerial, and contextual factors that influence an enterprise’s ability to shape effective responses to meet flood challenges. Through the proposed indicator-based approach, an analytical framework is set forth that will help standardize such assessments, with the overarching aim of reducing the vulnerability of SMEs to flooding. This will be achieved by identifying the major internal and external attributes explaining resilience capacity, which is particularly important given the limited resources these enterprises have and the fact that they tend to be primary sources of vulnerability in supply chain networks, generating single points of failure (SPOF).
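
As a rough illustration of how such a composite indicator can be assembled, the sketch below min-max normalises indicator scores, averages them into cognitive, behavioural/managerial, and contextual sub-indices, and combines these into an overall FRCI. The indicator names, grouping, and equal weights are assumptions for illustration; the abstract does not disclose the actual aggregation scheme.

```python
import pandas as pd

# Hypothetical survey scores for three SMEs (higher = stronger on that indicator)
raw = pd.DataFrame({
    "risk_awareness":       [3, 5, 2],   # cognitive
    "prior_flood_exposure": [1, 4, 2],   # cognitive
    "continuity_planning":  [2, 5, 1],   # behavioural / managerial
    "staff_training":       [3, 4, 2],   # behavioural / managerial
    "insurance_cover":      [4, 5, 1],   # contextual
    "supply_chain_links":   [2, 3, 3],   # contextual
}, index=["SME_A", "SME_B", "SME_C"])

groups = {
    "cognitive":   ["risk_awareness", "prior_flood_exposure"],
    "behavioural": ["continuity_planning", "staff_training"],
    "contextual":  ["insurance_cover", "supply_chain_links"],
}

# Min-max normalise each indicator to [0, 1] so different scales are comparable
norm = (raw - raw.min()) / (raw.max() - raw.min())

# Sub-index = mean of its indicators; FRCI = equally weighted mean of sub-indices
sub = pd.DataFrame({g: norm[cols].mean(axis=1) for g, cols in groups.items()})
frci = sub.mean(axis=1)

print(sub.round(2))
print(frci.round(2))   # values near 1 indicate higher flood resilience capacity
```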

Keywords: floods, small and medium-sized enterprises, organizational resilience capacity, index development

Procedia PDF Downloads 189
3 A Model for Analysing Argumentative Structures and Online Deliberation in User-Generated Comments to the Website of a South African Newspaper

Authors: Marthinus Conradie

Abstract:

The conversational dynamics of democratically orientated deliberation continue to stimulate critical scholarship for their potential to bolster robust engagement between different sections of pluralist societies. Several axes of deliberation that have attracted academic attention include face-to-face vs. online interaction, and citizen-to-citizen communication vs. engagement between citizens and political elites. In all these areas, numerous researchers have explored deliberative procedures aimed at achieving instrumental goals such as securing consensus on policy issues, against procedures that prioritise expressive outcomes such as broadening the range of argumentative repertoires that discursively construct and mediate specific political issues. The study that informs this paper works in the latter stream. Drawing its data from the reader-comments section of a South African broadsheet newspaper, the study investigates online, citizen-to-citizen deliberation by analysing the discursive practices through which competing understandings of social problems are articulated and contested. To advance this agenda, the paper deals specifically with user-generated comments posted in response to news stories on questions of race and racism in South Africa. The analysis works to discern and interpret the various sets of discourse practices that shape how citizens deliberate contentious political issues, especially racism. Since the website in question is designed to encourage the critical comparison of divergent interpretations of news events, without feeding directly into national policymaking, the study adopts an analytic framework that traces how citizens articulate arguments, rather than the instrumental effects that citizen deliberations might exert on policy. The paper starts from the argument that such expressive interactions are particularly crucial to current trends in South African politics, given that the precise nature of race and racism remains contested and uncertain. Centred on a sample of 2358 conversational moves in 814 posts to 18 news stories dealing with issues of race and racism, the analysis proceeds in a two-step fashion. The first stage conducts a qualitative content analysis that offers insights into the levels of reciprocity among commenters (do readers engage with each other or simply post isolated opinions?), as well as the structures of argumentation (do readers support opinions by citing evidence?). The second stage involves a more fine-grained discourse analysis, based on a theorisation of argumentation that delineates it into three components: opinions/conclusions, evidence/data to support opinions/conclusions, and warrants that explicate precisely how the evidence/data buttress the opinions/conclusions. By tracing the manifestation and frequency of specific argumentative practices, this study contributes to the archive of research currently aggregating around the practices that characterise South Africans’ engagement with provocative political questions, especially racism and racial inequity. Additionally, the study contributes to recent scholarship on the affordances of Web 2.0 software by eschewing a simplistic bifurcation between cyber-optimism and cyber-pessimism, in favour of a more nuanced and context-specific analysis of the patterns that structure online deliberation.
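
To make the two-step coding procedure concrete, the sketch below shows one possible way of recording coded conversational moves and tallying reciprocity and argument structure. The field names and coding categories (claim, evidence, warrant) are assumptions based on the Toulmin-style breakdown described above, not the authors' actual coding instrument.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class Move:
    """One coded conversational move in a user-comment thread."""
    post_id: str
    reply_to: Optional[str]   # None if the move does not address another commenter
    has_claim: bool           # an opinion/conclusion is stated
    has_evidence: bool        # data/evidence is cited in support of the claim
    has_warrant: bool         # the link between evidence and claim is made explicit

def argument_level(m: Move) -> str:
    """Classify a move by the most developed argumentative component it contains."""
    if m.has_warrant:
        return "claim + evidence + warrant"
    if m.has_evidence:
        return "claim + evidence"
    if m.has_claim:
        return "claim only"
    return "no claim"

# Hypothetical coded sample (the study itself codes 2358 moves in 814 posts)
moves = [
    Move("p1", None, True, False, False),
    Move("p2", "p1", True, True, False),
    Move("p3", "p2", True, True, True),
    Move("p4", None, True, False, False),
]

# Step 1: content-analysis tallies of reciprocity and argument structure
reciprocity = sum(m.reply_to is not None for m in moves) / len(moves)
structure = Counter(argument_level(m) for m in moves)

print(f"reciprocal moves: {reciprocity:.0%}")
print(dict(structure))
```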

Keywords: online deliberation, discourse analysis, qualitative content analysis, racism

Procedia PDF Downloads 177
2 Multiple Primary Pulmonary Meningiomas: A Case Report

Authors: Wellemans Isabelle, Remmelink Myriam, Foucart Annick, Rusu Stefan, Compère Christophe

Abstract:

Primary pulmonary meningioma (PPM) is a very rare tumor, and its occurrence has been reported only sporadically. Multiple PPMs are even more exceptional, and herein we report, to the best of our knowledge, the fourth case, focusing on the clinicopathological features of the tumor. Moreover, the possible relationship between the use of progesterone-only contraceptives and the development of these neoplasms will be discussed. Case Report: We report a case of a 51-year-old female presenting three solid pulmonary nodules, with the following localizations: right upper lobe, middle lobe, and left lower lobe, described as incidental findings on computed tomography (CT) during a pre-bariatric surgery check-up. The patient revealed no drinking or smoking history. The physical exam was unremarkable except for obesity. The lesions ranged in size between 6 and 24 mm and presented as solid nodules with lobulated contours. The largest lesion, situated in the middle lobe, had mild fluorodeoxyglucose (FDG) uptake on F-18 FDG positron emission tomography (PET)/CT, highly suggestive of a primary lung neoplasm. For pathological assessment, video-assisted thoracoscopic middle lobectomy and wedge resection of the right upper nodule were performed. Histological examination revealed a relatively well-circumscribed solid proliferation of bland meningothelial cells growing in whorls and lobular nests, presenting intranuclear pseudo-inclusions and psammoma bodies. No signs of anaplasia were observed. The meningothelial cells expressed Vimentin diffusely and Progesterone receptors focally, and were negative for epithelial markers (cytokeratin (CK) AE1/AE3, CK7, CK20, Epithelial Membrane Antigen (EMA)), neuroendocrine markers (Synaptophysin, Chromogranin, CD56), and Estrogen receptors. The proliferation labelling index Ki-67 was low (<5%). Metastatic meningioma was ruled out by brain and spine magnetic resonance imaging (MRI) scans. The third lesion, localized in the left lower lobe, was followed up and resected three years later because of its slow but significant growth (14 mm to 16 mm), alongside two new infracentimetric lesions. Those three lesions showed a morphological and immunohistochemical profile similar to the previously resected lesions. The patient was disease-free one year after the last surgery. Discussion: Although PPMs are mostly benign and slow-growing tumors with an excellent prognosis, they do not present specific radiological characteristics, and it is difficult to differentiate them from other lung tumors, making histopathologic examination essential. Aggressive behavior is associated with atypical or anaplastic features (WHO grades II–III). The etiology is still uncertain, and different mechanisms have been proposed. A causal connection between sex hormones and meningothelial proliferation has long been suspected, and the few studies examining progesterone-only contraception and meningioma risk have all suggested an association. In line with this, our patient was treated with a Levonorgestrel (a progesterone agonist) intra-uterine device (IUD). Conclusions: PPM, defined by the typical histological and immunohistochemical features of meningioma in the lungs and the absence of central nervous system lesions, is an extremely rare neoplasm, mainly solitary and associated with indolent growth. Because of the unspecific radiologic findings, it should always be considered in the differential diagnosis of lung neoplasms.
Regarding multiple PPMs, only three cases are reported in the literature and, to the best of our knowledge, this is the first described in a woman treated with a progesterone-only IUD.

Keywords: pulmonary meningioma, multiple meningioma, meningioma, pulmonary nodules

Procedia PDF Downloads 114
1 Tool for Maxillary Sinus Quantification in Computed Tomography Exams

Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina

Abstract:

The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, to heat or humidify inspired air, for thermoregulation, to impart resonance to the voice, and for other purposes. Thus, the real function of the MS is still uncertain. Furthermore, the MS anatomy is complex and varies from person to person. Many diseases may affect the development process of the sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables a quantitative analysis. However, this is not always possible in the clinical routine, and when it is, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic. Additionally, manual methods present inter- and intraindividual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. This study was developed with ethical approval from the authors’ institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) based on features such as pixel value, spatial distribution, and shape. The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, yielding the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist. For the comparison, we used Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. In the statistical comparison of the two methods, the linear regression showed a strong association and low dispersion between variables. The Bland–Altman analyses showed no significant differences between the analyzed methods. The Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.
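
The hybrid detection-and-segmentation pipeline described above can be sketched in a few lines. The original tool was written in Matlab®; the version below is a simplified Python analogue (an SVM for pixel-level detection, a basic region-growing pass, and a morphological clean-up) run on a synthetic slice, with features, thresholds, and intensities that are illustrative assumptions rather than the authors' actual parameters.

```python
import numpy as np
from collections import deque
from sklearn.svm import SVC
from scipy import ndimage

def pixel_features(slice_hu):
    """Per-pixel features: intensity plus normalised (row, col) position."""
    rows, cols = np.indices(slice_hu.shape)
    return np.column_stack([slice_hu.ravel(),
                            rows.ravel() / slice_hu.shape[0],
                            cols.ravel() / slice_hu.shape[1]])

def region_growing(slice_hu, seeds, tol=150.0):
    """Grow a region from seed pixels, accepting 4-neighbours whose intensity
    stays within `tol` HU of the mean seed intensity."""
    mask = np.zeros(slice_hu.shape, dtype=bool)
    ref = np.mean([slice_hu[r, c] for r, c in seeds])
    queue = deque(seeds)
    while queue:
        r, c = queue.popleft()
        if mask[r, c]:
            continue
        mask[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                    and not mask[nr, nc] and abs(slice_hu[nr, nc] - ref) <= tol):
                queue.append((nr, nc))
    return mask

# Synthetic stand-in for a CT slice: an air-filled "sinus" (~ -900 HU) in soft tissue
rng = np.random.default_rng(0)
slice_hu = rng.normal(40, 10, (64, 64))
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 25:45] = True
slice_hu[truth] = rng.normal(-900, 30, truth.sum())

# Train the SVM on labelled pixels (in the real tool: pixels from annotated exams)
clf = SVC(kernel="rbf", gamma="scale").fit(pixel_features(slice_hu), truth.ravel())

# Detection -> seed points -> region growing -> morphological clean-up
pred = clf.predict(pixel_features(slice_hu)).reshape(slice_hu.shape).astype(bool)
seeds = list(zip(*np.nonzero(pred)))
mask = ndimage.binary_opening(region_growing(slice_hu, seeds)) if seeds else pred

# Jaccard similarity against the known mask (the paper reports > 0.90 on real exams)
jaccard = np.logical_and(mask, truth).sum() / np.logical_or(mask, truth).sum()
print(f"segmented pixels: {mask.sum()}, Jaccard vs. ground truth: {jaccard:.2f}")
```

Applied slice by slice and multiplied by the voxel size, the summed mask gives the volume estimate that the Bland-Altman and regression analyses compare against the radiologist's manual segmentation.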

Keywords: maxillary sinus, support vector machine, region growing, volume quantification

Procedia PDF Downloads 504