Search results for: overt pronoun constraint

169 The Effect of the Cultural Constraint on the Reform of Corporate Governance: The Observation of Taiwan's Efforts to Transform Its Corporate Governance

Authors: Yuanyi (Richard) Fang

Abstract:

Under the theory of La Porta, Lopez-de-Silanes, Shleifer, and Vishny (LLS&V), if a country increases its legal protections for minority shareholders, it can develop the ideal securities market that arises only under dispersed-ownership corporate governance. However, path-dependence scholars such as Lucian Arye Bebchuk and Mark J. Roe have presented a different view from LLS&V. They point out that the initial ownership structure and traditional culture will prevent change of the corporate governance structure through legal reform. This paper contends that traditional culture is an important factor in the formation of a corporate governance structure. However, it is not impossible for a government to change its traditional corporate governance structure and traditional culture, because culture does not remain intact: it evolves with time. The occurrence of important events affects people's psychological processes, and these psychological processes in turn affect the evolution of culture. New cultural norms can help defeat the force of the traditional culture and the resistance arising from the initial corporate ownership structure. Using Taiwan as an example, and by analyzing the historical background, the related corporate rules, and media reactions to the adoption of new rules, this paper tries to show that Taiwan's cultural norms have not remained intact but have changed with time. It further argues that culture is not always a hurdle to the adoption of the dispersed-ownership corporate governance structure, since culture can change; a new culture can provide strong support for the adoption of a new corporate governance structure.

Keywords: LLS&V theory, corporate governance, culture, path-dependent theory

Procedia PDF Downloads 450
168 Determination of Direct Solar Radiation Using Atmospheric Physics Models

Authors: Pattra Pukdeekiat, Siriluk Ruangrungrote

Abstract:

This work was undertaken to determine direct solar radiation precisely using atmospheric physics models, since the accurate prediction of solar radiation is necessary and useful for solar energy applications as well as atmospheric research. Models and techniques for calculating regional direct solar radiation are essential when instrumental measurement is unavailable. The investigation was mathematically governed by six astronomical parameters, i.e., declination (δ), hour angle (ω), solar time, solar zenith angle (θz), extraterrestrial radiation (Iso) and eccentricity (E0), along with two atmospheric parameters, i.e., air mass (mr) and dew point temperature, at the Bangna meteorological station (13.67° N, 100.61° E) in Bangkok, Thailand. Five models of solar radiation determination were analysed under a clear-sky assumption, accompanied by three statistical tests, Mean Bias Difference (MBD), Root Mean Square Difference (RMSD) and coefficient of determination (R²), in order to validate the accuracy of the obtainable results. The calculated direct solar radiation was in the range of 491-505 W/m² with a relative percentage error of 8.41% for winter, and 532-540 W/m² with a relative percentage error of 4.89% for summer 2014. Additionally, datasets of seven continuous days representing each season were considered; the MBD, RMSD and R² were -0.08, 0.25 and 0.86 for the Kumar model in winter, and -0.14, 0.35 and 3.29 for the CSR model in summer, respectively. In summary, the determination of direct solar radiation based on atmospheric models and empirical equations can advantageously provide immediate and reliable values of the solar components for any site in the region without the constraint of actual measurement.
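
As an illustration of the three validation statistics named above, here is a minimal sketch in Python (data and normalization conventions are hypothetical, not taken from the paper):

```python
import numpy as np

def mbd(predicted, measured):
    """Mean Bias Difference: average signed deviation of model from data."""
    return np.mean(predicted - measured)

def rmsd(predicted, measured):
    """Root Mean Square Difference between model and data."""
    return np.sqrt(np.mean((predicted - measured) ** 2))

def r_squared(predicted, measured):
    """Coefficient of determination (1 = perfect agreement)."""
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - np.mean(measured)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical example: model output vs. pyranometer readings in W/m^2
measured = np.array([495.0, 500.0, 498.0, 503.0])
predicted = np.array([491.0, 505.0, 497.0, 501.0])
print(mbd(predicted, measured), rmsd(predicted, measured),
      r_squared(predicted, measured))
```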

Keywords: atmospheric physics models, astronomical parameters, atmospheric parameters, clear sky condition

Procedia PDF Downloads 389
167 Biogeography Based CO2 and Cost Optimization of RC Cantilever Retaining Walls

Authors: Ibrahim Aydogdu, Alper Akin

Abstract:

In this study, minimization of the cost and the CO2 emission of RC cantilever retaining wall designs has been performed with the Biogeography-Based Optimization (BBO) algorithm. This has been achieved by developing computer programs that use the BBO algorithm to minimize the cost and the CO2 emission of RC retaining walls. The objective functions of the optimization problem are defined as the cost, the CO2 emission, and a weighted aggregate of the cost and CO2 functions of the RC retaining walls. In the formulation of the optimum design problem, the height and thickness of the stem, the length of the toe projection, the thickness of the stem at base level, the length and thickness of the base, the depth and thickness of the key, the distance from the toe to the key, and the number and diameter of the reinforcement bars are treated as design variables. Flexural and shear strength constraints and minimum/maximum limitations for the reinforcement bar areas are derived from the American Concrete Institute (ACI 318-14) design code, and the development length conditions for suitable detailing of reinforcement are treated as a further constraint. The obtained optimum designs must satisfy the factors of safety against the failure modes (overturning, sliding and bearing) as well as strength, serviceability and other required limitations in order to attain practically acceptable shapes. To demonstrate the efficiency and robustness of the presented BBO algorithm, an optimum design example for retaining walls is presented and the results are compared with those previously available in the literature.
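
As background for how BBO searches a design space, here is a minimal sketch of its migration/mutation cycle (a generic textbook version, not the authors' program; constraint handling, e.g. penalties for ACI 318-14 violations, is omitted and all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def bbo_step(pop, fitness, p_mutate=0.05, bounds=(0.0, 1.0)):
    """One migration/mutation cycle of Biogeography-Based Optimization.

    pop: (n_habitats, n_vars) array of candidate wall designs.
    fitness: smaller is better (e.g. cost or CO2 of a design).
    """
    n, d = pop.shape
    order = np.argsort(fitness)                 # best habitat first
    ranks = np.empty(n)
    ranks[order] = np.arange(n)
    mu = (n - ranks) / (n + 1)                  # emigration: high for good habitats
    lam = 1.0 - mu                              # immigration: high for poor habitats
    new_pop = pop.copy()
    for i in range(n):
        for j in range(d):
            if rng.random() < lam[i]:           # habitat i accepts a feature...
                k = rng.choice(n, p=mu / mu.sum())  # ...from a donor chosen by mu
                new_pop[i, j] = pop[k, j]
            if rng.random() < p_mutate:         # random mutation keeps diversity
                new_pop[i, j] = rng.uniform(*bounds)
    return new_pop
```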

Keywords: biogeography, meta-heuristic search, optimization, retaining wall

Procedia PDF Downloads 378
166 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach

Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed

Abstract:

Sign Languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, one that interestingly involves their lexical and syntactic levels of organization. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as those found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMMs (T2FHMMs) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD is an extension of eigendecomposition to non-square matrices, used here to reduce multi-attribute hand gesture data to feature vectors; it optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators by adequate Type-2 fuzzy operators, which permit us to relax the additive constraint of probability measures. T2FHMMs are therefore able to handle both the random and the fuzzy uncertainties existing universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals and deliver better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
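
As an illustration of the SVD-based feature extraction step, a minimal sketch (hypothetical image size and feature length; not the authors' code):

```python
import numpy as np

def svd_features(image, k=10):
    """Reduce a 2-D hand-gesture image to a k-dim feature vector of its
    leading singular values (SVD generalizes eigendecomposition to
    non-square matrices and exposes the image's geometric structure)."""
    s = np.linalg.svd(image, compute_uv=False)   # singular values, descending
    s = s[:k]
    return s / np.linalg.norm(s)                 # scale-invariant observable

# Hypothetical example: a 64x48 grayscale hand image
image = np.random.default_rng(1).random((64, 48))
print(svd_features(image))
```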

Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov Model

Procedia PDF Downloads 436
165 What Is At Stake When Developing and Using a Rubric to Judge Chemistry Honours Dissertations for Entry into a PhD?

Authors: Moira Cordiner

Abstract:

After an Australian university approved a policy to improve the quality of assessment practices, the author commenced a four-year appointment in 2008 as an academic developer (AD) with expertise in criterion-referenced assessment, supporting 40 'champions' in their Schools. This presentation is based on the experiences of a group of Chemistry academics who worked with the AD to develop and implement an honours dissertation rubric. Honours is a research year following a three-year undergraduate degree; if the standard of the student's work (mainly the dissertation) is high enough, the student can commence a PhD. What became clear during the process was that much more was at stake than just the successful development and trial of the rubric, including academics' reputations, university rankings and research outputs. Working with the champion Head of School (HOS) and the honours coordinator, the AD helped them adapt an honours rubric that she had helped create and trial successfully for another Science discipline. A year of many meetings and complex power plays between the two academics finally resulted in a version that was critiqued by the Chemistry teaching and learning committee. Accompanying the rubric was an explanation of grading rules plus a list of supervisor expectations to explain to students how the rubric was used for grading. Further refinements were made until all staff were satisfied. The rubric was trialled successfully in 2011, after which small changes were made; it was then adapted and implemented for Medicine honours, with the AD's help, in 2012. Despite coming to consensus about the statements of quality in the rubric, a few academics found it challenging to match these to the dissertations and allocate a grade. They had had no time to undertake training to do this, or to make overt the implicit criteria and standards which some admitted they were using ('I know what a first class is'). Other factors affected grading as well: in a small School where all supervisors knew each other and the students, friendships and collegiality were at stake if low grades were given; no external examiners were appointed (all were internal), with the potential for bias; supervisors' reputations were at stake if their students did not receive a good grade; the School's reputation was at risk if insufficient honours students qualified for PhD entry; and research output was jeopardised without enough honours students to work on supervisors' projects. A further complication during the study was a restructure of the university and retrenchments, with pressure to increase research output as world rankings assumed greater importance to senior management. In conclusion, much more was at stake than developing a usable rubric. The HOS had to be seen to champion the 'new' assessment practice while balancing institutional demands for increased research output and ensuring that as many honours dissertations as possible met high standards, so that the percentage of PhD completions and research output would eventually rise. It is therefore in the institution's best interest for this cycle to be maintained, as it affects rankings and reputations. In this context, are rubrics redundant?

Keywords: explicit and implicit standards, judging quality, university rankings, research reputations

Procedia PDF Downloads 311
164 Impact of COVID-19 on Hospital Waste

Authors: Caroline Correia, Stefani Perna, John Gaughan, Elizabeth Cerceo

Abstract:

Introduction: The COVID-19 pandemic has brought unprecedented changes to how hospitals function on a daily basis. Increased personal protective equipment (PPE) usage and measures to pre-package, separate, and decontaminate have the potential to increase the waste load. However, limiting non-essential surgeries drastically reduces operating room (OR) waste, and restricting visitation policies to contain outbreaks may help conserve resources. The net impact of these policy changes, together with increased disposable PPE usage, on hospital waste production is unknown. Methods: Waste produced in pounds (lbs) was measured for January through June of both 2019 and 2020 through Stericycle at Cooper University Hospital in Camden, NJ. This timeframe was selected because the pandemic began in January 2020 in the US. The total waste produced during this period was 328,623 lbs in 2019 and 306,454 lbs in 2020. Using a comparison of Poisson counts (α = 0.05), significantly less waste was produced in 2020 (p < 0.001). The amounts of sharps and regulated medical waste (grossly bloody items), which account for 10-15% of the total waste produced, were also significantly decreased (p < 0.0001 and p = 0.0002, respectively). Discussion: Despite the increased usage of disposable PPE, overall hospital waste decreased during the pandemic compared with the year before. As surgeries are estimated to be responsible for up to one half of the waste produced by hospitals, it is possible that the constraint on elective procedures contributed to the decreased waste in all three categories; an estimated 35% decrease in surgical volume would be expected to affect waste production. The effects of the pandemic on waste production should continue to be monitored to understand the environmental impact as health systems resume backlogged surgeries at a higher volume.
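
The abstract does not spell out the exact Poisson procedure; one standard choice, sketched below, is the conditional (binomial) test for two Poisson counts with equal exposure, applied here to the totals quoted above:

```python
from scipy.stats import binomtest

# Total waste in pounds, Jan-Jun (from the abstract)
waste_2019 = 328_623
waste_2020 = 306_454

# Conditional test for two Poisson counts with equal exposure:
# under H0 (equal rates), waste_2019 ~ Binomial(total, 0.5).
total = waste_2019 + waste_2020
result = binomtest(waste_2019, total, p=0.5, alternative="greater")
print(f"p-value = {result.pvalue:.3g}")  # tiny p-value: 2020 waste is lower
```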

Keywords: COVID-19, hospital, surgery, waste

Procedia PDF Downloads 84
163 Evaluating the Seismic Stress Distribution in the High-Rise Structures Connections with Optimal Bracing System

Authors: H. R. Vosoughifar, Seyedeh Zeinab Hosseininejad, Nahid Shabazi, Seyed Mohialdin Hosseininejad

Abstract:

In recent years, structural designers have advocated wider application of energy-absorption devices for damping lateral loads. The Unbonded Braced Frame (UBF) system is one of the efficient damping systems; it is made of a smart combination of steel and concrete or mortar. In this system, the steel bears the earthquake-induced axial force, in compression or in tension, without loss of strength, while the concrete or mortar around the steel core acts as a constraint and prevents the brace from buckling under seismic axial load. In this study, the optimal bracing system for high-rise structures has been evaluated by considering the seismic stress distribution in the connections. An actual 18-story structure was modeled in appropriate Finite Element (FE) software and braced, in turn, with UBF, Eccentrically Braced Frame (EBF) and Concentrically Braced Frame (CBF) systems. Nonlinear static pushover and time-history analyses were then performed; the results demonstrate that the UBF system reduces drift values in high-rise buildings. Further statistical analyses show a significant difference between the drift values of the UBF system and those obtained with the EBF and CBF systems. Hence, the seismic stress distribution in the connections of the structure braced with the UBF system was investigated.

Keywords: optimal bracing system, high-rise structure, finite element analysis (FEA), seismic stress

Procedia PDF Downloads 399
162 Use of Quasi-3D Inversion of VES Data Based on Lateral Constraints to Characterize the Aquifer and Mining Sites of an Area Located in the North-East of Figuil, North Cameroon

Authors: Fofie Kokea Ariane Darolle, Gouet Daniel Hervé, Koumetio Fidèle, Yemele David

Abstract:

The electrical resistivity method is successfully used in this paper in order to obtain a clearer picture of the subsurface of the North-East of Figuil in northern Cameroon. It is worth noting that this method is most often used when the objective of the study is to image shallow subsoils by considering them as a set of stratified ground layers. The problem to be solved is very often environmental, and in this case it is necessary to perform an inversion of the data in order to obtain a complete and accurate picture of the parameters of the said layers. In this work, thirty-three (33) Schlumberger VES soundings were carried out on an irregular grid to investigate the subsurface of the study area. The 1D inversion, applied as a preliminary modeling tool and correlated with the mechanical drilling results, indicates a complex subsurface lithology distribution consisting mainly of marbles and schists. Moreover, the quasi-3D inversion with lateral constraints shows that the misfit between the observed field data and the model response is acceptable, with a value lower than 10%. The method also reveals the existence of two water-bearing formations in the considered area: the first is the schist or weathering aquifer (unsuitable), and the other is the marble or fracturing aquifer (suitable). The final quasi-3D inversion results and geological models indicate proper sites for groundwater prospecting and for mining exploitation, thus supporting the economic development of the study area.

Keywords: electrical resistivity method, 1D inversion, quasi 3D inversion, groundwaters, mining

Procedia PDF Downloads 138
161 Contextual Toxicity Detection with Data Augmentation

Authors: Julia Ive, Lucia Specia

Abstract:

Understanding and detecting toxicity is an important problem to support safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use "toxicity" as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available do not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist slurs, etc.), so that context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contain labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations both of the data they are trained on (the same problems stated above) and of the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking ours against previous models on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared with baselines that are non-contextual, or contextual but agnostic of the conversation structure.
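
The abstract does not publish the architecture details; the sketch below is one generic way to make a classifier aware of the utterance hierarchy (encode tokens per utterance, then encode the sequence of utterance vectors), with all sizes and names hypothetical:

```python
import torch
import torch.nn as nn

class HierarchicalToxicityClassifier(nn.Module):
    """Sketch of a context-aware classifier: encode each utterance in a
    conversation thread, then run a second encoder over the sequence of
    utterance vectors so the target tweet is judged in context."""

    def __init__(self, vocab_size=30_000, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.utt_encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.ctx_encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # toxic vs. non-toxic logits

    def forward(self, thread):
        # thread: (batch, n_utterances, n_tokens) of token ids;
        # the last utterance is the target tweet.
        b, n, t = thread.shape
        tokens = self.embed(thread.view(b * n, t))    # (b*n, t, emb)
        _, utt_vec = self.utt_encoder(tokens)         # (1, b*n, hidden)
        utt_vec = utt_vec.squeeze(0).view(b, n, -1)   # (b, n, hidden)
        _, ctx_vec = self.ctx_encoder(utt_vec)        # (1, b, hidden)
        return self.head(ctx_vec.squeeze(0))          # (b, 2)

model = HierarchicalToxicityClassifier()
dummy = torch.randint(1, 30_000, (4, 3, 20))  # 4 threads, 3 utterances each
print(model(dummy).shape)                     # torch.Size([4, 2])
```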

Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing

Procedia PDF Downloads 142
160 An Assessment of Vegetable Farmers’ Perceptions about Post-harvest Loss Sources in Ghana

Authors: Kofi Kyei, Kenchi Matsui

Abstract:

Loss of vegetable products has been a major constraint in the post-harvest chain. Sources of post-harvest loss in the vegetable industry range from the time of harvesting through handling to the various market centers. Identifying vegetable farmers' perceptions of post-harvest loss sources is one way of addressing this issue. In this paper, we assessed farmers' perceptions of the sources of post-harvest losses in the Ashanti Region of Ghana, and identified the factors that influence those perceptions. To understand farmers' perceptions clearly, we selected the Sekyere-Kumawu District, one of the Region's major producers of vegetables. Based on a questionnaire survey, 100 vegetable farmers growing tomato, pepper, okra, cabbage, and garden egg were purposively selected from five communities in the Sekyere-Kumawu District. For farmers' perceptions, a five-point Likert scale was employed: on a scale from 1 (no loss) to 5 (extremely high loss), we processed the scores for each vegetable harvest. To clarify the factors influencing farmers' perceptions, Pearson correlation analysis was used. Our findings revealed that farmers perceive post-harvest loss from pest infestation as the most extreme loss, whereas they did not perceive loss during transportation as a serious source of post-harvest loss. The Pearson correlation analysis further revealed that farmers' age, gender, level of education, and years of experience influenced their perceptions. The paper then discusses some recommendations to minimize post-harvest loss in the region.

Keywords: Ashanti Region, pest infestation, post-harvest loss, vegetable farmers

Procedia PDF Downloads 146
159 Bi-Criteria Vehicle Routing Problem for Possibility Environment

Authors: Bezhan Ghvaberidze

Abstract:

A multiple criteria optimization approach for the solution of the Fuzzy Vehicle Routing Problem (FVRP) is proposed. For the possibility environment, the levels of movements between customers are calculated by a constructed interactive simulation algorithm. The first criterion of the bi-criteria optimization problem, minimization of the expectation of total fuzzy travel time on closed routes, is constructed for the FVRP. A new, second criterion, maximization of the feasibility of movement on the closed routes, is constructed by the Choquet finite averaging operator. The FVRP is reduced to a bi-criteria partitioning problem over the so-called "promising" routes, selected from all admissible closed routes; a suitable selection of the "promising" routes allows us to solve the reduced problem in real-time computing. For the numerical solution of the bi-criteria partitioning problem, the ε-constraint approach is used, and an exact algorithm is implemented based on D. Knuth's Dancing Links technique and the DLX algorithm. The main objective was to present a new approach for the FVRP for situations in which there are difficulties while moving on the roads; this approach is called FVRP for extreme conditions (FVRP-EC). A further aim of this paper was to construct the solution model for the formulated FVRP. Results are illustrated on a numerical example in which all Pareto-optimal solutions are found. An approach for the more complex FVRP model with time windows was also developed, and a numerical example is presented in which optimal routes are constructed for extreme conditions on the roads.
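
A minimal sketch of the ε-constraint idea on an enumerated candidate set (the paper instead solves a partitioning problem exactly with DLX; data and names here are hypothetical):

```python
import numpy as np

def epsilon_constraint(candidates, n_eps=5):
    """Epsilon-constraint sweep for a bi-criteria choice among enumerated
    'promising' route sets: minimize f1 (expected fuzzy travel time)
    subject to f2 (feasibility of movement) >= eps, for several eps."""
    f2_vals = [f2 for _, f2 in candidates]
    solutions = set()
    for eps in np.linspace(min(f2_vals), max(f2_vals), n_eps):
        feasible = [c for c in candidates if c[1] >= eps]
        if feasible:
            solutions.add(min(feasible))   # smallest f1 among feasible
    return sorted(solutions)

# Hypothetical candidates: (expected travel time, movement feasibility)
candidates = [(10.0, 0.60), (12.0, 0.90), (11.0, 0.70), (15.0, 0.95)]
print(epsilon_constraint(candidates))
```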

Keywords: combinatorial optimization, fuzzy vehicle routing problem, multiple objective programming, possibility theory

Procedia PDF Downloads 454
158 Comparative Analysis of Yield before and after Access to Extension Services among Crop Farmers in Bauchi Local Government Area of Bauchi State, Nigeria

Authors: U. S. Babuga, A. H. Danwanka, A. Garba

Abstract:

The research was carried out to compare the yields of respondents before and after access to extension services on crop production technologies in the study area. Data were collected through questionnaires administered to seventy-five randomly selected respondents and analyzed using descriptive statistics, a t-test and regression models. The results disclosed that the majority (97%) of the respondents had attended one form of school or another, and that most (78.67%) had farm sizes ranging between 1-3 hectares. The majority of the respondents adopted improved crop varieties, plant spacing, herbicide, fertilizer application, land preparation, crop protection, crop processing and storage of farm produce. The t-test between the yields of respondents before and after access to extension services shows a significant (p < 0.001) difference. The regression also indicated that farm size was significant (p < 0.001), while household size, years of farming experience and extension contact were significant at p < 0.005. The major constraints to the adoption of crop production technologies were the shortage of extension agents, the high cost of technology and the lack of access to credit facilities. The major prerequisites for the improvement of extension services are the employment of more extension agents and adequate training. Adequate agricultural credit to farmers at low interest rates would further enhance their adoption of crop production technologies.
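
For readers unfamiliar with the before/after comparison, a minimal sketch of a paired t-test on hypothetical yield data (the paper's raw data are not shown here):

```python
import numpy as np
from scipy import stats

# Hypothetical yields (t/ha) for the same farmers before and after
# access to extension services.
before = np.array([1.2, 0.9, 1.5, 1.1, 1.3, 0.8, 1.4])
after = np.array([2.1, 1.6, 2.4, 1.9, 2.2, 1.5, 2.0])

# Paired t-test: each farmer serves as their own control.
t_stat, p_value = stats.ttest_rel(after, before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```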

Keywords: comparative, analysis, yield, access, extension

Procedia PDF Downloads 329
157 Variance-Aware Routing and Authentication Scheme for Harvesting Data in Cloud-Centric Wireless Sensor Networks

Authors: Olakanmi Oladayo Olufemi, Bamifewe Olusegun James, Badmus Yaya Opeyemi, Adegoke Kayode

Abstract:

The wireless sensor network (WSN) has made a significant contribution to the emergence of various intelligent services and cloud-based applications. Most of the time, the harvested data are stored on a cloud platform for efficient management and sharing among different services or users. However, the sensitivity of the data makes them prone to various confidentiality and performance-related attacks during and after harvesting. Various security schemes have been developed to ensure the integrity and confidentiality of WSN data, but their specificity towards particular attacks, together with the resource constraints and heterogeneity of WSNs, makes most of these schemes imperfect. In this paper, we propose a secure variance-aware routing and authentication scheme with two-tier verification to collect, share, and manage WSN data. The scheme is capable of classifying a WSN into different subnets, detecting any attempt at a wormhole or black hole attack during harvesting, and enforcing access control on the harvested data stored in the cloud. The results of the analysis showed that the proposed scheme has more security functionalities than other related schemes, solves most WSN and cloud security issues, prevents wormhole and black hole attacks, identifies attackers during data harvesting, and enforces access control on the harvested data stored in the cloud at low computational, storage, and communication overheads.

Keywords: data block, heterogeneous IoT network, data harvesting, wormhole attack, black hole attack, access control

Procedia PDF Downloads 44
156 Women Entrepreneurship as an Inventive Approach to Ensure Sustainable Development in Anambra State

Authors: S. Muogbo Uju, Akpunonu Uju

Abstract:

The prevailing harsh environmental factors, coupled with the poverty rate and unemployment, propel a high rate of entrepreneurial activity in developing countries of the world. Women entrepreneurs operate within gender bias, among other constraints that can constitute a threat or create opportunity for them. This empirical paper investigates and critically examines women entrepreneurship as an inventive approach to sustainable development in Anambra State. The study used descriptive statistics (frequencies, means, and percentages) to answer the three research questions posed. Hypothesis testing was done with Pearson product-moment correlation, and multiple regression was employed in the data analysis, with SPSS (Statistical Package for the Social Sciences) software used to run the analysis. Three hundred and fifty-three (353) copies of the questionnaire were administered, and one hundred and forty-six (146) copies were returned. The findings of this study portrayed a significant impact of women entrepreneurship activities on job creation, wealth creation, youth empowerment, poverty reduction, employment generation, and the standard of living of the people. The findings therefore prescribe that government should ensure that managerial lessons accompany skill acquisition programs, so that participants understand the rudiments of owning and sustaining a business. The study also recommends that women entrepreneurs who have overcome the inertia of starting a business come together to create platforms that can help those women who are yet to take the step or kick-start such a venture.

Keywords: women entrepreneurship, skill acquisition, sustainability, wealth creation

Procedia PDF Downloads 420
155 Critical Discourse Analysis of Xenophobia in UK Political Party Blogs

Authors: Nourah Almulhim

Abstract:

This paper takes a critical discourse analysis (CDA) approach to investigate discourse and ideology in political blogs, focusing in particular on the Conservative Home blog from the UK's current governing party. The discourse strategies of the Conservative party member who writes the blog, alongside the discourse used by members of the public who reply in the below-the-line comments, are examined. The blog discourse reflects the writer's political identity and authorial voice, while the analysis of the below-the-line comments shows how members of the public engage in creating adversative positions, introducing different language users who bring their own individual and collective identities. These language users can play the role of news reporters, political analysts, protesters or supporters of a specific agenda and of current socio-political topics or events. This study takes a qualitative approach to analyse the discriminatory context towards Islam/Muslims in the Conservative Home blog. A cognitive approach is adopted, and an analysis of dominant discourses in the blog text and the below-the-line comments is used. The focus of the study is, firstly, on the construction of self/collective national identity in comparison to Muslim identity, highlighting the in-group and out-group construction; secondly, on the types of attitudes, whether feelings or judgements, related to these social actors, as they are explicated to draw on social values; and thirdly, on the role of discursive strategies in justifying and legitimizing Islamophobic discriminatory practices. The analysis is therefore based on a systematic analysis of social actors, drawing on actors, actions, and arguments to explicate identity construction and its development in the different discourses. A socio-semantic categorization of social actors is implemented to identify the discursive strategies, in addition to using the literature to understand these strategies. An appraisal analysis is further used to classify attitudes and elaborate on core values in both genres. Finally, the grammar of othering is applied to explain how discriminatory dichotomies of 'Us' vs. 'Them' are carried in discourse. Some of the key findings can be summarized in two main points. First, the discursive practices used to represent Muslims/Islam as different from 'Us' differ between the two genres: the blogger uses a covert voice while the commenters generally use an overt voice. That is, the blogger uses a mitigated strategy to represent Muslim identity, for example using the noun phrase 'British Muslim' but then representing them as 'radical' and 'terrorists'; in the below-the-line comments, by contrast, a direct strategy with an active declarative voice is used to negatively represent Muslim identity as 'oppressors' and 'terrorists', with no inclusion of the noun phrase 'British Muslims'. Second, the negotiation of 'British' identity and values, such as culture and democracy, as unique and under threat from Muslims is prominent in the comment section, while in the blog articles these standpoints are not represented.

Keywords: xenophobia, blogs, identity, critical discourse analysis

Procedia PDF Downloads 54
154 Finite Element Analysis of Cold Formed Steel Screwed Connections

Authors: Jikhil Joseph, S. R. Satish Kumar

Abstract:

Steel structures are commonly used for rapid erection and multistory construction due to their inherent advantages. However, the high accuracy required in detailing and the heavier sections make them difficult to erect in place and to transport. Cold-formed steel, which is specially made by reducing carbon and other alloys, is nowadays used to make thin-walled structures. Various types of connections have been reported as well as practiced for thin-walled members, such as bolting, riveting, welding and other mechanical connections; self-drilling screw connections are commonly used for cold-formed purlin-sheeting connections. In this paper, an attempt is made to develop a moment-resisting frame that can be rapidly and remotely constructed with thin-walled sections and self-drilling screws. Semi-rigid moment connections are developed with rectangular thin-walled tubes and screws. The finite element analysis program ABAQUS is used for modelling the screwed connections. The various modelling procedures for simulating the connection behavior, such as the tie-constraint model, the oriented spring model and solid-interaction modelling, are compared and critically reviewed. From the experimental validations, solid-interaction modelling was identified as the most accurate and is used for predicting the connection behavior. From the finite element analysis, hysteresis curves and the modes of failure were identified. Parametric studies were then performed on the connection model to optimize the connection configuration and obtain the desired connection characteristics.

Keywords: buckling, cold formed steel, finite element analysis, screwed connections

Procedia PDF Downloads 159
153 The Pricing-Out Phenomenon in the U.S. Housing Market

Authors: Francesco Berald, Yunhui Zhao

Abstract:

The COVID-19 pandemic further extended the multi-year housing boom in advanced economies and emerging markets alike, amid massive monetary easing during the pandemic. In this paper, we analyze the pricing-out phenomenon in the U.S. residential housing market due to higher house prices associated with monetary easing. We first set up a stylized general equilibrium model and show that although monetary easing decreases the mortgage payment burden, it raises house prices and lowers housing affordability for first-time homebuyers (through the initial housing wealth channel and the liquidity constraint channel, both of which increase repeat buyers' housing demand), and increases housing wealth inequality between first-time and repeat homebuyers. We then use U.S. household-level data to quantify the effect of the house price change on housing affordability relative to that of the interest rate change. We find evidence of the pricing-out effect for all homebuyers; moreover, we find that the pricing-out effect is stronger for first-time homebuyers than for repeat homebuyers. The paper highlights the importance of accounting for general equilibrium effects and distributional implications of monetary policy while assessing housing affordability. It also calls for complementing monetary easing with well-targeted policy measures that can boost housing affordability, particularly for first-time and lower-income households. Such measures are also needed during aggressive monetary tightening, given that the fall in house prices may be insufficient or too slow to fully offset the immediate adverse impact of higher rates on housing affordability.
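
A back-of-the-envelope illustration of the mechanism (hypothetical numbers, not the paper's calibration): a lower rate cuts the payment per dollar borrowed, but if easing bids prices up enough, a first-time buyer's payment and down payment can both rise.

```python
def monthly_payment(price, annual_rate, years=30, down=0.20):
    """Standard fixed-rate mortgage annuity payment."""
    principal = price * (1 - down)
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

# Before easing: 4% rate, $300k house.
# After easing: 3% rate, but the price is bid up to $345k (+15%).
print(f"before: ${monthly_payment(300_000, 0.04):,.0f}/mo")
print(f"after:  ${monthly_payment(345_000, 0.03):,.0f}/mo")
# The payment rises slightly despite the lower rate, and the 20% down
# payment rises by $9,000: the first-time buyer is priced out.
```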

Keywords: pricing-out, U.S. housing market, housing affordability, distributional effects, monetary policy

Procedia PDF Downloads 9
152 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy, with restrictions including limited warehouse space, budget, number of orders, average shortage time and maximum permissible shortage. Since the costs cannot be predicted with certainty, the data are assumed to behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model. The problem is then solved with three multi-objective decision-making (MODM) methods, which are compared on the optimal values of the two objective functions and on central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results of the study demonstrate that the augmented ε-constraint method performs better than the global criterion method and goal programming in terms of the optimal values of the two objective functions and the CPU time. A sensitivity analysis is carried out to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.

Keywords: economic production quantity, random cost, supply chain management, vendor-managed inventory

Procedia PDF Downloads 103
151 Optimizing Groundwater Pumping for a Complex Groundwater/Surface Water System

Authors: Emery A. Coppola Jr., Suna Cinar, Ferenc Szidarovszky

Abstract:

Over-pumping of groundwater resources is a serious problem worldwide. In addition to depleting this valuable resource, hydraulically connected and ecologically sensitive resources like wetlands and surface water bodies are often impacted and even destroyed by over-pumping. Effectively managing groundwater in a way that satisfies human demand while preserving natural resources is a daunting challenge that will only worsen with growing human populations and climate change. As presented in this paper, a numerical flow model developed for a hypothetical but realistic groundwater/surface water system was combined with formal optimization. Response coefficients were used in an optimization management model to maximize groundwater pumping in a complex, multi-layered aquifer system while protecting against groundwater overdraft, streamflow depletion, and wetland impacts. Pumping optimization was performed for different constraint sets that reflect different resource protection preferences, yielding significantly different optimal pumping solutions. A sensitivity analysis was performed on select response coefficients of the optimal solutions to identify differences between wet and dry periods. Stochastic optimization was also performed, in which uncertainty associated with irrigation demand changing with weather conditions is accounted for. One of the strengths of this optimization approach is that it can efficiently and accurately identify superior management strategies that minimize risk and adverse environmental impacts associated with groundwater pumping under different hydrologic conditions.
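
A minimal sketch of a response-coefficient management model as a linear program (coefficients and limits are hypothetical, not the paper's; the idea is that drawdown at each control point is a linear combination of the well pumping rates):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical response coefficients: drawdown at control point i per
# unit pumping at well j, so drawdown = R @ q.
R = np.array([[0.08, 0.02, 0.01, 0.00],
              [0.01, 0.07, 0.03, 0.01],
              [0.00, 0.02, 0.06, 0.09]])
max_drawdown = np.array([2.0, 1.5, 1.0])  # limits at stream, wetland, aquifer
bounds = [(0.0, 40.0)] * 4                # well capacities

# Maximize total pumping: linprog minimizes, so negate the objective.
res = linprog(c=-np.ones(4), A_ub=R, b_ub=max_drawdown, bounds=bounds,
              method="highs")
print("optimal pumping per well:", np.round(res.x, 2))
print("total pumping:", round(-res.fun, 2))
```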

Keywords: numerical groundwater flow modeling, water management optimization, groundwater overdraft, streamflow depletion

Procedia PDF Downloads 208
150 Residual Lifetime Estimation for Weibull Distribution by Fusing Expert Judgements and Censored Data

Authors: Xiang Jia, Zhijun Cheng

Abstract:

The residual lifetime of a product is the operating time between the current time and the time at which failure happens, and its estimation is rather important in reliability analysis. To predict the residual lifetime, it is necessary to assume or verify a particular distribution that the lifetime of the product follows, and the two-parameter Weibull distribution is frequently adopted to describe lifetimes in reliability engineering. Due to time constraints and cost reduction, a life testing experiment is usually terminated before all the units have failed, so censored data are usually collected. In addition, other information can be obtained for reliability analysis: expert judgements are considered here, as it is common that experts can provide useful information concerning reliability. Therefore, in this paper the residual lifetime is estimated for the Weibull distribution by fusing censored data and expert judgements. First, closed forms for the point estimate and the confidence interval of the residual lifetime under the Weibull distribution are presented. Next, the expert judgements are regarded as prior information, and a method to determine the prior distribution of the Weibull parameters is developed; for completeness, both the case of a single expert judgement and the case of several expert judgements are considered. Further, the posterior distribution of the Weibull parameters is derived. Since it is difficult to derive the posterior distribution of the residual lifetime itself, a sample-based method is proposed to generate posterior samples of the Weibull parameters by Markov Chain Monte Carlo (MCMC), and these samples are used to obtain the Bayes estimate and a credible interval for the residual lifetime. Finally, an illustrative example is discussed to show the application. It demonstrates that the proposed method is simple, satisfactory, and robust.
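
For orientation, a minimal sketch of the classical plug-in quantity involved, the mean residual life of a two-parameter Weibull (the paper's Bayes estimate would average such quantities over MCMC posterior samples of the shape and scale; the parameter values below are hypothetical):

```python
import numpy as np
from scipy.integrate import quad

def weibull_survival(t, shape, scale):
    """S(t) = exp(-(t/scale)**shape) for the two-parameter Weibull."""
    return np.exp(-(t / scale) ** shape)

def mean_residual_life(t, shape, scale):
    """m(t) = E[T - t | T > t] = (1/S(t)) * integral_t^inf S(u) du."""
    tail, _ = quad(weibull_survival, t, np.inf, args=(shape, scale))
    return tail / weibull_survival(t, shape, scale)

# Hypothetical parameters: shape 1.8, scale 1000 h, unit has survived 600 h
print(f"expected residual lifetime: {mean_residual_life(600, 1.8, 1000):.0f} h")
```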

Keywords: expert judgements, information fusion, residual lifetime, Weibull distribution

Procedia PDF Downloads 117
149 Development of a Matlab® Program for the Bi-Dimensional Truss Analysis Using the Stiffness Matrix Method

Authors: Angel G. De Leon Hernandez

Abstract:

A structure is defined as a physical system or, in certain cases, an arrangement of connected elements capable of bearing certain loads. Structures are present in every part of daily life, e.g., in the design of buildings, vehicles and mechanisms. The main goal of a structural designer is to develop a secure, aesthetic and maintainable system, considering the constraints imposed in every case. With the advances in technology during the last decades, the capability of solving engineering problems has increased enormously. Nowadays, computers play a critical role in structural analysis; unfortunately, for university students the vast majority of this software is inaccessible due to the high complexity and cost it represents, even when the software manufacturers offer student versions. This is exactly the reason for the idea of developing a more reachable and easy-to-use computing tool. The program is designed as a tool for university students enrolled in courses related to structural analysis and design, as a complementary instrument to achieve a better understanding of this area and to avoid tedious calculations; it can also be useful for graduate engineers in the field of structural design and analysis. A graphical user interface is included to make the program even simpler to operate and to understand the information requested and the obtained results. The present document includes the theoretical basics on which the program solves the structural analysis, the logical path followed in order to develop the program, the theoretical results, a discussion of the results and their validation.
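
To make the stiffness matrix method concrete, here is a minimal sketch of element-stiffness assembly and solution for a tiny 2-D truss (written in Python rather than Matlab®, with hypothetical geometry, load and section properties):

```python
import numpy as np

def truss_element_stiffness(xi, yi, xj, yj, E, A):
    """Global 4x4 stiffness matrix of a 2-D truss bar between nodes i, j."""
    L = np.hypot(xj - xi, yj - yi)
    c, s = (xj - xi) / L, (yj - yi) / L
    return (E * A / L) * np.array([[ c*c,  c*s, -c*c, -c*s],
                                   [ c*s,  s*s, -c*s, -s*s],
                                   [-c*c, -c*s,  c*c,  c*s],
                                   [-c*s, -s*s,  c*s,  s*s]])

# Hypothetical 3-node, 2-bar truss; dof numbering: node n -> (2n, 2n+1)
nodes = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5]])
bars = [(0, 2), (1, 2)]
K = np.zeros((6, 6))
for i, j in bars:
    dofs = [2*i, 2*i + 1, 2*j, 2*j + 1]
    K[np.ix_(dofs, dofs)] += truss_element_stiffness(*nodes[i], *nodes[j],
                                                     E=200e9, A=5e-4)

free = [4, 5]                  # only the apex node can move
F = np.array([0.0, -10e3])     # 10 kN downward load at the apex
u = np.linalg.solve(K[np.ix_(free, free)], F)
print("apex displacement (m):", u)
```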

Keywords: stiffness matrix method, structural analysis, Matlab® applications, programming

Procedia PDF Downloads 98
148 Covalently Conjugated Gold–Porphyrin Nanostructures

Authors: L. Spitaleri, C. M. A. Gangemi, R. Purrello, G. Nicotra, G. Trusso Sfrazzetto, G. Casella, M. Casarin, A. Gulino

Abstract:

Hybrid molecular-nanoparticle materials, obtained with a bottom-up approach, are suitable for the fabrication of functional nanostructures showing structural control and well-defined properties, i.e., optical, electronic or catalytic properties, with a view to applications in different fields of nanotechnology. Gold nanoparticles (Au NPs) exhibit important chemical, electronic and optical properties due to their size, shape and electronic structures. In fact, Au NPs containing no more than 30-40 atoms are only luminescent, because they can be considered large molecules with discrete energy levels, while nano-sized Au NPs only show the surface plasmon resonance. Hence, it appears that gold nanoparticles can be either luminescent or plasmonic, and this represents a severe constraint for their use as an optical material. The aim of this work was the fabrication of a nanoscale assembly of Au NPs covalently anchored to each other by means of novel bi-functional porphyrin molecules that work as bridges between different gold nanoparticles. This functional architecture shows a strong surface plasmon due to the Au nanoparticles and a strong luminescence signal coming from the porphyrin molecules, thus behaving like an artificial organized plasmonic and fluorescent network. The self-assembly geometry of this porphyrin on the Au NPs was studied by investigating the conformational properties of the porphyrin derivative at the DFT level. The morphology, electronic structure and optical properties of the conjugated Au NPs-porphyrin system were investigated by TEM, XPS, UV-vis and luminescence measurements. The present nanostructures can be used for plasmon-enhanced fluorescence, photocatalysis, nonlinear optics, etc., under atmospheric conditions, since the system is not reactive to air or water and does not need to be stored under vacuum or inert gas.

Keywords: gold nanoparticle, porphyrin, surface plasmon resonance, luminescence, nanostructures

Procedia PDF Downloads 131
147 Use of Simulation in Medical Education: Role and Challenges

Authors: Raneem Osama Salem, Ayesha Nuzhat, Fatimah Nasser Al Shehri, Nasser Al Hamdan

Abstract:

Background: Recently, most medical schools around the globe have been using simulation for teaching and assessing students' clinical skills and competence. There are many obstacles that can face students and faculty when simulation sessions are introduced into an undergraduate curriculum. Objective: The aim of this study is to obtain the opinions of undergraduate medical students and our faculty regarding the role of simulation in the undergraduate curriculum, the simulation modalities used, and the perceived barriers to implementing simulation sessions. Methods: To address the role of simulation, the modalities used, and the perceived challenges to implementation of simulation sessions, a self-administered, pilot-tested questionnaire with 18 items using a 5-point Likert scale was distributed. Participants included undergraduate male medical students (n = 125) and female students (n = 70), as well as faculty members (n = 14). Results: Various learning outcomes are achieved and improved through the technology-enhanced simulation sessions, such as communication skills, diagnostic skills, procedural skills, self-confidence, and integration of basic and clinical sciences. The use of high-fidelity simulators, simulated patients and task trainers was more desirable to our students and faculty for teaching and learning, as well as for evaluation. According to most of the students, institutional support in terms of resources, staff and duration of sessions was adequate; however, motivation to participate in the sessions and the provision of adequate feedback by the staff were constraints. Conclusion: The use of the simulation laboratory is of great benefit to the students and a great teaching tool for the staff to ensure students' learning of the various skills.

Keywords: simulators, medical students, skills, simulated patients, performance, challenges, skill laboratory

Procedia PDF Downloads 379
146 Supply Chain Optimisation through Geographical Network Modeling

Authors: Cyrillus Prabandana

Abstract:

Supply chain optimisation requires multiple factors as considerations or constraints, including but not limited to demand forecasting, raw material fulfilment, production capacity, inventory level, facility locations, transportation means, and manpower availability. By knowing all the manageable factors involved, and assuming the uncertainty with pre-defined percentage factors, an integrated supply chain model can be developed to manage various business scenarios. This paper analyses the use of a geographical point of view to develop an integrated supply chain network model that optimises the distribution of finished products according to forecasted demand and available supply. The supply chain optimisation model shows that a small change in one supply chain constraint can have a large impact on other constraints, and the new information from the model should be able to support the decision-making process. The model was focused on three areas, i.e. raw material fulfilment, production capacity and finished product transportation. To validate the model's suitability, it was implemented in a project aimed at optimising the concrete supply chain at a mining location. The high level of operational complexity and the involvement of multiple stakeholders in the concrete supply chain are believed to be sufficient to illustrate the larger scope. The implementation of this geographical supply chain network modelling resulted in an optimised concrete supply chain from raw material fulfilment to the distribution of finished products to each customer, as indicated by a lower percentage of missed concrete order fulfilments.
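
One common way to cast such a geographical distribution problem is as a minimum-cost network flow; the sketch below (hypothetical plants, sites, costs and capacities, not the project's data) finds the cheapest feasible shipment plan:

```python
import networkx as nx

# Two batch plants supply three sites; edge weights are haul costs per
# m^3 of concrete. Negative node demand = supply.
G = nx.DiGraph()
G.add_node("plant_A", demand=-60)
G.add_node("plant_B", demand=-40)
for site, need in [("site_1", 30), ("site_2", 45), ("site_3", 25)]:
    G.add_node(site, demand=need)

G.add_edge("plant_A", "site_1", capacity=40, weight=5)
G.add_edge("plant_A", "site_2", capacity=40, weight=9)
G.add_edge("plant_A", "site_3", capacity=40, weight=12)
G.add_edge("plant_B", "site_1", capacity=30, weight=8)
G.add_edge("plant_B", "site_2", capacity=30, weight=4)
G.add_edge("plant_B", "site_3", capacity=30, weight=6)

flow = nx.min_cost_flow(G)   # cheapest feasible distribution plan
print(flow)
print("total haul cost:", nx.cost_of_flow(G, flow))
```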

Keywords: decision making, geographical supply chain modeling, supply chain optimisation, supply chain

Procedia PDF Downloads 327
145 In-Situ Determination of Radioactivity Levels and Radiological Hazards in and around the Gold Mine Tailings of the West Rand Area, South Africa

Authors: Paballo M. Moshupya, Tamiru A. Abiye, Ian Korir

Abstract:

Mining and processing of naturally occurring radioactive materials can result in elevated levels of natural radionuclides in the environment. The aim of this study was to evaluate the radioactivity levels, on a large scale, in the West Rand District in South Africa, which is dominated by abandoned gold mine tailings, and the consequent radiological exposures to members of the public. The activity concentrations of ²³⁸U, ²³²Th and ⁴⁰K in mine tailings, soil and rocks were assessed using the BGO Super-Spec (RS-230) gamma spectrometer. The measured activity concentrations of ²³⁸U, ²³²Th and ⁴⁰K in the studied mine tailings ranged from 209.95 to 2578.68 Bq/kg, 19.49 to 108.00 Bq/kg and 31.30 to 626.00 Bq/kg, respectively. In surface soils, the overall average activity concentrations were 59.15 Bq/kg, 34.91 Bq/kg and 245.64 Bq/kg for ²³⁸U, ²³²Th and ⁴⁰K, respectively. For the rock samples analyzed, the mean activity concentrations were 32.97 Bq/kg, 32.26 Bq/kg and 351.52 Bq/kg for ²³⁸U, ²³²Th and ⁴⁰K, respectively. High radioactivity levels were found in mine tailings, with ²³⁸U contributing significantly to the overall activity concentration. The external gamma radiation received from surface soil in the area is generally low, with an average of 0.07 mSv/y. The highest annual effective doses were estimated from the tailings dams; the levels varied between 0.14 mSv/y and 1.09 mSv/y, with an average of 0.51 mSv/y. In certain locations, the recommended dose constraint of 0.25 mSv/y from a single source to the average member of the public within the exposed population was exceeded, indicating the need for further monitoring and regulatory control measures specific to these areas to ensure the protection of resident members of the public.
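
The abstract does not give its dose formula, but the standard UNSCEAR-style outdoor estimate, sketched below with the usual conversion factors (treat the constants as assumptions), reproduces the reported soil average of about 0.07 mSv/y:

```python
def annual_effective_dose(a_u, a_th, a_k, occupancy=0.2):
    """Outdoor annual effective dose (mSv/y) from soil activity
    concentrations (Bq/kg). Assumed UNSCEAR factors: absorbed dose rate
    D (nGy/h) = 0.462*U + 0.604*Th + 0.0417*K, then
    dose = D * 8760 h/y * occupancy * 0.7 Sv/Gy, converted to mSv."""
    d_rate = 0.462 * a_u + 0.604 * a_th + 0.0417 * a_k   # nGy/h
    return d_rate * 8760 * occupancy * 0.7 * 1e-6        # mSv/y

# Average surface-soil concentrations reported above
print(f"{annual_effective_dose(59.15, 34.91, 245.64):.2f} mSv/y")  # ~0.07
```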

Keywords: activity concentration, gold mine tailings, in-situ gamma spectrometry, radiological exposures

Procedia PDF Downloads 104
144 Anti-Corruption, an Important Challenge for the Construction Industry!

Authors: Ahmed Stifi, Sascha Gentes, Fritz Gehbauer

Abstract:

The construction industry is perhaps one of the oldest industries of the world. Ancient monuments like the Egyptian pyramids, Greek and Roman temples such as the Parthenon and the Pantheon, robust bridges, old Roman theatres, citadels and many more are the best testament to that. The industry also has a symbiotic relationship with other industries: heavy engineering supplies construction machinery, the chemical industry develops innovative construction materials, the finance sector provides funding solutions for complex construction projects, and so on. The construction industry is not only mammoth but also very complex in nature, and because of this complexity it is prone to various tribulations which may hamper its growth. Comparative study of this industry with others shows that it is associated with tardiness and delay, especially when we focus on the managerial aspects and the triple constraint (time, cost and scope). While some institutes cite the complexity associated with the industry as a major reason, others, like the lean construction community, refer to the wastes produced across the construction process as the prime reason. This paper introduces corruption as one of the prime factors behind such delays. Many international reports and studies support this, depicting the construction industry as one of the most corrupt sectors worldwide; corruption can take place throughout the project cycle, comprising project selection, planning, design, funding, pre-qualification, tendering, execution, operation and maintenance, and even the reconstruction phase. It also occurs in many forms, such as bribery, fraud, extortion, collusion, embezzlement and conflict of interest. As a solution for coping with corruption in the construction industry, the paper introduces integrity as a key factor and builds a new integrity framework to develop and implement an integrity management system for construction companies and construction projects.

Keywords: corruption, construction industry, integrity, lean construction

Procedia PDF Downloads 348
143 Efficiency of Robust Heuristic Gradient Based Enumerative and Tunneling Algorithms for Constrained Integer Programming Problems

Authors: Vijaya K. Srivastava, Davide Spinello

Abstract:

This paper presents the performance of two robust gradient-based heuristic optimization procedures, based on 3^n enumeration and on a tunneling approach, for seeking the global optimum of constrained integer problems. Both procedures consist of two distinct phases for locating the global optimum of integer problems with a linear or non-linear objective function subject to linear or non-linear constraints. In both procedures, the first phase finds a local minimum of the function using the gradient approach coupled with hemstitching moves when a constraint is violated, in order to return the search to the feasible region. In the second phase, the first procedure examines the 3^n integer combinations on the boundary and within the hypercube volume encompassing the result of the first phase, while the second procedure constructs a tunneling function at the local minimum of the first phase so as to find another point on the other side of the barrier where the function value is approximately the same. In the next cycle, the search for the global optimum commences again in both procedures using this new-found point as the starting vector. The search continues, repeated for various step sizes along the function gradient as well as along the vector normal to the violated constraints, until no improvement in the optimum value is found. The results from both proposed optimization methods are presented and compared with those of the popular MS Excel Solver provided within the MS Office suite and with other published results.
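
A minimal sketch of the tunneling idea on a one-dimensional toy function (the classical pole-based tunneling function; the test function, starting points and λ are hypothetical, and the integer rounding of the paper is omitted):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Multimodal toy objective (a hypothetical stand-in)."""
    return (x[0] ** 2 - 4) ** 2 + 0.3 * x[0]

def tunnel(x, x_star, f_star, lam=1.0):
    """Classical tunneling function: negative wherever f(x) < f_star, with
    a pole at x_star that keeps the search away from the known minimum."""
    return (f(x) - f_star) / np.linalg.norm(x - x_star) ** (2 * lam)

# Phase 1: a local minimum from an arbitrary start
x_star = minimize(f, x0=[1.0]).x
f_star = f(x_star)

# Phase 2: minimize the tunneling function from a perturbed start; the
# minimizer lands on the other side of the barrier where f <= f_star.
res = minimize(lambda x: tunnel(x, x_star, f_star), x0=x_star + 0.5,
               method="Nelder-Mead")
print("phase-1 minimum:", x_star, "tunneled point:", res.x)
```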

Keywords: constrained integer problems, enumerative search algorithm, heuristic algorithm, tunneling algorithm

Procedia PDF Downloads 305
142 Accurate Binding Energy of Ytterbium Dimer from Ab Initio Calculations and Ultracold Photoassociation Spectroscopy

Authors: Giorgio Visentin, Alexei A. Buchachenko

Abstract:

Recent proposals to use the Yb dimer as an optical clock and as a sensor for non-Newtonian gravity imply knowledge of its interaction potential. Here, the ground-state Born-Oppenheimer Yb₂ potential energy curve is represented by a semi-analytical function consisting of short- and long-range contributions. For the former, systematic ab initio all-electron exact 2-component scalar-relativistic CCSD(T) calculations are carried out. Special care is taken to saturate the diffuse basis set component with atom- and bond-centered primitives and to reach the complete basis set limit through the n = D, T, Q sequence of correlation-consistent polarized n-zeta basis sets. Similar approaches are applied to the long-range dipole and quadrupole dispersion terms by implementing the CCSD(3) polarization propagator method for dynamic polarizabilities. Dispersion coefficients are then computed through Casimir-Polder integration. The semiclassical constraint on the number of bound vibrational levels known for the ¹⁷⁴Yb isotope is used to scale the potential function. The scaling, based on the most accurate ab initio results, bounds the interaction energy of two Yb atoms within the narrow 734 ± 4 cm⁻¹ range, in reasonable agreement with previous ab initio-based estimations. The resulting potentials can be used as references for more sophisticated models that go beyond the Born-Oppenheimer approximation and provide the means for estimating their uncertainties. The work is supported by Russian Science Foundation grant # 17-13-01466.
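For orientation, the generic split into short- and long-range parts, and the Casimir-Polder integrals that yield the dispersion coefficients from dynamic polarizabilities at imaginary frequency, can be written as follows. This is the standard textbook form (in atomic units, for a homonuclear dimer); the paper's exact parametrization of the short-range part and the set of retained dispersion terms are assumptions here:

```latex
V(r) = V_{\mathrm{SR}}(r) - \frac{C_6}{r^6} - \frac{C_8}{r^8},
\qquad
C_6 = \frac{3}{\pi}\int_0^{\infty} \alpha_1(i\omega)^2 \, d\omega,
\qquad
C_8 = \frac{15}{\pi}\int_0^{\infty} \alpha_1(i\omega)\,\alpha_2(i\omega) \, d\omega
```

where α₁ and α₂ are the dipole and quadrupole dynamic polarizabilities of the Yb atom.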

Keywords: ab initio coupled cluster methods, interaction potential, semi-analytical function, ytterbium dimer

Procedia PDF Downloads 129
141 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem

Authors: Ouafa Amira, Jiangshe Zhang

Abstract:

Clustering is an unsupervised machine learning technique; its aim is to extract the structure of the data, grouping similar data objects in the same cluster and dissimilar objects in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means (FCM) clustering is one of the most well-known fuzzy clustering methods. It is based on solving an optimization problem in which a given cost function is minimized. This minimization aims to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function taking values in the interval [0, 1]. In FCM clustering, the membership degrees are constrained by the condition that the sum of a data object’s memberships over all clusters must equal one. This constraint can cause several problems, especially when the data lie in a noisy space. Regularization addresses this in the fuzzy c-means clustering technique: it introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy approach, where the optimization problem still aims to minimize the dissimilarity inside clusters. Our objective is to find an appropriate membership degree for each data object, because appropriate membership degrees lead to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that the proposed model achieves good accuracy.
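As a concrete illustration, one standard entropy-regularized variant of fuzzy c-means (an assumption here; the authors' exact relative-entropy objective may differ) adds a term γ Σᵢⱼ uᵢⱼ log uᵢⱼ to the within-cluster distance cost; under the sum-to-one constraint this yields a closed-form softmax membership update. A minimal Python sketch:

```python
import numpy as np

def entropy_regularized_fcm(X, n_clusters, gamma=1.0, n_iter=100, seed=0):
    """Entropy-regularized fuzzy c-means (one standard formulation,
    sketched for illustration; not necessarily the authors' model).

    Minimizes sum_ij u_ij * ||x_i - c_j||^2 + gamma * sum_ij u_ij * log(u_ij)
    subject to sum_j u_ij = 1 for every data point i.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Squared distances between every point and every cluster center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        # Closed-form membership update: a Gibbs/softmax distribution.
        logits = -d2 / gamma
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        u = np.exp(logits)
        u /= u.sum(axis=1, keepdims=True)
        # Center update: membership-weighted means of the data points.
        centers = (u.T @ X) / u.sum(axis=0)[:, None]
    return u, centers

# Example on two well-separated Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
u, centers = entropy_regularized_fcm(X, n_clusters=2, gamma=0.5)
```

Here γ controls the fuzziness: large γ spreads memberships toward uniform, while γ → 0 recovers hard assignments.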

Keywords: clustering, fuzzy c-means, regularization, relative entropy

Procedia PDF Downloads 242
140 “It’s All in Your Head”: Epistemic Injustice, Prejudice, and Power in the Modern Healthcare System

Authors: David Tennison

Abstract:

Epistemic injustice, an injustice done to a person specifically in their capacity as a “knower”, is a subtle form of discrimination, yet its effects can be as dehumanizing and damaging as more overt forms of discrimination. The lens of epistemic injustice has, in recent years, been fruitfully applied to the field of healthcare, examining questions of agency, power, credibility and belief in doctor-patient interactions. Contested illness patients (e.g., those with illnesses lacking scientific consensus, such as fibromyalgia (FM), Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS) and Long Covid) face higher levels of scrutiny than other patient groups and are often disbelieved or dismissed when their ailments cannot be easily imaged or tested for, a response often encapsulated by the expression “it’s all in your head”. Using the case study of FM, the trials of contested illness patients in healthcare can be conceptualized in terms of epistemic injustice, and what is going wrong in these doctor-patient relationships can be effectively diagnosed. This case study also helps reveal epistemic dysfunction (structural epistemic issues embedded in the healthcare system), how this relates to stigma and identity-based prejudice, and how the healthcare system upholds existing societal hierarchies and disenfranchises the most vulnerable. In the modern landscape, where cases of these chronic illnesses are not only on the rise but future pandemics threaten to add to their number, this conversation is crucial for the well-being of patients and providers. This presentation will cover what epistemic injustice is and how it can be applied to the politics of the doctor-patient interaction at a micro level and to the politics of the healthcare system more broadly. Contested illnesses will be explored in terms of how the “contested” label causes the patient to experience disease stigma and lowers their credibility in healthcare and across other aspects of life. This will be explored in tandem with a discussion of existing identity-based prejudice in the healthcare system and how social identities (such as those of gender, race, and socioeconomic status) intersect with the contested illness label. The effects of epistemic injustice, which include worsening patients’ mental health symptoms and potentially disenfranchising them from the healthcare system altogether, will be presented alongside the potential ethical quandaries this poses for providers. Finally, issues with the way healthcare appointments and the modern NHS function will be explored in terms of epistemic injustice, and solutions to improve doctor-patient communication and patient care will be discussed. The relationship between contested illness patients and healthcare providers is notoriously poor, and while this can mean frustration or feelings of unfulfillment for providers, the negative effects for patients are much more severe. The purpose of this research, then, is to highlight these issues and to suggest ways to improve the healthcare experience for these patients, along with improving doctor-patient communication and mending the doctor-patient relationship in a tangible and realistic way. This research also aims to provoke important conversations about belief and hierarchy in medical settings and about how these aspects intersect with identity prejudices.

Keywords: epistemic injustice, fibromyalgia, contested illnesses, chronic illnesses, doctor-patient relationships, philosophy of medicine

Procedia PDF Downloads 37