Search results for: important
1303 Honneth, Feenberg, and the Redemption of Critical Theory of Technology
Authors: David Schafer
Abstract:
Critical Theory is in sore need of a workable account of technology. It had one in the writings of Herbert Marcuse, or so it seemed until Jürgen Habermas mounted a critique in 'Technology and Science as Ideology' (Habermas, 1970) that decisively put it to rest. Ever since, Marcuse’s work has been regarded as outdated – a 'philosophy of consciousness' no longer seriously tenable. But with Marcuse’s view has gone the important insight that technology is no norm-free system (as Habermas portrays it) but can be laden with social bias. Andrew Feenberg is among the few serious scholars who have perceived this problem in post-Habermasian critical theory, and he has sought to revive a basically Marcusean account of technology. On his view, while the so-called ‘technical elements’ that physically make up technologies are neutral with regard to social interests, there is a sense in which we may speak of a normative grammar or ‘technical code’ built into technology that can be socially biased in favor of certain groups over others (Feenberg, 2002). According to Feenberg, perspectives on technology are reified when they consider technologies only in terms of their technical elements, to the neglect of their technical codes. Nevertheless, Feenberg’s account fails to explain what is normatively problematic about such reified views of technology. His plausible claim that they represent false perspectives on technology does not by itself explain how such views may be oppressive, even though Feenberg clearly intends that stronger level of normative theorizing. Perceiving this deficit in his own account of reification, he tries to adopt Habermas’s version of systems theory to ground his own critical theory of technology (Feenberg, 1999). But this is a curious move in light of Feenberg’s own legitimate critiques of Habermas’s portrayals of technology as reified or ‘norm-free.’ This paper argues that a better foundation may be found in Axel Honneth’s recent text, Freedom’s Right (Honneth, 2014).
Though Honneth says little explicitly about technology there, he offers an implicit account of reification formulated in opposition to Habermas’s systems-theoretic approach. On this ‘normative functionalist’ account of reification, social spheres are reified when participants prioritize individualist ideals of freedom (moral and legal freedom) to the neglect of an intersubjective form of freedom-through-recognition that Honneth calls ‘social freedom.’ Such misprioritization is ultimately problematic because it is unsustainable: individual freedom is philosophically and institutionally dependent upon social freedom. The main difficulty in adopting Honneth’s social theory for the purposes of a theory of technology, however, is that the notion of social freedom is predicable only of social institutions, whereas it appears difficult to conceive of technology as an institution. Nevertheless, in light of Feenberg’s work, the idea that technology includes within itself a normative grammar (a technical code) gains considerable plausibility. To the extent that this normative grammar may be understood through the category of social freedom, Honneth’s dialectical account of the relationship between individual and social forms of freedom provides a more solid basis for grounding the normative claims of Feenberg’s sociological account of technology than does Habermas’s systems theory.
Keywords: Habermas, Honneth, technology, Feenberg
Procedia PDF Downloads 200
1302 A Bayesian Approach for Health Workforce Planning in Portugal
Authors: Diana F. Lopes, Jorge Simoes, José Martins, Eduardo Castro
Abstract:
Health professionals are the keystone of any health system, delivering health services to the population. Given the time and cost involved in training new health professionals, the planning process of the health workforce is particularly important, as it ensures a proper balance between the supply and demand of these professionals, and it plays a central role in the Health 2020 policy. In the past 40 years, the planning of the health workforce in Portugal has been conducted in a reactive way, lacking a prospective vision based on an integrated, comprehensive and valid analysis. This situation may compromise not only productivity and overall socio-economic development but also the quality of the healthcare services delivered to patients. This is even more critical given the expected shortage of the health workforce in the future. Furthermore, Portugal is facing the aging of some professional classes (physicians and nurses). In 2015, 54% of physicians in Portugal were over 50 years old, and 30% were over 60 years old. This phenomenon, together with the increasing emigration of young health professionals and changes in citizens’ illness profiles and expectations, must be considered when planning resources in healthcare. The prospect of sudden retirement of large groups of professionals within a short time is also a major problem to address. Another challenge is the imbalance in the health workforce: Portugal has one of the lowest nurse-to-physician ratios, 1.5, below the European Region and OECD averages (2.2 and 2.8, respectively).
Within the scope of the HEALTH 2040 project – which aims to estimate the ‘Future needs of human health resources in Portugal till 2040’ – the present study intends to develop a comprehensive, dynamic approach to the problem by (i) estimating the needs for physicians and nurses in Portugal, by specialty and by quinquennium, till 2040; (ii) identifying the training needs of physicians and nurses, in the medium and long term, till 2040; and (iii) estimating the number of students that must be admitted into the medicine and nursing training systems each year, considering the different categories of specialties. The development of such an approach is all the more critical in a context of limited budget resources and changing healthcare needs. The study presents the drivers of the evolution of healthcare needs (such as demographic and technological evolution and the future expectations of the users of the health systems), and it proposes a Bayesian methodology, combining the best available data with expert opinion, to model such evolution. Preliminary results considering different plausible scenarios are presented. The proposed methodology will be integrated into a user-friendly decision support system so it can be used by policymakers, with the potential to measure the impact of health policies at both the regional and the national level.
Keywords: Bayesian estimation, health economics, health workforce planning, human health resources planning
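The abstract does not specify the form of the Bayesian model, but its core idea (combining expert opinion with observed data to project workforce needs) can be sketched with a simple conjugate update. The sketch below is purely illustrative: the Beta prior, the retirement counts, the intake figure and the projection horizon are invented assumptions, not values from the HEALTH 2040 project.

```python
# Hypothetical sketch: expert opinion enters as a Beta prior on the annual
# retirement rate, which is then updated with observed retirement data
# (a conjugate Beta-Binomial update). All numbers are illustrative.

def posterior_retirement_rate(prior_a, prior_b, retired, observed):
    """Beta(prior_a, prior_b) prior on the annual retirement rate,
    updated with `retired` retirements out of `observed` physicians."""
    post_a = prior_a + retired
    post_b = prior_b + (observed - retired)
    mean = post_a / (post_a + post_b)
    return post_a, post_b, mean

def project_workforce(current, retirement_rate, intake, years):
    """Deterministic projection at the posterior-mean retirement rate."""
    workforce = current
    trajectory = [workforce]
    for _ in range(years):
        workforce = workforce * (1.0 - retirement_rate) + intake
        trajectory.append(workforce)
    return trajectory

# Experts believe roughly 4% retire per year (Beta(4, 96)); we then
# observe 600 retirements among 20,000 physicians in one year.
a, b, rate = posterior_retirement_rate(4, 96, 600, 20_000)
path = project_workforce(current=20_000, retirement_rate=rate,
                         intake=900, years=25)  # projection to ~2040
```

With a prior per specialty and a hierarchical structure over quinquennia, the same update generalizes toward the per-specialty estimates the project targets.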
Procedia PDF Downloads 255
1301 Whistleblowing: A Contemporary Topic Concerning Businesses
Authors: Andreas Kapardis, Maria Krambia-Kapardis, Sofia Michaelides-Mateou
Abstract:
Corruption and economic crime are a serious problem affecting the sustainability of businesses in the 21st century. Nowadays, many corruption or fraud cases come to light thanks to whistleblowers. This article will first discuss the concept of whistleblowing as well as some relevant legislation enacted around the world. Secondly, it will discuss the findings of a survey of whistleblowers or could-have-been whistleblowers. Finally, suggestions for the development of a comprehensive whistleblowing framework will be considered. Whistleblowing can be described as expressing a concern about a wrongdoing within an organization, such as a corporation, an association, an institution or a union. Such concern must be in the public interest and in good faith, and should relate to the cover-up of matters that could potentially result in a miscarriage of justice, a crime or criminal offence, or threats to health and safety. Whistleblowing has proven to be an effective anti-corruption mechanism and a powerful tool that helps deter fraud, violations, and malpractice within organizations, corporations and the public sector. Research in the field of whistleblowing has concentrated on the reasons for whistleblowing and financial bounties; the effectiveness of whistleblowing; whistleblowing as a prosocial behavior, with its psychological perspective and consequences; and whistleblowing as a tool for protecting shareholders, saving lives and saving billions of dollars of public funds. However, no previous study of whistleblowing has been carried out with whistleblowers or intended whistleblowers themselves. The study reported in the current paper analyses the responses of 74 whistleblowers or intended whistleblowers: the reasons behind their decision to blow the whistle, or not to proceed, and any regrets they may have had. In addition, a profile of the whistleblower is developed, covering age, gender, marital and family status, and position in an organization.
Lessons learned from the intended whistleblowers, and their responses to the question of whether they would be willing to blow the whistle again, show that enacting legislation to protect the whistleblower is not enough. Similarly, rewarding the whistleblower does not appear to provide an incentive, since the majority noted that “work ethics is more important than financial rewards”. We recommend the development of a comprehensive and holistic framework for the protection of the whistleblower, ensuring that remedial actions are taken immediately once a whistleblower comes forward. The suggested framework comprises (a) hard legislation ensuring that whistleblowers follow certain principles when blowing the whistle and, in return, are protected for a period of 5 years from being fired, dismissed, bullied or harassed; (b) soft legislation establishing an agency to ensure, first, that psychological and legal advice is provided to whistleblowers and, second, that any required remedial action is taken immediately to avert the undesirable events reported by a whistleblower; and (c) mechanisms to ensure the coordination of the actions taken.
Keywords: whistleblowing, business ethics, legislation, business
Procedia PDF Downloads 311
1300 The Effect of Elapsed Time on the Cardiac Troponin-T Degradation and Its Utility as a Time Since Death Marker in Cases of Death Due to Burn
Authors: Sachil Kumar, Anoop K. Verma, Uma Shankar Singh
Abstract:
It is extremely important to study the postmortem interval in different causes of death, since it often greatly assists in forming an opinion on the exact cause of death following such incidents. With reliable knowledge of the interval, an expert can state that the cause of death has not been feigned; hence the great need to evaluate such a death at the crime scene before performing an autopsy on the body. The approach described here is based on analyzing the degradation, or proteolysis, of a cardiac protein in cases of death due to burns as a marker of time since death. Cardiac tissue samples were collected from (n=6) medico-legal autopsies (Department of Forensic Medicine and Toxicology, King George’s Medical University, Lucknow, India) after informed consent from the relatives, and post-mortem degradation was studied by incubation of the cardiac tissue at room temperature (20 ± 2 °C) for different time periods (~7.30, 18.20, 30.30, 41.20, 41.40, 54.30, 65.20, and 88.40 hours). The cases included were subjects of burns, without any prior history of disease, who died in the hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE) and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using a Gel Doc system. As time postmortem progresses, the intact cTnT band degrades to fragments that are easily detected by the monoclonal antibodies. A decreasing trend in the level of cTnT (% of intact) was found as the PM hours increased. A significant difference was observed between <15 h and other PM hours (p<0.01). A significant difference in cTnT level (% of intact) was also observed between 16-25 h and 56-65 h & >75 h (p<0.01).
Western blot data clearly showed the intact protein at 42 kDa, three major fragments (28 kDa, 30 kDa, 10 kDa), three additional minor fragments (12 kDa, 14 kDa, and 15 kDa) and the formation of low molecular weight fragments. Overall, both PMI and the cardiac tissue of the burned corpse had a statistically significant effect, with the greatest amount of protein breakdown observed within the first 41.40 hours, after which the intact protein slowly disappears. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, then the percent intact cTnT shows a pseudo-first-order relationship when plotted against time postmortem. A strong, significant positive correlation was found between cTnT and PM hours (r=0.87, p=0.0001). The regression analysis showed good explained variability (R2=0.768). The post-mortem Troponin-T fragmentation observed in this study reveals a sequential, time-dependent process with the potential for use as a predictor of PMI in cases of burning.
Keywords: burn, degradation, postmortem interval, troponin-T
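The pseudo-first-order relationship reported above implies that ln(% intact cTnT) declines linearly with time postmortem, so a decay constant can be fitted by simple least squares and inverted to estimate time since death. The sketch below is a hedged illustration of that idea only: the calibration points and the rate constant are synthetic, not the study’s data.

```python
import math

# Illustrative sketch of the pseudo-first-order model: fit
# ln(% intact) = ln(p0) - k*t, then invert it to estimate PMI.
# The data below are invented to follow an ~exp(-0.03 * t) decay.

def fit_first_order(times_h, pct_intact):
    """Least-squares fit of ln(pct) = ln(p0) - k*t; returns (k, p0)."""
    n = len(times_h)
    ys = [math.log(p) for p in pct_intact]
    t_mean = sum(times_h) / n
    y_mean = sum(ys) / n
    sxy = sum((t - t_mean) * (y - y_mean) for t, y in zip(times_h, ys))
    sxx = sum((t - t_mean) ** 2 for t in times_h)
    slope = sxy / sxx
    return -slope, math.exp(y_mean - slope * t_mean)

def estimate_pmi(pct_intact, k, p0):
    """Invert the decay curve to estimate time since death (hours)."""
    return math.log(p0 / pct_intact) / k

# Synthetic calibration data at sampling times like those in the study
times = [7.5, 18.2, 30.5, 41.4, 54.5, 65.2, 88.4]
pct = [100 * math.exp(-0.03 * t) for t in times]
k, p0 = fit_first_order(times, pct)
pmi = estimate_pmi(40.0, k, p0)  # a sample measured at 40% intact cTnT
```

In practice the fitted k would come from the study’s own calibration data, with confidence bounds propagated into the PMI estimate.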
Procedia PDF Downloads 452
1299 Big Data for Local Decision-Making: Indicators Identified at International Conference on Urban Health 2017
Authors: Dana R. Thomson, Catherine Linard, Sabine Vanhuysse, Jessica E. Steele, Michal Shimoni, Jose Siri, Waleska Caiaffa, Megumi Rosenberg, Eleonore Wolff, Tais Grippa, Stefanos Georganos, Helen Elsey
Abstract:
The Sustainable Development Goals (SDGs) and Urban Health Equity Assessment and Response Tool (Urban HEART) identify dozens of key indicators to help local decision-makers prioritize and track inequalities in health outcomes. However, presentations and discussions at the International Conference on Urban Health (ICUH) 2017 suggested that additional indicators are needed to make decisions and policies. A local decision-maker may realize that malaria or road accidents are a top priority. However, s/he needs additional health determinant indicators, for example about standing water or traffic, to address the priority and reduce inequalities. Health determinants reflect the physical and social environments that influence health outcomes often at community- and societal-levels and include such indicators as access to quality health facilities, access to safe parks, traffic density, location of slum areas, air pollution, social exclusion, and social networks. Indicator identification and disaggregation are necessarily constrained by available datasets – typically collected about households and individuals in surveys, censuses, and administrative records. Continued advancements in earth observation, data storage, computing and mobile technologies mean that new sources of health determinants indicators derived from 'big data' are becoming available at fine geographic scale. Big data includes high-resolution satellite imagery and aggregated, anonymized mobile phone data. While big data are themselves not representative of the population (e.g., satellite images depict the physical environment), they can provide information about population density, wealth, mobility, and social environments with tremendous detail and accuracy when combined with population-representative survey, census, administrative and health system data. 
The aim of this paper is to (1) flag for data scientists the important indicators needed by health decision-makers at the city and sub-city scale – ideally free and publicly available – and (2) summarize for local decision-makers the new datasets that can be generated from big data, with layperson descriptions of the difficulties in generating them. We include SDG and Urban HEART indicators, as well as indicators mentioned by decision-makers attending ICUH 2017.
Keywords: health determinant, health outcome, mobile phone, remote sensing, satellite imagery, SDG, Urban HEART
Procedia PDF Downloads 214
1298 Stability of a Biofilm Reactor Able to Degrade a Mixture of the Organochlorine Herbicides Atrazine, Simazine, Diuron and 2,4-Dichlorophenoxyacetic Acid to Changes in the Composition of the Supply Medium
Authors: I. Nava-Arenas, N. Ruiz-Ordaz, C. J. Galindez-Mayer, M. L. Luna-Guido, S. L. Ruiz-López, A. Cabrera-Orozco, D. Nava-Arenas
Abstract:
Among the most important herbicides, the organochlorine compounds are of considerable interest due to their recalcitrance to chemical, biological, and photolytic degradation, their persistence in the environment, their mobility, and their bioaccumulation. The most widely used herbicides in North America are primarily 2,4-dichlorophenoxyacetic acid (2,4-D), the triazines (atrazine and simazine), and, to a lesser extent, diuron. Contamination of soils and water bodies frequently occurs through mixtures of these xenobiotics. For this reason, in this work, the operational stability of an aerobic biofilm reactor to changes in the composition of the supplied medium was studied. The reactor was packed with fragments of volcanic rock that retained a complex microbial film able to degrade a mixture of the organochlorine herbicides atrazine, simazine, diuron and 2,4-D, and whose members carry the microbial genes encoding the main catabolic enzymes (atzABCD, tfdACD and puhB). To acclimate the attached microbial community, the biofilm reactor was fed continuously with a mineral minimal medium containing the herbicides (in mg·L-1): diuron, 20.4; atrazine, 14.2; simazine, 11.4; and 2,4-D, 59.7, as carbon and nitrogen sources. Throughout the bioprocess, removal efficiencies of 92-100% for the herbicides, 78-90% for COD, 92-96% for TOC and 61-83% for dehalogenation were reached. In the microbial community, the genes encoding the catabolic enzymes of the different herbicides, tfdACD and puhB and, occasionally, the genes atzA and atzC, were detected. After the acclimatization, the triazine herbicides were eliminated from the mixture formulation, and the mixture of 2,4-D and diuron was continuously supplied to the reactor at volumetric loading rates of 1.9-21.5 mg herbicides·L-1·h-1. Along this stage of the bioprocess, the removal efficiencies obtained were 86-100% for the mixture of herbicides, 63-94% for COD, 90-100% for TOC, and dehalogenation values of 63-100%.
It was also observed that the genes encoding the enzymes involved in the catabolism of both herbicides, tfdACD and puhB, were consistently detected, and, occasionally, atzA and atzC. Subsequently, the triazine herbicides atrazine and simazine were restored to the supplied medium, and different volumetric loading rates of this mixture (2.9 to 12.6 mg herbicides·L-1·h-1) were continuously fed to the reactor. During this new treatment process, removal efficiencies of 65-95% for the mixture of herbicides, 63-92% for COD, 66-89% for TOC and 73-94% for dehalogenation were observed. In this last case, the genes tfdACD, puhB and atzABC, encoding the enzymes involved in the catabolism of the distinct herbicides, were consistently detected. The atzD gene, encoding the cyanuric acid hydrolase enzyme, could not be detected, though it was determined that there was partial degradation of cyanuric acid. In general, the community in the biofilm reactor showed considerable catabolic stability, adapting to changes in the loading rates and composition of the herbicide mixture and preserving its ability to degrade the four herbicides tested, although there was a significant delay before degradation of the herbicides recovered.
Keywords: biodegradation, biofilm reactor, microbial community, organochlorine herbicides
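The removal efficiencies quoted above are, in the standard way, percentage reductions between influent and effluent concentrations, and the volumetric loading rates are the herbicide mass supplied per reactor volume per hour. A minimal sketch follows; only the influent concentrations come from the medium described above, while the effluent values, flow and reactor volume are hypothetical.

```python
# Sketch of the two reported quantities: percent removal and volumetric
# loading rate. Effluent concentrations, flow and volume are invented.

def removal_efficiency(c_in, c_out):
    """Percent removal for one analyte (concentrations in mg/L)."""
    return 100.0 * (c_in - c_out) / c_in

def volumetric_loading_rate(c_in_mg_l, flow_l_h, reactor_volume_l):
    """Herbicide load supplied per reactor volume, in mg L^-1 h^-1."""
    return c_in_mg_l * flow_l_h / reactor_volume_l

# Hypothetical influent/effluent for the 2,4-D + diuron feeding stage
influent = {"2,4-D": 59.7, "diuron": 20.4}   # mg/L (from the medium)
effluent = {"2,4-D": 1.2, "diuron": 2.1}     # mg/L (invented)
eff = {h: removal_efficiency(influent[h], effluent[h]) for h in influent}
```

With these invented effluent values the sketch yields removals of roughly 98% and 90%, inside the 86-100% band reported for this stage.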
Procedia PDF Downloads 437
1297 The Politics of Fantasy Meet Precarity of Place
Authors: Claudia Popescu, Adriana Mihaela Soaita
Abstract:
Within the EU accession process, Romania, like other CEE countries, embarked on the post-1990 urbanization wave aiming to reduce the gaps between ‘older’ and ‘new’ EU member states. While post-socialist urban transitions have been extensively scrutinized, little is known about the developing trajectories of these new towns across the CEE region. To start addressing this knowledge gap, we wish to bring to the fore one of the most humble expressions of urbanism: the small, new towns of Romania. Despite rural-to-urban reclassification, urbanization levels have remained persistently low over the last three decades. In this context, it is timely and legitimate to ask about the prospects of the new towns for ‘successful’ socioeconomic performance within the urban network and for avoidance of precarity and marginalization, about adequate measures of place performance within the urban/settlement network, and about the drivers that trigger towns’ socioeconomic performance. To answer these questions, we first create a socioeconomic index of place in order to compare the profile of the 60 new towns with large cities, old small towns and rural localities. We conceive ‘successful’ and ‘precarious’ performance in terms of a locality’s index value being above or below the average index of all small towns. Second, we perform logistic regression to interrogate the relevance of some key structural factors to the new towns’ socioeconomic performance (i.e., population size, urban history, regional location, connectivity and the political determination of their local governments). Related to the first research question, our findings highlight the precarity of place as a long-standing condition of living and working in the new towns of Romania, particularly evident in our cross-comparative analysis across key categories along the rural-urban continuum.
We have substantiated the socioeconomic condition of precarity in rural places, with the new towns still maintaining features of ‘rurality’ rather than ‘urbanity’ – except for a few successful satellites of economically thriving large cities, particularly the national capital, Bucharest, which benefited from spillover effects. Related to our second research question, we found that the new towns of Romania have significantly higher odds of being characterized by precarity as a socioeconomic condition than all other small towns and urban places, but less so than the even more marginalized rural areas. Many new towns contain resource-dependent rural communities that respond poorly to change. Therefore, issues pertaining to local capacity building to adapt to the new urban environment should be addressed by spatial planning policy. Our approach allowed us to bring to the fore the idea of precarity as a condition of whole localities. Thinking of precarity of place is important, as it brings the whole institutional and political apparatus of spatial planning, urban and regional, into conversation with other causative or substantive axes of precarity developed in the literature. We recommend future research on the new towns in Romania and elsewhere.
Keywords: politics of fantasy, precarity of place, urbanization, Romania
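The logistic regression described above models the odds that a locality is precarious (index value below the small-town average) as a function of structural factors. The sketch below illustrates the technique on simulated towns with two invented predictors (log population and a connectivity score); it is not the authors’ model, data, or choice of covariates.

```python
import math
import random

# Hedged sketch: binary outcome 'precarious' regressed on two synthetic
# structural predictors via plain gradient-descent logistic regression.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Gradient-descent fit; xs are feature vectors, ys in {0, 1}."""
    w = [0.0] * (len(xs[0]) + 1)          # intercept + coefficients
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for x, y in zip(xs, ys):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            err = sigmoid(z) - y
            grad[0] += err
            for j, xi in enumerate(x):
                grad[j + 1] += err * xi
        w = [wi - lr * g / len(xs) for wi, g in zip(w, grad)]
    return w

random.seed(0)
# Synthetic towns: features = (log population, connectivity score)
towns = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
# Ground truth: small, poorly connected towns are more often precarious
labels = [1 if (-1.5 * p - 1.0 * c + random.gauss(0, 0.5)) > 0 else 0
          for p, c in towns]
w = fit_logistic(towns, labels)
odds_ratio_population = math.exp(w[1])  # <1: larger towns less precarious
```

Reporting exponentiated coefficients as odds ratios, as done here for population, is the usual way such results are interpreted in the urban-studies literature.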
Procedia PDF Downloads 23
1296 Making Meaning, Authenticity, and Redefining a Future in Former Refugees and Asylum Seekers Detained in Australia
Authors: Lynne McCormack, Andrew Digges
Abstract:
Since 2013, the Australian government has enforced mandatory detention of anyone arriving in Australia without a valid visa, including those subsequently identified as refugees or seeking asylum. While consistent with the increased use of immigration detention internationally, Australia’s use of offshore processing facilities both during and subsequent to refugee status determination has until recently remained a unique feature of Australia’s program of deterrence. The commonplace detention of refugees and asylum seekers following displacement is a significant and independent source of trauma and a contributory factor in adverse psychological outcomes. Officially, these individuals have no prospect of resettlement in Australia, are barred from applying for substantive visas, and are frequently and indefinitely detained in closed facilities such as immigration detention centres or alternative places of detention, including hotels. It is also important to note that the limited access to Australia’s immigration detention population granted to researchers often means that data available for secondary analysis may be incomplete or delayed in its release. Further, studies of the lived experience of refugees and asylum seekers are typically cross-sectional and convenience-sampled, employ a variety of designs and research methodologies that limit comparability, and focus on the immediacy of the individual’s experience. Consequently, how former detainees make sense of their experience, redefine their future trajectory upon release, and recover a sense of authenticity and purpose is unknown. As such, the present study sought the positive and negative subjective interpretations of six participants in Australia regarding their lived experiences as refugees and asylum seekers within Australia’s immigration detention system and its impact on their future sense of self.
It made use of interpretative phenomenological analysis (IPA), a qualitative research methodology that is interested in how individuals make sense of, and ascribe meaning to, their unique lived experiences of phenomena. Underpinned by phenomenology, hermeneutics, and critical realism, this idiographic study aimed to explore both positive and negative subjective interpretations of former refugees and asylum seekers held in detention in Australia. It sought to understand how they make sense of their experiences, how detention has impacted their overall journey as displaced persons, and how they have moved forward in the aftermath of protracted detention in Australia. Examining the unique lived experiences of previously detained refugees and asylum seekers may inform the future development of theoretical models of posttraumatic growth among this vulnerable population, thereby informing the delivery of future mental health and resettlement services.
Keywords: mandatory detention, refugee, asylum seeker, authenticity, interpretative phenomenological analysis
Procedia PDF Downloads 100
1295 The Gut Microbiome in Cirrhosis and Hepatocellular Carcinoma: Characterization of Disease-Related Microbial Signature and the Possible Impact of Life Style and Nutrition
Authors: Lena Lapidot, Amir Amnon, Rita Nosenko, Veitsman Ella, Cohen-Ezra Oranit, Davidov Yana, Segev Shlomo, Koren Omry, Safran Michal, Ben-Ari Ziv
Abstract:
Introduction: Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related mortality worldwide. Liver cirrhosis is the main predisposing risk factor for the development of HCC. The factor(s) influencing disease progression from cirrhosis to HCC remain unknown. The gut microbiota has recently emerged as a major player in different liver diseases; however, its association with HCC is still a mystery. Moreover, there might be an important association between the gut microbiota, nutrition, lifestyle and the progression of cirrhosis and HCC. The aim of our study was to characterize the gut microbial signature, in association with lifestyle and nutrition, of patients with cirrhosis, HCC-cirrhosis and healthy controls. Design: Stool samples were collected from 95 individuals (30 patients with HCC, 38 patients with cirrhosis and 27 age-, gender- and BMI-matched healthy volunteers). All participants answered lifestyle and food frequency questionnaires. 16S rRNA sequencing of fecal DNA was performed (MiSeq, Illumina). Results: There was a significant decrease in alpha diversity in patients with cirrhosis (q-value=0.033) and in patients with HCC-cirrhosis (q-value=0.032) compared to healthy controls. The microbiota of patients with HCC-cirrhosis, compared to patients with cirrhosis, was characterized by a significant overrepresentation of Clostridium (p-value=0.024) and CF231 (p-value=0.010) and lower expression of Alphaproteobacteria (p-value=0.039) and Verrucomicrobia (p-value=0.036) at several taxonomic levels: Verrucomicrobiae, Verrucomicrobiales, Verrucomicrobiaceae and the genus Akkermansia (p-value=0.039). Furthermore, an analysis of predicted metabolic pathways (KEGG level 2) revealed a significant decrease in the diversity of metabolic pathways in patients with HCC-cirrhosis (q-value=0.015) compared to controls, one of which was amino acid metabolism.
Furthermore, investigating the lifestyle and nutrition habits of patients with HCC-cirrhosis, we found associations between intake of artificial sweeteners and Verrucomicrobia (q-value=0.12), high sugar intake and Synergistetes (q-value=0.021), and high BMI and the pathogen Campylobacter (q-value=0.066). In addition, overweight in patients with HCC-cirrhosis modified bacterial diversity (q-value=0.023) and composition (q-value=0.033). Conclusions: To the best of our knowledge, we present the first report of the gut microbial composition in patients with HCC-cirrhosis compared with cirrhotic patients and healthy controls. We have demonstrated that there are significant differences in the gut microbiome of patients with HCC-cirrhosis compared to cirrhotic patients and healthy controls. Our findings are all the more striking because the significantly increased bacteria in HCC-cirrhosis, Clostridium and CF231, were not influenced by diet and lifestyle, implying that this change is due to the development of HCC. Further studies are needed to confirm these findings and assess causality.
Keywords: cirrhosis, hepatocellular carcinoma, lifestyle, liver disease, microbiome, nutrition
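The alpha-diversity comparison above summarizes, per sample, how rich and even a community is; one common measure computable from 16S taxon counts is the Shannon index. A minimal sketch with invented counts (not the study’s data), in which the ‘cirrhosis-like’ sample is dominated by a single taxon:

```python
import math

# Illustrative Shannon alpha-diversity computation from taxon counts.
# The count vectors are invented for illustration only.

def shannon_index(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over observed taxa."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

healthy = [120, 95, 80, 60, 55, 40, 30, 20]   # relatively even community
cirrhosis = [400, 30, 10, 5, 3, 2]            # dominated by one taxon
h_healthy = shannon_index(healthy)
h_cirrhosis = shannon_index(cirrhosis)
```

The dominated sample scores markedly lower, mirroring the direction of the decrease the study reports (group comparisons would then be tested with FDR-corrected q-values, as above).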
Procedia PDF Downloads 131
1294 Loss of the Skin Barrier after Dermal Application of the Low Molecular Methyl Siloxanes: Volatile Methyl Siloxanes, VMS Silicones
Authors: D. Glamowska, K. Szymkowska, K. Mojsiewicz-Pieńkowska, K. Cal, Z. Jankowski
Abstract:
Introduction: The integrity of the outermost layer of the skin (the stratum corneum) is vital to the penetration of various compounds, including toxic substances. The barrier function of the skin depends on its structure: the barrier function of the stratum corneum is provided by patterned lipid lamellae (bilayers). However, many substances, including the low molecular weight methyl siloxanes (volatile methyl siloxanes, VMS), can alter the skin barrier by damaging the structure of the stratum corneum. VMS belong to the silicones. They are widely used in the pharmaceutical as well as the cosmetic industry: silicones fulfill the role of ingredient or excipient in medicinal products and of excipient in personal care products. Given the significant human exposure to this group of compounds, an important aspect is their toxicology and the safety assessment of products. Silicones are generally considered non-toxic substances, but there are some data on their negative effects on living organisms after inhaled or oral exposure. The transdermal route, however, has not been described in the literature as a possible alternative route of penetration. The aim of the study was to verify the possibility of penetration of the stratum corneum and further permeation into the deeper layers of the skin (epidermis and dermis), as well as into the acceptor fluid, by VMS. Methods: The research methodology was developed based on OECD and WHO guidelines. In the ex-vivo study, fluorescence microscopy and ATR FT-IR spectroscopy were used. Franz-type diffusion cells were used to apply the VMS to samples of human skin (A=0.65 cm2) for 24 h. The stratum corneum at the application site was tape-stripped. After separation of the epidermis, the relevant dyes (fluorescein, sulforhodamine B, rhodamine B hexyl ester) were applied and observations were carried out under the microscope.
To confirm the penetration and permeation of the cyclic and linear VMS, and thus the presence of silicone in the individual layers of the skin, ATR FT-IR spectra were recorded for samples after application of silicone and of H2O (control samples). The analysis included comparison of the intensity of bands at positions characteristic of silicones (1263 cm-1, 1052 cm-1 and 800 cm-1). Results and Conclusions: The results show that cyclic and linear VMS are able to overcome the barrier of the skin. Their damaging influence on the corneocytes of the stratum corneum was observed; this phenomenon was due to distinct disturbances in the lipid structure of the stratum corneum. The presence of cyclic and linear VMS was identified in the stratum corneum, the epidermis and the dermis by both fluorescence microscopy and ATR FT-IR spectroscopy. This confirms that cyclic and linear VMS can penetrate the stratum corneum and permeate through the layers of human skin; moreover, they cause changes in the structure of the skin. The results point to possible absorption of VMS with linear and cyclic structures into the blood and lymphatic vessels.
Keywords: low molecular methyl siloxanes, volatile methyl siloxanes, linear and cyclic siloxanes, skin penetration, skin permeation
Procedia PDF Downloads 347
1293 Light, Restorativeness and Performance in the Workplace: A Pilot Study
Authors: D. Scarpanti, M. Brondino, M. Pasini
Abstract:
Background: The present study explores the role of light and restorativeness at work. According to Attention Restoration Theory (ART) and a model of the work environment, the main idea is that certain features of the environment, e.g. lighting, influence directed attention and hence performance. Restorativeness refers to the degree to which the characteristics of the physical environment help to regenerate directed attention. Specifically, lighting can affect the level of fascination and attention on the one hand, and on the other promotes several biological functions via the pineal gland. Reviews on this topic report controversial results. To shed light on the matter, the hypotheses of this study are that lighting affects the construct of restorativeness and that restorativeness, in turn, affects performance. Method: The participants are 30 workers of a mechatronic company in northern Italy. Each participant answered a questionnaire assessing their perceptions of the environment in several respects: objective features of the environment such as lighting, temperature and air quality; subjective perceptions of this environment; and, finally, their perceived performance. The main focus is on lighting and its components (visual comfort, general preferences and pleasantness) and on the dimensions of the restorativeness construct (fascination, coherence and being away). Performance itself is conceptualized at three levels (individual, team membership and organizational membership) and in three components (proficiency, adaptability and proactivity), for a total of nine subcomponents. Findings: Path analysis showed that some characteristics of lighting affected the dimension of fascination and, as expected, fascination affected work performance. Conclusions: The present study is the first pilot step of a wider research project.
These first results can be summarized with the statement that lighting and restorativeness contribute to explaining variability in work performance: in detail, perceptions of visual comfort, satisfaction and pleasantness, and fascination, respectively. The results related to fascination are particularly interesting because fascination is conceptualized as the opposite of directed attention: the idea is that, in order to regenerate attentional capacity, a release from directed attention (fascination) must be provided. The sample size did not permit testing simultaneously the role of the perceived characteristics of light to see how they differently contribute to predicting fascination in the work environment. However, the results highlighted the important role that light could have in predicting restorativeness dimensions, and with a larger sample larger effects on work performance might also be found. Furthermore, longitudinal data will help to better analyze the causal model over time. Applicative implications: this pilot study highlights the relevant role of lighting and perceived restorativeness in the work environment and the importance of focusing on light features and restorative characteristics in the design of work environments.
Keywords: lighting, performance, restorativeness, workplace
Procedia PDF Downloads 156
1292 Offshore Wind Assessment and Analysis for South Western Mediterranean Sea
Authors: Abdallah Touaibia, Nachida Kasbadji Merzouk, Mustapha Merzouk, Ryma Belarbi
Abstract:
Accurate assessment and a better understanding of the wind resource distribution are the most important tasks for decision making before installing wind energy systems in a given region; hence our interest in the Algerian coastline and its Mediterranean Sea area. Despite its long coastline bordering the Mediterranean Sea, Algeria still has no strategy encouraging the development of offshore wind farms in its waters. The present work aims to estimate the offshore wind fields for the Algerian Mediterranean Sea based on 24 years of wind measurements (1995 to 2018) from seven observation stations, with measurement time steps of 30, 60 or 180 min depending on the station: two stations in Spain, two in Italy and three on the coast of Algeria (Annaba in the east, Algiers in the center and Oran in the west). The idea is to use multiple measurement points to characterize the area in terms of wind potential, interpolating the average wind speeds between the available stations to approximate values at locations where no measurements exist, given the difficulty of installing measurement masts in deep water. The study is organized as follows: first, a brief description of the studied area and its climatic characteristics is given. Then, the statistical properties of the recorded data are checked by evaluating wind histograms, direction roses and average speeds using MatLab programs. Finally, ArcGIS and MapInfo software are used to establish offshore wind maps for a better understanding of the wind resource distribution, as well as to identify windy sites for wind farm installation and power management.
The study pointed out that Cap Carbonara is the windiest site, with an average wind speed of 7.26 m/s at 10 m inducing a power density of 902 W/m², followed by the site of Cap Caccia with 4.88 m/s inducing a power density of 282 W/m². An average wind speed of 4.83 m/s was recorded at the site of Oran, inducing a power density of 230 W/m². The results also indicated that the dominant wind direction, where the frequencies are highest, is West for the site of Cap Carbonara (34%, average wind speed 9.49 m/s, power density 1722 W/m²). Next comes the site of Cap Caccia, where the prevailing direction is North-west (about 20%, 5.82 m/s, power density 452 W/m²). The site of Oran comes third, with a dominant North direction (32%, average wind speed 4.59 m/s, power density 189 W/m²). It is also shown that the proposed method is crucial for understanding the wind resource distribution, revealing windy sites over a large area, and effective for wind turbine micro-siting.
Keywords: wind resources, Mediterranean Sea, offshore, ArcGIS, MapInfo, wind maps, wind farms
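As a back-of-the-envelope check on why power density cannot be computed from the mean speed alone (the cube of 7.26 m/s gives only about 235 W/m², far below the reported 902 W/m²), power density depends on the mean of v³ over the full speed distribution. A minimal sketch assuming Weibull-distributed speeds, with hypothetical shape and scale parameters not taken from the study:

```python
import math

RHO = 1.225  # air density at sea level, kg/m^3

def weibull_mean_speed(c, k):
    """Mean wind speed of a Weibull(scale=c, shape=k) distribution."""
    return c * math.gamma(1.0 + 1.0 / k)

def weibull_power_density(c, k, rho=RHO):
    """Mean power density 0.5*rho*E[v^3] for Weibull-distributed speeds."""
    return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)

def naive_power_density(v_mean, rho=RHO):
    """Power density computed (incorrectly) from the mean speed alone."""
    return 0.5 * rho * v_mean ** 3

# Hypothetical Weibull parameters, not fitted to the study's histograms
c, k = 8.2, 1.8
v_mean = weibull_mean_speed(c, k)
print(f"mean speed       : {v_mean:.2f} m/s")
print(f"power density    : {weibull_power_density(c, k):.0f} W/m^2")
print(f"naive (v_mean^3) : {naive_power_density(v_mean):.0f} W/m^2")
```

Because E[v³] ≥ (E[v])³, the distribution-based estimate always exceeds the naive one; site-specific Weibull parameters fitted to the measured histograms would be needed to reproduce the reported values.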
Procedia PDF Downloads 150
1291 Synthesis of Methanol through Photocatalytic Conversion of CO₂: A Green Chemistry Approach
Authors: Sankha Chakrabortty, Biswajit Ruj, Parimal Pal
Abstract:
Methanol is one of the most important chemical products and intermediates. It can be used as a solvent, intermediate or raw material for a number of higher-value products, fuels or additives. Over the last decade, the total global demand for methanol has increased drastically, which compels scientists to produce large amounts of methanol from renewable sources in order to meet the global demand sustainably. Different types of non-renewable raw materials have been used for large-scale methanol synthesis, which makes the process unsustainable. In these circumstances, photocatalytic conversion of CO₂ into methanol under solar/UV excitation becomes a viable, sustainable production approach that not only addresses the environmental crisis by recycling CO₂ to fuels but also removes CO₂ from the atmosphere. Developing such a sustainable production route for CO₂ conversion into methanol remains a major challenge in current research compared with conventional, energy-expensive processes. Against this backdrop, the development of environmentally friendly materials such as photocatalysts offers great promise for methanol synthesis. Scientists in this field are continually seeking improved photocatalysts to enhance photocatalytic performance. Graphene-based hybrid and composite materials with improved properties could be better nanomaterials for the selective conversion of CO₂ to methanol under visible light (solar energy) or UV light. The present work concerns the synthesis of an improved heterogeneous graphene-based photocatalyst with enhanced catalytic activity and surface area. Graphene with enhanced surface area is used to couple copper-loaded titanium oxide, improving the electron capture and transport properties, which substantially increases the photoinduced charge transfer and extends the lifetime of the photogenerated charge carriers.
A fast reduction method through H₂ purging was adopted to synthesize the improved graphene, whereas an ultrasonication-based sol-gel method was applied for the preparation of the graphene-coupled copper-loaded titanium oxide with enhanced properties. The prepared photocatalysts were exhaustively characterized using different characterization techniques. The effects of catalyst dose, CO₂ flow rate, reaction temperature and stirring time on the efficacy of the system, in terms of methanol yield and productivity, were studied. The study showed that the newly synthesized photocatalyst with enhanced surface area resulted in a sustained methanol productivity and yield of 0.14 g/(L·h) and 0.04 g/gcat, respectively, after 3 h of illumination under UV (250 W) at an optimum catalyst dosage of 10 g/L with a 1:2:3 (graphene:TiO₂:Cu) weight ratio.
Keywords: renewable energy, CO₂ capture, photocatalytic conversion, methanol
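The reported figures are internally consistent: at a catalyst dose of 10 g/L over 3 h, a productivity of 0.14 g/(L·h) corresponds to about 0.42 g of methanol per litre, i.e. about 0.04 g per gram of catalyst. The arithmetic, as a minimal sketch:

```python
def methanol_yield_per_gram(productivity_g_per_L_h, hours, catalyst_dose_g_per_L):
    """Convert volumetric productivity into yield per gram of catalyst."""
    produced_g_per_L = productivity_g_per_L_h * hours
    return produced_g_per_L / catalyst_dose_g_per_L

# Values reported in the abstract: 0.14 g/(L.h), 3 h, 10 g/L catalyst
y = methanol_yield_per_gram(0.14, 3.0, 10.0)
print(f"yield: {y:.3f} g methanol per g catalyst")  # ~0.042, matching the reported 0.04 g/gcat
```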
Procedia PDF Downloads 112
1290 Quantum Conductance Based Mechanical Sensors Fabricated with Closely Spaced Metallic Nanoparticle Arrays
Authors: Min Han, Di Wu, Lin Yuan, Fei Liu
Abstract:
Mechanical sensors have undergone a continuous evolution and have become an important part of many industries, ranging from manufacturing to process, chemicals, machinery, health care, environmental monitoring, automotive, avionics and household appliances. Concurrently, microelectronics and microfabrication technology have provided the means of producing mechanical microsensors characterized by high sensitivity, small size, integrated electronics, on-board calibration and low cost. Here we report a new kind of mechanical sensor based on the quantum transport of electrons in closely spaced nanoparticle films covering a flexible polymer sheet. The nanoparticle films were fabricated by gas-phase deposition of preformed metal nanoparticles with controlled coverage on the electrodes. To amplify the conductance of the nanoparticle array, silver interdigital electrodes were fabricated on polyethylene terephthalate (PET) by mask evaporation deposition, with electrode gaps ranging from 3 to 30 μm. Metal nanoparticles were generated from a magnetron plasma gas aggregation cluster source and deposited on the interdigital electrodes. Closely spaced nanoparticle arrays with different coverages could be obtained through real-time monitoring of the conductance. In the film, Coulomb blockade and quantum tunneling/hopping dominate the electronic conduction mechanism. The basic principle of the mechanical sensors relies on mechanical deformation of the fabricated devices being translated into electrical signals. Several kinds of sensing devices have been explored. As a strain sensor, the device showed high sensitivity as well as a very wide dynamic range. A gauge factor as large as 100 or more was demonstrated, at least one order of magnitude higher than that of conventional metal foil gauges and even better than that of semiconductor-based gauges, with a workable maximum applied strain beyond 3%.
The strain sensors thus have the potential to be a new generation of strain sensors with performance superior to that of currently existing strain sensors, including metallic strain gauges and semiconductor strain gauges. When integrated into a pressure gauge, the devices demonstrated the ability to measure pressure changes as small as 20 Pa near atmospheric pressure. Quantitative vibration measurements were realized on a free-standing cantilever structure fabricated with a closely spaced nanoparticle array as the sensing element. Moreover, the mechanical sensor elements can easily be scaled down, making them feasible for MEMS and NEMS applications.
Keywords: gas phase deposition, mechanical sensors, metallic nanoparticle arrays, quantum conductance
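The gauge factor quoted above is conventionally defined as the relative resistance change divided by the applied strain, GF = (ΔR/R₀)/ε. A minimal sketch with hypothetical resistance readings (not measured values from this work):

```python
def gauge_factor(r0, r, strain):
    """Gauge factor: relative resistance change divided by applied strain."""
    return ((r - r0) / r0) / strain

# Hypothetical readings: a nanoparticle film whose resistance doubles at 1% strain
print(gauge_factor(1000.0, 2000.0, 0.01))  # ~100, the order of magnitude reported here

# For comparison, a metal foil gauge changing by 2% at the same 1% strain
print(gauge_factor(1000.0, 1020.0, 0.01))  # ~2, typical of metal foils
```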
Procedia PDF Downloads 276
1289 Techno-Economic Analysis of 1,3-Butadiene and ε-Caprolactam Production from C6 Sugars
Authors: Iris Vural Gursel, Jonathan Moncada, Ernst Worrell, Andrea Ramirez
Abstract:
In order to achieve the transition from a fossil-based to a bio-based economy, biomass needs to replace fossil resources in meeting the world's energy and chemical needs. This calls for the development of biorefinery systems allowing cost-efficient conversion of biomass to chemicals. In biorefinery systems, feedstock is converted to key intermediates called platforms, which are in turn converted to a wide range of marketable products. The C6 sugars platform stands out due to its unique versatility as a precursor for multiple valuable products. Among the different potential routes from C6 sugars to bio-based chemicals, 1,3-butadiene and ε-caprolactam appear to be of great interest. Butadiene is an important chemical for the production of synthetic rubbers, while caprolactam is used in the production of nylon-6. In this study, the ex-ante techno-economic performance of the 1,3-butadiene and ε-caprolactam routes from C6 sugars was assessed. The aim is to provide insight, from an early stage of development, into the potential of these new technologies and into their bottlenecks and key cost drivers. Two cases per product line were analyzed to take into consideration the effect of possible changes on the overall performance of both butadiene and caprolactam production. A conceptual process design was developed in Aspen Plus based on currently available laboratory data. Operating and capital costs were then estimated, and an economic assessment was carried out using Net Present Value (NPV) as the indicator. Finally, sensitivity analyses on processing capacity and prices were performed to account for possible variations. Results indicate that both processes perform similarly in terms of energy intensity, ranging between 34 and 50 MJ per kg of main product. However, in terms of processing yield (kg of product per kg of C6 sugar), caprolactam shows a higher yield by a factor of 1.6-3.6 compared to butadiene.
For butadiene production, with the economic parameters used in this study, both cases yielded a negative NPV (-642 and -647 M€), indicating economic infeasibility. For caprolactam production, one case also showed economic infeasibility (-229 M€), but the case with the higher caprolactam yield resulted in a positive NPV (67 M€). Sensitivity analysis indicated that the economic performance of caprolactam production can be improved by increasing capacity (higher C6 sugars intake), reflecting economies of scale. Furthermore, humins valorization for heat and power production was considered and found to have a positive effect. Butadiene production was found to be sensitive to the price of the C6 sugars feedstock and of the butadiene product; however, even at 100% variation of these two parameters, butadiene production remained economically infeasible. Overall, the caprolactam production line shows higher economic potential than the butadiene line. The results are useful for guiding experimental research and providing direction for the further development of bio-based chemicals.
Keywords: bio-based chemicals, biorefinery, C6 sugars, economic analysis, process modelling
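The NPV indicator used here discounts a stream of annual cash flows to present value; a project is considered economically feasible when the sum is positive. A minimal sketch with purely hypothetical cash flows and discount rate, not the study's actual inputs:

```python
def npv(rate, cashflows):
    """Net Present Value; cashflows[0] is the year-0 (investment) flow."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical: 300 M euro investment, 25 M euro/yr net revenue for 20 years, 10% discount rate
flows = [-300.0] + [25.0] * 20
print(f"NPV = {npv(0.10, flows):.1f} M euro")  # negative => infeasible at these assumed inputs
```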
Procedia PDF Downloads 155
1288 The Impact of the Virtual Learning Environment on Teacher's Pedagogy and Student's Learning in Primary School Setting
Authors: Noor Ashikin Omar
Abstract:
The rapid growth and advancement of information and communication technology (ICT) on a global scale has greatly influenced and revolutionised interaction within society. The use of ICT has become second nature in managing everyday life, particularly in the educational environment. Traditional learning methods using blackboards and chalk have been largely superseded by ICT devices such as interactive whiteboards and computers in schools. This paper aims to explore the impact of virtual learning environments (VLE) on teachers' pedagogy and students' learning in primary school settings. The research was conducted in two phases. Phase one comprised a short interview with the schools' senior assistants to examine issues and challenges faced during the planning and implementation of FrogVLE in their respective schools. Phase two involved questionnaires directed at three major stakeholder groups: teachers, students and parents. The survey explored teachers' and students' perspectives and attitudes towards the use of the VLE as a teaching and learning medium and as a learning experience as a whole; the survey of parents provided insights into how they feel about the use of the VLE for their child's learning. Collectively, the two phases enabled improved understanding of the factors that affected the implementation of the VLE in primary schools. This study offers the voices of students, who are frequently omitted when addressing innovations, as well as of teachers, who may not always be heard. It is also significant in addressing the importance of teachers' pedagogy for students' learning and its effects in enabling more effective, student-centred ICT integration. Finally, parental perceptions of the implementation of the VLE in supporting their children's learning have a bearing on educational achievement.
The results indicate that all three stakeholder groups were positive and highly supportive of the use of the VLE in schools. They were able to understand the benefits of moving towards modern methods of teaching using ICT and to accept the change in the education system. However, factors such as the condition of ICT facilities at schools and homes, as well as inadequate professional development for teachers in both ICT skills and management skills, hindered full exploitation of the VLE system. Social influences within different communities and cultures, and the costs of using the technology, also have a significant impact. The findings of this study are important to the Malaysian Ministry of Education because they inform policy makers of the impact of the Virtual Learning Environment (VLE) on teachers' pedagogy and the learning of Malaysian primary school children, enabling sound judgement and informed decision making.
Keywords: attitudes towards virtual learning environment (VLE), parental perception, student's learning, teacher's pedagogy
Procedia PDF Downloads 208
1287 Thinking Historiographically in the 21st Century: The Case of Spanish Musicology, a History of Music without History
Authors: Carmen Noheda
Abstract:
This text reflects on ways of thinking about the study of the history of music by examining the production of historiography in Spain at the turn of the century. Based on concepts developed by the historical theorist Jörn Rüsen, the article focuses on the following aspects: the theoretical artifacts that structure the interpretation of the limits of writing the history of music, the narrative patterns used to give meaning to the discourse of history, and the orientation context that functions as a source of criteria of significance for both interpretation and representation. This analysis intends to show that historical music theory is not only a means to abstractly explore the complex questions connected to the production of historical knowledge, but also a tool for obtaining concrete images of the intellectual practice of professional musicologists. Writing about the historiography of contemporary Spanish music requires both knowledge of the history being written and investigated and familiarity with current theoretical trends and methodologies that allow the different tendencies of recent decades to be recognized and defined. With the objective of fulfilling these premises, this project takes as its point of departure the 'immediate historiography' of Spanish music at the beginning of the 21st century. The hesitation that Spanish musicology has shown in opening itself to new anthropological and sociological approaches, along with its rigidity in the face of the multiple shifts in dynamic forms of thinking about history, has produced a standstill whose consequences can be seen in the delayed reception of the historiographical revolutions that emerged in the last century. Methodologically, this essay is underpinned by Rüsen's notion of the disciplinary matrix, an important contribution to the understanding of historiography.
Combined with his parallel conception of differing paradigms of historiography, it is useful for analyzing present-day forms of thinking about the history of music. Following these theories, the article first addresses the characteristics and identification of current historiographical currents in Spanish musicology, and then carries out an analysis based on Rüsen's theories. Finally, it establishes some considerations for the future of musical historiography, whose atrophy has not only fostered the maintenance of an ingrained positivist tradition but has also implied, in the case of Spain, an absence of methodological schools and insufficient participation in international theoretical debates. An update of fundamental concepts has become necessary in order to understand that thinking historically about music demands that we remember that subjects are always linked by reciprocal interdependencies that structure and define what it is possible to create. In this sense, the fundamental aim of this research departs from the recognition that the history of music is embedded in the conditions that make it conceivable, communicable and comprehensible within a society.
Keywords: historiography, Jörn Rüsen, Spanish musicology, theory of the history of music
Procedia PDF Downloads 193
1286 Superlyophobic Surfaces for Increased Heat Transfer during Condensation of CO₂
Authors: Ingrid Snustad, Asmund Ervik, Anders Austegard, Amy Brunsvold, Jianying He, Zhiliang Zhang
Abstract:
CO₂ capture, transport and storage (CCS) is essential to mitigate global anthropogenic CO₂ emissions. To make CCS a widely implemented technology in, e.g., the power sector, cost reduction is crucial, and every part of the CCS chain must contribute. Increasing the heat transfer efficiency during liquefaction of CO₂, a necessary step for, e.g., ship transportation, reduces the costs associated with the process. Heat transfer rates during dropwise condensation are up to one order of magnitude higher than during filmwise condensation. Dropwise condensation usually occurs on a non-wetting (superlyophobic) surface: the vapour condenses in discrete droplets, and the non-wetting nature of the surface reduces the adhesion forces and results in shedding of condensed droplets. This, in turn, exposes fresh nucleation sites for further droplet condensation, effectively increasing the liquefaction efficiency. In addition, the droplets themselves present a smaller heat transfer resistance than a liquid film, resulting in increased heat transfer rates from vapour to solid. Surface tension is a crucial parameter for dropwise condensation due to its impact on the solid-liquid contact angle: a low surface tension usually results in a low contact angle and hence spreading of the condensed liquid on the surface. CO₂ has very low surface tension compared with water; at temperatures and pressures relevant for CO₂ condensation it is comparable to that of organic compounds such as pentane, so dropwise condensation of CO₂ is a completely new field of research. Therefore, knowledge of several important parameters, such as contact angle and drop size distribution, must be gained in order to understand the nature of the condensation. A new setup has been built to measure these parameters; its main parts are a pressure chamber, in which the condensation occurs, and a high-speed camera.
The process of CO₂ condensation is visually monitored, and one can determine the contact angle, the contact angle hysteresis and, hence, the surface adhesion of the liquid. CO₂ condensation on different surfaces, e.g. copper, aluminium and stainless steel, can be analysed. The experimental setup is built for accurate measurement of the temperature difference between the surface and the condensing vapour, with the temperature measured directly underneath the condensing surface, and for accurate pressure measurements in the vapour. The next step of the project will be to fabricate nanostructured surfaces for inducing superlyophobicity. Roughness is a key feature for achieving contact angles above 150° (the limit for superlyophobicity), and controlled, periodic roughness on the nanoscale is beneficial. Surfaces that are non-wetting towards organic non-polar liquids are candidate surface structures for dropwise condensation of CO₂.
Keywords: CCS, dropwise condensation, low surface tension liquid, superlyophobic surfaces
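The role of nanoscale roughness in pushing apparent contact angles above the 150° superlyophobic limit is often estimated with the Cassie-Baxter relation, cos θ* = f(cos θ + 1) - 1, where f is the fraction of the droplet base resting on solid (the rest sits on trapped air). A sketch with illustrative values; the flat-surface contact angle and solid fraction below are assumptions, not measurements from this project:

```python
import math

def cassie_baxter_angle(theta_deg, solid_fraction):
    """Apparent contact angle on a composite (air-pocket) rough surface."""
    cos_star = solid_fraction * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(cos_star))

# Illustrative: 110-degree flat-surface contact angle, 10% solid fraction
theta_star = cassie_baxter_angle(110.0, 0.10)
print(f"apparent contact angle: {theta_star:.0f} deg")  # above 150 deg: superlyophobic regime
```

Lowering the solid fraction (sharper, sparser nanostructures) raises the apparent angle, which is why controlled periodic roughness is the design target.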
Procedia PDF Downloads 281
1285 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling
Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé
Abstract:
Large forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing of the large blocks starts with ingot casting, followed by open-die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. The temperature gradient inside the specimen remains challenging, however: the temperature inside the material before loading is not uniform, yet simulations commonly impose a constant temperature on the assumption that the temperature has homogenized after some holding time. To be close to the experiment, the real temperature distribution through the specimen is needed before mechanical loading. We therefore present a robust algorithm for calculating the temperature gradient within the specimen, representing a realistic temperature distribution before deformation; most numerical simulations assume a uniform temperature, which is not really the case because the surface and core temperatures of the specimen differ. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and the type of deformation, such as upsetting or cogging. Indeed, upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena, such as recrystallization, occur there and require in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product.
Thus, identifying the conditions for the initiation of dynamic recrystallization remains relevant; the temperature distribution within the sample and the strain rate both influence recrystallization initiation, so developing a technique to predict its onset remains challenging. In this perspective, we propose, in addition to the algorithm that computes the temperature distribution before the loading stage, an analytical model to determine the initiation of recrystallization. Both techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines, for comparison with a simulation in which an isothermal temperature is imposed. An Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. In the simulations, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared with literature models.
Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation
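The abstract does not give the temperature-gradient algorithm itself; as an illustration of how a surface-to-core gradient can persist after a holding time, here is a minimal explicit finite-difference sketch of 1D transient heat conduction (T_t = α T_xx). Geometry, material properties and temperatures are hypothetical, chosen only for the order of magnitude of a forged steel block:

```python
# Explicit FTCS scheme for 1D heat conduction; all values are illustrative,
# not taken from the study.
alpha = 8.0e-6            # thermal diffusivity, m^2/s (order of magnitude for steel)
L, n = 0.5, 51            # half-thickness of the block (m), number of grid points
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha  # below the stability limit dt <= dx^2 / (2*alpha)

T = [1200.0] * n          # initial uniform temperature, deg C
T[0] = 900.0              # cooled surface held at 900 deg C

t = 0.0
while t < 600.0:          # simulate 10 minutes of holding
    Tn = T[:]             # surface node Tn[0] stays fixed
    for i in range(1, n - 1):
        Tn[i] = T[i] + alpha * dt / (dx * dx) * (T[i + 1] - 2.0 * T[i] + T[i - 1])
    Tn[-1] = Tn[-2]       # symmetry (insulated) boundary at the core
    T, t = Tn, t + dt

print(f"surface {T[0]:.0f} C, core {T[-1]:.0f} C")  # the gradient persists after holding
```

With these illustrative numbers the diffusion length after 10 min is only a few centimetres, so the core stays near its initial temperature while the near-surface region has cooled, which is exactly why imposing a single constant temperature misrepresents the pre-loading state.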
Procedia PDF Downloads 82
1284 Distribution, Source Apportionment and Assessment of Pollution Level of Trace Metals in Water and Sediment of a Riverine Wetland of the Brahmaputra Valley
Authors: Kali Prasad Sarma, Sanghita Dutta
Abstract:
Deepor Beel (DB) is the lone Ramsar site and an important wetland of the Brahmaputra valley in the state of Assam. The local people from fourteen peripheral villages traditionally use the wetland for harvesting vegetables, flowers, aquatic seeds, medicinal plants, fish, molluscs, fodder for domestic cattle, etc. It is therefore of great importance to understand the concentration and distribution of trace metals in the water-sediment system of the beel in order to protect its ecological environment. DB lies between 26°05′26″N and 26°09′26″N latitude and 90°36′39″E and 91°41′25″E longitude. Water samples from the surface layer of water up to 40 cm deep, and sediment samples from the top 5 cm of surface sediment, were collected. The trace metals in water and sediments were analysed using ICP-OES, and organic carbon using a TOC analyser. The minerals present in the sediments were confirmed by X-ray diffraction (XRD). SEM images were recorded using a SEM fitted with an energy-dispersive X-ray unit at an accelerating voltage of 20 kV. All statistical analyses were performed using SPSS 20.0 for Windows. In the present research, the distribution, source apportionment, temporal and spatial variability, extent of pollution and ecological risk of eight toxic trace metals in the sediments and water of DB were investigated. The average concentrations of chromium (Cr) (both seasons), copper (Cu) and lead (Pb) (pre-monsoon), and zinc (Zn) and cadmium (Cd) (post-monsoon) in sediments were higher than the consensus-based threshold effect concentration (TEC). Persistent exposure to toxic trace metals in sediments poses a potential threat, especially to sediment-dwelling organisms. The degree of pollution in DB sediments for Pb, cobalt (Co), Zn, Cd, Cr, Cu and arsenic (As) was assessed using the Enrichment Factor (EF), the Geo-accumulation index (Igeo) and the Pollution Load Index (PLI).
The results indicated that contamination of surface sediments in DB is dominated by Pb and Cd and, to a lesser extent, by Co, Fe, Cu, Cr, As and Zn. A significant positive correlation among the element pairs Co/Fe and Zn/As in water, and Cr/Zn and Fe/As in sediments, indicates a similar source of origin for these metals. The interaction effects among trace metals between water and sediments show significant variation (F = 94.02, P < 0.001), suggesting maximum mobility of trace metals in DB sediments and water. Source apportionment of the heavy metals was carried out using Principal Component Analysis (PCA). SEM-EDS detected the presence of Cd, Cu, Cr, Zn, Pb, As and Fe in the sediment samples. The average concentrations of Cd, Zn, Pb and As in the bed sediments of DB were found to be higher than the crustal abundance. The EF values indicate that Cd and Pb are significantly enriched. Source apportionment of the eight metals using PCA revealed that Cd is anthropogenic in origin; Pb, As, Cr and Zn have mixed sources; whereas Co, Cu and Fe are natural in origin.
Keywords: Deepor Beel, enrichment factor, principal component analysis, trace metals
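The three pollution indices follow standard definitions: EF = (M/ref)_sample / (M/ref)_background with a conservative reference element such as Fe, Igeo = log₂(Cₙ / (1.5 Bₙ)), and PLI as the geometric mean of the contamination factors CF = Cₙ/Bₙ. A sketch with illustrative concentrations, not the Deepor Beel data:

```python
import math

def enrichment_factor(c_metal, c_ref, b_metal, b_ref):
    """EF: metal/reference ratio in the sample over the same ratio in background."""
    return (c_metal / c_ref) / (b_metal / b_ref)

def igeo(c_n, b_n):
    """Geo-accumulation index; the factor 1.5 buffers natural background variation."""
    return math.log2(c_n / (1.5 * b_n))

def pli(concs, backgrounds):
    """Pollution Load Index: geometric mean of contamination factors."""
    cfs = [c / b for c, b in zip(concs, backgrounds)]
    return math.prod(cfs) ** (1.0 / len(cfs))

# Illustrative values (mg/kg), not from the Deepor Beel dataset
print(enrichment_factor(60.0, 30000.0, 20.0, 45000.0))  # hypothetical Pb with Fe as reference
print(igeo(60.0, 20.0))                                 # Igeo = 1: moderately contaminated class
print(pli([60.0, 1.2], [20.0, 0.3]))                    # PLI > 1 indicates pollution
```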
Procedia PDF Downloads 292
1283 Computational Investigation on Structural and Functional Impact of Oncogenes and Tumor Suppressor Genes on Cancer
Authors: Abdoulie K. Ceesay
Abstract:
Across the whole genome, 99.9% of the human sequence is shared between individuals; our differences lie in just 0.1%. Among these minor dissimilarities, the most common type of genetic variation in a population is the SNP, which arises from a nucleotide substitution and can lead to protein destabilization, alteration in dynamics, and distortion of other physio-chemical properties. While causing variation, SNPs are equally responsible for differences in the way we respond to a treatment or a disease, including various cancer types. There are two types of SNPs: synonymous single nucleotide polymorphisms (sSNPs) and non-synonymous single nucleotide polymorphisms (nsSNPs). sSNPs occur in the gene coding region without changing the encoded amino acid, while nsSNPs can be deleterious because the substituted nucleotide in the gene sequence changes the encoded amino acid. Predicting the effects of cancer-related nsSNPs on protein stability, function, and dynamics is important due to the significance of the phenotype-genotype association in cancer. In this study, data on five oncogenes (ONGs) (AKT1, ALK, ERBB2, KRAS, BRAF) and five tumor suppressor genes (TSGs) (ESR1, CASP8, TET2, PALB2, PTEN) were retrieved from ClinVar. Five common in silico tools (PolyPhen, PROVEAN, Mutation Assessor, SuSPect, and FATHMM) were used to predict and categorize nsSNPs as deleterious, benign, or neutral. To understand the impact of each variation on the phenotype, the Maestro, PremPS, CUPSAT, and mCSM-NA in silico structural prediction tools were used. This study comprises an in-depth analysis of variants of the 10 cancer genes downloaded from ClinVar, and various analyses were conducted to derive meaningful conclusions from the data. The results indicate that pathogenic and destabilizing variants are more common among ONGs than TSGs.
Moreover, our data indicated that ALK (409) and BRAF (86) have the highest benign counts among ONGs, whilst among TSGs, the PALB2 (1308) and PTEN (318) genes have the highest benign counts. Looking at the individual cancer genes' predispositions or frequencies of causing cancer according to our research data, KRAS (76%), BRAF (55%), and ERBB2 (36%) among ONGs, and PTEN (29%) and ESR1 (17%) among TSGs, have the highest tendencies of causing cancer. The obtained results can shed light on future research and help pave new frontiers in cancer therapies.
Keywords: tumor suppressor genes (TSGs), oncogenes (ONGs), non-synonymous single nucleotide polymorphism (nsSNP), single nucleotide polymorphism (SNP)
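The per-gene counts and percentages above amount to tallying classifier calls per gene. A toy sketch of that bookkeeping, where the records are made-up placeholders and not the study's actual ClinVar data:

```python
from collections import Counter

# Placeholder (gene, classification) records; illustrative only.
variants = [
    ("KRAS", "pathogenic"), ("KRAS", "pathogenic"),
    ("KRAS", "pathogenic"), ("KRAS", "benign"),
    ("PTEN", "benign"), ("PTEN", "pathogenic"), ("PTEN", "benign"),
]

def benign_count(records, gene):
    """Number of a gene's variants classified as benign."""
    return sum(1 for g, cls in records if g == gene and cls == "benign")

def pathogenic_fraction(records, gene):
    """Fraction of a gene's variants classified as pathogenic."""
    counts = Counter(cls for g, cls in records if g == gene)
    total = sum(counts.values())
    return counts["pathogenic"] / total if total else 0.0
```

On these toy records, KRAS has a pathogenic fraction of 0.75, analogous to the per-gene percentages reported above.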
Procedia PDF Downloads 90
1282 Perception of the End of a Same Sex Relationship and Preparation towards It: A Qualitative Research about Anticipation, Coping and Conflict Management against the Backdrop of Partial Legal Recognition
Authors: Merav Meiron-Goren, Orna Braun-Lewensohn, Tal Litvak-Hirsh
Abstract:
In recent years, there has been an increasing tendency towards separation and divorce in relationships. Nevertheless, many couples in a first marriage do not anticipate this as a probable possibility and do not make any preparation for it. Same-sex couples establishing a family encounter a much more complicated situation than do heterosexual couples. Although there is a trend towards legal recognition of same-sex marriage, many countries, including Israel, do not recognize it. The absence of legal recognition, or the existence of only partial recognition, creates complexity for these couples. They have to fight for their right to establish a family, whether for recognition of one woman's biological child as the child of her female spouse too, or for the option of surrogacy for a male couple who want children, and more. The lack of legal recognition is a burden on the lives of these couples. In the absence of clear norms regarding the conduct of the family unit, the couples must define the family structure for themselves and deal with everyday dilemmas that lack institutional solutions. This may increase the friction between the two partners, and it is one of the factors that make it difficult for them to maintain the relationship. This complexity exists, perhaps even more so, in separation. The end of a relationship is often accompanied by a deep crisis, causing pain and stress. In most cases, there are also other conflicts that must be settled. These are more complicated when rights are in doubt or do not exist at all. Complex issues for separating same-sex couples may include matters of property, recognition of parenthood, and care and support for the children. The significance of the study is based on the fact that same-sex relationships are becoming more and more widespread and are an integral part of society. Even so, there is still an absence of research focusing on such relationships and their ending.
The objective of the study is to examine the perceptions of same-sex couples regarding the possibility of separation, preparation for it, conflict management, and the resolution of disputes through the separation process. It is also important to understand the point of view of couples that have gone through separation, and how they coped with the emotional and practical difficulties involved in the separation process. The doctoral research will use a qualitative research method in a phenomenological approach, based on semi-structured in-depth interviews. The interviewees will be divided into three groups (at the beginning of a relationship, during the separation crisis, and after separation, with a time perspective), with about 10 couples from each group. The main theoretical model serving as the basis of the study will be the Lazarus and Folkman theory of coping with stress. This model deals with the coping process, including cognitive appraisal of an experience as stressful, appraisal of the coping resources, and the use of coping strategies. The strategies are divided into two main groups: emotion-focused forms of coping and problem-focused forms of coping.
Keywords: conflict management, coping, legal recognition, same-sex relationship, separation
Procedia PDF Downloads 143
1281 Socially Sustainable Urban Rehabilitation Projects: Case Study of Ortahisar, Trabzon
Authors: Elif Berna Var
Abstract:
Cultural, physical, socio-economic, or political changes occurring in urban areas may result in periods of decay, which can cause social problems. As a solution, urban renewal projects have been used in European countries since World War II, whereas they gained importance in Turkey after the 1980s. The first attempts mostly concerned physical or economic aspects, which later caused negative effects on the social pattern. Thus, social concerns have also come to be included in renewal processes in developed countries. This integrative approach, combining social, physical, and economic aspects, promotes the creation of more sustainable neighbourhoods for both current and future generations. However, it is still a new subject for developing countries like Turkey. Concentrating on Trabzon, Turkey, this study highlights the importance of socially sustainable urban renewal processes, especially in historical neighbourhoods, where protecting the urban identity of the area, as well as the social structure, is vital to creating sustainable environments. Being in the historic city centre and having remarkable traditional houses, Ortahisar is an important image for Trabzon. Because the architectural and historical pattern of the area is still visible but in need of rehabilitation, 'urban rehabilitation' is used as the urban renewal method for this study. A project has been developed by the local government to create a secondary city centre and a new landmark for the city, but it remains ambiguous whether this project can ensure the social sustainability of the area, which is one of the concerns of the research. In the study, it is suggested that the social sustainability of an area can be achieved through several factors. In order to determine the factors affecting the social sustainability of an urban rehabilitation project, previous studies were analysed and common features identified.
To achieve this, firstly, several analyses were conducted to establish the social structure of Ortahisar. Secondly, structured interviews were administered to 150 local people, aiming to measure their satisfaction levels, awareness, and expectations, and to learn their demographic backgrounds in detail. Those data were used to define the critical factors for a more socially sustainable neighbourhood in Ortahisar. Later, 50 experts and 150 local people were asked to prioritize those factors, in order to compare their attitudes and find common criteria. According to the results, the social sustainability of the Ortahisar neighbourhood can be improved by considering various factors such as the quality of urban areas, demographic factors, public participation, social cohesion and harmony, proprietorial factors, and facilities for education and employment. Finally, several suggestions are made for the Ortahisar case to promote a more socially sustainable urban neighbourhood. As a pilot study highlighting the importance of social sustainability, it is hoped that this attempt might contribute to achieving more socially sustainable urban rehabilitation projects in Turkey.
Keywords: urban rehabilitation, social sustainability, Trabzon, Turkey
Procedia PDF Downloads 380
1280 Developing a Roadmap by Integrating of Environmental Indicators with the Nitrogen Footprint in an Agriculture Region, Hualien, Taiwan
Authors: Ming-Chien Su, Yi-Zih Chen, Nien-Hsin Kao, Hideaki Shibata
Abstract:
The major component of the atmosphere is nitrogen, yet atmospheric nitrogen has limited availability for biological use. Human activities have produced different types of nitrogen-related compounds, such as nitrogen oxides from combustion, nitrogen fertilizers from farming, and nitrogen compounds from waste and wastewater, all of which have impacted the environment. Many studies have indicated that the N-footprint is dominated by food, followed by the housing, transportation, and goods and services sectors. To address the impacts arising from agricultural land, nitrogen cycle research is one of the key solutions. The study site is located in Hualien County, a major rice and food production area of Taiwan. Importantly, environmentally friendly farming has been promoted there for years, and an environmental indicator system has been established by previous authors based on the concepts of the resilience capacity index (RCI) and the environmental performance index (EPI). Nitrogen management is required for food production, as excess N causes environmental pollution. It is therefore very important to develop a roadmap of the nitrogen footprint and to integrate it with environmental indicators. The key focus of the study is thus (1) understanding the environmental impact caused by the nitrogen cycle of food products and (2) uncovering the trend of the N-footprint of agricultural products in Hualien, Taiwan. The N-footprint model was applied, covering both crops and energy consumption in the area. All data were adapted from government statistics databases and cross-checked for consistency before modeling. The actions involved in agricultural production were evaluated and analyzed for nitrogen loss to the environment, as well as for the impacts on humans and the environment. The results showed that rice makes up the largest share of agricultural production by weight, at 80%.
The dominant meat production is pork (52%) and poultry (40%); fish and seafood were at similar levels to pork production. The average per capita food consumption in Taiwan is 2643.38 kcal capita−1 d−1, primarily from rice (430.58 kcal), meats (184.93 kcal), and wheat (ca. 356.44 kcal). The average protein intake is 87.34 g capita−1 d−1, of which 51% comes mainly from meat, milk, and eggs. The preliminary results showed that the nitrogen footprint of food production is 34 kg N per capita per year, congruent with the results of Shibata et al. (2014) for Japan. These results provide a better understanding of nitrogen demand and loss in the environment, and the roadmap can furthermore support the establishment of nitrogen policy and strategy. Additionally, the results serve to develop a roadmap of the nitrogen cycle for an environmentally friendly farming area, thus illuminating the nitrogen demand and loss of such areas.
Keywords: agricultural production, energy consumption, environmental indicator, nitrogen footprint
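The per-capita arithmetic above follows the usual food N-footprint accounting: protein is roughly 16% nitrogen by mass, and production losses are scaled by a virtual N factor (VNF). A minimal sketch, where the VNF is an illustrative placeholder rather than the study's calibrated value:

```python
# Hedged sketch of per-capita food N-footprint accounting; the virtual N
# factor (N lost in production per unit N consumed) is assumed, not measured.
N_IN_PROTEIN = 0.16  # protein is roughly 16% nitrogen by mass

def consumed_n_kg_per_year(protein_g_per_day):
    """Nitrogen ingested with food protein, in kg N per capita per year."""
    return protein_g_per_day * N_IN_PROTEIN * 365 / 1000.0

def food_n_footprint(protein_g_per_day, virtual_n_factor):
    """Footprint = N consumed plus N lost during production (consumed * VNF)."""
    consumed = consumed_n_kg_per_year(protein_g_per_day)
    return consumed * (1.0 + virtual_n_factor)
```

With the reported intake of 87.34 g protein capita−1 d−1, the consumed-N term alone comes to about 5.1 kg N per capita per year; under this accounting, the balance of the reported 34 kg footprint would be attributed to production losses.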
Procedia PDF Downloads 304
1279 Attachment Theory and Quality of Life: Grief Education and Training
Authors: Jane E. Hill
Abstract:
Quality of life is an important component for many. With that in mind, everyone will experience some type of loss within his or her lifetime. A person can experience loss due to breakup, separation, divorce, estrangement, or death. An individual may experience the loss of a job, loss of capacity, or loss caused by human-made or natural disasters. An individual's response to such a loss is unique, and not everyone will seek services to assist them with their grief. Counseling can promote positive outcomes for grieving clients by addressing the client's personal loss and helping the client process their grief. However, a lack of understanding on the part of counselors of how people grieve may result in negative client outcomes such as poor health, psychological distress, or an increased risk of depression. Education and training in grief counseling can improve counselors' problem recognition and treatment-planning skills. The purpose of this study was to examine whether Council for Accreditation of Counseling and Related Educational Programs (CACREP) master's degree counseling students view themselves as having been adequately trained in grief theories and skills. Many people deal with grief issues that prevent them from having joy or purpose in their lives and leave them unable to engage in positive opportunities or relationships. This study examined CACREP-accredited master's counseling students' self-reported competency, training, and education in providing grief counseling. The implications for positive social change arising from the research may be to incorporate and promote education and training in grief theories and skills across counseling programs and to provide motivation to incorporate professional standards for grief training and practice in the mental health counseling field. The theoretical foundation used was modern grief theory based on John Bowlby's work on attachment theory.
The overall research question was how competent master's-level counselors consider themselves to be with regard to the education and training they received in grief theories and counseling skills in their CACREP-accredited studies. The author used a non-experimental, one-shot survey, comparative quantitative research design. Cicchetti's Grief Counseling Competency Scale (GCCS) was administered to CACREP master's-level counseling students enrolled in their practicum or internship experience, which resulted in 153 participants. Using a MANCOVA, significant relationships were found between coursework taken and (a) perceived assessment skills (p = .029), (b) perceived treatment skills (p = .025), and (c) perceived conceptual skills and knowledge (p = .003). The results of this study provide insight for CACREP master's-level counseling programs as they explore and discuss the inclusion of education and training in grief theories and skills in curriculum coursework.
Keywords: counselor education and training, grief education and training, grief and loss, quality of life
Procedia PDF Downloads 194
1278 Genetics of Pharmacokinetic Drug-Drug Interactions of Most Commonly Used Drug Combinations in the UK: Uncovering Unrecognised Associations
Authors: Mustafa Malki, Ewan R. Pearson
Abstract:
Tools utilized by health care practitioners to flag potential adverse drug reactions secondary to drug-drug interactions ignore individual genetic variation, which has the potential to markedly alter the severity of these interactions. To the best of our knowledge, there have been few published studies on the impact of genetic variation on drug-drug interactions. Therefore, our aim in this project is the discovery of previously unrecognized, clinically important drug-drug-gene interactions (DDGIs) within the list of the most commonly used drug combinations in the UK. The UK Biobank (UKBB) database was utilized to identify the most frequently prescribed drug combinations in the UK with at least one route of interaction (more than 200 combinations were identified). We recognised 37 common and unique interacting genes across all of our drug combinations. Out of around 600 potential genetic variants found in these 37 genes, 100 variants met the selection criteria (common variant with minor allele frequency ≥ 5%, independence, and passing the Hardy-Weinberg equilibrium (HWE) test). The association between these variants and the use of each of our top drug combinations was tested with a case-control analysis under the log-additive model. As the data are cross-sectional, drug intolerance was identified from the genotype distribution, as indicated by a lower percentage of patients carrying the risk allele while on the drug combination compared to those free of these risk factors, and vice versa for drug tolerance. In the GoDARTS database, the same list of common drug combinations identified in the UKBB was utilized with the same list of candidate genetic variants, but with the addition of 14 new SNPs, giving a total of 114 variants that met the selection criteria in GoDARTS. From the list of the top 200 drug combinations, we selected 28 combinations in which the two drugs are known to be used chronically.
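The two variant-selection checks named above (minor allele frequency ≥ 5% and a Hardy-Weinberg equilibrium test) can both be computed from genotype counts alone. A simplified illustration of that filtering step, not the project's actual pipeline:

```python
def minor_allele_frequency(n_aa, n_ab, n_bb):
    """Genotype counts (AA, AB, BB) -> frequency of the rarer allele."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # frequency of allele A
    return min(p, 1 - p)

def hwe_chi_square(n_aa, n_ab, n_bb):
    """Chi-square statistic (1 df) comparing observed genotype counts to
    Hardy-Weinberg expectations (p^2, 2pq, q^2 times N)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

A variant would pass the sketch's filter when its MAF is at least 0.05 and its HWE chi-square statistic falls below the chosen significance threshold (3.84 at the 5% level for 1 df).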
For each of our 28 combinations, three drug-response phenotypes were defined (drug stop/switch, dose decrease, or dose increase of either drug during the interaction). Each of the three phenotypes for each of the 28 drug combinations was tested for association against our 114 candidate genetic variants. The results show replication of four findings between the two databases: (1) omeprazole + amitriptyline + the rs2246709 (A > G) variant in the CYP3A4 gene (p-values and ORs with the UKBB and GoDARTS, respectively, = 0.048, 0.037, 0.92, and 0.52; dose-increase phenotype); (2) simvastatin + ranitidine + the rs9332197 (T > C) variant in the CYP2C9 gene (0.024, 0.032, 0.81, and 5.75; drug stop/switch phenotype); (3) atorvastatin + doxazosin + the rs9282564 (T > C) variant in the ABCB1 gene (0.0015, 0.0095, 1.58, and 3.14; drug stop/switch phenotype); and (4) simvastatin + nifedipine + the rs2257401 (C > G) variant in the CYP3A7 gene (0.025, 0.019, 0.77, and 0.30; drug stop/switch phenotype). In addition, some other non-replicated but interesting significant findings were detected. Our work also provides a rich source of information for researchers interested in DD, DG, or DDG interaction studies, as it highlights the top common drug combinations in the UK and identifies 114 candidate genetic variants related to drug pharmacokinetics.
Keywords: adverse drug reactions, common drug combinations, drug-drug-gene interactions, pharmacogenomics
Procedia PDF Downloads 166
1277 Slope Stability and Landslides Hazard Analysis, Limitations of Existing Approaches, and a New Direction
Authors: Alisawi Alaa T., Collins P. E. F.
Abstract:
The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and in broader considerations of safety. The level of slope stability risk should be identified because of its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine these various methods on the basis of their hypotheses, factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by identifying the equilibrium of the shear stress and shear strength. The slope is considered stable when the forces resisting movement are greater than those driving the movement, with a factor of safety (the ratio of the resisting to the driving forces) greater than 1.00.
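As a minimal worked example of the limit-equilibrium criterion just stated, the factor of safety of a dry, cohesionless infinite slope reduces to the ratio tan(φ)/tan(β), where φ is the soil friction angle and β the slope angle. A hedged sketch of this textbook special case, not one of the study's models:

```python
import math

def factor_of_safety_dry(phi_deg, beta_deg):
    """Infinite-slope factor of safety for a dry, cohesionless soil:
    FoS = resisting / driving = tan(phi) / tan(beta)."""
    return math.tan(math.radians(phi_deg)) / math.tan(math.radians(beta_deg))

# The slope is judged stable when FoS > 1.00, i.e. in this simplified case
# when the friction angle exceeds the slope angle.
```

For example, a 20° slope in soil with a 35° friction angle gives FoS ≈ 1.9 (stable), while a 35° slope in soil with a 20° friction angle gives FoS ≈ 0.5 (unstable).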
However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms, such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards; assessment of the level of slope stability hazard; development of a sophisticated and practical hazard analysis method; linkage of the failure type of specific landslide conditions to the appropriate solution; and application of an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through a geographical information system (GIS) and the inverse distance weighted (IDW) spatial interpolation technique. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanisms of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.
Keywords: slope stability, finite element analysis, hazard analysis, landslides hazard
Procedia PDF Downloads 104
1276 Optimisation of Energy Harvesting for a Composite Aircraft Wing Structure Bonded with Discrete Macro Fibre Composite Sensors
Authors: Ali H. Daraji, Ye Jianqiao
Abstract:
The micro-electrical devices of wireless sensor networks are continuously being developed and have become very small and compact, with low electric power requirements served by conventional batteries of limited life. The low power requirements of these devices, together with the cost of conventional batteries and their replacement, have encouraged researchers to find an alternative power supply, in the form of an energy harvesting system, that can provide electric power over an indefinite lifetime. In the last few years, the investigation of energy harvesting for structural health monitoring has increasingly focused on powering wireless sensor networks by converting waste mechanical vibration into electricity using piezoelectric sensors. Optimisation of energy harvesting is an important research topic to ensure an efficient flow of electric power from structural vibration. The harvested power depends mainly on the properties of the piezoelectric material, the dimensions of the piezoelectric sensor, its position on the structure, and the value of the external electric load connected between the sensor electrodes. A larger sensor surface area does not guarantee greater harvested power when the sensor area covers positive and negative mechanical strain at the same time, as this leads to reduction or cancellation of the piezoelectric output power. Optimisation of energy harvesting is therefore achieved by locating the sensors precisely and efficiently on the structure. Limited published work has investigated energy harvesting for aircraft wings, and most published studies have simplified the aircraft wing structure to a cantilever flat plate or beam. In these studies, optimisation of energy harvesting was investigated by determining the optimal value of an external electric load connected between the sensor electrode terminals, by an external electric circuit, or by randomly splitting the piezoelectric sensor into two segments.
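The external-load optimisation mentioned above has a familiar simplified form: near a single resonance, a piezoelectric element can be approximated as a Thevenin source with an internal resistance, so the power delivered to a resistive load peaks when the load matches the source resistance. A hedged sketch under that approximation, with illustrative numbers rather than values from the wing model:

```python
# Thevenin approximation of a piezoelectric harvester near resonance:
# open-circuit voltage v_source, internal resistance r_source (assumed).
def delivered_power(v_source, r_source, r_load):
    """Average power delivered to a resistive load: V^2 * R / (R + Rs)^2."""
    return v_source ** 2 * r_load / (r_load + r_source) ** 2

def optimal_load(r_source):
    """Impedance matching: a purely resistive source delivers peak power
    when the load resistance equals the source resistance."""
    return r_source
```

For a 10 V source with 100 Ω internal resistance, a matched 100 Ω load receives 0.25 W, and any mismatched load receives less; the finite element optimisation in the study plays the analogous role for the full wing model.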
However, aircraft wing structures are more complex than beams or flat plates, being mostly constructed from flat and curved skins stiffened by stringers and ribs, with more complex mechanical strain induced on the wing surfaces. Here, an aircraft wing structure bonded with discrete macro fibre composite sensors was modelled using multiphysics finite elements to optimise the energy harvesting by determining the optimal number of sensors, their locations, and the output resistance load. The optimal number and locations of macro fibre sensors were determined by maximizing the open- and closed-loop sensor output voltage using frequency response analysis. Different optimal distributions, locations, and numbers of sensors were found for those bonded on the top and bottom surfaces of the aircraft wing.
Keywords: energy harvesting, optimisation, sensor, wing
Procedia PDF Downloads 304
1275 Forum Shopping in Biotechnology Law: Understanding Conflict of Laws in Protecting GMO-Based Inventions as Part of a Patent Portfolio in the Greater China Region
Authors: Eugene C. Lim
Abstract:
This paper seeks to examine the extent to which 'forum shopping' is available to patent filers seeking protection of GMO (genetically modified organism)-based inventions in Hong Kong. Under Hong Kong's current re-registration system for standard patents, an inventor must first seek patent protection from one of three Designated Patent Offices (DPOs): those of the People's Republic of China (PRC), the European Union (EU) (designating the UK), or the United Kingdom (UK). The 'designated patent' can then be re-registered by the successful patentee in Hong Kong. Interestingly, however, the EU and the PRC do not adopt a harmonized approach toward the patenting of GMOs, and there are discrepancies in their interpretation of the phrase 'animal or plant variety'. In view of these divergences, the ability to manage 'conflict of laws' issues effectively is an important priority for multinational biotechnology firms with a patent portfolio in the Greater China region. Generally speaking, both the EU and the PRC exclude 'animal and plant varieties' from the scope of patentable subject matter. However, in the EU, Article 4(2) of the Biotechnology Directive allows a genetically modified plant or animal to be patented if its 'technical feasibility is not limited to a specific variety'. This principle has allowed certain 'transgenic' mammals, such as the 'Harvard Oncomouse', to be the subject of a successful patent grant in the EU. There is no corresponding provision on 'technical feasibility' in the patent legislation of the PRC. Although the PRC has a sui generis system for protecting plant varieties, its patent legislation allows the patenting of non-biological methods for producing transgenic organisms, but not the 'organisms' themselves.
This might lead to a situation where an inventor can obtain patent protection in Hong Kong over transgenic life forms through the re-registration of a patent from a more 'biotech-friendly' DPO, even though the subject matter in question might not be patentable per se in the PRC. Through a comparative doctrinal analysis of legislative provisions, cases, and court interpretations, this paper argues that differences in the protection afforded to GMOs do not generally prejudice the ability of global MNCs to obtain patent protection in Hong Kong. Corporations which are able to first obtain patents for GMO-based inventions in Europe can generally use their European patent as the basis for re-registration in Hong Kong, even if such protection might not be available in the PRC itself. However, the more restrictive approach to GMO-based patents adopted in the PRC would be more acutely felt by enterprises and inventors based in mainland China. The broader scope of protection offered to GMO-based patents in Europe might not be available in Hong Kong to mainland Chinese patentees under the current re-registration model for standard patents, unless they have the resources to apply for patent protection from another (European) DPO as well, as the basis for re-registration.
Keywords: biotechnology, forum shopping, genetically modified organisms (GMOs), greater China region, patent portfolio
Procedia PDF Downloads 330
1274 Organ Donation after Medical Aid in Dying: A Critical Study of Clinical Processes and Legal Rules in Place
Authors: Louise Bernier
Abstract:
Under some jurisdictions (including Canada), eligible patients can request and receive medical assistance in dying (MAiD) through lethal injections, inducing their cardiocirculatory death. Those same patients can also wish to donate their organs in the process. If they qualify as organ donors, a clinical and ethical rule called the 'dead donor rule' (DDR) requires the transplant teams to wait until cardiocirculatory death is confirmed, followed by a 'no touch' period (5 minutes in Canada), before they can proceed with organ removal. The medical procedures (lethal injections) as well as the delays associated with the DDR can damage organs (mostly thoracic organs) due to prolonged anoxia. Yet, strong scientific evidence demonstrates that operating differently and reconsidering the DDR would result in more organs of better quality being available for transplant. This idea generates discomfort and resistance, but it is also worth considering, especially in a context of chronic shortage of available organs. One option that could be examined for MAiD patients who wish to be, and can be, organ donors would be to remove vital organs while the patients are still alive (and under sedation). This would imply accepting that the patient's death would occur through organ donation instead of through the lethal injections required under MAiD legal rules. It would also mean that patients requesting MAiD and wishing to be organ donors could aspire to donate better quality organs, including their heart, an altruistic gesture that carries important symbolic value for many donors and their families. Following a patient-centered approach, our hypothesis is that preventing vital organ donation from a living donor in all circumstances is neither perfectly coherent with how legal mentalities have evolved lately in the field of fundamental rights nor compatible with the clinical and ethical frameworks that shape the landscape in which those complex medical decisions unfold.
Through a study of the legal, ethical, and clinical rules in place, at both the national and international levels, this analysis raises questions about the numerous inconsistencies associated with respecting the DDR for patients who have chosen to die through MAiD. We will begin with an assessment of the erosion of certain national legal frameworks pertaining to the sacred nature of the right to life, which now also includes the right to choose how one wishes to die. We will then study recent innovative clinical protocols tested in different countries to help address acute organ shortage problems in creative ways. We will conclude this analysis with an ethical assessment of the situation, referring to principles such as justice, autonomy, altruism, beneficence, and non-maleficence. This study will build a strong argument in favor of starting to allow vital organ donations from living donors in countries where MAiD is already permitted.
Keywords: altruism, autonomy, dead donor rule, medical assistance in dying, non-maleficence, organ donation
Procedia PDF Downloads 183