Search results for: philosophy of quantum mechanics
95 Pedagogy of the Oppressed: Fifty Years Later. Implications for Policy and Reforms
Authors: Mohammad Ibrahim Alladin
Abstract:
The Pedagogy of the Oppressed by Paulo Freire was first published in 1970 (New York: Herder and Herder). Since its publication it has become one of the most cited books in the social sciences, and over a million copies have been sold worldwide. The book has caused a “revolution” in the education world, and its theory has been examined and analysed. It has influenced educational policy, curriculum development and teacher education. The revolution started half a century ago. Paulo Freire’s Pedagogy of the Oppressed develops a theory of education fitted to the needs of the disenfranchised and marginalized members of capitalist societies. Combining educational and political philosophy, the book offers an analysis of oppression and a theory of liberation. Freire believes that traditional education serves to support the dominance of the powerful within society and thereby maintain the powerful’s social, political, and economic status quo. To overcome the oppression endemic to an exploitative society, education must be remade to inspire and enable the oppressed in their struggle for liberation. This new approach to education focuses on consciousness-raising, dialogue, and collaboration between teacher and student in the effort to achieve greater humanization for all. For Freire, education is political and functions either to preserve the current social order or to transform it. The theories of education and revolutionary action he offers in Pedagogy of the Oppressed are addressed to educators committed to the struggle for liberation from oppression. Freire’s own commitment to this struggle developed through years of teaching literacy to Brazilian and Chilean peasants and laborers. His efforts at educational and political reform resulted in a brief period of imprisonment followed by exile from his native Brazil for fifteen years. Pedagogy of the Oppressed begins with Freire asserting the importance of consciousness-raising, or conscientização, as the means enabling the oppressed to recognize their oppression and commit to the effort to overcome it, taking full responsibility for themselves in the struggle for liberation. He addresses the “fear of freedom,” which inhibits the oppressed from assuming this responsibility. He also cautions against the dangers of sectarianism, which can undermine the revolutionary purpose as well as serve as a refuge for the committed conservative. Freire provides an alternative view of education by attacking traditional education and knowledge. He is highly critical of how knowledge is imparted and of how it is structured in ways that limit the learner’s thinking. Hence, education becomes oppressive and school functions as an institution of social control. Since the book's publication, education has gone through a series of reforms and, in some areas, total transformation. This paper addresses the following: the role of education in social transformation; the teacher/learner relationship; and critical thinking. The paper essentially examines what has happened in the fifty years since Freire’s book. It seeks to explain what happened to Freire’s education revolution, and what is the status of the movement that started almost fifty years ago. Keywords: pedagogy, reform, curriculum, teacher education
94 A Concept in Addressing the Singularity of the Emerging Universe
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times has been studied, an approach known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced through the cosmic microwave background and the universe's structure at a large scale. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics, and the big bang theory is no longer valid. In addition, one can expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which is contradictory to the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations from this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of universe creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a “neutral state”, with an energy level referred to as “base energy” that is capable of converting into other states. Although it follows the same principles, the unique quanta state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture, the origin of the base energy should also be identified. This matter is the subject of the first study in the series, “A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing”, where it is discussed in detail. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy as one of the main building blocks of this universe. Keywords: big bang, cosmic inflation, birth of universe, energy creation
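For reference, the exponential growth during inflation that the abstract invokes is the standard de Sitter solution of the Friedmann equation for a universe dominated by a large cosmological constant (a textbook result, not a derivation from this work):

```latex
% Friedmann equation dominated by a large cosmological constant \Lambda:
H^2 = \left(\frac{\dot{a}}{a}\right)^2 = \frac{\Lambda c^2}{3}
\quad\Longrightarrow\quad
a(t) = a_0\, e^{H t}, \qquad H = c\sqrt{\Lambda/3}.
% Extrapolating back without such a mechanism drives a(t) -> 0, the
% singularity that the proposed "base energy" concept aims to avoid.
```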
93 The Publishing Process and Results of the Chinese Annotated Edition of John Dewey’s “Experience and Education: The 60th Anniversary Edition”
Authors: Wen-jing Shan
Abstract:
The Chinese annotated edition of “Experience and Education: The 60th Anniversary Edition,” originally written in English by John Dewey (1859-1952), was published in 2015 by this author. The purpose of this paper is to report on the process and results of the translation and annotation of the book. It is worth mentioning that the original 1938 edition was considered the best concise statement on education by John Dewey, one of the most important educational theorists of the twentieth century. One of the features of the 60th anniversary edition is that the original publisher, the Kappa Delta Pi International Honor Society, invited four contemporary Deweyan scholars, each of whom had been awarded the Society’s Laureate Scholar honor, to write a review of the book; Dewey himself was the first to receive this honor. The four scholars are Maxine Greene (1917-2014), Philip W. Jackson (1928-2015), Linda Darling-Hammond (1951-), and O. L. Davis, Jr. (1928-). The original 1938 edition was translated into Chinese five times after its publication in the U.S.A.: three times in the 1940s, once in the 1990s, and once in the 2010s. Nonetheless, the five translations have few or no annotations and suffer from misinterpretations and a lack of information. The author retranslated and annotated the book to make the interpretation more faithful, expressive, and elegant, and to provide readers with more understanding and more correct information. This author started the project of translation and annotation, sponsored by the Taiwan Ministry of Science and Technology, in August 2011, and finished and published the work by July 2015. The work was divided into three stages. First, in the preparatory stage of the project, the summary of each chapter, the rationale of the book, the textual commentary, the development of the original and Chinese editions, and reviews and criticisms, as well as Dewey’s biography and bibliography, were initially investigated. Secondly, on the basis of the above preliminary work, the translation with annotation of Experience and Education, an epitome of Dewey’s biography and bibliography, a chronology, and a critical introduction for Experience and Education were written. In the critical introduction, Dewey’s philosophy of experience and educational ideas are examined along the timeline of human thought, and the vast literature about Dewey and his work is instrumental in revealing the historical significance of Experience and Education for the modern age and in making the critical introduction more knowledgeable. Third, the final stage took another two years to review and revise the draft of the work and send it for publication. There are two parts in the book. The first part is a scholarly introduction including Dewey’s chronicle (in short form); Dewey’s mind, people and life; the importance of “Experience and Education”; and the necessity of re-translation and re-annotation of “Experience and Education” into Chinese. The second part is the re-translation and re-annotation version, including Dewey’s “Experience and Education” and four papers written by contemporary scholars. Keywords: John Dewey, experience and education: the 60th anniversary edition, translation, annotation
92 Preparation of Allyl BODIPY for the Click Reaction with Thioglycolic Acid
Authors: Chrislaura Carmo, Luca Deiana, Mafalda Laranjo, Abilio Sobral, Armando Cordova
Abstract:
Photodynamic therapy (PDT) is currently used for the treatment of malignancies and premalignant tumors. It is based on the uptake of a photosensitizing molecule (PS) which, when excited by light at a certain wavelength, reacts with oxygen and generates oxidizing species (radicals, singlet oxygen, triplet species) in target tissues, leading to cell death. BODIPY (4,4-difluoro-4-bora-3a,4a-diaza-s-indacene) derivatives are emerging as important candidate photosensitizers for photodynamic therapy of cancer cells due to their high triplet quantum yield. Today these dyes are relevant molecules in photovoltaic materials and fluorescent sensors. This study demonstrates that BODIPY can be covalently linked to thioglycolic acid through the click reaction. Thiol-ene click chemistry has become a powerful synthesis method in materials science and surface modification. The design of biobased allyl-terminated precursors with high renewable carbon content for the construction of thiol-ene polymer networks is essential for sustainable development and green chemistry. The work aims to synthesize the BODIPY (10-(4-(allyloxy) phenyl)-2,8-diethyl-5,5-difluoro-1,3,7,9-tetramethyl-5H-dipyrrolo[1,2-c:2',1'-f] [1,3,2] diazaborinin-4-ium-5-uide) and to perform the click reaction with thioglycolic acid. BODIPY was synthesized by the condensation reaction between an aldehyde and a pyrrole in dichloromethane, followed by in situ complexation with BF3·OEt2 in the presence of a base. It was then functionalized with allyl bromide to introduce the double bond and thus enable the click reaction. The thiol-ene click was performed using DMPA (2,2-dimethoxy-2-phenylacetophenone) as a photo-initiator in the presence of UV light (320–500 nm) in DMF at room temperature for 24 hours. Compounds were characterized by standard analytical techniques, including UV-Vis spectroscopy, 1H, 13C and 19F NMR, and mass spectrometry. The results of this study will be important for linking BODIPY to polymers through the thiol group, offering a diversity of applications and functionalizations. This new molecule can be tested among third-generation photosensitizers, in which the dye is targeted to cells, mainly cancer cells, by antibodies or nanocarriers, for PDT and Photodynamic Antimicrobial Chemotherapy (PACT). According to our studies, it was possible to observe a click reaction between allyl BODIPY and thioglycolic acid. Our team will also test the reaction with other thiol groups for comparison. Further, we will carry out the click reaction of BODIPY with a natural polymer bearing a thiol group. The resulting compounds will be tested in PDT assays on various lung cancer cell lines. Keywords: bodipy, click reaction, thioglycolic acid, allyl, thiol-ene click
91 Nanomechanical Characterization of Healthy and Tumor Lung Tissues at Cell and Extracellular Matrix Level
Authors: Valeria Panzetta, Ida Musella, Sabato Fusco, Paolo Antonio Netti
Abstract:
The study of the biophysics of living cells has drawn attention to the pivotal role of the cytoskeleton in many cell functions, such as mechanics, adhesion, proliferation, migration, differentiation and neoplastic transformation. In particular, during the complex process of malignant transformation and invasion, the cell cytoskeleton devolves from a rigid and organized structure to a more compliant state, which confers on the cancer cells a great ability to migrate and adapt to the extracellular environment. In order to better understand the malignant transformation process from a mechanical point of view, it is necessary to evaluate the direct crosstalk between the cells and their surrounding extracellular matrix (ECM) in a context which is close to in vivo conditions. In this study, human biopsy tissues of lung adenocarcinoma were analyzed in order to define their mechanical phenotype at cell and ECM level, using the particle tracking microrheology (PTM) technique. Polystyrene beads (500 nm) were introduced into the sample slice. The motion of the beads was obtained by tracking their displacements across cell cytoskeleton and ECM structures, and mean squared displacements (MSDs) were calculated from the bead trajectories. It has already been demonstrated that the amplitude of the MSD is inversely related to the mechanical properties of the intracellular and extracellular microenvironment. For this reason, the MSDs of particles introduced in the cytoplasm and ECM of healthy and tumor tissues were compared. PTM analyses showed that cancerous transformation compromises the mechanical integrity of cells and extracellular matrix. In particular, the MSD amplitudes in cells of adenocarcinoma were greater compared to those in cells of normal tissues. The increased motion is probably associated with a less structured cytoskeleton and consequently with an increase in the deformability of cells. Further, cancer transformation is also accompanied by extracellular matrix stiffening, as confirmed by the decrease of the MSDs of the matrix in tumor tissue, a process that promotes tumor proliferation and invasiveness by activating typical oncogenic signaling pathways. In addition, a clear correlation between the MSDs of cells and tumor grade was found. MSDs increase when tumor grade passes from 2 to 3, indicating that cells undergo a trans-differentiation process during tumor progression. ECM stiffening is not dependent on tumor grade, but tumor stage was found to be strictly correlated with both cell and ECM mechanical properties. In fact, a greater stage is assigned to tumors spread to regional lymph nodes, which are characterized by an up-regulation of different ECM proteins, such as collagen I fibers. These results indicate that PTM can be used to obtain nanomechanical characterization at different scale levels in an interpretative and diagnostic context. Keywords: cytoskeleton, extracellular matrix, mechanical properties, particle tracking microrheology, tumor
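A minimal sketch of the MSD calculation described above (assuming 2D bead trajectories sampled at a fixed frame interval; the array shapes, time step and step sizes are illustrative, not values from this study):

```python
import numpy as np

def mean_squared_displacement(trajectory, dt):
    """Time-averaged MSD(tau) from a single bead trajectory.

    trajectory: (N, 2) array of x, y bead positions (e.g., in micrometers).
    dt: time between frames in seconds.
    """
    n_frames = len(trajectory)
    lags = np.arange(1, n_frames)
    msd = np.empty(len(lags))
    for i, lag in enumerate(lags):
        disp = trajectory[lag:] - trajectory[:-lag]      # r(t+tau) - r(t)
        msd[i] = np.mean(np.sum(disp**2, axis=1))
    return lags * dt, msd

# Example with a simulated freely diffusing 500 nm bead
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(0.0, 0.05, size=(1000, 2)), axis=0)
tau, msd = mean_squared_displacement(track, dt=0.033)
# A larger MSD amplitude signals a softer (more compliant) microenvironment;
# a smaller amplitude signals stiffening, as reported for the tumor ECM.
```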
90 The Concept of Path in Original Buddhism and the Concept of Psychotherapeutic Improvement
Authors: Beth Jacobs
Abstract:
The landmark movement of Western clinical psychology in the 20th century was the development of psychotherapy. The landmark movement of clinical psychology in the 21st century will be the absorption of meditation practices from Buddhist psychology. While millions of people explore meditation and related philosophy, very few are exposed to the materials of original Buddhism on this topic, especially to the Theravadan Abhidharma. The Abhidharma is an intricate system of lists and matrixes that were used to understand and remember Buddha’s teaching. The Abhidharma delineates the first psychological system of Buddhism: how the mind works in the universe of reality and why meditation training strengthens and purifies the experience of life. Its lists outline the psychology of mental constructions, perception, emotion and cosmological causation. While the Abhidharma is technical, elaborate and complex, its essential purpose relates to the central purpose of clinical psychology: to relieve human suffering. Like Western depth psychology, the methodology rests on understanding underlying processes of consciousness and perception. What clinical psychologists might describe as therapeutic improvement, the Abhidharma delineates as a specific pathway of purified actions of consciousness. This paper discusses the concept of 'path' as presented in aspects of the Theravadan Abhidharma and relates this to current clinical psychological views of therapy outcomes and gains. The core path in Buddhism is the Eight-Fold Path, which is the fourth noble truth and the launching of activity toward liberation. The path is not composed of eight ordinal steps; it is eight-fold and is described as opening the way, not funneling choices. The specific path in the Abhidharma is described in many steps of development of consciousness activities. The path is not something a human moves on, but something that moments of consciousness develop within. 'Cittas' are extensively described in the Abhidharma as the atomic-level units of a raw action of consciousness touching upon an object in a field, and 121 types of cittas are categorized. The cittas are embedded in the mental factors, which could be described as the psychological packaging elements of our experiences of consciousness. Based on these constellations of infinitesimal, linked occurrences of consciousness, cittas are categorized by dimensions of purification. A path is a chain of cittas developing through causes and conditions. There are no selves, no pronouns, in the Abhidharma. Instead of me walking a path, this is about a person working with conditions to cultivate a stream of consciousness that is pure, immediate, direct and generous. The same effort, in very different terms, informs the work of most psychotherapies. Depth psychology seeks to release the bound, unconscious elements of mental process into the clarity of realization. Cognitive and behavioral psychologies work on breaking down automatic thought valuations and actions, changing schemas and interpersonal dynamics. Understanding how the original Buddhist concept of positive human development relates to the clinical psychological concept of therapy weaves together two brilliant systems of thought on the development of human well-being. Keywords: Abhidharma, Buddhist path, clinical psychology, psychotherapeutic outcome
89 Strategic Entrepreneurship: Model Proposal for Post-Troika Sustainable Cultural Organizations
Authors: Maria Inês Pinho
Abstract:
Recent literature on issues of Cultural Management (also called Strategic Management for cultural organizations) systematically seeks models that allow such organizations to adapt to the constant change that occurs in contemporary societies. In the last decade, the world, and in particular Europe, has experienced a serious financial problem that has triggered defensive mechanisms, both in the direction of promoting the balance of public accounts and in the sense of an anonymous loss of the democratic and cultural values of each nation. In the first case, the Troika emerged, leading to strong cuts in funding for Culture that deeply affected those organizations; in the second case, the ordinary citizen is seen fighting against the closure of cultural facilities. Despite this, the cultural manager argues that there is no single formula capable of solving the need to adapt to change. Rather, it is up to this agent to know the existing scientific models and to adapt them in the best way to the reality of the institution he coordinates. These actions, as a rule, are concerned with the best performance vis-à-vis external audiences or with the financial sustainability of cultural organizations. They forget, therefore, that all this machinery cannot function without its internal public, without its Human Resources. The employees of the cultural organization must then have an entrepreneurial posture - they must be intrapreneurial. This paper intends to break with this form of action and lead the cultural manager to understand that his role should be to create value for society through good organizational performance. This is only possible with a posture of strategic entrepreneurship, in other words, with a link between Cultural Management, Cultural Entrepreneurship and Cultural Intrapreneurship. In order to prove this assumption, the case study methodology was used, with a symbol of the European Capital of Culture (Casa da Música) as the case, together with qualitative and quantitative techniques. The qualitative techniques included in-depth interviews with managers, founders and patrons, and focus groups with members of the public with and without experience in managing cultural facilities. The quantitative techniques involved the application of a questionnaire to middle management and employees of Casa da Música. After the triangulation of the data, it was shown that contemporary management of cultural organizations must implement among its practices the concept of Strategic Entrepreneurship and its variables. Also, the topics which characterize the notion of Cultural Intrapreneurship (job satisfaction, quality of organizational performance, leadership, and employee engagement and autonomy) emerged. The findings show that, to be sustainable, a cultural organization should meet the concerns of both its external and internal publics. In other words, it should have an attitude of citizenship towards the communities, visible in its social responsibility and participatory management, which is only possible with the implementation of the concept of Strategic Entrepreneurship and its variable of Cultural Intrapreneurship. Keywords: cultural entrepreneurship, cultural intrapreneurship, cultural organizations, strategic management
88 Reading and Writing Memories in Artificial and Human Reasoning
Authors: Ian O'Loughlin
Abstract:
Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question-and-answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains; wide-context cues remain elusive in parsing words and sentences; and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the arrays of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary. Keywords: artificial reasoning, human memory, machine learning, neural networks
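A minimal sketch of the attractor-network idea described above, using a classical Hopfield network in which the stored patterns are the stable equilibrium points of an energy function (the patterns and sizes are illustrative, and this is a generic textbook construction rather than the specific models the paper discusses):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix whose energy minima sit at the stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:                  # pattern entries are +/-1
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)              # no self-connections
    return W / len(patterns)

def energy(W, s):
    """Hopfield energy; desired patterns are its stable minima."""
    return -0.5 * s @ W @ s

def recall(W, probe, steps=20):
    """Relax a noisy probe to the nearest attractor (no lookup, no retrieval)."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

patterns = np.array([[1, -1, 1, -1, 1, 1, -1, -1],
                     [-1, -1, 1, 1, -1, 1, 1, -1]])
W = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]                    # corrupt one element
restored = recall(W, noisy)
print(np.array_equal(restored, patterns[0]))   # True: pattern re-emerges
print(energy(W, restored) < energy(W, noisy))  # True: attractor lies lower
```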
87 Material Chemistry Level Deformation and Failure in Cementitious Materials
Authors: Ram V. Mohan, John Rivas-Murillo, Ahmed Mohamed, Wayne D. Hodo
Abstract:
Cementitious materials are an excellent example of highly complex, heterogeneous material systems: cement-based systems, including cement paste, mortar, and concrete, that are heavily used in civil infrastructure. Though commonly used, they are among the most complex materials in terms of morphology and structure, more so than, for example, crystalline metals. Processes and features occurring at nanometer-sized morphological structures affect the performance and deformation/failure behavior at larger length scales. In addition, cementitious materials undergo chemical and morphological changes, gaining strength during the transient hydration process. Hydration in cement is a very complex process, creating complex microstructures and associated molecular structures that vary with hydration. A fundamental understanding can be gained through multi-scale modeling of the behavior and properties of cementitious materials, starting from the material chemistry level at the atomistic scale, to further explore their role and the manifested effects at larger length and engineering scales. This predictive modeling enables understanding and studying the influence of material chemistry level changes and nanomaterial additives on the expected resultant material characteristics and deformation behavior. Atomistic molecular dynamics level modeling is required to couple material science to engineering mechanics. Starting at the molecular level, a comprehensive description of the material’s chemistry is required to understand the fundamental properties that govern behavior occurring across each relevant length scale. Material chemistry level models and molecular dynamics modeling and simulations are employed in our work to describe the molecular-level chemistry features of calcium-silicate-hydrate (CSH), one of the key hydrated constituents of cement paste, and their associated deformation and failure. The molecular-level atomic structure of CSH can be represented by the Jennite mineral structure, which has been widely accepted by researchers and is typically used to represent the molecular structure of the CSH gel formed during the hydration of cement clinkers. This paper will focus on our recent work on the shear and compressive deformation and failure behavior of CSH represented by the Jennite structure. The deformation and failure behavior under shear and compressive loading in traditionally hydrated CSH, the effect of material chemistry changes on the predicted stress-strain behavior, the transition from linear to non-linear behavior, and the identification of the onset of failure based on the material chemistry structure of CSH Jennite and changes in its chemistry structure will be discussed. Keywords: cementitious materials, deformation, failure, material chemistry modeling
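A minimal sketch of one way the linear-to-non-linear transition in a simulated stress-strain response can be identified in post-processing (the synthetic curve, small-strain window, and 5% deviation threshold are all illustrative assumptions, not the paper's actual data or method):

```python
import numpy as np

# Synthetic stand-in for an MD stress-strain output: an elastic regime
# that softens toward failure (shape and values are illustrative)
strain = np.linspace(0.0, 0.30, 300)
stress = 2.5 * np.tanh(strain / 0.08)             # GPa

# Fit the initial linear (elastic) regime to estimate a modulus
elastic = strain < 0.02                            # assumed small-strain window
modulus = np.polyfit(strain[elastic], stress[elastic], 1)[0]

# Onset of non-linearity: first point deviating by more than 5% of the
# peak stress from the linear extrapolation
deviation = np.abs(stress - modulus * strain) > 0.05 * stress.max()
onset = strain[deviation][0]
print(f"modulus ~ {modulus:.1f} GPa; non-linear onset near strain {onset:.3f}")
```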
86 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano
Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das
Abstract:
Bulk MgH2 has drawn much attention for the purpose of hydrogen storage because of its high hydrogen storage capacity (~7.7 wt %) as well as its low cost and abundant availability. However, its practical usage has been hindered by its high hydrogen desorption enthalpy (~0.8 eV/H2 molecule), which results in an undesirable desorption temperature of 300 °C at 1 bar H2 pressure. To surmount the limitations of bulk MgH2 for the purpose of hydrogen storage, a detailed first-principles density functional theory (DFT) based study on the structure and stability of neutral (Mgm) and positively charged (Mgm+) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H2), is reported here. It has been found that, due to the absence of d-electrons in the Mg atoms, hydrogen remains in molecular form even after its interaction with neutral and charged Mg nanoclusters. Interestingly, the H2 molecules do not enter the interstitial positions of the nanoclusters. Rather, they remain on the surface, ornamenting these nanoclusters and forming new structures with a gravimetric density higher than 15 wt %. Our observation is that the inclusion of Grimme’s DFT-D3 dispersion correction in this weakly interacting system has a significant effect on the binding of the H2 molecules with these nanoclusters. The dispersion-corrected interaction energy (IE) values (0.1-0.14 eV/H2 molecule) fall in the right energy window that is ideal for hydrogen storage. These IE values are further verified by using high-level coupled-cluster calculations with non-iterative triples corrections, i.e., CCSD(T) (which is considered to be a highly accurate quantum chemical method), thereby confirming the accuracy of our dispersion-correction-incorporated DFT calculations. The significance of the polarization and dispersion energy in the binding of the H2 molecules is confirmed by performing energy decomposition analysis (EDA). A total of 16, 24, 32 and 36 H2 molecules can be attached to the neutral and charged nanoclusters of size m = 2, 4, 8 and 12, respectively. Ab initio molecular dynamics (AIMD) simulation shows that the outermost H2 molecules are desorbed at a rather low temperature, viz. 150 K (-123 °C), which is expected. However, complete dehydrogenation of these nanoclusters occurs at around 100 °C. Most importantly, the host nanoclusters remain stable up to ~500 K (227 °C). All these results on the adsorption and desorption of molecular hydrogen with neutral and charged Mg nanocluster systems indicate the possibility of reducing the dehydrogenation temperature of bulk MgH2 by designing new Mg-based nanomaterials that will be able to adsorb molecular hydrogen via this weak Mg-H2 interaction, rather than the strong Mg-H bonding. Notwithstanding the fact that in practical applications these interactions will be further complicated by the effect of substrates as well as interactions with other clusters, the present study has implications for our fundamental understanding of this problem. Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption
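A quick back-of-the-envelope check of the quoted gravimetric densities, taking the cluster sizes and H2 counts from the abstract (standard atomic masses; a consistency check, not part of the reported calculations):

```python
# Gravimetric density: mass of adsorbed H2 over total cluster + H2 mass
M_MG, M_H2 = 24.305, 2.016          # g/mol

# (cluster size m, number of adsorbed H2) pairs quoted in the abstract
for m, n_h2 in [(2, 16), (4, 24), (8, 32), (12, 36)]:
    wt = 100 * n_h2 * M_H2 / (m * M_MG + n_h2 * M_H2)
    print(f"Mg{m} + {n_h2} H2 -> {wt:.1f} wt %")

# Every case comes out above 15 wt %, consistent with the abstract's claim
```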
85 Community Music in Puerto Rico
Authors: Francisco Luis Reyes
Abstract:
The multiple-case study explores the intricacies of three Puerto Rican Community Music (CM) initiatives. This research concentrates on the teaching and learning dynamics of three of the nation’s traditional musical genres, Plena, Bomba, and Música Jíbara, which have survived for centuries through oral transmission and enculturation in community settings. Accordingly, this research focuses on how music education is carried out in Puerto Rican CM initiatives that foster and preserve the country’s traditional music. This study examines the CM initiatives of La Junta, in Santurce (Plena), Taller Tambuyé in Rio Piedras (Bomba), and Decimanía (Música Jíbara), an initiative that stems from the municipality of Hatillo. In terms of procedure, 45–60-minute semi-structured interviews were conducted with organizers and administrators of the CM initiatives to gain insight into the educational philosophy of each project. Following this, a second series of 45–60-minute semi-structured interviews was undertaken with CM educators to collect data on their musical development, teaching practices, and relationships with learners. Subsequently, four weeks were spent observing/participating in each of the three CM initiatives. In addition to participant observations in these projects, five CM learners from each locale were recruited for two one-on-one semi-structured interviews at the beginning and end of the data collection period. The initial interview centered on the participants’ rationale for joining the CM initiative, whereas the exit interview focused on participants’ experience within it. Alumni from each of the CM initiatives partook in 45–60-minute semi-structured interviews to investigate their understanding of what it means to be a member of each musical community. Finally, observations and documentation of additional activities hosted/promoted by each initiative, such as festivals, concerts, social gatherings, and workshops, were undertaken. These three initiatives were chosen because of their robust and dynamic practices in fostering the musical expressions of Puerto Rico. Data collection consisted of participant observation, narrative inquiry, historical research, philosophical inquiry, and semi-structured interviews. Data analysis for this research involved relying on theoretical propositions, which entails comparing the results—from each case and as a collective—to the arguments that led to the basis of the research (e.g., literature review, research questions, hypothesis). Comparisons to the theoretical propositions were made through pattern matching, which requires comparing predicted patterns from the literature review to findings from each case. This process led to the development of an analytic outlook on each CM case and a cross-case synthesis. The purpose of employing this data analysis methodology is to present robust findings about CM practices in Puerto Rico and elucidate similarities and differences between the cases that comprise this research and the relevant literature. Furthermore, through the use of Sound Links’ Nine Domains of Community Music, comparisons to other community projects are made in order to point out parallels and highlight particularities in Puerto Rico. Keywords: community music, Puerto Rico, music learning, traditional music
84 Photophysics of a Coumarin Molecule in Graphene Oxide Containing Reverse Micelle
Authors: Aloke Bapli, Debabrata Seth
Abstract:
Graphene oxide (GO) is the two-dimensional (2D) nanoscale allotrope of carbon. Its physicochemical properties, such as high mechanical strength, high surface area, and strong thermal and electrical conductivity, make it an important candidate for various modern applications such as drug delivery, supercapacitors, and sensors. GO has also been used in the photothermal treatment of cancers and Alzheimer’s disease. The main reason for choosing GO in our work is that it is surface active: it has a large number of hydrophilic functional groups, such as carboxylic acid, hydroxyl, and epoxide, on its surface and in its basal plane, so it can easily interact with organic fluorophores through hydrogen bonding or other kinds of interaction and thereby modulate the photophysics of probe molecules. We have used different spectroscopic techniques for our work. The ground-state absorption spectra and steady-state fluorescence emission spectra were measured using a UV-Vis spectrophotometer from Shimadzu (model UV-2550) and a spectrofluorometer from Horiba Jobin Yvon (model Fluoromax 4P), respectively. All the fluorescence lifetime and anisotropy decays were collected using a time-correlated single photon counting (TCSPC) setup from Edinburgh Instruments (model LifeSpec-II, U.K.). Herein, we describe the photophysics of the hydrophilic molecule 7-(N,N′-diethylamino)coumarin-3-carboxylic acid (7-DCCA) in reverse micelles containing GO. It was observed that the photophysics of the dye is modulated in the presence of GO compared to its photophysics inside reverse micelles in the absence of GO. Here we report the solvent relaxation and rotational relaxation times in GO-containing reverse micelles and compare them with the normal reverse micelle system using the 7-DCCA molecule; "normal reverse micelle" means a reverse micelle in the absence of GO. The absorption maxima of 7-DCCA were blue-shifted and the emission maxima were red-shifted in GO-containing reverse micelles compared to normal reverse micelles. The rotational relaxation in GO-containing reverse micelles is always faster compared to normal reverse micelles. The solvent relaxation time at lower w₀ values is always slower in GO-containing reverse micelles compared to normal reverse micelles, and at higher w₀ the solvent relaxation time of GO-containing reverse micelles becomes almost equal to that of normal reverse micelles. The emission maximum of 7-DCCA exhibits a bathochromic shift in GO-containing reverse micelles compared to that in normal reverse micelles because, in the presence of GO, the polarity of the system increases, and as polarity increases the emission maximum is red-shifted. The average decay time in GO-containing reverse micelles is less than that in normal reverse micelles. In GO-containing reverse micelles, the quantum yield, decay time, rotational relaxation time, and solvent relaxation time at λₑₓ = 375 nm are always higher than at λₑₓ = 405 nm, showing the excitation-wavelength-dependent photophysics of 7-DCCA in GO-containing reverse micelles. Keywords: photophysics, reverse micelle, rotational relaxation, solvent relaxation
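For reference, the quantities quoted above are typically extracted from the standard textbook relations (general definitions, not equations specific to this study; the single-exponential anisotropy decay is the simplest assumed form):

```latex
% Solvent relaxation: the time-resolved emission peak frequency \nu(t)
% defines the solvent correlation function, whose decay constant(s)
% give the solvent relaxation time:
C(t) = \frac{\nu(t) - \nu(\infty)}{\nu(0) - \nu(\infty)}
% Rotational relaxation: the fluorescence anisotropy, built from the
% parallel and perpendicular polarized decays (G corrects for the
% instrument's polarization bias), in the simplest case decays as a
% single exponential with rotational relaxation time \theta_r:
r(t) = \frac{I_{\parallel}(t) - G\,I_{\perp}(t)}
            {I_{\parallel}(t) + 2G\,I_{\perp}(t)} = r_0\,e^{-t/\theta_r}
```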
83 Conceptualizing a Biomimetic Fablab Based on the Makerspace Concept and Biomimetics Design Research
Authors: Petra Gruber, Ariana Rupp, Peter Niewiarowski
Abstract:
This paper presents a concept for a biomimetic fablab as a physical space for education, research and development of innovation inspired by nature. Biomimetics as a discipline finds increasing recognition in academia and has started to be institutionalized at universities in programs and centers. The Biomimicry Research and Innovation Center was founded in 2012 at the University of Akron as an interdisciplinary venture for the advancement of innovation inspired by nature and is part of a larger community fostering the approach of biomimicry in the Great Lakes region of the US. With 30 faculty members, the center has representatives from the Colleges of Arts and Sciences (e.g., biology, chemistry, geoscience, and philosophy), Engineering (e.g., mechanical, civil, and biomedical), Polymer Science, and the Myers School of Art. A platform for training PhDs in Biomimicry (17 students currently enrolled) is co-funded by educational institutions and industry partners. Research at the center touches on many areas but is currently biased towards materials and structures, with highlights being materials based on principles found in spider silk and gecko attachment mechanisms. As biomimetics is also a novel scientific discipline, there is little standardisation in programming and the equipment of research facilities. As a field targeting innovation, design and prototyping processes are fundamental parts of its developments. For experimental design and prototyping, MIT's maker space concept seems to fit the requirements well, but facilities need to be more specialised in terms of access to biological systems and knowledge, and specific research, production or conservation requirements. For the education and research facility BRIC, we conceptualize a biomimicry fablab that ties into the existing maker space concept and creates the setting for the interdisciplinary research and development carried out in the program. The concept takes the process of biomimetics as a guideline to define core activities that shall be enhanced by the allocation of specific spaces and tools. The limitations of such a facility and the intersections with further specialised labs housed in the classical departments are of special interest. As a preliminary proof of concept, two biomimetic design courses carried out in 2016 are investigated in terms of needed tools and infrastructure. The spring course was a problem-based biomimetic design challenge in collaboration with an innovation company interested in product design for assisted living and medical devices. The fall course was a solution-based biomimetic design course focusing on order and hierarchy in nature, with the goal of finding meaningful translations into art and technology. The paper describes the background of the BRIC center, identifies and discusses the process of biomimetics, evaluates the classical maker space concept, and explores how these elements can shape the proposed research facility of a biomimetic fablab by examining the two design courses held in 2016. Keywords: biomimetics, biomimicry, design, biomimetic fablab
82 “It’s All in Your Head”: Epistemic Injustice, Prejudice, and Power in the Modern Healthcare System
Authors: David Tennison
Abstract:
Epistemic injustice, an injustice done to a person specifically in their capacity as a “knower”, is a subtle form of discrimination, yet its effects can be as dehumanizing and damaging as more overt forms of discrimination. The lens of epistemic injustice has, in recent years, been fruitfully applied to the field of healthcare, examining questions of agency, power, credibility and belief in doctor-patient interactions. Contested illness patients (e.g., those with illnesses lacking scientific consensus, such as fibromyalgia (FM), Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS) and Long Covid) face higher levels of scrutiny than other patient groups and are often disbelieved or dismissed when their ailments cannot be easily imaged or tested for, a predicament often encapsulated by the expression “it’s all in your head”. Using the case study of FM, the trials of contested illness patients in healthcare can be conceptualized in terms of epistemic injustice, and what is going wrong in these doctor-patient relationships can be effectively diagnosed. This case study also helps reveal epistemic dysfunction (structural epistemic issues embedded in the healthcare system), how this relates to stigma and identity-based prejudice, and how the healthcare system upholds existing societal hierarchies and disenfranchises the most vulnerable. In the modern landscape, where cases of these chronic illnesses are not only on the rise but future pandemics threaten to add to their number, this conversation is crucial for the well-being of patients and providers. This presentation will cover what epistemic injustice is and how it can be applied to the politics of the doctor-patient interaction on a micro level and to the politics of the healthcare system more broadly. Contested illnesses will be explored in terms of how the “contested” label causes the patient to experience disease stigma and lowers their credibility in healthcare and across other aspects of life. This will be explored in tandem with a discussion of existing identity-based prejudice in the healthcare system and how social identities (such as those of gender, race, and socioeconomic status) intersect with the contested illness label. The effects of epistemic injustice, which include worsening patients’ mental health symptoms and potentially disenfranchising them from the healthcare system altogether, will be presented alongside the potential ethical quandaries this poses for providers. Finally, issues with the way healthcare appointments and the modern NHS function will be explored in terms of epistemic injustice, and solutions to improve doctor-patient communication and patient care will be discussed. The relationship between contested illness patients and healthcare providers is notoriously poor, and while this can mean frustration or feelings of unfulfillment for providers, the negative effects for patients are much more severe. The purpose of this research, then, is to highlight these issues and suggest ways to improve the healthcare experience for these patients, along with improving doctor-patient communication and mending the doctor-patient relationship in a tangible and realistic way. This research also aims to provoke important conversations about belief and hierarchy in medical settings and how these aspects intersect with identity prejudices. Keywords: epistemic injustice, fibromyalgia, contested illnesses, chronic illnesses, doctor-patient relationships, philosophy of medicine
81 The Establishment of Primary Care Networks (England, UK) Throughout the COVID-19 Pandemic: A Qualitative Exploration of Workforce Perceptions
Authors: Jessica Raven Gates, Gemma Wilson-Menzfeld, Professor Alison Steven
Abstract:
In 2019, the Primary Care system in the UK National Health Service (NHS) was subject to reform and restructuring. Primary Care Networks (PCNs) were established, which aligned with a trend towards integrated care both within the NHS and internationally. The introduction of PCNs brought groups of GP practices in a locality together to operate as a network, build on existing services, and collaborate at a larger scale. PCNs were expected to bring a range of benefits to patients and to address some of the workforce pressures in the NHS through an expanded and collaborative workforce. The early establishment of PCNs was disrupted by the emerging COVID-19 pandemic. This study, set in the context of the pandemic, aimed to explore the experiences of the PCN workforce and their perceptions of the establishment of PCNs. Specific objectives focussed on examining factors perceived as enabling or hindering the success of a PCN, the impact on day-to-day work, the approach to implementing change, and the influence of the COVID-19 pandemic upon PCN development. This study is part of a three-phase PhD project that utilized qualitative approaches and was underpinned by social constructionist philosophy. Phase 1: a systematic narrative review explored the provision of preventative healthcare services in UK primary care settings and examined facilitators and barriers to delivery as experienced by the workforce. Phase 2: informed by the findings of phase 1, semi-structured interviews were conducted with fifteen participants (PCN workforce). Phase 3: follow-up interviews were conducted with the original participants to examine any changes to their experiences and perceptions of PCNs. Three main themes span phases 2 and 3 and were generated through a Framework Analysis approach: 1) working together at scale, 2) network infrastructure, and 3) PCN leadership. Findings suggest that through efforts to work together at scale and collaborate as a network, participants have broadly accepted the concept of PCNs. However, the workforce has been hampered by system design and system complexity. Operating against such barriers has had a negative psychological impact on some PCN leaders and others in the PCN workforce. While the pandemic undeniably increased pressure on healthcare systems around the world, it also acted as a disruptor, offering a glimpse into how collaboration in primary care can work well. Through the integration of findings from all phases, a new theoretical model has been developed, which conceptualises the findings from this PhD study and demonstrates how the workforce has experienced change associated with the establishment of PCNs. The model includes a contextual component of the COVID-19 pandemic and has been informed by concepts from Complex Adaptive Systems theory. This model is the original contribution to knowledge of the PhD project, alongside recommendations for practice, policy and future research. This study is significant in the realm of health services research, and while the setting for this study is the UK NHS, the findings will be of interest to an international audience, as the research provides insight into how the healthcare workforce may experience imposed policy and service changes. Keywords: health services research, qualitative research, NHS workforce, primary care
80 Facilitating the Learning Environment as a Servant Leader: Empowering Self-Directed Student Learning
Authors: Thomas James Bell III
Abstract:
Pedagogy is thought of as one's philosophy, theory, or method of teaching. This study examines the science of learning, considering the forced reconsideration of effective pedagogy brought on by the aftermath of the 2020 coronavirus pandemic. With the aid of various technologies, online education holds challenges and promises to enhance the learning environment if implemented to facilitate student learning. Behaviorism centers around the belief that the instructor is the sage on the classroom stage, using repetition techniques as the primary learning instrument. This approach to pedagogy ascribes complete control of the learning environment to the instructor and works best for student learning by allowing students to answer questions with immediate feedback. Such structured learning reinforcement tends to guide students' learning without considering the learners' independence and individual reasoning, and such activities may inadvertently stifle the students' ability to develop critical thinking and self-expression skills. Fundamentally, liberationist pedagogy dismisses the concept that education is merely about students learning things and holds that it is more about the way students learn. Alternatively, the liberationist approach democratizes the classroom by redefining the roles of teacher and student. The teacher is no longer viewed as the sage on the stage but as a guide on the side. Instead, this approach views students as creators of knowledge, not as empty vessels to be filled with knowledge. Moreover, students are well suited to decide how best to learn and in which areas improvement is needed. This study will explore the classroom instructor as a servant leader in the twenty-first century, which allows students to integrate technology that encapsulates more individual learning styles. The researcher will examine the Professional Scrum Master (PSM I) exam pass rate results of 124 students in six sections of an Agile scrum course. The students will be separated into two groups; the first group will follow a structured instructor-led course outlined by a course syllabus. The second group will consist of several small teams (ten or fewer students) of self-led and self-empowered students. The teams will conduct several event meetings, including sprint planning meetings, daily scrums, sprint reviews, and retrospective meetings, throughout the semester, with the instructor facilitating the teams' activities as needed. The methodology for this study will use a compare-means t-test to compare the mean exam pass rate of one group to the mean of the second group. A one-tailed test (i.e., less than or greater than) will be used, with the null hypothesis that the difference between the groups in the population is zero. The major findings will expand on the pedagogical approach that suggests pedagogy primarily exists in support of teacher-led learning, which has formed the pillars of traditional classroom teaching. But in light of the fourth industrial revolution, there is a fusion of learning platforms across the digital, physical, and biological worlds, with disruptive technological advancements in areas such as the Internet of Things (IoT), artificial intelligence (AI), 3D printing, robotics, and others. Keywords: pedagogy, behaviorism, liberationism, flipping the classroom, servant leader instructor, agile scrum in education
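A minimal sketch of the planned comparison (the pass/fail data here are simulated placeholders with assumed pass rates; the actual study would use the recorded PSM I results for the two groups):

```python
import numpy as np
from scipy import stats

# Hypothetical pass (1) / fail (0) outcomes for the two groups of students
rng = np.random.default_rng(42)
instructor_led = rng.binomial(1, 0.70, size=62)   # assumed 70% pass rate
self_led_teams = rng.binomial(1, 0.82, size=62)   # assumed 82% pass rate

# Independent-samples compare-means t-test, one-tailed:
# H0: the difference of the population means is zero
# H1: the self-led group's mean pass rate is greater
t_stat, p_value = stats.ttest_ind(self_led_teams, instructor_led,
                                  alternative="greater")
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.3f}")
```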
79 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times is studied, an approach known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion due to nuclear fusion led to a reduction in its temperature and density. This is evidenced through the cosmic microwave background and the universe's structure at a large scale. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics, and the big bang theory is no longer valid. In addition, one can expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which is contradictory to the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature can be achieved across the early universe. Also, the evidence of quantum fluctuations from this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of universe creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level that is referred to as the "base energy." The governing principles of the base energy are discussed in detail in our second paper in the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy as one of the main building blocks of this universe. Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution
Procedia PDF Downloads 99
78 Place-Making Theory behind Claremont Court
Authors: Sandra Costa-Santos, Nadia Bertolino, Stephen Hicks, Vanessa May, Camilla Lewis
Abstract:
This paper aims to elaborate the architectural theory on place-making that supported the Claremont Court housing scheme (Edinburgh, United Kingdom). Claremont Court (1959-62) is a large post-war mixed-development housing scheme designed by Basil Spence, which included 'place-making' as one of its founding principles. Although some stylistic readings of the housing scheme have been published, the theory on place-making that allegedly ruled the design has yet to be clarified. Architecture allows us to mark or make a place within space in order to dwell. Under the framework of contemporary philosophical theories of place, this paper aims to explore the relationship between place and dwelling through a cross-disciplinary reading of Claremont Court, with a view to developing an architectural theory on place-making. Since dwelling represents the way we are immersed in our world in an existential manner, this theme is relevant not just for architecture but also for philosophy and sociology. The research in this work is interpretive-historic in nature. It examines documentary evidence of the original architectural design, together with relevant literature in sociology, history, and architecture, through the lens of theories of place. First, the paper explores how the dwelling types originally included in Claremont Court supported ideas of dwelling or meanings of home. Then, it traces shared space and social ties in order to study the symbolic boundaries that allow the creation of a collective identity or sense of belonging. Finally, the relation between the housing scheme and the supporting theory is identified. The findings of this research reveal Scottish architect Basil Spence's exploration of the meaning of home, as he changed his approach to mass housing while acting as President of the Royal Institute of British Architects (1958-60). When the British Government was engaged in various ambitious building programmes, he sought to draw architecture into a wider socio-political debate as president of the RIBA, hence moving towards a more ambitious and innovative socio-architectural approach. Rather than trying to address the 'genius loci' with an architectural proposition, as has been stated, the research shows that the place-making theory behind the housing scheme was supported by notions of community based on shared space and dispositions. The design of the housing scheme was steered by a desire to foster social relations and collective identities, rather than by the idea of keeping the spirit of the place. This research is part of a cross-disciplinary project funded by the Arts and Humanities Research Council. The findings present Claremont Court as a signifier of Basil Spence's attempt to address the post-war political debate on housing in the United Kingdom. They highlight the architect's theoretical agenda and challenge current purely stylistic readings of Claremont Court, which fail to acknowledge its social relevance.
Keywords: architectural theory, dwelling, place-making, post-war housing
Procedia PDF Downloads 265
77 Cycle-Oriented Building Components and Constructions Made from Paper Materials
Authors: Rebecca Bach, Evgenia Kanli, Nihat Kiziltoprak, Linda Hildebrand, Ulrich Knaack, Jens Schneider
Abstract:
The building industry has a high demand for resources and is at the same time responsible for a significant amount of the waste created worldwide. Today's building components need to contribute to the protection of natural resources without creating waste. This is determined in the product development phase and impacts the product's degree of being cycle-oriented. Paper-based materials are advantageous here due to their renewable origin and their ability to incorporate different functions. Besides ecological aspects like renewable origin and recyclability, the main advantages of paper materials are their lightweight but stiff structure, optimized production processes, and good insulation values. The main deficits from a building technology perspective are the material's vulnerability to humidity and water as well as its flammability. On the material level, those problems can be solved by coatings or through material modification. On the construction level, intelligent setup and layering of a building component can improve and also solve these issues. The target of the present work is to provide an overview of developed building components and construction typologies mainly made from paper materials. The research is structured in four parts: (1) functions and requirements, (2) preselection of paper-based materials, (3) development of building components, and (4) evaluation. As part of the research methodology, the needs of the building sector are first analyzed with the aim of defining the main areas of application and, consequently, the requirements. Various paper materials are tested in order to identify to what extent the requirements are satisfied and to determine potential optimizations or modifications, also in combination with other construction materials. By making use of the material's potentials and solving the deficits on the material and construction levels, building components and construction typologies are developed. The evaluation and the calculation of the structural mechanics and structural principles show that different construction typologies can be derived. Profiles like paper tubes are best used for skeleton constructions. Massive structures, on the other hand, can be formed by plate-shaped elements like solid board or honeycomb. For insulation purposes, corrugated cardboard or cellulose flakes have the best properties, while layered solid board can be applied to prevent interstitial condensation. By enhancing these properties through material combinations, for instance with mineral coatings, functional constructions made mainly of paper materials were developed. In summary, paper materials offer a huge variety of possible applications in the building sector. Through these studies, a general base of knowledge about how to build with paper was developed, and it is to be reinforced by further research.
Keywords: construction typologies, cycle-oriented construction, innovative building material, paper materials, renewable resources
Procedia PDF Downloads 275
76 Enhancing Algal Bacterial Photobioreactor Efficiency: Nutrient Removal and Cost Analysis Comparison for Light Source Optimization
Authors: Shahrukh Ahmad, Purnendu Bose
Abstract:
Algal-bacterial photobioreactors (ABPBRs) have emerged as a promising technology for sustainable biomass production and wastewater treatment. Nutrient removal is seldom performed in sewage treatment plants, so large volumes of wastewater that still contain nutrients are discharged, which can lead to eutrophication. That is why the ABPBR plays a vital role in wastewater treatment. However, improving the efficiency of the ABPBR remains a significant challenge. This study aims to enhance ABPBR efficiency by focusing on two key aspects: nutrient removal and cost-effective optimization of the light source. By integrating nutrient removal and cost analysis for light source optimization, this study proposes practical strategies for improving ABPBR efficiency. To reduce organic carbon and convert ammonia to nitrates, domestic wastewater from a 130 MLD sewage treatment plant (STP) was aerated with a hydraulic retention time (HRT) of 2 days. The treated supernatant had approximate nitrate and phosphate values of 16 ppm as N and 6 ppm as P, respectively. This supernatant was then fed into the ABPBR, and the removal of nutrients (nitrate as N and phosphate as P) was observed using different colored LED bulbs, namely white, blue, red, yellow, and green. The ABPBR operated on a 9-hour light and 3-hour dark cycle, using only one color of bulb per cycle. The study found that the white LED bulb, with a photosynthetic photon flux density (PPFD) value of 82.61 µmol·m⁻²·s⁻¹, exhibited the highest removal efficiency. It achieved a removal rate of 91.56% for nitrate and 86.44% for phosphate, surpassing the other colored bulbs. Conversely, the green LED bulbs showed the lowest removal efficiencies, with 58.08% for nitrate and 47.48% for phosphate at an HRT of 5 days. A quantum PAR (photosynthetically active radiation) meter measured the photosynthetic photon flux density for each colored bulb setting inside the photo chamber, confirming that the white LED bulbs operated over a wider wavelength band than the others. Furthermore, a cost comparison was conducted for each colored bulb setting. The study revealed that the white LED bulb had the lowest average cost (INR) per unit light intensity (µmol·m⁻²·s⁻¹) at 19.40, while the green LED bulbs had the highest at 115.11. Based on these comparative tests, it was concluded that the white LED bulbs were the most efficient and cost-effective light source for an algal photobioreactor. They can be effectively utilized for nutrient removal from secondary treated wastewater, which helps improve the overall wastewater quality before it is discharged back into the environment.
Keywords: algal bacterial photobioreactor, domestic wastewater, nutrient removal, LED bulbs
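The two figures of merit used above reduce to simple ratios; in the sketch below, the effluent concentration (1.35 ppm) and the white-bulb average cost (1602.6 INR) are assumed values chosen only to reproduce the reported 91.56% removal and 19.40 cost/intensity figures, not measurements from the study:

```python
# Figures of merit from the study, expressed as simple functions.
def removal_efficiency(c_in_ppm: float, c_out_ppm: float) -> float:
    """Percent nutrient removal from influent/effluent concentrations (ppm)."""
    return 100.0 * (c_in_ppm - c_out_ppm) / c_in_ppm

def cost_per_intensity(avg_cost_inr: float, ppfd: float) -> float:
    """Average cost (INR) per unit light intensity (umol m^-2 s^-1)."""
    return avg_cost_inr / ppfd

# Assumed effluent of 1.35 ppm reproduces the reported nitrate removal;
# an assumed 1602.6 INR average cost reproduces the white-bulb ratio.
print(f"{removal_efficiency(16.0, 1.35):.2f} % nitrate removal")     # ~91.56
print(f"{cost_per_intensity(1602.6, 82.61):.2f} INR per unit PPFD")  # ~19.40
```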
Procedia PDF Downloads 77
75 Surface Defect-Engineered CeO₂₋ₓ by Ultrasound Treatment for Superior Photocatalytic H₂ Production and Water Treatment
Authors: Nabil Al-Zaqri
Abstract:
Semiconductor photocatalysts with surface defects display a considerable light absorption bandwidth, and these defects function as highly active sites for oxidation processes by interacting with the surface band structure. Accordingly, engineering the photocatalyst with surface oxygen vacancies will enhance the semiconductor nanostructure's photocatalytic efficiency. Herein, a CeO₂₋ₓ nanostructure is designed under the influence of low-frequency ultrasonic waves to create surface oxygen vacancies. This approach enhances the photocatalytic efficiency compared to many heterostructures while keeping the intrinsic crystal structure intact. Ultrasonic waves induce the acoustic cavitation effect, leading to the dissemination of active elements on the surface, which results in vacancy formation in conjunction with a larger surface area and smaller particle size. The structural analysis of CeO₂₋ₓ revealed higher crystallinity as well as morphological optimization, and the presence of oxygen vacancies is verified through Raman, X-ray photoelectron spectroscopy, temperature-programmed reduction, photoluminescence, and electron spin resonance analyses. Oxygen vacancies accelerate the redox cycle between Ce⁴⁺ and Ce³⁺ by delaying photogenerated charge recombination. The ultrasound-treated pristine CeO₂ sample achieved excellent hydrogen production, showing a quantum efficiency of 1.125%, as well as efficient organic degradation. Our promising findings demonstrated that ultrasonic treatment causes the formation of surface oxygen vacancies and improves photocatalytic hydrogen evolution and pollutant degradation. Conclusion: Defect engineering of the ceria nanoparticles with oxygen vacancies was achieved for the first time using low-frequency ultrasound treatment. The U-CeO₂₋ₓ sample showed high crystallinity, and morphological changes were observed. Due to the acoustic cavitation effect, a larger surface area and smaller particle size were observed. The ultrasound treatment causes particle aggregation and surface defects, leading to oxygen vacancy formation. The XPS, Raman spectroscopy, PL spectroscopy, and ESR results confirm the presence of oxygen vacancies. The ultrasound-treated sample was also examined for pollutant degradation, where singlet oxygen (¹O₂) was found to be the major active species. Hence, the ultrasound treatment yields efficient photocatalysts for superior hydrogen evolution and excellent photocatalytic degradation of contaminants. The prepared nanostructure showed excellent stability and recyclability. This work could pave the way for a unique post-synthesis strategy intended for efficient photocatalytic nanostructures.
Keywords: surface defect, CeO₂₋ₓ, photocatalytic, water treatment, H₂ production
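For reference, quantum efficiency figures such as the 1.125% quoted above are conventionally computed as an apparent quantum efficiency (AQE); the definition below is the standard one for H₂ evolution and is an editorial assumption, since the abstract does not state its formula:

```latex
% Standard apparent quantum efficiency for photocatalytic H2 evolution.
% Assumed definition -- the abstract does not specify how 1.125% was computed.
\[
\mathrm{AQE}(\%) \;=\; \frac{2\, N_{\mathrm{H_2}}}{N_{\mathrm{photons}}} \times 100
\]
% N_H2: number of evolved H2 molecules; N_photons: number of incident photons;
% the factor 2 counts the two electrons consumed per H2 molecule.
```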
Procedia PDF Downloads 141
74 Assessing the Material Determinants of Cavity Polariton Relaxation Using Angle-Resolved Photoluminescence Excitation Spectroscopy
Authors: Elizabeth O. Odewale, Sachithra T. Wanasinghe, Aaron S. Rury
Abstract:
Cavity polaritons form when molecular excitons strongly couple to photons in carefully constructed optical cavities. These polaritons, which are hybrid light-matter states possessing a unique combination of photonic and excitonic properties, present the opportunity to manipulate the properties of various semiconductor materials. The systematic manipulation of materials through polariton formation could potentially improve the functionality of many optoelectronic devices such as lasers, light-emitting diodes, photon-based quantum computers, and solar cells. However, the prospect of leveraging polariton formation for novel devices and device operation depends on more complete connections between the properties of molecular chromophores and the hybrid light-matter states they form, which remains an outstanding scientific goal. Specifically, for most optoelectronic applications, it is paramount to understand how polariton formation affects the spectra of light absorbed by molecules coupled strongly to cavity photons. An essential feature of a polariton state is its dispersive energy, which arises from the enhanced spatial delocalization of the polaritons relative to bare molecules. To leverage the spatial delocalization of cavity polaritons, angle-resolved photoluminescence excitation spectroscopy was employed to characterize light emission from the polaritonic states. Using lasers of appropriate energies, the polariton branches were resonantly excited to understand how molecular light absorption changes under different strong light-matter coupling conditions. Since an excited state has a finite lifetime, the photon absorbed by the polariton decays non-radiatively into lower-lying molecular states, from which radiative relaxation to the ground state occurs. The resulting fluorescence is collected across several angles of excitation incidence. By modeling the behavior of the light emission observed from the lower-lying molecular state and combining this result with the output of angle-resolved transmission measurements, inferences are drawn about how the behavior of molecules changes when they form polaritons. These results show how intrinsic properties of molecules, such as the excitonic lifetime, affect the rate at which the polaritonic states relax. While it is true that the lifetime of the photon mediates the rate of relaxation in a cavity, the results from this study provide evidence that the lifetime of the molecular exciton also limits the rate of polariton relaxation.
Keywords: fluorescence, molecules in cavities, optical cavity, photoluminescence excitation spectroscopy, strong coupling
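Angle-resolved transmission data of the kind mentioned above are conventionally fit with the two-coupled-oscillator model; the textbook expressions below are included for orientation and are not taken from the abstract itself:

```latex
% Standard two-coupled-oscillator fit of angle-resolved polariton dispersion.
% Textbook expressions, shown as an editorial aid, not the authors' model.
\[
E_{\mathrm{LP,UP}}(\theta) \;=\;
\tfrac{1}{2}\bigl[E_{\mathrm{exc}} + E_{\mathrm{cav}}(\theta)\bigr]
\;\mp\;
\tfrac{1}{2}\sqrt{\bigl[E_{\mathrm{cav}}(\theta) - E_{\mathrm{exc}}\bigr]^{2} + (\hbar\Omega_{R})^{2}},
\qquad
E_{\mathrm{cav}}(\theta) \;=\; \frac{E_{0}}{\sqrt{1 - (\sin\theta / n_{\mathrm{eff}})^{2}}}
\]
% E_exc: exciton energy; E_0: normal-incidence cavity mode energy;
% n_eff: effective intracavity refractive index; hbar*Omega_R: Rabi splitting.
```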
Procedia PDF Downloads 73
73 From Design, Experience and Play Framework to Common Design Thinking Tools: Using Serious Modern Board Games
Authors: Micael Sousa
Abstract:
Board games (BGs) are thriving as new designs emerge from the hobby community and reach greater audiences all around the world. Although digital games gather most of the attention in the game studies and serious games research fields, the post-digital movement helps to explain why, in a world dominated by digital technologies, analog experiences are still unique and irreplaceable to users, allowing innovation in new hybrid environments. The new BG designs are part of these post-digital and hybrid movements because they result from the use of powerful digital tools that enable production and knowledge sharing about BGs and their unique face-to-face social experiences. These new BGs, defined as modern by many authors, provide innovative designs and unique game mechanics that are not yet fully explored by the main serious games (SG) approaches. Even the most established SG frameworks, which treat fun games as implements for achieving predefined goals, need more development, especially when considering modern BGs. Despite many anecdotal perceptions, researchers are only now starting to rediscover BGs and demonstrate their potential. They are proving that BGs are easy to adapt and to grasp by non-expert players in experimental approaches, with the possibility of easy-going adaptation to players' profiles and serious objectives even during gameplay. Although there are many design thinking (DT) models and practices, their relations with SG frameworks are also underdeveloped, mostly because this is a new research field lacking theoretical development and the systematization of experimental practices. Using BGs as case studies promises to help develop these frameworks. Departing from the Design, Experience, and Play (DPE) framework and considering the Common Design Thinking Tools (CDST), this paper proposes a new experimental framework for the adaptation and development of modern BG design for DT: the Design, Experience, and Play for Think (DPET) experimental framework. This is done through the systematization of the DPE and CDST approaches applied in two case studies, where two different sequences of adapted BGs were employed to establish a collaborative DT process. The two sessions occurred with different participants and in different contexts, also using different sequences of games for the same DT approach. The first session took place at the Faculty of Economics at the University of Coimbra in a training session on serious games for project development. The second session took place at Casa do Impacto through The Great Village Design Jam Light. Both sessions had the same duration and were designed to progressively achieve DT goals, using BGs as SGs in a collaborative process. The results from the sessions show that a sequence of BGs, when properly adapted to address the DPET framework, can generate a viable and innovative process of collaborative DT that is productive, fun, and engaging. The proposed DPET framework intends to help establish how new SG solutions could be defined for new goals through flexible DT. Applications in other areas of research and development can also benefit from these findings.
Keywords: board games, design thinking, methodology, serious games
Procedia PDF Downloads 111
72 The French Ekang Ethnographic Dictionary. The Quantum Approach
Authors: Henda Gnakate Biba, Ndassa Mouafon Issa
Abstract:
Dictionaries modeled on the Western model [for languages with a tonic accent] are not suitable for tonal languages and do not account for them phonologically, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker of this language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, according to Jean-Jacques Rousseau's 1776 hypothesis that to say and to sing were once the same thing. Each word in the French dictionary finds its correspondent in the ekaη language, and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When you apply this theory to any text of a folk song in a tonal language, you not only piece together the exact melody, rhythm, and harmonies of that song, as if you knew it in advance, but also the exact speaking of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. The experimentation confirming the theorization led to a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody of blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
Keywords: music, language, entanglement, science, research
Procedia PDF Downloads 69
71 Measuring the Impact of Social Innovation Education on Students' Engagement
Authors: Irene Kalemaki, Ioanna Garefi
Abstract:
Social Innovation Education (SIE) is a new educational approach that aims to empower students to take action for a more democratic and sustainable society. Conceptually and pedagogically, it is situated at the intersection of Enterprise Education and Citizenship Education, as it aspires to i) combine action with activism, ii) personal development with collective efficacy, iii) entrepreneurial mindsets with democratic values, and iv) individual competences with collective competences. This abstract presents the work of the NEMESIS project, funded by H2020, which aims to design, test, and validate the first consolidated approach for embedding Social Innovation Education in schools of primary and secondary education. During the academic year 2018-2019, eight schools from five European countries experimented with different approaches and methodologies to incorporate SIE in their settings. This paper reports briefly on these attempts and discusses the wider educational philosophy underlying these interventions, with a particular focus on analyzing the learning outcomes and impact on students. That said, this paper doesn't only report on the theoretical and practical underpinnings of SIE; most importantly, it provides evidence of the impact of SIE on students. In terms of methodology, the study took place from September 2018 to July 2019 in eight schools from Greece, Spain, Portugal, France, and the UK, directly involving 56 teachers, 1030 students, and 69 community stakeholders. Focus groups, semi-structured interviews, classroom observations, as well as students' written narratives, were used to extract data on the impact of SIE on students. The overall design of the evaluation activities was informed by a realist approach, which enabled us to go beyond "what happened" and towards understanding "why it happened". Research findings suggested that SIE can benefit students in terms of their emotional, cognitive, behavioral, and agentic engagement. Specifically, the emotional engagement of students increased because, through the SIE interventions, students' voices were heard, valued, and acted upon. This made students feel important to their school, increasing their sense of belonging, confidence, and level of autonomy. As regards cognitive engagement, both students and teachers reported positive outcomes, as SIE enabled students to take ownership of their ideas to drive their projects forward; students thus felt more motivated to perform in class because the work felt personal, important, and relevant to them. In terms of behavioral engagement, the inclusive environment and the collective relationships that were reinforced through the SIE interventions had a direct positive impact on behaviors among peers. Finally, with regard to agentic engagement, it was observed that students became very proactive, which was connected to the strong sense of ownership and enthusiasm developed during collective efforts to deliver real-life social innovations. Concluding, from a practical and policy point of view, these research findings could encourage the inclusion of SIE in schools, while from a research point of view, they could contribute to the scientific discourse, providing evidence and clarity on the emergent field of SIE.
Keywords: education, engagement, social innovation, students
Procedia PDF Downloads 137
70 Coupling Strategy for Multi-Scale Simulations in Micro-Channels
Authors: Dahia Chibouti, Benoit Trouette, Eric Chenier
Abstract:
With the development of micro-electro-mechanical systems (MEMS), understanding fluid flow and heat transfer at the micrometer scale is crucial. When the characteristic length scale of the flow narrows to around ten times the mean free path of the gas molecules, the classical fluid mechanics and energy equations are still valid in the bulk flow, but particular attention must be paid to the gas/solid interface boundary conditions. Indeed, in the vicinity of the wall, over a thickness of about one mean free path, called the Knudsen layer, the gas molecules are no longer in local thermodynamic equilibrium. Therefore, macroscopic models based on velocity, temperature, and heat flux jump conditions must be applied at the fluid/solid interface to take this non-equilibrium into account. Although these macroscopic models are widely used, the assumptions on which they depend are not necessarily verified in realistic cases. In order to get rid of these assumptions, simulations at the molecular scale are carried out to study how molecular interaction with the walls can change the fluid flow and heat transfer in the vicinity of the walls. The developed approach is based on a kind of heterogeneous multi-scale method: micro-domains overlap the continuous domain, and coupling is carried out through exchanges of information between the molecular and continuum approaches. In practice, molecular dynamics describes the fluid flow and heat transfer in the micro-domains, while the Navier-Stokes and energy equations are used at larger scales. In this framework, two kinds of micro-simulation are performed: i) in the bulk, to obtain the thermo-physical properties (viscosity, conductivity, ...) as well as the equation of state of the fluid, and ii) close to the walls, to identify the relationships between the slip velocity and the shear stress or between the temperature jump and the normal temperature gradient. The coupling strategy relies on an implicit formulation of the quantities extracted from the micro-domains. Indeed, using the results of the molecular simulations, a Bayesian regression is performed in order to build continuous laws giving the behavior of the physical properties, the equation of state, and the slip relationships, as well as their uncertainties. The latter make it possible to set up a learning strategy to optimize the number of micro-simulations. In the present contribution, the first results regarding this coupling, associated with the learning strategy, are illustrated through parametric studies of convergence criteria, the choice of basis functions, and the noise of input data. Anisothermal flows of a Lennard-Jones fluid in micro-channels are finally presented.
Keywords: multi-scale, microfluidics, micro-channel, hybrid approach, coupling
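A minimal sketch of the learning loop described above, assuming, purely as an illustration and not as the authors' implementation, a Gaussian-process regressor for the slip law: fit the molecular-dynamics samples, quantify the predictive uncertainty, and place the next micro-simulation where that uncertainty is largest. All data, units, and the choice of scikit-learn are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical MD samples of a slip law (reduced units): slip velocity vs shear stress.
shear_stress = np.array([[0.1], [0.3], [0.5], [0.8], [1.2]])
slip_velocity = np.array([0.02, 0.07, 0.12, 0.21, 0.33])

# WhiteKernel models the statistical scatter of the MD measurements.
kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(shear_stress, slip_velocity)

# Continuous slip law with uncertainty; the next micro-simulation is requested
# where the predictive standard deviation is largest.
query = np.linspace(0.05, 1.5, 50).reshape(-1, 1)
mean, std = gp.predict(query, return_std=True)
next_md_point = query[np.argmax(std), 0]
print(f"largest uncertainty at shear stress = {next_md_point:.2f}")
```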
Procedia PDF Downloads 166
69 Modification of Magneto-Transport Properties of Ferrimagnetic Mn₄N Thin Films by Ni Substitution and Their Magnetic Compensation
Authors: Taro Komori, Toshiki Gushi, Akihito Anzai, Taku Hirose, Kaoru Toko, Shinji Isogami, Takashi Suemasu
Abstract:
Ferrimagnetic antiperovskite Mn₄₋ₓNiₓN thin films exhibit both small saturation magnetization (MS) and rather large perpendicular magnetic anisotropy (PMA) when x is small. Both are suitable features for application to current-induced domain wall (DW) motion devices using spin-transfer torque (STT). In this work, we successfully grew antiperovskite 30-nm-thick Mn₄₋ₓNiₓN epitaxial thin films on MgO(001) and STO(001) substrates by MBE in order to investigate their crystalline qualities and their magnetic and magneto-transport properties. Crystalline qualities were investigated by X-ray diffraction (XRD). The magnetic properties were measured by vibrating sample magnetometry (VSM) at room temperature. The anomalous Hall effect was measured in a physical properties measurement system, also at room temperature. The temperature dependence of the magnetization was measured by a superconducting quantum interference device (SQUID) VSM. XRD patterns indicate epitaxial growth of Mn₄₋ₓNiₓN thin films on both substrates; those on STO(001) have higher c-axis orientation thanks to the better lattice matching. According to the VSM measurements, PMA was observed in Mn₄₋ₓNiₓN on MgO(001) when x ≤ 0.25 and on STO(001) when x ≤ 0.5, and MS decreased drastically with x. For example, the MS of Mn₃.₉Ni₀.₁N on STO(001) was 47.4 emu/cm³. From the anomalous Hall resistivity (ρAH) of Mn₄₋ₓNiₓN thin films on STO(001) with the magnetic field perpendicular to the plane, we found that Mr/MS was about 1 when x ≤ 0.25, which suggests large magnetic domains in the samples and features suitable for DW motion device applications. In contrast, such square curves were not observed for Mn₄₋ₓNiₓN on MgO(001), which we attribute to the difference in lattice matching. Furthermore, it is notable that although the sign of ρAH was negative when x = 0 and 0.1, it reversed to positive when x = 0.25 and 0.5. A similar reversal occurred in the temperature dependence of the magnetization: the magnetization of Mn₄₋ₓNiₓN on STO(001) increases with decreasing temperature when x = 0 and 0.1, while it decreases when x = 0.25. We consider that these reversals were caused by a magnetic compensation occurring in Mn₄₋ₓNiₓN between x = 0.1 and 0.25. We expect the Mn atoms of the Mn₄₋ₓNiₓN crystal to have larger magnetic moments than the Ni atoms. The temperature dependence stated above can be explained if we assume that Ni atoms preferentially occupy the corner sites and that their magnetic moments have a different temperature dependence from the Mn atoms at the face-centered sites. At the compensation point, Mn₄₋ₓNiₓN is expected to show very efficient STT and ultrafast DW motion at small current density. Moreover, if angular momentum compensation is found, the efficiency will be optimally enhanced. In order to prove the magnetic compensation, X-ray magnetic circular dichroism will be performed. Energy-dispersive X-ray spectrometry is a candidate method for analyzing the accurate composition ratio of the samples.
Keywords: compensation, ferrimagnetism, Mn₄N, PMA
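The compensation argument can be summarized in a minimal two-sublattice picture, assuming, as the abstract suggests, antiparallel face-centered and corner moments; this formulation is an editorial aid, not the authors' model:

```latex
% Minimal two-sublattice picture of ferrimagnetic compensation, assuming the
% face-centered (Mn) and corner (Mn/Ni) moments are antiparallel.
\[
M_{\mathrm{net}}(T, x) \;=\; \bigl|\, M_{\mathrm{fc}}(T) - M_{\mathrm{corner}}(T, x) \,\bigr|,
\qquad
M_{\mathrm{net}}(T_{\mathrm{comp}}, x_{\mathrm{comp}}) = 0 .
\]
% Compensation occurs where the sublattice magnetizations cancel; the sign
% reversal of the anomalous Hall resistivity between x = 0.1 and x = 0.25 is
% consistent with crossing this point.
```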
Procedia PDF Downloads 134
68 Maintaining Energy Security in Natural Gas Pipeline Operations by Empowering Process Safety Principles Through Alarm Management Applications
Authors: Huseyin Sinan Gunesli
Abstract:
Process Safety Management is a disciplined framework for managing the integrity of systems and processes that handle hazardous substances. It relies on good design principles, well-implemented automation systems, and sound operating and maintenance practices. Alarm management systems play a critically important role in the safe and efficient operation of modern industrial plants; in that respect, alarm management is one of the critical factors supporting safe plant operations through the application of effective process safety principles. The Trans Anatolian Natural Gas Pipeline (TANAP) is part of the Southern Gas Corridor, which extends from the Caspian Sea to Italy. TANAP transports natural gas from the Shah Deniz gas field of Azerbaijan, and possibly from other neighboring countries, to Turkey and, through the Trans Adriatic Pipeline (TAP), on to Europe. TANAP plays a crucial role in maintaining energy security for the region and Europe. In that respect, the application of process safety principles is vital to provide safe, reliable, and efficient natural gas delivery to shippers both in the region and in Europe. Effective alarm management is one of those process safety principles supporting safe operation of the TANAP pipeline. An alarm philosophy was designed and implemented in the TANAP pipeline according to the relevant standards. However, it is essential to manage the alarms received in the control room effectively to maintain safe operations. In that respect, TANAP commenced an alarm management and rationalization program in February 2022, after transitioning to the plateau regime and reaching the design parameters. When alarm rationalization started, more than circa 2,300 alarms per hour were being received from one of the compressor stations. After applying alarm management principles, such as the review and removal of bad actors and of standing, stale, chattering, and fleeting alarms, a comprehensive review and revision of alarm set points through a change management process, and alarm audits and design verification, the rate was reduced to circa 40 alarms per hour. With this successful implementation of alarm management principles, the number of alarms was brought down to industry standards. That significantly improved operator vigilance, allowing operators to focus on the important and critical alarms and avoid any excursion beyond safe operating limits that could lead to a process safety event. Following the "what gets measured gets managed" principle, TANAP has identified Key Performance Indicators (KPIs) to manage process safety principles effectively, with alarm management forming one of the key parameters of those KPIs. However, the review and analysis of alarms were performed manually, and without alarm management software, achieving full compliance with international standards is almost infeasible. In that respect, TANAP started using one of the industry-wide known alarm management applications to maintain full review and analysis of alarms and to define actions as required. That significantly empowered TANAP's process safety principles in terms of alarm management.
Keywords: process safety principles, energy security, natural gas pipeline operations, alarm rationalization, alarm management, alarm management application
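One of the rationalization steps named above, flagging chattering alarms, reduces to a sliding-window count over the alarm log. The sketch below is illustrative only; the CSV layout, column names, and the three-repeats-in-60-seconds rule of thumb are assumptions, not TANAP's actual configuration:

```python
import csv
from collections import defaultdict
from datetime import datetime, timedelta

def chattering_tags(log_path: str, repeats: int = 3, window_s: int = 60) -> set[str]:
    """Return alarm tags that annunciate `repeats`+ times within `window_s` seconds."""
    events = defaultdict(list)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: timestamp, tag
            events[row["tag"]].append(datetime.fromisoformat(row["timestamp"]))
    flagged = set()
    for tag, times in events.items():
        times.sort()
        # Slide a window of `repeats` consecutive events over the sorted times.
        for i in range(len(times) - repeats + 1):
            if times[i + repeats - 1] - times[i] <= timedelta(seconds=window_s):
                flagged.add(tag)
                break
    return flagged
```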
Procedia PDF Downloads 103
67 Slope Stability Assessment in Metasedimentary Deposit of an Opencast Mine: The Case of the Dikuluwe-Mashamba (DIMA) Mine in the DR Congo
Authors: Dina Kon Mushid, Sage Ngoie, Tshimbalanga Madiba, Kabutakapua Kakanda
Abstract:
Slope stability assessment remains a major challenge in mining activities and civil engineering structures. The slope in an opencast mine frequently intersects multiple weak layers that lead to instability of the pit. Faults and soft layers throughout the rock mass also increase weathering and erosion rates. Therefore, it is essential to investigate the stability of such complex strata. In the Dikuluwe-Mashamba (DIMA) area, the lithology of the stratum is a set of metamorphic rocks whose parent rocks are sedimentary rocks with a low degree of metamorphism. Thus, owing to the composition and metamorphism of the parent rock, the rock formation varies in hardness: where the dolomitic and siliceous content is high, the rock is hard; where the argillaceous and sandy content is high, it is softer. Therefore, in the vertical direction the sequence appears as alternating weak and hard layers, while in the horizontal direction soft and hard zones appear within the same rock layer. From the structural point of view, the main structures in the mining area are the Dikuluwe dipping syncline and the Mashamba dipping anticline, and the occurrence of rock formations varies greatly. During the folding of the rock formation, stress concentrates in the soft layers, causing the weak layers to break; at the same time, interlayer dislocation occurs. This article aimed to evaluate the stability of the metasedimentary rocks of the Dikuluwe-Mashamba (DIMA) open-pit mine using limit equilibrium and stereographic methods. Based on the presence of statistically identified structural planes, the stereographic projection was used to study the slope's stability and to examine the discontinuity orientation data to identify failure zones along the mine. The results revealed that the slope angle is too steep, making it easy to induce landslides. The sensitivity analysis of the numerical method showed that the slope angle and groundwater significantly impact the slope safety factor: an increase in the groundwater level substantially reduces the stability of the slope. Among the factors affecting the variation in the safety factor, the bulk density of the soil is greater than that of the rock mass, the cohesion of the soil mass is smaller than that of the rock mass, and the friction angle in the rock mass is much larger than that in the soil mass. The analysis showed that the rock mass structure types are mostly scattered and fragmented, the strata change considerably, and the variation in rock and soil mechanics parameters is significant.
Keywords: slope stability, weak layer, safety factor, limit equilibrium method, stereographic method
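For orientation, the limit equilibrium method referred to above computes a factor of safety (FS) as a ratio of resisting to driving forces along the sliding surface; the planar-failure expression below is a textbook form (after Hoek and Bray), included as an editorial aid rather than reproduced from the paper:

```latex
% Textbook planar limit-equilibrium factor of safety (after Hoek & Bray),
% shown for reference; the paper's own computation is not reproduced here.
\[
FS \;=\; \frac{cA + \left(W\cos\psi_p - U\right)\tan\phi}{W\sin\psi_p}
\]
% c: cohesion; A: area of the sliding plane; W: weight of the sliding block;
% psi_p: dip of the sliding plane; U: water uplift force; phi: friction angle.
% FS < 1 indicates failure; the sensitivity of FS to groundwater (through U)
% matches the finding that a rising water table reduces slope stability.
```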
Procedia PDF Downloads 260
66 Against the Philosophical-Scientific Racial Project of Biologizing Race
Authors: Anthony F. Peressini
Abstract:
The concept of race has recently come back prominently into discussion in the context of medicine and medical science, along with renewed efforts to biologize racial concepts. This paper argues that these renewed efforts to biologize race by way of medicine and population genetics fail on their own terms, and, more importantly, that the philosophical project of biologizing race ought to be recognized for what it is, a retrograde racial project, and abandoned. There is clear agreement that standard racial categories and concepts cannot be grounded in the old way of racial naturalism, which understands race as a real, interest-independent biological/metaphysical category whose members share “physical, moral, intellectual, and cultural characteristics.” But equally clear is the very real and pervasive presence of racial concepts in individual and collective consciousness and behavior, and so race remains a pressing area in which to seek deeper understanding. Recent philosophical work has endeavored to reconcile these two observations by developing a “thin” conception of race, grounded in scientific concepts but without the moral and metaphysical content. Such “thin,” science-based analyses take the “commonsense” or “folk” sense of race, as it functions in contemporary society, as the starting point for their philosophic-scientific projects to biologize racial concepts. A “philosophic-scientific analysis” is a special case of the cornerstone of analytic philosophy: a conceptual analysis, that is, a rendering of a concept into the more perspicuous concepts that constitute it. Thus a philosophic-scientific account of a concept is an attempt to work out an analysis that makes use of empirical science's insights to ground, legitimate, and explicate the target concept in terms of clearer concepts informed by empirical results. The focus in this paper is on three recent philosophic-scientific cases for retaining “race” that all share this general analytic schema but that make use of “medical necessity,” population genetics, and human genetic clustering, respectively. After arguing that each of these three approaches suffers from internal difficulties, the paper considers the general analytic schema employed by such biologizations of race. While such endeavors are inevitably prefaced with the disclaimer that the theory to follow is non-essentialist and non-racialist, the case will be made that such efforts are not neutral scientific or philosophical projects but rather are what sociologists call a racial project, that is, one of many competing efforts that conjoin a representation of what race means to specific efforts to determine social and institutional arrangements of power, resources, authority, etc. Accordingly, philosophic-scientific biologizations of race, since they begin from and condition their analyses on “folk” conceptions, cannot pretend to be “prior to” other disciplinary insights, nor to transcend the social-political dynamics involved in formulating theories of race. As a result, such traditional philosophical efforts can be seen to be disciplinarily parochial and to address only a caricature of a large and important human problem, thereby further contributing to the unfortunate isolation of philosophical thinking about race from other disciplines.
Keywords: population genetics, ontology of race, race-based medicine, racial formation theory, racial projects, racism, social construction
Procedia PDF Downloads 273