Search results for: adverse light condition
629 Consensual A-Monogamous Relationships: Challenges and Ways of Coping
Authors: Tal Braverman Uriel, Tal Litvak Hirsch
Abstract:
Background and Objectives: Little or only partial emphasis has been placed on exploring the complexity of consensual non-monogamous relationships. The term "polyamory" refers to consensual non-monogamy, and it is defined as having emotional and/or sexual relations simultaneously with two or more people, with the consent and knowledge of all the partners concerned. Managing multiple romantic relationships with different people evokes more emotions, leads to more emotional conflicts arising from different interests, and demands practical strategies. An individual's transition from a monogamous lifestyle to a consensual non-monogamous lifestyle yields new challenges, accompanied by stress, uncertainty, and open questions, as do other life-changing events, such as divorce or the transition to parenthood. The study examines both the process of transition and adaptation to a consensually non-monogamous relationship, as well as the coping mechanisms involved in the daily conduct of this lifestyle. The research focuses on understanding the consequences, challenges, and coping methods from a personal, marital, and familial point of view and focuses on 40 middle-aged individuals (20 men and 20 women, aged 40-60). The research sheds light on a way of life that has not been previously studied in Israel and is still considered unacceptable. Theories of crisis (e.g., those of Folkman and Lazarus) were applied, and as a result, a deeper understanding of the subject was reached, all while focusing on multiple aspects of dealing with stress. The basic research question examines the consequences of entering a polyamorous life from a personal point of view as an individual, partner, and parent, and the ways of coping with these consequences. Method: The research is conducted with a narrative qualitative approach in the interpretive paradigm, including semi-structured in-depth interviews. The method of analysis is thematic. Results: The findings indicate that in most cases, an individual's motivation to open the relationship is mainly a longing for better sexuality and for an added layer of excitement in their lives. Most of the interviewees were assisted by their spouses in the process, as well as by social networks and podcasts on the subject. Some of them also found therapeutic professionals in the field helpful. It also clearly emerged that even among those who experienced acute emotional crises with the primary partner or painful separations from secondary partners, all believed polyamory to be the adequate way of life for them. Finally, a key resource for managing tension and stress is the ability to share and communicate with the primary partner. Conclusions: The study points to the challenges and benefits of a non-monogamous lifestyle as well as the use of coping mechanisms and resources that are consistent with the existing theory and research in the field in the context of life changes. The study indicates the need to expand the research canvas in the future in the context of parenting and the consequences for children.
Keywords: a-monogamy, consent, family, stress, tension
628 Dynamic Exergy Analysis for the Built Environment: Fixed or Variable Reference State
Authors: Valentina Bonetti
Abstract:
Exergy analysis successfully helps optimize processes in various sectors. In the built environment, a second-law approach can enhance potential interactions between constructions and their surrounding environment and minimise fossil fuel requirements. Despite the research done in this field in the last decades, practical applications are hard to encounter, and few integrated exergy simulators are available for building designers. Undoubtedly, an obstacle for the diffusion of exergy methods is the strong dependency of results on the definition of the 'reference state', a highly controversial issue. Since exergy is the combination of energy and entropy by means of a reference state (also called "reference environment", or "dead state"), the reference choice is crucial. Compared to other classical applications, buildings present two challenging elements: they operate very near to the reference state, which means that small variations have relevant impacts, and their behaviour is dynamical in nature. Not surprisingly, then, the reference state definition for the built environment is still debated, especially in the case of dynamic assessments. Among the several characteristics that need to be defined, a crucial decision for a dynamic analysis is between a fixed reference environment (constant in time) and a variable state, whose fluctuations follow the local climate. Even if the latter selection prevails in research and is recommended by recent and widely diffused guidelines, the fixed reference has been analytically demonstrated as the only choice which defines exergy as a proper function of the state in a fluctuating environment. This study investigates the impact of that crucial choice: fixed or variable reference. The basic element of the building energy chain, the envelope, is chosen as the object of investigation, as it is common to any building analysis. Exergy fluctuations in the building envelope of a case study (a typical house located in a Mediterranean climate) are compared for each time step of a significant summer day, when the building behaviour is highly dynamical. Exergy efficiencies and fluxes are not familiar numbers, and thus the more intuitive concept of exergy storage is used to summarize the results. Trends obtained with a fixed and a variable reference (outside air) are compared, and their meaning is discussed in the light of the underpinning dynamical energy analysis. As a conclusion, a fixed reference state is considered the best choice for dynamic exergy analysis. Even if the fixed reference is generally only contemplated as a simpler selection, and the variable state is often stated as more accurate without explicit justification, the analytical considerations supporting the adoption of a fixed reference are confirmed by the usefulness and clarity of interpretation of its results. Further discussion is needed to address the conflict between the evidence supporting a fixed reference state and the wide adoption of a fluctuating one. A more robust theoretical framework, including selection criteria for the reference state in dynamical simulations, could push the development of integrated dynamic tools and thus spread exergy analysis for the built environment across common practice.
Keywords: exergy, reference state, dynamic, building
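To make the fixed-versus-variable choice concrete, a minimal sketch (not part of the original study) of evaluating the exergy carried by heat crossing the envelope against a fixed and a variable reference temperature is given below; the hourly temperatures, heat fluxes, and reference values are purely illustrative assumptions.

```python
# Minimal sketch: exergy of envelope heat flow with a fixed vs a variable reference.
# All numbers are illustrative assumptions, not data from the study.
import numpy as np

hours = np.arange(24)
T_in = 299.0                                                 # indoor air temperature [K], assumed constant
T_out = 300.0 + 6.0 * np.sin(2 * np.pi * (hours - 9) / 24)   # assumed outdoor daily swing [K]
Q = 0.8 * (T_out - T_in)                                     # assumed envelope heat flux [W/m^2]

T0_fixed = 298.15          # fixed reference state [K]
T0_variable = T_out        # variable reference = outside air

# Exergy rate of heat delivered at the indoor temperature: Ex = Q * (1 - T0 / T_in)
ex_fixed = Q * (1.0 - T0_fixed / T_in)
ex_variable = Q * (1.0 - T0_variable / T_in)

print("daily exergy, fixed reference    [Wh/m^2]:", ex_fixed.sum())
print("daily exergy, variable reference [Wh/m^2]:", ex_variable.sum())
```

With a fixed T0 the Carnot-type factor is constant in time, so the daily exergy simply scales the net heat; with a variable T0 the factor changes sign as the outdoor temperature crosses the indoor one, which is exactly the fluctuating behaviour the abstract attributes to a variable reference.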
627 Synthesis and Characterization of pH-Sensitive Graphene Quantum Dot-Loaded Metal-Organic Frameworks for Targeted Drug Delivery and Fluorescent Imaging
Authors: Sayed Maeen Badshah, Kuen-Song Lin, Abrar Hussain, Jamshid Hussain
Abstract:
Liver cancer is a significant global health issue, ranking fifth in incidence and second in mortality. Effective therapeutic strategies are urgently needed to combat this disease, particularly in regions with high prevalence. This study focuses on developing and characterizing fluorescent organometallic frameworks as distinct drug delivery carriers with potential applications in both the treatment and biological imaging of liver cancer. This work introduces two distinct organometallic frameworks: the cake-shaped GQD@NH₂-MIL-125 and the cross-shaped M8U6/FM8U6. The GQD@NH₂-MIL-125 framework is particularly noteworthy for its high fluorescence, making it an effective tool for biological imaging. X-ray diffraction (XRD) analysis revealed specific diffraction peaks at 6.81° (011), 9.76° (002), and 11.69° (121), with an additional significant peak at 26° (2θ) corresponding to the carbon material. Morphological analysis using Field Emission Scanning Electron Microscopy (FE-SEM) and Transmission Electron Microscopy (TEM) demonstrated that the framework has a front particle size of 680 nm and a side particle size of 55±5 nm. High-resolution TEM (HR-TEM) images confirmed the successful attachment of graphene quantum dots (GQDs) onto the NH₂-MIL-125 framework. Fourier-Transform Infrared (FT-IR) spectroscopy identified crucial functional groups within the GQD@NH₂-MIL-125 structure, including O-Ti-O metal bonds within the 500 to 700 cm⁻¹ range, and N-H and C-N bonds at 1,646 cm⁻¹ and 1,164 cm⁻¹, respectively. BET isotherm analysis further revealed a specific surface area of 338.1 m²/g and an average pore size of 46.86 nm. This framework also demonstrated UV-active properties, as identified by UV-visible light spectra, and its photoluminescence (PL) spectra showed an emission peak around 430 nm when excited at 350 nm, indicating its potential as a fluorescent drug delivery carrier. In parallel, the cross-shaped M8U6/FM8U6 frameworks were synthesized and characterized using X-ray diffraction, which identified distinct peaks at 2θ = 7.4° (111), 8.5° (200), 9.2° (002), 10.8° (002), 12.1° (220), 16.7° (103), and 17.1° (400). FE-SEM, HR-TEM, and TEM analyses revealed particle sizes of 350±50 nm for M8U6 and 200±50 nm for FM8U6. These frameworks, synthesized from terephthalic acid (H₂BDC), displayed notable vibrational bonds, such as C=O at 1,650 cm⁻¹, Fe-O in MIL-88 at 520 cm⁻¹, and Zr-O in UIO-66 at 482 cm⁻¹. BET analysis showed specific surface areas of 740.1 m²/g with a pore size of 22.92 nm for M8U6 and 493.9 m²/g with a pore size of 35.44 nm for FM8U6. Extended X-ray Absorption Fine Structure (EXAFS) spectra confirmed the stability of Ti-O bonds in the frameworks, with bond lengths of 2.026 Å for MIL-125, 1.962 Å for NH₂-MIL-125, and 1.817 Å for GQD@NH₂-MIL-125. These findings highlight the potential of these organometallic frameworks for enhanced liver cancer therapy through precise drug delivery and imaging, representing a significant advancement in nanomaterial applications in biomedical science.
Keywords: liver cancer cells, metal organic frameworks, Doxorubicin (DOX), drug release
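As a side note on the XRD values quoted above, a short sketch (not from the paper) of converting the reported 2θ peak positions into lattice d-spacings via Bragg's law is shown below; the Cu Kα wavelength is an assumption, since the abstract does not state the radiation used.

```python
# Bragg's law sketch: d = n * lambda / (2 * sin(theta)), n = 1.
# Assumes Cu K-alpha radiation (1.5406 Angstrom); the abstract does not specify the source.
import math

wavelength = 1.5406  # Angstrom, assumed
two_theta_deg = {"(011)": 6.81, "(002)": 9.76, "(121)": 11.69}  # GQD@NH2-MIL-125 peaks

for hkl, tt in two_theta_deg.items():
    theta = math.radians(tt / 2.0)
    d = wavelength / (2.0 * math.sin(theta))
    print(f"{hkl}: 2theta = {tt:.2f} deg -> d = {d:.2f} Angstrom")
```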
626 The Effect of TiO₂ Nanoparticles on Zebrafish Embryos
Authors: Elena Maria Scalisi
Abstract:
Currently, photodegradation by nanoparticles (NPs) is a common solution for wastewater treatment. Nanoparticles are efficient at removing organic and inorganic pollutants and heavy metals from wastewater and at killing microorganisms in an environmentally friendly way. In this context, the major representatives of photocatalytic technology for industrial wastewater treatment are TiO₂ nanoparticles (TiO₂-NPs). TiO₂-NPs have a strong catalytic activity that depends on their physicochemical properties. Thanks to their small size (between 1 and 100 nm), nanoparticles occupy less volume while their relative surface area increases. The increase in the surface-to-volume ratio results in an increase of the particle surface energy, which improves their reactivity potential. However, these unique properties represent risks to ecosystems and organisms when TiO₂-NPs are unintentionally released into the environment and absorbed by living organisms. Several studies confirm that there is a high level of interest concerning the safety of TiO₂-NPs in the aquatic environment; furthermore, ecotoxicological tools are useful to correctly evaluate their toxicity. In the current study, we aimed to characterize the potential toxic effects of a TiO₂-NP suspension on zebrafish during embryo-larval stages by evaluating parameters such as survival rate, malformations, hatching, the overall length of the larvae, heartbeat, and biochemical biomarkers that reflect the acute toxicity and sublethal effects of TiO₂-NPs. Zebrafish embryos were exposed to titanium dioxide nanoparticles (TiO₂-NPs at 1 mg/L, 2 mg/L, and 4 mg/L) from fertilization to the free-swimming stage (144 hpf). Every day, we recorded the toxicological endpoints; moreover, immunohistochemical analysis was performed at the end of the exposure. In particular, we evaluated the expression of the following biomarkers: Heat Shock Protein 70 (HSP70), Poly ADP-Ribose Polymerase-1 (PARP-1), and Metallothioneins (MTs). Our results have shown that hatchability, survival, and malformation rate were not affected by TiO₂-NPs at these exposure levels. However, TiO₂-NPs caused an increase of the heartbeat and a reduction of body length; at the same time, TiO₂-NPs induced the production of ROS and the expression of the oxidative stress biomarkers HSP70 and PARP-1. High positivity for PARP-1 was observed at all concentrations tested. As regards MTs, positivity was found in the expression of this biomarker in the whole body of the embryo, with the exception of the end of the tail. Metallothioneins (MTs) are biomarkers widely used in environmental monitoring programs for aquatic creatures. In the light of our results, i.e., no death until the end of the experiment (144 hpf), no malformations, and the expression of the biomarkers mentioned, it is evident that zebrafish larvae with their natural detoxification pathways are able to resist the presence of toxic substances and can therefore tolerate the metal concentrations tested. However, an excessive oxidative state can compromise cell function; therefore, the uncontrolled release of nanoparticles into the environment is a serious concern and must be constantly monitored.
Keywords: nanoparticles, embryo zebrafish, HSP70, PARP-1
625 Building Community through Discussion Forums in an Online Accelerated MLIS Program: Perspectives of Instructors and Students
Authors: Mary H Moen, Lauren H. Mandel
Abstract:
Creating a sense of community in online learning is important for student engagement and success. The integration of discussion forums within online learning environments presents an opportunity to explore how this computer-mediated communications format can cultivate a sense of community among students in accelerated master’s degree programs. This research has two aims: to delve into the ways instructors utilize this communications technology to create community and to understand the feelings and experiences of graduate students participating in these forums with regard to its effectiveness in community building. This study is a two-phase approach encompassing qualitative and quantitative methodologies. The data will be collected at an online accelerated Master of Library and Information Studies program at a public university in the northeast of the United States. Phase 1 is a content analysis of the syllabi from all courses taught in the 2023 calendar year, which explores the format and rules governing discussion forum assignments. Four to six individual interviews of department faculty and part-time faculty will also be conducted to illuminate their perceptions of the successes and challenges of their discussion forum activities. Phase 2 will be an online survey administered to students in the program during the 2023 calendar year. Quantitative data will be collected for statistical analysis, and short-answer responses will be analyzed for themes. The survey is adapted from the Classroom Community Scale Short-Form (CCS-SF), which measures students' self-reported feelings of connectedness and learning. The prompts will contextualize the items from their experience in discussion forums during the program. Short-answer responses on the challenges and successes of using discussion forums will be analyzed to gauge student perceptions and experiences using this type of communication technology in education. This research study is in progress. The authors anticipate that the findings will provide a comprehensive understanding of the varied approaches instructors use in discussion forums for community-building purposes in an accelerated MLIS program. They predict that the more varied, flexible, and consistent student uses of discussion forums are, the greater the sense of community students will report. Additionally, students’ and instructors’ perceptions and experiences within these forums will shed light on the successes and challenges faced, thereby offering valuable recommendations for enhancing online learning environments. The findings are significant because they can contribute actionable insights for instructors, educational institutions, and curriculum designers aiming to optimize the use of discussion forums in online accelerated graduate programs, ultimately fostering a richer and more engaging learning experience for students.
Keywords: accelerated online learning, discussion forums, LIS programs, sense of community, g
624 Natural Monopolies and Their Regulation in Georgia
Authors: Marina Chavleishvili
Abstract:
Introduction: Today, the study of monopolies, including natural monopolies, is topical. In real life, pure monopolies are natural monopolies. Natural monopolies are widespread and are regulated by the state; in particular, their prices and rates are regulated. The paper considers the problems associated with the operation of natural monopolies in Georgia, in particular, their microeconomic analysis, pricing mechanisms, and the legal mechanisms of their operation. The analysis was carried out on the example of the power industry. The rates of natural monopolies in Georgia are controlled by the Georgian National Energy and Water Supply Regulation Commission. The paper analyzes the positive role and importance of the regulatory body and the issues of improving the legislative base that will support the efficient operation of the branch. Methodology: In order to highlight the market tendencies of natural monopolies, the domestic and international markets are studied. An analysis of monopolies is carried out based on the endogenous and exogenous factors that determine the condition of companies, as well as the strategies chosen by firms to increase their market share. According to the productivity-based competitiveness assessment scheme, the segmentation opportunities, business environment, resources, and geographical location of monopolist companies are revealed. Main Findings: As a result of the analysis, certain assessments and conclusions were made. Natural monopolies are quite a complex and versatile economic element, and it is important to specify and duly control their frame conditions. It is important to determine the pricing policy of natural monopolies. The rates should be transparent, should reflect the standard of living in the country, and should correspond to incomes. The analysis confirmed the significant role of the Antimonopoly Service in the efficient management of natural monopolies. The law should adapt to reality and should be applied only to regulate the market. The present-day differential electricity tariffs, which vary depending on the electrical power consumed, need revision. The effects of electricity price discrimination are important, particularly segmentation across different seasons. Consumers use more electricity in winter than in summer, which is associated with extra capacities and maintenance costs. If the price of electricity in winter is higher than in summer, electricity consumption will decrease in winter. Consumers will start to use electricity more economically, which will allow extra capacities to be reduced. Conclusion: Thus, the practical realization of the views given in the paper will contribute to the efficient operation of natural monopolies. Consequently, their activity will be oriented not towards reducing but towards increasing the gains of consumers and producers. Overall, the optimal management of the given fields will allow for improving well-being throughout the country. In the article, conclusions are made, and recommendations are developed to deliver effective policies and regulations for natural monopolies in Georgia.
Keywords: monopolies, natural monopolies, regulation, antimonopoly service
623 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular, taking into account the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. The space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed regarding the required computation time and the impact of sensor and actuator errors based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements used in MPC design are discussed: the prediction model, the constraints formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether it is circular or elliptic. The constraints can be given as linear inequalities for input or output constraints, which can be written in the same form. Moreover, the recent convexification techniques for the non-convex geometrical constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding in real time the solution to constrained optimization problems, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented towards representative space avionics. This covers an analysis of future space processors as well as the requirements of sensors and actuators on the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests where robotic arms and floating robots are used, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with a conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
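For readers unfamiliar with the generic constrained MPC formulation mentioned above, a minimal sketch is given below; it poses a finite-horizon quadratic-cost problem for an assumed double-integrator relative-motion model with a simple input bound, solved with the cvxpy package. The model, weights, horizon, and bounds are illustrative assumptions and not the formulation used by the authors.

```python
# Minimal sketch of a constrained MPC step: prediction model, constraints, quadratic cost.
# Double-integrator relative motion and all numerical values are assumed for illustration.
import numpy as np
import cvxpy as cp

dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])     # prediction model x_{k+1} = A x_k + B u_k
B = np.array([[0.5 * dt**2], [dt]])
N = 20                                    # prediction horizon
Q = np.diag([10.0, 1.0])                  # state weighting
R = np.array([[0.1]])                     # input weighting

x0 = np.array([100.0, -1.0])              # assumed initial relative position [m] and velocity [m/s]
u_max = 0.05                              # assumed thrust acceleration bound [m/s^2]

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= u_max]
cost += cp.quad_form(x[:, N], Q)          # terminal cost

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("first control move [m/s^2]:", u.value[:, 0])
```

In a receding-horizon implementation only the first control move is applied, the state is re-measured, and the problem is solved again at the next step, which is where the real-time computation requirements discussed in the abstract come from.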
622 Waste Analysis and Classification Study (WACS) in Ecotourism Sites of Samal Island, Philippines Towards a Circular Economy Perspective
Authors: Reeden Bicomong
Abstract:
Ecotourism activities, though geared towards conservation efforts, still put pressure on the natural state of the environment. An influx of visitors that goes beyond the carrying capacity of the ecotourism site, the wastes generated, and greenhouse gas emissions are just a few of the potential negative impacts of poorly managed ecotourism activities. According to Girard and Nocca (2017), tourism produces many negative impacts because it is configured according to the model of the linear economy, operating on a linear model of take, make and dispose (Ellen MacArthur Foundation 2015). With the influx of tourists in an ecotourism area, more wastes are generated, and if unregulated, the natural state of the environment will be at risk. It is in this light that a waste analysis and classification study was conducted in five different ecotourism sites of Samal Island, Philippines. The major objective of the study was to analyze the amount and content of wastes generated from ecotourism sites in Samal Island, Philippines, and to make recommendations based on the circular economy perspective. Five ecotourism sites in Samal Island, Philippines were identified, namely Hagimit Falls, Sanipaan Vanishing Shoal, Taklobo Giant Clams, Monfort Bat Cave, and Tagbaobo Community Based Ecotourism. Ocular inspection of each ecotourism site was conducted. Likewise, key informant interviews of ecotourism operators and staff were carried out. Wastes generated from these ecotourism sites were analyzed and characterized to come up with recommendations based on the concept of the circular economy. Wastes generated were classified into biodegradables, recyclables, residuals, and special wastes. Regression analysis was conducted to determine whether an increase in the number of visitors equates to an increase in the amount of wastes generated. Ocular inspection indicated that all of the five ecotourism sites have their own system of waste collection. All of the sites inspected were found to be conducting waste separation at source, since there are different types of garbage bins for each of the four classifications of wastes: biodegradables, recyclables, residuals, and special wastes. Furthermore, all five ecotourism sites practice composting of biodegradable wastes and recycling of recyclables. Therefore, only residuals are being collected by the municipal waste collectors. Key informant interviews revealed that all five ecotourism sites offer mostly nature-based activities such as swimming, diving, sightseeing, bat watching, rice farming experiences, and community living. Among the five ecotourism sites, Sanipaan Vanishing Shoal has the highest average number of visitors on a weekly basis. At the same time, in the waste assessment study conducted, Sanipaan had the highest amount of wastes generated. Further results of the waste analysis revealed that biodegradables constitute the majority of the wastes generated in all of the five selected ecotourism sites. Meanwhile, special wastes proved to be the least generated, as none was observed during the three consecutive weeks in which WACS was conducted.
Keywords: Circular economy, ecotourism, sustainable development, WACS
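The visitors-versus-waste regression mentioned above can be illustrated with a minimal sketch such as the one below; the visitor counts and waste weights are invented placeholder values, not the study's data.

```python
# Minimal sketch of the visitors-vs-waste regression; the data are placeholders, not study values.
import numpy as np
from scipy import stats

visitors_per_week = np.array([120, 250, 400, 650, 900])   # hypothetical weekly visitor counts
waste_kg_per_week = np.array([35, 60, 95, 150, 210])      # hypothetical waste generated [kg]

result = stats.linregress(visitors_per_week, waste_kg_per_week)
print(f"slope = {result.slope:.3f} kg per visitor, "
      f"R^2 = {result.rvalue**2:.3f}, p = {result.pvalue:.4f}")
```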
621 Interwoven Realms: The Relationship Between Textiles, Fashion, and Architecture
Authors: Toktam Mehrabani
Abstract:
Textiles, fashion, and architecture, though seemingly disparate fields, share a deep and evolving relationship. This paper explores the intersection of these disciplines, examining how the tactile, structural, and aesthetic qualities of textiles have influenced both fashion and architecture over time. By investigating historical and contemporary examples, this paper seeks to unravel the ways in which textiles and fashion have not only shaped architectural design but have also acted as a bridge between functionality, art, and human experience in the built environment. Textiles have been integral to human culture since the dawn of civilization. Their presence transcends mere functionality, serving as a medium for artistic expression, cultural identity, and social commentary. Fashion, derived from textiles, has long been associated with personal identity and societal trends, while architecture reflects human needs, environmental context, and cultural values. This paper posits that the relationship between textiles, fashion, and architecture is more interconnected than often perceived, with each influencing and inspiring the other across time. Textiles in Architectural Design: From ancient draperies in temples to tapestries in castles, textiles have adorned structures, softening rigid spaces and adding layers of warmth and luxury. Fabric screens and curtains have also served functional purposes, such as controlling light, acoustics, and temperature. Fashion as Architectural Expression: Renaissance and Baroque fashion used exaggerated forms, corsetry, and layers to mirror the grandiosity of architectural styles of the time. Clothing acted as wearable architecture, with structured garments mirroring the strong lines and curves of buildings. Structural Textiles in Architecture: In the 21st century, textiles are no longer just decorative; they have become integral to architectural innovation. Materials like tensile fabrics and smart textiles are used in creating flexible, lightweight structures. Iconic examples include Frei Otto’s work with tensile membranes, seen in the Munich Olympic Stadium. Technological advancements have drastically transformed the relationship between textiles, fashion, and architecture. Digital tools like 3D printing and laser cutting allow designers in both fields to push the limits of form and structure. Smart textiles that react to environmental stimuli are being explored for use in both wearable technology and adaptable architecture, such as facades that change in response to weather conditions. Textiles, fashion, and architecture are inextricably linked through their shared exploration of form, structure, and expression. This interdisciplinary relationship continues to evolve, driven by technological advancements and a growing emphasis on sustainability. As fashion becomes more architectural in its construction and architecture more fluid in its forms, the lines between these disciplines blur, offering new possibilities for creativity and functionality in both wearable and built environments.
Keywords: textiles in architecture, fashion and architecture, textile architecture, structural textiles, wearable architecture, architectural fashion
620 Analytical Solutions of Josephson Junctions Dynamics in a Resonant Cavity for Extended Dicke Model
Authors: S. I. Mukhin, S. Seidov, A. Mukherjee
Abstract:
The Dicke model is a key tool for the description of correlated states of quantum atomic systems, excited by resonant photon absorption and subsequently emitting spontaneous coherent radiation in the superradiant state. The Dicke Hamiltonian (DH) is successfully used for the description of the dynamics of a Josephson Junction (JJ) array in a resonant cavity under applied current. In this work, we have investigated a generalized model, which is described by the DH with a frustrating interaction term. This frustrating interaction term is an infinitely coordinated interaction between all the spin-1/2 entities in the system. We consider an array of N superconducting islands, each divided into two sub-islands by a Josephson Junction, taken in the charge qubit / Cooper Pair Box (CPB) regime. The array is placed inside the resonant cavity. One important aspect of the problem lies in the dynamical nature of the physical observables involved in the system, such as the condensed electric field and the dipole moment. It is important to understand how these quantities behave with time in order to define the quantum phase of the system. The Dicke model without the frustrating term is solved to find the dynamical solutions of the physical observables in analytic form. We have used Heisenberg’s dynamical equations for the operators, and on applying the newly developed rotating Holstein-Primakoff (HP) transformation and the DH, we have arrived at four coupled nonlinear dynamical differential equations for the momentum and spin component operators. It is possible to solve the system analytically using two time scales. The analytical solutions are expressed in terms of Jacobi's elliptic functions for the metastable ‘bound luminosity’ dynamic state with the periodic coherent beating of the dipoles that connects the two doubly degenerate dipolar ordered phases discovered previously. In this work, we have proceeded with the analysis of the extended DH with a frustrating interaction term. Inclusion of the frustrating term adds complexity to the system of differential equations, and it becomes difficult to solve analytically. We have solved the semi-classical dynamic equations using the perturbation technique for small values of the Josephson energy EJ. Because the Hamiltonian possesses parity symmetry, a phase transition can be found if this symmetry is broken. Introducing a spontaneous symmetry breaking term in the DH, we have derived solutions which show the occurrence of a finite condensate, indicating a quantum phase transition. Our obtained results match the existing results in this scientific field.
Keywords: Dicke Model, nonlinear dynamics, perturbation theory, superconductivity
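As an orientation for readers, a schematic form of the extended Dicke Hamiltonian described above (a cavity mode, N two-level systems, and an all-to-all "frustrating" spin-spin term) can be written as follows; the notation and the precise form of the frustrating term are assumptions for illustration and are not taken from the paper.

```latex
H \;=\; \hbar\omega\, a^{\dagger}a
\;+\; \hbar\omega_{0}\sum_{i=1}^{N} S_{z}^{(i)}
\;+\; \frac{\lambda}{\sqrt{N}}\,\bigl(a^{\dagger}+a\bigr)\sum_{i=1}^{N} S_{x}^{(i)}
\;+\; \frac{J}{N}\sum_{i\neq j} S_{x}^{(i)} S_{x}^{(j)} ,
```

where the last, infinitely coordinated term couples every pair of spins with the same strength and plays the role of the frustrating interaction, while the third term is the usual collective cavity-spin coupling of the standard Dicke model.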
619 Transgender Practices as Queer Politics: An African Variant
Authors: Adekeye Joshua Temitope
Abstract:
“Transgender” presents considerable ambiguity in the African context, and it remains a contested topography in the discourse of sexual identity. The castigations and stigmatisations directed towards transgender people unveil vital facts and intricacies often ignored in academic communities: the problems and oppressions of a given sex/gender system, the constraint of monogamy, and ignorance of the fluidity of human sexuality, thereby generating the dual discords of the “enforced heterosexual” and the “unavoidable homosexual.” African culture rejects transgender movements and perceives same-sex sexual behavior as “taboo or bad habits,” and this provides reasonable explanations for the failure to assert sexual rights in the GLBT movement in most discourse on sexuality in the African context. However, we cannot deny the real existence of the active flow and fluidity of human sexuality, even though its variants may be latent. The incessant consciousness of the existence of transgender practices in Africa, either in the form of bisexual desire or bisexual behavior with or without sexual identity, including people who identify themselves as bisexual, opens up the vision for us to reconsider and reexamine what constitutes such ambiguity and controversy of transgender identity at the present time. The notion of identity politics in the gay, lesbian, and transgender community has its complexity and debates in its historical development. This paper analyses the representation of the historical trajectory of transgender practices by presenting the dynamic transition of how people cognize transgender practices under different historical conditions, since understanding the historical transition of bisexual practices is crucial and meaningful for the gender/sexuality liberation movement at the present time and in the future. The paper juxtaposes the trajectories of bisexual practices in the Anglo-American world and Africa, which show certain similarities and differences within diverse historical complexities. The similar condition is the emergence of gay identity under the influence of capitalism, but within different cultural contexts. Therefore, the political economy of each cultural context plays a very important role in understanding the historical formation of sexual identities and their development and influence on the GLBT movement afterwards and in the future. By reexamining Kinsey’s categorization and applying Klein’s argument on an individual’s sexual orientation, this paper is poised, on the one hand, to break the given and fixed connection among sexual behavior, sexual orientation, and sexual identity and, on the other hand, to present the potential fluidity of human sexuality by reconsidering and reexamining the present given sex/gender system in our world. The paper concludes that the essentialist and exclusionary trend is obligatory at this historical moment, since gay and lesbian communities in Africa need to clearly demonstrate and voice for themselves under the nuances of gender/sexuality liberation.
Keywords: heterosexual, homosexual, identity politics, queer politics, transgender
618 Selective Conversion of Biodiesel Derived Glycerol to 1,2-Propanediol over Highly Efficient γ-Al2O3 Supported Bimetallic Cu-Ni Catalyst
Authors: Smita Mondal, Dinesh Kumar Pandey, Prakash Biswas
Abstract:
During the past two decades, considerable attention has been given to the value addition of biodiesel-derived glycerol (~10 wt.%) to make the biodiesel industry economically viable. Among the various glycerol value-addition methods, hydrogenolysis of glycerol to 1,2-propanediol is one of the attractive and promising routes. In this study, a highly active and selective γ-Al₂O₃-supported bimetallic Cu-Ni catalyst was developed for selective hydrogenolysis of glycerol to 1,2-propanediol in the liquid phase. The catalytic performance was evaluated in a high-pressure autoclave reactor. Experimental results demonstrated that the bimetallic copper-nickel catalyst was more active and selective to 1,2-PDO as compared to the monometallic catalysts due to its bifunctional behavior. To verify the effect of calcination temperature on the formation of the Cu-Ni mixed oxide phase, the calcination temperature of the 20 wt.% Cu:Ni(1:1)/Al₂O₃ catalyst was varied from 300°C to 550°C. The physicochemical properties of the catalysts were characterized by various techniques such as specific surface area (BET), X-ray diffraction study (XRD), temperature-programmed reduction (TPR), and temperature-programmed desorption (TPD). The BET surface area and pore volume of the catalysts were in the range of 71-78 m²g⁻¹ and 0.12-0.15 cm³g⁻¹, respectively. The peaks in the 2θ ranges of 43.3°-45.5° and 50.4°-52° corresponded to the copper-nickel mixed oxide phase [JCPDS: 78-1602]. The formation of the mixed oxide indicated the strong interaction of Cu and Ni with the alumina support. The crystallite size decreased with increasing calcination temperature up to 450°C. Beyond that, the crystallite size increased due to agglomeration. The smallest crystallite size of 16.5 nm was obtained for the catalyst calcined at 400°C. Total acidic sites of the catalysts were determined by NH₃-TPD, and the maximum total acidity of 0.609 mmol NH₃ gcat⁻¹ was obtained over the catalyst calcined at 400°C. TPR data suggested a maximum degree of reduction of 75% for the catalyst calcined at 400°C among all the catalysts. Further, the 20 wt.% Cu:Ni(1:1)/γ-Al₂O₃ catalyst calcined at 400°C exhibited the highest catalytic activity (>70%) and 1,2-PDO selectivity (>85%) under mild reaction conditions due to its highest acidity, highest degree of reduction, and smallest crystallite size. Further, a modified power-law kinetic model was developed to understand the true kinetic behaviour of glycerol hydrogenolysis over the 20 wt.% Cu:Ni(1:1)/γ-Al₂O₃ catalyst. The rate equations obtained from the model were solved by ode23 in MATLAB coupled with a genetic algorithm. Results demonstrated that the model-predicted data fitted the experimental data very well. The activation energy of the formation of 1,2-PDO was found to be 45 kJ mol⁻¹.
Keywords: glycerol, 1,2-PDO, calcination, kinetics
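The power-law kinetic treatment described above can be illustrated with a brief sketch; the rate expression, reaction orders, pre-exponential factor, and concentrations below are assumed placeholders (only the 45 kJ mol⁻¹ activation energy is taken from the abstract), and scipy's RK23 integrator is used as a stand-in for MATLAB's ode23.

```python
# Power-law kinetics sketch for glycerol -> 1,2-PDO; parameters are illustrative assumptions,
# except Ea = 45 kJ/mol, which is the value reported in the abstract.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314          # J mol^-1 K^-1
Ea = 45e3          # J mol^-1 (from the abstract)
A = 2.0e5          # assumed pre-exponential factor [h^-1, consistent units]
T = 483.15         # K, assumed reaction temperature
a, b = 1.0, 0.5    # assumed reaction orders in glycerol and dissolved H2

k = A * np.exp(-Ea / (R * T))   # Arrhenius rate constant
C_H2 = 0.05                     # mol L^-1, assumed constant dissolved hydrogen concentration

def rates(t, y):
    C_gly, C_pdo = y
    r = k * max(C_gly, 0.0) ** a * C_H2 ** b   # power-law rate [mol L^-1 h^-1]
    return [-r, r]                             # glycerol consumed, 1,2-PDO formed

sol = solve_ivp(rates, (0.0, 8.0), [1.0, 0.0], method="RK23", dense_output=True)
C_gly_end = sol.y[0, -1]
print(f"glycerol conversion after 8 h: {100 * (1 - C_gly_end):.1f} %")
```

In a full kinetic study the orders, pre-exponential factor, and activation energy would be the fitted parameters, which is where an optimizer such as a genetic algorithm enters the workflow described in the abstract.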
617 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
The SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can manipulate a large number of encapsulated elements. Despite the great potential of these SPICE-based simulators in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, these simulators are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for such spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that for the field update, the stability of the complete FDTD procedure depends on factors other than just the stability of the Yee algorithm, because the FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are Absorbing Boundary Conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulations at ultra-wide frequencies. The models of the resistive source, the resistor, the capacitor, the inductor, and the diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. In this way, the aim is to find an ideal cell size so that the analysis in the FDTD environment is in greater agreement with the expected circuit behavior while maintaining the stability conditions of this method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the computational implementation of the models is carried out in the Matlab® environment. The Mur boundary condition is used as the absorbing boundary of the FDTD method. The validation of the model is done by comparing the electric field values and the currents in the components obtained with the FDTD method against analytical results using circuit parameters.
Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis
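To illustrate the kind of lumped-element extension discussed above, the sketch below computes the 3-D Courant time-step limit for a given Yee cell size and the semi-implicit update coefficients commonly used for a lumped resistor filling one cell edge; the cell dimensions and resistance are assumed values, and the expressions follow the standard LE-FDTD formulation rather than the specific implementation of this paper.

```python
# Sketch: Courant limit and lumped-resistor update coefficients in a Yee cell (standard LE-FDTD form).
# Cell size and resistance are assumed example values.
import numpy as np

c0 = 299792458.0           # speed of light in vacuum [m/s]
eps0 = 8.8541878128e-12    # vacuum permittivity [F/m]

dx = dy = dz = 1e-3        # assumed Yee cell dimensions [m]
dt = 0.99 / (c0 * np.sqrt(1/dx**2 + 1/dy**2 + 1/dz**2))   # 3-D Courant limit with a 1% margin

R = 50.0                   # assumed lumped resistance [ohm] placed along the z edge of one cell
beta = dt * dz / (2.0 * R * eps0 * dx * dy)

# Semi-implicit update for the field component carrying the resistor:
#   Ez(n+1) = c1 * Ez(n) + c2 * (curl H)_z
c1 = (1.0 - beta) / (1.0 + beta)
c2 = (dt / eps0) / (1.0 + beta)

print(f"dt = {dt:.3e} s, c1 = {c1:.6f}, c2 = {c2:.6e}")
```

Varying dx, dy, and dz in this calculation is exactly the kind of parametric sweep the abstract describes: the coefficients c1 and c2, and thus the effective resistance seen by the field, change with the cell size that discretizes the lumped component.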
616 Ultrasonic Irradiation Synthesis of High-Performance Pd@Copper Nanowires/MultiWalled Carbon Nanotubes-Chitosan Electrocatalyst by Galvanic Replacement toward Ethanol Oxidation in Alkaline Media
Authors: Majid Farsadrouh Rashti, Amir Shafiee Kisomi, Parisa Jahani
Abstract:
Direct ethanol fuel cells (DEFCs) are contemplated as a promising energy source because, in addition to being used in portable electronic devices, they can also be used in electric vehicles. The synthesis of bimetallic nanostructures is attracting extensive attention due to their novel optical, catalytic, and electronic characteristics, which are in marked contrast to those of their monometallic counterparts. Galvanic replacement (sometimes referred to as cementation or immersion plating) is an uncomplicated and effective technique for making nanostructures (such as core-shell structures) of different metals and semiconductors and for their application in DEFCs. Galvanic replacement does not need any external power supply, in contrast to electrodeposition. In addition, it is different from electroless deposition because there is no need for a reducing agent. In this paper, a fast method for the synthesis of palladium (Pd) wire nanostructures with a large surface area through a galvanic replacement reaction, utilizing copper nanowires (CuNWs) as a template with the assistance of ultrasound under room-temperature conditions, is proposed. To evaluate the morphology and composition of Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan, emission scanning electron microscopy and energy-dispersive X-ray spectroscopy were applied. The phase structure of the electrocatalysts was measured via room-temperature X-ray powder diffraction (XRD) using an X-ray diffractometer. Various electrochemical techniques, including chronoamperometry and cyclic voltammetry, were utilized to evaluate the electrocatalytic activity for ethanol electrooxidation and the durability in basic solution. The Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan catalyst demonstrated substantially enhanced performance and long-term stability for ethanol electrooxidation in basic solution in comparison to commercial Pd/C, which demonstrates the potential of utilizing Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan as an efficient catalyst towards ethanol oxidation. Noticeably, the Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan presented excellent catalytic activity, with a peak current density of 320.73 mA cm⁻², which was 9.5 times that of Pd/C (34.2133 mA cm⁻²). Additionally, thermodynamic and kinetic evaluations revealed that the Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan catalyst has a lower activation energy compared to Pd/C, which leads to a lower energy barrier and an excellent charge transfer rate towards ethanol oxidation.
Keywords: core-shell structure, electrocatalyst, ethanol oxidation, galvanic replacement reaction
615 ‘Only Amharic or Leave Quick!’: Linguistic Genocide in the Western Tigray Region of Ethiopia
Authors: Merih Welay Welesilassie
Abstract:
Language is a potent instrument that does not only serve the purpose of communication but also plays a pivotal role in shaping our cultural practices and identities. The right to choose one's language is a fundamental human right that helps to safeguard the integrity of both personal and communal identities. Language holds immense significance in Ethiopia, a nation with a diverse linguistic landscape that extends beyond mere communication to delineate administrative boundaries. Consequently, depriving Ethiopians of their linguistic rights represents a multifaceted punishment, more complex than food embargoes. In the aftermath of the civil war that shook Ethiopia in November 2020, displacing millions and resulting in the loss of hundreds of thousands of lives, concerns have been raised about the preservation of the indigenous Tigrayan language and culture. This is particularly true following the annexation of western Tigray into the Amhara region and the implementation of an Amharic-only language and culture education policy. This scholarly inquiry explores the intricacies surrounding the Amhara regional state's prohibition of Tigrayans' indigenous language and culture and the subsequent adoption of a monolingual and monocultural Amhara language and culture in western Tigray. The study adopts the linguistic genocide conceptual framework as an analytical tool to gain a deeper insight into the factors that contributed to and facilitated this significant linguistic and cultural shift. The research was conducted by interviewing ten teachers selected through snowball sampling. Additionally, document analysis was performed to support the findings. The findings revealed that the push for linguistic and cultural assimilation was driven by various political and economic factors and the desire to promote a single language and culture policy. This process, often referred to as ‘Amharanization,’ aimed to homogenize the culture and language of the society. The Amhara authorities have enacted several measures in pursuit of their objectives, including the outlawing of the Tigrigna language, punishment for speaking Tigrigna, the imposition of the Amhara language and culture, mandatory relocation, and even the commission of heinous acts that have inflicted immense physical and emotional suffering upon members of the Tigrayan community. Upon conducting a comprehensive analysis of the contextual factors, actions, intentions, and consequences, it has been posited that there may be instances of linguistic genocide taking place in the Western Tigray region. The present study sheds light on the severe consequences that could arise from implementing monolingual and monocultural policies in multilingual areas. By thoroughly scrutinizing the implications of such policies, this study provides insightful recommendations and directions for future research in this critical area.
Keywords: linguistic genocide, linguistic human right, mother tongue, Western Tigray
614 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emissions is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for each condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented in order to solve that issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. The current empirical models are developed by calibrating the parameters representing the engine operating conditions with respect to the measured NOx. This makes the prediction of purely empirical models limited to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and on a predictive combustion model, which is developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). Also, the oxygen concentration consumed in the burned zone and the trapped fuel mass are considered while developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. Substantial numbers of cases are tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT) are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. The various advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the lower number of data points required for calibration, establish the platform where the model-based approach can be used for the engine calibration and development process. Moreover, the focus of this work is towards establishing a framework for future model development for various other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
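The combination of physically motivated in-cylinder features with an ensemble learning method, as described above, can be sketched roughly as follows; the feature names, synthetic data, and the choice of a gradient-boosting regressor are illustrative assumptions and not the authors' actual model.

```python
# Rough sketch: ensemble regression of NOx from in-cylinder combustion features.
# The data are synthetic and the feature set / model choice are assumptions for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(1800, 2600, n),   # assumed burned-zone peak temperature [K]
    rng.uniform(0.05, 0.21, n),   # assumed in-cylinder O2 mole fraction [-]
    rng.uniform(10, 60, n),       # assumed trapped fuel mass [mg]
])
# Synthetic target loosely mimicking thermal-NOx sensitivity to temperature and O2.
y = 1e-3 * np.exp((X[:, 0] - 1800) / 250) * X[:, 1] * X[:, 2] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("R^2 on held-out points:", round(model.score(X_te, y_te), 3))
```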
613 Spatial Variability of Soil Metal Contamination to Detect Cancer Risk Zones in Coimbatore Region of India
Authors: Aarthi Mariappan, Janani Selvaraj, P. B. Harathi, M. Prashanthi Devi
Abstract:
Anthropogenic modification of the urban environment has largely increased in recent years in order to sustain the growing human population. Intense industrial activity, permanent and high traffic on the roads, a developed subterranean infrastructure network, and land use patterns are just some of its specific characteristics. Every day, the urban environment is polluted by more or less toxic emissions and by organic or metal wastes discharged from specific activities such as industrial, commercial, and municipal ones. When these eventually deposit into the soil, the physical and chemical properties of the surrounding soil are changed, transforming it into a human exposure indicator. Metals are non-degradable and accumulate in soil through regular deposits resulting from permanent human activity. Due to this, metals are a contaminant factor for soil when persistent over a long period of time and a possible danger to inhabitants' health on prolonged exposure. Metals accumulated in contaminated soil may be transferred to humans directly, by inhaling dust raised from the topsoil, by ingestion, or by dermal contact, and indirectly, through plants and animals grown on contaminated soil and used for food. Some metals, like Cu, Mn, and Zn, are beneficial for human health and represent a danger only if their concentration is above permissible levels, but other metals, like Pb, As, Cd, and Hg, are toxic even at trace levels, causing gastrointestinal and lung cancers. In urban areas, metals can be emitted from a wide variety of sources, such as industrial, residential, and commercial activities. Our study interrogates the spatial distribution of heavy metals in soil in relation to their permissible levels and their association with the health risk to the urban population in Coimbatore, India. The Coimbatore region is a high cancer-risk zone, and case records of gastrointestinal and respiratory cancer patients were collected from hospitals and geocoded in ArcGIS 10.1. The data of patients pertaining to the urban limits were retained and checked for their disease history based on their diagnosis and treatment. A disease map of cancer was prepared to show the disease distribution. It has been observed that in our study area Cr, Pb, As, Fe, and Mg exceeded their permissible levels in the soil. Using spatial overlay analysis, a relationship between environmental exposure to these potentially toxic elements in soil and the cancer distribution in Coimbatore district was established to show areas of cancer risk. Through this, our study throws light on the impact of prolonged exposure to soil contamination in urban zones, thereby exploring the possibility of detecting cancer risk zones and creating awareness of cancer risk among the exposed groups.
Keywords: soil contamination, cancer risk, spatial analysis, India
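A very simple sketch of the first screening step implied above (comparing measured soil concentrations against permissible limits and flagging sampling sites that exceed them) is shown below; the concentration values and the limit figures are placeholders, not data or thresholds from the study.

```python
# Screening sketch: flag sampling sites whose soil metal levels exceed permissible limits.
# Concentrations and limits below are placeholder values, not the study's data or standards.
permissible_mg_per_kg = {"Cr": 100, "Pb": 100, "As": 20, "Fe": 50000, "Mg": 6000}

sites = {
    "S1": {"Cr": 140, "Pb": 85,  "As": 31, "Fe": 61000, "Mg": 5200},
    "S2": {"Cr": 60,  "Pb": 120, "As": 12, "Fe": 42000, "Mg": 6900},
}

for site, measured in sites.items():
    exceeded = [metal for metal, value in measured.items()
                if value > permissible_mg_per_kg[metal]]
    print(f"{site}: exceeds permissible levels for {', '.join(exceeded) if exceeded else 'none'}")
```

In the actual workflow these flags would be attached to geocoded sampling points so that the spatial overlay with the cancer case map can highlight candidate risk zones.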
612 The Role of Law in the Transformation of Collective Identities in Nigeria
Authors: Henry Okechukwu Onyeiwu
Abstract:
Nigeria, with its rich tapestry of ethnicities, cultures, and religions, serves as a critical case study in understanding how law influences and shapes collective identities. This abstract delves into the historical context of legal systems in Nigeria, examining the colonial legacies that have influenced contemporary laws and how these laws interact with traditional practices and beliefs. This study examines the critical role of law in shaping and transforming collective identities in Nigeria, a nation characterized by its rich tapestry of ethnicities, cultures, and religions. The legal framework in Nigeria has evolved in response to historical, social, and political dynamics, influencing the way communities perceive themselves and interact with one another. This research highlights the interplay between law and collective identity, exploring how legal instruments, such as constitutions, statutes, and judicial rulings, have contributed to the formation, negotiation, and reformation of group identities over time. Moreover, contemporary legal debates surrounding issues such as citizenship, resource allocation, and communal conflicts further illustrate the law's role in identity formation. The legal recognition of different ethnic groups fosters a sense of belonging and collective identity among these groups, yet it simultaneously raises questions about inclusivity and equality. Laws concerning indigenous rights and affirmative action are essential in this discourse, as they reflect the necessity of balancing majority rule with minority rights, a challenge that Nigeria continues to navigate. By employing a multidisciplinary approach that integrates legal studies, sociology, and anthropology, the study analyses key historical milestones, such as colonial legal legacies, post-independence constitutional developments, and ongoing debates surrounding federalism and ethnic rights. It also investigates how laws affect social cohesion and conflict among Nigeria's diverse ethnic groups, as well as the role of law in promoting inclusivity and recognizing minority rights. Case studies are utilized to illustrate practical examples of legal transformations and their impact on collective identities in various Nigerian contexts, including land rights, religious freedoms, and ethnic representation in government. The findings reveal that while the law has the potential to unify disparate groups under a national identity, it can also exacerbate divisions when applied inequitably or when it favours particular groups over others. Ultimately, this study aims to shed light on the dual nature of law as both a tool for transformation and a potential source of conflict in the evolution of collective identities in Nigeria. By understanding these dynamics, policymakers and legal practitioners can develop strategies to foster unity and respect for diversity in a complex societal landscape.
Keywords: law, collective identity, Nigeria, ethnicity, conflict, inclusion, legal framework, transformation
Procedia PDF Downloads 26611 Empowering South African Female Farmers through Organic Lamb Production: A Cost Analysis Case Study
Authors: J. M. Geyser
Abstract:
Lamb is a popular meat throughout the world, particularly in Europe, the Middle East, and Oceania. However, the conventional lamb industry faces challenges related to environmental sustainability, climate change, consumer health, and dwindling profit margins. This has stimulated an increasing demand for organic lamb, as it is perceived to improve environmental sustainability and offer superior quality, taste, and nutritional value; it is also appealing to farmers, including small-scale and female farmers, as it often commands a premium price. Despite its advantages, organic lamb production presents challenges, a significant hurdle being the high production costs encompassing organic certification, lower stocking rates, higher mortality rates, and marketing costs. These costs affect the profitability and competitiveness of organic lamb producers, particularly female and small-scale farmers, who often encounter additional obstacles, such as limited access to resources and markets. Therefore, this paper examines the cost of producing organic lambs and its impact on female farmers and raises the research question: "Is organic lamb production the saving grace for female and small-scale farmers?" Objectives include estimating and comparing the production costs and profitability of organic lamb production with those of conventional lamb production, analyzing influencing factors, and assessing opportunities and challenges for female and small-scale farmers. The hypothesis states that organic lamb production can be a viable and beneficial option for female and small-scale farmers, provided that they can overcome high production costs and access premium markets. The study uses a mixed-method approach, combining qualitative and quantitative data. Qualitative data involve semi-structured interviews with ten female and small-scale farmers engaged in organic lamb production in South Africa. The interviews covered topics such as farm characteristics, practices, cost components, mortality rates, income sources, and empowerment indicators. Quantitative data used secondary published information and primary data from a female farmer. The research findings indicate that when a female farmer moves from conventional to organic lamb production, the costs in the first year of organic lamb production exceed those of conventional lamb production by over 100%. This is due to lower stocking rates and higher mortality rates in the organic system. However, costs start decreasing in the second year as stocking rates increase due to manure applications on grazing and mortality rates fall due to better worm resistance in the herd. In conclusion, this article sheds light on the economic dynamics of organic lamb production, particularly focusing on its impact on female farmers. To empower female farmers and promote sustainable agricultural practices, it is imperative to understand the cost structures and profitability of organic lamb production.Keywords: cost analysis, empowerment, female farmers, organic lamb production
Procedia PDF Downloads 75610 Generating a Multiplex Sensing Platform for the Accurate Diagnosis of Sepsis
Authors: N. Demertzis, J. L. Bowen
Abstract:
Sepsis is a complex and rapidly evolving condition resulting from uncontrolled, prolonged activation of the host immune system due to pathogenic insult. The aim of this study is the development of a multiplex electrochemical sensing platform capable of detecting both pathogen-associated and host immune markers to enable the rapid and definitive diagnosis of sepsis. A combination of aptamer and molecular imprinting approaches has been employed to generate sensing systems for lipopolysaccharide (LPS), C-reactive protein (CRP), and procalcitonin (PCT). Gold working electrodes were mechanically polished and electrochemically cleaned with 0.1 M sulphuric acid using cyclic voltammetry (CV). Following activation, a self-assembled monolayer (SAM) was generated by incubating the electrodes with a thiolated anti-LPS aptamer/dithiodibutyric acid (DTBA) mixture (1:20). 3-aminophenylboronic acid (3-APBA) in combination with the anti-LPS aptamer was used for the development of the hybrid molecularly imprinted sensor (apta-MIP). Aptasensors targeting PCT and CRP were also fabricated, following the same approach as for LPS, with mercaptohexanol (MCH) replacing DTBA. In the case of the CRP aptasensor, the SAM was formed following incubation with a 1:1 aptamer:MCH mixture. In the case of PCT, however, the SAM was formed with the aptamer itself, with subsequent backfilling with 1 μM MCH. The binding performance of all systems has been evaluated using electrochemical impedance spectroscopy. The apta-MIP's polymer thickness is controlled by varying the number of electropolymerisation cycles. At the ideal number of polymerisation cycles, the polymer covers the electrode surface and creates a binding pocket around LPS and its aptamer binding site. Fewer polymerisation cycles create a hybrid system that resembles an aptasensor, while more cycles cover the complex and produce bulk polymer-like behaviour. Both the aptasensor and the apta-MIP were challenged with LPS and compared to conventional imprinted polymers (aptamer absent from the binding site, polymer formed in the presence of LPS) and non-imprinted polymers (NIPs, LPS absent whilst the hybrid polymer is formed). A stable LPS aptasensor capable of detecting down to 5 pg/ml of LPS was generated. The apparent Kd of the system was estimated at 17 pM, with a Bmax of approximately 50 pM. The aptasensor demonstrated high specificity to LPS. The apta-MIP demonstrated superior recognition properties, with a limit of detection of 1 fg/ml and a Bmax of 100 pg/ml. The CRP and PCT aptasensors were both able to detect down to 5 pg/ml. Whilst the full binding performance is still being evaluated, none of the sensors demonstrate cross-reactivity towards LPS, CRP, or PCT. In conclusion, stable aptasensors capable of detecting LPS, PCT, and CRP at low concentrations have been generated. The realisation of a multiplex panel such as that described herein will effectively contribute to the rapid, personalised diagnosis of sepsis.Keywords: aptamer, electrochemical impedance spectroscopy, molecularly imprinted polymers, sepsis
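The apparent Kd and Bmax values quoted above are typically obtained by fitting a binding isotherm to the impedance response. The sketch below is purely illustrative (the concentration series and response values are invented, and SciPy is assumed as the fitting tool rather than whatever software the authors used); it shows how a one-site binding model recovers Kd and Bmax from a titration.

```python
# Illustrative fit of a one-site (Langmuir-type) binding model to sensor responses.
# Assumptions: the concentrations and responses below are invented example data.
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc_pM, bmax, kd_pM):
    """Response of a one-site binding isotherm at a given analyte concentration."""
    return bmax * conc_pM / (kd_pM + conc_pM)

conc = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)   # pM, hypothetical
resp = np.array([3.1, 5.5, 11.2, 18.0, 26.4, 36.9, 42.5])  # normalized impedance change

(bmax_fit, kd_fit), _ = curve_fit(one_site, conc, resp, p0=[50.0, 20.0])
print(f"Bmax ~ {bmax_fit:.1f}, apparent Kd ~ {kd_fit:.1f} pM")
```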
Procedia PDF Downloads 125609 Enhancing of Antibacterial Activity of Essential Oil by Rotating Magnetic Field
Authors: Tomasz Borowski, Dawid Sołoducha, Agata Markowska-Szczupak, Aneta Wesołowska, Marian Kordas, Rafał Rakoczy
Abstract:
Essential oils (EOs) are fragrant volatile oils obtained from plants. They are used for cooking (for flavor and aroma), cleaning, beauty (e.g., rosemary essential oil is used to promote hair growth), and health (e.g., thyme essential oil is used to treat arthritis, normalize blood pressure, reduce stress on the heart, and relieve chest infections and cough), and in the food industry as preservatives and antioxidants. Rosemary and thyme essential oils are considered among the most eminent herbal oils based on their history and medicinal properties. They possess a wide range of activity against different types of bacteria and fungi compared with other oils in both in vitro and in vivo studies. However, traditional uses of EOs are limited because rosemary and thyme oils can be toxic in high concentrations. In light of the accessible data, the following hypothesis was put forward: a low-frequency rotating magnetic field (RMF) increases the antimicrobial potential of EOs. The aim of this work was to investigate the antimicrobial activity of commercial Salvia rosmarinus L. and Thymus vulgaris L. essential oils from the Polish company Avicenna-Oil under an RMF at f = 25 Hz. A self-constructed reactor (MAP) was used for this study. The chemical composition of the oils was determined by gas chromatography coupled with mass spectrometry (GC-MS). The model bacterium Escherichia coli K12 (ATCC 25922) was used. Minimum inhibitory concentrations (MIC) of the essential oils against E. coli were determined. The oils were tested at very low concentrations (from 1 to 3 drops of essential oil per 3 mL working suspension). From the results of the disc diffusion assay and MIC tests, it can be concluded that thyme oil had the highest antibacterial activity against E. coli. Moreover, the study indicates that exposure to the RMF, as compared to unexposed controls, increases the antibacterial efficacy of the tested oils. Extended exposure to the RMF at f = 25 Hz beyond 160 minutes resulted in a significant increase in antibacterial potential against E. coli. Bacteria were killed within 40 minutes in thyme oil at the lower tested concentration (1 drop of essential oil per 3 mL working suspension). A rapid decrease (>3 log) in bacterial numbers was observed with rosemary oil within 100 minutes (at a concentration of 3 drops of essential oil per 3 mL working suspension). Thus, a method for improving the antimicrobial performance of essential oils at low concentrations was developed. However, it remains to be investigated how bacteria are killed by EOs treated with an electromagnetic field. A possible mechanism, relying on the alteration of the permeability of ionic channels in the bacterial cell walls and the resulting changes in transport into the cells, was proposed. For further studies, it is proposed to examine other types of essential oils and other antibiotic-resistant bacteria (ARB), which are a serious concern throughout the world.Keywords: rotating magnetic field, rosemary, thyme, essential oils, Escherichia coli
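Inactivation results of the kind reported above (e.g., a >3 log decrease within 100 minutes) are conventionally expressed as log10 reductions of viable counts. The short sketch below is a generic illustration with invented CFU values, not the authors' data.

```python
# Generic log-reduction calculation from viable plate counts (CFU/mL).
# The counts below are invented for illustration only.
import math

def log_reduction(cfu_initial: float, cfu_final: float) -> float:
    """log10 reduction in viable count; >3 indicates a rapid decrease."""
    return math.log10(cfu_initial) - math.log10(max(cfu_final, 1.0))

initial = 1.0e7  # CFU/mL at t = 0 (hypothetical)
final = 5.0e3    # CFU/mL after 100 min of oil + RMF exposure (hypothetical)
print(f"Log reduction: {log_reduction(initial, final):.1f}")
```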
Procedia PDF Downloads 156608 Narrating Atatürk Cultural Center as a Place of Memory and a Space of Politics
Authors: Birge Yildirim Okta
Abstract:
This paper aims to narrate the story of the Atatürk Cultural Center in Taksim Square, which was demolished in 2018, and to discuss its architecture as a social place of memory and its existence and demolition as a space of politics. The paper uses narrative discourse analysis to research the Atatürk Cultural Center (AKM) as a place of memory and a space of politics from the establishment of the Turkish Republic (1923) until today. After the establishment of the Turkish Republic, one of the most important implementations in Taksim Square, reflecting the internationalist style, was the construction of the Opera Building in the Prost Plan. The first design of the opera building belonged to Auguste Perret but could not be implemented due to economic hardship during World War II. The project was later redesigned by the architects Feridun Kip and Rüknettin Güney in 1946 but could not be completed due to the 1960 military coup. The project was then handed to another architect, Hayati Tabanlıoglu, with a change in its function to a cultural center. Eventually, the construction of the building was completed in 1969 to a completely different design. AKM became a symbol of republican modernism not only through its modern architectural style but also through its function as the first opera building of the Republic, reflecting the western, modern cultural heritage embraced by professional groups, artists, and the intelligentsia. In 2005, Istanbul's council for the protection of cultural heritage decided to list AKM as a grade 1 cultural heritage site, ending a period of controversy during which there had been calls for the demolition of the center on the claim that it had reached the end of its useful lifespan. In 2008, it was announced that the building would be closed for repairs and restoration. Over the following years, the building was quietly demolished piece by piece while the Taksim mosque was built just in front of the Atatürk Cultural Center. Belonging to the early republican period, AKM represented the cultural production of a modern society and the emergence of a westward-looking, secular public space in Turkey. Its erasure from the Taksim scene under the rule of the conservative Justice and Development Party government, and the construction of the Taksim mosque in front of AKM's parcel, is also representational. Governing the city through space has always been an important concern for those holding political power, since cities are chaotic environments that carry the tensions of the proletariat and of contending groups and are therefore seen as a threat by governments. The story of AKM as a dispositive, or regulatory apparatus, demonstrates how space itself becomes a political medium used to transform the socio-political condition. The paper narrates the existence and demolition of the Atatürk Cultural Center by discussing the constructed and demolished building as a place of memory and a space of politics.Keywords: space of politics, place of memory, Atatürk Cultural Center, Taksim square, collective memory
Procedia PDF Downloads 140607 Understanding Feminization of Indian Agriculture and the Dynamics of Intrahousehold Bargaining Power at a Household Level
Authors: Arpit Sachan, Nilanshu Kumar
Abstract:
This paper tries to understand the nuances of the feminisation of agriculture in the Indian context and how it is associated with better intrahousehold bargaining power for women. The Economic Survey of India indicates a steady increase in the share of the female workforce in Indian agriculture over the past few decades. This can be accounted for by factors such as the migration of male workers to urban areas, which shifts the entire burden of agriculture onto their female counterparts. This study therefore attempts to examine how this increase in the female workforce corresponds to better decision-making ability for women in rural farm households. The paper carefully evaluates this aspect of the feminisation of Indian agriculture and studies how various factors that improve the status of women in agriculture change with variables such as resource ownership. It uses both macro-level and micro-level data to study the dynamics of the proportion of the workforce in agriculture across different states in India and how that has translated into better indicators for women in rural areas. The fall in India's rank in the global gender wage gap index is alarming in this context and creates a puzzle alongside increasing female workforce participation. The paper considers whether the condition of women has improved over time with their increased share of employment. Using field survey data, it examines whether some of these indicators diverge at the macro and micro levels. The paper also tries to integrate the economic understanding of gender aspects of the workforce with the sociological stance prevailing in the existing literature. It therefore takes a mixed-method approach to better understand the role that social structure plays in the improved status of women within and across households. Finally, the paper helps us understand whether there is truly a feminisation of Indian agriculture or merely exploitation of a different kind. The study intends to draw a distinction between a gendered labour force in Indian agriculture and the complete democratization of Indian agriculture. It focuses primarily on areas where the exodus of male migrants pushes women to work on agricultural farms. The question posed is whether women work in agriculture willingly or whether urbanisation and development-induced conditions push them to work as farm labourers. The motive is to understand whether factors like resource ownership and the ability to make autonomous decisions are interlinked with the increased proportion of the female workforce. Based on this framework, we finally provide a brief comment on the policy implications of government intervention in improving Indian agriculture and the gender aspects associated with it.Keywords: feminisation, intrahousehold bargaining, farm households, migration, agriculture, decision-making
Procedia PDF Downloads 130606 Treatment of Onshore Petroleum Drill Cuttings via Soil Washing Process: Characterization and Optimal Conditions
Authors: T. Poyai, P. Painmanakul, N. Chawaloesphonsiya, P. Dhanasin, C. Getwech, P. Wattana
Abstract:
Drilling is a key activity in oil and gas exploration and production. Drilling always requires the use of drilling mud for lubricating the drill bit and controlling the subsurface pressure. As drilling proceeds, a considerable amount of cuttings, or rock fragments, is generated. In general, water or Water Based Mud (WBM) serves as the drilling fluid for the top-hole section. The cuttings generated from this section are non-hazardous and are normally used as fill material. On the other hand, drilling the bottom-hole to reservoir section uses Synthetic Based Mud (SBM), which is composed of synthetic oils. The bottom-hole cuttings (SBM cuttings) are regarded as hazardous waste under government regulations due to the presence of hydrocarbons. Currently, the SBM cuttings are disposed of as an alternative fuel and raw material in cement kilns. Instead of burning, this work proposes an alternative for drill cuttings management with two ultimate goals: (1) reduction of hazardous waste volume; and (2) making use of the cleaned cuttings. Soil washing was selected as the major treatment process. The physicochemical properties of the drill cuttings were analyzed, including size fraction, pH, moisture content, and hydrocarbons. The particle size of the cuttings was analyzed via a light scattering method. Oil present in the cuttings was quantified in terms of total petroleum hydrocarbon (TPH) by gas chromatography with a flame ionization detector (GC-FID). Other components were measured by standard methods for soil analysis. The effects of different washing agents, the liquid-to-solid (L/S) ratio, washing time, mixing speed, rinse-to-solid (R/S) ratio, and rinsing time were also evaluated. The drill cuttings had an electrical conductivity of 3.84 dS/m, a pH of 9.1, and a moisture content of 7.5%. The TPH in the cuttings was in the diesel range, with concentrations ranging from 20,000 to 30,000 mg/kg dry cuttings. The majority of cuttings particles had a mean diameter of 50 µm, representing the silt fraction. The results also suggested that a green solvent was the most promising washing agent for cuttings treatment with regard to occupational health, safety, and environmental benefits. The optimal washing conditions were obtained at an L/S of 5, a washing time of 15 min, a mixing speed of 60 rpm, an R/S of 10, and a rinsing time of 1 min. After the washing process, three fractions (clean cuttings, spent solvent, and wastewater) were considered, and recommendations were provided for each. Residual TPH of less than 5,000 mg/kg was detected in the clean cuttings. The treated cuttings can then be used for various purposes. The spent solvent had a calorific value higher than 3,000 cal/g and can be used as an alternative fuel; alternatively, the used solvent can be recovered using distillation or chromatography techniques. Finally, the generated wastewater can be combined with the produced water and managed by re-injection into the reservoir.Keywords: drill cuttings, green solvent, soil washing, total petroleum hydrocarbon (TPH)
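For batch planning, the optimal ratios reported above translate directly into washing-agent and rinse-water quantities per batch of cuttings. The sketch below is a back-of-the-envelope helper; the batch size and solvent density are assumed illustrative values, and only the L/S = 5, R/S = 10 conditions and the 5,000 mg/kg residual TPH figure come from the study.

```python
# Back-of-the-envelope sizing of a washing batch. Assumptions: the 100 kg batch
# and the solvent density of ~0.9 kg/L are illustrative, not from the study.
L_TO_S = 5        # washing agent to solids ratio (optimal condition)
R_TO_S = 10       # rinse water to solids ratio (optimal condition)
TPH_LIMIT = 5000  # mg/kg residual TPH reported for the cleaned cuttings

def batch_requirements(cuttings_kg: float, solvent_density_kg_per_l: float = 0.9):
    solvent_kg = L_TO_S * cuttings_kg
    return {
        "solvent_kg": solvent_kg,
        "solvent_l": round(solvent_kg / solvent_density_kg_per_l, 1),
        "rinse_water_kg": R_TO_S * cuttings_kg,
    }

print(batch_requirements(100.0))

residual_tph = 3200  # mg/kg, hypothetical post-wash measurement
print("cuttings below residual TPH figure:", residual_tph < TPH_LIMIT)
```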
Procedia PDF Downloads 153605 Roadway Infrastructure and Bus Safety
Authors: Richard J. Hanowski, Rebecca L. Hammond
Abstract:
Very few studies have been conducted to investigate safety issues associated with motorcoach/bus operations. The current study investigates the impact that roadway infrastructure, including locality, roadway grade, traffic flow, and traffic density, has on bus safety. A naturalistic driving study involving 43 motorcoaches was conducted in the U.S.A. Two fleets participated in the study, and over 600,000 miles of naturalistic driving data were collected. Sixty-five bus drivers participated in this study: 48 male and 17 female. The average age of the drivers was 49 years. A sophisticated data acquisition system (DAS) was installed on each of the 43 motorcoaches, and a variety of kinematic and video data were continuously recorded. The data were analyzed by identifying safety-critical events (SCEs), which included crashes, near-crashes, crash-relevant conflicts, and unintentional lane deviations. Additionally, baseline (normative driving) segments were identified and analyzed for comparison with the SCEs. This presentation highlights the need for bus safety research and the methods used in this data collection effort. With respect to elements of roadway infrastructure, this study highlights the methods used to assess locality, roadway grade, traffic flow, and traffic density. Locality was determined by manual review of the recorded video for each event and baseline and was characterized in terms of open country, residential, business/industrial, church, playground, school, urban, airport, interstate, and other. Roadway grade was similarly determined through video review and characterized in terms of level, grade up, grade down, hillcrest, and dip. The video was also used to determine the traffic flow and traffic density at the time of the event or baseline segment. For traffic flow, video was used to assess which of the following best characterized the event or baseline: not divided (2-way traffic), not divided (center 2-way left turn lane), divided (median or barrier), one-way traffic, or no lanes. For traffic density, level-of-service categories were used: A1, A2, B, C, D, E, and F. Highlighted in this abstract are only a few of the many roadway elements that were coded in this study. Other elements included lighting levels, weather conditions, roadway surface conditions, relation to junction, and roadway alignment. Note that a key component of this study was to assess the impact that driver distraction and fatigue have on bus operations. In this regard, once the roadway elements had been coded, the primary research questions addressed were (i) "What environmental conditions are associated with drivers' choice to engage in tasks?" and (ii) "What are the odds of being involved in an SCE while engaging in tasks under these conditions?". The study may be of interest to researchers and traffic engineers who are interested in the relationship between roadway infrastructure elements and safety events in motorcoach bus operations.Keywords: bus safety, motorcoach, naturalistic driving, roadway infrastructure
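Research question (ii) is commonly answered with an odds ratio computed from a 2x2 table of SCEs versus baseline epochs, split by task engagement. The sketch below uses invented counts purely to illustrate the calculation; it is not a result from this study.

```python
# Odds ratio of a safety-critical event (SCE) given task engagement, from a 2x2
# table of SCEs vs. baseline epochs. All counts below are invented.
import math

sce      = {"engaged": 40,  "not_engaged": 60}   # SCE epochs
baseline = {"engaged": 150, "not_engaged": 550}  # baseline epochs

odds_ratio = (sce["engaged"] / sce["not_engaged"]) / (baseline["engaged"] / baseline["not_engaged"])

# 95% confidence interval on the log odds ratio (Woolf method).
se = math.sqrt(sum(1.0 / n for n in [*sce.values(), *baseline.values()]))
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```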
Procedia PDF Downloads 180604 Effects of Different Fungicide In-Crop Treatments on Plant Health Status of Sunflower (Helianthus annuus L.)
Authors: F. Pal-Fam, S. Keszthelyi
Abstract:
The phytosanitary condition of sunflower (Helianthus annuus L.) is endangered by several phytopathogenic agents, mainly microfungi such as Sclerotinia sclerotiorum, Diaporthe helianthi, Plasmopara halstedii, Macrophomina phaseolina, and others. Several agrotechnical and chemical measures are available against them, for instance, tolerant hybrids, crop rotation, and in-crop chemical treatments. Different fungicide treatment methods are used on sunflower in Hungarian agricultural practice in the quest to obtain healthy and economical plant products. In addition, many active ingredients are available for sunflower protection in Hungary. This study examined the effect of five fungicide active substances available on the market and three application modes (early; late; and combined early and late treatments) on a total of 9 sample plots of 0.1 ha each. Five successive vegetation periods were investigated, from 2013 to 2017. The treatments were: 1) untreated control; 2) boscalid and dimoxystrobin, late treatment (July); 3) boscalid and dimoxystrobin, early treatment (June); 4) picoxystrobin and cyproconazole, early treatment; 5) picoxystrobin and cymoxanil and famoxadone, early treatment; 6) picoxystrobin and cyproconazole early; cymoxanil and famoxadone late treatments; 7) picoxystrobin and cyproconazole early; picoxystrobin and cymoxanil and famoxadone late treatments; 8) trifloxystrobin and cyproconazole, early treatment; and 9) trifloxystrobin and cyproconazole, both early and late treatments. Owing to the very different yearly weather conditions, different phytopathogenic fungi were dominant in particular years: Diaporthe and Alternaria in 2013; Alternaria and Sclerotinia in 2014 and 2015; Alternaria, Sclerotinia, and Diaporthe in 2016; and Alternaria in 2017. As a result of the treatments, 'infection frequency' and 'infestation rate' showed a significant decrease compared to the control plot. There were no significant differences between the efficacies of the different fungicide mixes; all were almost equally effective against the phytopathogenic fungi. The most dangerous infection, caused by Sclerotinia, was practically eliminated in all of the treatments. Among the single treatments, the late treatment carried out in July was the least efficient, followed by the early treatments carried out in June. The most efficient were the double treatments applied in both June and July, resulting in a 70-80% decrease in infection frequency and a 75-90% decrease in infestation rate compared with the control plot in the particular years. The lowest yield quantity was observed in the control plot, followed by the late single treatment. The yield of the early single treatments was higher, while the double treatments showed the highest yield quantities (18.3-22.5% higher than the control plot in particular years). In total, according to our five-year investigation, the most effective application mode is the double in-crop treatment per vegetation period, which is reflected in the yield surplus.Keywords: fungicides, treatments, phytopathogens, sunflower
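Efficacies of the kind reported above (a 70-80% drop in infection frequency relative to the untreated plot) can be expressed with Abbott's formula. The plot values in the sketch below are made up for illustration and are not the trial data.

```python
# Abbott's formula: efficacy (%) = (control - treated) / control * 100,
# applied here to infection frequency. The plot values are invented.
def abbott_efficacy(control_pct: float, treated_pct: float) -> float:
    return (control_pct - treated_pct) / control_pct * 100.0

control_infection = 60.0           # % infected plants on the untreated plot (hypothetical)
double_treatment_infection = 14.0  # % infected plants after early + late sprays (hypothetical)

print(f"Efficacy of the double treatment: "
      f"{abbott_efficacy(control_infection, double_treatment_infection):.0f}%")
```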
Procedia PDF Downloads 141603 Predicting OpenStreetMap Coverage by Means of Remote Sensing: The Case of Haiti
Authors: Ran Goldblatt, Nicholas Jones, Jennifer Mannix, Brad Bottoms
Abstract:
Accurate, complete, and up-to-date geospatial information is the foundation of successful disaster management. When the 2010 Haiti Earthquake struck, accurate and timely information on the distribution of critical infrastructure was essential to the disaster response community for effective search and rescue operations. Existing geospatial datasets such as Google Maps did not have comprehensive coverage of these features. In the days following the earthquake, many organizations released high-resolution satellite imagery, catalyzing a worldwide effort to map Haiti and support the recovery operations. Of these initiatives, OpenStreetMap (OSM), a collaborative project to create a free editable map of the world, used the imagery to support volunteers in digitizing roads, buildings, and other features, creating the most detailed map of Haiti in existence in just a few weeks. However, large portions of the island are still not fully covered by OSM. There is an increasing need for a tool to automatically identify which areas in Haiti, as well as in other countries vulnerable to disasters, are not fully mapped. The objective of this project is to leverage different types of remote sensing measurements, together with machine learning approaches, in order to identify geographical areas where OSM coverage of building footprints is incomplete. Several remote sensing measures and derived products were assessed as potential predictors of OSM building footprint coverage, including: intensity of light emitted at night (based on VIIRS measurements), spectral indices derived from the Sentinel-2 satellite (normalized difference vegetation index (NDVI), normalized difference built-up index (NDBI), soil-adjusted vegetation index (SAVI), urban index (UI)), surface texture (based on Sentinel-1 SAR measurements), elevation, and slope. Additional remote-sensing-derived products, such as Hansen Global Forest Change, DLR's Global Urban Footprint (GUF), and the World Settlement Footprint (WSF), were also evaluated as predictors, as was the OSM street and road network (including junctions). Supervised modelling with a random forest predicted 89% of the variation in OSM building footprint area in a given cell. These predictions allowed for the identification of cells that are predicted to be covered but are not actually mapped yet. With these results, this methodology could be adapted to any location to assist in preparing for future disasters and ensure that essential geospatial information is available to support response and recovery efforts during and following major disasters.Keywords: disaster management, Haiti, machine learning, OpenStreetMap, remote sensing
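A minimal version of the prediction step can be sketched with off-the-shelf tools (scikit-learn and pandas are assumed here; the input file and its column names are hypothetical). Each row represents one grid cell with remote-sensing predictors and the observed OSM building footprint area; cells where the model predicts far more built-up area than OSM contains are flagged as likely under-mapped.

```python
# Sketch of the coverage-prediction step. Assumptions: scikit-learn and pandas
# stand in for the original tooling, and 'cells.csv' with these columns is hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

cells = pd.read_csv("cells.csv")  # one row per grid cell
features = ["viirs_ntl", "ndvi", "ndbi", "savi", "ui", "sar_texture",
            "elevation", "slope", "osm_road_km", "osm_junctions"]
target = "osm_building_area_m2"

X_train, X_test, y_train, y_test = train_test_split(
    cells[features], cells[target], test_size=0.3, random_state=42)

model = RandomForestRegressor(n_estimators=500, random_state=42)
model.fit(X_train, y_train)
print("R^2 on held-out cells:", round(r2_score(y_test, model.predict(X_test)), 2))

# Flag cells likely under-mapped: predicted built-up area far exceeds the mapped area.
cells["predicted_m2"] = model.predict(cells[features])
under_mapped = cells[cells["predicted_m2"] > 2.0 * (cells[target] + 1.0)]
print(f"{len(under_mapped)} cells flagged as potentially under-mapped")
```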
Procedia PDF Downloads 125602 Fire Safe Medical Oxygen Delivery for Aerospace Environments
Authors: M. A. Rahman, A. T. Ohta, H. V. Trinh, J. Hyvl
Abstract:
Atmospheric pressure and oxygen (O2) concentration are critical life support parameters for human-occupied aerospace vehicles and habitats. Various medical conditions may require medical O2; for example, the American Medical Association has determined that commercial air travel exposes passengers to altitude-related hypoxia and gas expansion. This may cause some passengers to experience significant symptoms and medical complications during the flight, requiring supplemental medical-grade O2 to maintain adequate tissue oxygenation and prevent hypoxemic complications. Although supplemental medical-grade O2 is a successful lifesaver for respiratory and cardiac failure, O2-enriched exhaled air can contain more than 95% O2, increasing the likelihood of a fire. In an aerospace environment, a localized high-concentration O2 bubble forms around a patient being treated for hypoxia, increasing the cabin O2 beyond the safe limit. To address this problem, this work describes a medical O2 delivery system that can reduce the O2 concentration of patient-exhaled, O2-rich air to safe levels while maintaining the prescribed O2 administration to the patient. The O2 delivery system is designed to be part of the medical O2 kit. The system uses cationic multimetallic cobalt complexes to reversibly, selectively, and stoichiometrically chemisorb O2 from the exhaled air. An air-release sub-system monitors the exhaled air, and as soon as the O2 percentage falls below 21%, the air is released to the room air. The O2-enriched exhaled air is channeled through a layer of porous, thin-film heaters coated with the cobalt complex. The complex absorbs O2 and, when saturated, is heated to 100°C using the thin-film heater. Upon heating, the complex desorbs O2 and is once again ready to absorb and remove the excess O2 from exhaled air. O2 absorption is a sub-second process, and desorption is a multi-second process. While heating at 0.685 °C/sec, the complex desorbs ~90% of its O2 in 110 sec. These fast reaction times mean that a simultaneous absorb/desorb process in the O2 delivery system will provide continuous absorption of O2. Moreover, the complex can concentrate O2 by a factor of 160 relative to air and desorb over 90% of the O2 at 100°C. Over 12 thermogravimetry cycles, a decrease of less than 0.1% in the reversibility of O2 uptake was observed. One kilogram of the complex can desorb over 20 L of O2, so simultaneous desorption by 0.5 kg of complex and absorption by another 0.5 kg can potentially remove 9 L/min of O2 (~90% desorbed at 100°C) from exhaled air continuously. The complex was synthesized and characterized for reversible O2 absorption and efficacy. The complex changes its color from dark brown to light gray after O2 desorption. In addition to thermogravimetric analysis, the O2 absorption/desorption cycle was characterized using optical imaging, showing stable color changes over ten cycles. The complex was also tested at room temperature in a low-O2 environment in its O2-desorbed state and was observed to hold the deoxygenated state under these conditions. The results show the feasibility of using the complex for reversible O2 absorption in the proposed fire-safe medical O2 delivery system.Keywords: fire risk, medical oxygen, oxygen removal, reversible absorption
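The continuous-removal figure quoted above can be sanity-checked with simple arithmetic. The sketch below restates the capacity numbers from the abstract (over 20 L of O2 desorbable per kg of complex, ~90% desorbed at 100°C) and derives the bed swap interval a two-bed, 0.5 kg + 0.5 kg swing arrangement would need to sustain 9 L/min; the two-bed framing itself is an assumption for illustration.

```python
# Rough capacity check for an absorb/desorb swing between two 0.5 kg beds.
# Capacity figures are taken from the abstract; the two-bed framing is assumed.
DESORBABLE_L_PER_KG = 20.0  # litres of O2 released per kg of complex
DESORB_FRACTION = 0.90      # ~90% desorbed at 100 degC
BED_MASS_KG = 0.5           # one bed absorbs while the other desorbs
TARGET_L_PER_MIN = 9.0      # continuous removal rate quoted in the abstract

o2_per_swing_l = BED_MASS_KG * DESORBABLE_L_PER_KG * DESORB_FRACTION
implied_swap_interval_min = o2_per_swing_l / TARGET_L_PER_MIN

print(f"O2 released per swing: {o2_per_swing_l:.1f} L")
print(f"Bed swap interval needed for {TARGET_L_PER_MIN:.0f} L/min: {implied_swap_interval_min:.1f} min")
```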
Procedia PDF Downloads 104601 Advancing the Analysis of Physical Activity Behaviour in Diverse, Rapidly Evolving Populations: Using Unsupervised Machine Learning to Segment and Cluster Accelerometer Data
Authors: Christopher Thornton, Niina Kolehmainen, Kianoush Nazarpour
Abstract:
Background: Accelerometers are widely used to measure physical activity behavior, including in children. The traditional method for processing acceleration data uses cut points, relying on calibration studies that relate the quantity of acceleration to energy expenditure. As these relationships do not generalise across diverse populations, they must be parametrised for each subpopulation, including different age groups, which is costly and makes studies across diverse populations difficult. A data-driven approach that allows physical activity intensity states to emerge from the data under study, without relying on parameters derived from external populations, offers a new perspective on this problem and potentially improved results. We evaluated the data-driven approach in a diverse population with a range of rapidly evolving physical and mental capabilities, namely very young children (9-38 months old), where this new approach may be particularly appropriate. Methods: We applied an unsupervised machine learning approach (a hidden semi-Markov model, HSMM) to segment and cluster the accelerometer data recorded from 275 children with a diverse range of physical and cognitive abilities. The HSMM was configured to identify a maximum of six physical activity intensity states, and the output of the model was the time spent by each child in each of the states. For comparison, we also processed the accelerometer data using published cut points with available thresholds for the population. This provided us with time estimates for each child's sedentary time (SED), light physical activity (LPA), and moderate-to-vigorous physical activity (MVPA). Data on the children's physical and cognitive abilities were collected using the Paediatric Evaluation of Disability Inventory (PEDI-CAT). Results: The HSMM identified two inactive states (INS, comparable to SED), two lightly active, long-duration states (LAS, comparable to LPA), and two short-duration, high-intensity states (HIS, comparable to MVPA). Overall, the children spent on average 237/392 minutes per day in INS/SED, 211/129 minutes per day in LAS/LPA, and 178/168 minutes per day in HIS/MVPA. We found that INS overlapped with 53% of SED, LAS overlapped with 37% of LPA, and HIS overlapped with 60% of MVPA. We also looked at the correlation between the time spent by a child in either HIS or MVPA and their physical and cognitive abilities. We found that HIS was more strongly correlated with physical mobility (R² = 0.50 for HIS vs. 0.28 for MVPA), cognitive ability (R² = 0.31 vs. 0.15), and age (R² = 0.15 vs. 0.09), indicating increased sensitivity to key attributes associated with a child's mobility. Conclusion: An unsupervised machine learning technique can segment and cluster accelerometer data according to the intensity of movement at a given time. It provides a potentially more sensitive, appropriate, and cost-effective approach to analysing physical activity behavior in diverse populations, compared to the current cut points approach. This, in turn, supports research that is more inclusive across diverse populations.Keywords: physical activity, machine learning, under 5s, disability, accelerometer
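A simplified version of the segmentation step can be sketched with an off-the-shelf hidden Markov model. The example below uses hmmlearn's GaussianHMM as a stand-in for the hidden semi-Markov model used in the study (an assumption: the actual HSMM additionally models state durations), fits up to six states to epoch-level acceleration magnitude, and tallies the minutes spent in each state.

```python
# Sketch of data-driven activity segmentation. Assumption: hmmlearn's GaussianHMM
# is a simpler stand-in for the hidden semi-Markov model (HSMM) used in the study.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def minutes_per_state(acc_magnitude: np.ndarray, n_states: int = 6, epoch_s: float = 5.0):
    """Fit an HMM to acceleration magnitude and return minutes per intensity state."""
    X = acc_magnitude.reshape(-1, 1)
    model = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=200, random_state=0)
    model.fit(X)
    states = model.predict(X)
    minutes = {s: (states == s).sum() * epoch_s / 60.0 for s in range(n_states)}
    # Rank states by mean acceleration so rank 0 is the least active state.
    order = np.argsort(model.means_.ravel())
    return {rank: minutes[int(s)] for rank, s in enumerate(order)}

# Hypothetical one-day recording: 5 s epochs of acceleration magnitude (arbitrary units).
rng = np.random.default_rng(0)
day = np.concatenate([rng.normal(0.05, 0.01, 6000),  # quiet periods
                      rng.normal(0.60, 0.10, 3000),  # light activity
                      rng.normal(1.80, 0.30, 1000)]) # brief high-intensity bursts
print(minutes_per_state(day))
```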
Procedia PDF Downloads 210600 Navigating the Digital Landscape: An Ethnographic Content Analysis of Black Youth's Encounters with Racially Traumatic Content on Social Media
Authors: Tiera Tanksley, Amanda M. McLeroy
Abstract:
The advent of technology and social media has ushered in a new era of communication, providing platforms for news dissemination and cause advocacy. However, this digital landscape has also exposed a distressing phenomenon termed "Black death," or trauma porn. This paper delves into the profound effects of repeated exposure to traumatic content on Black youth via social media, exploring the psychological impacts and the potential reinforcement of stereotypes. Employing Critical Race Technology Theory (CRTT), the study sheds light on algorithmic anti-blackness and its influence on Black youth's lives and educational experiences. Through ethnographic content analysis, the research investigates common manifestations of Black death encountered online by Black adolescents. Findings unveil distressing viral videos, traumatic images, racial slurs, and hate speech, perpetuating stereotypes. However, amidst the distress, the study identifies narratives of activism and social justice on social media platforms, empowering Black youth to engage in positive change. Coping mechanisms and community support emerge as significant factors in navigating the digital landscape. The study underscores the need for comprehensive interventions and policies informed by evidence-based research. By addressing algorithmic anti-blackness and promoting digital resilience, the paper advocates for a more empathetic and inclusive online environment. Understanding coping mechanisms and community support becomes imperative for fostering mental well-being among Black adolescents navigating social media. In education, the implications are substantial. Acknowledging the impact of Black death content, educators play a pivotal role in promoting media literacy and digital resilience. By creating inclusive and safe online spaces, educators can mitigate negative effects and encourage open discussions about traumatic content. The application of CRTT in educational technology emphasizes dismantling systemic biases and promoting equity. In conclusion, this study calls for educators to be cognizant of the impact of Black death content on social media. By prioritizing media literacy, fostering digital resilience, and advocating for unbiased technologies, educators contribute to an inclusive and just educational environment for all students, irrespective of their race or background. Addressing challenges related to Black death content proactively ensures the well-being and mental health of Black adolescents, fostering an empathetic and inclusive digital space.Keywords: algorithmic anti-Blackness, digital resilience, media literacy, traumatic content
Procedia PDF Downloads 56