Search results for: complex and dynamic systems
14 Observations on Cultural Alterity and Environmental Conservation: Populations "Delayed" and Excluded from Health and Public Hygiene Policies in Mexico (1890-1930)
Authors: Marcela Davalos Lopez
Abstract:
The history of the circulation of hygienic knowledge and the consolidation of public health in Latin American cities toward the end of the 19th century is well known. Mexico City, among them, was inserted into international politics, strengthened its institutions and medical knowledge, applied the parameters of modernity, and built sanitary engineering works. Despite the power this hygienist system achieved, its scope was relative: it cannot be generalized to all cities. Through a comparative and contextual analysis, it will be shown that conclusions drawn from modern urban historiography reveal, when viewed from our contemporary standpoint, fractures. Between 1890 and 1930, the small cities and areas surrounding the Mexican capital adapted international and federal public health regulations in their own way. This will be shown for neighborhoods located around Mexico City and for a medium-sized city about 80 km from the capital, Cuernavaca. While the inhabitants of the neighborhoods witnessed, and were affected in their territories by, the evolving forms that public hygiene policies were taking, in Cuernavaca the dictates arrived only as an echo. While the capital was drained, large roads were opened, roundabouts were erected, residents were expelled, and drains, sewers, drinking water pipes, and other works were built, Cuernavaca remained sheltered in other times and practices. What explains this? Undoubtedly, the time and energy that politicians and the group of "scientists" devoted to these enormous works in the Mexican capital kept them from addressing the issue in remote villages, and it was not until the 20th century that federal hygiene policy began to be strengthened. Even so, other factors underscore the particularities of each site. I would like to draw attention here to the different receptions that each town gave to public hygiene. We will see that Cuernavaca responded to its own semi-rural culture, history, orography, and functions, prolonging for much longer, for example, the use of its deep ravines as sewers. For their part, the neighborhoods surrounding the capital, although affected by and excluded from hygienist policies, chose to move away from them and to remedy the deficiencies with their own resources (they drew on what remained of the drained Lake of Mexico to continue their lake-based practices). All of this points to a paradox that shapes our contemporary concerns: on the one hand, the benefits derived from medical knowledge and its technological applications (here referring particularly to the urban health system) and, on the other, the alteration it caused in environmental settings. Places like Cuernavaca (classified as backward by the hygienists of the nineteenth century and the first decades of the twentieth), as well as landscapes such as the outlying neighborhoods affected by advances in sanitary engineering, keep in their memory buried practices that we observe today as possible ways to reestablish environmental balances: alternative uses of water; recycling of organic materials; local uses of fauna; various systems for breaking down excreta; and so on. In sum, what the nineteenth and first half of the twentieth centuries graded as levels of backwardness or progress turns out to be key information for rethinking the routes of environmental conservation. When we return to the observations of the scientists, politicians, and lawyers of that period, we find a historically rejected cultural alterity.
Populations such as Cuernavaca that, due to their history, orography, and/or the insufficiency of federal policies, maintained different relationships with the environment today give us clues for reorienting basic elements of cities: alternative uses of water, the recycling of organic raw materials, and the consumption of local products, among others. It is, therefore, a matter of unearthing the rejected, which cries out to rise to the surface.
Keywords: sanitary hygiene, Mexico City, cultural alterity, environmental conservation, environmental history
Procedia PDF Downloads 166
13 Transforming Emergency Care: Revolutionizing Obstetrics and Gynecology Operations for Enhanced Excellence
Authors: Lolwa Alansari, Hanen Mrabet, Kholoud Khaled, Abdelhamid Azhaghdani, Sufia Athar, Aska Kaima, Zaineb Mhamdia, Zubaria Altaf, Almunzer Zakaria, Tamara Alshadafat
Abstract:
Introduction: The Obstetrics and Gynecology Emergency Department at Alwakra Hospital has faced significant challenges, further exacerbated by the impact of the COVID-19 pandemic. These challenges include overcrowding, extended wait times, and a notable surge in demand for emergency care services. Moreover, prolonged waiting times have emerged as a primary factor behind patients leaving without receiving attention, known as left without being seen (LWBS), or absconding unexpectedly. Addressing inefficient patient flow in the obstetrics and gynecology emergency department has brought about substantial improvements in patient care, healthcare administration, and overall departmental efficiency. These changes have not only alleviated overcrowding but have also elevated the quality of emergency care, resulting in higher patient satisfaction, better outcomes, and operational gains. Methodology: The COVID-19 pandemic served as a catalyst for substantial transformations in the obstetrics and gynecology emergency department, aligning seamlessly with the strategic direction of Hamad Medical Corporation (HMC). The fundamental aim of this initiative is to revolutionize the operational efficiency of the OB-GYN ED. To accomplish this mission, a range of transformations was initiated, focusing on essential areas such as digitizing systems, optimizing resource allocation, enhancing budget efficiency, and reducing overall costs. The project utilized the Plan-Do-Study-Act (PDSA) model, involving a diverse team collecting baseline data and introducing throughput improvements. Post-implementation data and feedback were analysed, leading to the integration of effective interventions into standard procedures. These interventions included optimized space utilization, real-time communication, bedside registration, technology integration, pre-triage screening, enhanced communication and patient education, consultant presence, and a culture of continuous improvement. These strategies significantly reduced waiting times, enhancing both patient care and operational efficiency. Results: Results demonstrated a substantial reduction in overall average waiting time, dropping from 35 to approximately 14 minutes by August 2023. The wait times for priority 1 cases were reduced from 22 to 0 minutes, and for priority 2 cases from 32 to approximately 13.6 minutes. The proportion of patients spending less than 8 hours in the OB ED observation beds rose from 74% in January 2022 to over 98% in 2023. Notably, there was a remarkable decrease in LWBS and absconded patient rates from 2020 to 2023. Conclusion: The project initiated a profound change in the department's operational environment. Efficiency became deeply embedded in the unit's culture, promoting teamwork among staff that went beyond the project's original focus and had a positive influence on operations in other departments. This effectiveness not only made processes more efficient but also resulted in significant cost reductions for the hospital. These cost savings were achieved by reducing wait times, which in turn led to fewer prolonged patient stays and reduced the need for additional treatments.
These continuous improvement initiatives have now become an integral part of the Obstetrics and Gynecology Division's standard operating procedures, ensuring that the positive changes brought about by the project persist and evolve over time.
Keywords: overcrowding, waiting time, person-centered care, quality initiatives
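As a rough illustration of the scale of the reported throughput gains, the short Python sketch below computes the relative wait-time reductions from the before/after figures quoted in the abstract; the dictionary simply restates those figures, and the script is not part of the original study.

```python
# Illustrative calculation of the relative waiting-time reductions reported above.
# The before/after values come from the abstract; everything else is for demonstration only.

wait_times_min = {
    "overall": (35, 14),       # average wait, before -> after (minutes)
    "priority_1": (22, 0),
    "priority_2": (32, 13.6),
}

for category, (before, after) in wait_times_min.items():
    reduction_pct = (before - after) / before * 100
    print(f"{category}: {before} -> {after} min ({reduction_pct:.0f}% reduction)")
```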
Procedia PDF Downloads 65
12 Rapid Situation Assessment of Family Planning in Pakistan: Exploring Barriers and Realizing Opportunities
Authors: Waqas Abrar
Abstract:
Background: Pakistan is confronted with a formidable challenge in increasing the uptake of modern contraceptive methods. USAID, through its flagship Maternal and Child Survival Program (MCSP) in Pakistan, is determined to support the provincial Departments of Health and Population Welfare in increasing the country's contraceptive prevalence rate (CPR) in Sindh, Punjab, and Balochistan to achieve FP2020 goals. To inform program design and planning, a Rapid Situation Assessment (RSA) of family planning was carried out in the Rawalpindi and Lahore districts in Punjab and the Karachi district in Sindh. Methodology: The methodology consisted of a comprehensive desk review of available literature and used a qualitative approach comprising in-depth interviews (IDIs) and focus group discussions (FGDs). FGDs were conducted with community women, men, and mothers-in-law, whereas IDIs were conducted with health facility in-charges/chiefs, healthcare providers, and community health workers. Results: Some of the oft-quoted reasons captured during the desk review included poor quality of care at public sector facilities, affordability and accessibility in rural communities, and providers' technical incompetence. Moreover, providers had inadequate knowledge of contraceptive methods and lacked counseling techniques, leading to dissatisfied clients and, hence, discontinuation of contraceptive methods. These dissatisfied clients spread myths and misconceptions about contraceptives in their communities, which seriously damages community-level family planning efforts. Private providers were found reluctant to insert Intrauterine Contraceptive Devices (IUCDs) due to inadequate knowledge of post-insertion issues/side effects. FGDs and IDIs unveiled multi-faceted reasons for poor contraceptive uptake. It was found that low education and socio-economic levels lead to low contraceptive uptake, and mostly uneducated women rely on condoms provided by Lady Health Workers (LHWs). Providers had little or no knowledge about postpartum family planning or lactational amenorrhea. At the community level, family planning counseling sessions organized by LHWs and male mobilizers do not sensitize community men to the permissibility of contraception in Islam. Many women attributed their physical ailments to the use of contraceptives. Lack of in-service training, job aids, and Information, Education and Communication (IEC) materials at facilities seriously compromises the quality of care in effective family planning service delivery. This is further compounded by frequent stock-outs of contraceptives at public healthcare facilities, poor data quality, false reporting, and a lack of data verification systems and follow-up. Conclusions: Some key conclusions from this assessment included the need for capacity building of healthcare providers on long-acting reversible contraceptives (LARCs), which give women contraception for a longer period. Secondly, capacity building of healthcare providers on postpartum family planning is an enormous challenge that can best be addressed through institutionalization. Thirdly, providers should be equipped with counseling skills and techniques, including clear communication of the pros and cons of all contraceptive methods. Fourthly, printed materials such as job aids and Information, Education and Communication (IEC) materials should be disseminated among healthcare providers and clients.
These concluding statements helped MCSP make informed decisions regarding the broad objectives of the project and were duly approved by USAID.
Keywords: capacity building, contraceptive prevalence rate, family planning, institutionalization, Pakistan, postpartum care, postpartum family planning services
Procedia PDF Downloads 155
11 Fabrication of Highly Stable Low-Density Self-Assembled Monolayers by Thiol-Yne Click Reaction
Authors: Leila Safazadeh, Brad Berron
Abstract:
Self-assembled monolayers have a tremendous impact on interfacial science due to the unique opportunity they offer to tailor surface properties. Low-density self-assembled monolayers are an emerging class of monolayers where the environment-interfacing portion of the adsorbate has a greater level of conformational freedom when compared to traditional monolayer chemistries. This greater range of motion and increased spacing between surface-bound molecules offers new opportunities for tailoring adsorption phenomena in sensing systems. In particular, we expect low-density surfaces to offer a unique opportunity to intercalate surface-bound ligands into the secondary structure of proteins and other macromolecules. Additionally, as many conventional sensing surfaces are built upon gold (SPR or QCM), these surfaces must be compatible with gold substrates. Here, we present the first stable method of generating low-density self-assembled monolayer surfaces on gold for the analysis of their interactions with protein targets. Our approach is based on the 2:1 addition of thiol-yne chemistry to develop new classes of Y-shaped adsorbates on gold, where the environment-interfacing group is spaced laterally from neighboring chemical groups. This technique involves the initial deposition of a crystalline monolayer of 1,10-decanedithiol on the gold substrate, followed by grafting of a loosely packed monolayer through a photoinitiated thiol-yne reaction in the presence of light. The orthogonality of the thiol-yne chemistry (commonly referred to as a click chemistry) allows for the preparation of low-density monolayers with a variety of functional groups. To date, carboxyl-, amine-, alcohol-, and alkyl-terminated monolayers have been prepared using this core technology. Results from surface characterization techniques such as FTIR, contact angle goniometry, and electrochemical impedance spectroscopy confirm the proposed low chain-chain interactions of the environment-interfacing groups. Reductive desorption measurements suggest a higher stability for the click-LDMs compared to traditional SAMs, along with an equivalent packing density at the substrate interface, which confirms the proposed stability of the monolayer-gold interface. In addition, contact angle measurements change in the presence of an applied potential, supporting our description of a surface structure that allows the alkyl chains to freely orient themselves in response to different environments. We are studying the differences in protein adsorption phenomena between well-packed and our loosely packed surfaces, and we expect this data will be ready to present at the GRC meeting. This work aims to contribute to biotechnology science in the following manner: molecularly imprinted polymers are a promising recognition mode with several advantages over natural antibodies in the recognition of small molecules. However, because of their bulk polymer structure, they are poorly suited for the rapid diffusion desired for the recognition of proteins and other macromolecules. Molecularly imprinted monolayers are an emerging class of materials where the surface is imprinted, and there is no bulk material to impede mass transfer. Further, the short distance between the binding site and the signal transduction material improves many modes of detection. My dissertation project is to develop a new chemistry for protein-imprinted self-assembled monolayers on gold, for incorporation into SPR sensors.
Our unique contribution is the spatial imprinting not only of physical cues (as seen in current imprinted monolayer techniques) but also of complementary chemical cues. This is accomplished through photo-click grafting of preassembled ligands around a protein template. This conference is important for my development as a graduate student, broadening my appreciation of sensor development beyond surface chemistry.
Keywords: low-density self-assembled monolayers, thiol-yne click reaction, molecular imprinting
Procedia PDF Downloads 227
10 Critical Factors for Successful Adoption of Land Value Capture Mechanisms – An Exploratory Study Applied to Indian Metro Rail Context
Authors: Anjula Negi, Sanjay Gupta
Abstract:
The paradigms studied point to inadequacies of financial resources, whether to finance metro rail construction, to meet operational revenue requirements, or to derive profits in the long term. Funding sustainability remains elusive for much-needed public transport modes, like urban rail or metro rails, to be successfully operated. India is embarking on a sustainable transport journey and has proposed metro rail systems countrywide. As an emerging economic leader, India faces paramount fiscal constraints, and the land value capture (LVC) mechanism provides necessary support and innovation toward development. India's metro rail policy promotes multiple methods of financing, including private-sector investments and public-private partnerships. The critical question that remains to be addressed is what factors can make such mechanisms work. Globally, urban rail is noted by many researchers as a revolution in future mobility. In this study, the researchers undertake a deep dive, by way of literature review and empirical assessment, into the factors that can lead to the adoption of LVC mechanisms. It is understood that the adoption of LVC methods is at a nascent stage in India. Research posits numerous challenges faced by metro rail agencies in raising funding and in incremental value capture. Issues pertaining to land-based financing include, inter alia, long-term financing, inter-institutional coordination, economic/market suitability, dedicated metro funds, land ownership issues, a piecemeal approach to real estate development, and property development legal frameworks. The question under investigation is which parameters can lead to success in the adoption of land value capture (LVC) as a financing mechanism. This research provides insights into key parameters crucial to the adoption of LVC in the context of Indian metro rails. The researchers have studied the current forms of LVC mechanisms at various metro rails of the country. This study is significant because little research applicable to the Indian context is available on the adoption of LVC. Transit agencies, state governments, urban local bodies, policy makers and think tanks, academia, developers, funders, researchers, and multilateral agencies may benefit from this research to take LVC mechanisms forward in practice. The study deems it imperative to explore and understand the key parameters that impact the adoption of LVC. An extensive literature review and ratification by experts working in the metro rail arena were undertaken to arrive at the parameters for the study. Stakeholder consultations were undertaken in the exploratory factor analysis (EFA) process for principal component extraction. Forty-three seasoned and specialized experts, representing various types of stakeholders, responded to a semi-structured questionnaire to scale each parameter for maximum-likelihood extraction. Empirical data were collected on the eighteen chosen parameters, and significant correlations were extracted for output descriptives and inferential statistics. Study findings reveal the principal components to be the institutional governance framework, spatial planning features, legal frameworks, funding sustainability features, and fiscal policy measures. In particular, funding sustainability features highlight the sub-variables of beneficiaries-pay and the use of multiple revenue options as contributing to success in LVC adoption. The researchers recommend incorporating these variables at an early stage in design and project structuring for success in the adoption of LVC.
This, in turn, leads to improvements in the revenue sustainability of a public transport asset and helps in making informed transport policy decisions.
Keywords: exploratory factor analysis, land value capture mechanism, financing metro rails, revenue sustainability, transport policy
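For readers unfamiliar with the exploratory factor analysis (EFA) workflow referenced above, the following Python sketch outlines one plausible way such an analysis could be run, assuming the factor_analyzer package, a hypothetical survey file (lvc_expert_survey.csv), and a five-factor maximum-likelihood solution with varimax rotation; the study's actual data, software, and extraction settings are not reproduced here.

```python
# A minimal EFA sketch: 43 experts rating 18 parameters on a Likert scale (hypothetical data).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

responses = pd.read_csv("lvc_expert_survey.csv")  # assumed shape: (43, 18)

# Sampling adequacy check before extraction.
_, kmo_total = calculate_kmo(responses)
print(f"KMO measure of sampling adequacy: {kmo_total:.2f}")

# Maximum-likelihood extraction with varimax rotation; five factors assumed to mirror the
# components reported above (governance, spatial planning, legal, funding, fiscal policy).
fa = FactorAnalyzer(n_factors=5, method="ml", rotation="varimax")
fa.fit(responses)

loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
print(loadings.round(2))
print("Proportional variance per factor:", fa.get_factor_variance()[1].round(2))
```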
Procedia PDF Downloads 84
9 The Integration of Digital Humanities into the Sociology of Knowledge Approach to Discourse Analysis
Authors: Gertraud Koch, Teresa Stumpf, Alejandra Tijerina García
Abstract:
Discourse analysis research approaches belong to the central research strategies applied throughout the humanities; they focus on the countless forms and ways digital texts and images shape present-day notions of the world. Despite the constantly growing number of relevant digital, multimodal discourse resources, digital humanities (DH) methods are thus far not systematically developed and accessible for discourse analysis approaches. Specifically, the significance of multimodality and meaning plurality modelling are yet to be sufficiently addressed. In order to address this research gap, the D-WISE project aims to develop a prototypical working environment as digital support for the sociology of knowledge approach to discourse analysis and new IT-analysis approaches for the use of context-oriented embedding representations. Playing an essential role throughout our research endeavor is the constant optimization of hermeneutical methodology in the use of (semi)automated processes and their corresponding epistemological reflection. Among the discourse analyses, the sociology of knowledge approach to discourse analysis is characterised by the reconstructive and accompanying research into the formation of knowledge systems in social negotiation processes. The approach analyses how dominant understandings of a phenomenon develop, i.e., the way they are expressed and consolidated by various actors in specific arenas of discourse until a specific understanding of the phenomenon and its socially accepted structure are established. This article presents insights and initial findings from D-WISE, a joint research project running since 2021 between the Institute of Anthropological Studies in Culture and History and the Language Technology Group of the Department of Informatics at the University of Hamburg. As an interdisciplinary team, we develop central innovations with regard to the availability of relevant DH applications by building up a uniform working environment, which supports the procedure of the sociology of knowledge approach to discourse analysis within open corpora and heterogeneous, multimodal data sources for researchers in the humanities. We are hereby expanding the existing range of DH methods by developing contextualized embeddings for improved modelling of the plurality of meaning and the integrated processing of multimodal data. The alignment of this methodological and technical innovation is based on the epistemological working methods according to grounded theory as a hermeneutic methodology. In order to systematically relate, compare, and reflect the approaches of structural-IT and hermeneutic-interpretative analysis, the discourse analysis is carried out both manually and digitally. Using the example of current discourses on digitization in the healthcare sector and the associated issues regarding data protection, we have manually built an initial data corpus of which the relevant actors and discourse positions are analysed in conventional qualitative discourse analysis. At the same time, we are building an extensive digital corpus on the same topic based on the use and further development of entity-centered research tools such as topic crawlers and automated newsreaders. In addition to the text material, this consists of multimodal sources such as images, video sequences, and apps. 
In a blended reading process, the data material is filtered, annotated, and finally coded with the help of NLP tools such as dependency parsing, named entity recognition, co-reference resolution, entity linking, sentiment analysis, and other project-specific tools that are being adapted and developed. The coding process is carried out (semi-)automatically by programs that propose coding paradigms based on the calculated entities and their relationships. Simultaneously, these can be specifically trained by manual coding in a close reading process and specified according to the content issues. Overall, this approach enables purely qualitative, fully automated, and semi-automated analyses to be compared and reflected upon.
Keywords: entanglement of structural IT and hermeneutic-interpretative analysis, multimodality, plurality of meaning, sociology of knowledge approach to discourse analysis
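To make the NLP steps mentioned above concrete, the sketch below shows how a single sentence might be passed through named entity recognition and dependency parsing using spaCy as one possible toolkit; the model name and example sentence are illustrative assumptions, not the D-WISE project's actual pipeline.

```python
# A minimal sketch of blended-reading pre-processing: NER and dependency parsing with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")  # hypothetical choice of language model

doc = nlp("The health ministry proposed new data protection rules for digital patient records.")

# Named entity recognition: candidate actors and institutions for later manual coding.
for ent in doc.ents:
    print(ent.text, ent.label_)

# Dependency parsing: grammatical relations that automated coding proposals could build on.
for token in doc:
    print(f"{token.text:<12} {token.dep_:<10} head={token.head.text}")
```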
Procedia PDF Downloads 228
8 Design and Construction of a Solar Dehydration System as a Technological Strategy for Food Sustainability in Difficult-to-Access Territories
Authors: Erika T. Fajardo-Ariza, Luis A. Castillo-Sanabria, Andrea Nieto-Veloza, Carlos M. Zuluaga-Domínguez
Abstract:
The growing emphasis on sustainable food production and preservation has driven the development of innovative solutions to minimize postharvest losses and improve market access for small-scale farmers. This project focuses on designing, constructing, and selecting materials for solar dryers in regions of Colombia where inadequate infrastructure limits access to major commercial hubs. Postharvest losses pose a significant challenge, impacting food security and farmer income. Addressing these losses is crucial for enhancing the value of agricultural products and supporting local economies. A comprehensive survey of local farmers revealed substantial challenges, including limited market access, inefficient transportation, and significant postharvest losses. For crops such as coffee, bananas, and citrus fruits, losses range from 0% to 50%, driven by factors like labor shortages, adverse climatic conditions, and transportation difficulties. To address these issues, the project prioritized selecting effective materials for the solar dryer. Various materials (recovered acrylic, original acrylic, glass, and polystyrene) were tested for their performance. The tests showed that recovered acrylic and glass were most effective in increasing the temperature difference between the interior and the external environment. The solar dryer was designed using Fusion 360® software (Autodesk, USA) and adhered to architectural guidelines from Architectural Graphic Standards. It features up to sixteen aluminum trays, each with a maximum load capacity of 3.5 kg, arranged on two levels to optimize drying efficiency. The constructed dryer was then tested with two locally available plant materials: green plantains (Musa paradisiaca L.) and snack bananas (Musa AA Simmonds). To monitor performance, thermo-hygrometers and an Arduino system recorded internal and external temperature and humidity at one-minute intervals. Despite challenges such as adverse weather conditions and delays in local government funding, the active involvement of local producers was a significant advantage, fostering ownership and understanding of the project. The solar dryer operated under conditions of 31°C dry bulb temperature (Tbs), 55% relative humidity, and 21°C wet bulb temperature (Tbh). The drying curves showed a constant-rate drying period, with the critical moisture content observed between 200 and 300 minutes, followed by a sharp decrease in the rate of moisture loss, reaching an equilibrium point after 3,400 minutes. Although the solar dryer requires more time and is highly dependent on atmospheric conditions, it can approach the efficiency of an electric dryer when properly optimized. The successful design and construction of solar dryer systems in difficult-to-access areas represent a significant advancement in agricultural sustainability and postharvest loss reduction. By choosing effective materials such as recovered acrylic and implementing a carefully planned design, the project provides a valuable tool for local farmers. The initiative not only improves the quality and marketability of agricultural products but also offers broader environmental benefits, such as reduced reliance on fossil fuels and decreased waste. Additionally, it supports economic growth by enhancing the value of crops and potentially increasing farmer income.
The successful implementation and testing of the dryer, combined with the engagement of local stakeholders, highlight its potential for replication and positive impact in similar contexts.
Keywords: drying technology, postharvest loss reduction, solar dryers, sustainable agriculture
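As a hedged illustration of how the logged data could be turned into a drying curve like the one described above, the Python sketch below computes a dry-basis moisture content and moisture ratio from a hypothetical logger file; the file name, column names, dry mass, and equilibrium threshold are assumptions and do not reproduce the project's actual measurements.

```python
# A minimal drying-curve sketch from hypothetical logger output (time_min, tray_mass_g).
import pandas as pd

log = pd.read_csv("solar_dryer_log.csv")     # assumed columns: time_min, tray_mass_g

dry_mass_g = 150.0                           # assumed bone-dry mass of the sample
log["moisture_db"] = (log["tray_mass_g"] - dry_mass_g) / dry_mass_g   # dry-basis moisture
m0, me = log["moisture_db"].iloc[0], log["moisture_db"].iloc[-1]
log["moisture_ratio"] = (log["moisture_db"] - me) / (m0 - me)

# Approximate time to equilibrium: first time the moisture ratio falls below 2%.
at_equilibrium = log[log["moisture_ratio"] < 0.02]
print("Equilibrium reached after ~", at_equilibrium["time_min"].iloc[0], "minutes")
```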
Procedia PDF Downloads 33
7 Open Science Philosophy, Research and Innovation
Authors: C. Ardil
Abstract:
Open Science translates the understanding and application of various theories and practices in open science philosophy, systems, paradigms, and epistemology. Open Science originates with the premise that universal scientific knowledge is a product of collective scholarly and social collaboration involving all stakeholders and that knowledge belongs to global society. Scientific outputs generated by public research are a public good that should be available to all at no cost and without barriers or restrictions. Open Science has the potential to increase the quality, impact, and benefits of science and to accelerate the advancement of knowledge by making it more reliable, more efficient and accurate, more understandable by society, and more responsive to societal challenges; it also has the potential to enable growth and innovation through the reuse of scientific results by all stakeholders at all levels of society and, ultimately, to contribute to the growth and competitiveness of global society. Open Science is a global movement to improve the accessibility and reusability of research practices and outputs. In its broadest definition, it encompasses open access to publications, open research data and methods, open source, open educational resources, open evaluation, and citizen science. The implementation of open science provides an excellent opportunity to renegotiate the social roles and responsibilities of publicly funded research and to rethink the science system as a whole. Open Science is the practice of science in such a way that others can collaborate and contribute, where research data, lab notes, and other research processes are freely available, under terms that enable reuse, redistribution, and reproduction of the research and its underlying data and methods. Open Science represents a novel systematic approach to the scientific process, shifting from the standard practice of publishing research results in scientific publications toward sharing and using all available knowledge at an earlier stage in the research process, based on cooperative work and the diffusion of scholarly knowledge with no barriers and restrictions. Open Science refers to efforts to make the primary outputs of publicly funded research (publications and research data) publicly accessible in digital format with no limitations. Open Science is about extending the principles of openness to the whole research cycle, fostering sharing and collaboration as early as possible, thus entailing a systemic change to the way science and research are done. Open Science is the ongoing transition in how open research is carried out, disseminated, deployed, and transformed to make scholarly research more open, global, collaborative, creative, and closer to society. Open Science involves various movements aiming to remove the barriers to sharing any kind of output, resource, method, or tool at any stage of the research process. Open Science embraces open access to publications, research data, source software, collaboration, peer review, notebooks, educational resources, monographs, citizen science, and research crowdfunding. The recognition and adoption of open science practices, including open science policies that increase open access to scientific literature and encourage data and code sharing, are increasing in the open science philosophy.
Revolutionary open science policies are motivated by ethical, moral, or utilitarian arguments, such as the right to access the digital research literature, open source research, the accumulation of science data, research indicators, transparency in academic practice, and reproducibility. Open science philosophy is adopted primarily to demonstrate the benefits of open science practices. Researchers use open science applications to their own advantage in order to receive more offers, increase citations, and attract media attention, potential collaborators, career opportunities, donations, and funding. In open science philosophy, open data findings are evidence that open science practices provide significant benefits to researchers in scientific research creation, collaboration, communication, and evaluation compared with more traditional closed science practices. Open science also considers concerns such as the rigor of peer review, common research facets such as financing and career development, and the sacrifice of author rights. Therefore, researchers are recommended to implement open science research within the framework of existing academic evaluation and incentives. As a result, open science research issues are addressed in the areas of publishing, financing, collaboration, resource management and sharing, career development, and the discussion of open science questions and conclusions.
Keywords: Open Science, Open Science Philosophy, Open Science Research, Open Science Data
Procedia PDF Downloads 133
6 Modeling the Human Harbor: An Equity Project in New York City, New York USA
Authors: Lauren B. Birney
Abstract:
The envisioned long-term outcome of this three-year research and implementation plan is for 1) teachers and students to design and build their own computational models of real-world environmental-human health phenomena occurring within the context of the "Human Harbor" and 2) project researchers to evaluate the degree to which these integrated Computer Science (CS) education experiences in New York City (NYC) public school classrooms (PreK-12) impact students' computational-technical skill development, job readiness, career motivations, and measurable abilities to understand, articulate, and solve the underlying phenomena at the center of their models. This effort builds on the partnership's successes over the past eight years in developing a benchmark model of restoration-based Science, Technology, Engineering, and Math (STEM) education for urban public schools and achieving relatively broad-based implementation in the nation's largest public school system. The Billion Oyster Project Curriculum and Community Enterprise for Restoration Science (BOP-CCERS STEM + Computing) curriculum, teacher professional development, and community engagement programs have reached more than 200 educators and 11,000 students at 124 schools, with 84 waterfront locations and Out-of-School Time (OST) programs. The BOP-CCERS Partnership is poised to develop a more refined focus on integrating computer science across the STEM domains; teaching industry-aligned computational methods and tools; and explicitly preparing students from the city's most under-resourced and underrepresented communities for upwardly mobile careers in NYC's ever-expanding "digital economy," in which jobs require computational thinking and an increasing percentage require discrete computer science technical skills. Project objectives include the following: 1. Computational Thinking (CT) Integration: integrate computational thinking core practices across the existing middle/high school BOP-CCERS STEM curriculum as a means of scaffolding toward long-term computer science and computational modeling outcomes. 2. Data Science and Data Analytics: enable researchers to perform interviews with teachers, students, community members, partners, stakeholders, and Science, Technology, Engineering, and Mathematics (STEM) industry professionals; collaborative analysis and data collection were also performed. As a centerpiece, the BOP-CCERS partnership will expand to include a dedicated computer science education partner: the New York City Department of Education's (NYCDOE) Computer Science for All (CS4ALL) NYC will serve as the dedicated Computer Science (CS) lead, advising the consortium on integration and curriculum development and working in tandem with the partnership. The BOP-CCERS Model™ also validates that, with the appropriate application of technical infrastructure, intensive teacher professional development, and curricular scaffolding, socially connected science learning can be mainstreamed in the nation's largest urban public school system. This is evidenced and substantiated in the initial phases of BOP-CCERS™. The BOP-CCERS™ student curriculum and teacher professional development have been implemented in approximately 24% of NYC public middle schools, reaching more than 250 educators and 11,000 students directly. BOP-CCERS™ is a fully scalable and transferable educational model, adaptable to all American school districts.
In all settings of the proposed Phase IV initiative, the primary beneficiary group will be underrepresented NYC public school students who live in high-poverty neighborhoods and are traditionally underrepresented in the STEM fields, including African Americans, Latinos, English language learners, and children from economically disadvantaged households. In particular, BOP-CCERS Phase IV will explicitly prepare underrepresented students for skilled positions within New York City's expanding digital economy, computer science, computational information systems, and innovative technology sectors.
Keywords: computer science, data science, equity, diversity and inclusion, STEM education
Procedia PDF Downloads 59
5 A Comprehensive Study of Spread Models of Wildland Fires
Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the areas impacted. This research explores different forest fire spread models and presents a comprehensive review of the various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors like weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research explores the approaches, assumptions, and findings derived from various models. Using a comparative approach, a critical analysis is provided by identifying patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights provided by synthesizing established information. Fire spread models provide insights into potential fire behavior, enabling authorities to make informed decisions about evacuation activities, allocating resources for firefighting efforts, and planning preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies, as they help in assessing the fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and advances our understanding of the way forest fires spread. Some of the known models in this field are Rothermel's wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, cellular automata models, and others. The key characteristics that these models consider include weather (factors such as wind speed and direction), topography (factors such as landscape elevation), and fuel availability (factors such as vegetation type), among others. The models discussed are physics-based, data-driven, or hybrid models, some also utilizing ML techniques like attention-based neural networks to enhance model performance. In order to lessen the destructive effects of forest fires, this initiative aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action.
Emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling
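Among the model families listed above, cellular automata are the simplest to illustrate. The Python sketch below runs a toy stochastic cellular-automaton fire spread on a small grid with a wind-biased spread probability; the grid size, ignition point, and probabilities are invented for demonstration and do not correspond to any specific model in the survey.

```python
# A toy cellular-automaton fire-spread sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
UNBURNED, BURNING, BURNED = 0, 1, 2
grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = BURNING                         # assumed ignition point

# Spread probability per neighbor direction, biased to mimic a wind blowing toward +y.
p_spread = {(-1, 0): 0.3, (1, 0): 0.3, (0, -1): 0.2, (0, 1): 0.5}

for step in range(60):
    new_grid = grid.copy()
    for r, c in np.argwhere(grid == BURNING):
        for (dr, dc), p in p_spread.items():
            rr, cc = r + dr, c + dc
            if 0 <= rr < 50 and 0 <= cc < 50 and grid[rr, cc] == UNBURNED:
                if rng.random() < p:
                    new_grid[rr, cc] = BURNING
        new_grid[r, c] = BURNED                # a burning cell burns out after one step
    grid = new_grid

print("Cells burned:", int(np.sum(grid == BURNED)))
```

Real operational models such as FARSITE or Rothermel-based systems replace these fixed probabilities with physically derived spread rates driven by fuel, slope, and weather inputs.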
Procedia PDF Downloads 82
4 Recent Developments in E-waste Management in India
Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhay, Ananya Mukhopadhyay, Harendra Nath Bhattacharya
Abstract:
This study investigates the global issue of electronic waste (e-waste), focusing on its prevalence in India and other regions. E-waste has emerged as a significant worldwide problem, with India contributing a substantial share of annual e-waste generation. The primary sources of e-waste in India are computer equipment and mobile phones. Many developed nations utilize India as a dumping ground for their e-waste, with major contributions from the United States, China, Europe, Taiwan, South Korea, and Japan. The study identifies Maharashtra, Tamil Nadu, Mumbai, and Delhi as prominent contributors to India's e-waste crisis. This issue is contextualized within the broader framework of the United Nations' 2030 Agenda for Sustainable Development, which encompasses 17 Sustainable Development Goals (SDGs) and 169 associated targets to address poverty, environmental preservation, and universal prosperity. The study underscores the interconnectedness of e-waste management with several SDGs, including health, clean water, economic growth, sustainable cities, responsible consumption, and ocean conservation. Central Pollution Control Board (CPCB) data reveals that e-waste generation surpasses that of plastic waste, increasing annually at a rate of 31%. However, only 20% of electronic waste is recycled through organized and regulated methods in underdeveloped nations. In Europe, efficient e-waste management stands at just 35%. E-waste pollution poses serious threats to soil, groundwater, and public health due to toxic components such as mercury, lead, bromine, and arsenic. Long-term exposure to these toxins, notably arsenic in microchips, has been linked to severe health issues, including cancer, neurological damage, and skin disorders. Lead exposure, particularly concerning for children, can result in brain damage, kidney problems, and blood disorders. The study highlights the problematic transboundary movement of e-waste, with approximately 352,474 metric tonnes of electronic waste illegally shipped from Europe to developing nations annually, mainly to Africa, including Nigeria, Ghana, and Tanzania. Effective e-waste management, underpinned by appropriate infrastructure, regulations, and policies, offers opportunities for job creation and aligns with the objectives of the 2030 Agenda for SDGs, especially in the realms of decent work, economic growth, and responsible production and consumption. E-waste represents both hazardous pollutants and valuable secondary resources, making it a focal point for anthropogenic resource exploitation. The United Nations estimates that e-waste holds potential secondary raw materials worth around 55 billion Euros. The study also identifies numerous challenges in e-waste management, encompassing the sheer volume of e-waste, child labor, inadequate legislation, insufficient infrastructure, health concerns, lack of incentive schemes, limited awareness, e-waste imports, high costs associated with recycling plant establishment, and more. To mitigate these issues, the study offers several solutions, such as providing tax incentives for scrap dealers, implementing reward and reprimand systems for e-waste management compliance, offering training on e-waste handling, promoting responsible e-waste disposal, advancing recycling technologies, regulating e-waste imports, and ensuring the safe disposal of domestic e-waste. One proposed mechanism, buy-back programs, would compensate customers in cash when they deposit unwanted digital products.
This e-waste could contain any portable electronic device, such as cell phones, computers, tablets, etc. Addressing the e-waste predicament necessitates a multi-faceted approach involving government regulations, industry initiatives, public awareness campaigns, and international cooperation to minimize environmental and health repercussions while harnessing the economic potential of recycling and responsible management.
Keywords: e-waste management, sustainable development goal, e-waste disposal, recycling technology, buy-back policy
Procedia PDF Downloads 88
3 A Study on the Use Intention of Smart Phone
Authors: Zhi-Zhong Chen, Jun-Hao Lu, Jr., Shih-Ying Chueh
Abstract:
Based on the Unified Theory of Acceptance and Use of Technology (UTAUT), this study investigates people's intention to use smartphones. The study additionally incorporates two new variables: 'self-efficacy' and 'attitude toward using'. Data were collected by questionnaire survey, of which 240 responses were valid. After correlation analysis, reliability testing, ANOVA, t-tests, and multiple regression analysis, the study finds that social impact and self-efficacy have a positive effect on use intention, and that use intention in turn has a positive effect on use behavior.
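A minimal sketch of the multiple regression step described in this abstract is given below, assuming a hypothetical data file (smartphone_survey.csv) with columns named after the study's constructs and using statsmodels; the actual questionnaire data and measurement scales are not reproduced here.

```python
# Illustrative multiple regression of use intention on UTAUT-style constructs.
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("smartphone_survey.csv")  # hypothetical file with 240 valid responses

# Use intention as a function of social impact, self-efficacy, and attitude toward using.
intention_model = smf.ols(
    "use_intention ~ social_impact + self_efficacy + attitude_toward_using",
    data=survey,
).fit()
print(intention_model.summary())

# Use behavior as a function of use intention.
behavior_model = smf.ols("use_behavior ~ use_intention", data=survey).fit()
print(behavior_model.summary())
```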
Procedia PDF Downloads 411
2 “MaxSALIVA-II” Advancing a Nano-Sized Dual-Drug Delivery System for Salivary Gland Radioprotection, Regeneration and Repair in a Head and Neck Cancer Pre-Clinical Murine Model
Authors: Ziyad S. Haidar
Abstract:
Background: Saliva plays a major role in maintaining oral, dental, and general health and well-being, as it normally bathes the oral cavity and acts as a clearing agent. This becomes more apparent when the amount and quality of saliva are significantly reduced due to medications, salivary gland neoplasms, disorders such as Sjögren's syndrome, and especially ionizing radiation therapy for tumors of the head and neck, the 5th most common malignancy worldwide, during which the salivary glands are included within the radiation field/zone. Clinically, patients affected by salivary gland dysfunction often opt to terminate their radiotherapy course prematurely as they become malnourished and experience a significant decrease in their quality of life (QoL). Accordingly, the formulation of a radio-protection/-prevention modality and the development of an alternative treatment to restore damaged salivary gland tissue are eagerly awaited and highly desirable. Objectives: Assess the pre-clinical radio-protective effect and reparative/regenerative potential of layer-by-layer self-assembled lipid-polymer-based core-shell nanocapsules designed and fine-tuned for the sequential (ordered) release of dual cytokines, following a single local administration (direct injection) into a murine sub-mandibular salivary gland model of irradiation. Methods: The formulated core-shell nanocapsules were characterized physically, chemically, and mechanically, pre- and post-loading with the drugs, followed by optimization of the pharmacokinetic profile. Nanosuspensions were then administered directly into the salivary glands 24 hrs pre-irradiation (PBS, un-loaded nanocapsules, and individual and combined vehicle-free cytokines were injected into the control glands for an in-depth comparative analysis). The head-and-neck region of C57BL/6 mice was exposed to external irradiation at an elevated dose of 18 Gy. Salivary flow rate (un-stimulated) and salivary protein content/excretion were regularly assessed using an enzyme-linked immunosorbent assay over a 3-month period. Histological and histomorphometric evaluation and apoptosis/proliferation analysis, followed by local versus systemic bio-distribution and immuno-histochemical assays, were then performed on all harvested major organs at the distinct experimental end-points. Results: Monodisperse, stable, and cytocompatible nanocapsules, capable of maintaining the bioactivity of the encapsulant within the different compartments of the core and shell and with controlled/customizable pharmacokinetics, were obtained, as illustrated in the graphical abstract. The experimental animals demonstrated a significant increase in salivary flow rates when compared to the controls. Herein, salivary protein content was comparable to the pre-irradiation (baseline) level. Histomorphometry further confirmed the biocompatibility of the nanocapsules and their localization, in vivo, at the site of injection. Acinar cells showed fewer vacuoles and nuclear aberrations in the experimental group, while the amount of mucin was higher in controls. Overall, fewer apoptotic events were detected by a Terminal deoxynucleotidyl Transferase (TdT) dUTP Nick-End Labeling (TUNEL) assay, and proliferative rates were similar to the controls, suggesting an interesting reparative and regenerative potential for irradiation-damaged/-dysfunctional salivary glands.
Conclusions: A biocompatible, reproducible, and customizable self-assembling layer-by-layer core-shell delivery system is formulated and presented. Our findings suggest that localized sequential bioactive delivery of dual cytokines (in a specific dose and order) can prevent irradiation-induced damage by reducing apoptosis and also has the potential to promote in situ proliferation of salivary gland cells; maxSALIVA is scalable (Good Manufacturing Practice or GMP production for human clinical trials) and patent-pending.
Keywords: cancer, head and neck, oncology, drug development, drug delivery systems, nanotechnology, nanoncology
Procedia PDF Downloads 80
1 Detailed Degradation-Based Model for Solid Oxide Fuel Cells Long-Term Performance
Authors: Mina Naeini, Thomas A. Adams II
Abstract:
Solid Oxide Fuel Cells (SOFCs) feature high electrical efficiency and generate substantial amounts of waste heat, which makes them suitable for integrated community energy systems (ICEs). By harvesting and distributing the waste heat through hot water pipelines, SOFCs can meet the thermal demand of communities. Therefore, they can replace traditional gas boilers and reduce greenhouse gas (GHG) emissions. Despite these advantages of SOFCs over competing power generation units, this technology has not been successfully commercialized at large scale to replace traditional generators in ICEs. One reason is that SOFC performance deteriorates over long-term operation, which makes it difficult to find the proper sizing of the cells for a particular ICE system. In order to find the optimal sizing and operating conditions of SOFCs in a community, proper knowledge of degradation mechanisms and of the effects of operating conditions on SOFCs' long-term performance is required. The simplified SOFC models that exist in the current literature usually do not provide realistic results, since they tend to underestimate the rate of performance drop by making too many assumptions or generalizations. In addition, some of these models have been obtained from experimental data by curve-fitting methods. Although these models are valid for the range of operating conditions in which the experiments were conducted, they cannot be generalized to other conditions and so have limited use for most ICEs. In the present study, a general, detailed degradation-based model is proposed that predicts the performance of conventional SOFCs over a long period of time at different operating conditions. Conventional SOFCs are composed of yttria-stabilized zirconia (YSZ) as the electrolyte, Ni-cermet anodes, and La₁₋ₓSrₓMnO₃ (LSM) cathodes. The following degradation processes are considered in this model: oxidation and coarsening of nickel particles in the Ni-cermet anodes, changes in the anode pore radius, degradation of the electrolyte and anode electrical conductivities, and sulfur poisoning of the anode compartment. This model helps decision makers discover the optimal sizing and operation of the cells for stable, efficient performance with the fewest assumptions, and it is suitable for a wide variety of applications. Sulfur contamination of the anode compartment is an important cause of performance drop in cells supplied with hydrocarbon-based fuel sources. H₂S, which is often added to hydrocarbon fuels as an odorant, can diminish the catalytic behavior of Ni-based anodes by lowering their electrochemical activity and hydrocarbon conversion properties. Therefore, the existing models in the literature for H₂-supplied SOFCs cannot be applied to hydrocarbon-fueled SOFCs, as they only account for the reduction in electrochemical activity. A regression model is developed in the current work for sulfur contamination of SOFCs fed with hydrocarbon fuel sources. The model is developed as a function of current density and H₂S concentration in the fuel. To the best of the authors' knowledge, it is the first model that accounts for the impact of current density on sulfur poisoning of cells supplied with hydrocarbon-based fuels. The proposed model has wide validity over a range of parameters and is consistent across multiple studies by different independent groups. Simulations using the degradation-based model illustrated that SOFC voltage drops significantly in the first 1500 hours of operation. After that, the cells exhibit a slower degradation rate.
The present analysis allowed us to discover the reason for the various degradation rate values reported in the literature for conventional SOFCs: the literature is inconsistent in how the degradation rate is defined and calculated. Typically, the degradation rate has been calculated as the slope of the voltage-versus-time plot, expressed as the percentage voltage drop per 1000 hours of operation. Due to the nonlinear profile of voltage over time, the degradation rate magnitude depends on the size of the time interval selected to calculate the curve's slope. To avoid this issue, the instantaneous rate of performance drop is used in the present work. According to a sensitivity analysis, current density has the highest impact on degradation rate compared to other operating factors, while temperature and hydrogen partial pressure affect SOFC performance less. The findings demonstrated that a cell running at a lower current density performs better in the long term in terms of total average energy delivered per year, even though initially it generates less power than it would at a higher current density. This is because of the dominant and devastating impact of large current densities on the long-term performance of SOFCs, as explained by the model.
Keywords: degradation rate, long-term performance, optimal operation, solid oxide fuel cells, SOFCs
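To illustrate the definitional point made above, the Python sketch below contrasts the interval-based degradation rate (percentage voltage drop per 1000 hours) with an instantaneous rate on a synthetic, nonlinear voltage profile; the profile and numbers are invented for demonstration and are not the authors' model.

```python
# Interval-based vs. instantaneous degradation rate on a synthetic voltage decay curve.
import numpy as np

t = np.linspace(0, 10000, 1001)                    # operating time, hours
v = 0.80 - 0.04 * (1 - np.exp(-t / 1500.0))        # hypothetical nonlinear cell voltage (V)

def interval_rate(t, v, t_start, t_end):
    """Percent voltage drop per 1000 h over a chosen interval (the literature convention)."""
    i0, i1 = np.searchsorted(t, [t_start, t_end])
    return (v[i0] - v[i1]) / v[i0] * 100 / ((t[i1] - t[i0]) / 1000.0)

# The same cell yields very different "degradation rates" depending on the interval chosen.
print(interval_rate(t, v, 0, 1500))      # steep early interval
print(interval_rate(t, v, 5000, 10000))  # flat late interval

# Instantaneous rate of performance drop (% of current voltage per 1000 h) at each time step.
instantaneous = -np.gradient(v, t) / v * 100 * 1000.0
print(instantaneous[150], instantaneous[-1])       # early vs. late in life
```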
Procedia PDF Downloads 133