35 Prospects of Acellular Organ Scaffolds for Drug Discovery
Authors: Inna Kornienko, Svetlana Guryeva, Natalia Danilova, Elena Petersen
Abstract:
Drug toxicity often goes undetected until clinical trials, the most expensive and dangerous phase of drug development. Both human cell culture and animal studies have limitations that cannot be overcome by improvements in drug testing protocols. Tissue engineering is an emerging alternative approach to creating models of human malignant tumors for experimental oncology, personalized medicine, and drug discovery studies. This new generation of bioengineered tumors provides an opportunity to control and explore the role of every component of the model system, including cell populations, supportive scaffolds, and signaling molecules. An area that could greatly benefit from these models is cancer research. Recent advances in tissue engineering have demonstrated that decellularized tissue is an excellent scaffold for tissue engineering. Decellularization of donor organs such as heart, liver, and lung can provide an acellular, naturally occurring three-dimensional biologic scaffold material that can then be seeded with selected cell populations. Preliminary studies in animal models have provided encouraging proof-of-concept results. Decellularized organs preserve the organ microenvironment, which is critical for cancer metastasis. Utilizing 3D tumor models results in greater proximity of the cell culture’s morphological characteristics to those of its in vivo counterpart and allows more accurate simulation of the processes within a functioning tumor and of its pathogenesis. 3D models also allow the study of migration processes and cell proliferation with higher reliability. Moreover, cancer cells in a 3D model more closely resemble cells under in vivo conditions in terms of gene expression, cell surface receptor expression, and signaling. 2D cell monolayers do not provide the geometrical and mechanical cues of tissues in vivo and are, therefore, not suitable for accurately predicting the responses of living organisms.
3D models can provide several levels of complexity, from simple monocultures of cancer cell lines in a liquid environment, with oxygen and nutrient gradients and cell-cell interaction, to more advanced models that include co-culturing with other cell types, such as endothelial and immune cells. Following this reasoning, spheroids cultivated from one or multiple patient-derived cell lines can be used to seed the matrix rather than monolayer cells. This approach furthers the progress towards personalized medicine. As an initial step to create a new ex vivo tissue-engineered model of a cancer tumor, optimized protocols have been designed to obtain organ-specific acellular matrices and evaluate their potential as tissue-engineered scaffolds for cultures of normal and tumor cells. Decellularized biomatrix was prepared from animal kidneys, urethra, lungs, heart, and liver by two decellularization methods: perfusion in a bioreactor system and immersion-agitation on an orbital shaker, with the use of various detergents (SDS, Triton X-100) in different concentrations and freezing. Acellular scaffolds and tissue-engineered constructs have been characterized and compared using morphological methods. Models using decellularized matrix have certain advantages, such as maintaining native extracellular matrix properties and a biomimetic microenvironment for cancer cells; compatibility with multiple cell types for cell culture and drug screening; and utilization to culture patient-derived cells in vitro to evaluate different anticancer therapeutics for developing personalized medicines.
Keywords: 3D models, decellularization, drug discovery, drug toxicity, scaffolds, spheroids, tissue engineering
Procedia PDF Downloads 300
34 Settings of Conditions Leading to Reproducible and Robust Biofilm Formation in vitro in Evaluation of Drug Activity against Staphylococcal Biofilms
Authors: Adela Diepoltova, Klara Konecna, Ondrej Jandourek, Petr Nachtigal
Abstract:
A loss of control over antibiotic-resistant pathogens has become a global issue due to severe and often untreatable infections. This state is reflected in complicated treatment, health costs, and higher mortality. All these factors emphasize the urgent need for the discovery and development of new anti-infectives. Among the most common pathogens implicated in antibiotic resistance are bacteria of the genus Staphylococcus. These bacterial agents have developed several mechanisms against the effect of antibiotics. One of them is biofilm formation. In staphylococci, biofilms are associated with infections such as endocarditis, osteomyelitis, catheter-related bloodstream infections, etc. To the authors’ best knowledge, no validated and standardized methodology for evaluating candidate compound activity against staphylococcal biofilms exists. A variety of protocols for in vitro drug activity testing have been suggested, yet there are often fundamental differences among them. Based on our experience, a key methodological step that leads to credible results is to form a robust biofilm with appropriate attributes, such as firm adherence to the substrate, a complex arrangement in layers, and the presence of an extracellular polysaccharide matrix. At first, for the purpose of evaluating drug antibiofilm activity, the focus was put on the various conditions (supplementation of cultivation media with human plasma/fetal bovine serum, shaking mode, density of the initial inoculum) that should lead to reproducible and robust in vitro staphylococcal biofilm formation in a microtiter plate model. Three model staphylococcal reference strains were included in the study: Staphylococcus aureus (ATCC 29213), methicillin-resistant Staphylococcus aureus (ATCC 43300), and Staphylococcus epidermidis (ATCC 35983). The total biofilm biomass was quantified using the Christensen method with crystal violet, and results obtained from at least three independent experiments were statistically processed.
Attention was also paid to the viability of the biofilm-forming staphylococcal cells and the presence of extracellular polysaccharide matrix. The conditions that led to robust biofilm biomass formation with the attributes mentioned above were then applied by introducing an alternative method analogous to the commercially available test system, the Calgary Biofilm Device. In this test system, biofilms are formed on pegs incorporated into the lid of the microtiter plate. This system provides several advantages, including in situ detection and quantification of biofilm microbial cells that have retained their viability after drug exposure. Based on our preliminary studies, it was found that attention should also be paid to the peg surface and the substrate on which the bacterial biofilms are formed. Therefore, further optimization steps were introduced. The surface of the pegs was coated with human plasma, fetal bovine serum, or poly-L-lysine. Subsequently, the willingness of bacteria to adhere and form biofilm was monitored. In conclusion, suitable conditions were revealed that lead to the formation of reproducible, robust staphylococcal biofilms in vitro for the microtiter plate model as well as for the system analogous to the Calgary Biofilm Device. The robustness and typical slime texture could be detected visually. Likewise, analysis by confocal laser scanning microscopy revealed a complex three-dimensional arrangement of biofilm-forming organisms surrounded by an extracellular polysaccharide matrix.
Keywords: anti-biofilm drug activity screening, in vitro biofilm formation, microtiter plate model, the Calgary biofilm device, staphylococcal infections, substrate modification, surface coating
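The crystal-violet readings described above are usually interpreted against a negative-control cutoff. As a hedged illustration (the abstract does not state the authors’ exact scheme; the widely used Stepanović categories are assumed here, with cutoff ODc = blank mean + 3 SD), the biomass classification could be sketched as:

```python
import numpy as np

def classify_biofilm(od_samples, od_blanks):
    """Classify mean crystal-violet absorbance against the negative-control
    cutoff ODc = mean(blank) + 3*SD(blank) (Stepanovic-style categories)."""
    odc = od_blanks.mean() + 3 * od_blanks.std(ddof=1)  # cutoff OD
    od = od_samples.mean()                              # mean of replicate wells
    if od <= odc:
        return "non-producer"
    if od <= 2 * odc:
        return "weak"
    if od <= 4 * odc:
        return "moderate"
    return "strong"
```

In practice such readings would come from at least three independent experiments, as in the study, with the classification applied per strain and condition.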
Procedia PDF Downloads 155
33 Integrating Experiential Real-World Learning in Undergraduate Degrees: Maximizing Benefits and Overcoming Challenges
Authors: Anne E. Goodenough
Abstract:
One of the most important roles of higher education professionals is to ensure that graduates have excellent employment prospects. This means providing students with the skills necessary to be immediately effective in the workplace. Increasingly, universities are seeking to achieve this by moving from lecture-based and campus-delivered curricula to more varied delivery, which takes students out of their academic comfort zone and allows them to engage with, and be challenged by, real-world issues. One popular approach is the integration of problem-based learning (PBL) projects into curricula. However, although the potential benefits of PBL are considerable, it can be difficult to devise projects that are meaningful; otherwise, they risk being regarded as mere ‘hoop jumping’ exercises. This study examines three-way partnerships between academics, students, and external link organizations. It studied the experiences of all partners involved in different collaborative projects to identify how benefits can be maximized and challenges overcome. Focal collaborations included: (1) development of real-world modules with novel assessment, whereby the organization became the ‘client’ for student consultancy work; (2) frameworks where students collected/analyzed data for link organizations in research methods modules; (3) placement-based internships and dissertations; (4) immersive fieldwork projects in novel locations; and (5) students working as partners on staff-led research with link organizations. Focus groups, questionnaires, and semi-structured interviews were used to identify opportunities and barriers, while quantitative analysis of students’ grades was used to determine academic effectiveness. Common challenges identified by academics were finding suitable link organizations and devising projects that simultaneously provided education opportunities and tangible benefits.
There was no ‘one size fits all’ formula for success, but careful planning and ensuring clarity of roles/responsibilities were vital. Students were very positive about collaboration projects. They identified benefits to confidence, time-keeping, and communication, as well as conveying their enthusiasm when their work was of benefit to the wider community. They frequently highlighted the employability opportunities that collaborative projects opened up, and analysis of grades demonstrated the potential for such projects to increase attainment. Organizations generally recognized the value of project outputs but often required considerable assistance to put the right scaffolding in place to ensure projects worked. Benefits were maximized by ensuring projects were well-designed, innovative, and challenging. Co-publication of projects in peer-reviewed journals sometimes gave additional benefits for all involved, being especially beneficial for students’ curricula vitae. PBL and student projects are by no means new pedagogic approaches: the novelty here came from creating meaningful three-way partnerships between academics, students, and link organizations at all undergraduate levels. Such collaborations can allow students to make a genuine contribution to knowledge, answer real questions, and solve actual problems, all while providing tangible benefits to organizations. Because the projects are actually needed, students tend to engage with learning at a deep level. This enhances student experience, increases attainment, encourages development of subject-specific and transferable skills, and promotes networking opportunities.
Such projects frequently rely upon students and staff working collaboratively, thereby also acting to break down the traditional teacher/learner division that is typically unhelpful in developing students as advanced learners.
Keywords: higher education, employability, link organizations, innovative teaching and learning methods, interactions between enterprise and education, student experience
Procedia PDF Downloads 183
32 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space, and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL interoperability. We demonstrate how CUDA, as a low-level GPU programming paradigm, allows optimizing the performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs with existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata, which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time, and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia utilizing various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids, and over longer periods without prohibitive waiting times, thereby enabling the more efficient exploration and discovery of new species within the Lenia ecosystem. Moreover, faster simulations are beneficial when additional time-consuming algorithms, such as computer vision or machine learning, are included to evolve and optimize specific Lenia configurations.
We developed a Lenia implementation for the GPU using the C++ and CUDA programming languages, and CUDA/OpenGL interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation against the existing ones in terms of speed, memory usage, configurability, and scalability. In our comparison we focus on the most important Lenia implementations, selected for their prominence, accessibility, and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust, and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy when using single versus double precision floating point arithmetic. The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the ALife community for further development.
Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis
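For readers unfamiliar with the update rule being accelerated, a minimal CPU reference sketch of one Lenia step can be written with FFT-based convolution. This is an illustrative sketch only: the function names (`ring_kernel`, `lenia_step`) and the parameter values (`mu`, `sigma`, `dt`, kernel radius) are ours, not those of the CUDA implementation discussed in the paper.

```python
import numpy as np

def ring_kernel(n, radius=13.0):
    """Smooth ring-shaped kernel, normalized, returned pre-transformed for FFT use."""
    y, x = np.ogrid[-n // 2:n // 2, -n // 2:n // 2]
    r = np.sqrt(x * x + y * y) / radius                 # normalized distance from center
    k = np.zeros((n, n))
    inside = (r > 0) & (r < 1)
    k[inside] = np.exp(4.0 - 1.0 / (r[inside] * (1.0 - r[inside])))  # smooth bump on the ring
    return np.fft.fft2(np.fft.ifftshift(k / k.sum()))   # center kernel at origin, transform

def lenia_step(grid, kernel_fft, dt=0.1, mu=0.15, sigma=0.015):
    """One continuous-state update: neighborhood potential -> growth -> clipped increment."""
    u = np.real(np.fft.ifft2(np.fft.fft2(grid) * kernel_fft))           # toroidal convolution
    growth = 2.0 * np.exp(-((u - mu) ** 2) / (2.0 * sigma ** 2)) - 1.0  # bell-shaped growth
    return np.clip(grid + dt * growth, 0.0, 1.0)                        # states stay in [0, 1]
```

The convolution-growth-clip pipeline above is exactly the per-frame workload that a GPU version parallelizes over cells, which is why larger grids and kernels favor the CUDA implementation.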
Procedia PDF Downloads 41
31 Socio-Sensorial Assessment of Nursing Homes in Singapore: Towards Integrated Enabling Design
Authors: Zdravko Trivic, John Chye Fung, Ruzica Bozovic-Stamenovic
Abstract:
Within the context of a rapidly ageing population in Singapore and the pressing demands on both caregivers and care providers, an integrated approach to an ageing-friendly and ability-sensitive enabling environment becomes an imperative. This particularly applies to nursing home environments and their immediate surroundings, as they are becoming one of the main available options for long-term care for many senior adults who are unable to age at home. Yet, despite considerable efforts to break with the still predominant clinical approach to eldercare and to introduce more home-like design and a person-centric care model, nursing homes keep being stigmatised and perceived as not so desirable environments to grow old in. The challenges are further emphasised by the associated physical, sensorial, psychological and cognitive declines that are common consequences of ageing. Such declines have an immense impact on almost all aspects of older adults’ daily functioning, including problems with mobility and spatial orientation, difficulties in communication, withdrawal from social interaction, higher levels of depression and a decreased sense of independence and autonomy. However, typical nursing home designs tend to neglect the full capacities of balanced and carefully integrated multisensory stimuli as an active component of care and ability building. This paper outlines part of a larger multi-disciplinary study of six nursing homes in Singapore, with the overarching objective of creating new models of supportive nursing home environments that go beyond the clinical care model and encourage community integration with the nursing home settings. The paper focuses on the largely neglected aspects of sensorial comfort and the multi-sensorial properties of nursing homes, including both indoor and immediate outdoor spaces (boundaries). The objective was to investigate the sensory rhythms and explore their role in nursing home users’ daily routine and their therapeutic capacities.
Socio-sensory rhythms were captured and analysed through a combination of on-site sensory recordings of “objective” quantitative sensory data (air temperature and humidity, sound level and luminance) using a multi-function environment meter, data on perceived experience, spatial mapping, first-person observations of nursing home users’ activity patterns, and interviews. This was done in addition to the employment of available assessment tools, such as the Wisconsin Person Directed Care assessment tool, the Dementia Quality of Life [DQoL] instrument, and the Resident Environment Impact Scale [REIS], as these tools address the issues of sensorial experience only insufficiently and selectively. Key findings indicate varied levels of sensory comfort, as well as of the diversity, intensity, and customisation of multi-sensory conditions within different nursing home spaces. Sensory stimulation is typically concentrated in the communal living areas of the nursing homes or in areas that often provide controlled or limited access, including specifically designed sensory rooms and outdoor green spaces (gardens and terraces). Opportunities for sensory stimulation are particularly limited for bed-bound senior residents and within more functional areas, such as corridors. This suggests that the capacities of nursing home designs to provide more diverse and better integrated pleasant sensory conditions as integrated “therapeutic devices” that build nursing home residents’ physical and mental abilities, encourage activity and improve wellbeing are far from exhausted.
Keywords: ageing-supportive environment, enabling design, multi-sensory assessment, nursing home environment
Procedia PDF Downloads 172
30 Transforming Mindsets and Driving Action through Environmental Sustainability Education: A Course in Case Studies and Project-Based Learning in Public Education
Authors: Sofia Horjales, Florencia Palma
Abstract:
Our society is currently experiencing a profound transformation, demanding a proactive response from governmental bodies and higher education institutions to empower the next generation as catalysts for change. Environmental sustainability is rooted in the critical need to maintain the equilibrium and integrity of natural ecosystems, ensuring the preservation of precious natural resources and biodiversity for the benefit of both present and future generations. It is an essential cornerstone of sustainable development, complementing social and economic sustainability. In this evolving landscape, active methodologies take a central role, aligning perfectly with the principles of the 2030 Agenda for Sustainable Development and emerging as a pivotal element of teacher education. The emphasis on active learning methods has been driven by the urgent need to nurture sustainability and instill social responsibility in our future leaders. The Universidad Tecnológica del Uruguay (UTEC) is a public, technologically oriented institution established in 2012. UTEC is dedicated to decentralization, expanding access to higher education throughout Uruguay, and promoting inclusive social development. Operating through Regional Technological Institutes (ITRs) and associated centers spread across the country, UTEC faces the challenge of remote student populations. To address this, UTEC utilizes e-learning for equal opportunities, self-regulated learning, and digital skills development, enhancing communication among students, teachers, and peers through virtual classrooms. The Interdisciplinary Continuing Education Program is part of the Innovation and Entrepreneurship Department of UTEC. Its main goal is to strengthen innovation skills through a transversal and multidisciplinary approach. Within this program, we have developed a Case Studies and Project-Based Learning virtual course designed for university students and open to the broader UTEC community.
The primary aim of this course is to establish a strong foundation for comprehending and addressing environmental sustainability issues from an interdisciplinary perspective. Upon completing the course, we expect students not only to understand the intricate interactions between social and ecosystem environments but also to utilize their knowledge and innovation skills to develop projects that offer enhancements or solutions to real-world challenges. Our course design centers on innovative learning experiences, rooted in active methodologies. We explore the intersection of these methods with sustainability and social responsibility in the education of university students. A paramount focus lies in gathering student feedback, empowering them to autonomously generate ideas with guidance from instructors, and even defining their own project topics. This approach underscores that when students are genuinely engaged in subjects of their choice, they not only acquire the necessary knowledge and skills but also develop essential attributes like effective communication, critical thinking, and problem-solving abilities. These qualities will benefit them throughout their lifelong learning journey. We are convinced that education serves as the conduit to merge knowledge and cultivate interdisciplinary collaboration, igniting awareness and instigating action for environmental sustainability. While systemic changes are undoubtedly essential for society and the economy, we are making significant progress by shaping perspectives and sparking small, everyday actions within the UTEC community. This approach empowers our students to become engaged global citizens, actively contributing to the creation of a more sustainable future.
Keywords: active learning, environmental education, project-based learning, soft skills development
Procedia PDF Downloads 71
29 Multimodal Integration of EEG, fMRI and Positron Emission Tomography Data Using Principal Component Analysis for Prognosis in Coma Patients
Authors: Denis Jordan, Daniel Golkowski, Mathias Lukas, Katharina Merz, Caroline Mlynarcik, Max Maurer, Valentin Riedl, Stefan Foerster, Eberhard F. Kochs, Andreas Bender, Ruediger Ilg
Abstract:
Introduction: So far, clinical assessments that rely on behavioral responses to differentiate coma states or even predict outcome in coma patients are unreliable, e.g., because of some patients’ motor disabilities. The present study aimed to provide prognosis in coma patients using markers from the electroencephalogram (EEG), blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI), and [18F]-fluorodeoxyglucose (FDG) positron emission tomography (PET). Unsupervised principal component analysis (PCA) was used for multimodal integration of the markers. Methods: With the approval of the local ethics committee of the Technical University of Munich (Germany), 20 patients (aged 18-89) with severe brain damage were recruited through intensive care units at the Klinikum rechts der Isar in Munich and at the Therapiezentrum Burgau (Germany). On the day of the EEG/fMRI/PET measurement (date I), patients (<3.5 months in coma) were grouped into minimally conscious state (MCS) or vegetative state (VS) on the basis of their clinical presentation (coma recovery scale-revised, CRS-R). Follow-up assessment (date II) was also based on the CRS-R, in a period of 8 to 24 months after date I. At date I, 63-channel EEG (Brain Products, Gilching, Germany) was recorded outside the scanner, and subsequently simultaneous FDG-PET/fMRI was acquired on an integrated Siemens Biograph mMR 3T scanner (Siemens Healthineers, Erlangen, Germany). Power spectral densities, permutation entropy (PE), and symbolic transfer entropy (STE) were calculated in/between frontal, temporal, parietal, and occipital EEG channels. PE and STE are based on symbolic time series analysis and have already been introduced as robust markers separating wakefulness from unconsciousness in EEG during general anesthesia.
While PE quantifies the regularity structure of the neighboring order of signal values (a surrogate of cortical information processing), STE reflects information transfer between two signals (a surrogate of directed connectivity in cortical networks). fMRI analysis was carried out using SPM12 (Wellcome Trust Centre for Neuroimaging, University College London, UK). Functional images were realigned, segmented, normalized, and smoothed. PET was acquired for 45 minutes in list mode. For absolute quantification of the brain’s glucose consumption rate in FDG-PET, kinetic modelling was performed with the Patlak plot method. BOLD signal intensity in fMRI and glucose uptake in PET were calculated in 8 distinct cortical areas. PCA was performed over all markers from EEG/fMRI/PET. Prognosis (persistent VS and deceased patients vs. recovery to MCS/awake from date I to date II) was evaluated using the area under the curve (AUC), including bootstrap confidence intervals (CI, *: p<0.05). Results: Prognosis was reliably indicated by the first component of the PCA (AUC=0.99*, CI=0.92-1.00), showing a higher AUC than the best single markers (EEG: AUC<0.96*, fMRI: AUC<0.86*, PET: AUC<0.60). The CRS-R did not show prediction (AUC=0.51, CI=0.29-0.78). Conclusion: In a multimodal analysis of EEG/fMRI/PET in coma patients, PCA led to a reliable prognosis. The impact of this result is evident, as clinical estimates of prognosis are at times inapt and could be supported by quantitative biomarkers from EEG, fMRI, and PET. Due to the small sample size, further investigations are required, in particular allowing supervised learning instead of the basic approach of unsupervised PCA.
Keywords: coma states and prognosis, electroencephalogram, entropy, functional magnetic resonance imaging, machine learning, positron emission tomography, principal component analysis
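The integration-and-evaluation step can be illustrated compactly. The sketch below is a hedged illustration on synthetic data, not the study’s code: `first_pc_auc`, `auc_score`, and their defaults are our own names. It z-scores the multimodal markers, projects them onto the first principal component via SVD, and scores the binary outcome by the AUC with a percentile bootstrap confidence interval:

```python
import numpy as np

def auc_score(y, s):
    """Mann-Whitney form of the AUC: P(score_pos > score_neg) + 0.5 * P(tie)."""
    pos, neg = s[y == 1], s[y == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def first_pc_auc(markers, outcome, n_boot=500, seed=0):
    """Project z-scored markers onto PC1 and score the outcome by AUC
    with a 95% percentile bootstrap confidence interval."""
    X = (markers - markers.mean(axis=0)) / markers.std(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    pc1 = X @ vt[0]                        # first principal component scores
    a = auc_score(outcome, pc1)
    auc = max(a, 1.0 - a)                  # PC sign is arbitrary; orient the AUC
    rng = np.random.default_rng(seed)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(outcome), len(outcome))
        if outcome[idx].min() == outcome[idx].max():
            continue                       # a resample must contain both classes
        b = auc_score(outcome[idx], pc1[idx])
        boots.append(max(b, 1.0 - b))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return auc, (lo, hi)
```

Note that in the study the PC was computed over markers from all three modalities and the bootstrap CI was reported alongside the point AUC, analogous to the `(lo, hi)` interval here.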
Procedia PDF Downloads 339
28 Synthesis of Chitosan/Silver Nanocomposites: Antibacterial Properties and Tissue Regeneration for Thermal Burn Injury
Authors: B.L. España-Sánchez, E. Luna-Hernández, R.A. Mauricio-Sánchez, M.E. Cruz-Soto, F. Padilla-Vaca, R. Muñoz, L. Granados-López, L.R. Ovalle-Flores, J.L. Menchaca-Arredondo, G. Luna-Bárcenas
Abstract:
Treatment of burn injuries has been considered an important clinical problem due to fluid control and the presence of microorganisms during the healing process. Conventional treatment includes antiseptic techniques, topical medication, and surgical removal of damaged skin to avoid bacterial growth. In order to accelerate this process, different alternatives for tissue regeneration have been explored, including artificial skin, polymers, hydrogels, and hybrid materials. Some requirements call for a nonreactive organic polymer with high biocompatibility and skin adherence that avoids bacterial infections. A chitin-derived biopolymer such as chitosan (CS) has been used in skin regeneration following third-degree burns. The biological interest of CS is associated with the improvement of tissue cell stimulation, biocompatibility, and antibacterial properties. In particular, the antimicrobial properties of CS can be significantly increased when it is blended with nanostructured materials. Silver-based nanocomposites have gained attention in medicine due to their high antibacterial activity against pathogens, related to their high surface area/volume ratio at nanomolar concentrations. Silver nanocomposites can be blended or synthesized with chitin-derived biopolymers in order to obtain a biodegradable/antimicrobial hybrid with improved physico-mechanical properties. In this study, nanocomposites based on chitosan/silver nanoparticles (CS/nAg) were synthesized by the in situ chemical reduction method, improving their antibacterial properties against pathogenic bacteria and enhancing the healing process in thermal burn injuries produced in an animal model. CS/nAg was prepared in solution by the chemical reduction method, using AgNO₃ as the precursor. CS was dissolved in acetic acid and mixed with different molar concentrations of AgNO₃: 0.01, 0.025, 0.05 and 0.1 M. Solutions were stirred at 95°C for 20 hours in order to promote nAg formation.
CS/nAg solutions were placed in Petri dishes and dried to obtain films. Structural analyses by UV-Vis and TEM confirm the synthesis of silver nanoparticles (nAg), with an average size of 7.5 nm and spherical morphology. FTIR analyses showed complex formation through the interaction of hydroxyl and amine groups with the metallic nanoparticles, and surface chemical analysis (XPS) shows a low concentration of Ag⁰/Ag⁺ species. Surface topography analyses by AFM showed that hydrated CS forms a mesh with an average diameter of 10 µm. Antibacterial activity against S. aureus and P. aeruginosa was improved under all evaluated conditions, such as nAg loading and interaction time. CS/nAg nanocomposite films did not show Ag⁰/Ag⁺ release in saline buffer or rat serum after exposure for 7 days. The healing process was significantly enhanced by the presence of CS/nAg nanocomposites, inducing the production of myofibroblasts, collagen remodeling, blood vessel neoformation, and epidermis regeneration after 7 days of injury treatment, as shown by histological and immunohistochemistry assays. The present work suggests that hydrated CS/nAg nanocomposites can form a mesh, improving bacterial penetration into the material and contact with the embedded nAg, producing complete growth inhibition after 1.5 hours. Furthermore, CS/nAg nanocomposites improve cell tissue regeneration in thermal burn injuries induced in rats. The synthesis of antibacterial, non-toxic, and biocompatible nanocomposites can be an important issue in tissue engineering and health care applications.
Keywords: antibacterial, chitosan, healing process, nanocomposites, silver
Procedia PDF Downloads 287
27 Modelling Farmer’s Perception and Intention to Join Cashew Marketing Cooperatives: An Expanded Version of the Theory of Planned Behaviour
Authors: Gospel Iyioku, Jana Mazancova, Jiri Hejkrlik
Abstract:
The “Agricultural Promotion Policy (2016–2020)” represents a strategic initiative by the Nigerian government to address domestic food shortages and the challenges of exporting products at the required quality standards. Because the sector is hindered by an inefficient system for setting and enforcing food quality standards, coupled with a lack of market knowledge, the Federal Ministry of Agriculture and Rural Development (FMARD) aims to enhance support for the production and activities of key crops like cashew. By collaborating with farmers, processors, investors, and stakeholders in the cashew sector, the policy seeks to define and uphold high-quality standards across the cashew value chain. Given the challenges and opportunities faced by Nigerian cashew farmers, active participation in cashew marketing groups becomes imperative. These groups serve as essential platforms for farmers to collectively navigate market intricacies, access resources, share knowledge, improve output quality, and bolster their overall bargaining power. Through engagement in these cooperative initiatives, farmers not only boost their economic prospects but can also contribute significantly to the sustainable growth of the cashew industry, fostering resilience and community development. This study explores the perceptions and intentions of farmers regarding their involvement in cashew marketing cooperatives, utilizing an expanded version of the Theory of Planned Behaviour. Drawing insights from a diverse sample of 321 cashew farmers in Southwest Nigeria, the research sheds light on the factors influencing decision-making in cooperative participation. The demographic analysis reveals a diverse landscape, with a substantial presence of middle-aged individuals contributing significantly to the agricultural sector and cashew-related activities emerging as a primary income source for a notable proportion (23.99%).
Employing Structural Equation Modelling (SEM) with Maximum Likelihood Robust (MLR) estimation in R, the research elucidates the associations among latent variables. Despite the model’s complexity, the goodness-of-fit indices attest to the validity of the structural model, which explains approximately 40% of the variance in the intention to join cooperatives. Moral norms emerge as a pivotal construct, highlighting the profound influence of ethical considerations in decision-making processes, while perceived behavioural control presents potential challenges to active participation. Attitudes toward joining cooperatives reveal nuanced perspectives, with strong beliefs in enhanced connections with other farmers but varying perceptions of improved access to essential information. The SEM analysis establishes positive and significant effects of moral norms, perceived behavioural control, subjective norms, and attitudes on farmers’ intention to join cooperatives. The knowledge construct positively affects key factors influencing intention, emphasizing the importance of informed decision-making. A supplementary analysis using partial least squares (PLS) SEM corroborates the robustness of the findings, aligning with the covariance-based SEM results. This research unveils the determinants of cooperative participation and provides valuable insights for policymakers and practitioners aiming to empower and support this vital demographic in the cashew industry.
Keywords: marketing cooperatives, theory of planned behaviour, structural equation modelling, cashew farmers
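The ~40% variance-explained figure reported for intention can be illustrated, in a much-simplified form, with an ordinary least-squares fit on synthetic construct scores. This sketch is entirely hypothetical (invented coefficients and data) and stands in for the covariance-based SEM with MLR estimation in R that the study actually used:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 321  # sample size matching the study

# Hypothetical latent construct scores (the study used survey-measured constructs)
moral_norms = rng.normal(size=n)
pbc = rng.normal(size=n)          # perceived behavioural control
subj_norms = rng.normal(size=n)   # subjective norms
attitude = rng.normal(size=n)

# Synthetic intention with positive effects of all four TPB predictors
intention = (0.4 * moral_norms + 0.2 * pbc + 0.2 * subj_norms
             + 0.2 * attitude + rng.normal(scale=1.0, size=n))

# OLS fit and R^2: the share of variance in intention explained by the predictors
X = np.column_stack([np.ones(n), moral_norms, pbc, subj_norms, attitude])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((intention - pred) ** 2) / np.sum((intention - intention.mean()) ** 2)
print(f"R^2 (variance in intention explained): {r2:.2f}")
```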
Procedia PDF Downloads 84
26 Telemedicine for Telerehabilitation in Areas Affected by Social Conflicts in Colombia
Authors: Lilia Edit Aparicio Pico, Paulo Cesar Coronado Sánchez, Roberto Ferro Escobar
Abstract:
This paper presents the implementation of telemedicine services for physiotherapy, occupational therapy, and speech therapy rehabilitation, utilizing telebroadcasting of audiovisual content to enhance comprehensive patient recovery in rural areas of the San Vicente del Caguán municipality, an area of Colombia characterized by high levels of social conflict. The region faces challenges such as functional disorders, physical rehabilitation needs, and a high prevalence of hearing disease, compounded by neglect and substandard health services. Limited access to healthcare due to communication barriers and transportation difficulties exacerbates these issues. To address these challenges, a research initiative was undertaken to leverage information and communication technologies (ICTs) to improve healthcare quality and accessibility for this vulnerable population. The primary objective was to develop a tele-rehabilitation system providing asynchronous online therapies and teleconsultation services for patient follow-up during the recovery process. The project comprises two components: communication systems and human development. The technological component involves the establishment of a wireless network connecting rural centers and the development of a mobile application for video-based therapy delivery. Communications are provided by a radio link, using internet access supplied by the Colombian government, connecting two rural centers (Pozos and Tres Esquinas) in the municipality of San Vicente del Caguán, together with a mobile application for managing videos for asynchronous broadcasting in rural settlements and patients' homes. This component constitutes an operational model integrating information and telecommunications technologies. The second component involves pedagogical and human development.
The primary focus is on the patient: performance indicators and the efficiency of therapy support were evaluated for the assessment and monitoring of telerehabilitation results in physical, occupational, and speech therapy. The specific objectives were to implement a wireless network ensuring audiovisual content transmission for tele-rehabilitation; to design audiovisual content for tele-rehabilitation based on the physiotherapy, occupational therapy, and speech therapy services provided by the ESE Hospital San Rafael; to develop a software application for fixed and mobile devices enabling access to tele-rehabilitation audiovisual content for healthcare personnel and patients; and, finally, to evaluate the technological solution's contribution to the ESE Hospital San Rafael community. The research comprised four phases: wireless network implementation, audiovisual content design, software application development, and evaluation of the technological solution's impact. Key findings include the successful implementation of virtual teletherapy, both synchronous and asynchronous, and the assessment of technological performance indicators, patient evolution, timeliness, acceptance, and service quality of tele-rehabilitation therapies. The study demonstrated improved service coverage, increased care supply, enhanced access to timely therapies for patients, and positive acceptance of teletherapy modalities. Additionally, the project generated new knowledge for potential replication in other regions and proposed strategies for short- and medium-term improvement of service quality and care indicators.
Keywords: e-health, medical informatics, telemedicine, telerehabilitation, virtual therapy
Procedia PDF Downloads 54
25 Interference of Polymers Addition in Wastewaters Microbial Survey: Case Study of Viral Retention in Sludges
Authors: Doriane Delafosse, Dominique Fontvieille
Abstract:
Background: Wastewater treatment plants (WWTPs) generally display significant efficacy in virus retention, yet this efficacy is sometimes highly variable, partly in relation to large fluctuating loads at the head of the plant and partly because of episodic dysfunctions in some treatment processes. The problem is especially sensitive when human enteric viruses, such as human norovirus genogroup I or adenoviruses, are concerned: their release downstream of the WWTP, into environments often connected to recreational areas, may be harmful to human communities even at low concentrations. This highlights the importance of permanent WWTP monitoring, from which internal treatment processes can be adjusted. One way to adjust primary treatment is to add coagulants and flocculants to the sewage ahead of the settling tanks to improve decantation. In this work, sludges produced by three coagulants (two organic, one mineral), four flocculants (three cationic, one anionic), and their combinations were studied for their efficacy in human enteric virus retention. Sewage samples came from a WWTP in the vicinity of the laboratory. All experiments were performed three times, in triplicate, in laboratory pilots, using murine norovirus (MNV-1), a surrogate of human norovirus, as an internal control (spiking). Viruses were quantified by (RT-)qPCR after nucleic acid extraction from both treated water and sediment. Results: Low sludge virus retention (4 to 8% of the initial sewage concentration) was observed with each cationic organic flocculant added to wastewater without coagulant. The largest part of the virus load was detected in the treated water (48 to 90%), although the two fractions together did not counterbalance the amount of introduced virus (MNV-1). The results pertained to two types of cationic flocculants, branched and linear, and in the latter case, to two cation percentages.
Results were quite similar for the association of a linear cationic organic coagulant with an anionic flocculant, though differences between water and sludges suggested they would sometimes be related to virus size or virus origin (autochthonous/allochthonous). FeCl₃, a mineral coagulant, associated with an anionic flocculant significantly increased both auto- and allochthonous virus retention in the sediments (15 to 34%). Accordingly, the virus load in treated water was lower (14 to 48%), but with a total that still did not reach the amount of introduced virus (MNV-1). It also appeared that virus retrieval from a bare 0.1 M NaCl suspension varied rather strongly with FeCl₃ concentration, suggesting an inhibiting effect on the molecular analysis used to detect the virus. Finally, no viruses were detected in either phase (sediment and water) with the combination of branched cationic coagulant and linear anionic flocculant, which was later demonstrated to be, here also, an effect of the polymers on the molecular detection analysis. Conclusions: The combination of FeCl₃ and anionic flocculant gave the highest performance for the decantation-based virus removal process. However, large imbalances in the spiking experiments were observed, suggesting that polymers cast additional obstacles to both the elution buffer and the lysis buffer on their way to the virus. The situation was probably even worse for autochthonous viruses already embedded in the sewage's particulate matter. Polymers and FeCl₃ also appeared to interfere with some steps of the molecular analyses. More attention should be paid to such impediments wherever chemical additives are considered for enhancing WWTP processes. Acknowledgments: This research was supported by the ABIOLAB laboratory (Montbonnot Saint-Martin, France) and by the ASPOSAN association.
Field experiments were possible thanks to the Grand Chambéry WWTP authorities (Chambéry, France).
Keywords: flocculants-coagulants, polymers, enteric viruses, wastewater sedimentation treatment plant
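The unbalanced spiking results can be summarized as a simple two-phase recovery balance; the percentages below come from the abstract, while the helper function itself is only illustrative:

```python
def recovery_balance(sediment_pct, water_pct):
    """Total recovery of spiked virus across both phases, as % of input.
    Totals below 100% indicate losses, e.g. inhibition of the molecular
    detection by polymers or FeCl3."""
    total = sediment_pct + water_pct
    return total, 100.0 - total

# Cationic flocculant, no coagulant: 4-8% in sludge, 48-90% in treated water
total, unaccounted = recovery_balance(8.0, 90.0)
print(f"best case: {total:.0f}% recovered, {unaccounted:.0f}% unaccounted for")
```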
Procedia PDF Downloads 124
24 A Case Study on Utility of 18FDG-PET/CT Scan in Identifying Active Extra Lymph Nodes and Staging of Breast Cancer
Authors: Farid Risheq, M. Zaid Alrisheq, Shuaa Al-Sadoon, Karim Al-Faqih, Mays Abdulazeez
Abstract:
Breast cancer is the most frequently diagnosed cancer worldwide and a common cause of death among women. Various conventional anatomical imaging tools are utilized for diagnosis, histological assessment, and TNM (Tumor, Node, Metastases) staging of breast cancer. Sentinel lymph node biopsy is becoming an alternative to axillary lymph node dissection. Advances in 18-fluoro-deoxy-glucose positron emission tomography/computed tomography (18FDG-PET/CT) imaging have facilitated breast cancer diagnosis by utilizing the biological trapping of 18FDG inside lesion cells, expressed as the standardized uptake value (SUVmax). Objective: To present the utility of 18FDG-PET/CT scans in detecting active extra lymph nodes and distant occult metastases for breast cancer staging. Subjects and Methods: Four female patients presented with TNM stages of breast cancer initially classified by conventional anatomical diagnostic techniques. 18FDG-PET/CT scans were performed one hour after intravenous injection of 300-370 MBq of 18FDG, acquired over 7-8 bed positions of 130 s each. Transverse, sagittal, and coronal views, fused PET/CT, and MIP modality were reconstructed for each patient. Results: A total of twenty-four lesions in the breast, extended lesions in lung, liver, and bone, and active extra lymph nodes were detected among the patients. The initial TNM stage changed significantly after the 18FDG-PET/CT scan for each patient, as follows: Patient 1: Initial TNM stage: T1N1M0 (stage I). Finding: two lesions in the right breast (3.2 cm², SUVmax=10.2; 1.8 cm², SUVmax=6.7), associated with metastases to two right axillary lymph nodes. Final TNM stage: T1N2M0 (stage II). Patient 2: Initial TNM stage: T2N2M0 (stage III). Finding: right breast lesion (6.1 cm², SUVmax=15.2), associated with metastases to the right internal mammary lymph node, two right axillary lymph nodes, and sclerotic lesions in the right scapula. Final TNM stage: T2N3M1 (stage IV). Patient 3: Initial TNM stage: T2N0M1 (stage III).
Finding: left breast lesion (11.1 cm², SUVmax=18.8), associated with metastases to two lymph nodes in the left hilum and three lesions in both lungs. Final TNM stage: T2N2M1 (stage IV). Patient 4: Initial TNM stage: T4N1M1 (stage III). Finding: four lesions in the upper outer quadrant of the right breast (largest: 12.7 cm², SUVmax=18.6), in addition to one lesion in the left breast (4.8 cm², SUVmax=7.1), associated with metastases to multiple lesions in the liver (largest: 11.4 cm², SUV=8.0) and two bony lytic lesions in the left scapula and first cervical vertebra. No evidence of regional or distant lymph node involvement. Final TNM stage: T4N0M2 (stage IV). Conclusions: Our results demonstrated that 18FDG-PET/CT scans significantly changed the TNM stages of breast cancer patients. While the T factor was unchanged, the N and M factors showed significant variations. A single PET/CT session was effective in detecting active extra lymph nodes and distant occult metastases that were not identified by conventional diagnostic techniques, and might advantageously replace bone scans and contrast-enhanced CT of the chest, abdomen, and pelvis. Applying the 18FDG-PET/CT scan early in the investigation might shorten diagnosis time, help in deciding an adequate treatment protocol, and improve patients’ quality of life and survival. Trapping of 18FDG in malignant lesion cells after a PET/CT scan increases the retention index (RI%) for a considerable time, which might help localize the sentinel lymph node for biopsy using a handheld gamma probe detector. Future work is required to demonstrate this utility.
Keywords: axillary lymph nodes, breast cancer staging, fluorodeoxyglucose positron emission tomography/computed tomography, lymph nodes
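For context, the SUV values quoted above follow the standard body-weight normalization, SUV = tissue activity concentration / (injected dose / body weight). The sketch below uses hypothetical inputs within the study's 300-370 MBq injection range and omits decay correction for simplicity:

```python
def suv_body_weight(tissue_kbq_per_ml, injected_mbq, body_weight_kg):
    """Standardized Uptake Value normalized by body weight.
    SUV = tissue concentration (kBq/mL) / (injected dose per gram of body
    weight), treating 1 g of tissue as 1 mL. Decay correction is omitted."""
    dose_kbq_per_g = injected_mbq * 1000.0 / (body_weight_kg * 1000.0)
    return tissue_kbq_per_ml / dose_kbq_per_g

# Hypothetical example: 51 kBq/mL lesion, 340 MBq injected, 68 kg patient
print(round(suv_body_weight(51.0, 340, 68), 1))
```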
Procedia PDF Downloads 313
23 Exploring Symptoms, Causes and Treatments of Feline Pruritus Using Thematic Analysis of Pet Owner Social Media Posts
Authors: Sitira Williams, Georgina Cherry, Andrea Wright, Kevin Wells, Taran Rai, Richard Brown, Travis Street, Alasdair Cook
Abstract:
Fifty social media sources were identified, and keywords defined by veterinarians were organised into 6 topics known to be indicative of feline pruritus: body areas, behaviors, symptoms, diagnosis, and treatments. These were augmented using academic literature, a cat owner survey, synonyms, and Google Trends. Content was collected using a social intelligence solution, with keywords tagged and filtered. Data were aggregated and de-duplicated. Social listening (SL) content matching body areas, behaviors, and symptoms was reviewed manually, and posts were marked relevant if they were posted by a pet owner, identified an itchy cat, and were not duplicated. A subset of 493 posts published from 2009 to 2022 was used for reflexive thematic analysis in NVIVO (Burlington, MA) to identify themes. Five themes were identified: allergy, pruritus, additional behaviors, unusual or undesirable behaviors, and diagnosis and treatment. Most posts (258) reported that the cat was excessively licking, itching, and scratching. The majority were indoor cats, which were less playful and friendly when itchy. Half of these posts did not indicate a known cause of pruritus. Bald spots and scabs (123 posts) were reported, often with swelling and fur loss, and 56 posts reported bumps, lumps, and dry patches. Other impacts on the cats’ quality of life were ear mites, self-trauma, and stress. Seven posts reported that the cat’s symptoms caused the owner ongoing anxiety and depression. Cats with food allergies, often to chicken and beef, causing bald spots featured in 23 posts. Veterinarians advised switching to a raw food diet and/or changing the cats’ bowls. Some cats got worse after switching, leaving owners’ needs unmet. Allergic reactions to flea bites causing excessive itching, red spots, scabs, and fur loss were reported in 13 posts. Three posts indicated allergic reactions to medication. Cats with seasonal and skin allergies, causing sneezing, scratching, headshaking, watery eyes, and nasal discharge, were reported 17 times.
Eighty-five posts identified additional behaviors. Of these, 13 reported a burst pimple or insect bite on the cat. Common behaviors were headshaking, rubbing, pawing at the ears, and aggressive chewing. In some cases, bites or pimples triggered previously unseen itchiness, making the cat irritable. Twenty-four posts reported that the cat had anxiety: overgrooming, itching, losing fur, hiding, freaking out, breathing quickly, sleeplessness, hissing, and vocalising. Most reported these cats as having itchy skin, fleas, and bumps. Cats were commonly diagnosed with an ear infection, ringworm, acne, or kidney disease. Acne was diagnosed in cats with an allergy flare-up or overgrooming. Ear infections were diagnosed in itchy cats with mites or other parasites. Of the treatments mentioned, steroids were used most frequently, followed by anti-parasitics, including flea treatments, and oral medication (steroids, antibiotics). Forty-six posts reported distress following poor outcomes after medication or additional vet consultations. SL provides veterinarians with unique insights: verbatim comments highlight the detrimental effects of pruritus on pet and owner quality of life. This study demonstrates the need for veterinarians to communicate management and treatment options more effectively to relieve owner frustration. Data analysis could be scaled up using machine learning for topic modeling.
Keywords: content analysis, feline, itch, pruritus, social media, thematic analysis, veterinary dermatology
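The keyword tagging and de-duplication step described above can be sketched as a simple filter. The keyword list and example posts here are invented placeholders; the study itself used a commercial social-intelligence tool and manual review rather than code like this:

```python
# Sketch: keyword tagging + de-duplication of social listening (SL) posts.
# Keywords are hypothetical stand-ins for the veterinarian-defined lists.
PRURITUS_KEYWORDS = {"itchy", "scratching", "overgrooming", "bald spots", "scabs"}

def tag_and_dedupe(posts):
    """Keep unique posts that mention at least one pruritus keyword."""
    seen, relevant = set(), []
    for post in posts:
        text = post.lower()
        if text in seen:
            continue  # drop verbatim duplicates
        seen.add(text)
        if any(kw in text for kw in PRURITUS_KEYWORDS):
            relevant.append(post)
    return relevant

posts = [
    "My cat keeps scratching and has bald spots",
    "My cat keeps scratching and has bald spots",  # verbatim duplicate
    "Look at this cute kitten photo",              # no pruritus keywords
]
print(tag_and_dedupe(posts))
```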
Procedia PDF Downloads 191
22 Circular Nitrogen Removal, Recovery and Reuse Technologies
Authors: Lina Wu
Abstract:
The excessive discharge of nitrogen in sewage greatly intensifies the eutrophication of water bodies and threatens water quality, and nitrogen pollution control has become a global concern. The concentration of nitrogen in water is reduced by converting ammonia, nitrate, and nitrite nitrogen into nitrogen-containing gas through biological treatment, physicochemical treatment, and oxidation technologies. However, some wastewater with high ammonia nitrogen, including landfill leachate, is difficult to treat by traditional nitrification and denitrification because of its high COD content. The core of denitrification is that denitrifying bacteria convert the nitrite and nitrate produced by nitrification into nitrogen gas under anoxic conditions; however, the low carbon-to-nitrogen ratio of such wastewater does not meet the conditions for denitrification. Many studies have shown that naturally occurring autotrophic anammox bacteria can combine nitrite and ammonia nitrogen without a carbon source, through their functional genes, to achieve total nitrogen removal, which is very suitable for removing nitrogen from leachate. In addition, the process saves considerable aeration energy compared with the traditional nitrogen removal process. Therefore, anammox plays an important role in nitrogen conversion and energy saving. Partial nitrification and denitrification coupled with anammox ensures total nitrogen removal and improves removal efficiency, meeting society's need for an ecologically friendly and cost-effective nutrient removal technology. In recent years, research has found that the algae-bacteria symbiotic system offers further advantages for water treatment, because this process not only helps to improve the efficiency of wastewater treatment but also allows carbon dioxide reduction and resource recovery.
Microalgae use carbon dioxide dissolved in water, or released through bacterial respiration, to produce oxygen for the bacteria via photosynthesis under light; the bacteria, in turn, provide metabolites and inorganic carbon for the growth of the microalgae, which may allow an algal-bacterial symbiotic system to save most or all of the aeration energy consumption. It has become a trend to make microalgae and light-avoiding anammox bacteria play synergistic roles by adjusting the light-to-dark ratio. Microalgae in the outer layer of light-exposed granules block most of the light and provide cofactors and amino acids that promote nitrogen removal. In particular, Myxococcota MYX1 can degrade the extracellular proteins produced by microalgae, providing amino acids for the entire bacterial community, which helps anammox bacteria save metabolic energy and adapt to light. As a result, initiating and maintaining a process combining dominant algae with anaerobic nitrogen-removing bacterial communities has great potential for treating landfill leachate. Chlorella has an excellent removal effect and can withstand extreme environments of high ammonia nitrogen, high salinity, and low temperature. It is urgent to study whether an algal-sludge mixture rich in denitrifying bacteria and Chlorella can greatly improve the efficiency of landfill leachate treatment in an anaerobic environment where photosynthesis is stopped. The optimal dilution of simulated landfill leachate can be found by determining the treatment effect of the same batch of bacteria-algae mixtures at different initial ammonia nitrogen concentrations and comparing the results.
High-throughput sequencing technology was used to analyze the changes in microbial diversity, related functional genera, and functional genes under optimal conditions, providing a theoretical and practical basis for the engineering application of the novel bacteria-algae symbiosis system in biogas slurry treatment and resource utilization.
Keywords: nutrient removal and recovery, leachate, anammox, partial nitrification, algae-bacteria interaction
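For the proposed comparison across initial ammonia nitrogen concentrations, the basic figure of merit is removal efficiency; a minimal sketch with hypothetical influent and effluent values (not data from the study):

```python
def removal_efficiency(influent_mg_l, effluent_mg_l):
    """Percent nitrogen (e.g. NH4+-N or TN) removed across treatment."""
    return 100.0 * (influent_mg_l - effluent_mg_l) / influent_mg_l

# Hypothetical diluted-leachate ammonia levels (mg N/L)
for c_in, c_out in [(500.0, 50.0), (1000.0, 250.0)]:
    print(f"{c_in} -> {c_out} mg N/L: {removal_efficiency(c_in, c_out):.0f}% removed")
```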
Procedia PDF Downloads 40
21 Investigation of Delamination Process in Adhesively Bonded Hardwood Elements under Changing Environmental Conditions
Authors: M. M. Hassani, S. Ammann, F. K. Wittel, P. Niemz, H. J. Herrmann
Abstract:
Application of engineered wood, especially in the form of glued-laminated timber, has increased significantly. Recent progress in plywood made of high-strength and high-stiffness hardwoods, like European beech, gives designers more freedom through increased dimensional stability and load-bearing capacity. However, the strong hygric dependence of basically all mechanical properties renders many innovative ideas futile. The tendency of hardwood toward higher moisture sorption and swelling coefficients leads to significant residual stresses in glued-laminated configurations, cross-laminated patterns in particular. These stress fields cause the initiation and evolution of cracks in the bond-lines, resulting in interfacial de-bonding, loss of structural integrity, and reduction of load-carrying capacity. Consequently, delamination of glued-laminated timber made of hardwood elements can be considered the dominant failure mechanism in such composite elements. In addition, long-term creep and mechano-sorption under changing environmental conditions lead to loss of stiffness and can amplify delamination growth over the lifetime of a structure, even after decades. In this study, we investigate the delamination process of adhesively bonded hardwood (European beech) elements subjected to changing climatic conditions. To gain further insight into the long-term performance of adhesively bonded elements during the design phase of new products, the development and verification of an authentic moisture-dependent constitutive model for various species is of great significance. Since a comprehensive moisture-dependent rheological model comprising all possibly emerging deformation mechanisms has been missing until now, a 3D orthotropic elasto-plastic, visco-elastic, mechano-sorptive material model for wood, with all material constants defined as functions of moisture content, was developed.
Apart from the solid wood adherends, the adhesive layer also plays a crucial role in the generation and distribution of the interfacial stresses. The adhesive can be treated as a continuum layer constructed from finite elements and represented as a homogeneous, isotropic material. To obtain a realistic assessment of the mechanical performance of the adhesive layer and a detailed look at the interfacial stress distributions, a generic constitutive model including all potentially activated deformation modes, namely elastic, plastic, and visco-elastic creep, was developed. We focused our studies on the three most common adhesive systems for structural timber engineering: one-component polyurethane (PUR), melamine-urea-formaldehyde (MUF), and phenol-resorcinol-formaldehyde (PRF). The corresponding numerical integration approaches, with additive decomposition of the total strain, are implemented within the ABAQUS FEM environment by means of the user subroutine UMAT. To predict the true stress state, we perform a history-dependent sequential moisture-stress analysis using the developed material models for both the wood substrate and the adhesive layer. Prediction of the delamination process is founded on the fracture-mechanical properties of the adhesive bond-line, measured under different levels of moisture content, and the application of cohesive interface elements. Finally, we compare the numerical predictions with experimental observations of de-bonding in glued-laminated samples under changing environmental conditions.
Keywords: engineered wood, adhesive, material model, FEM analysis, fracture mechanics, delamination
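The additive decomposition of total strain mentioned for the UMAT implementation can be written, under the usual assumptions for moisture-dependent wood rheology (the symbols here are illustrative and may differ from the authors' own notation), as:

```latex
% Additive split of the total strain into the deformation mechanisms
% listed in the abstract (illustrative notation):
\varepsilon_{\mathrm{tot}}
  = \underbrace{\varepsilon_{\mathrm{el}}}_{\text{elastic}}
  + \underbrace{\varepsilon_{\mathrm{pl}}}_{\text{plastic}}
  + \underbrace{\varepsilon_{\mathrm{ve}}}_{\text{visco-elastic creep}}
  + \underbrace{\varepsilon_{\mathrm{ms}}}_{\text{mechano-sorptive creep}}
  + \underbrace{\varepsilon_{\omega}}_{\text{moisture-induced swelling/shrinkage}}
```

Each contribution is evaluated incrementally at the current moisture content during the sequential moisture-stress analysis, with the elastic part recovering the stress via the orthotropic compliance.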
Procedia PDF Downloads 436
20 Adaptable Path to Net Zero Carbon: Feasibility Study of Grid-Connected Rooftop Solar PV Systems with Rooftop Rainwater Harvesting to Decrease Urban Flooding in India
Authors: Rajkumar Ghosh, Ananya Mukhopadhyay
Abstract:
India has seen enormous urbanization in recent years, resulting in increased energy consumption and water demand in its metropolitan regions. The adoption of grid-connected solar rooftop systems and rainwater harvesting has gained significant popularity in urban areas to address these challenges while also boosting sustainability and environmental consciousness. Grid-connected solar rooftop systems offer a long-term solution to India's growing energy needs. Solar panels are erected on the rooftops of residential and commercial buildings to generate power by utilizing the abundant solar energy available across the country. Solar rooftop systems generate clean, renewable electricity, reducing reliance on fossil fuels and lowering greenhouse gas emissions, in line with India's goal of reducing its carbon footprint. Urban residents and companies can save money on electricity by generating their own and potentially selling excess power back to the grid through net metering arrangements. India offers several financial incentives (a 40% subsidy for system capacities of 1 kW to 3 kW) to stimulate the installation of solar rooftop systems, making them an economically viable option for city dwellers, and provides subsidies of up to 70% in special states such as Uttarakhand, Sikkim, Himachal Pradesh, Jammu & Kashmir, and Lakshadweep. Incorporating solar rooftops into urban infrastructure contributes to sustainable urban expansion by alleviating pressure on traditional energy sources, improving air quality, and improving the reliability of the power supply. Rainwater harvesting is another key component of India's sustainable urban development. It comprises collecting and storing rainwater for non-potable applications such as irrigation, toilet flushing, and groundwater recharge.
Rainwater harvesting helps to conserve water resources by lowering the demand on freshwater sources. This technology is crucial in water-stressed areas to ensure a sustainable water supply. Excessive rainwater runoff in metropolitan areas can lead to urban flooding. Solar PV systems combined with rooftop rainwater harvesting absorb and channel excess rainwater, which helps to reduce flooding and waterlogging in smart cities. Rainwater harvesting systems are inexpensive and quick to set up, making them an attractive option for city dwellers and businesses looking to save money on water. Rainwater harvesting systems are now compulsory in several Indian states for specified types of buildings (by-law: rooftop space ≥ 300 sq. m.), ensuring widespread adoption. Finally, grid-connected solar rooftop systems and rainwater harvesting are important to India's long-term urban development. They not only reduce the environmental impact of urbanization but also empower individuals and businesses to control their energy and water requirements. The G20 Summit in New Delhi, which focused on green financing, fossil fuel phaseout, and the renewable energy transition, reaffirmed India's commitment to battling climate change by doubling renewable energy capacity. To address climate change and mitigate global warming, India intends to attain 280 GW of solar renewable energy by 2030 and net zero carbon emissions by 2070. With continued government support and increased awareness, these strategies will help India develop a more resilient and sustainable urban future.
Keywords: grid-connected solar PV system, rooftop rainwater harvesting, urban flooding, groundwater, net zero carbon emission
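As a hedged illustration of the subsidy figures quoted above (40% for 1-3 kW residential systems), the per-kW benchmark cost below is a hypothetical input, not a number from the abstract:

```python
def subsidised_cost(capacity_kw, cost_per_kw, subsidy_rate=0.40):
    """Out-of-pocket cost after the central subsidy for 1-3 kW rooftop systems.
    subsidy_rate=0.40 reflects the 40% figure cited for general states."""
    gross = capacity_kw * cost_per_kw
    return gross * (1.0 - subsidy_rate)

# Hypothetical benchmark cost of 50,000 INR per installed kW
for kw in (1, 2, 3):
    print(f"{kw} kW: {subsidised_cost(kw, 50_000):,.0f} INR after 40% subsidy")
```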
Procedia PDF Downloads 91
19 Transforming Emergency Care: Revolutionizing Obstetrics and Gynecology Operations for Enhanced Excellence
Authors: Lolwa Alansari, Hanen Mrabet, Kholoud Khaled, Abdelhamid Azhaghdani, Sufia Athar, Aska Kaima, Zaineb Mhamdia, Zubaria Altaf, Almunzer Zakaria, Tamara Alshadafat
Abstract:
Introduction: The Obstetrics and Gynecology Emergency Department at Alwakra Hospital has faced significant challenges, further worsened by the impact of the COVID-19 pandemic. These include overcrowding, extended wait times, and a notable surge in demand for emergency care services. Moreover, prolonged waiting times have emerged as a primary factor in patients leaving without being seen (LWBS) or unexpectedly absconding. Addressing the issue of insufficient patient mobility in the obstetrics and gynecology emergency department has brought about substantial improvements in patient care, healthcare administration, and overall departmental efficiency. These changes have not only alleviated overcrowding but have also elevated the quality of emergency care, resulting in higher patient satisfaction, better outcomes, and operational rewards. Methodology: The COVID-19 pandemic served as a catalyst for substantial transformations in the obstetrics and gynecology emergency department, aligning seamlessly with the strategic direction of Hamad Medical Corporation (HMC). The fundamental aim of this initiative was to revolutionize the operational efficiency of the OB-GYN ED. To accomplish this mission, a range of transformations was initiated, focusing on essential areas such as digitizing systems, optimizing resource allocation, enhancing budget efficiency, and reducing overall costs. The project utilized the Plan-Do-Study-Act (PDSA) model, with a diverse team collecting baseline data and introducing throughput improvements. Post-implementation data and feedback were analysed, leading to the integration of effective interventions into standard procedures.
These interventions included optimized space utilization, real-time communication, bedside registration, technology integration, pre-triage screening, enhanced communication and patient education, consultant presence, and a culture of continuous improvement. These strategies significantly reduced waiting times, enhancing both patient care and operational efficiency. Results: Results demonstrated a substantial reduction in overall average waiting time, dropping from 35 to approximately 14 minutes by August 2023. The wait times for priority 1 cases were reduced from 22 to 0 minutes, and for priority 2 cases from 32 to approximately 13.6 minutes. The proportion of patients spending less than 8 hours in the OB ED observation beds rose from 74% in January 2022 to over 98% in 2023. Notably, there was a remarkable decrease in LWBS and absconded patient rates from 2020 to 2023. Conclusion: The project initiated a profound change in the department's operational environment. Efficiency became deeply embedded in the unit's culture, promoting teamwork among staff that went beyond the project's original focus and had a positive influence on operations in other departments. This effectiveness not only made processes more efficient but also resulted in significant cost reductions for the hospital. These cost savings were achieved by reducing wait times, which in turn led to fewer prolonged patient stays and reduced the need for additional treatments. These continuous improvement initiatives have now become an integral part of the Obstetrics and Gynecology Division's standard operating procedures, ensuring that the positive changes brought about by the project persist and evolve over time. Keywords: overcrowding, waiting time, person-centered care, quality initiatives
Procedia PDF Downloads 65

18 Extracellular Polymeric Substances Study in an MBR System for Fouling Control
Authors: Dimitra C. Banti, Gesthimani Liona, Petros Samaras, Manasis Mitrakas
Abstract:
Municipal and industrial wastewaters are often treated biologically, by the activated sludge process (ASP). The ASP not only requires large aeration and sedimentation tanks, but also generates large quantities of excess sludge. An alternative technology is the membrane bioreactor (MBR), which replaces two stages of the conventional ASP—clarification and settlement—with a single, integrated biotreatment and clarification step. The advantages offered by the MBR over conventional treatment include a reduced footprint and lower sludge production through maintaining a high biomass concentration in the bioreactor. Notwithstanding these advantages, the widespread application of the MBR process is constrained by membrane fouling. Fouling leads to permeate flux decline, making more frequent membrane cleaning and replacement necessary and resulting in increased operating costs. In general, membrane fouling results from the interaction between the membrane material and the components in the activated sludge liquor. The latter includes substrate components, cells, cell debris and microbial metabolites, such as Extracellular Polymeric Substances (EPS) and Soluble Microbial Products (SMP). The challenge for effective MBR operation is to minimize the rate of Transmembrane Pressure (TMP) increase. This can be achieved in several ways, one of which is the addition of specific additives that enhance the coagulation and flocculation of the compounds responsible for fouling, hence reducing biofilm formation on the membrane surface and limiting the fouling rate. In this project the effectiveness of a non-commercial composite coagulant was studied as an agent for fouling control in a lab-scale MBR system consisting of two aerated tanks. A flat sheet membrane module with 0.40 μm pore size was submerged into the second tank. The system was fed by 50 L/d of municipal wastewater collected from the effluent of the primary sedimentation basin.
The TMP increase rate, which is directly related to fouling growth, was monitored by a PLC system. EPS, MLSS and MLVSS measurements were performed on samples of mixed liquor; in addition, influent and effluent samples were collected for the determination of physicochemical characteristics (COD, BOD5, NO3-N, NH4-N, Total N and PO4-P). The coagulant was added at concentrations of 2, 5 and 10 mg/L during a period of 2 weeks and the results were compared with the control system (without coagulant addition). EPS fractions were extracted by a three-stage physical-thermal treatment allowing the identification of Soluble EPS (SEPS) or SMP, Loosely Bound EPS (LBEPS) and Tightly Bound EPS (TBEPS). Protein and carbohydrate concentrations were measured in the EPS fractions by the modified Lowry method and the Dubois method, respectively. Addition of coagulant at 2 mg/L did not affect SEPS proteins in comparison with the control process, and their values varied between 32 and 38 mg/g VSS. However, a coagulant dosage of 5 mg/L resulted in a slight increase of SEPS proteins to 35-40 mg/g VSS, while 10 mg/L coagulant further increased SEPS to 44-48 mg/g VSS. Similar results were obtained for SEPS carbohydrates. Carbohydrate values without coagulant addition were similar to the corresponding values measured for 2 mg/L coagulant; the addition of 5 mg/L coagulant resulted in a slight increase of SEPS carbohydrates to 6-7 mg/g VSS, while a dose of 10 mg/L further increased the carbohydrate content to 9-10 mg/g VSS. Total LBEPS and TBEPS, consisting of the proteins and carbohydrates of LBEPS and TBEPS respectively, presented similar variations with the addition of the coagulant. Total LBEPS at a 2 mg/L dose was almost equal to 17 mg/g VSS, and increased to 22 and 29 mg/g VSS with the addition of 5 mg/L and 10 mg/L of coagulant respectively. Total TBEPS was almost 37 mg/g VSS at a coagulant dose of 2 mg/L and increased to 42 and 51 mg/g VSS at 5 mg/L and 10 mg/L doses, respectively.
Therefore, it can be concluded that coagulant addition could potentially affect microorganism activity, with EPS excreted in greater amounts. Nevertheless, the EPS increase, mainly the SEPS increase, resulted in a higher membrane fouling rate, as reflected by the corresponding TMP increase rate. However, although the addition of the coagulant affected the EPS content of the reactor mixed liquor, it did not change the filtration process: an effluent of high quality was produced, with COD values as low as 20-30 mg/L. Keywords: extracellular polymeric substances, MBR, membrane fouling, EPS
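The per-biomass EPS values quoted above (mg/g VSS) come from normalizing the protein or carbohydrate concentration measured in each extract to the volatile suspended solids of the sample. A minimal sketch of that bookkeeping, using hypothetical measurement values rather than the study's raw data, might look like:

```python
def eps_per_vss(extract_conc_mg_per_l, extract_volume_l, biomass_g_vss):
    """Normalize an EPS fraction measurement (modified Lowry for protein,
    Dubois for carbohydrate) to biomass, in mg per g VSS."""
    return extract_conc_mg_per_l * extract_volume_l / biomass_g_vss

# Hypothetical numbers: 120 mg/L protein measured in a 0.5 L SEPS extract
# recovered from mixed liquor solids containing 1.6 g VSS.
seps_protein = eps_per_vss(120.0, 0.5, 1.6)   # 37.5 mg/g VSS

# The total for a fraction is the sum of its protein and carbohydrate parts.
total_seps = seps_protein + eps_per_vss(20.0, 0.5, 1.6)
```

The same normalization applies to the LBEPS and TBEPS fractions from the later extraction stages, which is what makes the per-dose comparisons in the text commensurable.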
Procedia PDF Downloads 268

17 Targeting Tumour Survival and Angiogenic Migration after Radiosensitization with an Estrone Analogue in an in vitro Bone Metastasis Model
Authors: Jolene M. Helena, Annie M. Joubert, Peace Mabeta, Magdalena Coetzee, Roy Lakier, Anne E. Mercier
Abstract:
Targeting the distant tumour and its microenvironment whilst preserving bone density is important in improving the outcomes of patients with bone metastases. 2-Ethyl-3-O-sulphamoyl-estra-1,3,5(10),16-tetraene (ESE-16) is an in-silico-designed 2-methoxyestradiol analogue aimed at enhancing the parent compound's cytotoxicity and providing a more favourable pharmacokinetic profile. In this study, the potential radiosensitization effects of ESE-16 were investigated in an in vitro bone metastasis model consisting of murine pre-osteoblastic (MC3T3-E1) and pre-osteoclastic (RAW 264.7) bone cells, metastatic prostate (DU 145) and breast (MDA-MB-231) cancer cells, as well as human umbilical vein endothelial cells (HUVECs). Cytotoxicity studies were conducted on all cell lines via spectrophotometric quantification of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide. The experimental set-up consisted of flow cytometric analysis of cell cycle progression and apoptosis detection (Annexin V-fluorescein isothiocyanate) to determine the lowest ESE-16 and radiation doses that induce apoptosis and significantly reduce cell viability. Subsequent experiments entailed a 24-hour low-dose ESE-16 exposure followed by a single dose of radiation. Termination proceeded 2, 24 or 48 hours thereafter. The effect of the combination treatment was investigated on osteoclasts via tartrate-resistant acid phosphatase (TRAP) activity and actin ring formation assays. Tumour cell experiments included investigation of mitotic indices via haematoxylin and eosin staining; pro-apoptotic signalling via spectrophotometric quantification of caspase 3; deoxyribonucleic acid (DNA) damage via micronuclei analysis and histone H2A.X phosphorylation (γ-H2A.X); and Western blot analyses of bone morphogenetic protein-7 and matrix metalloproteinase-9.
HUVEC experiments included flow cytometric quantification of cell cycle progression and free radical production; fluorescent examination of cytoskeletal morphology; invasion and migration studies on an xCELLigence platform; and Western blot analyses of hypoxia-inducible factor 1-alpha and vascular endothelial growth factor receptors 1 and 2. Tumour cells yielded half-maximal growth inhibitory concentration (GI50) values in the nanomolar range. ESE-16 concentrations of 235 nM (DU 145) and 176 nM (MDA-MB-231) and a radiation dose of 4 Gy were found to be significant in the cell cycle and apoptosis experiments. Bone and endothelial cells were exposed to the same doses as DU 145 cells. Cytotoxicity studies on bone cells showed that RAW 264.7 cells were more sensitive to the combination treatment than MC3T3-E1 cells. Mature osteoclasts were more sensitive than pre-osteoclasts with respect to TRAP activity; however, actin ring morphology was retained. Mitotic arrest was evident in tumour and endothelial cells in the mitotic index and cell cycle experiments. Increased caspase 3 activity and superoxide production indicated pro-apoptotic signalling in tumour and endothelial cells. Increased micronuclei numbers and γ-H2A.X foci indicated increased DNA damage in tumour cells. Compromised actin and tubulin morphologies and decreased invasion and migration were observed in endothelial cells. Western blot analyses revealed reduced metastatic and angiogenic signalling. ESE-16-induced radiosensitization inhibits metastatic signalling and tumour cell survival whilst preferentially preserving bone cells. This low-dose combination treatment strategy may improve the quality of life of patients with metastatic bone disease. Future studies will include 3-dimensional in vitro and murine in vivo models. Keywords: angiogenesis, apoptosis, bone metastasis, cancer, cell migration, cytoskeleton, DNA damage, ESE-16, radiosensitization
Procedia PDF Downloads 162

16 Hybrid GNN Based Machine Learning Forecasting Model For Industrial IoT Applications
Authors: Atish Bagchi, Siva Chandrasekaran
Abstract:
Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN)-based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and consequential financial losses.
Method: The data stored within a process control system, e.g., Industrial IoT or a Data Historian, is generally sampled during data acquisition from the sensor (source) and when persisting in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposed a hybrid forecasting and classification model which combines the expressive and extrapolation capability of a GNN, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the Deep Learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (by 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors. Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes.
The model can interface with a plant's 'process control system' in real time to perform forecasting and classification tasks, aiding asset management engineers in operating their machines more efficiently and reducing unplanned downtimes. A series of trials is planned for this model in other manufacturing industries. Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning
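The abstract does not publish its feature computations, but entropy and spectral-change estimates of the kind it describes can be sketched in plain Python. The window length, histogram bin count and distance metric below are illustrative assumptions, not the authors' implementation:

```python
import math
from collections import Counter

def shannon_entropy(window, bins=8):
    """Histogram-based Shannon entropy (bits) of one sensor window."""
    lo, hi = min(window), max(window)
    width = (hi - lo) / bins or 1.0          # guard against constant windows
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in window)
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def power_spectrum(window):
    """Naive DFT power spectrum (adequate for short windows)."""
    n = len(window)
    spec = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(window))
        im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(window))
        spec.append(re * re + im * im)
    return spec

def spectral_change(prev_spec, curr_spec):
    """L2 distance between successive normalized spectra."""
    def norm(s):
        total = sum(s) or 1.0
        return [v / total for v in s]
    return math.sqrt(sum((a - b) ** 2
                         for a, b in zip(norm(prev_spec), norm(curr_spec))))
```

Feeding these two per-window scalars to the forecaster alongside the raw samples is one way to surface contextual outliers that point-outlier detectors miss: a value inside the normal range still perturbs the window's entropy and spectral signature.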
Procedia PDF Downloads 150

15 An Integrated Multisensor/Modeling Approach Addressing Climate Related Extreme Events
Authors: H. M. El-Askary, S. A. Abd El-Mawla, M. Allali, M. M. El-Hattab, M. El-Raey, A. M. Farahat, M. Kafatos, S. Nickovic, S. K. Park, A. K. Prasad, C. Rakovski, W. Sprigg, D. Struppa, A. Vukovic
Abstract:
A clear distinction between weather and climate is necessary because, while they are closely related, there are still important differences. Climate change is identified when we compute the statistics of the observed changes in weather over space and time. In this work we show how the changing climate contributes to the frequency, magnitude and extent of different extreme events, using a multi-sensor approach with some synergistic modeling activities. We explore satellite observations of dust over North Africa, the Gulf Region and the Indo-Gangetic basin, as well as dust versus anthropogenic pollution events over the Delta region in Egypt and over Seoul, through remote sensing, and examine the effects of dust and haze on aerosol optical properties. Dust impact on the retreat of the glaciers in the Himalayas is also presented. In this study we also focus on the identification and monitoring of a massive dust plume that blew off the western coast of Africa towards the Atlantic on October 8th, 2012, right before the development of Hurricane Sandy. There is evidence that dust aerosols played a non-trivial role in the cyclogenesis of Sandy. Moreover, a special dust event, "An American Haboob" in Arizona, is discussed, as it was predicted hours in advance thanks to the great improvements in numerical land-atmosphere modeling, computing power and remote sensing of dust events. We therefore performed a full numerical simulation of that event using the coupled atmospheric-dust model NMME-DREAM, after generating a mask of the potentially dust-productive regions using land cover and vegetation data obtained from satellites. Climate change also contributes to the deterioration of different marine habitats. In that regard we also present work dealing with change detection analysis of marine habitats around the city of Hurghada, Red Sea, Egypt.
The motivation for this work came from the fact that the coral reefs at Hurghada have undergone significant decline. They are damaged, displaced, polluted, stepped on, and blasted off, in addition to the effects of climate change on the reefs. One of the most pressing issues affecting reef health is mass coral bleaching, which results from an interaction between human activities and climatic changes. Over another location, namely California, we have observed highly variable amounts of precipitation across many timescales, from the hourly to the climate timescale. Frequently, heavy precipitation occurs, causing damage to property and life (floods, landslides, etc.). These extreme events, this variability, and the lack of good medium- to long-range predictability of precipitation are already a challenge to those who manage wetlands, coastal infrastructure, agriculture and fresh water supply. Adding to the current challenges for long-range planning is the issue of climate change. It is known that La Niña and El Niño affect precipitation patterns, which in turn are entwined with global climate patterns. We have studied the ENSO impact on precipitation variability over different climate divisions in California. On the other hand, the Nile Delta has lately experienced an increase in the underground water table as well as waterlogging, bogging and soil salinization. These impacts pose a major threat to the Delta region's heritage and existing communities. There has been an ongoing effort to address these vulnerabilities by looking into many adaptation strategies. Keywords: remote sensing, modeling, long range transport, dust storms, North Africa, Gulf Region, India, California, climate extremes, sea level rise, coral reefs
Procedia PDF Downloads 488

14 Hydrocarbon Source Rocks of the Maragh Low
Authors: Elhadi Nasr, Ibrahim Ramadan
Abstract:
Biostratigraphical analyses of well sections from the Maragh Low in the Eastern Sirt Basin have allowed high-resolution correlations to be undertaken. Full integration of these data with available palaeoenvironmental, lithological, gravity, seismic, aeromagnetic, igneous, radiometric and wireline log information, and a geochemical analysis of source rock quality and distribution, has led to a more detailed understanding of the geological and structural history of this area. Pre-Sirt Unconformity, two superimposed rifting cycles have been identified. The oldest is represented by the Amal Group of sediments and is of Late Carboniferous, Kasimovian/Gzhelian, to Middle Triassic, Anisian, age. Unconformably overlying it is a younger rift cycle, represented by the Sarir Group of sediments, of Early Cretaceous, late Neocomian to Aptian, age. Overlying the Sirt Unconformity is the marine Late Cretaceous section. An assessment of pyrolysis results and a palynofacies analysis has allowed hydrocarbon source facies and quality to be determined. There are a number of hydrocarbon source rock horizons in the Maragh Low; these are sometimes vertically stacked and are of fair to excellent quality. The oldest identified source rock is the Triassic Shale; this unit is unconformably overlain by sandstones belonging to the Sarir Group and conformably overlies a Triassic Siltstone unit. Palynological dating of the Triassic Shale unit indicates a Middle Triassic, Anisian, age. The Triassic Shale is interpreted to have been deposited in a lacustrine palaeoenvironment. This is evidenced in particular by the dark, fine-grained, organic-rich nature of the sediment and is supported by palynofacies analysis and by the recovery of fish fossils. Geochemical analysis of the Triassic Shale indicates total organic carbon varying between 1.37% and 3.53%. S2 pyrolysate yields vary between 2.15 mg/g and 6.61 mg/g and hydrogen indices vary between 156.91 and 278.91.
The source quality of the Triassic Shale varies from fair to very good/rich. Given its thermal maturity, it is now a very good source for light oil and gas; it was once a very good to rich oil source. The Early Barremian Shale was also deposited in a lacustrine palaeoenvironment. Recovered palynomorphs indicate an Early Cretaceous, late Neocomian to early Barremian, age. The Early Barremian Shale is conformably underlain and overlain by sandstone units belonging to the Sarir Group of sediments, which are also of Early Cretaceous age. Geochemical analysis of the Early Barremian Shale indicates that it is a good oil source and was originally very good. Total organic carbon varies between 3.59% and 7%. S2 varies between 6.30 mg/g and 10.39 mg/g and the hydrogen indices vary between 148.4 and 175.5. A Late Barremian Shale unit has also been identified in the central Maragh Low. Geochemical analyses indicate that total organic carbon varies between 1.05% and 2.38%, S2 pyrolysate between 1.6 and 5.34 mg/g, and the hydrogen index between 152.4 and 224.4. It is a good oil source rock which is now mature. In addition to the non-marine hydrocarbon source rocks pre-Sirt Unconformity, three formations in the overlying Late Cretaceous section also provide hydrocarbon-quality source rocks. Interbedded shales within the Rachmat Formation of Late Cretaceous, early Campanian, age have total organic carbon ranging between 0.7% and 1.47%, S2 pyrolysate varying between 1.37 and 4.00 mg/g, and hydrogen indices varying between 195.7 and 272.1. The indication is that this unit would provide a fair gas source to a good oil source. Geochemical analyses of the overlying Tagrifet Limestone indicate that total organic carbon varies between 0.26% and 1.01%. S2 pyrolysate varies between 1.21 and 2.16 mg/g and hydrogen indices vary between 195.7 and 465.4.
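The TOC, S2 and hydrogen index values quoted throughout are linked by the standard Rock-Eval definition HI = S2 / TOC × 100, expressed in mg HC per g TOC. As a quick consistency check (a sketch, not part of the authors' workflow) against the lean Triassic Shale end-member reported above:

```python
def hydrogen_index(s2_mg_per_g_rock, toc_percent):
    """Rock-Eval Hydrogen Index in mg HC per g TOC."""
    return s2_mg_per_g_rock / (toc_percent / 100.0)

# Triassic Shale end-member values from the text: TOC 1.37 %, S2 2.15 mg/g.
hi_lean = hydrogen_index(2.15, 1.37)   # ~156.9, close to the reported 156.91
```

The same relation reproduces the order of magnitude of the other reported HI ranges, which is why HI is read directly as an indicator of oil-prone versus gas-prone organic matter.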
For the overlying Sirt Shale Formation of Late Cretaceous, late Campanian, age, total organic carbon varies between 1.04% and 1.51%, S2 pyrolysate varies between 4.65 mg/g and 6.99 mg/g, and the hydrogen indices vary between 151 and 462.9. The study has proven that both the Sirt Shale Formation and the Tagrifet Limestone are good to very good and rich sources for oil in the Maragh Low. High-resolution biostratigraphical interpretations have been integrated and calibrated with thermal maturity determinations (Vitrinite Reflectance (%Ro), Spore Colour Index (SCI) and Tmax (ºC)) and the determined present-day geothermal gradient of 25 ºC/km for the Maragh Low. Interpretation of the generated basin modelling profiles allows a detailed prediction of the timing of maturation of these source horizons and leads to a determination of the amounts of missing section at major unconformities. From the results, the top of the oil window (0.72% Ro) is picked as high as 10,700’ and the base of the oil window (1.35% Ro), assuming a linear trend and by projection, is picked as low as 18,000’ in the Maragh Low. For the Triassic Shale, the early phase of oil generation was in the Late Palaeocene / Early to Middle Eocene and the main phase of oil generation was in the Middle to Late Eocene. The Early Barremian Shale reached the main phase of oil generation in the Early Oligocene, with late generation being reached in the Middle Miocene. For the Rakb Group section (Rachmat Formation, Tagrifet Limestone and Sirt Shale Formation), the early phase of oil generation started in the Late Eocene, with the main phase of generation between the Early Oligocene and the Early Miocene.
From studying maturity profiles and from regional considerations it can be predicted that up to 500’ of sediment may have been deposited and then eroded at the Sirt Unconformity in the central Maragh Low, while up to 2000’ of sediment may have been deposited and then eroded to the south of the trough. Keywords: geochemical analysis, source rocks, wells, Eastern Sirt Basin
Procedia PDF Downloads 408

13 A Spatial Repetitive Controller Applied to an Aeroelastic Model for Wind Turbines
Authors: Riccardo Fratini, Riccardo Santini, Jacopo Serafini, Massimo Gennaretti, Stefano Panzieri
Abstract:
This paper presents a nonlinear differential model for a three-bladed horizontal axis wind turbine (HAWT) suited for control applications. It is based on an 8-dof lumped-parameter structural dynamics model coupled with a quasi-steady sectional aerodynamics model. In particular, using the Euler-Lagrange equation (energetic variation approach), the authors derive, and subsequently validate, the model. For the derivation of the aerodynamic model, Greenberg's theory, an extension of the theory proposed by Theodorsen to the case of thin airfoils undergoing pulsating flows, is used. Specifically, in this work, the authors restricted that theory under the hypothesis of low perturbation reduced frequency k, which causes the lift deficiency function C(k) to be real and equal to 1. Furthermore, the expressions of the aerodynamic loads are obtained using the quasi-steady strip theory (Hodges and Ormiston), as a function of the chordwise and normal components of the relative velocity between flow and airfoil, Ut and Up, their derivatives, and the section angular velocity ε˙. For the validation of the proposed model, the authors carried out open- and closed-loop simulations of a 5 MW HAWT, characterized by radius R = 61.5 m and mean chord c = 3 m, with a nominal angular velocity Ωn = 1.266 rad/s. The first analysis performed is the steady-state solution, where a uniform wind Vw = 11.4 m/s is considered and a collective pitch angle θ = 0.88◦ is imposed. During this step, the authors noticed that the proposed model is intrinsically periodic due to the effect of the wind and of the gravitational force. In order to reject this periodic trend in the model dynamics, the authors propose a collective repetitive control algorithm coupled with a PD controller.
In particular, when the reference command to be tracked and/or the disturbance to be rejected are periodic signals with a fixed period, repetitive control strategies can be applied due to their high precision, simple implementation and little performance dependency on system parameters. The functional scheme of a repetitive controller is quite simple: given a periodic reference command, it is composed of a control block Crc(s) usually added to an existing feedback control system. The control block contains a free time-delay system e^(−τs) in a positive feedback loop and a low-pass filter q(s). While the time-delay term reduces the stability margin, the low-pass filter is added to ensure stability. It is worth noting that, in this work, the authors propose a phase shift for the controller, and the delay system has been modified as e^(−(T−γk)s), where T is the period of the signal and γk is a phase shift of k samples of the same periodic signal. The phase-shifting technique is particularly useful in non-minimum-phase systems, such as flexible structures: using the phase shift, the iterative algorithm can reach convergence also at high frequencies. Notice that, in our case study, the shift of k samples depends both on the rotor angular velocity Ω and on the rotor azimuth angle Ψ: we refer to this controller as a spatial repetitive controller. The collective repetitive controller has also been coupled with a C(s) = PD(s) block, in order to damp oscillations of the blades. The performance of the spatial repetitive controller is compared with an industrial PI controller. In particular, starting from a wind speed Vw = 11.4 m/s, the controller is asked to maintain the nominal angular velocity Ωn = 1.266 rad/s after an instantaneous increase of wind speed (Vw = 15 m/s).
Then, a purely periodic external disturbance is introduced in order to stress the capabilities of the repetitive controller. The results of the simulations show that, contrary to a simple PI controller, the spatial repetitive-PD controller has the capability to reject both external disturbances and the periodic trend in the model dynamics. Finally, the nominal value of the angular velocity is reached, in accordance with results obtained with commercial software for a turbine of the same type. Keywords: wind turbines, aeroelasticity, repetitive control, periodic systems
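The delay-plus-low-pass structure described above has a compact discrete-time counterpart: a buffer one period long, read back with the phase shift γk, filtered and updated with the current error. The learning gain kr, filter constant q and buffer realization below are illustrative assumptions, not the authors' controller:

```python
def make_repetitive_controller(period_n, gamma=0, kr=0.5, q=0.95):
    """Internal-model repetitive controller: `buf` stores one period of the
    control signal; `gamma` realizes the phase shift of the delay term
    e^(-(T - gamma_k)s) in samples."""
    buf = [0.0] * period_n
    idx = 0

    def step(error):
        nonlocal idx
        # tap the value stored (period_n - gamma) samples ago
        tap = (idx - (period_n - gamma)) % period_n
        u = q * buf[tap] + kr * error   # low-pass weighting + learning update
        buf[idx] = u
        idx = (idx + 1) % period_n
        return u

    return step

# Against a persistent period-4 error pattern, the output builds up over
# successive periods toward the level needed to cancel it.
ctrl = make_repetitive_controller(4)
u = 0.0
for _ in range(800):
    u = ctrl(1.0)   # converges toward kr / (1 - q) = 10 for constant error
```

A spatial version of the kind described in the paper would index the buffer by rotor azimuth Ψ rather than by wall-clock time, so the stored correction stays aligned with the once-per-revolution disturbance even as Ω varies.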
Procedia PDF Downloads 249

12 The Impact of Neighborhood Effects on the Economic Mobility of the Inhabitants of Three Segregated Communities in Salvador (Brazil)
Authors: Stephan Treuke
Abstract:
The paper analyses the neighbourhood effects on the economic mobility of the inhabitants of three segregated communities of Salvador (Brazil), in other words, the socio-economic advantages and disadvantages affecting the lives of poor people due to their embeddedness in specific socio-residential contexts. Recent studies performed in Brazilian metropolises have concentrated on the structural dimensions of negative externalities in order to explain neighbourhood-level variations across a range of phenomena (delinquency, violence, access to the labour market and education) in spatially isolated and socially homogeneous slum areas (favelas). However, major disagreement remains over whether the contiguity between residents of poor neighbourhoods and higher-class condominium dwellers provides structures of opportunities or whether it fosters socio-spatial stigmatization. Based on a set of interviews investigating the variability of interpersonal networks and their activation in the struggle for economic inclusion, the study confirms that the proximity of Nordeste de Amaralina to middle-/upper-class communities positively affects access to labour opportunities. Nevertheless, residential stigmatization, as well as structures of social segmentation, annihilate these potentials. The lack of exposure to individuals and groups outside the favela's social, educational and cultural context restricts the structures of opportunities to the local level. Therefore, residents' interpersonal networks reveal a high degree of redundancy and localism, based on bonding ties connecting family and neighbourhood members. The resilience of segregational structures in Plataforma contributes to the naturalization of social distance patterns.
Its embeddedness in a socially homogeneous residential area (Subúrbio Ferroviário), growing informally and beyond official urban politics, encourages the construction of isotopic patterns of sociability, sharing the same values, social preferences, perspectives and behaviour models. Whereas its spatial isolation correlates with the scarcity of economic opportunities, the social heterogeneity of the Fazenda Grande II interviewees and the socialising effects of public institutions mitigate the negative repercussions of segregation. The networks' composition admits a higher degree of heterophily and a greater proportion of bridging ties, accounting for access to broader information assets and facilitating economic mobility. The variability observed within the three different scenarios urges reflection on the responsibility of urban politics when it comes to the prevention or consolidation of the social segregation process in Salvador. Instead of promoting the local development of the favela Plataforma, public housing programs prioritize technocratic housing solutions without providing for the residents' socio-economic integration. The impact of negative externalities related to the homogeneously poor neighbourhood is intensified in peripheral areas, rendering their inhabitants socially invisible and isolated from other social groups. The example of Nordeste de Amaralina portrays the failing interest of urban politics in bridging the social distances structuring Brazilian society's rigid stratification model, founded on mechanisms of segmentation (unequal access to the labour market and education system, public transport, social security and legal protection) and generating permanent conflicts between the two socioeconomically distant groups living in geographic contiguity. Finally, in the case of Fazenda Grande II, the public investments in both housing projects and complementary infrastructure (e.g.
schools, hospitals, community center, police stations, recreation areas) contribute to the residents' socio-economic inclusion.
Keywords: economic mobility, neighborhood effects, Salvador, segregation
Procedia PDF Downloads 279
11 Mapping the Neurotoxic Effects of Sub-Toxic Manganese Exposure: Behavioral Outcomes, Imaging Biomarkers, and Dopaminergic System Alterations
Authors: Katie M. Clark, Adriana A. Tienda, Krista C. Paffenroth, Lindsey N. Brigante, Daniel C. Colvin, Jose Maldonado, Erin S. Calipari, Fiona E. Harrison
Abstract:
Manganese (Mn) is an essential trace element required for human health; it contributes to antioxidant defenses and to the development and function of dopaminergic neurons. However, chronic low-level Mn exposure, such as through contaminated drinking water, poses risks that may contribute to neurodevelopmental and neurodegenerative conditions, including attention deficit hyperactivity disorder (ADHD). Pharmacological inhibition of the dopamine transporter (DAT) blocks reuptake, elevates synaptic dopamine, and alleviates ADHD symptoms. This study aimed to determine whether Mn exposure in juvenile mice modifies their response to the DAT blockers amphetamine and methylphenidate, and to utilize neuroimaging methods to visualize and quantify Mn distribution across dopaminergic brain regions. Male and female heterozygous DATᵀ³⁵⁶ᴹ and wild-type littermates were randomly assigned to receive control (2.5% Stevia) or high-manganese (2.5 mg/ml Mn + 2.5% Stevia) water ad libitum from weaning (21-28 days) for 4-5 weeks. Mice underwent repeated testing in locomotor activity chambers for three consecutive days (60 min) to ensure that they were fully habituated to the environments. On the fourth day, a 3-hour activity session was conducted following treatment with amphetamine (3 mg/kg) or methylphenidate (5 mg/kg). The second drug was administered in a second 3-hour activity session following a 1-week washout period. Following the washout, the mice were given one last injection of amphetamine and euthanized one hour later. Magnetic resonance relaxometry (MRR) was then performed on the ex vivo brains using a 7 Tesla imaging system to map T1- and T2-weighted (T1W, T2W) relaxation times. Mn's inherent paramagnetic properties shorten both T1W and T2W times, which enhances signal intensity and contrast, enabling effective visualization of Mn accumulation throughout the brain. A subset of mice was treated with amphetamine 1 hour before euthanasia. 
SmartSPIM light sheet microscopy of cleared whole brains with cFos and tyrosine hydroxylase (TH) labeling enabled unbiased automated counting and densitometric analysis of TH- and cFos-positive cells. Immunohistochemistry was conducted to measure synaptic protein markers and quantify changes in neurotransmitter regulation. Mn exposure elevated brain Mn levels and potentiated stimulant effects in males. The globus pallidus, substantia nigra, thalamus, and striatum exhibited more pronounced T1W shortening, indicating regional susceptibility to Mn accumulation (p<0.0001, 2-way ANOVA). In the cleared whole brains, initial analyses suggest that TH and cFos co-staining mirrors the behavioral data, with decreased co-staining in DATᵀ³⁵⁶ᴹ⁺/⁻ mice. Ongoing studies will identify the molecular basis of the effect of Mn, including changes to dopaminergic metabolism and transport and post-translational modification of the DAT. These findings demonstrate that alterations in T1W relaxation times, as measured by MRR, may serve as an early biomarker for Mn neurotoxicity. This neuroimaging approach exhibits remarkable accuracy in identifying Mn-susceptible brain regions, with a spatial resolution and sensitivity that surpass current conventional dissection and mass spectrometry approaches. The capability to label and map TH and cFos expression across the entire brain provides insights into whole-brain neuronal activation and its connections to functional neural circuits and behavior following amphetamine and methylphenidate administration.
Keywords: manganese, environmental toxicology, dopamine dysfunction, biomarkers, drinking water, light sheet microscopy, magnetic resonance relaxometry (MRR)
Procedia PDF Downloads 9
10 Surface Acoustic Wave (SAW)-Induced Mixing Enhances Biomolecules Kinetics in a Novel Phase-Interrogation Surface Plasmon Resonance (SPR) Microfluidic Biosensor
Authors: M. Agostini, A. Sonato, G. Greco, M. Travagliati, G. Ruffato, E. Gazzola, D. Liuni, F. Romanato, M. Cecchini
Abstract:
Since their first demonstration in the early 1980s, surface plasmon resonance (SPR) sensors have been widely recognized as useful tools for detecting chemical and biological species, and the interest of the scientific community in this technology has grown rapidly over the past two decades owing to its high sensitivity, label-free operation, and capability for real-time detection. Recent works have suggested that a turning point in SPR sensor research would be the combination of SPR strategies with other technologies in order to reduce human handling of samples, improve integration, and enhance plasmonic sensitivity. In this light, microfluidics has been attracting growing interest. By properly designing microfluidic biochips it is possible to miniaturize the analyte-sensitive areas with an overall reduction of the chip dimensions, reduce liquid reagent and sample volumes, improve automation, and increase the number of experiments in a single biochip by multiplexing approaches. However, as the fluidic channel dimensions approach the micron scale, laminar flows become dominant owing to the low Reynolds numbers that typically characterize microfluidics. In these environments mixing is usually dominated by diffusion, which can be prohibitively slow and lead to long-lasting biochemistry experiments. An elegant method to overcome these issues is to actively perturb the laminar flow by exploiting surface acoustic waves (SAWs). With this work, we demonstrate a new approach for SPR biosensing based on the combination of microfluidics, SAW-induced mixing, and real-time phase-interrogation grating-coupling SPR technology. On a single lithium niobate (LN) substrate, the nanostructured SPR sensing areas, an interdigital transducer (IDT) for SAW generation, and polydimethylsiloxane (PDMS) microfluidic chambers were fabricated. 
SAWs impinging on the microfluidic chamber generate acoustic streaming inside the fluid, leading to chaotic advection and thus improved fluid mixing, whilst analyte binding is detected via an SPR method based on SPP excitation on a gold metallic grating under azimuthal orientation and phase interrogation. Our device has been fully characterized in order to separate, for the first time, the unwanted SAW heating effect from the fluid stirring inside the microchamber, both of which affect molecular binding dynamics. An avidin/biotin assay and thiolated polyethylene glycol (bPEG-SH) were exploited as a model biological interaction and a non-fouling layer, respectively. SAW-enhanced mixing reduced biosensing kinetics times by ≈ 82% for bPEG-SH adsorption onto gold and ≈ 24% for avidin/biotin binding (≈ 50% and 18%, respectively, compared to the heating-only condition). These results demonstrate that our biochip can significantly reduce the duration of bioreactions that usually require long times (e.g., PEG-based sensing layers, low-concentration analyte detection). The sensing architecture proposed here represents a promising new technology satisfying the major biosensing requirements: scalability and high-throughput capability. The detection system and biochip dimensions could be further reduced and integrated; in addition, the possibility of reducing biological experiment duration via SAW-driven active mixing could easily be combined with multiplexing platforms for parallel real-time sensing. In general, the technology reported in this study can be straightforwardly adapted to a great number of biological systems and sensing geometries.
Keywords: biosensor, microfluidics, surface acoustic wave, surface plasmon resonance
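The reported time reductions follow directly from pseudo-first-order binding kinetics: faster effective transport raises the observed rate constant, which shrinks the time needed to reach a given surface coverage. A minimal sketch of that relationship (the rate constants below are illustrative, not values measured in the study):

```python
import math

def time_to_fraction(k_obs, fraction=0.9):
    """Time for pseudo-first-order binding theta(t) = 1 - exp(-k_obs * t)
    to reach the given fractional surface coverage."""
    return -math.log(1.0 - fraction) / k_obs

# Illustrative rate constants (min^-1), not measured values from the study.
k_diffusion = 0.01  # diffusion-limited transport, no mixing
k_mixed = 0.05      # with SAW-enhanced mixing

t_diff = time_to_fraction(k_diffusion)
t_mix = time_to_fraction(k_mixed)
reduction = 100.0 * (1.0 - t_mix / t_diff)
print(f"time to 90% coverage: {t_diff:.0f} min -> {t_mix:.0f} min "
      f"({reduction:.0f}% reduction)")
```

Because the time to any fixed coverage scales as 1/k_obs, a fivefold increase in the observed rate constant yields an 80% reduction in assay time regardless of the target coverage.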
Procedia PDF Downloads 280
9 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion
Authors: Ali Kazemi
Abstract:
In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning strategies have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study offers a groundbreaking method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is meticulously designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This period, marked by significant volatility and transformation in financial markets, affords a solid basis for training and testing our predictive model. Our algorithm integrates diverse data sources to construct a dynamic financial graph that accurately reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into the market's buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate critical macroeconomic indicators such as interest rates, inflation rates, GDP growth, and unemployment rates into our model. Our GCN algorithm is adept at learning the relational patterns among individual financial instruments represented as nodes in a comprehensive market graph. 
Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complex network of influences governing market movements. Complementing this, our LSTM algorithm is trained on sequences of the spatial-temporal representations learned by the GCN, enriched with historical price and volume data. This lets the LSTM capture and predict temporal market trends accurately. In the comprehensive evaluation of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting daily price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that depend on predicting the direction of price movements. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to revolutionize investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting
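The three evaluation metrics (MAE, RMSE, and directional accuracy) can be computed from any actual/predicted price series in a few lines. A minimal sketch, using made-up numbers rather than the study's data:

```python
def evaluate_forecasts(actual, predicted):
    """Return MAE, RMSE, and directional accuracy for a forecast series.

    Directional accuracy is the fraction of steps where the predicted
    move from the previous actual value has the same sign as the
    realized move (one common convention; the abstract does not state
    which variant the authors used).
    """
    n = len(actual)
    errors = [predicted[i] - actual[i] for i in range(n)]
    mae = sum(abs(e) for e in errors) / n
    rmse = (sum(e * e for e in errors) / n) ** 0.5
    hits = sum(
        1 for i in range(1, n)
        if (predicted[i] - actual[i - 1]) * (actual[i] - actual[i - 1]) > 0
    )
    directional = hits / (n - 1)
    return mae, rmse, directional

# Illustrative series (arbitrary prices, not the study's dataset).
actual = [100.0, 101.0, 100.5, 102.0]
predicted = [100.2, 100.8, 100.9, 101.5]
mae, rmse, acc = evaluate_forecasts(actual, predicted)
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  directional={acc:.0%}")
```

To obtain percentage metrics like those reported (0.85% MAE, 1.2% RMSE), the same function would be applied to percentage returns rather than raw prices.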
Procedia PDF Downloads 66
8 Improvement in the Photocatalytic Activity of Nanostructured Manganese Ferrite – Type of Materials by Mechanochemical Activation
Authors: Katerina Zaharieva, Katya Milenova, Zara Cherkezova-Zheleva, Alexander Eliyas, Boris Kunev, Ivan Mitov
Abstract:
The synthesized nanosized manganese ferrite-type samples have been tested as photocatalysts in the reaction of oxidative degradation of the model contaminant Reactive Black 5 (RB5) dye in aqueous solutions under UV irradiation. This azo dye is applied in the textile-coloring industry and is discharged into waterways, causing pollution. The co-precipitation procedure has been used for the synthesis of the manganese ferrite-type materials: Sample 1 - Mn0.25Fe2.75O4, Sample 2 - Mn0.5Fe2.5O4, and Sample 3 - MnFe2O4, from 0.03M aqueous solutions of MnCl2•4H2O, FeCl2•4H2O and/or FeCl3•6H2O and 0.3M NaOH in appropriate amounts. The mechanochemical activation of the co-precipitated ferrite-type samples has been performed in argon (Samples 1 and 2) or in air (Sample 3) for 2 hours at a milling speed of 500 rpm. The mechanochemical treatment has been carried out in a high-energy planetary ball mill (type PM 100, Retsch, Germany). The mass ratio between balls and powder was 30:1. As a result, mechanochemically activated Sample 4 - Mn0.25Fe2.75O4, Sample 5 - Mn0.5Fe2.5O4, and Sample 6 - MnFe2O4 have been obtained. The synthesized manganese ferrite-type photocatalysts have been characterized by X-ray diffraction and Moessbauer spectroscopy. The registered X-ray diffraction patterns and Moessbauer spectra of the co-precipitated ferrite-type materials show the presence of manganese ferrite and an additional akaganeite phase. The presence of manganese ferrite and small amounts of iron phases is established in the mechanochemically treated samples. The calculated average crystallite size of the manganese ferrites varies within the range 7 – 13 nm. This result is confirmed by the Moessbauer study. The registered spectra show superparamagnetic behavior of the prepared materials at room temperature. The photocatalytic investigations have been made using a polychromatic UV-A lamp (Sylvania BLB, 18 W) with a wavelength maximum at 365 nm. 
The intensity of light irradiation upon the manganese ferrite-type photocatalysts was 0.66 mW·cm⁻². The photocatalytic reaction of oxidative degradation of RB5 dye was carried out in a semi-batch slurry photocatalytic reactor with 0.15 g of ferrite-type powder and 150 ml of 20 ppm dye aqueous solution, under magnetic stirring at 400 rpm and a continuously fed air flow. The suspensions were kept in the dark for 30 min to reach adsorption-desorption equilibrium, and then the UV light was turned on. At regular time intervals, aliquot parts of the suspension were taken out and centrifuged to separate the powder from the solution. The residual dye concentrations were established by a UV-Vis absorbance single-beam spectrophotometer (CamSpec M501, UK) measuring in the wavelength region from 190 to 800 nm. The photocatalytic measurements determined that the apparent pseudo-first-order rate constants, calculated from the linear slopes of fits to the first-order kinetic equation, increase in the following order: Sample 3 (1.1×10⁻³ min⁻¹) < Sample 1 (2.2×10⁻³ min⁻¹) < Sample 2 (3.3×10⁻³ min⁻¹) < Sample 4 (3.8×10⁻³ min⁻¹) < Sample 6 (11×10⁻³ min⁻¹) < Sample 5 (15.2×10⁻³ min⁻¹). The mechanochemically activated manganese ferrite-type photocatalyst samples show a significantly higher degree of oxidative degradation of RB5 dye after 120 minutes of UV light illumination in comparison with the co-precipitated ferrite-type samples: Sample 5 (92%) > Sample 6 (91%) > Sample 4 (63%) > Sample 2 (53%) > Sample 1 (42%) > Sample 3 (15%). Summarizing the obtained results, we conclude that mechanochemical activation leads to a significant enhancement of the degree of oxidative degradation of the RB5 dye and of the photocatalytic activity of the tested manganese ferrite-type catalyst samples under our experimental conditions. 
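The apparent pseudo-first-order rate constants quoted above come from fitting ln(C0/C(t)) = k·t to the measured concentration decay. A minimal sketch of that fit, run on a synthetic decay curve rather than the study's measurements:

```python
import math

def pseudo_first_order_k(times, concentrations):
    """Estimate the apparent pseudo-first-order rate constant k (min^-1)
    by a least-squares fit of ln(C0 / C(t)) = k * t through the origin.
    Slope of a zero-intercept line: sum(t * y) / sum(t * t)."""
    c0 = concentrations[0]
    ys = [math.log(c0 / c) for c in concentrations]
    return sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)

# Synthetic decay generated with k = 3.3e-3 min^-1 (the value reported
# for Sample 2); concentrations in ppm, times in minutes.
k_true = 3.3e-3
times = [0, 30, 60, 90, 120]
conc = [20.0 * math.exp(-k_true * t) for t in times]
k_fit = pseudo_first_order_k(times, conc)
print(f"fitted k = {k_fit:.2e} min^-1")
```

On noiseless synthetic data the fit recovers the generating constant; with real aliquot measurements the quality of the linearization is what justifies calling the kinetics pseudo-first-order.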
The mechanochemically activated Mn0.5Fe2.5O4 ferrite-type material displays the highest photocatalytic activity (15.2×10⁻³ min⁻¹) and degree of oxidative degradation of the RB5 dye (92%) of all the synthesized samples. A particularly significant improvement in the degree of oxidative degradation of RB5 dye (91%) has been determined for the mechanochemically treated MnFe2O4 ferrite-type sample, which has the highest extent of substitution of iron ions by manganese ions, compared with the co-precipitated MnFe2O4 sample (15%). The mechanochemically activated manganese ferrite-type samples show good photocatalytic properties in the reaction of oxidative degradation of RB5 azo dye in aqueous solutions and could find potential application for dye removal from wastewaters originating from the textile industry.
Keywords: nanostructured manganese ferrite-type materials, photocatalytic activity, Reactive Black 5, water treatment
Procedia PDF Downloads 347
7 Times2D: A Time-Frequency Method for Time Series Forecasting
Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan
Abstract:
Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as autoregressive integrated moving average (ARIMA) and exponential smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent neural networks (RNNs) and their variants, such as long short-term memory (LSTM) networks and gated recurrent units (GRUs), have been widely adopted for modeling sequential data. However, they often struggle to capture local trends and rapid fluctuations. Convolutional neural networks (CNNs), particularly temporal convolutional networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle to capture relationships between distant time points due to the locality of one-dimensional convolution kernels. 
Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, multi-layer perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates, in parallel, 2D spectrogram and derivative heatmap techniques. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the use of powerful computer vision techniques to capture a range of intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrate that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. 
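The two parallel 2D views described above can be sketched in a few lines. This is our reading of the abstract, not the authors' implementation; the window length, hop size, and windowing function are arbitrary choices for illustration:

```python
import numpy as np

def to_2d_representations(x, win=16, hop=8):
    """Build the two 2D views: a magnitude spectrogram from windowed FFTs
    (frequency domain, periodicity) and a 'derivative heatmap' stacking
    first and second differences (time domain, sharp changes and
    turning points)."""
    frames = np.stack([x[i:i + win] for i in range(0, len(x) - win + 1, hop)])
    spectrogram = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))
    d1 = np.diff(x, n=1)              # sharp fluctuations
    d2 = np.diff(x, n=2)              # turning points (curvature)
    heatmap = np.stack([d1[1:], d2])  # trim d1 so both rows have len(x) - 2
    return spectrogram, heatmap

t = np.arange(128)
x = np.sin(2 * np.pi * t / 16) + 0.1 * t  # periodic component plus trend
spec, heat = to_2d_representations(x)
print(spec.shape, heat.shape)
```

Both outputs are ordinary 2D arrays, which is the point of the transformation: they can be fed to standard image-style convolutional backbones.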
Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
Keywords: derivative patterns, spectrogram, time series forecasting, Times2D, 2D representation
Procedia PDF Downloads 42
6 A Comprehensive Study of Spread Models of Wildland Fires
Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions for ecosystems, communities, and the environment. They lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the areas affected. This research explores different forest fire spread models and presents a comprehensive review of the various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors such as weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research examines the approaches, assumptions, and findings derived from various models. Using a comparative approach, a critical analysis is provided that identifies patterns, strengths, and weaknesses among these models. The purpose of the survey is to advance wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights provided by synthesizing established information. 
Fire spread models provide insights into potential fire behavior, enabling authorities to make informed decisions about evacuation activities, allocating resources for firefighting efforts, and planning preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies, as they help in assessing the fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and advances our understanding of the way forest fires spread. Some of the known models in this field are Rothermel's wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, the cellular automata model, and others. The key characteristics that these models consider include weather (factors such as wind speed and direction), topography (factors such as landscape elevation), and fuel availability (factors such as vegetation type), among others. The models discussed are physics-based, data-driven, or hybrid models, some utilizing ML techniques such as attention-based neural networks to enhance model performance. In order to lessen the destructive effects of forest fires, this initiative aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action. Emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling
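The cellular automata approach mentioned above is the easiest of these model families to illustrate: the landscape is a grid of cells, fuel cells ignite probabilistically from burning neighbors, and burning cells burn out. A toy sketch (not any published model; the spread probability and 4-neighborhood are arbitrary simplifications, with no wind, slope, or fuel heterogeneity):

```python
import random

def spread_step(grid, p_spread=0.6, rng=None):
    """One step of a minimal cellular-automata fire model.
    Cell states: 0 = unburnt fuel, 1 = burning, 2 = burnt out."""
    rng = rng or random.Random(0)
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                new[r][c] = 2  # burning cells burn out this step
                # Fire may spread to each unburnt 4-neighbor.
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < rows and 0 <= cc < cols
                            and grid[rr][cc] == 0
                            and rng.random() < p_spread):
                        new[rr][cc] = 1
    return new

grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1  # ignition at the center
for _ in range(3):
    grid = spread_step(grid)
```

Real CA fire models replace the uniform `p_spread` with a function of wind, slope, and fuel type per cell, which is where the weather, topography, and fuel factors listed above enter.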
Procedia PDF Downloads 81