Search results for: accumulated degree days
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6339


9 Clinical Course and Prognosis of Cutaneous Manifestations of COVID-19: A Systematic Review of Reported Cases

Authors: Hilary Modir, Kyle Dutton, Michelle Swab, Shabnam Asghari

Abstract:

Since its emergence, the cutaneous manifestations of COVID-19 have been documented in the literature. However, the majority are case reports with significant limitations in appraisal quality, thus leaving the role of dermatological manifestations of COVID-19 underexplored. The primary aim of this review was to systematically examine clinical patterns of dermatological manifestations as reported in the literature. This study was designed as a systematic review of case reports. The inclusion criteria consisted of all published reports and articles regarding COVID-19 in English, from September 1st, 2019, until June 22nd, 2020. The population consisted of confirmed cases of COVID-19 with associated cutaneous signs and symptoms. Exclusion criteria included research in planning stages, protocols, book reviews, news articles, review studies, and policy analyses. With the collaboration of a librarian, a search strategy was created consisting of a mixture of keyword terms and controlled vocabulary. Electronic databases searched were MEDLINE via PubMed, EMBASE, CINAHL, Web of Science, LILACS, PsycINFO, WHO Global Literature on Coronavirus Disease, Cochrane Library, Campbell Collaboration, PROSPERO, WHO International Clinical Trials Registry Platform, Australian and New Zealand Clinical Trials Registry, U.S. National Institutes of Health Ongoing Trials Register, AAD Registry, OSF Preprints, SSRN, medRxiv and bioRxiv. The study selection featured an initial pre-screening of titles and abstracts by one independent reviewer. Results were verified by re-examining a random sample of 1% of excluded articles. Eligible studies progressed to full-text review by two calibrated independent reviewers. Covidence was used to store and extract data, such as citation information and findings pertaining to COVID-19 and cutaneous signs and symptoms. 
Data analysis and summarization methodology reflect the framework proposed by PRISMA and recommendations set out by Cochrane and the Joanna Briggs Institute for conducting systematic reviews. The Oxford Centre for Evidence-Based Medicine's levels of evidence were used to appraise the quality of individual studies. The literature search revealed a total of 1221 articles. After the abstract and full-text screening, only 95 studies met the eligibility criteria and proceeded to data extraction. Of these studies, 58% were case reports and 42% were case series. A total of 833 manifestations were reported in 723 confirmed COVID-19 cases. The most frequent lesions were maculopapular (23%), urticarial (15%) and pseudo-chilblains (13%); reported associated symptoms were pruritus (46% of lesions), erythema (16%), pain (14%), burning sensation (12%), and edema (4%). The most common lesion locations were the trunk (20%), lower limbs (19.5%), and upper limbs (17.7%). The time to resolution of lesions was between one and twenty-one days. In conclusion, over half of the reported cutaneous presentations in COVID-19-positive patients were maculopapular, urticarial or pseudo-chilblains, with the majority of lesions distributed on the extremities and trunk. As this review's sample only contained COVID-19-confirmed cases with skin presentations, it is difficult to deduce a direct relationship between skin findings and COVID-19. However, the findings suggest that acute onset of skin lesions, such as chilblain-like lesions, may be associated with COVID-19 or may warrant its consideration as part of the differential diagnosis.

Keywords: COVID-19, cutaneous manifestations, cutaneous signs, general dermatology, medical dermatology, SARS-CoV-2, skin and infectious disease, skin findings, skin manifestations

8 Development of a Core Set of Clinical Indicators to Measure Quality of Care for Thyroid Cancer: A Modified-Delphi Approach

Authors: Liane J. Ioannou, Jonathan Serpell, Cino Bendinelli, David Walters, Jenny Gough, Dean Lisewski, Win Meyer-Rochow, Julie Miller, Duncan Topliss, Bill Fleming, Stephen Farrell, Andrew Kiu, James Kollias, Mark Sywak, Adam Aniss, Linda Fenton, Danielle Ghusn, Simon Harper, Aleksandra Popadich, Kate Stringer, David Watters, Susannah Ahern

Abstract:

BACKGROUND: There are significant variations in the management, treatment and outcomes of thyroid cancer, particularly in the role of: diagnostic investigation and pre-treatment scanning; optimal extent of surgery (total or hemi-thyroidectomy); use of active surveillance for small low-risk cancers; central lymph node dissection (therapeutic or prophylactic); outcomes following surgery (e.g. recurrent laryngeal nerve palsy, hypocalcaemia, hypoparathyroidism); post-surgical hormone, calcium and vitamin D therapy; and provision and dosage of radioactive iodine treatment. A proven strategy to reduce variation in outcomes and improve survival is to measure and compare them using high-quality clinical registry data. Clinical registries provide the most effective means of collecting high-quality data and are a tool for quality improvement. Where they have been introduced at a state or national level, registries have become one of the most clinically valued tools for quality improvement. To benchmark clinical care, clinical quality registries require systematic measurement at predefined intervals and the capacity to report information back to participating clinical units. OBJECTIVE: The aim of this study was to develop a core set of clinical indicators that enable measurement and reporting of quality of care for patients with thyroid cancer. We hypothesise that measuring clinical quality indicators, developed to identify differences in quality of care across sites, will reduce variation and improve patient outcomes and survival, thereby lessening costs and the healthcare burden to the Australian community. METHOD: Preparatory work and scoping were conducted to identify existing high-quality clinical guidelines and best practice for thyroid cancer, both nationally and internationally, as well as relevant literature. A bi-national panel was invited to participate in a modified Delphi process. 
Panelists were asked to rate each proposed indicator on a Likert scale of 1–9 in a three-round iterative process. RESULTS: A total of 236 potential quality indicators were identified. One hundred and ninety-two indicators were removed to reflect the data capture by the Australian and New Zealand Thyroid Cancer Registry (ANZTCR) (from diagnosis to 90-days post-surgery). The remaining 44 indicators were presented to the panelists for voting. A further 21 indicators were later added by the panelists bringing the total potential quality indicators to 65. Of these, 21 were considered the most important and feasible indicators to measure quality of care in thyroid cancer, of which 12 were recommended for inclusion in the final set. The consensus indicator set spans the spectrum of care, including: preoperative; surgery; surgical complications; staging and post-surgical treatment planning; and post-surgical treatment. CONCLUSIONS: This study provides a core set of quality indicators to measure quality of care in thyroid cancer. This indicator set can be applied as a tool for internal quality improvement, comparative quality reporting, public reporting and research. Inclusion of these quality indicators into monitoring databases such as clinical quality registries will enable opportunities for benchmarking and feedback on best practice care to clinicians involved in the management of thyroid cancer.
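As a worked illustration of the rating process described above, the snippet below reduces one hypothetical round of 1-9 Likert ratings to an include/exclude decision. The indicator names, scores, and the median-of-at-least-7 inclusion rule are assumptions for illustration only; the abstract does not state the panel's exact consensus thresholds.

```python
from statistics import median

# Hypothetical round of panel ratings on a 1-9 Likert scale
# (indicator names and scores are illustrative, not from the study).
ratings = {
    "documented_preoperative_ultrasound": [8, 9, 7, 8, 9, 7, 8],
    "time_to_surgery_under_90_days":      [6, 7, 5, 6, 8, 6, 7],
    "postop_calcium_checked_24h":         [9, 8, 9, 9, 7, 8, 9],
}

def classify(scores, keep_threshold=7):
    """Keep an indicator when the panel's median rating meets the threshold."""
    m = median(scores)
    return ("include" if m >= keep_threshold else "exclude"), m

for name, scores in ratings.items():
    decision, m = classify(scores)
    print(f"{name}: median={m} -> {decision}")
```

In a real modified Delphi, indicators near the threshold would be fed back to panelists with the group distribution for re-rating in the next round.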

Keywords: clinical registry, Delphi survey, quality indicators, quality of care

7 Modeling the Human Harbor: An Equity Project in New York City, New York USA

Authors: Lauren B. Birney

Abstract:

The envisioned long-term outcome of this three-year research and implementation plan is for 1) teachers and students to design and build their own computational models of real-world environmental-human health phenomena occurring within the context of the “Human Harbor” and 2) project researchers to evaluate the degree to which these integrated Computer Science (CS) education experiences in New York City (NYC) public school classrooms (PreK-12) impact students’ computational-technical skill development, job readiness, career motivations, and measurable abilities to understand, articulate, and solve the underlying phenomena at the center of their models. This effort builds on the partnership’s successes over the past eight years in developing a benchmark model of restoration-based Science, Technology, Engineering, and Math (STEM) education for urban public schools and achieving relatively broad-based implementation in the nation’s largest public school system. The Billion Oyster Project Curriculum and Community Enterprise for Restoration Science (BOP-CCERS STEM + Computing) curriculum, teacher professional development, and community engagement programs have reached more than 200 educators and 11,000 students at 124 schools, with 84 waterfront locations and Out of School Time (OST) programs. The BOP-CCERS Partnership is poised to develop a more refined focus on integrating computer science across the STEM domains; teaching industry-aligned computational methods and tools; and explicitly preparing students from the city’s most under-resourced and underrepresented communities for upwardly mobile careers in NYC’s ever-expanding “digital economy,” in which jobs require computational thinking and an increasing percentage require discrete computer science technical skills. Project Objectives include the following: 1. 
Computational Thinking (CT) Integration: Integrate computational thinking core practices across the existing middle/high school BOP-CCERS STEM curriculum as a means of scaffolding toward long-term computer science and computational modeling outcomes. 2. Data Science and Data Analytics: Enable researchers to perform interviews with teachers, students, community members, partners, stakeholders, and STEM industry professionals, alongside collaborative analysis and data collection. As a centerpiece, the BOP-CCERS partnership will expand to include a dedicated computer science education partner. The New York City Department of Education (NYCDOE) Computer Science for All (CS4All) NYC will serve as the dedicated Computer Science (CS) lead, advising the consortium on integration and curriculum development. The BOP-CCERS Model™ also validates that, with appropriate application of technical infrastructure, intensive teacher professional development, and curricular scaffolding, socially connected science learning can be mainstreamed in the nation’s largest urban public school system. This is evidenced and substantiated in the initial phases of BOP-CCERS™. The BOP-CCERS™ student curriculum and teacher professional development have been implemented in approximately 24% of NYC public middle schools, reaching more than 250 educators and 11,000 students directly. BOP-CCERS™ is a fully scalable and transferable educational model, adaptable to all American school districts. In all settings of the proposed Phase IV initiative, the primary beneficiary group will be underrepresented NYC public school students who live in high-poverty neighborhoods and are traditionally underrepresented in the STEM fields, including African Americans, Latinos, English language learners, and children from economically disadvantaged households. 
In particular, BOP-CCERS Phase IV will explicitly prepare underrepresented students for skilled positions within New York City’s expanding digital economy, computer science, computational information systems, and innovative technology sectors.

Keywords: computer science, data science, equity, diversity and inclusion, STEM education

6 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning strategies have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study offers a novel method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning January 1, 2015, to December 31, 2023. This period, marked by significant volatility and transformation in financial markets, provides a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that accurately reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into the market's buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate critical macroeconomic indicators such as interest rates, inflation rates, GDP growth, and unemployment rates into our model. Our GCN algorithm is adept at learning the relational patterns among the financial instruments represented as nodes in a comprehensive market graph. 
Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complex network of influences governing market moves. Complementing this, our LSTM algorithm is trained on sequences of the spatial-temporal representation learned by the GCN, enriched with historical price and volume data. This lets the LSTM capture and predict temporal market trends accurately. In a comprehensive evaluation of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting daily price movements. The Root Mean Squared Error (RMSE) was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that depend on predicting the direction of price moves. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to inform investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
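The graph half of this fusion reduces to a simple propagation rule. The sketch below applies one GCN layer (symmetrically normalized adjacency with self-loops, times node features, times a weight matrix) to a toy market graph; the graph, features, and random weights are illustrative placeholders, not the trained model described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy market graph: 4 instruments (e.g., two indices, two cryptocurrencies).
# An edge marks an assumed co-movement relationship (illustrative only).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

X = rng.normal(size=(4, 5))   # node features, e.g., OHLC prices + volume
W = rng.normal(size=(5, 3))   # layer weights (random here, learned in practice)

# One GCN layer: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)
A_hat = A + np.eye(4)                      # add self-loops
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
H = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)

print(H.shape)  # node embeddings, one row per instrument
```

In the full pipeline these per-day node embeddings would be stacked into sequences and passed to the LSTM; the reported MAE and RMSE are then the usual mean absolute and root-mean-square errors of the predicted price movements.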

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

5 Climate Change Threats to UNESCO-Designated World Heritage Sites: Empirical Evidence from Konso Cultural Landscape, Ethiopia

Authors: Yimer Mohammed Assen, Abiyot Legesse Kura, Engida Esyas Dube, Asebe Regassa Debelo, Girma Kelboro Mensuro, Lete Bekele Gure

Abstract:

Climate change has recently posed severe threats to many cultural landscapes among UNESCO World Heritage Sites. The UNESCO State of Conservation (SOC) reports categorize flooding, temperature increase, and drought as threats to cultural landscapes. This study aimed to examine variations and trends in rainfall and temperature extreme events and their threats to the UNESCO-designated Konso Cultural Landscape in southern Ethiopia. The study used dense merged satellite-gauge station rainfall data (1981-2020) with a spatial resolution of 4 km by 4 km and observed maximum and minimum temperature data (1987-2020). Qualitative data were also gathered from cultural leaders, local administrators, and religious leaders using structured interview checklists. The spatial patterns, coefficients of variation, standardized anomalies, trends, and magnitudes of change of rainfall and temperature extreme events at both annual and seasonal levels were computed using the Mann-Kendall trend test and Sen's slope estimator in the Climate Data Tools (CDT) package. The Standardized Precipitation Index (SPI) was also used to calculate drought severity, frequency, and trend maps. The data gathered from key informant interviews and focus group discussions were coded and analyzed thematically to complement the statistical findings. Thematic areas that explain the impacts of extreme events on the cultural landscape were chosen for coding. The thematic analysis was conducted using NVivo software. The findings revealed that rainfall was highly variable and unpredictable, resulting in extreme drought and flood. There were significant (P<0.05) increasing trends in heavy rainfall (R10mm and R20mm) and the total amount of rain on wet days (PRCPTOT), which might have resulted in flooding. 
The study also confirmed that the absolute temperature extreme indices (TXx, TXn, and TNx) and the percentile-based temperature extreme indices (TX90p, TN90p, TX10p, and TN10p) showed significant (P<0.05) increasing trends, which are signals of warming in the study area. The results revealed that the frequency as well as the severity of drought at the 3-month (katana/hageya seasons) timescale was more pronounced than at the 12-month (annual) timescale. The highest number of droughts in 100 years is projected at the 3-month timescale across the study area. The findings also showed that frequent drought has led to loss of the grasses used for making traditional individual houses and multipurpose communal houses (pafta), food insecurity, migration, loss of biodiversity, and commodification of stones from terraces. On the other hand, the increasing trends in rainfall extreme indices resulted in the destruction of terraces, soil erosion, loss of life, and damage to property. The study shows that a persistent decline in farmland productivity, due to erratic and extreme rainfall and frequent drought occurrences, forced the local people to participate in non-farm activities and retreat from daily preservation and management of their landscape. Overall, the increasing rainfall and temperature extremes, coupled with the prevalence of drought, are thought to have an impact on the sustainability of the cultural landscape by disrupting ecosystem services and the livelihood of the community. Therefore, more localized adaptation and mitigation strategies for the changing climate are needed to maintain the sustainability of the Konso cultural landscape as a global cultural treasure and to strengthen the resilience of smallholder farmers.
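The Mann-Kendall test at the heart of this trend analysis is simple to state: its statistic S counts concordant minus discordant pairs over time, with S > 0 signalling an increasing trend. The sketch below is a minimal illustration with made-up index values, not the CDT implementation (which also computes the variance of S, significance, and Sen's slope).

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: sum of sign(x[j] - x[i]) over all pairs i < j.
    S > 0 suggests an increasing trend, S < 0 a decreasing one."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    return int(s)

# Illustrative annual rainfall-extreme index values (not the study's data).
increasing = [10, 12, 11, 15, 14, 18, 20]
print(mann_kendall_s(increasing))  # positive -> increasing trend
```

Significance testing then compares S (suitably normalized) against a standard normal distribution, which is where the P<0.05 thresholds quoted above come from.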

Keywords: adaptation, cultural landscape, drought, extremes indices

4 Hybrid GNN Based Machine Learning Forecasting Model For Industrial IoT Applications

Authors: Atish Bagchi, Siva Chandrasekaran

Abstract:

Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets. While the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour. Machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN) based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and consequential financial losses. 
Method: The data stored within a process control system, e.g., Industrial-IoT, Data Historian, is generally sampled during data acquisition from the sensor (source) and when persisting in the Data Historian to optimise storage and query performance. The sampling may inadvertently discard values that might contain subtle aspects of behavioural changes in machines. This research proposed a hybrid forecasting and classification model which combines the expressive and extrapolation capability of GNN enhanced with the estimates of entropy and spectral changes in the sampled data and additional temporal contexts to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the Deep Learning category of machine learning and interfaces with the sensors directly or through 'Process Data Historian', SCADA etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors. Conclusions: The research demonstrates that a hybrid GNN based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. 
The model can interface with a plant's 'process control system' in real time to perform forecasting and classification tasks that aid asset management engineers in operating their machines more efficiently and reducing unplanned downtimes. A series of trials is planned for this model in the future in other manufacturing industries.
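The entropy and spectral-change estimates that the hybrid model combines with the GNN can be illustrated as simple per-window features. The histogram binning and FFT-based centroid below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def shannon_entropy(window, bins=16):
    """Histogram-based Shannon entropy of a sensor window (in bits)."""
    counts, _ = np.histogram(window, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def spectral_centroid(window, fs=1.0):
    """Centre of mass of the magnitude spectrum; it shifts when the
    dominant frequency content of the signal changes."""
    mags = np.abs(np.fft.rfft(window - np.mean(window)))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return float((freqs * mags).sum() / mags.sum())

t = np.arange(0, 10, 0.01)                  # 10 s at 100 Hz
steady = np.sin(2 * np.pi * 1.0 * t)        # nominal machine behaviour
shifted = np.sin(2 * np.pi * 3.0 * t)       # changed behaviour
print(spectral_centroid(steady, fs=100), spectral_centroid(shifted, fs=100))
```

A sustained shift in either feature between successive windows is exactly the kind of subtle behavioural change that plain point-outlier detection would miss.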

Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning

3 Acute Severe Hyponatremia in Patient with Psychogenic Polydipsia, Learning Disability and Epilepsy

Authors: Anisa Suraya Ab Razak, Izza Hayat

Abstract:

Introduction: The diagnosis and management of severe hyponatremia in neuropsychiatric patients present a significant challenge to physicians. Several factors contribute, including diagnostic shadowing and attributing abnormal behavior to intellectual disability or psychiatric conditions. Hyponatremia is the commonest electrolyte abnormality in the inpatient population, ranging from mild/asymptomatic and moderate to severe levels with life-threatening symptoms such as seizures, coma and death. There are several documented fatal case reports in the literature of severe hyponatremia secondary to psychogenic polydipsia, often diagnosed only at autopsy. This paper presents a case study of acute severe hyponatremia in a neuropsychiatric patient with early diagnosis and admission to intensive care. Case study: A 21-year-old Caucasian male with known epilepsy and learning disability was admitted from residential living with generalized tonic-clonic self-terminating seizures after refusing medications for several weeks. Evidence of superficial head injury was detected on physical examination. His laboratory data demonstrated mild hyponatremia (125 mmol/L). Computed tomography imaging of his brain demonstrated no acute bleed or space-occupying lesion. He exhibited abnormal behavior - restlessness, drinking water from bathroom taps, inability to engage, paranoia, and hypersexuality. No collateral history was available to establish his baseline behavior. He was loaded with intravenous sodium valproate and levetiracetam. Three hours later, he developed vomiting and a generalized tonic-clonic seizure lasting forty seconds. He remained drowsy for several hours and regained minimal recovery of consciousness. A repeat set of blood tests demonstrated profound hyponatremia (117 mmol/L). Outcomes: He was referred to intensive care for peripheral intravenous infusion of 2.7% sodium chloride solution with two-hourly laboratory monitoring of sodium concentration. 
Laboratory monitoring identified dangerously rapid correction of serum sodium concentration, and hypertonic saline was switched to a 5% dextrose solution to reduce the risk of acute large-volume fluid shifts from the cerebral intracellular compartment to the extracellular compartment. He underwent urethral catheterization and produced 8 liters of urine over 24 hours. Serum sodium concentration remained stable after 24 hours of correction fluids. His GCS recovered to baseline after 48 hours with improvement in behavior - he engaged with healthcare professionals, understood the importance of taking medications, and admitted to illicit drug use and drinking massive amounts of water. He was transferred from high-dependency care to ward level and was initiated on multiple trials of anti-epileptics before achieving seizure-free days two weeks after resolution of the acute hyponatremia. Conclusion: Psychogenic polydipsia is often found in young patients with intellectual disability or psychiatric disorders. Patients drink large volumes of water daily, ranging from ten to forty liters, resulting in acute severe hyponatremia with mortality rates as high as 20%. Poor outcomes are due to the challenges physicians face in making an early diagnosis and treating acute hyponatremia safely. A low index of suspicion of water intoxication is required in this population, including patients with known epilepsy. Monitoring urine output proved to be clinically effective in aiding diagnosis. Early referral and admission to intensive care should be considered for safe correction of sodium concentration while minimizing the risk of fatal complications, e.g., central pontine myelinolysis.
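The abstract does not give the infusion arithmetic, but the widely taught Adrogue-Madias estimate illustrates why 2.7% saline demands such close monitoring. The patient weight and body-water fraction below are assumed for illustration; they are not stated in the case report.

```python
def adrogue_madias_delta_na(infusate_na, serum_na, weight_kg, tbw_fraction=0.6):
    """Estimated change in serum sodium (mmol/L) per litre of infusate
    (Adrogue-Madias). tbw_fraction ~0.6 for a young adult male."""
    tbw = tbw_fraction * weight_kg            # total body water in litres
    return (infusate_na - serum_na) / (tbw + 1)

# 2.7% NaCl contains roughly 462 mmol/L of sodium; 0.9% contains 154 mmol/L.
# Assumed 70 kg patient with the measured serum sodium of 117 mmol/L.
delta = adrogue_madias_delta_na(infusate_na=462, serum_na=117, weight_kg=70)
print(round(delta, 1))
```

At roughly 8 mmol/L per litre of infusate, a single litre already approaches the commonly cited 24-hour correction ceiling, consistent with the rapid over-correction detected by two-hourly monitoring in this case.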

Keywords: epilepsy, psychogenic polydipsia, seizure, severe hyponatremia

2 A Comprehensive Study of Spread Models of Wildland Fires

Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the areas impacted. This research explores different forest fire spread models and presents a comprehensive review of various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation that is used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors like weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research explores the approaches, assumptions, and findings derived from various models. By using a comparison approach, a critical analysis is provided by identifying patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights that are provided by synthesizing established information. 
Fire spread models provide insights into potential fire behavior, facilitating authorities to make informed decisions about evacuation activities, allocating resources for fire-fighting efforts, and planning for preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies as they help in assessing the fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and promotes our understanding of the way forest fires spread. Some of the known models in this field are Rothermel’s wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, cellular automata model, and others. The key characteristics that these models consider include weather (includes factors such as wind speed and direction), topography (includes factors like landscape elevation), and fuel availability (includes factors like types of vegetation) among other factors. The models discussed are physics-based, data-driven, or hybrid models, also utilizing ML techniques like attention-based neural networks to enhance the performance of the model. In order to lessen the destructive effects of forest fires, this initiative aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action. Emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
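Of the models surveyed, the cellular automaton is the simplest to sketch: each burning cell ignites neighbouring fuel cells with some probability, which fuller models modulate by wind, slope, and fuel type. The fixed spread probability and 4-neighbourhood below are simplifying assumptions for illustration:

```python
import random

EMPTY, FUEL, BURNING, BURNT = 0, 1, 2, 3

def step(grid, p_spread=0.6, rng=random.Random(42)):
    """One synchronous update: each burning cell ignites 4-neighbour fuel
    cells with probability p_spread, then burns out."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == BURNING:
                new[r][c] = BURNT
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols \
                            and grid[nr][nc] == FUEL \
                            and rng.random() < p_spread:
                        new[nr][nc] = BURNING
    return new

grid = [[FUEL] * 5 for _ in range(5)]
grid[2][2] = BURNING                    # ignition point
for _ in range(3):
    grid = step(grid)
print(sum(row.count(BURNT) for row in grid))  # cells consumed so far
```

Physics-based models such as FIRETEC replace the spread probability with computational fluid dynamics, while data-driven variants learn it from weather, topography, and fuel maps.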

Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling

Procedia PDF Downloads 70
1 Identification of the Antimicrobial Property of Double Metal Oxide/Bioactive Glass Nanocomposite Against Multi Drug Resistant Staphylococcus aureus Causing Implant Infections

Authors: M. H. Pazandeh, M. Doudi, S. Barahimi, L. Rahimzadeh Torabi

Abstract:

Judicious use of antibiotics is essential for reducing the occurrence of adverse effects and inhibiting the emergence of antibiotic resistance in microbial populations. A novel methodology for the local administration of antibiotics is therefore needed, with particular focus on localized infections caused by bacterial colonization of medical devices or implant materials. Bioactive glasses (BG) are extensively employed in regenerative medicine, encompassing a diverse range of materials used for drug delivery systems. In the present investigation, several carriers for imipenem and tetracycline (the single-oxide systems BG/SnO2 and BG/NiO with varying proportions of metal oxide, and the nanocomposite BG/SnO2/NiO) were synthesized by the sol-gel technique. The antibacterial efficacy of the synthesized samples was assessed by the disk diffusion method against Staphylococcus aureus as the bacterial model. Two samples, BG/10SnO2/10NiO and BG/20SnO2, were selected for bioactivity testing on the basis of their heightened bacterial inactivation. This evaluation entailed measuring the pH of simulated body fluid (SBF) solution and analyzing sample tablets by X-ray diffraction (XRD), scanning electron microscopy (SEM), and Fourier-transform infrared (FTIR) spectroscopy after immersion in SBF for 7, 14, and 28 days. The bioactivity of the composite bioactive glass sample was assessed by characterizing alterations in its surface morphology, structure, and chemical composition using SEM, FTIR, and XRD, simulating its behavior in biological environments.
SBF immersion was assessed over a 28-day period. Formation of a hydroxyapatite surface layer serves as a distinct indicator of bioactivity. Antibiotics were infused into the composite bioactive glass specimen separately, and the release kinetics of tetracycline and imipenem were then measured in simulated body fluid (SBF). Antimicrobial effectiveness against various bacterial strains has been demonstrated in numerous instances for bioactive glass compositions prepared by both melt and sol-gel techniques. An elevated concentration of calcium ions in solution has been observed to raise the pH level, and in aqueous suspensions bioactive glass particles exert a significant antimicrobial effect. The composite bioactive glass specimen exhibits a gradual, uninterrupted release over a span of 72 hours, which is highly desirable for a drug delivery system. The reduction in absorbance of the initial phosphate-buffered saline solution, which signals the loss of a portion of the antibiotic during the loading process, indicates successful bonding of the two antibiotics to the surfaces of the bioactive glass samples. The BG/10SnO2/10NiO sample exhibits a higher loading of particles than BG/20SnO2. The enriched sample demonstrates a heightened bactericidal impact on the bacteria under investigation while preserving its antibacterial characteristics. Tailored bioactive glass incorporating hydroxyapatite, with regulated and efficient release of drugs targeting bacterial infections, holds promise as a framework for bone implant scaffolds following rigorous clinical evaluation, thereby establishing potential future biomedical uses.
During the modification process, the introduction of metal oxides into bioactive glass resulted in improved antibacterial characteristics, particularly in the composite bioactive glass sample that displayed the highest level of efficiency.
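Sustained-release profiles like the 72-hour one reported above are commonly quantified by fitting cumulative release against a diffusion model. The sketch below fits the standard Higuchi square-root-of-time model; the time points, release fractions, and rate constant are illustrative placeholders, not measurements from this study.

```python
import math


def higuchi_fraction(t_hours, k):
    """Higuchi model: cumulative fraction released Q(t) = k * sqrt(t)."""
    return k * math.sqrt(t_hours)


def fit_higuchi_k(times, fractions):
    """Least-squares fit of k for Q = k*sqrt(t), constrained through the origin.

    Minimizing sum((f - k*sqrt(t))^2) gives
    k = sum(f * sqrt(t)) / sum(sqrt(t)^2) = sum(f * sqrt(t)) / sum(t).
    """
    num = sum(f * math.sqrt(t) for t, f in zip(times, fractions))
    den = sum(times)
    return num / den
```

A linear Q-versus-sqrt(t) plot (a good Higuchi fit) is consistent with diffusion-controlled release from the glass matrix; systematic deviation would suggest burst release or matrix erosion instead.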

Keywords: antibacterial, bioactive glasses, implant infections, multi drug resistant

Procedia PDF Downloads 86