Search results for: transport parameters
26 Advancing UAV Operations with Hybrid Mobile Network and LoRa Communications
Authors: Annika J. Meyer, Tom Piechotta
Abstract:
Unmanned Aerial Vehicles (UAVs) have increasingly become vital tools in various applications, including surveillance, search and rescue, and environmental monitoring. One common approach to ensure redundant communication systems when flying beyond visual line of sight is for UAVs to employ multiple mobile data modems from different providers. Although widely adopted, this approach suffers from several drawbacks, such as high costs, added weight and potential increases in signal interference. In light of these challenges, this paper proposes a communication framework intermeshing mobile networks and LoRa (Long Range) technology—a low-power, long-range communication protocol. LoRaWAN (Long Range Wide Area Network) is commonly used in Internet of Things applications, relying on stationary gateways and Internet connectivity. This paper, however, utilizes the underlying LoRa protocol, taking advantage of the protocol’s low power and long-range capabilities while ensuring efficiency and reliability. Conducted in collaboration with the Potsdam Fire Department, the implementation of mobile network technology in combination with the LoRa protocol in small UAVs (take-off weight < 0.4 kg), specifically designed for search and rescue and area monitoring missions, is explored. This research aims to test the viability of LoRa as an additional redundant communication system during UAV flights as well as its intermeshing with the primary, mobile network-based controller. The methodology focuses on direct UAV-to-UAV and UAV-to-ground communications, employing different spreading factors optimized for specific operational scenarios—short-range for UAV-to-UAV interactions and long-range for UAV-to-ground commands. This explored use case also dramatically reduces one of the major drawbacks of LoRa communication systems, as a line of sight between the modules is necessary for reliable data transfer, something that UAVs are uniquely suited to provide, especially when deployed as a swarm. Additionally, swarm deployment may enable UAVs that have lost contact with their primary network to reestablish their connection through another, better-situated UAV. The experimental setup involves multiple phases of testing, starting with controlled environments to assess basic communication capabilities and gradually advancing to complex scenarios involving multiple UAVs. Such a staged approach allows for meticulous adjustment of parameters and optimization of the communication protocols to ensure reliability and effectiveness. Furthermore, due to the close partnership with the Fire Department, the real-world applicability of the communication system is assured. The expected outcomes of this paper include a detailed analysis of LoRa's performance as a communication tool for UAVs, focusing on aspects such as signal integrity, range, and reliability under different environmental conditions. Additionally, the paper seeks to demonstrate the cost-effectiveness and operational efficiency of using a single type of communication technology that reduces UAV payload and power consumption. By shifting from traditional cellular network communications to a more robust and versatile cellular and LoRa-based system, this research has the potential to significantly enhance UAV capabilities, especially in critical applications where reliability is paramount.
The success of this paper could pave the way for broader adoption of LoRa in UAV communications, setting a new standard for UAV operational communication frameworks.
Keywords: LoRa communication protocol, mobile network communication, UAV communication systems, search and rescue operations
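As a rough illustration of the spreading-factor trade-off mentioned in the abstract (short-range UAV-to-UAV versus long-range UAV-to-ground links), the following Python sketch computes the LoRa symbol time and nominal bit rate for two hypothetical radio profiles; the specific spreading factor, bandwidth and coding-rate values are assumptions for illustration, not settings reported by the authors.

```python
# Hypothetical LoRa profiles: the SF/BW/CR values below are illustrative only.
from dataclasses import dataclass

@dataclass
class LoRaProfile:
    name: str
    spreading_factor: int   # SF7..SF12
    bandwidth_hz: float     # e.g. 125 kHz
    coding_rate: float      # 4/5 .. 4/8

    def symbol_time_s(self) -> float:
        # One LoRa symbol carries SF bits and lasts 2^SF / BW seconds.
        return (2 ** self.spreading_factor) / self.bandwidth_hz

    def bit_rate_bps(self) -> float:
        # Nominal bit rate: SF bits per symbol, reduced by the coding rate.
        return self.spreading_factor / self.symbol_time_s() * self.coding_rate

profiles = [
    LoRaProfile("UAV-to-UAV (short range)", spreading_factor=7, bandwidth_hz=125e3, coding_rate=4/5),
    LoRaProfile("UAV-to-ground (long range)", spreading_factor=12, bandwidth_hz=125e3, coding_rate=4/5),
]

for p in profiles:
    print(f"{p.name}: T_sym = {p.symbol_time_s()*1e3:.2f} ms, "
          f"nominal rate = {p.bit_rate_bps():.0f} bit/s")
```

Lower spreading factors give high data rates for frequent UAV-to-UAV exchanges, while higher spreading factors trade throughput for link budget on the long UAV-to-ground path, which is the design choice the abstract describes.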
Procedia PDF Downloads 44
25 The Use of Antioxidant and Antimicrobial Properties of Plant Extracts for Increased Safety and Sustainability of Dairy Products
Authors: Loreta Serniene, Dalia Sekmokiene, Justina Tomkeviciute, Lina Lauciene, Vaida Andruleviciute, Ingrida Sinkeviciene, Kristina Kondrotiene, Neringa Kasetiene, Mindaugas Malakauskas
Abstract:
One of the most important areas of product development and research in the dairy industry is product enrichment with active ingredients, leading to increased product safety and sustainability. One of the most rapidly expanding groups of active ingredients is plant CO₂ extracts with aromatic, antioxidant and antimicrobial properties. In this study, 15 plant extracts were evaluated based on their antioxidant and antimicrobial properties as well as sensory acceptance indicators for the development of new dairy products. In order to increase the total antioxidant capacity of the milk products, it was important to determine the content of phenolic compounds and the antioxidant activity of each CO₂ extract. The total phenolic content of fifteen different commercial CO₂ extracts was determined by the Folin-Ciocalteu reagent and expressed as milligrams of gallic acid equivalents (GAE) per gram of extract. The antioxidant activities were determined by the 2,2′-azinobis-(3-ethylbenzthiazoline)-6-sulfonate (ABTS) method. The study revealed that the antioxidant activities of the investigated CO₂ extracts vary from 4.478-62.035 µmole Trolox/g, while the total phenolic content was in the range of 2.021-38.906 mg GAE/g of extract. For example, the estimated antioxidant activity of Chinese cinnamon (Cinnamomum aromaticum) CO₂ extract was 62.023 ± 0.15 µmole Trolox/g and the total flavonoid content reached 17.962 ± 0.35 mg GAE/g. These two parameters suggest that cinnamon could be a promising supplement for the development of new cheese. The inhibitory effects of these essential oils were tested using the agar disc diffusion method against pathogenic bacteria most commonly found in dairy products. The results showed that the essential oils of lemon myrtle (Backhousia citriodora) and cinnamon (Cinnamomum cassia) have antimicrobial activity against E. coli, S. aureus, B. cereus, P. fluorescens, L. monocytogenes, Br. thermosphacta, P. aeruginosa and S. typhimurium, with inhibition zone diameters varying from 10 to 52 mm. The sensory taste acceptability of plant extracts in combination with a dairy product was evaluated by a group of sensory evaluation experts (31 individuals) by the criterion of overall taste acceptability on a scale of 0 (not acceptable) to 10 (very acceptable). Each of the tested samples included 200 g of natural unsweetened Greek yogurt without additives and 1 drop of a single plant extract (essential oil). The highest averages of overall taste acceptability were found for the samples with essential oils of orange (Citrus sinensis) - average score 6.67, lemon myrtle (Backhousia citriodora) – 6.62, elderberry flower (Sambucus nigra flos.) – 6.61, lemon (Citrus limon) – 5.75 and cinnamon (Cinnamomum cassia) – 5.41, respectively. The results of this study indicate plant extracts of Cinnamomum cassia and Backhousia citriodora as promising additives not only to increase the total antioxidant capacity of milk products and as alternative antibacterial agents to combat pathogenic bacteria commonly found in dairy products, but also as a desirable flavour for the palate of consumers with an expressed need for safe, sustainable and innovative dairy products. Acknowledgment: This research was funded by the European Regional Development Fund according to the supported activity 'Research Projects Implemented by World-class Researcher Groups' under Measure No.
01.2.2-LMT-K-718.
Keywords: antioxidant properties, antimicrobial properties, cinnamon, CO₂ plant extracts, dairy products, essential oils, lemon myrtle
Procedia PDF Downloads 206
24 Genetic Diversity of Exon-20 of the IIS6 of the Voltage Gated Sodium Channel Gene from Pyrethroid Resistant Anopheles Mosquitoes in Sudan Savannah Region of Jigawa State
Authors: Asma'u Mahe, Abdullahi A. Imam, Adamu J. Alhassan, Nasiru Abdullahi, Sadiya A. Bichi, Nura Lawal, Kamaluddeen Babagana
Abstract:
Malaria is a disease with global health significance. It is caused by parasites and transmitted by Anopheles mosquitoes. An increase in insecticide resistance threatens disease vector control. The strength of selection pressure acting on a mosquito population in relation to insecticide resistance can be assessed by determining the genetic diversity of a fragment spanning exon-20 of IIS6 of the voltage gated sodium channel (VGSC). Larval samples reared to adulthood were identified, and the kdr (knockdown resistance) profile was determined. The DNA sequences were used to assess the patterns of genetic differentiation by determining the levels of genetic variability between the Anopheles mosquitoes. Genetic differentiation of the Anopheles mosquitoes based on a portion of the voltage gated sodium channel gene was obtained. Polymorphisms were detected, and sequence variation was analyzed and presented as a phylogenetic tree. A phylogenetic tree of VGSC haplotypes was constructed for samples of the Anopheles mosquitoes using the maximum likelihood method in MEGA 6.0 software. DNA sequences were edited using the BioEdit sequence editor. The edited sequences were aligned with a reference sequence (Kisumu strain). Analyses were performed using DnaSP 5.10. Results for the genetic polymorphism parameters and haplotype reconstruction were presented as counts. Twenty sequences were used for the analysis. The selected region spanned positions 1-576; invariable (monomorphic) sites numbered 460, while variable (polymorphic) sites numbered 5, giving the total number of mutations observed in this study. Mutations obtained from the study were at codon 105, where TTC (phenylalanine) replaces TCC (serine), and codon 513, where TAG (termination) replaces TTG (leucine); the mutations at codons 153, 300 and 553 were non-synonymous. From the constructed phylogenetic tree, some groups were shown to be closer to Exon20Gambiae Kisumu (reference strain) while retaining some genetic distance, whereas 5-Exon20Gambiae-F I13.ab1, 18-Exon20Gambiae-F C17.ab1, and 2-Exon20Gambiae-F C13.ab1 clustered together, genetically differentiated away from the others. Mutations observed in this study can be attributed to the high insecticide resistance profile recorded in the study areas. Haplotype networks of the pattern of genetic variability and polymorphism for the fragment of the VGSC sequences of sampled Anopheles mosquitoes revealed low haplotype numbers for the present study. Haplotypes are sets of closely linked DNA variations on the X-chromosome. Haplotypes were scaled accordingly to reflect their respective frequencies. A low haplotype number, four VGSC-1014F haplotypes, was observed in this study. A positive association was previously established between a low haplotype number of VGSC diversity and pyrethroid resistance through the kdr mechanism. Significant values (P < 0.05) of Tajima's D and Fu and Li's D′ were observed for some of the results, indicating a possible signature of positive selection on the fragment of VGSC in the study. This is the first report of VGSC-1014F in the study site. Based on the results, the mutation was present at low frequencies. However, the roles played by the observed mutations need further investigation. Mutations and environmental factors, among others, can affect genetic diversity. The study area has recorded an increase in insecticide resistance that can affect vector control in the area. This finding might affect the efforts made against malaria.
Sequences were deposited in GenBank under accession numbers.
Keywords: anopheles mosquitoes, insecticide resistance, kdr, malaria, voltage gated sodium channel
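A minimal sketch of the kind of sequence statistics described in the abstract (counting monomorphic/polymorphic sites, haplotypes, and nucleotide diversity); the toy alignment below is invented for illustration, whereas the study itself used BioEdit, MEGA 6.0 and DnaSP 5.10 on the actual VGSC sequences.

```python
# Toy aligned sequences standing in for the exon-20 VGSC fragment.
alignment = [
    "ATGCTTACGGA",
    "ATGCTTACGGA",
    "ATGATTACGGA",
    "ATGCTTACGTA",
]

n_sites = len(alignment[0])

# Polymorphic (variable) sites: columns with more than one observed base.
variable_sites = [i for i in range(n_sites)
                  if len({seq[i] for seq in alignment}) > 1]

# Haplotypes: distinct sequences in the sample.
haplotypes = set(alignment)

# Nucleotide diversity (pi): mean pairwise difference per site.
pairs = [(a, b) for i, a in enumerate(alignment) for b in alignment[i + 1:]]
pi = sum(sum(x != y for x, y in zip(a, b)) for a, b in pairs) / (len(pairs) * n_sites)

print(f"{len(variable_sites)} variable sites, {n_sites - len(variable_sites)} monomorphic sites, "
      f"{len(haplotypes)} haplotypes, pi = {pi:.4f}")
```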
Procedia PDF Downloads 66
23 Evaluation of the Incorporation of Modified Starch in Puff Pastry Dough by Mixolab Rheological Analysis
Authors: Alejandra Castillo-Arias, Carlos A. Fuenmayor, Carlos M. Zuluaga-Domínguez
Abstract:
The connection between health and nutrition has driven the food industry to explore healthier and more sustainable alternatives. Key strategies to enhance nutritional quality and extend shelf life include reducing saturated fats and incorporating natural ingredients. One area of focus is the use of modified starch in baked goods, which has attracted significant interest in food science and industry due to its functional benefits. Modified starches are commonly used for their gelling, thickening, and water-retention properties. Derived from sources like waxy corn, potatoes, tapioca, or rice, these polysaccharides improve the thermal stability and resistance of the dough. The use of modified starch enhances the texture and structure of baked goods, which is crucial for consumer acceptance. In this study, the effects of modified starch inclusion on dough used for puff pastry elaboration were evaluated with Mixolab analysis. This technique assesses flour quality by examining its behavior under varying conditions, providing a comprehensive profile of its baking properties. The analysis included measurements of water absorption capacity, dough development time, dough stability, softening, final consistency, and starch gelatinization. Each of these parameters offers insights into how the flour will perform during baking and the quality of the final product. The performance of wheat flour with varying levels of modified starch inclusion (10%, 20%, 30%, and 40%) was evaluated through Mixolab analysis, with a control sample consisting of 100% wheat flour. Water absorption, gluten content, and retrogradation indices were analyzed to understand how modified starch affects dough properties. The results showed that the inclusion of modified starch increased the absorption index, especially at levels above 30%, indicating a dough with better handling qualities and potentially improved texture in the final baked product. However, the reduction in wheat flour resulted in a lower kneading index, affecting dough strength. Conversely, incorporating more than 20% modified starch reduced the retrogradation index, indicating improved stability and resistance to crystallization after cooling. Additionally, the modified starch improved the gluten index, contributing to better dough elasticity and stability, providing good structural support and resistance to deformation during mixing and baking. As expected, the control sample exhibited a higher amylase index, due to the presence of enzymes in wheat flour. However, this is of low concern in puff pastry dough, as amylase activity is more relevant in fermented doughs, which is not the case here. Overall, the use of modified starch in puff pastry enhanced product quality by improving texture, structure, and shelf life, particularly when used at levels between 30% and 40%. This research underscores the potential of modified starches to address health concerns associated with traditional starches and to contribute to the development of higher-quality, consumer-friendly baked products. Furthermore, the findings suggest that modified starches could play a pivotal role in future innovations within the baking industry, particularly in products aiming to balance healthfulness with sensory appeal.
By incorporating modified starch into their formulations, bakeries can meet the growing demand for healthier, more sustainable products while maintaining the indulgent qualities that consumers expect from baked goods.
Keywords: baking quality, dough properties, modified starch, puff pastry
Procedia PDF Downloads 26
22 The Usefulness of Medical Scribes in the Emergency Department
Authors: Victor Kang, Sirene Bellahnid, Amy Al-Simaani
Abstract:
Efficient documentation and completion of clerical tasks are pillars of efficient patient-centered care in acute settings such as the emergency department (ED). Medical scribes aid physicians with documentation, navigation of electronic health records, results gathering, and communication coordination with other healthcare teams. However, the use of medical scribes is not widespread, with some hospitals even discontinuing their programs. One reason for this could be the lack of studies that have outlined concrete improvements in efficiency and patient and provider satisfaction in emergency departments before and after incorporating scribes. Methods: We conducted a review of the literature concerning the implementation of a medical scribe program and emergency department performance. For this review, a narrative synthesis accompanied by textual commentaries was chosen to present the selected papers. PubMed was searched exclusively. Initially, no date limits were set, but seeing as the electronic medical record was officially implemented in Canada in 2013, studies published after this date were preferred as they provided insight into the interplay between its implementation and scribes on quality improvement. Results: Throughput, efficiency, and cost-effectiveness were the most commonly used parameters in evaluating scribes in the Emergency Department. Important throughput metrics, specifically door-to-doctor and disposition time, were significantly decreased in emergency departments that utilized scribes. Of note, this was shown to be the case in community hospitals, where the burden of documentation and clerical tasks would fall directly upon the attending physician. Academic centers differ in that they rely heavily on residents and students, so the implementation of scribes has been shown to have limited effect on these metrics. However, unique to academic centers was the providers' perception of increased time for teaching. Consequently, providers express increased work satisfaction in relation to time spent with patients and in teaching. Patients, on the other hand, did not demonstrate a decrease in satisfaction with regard to the care that was provided, but there was no significant increase observed either. Of the studies we reviewed, one of the biggest limitations was the lack of significance in the data. While many individual studies reported that medical scribes in emergency rooms improved relative value units, patient satisfaction, and provider satisfaction, and increased the number of patients seen, there was no statistically significant improvement in the above criteria when compiled in a systematic review. There is also a clear publication bias; very few studies with negative results were published. To prove significance, data from more emergency rooms with scribe programs would need to be compiled, including emergency rooms that did not report noticeable benefits. Furthermore, most data sets focused only on scribes in academic centers. Conclusion: Ultimately, the literature suggests that while emergency room physicians who have access to medical scribes report higher satisfaction due to lower clerical burdens and can see more patients per shift, there is still variability in terms of patient and provider satisfaction.
Whether this variability is due to differences in training (in-house trainees versus contractors), population profile (adult versus pediatric), setting (academic versus community), or the shifts scribes work cannot be determined from the existing studies. Ultimately, more scribe programs need to be evaluated to determine whether these variables affect outcomes and to prove whether scribes significantly improve emergency room efficiency.
Keywords: emergency medicine, medical scribe, scribe, documentation
Procedia PDF Downloads 90
21 Linking the Genetic Signature of Free-Living Soil Diazotrophs with Process Rates under Land Use Conversion in the Amazon Rainforest
Authors: Rachel Danielson, Brendan Bohannan, S.M. Tsai, Kyle Meyer, Jorge L.M. Rodrigues
Abstract:
The Amazon Rainforest is a global diversity hotspot and crucial carbon sink, but approximately 20% of its total extent has been deforested, primarily for the establishment of cattle pasture. Understanding the impact of this large-scale disturbance on soil microbial community composition and activity is crucial in understanding potentially consequential shifts in nutrient or greenhouse gas cycling, as well as adding to the body of knowledge concerning how these complex communities respond to human disturbance. In this study, surface soils (0-10 cm) were collected from three forests and three 45-year-old pastures in Rondonia, Brazil (the Amazon state with the greatest rate of forest destruction) in order to determine the impact of forest conversion on microbial communities involved in nitrogen fixation. Soil chemical and physical parameters were paired with measurements of microbial activity and genetic profiles to determine how community composition and process rates relate to environmental conditions. Measuring both the natural abundance of 15N in total soil N and the incorporation of enriched 15N2 under incubation revealed that conversion of primary forest to cattle pasture results in a significant increase in the rate of nitrogen fixation by free-living diazotrophs. Quantification of nifH gene copy numbers (a gene encoding an essential subunit of the nitrogenase enzyme) correspondingly reveals a significant increase in gene abundance in pasture compared to forest soils. Additionally, genetic sequencing of both nifH genes and transcripts shows a significant increase in the diversity of the present and metabolically active diazotrophs within the soil community. Levels of both organic and inorganic nitrogen tend to be lower in pastures compared to forests, with ammonium rather than nitrate as the dominant inorganic form. However, no significant or consistent differences in total, extractable, permanganate-oxidizable, or loss-on-ignition carbon are present between the two land-use types. Forest conversion is associated with a 0.5-1.0 unit pH increase, but concentrations of many biologically relevant nutrients such as phosphorus do not increase consistently. Increases in free-living diazotrophic community abundance and activity appear to be related to shifts in carbon to nitrogen pool ratios. Furthermore, there may be an important impact of transient, low molecular weight plant-root-derived organic carbon on free-living diazotroph communities not captured in this study. Preliminary analysis of nitrogenase gene variant composition using NovoSeq metagenomic sequencing indicates that conversion of forest to pasture may significantly enrich vanadium-based nitrogenases. This indication is complemented by a significant decrease in available soil molybdenum. Very little is known about the ecology of diazotrophs utilizing vanadium-based nitrogenases, so further analysis may reveal important environmental conditions favoring their abundance and diversity in soil systems. Taken together, the results of this study indicate a significant change in nitrogen cycling and diazotroph community composition with the conversion of the Amazon Rainforest. This may have important implications for the sustainability of cattle pastures once established, since nitrogen is a crucial nutrient for forage grass productivity.
Keywords: free-living diazotrophs, land use change, metagenomic sequencing, nitrogen fixation
Procedia PDF Downloads 195
20 Measurement System for Human Arm Muscle Magnetic Field and Grip Strength
Authors: Shuai Yuan, Minxia Shi, Xu Zhang, Jianzhi Yang, Kangqi Tian, Yuzheng Ma
Abstract:
The precise measurement of muscle activities is essential for understanding the function of various body movements. This work aims to develop a muscle magnetic field signal detection system based on mathematical analysis. Medical research has underscored that early detection of muscle atrophy, coupled with lifestyle adjustments such as dietary control and increased exercise, can significantly improve the management of muscle-related diseases. Currently, surface electromyography (sEMG) is widely employed in research as an early predictor of muscle atrophy. Nonetheless, the primary limitation of using sEMG to forecast muscle strength is its inability to directly measure the signals generated by muscles. Challenges arise from potential skin-electrode contact issues due to perspiration, leading to inaccurate signals or even signal loss. Additionally, resistance and phase are significantly impacted by adipose layers. The recent emergence of optically pumped magnetometers introduces a fresh avenue for bio-magnetic field measurement techniques. These magnetometers possess high sensitivity and obviate the need for a cryogenic environment, unlike superconducting quantum interference devices (SQUIDs). They detect muscle magnetic field signals in the range of tens to thousands of femtoteslas (fT). The utilization of magnetometers for capturing muscle magnetic field signals remains unaffected by issues of perspiration and adipose layers. Since their introduction, optically pumped atomic magnetometers have found extensive application in exploring organ magnetic fields, such as cardiac and brain magnetism. The optimal operation of these magnetometers necessitates an environment with an ultra-weak magnetic field. To achieve such an environment, researchers usually utilize a combination of active magnetic compensation technology with passive magnetic shielding technology. Passive magnetic shielding technology uses a magnetic shielding device built with high permeability materials to attenuate the external magnetic field to a few nT. Compared with adding more shielding layers, coils that generate a reverse magnetic field to precisely compensate for the residual magnetic fields are cheaper and more flexible. To attain even lower magnetic fields, compensation coils designed using the Biot-Savart law are employed to generate a counteracting magnetic field that eliminates residual magnetic fields. By solving the magnetic field expression at discrete points in the target region, the parameters that determine the current density distribution on the plane can be obtained through the conventional target field method. The current density is obtained from the partial derivative of the stream function, which can be represented by a combination of trigonometric functions. Mathematical optimization algorithms are introduced into the coil design to obtain the optimal current density distribution. A one-dimensional linear regression analysis was performed on the collected data, obtaining a coefficient of determination R² of 0.9349 with a p-value of 0. This statistical result indicates a stable relationship between the peak-to-peak value (PPV) of the muscle magnetic field signal and the magnitude of grip strength. This system is expected to be a widely used tool for healthcare professionals to gain deeper insights into the muscle health of their patients.
Keywords: muscle magnetic signal, magnetic shielding, compensation coils, trigonometric functions.
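A minimal sketch of the reported regression between grip strength and the peak-to-peak value (PPV) of the muscle magnetic signal; the paired values below are hypothetical placeholders, not study data, and only the computation of the slope and R² mirrors the analysis described above.

```python
import numpy as np

# Hypothetical paired measurements: grip strength (kg) and PPV of the muscle
# magnetic signal (fT). Values are illustrative only.
grip_kg = np.array([10, 15, 20, 25, 30, 35, 40], dtype=float)
ppv_ft = np.array([110, 170, 220, 290, 330, 400, 440], dtype=float)

# One-dimensional linear regression PPV = slope * grip + intercept.
slope, intercept = np.polyfit(grip_kg, ppv_ft, deg=1)
predicted = slope * grip_kg + intercept

# Coefficient of determination; the study reports R^2 = 0.9349.
ss_res = np.sum((ppv_ft - predicted) ** 2)
ss_tot = np.sum((ppv_ft - ppv_ft.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"PPV = {slope:.2f} * grip + {intercept:.2f}, R^2 = {r_squared:.4f}")
```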
Procedia PDF Downloads 57
19 Pharmacophore-Based Modeling of a Series of Human Glutaminyl Cyclase Inhibitors to Identify Lead Molecules by Virtual Screening, Molecular Docking and Molecular Dynamics Simulation Study
Authors: Ankur Chaudhuri, Sibani Sen Chakraborty
Abstract:
In humans, glutaminyl cyclase activity is highly abundant in neuronal and secretory tissues and is preferentially restricted to the hypothalamus and pituitary. The N-terminal modification of β-amyloid (Aβ) peptides by the generation of pyro-glutamyl (pGlu)-modified Aβs (pE-Aβs) is an important process in the initiation of the formation of neurotoxic plaques in Alzheimer’s disease (AD). This process is catalyzed by glutaminyl cyclase (QC). The expression of QC is characteristically up-regulated in the early stage of AD, and the hallmark of the inhibition of QC is the prevention of the formation of pE-Aβs and plaques. A computer-aided drug design (CADD) process was employed to guide the design of potentially active compounds and to understand their inhibitory potency against human glutaminyl cyclase (QC). This work elaborates the ligand-based and structure-based pharmacophore exploration of glutaminyl cyclase (QC) using the known inhibitors. Three-dimensional (3D) quantitative structure-activity relationship (QSAR) methods were applied to 154 compounds with known IC50 values. All the inhibitors were divided into two sets: a training set and a test set. Generally, the training set was used to build the quantitative pharmacophore model based on the principle of structural diversity, whereas the test set was employed to evaluate the predictive ability of the pharmacophore hypotheses. A chemical feature-based pharmacophore model was generated from the 92 known training-set compounds by the HypoGen module implemented in the Discovery Studio 2017 R2 software package. The best hypothesis (Hypo1) was selected based upon the highest correlation coefficient (0.8906), lowest total cost (463.72), and the lowest root mean square deviation (2.24 Å) values. The highest correlation coefficient value indicates greater predictive activity of the hypothesis, whereas the lower root mean square deviation signifies a small deviation of experimental activity from the predicted one. The best pharmacophore model (Hypo1) for the candidate inhibitors comprised four features: two hydrogen bond acceptors, one hydrogen bond donor, and one hydrophobic feature. Hypo1 was validated by several methods, such as test-set activity prediction, cost analysis, Fischer's randomization test, the leave-one-out method, and a ligand-profiler heat map. The predicted features were then used for virtual screening of potential compounds from the NCI, ASINEX, Maybridge and Chembridge databases. More than seven million compounds were used for this purpose. The hit compounds were filtered by drug-likeness and pharmacokinetic properties. The selected hits were docked to the high-resolution three-dimensional structure of the target protein glutaminyl cyclase (PDB ID: 2AFU/2AFW) to filter these hits further. To validate the molecular docking results, the most active compound from the dataset was selected as a reference molecule. From the density functional theory (DFT) study, ten molecules were selected based on their highest HOMO (highest occupied molecular orbital) energy and the lowest bandgap values. Molecular dynamics simulations with explicit solvation systems of the final ten hit compounds revealed that a large number of non-covalent interactions were formed with the binding site of the human glutaminyl cyclase.
It was suggested that the hit compounds reported in this study could help in the future design of potent inhibitors as leads against human glutaminyl cyclase.
Keywords: glutaminyl cyclase, hit lead, pharmacophore model, simulation
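For readers unfamiliar with the validation metrics quoted above, the short sketch below computes a correlation coefficient and an RMS deviation between experimental and model-estimated IC50 values on a log scale; the six data points are invented placeholders, not the 154-compound dataset or the Hypo1 estimates used in the study.

```python
import numpy as np

# Illustrative experimental vs. model-estimated IC50 values (nM); hypothetical data.
ic50_exp = np.array([12, 45, 180, 950, 3200, 15000], dtype=float)
ic50_pred = np.array([18, 60, 150, 1200, 2600, 20000], dtype=float)

# Work in log units, since activity spans several orders of magnitude.
x, y = np.log10(ic50_exp), np.log10(ic50_pred)

corr = np.corrcoef(x, y)[0, 1]              # study reports r = 0.8906 for Hypo1
rms_dev = np.sqrt(np.mean((x - y) ** 2))    # deviation of predicted from experimental activity

print(f"correlation = {corr:.4f}, RMS deviation (log units) = {rms_dev:.3f}")
```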
Procedia PDF Downloads 131
18 Application of Large Eddy Simulation-Immersed Boundary Volume Penalization Method for Heat and Mass Transfer in Granular Layers
Authors: Artur Tyliszczak, Ewa Szymanek, Maciej Marek
Abstract:
Flow through granular materials is important to a vast array of industries, for instance in the construction industry, where granular layers are used for bulkheads and isolators, in chemical engineering and catalytic reactors, where large surfaces of packed granular beds intensify chemical reactions, or in energy production systems, where granulates are promising materials for heat storage and heat transfer media. Despite the common usage of granulates and extensive research performed in this field, phenomena occurring between granular solid elements or between solids and fluid are still not fully understood. In the present work we analyze the heat exchange process between the flowing medium (gas, liquid) and solid material inside the granular layers. We consider them as a composite of isolated solid elements and inter-granular spaces in which a gas or liquid can flow. The structure of the layer is controlled by the shapes of particular granular elements (e.g., spheres, cylinders, cubes, Raschig rings), their spatial distribution, or the effective characteristic dimension (total volume or surface area). We will analyze to what extent alteration of these parameters influences flow characteristics (turbulence intensity, mixing efficiency, heat transfer) inside the layer and behind it. Analysis of flow inside granular layers is very complicated because the use of classical experimental techniques (LDA, PIV, fiber probes) inside the layers is practically impossible, whereas the use of probes (e.g., thermocouples, Pitot tubes) requires drilling of holes inside the solid material. Hence, measurements of the flow inside granular layers are usually performed using, for instance, advanced X-ray tomography. In this respect, theoretical or numerical analyses of flow inside granulates seem crucial. Application of discrete element methods in combination with the classical finite volume/finite difference approaches is problematic, as the mesh generation process for complex granular material can be very arduous. A good alternative for simulation of flow in complex domains is the immersed boundary-volume penalization (IB-VP) approach, in which the computational meshes have a simple Cartesian structure and the impact of solid objects on the fluid is mimicked by source terms added to the Navier-Stokes and energy equations. The present paper focuses on application of the IB-VP method combined with large eddy simulation (LES). The flow solver used in this work is a high-order code (SAILOR), which was used previously in various studies, including laminar/turbulent transition in free flows and also for flows in wavy channels, wavy pipes and over obstacles of various shapes. In these cases, the formal order of approximation turned out to be between 1 and 2, depending on the test case. The current research concentrates on analyses of the flows in dense granular layers with elements distributed in a deterministic regular manner and validation of the results obtained using the LES-IB method and a body-fitted approach. The comparisons are very promising and show very good agreement. It is found that the size, number of elements and their distribution have a huge impact on the obtained results. Ordering of the granular elements (or lack of it) affects both the pressure drop and the efficiency of the heat transfer, as it significantly changes the mixing process.
Keywords: granular layers, heat transfer, immersed boundary method, numerical simulations
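To make the volume-penalization idea concrete, the sketch below solves a 1D heat equation on a plain Cartesian grid with a penalized solid block, adding a source term of the form -(chi/eta)(T - T_solid) in the spirit of the IB-VP approach described above; the geometry, diffusivity and penalization parameter are illustrative assumptions, not the SAILOR setup or the paper's 3D granular configuration.

```python
import numpy as np

# 1D heat equation with a volume-penalized solid block (illustrative IB-VP sketch).
nx, L = 200, 1.0
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
alpha, eta = 1e-3, 1e-4                        # diffusivity, penalization parameter (eta << 1)
dt = 0.2 * min(dx**2 / alpha, eta)             # explicit time step, stable for both terms

chi = ((x > 0.4) & (x < 0.6)).astype(float)    # mask: 1 inside the solid "granule", 0 in the fluid
T_solid = 1.0                                  # temperature imposed inside the solid
T = np.zeros(nx)                               # fluid initially cold

for _ in range(20000):
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    # Diffusion plus the penalization source term that forces T -> T_solid where chi = 1.
    T += dt * (alpha * lap - (chi / eta) * (T - T_solid))
    T[0], T[-1] = 0.0, 0.0                     # cold boundaries

print(f"max fluid temperature outside the solid: {T[chi == 0].max():.3f}")
```

The same mechanism, applied to the momentum and energy equations on a Cartesian LES grid, lets the solver avoid body-fitted mesh generation for complex granular packings.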
Procedia PDF Downloads 138
17 Enhancing Plant Throughput in Mineral Processing Through Multimodal Artificial Intelligence
Authors: Muhammad Bilal Shaikh
Abstract:
Mineral processing plants play a pivotal role in extracting valuable minerals from raw ores, contributing significantly to various industries. However, the optimization of plant throughput remains a complex challenge, necessitating innovative approaches for increased efficiency and productivity. This research paper investigates the application of Multimodal Artificial Intelligence (MAI) techniques to address this challenge, aiming to improve overall plant throughput in mineral processing operations. The integration of multimodal AI leverages a combination of diverse data sources, including sensor data, images, and textual information, to provide a holistic understanding of the complex processes involved in mineral extraction. The paper explores the synergies between various AI modalities, such as machine learning, computer vision, and natural language processing, to create a comprehensive and adaptive system for optimizing mineral processing plants. The primary focus of the research is on developing advanced predictive models that can accurately forecast various parameters affecting plant throughput. Utilizing historical process data, machine learning algorithms are trained to identify patterns, correlations, and dependencies within the intricate network of mineral processing operations. This enables real-time decision-making and process optimization, ultimately leading to enhanced plant throughput. Incorporating computer vision into the multimodal AI framework allows for the analysis of visual data from sensors and cameras positioned throughout the plant. This visual input aids in monitoring equipment conditions, identifying anomalies, and optimizing the flow of raw materials. The combination of machine learning and computer vision enables the creation of predictive maintenance strategies, reducing downtime and improving the overall reliability of mineral processing plants. Furthermore, the integration of natural language processing facilitates the extraction of valuable insights from unstructured textual data, such as maintenance logs, research papers, and operator reports. By understanding and analyzing this textual information, the multimodal AI system can identify trends, potential bottlenecks, and areas for improvement in plant operations. This comprehensive approach enables a more nuanced understanding of the factors influencing throughput and allows for targeted interventions. The research also explores the challenges associated with implementing multimodal AI in mineral processing plants, including data integration, model interpretability, and scalability. Addressing these challenges is crucial for the successful deployment of AI solutions in real-world industrial settings. To validate the effectiveness of the proposed multimodal AI framework, the research conducts case studies in collaboration with mineral processing plants. The results demonstrate tangible improvements in plant throughput, efficiency, and cost-effectiveness. The paper concludes with insights into the broader implications of implementing multimodal AI in mineral processing and its potential to revolutionize the industry by providing a robust, adaptive, and data-driven approach to optimizing plant operations. In summary, this research contributes to the evolving field of mineral processing by showcasing the transformative potential of multimodal artificial intelligence in enhancing plant throughput. 
The proposed framework offers a holistic solution that integrates machine learning, computer vision, and natural language processing to address the intricacies of mineral extraction processes, paving the way for a more efficient and sustainable future in the mineral processing industry.
Keywords: multimodal AI, computer vision, NLP, mineral processing, mining
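As a hedged illustration of the predictive-modeling step described above (training a model on historical process data to forecast throughput), the sketch below fits a random-forest regressor to synthetic sensor features; the feature names and the underlying relationship are assumptions for demonstration, not plant data or the authors' model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "historical process data": feed rate, ore hardness index, mill power, water addition.
n = 2000
X = rng.uniform([50, 1, 500, 10], [150, 10, 1500, 60], size=(n, 4))
# Assumed (made-up) relationship between the sensors and plant throughput (t/h), plus noise.
throughput = 0.8 * X[:, 0] - 3.0 * X[:, 1] + 0.02 * X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, throughput, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print("MAE on held-out data (t/h):", round(mean_absolute_error(y_test, model.predict(X_test)), 2))
```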
Procedia PDF Downloads 68
16 Determination of the Phytochemicals Composition and Pharmacokinetics of whole Coffee Fruit Caffeine Extract by Liquid Chromatography-Tandem Mass Spectrometry
Authors: Boris Nemzer, Nebiyu Abshiru, Z. B. Pietrzkowski
Abstract:
Coffee cherry is one of the most ubiquitous agricultural commodities, possessing nutritional and health-beneficial properties. Between the two most widely used coffee species, Coffea arabica (Arabica) and Coffea canephora (Robusta), Coffea arabica remains superior due to its sensory properties and, therefore, remains in great demand on the global coffee market. In this study, the phytochemical contents and pharmacokinetics of Coffeeberry® Energy (CBE), a commercially available Arabica whole coffee fruit caffeine extract, are investigated. For phytochemical screening, 20 mg of CBE was dissolved in an aqueous methanol solution for analysis by mass spectrometry (MS). Quantification of the caffeine and chlorogenic acid (CGA) contents of CBE was performed using HPLC. For the bioavailability study, serum samples were collected from human subjects before ingestion and at 1, 2 and 3 h post-ingestion of 150 mg of CBE extract. Protein precipitation and extraction were carried out using methanol. Identification of compounds was performed using an untargeted metabolomic approach on a Q-Exactive Orbitrap MS coupled to reversed-phase chromatography. Data processing was performed using Thermo Scientific Compound Discoverer 3.3 software. Phytochemical screening identified a total of 170 compounds, including organic acids, phenolic acids, CGAs, diterpenoids and hydroxytryptamine. Caffeine and CGAs make up more than 70% and 9% of the total CBE composition, respectively. For serum samples, a total of 82 metabolites, representing 32 caffeine- and 50 phenolic-derived metabolites, were identified. Volcano plot analysis revealed 32 differential metabolites (24 caffeine- and 8 phenolic-derived) that showed an increase in serum level post-CBE dosing. Caffeine, uric acid, and trimethyluric acid isomers exhibited a 4- to 10-fold increase in serum abundance post-dosing. 7-Methyluric acid, 1,7-dimethyluric acid, paraxanthine and theophylline exhibited a minimum 1.5-fold increase in serum level. Among the phenolic-derived metabolites, iso-feruloyl quinic acid isomers (3-, 4- and 5-iFQA) showed the highest increase in serum level. These compounds were essentially absent in serum collected before dosage. More interestingly, the iFQA isomers were not originally present in the CBE extract, as our phytochemical screen did not identify these compounds. This suggests the potential formation of the isomers during the digestion and absorption processes. Pharmacokinetic parameters (Cmax, Tmax and AUC0-3h) of caffeine- and phenolic-derived metabolites were also investigated. Caffeine was rapidly absorbed, reaching a maximum concentration (Cmax) of 10.95 µg/ml in just 1 hour. Thereafter, the caffeine level steadily dropped from the peak, although it did not return to baseline within the 3-hour dosing period. The disappearance of caffeine from circulation was mirrored by the rise in the concentration of its methylxanthine metabolites. Similarly, the serum concentration of the iFQA isomers steadily increased, reaching a maximum (Cmax: 3-iFQA, 1.54 ng/ml; 4-iFQA, 2.47 ng/ml; 5-iFQA, 2.91 ng/ml) at a Tmax of 1.5 hours. The isomers remained well above baseline during the 3-hour dosing period, allowing them to remain in circulation long enough for absorption into the body. Overall, the current study provides evidence of the potential health benefits of a uniquely formulated whole coffee fruit product.
Consumption of this product resulted in a distinct serum profile of bioactive compounds, as demonstrated by the more than 32 metabolites that exhibited a significant change in systemic exposure.
Keywords: phytochemicals, mass spectrometry, pharmacokinetics, differential metabolites, chlorogenic acids
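A minimal sketch of how the quoted pharmacokinetic parameters can be derived from a concentration-time series (Cmax, Tmax, and AUC0-3h by the trapezoidal rule); apart from the reported Cmax of 10.95 µg/mL at 1 h, the concentration values below are hypothetical placeholders, not the study's serum data.

```python
import numpy as np

# Hypothetical serum caffeine concentrations at the sampling times of the study design.
t_h = np.array([0.0, 1.0, 2.0, 3.0])            # hours post-ingestion
c_ug_ml = np.array([0.0, 10.95, 8.2, 6.1])      # µg/mL; only the 1 h value matches the report

c_max = c_ug_ml.max()
t_max = t_h[c_ug_ml.argmax()]
auc_0_3 = np.trapz(c_ug_ml, t_h)                # linear trapezoidal AUC over 0-3 h

print(f"Cmax = {c_max} µg/mL at Tmax = {t_max} h, AUC0-3h = {auc_0_3:.2f} µg·h/mL")
```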
Procedia PDF Downloads 69
15 Ensemble Sampler For Infinite-Dimensional Inverse Problems
Authors: Jeremie Coullon, Robert J. Webber
Abstract:
We introduce a Markov chain Monte Carlo (MCMC) sampler for infinite-dimensional inverse problems. Our sampler is based on the affine invariant ensemble sampler, which uses interacting walkers to adapt to the covariance structure of the target distribution. We extend this ensemble sampler for the first time to infinite-dimensional function spaces, yielding a highly efficient gradient-free MCMC algorithm. Because our ensemble sampler does not require gradients or posterior covariance estimates, it is simple to implement and broadly applicable. In many Bayesian inverse problems, Markov chain Monte Carlo (MCMC) methods are needed to approximate distributions on infinite-dimensional function spaces, for example, in groundwater flow, medical imaging, and traffic flow. Yet designing efficient MCMC methods for function spaces has proved challenging. Recent gradient-based MCMC methods, preconditioned MCMC methods, and SMC methods have improved the computational efficiency of the functional random walk. However, these samplers require gradients or posterior covariance estimates that may be challenging to obtain. Calculating gradients is difficult or impossible in many high-dimensional inverse problems involving a numerical integrator with a black-box code base. Additionally, accurately estimating posterior covariances can require a lengthy pilot run or adaptation period. These concerns raise the question: is there a functional sampler that outperforms functional random walk without requiring gradients or posterior covariance estimates? To address this question, we consider a gradient-free sampler that avoids explicit covariance estimation yet adapts naturally to the covariance structure of the sampled distribution. This sampler works by considering an ensemble of walkers and interpolating and extrapolating between walkers to make a proposal. This is called the affine invariant ensemble sampler (AIES), which is easy to tune, easy to parallelize, and efficient at sampling spaces of moderate dimensionality (less than 20). The main contribution of this work is to propose a functional ensemble sampler (FES) that combines functional random walk and AIES. To apply this sampler, we first calculate the Karhunen–Loeve (KL) expansion for the Bayesian prior distribution, assumed to be Gaussian and trace-class. Then, we use AIES to sample the posterior distribution on the low-wavenumber KL components and use the functional random walk to sample the posterior distribution on the high-wavenumber KL components. Alternating between AIES and functional random walk updates, we obtain our functional ensemble sampler that is efficient and easy to use without requiring detailed knowledge of the target distribution. In past work, several authors have proposed splitting the Bayesian posterior into low-wavenumber and high-wavenumber components and then applying enhanced sampling to the low-wavenumber components. Yet compared to these other samplers, FES is unique in its simplicity and broad applicability. FES does not require any derivatives, and the need for derivative-free samplers has previously been emphasized. FES also eliminates the requirement for posterior covariance estimates. Lastly, FES is more efficient than other gradient-free samplers in our tests. In two numerical examples, we apply FES to challenging inverse problems that involve estimating a functional parameter and one or more scalar parameters.
We compare the performance of functional random walk, FES, and an alternative derivative-free sampler that explicitly estimates the posterior covariance matrix. We conclude that FES is the fastest available gradient-free sampler for these challenging and multimodal test problems.
Keywords: Bayesian inverse problems, Markov chain Monte Carlo, infinite-dimensional inverse problems, dimensionality reduction
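A minimal sketch of the affine invariant ensemble sampler (AIES) stretch move that FES applies to the low-wavenumber KL coefficients, here run on a stand-in two-dimensional Gaussian target; the full method alternates these updates with a function-space random walk on the high-wavenumber components, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(theta):
    # Stand-in posterior on the low-wavenumber KL coefficients: a correlated Gaussian.
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

def stretch_move(walkers, a=2.0):
    # Serial Goodman-Weare stretch move: each walker is pulled along the line
    # through a randomly chosen complementary walker.
    n, d = walkers.shape
    for k in range(n):
        j = rng.choice([i for i in range(n) if i != k])
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a      # z ~ g(z) on [1/a, a]
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        log_accept = (d - 1) * np.log(z) + log_target(proposal) - log_target(walkers[k])
        if np.log(rng.random()) < log_accept:
            walkers[k] = proposal
    return walkers

walkers = rng.normal(size=(10, 2))       # ensemble of 10 interacting walkers
for _ in range(5000):
    walkers = stretch_move(walkers)
print("ensemble mean:", walkers.mean(axis=0).round(2))
```

No gradients or covariance estimates are needed: the interpolation/extrapolation between walkers adapts automatically to the target's covariance structure, which is the property FES exploits on the low-wavenumber subspace.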
Procedia PDF Downloads 154
14 Investigation on Pull-Out-Behavior and Interface Critical Parameters of Polymeric Fibers Embedded in Concrete and Their Correlation with Particular Fiber Characteristics
Authors: Michael Sigruener, Dirk Muscat, Nicole Struebbe
Abstract:
Fiber reinforcement is a state-of-the-art approach to enhance mechanical properties in plastics. For concrete and civil engineering, steel reinforcements are commonly used. Steel reinforcements show disadvantages in their chemical resistance and weight, whereas polymer fibers' major problems lie in fiber-matrix adhesion and mechanical properties. In spite of these facts, longevity, easy handling, and chemical resistance motivate researchers to develop polymeric materials for fiber-reinforced concrete. Adhesion and interfacial mechanisms in fiber-polymer composites have already been studied thoroughly. For polymer fibers used as concrete reinforcement, the bonding behavior still requires a deeper investigation. Therefore, several differing polymers (e.g., polypropylene (PP), polyamide 6 (PA6) and polyetheretherketone (PEEK)) were spun into fibers via single screw extrusion and monoaxial stretching. Fibers were then embedded in a concrete matrix, and Single-Fiber-Pull-Out-Tests (SFPT) were conducted to investigate the bonding characteristics and microstructural interface of the composite. Differences in the maximum pull-out force, displacement, and the slope of the linear part of the force-displacement curve, which depict the adhesion strength and the ductility of the interfacial bond, were studied. In SFPT, fiber debonding is an inhomogeneous process in which the combination of interfacial bonding and friction mechanisms adds up to a resulting value. Therefore, correlations between polymeric properties and pull-out mechanisms have to be emphasized. To investigate these correlations, all fibers were subjected to a series of analyses, such as differential scanning calorimetry (DSC), contact angle measurement, surface roughness and hardness analysis, tensile testing and scanning electron microscopy (SEM). Of each polymer, smooth and abraded fibers were tested, first to simulate the abrasion and damage caused by a concrete mixing process and secondly to estimate the influence of mechanical anchoring of rough surfaces. In general, abraded fibers showed a significant increase in maximum pull-out force due to better mechanical anchoring. Friction processes therefore play a major role in increasing the maximum pull-out force. Polymer hardness affects the tribological behavior, and polymers with high hardness lead to lower surface roughness, as verified by SEM and surface roughness measurements. This results in a decreased maximum pull-out force for hard polymers. High surface energy polymers show better interfacial bonding strength in general, which coincides with the conducted SFPT investigation. Polymers such as PEEK or PA6 show higher bonding strength for smooth and roughened fibers, revealed through high pull-out force and concrete particles bonded to the fiber surface, as pictured via SEM analysis. The surface energy divides into a dispersive and a polar part, with the slope correlating with the polar part. Only polar polymers increase their SFPT curve slope due to better wetting abilities when a higher bonding area is provided by rough surfaces. Hence, the maximum force and the bonding strength of an embedded fiber are a function of polarity, hardness, and consequently surface roughness. Other properties such as crystallinity or tensile strength do not affect bonding behavior. Through the conducted analysis, it is now feasible to understand and resolve different effects in pull-out behavior step by step based on the polymer properties themselves.
This investigation developed a roadmap for engineering highly adhering polymeric materials for fiber reinforcement of concrete.
Keywords: fiber-matrix interface, polymeric fibers, fiber reinforced concrete, single fiber pull-out test
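A small sketch of how the SFPT metrics discussed above (maximum pull-out force and the slope of the initial linear part of the force-displacement curve) can be extracted from a measured record; the displacement and force values are invented for illustration and do not come from the study.

```python
import numpy as np

# Illustrative single-fiber pull-out (SFPT) record: displacement (mm) vs. force (N).
disp_mm = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.60])
force_n = np.array([0.00, 0.40, 0.85, 1.30, 1.60, 1.90, 1.75, 1.20])

f_max = force_n.max()                      # maximum pull-out force
d_at_fmax = disp_mm[force_n.argmax()]      # displacement at peak force

# Slope of the initial, roughly linear part of the curve (proxy for bond stiffness).
linear_part = disp_mm <= 0.15
slope, _ = np.polyfit(disp_mm[linear_part], force_n[linear_part], deg=1)

print(f"F_max = {f_max:.2f} N at {d_at_fmax:.2f} mm, initial slope = {slope:.1f} N/mm")
```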
Procedia PDF Downloads 113
13 The Development, Composition, and Implementation of Vocalises as a Method of Technical Training for the Adult Musical Theatre Singer
Authors: Casey Keenan Joiner, Shayna Tayloe
Abstract:
Classical voice training for the novice singer has long relied on the guidance and instruction of vocalise collections, such as those written and compiled by Marchesi, Lütgen, Vaccai, and Lamperti. These vocalise collections purport to encourage healthy vocal habits and instill technical longevity in both aspiring and established singers, though their scope has long been somewhat confined to the classical idiom. For pedagogues and students specializing in other vocal genres, such as musical theatre and CCM (contemporary commercial music), low-impact and pertinent vocal training aids are in short supply, and much of the suggested literature derives from classical methodology. While the tenets of healthy vocal production remain ubiquitous, specific stylistic needs and technical emphases differ from genre to genre and may require a specified extension of vocal acuity. As musical theatre continues to grow in popularity at both the professional and collegiate levels, the need for specialized training grows as well. Pedagogical literature geared specifically towards musical theatre (MT) singing and vocal production, while relatively uncommon, is readily accessible to the contemporary educator. Practitioners such as Norman Spivey, Mary Saunders Barton, Claudia Friedlander, Wendy Leborgne, and Marci Rosenberg continue to publish relevant research in the field of musical theatre voice pedagogy and have successfully identified many common MT vocal faults, their subsequent diagnoses, and their eventual corrections. Where classical methodology would suggest specific vocalises or training exercises to maintain corrected vocal posture following successful fault diagnosis, musical theatre finds itself without a relevant body of work towards which to transition. By analyzing the existing vocalise literature by means of a specialized set of parameters, including but not limited to melodic variation, rhythmic complexity, vowel utilization, and technical targeting, we have composed a set of vocalises meant specifically to address the training and conditioning of adult musical theatre voices. These vocalises target many pedagogical tenets in the musical theatre genre, including but not limited to thyroarytenoid-dominant production, twang resonance, lateral vowel formation, and “belt-mix.” By implementing these vocalises in the musical theatre voice studio, pedagogues can efficiently communicate proper musical theatre vocal posture and kinesthetic connection to their students, regardless of age or level of experience. The composition of these vocalises serves MT pedagogues on both a technical level and a sociological one. MT is a relative newcomer on the collegiate stage, and the academization of musical theatre methodologies has been a slow and arduous process. The conflation of classical and MT techniques and training methods has long plagued the world of voice pedagogy, and teachers often find themselves in positions of “cross-training,” that is, teaching students of both genres in one combined voice studio. As MT continues to establish itself on academic platforms worldwide, genre-specific literature and focused studies are both rare and invaluable. To ensure that modern students receive exacting and definitive training in their chosen fields, it becomes increasingly necessary for genres such as musical theatre to boast specified literature, and a collection of musical theatre-specific vocalises only aids in this effort.
This collection of musical theatre vocalises is the first of its kind and provides genre-specific studios with a basis upon which to grow healthy, balanced voices built for the harsh conditions of the modern theatre stage.
Keywords: voice pedagogy, targeted methodology, musical theatre, singing
Procedia PDF Downloads 156
12 Effect of Degree of Phosphorylation on Electrospinning and In vitro Cell Behavior of Phosphorylated Polymers as Biomimetic Materials for Tissue Engineering Applications
Authors: Pallab Datta, Jyotirmoy Chatterjee, Santanu Dhara
Abstract:
Over the past few years, phosphorus-containing polymers have received widespread attention for applications such as high-performance optical fibers, flame-retardant materials, drug delivery and tissue engineering. Being pentavalent, phosphorus can exist in different chemical environments in these polymers, which increases their versatility. In human biochemistry, phosphorus-based compounds exert their functions in both soluble and insoluble form, occurring as inorganic or organophosphorus compounds. Specifically, in the case of biomacromolecules, phosphates are critical for the functions of DNA, ATP, phosphoproteins, phospholipids, phosphoglycans and several coenzymes. Inspired by the role of phosphorus in functional biomacromolecules, the design and synthesis of biomimetic materials have been carried out by several authors to study macromolecular function or to provide substitutes in clinical tissue regeneration conditions. In addition, many regulatory signals of the body are controlled by phosphorylation of key proteins present either in the form of growth factors or matrix-bound scaffold proteins. This has inspired work on the synthesis of phospho-peptidomimetic amino acids for understanding key signaling pathways, which has been extended to obtain molecules with potentially useful biological properties. Apart from the above applications, phosphate groups bound to polymer backbones have also been demonstrated to improve the function of osteoblast cells and augment the performance of bone grafts. Despite the advantages of phosphate grafting, however, there is limited understanding of the effect of the degree of phosphorylation on macromolecular physicochemical and/or biological properties. Such investigations are necessary to effectively translate knowledge of macromolecular biochemistry into relevant clinical products, since they directly influence the processability of these polymers into suitable scaffold structures and control the subsequent biological response. Amongst various techniques for fabrication of biomimetic scaffolds, nanofibrous scaffolds fabricated by the electrospinning technique offer some special advantages in resembling the attributes of the natural extracellular matrix. Understanding changes in the physico-chemical properties of polymers as a function of phosphorylation is therefore crucial in the development of nanofiber scaffolds based on phosphorylated polymers. The aim of the present work is to investigate the effect of phosphorus grafting on the electrospinning behavior of polymers, with the aim of obtaining biomaterials for bone regeneration applications. For this purpose, phosphorylated derivatives of two polymers with widely different electrospinning behaviors were selected as starting materials. Poly(vinyl alcohol) is a conveniently electrospinnable polymer under different conditions and concentrations. On the other hand, electrospinning of chitosan-backbone-based polymers has been viewed as a critical challenge. The phosphorylated derivatives of these polymers were synthesized and characterized, and the electrospinning behavior of various solutions containing these derivatives was compared with the electrospinning of pure poly(vinyl alcohol). In PVA, phosphorylation adversely impacted electrospinnability, while in NMPC, higher phosphate content widened the concentration range for nanofiber formation. Culture of MG-63 cells on electrospun nanofibers revealed that the degree of phosphate modification of a polymer significantly improves cell adhesion or osteoblast function of cultured cells.
It is concluded that the cell response to nanofiber scaffolds can be improved by controlling the degree of phosphate grafting in polymeric biomaterials, with implications for bone tissue engineering applications.Keywords: bone regeneration, chitosan, electrospinning, phosphorylation
Procedia PDF Downloads 222
11 Pathomorphological Markers of the Explosive Wave Action on Human Brain
Authors: Sergey Kozlov, Juliya Kozlova
Abstract:
Introduction: The increased attention of researchers worldwide to explosive trauma is associated with the constant renewal of military weapons and a significant increase in terrorist activities using explosive devices. The explosive (blast) wave is a well-known damaging factor of an explosion. The organs most sensitive to the blast wave in the human body are the brain, lungs, intestines and urinary bladder. The severity of damage to these organs depends on the distance from the explosion epicenter to the object, the power of the explosion, the presence of barriers, the position of the body, and the presence of protective clothing. One of the sites where a shock wave acts in human tissues and organs is the vascular endothelial barrier, which suffers the greatest damage in the brain and lungs. The objective of the study was to determine the pathomorphological changes of the brain following the action of a blast wave. Materials and methods of research: Six male corpses delivered to the morgue of the Municipal Institution "Dnipropetrovsk regional forensic bureau" during 2014-2016 were studied. The cause of death in all cases was military explosive injury. After a visual external assessment of the brain, 1 x 1 x 1 cm tissue samples were taken for histological study from different parts of the brain, i.e., the frontal, parietal, temporal and occipital regions, as well as from the cerebellum, pons, medulla oblongata, thalamus, walls of the lateral ventricles and the floor of the 4th ventricle. The brain samples were immersed in 10% formalin solution for 24 hours. After fixing, paraffin blocks were made from the material using the standard method. Sections of 4-6 micron thickness were then cut from the paraffin blocks with a microtome and stained with hematoxylin and eosin. Microscopic analysis was performed using a light microscope with x4, x10 and x40 lenses. Results of the study: According to the results of our study, injuries of the brain were divided into macroscopic and microscopic. Macroscopic injuries were recorded from the visual assessment of haemorrhages under the membranes and into the substance, their nature and localisation, and areas of softening. In the microscopic study, our attention was drawn both to vascular changes and to changes in neurons and glial cells. Microscopic qualitative analysis of histological sections of different parts of the brain revealed a number of structural changes at both the cellular and tissue levels. Typical changes in most of the studied areas of the brain included damage to the vascular system. The most characteristic microscopic sign was the separation of vascular walls from the neuroglia with the formation of perivascular spaces. Along with this sign, fragmentation of the vessel walls, haemolysis of erythrocytes, and formation of haemorrhages in the newly formed perivascular spaces were found. In addition to damage to the cerebrovascular system, destruction of neurons and oedema of the brain tissue were observed in the histological sections. In some sections, the brain tissue had a heterogeneous, step-like or wave-like appearance. Conclusions: The pathomorphological microscopic changes in the brain identified in this study of those who died of explosive trauma can be used for diagnostic purposes, in conjunction with other characteristic signs of explosive trauma, in forensic and pathological investigations. 
The complex of microscopic signs in the brain, i.e., separation of blood vessel walls from the neuroglia with perivascular space formation, fragmentation of the walls of these blood vessels, erythrocyte haemolysis, and formation of haemorrhages in the newly formed perivascular spaces, is a direct indication of blast wave action.Keywords: blast wave, neurotrauma, human, brain
Procedia PDF Downloads 193
10 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning
Authors: Pei Yi Lin
Abstract:
Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium in order to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the internal medicine intensive care unit was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days, and the associated mortality rate over the past three years has been about 2.22%. Therefore, this study aimed to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This is a retrospective study using an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients for machine learning. The study included patients aged over 20 years old who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding those with a GCS assessment <4 points, admission to the ICU for less than 24 hours, or lack of a CAM-ICU evaluation. The CAM-ICU delirium assessment results recorded every 8 hours within 30 days of hospitalization were regarded as events, and the cumulative data from ICU admission to the prediction time point were extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model, including age, sex, average ICU stay hours, visual and auditory abnormalities, RASS assessment score, APACHE-II score, number of indwelling invasive catheters, restraint use, and sedative and hypnotic drugs. After feature data cleaning, processing and supplementation of missing values with the KNN interpolation method, a total of 54,595 case events were available for machine learning model analysis. Events from May 1 to November 30, 2022, were used as the model training data, 80% of which formed the training set and 20% the internal validation set, while events from December 1 to December 31, 2022, were used as the external validation set. Finally, model inference and performance evaluation were performed, and the model was then retrained with adjusted model parameters. Results: In this study, XGBoost, Random Forest, Logistic Regression, and Decision Tree models were analyzed and compared. The internal validation accuracy was highest for Random Forest (AUC=0.86); the external validation accuracy was highest for Random Forest and XGBoost (AUC=0.86); and the cross-validation accuracy was highest for Random Forest (ACC=0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist with real-time assessment of ICU patients, so clinical staff cannot be provided with more objective and continuous monitoring data to more accurately identify and predict the occurrence of delirium in patients. 
It is hoped that the development of predictive models through machine learning can predict delirium early and immediately, support clinical decisions at the best time, and, together with PADIS delirium care measures, provide individualized non-pharmacological interventions to maintain patient safety and improve the quality of care.Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model
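The modelling workflow described in this abstract can be illustrated with a short, hypothetical sketch. The file name, column labels and feature names below are assumptions for illustration only and do not come from the study's actual dataset; the sketch simply mirrors the reported steps of KNN imputation, an 80/20 internal split, a Random Forest classifier, and AUC-based evaluation.

```python
# Hypothetical sketch of the described workflow: KNN imputation of missing
# features, an 80/20 train/validation split, a Random Forest classifier,
# and AUC-based evaluation. Column names are illustrative assumptions and
# categorical fields are assumed to be numerically encoded already.
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

FEATURES = ["age", "sex", "icu_stay_hours", "sensory_abnormality",
            "rass_score", "apache_ii", "catheter_count",
            "restraint", "sedative_hypnotic_use"]

df = pd.read_csv("icu_events.csv")          # one row per 8-hour CAM-ICU event
X = pd.DataFrame(KNNImputer(n_neighbors=5).fit_transform(df[FEATURES]),
                 columns=FEATURES)
y = df["delirium_next_8h"]                  # binary outcome label

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2,
                                            stratify=y, random_state=42)
model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X_tr, y_tr)
print("internal validation AUC:",
      roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```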
Procedia PDF Downloads 79
9 Musictherapy and Gardentherapy: A Systemic Approach for the Life Quality of the PsychoPhysical Disability
Authors: Adriana De Serio, Donato Forenza
Abstract:
Aims. In this experimental research, the Authors present the methodological plan "Musictherapy and Gardentherapy", which they created in connection with garden landscape ecosystems and aimed at PsychoPhysical Disability (MusGarPPhyD). In the context of environmental education aimed at spreading landscape culture and its values, it is necessary to develop a solid perception of environmental sustainability and to implement a multidimensional approach that pays attention to the conservation and enhancement of gardens and natural environments. The result is an improvement in life quality, also in compliance with the objectives of the European Agenda 2030. The MusGarPPhyD can help professionals such as musictherapists and environmental and landscape researchers strengthen subjects' motivation to learn to deal with the psychophysical discomfort associated with disability, to cope with distress, psychological fragility, loneliness and social seclusion, and to promote productive social relationships. Materials and Methods. The MusGarPPhyD was implemented in multiple spaces. The musictherapy treatments took place first inside residential therapeutic centres and then in the garden landscape ecosystem. Patients: twenty, divided into two groups. Weekly sessions (50') for three months. Methodological phases: - Phase P1. Musictherapy treatments for each group in the indoor spaces. - Phase P2. Musictherapy sessions inside the gardens. After each phase, P1 and P2: - a questionnaire for each patient (ten items / liking indices) was administered at time t0, during the treatment, and at time tn at the end of the treatment; - monitoring of patients' behavioral responses through assessment scales, matrix, table and graph systems. Musictherapy methodology: patient Sonorous-Musical Anamnesis, Musictherapy Assessment Document, Observation Protocols, Bodily-Environmental-Rhythmical-Sonorous-Vocal-Energy production first indoors and then outside, sonorous-musical instruments and edible instruments made by the Author/musictherapist with some foods; administration of the Patient-Environment-Music Index at times t0 and tn to estimate the patient's behavior evolution, and the Musictherapeutic Advancement Index. Results. The MusGarPPhyD can strengthen the individual sense of identity and improve psychophysical skills and the resilience needed to face and overcome the difficulties caused by congenital/acquired disability. The multi-sensory perceptions deriving from contact with the plants in the gardens improve psychological well-being and regulate physiological parameters such as blood pressure and cardiac and respiratory rhythm, reducing cholesterol levels. The secretion of the peptide hormones endorphins and the endogenous opioids enkephalins increases and brings a state of tranquillity and a better mood in the patient. The subjects showed a preference for musictherapy treatments within a setting made up of gardens and peculiar landscape systems. This resulted in greater health benefits. Conclusions. The MusGarPPhyD contributes to reducing psychophysical tensions, anxiety, depression and stress, facilitating the connections between the cerebral hemispheres and thus also improving intellectual performance, self-confidence, motor skills and social interactions. It is therefore necessary to design hospitals, rehabilitation centers and nursing homes surrounded by gardens. Ecosystems of natural and urban parks and gardens create fascinating skylines and mosaics of landscapes rich in beauty and biodiversity. 
The MusGarPPhyD is useful for health management, promoting the patient's psychophysical activation, better mood/affective tone and relationships, and contributing significantly to improving life quality.Keywords: musictherapy, gardentherapy, disability, life quality
Procedia PDF Downloads 73
8 Introducing Global Navigation Satellite System Capabilities into IoT Field-Sensing Infrastructures for Advanced Precision Agriculture Services
Authors: Savvas Rogotis, Nikolaos Kalatzis, Stergios Dimou-Sakellariou, Nikolaos Marianos
Abstract:
As precision holds the key to the introduction of distinct benefits in agriculture (e.g., energy savings, reduced labor costs, optimal application of inputs, improved products and yields), it steadily becomes evident that new initiatives should focus on rendering Precision Agriculture (PA) more accessible to the average farmer. PA leverages technologies such as the Internet of Things (IoT), earth observation, robotics and positioning systems (e.g., the Global Navigation Satellite System – GNSS – as well as individual positioning systems like GPS, Glonass and Galileo) that allow everything from simple data georeferencing to optimal navigation of agricultural machinery and even more complex tasks like Variable Rate Applications. An identified customer pain point is that, on the one hand, typical triangulation-based positioning systems are not accurate enough (with errors up to several meters), while on the other hand, high-precision positioning systems reaching centimeter-level accuracy are very costly (up to thousands of euros). Within this paper, a Ground-Based Augmentation System (GBAS) is introduced that can be adapted to any existing IoT field-sensing station infrastructure. The latter should cover a minimum set of requirements; in particular, each station should operate as a fixed, obstruction-free towards the sky, energy-supplying unit. Station augmentation will allow stations to function in pairs with GNSS rovers following the differential GNSS base-rover paradigm. This constitutes a key innovation element for the proposed solution, which encompasses differential GNSS capabilities within an IoT field-sensing infrastructure. Integrating this kind of information supports the provision of several additional beneficial PA services such as spatial mapping, route planning, and automatic field navigation of unmanned vehicles (UVs). Right at the heart of the designed system, there is a high-end GNSS toolkit with base-rover variants and Real-Time Kinematic (RTK) capabilities. The GNSS toolkit had to tackle all availability, performance, interfacing, and energy-related challenges faced in real-time, low-power, and reliable in-the-field operation. Specifically, in terms of performance, preliminary findings exhibit a rover positioning precision that can even reach finer than 10 centimeters. As this precision is propagated to the full dataset collection, it enables tractors, UVs, Android-powered devices, and measuring units to deal with challenging real-world scenarios. The system is validated with the help of Gaiatrons, a mature network of agro-climatic telemetry stations with a presence all over Greece and beyond (> 60,000 ha of agricultural land covered) that constitutes part of the "gaiasense" (www.gaiasense.gr) smart farming (SF) solution. Gaiatrons constantly monitor atmospheric and soil parameters, thus providing an exact fit to the operational requirements of modern SF infrastructures. Gaiatrons are ultra-low-cost, compact, and energy-autonomous stations with a modular design that enables the integration of advanced GNSS base station capabilities on top of them. A set of demanding pilot demonstrations has been initiated in Stimagka, Greece, an area with a diverse geomorphological landscape where grape cultivation is particularly popular. 
Pilot demonstrations are in the course of validating the preliminary system findings in the intended environment, tackling all technical challenges, and effectively highlighting the added value offered by the system in action.Keywords: GNSS, GBAS, precision agriculture, RTK, smart farming
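The differential base-rover idea underlying the GBAS described above can be sketched in a highly simplified form. The sketch below uses single pseudorange corrections only and ignores clock, tropospheric, ionospheric and carrier-phase terms; all positions and measurements are illustrative placeholders, not data from the Gaiatron deployment, and real RTK processing is considerably more involved.

```python
# Toy illustration of differential GNSS: a base station at a precisely known
# position computes per-satellite pseudorange corrections, which a nearby
# rover applies before solving for its own position. All numbers are
# illustrative placeholders.
import numpy as np

def range_to(sat_pos, rx_pos):
    """Geometric distance between a satellite and a receiver (metres)."""
    return np.linalg.norm(np.asarray(sat_pos) - np.asarray(rx_pos))

# Known (surveyed) base position and assumed satellite positions (ECEF, m)
base_pos = np.array([4_598_000.0, 2_040_000.0, 3_800_000.0])
sats = {
    "G01": np.array([15_600_000.0, 7_540_000.0, 20_140_000.0]),
    "G07": np.array([18_760_000.0, 2_750_000.0, 18_610_000.0]),
}

# Pseudoranges measured at the base (contain common-mode errors)
measured_at_base = {"G01": 21_283_451.7, "G07": 22_104_337.2}

# Per-satellite correction = geometric range - measured pseudorange
corrections = {sv: range_to(p, base_pos) - measured_at_base[sv]
               for sv, p in sats.items()}

# A nearby rover applies the same corrections to its own measurements
measured_at_rover = {"G01": 21_284_012.3, "G07": 22_105_108.9}
corrected = {sv: measured_at_rover[sv] + corrections[sv] for sv in sats}
print(corrected)   # corrected pseudoranges feed the rover position solution
```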
Procedia PDF Downloads 116
7 Amifostine Analogue, DRDE-30, Attenuates Radiation-Induced Lung Injury in Mice
Authors: Aastha Arora, Vikas Bhuria, Saurabh Singh, Uma Pathak, Shweta Mathur, Puja P. Hazari, Rajat Sandhir, Ravi Soni, Anant N. Bhatt, Bilikere S. Dwarakanath
Abstract:
Radiotherapy is an effective curative and palliative option for patients with thoracic malignancies. However, lung injury, comprising pneumonitis and fibrosis, remains a significant clinical complication of thoracic radiation, thus making it a dose-limiting factor. Also, injury to the lung is often reported as part of multi-organ failure in victims of accidental radiation exposures. The radiation-induced inflammatory response in the lung, characterized by leukocyte infiltration and vascular changes, is an important contributing factor to the injury. Therefore, countermeasure agents that attenuate the radiation-induced inflammatory response are considered an important approach to prevent chronic lung damage. Although Amifostine, the widely used, FDA-approved radioprotector, has been found to reduce radiation-induced pneumonitis during radiation therapy of non-small cell lung carcinoma, its application during mass and field exposure is limited due to associated toxicity and ineffectiveness with oral administration. The amifostine analogue DRDE-30 overcomes this limitation, as it is orally effective in reducing the mortality of whole-body irradiated mice. The current study was undertaken to investigate the potential of DRDE-30 to ameliorate radiation-induced lung damage. DRDE-30 was administered intra-peritoneally, 30 minutes prior to 13.5 Gy thoracic (60Co-gamma) radiation in C57BL/6 mice. Bronchoalveolar lavage fluid (BALF) and lung tissues were harvested at 12 and 24 weeks post irradiation for studying inflammatory and fibrotic markers. Lactate dehydrogenase (LDH) leakage, leukocyte count and protein content in BALF were used as parameters to evaluate lung vascular permeability. Inflammatory cell signaling (p38 phosphorylation) and anti-oxidant status (MnSOD and catalase levels) were assessed by Western blot, while X-ray CT scans, H & E staining and trichrome staining were done to study the lung architecture and collagen deposition. Irradiation of the lung increased the total protein content, LDH leakage and total leukocyte count in the BALF, reflecting endothelial barrier dysfunction. These disruptive effects were significantly abolished by DRDE-30, which appears to be linked to the DRDE-30-mediated abrogation of activation of the redox-sensitive pro-inflammatory signaling cascade, the MAPK pathway. Concurrent administration of DRDE-30 with radiation inhibited radiation-induced oxidative stress by strengthening the anti-oxidant defense system and abrogated p38 mitogen-activated protein kinase activation, which was associated with reduced vascular leak and macrophage recruitment to the lungs. Histopathological examination (by H & E staining) of the lung showed radiation-induced inflammation of the lungs, characterized by cellular infiltration, interstitial oedema, alveolar wall thickening, perivascular fibrosis and obstruction of alveolar spaces, all of which were reduced by pre-administration of DRDE-30. Structural analysis with X-ray CT indicated lung architecture (linked to the degree of opacity) comparable to un-irradiated mice, which correlated well with the lung morphology and reduced collagen deposition. The reduction in radiation-induced inflammation and fibrosis brought about by DRDE-30 resulted in a profound increase in animal survival (72% with the combination vs 24% with radiation alone) observed at the end of 24 weeks following irradiation. 
These findings establish the potential of the Amifostine analogue DRDE-30 in reducing radiation-induced pulmonary injury by attenuating the inflammatory and fibrotic responses.Keywords: amifostine, fibrosis, inflammation, lung injury, radiation
Procedia PDF Downloads 510
6 Observations on Cultural Alternative and Environmental Conservation: Populations "Delayed" and Excluded from Health and Public Hygiene Policies in Mexico (1890-1930)
Authors: Marcela Davalos Lopez
Abstract:
The history of the circulation of hygienic knowledge and the consolidation of public health in Latin American cities towards the end of the 19th century is well known. Among these cities, Mexico City was inserted into international politics, strengthened its institutions and medical knowledge, applied the parameters of modernity and built sanitary engineering works. Despite the power that this hygienist system achieved, its scope was relative: it cannot be generalized to all cities. From a comparative and contextual analysis, it will be shown that conclusions derived from modern urban historiography present, from our contemporary vantage point, fractures. This will be shown for neighborhoods located around Mexico City and in a medium-sized city close to the Mexican capital (about 80 km away), called Cuernavaca. While the inhabitants of the neighborhoods kept awaiting the evolutionary process and the forms that public hygiene policies were taking (because they were witnesses to and affected in their territories), in Cuernavaca the dictates arrived as an echo. While the capital was drained, large roads were opened, roundabouts were erected, residents were expelled, and drains, sewers, drinking water pipes, etc., were built, Cuernavaca remained sheltered in other times and practices. What was this due to? Undoubtedly, the time and energy that it took politicians and the group of "scientists" to carry out these enormous works in the Mexican capital took them away from addressing the issue in remote villages. It was not until the 20th century that federal hygiene policy began to be strengthened. Despite this, there are other factors that emphasize the particularities of each site. I would like to draw attention here to the different receptions that each town gave to public hygiene. We will see that Cuernavaca responded to its own semi-rural culture, history, orography and functions, prolonging for much longer, for example, the use of its deep ravines as sewers. For their part, the neighborhoods surrounding the capital, although affected by and excluded from hygienist policies, chose to move away from them and solve the deficiencies with their own resources (they resorted to the waste left from the dried lake of Mexico to continue their lake practices). All of this points to a paradox that shapes our contemporary concerns: on the one hand, the benefits derived from medical knowledge and its technological applications (in this work, referring particularly to the urban health system) and, on the other, the alteration it caused in environmental settings. Places like Cuernavaca (classified as backward by the hygienists of the nineteenth century and the first decades of the twentieth century), as well as landscapes such as neighborhoods affected by advances in sanitary engineering, keep in their memory buried practices that we observe today as possible ways to reestablish environmental balances: alternative uses of water; recycling of organic materials; local uses of fauna; various systems for breaking down excreta, and so on. In sum, what the nineteenth and first half of the twentieth centuries graded as levels of backwardness or progress turns out to be key information for rethinking the routes of environmental conservation. When we return to the observations of the scientists, politicians and lawyers of that period, we find historically rejected cultural alterity. 
Populations such as Cuernavaca that, due to their history, orography and/or the insufficiency of federal policies, maintained different relationships with the environment today give us clues to reorient basic elements of cities: alternative uses of water, recycling of raw or organic materials, and consumption of local products, among others. It is, therefore, a matter of unearthing the rejected, which cries out to emerge to the surface.Keywords: sanitary hygiene, Mexico city, cultural alterity, environmental conservation, environmental history
Procedia PDF Downloads 166
5 Assessing the Utility of Unmanned Aerial Vehicle-Borne Hyperspectral Image and Photogrammetry Derived 3D Data for Wetland Species Distribution Quick Mapping
Authors: Qiaosi Li, Frankie Kwan Kit Wong, Tung Fung
Abstract:
Lightweight unmanned aerial vehicles (UAVs) carrying novel sensors offer a low-cost approach to data acquisition in complex environments. This study established a framework for applying a UAV system to quick mapping in a complex environment and assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classifying 13 species in the wetland area of the Mai Po Inner Deep Bay Ramsar Site, Hong Kong. The study area was part of a shallow bay with flat terrain, and the major species included reedbed and four mangroves: Kandelia obovata, Aegiceras corniculatum, Acrostichum aureum and Acanthus ilicifolius. Other species comprised various graminaceous plants, arbor, shrub and the invasive species Mikania micrantha. In particular, the invasive species climbed up to the mangrove canopy, causing damage and morphological change, which might increase the difficulty of distinguishing species. Hyperspectral images were acquired by a Headwall Nano sensor with a spectral range from 400 nm to 1000 nm and 0.06 m spatial resolution. A sequence of multi-view RGB images was captured with 0.02 m spatial resolution and 75% overlap. The hyperspectral imagery was corrected for radiometric and geometric distortion, while the high-resolution RGB images were matched to generate maximum-density point clouds. Further, a 5 cm grid digital surface model (DSM) was derived from the dense point clouds. Multiple feature reduction methods were compared to identify the most efficient method and to explore the spectral bands most significant in distinguishing different species. The examined methods included stepwise discriminant analysis (DA), support vector machine (SVM) and minimum noise fraction (MNF) transformation. Subsequently, spectral subsets composed of the first 20 most important bands extracted by SVM, DA and MNF, and multi-source subsets adding the DSM to the 20 spectral bands, served as input to a maximum likelihood classifier (MLC) and an SVM classifier to compare the classification results. Classification results showed that the feature reduction methods, from best to worst, were MNF transformation, DA and SVM. MNF transformation accuracy was even higher than the all-bands input result. The selected bands frequently lay along the green peak, red edge and near infrared. Additionally, DA found that the chlorophyll absorption red band and the yellow band were also important for species classification. In terms of 3D data, the DSM enhanced the discriminant capacity among low plants, arbor and mangrove. Meanwhile, the DSM largely reduced misclassification due to the shadow effect and inter-species morphological variation. With respect to the classifier, the nonparametric SVM outperformed MLC for the high-dimension, multi-source data in this study. The SVM classifier tended to produce higher overall accuracy and fewer scattered patches, although it cost more time than MLC. The best result was obtained by combining MNF components and the DSM in the SVM classifier. This study offers a precise species distribution survey solution for inaccessible wetland areas with a low cost of time and labour. In addition, findings on the positive effect of the DSM as well as on spectral feature identification indicate that the utility of UAV-borne hyperspectral and photogrammetry-derived 3D data is promising for further research on wetland species, such as bio-parameter modelling and biological invasion monitoring.Keywords: digital surface model (DSM), feature reduction, hyperspectral, photogrammetric point cloud, species mapping, unmanned aerial vehicle (UAV)
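The final classification step described above (MNF-reduced spectral components stacked with the DSM layer and fed to an SVM) can be outlined with a minimal sketch. The file names, array shapes, sampling strategy and SVM hyperparameters below are illustrative assumptions, not the study's actual processing chain; inputs are assumed to be pre-extracted per-pixel samples with ground-truth labels.

```python
# Hypothetical sketch: combine MNF components with a DSM band and classify
# wetland cover types with a nonparametric SVM, as outlined in the abstract.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# mnf: (n_samples, 20) MNF components; dsm: (n_samples,) canopy surface height;
# labels: integer class codes for the 13 species / cover types
mnf = np.load("mnf_components.npy")            # illustrative file names
dsm = np.load("dsm_heights.npy").reshape(-1, 1)
labels = np.load("labels.npy")

X = StandardScaler().fit_transform(np.hstack([mnf, dsm]))
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          stratify=labels, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # nonparametric SVM classifier
clf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```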
Procedia PDF Downloads 257
4 The Effect of Using EMG-Based Luna Neurorobotics for Strengthening of Affected Side in Chronic Stroke Patients - Retrospective Study
Authors: Surbhi Kaura, Sachin Kandhari, Shahiduz Zafar
Abstract:
Chronic stroke, characterized by persistent motor deficits, often necessitates comprehensive rehabilitation interventions to improve functional outcomes and mitigate long-term dependency. Luna neurorobotic devices, integrated with EMG feedback systems, provide an innovative platform for facilitating neuroplasticity and functional improvement in stroke survivors. This retrospective study investigates the impact of EMG-based Luna neurorobotic interventions on the strengthening of the affected side in chronic stroke patients. In rehabilitation, active patient participation significantly activates the sensorimotor network during motor control, unlike passive movement. Stroke is a debilitating condition that, when not effectively treated, can result in significant deficits and lifelong dependency; common issues such as neglecting the use of limbs can lead to weakness in chronic stroke cases. This study aims to assess how electromyographically triggered (EMG-triggered) robotic treatments affect walking and ankle muscle force after an ischemic stroke, and how the coactivation of agonist and antagonist muscles contributes to neuroplasticity with the assistance of robotic biofeedback. Methods: The study utilized robotic techniques based on electromyography (EMG) for daily rehabilitation in long-term stroke patients, offering feedback and monitoring progress. Each patient received one session per day for two weeks, with the intervention group undergoing 45 minutes of robot-assisted training and exercise at the hospital, while the control group performed exercises at home. Eight participants with impaired motor function and gait after stroke were involved in the study. EMG-based biofeedback exercises were administered through the LUNA neurorobotic machine, progressing from trigger-and-release mode to trigger-and-hold, and later transitioning to dynamic mode. Assessments were conducted at baseline and after two weeks, including the Timed Up and Go (TUG) test, a 10-meter walk test (10m), the Berg Balance Scale (BBS), gait parameters such as cadence and step length, upper limb strength measured by EMG threshold in microvolts, and force in Newton meters. Results: These scales were used to assess motor strength and balance, illustrating the benefits of EMG biofeedback following LUNA robotic therapy. In the analysis of the left hemiparetic group, an increase in strength post-rehabilitation was observed. The pre-rehabilitation TUG mean value was 72.4 seconds, which decreased to 42.4 ± 0.03880133 seconds post-rehabilitation, with a significant difference indicated by a p-value below 0.05, reflecting a reduced task completion time. Similarly, in the force-based task, the pre-rehabilitation dynamic knee force was 18.2 Nm, which increased to 31.26 Nm during knee extension post-rehabilitation. The Student's t-test showed a p-value of 0.026, signifying a significant difference and indicating an increase in the strength of the knee extensor muscles after LUNA robotic rehabilitation. Lastly, at baseline, the EMG value for ankle dorsiflexion was 5.11 µV, which increased to 43.4 ± 0.06 µV post-rehabilitation, signifying an increase in the threshold and the patient's ability to recruit more motor units during left ankle dorsiflexion. 
Conclusion: This study aimed to evaluate the impact of EMG- and dynamic force-based rehabilitation devices on walking and the strength of the affected side in chronic stroke patients, without nominal data comparisons among stroke patients. Additionally, it provides insights into the inclusion of EMG-triggered neurorehabilitation robots in the daily rehabilitation of patients.Keywords: neurorehabilitation, robotic therapy, stroke, strength, paralysis
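The pre/post comparisons reported above (TUG time, knee extension force, dorsiflexion EMG threshold) follow a paired-samples design. The short sketch below shows how such a comparison might be run; the arrays are placeholders for illustration only and are not the study's measurements.

```python
# Illustrative paired-samples t-test for pre- vs post-rehabilitation outcomes.
# The values below are placeholders, not the study's data.
import numpy as np
from scipy import stats

tug_pre  = np.array([70.1, 75.3, 68.9, 74.2, 73.5])   # seconds, before training
tug_post = np.array([41.8, 44.0, 40.2, 43.1, 42.9])   # seconds, after training

t_stat, p_value = stats.ttest_rel(tug_pre, tug_post)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Task completion time decreased significantly after rehabilitation.")
```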
Procedia PDF Downloads 63
3 Flood Risk Management in the Semi-Arid Regions of Lebanon - Case Study “Semi Arid Catchments, Ras Baalbeck and Fekha”
Authors: Essam Gooda, Chadi Abdallah, Hamdi Seif, Safaa Baydoun, Rouya Hdeib, Hilal Obeid
Abstract:
Floods are a common natural disaster in the semi-arid regions of Lebanon, resulting in damage to human life and deterioration of the environment. Despite their destructive nature and their immense impact on the socio-economy of the region, flash floods have not received adequate attention from policy and decision makers. This is mainly because of poor understanding of the processes involved and of the measures needed to manage the problem. The current understanding of flash floods remains at the level of general concepts; most policy makers have yet to recognize that flash floods are distinctly different from normal riverine floods in terms of causes, propagation, intensity, impacts, predictability, and management. Flash floods are generally not investigated as a separate class of event but are rather reported as part of the overall seasonal flood situation. As a result, Lebanon generally lacks policies, strategies, and plans relating specifically to flash floods. The main objective of this research is to improve flash flood prediction by providing new knowledge and a better understanding of the hydrological processes governing flash floods in the eastern catchments of the El Assi River. This includes developing rainstorm time distribution curves that are unique to this type of study region, and analyzing, investigating, and developing a relationship between arid watershed characteristics (including urbanization) and flood frequency in the nearby villages of Ras Baalbeck and Fekha. This paper discusses different levels of integration approaches between GIS and hydrological models (HEC-HMS & HEC-RAS) and presents a case study in which all the tasks of creating model input, editing data, running the model, and displaying output results are carried out. The study area corresponds to the East Basin (Ras Baalbeck & Fakeha), comprising nearly 350 km2 and situated in the Bekaa Valley of Lebanon. The case study presented in this paper uses a database derived from Lebanese Army topographic maps of this region. ArcMap was used to digitize the contour lines, streams and other features from the topographic maps, and the digital elevation model (DEM) grid was derived for the study area. The next steps in this research are to incorporate rainfall time series data from the Arseal, Fekha and Deir El Ahmar stations to build a hydrologic data model within a GIS environment and to combine ArcGIS/ArcMap, HEC-HMS and HEC-RAS models in order to produce a spatial-temporal model for floodplain analysis at a regional scale. In this study, HEC-HMS and the SCS method were chosen to build the hydrologic model of the watershed. The model was then calibrated using the flood event that occurred between the 7th and 9th of May 2014, which is considered exceptionally extreme because of the length of time the flows lasted (15 hours) and the fact that it covered both the Aarsal and Ras Baalbeck watersheds; the strongest reported flood in recent times lasted for only 7 hours and covered only one watershed. The calibrated hydrologic model is then used to build the hydraulic model and to assess flood hazard maps for the region. The HEC-RAS model is used for this purpose, and field trips were made to the catchments in order to calibrate both the hydrologic and hydraulic models. The presented models constitute a flexible procedure for an ungaged watershed. For some storm events they deliver good results, while for others, no parameter vectors can be found. 
In order to obtain a general methodology based on these ideas, further calibration and reconciliation of the results' dependence on flood event parameters and catchment properties are required.Keywords: flood risk management, flash flood, semi arid region, El Assi River, hazard maps
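The loss model referenced above for the HEC-HMS setup is the SCS Curve Number method; a minimal stand-alone sketch of that runoff relation is given below. The curve number and rainfall depth used are illustrative only, not calibrated values for the Ras Baalbeck or Fekha catchments.

```python
# Minimal sketch of the SCS Curve Number runoff relation used by HEC-HMS:
# S = 25400/CN - 254 (mm), Ia = 0.2*S, Q = (P - Ia)^2 / (P - Ia + S) for P > Ia.
def scs_runoff_depth(rainfall_mm: float, curve_number: float) -> float:
    """Direct runoff depth (mm) from storm rainfall using the SCS-CN method."""
    s = 25400.0 / curve_number - 254.0      # potential maximum retention (mm)
    ia = 0.2 * s                            # initial abstraction (mm)
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# Illustrative values only (not calibrated for the study catchments)
print(scs_runoff_depth(rainfall_mm=60.0, curve_number=85.0))
```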
Procedia PDF Downloads 478
2 Blue Economy and Marine Mining
Authors: Fani Sakellariadou
Abstract:
The Blue Economy includes all marine-based and marine-related activities, corresponding to established, emerging, and as yet unborn ocean-based industries. Seabed mining is an emerging marine-based activity whose operations depend particularly on cutting-edge science and technology. The 21st century will face a crisis in resources as a consequence of the world’s population growth and the rising standard of living. The natural capital stored in the global ocean is decisive if it is to provide a wide range of sustainable ecosystem services. Seabed mineral deposits have been identified as having a high potential for critical elements and base metals, which have a crucial role in the fast evolution of green technologies. The major categories of marine mineral deposits are deep-sea deposits, including cobalt-rich ferromanganese crusts, polymetallic nodules, phosphorites, and deep-sea muds, as well as shallow-water deposits, including marine placers. Seabed mining operations may take place within the continental shelf areas of nation states. In international waters, the International Seabed Authority (ISA) has entered into 15-year contracts for deep-seabed exploration with 21 contractors. These contracts are for polymetallic nodules (18 contracts), polymetallic sulfides (7 contracts), and cobalt-rich ferromanganese crusts (5 contracts). Exploration areas are located in the Clarion-Clipperton Zone, the Indian Ocean, the Mid-Atlantic Ridge, the South Atlantic Ocean, and the Pacific Ocean. Potential environmental impacts of deep-sea mining include habitat alteration, sediment disturbance, plume discharge, release of toxic compounds, light and noise generation, and air emissions. They could cause burial and smothering of benthic species, health problems for marine species, biodiversity loss, impairment of the photosynthetic mechanism, behavioral change and masking of acoustic communication for mammals and fish, bioaccumulation of heavy metals up the food web, a decrease in dissolved oxygen content, and climate change. An important concern related to deep-sea mining is our knowledge gap regarding deep-sea bio-communities. The ecological consequences for the remote, unique, fragile, and little-understood deep-sea ecosystems and their inhabitants are still largely unknown. The blue economy conceptualizes oceans as developing spaces supplying socio-economic benefits for current and future generations while also protecting, supporting, and restoring biodiversity and ecological productivity. In that sense, people should apply holistic management and assess the impacts of marine mining on ecosystem services, including the categories of provisioning, regulating, supporting, and cultural services. The variety in environmental parameters, the range in sea depth, the diversity in the characteristics of marine species, and the possible proximity to other existing maritime industries cause the impacts of marine mining on the ability of ecosystems to support people and nature to span a wide range. In conclusion, the use of the untapped potential of the global ocean demands a responsible and sustainable attitude. Moreover, there is a need to change our lifestyle and move beyond the philosophy of single use. Living in a throw-away society based on a linear approach to resource consumption, humans are putting too much pressure on the natural environment. By applying modern, sustainable and eco-friendly approaches according to the principles of the circular economy, substantial natural resource savings will be achieved. 
Acknowledgement: This work is part of the MAREE project, financially supported by the Division VI of IUPAC. This work has been partly supported by the University of Piraeus Research Center.Keywords: blue economy, deep-sea mining, ecosystem services, environmental impacts
Procedia PDF Downloads 85
1 Detailed Degradation-Based Model for Solid Oxide Fuel Cells Long-Term Performance
Authors: Mina Naeini, Thomas A. Adams II
Abstract:
Solid Oxide Fuel Cells (SOFCs) feature high electrical efficiency and generate substantial amounts of waste heat, which make them suitable for integrated community energy systems (ICEs). By harvesting and distributing the waste heat through hot-water pipelines, SOFCs can meet the thermal demand of the communities; therefore, they can replace traditional gas boilers and reduce greenhouse gas (GHG) emissions. Despite these advantages of SOFCs over competing power generation units, this technology has not been successfully commercialized at large scale to replace traditional generators in ICEs. One reason is that SOFC performance deteriorates over long-term operation, which makes it difficult to find the proper sizing of the cells for a particular ICE system. In order to find the optimal sizing and operating conditions of SOFCs in a community, proper knowledge of degradation mechanisms and of the effects of operating conditions on SOFC long-term performance is required. The simplified SOFC models that exist in the current literature usually do not provide realistic results, since they underestimate the rate of performance drop by making too many assumptions or generalizations. In addition, some of these models have been obtained from experimental data by curve-fitting methods; although such models are valid for the range of operating conditions in which the experiments were conducted, they cannot be generalized to other conditions and so have limited use for most ICEs. In the present study, a general, detailed degradation-based model is proposed that predicts the performance of conventional SOFCs over a long period of time at different operating conditions. Conventional SOFCs are composed of yttria-stabilized zirconia (YSZ) electrolytes, Ni-cermet anodes, and La₁₋ₓSrₓMnO₃ (LSM) cathodes. The following degradation processes are considered in this model: oxidation and coarsening of nickel particles in the Ni-cermet anodes, changes in the anode pore radius, degradation of electrolyte and anode electrical conductivity, and sulfur poisoning of the anode compartment. This model helps decision makers discover the optimal sizing and operation of the cells for a stable, efficient performance with the fewest assumptions, and it is suitable for a wide variety of applications. Sulfur contamination of the anode compartment is an important cause of performance drop in cells supplied with hydrocarbon-based fuel sources. H₂S, which is often added to hydrocarbon fuels as an odorant, can diminish the catalytic behavior of Ni-based anodes by lowering their electrochemical activity and hydrocarbon conversion properties. Therefore, the existing models in the literature for H₂-supplied SOFCs cannot be applied to hydrocarbon-fueled SOFCs, as they only account for the reduction in electrochemical activity. A regression model is developed in the current work for sulfur contamination of SOFCs fed with hydrocarbon fuel sources, expressed as a function of current density and H₂S concentration in the fuel. To the best of the authors' knowledge, it is the first model that accounts for the impact of current density on sulfur poisoning of cells supplied with hydrocarbon-based fuels. The proposed model has wide validity over a range of parameters and is consistent across multiple studies by different independent groups. Simulations using the degradation-based model illustrated that SOFC voltage drops significantly in the first 1500 hours of operation; after that, the cells exhibit a slower degradation rate. 
The present analysis allowed us to discover the reason for the various degradation rate values reported in the literature for conventional SOFCs: the literature is inconsistent in how the degradation rate is defined and calculated. Typically, the degradation rate has been calculated as the slope of the voltage-versus-time plot, expressed as the percentage voltage drop per 1000 hours of operation. Due to the nonlinear profile of voltage over time, the magnitude of this degradation rate depends on the size of the time steps selected to calculate the curve's slope. To avoid this issue, the instantaneous rate of performance drop is used in the present work. According to a sensitivity analysis, current density has the highest impact on degradation rate compared to other operating factors, while temperature and hydrogen partial pressure affect SOFC performance less. The findings demonstrated that a cell running at lower current density performs better in the long term in terms of total average energy delivered per year, even though it initially generates less power than it would at a higher current density. This is because of the dominant and devastating impact of large current densities on the long-term performance of SOFCs, as explained by the model.Keywords: degradation rate, long-term performance, optimal operation, solid oxide fuel cells, SOFCs
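The distinction drawn above between an interval-slope definition and an instantaneous definition of the degradation rate can be illustrated numerically. The voltage trace below is a synthetic placeholder, not output from the authors' model; it simply shows how the interval-based value changes with the chosen time window while the instantaneous rate does not.

```python
# Illustrative comparison of two degradation-rate definitions for a SOFC
# voltage-time curve: the interval slope (% of initial voltage per 1000 h,
# sensitive to the chosen time window) versus the instantaneous rate from a
# local derivative. The voltage profile is synthetic.
import numpy as np

t = np.linspace(0.0, 10_000.0, 501)                    # operating time (h)
v = 0.80 - 0.04 * (1.0 - np.exp(-t / 1500.0))          # synthetic cell voltage (V)

def interval_rate(t, v, t0, t1):
    """Average degradation rate between t0 and t1, in % of initial V per 1000 h."""
    v0, v1 = np.interp([t0, t1], t, v)
    return (v0 - v1) / v[0] * 100.0 * 1000.0 / (t1 - t0)

# Instantaneous rate from a local derivative of the voltage curve
inst_rate = -np.gradient(v, t) / v[0] * 100.0 * 1000.0  # % per 1000 h

print("0-1500 h interval rate  :", round(interval_rate(t, v, 0.0, 1500.0), 3))
print("0-10000 h interval rate :", round(interval_rate(t, v, 0.0, 10_000.0), 3))
print("instantaneous at 1500 h :", round(float(np.interp(1500.0, t, inst_rate)), 3))
```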
Procedia PDF Downloads 133