Search results for: multi sliding friction device
434 Braille Lab: A New Design Approach for Social Entrepreneurship and Innovation in Assistive Tools for the Visually Impaired
Authors: Claudio Loconsole, Daniele Leonardis, Antonio Brunetti, Gianpaolo Francesco Trotta, Nicholas Caporusso, Vitoantonio Bevilacqua
Abstract:
Unfortunately, many people still do not have access to communication, particularly to reading and writing. Among them, people who are blind or visually impaired face several difficulties in accessing the world compared to the sighted. Indeed, despite technological advancement and cost reduction, assistive devices such as the Braille-based input/output systems that enable reading and writing texts (e.g., personal notes, documents) are still expensive. Consequently, the affordability of assistive technology is fundamental to supporting the visually impaired in communication, learning, and social inclusion. This, in turn, has serious consequences in terms of equal access to opportunities, freedom of expression, and actual, independent participation in a society designed for the sighted. Moreover, the visually impaired experience difficulties in recognizing objects and interacting with devices in many activities of daily living. It is no coincidence that Braille indications are commonly found only on medicine boxes and elevator keypads. Several software applications for the automatic translation of written text into speech (e.g., Text-To-Speech, TTS) enable reading portions of documents. However, beyond simple tasks, TTS software is often unsuitable for understanding complicated text that requires the reader to dwell on specific portions (e.g., mathematical formulas or Greek text). In addition, the experience of reading/writing text is completely different, both in terms of engagement and from an educational perspective. Statistics on the employment rate of blind people show that learning to read and write gives the visually impaired up to 80% more opportunities to find a job.
Especially at higher educational levels, where the ability to digest very complex text is key, the accessibility and availability of Braille play a fundamental role in reducing the drop-out rate of the visually impaired, and thus affect the effectiveness of the constitutional right of access to education. In this context, the Braille Lab project aims to address these social needs by making affordability central to the design and development of assistive tools for visually impaired people. Specifically, our awarded project focuses on technological innovation of the operating principle of existing assistive tools for the visually impaired while leaving the Human-Machine Interface unchanged. This can significantly reduce production costs and, consequently, tool selling prices, representing an important opportunity for social entrepreneurship. The first two assistive tools designed within the Braille Lab project following the proposed approach aim to let users personally print documents and handouts and to read texts written in Braille on a refreshable Braille display, respectively. The former, named 'Braille Cartridge', is an alternative solution for printing in Braille and consists of an electronically controlled dispensing cartridge that can be integrated into traditional ink-jet printers, in order to leverage the efficiency and cost of a mechanical structure already in use. The latter, named 'Braille Cursor', is an innovative Braille display featuring a substantial technological innovation: a single cursor virtualizing the Braille cells, thus limiting the number of active pins needed to render Braille characters.
Keywords: human rights, social challenges and technology innovations, visually impaired, affordability, assistive tools
Procedia PDF Downloads 273
433 High Performance Lithium Ion Capacitors from Biomass Waste-Derived Activated Carbon
Authors: Makhan Maharjan, Mani Ulaganathan, Vanchiappan Aravindan, Srinivasan Madhavi, Jing-Yuan Wang, Tuti Mariana Lim
Abstract:
The ever-increasing energy demand has driven research into high-performance energy storage systems able to fulfill energy needs. Supercapacitors have potential applications as portable energy storage devices. In recent years, there has been huge research interest in enhancing the performance of supercapacitors by exploiting novel, promising carbon precursors, tailoring the textural properties of carbons, and exploring various electrolytes and device types. In this work, we employed orange peel (a waste material) as the starting material and synthesized activated carbon by pyrolysis of KOH-impregnated orange peel char at 800 °C in an argon atmosphere. The resultant orange peel-derived activated carbon (OP-AC) exhibited a BET surface area of 1,901 m² g⁻¹, the highest so far reported for orange peel. The pore size distribution (PSD) curve exhibits pores centered at 11.26 Å pore width, suggesting dominant microporosity. The high surface area of OP-AC accommodates more ions in the electrodes, and its well-developed porous structure facilitates fast diffusion of ions, which subsequently enhances electrochemical performance. The OP-AC was studied as a positive electrode in combination with different negative electrode materials, such as pre-lithiated graphite (LiC6) and Li4Ti5O12, to make hybrid capacitors. The lithium ion capacitor (LIC) fabricated using OP-AC with pre-lithiated graphite delivered a high energy density of ~106 Wh kg⁻¹. The energy density of the OP-AC||Li4Ti5O12 capacitor was ~35 Wh kg⁻¹. For comparison, OP-AC||OP-AC capacitors were studied in both aqueous (1M H2SO4) and organic (1M LiPF6 in EC-DMC) electrolytes, delivering energy densities of 8.0 Wh kg⁻¹ and 16.3 Wh kg⁻¹, respectively.
The cycling retentions obtained at a current density of 1 A g⁻¹ after 2500 cycles were ~85.8%, ~87.0%, ~82.2% and ~58.8% for the OP-AC||OP-AC (aqueous), OP-AC||OP-AC (organic), OP-AC||Li4Ti5O12 and OP-AC||LiC6 configurations, respectively. In addition, characterization studies were performed by elemental and proximate composition, thermogravimetric analysis, field emission-scanning electron microscopy (FE-SEM), Raman spectra, X-ray diffraction (XRD) patterns, Fourier transform-infrared (FTIR) spectroscopy, X-ray photoelectron spectroscopy (XPS) and N2 sorption isotherms. The morphological features from FE-SEM exhibited well-developed porous structures. Two typical broad peaks observed in the XRD pattern of the synthesized carbon imply an amorphous graphitic structure. The ID/IG ratio of 0.86 in the Raman spectra indicates a high degree of graphitization in the sample. The C 1s band spectra in XPS display well-resolved peaks related to carbon atoms in various chemical environments. The presence of functional groups is also corroborated by FTIR spectroscopy. Characterization studies revealed the synthesized carbon to be a promising electrode material for energy storage devices. Overall, the intriguing properties of OP-AC make it a promising new alternative electrode material for the development of high-energy lithium ion capacitors from abundant, low-cost, renewable biomass waste. The authors gratefully acknowledge the Agency for Science, Technology and Research (A*STAR)/Singapore International Graduate Award (SINGA) and Nanyang Technological University (NTU), Singapore for funding support.
Keywords: energy storage, lithium-ion capacitors, orange peels, porous activated carbon
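The gravimetric energy densities quoted in this abstract follow from the standard capacitor relation E = ½CV². A minimal sketch of that conversion to Wh kg⁻¹, with illustrative numbers that are not taken from the paper:

```python
# Gravimetric energy density of a capacitor cell: E = 1/2 * C * V^2,
# converted from joules to watt-hours and normalised by active mass.
def energy_density_wh_per_kg(capacitance_f, voltage_v, mass_kg):
    energy_j = 0.5 * capacitance_f * voltage_v ** 2
    return energy_j / 3600.0 / mass_kg

# Hypothetical cell (not the paper's data): 10 F at 3.0 V with 1 g of
# active material stores 12.5 Wh per kilogram.
print(energy_density_wh_per_kg(10.0, 3.0, 0.001))  # → 12.5
```

The wide spread between the aqueous (8.0 Wh kg⁻¹) and lithium-ion (~106 Wh kg⁻¹) configurations largely reflects the V² term: organic and hybrid cells tolerate much higher voltages.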
Procedia PDF Downloads 229
432 Investigating the Essentiality of Oxazolidinones in Resistance-Proof Drug Combinations in Mycobacterium tuberculosis Selected under in vitro Conditions
Authors: Gail Louw, Helena Boshoff, Taeksun Song, Clifton Barry
Abstract:
Drug resistance in Mycobacterium tuberculosis is primarily attributed to mutations in target genes. These mutations incur a fitness cost, resulting in bacterial generations that are less fit and subsequently acquire compensatory mutations to restore fitness. We hypothesize that mutations in specific drug target genes influence bacterial metabolism and cellular function, which affects the ability to develop subsequent resistance to additional agents. We aim to determine whether the sequential acquisition of drug resistance and specific mutations in a well-defined clinical M. tuberculosis strain promotes or limits the development of additional resistance. In vitro mutants resistant to pretomanid, linezolid, moxifloxacin, rifampicin and kanamycin were generated from a pan-susceptible clinical strain of the Beijing lineage. The resistant phenotypes to the anti-TB agents were confirmed by broth microdilution assay, and genetic mutations were identified by targeted gene sequencing. Mono-resistant mutants were grown in enriched medium for 14 days to assess in vitro fitness. Double-resistant mutants were generated against anti-TB drug combinations at concentrations of 5x and 10x the minimum inhibitory concentration. Subsequently, mutation frequencies for these anti-TB drugs in the different mono-resistant backgrounds were determined. The initial level of resistance and the mutation frequencies observed for the mono-resistant mutants were comparable to those previously reported. Targeted gene sequencing revealed known, clinically relevant mutations in the mutants resistant to linezolid, rifampicin, kanamycin and moxifloxacin. Significant growth defects were observed for mutants grown under in vitro conditions compared to the sensitive progenitor.
Determination of mutation frequencies in the mono-resistant mutants revealed a significant increase in mutation frequency against rifampicin and kanamycin, but a significant decrease against linezolid and sutezolid. This suggests that these mono-resistant mutants are more prone to develop resistance to rifampicin and kanamycin, but less prone to develop resistance to linezolid and sutezolid. Even though kanamycin and linezolid both inhibit protein synthesis, they target different subunits of the ribosome, leading to different fitness outcomes in mutants with impaired cellular function. These observations show that oxazolidinone treatment is instrumental in limiting the development of multi-drug resistance in M. tuberculosis in vitro.
Keywords: oxazolidinones, mutations, resistance, tuberculosis
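The mutation frequencies compared above are conventionally computed as the number of resistant colonies divided by the total viable count, each corrected for its plating dilution. A minimal sketch with made-up counts (not the study's data):

```python
# Mutation frequency: resistant colonies recovered on drug-containing
# plates divided by the total viable count, both scaled by their
# dilution factors to give counts per original culture volume.
def mutation_frequency(resistant_colonies, resistant_dilution,
                       total_colonies, total_dilution):
    resistant_cfu = resistant_colonies * resistant_dilution
    total_cfu = total_colonies * total_dilution
    return resistant_cfu / total_cfu

# Illustrative counts: 12 colonies on undiluted drug plates, from a
# culture titred at 150 colonies at a 10^6 dilution.
freq = mutation_frequency(12, 1, 150, 1e6)
print(f"{freq:.1e}")  # → 8.0e-08
```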
Procedia PDF Downloads 162
431 Effect of Pre-bonding Storage Period on Laser-treated Al Surfaces
Authors: Rio Hirakawa, Christian Gundlach, Sven Hartwig
Abstract:
In recent years, the use of aluminium has expanded further, and it is expected to replace steel as vehicles become lighter and more recyclable in order to reduce greenhouse gas (GHG) emissions and improve fuel economy. In line with this, structures and components are becoming increasingly multi-material, with different materials, including aluminium, being combined to improve mechanical utility and performance. A common method of assembling dissimilar materials is mechanical fastening, but it has several drawbacks, such as additional manufacturing steps and the influence of substrate-specific mechanical properties. Adhesive bonding and fusion bonding are methods that overcome these disadvantages. In both joining methods, surface pre-treatment of the substrate is always necessary to ensure the strength and durability of the joint. Previous studies have shown that laser surface treatment improves joint strength and durability. Yan et al. showed that laser surface treatment of aluminium alloys changes α-Al2O3 in the oxide layer to γ-Al2O3. As γ-Al2O3 has a large specific surface area and is very porous and chemically active, laser-treated aluminium surfaces are expected to undergo physico-chemical changes over time and adsorb moisture and organic substances from the air or the storage atmosphere. The impurities accumulated on the laser-treated surface may be released at the adhesive and bonding interface by the heat input to the bonding system during the joining phase, affecting the strength and durability of the joint. However, only a few studies have discussed the effect of such storage periods on laser-treated surfaces.
This paper, therefore, investigates the ageing of laser-treated aluminium alloy surfaces through thermal analysis, electrochemical analysis and microstructural observations. AlMg3 sheets of 0.5 mm and 1.5 mm thickness were cut using a water-jet cutting machine, cleaned and degreased with isopropanol, and surface pre-treated with a pulsed fibre laser at 1060 nm wavelength, 70 W maximum power and 55 kHz repetition frequency. The aluminium surface was then analysed using SEM, thermogravimetric analysis (TGA), Fourier transform infrared spectroscopy (FTIR) and cyclic voltammetry (CV) after storage in air for various periods ranging from one day to several months. TGA and FTIR identified impurities adsorbed on the aluminium surface, while CV revealed changes in the true electrochemically active surface area. SEM also revealed visual changes on the treated surface. In summary, the changes in the laser-treated aluminium surface with storage time were investigated, and the final results were used to determine an appropriate storage period.
Keywords: laser surface treatment, pre-treatment, adhesion, bonding, corrosion, durability, dissimilar material interface, automotive, aluminium alloys
Procedia PDF Downloads 80
430 Medication Side Effects: Implications on the Mental Health and Adherence Behaviour of Patients with Hypertension
Authors: Irene Kretchy, Frances Owusu-Daaku, Samuel Danquah
Abstract:
Hypertension is the leading risk factor for cardiovascular diseases and a major cause of death and disability worldwide. This study examined whether psychosocial variables influenced patients' perception and experience of the side effects of their medicines, how they coped with these experiences, and the impact on mental health and adherence to conventional hypertension therapies. Methods: A hospital-based mixed-methods study, using quantitative and qualitative approaches, was conducted on hypertensive patients. Participants were asked about side effects, medication adherence, common psychological symptoms, and coping mechanisms with the aid of standard questionnaires. Information from the quantitative phase was analyzed with the Statistical Package for the Social Sciences (SPSS) version 20. The interviews from the qualitative study were recorded with a digital audio recorder, manually transcribed, and analyzed using thematic content analysis; themes were derived from the participant interviews a posteriori. Results: The experiences of side effects, such as palpitations, frequent urination, recurrent bouts of hunger, erectile dysfunction, dizziness, cough and physical exhaustion, were categorized as no/low (39.75%), moderate (53.0%) and high (7.25%). Significant relationships between depression (χ² = 24.21, p < 0.0001), anxiety (χ² = 42.33, p < 0.0001), stress (χ² = 39.73, p < 0.0001) and side effects were observed. Adjusted results from a logistic regression model of this association are reported: depression [OR = 1.9 (1.03 – 3.57), p = 0.04], anxiety [OR = 1.5 (1.22 – 1.77), p < 0.001], and stress [OR = 1.3 (1.02 – 1.71), p = 0.04]. Side effects significantly increased the probability of non-adherence [OR = 4.84 (95% CI 1.07 – 1.85), p = 0.04], with social factors, media influences and the attitudes of primary caregivers further explaining this relationship.
The main forms of coping with side effects were the personal adoption of medication-modifying strategies, use of complementary and alternative treatments, and interventions made by clinicians. Conclusions: Results from this study show that, contrary to a purely biomedical approach, the experience of side effects has biological, social and psychological interrelations. The results further support the need for a multi-disciplinary approach to healthcare in which all forms of expertise are incorporated into health provision and patient care. Additionally, medication side effects should be considered a possible cause of non-adherence among hypertensive patients; addressing this problem from a biopsychosocial perspective in any intervention may improve adherence and, in turn, blood pressure control.
Keywords: biopsychosocial, hypertension, medication adherence, psychological disorders
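The odds ratios reported above come from exponentiating logistic-regression coefficients; the confidence interval is obtained the same way from the coefficient's standard error. A minimal sketch of that conversion, with illustrative values rather than the study's fitted model:

```python
import math

# Convert a logistic-regression coefficient b and its standard error SE
# into an odds ratio with a 95% confidence interval:
#   OR = exp(b),  CI = [exp(b - 1.96*SE), exp(b + 1.96*SE)].
def odds_ratio_ci(coef, se):
    return (math.exp(coef),
            math.exp(coef - 1.96 * se),
            math.exp(coef + 1.96 * se))

# Hypothetical coefficient (not from the paper): b = 0.64, SE = 0.32
# gives OR ≈ 1.90 with CI roughly (1.01, 3.55).
or_, lo, hi = odds_ratio_ci(0.64, 0.32)
```

An OR above 1 with a CI that excludes 1 indicates a statistically significant increase in the odds of the outcome, which is the pattern reported for depression, anxiety and stress here.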
Procedia PDF Downloads 371
429 Comparing Deep Architectures for Selecting Optimal Machine Translation
Authors: Despoina Mouratidis, Katia Lida Kermanidis
Abstract:
Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular in automatic MT evaluation are score-based, such as the BLEU score; others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information such as part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation (SMT) model and one from a neural machine translation (NMT) model. The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested with this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known baseline approaches, such as Random Forest (RF) and Support Vector Machine (SVM).
Better accuracy is obtained when LSTM layers are used in the schema. In terms of balanced results, dense layers perform best, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems were identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to investigate why all the classifiers led to worse accuracy in Italian than in Greek, given that the linguistic features employed are language-independent.
Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification
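The framework's input builds on similarity between vector representations: each MT output against the reference, and the two outputs against each other. A minimal sketch of one such feature, cosine similarity, over toy embeddings (the vectors and feature layout here are illustrative assumptions, not the paper's actual feature set):

```python
import math

# Cosine similarity between two sentence vectors -- the kind of
# language-independent similarity feature that, together with
# string-based features, feeds the pairwise classifier.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings (hypothetical): reference, SMT output, NMT output.
ref = [0.2, 0.7, 0.1]
smt = [0.1, 0.6, 0.3]
nmt = [0.2, 0.8, 0.1]

# One pairwise feature vector: each output vs the reference, plus the
# two outputs against each other; a multi-layer NN then classifies
# which translation is better.
features = [cosine(smt, ref), cosine(nmt, ref), cosine(smt, nmt)]
```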
Procedia PDF Downloads 132
428 A Hybrid Artificial Intelligence and Two Dimensional Depth Averaged Numerical Model for Solving Shallow Water and Exner Equations Simultaneously
Authors: S. Mehrab Amiri, Nasser Talebbeydokhti
Abstract:
Modeling sediment transport processes numerically often poses severe challenges. A number of techniques have been suggested for solving the flow and sediment equations in decoupled, semi-coupled or fully coupled forms. Furthermore, to capture flow discontinuities, techniques such as artificial viscosity and shock fitting have been proposed, most of which require careful calibration. In this research, a numerical scheme for solving the shallow water and Exner equations in fully coupled form is presented. The First-Order Centered scheme is applied to produce the required numerical fluxes, and the reconstruction process is carried out using the Monotonic Upstream Scheme for Conservation Laws (MUSCL) to achieve a high-order scheme. To satisfy the C-property of the scheme in the presence of bed topography, the Surface Gradient Method is proposed. Combining the presented scheme with a fourth-order Runge-Kutta algorithm for time integration yields a competent numerical scheme. In addition, to handle non-prismatic channel problems, the Cartesian Cut Cell Method is employed. A trained Multi-Layer Perceptron Artificial Neural Network of the Feed Forward Back Propagation (FFBP) type estimates sediment flow discharge in the model, rather than the usual empirical formulas. The hydrodynamic part of the model is tested to show its capability in simulating flow discontinuities, transcritical flows, wetting/drying conditions and non-prismatic channel flows. To this end, dam-break flow onto a locally non-prismatic converging-diverging channel with initially dry bed conditions is modeled. The morphodynamic part of the model is verified by simulating a dam break on a dry movable bed and bed level variations at an alluvial junction.
The results show that the model is capable of capturing flow discontinuities, solving wetting/drying problems even in non-prismatic channels, and producing proper results for movable bed situations. It can also be deduced that applying an Artificial Neural Network, instead of common empirical formulas, to estimate sediment flow discharge leads to more accurate results.
Keywords: artificial neural network, morphodynamic model, sediment continuity equation, shallow water equations
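At each step of the coupled solver, the trained MLP replaces an empirical transport formula: it maps flow variables to a sediment discharge estimate. A minimal sketch of the forward pass of such a one-hidden-layer FFBP-type network; the weights, input variables and sizes here are placeholders, not the trained network from the paper:

```python
import math

# Forward pass of a one-hidden-layer perceptron (the inference side of
# a network trained by feed-forward back-propagation). Hidden units use
# tanh activations; the output is a single scalar, here standing in for
# a normalised sediment discharge.
def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

# Hypothetical normalised inputs, e.g. depth, velocity, bed slope.
x = [0.4, 0.8, 0.1]
q_s = mlp_forward(x,
                  w_hidden=[[0.5, -0.2, 0.1], [0.3, 0.4, -0.6]],
                  b_hidden=[0.0, 0.1],
                  w_out=[0.7, -0.3],
                  b_out=0.05)
```

The appeal over an empirical formula is that the mapping is fitted to data rather than fixed a priori, at the cost of needing representative training data.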
Procedia PDF Downloads 187
427 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack
Authors: Vincent Andrew Cappellano
Abstract:
In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system's intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failure or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage to communications or physical nodes). This increases the risk of poor selection of a candidate architecture, due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this, a set of simulation and evaluation efforts are undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures in which system functions and capabilities are distributed across nodes. Nodes and links have specific characteristics and, based on the sampled requirements, contribute to overall system functionality, such that as they are impacted/degraded, the impacted functional availability of the system can be determined.
A machine learning reinforcement-based agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, producing a metric rating their resilience to these attack types/strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architectural selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
Keywords: architecture, resiliency, availability, cyber-attack
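The core of the proposed metric is measuring how much system function survives as nodes are degraded. A toy sketch of that idea, under the simplifying assumption (mine, not the paper's) that a function is "available" while at least one hosting node survives; the function names and topology are hypothetical:

```python
# Toy functional-availability metric: functions are mapped to the nodes
# that host them; as an attack removes nodes, availability is the
# fraction of functions still hosted on at least one surviving node.
def functional_availability(function_hosts, failed_nodes):
    failed = set(failed_nodes)
    up = sum(1 for hosts in function_hosts.values()
             if any(node not in failed for node in hosts))
    return up / len(function_hosts)

# Hypothetical system: three functions spread over three nodes, with
# "ingest" and "store" replicated and "compute" a single point of failure.
hosts = {"ingest": {"edge1", "edge2"},
         "compute": {"cloud1"},
         "store": {"cloud1", "edge2"}}

print(functional_availability(hosts, []))          # → 1.0
print(functional_availability(hosts, ["cloud1"]))  # "compute" lost: 2/3
```

A full version would also model link degradation (bandwidth, latency) and sweep attack intensities, which is where the reinforcement-learning agent described above comes in.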
Procedia PDF Downloads 108
426 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as a dynamical system. However, the calibration of the dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it places strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network and SVM regression). The dataset consists of 720 records of corn yield at county scale, provided by the United States Department of Agriculture (USDA), and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate prediction capacity.
The results show that, among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method for calibrating the mechanistic model from easily accessible datasets offers several side benefits: the mechanistic model can potentially help to identify the stresses suffered by the crop or the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
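The two accuracy metrics used throughout the comparison are straightforward to state. A minimal sketch of RMSEP and MAEP on illustrative values (not the USDA data):

```python
import math

# Root mean square error of prediction (RMSEP) and mean absolute error
# of prediction (MAEP), the two metrics used to compare the approaches.
def rmsep(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

def maep(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy yields (hypothetical, e.g. tonnes per hectare).
y_true = [10.0, 12.0, 9.0, 11.0]
y_pred = [9.5, 12.5, 8.0, 11.0]
errors = (rmsep(y_true, y_pred), maep(y_true, y_pred))
```

In 5-fold cross-validation, these metrics are computed on each held-out fold and averaged, so the reported numbers estimate out-of-sample prediction error rather than fit quality.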
Procedia PDF Downloads 231
425 The Use of Social Media in a UK School of Pharmacy to Increase Student Engagement and Sense of Belonging
Authors: Samantha J. Hall, Luke Taylor, Kenneth I. Cumming, Jakki Bardsley, Scott S. P. Wildman
Abstract:
Medway School of Pharmacy, a joint collaboration between the University of Kent and the University of Greenwich, is a large school of pharmacy in the United Kingdom. The school primarily delivers the accredited Master of Pharmacy (MPharm) degree programme. Reportedly, some students may feel isolated from the larger student body, which extends across four separate campuses where a diverse range of academic subjects is delivered. In addition, student engagement has been noted as limited in some areas, as evidenced in some cases by poor attendance at some lectures. In January 2015, the University of Kent launched a new initiative dedicated to Equality, Diversity and Inclusivity (EDI). As part of this project, Medway School of Pharmacy employed 'Student Success Project Officers' to analyse past and present school data. As a result, initiatives have been implemented to i) negate disparities in attainment and ii) increase engagement, particularly for Black, Asian and Minority Ethnic (BAME) students, who make up more than 80% of the pharmacy student cohort. Social media platforms are prevalent, with global statistics suggesting that they are most commonly used by females between the ages of 16-34. Student focus groups held throughout the academic year brought to light the school's need to use social media much more actively. Prior to the EDI initiative, social media usage at Medway School of Pharmacy was scarce. Platforms including Facebook, Twitter, Instagram, YouTube, The Student Room and university blogs were either introduced or rejuvenated. This action was taken with the primary aim of increasing student engagement. By using a number of varied social media platforms, the university is able to reach a wide range of students by appealing to different interests.
Social media is being used to disseminate important information, promote equality and diversity, recognise and celebrate student success, and allow students to explore student life beyond Medway School of Pharmacy. Early data suggest an increase in lecture attendance, as well as greater evidence of student engagement highlighted by recent focus group discussions. In addition, students have communicated that active social media accounts were imperative when choosing universities for 2015/16, as they allow prospective students to understand more about the university and its community before beginning their studies. By maintaining a lively presence on social media, the university can use a multi-faceted approach to succeed in early engagement, as well as fostering the long-term engagement of continuing students.
Keywords: engagement, social media, pharmacy, community
Procedia PDF Downloads 325
424 Phage Therapy as a Potential Solution in the Fight against Antimicrobial Resistance
Authors: Sanjay Shukla
Abstract:
Excessive use of antibiotics is a major problem in the treatment of wounds and other chronic infections, and antibiotic treatment is frequently non-curative; thus, alternative treatment is necessary. Phage therapy is considered one of the most effective approaches to treating multi-drug resistant bacterial pathogens. Infections caused by Staphylococcus aureus can be controlled very efficiently with phage cocktails containing different individual phage lysates that together infect the majority of known pathogenic S. aureus strains. The aim of the current study was to investigate the efficiency of a purified phage cocktail for prophylactic as well as therapeutic application in a mouse model and in large animals with chronic septic wound infections. A total of 150 sewage samples were collected from various livestock farms and subjected to bacteriophage isolation by the double agar layer method. Of the 150 samples, 27 showed plaque formation with lytic activity against S. aureus in the double agar overlay method. In TEM, the recovered bacteriophage isolates showed a hexagonal structure with tail fibers. The bacteriophage (ØVS) had icosahedral symmetry, with a head 52.20 nm in diameter and a long tail of 109 nm; head and tail were held together by a connector, and it can be classified as a member of the Myoviridae family under the order Caudovirales. The recovered bacteriophage showed antibacterial activity against S. aureus in vitro. A cocktail of phage lysates (ØVS1, ØVS5, ØVS9 and ØVS27) was tested for in vivo antibacterial activity as well as for its safety profile. The results of the mouse experiment indicated that the bacteriophage lysate was very safe, with no abscess formation, which indicates its safety in a living system. Mice were also prophylactically protected against S. aureus when administered the bacteriophage cocktail just before administration of S. aureus, indicating that the lysates are good prophylactic agents. S. aureus-inoculated mice recovered completely upon bacteriophage administration, a 100% recovery rate that compares very favourably with conventional therapy. In the present study, ten chronic wound cases were treated with phage lysate, with regular follow-up over ten days (at 0, 5 and 10 d). Six of the ten cases showed complete wound recovery within 10 d, an efficacy of 60%, which is very good compared to conventional antibiotic therapy for chronic septic wound infections. Thus, the application of lytic phage in a single dose proved to be an innovative and effective therapy for the treatment of septic chronic wounds.
Keywords: phage therapy, phage lysate, antimicrobial resistance, S. aureus
Procedia PDF Downloads 118
423 Self-Sensing Concrete Nanocomposites for Smart Structures
Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi
Abstract:
In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit the control of the working conditions of structures and infrastructures through the identification of behavioral anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquakes. While traditional sensors can be applied only in a limited number of points, providing only partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be embedded within concrete elements, transforming the structures themselves into sets of distributed sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for Structural Health Monitoring. The developed nanocomposite has been obtained by dispersing multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a peculiar sensitivity to mechanical modifications. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and the working conditions of an element or a structure can be monitored.
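The sensing principle described above, correlating a resistance change with strain, is often modeled to first order by a gauge-factor relation. A minimal sketch, assuming a simple linear piezoresistive model with a hypothetical gauge factor (the abstract does not report one):

```python
def strain_from_resistance(resistance_ohm, baseline_ohm, gauge_factor=100.0):
    """Estimate strain from the relative resistance change of a
    piezoresistive cement-matrix sensor.

    Assumes a first-order linear model: dR/R0 = GF * strain.
    The gauge factor here is a hypothetical placeholder, not a
    value reported in the study.
    """
    delta_r = resistance_ohm - baseline_ohm
    return delta_r / (baseline_ohm * gauge_factor)

# Example: a 1% resistance increase with GF = 100 maps to 1e-4 strain
strain = strain_from_resistance(101.0, 100.0)
```

In practice the baseline resistance and gauge factor would come from calibrating each embedded sensor against a reference load.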
Among conductive carbon nanofillers, carbon nanotubes seem particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to nanofiller dispersion or to the influence of the amount of nano-inclusions in the cement matrix need to be carefully investigated, since the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh mixture, the electrical properties of the hardened composites, and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
Keywords: carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring
Procedia PDF Downloads 227
422 Impact of UV on Toxicity of Zn²⁺ and ZnO Nanoparticles to Lemna minor
Authors: Gabriela Kalcikova, Gregor Marolt, Anita Jemec Kokalj, Andreja Zgajnar Gotvajn
Abstract:
Since the 90s, nanotechnology has been one of the fastest growing fields of science. Nanomaterials are increasingly becoming part of many products and technologies, and metal oxide nanoparticles are among the most used nanomaterials. Zinc oxide nanoparticles (nZnO) are widely used due to their versatile properties; they have been used in products including plastics, paints, food, batteries, solar cells and cosmetics. nZnO is also a very effective photocatalyst used for water treatment. Such expanding application of nZnO increases its possible occurrence in the environment. In the aquatic ecosystem, nZnO interacts with natural environmental factors such as UV radiation, and thus it is essential to evaluate possible interactions between them. In this context, the aim of our study was to evaluate the combined ecotoxicity of nZnO and Zn²⁺ on the duckweed Lemna minor in the presence or absence of UV. Inhibition of vegetative growth of Lemna minor was monitored over a period of 7 days in multi-well plates, after which the specific growth rate was determined. The ZnO nanoparticles used were of primary size 13.6 ± 1.7 nm. The test was conducted with nominal nZnO and Zn²⁺ (in the form of ZnCl₂) concentrations of 1, 10 and 100 mg/L. The experiment was repeated in the presence of UV of natural intensity (8 h UV; 10 W/m² UVA, 0.5 W/m² UVB). The concentration of Zn during the test was determined by ICP-MS. In the regular experiment (absence of UV), the specific growth rate was slightly increased by low concentrations of nZnO and Zn²⁺ in comparison to the control. However, 10 and 100 mg/L of Zn²⁺ resulted in 45% and 68% inhibition of the specific growth rate, respectively. In the case of nZnO, both concentrations (10 and 100 mg/L) resulted in a similar ~30% inhibition, and the response was not dose-dependent. A lack of dose-response relationship is often observed with nanoparticles; a possible explanation is that physical impacts prevail over chemical ones.
In the presence of UV, the toxicity of Zn²⁺ increased, and 100 mg/L of Zn²⁺ caused total inhibition of the specific growth rate (100%). On the other hand, 100 mg/L of nZnO resulted in lower inhibition (19%) than in the experiment without UV (30%). It thus appears that the tested nZnO has low photoactivity but good UV-absorbing and/or reflective properties, and so protects the duckweed against UV impacts. The measured concentration of Zn in the test suspension decreased by only about 4% after 168 h in the case of ZnCl₂. In contrast, the concentration of Zn in the nZnO test decreased by 80%. The nZnO likely partially dissolved in the medium while agglomeration and sedimentation of particles took place, so the concentration of Zn at the water surface decreased. The results of our study indicate that UV of natural intensity does not increase the toxicity of nZnO; rather, nZnO slightly protects the plant against negative UV effects. When the Zn²⁺ and ZnO results are compared, dissolved Zn appears to play a central role in nZnO toxicity.
Keywords: duckweed, environmental factors, nanoparticles, toxicity
Procedia PDF Downloads 333
421 TeleEmergency Medicine: Transforming Acute Care through Virtual Technology
Authors: Ashley L. Freeman, Jessica D. Watkins
Abstract:
TeleEmergency Medicine (TeleEM) is an innovative approach leveraging virtual technology to deliver specialized emergency medical care across diverse healthcare settings, including internal acute care and critical access hospitals, remote patient monitoring, and nurse triage escalation, in addition to external emergency departments, skilled nursing facilities, and community health centers. TeleEM represents a significant advancement in the delivery of emergency medical care, providing healthcare professionals the capability to deliver expertise that closely mirrors in-person emergency medicine, exceeding geographical boundaries. Through qualitative research, the extension of timely, high-quality care has proven to address the critical needs of patients in remote and underserved areas. TeleEM’s service design allows for the expansion of existing services and the establishment of new ones in diverse geographic locations. This ensures that healthcare institutions can readily scale and adapt services to evolving community requirements by leveraging on-demand (non-scheduled) telemedicine visits through the deployment of multiple video solutions. In terms of financial management, TeleEM currently employs billing suppression and subscription models to enhance accessibility for a wide range of healthcare facilities. Plans are in motion to transition to a billing system routing charges through a third-party vendor, further enhancing financial management flexibility. To address state licensure concerns, a patient location verification process has been integrated through legal counsel and compliance authorities' guidance. The TeleEM workflow is designed to terminate if the patient is not physically located within licensed regions at the time of the virtual connection, alleviating legal uncertainties. A distinctive and pivotal feature of TeleEM is the introduction of the TeleEmergency Medicine Care Team Assistant (TeleCTA) role. 
TeleCTAs collaborate closely with TeleEM Physicians, leading to enhanced service activation, streamlined coordination, and workflow and data efficiencies. In the last year, more than 800 TeleEM sessions have been conducted, of which 680 were initiated by internal acute care and critical access hospitals, as evidenced by quantitative research. Without this service, many of these cases would have necessitated patient transfers. Barriers to success were examined through thorough medical record review and data analysis, which identified inaccuracies in documentation leading to activation delays, limitations in billing capabilities, and data distortion, as well as the intricacies of managing varying workflows and device setups. TeleEM represents a transformative advancement in emergency medical care that nurtures collaboration and innovation. Through focus group participation with key stakeholders, rigorous attention to legal and financial considerations, and the implementation of robust documentation tools and the TeleCTA role, TeleEM has not only advanced the virtual delivery of emergency medical care but has also set the stage for overcoming geographic limitations. TeleEM assumes a notable position in the field of telemedicine by enhancing patient outcomes and expanding access to emergency medical care while mitigating licensure risks and ensuring compliant billing.
Keywords: emergency medicine, TeleEM, rural healthcare, telemedicine
Procedia PDF Downloads 82
420 Compression-Extrusion Test to Assess Texture of Thickened Liquids for Dysphagia
Authors: Jesus Salmeron, Carmen De Vega, Maria Soledad Vicente, Mireia Olabarria, Olaia Martinez
Abstract:
Dysphagia, or difficulty in swallowing, mostly affects elderly people: 56-78% of those institutionalized and 44% of those hospitalized. Thickening liquid food is a necessary measure in this situation because it reduces the risk of penetration-aspiration. Until now, and as proposed by the American Dietetic Association in 2002, possible consistencies have been categorized in three groups according to their viscosity: nectar (50-350 mPa·s), honey (350-1750 mPa·s) and pudding (>1750 mPa·s). The adequate viscosity level should be identified for every patient according to her/his impairment. Nevertheless, a recent systematic review on dysphagia diets indicated that there is no evidence of any clinically relevant transition between the three levels proposed. It also stated that other physical properties of the bolus (slipperiness, density or cohesiveness, among others) could influence swallowing in affected patients and could contribute to the amount of remaining residue. Texture parameters therefore need to be evaluated as a possible alternative to viscosity. The aim of this study was to evaluate the instrumental extrusion-compression test as a possible tool to characterize changes over time in water thickened with various products at the three theoretical consistencies. Six commercial thickeners were used: NM® (NM), Multi-thick® (M), Nutilis Powder® (Nut), Resource® (R), Thick&Easy® (TE) and Vegenat® (V), all with a modified starch base. Only one of them, Nut, also contained 6.4% gum (guar, tara and xanthan). They were prepared as indicated in the instructions of each product, dispensing the corresponding amount for nectar, honey and pudding consistencies in 300 mL of tap water at 18-20ºC. The mixture was stirred for about 30 s. Once homogeneously spread, it was dispensed into 30 mL plastic glasses, always filled to the same height. Each of these glasses was used as a measuring point.
Viscosity was measured using a rotational viscometer (ST-2001, Selecta, Barcelona). The extrusion-compression test was performed using a TA.XT2i texture analyzer (Stable Micro Systems, UK) with a 25 mm diameter cylindrical probe (SMSP/25). Penetration distance was set at 10 mm and speed at 3 mm/s. Measurements were made at 1, 5, 10, 20, 30, 40, 50 and 60 minutes from the moment the samples were mixed. From the force (g)-time (s) curves obtained in the instrumental assays, the maximum force peak (F) was chosen as the reference parameter. Viscosity (mPa·s) and F (g) were highly correlated and developed similarly over time, following time-dependent quadratic models. It was possible to predict viscosity using F as an independent variable, as they were linearly correlated. In conclusion, the compression-extrusion test could be an alternative and useful tool to assess the physical characteristics of thickened liquids.
Keywords: compression-extrusion test, dysphagia, texture analyzer, thickener
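The linear viscosity-versus-force relationship described above can be sketched with an ordinary least-squares fit. The paired values below are illustrative placeholders, not the study's measurements:

```python
import numpy as np

# Hypothetical paired measurements: maximum force peak F (g) from the
# extrusion-compression test and viscosity (mPa·s) from the viscometer.
F = np.array([10.0, 25.0, 60.0, 120.0, 200.0])
viscosity = np.array([150.0, 420.0, 980.0, 1900.0, 3100.0])

# Linear fit: viscosity ~ slope * F + intercept
slope, intercept = np.polyfit(F, viscosity, 1)

def predict_viscosity(f_peak):
    """Predict viscosity (mPa·s) from a measured force peak (g)."""
    return slope * f_peak + intercept

# Pearson correlation quantifies the strength of the linear relation
r = np.corrcoef(F, viscosity)[0, 1]
```

With real calibration data per thickener, such a fit would let the cheaper texture-analyzer reading stand in for a viscometer measurement.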
Procedia PDF Downloads 368
419 MCD-017: Potential Candidate from the Class of Nitroimidazoles to Treat Tuberculosis
Authors: Gurleen Kour, Mowkshi Khullar, B. K. Chandan, Parvinder Pal Singh, Kushalava Reddy Yumpalla, Gurunadham Munagala, Ram A. Vishwakarma, Zabeer Ahmed
Abstract:
New chemotherapeutic compounds against multidrug-resistant Mycobacterium tuberculosis (Mtb) are urgently needed to combat drug resistance in tuberculosis (TB). Apart from in-vitro potency against the target, physicochemical and pharmacokinetic properties play an imperative role in the process of drug discovery. We have identified novel nitroimidazole derivatives with potential activity against M. tuberculosis. One lead candidate, MCD-017, showed potent activity against the H37Rv strain (MIC = 0.5 µg/ml) and was further evaluated in the process of drug development. Methods: Basic physicochemical parameters like solubility and lipophilicity (Log P) were evaluated. Thermodynamic solubility was determined in PBS buffer (pH 7.4) using LC/MS-MS. The partition coefficient (Log P) of the compound was determined between octanol and phosphate buffered saline (PBS at pH 7.4) at 25°C by the microscale shake flask method. The compound followed Lipinski's rule of five, which is predictive of good oral bioavailability, and was further evaluated for metabolic stability. In-vitro metabolic stability was determined in rat liver microsomes. The hepatotoxicity of the compound was also determined in the HepG2 cell line. The in vivo pharmacokinetic profile of the compound after oral dosing was obtained using Balb/c mice. Results: The compound exhibited favorable solubility and lipophilicity. The physical and chemical properties of the compound served as the first determination of drug-like properties. The compound obeyed Lipinski's rule of five, with molecular weight < 500, number of hydrogen bond donors (HBD) < 5 and number of hydrogen bond acceptors (HBA) not more than 10. The Log P of the compound was less than 5, and therefore the compound is predicted to exhibit good absorption and permeation. Pooled rat liver microsomes were prepared from rat liver homogenate for measuring metabolic stability.
99% of the compound was not metabolized and remained intact. The compound did not exhibit cytotoxicity in HepG2 cells up to 40 µg/ml. The compound revealed a good pharmacokinetic profile at a dose of 5 mg/kg administered orally, with a half-life (t1/2) of 1.15 hours, Cmax of 642 ng/ml, clearance of 4.84 ml/min/kg and a volume of distribution of 8.05 l/kg. Conclusion: The emergence of multidrug-resistant (MDR) and extensively drug-resistant (XDR) tuberculosis emphasizes the need for novel drugs active against tuberculosis, and physicochemical and pharmacokinetic properties must be evaluated in the early stages of drug discovery to reduce the attrition associated with poor drug exposure. In summary, MCD-017 may be considered a good candidate for further preclinical and clinical evaluations.
Keywords: mycobacterium tuberculosis, pharmacokinetics, physicochemical properties, hepatotoxicity
Procedia PDF Downloads 457
418 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement
Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes
Abstract:
Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative cyclist safety system based on radar technology designed to offer real-time collision risk warnings to cyclists. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on TI's AWR1843BOOST radar, utilizing a coarse classification approach distinguishing between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of clustering techniques, we propose a 2-level clustering approach. This approach builds on the state-of-the-art density-based spatial clustering of applications with noise (DBSCAN). The objective is to first cluster objects based on their velocity, then refine the analysis by clustering based on position. The initial level identifies groups of objects with similar velocities and movement patterns. The subsequent level refines the analysis by considering the spatial distribution of these objects; the clusters obtained from the first level serve as input for the second level of clustering. Our proposed technique surpasses the classical DBSCAN algorithm in terms of clustering quality metrics, including homogeneity, completeness, and V-score. Relevant cluster features are extracted and utilized to classify objects using an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our collected dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board.
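The two-level scheme described above (velocity first, then position) can be sketched with scikit-learn's DBSCAN. The point-cloud layout (x, y, radial velocity) and all parameter values are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def two_level_cluster(points, eps_v=0.5, eps_xy=1.0, min_samples=3):
    """Illustrative 2-level clustering of radar detections.

    points: array of shape (N, 3) with columns (x, y, radial_velocity);
    eps_v, eps_xy, min_samples are placeholder parameters.
    Level 1 groups detections by radial velocity; level 2 refines each
    velocity group by spatial position. Returns one label per point
    (-1 = noise).
    """
    labels = -np.ones(len(points), dtype=int)
    next_label = 0
    # Level 1: cluster on radial velocity only
    v_labels = DBSCAN(eps=eps_v, min_samples=min_samples).fit_predict(
        points[:, 2].reshape(-1, 1))
    for v in set(v_labels) - {-1}:
        idx = np.where(v_labels == v)[0]
        # Level 2: cluster each velocity group on (x, y) position
        xy_labels = DBSCAN(eps=eps_xy, min_samples=min_samples).fit_predict(
            points[idx, :2])
        for c in set(xy_labels) - {-1}:
            labels[idx[xy_labels == c]] = next_label
            next_label += 1
    return labels
```

Features of the resulting clusters (extent, mean velocity, point count) would then feed the SVM stage.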
The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving an impressive 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.
Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology
Procedia PDF Downloads 79
417 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments
Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz
Abstract:
Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological prediction at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, north of Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, where optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) metric values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that a hyperparameter related to the length of the input sequence contributes most significantly to the prediction performance. The findings suggest that input sequence lengths have a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter based on each catchment's characteristics, in line with the well-known "uniqueness of place" paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited attention in the field of streamflow prediction. Initially, it was set to 365 days to capture a full annual water cycle; later, limited systematic hyperparameter tuning using grid search suggested reducing it to 270 days.
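The role of the input sequence length can be seen directly in how training samples are built for an LSTM: each sample pairs a window of the past `seq_len` time steps with the next streamflow value. A minimal sketch in plain NumPy (names and values are illustrative, not the study's pipeline):

```python
import numpy as np

def make_sequences(series, seq_len):
    """Slice a 1-D time series into LSTM training samples.

    Returns X of shape (n_samples, seq_len), the past seq_len steps,
    and y of shape (n_samples,), the value at the next step. Choosing
    seq_len (e.g. 365 vs. 270 days, or its hourly analogue) is exactly
    the hyperparameter discussed above.
    """
    if seq_len >= len(series):
        raise ValueError("seq_len must be shorter than the series")
    X = np.stack([series[i:i + seq_len]
                  for i in range(len(series) - seq_len)])
    y = series[seq_len:]
    return X, y

# A longer seq_len gives each sample more history but fewer samples:
flow = np.arange(100, dtype=float)  # stand-in for an hourly streamflow record
X, y = make_sequences(flow, seq_len=24)
```

Tuning `seq_len` per catchment, as the abstract argues, amounts to regenerating these windows and retraining for each candidate length.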
However, despite the significance of this hyperparameter in hydrological predictions, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
Keywords: LSTMs, streamflow, hyperparameters, hydrology
Procedia PDF Downloads 69
416 EverPro as the Missing Piece in the Plant Protein Portfolio to Aid the Transformation to Sustainable Food Systems
Authors: Aylin W Sahin, Alice Jaeger, Laura Nyhan, Gregory Belt, Steffen Münch, Elke K. Arendt
Abstract:
Our current food systems cause an increase in malnutrition, resulting in more people being overweight or obese in the Western world. Additionally, our natural resources are under enormous pressure, and greenhouse gas emissions increase yearly, contributing significantly to climate change. Hence, transforming our food systems is of the highest priority. Plant-based food products have a lower environmental impact than their animal-based counterparts, representing a more sustainable protein source. However, most plant-based protein ingredients, such as soy and pea, lack indispensable amino acids and are extremely limited in their functionality and, thus, in their food application potential. They are known to have low solubility in water and to change their properties during processing. The low solubility is the biggest challenge in the development of milk alternatives, leading to inferior protein content and protein quality in the dairy alternatives on the market. Moreover, plant-based protein ingredients often possess an off-flavour, which makes them less attractive to consumers. EverPro, a plant-protein isolate derived from brewers' spent grain, the most abundant by-product of the brewing industry, represents the missing piece in the plant protein portfolio. With a protein content of >85%, it is of high nutritional value, including all indispensable amino acids, which allows it to close the protein quality gap of plant proteins. Moreover, it possesses high techno-functional properties: it is fully soluble in water (101.7 ± 2.9%), has a high fat absorption capacity (182.4 ± 1.9%), and a foaming capacity superior to soy or pea protein. This makes EverPro suitable for a vast range of food applications. Furthermore, it does not cause changes in viscosity during heating and cooling of dispersions, such as beverages.
Besides its outstanding nutritional and functional characteristics, the production of EverPro has a much lower environmental impact than dairy or other plant protein ingredients. Life cycle assessment showed that EverPro has the lowest impact on global warming compared to soy protein isolate, pea protein isolate, whey protein isolate, and egg white powder. It also contributes significantly less to freshwater eutrophication, marine eutrophication, and land use than the protein sources mentioned above. EverPro is a prime example of a sustainable ingredient, and the type of plant protein the food industry has been waiting for: nutritious, multi-functional, and environmentally friendly.
Keywords: plant-based protein, upcycled, brewers' spent grain, low environmental impact, highly functional ingredient
Procedia PDF Downloads 80
415 Spatial Analysis in the Impact of Aquifer Capacity Reduction on Land Subsidence Rate in Semarang City between 2014-2017
Authors: Yudo Prasetyo, Hana Sugiastu Firdaus, Diyanah Diyanah
Abstract:
The lack of clean water supply in several big cities in Indonesia is a major problem in the development of urban areas, and in the city of Semarang the population density and the growth of physical development are very high. Continuous, large-scale extraction of groundwater (aquifer) can result in a drastic year-by-year decline in aquifer supply, especially given the intensity of aquifer use for household needs and industrial activities. This is worsened by the land subsidence phenomenon in some areas of Semarang. Therefore, dedicated research is needed to determine the spatial correlation between the decline in aquifer capacity and the land subsidence phenomenon, and to confirm that land subsidence can be caused by a loss of pressure balance below the land surface. One method to observe the correlation pattern between the two phenomena is the application of remote sensing technology based on radar and optical satellites. Applying the Differential Interferometric Synthetic Aperture Radar (DInSAR) or Small Baseline Subset (SBAS) method to SENTINEL-1A imagery acquired in the 2014-2017 period will give a proper pattern of land subsidence. These results will be spatially correlated with the aquifer-decline pattern over the same time period. Survey results from 8 monitoring wells deeper than 100 m will be used to observe the multi-temporal pattern of change in aquifer capacity. In addition, the aquifer capacity pattern will be validated against 2 groundwater maps from observations of the Ministry of Energy and Mineral Resources (ESDM) in Semarang. Spatial correlation studies of the land subsidence and aquifer capacity patterns will be conducted using overlay and statistical methods.
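The DInSAR processing mentioned above ultimately converts differential interferometric phase into line-of-sight displacement. A minimal sketch of that standard conversion, assuming Sentinel-1's C-band wavelength (~5.55 cm) and one common sign convention (both are assumptions, not details from the abstract):

```python
import math

SENTINEL1_WAVELENGTH_M = 0.0555  # approximate C-band radar wavelength

def phase_to_los_displacement(delta_phase_rad,
                              wavelength_m=SENTINEL1_WAVELENGTH_M):
    """Convert differential interferometric phase (radians) to
    line-of-sight displacement (metres).

    Uses d = -(wavelength / (4 * pi)) * delta_phi: one full fringe
    (2*pi of phase) corresponds to half a wavelength of motion along
    the line of sight. The sign convention (negative = away from the
    satellite, i.e. subsidence for a descending look) varies between
    processors and is assumed here.
    """
    return -wavelength_m / (4.0 * math.pi) * delta_phase_rad

# One fringe of phase corresponds to about 2.8 cm of LOS motion
d = phase_to_los_displacement(2.0 * math.pi)
```

Time series of such displacements per pixel are what SBAS stacks into subsidence-rate maps.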
The results of this correlation will show how strongly the decrease in groundwater capacity influences the distribution and intensity of land subsidence in Semarang. In addition, the results will be analyzed with respect to geological aspects such as hydrogeological parameters, soil types, aquifer types, and geological structures. The outcome of this study will be a correlation map between aquifer capacity decline and land subsidence in the city of Semarang over the period 2014-2017, which will hopefully help the authorities of Semarang in future spatial planning.
Keywords: aquifer, differential interferometric synthetic aperture radar (DInSAR), land subsidence, small baseline subset (SBAS)
Procedia PDF Downloads 182
414 Multiple Primary Pulmonary Meningiomas: A Case Report
Authors: Wellemans Isabelle, Remmelink Myriam, Foucart Annick, Rusu Stefan, Compère Christophe
Abstract:
Primary pulmonary meningioma (PPM) is a very rare tumor, and its occurrence has been reported only sporadically. Multiple PPMs are even more exceptional, and herein we report, to the best of our knowledge, the fourth case, focusing on the clinicopathological features of the tumor. Moreover, the possible relationship between the use of progesterone-only contraceptives and the development of these neoplasms is discussed. Case Report: We report the case of a 51-year-old female presenting three solid pulmonary nodules, localized in the right upper lobe, middle lobe, and left lower lobe, described as incidental findings on computed tomography (CT) during a pre-bariatric surgery check-up. The patient reported no drinking or smoking history, and the physical exam was unremarkable except for obesity. The lesions ranged in size between 6 and 24 mm and presented as solid nodules with lobulated contours. The largest lesion, situated in the middle lobe, had mild fluorodeoxyglucose (FDG) uptake on F-18 FDG positron emission tomography (PET)/CT, highly suggestive of primary lung neoplasm. For pathological assessment, video-assisted thoracoscopic middle lobectomy and wedge resection of the right upper nodule were performed. Histological examination revealed a relatively well-circumscribed solid proliferation of bland meningothelial cells growing in whorls and lobular nests, presenting intranuclear pseudo-inclusions and psammoma bodies. No signs of anaplasia were observed. The meningothelial cells diffusely expressed vimentin, focally expressed progesterone receptors, and were negative for epithelial markers (cytokeratin (CK) AE1/AE3, CK7, CK20, epithelial membrane antigen (EMA)), neuroendocrine markers (synaptophysin, chromogranin, CD56) and estrogen receptors. The Ki-67 proliferation index was low (<5%). Metastatic meningioma was ruled out by brain and spine magnetic resonance imaging (MRI) scans.
The third lesion, localized in the left lower lobe, was followed up and resected three years later because of its slow but significant growth (14 mm to 16 mm), alongside two new infracentimetric lesions. These three lesions showed a morphological and immunohistochemical profile similar to the previously resected lesions. The patient was disease-free one year after the last surgery. Discussion: Although PPMs are mostly benign and slow-growing tumors with an excellent prognosis, they do not present specific radiological characteristics, and it is difficult to differentiate them from other lung tumors; histopathologic examination is essential. Aggressive behavior is associated with atypical or anaplastic features (WHO grades II-III). The etiology is still uncertain, and different mechanisms have been proposed. A causal connection between sex hormones and meningothelial proliferation has long been suspected, and the few studies examining progesterone-only contraception and meningioma risk have all suggested an association. In line with this, our patient was treated with levonorgestrel, a progesterone agonist, via an intra-uterine device (IUD). Conclusions: PPM, defined by the typical histological and immunohistochemical features of meningioma in the lungs and the absence of central nervous system lesions, is an extremely rare neoplasm, mainly solitary and of indolent growth. Because of the unspecific radiologic findings, it should always be considered in the differential diagnosis of lung neoplasms. Regarding multiple PPM, only three cases are reported in the literature, and to the best of our knowledge this is the first described in a woman treated with a progesterone-only IUD.
Keywords: pulmonary meningioma, multiple meningioma, meningioma, pulmonary nodules
Procedia PDF Downloads 114
413 Computational Elucidation of β-endo-Acetylglucosaminidase (LytB) Inhibition by Kaempferol, Apigenin, and Quercetin in Streptococcus pneumoniae: Anti-Pneumonia Mechanism
Authors: Singh Divya, Rohan Singh, Anjana Pandey
Abstract:
Reviewers' Comments: The study provides valuable insights into the anti-pneumonia properties of flavonoids against LytB. The authors could further validate the findings through in vitro studies and consider exploring combination therapies for enhanced efficacy. Response: Thank you for your valuable comments. The study has since been extended with experimental validation of the in-silico findings. Using the Streptococcus pneumoniae D39 strain, the anti-pneumonia effects of kaempferol, quercetin, and apigenin were examined at concentrations ranging from 9 ug/ml to 200 ug/ml. The results show that kaempferol had the highest cytotoxic effect against S. pneumoniae (72.1% inhibition at 40 ug/ml) compared to apigenin and quercetin. Treatment of S. pneumoniae with a concoction of kaempferol, quercetin, and apigenin was also performed; a concentration of 200 ug/ml was the most effective, achieving 75% inhibition. As S. pneumoniae D39 is a virulent encapsulated strain, the capsule interferes with the uptake of large-sized drug formulations. For instance, when S. pneumoniae D39 was treated with a kaempferol and gold nano urchin (GNU) formulation, the large size of the GNU reduced the cytotoxic effect of kaempferol (27%). To achieve a near-100% cytotoxic effect on the MDR S. pneumoniae D39 strain, the study will target the development of kaempferol-engineered gold nano-urchin conjugates, in which small gold nanocrystals (less than or equal to 5 nm) are decorated with hydroxyl, sulfhydryl, carboxyl, and amine groups. This approach is expected to enhance the anti-pneumonia effect of kaempferol (a polyhydroxylated flavonoid). The study will also examine the interactions among the lung epithelial cell line (A549), kaempferol-engineered gold nano urchins, and S. pneumoniae, to explore the colonization, invasion, and biofilm formation of S. pneumoniae on A549 cells resembling the upper respiratory surface of humans.
Keywords: Streptococcus pneumoniae, β-endo-Acetylglucosaminidase, apigenin, quercetin, kaempferol, molecular dynamics simulation, interactome study, GROMACS
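The inhibition percentages quoted above are conventionally computed from growth readings of treated cultures relative to an untreated control. The abstract does not give the raw measurements or the exact formula used, so the following Python sketch uses invented OD600 readings and the standard relative-growth definition as an illustration only:

```python
def percent_inhibition(od_control, od_treated):
    """Growth inhibition relative to an untreated control, as a percentage."""
    if od_control <= 0:
        raise ValueError("control OD must be positive")
    return (1.0 - od_treated / od_control) * 100.0

# Hypothetical OD600 readings for an untreated control and a
# kaempferol-treated culture (values are illustrative, not the study's data).
control = 0.90
treated = 0.25
print(f"{percent_inhibition(control, treated):.1f}% inhibition")
```

A reading of 0.25 against a control of 0.90 corresponds to roughly 72% inhibition, the same order as the kaempferol result reported above.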
412 Simo-syl: A Computer-Based Tool to Identify Language Fragilities in Italian Pre-Schoolers
Authors: Marinella Majorano, Rachele Ferrari, Tamara Bastianello
Abstract:
Recent technological advances allow innovative, multimedia, screen-based assessment tools to be applied to test children's language and early literacy skills, monitor their growth over the preschool years, and test their readiness for primary school. A computer-based assessment tool offers several advantages over paper-based tools. Firstly, computer-based tools that use games, videos, and audio may be more motivating and engaging for children, especially for those with language difficulties. Secondly, computer-based assessments are generally less time-consuming than traditional paper-based assessments: this makes them less demanding for children and provides clinicians, researchers, and teachers with the opportunity to test children multiple times over the same school year and thus monitor their language growth more systematically. Finally, while paper-based tools require offline coding, computer-based tools can calculate scores automatically, producing less subjective evaluations of the assessed skills and providing immediate feedback. Nonetheless, using computer-based assessment tools to test meta-phonological and language skills in children is not yet common practice in Italy. The present contribution aims to estimate the internal consistency of a computer-based assessment (the Simo-syl assessment). Sixty-three Italian pre-schoolers aged between 4;10 and 5;9 years were tested at the beginning of the last year of preschool through paper-based standardised tools on their lexical (Peabody Picture Vocabulary Test), morpho-syntactical (Grammar Repetition Test for Children), meta-phonological (Meta-Phonological skills Evaluation test), and phono-articulatory skills (non-word repetition). The same children were tested through the Simo-syl assessment on their phonological and meta-phonological skills (e.g., recognising syllables and vowels and reading syllables and words).
The internal consistency of the computer-based tool was acceptable (Cronbach's alpha = .799). Children's scores on the paper-based assessments were correlated with their scores on each task of the computer-based assessment. Significant positive correlations emerged between all the tasks of the computer-based assessment and the scores obtained in the CMF (r = .287 - .311, p < .05) and in the correct sentences in the RCGB (r = .360 - .481, p < .01); the non-word repetition standardised test correlated significantly with the reading tasks only (r = .329 - .350, p < .05). Further tasks should be included in the current version of Simo-syl to provide a comprehensive and multi-dimensional approach to assessing children. Nevertheless, such a tool represents a good opportunity for teachers to identify language-related problems early, even in the school environment.
Keywords: assessment, computer-based, early identification, language-related skills
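Cronbach's alpha, the internal-consistency statistic reported above (alpha = .799), is computed from an item-score matrix as k/(k-1) * (1 - sum of item variances / variance of the summed scale). A minimal Python sketch, using made-up scores rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Illustrative scores for 5 children on 3 tasks (not the study's data).
scores = [[3, 4, 3],
          [2, 2, 1],
          [4, 5, 5],
          [1, 2, 2],
          [3, 3, 4]]
print(round(cronbach_alpha(scores), 3))
```

Values above roughly .7 are conventionally read as acceptable internal consistency, which is the threshold the abstract's .799 clears.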
411 Estimating Age in Deceased Persons from the North Indian Population Using Ossification of the Sternoclavicular Joint
Authors: Balaji Devanathan, Gokul G., Raveena Divya, Abhishek Yadav, Sudhir K. Gupta
Abstract:
Background: Age estimation is a common problem in administrative settings, in medico-legal cases, and among athletes competing in different sports. Medico-legal questions of age arise in hospitals in cases of criminal abortion, consent to surgery or a general physical examination, infanticide, impotence, sterility, etc. Progress in medical imaging has benefited forensic anthropology in various ways, most notably in the area of determining bone age. Multi-slice computed tomography is an efficient method for studying the epiphyseal union and other differences in the body's bones and joints. No substantial database is available for the Indian population, so the authors performed this original study to obtain one. Methodology: The appearance and fusion of the ossification centres of the sternoclavicular joint were evaluated, and grades were assigned accordingly. Using MSCT scans, we examined the relationship between the age of the deceased and changes in the sternoclavicular joint during appearance and union in 500 cases (327 males and 173 females) in the age range of 0 to 25 years. Results: In our study, the ossification centre for the medial end of the clavicle first appeared at 18.5 years in the male group and 17.1 years in the female group. The ages of partial union were 20.4 and 20.2 years, respectively. The earliest age of complete fusion was 23 years for males and 22 years for females. The age range for fusion of the sternebrae into one was 11–24 years for females and 17–24 years for males. The fusion of the third and fourth sternebrae was completed by 11 years, and the fusions of the first and second and of the second and third sternebrae by 17 years. Furthermore, correlation and reliability analyses were carried out, which yielded significant results.
Conclusion: With numerous exceptions, the estimated values are consistent with many of the previously developed age charts. The variations may be caused by ethnic or regional heterogeneity in the ossification pattern of the population under study. The pattern of bone maturation did not differ significantly between the sexes. The study's age range was 0 to 25 years, and, for obvious reasons, the majority of cases fell in the last five years, i.e., between 20 and 25 years of age. This resulted in a comparatively smaller study population for the 12–18 age group, where age estimation is crucial because of current legal requirements. Dedicated PMCT research in this age range will be required to produce population-standard charts for age estimation. The medial end of the clavicle is one of several ossification foci being thoroughly investigated because they are challenging to assess with a traditional X-ray examination. Combining the two has been shown to give valid results when establishing that an individual is above eighteen years of age.
Keywords: age estimation, sternoclavicular joint, medial clavicle, computed tomography
410 Measurement System for Human Arm Muscle Magnetic Field and Grip Strength
Authors: Shuai Yuan, Minxia Shi, Xu Zhang, Jianzhi Yang, Kangqi Tian, Yuzheng Ma
Abstract:
The precise measurement of muscle activity is essential for understanding the function of various body movements. This work aims to develop a muscle magnetic field signal detection system based on mathematical analysis. Medical research has underscored that early detection of muscle atrophy, coupled with lifestyle adjustments such as dietary control and increased exercise, can significantly improve the management of muscle-related diseases. Currently, surface electromyography (sEMG) is widely employed in research as an early predictor of muscle atrophy. Nonetheless, the primary limitation of using sEMG to forecast muscle strength is its inability to directly measure the signals generated by muscles. Challenges arise from potential skin-electrode contact issues due to perspiration, leading to inaccurate signals or even signal loss; in addition, resistance and phase are significantly affected by adipose layers. The recent emergence of optically pumped magnetometers introduces a fresh avenue for bio-magnetic field measurement. These magnetometers possess high sensitivity and, unlike superconducting quantum interference devices (SQUIDs), obviate the need for a cryogenic environment. They detect muscle magnetic field signals in the range of tens to thousands of femtoteslas (fT), and their measurements remain unaffected by perspiration and adipose layers. Since their introduction, optically pumped atomic magnetometers have found extensive application in exploring the magnetic fields of organs, such as cardiac and brain magnetism. The optimal operation of these magnetometers necessitates an environment with an ultra-weak magnetic field. To achieve such an environment, researchers usually combine active magnetic compensation with passive magnetic shielding.
Passive magnetic shielding uses a shielding device built with high-permeability materials to attenuate the external magnetic field to a few nT. Compared with adding more shielding layers, coils that generate a reverse magnetic field to precisely compensate for the residual field are cheaper and more flexible. To attain even lower fields, compensation coils designed with the Biot-Savart law generate a counteracting magnetic field that eliminates the residual field. By solving the magnetic field expression at discrete points in the target region, the parameters that determine the current density distribution on the plane can be obtained through the conventional target field method. The current density is obtained from the partial derivative of the stream function, which can be represented by a combination of trigonometric functions. Mathematical optimization algorithms are introduced into the coil design to obtain the optimal current density distribution. A one-dimensional linear regression analysis was performed on the collected data, yielding a coefficient of determination R² of 0.9349 with a p-value of approximately 0. This statistical result indicates a stable relationship between the peak-to-peak value (PPV) of the muscle magnetic field signal and the magnitude of grip strength. This system is expected to become a widely used tool for healthcare professionals to gain deeper insights into the muscle health of their patients.
Keywords: muscle magnetic signal, magnetic shielding, compensation coils, trigonometric functions
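The Biot-Savart design step mentioned above can be illustrated numerically. The following Python sketch (not the authors' code; the current, radius, and segment count are arbitrary assumptions) sums the Biot-Savart contributions of the segments of a single circular coil and checks the field at the loop centre against the closed-form value B = mu0*I/(2R):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def loop_field_center(current, radius, n_segments=2000):
    """Axial B-field at the centre of a circular loop, summed segment by
    segment with the Biot-Savart law: dB = mu0*I/(4*pi) * dl x r_hat / r^2."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_segments, endpoint=False)
    dtheta = 2.0 * np.pi / n_segments
    # Segment positions and tangential dl vectors on the loop (z = 0 plane).
    x, y = radius * np.cos(theta), radius * np.sin(theta)
    dlx = -radius * np.sin(theta) * dtheta
    dly = radius * np.cos(theta) * dtheta
    # Vector from each segment to the centre is (-x, -y, 0), with |r| = radius;
    # dl x r has only a z-component: dlx*(-y) - dly*(-x).
    dbz = MU0 * current / (4.0 * np.pi) * (dlx * -y - dly * -x) / radius**3
    return dbz.sum()

I, R = 1.0, 0.1                  # 1 A through a 10 cm loop (illustrative)
numeric = loop_field_center(I, R)
analytic = MU0 * I / (2.0 * R)   # closed-form field at the loop centre
```

A real coil design would superpose many such loops (or a stream-function-derived winding) and optimize their currents so the summed field cancels the measured residual field at the target points.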
409 Role of Platelet Volume Indices in Diabetes Related Vascular Angiopathies
Authors: Mitakshara Sharma, S. K. Nema, Sanjeev Narang
Abstract:
Diabetes mellitus (DM) is a group of metabolic disorders characterized by chronic hyperglycaemia and long-term macrovascular and microvascular complications. Vascular complications arise from platelet hyperactivity and dysfunction, increased inflammation, altered coagulation, and endothelial dysfunction. A large proportion of patients with type II DM suffer from preventable vascular angiopathies, and there is a need for risk-factor modification and interventions to reduce the impact of these complications. The complications are attributed to platelet activation, recognised by an increase in platelet volume indices (PVI), including mean platelet volume (MPV) and platelet distribution width (PDW). The current study is a prospective analytical study conducted over 2 years. Of 1100 individuals, 930 fulfilled the inclusion criteria and were segregated into three groups on the basis of glycosylated haemoglobin (HbA1c): (a) diabetic, (b) non-diabetic, and (c) subjects with impaired fasting glucose (IFG), with 300 individuals in each of the IFG and non-diabetic groups and 330 in the diabetic group. The diabetic group was further divided into two groups on the basis of the presence or absence of known diabetes-related vascular complications. Samples for HbA1c and PVI were collected using ethylene diamine tetraacetic acid (EDTA) as anticoagulant and processed on a SYSMEX X-800i autoanalyser. The study revealed a gradual increase in PVI from non-diabetics to IFG to diabetics, with PVI markedly increased in diabetic patients. The MPV of diabetics, IFG subjects, and non-diabetics was (17.60 ± 2.04) fl, (11.76 ± 0.73) fl, and (9.93 ± 0.64) fl, and the PDW was (19.17 ± 1.48) fl, (15.49 ± 0.67) fl, and (10.59 ± 0.67) fl, respectively, with a significant p value of 0.00 and significant positive correlations (MPV-HbA1c r = 0.951; PDW-HbA1c r = 0.875).
The MPV of subjects with and without diabetes-related complications was (17.51 ± 0.39) fl and (15.14 ± 1.04) fl, and the PDW was (20.09 ± 0.98) fl and (18.96 ± 0.83) fl, respectively, with a significant p value of 0.00. There was a significant positive correlation between PVI and duration of diabetes across the groups, and a significant negative correlation between glycaemic levels and total platelet count (PC-HbA1c r = -0.164). This is a multi-parameter, comprehensive study with an adequately powered design. We conclude that PVI are extremely useful and important indicators of impending vascular complications in patients with deranged glycaemic control. The introduction of automated cell counters has made PVI available as routine parameters. PVI identify the larger, more active platelets that play an important role in the development of the micro- and macro-angiopathic complications of diabetes that lead to mortality and morbidity. PVI can therefore be used as cost-effective markers to predict and prevent impending vascular events in patients with diabetes mellitus, especially in developing countries like India. If incorporated into protocols for the management of diabetes, PVI could revolutionize care and curtail the ever-increasing cost of patient management.
Keywords: diabetes, IFG, HbA1C, MPV, PDW, PVI
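The MPV-HbA1c and PDW-HbA1c values reported above are Pearson correlation coefficients. A minimal sketch of the computation in Python, on invented paired values rather than the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / np.sqrt((xd**2).sum() * (yd**2).sum())

# Illustrative paired HbA1c (%) and MPV (fl) values (not the study's data).
hba1c = [5.2, 5.8, 6.5, 7.4, 8.1, 9.0, 10.2]
mpv   = [9.8, 10.1, 11.9, 13.5, 15.0, 16.8, 17.9]
print(round(pearson_r(hba1c, mpv), 3))
```

An r close to 1, as in the study's MPV-HbA1c result (0.951), indicates that platelet volume rises almost linearly with worsening glycaemic control.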
408 Surface Functionalization Strategies for the Design of Thermoplastic Microfluidic Devices for New Analytical Diagnostics
Authors: Camille Perréard, Yoann Ladner, Fanny D'Orlyé, Stéphanie Descroix, Vélan Taniga, Anne Varenne, Cédric Guyon, Michael. Tatoulian, Frédéric Kanoufi, Cyrine Slim, Sophie Griveau, Fethi Bedioui
Abstract:
The development of micro total analysis systems is of major interest for contaminant and biomarker analysis. As a lab-on-chip integrates all steps of an analysis procedure in a single device, analysis can be performed in an automated format with reduced time and cost, while maintaining performances comparable to those of conventional chromatographic systems. Moreover, these miniaturized systems are compatible with both field work and glovebox manipulation. This work is aimed at developing an analytical microsystem for trace and ultra-trace quantitation in complex matrices. The strategy consists in integrating a sample pretreatment step within the lab-on-chip through a confinement zone where selective ligands are immobilized for target extraction and preconcentration. Aptamers were chosen as selective ligands because of their high affinity for all types of targets (from small ions to viruses and cells) and their ease of synthesis and functionalization. This integrated target extraction and concentration step will be followed in the microdevice by an electrokinetic separation step and on-line detection. Polymers consisting of cyclic olefin copolymer (COC) or fluoropolymer (Dyneon THV) were selected as they are easy to mold, transparent in the UV-visible range, and highly resistant to solvents and extreme pH conditions. However, because of their low chemical reactivity, surface treatments are necessary. For the design of this miniaturized diagnostic device, we aimed at modifying the microfluidic system at two scales: (1) over the entire surface of the microsystem, to control surface hydrophobicity (so as to avoid any sample adsorption on the walls) and the fluid flows during electrokinetic separation, or (2) locally, so as to immobilize selective ligands (aptamers) on restricted areas for target extraction and preconcentration.
We developed different novel strategies for the surface functionalization of COC and Dyneon, based on plasma, chemical, and/or electrochemical approaches. In a first approach, plasma-induced immobilization of brominated derivatives was performed over the entire surface. Subsequent substitution of the bromine by an azide functional group led to covalent immobilization of ligands through a "click" chemistry reaction between azides and terminal alkynes. The COC and Dyneon materials were characterized at each step of the surface functionalization procedure by complementary techniques (contact angle, XPS, ATR) to evaluate the quality and homogeneity of the functionalization. With the objective of local (micrometric-scale) aptamer immobilization, we developed an original electrochemical strategy on an engraved Dyneon THV microchannel. Through local electrochemical carbonization, followed by adsorption of azide-bearing diazonium moieties and covalent linkage of alkyne-bearing aptamers through the click chemistry reaction, typical dimensions of the immobilization zones reached the 50 µm range. Other functionalization strategies, such as sol-gel encapsulation of aptamers, are currently under investigation and may also be suitable for the development of the analytical microdevice. The development of these functionalization strategies is the first crucial step in the design of the entire microdevice, as they allow the grafting of a large number of molecules for the development of new analytical tools in various domains such as the environment and healthcare.
Keywords: alkyne-azide click chemistry (CuAAC), electrochemical modification, microsystem, plasma bromination, surface functionalization, thermoplastic polymers
407 Household Perspectives and Resistance to Preventive Relocation in Flood Prone Areas: A Case Study in the Polwatta River Basin, Southern Sri Lanka
Authors: Ishara Madusanka, So Morikawa
Abstract:
Natural disasters, particularly floods, pose severe challenges globally, affecting both developed and developing countries. In many regions, especially in Asia, riverine floods are prevalent and devastating. Integrated flood management incorporates structural and non-structural measures, with preventive relocation emerging as a cost-effective and proactive strategy for areas repeatedly impacted by severe flooding. However, preventive relocation is often hindered by economic, psychological, social, and institutional barriers. This study investigates the factors influencing resistance to preventive relocation and evaluates the role of flood risk information in shaping relocation decisions through risk perception. A conceptual model was developed, incorporating the variables Flood Risk Information (FRI), Place Attachment (PA), Good Living Conditions (GLC), and Adaptation to Flooding (ATF), with Flood Risk Perception (FRP) serving as a mediating variable. The research was conducted in Welipitiya in the Polwatta river basin, Matara district, Sri Lanka, a region experiencing recurrent flood damage. An experimental design involving a structured questionnaire survey was utilized, with 185 households participating. The treatment group received flood risk information, including flood risk maps and historical data, while the control group did not. Data were collected in 2023 and analyzed using independent sample t-tests and partial least squares structural equation modeling (PLS-SEM). PLS-SEM was chosen for its ability to model latent variables, its capacity to handle complex relationships, and its suitability for exploratory research. Multi-group analysis (MGA) was used to assess variations across different flood risk areas. The findings indicate that flood risk information had a limited impact on flood risk perception and relocation decisions, though its effect was significant in specific high-risk areas.
Place attachment was a significant factor influencing relocation decisions across the sample. One potential reason for the limited impact of flood risk information on relocation decisions could be the lack of specificity in the information provided. The results suggest that while flood risk information alone may not significantly influence relocation decisions, it is crucial in specific contexts. Future studies and practitioners should focus on providing more detailed risk information and on addressing psychological factors such as place attachment to enhance preventive relocation efforts.
Keywords: flood risk communication, flood risk perception, place attachment, preventive relocation, structural equation modeling
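The independent sample t-tests mentioned above compare the mean responses of the treatment and control households. Since the survey data are not given, the sketch below implements the pooled-variance two-sample t statistic on invented 5-point Likert scores as an illustration only:

```python
import numpy as np

def independent_t(a, b):
    """Student's two-sample t statistic (pooled variance), as used to compare
    the means of a treatment group and a control group."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    # Pooled variance combines the two sample variances, weighted by df.
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

# Illustrative 5-point risk-perception scores (not the survey's data).
treatment = [4, 5, 3, 4, 5, 4, 3, 5]
control   = [3, 2, 4, 3, 2, 3, 3, 2]
t_stat = independent_t(treatment, control)
```

The statistic would then be compared against the t distribution with na + nb - 2 degrees of freedom to obtain a p-value; in practice one would use a statistics package rather than hand-rolled code.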
406 Wind Load Reduction Effect of Exterior Porous Skin on Facade Performance
Authors: Ying-Chang Yu, Yuan-Lung Lo
Abstract:
Building envelope design is one of the most popular design fields of the architectural profession nowadays. The main design trend in such systems is to highlight the designer's aesthetic intention in the outlook of the building project. Due to current façade design trends, building envelopes contain more and more layers of components, such as double-skin façades, photovoltaic panels, solar control systems, or even ornamental components. These exterior components are designed for various functional purposes. Most researchers focus on how these exterior elements should be structurally secured; few, however, consider that these elements could help improve the performance of the façade system itself. When exterior elements are deployed at large scale, they create an additional layer outside the original façade system and act like a porous interface that interferes with the aerodynamics of the façade surface at the micro-scale. A standard façade performance specification consists of water penetration, air infiltration rate, operation force, and component deflection ratio, and these key performances are mainly driven by the design wind load codified in local regulations. A design wind load is usually determined by the maximum wind pressure occurring on the surface due to the geometry or location of the building in extreme conditions. This research was designed to identify the air damping phenomenon in which micro-turbulence caused by a porous exterior layer reduces the surface wind load and thereby improves façade system performance. A series of wind tunnel tests on a dynamic pressure sensor array covered by porous exterior skins of various scales was conducted to verify the wind pressure reduction effect. The test specimens were designed to simulate a typical building with a two-metre extension offset from the building surface. Multiple porous exterior skins were prepared to replicate various surface opening ratios, which may cause different levels of damping.
This research adopted Pitot static tubes, thermal anemometers, and hot film probes to collect surface dynamic pressure data behind the porous skin. Turbulence and distributed resistance are the two main aerodynamic factors that reduce the actual wind pressure. Initial observations showed that the surface wind pressure reading was effectively reduced behind the porous media. In such cases, an actual building envelope system may benefit from a porous skin through the reduction of surface wind pressure, which may consequently improve the performance of the envelope system.
Keywords: multi-layer facade, porous media, facade performance, turbulence and distributed resistance, wind tunnel test
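A Pitot static tube, as used above, recovers wind speed from the difference between total and static pressure through the dynamic-pressure relation q = 1/2 * rho * v^2. A short sketch of the two-way conversion (the air density and speed below are illustrative assumptions, not the study's test conditions):

```python
import math

AIR_DENSITY = 1.225  # kg/m^3, standard air at sea level and 15 degrees C

def dynamic_pressure(velocity, rho=AIR_DENSITY):
    """Dynamic pressure q = 1/2 * rho * v^2, in Pa."""
    return 0.5 * rho * velocity**2

def pitot_velocity(p_total, p_static, rho=AIR_DENSITY):
    """Wind speed recovered from a Pitot static tube pressure difference."""
    return math.sqrt(2.0 * (p_total - p_static) / rho)

v = 20.0                         # m/s free-stream speed (illustrative)
q = dynamic_pressure(v)          # surface dynamic pressure in Pa
v_back = pitot_velocity(q, 0.0)  # recovers the original speed
```

A porous skin that lowers the measured dynamic pressure behind it therefore directly lowers the effective wind load the inner façade must be designed for.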
405 Hydrological Challenges and Solutions in the Nashik Region: A Multi-Tracer and Geochemistry Approach to Groundwater Management
Authors: Gokul Prasad, Pennan Chinnasamy
Abstract:
The degradation of groundwater resources, attributed to factors such as excessive abstraction and contamination, has emerged as a global concern. This study examines the stable isotopes of water (δ18O and δ2H) in a hard-rock aquifer situated in the Upper Godavari watershed, an agriculturally rich region of India underlain by basalt. The high groundwater draft (> 90%) poses significant risks; understanding groundwater sources, flow patterns, and their environmental impacts is therefore pivotal for researchers and water managers. The region has faced five droughts in the past 20 years, four of which are categorized as medium. Recharge rates are variable and contribute very little to groundwater. The rainfall pattern shows vast variability, with the region receiving seasonal monsoon rainfall for just four months and minimal rainfall for the rest of the year. This research closely monitored monsoon precipitation inputs and examined spatial and temporal fluctuations in δ18O and δ2H in both groundwater and precipitation. By discerning individual recharge events during monsoons, it became possible to identify periods when evaporation led to deterioration of groundwater quality, characterized by elevated salinity and stable isotope values in the return flow. The locally derived meteoric water line (LMWL) (δ2H = 6.72 * δ18O + 1.53, r² = 0.6) provided valuable insights into the groundwater system. The leftward shift of the Nashik LMWL relative to the global meteoric water line (GMWL) indicated groundwater evaporation (-33 ‰), supported by spatial variations in electrical conductivity (EC) data. Groundwater in the eastern and northern watershed areas exhibited higher salinity (> 3000 uS/cm) over more than 40% of the area compared to the western and southern regions, owing to geological disparities (alluvium vs. basalt). The findings emphasize meteoric precipitation as the primary groundwater source in the watershed.
However, spatial variations in isotope values and chemical constituents indicate other contributing factors, including evaporation, groundwater source type, and natural or anthropogenic (specifically agricultural and industrial) contaminants. The study therefore recommends focused hydro-geochemistry and isotope analysis in areas with strong agricultural and industrial influence to develop holistic groundwater management plans that protect the quantity and quality of the groundwater aquifers.
Keywords: groundwater quality, stable isotopes, salinity, groundwater management, hard-rock aquifer
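The LMWL quoted above (δ2H = 6.72 * δ18O + 1.53) is an ordinary least-squares fit of δ2H against δ18O. A minimal sketch of such a fit (the isotope pairs below are invented for illustration; only the fitting procedure reflects the study's method):

```python
import numpy as np

def fit_lmwl(d18o, d2h):
    """Least-squares slope and intercept for a local meteoric water line
    d2H = slope * d18O + intercept."""
    slope, intercept = np.polyfit(np.asarray(d18o, float),
                                  np.asarray(d2h, float), 1)
    return slope, intercept

# Illustrative isotope pairs (per mil, VSMOW); not the study's measurements.
d18o = [-8.0, -6.5, -5.0, -4.2, -3.1, -2.0]
d2h = [-52.0, -42.5, -31.0, -27.5, -19.0, -12.0]
slope, intercept = fit_lmwl(d18o, d2h)
```

A fitted slope well below the GMWL slope of 8, as in the study's value of 6.72, is the standard signature of evaporative enrichment of the recharging water.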