Search results for: forming limit curve (FLC)
654 Clinical Empathy: The Opportunity to Offer Optimal Treatment to People with Serious Illness
Authors: Leonore Robieux, Franck Zenasni, Marc Pocard, Clarisse Eveno
Abstract:
Empirical data in health psychology show the need to consider doctor-patient communication and its positive impact on outcomes such as patients' satisfaction, treatment adherence, and physical and psychological wellbeing. In this line, the present research aims to define the role and determinants of effective doctor-patient communication during the treatment of patients with serious illness (peritoneal carcinomatosis, PC). We carried out a prospective longitudinal study including patients treated for peritoneal carcinomatosis of various origins. From November 2016 to date, data were collected using validated questionnaires at two evaluation points: one month before surgery (T0) and one month after (T1). Patients reported (a) their anxiety and depression levels, (b) their standardized and individualized quality of life, and (c) how they perceived the communication, attitude and empathy of the surgeon. 105 volunteer patients (mean age = 58.18 years, SD = 10.24, 62.2% female) participated in the study. PC arose from rare diseases (14%), colorectal (38%), eso-gastric (24%) and ovarian (8%) cancer. Three groups were defined according to the severity of the pathology and the treatment offered: (1) major surgical treatment with the goal of healing (53%), (2) repeated palliative surgical treatment (17%), and (3) patients not eligible for surgical treatment, receiving a palliative approach only (30%). Results are presented according to Baron and Kenny's recommendations. The regression analyses show that only depression and anxiety are sensitive to the communication and empathy of the surgeon. The main results show that good communication and a high level of empathy at T0 and T1 limit patients' depression and anxiety at T1. Results also indicate that the severity of the disease modulates this positive impact of communication: the better the communication, the lower the patients' levels of depression and anxiety. 
This effect is stronger for patients treated for the most severe disease. These results confirm that, even in the case of severe disease, good communication between patient and physician remains a significant factor in promoting the well-being of patients. More specific training needs to be developed to promote empathic care.
Keywords: clinical empathy, determinants, healthcare, psychological wellbeing
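The Baron and Kenny procedure referenced above boils down to three nested regressions: total effect, predictor-to-mediator, and direct effect with the mediator included. A minimal sketch on synthetic data follows; the variable names, effect sizes and noise are illustrative assumptions, not the study's data.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares with intercept; returns [b0, b1, ...]."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 105  # sample size matching the study
communication = rng.normal(0, 1, n)  # perceived surgeon communication (T0)
# mediator and outcome constructed so that a mediation structure exists
empathy = 0.6 * communication + rng.normal(0, 1, n)
anxiety = -0.5 * communication - 0.4 * empathy + rng.normal(0, 1, n)

# Baron & Kenny steps:
# 1) predictor -> outcome (total effect c)
c = ols(communication.reshape(-1, 1), anxiety)[1]
# 2) predictor -> mediator (path a)
a = ols(communication.reshape(-1, 1), empathy)[1]
# 3) predictor + mediator -> outcome (direct effect c', path b)
coefs = ols(np.column_stack([communication, empathy]), anxiety)
c_prime, b = coefs[1], coefs[2]
print(abs(c_prime) < abs(c))  # direct effect shrinks when mediator included
```

When the direct effect `c_prime` is smaller in magnitude than the total effect `c`, partial mediation is indicated, which is the pattern the recommendations test for.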
Procedia PDF Downloads 121
653 Thermal Behaviour of a Low-Cost Passive Solar House in Somerset East, South Africa
Authors: Ochuko K. Overen, Golden Makaka, Edson L. Meyer, Sampson Mamphweli
Abstract:
Low-cost housing provided for low-income people in South Africa is characterized by poor thermal performance. This is due to inferior craftsmanship, with no regard for energy-efficient design during the building process. On average, South African households spend 14% of their total monthly income on energy needs, in particular space heating, which is higher than the international benchmark of 10% for energy poverty. Adopting energy-efficient passive solar design strategies and superior thermal building materials can create a stable, thermally comfortable indoor environment, thereby reducing energy consumption for space heating. The aim of this study is to analyse the thermal behaviour of a low-cost house integrated with passive solar design features. A low-cost passive solar house with superstructure fly-ash brick walls was designed and constructed in Somerset East, South Africa. Indoor and outdoor meteorological parameters of the house were monitored for a period of one year. The ASTM E741-11 standard was adopted to perform a ventilation test in the house. In summer, the house was found to be thermally comfortable for 66% of the period monitored, while in winter it was about 79%. The ventilation heat flow rates of the windows and doors were found to be 140 J/s and 68 J/s, respectively. Air leakage through cracks and openings in the building envelope was 0.16 m³/m²h, with a corresponding ventilation heat flow rate of 24 J/s. The indoor carbon dioxide concentration monitored overnight was 0.248%, which is less than the maximum limit of 0.500%. The predicted percentage dissatisfied analysis shows that 86% of the occupants would express thermal satisfaction with the indoor environment. With good operation, the house can provide a well-ventilated, thermally comfortable and naturally luminous indoor environment for the occupants. 
Incorporating passive solar design in low-cost housing can be one of the immediate and long-term solutions to the energy crisis facing South Africa.
Keywords: energy efficiency, low-cost housing, passive solar design, rural development, thermal comfort
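The ventilation heat flow rates quoted above (in J/s, i.e. watts) follow from the sensible heat carried by an air stream, Q = ρ·V̇·c_p·ΔT. A minimal sketch follows; the air properties and the example flow rate and temperature difference are assumed for illustration, not the study's measured values.

```python
# Sensible ventilation heat flow: Q = rho * V_dot * c_p * dT
RHO = 1.2     # air density, kg/m^3 (assumed)
C_P = 1005.0  # specific heat capacity of air, J/(kg*K) (assumed)

def ventilation_heat_flow(v_dot_m3_per_h, delta_T):
    """Heat carried by a ventilation air stream, in J/s (i.e. W)."""
    v_dot = v_dot_m3_per_h / 3600.0  # convert m^3/h to m^3/s
    return RHO * v_dot * C_P * delta_T

# Illustrative case: 100 m^3/h of air across a 5 K indoor-outdoor difference
q = ventilation_heat_flow(100.0, 5.0)
print(round(q, 1))  # 167.5 (W)
```

The measured values (140 J/s for windows, 68 J/s for doors, 24 J/s for leakage) correspond to different flow rates and temperature differences than this example.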
Procedia PDF Downloads 260
652 Smart Contracts: Bridging the Divide Between Code and Law
Authors: Abeeb Abiodun Bakare
Abstract:
The advent of blockchain technology has birthed a revolutionary innovation: smart contracts. These self-executing contracts, encoded within the immutable ledger of a blockchain, hold the potential to transform the landscape of traditional contractual agreements. This research paper embarks on a comprehensive exploration of the legal implications surrounding smart contracts, delving into their enforceability and their profound impact on traditional contract law. The first section of this paper delves into the foundational principles of smart contracts, elucidating their underlying mechanisms and technological intricacies. By harnessing the power of blockchain technology, smart contracts automate the execution of contractual terms, eliminating the need for intermediaries and enhancing efficiency in commercial transactions. However, this technological marvel raises fundamental questions regarding legal enforceability and compliance with traditional legal frameworks. Moving beyond the realm of technology, the paper proceeds to analyze the legal validity of smart contracts within the context of traditional contract law. Drawing upon established legal principles, such as offer, acceptance, and consideration, we examine the extent to which smart contracts satisfy the requirements for forming a legally binding agreement. Furthermore, we explore the challenges posed by jurisdictional issues as smart contracts transcend physical boundaries and operate within a decentralized network. Central to this analysis is the examination of the role of arbitration and dispute resolution mechanisms in the context of smart contracts. While smart contracts offer unparalleled efficiency and transparency in executing contractual terms, disputes inevitably arise, necessitating mechanisms for resolution. 
We investigate the feasibility of integrating arbitration clauses within smart contracts, exploring the potential for decentralized arbitration platforms to streamline dispute resolution processes. Moreover, this paper explores the implications of smart contracts for traditional legal intermediaries, such as lawyers and judges. As smart contracts automate the execution of contractual terms, the role of legal professionals in contract drafting and interpretation may undergo significant transformation. We assess the implications of this paradigm shift for legal practice and the broader legal profession. In conclusion, this research paper provides a comprehensive analysis of the legal implications surrounding smart contracts, illuminating the intricate interplay between code and law. While smart contracts offer unprecedented efficiency and transparency in commercial transactions, their legal validity remains subject to scrutiny within traditional legal frameworks. By navigating the complex landscape of smart contract law, we aim to provide insights into the transformative potential of this groundbreaking technology.
Keywords: smart-contracts, law, blockchain, legal, technology
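The "self-executing" character discussed above can be illustrated conceptually (in ordinary Python, not an actual blockchain language) as contract terms encoded as a deterministic state machine that settles automatically once its conditions hold, with no intermediary. All names here are hypothetical.

```python
# Conceptual sketch only: a "smart contract" as code whose payment logic
# executes itself once the encoded condition (delivery confirmed) is met.
class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.balances = {buyer: 0, seller: 0}

    def confirm_delivery(self, caller):
        # only the buyer may trigger the condition, mirroring on-chain
        # access control
        if caller != self.buyer:
            raise PermissionError("only the buyer may confirm delivery")
        self.delivered = True
        self._settle()

    def _settle(self):
        # self-execution: funds transfer automatically, no intermediary
        if self.delivered:
            self.balances[self.seller] += self.amount

c = EscrowContract("alice", "bob", 100)
c.confirm_delivery("alice")
print(c.balances["bob"])  # 100
```

The legal questions the paper raises (offer, acceptance, consideration, jurisdiction) concern whether logic of this kind, once deployed immutably, constitutes a binding agreement.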
Procedia PDF Downloads 43
651 Unraveling Language Contact through Syntactic Dynamics of ‘Also’ in Hong Kong and Britain English
Authors: Xu Zhang
Abstract:
This article unveils an indicator of language contact between English and Cantonese in one of the Outer Circle Englishes, Hong Kong (HK) English, through an empirical investigation of 1000 tokens from the Global Web-based English (GloWbE) corpus, employing frequency analysis and logistic regression analysis. Cantonese, and Chinese in general, is contextually marked by an integral underlying thinking pattern: Chinese speakers rely on semantic context over syntactic rules and lexical forms. This linguistic trait carries over to their use of English, affording greater flexibility to formal elements in constructing English sentences. The study focuses on the syntactic positioning of the focusing subjunct ‘also’, a linguistic element used to add new or contrasting prominence to specific sentence constituents. English generally allows flexibility in the relative position of ‘also’, although there is a preference for close marking relationships. This article shifts attention to Hong Kong, where Cantonese and English converge, and ‘also’ finds counterparts in Cantonese ‘jaa’ and Mandarin ‘ye’. Employing a corpus-based, data-driven method, we investigate the syntactic position of ‘also’ in both HK and GB English. The study aims to ascertain whether HK English exhibits greater ‘syntactic freedom’, allowing a more distant marking relationship with ‘also’ compared to GB English. The analysis involves a random extraction of 500 samples each of HK and GB English from the GloWbE corpus, forming a dataset (N = 1000). Exclusions are made for cases where ‘also’ functions as an additive conjunct or serves as a copulative adverb, as well as sentences lacking sufficient indication that ‘also’ functions as a focusing particle. The final dataset comprises 820 tokens, with 416 for GB and 404 for HK, annotated according to the focused constituent and the relative position of ‘also’. 
Frequency analysis reveals significant differences in the relative position of ‘also’ and marking relationships between HK and GB English. Regression analysis indicates a preference in HK English for a distant marking relationship between ‘also’ and its focused constituent. Notably, the subject and other constituents emerge as significant predictors of a distant position for ‘also’. Together, these findings underscore the nuanced linguistic dynamics of HK English and contribute to our understanding of language contact. They suggest that future pedagogical practice should incorporate syntactic variation within English varieties, facilitating learners' effective communication in diverse English-speaking environments and enhancing their intercultural communication competence.
Keywords: also, Cantonese, English, focus marker, frequency analysis, language contact, logistic regression analysis
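With a single binary predictor (variety), the logistic regression described above reduces to the log odds ratio of a 2×2 placement table. A sketch with made-up counts follows; the real annotated frequencies are not reproduced in the abstract, so these numbers are purely illustrative.

```python
import math

# Hypothetical counts (illustrative only, not the study's data):
# rows = variety, cols = placement of 'also' relative to its focus
counts = {"HK": {"distant": 160, "close": 244},   # 404 HK tokens
          "GB": {"distant": 100, "close": 316}}   # 416 GB tokens

def log_odds(d, c):
    return math.log(d / c)

# With one binary predictor, the logistic-regression coefficient for
# "variety = HK" equals the log odds ratio of the 2x2 table.
beta_hk = (log_odds(counts["HK"]["distant"], counts["HK"]["close"])
           - log_odds(counts["GB"]["distant"], counts["GB"]["close"]))
odds_ratio = math.exp(beta_hk)
print(round(odds_ratio, 2))  # > 1 would indicate HK favours distant placement
```

A full replication would add the focused constituent (subject vs. other) as a second predictor, matching the significant predictors reported above.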
Procedia PDF Downloads 51
650 Aerodynamic Design Optimization Technique for a Tube Capsule That Uses an Axial Flow Air Compressor and an Aerostatic Bearing
Authors: Ahmed E. Hodaib, Muhammed A. Hashem
Abstract:
High-speed transportation has become a growing concern. To increase high-speed efficiency and minimize the power consumption of a vehicle, we need to eliminate friction with the ground and minimize the aerodynamic drag acting on the vehicle. Due to the complexity and high power requirements of electromagnetic levitation, we make use of the air in front of the capsule, which produces the majority of the drag: we compress it in two phases and inject a proportion of it through small nozzles to form a high-pressure air cushion that levitates the capsule. The tube is partially evacuated so that the air pressure is optimized for maximum compressor effectiveness, optimum tube size, and minimum vacuum pump power consumption. The total relative mass flow rate of the tube air is divided into two fractions. One is bypassed to flow over the capsule body, ensuring that no choked flow takes place. The other fraction is drawn in by the compressor, where it is diffused to decrease the Mach number (from around 0.8) to a value suitable for the compressor inlet. The air is then compressed and intercooled, then split. One fraction is expanded through a tail nozzle to contribute to generating thrust. The other is compressed again. Bleed air from the two compressors is used to maintain a constant pressure in an air tank, which supplies air for levitation. Dividing the total mass flow rate increases the achievable speed (the Kantrowitz limit), and compressing it decreases the blockage of the capsule. As a result, the aerodynamic drag on the capsule decreases. As the tube pressure decreases, the drag decreases and the capsule power requirements decrease; however, the vacuum pump consumes more power. Therefore, design optimization techniques are to be used to obtain the optimum values of all design variables for given design inputs. 
Aerodynamic shape optimization, capsule and tube sizing, compressor design, diffuser and nozzle expander design, and the effect of the air bearing on the aerodynamics of the capsule are considered. The variations of these variables are studied with respect to changes in capsule velocity and air pressure.
Keywords: tube-capsule, hyperloop, aerodynamic design optimization, air compressor, air bearing
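The Kantrowitz-limit reasoning above rests on one-dimensional compressible flow relations. For instance, the isentropic area ratio at the quoted compressor-inlet Mach number of about 0.8 can be sketched as follows (this is the standard gas-dynamics formula; γ = 1.4 is assumed for air).

```python
import math

GAMMA = 1.4  # ratio of specific heats for air (assumed)

def area_ratio(M, g=GAMMA):
    """Isentropic area ratio A/A* for Mach number M (1-D compressible flow)."""
    term = (2.0 / (g + 1.0)) * (1.0 + 0.5 * (g - 1.0) * M**2)
    return (1.0 / M) * term ** ((g + 1.0) / (2.0 * (g - 1.0)))

# Flow area relative to the choked (sonic) throat area when the tube air
# is diffused to the compressor-inlet Mach number of about 0.8 cited above.
print(round(area_ratio(0.8), 4))  # 1.0382
```

Comparing such area ratios for the bypassed and ingested fractions is what determines whether the flow around the capsule chokes, i.e. whether the Kantrowitz limit is exceeded.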
Procedia PDF Downloads 329
649 Amelioration of Lipopolysaccharide Induced Murine Colitis by Cell Wall Contents of Probiotic Lactobacillus Casei: Targeting Immuno-Inflammation and Oxidative Stress
Authors: Vishvas N. Patel, Mehul Chorawala
Abstract:
Currently, to the authors' best knowledge, there are few effective therapeutic agents to limit the intestinal mucosa damage associated with inflammatory bowel disease (IBD). Clinical studies have shown beneficial effects of several probiotics in patients with IBD. Probiotics are live organisms that confer a health benefit on the host by modulating immuno-inflammation and oxidative stress. Although probiotics improve disease severity in mice and humans, very little is known about the specific contribution of the cell wall contents of probiotics in IBD. Herein, we investigated the ameliorative potential of the cell wall contents of Lactobacillus casei (LC) in lipopolysaccharide (LPS)-induced murine colitis. Methods: Colitis was induced in LPS-sensitized rats by intracolonic instillation of LPS (50 µg/rat) for 14 consecutive days. Concurrently, cell wall contents isolated from 10³, 10⁶ and 10⁹ CFU of LC were given subcutaneously to each rat for 21 days, with sulfasalazine (100 mg/kg, p.o.) as the standard. The severity of colitis was assessed by body weight loss, food intake, stool consistency, rectal bleeding, colon weight/length, spleen weight and histological analysis. Colonic inflammatory markers (myeloperoxidase (MPO) activity, C-reactive protein and proinflammatory cytokines) and oxidative stress markers (malondialdehyde, reduced glutathione and nitric oxide) were also assayed. Results: Cell wall contents isolated from 10⁶ and 10⁹ CFU of LC significantly improved the severity of colitis by reducing body weight loss and the incidence of diarrhea and bleeding, and by improving food intake, colon weight/length, spleen weight and microscopic damage to the colonic mucosa. The treatment also reduced the levels of inflammatory and oxidative stress markers and boosted antioxidant molecules. However, cell wall contents isolated from 10³ CFU were ineffective. 
Conclusion: In conclusion, cell wall contents of LC attenuate LPS-induced colitis by modulating immuno-inflammation and oxidative stress.
Keywords: probiotics, Lactobacillus casei, immuno-inflammation, oxidative stress, lipopolysaccharide, colitis
Procedia PDF Downloads 86
648 Best Practical Technique to Drain Recoverable Oil from Unconventional Deep Libyan Oil Reservoir
Authors: Tarek Duzan, Walid Esayed
Abstract:
Fluid flow in porous media is attributed fundamentally to parameters controlled by depositional and post-depositional environments. After deposition, diagenetic events can act negatively on the reservoir and reduce the effective porosity, thereby making the rock less permeable. Therefore, exploiting hydrocarbons from such resources requires partially altering the rock properties to improve the long-term production rate and enhance recovery efficiency. In this study, we first address the phenomenon of permeability reduction in tight sandstone reservoirs and illustrate the procedures implemented to investigate its root causes; we then benchmark the candidate solutions at field scale and recommend a mitigation strategy for the field development plan. Two investigations were considered: subsurface analysis using production logging (PLT) and laboratory tests on four candidate wells of the reservoir of interest. Based on these investigations, it was evident that the production logging tool (PLT) showed areas of contribution in the reservoir that are very limited considering the total reservoir thickness. Alcohol treatment was the first choice for the AA9 well; the well's productivity was partially restored, but not to its initial level. Furthermore, alcohol treatment in the lab was effective and restored permeability in some plugs by 98%, but operationally the challenge would be the ability to distribute enough alcohol in a wellbore to attain the sweep efficiency obtained within a laboratory core plug. However, the second solution, based on fracking the wells, has shown excellent results, especially for those wells that suffered a high drop in oil production. It is suggested to frac and pack the wells that are already damaged in the Waha field to mitigate the damage and restore productivity as much as possible. 
In addition, the critical fluid velocity and its effect on fine sand migration in the reservoir have to be studied thoroughly on core samples so that a suitable pressure drawdown can be applied in the reservoir to limit fine sand migration.
Keywords: alcohol treatment, post-depositional environments, permeability, tight sandstone
Procedia PDF Downloads 67
647 Influence Zone of Strip Footing on Untreated and Cement Treated Sand Mat Underlain by Soft Clay
Authors: Sharifullah Ahmed
Abstract:
Shallow foundations on soft soils without ground improvement can exhibit a high level of settlement. In such cases, an alternative to pile foundations may be shallow strip footings placed on a soil system in which the upper layer is untreated or cement-treated compacted sand, to limit the settlement to a permissible level. This research work deals with a rigid plane-strain strip footing of 2.5 m width placed on a soil system consisting of an untreated or cement-treated sand layer underlain by homogeneous soft clay. Upper-layer thicknesses both thin and thick compared with the footing width were considered. The soft inorganic cohesive NC clay layer is considered undrained in the plastic loading stages and drained in the consolidation stages, and the sand layer is drained in all loading stages. FEM analysis was done using PLAXIS 2D Version 8.0 with a model consisting of a clay deposit of 15 m thickness and 18 m width. The soft clay layer was modeled using the Hardening Soil Model, Soft Soil Model and Soft Soil Creep Model, and the upper improvement layer was modeled using only the Hardening Soil Model. The system is considered fully saturated, and a natural void ratio of 1.2 is used. Total displacement fields of the strip footing and subsoil layers in the cases of untreated and cement-treated sand as the upper layer are presented. For Hi/B = 0.6 or above, the major deformation and the influence zone of the footing are confined to the upper layer, which indicates that the untreated upper layer is fully effective in carrying the foundation. For Hi/B = 0.3 or above, the major deformation and the influence zone of the footing are likewise confined to the upper layer, indicating that the cement-treated upper layer is fully effective. 
Brittle behavior of cemented sand and fractures or cracks are not considered in this analysis.
Keywords: displacement, ground improvement, influence depth, PLAXIS 2D, primary and secondary settlement, sand mat, soft clay
Procedia PDF Downloads 92
646 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano
Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das
Abstract:
Bulk MgH2 has drawn much attention for hydrogen storage because of its high hydrogen storage capacity (~7.7 wt%) as well as its low cost and abundant availability. However, its practical usage has been hindered by its high hydrogen desorption enthalpy (~0.8 eV/H2 molecule), which results in an undesirable desorption temperature of 300 °C at 1 bar H2 pressure. To surmount the limitations of bulk MgH2 for hydrogen storage, a detailed first-principles density functional theory (DFT) study of the structure and stability of neutral (Mgm) and positively charged (Mgm+) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H2), is reported here. It has been found that, due to the absence of d-electrons in the Mg atoms, hydrogen remains in molecular form even after its interaction with neutral and charged Mg nanoclusters. Interestingly, the H2 molecules do not enter the interstitial positions of the nanoclusters; rather, they remain on the surface, decorating these nanoclusters and forming new structures with a gravimetric density higher than 15 wt%. Our observation is that the inclusion of Grimme's DFT-D3 dispersion correction in this weakly interacting system has a significant effect on the binding of the H2 molecules to these nanoclusters. The dispersion-corrected interaction energy (IE) values (0.1-0.14 eV/H2 molecule) fall in the energy window that is ideal for hydrogen storage. These IE values are further verified using high-level coupled-cluster calculations with non-iterative triples corrections, i.e., CCSD(T), which is considered a highly accurate quantum chemical method, thereby confirming the accuracy of our dispersion-corrected DFT calculations. The significance of the polarization and dispersion energies in the binding of the H2 molecules is confirmed by energy decomposition analysis (EDA). 
A total of 16, 24, 32 and 36 H2 molecules can be attached to the neutral and charged nanoclusters of size m = 2, 4, 8 and 12, respectively. Ab initio molecular dynamics (AIMD) simulation shows that the outermost H2 molecules are desorbed at a rather low temperature, viz. 150 K (−123 °C), which is expected. However, complete dehydrogenation of these nanoclusters occurs at around 100 °C. Most importantly, the host nanoclusters remain stable up to ~500 K (227 °C). All these results on the adsorption and desorption of molecular hydrogen on neutral and charged Mg nanocluster systems point towards the possibility of reducing the dehydrogenation temperature of bulk MgH2 by designing new Mg-based nanomaterials able to adsorb molecular hydrogen via this weak Mg-H2 interaction, rather than the strong Mg-H bond. Notwithstanding the fact that in practical applications these interactions will be further complicated by the effect of substrates as well as interactions with other clusters, the present study has implications for our fundamental understanding of this problem.
Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption
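The gravimetric densities quoted above follow from simple mass ratios. A sketch, with atomic masses from standard tables and cluster compositions taken from the abstract:

```python
# Gravimetric hydrogen capacity: wt% H = m_H / (m_host + m_H) * 100
M_MG, M_H = 24.305, 1.008  # standard atomic masses, g/mol

def wt_percent_H(n_mg, n_h2):
    """Hydrogen weight percent for n_mg Mg atoms carrying n_h2 H2 molecules."""
    m_h = n_h2 * 2 * M_H
    return 100.0 * m_h / (n_mg * M_MG + m_h)

# Bulk MgH2 (one H2 per Mg) reproduces the ~7.7 wt% figure quoted above:
print(round(wt_percent_H(1, 1), 2))  # 7.66
# Mg2 cluster decorated with 16 H2 molecules (from the cluster results):
print(round(wt_percent_H(2, 16), 1))
```

The cluster case comfortably exceeds the 15 wt% threshold mentioned above, which is the point of decorating the surface with molecular rather than dissociated hydrogen.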
Procedia PDF Downloads 412
645 Incidence and Molecular Mechanism of Human Pathogenic Bacterial Interaction with Phylloplane of Solanum lycopersicum
Authors: Indu Gaur, Neha Bhadauria, Shilpi Shilpi, Susmita Goswami, Prem D. Sharma, Prabir K. Paul
Abstract:
The concept of organic agriculture has been accepted as a novelty in Indian society, but no data are available on the human pathogens colonizing plant parts as a result of such practices. The pattern and mechanism of their colonization also need to be understood in order to devise strategies for their prevention. In the present study, human pathogenic bacteria were isolated from organically grown tomato plants, and five of them were identified as Klebsiella pneumoniae, Enterobacter ludwigii, Serratia fonticola, Stenotrophomonas maltophilia and Chryseobacterium jejuense. Tomato plants were grown under controlled aseptic conditions at 25±1 ˚C, 70% humidity and a 12-hour L/D photoperiod. Six-week-old plants were divided into 6 groups of 25 plants each and treated as follows: Group 1: K. pneumoniae, Group 2: E. ludwigii, Group 3: S. fonticola, Group 4: S. maltophilia, Group 5: C. jejuense, Group 6: sterile distilled water (control). The inocula for all treatments were prepared by overnight growth at a uniform concentration of 10⁸ cells/ml. Leaf samples from the above groups were collected at 0.5, 2, 4, 6 and 24 hours post inoculation for colony-forming unit counts (CFU/cm² of leaf area) of the individual pathogens using the leaf impression method. These CFU counts were used for the in vivo colonization and adherence assays of the individual pathogens. The resistance of these pathogens to at least 12 antibiotics was also studied. Based on these findings, S. fonticola was found to colonize the tomato phylloplane most prominently and was studied further. Tomato plants grown under the same controlled aseptic conditions as above were divided into 2 groups of 25 plants each and treated as follows: Group 1: S. fonticola, Group 2: sterile distilled water (control). Leaf samples from the above groups were collected at 0, 24, 48, 72 and 96 hours post inoculation and homogenized in suitable buffers for surface and cell wall protein isolation. 
The protein samples thus obtained were subjected to isocratic SDS-gel electrophoresis and analyzed. It was observed that the presence of S. fonticola could induce the expression of at least 3 additional cell wall proteins at different time intervals. Surface proteins also showed variation in their expression pattern at different sampling intervals. Further identification of these proteins by MALDI-MS and bioinformatics tools revealed the gene(s) involved in the interaction of S. fonticola with the tomato phylloplane.
Keywords: cell wall proteins, human pathogenic bacteria, phylloplane, Solanum lycopersicum
Procedia PDF Downloads 226
644 A Novel Machine Learning Approach to Aid Agrammatism in Non-fluent Aphasia
Authors: Rohan Bhasin
Abstract:
Agrammatism in non-fluent aphasia can be defined as a language disorder wherein a patient can only use content words (nouns, verbs and adjectives) for communication, and their speech is devoid of functional word types like conjunctions and articles, generating speech with extremely rudimentary grammar. Past approaches involve speech therapy of some kind, with conversation analysis used to analyse pre-therapy speech patterns and qualitative changes in conversational behaviour after therapy. We describe a novel method to generate functional words (prepositions, articles, conjunctions) around content words (nouns, verbs and adjectives) using a combination of natural language processing and deep learning algorithms, with applications in assisted communication. The approach the paper investigates is LSTMs or Seq2Seq: a sequence-to-sequence (seq2seq) or LSTM model takes in a sequence of inputs and outputs a sequence. This approach needs a significant amount of training data, with each training example containing a pair such as (content words, complete sentence). We generate such data by starting with complete sentences from a text source and removing the functional words to leave just the content words. However, this approach would require a lot of training data to produce coherent output. The assumption of this approach is that the content words received in the input are to be preserved, i.e., they will not be altered after the functional grammar is slotted in. This is a potential limitation in cases of severe agrammatism, where such order might not be inherently correct. This approach can be used to assist communication in mild agrammatism in non-fluent aphasia cases: by generating these function words around the content words, we can provide meaningful sentence options to the patient for articulate conversations. 
Thus our project translates the use case of generating sentences from content-specific words into an assistive technology for non-fluent aphasia patients.
Keywords: aphasia, expressive aphasia, assistive algorithms, neurology, machine learning, natural language processing, language disorder, behaviour disorder, sequence to sequence, LSTM
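The training-pair generation step described above, stripping functional words from complete sentences to obtain (content words, sentence) pairs, can be sketched as follows. The functional-word list here is a small illustrative subset, not a full inventory.

```python
# Build (content words -> full sentence) pairs for seq2seq training by
# removing functional words from complete source sentences.
FUNCTION_WORDS = {"the", "a", "an", "and", "but", "or", "is", "are",
                  "to", "of", "in", "on", "at", "with", "for"}

def make_pair(sentence):
    """Return (source, target): content words only, and the full sentence."""
    tokens = sentence.lower().split()
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return " ".join(content), " ".join(tokens)

src, tgt = make_pair("The dog is in the garden")
print(src)  # "dog garden"
print(tgt)  # "the dog is in the garden"
```

A seq2seq model trained on many such pairs then learns the inverse mapping: given only the content words a patient produces, it proposes full grammatical sentences.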
Procedia PDF Downloads 162
643 Molecular Diversity of Forensically Relevant Insects from the Cadavers of Lahore
Authors: Sundus Mona, Atif Adnan, Babar Ali, Fareeha Arshad, Allah Rakha
Abstract:
Molecular diversity is the variation in the abundance of species. Forensic entomology is a neglected field in Pakistan. Insects collected from a crime scene should be handled by forensic entomologists, who are currently virtually non-existent in Pakistan. Correct identification of insect specimens, along with knowledge of their biodiversity, can aid in solving many problems in complicated forensic cases. Inadequate morphological identification and insufficient thermal biological studies limit the utility of entomology in forensic medicine. Recently, molecular identification of entomological evidence has gained attention globally. DNA barcoding is the latest established method for species identification, and only proper identification can provide a precise estimation of the postmortem interval. Arthropods are known to be the first visitors scavenging on decomposing dead matter. The objective of the proposed study was to identify species by molecular techniques and analyze their phylogenetic importance against barcoded necrophagous insect species of early succession on human cadavers. Based upon this identification, the study outcome will be the utilization of established DNA barcodes to identify carrion-feeding insect species for concordant estimation of the postmortem interval. A molecular identification method involving sequencing of a 658-bp ‘barcode’ fragment of the mitochondrial cytochrome oxidase subunit 1 (CO1) gene from collected specimens of unknown dipteran species from cadavers of Lahore was evaluated. Nucleotide sequence divergences were calculated using MEGA 7 and Arlequin, and a neighbor-joining phylogenetic tree was generated. Three species were identified, Chrysomya megacephala, Chrysomya saffranea and Chrysomya rufifacies, with low genetic diversity. The fixation index was 0.83992, which suggests a need for further studies to identify and classify forensically relevant insects in Pakistan. 
There is an urgent demand for further research, especially when immature forms of arthropods are recovered from the crime scene.
Keywords: molecular diversity, DNA barcoding, species identification, forensically relevant
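The nucleotide sequence divergences computed in MEGA and Arlequin are, in their simplest uncorrected form, p-distances between aligned barcode fragments. A minimal sketch on toy sequences (not real CO1 data):

```python
# Uncorrected pairwise divergence (p-distance) between aligned sequences:
# the fraction of sites at which two aligned fragments differ.
def p_distance(s1, s2):
    assert len(s1) == len(s2), "sequences must be aligned to equal length"
    diffs = sum(a != b for a, b in zip(s1, s2))
    return diffs / len(s1)

seq_a = "ATGCGTACGTTA"
seq_b = "ATGCGTACGCTA"
print(p_distance(seq_a, seq_b))  # 1 mismatch over 12 sites
```

Barcoding pipelines typically apply a substitution-model correction (e.g. Kimura 2-parameter) on top of this raw count before building the neighbor-joining tree.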
Procedia PDF Downloads 147
642 Effect of Sulphur Concentration on Microbial Population and Performance of a Methane Biofilter
Authors: Sonya Barzgar, J. Patrick, A. Hettiaratchi
Abstract:
Methane (CH4) is regarded as the second largest contributor to the greenhouse effect, with a global warming potential (GWP) of 34 relative to carbon dioxide (CO2) over the 100-year horizon, so there is a growing interest in reducing emissions of this gas. Methane biofiltration (MBF) is a cost-effective technology for reducing low-volume point-source emissions of methane. In this technique, microbial oxidation of methane is carried out by methane-oxidizing bacteria (methanotrophs), which use methane as their carbon and energy source. MBF uses a granular medium, such as soil or compost, to support the growth of the methanotrophic bacteria responsible for converting methane to carbon dioxide (CO₂) and water (H₂O). Even though the biofiltration technique has been shown to be an efficient, practical and viable technology, the design and operational parameters, as well as the relevant microbial processes, have not been investigated in depth. In particular, limited research has been done on the effects of sulphur on methane bio-oxidation. Since bacteria require a variety of nutrients for growth, to improve the performance of methane biofiltration it is important to establish the quantities of nutrients to be provided to the biofilter to sustain the process. The study described in this paper was conducted with the aim of determining the influence of sulphur on methane elimination in a biofilter. A set of experimental measurements was carried out to explore how the conversion of elemental sulphur affects methane oxidation in terms of methanotroph growth and system pH. Batch experiments with different concentrations of sulphur were performed while keeping the other parameters, i.e., moisture content, methane concentration, oxygen level and compost, at their optimum levels. 
The study revealed the tolerable limit of sulphur without any interference with methane oxidation, as well as the particular sulphur concentration leading to the greatest methane elimination capacity. Due to sulphur oxidation, the pH varies in a transient way, which affects microbial growth behavior. All methanotrophs are incapable of growth at pH values below 5.0 and thus are apparently unable to oxidize methane. Herein, the specific pH for the optimal growth of methanotrophic bacteria is obtained. Finally, methane concentration monitored over time in the presence of sulphur is also presented for laboratory-scale biofilters.
Keywords: global warming, methane biofiltration (MBF), methane oxidation, methanotrophs, pH, sulphur
Procedia PDF Downloads 234
641 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances
Authors: P. Mounnarath, U. Schmitz, Ch. Zhang
Abstract:
Fragility analysis has been an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints is largely inconsistent across bridge design codes, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed by following two different bridge design code specifications, namely, Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and four different gap values. A nonlinear time history analysis is performed. Artificial ground motion sets, with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g, are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared to the other, more sophisticated bridge models for all damage states. 
In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states, but then a higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis
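The combination of component fragility curves into a system fragility curve described in this abstract can be sketched as follows. This is a minimal illustration only: it assumes lognormal component fragilities and a series system of statistically independent components, and the median capacities and dispersions used below are hypothetical, not values from the study.

```python
import math

def component_fragility(pga, median, beta):
    # Lognormal component fragility: P(demand/capacity >= 1 | PGA),
    # with median capacity (in g) and logarithmic dispersion beta.
    z = math.log(pga / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def system_fragility(pga, components):
    # Series-system combination assuming independent components:
    # the system reaches the damage state if any component does.
    p = [component_fragility(pga, m, b) for (m, b) in components]
    return 1.0 - math.prod(1.0 - pi for pi in p)

# Hypothetical (median, dispersion) pairs for a pier and a sliding bearing
components = [(0.45, 0.5), (0.60, 0.4)]
for pga in (0.1, 0.3, 0.5, 0.7):
    print(f"PGA = {pga:.2f} g -> P(system damage) = "
          f"{system_fragility(pga, components):.3f}")
```

By construction, the system curve lies at or above every component curve, which matches the intuition that combining component fragilities can only increase the estimated damage probability.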
Procedia PDF Downloads 434
640 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel
Authors: Hamed Kalhori, Lin Ye
Abstract:
In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel remotely from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended to identify both the location and magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations but that the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of responses resulting from impact at each potential location. The problem can be categorized as under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations). The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. 
Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently chosen to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different signal window widths on the reconstructed force is examined. It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact location on the structure, having a shape ranging from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed by using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match well with the actual forces in terms of magnitude and duration.
Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction
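The Tikhonov-regularized deconvolution at the heart of this approach can be sketched in a few lines. The sketch below uses a hypothetical half-sine force pulse and an assumed exponentially decaying impulse response (not the panel's measured transfer function), and evaluates the result with the same correlation coefficient criterion the abstract describes.

```python
import numpy as np

def conv_matrix(h, n):
    # Build the transfer (convolution) matrix H so that y = H @ f
    m = len(h) + n - 1
    H = np.zeros((m, n))
    for j in range(n):
        H[j:j + len(h), j] = h
    return H

def tikhonov_deconvolve(H, y, lam):
    # Regularized least squares: minimize ||H f - y||^2 + lam * ||f||^2
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

# Hypothetical half-sine impact force and an assumed structural response kernel
n = 50
t = np.linspace(0.0, 1.0, n)
f_true = np.sin(np.pi * t)                   # half-sine force pulse
h = np.exp(-5.0 * t) * np.sin(20.0 * t)      # assumed impulse response
H = conv_matrix(h, n)
rng = np.random.default_rng(0)
y = H @ f_true + 0.01 * rng.standard_normal(H.shape[0])   # noisy sensor signal

f_rec = tikhonov_deconvolve(H, y, lam=1e-3)
corr = np.corrcoef(f_true, f_rec)[0, 1]      # correlation coefficient criterion
print(f"correlation coefficient: {corr:.3f}")
```

In practice the regularization parameter `lam` would be selected by the L-curve or GCV methods mentioned above rather than fixed by hand.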
Procedia PDF Downloads 533
639 Determination of Influence Lines for Train Crossings on a Tied Arch Bridge to Optimize the Construction of the Hangers
Authors: Martin Mensinger, Marjolaine Pfaffinger, Matthias Haslbeck
Abstract:
The maintenance and expansion of the railway network represent a central task for transport planning in the future. In addition to the ultimate limit states, the aspects of resource conservation and sustainability are increasingly necessary to include in the basic engineering. Therefore, as part of the AiF research project ‘Integrated assessment of steel and composite railway bridges in accordance with sustainability criteria’, the entire lifecycle of engineering structures is involved in planning and evaluation, offering a way to optimize the design of steel bridges. In order to reduce the life cycle costs and increase the profitability of steel structures, it is particularly necessary to consider the demands on hanger connections resulting from fatigue. To obtain an accurate analysis, a number of simulations were conducted as part of the research project on a finite element model of a reference bridge, which gives an indication of the internal forces of the individual structural components of a tied arch bridge, depending on the stress induced by various types of trains. The calculations were carried out on a detailed FE model, which allows an extraordinarily accurate representation of the stiffness of all parts of the construction, as it is made up of surface elements. The results point, on the one hand, to a large impact of detailing on fatigue-related changes in stress and, on the other, to construction-specific characteristics over the course of loading. Comparative calculations with varied axle-load distributions also provide information about the sensitivity of the stress-resultant development to the imposed load and axle distribution. 
The calculated diagrams help to achieve an optimized hanger connection design with improved durability, which helps to reduce the maintenance costs of rail networks, and they provide practical application notes for detailing.
Keywords: fatigue, influence line, life cycle, tied arch bridge
Procedia PDF Downloads 325
638 Preventative Maintenance, Impact on the Optimal Replacement Strategy of Secondhand Products
Authors: Pin-Wei Chiang, Wen-Liang Chang, Ruey-Huei Yeh
Abstract:
This paper investigates optimal replacement and preventive maintenance policies for secondhand products under a finite planning horizon (FPH). Any consumer wishing to replace their product under the FPH would have it undergo minimal repairs upon failure. The replacement product would be required to undergo periodic preventive maintenance to avoid product failure. A mathematical formula for the expected cost of products under the FPH can then be derived, and optimal policies are obtained to minimize cost. The first of the paper's two segments uses a model for the initial product purchase of either new or secondhand products. This model is built by analyzing the product purchasing price, the surplus value of the product, and the minimal repair cost. The second segment uses a model for replacement products, which are also secondhand products with no limit on usage. This model analyzes the same components as the first, as well as the expected preventive maintenance cost. Using these two models, a formula for the expected final total cost can be developed. The formula requires four variables (optimal preventive maintenance level, preventive maintenance frequency, replacement timing, and age of the replacement product) to find the minimal cost. Based on an analysis of the variables using the expected final total cost model, it was found that the purchasing price and length of ownership are directly related. Also, consumers should choose the secondhand product with the higher usage for replacement. Products with higher initial usage upon acquisition require an earlier replacement schedule; in this case, replacements should be made with a secondhand product with less usage. In addition, preventive maintenance also significantly reduces cost. Consumers that plan to use products for longer periods of time replace their products later; hence these consumers should choose the secondhand product with lesser initial usage for replacement. 
Preventive maintenance also creates significant total cost savings in this case. This study provides consumers with a method of calculating both the ideal amount of usage of the products they should purchase and the frequency and level of preventive maintenance that should be conducted in order to minimize cost and maintain product function.
Keywords: finite planning horizon, secondhand product, replacement, preventive maintenance, minimal repair
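The kind of cost trade-off this model formalizes can be sketched numerically. The sketch below is not the paper's model: it assumes a power-law (Weibull) failure intensity for the minimal-repair count and a hypothetical exponential depreciation for purchase price and salvage value, then grid-searches the initial usage (age) of the secondhand product that minimizes the expected total cost over a fixed horizon.

```python
import math

def expected_minimal_repairs(age, horizon, beta=2.0, eta=5.0):
    # Power-law (Weibull) failure intensity: expected number of minimal
    # repairs on the interval [age, age + horizon]
    return ((age + horizon) / eta) ** beta - (age / eta) ** beta

def expected_total_cost(age, horizon, new_price=1000.0, repair_cost=80.0,
                        rate=0.25):
    # Hypothetical exponential depreciation for purchase price and salvage
    price = new_price * math.exp(-rate * age)
    salvage = new_price * math.exp(-rate * (age + horizon))
    return price - salvage + repair_cost * expected_minimal_repairs(age, horizon)

# Grid search: initial usage (in years) minimizing cost over a 4-year plan
ages = [a / 2 for a in range(0, 25)]   # 0.0 .. 12.0 years in 0.5-year steps
best_age = min(ages, key=lambda a: expected_total_cost(a, 4.0))
print(f"optimal initial usage: {best_age} years, "
      f"expected cost: {expected_total_cost(best_age, 4.0):.1f}")
```

With these illustrative parameters the optimum is an interior age: buying too new wastes depreciation, buying too old inflates the minimal-repair bill, mirroring the qualitative findings above.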
Procedia PDF Downloads 472
637 Investigation of Existing Guidelines for Four-Legged Angular Telecommunication Tower
Authors: Sankara Ganesh Dhoopam, Phaneendra Aduri
Abstract:
Lattice towers are lightweight structures whose design is primarily governed by the effects of wind loading. Ensuring a precise assessment of wind loads on the tower structure, antennas, and associated equipment is vital for the safety and efficiency of tower design. Earlier, Indian standards were not available for the design of telecom towers. Instead, the industry conventionally relied on the general building wind loading standard for calculating loads on tower components and on the transmission line tower design standard for designing the angular members of the towers. Subsequently, the Bureau of Indian Standards (BIS) revised these standards and the angular member design standard. While transmission line towers are designed using the above standard, a full-scale model test is carried out to prove the design. Telecom angular towers are also designed using the same standard, with an overload factor/factor of safety but without full-scale tower model testing. The general code of practice for construction in steel follows a limit state design approach and is applicable to the design of general structures involving angles and tubes, but it is not used for the angle member design of towers. Recently, in response to evolving industry needs, the Bureau of Indian Standards (BIS) introduced a new standard titled ‘Isolated Towers, Masts, and Poles using structural steel - Code of practice’ for the design of telecom towers. This study focuses on a 40 m four-legged angular tower to compare loading calculations and member designs between the old and new standards. Additionally, a comparative analysis aligning the new code provisions with international loading and design standards, with a specific focus on American standards, has been carried out. 
This paper elaborates the code-based provisions used for load and member design calculations, including the influence of the 'Ka' area averaging factor introduced in the new wind load provisions.
Keywords: telecom, angular tower, PLS tower, GSM antenna, microwave antenna, IS 875 (Part-3):2015, IS 802 (Part-1/sec-2):2016, IS 800:2007, IS 17740:2022, ANSI/TIA-222-G, ANSI/TIA-222-H
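The role of the area averaging factor in the wind pressure chain can be illustrated with a short sketch. This follows the general structure of the IS 875 (Part 3) formulation (design wind speed Vz from the basic speed and modification factors, pz = 0.6 Vz², then design pressure via Kd, Ka, Kc), but the factor values below are hypothetical placeholders, not code-prescribed values for any particular tower.

```python
def design_wind_pressure(vb, k1=1.0, k2=1.0, k3=1.0, k4=1.0,
                         kd=0.9, ka=0.8, kc=1.0):
    # Sketch after IS 875 (Part 3):
    #   design wind speed Vz = Vb * k1 * k2 * k3 * k4        (m/s)
    #   wind pressure     pz = 0.6 * Vz**2                   (N/m^2)
    #   design pressure   pd = Kd * Ka * Kc * pz
    vz = vb * k1 * k2 * k3 * k4
    pz = 0.6 * vz ** 2
    return kd * ka * kc * pz

# Hypothetical basic wind speed of 47 m/s; Ka < 1 reduces the effective
# pressure on members with large tributary areas (the area averaging effect)
pd_small_area = design_wind_pressure(47.0, ka=1.0)
pd_large_area = design_wind_pressure(47.0, ka=0.8)
print(f"{pd_small_area:.0f} N/m2 (Ka=1.0) vs {pd_large_area:.0f} N/m2 (Ka=0.8)")
```

The comparison makes the practical point of the new provision visible: for the same site wind speed, applying the area averaging factor lowers the design pressure on large tributary areas.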
Procedia PDF Downloads 82
636 Antimicrobial Activity of 2-Nitro-1-Propanol and Lauric Acid against Gram-Positive Bacteria
Authors: Robin Anderson, Elizabeth Latham, David Nisbet
Abstract:
Propagation and dissemination of antimicrobial-resistant and pathogenic microbes from spoiled silages and composts represent a serious public health threat to humans and animals. In the present study, the antimicrobial activity of the short chain nitro-compound 2-nitro-1-propanol (9 mM), as well as the medium chain fatty acid lauric acid and its glycerol monoester monolaurin (at 25 and 17 µmol/mL, respectively), was investigated against select pathogenic and multi-drug-resistant Gram-positive bacteria common to spoiled silages and composts. In an initial study, we found that the growth rates of a multi-resistant Enterococcus faecalis (expressing resistance against erythromycin, quinupristin/dalfopristin and tetracycline) and Staphylococcus aureus strain 12600 (expressing resistance against erythromycin, linezolid, penicillin, quinupristin/dalfopristin and vancomycin) were slowed by more than 78% (P < 0.05) by 2-nitro-1-propanol treatment during culture (n = 3/treatment) in anaerobically prepared ½-strength Brain Heart Infusion broth at 37 °C when compared to untreated controls (0.332 ± 0.04 and 0.108 ± 0.03 h⁻¹, respectively). The growth rate of 2-nitro-1-propanol-treated Listeria monocytogenes was also decreased by 96% (P < 0.05) when compared to untreated controls cultured similarly (0.171 ± 0.01 h⁻¹). Maximum optical densities measured at 600 nm were lower (P < 0.05) in 2-nitro-1-propanol-treated cultures (0.053 ± 0.01, 0.205 ± 0.02 and 0.041 ± 0.01) than in untreated controls (0.483 ± 0.02, 0.523 ± 0.01 and 0.427 ± 0.01) for E. faecalis, S. aureus and L. monocytogenes, respectively. When tested against mixed microbial populations during anaerobic 24 h incubation of spoiled silage, significant effects of treatment with 1 mg 2-nitro-1-propanol/g (approximately 9.5 µmol/g) or 5 mg lauric acid/g (approximately 25 µmol/g) on populations of wildtype Enterococcus and Listeria were not observed. 
Mixed populations treated with 5 mg monolaurin/g (approximately 17 µmol/g) had lower (P < 0.05) viable cell counts of wildtype enterococci than untreated controls after 6 h incubation (2.87 ± 1.03 versus 5.20 ± 0.25 log10 colony forming units/g, respectively), but otherwise significant effects of monolaurin were not observed. These results reveal differential susceptibility of multi-drug-resistant enterococci and staphylococci, as well as L. monocytogenes, to the inhibitory activity of 2-nitro-1-propanol and of the medium chain fatty acid lauric acid and its glycerol monoester monolaurin. Ultimately, these results may lead to improved treatment technologies to preserve the microbiological safety of silages and composts.
Keywords: 2-nitro-1-propanol, lauric acid, monolaurin, Gram-positive bacteria
Procedia PDF Downloads 108
635 A Review of Gas Hydrate Rock Physics Models
Authors: Hemin Yuan, Yun Wang, Xiangchun Wang
Abstract:
Gas hydrate is drawing attention because of its enormous worldwide abundance, almost twice the conventional hydrocarbon reserves, making it a potential alternative source of energy. It is widely distributed in permafrost and on continental ocean shelves, and many countries have launched national programs for investigating gas hydrate. Gas hydrate is mainly explored through seismic methods, which include bottom simulating reflectors (BSR), amplitude blanking, and polarity reversal. These seismic methods are effective at finding gas hydrate formations but usually carry large uncertainties when applied to invert the micro-scale petrophysical properties of the formations, due to a lack of constraints. Rock physics modeling links the micro-scale structures of the rocks to their macro-scale elastic properties and can work as an effective constraint for the seismic methods. A number of rock physics models have been proposed for gas hydrate modeling, addressing different mechanisms and applications. However, these models are generally not well classified, and it is confusing to determine the appropriate model for a specific study. Moreover, since the modeling usually involves multiple models and steps, it is difficult to determine the source of uncertainties. To solve these problems, we summarize the developed models and methods and classify the models into four categories according to the hydrate micro-scale morphology in sediments, the purpose of reservoir characterization, the stage of gas hydrate generation, and the lithology type of the hosting sediments. Some sub-categories may overlap each other, but they have different priorities. Besides, we also analyze the priorities of the different models, point out their shortcomings, and explain the appropriate application scenarios. 
Moreover, by comparing the models, we summarize a general workflow of the modeling procedure, which includes forming the rock matrix, generating the dry rock frame, mixing the pore fluids, and a final fluid substitution into the rock frame. These procedures have been widely used in various gas hydrate modeling studies and have been confirmed to be effective. We also analyze the potential sources of uncertainty in each modeling step, which enables us to clearly recognize the potential uncertainties in the modeling. In the end, we explicate the general problems of the current models, including the influences of pressure and temperature, pore geometry, hydrate morphology, and rock structure change during gas hydrate dissociation and re-generation. We also point out that attenuation is severely affected by gas hydrate in sediments and may work as an indicator to map gas hydrate concentration. Our work classifies rock physics models of gas hydrate into different categories, generalizes the modeling workflow, and analyzes the modeling uncertainties and potential problems, which can facilitate the rock physics characterization of gas hydrate-bearing sediments and provide hints for future studies.
Keywords: gas hydrate, rock physics model, modeling classification, hydrate morphology
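The last two steps of that general workflow (pore-fluid mixing and fluid substitution) are commonly implemented with Wood/Reuss fluid mixing followed by Gassmann substitution; the sketch below shows those two standard equations with hypothetical moduli (the dry-frame and mineral values are placeholders, not results from any of the reviewed models).

```python
def gassmann_ksat(k_dry, k_min, k_fluid, phi):
    # Gassmann fluid substitution: saturated bulk modulus from the dry frame
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fluid + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

def woods_fluid_mix(k_list, s_list):
    # Wood/Reuss (isostress) mixing of pore fluids by saturation
    return 1.0 / sum(s / k for k, s in zip(k_list, s_list))

# Hypothetical moduli in GPa: quartz mineral, dry frame, and a brine/gas pore fill
k_min, k_dry, phi = 36.6, 3.0, 0.35
k_fluid = woods_fluid_mix([2.25, 0.01], [0.8, 0.2])   # 80% brine, 20% free gas
k_sat = gassmann_ksat(k_dry, k_min, k_fluid, phi)
print(f"K_fluid = {k_fluid:.4f} GPa, K_sat = {k_sat:.3f} GPa")
```

Even a small free-gas fraction collapses the mixed fluid modulus, and hence the saturated modulus, relative to full brine saturation, which is one reason fluid mixing is a major uncertainty source in the workflow.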
Procedia PDF Downloads 157
634 Estimating Affected Croplands and Potential Crop Yield Loss of an Individual Farmer Due to Floods
Authors: Shima Nabinejad, Holger Schüttrumpf
Abstract:
Farmers living in flood-prone areas such as coasts are exposed to storm surges intensified by climate change. Crop cultivation is the most important economic activity of farmers, and in times of flooding, agricultural lands are subject to inundation. Additionally, overflowing saline water causes more severe damage than riverine flooding. Agricultural crops are more vulnerable to salinity than other land uses, and the economic damages may continue for a number of years even after flooding, affecting farmers' decision-making for the following year. Therefore, it is essential to assess to what extent the agricultural areas are flooded and how large the associated flood damage to each individual farmer is. To address these questions, we integrated farmers' decision-making at farm scale with flood risk management. The integrated model includes identification of hazard scenarios, failure analysis of structural measures, derivation of hydraulic parameters for the inundated areas, and analysis of the economic damages experienced by each farmer. The present study has two aims: firstly, it investigates the flooded cropland and potential crop damages for the whole area; secondly, it compares them among farmers' fields for three flood scenarios, which differ in the breach locations of the flood protection structure. To achieve these goals, the spatial distribution of the farmers' fields and cultivated crops was fed into the flood risk model, and a 100-year storm surge hydrograph was selected as the flood event. The study area was Pellworm Island, which is located in the German Wadden Sea National Park and surrounded by the North Sea. Due to the high salt content of North Sea seawater, crops cultivated in the agricultural areas of Pellworm Island are 100% destroyed by storm surges, which was taken into account in developing the depth-damage curve for the analysis of consequences. 
As a result, inundated croplands and economic damages to crops were estimated for the whole island and compared for six selected farmers under three flood scenarios. The results demonstrate the significance and flexibility of the proposed model for flood risk assessment of flood-prone areas by integrating flood risk management and decision-making.
Keywords: crop damages, flood risk analysis, individual farmer, inundated cropland, Pellworm Island, storm surges
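The per-farmer damage calculation implied by a 100%-loss depth-damage curve reduces to a simple rule: any field whose ground level lies below the scenario water level loses its full crop value. The sketch below illustrates that rule with hypothetical field areas, crop values, and ground elevations; it is an illustration of the assumption stated in the abstract, not the study's actual model.

```python
def crop_damage(fields, water_level):
    # fields: list of (area_ha, crop_value_per_ha, ground_level_m)
    # water_level: flood water surface elevation (m) for the scenario.
    # Saline storm-surge flooding: step depth-damage curve, 100% loss
    # for any inundated field (the Pellworm Island assumption).
    total = 0.0
    for area, value, ground in fields:
        if water_level > ground:
            total += area * value
    return total

# Hypothetical farmer with three fields at different ground elevations
fields = [(12.0, 1800.0, 0.4), (8.5, 2200.0, 0.9), (20.0, 1500.0, 1.6)]
for scenario_level in (0.5, 1.2, 2.0):
    print(f"level {scenario_level} m -> damage "
          f"{crop_damage(fields, scenario_level):.0f}")
```

Varying `scenario_level` stands in for the three breach-location scenarios: the same fields produce very different losses depending on which parts of the island are inundated.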
Procedia PDF Downloads 255
633 Study of Porous Metallic Support for Intermediate-Temperature Solid Oxide Fuel Cells
Authors: S. Belakry, D. Fasquelle, A. Rolle, E. Capoen, R. N. Vannier, J. C. Carru
Abstract:
Solid oxide fuel cells (SOFCs) are promising devices for energy conversion due to their high electrical efficiency and eco-friendly behavior. Their performance is not only influenced by the microstructural and electrical properties of the electrodes and electrolyte but also depends on the interactions at the interfaces. Nowadays, commercial SOFCs are electrically efficient at high operating temperatures, typically between 800 and 1000 °C, which restricts their real-life applications. The present work pursues the objectives of reducing the operating temperature and developing cost-effective intermediate-temperature solid oxide fuel cells (IT-SOFCs). It focuses on the development of metal-supported solid oxide fuel cells (MS-IT-SOFCs) that would provide cheaper SOFC cells with increased lifetime and reduced operating temperature. Within this framework, the local company TIBTECH contributes its skills in the manufacturing of porous metal supports. This part of the work focuses on the physical, chemical, and electrical characterization of porous metallic supports (stainless steel 316L and FeCrAl alloy) under different exposure conditions of temperature and atmosphere, by studying the oxidation, mechanical resistance, and electrical conductivity of the materials. Within the target operating temperature range (i.e., 500 to 700 °C), the stainless steel 316L and FeCrAl alloy oxidize slightly in air and H2 but do not deform, whereas under an Ar atmosphere they oxidize more than under the previously mentioned atmospheres. Above 700 °C under air and Ar, the two metallic supports undergo high oxidation. From 500 to 700 °C, the resistivity of FeCrAl increases by 55%; nevertheless, the FeCrAl resistivity increases more slowly than that of stainless steel 316L. This study allows us to verify the compatibility of the electrode and electrolyte materials with the metallic support at the operating requirements of the IT-SOFC cell. 
The characterizations made in this context will also allow us to choose the most suitable fabrication process for all functional layers in order to limit the oxidation of the metallic supports.
Keywords: stainless steel 316L, FeCrAl alloy, solid oxide fuel cells, porous metallic support
Procedia PDF Downloads 91
632 Fire Safety Assessment of At-Risk Groups
Authors: Naser Kazemi Eilaki, Carolyn Ahmer, Ilona Heldal, Bjarne Christian Hagen
Abstract:
Older people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A person's disability can negatively influence his or her escape time, and this becomes even more important when people from this target group live alone. This research deals with the fire safety of such people's buildings by means of probabilistic methods. For this purpose, fire safety is addressed by modeling the egress of our target group from a hazardous zone to a safe zone. A common type of detached house with a prevalent plan has been chosen for the safety analysis, and a limit state function has been developed according to the time-line evacuation model, which is based on a two-zone smoke development model. An analytical computer model (B-RISK) is used to simulate smoke development. Since most of the parameters involved in the fire development model carry uncertainty, an appropriate probability distribution function has been considered for each of the non-deterministic variables. To assess the safety and reliability of the at-risk groups, the fire safety index method has been chosen to define the probability of failure (casualties) and the safety index (beta index). An improved harmony search meta-heuristic optimization algorithm has been used to determine the beta index. A sensitivity analysis has been done to identify the most important and effective parameters for the fire safety of the at-risk group. Results showed that the area of openings and the distances to egress exits are the more important building parameters, and that safety improves with increasing dimensions of the occupant space (building). Fire growth is more critical than other parameters in a home without a detector and fire extinguishing system, but in a home equipped with these facilities, it is less important. 
The type of disability has a great effect on the safety level of people who live in the same home layout, and people with visual impairment encounter a higher risk of becoming trapped than those with other types of disability.
Keywords: fire safety, at-risk groups, zone model, egress time, uncertainty
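The probabilistic core of this approach (a limit state on egress time and a safety index derived from the failure probability) can be sketched as a Monte Carlo estimate. All distribution parameters below are hypothetical stand-ins for the study's fitted distributions, and the safety index is obtained from the simulated failure probability via the standard normal quantile rather than by harmony search optimization.

```python
import random
from statistics import NormalDist

def probability_of_failure(n=100_000, seed=1):
    # Monte Carlo on the limit state g = ASET - RSET (time-line model):
    # failure when the required safe egress time (pre-movement + travel)
    # exceeds the available safe egress time from smoke development.
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        aset = rng.lognormvariate(5.0, 0.25)   # available time (s), assumed
        pre = rng.lognormvariate(3.5, 0.4)     # pre-movement time (s), assumed
        move = rng.lognormvariate(3.8, 0.3)    # travel time (s), assumed
        if aset - (pre + move) < 0.0:
            failures += 1
    return failures / n

pf = probability_of_failure()
beta = -NormalDist().inv_cdf(pf)   # safety (reliability) index from Pf
print(f"Pf = {pf:.4f}, beta = {beta:.2f}")
```

Slower escape (e.g. a disability that inflates the travel-time distribution) shifts the limit state toward failure, raising Pf and lowering beta, which is exactly the comparison the sensitivity analysis formalizes.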
Procedia PDF Downloads 101
631 Dosimetry in Interventional Radiology Examinations for Occupational Exposure Monitoring
Authors: Ava Zarif Sanayei, Sedigheh Sina
Abstract:
Interventional radiology (IR) uses imaging guidance, including X-rays and CT scans, to deliver therapy precisely. Most IR procedures are performed under local anesthesia and start with a small needle being inserted through the skin, which is why they are sometimes called pinhole surgery or image-guided surgery. There is increasing concern about radiation exposure during interventional radiology procedures due to their complexity. The basic aim of optimizing radiation protection, as outlined in ICRP Publication 139, is to strike a balance between image quality and radiation dose while maximizing benefits, ensuring that diagnostic interpretation remains satisfactory. This study aims to estimate the equivalent doses to the main trunk of the body for the interventional radiologist and the superintendent using LiF:Mg,Ti (TLD-100) chips at the IR department of a hospital in Shiraz, Iran. In the initial stage, the dosimeters were calibrated with the use of various phantoms. Afterward, a group of dosimeters was prepared and then worn for three months. To measure the personal equivalent dose to the body, three TLD chips were put in a tissue-equivalent badge and worn under a protective lead apron. After this period, the TLDs were read out by a TLD reader. The results revealed that these individuals received equivalent doses of 387.39 and 145.11 µSv, respectively. The findings of this investigation revealed that the total radiation exposure of the staff was less than the annual limit of occupational exposure. However, it is imperative to implement appropriate radiation protection measures. Although the dose received by the interventional radiologist is noticeable, it may be due to the use of conventional equipment with over-couch X-ray tubes for interventional procedures. 
It is therefore important to use dedicated equipment and protective means such as glasses and screens whenever compatible with the intervention, when they are available, or to have them fitted to equipment if they are not present. Based on the results, the positioning of staff contributed to the increased dose to the radiologist. Manufacturing and installation of movable lead curtains with a thickness of 0.25 millimeters can effectively minimize the radiation dose to the body. Providing adequate training on radiation safety principles, particularly for technologists, can be an optimal approach to further decreasing exposure.
Keywords: interventional radiology, personal monitoring, radiation protection, thermoluminescence dosimetry
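The comparison of the three-month badge readings against the occupational limit is a short unit-conversion exercise. The sketch below extrapolates each reported quarterly reading to an annual dose (assuming a constant workload over the year, which is an assumption, not something the study states) and compares it with the 20 mSv/yr occupational effective dose limit.

```python
# Reported three-month TLD badge readings from the abstract (in microsieverts)
quarterly_doses_uSv = {
    "interventional radiologist": 387.39,
    "superintendent": 145.11,
}
ANNUAL_LIMIT_mSv = 20.0  # occupational effective dose limit (averaged), ICRP

for role, dose_uSv in quarterly_doses_uSv.items():
    # Scale the quarterly reading to a year (assumes constant workload)
    annual_mSv = dose_uSv * 4 / 1000.0
    print(f"{role}: {annual_mSv:.2f} mSv/yr "
          f"({100.0 * annual_mSv / ANNUAL_LIMIT_mSv:.1f}% of the limit)")
```

Under this constant-workload assumption, both annualized doses stay well below the occupational limit, consistent with the abstract's conclusion.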
Procedia PDF Downloads 61
630 A Lower Dose of Topiramate with Enough Antiseizure Effect: A Realistic Therapeutic Range of Topiramate
Authors: Seolah Lee, Yoohyk Jang, Soyoung Lee, Kon Chu, Sang Kun Lee
Abstract:
Objective: The International League Against Epilepsy (ILAE) currently suggests a topiramate serum level range of 5-20 mg/L. However, numerous institutions have observed a substantial drug response at lower levels. This study aims to investigate the correlation between topiramate serum levels, drug responsiveness, and adverse events to establish a more accurate and tailored therapeutic range. Methods: We retrospectively analyzed topiramate serum samples collected between January 2017 and January 2022 at Seoul National University Hospital. Clinical data, including serum levels, antiseizure regimens, seizure frequency, and adverse events, were collected. Patient responses were categorized as "insufficient" (reduction in seizure frequency <50%) or "sufficient" (reduction ≥ 50%). Within the "sufficient" group, further subdivisions included seizure-free and tolerable seizure subgroups. A population pharmacokinetic model estimated serum levels from spot measurements. ROC curve analysis determined the optimal serum level cut-off. Results: A total of 389 epilepsy patients, with 555 samples, were reviewed, with a mean dose of 178.4±117.9 mg/day and a mean serum level of 3.9±2.8 mg/L. Of the samples, only 5.6% (n=31) exhibited an insufficient response, with a mean serum level of 3.6±2.5 mg/L. In contrast, 94.4% (n=524) of samples demonstrated a sufficient response, with a mean serum level of 4.0±2.8 mg/L. This difference was not statistically significant (p = 0.45). Among the 78 reported adverse events, logistic regression analysis identified a significant association between ataxia and serum concentration (p = 0.04), with an optimal cut-off value of 6.5 mg/L. In the subgroup of patients receiving monotherapy, those in the tolerable seizure group exhibited a significantly higher serum level compared to the seizure-free group (4.8±2.0 mg/L vs 3.4±2.3 mg/L, p < 0.01). 
Notably, patients in the tolerable-seizure group displayed a higher likelihood of progressing to drug-resistant epilepsy during follow-up visits compared to the seizure-free group. Significance: This study proposed an optimal therapeutic concentration for topiramate based on the patients' responsiveness to the drug and the incidence of adverse effects. We employed a population pharmacokinetic model and analyzed topiramate serum levels to recommend a serum level below 6.5 mg/L to mitigate the risk of ataxia-related side effects. Our findings also indicated that topiramate dose elevation is unnecessary for suboptimal responders, as the drug's effectiveness plateaus at minimal doses.
Keywords: topiramate, therapeutic range, low dose, antiseizure effect
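The abstract does not describe how the 6.5 mg/L cut-off was read off the ROC curve; one common choice is the threshold maximizing Youden's J statistic (sensitivity + specificity - 1). A minimal sketch, with made-up serum levels and ataxia outcomes (the function name and data are illustrative, not from the study):

```python
# Sketch: locate an ROC-optimal serum-level cut-off via Youden's J.
# The serum levels and event flags below are invented for illustration.

def youden_cutoff(levels, events):
    """Return the level threshold maximizing sensitivity + specificity - 1.

    levels : serum concentrations (mg/L)
    events : 1 if the adverse event (e.g. ataxia) occurred, else 0
    """
    pos = sum(events)
    neg = len(events) - pos
    best_j, best_cut = -1.0, None
    for cut in sorted(set(levels)):
        tp = sum(1 for l, e in zip(levels, events) if e and l >= cut)
        fp = sum(1 for l, e in zip(levels, events) if not e and l >= cut)
        j = tp / pos + (neg - fp) / neg - 1  # sensitivity + specificity - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut

# toy data: adverse events cluster above ~6.5 mg/L
levels = [2.1, 3.0, 3.6, 4.2, 5.0, 5.9, 6.5, 7.2, 8.0, 9.1]
events = [0,   0,   0,   0,   0,   0,   1,   1,   1,   1]
print(youden_cutoff(levels, events))  # 6.5 with this toy data
```

In practice a library routine (e.g. an ROC implementation from a statistics package) would replace the hand-rolled loop, but the selection criterion is the same.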
Procedia PDF Downloads 53
629 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)
Authors: Azimollah Aleshzadeh, Enver Vural Yavuz
Abstract:
The main goal of this study is to produce landslide susceptibility maps using different data-driven bivariate statistical approaches, namely the entropy weight method (EWM), evidence belief function (EBF), and information content model (ICM), at Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from an interpretation of high-resolution satellite images and earlier reports, as well as by carrying out field surveys. In total, 42 landslide incidence polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslide incidences) for building the EWM, EBF, and ICM models; the remaining 30% (12 landslide incidences) were used for verification purposes. Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, orientation of slope, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the different statistical models (EWM, EBF, and ICM). The model results were validated with landslide incidences that were not used during model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed that the AUC values for the success rates are 0.7055, 0.7221, and 0.7368, while those for the prediction rates are 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively.
Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the portion of construction and verification landslide incidences falling in the high and very high susceptibility classes of each map was determined. The results showed that the EWM, EBF, and ICM models produced satisfactory accuracy. The obtained landslide susceptibility maps may be useful for future natural hazard mitigation studies and planning purposes for environmental protection.
Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping
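The success/prediction-rate AUC values cited above are typically obtained by ranking map cells from most to least susceptible, plotting the cumulative share of observed landslide cells against the cumulative share of study-area cells, and integrating under that curve. A minimal sketch of that computation, on an invented four-cell toy map (not the authors' code or data):

```python
# Sketch: AUC of a success/prediction rate curve by the trapezoidal rule.
# Cells are ranked by susceptibility; the curve tracks how quickly the
# ranking accumulates the observed landslide cells. Data are illustrative.

def rate_curve_auc(susceptibility, is_landslide):
    """AUC of the rate curve for one susceptibility map."""
    order = sorted(range(len(susceptibility)),
                   key=lambda i: susceptibility[i], reverse=True)
    total = sum(is_landslide)
    n = len(susceptibility)
    auc, cum, prev_frac = 0.0, 0, 0.0
    for i in order:                      # most to least susceptible cell
        cum += is_landslide[i]
        frac = cum / total               # cumulative share of landslide cells
        auc += (frac + prev_frac) / 2 * (1 / n)  # one-cell-wide trapezoid
        prev_frac = frac
    return auc

# perfect ranking of a toy 4-cell map: both landslide cells come first
print(rate_curve_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 0.75
```

Note that for this curve the maximum attainable AUC depends on landslide prevalence (here 0.75 for a perfect ranking of a half-landslide toy map), which is one reason success and prediction rates are compared between models rather than against 1.0.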
Procedia PDF Downloads 131
628 The Effectiveness of Men Who Have Sex with Men (MSM) Sensitivity Training for Nigerian Health Care Providers (HCPs)
Authors: Chiedu C. Ifekandu, Olusegun Sangowawa, Jean E. Njab
Abstract:
Background: Health care providers (HCPs) in Nigeria receive little or no training on the healthcare needs of men who have sex with men (MSM), limiting the quality and effectiveness of comprehensive HIV prevention and treatment services. Consequently, most MSM disguise themselves to access services, which limits the quality of care provided, partly due to challenges related to stigma and discrimination and breach of confidentiality. Objective: To assess the knowledge of healthcare providers on effective intervention for MSM. Methods: We trained 122 HIV focal persons drawn from 60 health facilities in twelve Nigerian states. The participants were requested to complete a pre-training questionnaire to assess their level of working experience with key populations as a baseline. Participants included male and female doctors, nurses, and counselors/testers. A test was administered to measure their knowledge of MSM sexual risk practices, HIV prevention, and healthcare needs, and also to assess their attitudes (including homophobia) and beliefs and how these affect service uptake by key populations, particularly MSM, both prior to and immediately after the training, to ascertain the impact of the training. Results: The mean age of the HCPs was 38 years ± SD. The 122 HCPs trained (45% female, 55% male; 85% counsellors/testers; 15% doctors and nurses; 92% working in government facilities) came from 42 health facilities, of whom 105 attempted the test questions. At baseline, few HCPs reported any prior sensitivity training on MSM. Most of the HCPs had limited knowledge of MSM sexual health needs. Over 90% of the HCPs believed that homosexuality is a mental illness. 8% did not consider MSM, FSW, and PWID as key populations for HIV infection. 45% lacked knowledge of MSM anal sexual practices. The post-test showed that homophobic attitudes had decreased significantly by the end of the training, and the health care providers had acquired basic knowledge compared to the pre-test.
Conclusions: Scaling up MSM sensitivity training for Nigerian HCPs is likely to be a timely and effective means to improve their understanding of MSM-related health issues, reduce homophobic sentiments, and enhance their capacity to provide responsive HIV prevention, treatment, and care services in a supportive and non-stigmatizing environment.
Keywords: healthcare providers, key population, men who have sex with men, HCT
Procedia PDF Downloads 354
627 Heteroatom-Doped Binary Metal Oxide Modified Carbon as a Bifunctional Electrocatalyst for All-Vanadium Redox Flow Battery
Authors: Anteneh Wodaje Bayeh, Daniel Manaye Kabtamu, Chen-Hao Wang
Abstract:
As one of the most promising electrochemical energy storage systems, vanadium redox flow batteries (VRFBs) have received increasing attention owing to their attractive features for large-scale storage applications. However, their high production cost and relatively low energy efficiency still limit their feasibility. For practical implementation, it is of great interest to improve their efficiency and reduce their cost. One of the key components of VRFBs that can greatly influence the efficiency and final cost is the electrode, which provides the reaction sites for the redox couples (VO²⁺/VO₂⁺ and V²⁺/V³⁺). Carbon-based materials are considered the most feasible electrode materials in the VRFB because of their excellent potential in terms of operation range, good permeability, large surface area, and reasonable cost. However, owing to their limited electrochemical activity and reversibility and their poor wettability caused by hydrophobic surfaces, the performance of cells employing carbon-based electrodes has remained limited. To address these challenges, we synthesized a heteroatom-doped bimetallic oxide grown on the surface of carbon through a one-step approach. When applied to VRFBs, the prepared electrode exhibits a significant electrocatalytic effect toward the VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox reactions compared with pristine carbon. It is found that the presence of the heteroatom on the metal oxide promotes the adsorption of vanadium ions. The controlled morphology of the bimetallic oxide also exposes more active sites for the redox reactions of vanadium ions. Hence, the prepared electrode displays the best electrochemical performance, with energy and voltage efficiencies of 74.8% and 78.9%, respectively, which is much higher than the 59.8% and 63.2% obtained from pristine carbon at high current density.
Moreover, the electrode exhibits durability and stability in an acidic electrolyte during long-term operation for 1000 cycles at a high current density.
Keywords: VRFB, VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox couples, graphite felt, heteroatom doping
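The two reported figures are linked by the standard flow-battery relation EE = CE × VE (energy efficiency = coulombic efficiency × voltage efficiency), so the unreported coulombic efficiency can be backed out. A quick sketch of that check (the abstract does not report CE directly; the calculation below is ours):

```python
# Sketch: back out coulombic efficiency from the standard relation
# EE = CE * VE, using the efficiencies reported for the doped electrode.

def coulombic_efficiency(energy_eff, voltage_eff):
    """CE = EE / VE, all as fractions in [0, 1]."""
    return energy_eff / voltage_eff

ce = coulombic_efficiency(0.748, 0.789)   # EE = 74.8 %, VE = 78.9 %
print(round(ce, 3))  # 0.948, i.e. ~94.8 % coulombic efficiency
```

The same relation applied to the pristine-carbon figures (59.8% / 63.2% ≈ 94.6%) suggests the doping mainly improves voltage efficiency rather than charge retention.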
Procedia PDF Downloads 95
626 Layer-By-Layer Deposition of Poly (Amidoamine) and Poly (Acrylic Acid) on Grafted-Polylactide Nonwoven with Different Surface Charge
Authors: Sima Shakoorjavan, Mahdieh Eskafi, Dawid Stawski, Somaye Akbari
Abstract:
In this study, poly (amidoamine) dendritic material (PAMAM) and poly (acrylic acid) (PAA), as polycation and polyanion, were deposited on surface-charged polylactide (PLA) nonwoven to study the relationship between the dye absorption capacity of the layered PLA and the number of deposited layers. To produce negatively charged PLA, acrylic acid (AA) was grafted on the PLA surface (PLA-g-AA) through a chemical redox reaction with a strong oxidizing agent. Spectroscopy analysis, water contact measurement, and FTIR-ATR analysis confirm the successful grafting of AA on the PLA surface through the chemical redox reaction method. In detail, an increase in dye absorption percentage by 19% and immediate absorption of water droplets confirmed the hydrophilicity of the PLA-g-AA surface, and the presence of a new carbonyl bond at 1530 cm⁻¹ and a wide hydroxyl peak between 3680-3130 cm⁻¹ confirms AA grafting. In addition, PLA, as a linear polyester, can undergo aminolysis, which is the cleavage of ester bonds and their replacement with amide bonds when exposed to an aminolysis agent. Therefore, to produce positively charged PLA, PAMAM, as an amine-terminated dendritic material, was introduced to the PLA molecular chains at different conditions: (1) at 60 °C for 0.5, 1, 1.5, and 2 hours of aminolysis, and (2) at room temperature (RT) for 1, 2, 3, and 4 hours of aminolysis. Weight changes and spectrophotometer measurements showed maxima in the weight-gain graph and the K/S value curve, indicating the highest PAMAM attachment at 60 °C for 1 hour and at RT for 2 hours, which are considered the optimum conditions. Also, the emerging new peak around 1650 cm⁻¹, corresponding to N-H bending vibration, and the double wide peak at around 3670-3170 cm⁻¹, corresponding to N-H stretching vibration, confirm PAMAM attachment under the selected optimum conditions. Subsequently, depending on the initial surface charge of the grafted PLA, layer-by-layer (LbL) deposition was performed, starting with either PAA or PAMAM.
FTIR-ATR results confirm chemical changes in the samples due to deposition of the first layer (PAA or PAMAM). Generally, spectroscopy analysis indicated that an increase in layer number reduced the dye absorption capacity. This can be attributed to the partial deposition of each new layer on the previously deposited layer; therefore, more PAMAM is available at the first layer than at the third. In detail, for layered PLA whose LbL deposition started with the negatively charged surface, having PAMAM as the top layer (PLA-g-AA/PAMAM) gave the highest absorption of both the cationic and the anionic model dyes.
Keywords: surface modification, layer-by-layer technique, dendritic materials, PAMAM, dye absorption capacity, PLA nonwoven
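The abstract does not define the K/S value; in textile spectrophotometry it is conventionally obtained from the measured reflectance via the Kubelka-Munk relation K/S = (1 - R)² / (2R), evaluated at the wavelength of maximum absorption. A minimal sketch, assuming that convention and using an illustrative reflectance value:

```python
# Sketch: K/S (color strength) from fractional reflectance via the
# Kubelka-Munk relation. The reflectance value below is illustrative.

def kubelka_munk(reflectance):
    """K/S from fractional reflectance R, 0 < R <= 1."""
    if not 0 < reflectance <= 1:
        raise ValueError("reflectance must be in (0, 1]")
    return (1 - reflectance) ** 2 / (2 * reflectance)

# a deeper shade (lower R) yields a higher K/S, hence more dye/PAMAM uptake
print(round(kubelka_munk(0.2), 3))  # 1.6
```

A maximum in the K/S curve across aminolysis times thus corresponds to the condition of deepest shade, i.e. the highest PAMAM attachment.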
Procedia PDF Downloads 81
625 Renewable Energy and Environment: Design of a Decision Aided Tool for Sustainable Development
Authors: Mustapha Ouardouz, Mina Amharref, Abdessamed Bernoussi
Abstract:
For countries with limited energy resources, the energy future lies in renewable energies (solar, wind, etc.). Renewable energies constitute a major component of the energy strategy to cover a substantial part of growing needs and contribute to environmental protection by replacing fossil fuels. Indeed, sustainable development involves the promotion of renewable energy and the preservation of the environment through the use of clean energy technologies, to limit emissions of greenhouse gases and reduce the pressure exerted on the forest cover. Studies of the impact of energy use on the environment and of farm-related risks are therefore necessary. For that, a global approach integrating all the various sectors involved in such projects seems to be the best one. In this paper, we present an approach based on multi-criteria analysis and the realization of a pilot, to achieve the development of an innovative geo-intelligent environmental platform. An implementation of this platform will collect, process, analyze, and manage environmental data in connection with the nature of the energy used in the studied region. As an application, we consider a region in the north of Morocco characterized by intense agricultural and industrial activities and using diverse renewable energies.
The strategic goals of this platform are: decision support for better governance; improving the responsiveness of the public and private companies connected to it by providing them in real time with reliable data, along with possibilities for modeling and simulating energy scenarios; the identification of socio-technical solutions to introduce renewable energies and the estimation of the technical and implementable potential through socio-economic analyses, together with the assessment of infrastructure for the region and its communities; and the preservation and enhancement of natural resources for better citizen governance through democratization of access to environmental information. The tool will also perform simulations integrating the environmental impacts of natural disasters, particularly those linked to climate change. Indeed, extreme cases such as floods, droughts, and storms will no longer be rare and should therefore be integrated into such projects.
Keywords: renewable energies, decision aided tool, environment, simulation
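The abstract names multi-criteria analysis without detailing the aggregation step; the simplest common variant is a weighted sum, in which each energy scenario receives a normalized score per criterion, the criteria receive weights summing to one, and scenarios are ranked by their weighted totals. A minimal sketch under that assumption (all criteria, weights, and scores below are invented for illustration, not from the paper):

```python
# Sketch: weighted-sum step of a multi-criteria analysis for ranking
# energy scenarios. Weights and scores are illustrative placeholders.

def mca_rank(scores, weights):
    """Rank alternatives by weighted sum; scores: {name: per-criterion list}."""
    totals = {name: sum(s * w for s, w in zip(vals, weights))
              for name, vals in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

weights = [0.4, 0.3, 0.3]             # e.g. cost, CO2 reduction, feasibility
scores = {"solar":  [0.8, 0.9, 0.7],  # normalized 0-1 scores (made up)
          "wind":   [0.6, 0.8, 0.9],
          "fossil": [0.9, 0.1, 0.8]}
print(mca_rank(scores, weights))  # ['solar', 'wind', 'fossil']
```

In a geo-intelligent platform, the per-criterion scores would themselves come from spatial layers and simulations, with the weights set through the governance process the abstract describes.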
Procedia PDF Downloads 459