Search results for: safety equipment detection
735 Inhibitory Action of Fatty Acid Salts against Cladosporium cladosporioides and Dermatophagoides farinae
Authors: Yui Okuno, Mariko Era, Takayoshi Kawahara, Takahide Kanyama, Hiroshi Morita
Abstract:
Introduction: Fungi and mites are well-known allergens that cause allergic diseases such as bronchial asthma and allergic rhinitis. Cladosporium cladosporioides is one of the fungi most often detected in the indoor environment and causes pollution and deterioration. Dermatophagoides farinae is a major source of indoor mite allergens. Therefore, the creation of antifungal agents that combine high safety with a strong antifungal effect is required. Fatty acid salts are known to have antibacterial activities. This report describes the effects of fatty acid salts against Cladosporium cladosporioides NBRC 30314 and Dermatophagoides farinae. Methods: Potassium salts of 9 fatty acids (C4:0, C6:0, C8:0, C10:0, C12:0, C14:0, C18:1, C18:2, C18:3) were prepared by mixing each fatty acid with the appropriate amount of KOH solution to a concentration of 175 mM and pH 10.5. For the antifungal assay, the spore suspension (3.0×10⁴ spores/mL) was mixed with a fatty acid potassium sample (final concentration 175 mM). Samples were counted at 0, 10, 60, and 180 min by plating (100 µL) on PDA, and fungal colonies were counted after incubation for 3 days at 30 °C. The MIC (minimum inhibitory concentration) against the fungus was determined by the two-fold dilution method: each fatty acid salt was inoculated separately with 400 µL of C. cladosporioides at 3.0×10⁴ spores/mL, the mixtures were incubated at the appropriate temperature for 10 min, and the treated fungi were then incubated on PDA at 30 °C for 7 days and examined for spore growth. For the acaricidal assay, twenty D. farinae adult females were used; each adult was covered completely with 2 µL of fatty acid potassium for 1 min and then dried with filter paper. The filter paper was folded, fixed with two clips, and kept at 25 °C and 64% RH. Mortality was determined 48 h after treatment under the microscope; D. farinae was considered dead if its appendages did not move when prodded with a pin. Results and Conclusions: The results show that C8K, C10K, C12K, and C14K were effective in decreasing the survival rate of C. cladosporioides by 4 log units within 10 min of incubation, and C18:3K achieved a 4 log unit decrease after 60 min. C12K showed the highest antifungal activity, with a MIC of 0.7 mM. On the other hand, the fatty acid potassium salts showed no acaricidal effect against D. farinae: the activity of D. farinae was not adversely affected after 48 hours. These results indicate that C12K has high antifungal activity against C. cladosporioides and suggest that potassium salts of fatty acids could be used as antifungal agents.
Keywords: fatty acid salts, antifungal effects, acaricidal effects, Cladosporium cladosporioides, Dermatophagoides farinae
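The 4-log-unit criterion and the two-fold dilution series lend themselves to a quick check. A minimal sketch in Python, assuming hypothetical plate counts (only the 175 mM starting concentration and the 0.7 mM MIC for C12K come from the abstract):

```python
import math

def log_reduction(count_control, count_treated):
    """Log10 reduction in viable spores relative to the untreated control."""
    return math.log10(count_control) - math.log10(count_treated)

# Hypothetical counts (CFU/mL): a 4 log unit decrease is a 10^4-fold drop.
print(log_reduction(3.0e4, 3.0))  # -> 4.0, meeting the 4-log-unit criterion

# Two-fold dilution series for MIC determination, starting at 175 mM.
dilutions = [175 / 2**i for i in range(9)]
print([round(c, 2) for c in dilutions])
# [175, 87.5, 43.75, ..., 0.68] -- the reported MIC of 0.7 mM for C12K sits
# at the low end of such a series.
```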
Procedia PDF Downloads 271
734 Emergence of Fluoroquinolone Resistance in Pigs, Nigeria
Authors: Igbakura I. Luga, Alex A. Adikwu
Abstract:
A comparison of resistance to quinolones was carried out on isolates of Shiga toxin-producing Escherichia coli O157:H7 from cattle and mecA- and nuc-gene-harbouring Staphylococcus aureus from pigs. The isolates were tested separately in the first and current decades of the 21st century. The objective was to demonstrate the dissemination of resistance to this frontline class of antibiotics by bacteria from food animals and to bring to light the spread of antibiotic resistance in Nigeria. A total of 10 isolates of E. coli O157:H7 and 9 of mecA- and nuc-gene-harbouring S. aureus were obtained following isolation, biochemical testing, and serological identification using the Remel Wellcolex E. coli O157:H7 test; Shiga toxin production in the E. coli O157:H7 was screened using the verotoxin E. coli reverse passive latex agglutination (VTEC-RPLA) test, and the mecA and nuc genes in S. aureus were identified molecularly. Detection of the mecA and nuc genes was carried out using the protocol of the Danish Technical University (DTU) with the following primers: mecA-1: 5'-GGGATCATAGCGTCATTATTC-3', mecA-2: 5'-AACGATTGTGACACGATAGCC-3', nuc-1: 5'-TCAGCAAATGCATCACAAACAG-3', nuc-2: 5'-CGTAAATGCACTTGCTTCAGG-3' for the mecA and nuc genes, respectively. The nuc genes confirm the isolates as S. aureus, and the mecA genes mark them as methicillin-resistant and therefore pathogenic to man. The fluoroquinolones used in the antibiotic resistance testing were norfloxacin (10 µg) and ciprofloxacin (5 µg) for the E. coli O157:H7 isolates and ciprofloxacin (5 µg) for the S. aureus isolates. Susceptibility was tested using the disk diffusion method on Mueller-Hinton agar. Fluoroquinolone resistance was not detected in the E. coli O157:H7 isolates from cattle. However, 44% (4/9) of the S. aureus isolates were resistant to ciprofloxacin. Resistance of up to 44% in isolates of mecA- and nuc-gene-harbouring S. aureus is compelling evidence of the rapid spread of antibiotic resistance from bacteria in food animals in Nigeria. Ciprofloxacin is the drug of choice for the treatment of typhoid fever; therefore, widespread resistance to it in pathogenic bacteria is of great public health significance. The study concludes that antibiotic resistance in bacteria from food animals is on the increase in Nigeria. The National Agency for Food and Drug Administration and Control (NAFDAC) in Nigeria should implement the World Health Organization (WHO) global action plan on antimicrobial resistance. A good starting point would be coordinating the WHO, World Organisation for Animal Health (OIE), and Food and Agriculture Organization (FAO) tripartite draft antimicrobial resistance monitoring and evaluation (M&E) framework in Nigeria.
Keywords: fluoroquinolone, Nigeria, resistance, Staphylococcus aureus
Procedia PDF Downloads 456
733 Prescription of Maintenance Fluids in the Emergency Department
Authors: Adrian Craig, Jonathan Easaw, Rose Jordan, Ben Hall
Abstract:
The prescription of intravenous fluids is a fundamental component of inpatient management, but it is one which usually receives little thought. Fluids are a drug, which like any other can cause harm when prescribed inappropriately or wrongly. However, it is well recognised that this is done poorly, especially in acute portals. The National Institute for Health and Care Excellence (NICE) recommends 1 mmol/kg of potassium, sodium, and chloride per day. With various fluid options, clinicians tend to face difficulty in choosing the most appropriate maintenance fluid, and there is a reluctance to prescribe potassium as part of an intravenous maintenance fluid regime. The aim was to prospectively audit the prescription of the first bag of intravenous maintenance fluids, the use of urea and electrolytes results to guide the choice of fluid, and the use of fluid prescription charts, in the busy emergency department of a major trauma centre in Stoke-on-Trent, United Kingdom. This was undertaken over a week in early November 2016. Of those prescribed maintenance fluid, only 8.9% were prescribed the fluid most appropriate for their daily electrolyte requirements. This audit has further highlighted the issues faced in busy Emergency Departments within hospitals that are stretched and lack capacity for prompt transfer to a ward. It has supported the finding of NICE that intravenous fluid therapy is poorly prescribed in emergency admission portals such as Emergency Departments. The findings have enabled simple steps to be taken to educate clinicians about their fluid of choice. These include: posters reminding clinicians to consider the urea and electrolyte values before prescription, the inclusion of a suggested intravenous fluid of choice in the trust's prescription chart, and a session within the introduction programme revising intravenous fluid therapy and daily electrolyte requirements. Moving forward, once the interventions have been implemented, the data will be reaudited in six months to note any improvement in maintenance fluid choice. Alongside this, an audit of the rate of intravenous maintenance fluid therapy is proposed, to further increase patient safety by avoiding unintentional fluid overload, which may cause unnecessary harm to patients within the hospital. In conclusion, prescription of maintenance fluid therapy was poor within the Emergency Department, and there is a great deal of opportunity for improvement. Therefore, the measures listed above will be implemented and the data reaudited.
Keywords: chloride, electrolyte, emergency department, emergency medicine, fluid, fluid therapy, intravenous, maintenance, major trauma, potassium, sodium, trauma
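The NICE figure translates into a simple weight-based calculation; a sketch assuming a 70 kg adult (the 25-30 mL/kg/day fluid volume is NICE CG174 guidance added here for context, not stated in the abstract):

```python
def daily_maintenance_requirements(weight_kg):
    """Approximate daily maintenance needs: ~1 mmol/kg/day each of K+, Na+
    and Cl- (as cited in the abstract), plus 25-30 mL/kg/day of water."""
    return {
        "potassium_mmol": 1.0 * weight_kg,
        "sodium_mmol": 1.0 * weight_kg,
        "chloride_mmol": 1.0 * weight_kg,
        "water_ml": (25 * weight_kg, 30 * weight_kg),
    }

# A 70 kg adult needs roughly 70 mmol/day of each electrolyte. For scale,
# one litre of 0.9% saline already contains 154 mmol of sodium -- more than
# double that requirement, which is why fluid choice matters.
print(daily_maintenance_requirements(70))
```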
Procedia PDF Downloads 321
732 Nondecoupling Signatures of Supersymmetry and an Lμ-Lτ Gauge Boson at Belle-II
Authors: Heerak Banerjee, Sourov Roy
Abstract:
Supersymmetry, one of the most celebrated frameworks for explaining experimental observations where the standard model (SM) falls short, is reeling from the lack of experimental vindication. At the same time, the idea of additional gauge symmetry, in particular gauged Lμ-Lτ symmetric models, has also generated significant interest. Such models have been extensively proposed to explain the tantalizing discrepancy between the predicted and measured value of the muon anomalous magnetic moment, alongside several other issues plaguing the SM. While very little parameter space within these models remains unconstrained, this work finds that the γ + Missing Energy (ME) signal at the Belle-II detector will be a smoking gun for supersymmetry (SUSY) in the presence of a gauged U(1)Lμ-Lτ symmetry. A remarkable consequence of breaking the enhanced symmetry appearing in the limit of degenerate (s)leptons is the nondecoupling of the radiative contribution of heavy charged sleptons to the γ-Z΄ kinetic mixing. The signal process, e⁺e⁻ → γZ΄ → γ + ME, is an outcome of this ubiquitous feature. Taking into account the severe constraints on gauged Lμ-Lτ models from several low energy observables, it is shown that any significant excess in all but the highest photon energy bin would be an undeniable signature of such heavy scalar fields in SUSY coupling to the additional gauge boson Z΄. The number of signal events depends crucially on the logarithm of the ratio of stau to smuon mass in the presence of SUSY. In addition, the number is inversely proportional to the e⁺e⁻ collision energy, making a low-energy, high-luminosity collider like Belle-II an ideal testing ground for this channel. This process can probe large swathes of the hitherto free slepton mass ratio vs. additional gauge coupling (gₓ) parameter space. More importantly, it can explore the narrow slice of Z΄ mass (MZ΄) vs. gₓ parameter space still allowed in gauged U(1)Lμ-Lτ models for superheavy sparticles. The finding that the signal significance is independent of the individual slepton masses is an exciting prospect indeed. Further, the prospect that signatures of even superheavy SUSY particles that may have escaped detection at the LHC may show up at the Belle-II detector is an invigorating revelation.
Keywords: additional gauge symmetry, electron-positron collider, kinetic mixing, nondecoupling radiative effect, supersymmetry
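The mass-ratio dependence can be made explicit. The slepton loop induces a kinetic mixing of the schematic one-loop form below (a sketch assuming the generic loop factor; the authors' exact expression may differ by O(1) coefficients):

```latex
\epsilon_{\gamma Z'} \;\sim\; \frac{e\, g_X}{16\pi^2}\,
\ln\!\left(\frac{m_{\tilde{\tau}}^{2}}{m_{\tilde{\mu}}^{2}}\right)
```

Because only the ratio of the stau and smuon masses enters, the mixing, and with it the signal rate, survives even when both masses are taken superheavy; this is the nondecoupling behaviour the proposed search exploits.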
Procedia PDF Downloads 126
731 Risk Assessment on New Bio-Composite Materials Made from Water Resource Recovery
Authors: Arianna Nativio, Zoran Kapelan, Jan Peter van der Hoek
Abstract:
Bio-composite materials are becoming increasingly popular in various applications, such as the automotive industry. Bio-composite materials are usually made from natural resources recovered from plants; now, a new type of bio-composite material has begun to be produced in the Netherlands. This material is made from resources recovered from drinking water treatment (calcite), wastewater treatment (cellulose), and surface water management (aquatic plants). Surface water, raw drinking water, and wastewater can be contaminated with pathogens and chemical compounds. Therefore, it would be valuable to develop a framework to assess, monitor, and control the potential risks. Indeed, the goal is to define the major risks in terms of human health, quality of materials, and the environment associated with the production and application of these new materials. This study describes the general risk assessment framework, starting with a qualitative risk assessment. The qualitative risk analysis was carried out using the HAZOP methodology for the hazard identification phase. The HAZOP methodology is logical and structured and able to identify hazards in the first stage of design, when hazards and associated risks are not well known. The identified hazards were analyzed to define the potential associated risks, which were then evaluated using qualitative Event Tree Analysis (ETA). ETA is a logical methodology used to define the consequences of a specific hazardous incident, evaluating the failure modes of safety barriers and the dangerous intermediate events that lead to the final scenario (risk). This paper shows the effectiveness of combining the HAZOP and qualitative ETA methodologies for hazard identification and risk mapping. Key risks were then identified, and a quantitative framework was developed based on the types of risks identified, such as quantitative microbial risk assessment (QMRA) and quantitative chemical risk assessment (QCRA). These two models were applied to assess human health risks due to the presence of pathogens and chemical compounds such as heavy metals in the bio-composite materials. Due to these contaminations, the bio-composite product might, during its application, release toxic substances into the environment, leading to a negative environmental impact. Therefore, leaching tests are planned to simulate the application of these materials in the environment and evaluate the potential leaching of inorganic substances, assessing the environmental risk.
Keywords: bio-composite, risk assessment, water reuse, resource recovery
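The ETA logic described above has a direct quantitative analogue: the probability of a final scenario is the product of the initiating-event probability and the failure probabilities of each barrier along the branch. A minimal Python sketch with purely illustrative probabilities (none come from the study):

```python
def event_tree_branch(p_initiating, barrier_failure_probs):
    """Probability of the end scenario in which the initiating event occurs
    and every safety barrier along the branch fails."""
    p = p_initiating
    for p_fail in barrier_failure_probs:
        p *= p_fail
    return p

# Illustrative branch: pathogen present in recovered cellulose (initiating
# event), then failure of hygienisation, quality control, and safe handling.
p_scenario = event_tree_branch(0.10, [0.05, 0.20, 0.50])
print(f"P(exposure scenario) = {p_scenario:.4f}")  # 0.0005
```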
Procedia PDF Downloads 107
730 Children Asthma; The Role of Molecular Pathways and Novel Saliva Biomarkers Assay
Authors: Seyedahmad Hosseini, Mohammadjavad Sotoudeheian
Abstract:
Introduction: Allergic asthma is a heterogeneous immuno-inflammatory disease based on Th2-mediated inflammation. Histopathologic abnormalities of the airways characteristic of asthma include epithelial damage and subepithelial collagen deposition. Objectives: In human bronchial epithelial cells, expression of TNF‑α, IL‑6, ICAM‑1, and VCAM‑1 and the nuclear factor (NF)‑κB signaling pathway are up-regulated during inflammatory cascades. Moreover, immunofluorescence assays confirmed the nuclear translocation of NF‑κB p65 during inflammatory responses. LDH leakage assays suggested that LPS-induced cell injury and the associated mechanisms are coincident events. LPS-induced phosphorylation of ERK and JNK causes inflammation in epithelial cells, which can be countered through inhibition of ERK and JNK activation and of the NF-κB signaling pathway. Furthermore, inhibition of NF-κB mRNA expression and of the nuclear translocation of NF-κB leads to anti-inflammatory events. Likewise, activation of SUMF2 inhibits IL-13 and reduces Th2 cytokine, NF-κB, and IgE levels to ameliorate asthma. On the other hand, TNF-α-induced mucus production is reduced by lowering NF-κB activation through inhibition of the activation status of Rac1 and of IκBα phosphorylation. In addition, the bradykinin B2 receptor (B2R), which mediates airway remodeling, is regulated through NF-κB, and bronchial B2R expression is constitutively elevated in allergic asthma. Certain NF-κB-dependent chemokines also function to recruit eosinophils into the airway. Besides, bromodomain containing 4 (BRD4) plays a significant role in mediating the innate immune response in human small airway epithelial cells, as does transglutaminase 2 (TG2), which is detectable in saliva. The guanine nucleotide-binding regulatory protein α-subunit Gα16 drives a κB-driven luciferase reporter; this response is accompanied by phosphorylation of IκBα, and expression of Gα16 in saliva markedly enhances TNF-α-induced κB reporter activity. Methods: The method applied to demonstrate NF-κB activation is the electrophoretic mobility shift assay (EMSA). Detection of the B2R-BRD4-TG2 complex in saliva by immunoassay, combined with EMSA of NF-κB activation, may provide a novel biomarker for asthma diagnosis and follow-up. Conclusion: This concept introduces the NF-κB signaling pathway as a source of potential asthma biomarkers and promising targets for the development of new therapeutic strategies against asthma.
Keywords: NF-κB, asthma, saliva, T-helper
Procedia PDF Downloads 95
729 Evaluating Radiation Dose for Interventional Radiologists Performing Spine Procedures
Authors: Kholood A. Baron
Abstract:
While the number of radiologists specialized in spine interventional procedures in Kuwait is limited, the number of patients demanding these procedures is increasing rapidly. Due to this high demand, the workload of radiologists is increasing, which might represent a radiation exposure concern. During these procedures, the doctor's hands are in very close proximity to, if not within, the main radiation beam. The aim of this study is to measure the radiation dose received by radiologists during several spine interventional procedures. Methods: Two doctors carrying different workloads were included. DR1 performed procedures in the morning and afternoon shifts, while DR2 performed procedures in the morning shift only. Comparing the radiation exposure that each doctor's hand receives allows radiation safety to be assessed and helps to set workload regulations for radiologists carrying a heavy schedule of such procedures. Entrance Skin Dose (ESD) was measured via a thermoluminescent dosimeter (TLD) placed at the right wrist of each radiologist. DR1 covered the morning shift in one hospital (Mubarak Al-Kabeer Hospital) and the afternoon shift in another (Dar Alshifa Hospital); the TLD chip was placed in his gloves during both shifts for a whole week. Since DR2 covered the morning shift only, in Al Razi Hospital, he wore the TLD during the morning shift for a week. It is worth mentioning that DR1 performed 4-5 spine procedures per day in the morning and the same number in the afternoon, while DR2 performed 5-7 procedures per day. This procedure was repeated for 4 consecutive weeks in order to calculate the ESD that a hand receives in a month. Results: In general, the radiation doses the hand received in a week ranged from 0.12 to 1.12 mSv. The ESD values for DR1 for the four consecutive weeks were 1.12, 0.32, 0.83, and 0.22 mSv; for a month (4 weeks) this totals 2.49 mSv, calculated to be 27.39 mSv per year (11 months, since each radiologist has 45 days of leave per year). For DR2, the weekly ESD values were 0.43, 0.74, 0.12, and 0.61 mSv; thus, a month equals 1.9 mSv and a year 20.9 mSv. These values are below the standard level and well below the maximum limit of 500 mSv per year set by the ICRP (International Commission on Radiological Protection). However, it is worth mentioning that DR1 was a senior consultant and hence needed less fluoroscopy time during each procedure. This is evident from the low ESD values of the second week (0.32) and the fourth week (0.22), even though he was performing nearly 10-12 procedures a day, 5 days a week. These values were lower than or in the same range as those for DR2 (a junior consultant). This highlights the importance of increasing radiologists' skills and awareness of the effect of fluoroscopy time. In conclusion, the radiation dose that radiologists received during spine interventional radiology in our setting was below standard dose limits.
Keywords: radiation protection, interventional radiology dosimetry, ESD measurements, radiologist radiation exposure
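The dose extrapolation in the abstract is simple enough to reproduce; a sketch using the reported weekly TLD readings:

```python
def annual_esd(weekly_esd_msv, working_months=11):
    """Sum four weekly ESD readings into a monthly dose, then scale to a
    working year of 11 months (allowing for 45 days of annual leave)."""
    monthly = sum(weekly_esd_msv)
    return monthly, monthly * working_months

dr1 = annual_esd([1.12, 0.32, 0.83, 0.22])  # (2.49, 27.39) mSv
dr2 = annual_esd([0.43, 0.74, 0.12, 0.61])  # (1.90, 20.90) mSv
print(dr1, dr2)
# Both annual figures are far below the 500 mSv/year ICRP extremity limit.
```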
Procedia PDF Downloads 56
728 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling
Authors: Danlei Yang, Luofeng Huang
Abstract:
The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. However, the core of a digital twin should be its model, which can mirror, shadow, and thread with the real-world entity, and this remains underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate the real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above the solar panel and two inclined at a 45° angle in front of and behind it. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability of the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters. An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record the heave and pitch amplitudes of the floating system's motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to the digital model, which processes historical and real-time data, identifies patterns, and predicts the system's performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It offers useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve the overall solar energy yield while minimising operational costs and risks.
Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence
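As a sketch of the ANN component, a generic surrogate model assuming scikit-learn and illustrative sensor features (the authors' actual architecture and data are not specified in the abstract):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Toy training data: [irradiance W/m^2, panel temp degC, wave height mm,
# pitch deg] -> electrical power output (W), with synthetic ground truth
# in which power rises with irradiance and falls with panel temperature.
rng = np.random.default_rng(0)
X = rng.uniform([0, 10, 0, -5], [1000, 60, 100, 5], size=(500, 4))
y = 0.15 * X[:, 0] * (1 - 0.004 * (X[:, 1] - 25)) + rng.normal(0, 2, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
```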
Procedia PDF Downloads 4
727 Interpretation of Two Indices for the Prediction of Cardiovascular Risk in Pediatric Obesity
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Obesity and weight gain are associated with increased risk of developing cardiovascular diseases and with the progression of liver fibrosis. The aspartate transaminase-to-platelet ratio index (APRI) and fibrosis-4 (FIB-4) were primarily conceived as formulas capable of differentiating hepatitis from cirrhosis. Recently, they have found clinical use as measures of liver fibrosis and cardiovascular risk. However, their status in children has not yet been evaluated in detail. The aim of this study is to determine APRI and FIB-4 status in obese (OB) children and compare them with values found in children with normal body mass index (N-BMI). A total of sixty-eight children examined in the outpatient clinics of the Pediatrics Department of Tekirdag Namik Kemal University Medical Faculty were included in the study. Two groups were constituted. The first group comprised thirty-five children with N-BMI, whose age- and sex-dependent BMI percentiles varied between 15 and 85. The second group comprised thirty-three OB children whose BMI percentile values were between 95 and 99. Anthropometric measurements and routine biochemical tests were performed. Using these parameters, values for the related indices, BMI, APRI, and FIB-4, were calculated. Appropriate statistical tests were used for the evaluation of the study data, with statistical significance accepted at p<0.05. In the OB group, the values found for APRI and FIB-4 were higher than those calculated for the N-BMI group; however, the difference between the groups was not statistically significant. A similar pattern was detected for triglyceride (TRG) values. The correlation coefficient and degree of significance between APRI and FIB-4 were r=0.336 and p=0.065 in the N-BMI group; they were r=0.707 and p=0.001 in the OB group. Associations of these two indices with TRG showed that this parameter was strongly correlated (p<0.001) with both APRI and FIB-4 in the OB group, whereas no correlation was found in children with N-BMI. Triglycerides are associated with an increased risk of fatty liver, which can progress to severe clinical problems such as steatohepatitis, which can in turn lead to liver fibrosis. Triglycerides are also independent risk factors for cardiovascular disease. In conclusion, the lack of correlation between TRG and APRI as well as FIB-4 in children with N-BMI, alongside the strong correlations of TRG with these indices in OB children, indicates a possible early tendency towards the development of fatty liver in OB children and points to a potential risk of cardiovascular pathologies in this group. The difference in the APRI vs. FIB-4 correlation between the N-BMI and OB groups (no correlation versus high correlation, respectively) may indicate the importance of including the age and alanine transaminase parameters, in addition to AST and PLT, in the FIB-4 formula.
Keywords: APRI, children, FIB-4, obesity, triglycerides
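For reference, the two indices discussed above are computed from routine laboratory values as follows (standard published formulas, not restated in the abstract; the AST upper limit of normal of 40 IU/L is an assumption):

```python
import math

def apri(ast_iu_l, platelets_1e9_l, ast_uln_iu_l=40.0):
    """AST-to-platelet ratio index: (AST / ULN) / platelets x 100."""
    return (ast_iu_l / ast_uln_iu_l) / platelets_1e9_l * 100

def fib4(age_years, ast_iu_l, alt_iu_l, platelets_1e9_l):
    """Fibrosis-4: (age x AST) / (platelets x sqrt(ALT)). Note that age and
    ALT enter FIB-4 but not APRI, the point raised in the conclusion."""
    return (age_years * ast_iu_l) / (platelets_1e9_l * math.sqrt(alt_iu_l))

# Hypothetical paediatric values: AST 30 IU/L, ALT 25 IU/L, PLT 250x10^9/L.
print(round(apri(30, 250), 3))          # 0.3
print(round(fib4(10, 30, 25, 250), 3))  # 0.24
```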
Procedia PDF Downloads 347
726 Human Rabies Survivors in India: Epidemiological, Immunological and Virological Studies
Authors: Madhusudana S. N., Reeta Mani, Ashwini S., Satishchandra P., Netravati, Udhani V., Fiaz A., Karande S.
Abstract:
Rabies is an acute encephalitis that is considered 100% fatal despite occasional reports of survivors. However, in recent times more cases of human rabies survivors have been reported. In the last 5 years, there have been six laboratory-confirmed human rabies survivors in India alone. All cases were children below 15 years, and all contracted the disease through dog bites. All of them had also received a full or partial course of rabies vaccination, and 4 out of 6 had also received rabies immunoglobulin. All cases were treated in intensive care units of hospitals in Bangalore, Mumbai, Chandigarh, Lucknow, and Goa. We report here the results of immunological and virological studies conducted at our laboratory on these patients. The clinical samples obtained from these patients were serum, CSF, nuchal skin biopsy, and saliva. Serum and CSF samples were subjected to the standard RFFIT for estimation of rabies virus neutralizing antibodies (RVNA). Skin biopsy, CSF, and saliva were processed by TaqMan real-time PCR for detection of viral RNA. CSF, saliva, and skin homogenates were also processed for virus isolation by inoculation of suckling mice. PBMCs isolated from fresh blood were subjected to ELISPOT assay to determine the type of immune response (Th1/Th2). Both CSF and serum were also investigated for selected cytokines by Luminex assay. Levels of antibodies to the viral G protein and N protein were determined by ELISA. All survivors had very high titers of RVNA in serum and CSF, 100-fold higher than in non-survivors and vaccinated controls. A five-fold rise in titer could be demonstrated in 4 out of 6 patients. All survivors had a significant increase in antibodies to G protein in both CSF and serum when compared to non-survivors. There was a profound and robust Th1 response in all survivors, indicating that interferon gamma could be an important factor in virus clearance. We could isolate viral RNA from only one patient, four years after he had developed symptoms; partial N gene sequencing revealed 99% homology to the species I strain prevalent in India. Levels of selected cytokines in CSF and serum did not reveal any difference between survivors and non-survivors. To conclude, survival from rabies is mediated by virus-specific immune responses of the host, and clearance of rabies virus from the CNS may involve the participation of both Th2 and Th1 immune responses.
Keywords: rabies, rabies treatment, rabies survivors, immune response in rabies encephalitis
Procedia PDF Downloads 328
725 A Holistic Analysis of the Emergency Call: From in Situ Negotiation to Policy Frameworks and Back
Authors: Jo Angouri, Charlotte Kennedy, Shawnea Ting, David Rawlinson, Matthew Booker, Nigel Rees
Abstract:
Ambulance services need to balance the large volume of emergency (999 in the UK) calls they receive (e.g., West Midlands Ambulance Service reports about 4,000 999 calls per day; about 679,000 calls per year are received in Wales) with dispatching limited resources for on-site intervention to the most critical cases. The process by which Emergency Medical Dispatch (EMD) decisions are made involves risk assessment, with the caller, the call-taker, and clinical teams negotiating risk levels on a case-by-case basis. The Medical Priority Dispatch System (MPDS, also referred to as the Advanced Medical Priority Dispatch System, AMPDS) is used by UK NHS Trusts (e.g., WAST) to process and prioritise 999 calls. MPDS/AMPDS provides structured protocols for call prioritisation and call management. Protocols and policy frameworks have not been examined before in the way we propose in our project. In more detail, the risk factors at play in the EMD negotiation between the caller and call-taker have been analysed in both medical and social science research. Research has focused on the structural, morphological, and phonological aspects that could improve, and train, human-to-human interaction or automate risk detection, as well as on the medical factors that need to be captured from the caller to inform the dispatch decision. There are two significant gaps in our knowledge that we address in our work: 1. the role of backstage clinical teams in translating the caller/call-taker interaction in their internal risk negotiation and, 2. the role of policy frameworks, protocols, and regulations in the framing of institutional priorities and resource allocation. We take a multi-method approach, combining the analysis of 999 calls with the analysis of policy documents, and draw on interaction analysis, corpus methodologies, and thematic analysis. In this paper, we report on our preliminary findings and focus in particular on the risk factors we have identified and their relationship with the regulations that create the frame within which teams operate. We close the paper with the implications of our study for evidence-based policy intervention and recommendations for further research.
Keywords: emergency (999) call, interaction analysis, discourse analysis, ambulance dispatch, medical discourse
Procedia PDF Downloads 101
724 An Investigation of the Structural and Microstructural Properties of Zn1-xCoxO Thin Films Applied as Gas Sensors
Authors: Ariadne C. Catto, Luis F. da Silva, Khalifa Aguir, Valmor Roberto Mastelaro
Abstract:
Pure and doped zinc oxide (ZnO) is one of the most promising metal oxide semiconductors for gas sensing applications due to its well-known high surface-to-volume ratio and surface conductivity. ZnO has been shown to be an excellent gas-sensing material for different gases such as CO, O₂, NO₂, and ethanol. In this context, pure and doped ZnO exhibiting different morphologies and a high surface/volume ratio can be a good option in view of the limitations of current commercial sensors. Different studies have shown that metal doping (e.g., Co, Fe, Mn) enhances the gas sensing properties of ZnO. Motivated by these considerations, the aim of this study was to investigate the role of Co ions in the structural, morphological, and gas sensing properties of nanostructured ZnO samples. ZnO and Zn1-xCoxO (0 < x < 5 wt%) thin films were obtained via the polymeric precursor method. The sensitivity, selectivity, response time, and long-term stability were investigated when the samples were exposed to different concentrations of ozone (O₃) at different working temperatures. The gas sensing properties were probed by electrical resistance measurements. The long- and short-range structural order around the Zn and Co atoms was investigated by X-ray diffraction and X-ray absorption spectroscopy. X-ray photoelectron spectroscopy measurements were performed to identify the elements present on the film surface as well as to determine the sample composition. The microstructural characteristics of the films were analyzed with a field-emission scanning electron microscope (FE-SEM). The Zn1-xCoxO XRD patterns were indexed to the wurtzite ZnO structure, and no second phase was observed even at the higher cobalt contents. Co K-edge XANES spectra revealed the predominance of Co²⁺ ions. XPS characterization revealed that the Co-doped ZnO samples possessed a higher percentage of oxygen vacancies than the undoped ZnO samples, which also contributed to their excellent gas sensing performance. Gas sensor measurements showed that the ZnO and Co-doped ZnO samples exhibit good gas sensing performance in terms of reproducibility and a fast response time (around 10 s). Furthermore, the Co addition helped reduce the working temperature for ozone detection and improved the selective sensing properties.
Keywords: cobalt-doped ZnO, nanostructured, ozone gas sensor, polymeric precursor method
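The abstract does not give the exact response definition used, so the sketch below assumes the conventional resistance ratio for an n-type oxide exposed to an oxidizing gas such as ozone (all values illustrative):

```python
def sensor_response(r_gas_ohm, r_air_ohm):
    """Conventional response to an oxidizing gas for an n-type oxide:
    S = Rg / Ra, since resistance rises on ozone exposure."""
    return r_gas_ohm / r_air_ohm

def response_time(times_s, resistances_ohm, fraction=0.9):
    """Time to reach 90% of the total resistance change after exposure."""
    r0, r_final = resistances_ohm[0], resistances_ohm[-1]
    for t, r in zip(times_s, resistances_ohm):
        if abs(r - r0) >= fraction * abs(r_final - r0):
            return t
    return None

print(sensor_response(5.0e6, 2.0e5))  # S = 25 for these illustrative values
print(response_time([0, 5, 10, 15], [2.0e5, 2.5e6, 4.8e6, 5.0e6]))  # 10 s
```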
Procedia PDF Downloads 245
723 Validation of an Impedance-Based Flow Cytometry Technique for High-Throughput Nanotoxicity Screening
Authors: Melanie Ostermann, Eivind Birkeland, Ying Xue, Alexander Sauter, Mihaela R. Cimpan
Abstract:
Background: New reliable and robust techniques to assess the biological effects of nanomaterials (NMs) in vitro are needed to speed up safety analysis and to identify the key physicochemical parameters of NMs that are responsible for their acute cytotoxicity. The central aim of this study was to validate and evaluate the applicability and reliability of an impedance-based flow cytometry (IFC) technique for the high-throughput screening of NMs. Methods: Eight inorganic NMs from the European Commission Joint Research Centre repository were used: NM-302 and NM-300K (Ag: 200 nm rods and 16.7 nm spheres, respectively), NM-200 and NM-203 (SiO₂: 18.3 nm and 24.7 nm, amorphous), NM-100 and NM-101 (TiO₂: 100 nm and 6 nm anatase, respectively), and NM-110 and NM-111 (ZnO: 147 nm and 141 nm, respectively). The aim was to assess the biological effects of these materials on human monoblastoid (U937) cells. Dispersions of NMs were prepared as described in the NANOGENOTOX dispersion protocol, and cells were exposed to NMs at relevant concentrations (2, 10, 20, 50, and 100 µg/mL) for 24 h. The change in electrical impedance was measured at 0.5, 2, 6, and 12 MHz using the IFC AmphaZ30 (Amphasys AG, Switzerland). A traditional toxicity assay, the Trypan Blue dye exclusion assay, and dark-field microscopy were used to validate the IFC method. Results: Spherical Ag particles (NM-300K) showed the highest toxic effect on U937 cells, followed by the ZnO particles (NM-111 ≥ NM-110). Silica particles were moderately toxic to non-toxic at all concentrations used under these conditions. A higher toxic effect was seen with the smaller TiO₂ particles (NM-101) compared to their larger analogues (NM-100). No interference between the IFC and the NMs used was observed. Uptake and internalization of NMs were observed after 24 hours of exposure, confirming actual NM-cell interactions. Conclusion: The results collected with the IFC demonstrate the applicability of this method for rapid nanotoxicity assessment, and it proved to be less prone to nano-related interference issues than some traditional toxicity assays. Furthermore, this label-free and novel technique shows good potential for up-scaling towards automated high-throughput screening and future NM toxicity assessment. This work was supported by the EC FP7 NANoREG project (Grant Agreement NMP4-LA-2013-310584), the Research Council of Norway project NorNANoREG (239199/O70), the EuroNanoMed II 'GEMN' project (246672), and the UH-Nett Vest project.
Keywords: cytotoxicity, high-throughput, impedance, nanomaterials
Procedia PDF Downloads 360
722 Immunocytochemical Stability of Antigens in Cytological Samples Stored in In-house Liquid-Based Medium
Authors: Anamarija Kuhar, Veronika Kloboves Prevodnik, Nataša Nolde, Ulrika Klopčič
Abstract:
The decision to perform immunocytochemistry (ICC) is usually made on the basis of findings in Giemsa- and/or Papanicolaou-stained smears. More demanding diagnostic cases require additional cytological preparations. It is therefore convenient to suspend cytological samples in a liquid-based medium (LBM) that preserves antigenic and morphological properties. However, how long these properties remain preserved in the medium is usually unknown: eventually, cell morphology becomes impaired and altered, and antigenic properties may be lost or become diffuse. In this study, the influence of the storage time of cytological samples in an in-house liquid-based medium on antigen properties and cell morphology is evaluated. The question is how long cytological samples can be stored in this medium so that the results of immunocytochemical reactions are still reliable and can be safely used in routine cytopathological diagnostics. The stability of the 6 ICC markers most frequently used in everyday routine work was tested: cytokeratin AE1/AE3, calretinin, epithelial specific antigen Ep-CAM (MOC-31), CD45, oestrogen receptor (ER), and melanoma triple cocktail were tested on methanol-fixed cytospins prepared from fresh fine needle aspiration biopsies, effusion samples, and disintegrated lymph nodes suspended in the in-house cell medium. Cytospins were prepared on the day of sampling as well as on the second, fourth, fifth, and eighth day after sample collection. They were then fixed in methanol and immunocytochemically stained. Finally, the percentage of positively stained cells, reaction intensity, counterstaining, and cell morphology were assessed using two assessment methods: internal assessment and the UK NEQAS ICC scheme assessment. Results show that the antigen properties of cytokeratin AE1/AE3, MOC-31, CD45, ER, and melanoma triple cocktail were preserved even after 8 days of storage in the in-house LBM, while the antigen properties of calretinin remained unchanged for only 4 days. The key parameters for assessing antigen detection are the proportion of cells with a positive reaction and the intensity of staining, and well-preserved cell morphology is highly important for reliable interpretation of an ICC reaction. Therefore, it would be valuable to perform a similar analysis for other ICC markers to determine how long antigenic and morphological properties are preserved in LBM.
Keywords: cytology samples, cytospins, immunocytochemistry, liquid-based cytology
Procedia PDF Downloads 138
721 Analysis of Brownfield Soil Contamination Using Local Government Planning Data
Authors: Emma E. Hellawell, Susan J. Hughes
Abstract:
Brownfield sites are currently being redeveloped for residential use. Information on soil contamination on these former industrial sites is collected as part of the planning process by local government. This research project analyses this untapped resource of environmental data, using site investigation data submitted to a local Borough Council in Surrey, UK. Over 150 site investigation reports were collected and interrogated to extract relevant information. The study involved three phases. Phase 1 was the development of a database for soil contamination information from local government reports; this database contained information on the source, history, and quality of the data, together with the chemical information on the soil that was sampled. Phase 2 involved obtaining site investigation reports for development within the study area and extracting the required information for the database. Phase 3 was the data analysis and interpretation of key contaminants, evaluating typical contaminant levels and their distribution within the study area and relating these results to current guideline levels of risk for future site users. Preliminary results for a pilot study using a sample of the dataset have been obtained. The pilot study showed some inconsistency in the quality of the reports and measured data, and careful interpretation of the data is required. Analysis of the information has found high levels of lead in shallow soil samples, with mean and median levels exceeding the current guidance for residential use. The data also showed elevated (but below guidance) levels of potentially carcinogenic polycyclic aromatic hydrocarbons. Of particular concern was the high detection rate for asbestos fibres, which were found at low concentrations in 25% of the soil samples tested (although the sample set was small). Contamination levels of the remaining chemicals tested were all below the guidance level for residential site use. These preliminary pilot study results will be expanded, and results for the whole local government area will be presented at the conference. The pilot study has demonstrated the potential for this extensive dataset to provide greater information on local contamination levels. This can help inform regulators and developers and lead to more targeted site investigations, improving risk assessments and brownfield development.
Keywords: Brownfield development, contaminated land, local government planning data, site investigation
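The Phase 3 screening step reduces to comparing measured concentrations against a guideline value; a sketch assuming pandas, with illustrative lead results and a hypothetical residential screening value (not the study's dataset or the actual UK guidance figure):

```python
import pandas as pd

# Illustrative shallow-soil lead results (mg/kg) from five reports.
soil = pd.DataFrame({
    "site": ["A", "B", "C", "D", "E"],
    "lead_mg_kg": [310.0, 95.0, 540.0, 220.0, 130.0],
})
GUIDELINE_LEAD = 200.0  # hypothetical residential screening value, mg/kg

print(f"mean:   {soil['lead_mg_kg'].mean():.0f} mg/kg")    # 259
print(f"median: {soil['lead_mg_kg'].median():.0f} mg/kg")  # 220
exceed = soil[soil["lead_mg_kg"] > GUIDELINE_LEAD]
print(f"{len(exceed)}/{len(soil)} samples exceed the screening value")
```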
Procedia PDF Downloads 136
720 Fuzzy Expert Approach for Risk Mitigation on Functional Urban Areas Affected by Anthropogenic Ground Movements
Authors: Agnieszka A. Malinowska, R. Hejmanowski
Abstract:
A number of European cities are strongly affected by ground movements caused by anthropogenic activities or post-anthropogenic metamorphosis. These include mainly water pumping, ongoing mining operations, the collapse of post-mining underground voids, and mining-induced earthquakes. These activities lead to large- and small-scale ground displacements and ground ruptures. Ground movements occurring in urban areas can considerably affect the stability and safety of structures and infrastructure. The complexity of the ground deformation phenomenon in relation to the vulnerability of structures and infrastructure makes it considerably difficult to assess the threat to those objects. However, increasing access to free software and satellite data could pave the way for developing new methods and strategies for environmental risk mitigation and management. Open source geographical information systems (OS GIS) can support data integration, management, and risk analysis. Recently developed methods based on fuzzy logic and expert knowledge for assessing the risk of damage to buildings and infrastructure could be integrated into OS GIS; these methods were verified by back analysis, proving their accuracy. Moreover, these methods can be supported by ground displacement observation: based on freely available data from the European Space Agency and free software, ground deformation can be estimated. The main innovation presented in the paper is the application of open source software (OS GIS) for integrating the developed models and assessing the threat to urban areas. These approaches are reinforced by analysis of ground movement based on free satellite data, which supports the verification of ground movement prediction models and enables mapping of ground deformation in urbanized areas. The developed models and methods have been implemented in an urban area affected by underground mining activity. Vulnerability maps supported by satellite ground movement observation would mitigate the hazards of land displacement in urban areas close to mines.
Keywords: fuzzy logic, open source geographic information science (OS GIS), risk assessment on urbanized areas, satellite interferometry (InSAR)
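As a sketch of the fuzzy-expert idea, triangular membership functions with a single max-min rule in plain Python (the breakpoints are illustrative assumptions, not the calibrated values of the verified models):

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def damage_risk(tilt_mm_m, strain_mm_m):
    """Toy rule base: damage risk is high if ground tilt is large OR
    horizontal strain is large (max-min inference over one rule)."""
    tilt_high = triangular(tilt_mm_m, 2.5, 5.0, 10.0)
    strain_high = triangular(strain_mm_m, 1.5, 3.0, 6.0)
    return max(tilt_high, strain_high)

print(damage_risk(4.0, 1.0))  # 0.6 -> elevated risk, driven by the tilt
```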
Procedia PDF Downloads 159
719 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin's Complex Thought
Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan
Abstract:
Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, which magnifies its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. These encompass discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, denoting the necessity to elucidate the effects AI generates. Accountability comprises two integral aspects: adherence to legal and ethical standards, and the imperative to elucidate the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this "accountability" posed by the complexity of AI systems and their effects. The article then proposes to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to point out the challenges posed by the complexity of AI, with accountability fractured between a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and are multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The challenge of the non-neutrality of algorithmic systems, as fully ethically non-neutral actors, is put forward by a revealing-ethics approach that calls for assigning responsibilities to these systems. The challenge of the dilution of responsibility is induced by the multiplicity of, and distance between, the actors: decision-making is split between developers, who feel they fulfil their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. Finally, accountability is confronted with the challenge of the transparency of complex and scalable algorithmic systems, non-human actors that self-learn via big data. A second contribution involves leveraging E. Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and thereby paving the way for establishing accountability in AI. In addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging the non-neutrality of algorithmic systems, inherently imbued with the values and biases of their creators and of society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizing recursiveness, akin to the "transparency" of the system, promotes a systemic analysis that accounts for induced effects and guides the incorporation of modifications into the system to rectify its drifts. In conclusion, this contribution serves as a starting point for contemplating the accountability of artificial intelligence systems despite the evident ethical implications and potential deviations. Edgar Morin's principles, providing a lens through which to contemplate this complexity, offer valuable perspectives for addressing these challenges concerning accountability.
Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin
Procedia PDF Downloads 62
718 Evaluation of the Irritation Potential of Three Topical Formulations of Minoxidil 5% + Finasteride 0.1% Using Patch Test
Authors: Joshi Rajiv, Shah Priyank, Thavkar Amit, Rohira Poonam, Mehta Suyog
Abstract:
Topical formulations containing minoxidil and finasteride support hair growth in the treatment of male androgenetic alopecia. The objective of this study was to compare the irritation potential of three conventional formulations of minoxidil 5% + finasteride 0.1% topical solution in a human patch test. The study was a single-centre, double-blind, non-randomized controlled study in 53 healthy adult Indian subjects. A 24-hour occlusive patch test was performed with the three formulations of minoxidil 5% + finasteride 0.1% topical solution. Products tested included an aqueous-based minoxidil 5% + finasteride 0.1% (AnasureTM-F, Sun Pharma, India – Brand A), a lipid-based minoxidil 5% + finasteride 0.1% (Brand B), and an aqueous-based minoxidil 5% + finasteride 0.1% (Brand C). Isotonic saline 0.9% and 1% w/w sodium lauryl sulphate were included as negative and positive controls, respectively. Patches were applied and removed after 24 hours. The skin reaction was assessed and clinically scored 24 hours after removal of the patches under a constant artificial daylight source using the Draize scale (0-4 point scale for erythema/dryness/wrinkles and for oedema). A follow-up was scheduled after one week to confirm recovery from any reaction. A combined mean score up to 2.0/8.0 indicates a product is "non-irritant", a score between 2.0/8.0 and 4.0/8.0 indicates "mildly irritant", and a score above 4.0/8.0 indicates "irritant". The patch test procedure followed the principles outlined by the Bureau of Indian Standards (BIS) (IS 4011:2018; Methods of Test for Safety Evaluation of Cosmetics, 3rd revision). Fifty-three subjects with a mean age of 31.9 years (25 males and 28 females) participated in the study. The combined mean scores ± standard deviation were: 0.06 ± 0.23 (Brand A), 0.81 ± 0.59 (Brand B), 0.38 ± 0.49 (Brand C), 2.92 ± 0.47 (positive control), and 0.0 ± 0.0 (negative control). The score of Brand A (the Sun Pharma product) was thus significantly lower than that of Brand B (p=0.001) and that of Brand C (p=0.001). The combined mean erythema scores ± standard deviation were: 0.06 ± 0.23 (Brand A), 0.81 ± 0.59 (Brand B), 0.38 ± 0.49 (Brand C), 2.09 ± 0.4 (positive control), and 0.0 ± 0.0 (negative control). The mean erythema score of Brand A was significantly lower than that of Brand B (p=0.001) and that of Brand C (p=0.001). Any reaction observed at 24 hours after patch removal subsided within a week. All three topical formulations of minoxidil 5% + finasteride 0.1% were non-irritant. Brand A of minoxidil 5% + finasteride 0.1% (Sun Pharma) was found to be the least irritant, compared with Brand B and Brand C, based on the combined mean score and mean erythema score in the human patch test as per BIS IS 4011:2018.
Keywords: erythema, finasteride, irritation, minoxidil, patch test
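The scoring rule quoted above maps directly to a small classifier; a sketch using the thresholds and combined mean scores from this study:

```python
def classify_irritation(combined_mean_score):
    """Classify a product from its combined mean Draize score (max 8.0)
    using the thresholds quoted for the IS 4011:2018 patch test."""
    if combined_mean_score <= 2.0:
        return "non-irritant"
    if combined_mean_score <= 4.0:
        return "mildly irritant"
    return "irritant"

scores = {"Brand A": 0.06, "Brand B": 0.81, "Brand C": 0.38,
          "positive control": 2.92}
for product, score in scores.items():
    print(product, "->", classify_irritation(score))
# All three formulations classify as non-irritant; the SLS positive control
# lands in the mildly irritant band.
```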
Procedia PDF Downloads 81
717 Evaluation of the Irritation Potential of Three Topical Formulations of Minoxidil 2% Using Patch Test
Authors: Sule Pallavi, Shah Priyank, Thavkar Amit, Rohira Poonam, Mehta Suyog
Abstract:
Introduction: Minoxidil has long been used topically to assist hair growth in the management of male androgenetic alopecia. The aim of this study was a comparative assessment of the irritation potential of three commercial formulations of minoxidil 2% topical solution in a human patch test. Methodology: The study was a non-randomized, double-blind, controlled, single-center study of 56 healthy adult Indian subjects. A 24-hour occlusive patch test was conducted with the three formulations of minoxidil 2% topical solution. Products tested were an aqueous-based minoxidil 2% (AnasureTM 2%, Sun Pharma, India – Brand A), an alcohol-based minoxidil 2% (Brand B), and an aqueous-based minoxidil 2% (Brand C). Isotonic saline 0.9% and 1% w/w sodium lauryl sulphate were included as negative and positive controls, respectively. Patches were applied on the back and removed after 24 hours. The Draize scale (0-4 point scale for erythema/dryness/wrinkles and for oedema) was used to evaluate and clinically score the skin reaction under constant artificial daylight 24 hours after removal of the patches. The patch test was based on the principles outlined by the Bureau of Indian Standards (BIS) (IS 4011:2018; Methods of Test for Safety Evaluation of Cosmetics, 3rd revision). A mean combined score up to 2.0/8.0 indicates that a product is "non-irritant," a score between 2.0/8.0 and 4.0/8.0 indicates "mildly irritant", and a score above 4.0/8.0 indicates "irritant". In case any skin reaction was observed, a follow-up was planned after one week to confirm recovery. Results: The 56 subjects who participated in the study had a mean age of 28.7 years (28 males and 28 females). The combined mean scores ± standard deviation were: 0.09 ± 0.29 (Brand A), 0.29 ± 0.53 (Brand B), 0.30 ± 0.46 (Brand C), 3.25 ± 0.77 (positive control), and 0.02 ± 0.13 (negative control). The combined mean score of Brand A (Sun Pharma) was significantly lower than that of Brand B (p=0.016) and that of Brand C (p=0.004). The mean erythema scores ± standard deviation were: 0.09 ± 0.29 (Brand A), 0.27 ± 0.49 (Brand B), 0.30 ± 0.46 (Brand C), 2.5 ± 0.66 (positive control), and 0.02 ± 0.13 (negative control). The mean erythema score of Brand A (Sun Pharma) was significantly lower than that of Brand B (p=0.019) and that of Brand C (p=0.004). Reactions observed 24 hours after patch removal subsided within a week. Conclusion: Based on the human patch test as per BIS IS 4011:2018, all three topical formulations of minoxidil 2% were found to be non-irritant. Brand A of 2% minoxidil (Sun Pharma) was the least irritant, compared with Brand B and Brand C, based on the combined mean score and mean erythema score.
Keywords: erythema, irritation, minoxidil, patch test
Procedia PDF Downloads 80
716 Intracranial Hypotension: A Brief Review of the Pathophysiology and Diagnostic Algorithm
Authors: Ana Bermudez de Castro Muela, Xiomara Santos Salas, Silvia Cayon Somacarrera
Abstract:
The aim of this review is to explain what intracranial hypotension is and what its main causes are, and to outline the diagnostic management in different clinical situations, with attention to the radiological findings and the pathophysiological substrate. An approach to diagnostic management is presented: which guidelines to follow, the different tests available, and the typical findings. We review the myelo-CT and myelo-MRI studies in patients with suspected CSF fistula or hypotension of unknown cause during the last 10 years in three centers. Signs of intracranial hypotension (subdural hygromas/hematomas, pachymeningeal enhancement, venous sinus engorgement, pituitary hyperemia, and lowering of the brain) that are evident on baseline CT and MRI are also sought. Intracranial hypotension is defined as an opening pressure below 6 cmH₂O. It is a relatively rare disorder, with an annual incidence of 5 per 100,000 and a female-to-male ratio of 2:1. The clinical hallmark is orthostatic headache, defined as the development or aggravation of headache when the patient moves from a supine to an upright position, which disappears or is typically relieved on lying down. The etiology is a decrease in the amount of cerebrospinal fluid (CSF), usually through loss of it, either spontaneous or secondary (post-traumatic, post-surgical, systemic disease, post-lumbar puncture, etc.), and rhinorrhea and/or otorrhea may be present. The pathophysiological mechanisms of CSF hypotension and hypertension are interrelated, as a situation of hypertension may lead to hypotension secondary to spontaneous CSF leakage. The diagnostic management of intracranial hypotension in our center includes, when the condition is spontaneous and without rhinorrhea and/or otorrhea, and according to necessity, a range of available tests performed in order of increasing complexity: cerebral CT, cerebral and spinal MRI without contrast, and CT/MRI with intrathecal contrast. When intracranial hypotension presents with rhinorrhea and/or otorrhea, a sample can be obtained for the detection of β2-transferrin, which is found physiologically in CSF, together with sinus CT and cerebral MRI including constructive interference in steady state (CISS) sequences. If necessary, cisternography studies are performed to locate the exact point of leakage. It is important to emphasize the significance of myelo-CT/MRI in establishing the diagnosis and locating the CSF leak, which is indispensable for therapeutic planning (whether surgical or not) in patients with more than one lesion or with equivocal baseline tests.
Keywords: cerebrospinal fluid, neuroradiology brain, magnetic resonance imaging, fistula
Procedia PDF Downloads 125
715 In vitro Evaluation of Capsaicin Patches for Transdermal Drug Delivery
Authors: Alija Uzunovic, Sasa Pilipovic, Aida Sapcanin, Zahida Ademovic, Berina Pilipović
Abstract:
Capsaicin is a naturally occurring alkaloid obtained from the fruit extracts of different Capsicum species. It has been employed topically to treat many conditions such as rheumatoid arthritis, osteoarthritis, cancer pain and nerve pain in diabetes. The high degree of pre-systemic metabolism of intragastric capsaicin and the short half-life of capsaicin after intravenous administration make topical application of capsaicin advantageous. In this study, we evaluated differences in the dissolution characteristics of a capsaicin 11 mg patch (purchased from the market) at different dissolution rotation speeds. The patch area is 308 cm² (22 cm x 14 cm; it contains 36 µg of capsaicin per square centimeter of adhesive). USP Apparatus 5 (Paddle over Disc) is used for transdermal patch testing. The dissolution study was conducted using USP Apparatus 5 (n=6) on an ERWEKA DT800 dissolution tester (paddle-type) with the addition of a disc. A 9 cm² piece cut from the 308 cm² patch was placed against a disc (delivery side up), retained with a stainless-steel screen and exposed to 500 mL of phosphate buffer solution pH 7.4. All dissolution studies were carried out at 32 ± 0.5 °C and different rotation speeds (50 ± 5, 100 ± 5 and 150 ± 5 rpm). Aliquots of 5 mL were withdrawn at various time intervals (1, 4, 8 and 12 hours) and replaced with 5 mL of dissolution medium. Withdrawn samples were appropriately diluted and analyzed by reversed-phase liquid chromatography (RP-LC). An RP-LC method was developed, optimized and validated for the separation and quantitation of capsaicin in a transdermal patch. The method uses a ProntoSIL 120-3-C18 AQ 125 x 4.0 mm (3 μm) column maintained at 60 °C. The mobile phase consisted of acetonitrile:water (50:50 v/v), the flow rate was 0.9 mL/min, the injection volume 10 μL and the detection wavelength 222 nm. The RP-LC method used is simple, sensitive and accurate and can be applied for fast (total chromatographic run time of 4.0 minutes) and simultaneous analysis of capsaicin and dihydrocapsaicin in a transdermal patch. According to the results obtained in this study, we can conclude that the relative difference in the dissolution rate of capsaicin after 12 hours increased with dissolution rotation speed (100 rpm vs 50 rpm: 84.9 ± 11.3%; 150 rpm vs 100 rpm: 39.8 ± 8.3%). Although several apparatus and procedures (USP Apparatus 5, 6, 7 and a paddle-over-extraction-cell method) have been used to study the in vitro release characteristics of transdermal patches, USP Apparatus 5 (Paddle over Disc) could be considered a discriminatory test, able to point out the differences in the dissolution rate of capsaicin at different rotation speeds. Keywords: capsaicin, in vitro, patch, RP-LC, transdermal
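Because each 5 mL aliquot withdrawn is replaced with fresh medium, the measured concentrations need a volume-replacement correction before cumulative release can be reported. The following is a minimal Python sketch of that standard correction under the sampling scheme in the abstract (500 mL vessel, 5 mL samples); the example concentrations are invented for illustration and are not the study data.

```python
V_VESSEL_ML = 500.0   # dissolution medium volume
V_SAMPLE_ML = 5.0     # aliquot withdrawn and replaced at each time point

def cumulative_release_ug(conc_ug_per_ml):
    """Cumulative amount released (µg) at each sampling time, correcting
    for drug removed with earlier aliquots:
    A_n = C_n * V + V_s * (C_1 + ... + C_(n-1))."""
    amounts, removed = [], 0.0
    for c in conc_ug_per_ml:
        amounts.append(c * V_VESSEL_ML + removed)
        removed += c * V_SAMPLE_ML
    return amounts

# Invented concentrations (µg/mL) at 1, 4, 8 and 12 h for illustration
print(cumulative_release_ug([0.05, 0.12, 0.18, 0.22]))
```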
Procedia PDF Downloads 224
714 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment
Authors: Ella Sèdé Maforikan
Abstract:
Accurate land cover mapping is essential for effective environmental monitoring and natural resource management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen for its high-performance computing capabilities, which mitigate the computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented here not only enables the creation of precise land cover maps expeditiously but also demonstrates the power of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested. Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment
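The GEE workflow described — an image composite, terrain features, and a Random Forest classifier — can be sketched with the Earth Engine Python API. In the sketch below, `aoi` (the catchment geometry) and `samples` (labelled training points with a 'landcover' property) are assumed inputs the study would supply; the band selection, cloud threshold, and tree count are illustrative choices of ours, not the paper's settings.

```python
import ee
ee.Initialize()

# Assumed inputs: aoi = ee.Geometry of the Beterou catchment,
# samples = ee.FeatureCollection of labelled points ('landcover' property).
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(aoi)
      .filterDate('2020-06-01', '2021-03-31')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
      .median())

# Spectral bands plus the terrain features (elevation, slope) credited
# with improving accuracy in the study.
bands = ['B2', 'B3', 'B4', 'B8', 'B11', 'B12']
terrain = ee.Terrain.products(ee.Image('USGS/SRTMGL1_003')).select(['elevation', 'slope'])
stack = s2.select(bands).addBands(terrain).clip(aoi)

training = stack.sampleRegions(collection=samples, properties=['landcover'], scale=10)
classifier = (ee.Classifier.smileRandomForest(numberOfTrees=100)
              .train(features=training, classProperty='landcover',
                     inputProperties=stack.bandNames()))
classified = stack.classify(classifier)  # five-class land cover map
```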
Procedia PDF Downloads 62
713 An Exploratory Study on the Impact of Climate Change on Design Rainfalls in the State of Qatar
Authors: Abdullah Al Mamoon, Niels E. Joergensen, Ataur Rahman, Hassan Qasem
Abstract:
The Intergovernmental Panel on Climate Change (IPCC), in its Fourth Assessment Report (AR4), predicts a more extreme climate towards the end of the century, which is likely to impact the design of engineering infrastructure projects with a long design life. A study in 2013 developed new design rainfalls for Qatar, which provide an improved design basis for drainage infrastructure in the State of Qatar under the current climate. The current design standards in Qatar do not consider increased rainfall intensity caused by climate change. The focus of this paper is to update the recently developed design rainfalls in Qatar under changing climatic conditions based on IPCC's AR4, allowing a later revision of the proposed design standards relevant for projects with a longer design life. The future climate has been investigated based on the climate models released in IPCC's AR4 and the A2 storyline of the Special Report on Emissions Scenarios (SRES), using a stationary approach. Annual maximum series (AMS) of predicted 24-hour rainfall data for both the wet (NCAR-CCSM) and dry (CSIRO-MK3.5) scenarios were extracted for the Qatari grid points in the climate models for three periods: current climate (2010-2039), medium-term climate (2040-2069) and end-of-century climate (2070-2099). A homogeneous region of the Qatari grid points was formed, and an L-moments-based regional frequency approach was adopted to derive design rainfalls. The results indicate no significant changes in the design rainfall in the medium term (2040-2069), but significant changes are expected towards the end of the century (2070-2099). New design rainfalls have been developed taking climate change into account for the 2070-2099 scenario by averaging results from the two scenarios. IPCC's AR4 predicts that the rainfall intensity for a 5-year return period rain with a duration of 1 to 2 hours will increase by 11% in 2070-2099 compared to the current climate. Similarly, the rainfall intensity for more extreme rainfall, with a return period of 100 years and a duration of 1 to 2 hours, will increase by 71% in 2070-2099 compared to the current climate. Infrastructure with a design life exceeding 60 years should incorporate safety factors that take the predicted effects of climate change into due consideration. Keywords: climate change, design rainfalls, IDF, Qatar
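The projected increases translate directly into climate-change uplift factors on current IDF values. Below is a minimal Python sketch of that arithmetic using the two percentages quoted in the abstract (11% for the 5-year event, 71% for the 100-year event, 1-2 h durations); the current-climate intensities are invented placeholders, and intermediate return periods would need their own factors.

```python
# Uplift factors for 2070-2099 from the abstract (1-2 h durations):
# +11% for the 5-year event, +71% for the 100-year event.
UPLIFT = {5: 1.11, 100: 1.71}

# Placeholder current-climate design intensities (mm/h) for a 1 h duration.
current_intensity = {5: 30.0, 100: 65.0}

for T, factor in UPLIFT.items():
    future = current_intensity[T] * factor
    print(f"T = {T:>3} yr: {current_intensity[T]:.1f} mm/h -> {future:.1f} mm/h")
```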
Procedia PDF Downloads 392
712 Effects of Fatty Acid Salts and Spices on Dermatophagoides farinae
Authors: Yumeho Obata, Mariko Era, Takayoshi Kawahara, Takahide Kanyama, Hiroshi Morita
Abstract:
Dermatophagoides farinae is a major indoor mite allergen. D. farinae often swarms over powder products (e.g. wheat flour) because it feeds on the starch and protein they contain. Eating powder products contaminated with D. farinae causes various allergic symptoms. Therefore, the creation of food additive agents that are highly safe and control mites is required. Fatty acid salts and spices are known to have pesticidal activities. This study describes the effects of fatty acid salts and spices against Dermatophagoides farinae. Materials and Methods: Potassium salts of 9 fatty acids (C4:0, C6:0, C8:0, C10:0, C12:0, C14:0, C18:1, C18:2, C18:3) were prepared by mixing the fatty acid with the appropriate amount of KOH solution to a concentration of 175 mM and pH 10.5. C12Cu and C12Zn were selected as other fatty acid salts. Cayenne pepper, habanero, Japanese pepper, mustard, jalapeno pepper, curry aroma and cinnamon were selected as spices. D. farinae has been cultured in our laboratory. To rear the mites, double-soled dishes containing sterilized food were placed in a large plastic container (30.0 × 20.0 × 20.0 cm) with 100% ammonium nitrate solution in the bottom. The plastic container was kept in an incubator at 25 °C and 64% relative humidity (RH) under dark conditions. The sterilized food was composed of dried bonito flakes and dried yeast (Ebios), 1:1 by weight. For the antiproliferative assay, the sample and culture medium were mixed in a double-soled dish and kept at 25 °C and 64% RH. Decrease rates were determined 1 week and 4 weeks after treatment under the microscope. D. farinae was considered dead if its appendages did not move when prodded with a pin. Results and Conclusions: The results show that the fatty acid potassium salts had no antiproliferative effect against D. farinae. On the other hand, Japanese pepper, mustard, curry aroma and cinnamon effectively decreased the propagation rate (by over 80%) 1 week after treatment against D. farinae. Japanese pepper, curry aroma and cinnamon effectively decreased the propagation rate (by approximately 100%) 4 weeks after treatment against D. farinae. In particular, Japanese pepper and cinnamon showed the fastest and most sustained antiproliferative effects. These results indicate that Japanese pepper and cinnamon have high antiproliferative effects against D. farinae and suggest that these spices could be used as food additive agents. Keywords: fatty acid salts, spices, antiproliferative effects, dermatophagoides farinae
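The salt preparation boils down to simple molarity arithmetic: for a 175 mM potassium salt, the fatty acid and KOH are combined in roughly equimolar amounts for the chosen volume. The sketch below illustrates that calculation in Python for potassium laurate (C12K); the molecular weights are standard values, and the routine is our own illustration rather than the authors' protocol.

```python
MW_LAURIC_ACID = 200.32  # g/mol (C12:0)
MW_KOH = 56.11           # g/mol

def salt_preparation(volume_l: float, conc_mol_l: float = 0.175):
    """Approximate masses of fatty acid and KOH for an equimolar
    potassium-salt solution at the target concentration (in practice,
    the pH is then adjusted to 10.5)."""
    moles = volume_l * conc_mol_l
    return {
        "fatty acid (g)": moles * MW_LAURIC_ACID,
        "KOH (g)": moles * MW_KOH,
    }

print(salt_preparation(volume_l=0.1))  # 100 mL of 175 mM C12K
```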
Procedia PDF Downloads 232
711 Human Identification Using Local Roughness Patterns in Heartbeat Signal
Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori
Abstract:
Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential in human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, the ECG has a real-time vitality characteristic that signifies live signs, ensuring that a legitimate individual is being identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual's heartbeats at different instants of time. These variations may occur due to muscle flexure, changes of mental or emotional state, and changes of sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification, based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Then, binary weights are multiplied with the pattern to produce the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of individual subjects in the database. One advantage of the proposed feature is that it does not depend on the accuracy of detecting the QRS complex, unlike conventional methods. Supervised recognition methods are then designed using minimum-distance-to-mean and Bayesian classifiers to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the National Metrology Institute of Germany (NMIG) PTB database showed that the proposed new method is promising compared to a conventional interval- and amplitude-feature-based method. Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification
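The filtering and pattern-extraction steps can be condensed into a short routine. Below is a minimal Python sketch of the described pipeline — Butterworth band-pass filtering followed by a sliding-window local binary pattern histogram. We treat the 0.00025/0.04 cut-offs as normalized frequencies (fractions of Nyquist) and choose a window radius of 4 ourselves, so both are assumptions rather than the authors' exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def local_roughness_histogram(ecg, radius=4):
    # Second-order band-pass filter; cut-offs assumed to be normalized
    # frequencies as fractions of the Nyquist frequency.
    b, a = butter(2, [0.00025, 0.04], btype='band')
    x = filtfilt(b, a, ecg)

    n = len(x)
    codes = np.zeros(n - 2 * radius, dtype=int)
    weights = 2 ** np.arange(2 * radius)  # binary weights for the pattern
    for i in range(radius, n - radius):
        # Compare neighbouring intensities with the central intensity
        neighbours = np.concatenate([x[i - radius:i], x[i + 1:i + radius + 1]])
        bits = (neighbours >= x[i]).astype(int)
        codes[i - radius] = np.dot(bits, weights)

    # Histogram of pattern codes as the subject's roughness descriptor
    n_bins = 2 ** (2 * radius)
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()
```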
Procedia PDF Downloads 404
710 Urban Flood Resilience Comprehensive Assessment of "720" Rainstorm in Zhengzhou Based on Multiple Factors
Authors: Meiyan Gao, Zongmin Wang, Haibo Yang, Qiuhua Liang
Abstract:
Under the background of global climate change and the rapid development of modern urbanization, the frequency of climate disasters such as extreme precipitation in cities around the world is gradually increasing. In this paper, the HiPIMS model is used to simulate the "720" flood in Zhengzhou; the urban flood process is divided into stages, and the continuous stages of flood resilience are determined. Flood resilience curves under the influence of multiple factors were determined, and urban flood resilience was evaluated by combining the results of the resilience curves. The flood resilience of each urban unit grid was evaluated based on economy, population, road network, hospital distribution and land use type. First, rainfall data from meteorological stations near Zhengzhou and remote-sensing rainfall data from July 17 to 22, 2021 were collected. The Kriging interpolation method was used to extend the rainfall data across Zhengzhou. Based on the rainfall data, the flood process generated by four rainfall events in Zhengzhou was reproduced. Based on the inundation extent and depth in different areas, the flood process was divided into four stages — absorption, resistance, overload and recovery — with reference to the once-in-50-years rainfall standard. At the same time, based on the levels of slope, GDP, population, hospital-affected area, land use type, road network density and other factors, resilience curves were applied to evaluate the urban flood resilience of different regional units, and the differences in the flood processes produced by the different precipitation events of the "720" rainstorm in Zhengzhou were analyzed. Faced with a rainstorm exceeding the once-in-1,000-years level, most areas quickly enter the overload stage. The influence of each factor differs between areas: some areas with ramps or higher terrain have better resilience and restore normal social order faster, that is, their recovery stage takes less time. Some low-lying areas or special terrain, such as tunnels, enter the overload stage faster in the case of heavy rainfall. As a result, high levels of flood protection, water level warning systems and faster emergency response are needed in areas with low resilience and high risk. The building density of built-up areas, the population of densely populated areas and road network density all have a negative impact on urban flood resistance, while the positive impact of slope on flood resilience is also very obvious. While hospitals have positive effects on medical treatment, they also bring negative effects such as population density and asset density when floods occur. A separate comparison of the unit grids containing hospitals shows that the resilience of hospital areas is low when they encounter floods. Therefore, in addition to improving the flood resistance capacity of cities, reasonable planning can also increase their flood response capacity. Changes in these influencing factors can further improve urban flood resilience, for example by raising design standards, providing temporary water storage areas for when floods occur, training emergency personnel to respond faster, and adjusting emergency support equipment. Keywords: urban flood resilience, resilience assessment, hydrodynamic model, resilience curve
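A resilience curve of the kind described tracks system functionality through the absorption, resistance, overload and recovery stages, and a common way to condense it into one number is the normalized area under the curve. The Python sketch below shows that formulation; it is a generic rendering with invented values, not necessarily the exact metric used in the paper.

```python
import numpy as np

def resilience_index(t, functionality):
    """Resilience as the normalized area under the system-functionality
    curve between the disturbance onset and full recovery (a common
    formulation; the paper's exact metric may differ)."""
    t = np.asarray(t, dtype=float)
    q = np.asarray(functionality, dtype=float)  # 1.0 = normal service, 0.0 = total loss
    return np.trapz(q, t) / (t[-1] - t[0])

# Invented example spanning absorption, resistance, overload and recovery
hours = [0, 6, 12, 24, 48, 72]
service = [1.0, 0.8, 0.3, 0.2, 0.6, 1.0]
print(f"resilience index = {resilience_index(hours, service):.2f}")
```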
Procedia PDF Downloads 38
709 Regional Anesthesia in Carotid Surgery: A Single Center Experience
Authors: Daniel Thompson, Muhammad Peerbux, Sophie Cerutti, Hansraj Riteesh Bookun
Abstract:
Patients with carotid stenosis, which may be asymptomatic or symptomatic in the form of transient ischaemic attack (TIA), amaurosis fugax, or stroke, often require an endarterectomy to reduce stroke risk. Risks of this procedure include stroke, death, myocardial infarction, and cranial nerve damage. Carotid endarterectomy is most commonly performed under general anaesthesia; however, it can also be undertaken with a regional anaesthetic approach. Our major tertiary centre mostly utilises regional anaesthesia for carotid endarterectomy. We completed a cross-sectional analysis of all cases of carotid endarterectomy performed under regional anaesthesia across a 10-year period, between January 2010 and March 2020, at our institution. 350 patients were included in this descriptive analysis, and demographic details, indications for surgery, procedural details, length of surgery, and complications were collected. Data were cross-tabulated and presented in frequency tables to describe these categorical variables. 263 of the 350 patients in the analysis were male, with a mean age of 71 ± 9. 172 patients had a history of ischaemic heart disease, 104 had diabetes mellitus, 318 had hypertension, and 17 patients had chronic kidney disease greater than stage 3. 13.1% (46 patients) were current smokers, and the majority (63%) were ex-smokers. Carotid endarterectomy was most commonly performed conventionally with patch arterioplasty, in 96% of cases (337 patients). The most common indication was TIA or stroke, in 64% of patients; 18.9% were classified as asymptomatic, and 13.7% had amaurosis fugax. There were few general complications, with 9 wound complications/infections, 7 postoperative haematomas requiring return to theatre, 3 myocardial infarctions, 3 arrhythmias, 1 exacerbation of congestive heart failure, 1 chest infection, and 1 urinary tract infection. Complications specific to carotid endarterectomy included 3 strokes, 1 postoperative TIA, and 1 cerebral bleed. There were no deaths in our cohort. This analysis of a large cohort of patients from a major tertiary centre who underwent carotid endarterectomy under regional anaesthesia indicates the safety of such an approach for these patients. Regional anaesthesia holds the promise of fewer respiratory and cardiac events compared to general anaesthesia, and in this vulnerable patient group, this calls for comparative research between regional and general anaesthesia in carotid surgery. Keywords: anaesthesia, carotid endarterectomy, stroke, carotid stenosis
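The descriptive workflow — cross-tabulating categorical variables into frequency tables — is straightforward to reproduce with pandas. The sketch below is a minimal illustration with invented rows, not the study data; column names are our own shorthand.

```python
import pandas as pd

# Invented illustrative records; a real analysis would load the 350-patient cohort.
df = pd.DataFrame({
    "indication": ["TIA/stroke", "asymptomatic", "amaurosis fugax", "TIA/stroke"],
    "smoking":    ["ex-smoker", "current", "never", "ex-smoker"],
})

# Frequency table of indication by smoking status, with row/column totals
freq = pd.crosstab(df["indication"], df["smoking"], margins=True)
print(freq)
```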
Procedia PDF Downloads 120
708 The Effects of Molecular and Climatic Variability on the Occurrence of Aspergillus Species and Aflatoxin Production in Commercial Maize from Different Agro-climatic Regions in South Africa
Authors: Nji Queenta Ngum, Mwanza Mulunda
Abstract:
Introduction: Most African research reports frequent aflatoxin contamination of various foodstuffs, with researchers rarely specifying which Aspergillus species are present in these commodities. Numerous studies provide evidence of the ability of fungi to grow, thrive, and interact with crop species, and emphasize that these processes are largely affected by climatic variables. South Africa is a water-stressed country with high spatio-temporal rainfall variability; moreover, temperatures have been projected to rise at twice the global rate. This change in weather patterns may lead to crop stress, encouraging mould contamination with subsequent mycotoxin production. In this study, the biodiversity and distribution of Aspergillus species, with their corresponding toxins, were investigated in maize from six distinct maize-producing regions of South Africa with different weather patterns. Materials and Methods: Applying cultural and molecular methods, a total of 1028 maize samples from six distinct agro-climatic regions were examined for contamination by Aspergillus species, while high-performance liquid chromatography (HPLC) was applied to analyse the level of contamination by aflatoxins. Results: About 30% of the maize samples overall were contaminated by at least one Aspergillus species. Less than 30% (28.95%) of the 228 isolates subjected to the aflatoxigenicity test were found to possess at least one of the aflatoxin biosynthetic genes. Furthermore, almost 20% were found to be contaminated with aflatoxins, with a mean total aflatoxin concentration of 64.17 ppb. Amongst the contaminated samples, 59.02% had mean total aflatoxin concentrations above the South African regulatory limits of 20 ppb for animal and 10 ppb for human consumption. Conclusion: In this study, climate variables (rainfall reduction) were found to significantly (p<0.001) influence the occurrence of Aspergillus species (especially Aspergillus fumigatus) and the production of aflatoxin in South African commercial maize, according to maize variety, year of cultivation and the agro-climatic region in which the maize was cultivated. Amongst others, a reduction in the average annual rainfall of the preceding year to about 21.27 mm, as opposed to other regions whose average maximum rainfall ranged between 37.24 and 44.1 mm, resulted in a significant increase in aflatoxin contamination of the maize. Keywords: aspergillus species, aflatoxins, diversity, drought, food safety, HPLC and PCR techniques
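The regulatory screening step reduces to comparing each HPLC-derived total aflatoxin concentration against the cited limits. The following is a minimal Python sketch of that comparison using the South African limits quoted in the abstract (20 ppb for animal feed, 10 ppb for human consumption); the sample concentrations are invented for illustration.

```python
LIMIT_ANIMAL_PPB = 20.0  # SA regulatory limit, animal consumption
LIMIT_HUMAN_PPB = 10.0   # SA regulatory limit, human consumption

# Invented HPLC-derived total aflatoxin concentrations (ppb)
concentrations = [2.1, 15.4, 64.2, 8.7, 33.0]

def classify(ppb):
    if ppb > LIMIT_ANIMAL_PPB:
        return "exceeds animal and human limits"
    if ppb > LIMIT_HUMAN_PPB:
        return "exceeds human limit only"
    return "within both limits"

for c in concentrations:
    print(f"{c:6.1f} ppb -> {classify(c)}")

over = sum(c > LIMIT_ANIMAL_PPB for c in concentrations)
print(f"{100 * over / len(concentrations):.1f}% above the animal-feed limit")
```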
Procedia PDF Downloads 73
707 The Need for Innovation Management in the Context of Integrated Management Systems
Authors: Adela Mariana Vadastreanu, Adrian Bot, Andreea Maier, Dorin Maier
Abstract:
This paper approaches the need for innovation management in the context of an existing integrated management system implemented in an organization. The road to success for companies in today's economic environment is more demanding than ever, and the capacity to adapt to rapid changes is decisive for surviving in the market. Managers struggle daily with increasingly complex problems, caused by fierce competition in the market but also by the rising demands of customers. Innovation seems to be the solution to these problems. During the last decade, almost all companies have been certified according to various management systems, such as the quality management system, environmental management system, health and safety management system and others; furthermore, many companies have implemented an integrated management system by integrating two or more management systems. The problem arising today is how to integrate innovation into these integrated management systems. The challenge is that the development of innovation management systems is still in an early phase. In this paper we have studied the possibility of integrating some of the innovation requirements into an existing management system; we have identified the innovation performance requirements and proposed some recommendations regarding innovation management and its implementation as part of an integrated management system. This paper lays down the basis for developing a model of integrated management systems that includes innovation as a main part. Organizations are becoming more aware of the importance of integrated management systems (IMS). Integrating two or more management systems into an integrated management system can have many advantages. This paper examines various models of management system integration in accordance with the professional references ISO 9001, ISO 18001 and OHSAS 18001, highlighting strengths and weaknesses, creating a basis for the future development of integrated management systems and their involvement in various other processes within the organization, such as innovation management. The increasingly demanding economic context emphasizes the importance of innovation for organizations. This paper highlights the importance of innovation for an organization and also gives some practical solutions to improve the overall success of the business through a better approach to innovation. Various standards have been developed in order to certify that organizations respect their requirements. Applying an integrated standards model is shown to be more effective than applying the standards independently. The problem that arises is that, in order to adopt the integrated version of the standards, some changes have to be made at the organizational level. Every change has an effect on the organization's activity, and in this sense the paper tries to address the changes needed for adopting an integrated management system and whether those changes influence performance. After the analysis of the results, we can conclude that a necessary step for improving performance is the implementation of innovation in the existing integrated management system. Keywords: innovation, integrated management systems, innovation management, quality
Procedia PDF Downloads 313
706 Interpretation of the Russia-Ukraine 2022 War via N-Gram Analysis
Authors: Elcin Timur Cakmak, Ayse Oguzlar
Abstract:
This study presents the results of an analysis, by bigram and trigram methods, of tweets sent by Twitter users about the Russia-Ukraine war. On February 24, 2022, Russian President Vladimir Putin declared a military operation against Ukraine, and all eyes were turned to this war. Many people living in Russia and Ukraine reacted to this war, protested, and expressed their deep concern, feeling that the safety of their families and their futures were at stake. Most people, especially those living in Russia and Ukraine, express their views on the war in different ways. The most popular way to do this is through social media. Many people prefer to convey their feelings using Twitter, one of the most frequently used social media tools. Since the beginning of the war, thousands of tweets about it have been posted from many countries of the world. These tweets, accumulated in data sources, were extracted through the Twitter API and analysed with the Python programming language. The aim of the study is to find the word sequences in these tweets by the n-gram method, which is widely used in computational linguistics and natural language processing. The tweet language used in the study is English. The data set consists of the data obtained from Twitter between February 24, 2022, and April 24, 2022. The tweets obtained using the #ukraine, #russia, #war, #putin, and #zelensky hashtags together were captured as raw data and were included in the analysis after being cleaned in the preprocessing stage. In the data analysis part, sentiments were identified to characterize what people post about the war on Twitter; negative messages make up the majority of all tweets, at 63.6%. Furthermore, the most frequently used bigram and trigram word groups were found. The most frequently used word groups are "he, is", "I, do", "I, am" for bigrams, and "I, do, not", "I, am, not", "I, can, not" for trigrams. In the machine learning phase, the accuracy of the classifications is measured by the Classification and Regression Trees (CART) and Naïve Bayes (NB) algorithms, applied separately for bigrams and trigrams. For bigrams, the highest accuracy and F-measure values were obtained with the NB algorithm, and the highest precision and recall values with the CART algorithm. For trigrams, on the other hand, the highest accuracy, precision, and F-measure values were achieved by the CART algorithm, and the highest recall by NB. Keywords: classification algorithms, machine learning, sentiment analysis, Twitter
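The core n-gram step reduces to sliding a window of length n over each token list and counting the resulting tuples. Here is a minimal self-contained Python sketch of bigram and trigram counting; the sample tweets are invented, and a real pipeline would first apply the cleaning and preprocessing described above.

```python
from collections import Counter

def ngram_counts(texts, n):
    """Count n-grams (tuples of n consecutive tokens) across all texts."""
    counts = Counter()
    for text in texts:
        tokens = text.lower().split()            # naive whitespace tokenizer
        counts.update(zip(*[tokens[i:] for i in range(n)]))
    return counts

# Invented, pre-cleaned sample tweets for illustration
tweets = [
    "i do not want this war",
    "he is responsible for this war",
    "i am not feeling safe",
]

print(ngram_counts(tweets, 2).most_common(3))  # top bigrams
print(ngram_counts(tweets, 3).most_common(3))  # top trigrams
```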
Procedia PDF Downloads 73