Search results for: test result
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17908

1078 Risk Assessment Tools Applied to Deep Vein Thrombosis Patients Treated with Warfarin

Authors: Kylie Mueller, Nijole Bernaitis, Shailendra Anoopkumar-Dukie

Abstract:

Background: Vitamin K antagonists, particularly warfarin, are the most frequently used oral medications for deep vein thrombosis (DVT) treatment and prophylaxis. Time in therapeutic range (TITR) of the international normalised ratio (INR) is widely accepted as a measure of the quality of warfarin therapy. Multiple factors can affect warfarin control and subsequent adverse outcomes, including thromboembolic and bleeding events. Predictor models have been developed to assess potential contributing factors and measure an individual's risk of these adverse events. These predictive models have been validated in atrial fibrillation (AF) patients; however, there is a lack of literature on whether they can be successfully applied to other warfarin users, including DVT patients. Therefore, the aim of the study was to assess the ability of these risk models (HAS BLED and CHADS2) to predict haemorrhagic and ischaemic incidences in DVT patients treated with warfarin. Methods: A retrospective analysis of DVT patients receiving warfarin management by a private pathology clinic was conducted. Data were collected from November 2007 to September 2014 and included demographics, medical and drug history, INR targets and test results. Patients receiving continuous warfarin therapy with an INR reference range between 2.0 and 3.0 were included in the study, with mean TITR calculated using the Rosendaal method. Bleeding and thromboembolic events were recorded and reported as incidences per patient. The haemorrhagic risk model HAS BLED and the ischaemic risk model CHADS2 were applied to the data, and patients were stratified into low-, moderate-, or high-risk categories. The analysis was conducted to determine whether a correlation existed between risk category and patient outcomes. Data were analysed using GraphPad InStat Version 3, with p < 0.05 considered statistically significant.
Patient characteristics were reported as mean and standard deviation for continuous data, and as number and percentage for categorical data. Results: Of the 533 patients included in the study, there were 268 (50.2%) female and 265 (49.8%) male patients with a mean age of 62.5 years (±16.4). The overall mean TITR was 78.3% (±12.7), with an overall haemorrhagic incidence of 0.41 events per patient. For the HAS BLED model, there was a haemorrhagic incidence of 0.08, 0.53, and 0.54 events per patient in the low, moderate, and high-risk categories respectively, showing a statistically significant increase in incidence with increasing risk category. The CHADS2 model showed an increase in ischaemic events according to risk category, with no ischaemic events in the low category, an ischaemic incidence of 0.03 in the moderate category, and 0.47 in the high-risk category. Conclusion: An increasing haemorrhagic incidence correlated with an increasing HAS BLED risk score in DVT patients treated with warfarin. Furthermore, a greater incidence of ischaemic events occurred in patients with a higher CHADS2 category. In an Australian population of DVT patients, the HAS BLED and CHADS2 models accurately predict incidences of haemorrhagic and ischaemic events, respectively.
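The Rosendaal method referred to in the abstract linearly interpolates the INR between consecutive tests and scores each day as in or out of the therapeutic range. A minimal sketch of that idea, assuming date-stamped INR measurements (a hypothetical helper, not the clinic's actual software):

```python
from datetime import date

def rosendaal_titr(measurements, low=2.0, high=3.0):
    """Percent of time the interpolated INR lies within [low, high].

    measurements: list of (date, inr) tuples sorted by date.
    INR is linearly interpolated between consecutive tests (Rosendaal method).
    """
    in_range_days = 0
    total_days = 0
    for (d0, inr0), (d1, inr1) in zip(measurements, measurements[1:]):
        span = (d1 - d0).days
        if span == 0:
            continue
        total_days += span
        for day in range(span):
            inr = inr0 + (inr1 - inr0) * day / span  # interpolated daily INR
            if low <= inr <= high:
                in_range_days += 1
    return 100.0 * in_range_days / total_days if total_days else 0.0
```

For example, a patient tested at INR 1.0 and two days later at 3.0 spends roughly half of that interval in range.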

Keywords: anticoagulant agent, deep vein thrombosis, risk assessment, warfarin

Procedia PDF Downloads 263
1077 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique

Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina

Abstract:

The presented research relates to the development of a recently proposed technique for the formation of composite materials, such as optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on controlling the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained in the beginning of the 2000s, while the related theoretical description was only given in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics so as to provide given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulation which allow these parameters to be determined. It is shown that they can be deduced from data on the spatial distributions of diffusant concentrations and the average size of crystalline grains in glass-ceramics samples subjected to ion-exchange treatment. Measurements at a minimum of two temperatures, with two processing times at each temperature, are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li2O·SiO2. Cubic samples of the glass-ceramics (6×6×6 mm³) underwent ion exchange in a NaNO3 salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h).
The ion-exchange processing resulted in vitrification of the glass-ceramics in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find the profiles of diffusant concentration and average crystalline grain size. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of average grain size were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all of the above-mentioned ion-exchange conditions. As a result, the temperature dependences of the parameters which provided a reliable coincidence of the simulation and experimental data were found. This ensured adequate modeling of the glass-ceramics decrystallization process in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely, the concentration and average size of the crystalline grains.
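The coupled diffusion-dissolution picture described above can be caricatured in a few lines: diffusant enters from the surface, and grains shrink wherever the local concentration exceeds the solubility parameter β, at a rate set by γ. This is an illustrative toy under assumed dimensionless units and an explicit finite-difference scheme, not the authors' actual model:

```python
import numpy as np

def simulate(beta=0.1, gamma=5.0, nx=50, nt=2000, dt=1e-4):
    """Toy 1-D sketch of ion-exchange-induced decrystallization.

    beta  : dimensionless solubility threshold of the crystalline phase
    gamma : ratio of diffusion to dissolution time scales
    Returns diffusant concentration c(x) and average grain size r(x).
    """
    dx = 1.0 / nx
    c = np.zeros(nx)          # dimensionless diffusant concentration
    r = np.ones(nx)           # dimensionless average grain size
    c[0] = 1.0                # surface held at melt concentration
    for _ in range(nt):
        lap = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c[1:-1] += dt * lap   # explicit diffusion step (dt/dx^2 = 0.25, stable)
        c[0] = 1.0            # fixed surface boundary condition
        dissolving = c > beta # grains dissolve above the solubility threshold
        r[dissolving] = np.maximum(r[dissolving] - dt * gamma, 0.0)
    return c, r
```

Running this produces the qualitative behavior reported in the abstract: a vitrified (grain-free) subsurface layer that deepens with processing time, with untouched grains in the bulk.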

Keywords: diffusion, glass-ceramics, ion exchange, vitrification

Procedia PDF Downloads 269
1076 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains

Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe

Abstract:

The increasing digitalization of value chains can help companies handle the rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improving productivity, handling increasing time and cost pressure, and meeting the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits inherent in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies, and there is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists; this link is based on factors such as people, technology, and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors of people, technology, and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step, 'target definition', describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail.
The second step, 'analysis of the value chain', verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. The third step, the 'digital evaluation process', ensures the usefulness of digital adaptations with regard to their practicability and their integrability into the existing production system. Finally, in the fourth step, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. The validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production raises their inherent performance limits.

Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain

Procedia PDF Downloads 313
1075 Care Experience of a Female Breast Cancer Patient Undergoing Modified Radical Mastectomy

Authors: Ting-I Lin

Abstract:

Purpose: This article explores the care experience of a 34-year-old female breast cancer patient who was admitted to the intensive care unit after undergoing a modified radical mastectomy. The patient discovered a lump in her right breast during a self-examination and, after mammography and ultrasound-guided biopsy, was diagnosed with a malignant tumor in the right breast. The tumor measured 1.5 x 1.4 x 2 cm, and the patient underwent a modified radical mastectomy. Postoperatively, she exhibited feelings of inferiority due to changes in her appearance. Method: During the care period, we engaged in conversations, observations, and active listening, using Gordon's Eleven Functional Health Patterns for a comprehensive assessment. In collaboration with the critical care team, a psychologist, and an oncology case manager, we conducted an interdisciplinary discussion and reached a consensus on key nursing issues. These included pain related to postoperative tumor excision and disturbed body image due to changes in appearance after surgery. Result: During the care period, a private space was provided to encourage the patient to express her feelings about her altered body image. Communication was conducted through active listening and a non-judgmental approach. The patient's anxiety level, as measured by the depression and anxiety scale, decreased from moderate to mild, and she was able to sleep for 6-8 hours at night. The oncology case manager was invited to provide education on breast reconstruction using breast models and videos to both the patient and her husband. This helped rebuild the patient's confidence. With the patient's consent, a support group was arranged where a peer with a similar experience shared her journey, offering emotional support and encouragement. This helped alleviate the psychological stress and shock caused by the cancer diagnosis. 
Additionally, pain management was achieved by adjusting the analgesic dosage (Ultracet 37.5 mg/325 mg, one tablet Q6H PO), together with distraction techniques and acupressure therapy. These interventions helped the patient relax and alleviated her discomfort, maintaining her pain score at a manageable level of 3, indicating mild pain. Conclusion: Disturbance in body image can cause significant psychological stress for patients. Through support group discussions, encouraging the patient to express her feelings, and providing appropriate education on breast reconstruction and dressing techniques, the patient's self-concept was positively reinforced and her emotions were stabilized, leading to renewed self-worth and confidence.

Keywords: breast cancer, modified radical mastectomy, acupressure therapy, Gordon's 11 functional health patterns

Procedia PDF Downloads 28
1074 Insecticidal Activity of Bacillus thuringiensis Strain AH-2 against the Hemipteran Insect Pest Aphis gossypii and the Lepidopteran Insect Pests Plutella xylostella and Hyphantria cunea

Authors: Ajuna B. Henry

Abstract:

In recent decades, climate change has increased the demand for biological pesticides; more Bt strains are being discovered worldwide, some containing novel insecticidal genes while others have been modified through molecular approaches for increased yield, toxicity, and a wider host range. In this study, B. thuringiensis strain AH-2 (Bt-2) was isolated from soil and tested for insecticidal activity against Aphis gossypii (Hemiptera: Aphididae) and the lepidopteran insect pests fall webworm (Hyphantria cunea) and diamondback moth (Plutella xylostella). A commercial strain, B. thuringiensis subsp. kurstaki (Btk), and a chemical pesticide, imidacloprid (for Hemiptera) or chlorantraniliprole (for Lepidoptera), were used as positive controls, with the same media (without bacterial inoculum) as a negative control. For aphicidal activity, Bt-2 caused mortality rates of 70.2%, 78.1%, and 88.4% in third instar nymphs of A. gossypii (3N) at 10%, 25%, and 50% culture concentrations, respectively. Moreover, Bt-2 was effectively produced in a cost-effective medium (PB) supplemented with either glucose (PBG) or sucrose (PBS) and maintained high aphicidal efficacy, with 3N mortality rates of 85.9%, 82.9%, and 82.2% in TSB, PBG, and PBS media, respectively, at 50% culture concentration. Bt-2 also suppressed adult fecundity by 98.3%, compared to only 65.8% suppression by Btk at a similar concentration, but was slightly less effective than the chemical treatment, which caused 100% suppression. Partial purification of the 60-80% (NH4)2SO4 fraction of Bt-2 aphicidal proteins on an anion exchange (DEAE-FF) column revealed a 105 kDa aphicidal protein with LC50 = 55.0 ng/µL. For the lepidopteran pests, the chemical pesticide, Bt-2, and Btk cultures caused mortality of 86.7%, 60%, and 60%, respectively, in 3rd instar larvae of P. xylostella, and 96.7%, 80.0%, and 93.3% in 6th instar larvae of H. cunea, after 72 h of exposure.
When the entomopathogenic strains were cultured in the cost-effective PBG or PBS media, the insecticidal activity of all strains was not significantly different from that obtained with the commercial medium (TSB). Bt-2 caused mortality rates of 60.0%, 63.3%, and 50.0% against P. xylostella larvae and 76.7%, 83.3%, and 73.3% against H. cunea when grown in TSB, PBG, and PBS media, respectively. Bt-2 (grown in the cost-effective PBG medium) caused dose-dependent toxicity of 26.7%, 40.0%, and 63.3% against P. xylostella and 46.7%, 53.3%, and 76.7% against H. cunea at 10%, 25%, and 50% culture concentrations, respectively. The partially purified Bt-2 insecticidal protein fractions F1, F2, F3, and F4 (extracted at different ratios of organic solvent) caused low toxicity (50.0%, 40.0%, 36.7%, and 30.0%) against P. xylostella and relatively high toxicity (56.7%, 76.7%, 66.7%, and 63.3%) against H. cunea at 100 µg/g of artificial diet. SDS-PAGE analysis revealed that a 128 kDa protein is associated with the toxicity of Bt-2. Our results demonstrate moderate and strong larvicidal activity of Bt-2 against P. xylostella and H. cunea, respectively. Moreover, Bt-2 could potentially be produced in the cost-effective PBG medium, which makes it an effective alternative biocontrol strategy for reducing chemical pesticide application.
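An LC50 such as the 55.0 ng/µL reported above is estimated from a dose-mortality curve. A minimal log-linear interpolation sketch of the idea (illustrative only; a full bioassay analysis would use probit or logit regression rather than interpolation):

```python
import math

def lc50(concs, mortalities):
    """Estimate LC50 by linear interpolation of mortality against log10(dose).

    concs: doses (e.g. ng/µL); mortalities: fractions in [0, 1], in the same
    order, assumed to increase with dose. Hypothetical helper for illustration.
    """
    pts = sorted(zip((math.log10(c) for c in concs), mortalities))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if y0 <= 0.5 <= y1:  # the 50% mortality point lies in this interval
            x = x0 + (x1 - x0) * (0.5 - y0) / (y1 - y0)
            return 10 ** x
    raise ValueError("50% mortality is not bracketed by the data")
```

For instance, doses of 10 and 100 units producing 0% and 100% mortality place the interpolated LC50 at the log-midpoint, about 31.6 units.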

Keywords: biocontrol, insect pests, larvae/nymph mortality, cost-effective media, aphis gossypii, plutella xylostella, hyphantria cunea, bacillus thuringiensis

Procedia PDF Downloads 19
1073 Thermosensitive Hydrogel Development for Its Possible Application in Cardiac Cell Therapy

Authors: Lina Paola Orozco Marin, Yuliet Montoya Osorio, John Bustamante Osorno

Abstract:

Ischemic events can culminate in acute myocardial infarction, with irreversible cardiac lesions that cannot be restored due to the limited regenerative capacity of the heart. Cell therapy seeks to replace these injured or necrotic cells by transplanting healthy and functional cells. Among the therapeutic alternatives proposed by tissue engineering and cardiovascular regenerative medicine is the use of biomaterials to mimic the native extracellular medium, which is rich in proteins, proteoglycans, and glycoproteins. The selected biomaterials must provide structural support to the encapsulated cells to prevent their migration and death in the host tissue. In this context, the present research work focused on developing a natural thermosensitive hydrogel, its physical and chemical characterization, and the determination of its biocompatibility in vitro. The hydrogel was developed by mixing hydrolyzed bovine or porcine collagen at 2% w/v, chitosan at 2.5% w/v, and beta-glycerolphosphate at 8.5% w/w or 10.5% w/w under magnetic stirring at 4 °C. Once the hydrogels were obtained, their thermosensitivity and gelation time were determined by incubating the samples at 37 °C and evaluating them through the inverted tube method. The morphological characterization of the hydrogels was carried out through scanning electron microscopy, and the chemical characterization by means of infrared spectroscopy. Biocompatibility was determined using the MTT cytotoxicity test, according to the ISO 10993-5 standard, for the hydrogel's precursors using the fetal human ventricular cardiomyocyte cell line RL-14. RL-14 cells were also seeded on top of the hydrogels, and the supernatants were subcultured at different time points for observation under a bright-field microscope.
Four types of thermosensitive hydrogel were obtained, differing in composition and concentration: A1 (chitosan/bovine collagen/beta-glycerolphosphate 8.5% w/w), A2 (chitosan/porcine collagen/beta-glycerolphosphate 8.5% w/w), B1 (chitosan/bovine collagen/beta-glycerolphosphate 10.5% w/w), and B2 (chitosan/porcine collagen/beta-glycerolphosphate 10.5% w/w). A1 and A2 had a gelation time of 40 minutes, while B1 and B2 had a gelation time of 30 minutes at 37 °C. Electron micrographs revealed a three-dimensional internal structure with interconnected pores for all four types of hydrogel. This facilitates the exchange of nutrients and oxygen and the exit of metabolites, preserving a microenvironment suitable for cell proliferation. In the infrared spectra, it was possible to observe the interaction between the amide groups of the polymeric compounds and the phosphate groups of beta-glycerolphosphate. Finally, the biocompatibility tests indicated that the proliferation capacity of cells in contact with the hydrogel, or with each of its precursors, was not affected over a period of 16 days. These results show the potential of the hydrogel to increase the cell survival rate in the cardiac cell therapies under investigation, and they lay the foundations for its further characterization and biological evaluation in both in vitro and in vivo models.

Keywords: cardiac cell therapy, cardiac ischemia, natural polymers, thermosensitive hydrogel

Procedia PDF Downloads 190
1072 Modeling Geogenic Groundwater Contamination Risk with the Groundwater Assessment Platform (GAP)

Authors: Joel Podgorski, Manouchehr Amini, Annette Johnson, Michael Berg

Abstract:

One-third of the world’s population relies on groundwater for its drinking water. Natural geogenic arsenic and fluoride contaminate ~10% of wells. Prolonged exposure to high levels of arsenic can result in various internal cancers, while high levels of fluoride are responsible for the development of dental and crippling skeletal fluorosis. In poor urban and rural settings, the provision of drinking water free of geogenic contamination can be a major challenge. In order to efficiently apply limited resources in the testing of wells, water resource managers need to know where geogenically contaminated groundwater is likely to occur. The Groundwater Assessment Platform (GAP) fulfills this need by providing state-of-the-art global arsenic and fluoride contamination hazard maps as well as enabling users to create their own groundwater quality models. The global risk models were produced by logistic regression of arsenic and fluoride measurements using predictor variables of various soil, geological and climate parameters. The maps display the probability of encountering concentrations of arsenic or fluoride exceeding the World Health Organization’s (WHO) stipulated concentration limits of 10 µg/L or 1.5 mg/L, respectively. In addition to a reconsideration of the relevant geochemical settings, these second-generation maps represent a great improvement over the previous risk maps due to a significant increase in data quantity and resolution. For example, there is a 10-fold increase in the number of measured data points, and the resolution of predictor variables is generally 60 times greater. These same predictor variable datasets are available on the GAP platform for visualization as well as for use with a modeling tool. The latter requires that users upload their own concentration measurements and select the predictor variables that they wish to incorporate in their models. 
In addition, users can upload additional predictor variable datasets, either as features or as coverages. Such models can represent an improvement over the global models already supplied, since (a) users may be able to use their own, more detailed datasets of measured concentrations and (b) the various processes leading to arsenic and fluoride groundwater contamination can be isolated more effectively on a smaller scale, thereby resulting in a more accurate model. All maps, including user-created risk models, can be downloaded as PDFs. There is also the option to share data and to collaborate in a secure environment through the creation of communities. In summary, GAP provides users with the means to reliably and efficiently produce models specific to their region of interest by making available the latest datasets of predictor variables along with the necessary modeling infrastructure.
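The modeling approach described above, logistic regression of binary exceedance observations on environmental predictors, can be sketched in a few lines. The data here are synthetic and the two predictors are placeholders, not the platform's actual soil, geology, or climate datasets:

```python
import numpy as np

# Simulate wells: 1 = arsenic above the WHO limit of 10 µg/L, 0 = below.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))            # e.g. standardized aridity, soil pH
true_w = np.array([1.5, -1.0])         # "ground truth" used only to simulate
p_true = 1 / (1 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p_true).astype(float)   # observed exceedance (0/1)

# Fit logistic regression by plain gradient ascent on the log-likelihood.
w = np.zeros(2)
for _ in range(2000):
    pred = 1 / (1 + np.exp(-(X @ w)))
    w += 0.1 * X.T @ (y - pred) / n

def exceedance_prob(x, w=w):
    """Modeled probability that the WHO limit is exceeded at predictor vector x."""
    return float(1 / (1 + np.exp(-(np.asarray(x) @ w))))
```

On a real hazard map, each grid cell's predictor vector would be passed through `exceedance_prob` to produce the probability surface displayed by the platform.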

Keywords: arsenic, fluoride, groundwater contamination, logistic regression

Procedia PDF Downloads 348
1071 Comparative Analysis of Mechanical Properties of Paddy Rice for Different Variety-Moisture Content Interactions

Authors: Johnson Opoku-Asante, Emmanuel Bobobee, Joseph Akowuah, Eric Amoah Asante

Abstract:

In recent years, the issue of postharvest losses has become a serious concern in Sub-Saharan Africa. Postharvest technology development and adaptation need urgent attention, particularly for small and medium-scale rice farmers in Africa. However, to better develop any postharvest technology, knowledge of the mechanical properties of different varieties of paddy rice is vital, especially given the continuing development of new rice cultivars. The objectives of this research are to (1) determine the mechanical properties of the selected paddy rice varieties at varying moisture contents; (2) conduct a comparative analysis of the mechanical properties of the selected paddy rice for different variety-moisture content interactions; and (3) determine whether statistically significant differences exist between the mean values of the various variety-moisture content interactions. The mechanical properties of AGRA rice, CRI-Amankwatia, CRI-Enapa, and CRI-Dartey, four local varieties developed by the Crop Research Institute (CRI) of Ghana, are compared at 11.5%, 13.0%, and 16.5% dry-basis moisture content. The properties measured are sphericity, aspect ratio, grain mass, 1000-grain mass, bulk density, true density, porosity, and angle of repose. Samples were collected from the Kwadaso Agric College of the CRI in Kumasi. The samples were threshed manually and winnowed before the experiment. The moisture content was determined on a dry basis using a Moistex screw-type digital grain moisture meter. Other equipment used for data collection included vernier calipers and a Citizen electronic scale. A 4×3 factorial arrangement was used in a completely randomized design with three replications. Tukey's HSD test was conducted during data analysis to compare all possible pairwise combinations of the variety-moisture content interactions.
From the results, sphericity ranged from 0.391 for CRI-Dartey at 16.5% down to 0.377 for CRI-Enapa at 13.0%, whereas aspect ratio ranged from 0.298 for CRI-Dartey at 16.5% down to 0.269 for CRI-Enapa at 13.0%. For grain mass, AGRA rice at 13.0% recorded the highest value of 0.0312 g, and CRI-Enapa at 13.0% the lowest value of 0.0237 g. The 1000-grain mass ranged from 29.33 g for CRI-Amankwatia at 16.5% moisture content to 22.54 g for CRI-Enapa at 16.5%. Bulk density ranged from 654.0 kg/m³ for CRI-Amankwatia at 16.5% (highest) to 422.9 kg/m³ for CRI-Enapa at 11.5% (lowest). True density ranged from 1685.8 kg/m³ for AGRA rice at 13.0% moisture content to 1352.5 kg/m³ for CRI-Enapa at 16.5%. In the case of porosity, CRI-Enapa at 11.5% recorded the highest value of 70.83% and CRI-Amankwatia at 16.5% the lowest value of 55.88%. Finally, for the angle of repose, CRI-Amankwatia at 16.5% recorded the highest value of 47.3° and CRI-Enapa at 11.5% the lowest value of 34.27°. In all cases, the difference in mean values was less than the least significant difference, indicating no statistically significant differences between the means; technologies developed and adapted for one variety can therefore equally be used for the other varieties.
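The dimensionless shape indices and the porosity reported above follow standard definitions for grain properties. A short sketch of the usual formulas (grain length l, width w, thickness t in the same units; densities in kg/m³):

```python
def sphericity(l, w, t):
    """Geometric mean diameter over length: (l*w*t)**(1/3) / l (dimensionless)."""
    return (l * w * t) ** (1 / 3) / l

def aspect_ratio(l, w):
    """Grain width over grain length (dimensionless)."""
    return w / l

def porosity(bulk_density, true_density):
    """Percent void space in the bulk sample: (1 - rho_bulk / rho_true) * 100."""
    return (1 - bulk_density / true_density) * 100
```

For a slender grain 8 mm long and 2 mm wide and thick, sphericity is about 0.40 and aspect ratio 0.25, the same order as the values reported for these varieties.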

Keywords: angle of repose, aspect ratio, bulk density, porosity, sphericity, mechanical properties

Procedia PDF Downloads 98
1070 Comparative Review of Models for Forecasting Permanent Deformation in Unbound Granular Materials

Authors: Shamsulhaq Amin

Abstract:

Unbound granular materials (UGMs) are pivotal in ensuring long-term performance, especially in the layers beneath the surface of flexible pavements and other constructions. This study seeks to better understand the behavior of UGMs by examining popular models for predicting permanent deformation under various stress levels and numbers of load cycles. These models use variables such as the number of load cycles, stress levels, and material-specific features, and they were evaluated on their ability to accurately predict outcomes. The study showed that these factors play a crucial role in how well the models work; the research therefore highlights the need to consider a wide range of stress situations to predict the permanent deformation of UGMs more accurately. The research examined important aspects, such as the relationship between permanent deformation and the number of load applications, the rate at which deformation accumulates, and the shakedown effect, in two different types of UGM: granite and limestone. A detailed study was conducted over 100,000 load cycles, providing deep insights into how these materials behave. The level of applied stress, the number of load cycles, the density of the material, and the moisture present were identified as the main factors affecting permanent deformation, and a full understanding of these elements is vital for designing pavements that last long and withstand wear. A series of laboratory tests was performed to evaluate the mechanical properties of the materials and acquire model parameters, comprising gradation tests, CBR tests, and repeated load triaxial tests. The repeated load triaxial tests, in which various stress levels were applied, were crucial for studying the significant components that affect deformation and for estimating model parameters. In addition, certain model parameters were established by regression analysis, and optimization was conducted to improve the outcomes.
Afterward, the acquired material parameters were used to construct graphs for each model, and the graphs were compared to the outcomes obtained from the repeated load triaxial testing. Additionally, the models were evaluated to determine whether they reproduce the two inherent deformation phases of materials subjected to repetitive load: the initial post-compaction phase and the second phase of volumetric change. In this study, the use of log-log graphs was key to making the complex data easier to understand; this made the analysis clearer and the findings easier to interpret, adding both precision and depth to the research. The research provides important insight into selecting the right models for predicting how these materials will act under expected stress and load conditions. Moreover, it offers crucial information regarding the effects of load cycles, permanent deformation, and the shakedown effect on granite and limestone UGMs.
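A classic form for permanent strain accumulation in UGMs is a power law in the number of cycles, eps_p = A * N^b (e.g. the Sweere-type model), which plots as a straight line on the log-log axes mentioned above. A minimal fitting sketch under that assumed form (not necessarily one of the exact models compared in the study):

```python
import math

def fit_power_law(cycles, strains):
    """Least-squares fit of eps_p = A * N^b on log-log axes; returns (A, b)."""
    xs = [math.log(n) for n in cycles]
    ys = [math.log(e) for e in strains]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    # Slope of log(eps) vs log(N) gives b; the intercept gives log(A).
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b
```

Because the model is linear in log space, data generated exactly from a power law are recovered to machine precision, which is also why log-log plotting makes departures from the model (such as shakedown) easy to spot.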

Keywords: permanent deformation, unbound granular materials, load cycles, stress level

Procedia PDF Downloads 38
1069 Towards a Strategic Framework for State-Level Epistemological Functions

Authors: Mark Darius Juszczak

Abstract:

While epistemology, as a sub-field of philosophy, is generally concerned with theoretical questions about the nature of knowledge, the explosion in digital media technologies has resulted in an exponential increase in the storage and transmission of human information. That increase has resulted in a particular non-linear dynamic: digital epistemological functions are radically altering how and what we know. Neither the rate of that change nor its consequences have been well studied or taken into account in developing state-level strategies for epistemological functions. At the current time, US federal policy, like that of virtually all other countries, maintains clearly defined boundaries at the national level between various epistemological agencies, that is, agencies that, in one way or another, mediate the functional use of knowledge. These agencies take the form of patent and trademark offices, national library and archive systems, departments of education, agencies such as the FTC, university systems and regulations, military research agencies such as DARPA, federal scientific research agencies, medical and pharmaceutical accreditation agencies, federal funding for scientific research, and legislative committees and subcommittees that attempt to alter the laws governing epistemological functions. All of these agencies are constantly creating, analyzing, and regulating knowledge. Those processes are, at the most general level, epistemological functions: they act upon and define what knowledge is. At the same time, however, there are no high-level strategic epistemological directives or frameworks that define those functions. The only time in US history when a proxy state-level epistemological strategy existed was between 1961 and 1969, when the Kennedy Administration committed the United States to the Apollo program.
While that program had a singular technical objective as its outcome, that objective was so technologically advanced for its day and so complex that it required a massive redirection of state-level epistemological functions – in essence, a broad and diverse set of state-level agencies suddenly found themselves working together towards a common epistemological goal. This paper does not call for a repeat of the Apollo program. Rather, its purpose is to investigate the minimum structural requirements for a national state-level epistemological strategy in the United States. In addition, this paper also seeks to analyze how the epistemological work of the multitude of national agencies within the United States would be affected by such a high-level framework. This paper is an exploratory study of this type of framework. The primary hypothesis of the author is that such a function is possible but would require extensive re-framing and reclassification of traditional epistemological functions at the respective agency level. In much the same way that, for example, the DHS (Department of Homeland Security) evolved to respond to a new type of security threat facing the United States, it is theorized that a lack of coordination and alignment in epistemological functions will equally result in a strategic threat to the United States.

Keywords: strategic security, epistemological functions, epistemological agencies, Apollo program

Procedia PDF Downloads 77
1068 Semiotics of the New Commercial Music Paradigm

Authors: Mladen Milicevic

Abstract:

This presentation will address how the statistical analysis of digitized popular music influences music creation and emotionally manipulates consumers. Furthermore, it will deal with the semiological aspect of the uniformization of musical taste in order to predict the potential revenues generated by popular music sales. In the USA, we live in an age where most popular music (i.e. music that generates substantial revenue) has been digitized. It is safe to say that almost everything produced in the last 10 years is already digitized (available on iTunes, Spotify, YouTube, or some other platform). Depending on marketing viability and its potential to generate additional revenue, most of the “older” music is still being digitized. Once music is turned into a digital audio file, it can be computer-analyzed in all kinds of respects, and the same goes for the lyrics, because they also exist as a digital text file to which any kind of NCapture-like analysis may be applied. So, by employing statistical examination of different popular music metrics such as tempo, form, pronouns, introduction length, song length, archetypes, subject matter, and repetition of the title, the commercial result may be predicted. Polyphonic HMI (Human Media Interface) introduced the concept of the hit song science computer program in 2003. The company asserted that machine learning could create a music profile to predict hit songs from its audio features. Thus, it has been established that a successful pop song must: run at 100 bpm or more; have an 8-second intro; use the pronoun 'you' within 20 seconds of the start of the song; hit the middle-8 bridge between 2 minutes and 2 minutes 30 seconds; average 7 repetitions of the title; and create some expectation and fulfill it in the title.
For a country song: 100 bpm or less for a male artist; a 14-second intro; use of the pronoun 'you' within the first 20 seconds of the intro; a middle-8 bridge between 2 minutes and 2 minutes 30 seconds; 7 repetitions of the title; and an expectation created and fulfilled within 60 seconds. This approach to commercial popular music minimizes human influence when it comes to which “artist” a record label is going to sign and market. Twenty years ago, music experts in the A&R (Artists and Repertoire) departments of the record labels made personal aesthetic judgments based on their extensive experience in the music industry. Now, computer music-analysis programs are replacing them in an attempt to minimize the investment risk of panicking record labels, in an environment where nobody can predict the future of the recording industry. The impact on consumers' taste through the narrow bottleneck of this music selection by the record labels has had some very peculiar effects, not only on the taste of popular music consumers but also on the creative chops of music artists. The meaning of this semiological shift is the main focus of this research and paper presentation.
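The pop-song heuristics listed above can be encoded as a simple rule check. The sketch below is illustrative only: the field names and the example track are invented; the thresholds are taken directly from the abstract.

```python
# Toy encoding of the abstract's pop-song heuristics (bpm, intro length,
# pronoun timing, bridge placement, title repetitions). Field names and the
# sample track are hypothetical.

def meets_pop_heuristics(song):
    """Return which of the listed pop-song rules a track satisfies."""
    return {
        "tempo >= 100 bpm": song["bpm"] >= 100,
        "intro <= 8 s": song["intro_seconds"] <= 8,
        "'you' within 20 s": song["first_you_seconds"] <= 20,
        "bridge between 2:00 and 2:30": 120 <= song["bridge_seconds"] <= 150,
        "about 7 title repetitions": abs(song["title_repetitions"] - 7) <= 1,
    }

track = {"bpm": 104, "intro_seconds": 8, "first_you_seconds": 12,
         "bridge_seconds": 135, "title_repetitions": 7}
result = meets_pop_heuristics(track)
print(sum(result.values()), "of", len(result), "rules satisfied")
```

A real hit-prediction system would of course learn such thresholds from audio features rather than hard-code them; the point here is only that the published rules are mechanically checkable.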

Keywords: music, semiology, commercial, taste

Procedia PDF Downloads 393
1067 Direct Contact Ultrasound Assisted Drying of Mango Slices

Authors: E. K. Mendez, N. A. Salazar, C. E. Orrego

Abstract:

There is undoubted proof that increasing fruit intake lessens the risk of hypertension, coronary heart disease and stroke, and probable evidence that it lowers the risk of cancer. Proper fruit drying is an excellent alternative to extend shelf-life, ease commercialization, and obtain ready-to-eat healthy products or ingredients. The conventional way of drying is by forced hot-air convection. However, this process step often requires a very long residence time; furthermore, it is highly energy consuming and detrimental to product quality. Nowadays, the power ultrasound (US) technique is considered an emerging and promising technology for industrial food processing. Most published works dealing with US-assisted food drying have studied the effect of ultrasonic pre-treatment prior to air-drying and the airborne US conditions during dehydration. In this work a new approach was tested, taking into account drying time and two quality parameters of mango slices dehydrated by convection assisted by 20 kHz power US applied directly, using a holed plate as product support and sound-transmitting surface. During the drying of mango (Mangifera indica L.) slices (ca. 6.5 g, 0.006 m height and 0.040 m diameter), their weight was recorded every hour until the final moisture content (10.0±1.0% wet basis) was reached. After preliminary tests, optimization of three drying parameters - sonication time per half-hour (2, 5 and 8 minutes), air temperature (50-55-60 °C) and power (45-70-95 W) - was attempted by using a Box-Behnken design under the response surface methodology for optimal drying time, color parameters and rehydration rate of dried samples. Assays involved 17 experiments, including a quintuplicate of the central point. Dried samples with and without US application were packed in individual high-barrier plastic bags under vacuum, and then stored in the dark at 8 °C until their analysis. All drying assays and sample analyses were performed in triplicate.
US drying experimental data were fitted with nine models, among which the Verma model gave the best fit, with R² > 0.9999 and reduced χ² ≤ 0.000001. Significant reductions in drying time were observed for the assays that used shorter sonication intervals and high US power. At 55 °C, 95 W and 2 min/30 min of sonication, 10% moisture content was reached in 211 min, as compared with 320 min for the same test without US (blank). Rehydration rates (RR), defined as the ratio of rehydrated sample weight to dry sample weight, were also larger than those of the blanks and, in general, the higher the US power, the greater the RR. The direct-contact, intermittent US treatment of mango slices used in this work improves drying rates and dried-fruit rehydration ability. This technique can thus be used to reduce the energy costs and greenhouse gas emissions of fruit dehydration.
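For context, the Verma et al. thin-layer drying model mentioned above has the form MR(t) = a·exp(−kt) + (1 − a)·exp(−gt). The sketch below evaluates it with invented parameters (not the study's fitted values) and solves for the drying time at which a target moisture ratio is reached.

```python
import math

# Verma et al. thin-layer drying model: MR(t) = a*exp(-k*t) + (1-a)*exp(-g*t).
# Parameter values below are illustrative placeholders, not fitted values.

def verma_mr(t, a=0.8, k=0.015, g=0.004):
    """Moisture ratio at time t (minutes) under the Verma model."""
    return a * math.exp(-k * t) + (1 - a) * math.exp(-g * t)

def time_to_moisture(target_mr, lo=0.0, hi=2000.0):
    """Bisection for the drying time at which MR decays to target_mr."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if verma_mr(mid) > target_mr:
            lo = mid          # still too moist: move right
        else:
            hi = mid          # already dry enough: move left
    return (lo + hi) / 2

t_dry = time_to_moisture(0.10)   # time to reach a moisture ratio of 0.10
print(f"predicted drying time: {t_dry:.0f} min")
```

Bisection works here because MR(t) is strictly decreasing for positive a, k, g, so the target moisture ratio is crossed exactly once.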

Keywords: ultrasonic assisted drying, fruit drying, mango slices, contact ultrasonic drying

Procedia PDF Downloads 345
1066 Effects of Different Fungicide In-Crop Treatments on Plant Health Status of Sunflower (Helianthus annuus L.)

Authors: F. Pal-Fam, S. Keszthelyi

Abstract:

The phytosanitary condition of sunflower (Helianthus annuus L.) is endangered by several phytopathogenic agents, mainly microfungi such as Sclerotinia sclerotiorum, Diaporthe helianthi, Plasmopara halstedii and Macrophomina phaseolina. Several agrotechnical and chemical measures are available against them, for instance tolerant hybrids, crop rotation and in-crop chemical treatments. Different fungicide treatment methods are used on sunflower in Hungarian agricultural practice in the quest to obtain healthy and economic plant products, and many active ingredients are available in Hungarian sunflower protection. This study examined the effect of five fungicide active substances (found on the market) and three application modes (early; late; and early plus late treatments) in a total of 9 sample plots of 0.1 ha each. Five successive vegetation periods were investigated, between 2013 and 2017. The treatments were: 1) untreated control; 2) boscalid and dimoxystrobin, late treatment (July); 3) boscalid and dimoxystrobin, early treatment (June); 4) picoxystrobin and cyproconazole, early treatment; 5) picoxystrobin, cymoxanil and famoxadone, early treatment; 6) picoxystrobin and cyproconazole early, cymoxanil and famoxadone late; 7) picoxystrobin and cyproconazole early, picoxystrobin, cymoxanil and famoxadone late; 8) trifloxystrobin and cyproconazole, early treatment; and 9) trifloxystrobin and cyproconazole, both early and late. Owing to the very different yearly weather conditions, different phytopathogenic fungi were dominant in the particular years: Diaporthe and Alternaria in 2013; Alternaria and Sclerotinia in 2014 and 2015; Alternaria, Sclerotinia and Diaporthe in 2016; and Alternaria in 2017.
As a result of the treatments, infection frequency and infestation rate showed significant decreases compared to the control plot. There were no significant differences between the efficacies of the different fungicide mixes; all were almost equally effective against the phytopathogenic fungi. The most dangerous infection, Sclerotinia, was practically eliminated in all of the treatments. Among the single treatments, the late treatment applied in July was the least efficient, followed by the early treatments applied in June. The most efficient were the double treatments applied in both June and July, resulting in a 70-80% decrease in infection frequency and a 75-90% decrease in infestation rate compared with the control plot in the particular years. The lowest yield was observed in the control plot, followed by the late single treatment. The yield of the early single treatments was higher, while the double treatments showed the highest yields (18.3-22.5% higher than the control plot in particular years). In total, according to our five-year investigation, the most effective application mode is the double in-crop treatment per vegetation period, which is reflected in the yield surplus.
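The efficacy figures above are relative reductions of an infection metric on a treated plot versus the untreated control; a minimal sketch, with made-up plot values:

```python
# Percent decrease of an infection metric relative to the untreated control.
# The plot values below are invented illustrations, not the study's data.

def percent_decrease(control, treated):
    """Relative reduction (%) of a metric versus the control plot."""
    return 100.0 * (control - treated) / control

control_freq = 40.0            # infection frequency (% plants infected), hypothetical
double_treatment_freq = 10.0   # early (June) + late (July) application, hypothetical

reduction = percent_decrease(control_freq, double_treatment_freq)
print(f"infection frequency reduced by {reduction:.0f}%")
```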

Keywords: fungicides, treatments, phytopathogens, sunflower

Procedia PDF Downloads 141
1065 Exploring the Impact of Eye Movement Desensitization and Reprocessing (EMDR) And Mindfulness for Processing Trauma and Facilitating Healing During Ayahuasca Ceremonies

Authors: J. Hash, J. Converse, L. Gibson

Abstract:

Plant medicines are of growing interest for addressing mental health concerns. Ayahuasca, a traditional plant-based medicine, has established itself as a powerful way of processing trauma and precipitating healing and mood stabilization. Eye Movement Desensitization and Reprocessing (EMDR) is another treatment modality that aids in the rapid processing and resolution of trauma. We investigated group EMDR therapy, G-TEP, as a preparatory practice before Ayahuasca ceremonies to determine if the combination of these modalities supports participants in their journeys of letting go of past experiences negatively impacting mental health, thereby accentuating the healing of the plant medicine. We surveyed 96 participants (51 experimental G-TEP, 45 control grounding prior to their ceremony; age M=38.6, SD=9.1; F=57, M=34; white=39, Hispanic/Latinx=23, multiracial=11, Asian/Pacific Islander=10, other=7) in a pre-post, mixed-methods design. Participants were surveyed for demographic characteristics, symptoms of PTSD and cPTSD (International Trauma Questionnaire, ITQ), depression (Beck Depression Inventory, BDI), and stress (Perceived Stress Scale, PSS) before the ceremony and at the end of the ceremony weekend. Open-ended questions also inquired about their expectations of the ceremony and results at the end. No baseline differences existed between the control and experimental participants. Overall, participants reported a decrease in meeting the threshold for PTSD symptoms (p<0.01); surprisingly, the control group reported significantly fewer thresholds met for symptoms of affective dysregulation, χ²(1)=6.776, p<.01, negative self-concept, χ²(1)=7.122, p<.01, and disturbance in relationships, χ²(1)=9.804, p<.01, on subscales of the ITQ as compared to the experimental group. All participants also experienced a significant decrease in scores on the BDI, t(94)=8.995, p<.001, and PSS, t(91)=6.892, p<.001.
Similar to patterns of PTSD symptoms, the control group reported significantly lower scores on the BDI, t(65.115)=-2.587, p<.01, and a trend toward lower PSS, t(90)=-1.775, p=.079 (this was significant with a one-sided test at p<.05), compared to the experimental group following the ceremony. Qualitative interviews among participants revealed a potential explanation for these relatively higher levels of depression and stress in the experimental group following the ceremony. Many participants reported needing more time to process their experience to gain an understanding of the effects of the Ayahuasca medicine. Others reported a sense of hopefulness and understanding of the sources of their trauma and the necessary steps to heal moving forward. This suggests increased introspection and openness to processing trauma, therefore making them more receptive to their emotions. The integration process of an Ayahuasca ceremony is a week- to months-long process that was not accessible in this stage of research, yet it is an integral process to understanding the full effects of the Ayahuasca medicine following the closure of a ceremony. Our future research aims to assess participants weeks into their integration process to determine the effectiveness of EMDR, and if the higher levels of depression and stress indicate the initial reaction to greater awareness of trauma and receptivity to healing.
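The group comparisons above rely on independent-samples t statistics. A stdlib sketch of Welch's form (which does not assume equal variances), using invented score lists as stand-ins for the BDI data:

```python
import math
from statistics import mean, variance

# Welch's independent-samples t statistic and degrees of freedom.
# The two score lists are hypothetical, not the study's data.

def welch_t(x, y):
    """Welch's t statistic and Welch-Satterthwaite df for two samples."""
    nx, ny = len(x), len(y)
    vx, vy = variance(x), variance(y)       # sample variances (n-1 divisor)
    se2 = vx / nx + vy / ny                 # squared standard error of the difference
    t = (mean(x) - mean(y)) / math.sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

control = [18, 22, 15, 20, 17, 21]          # invented BDI-style scores
experimental = [25, 28, 24, 30, 26, 27]
t, df = welch_t(control, experimental)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The fractional df reported in the abstract (t(65.115)) is characteristic of exactly this Welch-Satterthwaite correction.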

Keywords: ayahuasca, EMDR, PTSD, mental health

Procedia PDF Downloads 65
1064 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology

Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal

Abstract:

Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring that microbiologically safe water is supplied at the customer’s tap. In order to simulate how chloramine behaves as it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations through a DWDS in Australia, using the Bentley commercial hydraulic package WaterGEMS. The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimizing these parameters to obtain the closest agreement with actual measured data in a real DWDS would result in cost reduction as well as reduced consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of the water quality parameters (i.e. temperature, pH, and initial mono-chloramine concentration) to maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios in an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to conduct the optimization of the three independent water quality parameters. High and low levels of the parameters were set as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature and initial mono-chloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS.
It was found that at a pH of 7.75, temperature of 34.16 °C, and initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network was minimized to 0.189; the optimum conditions for averaged water supply occurred at a pH of 7.71, temperature of 18.12 °C, and initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology has great potential for helping water treatment plant operators accurately estimate the mono-chloramine residual through a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology for other water samples.
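The RSM step above amounts to fitting a quadratic response surface for model RMSE over the water quality parameters and then searching it for the setting that minimizes RMSE. The sketch below only illustrates the search idea: the surface coefficients are invented (centered, for convenience, on the peak-supply optimum reported above), not the Design Expert fit.

```python
import itertools

# Hypothetical quadratic response surface for WQNM RMSE as a function of
# pH, temperature (C) and initial mono-chloramine concentration (mg/L).
# Coefficients are invented for illustration.

def rmse_surface(ph, temp, nh2cl):
    """Toy quadratic response surface; minimum by construction near the
    reported peak-supply optimum (pH 7.75, 34.16 C, 3.89 mg/L)."""
    return (0.189
            + 0.05 * (ph - 7.75) ** 2
            + 0.0004 * (temp - 34.16) ** 2
            + 0.02 * (nh2cl - 3.89) ** 2)

# Grid search within the (assumed) feasible ranges, mimicking the
# no-extrapolation constraint of the optimization.
grid = itertools.product(
    [7.0 + 0.05 * i for i in range(21)],     # pH 7.0 - 8.0
    [15 + i for i in range(26)],             # temperature 15 - 40 C
    [3.0 + 0.1 * i for i in range(21)],      # NH2Cl 3.0 - 5.0 mg/L
)
best = min(grid, key=lambda p: rmse_surface(*p))
print("optimum (pH, temp, NH2Cl):", best, "RMSE:", round(rmse_surface(*best), 3))
```

In the actual study the surface is fitted to experimental design points (Design Expert) and optimized analytically rather than by brute-force gridding, but the objective, minimum RMSE subject to box constraints, is the same.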

Keywords: chloramine decay, modelling, response surface methodology, water quality parameters

Procedia PDF Downloads 224
1063 Association between Obstetric Factors with Affected Areas of Health-Related Quality of Life of Pregnant Women

Authors: Cinthia G. P. Calou, Franz J. Antezana, Ana I. O. Nicolau, Eveliny S. Martins, Paula R. A. L. Soares, Glauberto S. Quirino, Dayanne R. Oliveira, Priscila S. Aquino, Régia C. M. B. Castro, Ana K. B. Pinheiro

Abstract:

Introduction: As an integral part of the health-disease process, gestation is a period in which the social insertion of women can influence, positively or negatively, the course of the pregnancy-puerperal cycle. Thus, evaluating the quality of life of this population can redirect the implementation of innovative practices in the quest to make them more effective and real for the promotion of more humanized care. This study explores the associations between obstetric factors and the affected areas of health-related quality of life of pregnant women at habitual risk. Methods: This is a cross-sectional, quantitative study conducted in three public facilities and a private service that provides prenatal care in the city of Fortaleza, Ceara, Brazil. The sample consisted of 261 pregnant women who underwent low-risk prenatal care and were interviewed from September to November 2014. The collection instruments were a questionnaire containing socio-demographic and obstetric variables, and the Brazilian version of the Mother Generated Index (MGI) scale, a specific and objective instrument consisting of a single sheet and subdivided into three stages. It allows identifying the areas of a pregnant woman's life that are most affected, which could go unnoticed by pre-formulated measurement instruments. The obstetric data, as well as the data from the application of the MGI scale, were compiled and analyzed with the Statistical Package for the Social Sciences (SPSS), version 20.0. After compilation, a descriptive analysis was carried out and associations between variables were tested. The tests applied were the Pearson chi-square and Fisher's exact test. The odds ratio was also calculated. Associations were considered statistically significant when the p (probability) value was less than or equal to a level of 5% (α = 0.05).
Results: The variables that negatively affected the quality of life of the pregnant women and presented a significant association with pollakiuria were gestational age (p = 0.022) and parity (p = 0.048). Episodes of nausea and vomiting also showed a significant correlation with gestational age (p = 0.0001). For stress, we observed a significant association with parity (p = 0.0001). In turn, emotional lability was associated with the type of delivery (p = 0.009). Conclusion: The health professionals involved in care for pregnant women can understand how the process of gestation is experienced, considering all its peculiar transformations; meet their individual needs, stimulating their autonomy and power of choice; and envisage the achievement of a better health-related quality of life from the perspective of health promotion.
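The Pearson chi-square test of independence used above can be computed by hand for a 2x2 table; a stdlib sketch with illustrative counts (not the study's data):

```python
# Pearson chi-square statistic for a 2x2 contingency table, computed from
# observed vs expected cell counts. The example counts are hypothetical.

def chi_square_2x2(table):
    """Chi-square statistic of independence for a 2x2 table (df = 1)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n                 # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    return chi2

# rows: symptom present / absent; columns: later / earlier gestational age
chi2 = chi_square_2x2([[30, 10], [20, 40]])
print(f"chi-square = {chi2:.2f}")   # compare with 3.84, the df=1 critical value at alpha=.05
```

Fisher's exact test, also used in the study, replaces this large-sample approximation when expected cell counts are small.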

Keywords: health-related quality of life, obstetric nursing, pregnant women, prenatal care

Procedia PDF Downloads 293
1062 The Effects of Alpha-Lipoic Acid Supplementation on Post-Stroke Patients: A Systematic Review and Meta-Analysis of Randomized Controlled Trials

Authors: Hamid Abbasi, Neda Jourabchi, Ranasadat Abedi, Kiarash Tajernarenj, Mehdi Farhoudi, Sarvin Sanaie

Abstract:

Background: Alpha-lipoic acid (ALA), a fat- and water-soluble, sulfur-containing coenzyme, has received considerable attention for its potential therapeutic role in diabetes, cardiovascular diseases, cancers, and central nervous system diseases. This investigation aims to evaluate the possible protective effects of ALA in stroke patients. Methods: This meta-analysis was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The PICO criteria were as follows: Population/Patients (P: stroke patients); Intervention (I: ALA); Comparison (C: control); Outcome (O: blood glucose, lipid profile, oxidative stress, inflammatory factors). Studies excluded from the analysis were in vitro, in vivo and ex vivo studies, case reports, and quasi-experimental studies. The Scopus, PubMed, Web of Science and EMBASE databases were searched until August 2023. Results: Of 496 records screened at the title/abstract stage, 9 studies were included in this meta-analysis. The sample sizes in the included studies varied between 28 and 90. Risk of bias was assessed with the second version of the Cochrane risk-of-bias (RoB) tool for randomized controlled trials (RCTs); 8 studies had a high risk of bias. Discussion: To the best of our knowledge, the present meta-analysis is the first study addressing the effectiveness of ALA supplementation in enhancing post-stroke metabolic markers, including lipid profile, oxidative stress, and inflammatory indices. It is imperative to acknowledge certain limitations inherent in this study. First of all, the type of treatment (oral or intravenous infusion) could alter the bioavailability of ALA. Our study had limited evidence regarding the impact of ALA supplementation on the included outcomes.
Therefore, further research is warranted to delve into the effects of ALA specifically on inflammation and oxidative stress. Funding: The research protocol was approved and supported by the Student Research Committee, Tabriz University of Medical Sciences (grant number: 72825). Registration: This study was registered in the International Prospective Register of Systematic Reviews (PROSPERO ID: CR42023461612).

Keywords: alpha-lipoic acid, lipid profile, blood glucose, inflammatory factors, oxidative stress, meta-analysis, post-stroke

Procedia PDF Downloads 63
1061 Variability and Stability of Bread and Durum Wheat for Phytic Acid Content

Authors: Gordana Branković, Vesna Dragičević, Dejan Dodig, Desimir Knežević, Srbislav Denčić, Gordana Šurlan-Momirović

Abstract:

Phytic acid is a major pool in the flux of phosphorus through agroecosystems and represents a sum equivalent to > 50% of all phosphorus fertilizer used annually. Nutrition rich in phytic acid can substantially decrease the absorption of micronutrients such as calcium, zinc, iron, manganese and copper, owing to the excretion of phytate salts by humans and non-ruminant animals such as poultry, swine and fish, which have very scarce phytase activity and consequently little ability to digest and utilize phytic acid; thus, phytic-acid-derived phosphorus in animal waste contributes to water pollution. The tested accessions consisted of 15 genotypes of bread wheat (Triticum aestivum L. ssp. vulgare) and 15 genotypes of durum wheat (Triticum durum Desf.). The trials were sown at three test sites in Serbia: Rimski Šančevi (RS) (45º19´51´´N; 19º50´59´´E), Zemun Polje (ZP) (44º52´N; 20º19´E) and Padinska Skela (PS) (44º57´N 20º26´E) during the two vegetation seasons 2010-2011 and 2011-2012. The experimental design was a randomized complete block design with four replications. The elementary plot consisted of 3 internal rows of 0.6 m2 area (3 × 0.2 m × 1 m). Grains were ground with a Laboratory Mill 120 Perten (“Perten”, Sweden) (particle size < 500 μm) and the flour was used for the analysis. Phytic acid grain content was determined spectrophotometrically with the Shimadzu UV-1601 spectrophotometer (Shimadzu Corporation, Japan). The objectives of this study were to determine: i) the variability and stability of phytic acid content among the selected genotypes of bread and durum wheat; ii) the predominant source of variation with regard to genotype (G), environment (E) and genotype × environment interaction (GEI) in the multi-environment trial; iii) the influence of climatic variables on the GEI for phytic acid content.
Analysis of variance showed that the variation in phytic acid content was predominantly influenced by the environment in durum wheat, while the GEI prevailed in bread wheat. Phytic acid content on a dry mass basis was in the range 14.21-17.86 mg g-1 (average 16.05 mg g-1) for bread wheat and 14.63-16.78 mg g-1 (average 15.91 mg g-1) for durum wheat. The average-environment coordination view of the genotype main effect plus genotype × environment interaction (GGE) biplot was used to select the most desirable genotypes for breeding for low phytic acid content, in the sense of good stability and a lower phytic acid level. The most desirable genotypes of bread and durum wheat for breeding for phytic acid were Apache and 37EDUYT /07 No. 7849. Models of climatic factors interpreted a very high percentage (> 91%) of the GEI for phytic acid content and included relative humidity in June, sunshine hours in April, mean temperature in April and winter moisture reserves for the bread wheat genotypes, as well as precipitation in June and April, maximum temperature in April and mean temperature in June for the durum wheat genotypes.
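As a simplified stand-in for the stability assessment behind the GGE analysis, one can rank genotypes by mean phytic acid content and by its spread across environments; the mg/g values below are invented, though kept within the ranges reported above:

```python
from statistics import mean, pstdev

# Rank genotypes by (mean, across-environment SD) of phytic acid content.
# Low mean AND low SD = desirable for low-phytic-acid breeding.
# All values are invented stand-ins; only "Apache" is a real genotype name
# taken from the text.

trials = {                      # genotype -> phytic acid (mg/g) per environment
    "Apache":     [14.4, 14.2, 14.6, 14.3, 14.5, 14.4],
    "Genotype B": [14.1, 16.9, 15.0, 17.5, 14.8, 16.2],
    "Genotype C": [16.9, 17.1, 17.4, 17.0, 17.2, 17.3],
}

summary = {
    g: (round(mean(v), 2), round(pstdev(v), 2))   # (mean, across-env SD)
    for g, v in trials.items()
}
best = min(summary, key=lambda g: sum(summary[g]))
print(summary)
print("most desirable:", best)
```

A GGE biplot does this jointly via a singular value decomposition of the environment-centered table, but the mean-versus-stability trade-off it visualizes is the same one this toy ranking captures.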

Keywords: genotype × environment interaction, phytic acid, stability, variability

Procedia PDF Downloads 394
1060 Multi-Criteria Selection and Improvement of Effective Design for Generating Power from Sea Waves

Authors: Khaled M. Khader, Mamdouh I. Elimy, Omayma A. Nada

Abstract:

Sustainable development is the nominal goal of most countries at present. In general, fossil fuels are the development mainstay of most world countries. Regrettably, the fossil fuel consumption rate is very high, and the world will soon face the problem of conventional fuel depletion. In addition, there are many problems of environmental pollution resulting from the emission of harmful gases and vapors during fuel burning. Thus, clean, renewable energy has become the main concern of most countries for filling the gap between available energy resources and their growing needs. There are many renewable energy sources such as wind, solar and wave energy. Energy can be obtained from the motion of sea waves almost all the time, whereas power generation from solar or wind energy is highly restricted to sunny periods or the availability of suitable wind speeds. Moreover, energy produced from sea wave motion is one of the cheapest types of clean energy, and its use guarantees safe environmental conditions. Cheap electricity can be generated from wave energy using different systems such as oscillating-body systems, the pendulum gate system, the Wave Dragon system and the oscillating water column device. In this paper, a multi-criteria model has been developed using the Analytic Hierarchy Process (AHP) to support the decision of selecting the most effective system for generating power from sea waves. This paper provides a widespread overview of the different design alternatives for sea wave energy converter systems. The considered design alternatives have been evaluated using the developed AHP model. The multi-criteria assessment reveals that the off-shore Oscillating Water Column (OWC) system is the most appropriate system for generating power from sea waves. The OWC system consists of a suitable hollow chamber at the shore which is completely closed except at its base, which has an open area for gathering moving sea waves.
Sea wave motion pushes the air up and down through a Wells turbine for generating power. Improving the power generation capability of the OWC system is one of the main objectives of this research. After investigating the effect of some design modifications, it has been concluded that selecting appropriate settings of effective design parameters, such as the number of layers of Wells turbine fans and the intermediate distance between the fans, can result in significant improvements. Moreover, a simple dynamic analysis of the Wells turbine is introduced. Furthermore, this paper compares the theoretical and experimental results of the built experimental prototype.
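The AHP model above derives system rankings from pairwise comparison judgments. A minimal sketch of the core computation, using the geometric-mean method for the priority vector and Saaty's consistency ratio, with a hypothetical 3x3 judgment matrix (not the paper's actual criteria or judgments):

```python
import math

# AHP priority vector and consistency check for a pairwise comparison
# matrix. The criteria and judgment values are hypothetical.

def ahp_weights(m):
    """Priority vector via the geometric-mean (row) method."""
    n = len(m)
    gm = [math.prod(row) ** (1.0 / n) for row in m]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(m, w):
    """Saaty's CR = CI / RI; judgments are usually accepted when CR < 0.10."""
    n = len(m)
    lam = sum(                               # estimate of lambda_max
        sum(m[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)
    ) / n
    ci = (lam - n) / (n - 1)                 # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
    return ci / ri

# criteria: e.g. cost, efficiency, environmental impact (hypothetical judgments)
m = [[1, 3, 5],
     [1 / 3, 1, 3],
     [1 / 5, 1 / 3, 1]]
w = ahp_weights(m)
cr = consistency_ratio(m, w)
print("weights:", [round(x, 3) for x in w], "CR:", round(cr, 3))
```

Scoring each wave-energy design alternative against such criterion weights, level by level, is what yields the overall AHP ranking that favored the off-shore OWC system.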

Keywords: renewable energy, oscillating water column, multi-criteria selection, Wells turbine

Procedia PDF Downloads 162
1059 Industrial Waste Multi-Metal Ion Exchange

Authors: Thomas S. Abia II

Abstract:

Intel Chandler Site has internally developed a first-of-kind (FOK) facility-scale wastewater treatment system to achieve multi-metal ion exchange. The process was carried out using a serial process train of carbon filtration, pH/ORP adjustment, and cationic exchange purification to treat dilute metal wastewater (DMW) discharged from a substrate packaging factory. Spanning a trial period of 10 months, a total of 3,271 samples were collected and statistically analyzed (average baseline ± standard deviation) to evaluate the performance of a 95-gpm, multi-reactor continuous copper ion exchange treatment system that was subsequently retrofitted for manganese ion exchange to meet environmental regulations. The system is also equipped with an inline acid and hot caustic regeneration system to rejuvenate exhausted IX resins and occasionally remove surface crud. Data generated from lab-scale studies were transferred to system operating modifications following multiple trial-and-error experiments. Although the DMW treatment system failed to meet internal performance specifications for manganese output, it was observed to remove the cation notwithstanding the prevalence of copper in the waste stream. Accordingly, the average manganese output declined from 6.5 ± 5.6 mg L⁻¹ pre-pilot to 1.1 ± 1.2 mg L⁻¹ post-pilot (83% baseline reduction). This milestone was achieved even though the average influent manganese to the DMW increased from 1.0 ± 13.7 mg L⁻¹ pre-pilot to 2.1 ± 0.2 mg L⁻¹ post-pilot (110% baseline uptick). Likewise, the pre-trial and post-trial average influent copper values to the DMW were 22.4 ± 10.2 mg L⁻¹ and 32.1 ± 39.1 mg L⁻¹, respectively (43% baseline increase). As a result, the pre-trial and post-trial average copper output values were 0.1 ± 0.5 mg L⁻¹ and 0.4 ± 1.2 mg L⁻¹, respectively (300% baseline uptick).
Conclusively, the operating pH range upstream of treatment (between 3.5 and 5) was shown to be the largest single point of influence for optimizing manganese uptake during multi-metal ion exchange. However, the high variability of the influent copper-to-manganese ratio was observed to adversely impact system functionality. This paper discusses the operating parameters, such as pH and oxidation-reduction potential (ORP), that were shown to significantly influence the functional versatility of the ion exchange system. It also discusses limitations of the treatment system, such as influent copper-to-manganese ratio variations, operational configuration, waste by-product management, and system recovery requirements, to provide a balanced assessment of the multi-metal ion exchange process. The take-away is an analysis of the overall feasibility of ion exchange for metals manufacturing facilities that lack the capability to expand hardware due to real estate restrictions, aggressive schedules, or budgetary constraints.
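The baseline-change percentages quoted above follow directly from the pre- and post-pilot averages; a minimal arithmetic sketch (mean values only, spreads omitted; the helper name is ours):

```python
def pct_change(pre: float, post: float) -> float:
    """Percent change of the post-pilot average relative to the pre-pilot baseline."""
    return (post - pre) / pre * 100.0

# Concentrations quoted in the abstract, in mg per litre (means only).
mn_out = pct_change(6.5, 1.1)      # manganese output: about -83, i.e. the 83% reduction
mn_in = pct_change(1.0, 2.1)       # manganese influent: 110% uptick
cu_in = pct_change(22.4, 32.1)     # copper influent: about 43% increase
cu_out = pct_change(0.1, 0.4)      # copper output: 300% uptick
```
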

Keywords: copper, industrial wastewater treatment, multi-metal ion exchange, manganese

Procedia PDF Downloads 143
1058 Developing Creative and Critically Reflective Digital Learning Communities

Authors: W. S. Barber, S. L. King

Abstract:

This paper is a qualitative case study analysis of the development of a fully online learning community of graduate students through arts-based community building activities. With increasing numbers and types of online learning spaces, it is incumbent upon educators to continue to push the edge of what best practices look like in digital learning environments. In digital learning spaces, instructors can no longer be seen as purveyors of content knowledge to be examined at the end of a set course by a final test or exam. The rapid and fluid dissemination of information via Web 3.0 demands that we reshape our approach to teaching and learning, from one that is content-focused to one that is process-driven. Rather than having instructors as formal leaders, today’s digital learning environments require us to share expertise, as it is the collective experiences and knowledge of all students together with the instructors that help to create a very different kind of learning community. This paper focuses on innovations pursued in a 36-hour, 12-week graduate course in higher education entitled “Critical and Reflective Practice”. The authors chronicle their journey to developing a fully online learning community (FOLC) by emphasizing the elements of social, cognitive, emotional and digital spaces that form a moving interplay through the community. In this way, students embrace anywhere, anytime learning and often take the learning, as well as the relationships they build and skills they acquire, beyond the digital class into real world situations. We argue that in order to increase student online engagement, pedagogical approaches need to stem from two primary elements, creativity and critical reflection, which are essential pillars upon which instructors can co-design learning environments with students. The theoretical framework for the paper is based on the interaction and interdependence of Creativity, Intuition, Critical Reflection, Social Constructivism and FOLCs. 
By leveraging students’ embedded familiarity with a wide variety of technologies, this case study of a graduate-level course on critical reflection in education examines how relationships, quality of work produced, and student engagement can improve through the use of creative and imaginative pedagogical strategies. The authors examine their professional pedagogical strategies through the lens of the teacher as facilitator, guide and co-designer. In a world where students can easily search for and organize information as self-directed processes, creativity and connection can at times be lost in the digitized course environment. The paper concludes by posing further questions as to how institutions of higher education may be challenged to restructure their credit-granting courses into more flexible modules, and how students need to be considered an important part of assessment and evaluation strategies. By introducing creativity and critical reflection as central features of digital learning spaces, notions of best practices in digital teaching and learning emerge.

Keywords: online, pedagogy, learning, communities

Procedia PDF Downloads 404
1057 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have played an indispensable role in controlling illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have adapted defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Then, chosen colonies are grown on media containing antibiotic(s), using micro-diffusion discs (this second culturing stage also lasts 24 h), in order to determine their susceptibility. Other methods, such as genotyping, the E-test and automated systems, have also been developed for testing antimicrobial susceptibility. Most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. Modern infrared (IR) spectrometers with high spectral resolution enable measuring unprecedented biochemical information from cells at the molecular level. Moreover, new bioinformatics analyses combined with IR spectroscopy form a powerful technique that enables the detection of structural changes associated with resistance. 
The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The UTI E. coli bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories in Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, were promising and showed that by using infrared spectroscopic techniques together with multivariate analysis, it is possible to classify the tested bacteria into sensitive and resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing the FTIR microscopy technique as a rapid and reliable method for identifying antibiotic susceptibility.
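The classification step can be illustrated with a toy model: represent each spectrum as a feature vector and assign it to the nearest class centroid. This is only a sketch of the multivariate-analysis idea; the synthetic "spectra", the 0.5 intensity shift and the function names below are invented for illustration and are not the study's actual algorithm or data:

```python
import math
import random

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(spectrum, centroids):
    """Assign a spectrum to the class whose centroid is nearest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(spectrum, centroids[label]))

# Synthetic "spectra": resistant strains shifted upward in every wavenumber bin.
random.seed(0)
sensitive = [[random.gauss(0.0, 0.1) for _ in range(10)] for _ in range(50)]
resistant = [[random.gauss(0.5, 0.1) for _ in range(10)] for _ in range(50)]
cents = {"sensitive": centroid(sensitive), "resistant": centroid(resistant)}

# Resubstitution accuracy on the synthetic data (a real study would cross-validate).
hits = sum(classify(s, cents) == "sensitive" for s in sensitive)
hits += sum(classify(r, cents) == "resistant" for r in resistant)
accuracy = hits / 100
```

On such well-separated synthetic classes the toy model is trivially accurate; real FTIR spectra overlap heavily, which is why the study needs advanced multivariate methods.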

Keywords: antibiotics, E.coli, FTIR, multivariate analysis, susceptibility, UTI

Procedia PDF Downloads 171
1056 Teacher’s Role in the Process of Identity Construction in Language Learners

Authors: Gaston Bacquet

Abstract:

The purpose of this research is to explore how language and culture shape a learner’s identity as they immerse themselves in the world of second language learning and how teachers can assist in the process of identity construction within a classroom setting. The study will be conducted as an in-classroom ethnography, using a qualitative methods approach and analyzing students’ experiences as language learners, their degree of investment, inclusion/exclusion, and attitudes, both towards themselves and their social context; the research question the study will attempt to answer is: What kind of pedagogical interventions are needed to help language learners in the process of identity construction so they can offset unequal conditions of power and gain further social inclusion? The following methods will be used for data collection: i) Questionnaires to investigate learners’ attitudes and feelings in different areas divided into four strands: themselves, their classroom, learning English and their social context. ii) Participant observations, conducted in a naturalistic manner. iii) Journals, which will be used in two different ways: on the one hand, learners will keep semi-structured, solicited diaries to record specific events as requested by the researcher (event-contingent). On the other, the researcher will keep his journal to maintain a record of events and situations as they happen to reduce the risk of inaccuracies. iv) Person-centered interviews, which will be conducted at the end of the study to unearth data that might have been occluded or be unclear from the methods above. The interviews will aim at gaining further data on experiences, behaviors, values, opinions, feelings, knowledge and sensory, background and demographic information. 
This research seeks to understand issues of socio-cultural identities and thus make a significant contribution to knowledge in this area by investigating the type of pedagogical interventions needed to assist language learners in the process of identity construction to achieve further social inclusion. It will also have applied relevance for those working with diverse student groups, especially taking our present social context into consideration: we live in a highly mobile world, with migrants relocating to wealthier, more developed countries that pose their own particular set of challenges for these communities. This point is relevant because an individual’s insight and understanding of their own identity shape their relationship with the world and their ability to continue constructing this relationship. At the same time, because a relationship is influenced by power, the goal of this study is to help learners feel and become more empowered by increasing their linguistic capital, which we hope might result in a greater ability to integrate themselves socially. Exactly how this help will be provided will vary as data is unearthed through questionnaires, focus groups and the actual participant observations being carried out.

Keywords: identity construction, second-language learning, investment, second-language culture, social inclusion

Procedia PDF Downloads 103
1055 Structural Balance and Creative Tensions in New Product Development Teams

Authors: Shankaran Sitarama

Abstract:

New product development (NPD) involves team members coming together and working in teams to come up with innovative solutions to problems, resulting in new products. Thus, a core attribute of a successful NPD team is its creativity and innovation. Teams need to be creative as a group, generating a breadth of ideas and innovative solutions that solve or address the problem they are targeting and meet the user’s needs. They also need to be very efficient in their teamwork as they work through the various stages of the development of these ideas, resulting in a proof-of-concept (POC) implementation or a prototype of the product. There are two distinctive traits that the teams need to have: one is ideational creativity, and the other is effective and efficient teamwork. Each of these traits causes multiple types of tensions in the teams, and these tensions are reflected in the team dynamics. Ideational conflicts arising out of debates and deliberations increase the collective knowledge and affect team creativity positively. However, the same trait of challenging each other’s viewpoints might lead the team members to be disruptive, resulting in interpersonal tensions, which in turn lead to less than efficient teamwork. Teams that foster and effectively manage these creative tensions are successful, and teams that are not able to manage these tensions show poor team performance. In this paper, we explore these tensions as they manifest in the team communication social network and propose a Creative Tension Balance index, along the lines of the degree of balance in social networks, that has the potential to highlight successful (and unsuccessful) NPD teams. Team communication reflects the team dynamics among team members and is the data set for analysis. 
The emails between the members of the NPD teams are processed through a semantic analysis algorithm (latent semantic analysis, LSA) to analyze the content of communication, and a semantic similarity analysis is used to arrive at a social network graph that depicts the communication amongst team members based on the content of communication. This social network is subjected to traditional social network analysis methods to arrive at established metrics as well as structural balance analysis metrics. Traditional structural balance is extended to include team interaction pattern metrics to arrive at a Creative Tension Balance (CTB) metric that effectively captures the creative tensions and tension balance in teams. This CTB metric captures the signatures of successful and unsuccessful (dissonant) NPD teams. The dataset for this research study includes 23 NPD teams spread out over multiple semesters; the CTB metric is computed for each team and used to identify the most successful and unsuccessful teams by classifying them into low, medium and high performing teams. The results are correlated to the team reflections (for team dynamics and interaction patterns), the team self-evaluation feedback surveys (for teamwork metrics) and team performance through a comprehensive team grade (for high and low performing team signatures).
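The degree-of-balance notion referenced above can be sketched on a signed graph: a closed triad is balanced when the product of its three edge signs is positive, and the degree of balance is the balanced fraction of all closed triads. A minimal sketch (the member names and edge signs are invented; the paper's CTB metric additionally folds in interaction-pattern terms):

```python
from itertools import combinations

def degree_of_balance(signs):
    """signs: dict mapping frozenset({u, v}) -> +1 or -1 for each edge.
    Returns the fraction of closed triads whose sign product is positive."""
    nodes = set().union(*signs)
    balanced = total = 0
    for a, b, c in combinations(sorted(nodes), 3):
        edges = [frozenset(p) for p in ((a, b), (b, c), (a, c))]
        if all(e in signs for e in edges):          # only closed triads count
            total += 1
            product = 1
            for e in edges:
                product *= signs[e]
            balanced += product > 0
    return balanced / total if total else 1.0

# Four team members; one antagonistic tie makes two of the four triads unbalanced.
team = {frozenset(p): +1 for p in [("ann", "bob"), ("bob", "cal"), ("ann", "cal"),
                                   ("cal", "dee"), ("ann", "dee")]}
team[frozenset(("bob", "dee"))] = -1
```

Here `degree_of_balance(team)` is 0.5: the triads without the negative bob–dee tie are balanced, the two containing it are not.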

Keywords: team dynamics, social network analysis, new product development teamwork, structural balance, NPD teams

Procedia PDF Downloads 79
1054 Electrochemical Performance of Femtosecond Laser Structured Commercial Solid Oxide Fuel Cells Electrolyte

Authors: Mohamed A. Baba, Gazy Rodowan, Brigita Abakevičienė, Sigitas Tamulevičius, Bartlomiej Lemieszek, Sebastian Molin, Tomas Tamulevičius

Abstract:

Solid oxide fuel cells (SOFC) efficiently convert hydrogen to energy without producing any disturbances or contaminants. The core of the cell is the electrolyte. For improving the performance of electrolyte-supported cells, it is desirable to extend the available exchange surface area by micro-structuring the electrolyte with laser-based micromachining. This study investigated the electrochemical performance of cells micromachined using a femtosecond laser. A commercial ceramic SOFC (Elcogen AS) with a total thickness of 400 μm was structured by a 1030 nm wavelength Yb:KGW fs-laser Pharos (Light Conversion) using a 100 kHz repetition frequency and 290 fs pulse length, scanning with a galvanometer scanner (ScanLab) and focusing with an f-theta telecentric lens (Sill Optics). The sample height was positioned using a motorized z-stage. The microstructures were formed using laser spiral trepanning in the Ni/YSZ anode-supported membrane, at the central part of the ceramic piece, in a 5.5 mm diameter region of the active area of the cell. The whole surface was drilled with 275 µm diameter holes spaced 275 µm apart. The machining processes were carried out under ambient conditions. The microstructural effects of the femtosecond laser treatment on the electrolyte surface were investigated prior to the electrochemical characterisation using a Quanta 200 FEG (FEI) scanning electron microscope (SEM). A Novocontrol Alpha-A was used for electrochemical impedance spectroscopy in a symmetrical cell configuration with an excitation amplitude of 25 mV and a frequency range of 1 MHz to 0.1 Hz. The fuel cell characterization was performed on an open flanges test setup by Fiaxell. The cell was electrically contacted using nickel mesh on the anode side and Au mesh on the cathode side. The cell was placed in a Kittec furnace with a PID temperature controller. The wires were connected to a Solartron 1260/1287 frequency analyzer for the impedance and current-voltage characterization. 
In order to determine the impact of the anode's microstructure on the performance of the commercial cells, the acquired results were compared to cells with an unstructured anode. Geometrical studies verified that the depth of the holes increased linearly with laser energy and the number of scans; on the other hand, it decreased as the scanning speed increased. The electrochemical analysis demonstrates that the open circuit voltage (OCV) values of the two cells are equal. Further, the initial slope of the modified cell decreases to 0.209, from 0.253 for the unmodified cell, revealing that the surface modification considerably decreases energy loss. Moreover, the maximum power densities for the microstructured cell and the reference cell are 1.45 and 1.16 W cm⁻², respectively.

Keywords: electrochemical performance, electrolyte-supported cells, laser micro-structuring, solid oxide fuel cells

Procedia PDF Downloads 67
1053 The Ephemeral Re-Use of Cultural Heritage: The Incorporation of the Festival Phenomenon Within Monuments and Archaeological Sites in Lebanon

Authors: Joe Kallas

Abstract:

It is now widely accepted that the preservation of cultural heritage must go beyond simple restoration and renovation actions. While some historic monuments have been preserved for millennia, many of them, less important or simply neglected because of lack of money, have disappeared. As a result, the adaptation of monuments and archaeological sites to new functions allows them to 'survive'. Temporary activities, or 'ephemeral' re-use, are increasingly recognized as a means of revitalizing deprived areas and enhancing historic sites that have become obsolete. They have the potential to increase economic and cultural value while making the best use of existing resources. However, there are often conservation and preservation issues related to the implementation of this type of re-use, which can threaten the integrity and authenticity of archaeological sites and monuments if not properly managed. This paper aims to provide a better understanding of the ephemeral re-use of heritage, and more specifically of the incorporation of the festival phenomenon within the monuments and archaeological sites in Lebanon, a topic that has not yet been sufficiently studied. It seeks to determine the elements that compose this phenomenon, in order to analyze it and to trace its good practices, by comparing international case studies with important national cases: the International Festival of Baalbek, the International Festival of Byblos and the International Festival of Beiteddine. Various factors have been studied and analyzed in order to best respond to the main research questions of this paper: 'How can we preserve the integrity of sites and monuments after the integration of an ephemeral function? And what are the preventive conservation measures to be taken when holding festivals in archaeological sites with fragile structures?' 
The impacts of the technical problems were first analyzed using various data, and more particularly the effects of mass tourism, the integration of temporary installations, sound vibrations, poorly studied lighting, and even the mystification of heritage. Unfortunately, the DGA (General Direction of Antiquities in Lebanon) does not specify any frequency limit for the sound vibrations emitted by the speakers during musical festivals. In addition, there is no requirement from its part regarding the installation of lighting systems in historic monuments, and no monitoring is done in situ, due to the lack of awareness of the impact that could be generated by such interventions and the lack of materials and tools needed for the monitoring process. The study and analysis of the various data mentioned above led us to the main objective of this paper, which is the establishment of a list of recommendations. This list defines various preventive conservation measures to be taken during the holding of festivals within cultural heritage sites in Lebanon. We strongly hope that this paper will serve as an awareness document, prompting consideration of several previously neglected factors, in order to improve conservation practices in archaeological sites and monuments during the incorporation of the festival phenomenon.

Keywords: archaeology, authenticity, conservation, cultural heritage, festival, historic sites, integrity, monuments, tourism

Procedia PDF Downloads 118
1052 Minding the Gap: Consumer Contracts in the Age of Online Information Flow

Authors: Samuel I. Becher, Tal Z. Zarsky

Abstract:

The digital world has become part of our DNA. The way e-commerce, human behavior, and law interact and affect one another is rapidly and significantly changing. Among other things, the internet equips consumers with a variety of platforms to share information in a volume we could not imagine before. As part of this development, online information flows allow consumers to learn about businesses and their contracts in an efficient and quick manner. Consumers can become informed by the impressions that other, experienced consumers share and spread. In other words, consumers may familiarize themselves with the contents of contracts through the experiences that other consumers have had. Online and offline, the relationships between consumers and businesses are most frequently governed by consumer standard form contracts. For decades, such contracts have been assumed to be one-sided and biased against consumers. Consumer law seeks to alleviate this bias and empower consumers. Legislatures, consumer organizations, scholars, and judges are constantly looking for clever ways to protect consumers from unscrupulous firms and unfair behaviors. While consumer-business relationships are theoretically administered by standardized contracts, firms do not always follow these contracts in practice. At times, there is a significant disparity between what the written contract stipulates and what consumers experience de facto. That is, there is a crucial gap (“the Gap”) between how firms draft their contracts on the one hand, and how firms actually treat consumers on the other. Interestingly, the Gap is frequently manifested by deviation from the written contract in favor of consumers. In other words, firms often exercise a lenient approach in spite of the stringent written contracts they draft. This essay examines whether, counter-intuitively, policy makers should add firms’ leniency to the growing list of firms’ suspicious behaviors. 
At first glance, firms should be allowed, if not encouraged, to exercise leniency. Many legal regimes are looking for ways to cope with unfair contract terms in consumer contracts. Naturally, therefore, consumer law should enable, if not encourage, firms’ lenient practices. Firms’ willingness to deviate from their strict contracts in order to benefit consumers seems like a sensible approach, and such behavior apparently should not be second-guessed. However, at times online tools, firms’ behaviors and human psychology result in a toxic mix. Beneficial and helpful online information should be treated with due caution, as it may occasionally have surprising and harmful qualities. In this essay, we illustrate that technological changes turn the Gap into a key component in consumers’ understanding, or misunderstanding, of consumer contracts. In short, a Gap may distort consumers’ perception and undermine rational decision-making. Consequently, this essay explores whether, counter-intuitively, consumer law should sanction firms that create a Gap and use it. It examines when firms’ leniency should be considered manipulative or exercised in bad faith. It then investigates whether firms should be allowed to enforce the written contract even if they deliberately and consistently deviated from it.

Keywords: consumer contracts, consumer protection, information flow, law and economics, law and technology, paper deal v firms' behavior

Procedia PDF Downloads 198
1051 Cost Based Analysis of Risk Stratification Tool for Prediction and Management of High Risk Choledocholithiasis Patients

Authors: Shreya Saxena

Abstract:

Background: Choledocholithiasis is a common complication of gallstone disease. Risk scoring systems exist to guide the need for further imaging or endoscopy in managing choledocholithiasis. We completed an audit to review the American Society for Gastrointestinal Endoscopy (ASGE) scoring system for prediction and management of choledocholithiasis against the current practice at a tertiary hospital to assess its utility in resource optimisation. We have now conducted a cost-focused sub-analysis on patients categorized as high-risk for choledocholithiasis according to the guidelines, to determine any associated cost benefits. Method: Data collection from our prior audit was used to retrospectively identify thirteen patients considered high-risk for choledocholithiasis. Their ongoing management was mapped against the guidelines. Individual costs for the key investigations were obtained from our hospital financial data. Total costs for the different management pathways identified in clinical practice were calculated and compared against predicted costs associated with recommendations in the guidelines. We excluded the cost of laparoscopic cholecystectomy and considered a set figure for per-day hospital admission related expenses. Results: Based on our previous audit data, we identified a 77% positive predictive value for the ASGE risk stratification tool to determine patients at high risk of choledocholithiasis. 46% (6/13) had a magnetic resonance cholangiopancreatography (MRCP) prior to endoscopic retrograde cholangiopancreatography (ERCP), whilst 54% (7/13) went straight for ERCP. The average length of stay in the hospital was 7 days, with an additional day and an MRCP cost of £328.00 (versus £117.00 for ERCP) for patients awaiting an MRCP prior to ERCP. Per-day hospital admission was valued at £838.69. When calculating total cost, we assumed all patients had admission bloods and ultrasound done as the gold standard. 
In doing an MRCP prior to ERCP, there was a 130% increase in cost incurred (£580.04 vs £252.04) per patient. When also considering hospital admission and the average length of stay, this amounted to an additional £1,166.69 per patient. We then calculated the exact costs incurred by the department, over a three-month period, for all patients, for key investigations or procedures done in the management of choledocholithiasis. This was compared to an estimated cost derived from the recommended pathways in the ASGE guidelines. Overall, an 81% saving (£2,048.45) was associated with following the guidelines compared to clinical practice. Conclusion: MRCP is the most expensive test associated with the diagnosis and management of choledocholithiasis. The ASGE guidelines recommend endoscopy without an MRCP in patients stratified as high-risk for choledocholithiasis. Our audit, which focused on assessing the utility of the ASGE risk scoring system, showed it to be relatively reliable for identifying high-risk patients. Our cost analysis has shown significant savings per patient, both in investigation costs and in the average length of stay, associated with direct endoscopy rather than an additional MRCP; part of this is because of the increased average length of stay associated with waiting for an MRCP. The above data supports the ASGE guidelines for the management of patients at high risk of choledocholithiasis from a cost perspective. The only caveat is our small data set, which may impact the validity of our average length of hospital stay figures and hence the total cost calculations.
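The per-patient arithmetic above can be reproduced directly from the figures quoted in this abstract (investigation costs per pathway plus one extra bed-day for the MRCP-first route); a minimal sketch:

```python
# Figures quoted in the abstract (GBP).
DIRECT_ERCP = 252.04   # investigations per patient, straight-to-ERCP pathway
MRCP_FIRST = 580.04    # investigations per patient when an MRCP precedes ERCP
BED_DAY = 838.69       # cost of one additional day of admission

extra_tests = MRCP_FIRST - DIRECT_ERCP           # 328.00 more in investigations
pct_increase = extra_tests / DIRECT_ERCP * 100   # about 130% increase
extra_total = extra_tests + BED_DAY              # 1166.69 including the extra bed-day
```
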

Keywords: cost-analysis, choledocholithiasis, risk stratification tool, general surgery

Procedia PDF Downloads 98
1050 Prevalence of Antibiotic Resistant Enterococci in Treated Wastewater Effluent in Durban, South Africa and Characterization of Vancomycin and High-Level Gentamicin-Resistant Strains

Authors: S. H. Gasa, L. Singh, B. Pillay, A. O. Olaniran

Abstract:

Wastewater treatment plants (WWTPs) have been implicated as the leading reservoir for antibiotic resistant bacteria (ARB), including Enterococcus spp., and antibiotic resistance genes (ARGs) worldwide. Enterococci are a group of clinically significant bacteria that have gained much attention as a result of their antibiotic resistance. They play a significant role as a principal cause of nosocomial infections and in the dissemination of antimicrobial resistance genes in the environment. The main objective of this study was to ascertain the role of WWTPs in Durban, South Africa as potential reservoirs for antibiotic resistant Enterococci (ARE) and their related ARGs. Furthermore, the antibiogram and resistance gene profile of Enterococci species recovered from treated wastewater effluent and receiving surface water in Durban were also investigated. Using the membrane filtration technique, Enterococcus selective agar and selected antibiotics, ARE were enumerated in samples (influent, activated sludge, before chlorination and final effluent) collected from two WWTPs, as well as from upstream and downstream of the receiving surface water. Two hundred Enterococcus isolates recovered from the treated effluent and receiving surface water were identified by biochemical and PCR-based methods, and their antibiotic resistance profiles determined by the Kirby-Bauer disc diffusion assay, while PCR-based assays were used to detect the presence of resistance and virulence genes. A high prevalence of ARE was obtained at both WWTPs, with values reaching a maximum of 40%. The influent and activated sludge samples contained the greatest prevalence of ARE, with lower values observed in the before- and after-chlorination samples. Of the 44 vancomycin and high-level gentamicin-resistant isolates, 11 were identified as E. faecium, 18 as E. faecalis and 4 as E. hirae, while 11 were classified as “other” Enterococcus species. 
High-level gentamicin resistance (39%) and vancomycin resistance (61%) were recorded in the species tested. The most commonly detected virulence gene was gelE (44%), followed by asa1 (40%), while cylA and esp were detected in only 2% of the isolates. The most prevalent aminoglycoside resistance genes were aac(6')-Ie-aph(2''), aph(3')-IIIa, and ant(6')-Ia, detected in 43%, 45% and 41% of the isolates, respectively. A positive correlation was observed between resistant phenotypes to high levels of aminoglycosides and the presence of all aminoglycoside resistance genes. Resistance genes for glycopeptides, vanB (37%) and vanC-1 (25%), and macrolides, ermB (11%) and ermC (54%), were detected in the isolates. These results show the need for more efficient wastewater treatment and disposal in order to prevent the release of virulent and antibiotic resistant Enterococcus species and safeguard public health.

Keywords: antibiogram, enterococci, gentamicin, vancomycin, virulence signatures

Procedia PDF Downloads 219
1049 Prognostic Factors for Mortality and Duration of Admission in Malnourished Hospitalized, Elderly Patients: A Cross-Sectional Study

Authors: Christos E. Lampropoulos, Maria Konsta, Vicky Dradaki, Irini Dri, Tamta Sirbilatze, Ifigenia Apostolou, Christina Kordali, Konstantina Panouria, Kostas Argyros, Georgios Mavras

Abstract:

Malnutrition in hospitalized patients is related to increased morbidity and mortality. The purpose of our study was to assess the nutritional status of hospitalized elderly patients with various nutritional scores and to detect unfavorable prognostic factors related to increased mortality and extended duration of admission. Methods: 150 patients (78 men, 72 women, mean age 80±8.2 years) were included in this cross-sectional study. Nutritional status was assessed by the Mini Nutritional Assessment (MNA, full and short-form), the Malnutrition Universal Screening Tool (MUST) and the short Nutritional Appetite Questionnaire (sNAQ). The following data were incorporated in the analysis: anthropometric and laboratory data, physical activity (International Physical Activity Questionnaire, IPAQ), smoking status, dietary habits and adherence to the Mediterranean diet (assessed by the MedDiet score), cause and duration of the current admission, and medical history (co-morbidities, previous admissions). Primary endpoints were mortality (from admission until 6 months afterwards) and duration of admission, compared to national guidelines for closed consolidated medical expenses. The Mann-Whitney two-sample test or t-test was used for group comparisons, and Spearman or Pearson coefficients for testing correlations between variables. Results: Normal nutrition was assessed in 54/150 (36%), 92/150 (61.3%) and 106/150 (70.7%) of patients, according to the full MNA, MUST and sNAQ questionnaires respectively. The mortality rate was 20.7% (31/150 patients). Patients who died within 6 months of admission had lower BMI (24±4.4 vs 26±4.8, p=0.04) and albumin levels (2.9±0.7 vs 3.4±0.7, p=0.002), and significantly lower full MNA (14.5±7.3 vs 20.7±6, p<0.0001) and short-form MNA scores (7.3±4.2 vs 10.5±3.4, p=0.0002) compared to survivors. In contrast, these patients had higher MUST (2.5±1.8 vs 0.5±1.02, p<0.0001) and sNAQ scores (2.9±2.4 vs 1.1±1.3, p<0.0001).
Additionally, they showed significantly lower MedDiet (23.5±4.3 vs 31.1±5.6, p<0.0001) and IPAQ scores (37.2±156.2 vs 516.5±1241.7, p<0.0001) compared to the surviving patients, and had extended hospitalization [5 (0-13) days vs 0 (-1-3) days, p=0.001]. Patients admitted due to cancer had a higher mortality rate (10/13, 77%) compared to those admitted due to infections (12/73, 18%), stroke (4/15, 27%) or other causes (4/49, 8%) (p<0.0001). Extension of hospitalization was negatively correlated to both the full (Spearman r=-0.35, p<0.0001) and short-form MNA (Spearman r=-0.33, p<0.0001), and positively correlated to MUST (Spearman r=0.34, p<0.0001) and sNAQ (Spearman r=0.3, p=0.0002). Additionally, the extension was inversely related to the MedDiet score (Spearman r=-0.35, p<0.0001), IPAQ score (Spearman r=-0.34, p<0.0001), albumin levels (Pearson r=-0.36, p<0.0001), Ht (Pearson r=-0.2, p=0.02) and Hb (Pearson r=-0.18, p=0.02). Conclusion: A large proportion of elderly, hospitalized patients are malnourished or at risk of malnutrition. All nutritional scores, physical activity and albumin are significantly related to mortality and extended hospitalization.
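The two statistical procedures the abstract names, a Mann-Whitney two-sample comparison and Spearman rank correlation, can be sketched with the Python standard library alone. The score values below are invented, not the study's data, and the normal approximation for the Mann-Whitney p-value is an additional assumption (exact or tie-corrected variants differ).

```python
# Minimal sketch of a Mann-Whitney two-sample test (normal approximation)
# and Spearman rank correlation; all data values are invented.
from math import erf, sqrt

def _ranks(values):
    # 1-based average ranks, handling ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney(x, y):
    # U statistic with a two-sided normal-approximation p-value.
    n1, n2 = len(x), len(y)
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mu, sd = n1 * n2 / 2, sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sd
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return u, p

def spearman(x, y):
    # Pearson correlation computed on the ranks.
    rx, ry = _ranks(x), _ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = sqrt(sum((a - mx) ** 2 for a in rx) *
               sum((b - my) ** 2 for b in ry))
    return num / den

# Invented full-MNA scores for deceased vs. surviving patients:
died = [12, 15, 9, 14, 16, 11, 13, 17, 10, 15]
survived = [21, 19, 24, 22, 18, 25, 20, 23, 19, 26]
u, p = mann_whitney(died, survived)
# Invented (score, hospitalization-extension) pairs:
rho = spearman([14, 20, 8, 25, 17, 11, 22, 19],
               [9, 3, 13, 0, 5, 10, 2, 4])
```

With group values this well separated the Mann-Whitney p-value falls well below 0.05, and the second pair of invented series is perfectly inversely ranked, giving rho = -1; the study's reported r values (around ±0.3) indicate much weaker, though still significant, correlations.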

Keywords: dietary habits, duration of admission, malnutrition, prognostic factors for mortality
