Search results for: inertial measurement unit
3809 Measurement of Project Success in Construction Using Performance Indices
Authors: Annette Joseph
Abstract:
Background: The construction industry is dynamic in nature owing to increasing uncertainties in technology, budgets, and development processes, which make projects more complex. Predicting project performance and the likelihood of success has therefore become difficult. The goal of all parties involved in a construction project is to complete it on schedule, within the planned budget, with the highest quality, and in the safest manner. However, the concept of project success remains ambiguously defined in the minds of construction professionals. Purpose: This paper aims to analyze a project in terms of its performance and to measure its success. Methodology: The parameters for evaluating project success and the indices for measuring the success/performance of a project were identified through a literature study. Through questionnaire surveys aimed at project stakeholders, data were collected from two live case studies (an ongoing and a completed project) on overall performance in terms of success/failure. Finally, with the help of the SPSS tool, the survey data were analyzed and applied to the selected performance indices. Findings: The score calculated using the indices and models helps in assessing the overall performance of the project and interpreting whether the project will be a success or a failure. This study acts as a reference for firms to carry out performance evaluation and success measurement on a regular basis, helping projects to identify the areas that are performing well and those that require improvement. Originality & Value: The study shows that by measuring project performance, a project's deviation towards success/failure can be assessed, thus helping to suggest early remedial measures to bring it back on track and ensure that the project is completed successfully.
Keywords: project, performance, indices, success
Procedia PDF Downloads 191
3808 Monitoring the Drying and Grinding Process during Production of Celitement through a NIR-Spectroscopy Based Approach
Authors: Carolin Lutz, Jörg Matthes, Patrick Waibel, Ulrich Precht, Krassimir Garbev, Günter Beuchle, Uwe Schweike, Peter Stemmermann, Hubert B. Keller
Abstract:
Online measurement of product quality is a challenging task in cement production, especially in the production of Celitement, a novel environmentally friendly hydraulic binder. The mineralogy and chemical composition of clinker in ordinary Portland cement production are measured by X-ray diffraction (XRD) and X-ray fluorescence (XRF), where only crystalline constituents can be detected. But only a small part of the Celitement components can be measured via XRD, because most constituents have an amorphous structure. This paper describes the development of algorithms suitable for online monitoring of the final processing step of Celitement based on NIR data. For calibration, intermediate products were dried at different temperatures and ground for variable durations. The products were analyzed using XRD and thermogravimetric analyses together with NIR spectroscopy to investigate the dependency between the drying and milling processes on the one hand and the NIR signal on the other. As a result, different characteristic parameters have been defined. A short overview of the Celitement process and the challenging tasks of the online measurement and evaluation of product quality will be presented. Subsequently, methods for the systematic development of near-infrared calibration models and the determination of the final calibration model will be introduced. The application of the model to experimental data illustrates that NIR spectroscopy allows for a quick and sufficiently exact determination of crucial process parameters.
Keywords: calibration model, Celitement, cementitious material, NIR spectroscopy
Procedia PDF Downloads 500
3807 Effects of Spent Dyebath Recycling on Pollution and Cost of Production in a Cotton Textile Industry
Authors: Dinesh Kumar Sharma, Sanjay Sharma
Abstract:
The textile manufacturing industry uses a substantial amount of chemicals, not only in the production processes but also in manufacturing the raw materials. Dyes are the most significant raw material, providing colour to the fabric and yarn. Dyes are produced using a large amount of chemicals, both organic and inorganic in nature, and are further classified as reactive or vat dyes, which are mostly used in cotton textiles. In the process of applying dyes to the cotton fiber, yarn, or fabric, several auxiliary chemicals are also used in the solution, called the dyebath, to improve the absorption of dyes. Only a small fraction of the dyes and auxiliary chemicals is absorbed, and a residual amount of all these substances is released as the spent dyebath effluent. Because of the wide variety of chemicals used in cotton textile dyes, there is always a risk of harmful effects which may not be apparent immediately but may have an irreversible impact in the long term. The colour imparted by the dyes to the water also has an adverse effect on its public acceptability and potability. This study was conducted with the objective of assessing the feasibility of reusing the spent dyebath. Studies were conducted in two independent industries manufacturing dyed cotton yarn and dyed cotton fabric, respectively, referred to as Unit-I and Unit-II. The studies included assessment of the reduction in pollution levels and the economic benefits of such reuse. The study conclusively establishes that the reuse of spent dyebath results in the prevention of pollution, reduction in pollution loads, and lower costs of effluent treatment and production. This pollution prevention technique presents a good proposition for pollution prevention in the cotton textile industry.
Keywords: dyes, dyebath, reuse, toxic, pollution, costs
Procedia PDF Downloads 393
3806 Fabrication Characteristics and Mechanical Behaviour of Fly Ash-Alumina Reinforced Zn-27Al Alloy Matrix Hybrid Composite Using Stir-Casting Technique
Authors: Oluwagbenga B. Fatile, Felix U. Idu, Olajide T. Sanya
Abstract:
This paper reports on the viability of developing Zn-27Al alloy matrix hybrid composites reinforced with alumina, graphite, and fly ash (a solid waste byproduct of coal combustion in thermal power plants). This research work was aimed at developing a low-cost, high-performance Zn-27Al matrix composite with low density. Alumina particulates (Al2O3) and graphite, with 0, 2, 3, 4, and 5 wt% fly ash added, were utilized to prepare a 10 wt% reinforcing phase with the Zn-27Al alloy as matrix, using a two-step stir casting method. Density measurement, estimated percentage porosity, tensile testing, microhardness measurement, and optical microscopy were used to assess the performance of the composites produced. The results show that the hardness, ultimate tensile strength, and percentage elongation of the hybrid composites decrease with increasing fly ash content. The maximum decreases in hardness and ultimate tensile strength, of 13.72% and 15.25% respectively, were observed for the composite grade containing 5 wt% fly ash. The percentage elongation of the composite sample without fly ash is 8.9%, which is comparable with that of the sample containing 2 wt% fly ash, with a percentage elongation of 8.8%. The fracture toughness of the fly-ash-containing composites was, however, superior to that of the composites without fly ash, with the 5 wt% fly ash composite exhibiting the highest fracture toughness. The results show that fly ash can be utilized as a complementary reinforcement in ZA-27 alloy matrix composites to reduce cost.
Keywords: fly ash, hybrid composite, mechanical behaviour, stir casting
Procedia PDF Downloads 335
3805 Determination of Direct Solar Radiation Using Atmospheric Physics Models
Authors: Pattra Pukdeekiat, Siriluk Ruangrungrote
Abstract:
This work was undertaken to determine direct solar radiation precisely using atmospheric physics models, since accurate prediction of solar radiation is necessary and useful for solar energy applications as well as atmospheric research. Models and techniques for calculating regional direct solar radiation are essential where instrumental measurement is unavailable. The investigation was mathematically governed by six astronomical parameters, i.e. declination (δ), hour angle (ω), solar time, solar zenith angle (θz), extraterrestrial radiation (Iso), and eccentricity (E0), along with two atmospheric parameters, i.e. air mass (mr) and dew point temperature, at Bangna meteorological station (13.67° N, 100.61° E) in Bangkok, Thailand. Five models for determining solar radiation under the assumption of clear sky were analyzed, accompanied by three statistical tests to validate the accuracy of the obtained results: Mean Bias Difference (MBD), Root Mean Square Difference (RMSD), and the coefficient of determination (R²). The calculated direct solar radiation was in the range of 491-505 W/m² with a relative percentage error of 8.41% for winter, and 532-540 W/m² with a relative percentage error of 4.89% for summer 2014. Additionally, datasets of seven continuous days representing both seasons were considered, with MBD, RMSD, and R² of -0.08, 0.25, 0.86 and -0.14, 0.35, 3.29, respectively, belonging to the Kumar model for winter and the CSR model for summer. In summary, the determination of direct solar radiation based on atmospheric models and empirical equations could advantageously provide immediate and reliable values of the solar components for any site in the region without the constraint of actual measurement.
Keywords: atmospheric physics models, astronomical parameters, atmospheric parameters, clear sky condition
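The three validation statistics named above (MBD, RMSD, R²) can be sketched in a few lines of Python. The normalisation of MBD and RMSD by the mean measured value is an assumption of this sketch, a common convention in solar-radiation model validation; the abstract does not state its exact convention.

```python
import math

def validation_stats(estimated, measured):
    """Return (MBD, RMSD, R^2) comparing model estimates with measurements.

    MBD and RMSD are expressed relative to the mean measured value
    (an assumed normalisation, not taken from the paper).
    """
    n = len(measured)
    mean_meas = sum(measured) / n
    diffs = [e - m for e, m in zip(estimated, measured)]
    mbd = (sum(diffs) / n) / mean_meas                       # signed bias
    rmsd = math.sqrt(sum(d * d for d in diffs) / n) / mean_meas
    ss_res = sum(d * d for d in diffs)                        # residual sum of squares
    ss_tot = sum((m - mean_meas) ** 2 for m in measured)      # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mbd, rmsd, r2
```

A negative MBD, as reported for both seasons, would indicate the model slightly underestimates the measured radiation on average.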
Procedia PDF Downloads 409
3804 Laser-Hole Boring into Overdense Targets: A Detailed Study on Laser and Target Properties
Authors: Florian Wagner, Christoph Schmidt, Vincent Bagnoud
Abstract:
Understanding the interaction of ultra-intense laser pulses with overcritical targets is of major interest for many applications such as laser-driven ion acceleration, fast ignition in the frame of inertial confinement fusion, or high harmonic generation and the creation of attosecond pulses. One particular aspect of this interaction is the shift of the critical surface, where the laser pulse is stopped and absorption is at its maximum, due to the radiation pressure induced by the laser pulse, also referred to as laser hole boring. We investigate laser hole boring experimentally by measuring the backscattered spectrum, which is Doppler-broadened because of the movement of the reflecting surface. Using the high-power, high-energy laser system PHELIX in Darmstadt, we gathered an extensive set of data for different laser intensities ranging from 10^18 W/cm² to 10^21 W/cm², two different levels of nanosecond temporal contrast (10^6 vs. 10^11), elliptical and linear polarization, and varying target configurations. In this contribution we discuss how the maximum velocity of the critical surface depends on these parameters. In particular, we show that increasing the temporal contrast decreases the maximum hole-boring velocity by more than a factor of three. Our experimental findings are backed by a basic analytical model based on momentum and mass conservation, as well as by particle-in-cell simulations. These results are of particular importance for fast ignition since they contribute to a better understanding of the transport of the ignitor pulse into the overdense region.
Keywords: laser hole boring, interaction of ultra-intense lasers with overcritical targets, fast ignition, relativistic laser-matter interaction
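As a rough illustration of how a hole-boring velocity follows from a Doppler-shifted backscatter spectrum, the non-relativistic moving-mirror estimate can be sketched as below. The 1053 nm wavelength in the usage example is the nominal PHELIX wavelength; the 7 nm shift is an invented illustrative value, not a measured one, and the actual analysis may well use the relativistic expression.

```python
C = 299_792_458.0  # speed of light, m/s

def hole_boring_velocity(lambda0_nm, lambda_shifted_nm):
    """Estimate the critical-surface (hole-boring) velocity from the Doppler
    shift of back-reflected light.

    For a non-relativistic mirror receding from the observer,
    delta_lambda / lambda0 ~ 2 v / c (the factor 2 because the light is
    reflected). A red shift (lambda_shifted > lambda0) corresponds to a
    surface moving into the target.
    """
    return C * (lambda_shifted_nm - lambda0_nm) / (2.0 * lambda0_nm)
```

With the nominal 1053 nm wavelength, an illustrative red shift of 7 nm would correspond to a velocity of roughly 10^6 m/s, i.e. about 0.3% of c.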
Procedia PDF Downloads 405
3803 A New Approach in a Problem of a Supersonic Panel Flutter
Authors: M. V. Belubekyan, S. R. Martirosyan
Abstract:
Using the example of an elastic rectangular plate streamlined by a supersonic gas flow, we have investigated the phenomena of divergence and of panel flutter with the gas flow overrunning a free edge, under the assumption of concentrated inertial masses and moments at that edge. We applied a new approach to solving these problems, developed from an algorithm for finding analytical solutions. This algorithm is easy to use in theoretical studies of a wide circle of nonconservative problems of linear elastic stability. We have established the relation between the characteristics of the natural vibrations of the plate and the velocity of the streamlining gas flow, which enables one to draw conclusions on the stability of the disturbed motion of the plate depending on the parameters of the plate-flow system. The solution shows that either divergence, or localized divergence, or flutter instability is possible. The regions of stability and instability in the parameter space of the problem are identified. We have investigated the dynamic behavior of the disturbed motion of the panel near the boundaries of the region of stability, and the safe and dangerous boundaries of this region are found. Transition through a safe boundary of the stability region leads to divergence or localized divergence arising in the vicinity of the free edge of the rectangular plate. Transition through a dangerous boundary of the stability region leads to panel flutter. The deformations arising at flutter are the more dangerous to the skin of modern aircraft and rockets, resulting in loss of strength and the appearance of fatigue cracks.
Keywords: stability, elastic plate, divergence, localized divergence, supersonic panel flutter
Procedia PDF Downloads 461
3802 A Vision-Based Early Warning System to Prevent Elephant-Train Collisions
Authors: Shanaka Gunasekara, Maleen Jayasuriya, Nalin Harischandra, Lilantha Samaranayake, Gamini Dissanayake
Abstract:
One serious facet of the worsening Human-Elephant conflict (HEC) in nations such as Sri Lanka involves elephant-train collisions. Endangered Asian elephants are maimed or killed during such accidents, which also often result in orphaned or disabled elephants, contributing to the phenomenon of lone elephants. These lone elephants are found to be more likely to attack villages and showcase aggressive behaviour, which further exacerbates the overall HEC. Furthermore, Railway Services incur significant financial losses and disruptions to services annually due to such accidents. Most elephant-train collisions occur due to a lack of adequate reaction time. This is due to the significant stopping distance requirements of trains, as the full braking force needs to be avoided to minimise the risk of derailment. Thus, poor driver visibility at sharp turns, nighttime operation, and poor weather conditions are often contributing factors to this problem. Initial investigations also indicate that most collisions occur in localised “hotspots” where elephant pathways/corridors intersect with railway tracks that border grazing land and watering holes. Taking these factors into consideration, this work proposes the leveraging of recent developments in Convolutional Neural Network (CNN) technology to detect elephants using an RGB/infrared capable camera around known hotspots along the railway track. The CNN was trained using a curated dataset of elephants collected on field visits to elephant sanctuaries and wildlife parks in Sri Lanka. With this vision-based detection system at its core, a prototype unit of an early warning system was designed and tested. This weatherised and waterproofed unit consists of a Reolink security camera which provides a wide field of view and range, an Nvidia Jetson Xavier computing unit, a rechargeable battery, and a solar panel for self-sufficient functioning. 
The prototype unit was designed to be a low-cost, low-power, small-footprint device that can be mounted on infrastructure such as poles or trees. If an elephant is detected, an early warning message is communicated to the train driver using the GSM network. A mobile app was also designed for this purpose to ensure that the warning is clearly communicated. A centralized control station manages and communicates all information through the train station network to ensure coordination among the important stakeholders. Initial results indicate that detection accuracy is sufficient under varying lighting situations, provided comprehensive training datasets that represent a wide range of challenging conditions are available. The overall hardware prototype was shown to be robust and reliable. We envision that a network of such units may help contribute to reducing the problem of elephant-train collisions and has the potential to act as an important surveillance mechanism in dealing with the broader issue of human-elephant conflict.
Keywords: computer vision, deep learning, human-elephant conflict, wildlife early warning technology
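A minimal sketch of the alert-decision logic such a unit might run on top of the per-frame CNN detection confidences is shown below. Requiring several consecutive frames above a threshold is one simple way to suppress single-frame false positives before triggering a GSM warning; the threshold and frame-count values are illustrative assumptions, not parameters of the actual prototype.

```python
def should_alert(confidences, confidence_threshold=0.6, min_consecutive=3):
    """Decide whether to send an early warning to the train driver.

    `confidences` is a list of per-frame elephant-detection confidences
    from the CNN. An alert is raised only after `min_consecutive` frames
    in a row meet the threshold, filtering out isolated spurious
    detections. Both parameter values are illustrative, not taken from
    the prototype described in the abstract.
    """
    run = 0
    for conf in confidences:
        run = run + 1 if conf >= confidence_threshold else 0
        if run >= min_consecutive:
            return True
    return False
```

In a deployed unit, a True result would be followed by the GSM message to the driver and a report to the centralized control station.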
Procedia PDF Downloads 226
3801 Review of Research on Effectiveness Evaluation of Technology Innovation Policy
Authors: Xue Wang, Li-Wei Fan
Abstract:
Technology innovation has become the driving force of social and economic development and transformation, and the guidance and support of public policies is an important condition for achieving technology innovation goals. Policy effectiveness evaluation is instructive for policy learning and adjustment. This paper systematically reviews existing studies evaluating the effectiveness of policy-driven technological innovation. We used 167 articles from the WOS and CNKI databases as samples to clarify the measurement of technological innovation indicators and to analyze the classification and application of policy evaluation methods. In general, technology innovation input and technological output are the two main aspects of technological innovation index design, among which technological patents are the focus of research: the number of patents reflects the scale of technological innovation, and the quality of patents reflects the value of innovation from multiple aspects. As for policy evaluation methods, statistical analysis methods are applied to the formulation, selection, and after-effect evaluation of policies to analyze the effect of policy implementation both qualitatively and quantitatively. Bibliometric methods are mainly based on public policy texts, discerning inter-government relationships and the multi-dimensional value of a policy. Decision analysis focuses on the establishment and measurement of comprehensive evaluation index systems for public policy. Economic analysis methods focus on the performance and output of technological innovation to test the policy effect. Finally, this paper outlines prospective directions for future research.
Keywords: technology innovation, index, policy effectiveness, evaluation of policy, bibliometric analysis
Procedia PDF Downloads 70
3800 Predictors of Non-Adherence to Pharmacological Therapy in Patients with Type 2 Diabetes
Authors: Anan Jarab, Riham Almrayat, Salam Alqudah, Maher Khdour, Tareq Mukattash, Sharell Pinto
Abstract:
Background: The prevalence of diabetes in Jordan is among the highest in the world, making it a particularly alarming health problem there. It has been indicated that poor adherence to the prescribed therapy leads to poor glycemic control, promotes the development of diabetes complications, and causes unnecessary hospitalization. Purpose: To explore factors associated with medication non-adherence in patients with type 2 diabetes in Jordan. Materials and Methods: Variables including socio-demographics, disease and therapy factors, diabetes knowledge, and health-related quality of life, in addition to an adherence assessment, were collected for 171 patients with type 2 diabetes using custom-designed and validated questionnaires. Logistic regression was performed to develop a model with the variables that best predicted medication non-adherence in patients with type 2 diabetes in Jordan. Results: The majority of the patients (72.5%) were non-adherent. Patients were found to be four times less likely to adhere to their medications with each unit increase in the number of prescribed medications (OR = 0.244, CI = 0.08-0.63) and nine times less likely to adhere with each unit increase in the frequency of administration of diabetic medication (OR = 0.111, CI = 0.04-2.01). Patients in the present study were also approximately three times less likely (OR = 0.362, CI = 0.24-0.87) to adhere to their medications if they reported having concerns about side effects, and approximately half as likely (OR = 0.493, CI = 0.08-1.16) to adhere if they had one or more micro-vascular complications. Conclusion: The current study revealed a low adherence rate to the prescribed therapy among Jordanians with type 2 diabetes.
Simplifying the dosage regimen and selecting treatments with fewer side effects, along with an emphasis on diabetes complications, should be taken into account when developing care plans for patients with type 2 diabetes.
Keywords: type 2 diabetes, adherence, glycemic control, clinical pharmacist, Jordan
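The odds ratios quoted above are, in a logistic regression, the exponentials of the fitted coefficients (log-odds per unit increase in the predictor). A minimal sketch of that conversion, with a Wald confidence interval, is shown below; the coefficient and standard error in the usage example are invented to land near the reported OR of 0.244, not taken from the study.

```python
import math

def odds_ratio(beta, se, z=1.96):
    """Convert a logistic-regression coefficient `beta` (log-odds) into an
    odds ratio with a 95% Wald confidence interval.

    OR = exp(beta); an OR below 1 means the odds of adherence fall with
    each unit increase in the predictor. `beta` and `se` here are
    illustrative inputs, not the study's fitted values.
    """
    or_ = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return or_, (lower, upper)
```

For instance, a coefficient of about -1.41 per additional prescribed medication corresponds to an OR of roughly 0.244, matching the four-fold reduction in adherence odds reported above.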
Procedia PDF Downloads 438
3799 Compact LWIR Borescope Sensor for Surface Temperature of Engine Components
Authors: Andy Zhang, Awnik Roy, Trevor B. Chen, Bibik Oleksandr, Subodh Adhikari, Paul S. Hsu
Abstract:
The durability of a combustor in gas-turbine engines requires good control of its component temperatures. Since the temperature of combustion gases frequently exceeds the melting point of the combustion liner walls, an efficient air-cooling system is critically important to elongate the lifetime of the liner walls. To determine the effectiveness of the air-cooling system, accurate 2D surface temperature measurement of combustor liner walls is crucial for advanced engine development. Traditional diagnostic techniques for temperature measurement, such as thermocouples, thermal wall paints, pyrometry, and phosphors, have shown disadvantages: they are intrusive and affect local flame/flow dynamics, they risk flame quenching and physical damage to instrumentation in the harsh environment inside the combustor, and they suffer strong optical interference from combustion emission in the UV to mid-IR wavelengths. To overcome these drawbacks, a compact borescope long-wave-infrared (LWIR) sensor is developed to achieve two-dimensional, high-spatial-resolution, high-fidelity thermal imaging of the surface temperature in gas-turbine engines, providing the desired engine component temperature distribution. The compact LWIR borescope sensor makes it feasible to improve the durability of combustors in gas-turbine engines.
Keywords: borescope, engine, long-wave-infrared, sensor
Procedia PDF Downloads 137
3798 School Partners in Initial Teacher Education: An Including or Excluding Approach When Engaging Schools
Authors: Laila Niklasson
Abstract:
The aim of the study is to critically discuss how partner schools are engaged during initial teacher education (ITE). The background is an experiment in Sweden where the practicum organization was reorganized in response to a need to enhance quality during practicum. It is a national initiative from the government, supported by the National Agency of Education, running from 2014 to 2019. The main features are a concentration of students in schools with a certain number of mentors, mentors who have a mentor education, and teachers with relevant subject areas, and there could be a mentor team with a leader at the school. An expected outcome is, for example, that the student teachers should be engaged in peer learning. The schools should be supported by extra lectures from university teachers during practicum and also by extra research projects in which the schools should be engaged. A case study of one university-based ITE was carried out to explore the consequences for the schools not selected. The result showed that of x schools in the region, x were engaged. The schools are in both urban and rural areas, mainly the latter. There is also a tendency that private schools are not engaged. At unit level, recruitment is perceived as harder for schools that are not engaged. In addition, they cannot market themselves as a 'selected school', which can affect parents' selection of schools for their children. Also at unit level, but with consequences for professional development, they are not selected for research projects and are thereby not fully supported during school development. The conclusion is that from an earlier inclusive approach, in which all teachers were perceived as possible mentors, there is a change to an exclusive approach in which selected schools and selected teachers should be engaged.
The change could be perceived as a change in governance mentality, but also in how professions are perceived and how development work is pursued.
Keywords: initial teacher education, practicum schools, profession, quality development
Procedia PDF Downloads 142
3797 Systematic Identification of Noncoding Cancer Driver Somatic Mutations
Authors: Zohar Manber, Ran Elkon
Abstract:
Accumulation of somatic mutations (SMs) in the genome is a major driving force of cancer development. Most SMs in the tumor's genome are functionally neutral; however, some cause damage to critical processes and provide the tumor with a selective growth advantage (termed cancer driver mutations). Current research on functional significance of SMs is mainly focused on finding alterations in protein coding sequences. However, the exome comprises only 3% of the human genome, and thus, SMs in the noncoding genome significantly outnumber those that map to protein-coding regions. Although our understanding of noncoding driver SMs is very rudimentary, it is likely that disruption of regulatory elements in the genome is an important, yet largely underexplored mechanism by which somatic mutations contribute to cancer development. The expression of most human genes is controlled by multiple enhancers, and therefore, it is conceivable that regulatory SMs are distributed across different enhancers of the same target gene. Yet, to date, most statistical searches for regulatory SMs have considered each regulatory element individually, which may reduce statistical power. The first challenge in considering the cumulative activity of all the enhancers of a gene as a single unit is to map enhancers to their target promoters. Such mapping defines for each gene its set of regulating enhancers (termed "set of regulatory elements" (SRE)). Considering multiple enhancers of each gene as one unit holds great promise for enhancing the identification of driver regulatory SMs. However, the success of this approach is greatly dependent on the availability of comprehensive and accurate enhancer-promoter (E-P) maps. To date, the discovery of driver regulatory SMs has been hindered by insufficient sample sizes and statistical analyses that often considered each regulatory element separately. 
In this study, we analyzed more than 2,500 whole-genome sequencing (WGS) samples provided by The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) in order to identify such driver regulatory SMs. Our analyses took the combinatorial aspect of gene regulation into account by considering all the enhancers that control the same target gene as one unit, based on E-P maps from three genomics resources. The identification of candidate driver noncoding SMs is based on their recurrence. We searched for SREs of genes that are "hotspots" for SMs (that is, they accumulate SMs at a significantly elevated rate). To test the statistical significance of the recurrence of SMs within a gene's SRE, we used both global and local background mutation rates. Using this approach, we detected numerous "hotspots" for SMs in seven different cancer types. To support the functional significance of these recurrent noncoding SMs, we further examined their association with the expression level of their target gene (using gene expression data provided by the ICGC and TCGA for samples that were also analyzed by WGS).
Keywords: cancer genomics, enhancers, noncoding genome, regulatory elements
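A recurrence test of this kind can be sketched with a simple Poisson background model: the expected SM count for a gene's SRE is its total length times the per-base-pair, per-sample background mutation rate times the number of samples, and the p-value is the upper tail of the Poisson distribution at the observed count. This is a generic illustration of a hotspot test, not the paper's exact statistic, and all numbers in the usage example are invented.

```python
import math

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu), via the complement of the CDF."""
    cdf = sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def sre_hotspot_pvalue(observed_sms, sre_length_bp, n_samples, bg_rate_per_bp):
    """P-value that a gene's set of regulatory elements (SRE) accumulates at
    least `observed_sms` somatic mutations by chance, under a Poisson
    background with per-base-pair, per-sample rate `bg_rate_per_bp`.

    A generic sketch of a recurrence ("hotspot") test; the paper's actual
    statistic, using global and local background rates, may differ.
    """
    mu = sre_length_bp * n_samples * bg_rate_per_bp  # expected SM count
    return poisson_sf(observed_sms, mu)
```

In practice the background rate would be estimated locally (mutation rates vary along the genome), which is why the paper uses both global and local background rates.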
Procedia PDF Downloads 104
3796 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes
Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo
Abstract:
Doctors’ notes reflect their impressions, attitudes, clinical sense, and opinions about patients’ conditions and progress, as well as other information that is essential for doctors’ daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community. Automatically extracting information from unstructured text data is known to be a difficult task, as opposed to dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors’ decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes like survival or mortality can be useful in influencing the judgement of clinical sentiment in ICU clinical notes. This paper introduces two contributions: first, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of notes that was not previously seen by clinicalBERT or Bio_Discharge_Summary_BERT. The model, which was based on clinicalBERT, achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, and precision 0.96). Second, we perform data augmentation using clinical contextual word embeddings, based on a pre-trained clinical model, to balance the samples in each class in the data (survived vs. deceased patients). Data augmentation improves the accuracy of prediction slightly (Acc 96.67%, AUC 0.98, and precision 0.92).
Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation
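For reference, the accuracy and precision figures quoted above are simple functions of confusion-matrix counts. The counts in the sketch below are invented, chosen only so the derived metrics land near the reported values (Acc ~93.3%, precision 0.96); they are not the paper's actual test-set counts.

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy and precision from confusion-matrix counts.

    accuracy  = (TP + TN) / all predictions
    precision = TP / (TP + FP), i.e. how often a positive call is right.
    The counts passed in the usage example are illustrative only.
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    return accuracy, precision
```

With, say, 24 true positives, 1 false positive, 4 true negatives, and 1 false negative on a 30-note test set, accuracy is 28/30 ≈ 93.3% and precision is 24/25 = 0.96, matching the headline figures.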
Procedia PDF Downloads 207
3795 Khon Kaen University Family Health Assessment Tool Training Program on Primary Care Unit Nurses’ Skills
Authors: Suwarno, D. Jongudomkarn
Abstract:
Family Health Assessment (FHA) is a key process for identifying family health needs, family health problems, and family health history. Assessing family health depends not only on the assessment tool but also on the health care provider, especially the nurse. Nurses have a duty to assess the family holistically, and they have to build their capacities (knowledge, skills, and experience) in FHA. Thus, a continuing nursing education training program on using the KKU FHA Tool was aimed at enhancing the participant nurses' capacities in FHA based on that tool. The aim of this study was to evaluate the effect of the KKU FHA Tool training program on PCU nurses' capacities before and after the program in Primary Care Units in Bantul, Yogyakarta. A quasi-experimental, one-group pre-/post-test design was used, with a convenience sampling technique, for fourteen nurses working in six PCUs in Bantul, Yogyakarta. The training program used a module, a video, and the KKU FHA Tool handbook, together with the KKU FHA Tool form and capacity questionnaires. Data were analyzed with descriptive statistics, the Kolmogorov-Smirnov test, and the paired-sample t-test. The overall paired-sample t-test comparison revealed mean values of 3.35 (SD 0.417) at pre-test, 3.86 (SD 0.154) at post-test, and 4.00 (SD 0.243) at the post-test two weeks later. The p value among the pre-test, the intermediate post-test, and the post-test two weeks later was 0.000. The p value between the intermediate post-test and the post-test two weeks later was 0.053. The KKU FHA Tool training program in PCU Bantul, Yogyakarta significantly enhanced the participant nurses' capacities.
In conclusion, we recommend that the KKU FHA Tool forms be further developed and implemented in PCU Bantul, Yogyakarta, with qualitative research, such as focus group discussions, providing complementary data.Keywords: family health assessment, KKU FHA tool, training program, nurses capacities
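The paired-sample t-test used above (run in SPSS in the study) can be sketched as follows; the pre/post score lists are made-up illustrations, not the study's data:

```python
import math
import statistics

def paired_t(pre, post):
    """t statistic for paired samples, using post - pre differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    sd = statistics.stdev(diffs)          # sample standard deviation of diffs
    return statistics.mean(diffs) / (sd / math.sqrt(n))
```

The statistic is then compared against the t distribution with n - 1 degrees of freedom to obtain the p value.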
Procedia PDF Downloads 379
3794 Glucose Measurement in Response to Environmental and Physiological Challenges: Towards a Non-Invasive Approach to Study Stress in Fishes
Authors: Tomas Makaras, Julija Razumienė, Vidutė Gurevičienė, Gintarė Sauliutė, Milda Stankevičiūtė
Abstract:
Stress responses represent an animal’s natural reactions to various challenging conditions and can be used as a welfare indicator. Despite the wide use of glucose measurements in stress evaluation, there are some inconsistencies in its acceptance as a stress marker, especially in comparison with non-invasive cortisol measurements in stress-challenged fish. To test the reliability and applicability of glucose measurement in practice, different environmental/anthropogenic exposure scenarios were simulated in this study to provoke chemical-induced stress in fish (14-day exposure to landfill leachate), followed by a 14-day stress recovery period; to represent a possible infection and the cumulative effect of the leachate, fish were subsequently exposed to the pathogenic oomycete Saprolegnia parasitica. This pathogen is endemic to all freshwater habitats worldwide and is partly responsible for the decline of natural freshwater fish populations. Brown trout (Salmo trutta fario) and sea trout (Salmo trutta trutta) juveniles were chosen because a large body of literature on physiological stress responses is available for these species. Glucose content was analysed by applying invasive and non-invasive measurement procedures to different test media: fish blood, gill tissue, and fish-holding water. The results indicated that the quantity of glucose released into the holding water of stressed fish increased considerably (approx. 3.5- to 8-fold) and remained substantially higher (approx. 2- to 4-fold) than the control level throughout the stress recovery period, suggesting that fish did not recover from chemical-induced stress. The circulating levels of glucose in blood and gills decreased over time in fish exposed to the different stressors. However, the decrease in gill glucose was similar to the control levels measured at the same time points and was found to be insignificant.
The data analysis showed that concentrations of β-D-glucose measured in the gills of fish treated with S. parasitica differed significantly from the control recovery group but not from the leachate recovery group, showing that the presence of S. parasitica in water had no additive effect. In addition, a positive correlation between blood and gill glucose was determined. Parallel trends in blood and water glucose changes suggest that water glucose measurement has considerable potential for predicting stress. This study demonstrated that measuring β-D-glucose in fish-holding water is not stressful, as it involves no handling or manipulation of the organism, and it has critical technical advantages over current (invasive) methods that mainly use blood samples or specific tissues. The quantification of glucose could be essential for studies examining stress physiology and for aquaculture studies interested in the assessment or long-term monitoring of fish health.Keywords: brown trout, landfill leachate, sea trout, pathogenic oomycetes, β-D-glucose
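The blood-water relationship reported above is a correlation between two glucose series; a minimal Pearson correlation sketch, with illustrative values in place of the measured data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A coefficient near +1 between blood and water series would support using the non-invasive water measurement as a proxy for the invasive one.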
Procedia PDF Downloads 174
3793 Isolated Iterating Fractal Independently Corresponds with Light and Foundational Quantum Problems
Authors: Blair D. Macdonald
Abstract:
After nearly one hundred years since its origin, foundational quantum mechanics remains one of the greatest unexplained mysteries in physics today. Within this time, chaos theory and its geometry, the fractal, have developed. In this paper, the propagation behaviour of a simple iterated fractal, the Koch Snowflake, was described and analysed. From an arbitrary observation point within the fractal set, the fractal propagates forward by oscillation (the focus of this study) and retrospectively behind by exponential growth from a point beginning. It propagates a potentially infinite exponential oscillating sinusoidal wave of discrete triangle bits sharing many characteristics of light and quantum entities. The model's wave speed is potentially constant, offering insights into the perception and direction of time, where, to an observer travelling at the frontier of propagation, time may slow to a stop. In isolation, the fractal is a superposition of component bits where position and scale present a problem of location. In reality, this problem is experienced within fractal landscapes or fields, where 'position' is only 'known' by the addition of information or markers. The quantum 'measurement problem', 'uncertainty principle', 'entanglement', and the classical-quantum interface are addressed; these are problems of scale invariance associated with isolated fractality. Dual forward and retrospective perspectives of the fractal model offer the opportunity for unification between quantum mechanics and cosmological mathematics, observations, and conjectures. Quantum and cosmological problems may be different aspects of the one fractal geometry.Keywords: measurement problem, observer, entanglement, unification
Procedia PDF Downloads 90
3792 Identifying Pathogenic Mycobacterium Species Using Multiple Gene Phylogenetic Analysis
Authors: Lemar Blake, Chris Oura, Ayanna C. N. Phillips Savage
Abstract:
Improved DNA sequencing technology has greatly enhanced bacterial identification, especially for organisms that are difficult to culture. Mycobacteriosis, with consistent hyphema, bilateral exophthalmia, open-mouth gape, and ocular lesions, was observed in various fish populations at the School of Veterinary Medicine, Aquaculture/Aquatic Animal Health Unit. Objective: To identify the species of Mycobacterium affecting aquarium fish at the School of Veterinary Medicine, Aquaculture/Aquatic Animal Health Unit. Method: A total of 13 fish samples were collected and analyzed via Ziehl-Neelsen staining, conventional polymerase chain reaction (PCR), and real-time PCR; these tests were carried out simultaneously for confirmation. The following combination of conventional primers was used: 16S rRNA (564 bp), rpoB (396 bp), and sod (408 bp). Concatenation of the gene fragments was carried out to classify the organism phylogenetically. Results: Acid-fast, non-branching bacilli were detected in all samples from homogenized internal organs. All 13 acid-fast samples were positive for Mycobacterium via real-time PCR. Partial gene sequences using all three primer sets were obtained from two samples and demonstrated a novel strain. A strain 99% related to Mycobacterium marinum was also confirmed in one sample using the 16S rRNA and rpoB genes. The two novel strains clustered with the rapid growers and with strains known to affect humans. Conclusions: Phylogenetic analysis demonstrated two novel Mycobacterium strains with the potential of being zoonotic and one strain 99% related to Mycobacterium marinum.Keywords: polymerase chain reaction, phylogenetic, DNA sequencing, zoonotic
Procedia PDF Downloads 143
3791 Nondestructive Electrochemical Testing Method for Prestressed Concrete Structures
Authors: Tomoko Fukuyama, Osamu Senbu
Abstract:
Prestressed concrete is widely used in infrastructure such as roads and bridges. However, poor grout filling and PC steel corrosion are currently major issues in prestressed concrete structures. One of the problems with nondestructive corrosion detection of PC steel is the plastic pipe that covers the steel: the insulative property of the pipe makes nondestructive diagnosis difficult, so a practical technology to detect these defects is necessary for the maintenance of infrastructure. The goal of the research is the development of an electrochemical technique that enables internal defects to be detected nondestructively from the surface of prestressed concrete. Ideally, measurements should be conducted from the surface of structural members so that the diagnosis is nondestructive. In the present experiment, a prestressed concrete member was simplified as a layered specimen to simulate a current path between an input and an output electrode on a member surface. Specimens layered from mortar and the constituent materials of prestressed concrete (steel, polyethylene, stainless steel, or galvanized steel plates) were subjected to alternating current impedance measurement. The magnitude of the applied electric field was 0.01 V or 1 V, and the frequency range was from 10⁶ Hz to 10⁻² Hz. The frequency spectra of impedance, which relate to charge reactions activated by an electric field, were measured to clarify the effects of the material configurations and properties. In the civil engineering field, the Nyquist diagram is popular for analyzing impedance and is a good way to grasp electric relaxation from the shape of the plot. However, it is not well suited to showing the influence of the measurement frequency, which is the reciprocal of the reaction time. Hence, the Bode diagram is also applied to describe charge reactions in the present paper.
From the experimental results, the alternating current impedance method appears applicable to measuring insulative materials and, eventually, to prestressed concrete diagnosis. At the same time, the frequency spectra of impedance reflect differences in the material configuration. This is because charge mobility reflects the variety of substances, and the measuring frequency of the electric field determines the migration length of the charges under its influence. However, the method could not distinguish differences in material thickness, from which the difficulty of identifying the amount of an air void or a layer of corrosion product in prestressed concrete diagnosis is inferred.Keywords: capacitance, conductance, prestressed concrete, susceptance
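The impedance sweep described above can be sketched with a simple equivalent circuit (a series resistor plus a parallel resistor-capacitor layer) swept over the same 10⁶ Hz to 10⁻² Hz range; the circuit topology and component values are illustrative assumptions, not fitted to the specimens:

```python
import cmath
import math

def impedance(f, r_s=100.0, r_p=10_000.0, c_p=1e-6):
    """Series resistor r_s plus a parallel r_p || c_p 'layer', at frequency f (Hz)."""
    w = 2 * math.pi * f
    z_cap = 1 / (1j * w * c_p)               # capacitor branch
    z_layer = (r_p * z_cap) / (r_p + z_cap)  # the insulative layer
    return r_s + z_layer

# Bode representation: log-spaced frequencies, |Z| and phase at each point.
freqs = [10.0 ** k for k in range(6, -3, -1)]     # 1e6 Hz down to 1e-2 Hz
bode = [(f, abs(impedance(f)), math.degrees(cmath.phase(impedance(f))))
        for f in freqs]
```

At high frequency the capacitor shorts the layer and |Z| approaches r_s; at low frequency the layer blocks and |Z| approaches r_s + r_p, which is the relaxation behaviour a Nyquist or Bode plot exposes.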
Procedia PDF Downloads 413
3790 Lighting Consumption Analysis in Retail Industry: Comparative Study
Authors: Elena C. Tamaş, Grațiela M. Țârlea, Gianni Flamaropol, Dragoș Hera
Abstract:
This article presents a comparative study of electrical energy consumption for lighting in various types of large commercial buildings built in Romania after 2007, with 3, 4, or 5 versus 8, 9, or 10 operational years. Some buildings have building management systems (BMS) installed to monitor lighting performance from opening day to the present, while others have implemented only local meters. First, for each analyzed building, the total required power and the energy consumption for lighting were calculated from the number of lamps, the unit power, and the average daily running hours. All lighting objects and installations were classified by the destination/location of the lighting (exterior parking or access, interior or covered parking, building interior, and building perimeter). Second, mechanical counters were installed on all lighting objects and installations, and digital meters were additionally installed on those linked to a BMS for better monitoring. Some efficient solutions are proposed to reduce power consumption, for example operating only 1/3 of the lighting in covered and exterior parking areas where feasible. This lighting share can be applied on each level, especially during night shifts. Another example is to use dimmers to reduce the light level depending on the work performed in the respective area, by which a 30% energy saving can be achieved. Using the right BMS to monitor energy consumption over the average daily operating hours and replacing non-performant luminaires with LED or economical ones can significantly increase the energy performance and reduce the energy consumption of the buildings.Keywords: commercial buildings, energy performances, lighting consumption, maintenance
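The consumption arithmetic described above (lamps × unit power × running hours, the 1/3-lighting share, and the 30% dimming saving) can be sketched as follows; all figures are illustrative assumptions, not values from the surveyed buildings:

```python
def daily_lighting_kwh(n_lamps, unit_power_w, hours):
    """Daily energy in kWh: lamps x unit power (W) x running hours / 1000."""
    return n_lamps * unit_power_w * hours / 1000.0

full = daily_lighting_kwh(300, 50, 12)    # hypothetical covered-parking zone
night = full / 3                          # only 1/3 of circuits on at night
dimmed = full * (1 - 0.30)                # 30% saving achieved with dimmers
```

Summing such terms per destination/location (exterior parking, covered parking, interior, perimeter) reproduces the building-level totals the comparison is based on.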
Procedia PDF Downloads 261
3789 Necrotising Anterior Scleritis and Scleroderma: A Rare Association
Authors: Angeliki Vassila, Dimitrios Kalogeropoulos, Rania Rawashdeh, Nigel Hall, Najiha Rahman, Mark Fabian, Suresh Thulasidharan, Hossain Parwez
Abstract:
Introduction: Necrotising scleritis is a severe form of scleritis and poses a significant threat to vision. It can manifest in various systemic autoimmune disorders and systemic vasculitides, or as a consequence of microbial infection. The objective of this study is to present a case of necrotising scleritis associated with scleroderma, further complicated by a secondary Staphylococcus epidermidis infection. Methods: This is a retrospective analysis of the medical records of a patient who was hospitalised in the Eye Unit at University Hospital Southampton. Results: A 78-year-old woman presented at the eye casualty department of our unit with a two-week history of progressively worsening pain in her left eye. She received a diagnosis of necrotising scleritis and was admitted to the hospital for further treatment. It was decided to commence a three-day course of intravenous methylprednisolone followed by a tapering regimen of oral steroids. Additionally, a conjunctival swab was taken, and two days later it revealed the presence of S. epidermidis, indicating a potential secondary infection. Given this finding, she was also prescribed topical (ofloxacin 0.3%, four times daily) and oral (ciprofloxacin 750 mg, twice daily) antibiotics. The inflammation and symptoms gradually improved, and the patient was scheduled for a scleral graft and application of an amniotic membrane to cover the area of scleral thinning. Conclusions: Rheumatoid arthritis and granulomatosis with polyangiitis are the most commonly identifiable systemic diseases associated with necrotising scleritis. Although the association with scleroderma is extremely rare, early identification and treatment are necessary to prevent scleritis-related complications.Keywords: scleritis, necrotizing scleritis, scleroderma, autoimmune disease
Procedia PDF Downloads 30
3788 Thermal Performance of an Air Heating Storing System
Authors: Mohammed A. Elhaj, Jamal S. Yassin
Abstract:
Owing to the lack of synchronization between solar energy availability and the heat demands of a specific application, an energy storing sub-system is necessary to maintain the continuity of the thermal process. The present work deals with an active solar heating storing system in which an air solar collector is connected to a storing unit, from which the energy is distributed and provided to the heated space in a controlled manner. The solar collector is a box-type absorber in which the air flows between a number of vanes attached between the collector absorber and the bottom plate. This design can improve the efficiency by increasing the heat transfer area exposed to the flowing air, as well as through heat conduction along the metal vanes from the top absorbing surface. The storing unit is a packed-bed type in which the air coming from the air collector is circulated through the bed in order to add or remove energy in the charging and discharging processes, respectively. The major advantage of packed-bed storage is its high degree of thermal stratification. The numerical solution of the packed-bed energy storage divides the bed into a number of equal segments of bed particles and solves the energy equation for each segment depending on its neighbors. The design and performance parameters studied in the developed simulation model include particle size, void fraction, etc. The final results showed that the collector efficiency fluctuated between 55% and 61% in the winter season (January) under the climatic conditions of Misurata in Libya. A maximum temperature of 52 ºC is attained at the top of the bed, while the lowest is 25 ºC at the end of the process of charging hot air into the bed. This distribution can satisfy the required load for most house heating in Libya.Keywords: solar energy, thermal process, performance, collector, packed bed, numerical analysis, simulation
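A minimal sketch of the segment-by-segment packed-bed charging idea described above, assuming a simple explicit update with an illustrative per-segment exchange effectiveness and heat-capacity factor (not the paper's actual energy equation coefficients):

```python
import math

def charge_bed(t_bed, t_air_in, ntu_seg=0.5, steps=200, cap_factor=0.05):
    """Hot air enters the first segment, exchanges heat with each segment
    in turn, and leaves cooler at the far end; repeated over time steps."""
    bed = list(t_bed)
    eff = 1 - math.exp(-ntu_seg)         # per-segment exchange effectiveness
    for _ in range(steps):
        t_air = t_air_in
        for i in range(len(bed)):
            q = eff * (t_air - bed[i])   # heat given up in this segment
            bed[i] += cap_factor * q     # segment warms slowly (large mass)
            t_air -= q                   # air cools before the next segment
    return bed

# Charge a 10-segment bed initially at 25 C with 52 C air (values from the text).
profile = charge_bed([25.0] * 10, 52.0)
```

The resulting profile is hottest at the inlet end and coolest at the outlet end, which is the thermal stratification the abstract highlights as the main advantage of packed-bed storage.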
Procedia PDF Downloads 331
3787 Ghost Frequency Noise Reduction through Displacement Deviation Analysis
Authors: Paua Ketan, Bhagate Rajkumar, Adiga Ganesh, M. Kiran
Abstract:
Low gear noise is an important sound quality feature in modern passenger cars. Annoying gear noise from the gearbox is influenced by the gear design, the gearbox shaft layout, manufacturing deviations in the components, assembly errors, and the mounting arrangement of the complete gearbox. Geometrical deviations in the form of profile and lead errors are often present on the flanks of the inspected gears. Ghost frequencies of a gear are very challenging to identify in the standard gear measurement and analysis process due to the small wavelengths involved. In this paper, gear whine noise occurring at non-integral multiples of the gear mesh frequency of a passenger car gearbox is investigated and the root cause is identified using the displacement deviation analysis (DDA) method. The DDA method is applied to identify ghost frequency excitations on the flanks of gears arising from generation grinding. The frequency identified through DDA correlated with the frequency of vibration and noise on the end-of-line machine as well as in vehicle-level measurements. By applying the DDA method along with standard lead and profile measurement, gears with ghost frequency geometry deviations were identified on the production line to eliminate defective parts and thereby eliminate ghost frequency noise from the vehicle. Further, displacement deviation analysis can be used in conjunction with manufacturing process simulation to arrive at suitable countermeasures for arresting the ghost frequency.Keywords: displacement deviation analysis, gear whine, ghost frequency, sound quality
Procedia PDF Downloads 146
3786 Back to Basics: Redefining Quality Measurement for Hybrid Software Development Organizations
Authors: Satya Pradhan, Venky Nanniyur
Abstract:
As the software industry transitions from a license-based model to a subscription-based Software-as-a-Service (SaaS) model, many software development groups are using a hybrid development model that incorporates Agile and Waterfall methodologies in different parts of the organization. The traditional metrics used for measuring software quality in Waterfall or Agile paradigms do not apply to this new hybrid methodology. In addition, to respond to higher quality demands from customers and to gain a competitive advantage in the market, many companies are starting to prioritize quality as a strategic differentiator. As a result, quality metrics are included in decision-making activities all the way up to the executive level, including board of director reviews. This paper presents key challenges associated with measuring software quality in organizations using the hybrid development model. We introduce a framework called Prevention-Inspection-Evaluation-Removal (PIER) to provide a comprehensive metric definition for hybrid organizations. The framework includes quality measurements, quality enforcement, and quality decision points at different organizational levels and project milestones. The metrics framework defined in this paper is being used for all Cisco Systems products deployed on customer premises. We present several field metrics for one product portfolio (enterprise networking) to show the effectiveness of the proposed measurement system. As the results show, this metrics framework has significantly improved in-process defect management as well as field quality.Keywords: quality management system, quality metrics framework, quality metrics, agile, waterfall, hybrid development system
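As one illustration of the kind of metric such a framework can roll up, the sketch below computes a per-phase defect removal effectiveness; the phase names and counts are assumptions for illustration, not Cisco's actual PIER definitions:

```python
def removal_effectiveness(found_in_phase, escaped_later):
    """Fraction of defects present in a phase that were caught in that phase."""
    total = found_in_phase + escaped_later
    return found_in_phase / total if total else 1.0

# (found in phase, escaped to later phases or the field) -- assumed counts
phases = {"inspection": (40, 10), "evaluation": (25, 5), "removal": (5, 0)}
dre = {name: removal_effectiveness(f, e) for name, (f, e) in phases.items()}
```

Tracking such ratios per milestone is one way in-process defect management can be made visible at every organizational level.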
Procedia PDF Downloads 174
3785 Effect of Irrigation Regime and Plant Density on Chickpea (Cicer arietinum L.) Yield in a Semi-Arid Environment
Authors: Atif Naim, Faisal E. Ahmed, Sershen
Abstract:
A field experiment was conducted for two consecutive winter seasons at the Demonstration Farm of the Faculty of Agriculture, University of Khartoum, Sudan, to study the effects of different levels of irrigation regime and plant density on the yield of an introduced small-seeded (desi type) chickpea cultivar (ILC 482). The experiment was laid out in a 3×3 factorial split-plot design with 4 replications. The treatments consisted of three irrigation regimes (I₁ = optimum irrigation, I₂ = moderate stress, and I₃ = severe stress, corresponding to irrigation after drainage of 50%, 75%, and 100% of available water based on 70%, 60%, and 50% of field capacity, respectively) assigned to main plots and three plant densities (D₁ = 20, D₂ = 40, and D₃ = 60 plants/m²) assigned to subplots. The results indicated that the yield components (number of pods per plant, number of seeds per pod, 100-seed weight), seed yield per plant, harvest index, and yield per unit area of chickpea were significantly (p < 0.05) affected by irrigation regime. Decreasing the irrigation regime significantly (p < 0.05) decreased all measured parameters. By contrast, increasing plant density significantly (p < 0.05) decreased the number of pods and seed yield per plant and increased seed yield per unit area, while the number of seeds per pod and the harvest index were not significantly (p > 0.05) affected by plant density. The interaction between irrigation regime and plant density also significantly (p < 0.05) affected all measured yield parameters except the harvest index. It could be concluded that the best irrigation regime was full irrigation (after drainage of 50% of available water at 70% field capacity) and the optimal plant density was 20 plants/m² under semi-arid conditions.Keywords: irrigation regime, Cicer arietinum, chickpea, plant density
Procedia PDF Downloads 225
3784 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails
Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali
Abstract:
When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially in the case of functional and safety-relevant components. These can only be measured offline due to the large number of features and the accuracy requirements. The risk of producing components outside the tolerances is minimized but not eliminated by the statistical evaluation of process capability and control measurements. The inspection intervals are based on the acceptable risk and are at the expense of productivity but remain reactive and, in some cases, considerably delayed. Due to the considerable progress made in the field of condition monitoring and measurement technology, permanently installed sensor systems in combination with machine learning and artificial intelligence, in particular, offer the potential to independently derive forecasts for component geometry and thus eliminate the risk of defective products - actively and preventively. The reliability of forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper, therefore, uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, as otherwise, it would not be possible to forecast components in real-time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). The run of such measuring programs alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative. 
Over a period of 2 months, all measurement data (>200 measurements per variant) were collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured for all 6 car seat rail variants was reduced by over 80%. Specifically, direct correlations were proven for almost 100 of an average of 125 characteristics across 4 different products. A further 10 features correlate via indirect relationships, so the number of features required for a prediction could be reduced to fewer than 20. A correlation factor >0.8 was assumed for all correlations.Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regression analysis
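The correlation-based reduction described above can be sketched as follows: compute pairwise Pearson correlations and keep only one feature from each group correlated beyond the 0.8 threshold. The measurement columns below are illustrative, not the CMM data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two feature columns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def reduce_features(columns, threshold=0.8):
    """Greedy reduction: drop any feature strongly correlated with one kept."""
    kept = []
    for name, col in columns.items():
        if all(abs(pearson(col, columns[k])) <= threshold for k in kept):
            kept.append(name)
    return kept
```

Only the kept features then need inline measurement; the dropped ones can be reconstructed from their correlated partners, which is what makes a real-time forecast feasible.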
Procedia PDF Downloads 49
3783 Evaluating and Prioritizing the Effective Management Factors of Human Resources Empowerment and Efficiency in Manufacturing Companies: A Case Study on Fars’ Livestock and Poultry Manufacturing Companies
Authors: Mohsen Yaghmor, Sima Radmanesh
Abstract:
Rapid environmental changes have been threatening the life of many organizations. Empowerment and productivity of human resources should be considered the most important issues for increasing performance and ensuring the survival of organizations. In this research, the management factors affecting the productivity and empowerment of human resources were identified and reviewed. Afterwards, answers were sought to the questions "What are the factors affecting the productivity and empowerment of human resources?" and "What is their priority order based on effective human resource management in the Fars Poultry Complex?". A questionnaire was designed regarding the priorities and effectiveness of the identified factors. Six factors were specified as having the most effect on the organization: individual characteristics, teaching, motivation, partnership management, authority submission, and job development. The questionnaire measuring the priority and effect of these factors yielded, after data collection, a Cronbach's alpha coefficient of r = 0.792, so the questionnaire can be said to have sufficient reliability. After analyzing the data on the six specified factors with the Friedman test, their effects on the organization were ranked as follows: individual characteristics, job development or enrichment, authority submission, partnership management, teaching, and motivation. Lastly, approaches are introduced to increase the productivity of manpower.Keywords: productivity, empowerment, enrichment, authority submission, partnership management, teaching, motivation
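The reliability coefficient reported above (Cronbach's alpha, r = 0.792) can be computed as sketched below; the item-score matrix is an illustrative assumption, not the survey data:

```python
import statistics

def cronbach_alpha(items):
    """items: one list of respondent scores per questionnaire item."""
    k = len(items)
    item_vars = sum(statistics.pvariance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent totals
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

Values above roughly 0.7 are conventionally read as sufficient internal consistency, which is the judgment the abstract makes for its 0.792.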
Procedia PDF Downloads 265
3782 Identification of Flooding Attack (Zero Day Attack) at Application Layer Using Mathematical Model and Detection Using Correlations
Authors: Hamsini Pulugurtha, V.S. Lakshmi Jagadmaba Paluri
Abstract:
Distributed denial of service (DDoS) attack is one of the top-rated cyber threats at present. It runs down victim server resources such as bandwidth and buffer size by preventing the server from supplying resources to legitimate clients. In this article, we propose a mathematical model of a DDoS attack and discuss its relevance to features such as the inter-arrival time or rate of arrival of the attacking clients accessing the server. We further analyze the attack model in the context of exhausting the bandwidth and buffer size of the victim server. The proposed technique uses an unsupervised learning technique, the self-organizing map, to form clusters of identical features. Lastly, the approach applies mathematical correlation and the normal probability distribution to the clusters and analyses their behavior to detect a DDoS attack. Such systems not only interconnect small devices exchanging personal data but also critical infrastructures reporting the status of nuclear facilities. Although this interconnection brings many benefits and advantages, it also creates new vulnerabilities and threats which can be used to mount attacks. In such sophisticated interconnected systems, the ability to detect attacks as early as possible is of paramount importance.Keywords: application attack, bandwidth, buffer correlation, DDoS distribution flooding intrusion layer, normal prevention probability size
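A minimal sketch of the detection idea above: fit a baseline on the arrival rates of legitimate traffic, then flag windows whose rate deviates far from that normal distribution. The thresholds and traffic values are illustrative, and the paper's self-organizing-map clustering step is not reproduced here:

```python
import statistics

def fit_baseline(rates):
    """Mean and sample standard deviation of clean-traffic arrival rates."""
    return statistics.mean(rates), statistics.stdev(rates)

def is_flooding(rate, mean, std, k=3.0):
    """Flag a window more than k standard deviations above the baseline mean."""
    return rate > mean + k * std
```

In a full pipeline this per-cluster test would be applied to each group of similar traffic features produced by the unsupervised clustering stage.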
Procedia PDF Downloads 225
3781 How Envisioning Process Is Constructed: An Exploratory Research Comparing Three International Public Televisions
Authors: Alexandre Bedard, Johane Brunet, Wendellyn Reid
Abstract:
Public television is constantly trying to maintain and develop its audience, and to achieve those goals it needs a strong and clear vision. Vision, or envisioning, is a multidimensional process: it is simultaneously a conduit that orients and fixes the future, an idea that comes before the strategy, and a means by which action is accomplished, from a business perspective. Also, vision is often studied in a prescriptive and instrumental manner. Based on our understanding of the literature, we were able to explain how envisioning, as a process, is a creative one; it takes place in the mind and uses wisdom and intelligence through a process of evaluation, analysis, and creation. Through an aggregation of the literature, we built a model of the envisioning process, based on past experiences, perceptions, and knowledge, and influenced by the context, namely the individual, the organization, and the environment. In exploratory research in which vision was deciphered through discourse, using a qualitative and abductive approach and a grounded theory perspective, we explored three extreme cases, with eighteen interviews of experts, leaders, politicians, actors of the industry, etc., amounting to more than twenty hours of interviews in three different countries. We compared the strategy, the business model, and the political and legal forces. We also looked at the history of each industry from an inertial point of view. Our analysis of the data revealed that a legitimacy effect due to the audience, innovation, and the creativity of the institutions was the cornerstone of what influences the envisioning process. This allowed us to identify how different the process was for Canadian, French, and UK public broadcasters, although we concluded that all three had a socially constructed vision for their future, based on stakeholder management and an emerging role for managers: idea brokers.Keywords: envisioning process, international comparison, television, vision
Procedia PDF Downloads 132
3780 A Study on the Microbiological Profile and Antibiotic Sensitivity Pattern of Bacterial Isolates Causing Urinary Tract Infection in Intensive Care Unit Patients in a Tertiary Care Hospital in Eastern India
Authors: Pampita Chakraborty, Sukumar Mukherjee
Abstract:
The study was done to determine the microbiological profile and changing pattern of the pathogens causing UTI in ICU patients. All patients admitted to the ICU with a urinary catheter in place for more than 48 hours were included in the study. Urine samples were collected in a sterile container with aseptic precautions using a disposable syringe and were processed as per standards. Antimicrobial susceptibility testing was done by the disc diffusion method as per CLSI guidelines. A total of 100 urine samples were collected from ICU patients, of which 30% showed significant bacterial growth and 7% showed growth of Candida spp. The prevalence of UTI was higher in females (73%) than males (27%). Gram-negative bacilli (26; 86.67%) were more common in our study, followed by gram-positive cocci (4; 13.33%). The most common uropathogens isolated were Escherichia coli, 14 (46.67%), followed by Klebsiella spp., 7 (23.33%), Staphylococcus aureus, 4 (13.33%), Acinetobacter spp., 3 (10%), Enterococcus faecalis, 1 (3.33%), and Pseudomonas aeruginosa, 1 (3.33%). Most of the gram-negative bacilli were sensitive to amikacin (80%) and nitrofurantoin (80%), whereas all gram-positive organisms were sensitive to vancomycin. A large number of ESBL producers were also observed in this study. The findings showed that E. coli is the predominant pathogen and has an increasing resistance pattern to commonly used antibiotics. The study proposes that adherence to an antibiotic policy is a key ingredient of successful outcomes in ICU patients and also emphasizes that repeated evaluation of microbial characteristics and continuous surveillance of resistant bacteria are required for the selection of appropriate antibiotic therapy.Keywords: antimicrobial sensitivity, intensive care unit, nosocomial infection, urinary tract infection
Procedia PDF Downloads 270