462 Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation
Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov
Abstract:
Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with one of two technologies: the first strategy uses the AtriCure device (bipolar, non-irrigated, non-clamping), and the second uses the Medtronic device (bipolar, irrigated, clamping). The study presents a comparative analysis of the clinical outcomes of the two strategies in thoracoscopic ablation of AF using the AtriCure vs. Medtronic devices. Methods: In a two-center study, 123 patients underwent thoracoscopic ablation of AF between 2016 and 2020. Patients were divided into two groups: the first group comprised patients treated with the AtriCure device (N=63), and the second group patients treated with the Medtronic device (N=60). Patients were comparable in age, gender, and initial severity of the condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with paroxysmal AF (14.3%), persistent AF (68.3%), and long-standing persistent AF (17.5%); in group 2, the proportions were 13.3%, 13.3%, and 73.3%, respectively. Median ejection fraction and indexed left atrial volume were 63% and 40.6 ml/m2 in group 1, and 56% and 40.5 ml/m2 in group 2. In addition, group 1 included 39.7% of patients with chronic heart failure (NYHA class II) and 4.8% with chronic heart failure (NYHA class III), versus 45% and 6.7% in group 2. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and cardiopulmonary exercise testing. Duration of freedom from AF, late mortality rate, and the prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the rate of adverse events was 14.3% in group 1 and 16.7% in group 2.
The mean follow-up period was 50.4 (31.8; 64.8) months in group 1 and 30.5 (14.1; 37.5) months in group 2 (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, of whom 25% required additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2, the figures were 90% and 18.3%, respectively (P<0.02 for total freedom from AF). At follow-up, the late mortality rate was 4.8% in group 1, with no fatal events in group 2. The prevalence of cerebrovascular events was higher in group 1 than in group 2 (6.7% vs. 1.7%). Conclusions: Despite the relatively shorter follow-up of group 2, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy in the long term.
Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery
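As an illustrative aside, the reported P<0.02 for total freedom from AF (73.3% of 63 patients vs. 90% of 60) is consistent with a standard two-proportion z-test. The abstract does not state which test the authors used, so this is a hedged re-computation with success counts rounded from the percentages, not the study's actual analysis:

```python
import math

def two_proportion_z(successes1, n1, successes2, n2):
    """z statistic and two-sided p-value for H0: p1 == p2 (pooled test)."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Success counts rounded from the abstract's percentages (73.3% of 63, 90% of 60).
z, p = two_proportion_z(round(0.733 * 63), 63, round(0.90 * 60), 60)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ≈ -2.41, p ≈ 0.016
```

With the rounded counts (46/63 vs. 54/60) this yields p ≈ 0.016, matching the reported threshold.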
Procedia PDF Downloads 110
461 A Study on the Magnetic and Submarine Geology Structure of TA22 Seamount in Lau Basin, Tonga
Authors: Soon Young Choi, Chan Hwan Kim, Chan Hong Park, Hyung Rae Kim, Myoung Hoon Lee, Hyeon-Yeong Park
Abstract:
We performed marine magnetic, bathymetry, and seismic surveys at the TA22 seamount (Lau Basin, SW Pacific) in October 2009 to search for submarine hydrothermal deposits. We acquired the magnetic and bathymetry data sets using a SeaSPY Overhauser proton magnetometer (Marine Magnetics Co.) and an EM120 multibeam echo sounder (Kongsberg Co.). Data processing yielded the detailed seabed topography, the magnetic anomaly, the reduction to the pole (RTP), and the magnetization. Based on the magnetic results, we analyzed the submarine geological structure of the TA22 seamount together with a post-processed seismic profile. The detailed bathymetry of the TA22 seamount showed left and right crests, each with a caldera feature in its central part. Regionally, the magnetic anomaly distribution displayed high anomalies in the northern part and low anomalies in the southern part around the caldera features. The RTP anomaly distribution showed high anomalies in the central part of each caldera, and stronger anomalies inside the calderas than on their outer flanks. The magnetization distribution showed a low-magnetization zone in the center of each caldera and high-magnetization zones in the southeastern and northeastern parts. The seismic profile map suggests small mounds inside the central part of each caldera and, in the case of the right caldera, the possibility of sills formed by magma. Taking into account all results of this study (bathymetry, magnetic anomaly, RTP, magnetization, seismic profile) together with rock samples collected in the left caldera area during the 2009 survey, we infer possible hydrothermal deposits at the mounds in the central part of each caldera and on the outer flanks of the calderas where the low-magnetization zone occurs.
We expect better results from combined modeling of these data with other geological data (e.g., detailed gravity, 3D seismic, and petrologic study results).
Keywords: detailed bathymetry, magnetic anomaly, seamounts, seismic profile, SW Pacific
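To illustrate why the RTP transform matters for interpreting anomalies like those over the TA22 calderas, here is a minimal sketch in pure Python (the dipole moment, depth, and survey geometry are hypothetical, not survey data): at the magnetic pole, where magnetization and the ambient field are both vertical, the total-field anomaly of a buried dipole peaks directly over the source, and this is the geometry RTP reconstructs:

```python
import math

def dipole_field(m, r):
    """Anomalous magnetic field (tesla) of a point dipole m (A·m²) at offset r (m)."""
    mu0_4pi = 1e-7  # μ0 / 4π in SI units
    rn = math.sqrt(sum(c * c for c in r))
    rhat = [c / rn for c in r]
    mdotr = sum(mi * ri for mi, ri in zip(m, rhat))
    return [mu0_4pi * (3 * mdotr * ri - mi) / rn ** 3 for mi, ri in zip(m, rhat)]

m = [0.0, 0.0, 1e10]      # vertically magnetized dipole; moment is hypothetical
f_hat = [0.0, 0.0, 1.0]   # vertical ambient field, i.e. the at-the-pole geometry
profile = []
for x in range(-5000, 5001, 500):          # survey line 1 km above the source
    b = dipole_field(m, [x, 0.0, 1000.0])
    # Total-field anomaly ≈ projection of the anomalous field onto f_hat.
    profile.append(sum(bi * fi for bi, fi in zip(b, f_hat)))
```

In this pole geometry the anomaly peaks at x = 0, directly over the dipole, with weak negative flanks; at other magnetic latitudes the peak is skewed off-source, which is the distortion RTP removes.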
460 Detection and Identification of Antibiotic Resistant Bacteria Using Infra-Red-Microscopy and Advanced Multivariate Analysis
Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel
Abstract:
Antimicrobial drugs play an important role in controlling illness associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global health-care problem. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for optimal antimicrobial therapy of infected patients and in many cases can save lives. Conventional methods for susceptibility testing, such as disk diffusion, are time-consuming, and other methods, including the E-test and genotyping, are relatively expensive. Fourier transform infrared (FTIR) microscopy is a rapid, safe, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria. Modern infrared (IR) spectrometers with high spectral resolution enable the measurement of unprecedented biochemical information from cells at the molecular level. Moreover, combined with new bioinformatics analyses, IR spectroscopy becomes a powerful technique that enables the detection of structural changes associated with resistance. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a few minutes. The bacterial samples, identified at the species level by MALDI-TOF and examined for susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories of Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods.
Our results, based on 550 E. coli samples, were promising and showed that by using the infrared spectroscopic technique together with multivariate analysis, it is possible to classify the tested bacteria as sensitive or resistant with a success rate higher than 85% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing FTIR microscopy as a rapid and reliable method for identifying antibiotic susceptibility.
Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility
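As a toy illustration of the kind of multivariate classification described above (the study's actual pipeline and spectra are not given in the abstract, so all feature values below are invented), a minimal nearest-centroid classifier over short "spectra" might look like:

```python
# Each class centroid is the mean spectrum of its training samples; a new
# spectrum is assigned the label of the nearest centroid (Euclidean distance).
def centroid(spectra):
    n = len(spectra)
    return [sum(s[i] for s in spectra) / n for i in range(len(spectra[0]))]

def classify(spectrum, centroids):
    """Assign the label of the closest class centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(spectrum, centroids[label]))

# Invented training "spectra" (4 wavenumber bins each) for two classes.
train = {
    "sensitive": [[0.9, 0.2, 0.1, 0.5], [0.8, 0.3, 0.2, 0.4]],
    "resistant": [[0.2, 0.8, 0.7, 0.1], [0.3, 0.9, 0.6, 0.2]],
}
centroids = {label: centroid(specs) for label, specs in train.items()}
print(classify([0.85, 0.25, 0.15, 0.45], centroids))  # prints "sensitive"
```

Real FTIR pipelines operate on hundreds of absorbance bins and typically add preprocessing (baseline correction, normalization, dimensionality reduction) before classification; the sketch only shows the final assignment step.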
459 Land Degradation Vulnerability Modeling: A Study on Selected Micro Watersheds of West Khasi Hills Meghalaya, India
Authors: Amritee Bora, B. S. Mipun
Abstract:
Land degradation is often used to describe the environmental phenomena that reduce land’s original productivity both qualitatively and quantitatively. The study of land degradation vulnerability primarily deals with “Environmentally Sensitive Areas” (ESA) and the amount of topsoil loss due to erosion. In many studies, the assessment of the existing status of land degradation is used to represent the vulnerability. Moreover, in most studies the primary emphasis of land degradation vulnerability is on sensitivity to soil erosion only. However, the concept of land degradation vulnerability can have different objectives depending upon the perspective of the study. It shows the extent to which changes in land use and land cover can imprint their effect on the land. In other words, it represents the susceptibility of a piece of land to permanent, or long-run, degradation of its productive quality. It is also important to note that land degradation vulnerability is not the outcome of a single factor; it is a probability assessment of the status of land degradation and needs to consider both biophysical and human-induced parameters. To avoid the complexity of previous models in this regard, the present study emphasizes a simplified model to assess land degradation vulnerability in terms of current human population pressure, land use practices, and existing biophysical conditions. It is a mixed-method approach, termed the land degradation vulnerability index (LDVi), originally inspired by the MEDALUS model (Mediterranean Desertification and Land Use, 1999) and Farazadeh’s 2007 revised version of it. It follows the guidelines of the Space Applications Centre, Ahmedabad / Indian Space Research Organisation for land degradation vulnerability.
The model integrates the climatic index (Ci), vegetation index (Vi), erosion index (Ei), land utilization index (Li), population pressure index (Pi), and cover management index (CMi), giving equal weightage to each parameter. The final result shows that the very high vulnerability zone primarily reflects three prominent circumstances: land under continuous population pressure, high concentration of human settlement, and a high amount of topsoil loss due to surface runoff within the study sites. As all the parameters of the model are amalgamated with equal weightage, and further examined with the help of regression analysis, the LDVi model also provides a strong grasp of each parameter and of how far each is competent to trigger the land degradation process.
Keywords: population pressure, land utilization, soil erosion, land degradation vulnerability
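A minimal sketch of the equal-weight aggregation described above. The abstract states only "equal weightage", so both the arithmetic mean and the geometric mean typically used by MEDALUS-style models are shown; the sub-index values are invented for illustration:

```python
import math

def ldvi_arithmetic(indices):
    """Equal-weight arithmetic mean of the sub-indices."""
    return sum(indices.values()) / len(indices)

def ldvi_geometric(indices):
    """Equal-weight geometric mean, as in MEDALUS-style quality indices."""
    vals = list(indices.values())
    return math.prod(vals) ** (1 / len(vals))

# Hypothetical normalized sub-indices for one micro-watershed
# (1 = best, 2 = worst, the usual MEDALUS scaling).
indices = {"Ci": 1.4, "Vi": 1.7, "Ei": 1.9, "Li": 1.5, "Pi": 1.8, "CMi": 1.6}
print(round(ldvi_arithmetic(indices), 3), round(ldvi_geometric(indices), 3))
```

The two variants differ only slightly for indices on a narrow scale, but the geometric mean penalizes a single very poor sub-index more strongly.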
458 Urban Compactness and Sustainability: Beijing Experience
Authors: Xilu Liu, Ameen Farooq
Abstract:
Beijing has several compact residential housing settings in many of its urban districts. The study in this paper reveals that urban compactness, as a predictor of density, may carry an altogether different meaning in the developing world than in the U.S. when pursuing objectives of urban sustainability. Recent urban design studies in the U.S. advocate compact, mixed-use, higher-density housing to achieve sustainable and energy-efficient living environments. While the concept of urban compactness is widely accepted as an approach in modern architectural and urban design fields, this belief may not carry well into all areas within cities of developing countries. Beijing’s technology-driven economy, with its rich historical and cultural heritage and a highly speculative real-estate market, extends its urban boundaries into multiple compact urban settings of varying scales and densities. The accelerated pace of migration from the countryside in search of better opportunities has led to unsustainable and uncontrolled build-ups to meet the growing population demand within and outside the urban center. This unwarranted compactness in certain urban zones has produced an unhealthy physical density with serious environmental and ecological challenges to basic living conditions. In addition, the crowding, traffic congestion, pollution, and limited housing surrounding this compactness are a threat to public health. Several residential blocks in close proximity to each other were found to be quite compacted, or ill-planned, due to a lack of proper planning in Beijing. Most of them at first sight appear compact and dense, but further analysis revealed that what appears to be dense is not actually dense enough to make a good case as the cornerstone of sustainability and energy efficiency.
This study considered several factors, including floor area ratio (FAR), ground coverage (GSI), and open space ratio (OSR), as indicators in analyzing urban compactness as a predictor of density. The findings suggest that these measures of the density of the residential sites under study were much smaller than expected given their compact adjacencies. Further analysis revealed that several residential developments appear to support the notion of density in their compact layout but are in fact compacted by unregulated planning, marred by the lack of proper urban design standards, policies, and guidelines specific to their urban context and condition.
Keywords: Beijing, density, sustainability, urban compactness
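The three indicators above can be computed from basic site figures. This sketch uses the standard Spacematrix-style definitions; the site numbers are invented for illustration, not data from the Beijing case study:

```python
def density_indicators(gross_floor_area, footprint_area, site_area):
    """FAR, GSI, and OSR for one site, all areas in the same units (e.g. m²)."""
    far = gross_floor_area / site_area    # floor area ratio (built intensity)
    gsi = footprint_area / site_area      # ground space index (coverage)
    osr = (1 - gsi) / far                 # open space ratio (spaciousness)
    return far, gsi, osr

# A hypothetical residential block: 10,000 m² site, 30% coverage, 6 storeys.
far, gsi, osr = density_indicators(
    gross_floor_area=18_000, footprint_area=3_000, site_area=10_000)
print(f"FAR={far:.2f}, GSI={gsi:.2f}, OSR={osr:.2f}")  # FAR=1.80, GSI=0.30, OSR=0.39
```

A site can thus look compact (high GSI) while still having a modest FAR, which is exactly the mismatch between apparent and actual density that the abstract reports.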
457 Relationship of Macro-Concepts in Educational Technologies
Authors: L. R. Valencia Pérez, A. Morita Alexander, Peña A. Juan Manuel, A. Lamadrid Álvarez
Abstract:
This research reflects on and identifies explanatory variables, and the relationships among them, that are involved in educational technology, all encompassed in four macro-concepts: cognitive inequality, economy, food, and language. These give the guideline to a more detailed knowledge of educational systems, communication and equipment, physical space, and teachers; all of them, interacting with each other, give rise to what is called educational technology management. These elements contribute to very specific knowledge of communications equipment, networks and computer equipment, systems, and content repositories. The aim is to establish the importance of knowing the global environment in the transfer of knowledge to poor countries, so that it does not diminish their capacity to be authentic and to preserve their cultures, their languages or dialects, their hierarchies, and their real needs; in short, to respect the customs of the different towns, villages, or cities that are intended to be reached through the use of internationally agreed professional educational technologies. The methodology used in this research is analytical-descriptive, which allows each of the variables that, in our opinion, must be taken into account to be explained, in order to achieve an optimal incorporation of educational technology in a model that gives results in the medium term. The idea is that concepts will be progressively integrated into others of greater coverage until reaching macro-concepts of national scope that serve as elements of conciliation in the various federal and international reforms.
At the center of the model is educational technology, which is directly related to the concepts contained in factors such as the educational system, communication and equipment, spaces, and teachers, all globally immersed in the macro-concepts of cognitive inequality, economy, food, and language. One of the major contributions of this article is to cast this idea as an algorithm that allows the indicator to be evaluated as impartially as possible, since other indicators are to be taken from internationally recognized bodies, such as the OECD in the area of education systems, so that they are not influenced by particular political or interest-group pressures. This work opens the way to a relationship between the entities involved, conceptual, procedural, and human, to clearly identify the convergence of their impact on the problem of education and how that relationship can contribute to improvement; it also shows the possibility of reaching a comprehensive education reform for all.
Keywords: relationships macro-concepts, cognitive inequality, economics, alimentation and language
456 Harmonization of Accreditation Standards in Education of Central Asian Countries: Theoretical Aspect
Authors: Yskak Nabi, Onolkan Umankulova, Ilyas Seitov
Abstract:
The Tempus project “Central Asian Network for Quality Assurance (CANQA)” was implemented in 2009-2012. As a result of the project, two accreditation agencies were established: the Agency for Quality Assurance in the Field of Education “EdNet” in Kyrgyzstan and the Center of Progressive Technologies in Tajikistan. The importance of the project’s research studies is supported by the idea that the creation of a Central Asian network for quality assurance in education is still relevant, as the results of the international forum “Global in Regional: Kazakhstan in the Bologna Process and EU Projects,” held in Nur-Sultan in October 2020, prove. At the same time, the previous experience of partnership between the accreditation agencies of Central Asia shows that the recommendations elaborated within the CANQA project were not theoretically justified, although a number of facts and arguments prove their practical applicability. In this respect, the joint activities of the accreditation agencies of Kyrgyzstan and Kazakhstan are representative. For example, the Independent Kazakh Agency of Accreditation and Rating successfully conducts accreditation of Kyrgyz universities; based on a memorandum of joint activity between the agency for quality assurance in the field of education “EdNet” (Kyrgyzstan) and the Astana Accreditation Agency (Kazakhstan), the latter provides its experts for accreditation procedures in EdNet. The exchange of experience among the agencies shows an effective approach to adapting European standards to the reality of the education systems of Central Asia, considering not only the legal framework but also European practice. The relevance of the research therefore lies in the fact that there is a practical partnership between the accreditation agencies of Central Asian countries but no theoretical justification of the integration processes in the accreditation field.
As a result, the following hypothesis was put forward: if theoretical aspects for the harmonization of accreditation standards are developed, then integration processes will improve, since the implementation of Bologna process principles would be supported by wider possibilities; in particular, student and academic mobility would improve. Indeed, in Kazakhstan the total share of foreign students was 5.04% in 2020, most of them coming from Kyrgyzstan, Tajikistan, and Uzbekistan; if integration processes improve, this share can increase.
Keywords: accreditation standards in education, Central Asian countries, pedagogical theory, model
455 Phenotypic Diversity of the Tomato Germplasm from the Lazio Region in Central Italy, with a Case Study on Molecular Distinctiveness
Authors: Barbara Farinon, Maurizio E. Picarella, Lorenzo Mancini, Andrea Mazzucato
Abstract:
Italy is notoriously a secondary center of diversification for the cultivated tomato (Solanum lycopersicum L.). The study of phenotypic and genetic diversity in landrace collections is important for germplasm conservation and biodiversity protection. Here, we studied the germplasm collected in the Lazio region of Central Italy, with a focus on the distinctiveness among landraces and the attribution of membership to unnamed accessions. Our regional collection included 30 accessions belonging to six different locally recognized landraces and 21 unnamed accessions. All accessions were gathered in Lazio and belonged to the collections held at the Regional Agency for the Development and Innovation of Agriculture in Lazio (ARSIAL, in application of Regional Act n. 15/2000, funded by the Lazio Rural Development Plan 2014-2020, Agro-environmental Measure, Action 10.2.1) and at the University of Tuscia. We included 13 control genotypes as references. The collection showed wide phenotypic variability for several traits, such as fruit weight (range 14-277 g), locule number (2-12), shape index (0.54-2.65), yield (0.24-3.08 kg/plant), and soluble solids (3.4-7.5 °B). A few landraces showed uncommon phenotypes, such as potato leaf, colorless fruit epidermis, or delayed ripening. Multivariate analysis of 25 cardinal phenotypic variables grouped the named varieties and allowed some of the unnamed accessions to be assigned to recognized groups. A case study of distinctiveness is presented for the flattened-ribbed types, which showed overlapping distributions according to the phenotypic data. Molecular markers retrieved from previous studies revealed differences from the phenotypic clustering, indicating that the named varieties “Scatolone di Bolsena” and “Pantano Romanesco” belong to the Marmande group, together with the reference landrace from Tuscany, “Costoluto Fiorentino”.
In contrast, the landrace “Spagnoletta di Formia e Gaeta” was clearly distinct from the former at the molecular level. A genotypic analysis of the collection therefore appears necessary to better define the molecular distinctiveness among the flattened-ribbed accessions and to properly attribute the membership group of the unnamed accessions.
Keywords: distinctiveness, flattened-ribbed fruits, regional landraces, tomato
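As an illustration of the membership-attribution step (the study's actual multivariate procedure is more elaborate and used 25 variables; the trait values below are invented), unnamed accessions can be assigned to the nearest named-landrace centroid after z-score standardizing each trait so that weight, counts, and ratios are comparable:

```python
import math

def standardize(rows):
    """Z-score each trait (column) across all accessions."""
    cols = list(zip(*rows))
    out_cols = []
    for col in cols:
        mean = sum(col) / len(col)
        sd = math.sqrt(sum((x - mean) ** 2 for x in col) / len(col))
        out_cols.append([(x - mean) / sd for x in col])
    return [list(r) for r in zip(*out_cols)]

# Traits: fruit weight (g), locule number, shape index — invented values.
named = {"landrace_A": [260, 10, 0.6], "landrace_B": [20, 2, 2.4]}
unnamed = [240, 9, 0.7]

rows = standardize(list(named.values()) + [unnamed])
centroids = dict(zip(named, rows[:-1]))
query = rows[-1]
best = min(centroids,
           key=lambda k: sum((a - b) ** 2 for a, b in zip(centroids[k], query)))
print(best)  # prints "landrace_A"
```

With several accessions per landrace, each centroid would be the mean of its standardized trait vectors rather than a single row.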
454 The Impact of Climate Change on Typical Material Degradation Criteria over Timurid Historical Heritage
Authors: Hamed Hedayatnia, Nathan Van Den Bossche
Abstract:
Understanding the ways in which climate change accelerates or slows down the process of material deterioration is the first step towards assessing adaptive approaches for the conservation of historical heritage. Analysis of the effects of climate change on degradation risk assessment parameters, such as freeze-thaw cycles and wind erosion, is also key when considering mitigating actions. Given the vulnerability of cultural heritage to climate change, the impact of this phenomenon on material degradation criteria, with a focus on brick masonry walls, was studied for Timurid heritage located in Iran. The Timurids were the final great dynasty to emerge from the Central Asian steppe. Through their patronage, the eastern Islamic world, centered on the Khorasan region, especially Mashhad and Herat, became a prominent cultural center. Goharshad Mosque is a mosque in Mashhad, in Razavi Khorasan Province, Iran. It was built by order of Empress Goharshad, the wife of Shah Rukh of the Timurid dynasty, in 1418 CE. Choosing an appropriate regional climate model was the first step. The outputs of two different climate models, ALARO-0 and REMO, were analyzed to find out which model is better adapted to the area. To validate the quality of the models, model data and observations were compared in four different climate zones in Iran over a period of 30 years. The impacts of the projected climate change were evaluated until 2100. To determine the material specifications of Timurid bricks, standard brick samples from a Timurid mosque were studied. Determination of the water absorption coefficient, the diffusion properties, the real density, and the total porosity was performed to characterize the brick masonry walls, as needed for running HAM (heat, air, and moisture) simulations.
The results of the analysis showed that the threatening factors differ across climate zones, but the most influential factors around Iran are extreme temperature increase and erosion. In the north-western region of Iran, one of the key factors is wind erosion; in the north, rainfall erosion and mold growth risk are the key factors; and in the north-eastern part, where our case study is located, the important parameter is wind erosion.
Keywords: brick, climate change, degradation criteria, heritage, Timurid period
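One of the degradation-risk parameters named above, freeze-thaw cycling, can be counted directly from a simulated surface-temperature series. This is a simplified sketch using a single 0 °C crossing criterion and hypothetical temperatures; practical assessments often add a moisture-content condition, which is exactly what HAM simulation output provides:

```python
def freeze_thaw_cycles(temps_c, threshold=0.0):
    """Count transitions from above to below the freezing threshold (°C)."""
    cycles = 0
    for prev, cur in zip(temps_c, temps_c[1:]):
        if prev > threshold >= cur:
            cycles += 1
    return cycles

# Hypothetical hourly wall-surface temperatures over two cold days.
series = [3.0, 1.0, -2.0, -4.0, -1.0, 2.0, 4.0, 1.5, -0.5, -3.0, 0.5, 2.0]
print(freeze_thaw_cycles(series))  # prints 2
```

Running the same count on climate-model output for the historical and projected periods is one way to quantify how the freeze-thaw criterion shifts under climate change.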
453 The Geometrical Cosmology: The Projective Cast of the Collective Subjectivity of the Chinese Traditional Architectural Drawings
Authors: Lina Sun
Abstract:
Chinese traditional drawings related to buildings and construction apply a unique geometry, distinct from Western Euclidean geometry, and embrace a collection of special terminologies under the category of tu (the Chinese character for drawing). This paper, on the one hand, etymologically analyzes the terminologies of Chinese traditional architectural drawing and, on the other, geometrically deconstructs the composition of tu and locates the visual narrative language of tu in the pictorial tradition. The geometrical analysis centers on a selected series of Yang-shi-lei tu for the construction of emperors’ mausoleums in the Qing Dynasty (1636-1912), and also draws on earlier architectural drawings and architectural paintings, such as jiehua and paintings on religious and tomb frescoes, for comparison. By doing so, this research reveals that the terminologies corresponding to different geometrical forms indicate associations between architectural drawing and the philosophy of Chinese cosmology, and that the arrangement of the geometrical forms in the picture plane facilitates expressions of the concepts of space and position in this geometrical cosmology. These associations and expressions are the collective intentions of architectural drawing, evolving over thousands of years of unbroken tradition and irrelevant to individual authorship. Moreover, the architectural tu itself, as an entity, not only functions as the representation of buildings but also expresses and strengthens intentions by using this unique Chinese geometrical language flexibly and intentionally. These collective cosmological spatial intentions, and the corresponding geometrical words and languages, reveal that Chinese traditional architectural drawing functions as a unique architectural site with subjectivity, which exists in parallel with buildings and expresses intentions and meanings by itself.
The methodology and findings of this research therefore challenge previous studies that treat architectural drawings merely as representations of buildings and use them only as evidence to reconstruct information about buildings. Furthermore, this research situates architectural drawing between the study of Chinese technological tu and that of artistic painting, bridging two academic areas that have usually treated the partial features of architectural drawing separately. Beyond this research, the collective subjectivity of Chinese traditional drawings will facilitate understanding of the transitional experience from tradition to drawing modernity, where the individual subjective identities and intentions of architects arise. This research supports understanding both the ambivalence and the affinity of drawing modernity as it encounters tradition.
Keywords: Chinese traditional architectural drawing (tu), etymology of tu, collective subjectivity of tu, geometrical cosmology in tu, geometry and composition of tu, Yang-shi-lei tu
452 Traumatic Events, Post-traumatic Symptoms, Personal Resilience, Quality of Life, and Organizational Commitment Among Midwives: A Cross-Sectional Study
Authors: Kinneret Segal
Abstract:
The work of a midwife is emotionally challenging, both positively and negatively. Midwives share moments of joy when a baby is welcomed into the world and also attend difficult events of loss and trauma. The relationship that develops with the laboring mother is the essence of the midwife’s care, and it is a fundamental source of motivation and professional satisfaction. This close relationship may become a double-edged sword in cases of exposure to traumatic events at birth. Birth complications, exposure to emergencies and traumatic events, and loss can affect the midwife’s professional quality of life and compassion satisfaction. The issue of traumatic experiences in the work of midwives seems not to have been sufficiently explored. The present study examined the associations between exposure to traumatic events, personal resilience, post-traumatic symptoms, professional quality of life, and organizational commitment among midwifery nurses in Israeli hospitals. 131 midwives from three hospitals in the center of Israel participated in the study. The data were collected during 2021 using a self-report questionnaire covering sociodemographic characteristics, the degree of exposure to traumatic events in the delivery room, personal resilience, post-traumatic symptoms, professional quality of life, and organizational commitment. The three most difficult traumatic events for the midwives were the death or fear of death of a newborn, the death or fear of death of a mother, and a stillbirth (“quiet birth”). The higher the frequency of exposure to traumatic events, the more numerous and intense the post-traumatic symptoms. The more numerous and powerful the post-traumatic symptoms, the higher the level of professional burnout and/or compassion fatigue, and the lower the level of compassion satisfaction. High levels of compassion satisfaction and/or low professional burnout were expressed in a heightened sense of organizational commitment.
Personal resilience, country of birth, traumatic symptoms, and organizational commitment predicted compassion satisfaction. Midwives are exposed to traumatic events associated with dissatisfaction and impairment of the professional quality of life that accompanies burnout and compassion fatigue. Exposure to traumatic events leads to the appearance of traumatic symptoms and to a decrease in organizational commitment and in psychological and mental well-being. The issue needs to be addressed by implementing training programs, organizational support, and policies to improve well-being and quality of care among midwives.
Keywords: traumatic experiences, midwives, quality of life, burnout, organizational commitment, personal resilience
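As a sketch of the prediction step (the study's actual model included several predictors; the data below are invented for illustration), a one-predictor least-squares regression of compassion satisfaction on resilience looks like:

```python
def ols(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

resilience = [2.0, 3.0, 3.5, 4.0, 4.5]        # hypothetical scale scores
satisfaction = [2.5, 3.2, 3.6, 4.1, 4.4]
slope, intercept = ols(resilience, satisfaction)
print(f"satisfaction ≈ {slope:.2f}·resilience + {intercept:.2f}")
```

A positive slope corresponds to the reported direction of the association; the multi-predictor version fits one coefficient per variable the same way.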
451 Carbapenem Usage in Medical Wards: An Antibiotic Stewardship Feedback Project
Authors: Choon Seong Ng, P. Petrick, C. L. Lau
Abstract:
Background: Carbapenem-resistant isolates have been increasingly reported. Carbapenem stewardship is designed to optimize carbapenem usage, particularly in medical wards with a high prevalence of carbapenem prescriptions, to combat such emerging resistance. Carbapenem stewardship programmes (CSP) can reduce antibiotic use, but the clinical outcome of such measures needs further evaluation. We examined this prospectively using a feedback mechanism. Methods: Our single-center prospective cohort study involved all carbapenem prescriptions across the medical wards (including medical patients admitted to the intensive care unit) in a tertiary university hospital. The impact of the stewardship was analysed according to whether CSP recommendations were accepted or rejected. The primary endpoint was safety, measured in this study as death at 1 month. Secondary endpoints included length of hospitalisation and readmission. Results: Over a 19-month period, input from 144 carbapenem prescriptions was analysed on the basis of acceptance of our CSP recommendations. Recommendations made were as follows: de-escalation of the carbapenem; stopping the carbapenem; use for a short duration of 5-7 days; prolonged duration in the case of carbapenem-sensitive extended-spectrum beta-lactamase (ESBL) bacteremia; dose adjustment; and surgical intervention for removal of septic foci. De-escalation, shortened duration, and carbapenem cessation comprised 79% of the recommendations. The acceptance rate was 57%. Those who accepted CSP recommendations had no increase in mortality (p = 0.92), a shorter length of hospital stay (LOS), and cost savings. Infection-related deaths were higher in the rejected group. Moreover, three rejected cases (6%) among all non-indicated cases (n = 50) were found to have developed carbapenem-resistant isolates.
Lastly, Pitt’s bacteremia score appeared to be a key element affecting carbapenem prescribing behaviour in this trial. Conclusions: A carbapenem stewardship program in the medical wards not only saves money but, most importantly, is safe and does not harm patients, with the added benefit of reducing the length of hospital stay. However, more time is needed to engage the primary clinical teams, through formal clinical presentations and immediate personal feedback by senior Infectious Disease (ID) personnel, to increase its acceptance.
Keywords: audit and feedback, carbapenem stewardship, medical wards, university hospital
Procedia PDF Downloads 204
450 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times can be studied; this framework is known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion led to a reduction in its temperature and density, as evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics and at which the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature can be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of universe creation. Therefore, a practical model capable of describing how the universe was initiated is needed.
This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level referred to as the "base energy." The governing principles of the base energy are discussed in detail in the second paper in the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The proposed concept in this research series therefore provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy as one of the main building blocks of this universe.
Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution
Procedia PDF Downloads 99
449 Novel Low-cost Bubble CPAP as an Alternative Non-invasive Oxygen Therapy for Newborn Infants with Respiratory Distress Syndrome in a Tertiary Level Neonatal Intensive Care Unit in the Philippines: A Single Blind Randomized Controlled Trial
Authors: Navid P Roodaki, Rochelle Abila, Daisy Evangeline Garcia
Abstract:
Background and Objective: Respiratory Distress Syndrome (RDS) among premature infants is a major cause of neonatal death. The use of Continuous Positive Airway Pressure (CPAP) has become a standard of care for preterm newborns with RDS; hence, cost-effective innovations are needed. This study compared a novel low-cost bubble CPAP (bCPAP) device to ventilator-driven CPAP in the treatment of RDS. Methods: This is a single-blind, randomized controlled trial conducted from May 2022 to October 2022 in a Level III Neonatal Intensive Care Unit in the Philippines. Preterm newborns (<36 weeks) with RDS were randomized to receive the Vayu bCPAP device or ventilator-driven CPAP. Arterial blood gases, oxygen saturation, administration of surfactant, and CPAP failure rates were measured. Results: Seventy preterm newborns were included. No differences were observed between ventilator-driven CPAP and Vayu bCPAP in PaO2 (97.51 mmHg vs 97.37 mmHg), oxygen saturation (97.08% vs 95.60%), or the amount of surfactant administered. There were no observed differences in CPAP failure rates between Vayu bCPAP (mean 3.23 days) and ventilator-driven CPAP (mean 2.98 days). However, a significant difference was noted in the CO2 level (40.32 mmHg vs 50.70 mmHg), which was higher among those on ventilator-driven CPAP (p = 0.004). Conclusion: This study has shown that the novel low-cost bubble CPAP (Vayu bCPAP) can be used as an efficacious alternative non-invasive oxygen therapy among preterm neonates with RDS; although CO2 levels were higher among those on ventilator-driven CPAP, the other outcome parameters measured showed that both devices are comparable. Recommendation: A multi-center or national study is recommended to account for geographic region, which may alter the outcomes of patients connected to different ventilatory support. A cost comparison between devices is also suggested.
A mixed-method study assessing the experiences of health care professionals in assembling and utilizing the device is a second consideration.
Keywords: bubble CPAP, ventilator-driven CPAP, infant, premature, respiratory distress syndrome
Procedia PDF Downloads 83
448 Evaluation of the Irritation Potential of Three Topical Formulations of Minoxidil 2% Using Patch Test
Authors: Sule Pallavi, Shah Priyank, Thavkar Amit, Rohira Poonam, Mehta Suyog
Abstract:
Introduction: Minoxidil has long been used topically to assist hair growth in the management of male androgenetic alopecia. The aim of this study was a comparative assessment of the irritation potential of three commercial formulations of minoxidil 2% topical solution in a human patch test. Methodology: The study was a non-randomized, double-blind, controlled, single-center study of 56 healthy adult Indian subjects. A 24-hour occlusive patch test was conducted with three formulations of minoxidil 2% topical solution. Products tested were aqueous-based minoxidil 2% (AnasureTM 2%, Sun Pharma, India – Brand A), alcohol-based minoxidil 2% (Brand B), and aqueous-based minoxidil 2% (Brand C). Isotonic saline 0.9% and 1% w/w sodium lauryl sulphate were included as negative and positive controls, respectively. Patches were applied on the back and removed after 24 hours. The Draize scale (a 0-4 point scale for erythema/dryness/wrinkles and for oedema) was used to evaluate and clinically score the skin reaction under constant artificial daylight 24 hours after removal of the patches. The patch test was based on the principles outlined by the Bureau of Indian Standards (BIS) (IS 4011:2018; Methods of Test for Safety Evaluation of Cosmetics, 3rd revision). A mean combined score up to 2.0/8.0 indicates that a product is “non-irritant,” a score between 2.0/8.0 and 4.0/8.0 indicates “mildly irritant,” and a score above 4.0/8.0 indicates “irritant”. In case any skin reaction was observed, a follow-up was planned after one week to confirm recovery. Results: The 56 subjects who participated in the study had a mean age of 28.7 years (28 males and 28 females). The combined mean score ± standard deviation was: 0.09 ± 0.29 (Brand A), 0.29 ± 0.53 (Brand B), 0.30 ± 0.46 (Brand C), 3.25 ± 0.77 (positive control) and 0.02 ± 0.13 (negative control).
The mean combined score of Brand A (Sun Pharma) was significantly lower than that of Brand B (p=0.016) and that of Brand C (p=0.004). The mean erythema score ± standard deviation was: 0.09 ± 0.29 (Brand A), 0.27 ± 0.49 (Brand B), 0.30 ± 0.46 (Brand C), 2.5 ± 0.66 (positive control) and 0.02 ± 0.13 (negative control). The mean erythema score of Brand A (Sun Pharma) was significantly lower than that of Brand B (p=0.019) and that of Brand C (p=0.004). Reactions observed 24 hours after patch removal subsided within a week. Conclusion: Based on the human patch test as per BIS IS 4011:2018, all three topical formulations of minoxidil 2% were found to be non-irritant. Brand A of 2% minoxidil (Sun Pharma) was found to be less irritant than Brand B and Brand C based on the combined mean score and the mean erythema score.
Keywords: erythema, irritation, minoxidil, patch test
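As a rough illustration (not part of the study), the BIS IS 4011:2018 classification rule described in the abstract can be sketched in Python; the handling of scores exactly at 2.0 and 4.0 is an assumption, since the abstract does not state it:

```python
def classify_irritation(mean_combined_score: float) -> str:
    """Classify a mean combined Draize score (out of 8.0) using the
    thresholds quoted from BIS IS 4011:2018 in the abstract.
    Boundary handling (<= vs <) is an assumption."""
    if mean_combined_score <= 2.0:
        return "non-irritant"
    if mean_combined_score <= 4.0:
        return "mildly irritant"
    return "irritant"

# Mean combined scores reported in the study
scores = {
    "Brand A": 0.09,
    "Brand B": 0.29,
    "Brand C": 0.30,
    "positive control": 3.25,
    "negative control": 0.02,
}
labels = {product: classify_irritation(s) for product, s in scores.items()}
```

Applied to the reported means, all three brands fall in the non-irritant range, while the positive control lands in the mildly-irritant range.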
Procedia PDF Downloads 82
447 Regional Anesthesia in Carotid Surgery: A Single Center Experience
Authors: Daniel Thompson, Muhammad Peerbux, Sophie Cerutti, Hansraj Riteesh Bookun
Abstract:
Patients with carotid stenosis, which may be asymptomatic or symptomatic in the form of transient ischaemic attack (TIA), amaurosis fugax, or stroke, often require an endarterectomy to reduce stroke risk. Risks of this procedure include stroke, death, myocardial infarction, and cranial nerve damage. Carotid endarterectomy is most commonly performed under general anaesthesia; however, it can also be undertaken with a regional anaesthetic approach, and our major tertiary centre mostly utilises regional anaesthesia for this procedure. We completed a cross-sectional analysis of all cases of carotid endarterectomy performed under regional anaesthesia over a 10-year period between January 2010 and March 2020 at our institution. 350 patients were included in this descriptive analysis, and demographic details, indications for surgery, procedural details, length of surgery, and complications were collected. Data were cross-tabulated and presented in frequency tables to describe these categorical variables. 263 of the 350 patients in the analysis were male, with a mean age of 71 ± 9 years. 172 patients had a history of ischaemic heart disease, 104 had diabetes mellitus, 318 had hypertension, and 17 patients had chronic kidney disease greater than Stage 3. 13.1% (46 patients) were current smokers, and the majority (63%) were ex-smokers. Carotid endarterectomy was most commonly performed conventionally with patch arterioplasty (96%, 337 patients). The most common indication was TIA or stroke in 64% of patients; 18.9% were classified as asymptomatic, and 13.7% had amaurosis fugax. There were few general complications: 9 wound complications/infections, 7 postoperative haematomas requiring return to theatre, 3 myocardial infarctions, 3 arrhythmias, 1 exacerbation of congestive heart failure, 1 chest infection, and 1 urinary tract infection.
Complications specific to carotid endarterectomy included 3 strokes, 1 postoperative TIA, and 1 cerebral bleed. There were no deaths in our cohort. This analysis of a large cohort of patients from a major tertiary centre who underwent carotid endarterectomy under regional anaesthesia indicates the safety of such an approach for these patients. Regional anaesthesia holds the promise of fewer respiratory and cardiac events than general anaesthesia, and in this vulnerable patient group it calls for comparative research between regional and general anaesthesia in carotid surgery.
Keywords: anaesthesia, carotid endarterectomy, stroke, carotid stenosis
Procedia PDF Downloads 121
446 Occupational Safety and Health in the Wake of Drones
Authors: Hoda Rahmani, Gary Weckman
Abstract:
The body of research examining the integration of drones into various industries is expanding rapidly. Despite progress made in addressing the cybersecurity concerns of commercial drones, knowledge deficits remain in determining the potential occupational hazards and risks of drone use to employees’ well-being and health in the workplace. This creates difficulty in identifying key approaches to risk mitigation strategies and reflects the need to raise awareness among employers, safety professionals, and policymakers about workplace drone-related accidents. The purpose of this study is to investigate the prevalence of, and possible risk factors for, drone-related mishaps by comparing the application of drones in the construction and manufacturing industries. The chief reason for considering these specific sectors is to ascertain whether there is any significant difference between indoor and outdoor flights, since most construction sites operate drones outdoors while manufacturing facilities typically fly them indoors. Therefore, the current research seeks to examine the causes and patterns of workplace drone-related mishaps and suggest possible ergonomic interventions through data collection. Potential ergonomic practices to mitigate hazards associated with flying drones could include providing operators with professional training, conducting risk analyses, and promoting the use of personal protective equipment. For the purpose of data analysis, two data mining techniques, the random forest and association rule mining algorithms, will be applied to find meaningful associations and trends in the data, as well as influential features that affect the occurrence of drone-related accidents in the construction and manufacturing sectors. In addition, Spearman’s correlation and chi-square tests will be used to measure possible correlations between different variables.
Indeed, by recognizing risks and hazards, occupational safety stakeholders will be able to pursue data-driven and evidence-based policy change with the aim of reducing drone mishaps, increasing productivity, creating a safer work environment, and extending human performance in safe and fulfilling ways. This research study was supported by the National Institute for Occupational Safety and Health through the Pilot Research Project Training Program of the University of Cincinnati Education and Research Center Grant #T42OH008432.
Keywords: commercial drones, ergonomic interventions, occupational safety, pattern recognition
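As a minimal sketch of the association rule mining step mentioned above, the support and confidence of a candidate rule can be computed over incident records. The records and attribute names below are entirely synthetic illustrations, not data from the study:

```python
# Hypothetical mishap records: each is the set of attributes noted in one
# incident report (synthetic illustration data, not study data).
incidents = [
    {"outdoor", "construction", "wind", "collision"},
    {"outdoor", "construction", "wind", "loss_of_control"},
    {"indoor", "manufacturing", "collision"},
    {"outdoor", "construction", "collision"},
    {"indoor", "manufacturing", "battery_failure"},
    {"outdoor", "construction", "wind", "collision"},
]

def support(itemset, transactions):
    """Fraction of records that contain every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Support of the whole rule divided by support of the antecedent."""
    return (support(set(antecedent) | set(consequent), transactions)
            / support(antecedent, transactions))

# Candidate rule: {outdoor, wind} -> {collision}
rule_support = support({"outdoor", "wind", "collision"}, incidents)
rule_confidence = confidence({"outdoor", "wind"}, {"collision"}, incidents)
```

Algorithms such as Apriori enumerate many such rules and keep those whose support and confidence exceed chosen thresholds; the rules surviving the filter are the "meaningful associations" the abstract refers to.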
Procedia PDF Downloads 209
445 A Cognitive Training Program in Learning Disability: A Program Evaluation and Follow-Up Study
Authors: Krisztina Bohacs, Klaudia Markus
Abstract:
To the authors’ best knowledge, studies on cognitive program evaluation are lacking, and programs that demonstrate large effect sizes with strong retention results are certainly scarce. The purpose of our study was to investigate the effectiveness of a comprehensive cognitive training program, namely BrainRx. This cognitive rehabilitation program targets and remediates seven core cognitive skills and related systems of sub-skills through repeated engagement in game-like mental procedures delivered one-on-one by a clinician, supplemented by digital training. A large sample of children with learning disability were given pretest and post-test cognitive assessments. The experimental group completed a twenty-week cognitive training program in a BrainRx center. A matched control group received another twenty-week intervention with Feuerstein’s Instrumental Enrichment programs. A second matched control group did not receive training. For the pre- and post-tests, we used a general intelligence test to assess IQ and a computer-based test battery for assessing cognition across the lifespan. Multiple regression analyses indicated that the experimental BrainRx treatment group had statistically significantly higher outcomes in attention, working memory, processing speed, logic and reasoning, auditory processing, visual processing, and long-term memory compared to the non-treatment control group, with very large effect sizes. With the exception of logic and reasoning, the BrainRx treatment group realized significantly greater gains in six of the seven cognitive measures compared to the Feuerstein control group. Our one-year retention measures showed that all cognitive training gains were retained above ninety percent, with the greatest retention in visual processing, auditory processing, and logic and reasoning. The BrainRx program may be an effective tool for establishing long-term cognitive changes in students with learning disabilities.
Recommendations are made for treatment centers and special education institutions on the cognitive training of students with special needs. The importance of our study is that a targeted, systematic, progressively loaded, and intensive brain training approach may significantly change learning disabilities.
Keywords: cognitive rehabilitation training, cognitive skills, learning disability, permanent structural cognitive changes
Procedia PDF Downloads 202
444 Shoreline Variation with Construction of a Pair of Training Walls, Ponnani Inlet, Kerala, India
Authors: Jhoga Parth, T. Nasar, K. V. Anand
Abstract:
An idealized definition of shoreline is that it is the zone of coincidence of three spheres: the atmosphere, lithosphere, and hydrosphere. Despite its apparent simplicity, this definition is a challenge to apply in practice. In reality, the shoreline location deviates continually through time because of various dynamic factors such as wave characteristics, currents, coastal orientation, and bathymetry, which make the shoreline volatile. This necessitates monitoring the shoreline on a temporal basis. Even if a shoreline’s behaviour is understood at one particular coastal stretch, the same trend need not hold at another location, even along the same sea front. Shoreline change is hence a local phenomenon and has to be studied intensively, considering as many of the factors involved as possible. Erosion and accretion of sediment are aspects of shoreline behaviour that need to be quantified, compared with preceding variations, and understood before implementing any coastal projects. In recent years, the advent of the Global Positioning System (GPS) and Geographic Information Systems (GIS) has provided emerging tools to quantify intra- and inter-annual rates of sediment accretion and erosion, with savings in time and manpower compared to conventional methods. Remote sensing data, on the other hand, pave the way to acquire historical data sets at higher resolution where field data are unavailable. Short-term and long-term shoreline change can be accurately tracked and monitored using the Digital Shoreline Analysis System (DSAS), a GIS-based software tool developed by the United States Geological Survey (USGS). In the present study, using DSAS, the End Point Rate (EPR) is calculated to analyze intra-annual changes, and the Linear Regression Rate (LRR) is adopted to study inter-annual changes of the shoreline. The shoreline changes are quantified for the period during the construction of the breakwater in the Ponnani river inlet along the Kerala coast, India.
Ponnani is a major fishing and landing center located at 10°47’12.81”N, 75°54’38.62”E in the Malappuram district of Kerala, India. The rates of erosion and accretion are explored using satellite and field data. The full paper presents the rate of shoreline change, and its analysis provides an understanding of the behavior of the inlet at the study area during the construction of the training walls.
Keywords: DSAS, end point rate, field measurements, geo-informatics, shoreline variation
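The two DSAS change statistics used above can be illustrated with a small Python sketch. The survey dates and baseline distances below are invented for illustration only, not measured values from Ponnani:

```python
# Transect data: distance (m) from a fixed onshore baseline to the
# shoreline at each survey date (decimal years). Synthetic values.
years = [2010.0, 2013.0, 2016.0, 2019.0]
positions = [120.0, 114.5, 110.0, 104.0]

def end_point_rate(years, positions):
    """EPR: net movement between the oldest and the youngest shoreline
    divided by the elapsed time (m/yr). With distances measured from an
    onshore baseline, negative values indicate a retreating shoreline."""
    return (positions[-1] - positions[0]) / (years[-1] - years[0])

def linear_regression_rate(years, positions):
    """LRR: slope of the least-squares line fitted to all shoreline
    positions against time (m/yr), using every survey rather than
    only the two end points."""
    n = len(years)
    mean_t = sum(years) / n
    mean_x = sum(positions) / n
    num = sum((t - mean_t) * (x - mean_x) for t, x in zip(years, positions))
    den = sum((t - mean_t) ** 2 for t in years)
    return num / den
```

On these synthetic values both rates come out negative (about -1.78 m/yr for EPR and -1.75 m/yr for LRR), i.e. erosion; DSAS computes the same statistics transect by transect along the coast.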
Procedia PDF Downloads 256
443 Study of Durability of Porous Polymer Materials, Glass-Fiber-Reinforced Polyurethane Foam (R-PUF) in MarkIII Containment Membrane System
Authors: Florent Cerdan, Anne-Gaëlle Denay, Annette Roy, Jean-Claude Grandidier, Éric Laine
Abstract:
The insulation of the MarkIII membrane of Liquid Natural Gas Carriers (LNGC) consists of a load-bearing system made of panels of reinforced polyurethane foam (R-PUF). During shipping, the cargo containment is potentially subject to risk events, which can include water leakage through the ballast tank wall. The aim of the present work is to further develop the understanding of water transfer mechanisms and the effect of water on the properties of R-PUF. This multi-scale approach contributes to improving durability assessment. Macroscale/mesoscale: Firstly, the gravimetric technique was used to determine, at room temperature, the water transfer mechanisms and diffusion kinetics in the R-PUF. The solubility follows a first, fast-growing kinetic connected to water absorption by the micro-porosity, and then evolves slowly and linearly; this second stage is connected to molecular diffusion and dissolution of water in the dense polyurethane membranes. Secondly, to improve the understanding of the transfer mechanism, the evolution of the buoyant force was studied. This allowed identification of the effect of the balance of total and partial pressures of the gas mixture contained in the surface pores. Mesoscale/microscale: Differential scanning calorimetry (DSC) and dynamical mechanical analysis (DMA) were used to investigate the hydration of the hard and soft segments of the polyurethane matrix, with the purpose of identifying the sensitivity of these two phases. It has been shown that the glass transition temperatures shift towards lower temperatures as the solubility of water increases. These observations indicate plasticization of the polymer matrix. Microscale: Fourier transform infrared (FTIR) spectroscopy was used to characterize the functional groups at the edge, the center, and mid-way through the sample according to the duration of submersion.
The more water there is in the material, the more water binds to the urethane groups and, more specifically, to the amide groups. The C=O urethane peak shifts to lower frequencies quickly before 24 hours of submersion and then changes slowly; the intensity of the peak decreases more gradually after that.
Keywords: porous materials, water sorption, glass transition temperature, DSC, DMA, FTIR, transfer mechanisms
Procedia PDF Downloads 529
442 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images
Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod
Abstract:
The major factors in radiotherapy for head and neck (HN) cancers include the patient’s anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparison of measured transit EPID images with predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to some critical organ changes, as the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN intensity-modulated radiation therapy (IMRT) using a novel comparison method: organ-of-interest gamma analysis. This method provides more sensitive detection of changes in specific organs. Five HN IMRT patients who were replanned because of tumour shrinkage and patient weight loss critically affecting parotid size were randomly selected, and their transit dosimetry was evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including left and right parotid, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replan CT was quantified using gamma analysis with 3%, 3 mm criteria, and the gamma pass-rate was then calculated within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (± 17.2%) and 54.7% (± 21.5%), respectively. The gamma pass-rate for the other projected organs was greater than 80%.
Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and the radiation oncologists’ rationale for replanning. This showed that registration of 3D-CBCT to the original CT alone does not reveal the dosimetric impact of anatomical changes. Using transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing treatment plan suitability.
Keywords: re-plan, anatomical change, transit electronic portal imaging device, EPID, head and neck
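For readers unfamiliar with the metric, a simplified 1-D version of the gamma analysis (global normalisation, 3%/3 mm criteria, brute-force search) can be sketched as follows. This is an illustrative toy, not the clinical implementation used in the study:

```python
import math

def gamma_index(ref_dose, eval_dose, spacing_mm,
                dose_tol=0.03, dist_tol_mm=3.0):
    """Per-point gamma values for two dose profiles sampled on the same
    1-D grid. Dose differences are normalised to the global maximum of
    the reference profile; a point passes when gamma <= 1."""
    d_max = max(ref_dose)
    gammas = []
    for i, d_ref in enumerate(ref_dose):
        best = float("inf")
        for j, d_eval in enumerate(eval_dose):
            dist_mm = (i - j) * spacing_mm
            dose_diff = (d_eval - d_ref) / (dose_tol * d_max)
            best = min(best, math.hypot(dist_mm / dist_tol_mm, dose_diff))
        gammas.append(best)
    return gammas

def pass_rate(gammas):
    """Percentage of points with gamma <= 1."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

Restricting the compared pixels to those inside one projected structure before computing the pass-rate gives the organ-of-interest variant described above, rather than a single global figure for the whole field.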
Procedia PDF Downloads 216
441 Current Concepts of Male Aesthetics: Facial Areas to Be Focused and Prioritized with Botulinum Toxin and Hyaluronic Acid Dermal Fillers Combination Therapies, Recommendations on Asian Patients
Authors: Sadhana Deshmukh
Abstract:
Objective: Men represent only a fraction of the medical aesthetic practice but are becoming increasingly cosmetically inclined. The primary objective is to harmonize facial proportion by prioritizing and focusing on the forehead, nose, cheek, and chin complex. Introduction: Despite the tremendous variability of the diverse population of the Indian subcontinent, the male skull is unique in its overall larger size and shape. Men tend to have a large forehead with prominent supraorbital ridges, a wide glabella, square orbits, and a prominent protruding mandible. Men have increased skeletal muscle mass, with less facial subcutaneous fat. Facial aesthetics is evolving rapidly. Commonly published canons of facial proportions usually represent feminine standards and are not applicable to males; strict adherence to these norms is therefore not necessary to obtain satisfying results in male patients. Materials and Methods: Male patients aged 30-60 years were enrolled. Botulinum toxin and hyaluronic acid fillers were used to update consensus recommendations for facial rejuvenation using these two types of products alone and in combination. Results: Specific recommendations are given by facial area, focusing on relaxing musculature, restoring volume, and recontouring using toxin and dermal fillers alone and in combination. For the upper face, though botulinum toxin remains the cornerstone of treatment, temple and forehead fillers are recommended for optimal results. In the midface, fillers are placed more laterally to maintain a masculine look. Botulinum toxin and fillers in combination can improve outcomes in the lower face, where chin augmentation remains the central point. Conclusions: Males are more likely to have shorter doctor visits, are less likely to ask questions, and pay less attention to bodily changes. The physician must patiently gauge male patients’ aging and cosmetic goals.
Clinicians can also benefit from ongoing guidance on products, tailoring treatments, treating multiple facial areas, and using combinations of products. An appreciation that rejuvenation is a 3-dimensional process involving muscle control, volume restoration, and recontouring also helps.
Keywords: male aesthetics, botulinum toxin, hyaluronic acid dermal fillers, Asian patients
Procedia PDF Downloads 157
440 Developing of Ecological Internal Insulation Composite Boards for Innovative Retrofitting of Heritage Buildings
Authors: J. N. Nackler, K. Saleh Pascha, W. Winter
Abstract:
WHISCERS™ (Whole House In-Situ Carbon and Energy Reduction Solution) is an innovative process for Internal Wall Insulation (IWI) for the energy-efficient retrofitting of heritage buildings, which uses laser measuring to determine the dimensions of a room, off-site insulation board cutting, and rapid installation to complete the process. As part of a multinational investigation consortium, the Austrian partner adapted the WHISCERS system to the local conditions of Vienna, where most historical buildings have valuable stucco facades, precluding the application of external insulation. The Austrian project contribution addresses the replacement of the commonly used extruded polystyrene foam (XPS) with renewable materials such as wood and wood products to develop a more sustainable IWI system. As the timber industry is a major industry in Austria, a new, more sustainable IWI solution could also open up new markets. The first step of the investigation was a Life Cycle Assessment (LCA) to compare the performance of wood-fibre board as an insulation material with the normally used XPS boards. As one of the results, the global-warming potential (GWP) of wood-fibre board, expressed in carbon-dioxide equivalents, is 15 times lower, while that of XPS is 72 times higher. The hygrothermal simulation program WUFI was used to evaluate and simulate heat and moisture transport in the multi-layer building components of the developed IWI solution. The simulations show that, under the examined boundary conditions for selected representative brickwork constructions, the proposed IWI is functional and usable without risk regarding vapour diffusion and liquid transport. In a further stage, three different solutions were developed and tested (1 - glued/mortared; 2 - with soft board, connected to the wall, with gypsum board as the top layer; 3 - with soft board and clay board as the top layer).
All three solutions present a flexible insulation layer of wood fibre towards the existing wall, thus compensating for irregularities of the wall surface. From first considerations at the beginning of the development phase, three different systems were developed, optimized according to assembly technology, and tested as small specimens under real-object conditions. The built prototypes are monitored to detect performance and building physics problems and to validate the results of the computer simulation model. This paper illustrates the development and application of the internal wall insulation system.
Keywords: internal insulation, wood fibre, hygrothermal simulations, monitoring, clay, condensate
Procedia PDF Downloads 219
439 Occupational Exposure and Contamination to Antineoplastic Drugs of Healthcare Professionals in Mauritania
Authors: Antoine Villa, Moustapha Mohamedou, Florence Pilliere, Catherine Verdun-Esquer, Mathieu Molimard, Mohamed Sidatt Cheikh El Moustaph, Mireille Canal-Raffin
Abstract:
Context: In Mauritania, the activity of the National Center of Oncology (NCO) has steadily risen, leading to an increase in the handling of antineoplastic drugs (AD) by healthcare professionals. In this context, AD contamination of those professionals is a major concern for occupational physicians. It has been evaluated using biological monitoring of occupational exposure (BMOE). Methods: The intervention took place in 2015 in 2 care units and evaluated nurses preparing and/or infusing AD and agents in charge of hygiene. Participants provided a single urine sample at the end of the week, at the end of their shift. Five molecules were sought using specific high-sensitivity methods (UHPLC-MS/MS) with very low limits of quantification (LOQ) (cyclophosphamide (CP), ifosfamide (IF), methotrexate (MTX): 2.5 ng/L; doxorubicin (Doxo): 10 ng/L; α-fluoro-β-alanine (FBAL, 5-FU metabolite): 20 ng/L). A healthcare worker was considered 'contaminated' when an AD was detected at a urine concentration equal to or greater than the LOQ of the analytical method or at a trace concentration. Results: Twelve persons participated (6 nurses, 6 agents in charge of hygiene). Twelve urine samples were collected and analyzed. The percentage of contamination was 66.6% for all participants (n=8/12), 100% for nurses (6/6), and 33% for agents in charge of hygiene (2/6). In 62.5% (n=5/8) of the contaminated workers, two to four of the ADs were detected in the urine. CP was found in the urine of all contaminated workers. FBAL was found in four, MTX in three, and Doxo in one. Only IF was not detected. Urinary concentrations (all drugs combined) ranged from 3 to 844 ng/L for nurses and from 3 to 44 ng/L for agents in charge of hygiene. The median urinary concentrations were 87 ng/L, 15.1 ng/L, and 4.4 ng/L for FBAL, CP, and MTX, respectively. The Doxo urinary concentration was 218 ng/L. Discussion: There is no current biological exposure index for the interpretation of AD contamination.
The contamination of these healthcare professionals is therefore established by the detection of one or more AD in urine. These urinary concentrations are higher than the LOQ of the analytical methods, which must be as low as possible. Given the danger of AD, the implementation of corrective measures is essential for the staff. Biological monitoring of occupational exposure is the most reliable process for identifying groups at risk, tracing insufficiently controlled exposures, and serving as an alarm signal. These results show the necessity of educating professionals about the risks of handling AD and/or of caring for treated patients. Keywords: antineoplastic drugs, Mauritania, biological monitoring of occupational exposure, contamination
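The LOQ-based 'contaminated' definition above lends itself to a simple calculation. Below is a minimal Python sketch, assuming the abstract's LOQs in ng/L; the dictionary layout and sample values are invented for illustration, not the study's data.

```python
# LOQs from the abstract, in ng/L (assumed layout; values per the methods section).
LOQ_NG_L = {"CP": 2.5, "IF": 2.5, "MTX": 2.5, "Doxo": 10.0, "FBAL": 20.0}

def is_contaminated(sample):
    """A worker is 'contaminated' if any drug is detected at or above its LOQ."""
    return any(conc >= LOQ_NG_L[drug] for drug, conc in sample.items())

def contamination_rate(samples):
    """Percentage of workers with at least one drug at or above its LOQ."""
    return 100.0 * sum(is_contaminated(s) for s in samples) / len(samples)
```

For example, three samples of which two exceed an LOQ yield a 66.7% contamination rate, mirroring how the 8/12 figure in the results was derived.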
Procedia PDF Downloads 316
438 First Systematic Review on Aerosol Bound Water: Exploring the Existing Knowledge Domain Using the CiteSpace Software
Authors: Kamila Widziewicz-Rzonca
Abstract:
The presence of PM-bound water as an integral chemical component of suspended aerosol particles (PM) has become one of the hottest issues in recent years. The UN climate summit on climate change (COP24) indicated that PM of anthropogenic origin (released mostly from coal combustion) is directly responsible for climate change. Chemical changes at the particle-liquid (water) interface determine many phenomena occurring in the atmosphere, such as visibility, cloud formation, or precipitation intensity. Since water-soluble particles such as nitrates, sulfates, or sea salt easily become cloud condensation nuclei, they affect the climate, for example, by increasing cloud droplet concentration. Aerosol water is a master component of atmospheric aerosols and the medium that enables all aqueous-phase reactions occurring in the atmosphere. Thanks to a thorough bibliometric analysis conducted using the CiteSpace software, it was possible to identify past trends and possible future directions in measuring aerosol-bound water. This work does not, in fact, aim at reviewing the existing literature on the topic; it is an in-depth bibliometric analysis exploring existing gaps and new frontiers in the study of PM-bound water. To assess the major scientific areas related to PM-bound water and clearly define which among them are the most active topics, we searched the Web of Science databases from 1996 to 2018. We answer the question: which authors, countries, institutions, and aerosol journals have influenced PM-bound water research to the greatest degree? The obtained results indicate that the paper with the greatest citation burst was Tang I.N. and Munkelwitz H.R., 'Water activities, densities, and refractive indices of aqueous sulfates and sodium nitrate droplets of atmospheric importance', 1994. The largest number of articles in this specific field was published in Atmospheric Chemistry and Physics.
The absolute leader in the quantity of publications among all research institutions is the National Aeronautics and Space Administration (NASA). Meteorology and atmospheric sciences is the category with the most studies in this field. Very few studies on PM-bound water quantitatively measure its presence in ambient particles or its origin. Most articles instead point to PM-bound water as an artifact in organic carbon and ion measurements, without any chemical analysis of its content. This scientometric study presents the current state of the literature regarding particulate-bound water. Keywords: systematic review, aerosol-bound water, PM-bound water, CiteSpace, knowledge domain
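Rankings such as "most publications per institution" or "per journal" reduce to counting field values across bibliographic records. A minimal sketch, assuming Web of Science-style records flattened to dictionaries; the field names and sample entries are invented for illustration:

```python
from collections import Counter

# Hypothetical flattened bibliographic records (field names are assumptions).
records = [
    {"journal": "Atmospheric Chemistry and Physics", "institution": "NASA"},
    {"journal": "Atmospheric Chemistry and Physics", "institution": "NASA"},
    {"journal": "Journal of Geophysical Research", "institution": "NOAA"},
]

def top_sources(records, field):
    """Rank the values of one bibliographic field by publication count,
    most frequent first."""
    return Counter(r[field] for r in records).most_common()
```

CiteSpace performs far richer analyses (co-citation networks, citation bursts), but this tallying step underlies the institution and journal rankings reported above.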
Procedia PDF Downloads 123
437 Enhanced Field Emission from Plasma Treated Graphene and 2D Layered Hybrids
Authors: R. Khare, R. V. Gelamo, M. A. More, D. J. Late, Chandra Sekhar Rout
Abstract:
Graphene emerges as a promising material for various applications, ranging from complementary integrated circuits to optically transparent electrodes for displays and sensors. The excellent conductivity and atomically sharp edges of its unique two-dimensional structure make graphene a propitious field emitter. Graphene analogues of other 2D layered materials have emerged in materials science and nanotechnology due to the enriched physics and novel enhanced properties they present. There are several advantages to using 2D nanomaterials in field-emission-based devices, including a thickness of only a few atomic layers, high aspect ratio (the ratio of lateral size to sheet thickness), excellent electrical properties, extraordinary mechanical strength, and ease of synthesis. Furthermore, the presence of edges can enhance the tunneling probability for the electrons in layered nanomaterials, similar to that seen in nanotubes. Here we report the electron emission properties of multilayer graphene and the effect of plasma (CO₂, O₂, Ar, and N₂) treatment. The plasma-treated multilayer graphene shows enhanced field emission behavior, with a low turn-on field of 0.18 V/μm and a high emission current density of 1.89 mA/cm² at an applied field of 0.35 V/μm. Further, we report the field emission studies of layered WS₂/RGO and SnS₂/RGO composites. The turn-on field required to draw a field emission current density of 1 μA/cm² is found to be 3.5, 2.3, and 2 V/μm for WS₂, RGO, and the WS₂/RGO composite, respectively. The enhanced field emission behavior observed for the WS₂/RGO nanocomposite is attributed to a high field enhancement factor of 2978, which is associated with the surface protrusions of the single-to-few-layer-thick sheets of the nanocomposite. The highest current density of ~800 μA/cm² is drawn at an applied field of 4.1 V/μm from a few layers of the WS₂/RGO nanocomposite.
Furthermore, first-principles density functional calculations suggest that the enhanced field emission may also be due to an overlap of the electronic structures of WS₂ and RGO, where graphene-like states are dumped in the region of the WS₂ fundamental gap. Similarly, the turn-on field required to draw an emission current density of 1 μA/cm² is significantly lower (almost half the value) for the SnS₂/RGO nanocomposite (2.65 V/μm) compared to pristine SnS₂ (4.8 V/μm) nanosheets. The field enhancement factor β (~3200 for SnS₂ and ~3700 for the SnS₂/RGO composite) was calculated from Fowler-Nordheim (FN) plots and indicates emission from the nanometric geometry of the emitter. The plot of field emission current versus time shows overall good emission stability for the SnS₂/RGO emitter. The DFT calculations reveal that the enhanced field emission properties of SnS₂/RGO composites arise from a substantial lowering of the work function of SnS₂ when supported by graphene, in response to p-type doping of the graphene substrate. Graphene and 2D analogue materials thus emerge as potential candidates for future field emission applications. Keywords: graphene, layered material, field emission, plasma, doping
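The field enhancement factor β reported above is conventionally extracted from the slope of a Fowler-Nordheim plot, ln(J/E²) versus 1/E, whose slope equals -Bφ^(3/2)/β, where B ≈ 6.83×10⁹ eV^(-3/2)·V/m and φ is the work function. A minimal sketch of that extraction, assuming J in A/m² and E in V/m; the fitting routine and names are ours, not the authors':

```python
import math

# Fowler-Nordheim constant B in eV^(-3/2) * V/m
B_FN = 6.83e9

def fn_beta(fields_V_per_m, currents_A_per_m2, work_function_eV):
    """Estimate the field enhancement factor beta from a Fowler-Nordheim plot.

    Fits ln(J/E^2) against 1/E by ordinary least squares; the slope equals
    -B * phi^(3/2) / beta, so beta = -B * phi^(3/2) / slope.
    """
    xs = [1.0 / e for e in fields_V_per_m]
    ys = [math.log(j / e**2) for j, e in zip(currents_A_per_m2, fields_V_per_m)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -B_FN * work_function_eV ** 1.5 / slope
```

On synthetic J-E data generated with a known β, the routine recovers that β, which is the standard consistency check before applying it to measured emission curves.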
Procedia PDF Downloads 361
436 Analyzing Growth Trends of the Built Area in the Precincts of Various Types of Tourist Attractions in India: 2D and 3D Analysis
Authors: Yarra Sulina, Nunna Tagore Sai Priya, Ankhi Banerjee
Abstract:
With the rapid growth in tourist arrivals, there has been a huge demand for infrastructure growth in destinations. With the increasing preference of tourists to stay near attractions, there has been a considerable change in land use around tourist sites. However, owing to the regulations and guidelines imposed by the authorities based on the nature of tourism activity and geographical constraints, the pattern of growth of built form differs across tourist sites. Therefore, this study explores the patterns of growth of built-up area over a decade, from 2009 to 2019, through two-dimensional and three-dimensional analysis. Land use maps are created through supervised classification of satellite images obtained from LANDSAT 4-5 and LANDSAT 8 for 2009 and 2019, respectively. The overall expansion of the built-up area in each region is analyzed in relation to the distance from the city's geographical center, and the tourism-related growth regions influenced by the proximity of tourist attractions are identified. The primary tourist sites of various destinations, with different geographical characteristics and tourism activities, that have undergone a significant increase in built-up area and are occupied by tourism-related infrastructure are selected for further study. Proximity analysis of the tourism-related growth sites is carried out to delineate the influence zone of the tourist site in a destination. Further, a temporal analysis of the volumetric growth of built form is carried out to understand the morphology of the tourist precincts over time. A Digital Surface Model (DSM) and a Digital Terrain Model (DTM) are used to extract building footprints along with building heights. Factors such as building height and building density are evaluated to understand the patterns of three-dimensional growth of the built area in the region.
The study also explores the underlying reasons for such changes in built form around various tourist sites and predicts the impact of such growth patterns on the region. Building height and building density around a tourist site strongly affect the appeal of the destination. Surroundings that are incompatible with the theme of the tourist site detract from the attractiveness of the destination and lead to negative feedback from tourists, which is not a sustainable form of development. Therefore, proper spatial measures, in terms of both the area and the volume of the built environment, are necessary for a healthy and sustainable environment around the tourist sites in a destination. Keywords: sustainable tourism, growth patterns, land-use changes, 3-dimensional analysis of built-up area
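The DSM/DTM step described above amounts to differencing the two rasters: the normalized DSM (nDSM = DSM − DTM) gives above-ground height per cell, from which a building-density share follows. A minimal sketch under assumptions: the rasters are small aligned grids of heights in metres, and cells above a hypothetical 2.5 m threshold are treated as building footprint (not the study's actual parameters):

```python
def ndsm(dsm, dtm):
    """Normalized surface model: above-ground height = DSM - DTM, per cell."""
    return [[s - t for s, t in zip(srow, trow)] for srow, trow in zip(dsm, dtm)]

def building_density(heights, min_height_m=2.5):
    """Share of cells whose above-ground height exceeds the footprint threshold."""
    cells = [h for row in heights for h in row]
    return sum(1 for h in cells if h > min_height_m) / len(cells)
```

In practice this is done on georeferenced rasters with a GIS or array library, but the per-cell arithmetic is exactly this subtraction and thresholding.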
Procedia PDF Downloads 78
435 The Implementation of a Nurse-Driven Palliative Care Trigger Tool
Authors: Sawyer Spurry
Abstract:
Problem: Palliative care providers at an academic medical center in Maryland stated that medical intensive care unit (MICU) patients are often referred late in their hospital stay. The MICU has performed well below the hospital quality performance metric that 80% of patients who expire with expected outcomes should have received a palliative care consult within 48 hours of admission. Purpose: The purpose of this quality improvement (QI) project is to increase palliative care utilization in the MICU through the implementation of a Nurse-Driven Palliative Trigger Tool to prompt the need for specialty palliative care consults. Methods: MICU nursing staff and providers received education concerning the implications of underused palliative care services and the literature supporting the use of nurse-driven palliative care tools as a means of increasing utilization of palliative care. MICU-population-specific criteria of palliative triggers (the Palliative Care Trigger Tool) were formulated by the QI implementation team, the palliative care team, and the patient care services department. Nursing staff were asked to assess patients daily for the presence of palliative triggers using the Palliative Care Trigger Tool and to present findings during bedside rounds. MICU providers were asked to consult palliative medicine, given the presence of palliative triggers, following interdisciplinary rounds. Rates of palliative consults, given the presence of triggers, were collected via an electronic medical record e-data pull, de-identified, and recorded in the data collection tool. Preliminary Results: Over 140 MICU registered nurses were educated on the palliative trigger initiative, along with 8 nurse practitioners, 4 intensivists, 2 pulmonary critical care fellows, and 2 palliative medicine physicians. Over 200 patients were admitted to the MICU and screened for palliative triggers during the 15-week implementation period.
Primary outcomes showed an increase in the palliative care consult rate for patients presenting with triggers, a decreased mean time from admission to palliative consult, and increased recognition of unmet palliative care needs by MICU nurses and providers. Conclusions: The anticipated findings of this QI project suggest a positive correlation between utilizing palliative care trigger criteria and decreased time to palliative care consult. Effective palliative care results in decreased length of stay, healthcare costs, and moral distress, as well as improved symptom management and quality of life (QOL). Keywords: palliative care, nursing, quality improvement, trigger tool
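Screening logic of the kind described, in which a nurse checks each patient against a fixed trigger list and any match prompts a consult, can be sketched as follows. The trigger criteria below are invented placeholders; the abstract does not list the tool's actual MICU-specific criteria.

```python
# Hypothetical trigger criteria for illustration only.
TRIGGERS = {
    "metastatic_cancer",
    "icu_readmission_within_30_days",
    "mechanical_ventilation_over_7_days",
    "family_requests_goals_of_care",
}

def needs_palliative_consult(patient_findings):
    """Return the triggers a patient meets; any match prompts the MICU
    provider to consult palliative medicine after interdisciplinary rounds."""
    return set(patient_findings) & TRIGGERS
```

In the actual project this screening was done by nurses at the bedside, not in software, but the decision rule is the same set membership check.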
Procedia PDF Downloads 194
434 The Impact of Entrepreneurship Education on the Entrepreneurial Tendencies of Students: A Quasi-Experimental Design
Authors: Lamia Emam
Abstract:
The attractiveness of entrepreneurship education stems from its perceived value as a venue through which students can develop an entrepreneurial mindset, skill set, and practice, which may not necessarily lead to them starting a new business but could, more importantly, manifest as a life skill applicable to all types of organizations and career endeavors. This, in turn, raises important questions about what happens in our classrooms: our role as educators, the role of students, the center of learning, and the instructional approach, all of which eventually contribute to achieving the desired EE outcomes. With application to an undergraduate entrepreneurship course, Entrepreneurship as Practice, the current paper aims to explore the effect of entrepreneurship education on the development of students' general entrepreneurial tendencies. Towards that purpose, the researcher uses a pre-test and post-test quasi-experimental research design in which the Durham University General Enterprising Tendency Test (GET2) is administered to the same group of students before and after course delivery. As designed and delivered, the Entrepreneurship as Practice module is a highly applied and experiential course in which students are required to develop an idea for a start-up while practicing, both individually and in groups, the entrepreneurship-related knowledge, mindset, and skills taught in class. The course is delivered using a combination of short lectures, readings, group discussions, case analysis, guest speakers, and, more importantly, active engagement in a series of activities inspired by diverse methods for developing successful and innovative business ideas, including design thinking, lean startup, and business feasibility analysis.
The instructional approach of the course particularly aims at developing the students' critical thinking, reflective, analytical, and creativity-based problem-solving skills needed to launch one's own start-up. The analysis and interpretation of the experiment's outcomes incorporate the views of both the educator and the students. As presented, the study responds to the rising call for the application of experimental designs in entrepreneurship in general and EE in particular. While doing so, the paper presents an educator's perspective of EE to complement the dominant stream of research, which is constrained to the students' point of view. Finally, the study sheds light on EE in the MENA region, where the study is applied. Keywords: entrepreneurship education, andragogy and heutagogy, scholarship of teaching and learning, experiment
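A pre-test/post-test design on the same group of students is typically analyzed with a paired-samples t statistic on the GET2 scores. A minimal stdlib sketch; the score vectors used in testing it are invented, not the study's data:

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic for pre-test vs post-test scores on the
    same participants: mean difference over its standard error."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)
```

The resulting t is compared against the t distribution with n−1 degrees of freedom; in practice a statistics package (e.g., a paired t-test routine) would also report the p-value and effect size.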
Procedia PDF Downloads 127
433 God, The Master Programmer: The Relationship Between God and Computers
Authors: Mohammad Sabbagh
Abstract:
Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands with words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything around us in six days, just as we can program a virtual world on the computer. GOD did mention in the Quran that one day, where GOD's throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count and gave everything its function, attributes, class, methods, and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual, or by thought, outputted by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out, with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator.
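For readers unfamiliar with the object-oriented vocabulary the essay borrows (classes, attributes, methods), here is a toy Python class as a point of reference; the class name and the roughly 8-solar-mass supernova threshold are illustrative only:

```python
# Illustrative toy class: an attribute holds state, a method defines behavior.
class Star:
    def __init__(self, mass_solar):
        self.mass_solar = mass_solar  # attribute: mass in solar masses

    def can_go_supernova(self):
        """Method: massive stars (roughly >= 8 solar masses) can end as supernovae."""
        return self.mass_solar >= 8.0
```

Objects built from such a class then interact through their methods, which is the "function, attributes, class, methods and interactions" parallel the essay draws.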
If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, then in 2022 you are going to require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that. In other words, the ability to perform fifty quadrillion (5×10¹⁶) floating-point operations per second, a number a human cannot even fathom. To put it in more perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom, and of what is smaller than that, in the whole actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from one event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us to keep up with what the computer is doing, and for us to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said that 'WE used to copy what you used to do'. Essentially, as the world is running, think of it as an interactive movie being played out in front of you, in a fully immersive, non-virtual-reality setting. GOD is recording it, from every angle, to every thought, to every action. This brings the idea of how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video when we will be getting and reading our book. Keywords: programming, the Quran, object orientation, computers and humans, GOD
Procedia PDF Downloads 107