Search results for: method for CARD
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19112

902 Inpatient Glycemic Management Strategies and Their Association with Clinical Outcomes in Hospitalized SARS-CoV-2 Patients

Authors: Thao Nguyen, Maximiliano Hyon, Sany Rajagukguk, Anna Melkonyan

Abstract:

Introduction: Type 2 diabetes is a well-established risk factor for severe SARS-CoV-2 infection. Uncontrolled hyperglycemia in patients with established or newly diagnosed diabetes is associated with poor outcomes, including increased mortality and hospital length of stay. Objectives: Our study compares three glycemic management strategies and their association with clinical outcomes in patients hospitalized for moderate to severe SARS-CoV-2 infection. Identifying optimal glycemic management strategies will improve the quality of patient care and patient outcomes. Method: This is a retrospective observational study of patients hospitalized at Adventist Health White Memorial with severe SARS-CoV-2 infection from 11/1/2020 to 02/28/2021. The inclusion criteria were: positive SARS-CoV-2 PCR test, age >18 years, diabetes or random glucose >200 mg/dL on admission, oxygen requirement >4 L/min, and treatment with glucocorticoids. The exclusion criteria were: ICU admission within 24 hours, discharge within five days, death within five days, and pregnancy. The patients were divided into three glycemic management groups: Group 1, managed solely by the primary team; Group 2, by pharmacy; and Group 3, by an endocrinologist. Primary outcomes were average glucose on Day 5, change in glucose between Days 3 and 5, and average insulin dose on Day 5 among groups. Secondary outcomes were ICU upgrade, inpatient mortality, and hospital length of stay. Statistics were performed with IBM® SPSS, version 28, 2022. Results: Most studied patients were Hispanic, older than 60, and obese (BMI >30). The study period coincided with the first COVID-19 surge with the Delta variant in an unvaccinated population. Mortality was markedly high (>40%), with longer LOS (>13 days) and a high ICU transfer rate (18%). Most patients had markedly elevated inflammatory markers (CRP, ferritin, and D-dimer). These, in combination with glucocorticoids, resulted in severe hyperglycemia that was difficult to control. Average glucose on Day 5 was not significantly different between the primary, pharmacy, and endocrine groups (220.5 ± 63.4 vs. 240.9 ± 71.1 vs. 208.6 ± 61.7; P = 0.105). Change in glucose from Days 3 to 5 was not significantly different between groups but trended towards favoring the endocrinologist group (-26.6 ± 73.6 vs. 3.8 ± 69.5 vs. -32.2 ± 84.1; P = 0.052). Total daily dose (TDD) of insulin was not significantly different between groups but trended towards a higher TDD in the endocrinologist group (34.6 ± 26.1 vs. 35.2 ± 26.4 vs. 50.5 ± 50.9; P = 0.054). The endocrinologist group used significantly more preprandial insulin than the other groups (91.7% vs. 39.1% vs. 65.9%; P < 0.001). The pharmacy group used more basal insulin than the other groups (95.1% vs. 79.5% vs. 79.2%; P = 0.047). There were no differences among groups in the clinical outcomes: LOS, ICU upgrade, or mortality. Multivariate regression analysis controlling for age, sex, BMI, HbA1c level, renal function, liver function, CRP, D-dimer, and ferritin showed no difference in outcomes among groups. Conclusion: Given the high-risk factors in our population, it is unsurprising that, despite the efforts of the glycemic management teams, no differences in mortality or length of stay were observed.
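
A minimal sketch of the kind of three-group comparison reported above (the glucose samples are hypothetical, generated from the reported means and SDs; the study itself used IBM SPSS, not Python):

```python
# Hypothetical Day-5 glucose values (mg/dL) for the three management groups;
# illustrative only, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
primary = rng.normal(220.5, 63.4, 40)
pharmacy = rng.normal(240.9, 71.1, 40)
endocrine = rng.normal(208.6, 61.7, 40)

# One-way ANOVA across the three groups, analogous to the reported P = 0.105
f_stat, p_value = stats.f_oneway(primary, pharmacy, endocrine)
print(f"F = {f_stat:.2f}, P = {p_value:.3f}")
```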

Keywords: glycemic management, strategies, hospitalized, SARS-CoV-2, outcomes

Procedia PDF Downloads 449
901 Fructose-Aided Cross-Linked Enzyme Aggregates of Laccase: An Insight on Its Chemical and Physical Properties

Authors: Bipasa Dey, Varsha Panwar, Tanmay Dutta

Abstract:

Laccase, a multicopper oxidase (EC 1.10.3.2), has been at the forefront as a superior industrial biocatalyst. Laccases are versatile in catalysing sustainable and ecological reactions such as polymerisation, xenobiotic degradation and bioremediation of phenolic and non-phenolic compounds. Regardless of the wide biotechnological applications, critical limiting factors, viz. reusability, retrieval, and storage stability, still prevail and can impede their applicability. Crosslinked enzyme aggregates (CLEAs) have emerged as a promising technique that addresses these essential facets, albeit at the expense of some enzymatic activity. The carrier-free crosslinking method prevails over carrier-bound immobilisation in conferring high productivity and low production cost, owing to the absence of an additional carrier, and it circumvents any non-catalytic ballast which could dilute the volumetric activity. The ε-amino group of lysyl residues is generally regarded as the best choice for forming a Schiff base with glutaraldehyde. Despite being most preferable, excess glutaraldehyde can bring about disproportionate and undesirable crosslinking within the catalytic site and hence cause undesirable catalytic losses. Moreover, the surface distribution of lysine residues in Trametes versicolor laccase is significantly sparse. Thus, to mitigate the adverse effects of glutaraldehyde while scaling down the degradation or catalytic loss of the enzyme, crosslinking with inert substances like gelatine, collagen, bovine serum albumin (BSA) or excess lysine is practiced. Analogous to these molecules, sugars are well known as protein stabilisers. They help retain the structural integrity, specifically the secondary structure, of the protein during aggregation by changing the solvent properties, and they are understood to avert protein denaturation or enzyme deactivation during precipitation. We prepared crosslinked enzyme aggregates (CLEAs) of laccase from T. versicolor with the aid of sugars. The sugar CLEAs were compared with the classic BSA and glutaraldehyde laccase CLEAs with respect to physico-chemical properties. The activity recovery for the fructose CLEAs was found to be ~20% higher than for the non-sugar CLEA. Moreover, the kcat/Km values of the sugar CLEAs were two- and three-fold higher than those of the BSA-CLEA and GA-CLEA, respectively. The half-life (t1/2) of the sugar-CLEA was higher than that of the GA-CLEAs and the free enzyme, demonstrating greater thermal stability. Besides, it showed extraordinarily high pH stability, analogous to the BSA-CLEA. The promising attributes of increased storage stability and recyclability (>80%) give the sugar-CLEAs an edge over conventional CLEAs of the corresponding free enzyme. Thus, the sugar-CLEA furnishes the rudimentary properties required of a biocatalyst and holds many prospects.
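
For context, a minimal sketch of how kcat/Km might be estimated from initial-rate data by fitting the Michaelis-Menten equation (all rate values and the enzyme concentration are hypothetical; the abstract does not specify the kinetics protocol):

```python
# Fit v = Vmax*[S]/(Km + [S]) to hypothetical initial-rate data,
# then derive the catalytic efficiency kcat/Km.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0])   # substrate (mM), hypothetical
v = np.array([0.8, 3.2, 5.0, 9.1, 10.2, 11.5])   # rate (µM/min), hypothetical

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(10, 0.1))
enzyme_conc = 0.05                   # µM, hypothetical
kcat = vmax / enzyme_conc            # min^-1
print(f"kcat/Km = {kcat / km:.1f} min^-1 mM^-1")
```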

Keywords: cross-linked enzyme aggregates, laccase immobilization, enzyme reusability, enzyme stability

Procedia PDF Downloads 102
900 Dialysis Access Surgery for Patients in Renal Failure: A 10-Year Institutional Experience

Authors: Daniel Thompson, Muhammad Peerbux, Sophie Cerutti, Hansraj Bookun

Abstract:

Introduction: Dialysis access is a key component of the care of patients with end-stage renal failure. In our institution, a combined service of vascular surgeons and nephrologists is responsible for the creation and maintenance of arteriovenous fistulas (AVFs), Tenckhoff catheters and Hickman/Permcath lines. This poster investigates the last 10 years of dialysis access surgery conducted at St. Vincent’s Hospital Melbourne. Method: A cross-sectional retrospective analysis was conducted of patients of St. Vincent’s Hospital Melbourne (Victoria, Australia) utilising data from the Australasian Vascular Audit (Australian and New Zealand Society for Vascular Surgery). Descriptive demographic analysis was carried out, as well as analysis of operation type, length of hospital stay, postoperative deaths and need for reoperation. Results: 2085 patients with renal failure were operated on between 2011 and 2020. 1315 were male (63.1%) and 770 were female (36.9%). The mean age was 58 (SD 13.8). 92% of patients scored three or greater on the American Society of Anesthesiologists classification system. Almost half had a history of ischaemic heart disease (48.4%), more than half had a history of diabetes (64%), and a majority had hypertension (88.4%). 1784 patients had a creatinine over 150 µmol/L (85.6%); the rest were on dialysis (14.4%). The most common access procedure was AVF creation, with 474 autologous AVFs and 64 prosthetic AVFs. There were 263 Tenckhoff insertions. We performed 160 cadaveric renal transplants. The most common location for AVF formation was brachiocephalic (43.88%), followed by radiocephalic (36.7%) and brachiobasilic (16.67%). Fistulas that required re-intervention were most commonly treated with angioplasty (n=163), followed by thrombectomy (n=136). There were 107 local fistula repairs. Average length of stay was 7.6 days (SD 12). There were 106 unplanned returns to theatre, most commonly for fistula creation, Tenckhoff insertion or Permcath removal (71.7%). There were 8 deaths in the immediate postoperative period. Discussion: Access to dialysis is vital for patients with end-stage kidney disease and requires a multidisciplinary approach from nephrologists, vascular surgeons, and allied health practitioners. Our service provides a variety of dialysis access methods, predominantly fistula creation and Tenckhoff insertion. Patients with renal failure are heavily comorbid, and prolonged hospital admission following surgery is a source of significant healthcare expenditure. AVFs require careful monitoring and maintenance for ongoing utility, and our data reflect the multitude of operations required to maintain usable access. The requirement for dialysis is growing worldwide, and our data demonstrate a local experience in access, with preferred methods, common complications and the associated surgical interventions.

Keywords: dialysis, fistula, nephrology, vascular surgery

Procedia PDF Downloads 113
899 Magnetron Sputtered Thin-Film Catalysts with Low Noble Metal Content for Proton Exchange Membrane Water Electrolysis

Authors: Peter Kus, Anna Ostroverkh, Yurii Yakovlev, Yevheniia Lobko, Roman Fiala, Ivan Khalakhan, Vladimir Matolin

Abstract:

The hydrogen economy is a concept of a low-emission society which harvests most of its energy from renewable sources (e.g., wind and solar) and, in the case of overproduction, electrochemically turns the excess into hydrogen, which serves as an energy carrier. Proton exchange membrane water electrolyzers (PEMWE) are the backbone of this concept. By fast-response electricity-to-hydrogen conversion, PEMWEs will not only stabilize the electrical grid but also provide high-purity hydrogen for a variety of fuel-cell-powered devices, ranging from consumer electronics to vehicles. Wider commercialization of PEMWE technology is, however, hindered by the high prices of the noble metals necessary for catalyzing the redox reactions within the cell: namely, platinum for the hydrogen evolution reaction (HER) on the cathode, and iridium for the oxygen evolution reaction (OER) on the anode. A possible way to lower the loading of Pt and Ir is to use conductive high-surface-area nanostructures as catalyst supports in conjunction with thin-film catalyst deposition. The presented study discusses an unconventional technique of membrane electrode assembly (MEA) preparation. Noble metal catalysts (Pt and Ir) were magnetron sputtered at very low loadings onto the surface of porous sublayers (located on the gas diffusion layer or directly on the membrane), forming a localized three-phase boundary. An ultrasonically sprayed, corrosion-resistant TiC-based sublayer was used as the support material on the anode, whereas magnetron-sputtered nanostructured etched nitrogenated carbon (CNx) served the same role on the cathode. By using this configuration, we were able to significantly decrease the amount of noble metals (to a thickness of just tens of nanometers) while keeping the performance comparable to that of average state-of-the-art catalysts. Complex characterization of the prepared supported catalysts includes in-cell performance and durability tests, electrochemical impedance spectroscopy (EIS), as well as scanning electron microscopy (SEM) imaging and X-ray photoelectron spectroscopy (XPS) analysis. Our research proves that magnetron sputtering is a suitable method for thin-film deposition of electrocatalysts. The tested set-up of thin-film supported anode and cathode catalysts with a combined loading of just 120 µg·cm⁻² yields remarkable values of specific current. The described approach of thin-film low-loading catalyst deposition may be relevant when noble metal reduction is the topmost priority.
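
A minimal sketch of the loading arithmetic behind such thin films (the film thicknesses and the fully-dense-film assumption are illustrative, not values reported above):

```python
# Areal loading of a sputtered film: loading = density * thickness.
# Illustrative thicknesses; assumes fully dense films, which sputtered
# nanostructured layers generally are not.
RHO_PT = 21.45e6   # µg/cm^3, bulk platinum
RHO_IR = 22.56e6   # µg/cm^3, bulk iridium

t_pt_cm = 20e-7    # 20 nm in cm (hypothetical cathode film)
t_ir_cm = 30e-7    # 30 nm in cm (hypothetical anode film)

loading_pt = RHO_PT * t_pt_cm   # µg/cm^2
loading_ir = RHO_IR * t_ir_cm   # µg/cm^2
print(f"Pt: {loading_pt:.0f} µg/cm², Ir: {loading_ir:.0f} µg/cm², "
      f"total: {loading_pt + loading_ir:.0f} µg/cm²")  # ~110 µg/cm², same order as 120
```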

Keywords: hydrogen economy, low-loading catalyst, magnetron sputtering, proton exchange membrane water electrolyzer

Procedia PDF Downloads 163
898 Redefining Intellectual Humility in Indian Context: An Experimental Investigation

Authors: Jayashree Gajjam

Abstract:

Intellectual humility (IH) is defined as a virtuous mean between intellectual arrogance and intellectual self-diffidence by the ‘Doxastic Account of IH’, studied, researched and developed by Western scholars no earlier than 2015 at the University of Edinburgh. Ancient Indian philosophical texts, the Upaniṣads, written in the Sanskrit language during the later Vedic period (circa 600-300 BCE), have long addressed the virtue of being humble in several stories and narratives. The current research paper questions and revisits these character traits in an Indian context following an experimental method. Based on the subjective reports of more than 400 Indian teenagers and adults, it argues that while a few traits of IH (such as trustworthiness, respectfulness, intelligence, politeness, etc.) are panhuman and pancultural, a few are not. Some attributes of IH (such as proper pride, open-mindedness, awareness of one’s own strength, etc.) may be taken for arrogance by the Indian population, while some qualities of intellectual diffidence, such as agreeableness and surrendering, can be regarded as characteristics of IH. The paper then gives a reasoning for this discrepancy that can be traced back to the ancient Indian (Upaniṣadic) teachings that are still prevalent in many Indian families and still anchor their views on IH. The name Upaniṣad itself means ‘sitting down near’ (to the Guru, to gain the Supreme knowledge of the Self and the Universe and to set ignorance to rest), which corresponds to three traits among the BIG SEVEN characterized as IH by the Western scholars, viz. ‘being a good listener’, ‘curious to learn’, and ‘respect for others’ opinions’. The story of Satyakama Jabala (Chandogya Upaniṣad 4.4-8), who seeks the truth for several years even from the bull, the fire, the swan and the waterfowl, suggests nothing but the ‘need for cognition’ or ‘desire for knowledge’. Nachiketa (Katha Upaniṣad), a boy with a pure mind and heart, follows his father’s words and offers himself to Yama (the God of Death), where, after waiting for Yama for three days and nights, he seeks the knowledge of the mysteries of life and death. Although the main aim of these Upaniṣadic stories is to give the knowledge of life and death, the Supreme reality, which can be identified with traits such as ‘curious to learn’, one cannot deny that they have a lot more to offer than mere information about true knowledge, e.g., ‘politeness’, ‘good listener’, ‘awareness of one’s own limitations’, etc. The possible future scope of this research includes (1) finding other socio-cultural factors that affect ideas on IH, such as age, gender, caste, type of education, highest qualification, place of residence and source of income, which may be predominant in current Indian society despite the great teachings of the Upaniṣads, and (2) devising different measures to impart IH to Indian children, teenagers, and younger adults for a harmonious future. The current experimental research can be considered the first step towards these goals.

Keywords: ethics and virtue epistemology, Indian philosophy, intellectual humility, upaniṣadic texts in ancient India

Procedia PDF Downloads 92
897 Problem Solving in Mathematics Education: A Case Study of Nigerian Secondary School Mathematics Teachers’ Conceptions in Relation to Classroom Instruction

Authors: Carol Okigbo

Abstract:

Mathematical problem solving has long been accorded an important place in mathematics curricula at every education level in both advanced and emerging economies. Its classroom approaches have varied, such as teaching for problem-solving, teaching about problem-solving, and teaching mathematics through problem-solving. It requires engaging in tasks for which the solution methods are not immediately evident, making sense of problems and persevering in solving them by exhibiting appropriate processes, strategies, attitudes, and adequate exposure. Teachers play important roles in helping students acquire competency in problem-solving; thus, they are expected to be good problem-solvers and to have proper conceptions of problem-solving. Studies show that teachers’ conceptions influence their decisions about what to teach and how to teach. Therefore, how teachers view their roles in teaching problem-solving will depend on their pedagogical conceptions of problem-solving. If teaching problem-solving is a major component of secondary school mathematics instruction, as recommended by researchers and mathematics educators, then it is necessary to establish teachers’ conceptions, what they do, and how they approach problem-solving. This study is designed to determine secondary school teachers’ conceptions regarding mathematical problem solving, its current situation, how teachers’ conceptions relate to their demographics, as well as the interaction patterns in the mathematics classroom. There have been many studies of mathematics problem solving, some of which addressed teachers’ conceptions using single-method approaches, thereby presenting only limited views of this important phenomenon. To address the problem more holistically, this study adopted an integrated mixed-methods approach which involved a quantitative survey, qualitative analysis of open-ended responses, and ethnographic observations of teachers in class. Data for the analysis came from a random sample of 327 secondary school mathematics teachers in two Nigerian states, Anambra State and Enugu State, who completed a 45-item questionnaire. Ten of the items elicited demographic information, 11 items were open-ended questions, and 25 items were Likert-type questions. Of the 327 teachers who responded to the questionnaires, 37 were randomly selected and observed in their classes. Data analysis using ANOVA, t-tests, chi-square tests, and open coding showed that the teachers had different conceptions about problem-solving, which fall into three main themes: practice on exercises and word application problems, a process of solving mathematical problems, and a way of teaching mathematics. Teachers reported that no period is set aside for problem-solving; typically, teachers solve problems on the board, teach problem-solving strategies, and allow students time to struggle with problems on their own. The results show a significant difference between male and female teachers’ conceptions of problem solving, a significant relationship between teachers’ conceptions and academic qualifications, and that teachers who have spent ten years or more teaching mathematics differed significantly from the group with seven to nine years of experience in terms of their conceptions of problem-solving.
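
As an illustration of one of the tests mentioned above, a minimal chi-square sketch on a hypothetical conceptions-by-qualification contingency table (not the study’s data):

```python
# Chi-square test of independence between conception theme and qualification,
# on a hypothetical 3x3 contingency table of teacher counts.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: conception themes (exercises, process, way of teaching)
# Columns: qualification bands (hypothetical counts)
table = np.array([
    [40, 25, 15],
    [55, 48, 30],
    [35, 47, 32],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```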

Keywords: conceptions, education, mathematics, problem solving, teacher

Procedia PDF Downloads 76
896 Hypersensitivity Reactions Following Intravenous Administration of Contrast Medium

Authors: Joanna Cydejko, Paulina Mika

Abstract:

Hypersensitivity reactions are side effects of medications that resemble an allergic reaction. Anaphylaxis is a generalized, severe allergic reaction of the body caused by exposure to a specific agent at a dose tolerated by a healthy body. The most common causes of anaphylaxis are food (about 70%), Hymenoptera venoms (22%), and medications (7%); in about 1% of cases, the cause of the anaphylactic reaction cannot be identified despite detailed diagnostics. Contrast media are anaphylactic agents of unknown mechanism; hypersensitivity reactions to them can occur by both immunological and non-immunological mechanisms. Symptoms of anaphylaxis occur within a few seconds to several minutes after exposure to the allergen. Contrast agents are chemical compounds that make it possible to visualize or improve the visibility of anatomical structures. In computed tomography, the preparations currently used are derivatives of the triiodinated benzene ring. Their pharmacokinetic and pharmacodynamic properties, i.e., their osmolality, viscosity, low chemotoxicity and high hydrophilicity, improve the tolerance of the substance by the patient’s body. In MRI diagnostics, macrocyclic gadolinium contrast agents are administered during examinations. The aim of this study is to present the number and severity of anaphylactic reactions that occurred in patients of all age groups undergoing diagnostic imaging with intravenous administration of contrast agents: non-ionic iodinated agents in CT and macrocyclic gadolinium agents in MRI. A retrospective assessment of the number of adverse reactions after contrast administration was carried out on the basis of data from the Department of Radiology of the University Clinical Center in Gdańsk, and it was assessed whether the different physicochemical properties of the agents had an impact on the incidence of acute complications. Adverse reactions were divided according to the severity of the patient’s condition and the diagnostic method used. Complications following the administration of a contrast medium in the form of acute anaphylaxis accounted for less than 0.5% of all diagnostic procedures performed with a contrast agent. In the analysis period from January to December 2022, 34,053 CT scans and 15,279 MRI examinations with the use of contrast medium were performed. The total number of acute complications was 21, of which 17 were complications of iodine-based contrast agents and 5 of gadolinium preparations. The introduction of state-of-the-art contrast formulations was an important step toward improving the safety and tolerability of contrast agents used in imaging. Currently, contrast agents administered to patients are considered to be among the best-tolerated preparations used in medicine. However, like any drug, they can be responsible for adverse reactions resulting from their toxic effects. The increase in the number of imaging tests performed with contrast agents has a direct impact on the number of adverse events associated with their administration. Despite the low risk of anaphylaxis, this risk should not be marginalized: the growing volume of radiological procedures using contrast agents demands knowledge of the rules of conduct in the event of symptoms of hypersensitivity to these preparations.
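
A minimal sketch of the rate arithmetic behind the figures above (the per-modality counts and denominators are taken directly from the abstract, which reports 21 acute complications in total):

```python
# Acute complication rates after contrast administration, Jan-Dec 2022,
# using the counts reported in the abstract.
ct_exams, mri_exams = 34_053, 15_279
iodinated_events, gadolinium_events = 17, 5
total_events = 21  # total as reported in the abstract

print(f"CT rate:  {100 * iodinated_events / ct_exams:.3f}%")    # ~0.050%
print(f"MRI rate: {100 * gadolinium_events / mri_exams:.3f}%")  # ~0.033%
print(f"Overall:  {100 * total_events / (ct_exams + mri_exams):.3f}%")  # ~0.043%, under 0.5%
```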

Keywords: anaphylaxis, contrast medium, diagnostics, medical imaging

Procedia PDF Downloads 62
895 Electrohydrodynamic Patterning for Surface Enhanced Raman Scattering for Point-of-Care Diagnostics

Authors: J. J. Rickard, A. Belli, P. Goldberg Oppenheimer

Abstract:

Medical diagnostics, environmental monitoring, homeland security and forensics increasingly demand specific and field-deployable analytical technologies for quick point-of-care diagnostics. Although technological advancements have made optical methods well-suited for miniaturization, a highly sensitive detection technique for minute sample volumes is required. Raman spectroscopy is a well-known analytical tool, but it has very weak signals and hence is unsuitable for trace-level analysis. Enhancement via localized optical fields (surface plasmon resonances) on nanoscale metallic materials generates huge signals in surface-enhanced Raman scattering (SERS), enabling single-molecule detection. This enhancement can be tuned by manipulation of the surface roughness and architecture at the sub-micron level. Nevertheless, the development and application of SERS has been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplexable and addressable SERS substrates with high enhancements is of profound interest for SERS-based sensing devices. While most SERS substrates are manufactured by conventional lithographic methods, the development of a cost-effective approach to create nanostructured surfaces is a much sought-after goal in the SERS community. Here, a method is established to create controlled, self-organized, hierarchical nanostructures using hierarchical electrohydrodynamic (HEHD) instabilities. The created structures are readily fine-tuned, which is an important requirement for optimizing SERS to obtain the highest enhancements. HEHD pattern formation enables the fabrication of multiscale 3D structured arrays as SERS-active platforms. Importantly, each of the HEHD-patterned individual structural units yields a considerable SERS enhancement. This enables each single unit to function as an isolated sensor. Each of the formed structures can be effectively tuned and tailored to provide high SERS enhancement, while arising from different HEHD morphologies. The HEHD fabrication of sub-micrometer architectures is straightforward and robust, providing an elegant route for high-throughput biological and chemical sensing. The superior detection properties and the ability to fabricate SERS substrates on the miniaturized scale will facilitate the development of advanced and novel opto-fluidic devices, such as portable detection systems, and will offer numerous applications in biomedical diagnostics, forensics, ecological warfare and homeland security.
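
A minimal sketch of the conventional SERS enhancement-factor estimate implied above (the formula is the standard one; all intensities and molecule counts are hypothetical):

```python
# Conventional SERS enhancement factor:
# EF = (I_SERS / N_SERS) / (I_ref / N_ref),
# where N is the number of probed molecules in each measurement.
I_sers, N_sers = 5.0e5, 1.0e6    # counts, molecules on the SERS substrate (hypothetical)
I_ref, N_ref = 2.0e3, 1.0e12     # counts, molecules in the bulk reference (hypothetical)

ef = (I_sers / N_sers) / (I_ref / N_ref)
print(f"Enhancement factor ~ {ef:.1e}")  # ~2.5e8 for these hypothetical numbers
```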

Keywords: hierarchical electrohydrodynamic patterning, medical diagnostics, point-of care devices, SERS

Procedia PDF Downloads 346
894 Mechanism of Veneer Colouring for Production of Multilaminar Veneer from Plantation-Grown Eucalyptus Globulus

Authors: Ngoc Nguyen

Abstract:

A large plantation estate of Eucalyptus globulus has been established, grown to produce pulpwood. This resource is not suitable for the production of decorative products, principally due to low grades of wood and “dull” appearance, but many trials have already been undertaken for the production of veneer and veneer-based engineered wood products, such as plywood and laminated veneer lumber (LVL). The manufacture of veneer-based products has recently been identified as an unprecedented opportunity to promote higher-value utilisation of plantation resources. However, many uncertainties remain regarding the impacts of the inferior wood quality of young plantation trees on product recovery and value, and with respect to optimal processing techniques. Moreover, the quality of veneer and veneer-based products is far from optimal, as the trees are young and have small diameters, and the veneers show significant colour variation, which affects the added value of the final products. Developing production methods that enhance the appearance of low-quality veneer would offer great potential for the production of high-value wood products such as furniture, joinery, flooring and other appearance products. One method of enhancing the appearance of low-quality veneer, developed in Italy, involves the production of multilaminar veneer, also named “reconstructed veneer”. An important stage of multilaminar production is colouring the veneer, which can be achieved by dyeing it with dyes of different colours depending on the type of appearance products, their design and market demand. Although veneer dyeing technology is well advanced in Italy, it has focused on plantation-grown poplar veneer, whose wood is characterized by low density, even colour, few defects and high permeability. Conversely, the majority of plantation eucalypts have medium to high density, many defects, uneven colour and low permeability. Therefore, a detailed study is required to develop dyeing methods suitable for colouring eucalypt veneers. A brown reactive dye is used for the veneer colouring process. Veneers from sapwood and heartwood at two moisture content levels are used to conduct the colouring experiments: green veneer and veneer dried to 12% MC. Prior to dyeing, all samples are pre-treated. Both soaking (dipping) and vacuum pressure methods are used in the study to compare the results and select the most efficient method for veneer dyeing. To date, the results of colour measurements with the CIELAB colour system showed significant differences in the colour of the undyed veneers produced from the heartwood. According to the colour measurements, the colour became moderately darker with increasing sodium chloride concentration compared to control samples. It is difficult to conclude on a suitable dye solution at this stage, as variables such as dye concentration, dyeing temperature and dyeing time have not yet been tested. After all trials are completed, the dye will be used with and without UV absorbent at the optimal parameters for colouring veneers.
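
A minimal sketch of the CIELAB colour-difference measure typically used for such comparisons (the ΔE*ab formula is the standard one; the L*, a*, b* values here are hypothetical):

```python
# CIELAB colour difference between a dyed sample and its control:
# delta_E = sqrt((dL)^2 + (da)^2 + (db)^2)
import math

control = (62.0, 8.5, 21.0)  # L*, a*, b* of undyed veneer (hypothetical)
dyed = (48.0, 12.0, 18.5)    # L*, a*, b* after dyeing (hypothetical)

delta_e = math.sqrt(sum((c - d) ** 2 for c, d in zip(control, dyed)))
print(f"delta E*ab = {delta_e:.1f}")  # differences above ~2-3 are visually noticeable
```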

Keywords: Eucalyptus globulus, veneer colouring/dyeing, multilaminar veneer, reactive dye

Procedia PDF Downloads 350
893 Evaluation of Rheological Properties, Anisotropic Shrinkage, and Heterogeneous Densification of Ceramic Materials during Liquid Phase Sintering by Numerical-Experimental Procedure

Authors: Hamed Yaghoubi, Esmaeil Salahi, Fateme Taati

Abstract:

The effective shear and bulk viscosity, as well as the dynamic viscosity, describe the rheological properties of a ceramic body during the liquid phase sintering process. The rheological parameters depend on the physical and thermomechanical characteristics of the material, such as relative density, temperature, grain size, diffusion coefficient and activation energy. The main goal of this research is to acquire a comprehensive understanding of the response of an incompressible viscous ceramic material during the liquid phase sintering process, including stress-strain relations, sintering and hydrostatic stress, and the prediction of anisotropic shrinkage and heterogeneous densification as a function of sintering time, accounting for the simultaneous influence of the gravity field and frictional forces. After raw materials analysis, a standard hard porcelain mixture was designed and prepared as the ceramic body. Three experimental configurations were designed: midpoint deflection, sinter bending, and free sintering samples. The numerical method for the ceramic specimens during the liquid phase sintering process is implemented in the CREEP user subroutine in ABAQUS. The numerical-experimental procedure shows the anisotropic behavior, the marked difference in spatial displacement along the three directions, and the incompressibility of the ceramic samples during the sintering process. An anisotropic shrinkage factor has been proposed to investigate the shrinkage anisotropy. It has been shown that the shrinkage along the normal axis of the casting sample is about 1.5 times larger than that along the casting direction, and that the gravitational force acting in pyroplastic deformation intensifies the shrinkage anisotropy more than in the free sintering sample. The lowest and greatest equivalent creep strains occur at the intermediate zone and around the central line of the midpoint distorted sample, respectively. In the sinter bending test sample, the equivalent creep strain approaches its maximum near the contact area with the refractory support. The inhomogeneity in von Mises, pressure, and principal stresses intensifies the relative density non-uniformity in all samples except the free sintering one. The symmetrical distribution of stress around the center of the free sintering sample helps to hinder pyroplastic deformation. Densification results confirmed that the effective bulk viscosity was well defined by the relative density values. The stress analysis confirmed that the sintering stress exceeds the hydrostatic stress from the start to the end of the sintering time so that, from both the theoretical and experimental points of view, the sintering process runs to completion.
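
A minimal sketch of the shrinkage-anisotropy calculation implied above (the linear dimensions are hypothetical, chosen so the ratio matches the reported ~1.5 trend):

```python
# Anisotropic shrinkage factor: ratio of linear shrinkage normal to the
# casting plane versus along the casting direction. Dimensions in mm.
initial = {"casting": 100.0, "transverse": 100.0, "normal": 10.0}   # hypothetical
sintered = {"casting": 92.0, "transverse": 91.8, "normal": 8.8}     # hypothetical

shrinkage = {k: (initial[k] - sintered[k]) / initial[k] for k in initial}
anisotropy = shrinkage["normal"] / shrinkage["casting"]
print({k: f"{v:.1%}" for k, v in shrinkage.items()})
print(f"anisotropy factor = {anisotropy:.2f}")  # ~1.5, as in the reported trend
```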

Keywords: anisotropic shrinkage, ceramic material, liquid phase sintering process, rheological properties, numerical-experimental procedure

Procedia PDF Downloads 341
892 Blackcurrant-Associated Rhabdovirus: New Pathogen for Blackcurrants in the Baltic Sea Region

Authors: Gunta Resevica, Nikita Zrelovs, Ivars Silamikelis, Ieva Kalnciema, Helvijs Niedra, Gunārs Lācis, Toms Bartulsons, Inga Moročko-Bičevska, Arturs Stalažs, Kristīne Drevinska, Andris Zeltins, Ina Balke

Abstract:

Newly discovered viruses provide novel knowledge for basic phytovirus research, serve as tools for biotechnology and can be helpful in the identification of epidemic outbreaks. Blackcurrant-associated rhabdovirus (BCaRV) has been discovered in US germplasm collection samples originating from Russia and France. As it was reported in one accession originating from France, it is unclear whether the material was already infected when it entered the USA or became infected while in the collection there. On this basis, BCaRV was designated a non-EU virus. According to ICTV classification, BCaRV is a representative of the species Blackcurrant betanucleorhabdovirus in the genus Betanucleorhabdovirus (family Rhabdoviridae). Nevertheless, the impact of BCaRV on its host, its transmission mechanisms and its vectors are still unknown. In an RNA-seq data pool from a Ribes resistance-gene study by high-throughput sequencing (HTS), we observed differences between sample-group gene transcript heat maps. Additional assembly of the whole data pool (393,660,492 read pairs, 150 bp) with rnaSPAdes v3.13.1 yielded a 14,424-base contig with an average coverage of 684x that shared 99.5% identity with the previously reported first complete genome of BCaRV (MF543022.1), as determined with EMBOSS Needle. This finding proved the presence of BCaRV in the EU and indicated that it might be a relevant pathogen. In this study, leaf tissue from twelve asymptomatic blackcurrant cv. Mara Eglite plants (tested negative for blackcurrant reversion virus (BRV)) from Dobele, Latvia (56°36'31.9"N, 23°18'13.6"E) was collected and used for total RNA isolation with the RNeasy Plant Mini Kit with minor modifications, followed by plant rRNA removal with a RiboMinus Plant Kit for RNA-Seq. HTS libraries were prepared using the MGI Easy RNA Directional Library Prep Set for 16 reactions to obtain 150 bp paired-end reads. Libraries were pooled, circularized, cleaned, and sequenced on a DNBSEQ-G400 using a PE150 flow cell. Additionally, all samples were tested by RT-PCR, and the amplicons were directly sequenced by a Sanger-based method. The contig representing the genome of BCaRV isolate Mara Eglite was deposited at the European Nucleotide Archive under accession number OU015520. These findings provide a second piece of evidence of the presence of this virus in the EU, and further research on BCaRV prevalence in Ribes from other geographical areas should be performed. As there is no information on the impact of BCaRV on its host, this should be investigated, particularly given that mixed infections with BRV and nucleorhabdoviruses have been reported.
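
A minimal sketch of the percent-identity calculation that tools such as EMBOSS Needle report, computed here over a toy pair of aligned sequences (not the actual contig):

```python
# Percent identity over a global alignment: identical columns divided by
# alignment length (gap columns included), as reported by EMBOSS Needle.
def percent_identity(aln_a: str, aln_b: str) -> float:
    assert len(aln_a) == len(aln_b), "aligned sequences must be equal length"
    identical = sum(a == b and a != "-" for a, b in zip(aln_a, aln_b))
    return 100.0 * identical / len(aln_a)

# Toy aligned pair (gaps as '-'); the study aligned a 14,424-base contig
# against the BCaRV reference MF543022.1.
a = "ATGGCGT-ACGTTAGC"
b = "ATGGCGTAACGT-AGC"
print(f"identity = {percent_identity(a, b):.1f}%")  # 87.5% for this toy pair
```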

Keywords: BCaRV, Betanucleorhabdovirus, Ribes, RNA-seq

Procedia PDF Downloads 184
891 Effects of Prescribed Surface Perturbation on NACA 0012 at Low Reynolds Number

Authors: Diego F. Camacho, Cristian J. Mejia, Carlos Duque-Daza

Abstract:

The recent widespread use of Unmanned Aerial Vehicles (UAVs) has fueled a renewed interest in the efficiency and performance of airfoils, particularly for applications at low and moderate Reynolds numbers, typical of this kind of vehicle. Most previous efforts in the aeronautical industry regarding aerodynamic efficiency have focused on high-Reynolds-number applications, typical of commercial airliners and large aircraft. However, in order to increase the levels of efficiency and to boost the performance of these UAVs, it is necessary to explore new alternatives in terms of airfoil design and the application of drag reduction techniques. The objective of the present work is to analyze and compare the performance of a standard NACA 0012 profile against one featuring a wall protuberance, or surface perturbation. A computational model, based on the finite volume method, is employed to evaluate the effect of the presence of geometrical distortions on the wall. The performance evaluation is carried out in terms of variations of the drag and lift coefficients for the given profile. In particular, the aerodynamic performance of the new design, i.e. the airfoil with a surface perturbation, is examined under conditions of incompressible and subsonic flow in transient state. The perturbation considered is a shaped protrusion prescribed as a small surface deformation on the top wall of the aerodynamic profile. The ultimate goal of including such a controlled smooth artificial roughness was to alter the turbulent boundary layer. It is shown in the present work that such a modification has a dramatic impact on the aerodynamic characteristics of the airfoil and, if properly adjusted, in a positive way. The computational model was implemented using the unstructured, FVM-based open-source C++ platform OpenFOAM. A number of numerical experiments were carried out at a Reynolds number of 5x10⁴, based on the length of the chord and the free-stream velocity, and at angles of attack of 6° and 12°. A Large Eddy Simulation (LES) approach was used, together with the dynamic Smagorinsky approach as the subgrid-scale (SGS) model, in order to account for the effect of the small turbulent scales. The impact of the surface perturbation on the performance of the airfoil is judged in terms of changes in the drag and lift coefficients, as well as alterations of the main characteristics of the turbulent boundary layer on the upper wall. A dramatic change in the whole performance can be appreciated, including an arguably large increase in the lift-to-drag coefficient ratio for both angles and a size reduction of the laminar separation bubble (LSB) for the twelve-degree angle of attack.
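
A minimal sketch of the non-dimensionalization used above (only the Re = 5x10⁴ target is from the abstract; the free-stream values and integrated forces are hypothetical, and the coefficient definitions are the standard ones):

```python
# Chord Reynolds number and aerodynamic coefficients for a 2D airfoil section.
# Free-stream values are hypothetical, chosen to hit Re = 5e4.
rho = 1.225        # kg/m^3, air density
nu = 1.5e-5        # m^2/s, kinematic viscosity of air
chord = 0.1        # m (hypothetical)
u_inf = 5e4 * nu / chord             # m/s, velocity giving Re = 5e4

q_c = 0.5 * rho * u_inf**2 * chord   # dynamic pressure * chord (per unit span)
lift, drag = 1.8, 0.12               # N/m, hypothetical integrated forces
cl, cd = lift / q_c, drag / q_c
print(f"U = {u_inf:.1f} m/s, Cl = {cl:.3f}, Cd = {cd:.4f}, L/D = {cl/cd:.1f}")
```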

Keywords: CFD, LES, lift-to-drag ratio, LSB, NACA 0012 airfoil

Procedia PDF Downloads 387
890 A Method Intensive Top-down Approach for Generating Guidelines for an Energy-Efficient Neighbourhood: A Case of Amaravati, Andhra Pradesh, India

Authors: Rituparna Pal, Faiz Ahmed

Abstract:

Neighbourhood energy efficiency is a newly emerged term addressing the quality of the urban built environment in terms of various covariates of sustainability. The sustainability paradigm in developed nations has encouraged policymakers of developing cities to envision plans under the aegis of urban-scale sustainability. The importance of neighbourhood energy efficiency has been realized only lately, as the cities, towns and other areas comprising the massive global urban stratum have started facing strong blows from climate change, energy crises, cost hikes and an alarming shortfall in the equity that urban areas require. This step towards urban sustainability can therefore be regarded more as a ‘retrofit action’ intended to remediate already affected urban structures. Even if energy-efficiency measures are applied to existing cities and urban areas, the initial layer remains, for which a complete model of urban sustainability still lacks definition. Urban sustainability is a broadly discussed term with an endless number of parameters and policies through which the loop can be closed. Neighbourhood energy efficiency is an integral part of this, within which neighbourhood-scale indicators, block-level indicators and building-physics parameters can be understood and analyzed to help derive guidelines for urban-scale sustainability. The future of neighbourhood energy efficiency lies not only in energy efficiency but also in important parameters such as quality of life, access to green space, access to daylight, outdoor comfort, natural ventilation, etc. Apart from designing less energy-hungry buildings, it is necessary to create a built environment that puts less pressure on buildings to consume energy. Much literary analysis has been done in Western countries, prominently in Spain, Paris and also Hong Kong, leaving a distinct gap in the Indian scenario in exploring sustainability at the urban stratum. The site for the study has been selected in the upcoming capital city of Amaravati, and the approach can be replicated for similar neighbourhood typologies in the area. The paper proposes a methodical approach to quantify energy and sustainability indices in detail, involving several macro-, meso- and micro-level covariates and parameters. Several iterations have been made at both the macro and micro levels and subjected to simulation, computation and mathematical models, and finally to comparative analysis. Parameters at all levels are analyzed to suggest best-case scenarios, which are in turn extrapolated to the macro level, finally yielding a proposed model for an energy-efficient neighbourhood and a set of worked-out guidelines with derived significances and correlations.

Keywords: energy quantification, macro scale parameters, meso scale parameters, micro scale parameters

Procedia PDF Downloads 176
889 Photoemission Momentum Microscopy of Graphene on Ir (111)

Authors: Anna V. Zaporozhchenko, Dmytro Kutnyakhov, Katherina Medjanik, Christian Tusche, Hans-Joachim Elmers, Olena Fedchenko, Sergey Chernov, Martin Ellguth, Sergej A. Nepijko, Gerd Schoenhense

Abstract:

Graphene reveals a unique electronic structure that predetermines many intriguing properties, such as massless charge carriers, optical transparency and a high velocity of fermions at the Fermi level, opening a wide horizon of future applications. Hence, a detailed investigation of the electronic structure of graphene is crucial. The method of choice is angle-resolved photoelectron spectroscopy (ARPES). Here we present experiments using time-of-flight (ToF) momentum microscopy, an alternative form of ARPES using full-field imaging of the whole Brillouin zone (BZ) and simultaneous acquisition of up to several hundred energy slices. Unlike conventional ARPES, k-microscopy is not limited in simultaneous k-space access. We have recorded the whole first BZ of graphene on Ir(111), including all six Dirac cones. As the excitation source we used synchrotron radiation from BESSY II (Berlin) at the U125-2 NIM, providing linearly polarized (both p- and s-polarization) VUV radiation. The instrument uses a delay-line detector for single-particle detection up to the 5 Mcps range and parallel energy detection via ToF recording. In this way, we gather a 3D data stack I(E, kx, ky) of the full valence electronic structure in approximately 20 minutes. Band dispersion stacks were measured in the energy range from 14 eV up to 23 eV in steps of 1 eV. The linearly dispersing graphene bands for all six K and K' points were simultaneously recorded. We find clear features of hybridization with the substrate, in particular in the linear dichroism in the angular distribution (LDAD). Recording the whole Brillouin zone of graphene/Ir(111) revealed new features. First, the intensity differences (i.e. the LDAD) are very sensitive to the interaction of graphene bands with substrate bands. Second, the dark corridors were investigated in detail for both p- and s-polarized radiation. They appear as local distortions of the photoelectron current distribution and are induced by quantum-mechanical interference of the graphene sublattices. The dark corridors are located in different areas of the six Dirac cones and show chiral behaviour, with a mirror plane along the vertical axis. Moreover, two out of six show an oval shape while the rest are more circular, which clearly indicates an orientation dependence with respect to the E vector of the incident light. Third, a pattern of faint but very sharp lines, strongly reminiscent of Kikuchi lines in diffraction, is visible at energies around 22 eV. In conclusion, the simultaneous study of all six Dirac cones is crucial for a complete understanding of the dichroism phenomena and the dark corridor.
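
A minimal sketch of working with such an I(E, kx, ky) stack (the array shape, energy axis and placeholder counts are hypothetical; momentum-microscopy data are typically post-processed along these lines):

```python
# Slice a 3D momentum-microscopy data stack I(E, kx, ky):
# a constant-energy cut and an E-vs-k band map along ky = 0.
import numpy as np

n_e, n_k = 100, 256
energies = np.linspace(-3.0, 1.0, n_e)            # eV relative to E_F (hypothetical)
stack = np.random.poisson(50, (n_e, n_k, n_k))    # placeholder counts

i_fermi = np.argmin(np.abs(energies - 0.0))       # index of the Fermi level
fermi_surface = stack[i_fermi]                    # (kx, ky) map at E_F
band_map = stack[:, :, n_k // 2]                  # E vs kx cut through ky = 0
print(fermi_surface.shape, band_map.shape)        # (256, 256) (100, 256)
```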

Keywords: band structure, graphene, momentum microscopy, LDAD

Procedia PDF Downloads 340
888 Interacting with Multi-Scale Structures of Online Political Debates by Visualizing Phylomemies

Authors: Quentin Lobbe, David Chavalarias, Alexandre Delanoe

Abstract:

The ICT revolution has given birth to an unprecedented world of digital traces and has impacted a wide number of knowledge-driven domains such as science, education and policy making. Nowadays, we are daily fueled by unlimited flows of articles, blogs, messages, tweets, etc. The internet itself can thus be considered an unsteady hypertextual environment where websites emerge and expand every day. But there are structures inside knowledge. A given text can always be studied in relation to others or in light of a specific socio-cultural context. By way of their textual traces, human beings are calling out to each other: hypertext citations, retweets, vocabulary similarity, etc. We are in fact the architects of a giant web of elements of knowledge whose structures and shapes convey their own information. The global shapes of these digital traces represent a source of collective knowledge, and the question of their visualization remains an open challenge. How can we explore, browse and interact with such shapes? In order to navigate across these growing constellations of words and texts, interdisciplinary innovations are emerging at the crossroads between the social and computational sciences. In particular, complex systems approaches now make it possible to reconstruct the hidden structures of textual knowledge by means of multi-scale objects of research such as semantic maps and phylomemies. Phylomemy reconstruction is a generic method related to the co-word analysis framework. Phylomemies aim to reveal the temporal dynamics of large corpora of textual contents by performing inter-temporal matching on extracted knowledge domains in order to identify their conceptual lineages. This study addresses the question of visualizing the global shapes of online political discussions related to the French presidential and legislative elections of 2017. We aim to build phylomemies on top of a dedicated collection of thousands of French political tweets enriched with archived contemporary news web articles. Our goal is to reconstruct the temporal evolution of the online debates fueled by each political community during the elections. To that end, we introduce an iterative data exploration methodology implemented and tested within the free software Gargantext. There we combine synchronic and diachronic axes of visualization to reveal the dynamics of our corpora of tweets and web pages as well as their inner syntagmatic and paradigmatic relationships. In doing so, we aim to provide researchers with innovative methodological means to explore online semantic landscapes in a collaborative and reflective way.
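
A minimal sketch of the inter-temporal matching step at the heart of phylomemy reconstruction (the term clusters and the Jaccard threshold are hypothetical; Gargantext’s actual implementation is considerably more elaborate):

```python
# Link term clusters of consecutive time periods when their Jaccard
# similarity exceeds a threshold, yielding conceptual lineages.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

period_1 = [{"primaire", "candidat", "sondage"}, {"europe", "euro", "traité"}]
period_2 = [{"candidat", "sondage", "débat"}, {"europe", "frexit", "traité"}]

THRESHOLD = 0.3  # hypothetical matching threshold
lineages = [
    (i, j, round(jaccard(c1, c2), 2))
    for i, c1 in enumerate(period_1)
    for j, c2 in enumerate(period_2)
    if jaccard(c1, c2) >= THRESHOLD
]
print(lineages)  # [(0, 0, 0.5), (1, 1, 0.5)]: two lineages persist across periods
```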

Keywords: online political debate, French election, hyper-text, phylomemy

Procedia PDF Downloads 186
887 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness

Authors: Keda Li, Hong Hu

Abstract:

Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative-Poisson’s-ratio behavior and tunable properties. Conventional auxetic lattice structures, for which the deformation process is governed by a bending-dominated mechanism, have faced the limitation of poor mechanical performance for many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports the finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The structural design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame; the stretching-dominated deformation mechanism was determined according to Maxwell’s stability criterion. The finite element (FE) models of the 2D lattice structures, established with stainless steel material, were analyzed in ABAQUS/Standard for predicting the in-plane structural deformation mechanism, failure process, and compressive elastic properties. Based on the computational simulation, a parametric analysis was conducted to investigate the effect of the structural parameters on Poisson’s ratio and mechanical properties. A geometrical optimization was then implemented to achieve the optimal Poisson’s ratio for maximum specific energy absorption. In addition, the optimized 2D lattice structure was converted into a corresponding 3D geometric configuration using the orthogonal splicing method. The numerical results for the 2D and 3D structures under compressive quasi-static loading conditions were compared separately with the traditional double-arrowhead re-entrant honeycomb in terms of specific Young’s moduli, Poisson’s ratios, and specific energy absorption. As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson’s ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability, and yield strength of the proposed structure are adjustable through different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical prediction in this study suggests the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications with a concurrent demand for load-bearing capacity. For future research, experimental analysis is required to validate the numerical simulation.
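
A minimal sketch of the 2D Maxwell stability count used to classify lattice deformation mechanisms (the criterion is the standard one; the strut and joint counts below are hypothetical, not the paper’s unit-cell topology):

```python
# Maxwell's criterion for a 2D pin-jointed frame: M = b - 2j + 3,
# with b struts and j joints. M < 0: under-constrained, bending-dominated;
# M >= 0: can be stretching-dominated.
def maxwell_2d(b: int, j: int) -> int:
    return b - 2 * j + 3

for name, b, j in [("re-entrant honeycomb cell", 6, 6),
                   ("hexagonal double-arrowhead cell", 13, 8)]:  # hypothetical counts
    m = maxwell_2d(b, j)
    regime = "stretching-dominated" if m >= 0 else "bending-dominated"
    print(f"{name}: M = {m} -> {regime}")
```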

Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb

Procedia PDF Downloads 87
886 The Effect of Aerobics and Yogic Exercise on Selected Physiological and Psychological Variables of Middle-Aged Women

Authors: A. Pallavi, N. Vijay Mohan

Abstract:

A nation can be economically progressive only when its citizens have sufficient capacity to work efficiently and increase productivity. So, good health must be regarded as a primary need of the community. It supports the growth and development of the body and the mind, which in turn leads to the progress and prosperity of the nation. Optimum growth is a necessity for an efficient existence in a biologically adverse and economically competitive world. It is also necessary for the execution of daily routine work. Yoga is a method or a system for the complete development of the personality of a human being; it can be further described as an all-round and complete development of the body, mind, morality, intellect and soul of a being. Sri Aurobindo defines yoga as 'a methodical effort towards self-perfection by the development of the potentialities in the individual.' Aerobic exercise is any activity that uses large muscle groups, can be maintained continuously, and is rhythmic in nature; it is a type of exercise that overloads the heart and lungs and causes them to work harder than at rest. The important idea behind aerobic exercise today is to get up and get moving. There are more activities than ever to choose from, whether a new activity or an old one: find something you enjoy doing that keeps your heart rate elevated for a continuous period, and get moving to a healthier life. Middle-aged men were selected and served as the subjects for the purpose of this study. The selected subjects were in the age group of 30 to 40 years. By going through the literature and consulting experts in yoga and aerobic training, the investigator chose variables specifically related to middle-aged men. The selected physiological variables are pulse rate, diastolic blood pressure, systolic blood pressure, percent body fat and vital capacity. The selected psychological variables are job anxiety and occupational stress. The study was formulated as a random group design consisting of aerobic exercise and yogic exercise groups. The subjects (N=60) were randomly divided into three equal groups of twenty middle-aged men each. The groups were designated as follows: 1. Experimental Group I, aerobic exercises; 2. Experimental Group II, yogic exercises; 3. Control group. All the groups were subjected to a pre-test prior to the experimental treatment. The experimental groups participated in their respective training for a duration of twenty-four weeks, six days a week, throughout the study. Tests were administered prior to training (pre-test), after the twelfth week (second test) and after the twenty-fourth week (post-test) of the training schedule.

Keywords: pulse rate, diastolic blood pressure, systolic blood pressure, percent body fat, vital capacity, psychological variables, job anxiety, occupational stress, aerobic exercise, yogic exercise

Procedia PDF Downloads 445
885 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design

Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez

Abstract:

Coffee is the second most consumed commodity worldwide, yet it also generates colossal waste. Proper management of coffee waste has been proposed by converting it into products with higher added value, to achieve sustainability of the economic and ecological footprint and protect the environment. On this basis, studies looking at the recovery of coffee waste have become more relevant in recent decades. Spent coffee grounds (SCGs), resulting from brewing coffee, represent the major waste produced across the coffee industry. The facts that SCGs have no economic value, are abundant in nature and industry, do not compete with agriculture and, especially, have a high oil content (between 7-15% of their total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourage their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production by the transesterification process. However, conventional methods used for oil extraction are not recommended due to their high consumption of energy and time and their generation of toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scaling up the process and ensuring more environmentally friendly production. From this perspective, the aim of this work was a statistical study to identify an efficient strategy for oil extraction with n-hexane using indirect sonication. Mixed Arabica and Robusta coffee waste was used in this work. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated as independent variables by a 2³ Central Composite Rotatable Design (CCRD). The results were analyzed using STATISTICA 7 StatSoft software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output. Validation of the model by analysis of variance (ANOVA) showed a good fit for the results obtained at a 95% confidence interval, and the predicted-versus-experimental values plot confirmed the satisfactory correlation of the model results. Besides, the identification of the optimum experimental conditions was based on the study of the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 ºC, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as the solvent. Under these conditions, the oil yield was >9% in all cases. The results confirmed the efficiency of using an ultrasound bath for extracting oil as a more economical, greener, and more efficient alternative to the Soxhlet method.
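
A minimal sketch of how the coded runs of a three-factor CCRD can be laid out (the code builds the standard design geometry in coded units; the number of centre replicates is a design choice, and no study data are reproduced here):

```python
# Coded design matrix of a 2^3 central composite rotatable design:
# 8 factorial points, 6 axial points at alpha = (2^3)^(1/4) ~ 1.682,
# plus centre-point replicates.
import itertools
import numpy as np

alpha = 2 ** (3 / 4)  # rotatability condition for k = 3 factors
factorial = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
axial = np.array([row for i in range(3) for row in
                  (np.eye(3)[i] * alpha, -np.eye(3)[i] * alpha)])
centre = np.zeros((6, 3))  # number of centre replicates is a design choice

design = np.vstack([factorial, axial, centre])
print(design.shape)  # (20, 3): temperature, time, solvent-to-solid ratio (coded)
```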

Keywords: coffee waste, optimization, oil yield, statistical planning

Procedia PDF Downloads 119
884 A Survey of Digital Health Companies: Opportunities and Business Model Challenges

Authors: Iris Xiaohong Quan

Abstract:

The global digital health market reached 175 billion U.S. dollars in 2019 and is expected to grow at about 25% CAGR to over 650 billion USD by 2025. Different terms, such as digital health, e-health, mHealth, and telehealth, have been used in the field, which can sometimes cause confusion. The term digital health was originally introduced to refer specifically to the use of interactive media, tools, platforms, applications, and solutions that are connected to the Internet to address the health concerns of providers as well as consumers. While mHealth emphasizes the use of mobile phones in healthcare, telehealth means using technology to deliver clinical health services to patients remotely. According to the FDA, “the broad scope of digital health includes categories such as mobile health (mHealth), health information technology (IT), wearable devices, telehealth and telemedicine, and personalized medicine.” Some researchers believe that digital health is nothing other than the cultural transformation healthcare has been going through in the 21st century because of digital health technologies that provide data to both patients and medical professionals. As digital health is burgeoning but research in the area is still inadequate, our paper aims to clear up the definitional confusion and provide an overall picture of digital health companies. We further investigate how business models are designed and differentiated in the emerging digital health sector. Both quantitative and qualitative methods are adopted in the research. For the quantitative analysis, our research data came from two databases, Crunchbase and CBInsights, which are well-recognized information sources for researchers, entrepreneurs, managers, and investors. We searched a few keywords in the Crunchbase database based on companies’ self-descriptions: digital health, e-health, and telehealth. A search for “digital health” returned 941 unique results, “e-health” returned 167 companies, and “telehealth” returned 427. We also searched the CBInsights database for similar information. After merging the results, removing duplicates, and cleaning up the database, we arrived at a list of 1,464 digital health companies. A qualitative method is used to complement the quantitative analysis: an in-depth case analysis of three successful unicorn digital health companies, to understand how business models evolve and to discuss the challenges faced in this sector. Our research returned some interesting findings. For instance, we found that 86% of the digital health startups were founded since 2010. 75% of the digital health companies have fewer than 50 employees, and almost 50% have fewer than 10 employees. This shows that digital health companies are relatively young and small in scale. On the business model analysis, while traditional healthcare businesses emphasize the so-called “3P” (patient, physicians, and payer), digital health companies extend this to “5P” by adding patents, a result of technology requirements (such as the development of artificial intelligence models), and platform, an effective value-creation approach that brings the stakeholders together. Our case analysis details this 5P framework and contributes to the extant knowledge on business models in the healthcare industry.
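The merge-and-deduplicate step described above can be sketched in pandas as follows. The column names and the toy rows are hypothetical (real Crunchbase/CBInsights exports differ and would be loaded with pd.read_csv); the point is only the name normalization that lets records from two databases collapse into one.

```python
# Sketch: combine two company lists and drop duplicates on a normalized name key.
import pandas as pd

crunchbase = pd.DataFrame({"company_name": ["Acme Health, Inc.", "TeleCare Ltd"],
                           "source": "Crunchbase"})
cbinsights = pd.DataFrame({"company_name": ["acme health inc", "WellBit"],
                           "source": "CBInsights"})

combined = pd.concat([crunchbase, cbinsights], ignore_index=True)
# Normalize names first so 'Acme Health, Inc.' and 'acme health inc'
# collapse into a single record.
combined["name_key"] = (combined["company_name"].str.lower()
                        .str.replace(r"[^a-z0-9]", "", regex=True))
deduped = combined.drop_duplicates(subset="name_key").drop(columns="name_key")
print(len(deduped), "unique companies")  # -> 3 for these toy rows
```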

Keywords: digital health, business models, entrepreneurship opportunities, healthcare

Procedia PDF Downloads 183
883 A Conceptual Framework of Integrated Evaluation Methodology for Aquaculture Lakes

Authors: Robby Y. Tallar, Nikodemus L., Yuri S., Jian P. Suen

Abstract:

Research in ecological water resources management addresses far from trivial questions and today seems to be one branch of science that can contribute strongly to the study of complexity (physical, biological, ecological, socio-economic, environmental, and other aspects). Much of the existing literature on the different facets of these studies is technical and targeted at specific users. This study offers an evaluation methodology for aquaculture lakes that combines all of these aspects, with a paradigm referring to hierarchical theory and to the effects of the specific spatial arrangement of an object within a space or local area. The process of developing the conceptual framework therefore draws the most integrated and applicable concepts from grounded theory. A design of an integrated evaluation methodology for aquaculture lakes is presented. The method is based on the identification of a series of attributes which can be used to describe the status of aquaculture lakes using indicators from the aquaculture water quality index (AWQI), the aesthetic aquaculture lake index (AALI) and the rapid appraisal for fisheries index (RAPFISH). The preliminary preparation was accomplished as follows: first, the study area was characterized at different spatial scales. Second, inventory data were compiled as a core resource, including the city master plan, water quality reports from the environmental agency, and related government regulations. Third, a ground-checking survey was completed to validate the on-site condition of the study area. To design the integrated evaluation methodology, we finally integrated these attributes and developed a rating score system called the Integrated Aquaculture Lake Index (IALI). The IALI reflects a compromise among all aspects and responds to the need for concise information about the current status of aquaculture lakes through a comprehensive approach. The IALI was elaborated as a decision-aid tool for stakeholders to evaluate the impact and contribution of anthropogenic activities on the aquaculture lake’s environment. The conclusion was that, while there is no denying that aquaculture lakes are under great threat from the pressure of increasing human activities, no evaluation methodology for aquaculture lakes can succeed by insisting on pristine conditions. The IALI developed in this work can be used as an effective, low-cost evaluation methodology of aquaculture lakes for developing countries, because it emphasizes simplicity and understandability: it must communicate to decision makers as well as experts. Moreover, stakeholders need help perceiving their lakes so that sites can be accepted and valued by local people. For a lake development site, accessibility and the planning designation of the site are of decisive importance: local people want to know whether the lake is safe and whether it can be used.
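The abstract does not publish the IALI scoring formula, so the following is only a minimal sketch of how a composite rating of this kind is commonly built: each sub-index (AWQI, AALI, RAPFISH) is expressed on a common 0-100 scale and combined with weights. The weights, the scale, and the example scores are all assumptions for illustration.

```python
# Sketch: weighted aggregation of three 0-100 sub-indices into one lake score.
def integrated_lake_index(awqi, aali, rapfish, weights=(0.4, 0.3, 0.3)):
    """Hypothetical IALI-style composite: weighted mean of three sub-indices."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * s for w, s in zip(weights, (awqi, aali, rapfish)))

# Example: a lake scoring 72 on water quality, 55 on aesthetics,
# and 64 on fisheries sustainability.
print(integrated_lake_index(72, 55, 64))  # -> 64.5
```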

Keywords: aesthetic value, AHP, aquaculture lakes, integrated lakes, RAPFISH

Procedia PDF Downloads 237
882 Ways to Prevent Increased Wear of the Drive Box Parts and the Central Drive of the Civil Aviation Turbo Engine Based on Tribology

Authors: Liudmila Shabalinskaya, Victor Golovanov, Liudmila Milinis, Sergey Loponos, Alexander Maslov, D. O. Frolov

Abstract:

The work is devoted to the rapid laboratory diagnosis of the condition of aircraft friction units, based on the application of a nondestructive testing method that analyzes the parameters of wear particles, known as tribodiagnostics. The most important task of tribodiagnostics is to develop recommendations for the selection of more advanced designs, materials and lubricants, based on data on wear processes, in order to increase the service life and ensure the operational safety of machines and mechanisms. The objects of tribodiagnostics in this work are the tooth gears of the central drive and the gearboxes of the PS-90A civil aviation gas turbine engine, in which rolling friction and sliding friction with slip occur. The main criterion for evaluating the technical state of lubricated friction units of a gas turbine engine is the intensity and rate of wear of the friction surfaces of the friction unit parts. While the engine is running, oil samples are taken, and the state of the friction surfaces is evaluated from the parameters of the wear particles contained in the oil sample, which carry important and detailed information about the wear processes in the engine transmission units. The parameters carrying this information include the concentration of wear particles and metals in the oil, the dispersion composition, the shape, size ratio and number of particles, the state of their surfaces, and the presence in the oil of various mechanical impurities of non-metallic origin. Such morphological analysis of wear particles has been introduced into the routine condition monitoring and diagnostics of various aircraft engines, including gas turbine engines, since the type of wear characteristic of the central drive and the drive box is surface fatigue wear; the beginning of its development, accompanied by the formation of microcracks, leads to the formation of spherical particles up to 10 μm in size and, subsequently, of flocculent particles measuring 20-200 μm. Tribodiagnostics using the morphological analysis of wear particles includes the following techniques: ferrography, filtering, and computer-based classification and counting of wear particles. Based on the analysis of several series of oil samples taken from the drive box of the engine over its operating time, a study of wear kinetics was carried out. Based on the results of the study and a comparison of tribodiagnostic criteria, wear-state ratings, and statistics from the morphological analysis, norms for the normal operating regime were developed. The study allowed the development of wear-state levels for the friction surfaces of gearing and a 10-point rating system for estimating the likelihood of an increased-wear mode and, accordingly, for preventing engine failures in flight.
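As an illustration of the classification-and-counting step, the toy sketch below bins particles by equivalent diameter into the morphological classes named above. The 10 μm and 20-200 μm thresholds come from the abstract; the sample diameters and class labels are illustrative assumptions, not the study's classifier.

```python
# Toy sketch: bin wear particles by equivalent diameter and count each class.
from collections import Counter

def classify_particle(diameter_um: float) -> str:
    if diameter_um <= 10:
        return "spherical (early surface fatigue)"
    if 20 <= diameter_um <= 200:
        return "flocculent (developed fatigue wear)"
    return "other / unclassified"

# In practice, diameters come from ferrography and image analysis of an oil
# sample; these values are placeholders.
sample_diameters = [3.2, 8.5, 15.0, 25.0, 140.0]
print(Counter(classify_particle(d) for d in sample_diameters))
```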

Keywords: aviation, box of drives, morphological analysis, tribodiagnostics, tribology, ferrography, filtering, wear particle

Procedia PDF Downloads 260
881 Coastal Modelling Studies for Jumeirah First Beach Stabilization

Authors: Zongyan Yang, Gagan K. Jena, Sankar B. Karanam, Noora M. A. Hokal

Abstract:

Jumeirah First beach, a segment of coastline of length 1.5 km, is one of the popular public beaches in Dubai, UAE. The stability of the beach has been affected by several coastal development projects, including The World, Island 2 and La Mer. A comprehensive stabilization scheme, comprising two composite groynes (of lengths 90 m and 125 m), modification of the northern breakwater of Jumeirah Fishing Harbour and beach re-nourishment, was implemented by Dubai Municipality in 2012. However, the performance of the implemented stabilization scheme has been compromised by the La Mer project (built in 2016), which modified the wave climate at Jumeirah First beach. The objective of the coastal modelling studies is to establish a design basis for further beach stabilization scheme(s). Comprehensive coastal modelling studies were conducted to establish the nearshore wave climate, equilibrium beach orientations and stable beach plan forms. Based on the outcomes of the modelling studies, a recommendation was made to extend the composite groynes to stabilize Jumeirah First beach. Wave transformation was performed following an interpolation approach, with wave transformation matrices derived from simulations of the possible range of wave conditions in the region. The Dubai coastal wave model was developed with MIKE21 SW. The offshore wave conditions were determined from PERGOS wave data at four offshore locations, with consideration of the spatial variation. The lateral boundary conditions corresponding to the offshore conditions, at the Dubai/Abu Dhabi and Dubai/Sharjah borders, were derived with the LitDrift 1D wave transformation module. The Dubai coastal wave model was calibrated with wave records at monitoring stations operated by Dubai Municipality. The wave transformation matrix approach was validated against nearshore wave measurements at a Dubai Municipality monitoring station in the vicinity of Jumeirah First beach. A typical one-year wave time series was transformed to seven locations in front of the beach to account for the variation in wave conditions caused by adjacent and offshore developments. Equilibrium beach orientations were estimated with LitDrift by finding the beach orientations with null annual littoral transport at the seven selected locations. The littoral transport calculations were compared with the beach erosion/accretion quantities estimated from the beach monitoring program (carried out twice a year and including bathymetric and topographical surveys). An innovative integral method was developed to outline the stable beach plan forms from the estimated equilibrium beach orientations, with a predetermined minimum beach width. The optimal lengths for the composite groyne extensions were recommended based on the stable beach plan forms.
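A minimal sketch of the transformation-matrix interpolation idea follows: nearshore wave height is pre-computed on a grid of offshore conditions (in the study, with MIKE21 SW runs), and a long offshore time series is then transformed by table interpolation rather than by re-running the spectral model for every sea state. All grid values, the 0.45 transformation factor, and the example series are stand-ins, not project data.

```python
# Sketch: nearshore Hs via interpolation in a pre-computed transformation matrix.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

Hs_off = np.linspace(0.5, 5.0, 10)       # offshore significant wave height (m)
Tp_off = np.linspace(3.0, 12.0, 10)      # peak wave period (s)
dir_off = np.linspace(270.0, 360.0, 10)  # offshore mean direction (deg)

H, T, D = np.meshgrid(Hs_off, Tp_off, dir_off, indexing="ij")
Hs_near = 0.45 * H  # stand-in values; in practice, the matrix of model runs

transform = RegularGridInterpolator((Hs_off, Tp_off, dir_off), Hs_near)

# A (typically year-long) offshore time series, transformed point by point.
offshore_series = np.array([[1.2, 6.0, 300.0],
                            [2.5, 8.0, 330.0]])  # Hs, Tp, direction
print(transform(offshore_series))  # nearshore Hs at the target point (m)
```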

Keywords: composite groyne, equilibrium beach orientation, stable beach plan form, wave transformation matrix

Procedia PDF Downloads 263
880 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software

Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi

Abstract:

Climate change is an increasingly central issue in the world because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy overlooking the Adriatic Sea, the province of Macerata. The aim is to analyze spatial climate change, for precipitation and temperature, over the last three climatological standard normals (1961-1990; 1971-2000; 1981-2010) using GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subjected to quality controls: validation and homogenization. These data were fundamental for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. The results of cross validation were used to assess the best geostatistical interpolation technique. Among the methods analysed, the co-kriging method with altitude as the independent variable produced the best cross validation results for all time periods, with 'root mean square error standardized' close to 1, 'mean standardized error' close to 0, and 'average standard error' and 'root mean square error' with similar values. The maps resulting from the analysis were compared by subtraction between rasters, producing three maps of annual variation and three maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in average annual temperature of about 0.1 °C between 1961-1990 and 1971-2000 and of 0.6 °C between 1961-1990 and 1981-2010. Annual precipitation shows the opposite trend, with an average difference from 1961-1990 to 1971-2000 of about 35 mm and from 1961-1990 to 1981-2010 of about 60 mm. Furthermore, the differences across the areas have been highlighted with area graphs and summarized in several tables as descriptive analysis. For temperature between 1961-1990 and 1971-2000, the most areally represented frequency is 0.08 °C (77.04 km² out of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The differences for temperature from 1961-1990 to 1981-2010 instead show a most areally represented frequency of 0.83 °C (36.9 km²), with a kurtosis of -0.45 and a skewness of 0.92. It can therefore be said that the distribution is more peaked for 1961/1990-1971/2000 and flatter, but with a stronger increase, for 1961/1990-1981/2010. In contrast, precipitation shows a very similar shape of distribution, although with different intensities, for both variation periods (1961/1990-1971/2000 and 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62) and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this methodology of analysis allows the assessment of small-scale climate change for each month of the year and could be investigated further in relation to regional atmospheric dynamics.
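The raster-differencing step is conceptually simple and can be sketched as below. The two grids here are tiny synthetic stand-ins; in practice they are the co-kriged temperature surfaces exported from the GIS (e.g., read with rasterio) and subtracted cell by cell.

```python
# Sketch: per-cell difference between two climatological-normal rasters.
import numpy as np

tmean_1961_1990 = np.array([[13.2, 12.8], [14.1, 13.5]])  # deg C, stand-in grid
tmean_1981_2010 = np.array([[13.8, 13.4], [14.7, 14.1]])  # deg C, stand-in grid

change = tmean_1981_2010 - tmean_1961_1990  # per-cell temperature change
print("mean annual change: %.2f deg C" % np.nanmean(change))  # ~0.6 here
```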

Keywords: climate change, GIS, interpolation, co-kriging

Procedia PDF Downloads 127
879 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities

Authors: Shaurya Chauhan, Sagar Gupta

Abstract:

Prominent urbanizing centres across the globe, like Delhi, Dhaka, or Manila, have shown that development often struggles to bridge the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of an ever-diversifying population. When this exclusion is intertwined with rapid urbanization and a diversifying urban demography, unplanned sprawl, poor planning, and low-density development emerge as automated responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application through its prototypical nature and an inclusive approach through mediation between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves current user requirements while allowing for future citizen-driven modifications. This is synthesized as a 3-tiered model: user needs, design ideology, adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter referred to as OSPDF) that can be used for on-ground field applications. For specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggested measures also integrate the region's cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools and references from recognized literature. This framework, based on a vision-feedback-execution loop, is used for hypothetical development at the five prevalent scales of design: master planning, urban design, architecture, tectonics, and modularity, in chronological order. At each of these scales, the possible approaches and avenues for open-sourcing are identified, validated through trial and error, and subsequently recorded. The research attempts to re-calibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Bernard Tschumi's Space, Event, and Movement and Kevin Lynch's five-point mental map, among others, are deeply rooted in the research process. Beyond the five-part OSPDF, a two-part subsidiary process is also suggested after each cycle of application, for continued appraisal and refinement of the framework and the urban fabric over time. The research is an exploration of the possibilities for an architect to adopt the new role of a 'mediator' in the development of contemporary urbanity.

Keywords: open source, public participation, urbanization, urban development

Procedia PDF Downloads 149
878 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement

Authors: Brittany Richardson, Ying Wang

Abstract:

For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health or fitness assessments periodically. An accurate assessment helps track recovery progress, prevent potential injury, and make long-range training plans. Assessments include basic measurements (height, weight, blood pressure, heart rate, body fat, etc.) and advanced evaluations (muscle group strength, stability-mobility, movement evaluation, etc.). In the current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, depends largely on the experience of physicians, coaches, and personal trainers, and it is challenging to track clients' progress. Unlike traditional assessment, in this paper we present a deep-learning-based facial recognition algorithm for accurate, comprehensive, and trackable assessment. Based on the results of our assessment, physicians, coaches, and personal trainers are able to adjust training targets and methods. The system categorizes the difficulty level of the current activity for the client or user and, furthermore, makes more comprehensive assessments by tracking muscle groups over time using a purpose-designed landmark detection method. The system also includes functions for grading and correcting the client's form during exercise. Experienced coaches and personal trainers can tell clients' limits from their facial expressions and muscle group movements, even during the first several sessions. Similarly, using a convolutional neural network, the system is trained on people's facial expressions to differentiate challenge levels for clients. It uses landmark detection to capture subtle changes in muscle group movements. It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region and the distal mobility of the glenohumeral joint, as well as distal mobility in general and its effect on the kinetic chain. The system integrates data from other fitness assistant devices, including but not limited to the Apple Watch and Fitbit, for improved training and testing performance. The system itself does not require historical data for an individual client, but a client's history can be used to create a more effective exercise plan. To validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise planning, execution, progress tracking, and performance.
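As a purely illustrative sketch (not the authors' architecture), the following shows the general shape of such a convolutional classifier: a small network mapping a 48x48 grayscale face crop to one of three perceived-difficulty classes. The layer sizes, input resolution, and number of classes are assumptions.

```python
# Sketch: a tiny CNN that scores facial expressions into difficulty levels.
import torch
import torch.nn as nn

class DifficultyCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 48 -> 24
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 24 -> 12
        )
        self.classifier = nn.Linear(32 * 12 * 12, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = DifficultyCNN()
logits = model(torch.randn(1, 1, 48, 48))  # dummy face crop
print(logits.softmax(dim=1))               # predicted difficulty probabilities
```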

Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments

Procedia PDF Downloads 134
877 Women’s Experience of Managing Pre-Existing Lymphoedema during Pregnancy and the Early Postnatal Period

Authors: Kim Toyer, Belinda Thompson, Louise Koelmeyer

Abstract:

Lymphoedema is a chronic condition caused by dysfunction of the lymphatic system, which limits the drainage of fluid and tissue waste from the interstitial space of the affected body part. The normal physiological changes of pregnancy place an increased load on a normal lymphatic system, which can result in a transient lymphatic overload (oedema). The interaction between lymphoedema and pregnancy oedema is unclear. Women with pre-existing lymphoedema require accurate information and additional strategies to manage their lymphoedema during pregnancy. Currently, no resources are available to guide women or their healthcare providers with accurate advice and additional management strategies for coping with lymphoedema during pregnancy and until they have recovered postnatally. This study explored the experiences of Australian women with pre-existing lymphoedema during a recent pregnancy and the early postnatal period, to determine how their usual lymphoedema management strategies were adapted and what their additional or unmet needs were. Interactions with their obstetric care providers, hospital maternity services, and usual lymphoedema therapy services were detailed. Participants were sourced from several Australian lymphoedema community groups, including therapist networks. Opportunistic sampling is appropriate for exploring this topic in a small target population, as lymphoedema in women of childbearing age is uncommon and prevalence data are unavailable. Inclusion criteria were: age over 18 years; a diagnosis of primary or secondary lymphoedema of the arm or leg; a pregnancy within the preceding ten years (since 2012); and pregnancy and postnatal care received in Australia. Exclusion criteria were a diagnosis of lipoedema and inability to read or understand a reasonable level of English. A mixed-method qualitative design was used in two phases: an online survey (REDCap platform), followed by online semi-structured interviews or focus groups that provided the transcript data for inductive thematic analysis, to gain an in-depth understanding of the issues raised. Women with well-managed pre-existing lymphoedema coped well with the additional oedema load of pregnancy; however, those with limited access to quality conservative care prior to pregnancy were significantly impacted, with many reporting deterioration of their chronic lymphoedema. Misinformation and a lack of support increased fear and apprehension in planning and enjoying the pregnancy experience. Collaboration between maternity and lymphoedema therapy services did not occur, despite study participants suggesting it. Helpful resources and unmet needs were identified in the recent Australian context to inform further research and the development of resources to assist women with lymphoedema who are pregnant or considering pregnancy, and their supporters, including healthcare providers.

Keywords: lymphoedema, management strategies, pregnancy, qualitative

Procedia PDF Downloads 85
876 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest

Authors: Peter Baji

Abstract:

In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although the public sector has different ways to decrease traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in the transport sciences, but until recently there were not sufficient data for evaluating road traffic flow patterns at the scale of the entire road system of a larger urban area. European cities in which congestion charges have already been introduced (e.g., London, Stockholm, Milan) designated a particular downtown zone for charging, but this protects only the users and inhabitants of the CBD (Central Business District) area. Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown, which is affected by the city's congestion charge plans. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research were collected with the help of Google's Distance Matrix API (Application Programming Interface), which provides estimated traffic-dependent travel times between freely chosen coordinate pairs. From the difference between free-flow and congested travel times, daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas lying outside the downtown, and their inhabitants also need protection. The conclusion of this case study is that cities can develop a real-time, place-based congestion charge system that forces car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting only a very limited downtown area.
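The data-collection step can be sketched as follows: the Distance Matrix API is queried for an origin-destination pair with a live departure time, and the gap between the typical duration and the traffic-aware duration serves as a congestion indicator. The endpoint and the duration / duration_in_traffic response fields are real API features; the coordinates and the key are placeholders, and duration_in_traffic is returned only when departure_time is set and the key has the appropriate access.

```python
# Sketch: congestion delay for one origin-destination pair via Distance Matrix API.
import requests

URL = "https://maps.googleapis.com/maps/api/distancematrix/json"
params = {
    "origins": "47.5616,19.0547",        # example coordinate pair in Budapest
    "destinations": "47.5350,19.0561",
    "departure_time": "now",             # enables duration_in_traffic
    "key": "YOUR_API_KEY",               # placeholder
}
element = requests.get(URL, params=params).json()["rows"][0]["elements"][0]
typical = element["duration"]["value"]              # seconds, typical conditions
congested = element["duration_in_traffic"]["value"] # seconds, current traffic
print("extra delay:", congested - typical, "s")
```

Repeating such queries over many fixed coordinate pairs and times of day yields the per-road congestion time series from which the study's daily patterns and hot spots are derived.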

Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study

Procedia PDF Downloads 198
875 Integrating Multiple Types of Value in Natural Capital Accounting Systems: Environmental Value Functions

Authors: Pirta Palola, Richard Bailey, Lisa Wedding

Abstract:

Societies and economies worldwide fundamentally depend on natural capital. Alarmingly, natural capital assets are quickly depreciating, posing an existential challenge for humanity. The development of robust natural capital accounting systems is essential for transitioning towards sustainable economic systems and ensuring sound management of capital assets. However, the accurate, equitable and comprehensive estimation of natural capital asset stocks and their accounting values still faces multiple challenges. In particular, the representation of socio-cultural values held by groups or communities has arguably been limited, as, to date, the valuation of natural capital assets has primarily been based on monetary valuation methods and assumptions of individual rationality. People relate to and value the natural environment in multiple ways, and no single valuation method can provide a sufficiently comprehensive image of the range of values associated with the environment. Indeed, calls have been made to improve the representation of multiple types of value (instrumental, intrinsic, and relational) and diverse ontological and epistemological perspectives in environmental valuation. This study addresses that need by establishing a novel valuation framework, Environmental Value Functions (EVF), that allows for the integration of multiple types of value in natural capital accounting systems. The EVF framework is based on the estimation and application of value functions, each of which describes the relationship between the value and the quantity (or quality) of an ecosystem component of interest. In this framework, values are estimated in terms of change relative to the current level instead of calculating absolute values. Furthermore, EVF was developed to also support non-marginalist conceptualizations of value: it is likely that some environmental values cannot be conceptualized in terms of marginal changes. For example, ecological resilience value may, in some cases, be best understood as a binary: it either exists (1) or is lost (0). In such cases, a logistic value function may be used as the discriminator. Uncertainty in the value function parameterization can be considered through, for example, Monte Carlo sampling analysis. The use of EVF is illustrated with two conceptual examples. For the first time, EVF offers a clear framework and concrete methodology for the representation of multiple types of value in natural capital accounting systems, simultaneously enabling 1) the complementary use and integration of multiple valuation methods (monetary and non-monetary); 2) the synthesis of information from diverse knowledge systems; 3) the recognition of value incommensurability; 4) marginalist and non-marginalist value analysis. Furthermore, with this advancement, the coupling of EVF and ecosystem modeling can offer novel insights into the study of spatial-temporal dynamics in natural capital asset values. For example, value time series can be produced, allowing for the prediction and analysis of volatility, long-term trends, and temporal trade-offs. This approach can provide essential information to help guide the transition to a sustainable economy.
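The following is an illustrative sketch of two value-function shapes discussed above, with Monte Carlo propagation of parameter uncertainty. The functional forms, parameters, and the normalized quantity scale are assumptions for illustration, not the paper's calibration.

```python
# Sketch: a marginal (concave) value function, a logistic one, and Monte Carlo
# sampling of an uncertain threshold parameter.
import numpy as np

def marginal_value(q, k=1.0):
    """Concave value curve: each extra unit of the component adds less value."""
    return k * np.log1p(q)

def logistic_value(q, q_crit, steepness=10.0):
    """Near-binary value (e.g. ecological resilience): ~0 below the critical
    quantity q_crit, ~1 above it."""
    return 1.0 / (1.0 + np.exp(-steepness * (q - q_crit)))

q = 0.6  # current (normalized) quantity of the ecosystem component
print("marginal-type value at q:", marginal_value(q))

# Uncertainty in the critical threshold, propagated by Monte Carlo sampling.
rng = np.random.default_rng(0)
q_crit_samples = rng.normal(loc=0.5, scale=0.1, size=10_000)
print("expected resilience value:", logistic_value(q, q_crit_samples).mean())
```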

Keywords: economics of biodiversity, environmental valuation, natural capital, value function

Procedia PDF Downloads 194
874 Recovery of Food Waste: Production of Dog Food

Authors: K. Nazan Turhan, Tuğçe Ersan

Abstract:

The population of the world is approximately 8 billion and continues to grow, leading to an increase in consumption. This situation causes serious problems, and food waste is one of them. The Food and Agriculture Organization of the United Nations (FAO) defines food waste as the discarding or alternative utilization of food that is safe and nutritious for human consumption along the entire food supply chain, from primary production to the end household consumer level. In addition, according to FAO estimates, one-third of all food produced for human consumption is lost or wasted worldwide every year. Wasting food endangers natural resources and causes hunger. For instance, excessive amounts of food waste cause greenhouse gas emissions, contributing to global warming. Therefore, waste management has been gaining significance in the last few decades at both local and global levels, due to the expected scarcity of resources for the increasing population of the world. There are several ways to recover food waste. According to the United States Environmental Protection Agency's Food Recovery Hierarchy, food waste recovery routes are, from most to least preferred: source reduction, feeding hungry people, feeding animals, industrial uses, composting, and landfill/incineration. Bioethanol, biodiesel, biogas, agricultural fertilizer and animal feed can be obtained from food waste generated by different food industries. In this project, feeding animals was selected as the food waste recovery method, and the food waste of a single plant was used to provide ingredient uniformity. Grasshoppers were used as a protein source. In other words, the project developed a dog food product by recovering the plant's food waste through the following steps. The collected food waste and purchased grasshoppers were sterilized, dried and pulverized. They were then mixed with 60 g of agar-agar solution (4% w/v). Three different aromas were added separately to the samples to enhance flavour quality. Since the required amounts differ among dogs, fulfilling all nutritional needs is one of the challenges: there is a wide range of nutritional needs in terms of carbohydrates, protein, fat, sodium, calcium, and so on, and the requirements differ depending on age, gender, weight, height, and breed. Therefore, the product that was developed contains average amounts of each substance so as not to cause any deficiency or surplus. On the other hand, it contains more protein than similar products on the market. The product was evaluated in terms of contamination and nutritional content. For contamination risk, E. coli and Salmonella detection tests were performed, and the results were negative. For the nutritional value test, protein content analysis was done. The protein contents of the different samples vary between 26.07% and 33.68%. In addition, water activity analysis was performed, and the water activity (aw) values of the different samples ranged between 0.2456 and 0.4145.

Keywords: food waste, dog food, animal nutrition, food waste recovery

Procedia PDF Downloads 64
873 The Effect of Vibration Amplitude on Tissue Temperature and Lesion Size When Using a Vibrating Cardiac Catheter

Authors: Kaihong Yu, Tetsui Yamashita, Shigeaki Shingyochi, Kazuo Matsumoto, Makoto Ohta

Abstract:

During cardiac ablation, high power delivery for deeper lesion formation is limited by electrode-tissue interface overheating, which can cause serious complications such as thrombus. To prevent this overheating, temperature control and open irrigation are often used. In temperature control, the radiofrequency generator is adjusted to deliver the maximum output power that maintains the electrode temperature at a target value (commonly 55 °C or 60 °C), so the electrode-tissue interface temperature is also limited. The electrode temperature is the result of heating from the contacted tissue and cooling from the surrounding blood. Because the cooling from blood decreases under conditions of low blood flow, the generator needs to decrease the output power; thus, temperature control cannot deliver high power under low-flow conditions. In open irrigation, saline at room temperature is flushed through holes arranged in the electrode. The electrode-tissue interface is cooled by this environmental cooling, and high power can be delivered even under conditions of low blood flow. However, the large amount of saline infused during irrigation (approximately 1500 ml) can cause other serious complications. When open irrigation cannot be used under conditions of low blood flow, a new means of preventing overheating may be required. The authors have proposed a new electrode cooling method based on making the catheter vibrate. Previous work showed that vibration can produce a cooling effect on the electrode, which may result from the vibration increasing the flow velocity around the catheter, and that increasing the vibration frequency increases this cooling effect. However, the effect of the vibration amplitude was still unknown. Thus, the present study investigated the effect of vibration amplitude on tissue temperature and lesion size. An agar phantom model was used as a tissue-equivalent material for measuring tissue temperature; thermocouples were inserted into the agar to measure the internal temperature. Porcine myocardium was used for lesion size measurement. A normal ablation catheter was set perpendicular to the tissue (agar or porcine myocardium) with 10 gf contact force in 37 °C saline without flow. Vibration amplitudes of ±0.5, ±0.75, and ±1.0 mm with a constant frequency (31 or 63 Hz) were used. A temperature control protocol (45 °C for the agar phantom, 60 °C for the porcine myocardium) was used for the radiofrequency applications. Larger amplitudes produced larger lesion sizes, and higher tissue temperatures in the agar phantom were also observed at larger amplitudes. At the same frequency, a larger amplitude gives a higher vibrating speed, and a higher vibrating speed increases the flow velocity around the electrode more, leading to a larger decrease in electrode temperature. To maintain the electrode at the target temperature, the ablator has to increase the output power. With higher output power over the same duration, more energy is released; consequently, the tissue temperature increases, leading to larger lesion sizes.
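A quick numerical check of the amplitude-speed relationship is sketched below: for sinusoidal motion x(t) = A sin(2πft), the peak vibrating speed is v = 2πfA. The frequencies and amplitudes are those tested above; the assumption of purely sinusoidal motion is ours.

```python
# Sketch: peak vibrating speed v = 2*pi*f*A for the tested settings.
import math

for f in (31, 63):                     # vibration frequency, Hz
    for A_mm in (0.5, 0.75, 1.0):      # vibration amplitude, mm
        v = 2 * math.pi * f * (A_mm / 1000)  # peak speed, m/s
        print(f"f={f} Hz, A=+/-{A_mm} mm -> peak speed {v:.3f} m/s")
```

At a fixed frequency, peak speed scales linearly with amplitude, which is consistent with the observation that larger amplitudes cool the electrode more and thus drive higher output power and larger lesions.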

Keywords: cardiac ablation, electrode cooling, lesion size, tissue temperature

Procedia PDF Downloads 371