Search results for: single phase
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4482

462 Synergistic Effect of Chondroinductive Growth Factors and Synovium-Derived Mesenchymal Stem Cells on Regeneration of Cartilage Defects in Rabbits

Authors: M. Karzhauov, A. Mukhambetova, M. Sarsenova, E. Raimagambetov, V. Ogay

Abstract:

Regeneration of injured articular cartilage remains one of the most difficult and unsolved problems in traumatology and orthopedics. Currently, surgical techniques for stimulating cartilage regeneration in damaged joints, such as multiple microperforation, mosaic chondroplasty, abrasion and microfracture, are used to treat cartilage defects. However, as clinical practice has shown, they cannot provide full and sustainable recovery of articular hyaline cartilage. In this regard, high hopes for the regeneration of cartilage defects are reasonably associated with tissue engineering approaches that restore the structural and functional characteristics of damaged joints using stem cells, growth factors and biopolymers or scaffolds. The purpose of the present study was to investigate the effects of chondroinductive growth factors and synovium-derived mesenchymal stem cells (SD-MSCs) on the regeneration of cartilage defects in rabbits. SD-MSCs were isolated from the synovial membrane of Flemish giant rabbits and expanded in complete culture medium (α-MEM). Rabbit SD-MSCs were characterized by CFU assay and by their ability to differentiate into osteoblasts, chondrocytes and adipocytes. The effects of growth factors (TGF-β1, BMP-2, BMP-4 and IGF-I) on MSC chondrogenesis were examined in micromass pellet cultures using histological and biochemical analysis. An articular cartilage defect (4 mm in diameter) in the intercondylar groove of the patellofemoral joint was created with a mosaic chondroplasty kit; the defect extended to the subchondral bone plate. Delivery of SD-MSCs and growth factors was conducted in combination with hyaluronic acid (HA). SD-MSC, growth factor and control groups were compared macroscopically and histologically at 10, 30, 60 and 90 days after intra-articular injection. 
Our comparative in vitro study revealed that TGF-β1 and BMP-4 are key chondroinductive factors for both the growth and chondrogenesis of SD-MSCs. The highest effect on MSC chondrogenesis was observed with the synergistic interaction of TGF-β1 and BMP-4. In addition, biochemical analysis of the chondrogenic micromass pellets revealed that the levels of glycosaminoglycans and DNA after combined treatment with TGF-β1 and BMP-4 were significantly higher in comparison to individual application of these factors. The in vivo study showed that complete regeneration of cartilage defects after intra-articular injection of SD-MSCs with HA takes 90 days. However, a single injection of SD-MSCs in combination with TGF-β1, BMP-4 and HA significantly increased the regeneration rate of the cartilage defects in rabbits: complete regeneration was observed 30 days after intra-articular injection. Thus, our in vitro and in vivo studies demonstrated that combined application of rabbit SD-MSCs with chondroinductive growth factors and HA produces a strong synergistic effect on chondrogenesis, significantly enhancing regeneration of the damaged cartilage.

Keywords: Mesenchymal stem cells, synovium, chondroinductive factors, TGF-β1, BMP-2, BMP-4, IGF-I

Procedia PDF Downloads 285
461 A Culture-Contrastive Analysis of the Communication between Discourse Participants in European Editorials

Authors: Melanie Kerschner

Abstract:

Language is our main means of social interaction, and news journalism, especially opinion discourse, holds a powerful position in this context. Editorials can be regarded as encounters of different, partially contradictory relationships between discourse participants constructed through the editorial voice. Their primary goal is to shape public opinion by commenting on events already addressed by other journalistic genres in the given newspaper. In doing so, the author tries to establish a consensus with the reader over the negotiated matter (i.e. the news event), while at the same time claiming authority over the “correct” description and evaluation of an event. Yet how can the relationship and the interaction between the discourse participants, i.e. the journalist, the reader and the news actors represented in the editorial, best be visualized and studied from a cross-cultural perspective? The present research project attempts to give insights into the role of (media) culture in British, Italian and German editorials. For this purpose the presenter will propose a basic framework, the so-called “pyramid of discourse participants”, comprising the author, the reader, two types of news actors and the semantic macro-structure (as a meta-level of analysis). Based on this framework, the following questions will be addressed:
• Which strategies does the author employ to persuade the reader and to prompt him/her to give an opinion (in the comment section)?
• In which ways (and with which linguistic tools) is editorial opinion expressed?
• Does the author use adjectives, adverbials and modal verbs to evaluate news actors, their actions and the current state of affairs, or does he/she prefer nominal labels?
• Which influence do language choice and the related media culture have on the representation of news events in editorials? 
• To what extent does the social context of a given media culture influence the amount of criticism and the way it is mediated so that it remains culturally acceptable?
The culture-contrastive study will examine 45 editorials (15 per media culture) from six national quality papers that are similar in distribution, importance and envisaged readership, in order to draw valuable conclusions about culturally motivated similarities and differences in the coverage and assessment of news events. The thematic orientation of the editorials is the NSA scandal and the reactions of various countries, as this topic was and still is relevant to each of the three media cultures. Starting out from the “pyramid of discourse participants” as the underlying framework, eight different criteria will be assigned to the individual discourse participants in the micro-analysis of the editorials. For the purpose of illustration, a single criterion, referring to the salience of authorial opinion, will be selected to demonstrate how the pyramid of discourse participants can be applied as a basis for empirical analysis. Extracts from the corpus will further aid understanding.

Keywords: Micro-analysis of editorials, culture-contrastive research, media culture, interaction between discourse participants, evaluation

Procedia PDF Downloads 489
460 Pregnancy Outcome in Women with HIV Infection from a Tertiary Care Centre of India

Authors: Kavita Khoiwal, Vatsla Dadhwal, K. Aparna Sharma, Dipika Deka, Plabani Sarkar

Abstract:

Introduction: About 2.4 million (1.93-3.04 million) people are living with HIV/AIDS in India. Of all HIV infections, 39% (930,000) are among women, and 5.4% of infections result from mother-to-child transmission (MTCT); 25,000 infected children are born every year. Besides the risk of mother-to-child transmission of HIV, these women are at higher risk of adverse pregnancy outcomes. The objectives of the study were to compare obstetric and neonatal outcomes in HIV-positive women with those in low-risk HIV-negative women, and to assess the effect of antiretroviral drugs on preterm birth and IUGR. Materials and Methods: This is a retrospective case record analysis of 212 HIV-positive women delivering between 2002 and 2015 in a tertiary health care centre, compared with 238 HIV-negative controls. Women who underwent medical termination of pregnancy and abortion were excluded from the study. Obstetric outcomes analyzed were pregnancy-induced hypertension, intrauterine growth restriction, preterm birth, anemia, gestational diabetes and intrahepatic cholestasis of pregnancy. Neonatal outcomes analyzed were birth weight, Apgar score, NICU admission and perinatal transmission. Out of 212 women, 204 received antiretroviral therapy (ART) to prevent MTCT: 27 women received single-dose nevirapine (sdNVP) or sdNVP tailed with 7 days of zidovudine and lamivudine (ZDV + 3TC), 15 received ZDV, 82 received duovir and 80 received triple drug therapy, depending upon the time of presentation. Results: The mean age of the 212 HIV-positive women was 25.72 ± 3.6 years; 101 women (47.6%) were primigravida. HIV-positive status was diagnosed during pregnancy in 200 women, while 12 women were diagnosed prior to conception. Among the 212 HIV-positive women, 20 (9.4%) had preterm delivery (< 37 weeks), 194 (91.5%) delivered by cesarean section and 18 (8.5%) delivered vaginally. 
178 neonates (83.9%) received exclusive top feeding and 34 neonates (16.0%) received exclusive breast feeding. When compared to low-risk HIV-negative women (n=238), HIV-positive women were more likely to deliver preterm (OR 1.27), have anemia (OR 1.39) and have intrauterine growth restriction (OR 2.07). The incidence of pregnancy-induced hypertension, diabetes mellitus and ICP was not increased. Mean birth weight was significantly lower in HIV-positive women (2593.6 ± 499 g) than in HIV-negative women (2919 ± 459 g). Complete follow-up is available for 148 neonates to date; the rest are under evaluation. Of these, 7 neonates were found to be HIV positive. The risk of preterm birth (p = 0.039) and IUGR (p = 0.739) was higher in HIV-positive women who did not receive any ART during pregnancy than in women who received ART. Conclusion: HIV-positive pregnant women are at increased risk of adverse pregnancy outcomes. A multidisciplinary team approach and the use of highly active antiretroviral therapy can optimize maternal and perinatal outcomes.
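The odds ratios above compare outcome frequencies between the HIV-positive and control groups. As a minimal sketch with hypothetical counts (the abstract does not report the raw 2x2 tables), an odds ratio can be computed from a 2x2 outcome table as follows:

```python
def odds_ratio(exposed_yes, exposed_no, control_yes, control_no):
    """Odds ratio for a 2x2 table: odds of the outcome in the exposed
    group divided by the odds of the outcome in the control group."""
    return (exposed_yes / exposed_no) / (control_yes / control_no)

# Hypothetical IUGR counts chosen to give an OR near the reported 2.07
# (illustrative only, not the study's data)
or_iugr = odds_ratio(30, 182, 18, 220)
```

The same helper applies to the preterm-birth and anemia comparisons by substituting their counts.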

Keywords: antiretroviral therapy, HIV infection, IUGR, preterm birth

Procedia PDF Downloads 246
459 Life-Cycle Assessment of Residential Buildings: Addressing the Influence of Commuting

Authors: J. Bastos, P. Marques, S. Batterman, F. Freire

Abstract:

Due to the demands of a growing urban population, it is crucial to manage urban development and its associated environmental impacts. While most environmental analyses have addressed buildings and transportation separately, both the design and the location of a building affect environmental performance; focusing on one or the other can shift impacts and overlook improvement opportunities for more sustainable urban development. Recently, several life-cycle (LC) studies of residential buildings have integrated user transportation, focusing exclusively on primary energy demand and/or greenhouse gas emissions. Additionally, most papers considered only private transportation (mainly the car). Although the car is likely to have the largest share both in terms of use and associated impacts, exploring the variability associated with mode choice is relevant for comprehensive assessments and, eventually, for supporting decision-makers. This paper presents a life-cycle assessment (LCA) of a residential building in Lisbon (Portugal), addressing building construction, use and user transportation (commuting by private and public transportation). Five environmental indicators or categories are considered: (i) non-renewable primary energy (NRE), (ii) greenhouse gas intensity (GHG), (iii) eutrophication (EUT), (iv) acidification (ACID), and (v) ozone layer depletion (OLD). In a first stage, the analysis addresses the overall life-cycle considering the statistical modal mix for commuting in the residence location. Then, a comparative analysis compares the different available transportation modes to address the influence that mode-choice variability has on the results. The results highlight the large contribution of transportation to the overall LC results in all categories. 
NRE and GHG show strong correlation, as the three LC phases contribute similar shares to both: building construction accounts for 6-9%, building use for 44-45%, and user transportation for 48% of the overall results. However, for the other impact categories there is large variation in the relative contribution of each phase. Transport is the most significant phase in OLD (60%); however, in EUT and ACID building use has the largest contribution to the overall LC (55% and 64%, respectively). In these categories, transportation accounts for 31-38%. A comparative analysis was also performed for four alternative transport modes for household commuting: car, bus, motorcycle, and company/school collective transport. The car has the largest impacts in all categories. When compared to the overall LC with commuting by car, mode choice accounts for a variability of about 35% in NRE, GHG and OLD (the categories where transportation accounted for the largest share of the LC), 24% in EUT and 16% in ACID. NRE and GHG show a strong correlation because all modes have internal combustion engines. The second-largest results for NRE, GHG and OLD are associated with commuting by motorcycle; however, for ACID and EUT this mode performs better than bus and company/school transport. No single transportation mode performed best in all impact categories. Integrated assessments of buildings are needed to avoid shifting impacts between life-cycle phases and environmental categories, and ultimately to support decision-makers.

Keywords: environmental impacts, LCA, Lisbon, transport

Procedia PDF Downloads 338
458 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation

Authors: Miguel Contreras, David Long, Will Bachman

Abstract:

Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments present challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and limitations on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline that predicts the morphology of cellular components for virtual-cell generation from fluorescence cell-membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained by fluorescence confocal microscopy and normalized through a software pipeline so that each image has a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted-light microscopy cell images, was trained on this set of normalized z-stacks using a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell-membrane fluorescence images as input. Predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training using one cell-membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images. 
Similar training sessions with improved membrane image quality (clear outline and shape of the membrane, showing the boundaries of each cell) proportionally improved nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need to use multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict other labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
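The per-image normalization step described in the Methods can be sketched as follows; this is a minimal example assuming the z-stack is held as a NumPy array (the array shapes and the helper name are illustrative, not taken from the authors' pipeline):

```python
import numpy as np

def normalize_stack(stack, target_mean=0.5):
    """Scale each image in a z-stack so its mean pixel intensity
    equals target_mean (0.5, as described in the abstract)."""
    stack = stack.astype(np.float64)
    out = np.empty_like(stack)
    for i, img in enumerate(stack):
        m = img.mean()
        # guard against division by zero on an empty frame
        out[i] = img * (target_mean / m) if m > 0 else img
    return out

# synthetic 20-image confocal z-stack, 64x64 pixels
z_stack = np.random.rand(20, 64, 64)
normalized = normalize_stack(z_stack)
```

Each frame is rescaled independently, so frames acquired at different depths with different overall brightness end up on a common intensity scale before training.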

Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models

Procedia PDF Downloads 181
457 The Effectiveness of Congressional Redistricting Commissions: A Comparative Approach Investigating the Ability of Commissions to Reduce Gerrymandering with the Wilcoxon Signed-Rank Test

Authors: Arvind Salem

Abstract:

Voters across the country are transferring the power of redistricting from state legislatures to commissions in order to secure “fairer” districts by curbing the influence of gerrymandering on redistricting. Gerrymandering, the intentional drawing of distorted districts to achieve political advantage, has become extremely prevalent, generating widespread voter dissatisfaction and leading states to adopt commissions for redistricting. However, the efficacy of these commissions is dubious: some argue that they constitute a panacea for gerrymandering, while others contend that commissions have relatively little effect. A result showing that commissions are effective would allay these doubts, supplying ammunition for activists across the country to advocate for commissions in their states and reducing the influence of gerrymandering across the nation. A result against commissions, however, may reaffirm doubts and pressure lawmakers to improve commissions or even abandon the commission system entirely. Additionally, these commissions are publicly funded, so voters have a financial interest in and responsibility to know whether they are effective. Currently, nine states place commissions in charge of redistricting: Arizona, California, Colorado, Michigan, Idaho, Montana, Washington, and New Jersey (Hawaii also has a commission but is excluded for reasons explained below). This study compares the degree of gerrymandering in the 2022 election (“after”) to that in the election in which voters decided to adopt commissions (“before”). The before-election provides a valuable benchmark for assessing the efficacy of commissions, since voters in those elections clearly found the districts unfair; comparing the current election to that one is therefore a good way to determine whether commissions have improved the situation. 
At the time Hawaii adopted its commission, the state elected its congressional delegation from a single at-large district, so its before-metrics could not be calculated and it was excluded. This study uses three metrics to quantify the degree of gerrymandering: the efficiency gap, the difference between the percentage of seats and the percentage of votes, and the mean-median difference. Each of these metrics has unique advantages and disadvantages, but together they form a balanced approach to quantifying gerrymandering. The study uses a Wilcoxon signed-rank test with the null hypothesis that the metric values after the election are greater than or equal to those before, and the alternative hypothesis that the values before the election are greater than those after, at a 0.05 significance level with an expected difference of 0. Accepting the alternative hypothesis would constitute evidence that commissions reduce gerrymandering to a statistically significant degree. However, this study could not conclude that commissions are effective. The p-values obtained for all three metrics (p = 0.42 for the efficiency gap, p = 0.94 for the difference between percentage of seats and percentage of votes, and p = 0.47 for the mean-median difference) were far above the threshold needed to conclude that commissions are effective. These results temper optimism about commissions and should spur serious discussion about their effectiveness and about ways to change them moving forward so that they can accomplish their goal of generating fairer districts.
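As a minimal sketch of the test procedure (with invented metric values, not the study's data), an exact one-sided Wilcoxon signed-rank p-value for eight paired before/after observations can be computed by enumerating all sign assignments:

```python
from itertools import product

def signed_rank_p(before, after):
    """Exact one-sided Wilcoxon signed-rank p-value (H1: before > after).
    Pure-Python sketch for small samples; assumes no zero differences
    and no tied absolute differences."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    # rank the differences by absolute value (rank 1 = smallest)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    rank = [0] * n
    for r, i in enumerate(order, start=1):
        rank[i] = r
    # observed statistic: sum of ranks of positive differences
    w_obs = sum(rank[i] for i in range(n) if diffs[i] > 0)
    # exact null distribution: all 2^n equally likely sign assignments
    count = sum(
        1
        for signs in product((0, 1), repeat=n)
        if sum(r for s, r in zip(signs, rank) if s) >= w_obs
    )
    return count / 2 ** n

# Hypothetical "before"/"after" efficiency-gap values (illustrative only)
before = [0.121, 0.080, 0.153, 0.050, 0.100, 0.070, 0.090, 0.110]
after  = [0.100, 0.092, 0.135, 0.059, 0.085, 0.064, 0.101, 0.090]
p = signed_rank_p(before, after)
```

With only eight paired states, the exact enumeration (256 sign patterns) is cheap; a large-sample normal approximation is unnecessary.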

Keywords: commissions, elections, gerrymandering, redistricting

Procedia PDF Downloads 57
456 Arthroscopic Superior Capsular Reconstruction Using the Long Head of the Biceps Tendon (LHBT)

Authors: Ho Sy Nam, Tang Ha Nam Anh

Abstract:

Background: Rotator cuff tears are a common problem in the aging population; the prevalence of massive rotator cuff tears varies across studies from 10% to 40%. An estimated 79% of irreparable rotator cuff tears (IRCTs), which are mostly associated with massive tear size, recur after surgical repair. Recent studies have shown that superior capsule reconstruction (SCR) for massive rotator cuff tears can be an effective technique, with promising clinical scores and preservation of glenohumeral stability. SCR techniques most commonly use either fascia lata autograft or dermal allograft, both of which have their own benefits and drawbacks (such as the potential for donor-site issues, allergic reactions, and high cost). We propose a simple SCR technique that uses the long head of the biceps tendon as a local autograft; the comorbidities related to graft harvesting are therefore eliminated. The proximal portion of the long head of the biceps tendon is relocated to the footprint and secured as the SCR, serving both to stabilize the glenohumeral joint and to maintain vascular supply to aid healing. Objective: The purpose of this study is to assess the clinical outcomes of patients with large to massive RCTs treated by SCR using the LHBT. Materials and methods: A study was performed of consecutive patients with large to massive RCTs who were treated by SCR using the LHBT between January 2022 and December 2022. One double-loaded suture anchor was used to secure the long head of the biceps to the middle of the footprint. Two more anchors were used to repair the rotator cuff with a single-row technique, placed anteriorly and posteriorly on the lateral side of the previously transposed LHBT. Results: The 3 men and 5 women had an average age of 61.25 years (range, 48 to 76 years) at the time of surgery. The average follow-up was 8.2 months (6 to 10 months) after surgery. 
The average preoperative ASES score was 45.8, and the average postoperative ASES score was 85.83. The average postoperative UCLA score was 29.12. The VAS score improved from 5.9 to 1.12. The mean preoperative range of motion (ROM) in forward flexion and external rotation of the shoulder was 72° ± 16° and 28° ± 8°, respectively. The mean postoperative ROM in forward flexion and external rotation was 131° ± 22° and 63° ± 6°, respectively. There were no cases of progression of osteoarthritis or rotator cuff muscle atrophy. Conclusion: SCR using the LHBT is a treatment option for patients with large or massive RC tears. It can restore superior glenohumeral stability and shoulder function, and it can be an effective procedure for selected patients, helping to avoid progression to cuff tear arthropathy.

Keywords: superior capsule reconstruction, large or massive rotator cuff tears, the long head of the biceps, stabilize the glenohumeral joint

Procedia PDF Downloads 60
455 Monitoring of Wound Healing Through Structural and Functional Mechanisms Using Photoacoustic Imaging Modality

Authors: Souradip Paul, Arijit Paramanick, M. Suheshkumar Singh

Abstract:

Traumatic injury is a leading worldwide health problem, and millions of surgical wounds are created annually in the course of routine medical care. The healing of these injuries is usually monitored by visual inspection, yet the maximal restoration of tissue functionality remains a significant concern of clinical care. Although minor injuries heal well with proper care and medical treatment, the healing of large injuries is negatively influenced by various factors (vascular insufficiency, tissue coagulation) and is often poor. Demographically, the number of people suffering from severe wounds and impaired healing is a burden on both human health and the economy. An incomplete understanding of the functional and molecular mechanisms of tissue healing often leads to a lack of proper therapies and treatment. Hence, strong and reliable medical guidance is necessary for monitoring tissue regeneration processes. Photoacoustic imaging (PAI), a non-invasive, hybrid imaging modality, can provide a suitable solution in this regard: light combined with sound offers structural, functional and molecular information from greater penetration depths. Therefore, molecular and structural mechanisms of tissue repair are readily observable with PAI both in the superficial layer and in deep tissue regions. Blood vessel formation and growth are essential components of tissue repair: these vessels supply nutrients and oxygen to cells in the wound region. Angiogenesis (the formation of new capillaries from existing blood vessels) contributes to new blood vessel formation during tissue repair, and the quality of healing depends directly on it. Other optical microscopy techniques can visualize angiogenesis at micron-scale penetration depths but cannot provide deep-tissue information. PAI overcomes this barrier due to its unique capability. 
It is ideally suited for deep-tissue imaging and provides the rich optical contrast generated by hemoglobin in blood vessels. Hence, early detection of angiogenesis with PAI supports monitoring of the medical treatment of the wound. Along with functional properties, mechanical properties also play a key role in tissue regeneration. A wound heals through a dynamic series of physiological events such as coagulation, granulation tissue formation, and extracellular matrix (ECM) remodeling. Therefore, changes in tissue elasticity can be identified using non-contact photoacoustic elastography (PAE). In a nutshell, angiogenesis and biomechanical properties are both critical parameters for tissue healing, and both can be characterized with a single imaging modality (PAI).

Keywords: PAT, wound healing, tissue coagulation, angiogenesis

Procedia PDF Downloads 83
454 Graphics Processing Unit-Based Parallel Processing for Inverse Computation of Full-Field Material Properties Based on Quantitative Laser Ultrasound Visualization

Authors: Sheng-Po Tseng, Che-Hua Yang

Abstract:

Motivation and Objective: Ultrasonic guided waves have become an important tool for the nondestructive evaluation of structures and components, used to identify defects or evaluate material properties in a nondestructive way. When guided waves are applied to evaluate material properties, the properties are not obtained directly; preliminary signals such as time-domain signals or frequency-domain spectra are first acquired. With the measured ultrasound data, an inversion calculation can then be employed to obtain the desired mechanical properties. Methods: This research develops a high-speed inversion technique for obtaining full-field mechanical properties from the quantitative laser ultrasound visualization system (QLUVS). The QLUVS employs a mirror-controlled scanning pulsed laser to generate guided acoustic waves traveling in a two-dimensional target; the guided waves are detected with a piezoelectric transducer at a fixed location. With gyro-scanning of the generation source, the QLUVS has the advantage of fast, full-field, and quantitative inspection. Results and Discussions: This research introduces two important tools to improve computation efficiency. First, graphics processing units (GPUs) with large numbers of cores are introduced. Second, combining CPU and GPU cores, a parallel processing scheme is developed for the inversion of full-field mechanical properties based on the QLUVS data. The newly developed inversion scheme is applied to investigate the computation efficiency for single-layered and double-layered plate-like samples. The computation is shown to be 80 times faster than the unparallelized scheme. Conclusions: This research demonstrates a high-speed inversion technique for the characterization of full-field material properties based on quantitative laser ultrasound visualization. 
Significant computation efficiency is shown, though the limit has not yet been reached; further improvement can be achieved by refining the parallel computation. Using this full-field mechanical-property inspection technology, full-field mechanical properties can be obtained by non-destructive, high-speed and high-precision measurements, with both qualitative and quantitative results. The developed high-speed computation scheme is ready for applications where full-field mechanical properties are needed in a nondestructive and nearly real-time way.
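The inversion parallelizes well because each scan location is inverted independently of its neighbors, which is what makes GPU execution pay off. A minimal CPU-side sketch of that data-parallel pattern, using an invented per-pixel relation (a toy Rayleigh-to-shear-wave conversion, not the authors' inversion scheme):

```python
import numpy as np

# Toy per-pixel inversion applied to a whole field at once. The relation
# v_S = v_R / 0.92 (approximate Rayleigh-to-shear-wave velocity) and
# G = rho * v_S**2 are illustrative stand-ins for the QLUVS inversion;
# the density and velocity map are synthetic assumptions.
rho = 2700.0                          # assumed density, kg/m^3
v_r = np.full((256, 256), 2900.0)     # synthetic Rayleigh-velocity map, m/s

v_s = v_r / 0.92                      # per-pixel inversion step
G = rho * v_s ** 2                    # full-field shear-modulus map, Pa
```

Because every pixel is independent, the same arithmetic maps directly onto GPU threads; the vectorized form above is the CPU analogue of such a kernel.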

Keywords: guided waves, material characterization, nondestructive evaluation, parallel processing

Procedia PDF Downloads 181
453 A Kunitz-Type Serine Protease Inhibitor from Rock Bream, Oplegnathus fasciatus Involved in Immune Responses

Authors: S. D. N. K. Bathige, G. I. Godahewa, Navaneethaiyer Umasuthan, Jehee Lee

Abstract:

Kunitz-type serine protease inhibitors (KTIs) have been identified in various organisms, including animals, plants and microbes. These proteins share single or multiple Kunitz inhibitory domains, linked together or associated with other types of domains. The characteristic Kunitz-type domain is composed of around 60 amino acid residues with six conserved cysteine residues, stabilized by three disulfide bridges. KTIs are involved in various physiological processes, such as ion channel blocking, blood coagulation, fibrinolysis and inflammation. In this study, a protein containing two Kunitz-type domains was identified from a rock bream database and designated RbKunitz. The coding sequence of RbKunitz encodes 507 amino acids with a theoretical molecular mass of 56.2 kDa and an isoelectric point (pI) of 5.7. Several functional domains, including a MANEC superfamily domain, a PKD superfamily domain, and an LDLa domain, were predicted in addition to the two characteristic Kunitz domains. Moreover, trypsin interaction sites were identified in the Kunitz domain. Homology analysis revealed that RbKunitz shares the highest identity (77.6%) with Takifugu rubripes. Twenty-eight completely conserved cysteine residues were recognized when RbKunitz was compared with orthologs from different taxonomic groups. This structural evidence indicates the rigidity of the RbKunitz folding structure, needed to achieve proper function. A phylogenetic tree constructed using the neighbor-joining method showed that the KTIs from fish and non-fish lineages evolved separately; rock bream clustered with Takifugu rubripes. SYBR Green qPCR was performed to quantify RbKunitz transcripts in different tissues, including immune-challenged tissues. RbKunitz mRNA was detected in all tissues analyzed (muscle, spleen, head kidney, blood, heart, skin, liver, intestine, kidney and gills), with the highest transcript level in gill tissue. 
The temporal transcription profile of RbKunitz in rock bream blood was analyzed upon LPS (lipopolysaccharide), poly I:C (polyinosinic:polycytidylic acid) and Edwardsiella tarda challenge to understand the immune responses of this gene. Compared to the unchallenged control, RbKunitz exhibited strong up-regulation at 24 h post-injection (p.i.) after LPS and E. tarda injection. A comparatively robust expression of RbKunitz was observed at 3 h p.i. upon poly I:C challenge. Taken together, these data indicate that RbKunitz may be involved in immune responses against pathogenic stress, protecting the rock bream.

Keywords: Kunitz-type, rock bream, immune response, serine protease inhibitor

Procedia PDF Downloads 359
452 Comprehensive, Up-to-Date Climate System Change Indicators, Trends and Interactions

Authors: Peter Carter

Abstract:

Comprehensive climate change indicators and trends inform the state of the climate (system) with respect to present and future climate change scenarios and the urgency of mitigation and adaptation. With data records now going back many decades, indicator trends can complement model projections. They are provided as datasets by several climate monitoring centers, reviewed by state-of-the-climate reports, and documented by the IPCC assessments. Up-to-date indicators are provided here. Rates of change are instructive, as are extremes. The indicators include greenhouse gas (GHG) emissions (natural and synthetic), cumulative CO2 emissions, atmospheric GHG concentrations (including CO2 equivalent), stratospheric ozone, surface ozone, radiative forcing, global average temperature increase, land temperature increase, zonal temperature increases, carbon sinks, soil moisture, sea surface temperature, ocean heat content, ocean acidification, ocean oxygen, glacier mass, Arctic temperature, Arctic sea ice (extent and volume), northern hemisphere snow cover, permafrost indices, Arctic GHG emissions, ice sheet mass, and sea level rise. Global warming is not the most reliable single metric for the climate state; radiative forcing, atmospheric CO2 equivalent, and ocean heat content are more reliable. Global warming does not capture committed future change, whereas atmospheric CO2 equivalent does. Cumulative carbon is used for estimating carbon budgets. The forcing of aerosols is briefly addressed. Indicator interactions are included; in particular, indicators can provide insight into several crucial global warming amplifying feedback loops, which are explained. All indicators are increasing (adversely), most as fast as ever and some faster. One particularly pressing indicator is rapidly increasing global atmospheric methane; in this respect, methane emissions and sources are covered in more detail. 
In their application, indicators used in assessing safe planetary boundaries are included. Indicators are considered with respect to recently published papers on possible catastrophic climate change and climate system tipping thresholds. They are relevant to climate change policy. In particular, relevant policies include the 2015 Paris Agreement on “holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels” and the 1992 UN Framework Convention on Climate Change, whose objective is the “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.”
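Since the abstract singles out radiative forcing and atmospheric CO2 equivalent as the more reliable metrics, it may help to show how the two are linked. The sketch below uses the widely cited simplified expression of Myhre et al. (1998), ΔF = 5.35 ln(C/C0) W m⁻²; the concentration and forcing values are illustrative, not taken from the abstract:

```python
import math

PREINDUSTRIAL_CO2 = 278.0  # ppm, a commonly used baseline

def co2_forcing(ppm, c0=PREINDUSTRIAL_CO2):
    """Simplified radiative forcing of CO2 in W/m^2 (Myhre et al. 1998)."""
    return 5.35 * math.log(ppm / c0)

def co2_equivalent(total_forcing_wm2, c0=PREINDUSTRIAL_CO2):
    """Invert the expression: the CO2 concentration that alone would
    produce the given total forcing from all GHGs combined."""
    return c0 * math.exp(total_forcing_wm2 / 5.35)

print(round(co2_forcing(420.0), 2))   # forcing of 420 ppm CO2 -> 2.21 W/m^2
print(round(co2_equivalent(3.2)))     # CO2-eq concentration for 3.2 W/m^2
```

The inversion is why CO2 equivalent carries the same information as total forcing, expressed as a single concentration.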

Keywords: climate change, climate change indicators, climate change trends, climate system change interactions

Procedia PDF Downloads 82
451 Improving Patient and Clinician Experience of Oral Surgery Telephone Clinics

Authors: Katie Dolaghan, Christina Tran, Kim Hamilton, Amanda Beresford, Vicky Adams, Jamie Toole, John Marley

Abstract:

During the COVID-19 pandemic, routine face-to-face outpatient appointments were not possible. This resulted in many branches of healthcare starting virtual clinics, and these clinics have continued following the return to face-to-face patient appointments. With these new types of clinic, it is important to ensure that a high standard of patient care is maintained. A quality improvement project was therefore carried out to enhance the patient and clinician experience of the telephone clinics while keeping them a safe, effective and efficient use of resources. The project began by developing a process map for the consultation process and agreeing the design of a driver diagram and tests of change. In plan-do-study-act (PDSA) cycle 1, a single consultant completed an online survey after every patient encounter over a 5-week period. Baseline patient responses were collected using a follow-up telephone survey for each patient. Piloting led to several iterations of both survey designs. Salient results of PDSA 1 included patients not receiving appointment letters, patients feeling more anxious about a virtual appointment, and many preferring a face-to-face appointment. The initial clinician data showed a positive response, with a provisional diagnosis being reached in 96.4% of encounters. In PDSA cycle 2, a patient information sheet was provided, and information leaflets relevant to the patients’ conditions were developed and sent following new patient telephone clinics, with follow-up survey analysis as before to monitor for signals of change. We also introduced the ability for patients to send images of their lesion prior to the consultation. Following the changes implemented, we noted an improvement in patient satisfaction, with many patients in fact preferring virtual clinics as they led to less disruption of their working lives. 
The extra reading material both before and after the appointments eased patients’ anxiety around virtual clinics and helped them to prepare for their appointment. Following the patient feedback, virtual clinics are now used for review patients as well, with all four consultants within the department continuing to utilise virtual clinics. This presentation will explore the progression of these clinics and the reasons they are still operating following the return to face-to-face appointments. The lessons gained using a QI approach have helped to deliver an optimal service that is valid and reliable, as well as safe, effective and efficient for the patient, while helping to reduce the pressure of ever-increasing waiting lists. In summary, our work in improving the quality of virtual clinics has resulted in improved patient satisfaction along with reduced pressure on the facilities of the health trust.

Keywords: clinic, satisfaction, telephone, virtual

Procedia PDF Downloads 45
450 Ecosystem Approach in Aquaculture: From Experimental Recirculating Multi-Trophic Aquaculture to Operational System in Marsh Ponds

Authors: R. Simide, T. Miard

Abstract:

Integrated multi-trophic aquaculture (IMTA) is used to reduce waste from aquaculture and increase productivity through co-cultured species. In this study, we designed a recirculating multi-trophic aquaculture system which requires low energy consumption, low water renewal and little maintenance. European seabass (Dicentrarchus labrax) were raised with co-cultured sea urchins (Paracentrotus lividus), detritivorous polychaetes fed on settled particulate matter, mussels (Mytilus galloprovincialis) used to extract suspended matter, macroalgae (Ulva sp.) used to take up dissolved nutrients, and gastropods (Phorcus turbinatus) used to clean the series of 4 tanks of fouling. The experiment was performed in triplicate during one month in autumn under an experimental greenhouse at the Institute Océanographique Paul Ricard (IOPR). Thanks to the absence of a physical filter, no pump was needed to pressurize the water: flow was driven by a single air-lift followed by gravity. Total suspended solids (TSS), biochemical oxygen demand (BOD5), turbidity, phytoplankton estimation and dissolved nutrients (ammonium NH₄⁺, nitrite NO₂⁻, nitrate NO₃⁻ and phosphate PO₄³⁻) were measured weekly, while dissolved oxygen and pH were continuously recorded. Dissolved nutrients stayed below the detection threshold during the experiment. BOD5 decreased between the fish and macroalgae tanks. TSS increased sharply after 2 weeks and then decreased by the end of the experiment. These results show that bioremediation can be used effectively in an aquaculture system to maintain optimal growing conditions. Fish were the only species fed an external product (commercial fish pellets); the other, extractive, species fed on waste streams from the tank above or, in the case of the sea urchins, on Ulva produced by the system. In this way, between fish aquaculture alone and the addition of the extractive species, biomass productivity increased by a factor of 5.7. 
In other words, the food conversion ratio dropped from 1.08 with fish only to 0.189 including all species. This experimental recirculating multi-trophic aquaculture system was efficient enough to reduce waste and increase productivity. This technology has since been reproduced at a commercial scale: the IOPR, in collaboration with the Les 4 Marais company, ran a recirculating IMTA system for 6 months in 8000 m² of water allocated between 4 marsh ponds. A similar air-lift and gravity recirculating system was designed, and a single fed species of shrimp (Palaemon sp.) was grown alongside 3 extractive species. Thanks to this joint work at laboratory and commercial scales, we are able to evaluate the IMTA system and discuss this sustainable aquaculture technology.
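The two headline numbers are consistent with the definition of the food conversion ratio (FCR = feed input / biomass gain): scaling the fish-only gain by the reported 5.7-fold biomass increase recovers the reported system FCR. A quick check (the feed mass is an arbitrary illustrative value; only the ratios come from the abstract):

```python
# Food conversion ratio: FCR = feed input / biomass gain.
# Feed mass below is an arbitrary illustrative value; only the ratios
# (fish-only FCR of 1.08, ~5.7x total biomass) come from the abstract.

feed_kg = 100.0
fish_gain_kg = feed_kg / 1.08          # implied fish biomass gain
total_gain_kg = fish_gain_kg * 5.7     # extractive species multiply output 5.7x

system_fcr = feed_kg / total_gain_kg
print(round(system_fcr, 3))  # 0.189, matching the reported system FCR
```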

Keywords: bioremediation, integrated multi-trophic aquaculture (IMTA), laboratory and commercial scales, recirculating aquaculture, sustainable

Procedia PDF Downloads 136
449 On the Development of Evidential Contrasts in the Greater Himalayan Region

Authors: Marius Zemp

Abstract:

Evidentials indicate how the speaker obtained the information conveyed in a statement. Detailed diachronic-functional accounts of evidential contrasts found in the Greater Himalayan Region (GHR) reveal that contrasting evidentials are not only defined against each other but also that most of them once had different aspecto-temporal (TA) values which must have aligned when their contrast was conventionalized. Based on these accounts, the present paper sheds light on hitherto unidentified mechanisms of grammatical change. The main insights of the present study were facilitated by ‘functional reconstruction’, which (i) revolves around morphemes which appear to be used in divergent ways within a language and/or across different related languages, (ii) persistently devises hypotheses as to how these functional divergences may have developed, and (iii) retains those hypotheses which most plausibly and economically account for the data. Based on the dense and detailed grammatical literature on the Tibetic language family, the author of this study is able to reconstruct the initial steps by which its evidentiality systems developed: By the time Proto-Tibetan started to spread across much of Central Asia in the 7th century CE, verbal concatenations with and without a connective -s had become common. As is typical of resultative constructions around the globe, Proto-Tibetan *V-s-’dug ‘was there, having undergone V’ (employing the simple past of ’dug ‘stay, be there’) allowed both for a perfect reading (‘the state resulting from V holds at the moment of speech’) and an inferential reading (‘(I infer from its result that) V has taken place’). 
In Western Tibetic, *V-s-’dug grammaticalized in its perfect meaning as it became contrasted with perfect *V-s-yod ‘is there, having undergone V’ (employing the existential copula yod); that is, *V-s-’dug came to mean that the speaker directly witnessed the profiled result of V, whereas *V-s-yod came to mean that the speaker does not depend on direct evidence of the result, as s/he simply knows that it holds. In Eastern Tibetic, on the other hand, V-s-’dug grammaticalized in its inferential past meaning as it became contrasted with past *V-thal ‘went past V-ing’ (employing the simple past of thal ‘go past’); that is, *V-s-’dug came to mean that the profiled past event was inferred from its result, while *V-thal came to mean that it was directly witnessed. Hence, depending on whether it became contrasted with a perfect or a past construction, resultative V-s-’dug grammaticalized either its direct evidential perfect or its inferential past function. This means that in both cases, evidential readings of constructions with distinct but overlapping TA-values became contrasted, and in order for their contrasting meanings to grammaticalize, the constructions had to agree on their tertium comparationis, which was their shared TA-value. By showing that other types of evidential contrasts in the GHR are also TA-aligned, while no single markers (or privative contrasts) are found to have grammaticalized evidential functions, the present study suggests that, at least in this region of the world, evidential meanings grammaticalize only in equipollent contrasts, which always end up TA-aligned.

Keywords: evidential contrasts, functional-diachronic accounts, grammatical change, Himalayan languages, tense/aspect-alignment

Procedia PDF Downloads 109
448 The Use of Brachytherapy in the Treatment of Liver Metastases: A Systematic Review

Authors: Mateusz Bilski, Jakub Klas, Emilia Kowalczyk, Sylwia Koziej, Katarzyna Kulszo, Ludmiła Grzybowska- Szatkowska

Abstract:

Background: Liver metastases are a common complication of primary solid tumors and significantly reduce patient survival. In the era of increasing diagnosis of oligometastatic disease and oligoprogression, methods of local treatment of metastases, i.e. metastasis-directed therapy (MDT), are becoming more important, and such treatment can be considered for liver metastases. To date, the mainstay of treatment for oligometastatic disease has been surgical resection, but not all patients qualify for the procedure. As an alternative to surgical resection, radiotherapy techniques have become available, including stereotactic body radiation therapy (SBRT) and high-dose interstitial brachytherapy (iBT). iBT is an invasive method that delivers very high doses of radiation from the inside of the tumor outward. This technique provides better tumor coverage than SBRT while having little impact on surrounding healthy tissue, and it eliminates some concerns involving respiratory motion. Methods: We conducted a systematic review of the scientific literature on the use of brachytherapy in the treatment of liver metastases from 2018 to 2023 using PubMed and ResearchGate, according to PRISMA rules. Results: From 111 articles, 18 publications containing information on 729 patients with liver metastases were selected. iBT has been shown to provide high rates of tumor control. Among 14 patients with 54 unresectable RCC liver metastases, local tumor control after iBT was 92.6% during a median follow-up of 10.2 months, and PFS was 3.4 months. In an analysis of 167 patients treated with a single fractional dose of 15-25 Gy of brachytherapy, at 6- and 12-month follow-up, LRFS rates of 88.4-88.7% and 70.7-71.5%, PFS of 78.1% and 53.8%, and OS of 92.3-96.7% and 76.3-79.6%, respectively, were achieved. No serious complications were observed in any patient. 
Distant intrahepatic progression occurred later in patients with unresectable liver metastases after brachytherapy (PFS: 19.80 months) than in HCC patients (PFS: 13.50 months). A significant difference in LRFS between CRC patients (84.1% vs. 50.6%) and other histologies (92.4% vs. 92.4%) was noted, suggesting that a higher treatment dose is necessary for CRC patients. The average target dose for metastatic colorectal cancer was 40-60 Gy (compared to 100-250 Gy for HCC). To better assess sensitivity to therapy and predict side effects, it has been suggested that humoral mediators be evaluated. It was also shown that baseline levels of TNF-α, MCP-1 and VEGF, as well as NGF and CX3CL, correlated with both tumor volume and radiation-induced liver damage, one of the most serious complications of iBT, indicating their potential role as biomarkers of therapy outcome. Conclusions: The use of brachytherapy in the treatment of liver metastases of various cancers appears to be an interesting and relatively safe therapeutic alternative to surgery. An important challenge remains the selection of an appropriate brachytherapy method and radiation dose for the corresponding initial tumor type from which the metastasis originated.

Keywords: liver metastases, brachytherapy, CT-HDRBT, iBT

Procedia PDF Downloads 89
447 Efficient Synthesis of Highly Functionalized Biologically Important Spirocarbocyclic Oxindoles via Hauser Annulation

Authors: Kanduru Lokesh, Venkitasamy Kesavan

Abstract:

The unique structural features of spiro-oxindoles, together with their diverse biological activities, have made them privileged structures in new drug discovery. The key structural characteristic of these compounds is the spiro ring fused at the C-3 position of the oxindole core with varied heterocyclic motifs. Structural diversification of heterocyclic scaffolds to synthesize new chemical entities as pharmaceuticals and agrochemicals is one of the important goals of synthetic organic chemists. Nitrogen- and oxygen-containing heterocycles are by far the most widely occurring privileged structures in medicinal chemistry. The structural complexity and distinct three-dimensional arrangement of functional groups of these privileged structures are generally responsible for their specificity against biological targets. Structurally diverse compound libraries have proved to be valuable assets for drug discovery against challenging biological targets. Thus, identifying new combinations of substituents at the C-3 position of the oxindole moiety is of great importance in drug discovery to improve the efficiency and efficacy of drugs. The development of suitable methodology for the synthesis of spiro-oxindole compounds has attracted much interest, often in response to the significant biological activity displayed by both natural and synthetic compounds. Creating structural diversity in oxindole scaffolds is thus a pressing need and a formidable challenge. A general way to improve synthetic efficiency, and also to access diversified molecules, is through annulation reactions. Annulation reactions allow the formation of complex compounds from simple substrates in a single transformation consisting of several steps, in an ecologically and economically favorable way. These observations motivated us to develop an annulation reaction protocol enabling the synthesis of a new class of spiro-oxindole motifs, which in turn would enhance molecular diversity. 
As part of our enduring interest in the development of novel, efficient synthetic strategies for biologically important oxindole-fused spirocarbocyclic systems, we have developed an efficient methodology for the construction of highly functionalized spirocarbocyclic oxindoles through [4+2] Hauser annulation of phthalides. To our knowledge, this is the first time functionalized spirocarbocyclic oxindoles have been accessed in the literature using the Hauser annulation strategy. The reaction between methyleneindolinones and arylsulfonylphthalides, catalyzed by cesium carbonate, gave access to a new class of biologically important spiro[indoline-3,2'-naphthalene] derivatives in very good yields. The synthetic utility of the annulated product was further demonstrated by fluorination using NFSI as the fluorinating agent to furnish the corresponding fluorinated product.

Keywords: Hauser-Kraus annulation, spirocarbocyclic oxindoles, oxindole-ester, fluorination

Procedia PDF Downloads 182
446 A Case Study Demonstrating the Benefits of Low-Carb Eating in an Adult with Latent Autoimmune Diabetes Highlights the Necessity and Effectiveness of These Dietary Therapies

Authors: Jasmeet Kaur, Anup Singh, Shashikant Iyengar, Arun Kumar, Ira Sahay

Abstract:

Latent autoimmune diabetes in adults (LADA) is an irreversible autoimmune disease that affects insulin production. LADA is characterized by the production of glutamic acid decarboxylase (GAD) antibodies, similar to type 1 diabetes. Individuals with LADA may eventually develop overt diabetes and require insulin. In this condition, the pancreas produces little or no insulin, the hormone the body uses to allow glucose to enter cells and produce energy. While type 1 diabetes was traditionally associated with children and teenagers, its prevalence has increased in adults as well. LADA is frequently misdiagnosed as type 2 diabetes, especially since LADA develops in adulthood, usually after age 30, when type 2 diabetes is more common. Managing LADA involves metabolic control with exogenous insulin and prolonging the life of surviving beta cells, thereby slowing the disease's progression. This case study examines the impact of approximately 3 months of low-carbohydrate dietary intervention in a 42-year-old woman with LADA who was initially misdiagnosed as having type 2 diabetes. Her c-peptide was 0.13 and her HbA1c was 9.3% when the trial began. Low-carbohydrate interventions have been shown to improve blood sugar levels, including fasting, post-meal, and random blood sugar levels, as well as haemoglobin levels, blood pressure, energy levels, sleep quality, and satiety. The low-carbohydrate dietary intervention significantly reduced both hypo- and hyperglycaemic events: during the 3 months of the study, there were 2 to 3 hyperglycaemic events, owing to physical stress, and a single hypoglycaemic event. Low-carbohydrate dietary therapies lessen insulin dose inaccuracy, which explains why there were fewer hyperglycaemic and hypoglycaemic events. In three months, the glycated haemoglobin (HbA1c) level was reduced from 9.3% to 6.3%. These improvements occurred without caloric restriction or physical activity. 
Stress management was a crucial aspect of the treatment plan, as stress-induced neuroendocrine hormones can cause immunological dysregulation. Additionally, supplements that support the immune system and reduce inflammation were used as part of the treatment during the trial. Long-term studies are needed to track disease development and corroborate the claim that such dietary treatments can prolong the honeymoon phase in LADA. Various factors can contribute to additional autoimmune attacks, so measuring c-peptide regularly is crucial to determine whether insulin doses need to be adjusted.

Keywords: autoimmune, diabetes, LADA, low-carb, nutrition

Procedia PDF Downloads 19
445 An Inquiry into the Usage of Complex Systems Models to Examine the Effects of the Agent Interaction in a Political Economic Environment

Authors: Ujjwall Sai Sunder Uppuluri

Abstract:

Group theory is a powerful tool that researchers can use to provide a structural foundation for their agent-based models, which this paper argues are the future of the social science disciplines. More specifically, researchers can use them to apply evolutionary theory to the study of complex social systems. This paper illustrates one example of how an agent-based model can theoretically be formulated from the application of group theory, systems dynamics, and evolutionary biology to analyze the strategies pursued by states to mitigate risk and maximize the use of resources in pursuit of economic growth. The example can be applied to other social phenomena, and this is what makes group theory so useful to the analysis of complex systems: the theory provides the mathematical, formulaic proof for validating the complex-system models that researchers build, as the paper will discuss. The aim of this research is also to provide researchers with a framework that can be used to model political entities such as states on a 3-dimensional plane, with the x-axis representing the resources (tangible and intangible) available to them, y the risks, and z the objective. There also exist other states with different constraints pursuing different strategies to climb the mountain. The mountain’s environment is made up of the risks the state faces and resource endowments. The mountain is also layered, in the sense that it has multiple peaks that must be overcome to reach the tallest peak. A state that sticks to a single strategy, or pursues a strategy that is not suited to the specific peak it has reached, is not able to continue its advance. To overcome the obstacle in its path, the state must innovate. Based on the definition of a group, we can categorize each state as being its own group. 
Each state is a closed system made up of micro-level agents who have their own vectors and pursue strategies (actions) to achieve sub-objectives. The state also has an identity, the inverse being anarchy and/or inaction. Finally, the agents making up a state interact with each other through competition and collaboration to mitigate risks and achieve sub-objectives that fall within the primary objective. Thus, researchers can categorize the state as an organism that reflects the sum of the output of the interactions pursued by agents at the micro level. When states compete, they employ strategies, and the state with the better strategy (reflected by the strategies pursued by its parts) is able to out-compete its counterpart to acquire some resource, mitigate some risk or fulfil some objective. This paper will attempt to illustrate how group theory, combined with evolutionary theory and systems dynamics, can allow researchers to model the long-run development, evolution, and growth of political entities through a bottom-up approach.
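As an illustration only, the state-as-group idea can be sketched as micro-level agents whose summed strategy outputs move the state along the resource, risk and objective axes. None of the names, coefficients or dynamics below come from the paper; they are invented for the sketch:

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    skill: float  # quality of the strategy this agent pursues (invented scale)

@dataclass
class State:
    agents: list
    resources: float = 0.0  # x: resources available
    risk: float = 1.0       # y: risks faced
    objective: float = 0.0  # z: progress toward the objective (the 'mountain')

    def step(self):
        # The state's output is the sum of its agents' interactions.
        output = sum(a.skill for a in self.agents)
        self.resources += output
        self.risk = max(0.0, self.risk - 0.1 * output)  # risk mitigation
        self.objective += output * (1.0 - self.risk)    # risk drags progress

rng = random.Random(0)
state = State(agents=[Agent(rng.uniform(0.1, 0.5)) for _ in range(5)])
for _ in range(20):
    state.step()
print(state.objective > 0, state.risk < 1.0)  # True True
```

The point of the sketch is only the bottom-up structure: macro trajectories emerge from summed micro-level agent output, as the abstract describes.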

Keywords: complex systems, evolutionary theory, group theory, international political economy

Procedia PDF Downloads 114
444 Role of Institutional Quality as a Key Determinant of FDI Flows in Developing Asian Economies

Authors: Bikash Ranjan Mishra, Lopamudra D. Satpathy

Abstract:

In the wake of the phenomenal surge in international business in recent decades, both the developed and developing economies around the world are in intense competition to attract more and more FDI flows. While the developed countries have marched ahead in the race, the developing countries, especially the Asian economies, have followed at a rapid pace. While most previous studies have analysed the role of institutional quality in the promotion of FDI flows in developing countries, very few have taken an integrated approach, examining the combined impact of institutional quality, globalization pattern and domestic financial development on FDI flows. In this context, the paper contributes to the literature in two important ways. Firstly, composite indices of institutional quality and domestic financial development for the Asian countries are constructed, in contrast to earlier studies that resort to a single variable to indicate institutional quality and domestic financial development. Secondly, the impact of these variables on FDI flows through their interaction with geographical region is investigated. The study uses panel data covering 1996 to 2012 for twenty Asian developing countries from the geographical regions of eastern, south-eastern, southern and western Asia, with an emphasis on the quality of institutions. Control of corruption, rule of law, regulatory quality, government effectiveness, political stability, and voice and accountability are used as indicators of institutional quality. Besides these, the study takes into account domestic credit to the public and private sectors and stock market credit as domestic financial indicators. First, in the model specification, a factor analysis is performed to reduce the large set of highly correlated determinants to a manageable size. 
Afterwards, a reduced version of the model is estimated with the extracted factors, in the form of indices, as independent variables, along with a set of control variables. It is found that the institutional quality index and the index of globalization exert a significant effect on FDI inflows of the host countries; in contrast, the domestic financial index does not seem to play a significant role. Finally, some robustness tests are performed to make sure that the results are not sensitive to temporal and spatial unobserved heterogeneity. From a policy point of view, one general inference can be drawn from the above study: the governments of these developing countries should strengthen their domestic institutions, both financial and non-financial. In addition, welfare policies should also target rapid globalization. If the financial and non-financial institutions of these developing countries become sound and the countries grow more globalized in the economic, social and political domains, they can attract greater FDI inflows, which will subsequently advance these economies.
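The index-construction step can be illustrated in simplified form. The paper uses factor analysis; the sketch below substitutes an equal-weighted composite of standardized (z-scored) indicators, and every indicator value is invented:

```python
from statistics import mean, stdev

def zscores(xs):
    """Standardize one indicator column (sample standard deviation)."""
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def composite_index(indicator_columns):
    """Equal-weighted composite of standardized indicators, one value per
    country-year row. A simplified stand-in for the paper's factor analysis."""
    z_cols = [zscores(col) for col in indicator_columns]
    n_rows = len(indicator_columns[0])
    return [mean(z[i] for z in z_cols) for i in range(n_rows)]

# Invented scores for, e.g., rule of law / control of corruption /
# government effectiveness across four country-year observations.
iq = composite_index([
    [0.2, 0.5, 0.9, 0.4],
    [0.1, 0.6, 0.8, 0.3],
    [0.3, 0.4, 0.7, 0.5],
])
print([round(v, 2) for v in iq])
```

Factor analysis additionally weights indicators by their loadings on common factors; the equal-weighted composite keeps only the dimensionality-reduction idea.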

Keywords: Asian developing economies, FDI, institutional quality, panel data

Procedia PDF Downloads 287
443 Anti-DNA Antibodies from Patients with Schizophrenia Hydrolyze DNA

Authors: Evgeny A. Ermakov, Lyudmila P. Smirnova, Valentina N. Buneva

Abstract:

Schizophrenia is associated with dysregulation of neurotransmitter processes in the central nervous system and disturbances in the humoral immune system, resulting in the formation of antibodies (Abs) to various components of nervous tissue. Abs to different neuronal receptors and to DNA have been detected in the blood of patients with schizophrenia. Abs hydrolyzing DNA have been detected in pools of polyclonal autoantibodies in autoimmune and infectious diseases; such catalytic Abs were named abzymes. It is believed that DNA-hydrolyzing abzymes are cytotoxic, cause nuclear DNA fragmentation, and induce cell death by apoptosis. Abzymes with DNase activity are also interesting because of their mechanism of formation and their potential use as diagnostic markers. Therefore, in our work we set the following goals: to determine the level of anti-DNA Abs in the serum of patients with schizophrenia, and to study the DNA-hydrolyzing activity of IgG from these patients. Materials and methods: Our study included 41 patients with a verified diagnosis of paranoid or simple schizophrenia and 24 healthy donors. Electrophoretically and immunologically homogeneous IgGs were obtained by sequential affinity chromatography of serum proteins on protein G-Sepharose and gel filtration. The levels of anti-DNA Abs were determined using ELISA. DNA-hydrolyzing activity was detected as the degree of conversion of supercoiled pBluescript DNA into circular and linear forms; the hydrolysis products were analyzed by agarose electrophoresis followed by ethidium bromide staining. To attribute the registered catalytic activity directly to the antibodies, we applied a number of strict criteria: electrophoretic homogeneity of the antibodies, gel filtration under acid-shock conditions, and in situ activity. Statistical analysis was performed in ‘Statistica 9.0’ using the non-parametric Mann-Whitney test. 
Results: The sera of approximately 30% of schizophrenia patients displayed a higher level of Abs interacting with single-stranded (ssDNA) and double-stranded (dsDNA) DNA compared with healthy donors. The average level of Abs interacting with ssDNA was only 1.1-fold lower than that of Abs interacting with dsDNA. IgGs from patients with schizophrenia were shown to possess DNA-hydrolyzing activity. Using affinity chromatography, electrophoretic analysis of the isolated IgG homogeneity, gel filtration under acid-shock conditions and in situ DNase activity analysis, we proved that the observed activity is an intrinsic property of the studied antibodies. We have shown that the relative DNase activity of IgG in patients with schizophrenia averaged 55.4±32.5%, while IgG of healthy donors showed much lower activity (average of 9.1±6.5%). It should be noted that the DNase activity of IgG in patients with schizophrenia with negative symptoms was significantly higher (73.3±23.8%) than in patients with positive symptoms (43.3±33.1%). Conclusion: Anti-DNA Abs of patients with schizophrenia not only bind DNA but also hydrolyze this substrate quite efficiently. The data show a correlation between the level of DNase activity and the leading symptoms of patients with schizophrenia.
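The group comparison reported here rests on the non-parametric Mann-Whitney test. A minimal stdlib illustration of the U statistic (midranks for ties, no p-value; the activity values are invented, not the study's data):

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for two independent samples.
    Uses midranks for ties; returns min(U1, U2). Illustration only."""
    combined = sorted(xs + ys)

    def rank(v):  # midrank of value v in the pooled sample
        lower = sum(1 for c in combined if c < v)
        equal = sum(1 for c in combined if c == v)
        return lower + (equal + 1) / 2

    n1, n2 = len(xs), len(ys)
    r1 = sum(rank(v) for v in xs)        # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)

# Invented relative DNase activity (%) for the two symptom groups
negative = [73, 88, 60, 95, 70]
positive = [43, 30, 55, 48, 20]
print(mann_whitney_u(negative, positive))  # 0.0 -> complete separation
```

A U of zero means every value in one group exceeds every value in the other; real data would then be referred to the U distribution (or a normal approximation) for a p-value.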

Keywords: anti-DNA antibodies, abzymes, DNA hydrolysis, schizophrenia

Procedia PDF Downloads 304
442 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high temporal-resolution continuous GBSAR data, including the extreme demand on random-access memory (RAM), the delay in producing displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and processes continuous GBSAR images unit by unit, where the images within a window form a basic unit. With this strategy, the RAM requirement is reduced to a single unit of images and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected because the chain retains temporarily-coherent pixels which are present only in certain units rather than throughout the whole observation period. The chain supports real-time processing of continuous data, so the delay in creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on the stack of images within a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence.
The temporally-averaged images are then processed by a dedicated interferometry procedure integrating advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially-coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the sub-millimetre level are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring in a wide range of scientific and practical applications.
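The unit-by-unit strategy of the first chain can be sketched as a generator that keeps only one window of images in memory at a time; the window size, overlap and index-based interface below are illustrative assumptions, not the package's actual API.

```python
def iter_units(n_images, unit_size, overlap=0):
    """Yield (start, end) index ranges covering n_images so that only
    one unit of images needs to be held in RAM at any moment."""
    step = unit_size - overlap
    start = 0
    while start < n_images:
        end = min(start + unit_size, n_images)
        yield (start, end)
        if end == n_images:
            break
        start += step

# 10 images, units of 4 with a 1-image overlap between consecutive units
print(list(iter_units(10, 4, overlap=1)))  # [(0, 4), (3, 7), (6, 10)]
```

Because each unit is processed and released before the next is loaded, the memory footprint is bounded by the unit size regardless of how long the continuous acquisition runs.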

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 138
441 Carbonyl Iron Particles Modified with Pyrrole-Based Polymer and Electric and Magnetic Performance of Their Composites

Authors: Miroslav Mrlik, Marketa Ilcikova, Martin Cvek, Josef Osicka, Michal Sedlacik, Vladimir Pavlinek, Jaroslav Mosnacek

Abstract:

Magnetorheological elastomers (MREs) are a unique type of material consisting of two components, a magnetic filler and an elastomeric matrix. Their properties can be tailored upon application of an external magnetic field. The change of the viscoelastic properties (viscoelastic moduli, complex viscosity) is influenced by two crucial factors. The first is the magnetic performance of the particles and the second is the off-state stiffness of the elastomeric matrix. The former factor strongly depends on the intended application; however, the general rule is that higher magnetic performance of the particles provides higher MR performance of the MRE. Since magnetic particles exhibit low stability against elevated temperature and acidic environments, several methods have been developed to improve these drawbacks. In most cases, the preparation of core-shell structures has been employed as a suitable method for protecting the magnetic particles against thermal and chemical oxidation. However, if the shell is not a single-layer substance but a polymer, the magnetic performance is significantly suppressed, because with the in situ polymerization technique it is very difficult to control the polymerization rate and the polymer shell becomes too thick. The second factor is the off-state stiffness of the elastomeric matrix. Since the MR effectivity is calculated as the elastic modulus upon magnetic field application divided by the elastic modulus in the absence of the external field, tuneability of the cross-linking reaction is also highly desired. Therefore, this study is focused on the controllable modification of magnetic particles using a novel monomeric system based on 2-(1H-pyrrol-1-yl)ethyl methacrylate. In this case, short polymer chains of different chain lengths and low polydispersity index will be prepared, and thus tailorable stability properties can be achieved.
Since relatively thin polymer chains will be grafted onto the surface of the magnetic particles, their magnetic performance will be affected only slightly. Furthermore, the cross-linking density will also be affected, due to the presence of the short polymer chains. From the application point of view, such MREs can be utilized for magneto-resistors, piezoresistors or pressure sensors, especially when a conducting shell is created on the magnetic particles. Therefore, the selection of the pyrrole-based monomer is crucial, as it allows a controllably thin layer of conducting polymer to be prepared. Finally, such composite particles, consisting of a magnetic core and a conducting shell dispersed in an elastomeric matrix, can also find utilization in the shielding of electromagnetic waves.
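The MR effectivity defined in the abstract (field-on elastic modulus divided by field-off elastic modulus) reduces to a one-line calculation; the modulus values below are hypothetical, chosen only to show why a lower off-state stiffness raises the ratio.

```python
def mr_effectivity(g_on, g_off):
    """Relative MR effect: elastic modulus with the field applied
    divided by the elastic modulus in the absence of the field."""
    return g_on / g_off

# Hypothetical storage moduli in kPa: the softer the off-state matrix,
# the larger the relative MR effect for the same field-on modulus
print(mr_effectivity(g_on=180.0, g_off=60.0))  # 3.0
print(mr_effectivity(g_on=180.0, g_off=120.0))  # 1.5
```

This is why tuneability of the cross-linking reaction matters: reducing the off-state modulus directly multiplies the measured MR effectivity.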

Keywords: atom transfer radical polymerization, core-shell, particle modification, electromagnetic waves shielding

Procedia PDF Downloads 191
440 Numerical Investigation of Flow Boiling within Micro-Channels in the Slug-Plug Flow Regime

Authors: Anastasios Georgoulas, Manolia Andredaki, Marco Marengo

Abstract:

The present paper investigates the hydrodynamics and heat transfer characteristics of slug-plug flows under saturated flow boiling conditions within circular micro-channels. Numerical simulations are carried out using an enhanced version of the open-source solver ‘interFoam’ of the OpenFOAM CFD toolbox. The proposed user-defined solver is based on the Volume of Fluid (VOF) method for interface advection, and the enhancements include the implementation of a smoothing process for spurious current reduction, coupling with heat transfer and phase change, and the incorporation of conjugate heat transfer to account for transient solid conduction. In all of the cases considered in the present paper, a single-phase simulation is initially conducted until a quasi-steady state is reached with respect to the hydrodynamic and thermal boundary layer development. Then, a predefined and constant frequency of successive vapour bubbles is patched upstream at a certain distance from the channel inlet. The proposed numerical set-up can capture the main hydrodynamic and heat transfer characteristics of slug-plug flow regimes within circular micro-channels. In more detail, the present investigation focuses on the interaction between subsequent vapour slugs with respect to their generation frequency, the hydrodynamic characteristics of the liquid film between the generated vapour slugs and the channel wall, and those of the liquid plug between two subsequent vapour slugs. The investigation is carried out for three different working fluids and three different values of applied heat flux in the heated part of the considered micro-channel. The post-processing and analysis of the results indicate that the dynamics of the evolving bubbles in each case are influenced by both the upstream and downstream bubbles in the generated sequence. In each case, a slip velocity between the vapour bubbles and the liquid slugs is evident.
In most cases, interfacial waves appear close to the bubble tail that significantly reduce the liquid film thickness. Finally, in accordance with previous investigations, vortices identified in the liquid slugs between two subsequent vapour bubbles can significantly enhance the convection heat transfer between the liquid regions and the heated channel walls. The overall results of the present investigation can enhance the present understanding by providing better insight into the complex heat transfer mechanisms that underpin saturated boiling within micro-channels in the slug-plug flow regime.
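The liquid film between a vapour slug and the channel wall, a central quantity in the analysis above, is commonly estimated in the literature with the classical Bretherton scaling h ≈ 1.34 R Ca^(2/3) at low capillary number; the fluid properties below are illustrative values, not output of the paper's simulations.

```python
def bretherton_film_thickness(radius, viscosity, velocity, surface_tension):
    """Estimate the liquid film thickness around an elongated bubble in a
    circular channel via the Bretherton (1961) scaling h = 1.34 * R * Ca**(2/3),
    valid for small capillary numbers Ca = mu * U / sigma."""
    ca = viscosity * velocity / surface_tension
    return 1.34 * radius * ca ** (2.0 / 3.0)

# Illustrative values: R = 250 µm channel, water-like fluid, bubble speed 0.5 m/s
h = bretherton_film_thickness(radius=250e-6, viscosity=1e-3,
                              velocity=0.5, surface_tension=0.06)
print(f"{h * 1e6:.1f} µm")  # ≈ 13.8 µm
```

Interfacial waves near the bubble tail, as reported above, locally thin the film well below this smooth-film estimate, which is one reason resolved VOF simulations are needed rather than correlations alone.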

Keywords: slug-plug flow regime, micro-channels, VOF method, OpenFOAM

Procedia PDF Downloads 244
439 A Laundry Algorithm for Colored Textiles

Authors: H. E. Budak, B. Arslan-Ilkiz, N. Cakmakci, I. Gocek, U. K. Sahin, H. Acikgoz-Tufan, M. H. Arslan

Abstract:

The aim of this study is to design a novel laundry algorithm for colored textiles that have a significant decoloring problem. In the experimental work, bleached knitted single jersey fabric made of 100% cotton and dyed with reactive dyestuff was utilized since, according to a conducted survey, cotton textiles are the products most demanded by textile consumers, and reactive dyestuffs are the ones most commonly used in the textile industry for dyeing cotton products. The fabric used in this study was therefore selected and purchased in accordance with the survey results. Fabric samples cut from this fabric were dyed with different dyeing parameters using Remazol Brilliant Red 3BS dyestuff in a Gyrowash machine under laboratory conditions. From the alternative reactive-dyed cotton fabric samples, the ones with a high tendency to color loss were determined and examined. Accordingly, the parameters of the dyeing processes used for these fabric samples were evaluated, and the dyeing process causing the highest tendency to color loss was selected, in order to reveal clearly the level of improvement in color loss achieved in this study. Afterwards, all of the untreated fabric samples were dyed with the selected dyeing process. When dyeing was completed, an experimental design for the laundering process was created using the Minitab® program, with temperature, time and mechanical action as parameters. All washing experiments were performed in a domestic washing machine: 16 experiments in total, with 8 different experimental conditions and 2 repeats for each condition. After each washing experiment, water samples from the main wash of the laundering process were measured with a UV spectrophotometer.
The values obtained were compared with the calibration curve of the materials used for the dyeing process. The results of the washing experiments were statistically analyzed with the Minitab® program. According to the results, the most suitable washing algorithm, in terms of the parameters temperature, time and mechanical action, for minimizing fabric color loss in domestic washing machines was chosen. The laundry algorithm proposed in this study minimizes the color loss of colored textiles in washing machines by eliminating the negative effects of the laundering parameters on the color of textiles without compromising proper basic cleaning action. Since fabric color loss is minimized with this washing algorithm, dyestuff residuals will also be lower in the grey water released from the laundering process. In addition, with this laundry algorithm it is possible to wash other types of textile products with a proper cleaning effect and minimized color loss.
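The experiment plan above (three factors, 8 conditions, 2 repeats, 16 runs) corresponds to a replicated 2³ full factorial design, which can be enumerated as follows; the factor levels shown are illustrative stand-ins for the actual Minitab® settings.

```python
from itertools import product

# Hypothetical two-level settings for each laundering parameter
temperatures = [20, 40]             # °C
times = [30, 60]                    # minutes
mechanical_actions = ["low", "high"]
replicates = 2

# Each run is (temperature, time, mechanical action, replicate number)
runs = [(t, m, a, r)
        for (t, m, a) in product(temperatures, times, mechanical_actions)
        for r in range(1, replicates + 1)]
print(len(runs))  # 8 conditions x 2 repeats = 16 washing experiments
```

Replication at each condition is what allows the pure experimental error to be estimated, so the effect of each laundering parameter on color loss can be tested statistically.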

Keywords: color loss, laundry algorithm, textiles, domestic washing process

Procedia PDF Downloads 327
438 A Review on Agricultural Landscapes as a Habitat of Rodents

Authors: Nadeem Munawar, Tariq Mahmood, Paula Rivadeneira, Ali Akhter

Abstract:

In this paper, we review rodent species that are common inhabitants of agricultural landscapes, where they are an important prey source for a wide variety of avian, reptilian, and mammalian predators. Agricultural fields are surrounded by fallow land, which provides suitable sites for shelter and breeding for rodents, while shrubs, grasses, annual weeds and forbs may provide supplementary food. The assemblage of rodent fauna in cropland habitats, including cropped fields, meadows and adjacent field structures such as hedgerows, woodland and field margins, fluctuates seasonally. Mature agricultural crops provide a good source of food and shelter to rodents, and these factors, along with favorable climatic conditions, facilitate the breeding activities of these rodent species. Changes in vegetation height and vegetative cover affect two important aspects of a rodent’s life: food and shelter. In addition, during the non-crop period vegetation can be important for building nests above or below ground, and it provides thermal protection for rodents from heat and cold. The review revealed that rodents form a very diverse group of mammals, ranging from tiny pygmy mice to big capybaras, from arboreal flying squirrels to subterranean mole rats, and from opportunistic omnivores (e.g. Norway rats) to specialist feeders (e.g. the North African fat sand rats that feed on a single family of plants only). It is therefore no surprise that some species thrive well under the conditions found in agricultural fields. The review of the population dynamics of these rodent species indicates that they are agricultural pests, probably due to the heterogeneous landscape and the high rotation of vegetable crop cultivation. They cause damage to various crops, directly and indirectly, by gnawing, spoilage, contamination and hoarding activities; besides this behavior, they also have significant importance in the agricultural habitat.
The burrowing activities of rodents alter the soil properties around their burrows, improving aeration and infiltration and increasing the water-holding capacity, and thus encourage plant growth. These properties are beneficial for the soil because they affect the absorption of phosphorus, zinc, copper and other nutrients and the uptake of water; rodents are thus known as indicator species in agricultural fields. Our review suggests that wide crop field borders, particularly those contiguous to various cropland fields, should be understood as priority sites for nesting, feeding, and cover for rodent fauna. The goal of this review paper is to provide a comprehensive synthesis of the understanding of rodent habitat and biodiversity in agricultural landscapes.

Keywords: agricultural landscapes, food, indicator species, shelter

Procedia PDF Downloads 146
437 Analyzing Social Media Discourses of Domestic Violence in Promoting Awareness and Support Seeking: An Exploratory Study

Authors: Sudha Subramani, Hua Wang

Abstract:

Domestic violence (DV) against women is now recognized to be a serious and widespread problem worldwide. There is a growing concern that violence against women has a global public health impact, as well as being a violation of human rights. Existing statistical surveys reveal a strong relationship between DV and health issues of women such as bruising, lacerations, depression, anxiety, flashbacks, sleep disturbances, hyper-arousal, emotional distress, and sexually transmitted diseases. This social problem is still considered a behind-closed-doors issue and a stigmatized topic. Women conceal their sufferings from family and friends, as they experience a lack of trust in others and feelings of shame and embarrassment in society. Hence, women survivors of DV experience barriers in seeking the support of specialized services such as health care, crisis support, and legal guidance. Fortunately, with the popularity of social media platforms like Facebook and Twitter, people share their opinions and emotional feelings to seek social and emotional support, sympathetic encouragement, compassion and empathy from the public. With regard to DV, social media plays a predominant role in creating awareness and promoting support services to the public, as we live in the golden era of social media. Various professionals, such as public health researchers, clinicians, psychologists, social workers, national family health organizations and lawyers, as well as victims and their family and friends, share unprecedentedly valuable information (personal opinions and experiences) on a single platform to improve the social welfare of the community. Though each individual tweet or post carries little informational value, the consolidation of millions of messages can generate actionable knowledge and provide valuable insights about public opinion in general.
Hence, this paper reports on an exploratory analysis of the effectiveness of social media for the unobtrusive assessment of attitudes and awareness towards DV. Mixed methods, combining qualitative analysis and text mining approaches, are used to understand social media disclosures of DV through the lenses of opinion sharing, anonymity, and support seeking. Leveraging the abundance of data publicly available on the web could help avoid the cost of wide-scale surveys while still maintaining appropriate research conditions. This analysis, with data enrichment and consolidation, would also be useful in assisting advocacy and national family health organizations to provide information about resources and support, raise awareness and counter common stigmatizing attitudes about DV.
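The coding of posts into themes such as opinion sharing and support seeking can be sketched as a minimal keyword-based tagger; the example posts and keyword sets are invented for illustration and are far simpler than the mixed-methods pipeline the study actually uses.

```python
import re

# Hypothetical theme lexicons; a real study would derive these from qualitative coding
THEMES = {
    "support_seeking": {"help", "advice", "support"},
    "opinion_sharing": {"think", "believe", "should"},
}

def tag_post(text):
    """Return the set of themes whose keywords appear in the post."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return {theme for theme, keys in THEMES.items() if words & keys}

posts = [
    "I need help and advice, where can I find support?",
    "I think shelters should get more funding.",
]
for post in posts:
    print(tag_post(post))
```

Aggregating such tags over millions of posts is what turns individually low-value messages into the population-level signal the abstract describes.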

Keywords: domestic violence, social media, social stigma and support, women health

Procedia PDF Downloads 265
436 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (APIs). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol and acetic acid) in all seven amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention-time variation and bad peak shape were identified for the acetic acid peak, due to the reaction of acetic acid with the stationary phase (cyanopropyl dimethyl polysiloxane) of the column and the dissociation of acetic acid in water (when used as diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as diluent to avoid these issues, whereas most of the methods published for acetic acid quantification by GC-HS use a derivatisation technique to protect acetic acid. As per the compendia, a risk-based approach was selected to determine the degree and extent of the validation process needed to assure the fitness of the procedure; therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit) and ±30% for the other levels, with a 95% confidence interval (5% risk profile). The method was developed using a DB-WAXetr column manufactured by Agilent (530 µm internal diameter, 2.0 µm film thickness, 30 m length). Helium at a constant flow of 6.0 mL/min in constant make-up mode was selected as the carrier gas.
The present method is simple, rapid, and accurate, and is suitable for the rapid analysis of isopropyl alcohol, ethanol, methanol and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol, and 100 ppm to 400 ppm for acetic acid, which covers the specification limits provided in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for the testing of residual solvents in amino acid drug substances.
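The total-error acceptance decision described above can be sketched numerically: combine bias and precision into a simple tolerance interval around the recovery and check it against the ±30% acceptance limits (±40% at the quantitation limit). The recovery data and coverage factor below are hypothetical, and the interval is a simplified stand-in for a proper β-expectation tolerance interval.

```python
import statistics

def total_error_interval(recoveries_pct, k=2.0):
    """Return (low, high) of a simple tolerance interval on relative error (%):
    bias ± k * standard deviation. k = 2 roughly corresponds to 95% coverage."""
    bias = statistics.mean(recoveries_pct) - 100.0
    s = statistics.stdev(recoveries_pct)
    return (bias - k * s, bias + k * s)

def accepted(recoveries_pct, limit_pct):
    """Accept the level if the whole interval lies within ±limit_pct."""
    low, high = total_error_interval(recoveries_pct)
    return low >= -limit_pct and high <= limit_pct

# Hypothetical recoveries (%) at a mid concentration level, ±30% acceptance limit
recoveries = [98.5, 102.1, 97.2, 101.0, 99.4, 100.8]
print(accepted(recoveries, limit_pct=30.0))  # True
```

The same check with limit_pct=40.0 would be applied at the quantitation-limit level, mirroring the wider acceptance criterion chosen in the validation.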

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 129
435 Linkages between Innovation Policies and SMEs' Innovation Activities: Empirical Evidence from 15 Transition Countries

Authors: Anita Richter

Abstract:

Innovation is one of the key foundations of competitive advantage, generating growth and welfare worldwide. Consequently, all firms should innovate to bring new ideas to the market. Innovation is a vital growth driver, particularly for transition countries moving towards knowledge-based, high-income economies. However, numerous barriers, such as financial, regulatory or infrastructural constraints, prevent new and small firms in transition countries in particular from innovating. Thus SMEs’ innovation output may benefit substantially from government support. This research paper aims to assess the effect of government interventions on innovation activities in SMEs in emerging countries. Until now, academic research related to innovation policies has focused either on single-country and/or high-income-country assessments, and less on cross-country and/or low- and middle-income countries. This paper therefore seeks to close the research gap by providing empirical evidence from 8,500 firms in 15 transition countries (Eastern Europe, South Caucasus, South East Europe, Middle East and North Africa). Using firm-level data from the Business Environment and Enterprise Performance Survey of the World Bank and EBRD, and policy data from the SME Policy Index of the OECD, the paper investigates how government interventions affect SMEs’ likelihood of investing in technological and non-technological innovation. Using standard linear regression, the impact of government interventions on SMEs’ innovation output and R&D activities is measured. The empirical analysis suggests that a firm’s decision to invest in innovative activities is sensitive to government interventions. A firm’s likelihood of investing in innovative activities increases by 3% to 8% if the innovation eco-system noticeably improves (measured by an increase of one level in the SME Policy Index). At the same time, a better eco-system encourages SMEs to invest more in R&D.
Government reforms establishing a dedicated policy framework (IP legislation), institutional infrastructure (science and technology parks, incubators) and financial support (public R&D grants, innovation vouchers) are particularly relevant to stimulating innovation performance in SMEs. Particular segments of the SME population, namely micro and manufacturing firms, are more likely to benefit from improved innovation framework conditions. The marginal effects are particularly strong on product innovation, process innovation, and marketing innovation, but less so on management innovation. In conclusion, government interventions supporting innovation are likely to lead to higher innovation performance in SMEs. They increase productivity at both firm and country level, which is a vital step in the transition towards knowledge-based market economies.
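The core estimate above — how much a one-level improvement in the SME Policy Index changes the probability of innovating — amounts to a slope in a linear probability model. The sketch below fits a one-predictor least-squares slope in pure Python on invented data, not the BEEPS sample.

```python
def ols_slope(x, y):
    """Least-squares slope of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Invented data: SME Policy Index level vs. share of firms investing in innovation
index_level = [1, 2, 3, 4, 5]
share_innovating = [0.20, 0.25, 0.28, 0.35, 0.40]
print(round(ols_slope(index_level, share_innovating), 3))  # 0.05
```

A slope of 0.05 would read as a 5-percentage-point increase in the likelihood of innovating per index level, which sits inside the 3% to 8% range the abstract reports.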

Keywords: innovation, research and development, government interventions, economic development, small and medium-sized enterprises, transition countries

Procedia PDF Downloads 307
434 Using the ISO 9705 Room Corner Test for Smoke Toxicity Quantification of Polyurethane

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Polyurethane (PU) foam is typically sold as acoustic foam that is often used as sound insulation in settings such as night clubs and bars. As a construction product, PU is tested by being glued to the walls and ceiling of the ISO 9705 room corner test room. However, when heat is applied to PU foam, it melts and burns as a pool fire because it is a thermoplastic. The current test layout is unable to accurately measure mass loss and does not allow the material to burn as a pool fire without seeping through the test room floor. The lack of mass loss measurement means that gas yields pertaining to smoke toxicity analysis cannot be calculated, which makes comparisons with data from any other material or test method difficult. Additionally, the heat release measurements are not representative, as much of the material seeps through the floor (when a tray to catch the melted material is not used). This research aimed to modify the ISO 9705 test to provide the ability to measure mass loss, allowing better calculation of gas yields and understanding of decomposition. It also aimed to accurately measure smoke toxicity in both the doorway and the duct and to enable dilution factors to be calculated. Finally, the study aimed to examine whether doubling the fuel loading would force under-ventilated flaming. The test layout was modified to be a combination of the SBI (single burning item) test set up inside the ISO 9705 test room. Polyurethane was tested in two different configurations with the aim of altering the ventilation condition of the tests: test one was conducted using one SBI test rig, aiming for well-ventilated flaming, and test two was conducted using two SBI rigs facing each other inside the test room (doubling the fuel loading), aiming for under-ventilated flaming.
The two configurations were successful in achieving both well-ventilated and under-ventilated flaming, as shown by the equivalence ratios measured using a phi meter designed and built for these experiments. The findings show that doubling the fuel loading successfully forces under-ventilated flaming conditions. This method can therefore be used when trying to replicate post-flashover conditions in future ISO 9705 room corner tests. The radiative heat generated by the two SBI rigs facing each other facilitated a much higher overall heat release, resulting in a more severe fire. The method successfully allowed accurate measurement of the smoke toxicity produced by the PU foam in terms of oxygen depletion and simple gases such as CO and CO2. Overall, the proposed test modifications improve the ability to measure the smoke toxicity of materials under different fire conditions on a large scale.
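The ventilation condition characterised by the phi meter is the equivalence ratio φ, the actual fuel-to-air ratio divided by the stoichiometric one. A minimal sketch of the definition follows; the stoichiometric ratio and mass flows are illustrative numbers, not measured values for this PU foam.

```python
def equivalence_ratio(fuel_flow, air_flow, stoich_fuel_air_ratio):
    """phi = (fuel/air actual) / (fuel/air stoichiometric).
    phi < 1: fuel-lean (well-ventilated); phi > 1: fuel-rich (under-ventilated)."""
    return (fuel_flow / air_flow) / stoich_fuel_air_ratio

# Illustrative mass flows (kg/s); doubling the fuel loading doubles phi
print(equivalence_ratio(0.04, 0.5, 0.08))  # 1.0 -> stoichiometric
print(equivalence_ratio(0.08, 0.5, 0.08))  # 2.0 -> fuel-rich, under-ventilated
```

This is the quantitative sense in which doubling the fuel loading with a second SBI rig pushes the fire towards under-ventilated flaming at a fixed air supply.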

Keywords: flammability, ISO9705, large-scale testing, polyurethane, smoke toxicity

Procedia PDF Downloads 53
433 Investigation of Cavitation in a Centrifugal Pump Using Synchronized Pump Head Measurements, Vibration Measurements and High-Speed Image Recording

Authors: Simon Caba, Raja Abou Ackl, Svend Rasmussen, Nicholas E. Pedersen

Abstract:

It is a challenge to directly monitor cavitation in a pump application during operation because of a lack of visual access to validate the presence of cavitation and its form of appearance. In this work, experimental investigations are carried out in an inline single-stage centrifugal pump with optical access, providing an opportunity to enhance the value of CFD tools and standard cavitation measurements. Experiments are conducted using two impellers running in the same volute at 3000 rpm and the same flow rate. One of the impellers is optimized for lower NPSH₃% by its blade design, whereas the other is manufactured using a standard casting method. Cavitation is detected by pump performance measurements, vibration measurements and high-speed image recordings. The head drop and the pump casing vibration caused by cavitation are correlated with the visual appearance of the cavitation. The vibration data are recorded in the axial direction of the impeller using accelerometers at a sample rate of 131 kHz. The vibration frequency-domain data (up to 20 kHz) and the time-domain data are analyzed, as well as the root mean square values. The high-speed recordings, focusing on the impeller suction side, are taken at 10,240 fps to provide insight into the flow patterns and the cavitation behavior in the rotating impeller. The videos are synchronized with the vibration time signals by a trigger signal. A clear correlation between cloud collapses and abrupt peaks in the vibration signal can be observed. The vibration peaks clearly indicate cavitation, especially at higher NPSHA values where the hydraulic performance is not affected. It is also observed that below a certain NPSHA value, cavitation starts in the inlet bend of the pump; above this value, cavitation occurs exclusively on the impeller blades.
The impeller optimized for NPSH₃% does show a lower NPSH₃% than the standard impeller, but its head drop starts at a higher NPSHA value and is more gradual. Instabilities in the head drop curve of the optimized impeller were observed, in addition to a higher vibration level. Furthermore, the cavitation clouds on the suction side appear more unsteady with the optimized impeller. The shape and location of the cavitation are compared to 3D fluid flow simulations, and the simulation results are in good agreement with the experimental investigations. In conclusion, these investigations attempt to give a more holistic view of the appearance of cavitation by comparing the head drop, vibration spectral data, vibration time signals, image recordings and simulation results. The data indicate that a criterion for cavitation detection could be derived from the time-domain vibration measurements, which requires further investigation. Usually, spectral data are used to analyze cavitation, but these investigations indicate that the time domain could be more appropriate for some applications.
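The correlation between cloud collapses and abrupt vibration peaks suggests a simple time-domain detection criterion: flag samples whose amplitude exceeds a multiple of the record's RMS value. The signal and threshold factor below are synthetic illustrations, not the 131 kHz accelerometer data.

```python
import math

def rms(signal):
    """Root mean square of a sampled signal."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def find_peaks(signal, factor):
    """Indices where |sample| exceeds factor * RMS of the whole record."""
    threshold = factor * rms(signal)
    return [i for i, s in enumerate(signal) if abs(s) > threshold]

# Synthetic record: low-level noise with two cavitation-like bursts;
# the whole-record RMS already includes the bursts, so a modest factor is used
signal = [0.1, -0.1, 0.1, -0.1, 2.5, 0.1, -0.1, 0.1, -3.0, 0.1]
print(find_peaks(signal, factor=1.5))  # [4, 8]
```

A practical detector would estimate the baseline RMS from burst-free segments (or a running window) rather than the whole record, but the sketch shows why the time domain can expose individual collapse events that spectral averaging smears out.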

Keywords: cavitation, centrifugal pump, head drop, high-speed image recordings, pump vibration

Procedia PDF Downloads 161