Search results for: single supervision mechanism (SSM)
517 Magnetic Solid-Phase Separation of Uranium from Aqueous Solution Using High Capacity Diethylenetriamine Tethered Magnetic Adsorbents
Authors: Amesh P, Suneesh A S, Venkatesan K A
Abstract:
Magnetic solid-phase extraction is a relatively new method among solid-phase extraction techniques for separating metal ions from aqueous solutions such as mine water, groundwater, and contaminated wastes. However, bare magnetic particles (Fe3O4) exhibit poor selectivity due to the absence of target-specific functional groups for sequestering metal ions. The selectivity of these magnetic particles can be remarkably improved by covalently tethering task-specific ligands on the magnetic surfaces. Magnetic particles offer a number of advantages, such as quick phase separation aided by an external magnetic field. The solid adsorbent can be prepared with particle sizes ranging from a few micrometers down to the nanometer scale, which offers further advantages such as enhanced extraction kinetics and higher extraction capacity. Conventionally, magnetite (Fe3O4) particles are prepared by the hydrolysis and co-precipitation of ferrous and ferric salts in aqueous ammonia solution. Since the covalent linking of task-specific functionalities on Fe3O4 is difficult, and Fe3O4 is susceptible to redox reactions in the presence of acid or alkali, it is necessary to modify the surface of Fe3O4 by silica coating. This coating is usually carried out by hydrolysis and condensation of tetraethyl orthosilicate over the surface of magnetite to yield a thin layer of silica-coated magnetite particles. Since silica-coated magnetite particles are amenable to further surface modification, they can be reacted with task-specific functional groups to obtain functionalized magnetic particles. The surface area of such magnetic particles usually falls in the range of 50 to 150 m2.g-1, which offers advantages, such as quick phase separation, over other solid-phase extraction systems.
In addition, magnetic (Fe3O4) particles covalently linked to a mesoporous silica matrix (MCM-41) bearing task-specific ligands offer further advantages in terms of extraction kinetics, high stability, longer reusable cycles, and metal extraction capacity, due to the large surface area, ample porosity, and enhanced number of functional groups per unit area of these adsorbents. In view of this, the present paper deals with the synthesis of a uranium-specific diethylenetriamine (DETA) ligand anchored on silica-coated magnetite (Fe-DETA) as well as on magnetic mesoporous silica (MCM-Fe-DETA), and studies on the extraction of uranium from aqueous solution spiked with uranium to mimic mine water or groundwater contaminated with uranium. The synthesized solid-phase adsorbents were characterized by FT-IR, Raman, TG-DTA, XRD, and SEM. The extraction behavior of uranium on the solid phase was studied under several conditions, such as the effect of pH, initial concentration of uranium, rate of extraction and its variation with pH and initial uranium concentration, and the effect of interfering ions such as CO32-, Na+, Fe2+, Ni2+, and Cr3+. A maximum extraction capacity of 233 mg.g-1 was obtained for Fe-DETA, and a very high capacity of 1047 mg.g-1 for MCM-Fe-DETA. The mechanism of extraction, speciation of uranium, extraction studies, reusability, and the other results obtained in the present study suggest that Fe-DETA and MCM-Fe-DETA are potential candidates for the extraction of uranium from mine water and groundwater.
Keywords: diethylenetriamine, magnetic mesoporous silica, magnetic solid-phase extraction, uranium extraction, wastewater treatment
Procedia PDF Downloads 167
516 Sorbitol Galactoside Synthesis Using β-Galactosidase Immobilized on Functionalized Silica Nanoparticles
Authors: Milica Carević, Katarina Banjanac, Marija Ćorović, Ana Milivojević, Nevena Prlainović, Aleksandar Marinković, Dejan Bezbradica
Abstract:
Nowadays, considering the growing awareness of the beneficial effects of functional food on human health, due attention is dedicated to research in the field of obtaining new prominent products exhibiting improved physiological and physicochemical characteristics. Therefore, different approaches to the synthesis of valuable bioactive compounds have been proposed. β-Galactosidase, for example, although mainly utilized as a hydrolytic enzyme, has proved to be a promising tool for these purposes. Namely, under particular conditions, such as high lactose concentration, elevated temperatures, and low water activities, the transfer of a galactose moiety to the free hydroxyl group of an alternative acceptor (e.g., different sugars, alcohols, or aromatic compounds) can generate a wide range of potentially interesting products. Up to now, galacto-oligosaccharides and lactulose have attracted the most attention due to their inherent prebiotic properties. The goal of this study was to obtain a novel product, sorbitol galactoside, using a similar reaction mechanism, namely the transgalactosylation reaction catalyzed by β-galactosidase from Aspergillus oryzae. By using a sugar alcohol (sorbitol) as the alternative acceptor, a diverse mixture of potential prebiotics is produced, enabling more favorable functional features. Nevertheless, the introduction of an alternative acceptor into the reaction mixture contributed to the complexity of the reaction scheme, since several potential reaction pathways were introduced. Therefore, a thorough optimization using the response surface method (RSM) was performed in order to gain insight into the influences of the different parameters (lactose concentration, sorbitol to lactose molar ratio, enzyme concentration, NaCl concentration, and reaction time), as well as their mutual interactions, on product yield and productivity.
In view of product yield maximization, the obtained model predicted an optimal lactose concentration of 500 mM, a sorbitol to lactose molar ratio of 9, an enzyme concentration of 0.76 mg/ml, a NaCl concentration of 0.8 M, and a reaction time of 7 h. From the aspect of productivity, the optimum substrate molar ratio was found to be 1, while the values of the other factors coincide. In order to additionally improve enzyme efficiency and enable its reuse and potential continuous application, immobilization of β-galactosidase onto tailored silica nanoparticles was performed. These non-porous fumed silica nanoparticles (FNS) were chosen on the basis of their biocompatibility and non-toxicity, as well as their advantageous mechanical and hydrodynamic properties. However, in order to achieve better compatibility between the enzyme and the carrier, the silica surface was modified using an amino-functional organosilane (3-aminopropyltrimethoxysilane, APTMS). The obtained support with amino functional groups (AFNS) enabled high enzyme loadings and, more importantly, extremely high expressed activities, approximately 230 mg proteins/g and 2100 IU/g, respectively. Moreover, this immobilized preparation showed high affinity towards sorbitol galactoside synthesis. Therefore, the findings of this study could provide a valuable contribution to the efficient production of physiologically active galactosides in immobilized enzyme reactors.
Keywords: β-galactosidase, immobilization, silica nanoparticles, transgalactosylation
Procedia PDF Downloads 300
515 A Vision-Based Early Warning System to Prevent Elephant-Train Collisions
Authors: Shanaka Gunasekara, Maleen Jayasuriya, Nalin Harischandra, Lilantha Samaranayake, Gamini Dissanayake
Abstract:
One serious facet of the worsening Human-Elephant conflict (HEC) in nations such as Sri Lanka involves elephant-train collisions. Endangered Asian elephants are maimed or killed during such accidents, which also often result in orphaned or disabled elephants, contributing to the phenomenon of lone elephants. These lone elephants are found to be more likely to attack villages and showcase aggressive behaviour, which further exacerbates the overall HEC. Furthermore, Railway Services incur significant financial losses and disruptions to services annually due to such accidents. Most elephant-train collisions occur due to a lack of adequate reaction time. This is due to the significant stopping distance requirements of trains, as the full braking force needs to be avoided to minimise the risk of derailment. Thus, poor driver visibility at sharp turns, nighttime operation, and poor weather conditions are often contributing factors to this problem. Initial investigations also indicate that most collisions occur in localised “hotspots” where elephant pathways/corridors intersect with railway tracks that border grazing land and watering holes. Taking these factors into consideration, this work proposes the leveraging of recent developments in Convolutional Neural Network (CNN) technology to detect elephants using an RGB/infrared capable camera around known hotspots along the railway track. The CNN was trained using a curated dataset of elephants collected on field visits to elephant sanctuaries and wildlife parks in Sri Lanka. With this vision-based detection system at its core, a prototype unit of an early warning system was designed and tested. This weatherised and waterproofed unit consists of a Reolink security camera which provides a wide field of view and range, an Nvidia Jetson Xavier computing unit, a rechargeable battery, and a solar panel for self-sufficient functioning. 
The prototype unit was designed to be a low-cost, low-power, small-footprint device that can be mounted on infrastructure such as poles or trees. If an elephant is detected, an early warning message is communicated to the train driver over the GSM network. A mobile app was also designed for this purpose to ensure that the warning is clearly communicated. A centralized control station manages and communicates all information through the train station network to ensure coordination among important stakeholders. Initial results indicate that detection accuracy is sufficient under varying lighting situations, provided comprehensive training datasets that represent a wide range of challenging conditions are available. The overall hardware prototype was shown to be robust and reliable. We envision that a network of such units may help reduce the problem of elephant-train collisions and has the potential to act as an important surveillance mechanism in dealing with the broader issue of human-elephant conflict.
Keywords: computer vision, deep learning, human-elephant conflict, wildlife early warning technology
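A warning pipeline like the one described must avoid firing on single spurious CNN detections. One minimal way to debounce per-frame confidences before triggering the GSM alert is sketched below; the class name, threshold, and window sizes are illustrative assumptions, not values from the paper.

```python
from collections import deque

class ElephantAlert:
    """Debounced alert logic: raise a warning only after `min_hits`
    of the last `window` frames exceed the confidence threshold.
    All parameter values are illustrative, not from the paper."""
    def __init__(self, threshold=0.6, window=5, min_hits=3):
        self.threshold = threshold
        self.recent = deque(maxlen=window)
        self.min_hits = min_hits

    def update(self, confidence):
        # Record whether this frame counts as a confident detection,
        # then decide based on the recent window.
        self.recent.append(confidence >= self.threshold)
        return sum(self.recent) >= self.min_hits

alert = ElephantAlert()
frames = [0.2, 0.7, 0.8, 0.4, 0.9]  # made-up per-frame CNN confidences
decisions = [alert.update(c) for c in frames]
print(decisions)  # the warning fires only once enough confident frames accumulate
```

In a deployment such as the one described, `update` would be called once per camera frame, and a `True` decision would trigger the GSM message to the driver.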
Procedia PDF Downloads 224
514 Epigenetic Modification Observed in Yeast Chromatin Remodeler Ino80p
Authors: Chang-Hui Shen, Michelle Esposito, Andrew J. Shen, Michael Adejokun, Diana Laterman
Abstract:
The packaging of DNA into nucleosomes is critical to genomic compaction, yet it can leave gene promoters inaccessible to activator proteins or transcription machinery and thus prevents transcriptional initiation. Both chromatin remodelers and histone acetylases (HATs) are the two main transcription co-activators that can reconfigure chromatin structure for transcriptional activation. Ino80p is the core component of the INO80 remodeling complex. Recently, it was shown that Ino80p dissociates from the yeast INO1 promoter after induction. However, when certain HATs were deleted or mutated, Ino80p accumulated at the promoters during gene activation. This suggests a link between HATs’ presence and Ino80p’s dissociation. However, it has yet to be demonstrated that Ino80p can be acetylated. To determine if Ino80p can be acetylated, wild-type Saccharomyces cerevisiae cells carrying Ino80p engineered with a double FLAG tag (MATa INO80-FLAG his3∆200 leu2∆0 met15∆0 trp1∆63 ura3∆0) were grown to mid log phase, as were non-tagged wild type (WT) (MATa his3∆200 leu2∆0 met15∆0 trp1∆63 ura3∆0) and ino80∆ (MATa ino80∆::TRP1 his3∆200 leu2∆0 met15∆0 trp1∆63 ura3∆0) cells as controls. Cells were harvested, and the cell lysates were subjected to immunoprecipitation (IP) with α-FLAG resin to isolate Ino80p. These eluted IP samples were subjected to SDS-PAGE and Western blot analysis. Subsequently, the blots were probed with the α-FLAG and α-acetyl lysine antibodies, respectively. For the blot probed with α-FLAG, one prominent band was shown in the INO80-FLAG cells, but no band was detected in the IP samples from the WT and ino80∆ cells. For the blot probed with the α-acetyl lysine antibody, we detected acetylated Ino80p in the INO80-FLAG strain while no bands were observed in the control strains. As such, our results showed that Ino80p can be acetylated. This acetylation can explain the co-activator’s recruitment patterns observed in current gene activation models. 
In yeast INO1, it has been shown that Ino80p is recruited to the promoter during repression and then dissociates from the promoter once de-repression begins. Histone acetylases, on the other hand, show the opposite pattern of recruitment, as they have an increased presence at the promoter as INO1 de-repression commences. This Ino80p recruitment pattern changes significantly when HAT mutant strains are studied. It was observed that, instead of dissociating, Ino80p accumulates at the promoter in the absence of functional HATs, such as Gcn5p or Esa1p, under de-repressing conditions. As such, Ino80p acetylation may be required for its proper dissociation from promoters. The remodelers' dissociation mechanism may also have a wide range of implications with respect to transcriptional initiation, elongation, or even repression, as it allows increased spatial access to the promoter for the various transcription factors and regulators that need to bind in that region. Our findings here suggest a previously uncharacterized interaction between Ino80p and other co-activators recruited to promoters. As such, further analysis of Ino80p acetylation will not only provide insight into the role of epigenetic modifications in transcriptional activation but also give insight into the interactions occurring between co-activators at gene promoters during gene regulation.
Keywords: acetylation, chromatin remodeler, epigenetic modification, Ino80p
Procedia PDF Downloads 168
513 Horizontal Stress Magnitudes Using Poroelastic Model in Upper Assam Basin, India
Authors: Jenifer Alam, Rima Chatterjee
Abstract:
The Upper Assam sedimentary basin is one of the oldest commercially producing basins of India. Being in a tectonically active zone, estimation of tectonic strain and stress magnitudes has vast application in hydrocarbon exploration and exploitation. This east-northeast to west-southwest trending shelf-slope basin encompasses the Brahmaputra valley, extending from the Mikir Hills in the southwest to the Naga foothills in the northeast. The Assam Shelf, lying between the Main Boundary Thrust (MBT) and the Naga Thrust area, is comparatively free from thrust tectonics and shows a normal faulting mechanism. The study area is bounded by the MBT and the Main Central Thrust in the northwest. The Belt of Schuppen in the southeast, bordered by the Naga and Disang thrusts, marks the lower limit of the study area. The entire Assam basin shows low-level seismicity compared to other regions of northeast India. Pore pressure (PP), vertical stress magnitude (SV), and horizontal stress magnitudes have been estimated from two wells - N1 and T1 - located in Upper Assam. N1 is located in the Assam gap below the Brahmaputra river, while T1 lies in the Belt of Schuppen. N1 penetrates geological formations from the top Alluvium through Dhekiajuli, Girujan, Tipam, Barail, Kopili, Sylhet, and Langpur to the granitic basement, while T1, in the thrusted zone, crosses the Girujan Suprathrust, Tipam Suprathrust, and Barail Suprathrust to reach the Naga Thrust. A normal compaction trend was drawn through shale points in both wells for estimation of PP using the conventional Eaton sonic equation with an exponent of 1.0, validated against Modular Dynamic Tester data and mud weight. The observed pore pressure gradient ranges from 10.3 MPa/km to 11.1 MPa/km. SV has a gradient from 22.20 to 23.80 MPa/km. Minimum and maximum horizontal principal stress (Sh and SH) magnitudes under isotropic conditions are determined using a poroelastic model.
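The Eaton sonic method mentioned above has a standard form: pore pressure is backed out from the deviation of the observed sonic transit time from the normal compaction trend. A minimal sketch follows; the numeric values are illustrative only, not data from wells N1 or T1.

```python
def eaton_pore_pressure(sv, p_hydro, dt_normal, dt_observed, exponent=1.0):
    """Eaton's sonic equation: Pp = Sv - (Sv - Phydro) * (dt_n / dt_obs)^n.
    With the exponent of 1.0 used in this study, overpressure scales
    linearly with the transit-time ratio."""
    return sv - (sv - p_hydro) * (dt_normal / dt_observed) ** exponent

# Illustrative values (MPa at a given depth), not from the study:
sv = 46.0        # vertical (overburden) stress
p_hydro = 20.0   # hydrostatic pressure
# Shale on the compaction trend (dt_observed == dt_normal) is hydrostatic:
print(eaton_pore_pressure(sv, p_hydro, 90.0, 90.0))  # 20.0
# Undercompacted shale (slower, i.e. larger, transit time) is overpressured:
print(eaton_pore_pressure(sv, p_hydro, 90.0, 110.0))
```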
This approach determines biaxial tectonic strain utilizing static Young's Modulus, Poisson's Ratio, SV, PP, leak-off test (LOT) data, and SH derived from breakouts using prior information on unconfined compressive strength. Breakout-derived SH information is used for obtaining tectonic strain due to the lack of measured SH data from minifrac or hydrofracturing. Tectonic strain varies from 0.00055 to 0.00096 along the x direction and from -0.0010 to 0.00042 along the y direction. After obtaining tectonic strains at each well, the principal horizontal stress magnitudes are calculated from the linear poroelastic model. The Sh and SH gradients in the normal faulting region are 12.5 and 16.0 MPa/km, while in the thrust-faulted region the gradients are 17.4 and 20.2 MPa/km, respectively. The model-predicted Sh and SH match well with the LOT data and breakout-derived SH data in both wells. It is observed from this study that the stress regime SV>SH>Sh, corresponding to normal faulting, prevails in the shelf region, while near the Naga foothills the regime changes to SH≈SV>Sh. Hence this model is a reliable tool for predicting stress magnitudes from well logs under an active tectonic regime in the Upper Assam Basin.
Keywords: Eaton, strain, stress, poroelastic model
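The strain-coupled linear poroelastic model referred to above is commonly written with both horizontal stresses sharing a uniaxial-strain term plus tectonic-strain corrections. A sketch of one common formulation follows; sign conventions vary between authors, and all numeric values are illustrative rather than data from the two wells.

```python
def poroelastic_horizontal_stresses(sv, pp, E, nu, alpha, eps_x, eps_y):
    """One common linear poroelastic formulation with biaxial tectonic
    strains eps_x, eps_y (units: stresses in MPa, E in MPa):
      Sh = nu/(1-nu)*(Sv - a*Pp) + a*Pp + E/(1-nu^2)*(eps_x + nu*eps_y)
      SH = nu/(1-nu)*(Sv - a*Pp) + a*Pp + E/(1-nu^2)*(eps_y + nu*eps_x)
    """
    base = nu / (1.0 - nu) * (sv - alpha * pp) + alpha * pp
    plane = E / (1.0 - nu**2)
    sh = base + plane * (eps_x + nu * eps_y)
    sH = base + plane * (eps_y + nu * eps_x)
    return sh, sH

# Illustrative inputs: Sv = 50 MPa, Pp = 25 MPa, E = 20 GPa, nu = 0.25, alpha = 1.
# With zero tectonic strain both horizontal stresses collapse to the
# uniaxial-strain value; unequal strains split Sh and SH.
sh0, sH0 = poroelastic_horizontal_stresses(50.0, 25.0, 2.0e4, 0.25, 1.0, 0.0, 0.0)
sh, sH = poroelastic_horizontal_stresses(50.0, 25.0, 2.0e4, 0.25, 1.0, 0.0004, 0.0006)
print(sh0 == sH0, sH > sh)  # True True
```

Given strains calibrated against LOT and breakout data, as described above, the same two expressions yield the Sh and SH gradients reported for the shelf and thrust-belt wells.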
Procedia PDF Downloads 213
512 Industrial Waste Multi-Metal Ion Exchange
Authors: Thomas S. Abia II
Abstract:
Intel Chandler Site has internally developed its first-of-kind (FOK) facility-scale wastewater treatment system to achieve multi-metal ion exchange. The process was carried out using a serial process train of carbon filtration, pH/ORP adjustment, and cationic exchange purification to treat dilute metal wastewater (DMW) discharged from a substrate packaging factory. Spanning a trial period of 10 months, a total of 3,271 samples were collected and statistically analyzed (average baseline ± standard deviation) to evaluate the performance of a 95-gpm, multi-reactor continuous copper ion exchange treatment system that was consequently retrofitted for manganese ion exchange to meet environmental regulations. The system is also equipped with an inline acid and hot caustic regeneration system to rejuvenate exhausted IX resins and occasionally remove surface crud. Data generated from lab-scale studies were transferred to system operating modifications following multiple trial-and-error experiments. Although the DMW treatment system failed to meet internal performance specifications for manganese output, it was observed to remove the cation notwithstanding the prevalence of copper in the waste stream. Accordingly, the average manganese output declined from 6.5 ± 5.6 mg·L⁻¹ at pre-pilot to 1.1 ± 1.2 mg·L⁻¹ post-pilot (83% baseline reduction). This milestone was achieved even though the average influent manganese to DMW increased from 1.0 ± 13.7 mg·L⁻¹ at pre-pilot to 2.1 ± 0.2 mg·L⁻¹ post-pilot (110% baseline uptick). Likewise, the pre-trial and post-trial average influent copper values to DMW were 22.4 ± 10.2 mg·L⁻¹ and 32.1 ± 39.1 mg·L⁻¹, respectively (43% baseline increase). As a result, the pre-trial and post-trial average copper output values were 0.1 ± 0.5 mg·L⁻¹ and 0.4 ± 1.2 mg·L⁻¹, respectively (300% baseline uptick).
Conclusively, the operating pH range upstream of treatment (between 3.5 and 5) was shown to be the largest single point of influence for optimizing manganese uptake during multi-metal ion exchange. However, the high variability of the influent copper-to-manganese ratio was observed to adversely impact system functionality. This paper discusses the operating parameters, such as pH and oxidation-reduction potential (ORP), that were shown to significantly influence the functional versatility of the ion exchange system. It also discusses limitations of the treatment system, such as influent copper-to-manganese ratio variations, operational configuration, waste by-product management, and system recovery requirements, to provide a balanced assessment of the multi-metal ion exchange process. The take-away from this work is intended to inform the overall feasibility of ion exchange for metals manufacturing facilities that lack the capability to expand hardware due to real estate restrictions, aggressive schedules, or budgetary constraints.
Keywords: copper, industrial wastewater treatment, multi-metal ion exchange, manganese
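The baseline percentage figures quoted in the abstract follow directly from the pre- and post-pilot averages; a quick check of the arithmetic:

```python
def pct_change(pre, post):
    """Percentage change relative to the pre-pilot baseline."""
    return (post - pre) / pre * 100.0

# Averages from the abstract (mg/L; the ± spreads are omitted here):
print(round(-pct_change(6.5, 1.1)))   # manganese output: 83% reduction
print(round(pct_change(1.0, 2.1)))    # influent manganese: 110% uptick
print(round(pct_change(22.4, 32.1)))  # influent copper: 43% increase
print(round(pct_change(0.1, 0.4)))    # copper output: 300% uptick
```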
Procedia PDF Downloads 142
511 Demographic Assessment and Evaluation of Degree of Lipid Control in High Risk Indian Dyslipidemia Patients
Authors: Abhijit Trailokya
Abstract:
Background: Cardiovascular diseases (CVDs) are the major cause of morbidity and mortality in both developed and developing countries. Many clinical trials have demonstrated that lowering low-density lipoprotein cholesterol (LDL-C) reduces the incidence of coronary and cerebrovascular events across a broad spectrum of patients at risk. Guidelines for the management of patients at risk have been established in Europe and North America. The guidelines have advocated progressively lower LDL-C targets and more aggressive use of statin therapy. In Indian patients, comprehensive data on dyslipidemia management and its treatment outcomes are inadequate. There is a lack of information on existing treatment patterns, the profile of patients being treated, and the factors that determine treatment success or failure in achieving desired goals. Purpose: The present study was planned to determine the lipid control status in high-risk dyslipidemic patients treated with lipid-lowering therapy in India. Methods: This cross-sectional, non-interventional, single-visit program was conducted across 483 sites in India, enrolling male and female patients with high-risk dyslipidemia aged 18 to 65 years who had visited their respective physician at a hospital or healthcare center for a routine health check-up. The percentage of high-risk dyslipidemic patients achieving an adequate LDL-C level (< 70 mg/dL) on lipid-lowering therapy and the association of lipid parameters with patient characteristics, comorbid conditions, and lipid-lowering drugs were analysed. Results: 3089 patients were enrolled in the study, of which 64% were males. LDL-C data were available for 95.2% of the patients; only 7.7% of these patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which may be due to inability to follow therapeutic plans, poor compliance, or inadequate counselling by the physician.
The physician's lack of awareness of recent treatment guidelines might also contribute to patients' poor adherence, as might failing to explain adequately the benefits and risks of a medication, and not giving consideration to the patient's lifestyle and the cost of medication. Statins were the most commonly used anti-dyslipidemic drugs across the population. A higher proportion of patients had the comorbid conditions of CVD and diabetes mellitus across all dyslipidemic patients. Conclusion: As per the European Society of Cardiology guidelines, the ideal LDL-C level in high-risk dyslipidemic patients should be less than 70 mg/dL. In the present study, only 7.7% of the patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which is very low. Most high-risk dyslipidemic patients in India are on a suboptimal dosage of statins, so more aggressive, higher-dosage statin therapy may be required to achieve target LDL-C levels in high-risk Indian dyslipidemic patients.
Keywords: cardiovascular disease, diabetes mellitus, dyslipidemia, LDL-C, lipid lowering drug, statins
Procedia PDF Downloads 200
510 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web
Authors: Aayushi Somani, Siba P. Samal
Abstract:
Three-dimensional (3D) meshes are data structures that store geometric information of an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies acquire high-resolution samples, which lead to high-resolution meshes. While high-resolution meshes give better quality rendering and hence are often used, the processing as well as storage of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies, such as WebGL and WebVR, has enabled high-fidelity rendering of huge meshes. However, there exists a gap in the ability to stream huge meshes to native client and browser applications due to high network latency. There is also an inherent delay in loading WebGL pages due to large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into and processed on hand-held devices with their limited resources. One of the solutions conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our approach is a two-step approach for random-accessible progressive compression and its parallel implementation. The first step partitions the original mesh into multiple sub-meshes; we then invoke data parallelism on these sub-meshes for their compression. Subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept can be used to revolutionize the way e-commerce and virtual reality technology work for consumer electronic devices. Objects can be compressed on the server and transmitted over the network, and the progressive decompression can be performed on the client device and rendered.
The multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a smoother user experience. The approach can also be used in WebVR for common and widely used activities like virtual reality shopping, watching movies, and playing games. Our experiments and comparison with existing techniques show encouraging results in terms of latency (compressed size is ~10-15% of the original mesh), processing time (20-22% increase over the serial implementation), and quality of user experience in the web browser.
Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR
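The partition-then-compress pipeline described above can be sketched in a few lines. The sketch below uses `zlib` as a stand-in for the paper's progressive mesh codec and threads in place of its data-parallel workers; everything here is an illustrative assumption about the structure, not the authors' implementation.

```python
import struct
import zlib
from concurrent.futures import ThreadPoolExecutor

def partition(vertices, n_parts):
    """Step 1: split the vertex list into contiguous sub-meshes."""
    size = -(-len(vertices) // n_parts)  # ceiling division
    return [vertices[i:i + size] for i in range(0, len(vertices), size)]

def compress_submesh(sub):
    """Stand-in for the real progressive codec: pack float32 coords, deflate."""
    raw = struct.pack(f"{len(sub) * 3}f", *[c for v in sub for c in v])
    return zlib.compress(raw)

def compress_parallel(vertices, n_parts=4):
    """Step 2: data parallelism across sub-meshes."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(compress_submesh, partition(vertices, n_parts)))

def decompress_submesh(blob):
    raw = zlib.decompress(blob)
    flat = struct.unpack(f"{len(raw) // 4}f", raw)
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

# Round-trip a toy mesh: each sub-mesh decompresses independently,
# which is what enables random-accessible progressive streaming.
mesh = [(float(i), float(i + 1), float(i + 2)) for i in range(100)]
blobs = compress_parallel(mesh)
restored = [v for b in blobs for v in decompress_submesh(b)]
print(restored == mesh)  # True
```

Because each blob is self-contained, a client (here, the modified browser engine) can request and decode sub-meshes in any order and in parallel threads.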
Procedia PDF Downloads 168
509 Integration of ICF Walls as Diurnal Solar Thermal Storage with Microchannel Solar Assisted Heat Pump for Space Heating and Domestic Hot Water Production
Authors: Mohammad Emamjome Kashan, Alan S. Fung
Abstract:
In Canada, more than 32% of total energy demand is related to the building sector. Therefore, there is a great opportunity for greenhouse gas (GHG) reduction by integrating solar collectors to provide the building heating load and domestic hot water (DHW). Despite the cold winter weather, Canada has a good number of sunny and clear days that can be considered for diurnal solar thermal energy storage. Due to the energy mismatch between building heating load and solar irradiation availability, relatively big storage tanks are usually needed to store solar thermal energy during the daytime and then use it at night. On the other hand, water tanks occupy considerable space, and especially in big cities, space is expensive. This project investigates the possibility of using a specific building construction material (ICF - Insulated Concrete Form) as diurnal solar thermal energy storage integrated with a heat pump and a microchannel solar thermal (MCST) collector. Little literature has studied the application of a building's pre-existing walls as active solar thermal energy storage as a feasible and industrialized solution for the solar thermal mismatch. By using ICF walls that are integrated into the building envelope instead of big storage tanks, excess solar energy can be stored in the concrete of the ICF wall, which has EPS insulation layers on both sides to retain the thermal energy. In this study, two solar-based systems are designed and simulated in the Transient Systems Simulation Program (TRNSYS) to compare the thermal storage benefits of ICF walls over a system without them. The heating load and DHW of a Canadian single-family house located in London, Ontario, are provided by the solar-based systems. The proposed system integrates the MCST collector, a water-to-water heat pump, a preheat tank, the main tank, fan coils (to deliver the building heating load), and ICF walls. During the day, excess solar energy is stored in the ICF walls (charging cycle).
Thermal energy can be restored from the ICF walls when the preheat tank temperature drops below that of the ICF wall (discharging process) to increase the COP of the heat pump. The evaporator of the heat pump is coupled with the preheat tank. The warm water provided by the heat pump is stored in the second tank. Fan coil units are in contact with this tank to deliver the building heating load, and DHW is also provided from the main tank. It was found that the system with ICF walls, with an average solar fraction of 82%-88%, can cover the whole heating demand plus DHW for nine months and has a 10-15% higher average solar fraction than the system without ICF walls. A sensitivity analysis of the different parameters influencing the solar fraction is discussed in detail.
Keywords: net-zero building, renewable energy, solar thermal storage, microchannel solar thermal collector
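The solar fraction quoted above is conventionally defined as the share of the load not met by auxiliary (non-solar) energy, SF = 1 - Q_aux / Q_load. A minimal sketch, with purely hypothetical monthly figures chosen only to show the arithmetic (they are not the TRNSYS results):

```python
def solar_fraction(q_load, q_aux):
    """Standard definition: fraction of the load met by solar energy,
    SF = 1 - Q_aux / Q_load."""
    return 1.0 - q_aux / q_load

# Hypothetical monthly heating+DHW loads and auxiliary inputs (kWh):
loads = [1500, 1300, 1100, 800, 500, 400]
aux   = [ 300,  200,  120,  60,  20,  10]

monthly_sf = [solar_fraction(l, a) for l, a in zip(loads, aux)]
# The seasonal average weights each month by its load:
average_sf = 1.0 - sum(aux) / sum(loads)
print(round(average_sf, 2))
```

Comparing this load-weighted average between the ICF and non-ICF simulations is what yields the reported 10-15% solar-fraction advantage.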
Procedia PDF Downloads 120
508 Frequency of Tube Feeding in Aboriginal and Non-Aboriginal Head and Neck Cancer Patients and the Impact on Relapse and Survival Outcomes
Authors: Kim Kennedy, Daren Gibson, Stephanie Flukes, Chandra Diwakarla, Lisa Spalding, Leanne Pilkington, Andrew Redfern
Abstract:
Introduction: Head and neck cancer and treatments are known for their profound effect on nutrition and tube feeding is a common requirement to maintain nutrition. Aim: We aimed to evaluate the frequency of tube feeding in Aboriginal and non-Aboriginal patients, and to examine the relapse and survival outcomes in patients who require enteral tube feeding. Methods: We performed a retrospective cohort analysis of 320 head and neck cancer patients from a single centre in Western Australia, identifying 80 Aboriginal patients and 240 non-Aboriginal patients matched on a 1:3 ratio by site, histology, rurality, and age. Data collected included patient demographics, tumour features, treatment details, and cancer and survival outcomes. Results: Aboriginal and non-Aboriginal patients required feeding tubes at similar rates (42.5% vs 46.2% respectively), however Aboriginal patients were far more likely to fail to return to oral nutrition, with 26.3% requiring long-term tube feeding versus only 15% of non-Aboriginal patients. In the overall study population, 27.5% required short-term tube feeding, 17.8% required long-term enteral tube nutrition, and 45.3% of patients did not have a feeding tube at any point. Relapse was more common in patients who required tube feeding, with relapses in 42.1% of the patients requiring long-term tube feeding, 31.8% in those requiring a short-term tube, versus 18.9% in the ‘no tube’ group. Survival outcomes for patients who required a long-term tube were also significantly poorer when compared to patients who only required a short-term tube, or not at all. Long-term tube-requiring patients were half as likely to survive (29.8%) compared to patients requiring a short-term tube (62.5%) or no tube at all (63.5%). Patients requiring a long-term tube were twice as likely to die with active disease (59.6%) as patients with no tube (28%), or a short term tube (33%). 
This may suggest an increased relapse risk in patients who require long-term feeding, due to the consequences of malnutrition on cancer and treatment outcomes, although it may simply reflect that patients with recurrent disease were more likely to have longer-term swallowing dysfunction due to recurrent disease and salvage treatments. Interestingly, long-term tube patients were also more likely to die with no active disease (10.5%, compared with 4.6% of short-term tube patients and 8% of patients with no tube), which likely reflects the increased mortality associated with long-term aspiration and malnutrition. Conclusions: Requirement for tube feeding was associated with a higher rate of cancer relapse, and in particular, long-term tube feeding was associated with a higher likelihood of dying from head and neck cancer, but also a higher risk of dying from other causes without cancer relapse. These data reflect the complex effect of head and neck cancer and its treatments on swallowing and nutrition, and ultimately, the effects of malnutrition, swallowing dysfunction, and aspiration on overall cancer and survival outcomes. Tube feeding was seen at similar rates in Aboriginal and non-Aboriginal patients; however, failure to return to oral intake with a requirement for a long-term feeding tube was seen far more commonly in the Aboriginal population.
Keywords: head and neck cancer, enteral tube feeding, malnutrition, survival, relapse, Aboriginal patients
Procedia PDF Downloads 100
507 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India
Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar
Abstract:
The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are the highest among diagnostic radiology practices, it is of great significance to be aware of the patient’s radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of volume CT dose index (CTDIvol) values, is displayed on scanner monitors at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate the CTDIvol values for a great number of patients during the most frequent CT examinations, to compare the CT dose monitor values with measured ones, and to highlight the fluctuation of CTDIvol values for the same CT examination at different centres and scanner models. The output CT dose index measurements were carried out on single- and multislice scanners for the available kV, 5 mm slice thickness, 100 mA, and FOV combinations used. A total of 100 CT scanners were involved in this study. Data on 15,000 examinations of patients who underwent routine head, chest, and abdomen CT were collected using a questionnaire sent to a large number of hospitals. Of the 15,000 examinations, 5,000 were head CT examinations, 5,000 were chest CT examinations, and 5,000 were abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergences between the measured and displayed CTDIvol values were 5.2, 8.4, and -5.7 for the head, chest, and abdomen protocols mentioned above, respectively.
Thus, this investigation revealed an observable change in CT practices, with a much wider range of studies currently being performed in South India. This reflects the improved capacity of CT scanners to scan longer lengths and at finer resolutions, as permitted by helical and multislice technology. Also, some CT scanners used smaller slice thicknesses for routine CT procedures to achieve better resolution and image quality. This leads to an increase in the patient radiation dose as well as in the measured CTDIvol, so it is suggested that such CT scanners select appropriate slice thicknesses and scanning parameters in order to reduce the patient dose. If these routine scan parameters for head, chest, and abdomen procedures were optimized, then the dose indices would be optimal, leading to lower CT doses. In the South Indian region, all CT machines are routinely tested for QA once a year as per AERB requirements.
Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose
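The figures above rest on a simple comparison of console-displayed versus phantom-measured dose indices. The sketch below shows one way that comparison can be computed; the sample readings are invented for illustration, not the study's data.

```python
# Illustrative sketch (not the study's actual data): percentage divergence
# between the console-displayed CTDIvol and the phantom-measured value.
def divergence_pct(displayed_mGy, measured_mGy):
    """Positive when the console over-reports dose relative to the phantom."""
    return 100.0 * (displayed_mGy - measured_mGy) / measured_mGy

# Invented (displayed, measured) CTDIvol pairs in mGy for three protocols.
exams = {"head": (60.0, 57.0), "chest": (13.0, 12.0), "abdomen": (14.0, 14.8)}
for exam, (disp, meas) in exams.items():
    print(f"{exam}: {divergence_pct(disp, meas):+.1f}%")
```

A signed divergence, as used here, preserves the direction of the error, which matches the abstract's mix of positive and negative values across protocols.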
Procedia PDF Downloads 255
506 Metalorganic Chemical Vapor Deposition Overgrowth on the Bragg Grating for Gallium Nitride Based Distributed Feedback Laser
Abstract:
Laser diodes fabricated from the III-nitride material system are emerging solutions for next-generation telecommunication systems and for optical clocks based on Ca at 397 nm, Rb at 420.2 nm, and Yb at 398.9 nm combined with 556 nm. Most applications, such as communication systems and laser cooling, require single-longitudinal-mode lasers with very narrow linewidth and compact size. In this case, the GaN-based distributed feedback (DFB) laser diode is one of the most effective candidates, as lasers with gratings are known to operate with narrow spectra as well as high power and efficiency. Given the wavelength range, the period of a first-order diffraction grating is under 100 nm, and the realization of such gratings is technically difficult because of the narrow line width and the high-quality nitride overgrowth required on the Bragg grating. Some groups have reported GaN DFB lasers with high-order distributed feedback surface gratings, which avoid the overgrowth. However, the coupling strength is generally lower than that of a Bragg grating embedded into the waveguide within the GaN laser structure by two-step epitaxy. Therefore, overgrowth on the grating needs to be studied and optimized. Here we propose to fabricate the fine step-shaped structure of a first-order grating by nanoimprint lithography combined with inductively coupled plasma (ICP) dry etching, and then to overgrow a high-quality AlGaN film by metalorganic chemical vapor deposition (MOCVD). A series of gratings with different periods, depths, and duty ratios were designed and fabricated to study the influence of the grating structure on the nano-heteroepitaxy. Moreover, we observe the nucleation and growth process by step-by-step growth to study the growth mode for nitride overgrowth on the grating, under the condition that the grating period is larger than the metal migration length on the surface.
The AFM images demonstrate that a smooth surface of the AlGaN film is achieved, with an average roughness of 0.20 nm over 3 × 3 μm2. The full width at half maximum (FWHM) of the (002) reflection in the XRD rocking curve is 278 arcsec for the AlGaN film, and the Al content within the film is 8% according to the XRD mapping measurement, in accordance with the design values. By observing samples with growth times of 200 s, 400 s, and 600 s, the growth model is summarized in the following steps: initially, nucleation is evenly distributed on the grating structure, as the migration length of Al atoms is low; then, AlGaN grows along the grating top surface; finally, the AlGaN film is formed by lateral growth. This work contributes to realizing a GaN DFB laser by fabricating the grating and performing overgrowth on the nano-grating patterned substrate at wafer scale; moreover, the growth dynamics were analyzed as well.
Keywords: DFB laser, MOCVD, nanoepitaxy, III-nitride
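The sub-100 nm first-order period quoted in the abstract follows from the Bragg condition Λ = mλ/(2·n_eff). The sketch below checks this for the wavelengths named above; the effective index n_eff = 2.45 is an assumed illustrative value for a GaN waveguide near 400 nm, not a figure from the study.

```python
# Back-of-envelope check of the first-order Bragg grating period:
# Lambda = m * lambda / (2 * n_eff), with an assumed n_eff = 2.45.
def bragg_period_nm(vacuum_wavelength_nm, n_eff, order=1):
    """Grating period in nm for a DFB laser at the given vacuum wavelength."""
    return order * vacuum_wavelength_nm / (2.0 * n_eff)

for wl in (397.0, 420.2, 398.9):
    print(f"lambda = {wl} nm -> period = {bragg_period_nm(wl, 2.45):.1f} nm")
```

All three periods come out in the 80-90 nm range, consistent with the abstract's statement that first-order gratings for these wavelengths fall below 100 nm.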
Procedia PDF Downloads 186
505 A Randomized, Controlled Trial to Test Behavior Change Techniques to Improve Low Intensity Physical Activity in Older Adults
Authors: Ciaran Friel, Jerry Suls, Mark Butler, Patrick Robles, Samantha Gordon, Frank Vicari, Karina W. Davidson
Abstract:
Physical activity guidelines focus on increasing moderate-intensity activity for older adults, but adherence to recommendations remains low, even though scientific evidence supports that any increase in physical activity is positively correlated with health benefits. Behavior change techniques (BCTs) have demonstrated effectiveness in reducing sedentary behavior and promoting physical activity. This pilot study uses a Personalized Trials (N-of-1) design to evaluate the efficacy of four BCTs in promoting an increase in low-intensity physical activity (2,000 steps of walking per day) in adults aged 45-75 years. The four BCTs tested were goal setting, action planning, feedback, and self-monitoring. BCTs were tested in random order and delivered by text message prompts requiring participant engagement. The study recruited health system employees in the target age range, without mobility restrictions, who demonstrated interest in increasing their daily activity by a minimum of 2,000 steps per day for at least five days per week. Participants were sent a Fitbit® fitness tracker with an established study account and password. Participants were recommended to wear the Fitbit device 24/7 but were required to wear it for a minimum of ten hours per day. Baseline physical activity was measured by Fitbit for two weeks. In the 8-week intervention phase of the study, participants received each of the four BCTs, in random order, for a two-week period. Text message prompts were delivered daily each morning at a consistent time. All prompts required participant engagement to acknowledge receipt of the BCT message. Engagement depended on the BCT message and may have included recording that a detailed plan for walking had been made or confirming a daily step goal (action planning, goal setting).
Additionally, participants may have been directed to a study dashboard to view their step counts or compare themselves to their baseline average step count (self-monitoring, feedback). At the end of each two-week testing interval, participants were asked to complete the Self-Efficacy for Walking Scale (SEW_Dur), a validated measure that assesses the participant’s confidence in walking incremental distances, and a survey measuring their satisfaction with the individual BCT they tested. At the end of their trial, participants received a personalized summary of their step data in response to each individual BCT. The analysis will examine the novel individual-level heterogeneity of treatment effect made possible by the N-of-1 design and pool results across participants to efficiently estimate the overall efficacy of the selected behavior change techniques in increasing low-intensity walking by 2,000 steps, five days per week. Self-efficacy will be explored as the likely mechanism of action prompting behavior change. This study will inform providers and demonstrate the feasibility of an N-of-1 study design to effectively promote physical activity as a component of healthy aging.
Keywords: aging, exercise, habit, walking
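The trial's block structure, a two-week baseline followed by the four BCTs in random order, two weeks each, can be sketched as below. The BCT names and block lengths follow the abstract; the per-participant seeding scheme is a hypothetical illustration, not the study's randomization procedure.

```python
import random

# Hypothetical sketch of the N-of-1 block schedule described in the abstract.
BCTS = ["goal setting", "action planning", "feedback", "self-monitoring"]

def make_schedule(participant_id):
    rng = random.Random(participant_id)  # reproducible per-participant order
    order = BCTS[:]
    rng.shuffle(order)                   # random BCT order for this person
    return [("baseline", 2)] + [(bct, 2) for bct in order]  # (phase, weeks)

schedule = make_schedule(participant_id=17)
print(schedule)
print("total weeks:", sum(weeks for _, weeks in schedule))  # prints 10
```

Seeding the generator with the participant identifier keeps each person's order stable across reruns while still varying the order between participants.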
Procedia PDF Downloads 91
504 A Paradigm Shift in Patent Protection-Protecting Methods of Doing Business: Implications for Economic Development in Africa
Authors: Odirachukwu S. Mwim, Tana Pistorius
Abstract:
Since the early 1990s, political and economic pressure has mounted on policymakers and lawmakers to increase patent protection by raising protection standards. The perception of the relation between patent protection and development, particularly economic development, has evolved significantly in the past few years. Debate on patent protection in the international arena has been significantly influenced by the perception that there is a strong link between patent protection and economic development: the level of patent protection determines the extent of development that can be achieved. Recently, there has been a paradigm shift, with much emphasis on extending patent protection to methods of doing business, generally referred to as Business Method Patenting (BMP). The general perception among international organizations and the private sector also indicates that there is a strong correlation between BMP protection and economic growth. There are two diametrically opposing views on the relation between Intellectual Property (IP) protection and development and innovation. One school of thought promotes the view that IP protection improves economic development through the stimulation of innovation and creativity. The other school advances the view that IP protection is unnecessary for the stimulation of innovation and creativity and is in fact a hindrance to open access to the resources and information required for innovative and creative modalities. Therefore, different theories and policies attach different levels of protection to BMP, with specific implications for economic growth. This study examines the impact of BMP protection on development by focusing on the challenges confronting economic growth in African communities as a result of the new paradigm in patent law. (Africa is used as a single unit in this study, but this should not be construed as African homogeneity.
Rather, the views advanced in this study are used to address the common challenges facing many communities in Africa). The study reviews, from the points of view of legal philosophers and policy makers and through the decisions of competent courts, the relevant literature, patent legislation, particularly international treaties, policies, and legal judgments. Findings from this study suggest that, over and above the various criticisms levelled against the extremely liberal approach to the recognition of business methods as patentable subject matter, there are other specific implications associated with such an approach. The most critical implication of extending patent protection to business methods is the locking up of knowledge, which may hamper human development in general and economic development in particular. Locking up knowledge necessary for economic advancement and competitiveness may have a negative effect on economic growth by promoting economic exclusion, particularly in African communities. This study suggests that knowledge of BMP within the African context, and the extent of protection linked to it, is crucial to achieving sustainable economic growth in Africa. It also suggests that a balance be struck between the two diametrically opposing views.
Keywords: Africa, business method patenting, economic growth, intellectual property, patent protection
Procedia PDF Downloads 125
503 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization
Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller
Abstract:
The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for the realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires restricting the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After the operation of different energy systems is calculated in simulation models in a first stage, in terms of the resulting final energy demands, the results serve as input for a second-stage MILP optimization, in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures due to the efficiency of MILP solvers, but it necessitates simplifying the building energy system operation.
Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization
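The discrete design problem both approaches address, choosing one modernization package per building to minimize emissions within an investment budget, can be sketched in miniature. The buildings, package names, and all numbers below are invented for illustration, and exhaustive search stands in for a MILP solver, which becomes necessary once the portfolio grows.

```python
from itertools import product

# building -> package -> (investment cost, resulting annual CO2 emissions)
PACKAGES = {
    "building_A": {"none": (0, 12.0), "envelope": (40, 8.0), "heat_pump": (60, 5.0)},
    "building_B": {"none": (0, 9.0), "envelope": (30, 6.5), "heat_pump": (50, 4.0)},
}

def best_plan(budget):
    """Exhaustively pick one package per building, minimizing emissions
    subject to the total investment budget."""
    names = list(PACKAGES)
    best = None
    for choice in product(*(PACKAGES[n] for n in names)):
        cost = sum(PACKAGES[n][c][0] for n, c in zip(names, choice))
        emissions = sum(PACKAGES[n][c][1] for n, c in zip(names, choice))
        if cost <= budget and (best is None or emissions < best[0]):
            best = (emissions, cost, dict(zip(names, choice)))
    return best  # (emissions, cost, chosen packages) or None

print(best_plan(budget=90))
```

With n buildings and k packages each, this search is O(k^n); the MILP formulation described in the abstract exists precisely to avoid that blow-up on real portfolios.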
Procedia PDF Downloads 32
502 Additive Manufacturing with Ceramic Filler
Authors: Irsa Wolfram, Boruch Lorenz
Abstract:
Innovative solutions with additive manufacturing applying material extrusion for functional parts necessitate innovative filaments with consistent quality. Uniform homogeneity and a consistent dispersion of particles embedded in filaments generally require multiple cycles of extrusion or well-prepared primal matter produced by injection molding, kneader machines, or mixing equipment. These technologies depend on dedicated equipment that is rarely at the disposal of production laboratories unfamiliar with research in polymer materials. This stands in contrast to laboratories that investigate complex material topics and technology science to leverage the potential of 3-D printing. Consequently, scientific studies in such labs are often constrained to the compositions and concentrations of fillers offered on the market. Therefore, we introduce a prototypal laboratory methodology, scalable to tailored primal matter, for extruding ceramic composite filaments with fused filament fabrication (FFF) technology. A desktop single-screw extruder serves as the core device for the experiments. Custom-made filaments encapsulate the ceramic fillers in polylactide (PLA), a thermoplastic polyester, which serves as primal matter and is processed in the melting area of the extruder, preserving the defined concentration of the fillers. Validated results demonstrate that this approach enables continuously produced and uniform composite filaments with consistent homogeneity. The filament is 3-D printable with controllable dimensions, which is a prerequisite for any scalable application. Additionally, digital microscopy confirms the steady dispersion of the ceramic particles in the composite filament, which permits a 2D reconstruction of the planar distribution of the embedded ceramic particles in the PLA matrices. The innovation of the introduced method lies in the smart simplicity of preparing the composite primal matter.
It circumvents the inconvenience of numerous extrusion operations and expensive laboratory equipment. Nevertheless, it delivers consistent filaments of controlled, predictable, and reproducible filler concentration, which is the prerequisite for any industrial application. The introduced prototypal laboratory methodology appears applicable to other polymer matrices and suitable for further utilitarian particle types beyond ceramic fillers. This opens a roadmap for supplementary laboratory development of particular composite filaments, providing value for industries and societies. This low-threshold entry to the sophisticated preparation of composite filaments - enabling businesses to create their own dedicated filaments - will support the mutual efforts to establish 3D printing for new functional devices.
Keywords: additive manufacturing, ceramic composites, complex filament, industrial application
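When preparing primal matter with a defined filler concentration, as described above, it is often useful to translate a target mass fraction into the resulting volume fraction. The helper below is a hedged sketch: the densities are assumed illustrative values (PLA ~1.24 g/cm³, an alumina-like ceramic filler ~3.95 g/cm³), not figures from the study.

```python
# Hedged helper: filler volume fraction from mass fraction and assumed
# densities (PLA ~1.24 g/cm^3, ceramic filler ~3.95 g/cm^3).
def filler_volume_fraction(mass_frac, rho_filler=3.95, rho_matrix=1.24):
    v_filler = mass_frac / rho_filler          # filler volume per unit mass
    v_matrix = (1.0 - mass_frac) / rho_matrix  # matrix volume per unit mass
    return v_filler / (v_filler + v_matrix)

print(f"20 wt% filler -> {filler_volume_fraction(0.20):.1%} by volume")
```

Because dense ceramics pack much more mass into a given volume than PLA, a 20 wt% loading corresponds to only about 7 vol%, which is worth keeping in mind when comparing loadings reported by mass versus by volume.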
Procedia PDF Downloads 105
501 Synergistic Effect of Chondroinductive Growth Factors and Synovium-Derived Mesenchymal Stem Cells on Regeneration of Cartilage Defects in Rabbits
Authors: M. Karzhauov, А. Mukhambetova, M. Sarsenova, E. Raimagambetov, V. Ogay
Abstract:
Regeneration of injured articular cartilage remains one of the most difficult and unsolved problems in traumatology and orthopedics. Currently, surgical techniques for stimulating the regeneration of cartilage in damaged joints, such as multiple microperforation, mosaic chondroplasty, abrasion, and microfracture, are used for the treatment of cartilage defects. However, as clinical practice has shown, they cannot provide full and sustainable recovery of articular hyaline cartilage. In this regard, high hopes for the regeneration of cartilage defects are reasonably associated with the use of tissue engineering approaches to restore the structural and functional characteristics of damaged joints using stem cells, growth factors, and biopolymers or scaffolds. The purpose of the present study was to investigate the effects of chondroinductive growth factors and synovium-derived mesenchymal stem cells (SD-MSCs) on the regeneration of cartilage defects in rabbits. SD-MSCs were isolated from the synovial membrane of Flemish giant rabbits and expanded in complete culture medium α-MEM. Rabbit SD-MSCs were characterized by CFU assay and by their ability to differentiate into osteoblasts, chondrocytes, and adipocytes. The effects of growth factors (TGF-β1, BMP-2, BMP-4, and IGF-I) on MSC chondrogenesis were examined in micromass pellet cultures using histological and biochemical analysis. An articular cartilage defect (4 mm in diameter) in the intercondylar groove of the patellofemoral joint was created with a kit for mosaic chondroplasty. The defect was made down to the subchondral bone plate. Delivery of SD-MSCs and growth factors was conducted in combination with hyaluronic acid (HA). SD-MSC, growth factor, and control groups were compared macroscopically and histologically at 10, 30, 60, and 90 days after intra-articular injection.
Our in vitro comparative study revealed that TGF-β1 and BMP-4 are key chondroinductive factors for both the growth and the chondrogenesis of SD-MSCs. The highest effect on MSC chondrogenesis was observed with the synergistic interaction of TGF-β1 and BMP-4. In addition, biochemical analysis of the chondrogenic micromass pellets revealed that the levels of glycosaminoglycans and DNA after combined treatment with TGF-β1 and BMP-4 were significantly higher in comparison to the individual application of these factors. The in vivo study showed that complete regeneration of cartilage defects with intra-articular injection of SD-MSCs with HA takes 90 days. However, a single injection of SD-MSCs in combination with TGF-β1, BMP-4, and HA significantly increased the regeneration rate of the cartilage defects in rabbits: in this case, complete regeneration of the defects was observed 30 days after intra-articular injection. Thus, our in vitro and in vivo studies demonstrated that combined application of rabbit SD-MSCs with chondroinductive growth factors and HA results in a strong synergistic effect on chondrogenesis, significantly enhancing regeneration of the damaged cartilage.
Keywords: mesenchymal stem cells, synovium, chondroinductive factors, TGF-β1, BMP-2, BMP-4, IGF-I
Procedia PDF Downloads 304
500 A Culture-Contrastive Analysis of the Communication between Discourse Participants in European Editorials
Authors: Melanie Kerschner
Abstract:
Language is our main means of social interaction, and news journalism, especially opinion discourse, holds a powerful position in this context. Editorials can be regarded as encounters of different, partially contradictory relationships between discourse participants constructed through the editorial voice. Their primary goal is to shape public opinion by commenting on events already addressed by other journalistic genres in the given newspaper. In doing so, the author tries to establish a consensus with the reader over the negotiated matter (i.e. the news event). At the same time, he/she claims authority over the “correct” description and evaluation of an event. Yet how can the relationship and the interaction between the discourse participants, i.e. the journalist, the reader, and the news actors represented in the editorial, best be visualized and studied from a cross-cultural perspective? The present research project attempts to give insights into the role of (media) culture in British, Italian, and German editorials. For this purpose, the presenter will propose a basic framework: the so-called “pyramid of discourse participants”, comprising the author, the reader, two types of news actors, and the semantic macro-structure (as a meta-level of analysis). Based on this framework, the following questions will be addressed:
• Which strategies does the author employ to persuade the reader and to prompt him to give his opinion (in the comment section)?
• In which ways (and with which linguistic tools) is editorial opinion expressed?
• Does the author use adjectives, adverbials, and modal verbs to evaluate news actors, their actions, and the current state of affairs, or does he/she prefer nominal labels?
• Which influence do language choice and the related media culture have on the representation of news events in editorials?
• To what extent does the social context of a given media culture influence the amount of criticism and the way it is mediated so that it remains culturally acceptable?
The culture-contrastive study will examine 45 editorials (i.e. 15 per media culture) from six national quality papers that are similar in distribution, importance, and the kind of envisaged readership, in order to draw valuable conclusions about culturally motivated similarities and differences in the coverage and assessment of news events. The thematic orientation of the editorials will be the NSA scandal and the reactions of various countries, as this topic was and still is relevant to each of the three media cultures. Starting out from the “pyramid of discourse participants” as the underlying framework, eight different criteria will be assigned to the individual discourse participants in the micro-analysis of the editorials. For the purpose of illustration, a single criterion, referring to the salience of authorial opinion, will be selected to demonstrate how the pyramid of discourse participants can be applied as a basis for empirical analysis. Extracts from the corpus will furthermore enhance the understanding.
Keywords: micro-analysis of editorials, culture-contrastive research, media culture, interaction between discourse participants, evaluation
Procedia PDF Downloads 515
499 Active Filtration of Phosphorus in Ca-Rich Hydrated Oil Shale Ash Filters: The Effect of Organic Loading and Form of Precipitated Phosphatic Material
Authors: Päärn Paiste, Margit Kõiv, Riho Mõtlep, Kalle Kirsimäe
Abstract:
For small-scale wastewater management, treatment wetlands (TWs) can be used as a low-cost alternative to conventional treatment facilities. However, the P removal capacity of TW systems is usually problematic. P removal in TWs depends mainly on the physico-chemical and hydrological properties of the filter material. The highest P removal efficiency has been shown through Ca-phosphate precipitation (i.e. active filtration) in Ca-rich alkaline filter materials, e.g. industrial by-products such as hydrated oil shale ash (HOSA) and metallurgical slags. In this contribution, we report preliminary results of a full-scale TW system using HOSA material for P removal from municipal wastewater at the Nõo site, Estonia. The main goals of this ongoing project are to evaluate: a) the long-term P removal efficiency of HOSA using real wastewater; b) the effect of a high organic loading rate; c) the effects of variable P loading on the P removal mechanism (adsorption/direct precipitation); and d) the form and composition of the phosphate precipitates. An onsite full-scale experiment with two concurrent filter systems for the treatment of municipal wastewater was established in September 2013. The system’s pretreatment steps include a septic tank (2 m2) and vertical down-flow LECA filters (3 m2 each), followed by horizontal subsurface HOSA filters (effective volume 8 m3 each). The overall organic and hydraulic loading rates of both systems are the same; however, the first system is operated in a stable hydraulic loading regime and the second in a variable loading regime that imitates the wastewater production of an average household. Piezometers for water sampling and perforated sample containers for filter material sampling were incorporated inside the filter beds to allow continuous in-situ monitoring. During the 18 months of operation, the median removal efficiencies (inflow to outflow) of both systems were over 99% for TP, 93% for COD, and 57% for TN.
However, we observed significant differences between samples collected at different points inside the filter systems. In both systems, we observed the development of preferential flow paths and zones with high and low loadings. The filters show the formation and gradual advance of a “dead” zone along the flow path (a zone with saturated filter material characterized by ineffective removal rates), which develops more rapidly in the system working under the variable loading regime. The formation of the “dead” zone is accompanied by the growth of organic substances on the filter material particles that evidently inhibit P removal. Phase analysis of the used filter materials by the X-ray diffraction method reveals the formation of minor amounts of amorphous Ca-phosphate precipitates. This finding is supported by ATR-FTIR and SEM-EDS measurements, which also reveal Ca-phosphate and authigenic carbonate precipitation. Our first experimental results demonstrate that organic pollution and the loading regime significantly affect the performance of hydrated ash filters. The material analyses also show that P is incorporated into a carbonate-substituted hydroxyapatite phase.
Keywords: active filtration, apatite, hydrated oil shale ash, organic pollution, phosphorus
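The removal-efficiency figures quoted above are simple inflow-to-outflow ratios. The sketch below shows the computation; the sample concentrations are invented for illustration, and only the formula mirrors the abstract.

```python
# Removal efficiency = (C_in - C_out) / C_in from inflow and outflow
# concentrations. Sample concentrations (mg/L) below are invented.
def removal_efficiency(c_in, c_out):
    return (c_in - c_out) / c_in

samples = {"TP": (6.0, 0.05), "COD": (430.0, 30.0), "TN": (70.0, 30.0)}
for name, (c_in, c_out) in samples.items():
    print(f"{name}: {removal_efficiency(c_in, c_out):.0%} removed")
```

For monitoring data, the abstract's use of the median across sampling events (rather than the mean) makes the reported efficiencies robust to occasional outlier readings.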
Procedia PDF Downloads 272
498 Simple Finite-Element Procedure for Modeling Crack Propagation in Reinforced Concrete Bridge Deck under Repetitive Moving Truck Wheel Loads
Authors: Rajwanlop Kumpoopong, Sukit Yindeesuk, Pornchai Silarom
Abstract:
Modeling cracks in concrete is complicated by its strain-softening behavior, which requires the use of sophisticated energy criteria of fracture mechanics to assure stable and convergent solutions in finite-element (FE) analysis, particularly for relatively large structures. However, for small-scale structures such as beams and slabs, a simpler approach that retains some shear stiffness in the cracking plane has been adopted in the literature to model the strain-softening behavior of concrete under monotonically increasing loading. According to the shear retaining approach, each element is assumed to be an isotropic material prior to cracking of the concrete. Once an element is cracked, the isotropic element is replaced with an orthotropic element in which the new orthotropic stiffness matrix is formulated with respect to the crack orientation. A shear transfer factor of 0.5 is used parallel to the crack plane. The shear retaining approach is adopted in this research to model cracks in an RC bridge deck, with some modifications to take into account the effect of repetitive moving truck wheel loads, as they cause fatigue cracking of the concrete. The first modification is the introduction of fatigue tests of concrete and reinforcing steel and the Palmgren-Miner linear criterion of cumulative damage into the conventional FE analysis. For a certain loading, the number of cycles to failure of each concrete or RC element can be calculated from the fatigue or S-N curves of concrete and reinforcing steel. The elements with the minimum number of cycles to failure are the failed elements. For the elements that do not fail, damage is accumulated according to the Palmgren-Miner linear criterion of cumulative damage. The stiffness of each failed element is modified, and the procedure is repeated until the deck slab fails. The total number of load cycles to failure of the deck slab can then be obtained, from which the S-N curve of the deck slab can be simulated.
The second modification concerns the shear transfer factor. Moving loading causes continuous rubbing of the crack interfaces, which greatly reduces the shear transfer mechanism. It is therefore conservatively assumed in this study that the analysis is conducted with a shear transfer factor of zero for the case of moving loading. A customized FE program has been developed using the MATLAB software to accommodate these modifications. The developed procedure has been validated against the fatigue test of a 1/6.6-scale AASHTO bridge deck under both fixed-point repetitive loading and moving loading reported in the literature. Results show good agreement between experimental and simulated S-N curves as well as between observed and simulated crack patterns. A significant contribution of the developed procedure is a series of S-N relations that can now be simulated at any desired level of cracking, in addition to the experimentally derived S-N relation at the failure of the deck slab. This permits the systematic investigation of crack propagation, or deterioration, of RC bridge decks, which appears to be useful information for highway agencies seeking to prolong the life of their bridge decks.
Keywords: bridge deck, cracking, deterioration, fatigue, finite-element, moving truck, reinforced concrete
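The cumulative-damage bookkeeping described above can be sketched in a few lines. The S-N relation used here (log10 N = a − b·S) and its parameters are generic placeholders, not the study's actual fatigue curves for concrete or reinforcing steel:

```python
# Sketch of the Palmgren-Miner bookkeeping described above. The S-N curve
# (log10(N) = a - b*S) and its parameters are illustrative placeholders,
# not the study's actual fatigue curves for concrete or reinforcing steel.

def cycles_to_failure(stress, a=14.0, b=0.1):
    """Cycles to failure N for a stress level S on a generic S-N curve."""
    return 10.0 ** (a - b * stress)

def miner_damage(load_blocks, a=14.0, b=0.1):
    """Accumulate damage D = sum(n_i / N_i) over (stress, n_cycles) blocks.

    An element is deemed failed once D >= 1.0.
    """
    damage = 0.0
    for stress, n in load_blocks:
        damage += n / cycles_to_failure(stress, a, b)
        if damage >= 1.0:
            return damage, True
    return damage, False

# A mild load block leaves the element intact (D = 0.1); a severe one fails it.
d, failed = miner_damage([(40.0, 1e9)])  # N(40) = 1e10, so D = 0.1
```

In the FE procedure, each failed element's stiffness would then be modified and the analysis repeated until the deck slab fails.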
Procedia PDF Downloads 256
497 Bridging Educational Research and Policymaking: The Development of Educational Think Tank in China
Authors: Yumei Han, Ling Li, Naiqing Song, Xiaoping Yang, Yuping Han
Abstract:
Educational think tanks are widely regarded as a significant part of a nation's soft power, promoting the scientific and democratic quality of educational policy making, and they play a critical role in bridging educational research in higher institutions and educational policy making. This study explores the concept, functions, and significance of educational think tanks in China, and conceptualizes a three-dimensional framework to analyze approaches for transforming research-based higher institutions into effective educational think tanks that serve educational policy making nationwide. Since 2014, the Ministry of Education of P.R. China has been promoting a strategy of developing new types of educational think tanks in higher institutions, and this strategy was put on the agenda of the 13th Five Year Plan for National Education Development released in 2017. In this context, a growing number of scholars are conducting studies that put forth strategies for promoting the development and transformation of new educational think tanks to serve the educational policy making process. Based on literature synthesis, policy text analysis, and analysis of theories about the policy making process and the relationship between educational research and policy making, this study constructed a three-dimensional conceptual framework to address the following questions: (a) what are the new features of educational think tanks in the new era compared with traditional think tanks; (b) what are the functional objectives of the new educational think tanks; (c) what are the organizational patterns and mechanisms of the new educational think tanks; and (d) through what approaches can traditional research-based higher institutions be developed or transformed into think tanks that effectively serve the educational policy making process.
The authors adopted a case study approach with five influential education policy study centers affiliated with top higher institutions in China and applied the three-dimensional conceptual framework to analyze their functional objectives, organizational patterns, and the academic pathways through which researchers contribute to the development of think tanks serving the education policy making process. Data were mainly collected through interviews with center administrators, leading researchers, and academic leaders in the institutions. Findings show that: (a) higher-institution-based think tanks serve multi-level objectives, providing evidence, theoretical foundations, strategies, or evaluation feedback for critical problem solving or policy making at the national, provincial, and city/county levels; (b) higher-institution-based think tanks organize various types of research programs over different time spans to serve different phases of policy planning, decision making, and policy implementation; (c) in order to transform research-based higher institutions into educational think tanks, the institutions must promote a paradigm shift toward issue-oriented field studies, large-scale data mining and analysis, empirical studies, and trans-disciplinary research collaborations; and (d) the five cases showed distinctive features in their ways of constructing think tanks, yet they also exposed obstacles and challenges such as the independence of the think tanks, the discourse shift from academic papers to consultancy reports for policy makers, weakness in empirical research methods, and lack of experience in trans-disciplinary collaboration. The authors finally put forth implications for think tank construction in China and abroad.
Keywords: education policy-making, educational research, educational think tank, higher institution
Procedia PDF Downloads 157
496 The Development of Wind Energy and Its Social Acceptance: The Role of Income Received by Wind Farm Owners, the Case of Galicia, Northwest Spain
Authors: X. Simon, D. Copena, M. Montero
Abstract:
The last decades have witnessed a significant increase in renewable energy, especially wind energy, in pursuit of sustainable development. Specialized literature in this field has carried out interesting case studies that extensively analyze both the environmental benefits of this energy and its social acceptance. However, to the best of our knowledge, work to date neither analyzes the role of private owners of land with wind potential within a broader territory of intensive wind deployment nor estimates their economic income in relation to social acceptance. This work fills this gap by focusing on Galicia, a territory housing over 4,000 wind turbines and almost 3,400 MW of power. The main difficulty in obtaining this financial information is that it is confidential, not public. We developed methodological techniques (semi-structured interviews and work groups), inserted within Participatory Research, to overcome this important obstacle. In this manner, the work directly compiles qualitative and quantitative information on the processes as well as the economic results derived from implementing wind energy in Galicia. During the field work, we held 106 semi-structured interviews and 32 workshops with owners of land occupied by wind farms. The compiled information made it possible to create the socioeconomic database on wind energy in Galicia (SDWEG). This database collects a diversity of quantitative and qualitative information and contains economic information on the income received by the owners of land occupied by wind farms. In the Galician case, the regulatory framework prevented local participation under the community wind farm formula. The possibility of local participation in the new energy model narrowed down to companies wanting to install a wind farm and demanding land occupation. The economic mechanism of local participation begins here, thus explaining the level of acceptance of wind farms.
Landowners can receive significant income, given that these payments constitute an important source of economic resources, favor local economic activity, allow rural areas to develop productive projects, and improve the standard of living of rural inhabitants. This work estimates that landowners in Galicia receive about 10 million euros per year in total wind revenues. This represents between 1% and 2% of total wind farm invoicing. On the other hand, relative revenues (euros per MW), far from the amounts reached in other regions, show enormous payment variability. This signals the absence of a regulated market, the predominance of partial agreements, and the existence of asymmetric positions between owners and developers. Sustainable development requires the replacement of conventional technologies by low environmental impact technologies, especially those that emit less CO₂. However, this new paradigm also requires rural owners to participate in the income derived from the structural transformation processes linked to sustainable development. This paper demonstrates that the regulatory framework may contribute to increasing sustainable technologies with high social acceptance without relevant local economic participation.
Keywords: regulatory framework, social acceptance, sustainable development, wind energy, wind income for landowners
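A quick back-of-the-envelope check makes the reported figures concrete. The inputs below are the rounded values stated above (not exact survey data): roughly 10 million euros per year distributed across roughly 3,400 MW, said to be 1-2% of total wind farm invoicing:

```python
# Back-of-the-envelope check of the figures reported above, using rounded
# illustrative inputs rather than the SDWEG data itself.

total_income_eur = 10_000_000  # estimated annual payments to landowners
installed_mw = 3_400           # installed wind power in Galicia

income_per_mw = total_income_eur / installed_mw  # ~2,941 EUR per MW per year
# If landowner income is 1-2% of total invoicing, total invoicing falls between:
invoicing_if_2pct = total_income_eur / 0.02      # ~500 million EUR/yr
invoicing_if_1pct = total_income_eur / 0.01      # ~1,000 million EUR/yr
```

So the relative revenue figure discussed above is on the order of a few thousand euros per MW per year, against total sector invoicing in the hundreds of millions.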
Procedia PDF Downloads 142
495 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’
Authors: Luminiţa Duţică, Gheorghe Duţică
Abstract:
One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti, and others. The heterophonic syntax has a history of growth, that is, a succession of different concepts and writing techniques. The trajectory of settling this phenomenon does not necessarily follow chronology: there are highly complex primary stages and advanced stages of returning to simple forms of writing. In folklore, plurimelodic simultaneities are free or random and originate in the (unintentional) differences/‘deviations’ from the state of unison, through a variety of ornaments, melismas, imitations, elongations and abbreviations, all in a flexible, non-periodic/immeasurable rhythmic framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. Of course, the explanation is simple if we consider the causal relationship between the elements of the sound vocabulary – in this case, modalism – and the typologies of vertical organization appropriate to it. Therefore, extending the ‘classic’ pathway of writing typologies (monody – polyphony – homophony), heterophony – applied equally to structures of modal, serial or synthesis vocabulary – necessarily claims a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata.
Concerned with the prospect of edifying a new musical ontology, the composer Ştefan Niculescu explored – along with the mathematical organization of heterophony according to his own original methods – the possibility of extrapolating this phenomenon to the macrostructural plane, arriving in this way at the unique form of ‘synchrony’. Founded on the coincidentia oppositorum principle (involving the ‘one-multiple’ binomial), the sound architecture imagined by Ştefan Niculescu consists of one (temporal) model/algorithm of articulation of two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism with macrotemporal amplitude, a strategy the composer developed practically throughout his creation (see the works: Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, and Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu – Symphony II, Opus dacicum – in which the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case for the level of complexity achieved by this type of vertical syntax in twentieth-century music.
Keywords: heterophony, modalism, serialism, synchrony, syntax
Procedia PDF Downloads 340
494 Pregnancy Outcome in Women with HIV Infection from a Tertiary Care Centre of India
Authors: Kavita Khoiwal, Vatsla Dadhwal, K. Aparna Sharma, Dipika Deka, Plabani Sarkar
Abstract:
Introduction: About 2.4 million (1.93 - 3.04 million) people are living with HIV/AIDS in India. Of all HIV infections, 39% (9,30,000) are among women. 5.4% of infections result from mother-to-child transmission (MTCT), and 25,000 infected children are born every year. Besides the risk of mother-to-child transmission of HIV, these women are at risk of higher adverse pregnancy outcomes. The objectives of the study were to compare the obstetric and neonatal outcomes of HIV-positive women with those of low-risk HIV-negative women, and to assess the effect of antiretroviral drugs on preterm birth and IUGR. Materials and Methods: This is a retrospective case record analysis of 212 HIV-positive women delivering between 2002 and 2015 in a tertiary health care centre, compared with 238 HIV-negative controls. Women who underwent medical termination of pregnancy and abortion were excluded from the study. Obstetric outcomes analyzed were pregnancy-induced hypertension, intrauterine growth restriction, preterm birth, anemia, gestational diabetes, and intrahepatic cholestasis of pregnancy. Neonatal outcomes analysed were birth weight, Apgar score, NICU admission, and perinatal transmission. Out of 212 women, 204 received antiretroviral therapy (ART) to prevent MTCT: 27 women received single-dose nevirapine (sdNVP) or sdNVP tailed with 7 days of zidovudine and lamivudine (ZDV + 3TC), 15 received ZDV, 82 women received duovir, and 80 women received triple drug therapy, depending upon the time period of presentation. Results: The mean age of the 212 HIV-positive women was 25.72±3.6 years; 101 women (47.6%) were primigravida. HIV-positive status was diagnosed during pregnancy in 200 women, while 12 women were diagnosed prior to conception. Among the 212 HIV-positive women, 20 (9.4%) had preterm delivery (< 37 weeks), 194 (91.5%) delivered by cesarean section, and 18 (8.5%) delivered vaginally.
178 neonates (83.9%) received exclusive top feeding and 34 neonates (16.03%) received exclusive breast feeding. When compared to low-risk HIV-negative women (n=238), HIV-positive women were more likely to deliver preterm (OR 1.27), have anemia (OR 1.39), and have intrauterine growth restriction (OR 2.07). The incidence of pregnancy-induced hypertension, diabetes mellitus, and ICP was not increased. Mean birth weight was significantly lower in HIV-positive women (2593.60±499 g) compared to HIV-negative women (2919±459 g). Complete follow-up is available for 148 neonates to date; the rest are under evaluation. Of these, 7 neonates were found to have HIV-positive status. The risk of preterm birth (P value = 0.039) and IUGR (P value = 0.739) was higher in HIV-positive women who did not receive any ART during pregnancy than in women who received ART. Conclusion: HIV-positive pregnant women are at increased risk of adverse pregnancy outcomes. A multidisciplinary team approach and the use of highly active antiretroviral therapy can optimize maternal and perinatal outcomes.
Keywords: antiretroviral therapy, HIV infection, IUGR, preterm birth
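Effect sizes such as the OR 1.27 quoted above are derived from a standard 2×2 exposure/outcome table; a minimal sketch, using made-up counts rather than the study's data:

```python
# Minimal sketch of how an odds ratio such as those quoted above (e.g.,
# OR 1.27 for preterm birth) is derived from a 2x2 table. The counts in
# the example are illustrative, not the study's data.

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Cross-product odds ratio (a*d)/(b*c) for a 2x2 exposure/outcome table."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Hypothetical counts: 10/100 exposed and 5/100 unexposed with the outcome.
example_or = odds_ratio(10, 90, 5, 95)  # (10*95)/(90*5) ~ 2.11
```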
Procedia PDF Downloads 259
493 Life-Cycle Assessment of Residential Buildings: Addressing the Influence of Commuting
Authors: J. Bastos, P. Marques, S. Batterman, F. Freire
Abstract:
Due to the demands of a growing urban population, it is crucial to manage urban development and its associated environmental impacts. While most environmental analyses have addressed buildings and transportation separately, both the design and the location of a building affect environmental performance, and focusing on one or the other can shift impacts and overlook improvement opportunities for more sustainable urban development. Recently, several life-cycle (LC) studies of residential buildings have integrated user transportation, focusing exclusively on primary energy demand and/or greenhouse gas emissions. Additionally, most papers considered only private transportation (mainly the car). Although the car is likely to have the largest share both in terms of use and associated impacts, exploring the variability associated with mode choice is relevant for comprehensive assessments and, eventually, for supporting decision-makers. This paper presents a life-cycle assessment (LCA) of a residential building in Lisbon (Portugal), addressing building construction, building use, and user transportation (commuting with private and public transportation). Five environmental indicators or categories are considered: (i) non-renewable primary energy (NRE), (ii) greenhouse gas intensity (GHG), (iii) eutrophication (EUT), (iv) acidification (ACID), and (v) ozone layer depletion (OLD). In a first stage, the analysis addresses the overall life-cycle considering the statistical modal mix for commuting in the residence location. Then, a comparative analysis of the different available transportation modes addresses the influence that mode choice variability has on the results. The results highlight the large contribution of transportation to the overall LC results in all categories.
NRE and GHG show a strong correlation, as the three LC phases contribute similar shares to both: building construction accounts for 6-9%, building use for 44-45%, and user transportation for 48% of the overall results. However, for the other impact categories there is a large variation in the relative contribution of each phase. Transport is the most significant phase in OLD (60%); however, in EUT and ACID building use has the largest contribution to the overall LC (55% and 64%, respectively). In these categories, transportation accounts for 31-38%. A comparative analysis was also performed for four alternative transport modes for household commuting: car, bus, motorcycle, and company/school collective transport. The car has the highest results in all impact categories. When compared to the overall LC with commuting by car, mode choice accounts for a variability of about 35% in NRE, GHG and OLD (the categories in which transportation accounted for the largest share of the LC), 24% in EUT, and 16% in ACID. NRE and GHG show a strong correlation because all modes rely on internal combustion engines. The second highest results for NRE, GHG and OLD are associated with commuting by motorcycle; however, for ACID and EUT this mode performs better than bus and company/school transport. No single transportation mode performed best in all impact categories. Integrated assessments of buildings are needed to avoid shifting impacts between life-cycle phases and environmental categories, and ultimately to support decision-makers.
Keywords: environmental impacts, LCA, Lisbon, transport
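The phase-contribution percentages above come from simple bookkeeping: each phase's impact score for a category is expressed as a share of the life-cycle total. A sketch with illustrative scores that loosely mirror the GHG shares reported:

```python
# Sketch of the phase-contribution bookkeeping behind the percentages above.
# The scores are illustrative, loosely mirroring the reported GHG shares,
# not the case-study's characterized impact values.

def phase_shares(impacts):
    """Map each phase's impact score to its fraction of the life-cycle total."""
    total = sum(impacts.values())
    return {phase: value / total for phase, value in impacts.items()}

ghg_like = {"construction": 8.0, "use": 44.0, "transport": 48.0}
shares = phase_shares(ghg_like)  # e.g., transport -> 0.48
```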
Procedia PDF Downloads 362
492 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments present challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limitation on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity value of 0.5. An open source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted-light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images.
Similar training sessions with improved membrane image quality, including clear lining and shape of the membrane that clearly showed the boundaries of each cell, proportionally improved the nuclei predictions, reducing errors relative to the ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need to use multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
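One plausible reading of the normalization step in the Methods is a linear rescaling of each z-stack image to a mean pixel intensity of 0.5. The sketch below is an assumed implementation of that step, not the authors' actual pipeline:

```python
import numpy as np

# One plausible reading of the normalization step described in the Methods:
# scale each z-stack image so its mean pixel intensity is 0.5. This is an
# assumed implementation, not the authors' actual software pipeline.

def normalize_mean(image, target_mean=0.5):
    """Linearly rescale a non-negative image so its mean equals target_mean."""
    img = np.asarray(image, dtype=np.float64)
    mean = img.mean()
    if mean == 0.0:
        return img  # blank image: nothing to rescale
    return img * (target_mean / mean)

# Apply per image across a stand-in 20-slice z-stack of 64x64 images.
stack = np.random.default_rng(0).random((20, 64, 64))
normalized = np.stack([normalize_mean(im) for im in stack])
```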
Procedia PDF Downloads 203
491 The Effectiveness of Congressional Redistricting Commissions: A Comparative Approach Investigating the Ability of Commissions to Reduce Gerrymandering with the Wilcoxon Signed-Rank Test
Authors: Arvind Salem
Abstract:
Voters across the country are transferring the power of redistricting from state legislatures to commissions in order to secure “fairer” districts by curbing the influence of gerrymandering on redistricting. Gerrymandering, the intentional drawing of distorted districts to achieve political advantage, has become extremely prevalent, generating widespread voter dissatisfaction and leading states to adopt commissions for redistricting. However, the efficacy of these commissions is dubious: some argue that they constitute a panacea for gerrymandering, while others contend that commissions have relatively little effect on it. A result showing that commissions are effective would allay these fears, supplying ammunition for activists across the country to advocate for commissions in their states and reducing the influence of gerrymandering across the nation. However, a result against commissions may reaffirm doubts about commissions and pressure lawmakers to improve commissions or even abandon the commission system entirely. Additionally, these commissions are publicly funded, so voters have a financial interest in, and a responsibility to know, whether they are effective. Currently, nine states place commissions in charge of redistricting: Arizona, California, Colorado, Michigan, Idaho, Montana, Washington, and New Jersey (Hawaii also has a commission but is excluded for reasons discussed below). This study compares the degree of gerrymandering in the 2022 election (“after”) to that in the election in which each state's voters decided to adopt commissions (“before”). The before election provides a valuable benchmark for assessing the efficacy of commissions, since voters in those elections clearly found the districts to be unfair; comparing the current election to that one is therefore a good way to determine whether commissions have improved the situation.
At the time Hawaii adopted commissions, it comprised merely a single at-large district, so its “before” metrics could not be calculated and it was excluded. This study uses three metrics to quantify the degree of gerrymandering: the efficiency gap, the difference between the percentage of seats and the percentage of votes, and the mean-median difference. Each of these metrics has unique advantages and disadvantages, but together they form a balanced approach to quantifying gerrymandering. The study uses a Wilcoxon signed-rank test with a null hypothesis that the value of each metric after the election is greater than or equal to its value before, and an alternative hypothesis that the value of each metric is greater before the election than after, using a 0.05 significance level and an expected difference of 0. Accepting the alternative hypothesis would constitute evidence that commissions reduce gerrymandering to a statistically significant degree. However, this study could not conclude that commissions are effective. The p-values obtained for all three metrics (p=0.42 for the efficiency gap, p=0.94 for the seats-votes percentage difference, and p=0.47 for the mean-median difference) were far above the threshold needed to conclude that commissions are effective. These results temper optimism about commissions and should spur serious discussion about their effectiveness and about ways to change them going forward so that they can accomplish their goal of generating fairer districts.
Keywords: commissions, elections, gerrymandering, redistricting
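Of the three metrics, the efficiency gap is the most involved to compute. A sketch under the usual simplification that the winning threshold is half the two-party vote in each district; the vote counts are illustrative, not election data:

```python
# Sketch of the efficiency gap, one of the three metrics named above:
# (wasted votes for A - wasted votes for B) / total votes, where a winner's
# wasted votes are those beyond the winning threshold and a loser's wasted
# votes are all of them. District vote counts below are illustrative.

def efficiency_gap(districts):
    """districts: iterable of (votes_a, votes_b) pairs for one state."""
    wasted_a = wasted_b = total = 0.0
    for a, b in districts:
        total += a + b
        threshold = (a + b) / 2.0  # simplified winning threshold
        if a > b:
            wasted_a += a - threshold
            wasted_b += b
        else:
            wasted_b += b - threshold
            wasted_a += a
    return (wasted_a - wasted_b) / total

# Party B wastes more votes in this toy plan, so the gap is negative
# (i.e., the plan favors party A under this sign convention).
eg = efficiency_gap([(60, 40), (60, 40), (40, 60)])
```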
Procedia PDF Downloads 72
490 Invasion of Scaevola sericea (Goodeniaceae) in Cuba: Invasive Dynamic and Density-Dependent Relationship with the Native Species Tournefortia gnaphalodes (Boraginaceae)
Authors: Jorge Ferro-Diaz, Lazaro Marquez-Llauger, Jose Alberto Camejo-Lamas, Lazaro Marquez-Govea
Abstract:
The invasion of Scaevola sericea Vahl (Goodeniaceae) in Cuba is a recent process; this exotic invasive species was reported for the first time in the national territory around 2008. S. sericea is native to the coasts around the Indian Ocean and the western Pacific and is common on sandy beaches; it has expanded rapidly around the planet by either natural or anthropic means, mainly due to its use in hotel gardening. Cuba is highly vulnerable to colonization by such species, mainly due to tropical hurricanes, which have increased in the last decades; the invasion also affects native species such as Tournefortia gnaphalodes (L.) R. Br. (Boraginaceae), which shows invasive manifestations because of the unbalanced state of the demographic processes of littoral vegetation, studied by the authors during the last 10 years. The fast development of Cuban tourism has encouraged the use in gardening of exotic species that invade large sectors of sandy coasts. Taking into account the importance of assessing the dimensions of the impacts and adopting effective control measures, a monitoring program for the invasion of S. sericea in Cuba was undertaken. The program has been implemented since 2013, and its main objective was to identify invasive patterns and interactions with other native species of coastal vegetation. This experience also aimed to validate the design of, and propose, a standardized monitoring protocol to be applied throughout the country. In the Cuban territory, 12 sites were chosen, in which 24 permanent plots of 100 m2 were established; measurements were taken twice a year, considering variables such as abundance, plant height, soil cover, flora and companion vegetation, density, and frequency; other physical variables of the beaches were also measured. Similarly, the same variables were measured for associated individuals of T. gnaphalodes. The results of these first four years allowed us to document patterns of S. sericea invasion, highlighting its use of adventitious roots to enhance colonization, and to characterize demographic indicators, ecosystem effects, and interactions with native plants. A density-dependent relationship with T. gnaphalodes was documented, revealing a controlling effect on S. sericea, and a manipulation experiment was applied to evaluate possible management actions to be incorporated into the plans of the protected areas involved. With these results, it was concluded, for the evaluated sites, that S. sericea has shown an invasion dynamics governed by the effects of coastal dynamics, more intense on beaches where the native vegetation is affected and more controlled on beaches with better preserved vegetation. It was found that once S. sericea is established, the mechanism that most reinforces its invasion is the use of adventitious roots, used to expand the patches and colonize beach sectors. It was also found that as the density of T. gnaphalodes increases, it halts the expansion of S. sericea and reduces its colonization possibilities, behaving as a natural controller of this biological invasion. The results include a proposal for a new monitoring protocol for Scaevola sericea in Cuba, with the possibility of extending its implementation to other countries in the region.
Keywords: biological invasion, exotic invasive species, plant interactions, Scaevola sericea
Procedia PDF Downloads 226
489 Role of Functional Divergence in Specific Inhibitor Design: Using γ-Glutamyltranspeptidase (GGT) as a Model Protein
Authors: Ved Vrat Verma, Rani Gupta, Manisha Goel
Abstract:
γ-Glutamyltranspeptidase (GGT: EC 2.3.2.2) is an N-terminal nucleophile hydrolase conserved in all three domains of life. GGT plays a key role in glutathione metabolism, where it catalyzes the breakage of γ-glutamyl bonds and the transfer of the γ-glutamyl group to water (hydrolytic activity) or to amino acids or short peptides (transpeptidase activity). GGTs from bacteria, archaea, and eukaryotes (human, rat, and mouse) are homologous proteins sharing >50% sequence similarity and a conserved four-layered αββα sandwich-like three-dimensional structural fold. These proteins, though structurally similar to each other, are quite diverse in their enzyme activity: some GGTs are better at hydrolysis but poor in transpeptidase activity, whereas many others show the opposite behaviour. GGT is known to be involved in various diseases such as asthma, Parkinson's disease, arthritis, and gastric cancer. Its inhibition prior to chemotherapy has been shown to sensitize tumours to the treatment. Microbial GGT is also known to be a virulence factor, important for the colonization of bacteria in the host. However, all known inhibitors (mimics of its native substrate, glutamate) are highly toxic because they interfere with other enzyme pathways. Nevertheless, a few successful efforts at designing species-specific inhibitors have been reported previously. We aim to leverage the diversity seen in the GGT family (pathogen vs. eukaryote) for designing specific inhibitors. Thus, in the present study, we have used the DIVERGE software to identify sites in GGT proteins that are crucial for the functional and structural divergence of these proteins. Since type II divergence sites vary in a clade-specific manner, they were the focus of interest throughout the study. Type II divergent sites were identified for the pathogen vs. eukaryote clusters, and the sites were marked on the clade-specific representative structures HpGGT (2QM6) and HmGGT (4ZCG) of the pathogen and eukaryote clades, respectively.
The crucial divergent sites within a 15 Å radius of the binding cavity were highlighted, and in-silico mutations were performed at these sites to delineate their role in the mechanism of catalysis and protein folding. Further, amino acid network (AAN) analysis was performed with Cytoscape to delineate assortative mixing for the cavity divergent sites, which could strengthen our hypothesis. Additionally, molecular dynamics simulations were performed for the wild-type and mutant complexes close to physiological conditions (pH 7.0, 0.1 M ionic strength, and 1 atm pressure), and the role of the putative divergence sites and the structural integrity of the homologous proteins were analysed. The dynamics data were scrutinized in terms of RMSD, RMSF, non-native H-bonds, and salt bridges. The RMSD and RMSF fluctuations of the protein complexes were compared, and the changes at the protein-ligand binding sites were highlighted. The outcomes of our study highlight some crucial divergent sites that could be used for designing novel inhibitors in a species-specific manner. Since, for drug development, it is challenging to design a novel drug by targeting a protein that has a close homolog in eukaryotes, this study could set up an initial platform to overcome this challenge and help deduce more effective targets for novel drug discovery.
Keywords: γ-glutamyltranspeptidase, divergence, species-specific, drug design
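For pre-aligned structures, the RMSD measure used to scrutinize the trajectories above reduces to a short calculation; a minimal sketch (structural superposition is assumed already done and omitted here):

```python
import math

# Minimal sketch of the RMSD measure used above to compare wild-type and
# mutant trajectories: root-mean-square deviation between two equal-length
# sets of 3D coordinates. Structural alignment is assumed already done.

def rmsd(coords_a, coords_b):
    """RMSD between paired (x, y, z) coordinate lists of equal length."""
    if len(coords_a) != len(coords_b) or not coords_a:
        raise ValueError("coordinate lists must be non-empty and equal length")
    total = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(total / len(coords_a))

# Identical structures give 0; a single atom displaced by (3, 4, 0) gives 5.
```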
488 Arthroscopic Superior Capsular Reconstruction Using the Long Head of the Biceps Tendon (LHBT)
Authors: Ho Sy Nam, Tang Ha Nam Anh
Abstract:
Background: Rotator cuff tears are a common problem in the aging population, and the prevalence of massive rotator cuff tears ranges in some studies from 10% to 40%. Of irreparable rotator cuff tears (IRCTs), which are mostly associated with massive tear size, an estimated 79% re-tear after surgical repair. Recent studies have shown that superior capsule reconstruction (SCR) for massive rotator cuff tears can be an efficient technique, with optimistic clinical scores and restoration of glenohumeral stability. SCR most commonly uses either a fascia lata autograft or a dermal allograft, each of which has its own benefits and drawbacks (such as potential donor-site morbidity, allergic reactions, and high cost). We propose a simple SCR technique that uses the long head of the biceps tendon (LHBT) as a local autograft, thereby eliminating the comorbidities related to graft harvesting. The proximal portion of the LHBT is relocated to the footprint and secured as the SCR, serving both to stabilize the glenohumeral joint and to maintain the vascular supply that aids healing. Objective: The purpose of this study is to assess the clinical outcomes of patients with large to massive rotator cuff tears (RCTs) treated by SCR using the LHBT. Materials and methods: A study was performed of consecutive patients with large to massive RCTs who were treated by SCR using the LHBT between January 2022 and December 2022. One double-loaded suture anchor was used to secure the long head of the biceps to the middle of the footprint. Two more anchors were used to repair the rotator cuff with a single-row technique, placed anteriorly and posteriorly on the lateral side of the previously transposed LHBT. Results: The 8 patients (3 men and 5 women) had an average age of 61.25 years (range, 48 to 76 years) at the time of surgery. The average follow-up was 8.2 months (range, 6 to 10 months) after surgery.
The average ASES score improved from 45.8 preoperatively to 85.83 postoperatively, and the average postoperative UCLA score was 29.12. The VAS score improved from 5.9 to 1.12. The mean preoperative range of motion was 72° ± 16° of forward flexion and 28° ± 8° of external rotation; the mean postoperative values were 131° ± 22° and 63° ± 6°, respectively. There were no cases of progression of osteoarthritis or rotator cuff muscle atrophy. Conclusion: SCR using the LHBT is a treatment option for patients with large or massive rotator cuff tears. It can restore superior glenohumeral stability and shoulder function, and it can be an effective procedure for selected patients, helping to avoid progression to cuff tear arthropathy.
Keywords: superior capsule reconstruction, large or massive rotator cuff tears, the long head of the biceps, stabilize the glenohumeral joint