Search results for: limit cycle
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3531


261 Polymer Nanocomposite Containing Silver Nanoparticles for Wound Healing

Authors: Patrícia Severino, Luciana Nalone, Daniele Martins, Marco Chaud, Classius Ferreira, Cristiane Bani, Ricardo Albuquerque

Abstract:

Hydrogels produced with polymers have been used in the development of dressings for wound treatment and tissue revitalization. Our study on polymer nanocomposites containing silver nanoparticles shows antimicrobial activity and applications in wound healing. The effects are linked with the slow oxidation and Ag⁺ liberation into the biological environment. Furthermore, bacterial cell membrane penetration and metabolic disruption through cell cycle disarrangement also contribute to microbial cell death. The antimicrobial activity of silver has been known for many years, and previous reports show that low silver concentrations are safe for human use. This work aims to develop a hydrogel using natural polymers (sodium alginate and gelatin) combined with silver nanoparticles, with antimicrobial properties for wound healing in cutaneous lesions. The hydrogel development utilized different sodium alginate and gelatin proportions (20:80, 50:50 and 80:20). Silver nanoparticle incorporation was evaluated at concentrations of 1.0, 2.0 and 4.0 mM. The physico-chemical properties of the formulation were evaluated using ultraviolet-visible (UV-Vis) absorption spectroscopy, Fourier transform infrared (FTIR) spectroscopy, differential scanning calorimetry (DSC), and thermogravimetric (TG) analysis. The morphological characterization was made using transmission electron microscopy (TEM). A human fibroblast (L929) viability assay was performed, together with a minimum inhibitory concentration (MIC) assessment and an in vivo cicatrizant test. The UV-Vis results suggested that sodium alginate and gelatin in the 80:20 proportion with 4 mM AgNO₃ gave the best hydrogel formulation. The nanoparticle absorption spectra showed a maximum band around 430-450 nm, which suggests a spheroidal form. The TG curve exhibited two weight loss events. DSC indicated one endothermic peak at 230-250 °C, due to sample fusion.
The polymers acted as stabilizers of the nanoparticles, defining their size and shape. The L929 human fibroblast viability assay gave 105% cell viability for the negative control, while gelatin presented 96% viability, alginate:gelatin (80:20) 96.66%, and alginate 100.33%. The sodium alginate:gelatin (80:20) formulation exhibited significant antimicrobial activity, with minimal bacterial growth at 1.06 mg·mL⁻¹ for Pseudomonas aeruginosa and 0.53 mg·mL⁻¹ for Staphylococcus aureus. The in vivo results showed a significant reduction in wound surface area. On the seventh day, the hydrogel-nanoparticle formulation had reduced the total area of injury by 81.14%, while the control reached a 45.66% reduction. The results suggest that the silver-hydrogel nanoformulation has potential for wound dressing therapeutics.
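The reported reductions in wound surface area follow from simple area arithmetic; a minimal sketch (the absolute areas below are hypothetical illustrative values, only the reported percentages come from the study):

```python
# Percent reduction in wound surface area relative to the initial area.
# The day-0 and day-7 areas here are hypothetical illustrative values;
# the study reports only the resulting percentages (81.14% vs. 45.66%).

def percent_reduction(initial_area: float, final_area: float) -> float:
    """Reduction of wound surface area relative to the initial area, in %."""
    return (initial_area - final_area) / initial_area * 100.0

# Hypothetical example: a 100 mm² wound shrinking to 18.86 mm² by day 7
print(round(percent_reduction(100.0, 18.86), 2))  # 81.14
```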

Keywords: nanocomposite, wound healing, hydrogel, silver nanoparticle

Procedia PDF Downloads 85
260 Changes in Physicochemical Characteristics of a Serpentine Soil and in Root Architecture of a Hyperaccumulating Plant Cropped with a Legume

Authors: Ramez F. Saad, Ahmad Kobaissi, Bernard Amiaud, Julien Ruelle, Emile Benizri

Abstract:

Agromining is a new technology that establishes agricultural systems on ultramafic soils in order to produce valuable metal compounds such as nickel (Ni), with the final aim of restoring a soil's agricultural functions. However, ultramafic soils are characterized by low fertility levels, which can limit yields of hyperaccumulators and metal phytoextraction. The objectives of the present work were to test whether the association of a hyperaccumulating plant (Alyssum murale) and a Fabaceae (Vicia sativa var. Prontivesa) could induce changes in the physicochemical characteristics of a serpentine soil and in the root architecture of the hyperaccumulating plant, and thereby lead to efficient agromining practices through soil quality improvement. Based on standard agricultural systems consisting of the association of legumes with another crop such as wheat or rape, a three-month rhizobox experiment was carried out to study the effect of co-cropping (Co) or rotation (Ro) of a hyperaccumulating plant (Alyssum murale) with a legume (Vicia sativa), with incorporation of the legume biomass into the soil, in comparison with mineral fertilization (FMo), on the structure and physicochemical properties of an ultramafic soil and on root architecture. All parameters measured on Alyssum murale (biomass, C and N contents, and taken-up Ni) showed the highest values in the co-cropping system, followed by mineral fertilization and rotation (Co > FMo > Ro), except for root nickel yield, for which rotation was better than mineral fertilization (Ro > FMo). The rhizosphere soil of Alyssum murale in co-cropping had larger soil particle size and better aggregate stability than other treatments. Using geostatistics, co-cropped Alyssum murale showed a greater root surface area spatial distribution. Moreover, co-cropping and rotation induced lower soil DTPA-extractable nickel concentrations than other treatments, but higher pH values.
Alyssum murale co-cropped with a legume showed a higher biomass production, improved soil physical characteristics and enhanced nickel phytoextraction. This study showed that the introduction of a legume into Ni agromining systems could improve yields of dry biomass of the hyperaccumulating plant used and consequently, the yields of Ni. Our strategy can decrease the need to apply fertilizers and thus minimizes the risk of nitrogen leaching and underground water pollution. Co-cropping of Alyssum murale with the legume showed a clear tendency to increase nickel phytoextraction and plant biomass in comparison to rotation treatment and fertilized mono-culture. In addition, co-cropping improved soil physical characteristics and soil structure through larger and more stabilized aggregates. It is, therefore, reasonable to conclude that the use of legumes in Ni-agromining systems could be a good strategy to reduce chemical inputs and to restore soil agricultural functions. Improving the agromining system by the replacement of inorganic fertilizers could simultaneously be a safe way of rehabilitating degraded soils and a method to restore soil quality and functions leading to the recovery of ecosystem services.
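The nickel yields discussed above are conventionally estimated as dry biomass multiplied by shoot Ni concentration; a minimal sketch, with all numerical values hypothetical illustrations rather than measurements from this study:

```python
# Conventional agromining yield estimate: Ni yield = dry biomass × Ni content.
# Both input values below are hypothetical illustrative assumptions.

def ni_yield_kg_per_ha(dry_biomass_t_ha: float, ni_mg_per_kg: float) -> float:
    """Ni yield (kg/ha) from dry biomass (t/ha) and Ni content (mg/kg dry matter)."""
    return dry_biomass_t_ha * 1000.0 * ni_mg_per_kg / 1e6

# e.g. 10 t/ha of dry biomass at 10,000 mg Ni/kg (1% Ni, a strong hyperaccumulator)
print(ni_yield_kg_per_ha(10.0, 10000.0))  # 100.0 kg Ni/ha
```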

Keywords: plant association, legumes, hyperaccumulating plants, ultramafic soil physicochemical properties

Procedia PDF Downloads 146
259 Investigation of Xanthomonas euvesicatoria on Seed Germination and Seed to Seedling Transmission in Tomato

Authors: H. Mayton, X. Yan, A. G. Taylor

Abstract:

Infested tomato seeds were used to investigate the influence of Xanthomonas euvesicatoria on germination and seed to seedling transmission in controlled environment and greenhouse assays, in an effort to develop effective seed treatments and characterize seed-borne transmission of bacterial leaf spot of tomato. Bacterial leaf spot of tomato, caused by four distinct Xanthomonas species, X. euvesicatoria, X. gardneri, X. perforans, and X. vesicatoria, is a serious disease worldwide. In the United States, disease prevention is expensive for commercial growers in warm, humid regions of the country, and crop losses can be devastating. In this study, four different infested tomato seed lots were extracted from tomato fruits infected with bacterial leaf spot from a field in New York State in 2017 that had been inoculated with X. euvesicatoria. In addition, vacuum infiltration at 61 kilopascals for 1, 5, 10, and 15 minutes and seed soaking for 5, 10, 15, and 30 minutes with different bacterial concentrations were used to artificially infest seed in the laboratory. For controlled environment assays, infested tomato seeds from the field and laboratory were placed on moistened blue blotter in square plastic boxes (10 cm x 10 cm) and incubated at 20/30 ˚C with an 8/16 hour light cycle, respectively. Infested tomato seeds from the field and laboratory were also planted in small plastic trays in soil (peat-lite medium) and placed in the greenhouse with 24/18 ˚C day and night temperatures, respectively, with a 14-hour photoperiod. Seed germination was assessed after eight days in the laboratory and 14 days in the greenhouse. Polymerase chain reaction (PCR) using the hrpB7 primers (RST65 [5’-GTCGTCGTTACGGCAAGGTGGTG-3’] and RST69 [5’-TCGCCCAGCGTCATCAGGCCATC-3’]) was performed to confirm the presence or absence of the bacterial pathogen in seed lots collected from the field and in germinating seedlings in all experiments.
For infested seed lots from the field, germination ranged from 84-98% and was lowest (84%) in the seed lot with the highest level of bacterial infestation (55%). No adverse effect on germination was observed for artificially infested seeds at any bacterial concentration or method of infiltration when compared to a non-infested control. Germination in laboratory assays for artificially infested seeds ranged from 82-100%. In the controlled environment assays, 2.5% of seedlings were PCR positive for the pathogen, and in the greenhouse assays, no infected seedlings were detected. From these experiments, X. euvesicatoria does not appear to adversely influence germination. The lower rate of germination from field-collected seed may be due to contamination with multiple pathogens and saprophytic organisms, as no effect on germination was observed from artificial bacterial seed infestation in the laboratory. No evidence of systemic movement from seed to seedling was observed in the greenhouse assays; however, in the controlled environment assays, some seedlings were PCR positive. Additional experiments are underway with green fluorescent protein-expressing isolates to further characterize seed to seedling transmission of the bacterial leaf spot pathogen in tomato.

Keywords: bacterial leaf spot, seed germination, tomato, Xanthomonas euvesicatoria

Procedia PDF Downloads 116
258 Localized Recharge Modeling of a Coastal Aquifer from a Dam Reservoir (Korba, Tunisia)

Authors: Nejmeddine Ouhichi, Fethi Lachaal, Radhouane Hamdi, Olivier Grunberger

Abstract:

Located on the Cap Bon peninsula (Tunisia), the Lebna dam was built in 1987 to counter the salt intrusion taking place in the coastal aquifer of Korba. The first intention was to reduce coastal groundwater over-pumping by supplying surface water to a large irrigation system. An unpredicted beneficial effect was recorded: a direct localized recharge of the coastal aquifer by leakage through the geological material of the southern bank of the lake. The hydrological balance of the reservoir gave an estimate of the annual leakage volume, but dynamic processes and a sound quantification of recharge inputs are still required to understand the localized effect of the recharge in terms of piezometry and quality. The present work focused on simulating the recharge process to confirm this hypothesis, establish a sound quantification of the water supply to the coastal aquifer, and extend it to multi-annual effects. A spatial frame of 30 km² was used for modeling. Intensive outcrop and geophysical surveys based on 68 electrical resistivity soundings were used to characterize the 3D geometry of the aquifer and the limit of the Plio-Quaternary geological material concerned by the underground flow paths. Permeabilities were determined using 17 pumping tests on wells and piezometers. Six seasonal piezometric surveys of 71 wells around the southern reservoir banks were performed during the 2019-2021 period. Eight monitoring boreholes with high-frequency (15 min) piezometric data were used to examine dynamical aspects. Model boundary conditions were specified using the geophysical interpretations coupled with the piezometric maps. The dam-groundwater flow model was built using Visual MODFLOW software. Firstly, a steady-state calibration based on the first piezometric map of February 2019 was established to estimate the permanent flow related to the different reservoir levels.
Secondly, piezometric data for the 2019-2021 period were used for transient-state calibration and to confirm the robustness of the model. Preliminary results confirmed the temporal link between the reservoir level and the localized recharge flow, with a strong threshold effect for levels below 16 m.a.s.l. The good agreement between the computed flow through recharge cells on the southern banks and the hydrological budget of the reservoir opens the path to future simulation scenarios of the dilution plume imposed by the localized recharge. The simulation results indicate a potential for storage of up to 17 mm/year in existing wells, under gravity-feed conditions during reservoir level increases over the three years of operation. The Lebna dam groundwater flow model thus characterized a spatiotemporal relation between groundwater and surface water.
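Leakage through a bank of known permeability can be bounded with Darcy's law, Q = K·A·i, the relation that underlies the flow model's recharge cells; a minimal sketch in which every parameter value is a hypothetical assumption, not a calibrated value from the study (the study's actual quantification comes from the MODFLOW model):

```python
# Order-of-magnitude bank-leakage estimate using Darcy's law, Q = K * A * i.
# K, the seepage area and the gradient are hypothetical illustrative values.

def darcy_flow_m3_per_day(K_m_per_day: float, area_m2: float, gradient: float) -> float:
    """Darcy flow Q = K * A * i, in m³/day."""
    return K_m_per_day * area_m2 * gradient

# e.g. K = 0.5 m/day, a 2000 m² seepage face, hydraulic gradient of 0.01
print(darcy_flow_m3_per_day(0.5, 2000.0, 0.01))  # 10.0 m³/day
```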

Keywords: leakage, MODFLOW, saltwater intrusion, surface water-groundwater interaction

Procedia PDF Downloads 122
257 Improving Patient and Clinician Experience of Oral Surgery Telephone Clinics

Authors: Katie Dolaghan, Christina Tran, Kim Hamilton, Amanda Beresford, Vicky Adams, Jamie Toole, John Marley

Abstract:

During the COVID-19 pandemic, routine face-to-face outpatient appointments were not possible. As a result, many branches of healthcare started virtual clinics, and these clinics have continued following the return to face-to-face patient appointments. With these new types of clinic, it is important to ensure that a high standard of patient care is maintained. A quality improvement project was therefore carried out to enhance the patient and clinician experience of the telephone clinics while keeping them a safe, effective and efficient use of resources. The project began by developing a process map for the consultation process and agreeing on the design of a driver diagram and tests of change. In plan-do-study-act (PDSA) cycle 1, a single consultant completed an online survey after every patient encounter over a five-week period. Baseline patient responses were collected using a follow-up telephone survey for each patient. Piloting led to several iterations of both survey designs. Salient results of PDSA 1 included patients not receiving appointment letters, patients feeling more anxious about a virtual appointment, and many preferring a face-to-face appointment. The initial clinician data showed a positive response, with a provisional diagnosis being reached in 96.4% of encounters. PDSA cycle 2 included provision of a patient information sheet; information leaflets relevant to the patients’ conditions were developed and sent following new patient telephone clinics, with follow-up survey analysis as before to monitor for signals of change. We also introduced the ability for patients to send images of their lesion prior to the consultation. Following the changes implemented, we noted an improvement in patient satisfaction; in fact, many patients preferred virtual clinics, as they led to less disruption of their working lives.
The extra reading material both before and after the appointments eased patients’ anxiety about virtual clinics and helped them prepare for their appointment. Following the patient feedback, virtual clinics are now used for review patients as well, with all four consultants within the department continuing to utilise them. This presentation will explore the progression of these clinics and the reasons they are still operating following the return to face-to-face appointments. The lessons gained using a QI approach have helped to deliver an optimal service that is valid and reliable, as well as safe, effective and efficient for the patient, while helping reduce the pressures of ever-increasing waiting lists. In summary, our work in improving the quality of virtual clinics has resulted in improved patient satisfaction along with reduced pressure on the facilities of the health trust.

Keywords: clinic, satisfaction, telephone, virtual

Procedia PDF Downloads 43
256 Diagnosis of Intermittent High Vibration Peaks in Industrial Gas Turbine Using Advanced Vibrations Analysis

Authors: Abubakar Rashid, Muhammad Saad, Faheem Ahmed

Abstract:

This paper provides a comprehensive study of the diagnosis of intermittent high vibrations on an industrial gas turbine using detailed vibration analysis, followed by its rectification. Engro Polymer & Chemicals Limited, a chlor-vinyl complex located in Pakistan, has a captive combined cycle power plant with two 28 MW gas turbines (make Hitachi) and one 15 MW steam turbine. In 2018, the organization faced an issue of high vibrations on one of the gas turbines. These high vibration peaks appeared intermittently on both the compressor’s drive end (DE) and the turbine’s non-drive end (NDE) bearings. The amplitude of the peaks was 150-170% of baseline values on the DE bearing and 200-300% on the NDE bearing. In one of these episodes, the gas turbine tripped on the “High Vibrations Trip” logic, actuated at 155 µm. Limited instrumentation is available on the machine, which is monitored with a GE Bently Nevada 3300 system having two proximity probes installed at the turbine NDE, compressor DE, and generator DE and NDE bearings. The machine’s transient ramp-up and steady-state data were collected using ADRE SXP and DSPI 408. Since only one key phasor is installed on the turbine high-speed shaft, a derived key phasor was configured in ADRE to obtain the low-speed shaft rpm required for data analysis. Analysis of the Bode plots, shaft centerline plot, polar plot and orbit plots showed evident rubbing on the turbine NDE, along with increased clearance of the turbine NDE radial bearing. The subject bearing was then inspected, and heavy deposition of carbonized coke was found on the labyrinth seals of the bearing housing, with clear rubbing marks on the shaft and housing covering 20-25 degrees of the inner radius of the labyrinth seals. The collected coke sample was tested in the laboratory and found to be the residue of lube oil in the bearing housing. After detailed inspection and cleaning of the shaft journal area and bearing housing, a new radial bearing was installed.
Before assembling the bearing housing, the bearing cooling and sealing air lines were also cleaned, as inadequate flow of cooling and sealing air can accelerate coke formation in the bearing housing. The machine was then brought back online, and data were collected again using ADRE SXP and DSPI 408 for health analysis. The vibrations were found to be in the acceptable zone as per ISO standard 7919-3, while all other parameters were also within the vendor-defined range. As a lesson learned from this case, a revised operating and maintenance regime has also been proposed to enhance the machine’s reliability.
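The amplitude figures quoted above are percentages of baseline, and the trip logic compares the measured amplitude against the 155 µm setpoint; a minimal sketch (the 155 µm setpoint is from the text, while the baseline and measured amplitudes are hypothetical illustrative values):

```python
# Vibration amplitude as a percentage of baseline, plus a check against the
# "High Vibrations Trip" setpoint of 155 µm quoted in the text.
# Baseline and measured amplitudes below are hypothetical illustrations.

TRIP_SETPOINT_UM = 155.0  # from the abstract

def percent_of_baseline(measured_um: float, baseline_um: float) -> float:
    """Measured amplitude expressed as a percentage of the baseline amplitude."""
    return measured_um / baseline_um * 100.0

def trips(measured_um: float) -> bool:
    """True when the amplitude reaches the trip setpoint."""
    return measured_um >= TRIP_SETPOINT_UM

# e.g. an NDE baseline of 55 µm rising to 165 µm: 300% of baseline, machine trips
print(round(percent_of_baseline(165.0, 55.0)))  # 300
print(trips(165.0))  # True
```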

Keywords: ADRE, bearing, gas turbine, GE Bently Nevada, Hitachi, vibration

Procedia PDF Downloads 124
255 Plastic Behavior of Steel Frames Using Different Concentric Bracing Configurations

Authors: Madan Chandra Maurya, A. R. Dar

Abstract:

Among all natural calamities, earthquakes are the most devastating: even the combined losses due to all other calamities are far smaller than the losses due to earthquakes. This means we must be ready to face such situations, which is only possible if we make our structures earthquake resistant. A review of structural damage to braced frame systems after several major earthquakes, including recent ones, has identified both anticipated and unanticipated damage. This damage has prompted many engineers and researchers around the world to consider new approaches to improve the behavior of braced frame systems. Extensive experimental studies over the last forty years of conventional buckling brace components and several braced frame specimens are briefly reviewed, highlighting that the number of studies on full-scale concentric braced frames is still limited. The study therefore centres on the plastic behavior of steel braced frame systems; the only way forward is to understand and analyze these techniques and adopt them in our structures. Two different analytical approaches have been used to predict the behavior and strength of an un-braced frame. The first is referred to as incremental elasto-plastic analysis, a plastic approach. This method gives a complete load-deflection history of the structure until collapse. It is based on the plastic hinge concept for fully plastic cross sections in a structure under increasing proportional loading. The incremental elasto-plastic hinge-by-hinge method is used in this study because of its simplicity in tracing the complete load-deformation history of the two-storey un-braced scaled model. Experiments were then conducted on a two-storey scaled building model, with and without a bracing system, to obtain the experimental load-deformation curve of the scaled model.
The study, titled Plastic Behavior of Steel Frames Using Different Concentric Bracing Configurations, deals with all of this. It aimed at improving the already practiced traditional systems and checking the behavior and usefulness of a new configuration with respect to the X-braced system as a reference model, i.e., how its plastic behavior differs from the X-braced frame. Laboratory tests involved determination of the plastic behavior of these models (with and without bracing) in terms of load-deformation curves. The aim is thus to improve the lateral displacement resistance capacity by using a new concentric brace configuration that differs from the conventional concentric brace. Once the experimental and manual results (using the plastic approach) were compared, both were also compared with a nonlinear static (pushover) analysis using ETABS, i.e., how closely both previous results depict the behavior seen in the pushover curve, and up to what limit. Test results show that all three approaches behave in a similar manner up to the yield point, and confirm the applicability of elasto-plastic analysis (the hinge-by-hinge method) for determining plastic behavior. Finally, the outcome from the three approaches shows that the new configuration chosen for study behaves in between the plane frame (without bracing, the reference frame) and the conventional X-braced frame.
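The hinge-by-hinge method described above scales a proportional load until the most stressed section reaches its plastic moment capacity, inserts a hinge there, and repeats; the first-hinge load factor is min(Mp_i / M_i) over the candidate sections. A minimal sketch of that first step, with all section capacities and unit-load moments hypothetical illustrative values:

```python
# First step of the incremental elasto-plastic (hinge-by-hinge) method:
# under proportional loading, the first plastic hinge forms at load factor
# lambda_1 = min(Mp_i / M_i), where Mp_i is the plastic moment capacity of
# section i and M_i its bending moment under the unit reference load.
# All numerical values below are hypothetical illustrations.

def first_hinge_load_factor(Mp: list[float], M_unit: list[float]) -> float:
    """Load factor at which the first plastic hinge forms."""
    return min(mp / m for mp, m in zip(Mp, M_unit) if m > 0)

Mp = [120.0, 120.0, 90.0]     # plastic moment capacities (kN·m), hypothetical
M_unit = [30.0, 40.0, 20.0]   # moments under the unit reference load (kN·m)
print(first_hinge_load_factor(Mp, M_unit))  # 3.0  (hinge forms at section 2)
```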

Keywords: elasto-plastic analysis, concentric steel braced frame, pushover analysis, ETABS

Procedia PDF Downloads 210
254 Passive Greenhouse Systems in Poland

Authors: Magdalena Grudzińska

Abstract:

Passive systems allow solar radiation to be converted into thermal energy thanks to appropriate building construction. Greenhouse systems are particularly worth attention, due to the low costs of their realization and strong architectural appeal. The paper discusses the energy effects of using passive greenhouse systems, such as glazed balconies, in an example residential building. The research was carried out for five localities in Poland, belonging to climatic zones that differ in terms of external air temperature and insolation: Koszalin, Poznań, Lublin, Białystok and Zakopane. The analysed apartment had a floor area of approximately 74 m². Three thermal zones were distinguished in the flat: the balcony, the room adjacent to it, and the remaining space, for which various internal conditions were defined. Calculations of the energy demand were made using a dynamic simulation program based on the control volume method. The climatic data were represented by Typical Meteorological Years, prepared on the basis of source data collected from 1971 to 2000. In each locality, the introduction of a passive greenhouse system led to a lower demand for heating in the apartment and a shortening of the heating season. The smallest effectiveness of passive solar energy systems was noted in Białystok: demand for heating was reduced there by 14.5% and the heating season remained the longest, due to low external air temperatures and small sums of solar radiation intensity. In Zakopane, energy savings came to 21% and the heating season was reduced to 107 days, thanks to the greatest insolation during winter. The introduction of greenhouse systems caused an increase in cooling demand in the warmer part of the year, but total energy demand declined in each of the discussed places. However, potential energy savings are smaller if the building's annual life cycle is taken into consideration, ranging from 5.6% up to 14%.
Koszalin and Zakopane are the localities in which the greenhouse system achieves the best energy results. It should be emphasized that favourable conditions for introducing greenhouse systems arise under different climatic conditions: in the seaside area (Koszalin) they result from high temperatures in the heating season and the smallest insolation in the summer period, while in the mountainous area (Zakopane) they result from high insolation in winter and low temperatures in summer. In the regions of middle and middle-eastern Poland, active systems (such as solar collectors or photovoltaic panels) could be more beneficial, due to high insolation during summer. It is assessed that passive systems do not eliminate the need for traditional heating in Poland. They can, however, substantially contribute to lower use of non-renewable fuels and a shortening of the heating season. The calculations showed diversification in the effectiveness of greenhouse systems resulting from climatic conditions, and allowed the identification of areas that are most suitable for the passive use of solar radiation.
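The percentage savings quoted for each locality follow from comparing annual heating demand with and without the glazed balcony; a minimal sketch (the 14.5% figure for Białystok comes from the study, while the absolute demand values are hypothetical illustrations):

```python
# Percent savings in annual heating demand from adding a glazed balcony.
# The absolute kWh figures below are hypothetical illustrative values;
# only the resulting 14.5% reduction (Białystok) comes from the study.

def percent_savings(demand_without: float, demand_with: float) -> float:
    """Relative reduction in annual heating demand, in %."""
    return (demand_without - demand_with) / demand_without * 100.0

# Hypothetical: 8000 kWh/year without the glazed balcony, 6840 kWh/year with it
print(round(percent_savings(8000.0, 6840.0), 1))  # 14.5
```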

Keywords: solar energy, passive greenhouse systems, glazed balconies, climatic conditions

Procedia PDF Downloads 349
253 A Spatial Perspective on the Metallized Combustion Aspect of Rockets

Authors: Chitresh Prasad, Arvind Ramesh, Aditya Virkar, Karan Dholkaria, Vinayak Malhotra

Abstract:

A solid propellant rocket utilises a combination of a solid oxidizer and a solid fuel. Success in solid rocket motor design and development depends significantly on knowledge of the burning rate behaviour of the selected solid propellant under all motor operating conditions and design limit conditions. Most solid rocket vehicles consist of a main engine along with multiple boosters that provide additional thrust to the space-bound vehicle. Though widely used, solid rockets have been eclipsed by liquid propellant rockets because of the latter's better performance characteristics. The addition of a catalyst such as iron oxide, on the other hand, can drastically enhance the performance of a solid rocket. This scientific investigation emulates the working of a solid rocket using sparklers and energized candles, with a central energized candle acting as the main engine and surrounding sparklers acting as the boosters. The energized candle is made of paraffin wax, with magnesium filings embedded in its wick. The sparkler is made up of 45% barium nitrate, 35% iron, 9% aluminium and 10% dextrin, and the remaining composition is boric acid. The magnesium in the energized candle, and the combination of iron and aluminium in the sparkler, act as catalysts and enhance the burn rates of both materials. This combustion of metallized propellants influences the regression rate of the subject candle. The experimental parameters explored here are separation distance, systematically varied configuration, and layout symmetry. The major performance parameter under observation is the regression rate of the energized candle. The rate of regression is significantly affected by the orientation and configuration of the sparklers, which act as heat sources for the energized candle. The overall efficiency of any engine is the product of its thermal and propulsive efficiencies, and numerous efforts have been made to improve one or the other.
This investigation focuses on the orientation of the rocket motor design to maximize overall efficiency. The primary objective is to analyse the flame spread rate variations of the energized candle, which resembles the solid rocket propellant used in the first stage of rocket operation, thereby affecting the specific impulse of a rocket, which in turn has a deciding impact on its time of flight. Another objective of this research is to determine the effectiveness of the key controlling parameters explored. The investigation also emulates the exhaust gas interactions of the solid rocket through concurrent ignition of the energized candle and sparklers, and their behaviour is analysed. Modern space programmes intend to explore the universe outside our solar system. To accomplish these goals, it is necessary to design a launch vehicle capable of providing sustained propulsion with better efficiency over long durations. The main motivation of this study is to enhance rocket performance and overall efficiency through better design and optimization techniques, which will play a crucial role in this human quest for knowledge.
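The two quantities the abstract leans on can be written down directly: overall efficiency is the product of thermal and propulsive efficiencies, and regression rate is burned length over burn time. A minimal sketch, with every numerical value a hypothetical illustration rather than a measurement from the experiment:

```python
# Overall engine efficiency factorised as eta_overall = eta_thermal * eta_propulsive,
# and regression rate measured as burned length over burn time.
# All numerical values below are hypothetical illustrations.

def overall_efficiency(eta_thermal: float, eta_propulsive: float) -> float:
    """Overall efficiency as the product of thermal and propulsive efficiencies."""
    return eta_thermal * eta_propulsive

def regression_rate_mm_s(burned_length_mm: float, burn_time_s: float) -> float:
    """Mean regression rate of the candle (mm/s)."""
    return burned_length_mm / burn_time_s

print(overall_efficiency(0.5, 0.6))       # 0.3
print(regression_rate_mm_s(60.0, 120.0))  # 0.5 mm/s
```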

Keywords: design modifications, improving overall efficiency, metallized combustion, regression rate variations

Procedia PDF Downloads 156
252 Approaching a Tat-Rev Independent HIV-1 Clone towards a Model for Research

Authors: Walter Vera-Ortega, Idoia Busnadiego, Sam J. Wilson

Abstract:

Introduction: Human immunodeficiency virus type 1 (HIV-1) is responsible for acquired immunodeficiency syndrome (AIDS), a leading cause of death worldwide, infecting millions of people each year. Despite intensive research in vaccine development, therapies against HIV-1 infection are not curative, and the huge genetic variability of HIV-1 poses challenges to drug development. Current animal models for HIV-1 research present important limitations, impairing the progress of in vivo approaches. Macaques require CD8+ depletion to progress to AIDS, and the maintenance cost is high. Mice are a cheaper alternative but need to be 'humanized,' and breeding is not possible. The development of an HIV-1 clone able to replicate in mice is a challenging proposal: the lack of human co-factors in mice impedes the function of the HIV-1 accessory proteins Tat and Rev, hampering HIV-1 replication. However, Tat and Rev function can be replaced by constitutive/chimeric promoters, codon-optimized proteins and the constitutive transport element (CTE), generating a novel HIV-1 clone able to replicate in mice without disrupting the amino acid sequence of the virus. By minimally manipulating the genomic 'identity' of the virus, we propose the generation of an HIV-1 clone able to replicate in mice to assist in antiviral drug development. Methods: i) Plasmid construction: the chimeric promoters and CTE copies were cloned by PCR using lentiviral vectors as templates (pCGSW and pSIV-MPCG). Tat mutants were generated from replication-competent HIV-1 plasmids (NHG and NL4-3). ii) Infectivity assays: retroviral vectors were generated by transfection of human 293T cells and murine NIH 3T3 cells. Virus titre was determined by flow cytometry measuring GFP expression. Human B-cells (AA-2) and HeLa cells (TZMbl) were used for infectivity assays. iii) Protein analysis: Tat protein expression was determined by TZMbl assay, and HIV-1 capsid by western blot.
Results: We have determined that NIH 3T3 cells are able to generate HIV-1 particles; however, these are not infectious, and further analysis needs to be performed. Codon-optimized HIV-1 constructs are efficiently expressed in 293T cells in a Tat- and Rev-independent manner and are capable of packaging a competent genome in trans. CSGW is capable of generating infectious particles in the absence of Tat and Rev in human cells when four copies of the CTE are placed preceding the 3’LTR. HIV-1 Tat mutant clones encoding different promoters are functional during the first cycle of replication when Tat is added in trans. Conclusion: Our findings suggest that the development of a Tat-Rev-independent HIV-1 clone is a challenging but achievable aim. However, further investigations need to be carried out prior to presenting our HIV-1 clone as a candidate model for research.
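Titre by GFP flow cytometry, as used in the infectivity assays above, is conventionally computed as transducing units per mL from the fraction of GFP-positive cells; a minimal sketch in which the formula is the standard convention (not stated in the abstract) and all numbers are hypothetical illustrations:

```python
# Conventional flow-cytometry titre estimate:
# TU/mL = cells seeded × fraction GFP-positive × dilution factor / inoculum volume (mL).
# This standard formula and all values below are illustrative assumptions; the
# abstract states only that titre was determined by GFP flow cytometry.

def titre_tu_per_ml(cells_seeded: float, fraction_gfp_pos: float,
                    volume_ml: float, dilution_factor: float = 1.0) -> float:
    """Transducing units per mL from a GFP transduction assay."""
    return cells_seeded * fraction_gfp_pos * dilution_factor / volume_ml

# e.g. 1e5 cells seeded, 25% GFP+ after adding 0.5 mL of a 1:100 dilution
print(titre_tu_per_ml(1e5, 0.25, 0.5, 100.0))  # 5000000.0 TU/mL
```

The fraction-positive approximation is only reliable well below saturation (roughly under ~30% GFP+), where multiple transduction events per cell are rare.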

Keywords: codon-optimized, constitutive transport element, HIV-1, long terminal repeats, research model

Procedia PDF Downloads 286
251 A Strengths, Weaknesses, Opportunities and Threats Analysis of Socialisation, Externalisation, Combination and Internalisation Modes in Knowledge Management Practice: A Systematic Review of Literature

Authors: Aderonke Olaitan Adesina

Abstract:

Background: The paradigm shift to knowledge as the key to organizational innovation and competitive advantage has made the management of knowledge resources in organizations a mandate. A key component of the knowledge management (KM) cycle is knowledge creation, which research shows to be the result of the interaction between explicit and tacit knowledge. An effective knowledge creation process requires the use of the right model. The SECI (Socialisation, Externalisation, Combination, and Internalisation) model, proposed in 1995, is widely regarded as the model of choice for knowledge creation activities. The model has, however, been criticized by researchers, particularly for its sequential nature. Therefore, this paper reviews extant literature on the practical application of each mode of the SECI model, from 1995 to date, with a view to ascertaining its relevance in modern-day KM practice. The study will establish the trends of use, with regard to location and industry, and the interconnectedness of the modes. The main research question is: for organizational knowledge creation activities, is the SECI model indeed linear and sequential? In other words, does the model need to be revised for today’s KM practice? The review will generate a compendium of the usage of the SECI modes and propose a framework of use based on the strengths, weaknesses, opportunities, and threats (SWOT) findings of the study. Method: This study will employ the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate the usage and SWOT of the modes, in order to ascertain the success, or otherwise, of the sequential application of the modes in practice from 1995 to 2019. To this end, four databases will be explored to search for open-access, peer-reviewed articles from 1995 to 2019. The year 1995 is chosen as the baseline because it was the year the first paper on the SECI model was published.
The study will appraise relevant peer-reviewed articles under the search terms SECI (or its synonym, knowledge creation theory), socialization, externalization, combination, and internalization in the title, abstract, or keywords list. The review will include only empirical studies of knowledge management initiatives in which the SECI model and its modes were used. Findings: It is expected that the study will highlight the practical relevance of each mode of the SECI model, the linearity or otherwise of the model, and the SWOT of each mode. Concluding Statement: From the analysis, organisations can determine the modes to emphasize in their knowledge creation activities. It is expected that the study will support decision making in the choice of the SECI model as a strategy for the management of organizational knowledge resources, and in appropriating the SECI model, or a remodeled version of it, as a theoretical framework in future KM research.

Keywords: combination, externalisation, internalisation, knowledge management, SECI model, socialisation

Procedia PDF Downloads 322
250 Targeted Delivery of Docetaxel Drug Using Cetuximab Conjugated Vitamin E TPGS Micelles Increases the Anti-Tumor Efficacy and Inhibits Migration of MDA-MB-231 Triple Negative Breast Cancer

Authors: V. K. Rajaletchumy, S. L. Chia, M. I. Setyawati, M. S. Muthu, S. S. Feng, D. T. Leong

Abstract:

Triple negative breast cancers (TNBC) are among the most aggressive breast cancers, with a high rate of local recurrence and systemic metastasis. TNBCs are insensitive to existing hormonal therapy or targeted therapies, such as the use of monoclonal antibodies, due to the lack of the oestrogen receptor (ER) and progesterone receptor (PR) and the absence of overexpression of human epidermal growth factor receptor 2 (HER2) compared with other types of breast cancer. The absence of targeted therapies for the selective delivery of therapeutic agents into tumours led to the search for druggable targets in TNBC. In this study, we developed a targeted micellar system of cetuximab-conjugated micelles of D-α-tocopheryl polyethylene glycol succinate (vitamin E TPGS) for the targeted delivery of docetaxel as a model anticancer drug for the treatment of TNBCs. We examined the efficacy of our micellar system in xenograft models of triple negative breast cancer and explored the effect of the micelles on post-treatment tumours in order to elucidate the mechanism underlying the nanomedicine treatment in oncology. The targeting micelles were found to accumulate preferentially in tumours, compared to normal tissue, immediately after administration. The fluorescence signal at the tumour site gradually increased up to 12 h and was sustained for up to 24 h, reflecting the increasing uptake of targeted (TPFC) micelles in MDA-MB-231/Luc cells. In comparison, for the non-targeting micelles (TPF), the fluorescence signal was evenly distributed over the whole body of the mice. Only a slight increase in fluorescence at the chest area was observed 24 h post-injection, reflecting moderate uptake of the micelles by the tumour. The successful delivery of docetaxel into the tumour by the targeted micelles (TPDC) produced a greater degree of tumour growth inhibition than Taxotere® after 15 days of treatment.
The ex vivo study demonstrated that tumours treated with targeting micelles exhibit enhanced cell cycle arrest and attenuated proliferation compared with the control and with those treated with non-targeting micelles. Furthermore, the ex vivo investigation revealed that both the targeting and non-targeting micellar formulations show significant inhibition of cell migration, with migration indices reduced by 0.098- and 0.28-fold, respectively, relative to the control. Overall, both the in vivo and ex vivo data increase confidence that our micellar formulations effectively targeted and inhibited EGFR-overexpressing MDA-MB-231 tumours.

Keywords: biodegradable polymers, cancer nanotechnology, drug targeting, molecular biomaterials, nanomedicine

Procedia PDF Downloads 257
249 Edible Active Antimicrobial Coatings onto Plastic-Based Laminates and Its Performance Assessment on the Shelf Life of Vacuum Packaged Beef Steaks

Authors: Andrey A. Tyuftin, David Clarke, Malco C. Cruz-Romero, Declan Bolton, Seamus Fanning, Shashi K. Pankaj, Carmen Bueno-Ferrer, Patrick J. Cullen, Joe P. Kerry

Abstract:

Prolonging shelf-life is essential in order to address issues such as supplier demands across continents, economic profit, customer satisfaction, and reduction of food waste. Smart packaging solutions, in the form of naturally derived antimicrobially active packaging, may be a solution to these and other issues. A gelatin film-forming solution with added naturally sourced antimicrobials is a promising tool for active smart packaging. The objective of this study was to coat conventional hydrophobic plastic packaging material with a hydrophilic antimicrobial beef-gelatin coating and conduct shelf-life trials on beef sub-primal cuts. Minimum inhibitory concentrations (MIC) of caprylic acid sodium salt (SO) and commercially available Auranta FV (AFV) (bitter orange extract with a mixture of nutritive organic acids) were found to be 1% and 1.5%, respectively, against the bacterial strains Bacillus cereus, Pseudomonas fluorescens, Escherichia coli, and Staphylococcus aureus, and against aerobic and anaerobic beef microflora. Therefore, SO or AFV was incorporated into the beef-gelatin film-forming solution at twice the MIC, and the solution was coated onto a conventional plastic LDPE/PA film on the inner, cold-plasma-treated polyethylene surface. Beef samples were vacuum-packed in this material, stored under chilled conditions, and sampled at weekly intervals during a 42-day shelf-life study. No significant differences (p < 0.05) in cook loss were observed among the different treatments compared to control samples until day 29. Only for the AFV-coated beef samples was cook loss 3% higher (37.3%) than the control (34.4%) on day 36. The antimicrobial films did not protect beef against discoloration. SO-containing packages significantly (p < 0.05) reduced total viable bacterial counts (TVC) compared to the control and AFV samples until day 35.
No significant reduction in TVC was observed between SO and AFV films on day 42, but a significant difference was observed compared to control samples, with a 1.40 log reduction in bacteria on day 42. AFV films significantly (p < 0.05) reduced TVC compared to control samples from day 14 until day 42. Control samples reached the set limit of 7 log CFU/g on day 27 of testing; AFV films did not reach this limit until day 35, and SO films until day 42. The antimicrobial AFV and SO coated films thus significantly prolonged the shelf-life of beef steaks by 33% or 55% (7 and 14 days, respectively) compared to control film samples. It is concluded that antimicrobial coated films were successfully developed by coating the inner polyethylene layer of conventional LDPE/PA laminated films after plasma surface treatment. The results indicated that the use of antimicrobial active packaging coated with SO or AFV significantly (p < 0.05) increased the shelf life of the beef sub-primals. Overall, AFV- or SO-containing gelatin coatings have the potential to be used as effective antimicrobials in active packaging applications for muscle-based food products.
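The MIC values quoted above are conventionally read from a dilution series as the lowest tested concentration showing no visible growth. A minimal sketch of that logic (the concentrations and growth pattern below are hypothetical, not the study's raw data):

```python
def mic(concentrations, growth):
    """Lowest tested concentration with no visible growth.

    concentrations: tested concentrations (e.g. % w/v)
    growth: parallel booleans, True = visible growth at that concentration
    Returns None if growth occurred at every tested concentration.
    """
    for conc, grew in sorted(zip(concentrations, growth)):
        if not grew:
            return conc
    return None

# Hypothetical two-fold dilution series for one antimicrobial and one strain
concs  = [0.25, 0.5, 1.0, 2.0]
growth = [True, True, False, False]
print(mic(concs, growth))  # 1.0
```

The study then incorporated each antimicrobial at twice its MIC, i.e. 2% for SO and 3% for AFV.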

Keywords: active packaging, antimicrobials, edible coatings, food packaging, gelatin films, meat science

Procedia PDF Downloads 283
248 Regional Barriers and Opportunities for Developing Innovation Networks in the New Media Industry: A Comparison between Beijing and Bangalore Regional Innovation Systems

Authors: Cristina Chaminade, Mandar Kulkarni, Balaji Parthasarathy, Monica Plechero

Abstract:

The characteristics of a regional innovation system (RIS) and the specificity of an industry's knowledge base may contribute to creating peculiar paths for innovation and for the development of firms' geographically extended innovation networks. However, the relevant empirical evidence in emerging economies remains underexplored. The paper aims to fill this research gap by means of recent qualitative research conducted in 2016 in Beijing (China) and Bangalore (India). It analyzes case studies of firms in the new media industry, a sector that merges different IT competences with competences from other knowledge domains and that is emerging in those RISs. The results show that while in Beijing the new media sector turns out to be more in line with the existing institutional setting and governmental goals aimed at targeting specific social aspects and social problems of the population, in Bangalore it remains a more spontaneous, firm-led process. In Beijing, what matters for the development of innovation networks is the governmental setting and the national and regional strategies to promote science and technology in this sector, the internet, and mass innovation. The peculiarities of recent governmental policies aligned with domestic goals may provide good opportunities for start-ups to develop innovation networks. However, because those policies target the Chinese market, networking outside the domestic market is not promoted. Moreover, while some institutional peculiarities, such as a culture of collaboration in the region, may be favorable for local networking, regulations related to Internet censorship may limit the use of global networks, particularly when based on virtual spaces. Mainly firms that already have some foreign experience and contacts take advantage of global networks. In Bangalore, the role of government in pushing networking for the new media industry at the present stage is largely absent at all geographical levels.
Indeed, there is no particular strategic planning or prioritization in the region toward the new media industry, although one industrial organization has emerged to represent the animation industry's interests. This results in a lack of initiatives for sustaining the integration of complementary knowledge into the local portfolio of IT specialization. Firms actually involved in the new media industry face institutional constraints related to a poor level of local trust and cooperation, which does not allow for full exploitation of local linkages. Moreover, knowledge-provider organizations in Bangalore remain a solid base for the IT domain, but not for other domains. Initiatives to link to international networks therefore seem more the result of individual entrepreneurial actions aimed at acquiring complementary knowledge and competencies from different domains and exploiting potential in different markets. From these cases, it emerges that the roles of government, soft institutions, and organizations in the two RISs differ substantially in the creation of barriers and opportunities for the development of innovation networks and their specific aims.

Keywords: regional innovation system, emerging economies, innovation network, institutions, organizations, Bangalore, Beijing

Procedia PDF Downloads 293
247 A Holistic View of Microbial Community Dynamics during a Toxic Harmful Algal Bloom

Authors: Shi-Bo Feng, Sheng-Jie Zhang, Jin Zhou

Abstract:

The relationship between microbial diversity and algal blooms has received considerable attention for decades. Microbes undoubtedly affect annual bloom events and impact the physiology of both partners, as well as shape ecosystem diversity. However, knowledge about the interactions and network correlations among the broader spectrum of microbes that drive the dynamics of a complete bloom cycle is limited. In this study, pyrosequencing and network approaches were used to simultaneously assess the association patterns among bacteria, archaea, and microeukaryotes in surface water and sediments in response to a natural dinoflagellate (Alexandrium sp.) bloom. In surface water, among the bacterial community, Gamma-Proteobacteria and Bacteroidetes dominated in the initial bloom stage, while Alpha-Proteobacteria, Cyanobacteria, and Actinobacteria became the most abundant taxa during the post-bloom stage. The archaeal biosphere clustered predominantly with methanogenic members in the early pre-bloom period, while the majority of species identified in the late-bloom stage were ammonia-oxidizing archaea and Halobacteriales. Among eukaryotes, the dinoflagellate (Alexandrium sp.) dominated in the onset stage, whereas multiple taxa (such as microzooplankton, diatoms, green algae, and rotifers) coexisted in the bloom-collapse stage. In sediments, microbial biomass and species richness were much higher than in the water body. Only Flavobacteriales and Rhodobacterales showed a slight response to bloom stages. Unlike the bacteria, the archaeal and eukaryotic community structures in the sediment showed only small fluctuations. The network analyses of inter-specific associations show that bacteria (Alteromonadaceae, Oceanospirillaceae, Cryomorphaceae, and Piscirickettsiaceae) and some eukaryotic taxa (Mediophyceae, Mamiellophyceae, Dictyochophyceae, and Trebouxiophyceae) have a stronger impact on the structuring of phytoplankton communities than archaea do.
The population changes were also significantly shaped by water temperature and substrate availability (N and P resources). The results suggest that clades are specialized at different time periods, and that the pre-bloom succession was mainly bottom-up controlled, while the late-bloom period was controlled by top-down patterns. Additionally, phytoplankton and prokaryotic communities correlated better with each other, which indicates that interactions among microorganisms are critical in controlling plankton dynamics and fates. Our results supply a wider view (on temporal and spatial scales) for understanding microbial ecological responses and their network associations during algal blooms. They offer a potential multidisciplinary explanation for algal-microbe interactions and help us move beyond the traditional view of the patterns of algal bloom initiation, development, decline, and biogeochemistry.

Keywords: microbial community, harmful algal bloom, ecological process, network

Procedia PDF Downloads 89
246 Effect of Vitrification on the Euploidy of Embryos Obtained from Thawed Oocytes

Authors: Natalia Buderatskaya, Igor Ilyin, Julia Gontar, Sergey Lavrynenko, Olga Parnitskaya, Ekaterina Ilyina, Eduard Kapustin, Yana Lakhno

Abstract:

Introduction: It is known that cryopreservation of oocytes has peculiar features due to the complex structure of the oocyte. One of the most important features is that mature oocytes contain the meiotic division spindle, which is very sensitive to even the slightest variation in temperature. Thus, the main objective of this study is to analyse the resulting euploid embryos obtained from thawed oocytes in comparison with the preimplantation genetic screening (PGS) data of fresh-embryo cycles. Material and Methods: The study was conducted at 'Medical Centre IGR' from January to July 2016. Data were analysed for 908 donor oocytes obtained in 67 cycles of assisted reproductive technologies (ART), of which 693 oocytes were used in 51 'fresh' cycles (group A) and 215 oocytes in 16 ART programs with vitrification of female gametes (group B). The average ages of donors in the groups were 27.3±2.9 and 27.8±6.6 years, respectively. Stimulation of superovulation was conducted in the standard way. Vitrification was performed 1-2 hours after transvaginal puncture, and thawing of oocytes was carried out in accordance with the standard Cryotech protocol (Japan). ICSI was performed 4-5 hours after transvaginal follicle puncture for fresh oocytes, or after thawing for vitrified female gametes. For PGS, an embryonic biopsy was done on the third or the fifth day after fertilization. Diagnostic procedures were performed using fluorescence in situ hybridization for chromosomes 13, 16, 18, 21, 22, X, and Y. Only morphologically high-quality blastocysts, graded according to the Gardner criteria, were used for transfer. Statistical hypotheses were tested using Student's t-test and the χ² test at significance levels of p<0.05, p<0.01, and p<0.001. Results: The mean number of mature oocytes per cycle was 13.58±6.65 in group A and 13.44±6.68 in group B.
The survival rate of oocytes after thawing totaled 95.3% (n=205), which indicates the high quality of the performed vitrification. The proportion of zygotes was 91.1% (n=631) in group A and 80.5% (n=165) in group B, a statistically significant difference (p<0.001) explained by the elimination of non-viable oocytes after vitrification. This is confirmed by the fact that on the fifth day of embryo development there was no statistically significant difference in the number of blastocysts (p>0.05), which constituted 61.6% (n=389) and 63.0% (n=104) in the respective groups. For PGS, 250 embryos were analyzed in group A and 72 embryos in group B. The results showed that 40.0% (n=100) of embryos in group A and 41.7% (n=30) in group B were euploid for the studied chromosomes, with no statistically significant difference (p>0.05). The clinical pregnancy rates in the groups amounted to 64.7% (22 pregnancies per 34 embryo transfers) and 61.5% (8 pregnancies per 13 embryo transfers), respectively, also with no significant difference between the groups (p>0.05). Conclusions: The results showed that vitrification does not affect the euploidy rate of the resulting embryos in assisted reproductive technologies and is not reflected in their morphological characteristics in ART programs.
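The comparison of euploidy rates between the groups can be reproduced directly from the reported counts with a 2×2 chi-square test. A stdlib-only sketch (the helper function is illustrative; the abstract does not state whether a continuity correction was applied, so the plain Pearson statistic is shown):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Group A: 100 euploid / 150 aneuploid; group B: 30 euploid / 42 aneuploid
chi2 = chi2_2x2(100, 150, 30, 42)
print(round(chi2, 3))  # 0.065, far below the 3.841 critical value (df=1, p=0.05)
```

A statistic this far below the critical value is consistent with the authors' conclusion that the euploidy rates do not differ significantly (p>0.05).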

Keywords: euploid embryos, preimplantation genetic screening, thawing oocytes, vitrification

Procedia PDF Downloads 310
245 Debriefing Practices and Models: An Integrative Review

Authors: Judson P. LaGrone

Abstract:

Simulation-based education was once a luxury component of nursing curricula but now serves as a vital element of an individual's learning experience. A debriefing occurs after the simulation scenario or clinical experience is completed, allowing the instructor(s) or trained professional(s) to act as a debriefer and guide a reflection, with the purpose of acknowledging, assessing, and synthesizing the thought processes, decision-making processes, and actions/behaviors performed during the scenario or clinical experience. Debriefing is a vital component of the simulation process and educational experience: it allows the learner(s) to progressively build upon past experiences and current scenarios within a safe and welcoming environment, with a guided dialog to enhance future practice. The aim of this integrative review was to assess current practices of debriefing models in simulation-based education for health care professionals and students. The following databases were utilized for the search: CINAHL Plus, Cochrane Database of Systematic Reviews, EBSCO (ERIC), PsycINFO (Ovid), and Google Scholar. The advanced search options were used to narrow the search for articles (full text, Boolean operators, English language, peer-reviewed, published in the past five years). Key terms included debrief, debriefing, debriefing model, debriefing intervention, psychological debriefing, simulation, simulation-based education, simulation pedagogy, health care professional, nursing student, and learning process. Included studies focused on debriefing after clinical scenarios involving nursing students, medical students, and interprofessional teams, conducted between 2015 and 2020. Common themes were identified after the analysis of the articles matching the search criteria. Several debriefing models are addressed in the literature, with similar effectiveness for participants in simulation-based clinical pedagogy.
The themes identified included (a) the importance of debriefing in simulation-based pedagogy, (b) the environment in which debriefing takes place as an important consideration, (c) the individuals who should conduct the debrief, (d) the length of the debrief, and (e) the methodology of the debrief. Debriefing models supported by theoretical frameworks and facilitated by trained staff are vital for a successful debriefing experience. Models ranged from self-debriefing to facilitator-led debriefing, video-assisted debriefing, rapid cycle deliberate practice, and reflective debriefing. A recurring finding was the emphasis on continued research into systematic tool development and into the validity and effectiveness of current debriefing practices. There is a lack of consistency in debriefing models across nursing curricula, along with an increasing number of faculty ill-prepared to facilitate the debriefing phase of simulation.

Keywords: debriefing model, debriefing intervention, health care professional, simulation-based education

Procedia PDF Downloads 134
244 Study of Chemical State Analysis of Rubidium Compounds in Lα, Lβ₁, Lβ₃,₄ and Lγ₂,₃ X-Ray Emission Lines with Wavelength Dispersive X-Ray Fluorescence Spectrometer

Authors: Harpreet Singh Kainth

Abstract:

Rubidium salts have been commonly used as an electrolyte to improve the cycling efficiency of Li-ion batteries. In recent years, they have been implemented on a larger scale as part of further technological advances to improve rate performance and cyclability in batteries. X-ray absorption spectroscopy (XAS) is a powerful tool for obtaining information on the electronic structure, including the chemical state, of the active materials used in batteries. However, this technique is not well suited to industrial applications because it needs a synchrotron X-ray source and a special sample cell for in-situ measurements. In contrast, conventional wavelength dispersive X-ray fluorescence (WDXRF) spectrometry is a nondestructive technique used to study the chemical shift in all transitions (K, L, M, …) and does not require any special sample pre-preparation. In the present work, the fluorescent Lα, Lβ₁, Lβ₃,₄ and Lγ₂,₃ X-ray spectra of rubidium in different chemical forms (Rb₂CO₃, RbCl, RbBr, and RbI) have been measured for the first time with a high-resolution wavelength dispersive X-ray fluorescence (WDXRF) spectrometer (Model: S8 TIGER, Bruker, Germany), equipped with an Rh-anode X-ray tube (4 kW, 60 kV and 170 mA). In ₃₇Rb compounds, the measured energy shifts are in the range (-0.45 to -1.71) eV for the Lα X-ray peak, (0.02 to 0.21) eV for Lβ₁, (0.04 to 0.21) eV for Lβ₃, (0.15 to 0.43) eV for Lβ₄, and (0.22 to 0.75) eV for the Lγ₂,₃ X-ray emission lines. The chemical shifts in the rubidium compounds have been measured taking Rb₂CO₃ as the standard reference. A Voigt function is used to determine the central peak position for all compounds. Both positive and negative shifts have been observed in the L-shell emission lines: in the Lα X-ray emission line, all compounds show a negative shift, while in the Lβ₁, Lβ₃,₄, and Lγ₂,₃ X-ray emission lines, all compounds show a positive shift.
These positive and negative shifts correspond to increases or decreases in the X-ray emission energies. It appears that the ligands attached to the central metal atom attract or repel electrons towards or away from the parent nucleus. This pulling and pushing character affects the central peak position for each compound, which causes the chemical shift. To understand the chemical effect more fully, factors such as electronegativity, line intensity ratio, effective charge, and bond length were considered in the chemical state analysis of the rubidium compounds. The effective charge has been calculated by the Suchet and Pauling methods, while the line intensity ratio has been calculated from the area under the relevant emission peak. In the present work, it has been observed that electronegativity, effective charge, and the intensity ratios (Lβ₁/Lα, Lβ₃,₄/Lα and Lγ₂,₃/Lα) are inversely proportional to the chemical shift (RbCl > RbBr > RbI), while bond length is directly proportional to the chemical shift (RbI > RbBr > RbCl).
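The central peak positions behind the quoted shifts come from profile fitting. The sketch below illustrates, with a pseudo-Voigt (a common computational stand-in for the true Voigt), how a sub-FWHM shift of a symmetric line is localized; the line energy, width, and mixing parameter are hypothetical, not measured values:

```python
import math

def pseudo_voigt(x, x0, fwhm, eta):
    """Pseudo-Voigt profile: eta * Lorentzian + (1 - eta) * Gaussian."""
    sigma = fwhm / (2 * math.sqrt(2 * math.log(2)))   # Gaussian width
    gamma = fwhm / 2                                   # Lorentzian HWHM
    gauss = math.exp(-((x - x0) ** 2) / (2 * sigma ** 2))
    lorentz = gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)
    return eta * lorentz + (1 - eta) * gauss

def peak_centre(xs, ys):
    """Peak position taken as the x of the sampled maximum."""
    return max(zip(ys, xs))[1]

# Hypothetical Rb La line near 1694.10 eV, shifted by -0.45 eV in a compound
xs = [1690 + 0.01 * i for i in range(801)]
reference = [pseudo_voigt(x, 1694.10, 3.0, 0.5) for x in xs]
compound  = [pseudo_voigt(x, 1693.65, 3.0, 0.5) for x in xs]
shift = peak_centre(xs, compound) - peak_centre(xs, reference)
print(round(shift, 2))  # -0.45
```

In practice the profile parameters are fitted to the measured spectrum (e.g. by least squares) rather than evaluated on a synthetic grid, but the shift is extracted from the fitted centres in the same way.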

Keywords: chemical shift in L emission lines, bond length, electro-negativity, effective charge, intensity ratio, Rubidium compounds, WDXRF spectrometer

Procedia PDF Downloads 487
243 Detection and Quantification of Viable but Not Culturable Vibrio Parahaemolyticus in Frozen Bivalve Molluscs

Authors: Eleonora Di Salvo, Antonio Panebianco, Graziella Ziino

Abstract:

Background: Vibrio parahaemolyticus is a human pathogen that is widely distributed in marine environments. It is frequently isolated from raw seafood, particularly shellfish. Consumption of raw or undercooked seafood contaminated with V. parahaemolyticus may lead to acute gastroenteritis. Vibrio spp. have excellent resistance to low temperatures, so they can be found in frozen products for a long time. Recently, the viable but non-culturable (VBNC) state of bacteria has attracted great attention, and more than 85 bacterial species have been demonstrated to be capable of entering this state. VBNC cells cannot grow on conventional culture media but are viable and maintain metabolic activity, and they may constitute an unrecognized source of food contamination and infection. V. parahaemolyticus can also exist in the VBNC state under nutrient starvation or low-temperature conditions. Aim: The aim of the present study was to optimize methods for investigating V. parahaemolyticus VBNC cells and to assess their presence in regularly marketed frozen bivalve molluscs. Materials and Methods: Propidium monoazide (PMA) treatment was combined with real-time polymerase chain reaction (qPCR) targeting the tl gene to detect and quantify V. parahaemolyticus in the VBNC state. PMA-qPCR proved highly specific to V. parahaemolyticus, with a limit of detection (LOD) of 10-1 log CFU/mL in pure bacterial culture. A standard curve for V. parahaemolyticus cell concentration was established, with a correlation coefficient of 0.9999 over the linear range of 1.0 to 8.0 log CFU/mL. A total of 77 samples of frozen bivalve molluscs (35 mussels; 42 clams) were subsequently subjected to qualitative (in alkaline phosphate buffer solution) and quantitative testing for V. parahaemolyticus on thiosulfate-citrate-bile salts-sucrose (TCBS) agar (DIFCO) with 2.5% NaCl, with incubation at 30°C for 24-48 hours.
Real-time PCR was conducted on homogenate samples, in duplicate, with and without propidium monoazide (PMA) dye, exposed for 45 min under halogen lights (650 W). Total DNA was extracted from the cell suspension of the homogenate samples according to a boiling protocol. Real-time PCR was conducted with species-specific primers for V. parahaemolyticus. The reaction was performed in a final volume of 20 µL, containing 10 µL of SYBR Green mixture (Applied Biosystems), 2 µL of template DNA, 2 µL of each primer (final concentration 0.6 mM), and 4 µL of H₂O. The qPCR was carried out on a CFX96 Touch™ (Bio-Rad, USA). Results: All samples were negative in both the quantitative and qualitative detection of V. parahaemolyticus by the classical culturing technique. PMA-qPCR allowed us to identify VBNC V. parahaemolyticus in 20.78% of the samples evaluated, with values between Log 10-1 and Log 10-3 CFU/g. Only clam samples were positive by PMA-qPCR detection. Conclusion: The present research is the first to evaluate a PMA-qPCR assay for the detection of VBNC V. parahaemolyticus in bivalve mollusc samples, and the method used was applicable to the rapid control of marketed bivalve molluscs. We strongly recommend the use of PMA-qPCR in order to identify VBNC forms, which are undetectable by classic microbiological methods. Precise knowledge of V. parahaemolyticus in the VBNC form is fundamental for correct risk assessment, not only in bivalve molluscs but also in other seafood.
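Quantification against the standard curve described above amounts to inverting a linear Ct-versus-log-concentration relation. A minimal sketch (the slope and intercept are hypothetical; a slope near -3.32 corresponds to roughly 100% amplification efficiency):

```python
def log_cfu_from_ct(ct, slope, intercept):
    """log10(CFU/mL) from a qPCR Ct value, using the linear standard
    curve Ct = slope * log10(CFU/mL) + intercept."""
    return (ct - intercept) / slope

# Hypothetical curve; only valid inside the calibrated linear range
# (1.0 to 8.0 log CFU/mL in the study)
print(round(log_cfu_from_ct(28.54, slope=-3.32, intercept=38.5), 1))  # 3.0
```

Ct values falling outside the calibrated range should be reported as below (or above) the limits of quantification rather than extrapolated.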

Keywords: food safety, frozen bivalve molluscs, PMA dye, Real-time PCR, VBNC state, Vibrio parahaemolyticus

Procedia PDF Downloads 116
242 A Brazilian Study Applied to the Regulatory Environmental Issues of Nanomaterials

Authors: Luciana S. Almeida

Abstract:

Nanotechnology has revolutionized the world of science and technology, bringing great expectations due to its great potential for application in the most varied industrial sectors. However, the same characteristics that make nanoparticles interesting from the point of view of technological application may be undesirable when they are released into the environment. The small size of nanoparticles facilitates their diffusion and transport in the atmosphere, water, and soil, and facilitates their entry and accumulation in living cells. The main objective of this study is to evaluate the environmental regulatory process for nanomaterials in the Brazilian scenario. Three specific objectives were outlined. The first is to carry out a global scientometric study, on a research platform, with the purpose of identifying the main lines of study of nanomaterials in the environmental area. The second is to verify, by means of a bibliographic review, how environmental agencies in other countries have been working on this issue. The third is to carry out an assessment of the Brazilian Nanotechnology Draft Law 6741/2013 with the state environmental agencies, with the aim of identifying the agencies' knowledge of the subject and the resources available in the country for the implementation of the Policy. A questionnaire will be used as the evaluation tool to identify the operational elements and build indicators, through the Environment of Evaluation Application, a computational application developed for building questionnaires. Finally, the need to propose changes to the Draft Law of the National Nanotechnology Policy will be verified. Initial studies related to the first specific objective have already identified that Brazil stands out in the production of scientific publications in the area of nanotechnology, although only a minority are focused on environmental impact.
Regarding the general panorama of other countries, some findings have also been raised. The United States has included the nanoform of the substances in an existing program in the EPA (Environmental Protection Agency), the TSCA (Toxic Substances Control Act). The European Union issued a draft of a document amending Regulation 1907/2006 of the European Parliament and Council to cover the nanoform of substances. Both programs are based on the study and identification of environmental risks associated with nanomaterials taking into consideration the product life cycle. In relation to Brazil, regarding the third specific objective, it is notable that the country does not have any regulations applicable to nanostructures, although there is a Draft Law in progress. In this document, it is possible to identify some requirements related to the environment, such as environmental inspection and licensing; industrial waste management; notification of accidents and application of sanctions. However, it is not known if these requirements are sufficient for the prevention of environmental impacts and if national environmental agencies will know how to apply them correctly. This study intends to serve as a basis for future actions regarding environmental management applied to the use of nanotechnology in Brazil.

Keywords: environment, management, nanotechnology, politics

Procedia PDF Downloads 99
241 Monitoring of Formaldehyde over Punjab, Pakistan Using Car MAX-DOAS and Satellite Observation

Authors: Waqas Ahmed Khan, Faheem Khokhaar

Abstract:

Air pollution is one of the main drivers of climate change. Greenhouse gases cause the melting of glaciers, temperature change, and heavy rainfall. Formaldehyde (HCHO) is not a direct ozone-damaging precursor like CO₂ or methane, but it is chemically linked to glyoxal (CHOCHO), which does affect ozone. Countries around the globe have unique air quality monitoring protocols to describe local air pollution. Formaldehyde is a colorless, flammable, strong-smelling chemical that is used in building materials and in many household products and medical preservatives. It also occurs naturally in the environment, being produced in small amounts by most living organisms as part of normal metabolic processes. Pakistan lacks large-scale monitoring facilities to measure atmospheric gases on a regular basis. Since formaldehyde is linked to glyoxal and affects mountain biodiversity and livelihoods, its monitoring is necessary in order to maintain and preserve biodiversity. Objective: The present study aims to measure atmospheric HCHO vertical column densities (VCDs) obtained from ground-based instruments and to compute HCHO data over Punjab and elevated areas (Rawalpindi and Islamabad) from satellite observations during 2014-2015. Methodology: To explore the spatial distribution of HCHO, various field campaigns, involving international scientists, were conducted using car-mounted MAX-DOAS. The major focus was on cities along national highways and the industrial region of Punjab, Pakistan. Level 2 data products of the satellite instrument OMI, retrieved by the differential optical absorption spectroscopy (DOAS) technique, were used. The spatio-temporal distribution of HCHO column densities over the main cities and regions of Pakistan is discussed. Results: The results show high HCHO column densities, exceeding permissible limits, over the main cities of Pakistan, particularly areas with rapid urbanization and enhanced economic growth. 
The VCD values over elevated areas of Pakistan, such as Islamabad and Rawalpindi, range from about 1.0×10¹⁶ to 34.01×10¹⁶ molecules/cm², while Punjab has values around 34.01×10¹⁶ molecules/cm². Similarly, areas with major industrial activity showed high HCHO concentrations. Tropospheric glyoxal VCDs were found to be 4.75×10¹⁵ molecules/cm². Conclusion: The results show that the monitoring site surrounded by the Margalla Hills (Islamabad) has higher concentrations of formaldehyde. Wind data show that industrial areas and areas with high economic growth have high values, as they provide pathways for the transport of HCHO. The results obtained from this study would help the EPA, WHO, and air protection departments to monitor air quality and to further the preservation and restoration of mountain biodiversity.
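As a minimal sketch of the DOAS retrieval step behind the VCDs quoted above: a spectrometer measures a slant column density (SCD) along the light path, which is converted to a vertical column density by dividing by an air mass factor (AMF). The SCD and AMF values below are hypothetical placeholders, not measurements from this study.

```python
# Illustrative sketch (not the authors' retrieval code): the standard
# DOAS conversion from slant to vertical column density, VCD = SCD / AMF.

def vcd_from_scd(scd, amf):
    """Vertical column density = slant column density / air mass factor."""
    if amf <= 0:
        raise ValueError("air mass factor must be positive")
    return scd / amf

# Hypothetical MAX-DOAS reading: SCD in molecules/cm^2 and a geometric AMF
scd_hcho = 6.8e16   # molecules/cm^2 along the slant light path (assumed)
amf = 2.0           # dimensionless, from radiative-transfer assumptions

vcd_hcho = vcd_from_scd(scd_hcho, amf)
print(f"HCHO VCD = {vcd_hcho:.2e} molecules/cm^2")  # 3.40e+16
```

In practice the AMF itself comes from a radiative transfer model and depends on viewing geometry, aerosols, and the assumed HCHO profile; the division above is only the final bookkeeping step.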

Keywords: air quality, formaldehyde, MAX-DOAS, vertical column densities (VCDs), satellite instrument, climate change

Procedia PDF Downloads 193
240 Synergy Surface Modification for High Performance Li-Rich Cathode

Authors: Aipeng Zhu, Yun Zhang

Abstract:

The growing, grievous environmental problems, together with the exhaustion of energy resources, put urgent demands on the development of high-energy-density batteries. Considering capacity, resources, and the environment, manganese-based lithium-rich layer-structured cathode materials xLi₂MnO₃⋅(1-x)LiMO₂ (M = Ni, Co, Mn, and other metals) are drawing increasing attention due to their high reversible capacities, high discharge potentials, and low cost. They are expected to be one of the most promising cathode materials for next-generation Li-ion batteries (LIBs) with higher energy densities. Unfortunately, their commercial application is hindered by crucial drawbacks such as poor rate performance, limited cycle life, and continuous fading of the discharge potential. After decades of extensive study, significant achievements have been made in improving their cyclability and rate performance, but they still cannot meet the requirements of commercial utilization. One major problem for lithium-rich layer-structured cathode materials (LLOs) is the side reaction during cycling, which leads to severe surface degradation. In this process, metal ions dissolve in the electrolyte, and the surface phase change hinders the intercalation/deintercalation of Li ions, resulting in low capacity retention and a low working voltage. Surface coating is an efficient method for optimizing LLO cathode materials. Considering price and stability, Al₂O₃ was used as the coating material in this research. Meanwhile, because of the low initial Coulombic efficiency (ICE), the pristine LLOs were pretreated with KMnO₄ to increase the ICE. The precursor was prepared by a facile coprecipitation method. The as-prepared precursor was then thoroughly mixed with Li₂CO₃ and calcined in air at 500 ℃ for 5 h and 900 ℃ for 12 h to produce Li₁.₂[Ni₀.₂Mn₀.₆]O₂ (LNMO). The LNMO was then stirred in 0.1 ml/g KMnO₄ solution for 3 h. 
The resultant was filtered, washed with water, and dried in an oven. The LLOs obtained were dispersed in Al(NO₃)₃ solution, and the mixture was lyophilized to ensure that the Al(NO₃)₃ was uniformly coated on the LLOs. After lyophilization, the LLOs were calcined at 500 ℃ for 3 h to obtain LNMO@LMO@ALO. The working electrodes were prepared by casting a mixture of active material, acetylene black, and binder (polyvinylidene fluoride) dissolved in N-methyl-2-pyrrolidone, with a mass ratio of 80:15:5, onto an aluminum foil. The electrochemical performance tests showed that the multiply surface-modified material had a higher initial Coulombic efficiency (84%) and better capacity retention (91% after 100 cycles) compared with pristine LNMO (76% and 80%, respectively). These results suggest that the KMnO₄ pretreatment and Al₂O₃ coating can increase the ICE and cycling stability.
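The two figures of merit reported above are simple ratios, sketched below. The capacities (in mAh/g) are hypothetical placeholders chosen only to reproduce the reported percentages; they are not measured data from this work.

```python
# Illustrative definitions of the battery figures of merit quoted above.

def coulombic_efficiency(discharge_mah_g, charge_mah_g):
    """First-cycle discharge/charge capacity ratio, as a percentage."""
    return 100 * discharge_mah_g / charge_mah_g

def capacity_retention(capacity_cycle_n, capacity_cycle_1):
    """Capacity after n cycles relative to the first cycle, in percent."""
    return 100 * capacity_cycle_n / capacity_cycle_1

# Hypothetical capacities matching the reported 84% ICE and 91% retention
print(f"ICE: {coulombic_efficiency(252, 300):.0f}%")        # 84%
print(f"retention: {capacity_retention(229, 252):.0f}%")    # 91%
```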

Keywords: Li-rich materials, surface coating, lithium ion batteries, Al₂O₃

Procedia PDF Downloads 110
239 Scientific and Regulatory Challenges of Advanced Therapy Medicinal Products

Authors: Alaa Abdellatif, Gabrièle Breda

Abstract:

Background. Advanced therapy medicinal products (ATMPs) are innovative therapies that mainly target orphan diseases and high unmet medical needs. ATMPs include gene therapy medicinal products (GTMP), somatic cell therapy medicinal products (CTMP), and tissue-engineered products (TEP). Since legislation opened the way in 2007, 25 ATMPs have been approved in the EU, about the same number as by the U.S. Food and Drug Administration. However, not all of the approved ATMPs have successfully reached the market and retained their approval. Objectives. We aim to understand, in a systemic approach, all the factors limiting market access for these very promising therapies, so that these problems can be overcome in the future with scientific, regulatory, and commercial innovations. In contrast to recent reviews that focus on specific countries, products, or dimensions, we address all the challenges faced by ATMP development today. Methodology. We used mixed methods and a multi-level approach for data collection. First, we performed an updated academic literature review on ATMP development and its scientific and market access challenges (papers published between 2018 and April 2023). Second, we analyzed industry feedback from cell and gene therapy webinars and white papers published by providers and pharmaceutical companies. Finally, we established a comparative analysis of the regulatory guidelines published by the EMA and the FDA for ATMP approval. Results: The main challenge in bringing these therapies to market is the high development cost: developing ATMPs is expensive due to the need for specialized manufacturing processes. Furthermore, the regulatory pathways for ATMPs are often complex and can vary between countries, making it challenging to obtain approval and ensure compliance with different regulations. 
As a result of the high costs associated with ATMPs, challenges in obtaining reimbursement from healthcare payers lead to limited patient access to these treatments. ATMPs are often developed for orphan diseases, which means the patient population available for clinical trials is limited, making it challenging to demonstrate their safety and efficacy. In addition, the complex manufacturing processes required for ATMPs make it difficult to scale up production to meet demand, which can limit their availability and increase costs. Finally, ATMPs face safety and efficacy challenges: dangerous adverse events, such as toxicity related to the use of viral vectors or cell therapy, and starting-material and donor-related issues. Conclusion. Our mixed-methods analysis found that ATMPs face a number of challenges in their development, regulatory approval, and commercialization, and that addressing these challenges requires collaboration between industry, regulators, healthcare providers, and patient groups. This first analysis will help us address each challenge with proper and innovative solutions in order to increase the number of ATMPs approved and reaching patients.

Keywords: advanced therapy medicinal products (ATMPs), product development, market access, innovation

Procedia PDF Downloads 55
238 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, from low-level to high-level tasks, has been widely reformulated in the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize a network with a large number of convolution layers, because of the large number of unknowns to be optimized with respect to a training set that generally has to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels for reasons of computational expense, despite recent developments in effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout a deep CNN architecture. However, it is often desirable to consider different scales when analyzing visual features at different layers in the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer, based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter. 
The computational cost of the back-propagation procedure does not increase with larger filters, even though additional computation is required for the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks within the CNN framework. Acknowledgement: This work was supported by the MSIT (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP, and by NRF-2014R1A2A1A11051941 and NRF-2017R1A2B4006023.
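The core idea described above, fixed random kernels at several scales with a single trainable scalar per filter, can be sketched in a few lines. The filter sizes, counts, and the simple summation used here are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

# Minimal sketch: fixed random kernels of several sizes, each tied to ONE
# trainable scalar weight, so only len(kernels) parameters are learned
# regardless of kernel size.

rng = np.random.default_rng(0)

def conv2d_same(x, k):
    """Naive 'same' 2-D cross-correlation with zero padding (odd kernels)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

# Random kernels at multiple scales; each contributes w_i * conv(x, K_i)
sizes = [3, 5, 7]
kernels = [rng.standard_normal((s, s)) / s for s in sizes]
weights = np.ones(len(kernels))   # the only trainable parameters here

def multiscale_response(x):
    return sum(w * conv2d_same(x, k) for w, k in zip(weights, kernels))

x = rng.standard_normal((16, 16))
y = multiscale_response(x)
print(y.shape)  # (16, 16)
```

Because the kernels themselves stay fixed, gradients only flow into the scalar `weights`, which is why back-propagation cost does not grow with the filter size.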

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 263
237 How Holton’s Thematic Analysis Can Help to Understand Why Fred Hoyle Never Accepted Big Bang Cosmology

Authors: Joao Barbosa

Abstract:

After an intense dispute between big bang cosmology and its big rival, steady-state cosmology, some important experimental observations in the 1960s, such as the determination of the helium abundance of the universe and the discovery of the cosmic background radiation, were decisive for the progressive and wide acceptance of big bang cosmology and the inevitable abandonment of steady-state cosmology. But despite solid theoretical support and these solid experimental observations favorable to big bang cosmology, Fred Hoyle, one of the proponents of the steady state and the main opponent of the idea of the big bang (which, paradoxically, he himself named), never gave up and continued to fight for the idea of a stationary (or quasi-stationary) universe until the end of his life, even after decades of widespread consensus around big bang cosmology. We can try to understand this persistent attitude of Hoyle's by applying Holton's thematic analysis to cosmology. Holton recognizes in scientific activity a dimension that, even if unconscious or unacknowledged, is nevertheless very important in the work of scientists, implicitly articulated with the experimental and theoretical dimensions of science. This is the thematic dimension, constituted by themata: concepts, methodologies, and hypotheses of a metaphysical, aesthetic, logical, or epistemological nature, associated both with the cultural context and with the individual psychology of scientists. In practice, themata are expressed through the personal preferences and choices that guide the individual and collective work of scientists. Thematic analysis shows that big bang cosmology is mainly based on a set of themata consisting of evolution, finitude, life cycle, and change; steady-state cosmology is based on the opposite themata: steady state, infinity, continuous existence, and constancy. 
The passionate controversy between these cosmological views is part of an old cosmological opposition: the thematic opposition between an evolutionary view of the world (associated with Heraclitus) and a stationary view (associated with Parmenides). Personal preferences seem to have been important in this (thematic) controversy, and the thematic analysis developed here shows that Hoyle is a very illustrative example of a life-long personal commitment to certain themata, in this case to the themata opposite to those of big bang cosmology. His struggle against the big bang idea was strongly based on philosophical and even religious reasons, which, in a certain sense and from a Holtonian perspective, relate to thematic preferences. In this personal and persistent struggle, Hoyle always refused to accept the way some experimental observations were considered decisive in favor of the big bang idea, arguing that the success of the idea rested on sociological and cultural prejudices. This attitude of Hoyle's is a personal thematic attitude, in which the acceptance or rejection of what is presented as proof or scientific fact is conditioned by themata: what is a proof or a scientific fact for one scientist is something yet to be established for another scientist who defends different or even opposite themata.

Keywords: cosmology, experimental observations, Fred Hoyle, interpretation, life-long personal commitment, themata

Procedia PDF Downloads 140
236 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago

Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu

Abstract:

Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the 2015 Paris Agreement, all countries must adopt a different and more sustainable transportation system. From bikes to maglev, the world is slowly shifting to sustainable transportation. To develop a useful public transit system, a sustainable web of buses must be implemented. As of now, only a handful of cities have adopted a detailed plan to implement a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of its impacts. In this report, the economic and financial implications are taken into consideration to develop a well-rounded 10-year plan for New York City. We also apply the same financial model to two other cities, LA and Chicago. We picked NYC, Chicago, and LA for the comparative NPB analysis because they are all big metropolitan cities with complex transportation systems. All three cities have started action plans to achieve a full e-bus fleet in the coming decades. Moreover, their energy carbon footprints and energy prices are very different, and these are key factors in the benefits of electric buses. Using TCO (Total Cost of Ownership) financial analysis, we developed a model to calculate the NPB (Net Present Benefit) and compare EBS (electric buses) to DBS (diesel buses). We considered all essential aspects in our model: initial investment, including the cost of a bus, charger, and installation; government funds (federal, state, local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and V2G (vehicle-to-grid) benefits. We see about $1,400,000 in benefits over the 12-year lifetime of an EBS compared to a DBS, provided government funds offset 50% of the EBS purchase cost. 
With the government subsidy, an EBS starts to generate positive cash flow in the 5th year and can pay back its investment within 5 years. Note that our model includes environmental and health benefits, with $50,000 per bus counted as health benefits every year. Besides the health benefits, the most significant benefits come from energy cost savings and maintenance savings, which are about $600,000 and $200,000, respectively, over a 12-year life cycle. Using linear regression, given certain budget limitations, we then designed an optimal three-phase process to replace all of NYC's diesel buses within 10 years, i.e., by 2033. The linear regression process minimizes the total cost over the years while keeping the environmental cost lowest. The overall benefit of replacing all DBS with EBS for NYC is over $2.1 billion by 2033. For LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million by 2033, respectively. All NPB analyses and the algorithm to optimize the electrification phases are implemented in Python code and can be shared.
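The core NPB calculation described above can be sketched as a discounted cash-flow comparison. This is not the authors' code; the per-bus costs, benefits, and discount rate below are illustrative placeholders for the categories named in the abstract (subsidized purchase, energy and maintenance costs, health and V2G benefits).

```python
# Hedged sketch of a TCO / Net Present Benefit comparison (assumed figures).

def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def net_present_benefit(ebus_flows, dbus_flows, rate=0.03):
    """NPB = NPV(electric-bus cash flows) - NPV(diesel-bus cash flows)."""
    return npv(ebus_flows, rate) - npv(dbus_flows, rate)

years = 12
# Hypothetical per-bus USD figures: purchase net of a 50% subsidy at year 0,
# then yearly (energy + maintenance) costs minus (health + V2G) benefits.
ebus = [-375_000] + [-(30_000 + 10_000) + (50_000 + 5_000)] * years
dbus = [-500_000] + [-(80_000 + 27_000)] * years

print(f"NPB over {years} years: ${net_present_benefit(ebus, dbus):,.0f}")
```

With figures in this range, the discounted benefit comes out on the order of a million dollars per bus, consistent in magnitude with the result quoted in the abstract.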

Keywords: financial modeling, total cost ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago

Procedia PDF Downloads 22
235 Piezotronic Effect on Electrical Characteristics of Zinc Oxide Varistors

Authors: Nadine Raidl, Benjamin Kaufmann, Michael Hofstätter, Peter Supancic

Abstract:

If polycrystalline ZnO is properly doped and sintered under very specific conditions, it shows unique electrical properties that are indispensable for today's electronics industries, where it is the number one overvoltage protection material. Below a critical voltage, the polycrystalline bulk exhibits high electrical resistance, but it suddenly becomes up to twelve orders of magnitude more conductive if this voltage limit is exceeded (the varistor effect). It is known that these peerless properties originate in the grain boundaries of the material. Electric charge accumulates at the boundaries, causing a depletion layer in their vicinity and forming potential barriers (so-called Double Schottky Barriers, or DSBs) that are responsible for the highly non-linear conductivity. Since ZnO is a piezoelectric material, mechanical stresses induce polarization charges that modify the DSB heights and, as a result, the global electrical characteristics (the piezotronic effect). In this work, a finite element method was used to simulate the stresses emerging on individual grains in the bulk. In addition, experimental efforts were made to test a coherent model that could explain this influence. Electron backscatter diffraction was used to identify grain orientations. With the help of wet chemical etching, grain polarization was determined. Micro lock-in infrared thermography (MLIRT) was applied to detect current paths through the material, and a micro four-point-probe system (M4PPS) was employed to investigate current-voltage characteristics between single grains. Bulk samples were tested under uniaxial pressure. It was found that the conductivity can increase by up to three orders of magnitude with increasing stress. Through in-situ MLIRT, it could be shown that this effect is caused by the activation of additional current paths in the material. 
Further compressive tests were performed on miniaturized samples with grain paths containing only one or two grain boundaries. These tests evinced both an increase in conductivity, as observed for the bulk, and a decrease in conductivity. This phenomenon had been predicted theoretically and can be explained by piezotronically induced surface charges that act on the DSBs at the grain boundaries: depending on grain orientation and stress direction, a DSB can be raised or lowered. The experiments also revealed that the conductivity within one single specimen can increase or decrease depending on the current direction. This novel finding indicates the existence of asymmetric Double Schottky Barriers, which was furthermore proved by complementary methods. MLIRT studies showed that the intensity of heat generation within individual current paths depends on the direction of the stimulating current. M4PPS was used to study the relationship between the I-V characteristics of single grain boundaries and grain orientation, and revealed asymmetric behavior for very specific orientation configurations. A new model for the Double Schottky Barrier, taking into account this natural asymmetry and explaining the experimental results, will be given.
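The exponential sensitivity of grain-boundary conduction to barrier height, which underlies the stress effects described above, can be illustrated with the textbook thermionic-emission scaling σ ∝ exp(−qφB/kT). The barrier shift used below is an illustrative number, not a value extracted from these measurements.

```python
import math

# Sketch of why small piezotronic barrier shifts change conductivity so
# strongly: sigma ~ exp(-q*phi_B / kT) for thermionic emission over a DSB.

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def relative_conductivity(delta_phi_ev, temperature_k=300.0):
    """Conductivity ratio when the barrier height changes by delta_phi (eV)."""
    return math.exp(-delta_phi_ev / (K_B_EV * temperature_k))

# A hypothetical piezotronically lowered barrier (-0.18 eV) at room
# temperature yields roughly a three-orders-of-magnitude increase:
print(f"{relative_conductivity(-0.18):.0f}x more conductive")
```

The same formula with a positive barrier shift gives the symmetric decrease, matching the observation that stress can either raise or lower the conductivity depending on grain orientation.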

Keywords: asymmetric Double Schottky Barrier, piezotronics, varistor, zinc oxide

Procedia PDF Downloads 248
233 Immobilization of β-Galactosidase from Kluyveromyces lactis on Polyethylenimine-Agarose for Production of Lactulose

Authors: Carlos A. C. G. Neto, Natan C. G. Silva, Thais O. Costa, Luciana R. B. Goncalves, Maria V. P. Rocha

Abstract:

β-Galactosidases are enzymes that catalyze lactose hydrolysis and also favor transgalactosylation reactions for the production of prebiotics, among which lactulose stands out. When immobilized, these enzymes can have some of their characteristics substantially improved, and coating the supports with multifunctional polymers during immobilization is a promising alternative for extending the useful life of the biocatalysts, for example coating with polyethylenimine (PEI). PEI is a flexible polymer that conforms to the structure of the enzyme, giving greater stability, especially for multimeric enzymes such as β-galactosidases, and also protects it from environmental variations in, for example, pH and temperature. In addition, it can substantially improve the immobilization parameters and the efficiency of enzymatic reactions. In this context, the aim of the present work was first to develop biocatalysts of β-galactosidase from Kluyveromyces lactis immobilized on PEI-coated agarose, determining the immobilization parameters and their operational and thermal stability, and then to apply them to the hydrolysis of lactose and the synthesis of lactulose, using whey as the substrate. This immobilization strategy was chosen to improve the catalytic efficiency of the enzyme in the transgalactosylation reaction for the production of prebiotics, and there are few studies with β-galactosidase from this strain. The immobilization of β-galactosidase on agarose previously functionalized with 48% (w/v) glycidol and then coated with 10% (w/v) PEI solution was evaluated using an enzyme load of 10 mg/g of protein. Subsequently, the hydrolysis and transgalactosylation reactions were conducted at 50 °C and 120 RPM for 20 minutes, using whey (66.7 g/L of lactose) supplemented with 133.3 g/L fructose at a ratio of 1:2 (lactose/fructose). Operational stability studies were performed under the same conditions for 10 cycles. 
Thermal stability studies of the biocatalysts were conducted at 50 ºC in 50 mM phosphate buffer, pH 6.6, with 0.1 mM MnCl₂. The biocatalysts whose supports were coated were named AGA_GLY_PEI_GAL, and those that were not coated were named AGA_GLY_GAL. Coating the support with PEI considerably improved the immobilization yield (2.6-fold), the biocatalyst activity (1.4-fold), and the efficiency (2.2-fold). The AGA_GLY_PEI_GAL biocatalyst performed better than AGA_GLY_GAL in the hydrolysis and transgalactosylation reactions, converting 88.92% of the lactose within 5 min of reaction and leaving a residual concentration of 5.24 g/L. In addition, 13.90 g/L of lactulose was produced in the same time interval. The AGA_GLY_PEI_GAL biocatalyst was stable during the 10 cycles evaluated, converting approximately 80% of the lactose and producing 10.95 g/L of lactulose even after the tenth cycle. However, the thermal stability of the AGA_GLY_GAL biocatalyst was superior, with a half-life 5 times longer, probably because its enzyme was immobilized by covalent bonding, which is stronger than the adsorption used for AGA_GLY_PEI_GAL. Therefore, the strategy of coating supports with PEI proved effective for the immobilization of β-galactosidase from K. lactis, considerably improving the immobilization parameters as well as the enzyme-catalyzed reactions. In addition, the use of whey as a raw material for lactulose production proved to be an industrially advantageous alternative.

Keywords: β-galactosidase, immobilization, lactulose, polyethylenimine, whey

Procedia PDF Downloads 102
233 Collaborative Management Approach for Logistics Flow Management of Cuban Medicine Supply Chain

Authors: Ana Julia Acevedo Urquiaga, Jose A. Acevedo Suarez, Ana Julia Urquiaga Rodriguez, Neyfe Sablon Cossio

Abstract:

Despite the progress made in the logistics and supply chain fields, the development of business models that use information efficiently to facilitate the integrated management of logistics flows between partners is unavoidable. Collaborative management is an important tool for materializing cooperation between companies, as a way to achieve supply chain efficiency and effectiveness. The first phase of this research was a comprehensive analysis of collaborative planning in Cuban companies. It is evident that they have difficulties in supply chain planning: production, supply, and replenishment planning are independent tasks, as are logistics and distribution operations. Large inventories generate serious financial and organizational problems for entities, demanding increasing levels of working capital that cannot be financed. Problems were also found in the efficient application of information and communication technology to business management. The general objective of this work is to develop a methodology that allows the coordinated deployment of a planning and control system for the medicine logistics system in Cuba. To achieve these objectives, several supply chain coordination mechanisms, mathematical programming models, and other management techniques were analyzed against the requirements of collaborative logistics management in Cuba. One of the findings is the practical and theoretical inadequacy of the studied models for solving the current situation of Cuban logistics systems management. To contribute to tactical-operative logistics management, the Collaborative Logistics Flow Management Model (CLFMM) is proposed as a tool for balancing cycles, capacities, and inventories, always meeting final customers' demands at the service level they expect. At the center of the CLFMM is the supply chain planning and control system as a single information system, which acts on the process network. 
The development of the model is based on the empirical methods of analysis-synthesis and on case studies. Another finding is the demonstration that using a single information system to support supply chain logistics management makes it possible to determine the deadlines and quantities required in each process. This ensures that medications are always available to patients and that there are no failures that put the population's health at risk. The simulation of planning and control with the CLFMM for medicines such as dipyrone and chlordiazepoxide, during 5 months of 2017, made it possible to take measures to adjust the logistics flow, eliminate delayed processes, and avoid shortages of the medicines studied. As a result, the logistics cycle efficiency can be increased to 91%, inventory rotation would increase, and this results in a release of financial resources.
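The two indicators quoted above can be computed with simple ratios. The definitions and figures below are common textbook forms assumed for illustration; the abstract does not give the CLFMM's exact formulas or data.

```python
# Illustrative flow indicators (assumed definitions, hypothetical figures).

def cycle_efficiency(value_adding_days, total_cycle_days):
    """Share of the logistics cycle spent on value-adding activities."""
    return value_adding_days / total_cycle_days

def inventory_rotation(annual_consumption, average_inventory):
    """How many times the average inventory turns over per year."""
    return annual_consumption / average_inventory

# Hypothetical figures for one medicine flow (units of product per year)
print(f"cycle efficiency: {cycle_efficiency(20, 22):.0%}")
print(f"rotation: {inventory_rotation(120_000, 20_000):.1f} turns/year")
```

Higher rotation at the same demand means a smaller average inventory, which is the mechanism behind the "release of financial resources" mentioned above.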

Keywords: collaborative management, medicine logistic system, supply chain planning, tactical-operative planning

Procedia PDF Downloads 155
232 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

Evolutionary processes are not linear. Long periods of quiet, slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory explanation for these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: a) the biosphere is a single living organism, all parts of which are interconnected, and b) the biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but the accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution, as natural life origin and development, is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and the biosphere's intelligence. The intelligent matter that conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. The information acts like software stored in and controlled by the biosphere. 
Random mutations trigger this software, as stipulated by Darwinian evolutionary theories, and it is further stimulated by the growing demand for the biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it; more intricate organisms in turn require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerating evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or global memory storage volume reaches its limit, and b) biosphere computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. It logically resolves many puzzling problems of current evolutionary theory: speciation, as a result of GM purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium, happening when the two conditions a) and b) above are met; the Cambrian explosion; and mass extinctions, happening when more intelligent species should replace outdated creatures.

Keywords: supercomputer, biological evolution, Darwinism, speciation

Procedia PDF Downloads 143