Search results for: single and tandem organic solar cells
311 Non-Thermal Pulsed Plasma Discharge for Contaminants of Emerging Concern Removal in Water
Authors: Davide Palma, Dimitra Papagiannaki, Marco Minella, Manuel Lai, Rita Binetti, Claire Richard
Abstract:
Modern analytical technologies allow us to detect water contaminants at trace and ultra-trace concentrations, highlighting how a large number of organic compounds are not efficiently abated by most wastewater treatment facilities relying on biological processes; these micropollutants are usually referred to as contaminants of emerging concern (CECs). The availability of reliable and effective technologies, able to guarantee the high standards of water quality demanded by legislators worldwide, has therefore become a primary need. In this context, water plasma stands out among developing technologies as it is extremely effective in the abatement of numerous classes of pollutants, cost-effective, and environmentally friendly. In this work, a custom-built non-thermal pulsed plasma discharge generator was used to abate the concentration of selected CECs in water samples. Samples were treated in a 50 mL pyrex reactor using two types of plasma discharge, both operated with positive polarity, occurring either at the surface of the treated solution or underwater. The distance between the tips of the electrodes determined where the discharge formed: underwater when the distance was < 2 mm, at the water surface when the distance was > 2 mm. Peak voltage was in the 100-130 kV range with typical current values of 20-40 A. The pulse duration was 500 ns, and the discharge frequency could be manually set between 5 and 45 Hz. Treatment of a 100 µM diclofenac solution in MilliQ water, at a pulse frequency of 17 Hz, revealed that surface discharge was more efficient in the degradation of diclofenac, which was no longer detectable after 6 minutes of treatment; over 30 minutes were required to obtain the same result with underwater discharge. These results are explained by the higher rate of H₂O₂ formation (21.80 µmol L⁻¹ min⁻¹ for surface discharge against 1.20 µmol L⁻¹ min⁻¹ for underwater discharge), the larger discharge volume and UV light emission, and the high rates of ozone and NOx production (up to 800 and 1400 ppb, respectively) observed when working with surface discharge. The surface discharge was then used for the treatment of three selected perfluoroalkyl compounds, namely perfluorooctanoic acid (PFOA), perfluorohexanoic acid (PFHxA), and perfluorooctanesulfonic acid (PFOS), both individually and in mixture, in ultrapure and groundwater matrices with an initial concentration of 1 ppb. In both matrices, PFOS exhibited the best degradation, reaching complete removal after 30 min of treatment (degradation rate 0.107 min⁻¹ in ultrapure water and 0.0633 min⁻¹ in groundwater), while the degradation rates of PFOA and PFHxA were slower by around 65% and 80%, respectively. Total nitrogen (TN) measurements revealed increases of up to 45 mg L⁻¹ h⁻¹ in water samples treated with surface discharge, while in analogous samples treated with underwater discharge the TN increase was 5 to 10 times lower. These results can be explained by the significant NOx concentrations (over 1400 ppb) measured above the reactor operating with surface discharge; rapid NOx hydrolysis led to nitrate accumulation in the solution, explaining the observed evolution of TN values. Ion chromatography measurements confirmed that the vast majority of TN was in the form of nitrate. In conclusion, non-thermal pulsed plasma discharge, obtained with a custom-built generator, was proven to effectively degrade diclofenac in water matrices, confirming the potential interest of this technology for wastewater treatment.
The surface discharge proved more effective in CECs removal due to the high rates of formation of H₂O₂, ozone, and reactive radical species, and its strong UV light emission. Furthermore, the nitrate-enriched water obtained after treatment could be an interesting added-value product for use as fertilizer in agriculture. Acknowledgment: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 765860.
Keywords: CECs removal, nitrogen fixation, non-thermal plasma, water treatment
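As a back-of-the-envelope illustration of the first-order kinetics reported above, the short Python sketch below uses the two PFOS rate constants quoted in the abstract to estimate half-lives and 90%-removal times; the removal targets are our own illustrative choices, not values from the study.

```python
# Illustrative sketch, not the authors' code: first-order decay C(t) = C0*exp(-k*t)
# with the PFOS rate constants quoted in the abstract.
from math import log

rate_constants = {
    "ultrapure water": 0.107,   # min^-1 (from the abstract)
    "groundwater": 0.0633,      # min^-1 (from the abstract)
}

for matrix, k in rate_constants.items():
    t_half = log(2) / k   # half-life of the initial 1 ppb concentration
    t90 = log(10) / k     # time to remove 90% of the initial concentration
    print(f"PFOS in {matrix}: t1/2 = {t_half:.1f} min, t90 = {t90:.1f} min")
```

With the ultrapure-water rate constant this gives a 90%-removal time of about 21 min, consistent with the reported complete removal after 30 min of treatment.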
310 A Preliminary Study on the Effects of Equestrian and Basketball Exercises in Children with Autism
Authors: Li Shuping, Shu Huaping, Yi Chaofan, Tao Jiang
Abstract:
Equestrian practice is often considered to have a unique effect on improving symptoms in children with autism. This study evaluated and measured the changes in daily behavior, morphological, physical function, and fitness indexes of two groups of children with autism over 12 weeks of equestrian and basketball exercises. 19 clinically diagnosed children with moderate/mild autism were randomly divided into an equestrian group (9 children, age = 10.11±1.90 y) and a basketball group (10 children, age = 10.70±2.16 y). Both groups practiced twice a week for 45 to 60 minutes each time. Three scales, the Autism Behavior Checklist (ABC), the Childhood Autism Rating Scale (CARS), and the Clancy Autism Behavior Scale (CABS), were used to assess behavior and psychology. Four morphological and seven physical function and fitness indicators were measured to evaluate the effects of the two exercises on the children’s bodies. Evaluations were taken every four weeks: pre-exercise, at week 4, at week 8, and at week 12 (post-exercise). The results showed that in both groups the total scores of ABC, CARS, and CABS, and the ABC dimension scores for somatic motor, language, and life self-care, obtained after exercise were significantly lower than those obtained before the 12-week program. The ABC feeling dimension score of the equestrian group and the ABC communication dimension score of the basketball group were also significantly lower, and the upper arm circumference, sitting forward flexion, 40-second sit-up, 15-second lateral jump, vital capacity, and single-foot standing of both groups were significantly higher than before exercise. The BMI of the equestrian group was significantly reduced, and the handgrip strength of the basketball group was significantly increased. In conclusion, both types of exercise improved the daily behavior, morphological, physical function, and fitness indexes of the children with autism; however, the indicators that improved and the timing of the improvements differed between the two interventions. In the equestrian group, flexibility improved at week 4; sensory perception and the control and use of their own body improved at week 8; and core strength endurance, coordination, and cardiopulmonary function improved at week 12. In the basketball group, hand strength, balance, flexibility, and cardiopulmonary function improved at week 4; self-care ability, language expression ability, core strength endurance, and coordination improved at week 8; and control and use of their own body and social interaction ability improved at week 12. Comparing the exercise effects, improvements in physical control and application ability appeared earlier in the equestrian group, while improvements in language expression ability, self-care ability, balance, and cardiopulmonary function appeared earlier in the basketball group.
Keywords: intervention, children with autism, equestrian, basketball
309 Polymer Dispersed Liquid Crystals Based on Poly Vinyl Alcohol Boric Acid Matrix
Authors: Daniela Ailincai, Bogdan C. Simionescu, Luminita Marin
Abstract:
Polymer dispersed liquid crystals (PDLC) represent an interesting class of materials which combine the film-forming ability and mechanical strength of polymers with the opto-electronic properties of liquid crystals. The proper choice of the two components (the liquid crystal and the polymeric matrix) leads to materials suitable for a large area of applications, from electronics to biomedical devices. The objective of our work was to obtain PDLC films with potential applications in the biomedical field, using poly vinyl alcohol boric acid (PVAB) as the polymeric matrix for the first time. Besides presenting all the valuable properties of poly vinyl alcohol (such as biocompatibility, biodegradability, water solubility, good chemical stability, and film-forming ability), PVAB brings the advantage of containing the electron-deficient boron atom, which should promote liquid crystal anchoring and a narrow polydispersity of the liquid crystal droplets. Two different PDLC systems were obtained using two liquid crystals: a commercial nematic, 4-cyano-4'-pentylbiphenyl (5CB), and a new smectic liquid crystal synthesized by us, butyl-p-[p'-n-octyloxybenzoyloxy]benzoate (BBO). The PDLC composites were obtained by the encapsulation method, working with four different ratios between the polymeric matrix and the liquid crystal, from 60:40 to 90:10. In all cases, the composites were able to form free-standing, flexible films. Polarized light microscopy, scanning electron microscopy, differential scanning calorimetry, Raman spectroscopy, and contact angle measurements were performed in order to characterize the new composites. The new smectic liquid crystal was characterized using ¹H-NMR and single crystal X-ray diffraction, and its thermotropic behavior was established using differential scanning calorimetry and polarized light microscopy. Polarized light microscopy evidenced the formation of round birefringent droplets, anchored homeotropically in the first case and planar in the second, with a narrow dimensional polydispersity, especially for the PDLC containing the largest amount of liquid crystal, a fact also evidenced by SEM. The obtained values of the water-to-air contact angle showed that the composites have a proper hydrophilic-hydrophobic balance, making them potential candidates for bioapplications. More than this, our studies demonstrated that the water-to-air contact angle varies as a function of the PVAB matrix crystallinity degree, which can be controlled as a function of time. This allowed us to conclude that the use of PVAB as a matrix for obtaining PDLCs offers the possibility to modulate their properties for specific applications.
Keywords: 4-cyano-4'-pentylbiphenyl, butyl-p-[p'-n-octyloxybenzoyloxy]benzoate, contact angle, polymer dispersed liquid crystals, poly vinyl alcohol boric acid
308 A Greener Approach towards the Synthesis of an Antimalarial Drug Lumefantrine
Authors: Luphumlo Ncanywa, Paul Watts
Abstract:
Malaria is a disease that kills approximately one million people annually; children and pregnant women in sub-Saharan Africa are among the most affected. Malaria continues to be one of the major causes of death, especially in poor countries in Africa, so decreasing the burden of malaria and saving lives is essential. There is a major concern about malaria parasites developing resistance towards antimalarial drugs, and people are still dying due to the lack of medicine affordability in less well-off countries. If the cost of drugs were reduced so that more people could receive treatment, the number of deaths in Africa could be massively reduced. There is a shortage of pharmaceutical manufacturing capability within many of the countries in Africa, and one has to question how Africa would actually manufacture the drugs, active pharmaceutical ingredients, or medicines developed within these research programs. It is quite likely that such manufacturing would be outsourced overseas, hence increasing the cost of production and potentially limiting the full benefit of the original research. As a result, the last few years have seen major interest in developing more effective and cheaper technology for manufacturing generic pharmaceutical products. Micro-reactor technology (MRT) is an emerging technique that enables those working in research and development to rapidly screen reactions utilizing continuous flow, leading to the identification of reaction conditions that are suitable for use at a production level; this emerging technique will be used here to develop antimalarial drugs. It is this system flexibility that has the potential to reduce both the time taken and the risk associated with transferring reaction methodology from research to production. Using an approach referred to as scale-out or numbering up, a reaction is first optimized in the laboratory using a single micro-reactor, and in order to increase production volume, the number of reactors employed is simply increased. The overall aim of this research project is to develop and optimize synthetic processes for antimalarial drugs in continuous flow. This will provide a step change in pharmaceutical manufacturing technology that will increase the availability and affordability of antimalarial drugs on a worldwide scale, with a particular emphasis on Africa in the first instance. The research will determine the best chemistry and technology to define the lowest-cost manufacturing route to pharmaceutical products. We are currently developing a method to synthesize lumefantrine in continuous flow, using the batch process as a benchmark. Lumefantrine is a dichlorobenzylidene derivative effective for the treatment of various types of malaria; it is used with artemether for the treatment of uncomplicated malaria. The results obtained when synthesizing lumefantrine in a batch process are transferred to a continuous flow process in order to develop an even better and reproducible process. The development of an appropriate synthetic route for lumefantrine is therefore significant for the pharmaceutical industry. Consequently, if better (and cheaper) manufacturing routes to antimalarial drugs can be developed and implemented where needed, antimalarial drugs are far more likely to be available to those in need.
Keywords: antimalarial, flow, lumefantrine, synthesis
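To make the numbering-up idea concrete, the sketch below runs through the throughput arithmetic for a hypothetical flow synthesis. Every input figure (flow rate, outlet concentration, annual demand, uptime) is an illustrative assumption rather than project data; only the molar mass of lumefantrine (about 528.9 g/mol) is a known constant.

```python
# Illustrative sketch of the numbering-up arithmetic; all process figures are
# assumptions for demonstration, not data from this project.
import math

target_kg_per_year = 500.0     # assumed annual production target
flow_mL_min = 2.0              # assumed outlet flow of one micro-reactor
conc_mol_L = 0.10              # assumed lumefantrine concentration at the outlet
mw_g_mol = 528.9               # molar mass of lumefantrine, g/mol
uptime_h_per_year = 8000.0     # assumed operating hours per year

g_per_h = flow_mL_min / 1000.0 * 60.0 * conc_mol_L * mw_g_mol  # one reactor's output
kg_per_year = g_per_h * uptime_h_per_year / 1000.0
n_reactors = math.ceil(target_kg_per_year / kg_per_year)
print(f"one reactor: {kg_per_year:.1f} kg/year -> {n_reactors} reactors in parallel")
```

The point of the scale-out approach is visible in the last line: production volume is raised by replicating the optimized reactor, not by re-optimizing the chemistry at a larger scale.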
307 Re-Entrant Direct Hexagonal Phases in a Lyotropic System Induced by Ionic Liquids
Authors: Saheli Mitra, Ramesh Karri, Praveen K. Mylapalli, Arka. B. Dey, Gourav Bhattacharya, Gouriprasanna Roy, Syed M. Kamil, Surajit Dhara, Sunil K. Sinha, Sajal K. Ghosh
Abstract:
The most well-known structures of lyotropic liquid crystalline systems are the two-dimensional hexagonal phase of cylindrical micelles, with a positive interfacial curvature, and the lamellar phase of flat bilayers, with zero interfacial curvature. In aqueous surfactant solutions, concentration-dependent phase transitions have been investigated extensively. However, instead of changing the surfactant concentration, the local curvature of an aggregate can be altered by tuning the electrostatic interactions among the constituent molecules. Intermediate phases with non-uniform interfacial curvature are still unexplored steps on the route of the phase transition from hexagonal to lamellar. Understanding such structural evolution in lyotropic liquid crystalline systems is important as it decides the complex rheological behavior of the system, which is one of the main interests of the soft matter industry. Sodium dodecyl sulfate (SDS) is an anionic surfactant and can be considered a unique system for tuning the electrostatics with cationic additives. In the present study, imidazolium-based ionic liquids (ILs) with different numbers of carbon atoms in their single hydrocarbon chain were used as additives in an aqueous solution of SDS. At a fixed concentration of total non-aqueous components (SDS and IL), the molar ratio of these components was changed, which effectively altered the electrostatic interactions between the SDS molecules. As a result, the local curvature is modified and, correspondingly, the structure of the hexagonal liquid crystalline phase transforms into other phases. Polarizing optical microscopy of the SDS and imidazolium-based IL systems exhibited different textures of the liquid crystalline phases as a function of increasing IL concentration. The small angle synchrotron X-ray diffraction (SAXD) study indicated that the hexagonal phase of direct cylindrical micelles transforms into a rectangular phase in the presence of the short-chain (two carbon) IL, but into a lamellar phase in the presence of the long-chain (ten carbon) IL. Interestingly, in the presence of a medium-chain (four carbon) IL, the hexagonal phase transforms into another hexagonal phase of direct cylindrical micelles through the lamellar phase. To the best of our knowledge, such a phase sequence has not been reported earlier. Even though the small angle X-ray diffraction study revealed the lattice parameters of these phases to be similar to each other, their rheological behavior is distinctly different, and the rheological studies shed light on how these phases differ in their viscoelastic behavior. Finally, the packing parameters, calculated for these phases based on the geometry of the aggregates, explain the formation of the self-assembled aggregates.
Keywords: lyotropic liquid crystals, polarizing optical microscopy, rheology, surfactants, small angle x-ray diffraction
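The packing-parameter argument mentioned at the end of the abstract can be illustrated with a minimal calculation. The sketch below uses Tanford's empirical relations for the tail volume and length and an assumed effective headgroup area for SDS; the threshold ranges (P < 1/3 spheres, 1/3-1/2 cylinders, 1/2-1 bilayers) are the standard textbook values, not results from this study.

```python
# Minimal sketch of the packing parameter P = v / (a0 * lc); the headgroup
# area a0 is an assumed value, and v, lc follow Tanford's relations.
n = 12                          # SDS has a C12 hydrocarbon tail
v_nm3 = 0.0274 + 0.0269 * n     # Tanford tail volume, nm^3
lc_nm = 0.154 + 0.1265 * n      # Tanford maximum tail length, nm
a0_nm2 = 0.62                   # assumed effective headgroup area, nm^2

P = v_nm3 / (a0_nm2 * lc_nm)
print(f"packing parameter P = {P:.2f}")  # ~0.34 -> cylindrical micelles (hexagonal)
```

Additives that screen the headgroup charge effectively shrink a0, pushing P towards the bilayer range, which is the qualitative mechanism behind the hexagonal-to-lamellar route discussed above.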
306 Tall Building Transit-Oriented Development (TB-TOD) and Energy Efficiency in Suburbia: Case Studies, Sydney, Toronto, and Washington D.C.
Authors: Narjes Abbasabadi
Abstract:
As the world continues to urbanize and suburbanize, with suburbanization associated with mass sprawl being the dominant form of this expansion, sustainable development challenges become a growing concern. Sprawl, characterized by low density and automobile dependency, presents significant environmental issues regarding energy consumption and CO₂ emissions. This paper examines the vertical expansion of suburbs integrated into mass transit nodes as a planning strategy for boosting density, intensifying land use, converting single-family homes to multifamily dwellings or mixed-use buildings, and developing viable alternative transportation choices. It analyzes the spatial patterns of tall building transit-oriented development (TB-TOD) in the suburban regions of Sydney (Australia), Toronto (Canada), and Washington D.C. (United States). The main objective of this research is to understand the effect on energy efficiency of the new morphology of suburban tall buildings, the physical dimensions of individual buildings, and their arrangement at a larger scale. This study aims to answer these questions: 1) Why and how can the phenomenon of vertical expansion or high-rise development be integrated into suburban settings? 2) How can this phenomenon contribute to an overall denser development of suburbs? 3) Which spatial patterns or typologies/sub-typologies of the TB-TOD model have the greatest energy efficiency? It addresses these questions by focusing on 1) heat energy demand (excluding cooling and lighting) related to design issues at two levels, the macro (urban) scale and the micro scale of individual buildings: physical dimension, height, morphology, the spatial pattern of tall buildings, and their relationship with each other and with transport infrastructure; and 2) examining TB-TOD to provide more evidence of how the model works regarding ridership. The findings of the research show that the TB-TOD model can be identified as the most appropriate spatial pattern for tall buildings in suburban settings, and that among the TB-TOD typologies/sub-typologies, compact tall building blocks can be the most energy efficient. This model is associated with much lower energy demands in buildings at the neighborhood level as well as lower transport needs at the urban scale, while detached suburban high-rise or low-rise suburban housing will have the lowest energy efficiency. The research methodology is based on a quantitative study applying the available literature and data as well as mapping and visual documentation of urban regions from sources such as Google Earth, Microsoft Bing Bird View, and Streetview. It examines each suburb within each city through satellite imagery and explores the typologies/sub-typologies that are morphologically distinct. The study quantifies the heat energy efficiency of different spatial patterns through simulation via GIS software.
Keywords: energy efficiency, spatial pattern, suburb, tall building transit-oriented development (TB-TOD)
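As a toy illustration of why compact tall blocks tend to show lower heat energy demand, the sketch below compares the exposed envelope area per unit of floor area for a detached house and a compact tower. The geometries are invented for illustration and are not taken from the study's GIS simulations; ground-floor losses and shared party walls are ignored for simplicity.

```python
# Illustrative sketch (assumed geometries, not the study's data): envelope area
# per m^2 of heated floor area, a first-order proxy for heat energy demand.
def envelope_per_floor_area(w, d, h, floors):
    """Exposed envelope (walls + roof) per m^2 of floor area for a box building."""
    walls = 2 * (w + d) * h * floors
    roof = w * d
    floor_area = w * d * floors
    return (walls + roof) / floor_area

detached = envelope_per_floor_area(w=10, d=10, h=3, floors=2)   # suburban house
tower = envelope_per_floor_area(w=30, d=30, h=3, floors=20)     # compact tall block
print(f"detached house: {detached:.2f} m2 envelope per m2 floor")
print(f"compact tower:  {tower:.2f} m2 envelope per m2 floor")
```

With these assumed dimensions the tower exposes roughly a quarter of the envelope per unit of floor area, which is the geometric intuition behind the neighborhood-level heat demand findings reported above.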
305 Amphiphilic Compounds as Potential Non-Toxic Antifouling Agents: A Study of Biofilm Formation Assessed by Micro-titer Assays with Marine Bacteria and Eco-toxicological Effect on Marine Algae
Authors: D. Malouch, M. Berchel, C. Dreanno, S. Stachowski-Haberkorn, P-A. Jaffres
Abstract:
Biofilm is a predominant lifestyle chosen by bacteria. Whether developed on an immersed surface or as a mobile biofilm known as flocs, bacteria within this form of life show properties different from those of their planktonic counterparts. Within the biofilm, the self-formed matrix of Extracellular Polymeric Substances (EPS) offers hydration, resource capture, and enhanced resistance to antimicrobial agents, and allows cell communication. Biofouling is a complex natural phenomenon that involves biological, physical, and chemical properties related to the environment, the submerged surface, and the living organisms involved. Bio-colonization of artificial structures can cause various economic and environmental impacts. The increase in costs associated with the over-consumption of fuel by biocolonized vessels has been widely studied; measurement drift in submerged sensors, obstructions in heat exchangers, and deterioration of offshore structures are further major difficulties that industries are dealing with. Therefore, surfaces that inhibit biocolonization are required in different areas (water treatment, marine paints, etc.), and many efforts have been devoted to producing efficient and eco-compatible antifouling agents. The different steps of surface fouling are widely described in the literature; studying the biofilm and its stages provides a better understanding of how to elaborate more efficient antifouling strategies. Several approaches are currently applied, such as the use of biocidal antifouling paints (mainly with copper derivatives) and super-hydrophobic coatings. While these two processes are proving to be the most effective, they are not entirely satisfactory, especially in a context of changing legislation. Nowadays, the challenge is to prevent biofouling with non-biocidal compounds, offering a cost-effective solution with no toxic effects on marine organisms. Since the micro-fouling phase plays an important role in the regulation of the following steps of biofilm formation, it is desirable to reduce or delay the biofouling of a given surface by inhibiting micro-fouling at its early stages. In our recent works, we reported that some amphiphilic compounds exhibited bacteriostatic or bactericidal properties at a concentration that did not affect eukaryotic cells. These remarkable properties invited us to assess this type of bio-inspired phospholipid to prevent the colonization of surfaces by marine bacteria. Of note, other studies reported that amphiphilic compounds interact with bacteria, leading to a reduction of their development. An amphiphilic compound is a molecule consisting of a hydrophobic domain and a polar head (ionic or non-ionic). These compounds appear to have interesting antifouling properties: some ionic compounds have shown antimicrobial activity, and zwitterions can reduce the nonspecific adsorption of proteins. Herein, we investigate the potential of amphiphilic compounds as inhibitors of bacterial growth and marine biofilm formation. The aim of this study is to compare the efficacy of four synthetic phospholipids featuring either a cationic charge (BSV36, KLN47) or a zwitterionic polar-head group (SL386, MB2871) in preventing microfouling by marine bacteria.
We also study the toxicity of these compounds in order to identify the most promising compound, which must combine high anti-adhesive properties with low cytotoxicity towards two links representative of coastal marine food webs: phytoplankton and oyster larvae.
Keywords: amphiphilic phospholipids, bacterial biofilm, marine microfouling, non-toxic antifouling
304 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction
Authors: Bruce Wrightsman
Abstract:
Construction and design are inexorably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This results in the separation of two agencies: the building envelope (skin) and the structure. From a material performance position, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known; however, its enormous effectiveness within wood-framed construction has seldom led to serious questioning of, and challenges to, what it means to build. There are several downsides to this method that are less widely discussed. The first, and perhaps biggest, is waste. Second, its reliance on wood assemblies forming walls, floors, and roofs conventionally nailed together through simple plate surfaces is structurally inefficient, requiring additional material in the form of plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, when we look back at the history of wood construction in the airplane and boat manufacturing industries, we see a significant transformation in the relationship of structure to skin. Boat construction transformed from the indigenous wood practices of birch bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the evolution of the merged assemblies that drive the industry today. In 1911, the Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane, called the Cigare. The wing and tail assemblies consisted of a thin, lightweight, often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces, providing even greater strength with less material. The monocoque, which translates to 'mono or single shell', is a structural system that supports loads and transfers them through an external enclosure system. Monocoques have largely existed outside the domain of architecture; however, this uniting of divergent systems has been demonstrated to be lighter, utilizing less material than traditional wood building practices. This paper will examine the role monocoque systems have played in the history of wood construction through the lineage of the boat- and airplane-building industries, and their design potential for wood building systems in architecture, through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system comprised of interlocking small wood members to create thin-shell assemblies for the walls, roof, and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster a deeper, more honest discourse regarding the limitations and impact of traditional wood framing.
Keywords: wood building systems, material histories, monocoque systems, construction waste
303 A Parallel Cellular Automaton Model of Tumor Growth for Multicore and GPU Programming
Authors: Manuel I. Capel, Antonio Tomeu, Alberto Salguero
Abstract:
Tumor growth from a transformed cancer cell up to a clinically apparent mass spans a range of spatial and temporal magnitudes. Through computer simulations, Cellular Automata (CA) can accurately describe the complexity of the development of tumors. Tumor development prognosis can now be made, without making patients undergo annoying medical examinations or painful invasive procedures, if we develop appropriate CA-based software tools. In silico testing mainly refers to computational biology research studies applicable to clinical actions in medicine. Establishing sound computer-based models of cellular behavior certainly reduces costs and saves precious time with respect to carrying out experiments in vitro at labs or in vivo with living cells and organisms. These models aim to produce scientifically relevant results compared to traditional in vitro testing, which is slow, expensive, and does not generally have acceptable reproducibility under the same conditions. For speeding up computer simulations of cellular models, the specific literature shows recent proposals based on the CA approach that include advanced techniques, such as the clever use of efficient supporting data structures when modeling with deterministic and stochastic cellular automata. Multiparadigm and multiscale simulation of tumor dynamics is just beginning to be developed by the concerned research community. The use of stochastic cellular automata (SCA), whose parallel programming implementations are open to yielding high computational performance, is of much interest and should be explored up to its computational limits. There have been some approaches based on optimizations to advance multiparadigm models of tumor growth, which mainly pursue improved performance through guaranteed efficient memory accesses, or by considering the dynamic evolution of the memory space (grids, trees, ...) that holds crucial data in simulations. In our opinion, the different optimizations mentioned above are not decisive enough to achieve the high-performance computing power that cell-behavior simulation programs actually need. The possibility of using multicore and GPU parallelism as a promising multiplatform framework for developing new programming techniques to speed up the computation time of simulations has only started to be explored in the last few years. This paper presents a model that incorporates parallel processing, identifying the synchronization necessary for speeding up tumor growth simulations implemented in Java and C++ programming environments. The speed-up improvement provided by specific parallel syntactic constructs, such as executors (thread pools) in Java, is studied. The new parallel tumor growth model is proved using implementations in the Java and C++ languages on two different platforms: an Intel Core i-X chipset and an HPC cluster of processors at our university. The parallelization of the Poleszczuk and Enderling model (normally used by researchers in mathematical oncology) proposed here is analyzed with respect to performance gain. We intend to apply the model and the overall parallelization technique presented here to solid tumors of specific affiliation such as prostate, breast, or colon.
Our final objective is to set up a multiparadigm model capable of modelling angiogenesis, or the growth inhibition induced by chemotaxis, as well as the effect of therapies based on the presence of cytotoxic/cytostatic drugs.
Keywords: cellular automaton, tumor growth model, simulation, multicore and manycore programming, parallel programming, high performance computing, speed up
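The abstract studies Java executors (thread pools) as the parallel construct; the sketch below is a Python analog of that pattern using concurrent.futures, with the grid split into strips and the per-generation map() call acting as the barrier between generations. The local update rule is a toy stand-in for the Poleszczuk-Enderling dynamics, and in CPython a ProcessPoolExecutor or vectorized kernels would be needed for real speed-up of CPU-bound work.

```python
# Structural sketch (not the authors' code) of an executor-parallelized CA:
# all strips of generation t are updated by a pool, then the grid is swapped.
from concurrent.futures import ThreadPoolExecutor
import random

N, GENERATIONS, WORKERS = 200, 50, 4
grid = [[random.random() < 0.01 for _ in range(N)] for _ in range(N)]  # True = tumor cell

def update_strip(args):
    old, r0, r1 = args
    new_rows = []
    for i in range(r0, r1):
        row = []
        for j in range(N):
            # Toy rule standing in for the real model: occupied cells persist,
            # and cells with occupied neighbors become occupied with probability p.
            neighbors = sum(old[(i + di) % N][(j + dj) % N]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)
                            if (di, dj) != (0, 0))
            row.append(old[i][j] or (neighbors > 0 and random.random() < 0.05))
        new_rows.append(row)
    return r0, new_rows

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    strip = N // WORKERS
    for _ in range(GENERATIONS):
        tasks = [(grid, k * strip, N if k == WORKERS - 1 else (k + 1) * strip)
                 for k in range(WORKERS)]
        new_grid = [None] * N
        for r0, rows in pool.map(update_strip, tasks):  # map() is the barrier
            new_grid[r0:r0 + len(rows)] = rows
        grid = new_grid  # synchronous update: next generation reads a fresh grid

print("tumor cells:", sum(sum(row) for row in grid))
```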
302 Time Travel Testing: A Mechanism for Improving Renewal Experience
Authors: Aritra Majumdar
Abstract:
While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability, and it also showcases how successful an organization is in holding on to its customers. It is an experimentally proven fact that the lion’s share of profit always comes from existing customers. Hence, seamless management of renewal journeys across different channels goes a long way in improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach for both business and technology teams to enhance the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper will focus on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. Along with that, it will call out some of the best practices and common accelerator implementation ideas which are generic across verticals like healthcare, insurance, etc. In this abstract document, a high-level snapshot of these pillars will be provided. Time Travel Planning: The first step in setting up a time travel testing roadmap is appropriate planning. Planning will include identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done, and, most importantly, planning for test automation during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover required customer segments and narrowing it down to multiple offer sequencings based on defined parameters are key to successful time travel testing. Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section will talk about the necessary steps for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section will talk about the focus areas of enterprise automation and how automation testing can be leveraged to improve the overall quality without compromising the project schedule. Along with the above-mentioned items, the white paper will elaborate on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum up, this paper is based on the real-time experience the author has had with time travel testing. While actual customer names and program-related details will not be disclosed, the paper will highlight the key learnings, which will help other teams to implement time travel testing successfully.
Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas
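A core piece of time travel planning is computing the dates the environment clock must be set to so that each renewal event can be exercised on demand. The sketch below shows that arithmetic; the event names and offsets are hypothetical examples, not taken from the paper.

```python
# Minimal sketch (hypothetical renewal schedule) of time travel planning:
# given a renewal date and the offsets at which renewal events fire, derive
# the target system dates for forward/backward time travel.
from datetime import date, timedelta

renewal_date = date(2025, 9, 1)        # assumed policy anniversary
event_offsets_days = {                 # assumed event schedule, in days
    "renewal_offer_generated": -60,
    "reminder_notice_sent": -30,
    "renewal_effective": 0,
    "grace_period_expires": +15,
}

for event, offset in event_offsets_days.items():
    target = renewal_date + timedelta(days=offset)
    print(f"set environment date to {target} to exercise '{event}'")
```

In practice the same computed dates would also drive the automated suites mentioned under enterprise automation, so that each shifted clock state is validated across the downstream applications.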
301 Comparative Study of Active Release Technique and Myofascial Release Technique in Patients with Upper Trapezius Spasm
Authors: Harihara Prakash Ramanathan, Daksha Mishra, Ankita Dhaduk
Abstract:
Relevance: This study will educate the clinician in putting into practice advanced methods of movement science for restoring function. Purpose: The purpose of this study is to compare the effectiveness of the Active Release Technique and the Myofascial Release Technique on range of motion, neck function, and pain in patients with upper trapezius spasm. Methods/Analysis: The study was approved by the institutional Human Research and Ethics Committee. It included sixty patients aged between 20 and 55 years with upper trapezius spasm. Patients were randomly divided into two groups receiving the Active Release Technique (Group A) or the Myofascial Release Technique (Group B). Patients were treated for 1 week, and three outcome measures, ROM, pain, and functional level, were measured using a goniometer, the Visual Analog Scale (VAS), and the Neck Disability Index (NDI) questionnaire, respectively. The paired-sample 't' test was used to compare the pre- and post-intervention values of cervical range of motion, NDI, and VAS within Group A and Group B. The independent 't' test was used to compare the two groups in terms of improvement in cervical range of motion and decreases in VAS and NDI scores. Results: Both groups showed statistically significant improvements in cervical ROM and reductions in pain and NDI scores. However, the mean changes in cervical flexion, cervical extension, right side flexion, left side flexion, right side rotation, left side rotation, pain, and neck disability level showed significantly greater improvement (P < 0.05) in the patients who received the Active Release Technique compared to the Myofascial Release Technique. Discussion and conclusions: In the present study, the average improvement immediately post-intervention is significantly greater than before treatment, but there is even more improvement after seven sessions than after a single session. Hence, several sessions of manual techniques are necessary to produce clinically relevant results. The Active Release Technique helps to reduce pain by removing adhesions and promoting normal tissue extensibility. The act of tensioning and compressing the affected tissue, both with digital contact and through active movement performed by the patient, is a plausible mechanism for the tissue healing observed in this study. This study concluded that both the Active Release Technique (ART) and the Myofascial Release Technique (MFR) are effective in managing upper trapezius muscle spasm, but more improvement can be achieved with the Active Release Technique. Impact and Implications: The Active Release Technique can be adopted as a mainstay treatment approach for trapezius spasm for faster relief and improved functional status.
Keywords: trapezius spasm, myofascial release, active release technique, pain
300 Co₂Fe LDH on Aromatic Acid Functionalized N Doped Graphene: Hybrid Electrocatalyst for Oxygen Evolution Reaction
Authors: Biswaranjan D. Mohapatra, Ipsha Hota, Swarna P. Mantry, Nibedita Behera, Kumar S. K. Varadwaj
Abstract:
Designing highly active and low-cost oxygen evolution (2H₂O → 4H⁺ + 4e⁻ + O₂) electrocatalysts is one of the most active areas of advanced energy research. Some precious-metal-based electrocatalysts, such as IrO₂ and RuO₂, have shown excellent performance for the oxygen evolution reaction (OER); however, they suffer from high cost and low abundance, which limit their applications. Recently, layered double hydroxides (LDHs), composed of layers of divalent and trivalent transition metal cations coordinated to hydroxide anions, have gathered attention as alternative OER catalysts. However, LDHs are insulators and are therefore coupled with carbon materials for electrocatalytic applications. Graphene covalently doped with nitrogen has been demonstrated to be an excellent electrocatalyst for energy conversion technologies such as the oxygen reduction reaction (ORR), the oxygen evolution reaction (OER), and the hydrogen evolution reaction (HER). However, these catalysts operate at high overpotentials, significantly above the thermodynamic standard potentials. Recently, we reported remarkably enhanced catalytic activity of benzoate- or 1-pyrenebutyrate-functionalized N-doped graphene towards the ORR in alkaline medium. Molecular and heteroatom co-doping of graphene is expected to tune its electronic structure. Therefore, an innovative catalyst architecture, in which LDHs are anchored on aromatic acid functionalized N-doped graphene, may boost the OER activity to a new benchmark. Herein, we report the fabrication of Co₂Fe LDH on aromatic acid (AA) functionalized N-doped reduced graphene oxide (NG) and study its OER activity in alkaline medium. In the first step, a novel polyol method is applied for the synthesis of AA-functionalized NG, which is well dispersed in aqueous medium. In the second step, Co₂Fe LDH is grown on the AA-functionalized NG by a co-precipitation method. The hybrid samples are abbreviated as Co₂Fe LDH/AA-NG, where AA is benzoic acid, 1,3-benzenedicarboxylic acid (BDA), or 1,3,5-benzenetricarboxylic acid (BTA). The crystal structure and morphology of the samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), and transmission electron microscopy (TEM). These studies confirmed the growth of single-phase layered LDH. The electrocatalytic OER activity of the hybrid materials was investigated by the rotating disc electrode (RDE) technique on a glassy carbon electrode. Linear sweep voltammetry (LSV) curves for these catalyst samples were recorded at 1600 rpm. We observed significant OER performance enhancement, in terms of onset potential and current density, for the Co₂Fe LDH/BTA-NG hybrid, indicating a synergistic effect. This exploration of the effect of molecular functionalization in doped graphene and LDH systems may provide an excellent platform for the innovative design of OER catalysts.
Keywords: π-π functionalization, layered double hydroxide, oxygen evolution reaction, reduced graphene oxide
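For readers unfamiliar with how OER activity is extracted from LSV data, the sketch below shows the standard post-processing: converting a measured potential to the RHE scale and reporting the overpotential relative to the 1.23 V thermodynamic potential. The numerical values (reference electrode, pH, measured potential) are assumptions for illustration; the abstract does not report them.

```python
# Illustrative sketch (assumed values, not results from this work) of the
# standard OER metric: overpotential at 10 mA cm^-2 on the RHE scale.
E_measured_V = 0.62      # assumed potential vs. Ag/AgCl at 10 mA cm^-2
E_ref_V = 0.197          # Ag/AgCl (sat. KCl) offset vs. SHE, V
pH = 14.0                # assumed 1 M KOH electrolyte

E_RHE = E_measured_V + E_ref_V + 0.059 * pH   # Nernst shift to the RHE scale
overpotential_mV = (E_RHE - 1.23) * 1000      # 1.23 V = E0(O2/H2O)
print(f"E vs RHE = {E_RHE:.3f} V, overpotential = {overpotential_mV:.0f} mV")
```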
299 A Self-Heating Gas Sensor of SnO2-Based Nanoparticles Electrophoretic Deposited
Authors: Glauco M. M. M. Lustosa, João Paulo C. Costa, Sonia M. Zanetti, Mario Cilense, Leinig Antônio Perazolli, Maria Aparecida Zaghete
Abstract:
The contamination of the environment has been one of the biggest problems of our time, mostly due to the development of many industries. SnO₂ is an n-type semiconductor with a band gap of about 3.5 eV whose electrical conductivity depends on the type and amount of modifier agents added to the ceramic matrix during synthesis, allowing applications in the sensing of gaseous pollutants in ambient air. The chemical synthesis by the polymeric precursor method consists of a complexation reaction between tin ions and citric acid at 90 °C for 2 hours and the subsequent addition of ethylene glycol for polymerization at 130 °C for 2 hours. Polymeric resins of zinc, cobalt, and niobium ions were also prepared. Stoichiometric amounts of the solutions were mixed to obtain the systems (Zn, Nb)-SnO₂ and (Co, Nb)-SnO₂. The immobilization of the metals reduces their segregation during calcination, resulting in a crystalline oxide with high chemical homogeneity. The resin was pre-calcined at 300 °C for 1 hour, milled in an attritor mill at 500 rpm for 1 hour, and then calcined at 600 °C for 2 hours. X-ray diffraction (XRD) indicated the formation of the SnO₂ rutile phase (JCPDS card nº 41-1445). Characterization by high-resolution scanning electron microscopy showed a nanostructured spherical ceramic powder 10-20 nm in diameter. 20 mg of the SnO₂-based powder was kept in 20 ml of isopropyl alcohol and then taken to an electrophoretic deposition (EPD) system. The EPD method allows control of the film thickness through the voltage or current applied to the electrophoretic cell and the deposition time of the ceramic particles. This procedure yields films in a short time at low cost, bringing prospects for a new generation of smaller devices with easy technology integration. In this research, films were obtained on an alumina substrate with interdigital electrodes after applying 2 kV for 5 and 10 minutes in cells containing alcoholic suspensions of the (Zn, Nb)-SnO₂ and (Co, Nb)-SnO₂ powders, forming a sensing layer. The substrate has integrated micro hotplates that provide instantaneous and precise temperature control when a voltage is applied. The films were sintered at 900 and 1000 °C in a 770 W microwave oven, adapted by the research group itself with a temperature controller. This sintering is a fast process with a homogeneous heating rate, which promotes controlled grain growth and also the diffusion of the modifier agents, inducing the creation of intrinsic defects that change the electrical characteristics of the SnO₂-based material. This study successfully demonstrated a microfabricated system with an integrated micro-hotplate for the detection of CO and NO₂ gas at different concentrations and temperatures, with self-heating SnO₂-based nanoparticle films, suitable both for industrial process monitoring and for the detection of low concentrations in buildings/residences in order to safeguard human health. The results indicate the possibility of developing gas sensor devices with low power consumption for integration into portable electronic equipment with fast analysis. Acknowledgments: The authors thank the LMA-IQ for providing the FEG-SEM images and acknowledge the financial support of this project by the Brazilian research funding agencies CNPq, FAPESP 2014/11314-9, and CEPID/CDMF-FAPESP 2013/07296-2.
Keywords: chemical synthesis, electrophoretic deposition, self-heating, gas sensor
298 Birth Weight, Weight Gain and Feeding Pattern as Predictors for the Onset of Obesity in School Children
Authors: Thimira Pasas P, Nirmala Priyadarshani M, Ishani R
Abstract:
Obesity is a global health issue. Early identification is essential in order to plan interventions and to reduce the worsening of obesity and its consequences for the health of the individual. Childhood obesity is multifactorial, with both modifiable and unmodifiable risk factors: a genetically susceptible individual (unmodifiable), when placed in an obesogenic environment (modifiable), is likely to become obese. The present study was conducted to identify the age of onset of childhood obesity and the influence of modifiable risk factors among school children living in a suburban area of Sri Lanka. Data were collected from 11-12-year-old children attending government schools in the Piliyandala Educational Zone, using a validated, pre-tested self-administered questionnaire. A stratified random sampling method was used to select schools so that all three types of government schools were represented; due to the prevailing pandemic situation, information from the last school medical inspection (2020 data) was used for this purpose. For each obese child identified, two non-obese children were selected as controls, drawn by a systematic random sampling method with a sampling interval of 3. Data were collected using the questionnaire together with the Child Health Development Record of each child. An introduction, which included explanations and instructions for filling in the questionnaire, was carried out as a group activity prior to distributing the questionnaire among the sample. The results aligned with the hypothesis that the age of onset of childhood obesity, and hence the window for its prediction, lies within the first two years of life. A total of 130 children (66 males, 64 females) participated in the study. The age of onset of obesity was seen to be within the first two years of life, and the risk of obesity at 11-12 years of age was three times higher among females who underwent rapid weight gain during infancy. Consuming milk (a mug of milk) prior to breakfast emerged as a risk factor that increased the risk of obesity three-fold, especially among females. Proper monitoring must be carried out to identify rapid weight gain, especially within the first 2 years of life. Identification of confounding factors, proper awareness among mothers/guardians, and effective interventions are needed to reduce the obesity risk among school children in the future.
Keywords: childhood obesity, school children, age of onset, weight gain, feeding pattern, activity level
297 Auto Surgical-Emissive Hand
Authors: Abhit Kumar
Abstract:
The world is full of master-slave telemanipulators, where the doctor masters the console and the surgical arm performs the operations; i.e., these robots are passive robots. What the world needs to recognize is that in using these passive robots we still require doctors to operate the consoles, so the concept of robotics is not fully utilized; the focus should therefore be on active robots. The Auto Surgical-Emissive Hand (AS-EH) uses this concept of active robotics: an anthropomorphic hand focused on autonomous surgical, emissive, and scanning operation, enabled with three-way emission (a laser beam; icy steam at -5 °C < T < 5 °C; and a thermal imaging camera, TIC) embedded in the palm of the anthropomorphic hand and structured in the form of a three-way disc. The fingers of the AS-EH will have tactile, force, and pressure sensors rooted in them so that the mechanics of force, pressure, and physical presence on the external subject can be maintained. Our main focus, however, is on the concept of 'emission'. The question arises how three unrelated methods will work together, merged in a single programmed hand: each of the three methods will be utilized according to the needs of the external subject. The laser is emitted via a pin-sized outlet; the radiation is channeled through a thin channel that connects internally to the palm of the surgical hand and leads to the pin-sized outlet. Here, the laser is used to emit radiation sufficient to cut open the skin for the removal of metal scrap or any other foreign material while the patient is under anesthesia, keeping the complexity of the operation very low. At the same time, the TIC, fitted with an accurate temperature compensator (ATC), provides a real-time feed of the surgery in the form of a heat image; this gives us the chance to analyze the progress, and the ATC helps determine elevated body temperature while the operation proceeds. The thermal imaging camera is rooted internally in the AS-EH while also being connected to real-time software externally to provide live feedback. The icy steam provides a cooling effect before and after the operation. For a fuller appreciation of this concept, consider a simple observation: if a finger remains in icy water for a long time, it freezes, blood flow stops, and the portion becomes numb and isolated; even pinching it provides no sensation, as the nerve impulse does not reach the brain and the sensory receptors are not activated, meaning no sense of touch is observed. Utilizing the same principle, the icy steam can be emitted via a pin-sized hole onto the area of concern at a temperature below 273 K, which frosts the area, after which the operation can be done; this steam can also be used to desensitize pain while the operation is in progress. The mathematical calculations, algorithms, and programming of the working and movement of this hand will be installed in the system prior to the procedure. Since the AS-EH is a programmable hand, it comes with limitations; hence the AS-EH robot will perform surgical processes of low complexity only.
Keywords: active robots, algorithm, emission, icy steam, TIC, laser
296 Comparative Economic Evaluation of Additional Respiratory Resources Utilized after Methylxanthine Initiation for the Treatment of Apnea of Prematurity in a South Asian Country
Authors: Shivakumar M, Leslie Edward S Lewis, Shashikala Devadiga, Sonia Khurana
Abstract:
Introduction: Methylxanthines are used for the treatment of AOP, to facilitate extubation, and as prophylactic agents to prevent apnea. Though the popularity of caffeine has risen, it is expensive in resource-constrained developing countries like India. Objective: To evaluate the cost-effectiveness of caffeine compared with aminophylline treatment for AOP with respect to the additional ventilatory resources utilized, across birth weight categories. Design, Settings and Participants: A single-centered, retrospective economic evaluation was done. Participants included preterm newborns of < 34 completed weeks of gestational age recruited under an Indian Council of Medical Research funded randomized clinical trial. Per-protocol data were included from the Neonatal Intensive Care Unit, Kasturba Hospital, Manipal, India, between April 2012 and December 2014. Exposure: Preterm neonates were randomly allocated to either caffeine or aminophylline as per the trial protocol. Outcomes and Measures: We assessed surfactant requirement, duration of invasive and non-invasive ventilation, total methylxanthine cost, and the additional cost for respiratory support borne by the payers per day during the hospital stay. For the purpose of this study, newborns were stratified as Category A (< 1000 g), Category B (1001 to 1500 g), and Category C (1501 to 2500 g). Results: A total of 146 babies (caffeine, 72; aminophylline, 74) with a mean ± SD gestational age of 29.63 ± 1.89 weeks were assessed; 32.19% were in Category A, 55.48% in Category B, and 12.33% in Category C. The difference in the median duration of additional NIV and IMV support was statistically insignificant. However, 60% of the neonates who received caffeine required additional surfactant therapy (p=0.02). The total median (IQR) cost of caffeine was significantly higher, at Rs. 10,535 (6,317.50-15,992.50), whereas the aminophylline cost was Rs. 352 (236-709) (p < 0.001). The additional cost of respiratory support per day for neonates on either methylxanthine was statistically insignificant in every weight category of our study. In Category B, however, the median O₂ charges per day were higher in caffeine-treated newborns, with borderline significance (p=0.05). In Category A, each day of NIV or IMV support significantly increased the unit log cost of caffeine by 13.6% (95% CI 4 to 24; p=0.005) over the log cost of aminophylline. Conclusion: Caffeine is more expensive than aminophylline and was found to be equally efficacious with respect to the duration of NIV or IMV support. However, adjusted for the days of NIV and IMV support, neonates in Category A and Category B who were on caffeine incurred excess respiratory charges per day compared to aminophylline. From the perspective of resource-poor settings, aminophylline is cost-saving and economically approachable.
Keywords: methylxanthines include caffeine and aminophylline, AOP (apnea of prematurity), IMV (invasive mechanical ventilation), NIV (non-invasive ventilation), category a – <1000g, category b – 1001 to 1500g and category c – 1501 to 2500g
295 Maternal Risk Factors Associated with Low Birth Weight Neonates in Pokhara, Nepal: A Hospital Based Case Control Study
Authors: Dipendra Kumar Yadav, Nabaraj Paudel, Anjana Yadav
Abstract:
Background: Low birth weight (LBW) is defined as a weight at birth of less than 2500 grams, irrespective of the period of gestation. LBW is an important indicator of the general health status of a population and is considered the single most important predictor of infant mortality, especially of deaths within the first month of life; that is, birth weight determines a newborn's chances of survival. The objective of this study was to identify the maternal risk factors associated with low birth weight neonates. Materials and Methods: A hospital-based case-control study was conducted in the maternity ward of Manipal Teaching Hospital, Pokhara, Nepal, from 23 September 2014 to 12 November 2014. During the study period, 59 cases were obtained, and twice that number of controls (118) were selected, frequency-matched on the mother's age within ± 3 years. An interview schedule was used for data collection, along with record review. Data were entered in the Epi-data program, and analysis was done with the SPSS software. Results: In bivariate logistic regression analysis, eighteen variables were found to be significantly associated with LBW: place of residence, family monthly income, education, previous stillbirth, previous LBW, history of STD, history of vaginal bleeding, anemia, ANC visits, fewer than four ANC visits, de-worming status, counseling during pregnancy, CVD, physical workload, stress, extra meals during pregnancy, smoking, and alcohol consumption status. However, after adjusting for confounding variables, only six variables remained significantly associated with LBW. Mothers with a family monthly income up to ten thousand rupees were 4.83 times more likely to deliver an LBW baby (CI 1.5-40.645, p=0.014) compared to mothers with a family income of NRs. 20,001-60,000. Mothers with a previous stillbirth were 2.01 times more likely to deliver an LBW baby (CI 0.69-5.87, p=0.02) compared to mothers without a previous stillbirth. Mothers with a previous LBW baby were 5.472 times more likely to deliver an LBW baby (CI 1.2-24.93, p=0.028) compared to mothers without. Mothers with anemia during pregnancy were 3.36 times more likely to deliver an LBW baby (CI 0.77-14.57, p=0.014) compared to mothers without anemia. Mothers who delivered a female newborn were 2.96 times more likely to have an LBW baby (95% CI 1.27-7.28, p=0.01) compared to mothers who delivered a male newborn. Mothers who did not get extra meals during pregnancy were 6.04 times more likely to deliver an LBW baby (CI 1.11-32.7, p=0.037) compared to mothers who got extra meals during pregnancy. Mothers who consumed alcohol during pregnancy were 4.83 times more likely to deliver an LBW baby (CI 1.57-14.83, p=0.006) compared to mothers who did not. Conclusions: Low birth weight can be reduced through the economic empowerment of families and individual women. Prevention and control of anemia during pregnancy is another strategy: mothers should take the full dose of iron supplements, with screening of haemoglobin levels. Extra nutritional food should be provided to women during pregnancy. Health promotion programs should focus on the avoidance of alcohol and on strengthening health services, leading to increased use of maternity services.
Keywords: low birth weight, case-control, risk factors, hospital based study
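For illustration, the sketch below computes the kind of unadjusted odds ratio and 95% confidence interval that underlie the bivariate results above, from a 2×2 exposure table. The counts are hypothetical, not the study's data; the adjusted ORs reported in the abstract would come from multivariable logistic regression instead.

```python
# Illustrative sketch (hypothetical counts) of an unadjusted odds ratio with a
# Woolf 95% confidence interval, as computed in case-control analyses.
from math import exp, log, sqrt

# 2x2 table for one exposure, e.g. anemia during pregnancy (assumed numbers):
a, b = 20, 39    # cases: exposed / unexposed
c, d = 15, 103   # controls: exposed / unexposed

odds_ratio = (a * d) / (b * c)
se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)          # SE of ln(OR)
lo = exp(log(odds_ratio) - 1.96 * se_log_or)
hi = exp(log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```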
294 Carbon Aerogels with Tailored Porosity as Cathode in Li-Ion Capacitors
Authors: María Canal-Rodríguez, María Arnaiz, Natalia Rey-Raap, Ana Arenillas, Jon Ajuria
Abstract:
The constant demand for electrical energy, together with growing environmental concern, makes it necessary to invest in clean and eco-friendly energy sources, which in turn requires the development of improved energy storage devices. Li-ion batteries (LIBs) and electrical double-layer capacitors (EDLCs) are the most widespread energy storage systems. Batteries can store high energy densities, in contrast to capacitors, whose main strengths are high power density and long cycle life. The combination of both technologies gave rise to Li-ion capacitors (LICs), which offer all these advantages in a single device. This is achieved by combining a capacitive, supercapacitor-like positive electrode with a faradaic, battery-like negative electrode. Owing to the abundance and affordability of carbon, dual-carbon LICs are nowadays the common technology. Normally, an activated carbon (AC) is used as the EDLC-like electrode, while graphite is the material commonly employed as the anode. LICs are potential systems for applications in which both high energy and high power densities are required, such as kinetic energy recovery systems. Although these devices are already on the market, some drawbacks, like the limited power delivered by graphite or the energy-limiting nature of AC, must be solved to trigger their use. Focusing on the anode, one possibility could be to replace graphite with hard carbon (HC). The better rate capability of the latter increases the power performance of the device. Moreover, the disordered carbonaceous structure of HCs enables them to store twice the theoretical capacity of graphite. With respect to the cathode, ACs are characterized by a high volume of micropores, in which the charge is stored. Nevertheless, they normally do not show mesopores, which are really important mainly at high C-rates, as they act as transport channels for the ions to reach the micropores. Usually, the porosity of ACs cannot be tailored, as it strongly depends on the precursor employed to obtain the final carbon. Moreover, ACs are not characterized by high electrical conductivity, which is an important property for good performance in energy storage applications. A possible candidate to replace ACs is carbon aerogels (CAs). CAs are materials that combine high porosity with high electrical conductivity, normally opposing characteristics in carbon materials. Furthermore, their porous properties can be tailored quite accurately according to the requirements of the application. In the present study, CAs with controlled porosity were obtained from the polymerization of resorcinol and formaldehyde by microwave heating. By varying the synthesis conditions, mainly the amount of precursors and the pH of the precursor solution, carbons with different textural properties were obtained. The way the porous characteristics affect the performance of the cathode was studied by means of a half-cell configuration. The material with the best performance was evaluated as the cathode in a LIC versus a hard carbon anode. An analogous full LIC made with a highly microporous commercial cathode was also assembled for comparison purposes.
Keywords: Li-ion capacitors, energy storage, tailored porosity, carbon aerogels
293 Patterns of Libido, Sexual Activity and Sexual Performance in Female Migraineurs
Authors: John Farr Rothrock
Abstract:
Although migraine traditionally has been assumed to convey a relative decrease in libido, sexual activity, and sexual performance, recent data suggest that the female migraine population is far from homogeneous in this regard. We sought to determine the levels of libido, sexual activity, and sexual performance in the female migraine patient population, both generally and according to clinical phenotype. In this single-blind study, a consecutive series of sexually active new female patients aged 25-55, initially presenting to a university-based headache clinic and having a >1 year history of migraine, were asked to anonymously complete a survey assessing their sexual histories, generally and as they related to their headache disorder, and the 19-item Female Sexual Function Index (FSFI). To serve as two separate control groups, 100 sexually active females with no history of migraine and 100 female migraineurs from the general (non-clinic) population, matched for age, marital status, educational background, and socioeconomic status, completed a similar survey. Over a period of 3 months, 188 consecutive migraine patients were invited to participate. Twenty declined, and 28 of the remaining 160 potential subjects failed to meet the inclusion criterion utilized for "sexually active" (i.e., heterosexual intercourse at a frequency of > once per month in each of the preceding 6 months). In all groups, younger age (p<.005), higher educational level attained (p<.05), and higher socioeconomic status (p<.025) correlated with a higher monthly frequency of intercourse and a higher likelihood of intercourse resulting in orgasm. Relative to the 100 control subjects with no history of migraine, the two migraine groups (total n=232) reported a lower monthly frequency of intercourse and recorded a lower FSFI score (both p<.025), but the contribution to this difference came primarily from the chronic migraine (CM) subgroup (n=92). Patients with low-frequency episodic migraine (LFEM) and mid-frequency episodic migraine (MFEM) reported a higher FSFI score, higher monthly frequency of intercourse, higher likelihood of intercourse resulting in orgasm, and higher likelihood of multiple active sex partners than controls. All migraine subgroups reported a decreased likelihood of engaging in intercourse during an active migraine attack, but relative to the CM subgroup (8/92=9%), a higher proportion of patients in the LFEM (12/49=24%), MFEM (14/67=21%), and high-frequency episodic migraine (HFEM: 6/14=43%) subgroups reported utilizing intercourse, and orgasm specifically, as a means of potentially terminating a migraine attack. Between the clinic and non-clinic migraine groups there were no significant differences in the dependent variables assessed. Research subjects with LFEM and MFEM may report a level of libido, frequency of intercourse, and likelihood of orgasm-associated intercourse that exceeds what is reported by age-matched controls free of migraine. Many patients with LFEM, MFEM, and HFEM appear to utilize intercourse/orgasm as a means to potentially terminate an acute migraine attack.
Keywords: migraine, female, libido, sexual activity, phenotype
292 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations
Authors: Teng Li, Kamran Mohseni
Abstract:
This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. The technique is known as the observable method, after the notion of observability: any feature smaller than the actual resolution (physical or numerical), e.g., the wire size in hot-wire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied at the level of the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flow often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence, or sharp interfaces. Over the past several years, the properties of this new regularization technique have been investigated, showing the capability of simultaneously regularizing shocks and turbulence. The observable method has been applied to direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a key feature with shocks and turbulence: the nonlinear irregularity caused by the nonlinear terms in the governing equations, namely the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms generate smaller scales which sharpen the interface, causing discontinuities. Many numerical methods for two-phase flows fail in the high Reynolds number case, while others depend on numerical diffusion from the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, which is usually on the order of a grid length. A single rising bubble and the Rayleigh-Taylor instability are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method, which introduces no numerical diffusion, is used for spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.
Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow
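To make the idea of PDE-level convective filtering concrete, the sketch below applies an observable-style regularization to the 1D inviscid Burgers equation, with a pseudo-spectral derivative and TVD Runge-Kutta stepping mirroring the numerical choices described above. The Helmholtz-type low-pass filter, the observable scale, and all parameter values are illustrative assumptions, not the authors' exact two-phase formulation.

```python
import numpy as np

# Observable-style regularization sketch on 1D Burgers: u_t + ubar * u_x = 0,
# where ubar is a low-pass filtered ("observable") velocity. The Helmholtz
# filter (1 - a^2 d^2/dx^2) ubar = u and all parameters are illustrative.

N = 256                                        # grid points
L = 2 * np.pi                                  # domain length
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi     # angular wavenumbers
a = 2.0 * L / N                                # observable scale ~ 2 grid lengths
dt = 1e-3

u = np.sin(x)                                  # profile steepening toward a "shock"

def helmholtz_filter(u, k, a):
    """Low-pass filter: solve (1 - a^2 d2/dx2) ubar = u in Fourier space."""
    return np.real(np.fft.ifft(np.fft.fft(u) / (1.0 + (a * k) ** 2)))

def rhs(u):
    ubar = helmholtz_filter(u, k, a)                   # observable velocity
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))  # spectral derivative
    return -ubar * ux                                  # filtered convection

for _ in range(2000):                          # 3rd-order TVD Runge-Kutta
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))
```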
291 Is Liking for Sampled Energy-Dense Foods Mediated by Taste Phenotypes?
Authors: Gary J. Pickering, Sarah Lucas, Catherine E. Klodnicki, Nicole J. Gaudette
Abstract:
Two taste phenotypes that are of interest in the study of habitual diet-related risk factors and disease are 6-n-propylthiouracil (PROP) responsiveness and thermal tasting. Individuals differ considerably in how intensely they experience the bitterness of PROP, which is partially explained by three major single nucleotide polymorphisms associated with the TAS2R38 gene. Importantly, this variable responsiveness is a useful proxy for general taste responsiveness and is linked to diet-related disease risk, including body mass index, in some studies. Thermal tasting, a newly discovered taste phenotype independent of PROP responsiveness, refers to the capacity of many individuals to perceive phantom tastes in response to lingual thermal stimulation, and is linked with TRPM5 channels. Thermal tasters (TTs) also experience oral sensations more intensely than thermal non-tasters (TnTs), and this was shown to associate with differences in self-reported food preferences in a previous survey from our lab. Here we report on two related studies, in which we sought to determine whether PROP responsiveness and thermal tasting would associate with perceptual differences in the oral sensations elicited by sampled energy-dense foods, and whether in turn this would influence liking. We hypothesized that hyper-tasters (thermal tasters and individuals who experience PROP intensely) would (a) rate sweet and high-fat foods more intensely than hypo-tasters and (b) differ from hypo-tasters in liking scores. (Liking has recently been proposed as a more accurate measure of actual food consumption.) In Study 1, a range of energy-dense foods and beverages, including table cream and chocolate, was assessed by 25 TTs and 19 TnTs. Ratings of oral sensation intensity and overall liking were obtained using gVAS and gDOL scales, respectively. TTs and TnTs did not differ significantly in intensity ratings for most stimuli (ANOVA). In Study 2, 44 female participants sampled 22 foods and beverages, assessing them for intensity of oral sensations (gVAS) and overall liking (9-point hedonic scale). TTs (n=23) rated their overall liking of creaminess and milk products lower than did TnTs (n=21), and liked milk chocolate less. PROP responsiveness was negatively correlated with liking of food and beverages belonging to the sweet or sensory food grouping. No other differences in intensity or liking scores between hyper- and hypo-tasters were found. Taken overall, our results are somewhat unexpected, lending only modest support to the hypothesis that these taste phenotypes associate with energy-dense food liking and consumption through differences in the oral sensations they elicit. Reasons for this lack of concordance with expectations and some prior literature are discussed, and suggestions for future research are advanced.
Keywords: taste phenotypes, sensory evaluation, PROP, thermal tasting, diet-related health risk
290 Effect of Varied Climate, Landuse and Human Activities on the Termite (Isoptera: Insecta) Diversity in Three Different Habitats of Shivamogga District, Karnataka, India
Authors: C. M. Kalleshwaraswamy, G. S. Sathisha, A. S. Vidyashree, H. B. Pavithra
Abstract:
Isoptera are an interesting group of social insects with different castes and a division of labour. They are primarily wood-feeders but also feed on a variety of other organic substrates, such as living trees, leaf litter, soil, lichens, and animal faeces. The number of species and their biomass are especially large in the tropics. In natural ecosystems, they perform a beneficial role in nutrient cycles by accelerating decomposition. The magnitude and dimension of the ecological role played by termites is a function of their diversity, population density, and biomass. Termite assemblage composition responds strongly to habitat disturbance and may be indicative of quantitative changes in the decomposition process. Many previous studies in the Western Ghats region of India suggest that increased anthropogenic activities adversely affect soil macrofauna and their diversity. Shivamogga district provides a good opportunity to study the effects of topography, cropping pattern, and human disturbance on the termite fauna, thereby acquiring accurate baseline information for conservation decision-making. The district has three distinct agro-ecological areas: the maidan area, semi-malnad, and the Western Ghats region. Thus, the district provides a unique opportunity to study the effect of varied climate and anthropogenic disturbance on termite diversity. The standard belt-transect protocol developed by Eggleton et al. (1997) was used for sampling termites. Sampling was done at monthly intervals from September 2014 to August 2015 in the Western Ghats, semi-malnad, and maidan habitats. The transect was 100 m long and 2 m wide and was divided into 20 contiguous sections, each 5 x 2 m, in each habitat. Within each section, all probable termite microhabitats were searched, including dead logs, fallen trees, branches, sticks, leaf litter, and vegetation. All castes collected were labelled, preserved in 80% alcohol, counted, and identified to species level. The number of encounters of a species in the transect was used as an indicator of the relative abundance of that species. Species diversity, species richness, and density were compared across the three habitats (Western Ghats, semi-malnad, and maidan). The study indicated differences in species composition among the three habitats. A total of 15 species, belonging to four subfamilies and five genera, were recorded across the three habitats. Eleven species, viz., Odontotermes obesus, O. feae, O. anamallensis, O. bellahunisensis, O. adampurensis, O. boveni, Microcerotermes fletcheri, M. pakistanicus, Nasutitermes anamalaiensis, N. indicola, and N. krishna, were recorded in the Western Ghats region. Similarly, 11 species, viz., Odontotermes obesus, O. feae, O. anamallensis, O. bellahunisensis, O. hornii, O. bhagwathi, Microtermes obesi, Microcerotermes fletcheri, M. pakistanicus, Nasutitermes indicola, and Pericapritermes sp., were recorded in the semi-malnad habitat. However, only four species, viz., O. obesus, O. feae, Microtermes obesi, and Pericapritermes sp., were recorded in the maidan area. The Shannon-Wiener diversity index (H) was highest in the Western Ghats (1.56), followed by semi-malnad (1.36), and lowest in the maidan (0.89) habitat. The highest value of Simpson's index (D) was observed in the Western Ghats habitat (0.70), indicating the most diverse assemblage, followed by semi-malnad (0.58) and maidan (0.53). Similarly, evenness was highest (0.65) in the Western Ghats, followed by maidan (0.64), and lowest in the semi-malnad habitat (0.54). Menhinick's index (Dmn) ranged from 0.03 to 0.06 across the habitats in the study area: highest in the Western Ghats (0.06), followed by semi-malnad (0.05), and lowest in the maidan (0.03). The study conclusively demonstrated that the Western Ghats habitat had the highest species diversity, while the lower diversity of the semi-malnad and maidan habitats indicates that they are continuously subjected to anthropogenic disturbance. Efforts are needed to conserve the uncommon species, which may otherwise become extinct due to human activities.
Keywords: anthropogenic disturbance, Isoptera, termite species diversity, Western Ghats
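The indices reported above follow standard textbook definitions. The sketch below computes them in Python for a hypothetical abundance vector; note that Simpson's diversity is taken here as 1 − Σp² (one common convention), and the counts are invented placeholders, not the study's data.

```python
import numpy as np

# Illustrative computation of the diversity indices used above; the species
# counts are made-up placeholders, not the study's field data.
counts = np.array([120, 85, 40, 22, 9, 5])   # individuals per species (hypothetical)
N = counts.sum()
S = (counts > 0).sum()                        # species richness
p = counts / N                                # relative abundances

shannon_H = -np.sum(p * np.log(p))            # Shannon-Wiener index
simpson_D = 1.0 - np.sum(p ** 2)              # Simpson's diversity, 1 - sum(p_i^2)
evenness = shannon_H / np.log(S)              # Pielou's evenness
menhinick = S / np.sqrt(N)                    # Menhinick's richness index

print(f"H={shannon_H:.2f}, D={simpson_D:.2f}, "
      f"J={evenness:.2f}, Dmn={menhinick:.2f}")
```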
289 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach
Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista
Abstract:
Military commanders are increasingly dependent on spatial awareness: knowing where enemy forces are, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Thus, geovisualisation has become an essential asset in the defense sector. It is indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management on the battlefield, situational awareness, effective planning, monitoring, and more. For example, a 3D visualization of battlefield data contributes to intelligence analysis, evaluation of post-mission outcomes, and the creation of predictive models that enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth estimation based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information that yields valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images, introducing scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEMs) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, together with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks: one network is trained on radar and optical bands, while the other is trained on DEM features, to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on non-annotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method supports fast and accurate decision-making with GIS for the localization of troops, the position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune strategies and distribute resources proficiently.
Keywords: depth, deep learning, geovisualisation, satellite images
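As a rough illustration of the two-branch fusion idea described above, the PyTorch sketch below combines feature maps from an imagery encoder and a DEM encoder before a shared decoder produces a dense depth map. The layer sizes, channel counts, band count, and additive fusion rule are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of a two-branch imagery/DEM fusion network for dense depth,
# assuming simple convolutional encoders and additive feature fusion.

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=2, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class FusionDepthNet(nn.Module):
    def __init__(self, img_bands=5, dem_bands=1):
        super().__init__()
        # branch 1: optical + radar bands; branch 2: DEM-derived features
        self.img_enc = nn.Sequential(conv_block(img_bands, 32), conv_block(32, 64))
        self.dem_enc = nn.Sequential(conv_block(dem_bands, 32), conv_block(32, 64))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # dense depth map
        )

    def forward(self, img, dem):
        fused = self.img_enc(img) + self.dem_enc(dem)  # fuse feature maps
        return self.decoder(fused)

net = FusionDepthNet()
img = torch.randn(2, 5, 128, 128)   # batch of optical+radar tiles (invented shapes)
dem = torch.randn(2, 1, 128, 128)   # co-registered DEM tiles
depth = net(img, dem)               # -> (2, 1, 128, 128)
```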
288 The Processing of Context-Dependent and Context-Independent Scalar Implicatures
Authors: Liu Jia’nan
Abstract:
Default accounts hold that there exists a kind of scalar implicature which can be processed without context and enjoys a psychological privilege over other scalar implicatures which depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the requirement of relevance in discourse. However, the experimental results in Katsos showed that although, quantitatively, adults rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded violations of utterances with lexical scales as much more severe than those with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. Thus, there are two questionable points about this result: (1) Is it possible that the strange discrepancy is due to factors other than the generation of the scalar implicature? (2) Are ad hoc scales truly formed under the possible influence of mental context? Do participants generate scalar implicatures with ad hoc scales, instead of just comparing semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by a replication of Katsos' Experiment 1. Test materials will be shown in PowerPoint in the form of pictures, and each procedure will be done under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test material will be transformed into literal words in DMDX, and the target sentence will be shown word-by-word to participants in the soundproof room in our lab. Reading times of the target parts, i.e., the words containing scalar implicatures, will be recorded. We presume that, in the group with lexical scales, a standardized pragmatic mental context will help generate the scalar implicature once the scalar word occurs, leading participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be spent on the extra semantic processing. However, in the group with ad hoc scales, the scalar implicature may hardly be generated without support from a fixed mental context of the scale. Thus, whether the new input is informative or not will not matter, and the reading times of the target parts will be the same in informative and under-informative utterances. The mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? Based on our experiments, we may be able to conclude that no single dominant processing paradigm is plausible. Furthermore, in the processing of scalar implicatures, the semantic interpretation and the pragmatic interpretation may be made in a dynamic interplay in the mind. As for lexical scales, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead the possible default or standardized paradigm to override the role of context. However, the objects in an ad hoc scale are not usually treated as members of a scale in mental context, and thus the lexical-semantic associations of the objects may prevent the pragmatic reading from generating the scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.
Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing
287 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon whereby certain images are more likely to be remembered by humans than others; it is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence: it reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories (animals, sports, food, landscapes, and vehicles), along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, in which memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the reduction in that error after fine-tuning, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores, suggesting that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
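A minimal sketch of the two quantities correlated with memorability above: per-image reconstruction error and latent-space distinctiveness as the distance to the nearest neighbour. The `autoencoder` object with `encode`/`decode` methods, the MSE loss choice, and the flat latent shape are assumptions for illustration; the study's VGG-based model and loss functions may differ.

```python
import torch
import torch.nn.functional as F

# Sketch of the two measures correlated with memorability: per-image
# reconstruction error and latent-space distinctiveness (distance to the
# nearest neighbour). `autoencoder` is a stand-in with .encode/.decode;
# conv feature maps would need flattening to (N, d) first.

def memorability_features(autoencoder, images):
    with torch.no_grad():
        z = autoencoder.encode(images)              # (N, d) latent codes
        recon = autoencoder.decode(z)               # reconstructed images
    # reconstruction error: mean squared error per image (one loss choice)
    rec_err = F.mse_loss(recon, images, reduction="none")
    rec_err = rec_err.flatten(1).mean(dim=1)        # (N,)
    # distinctiveness: Euclidean distance to the nearest other latent code
    d = torch.cdist(z, z)                           # (N, N) pairwise distances
    d.fill_diagonal_(float("inf"))                  # exclude self-distance
    distinctiveness = d.min(dim=1).values           # (N,)
    return rec_err, distinctiveness

# These per-image vectors can then be correlated with memorability scores,
# e.g. r = np.corrcoef(rec_err.numpy(), mem_scores)[0, 1]
```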
286 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas
Authors: Sahithi Yarlagadda
Abstract:
The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried that cannot be fitted into predefined computational methods. Antenna design and optimization are well suited to an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but antenna parameters and geometries are too varied to fit into a single function. Therefore, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and the variation is learnt by mining the data obtained, yielding an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to obtaining the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as the test candidate. Antenna parameters like gain, directivity, etc. are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain the maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities that arise during simulations. HFSS is chosen for simulations and results. MATLAB is used to generate the computations and combinations and to perform data logging; it is also used to apply machine learning algorithms and to plot the data used to design the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself. The MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters like the slot line characteristic impedance, the stripline impedance, the slot line width, the flare aperture size, and the dielectric properties, and K-means and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data is logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach for automated antenna optimization for the Vivaldi antenna.
Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm
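The generate-evaluate-recombine loop described above can be sketched in a few lines. In the sketch below, the geometry genes, bounds, rates, and the fitness function (a stand-in for an EM-simulator call such as one made through the HFSS API) are all invented placeholders, not the authors' actual setup.

```python
import random

# Bare-bones genetic algorithm loop of the kind described above.
BOUNDS = {"flare_rate": (0.02, 0.08),   # hypothetical Vivaldi geometry genes
          "slot_width_mm": (0.2, 1.0),
          "aperture_mm": (20.0, 60.0)}

def random_candidate():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def fitness(c):
    # stand-in for a simulator call returning, e.g., gain minus an S11 penalty
    return -((c["flare_rate"] - 0.05) ** 2 + (c["aperture_mm"] - 40.0) ** 2 * 1e-4)

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in BOUNDS}

def mutate(c, rate=0.2):
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:
            c[k] = min(hi, max(lo, c[k] + random.gauss(0, (hi - lo) * 0.1)))
    return c

pop = [random_candidate() for _ in range(30)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)        # rank by fitness
    parents = pop[:10]                         # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
best = max(pop, key=fitness)
```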
285 Effects of Radiation on Mixed Convection in Power Law Fluids along Vertical Wedge Embedded in a Saturated Porous Medium under Prescribed Surface Heat Flux Condition
Authors: Qaisar Ali, Waqar A. Khan, Shafiq R. Qureshi
Abstract:
Heat transfer in power-law fluids across cylindrical surfaces has copious engineering applications. These applications comprise areas such as underwater pollution, biomedical engineering, filtration systems, chemical, petroleum, polymer, and food processing, recovery of geothermal energy, crude oil extraction, pharmaceuticals, and thermal energy storage. The quantum of research work, under diversified conditions, on the effects of combined heat transfer and fluid flow across porous media has increased considerably over the last few decades. Most non-Newtonian fluids of practical interest are highly viscous and are therefore often processed in the laminar flow regime. Several studies have investigated the effects of free and mixed convection in Newtonian fluids along vertical and horizontal cylinders embedded in a saturated porous medium, whereas very few analyses have been performed on power-law fluids along a wedge. In this study, a boundary layer analysis of radiation-mixed convection in power-law fluids along a vertical wedge in a porous medium has been carried out using an implicit finite difference method (the Keller box method). Steady, two-dimensional laminar flow has been considered under a prescribed surface heat flux condition. The Darcy, Boussinesq, and Rosseland approximations are assumed to be valid. Neglecting viscous dissipation effects and the radiative heat flux in the flow direction, the boundary layer equations governing mixed convection flow over a vertical wedge are transformed into dimensionless form. A single mathematical model represents the cases of a vertical wedge, cone, and plate through the introduction of a geometry parameter. Both similar and non-similar solutions have been obtained, and results for the non-similar case have been presented and plotted. The effects of the radiation parameter, the variable heat flux parameter, the wedge angle parameter 'm', and the mixed convection parameter have been studied for both Newtonian and non-Newtonian fluids. The results are also compared with the available data for heat transfer in the prescribed range of parameters and found in good agreement. Detailed results for the dimensionless local Nusselt number and the temperature and velocity fields have also been presented for both Newtonian and non-Newtonian fluids. Analysis of the data revealed that as the radiation parameter or the wedge angle is increased, the Nusselt number decreases, whereas it increases with an increase in the value of the heat flux parameter at a given value of the mixed convection parameter. Also, it is observed that as viscosity increases, the skin friction coefficient increases, which tends to reduce the velocity. Moreover, pseudoplastic fluids are more heat conductive than Newtonian and dilatant fluids, respectively. All fluids behave identically in the pure forced convection domain.
Keywords: porous medium, power law fluids, surface heat flux, vertical wedge
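For orientation, a commonly used Darcy-model formulation for this class of problems (boundary-layer mixed convection of a power-law fluid with Rosseland radiation) is sketched below. The exact equations, notation, and wedge-geometry factors used in the paper may differ, so this should be read as a representative template rather than the authors' model. Here n is the power-law index, K* and μ* are the modified permeability and consistency index, and γ denotes the inclination of the wedge surface.

```latex
% Representative boundary-layer model (assumed, not the paper's exact form)
\begin{align}
  &\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0
  && \text{(continuity)}\\
  &u^{\,n} = U_{\infty}^{\,n}
    + \frac{\rho_{\infty} g \beta K^{*}}{\mu^{*}}\,(T - T_{\infty})\cos\gamma
  && \text{(modified Darcy law, power-law index } n\text{)}\\
  &u\,\frac{\partial T}{\partial x} + v\,\frac{\partial T}{\partial y}
    = \alpha\,\frac{\partial^{2} T}{\partial y^{2}}
    - \frac{1}{\rho c_{p}}\,\frac{\partial q_{r}}{\partial y},
  \qquad
  q_{r} = -\,\frac{4\sigma}{3k^{*}}\,\frac{\partial T^{4}}{\partial y}
  && \text{(energy with Rosseland flux)}
\end{align}
```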
284 Nanocarriers Made of Amino Acid Based Biodegradable Polymers: Poly(Ester Amide) and Related Cationic and PEGylating Polymers
Authors: Sophio Kobauri, Temur Kantaria, Nina Kulikova, David Tugushi, Ramaz Katsarava
Abstract:
Polymeric nanoparticle-based drug delivery systems and therapeutics have great potential in the treatment of numerous diseases because of their flexible properties, which make it possible to tune their structures, compositions, and properties with considerable precision. Important characteristics of polymeric nanoparticles (PNPs) used as drug carriers are high particle stability, high carrier capacity, the feasibility of encapsulating both hydrophilic and hydrophobic drugs, and the feasibility of variable routes of administration, including oral application and inhalation; NPs are especially effective for intracellular drug delivery since they penetrate into the cells' interior through endocytosis. A variety of PNP-based drug delivery systems have been developed, including charged and neutral, degradable and non-degradable polymers of both natural and synthetic origin. Among this huge variety, biodegradable PNPs, which can be cleared from the body after fulfilling their function, can be considered among the most promising. For intracellular uptake, positively charged PNPs are highly desirable, since they can penetrate deep into cell membranes. For long-lasting circulation of PNPs in the body, it is important that they have so-called "stealth coatings" to protect them from attack by the body's immune system. One effective way to render PNPs "invisible" to the immune system is PEGylation, i.e., coating the surface of PNPs with polyethylene glycol (PEG). The present work deals with constructing PNPs from amino acid-based biodegradable polymers: a regular poly(ester amide) (PEA) composed of sebacic acid, leucine, and 1,6-hexanediol (labeled 8L6); a cationic PEA composed of sebacic acid, arginine, and 1,6-hexanediol (labeled 8R6); and a comb-like co-PEA composed of sebacic acid, malic acid, leucine, and 1,6-hexanediol (labeled PEG-PEA). The PNPs were fabricated using the polymer deposition/solvent displacement (nanoprecipitation) method. The regular PEA 8L6 forms stable, negatively charged PNPs (zeta potential of −2 to −12 mV) of the desired size (within 150-200 nm) in the presence of various surfactants (Tween 20, Tween 80, Brij O10, etc.). Blending the PEAs 8L6 and 8R6 gave 130-140 nm positively charged PNPs with zeta potentials within +20 to +28 mV, depending on the 8L6/8R6 ratio. The PEGylating PEA, PEG-PEA, was synthesized by the interaction of the epoxy-co-PEA [8L6]0.5-[tES-L6]0.5 with mPEG-amine-2000. Stable and positively charged PNPs were also fabricated using pure PEG-PEA as the surfactant. Firm anchoring of the PEG-PEA to the 8L6/8R6-based PNPs (owing to the high affinity of the backbones of all three PEAs) provided good stabilization of the NPs. An in vitro biocompatibility study of the new PNPs with four different stable cell lines, A549 (human), U-937 (human), RAW 264.7 (murine), and Hepa 1-6 (murine), showed that they are biocompatible. Considering the high stability and cell compatibility of the elaborated PNPs, one can conclude that they are promising for subsequent therapeutic applications. This work was supported by the joint grant from the Science and Technology Center in Ukraine and the Shota Rustaveli National Science Foundation of Georgia #6298, "New biodegradable cationic polymers composed of arginine and spermine-versatile biomaterials for various biomedical applications".
Keywords: biodegradable poly(ester amide)s, cationic poly(ester amide), PEGylating poly(ester amide), nanoparticles
283 Developing a Product Circularity Index with an Emphasis on Longevity, Repairability, and Material Efficiency
Authors: Lina Psarra, Manogj Sundaresan, Purjeet Sutar
Abstract:
In response to the global imperative for sustainable solutions, this article proposes the development of a comprehensive circularity index applicable to a wide range of products across various industries. The absence of a consensus on a universal metric for assessing circularity performance presents a significant challenge to prioritizing and effectively managing sustainable initiatives. The proposed circularity index serves as a quantitative measure of the adherence of products, processes, and systems to the principles of a circular economy. Unlike traditional single-aspect metrics, such as recycling rates or material efficiency, this index considers the entire lifecycle of a product in one single metric, also incorporating additional factors such as reusability, scarcity of materials, repairability, and recyclability. Through a systematic approach, and by reviewing existing metrics and past methodologies, this work aims to address this gap by formulating a circularity index that can be applied to a diverse product portfolio and that assists in comparing the circularity of products on a scale of 0-100%. Project objectives include developing a formula, designing and implementing a pilot tool based on the developed Product Circularity Index (PCI), evaluating the effectiveness of the formula and tool using real product data, and assessing the feasibility of integration into various sustainability initiatives. The research methodology involves an iterative process of comprehensive research, analysis, and refinement; key steps include defining circularity parameters, collecting relevant product data, applying the developed formula, and testing the tool in a pilot phase to gather insights and make necessary adjustments. The major findings of the study indicate that the PCI provides a robust framework for evaluating product circularity across various dimensions. The Excel-based pilot tool demonstrated high accuracy and reliability in measuring circularity, and the database proved instrumental in supporting comprehensive assessments. The PCI facilitated the identification of key areas for improvement, enabling more informed decision-making towards circularity and benchmarking across different products, essentially assisting towards better resource management. In conclusion, the development of the Product Circularity Index represents a significant advancement in global sustainability efforts. By providing a standardized metric, the PCI empowers companies and stakeholders to systematically assess product circularity, track progress, identify improvement areas, and make informed decisions about resource management. This project contributes to the broader discourse on sustainable development by offering a practical approach to enhancing circularity within industrial systems, thus paving the way towards a more resilient and sustainable future.
Keywords: circular economy, circular metrics, circularity assessment, circularity tool, sustainable product design, product circularity index
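The abstract does not disclose the PCI formula itself, so the sketch below shows only one plausible shape such an index could take: a weighted aggregation of normalized lifecycle sub-scores onto a 0-100% scale. Every dimension name and weight here is a hypothetical placeholder, not the paper's actual metric.

```python
# Hypothetical illustration of a lifecycle-weighted circularity score on a
# 0-100% scale; the sub-metrics, weights, and aggregation rule are invented
# for illustration only, not the paper's actual PCI formula.

WEIGHTS = {                       # assumed relative importance of each dimension
    "longevity": 0.25,
    "repairability": 0.20,
    "material_efficiency": 0.20,
    "reusability": 0.15,
    "recyclability": 0.15,
    "material_scarcity": 0.05,    # 1.0 = no scarce inputs used
}

def product_circularity_index(scores: dict[str, float]) -> float:
    """Each score is normalized to [0, 1]; returns a percentage."""
    assert set(scores) == set(WEIGHTS), "all dimensions must be scored"
    pci = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return round(100 * pci, 1)

# Example: a long-lived, repairable product with mediocre recyclability
print(product_circularity_index({
    "longevity": 0.9, "repairability": 0.8, "material_efficiency": 0.7,
    "reusability": 0.5, "recyclability": 0.4, "material_scarcity": 0.6,
}))   # -> 69.0
```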
282 The Impact of Efflux Pump Inhibitor on the Activity of Benzosiloxaboroles and Benzoxadiboroles against Gram-Negative Rods
Authors: Agnieszka E. Laudy, Karolina Stępien, Sergiusz Lulinski, Krzysztof Durka, Stefan Tyski
Abstract:
1,3-Dihydro-1-hydroxy-2,1-benzoxaborole and its derivatives are a particularly interesting group of synthetic agents and have been successfully employed in supramolecular chemistry and medicine. Two early important compounds, 5-fluoro-1,3-dihydro-1-hydroxy-2,1-benzoxaborole and 5-chloro-1,3-dihydro-1-hydroxy-2,1-benzoxaborole, were identified as potent antifungal agents. In contrast, (S)-3-(aminomethyl)-7-(3-hydroxypropoxy)-1-hydroxy-1,3-dihydro-2,1-benzoxaborole hydrochloride is in the second phase of clinical trials as a drug for the treatment of Gram-negative bacterial infections caused by the Enterobacteriaceae family and Pseudomonas aeruginosa. An equally important and difficult task is the search for compounds active against Gram-negative bacilli, which possess multidrug-resistance efflux pumps that actively remove many antibiotics from bacterial cells. We examined whether halogen-substituted benzoxaborole-based derivatives and their analogues possess antibacterial activity and whether they are substrates for multidrug-resistance efflux pumps. The antibacterial activity of 1,3-dihydro-3-hydroxy-1,1-dimethyl-1,2,3-benzosiloxaborole and 10 of its halogen-substituted derivatives, as well as 1,2-phenylenediboronic acid and 3 synthesized fluoro-substituted analogues, was evaluated. Activity against reference strains of Gram-positive (n=5) and Gram-negative (n=10) bacteria was screened by the disc-diffusion test (0.4 mg of each tested compound applied onto a paper disc). Minimal inhibitory concentration (MIC) and minimal bactericidal concentration values were estimated according to the Clinical and Laboratory Standards Institute and the European Committee on Antimicrobial Susceptibility Testing recommendations. During MIC determination, with or without the efflux pump inhibitor phenylalanine-arginine beta-naphthylamide (50 mg/L), the concentrations of the tested compounds ranged from 0.39 to 400 mg/L in broth medium supplemented with 1 mM magnesium sulfate. Generally, the studied benzosiloxaboroles and benzoxadiboroles showed higher activity against Gram-positive cocci than against Gram-negative rods. Moreover, the benzosiloxaboroles showed higher activity than the benzoxadiborole compounds. In this study, we demonstrated that substitution (mono-, di-, or tetra-) of 1,3-dihydro-3-hydroxy-1,1-dimethyl-1,2,3-benzosiloxaborole with halogen groups resulted in an increase in antimicrobial activity compared to the parent substance. Interestingly, the 6,7-dichloro-substituted derivative was found to be the most potent against Gram-positive cocci: Staphylococcus sp. (MIC 6.25 mg/L) and Enterococcus sp. (MIC 25 mg/L). On the other hand, the mono- and dichloro-substituted compounds were the most actively removed by the efflux pumps present in Gram-negative bacteria, mainly those of the Enterobacteriaceae family. In the presence of the efflux pump inhibitor, the MIC values of the chloro-substituted benzosiloxaboroles decreased from 400 mg/L to 3.12 mg/L. Of note, the highest increase in bacterial susceptibility to the tested compounds in the presence of phenylalanine-arginine beta-naphthylamide was observed for the 6-chloro-, 6,7-dichloro-, and 6,7-difluoro-substituted benzosiloxaboroles: in the case of Escherichia coli, Enterobacter cloacae, and P. aeruginosa strains, at least a 32-fold decrease in the MIC values of these agents was observed. These data demonstrate structure-activity relationships among the tested derivatives and highlight the need for a further search for benzoxaboroles and related compounds with significant antimicrobial properties. Moreover, the influence of phenylalanine-arginine beta-naphthylamide on the susceptibility of Gram-negative rods to the studied benzosiloxaboroles indicates that some of the tested agents are substrates for efflux pumps in Gram-negative rods.
Keywords: antibacterial activity, benzosiloxaboroles, efflux pumps, phenylalanine-arginine beta-naphthylamide