Search results for: layer of consumers.
208 Lithium and Sodium Ion Capacitors with High Energy and Power Densities based on Carbons from Recycled Olive Pits
Authors: Jon Ajuria, Edurne Redondo, Roman Mysyk, Eider Goikolea
Abstract:
Hybrid capacitor configurations are now of increasing interest as a way to overcome the energy limitations of supercapacitors based entirely on non-Faradaic charge storage. Among them, Li-ion capacitors, which combine a negative battery-type lithium intercalation electrode with a positive capacitor-type electrode, have made tremendous progress and have reached commercialization. Inexpensive electrode materials from renewable sources have recently received increased attention, since cost remains a major criterion in making supercapacitors a more viable energy solution, and electrode materials are a major contributor to supercapacitor cost. Additionally, Na-ion battery chemistries are currently under development as a less expensive and more accessible alternative to Li-ion battery electrodes. In this work, we present both a lithium ion capacitor (LIC) and a sodium ion capacitor (NIC) based entirely on electrodes prepared from carbon materials derived from recycled olive pits. Around 1 million tons of olive pit waste is generated worldwide every year, of which a third originates in the Spanish olive oil industry. On the one hand, olive pits were pyrolyzed at different temperatures to obtain a low specific surface area semigraphitic hard carbon used as the Li/Na ion intercalation (battery-type) negative electrode. The best hard carbon delivers a total capacity of 270 mAh/g vs. Na/Na+ in 1M NaPF6 and 350 mAh/g vs. Li/Li+ in 1M LiPF6. On the other hand, the same hard carbon is chemically activated with KOH to obtain a high specific surface area (about 2000 m2/g) activated carbon that is used as the ion-adsorption (capacitor-type) positive electrode. In a voltage window of 1.5-4.2 V, the activated carbon delivers a specific capacity of 80 mAh/g vs. Na/Na+ and 95 mAh/g vs. Li/Li+ at 0.1 A/g. Both electrodes were assembled in the same hybrid cell to build a LIC/NIC.
For comparison purposes, a symmetric EDLC supercapacitor cell using the same activated carbon in 1.5M Et4NBF4 electrolyte was also built. Both the LIC and the NIC demonstrate considerable improvements in energy density over their EDLC counterpart. The NIC delivers a maximum energy density of 110 Wh/kg at a power density of 30 W/kg of active material and a maximum power density of 6200 W/kg at an energy density of 27 Wh/kg; the LIC delivers a maximum energy density of 110 Wh/kg at a power density of 30 W/kg and a maximum power density of 18000 W/kg at an energy density of 22 Wh/kg. In conclusion, our work demonstrates that the same biomass waste can be adapted to offer a hybrid capacitor/battery storage device overcoming the limited energy density of the corresponding double layer capacitors.
Keywords: hybrid supercapacitor, Na-ion capacitor, supercapacitor, Li-ion capacitor, EDLC
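The energy densities quoted above follow directly from the measured specific capacities and the cell voltage window. As a hedged back-of-the-envelope check (the linear, capacitor-like discharge profile and the 0.4 active-mass fraction are our illustrative assumptions, not the authors' normalization, even though the result happens to land near the reported 110 Wh/kg):

```python
def specific_energy_wh_per_kg(capacity_mah_g, v_max, v_min):
    """Specific energy assuming a linear, capacitor-like discharge:
    E = Q * V_avg.  Units: mAh/g * V = mWh/g = Wh/kg."""
    v_avg = (v_max + v_min) / 2.0
    return capacity_mah_g * v_avg

def cell_level(e_active_material, active_mass_fraction):
    """Scale an active-material figure down to a hypothetical cell level."""
    return e_active_material * active_mass_fraction

# Positive electrode in the Li cell: 95 mAh/g over the 1.5-4.2 V window.
e_pos = specific_energy_wh_per_kg(95.0, 4.2, 1.5)  # 270.75 Wh/kg (electrode basis)
e_cell = cell_level(e_pos, 0.4)  # hypothetical 40% active-mass fraction
```

With these invented normalization assumptions, `e_cell` comes out near the 110 Wh/kg order of magnitude reported for the full devices.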
Procedia PDF Downloads 201
207 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour
Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale
Abstract:
Mumbai is traditionally the epicenter of India's trade and commerce, and the existing major ports situated in the Thane estuary, Mumbai Port and Jawaharlal Nehru Port (JN), are developing their waterfront facilities. Various developments in this region over the past decades have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water due to the advancement of the shoreline, while the jetty near Ulwe faces ship-scheduling problems due to the shallow depths between JN Port and Ulwe Bunder. Solving these problems requires information about tide levels over a long duration, traditionally obtained by field measurements. However, field measurement is tedious and costly; instead, artificial intelligence was applied to predict water levels by training a network on tide data measured over one lunar tidal cycle. A two-layer feed-forward Artificial Neural Network (ANN) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to predict yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi over one lunar tidal cycle (2013) was used to train, validate, and test the neural networks. The trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give reasonably accurate estimates of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau was predicted using a neural network trained with the measured tide data (2000) of Apollo and Pir-Pau.
The analysis of the measured data and the study reveal that: (i) the measured tidal data at Pir-Pau, Vashi, and Ulwe indicate a maximum tide amplification of about 10-20 cm with a phase lag of 10-20 minutes with reference to the tide at Apollo Bunder (Mumbai); (ii) the LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer; (iii) the tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels, which can be used to plan pumping operations at Pir-Pau and improve the ship schedule at Ulwe.
Keywords: artificial neural network, back-propagation, tide data, training algorithm
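A minimal pure-Python sketch of the kind of feed-forward network and gradient-descent training described above. The architecture (one hidden tanh layer), hyperparameters, and synthetic single-harmonic "tide" are illustrative assumptions, not the paper's configuration; the LM variant would additionally require a Jacobian-based solver:

```python
import math
import random

def train_tide_mlp(times, levels, hidden=8, epochs=3000, lr=0.05, seed=0):
    """One-hidden-layer feed-forward net trained by plain stochastic
    gradient descent on squared error (the GD variant from the abstract)."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    b1 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    w2 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    b2 = 0.0

    def forward(t):
        h = [math.tanh(w1[j] * t + b1[j]) for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2, h

    for _ in range(epochs):
        for t, y in zip(times, levels):
            out, h = forward(t)
            err = out - y  # gradient of 0.5 * err^2 w.r.t. the output
            for j in range(hidden):
                grad_h = err * w2[j] * (1.0 - h[j] ** 2)  # back-prop through tanh
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * grad_h * t
                b1[j] -= lr * grad_h
            b2 -= lr * err
    return lambda t: forward(t)[0]

# Illustrative use: fit one synthetic "tidal cycle" and predict from it.
times = [i / 20 for i in range(20)]
levels = [math.sin(2 * math.pi * t) for t in times]
model = train_tide_mlp(times, levels)
```

In the study, the inputs would instead be the measured lunar-cycle tide series, and the trained network would be queried for the yearly prediction.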
Procedia PDF Downloads 485
206 Microfluidic Plasmonic Bio-Sensing of Exosomes by Using a Gold Nano-Island Platform
Authors: Srinivas Bathini, Duraichelvan Raju, Simona Badilescu, Muthukumaran Packirisamy
Abstract:
A bio-sensing method based on the plasmonic properties of gold nano-islands has been developed for the detection of exosomes in a clinical setting. The position of the gold plasmon band in the UV-Visible spectrum depends on the size and shape of the gold nanoparticles as well as on the surrounding environment. When various chemical entities are adsorbed or bound, the gold plasmon band shifts toward longer wavelengths, and the shift is proportional to the concentration. Exosomes transport cargoes of molecules and genetic material to proximal and distal cells. Presently, the standard method for their isolation and quantification from body fluids is ultracentrifugation, which is not practical to implement in a clinical setting. Thus, a versatile and cutting-edge platform is required to selectively detect and isolate exosomes for further analysis at the clinical level. The new sensing protocol makes use of a specially synthesized polypeptide (Vn96), instead of antibodies, to capture and quantify the exosomes from different media by binding the heat shock proteins of the exosomes. The protocol was established and optimized on a glass substrate, in order to facilitate the next stage, namely the transfer of the protocol to a microfluidic environment. After each step of the protocol, the UV-Vis spectrum was recorded and the position of the gold Localized Surface Plasmon Resonance (LSPR) band was measured. The sensing process was modelled, taking into account the characteristics of the nano-island structure prepared by thermal convection and annealing. The optimal molar ratios of the most important chemical entities involved in the detection of exosomes were calculated as well. Indeed, it was found that the results of the sensing process depend on two major steps: the molar ratio of streptavidin to biotin-PEG-Vn96 and, in the final step, the capture of exosomes by the biotin-PEG-Vn96 complex.
The microfluidic device designed for sensing of exosomes consists of a glass substrate, sealed by a PDMS layer that contains the channel and a collecting chamber. In the device, the solutions of linker, cross-linker, etc., are pumped over the gold nano-islands and an Ocean Optics spectrometer is used to measure the position of the Au plasmon band at each step of the sensing. The experiments have shown that the shift of the Au LSPR band is proportional to the concentration of exosomes and, thereby, exosomes can be accurately quantified. An important advantage of the method is the ability to discriminate between exosomes having different origins.
Keywords: exosomes, gold nano-islands, microfluidics, plasmonic biosensing
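Because the LSPR band shift is reported to be proportional to exosome concentration, quantification reduces to a linear calibration: fit shift vs. concentration on standards, then invert for unknowns. A minimal sketch (the calibration numbers below are invented for illustration, not measured values from the study):

```python
def fit_lspr_calibration(concentrations, shifts_nm):
    """Ordinary least-squares fit of the line shift = slope * c + intercept."""
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(shifts_nm) / n
    sxx = sum((x - mx) ** 2 for x in concentrations)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concentrations, shifts_nm))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def quantify(shift_nm, slope, intercept):
    """Invert the calibration line to recover an unknown concentration."""
    return (shift_nm - intercept) / slope

# Hypothetical standards: band shift (nm) measured at known concentrations.
slope, intercept = fit_lspr_calibration([0.0, 1.0, 2.0, 4.0], [2.0, 5.0, 8.0, 14.0])
```

An unknown sample's concentration then follows from its measured band shift via `quantify`.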
Procedia PDF Downloads 174
205 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties
Authors: E. Salem
Abstract:
Combustion has long been used as a means of energy extraction. In recent years, however, air pollution has increased further through pollutants such as nitrogen oxides and acid gases. To address this problem, carbon and nitrogen oxide emissions must be reduced through lean burning, modified combustors, and fuel dilution. A numerical investigation was carried out on the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air mixtures, pure or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate 1D premixed flames based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen into the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions, and comparing the results to experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube with dimensions of 40 cm in length and 2.5 cm in diameter. The mesh of the model included a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and changing the equivalence ratios from lean to rich in the fuel blends; the effects on flame temperature, shape, velocity, and concentrations of radicals and emissions were observed.
It was determined that the reduced mechanisms provided results within an acceptable range. The variation of the inlet velocity and the geometry of the tube led to an increase in temperature and CO2 emissions; the highest temperatures were obtained at lean equivalence ratios (0.5-0.9). Adding hydrogen to the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the condition of the same laminar flow, while varying the equivalence ratio with hydrogen additions. NO production is reduced because combustion occurs in a leaner state, which helps in solving environmental problems.
Keywords: combustor, equivalence ratio, hydrogenation, premixed flames
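The equivalence ratios swept in the study follow from fuel-blend stoichiometry. A minimal sketch for CH4/H2/C3H8 blends in air, assuming 21 mol% O2 and complete combustion (these stoichiometric coefficients are textbook chemistry, not values taken from the paper):

```python
# Stoichiometric O2 demand per mole of fuel (complete combustion):
#   CH4 + 2 O2 -> CO2 + 2 H2O
#   H2 + 0.5 O2 -> H2O
#   C3H8 + 5 O2 -> 3 CO2 + 4 H2O
O2_PER_MOLE = {"CH4": 2.0, "H2": 0.5, "C3H8": 5.0}

def stoich_air_moles(fuel_fractions):
    """Moles of air per mole of fuel blend at stoichiometry (21 mol% O2)."""
    o2 = sum(O2_PER_MOLE[f] * x for f, x in fuel_fractions.items())
    return o2 / 0.21

def equivalence_ratio(fuel_moles, air_moles, fuel_fractions):
    """phi = (fuel/air)_actual / (fuel/air)_stoichiometric; phi < 1 is lean."""
    return (fuel_moles / air_moles) * stoich_air_moles(fuel_fractions)
```

Doubling the stoichiometric air supply for an 80/20 CH4/H2 blend, for instance, yields phi = 0.5, the lean end of the range examined above.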
Procedia PDF Downloads 115
204 Effect of Pre-bonding Storage Period on Laser-treated Al Surfaces
Authors: Rio Hirakawa, Christian Gundlach, Sven Hartwig
Abstract:
In recent years, the use of aluminium has further expanded and is expected to replace steel in the future as vehicles become lighter and more recyclable in order to reduce greenhouse gas (GHG) emissions and improve fuel economy. In line with this, structures and components are becoming increasingly multi-material, with different materials, including aluminium, being used in combination to improve mechanical utility and performance. A common method of assembling dissimilar materials is mechanical fastening, but it has several drawbacks, such as increased manufacturing processes and the influence of substrate-specific mechanical properties. Adhesive bonding and fusion bonding are methods that overcome the above disadvantages. In these two joining methods, surface pre-treatment of the substrate is always necessary to ensure the strength and durability of the joint. Previous studies have shown that laser surface treatment improves the strength and durability of the joint. Yan et al. showed that laser surface treatment of aluminium alloys changes α-Al2O3 in the oxide layer to γ-Al2O3. As γ-Al2O3 has a large specific surface area, is very porous and chemically active, laser-treated aluminium surfaces are expected to undergo physico-chemical changes over time and adsorb moisture and organic substances from the air or storage atmosphere. The impurities accumulated on the laser-treated surface may be released at the adhesive and bonding interface by the heat input to the bonding system during the joining phase, affecting the strength and durability of the joint. However, only a few studies have discussed the effect of such storage periods on laser-treated surfaces. 
This paper, therefore, investigates the ageing of laser-treated aluminium alloy surfaces through thermal analysis, electrochemical analysis, and microstructural observations. AlMg3 sheets of 0.5 mm and 1.5 mm thickness were cut using a water-jet cutting machine, cleaned and degreased with isopropanol, and surface pre-treated with a pulsed fibre laser at 1060 nm wavelength, 70 W maximum power, and 55 kHz repetition frequency. The aluminium surface was then analysed using SEM, thermogravimetric analysis (TGA), Fourier transform infrared spectroscopy (FTIR), and cyclic voltammetry (CV) after storage in air for various periods ranging from one day to several months. TGA and FTIR analysed the impurities adsorbed on the aluminium surface, while CV revealed changes in the true electrochemically active surface area. SEM also revealed visual changes on the treated surface. In summary, the changes in the laser-treated aluminium surface with storage time were investigated, and the results were used to determine an appropriate storage period.
Keywords: laser surface treatment, pre-treatment, adhesion, bonding, corrosion, durability, dissimilar material interface, automotive, aluminium alloys
Procedia PDF Downloads 80
203 Comparing Deep Architectures for Selecting Optimal Machine Translation
Authors: Despoina Mouratidis, Katia Lida Kermanidis
Abstract:
Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular ones in automatic MT evaluation are score-based, such as the BLEU score; others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information such as part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested with this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known baseline approaches, such as Random Forest (RF) and Support Vector Machine (SVM).
Better accuracy is obtained when LSTM layers are used in the schema, while better-balanced results are obtained when dense layers are used, because the dense model correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems were identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results in Italian as compared to Greek, given that the linguistic features employed are language-independent.
Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification
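The pairwise-classification setup above can be sketched compactly. As a hedged stand-in for the paper's multi-layer network and embedding features, the toy below uses Jaccard token overlap as a string-level, language-independent feature and a single logistic unit as the classifier; all names, features, and data are illustrative assumptions:

```python
import math

def overlap(a, b):
    """Jaccard token overlap: a crude, language-independent similarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))

def pair_features(reference, smt_out, nmt_out):
    """String-level features: each MT output vs. the reference, and vs. each other."""
    return [overlap(reference, smt_out),
            overlap(reference, nmt_out),
            overlap(smt_out, nmt_out)]

def train_logreg(X, y, epochs=500, lr=0.5):
    """Single logistic unit trained by SGD; label 1 = NMT output preferred."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - t  # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """True when the model prefers the NMT output."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0.0
```

A real reproduction would replace the single unit with dense/CNN/LSTM stacks and add embedding-based inputs, as the abstract describes.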
Procedia PDF Downloads 133
202 A Hybrid Artificial Intelligence and Two Dimensional Depth Averaged Numerical Model for Solving Shallow Water and Exner Equations Simultaneously
Authors: S. Mehrab Amiri, Nasser Talebbeydokhti
Abstract:
Modeling sediment transport processes numerically often poses severe challenges. A number of techniques have been suggested for solving flow and sediment equations in decoupled, semi-coupled, or fully coupled forms. Furthermore, in order to capture flow discontinuities, techniques such as artificial viscosity and shock fitting have been proposed, most of which require careful calibration. In this research, a numerical scheme for solving the shallow water and Exner equations in fully coupled form is presented. The First-Order Centred scheme is applied to produce the required numerical fluxes, and the reconstruction process is carried out using the Monotonic Upstream Scheme for Conservation Laws (MUSCL) to achieve a high-order scheme. In order to satisfy the C-property of the scheme in the presence of bed topography, the Surface Gradient Method is employed. Combining the presented scheme with a fourth-order Runge-Kutta algorithm for time integration yields a competent numerical scheme. In addition, to handle non-prismatic channel problems, the Cartesian Cut Cell Method is employed. A trained Multi-Layer Perceptron Artificial Neural Network of Feed Forward Back Propagation (FFBP) type estimates the sediment flow discharge in the model, rather than the usual empirical formulas. The hydrodynamic part of the model is tested to show its capability in simulating flow discontinuities, transcritical flows, wetting/drying conditions, and non-prismatic channel flows. To this end, dam-break flow onto a locally non-prismatic converging-diverging channel with initially dry bed conditions is modeled. The morphodynamic part of the model is verified by simulating a dam break on a dry movable bed and bed level variations in an alluvial junction.
The results show that the model is capable of capturing flow discontinuities, solving wetting/drying problems even in non-prismatic channels, and producing proper results for movable bed situations. It can also be deduced that applying an Artificial Neural Network, instead of common empirical formulas, for estimating sediment flow discharge leads to more accurate results.
Keywords: artificial neural network, morphodynamic model, sediment continuity equation, shallow water equations
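A heavily simplified sketch of the centred-flux idea on which the scheme above builds, for the 1D homogeneous shallow water equations only (no Exner coupling, no MUSCL reconstruction, and plain Lax-Friedrichs standing in for the FORCE-type First-Order Centred flux; all of these simplifications are ours):

```python
G = 9.81  # gravitational acceleration (m/s^2)

def flux(h, hu):
    """Physical flux of the 1D shallow water equations, U = [h, hu]."""
    u = hu / h
    return [hu, hu * u + 0.5 * G * h * h]

def lax_friedrichs_step(h, hu, dx, dt):
    """One first-order centred (Lax-Friedrichs) update with transmissive
    boundaries; requires CFL = dt/dx * max|u +/- sqrt(G h)| < 1."""
    n = len(h)
    hn, hun = h[:], hu[:]
    lam = dt / dx
    for i in range(1, n - 1):
        fl = flux(h[i - 1], hu[i - 1])
        fr = flux(h[i + 1], hu[i + 1])
        hn[i] = 0.5 * (h[i - 1] + h[i + 1]) - 0.5 * lam * (fr[0] - fl[0])
        hun[i] = 0.5 * (hu[i - 1] + hu[i + 1]) - 0.5 * lam * (fr[1] - fl[1])
    hn[0], hun[0] = hn[1], hun[1]      # copy (transmissive) boundaries
    hn[-1], hun[-1] = hn[-2], hun[-2]
    return hn, hun
```

Starting from a dam-break state (a depth jump at rest), repeated calls smear the jump into the expected wave pattern while the far-field cells remain undisturbed.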
Procedia PDF Downloads 188
201 Archaic Ontologies Nowadays: Music of Rituals
Authors: Luminiţa Duţică, Gheorghe Duţică
Abstract:
Many of the interrogations or dilemmas of the contemporary world have found their answer in what has generically been called the appeal to the matrix. This genuine spiritual exercise of re-connecting the present to its origins, to the primary source, revealed the ontological condition of timeless, ahistorical, immutable (epi)phenomena, of those pure essences concentrated in the archetypal-referential layer of human existence. Musical creation was no exception to this trend: the impasse generated by the deterministic excesses of integral serialism or, conversely, by some questionable results of the extreme indeterminism proper to the avant-garde movements stimulated many composers to rediscover a universal grammar, as an emanation of a new 'collective' order (the reverse of utopian individualism). In this context, the music of oral tradition, and therefore the world of the ancient modes, represented a true revelation for the composers of the twentieth century, who suddenly found themselves before unsuspected (re)sources with a major impact on all levels of the edification of the musical work: morphology, syntax, timbrality, semantics, etc. For contemporary Romanian creators, the music of rituals existing in the local archaic culture opened unsuspected perspectives, offering a synthetic, inclusive, and restorative vision in which the primary (archetypal) elements merge with the latest achievements of language of European composers. Thus, anchored in a strong and genuine modal source, the compositions analysed in this paper evoke, in a manner as modern as possible, the atmosphere of ancestral rituals such as: the invocation of rain during drought (Paparudele, Scaloianul), the funeral ceremony (Bocetul), and traditions specific to the winter holidays and the new year (Colinda, Cântecul de stea, Sorcova, traditional folklore dances).
The reactivation of those rituals in the sound context of the twentieth century meant potentiating or resizing the archaic spirit of the primordial symbolic entities, in terms of levels of complexity generated by the techniques of harmonies of chordal layers, of complex aggregates (gravitational or non-gravitational, geometric), and of mixture polyphonies with global effect (group, mass), as well as by the techniques of heterophony, texture, and cluster, leading to the implementation of processes of collective improvisation and instrumental theatre.
Keywords: archetype, improvisation, polyphony, ritual, instrumental theatre
Procedia PDF Downloads 305
200 Phage Therapy as a Potential Solution in the Fight against Antimicrobial Resistance
Authors: Sanjay Shukla
Abstract:
Excessive use of antibiotics is a major problem in the treatment of wounds and other chronic infections, and antibiotic treatment is frequently non-curative; thus, alternative treatment is necessary. Phage therapy is considered one of the most effective approaches to treat multi-drug resistant bacterial pathogens. Infections caused by Staphylococcus aureus are very efficiently controlled with phage cocktails containing different individual phage lysates that together infect a majority of known pathogenic S. aureus strains. The aim of the current study was to investigate the efficiency of a purified phage cocktail for prophylactic as well as therapeutic application in a mouse model and in large animals with chronic septic infection of wounds. A total of 150 sewage samples were collected from various livestock farms and subjected to the isolation of bacteriophages by the double agar layer method. Of the 150 sewage samples, 27 showed plaque formation with lytic activity against S. aureus in the double agar overlay method. In TEM, the recovered bacteriophage isolates showed a hexagonal structure with tail fibers. The bacteriophage (ØVS) had icosahedral symmetry with a head size of 52.20 nm in diameter and a long tail of 109 nm. Head and tail were held together by a connector, and the phage can be classified as a member of the Myoviridae family under the order Caudovirales. The recovered bacteriophage showed antibacterial activity against S. aureus in vitro. A cocktail (ØVS1, ØVS5, ØVS9, and ØVS27) of phage lysates was tested for in vivo antibacterial activity as well as its safety profile. The results of the mouse experiment indicated that the bacteriophage lysate was very safe and did not show any abscess formation, which indicates its safety in a living system. The mice were also prophylactically protected against S. aureus when administered the cocktail of bacteriophage lysate just before the administration of S. aureus, which indicates that the phages are good prophylactic agents. The S. aureus-inoculated mice completely recovered after bacteriophage administration, with 100% recovery, which was very good compared to conventional therapy. In the present study, ten chronic cases of wounds were treated with phage lysate, and these cases were followed up regularly for ten days (at 0, 5, and 10 d). The results indicated that six cases out of ten showed complete recovery of wounds within 10 d. The efficacy of bacteriophage therapy was found to be 60%, which is very good compared to conventional antibiotic therapy in chronic septic wound infections. Thus, the application of lytic phage in a single dose proved to be an innovative and effective therapy for the treatment of septic chronic wounds.
Keywords: phage therapy, phage lysate, antimicrobial resistance, S. aureus
Procedia PDF Downloads 119
199 Reverse Logistics Network Optimization for E-Commerce
Authors: Albert W. K. Tan
Abstract:
This research consolidates a comprehensive array of publications from peer-reviewed journals, case studies, and seminar reports focused on reverse logistics and network design. By synthesizing this secondary knowledge, our objective is to identify and articulate the key decision factors crucial to reverse logistics network design for e-commerce. Through this exploration, we present a refined mathematical model that offers valuable insights for companies seeking to optimize their reverse logistics operations. The primary goal of this research is to develop a comprehensive framework for advising organizations and companies on crafting effective networks for their reverse logistics operations, thereby facilitating the achievement of their organizational goals. This involves a thorough examination of various network configurations, weighing their advantages and disadvantages to ensure alignment with specific business objectives. The key objectives of this research are: (i) identifying the pivotal factors pertinent to network design decisions within the realm of reverse logistics across diverse supply chains; (ii) formulating a structured framework designed to offer informed recommendations for sound network design decisions applicable to relevant industries and scenarios; (iii) proposing a mathematical model to optimize the reverse logistics network. A conceptual framework for designing a reverse logistics network has been developed through a combination of insights from the literature review and information gathered from company websites. This framework encompasses four key stages in the selection of reverse logistics operation modes: (1) collection, (2) sorting and testing, (3) processing, and (4) storage. Key factors to consider in reverse logistics network design are as follows. I) Centralized vs. decentralized processing: centralized processing, a long-standing practice in reverse logistics, has recently gained greater attention from manufacturing companies.
In this system, all products within the reverse logistics pipeline are brought to a central facility for sorting, processing, and subsequent shipment to their next destinations. Centralization offers the advantage of efficiently managing the reverse logistics flow, potentially leading to increased revenues from returned items. Moreover, it aids in determining the most appropriate reverse channel for handling returns. By contrast, a decentralized system is more suitable when products are returned directly from consumers to retailers. In this scenario, individual sales outlets serve as gatekeepers for processing returns. Considerations encompass the product lifecycle, product value and cost, return volume, and the geographic distribution of returns. II) In-house vs. third-party logistics providers: the decision between insourcing and outsourcing in reverse logistics network design is pivotal. In insourcing, a company handles the entire reverse logistics process, including material reuse. In contrast, outsourcing involves third-party providers taking on various aspects of reverse logistics. Companies may choose outsourcing due to resource constraints or lack of expertise, with the extent of outsourcing varying based on factors such as personnel skills and cost considerations. Based on the conceptual framework, the authors have constructed a mathematical model that optimizes reverse logistics network design decisions. The model considers the key factors identified in the framework, such as transportation costs, facility capacities, and lead times. The authors have employed mixed-integer linear programming to find the optimal solutions that minimize costs while meeting organizational objectives.
Keywords: reverse logistics, supply chain management, optimization, e-commerce
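The centralized-vs-decentralized trade-off can be made concrete with a toy facility-location model: open a subset of candidate processing sites and assign each return region to its cheapest open site, minimizing fixed plus transportation cost. A brute-force sketch (all names and costs are invented; the paper's actual mixed-integer model is richer, with capacities and lead times):

```python
from itertools import combinations

def optimize_network(fixed_cost, transport_cost, regions):
    """Enumerate facility subsets and greedily assign regions to the
    cheapest open facility; returns (total_cost, opened, assignment).
    Exact for small instances; a MILP solver replaces this at scale."""
    facilities = list(fixed_cost)
    best = (float("inf"), None, None)
    for r in range(1, len(facilities) + 1):
        for opened in combinations(facilities, r):
            total = sum(fixed_cost[f] for f in opened)
            assignment = {}
            for region in regions:
                f = min(opened, key=lambda fac: transport_cost[region][fac])
                assignment[region] = f
                total += transport_cost[region][f]
            if total < best[0]:
                best = (total, opened, assignment)
    return best
```

On a two-region example where local stores are cheap to reach but carry their own fixed costs, the optimizer picks whichever mix of central and local processing minimizes the sum, which is exactly the centralization question discussed above.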
Procedia PDF Downloads 41
198 Railway Ballast Volumes Automated Estimation Based on LiDAR Data
Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert
Abstract:
The ballast layer plays a key role in railroad maintenance and in the geometry of the track structure. Ballast also holds the track in place as trains roll over it. Track ballast is packed between the sleepers and on the sides of railway tracks. An imbalance in ballast volume on the tracks can lead to safety issues as well as a quick degradation of the overall quality of the railway segment. If there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to the high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated from the excavation depth, excavation width, volume of the track skeleton (sleepers and rails), and sleeper spacing. Since 2012, SNCF has been collecting 3D point cloud data covering its entire railway network by using 3D laser scanning technology (LiDAR). This vast amount of data represents a model of the entire railway infrastructure, allowing various simulations to be conducted for maintenance purposes. This paper presents an automated method for ballast volume estimation based on the processing of LiDAR data. The estimation of abnormal ballast volumes on the tracks is performed by analyzing the cross-section of the track. Further, since the amount of ballast required varies depending on the track configuration, knowledge of the ballast profile is required. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. Based on the 3D laser scans, a Digital Terrain Model (DTM) was generated, and automatic extraction of the ballast profiles from this data is carried out.
The surplus ballast is then estimated by comparing this empirically obtained ballast profile with a geometric model of the theoretical ballast profile thresholds dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on a 108-kilometer segment of railroad LiDAR scans, and the results show that the proposed algorithm detects ballast surplus amounting to values close to the total quantities of spoil ballast excavated.
Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point
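The measured-vs-theoretical profile comparison reduces to integrating the positive part of the height difference over each cross-section, then summing areas along the track. A minimal sketch (the trapezoid-rule discretization and the sample heights are our illustrative assumptions, not SNCF's actual pipeline):

```python
def cross_section_surplus(measured, theoretical, dx):
    """Surplus area (m^2) in one cross-section: the positive part of the
    measured-minus-theoretical height profile, integrated by the
    trapezoid rule over lateral samples spaced dx apart."""
    diff = [max(0.0, m - t) for m, t in zip(measured, theoretical)]
    return sum((diff[i] + diff[i + 1]) / 2.0 * dx for i in range(len(diff) - 1))

def surplus_volume(section_areas, section_spacing):
    """Total surplus volume (m^3) from per-section areas sampled along the track."""
    return sum(a * section_spacing for a in section_areas)

# Hypothetical shoulder bump of 0.2 m over a 2 m wide window:
area = cross_section_surplus([0.5, 0.7, 0.5], [0.5, 0.5, 0.5], dx=1.0)
```

Summing such per-section areas at a fixed spacing along a segment yields the excavation volume estimate compared against the spoil quantities above.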
Procedia PDF Downloads 112
197 Technology, Ethics and Experience: Understanding Interactions as Ethical Practice
Authors: Joan Casas-Roma
Abstract:
Technology has become one of the main channels through which people engage in most of their everyday activities; from working to learning, or even socializing, technology often acts as both an enabler and a mediator of such activities. Moreover, the affordances and interactions created by those technological tools determine the way in which users interact with one another, as well as how they relate to the relevant environment, thus favoring certain kinds of actions and behaviors while discouraging others. In this regard, virtue ethics theories place a strong focus on a person's daily practice (understood as their decisions, actions, and behaviors) as the means to develop and enhance their habits and ethical competences, such as their awareness of and sensitivity towards certain ethically desirable principles. Under this understanding of ethics, this set of technologically enabled affordances and interactions can be seen as the possibility space where the daily practice of their users takes place, in a wide plethora of contexts and situations. This raises the following question: could these affordances and interactions be shaped in a way that would promote behaviors and habits based on ethically desirable principles in their users? In the field of game design, the MDA framework (which stands for Mechanics, Dynamics, Aesthetics) explores how the interactions enabled within the possibility space of a game can create certain experiences and provoke specific reactions in the players. In this sense, these interactions can be shaped in ways that create experiences raising the players' awareness of and sensitivity towards certain topics or principles.
This research brings together the notion of technological affordances, the notions of practice and practical wisdom from virtue ethics, and the MDA framework from game design in order to explore how the possibility space created by technological interactions can be shaped in ways that enable and promote actions and behaviors supporting certain ethically desirable principles. When shaped accordingly, interactions supporting certain ethically desirable principles could allow their users to carry out the kind of practice that, according to virtue ethics theories, provides the grounds to develop and enhance their awareness, sensitivity, and ethical reasoning capabilities. Moreover, because ethical practice can happen collaterally in almost every context, decision, and action, this additional layer could potentially be applied to a wide variety of technological tools, contexts, and functionalities. This work explores the theoretical background, as well as the initial considerations and steps that would be needed to harness the potential ethically desirable benefits that technology can bring once it is understood as the space where most of its users' daily practice takes place.
Keywords: ethics, design methodology, human-computer interaction, philosophy of technology
Procedia PDF Downloads 159
196 Quantification of Lawsone and Adulterants in Commercial Henna Products
Authors: Ruchi B. Semwal, Deepak K. Semwal, Thobile A. N. Nkosi, Alvaro M. Viljoen
Abstract:
The use of Lawsonia inermis L. (Lythraceae), commonly known as henna, has many medicinal benefits, and the plant is used as a remedy for the treatment of diarrhoea, cancer, inflammation, headache, jaundice and skin diseases in folk medicine. Although widely used for hair dyeing and temporary tattooing, henna body art has grown in popularity over the last 15 years, changing from a traditional bridal and festival adornment to an exotic fashion accessory. The naphthoquinone lawsone is one of the main constituents of the plant and is responsible for its dyeing property. Henna leaves typically contain 1.8–1.9% lawsone, which is used as a marker compound for the quality control of henna products. Adulteration of henna with various toxic chemicals such as p-phenylenediamine, p-methylaminophenol, p-aminobenzene and p-toluenodiamine to produce a variety of colours is very common and has resulted in serious health problems, including allergic reactions. This study aims to assess the quality of henna products collected from different parts of the world by determining the lawsone content, as well as the concentrations of any adulterants present. Ultra-high performance liquid chromatography-mass spectrometry (UPLC-MS) was used to determine the lawsone concentrations in 172 henna products. Separation of the chemical constituents was achieved on an Acquity UPLC BEH C18 column using gradient elution (0.1% formic acid and acetonitrile). The results from UPLC-MS revealed that of the 172 henna products, 11 contained 1.0–1.8% lawsone, 110 contained 0.1–0.9% lawsone, whereas 51 samples did not contain detectable levels of lawsone. High performance thin layer chromatography was investigated as a cheaper, more rapid technique for the quality control of henna in relation to the lawsone content. The samples were applied using an automatic TLC Sampler 4 (CAMAG) to pre-coated silica plates, which were subsequently developed with acetic acid, acetone and toluene (0.5:1.0:8.5 v/v).
A Reprostar 3 digital system allowed the images to be captured. The results obtained corresponded to those from the UPLC-MS analysis. Vibrational spectroscopy analysis (MIR or NIR) of the powdered henna, followed by chemometric modelling of the data, indicates that this technique shows promise as an alternative quality control method. Principal component analysis (PCA) was used to investigate the data by observing clustering and identifying outliers. Partial least squares (PLS) multivariate calibration models were constructed for the quantification of lawsone. In conclusion, only a few of the samples analysed contain lawsone in high concentrations, indicating that most are of poor quality. Currently, the presence of adulterants that may have been added to enhance the dyeing properties of the products is being investigated.
Keywords: Lawsonia inermis, paraphenylenediamine, temporary tattooing, lawsone
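Marker-compound quantification of this kind typically rests on a linear calibration of instrument response against concentration, which is then inverted to read sample concentrations. A stdlib-only sketch with illustrative values, not the study's measured data:

```python
def fit_calibration(conc, response):
    """Ordinary least-squares line through (concentration, response) pairs."""
    n = len(conc)
    sx, sy = sum(conc), sum(response)
    sxx = sum(x * x for x in conc)
    sxy = sum(x * y for x, y in zip(conc, response))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def predict_conc(response, slope, intercept):
    """Invert the calibration line to estimate concentration from a response."""
    return (response - intercept) / slope
```

A full PLS model, as used in the study for the spectroscopic data, generalizes this idea to many correlated predictor variables.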
Procedia PDF Downloads 460
195 STR and SNP Markers of Y-Chromosome Unveil Similarity between the Gene Pool of Kurds and Yezidis
Authors: M. Chukhryaeva, R. Skhalyakho, J. Kagazegeva, E. Pocheshkhova, L. Yepiskopossyan, O. Balanovsky, E. Balanovska
Abstract:
The Middle East has been a crossroads of different populations at different times. The Kurds are of particular interest in this region. Historical sources suggest that the origin of the Kurds is associated with the Medes. It was therefore especially interesting to compare the gene pool of the Kurds with that of other supposed descendants of the Medes, the Tats. The Yezidis are an ethno-confessional group of Kurds. Yezidism as a confessional teaching was formed in the XI-XIII centuries in Iraq, and it has kept the Yezidis reproductively isolated from neighboring populations for centuries. This isolation has also helped retain the Yezidi caste system. It is unknown how this history has affected the Yezidi gene pool, because it has never been studied. We examined Y-chromosome variation in Yezidi and Kurdish males to characterize their gene pools. We collected DNA samples from 90 Yezidi males and 24 Kurdish males together with their pedigrees. We performed Y-STR analysis of 17 loci in the samples collected (Yfiler system from Applied Biosystems) and analysis of 42 Y-SNPs by real-time PCR. We compared our data with published data from other Kurdish groups and from European, Caucasian, and West Asian populations. We found that the gene pool of the Yezidis contains haplogroups common in the Middle East (J-M172(xM67,M12), 24%; E-M35(xM78), 9%) and in South Western Asia (R-M124, 8%), as well as a variant with a wide distribution area, R-M198(xM458), 9%. The gene pool of the Kurds has a higher genetic diversity than that of the Yezidis; their dominant haplogroups are R-M198 (20.3%), E-M35 (9%), and J-M172 (9%). Multidimensional scaling also shows that the Kurds and Yezidis are part of the same frontier Asian cluster, which, in addition, includes Armenians, Iranians, Turks, and Greeks. At the same time, the peoples of the Caucasus and Europe form isolated clusters that do not overlap with the Asian clusters.
It is noteworthy that the Kurds from our study gravitate towards the Tats, which suggests that these two populations are most likely descendants of the ancient Median population. Multidimensional scaling also reveals similarity between the gene pools of the Yezidis and Kurds and those of Armenians and Iranians. The analysis of Yezidi pedigrees and their STR variability did not reveal a reliable connection between genetic diversity and the caste system. This indicates that the Yezidi caste system is a social division and not a biological one. Thus, we showed that, despite many years of isolation, the gene pool of the Yezidis has retained a common layer with the gene pool of the Kurds: these populations share a common spectrum of haplogroups, but the Yezidis have lower genetic diversity than the Kurds. This study received primary support from the RSF grant No. 16-36-00122 to MC and grant No. 16-06-00364 to EP.
Keywords: gene pool, haplogroup, Kurds, SNP and STR markers, Yezidis
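The "genetic diversity" compared above is conventionally summarized as Nei's gene (here, haplogroup) diversity, computed from haplogroup frequencies. A minimal sketch with made-up counts, not the study's data:

```python
def gene_diversity(counts):
    """Nei's unbiased gene diversity, h = n/(n-1) * (1 - sum p_i^2),
    from a list of per-haplogroup sample counts."""
    n = sum(counts)
    sum_p2 = sum((c / n) ** 2 for c in counts)
    return n / (n - 1) * (1 - sum_p2)
```

A sample dominated by one haplogroup (e.g. counts [9, 1]) scores lower than an evenly split one ([5, 5]), mirroring the lower diversity reported for the Yezidis relative to the Kurds.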
Procedia PDF Downloads 205
194 Development and Characterization of Novel Topical Formulation Containing Niacinamide
Authors: Sevdenur Onger, Ali Asram Sagiroglu
Abstract:
Hyperpigmentation is a cosmetically unappealing skin problem caused by an overabundance of melanin in the skin. Its pathophysiology involves melanocytes being exposed to paracrine melanogenic stimuli, which can upregulate melanogenesis-related enzymes (such as tyrosinase) and drive melanosome formation. Tyrosinase is biochemically linked to the development of melanosomes, and decreasing its activity to reduce melanosome production has therefore become the main target of hyperpigmentation treatment. Niacinamide (NA) is a natural compound found in a variety of plants that is used as a skin-whitening ingredient in cosmetic formulations. NA decreases melanogenesis in the skin by inhibiting melanosome transfer from melanocytes to the overlying keratinocytes. Furthermore, NA protects the skin from reactive oxygen species and reinforces the skin's barrier function, reducing moisture loss by increasing ceramide and fatty acid synthesis. However, it is very difficult for hydrophilic compounds such as NA to penetrate deep into the skin, and the nicotinic acid present in NA can make it irritating. As a result, we have concentrated on strategies to increase NA skin permeability while avoiding its irritating effects. Since nanotechnology can affect drug penetration behavior by controlling release and increasing the period of permanence on the skin, it can be a useful technique in the development of whitening formulations. Liposomes have become increasingly popular in the cosmetics industry in recent years due to benefits such as their lack of toxicity, high penetration ability into living skin layers, ability to increase skin moisture by forming a thin layer on the skin surface, and suitability for large-scale production. Therefore, liposomes containing NA were developed for this study.
Different formulations were prepared by varying the amounts of phospholipid and cholesterol and examined in terms of particle size, polydispersity index (PDI) and pH. The pH values of the produced formulations were determined to be compatible with the pH of the skin. Particle sizes were smaller than 250 nm, and the particles were of homogeneous size in the formulation (PDI < 0.30). Despite the important advantages of liposomal systems, they have low viscosity and limited stability for topical use. For these reasons, liposomal cream formulations were prepared in this study for easy topical application of the liposomal systems. As a result, liposomal cream formulations containing NA have been successfully prepared and characterized. Following the in-vitro release and ex-vivo diffusion studies to be conducted in the continuation of the study, it is planned to test the best-performing formulation on volunteers after obtaining the approval of the ethics committee.
Keywords: delivery systems, hyperpigmentation, liposome, niacinamide
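The PDI threshold cited above (PDI < 0.30 taken as homogeneous) is a normalized width of the particle size distribution. In dynamic light scattering it is obtained from cumulant analysis of the autocorrelation function; a rough moment-based sketch over an already-measured list of sizes conveys the same idea (illustrative only, not the instrument's algorithm):

```python
def polydispersity_index(sizes_nm):
    """Relative variance (sigma / mean)^2 of a particle size distribution.
    0 means perfectly monodisperse; larger values mean broader distributions."""
    mean = sum(sizes_nm) / len(sizes_nm)
    var = sum((s - mean) ** 2 for s in sizes_nm) / len(sizes_nm)
    return var / mean ** 2
```

A monodisperse sample returns 0, while a 50/50 mix of 100 nm and 300 nm particles returns 0.25, just below the 0.30 homogeneity cutoff used here.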
Procedia PDF Downloads 112
193 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion
Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan
Abstract:
In this paper, a review of different mathematical models that can be used as prediction tools to assess the time to cracking of reinforced concrete (RC) due to corrosion is presented. This review leads to an experimental study to validate a selected prediction model. Most of these mathematical models depend on the mechanical behaviors, chemical behaviors, electrochemical behaviors or geometric aspects of the RC members during the corrosion process. The experimental program is designed to verify the accuracy of a model selected from a rigorous literature study. The program covers both one-dimensional chloride diffusion, using square RC slab elements of 500 mm by 500 mm, and two-dimensional chloride diffusion, using square RC column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c): 0.4, 0.5 and 0.6, and two cover depths: 25 mm and 50 mm. 12 mm bars are used for the column elements and 16 mm bars for the slab elements. All samples are subjected to accelerated chloride corrosion in a bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, the selected model includes mechanical properties, chemical and electrochemical properties, the nature of the corrosion (accelerated or natural), and the amount of porous area that rust products can fill before exerting expansive pressure on the surrounding concrete. The experimental results have shown that the selected model agrees with the experimental output to within ±20% for one-dimensional and ±10% for two-dimensional chloride diffusion. Half-cell potential readings are also used to assess the corrosion probability, and the experimental results show that mass loss is proportional to the negative half-cell potential readings obtained.
Additionally, a statistical analysis is carried out to determine the most influential factor affecting the time to corrosion of the reinforcement in concrete due to chloride diffusion. The factors considered for this analysis are w/c, bar diameter, and cover depth. The analysis is performed using Minitab statistical software, and it shows that cover depth has a more significant effect on the time to cracking of the concrete from chloride-induced corrosion than the other factors considered. Thus, time predictions can be made with the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to pre-assess the durability of RC structures that are vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability with respect to chloride diffusion.
Keywords: accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion
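The factor screening performed in Minitab boils down to comparing mean responses across factor levels. A minimal main-effect sketch for one two-level factor, with illustrative numbers rather than the study's measurements:

```python
def main_effect(responses, levels):
    """Main effect of one two-level factor (levels coded -1 / +1):
    mean response at the high level minus mean at the low level."""
    hi = [r for r, l in zip(responses, levels) if l == +1]
    lo = [r for r, l in zip(responses, levels) if l == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)
```

Computing this for each factor (w/c, bar diameter, cover depth) and comparing magnitudes is the essence of identifying cover depth as the dominant factor.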
Procedia PDF Downloads 219
192 Beyond Geometry: The Importance of Surface Properties in Space Syntax Research
Authors: Christoph Opperer
Abstract:
Space syntax is a theory and method for analyzing the spatial layout of buildings and urban environments to understand how they can influence patterns of human movement, social interaction, and behavior. While direct visibility is a key factor in space syntax research, important visual information such as light, color, and texture is typically not considered, even though psychological studies have shown a strong correlation with the human perceptual experience within physical space, with light and color, for example, playing a crucial role in shaping the perception of spaciousness. Furthermore, these surface properties are often the visual features that are most salient and responsible for drawing attention to certain elements within the environment. This paper explores the potential of integrating these factors into general space syntax methods and visibility-based analysis of space, particularly for architectural spatial layouts. To this end, we use a combination of geometric (isovist) and topological (visibility graph) approaches together with image-based methods, allowing a comprehensive exploration of the relationship between spatial geometry, visual aesthetics, and human experience. Custom-coded ray-tracing techniques are employed to generate spherical panorama images, encoding three-dimensional spatial data in the form of two-dimensional images. These images are then processed through computer vision algorithms to generate saliency maps, which serve as a visual representation of the areas most likely to attract human attention based on their visual properties. The maps are subsequently used to weight the vertices of isovists and the visibility graph, placing greater emphasis on areas with high saliency. Compared to traditional methods, our weighted visibility analysis introduces an additional layer of information density by assigning different weights or importance levels to various aspects within the field of view.
This extends general space syntax measures to provide a more nuanced understanding of visibility patterns that better reflects the dynamics of human attention and perception. Furthermore, by drawing parallels to traditional isovist and VGA analysis, our weighted approach emphasizes a crucial distinction, which has been pointed out by Ervin and Steinitz: the difference between what is possible to see and what is likely to be seen. Therefore, this paper emphasizes the importance of including surface properties in visibility-based analysis to gain deeper insights into how people interact with their surroundings and to establish a stronger connection with human attention and perception.
Keywords: space syntax, visibility analysis, isovist, visibility graph, visual features, human perception, saliency detection, ray tracing, spherical images
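The vertex weighting described above can be sketched on a grid representation: a plain isovist measure counts every visible cell equally, while the saliency-weighted variant scales each cell by its saliency value. The grid encoding and names are ours, a minimal illustration rather than the authors' implementation:

```python
def isovist_measures(visible_cells, saliency, cell_area=1.0):
    """Return (plain area, saliency-weighted area) of an isovist.

    visible_cells -- iterable of grid cells visible from the vantage point
    saliency      -- dict mapping each cell to a weight in [0, 1],
                     e.g. derived from a computer-vision saliency map
    """
    visible = list(visible_cells)
    plain = len(visible) * cell_area          # classic isovist area
    weighted = sum(saliency[c] for c in visible) * cell_area
    return plain, weighted
```

The same weighting applied to visibility-graph vertices distinguishes "possible to see" (plain) from "likely to be seen" (weighted).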
Procedia PDF Downloads 77
191 Modeling, Topology Optimization and Experimental Validation of Glass-Transition-Based 4D-Printed Polymeric Structures
Authors: Sara A. Pakvis, Giulia Scalet, Stefania Marconi, Ferdinando Auricchio, Matthijs Langelaar
Abstract:
In recent developments in the field of multi-material additive manufacturing, differences in material properties are exploited to create printed shape-memory structures, which are referred to as 4D-printed structures. New printing techniques allow for the deliberate introduction of prestresses in the specimen during manufacturing, and, in combination with the right design, this enables new functionalities. This research focuses on bi-polymer 4D-printed structures, where the transformation process is based on a heat-induced glass transition in one material lowering its Young’s modulus, combined with an initial prestress in the other material. Upon the decrease in stiffness, the prestress is released, which results in the realization of an essentially pre-programmed deformation. As the design of such functional multi-material structures is crucial but far from trivial, a systematic methodology to find the design of 4D-printed structures is developed, where a finite element model is combined with a density-based topology optimization method to describe the material layout. This modeling approach is verified by a convergence analysis and validated by comparing its numerical results to analytical and published data. Specific aspects that are addressed include the interplay between the definition of the prestress and the material interpolation function used in the density-based topology description, the inclusion of a temperature-dependent stiffness relationship to simulate the glass transition effect, and the importance of the consideration of geometric nonlinearity in the finite element modeling. The efficacy of topology optimization to design 4D-printed structures is explored by applying the methodology to a variety of design problems, both in 2D and 3D settings. Bi-layer designs composed of thermoplastic polymers are printed by means of the fused deposition modeling (FDM) technology. 
The acrylonitrile butadiene styrene (ABS) polymer undergoes the glass transition, while the thermoplastic polyurethane (TPU) polymer is prestressed by means of the 3D-printing process itself. Tests inducing shape transformation in the printed samples through heating are performed to calibrate the prestress and validate the modeling approach by comparing the numerical results to the experimental findings. Using the experimentally obtained prestress values, more complex designs have been generated through topology optimization, and samples have been printed and tested to evaluate their performance. This study demonstrates that by combining topology optimization and 4D-printing concepts, stimuli-responsive structures with specific properties can be designed and realized.
Keywords: 4D-printing, glass transition, shape memory polymer, topology optimization
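Two of the modeling ingredients highlighted above, a density-based (SIMP-style) material interpolation and a temperature-dependent stiffness for the glass-transition material, can be sketched as follows. The moduli and Tg below are illustrative placeholders, not the paper's calibrated values:

```python
def simp_modulus(density, e_soft, e_stiff, p=3.0):
    """SIMP-style interpolation between two material stiffnesses.
    The penalization exponent p pushes intermediate densities
    toward 0 or 1 in the optimized layout."""
    return e_soft + density ** p * (e_stiff - e_soft)

def abs_modulus(temperature_c, t_glass=105.0,
                e_glassy=2.2e9, e_rubbery=2.2e7):
    """Step model of the glass transition: the Young's modulus of the
    ABS-like phase drops sharply once the temperature exceeds Tg,
    releasing the prestress stored in the other material."""
    return e_glassy if temperature_c < t_glass else e_rubbery
```

In a finite element model, the element stiffness would combine both effects, with the temperature-dependent modulus feeding the interpolation at each design iteration.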
Procedia PDF Downloads 211
190 The Use of Antioxidant and Antimicrobial Properties of Plant Extracts for Increased Safety and Sustainability of Dairy Products
Authors: Loreta Serniene, Dalia Sekmokiene, Justina Tomkeviciute, Lina Lauciene, Vaida Andruleviciute, Ingrida Sinkeviciene, Kristina Kondrotiene, Neringa Kasetiene, Mindaugas Malakauskas
Abstract:
One of the most important areas of product development and research in the dairy industry is product enrichment with active ingredients, leading also to increased product safety and sustainability. Among the most rapidly expanding active ingredients are various plant CO₂ extracts with aromatic, antioxidant and antimicrobial properties. In this study, 15 plant extracts were evaluated based on their antioxidant and antimicrobial properties as well as sensory acceptance for the development of new dairy products. In order to increase the total antioxidant capacity of milk products, it was important to determine the phenolic compound content and antioxidant activity of each CO₂ extract. The total phenolic content of the fifteen commercial CO₂ extracts was determined with the Folin-Ciocalteu reagent and expressed as milligrams of gallic acid equivalents (GAE) per gram of extract. The antioxidant activities were determined by the 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) method. The study revealed that the antioxidant activities of the investigated CO₂ extracts vary from 4.478 to 62.035 µmol Trolox/g, while the total phenolic content was in the range of 2.021-38.906 mg GAE/g of extract. For example, the estimated antioxidant activity of Chinese cinnamon (Cinnamomum aromaticum) CO₂ extract was 62.023 ± 0.15 µmol Trolox/g and its total phenolic content reached 17.962 ± 0.35 mg GAE/g. These two parameters suggest that cinnamon could be a promising supplement for the development of new cheese. The inhibitory effects of these essential oils were tested by the agar disc diffusion method against pathogenic bacteria most commonly found in dairy products. The results showed that the essential oils of lemon myrtle (Backhousia citriodora) and cinnamon (Cinnamomum cassia) have antimicrobial activity against E. coli, S. aureus, B. cereus, P. fluorescens, L. monocytogenes, Br. thermosphacta, P. aeruginosa and S.
typhimurium, with inhibition zone diameters varying from 10 to 52 mm. The sensory acceptability of the plant extracts in combination with a dairy product was evaluated by a panel of sensory evaluation experts (31 individuals) on an overall taste acceptability scale from 0 (not acceptable) to 10 (very acceptable). Each tested sample consisted of 200 g of natural unsweetened Greek yogurt without additives and 1 drop of a single plant extract (essential oil). The highest average overall taste acceptability was found for the samples with essential oils of orange (Citrus sinensis), average score 6.67, lemon myrtle (Backhousia citriodora), 6.62, elderberry flower (Sambucus nigra flos.), 6.61, lemon (Citrus limon), 5.75, and cinnamon (Cinnamomum cassia), 5.41. The results of this study indicate the plant extracts of Cinnamomum cassia and Backhousia citriodora as promising additives, not only to increase the total antioxidant capacity of milk products and as alternative antibacterial agents against pathogenic bacteria commonly found in dairy products, but also as desirable flavours for the palate of consumers with an expressed need for safe, sustainable and innovative dairy products. Acknowledgment: This research was funded by the European Regional Development Fund according to the supported activity 'Research Projects Implemented by World-class Researcher Groups' under Measure No. 01.2.2-LMT-K-718.
Keywords: antioxidant properties, antimicrobial properties, cinnamon, CO₂ plant extracts, dairy products, essential oils, lemon myrtle
Procedia PDF Downloads 206
189 Evaluation of the Incorporation of Modified Starch in Puff Pastry Dough by Mixolab Rheological Analysis
Authors: Alejandra Castillo-Arias, Carlos A. Fuenmayor, Carlos M. Zuluaga-Domínguez
Abstract:
The connection between health and nutrition has driven the food industry to explore healthier and more sustainable alternatives. Key strategies to enhance nutritional quality and extend shelf life include reducing saturated fats and incorporating natural ingredients. One area of focus is the use of modified starch in baked goods, which has attracted significant interest in food science and industry due to its functional benefits. Modified starches are commonly used for their gelling, thickening, and water-retention properties. Derived from sources like waxy corn, potatoes, tapioca, or rice, these polysaccharides improve the thermal stability and resistance of doughs. The use of modified starch enhances the texture and structure of baked goods, which is crucial for consumer acceptance. In this study, the effects of modified starch inclusion on dough used for puff pastry were evaluated through Mixolab analysis. This technique assesses flour quality by examining its behavior under varying conditions, providing a comprehensive profile of its baking properties. The analysis included measurements of water absorption capacity, dough development time, dough stability, softening, final consistency, and starch gelatinization. Each of these parameters offers insights into how the flour will perform during baking and the quality of the final product. The performance of wheat flour with varying levels of modified starch inclusion (10%, 20%, 30%, and 40%) was evaluated through Mixolab analysis, with a control sample consisting of 100% wheat flour. Water absorption, gluten content, and retrogradation indices were analyzed to understand how modified starch affects dough properties. The results showed that the inclusion of modified starch increased the absorption index, especially at levels above 30%, indicating a dough with better handling qualities and potentially improved texture in the final baked product.
However, the reduction in wheat flour resulted in a lower kneading index, affecting dough strength. Conversely, incorporating more than 20% modified starch reduced the retrogradation index, indicating improved stability and resistance to crystallization after cooling. Additionally, the modified starch improved the gluten index, contributing to better dough elasticity and stability, providing good structural support and resistance to deformation during mixing and baking. As expected, the control sample exhibited a higher amylase index, due to the presence of enzymes in wheat flour. However, this is of low concern in puff pastry dough, as amylase activity is more relevant in fermented doughs, which is not the case here. Overall, the use of modified starch in puff pastry enhanced product quality by improving texture, structure, and shelf life, particularly when used at levels between 30% and 40%. This research underscores the potential of modified starches to address health concerns associated with traditional starches and to contribute to the development of higher-quality, consumer-friendly baked products. Furthermore, the findings suggest that modified starches could play a pivotal role in future innovations within the baking industry, particularly in products aiming to balance healthfulness with sensory appeal. By incorporating modified starch into their formulations, bakeries can meet the growing demand for healthier, more sustainable products while maintaining the indulgent qualities that consumers expect from baked goods.
Keywords: baking quality, dough properties, modified starch, puff pastry
Procedia PDF Downloads 26
188 Bacteriophage Is a Novel Solution of Therapy Against S. aureus Having Multiple Drug Resistance
Authors: Sanjay Shukla, A. Nayak, R. K. Sharma, A. P. Singh, S. P. Tiwari
Abstract:
Excessive use of antibiotics is a major problem in the treatment of wounds and other chronic infections, and antibiotic treatment is frequently non-curative, so alternative treatments are necessary. Phage therapy is considered one of the most promising approaches to treat multi-drug resistant bacterial pathogens. Infections caused by Staphylococcus aureus can be efficiently controlled with phage cocktails containing different individual phage lysates that together infect a majority of known pathogenic S. aureus strains. The aim of the present study was to evaluate the efficacy of a purified phage cocktail for prophylactic as well as therapeutic application in a mouse model and in large animals with chronic septic wound infections. A total of 150 sewage samples were collected from various livestock farms and subjected to bacteriophage isolation by the double agar layer method. Of the 150 sewage samples, 27 showed plaque formation through lytic activity against S. aureus in the double agar overlay method. In TEM, the recovered bacteriophage isolates showed a hexagonal structure with tail fibers. The bacteriophage ØVS had icosahedral symmetry with a head 52.20 nm in diameter and a long tail of 109 nm; head and tail were held together by a connector, and the phage can be classified as a member of the family Myoviridae in the order Caudovirales. The recovered bacteriophages showed antibacterial activity against S. aureus in vitro. A cocktail of phage lysates (ØVS1, ØVS5, ØVS9, and ØVS27) was tested for in vivo antibacterial activity as well as its safety profile. The results of the mouse experiment indicated that the bacteriophage lysates were very safe, with no abscess formation, which indicates their safety in a living system. Mice were also prophylactically protected against S. aureus when administered the bacteriophage cocktail just before the administration of S.
aureus, which indicates that they are good prophylactic agents. S. aureus-inoculated mice were completely cured by bacteriophage administration, with 100% recovery, which is very good compared to conventional therapy. In the present study, ten chronic wound cases were treated with phage lysate, and these cases were followed up regularly for ten days (at 0, 5, and 10 d). The results indicated that six of the ten cases showed complete wound recovery within 10 d. The efficacy of the bacteriophage therapy was thus 60%, which is very good compared to conventional antibiotic therapy for chronic septic wound infections. Thus, the application of lytic phage in a single dose proved to be an innovative and effective therapy for the treatment of chronic septic wounds.
Keywords: phage therapy, S. aureus, antimicrobial resistance, lytic phage, bacteriophage
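Titres in double-agar overlay assays of the kind used here are conventionally reported as plaque-forming units per millilitre, back-calculated from a countable plate. A minimal sketch with illustrative counts and dilutions, not the study's data:

```python
def pfu_per_ml(plaque_count, dilution, volume_plated_ml):
    """Phage titre (PFU/mL) from one countable overlay plate.

    plaque_count     -- plaques counted on the plate
    dilution         -- dilution of the plated sample, e.g. 1e-6
    volume_plated_ml -- volume of diluted sample plated (mL)
    """
    return plaque_count / (dilution * volume_plated_ml)
```

For example, 45 plaques from 0.1 mL of a 10⁻⁶ dilution corresponds to a titre of 4.5 × 10⁸ PFU/mL.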
Procedia PDF Downloads 117
187 Enhanced Field Emission from Plasma Treated Graphene and 2D Layered Hybrids
Authors: R. Khare, R. V. Gelamo, M. A. More, D. J. Late, Chandra Sekhar Rout
Abstract:
Graphene has emerged as a promising material for applications ranging from complementary integrated circuits to optically transparent electrodes for displays and sensors. The excellent conductivity and atomically sharp edges of its unique two-dimensional structure make graphene a propitious field emitter. Graphene analogues of other 2D layered materials have emerged in materials science and nanotechnology due to the enriched physics and novel enhanced properties they present. There are several advantages to using 2D nanomaterials in field-emission-based devices, including a thickness of only a few atomic layers, high aspect ratio (the ratio of lateral size to sheet thickness), excellent electrical properties, extraordinary mechanical strength, and ease of synthesis. Furthermore, the presence of edges can enhance the tunneling probability for the electrons in layered nanomaterials, similar to that seen in nanotubes. Here we report the electron emission properties of multilayer graphene and the effect of plasma (CO2, O2, Ar, and N2) treatment. The plasma-treated multilayer graphene shows enhanced field emission behavior with a low turn-on field of 0.18 V/μm and a high emission current density of 1.89 mA/cm2 at an applied field of 0.35 V/μm. Further, we report field emission studies of layered WS2/RGO and SnS2/RGO composites. The turn-on field required to draw a field emission current density of 1 μA/cm2 is found to be 3.5, 2.3, and 2 V/μm for WS2, RGO, and the WS2/RGO composite, respectively. The enhanced field emission behavior observed for the WS2/RGO nanocomposite is attributed to a high field enhancement factor of 2978, which is associated with the surface protrusions of the single-to-few-layer-thick sheets of the nanocomposite. The highest current density of ~800 µA/cm2 is drawn at an applied field of 4.1 V/μm from a few layers of the WS2/RGO nanocomposite.
Furthermore, first-principles density functional calculations suggest that the enhanced field emission may also be due to an overlap of the electronic structures of WS2 and RGO, where graphene-like states are introduced in the region of the WS2 fundamental gap. Similarly, the turn-on field required to draw an emission current density of 1 µA/cm2 is significantly lower (almost half the value) for the SnS2/RGO nanocomposite (2.65 V/µm) compared to pristine SnS2 nanosheets (4.8 V/µm). The field enhancement factor β (~3200 for SnS2 and ~3700 for the SnS2/RGO composite) was calculated from Fowler-Nordheim (FN) plots and indicates emission from the nanometric geometry of the emitter. The field emission current versus time plot shows good overall emission stability for the SnS2/RGO emitter. The DFT calculations reveal that the enhanced field emission properties of SnS2/RGO composites arise from a substantial lowering of the work function of SnS2 when supported by graphene, in response to p-type doping of the graphene substrate. Graphene and 2D analogue materials thus emerge as potential candidates for future field emission applications.
Keywords: graphene, layered material, field emission, plasma, doping
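The field enhancement factor β quoted above is conventionally extracted from the slope of the Fowler-Nordheim plot, ln(J/E²) versus 1/E. The sketch below shows that extraction with hypothetical emission data and an assumed work function; neither the data points nor φ are values from this abstract.

```python
import numpy as np

# Hypothetical field-emission data: applied field E (V/um) and
# current density J (uA/cm^2). Illustrative values only.
E = np.array([2.2, 2.6, 3.0, 3.4, 3.8, 4.1])
J = np.array([1.0, 8.0, 45.0, 180.0, 500.0, 800.0])

# Fowler-Nordheim plot: ln(J/E^2) vs 1/E is linear with
# slope = -B * phi^(3/2) / beta.
phi = 5.0      # assumed emitter work function (eV)
B = 6830.0     # FN constant, V eV^(-3/2) um^(-1)

slope, intercept = np.polyfit(1.0 / E, np.log(J / E**2), 1)
beta = -B * phi**1.5 / slope   # field enhancement factor
print(round(beta))             # order of a few thousand, as in the abstract
```

The same fit also flags non-FN behavior: if the plot is strongly curved, the emission is not purely field-driven tunneling.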
Procedia PDF Downloads 361
186 Opportunities for Reducing Post-Harvest Losses of Cactus Pear (Opuntia Ficus-Indica) to Improve Small-Holder Farmers Income in Eastern Tigray, Northern Ethiopia: Value Chain Approach
Authors: Meron Zenaselase Rata, Euridice Leyequien Abarca
Abstract:
The production of major crops in Northern Ethiopia, especially in the Tigray Region, is at subsistence level due to drought, erratic rainfall, and poor soil fertility. Since cactus pear is a drought-resistant plant, it is considered a lifesaver fruit and a strategy for poverty reduction in drought-affected areas of the region. Despite its contribution to household income and food security in the area, the cactus pear sub-sector experiences many constraints, and limited attention has been given to its post-harvest loss management. Therefore, this research was carried out to identify opportunities for reducing post-harvest losses and to recommend possible strategies to reduce them, thereby improving production and smallholders' income. Both probability and non-probability sampling techniques were employed to collect the data. Ganta Afeshum district was selected from Eastern Tigray, and two peasant associations (Buket and Golea) were purposively selected from the district for their potential in cactus pear production. Simple random sampling was employed to survey 30 households from each of the two peasant associations, using a semi-structured questionnaire as the data collection tool. Moreover, 2 collectors, 2 wholesalers, 1 processor, 3 retailers, and 2 consumers were interviewed; two focus group discussions were held with 14 key farmers using a semi-structured checklist; and key informant interviews were conducted with governmental and non-governmental organizations to gather more information about cactus pear production, post-harvest losses, the strategies used to reduce these losses, and suggestions to improve post-harvest management. SPSS version 20 was used to enter and analyze the quantitative data, whereas MS Word was used to transcribe the qualitative data. The data were presented using frequency and descriptive tables and graphs.
The data analysis also used a chain map, correlations, a stakeholder matrix, and gross margins. Mean comparisons between variables were performed using ANOVA and t-tests. The analysis shows that the present cactus pear value chain involves main actors and supporters; however, there is inadequate information flow and informal market linkage among actors in the chain. The farmers' gross margin is higher when they sell to the processor than when they sell to collectors. The largest post-harvest loss in the cactus pear value chain occurs at the producer level, followed by wholesalers and retailers. The maximum and minimum volumes of post-harvest losses at the producer level are 4,212 and 240 kg per season. The post-harvest loss was caused by farmers' limited skills in on-farm management and harvesting, low market prices, limited market information, absence of producer organizations, poor post-harvest handling, absence of cold storage, absence of collection centers, poor infrastructure, inadequate credit access, a traditional transportation system, absence of quality control, illegal traders, inadequate research and extension services, and inappropriate packaging material. Therefore, the recommendations include providing adequate practical training, forming producer organizations, and constructing collection centers.
Keywords: cactus pear, post-harvest losses, profit margin, value-chain
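The gross-margin comparison between selling channels can be sketched as follows; the prices and costs are purely illustrative assumptions, not figures reported by the study.

```python
# Illustrative gross-margin comparison for a cactus pear producer selling
# through different channels. All prices/costs are hypothetical (birr/kg),
# not data from the study.
def gross_margin(selling_price, total_cost):
    """Gross margin as a percentage of the selling price."""
    return 100.0 * (selling_price - total_cost) / selling_price

cost_per_kg = 2.0          # assumed production + handling cost
price_to_collector = 2.5   # assumed farm-gate price to a collector
price_to_processor = 4.0   # assumed direct-to-processor price

print(gross_margin(price_to_collector, cost_per_kg))  # 20.0
print(gross_margin(price_to_processor, cost_per_kg))  # 50.0
```

With any such numbers, the channel comparison follows the pattern the study reports: the direct-to-processor channel leaves the producer a larger share of the final price.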
Procedia PDF Downloads 132
185 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database: two use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created from the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with artificial neural networks (ANN) was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model to calculate: (i) the number of neurons in the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the Root Mean Squared Error (RMSE) statistic to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with predictions from sixteen well-known previously developed gas geothermometers, was statistically evaluated using an external database to avoid a bias problem.
The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used to predict subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (México). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
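The calibrate-then-score workflow described above (predict bottomhole temperature from log gas concentrations, then compare against measured BHT via RMSE and R) can be sketched with synthetic data. The study trained Levenberg-Marquardt ANNs over roughly 2,080 architectures; the linear least-squares fit below is only a simplified stand-in for that regression step, and all numbers are invented.

```python
import numpy as np

# Synthetic stand-in data: natural-log gas concentrations (mmol/mol)
# and measured bottomhole temperatures (deg C). Illustrative only.
ln_co2 = np.array([6.1, 6.4, 6.7, 6.9, 7.2, 7.5])
ln_h2s = np.array([3.2, 3.6, 4.1, 4.4, 4.9, 5.3])
bht_m  = np.array([240., 255., 272., 281., 296., 310.])  # measured BHT

# Calibrate: least-squares fit of BHT on [1, ln(CO2), ln(H2S)].
X = np.column_stack([np.ones_like(ln_co2), ln_co2, ln_h2s])
coef, *_ = np.linalg.lstsq(X, bht_m, rcond=None)

# Score: RMSE and linear correlation R between measured and predicted BHT.
bht_pred = X @ coef
rmse = np.sqrt(np.mean((bht_m - bht_pred) ** 2))
r = np.corrcoef(bht_m, bht_pred)[0, 1]
print(f"RMSE = {rmse:.2f} degC, R = {r:.3f}")
```

In the study this scoring is what ranks the four new geothermometers against the sixteen published ones: the candidate with the lowest RMSE on the external (held-out) database wins.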
Procedia PDF Downloads 354
184 Exploration of the Psychological Aspect of Empowerment of Marginalized Women Working in the Unorganized Sector
Authors: Sharmistha Chanda, Anindita Choudhuri
Abstract:
This exploratory study highlights the psychological aspects of women's empowerment in order to establish the importance of the psychological dimensions of empowerment, such as meaning, competence, self-determination, impact, and assumption, especially among the weaker, marginalized section of women. A large proportion of the rural, suburban, and urban poor survive by working in the unorganized sectors of metropolitan cities. Relative poverty and lack of employment in rural areas and small towns drive many people to the metropolitan city for work and livelihood. Women working in this field remain unrecognized as people of low socio-economic status. They are usually willing to do domestic work as daily wage workers, single wage earners, street vendors, participants in family businesses such as agricultural activities, domestic workers, and the self-employed. Usually, these women accept such jobs because they have no other opportunity, lacking the basic level of education required for better-paid jobs. The unorganized sector, on the other hand, has no clear-cut employer-employee relationships and lacks most forms of social protection. Having no fixed employer, these workers are casual, contractual, migrant, home-based, own-account workers who attempt to earn a living from whatever meager assets and skills they possess. Women have become more empowered, both financially and individually, through small-scale business ownership or entrepreneurship development and through household-based work. Following a qualitative research approach, in-depth interviews were conducted with 10 participants in order to understand their lifestyles, habits, self-identity, and empowerment in their society, and to evaluate the key challenges they may face. The collected data were transcribed. The three-layer coding technique guided the data analysis process, encompassing open coding, axial coding, and selective coding.
Women's entrepreneurship is a foremost concern, as government and non-government institutions readily serve this domain with the primary objectives of promoting self-employment opportunities in general and empowering women in particular. Thus, despite hardship and lack of recognition, the unorganized sector provides a huge array of opportunities for the rural and sub-urban poor to earn, and the upper section of society tends to depend on this workforce. This study gives an idea of these women's well-being, meaning in life, and life satisfaction on the basis of their lived experience.
Keywords: marginalized women, psychological empowerment, relative poverty, unorganized sector
Procedia PDF Downloads 62
183 Neuroprotection against N-Methyl-D-Aspartate-Induced Optic Nerve and Retinal Degeneration Changes by Philanthotoxin-343 to Alleviate Visual Impairments Involve Reduced Nitrosative Stress
Authors: Izuddin Fahmy Abu, Mohamad Haiqal Nizar Mohamad, Muhammad Fattah Fazel, Renu Agarwal, Igor Iezhitsa, Nor Salmah Bakar, Henrik Franzyk, Ian Mellor
Abstract:
Glaucoma is the leading cause of irreversible blindness globally. Currently, the available treatment strategy only involves lowering intraocular pressure (IOP); however, the condition often progresses despite lowered or normal IOP in some patients. N-methyl-D-aspartate receptor (NMDAR) excitotoxicity often occurs in neurodegeneration-related glaucoma; thus, it is a relevant target for developing a therapy based on a neuroprotection approach. This study investigated the effects of Philanthotoxin-343 (PhTX-343), an NMDAR antagonist, on neuroprotection in NMDA-induced glaucoma to alleviate visual impairments. Male Sprague-Dawley rats were divided equally: groups 1 (control) and 2 (glaucoma) were intravitreally injected with phosphate-buffered saline (PBS) and NMDA (160 nM), respectively, while group 3 was pre-treated with PhTX-343 (160 nM) 24 hours prior to NMDA injection. Seven days post-treatment, rats were subjected to visual behavior assessments and subsequently euthanized to harvest their retina and optic nerve tissues for histological analysis and determination of nitrosative stress levels using a 3-nitrotyrosine ELISA. Visual behavior assessments via open field, object, and color recognition tests demonstrated poor visual performance in glaucoma rats, indicated by high exploratory behavior. PhTX-343 pre-treatment appeared to preserve visual abilities, as all test results were significantly improved (p < 0.05). H&E staining of the retina showed a marked reduction of ganglion cell layer thickness in the glaucoma group; in contrast, PhTX-343 significantly increased the cell number by 1.28-fold (p < 0.05). PhTX-343 also increased the number of cell nuclei per 100 μm² within the inner retina by 1.82-fold compared to the glaucoma group (p < 0.05). Toluidine blue staining of optic nerve tissues showed that PhTX-343 reduced the degeneration changes compared to the glaucoma group, which exhibited vacuolation throughout all sections.
PhTX-343 also decreased the retinal 3-nitrotyrosine concentration by 1.74-fold compared to the glaucoma group (p < 0.05). All results in the PhTX-343 group were comparable to control (p > 0.05). We conclude that PhTX-343 protects against NMDA-induced changes and visual impairments in the rat model by reducing nitrosative stress levels.
Keywords: excitotoxicity, glaucoma, nitrosative stress, NMDA receptor, N-methyl-D-aspartate, philanthotoxin, visual behaviour
Procedia PDF Downloads 137
182 Impact of Boundary Conditions on the Behavior of Thin-Walled Laminated Column with L-Profile under Uniform Shortening
Authors: Jaroslaw Gawryluk, Andrzej Teter
Abstract:
Simply supported angle columns subjected to uniform shortening are tested. The experimental studies are conducted on a testing machine using the additional Aramis system and an acoustic emission system. The laminate samples are subjected to axial uniform shortening. The tested columns are loaded from zero to the maximal load that destroys the L-shaped column, which allows the column's post-buckling behavior to be observed until collapse. Laboratory tests are performed at a constant cross-bar velocity of 1 mm/min. In order to eliminate stress concentrations between the sample and the support, flexible pads are used. The analyzed samples are made of carbon-epoxy laminate using the autoclave method. The configuration of the laminate layers is [60,0₂,-60₂,60₃,-60₂,0₃,-60₂,0,60₂]T, where direction 0 is along the length of the profile. The material parameters of the laminate are: Young's modulus along the fiber direction, 170 GPa; Young's modulus transverse to the fiber direction, 7.6 GPa; in-plane shear modulus, 3.52 GPa; in-plane Poisson's ratio, 0.36. The dimensions of all columns are: length 300 mm, thickness 0.81 mm, flange width 40 mm. Next, two numerical models of the column, with and without flexible pads, are developed using the finite element method in Abaqus software. The L-profile laminate column is modeled using S8R shell elements. The layup-ply technique is used to define the sequence of the laminate layers. The grips are modeled with R3D4 discrete rigid elements, and the flexible pad consists of C3D20R solid elements. In order to estimate the moment of first laminate layer damage, the following initiation criteria were applied: the maximum stress criterion and the Tsai-Hill, Tsai-Wu, Azzi-Tsai-Hill, and Hashin criteria. The best agreement with the experimental results was observed for the Hashin criterion. It was found that the use of the pad in the numerical model significantly influences the damage mechanism.
The model without pads was much stiffer, as evidenced by a higher bifurcation load and damage initiation load for all analyzed criteria, lower shortening, and smaller deflection at the column center than in the model with flexible pads. Acknowledgment: The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).
Keywords: angle column, compression, experiment, FEM
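Of the initiation criteria listed, the Hashin criterion distinguishes fiber and matrix failure modes. A minimal plane-stress sketch is given below; the strength values are assumptions for illustration, not the material data of the tested laminate, and the matrix-compression term approximates the transverse shear strength by S12, a common simplification.

```python
# Minimal 2D (plane-stress) Hashin initiation indices for a unidirectional ply.
# Strength values are illustrative assumptions, not the paper's data (MPa).
XT, XC = 2000.0, 1200.0   # fiber tensile / compressive strength
YT, YC = 50.0, 200.0      # matrix (transverse) tensile / compressive strength
S12 = 80.0                # in-plane shear strength (also used for S23 here)

def hashin_2d(s11, s22, t12):
    """Return (fiber, matrix) failure indices; a mode fails when its index reaches 1."""
    if s11 >= 0:   # fiber tension
        fiber = (s11 / XT) ** 2 + (t12 / S12) ** 2
    else:          # fiber compression
        fiber = (s11 / XC) ** 2
    if s22 >= 0:   # matrix tension
        matrix = (s22 / YT) ** 2 + (t12 / S12) ** 2
    else:          # matrix compression (S23 approximated by S12)
        matrix = ((s22 / (2 * S12)) ** 2
                  + ((YC / (2 * S12)) ** 2 - 1) * s22 / YC
                  + (t12 / S12) ** 2)
    return fiber, matrix

f, m = hashin_2d(s11=900.0, s22=30.0, t12=40.0)
print(f, m)   # fiber index ~0.45, matrix index ~0.61: no initiation yet
```

In an FE workflow such indices are evaluated ply by ply at each load increment; the first increment at which any index reaches 1 gives the damage initiation load compared across criteria in the study.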
Procedia PDF Downloads 207
181 Oxalate Method for Assessing the Electrochemical Surface Area for Ni-Based Nanoelectrodes Used in Formaldehyde Sensing Applications
Authors: S. Trafela, X. Xu, K. Zuzek Rozman
Abstract:
In this study, we used an accurate and precise method to measure the electrochemically active surface areas (Aecsa) of nickel electrodes. The calculated Aecsa is essential for evaluating an electrocatalyst's activity in the electrochemical reactions of different organic compounds. The method involves the electrochemical formation of Ni(OH)₂ and NiOOH in the presence of adsorbed oxalate in alkaline media. The studies were carried out using cyclic voltammetry, with polycrystalline nickel as a reference material, on electrodeposited nickel nanowires and homogeneous and heterogeneous nickel films. From the cyclic voltammograms, the charge (Q) values for the formation of the Ni(OH)₂ and NiOOH surface oxides were calculated under various conditions. At sufficiently fast potential scan rates (200 mV s⁻¹), the adsorbed oxalate limits the growth of the surface hydroxides to a monolayer. Although the Ni(OH)₂/NiOOH oxidation peak overlaps with the oxygen evolution reaction, in the reverse scan the NiOOH/Ni(OH)₂ reduction peak is well separated from other electrochemical processes and can be easily integrated. The values of these integrals were used to correlate the experimentally measured charge density with an electrochemically active surface layer. The Aecsa values of the nickel nanowires and of the homogeneous and heterogeneous nickel films were calculated to be Aecsa-NiNWs = 4.2066 ± 0.0472 cm², Aecsa-homNi = 1.7175 ± 0.0503 cm², and Aecsa-hetNi = 2.1862 ± 0.0154 cm². These results were then applied in electrochemical studies of formaldehyde oxidation, with the nickel nanowires and the heterogeneous and homogeneous nickel films serving as simple and efficient sensors for formaldehyde detection. For this purpose, the electrodeposited nickel electrodes were modified in a 0.1 mol L⁻¹ KOH solution to induce electrochemical activity towards formaldehyde.
The electrochemical behavior of formaldehyde oxidation at the surface of the modified nickel nanowires and the homogeneous and heterogeneous nickel films was investigated in 0.1 mol L⁻¹ NaOH solution by means of electrochemical techniques such as cyclic voltammetry and chronoamperometry. From investigations of the effect of different formaldehyde concentrations (from 0.001 to 0.1 mol L⁻¹) on the current response, we established the catalytic mechanism of formaldehyde oxidation as well as the detection limit and sensitivity of the nickel electrodes. The results indicated that the nickel electrodes participate directly in the electrocatalytic oxidation of formaldehyde. In the overall reaction, formaldehyde in alkaline aqueous solution exists predominantly in the form of CH₂(OH)O⁻, which is oxidized to CH₂(O)O⁻. Taking into account the determined Aecsa values, we were able to calculate the sensitivities: 7 mA L mol⁻¹ cm⁻² for the nickel nanowires, 3.5 mA L mol⁻¹ cm⁻² for the heterogeneous nickel film, and 2 mA L mol⁻¹ cm⁻² for the homogeneous nickel film. The detection limit was 0.2 mM for the nickel nanowires, 0.5 mM for the porous Ni film, and 0.8 mM for the homogeneous Ni film. All of these results make nickel electrodes suitable for further applications.
Keywords: electrochemically active surface areas, nickel electrodes, formaldehyde, electrocatalytic oxidation
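An area-normalized sensitivity of this kind is the slope of the current-concentration calibration line divided by Aecsa. In the sketch below, the Aecsa value is the nanowire figure reported above, while the current readings are hypothetical values invented to land near the reported nanowire sensitivity.

```python
import numpy as np

# Illustrative calibration for a nickel-nanowire formaldehyde sensor.
# Currents are hypothetical; Aecsa is the nanowire value from the oxalate method.
conc = np.array([0.001, 0.01, 0.05, 0.1])     # formaldehyde, mol/L
current = np.array([0.03, 0.30, 1.47, 2.94])  # response, mA (assumed)

aecsa = 4.2066                                # cm^2 (reported for the nanowires)

slope, _ = np.polyfit(conc, current, 1)       # mA per (mol/L)
sensitivity = slope / aecsa                   # mA per (mol/L) per cm^2
print(round(sensitivity, 1))                  # 7.0
```

Normalizing by Aecsa rather than geometric area is what makes the three electrode types comparable: a rougher electrode draws more current simply because more nickel is wetted, not because each site is more active.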
Procedia PDF Downloads 162
180 Identification of Phenolic Compounds and Study of the Antimicrobial Property of Elaeocarpus Ganitrus Fruits
Authors: Velvizhi Dharmalingam, Rajalaksmi Ramalingam, Rekha Prabhu, Ilavarasan Raju
Abstract:
Background: The use of herbal products in various therapeutic regimens has increased tremendously in developing countries. Elaeocarpus ganitrus (Rudraksha) is a broad-leaved tree belonging to the family Elaeocarpaceae, found in tropical and subtropical areas. It is popular in indigenous systems of medicine such as Ayurveda, Siddha, and Unani. According to Ayurvedic medicine, Rudraksha is used in the management of blood pressure, asthma, mental disorders, diabetes, gynaecological disorders, neurological disorders such as epilepsy, and liver diseases. Objectives: The present study aimed to determine the physicochemical parameters of Elaeocarpus ganitrus (fruits), to identify its phenolic compounds (gallic acid, ellagic acid, and chebulinic acid), and to estimate the microbial load and the antibacterial activity of Elaeocarpus ganitrus extracts against selected pathogens. Methodology: The dried powdered fruit of Elaeocarpus ganitrus was evaluated for physicochemical parameters (loss on drying, alcohol-soluble extractive, water-soluble extractive, total ash, and acid-insoluble ash), and the pH was measured. The dried coarse powdered fruit was extracted successively with hexane, chloroform, ethyl acetate, and aqueous alcohol by the cold percolation method. Identification of the phenolic compounds (gallic acid, ellagic acid, chebulinic acid) was done by an HPTLC method and confirmed by co-TLC using different solvent systems. The successive extracts of Elaeocarpus ganitrus and the standards (gallic acid, ellagic acid, and chebulinic acid) were accurately weighed and made up with alcohol. HPTLC (CAMAG) analysis was performed on a silica gel 60F254 precoated aluminium plate, layer thickness 0.2 mm (E. Merck, Germany), using the ATS4, Visualizer, and Scanner at wavelengths of 254 nm and 366 nm, with derivatization using different reagents.
The microbial load (total bacterial count, total fungal count, Enterobacteria, Escherichia coli, Salmonella species, Staphylococcus aureus, and Pseudomonas aeruginosa) was determined by the serial dilution method, and the antibacterial activity against selected pathogens was measured by the Kirby-Bauer method. Results: The physicochemical parameters of Elaeocarpus ganitrus were studied for standardization of the crude drug. Phenolic compounds were identified in all of the successive extracts, and the Elaeocarpus ganitrus extract showed potent antibacterial activity against gram-positive and gram-negative bacteria.
Keywords: antimicrobial activity, Elaeocarpus ganitrus, HPTLC, phenolic compounds
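Counts obtained by the serial dilution method are back-calculated from the colonies on the plated dilution. A minimal sketch with hypothetical plate numbers (not data from the study):

```python
# Back-calculation of viable count from a serial-dilution plate.
# CFU/mL = colonies * dilution factor / volume plated (mL).
def cfu_per_ml(colony_count, dilution_factor, plated_volume_ml):
    """Colony-forming units per mL of the original sample."""
    return colony_count * dilution_factor / plated_volume_ml

# Hypothetical plate: 42 colonies on the 10^-4 dilution, 0.1 mL plated.
print(cfu_per_ml(42, 10**4, 0.1))   # 4200000.0 CFU/mL
```

In practice only plates in a countable range (commonly 30-300 colonies) are used for the calculation, which is why several dilutions are plated in parallel.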
Procedia PDF Downloads 342
179 DIF-JACKET: A Thermal Protective Jacket for Firefighters
Authors: Gilda Santos, Rita Marques, Francisca Marques, João Ribeiro, André Fonseca, João M. Miranda, João B. L. M. Campos, Soraia F. Neves
Abstract:
Every year, an unacceptable number of firefighters are seriously burned during firefighting operations, with some of them eventually losing their lives. Although thermal protective clothing research and development has been searching for solutions to minimize firefighters' heat load and skin burns, currently available commercial solutions focus on solving isolated problems, for example, radiant heat or water-vapor resistance. Therefore, episodes of severe burns and heat strokes are still frequent. Taking this into account, a consortium composed of Portuguese entities has joined synergies to develop an innovative protective clothing system, following a procedure based on the application of numerical models to optimize the design and using a combination of protective clothing components disposed in different layers. Recently, it has been shown that phase change materials (PCMs) can contribute to the reduction of potential heat hazards in fire extinguishing operations, and consequently, their incorporation into firefighting protective clothing has advantages. The greatest challenge is to integrate these materials without compromising garment ergonomics while, at the same time, complying with the international standard for protective clothing for firefighters: laboratory test methods and performance requirements for wildland firefighting clothing. The incorporation of PCMs into the firefighter's protective jacket will result in the absorption of heat from the fire and consequently increase the time that the firefighter can be exposed to it. According to the project studies and developments, to favor a higher use of the PCM storage capacity and to take advantage of its high thermal inertia more efficiently, the PCM layer should be closer to the external heat source. Therefore, at this stage, to integrate PCMs into firefighting clothing, a mock-up of a vest specially designed to protect the torso (back, chest, and abdomen) and to be worn over a fire-resistant jacket was envisaged.
Different configurations of PCMs, as well as multilayer approaches, were studied using suitable joining technologies such as bonding, ultrasound, and radiofrequency. Concerning firefighters' protective clothing, it is important to balance heat protection and flame resistance with comfort parameters, namely thermal and water-vapor resistance. The impact of the most promising solutions on thermal comfort was evaluated to refine the performance of the global solutions. Results obtained with an experimental bench-scale model and numerical simulation regarding the integration of PCMs in a vest designed as protective clothing for firefighters will be presented.
Keywords: firefighters, multilayer system, phase change material, thermal protective clothing
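The heat a PCM layer can buffer before it is exhausted is the sensible heat on either side of the melting point plus the latent heat of fusion. A back-of-the-envelope sketch with paraffin-like property values, all assumed for illustration (none are figures from the project):

```python
# Heat absorbed by a PCM layer heated through its melting point:
# Q = m * (c_solid * dT_pre + latent + c_liquid * dT_post)
# Paraffin-like properties, assumed for illustration only.
m = 0.5          # PCM mass carried in the vest (kg), assumed
c_solid = 2.0    # solid specific heat, kJ/(kg K)
c_liquid = 2.2   # liquid specific heat, kJ/(kg K)
latent = 200.0   # heat of fusion, kJ/kg
dT_pre = 10.0    # heating below the melt point, K
dT_post = 5.0    # heating above the melt point, K

q_kj = m * (c_solid * dT_pre + latent + c_liquid * dT_post)
print(round(q_kj, 1))   # 115.5 kJ buffered before the layer stops protecting
```

The dominance of the latent term (here 200 of 231 kJ/kg) is the "high thermal inertia" the abstract refers to, and why the PCM layer is placed nearest the external heat source.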
Procedia PDF Downloads 166