Search results for: low-rank matrix
91 Nanoporous Activated Carbons for Fuel Cells and Supercapacitors
Authors: A. Volperts, G. Dobele, A. Zhurinsh, I. Kruusenberg, A. Plavniece, J. Locs
Abstract:
Nowadays, energy consumption is constantly increasing, and the development of effective and cheap electrochemical power sources, such as fuel cells and electrochemical capacitors, is a topical problem. Owing to their high specific power, charge and discharge rates, and working lifetime, supercapacitor-based energy accumulation systems are increasingly used in mobile and stationary devices. Lignocellulosic materials are widely used as precursors and account for around 45% of the total raw materials used for the manufacture of activated carbon, which is the most suitable material for supercapacitors. The first part of our research is devoted to the influence of the main parameters of wood thermochemical activation on the formation of the porous structure of activated carbons. It was found that the main factors governing the properties of carbon materials are specific surface area, volume and pore size distribution, particle dispersity, ash content, and the content of oxygen-containing groups. The influence of activated carbon attributes on the capacitance and working properties of supercapacitors is demonstrated. The correlation between the porous structure indices of activated carbons and the electrochemical specifications of supercapacitors with electrodes made from these materials has been determined. It is shown that supercapacitors using the synthesized activated carbons can reach high specific capacitances: more than 380 F/g in 4.9 M sulfuric acid electrolytes and more than 170 F/g in 1 M tetraethylammonium tetrafluoroborate in acetonitrile. The power specifications and minimal price of H₂-O₂ fuel cells are limited by expensive platinum-based catalysts. The main direction in the development of non-platinum catalysts for oxygen reduction is the study of cheap porous carbonaceous materials, which can be obtained by the pyrolysis of polymers, including renewable biomass. 
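Specific capacitances such as the 380 F/g quoted above are conventionally derived from galvanostatic charge-discharge curves via C = I·Δt/(m·ΔV). A minimal sketch of that standard formula; the numbers are illustrative, not the authors' measurements:

```python
# Gravimetric specific capacitance from a galvanostatic discharge curve:
#   C_sp = I * dt / (m * dV)   [F/g]
# Illustrative values only; not data from the study above.

def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Gravimetric capacitance of an electrode, in F/g."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Example: 1 mA discharge over 380 s across a 1 V window for a 1 mg electrode
c_sp = specific_capacitance(1e-3, 380.0, 1e-3, 1.0)
print(c_sp)  # ~380 F/g
```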
It is known that nitrogen atoms in carbon materials determine, to a high degree, the properties of doped activated carbons, such as high electrochemical stability, hardness, and electric resistance. The lack of sufficient knowledge on the doping of carbon materials calls for ongoing research into the properties and structure of the modified carbon matrix. In the second part of this study, highly porous activated carbons were synthesized by alkali thermochemical activation from wood, cellulose, and cellulose production residues: kraft lignin and sewage sludge. Activated carbon samples were doped with dicyandiamide and melamine for application as fuel cell cathodes. The conditions of nitrogen introduction (solvent, treatment temperature) and its content in the carbonaceous material, as well as porous structure characteristics such as specific surface area and pore size distribution, were studied. It was found that the efficiency of the doping reaction depends on the elemental oxygen content of the activated carbon. Relationships between nitrogen content, porous structure characteristics, and the electrochemical properties of the electrodes are demonstrated.
Keywords: activated carbons, low-temperature fuel cells, nitrogen doping, porous structure, supercapacitors
Procedia PDF Downloads 120
90 Carbon Nanotubes Functionalization via Ullmann-Type Reactions Yielding C-C, C-O and C-N Bonds
Authors: Anna Kolanowska, Anna Kuziel, Sławomir Boncel
Abstract:
Carbon nanotubes (CNTs) combine lightness and nanoscopic size with high tensile strength and excellent thermal and electrical conductivity. To date, CNTs have been used as a support in heterogeneous catalysis (CuCl anchored to pre-functionalized CNTs) in Ullmann-type coupling with aryl halides toward the formation of C-N and C-O bonds. The results indicated that the stability of the catalyst was much improved and that the elaborated catalytic system was efficient and recyclable. However, CNTs have not been considered as the substrate itself in Ullmann-type reactions. If successful, this functionalization would open new areas of CNT chemistry, leading to enhanced in-solvent/matrix nanotube individualization. The copper-catalyzed Ullmann-type reaction is an attractive method for the formation of carbon-heteroatom and carbon-carbon bonds in organic synthesis. This condensation reaction is usually conducted at temperatures as high as 200 °C, often in the presence of stoichiometric amounts of a copper reagent and with activated aryl halides. However, a small amount of an organic additive (e.g. diamines, amino acids, diols, 1,10-phenanthroline) can be applied to increase the solubility and stability of the copper catalyst and, at the same time, to allow performing the reaction under mild conditions. The copper (pre-)catalyst is prepared by in situ mixing of a copper salt and the appropriate chelator. Our research is focused on the application of the Ullmann-type reaction to the covalent functionalization of CNTs. Firstly, CNTs were chlorinated using iodine trichloride (ICl3) in carbon tetrachloride (CCl4). This method involves the formation of several chemical species (ICl, Cl2 and I2Cl6), of which the most reactive is the dimer. The fact that the dimer is the main species in CCl4 explains the high reactivity and the possibly high functionalization levels of the CNTs. This method indeed introduced a notable amount of chlorine onto the MWCNT surface. 
The next step was the reaction of CNT-Cl with three substrates, aniline, iodobenzene and phenol, for the formation of C-N, C-C and C-O bonds, respectively, in the presence of 1,10-phenanthroline and cesium carbonate (Cs2CO3) as a base. As CNT substrates, two multi-wall CNT (MWCNT) types were used: commercially available Nanocyl NC7000™ (9.6 nm diameter, 1.5 µm length, 90% purity) and thicker in-house MWCNTs synthesized in our laboratory using catalytic chemical vapour deposition (c-CVD). The in-house CNTs had diameters between 60-70 nm and lengths up to 300 µm. Since the classical Ullmann reaction suffers from poor yields, we investigated the effect of various solvents (toluene, acetonitrile, dimethyl sulfoxide and N,N-dimethylformamide) on the coupling of substrates. Because aryl halides show the reactivity order I>Br>Cl>F, we also investigated the effect of the presence of iodine on the CNT surface on the reaction yield. In this case, in the first step we used iodine monochloride instead of iodine trichloride. Finally, we used the optimized reaction conditions with p-bromophenol and 1,2,4-trihydroxybenzene to control CNT dispersion.
Keywords: carbon nanotubes, coupling reaction, functionalization, Ullmann reaction
Procedia PDF Downloads 168
89 Polymer Composites Containing Gold Nanoparticles for Biomedical Use
Authors: Bozena Tyliszczak, Anna Drabczyk, Sonia Kudlacik-Kramarczyk, Agnieszka Sobczak-Kupiec
Abstract:
Introduction: Nanomaterials have become one of the leading materials in the synthesis of various compounds, because nano-sized materials exhibit properties different from those of their macroscopic equivalents. Such a change in size is reflected in changes in optical, electric, or mechanical properties. Among nanomaterials, particular attention is currently directed to gold nanoparticles. They find application in a wide range of areas, including cosmetology and pharmacy. Additionally, nanogold may be a component of modern wound dressings, whose antibacterial activity is beneficial from the viewpoint of the wound healing process. The specific properties of this type of nanomaterial mean that it may also be applied in cancer treatment. The development of new drug delivery techniques is currently an important research subject for many scientists, because along with the development of such fields of science as medicine and pharmacy, the need for better and more effective methods of administering drugs is constantly growing. The solution may be the use of drug carriers: materials that combine with the active substance and lead it directly to the desired place. The role of such a carrier may be played by gold nanoparticles, which are able to bond covalently with many organic substances. This allows the combination of nanoparticles with active substances. Therefore, gold nanoparticles are widely used in the preparation of nanocomposites for medical purposes, with special emphasis on drug delivery. Methodology: As part of the presented research, the synthesis of composites was carried out. The composites consisted of a polymer matrix and gold nanoparticles introduced into the polymer network. The synthesis was conducted with the use of a crosslinking agent and a photoinitiator, and the materials were obtained by means of a photopolymerization process. 
Next, incubation studies were conducted using selected liquids simulating fluids occurring in the human body. The study allowed us to determine the biocompatibility of the tested composites in relation to the selected environments. The chemical structure of the composites was then characterized, as well as their sorption properties. Conclusions: The research allowed for a preliminary characterization of the prepared polymer composites containing gold nanoparticles from the viewpoint of their biomedical use. The tested materials were characterized by biocompatibility in the tested environments. Moreover, the synthesized composites exhibited relatively high swelling capacity, which is essential in view of their potential application as drug carriers: during such an application, the composite swells and at the same time releases the active substance introduced into its interior, so it is important to check the swelling ability of such a material. Acknowledgements: The authors would like to thank The National Science Centre (Grant no: UMO-2016/21/D/ST8/01697) for providing financial support to this project. This paper is based upon work from COST Action (CA18113), supported by COST (European Cooperation in Science and Technology).
Keywords: nanocomposites, gold nanoparticles, drug carriers, swelling properties
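Swelling capacity of the kind discussed above is commonly reported as a gravimetric swelling ratio: fluid mass absorbed per unit dry mass. A minimal sketch of that standard calculation, with illustrative masses rather than the study's data:

```python
def swelling_ratio(swollen_mass_g, dry_mass_g):
    """Gravimetric swelling ratio: mass of absorbed fluid per unit dry mass."""
    return (swollen_mass_g - dry_mass_g) / dry_mass_g

# Example: a 1.0 g dry composite weighing 3.0 g after equilibrium swelling
print(swelling_ratio(3.0, 1.0))  # 2.0, i.e. it absorbed twice its dry mass
```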
Procedia PDF Downloads 116
88 The Effects of in vitro Digestion on Cheese Bioactivity; Comparing Adult and Elderly Simulated in vitro Gastrointestinal Digestion Models
Authors: A. M. Plante, F. O’Halloran, A. L. McCarthy
Abstract:
By 2050, it is projected that 2 billion people worldwide will be more than 60 years old. Older adults have unique dietary requirements, and aging is associated with physiological changes that affect appetite, sensory perception, metabolism, and digestion. Therefore, it is essential that foods recommended and designed for older adults promote healthy aging. To assess cheese as a functional food for the elderly, a range of commercial cheese products were selected and compared for their antioxidant properties. Cheeses from various milk sources (bovine, goat, sheep) with different textures and fat contents, including cheddar, feta, goats, brie, roquefort, halloumi, wensleydale and gouda, were initially digested with two different simulated in vitro gastrointestinal digestion (SGID) models. One SGID model represented a validated in vitro adult digestion system; the second, an elderly SGID, was designed to take into account the physiological changes associated with aging. The antioxidant potential of all cheese digestates was investigated using in vitro chemical-based antioxidant assays: 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging, ferric reducing antioxidant power (FRAP), and total phenolic content (TPC). All adult model digestates had high antioxidant activity in both the DPPH (> 70%) and FRAP (> 700 µM Fe²⁺/kg.fw) assays. Following in vitro digestion using the elderly SGID model, full-fat red cheddar, low-fat white cheddar, roquefort, halloumi, wensleydale, and gouda digestates had significantly lower (p ≤ 0.05) DPPH radical scavenging properties compared to the adult model digestates. Full-fat white cheddar had higher DPPH radical scavenging activity following elderly SGID digestion than the adult model digestate, but the difference was not significant. All other cheese digestates from the elderly model were comparable to the digestates from the adult model in terms of radical scavenging activity. 
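DPPH radical scavenging percentages like those above are conventionally calculated from the drop in absorbance of the DPPH radical relative to a control. A minimal sketch of that common formula; the absorbance values are illustrative, not the study's measurements:

```python
def dpph_scavenging_pct(abs_control, abs_sample):
    """DPPH radical scavenging activity (%) from absorbance readings
    (typically at 517 nm): (A_control - A_sample) / A_control * 100."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Example: control absorbance 0.80, digestate absorbance 0.20
print(dpph_scavenging_pct(0.80, 0.20))  # ~75% scavenging
```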
The FRAP of all elderly digestates was significantly lower (p ≤ 0.05) compared to the adult digestates. Goats cheese was significantly higher (p ≤ 0.05) in FRAP (718 µM Fe²⁺/kg.fw) compared to all other digestates in the elderly model. TPC levels in the soft cheeses (feta, goats) and low-fat cheeses (red cheddar, white cheddar) were significantly lower (p ≤ 0.05) in the elderly digestates than in the adult digestates. There was no significant difference in TPC levels between the elderly and adult models for full-fat cheddar (red, white), roquefort, wensleydale, gouda, and brie digestates. Halloumi was the only cheese with significantly higher TPC levels following elderly digestion compared to adult digestates. Low-fat red cheddar had significantly higher (p ≤ 0.05) TPC levels compared to all other digestates in both the adult and elderly digestive systems. Findings from this study demonstrate that aging has an impact on the bioactivity of cheese, as antioxidant activity and TPC levels were lower following in vitro elderly digestion than in the adult model. For older adults, soft cheese, particularly goats cheese, was associated with high radical scavenging and reducing power, while roquefort cheese had low antioxidant activity. Also, elderly digestates of halloumi and low-fat red cheddar were associated with high TPC levels. Cheese has potential as a functional food for the elderly; however, bioactivity can vary depending on the cheese matrix. Funding for this research was provided by the RISAM Scholarship Scheme, Cork Institute of Technology, Ireland.
Keywords: antioxidants, cheese, in-vitro digestion, older adults
Procedia PDF Downloads 228
87 Electrical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, and appliance features are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the time at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that uses behaviour simulation of the people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of specific features used for general appliance modeling. 
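DTW, named above as the unsupervised matching step, compares a detected event against stored appliance signatures while tolerating differences in timing and duration. A minimal sketch of the classic dynamic-programming DTW distance; the appliance traces are hypothetical, not the paper's data:

```python
from math import inf

def dtw_distance(x, y):
    """Dynamic Time Warping distance between two 1-D power traces."""
    n, m = len(x), len(y)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Best warping path extends a match, an insertion, or a deletion
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Match a detected event against two hypothetical appliance signatures (watts)
event  = [0, 120, 125, 122, 0]
fridge = [0, 118, 124, 120, 0]
kettle = [0, 2000, 2100, 0, 0]
best = min([("fridge", fridge), ("kettle", kettle)],
           key=lambda s: dtw_distance(event, s[1]))
print(best[0])  # fridge
```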
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the operation of the selected appliance falls, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: electrical disaggregation, DTW, general appliance modeling, event detection
Procedia PDF Downloads 78
86 Population Diversity of Dalmatian Pyrethrum Based on Pyrethrin Content and Composition
Authors: Filip Varga, Nina Jeran, Martina Biosic, Zlatko Satovic, Martina Grdisa
Abstract:
Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir./ Sch. Bip.), a species endemic to the eastern Adriatic coastline, is the source of the natural insecticide pyrethrin. Pyrethrin is a mixture of six compounds (pyrethrin I and II, cinerin I and II, jasmolin I and II) that exhibits high insecticidal activity with no detrimental effects on the environment. A recently optimized matrix solid-phase dispersion (MSPD) method, using florisil as the sorbent, acetone-ethyl acetate (1:1, v/v) as the elution solvent, and anhydrous sodium sulfate as the drying agent, was used to extract the pyrethrins from 10 wild populations (20 individuals per population) distributed along the Croatian coast. All six components in the extracts were qualitatively and quantitatively determined by high-performance liquid chromatography with a diode array detector (HPLC-DAD). Pearson's correlation coefficient was calculated between pyrethrin compounds, and differences between the populations were tested using analysis of variance. Additionally, the correlation of each pyrethrin component with spatio-ecological variables (bioclimate, soil properties, elevation, solar radiation, and distance from the coastline) was calculated. Total pyrethrin content ranged from 0.10% to 1.35% of dry flower weight, averaging 0.58% across all individuals. Analysis of variance revealed significant differences between populations in all six pyrethrin compounds and in total pyrethrin content. On average, the lowest total pyrethrin content was found in the population from the Pelješac peninsula (0.22% of dry flower weight), in which a total pyrethrin content lower than 0.18% was detected in 55% of the individuals. The highest average total pyrethrin content was observed in the population from the island of Zlarin (0.87% of dry flower weight), in which a total pyrethrin content higher than 1.00% was recorded in only 30% of the individuals. 
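The correlation analysis described above rests on the standard Pearson coefficient. A minimal sketch of that computation on hypothetical per-individual contents (illustrative numbers, not the study's measurements):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-individual contents (% of dry flower weight)
pyr_I  = [0.30, 0.45, 0.52, 0.28, 0.61]
pyr_II = [0.20, 0.15, 0.12, 0.22, 0.10]
total  = [a + b for a, b in zip(pyr_I, pyr_II)]
quality = [a / b for a, b in zip(pyr_I, pyr_II)]  # pyrethrin I/II ratio

print(round(pearson_r(pyr_I, total), 2))   # strongly positive
print(round(pearson_r(pyr_I, pyr_II), 2))  # strongly negative
```

With data shaped like the study's findings, pyrethrin I tracks total content closely while moving inversely to pyrethrin II.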
The pyrethrin I/pyrethrin II ratio, as a measure of extract quality, ranged from 0.21 (population from the island of Čiovo) to 5.88 (population from the island of Mali Lošinj), with an average of 1.77 across all individuals. By far the lowest extract quality was found in the population from Mt. Biokovo (pyrethrin I/II ratio lower than 0.72 in 40% of individuals), due to the high pyrethrin II content typical of this population. Pearson's correlation coefficient revealed a highly significant positive correlation between pyrethrin I content and total pyrethrin content, and a strong negative correlation between pyrethrin I and pyrethrin II. The results of this research clearly indicate high intra- and interpopulation diversity of Dalmatian pyrethrum with regard to pyrethrin content and composition. The information obtained has potential use in plant genetic resources conservation and biodiversity monitoring. Possibly the largest potential lies in designing breeding programs aimed at increasing pyrethrin content in commercial breeding lines and at reintroducing the crop into Croatian agriculture. Acknowledgment: This work has been fully supported by the Croatian Science Foundation under the project 'Genetic background of Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir/ Sch. Bip.) insecticidal potential' (PyrDiv) (IP-06-2016-9034).
Keywords: Dalmatian pyrethrum, HPLC, MSPD, pyrethrin
Procedia PDF Downloads 142
85 Ionophore-Based Materials for Selective Optical Sensing of Iron(III)
Authors: Natalia Lukasik, Ewa Wagner-Wysiecka
Abstract:
The development of selective, fast-responding, and economical sensors for the detection and determination of diverse ions is one of the most extensively studied areas, owing to its importance in clinical, environmental, and industrial analysis. Among chemical sensors, ionophore-based optical sensors have gained vast popularity; in these, the generated analytical signal is a consequence of the molecular recognition of the ion by the ionophore. A change of color occurring during host-guest interactions allows for quantitative analysis and for 'naked-eye' detection without the need for sophisticated equipment. An example of the application of such sensors is the colorimetric detection of iron(III) cations. Iron, one of the most significant trace elements, plays a role in many biochemical processes. For these reasons, reliable, fast, and selective methods of iron ion determination are highly demanded. Taking all of the above into account, a chromogenic amide derivative of 3,4-dihydroxybenzoic acid was synthesized, and its ability to recognize iron(III) was tested. To the best of the authors' knowledge (according to Chemical Abstracts), the obtained ligand has not been described in the literature so far. The catechol moiety was introduced into the ligand structure in order to mimic the action of naturally occurring siderophores, iron(III)-selective receptors. The ligand-ion interactions were studied using spectroscopic methods: UV-Vis spectrophotometry and infrared spectroscopy. The spectrophotometric measurements revealed that the amide exhibits affinity to iron(III) in dimethyl sulfoxide and in fully aqueous solution, which is manifested by a change of color from yellow to green. Incorporation of the tested amide into a polymeric matrix (cellulose triacetate) ensured effective recognition of iron(III) at pH 3, with a detection limit of 1.58×10⁻⁵ M. 
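Detection limits of the kind quoted above are often estimated from a calibration curve as LOD = k·σ_blank/slope (k commonly 3 or 3.3); the abstract does not state which convention was used here. A minimal sketch with purely illustrative numbers:

```python
def detection_limit(blank_sd, calibration_slope, k=3.3):
    """Estimate the limit of detection as LOD = k * sigma_blank / slope.
    blank_sd: standard deviation of blank responses (absorbance units);
    calibration_slope: response per unit concentration (a.u. per mol/L)."""
    return k * blank_sd / calibration_slope

# Hypothetical values: absorbance noise 0.002 a.u., slope 420 a.u. per mol/L
print(detection_limit(0.002, 420))  # on the order of 1e-5 M
```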
For the obtained sensor material, parameters such as linear response range, response time, selectivity, and possibility of regeneration were determined. In order to evaluate the effect of the size of the sensing material on iron(III) detection, nanospheres (in the form of a nanoemulsion) containing the tested amide were also prepared. According to DLS (dynamic light scattering) measurements, the size of the nanospheres is 308.02 ± 0.67 nm. The working parameters of the nanospheres were determined and compared with those of the cellulose triacetate-based material. Additionally, for fast qualitative experiments, test strips were prepared by adsorption of the amide solution on a glass microfiber material. The visual limit of detection of iron(III) at pH 3 by the test strips was estimated at the 10⁻⁴ M level. In conclusion, the amide derived from 3,4-dihydroxybenzoic acid reported here proved to be an effective candidate for the optical sensing of iron(III) in fully aqueous solutions. N. L. kindly acknowledges financial support from the National Science Centre Poland, grant no. 2017/01/X/ST4/01680. The authors thank Gdansk University of Technology for financial support under grant no. 032406.
Keywords: ion-selective optode, iron(III) recognition, nanospheres, optical sensor
Procedia PDF Downloads 154
84 Empirical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, and appliance features are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the time at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that uses behaviour simulation of the people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of specific features used for general appliance modeling. 
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the operation of the selected appliance falls, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: general appliance model, non intrusive load monitoring, events detection, unsupervised techniques
Procedia PDF Downloads 82
83 Landslide Hazard Assessment Using Physically Based Mathematical Models in Agricultural Terraces at Douro Valley in North of Portugal
Authors: C. Bateira, J. Fernandes, A. Costa
Abstract:
The Douro Demarcated Region (DDR) is a Porto wine production region. In the NE of Portugal, the strong incision of the Douro valley has produced very steep slopes, organized into agricultural terraces, which have undergone an intense and deep transformation in order to allow the mechanization of the work. The old terrace system, based on vertical stone support walls, was replaced by terraces with earth embankments, which have experienced widespread instability. This terrace instability has important economic and financial consequences for the agricultural enterprises. This paper presents and develops cartographic tools to assess embankment instability and to identify the areas prone to instability. The priority in this evaluation is the use of physically based mathematical models and the development of a validation process based on an inventory of past embankment failures. We used the shallow landslide stability model SHALSTAB, based on physical parameters such as cohesion (c'), friction angle (ф), hydraulic conductivity, soil depth, soil specific weight (ϱ), slope angle (α), and contributing areas computed by the Multiple Flow Direction (MFD) method. A terraced area can be analysed by these models only if we have very detailed information representative of the terrain morphology, on which the slope angle and the contributing areas depend. We achieved that using digital elevation models (DEMs) of high resolution (40 cm pixels), derived from a set of photographs taken during a flight at 100 m altitude with a pixel resolution of 12 cm. The slope angle results from this DEM. On the other hand, the MFD contributing area models the internal flow and is an important element in defining the spatial variation of soil saturation; that internal flow is also based on the DEM. This is supported by the statement that the interflow, although not coincident with the superficial flow, shows important similarities with it. 
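SHALSTAB couples a steady-state hydrologic model with an infinite-slope limit-equilibrium criterion built from the parameters listed above. A sketch of the standard infinite-slope factor of safety, with illustrative parameter values that are not calibrated to the Douro terraces:

```python
import math

def factor_of_safety(c_eff, phi_deg, gamma_soil, gamma_w, z, h, slope_deg):
    """Infinite-slope factor of safety (the limit-equilibrium core that
    SHALSTAB couples with a steady-state hydrologic model).
    c_eff: effective cohesion [kPa]; phi_deg: friction angle [deg];
    gamma_soil, gamma_w: unit weights of soil and water [kN/m^3];
    z: soil depth [m]; h: water table height above the failure plane [m];
    slope_deg: slope angle [deg]."""
    theta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    resisting = c_eff + (gamma_soil * z - gamma_w * h) * math.cos(theta) ** 2 * math.tan(phi)
    driving = gamma_soil * z * math.sin(theta) * math.cos(theta)
    return resisting / driving

# Hypothetical embankment: full saturation (h -> z) pushes FS below 1 (unstable)
print(round(factor_of_safety(2.0, 35, 18.0, 9.81, 1.5, 0.0, 30), 2))  # dry: 1.38
print(round(factor_of_safety(2.0, 35, 18.0, 9.81, 1.5, 1.5, 30), 2))  # saturated: 0.72
```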
Electrical resistivity monitoring values were related to the MFD contributing areas built from a 1 m resolution DEM and revealed a consistent correlation. That analysis, performed over the area, showed a good correlation, with R² of 0.72 and 0.76 at 1.5 m and 2 m depth, respectively. Considering this, a 1 m resolution DEM was taken as the basis for modelling the real internal flow; thus, we assumed that the 1 m resolution contributing area modelled by MFD is representative of the internal flow of the area. To solve this problem, we used a set of generalized DEMs to build the contributing areas used in SHALSTAB. Those DEMs, at several resolutions (1 m and 5 m), were built from a set of photographs with 50 cm resolution taken during a flight at 5 km altitude. Using this combination of maps, we modelled several final maps of terrace instability and performed a validation process with the contingency matrix. The best final instability map combines the slope map from a 40 cm resolution DEM and an MFD map from a 1 m resolution DEM, with a True Positive Rate (TPR) of 0.97, a False Positive Rate (FPR) of 0.47, Accuracy (ACC) of 0.53, Precision (PPV) of 0.0004, and a TPR/FPR ratio of 2.06.
Keywords: agricultural terraces, cartography, landslides, SHALSTAB, vineyards
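The validation metrics reported above all follow from the four cells of the contingency (confusion) matrix. A minimal sketch of those definitions; the cell counts are illustrative (chosen so TPR and FPR land on the 0.97 and 0.47 reported), not the study's validation data:

```python
def contingency_metrics(tp, fp, tn, fn):
    """Validation metrics from a contingency (confusion) matrix."""
    tpr = tp / (tp + fn)                    # True Positive Rate (sensitivity)
    fpr = fp / (fp + tn)                    # False Positive Rate
    acc = (tp + tn) / (tp + fp + tn + fn)   # Accuracy
    ppv = tp / (tp + fp)                    # Precision (positive predictive value)
    return {"TPR": tpr, "FPR": fpr, "ACC": acc, "PPV": ppv, "TPR/FPR": tpr / fpr}

# Illustrative cell counts (e.g. map pixels vs. an instability inventory)
print(contingency_metrics(tp=97, fp=4700, tn=5300, fn=3))
```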
Procedia PDF Downloads 177
82 Rapid Atmospheric Pressure Photoionization-Mass Spectrometry (APPI-MS) Method for the Detection of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans in Real Environmental Samples Collected within the Vicinity of Industrial Incinerators
Authors: M. Amo, A. Alvaro, A. Astudillo, R. Mc Culloch, J. C. del Castillo, M. Gómez, J. M. Martín
Abstract:
Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) comprise a range of highly toxic compounds that may exist as particulates within the air or accumulate within water supplies, soil, or vegetation. They may be created naturally within the environment as a product of forest fires or volcanic eruptions. It is only since the industrial revolution, however, that it has become necessary to closely monitor their generation as a byproduct of manufacturing and combustion processes, in an effort to mitigate widespread contamination events. The environmental concentrations of these toxins are expected to be extremely low; therefore, highly sensitive and accurate methods are required for their determination. Since the ionization of non-polar compounds through electrospray and APCI is difficult and inefficient, we evaluate the performance of a novel low-flow Atmospheric Pressure Photoionization (APPI) source for the trace detection of various dioxins and furans using rapid mass spectrometry workflows. Air, soil, and biota (vegetable matter) samples were collected monthly during one year from various locations within the vicinity of an industrial incinerator in Spain. Analytes were extracted by Soxhlet extraction in toluene and concentrated by rotary evaporation and nitrogen flow. Various ionization methods, such as electrospray (ES) and atmospheric pressure chemical ionization (APCI), were evaluated; however, only the low-flow APPI source was capable of providing the performance, in terms of sensitivity, required for detecting all targeted analytes. In total, 10 analytes, including 2,3,7,8-tetrachlorodibenzodioxin (TCDD), were detected and characterized using the APPI-MS method. Both PCDDs and PCDFs were detected most efficiently in negative ionization mode. The most abundant ion always corresponded to the loss of a chlorine and the addition of an oxygen, yielding [M-Cl+O]⁻ ions. 
MRM methods were created in order to provide selectivity for each analyte. No chromatographic separation was employed; however, matrix effects were determined to have a negligible impact on analyte signals. Triple quadrupole mass spectrometry was chosen for its high sensitivity and selectivity. The mass spectrometer used was a Sciex QTRAP 3200 operating in negative Multiple Reaction Monitoring (MRM) mode. Typical mass detection limits were determined to be near the 1-pg level. The APPI-MS2 technology applied to the detection of PCDD/Fs allows fast and reliable atmospheric analysis, considerably reducing operational times and costs with respect to other available technologies. In addition, the limit of detection can easily be improved using a more sensitive mass spectrometer, since the background in the analysis channel is very low. The APPI source developed by SEADM allows ionization of polar and non-polar compounds with high efficiency and repeatability.
Keywords: atmospheric pressure photoionization-mass spectrometry (APPI-MS), dioxin, furan, incinerator
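As a small worked example of the ion chemistry reported above, the sketch below computes the nominal mass of the [M-Cl+O]⁻ ion for 2,3,7,8-TCDD (C12H4Cl4O2). Integer nominal masses are assumed here purely for clarity; a real MRM method would of course be built on exact monoisotopic masses.

```python
# Sketch: nominal m/z of the [M-Cl+O]- ion observed for PCDD/Fs,
# illustrated for 2,3,7,8-TCDD (C12H4Cl4O2). Integer nominal masses
# are an assumption made for readability.
NOMINAL_MASS = {"C": 12, "H": 1, "O": 16, "Cl": 35}

def nominal_mass(formula: dict) -> int:
    """Sum nominal atomic masses over a formula given as {element: count}."""
    return sum(NOMINAL_MASS[el] * n for el, n in formula.items())

tcdd = {"C": 12, "H": 4, "Cl": 4, "O": 2}
m = nominal_mass(tcdd)                                # neutral molecule: 320
m_cl_o = m - NOMINAL_MASS["Cl"] + NOMINAL_MASS["O"]   # [M-Cl+O]- (charge ignored): 301

print(m, m_cl_o)
```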
Procedia PDF Downloads 208
81 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads
Authors: Raja Umer Sajjad, Chang Hee Lee
Abstract:
Stormwater runoff is the leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors, since the success of a monitoring program depends mainly on the accuracy of its estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012–2014) from a mixed land use site within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed throughout the year. The investigation of a large number of water quality parameters is time-consuming and resource intensive. In order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV) and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implication of sampling time on monitoring results, the number of samples required during a storm event, and the impact of the seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus and heavy metals like lead, chromium, and copper, whereas Chemical Oxygen Demand (COD) was identified as a surrogate for organic matter.
The CVs of the monitored water quality parameters were high (ranging from 3.8 to 15.5). This suggests that the use of a grab sampling design to estimate mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was only 2% between two different sample-size approaches, i.e., 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of the storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters
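The surrogate-screening logic described above (CV statistics plus correlation/PCA) can be sketched as follows. The concentration values are purely illustrative stand-ins, not the study's data; a parameter qualifies as a surrogate when it correlates strongly with the target parameter.

```python
# Sketch: screening surrogate water-quality parameters via coefficient of
# variation (CV) and pairwise correlation, analogous to the PCA/correlation
# screening described above. All numbers are illustrative, not measured.
import statistics

samples = {  # hypothetical per-event concentrations (mg/L or NTU)
    "TSS":       [120.0, 340.0, 80.0, 560.0, 200.0],
    "turbidity": [ 60.0, 170.0, 40.0, 280.0, 100.0],
    "COD":       [ 30.0,  35.0, 28.0,  40.0,  33.0],
}

def cv(xs):
    """Coefficient of variation = sample stdev / mean."""
    return statistics.stdev(xs) / statistics.mean(xs)

def pearson(xs, ys):
    """Pearson correlation coefficient between two series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# A parameter is a good surrogate for another when they correlate strongly.
r = pearson(samples["TSS"], samples["turbidity"])
print(round(cv(samples["TSS"]), 2), round(r, 3))
```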
Procedia PDF Downloads 240
80 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles
Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo
Abstract:
Non-Cooperative Target Identification has become a key research domain in the defense industry since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images where the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. To address this problem, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, namely the test set, with the profiles included in a pre-loaded database, namely the training set. The classification is improved by using Singular Value Decomposition since it allows each aircraft to be modelled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, thereby reducing unwanted information such as noise. Singular Value Decomposition permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. This way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two metrics based on Singular Value Decomposition, F1 and F2, are employed in the identification process. In the case of F2, the angle is weighted, since the top vectors set the importance in the contribution to the formation of a target signal; on the contrary, F1 simply shows the evolution of the unweighted angle.
In order to have a wide database of radar signatures and evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft at defined trajectories taken from an actual measurement. Taking into account the nature of the datasets, the main drawback of using simulated profiles instead of actual measured profiles is that the former imply an ideal identification scenario, since measured profiles suffer from noise, clutter and other unwanted information while simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so to assess the feasibility of the approach, the addition of noise has been considered before the creation of the test set. The identification results applying the unweighted and weighted metrics are analysed to determine which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments on profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance improved when weighting was applied. Future experiments with larger sets are expected to be conducted with the aim of finally using actual profiles as test sets in a real hostile situation.
Keywords: HRRP, NCTI, simulated/synthetic database, SVD
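A minimal sketch of the subspace-angle identification described above, in the spirit of the unweighted F1 metric, is given below. The profiles are synthetic stand-ins rather than actual HRRPs, and the number of range bins, subspace dimension, and target names are all assumptions made for illustration.

```python
# Sketch: subspace-based classification of range profiles. Each class is
# modelled by the top left singular vectors of its training-profile matrix;
# a test profile is assigned to the class whose signal subspace forms the
# smallest angle with it. Data are synthetic stand-ins, not actual HRRPs.
import numpy as np

rng = np.random.default_rng(0)

def signal_subspace(profiles, k):
    """Orthonormal basis (columns) of the k-dim signal subspace of a
    (range_bins x n_profiles) training matrix, via SVD."""
    u, _, _ = np.linalg.svd(profiles, full_matrices=False)
    return u[:, :k]

def angle_to_subspace(x, basis):
    """Angle (radians) between vector x and the subspace spanned by basis."""
    x = x / np.linalg.norm(x)
    proj = basis @ (basis.T @ x)          # orthogonal projection onto subspace
    cos = np.clip(np.linalg.norm(proj), 0.0, 1.0)
    return np.arccos(cos)

# Two hypothetical targets with distinct dominant scattering patterns.
base_a = rng.normal(size=(64, 1))
base_b = rng.normal(size=(64, 1))
train_a = base_a @ rng.normal(size=(1, 20)) + 0.05 * rng.normal(size=(64, 20))
train_b = base_b @ rng.normal(size=(1, 20)) + 0.05 * rng.normal(size=(64, 20))

subspaces = {"A": signal_subspace(train_a, 3), "B": signal_subspace(train_b, 3)}

test_profile = base_a[:, 0] + 0.1 * rng.normal(size=64)  # noisy view of target A
decision = min(subspaces, key=lambda t: angle_to_subspace(test_profile, subspaces[t]))
print(decision)
```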
Procedia PDF Downloads 354
79 Exploring Fluoroquinolone-Resistance Dynamics Using a Distinct in Vitro Fermentation Chicken Caeca Model
Authors: Bello Gonzalez T. D. J., Setten Van M., Essen Van A., Brouwer M., Veldman K. T.
Abstract:
Resistance to fluoroquinolones (FQ) has increased over the years, posing a significant challenge for the treatment of human infections, particularly gastrointestinal tract infections caused by zoonotic bacteria transmitted through the food chain and environment. In broiler chickens, a relatively high proportion of FQ resistance has been observed in indicator Escherichia coli, Salmonella and Campylobacter isolates. We hypothesize that flumequine (Flu), used as a secondary choice for the treatment of poultry infections, could potentially be associated with this high proportion of FQ resistance. To evaluate this hypothesis, we used an in vitro fermentation chicken caeca model. Two continuous single-stage fermenters were used to simulate in real time the physiological conditions of the chicken caecal content (temperature, pH, caecal content mixing, and anoxic environment). A pool of chicken caecal content containing FQ-resistant E. coli obtained from chickens at slaughter age was used as inoculum, along with a spiked FQ-susceptible Campylobacter jejuni strain isolated from broilers. Flu was added to one of the fermenters (Flu-fermenter) every 24 hours for two days to evaluate the selection and maintenance of FQ resistance over time, while the other served as a control (C-fermenter). The experiment lasted 5 days. Samples were collected at three different time points: before, during and after Flu administration. Serial dilutions were plated on Butzler culture media with and without Flu (8 mg/L) or enrofloxacin (4 mg/L) and on MacConkey culture media with and without Flu (4 mg/L) or enrofloxacin (1 mg/L) to determine the proportion of resistant strains over time. Positive cultures were identified by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry. A subset of the obtained isolates was used for Whole Genome Sequencing analysis. Over time, E. coli exhibited positive growth in both fermenters, while C. jejuni growth was detected up to day 3. The proportion of recovered Flu-resistant E. coli strains remained consistent over time under antibiotic selective pressure, while in the C-fermenter a decrease was observed at day 5; a similar pattern was observed for the enrofloxacin-resistant E. coli strains. This suggests that Flu might play a role in the selection and persistence of enrofloxacin resistance, in contrast to the C-fermenter, where enrofloxacin-resistant E. coli strains appeared at a later time. Furthermore, positive growth was detected from both fermenters only on Butzler plates without antibiotics. Testing of a subset of C. jejuni strains from the Flu-fermenter revealed that those strains were susceptible to ciprofloxacin (MIC < 0.12 μg/mL). A selection of E. coli strains from both fermenters revealed the presence of plasmid-mediated quinolone resistance (PMQR, qnrB19) in only one strain from the C-fermenter, belonging to sequence type (ST) 48, and in all strains from the Flu-fermenter, which belonged to ST189. Our results showed a selective impact of Flu on PMQR-positive E. coli strains, while no effect was observed on C. jejuni. Maintenance of Flu resistance was correlated with antibiotic selective pressure. Further studies into antibiotic resistance gene transfer among commensal and zoonotic bacteria in the chicken caecal content may help to elucidate the mechanisms of resistance spread.
Keywords: fluoroquinolone-resistance, escherichia coli, campylobacter jejuni, in vitro model
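The serial-dilution plate counts mentioned above translate into CFU/mL and resistant-strain proportions by simple arithmetic; a minimal sketch follows, with illustrative colony counts, dilutions, and plated volumes rather than the study's data.

```python
# Sketch: estimating colony-forming units per mL from serial-dilution plate
# counts, as used to follow resistant-strain proportions in the fermenters.
# All counts and volumes below are illustrative.
def cfu_per_ml(colonies: int, dilution_factor: float, plated_volume_ml: float) -> float:
    """CFU/mL of the original sample = colonies / (plated volume x dilution)."""
    return colonies / (plated_volume_ml * dilution_factor)

# e.g. 42 colonies on a plate spread with 0.1 mL of a 10^-5 dilution:
estimate = cfu_per_ml(42, 1e-5, 0.1)
print(f"{estimate:.2e} CFU/mL")

# Proportion of resistant cells = count on antibiotic plate / count on plain plate.
resistant = cfu_per_ml(5, 1e-3, 0.1)   # antibiotic-supplemented plate
total = cfu_per_ml(38, 1e-5, 0.1)      # antibiotic-free plate
print(f"resistant fraction: {resistant / total:.4f}")
```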
Procedia PDF Downloads 62
78 Investigation of Processing Conditions on Rheological Features of Emulsion Gels and Oleogels Stabilized by Biopolymers
Authors: M. Sarraf, J. E. Moros, M. C. Sánchez
Abstract:
Oleogels are self-standing systems that are able to trap edible liquid oil in a three-dimensional network, and they help reduce fat usage through the crystallization of oleogelators. There are different ways to achieve oleogelation and oil structuring, including direct dispersion, structured biphasic systems, oil sorption, and indirect (emulsion-template) methods. The selection of processing conditions, as well as the composition of the oleogel, is essential to obtain a stable oleogel with characteristics suitable for its purpose. In this sense, polysaccharides are among the ingredients widely used in food products to produce oleogels and emulsions. Basil seed gum (BSG), obtained from Ocimum basilicum, is a new native polysaccharide used in the food industry, with high viscosity and pseudoplastic behavior because of its high molecular weight. Also, proteins can stabilize oil in water due to the presence of amino and carboxyl moieties that result in surface activity. Whey proteins are widely used in the food industry because they are available, cheap ingredients with nutritional and functional characteristics, acting as emulsifying and gelling agents with thickening and water-binding capacity. In general, the interaction of proteins and polysaccharides has a significant effect on food structures and their stability, such as the texture of dairy products, by controlling the interactions in macromolecular systems. Using edible oleogels for oil structuring helps the targeted delivery of a component trapped in the structural network. Therefore, the development of efficient oleogels is essential in the food industry. A complete understanding of the important factors, such as the oil-phase ratio, processing conditions, and biopolymer concentrations, that affect the formation and stability of the emulsion can provide crucial information for the production of a suitable oleogel.
In this research, the effects of oil concentration and of the pressure used in the manufacture of the emulsion prior to obtaining the oleogel have been evaluated through the analysis of droplet size and the rheological properties of the obtained emulsions and oleogels. The results show that emulsions prepared in the high-pressure homogenizer (HPH) at higher pressure values have smaller droplet sizes and higher uniformity in the size distribution curve. On the other hand, in relation to the rheological characteristics of the emulsions and oleogels obtained, the predominantly elastic character of the systems must be noted, as they present storage modulus values higher than loss modulus values, also showing an important plateau zone, typical of structured systems. In the same way, steady-state viscous flow tests on both emulsions and oleogels show that, once again, the pressure used in the homogenizer is an important factor for obtaining emulsions with adequate droplet size and a suitable subsequent oleogel. Thus, various routes for trapping oil inside a biopolymer matrix with adjustable mechanical properties could be applied to create the three-dimensional network needed for oil absorption and oleogel formation.
Keywords: basil seed gum, particle size, viscoelastic properties, whey protein
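Two of the routine rheological calculations implied above can be sketched as follows: the loss tangent G''/G' (below 1 for predominantly elastic systems) and a power-law (Ostwald-de Waele) fit of a steady-state flow curve for pseudoplastic behavior. All numeric values are illustrative, not measured data.

```python
# Sketch: (i) loss tangent tan(delta) = G''/G', < 1 for predominantly
# elastic emulsions/oleogels; (ii) power-law fit eta = K * gamma_dot**(n-1)
# via linear regression in log-log space. Values are illustrative.
import math

G_prime, G_double_prime = 850.0, 120.0   # Pa, storage and loss moduli
tan_delta = G_double_prime / G_prime
print(f"tan(delta) = {tan_delta:.3f} -> {'elastic' if tan_delta < 1 else 'viscous'}")

shear_rates = [0.1, 1.0, 10.0, 100.0]          # 1/s
viscosities = [500.0, 158.1, 50.0, 15.81]      # Pa.s (shear-thinning data)

logs_x = [math.log(g) for g in shear_rates]
logs_y = [math.log(v) for v in viscosities]
n_pts = len(logs_x)
mx = sum(logs_x) / n_pts
my = sum(logs_y) / n_pts
slope = sum((x - mx) * (y - my) for x, y in zip(logs_x, logs_y)) / \
        sum((x - mx) ** 2 for x in logs_x)
flow_index = slope + 1          # n < 1 indicates pseudoplastic behaviour
K = math.exp(my - slope * mx)   # consistency coefficient
print(f"n = {flow_index:.2f}, K = {K:.1f} Pa.s^n")
```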
Procedia PDF Downloads 66
77 Variation of Warp and Binder Yarn Tension across the 3D Weaving Process and its Impact on Tow Tensile Strength
Authors: Reuben Newell, Edward Archer, Alistair McIlhagger, Calvin Ralph
Abstract:
Modern industry has developed a need for innovative 3D composite materials due to their attractive material properties. Composite materials are composed of a fibre reinforcement encased in a polymer matrix. The fibre reinforcement consists of warp, weft and binder yarns or tows woven together into a preform. The mechanical performance of a composite material is largely controlled by the properties of the preform. As a result, the bulk of recent textile research has been focused on the design of high-strength preform architectures, while studies looking at optimisation of the weaving process have largely been neglected. It has been reported that yarns experience varying levels of damage during weaving, resulting in filament breakage and ultimately compromised composite mechanical performance. The weaving parameters involved in causing this yarn damage are not fully understood. Recent studies indicate that poor yarn tension control may be an influencing factor: as tension is increased, the yarn-to-yarn and yarn-to-weaving-equipment interactions are heightened, maximising damage. The correlation between yarn tension variation and weaving damage severity has never been adequately researched or quantified, so a study is needed which assesses the influence of tension variation on the mechanical properties of woven yarns. This study has sought to quantify the variation of yarn tension throughout weaving and to link tension to weaving damage. Multiple yarns were randomly selected, and their tension was measured across the creel and shedding stages of weaving using a hand-held tension meter. Sections of the same yarns were subsequently cut from the loom and tensile tested. A comparison was made between the tensile strength of pristine and tensioned yarns to determine the induced weaving damage.
Yarns from bobbins at the rear of the creel were under the least tension (0.5-2.0 N) compared to yarns positioned at the front of the creel (1.5-3.5 N). This increase in tension has been linked to the sharp turn in the yarn path between bobbins at the front of the creel and the creel I-board. Creel yarns under the lower tension suffered a 3% loss of tensile strength, compared to 7% for the more highly tensioned yarns. During shedding, the tension on the yarns was higher than in the creel. The upper shed yarns were exposed to a lower tension (3.0-4.5 N) than the lower shed yarns (4.0-5.5 N). Shed yarns under the lower tension suffered a 10% loss of tensile strength, compared to 14% for the more highly tensioned yarns. Interestingly, the most severely damaged yarn was exposed to both the largest creel and shedding tensions. This study confirms for the first time that yarns under a greater level of tension suffer an increased amount of weaving damage. Significant variation of yarn tension has been identified across the creel and shedding stages of weaving. This leads to a variance of mechanical properties across the woven preform and ultimately the final composite part. The outcome of this study highlights the need for optimised yarn tension control during preform manufacture to minimise yarn-induced weaving damage.
Keywords: optimisation of preform manufacture, tensile testing of damaged tows, variation of yarn weaving tension, weaving damage
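The damage quantification used above reduces to a simple strength-retention calculation. In the sketch below the breaking loads are hypothetical values, chosen only so that the computed losses line up with the 3/7/10/14% figures reported; they are not the measured loads.

```python
# Sketch: quantifying weaving-induced damage as tensile-strength loss of a
# tensioned yarn relative to its pristine state. Breaking loads are
# hypothetical, picked to reproduce the reported percentage losses.
def strength_loss_pct(pristine_n: float, woven_n: float) -> float:
    """Percent loss of breaking load after weaving."""
    return 100.0 * (pristine_n - woven_n) / pristine_n

pristine = 50.0                          # N, assumed pristine breaking load
creel_rear, creel_front = 48.5, 46.5     # lower vs higher creel tension
shed_upper, shed_lower = 45.0, 43.0      # lower vs higher shedding tension

for label, load in [("creel rear", creel_rear), ("creel front", creel_front),
                    ("shed upper", shed_upper), ("shed lower", shed_lower)]:
    print(f"{label}: {strength_loss_pct(pristine, load):.0f}% loss")
```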
Procedia PDF Downloads 236
76 Impedimetric Phage-Based Sensor for the Rapid Detection of Staphylococcus aureus from Nasal Swab
Authors: Z. Yousefniayejahr, S. Bolognini, A. Bonini, C. Campobasso, N. Poma, F. Vivaldi, M. Di Luca, A. Tavanti, F. Di Francesco
Abstract:
Pathogenic bacteria represent a threat to healthcare systems and the food industry because their rapid detection remains challenging. Electrochemical biosensors are gaining prominence as a novel technology for the detection of pathogens due to intrinsic features such as low cost, rapid response time, and portability, which make them a valuable alternative to traditional methodologies. These sensors use biorecognition elements that are crucial for the identification of specific bacteria. In this context, bacteriophages are promising tools owing to their inherently high selectivity towards bacterial hosts, which is of fundamental importance when detecting bacterial pathogens in complex biological samples. In this study, we present the development of a low-cost and portable sensor based on the Zeno phage for the rapid detection of Staphylococcus aureus. Screen-printed gold electrodes functionalized with the Zeno phage were used, and electrochemical impedance spectroscopy was applied to evaluate the change in the charge transfer resistance (Rct) as a result of the interaction with S. aureus MRSA ATCC 43300. The phage-based biosensor showed a linear range from 10¹ to 10⁴ CFU/mL with a 20-minute response time and a limit of detection (LOD) of 1.2 CFU/mL under physiological conditions. The biosensor's ability to recognize various strains of staphylococci was also successfully demonstrated in the presence of clinical isolates collected from different geographic areas. Assays using S. epidermidis were also carried out to verify the species specificity of the phage sensor. A remarkable change in the Rct was observed only in the presence of the target S. aureus bacteria, while no substantial binding to S. epidermidis occurred. This confirmed that the Zeno phage sensor targets only the S. aureus species within the genus Staphylococcus.
In addition, the biosensor's specificity with respect to other bacterial species, including the gram-positive bacterium Enterococcus faecium and the gram-negative bacterium Pseudomonas aeruginosa, was evaluated, and a non-significant impedimetric signal was observed. Notably, the biosensor successfully identified S. aureus bacterial cells in a complex matrix such as a nasal swab, opening the possibility of its use in a real-case scenario. We diluted different concentrations of S. aureus from 10⁸ to 10⁰ CFU/mL at a ratio of 1:10 in nasal swab matrices collected from healthy donors. Three different sensors were applied to measure the various concentrations of bacteria. Our sensor showed high selectivity for detecting S. aureus in biological matrices compared to time-consuming traditional methods such as enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), and radioimmunoassay (RIA). With the aim of studying the possibility of using this biosensor to address the challenges associated with pathogen detection, ongoing research is focused on the assessment of the biosensor's analytical performance in different biological samples and the discovery of new phage bioreceptors.
Keywords: electrochemical impedance spectroscopy, bacteriophage, biosensor, Staphylococcus aureus
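Figures of merit such as the linear range and LOD reported above are commonly derived from a linear calibration of signal versus log concentration with a 3σ/slope detection limit. The sketch below illustrates that generic procedure with assumed Rct responses and blank noise, not the measured data of this sensor.

```python
# Sketch: linear calibration of an impedimetric sensor and a 3*sigma/slope
# limit-of-detection estimate. Rct responses and blank noise are assumed
# illustrative values.
log_conc = [1.0, 2.0, 3.0, 4.0]           # log10(CFU/mL), the linear range
delta_rct = [12.0, 24.0, 36.0, 48.0]      # % change in charge-transfer resistance

n = len(log_conc)
mx = sum(log_conc) / n
my = sum(delta_rct) / n
slope = sum((x - mx) * (y - my) for x, y in zip(log_conc, delta_rct)) / \
        sum((x - mx) ** 2 for x in log_conc)
intercept = my - slope * mx

sigma_blank = 0.3                          # std. dev. of blank signal (%)
lod_log = 3 * sigma_blank / slope          # LOD in log10(CFU/mL) units
print(f"slope = {slope:.1f} %/decade, LOD ~ 10^{lod_log:.3f} CFU/mL")
```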
Procedia PDF Downloads 66
75 Improvement of Greenhouse Gases Bio-Fixation by Microalgae Using a “Plasmon-Enhanced Photobioreactor”
Authors: Francisco Pereira, António Augusto Vicente, Filipe Vaz, Joel Borges, Pedro Geada
Abstract:
Light is a growth-limiting factor in microalgae cultivation: its spectral composition (characterized by wavelength), intensity, and duration are well reported to have a substantial impact on cell growth rates and, consequently, on photosynthetic performance and the mitigation of CO₂, one of the most significant greenhouse gases (GHGs). Photobioreactors (PBRs) are commonly used to grow microalgae under controlled conditions, but they often fail to provide an even light distribution to the cultures. For this reason, there is a pressing need for innovations aimed at enhancing the efficient utilization of light. One potential approach to address this issue is the implementation of plasmonic films exploiting localized surface plasmon resonance (LSPR). LSPR is an optical phenomenon arising from the interaction of light with metallic nanostructures: its excitation is characterized by the oscillation of unbound conduction electrons of the nanoparticles coupled with the electromagnetic field of the incident light. As a result of this excitation, highly energetic electrons and a strong electromagnetic field are generated. These effects lead to an amplification of light scattering, absorption, and extinction at specific wavelengths, contingent on the nature of the employed nanoparticle. Thus, microalgae might benefit from this biotechnology, as it enables the selective filtration of inhibitory wavelengths and harnesses the electromagnetic fields produced, which could lead to enhancements in both biomass and metabolite productivity. This study aimed at implementing and evaluating a “plasmon-enhanced PBR”. The goal was to utilize LSPR thin films to enhance the growth and CO₂ bio-fixation rate of Chlorella vulgaris. The internal/external walls of the PBRs were coated with a TiO₂ matrix containing different nanoparticles (Au, Ag, and Au-Ag) in order to evaluate the impact of this approach on the microalgae's performance.
Plasmonic films with distinct compositions resulted in different Chlorella vulgaris growth, ranging from 4.85 to 6.13 g·L⁻¹. The highest cell concentrations were obtained with the metallic Ag films, demonstrating a 14% increase compared to the control condition. Moreover, there appeared to be no difference in growth between PBRs with inner and outer wall coatings. In terms of CO₂ bio-fixation, distinct rates were obtained depending on the coating applied, ranging from 0.42 to 0.53 gCO₂·L⁻¹·d⁻¹. The Ag coating was demonstrated to be the most effective condition for carbon fixation by C. vulgaris. The impact of the LSPR films on the biochemical characteristics of the biomass (e.g., proteins, lipids, pigments) was analysed as well. Interestingly, the Au coating yielded the most significant enhancements in protein content and total pigments, with increments of 15% and 173%, respectively, when compared to the PBR without any coating (control condition). Overall, the incorporation of plasmonic films in PBRs seems to have the potential to improve the performance and efficiency of microalgae cultivation, thereby representing an interesting approach to increase both biomass production and GHG bio-mitigation.
Keywords: CO₂ bio-fixation, plasmonic effect, photobioreactor, photosynthetic microalgae
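CO₂ bio-fixation rates of the kind reported above are commonly estimated from biomass productivity as R(CO₂) = C_c · P · (M(CO₂)/M(C)). The sketch below assumes a typical carbon content of about 50% of dry weight for Chlorella and an illustrative productivity value; neither number is taken from the study.

```python
# Sketch: the standard estimate of CO2 bio-fixation from biomass
# productivity, R_CO2 = C_c * P * (M_CO2 / M_C). Carbon content and
# productivity below are assumptions, not measured values.
M_CO2, M_C = 44.01, 12.01          # molar masses, g/mol
carbon_content = 0.50              # g carbon per g dry biomass (assumed)

def co2_fixation_rate(biomass_productivity_g_per_L_day: float) -> float:
    """CO2 fixation rate in g CO2 per L per day."""
    return carbon_content * biomass_productivity_g_per_L_day * (M_CO2 / M_C)

p = 0.28                           # g biomass per L per day (illustrative)
print(f"{co2_fixation_rate(p):.2f} gCO2 L^-1 d^-1")
```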
Procedia PDF Downloads 84
74 Purple Spots on Historical Parchments: Confirming the Microbial Succession at the Basis of Biodeterioration
Authors: N. Perini, M. C. Thaller, F. Mercuri, S. Orlanducci, A. Rubechini, L. Migliore
Abstract:
The preservation of cultural heritage is one of the major challenges of today's society, because of the fundamental right of future generations to inherit it as the continuity of their historical and cultural identity. Parchments, consisting of a semi-solid matrix of collagen produced from animal skin (i.e., sheep or goat), are a significant part of the cultural heritage, having been used as a writing material for many centuries. Due to their animal origin, parchments easily undergo biodeterioration. The most common biological damage is characterized by isolated or coalescent purple spots that often lead to the detachment of the superficial layer and the loss of the written historical content of the document. Although many parchments with the same biodegradative features have been analyzed, no common causative agent has been found so far. Very recently, a study was performed on a purple-damaged parchment roll dated back to 1244 A.D., the A.A. Arm. I-XVIII 3328, belonging to the oldest collection of the Vatican Secret Archive (Fondo 'Archivum Arcis'), by comparing uncolored undamaged and purple damaged areas of the same document. As a whole, the study gave interesting results allowing a model of biodeterioration to be hypothesized, consisting of a microbial succession acting in two main phases: the first one, common to all the damaged parchments, is characterized by halophilic and halotolerant bacteria fostered by the salty environment within the parchment, possibly induced by the brining of the hides; the second one, changing with the individual history of each parchment, determines the identity of its colonizers. The design of this model was pivotal to this study, performed by different labs of the Tor Vergata University (Rome, Italy) in collaboration with the Vatican Secret Archive.
Three documents, belonging to a collection of dramatically damaged parchments archived as 'Faldone Patrizi A 19' (dated back to the XVII century A.D.), were analyzed through a multidisciplinary approach including three up-to-date technologies: (i) Next Generation Sequencing (NGS, Illumina) to describe the microbial communities colonizing the damaged and undamaged areas, (ii) Raman spectroscopy to analyze the purple pigments, and (iii) Light Transmitted Analysis (LTA) to evaluate the kind and extent of the damage to the native collagen. The metagenomic analysis obtained from NGS revealed DNA sequences belonging to Halobacterium salinarum, mainly in the undamaged areas. Raman spectroscopy detected pigments within the purple spots, mainly bacteriorhodopsin/rhodopsin-like pigments; bacteriorhodopsin is a purple transmembrane protein, containing retinal, that is present in Halobacteria. The LTA technique revealed extremely damaged collagen structures in both damaged and undamaged areas of the parchments. In the light of these data, the study represents a first confirmation of the microbial succession model described above. The demonstration of this model is pivotal to starting any possible new restoration strategy to bring historical parchments back to their original beauty, but it also opens opportunities for intervention on a huge number of documents.
Keywords: biodeterioration, parchments, purple spots, ecological succession
Procedia PDF Downloads 171
73 Mathematical Modeling of Avascular Tumor Growth and Invasion
Authors: Meitham Amereh, Mohsen Akbari, Ben Nadler
Abstract:
Cancer has been recognized as one of the most challenging problems in biology and medicine. Aggressive tumors are a lethal type of cancer characterized by high genomic instability, rapid progression, invasiveness, and therapeutic resistance. Their behavior involves complicated molecular biology and consequential dynamics. Although tremendous effort has been devoted to developing therapeutic approaches, there is still a huge need for new insights into the poorly understood aspects of tumors. As one of the key requirements for better understanding the complex behavior of tumors, mathematical modeling, and continuum physics in particular, plays a pivotal role. Mathematical modeling can provide quantitative predictions of biological processes and help interpret complicated physiological interactions in the tumor microenvironment. The pathophysiology of aggressive tumors is strongly affected by extracellular cues such as the stresses produced by mechanical forces between the tumor and the host tissue. During tumor progression, the growing mass displaces the surrounding extracellular matrix (ECM), and, depending on the tissue stiffness, stress accumulates inside the tumor. The produced stress can influence the tumor by breaking adherens junctions. During this process, the tumor stops rapid proliferation and begins to remodel its shape to preserve the homeostatic equilibrium state. To achieve this, the tumor in turn upregulates epithelial-to-mesenchymal transition-inducing transcription factors (EMT-TFs). These EMT-TFs are involved in various signaling cascades, which are often associated with tumor invasiveness and malignancy. In this work, we modeled the tumor as a growing hyperelastic mass and investigated the effects of mechanical stress from the surrounding ECM on tumor invasion. The invasion is modeled as a volume-preserving inelastic evolution. In this framework, principal balance laws are considered for tumor mass, linear momentum, and the diffusion of nutrients.
Also, the mechanical interaction between the tumor and the ECM is modeled using a Ciarlet constitutive strain energy function, and a dissipation inequality is utilized to model the volumetric growth rate. System parameters, such as the rate of nutrient uptake and cell proliferation, are obtained experimentally. To validate the model, human glioblastoma multiforme (hGBM) tumor spheroids were incorporated inside a Matrigel/alginate composite hydrogel and injected into a microfluidic chip to mimic the tumor's natural microenvironment. The invasion structure was analyzed by imaging the spheroid over time. Also, the expression of transcription factors involved in invasion was measured by immunostaining the tumor. The volumetric growth, stress distribution, and inelastic evolution of the tumors were predicted by the model. The results showed that the level of invasion is in direct correlation with the level of predicted stress within the tumor. Moreover, the invasion length measured by fluorescence imaging was shown to be related to the inelastic evolution of the tumors obtained by the model.
Keywords: cancer, invasion, mathematical modeling, microfluidic chip, tumor spheroids
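One of the balance laws invoked above, steady nutrient diffusion with first-order uptake in a spherical tumor, can be sketched with a simple finite-difference solve. All parameter values below are illustrative, not the experimentally fitted ones of the study.

```python
# Sketch: steady nutrient diffusion with first-order uptake in a sphere,
# D * (c'' + (2/r) c') = k * c, with zero gradient at the center and a
# normalized concentration c = 1 at the surface. Parameters illustrative.
import numpy as np

R = 400e-6            # spheroid radius (m)
D = 1e-9              # nutrient diffusivity (m^2/s)
k = 0.02              # first-order uptake rate (1/s)
N = 200               # radial grid points

r = np.linspace(1e-9, R, N)   # tiny offset avoids the r = 0 singularity
dr = r[1] - r[0]

# Assemble the tridiagonal system row by row (dense for simplicity).
A = np.zeros((N, N))
b = np.zeros(N)
for i in range(1, N - 1):
    A[i, i - 1] = D * (1 / dr**2 - 1 / (r[i] * dr))
    A[i, i]     = -2 * D / dr**2 - k
    A[i, i + 1] = D * (1 / dr**2 + 1 / (r[i] * dr))
A[0, 0], A[0, 1] = -1.0, 1.0   # symmetry: zero gradient at the center
A[-1, -1] = 1.0
b[-1] = 1.0                    # normalized surface concentration

c = np.linalg.solve(A, b)
print(f"center/surface nutrient ratio: {c[0]:.3f}")
```

For these parameters the nutrient level at the core drops well below the surface value, the regime in which growth becomes diffusion-limited.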
Procedia PDF Downloads 111
72 Investigation of Linezolid, 127I-Linezolid and 131I-Linezolid Effects on Slime Layer of Staphylococcus with Nuclear Methods
Authors: Hasan Demiroğlu, Uğur Avcıbaşı, Serhan Sakarya, Perihan Ünak
Abstract:
Implanted devices are increasingly used in modern medicine to relieve pain or improve a compromised function. Implant-associated infections represent an emerging complication, caused by organisms which adhere to the implant surface and grow embedded in a protective extracellular polymeric matrix, known as a biofilm. In addition, the microorganisms within biofilms enter a stationary growth phase and become phenotypically resistant to most antimicrobials, frequently causing treatment failure. In such cases, surgical removal of the implant is often required, causing high morbidity and substantial healthcare costs. Staphylococcus aureus is the most common pathogen causing implant-associated infections. Successful treatment of these infections includes early surgical intervention and antimicrobial treatment with bactericidal drugs that also act on the surface-adhering microorganisms. Linezolid (Lin) is a promising antimicrobial with anti-staphylococcal activity, used for the treatment of MRSA infections. Linezolid is a synthetic antimicrobial of the oxazolidinone class, with a bacteriostatic or dose-dependent bactericidal mechanism against gram-positive bacteria. The intensive use of antibiotics has led to the emergence of multi-resistant organisms over the years, and major problems have arisen in the treatment of the infections they cause. While new drugs have been developed worldwide, infections caused by microorganisms that have gained resistance to these drugs continue to be reported, and the scale of the problem grows steadily. Scientific studies on bacterial biofilm formation have increased in recent years. For this purpose, we investigated the activity of Lin, Lin radiolabeled with 131I (131I-Lin) and cold iodinated Lin (127I-Lin) against clinical strains of Staphylococcus aureus DSM 4910 in biofilm. In the first stage, radio and cold labeling studies were performed.
Quality-control studies of Lin and the iodo (radio and cold) Lin derivatives were carried out using thin layer radiochromatography (TLRC) and high performance liquid chromatography (HPLC). In this context, the labeling yield was found to be about 86±2% for 131I-Lin. The minimal inhibitory concentration (MIC) of Lin, 127I-Lin and 131I-Lin for the Staphylococcus aureus DSM 4910 strain was found to be 1 µg/mL. In time-kill studies, Lin, 127I-Lin and 131I-Lin produced ≥3 log10 decreases in viable counts (cfu/mL) within 6 h at 2- and 4-fold MIC, respectively. No viable bacteria were observed within 24 h of the experiments. Biofilm eradication of S. aureus started at 64 µg/mL of Lin, 127I-Lin and 131I-Lin, and OD630 was 0.507±0.092, 0.589±0.058 and 0.266±0.047, respectively. The media control of biofilm-producing Staphylococcus was 1.675±0.01 (OD630). 131I and 127I alone did not have any effect on biofilms. Lin and 127I-Lin were found to be less effective than 131I-Lin at killing cells in biofilm and at biofilm eradication. Our results demonstrate that 131I-Lin has potent anti-biofilm activity against S. aureus compared to Lin, 127I-Lin and the media control. This suggests that 131I may have a damaging effect on the biofilm structure.
Keywords: iodine-131, linezolid, radiolabeling, slime layer, Staphylococcus
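The ≥3 log10 criterion used in the time-kill studies above can be sketched as a minimal helper; the viable counts in the example are hypothetical, not the study's measurements.

```python
import math

# Minimal sketch: log10 reduction in viable counts (cfu/mL), the quantity
# used to judge time-kill assays; a drop of >= 3 log10 is the usual
# bactericidal cut-off. The counts below are hypothetical.
def log10_reduction(cfu_start, cfu_end):
    return math.log10(cfu_start) - math.log10(cfu_end)

drop = log10_reduction(1e8, 5e4)   # e.g. 10^8 -> 5x10^4 cfu/mL after 6 h
print(f"{drop:.2f} log10, bactericidal: {drop >= 3}")
```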
Procedia PDF Downloads 558
71 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding
Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta
Abstract:
Chimneys are generally tall and slender structures with circular cross-sections, due to which they are highly prone to wind forces. Wind exerts pressure on the wall of the chimneys, which produces unwanted forces. Vortex-induced oscillation is one such excitation which can lead to the failure of chimneys. Therefore, vortex-induced oscillation of chimneys is of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects. Comparatively, very few prototype measurement data have been recorded to verify the proposed theoretical models. For this reason, the theoretical models developed with the help of experimental laboratory data are utilized for analyzing chimneys for vortex-induced forces. This calls for reliability analysis of the predicted responses of chimneys to the vortex shedding phenomenon. Although considerable literature exists on the vortex-induced oscillation of chimneys, including code provisions, the reliability analysis of chimneys against failure caused by vortex shedding is scarce. In the present study, the reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are therefore ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency domain spectral analysis using a matrix approach.
For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for the aero-elastic effects. The double barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement of the chimney. Assuming the annual distribution of the mean wind velocity to be a Gumbel type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement of the chimney is determined. The reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a wall thickness of 0.3 m has been taken as an illustrative example. The terrain condition is assumed to be that corresponding to a city center. The expression for the PSDF of the vortex shedding force is taken as that proposed by Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement of the chimney is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.
Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration
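The fragility-curve construction above can be sketched numerically: the annual crossing probability at a given tip-displacement threshold is the conditional crossing probability, weighted by the Gumbel type-I density of the annual-maximum mean wind speed and integrated over wind speed. The Gumbel parameters and the conditional model g(v) below are hypothetical stand-ins, not the study's Vanmarcke double-barrier formulation.

```python
import numpy as np

# Minimal sketch of a fragility curve: annual probability that the chimney
# tip displacement exceeds a threshold, with the annual-maximum mean wind
# speed Gumbel type-I distributed. Site parameters and the conditional
# crossing model are hypothetical placeholders.
def gumbel_pdf(v, mu=25.0, beta=4.0):
    z = (v - mu) / beta
    return np.exp(-(z + np.exp(-z))) / beta

def p_cross_given_wind(v, threshold):
    # Placeholder: crossing becomes likely once wind-induced response
    # (here ~0.02*v^2) approaches the displacement threshold.
    return 1.0 - np.exp(-np.exp(-(threshold - 0.02 * v**2)))

def annual_crossing_probability(threshold):
    v = np.linspace(0.0, 80.0, 2000)
    f = p_cross_given_wind(v, threshold) * gumbel_pdf(v)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(v)))  # trapezoid rule

for d in (5.0, 10.0, 20.0):          # threshold tip displacements
    print(d, annual_crossing_probability(d))
```

The printed probabilities decrease monotonically with the threshold, which is the qualitative shape of a fragility curve.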
Procedia PDF Downloads 159
70 Research on the Effect of Coal Ash Slag Structure Evolution on Its Flow Behavior During Co-gasification of Coal and Indirect Coal Liquefaction Residue
Authors: Linmin Zhang
Abstract:
Entrained-flow gasification technology is considered the most promising gasification technology because of its clean and efficient utilization characteristics. The stable fluidity of slag at high temperatures is the key factor affecting long-period operation of the gasifier. The diversity of and differences among coal ash-slag systems make it difficult to meet the requirements for stable slagging in entrained-flow gasifiers. Therefore, coal blending or the addition of fluxes has long been used in industry to improve the flow behavior of coal ash. As a by-product of the indirect coal liquefaction process, indirect coal liquefaction residue (ICLR) is a kind of industrial solid waste that is usually disposed of by stacking or landfilling. However, this disposal method not only occupies land resources but also causes serious pollution to soil and water bodies through leachate containing toxic and harmful metals. As a carbon-containing material, ICLR is not only a waste but also an energy resource. Using existing industrial gasifiers to co-gasify ICLR with coal can not only turn industrial solid waste into fuel but also save coal resources. Moreover, ICLR usually has an ash chemical composition distinct from that of coal, which will affect the slagging performance of the gasifier. Therefore, exploring the effect of the ash introduced by ICLR on coal ash flow behavior can not only improve the slagging performance and gasification efficiency of entrained-flow gasifiers by exploiting the distinctive ash chemistry of ICLR, but also provide theoretical support for the large-scale consumption of industrial solid waste. Combining molecular dynamics simulation with Raman spectroscopy experiments, the effect of ICLR addition on slag structure and fluidity was explained, and the relationship between the evolution of the short- and medium-range microstructure of the slag and its macroscopic flow behavior was discussed.
The research found that the high silicon and aluminum content in coal ash led to the formation of complex [SiO₄]⁴⁻ tetrahedron and [AlO₄]⁵⁻ tetrahedron structures at high temperature, and the [SiO₄]⁴⁻ and [AlO₄]⁵⁻ tetrahedra were connected by oxygen atoms to form multi-membered ring structures with a high polymerization degree. Due to the action of the multi-membered ring structure, the internal friction in the slag increased, and the viscosity was accordingly higher at the macro level. As a network-modifying ion, Fe²⁺ could replace Si⁴⁺ and Al³⁺ in the multi-membered ring structure and combine with O²⁻, which destroys the bridging oxygen (BO) structure and transforms the more complex tricluster oxygen (TO) and bridging oxygen into simple non-bridging oxygen (NBO) structures. As a result, a large number of multi-membered rings with high polymerization degrees were depolymerized into low-membered rings with low polymerization degrees. The evolution of oxygen types and ring structures in the slag reduced the structural complexity and polymerization degree of the coal ash slag, resulting in a decrease in its viscosity.
Keywords: ash slag, coal gasification, fluidity, industrial solid waste, slag structure
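The degree of polymerization discussed above is commonly summarized by the NBO/T ratio (non-bridging oxygens per tetrahedrally coordinated cation) computed from the ash oxide composition. The sketch below uses one common convention (Al³⁺ charge-balanced by network modifiers) and a hypothetical composition, not the study's actual ash analyses.

```python
# Minimal sketch: NBO/T ratio, a common proxy for the degree of
# polymerization of an aluminosilicate slag (higher NBO/T = more
# non-bridging oxygen = less polymerized = lower viscosity).
# Mole fractions are hypothetical; the formula follows the usual
# convention that Al3+ is charge-balanced by network modifiers.
def nbo_per_t(x):
    modifiers = x.get("CaO", 0) + x.get("MgO", 0) + x.get("FeO", 0) \
        + x.get("Na2O", 0) + x.get("K2O", 0)
    nbo = 2.0 * (modifiers - x.get("Al2O3", 0))
    t = x.get("SiO2", 0) + 2.0 * x.get("Al2O3", 0)
    return nbo / t

base_ash = {"SiO2": 0.55, "Al2O3": 0.20, "CaO": 0.15, "FeO": 0.10}
with_iclr = {"SiO2": 0.48, "Al2O3": 0.17, "CaO": 0.15, "FeO": 0.20}  # Fe-richer
print(nbo_per_t(base_ash), nbo_per_t(with_iclr))  # Fe-rich slag scores higher
```

Consistent with the abstract, adding an Fe-rich ash raises NBO/T, i.e. depolymerizes the melt.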
Procedia PDF Downloads 29
69 Physico-Mechanical Behavior of Indian Oil Shales
Authors: K. S. Rao, Ankesh Kumar
Abstract:
The search for alternative energy sources to petroleum has intensified because of growing demand and the depletion of petroleum reserves. Therefore, the importance of oil shales as an economically viable substitute has increased manyfold in the last 20 years. Technologies such as hydro-fracturing have opened the field of oil extraction from these unconventional rocks. Oil shale is a compact laminated rock of sedimentary origin containing organic matter known as kerogen, which yields oil when distilled. Oil shales are formed from the contemporaneous deposition of fine-grained mineral debris and organic degradation products derived from the breakdown of biota. Conditions required for the formation of oil shales include abundant organic productivity, early development of anaerobic conditions, and a lack of destructive organisms. These rocks have not gone through high-temperature and high-pressure conditions in nature. The most common approach to oil extraction is to break down the bonds of the organic matter, which involves a retorting process. The two approaches to retorting are surface retorting and in-situ processing. The most environmentally friendly approach to extraction is in-situ processing. The three steps involved in this process are fracturing, injection to achieve communication, and fluid migration at the underground location. Upon heating (retorting) oil shale at temperatures in the range of 300 to 400°C, the kerogen decomposes into oil, gas and residual carbon in a process referred to as pyrolysis. Therefore, it is very important to understand the physico-mechanical behavior of such rocks in order to improve the technology for in-situ extraction. It is clear from past research and physical observations that these rocks behave anisotropically, so it is very important to understand their mechanical behavior under high pressure at different orientation angles for the economical use of these resources.
Knowing the engineering behavior under the above conditions will allow us to simulate deep-ground retorting conditions numerically and experimentally. Many researchers have investigated the effect of organic content on the engineering behavior of oil shale, but the coupled effect of the organic and inorganic matrix is yet to be analyzed. The favourable characteristics of Assam coal for conversion to liquid fuels have been known for a long time. Studies have indicated that these coals and carbonaceous shale constitute the principal source rocks that have generated the hydrocarbons produced from the region. Rock cores of the representative samples were collected by performing on-site drilling, as coring in the laboratory is very difficult due to the rock's highly anisotropic nature. Different tests are performed to understand the petrology of these samples; further, chemical analyses are done to precisely quantify the organic content in these rocks. The mechanical properties of these rocks are investigated at different anisotropy angles. The results obtained from petrology and chemical analysis are then correlated with the mechanical properties. These properties and correlations will further help in increasing the producibility of these rocks. It is well established that the organic content is negatively correlated with tensile strength, compressive strength and modulus of elasticity.
Keywords: oil shale, producibility, hydro-fracturing, kerogen, petrology, mechanical behavior
Procedia PDF Downloads 347
68 Characteristics-Based LQ-Control of Cracking Reactor by Integral Reinforcement
Authors: Jana Abu Ahmada, Zaineb Mohamed, Ilyasse Aksikas
Abstract:
The linear quadratic control of a system of hyperbolic first-order partial differential equations (PDEs) is presented. The aim of this research is to control chemical reactions. This is achieved by converting the PDE system to ordinary differential equations (ODEs) using the method of characteristics, after which the reduced system is controlled using integral reinforcement learning. The designed controller is applied to a catalytic cracking reactor. Background: Transport-reaction systems cover a large range of chemical and biochemical processes. They are best described by nonlinear PDEs derived from mass and energy balances. The main application considered in this work is the catalytic cracking reactor. Indeed, the cracking reactor is widely used to convert high-boiling, high-molecular-weight hydrocarbon fractions of petroleum crude oils into more valuable gasoline, olefinic gases, and other products. On the other hand, control of PDE systems is an important and rich area of research. One of the main control techniques is feedback control. This type of control utilizes information coming from the system to correct its trajectories and drive it to a desired state. Moreover, feedback control rejects disturbances and reduces the effects of variation in the plant parameters. Linear-quadratic control is a feedback control since the developed optimal input is expressed as feedback on the system state to exponentially stabilize and drive a linear plant to the steady state while minimizing a cost criterion. The integral reinforcement learning policy iteration technique is a strong method that solves the linear quadratic regulator problem for continuous-time systems online in real time, using only partial information about the system dynamics (i.e. the drift dynamics A of the system need not be known), and without requiring measurements of the state derivative. This is, in effect, a direct (i.e.
no system identification procedure is employed) adaptive control scheme for partially unknown linear systems that converges to the optimal control solution. Contribution: The goal of this research is to develop a characteristics-based optimal controller for a class of hyperbolic PDEs and apply the developed controller to a catalytic cracking reactor model. In the first part, an algorithm to control a class of hyperbolic PDE systems is developed. The method of characteristics is employed to convert the PDE system into a system of ODEs, and the control problem is then solved along the characteristic curves. The reinforcement technique is implemented to find the state-feedback matrix. In the second part, the developed algorithm is applied to the important case of a catalytic cracking reactor. The main objective is to use the inlet fraction of gas oil as a manipulated variable to drive the process state towards desired trajectories. The outcome of this challenging research could provide a significant technological innovation for the gas industries, since the catalytic cracking reactor is one of the most important conversion processes in petroleum refineries.
Keywords: PDEs, reinforcement iteration, method of characteristics, Riccati equation, cracking reactor
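The policy iteration underlying the integral reinforcement learning scheme described above can be sketched in its model-based form (Kleinman's iteration), where each step solves a Lyapunov equation using the known drift matrix A; IRL replaces that solve with integrals of measured state trajectories so that A is not needed. The 2x2 system below is a hypothetical stand-in, not the cracking-reactor model.

```python
import numpy as np

# Minimal sketch: Kleinman policy iteration for continuous-time LQR.
# Each iteration solves the Lyapunov equation Ak' P + P Ak = -(Q + K' R K)
# for the current policy K, then improves the policy: K <- R^-1 B' P.
# The iterates converge to the solution of the algebraic Riccati equation.
def lyapunov(Ak, M):
    # Solve Ak' P + P Ak = -M via the Kronecker/vec identity.
    n = Ak.shape[0]
    L = np.kron(np.eye(n), Ak.T) + np.kron(Ak.T, np.eye(n))
    return np.linalg.solve(L, -M.flatten(order="F")).reshape(n, n, order="F")

def lqr_policy_iteration(A, B, Q, R, K, iters=20):
    for _ in range(iters):                     # K must start stabilizing
        P = lyapunov(A - B @ K, Q + K.T @ R @ K)
        K = np.linalg.solve(R, B.T @ P)        # policy improvement step
    return P, K

A = np.array([[0.0, 1.0], [-2.0, -3.0]])       # hypothetical stable plant
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
P, K = lqr_policy_iteration(A, B, Q, R, K=np.zeros((1, 2)))
residual = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
print(np.abs(residual).max())                  # ~0: P solves the Riccati eq.
```

Since the example plant is open-loop stable, the zero policy is a valid stabilizing initial K; in general a stabilizing initial policy must be supplied.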
Procedia PDF Downloads 91
67 Development and Validation of a Quantitative Measure of Engagement in the Analysing Aspect of Dialogical Inquiry
Authors: Marcus Goh Tian Xi, Alicia Chua Si Wen, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee
Abstract:
The Map of Dialogical Inquiry provides a conceptual look at the underlying nature of future-oriented skills. According to the Map, learning is learner-oriented, with conversational time shifted from teachers to learners, who play a strong role in deciding what and how they learn. For example, in courses operating on the principles of Dialogical Inquiry, learners were able to leave the classroom with a deeper understanding of the topic, broader exposure to differing perspectives, and stronger critical thinking capabilities, compared to traditional approaches to teaching. Despite its contributions to learning, the Map is grounded in a qualitative approach, both in its development and in its application for providing feedback to learners and educators. Studies hinge on open-ended responses by Map users, which can be time-consuming and resource-intensive. The present research is motivated by this gap in practicality and aims to develop and validate a quantitative measure of the Map. In addition, a quantifiable measure may also strengthen applicability by making learning experiences trackable and comparable. The Map outlines eight learning aspects that learners should engage holistically. This research focuses on the Analysing aspect of learning. According to the Map, Analysing has four key components: liking or engaging in logic, using interpretative lenses, seeking patterns, and critiquing and deconstructing. Existing scales of constructs (e.g., critical thinking, rationality) related to these components were identified, from which items for the current scale could be adapted. Specifically, items were phrased beginning with an “I”, followed by an action phrase, to assess learners' engagement with Analysing either in general or in classroom contexts.
Paralleling standard scale development procedure, the 26-item Analysing scale was administered to 330 participants alongside existing scales with varying levels of association to Analysing, to establish construct validity. Subsequently, the scale was refined and its dimensionality, reliability, and validity were determined. Confirmatory factor analysis (CFA) revealed whether scale items loaded onto the four factors corresponding to the components of Analysing. To refine the scale, items were systematically removed via an iterative procedure, according to their factor loadings and the results of likelihood ratio tests at each step. Eight items were removed this way. The Analysing scale is better conceptualised as unidimensional, rather than comprising the four components identified by the Map, for three reasons: 1) the covariance matrix of the model specified for the CFA was not positive definite, 2) correlations among the four factors were high, and 3) exploratory factor analyses did not yield an easily interpretable factor structure of Analysing. Regarding validity, since the Analysing scale had higher correlations with conceptually similar scales than with conceptually distinct scales, with minor exceptions, construct validity was largely established. Overall, the satisfactory reliability and validity of the scale suggest that the current procedure can result in a valid and easy-to-use measure for each aspect of the Map.
Keywords: analytical thinking, dialogical inquiry, education, lifelong learning, pedagogy, scale development
Procedia PDF Downloads 91
66 Antimicrobial Nanocompositions Made of Amino Acid Based Biodegradable Polymers
Authors: Nino Kupatadze, Mzevinar Bedinashvili, Tamar Memanishvili, Manana Gurielidze, David Tugushi, Ramaz Katsarava
Abstract:
Bacteria easily colonize the surfaces of tissues, surgical devices (implants, orthopedics, catheters, etc.), and instruments, causing surgical device-related infections. Therefore, the battle against bacteria and the prevention of biofilm formation on surgical devices is one of the main challenges of biomedicine today. Our strategy for solving this problem is to use antimicrobial polymeric coatings as effective “shields” to protect surfaces from bacterial colonization and biofilm formation. One of the most promising approaches looks to be the use of antimicrobial bioerodible polymeric nanocomposites containing silver nanoparticles (AgNPs). We assume that the combination of an erodible polymer with a strong bactericide should hinder bacteria from occupying the surface and forming a biofilm. It has to be noted that nanocomposites of this kind are also promising as wound dressing materials for treating infected superficial wounds. Various synthetic and natural polymers have been used for creating biocomposites containing AgNPs, serving both as particle stabilizers and as matrices forming elastic films at surfaces. One of the most effective systems for fabricating AgNPs is an ethanol solution of polyvinylpyrrolidone (PVP) with dissolved AgNO3: ethanol serves as the AgNO3 reductant and PVP as the AgNPs stabilizer (through the interaction of the nanoparticles with the nitrogen atom of the amide group). Though PVP is a biocompatible and film-forming polymer, it is not a good candidate for designing either a "biofilm shield" or a wound dressing material because of its high solubility in water: although the solubility of PVP provides the desirable release of AgNPs from the matrix, the coating is easily washed away from the surface. More promising as matrices are water-insoluble but bioerodible polymers that can provide the release of AgNPs and form long-lasting coatings on surfaces.
For creating bioerodible, water-insoluble antimicrobial coatings containing AgNPs, we selected amino acid based biodegradable polymers (AABBPs): poly(ester amide)s, poly(ester urea)s, and their copolymers containing amide and related groups capable of stabilizing AgNPs. Among the huge variety of AABBPs reported, we selected the polymers soluble in ethanol. For preparing AgNP-containing nanocompositions, AABBPs and AgNO3 were dissolved in ethanol and subjected to photochemical reduction under daylight irradiation. The formation of AgNPs was observed visually by the brownish-red coloring of the solutions. The obtained AgNPs were characterized by UV spectroscopy, transmission electron microscopy (TEM), and dynamic light scattering (DLS). According to the UV and TEM data, the photochemical reduction resulted in presumably spherical AgNPs with a rather high contribution of particles below 10 nm, which are known to be responsible for the antimicrobial activity. The DLS study showed that the average size of the nanoparticles formed after photo-reduction in ethanol solution was within 50 nm. The in vitro antimicrobial activity study of the new nanocomposite material is in progress.
Keywords: nanocomposites, silver nanoparticles, polymer, biodegradable
Procedia PDF Downloads 396
65 Monitoring the Production of Large Composite Structures Using Dielectric Tool Embedded Capacitors
Authors: Galatee Levadoux, Trevor Benson, Chris Worrall
Abstract:
With the rise of public awareness of climate change comes an increasing demand for renewable sources of energy. As a result, the wind power sector is striving to manufacture longer, more efficient and reliable wind turbine blades. Currently, one of the leading causes of blade failure in service is improper cure of the resin during manufacture. The infusion process creating the main part of the composite blade structure remains a critical step that is yet to be monitored in real time. This stage consists of a viscous resin being drawn into a mould under vacuum, then undergoing a curing reaction until solidification. Successful infusion assumes the resin fills all the voids and cures completely. Given that the electrical properties of the resin change significantly during its solidification, both the filling of the mould and the curing reaction can be followed using dielectrometry. However, industrially available dielectric sensors are currently too small to monitor the entire surface of a wind turbine blade. The aim of the present research project is to scale up the dielectric sensor technology and develop a device able to monitor the manufacturing process of large composite structures, assessing the conformity of the blade before it even comes out of the mould. An array of flat copper wires acting as electrodes is embedded in a polymer matrix fixed in an infusion mould. A multi-frequency analysis from 1 Hz to 10 kHz is performed during the filling of the mould with an epoxy resin and the hardening of the said resin. By following the variations of the complex admittance Y*, the filling of the mould and the curing process are monitored. Results are compared to numerical simulations of the sensor in order to validate a virtual cure-monitoring system. The results obtained by drawing glycerol on top of the copper sensor displayed a linear relation between the wetted length of the sensor and the complex admittance measured.
Drawing epoxy resin on top of the sensor and letting it cure at room temperature for 24 hours provided characteristic curves matching those obtained when conventional interdigitated sensors are used to follow the same reaction. The response from the developed sensor has shown the different stages of the polymerization of the resin, validating the geometry of the prototype. The model created and analysed using COMSOL has shown that the dielectric cure process can be simulated, so long as sufficiently accurate time- and temperature-dependent material properties can be determined. The model can be used to help design larger sensors suitable for use with full-sized blades. The preliminary results obtained with the sensor prototype indicate that the infusion and curing process of an epoxy resin can be followed with the chosen configuration on a scale of several decimeters. Further work is to be devoted to studying the influence of the sensor geometry and the infusion parameters on the results obtained. Ultimately, the aim is to develop a larger-scale sensor able to monitor the flow and cure of large composite panels industrially.
Keywords: composite manufacture, dielectrometry, epoxy, resin infusion, wind turbine blades
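The complex admittance Y* tracked above can be sketched with the simplest equivalent circuit for resin between two electrodes, a conductance G in parallel with a capacitance C, so Y* = G + jωC. As cure proceeds, ionic mobility drops and G falls by orders of magnitude, which is what the multi-frequency sweep detects. All component values below are hypothetical, not measured properties of the resin used in the study.

```python
import numpy as np

# Minimal sketch: complex admittance Y* = G + j*omega*C of resin between
# electrodes, modeled as a parallel conductance/capacitance. G and C values
# for "uncured" and "cured" states are hypothetical illustrations.
def admittance(freq_hz, G_siemens, C_farad):
    omega = 2.0 * np.pi * freq_hz
    return G_siemens + 1j * omega * C_farad

freqs = np.array([1.0, 100.0, 10_000.0])        # the 1 Hz - 10 kHz sweep
uncured = admittance(freqs, G_siemens=1e-4, C_farad=1e-9)
cured = admittance(freqs, G_siemens=1e-8, C_farad=5e-10)
print(np.abs(uncured) / np.abs(cured))          # |Y*| drops as resin cures
```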
Procedia PDF Downloads 166
64 Scenario-Based Scales and Situational Judgment Tasks to Measure the Social and Emotional Skills
Authors: Alena Kulikova, Leonid Parmaksiz, Ekaterina Orel
Abstract:
Social and emotional skills are considered by modern researchers to be predictors of a person's success, both in specific areas of activity and in life as a whole. The popularity of this scientific direction has ensured the emergence of a large number of practices aimed at developing and evaluating socio-emotional skills. Assessment of social and emotional development is carried out at the national level, as well as at the level of individual regions and institutions. Although many of the existing social and emotional skills assessment tools are quite convenient and reliable, new technologies and task formats keep appearing that improve the basic characteristics of such tools. Thus, the goal of the current study is to develop a tool for assessing social and emotional skills such as emotion recognition, emotion regulation, empathy and a culture of self-care. To develop the tool, the Rasch-Gutman scenario-based approach was used. This approach has shown its reliability and merit for measuring various complex constructs: parental involvement; teacher practices that support cultural diversity and equity; willingness to participate in the life of the community after psychiatric rehabilitation; educational motivation; and others. To assess emotion recognition, we used a situational judgment task based on the OCC (Ortony, Clore, and Collins) theory of emotions. The main advantage of these two approaches compared to classical Likert scales is that they reduce social desirability in answers. A field test to check the psychometric properties of the developed instrument was conducted. The instrument was developed for the presidential autonomous non-profit organization “Russia - Land of Opportunity” for nationwide soft skills assessment among higher education students. The sample for the field test consisted of 500 students aged from 18 to 25 (mean = 20; standard deviation = 1.8), 71% female.
Of the students, 67% were only studying and not currently working. The sample also included 500 employed adults aged from 26 to 65 (mean = 42.5; SD = 9), 57% female. Analysis of the psychometric characteristics of the scales was carried out using the methods of item response theory (IRT). A one-parameter rating scale model (RSM) and the graded response model (GRM) of modern test theory were applied. The GRM is a polytomous extension of the dichotomous two-parameter model of modern test theory (2PL), based on the cumulative logit function for modeling the probability of a correct answer. The validity of the developed scales was assessed using correlation analysis and the multitrait-multimethod matrix (MTMM). The developed instrument showed good psychometric quality and can be used by HR specialists or educational management. The detailed results of the psychometric study of the quality of the instrument, including the functioning of the tasks of each scale, will be presented. Also, the results of the validity study by MTMM analysis will be discussed.
Keywords: social and emotional skills, psychometrics, MTMM, IRT
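The graded response model mentioned above can be sketched directly: the probability of responding in category k or above follows a 2PL cumulative logit curve, and category probabilities are differences of adjacent cumulative curves. The discrimination a and thresholds b below are hypothetical item parameters, not the study's estimates.

```python
import math

# Minimal sketch of the Graded Response Model (GRM): P(X >= k | theta) is a
# 2PL cumulative logit, and the probability of each response category is the
# difference between adjacent cumulative curves. Parameters are hypothetical.
def cumulative(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def grm_category_probs(theta, a, thresholds):
    # Boundary cumulatives: P(X >= lowest category) = 1, P(X > highest) = 0.
    cums = [1.0] + [cumulative(theta, a, b) for b in thresholds] + [0.0]
    return [cums[k] - cums[k + 1] for k in range(len(thresholds) + 1)]

probs = grm_category_probs(theta=0.5, a=1.4, thresholds=[-1.0, 0.0, 1.2])
print([round(p, 3) for p in probs])  # four category probabilities, sum to 1
```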
Procedia PDF Downloads 74
63 The Development of Assessment Criteria Framework for Sustainable Healthcare Buildings in China
Authors: Chenyao Shen, Jie Shen
Abstract:
The rating system provides an effective framework for assessing building environmental performance and integrating sustainable development into building and construction processes, as it can be used as a design tool for developing appropriate sustainable design strategies and determining performance measures to guide the sustainable design and decision-making processes. Healthcare buildings are resource-intensive (water, energy, etc.). To maintain high-cost operations and complex medical facilities, they require a great deal of hazardous and non-hazardous materials and stringent control of environmental parameters, and they are responsible for producing polluting emissions. Compared with other types of buildings, the full-life-cycle environmental impact of healthcare buildings is particularly large. With broad recognition among designers and operators that energy use can be reduced substantially, many countries have set up their own green rating systems for healthcare buildings. There are four main green healthcare building evaluation systems widely acknowledged in the world: the Green Guide for Health Care (GGHC), jointly organized by the United States HCWH and CMPBS in 2003; BREEAM Healthcare, issued by the British Building Research Establishment (BRE) in 2008; the Green Star-Healthcare v1 tool, released by the Green Building Council of Australia (GBCA) in 2009; and LEED Healthcare 2009, released by the United States Green Building Council (USGBC) in 2011. In addition, the German Sustainable Building Council (DGNB) has been developing the German Sustainable Building Evaluation Criteria for healthcare (DGNB HC). In China, more and more scholars and policy makers have recognized the importance of sustainability assessment and have adapted some of these tools and frameworks.
China’s first comprehensive assessment standard for green building (the GBTs) was issued in 2006 (last updated in 2014), promoting sustainability in the built environment and raising awareness of environmental issues among architects, engineers, contractors, and the public. However, healthcare buildings were not covered by the GBTs because of their complex medical procedures, strict requirements for the indoor/outdoor environment, and the energy consumption of various functional rooms. Learning from the experience of GGHC, BREEAM, and LEED HC described above, China’s first assessment criteria for green hospital/healthcare buildings were finally released in December 2015. Combining quantitative and qualitative assessment criteria, the standard highlights the differences between healthcare and other public buildings in meeting the functional needs of medical facilities and special groups. This paper focuses on the assessment criteria framework for sustainable healthcare buildings, for which the comparison of different rating systems is essential. Descriptive analysis is conducted together with cross-matrix analysis to reveal rich information on green assessment criteria in a coherent manner. The research investigates whether the green elements for healthcare buildings in China differ from those used in other countries, and how China's assessment criteria framework can be improved.
Keywords: assessment criteria framework, green building design, healthcare building, building performance rating tool
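A cross-matrix analysis of this kind can be represented as a simple criterion-by-system coverage table. The category names and coverage flags below are illustrative assumptions chosen to demonstrate the structure of the comparison, not data from the paper.

```python
# Hypothetical cross-matrix: which assessment categories each rating
# system covers. Entries are illustrative, not actual criteria mappings.
SYSTEMS = ["GGHC", "BREEAM HC", "Green Star HC", "LEED HC", "China GB/T"]
CRITERIA = {
    "Energy efficiency":     {"GGHC", "BREEAM HC", "Green Star HC", "LEED HC", "China GB/T"},
    "Water conservation":    {"GGHC", "BREEAM HC", "LEED HC", "China GB/T"},
    "Medical functionality": {"GGHC", "China GB/T"},
}

def coverage_matrix(criteria, systems):
    """Return a criterion-by-system matrix of 1/0 coverage flags."""
    return {c: [1 if s in covered else 0 for s in systems]
            for c, covered in criteria.items()}

matrix = coverage_matrix(CRITERIA, SYSTEMS)
for criterion, row in matrix.items():
    print(f"{criterion:24s} {row}")
```

Reading the matrix row-wise shows which criteria are universal (e.g. energy) and which are system-specific, which is the kind of pattern a cross-matrix comparison is meant to surface.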
Procedia PDF Downloads 146
62 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task in most criminal investigations. The criminal investigation system employs specially trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of deep learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have proven to be very effective in image generation. In this study, a trained GAN, conditioned on textual features such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The proposed system aims to map the textual features derived from verbal descriptions onto corresponding facial features in the generated images. With this, it becomes possible to generate many reasonably accurate alternatives from which the witness can attempt to identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing; the output is then fed into the GAN for synthesizing photo-realistic images.
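The conditioning step described above, feeding the RNN's text embedding into the GAN generator, typically amounts to concatenating the embedding with a random noise vector. The vector dimensions below are illustrative assumptions, not values from the paper.

```python
import random

# Dimensions are illustrative assumptions, not values from the paper.
EMBED_DIM = 128   # size of the RNN sentence embedding
NOISE_DIM = 100   # size of the random noise vector

def condition_generator_input(text_embedding, noise_dim=NOISE_DIM):
    """Concatenate the RNN text embedding with a Gaussian noise vector.

    The joint vector is what a conditional GAN generator consumes: the
    embedding keeps the output faithful to the description, while the
    noise lets repeated samples vary in unconstrained facial details.
    """
    noise = [random.gauss(0.0, 1.0) for _ in range(noise_dim)]
    return list(text_embedding) + noise
```

Because the noise component changes on every call while the embedding stays fixed, sampling the generator repeatedly yields multiple plausible faces for the same description, which is exactly what lets the witness compare alternatives.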
Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the 'CelebA' training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals, obtained from Interpol or other law enforcement agencies, are used as inputs to the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of a criminal in order to calculate the similarities. Two metrics are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these metrics would demonstrate the accuracy of the approach and support its use as an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the ratio of criminal cases that can ultimately be resolved using eyewitness information.
Keywords: RNN, GAN, NLP, facial composition, criminal investigation
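Of the two evaluation metrics, PSNR has a closed form that is easy to sketch: it is the log-ratio of the maximum possible pixel intensity to the mean squared error between the two images. The sketch below operates on flat lists of pixel intensities for simplicity; a real evaluation would use an image library's SSIM/PSNR routines.

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equal-sized images,
    given as flat lists of pixel intensities in [0, max_val]."""
    # Mean squared error over all pixels
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR means the generated sample is closer to the ground-truth image; identical images give an infinite PSNR, and values above roughly 30 dB are conventionally considered good reconstructions for 8-bit images.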
Procedia PDF Downloads 159