Search results for: sensitivity matrix
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3929

269 Correlation between Defect Suppression and Biosensing Capability of Hydrothermally Grown ZnO Nanorods

Authors: Mayoorika Shukla, Pramila Jakhar, Tejendra Dixit, I. A. Palani, Vipul Singh

Abstract:

Biosensors are analytical devices with a wide range of applications in biological, chemical, environmental and clinical analysis. A biosensor comprises a bio-recognition layer, with biomolecules (enzymes, antibodies, DNA, etc.) immobilized over it for detection of the analyte, and a transducer, which converts the biological signal into an electrical signal. The performance of a biosensor depends primarily on the bio-recognition layer, which therefore has to be chosen wisely. In this regard, nanostructures of metal oxides such as ZnO, SnO2, V2O5 and TiO2 have been explored extensively as bio-recognition layers. Recently, ZnO has attracted the attention of researchers due to its unique properties, such as its high iso-electric point, biocompatibility, stability, high electron mobility and high exciton binding energy. Although there have been many reports on the usage of ZnO as a bio-recognition layer, to the authors' knowledge none has observed a correlation between optical properties, such as defect suppression, and the biosensing capability of the sensor. Here, ZnO nanorods (ZNR) have been synthesized by a low-cost, simple and low-temperature hydrothermal growth process over a Platinum (Pt)-coated glass substrate. The ZNR were synthesized in two steps: initially, a seed layer was coated over the substrate (Pt-coated glass), which was then immersed into a nutrient solution of zinc nitrate and hexamethylenetetramine (HMTA) with in situ addition of KMnO4. The addition of KMnO4 was observed to have a profound effect on the growth rate anisotropy of the ZnO nanostructures. Clustered and powdery growth of ZnO was observed without the addition of KMnO4, whereas with its addition during growth, uniform and crystalline ZNR were grown over the substrate. Moreover, the same resulted in suppression of defects, as observed by normalized photoluminescence (PL) spectra, since KMnO4 is a strong oxidizing agent which provides an oxygen-rich growth environment.
Further, to explore the correlation between defect suppression and the biosensing capability of the ZNR, glucose oxidase (GOx) was immobilized over it using a physical adsorption technique, followed by drop casting of Nafion. Since the main objective of the work was to analyze the effect of defect suppression on biosensing capability, GOx was chosen as a model enzyme, and electrochemical amperometric glucose detection was performed. The incorporation of KMnO4 during growth resulted in variation of the optical and charge transfer properties of the ZNR, which in turn was observed to have a deep impact on the biosensor's figures of merit. The sensitivity of the biosensor was found to increase by 12-18 times due to the variations introduced by the addition of KMnO4 during growth. Amperometric detection of glucose in a continuously stirred buffer solution was performed. Interestingly, defect suppression was observed to contribute towards the improvement of biosensor performance. The detailed mechanism of growth of the ZNR, along with the overall influence of defect suppression on the sensing capabilities of the resulting enzymatic electrochemical biosensor and the different figures of merit of the biosensor (Glass/Pt/ZNR/GOx/Nafion), will be discussed during the conference.
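The sensitivity figure of merit mentioned above is conventionally extracted as the slope of the amperometric calibration curve (steady-state current vs. glucose concentration), normalized by electrode area. The abstract does not give the calibration data, so the sketch below uses invented numbers and an assumed electrode area purely to illustrate the calculation:

```python
import numpy as np

# Hypothetical calibration data (invented for illustration):
# steady-state amperometric current vs. glucose concentration
conc_mM = np.array([1.0, 2.0, 4.0, 6.0, 8.0])       # glucose concentration (mM)
current_uA = np.array([2.1, 4.0, 8.2, 12.1, 16.0])  # measured current (uA)
electrode_area_cm2 = 0.5                            # assumed active electrode area

# Sensitivity = slope of the linear calibration region, per unit electrode area
slope, intercept = np.polyfit(conc_mM, current_uA, 1)
sensitivity = slope / electrode_area_cm2            # uA mM^-1 cm^-2

# A 12-18x sensitivity improvement, as reported, would scale this slope directly
print(f"sensitivity = {sensitivity:.2f} uA mM^-1 cm^-2")
```

A least-squares fit is used rather than two-point slopes because real calibration points carry noise; the linear region should be chosen before fitting.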

Keywords: biosensors, defects, KMnO4, ZnO nanorods

Procedia PDF Downloads 258
268 Expressing Locality in Learning English: A Study of English Textbooks for Junior High School Year VII-IX in the Indonesian Context

Authors: Agnes Siwi Purwaning Tyas, Dewi Cahya Ambarwati

Abstract:

This paper concerns language learning as both a habit formation and a constructive process that can also exercise an oppressive power to construct the learners. As a locus of discussion, the investigation problematizes the transfer of the English language to Indonesian junior high school students through the English textbooks ‘Real Time: An Interactive English Course for Junior High School Students Year VII-IX’. English has long performed as a global language, and there is a demand upon non-native speakers to master it if they desire to become internationally recognized individuals. Generally, English teachers teach the language in accordance with the nature of language learning, in which they are trained and expected to teach the language within the culture of the target language. This provides a potential soft cultural penetration of a foreign ideology through language transmission. In the context of Indonesia, learning English as an international language is considered dilemmatic. Most English textbooks in Indonesia incorporate cultural elements of the target language, which to some extent may challenge sensitivity towards local cultural values. On the other hand, local teachers demand more English textbooks for junior high school students which can facilitate cultural dissemination of both local and global values and promote learners’ cultural traits of both cultures, to avoid misunderstanding and confusion. Such textbooks also aim to support language learning as a bidirectional process instead of an instrument of oppression. However, sensitizing and localizing this foreign language is not sufficient to restrain its soft infiltration. In due course, domination persists, making English an authoritative language and positioning the locality as ‘the other’.
Such a critical premise has led to a discursive analysis of how the cultural elements of the target language are presented in the textbooks and whether the local characteristics of Indonesia are able to gradually reduce the degree of the foreign oppressive ideology. The three textbooks researched were written by a non-Indonesian author, edited by two Indonesian editors, and published by a local commercial publishing company, PT Erlangga. The analytical elaboration examines the cultural characteristics in the forms of names, terminologies, places, objects and imageries (not the linguistic aspect) of both cultural domains, English and Indonesian. Comparisons as well as categorizations were made to identify the cultural traits of each language and scrutinize the contextual analysis. In the analysis, 128 foreign elements and 27 local elements were found in the textbook for grade VII, 132 foreign and 23 local in the textbook for grade VIII, and 144 foreign and 35 local in the grade IX textbook, demonstrating the unequal distribution of both cultures. Even though the ideal pedagogical approach to English learning moves in a different direction by means of inserting local elements, the learners are continuously exposed to the culture of the target language and forced to internalize its concepts and values, which tends to marginalize their native culture.

Keywords: bidirectional process, English, local culture, oppression

Procedia PDF Downloads 244
267 Dynamic Changes in NT-proBNP Levels in Unrelated Donors during Hematopoietic Stem Cells Mobilization

Authors: Natalia V. Minaeva, Natalia A. Zorina, Marina N. Khorobrikh, Philipp S. Sherstnev, Tatiana V. Krivokorytova, Alexander S. Luchinin, Maksim S. Minaev, Igor V. Paramonov

Abstract:

Background. Over the last few decades, the Center for International Blood and Marrow Transplant Research (CIBMTR) and the World Marrow Donor Association (WMDA) have been actively working to ensure the safety of the hematopoietic stem cell (HSC) donation process. Registration of adverse events that may occur during the donation period and establishing a relationship between donation and side effects are included in the WMDA international standards. The level of blood serum N-terminal pro-brain natriuretic peptide (NT-proBNP) is an early marker of myocardial stress. Due to its high analytical sensitivity and specificity, laboratory assessment of NT-proBNP makes it possible to objectively diagnose myocardial dysfunction. It is well known that the main stimulus for proBNP synthesis and secretion from atrial and ventricular cardiac myocytes is myocyte stretch and an increase in myocardial extensibility and pressure in the heart chambers. Aim. The aim of the study was to assess the dynamic changes in blood serum NT-proBNP levels of unrelated donors at various stages of hematopoietic stem cell mobilization. Materials. 133 unrelated donors, including 92 men and 41 women, were included in the study. NT-proBNP levels were measured before the start of mobilization, on the day of apheresis, and after the donation of allogeneic HSC. The relationship between NT-proBNP levels and body mass index (BMI), ferritin, hemoglobin, and white blood cell (WBC) levels was assessed on the day of apheresis. The median age of the donors was 34 years. Mobilization of HSCs was performed with filgrastim administered at a dose of 10 μg/kg daily for 4-5 days. The first leukocytapheresis was performed on day 4 from the start of filgrastim administration. Quantitative values of blood serum NT-proBNP levels are presented as median (Me) and first and third quartiles (Q1-Q3).
Comparative analysis was carried out using the t-test as well as correlation analysis by the Spearman method. Results. The baseline blood serum NT-proBNP levels in all 133 donors were within the reference values (<125 pg/ml) and equaled 21.6 (10.0; 43.3) pg/ml. At the same time, the level of NT-proBNP in women was significantly higher than that in men. On the day of the HSC apheresis, a significant increase in blood serum NT-proBNP levels was detected, equaling 131.2 (72.6; 165.3) pg/ml (p<0.001), with higher rates in female donors. A statistically significant weak inverse correlation was established between the level of NT-proBNP and the BMI of donors (-0.18, p=0.03), as well as hemoglobin (-0.33, p<0.001) and ferritin levels (-0.19, p=0.03). No relationship was established between NT-proBNP and the WBC levels achieved as a result of HSC mobilization on the day of leukocytapheresis. A day after the apheresis, the blood serum NT-proBNP levels still exceeded the reference values, but showed a decreasing tendency. Conclusion. An increase in the blood serum NT-proBNP level of unrelated donors during the mobilization of HSC was established. Future studies should clarify the reason for this phenomenon, as well as its effects on donors' long-term health.
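The statistical workflow described (Spearman rank correlation with p-values, and t-tests between time points) can be sketched as follows; the data here are simulated for illustration only and are not the donors' actual measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated data for illustration only -- not the study's measurements
nt_probnp = rng.lognormal(mean=4.5, sigma=0.6, size=133)          # pg/ml, apheresis day
hemoglobin = 160.0 - 0.3 * nt_probnp + rng.normal(0.0, 3.0, 133)  # g/l, inverse trend

# Spearman rank correlation, as used for NT-proBNP vs. hemoglobin/BMI/ferritin
rho, p_value = stats.spearmanr(nt_probnp, hemoglobin)

# Paired t-test comparing baseline vs. apheresis-day levels in the same donors
baseline = nt_probnp * rng.uniform(0.1, 0.4, size=133)  # baseline set lower
t_stat, p_paired = stats.ttest_rel(baseline, nt_probnp)

print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")
print(f"paired t = {t_stat:.1f} (p = {p_paired:.3g})")
```

Spearman's method is the natural choice here because NT-proBNP is strongly right-skewed, so a rank-based correlation is more robust than Pearson's.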

Keywords: unrelated donors, mobilization, hematopoietic stem cells, N-terminal pro-brain natriuretic peptide

Procedia PDF Downloads 72
266 Analysis of the Long-Term Response of Seawater to Changes in CO₂, Heavy Metal and Nutrient Concentrations

Authors: Igor Povar, Catherine Goyet

Abstract:

Seawater is subject to multiple external stressors (ES), including rising atmospheric CO2 and ocean acidification, global warming, atmospheric deposition of pollutants and eutrophication, which deeply alter its chemistry, often on a global scale and, in some cases, to a degree significantly exceeding that in the historical and recent geological record. In ocean systems, micro- and macronutrients, heavy metals, and phosphorus- and nitrogen-containing components exist in different forms depending on the concentrations of various other species, organic matter, the types of minerals, the pH, etc. The major limitation to assessing more strictly the ES to oceans, such as pollutants (atmospheric greenhouse gases, heavy metals, and nutrients such as nitrates and phosphates), is the lack of a theoretical approach which could predict the ocean's resistance to multiple external stressors. In order to assess the abovementioned ES, this research has applied and developed the buffer theory approach and theoretical expressions of formal chemical thermodynamics for ocean systems as heterogeneous aqueous systems. Thermodynamic expressions of complex chemical equilibria, involving acid-base, complex formation and mineral equilibria, have been deduced. This thermodynamic approach uses thermodynamic relationships coupled with original mass balance constraints, in which the solid phases are explicitly expressed. The ocean's sensitivity to different external stressors and changes in driving factors is considered in terms of derived buffering capacities, or buffer factors, for heterogeneous systems. Our investigations have proved that heterogeneous aqueous systems, such as oceans and seas, manifest buffer properties towards all their components, not only pH, as has been known so far: for example, with respect to carbon dioxide, carbonates, phosphates, Ca2+, Mg2+, heavy metal ions, etc.
The derived expressions make it possible to attribute changes in chemical ocean composition to different pollutants. These expressions are also useful for improving current atmosphere-ocean-marine biogeochemistry models. The major research questions to which the research responds are: (i) What kind of contamination is the most harmful for the future ocean? (ii) What are the chemical heterogeneous processes of heavy metal release from sediments and minerals, and what is their impact on the ocean's buffer action? (iii) What will be the long-term response of the coastal ocean to the oceanic uptake of anthropogenic pollutants? (iv) How will the ocean's resistance change in terms of future complex chemical processes and buffer capacities, and its response to external (anthropogenic) perturbations? The ocean's buffer capacities towards its main components are recommended as parameters to be included in determining the most important factors defining the response of the ocean environment to increasing technogenic loads. The deduced thermodynamic expressions are valid for any combination of chemical composition, or any of the species contributing to the total concentration, as independent state variables.
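As a hedged illustration of the generalized-buffer idea (the authors' exact derived expressions are not reproduced in the abstract), the classical pH buffer capacity and a plausible extension to an arbitrary component X can be written as:

```latex
% Classical pH buffer capacity: strong base added per unit pH change
\beta_{\mathrm{pH}} \;=\; \frac{dC_{\mathrm{B}}}{d\,\mathrm{pH}}

% Illustrative generalization to any component X (CO2, phosphate, Ca^{2+},
% a heavy-metal ion, ...), with pX = -\log a_X, at fixed temperature and pressure:
\beta_{X} \;=\; \left(\frac{\partial C_{X,\mathrm{tot}}}{\partial\,\mathrm{p}X}\right)_{T,P}
```

A large value of such a buffer factor means the system resists changes in the activity of X when the total amount of X is perturbed, which is the sense of "resistance to external stressors" used above.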

Keywords: atmospheric greenhouse gas, chemical thermodynamics, external stressors, pollutants, seawater

Procedia PDF Downloads 113
265 Multi-Agent System Based Distributed Voltage Control in Distribution Systems

Authors: A. Arshad, M. Lehtonen, M. Humayun

Abstract:

With increasing Distributed Generation (DG) penetration, distribution systems are advancing towards smart grid technology to tackle the voltage control problem in a distributed manner with minimal latency. This paper proposes a multi-agent based distributed voltage control. In this method, a flat architecture of agents is used; the agents involved in the control procedure are the On-Load Tap Changer Agent (OLTCA), the Static VAR Compensator Agent (SVCA), and the agents associated with DGs and loads at their locations. The objectives of the proposed voltage control model are to minimize network losses and DG curtailments while maintaining the voltage within statutory limits, as close as possible to the nominal value. The total loss cost is the sum of the network losses cost, DG curtailment costs, and a voltage damage cost (based on a penalty function implementation). The total cost is iteratively calculated for progressively stricter limits by plotting the voltage damage cost and losses cost against a varying voltage limit band. The method provides the optimal limits, closer to the nominal value, with minimum total loss cost. In order to achieve the objective of voltage control, the whole network is divided into multiple control regions, downstream from each controlling device. The OLTCA behaves as a supervisory agent and performs all the optimizations. First, a token is generated by the OLTCA at each time step, and it transfers from node to node until a node with a voltage violation is detected. Upon detection of such a node, the token grants permission to the Load Agent (LA) to initiate possible remedial actions. The LA contacts the respective controlling devices depending on the vicinity of the violated node. If the violated node does not lie in the vicinity of a controller, or the controlling capabilities of all the downstream control devices are at their limits, then the OLTC is used as a last resort.
For a realistic study, simulations are performed for a typical Finnish residential medium-voltage distribution system using MATLAB®. These simulations are executed for two cases: simple Distributed Voltage Control (DVC), and DVC with optimized loss cost (DVC + penalty function). A sensitivity analysis is performed based on DG penetration. The results indicate that the costs of losses and DG curtailments are directly proportional to DG penetration, while in case 2 there is a significant reduction in total loss. For lower DG penetration, losses are reduced by roughly 50%, while for higher DG penetration the loss reduction is less significant. Another observation is that the new, stricter limits calculated by cost optimization move towards the statutory limits of ±10% of the nominal with increasing DG penetration: for 25, 45 and 65% DG penetration, the calculated limits are ±5, ±6.25 and ±8.75%, respectively. The observed results show that the voltage control algorithm proposed in case 1 is able to deal with the voltage control problem instantly, but with higher losses. In contrast, case 2 reduces the network losses over time through the proposed iterative loss cost optimization performed by the OLTCA.
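The token-passing control loop described above can be sketched as follows; the network data, voltage band, and controller capacities are invented for illustration and are not taken from the paper:

```python
# Sketch of the token-passing step: a token issued by the supervisory OLTC agent
# traverses nodes until a voltage violation is found, then the local load agent
# picks the nearest downstream controller with spare capacity, falling back to
# the OLTC. All node data, the voltage band and capacities are assumptions.

V_MIN, V_MAX = 0.95, 1.05  # assumed statutory voltage band (p.u.)

# node -> (voltage in p.u., controllers in its vicinity, nearest first)
network = {
    "n1": (1.00, ["SVC1"]),
    "n2": (0.99, ["SVC1", "DG1"]),
    "n3": (0.93, ["DG1", "SVC2"]),   # below V_MIN: a violated node
    "n4": (0.97, ["SVC2"]),
}
spare_capacity = {"SVC1": 0.0, "DG1": 0.2, "SVC2": 0.5}  # remaining Mvar

def pass_token(net):
    """Token travels node to node; returns the first voltage violation, else None."""
    for node, (v, _) in net.items():
        if not (V_MIN <= v <= V_MAX):
            return node
    return None

def remedial_controller(node, net, capacity):
    """Load agent: nearest controller with spare capacity; OLTC as last resort."""
    for ctrl in net[node][1]:
        if capacity.get(ctrl, 0.0) > 0.0:
            return ctrl
    return "OLTC"

violated = pass_token(network)
if violated is not None:
    print(violated, "->", remedial_controller(violated, network, spare_capacity))
```

The flat agent architecture shows up here as plain peer-to-peer lookups: no agent other than the OLTCA holds global state, which is what keeps the control latency low.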

Keywords: distributed voltage control, distribution system, multi-agent systems, smart grids

Procedia PDF Downloads 288
264 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times

Authors: John Dimopoulos

Abstract:

This proposal attempts a refabrication of Heidegger’s classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another crossroads in this course, coming after postmodernity, during which the dreams and dangers of modernity, augmented with the critical speculations of the post-war era, took shape. The new era in which we now live, referred to as hypermodernity by researchers in various fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity’s supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems, progressively more difficult and complex to solve and impossible to ignore: climate change, data safety, cyber depression, and digital stress being some of the most prevalent. Humans often have no other option than to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields, such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints towards an increase in conflicts.
These conflicts between tech companies, stakeholders, and users, with implications for politics, work, education, and production, as apparent in the cases of the Amazon workers’ strikes, Donald Trump’s 2016 campaign, the Facebook and Microsoft data scandals, and more, are often non-transparent to the wider public’s eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing insights from Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.

Keywords: design, hypermodernity, object-oriented ontology, weapon-being

Procedia PDF Downloads 129
263 Diagenesis of the Permian Ecca Sandstones and Mudstones, in the Eastern Cape Province, South Africa: Implications for the Shale Gas Potential of the Karoo Basin

Authors: Temitope L. Baiyegunhi, Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava

Abstract:

Diagenesis is the most important factor affecting reservoir properties. Although published data give a vast amount of information on the geology, sedimentology and lithostratigraphy of the Ecca Group in the Karoo Basin of South Africa, little is known of the diagenesis of the potentially feasible shales and sandstones of the Ecca Group. This study aims to provide a general account of the diagenesis of the sandstones and mudstones of the Ecca Group. Twenty-five diagenetic textures and structures were identified and grouped into three regimes or stages: eogenesis, mesogenesis and telogenesis. Clay minerals are the most common cementing materials in the Ecca sandstones and mudstones. Smectite, kaolinite and illite are the major clay minerals acting as pore-lining rims and pore-filling cement. Most of the clay minerals and detrital grains were severely attacked and replaced by calcite. Calcite precipitated locally in pore spaces and partly or completely replaced feldspar and quartz grains, mostly at their margins. Precipitation of cements, formation of pyrite and authigenic minerals, and slight lithification occurred during eogenesis. This regime was followed by mesogenesis, which brought about an increase in the tightness of grain packing, loss of pore spaces and thinning of beds due to the weight of overlying sediments and selective dissolution of framework grains. Compaction, mineral overgrowths, mineral replacement, clay-mineral authigenesis, deformation and pressure solution structures formed during mesogenesis. When the rocks were uplifted, weathered and unroofed by erosion, additional grain fracturing, decementation and oxidation of iron-rich volcanic fragments and ferromagnesian minerals resulted. The rocks of the Ecca Group were subjected to moderate to intense mechanical and chemical compaction during their progressive burial.
The observed pores are intergranular pores, matrix micropores, and secondary intragranular, dissolution and fractured pores. The presence of fractured and dissolution pores tends to enhance reservoir quality. However, the isolated nature of the pores makes them unfavourable producers of hydrocarbons, which at best would require stimulation. Understanding the distribution of diagenetic processes in these rocks in space and time will allow the development of predictive models of their quality, which may contribute to reducing the risks involved in their exploration.

Keywords: diagenesis, reservoir quality, Ecca Group, Karoo Supergroup

Procedia PDF Downloads 123
262 Prevalence, Antimicrobial Susceptibility Pattern and Public Health Significance of Staphylococcus aureus Isolated from Raw Red Meat at Butchery and Abattoir Houses in Mekelle, Northern Ethiopia

Authors: Haftay Abraha Tadesse

Abstract:

Background: Staphylococcus is a genus of worldwide distributed bacteria associated with infections of different sites in humans and animals. They are among the most important causes of infection associated with the consumption of contaminated food. Objective: The objective of this study was to determine the prevalence, antimicrobial susceptibility patterns and public health significance of Staphylococcus aureus in raw meat from butchery and abattoir houses of Mekelle, Northern Ethiopia. Methodology: A cross-sectional study was conducted from April to October 2019. Socio-demographic and public-health data were collected using a predesigned questionnaire. The raw meat samples were collected aseptically in the butchery and abattoir houses and transported in an ice box to Mekelle University, College of Veterinary Sciences, for isolation and identification of Staphylococcus aureus. Antimicrobial susceptibility was determined by the disc diffusion method. The data obtained were cleaned and entered into STATA 22.0, and a logistic regression model with odds ratios was used to assess the association of risk factors with bacterial contamination. A P-value < 0.05 was considered statistically significant. Results: In the present study, 88 out of 250 samples (35.2%) were found to be contaminated with Staphylococcus aureus. Among the raw meat specimens, the positivity rate of Staphylococcus aureus was 37.6% (n=47) and 32.8% (n=41) for butchery and abattoir houses, respectively. Among the associated risk factors, not using gloves was associated with reduced odds of contamination (AOR=0.222; 95% CI: 0.104-0.473), while strict separation between clean and dirty areas (AOR=1.37; 95% CI: 0.66-2.86) and poor hand-washing habits (AOR=1.08; 95% CI: 0.35-3.35) were also assessed for association with Staphylococcus aureus contamination.
All thirty-seven Staphylococcus aureus isolates from butchery houses were sensitive (100%) to doxycycline, trimethoprim, gentamicin, sulphamethoxazole, amikacin, CN, co-trimoxazole and nitrofurantoin, whereas they showed resistance to cefotaxime (100%), ampicillin (87.5%), penicillin (75%), B (75%), and nalidixic acid (50%). On the other hand, all Staphylococcus aureus isolates from abattoir houses (100%, n=10) were sensitive to chloramphenicol, gentamicin and nitrofurantoin, whereas they showed 100% resistance to penicillin, B, AMX, ceftriaxone, ampicillin and cefotaxime. The overall multi-drug resistance rate for Staphylococcus aureus was 90% and 100% for butchery and abattoir houses, respectively. Conclusion: Staphylococcus aureus was recovered from 35.2% of the raw meat samples collected from the butchery and abattoir houses. More has to be done in developing hand-washing behavior and ensuring the availability of safe water in the butchery houses to reduce the burden of bacterial contamination. The present findings highlight the need to implement protective measures against food contamination and to consider alternative drug options. The development of antimicrobial resistance is nearly always a result of repeated therapeutic and/or indiscriminate use of antimicrobials. Regular antimicrobial sensitivity testing helps to select effective antibiotics and to reduce the problem of resistance development towards commonly used antibiotics.
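The multi-drug resistance percentages quoted above follow from a simple count of isolates resistant to several agents. A minimal sketch (with invented resistance profiles, and treating each listed agent as its own class for simplicity, which is looser than the usual class-based MDR definition):

```python
# Invented resistance profiles; each isolate maps to the set of agents it
# resists. Real MDR definitions count antimicrobial *classes*, not agents.
isolates = [
    {"penicillin", "ampicillin", "cefotaxime"},
    {"penicillin", "ampicillin", "cefotaxime", "nalidixic acid"},
    {"cefotaxime"},
    {"penicillin", "ampicillin", "ceftriaxone"},
]

# Count an isolate as multi-drug resistant (MDR) if resistant to >= 3 agents
mdr_count = sum(1 for profile in isolates if len(profile) >= 3)
mdr_rate = 100.0 * mdr_count / len(isolates)
print(f"MDR rate: {mdr_rate:.0f}%")
```

With these four invented profiles, three meet the threshold, giving a 75% rate; the study's 90% and 100% figures would come from applying the same count to its 37 and 10 isolates.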

Keywords: abattoir house, AMR, butchery house, S. aureus

Procedia PDF Downloads 61
261 Distributed Energy Resources in Low-Income Communities: A Public Policy Proposal

Authors: Rodrigo Calili, Anna Carolina Sermarini, João Henrique Azevedo, Vanessa Cardoso de Albuquerque, Felipe Gonçalves, Gilberto Jannuzzi

Abstract:

The diffusion of Distributed Energy Resources (DER) has caused structural changes in the relationship between consumers and electrical systems. Photovoltaic Distributed Generation (PVDG), in particular, is an essential strategy for achieving the 2030 Agenda goals, especially SDG 7 and SDG 13. However, most projects involving this technology in Brazil are restricted to the wealthiest classes of society and have not yet reached the low-income population, a concern aligned with theories of energy justice. In the pursuit of energy equality, one of the policies adopted by governments is the social electricity tariff (SET), which provides discounts on energy tariffs/bills. However, merely granting this benefit may not be effective, and it is possible to combine it with DER technologies such as PVDG. Thus, this work aims to evaluate the economic viability of a policy replacing the social electricity tariff (the current policy aimed at the low-income population in Brazil) with PVDG projects. To this end, a proprietary methodology was developed that included mapping the stakeholders, identifying critical variables, simulating policy options, and carrying out an analysis in the Brazilian context. The simulation answered two key questions: in which municipalities low-income consumers would have lower bills with PVDG compared to SET, and which consumers in a given city would have increased subsidies (which are currently provided both for solar energy in Brazil and for the social tariff). An economic model was created to verify the feasibility of the proposed policy in each municipality in the country, considering geographic factors (the tariff of a particular distribution utility, the solar radiation of a specific location, etc.). To validate these results, four sensitivity analyses were performed: variation of the simultaneity factor between generation and consumption, variation of the tariff readjustment rate, zeroing CAPEX, and exemption from state tax.
The behind-the-meter modality of generation proved more promising than the construction of a shared plant. However, although the behind-the-meter modality presents better results than the shared plant, adopting it is more complex due to issues related to the infrastructure of the most vulnerable communities (e.g., precarious electrical networks and the need to reinforce roofs). Considering the shared power plant modality, many opportunities are still envisaged, since the risk of investing in such a policy can be mitigated. Furthermore, this modality can be an alternative due to the mitigation of default risk, as it allows greater control of users and facilitates operation and maintenance. Finally, it was also found that in some regions of Brazil the continuation of the SET presents more economic benefits than its replacement by PVDG. Nevertheless, the proposed policy offers many opportunities. In future work, the model may include other parameters, such as the cost of engaging low-income populations and business risk. In addition, other renewable sources of distributed generation can be studied for this purpose.
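The municipality-level feasibility check at the heart of the model can be sketched as a bill comparison. All tariffs, discounts, CAPEX and yield figures below are hypothetical placeholders, not the study's data, and the annualisation deliberately ignores discounting:

```python
# Sketch of the per-municipality comparison: annual bill under the social
# tariff (SET) vs. a levelised annual cost of PVDG. All numbers are invented.

def annual_bill_set(consumption_kwh, tariff, set_discount):
    """Annual bill with the SET discount applied to the full tariff."""
    return consumption_kwh * tariff * (1.0 - set_discount)

def annual_cost_pvdg(consumption_kwh, capex_per_kwp, yield_kwh_per_kwp,
                     lifetime_years, opex_rate=0.01):
    """Straight-line annualised CAPEX plus a simple O&M share (no discounting)."""
    kwp_needed = consumption_kwh / yield_kwh_per_kwp
    capex = kwp_needed * capex_per_kwp
    return capex / lifetime_years + capex * opex_rate

# municipality -> (tariff in R$/kWh, PV yield in kWh/kWp/year), illustrative
municipalities = {"A": (0.90, 1500.0), "B": (0.60, 1200.0)}

for name, (tariff, pv_yield) in municipalities.items():
    bill = annual_bill_set(1800.0, tariff, set_discount=0.35)
    pvdg = annual_cost_pvdg(1800.0, capex_per_kwp=4000.0,
                            yield_kwh_per_kwp=pv_yield, lifetime_years=25)
    print(f"{name}: {'PVDG' if pvdg < bill else 'SET'} is cheaper")
```

The sensitivity analyses mentioned above correspond to re-running this comparison with a simultaneity factor on self-consumed energy, a tariff escalation rate, zero CAPEX, or a tax exemption folded into the tariff.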

Keywords: low income, subsidy policy, distributed energy resources, energy justice

Procedia PDF Downloads 84
260 Towards Consensus: Mapping Humanitarian-Development Integration Concepts and Their Interrelationship over Time

Authors: Matthew J. B. Wilson

Abstract:

Disaster Risk Reduction relies heavily on the effective cooperation of both humanitarian and development actors, particularly in the wake of a disaster, in implementing lasting recovery measures that better protect communities from disasters to come. This fits within a broader discussion around integrating humanitarian and development work stretching back to the 1980s. Over time, a number of key concepts have been put forward, including Linking Relief, Rehabilitation, and Development (LRRD), Early Recovery (ER), ‘Build Back Better’ (BBB), and the most recent ‘Humanitarian-Development-Peace Nexus’ or ‘Triple Nexus’ (HDPN), to define these goals and this relationship. While this discussion has evolved greatly over time, from a continuum to a more integrative, synergistic relationship, there remains a lack of consensus around how to describe it, and as such, effectively closing this gap has yet to be seen in practice. The objective of this research was twofold: first, to map the four identified concepts (LRRD, ER, BBB and HDPN) as used in the literature since 1995, to understand the overall trends in how this relationship is discussed; second, to map articles referencing a combination of these concepts, to understand their interrelationship. A scoping review was conducted for each concept. Results were gathered from Google Scholar by first entering a specific Boolean search phrase for each concept as it related to disasters, for each year since 1995, to identify the total number of articles discussing each concept over time. A second search was then done by pairing concepts within a Boolean search phrase and entering the results into a matrix to count how many articles referred to more than one of the concepts. This latter search was limited to articles published after 2017 to account for the more recent emergence of HDPN. It was found that ER, and particularly BBB, are referred to much more widely than LRRD and HDPN.
ER increased particularly in the mid-2000s, coinciding with the formation of the ER cluster, and BBB, whilst emerging gradually in the mid-2000s due to its usage in the wake of the Boxing Day Tsunami, increased significantly from about 2015 after its prominent inclusion in the Sendai Framework. HDPN has only started to increase in the last 4-5 years. With regard to the relationship between concepts, it was found that the vast majority of articles referred to each concept in isolation from the others. The strongest relationship was between LRRD and HDPN (8% of articles referring to both), whilst ER-BBB and ER-HDPN were both about 3%, LRRD-ER 2%, BBB-HDPN 1%, and BBB-LRRD 1%. This research identified a fundamental issue around the lack of consensus on, and even awareness of, the different approaches referred to within academic literature on integrating humanitarian and development work. More research into synthesizing and learning from a range of approaches could help to close this gap.
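The pairwise search step above amounts to building a small co-occurrence matrix from boolean hit counts. A minimal sketch, using hypothetical counts rather than the study's actual Google Scholar results:

```python
# Hypothetical hit counts for single-concept and paired boolean searches
# (illustrative numbers only, not the study's Google Scholar data).
single_hits = {"LRRD": 120, "ER": 800, "BBB": 950, "HDPN": 150}
pair_hits = {
    ("LRRD", "ER"): 18, ("LRRD", "BBB"): 10, ("LRRD", "HDPN"): 12,
    ("ER", "BBB"): 25, ("ER", "HDPN"): 5, ("BBB", "HDPN"): 9,
}

def cooccurrence_share(pair_hits, single_hits):
    """Share of articles referencing both concepts, normalized by the
    smaller of the two single-concept result sets (an assumption; the
    abstract does not state how the percentages were normalized)."""
    return {
        pair: both / min(single_hits[pair[0]], single_hits[pair[1]])
        for pair, both in pair_hits.items()
    }

shares = cooccurrence_share(pair_hits, single_hits)
for pair, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{pair[0]}-{pair[1]}: {share:.1%}")
```

The denominator choice is the main modeling decision here; normalizing by the smaller set highlights overlap for rarely used concepts such as LRRD and HDPN.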

Keywords: build back better, disaster risk reduction, early recovery, linking relief rehabilitation and development, humanitarian development integration, humanitarian-development (peace) nexus, recovery, triple nexus

Procedia PDF Downloads 54
259 Quantification of the Non-Registered Electrical and Electronic Equipment for Domestic Consumption and Enhancing E-Waste Estimation: A Case Study on TVs in Vietnam

Authors: Ha Phuong Tran, Feng Wang, Jo Dewulf, Hai Trung Huynh, Thomas Schaubroeck

Abstract:

The fast growth in volume and the complex composition of waste electrical and electronic equipment (e-waste) have made it one of the most problematic waste streams worldwide. Precise information on its size at the national, regional, and global levels has therefore been highlighted as a prerequisite for a proper management system. However, this is a very challenging task, especially in developing countries where both a formal e-waste management system and the statistical data necessary for e-waste estimation, i.e. data on the production, sale, and trade of electrical and electronic equipment (EEE), are often lacking. Moreover, there is an inflow of non-registered EEE, which ‘invisibly’ enters the domestic EEE market and is then used for domestic consumption. The non-registered/invisible and (in most cases) illicit nature of this flow makes it difficult or even impossible to capture in any statistical system. The e-waste generated from it is thus often uncounted in current e-waste estimations based on statistical market data. Therefore, this study focuses on enhancing e-waste estimation in developing countries and proposes a calculation pathway to quantify the magnitude of the non-registered EEE inflow. An advanced Input-Output Analysis model (the Sale–Stock–Lifespan model) has been integrated into the calculation procedure. In general, the Sale-Stock-Lifespan model helps improve the quality of input data for modeling (i.e. it performs data consolidation to create a more accurate lifespan profile and models a dynamic lifespan to take into account its changes over time), via which the quality of the e-waste estimation can be improved. To demonstrate the above objectives, a case study on televisions (TVs) in Vietnam has been employed. The results show that the amount of waste TVs in Vietnam has increased fourfold since 2000. This upward trend is expected to continue in the future: in 2035, a total of 9.51 million TVs are predicted to be discarded.
Moreover, the estimation of the non-registered TV inflow shows that it may on average have contributed about 15% of the total TVs sold on the Vietnamese market during the period 2002 to 2013. To tackle potential uncertainties associated with the estimation models and input data, sensitivity analysis has been applied. The results show that both the waste and non-registered inflow estimates depend on two parameters, i.e. the number of TVs in use per household and the lifespan. In particular, with a 1% increase in the TV in-use rate, the average market share of the non-registered inflow in the period 2002-2013 increases by 0.95%. However, it decreases from 27% to 15% when the constant unadjusted lifespan is replaced by the dynamic adjusted lifespan. The effect of these two parameters on the amount of waste TVs generated in each year is more complex and non-linear over time. To conclude, despite remaining uncertainty, this study is the first attempt to apply the Sale-Stock-Lifespan model to improve e-waste estimation in developing countries and to quantify the non-registered EEE inflow to domestic consumption. It can therefore be further improved in the future with more knowledge and data.
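The abstract does not give the model's internals, but the core of a sale-stock-lifespan estimate is a convolution of past sales with a lifespan distribution. A minimal sketch under the assumption of a Weibull discard profile (the shape and scale parameters are illustrative, not the study's):

```python
import math

def weibull_cdf(t, shape, scale):
    """Probability that a unit has been discarded within t years of sale."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def discards(sales, shape=2.0, scale=9.0):
    """Convolve yearly sales with a Weibull lifespan profile to estimate
    units discarded in each year (sales[i] = units sold in year i)."""
    out = [0.0] * len(sales)
    for i, sold in enumerate(sales):
        for j in range(i, len(sales)):
            age = j - i
            out[j] += sold * (weibull_cdf(age + 1, shape, scale)
                              - weibull_cdf(age, shape, scale))
    return out

# Illustrative sales series (millions of TVs per year), not actual data.
sales = [1.0, 1.2, 1.5, 1.8, 2.0, 2.3]
print([round(x, 3) for x in discards(sales)])
```

Replacing the fixed Weibull parameters with a time-varying (dynamic) lifespan, as the study does, changes the per-cohort discard fractions and hence the generated-waste curve.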

Keywords: e-waste, non-registered electrical and electronic equipment, TVs, Vietnam

Procedia PDF Downloads 220
258 Sensitivity Improvement of Optical Ring Resonator for Strain Analysis with the Direction of Strain Recognition Possibility

Authors: Tayebeh Sahraeibelverdi, Ahmad Shirazi Hadi Veladi, Mazdak Radmalekshah

Abstract:

Optical sensors have become attractive due to their precision, low power consumption, and intrinsic immunity to electromagnetic interference. Among waveguide optical sensors, cavity-based ones are favored for their high Q-factor. Micro ring resonators, as a potential platform, have been investigated for applications ranging from biosensors to pressure sensors, thanks to their sensitive ring structure, which responds to any small change in the refractive index. Furthermore, these micron-sized structures can be arranged in arrays, so that each resonance can be placed at a specific wavelength and addressed in this way. Another exciting application is applying a strain to the ring, making it an optical strain gauge, whereas traditional strain gauges are based on piezoelectric materials, require electrical wiring when arranged in arrays, and are about fifty times bigger in size. Any physical effect that changes the waveguide cross-section, the waveguide's elasto-optic properties, or the ring circumference can play a role; of these, a change in ring size has the largest effect. Here, an engineered ring structure is investigated to study the effect of strain on the ring's resonance wavelength shift and its potential for more sensitive strain devices. At the same time, these devices can measure any strain by being mounted on the surface of interest. The idea is to change the 'O'-shaped ring to a 'C'-shaped ring with a small opening, starting from 2π/360 radians, or one degree. We used the MODE solver of Lumerical software to investigate the effect of changing the ring's opening and the shift induced by applied strain. The designed ring is a three-micron-radius silicon-on-insulator ring, which can be fabricated by standard complementary metal-oxide-semiconductor (CMOS) micromachining. The wavelength shifts for ring openings from 1 degree to 6 degrees have been investigated.
Opening the ring by 1 degree reduces the ring's quality factor from 3000 to 300, an order-of-magnitude reduction. Assuming a strain widens the ring opening from 1 degree to 6 degrees, our simulation results show a negligible further Q-factor reduction, from 300 to 280. A ring resonator's quality factor can reach up to 10⁸, against which an order-of-magnitude reduction is negligible. The resonance wavelength showed a blue shift and was obtained as 1581, 1579, 1578, and 1575 nm for 1-, 2-, 4- and 6-degree ring openings, respectively. This design can find the direction of the strain by placing the opening on different parts of the ring; moreover, by addressing the specified wavelength, we can precisely find the direction. This opens a significant opportunity to locate cracks and characterize surface mechanical properties very specifically and precisely. The idea can also be implemented in polymer ring resonators, which can come on a flexible substrate and can be very sensitive to any strain that brings the two ends of the ring at the slit closer together or further apart.
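The blue shift reported above follows from the resonance condition m·λ = n_eff·L, since opening the 'C' shortens the optical path L. A rough sketch with an assumed effective index and mode order (neither value is given in the abstract, so the absolute wavelengths differ from the simulated ones; only the trend is illustrated):

```python
import math

def resonance_wavelength(radius_um, n_eff, opening_deg, m):
    """Resonance condition m * lambda = n_eff * L, where the 'C'-shape
    opening removes a fraction of the ring circumference."""
    path = 2 * math.pi * radius_um * (1 - opening_deg / 360.0)
    return n_eff * path / m

# Illustrative values: 3-um-radius ring; effective index and mode order
# are assumptions, not taken from the abstract.
N_EFF, MODE = 2.4, 29
for deg in (1, 2, 4, 6):
    lam = resonance_wavelength(3.0, N_EFF, deg, MODE)
    print(f"opening {deg} deg -> resonance {lam * 1000:.1f} nm")
```

For a fixed mode order, the resonance wavelength decreases monotonically with the opening angle, reproducing the blue-shift direction reported in the abstract.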

Keywords: optical ring resonator, strain gauge, strain sensor, surface mechanical property analysis

Procedia PDF Downloads 100
257 Investigation of Municipal Solid Waste Incineration Filter Cake as Minor Additional Constituent in Cement Production

Authors: Veronica Caprai, Katrin Schollbach, Miruna V. A. Florea, H. J. H. Brouwers

Abstract:

Nowadays, MSWI (Municipal Solid Waste Incineration) bottom ash (BA) produced by Waste-to-Energy (WtE) plants represents the majority of the solid residues derived from MSW incineration. Once processed, the BA is often landfilled, resulting in possible environmental problems, additional costs for the plant, and increasing occupation of public land. In order to limit this phenomenon, European countries such as the Netherlands encourage the utilization of MSWI BA in the construction field by providing standards for the leaching of contaminants into the environment (Dutch Soil Quality Decree). Commonly, BA has a particle size below 32 mm and a heterogeneous chemical composition, depending on its source. By washing the coarser BA, an MSWI sludge is obtained. It is characterized by a high content of heavy metals, chlorides, and sulfates as well as a reduced particle size (below 0.25 mm). To lower its environmental impact, MSWI sludge is filtered or centrifuged to remove easily soluble contaminants, such as chlorides, yielding a filter cake (FC). However, the presence of heavy metals is not easily reduced, compromising its possible application. To lower the leaching of those contaminants, the use of MSWI residues in combination with cement represents a valuable option, due to the known retention of those ions in the hydrated cement matrix. Among the applications, the European standard for common cement EN 197-1:1992 allows the incorporation into cement of up to 5% by mass of a minor additional constituent (MAC), such as fly ash or blast furnace slag, but also an unspecified filler. To the best of the authors' knowledge, although FC is widely available, has an appropriate particle size, and has a chemical composition similar to cement, it has not been investigated as a possible MAC in cement production. Therefore, this paper addresses the suitability of MSWI FC as a MAC for CEM I 52.5 R, within a maximum replacement of 5% by mass.
After physical and chemical characterization of the raw materials, the crystal phases of the pastes are determined by XRD for three replacement levels (1%, 3%, and 5%) at different ages. Thereafter, the impact of FC on the mechanical and environmental performance of cement is assessed according to EN 196-1 and the Dutch Soil Quality Decree, respectively. The investigation of the reaction products evidences the formation of layered double hydroxides (LDH) in the early stage of the reaction. Mechanically, the presence of FC results in a reduction of the 28-day compressive strength by 8% for a replacement of 5% by mass, compared with pure CEM I 52.5 R without any MAC. In contrast, the flexural strength is not affected by the presence of FC. Environmentally, the Dutch legislation for the leaching of contaminants from unshaped (granular) material is satisfied. Based on the collected results, FC represents a suitable candidate as a MAC in cement production.

Keywords: environmental impact evaluation, minor additional constituent, MSWI residues, X-ray diffraction crystallography

Procedia PDF Downloads 139
256 Application of MALDI-MS to Differentiate SARS-CoV-2 and Non-SARS-CoV-2 Symptomatic Infections in the Early and Late Phases of the Pandemic

Authors: Dmitriy Babenko, Sergey Yegorov, Ilya Korshukov, Aidana Sultanbekova, Valentina Barkhanskaya, Tatiana Bashirova, Yerzhan Zhunusov, Yevgeniya Li, Viktoriya Parakhina, Svetlana Kolesnichenko, Yeldar Baiken, Aruzhan Pralieva, Zhibek Zhumadilova, Matthew S. Miller, Gonzalo H. Hortelano, Anar Turmuhambetova, Antonella E. Chesca, Irina Kadyrova

Abstract:

Introduction: The rapidly evolving COVID-19 pandemic, along with the re-emergence of pathogens causing acute respiratory infections (ARI), has necessitated the development of novel diagnostic tools to differentiate various causes of ARI. MALDI-MS, due to its wide usage and affordability, has been proposed as a potential instrument for diagnosing SARS-CoV-2 versus non-SARS-CoV-2 ARI. The aim of this study was to investigate the potential of MALDI-MS in conjunction with a machine learning model to accurately distinguish between symptomatic infections caused by SARS-CoV-2 and non-SARS-CoV-2 during both the early and later phases of the pandemic. Furthermore, this study aimed to analyze mass spectrometry (MS) data obtained from nasal swabs of healthy individuals. Methods: We gathered mass spectra from 252 samples, comprising 108 SARS-CoV-2-positive samples obtained in 2020 (Covid 2020), 7 SARS-CoV-2-positive samples obtained in 2023 (Covid 2023), 71 samples from symptomatic individuals without SARS-CoV-2 (Control non-Covid ARVI), and 66 samples from healthy individuals (Control healthy). All samples were subjected to RT-PCR testing. For data analysis, we employed the caret R package to train and test seven machine-learning algorithms: C5.0, KNN, NB, RF, SVM-L, SVM-R, and XGBoost. We conducted training using a five-fold (outer) nested, repeated (five times) ten-fold (inner) cross-validation with a randomized stratified splitting approach. Results: In this study, we utilized the Covid 2020 dataset as the case group and the non-Covid ARVI dataset as the control group to train and test the machine learning (ML) models. Among these models, XGBoost and SVM-R demonstrated the highest performance, with accuracy values of 0.97 [0.93; 0.97] and 0.95 [0.95; 0.97], specificity values of 0.86 [0.71; 0.93] and 0.86 [0.79; 0.87], and sensitivity values of 0.984 [0.984; 1.000] and 1.000 [0.968; 1.000], respectively.
When examining the Covid 2023 dataset, the Naive Bayes model achieved the highest classification accuracy of 43%, while XGBoost and SVM-R achieved accuracies of 14%. For the healthy control dataset, the accuracy of the models ranged from 0.27 [0.24; 0.32] for k-nearest neighbors to 0.44 [0.41; 0.45] for the Support Vector Machine with a radial basis function kernel. Conclusion: ML models trained on MALDI-MS spectra of nasopharyngeal swabs obtained from patients with Covid during the initial phase of the pandemic, as well as from symptomatic non-Covid individuals, showed excellent classification performance, which aligns with the results of previous studies. However, when applied to swabs from healthy individuals and a limited sample of patients with Covid in the late phase of the pandemic, the ML models exhibited lower classification accuracy.
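The nested cross-validation scheme described in the Methods can be sketched with scikit-learn in place of the caret R package actually used. The dataset, classifier grid, and repetition scheme below are simplified stand-ins (one RBF-SVM grid, synthetic features, no five-times repetition of the inner loop):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
                                     cross_val_score)
from sklearn.svm import SVC

# Synthetic stand-in for the MALDI-MS feature matrix; the real study used
# spectra of nasopharyngeal swabs.
X, y = make_classification(n_samples=180, n_features=50, random_state=0)

inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Inner loop tunes the RBF-SVM hyperparameters; outer loop estimates
# generalization performance on held-out folds.
tuned_svm = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
    cv=inner,
)
scores = cross_val_score(tuned_svm, X, y, cv=outer)
print(f"nested-CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

The key point of the nested design is that hyperparameter selection never sees the outer test folds, so the reported accuracy is not optimistically biased by tuning.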

Keywords: SARS-CoV-2, MALDI-TOF MS, ML models, nasopharyngeal swabs, classification

Procedia PDF Downloads 80
255 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics

Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca

Abstract:

The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of HFCS to samples of the honey pool. 237 samples were used: 108 of them were authentic honey, and 129 samples corresponded to honey adulterated with HFCS at between 1 and 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface, running in MATLAB version 5.3, was used to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A combined method of Potential Functions (PF) together with Partial Least Squares Linear Discriminant Analysis (PLS-DA) was chosen.
Different estimators of the predictive capacity of the model were compared; they were obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples, and the calibrated model was then used to study the validation samples. The calibrated model combining the Potential Functions method and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved on the training samples. By use of PF and PLS-DA classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast and low-cost technique for the detection of HFCS in honey, with high sensitivity and power of discrimination.

Keywords: adulteration, multivariate analysis, potential functions, regression

Procedia PDF Downloads 101
254 The Elimination of Fossil Fuel Subsidies from the Road Transportation Sector and the Promotion of Electro Mobility: The Ecuadorian Case

Authors: Henry Acurio, Alvaro Corral, Juan Fonseca

Abstract:

In Ecuador, subsidies on fossil fuels for the road transportation sector have long been part of the economy, mainly because of demagogy and populism from political leaders. It is clear that the government can no longer maintain the subsidies, given its trade balance and general state budget; subsidies are also a key barrier to implementing cleaner technologies. However, during the last few months, the elimination of subsidies has been carried out gradually, with the purpose of reaching international prices. It is expected that with this measure, the population will opt for other means of transportation, and that it will in turn promote the use of private electric vehicles as well as public ones, e.g., taxis and buses (urban transport). Considering the three main elements of sustainable development, an analysis of the social, economic, and environmental impacts of eliminating subsidies will be generated at the country level. To achieve this, four scenarios will be developed in order to determine how subsidy removal will contribute to the promotion of electro-mobility: 1) a Business as Usual (BAU) scenario; 2) the introduction of 10 000 electric vehicles by 2025; 3) the introduction of 100 000 electric vehicles by 2030; 4) the introduction of 750 000 electric vehicles by 2040 (in all scenarios, buses, taxis, light-duty vehicles, and private vehicles will be introduced, as established in the National Electro Mobility Strategy for Ecuador). The Low Emissions Analysis Platform (LEAP) will be used, as it is suitable for determining the cost to the government of importing fossil fuel derivatives and the cost of the electricity needed to power the electric fleet that replaces them. The elimination of subsidies generates fiscal resources for the state that can be used to develop other kinds of projects that will benefit Ecuadorian society.
It will definitely change the energy matrix and provide energy security for the country; it will be an opportunity for the government to incentivize a greater introduction of renewable energies, e.g., solar, wind, and geothermal. At the same time, it will also reduce greenhouse gas (GHG) emissions from the transportation sector, considering its mitigation potential, which as a result will improve inhabitants' quality of life by improving air quality, thereby reducing respiratory diseases associated with exhaust emissions and advancing sustainability, the Sustainable Development Goals (SDGs), and compliance with the commitments established in the Paris Agreement at COP 21 in 2015. Electro-mobility in Latin America and the Caribbean can only be achieved through the implementation of the right policies by central governments, which need to be accompanied by a National Urban Mobility Policy (NUMP) and can encompass a greater vision to develop holistic, sustainable transport systems at the local government level.

Keywords: electro mobility, energy, policy, sustainable transportation

Procedia PDF Downloads 56
253 Incidences and Factors Associated with Perioperative Cardiac Arrest in Trauma Patient Receiving Anesthesia

Authors: Visith Siriphuwanun, Yodying Punjasawadwong, Suwinai Saengyo, Kittipan Rerkasem

Abstract:

Objective: To determine the incidence of and factors associated with perioperative cardiac arrest in trauma patients who received anesthesia for emergency surgery. Design and setting: Retrospective cohort study of trauma patients undergoing anesthesia for emergency surgery at a university hospital in northern Thailand. Patients and methods: This study was approved by the medical ethics committee of the Faculty of Medicine at Maharaj Nakorn Chiang Mai Hospital, Thailand. We reviewed the data of 19,683 trauma patients receiving anesthesia over a decade, between January 2007 and March 2016. The data analyzed included patient characteristics, trauma surgery procedures, anesthesia information such as ASA physical status classification, anesthesia techniques, anesthetic drugs, and the location where anesthesia was performed, and cardiac arrest outcomes. This study excluded trauma patients who received local anesthesia by surgeons or monitored anesthesia care (MAC), and patients with missing information. Factors associated with perioperative cardiac arrest were identified with univariate analyses. Multiple regression models for risk ratios (RR) and 95% confidence intervals (CI) were used to identify factors correlated with perioperative cardiac arrest. The multicollinearity of all variables was examined by a bivariate correlation matrix. A stepwise algorithm with a p-value threshold of less than 0.02 was used to select variables for further multivariate analysis. A p-value of less than 0.05 was considered statistically significant. Measurements and results: The incidence of perioperative cardiac arrest in trauma patients receiving anesthesia for emergency surgery was 170.04 per 10,000 cases.
Factors associated with perioperative cardiac arrest in trauma patients were age over 65 years (RR=1.41, CI=1.02–1.96, p=0.039), ASA physical status 3 or higher (RR=4.19–21.58, p < 0.001), sites of surgery (intracranial, intrathoracic, upper intra-abdominal, and major vascular, each p < 0.001), cardiopulmonary comorbidities (RR=1.55, CI=1.10–2.17, p=0.012), hemodynamic instability with shock prior to receiving anesthesia (RR=1.60, CI=1.21–2.11, p < 0.001), special techniques for surgery such as cardiopulmonary bypass (CPB) and hypotensive techniques (RR=5.55, CI=2.01–15.36, p=0.001; RR=6.24, CI=2.21–17.58, p=0.001, respectively), and a history of alcoholism (RR=5.27, CI=4.09–6.79, p < 0.001). Conclusion: The incidence of perioperative cardiac arrest in trauma patients receiving anesthesia for emergency surgery was very high and correlated with many factors, especially patient age and cardiopulmonary comorbidities, a history of alcohol addiction, increasing ASA physical status, preoperative shock, special techniques for surgery, and sites of surgery including the brain, thorax, abdomen, and major vascular regions. Anesthesiologists and multidisciplinary teams in the pre- and perioperative periods should remain alert for warning signs of pre-cardiac arrest and be quick to manage high-risk groups of surgical trauma patients. Furthermore, a healthcare policy should be promoted for protecting high-risk groups of the population against accidents as well.
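The univariate step that precedes the multiple regression can be illustrated with a crude risk ratio and its 95% confidence interval from a 2x2 table. The counts below are hypothetical, for illustration only, not the study's data:

```python
import math

def risk_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Crude risk ratio with a 95% CI computed on the log scale (the
    univariate step; the study then adjusted estimates with multiple
    regression)."""
    r1 = events_exposed / n_exposed
    r0 = events_unexposed / n_unexposed
    rr = r1 / r0
    # Standard error of log(RR) for a 2x2 table.
    se = math.sqrt(1 / events_exposed - 1 / n_exposed
                   + 1 / events_unexposed - 1 / n_unexposed)
    lo, hi = (rr * math.exp(s * 1.96 * se) for s in (-1, 1))
    return rr, lo, hi

# Hypothetical counts: cardiac arrests among patients aged >65 vs <=65.
rr, lo, hi = risk_ratio(60, 3000, 275, 16683)
print(f"RR={rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An RR above 1 with a CI excluding 1 would flag the factor for inclusion in the stepwise multivariate model.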

Keywords: perioperative cardiac arrest, trauma patients, emergency surgery, anesthesia, factors risk, incidence

Procedia PDF Downloads 144
252 Computer Based Identification of Possible Molecular Targets for Induction of Drug Resistance Reversion in Multidrug Resistant Mycobacterium Tuberculosis

Authors: Oleg Reva, Ilya Korotetskiy, Marina Lankina, Murat Kulmanov, Aleksandr Ilin

Abstract:

Molecular docking approaches are widely used for the design of new antibiotics and the modeling of the antibacterial activities of numerous ligands that bind specifically to the active centers of indispensable enzymes and/or key signaling proteins of pathogens. Widespread drug resistance among pathogenic microorganisms calls for the development of new antibiotics specifically targeting important metabolic and information pathways. A generally recognized problem is that almost all molecular targets have already been identified, and it is becoming more and more difficult to design innovative antibacterial compounds to combat drug resistance. A promising way to overcome the drug resistance problem is the induction of reversion of drug resistance by supplementary medicines that improve the efficacy of conventional antibiotics. In contrast to well-established computer-based drug design, the modeling of drug resistance reversion is still in its infancy. In this work, we propose an approach to the identification of compensatory genetic variants reducing the fitness cost associated with the acquisition of drug resistance by pathogenic bacteria. The approach was based on an analysis of the population genetics of Mycobacterium tuberculosis and on the results of experimental modeling of drug resistance reversion induced by a new anti-tuberculosis drug, FS-1. The latter is an iodine-containing nanomolecular complex that passed clinical trials and was approved as a new medicine against MDR-TB in Kazakhstan. Isolates of M. tuberculosis obtained at different stages of the clinical trials, and also from laboratory animals infected with an MDR-TB strain, were characterized by antibiotic resistance, and their genomes were sequenced with the paired-end Illumina HiSeq 2000 technology.
A steady increase in sensitivity to conventional anti-tuberculosis antibiotics in series of isolates treated with FS-1 was registered, despite the fact that the canonical drug resistance mutations identified in the genomes of these isolates remained intact. It was hypothesized that the drug resistance phenotype in M. tuberculosis requires an adjustment of the activities of many genes to compensate for the fitness cost of the drug resistance mutations. FS-1 caused an aggravation of the fitness cost and the removal of drug-resistant variants of M. tuberculosis from the population. This process caused a significant increase in the genetic heterogeneity of the M. tuberculosis population that was not observed in the positive and negative controls (infected laboratory animals left untreated or treated solely with the antibiotics). A large-scale search for linkage disequilibrium associations between the drug resistance mutations and genetic variants in other genomic loci allowed the identification of target proteins that could be influenced by supplementary drugs to increase the fitness cost of drug resistance and deprive the drug-resistant bacterial variants of their competitiveness in the population. The approach will be used to improve the efficacy of FS-1 and also for the computer-based design of new drugs to combat drug-resistant infections.
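The linkage disequilibrium search mentioned above rests on pairwise association statistics between loci. A toy sketch computing r² between a drug-resistance site and candidate compensatory loci over 0/1 variant calls (all calls below are hypothetical, for illustration only):

```python
from itertools import combinations

def r_squared(calls_a, calls_b):
    """Pairwise linkage disequilibrium (r^2) between two biallelic sites,
    given 0/1 allele calls across the same set of isolates."""
    n = len(calls_a)
    pa = sum(calls_a) / n
    pb = sum(calls_b) / n
    pab = sum(a and b for a, b in zip(calls_a, calls_b)) / n
    d = pab - pa * pb                       # disequilibrium coefficient
    denom = pa * (1 - pa) * pb * (1 - pb)
    return d * d / denom if denom else 0.0

# Toy 0/1 variant calls across 8 isolates: "dr" = a canonical
# drug-resistance mutation, the others = candidate compensatory loci.
sites = {
    "dr":   [1, 1, 1, 1, 0, 0, 0, 0],
    "locA": [1, 1, 1, 0, 0, 0, 0, 0],  # strongly linked to dr
    "locB": [1, 0, 1, 0, 1, 0, 1, 0],  # unlinked
}
for a, b in combinations(sites, 2):
    print(a, b, round(r_squared(sites[a], sites[b]), 3))
```

Loci showing high r² with the resistance mutation across the isolate collection are the candidates for carrying compensatory variants, and hence for targeting by supplementary drugs.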

Keywords: complete genome sequencing, computational modeling, drug resistance reversion, Mycobacterium tuberculosis

Procedia PDF Downloads 241
251 Experimental Investigation on Tensile Durability of Glass Fiber Reinforced Polymer (GFRP) Rebar Embedded in High Performance Concrete

Authors: Yuan Yue, Wen-Wei Wang

Abstract:

The objective of this research is to comprehensively evaluate the impact of alkaline environments on the durability of Glass Fiber Reinforced Polymer (GFRP) reinforcements in concrete structures and to further explore their potential value within the construction industry. Specifically, we investigate the effects of two widely used high-performance concrete (HPC) materials on the durability of GFRP bars embedded within them under varying temperature conditions. A total of 279 GFRP bar specimens were manufactured for microscopic and mechanical performance tests. Among them, 270 specimens were used to test the residual tensile strength after 120 days of immersion, while 9 specimens were utilized for microscopic testing to analyze degradation damage. SEM techniques were employed to examine the microstructure of the GFRP and the cover concrete. Unidirectional tensile tests were conducted to determine the remaining tensile strength after corrosion. The experimental variables consisted of four types of concrete (engineered cementitious composite (ECC), ultra-high-performance concrete (UHPC), and two types of ordinary concrete with different compressive strengths) as well as three acceleration temperatures (20, 40, and 60℃). The experimental results demonstrate that HPC offers superior protection for GFRP bars compared to ordinary concrete. The two types of HPC enhance durability through different mechanisms: one by reducing the pH of the concrete pore fluid and the other by decreasing permeability. For instance, ECC improves the durability of embedded GFRP by lowering the pH of the pore fluid: after 120 days of immersion at 60°C under accelerated conditions, GFRP in ECC (pH=11.5) retained 68.99% of its strength, while that in PC1 (pH=13.5) retained 54.88%. UHPC, on the other hand, enhances the durability of GFRP bars by decreasing the porosity and increasing the compactness of the protective layer, thereby reinforcing the FRP reinforcement's longevity.
Due to the fillers present in UHPC, it typically exhibits lower porosity, higher density, and greater resistance to permeation compared to PC2, which has a similar pore fluid pH, resulting in differing durability for GFRP bars embedded in UHPC and PC2 after 120 days of immersion at 60°C, with residual strengths of 66.32% and 60.89%, respectively. Furthermore, SEM analysis revealed no noticeable evidence of fiber deterioration in any examined specimen, suggesting that uneven stress distribution resulting from interface segregation and matrix damage, rather than fiber corrosion, is the primary cause of the tensile strength reduction in GFRP. Moreover, long-term prediction models were used to calculate residual strength over time for reinforcement embedded in HPC under high-temperature, high-humidity conditions, showing that reinforcement embedded in HPC retains approximately 75% of its initial strength after 100 years of service.

Keywords: GFRP bars, HPC, degeneration, durability, residual tensile strength

Procedia PDF Downloads 29
250 Thinking Historiographically in the 21st Century: The Case of Spanish Musicology, a History of Music without History

Authors: Carmen Noheda

Abstract:

This text provides a reflection on ways of thinking about the history of music by examining historiographical production in Spain at the turn of the century. Based on concepts developed by the historical theorist Jörn Rüsen, the article focuses on the following aspects: the theoretical artifacts that structure the interpretation of the limits of writing the history of music, the narrative patterns used to give meaning to the discourse of history, and the orientation context that functions as a source of criteria of significance for both interpretation and representation. This analysis intends to show that historical music theory is not only a means to abstractly explore the complex questions connected to the production of historical knowledge, but also a tool for obtaining concrete images of the intellectual practice of professional musicologists. Writing about the historiography of contemporary Spanish music is a task that requires both a knowledge of the history that is being written and investigated and a familiarity with current theoretical trends and methodologies that allow for the recognition and definition of the different tendencies that have arisen in recent decades. With the objective of fulfilling these premises, this project takes as its point of departure the 'immediate historiography' of Spanish music at the beginning of the 21st century. The hesitation that Spanish musicology has shown in opening itself to new anthropological and sociological approaches, along with its rigidity in the face of the multiple shifts in dynamic forms of thinking about history, has produced a standstill whose consequences can be seen in the delayed reception of the historiographical revolutions that emerged in the last century. Methodologically, this essay is underpinned by Rüsen's notion of the disciplinary matrix, which is an important contribution to the understanding of historiography.
Combined with his parallel conception of differing paradigms of historiography, it is useful for analyzing the present-day forms of thinking about the history of music. Following these theories, the article will in the first place address the characteristics and identification of present historiographical currents in Spanish musicology to thereby carry out an analysis based on the theories of Rüsen. Finally, it will establish some considerations for the future of musical historiography, whose atrophy has not only fostered the maintenance of an ingrained positivist tradition, but has also implied, in the case of Spain, an absence of methodological schools and an insufficient participation in international theoretical debates. An update of fundamental concepts has become necessary in order to understand that thinking historically about music demands that we remember that subjects are always linked by reciprocal interdependencies that structure and define what it is possible to create. In this sense, the fundamental aim of this research departs from the recognition that the history of music is embedded in the conditions that make it conceivable, communicable and comprehensible within a society.

Keywords: historiography, Jörn Rüsen, Spanish musicology, theory of history of music

Procedia PDF Downloads 167
249 UVA or UVC Activation of H₂O₂ and S₂O₈²⁻ for Estrogen Degradation towards an Application in Rural Wastewater Treatment Plant

Authors: Anaelle Gabet, Helene Metivier, Christine De Brauer, Gilles Mailhot, Marcello Brigante

Abstract:

The presence of micropollutants in surface waters has been widely reported around the world, particularly downstream from wastewater treatment plants (WWTPs). Rural WWTPs constitute more than 90% of all WWTPs in France and, like conventional ones, they are not able to fully remove micropollutants. Estrogens, mainly estrone (E1), 17β-estradiol (E2) and 17α-ethinylestradiol (EE2), are excreted by human beings every day, and several studies have highlighted their endocrine-disrupting effects on river wildlife. Rural WWTPs require cheap and robust tertiary processes. UVC activation of H₂O₂ to generate HO·, a very reactive radical, has demonstrated its effectiveness. However, UVC rays are hazardous to handle and energy-consuming, which is why the ability of UVA rays was investigated in this study. Moreover, the use of S₂O₈²⁻ to generate SO₄·⁻ as an alternative to HO· has emerged in the last few years. Such processes have been widely studied at the lab scale, but pilot-scale works are far fewer. This study was carried out on a 20-L pilot composed of a 1.12-L UV reactor equipped with a polychromatic UVA lamp or a monochromatic (254 nm) UVC lamp, fed in recirculation mode. Degradation of a mixture of spiked E1, E2 and EE2 (5 µM each) was followed by HPLC-UV. Results are expressed as the UV dose (mJ·cm⁻²) received by the compounds of interest in order to compare UVC and UVA. In every system, estrogen degradation followed pseudo-first-order kinetics. First, experiments were carried out in tap water. All estrogens underwent photolysis under UVC rays, with E1 photolysis being the fastest; however, only very weak photolysis was observed under UVA rays. Preliminary studies on both oxidants showed that the photolysis constants of S₂O₈²⁻ are higher than those of H₂O₂ under both UVA and UVC rays. Accordingly, estrogen degradation rates are about ten times higher in the presence of 1 mM of S₂O₈²⁻ than with 1 mM of H₂O₂ under both radiations. 
Under the same conditions, the mixture of interest required an about 40 times higher UV dose under UVA rays than under UVC. However, the UVA/S₂O₈²⁻ system requires only four times more UV dose than the conventional UVC/H₂O₂ system. Further studies were carried out in WWTP effluent with the UVC lamp. Compared with the tap water results, estrogen degradation rates were more strongly inhibited in the S₂O₈²⁻ system than with H₂O₂; it seems that SO₄·⁻ undergoes stronger quenching by a real effluent than HO·. Preliminary experiments showed that natural organic matter is mainly responsible for the radical quenching and that HO· and SO₄·⁻ have similar second-order rate constants with dissolved organic matter. However, the second-order rate constants of E1, E2 and EE2 are about ten times lower with SO₄·⁻ than with HO·. In conclusion, the UVA/S₂O₈²⁻ system showed encouraging results for the use of UVA rays, but further studies in WWTP effluent have to be carried out to confirm this interest. The degradation efficiency for other pollutants in the real matrix also needs to be investigated.
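The dose-based pseudo-first-order kinetics described above can be sketched as follows; this is a minimal illustration, and the rate constant and the 40:1 UVA/UVC dose ratio below are hypothetical placeholders, not measured values from the study:

```python
import math

def remaining_fraction(k_dose: float, dose: float) -> float:
    """Pseudo-first-order decay expressed against UV dose:
    C/C0 = exp(-k_D * D), with D in mJ/cm^2."""
    return math.exp(-k_dose * dose)

def dose_for_removal(k_dose: float, removal: float) -> float:
    """UV dose needed to reach a given fractional removal (e.g. 0.90)."""
    return -math.log(1.0 - removal) / k_dose

# Hypothetical dose-based rate constants (cm^2/mJ):
k_uvc = 2.0e-3           # UVC-driven system
k_uva = k_uvc / 40.0     # abstract reports ~40x higher dose needed under UVA

d_uvc = dose_for_removal(k_uvc, 0.90)  # dose for 90% removal under UVC
d_uva = dose_for_removal(k_uva, 0.90)  # same target under UVA
```

With first-order kinetics the required-dose ratio equals the inverse ratio of the rate constants, which is why the abstract can compare systems directly in units of UV dose.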

Keywords: AOPs, decontamination, estrogens, radicals, wastewater

Procedia PDF Downloads 163
248 Monitoring of Wound Healing Through Structural and Functional Mechanisms Using Photoacoustic Imaging Modality

Authors: Souradip Paul, Arijit Paramanick, M. Suheshkumar Singh

Abstract:

Traumatic injury is a leading worldwide health problem. Annually, millions of surgical wounds are created for the sake of routine medical care. The healing of these wounds is usually monitored by visual inspection, while the maximal restoration of tissue functionality remains a significant concern of clinical care. Although minor injuries heal well with proper care and medical treatment, large injuries are negatively influenced by various factors (vascular insufficiency, tissue coagulation) and heal poorly. Demographically, the number of people suffering from severe wounds and impaired healing conditions is a burden for both human health and the economy. An incomplete understanding of the functional and molecular mechanisms of tissue healing often leads to a lack of proper therapies and treatment; hence, strong and reliable medical guidance is necessary for monitoring tissue regeneration processes. Photoacoustic imaging (PAI) is a non-invasive, hybrid imaging modality that can provide a suitable solution in this regard. Light combined with sound offers structural, functional and molecular information from greater penetration depths. Therefore, molecular and structural mechanisms of tissue repair are readily observable with PAI both in the superficial layer and in the deep tissue region. Blood vessel formation and growth is an essential tissue-repairing component, as these vessels supply nutrition and oxygen to the cells in the wound region. Angiogenesis (the formation of new capillaries from existing blood vessels) contributes to new blood vessel formation during tissue repair, and the quality of healing directly depends on it. Other optical microscopy techniques can visualize angiogenesis at micron-scale penetration depths but are unable to provide deep tissue information. PAI overcomes this barrier due to its unique capability. 
It is ideally suited for deep tissue imaging and provides the rich optical contrast generated by hemoglobin in blood vessels. Hence, the early angiogenesis detection provided by PAI allows the medical treatment of the wound to be monitored. Along with functional properties, mechanical properties also play a key role in tissue regeneration. The wound heals through a dynamic series of physiological events, such as coagulation, granulation tissue formation, and extracellular matrix (ECM) remodeling. Therefore, the resulting changes in tissue elasticity can be identified using non-contact photoacoustic elastography (PAE). In a nutshell, angiogenesis and biomechanical properties are both critical parameters for tissue healing, and both can be characterized in a single imaging modality (PAI).

Keywords: PAT, wound healing, tissue coagulation, angiogenesis

Procedia PDF Downloads 79
247 Conceptual and Preliminary Design of Landmine Searching UAS at Extreme Environmental Condition

Authors: Gopalasingam Daisan

Abstract:

Landmines and unexploded ammunition pose a significant threat to people and animals: after a war, landmines remain in the ground and play a decisive role in civilian security. Children are at the highest risk because they are curious; an unexploded bomb can look like a tempting toy to an inquisitive child. The initial step in designing a UAS (Unmanned Aircraft System) for landmine detection is to choose an appropriate and effective sensor to locate landmines and other unexploded ammunition. The sensor weight, together with the weight of the sensor's supporting components, is taken as the payload weight. The mission requirement is to find the landmines in a particular area by planning a path that covers the entire desired area. The weight of the UAV (Unmanned Aerial Vehicle) can be estimated at the first phase of the design with good accuracy by various previously established techniques. The next crucial part of the design is to calculate the power requirement and the wing loading. The matching-plot technique is used to determine the thrust-to-weight ratio, which makes this process not only simple but also precise. The wing loading can be calculated easily from the stall equation. After these calculations, the wing area is determined from the wing loading, and the required power is calculated from the thrust-to-weight ratio. According to the power requirement, an appropriate engine can be selected from those available on the market, and the wing geometric parameters are chosen based on the conceptual sketch. An important step in the wing design is to choose a proper aerofoil that ensures a sufficient lift coefficient to satisfy the requirements. The next component is the tail; the tail area and other related parameters can be estimated or calculated to counteract the effect of the wing pitching moment. 
As the vertical tail design depends on many parameters, only its initial sizing can be done in this phase. The fuselage is another major component: its slenderness ratio is selected first, and its shape is determined by the sensor size so that the sensor fits under the fuselage. The landing gear is selected based on the controllability and stability requirements. The minimum and maximum wheel track and wheelbase can be determined from the crosswind and overturn-angle requirements; the minor components of the landing gear design are not the focus of this project. Another important task is to estimate the weight of the major components using empirical relations and to assign a mass to each component. The CG and moment of inertia are also determined for each component separately. The sensitivity of the weight calculation is taken into consideration to avoid extra material requirements and to reduce the cost of the design. Finally, the aircraft performance is calculated, in particular the V-n (velocity versus load factor) diagram for different flight conditions, such as undisturbed air and flight with gust velocity.
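The sizing chain described above (stall equation for wing loading, then wing area, then thrust from the matching-plot thrust-to-weight ratio) can be sketched numerically; every aircraft number below is a hypothetical illustration, not a design value from the study:

```python
RHO = 1.225  # sea-level air density, kg/m^3

def wing_loading_from_stall(v_stall: float, cl_max: float) -> float:
    """Stall equation: W/S = 0.5 * rho * V_stall^2 * CL_max (N/m^2)."""
    return 0.5 * RHO * v_stall ** 2 * cl_max

def wing_area(weight_n: float, ws: float) -> float:
    """Wing area from the wing-loading equation."""
    return weight_n / ws

def required_thrust(weight_n: float, t_over_w: float) -> float:
    """Thrust from the thrust-to-weight ratio read off the matching plot."""
    return weight_n * t_over_w

# Hypothetical small landmine-search UAV:
W = 15.0 * 9.81                                   # 15 kg MTOW incl. payload, N
ws = wing_loading_from_stall(v_stall=12.0, cl_max=1.4)
S = wing_area(W, ws)                              # m^2
T = required_thrust(W, t_over_w=0.35)             # N
```

Once S and T are fixed, the required power follows from the chosen propulsion model, and the engine is picked from the market accordingly.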

Keywords: landmine, UAS, matching plot, optimization

Procedia PDF Downloads 149
246 A User-Side Analysis of the Public-Private Partnership: The Case of the New Bundang Subway Line in South Korea

Authors: Saiful Islam, Deuk Jong Bae

Abstract:

The purpose of this study is to examine citizen satisfaction and the competitiveness of a Public-Private Partnership (PPP) project. The study focuses on PPP in the transport sector and investigates the New Bundang Subway Line (NBL) in South Korea as a case study. Most PPP studies are dominated by public- and private-sector interests, which are classified into three major areas: policy, finance, and management. This study explores the user perspective by assessing customer satisfaction with NBL cost and service quality, as well as the competitiveness of the NBL compared to other transport modes serving the Jeongja – Gangnam trip or vice versa. The regular Bundang Subway Line, the New Bundang Subway Line, bus and private vehicle are selected as the alternative transport modes. The study analysed customer satisfaction with the NBL and citizens' preferences among alternative transport modes based on a survey in Bundang district, South Korea. Respondents were residents and employees who live or work in Bundang city, divided into the following areas: Pangyo, Jeongjae – Sunae, Migeun – Ori – Jukjeon, and Imae – Yatap – Songnam. The survey was conducted in January 2015 over two weeks, and 753 responses were gathered. By applying the hedonic utility approach, the factors affecting the frequency of using the NBL were found to be overall customer satisfaction, convenience of access, and the socio-economic demographics of the individual. In addition, by applying the Analytic Hierarchy Process (AHP) method, the criteria influencing the decision to select among transport modes were identified. Those factors, along with the authors' judgement of the alternative transport modes and their associated criteria and sub-criteria, produced a priority list of user preferences regarding the alternative transport mode options. 
The study found that, overall, the regular Bundang Subway Line (BL), which was built and operated under a conventional procurement method, was selected as the most preferable transport mode due to its cost competitiveness. However, at the sub-criteria level, the NBL is competitive on service quality, particularly on journey time. A sensitivity analysis showed that the NBL can become the first-choice mode by increasing the weight associated with its cost criterion by 0.05. This means the NBL would need to reduce either its fare or its transfer fee, or a combination of the two, so as to reduce the current total cost by 25%. In addition, the competitiveness of the NBL could also be improved by increasing its convenience, for example by constructing an additional station or providing more access modes. Although these convenience improvements would add a few extra minutes of journey time, users found this acceptable. The findings and policy suggestions can contribute to the next phase of NBL development, showing that consideration should be given to the citizens' voice. The case study results also contribute to the PPP literature, specifically from a user-side perspective.
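The AHP weighting step used above can be illustrated with a minimal sketch based on the common geometric-mean (row) approximation of the priority vector; the pairwise comparison values below are hypothetical, not the survey-derived judgements:

```python
import numpy as np

def ahp_priorities(pairwise: np.ndarray) -> np.ndarray:
    """Approximate AHP priority vector via the geometric-mean row method:
    each weight is the row's geometric mean, normalized to sum to 1."""
    n = pairwise.shape[0]
    gm = np.prod(pairwise, axis=1) ** (1.0 / n)
    return gm / gm.sum()

# Hypothetical pairwise comparisons among cost, journey time, convenience
# (Saaty 1-9 scale; entry [i, j] = importance of criterion i over j):
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   2.0],
    [1/5.0, 1/2.0, 1.0],
])
w = ahp_priorities(A)  # weights for cost, time, convenience
```

The sensitivity analysis in the abstract then amounts to perturbing one weight (e.g. cost, by 0.05) and re-ranking the alternatives under the adjusted vector.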

Keywords: public private partnership, customer satisfaction, public transport, new Bundang subway line

Procedia PDF Downloads 322
245 Geochemical Modeling of Mineralogical Changes in Rock and Concrete in Interaction with Groundwater

Authors: Barbora Svechova, Monika Licbinska

Abstract:

Geochemical modeling of the mineralogical changes of various materials in contact with an aqueous solution is an important tool for predicting the processes and development of those materials at a site. The modeling focused on the interaction of groundwater in contact with the rock mass and its subsequent influence on concrete structures. The studied locality is located in Slovakia in the Liptov Basin, a significant inter-mountain lowland bordered on the north and south by the core mountain belt of the Tatras, where the crystalline basement rises to the surface accompanied by its Mesozoic cover. Groundwater in the area is bound to structures with a complicated geological setting; from the hydrogeological point of view, it is an environment with a crack-fracture character. The area is characterized by shallow surface circulation of groundwater without a significant collector structure, and chemically the groundwater has been classified as calcium-bicarbonate type with a high content of CO₂ and SO₄²⁻ ions. According to the European standard EN 206-1, these are waters with medium aggressiveness towards concrete. Three rock samples were taken from the area and, based on petrographic and mineralogical research, were evaluated as calcareous shale, micritic limestone and crystalline shale. These three rock samples were placed in demineralized water for one month, and the change in the chemical composition of the water was monitored. During the solution-rock interaction, the concentrations of all major ions increased, except for nitrates, whose concentration rose after a week but finished the experiment below its initial value. Another experiment was the interaction of groundwater from the studied locality with a concrete structure; the concrete sample was likewise left in the water for one month. 
The results of the experiment confirmed the assumed reduction in the concentrations of calcium and bicarbonate ions in the water due to the precipitation of amorphous forms of CaCO₃ on the surface of the sample. Conversely, the increase in the concentrations of sulphates, sodium, iron and aluminum due to leaching of the concrete was surprising. Chemical analyses from these experiments were processed in the PHREEQC program, which calculated the probability of the formation of amorphous mineral phases. From the results of the chemical analyses and the hydrochemical modeling of water collected in situ and water from the experiments, it was found that the groundwater at the site is unsaturated and, according to EN 206-1, shows moderate aggressiveness towards reinforced concrete structures, which will affect the homogeneity and integrity of the concrete; and that Ca, Na, Fe, HCO₃⁻ and SO₄²⁻ are leached from the rocks in the given area. Unsaturated waters will dissolve whatever they come into contact with in the solid matrix, and the speed of this process depends on the physicochemical parameters of the environment (T, ORP, p, n, water retention time in the environment, etc.).
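The saturation state that PHREEQC reports for each mineral phase can be illustrated with the standard saturation-index formula; the ion activities below are hypothetical placeholders, and the solution is treated as ideal (activities equal to concentrations), while the calcite Ksp is a commonly tabulated 25 °C value:

```python
import math

def saturation_index(iap: float, ksp: float) -> float:
    """SI = log10(IAP / Ksp): SI < 0 -> undersaturated (mineral dissolves),
    SI ~ 0 -> equilibrium, SI > 0 -> oversaturated (mineral may precipitate)."""
    return math.log10(iap / ksp)

# Calcite, CaCO3 <-> Ca2+ + CO3^2-, with hypothetical activities (mol/L):
a_ca, a_co3 = 1.0e-3, 1.0e-6
KSP_CALCITE = 10.0 ** -8.48   # commonly cited value at 25 °C

si = saturation_index(a_ca * a_co3, KSP_CALCITE)  # negative -> undersaturated
```

A negative SI like this one corresponds to the "unsaturated, aggressive" groundwater described in the abstract; a positive SI would match the CaCO₃ precipitation observed on the concrete sample.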

Keywords: geochemical modeling, concrete, dissolution, PHREEQC

Procedia PDF Downloads 175
244 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators

Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy

Abstract:

Once-Through-Steam-Generators (OTSGs) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner, the radiant section and the convective section. Natural gas is burned in staged diffusion flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then directed into the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially nitrogen oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and have pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on increasingly advanced tools to study and predict NOₓ emissions. With the increase in computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation processes. Moreover, to optimize the burner operating conditions with respect to NOₓ emissions, field characterization and measurements are usually performed. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation schedules. Therefore, the application of CFD appears more adequate for providing guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG, namely the commercial software ANSYS Fluent and the open-source software OpenFOAM. 
The RANS (Reynolds-Averaged Navier-Stokes) equations, closed with the k-epsilon turbulence model and combined with the Eddy Dissipation Concept to model combustion, are solved. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means of assessing the numerical modelling. Flame temperatures and chemical composition are used as reference fields for this validation, and the results show fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an emission characteristic map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are identified and correlated with the physics of the flow. CFD is, therefore, a useful tool for providing insight into the NOₓ emission phenomena in OTSGs: sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an emission characteristics map can be produced and then used as a guide for a field tune-up.
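A common way to quantify the mesh sensitivity analysis mentioned above is Richardson extrapolation between two mesh levels, which estimates the mesh-independent value of a monitored quantity; the temperatures below are hypothetical, not results from the study:

```python
def richardson_extrapolate(f_fine: float, f_coarse: float,
                           r: float, p: float) -> float:
    """Estimate the mesh-independent value from solutions on two meshes,
    given the refinement ratio r and the observed order of accuracy p:
    f_exact ~= f_fine + (f_fine - f_coarse) / (r**p - 1)."""
    return f_fine + (f_fine - f_coarse) / (r ** p - 1.0)

# Hypothetical peak flame temperatures (K) on coarse and fine meshes:
t_coarse, t_fine = 1840.0, 1860.0
t_exact = richardson_extrapolate(t_fine, t_coarse, r=2.0, p=2.0)
```

If the extrapolated value lies close to the fine-mesh solution, the monitored field (here a flame temperature) can be considered mesh-independent for the purposes of the validation.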

Keywords: combustion, computational fluid dynamics, nitrogen oxides emission, once-through-steam-generators

Procedia PDF Downloads 91
243 A Proper Continuum-Based Reformulation of Current Problems in Finite Strain Plasticity

Authors: Ladislav Écsi, Roland Jančo

Abstract:

Contemporary multiplicative plasticity models assume that the body's intermediate configuration consists of an assembly of locally unloaded neighbourhoods of material particles that cannot be reassembled together to give the overall stress-free intermediate configuration since the neighbourhoods are not necessarily compatible with each other. As a result, the plastic deformation gradient, an inelastic component in the multiplicative split of the deformation gradient, cannot be integrated, and the material particle moves from the initial configuration to the intermediate configuration without a position vector and a plastic displacement field when plastic flow occurs. Such behaviour is incompatible with the continuum theory and the continuum physics of elastoplastic deformations, and the related material models can hardly be denoted as truly continuum-based. The paper presents a proper continuum-based reformulation of current problems in finite strain plasticity. It will be shown that the incompatible neighbourhoods in real material are modelled by the product of the plastic multiplier and the yield surface normal when the plastic flow is defined in the current configuration. The incompatible plastic factor can also model the neighbourhoods as the solution of the system of differential equations whose coefficient matrix is the above product when the plastic flow is defined in the intermediate configuration. The incompatible tensors replace the compatible spatial plastic velocity gradient in the former case or the compatible plastic deformation gradient in the latter case in the definition of the plastic flow rule. They act as local imperfections but have the same position vector as the compatible plastic velocity gradient or the compatible plastic deformation gradient in the definitions of the related plastic flow rules. 
The unstressed intermediate configuration, i.e., the unloaded configuration after plastic flow from which the residual stresses have been removed, can always be calculated by integrating either the compatible plastic velocity gradient or the compatible plastic deformation gradient. However, the corresponding plastic displacement field becomes permanent, with both elastic and plastic components. The residual strains and stresses originate from the difference between the gradient of the compatible plastic/permanent displacement field and the prescribed incompatible second-order tensor characterizing the plastic flow in the definition of the plastic flow rule, which becomes an assignment statement rather than an equilibrium equation. The above also means that the elastic and plastic factors in the multiplicative split of the deformation gradient are, in reality, gradients, and that there is no problem with the continuum physics of elastoplastic deformations. The formulation is demonstrated in a numerical example using the regularized Mooney-Rivlin material model and modified equilibrium statements in which the intermediate configuration is calculated, and the analysis results are compared with those of the identical material model using the current equilibrium statements. The advantages and disadvantages of each formulation, including their relationship with multiplicative plasticity, are also discussed.
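For reference, the kinematic starting point that the abstract revisits is the standard multiplicative split of the deformation gradient and the associated plastic flow rule; the notation below is the conventional textbook form (with yield function f, plastic multiplier λ̇ and Cauchy stress σ), not the authors' reformulation:

```latex
% Multiplicative split and plastic flow rule (standard form)
F = F^{e}\,F^{p}, \qquad
L^{p} = \dot{F}^{p}\,(F^{p})^{-1}
      = \dot{\lambda}\,\frac{\partial f}{\partial \boldsymbol{\sigma}}
```

The abstract's central claim is that the incompatible tensor on the right-hand side of the flow rule cannot in general be the gradient of a plastic displacement field, which is exactly the point the proposed continuum-based reformulation addresses.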

Keywords: finite strain plasticity, continuum formulation, regularized Mooney-Rivlin material model, compatibility

Procedia PDF Downloads 98
242 High Capacity SnO₂/Graphene Composite Anode Materials for Li-Ion Batteries

Authors: Hilal Köse, Şeyma Dombaycıoğlu, Ali Osman Aydın, Hatem Akbulut

Abstract:

Rechargeable lithium-ion batteries (LIBs) have become promising power sources for a wide range of applications, such as mobile communication devices, portable electronic devices and electric/hybrid vehicles, due to their long cycle life, high voltage and high energy density. Graphite has been widely used as an anode material owing to its extraordinary electronic transport properties, large surface area and high electrocatalytic activity, although its limited specific capacity (372 mAh g⁻¹) cannot fulfil the increasing demand for lithium-ion batteries with higher energy density. To address this problem, many studies have investigated new electrode materials, and metal oxide/graphene composites are a promising class of materials for lithium-ion batteries since their specific capacities are much higher than that of graphene. Among them, SnO₂, an n-type, wide-band-gap semiconductor, has attracted much attention as an anode material for new-generation lithium-ion batteries due to its high theoretical capacity (790 mAh g⁻¹). However, it suffers from large volume changes and agglomeration associated with the Li-ion insertion and extraction processes, which bring about failure and loss of electrical contact of the anode. In addition, there is a large irreversible capacity during the first cycle due to the formation of an amorphous Li₂O matrix. To obtain high-capacity anode materials, we studied the synthesis and characterization of SnO₂-graphene nanocomposites and investigated the capacity of this free-standing anode material in this work. To this end, graphite oxide was first obtained from graphite powder by the Hummers method. To prepare the nanocomposites as a free-standing anode, graphite oxide particles were ultrasonicated in distilled water with SnO₂ nanoparticles (1:1, w/w). After vacuum filtration, the GO-SnO₂ paper was peeled off the PVDF membrane to obtain a flexible, free-standing GO paper. 
The GO structure was then reduced in hydrazine solution. The produced SnO₂-graphene nanocomposites were characterized by scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDS) and X-ray diffraction (XRD) analyses. CR2016 cells were assembled in a glove box (MBraun-Labstar). The cells were charged and discharged at 25°C between fixed voltage limits (2.5 V to 0.2 V) at a constant current density on a BST8-MA MTI model battery tester at a 0.2C charge-discharge rate. Cyclic voltammetry (CV) was performed at a scan rate of 0.1 mV s⁻¹, and electrochemical impedance spectroscopy (EIS) measurements were carried out using a Gamry instrument, applying a sine wave of 10 mV amplitude over a frequency range of 1000 kHz to 0.01 Hz.
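As a cross-check on the capacity figures quoted above (372 mAh g⁻¹ for graphite, ~790 mAh g⁻¹ for SnO₂), theoretical gravimetric capacity follows from Faraday's law; the electron counts below assume the usual LiC₆ intercalation limit for graphite and the reversible Li₄.₄Sn alloying limit for SnO₂:

```python
F = 96485.0  # Faraday constant, C/mol

def theoretical_capacity_mAh_per_g(n_electrons: float,
                                   molar_mass: float) -> float:
    """Q = n * F / (3.6 * M), with M in g/mol; result in mAh/g
    (the factor 3.6 converts C/g to mAh/g)."""
    return n_electrons * F / (3.6 * molar_mass)

q_graphite = theoretical_capacity_mAh_per_g(1.0, 72.06)   # LiC6: 1 e- per 6 C
q_sno2 = theoretical_capacity_mAh_per_g(4.4, 150.71)      # Li4.4Sn alloying
```

The graphite value lands on the 372 mAh g⁻¹ quoted in the abstract, and the SnO₂ value is consistent with the ~790 mAh g⁻¹ reversible alloying capacity; counting the additional conversion reaction with Li₂O would give a still higher first-cycle figure.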

Keywords: SnO₂-graphene, nanocomposite, anode, Li-ion battery

Procedia PDF Downloads 205
241 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among its possible fields of application. In all these fields the amount of collected data is increasing quickly, and with this increase the computation speed becomes the critical factor. Data reduction is one solution to this problem, and in rough sets, redundancy can be removed by computing a reduct. Many algorithms for generating reducts have been developed, but most of them are software implementations only and therefore have many limitations: a microprocessor uses a fixed word length and consumes considerable time both fetching and processing instructions and data, so software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects; for a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all attributes from the core. In this paper, a hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. The decision table is used as the input, and the output of the algorithm is a superreduct, i.e., a reduct with some additional, removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are most frequent in the decision table. 
The algorithm described above has two disadvantages: (i) it generates a superreduct instead of a reduct, and (ii) the additional first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not a key problem, and the core calculation can be achieved with a combinational logic block, adding relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. The core is calculated by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. The number of occurrences of an attribute is computed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and in software were compared. The results show an increase in the speed of data processing.
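The two-stage greedy algorithm described above (core from singleton entries of the discernibility matrix, then frequency-based enrichment until every entry is covered) can be sketched in software form; the toy decision table below is illustrative only:

```python
from collections import Counter
from itertools import combinations

def discernibility_entries(table, attrs, decision):
    """For each pair of objects with different decisions, the set of
    condition attributes that discern them (one matrix entry)."""
    entries = []
    for a, b in combinations(table, 2):
        if a[decision] != b[decision]:
            diff = frozenset(c for c in attrs if a[c] != b[c])
            if diff:
                entries.append(diff)
    return entries

def greedy_superreduct(entries):
    # Stage 1: the core -- attributes from singleton entries are indispensable.
    red = {next(iter(e)) for e in entries if len(e) == 1}
    # Stage 2: enrich the core with the most frequent remaining attributes
    # until every discernibility-matrix entry is covered.
    uncovered = [e for e in entries if not (e & red)]
    while uncovered:
        counts = Counter(a for e in uncovered for a in e)
        best = counts.most_common(1)[0][0]
        red.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return red

# Toy decision table: condition attributes a, b, c and decision d
table = [
    {"a": 0, "b": 0, "c": 1, "d": 0},
    {"a": 1, "b": 0, "c": 1, "d": 1},
    {"a": 0, "b": 1, "c": 0, "d": 1},
]
entries = discernibility_entries(table, ("a", "b", "c"), "d")
red = greedy_superreduct(entries)  # contains the core attribute "a"
```

In the FPGA version, stage 1 corresponds to the comparators plus singleton detector and stage 2 to the adder cascade counting attribute occurrences under sequential control.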

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 191
240 Mycophenolate-Induced Disseminated TB in a PPD-Negative Patient

Authors: Megan L. Srinivas

Abstract:

Individuals with underlying rheumatologic diseases such as dermatomyositis may not respond adequately to tuberculin (PPD) skin tests, creating false-negative results. These illnesses are frequently treated with immunosuppressive therapy, making proper identification of TB infection imperative. A 59-year-old Filipino man was diagnosed with dermatomyositis on the basis of rash, electromyography, and muscle biopsy. He was initially treated with IVIG infusions and transitioned to oral prednisone and mycophenolate, and his symptoms improved on this regimen. Six months after starting mycophenolate, the patient began having fevers, night sweats, and a productive cough without hemoptysis. He had moved from the Philippines 5 years prior to the dermatomyositis diagnosis, denied sick contacts, and was PPD-negative both at immigration and immediately prior to starting mycophenolate treatment. A third PPD was negative following the onset of these new symptoms. He was treated for community-acquired pneumonia, but his symptoms worsened over 10 days, and he developed watery diarrhea and a growing non-tender, non-mobile mass on the left side of his neck. A chest x-ray demonstrated a cavitary lesion in the right upper lobe suspicious for TB that had not been present one month earlier. Chest CT corroborated this finding, also exhibiting necrotic hilar and paratracheal lymphadenopathy. Neck CT demonstrated the left-sided mass to be cervical chain lymphadenopathy. Expectorated sputum and stool samples contained acid-fast bacilli (AFB), with cultures growing TB bacteria. Fine-needle biopsy of the neck mass (scrofula) also exhibited AFB. A brain MRI showed nodular enhancement suspected to be a tuberculoma. Mycophenolate was discontinued, and dermatomyositis treatment was switched to oral prednisone with a 3-day course of IVIG. The patient's infection showed sensitivity to standard RIPE (rifampin, isoniazid, pyrazinamide, and ethambutol) treatment. 
Within a week of starting RIPE, the patient's diarrhea subsided, the scrofula diminished, and his symptoms significantly improved. By the end of treatment week 3, the patient's sputum no longer contained AFB; he was removed from isolation and discharged to continue RIPE at home. He was discharged on oral prednisone, which effectively controlled his dermatomyositis. This case illustrates the unreliability of PPD tests in patients with long-term inflammatory diseases such as dermatomyositis. Other immunosuppressive therapies (adalimumab, etanercept, and infliximab) have been associated with conversion of latent TB to disseminated TB, and mycophenolate is another immunosuppressive agent with similar mechanistic properties. Thus, it is imperative that patients with long-term inflammatory diseases and high-risk TB factors who are initiating immunosuppressive therapy receive a TB blood test (such as a QuantiFERON Gold assay) prior to the initiation of therapy, to ensure that latent TB is detected before it can evolve into a disseminated form of the disease.

Keywords: dermatomyositis, immunosuppressant medications, mycophenolate, disseminated tuberculosis

Procedia PDF Downloads 182