Search results for: three step search
564 Study of Biofouling Wastewater Treatment Technology
Authors: Sangho Park, Mansoo Kim, Kyujung Chae, Junhyuk Yang
Abstract:
The International Maritime Organization (IMO) recognized the problem posed by invasive species and adopted the "International Convention for the Control and Management of Ships' Ballast Water and Sediments" in 2004, which came into force on September 8, 2017. In 2011, the IMO approved the "Guidelines for the Control and Management of Ships' Biofouling to Minimize the Transfer of Invasive Aquatic Species" to minimize the movement of invasive species by hull-attached organisms and required ships to manage the organisms attached to their hulls. Invasive species enter new environments through ships' ballast water and hull attachment. However, several obstacles to implementing these guidelines have been identified, including a lack of underwater cleaning equipment, regulations on underwater cleaning activities in ports, and difficulty accessing crevices in underwater areas. The shipping industry, which is responsible for implementing these guidelines, wants to follow them because removing the organisms attached to the hull saves fuel costs, but it anticipates significant difficulties due to the obstacles mentioned above. Robots or people remove the organisms attached to the hull underwater, and the resulting wastewater includes various species of organisms as well as particles of paint and other pollutants. Currently, there is no technology available to sterilize the organisms in the wastewater or stabilize the heavy metals in the paint particles. In this study, we aim to analyze the characteristics of the wastewater generated from the removal of hull-attached organisms and select the optimal treatment technology. The organisms in this wastewater are treated to meet the biological treatment standard (D-2) using the sterilization technology applied in ships' ballast water treatment systems. The heavy metals and other pollutants in the paint particles generated during removal are treated using stabilization technologies such as thermal decomposition. The wastewater is treated using a two-step process: 1) development of sterilization technology through pretreatment filtration equipment and electrolytic sterilization treatment, and 2) development of technology for removing particulate pollutants such as heavy metals and dissolved inorganic substances. Through this study, we will develop a biological removal technology and an environmentally friendly processing system for the waste generated after removal that meets the requirements of the government and the shipping industry and lays the groundwork for future treatment standards.
Keywords: biofouling, ballast water treatment system, filtration, sterilization, wastewater
Procedia PDF Downloads 109

563 Discrete Element Simulations of Composite Ceramic Powders
Authors: Julia Cristina Bonaldo, Christophe L. Martin, Severine Romero Baivier, Stephane Mazerat
Abstract:
Alumina refractories are commonly used in the steel and foundry industries. These refractories are prepared through a powder metallurgy route. They are a mixture of hard alumina particles and graphite platelets embedded into a soft carbonic matrix (binder). The powder can be cold pressed isostatically or uniaxially, depending on the application. The compact is then fired to obtain the final product. The quality of the product is governed by the microstructure of the composite and by the process parameters. The compaction behavior and the mechanical properties of the fired product depend greatly on the amount of each phase, on their morphology and on the initial microstructure. In order to better understand the link between these parameters and the macroscopic behavior, we use the Discrete Element Method (DEM) to simulate the compaction process and the fracture behavior of the fired composite. These simulations are coupled with well-designed experiments. Four mixes with various amounts of Al₂O₃ and binder were tested both experimentally and numerically. In DEM, each particle is modelled and the interactions between particles are taken into account through appropriate contact or bonding laws. Here, we model a bimodal mixture of large Al₂O₃ and small Al₂O₃ covered with a soft binder. This composite is itself mixed with graphite platelets. X-ray tomography images are used to analyze the morphologies of the different components. Large Al₂O₃ particles and graphite platelets are modelled in DEM as sets of particles bonded together. The binder is modelled as a soft shell that covers both large and small Al₂O₃ particles. When two particles with binder indent each other, they first interact through this soft shell. Once a critical indentation is reached (towards the end of compaction), hard Al₂O₃-Al₂O₃ contacts appear. In accordance with experimental data, DEM simulations show that the amount of Al₂O₃ and the amount of binder play a major role in the compaction behavior. The graphite platelets bend and break during the compaction, also contributing to the macroscopic stress. The firing step is modelled in DEM by ascribing bonds to particles which contact each other after compaction. The fracture behavior of the compacted mixture is also simulated and compared with experimental data. Both diametrical tests (Brazilian tests) and triaxial tests are carried out. Again, the link between the amount of Al₂O₃ particles and the fracture behavior is investigated. The methodology described here can be generalized to other particulate materials that are used in the ceramic industry.
Keywords: cold compaction, composites, discrete element method, refractory materials, x-ray tomography
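As a rough illustration of the two-regime contact described above (interaction through the soft binder shell first, with a hard Al₂O₃-Al₂O₃ contact appearing beyond a critical indentation), here is a minimal sketch; the linear force law, stiffness values and critical overlap are illustrative assumptions, not the parameters used in the study.

```python
def normal_contact_force(overlap, k_binder=5e4, k_hard=5e6, critical_overlap=1e-5):
    """Two-regime normal contact law of the kind described in the abstract.

    Below the critical indentation the particles interact only through the
    soft binder shell; beyond it a much stiffer Al2O3-Al2O3 contact adds to
    the response. Units and numbers are purely illustrative.
    """
    if overlap <= 0.0:
        return 0.0                      # particles not in contact
    if overlap <= critical_overlap:
        return k_binder * overlap       # soft binder regime
    # Hard contact engages once the binder shell is fully indented.
    return k_binder * critical_overlap + k_hard * (overlap - critical_overlap)

# Example: force at twice the critical indentation (illustrative units).
print(normal_contact_force(2e-5))
```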
Procedia PDF Downloads 138

562 Participatory Cartography for Disaster Reduction in Progreso, Yucatan, Mexico
Authors: Gustavo Cruz-Bello
Abstract:
Progreso is a coastal community in Yucatan, Mexico, highly exposed to floods produced by severe storms and tropical cyclones. A participatory cartography approach was conducted to help reduce flood disasters and assess social vulnerability within the community. The first step was to engage local authorities in risk management to facilitate the process. Two workshops were conducted. In the first, a poster-size print of a high-spatial-resolution satellite image of the town was used to gather information from the participants: eight women and seven men, among them construction workers, students, government employees and fishermen, whose ages ranged between 23 and 58 years. For the first task, participants were asked to locate emblematic places on the image to familiarize themselves with it. Then, they were asked to locate areas that get flooded and the buildings that they use as refuges, and to list actions that they usually take to reduce vulnerability, as well as to collectively come up with others that might reduce disasters. The spatial information generated at the workshops was digitized and integrated into a GIS environment. A printed version of the map was reviewed by local risk management experts, who validated the feasibility of the proposed actions. In the second workshop, we took the information back to the community for feedback. Additionally, a survey was applied in one household per block in the community to obtain socioeconomic, prevention and adaptation data. The information generated from the workshops was contrasted, through t and chi-squared tests, with the survey data in order to test the hypothesis that poorer or less educated people are less prepared to face floods (more vulnerable) and live near or among areas with a higher presence of floods. Results showed that a great majority of people in the community are aware of the hazard and are prepared to face it. However, there was not a consistent relationship between regularly flooded areas and people’s average years of education, household services, or house modifications against heavy rains. We could say that the participatory cartography intervention made participants aware of their vulnerability and made them collectively reflect on actions that can reduce disasters produced by floods. They also considered that the final map could be used as a communication and negotiation instrument with NGOs and government authorities. It was not found that poorer and less educated people are located in areas with a higher presence of floods.
Keywords: climate change, floods, Mexico, participatory mapping, social vulnerability
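The abstract mentions contrasting the mapped flood areas with the household survey through t and chi-squared tests; below is a minimal Python sketch of such a chi-squared test of independence. The contingency table values are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: households cross-classified by whether they
# lie in a regularly flooded area and by the education level of the household head.
#                  low education   higher education
table = np.array([[22, 18],        # inside regularly flooded areas
                  [25, 20]])       # outside regularly flooded areas

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p > 0.05 suggests no consistent relationship
```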
Procedia PDF Downloads 113

561 Enhancement of Mass Transport and Separations of Species in an Electroosmotic Flow by Distinct Oscillatory Signals
Authors: Carlos Teodoro, Oscar Bautista
Abstract:
In this work, we analyze theoretically the mass transport in a time-periodic electroosmotic flow through a parallel flat plate microchannel under different periodic functions of the applied external electric field. The microchannel connects two reservoirs having different constant concentrations of an electro-neutral solute, and the zeta potential of the microchannel walls is assumed to be uniform. The governing equations that allow determining the mass transport in the microchannel are the Poisson-Boltzmann equation, the modified Navier-Stokes equations, where the Debye-Hückel approximation is considered (the zeta potential is less than 25 mV), and the species conservation equation. These equations are nondimensionalized, and four dimensionless parameters that control the mass transport phenomenon appear: an angular Reynolds number, the Schmidt and Péclet numbers, and an electrokinetic parameter representing the ratio of the half-height of the microchannel to the Debye length. To solve the mathematical model, first, the electric potential is determined from the Poisson-Boltzmann equation, which allows determining the electric force for various periodic functions of the external electric field expressed as Fourier series. In particular, three different excitation waveforms of the external electric field are assumed: a) sawtooth, b) step, and c) irregular periodic functions. The periodic electric forces are substituted into the modified Navier-Stokes equations, and the hydrodynamic field is derived for each case of the electric force. From the obtained velocity fields, the species conservation equation is solved and the concentration fields are found. Numerical calculations were done by considering several binary systems where two dilute species are transported in the presence of a carrier. It is observed that there are different angular frequencies of the imposed external electric signal where the total mass transport of each species is the same, independently of the molecular diffusion coefficient. These frequencies are called crossover frequencies and are obtained graphically at the intersection when the total mass transport is plotted against the imposed frequency. The crossover frequencies differ depending on the Schmidt number, the electrokinetic parameter, the angular Reynolds number, and the type of signal of the external electric field. It is demonstrated that the mass transport through the microchannel is strongly dependent on the modulation frequency of the applied alternating electric field. Possible extensions of the analysis to more complicated pulsation profiles are also outlined.
Keywords: electroosmotic flow, mass transport, oscillatory flow, species separation
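For reference, a minimal sketch of the linearized electric-potential problem the abstract builds on; the notation (ψ for the double-layer potential, ζ for the zeta potential, κ for the inverse Debye length, H for the channel half-height, E(t) for the applied field) is assumed here rather than taken from the paper.

```latex
% Debye–Hückel (linearized Poisson–Boltzmann) problem between plates at y = ±H:
\frac{d^{2}\psi}{dy^{2}} = \kappa^{2}\psi, \qquad \psi(\pm H) = \zeta
\quad\Longrightarrow\quad
\psi(y) = \zeta\,\frac{\cosh(\kappa y)}{\cosh(\kappa H)}, \qquad \bar{\kappa} = \kappa H .
% Periodic electroosmotic body force, with the external field written as a Fourier series:
f(y,t) = \rho_{e}(y)\,E(t), \qquad \rho_{e}(y) = -\varepsilon\,\kappa^{2}\,\psi(y), \qquad
E(t) = \sum_{n}\bigl[a_{n}\cos(n\omega t) + b_{n}\sin(n\omega t)\bigr].
```

Here κ̄ = κH is the electrokinetic parameter (half-height over Debye length) that the abstract lists among the four governing dimensionless groups.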
Procedia PDF Downloads 216

560 Work-Life Balance: A Landscape Mapping of Two Decades of Scholarly Research
Authors: Gertrude I Hewapathirana, Mohamed M. Moustafa, Michel G. Zaitouni
Abstract:
The purposes of this research are: (a) to provide an epistemological and ontological understanding of WLB theory, practice, and research to illuminate how WLB evolved between 2000 and 2020, and (b) to analyze peer-reviewed research to identify the gaps, hotspots, underlying dynamics, theoretical and thematic trends, influential authors, research collaborations, geographic networks, and the multidisciplinary nature of WLB theory to guide future researchers. The research used a four-step bibliometric network analysis to explore five research questions. Using keywords such as WLB and associated variants, 1190 peer-reviewed articles were extracted from the Scopus database and transformed to a plain text format for filtering. The analysis was conducted using the R version 4.1 software (R Development Core Team, 2021) and several libraries such as bibliometrix, wordcloud, and ggplot2. We used the VOSviewer software (van Eck & Waltman, 2019) for network visualization. WLB theory has grown into a multifaceted, multidisciplinary field of research. There is a paucity of research between 2000 and 2005 and an exponential growth from 2006 to 2015. The rapid increase of WLB research in the USA, UK, and Australia reflects increasing workplace stresses due to hypercompetitive workplaces, inflexible work systems, and increasing diversity, as well as the emergence of WLB support mechanisms and legal and constitutional mandates to enhance employee and family wellbeing at multilevel social systems. A severe knowledge gap exists due to inadequate publications disseminating the "core" WLB research. "Locally-centralized-globally-discrete" collaboration among researchers indicates a "North-South" divide between developed and developing nations. A shortage of WLB research in developing nations and a lack of research collaboration hinder a global understanding of WLB as a universal phenomenon. Policymakers and practitioners can use the findings to initiate supporting policies and innovative work systems. The boundary expansion of WLB concepts, categories, relations, and properties would enable researchers and theoreticians to test a variety of new dimensions. This is the most comprehensive WLB landscape analysis to date, revealing emerging trends, concepts, networks, underlying dynamics, gaps, and growing theoretical and disciplinary boundaries. It portrays WLB as a universal theory.
Keywords: work-life balance, co-citation networks, keyword co-occurrence network, bibliometric analysis
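The abstract mentions keyword co-occurrence networks; the sketch below shows, in Python, one common way such a network is built from article keyword lists and ranked for "hotspot" terms (the study itself used R packages and VOSviewer). The toy records are hypothetical.

```python
from collections import Counter
from itertools import combinations

import networkx as nx

# Hypothetical records: each article contributes its author keywords.
articles = [
    ["work-life balance", "flexible work", "wellbeing"],
    ["work-life balance", "gender", "wellbeing"],
    ["work-life balance", "flexible work", "burnout"],
]

# Count how often each pair of keywords appears in the same article.
pair_counts = Counter()
for keywords in articles:
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

# Build the weighted co-occurrence graph (what VOSviewer-style maps visualize).
G = nx.Graph()
for (a, b), weight in pair_counts.items():
    G.add_edge(a, b, weight=weight)

# Simple centrality ranking to spot candidate "hotspot" keywords.
for node, score in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{node}: {score:.2f}")
```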
Procedia PDF Downloads 196

559 India’s Energy Transition, Pathways for Green Economy
Authors: B. Sudhakara Reddy
Abstract:
In the modern economy, energy is fundamental to virtually every product and service in use, and it has been developed on the basis of abundant, easy-to-transform, polluting fossil fuels. On one hand, increases in population and income levels combined with increased per capita energy consumption require energy production to keep pace with economic growth; on the other, the impact of fossil fuel use on environmental degradation is enormous. The conflicting policy objectives of protecting the environment while increasing economic growth and employment have resulted in this paradox. Hence, it is important to decouple economic growth from environmental degradation, and the search for green energy involving affordable, low-carbon, and renewable energies has become a global priority. This paper explores a transition to a sustainable energy system using the socio-economic-technical scenario method. This approach takes into account the multifaceted nature of transitions, which not only require the development and use of new technologies, but also changes in user behaviour, policy and regulation. Two scenarios are developed: a baseline business-as-usual (BAU) scenario and a green energy (GE) scenario. The baseline scenario assumes that current trends (energy use, efficiency levels, etc.) will continue in the future. India’s population is projected to grow by 23% during 2010–2030, reaching 1.47 billion. The real GDP, as per the model, is projected to grow by 6.5% per year on average between 2010 and 2030, reaching US$5.1 trillion or $3,586 per capita (base year 2010). Due to the increase in population and GDP, the primary energy demand will double in two decades, reaching 1,397 MTOE in 2030, with the share of fossil fuels remaining around 80%. The increase in energy use corresponds to an increase in energy intensity (TOE/US$ of GDP) from 0.019 to 0.036. Carbon emissions are projected to increase by 2.5 times from 2010, reaching 3,440 million tonnes with per capita emissions of 2.2 tons/annum. However, the carbon intensity (tons per US$ of GDP) decreases from 0.96 to 0.67. As per the GE scenario, energy use will reach 1,079 MTOE by 2030, a saving of about 30% over BAU. The penetration of renewable energy resources will reduce the total primary energy demand by 23% under GE. The reduction in fossil fuel demand and the focus on clean energy will reduce the energy intensity to 0.21 (TOE/US$ of GDP) and the carbon intensity to 0.42 (ton/US$ of GDP) under the GE scenario. The study develops new ‘pathways out of poverty’ by creating more than 10 million jobs and thus raising the standard of living of low-income people. Our scenarios are, to a great extent, based on existing technologies. The challenges to this path lie in the socio-economic-political domains. However, to attain a green economy, the appropriate policy package should be in place, which will be critical in determining the kind of investments that will be needed and the incidence of costs and benefits. These results provide a basis for policy discussions on investments, policies and incentives to be put in place by national and local governments.
Keywords: energy, renewables, green technology, scenario
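As a quick cross-check of how the quoted 2030 BAU figures fit together, the short sketch below recomputes some of the indicators from the abstract's own numbers. Interpreting the reported carbon intensity of 0.67 as kilograms (rather than tons) of CO₂ per US$, and attributing small gaps to rounding in the projections, are our assumptions, not the authors' statements.

```python
# 2030 BAU projections quoted in the abstract (taken at face value for this check).
population = 1.47e9        # people
gdp_usd = 5.1e12           # real GDP, US$ (base year 2010)
emissions_t = 3_440e6      # tonnes of CO2 per year
energy_mtoe = 1_397        # primary energy demand, million tonnes of oil equivalent

per_capita_emissions = emissions_t / population               # ~2.3 t per person per year
carbon_intensity_kg_per_usd = emissions_t * 1_000 / gdp_usd   # ~0.67 kg CO2 per US$
energy_per_capita_toe = energy_mtoe * 1e6 / population        # ~0.95 TOE per person per year

print(f"Per-capita emissions: {per_capita_emissions:.2f} t/yr")
print(f"Carbon intensity:     {carbon_intensity_kg_per_usd:.2f} kg CO2 per US$")
print(f"Energy per capita:    {energy_per_capita_toe:.2f} TOE/yr")
```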
Procedia PDF Downloads 248

558 Family-School-Community Engagement: Building a Growth Mindset
Authors: Michelann Parr
Abstract:
Family-school-community engagement enhances family-school-community well-being, collective confidence, and school climate. While it is often referred to in the literature as a positive thing for families, schools, and communities, it does not come without its struggles. While there are numerous things families, schools, and communities do each and every day to enhance engagement, it is often difficult to find our way to true belonging and engagement. Working our way past surface-level barriers is easy; we can provide childcare, transportation, resources, and refreshments. We can even change the environment so that families will feel welcome, valued, and respected. But there are often mindsets and perspectives buried deep below the surface, most often grounded in societal, familial, and political norms, expectations, pressures, and narratives. This work requires ongoing energy, commitment, and engagement from all stakeholders, including families, schools, and communities. Each and every day, we need to take a reflective and introspective stance toward what is said and done and how it supports the overall goal of family-school-community engagement. And whatever we do must occur within a paradigm of care in addition to one of critical thinking and social justice. Families, and those working with families, must not simply accept all that is given, but should instead ask these types of questions: a) How, and by whom, are the current philosophies and practices of family-school engagement interrogated? b) How might digging below surface-level meanings support understanding of what is being said and done? c) How can we move toward meaningful and authentic engagement that balances knowledge and power between family, school, district, community (local and global), and government? This type of work requires conscious attention and intentional decision-making at all levels, bringing us one step closer to authentic and meaningful partnerships. Strategies useful for building a growth mindset include: a) interrogating and exploring consistencies and inconsistencies by looking at what is done and what is not done through multiple perspectives; b) recognizing that enhancing family engagement and changing mindsets take place at the micro level (e.g., family and school), but also require active engagement and awareness at the macro level (e.g., community agencies, district school boards, government); c) taking action as an advocate or activist: negative narratives about families, schools, and communities should not be maintained; instead, critical and courageous conversations in and out of school should be initiated and sustained; and d) maintaining consistency, simplicity, and steady progress. All involved in engagement need to be aware of the struggles, but keep them in check with the many successes. Change may not be observed on a day-to-day basis or even immediately, but stepping back and looking from the outside in might change the view. Working toward a growth mindset will produce better results than a fixed mindset, and this takes time.
Keywords: family engagement, family-school-community engagement, parent engagement, parent involvement
Procedia PDF Downloads 183

557 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping
Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert
Abstract:
Determining soil elemental content and distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be directly used for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consisted of an MP320 Neutron Generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated. These data can be combined with geographical coordinates in a geographical information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that will acquire gamma spectra, process and sort data, calculate soil elemental content, and combine these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours was needed to acquire the necessary data for creating a carbon distribution map of an 8.5 ha field. This paper will briefly describe the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and the main characteristics and modes of work when conducting field surveys. Soil elemental distribution maps resulting from field surveys will be presented and discussed. These maps were similar to maps created on the basis of chemical analysis and to soil moisture maps determined by soil electrical conductivity. The maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable for agricultural soil elemental field mapping.
Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy
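To illustrate the last step of the pipeline described above (turning georeferenced elemental estimates into a map layer), here is a minimal Python sketch; the coordinates and carbon values are made-up placeholders, and in the actual workflow the gridding and mapping are done in ArcGIS rather than in Python.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical scan records: (longitude, latitude, carbon content in %)
# as they might come out of the spectra-processing step paired with GPS.
records = np.array([
    [-85.492, 32.591, 1.8],
    [-85.491, 32.592, 2.1],
    [-85.490, 32.590, 1.6],
    [-85.489, 32.593, 2.4],
])
lon, lat, carbon = records[:, 0], records[:, 1], records[:, 2]

# Regular grid over the surveyed area (what a GIS layer would rasterize).
grid_lon, grid_lat = np.meshgrid(
    np.linspace(lon.min(), lon.max(), 50),
    np.linspace(lat.min(), lat.max(), 50),
)

# Interpolate the point measurements onto the grid to form the elemental map.
carbon_map = griddata((lon, lat), carbon, (grid_lon, grid_lat), method="linear")
print(np.nanmin(carbon_map), np.nanmax(carbon_map))
```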
Procedia PDF Downloads 134

556 Body Fluids Identification by Raman Spectroscopy and Matrix-Assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry
Authors: Huixia Shi, Can Hu, Jun Zhu, Hongling Guo, Haiyan Li, Hongyan Du
Abstract:
The identification of human body fluids during forensic investigations is a critical step in determining key details and presenting strong evidence in a case. With the popularity of DNA analysis and improved detection technology, several questions must be resolved: whether a suspect’s DNA derived from saliva or semen, from menstrual or peripheral blood; how to determine whether a red substance or an aged trace found at the scene is blood; and how to determine who contributed which component in mixed stains. In recent years, molecular approaches based on mRNA, miRNA, DNA methylation and microbial markers have been developed increasingly, but they have the disadvantages of being expensive, time-consuming, and destructive. Physicochemical methods such as scanning electron microscopy/energy spectroscopy and X-ray fluorescence are utilized frequently, but their results show only one or two characteristics of the body fluid itself and do not work on unknown or mixed body fluid stains. This paper focuses on using two chemical methods, Raman spectroscopy and matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry, to discriminate among peripheral blood, menstrual blood, semen, saliva, vaginal secretions, urine and sweat. Firstly, the non-destructive, confirmatory, convenient and fast Raman spectroscopy method, combined with the more accurate MALDI-TOF mass spectrometry method, can fully distinguish one body fluid from the others. Secondly, 11 spectral signatures and specific metabolic molecules were obtained from the analysis of 70 samples. Thirdly, the Raman results showed that peripheral and menstrual blood, and saliva and vaginal secretions, have highly similar spectroscopic features; advanced statistical analysis of the multiple Raman spectra is required to classify one from another. On the other hand, it seems that lactic acid, detected by MALDI-TOF mass spectrometry, can differentiate peripheral from menstrual blood, but it is not a specific metabolic molecule; more sensitive markers will be analyzed in a further study. These results demonstrate the great potential of the developed chemical methods for forensic applications, although more work is needed for method validation.
Keywords: body fluids, identification, Raman spectroscopy, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry
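The abstract notes that advanced statistical analysis of the Raman spectra is needed to separate the most similar fluids; below is a minimal Python sketch of one common approach (dimensionality reduction followed by a discriminant classifier). The spectra are random stand-ins rather than measured data, and this particular pipeline is an assumption, not the authors' method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 40 Raman spectra (500 wavenumber bins each)
# for two hard-to-separate classes, e.g. peripheral vs. menstrual blood.
X = rng.normal(size=(40, 500))
X[20:, 100:110] += 0.5          # a subtle band difference distinguishing class 1
y = np.array([0] * 20 + [1] * 20)

# Reduce the spectra to a few principal components, then classify.
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```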
Procedia PDF Downloads 137

555 Optical Assessment of Marginal Sealing Performance around Restorations Using Swept-Source Optical Coherence Tomography
Authors: Rima Zakzouk, Yasushi Shimada, Yasunori Sumi, Junji Tagami
Abstract:
Background and purpose: Resin composite has become the main material for the restoration of caries in recent years due to its aesthetic characteristics, especially with the development of adhesive techniques. The quality of adhesion to tooth structures depends on an exchange process between inorganic tooth material and synthetic resin and on a micromechanical retention promoted by resin infiltration into partially demineralized dentin. Optical coherence tomography (OCT) is a noninvasive diagnostic method for obtaining high-resolution cross-sectional images of biological tissue at the micron scale. The aim of this study was to evaluate gap formation at the adhesive/tooth interface of a two-step self-etch adhesive applied with or without phosphoric acid pre-etching in different regions of teeth, using swept-source OCT (SS-OCT). Materials and methods: Round tapered cavities (2×2 mm) were prepared in the cervical part of bovine incisors and divided into two groups (n=10): in the first group (SE), the self-etch adhesive (Clearfil SE Bond) was applied, and in the second group (PA), the cavities were treated with acid etching before applying the self-etch adhesive. Subsequently, both groups were restored with Estelite Flow Quick Flowable Composite Resin and observed under OCT. Following 5000 thermal cycles, the same section was obtained again for each cavity using OCT at a 1310-nm wavelength. Scanning was repeated after two months to monitor the gap progress. The gap length was then measured using image analysis software, and statistical analysis between the two groups was performed using SPSS software. After that, the cavities were sectioned and observed under a confocal laser scanning microscope (CLSM) to confirm the OCT results. Results: The gaps formed at the bottom of the cavity were longer than those formed at the margin and the dento-enamel junction (DEJ) in both groups. On the other hand, the pre-etching treatment damaged the DEJ regions, creating longer gaps. After two months, the results showed that the gap length had progressed significantly in the bottom regions of both groups. In conclusion, phosphoric acid etching treatment did not reduce the gap length in most regions of the cavity. Significance: The bottom region of the cavity was more prone to gap formation than the margin and DEJ regions, and the DEJ was damaged by the phosphoric acid treatment.
Keywords: optical coherence tomography, self-etch adhesives, bottom, dento-enamel junction
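The abstract states that gap lengths were compared between the SE and PA groups statistically (with SPSS); the Python sketch below shows an analogous independent two-sample comparison. The gap values are simulated placeholders, and the choice of Welch's t-test is our assumption, not necessarily the test used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical gap lengths (micrometers) at the cavity bottom after
# thermocycling, for the SE group and the phosphoric-acid (PA) group.
se_gaps = rng.normal(loc=120, scale=30, size=10)
pa_gaps = rng.normal(loc=140, scale=30, size=10)

# Independent two-sample (Welch) t-test, analogous to the SPSS comparison.
t_stat, p_value = stats.ttest_ind(se_gaps, pa_gaps, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```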
Procedia PDF Downloads 227

554 Improving Photocatalytic Efficiency of TiO2 Films Incorporated with Natural Geopolymer for Sunlight-Driven Water Purification
Authors: Satam Alotibi, Haya A. Al-Sunaidi, Almaymunah M. AlRoibah, Zahraa H. Al-Omaran, Mohammed Alyami, Fatehia S. Alhakami, Abdellah Kaiba, Mazen Alshaaer, Talal F. Qahtan
Abstract:
This study presents a novel approach to harnessing the potential of natural geopolymer in conjunction with TiO₂ nanoparticles (TiO₂ NPs) for the development of highly efficient photocatalytic materials for water decontamination. The study begins with the formulation of a geopolymer paste derived from natural sources, which is subsequently applied as a coating on glass substrates and allowed to air-dry at room temperature. The result is a series of geopolymer-coated glass films, serving as the foundation for further experimentation. To enhance the photocatalytic capabilities of these films, a critical step involves immersing them in a suspension of TiO₂ NPs in water for varying durations. This immersion process yields geopolymer-loaded TiO₂ NP films with varying concentrations, setting the stage for comprehensive characterization and analysis. A range of advanced analytical techniques, including UV-Vis spectroscopy, Fourier-transform infrared spectroscopy (FTIR), Raman spectroscopy, scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), and atomic force microscopy (AFM), were meticulously employed to assess the structural, morphological, and chemical properties of the geopolymer-based TiO₂ films. These analyses provided invaluable insights into the materials' composition and surface characteristics. The culmination of this research effort sees the geopolymer-based TiO₂ films being repurposed as immobilized photocatalytic reactors for water decontamination under natural sunlight irradiation. Remarkably, the results revealed exceptional photocatalytic performance that exceeded the capabilities of conventional TiO₂-based photocatalysts. This breakthrough underscores the significant potential of natural geopolymer as a versatile and highly effective matrix for enhancing the photocatalytic efficiency of TiO₂ nanoparticles in water treatment applications. In summary, this study represents a significant advancement in the quest for sustainable and efficient photocatalytic materials for environmental remediation. By harnessing the synergistic effects of natural geopolymer and TiO₂ nanoparticles, these geopolymer-based films exhibit outstanding promise in addressing water decontamination challenges and contribute to the development of eco-friendly solutions for a cleaner and healthier environment.
Keywords: geopolymer, TiO2 nanoparticles, photocatalytic materials, water decontamination, sustainable remediation
Procedia PDF Downloads 67

553 Antibacterial Activity of Rosmarinus officinalis (Rosemary) and Murraya koenigii (Curry Leaves) against Multidrug Resistant S. aureus and Coagulase Negative Staphylococcus Species
Authors: Asma Naim, Warda Mushtaq
Abstract:
Staphylococcus species are among the most versatile and adaptive organisms. They are widespread and naturally found on the skin, mucosa and nose in humans. Among these, Staphylococcus aureus is the most important species. These organisms act as opportunistic pathogens and can infect various organs of the host, causing anything from minor skin infections to severe toxin-mediated diseases and life-threatening nosocomial infections. Staphylococcus aureus has acquired resistance against β-lactam antibiotics by the production of β-lactamase, and methicillin-resistant Staphylococcus aureus (MRSA) strains have also been reported with increasing frequency. MRSA strains have been associated with nosocomial as well as community-acquired infections. Medicinal plants have enormous potential as antimicrobial substances and have been used in traditional medicine. The search for medicinally valuable plants with antimicrobial activity is being emphasized due to increasing antibiotic resistance in bacteria. In the present study, the antibacterial potential of Rosmarinus officinalis (rosemary) and Murraya koenigii (curry leaves), common household herbs used in food as enhancers of flavor and aroma, was evaluated. The crude aqueous infusion, decoction and ethanolic extracts of curry leaves and rosemary, and the essential oil of rosemary, were investigated for antibacterial activity against multidrug-resistant Staphylococcus strains using the well diffusion method. Sixty multidrug-resistant clinical isolates of S. aureus (43) and coagulase-negative staphylococci (CoNS) (17) were screened against different concentrations of the crude extracts of Rosmarinus officinalis and Murraya koenigii. Out of these 60 isolates, 43 were sensitive to the aqueous infusion of rosemary, 23 to the aqueous decoction and 58 to the ethanolic extract, whereas 24 isolates were sensitive to the essential oil. In the case of the curry leaves, no antibacterial activity was observed for the aqueous infusion and decoction, while only 14 isolates were sensitive to the ethanolic extract. The aqueous infusion of rosemary (50% concentration) exhibited a zone of inhibition of 21 (±5.69) mm against CoNS and 17 (±4.77) mm against S. aureus. The zone of inhibition of the 50% aqueous decoction of rosemary was also larger against CoNS, 17 (±5.78) mm, than against S. aureus, 13 (±6.91) mm, and the 50% ethanolic extract showed almost similar zones of inhibition for S. aureus, 22 (±3.61) mm, and CoNS, 21 (±7.64) mm, whereas the essential oil of rosemary showed a greater zone of inhibition against S. aureus, 16 (±4.67) mm, than against CoNS, 15 (±6.94) mm. These results show that the ethanolic extract of rosemary has significant antibacterial activity. The aqueous infusion and decoction of curry leaves revealed no significant antibacterial potential against any of the staphylococcal species, and the ethanolic extract also showed only a weak response. Staphylococcus strains were susceptible to the crude extracts and essential oil of rosemary in a dose-dependent manner, where the aqueous infusion showed the highest zone of inhibition and the ethanolic extract also demonstrated antistaphylococcal activity. These results demonstrate that rosemary possesses antistaphylococcal activity.
Keywords: antibacterial activity, curry leaves, multidrug resistant, rosemary, S. aureus
Procedia PDF Downloads 248

552 Some Quality Parameters of Selected Maize Hybrids from Serbia for the Production of Starch, Bioethanol and Animal Feed
Authors: Marija Milašinović-Šeremešić, Valentina Semenčenko, Milica Radosavljević, Dušanka Terzić, Ljiljana Mojović, Ljubica Dokić
Abstract:
Maize (Zea mays L.) is one of the most important cereal crops and, as such, one of the most significant naturally renewable carbohydrate raw materials for the production of energy and a multitude of different products. The main goal of the present study was to investigate the suitability of selected maize hybrids of different genetic backgrounds, produced at the Maize Research Institute ‘Zemun Polje’, Belgrade, Serbia, for starch, bioethanol and animal feed production. All the hybrids are commercial, and their detailed characterization is important for the expansion of their different uses. The starches were isolated by using a 100-g laboratory maize wet-milling procedure. Hydrolysis experiments were done in two steps (liquefaction with Termamyl SC, and saccharification with SAN Extra L). Starch hydrolysates obtained by the two-step hydrolysis of the corn flour starch were subjected to fermentation by S. cerevisiae var. ellipsoideus under semi-anaerobic conditions. Digestibility based on enzymatic solubility was determined by the Aufréré method. All investigated ZP maize hybrids had very different physical characteristics and chemical compositions, which could allow various possibilities for their use. The amount of hard (vitreous) and soft (floury) endosperm in the kernel is considered one of the most important parameters that can influence the starch and bioethanol yields. Hybrids with a lower test weight and density and a greater proportion of the soft endosperm fraction had a higher yield, recovery and purity of starch. Among the chemical composition parameters, only starch content significantly affected the starch yield. Starch yields of the studied maize hybrids ranged from 58.8% in ZP 633 to 69.0% in ZP 808. The lowest bioethanol yield of 7.25% w/w was obtained for hybrid ZP 611k and the highest for hybrid ZP 434 (8.96% w/w). A very significant correlation was determined between kernel starch content and the bioethanol yield, as well as volumetric productivity (48 h) (r=0.66). The obtained results showed that the NDF, ADF and ADL contents in the whole maize plant of the observed ZP maize hybrids varied from 40.0% to 60.1%, 18.6% to 32.1%, and 1.4% to 3.1%, respectively. The difference in the digestibility of the dry matter of the whole plant between hybrids ZP 735 and ZP 560 amounted to 18.1%. Moreover, the differences in the contents of the lignocellulose fractions affected the differences in dry matter digestibility. From the results, it can be concluded that the genetic background of the selected maize hybrids plays an important part in estimating the technological value of maize hybrids for various purposes. The obtained results are of exceptional importance for breeding programs and the selection of the maize hybrids potentially most suitable for starch, bioethanol and animal feed production.
Keywords: bioethanol, biomass quality, maize, starch
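To make the reported correlation concrete, here is a minimal Python sketch of how a Pearson coefficient such as the quoted r = 0.66 between kernel starch content and bioethanol yield is computed; the per-hybrid values below are hypothetical stand-ins, not the measured data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-hybrid values: kernel starch content (%) and bioethanol yield (% w/w).
starch = np.array([66.1, 67.4, 68.2, 69.0, 70.3, 71.1])
ethanol = np.array([7.25, 7.60, 8.10, 7.90, 8.50, 8.96])

r, p = stats.pearsonr(starch, ethanol)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```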
Procedia PDF Downloads 222

551 Conservation Detection Dogs to Protect Europe's Native Biodiversity from Invasive Species
Authors: Helga Heylen
Abstract:
With dogs saving wildlife in New Zealand since 1890 and governments in Africa, Australia and Canada trusting them to give the best results, Conservation Dogs Ireland wants to introduce more detection dogs to protect Europe's native wildlife. Conservation detection dogs are fast, portable and endlessly trainable. They are a cost-effective, highly sensitive and non-invasive way to detect protected and invasive species and wildlife disease. Conservation dogs find targets up to 40 times faster than any other method. They give results instantly, with near-perfect accuracy. They can search for multiple targets simultaneously, with no reduction in efficacy. The European Red List indicates the decline in biodiversity has been most rapid in the past 50 years, and the risk of extinction has never been higher. Just two examples of major threats dogs are trained to tackle are: (I) Japanese Knotweed (Fallopia japonica), not only a serious threat to ecosystems, crops, and structures like bridges and roads - it can wipe out the entire value of a house. The property industry and homeowners are only just waking up to the full extent of the nightmare. When those working in construction on the roads move topsoil with a trace of Japanese Knotweed, it suffices to start a new colony. Japanese Knotweed grows up to 7 cm a day. It can stay dormant and resprout after 20 years. In the UK, the cost of removing Japanese Knotweed from the London Olympic site in 2012 was around £70m (€83m). UK banks already no longer lend on a house that has Japanese Knotweed on-site. Legally, landowners are now obliged to excavate Japanese Knotweed and have it removed to a landfill. More and more, we see Japanese Knotweed grow where a new house has been constructed and topsoil has been brought in. Conservation dogs are trained to detect small fragments of any part of the plant on sites and in topsoil. (II) Zebra mussels (Dreissena polymorpha) are a threat to many waterways in the world. They colonize rivers, canals, docks, lakes, reservoirs, water pipes and cooling systems. They live up to 3 years and will release up to one million eggs each year. Zebra mussels attach to surfaces like rocks, anchors, boat hulls, intake pipes and boat engines. They cause changes in nutrient cycles, reduction of plankton and increased plant growth around lake edges, leading to the decline of Europe's native mussel and fish populations. There is no solution, only costly measures to keep them at bay. With many interconnected networks of waterways, they have spread uncontrollably. Conservation detection dogs detect the zebra mussel from its early larval stage, which is still invisible to the human eye. Detection dogs are more thorough and cost-effective than any other conservation method, and will greatly complement and speed up the work of biologists, surveyors, developers, ecologists and researchers.
Keywords: native biodiversity, conservation detection dogs, invasive species, Japanese Knotweed, zebra mussel
Procedia PDF Downloads 196

550 Phytoremediation of Hydrocarbon-Polluted Soils: Assess the Potentialities of Six Tropical Plant Species
Authors: Pulcherie Matsodoum Nguemte, Adrien Wanko Ngnien, Guy Valerie Djumyom Wafo, Ives Magloire Kengne Noumsi, Pierre Francois Djocgoue
Abstract:
The identification of plant species with the capacity to grow on hydrocarbon-polluted soils is an essential step for phytoremediation. In view of developing phytoremediation in Cameroon, floristic surveys were conducted in 4 cities (Douala, Yaounde, Limbe, and Kribi). In each city, 13 hydrocarbon-polluted sites, as well as unpolluted (control) sites, were investigated using the quadrat method. 106 species belonging to 76 genera and 30 families were identified on the hydrocarbon-polluted sites, unlike the control sites, where floristic diversity was much higher (166 species contained in 125 genera and 50 families). Poaceae, Cyperaceae, Asteraceae and Amaranthaceae had the highest taxonomic richness on polluted sites (16, 15, 10 and 8 taxa, respectively). The Shannon diversity indices of the hydrocarbon-polluted sites (1.6 to 2.7 bits/ind.) were significantly lower than those of the control sites (2.7 to 3.2 bits/ind.). Based on a relative frequency > 10% and abundance > 7%, this study highlights more than ten plants predisposed to be effective in the clean-up of soils contaminated by hydrocarbons. Based on the floristic indicators, 6 species (Eleusine indica (L.) Gaertn., Cynodon dactylon (L.) Pers., Alternanthera sessilis (L.) R. Br. ex DC., Commelina benghalensis L., Cleome ciliata Schum. & Thonn. and Asystasia gangetica (L.) T. Anderson) were selected for a study to determine their capacity to remediate a soil contaminated with fuel oil (82.5 ml/kg of soil). The experiments, lasting 150 days, took into account three modalities arranged in a randomized design: Tn, uncontaminated planted soils (6); To, contaminated unplanted soils (3); and Tp, contaminated planted soils (18). Three of the six species (Eleusine indica, Cynodon dactylon, and Alternanthera sessilis) survived the climatic and soil conditions. E. indica presented a significantly higher growth rate for density and leaf area, while C. dactylon had a significantly higher growth rate for stem size and leaf number. A. sessilis showed stunted growth and development throughout the experimental period. The species Eleusine indica (L.) Gaertn. and Cynodon dactylon (L.) Pers. can be qualified as polluo-tolerant plant species, polluo-tolerance being the ability of a species to survive and develop in an environment subject to extreme physical and chemical disturbances.
Keywords: Cameroon, cleaning-up, floristic surveys, phytoremediation
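For reference, the Shannon diversity index reported above in bits per individual is computed from the relative abundances of the species recorded on a site; the notation below is ours:

```latex
H' = -\sum_{i=1}^{S} p_i \log_2 p_i \quad \text{(bits/ind.)}, \qquad p_i = \frac{n_i}{N},
```

where S is the number of species recorded on the site, n_i the number of individuals of species i, and N the total number of individuals sampled.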
Procedia PDF Downloads 243

549 Climate Change Law and Transnational Corporations
Authors: Manuel Jose Oyson
Abstract:
The Intergovernmental Panel on Climate Change (IPCC) warned in its most recent report that the entire world must “both mitigate and adapt to climate change if it is to effectively avoid harmful climate impacts.” The IPCC observed “with high confidence” a more rapid rise in total anthropogenic greenhouse gas (GHG) emissions from 2000 to 2010 than in the past three decades, emissions that “were the highest in human history” and which, if left unchecked, will entail a continuing process of global warming and can alter the climate system. Current efforts to respond to the threat of global warming, however, such as the United Nations Framework Convention on Climate Change and the Kyoto Protocol, have focused on states and fail to involve transnational corporations (TNCs), which are responsible for a vast amount of GHG emissions. Involving TNCs in the search for solutions to climate change is consistent with an acknowledgment by contemporary international law that there is an international role for other international persons, including TNCs, and departs from the traditional “state-centric” response to climate change. Putting the focus of GHG emissions away from states recognises that the activities of TNCs “are not bound by national borders” and that the international movement of goods meets the needs of consumers worldwide. Although there is no legally binding instrument that covers TNC activities or legal responsibilities generally, TNCs have increasingly been made legally responsible under international law for violations of human rights, exploitation of workers and environmental damage, but not for climate change damage. Imposing on TNCs a legally binding obligation to reduce their GHG emissions, or a legal liability for climate change damage, is arguably formidable and unlikely in the absence of a recognisable source of obligation in international law or municipal law. Instead, recourse to “soft law” and non-legally binding instruments may be a way forward for TNCs to reduce their GHG emissions and help in addressing climate change. Various studies have noted positive effects of voluntary approaches. TNCs have also in recent decades voluntarily committed to “soft law” international agreements. This development reflects a growing recognition among corporations in general, and TNCs in particular, of their corporate social responsibility (CSR). While CSR used to be the domain of “small, offbeat companies”, it has now become part of the mainstream. The paper argues that TNCs must voluntarily commit to reducing their GHG emissions and helping address climate change as part of their CSR. One, as a serious “global commons problem”, climate change requires international cooperation from multiple actors, including TNCs. Two, TNCs are not innocent bystanders but are responsible for a large part of GHG emissions across their vast global operations. Three, TNCs have the capability to help solve the problem of climate change. Even assuming arguendo that TNCs did not strongly contribute to the problem of climate change, society would have valid expectations for them to use their capabilities, knowledge base and advanced technologies to help address the problem. It would seem unthinkable for TNCs to do nothing while the global environment fractures.
Keywords: climate change law, corporate social responsibility, greenhouse gas emissions, transnational corporations
Procedia PDF Downloads 350

548 Ayurvastra: A Study on the Ancient Indian Textile for Healing
Authors: Reena Aggarwal
Abstract:
The use of textile chemicals in the various pre- and post-manufacturing processes has made the textile industry conscious of its negative contribution to environmental pollution. Popular environmentally friendly fibers such as recycled polyester and organic cotton are now increasingly used by fabric and apparel manufacturers. However, after these textiles or the finished apparel are manufactured, they have to be dyed with the same chemical dyes that are harmful and toxic to the environment. Dyeing is a major area of concern for the environment as well as for people who have chemical sensitivities, as it may cause nausea, breathing difficulties, seizures, etc. Ayurvastra, or herbal medical textiles, are one step ahead of the organic lifestyle, which supports the core concept of holistic well-being and also eliminates the impact of harmful chemicals and pesticides. There is a wide range of herbs that can be used not only for dyeing but also for providing medicinal properties to textiles, such as antibacterial, antifungal, antiseptic and antidepressant effects, and for treating insomnia, skin diseases, etc. The concept of herbal dyeing of fabric is to manifest herbal essence in every aspect of clothing, i.e., from production to end use, and additionally to eliminate the impact of harmful chemical dyes and chemicals, which are known to result in problems like skin rashes, headache, trouble concentrating, nausea, diarrhea, fatigue, muscle and joint pain, dizziness, difficulty breathing, irregular heartbeat and seizures. Herbal dyeing or finishing gives an extra edge to textiles as it adds an extra function to the fabric. The herbal extracts can be applied to the textiles by a simple process such as the pad-dry-cure method and act on the human body mainly through the skin, aiding in the treatment of disease or the management of medical conditions through their herbal properties. This paper, therefore, delves into producing Ayurvastra, which is a perfect amalgamation of cloth and wellness. Keeping that in mind, a range of antifungal socks and antibacterial napkins treated with turmeric and aloe vera were developed, which are recommended for the treatment of fungal and bacterial infections, respectively. Both the herbal antifungal socks and the antibacterial napkins proved to be efficient in managing and treating fungal and bacterial infections of the skin, respectively.
Keywords: ayurvastra, ayurveda, herbal, pandemic, sustainable
Procedia PDF Downloads 130

547 Calibration of Contact Model Parameters and Analysis of Microscopic Behaviors of Cuxhaven Sand Using the Discrete Element Method
Authors: Anjali Uday, Yuting Wang, Andres Alfonso Pena Olare
Abstract:
The Discrete Element Method is a promising approach to modeling the microscopic behaviors of granular materials. The quality of the simulations, however, depends on the model parameters utilized. The present study focuses on the calibration and validation of the discrete element parameters for Cuxhaven sand based on experimental data from triaxial and oedometer tests. A sensitivity analysis was conducted during the sample preparation stage and the shear stage of the triaxial tests. The influence of parameters such as the rolling resistance, inter-particle friction coefficient, confining pressure and effective modulus on the void ratio of the generated sample was investigated. During the shear stage, the effect of parameters such as the inter-particle friction coefficient, effective modulus, rolling resistance friction coefficient and normal-to-shear stiffness ratio is examined. The calibration of the parameters is carried out such that the simulations reproduce macro-mechanical characteristics like the dilation angle, peak stress, and stiffness. The above-mentioned calibrated parameters are then validated by simulating an oedometer test on the sand. The oedometer test results are in good agreement with experiments, which proves the suitability of the calibrated parameters. In the next step, the calibrated and validated model parameters are applied to forecast the micromechanical behavior, including the evolution of contact force chains, buckling of columns of particles, observation of non-coaxiality, and sample inhomogeneity during a simple shear test. The evolution of contact force chains vividly shows the distribution and alignment of strong contact forces. The changes in coordination number are in good agreement with the volumetric strain exhibited during the simple shear test. The vertical inhomogeneity of void ratios is documented throughout the shearing phase, which shows looser structures in the top and bottom layers. Buckling of columns is not observed due to the small rolling resistance coefficient adopted for the simulations. The non-coaxiality of principal stress and strain rate is also well captured. Thus, the micromechanical behaviors are well described using the calibrated and validated material parameters.
Keywords: discrete element model, parameter calibration, triaxial test, oedometer test, simple shear test
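To make the calibration idea concrete, here is a minimal Python sketch of a grid search over two contact parameters against macro-mechanical targets. The surrogate standing in for a full DEM triaxial run, the parameter ranges and the target values are illustrative assumptions, not the study's actual workflow or data.

```python
import itertools

# Hypothetical experimental targets from the triaxial test.
TARGET = {"peak_stress_kpa": 520.0, "dilation_angle_deg": 12.0}

def run_triaxial_dem(friction, rolling_resistance):
    """Stand-in for a full DEM triaxial simulation.

    Replaced here by a smooth surrogate so the sketch is runnable; the
    coefficients are made up and carry no physical meaning.
    """
    peak = 300.0 + 400.0 * friction + 150.0 * rolling_resistance
    dilation = 2.0 + 15.0 * friction + 8.0 * rolling_resistance
    return {"peak_stress_kpa": peak, "dilation_angle_deg": dilation}

def misfit(sim):
    # Relative error summed over the calibrated macro responses.
    return sum(abs(sim[key] - target) / target for key, target in TARGET.items())

# Grid sweep over the two most influential contact parameters.
frictions = [0.3, 0.4, 0.5, 0.6]
rolling_resistances = [0.0, 0.1, 0.2, 0.3]

best = min(
    itertools.product(frictions, rolling_resistances),
    key=lambda params: misfit(run_triaxial_dem(*params)),
)
print("calibrated (friction, rolling resistance):", best)
```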
Procedia PDF Downloads 120

546 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps
Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo
Abstract:
With the advent of WebGL, Web apps are now able to provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser running on the CPU issues GL commands, which render the images to be displayed by the currently running Web app, to the GPU, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience will be determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated by the outstanding commands, lowering the performance level of the CPU does not affect the user experience because it is already deteriorated by the retarded execution of GPU commands. Consequently, it would be desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme that is specialized for the Web app runtime environment. This approach incurs two technical challenges: identification of the bottleneck resource and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which one is the bottleneck, if any. The Window Manager draws the final screen using the processed results delivered from the GPU. Thus, the Window Manager is on the critical path that determines the quality of user experience and is executed purely by the CPU. The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measure frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%, and consequently, the proposed scheme decreases the performance level of the CPU by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the GPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur, and that may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level by periodically checking the CPU utilization. The proposed scheme reduced the energy consumption by 10.34% on average in comparison to the conventional Linux kernel, and it worsened FPS by only 1.07% on average.
Keywords: interactive applications, power management, QoS, Web apps, WebGL
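To illustrate the bottleneck identification step described above, here is a minimal Python sketch. The 90% and 60% Window Manager utilization thresholds come from the abstract; the smoothing weight, class names and sampling interface are our assumptions, and the actual frequency stepping is left to a separate governor.

```python
ALPHA = 0.3              # smoothing weight for the moving average (assumed value)
HIGH, LOW = 0.90, 0.60   # Window Manager utilization thresholds from the abstract

class BottleneckDetector:
    """Classifies the current execution flow from Window Manager CPU utilization.

    A weighted moving average avoids reacting to short utilization spikes.
    """

    def __init__(self):
        self.wm_avg = 0.0

    def update(self, wm_utilization: float) -> str:
        self.wm_avg = ALPHA * wm_utilization + (1.0 - ALPHA) * self.wm_avg
        if self.wm_avg > HIGH:
            return "cpu-bound"   # the Window Manager path dominates user experience
        if self.wm_avg < LOW:
            return "gpu-bound"   # outstanding GL commands dominate
        return "balanced"

# Example: feed periodic samples; a governor would then lower the performance
# level of the targeted processing unit one step at a time.
detector = BottleneckDetector()
for sample in [0.95, 0.97, 0.93, 0.55, 0.50]:
    print(detector.update(sample))
```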
Procedia PDF Downloads 192

545 The Potential of Role Models in Enhancing Smokers' Readiness to Change (Decision to Quit Smoking): A Case Study of Saudi National Anti-Smoking Campaign
Authors: Ghada M. AlSwayied, Anas N. AlHumaid
Abstract:
Smoking has been linked to thousands of deaths worldwide. Around three million adults continue to use tobacco each day in Saudi Arabia, a sign that smoking is prevalent among the Saudi population and is clearly a public health threat. Although awareness campaigns against smoking run continuously, smoking behavior is noticeably increasing as a common practice, especially among young adults across the world. Therefore, an essential step was to understand what motivates smokers to think about quitting. Can a graphic, emotional advertisement focusing on health consequences really make a difference? A case study was conducted on the annual national anti-smoking campaign delivered by the Saudi Ministry of Health in May 2017. The campaign's effects were assessed through the number of calls, the number of clinic visits, and online access to health messages during and after the campaign period, from May to August, compared with the previous campaign in 2016. An educational video was selected as the primary tool to deliver the smoking health message. The Minister of Health, acting as a role model for young adults, delivered a direct message to smokers while avoiding the use of smoking cues. Citing the serious consequences of smoking, the Minister of Health announced the cancellation of the media campaign and the redirection of its budget to smoking cessation clinics. Positive responses to and interactions with the campaign were remarkable, achieving high rates of recall and recognition. During the campaign, the number of calls to book a visit reached 45,880, and total online views reached 1,253,879. Clinic visits rose by a cumulative 213 percent. Interestingly, a total of 15,192 patients visited the clinics over three months, compared with merely 4,850 patients during the previous year's campaign period. Furthermore, around half of the patients who visited the clinics were aged 26 to 40. There was great progress in enhancing public awareness of 'where to go' to assist smokers in making a quit attempt. With regard to the stages of change theory, it was predicted that the direct-message technique would increase the proportion of patients in the contemplation and preparation stages. No process evaluation was obtained to assess the implementation of the campaign's activities.Keywords: smoking, health promotion, role model, educational material, intervention, community health
Procedia PDF Downloads 149544 Unveiling Adorno’s Concern for Revolutionary Praxis and Its Enduring Significance: A Philosophical Analysis of His Writings on Sociology and Philosophy
Authors: Marie-Josee Lavallee
Abstract:
Adorno's reputation as an abstract and pessimistic thinker, who indulged in a critique of capitalist society and culture without bothering to open prospects for change and who had no interest in political activism, has recently begun to be questioned. This paper, which has a twofold objective, will push revisionist readings a step further by putting forward the thesis that revolutionary praxis has been an enduring concern for Adorno, surfacing throughout his entire work. On the other hand, it will hold that his understanding of the relationship between theory and praxis, which will be explained by referring to Ernst Bloch's distinction between the warm and cold currents of Marxism, can help to interpret the paralysis of revolutionary practice in our own time in a new light. Philosophy and its tasks have been an enduring topic of Adorno's work from the 1930s to Negativ Dialektik. The writings in which he develops these ideas stand among his most obscure and abstract, so their strong ties to the political have remained largely overlooked. Adorno's undertaking of criticizing and 'redeeming' philosophy and metaphysics is inseparable from a concern for retrieving the capacity to act in the world and to change it. Philosophical problems are immanent to sociological problems, and vice versa, he underlines in his Metaphysik. Begriff und Probleme. The issue of truth cannot be severed from the contingent context of a given idea. As a critical undertaking that extracts its contents from reality, which is what philosophy should be in Adorno's view, philosophy has the potential to fully reveal the reification of the individual and of consciousness resulting from capitalist economic and cultural domination, thus opening the way to resistance and revolutionary change. While this project, in keeping with his usual method, is sketched mainly in negative terms, it also exhibits positive contours that depict a socialist society. Only in the latter could human suffering end and mutilated individuals experiment with reconciliation in an authentic way. That Adorno's continuous plea for philosophy's self-criticism and renewal hides an enduring concern for revolutionary praxis emerges clearly from a careful philosophical analysis of his writings on philosophy and a selection of his sociological work, coupled with references to his correspondence. This study points to the necessity of a serious re-evaluation of Adorno's relationship to the political, one that will bear on the interpretation of his whole oeuvre. In the second place, Adorno's dialectical conception of theory and praxis is enlightening for our own time, since it suggests that we are experiencing a phase of creative latency rather than an insurmountable impasse.Keywords: Frankfurt school, philosophy and revolution, revolutionary praxis, Theodor W. Adorno
Procedia PDF Downloads 123543 Remote Radiation Mapping Based on UAV Formation
Authors: Martin Arguelles Perez, Woosoon Yim, Alexander Barzilov
Abstract:
High-fidelity radiation monitoring is an essential component in the enhancement of the situational awareness capabilities of the Department of Energy's Office of Environmental Management (DOE-EM) personnel. In this paper, multiple units of unmanned aerial vehicles (UAVs), each equipped with a cadmium zinc telluride (CZT) gamma-ray sensor, are used for radiation source localization, which can provide vital real-time data for EM tasks. To achieve this goal, a fully autonomous swarm of multicopter-based UAVs in a 3D tetrahedron formation is used for surveying the area of interest and performing radiation source localization. The CZT sensor used in this study is suitable for small multicopter UAVs due to its small size and ease of interfacing with the UAV's onboard electronics for high-resolution gamma spectroscopy, enabling the characterization of radiation hazards. The multicopter platform, with its fully autonomous flight capability, is suitable for low-altitude applications such as radiation contamination sites. The conventional approach uses a single UAV mapping a predefined waypoint path to predict the relative location and strength of the source, which can be time-consuming for radiation localization tasks. The proposed UAV swarm-based approach can significantly improve the ability to search for and track radiation sources. In this paper, two approaches are developed using (a) a 2D planar circular formation (3 UAVs) and (b) a 3D tetrahedron formation (4 UAVs). In both approaches, accurate estimation of the gradient vector is crucial for heading angle calculation. Each UAV carries a CZT sensor; the real-time radiation data are used to calculate a bulk heading vector that produces the swarm's source-seeking behavior. A spinning formation is also studied for both cases to improve gradient estimation near a radiation source. In the 3D tetrahedron formation, the UAV located closest to the source is designated as the lead unit to maintain the tetrahedron formation in space. Such a formation demonstrated collective and coordinated movement for estimating the gradient vector of the radiation source and determining an optimal heading direction for the swarm. The proposed radiation localization technique is studied by computer simulation and validated experimentally in an indoor flight testbed using gamma sources. The technology presented in this paper provides the capability to readily add or replace radiation sensors on the UAV platforms under field conditions, enabling extensive measurement and greatly improving situational awareness and event management. Furthermore, the proposed radiation localization approach allows long-term measurements to be performed efficiently over wide areas of interest to prevent disasters and reduce dose risks to people and infrastructure.Keywords: radiation, unmanned aerial system (UAV), source localization, UAV swarm, tetrahedron formation
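As an editor's illustration of the gradient-based heading computation described above, the sketch below fits a linear radiation field to the simultaneous count rates of a four-UAV tetrahedron and uses the normalized gradient as the bulk heading vector. The least-squares formulation, units, and example values are assumptions, not the authors' algorithm.

```python
import numpy as np

def estimate_heading(positions, counts):
    """positions: (4, 3) UAV positions [m]; counts: (4,) CZT count rates [cps]."""
    positions = np.asarray(positions, dtype=float)
    counts = np.asarray(counts, dtype=float)
    # Design matrix [1, x, y, z] for a least-squares fit of c ≈ c0 + g·p.
    A = np.hstack([np.ones((positions.shape[0], 1)), positions])
    coeffs, *_ = np.linalg.lstsq(A, counts, rcond=None)
    gradient = coeffs[1:]                  # dc/dx, dc/dy, dc/dz
    norm = np.linalg.norm(gradient)
    if norm < 1e-9:
        return np.zeros(3)                 # no usable gradient information
    return gradient / norm                 # unit heading toward increasing intensity

# Toy example: the source lies roughly along +x from the formation centroid.
pos = [[0, 0, 10], [2, 0, 10], [1, 2, 10], [1, 1, 12]]
cps = [120.0, 180.0, 140.0, 130.0]
print(estimate_heading(pos, cps))
```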
Procedia PDF Downloads 99542 Assessing Adaptive Capacity to Climate Change and Agricultural Productivity of Farming Households of Makueni County in Kenya
Authors: Lilian Mbinya Muasa
Abstract:
Climate change is inevitable and a global challenge with long-term implications for the sustainable development of many countries today. The negative impacts of climate change are creating far-reaching social, economic, and environmental problems threatening the lives and livelihoods of millions of people in the world. Developing countries, especially those in sub-Saharan Africa, are more vulnerable to climate change impacts due to their weak ecosystems, low adaptive capacity, and over-reliance on rain-fed agriculture. In Kenya, 78% of rural communities are poor farmers who rely heavily on rain-fed agriculture and are thus directly affected by climate change impacts. Currently, many parts of Kenya are experiencing successive droughts, which are contributing to persistently unstable and declining agricultural productivity, especially in semi-arid eastern Kenya. As a result, thousands of rural communities repeatedly experience food insecurity, which plunges them into an ever-greater reliance on relief food from the government and non-governmental organizations. In addition, they have adopted poverty coping strategies to diversify their income, for instance, deforestation to burn charcoal, sand harvesting, and overgrazing, which instead contribute to environmental degradation. This research was conducted in Makueni County, which is classified as one of the most food-insecure counties in Kenya and is experiencing acute environmental degradation. The study aimed to analyze adaptive capacity to climate change across farming households of Makueni County in Kenya by 1) analyzing adaptive capacity to climate change and agricultural productivity across farming households, 2) identifying factors that contribute to differences in adaptive capacity across farming households, and 3) understanding the relationship between climate change, agricultural productivity, and adaptive capacity. The Analytical Hierarchy Process (AHP) was applied to determine adaptive capacity, and Total Factor Productivity (TFP) to determine agricultural productivity per household. Preliminary findings indicate an increase in the frequency of prolonged droughts and scanty rainfall, and a dramatic decline in agricultural production in Makueni County over the last 10 years. In addition, there is an over-reliance of households on indigenous knowledge, which is no longer dependable because of the unpredictable nature of climate change impacts. These findings on adaptive capacity across farming households provide the first step toward developing and implementing action-oriented climate change policies in Makueni County and Kenya.Keywords: adaptive capacity, agricultural productivity, climate change, vulnerability
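For readers unfamiliar with AHP, a minimal sketch of how indicator weights can be derived from a pairwise comparison matrix is shown below; the indicators, judgments, and values are hypothetical and are not taken from the study.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights and consistency ratio from a pairwise comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                      # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                     # normalized priority weights
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)    # Saaty's random index
    return w, ci / ri                                # weights, consistency ratio

# Hypothetical adaptive-capacity indicators: income diversity, irrigation access, education.
pairwise = [[1, 3, 5],
            [1 / 3, 1, 2],
            [1 / 5, 1 / 2, 1]]
weights, cr = ahp_weights(pairwise)
print(weights, cr)   # CR < 0.1 indicates acceptably consistent judgments
```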
Procedia PDF Downloads 326541 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries
Authors: Ahmed Elaksher
Abstract:
Over the last few years, significant progress has been made and new approaches have been proposed for the efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs) at reduced cost compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without performing a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or identical rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM digital elevation models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data was collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data was processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts by performing a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras' positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refinement step using sub-pixel image information for high matching accuracy. Tests with different numbers and configurations of control points were conducted. Camera calibration parameters estimated from commercial software and those obtained with laboratory procedures were comparable. Exposure station positions agreed to within a few centimeters, and differences among orientation angles were insignificant, at less than three arc-seconds. DEM differencing was performed between the generated DEMs, and vertical shifts of only a few centimeters were found.Keywords: UAV, photogrammetry, SfM, DEM
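A minimal numpy sketch of the DEM differencing step is given below; the grids, nodata convention, and summary statistics are illustrative assumptions, not the author's code.

```python
import numpy as np

def dem_difference_stats(dem_a, dem_b, nodata=-9999.0):
    """Summarize vertical discrepancies between two co-registered DEM grids."""
    a = np.asarray(dem_a, dtype=float)
    b = np.asarray(dem_b, dtype=float)
    valid = (a != nodata) & (b != nodata)       # ignore nodata cells
    d = a[valid] - b[valid]                     # per-cell vertical difference [m]
    return {
        "mean_shift_m": float(d.mean()),        # systematic vertical offset
        "rmse_m": float(np.sqrt((d ** 2).mean())),
        "max_abs_m": float(np.abs(d).max()),
    }

# Toy 3x3 grids, e.g., an SfM DEM with a ~2 cm offset from a photogrammetric DEM.
sfm = np.array([[10.02, 10.03, 10.01], [9.99, 10.02, 10.04], [10.00, 10.01, 10.02]])
photo = np.full((3, 3), 10.00)
print(dem_difference_stats(sfm, photo))
```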
Procedia PDF Downloads 294540 Delhi Metro: A Race towards Zero Emission
Authors: Pramit Garg, Vikas Kumar
Abstract:
In December 2015, all the members of the United Nations Framework Convention on Climate Change (UNFCCC) unanimously adopted the historic Paris Agreement. Under the agreement, 197 countries have agreed to reduce the use of fossil fuels and cut carbon emissions to reach net carbon neutrality by 2050 and limit the rise in global temperature to 2°C by the year 2100. Globally, transport accounts for 23% of the energy-related CO2 that feeds global warming. Decarbonization of the transport sector is an essential step towards achieving India's nationally determined contributions and net zero emissions by 2050. Metro rail systems are playing a vital role in the decarbonization of the transport sector as they create metro cities for the "21st-century world" that could ensure "mobility, connectivity, productivity, safety and sustainability" for the populace. Metro rail was introduced in Delhi in 2002 to decarbonize the Delhi-National Capital Region and to provide a sustainable mode of public transportation. Metro rail projects significantly contribute to pollution reduction and are thus a prerequisite for sustainable development. The Delhi Metro is the first metro system in the world to earn carbon credits from Clean Development Mechanism (CDM) projects registered under the United Nations Framework Convention on Climate Change. A good metro project with reasonable network coverage attracts a modal shift from various private modes and hence puts fewer vehicles on the road, restraining pollution at the source. The avoided greenhouse gas emissions from the vehicles of modal-shift passengers and the lower emissions due to decongested roads together contribute to an overall reduction in atmospheric pollution. The reduction in emissions over the horizon period 2002 to 2019 has been estimated using emission standards and deterioration factors for different categories of vehicles. Presently, our results indicate that the Delhi Metro system has reduced motorized road trips by approximately 17.3%, resulting in a significant emission reduction. Overall, the Delhi Metro, with an immediate catchment area covering 17% of the National Capital Territory of Delhi (NCTD), is today helping to avoid 387 tonnes of emissions per day and 141.2 ktonnes of emissions yearly. The findings indicate that the metro rail system is driving cities towards a more livable environment.Keywords: Delhi metro, GHG emission, sustainable public transport, urban transport
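The avoided-emissions logic sketched in the abstract (modal-shift activity multiplied by per-kilometre emission factors adjusted with deterioration factors) might look roughly like the following; every number and the compounding deterioration model below are hypothetical and are not from the study.

```python
def deteriorated_factor(base_g_per_km, deterioration_rate, age_years):
    """Apply a simple multiplicative deterioration model to a base emission factor
    (an assumed model, not the official deterioration factors)."""
    return base_g_per_km * (1 + deterioration_rate) ** age_years

def avoided_emissions_tonnes_per_day(shifted_trips):
    """shifted_trips: list of (daily trips shifted to metro, avg trip km, g CO2-eq/km)."""
    grams = sum(trips * km * ef for trips, km, ef in shifted_trips)
    return grams / 1e6  # grams -> tonnes

# Hypothetical modal-shift mix: cars, two-wheelers, auto-rickshaws.
mix = [
    (150_000, 10.0, deteriorated_factor(170.0, 0.02, 8)),   # cars
    (300_000, 8.0, deteriorated_factor(42.0, 0.02, 8)),     # two-wheelers
    (60_000, 9.0, deteriorated_factor(80.0, 0.02, 8)),      # auto-rickshaws
]
print(f"avoided emissions ≈ {avoided_emissions_tonnes_per_day(mix):.0f} t/day")
```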
Procedia PDF Downloads 125539 Polypeptide Modified Carbon Nanotubes – Mediated GFP Gene Transfection for H1299 Cells and Toxicity Assessment
Authors: Pei-Ying Lo, Jing-Hao Ciou, Kai-Cheng Yang, Jia-Huei Zheng, Shih-Hsiang Huang, Kuen-Chan Lee, Er-Chieh Cho
Abstract:
As-produced CNTs are insoluble in all organic solvents and aqueous solutions, which has imposed limitations on the use of CNTs. Therefore, how to debundle carbon nanotubes and modify them for further use is an important issue. There are several methods for the dispersion of CNTs in water using covalent attachment of hydrophilic groups to the surface of the tubes. These methods, however, alter the electronic structure of the nanotubes by disrupting the network of sp2 hybridized carbons. In order to keep the nanotubes' intrinsic mechanical and electrical properties intact, non-covalent interactions are increasingly being explored as an alternative route for dispersion. Apart from conventional surfactants such as sodium dodecylsulfate (SDS) or sodium dodecylbenzenesulfonate (SDBS), which are highly effective in dispersing CNTs, biopolymers have received much attention as dispersing agents due to the anticipated biocompatibility of the dispersed CNTs. The pyrenyl group is also known to interact strongly with the basal plane of graphene via π-stacking. In this study, the synthesis of highly re-dispersible pyrene-modified poly-L-lysine (PBPL) and pyrene-modified poly(D-Glu, D-Lys) (PGLP) biopolymers is reported. To provide evidence of the safety of the PBPL/CNT and PGLP/CNT materials used in this study, H1299 and HCT116 cells were incubated with the PBPL/CNT and PGLP/CNT materials for toxicity analysis by MTS assay. The results of the MTS assays indicated no significant cellular toxicity in H1299 and HCT116 cells. Furthermore, the fluorescence marker fluorescein isothiocyanate (FITC) was added to the PBPL and PGLP dispersions. Fluorescence measurements showed that the chemical functionalisation of the PBPL/CNT and PGLP/CNT conjugates with the fluorescence marker was successful. The fluorescent PBPL/CNT and PGLP/CNT conjugates could find application in medical imaging. In the next step, the GFP gene was immobilized onto the PBPL/CNT conjugates through electrostatic interaction. GFP-transfected cells that emitted fluorescence were imaged and counted under a fluorescence microscope. Due to the unique biocompatibility of the PBPL-modified CNTs, the GFP gene could be transported into H1299 cells without using antibodies. The applicability of such soluble and chemically functionalised polypeptide/CNT conjugates in biomedicine is currently being investigated. We expect that this polypeptide/CNT system will be a safe and multi-functional nanomedical delivery platform and contribute to future medical therapy.Keywords: carbon nanotube, nanotoxicology, GFP transfection, polypeptide/CNT hybrids
Procedia PDF Downloads 339538 Retrospective Cartography of Tbilisi and Surrounding Area
Authors: Dali Nikolaishvili, Nino Khareba, Mariam Tsitsagi
Abstract:
Tbilisi has been the capital of Georgia since the 5ᵗʰ century. In the historical past, the city area was covered by forest. Nowadays the situation has changed dramatically. Dozens of problems are caused by the damage and destruction of the green cover. At first glance, the solution seems uncomplicated (planting trees and creating green quarters), but, given the increasing tendency toward built-up areas, the problem remains unsolved. Finding ways to overcome such obstacles is important, not least for protecting public health. The main aim of the research was the retrospective cartography of the forest area of Tbilisi using GIS technology and remote sensing. Research on the dynamics of forest cover in Tbilisi and its surroundings included the following steps. First, the forest dynamics of Tbilisi and its surroundings were assessed; this survey was based mainly on the retrospective mapping method. Next, narrative sources were studied, compared, and identified using GIS technology. Finally, changes from the 1980s to the present day were analyzed on the basis of the interpretation of remotely sensed images. After creating a unified cartographic basis, maps and plans of different periods were linked to this geodatabase. Data on green parks, individual old plants in private yards, and respondents' information (collected with a questionnaire created in advance) were added to the basic database, as were the general plan of Tbilisi and scientific works. On the basis of an analysis of historical sources, including cartographic ones, forest-cover maps for different periods were made. In addition, a catalogue of individual green parks (location, area, typical composition, name, and so on) was compiled, which became the basis for several thematic maps. Areas with a high rate of green-area degradation were identified. Several maps depicting the dynamics of the forest cover of Tbilisi were created and analyzed. Methods for linking data from old cartographic sources to the modern basis were also developed; the results may be used in the urban planning of Tbilisi. Understanding, perceiving, and analyzing the real condition of the green cover in Tbilisi and its problems will, in turn, help in taking appropriate measures to maintain ancient plants, develop forests, and properly plan parks, squares, and recreational sites, because a healthy environment is the main condition for human health and for the rational development of the city.Keywords: catalogue of green area, GIS, historical cartography, cartography, remote sensing, Tbilisi
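As an illustrative aside (an assumption about how the change analysis could be carried out, not the study's own workflow), comparing two co-registered forest masks from different periods could be done as follows.

```python
import numpy as np

def forest_change(mask_old, mask_new, cell_size_m=30.0):
    """mask_*: 2D arrays, 1 = forest, 0 = non-forest; cell_size_m: pixel edge [m]."""
    old = np.asarray(mask_old, dtype=bool)
    new = np.asarray(mask_new, dtype=bool)
    cell_ha = (cell_size_m ** 2) / 10_000.0           # pixel area in hectares
    return {
        "forest_old_ha": old.sum() * cell_ha,
        "forest_new_ha": new.sum() * cell_ha,
        "loss_ha": (old & ~new).sum() * cell_ha,      # forest converted to non-forest
        "gain_ha": (~old & new).sum() * cell_ha,      # newly planted / regrown area
    }

# Toy 4x4 masks, e.g., a digitized 1980s map sheet versus a recent classified image.
m1980 = np.array([[1, 1, 1, 0], [1, 1, 0, 0], [1, 1, 1, 1], [0, 1, 1, 1]])
m2019 = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [1, 0, 1, 1], [0, 0, 1, 1]])
print(forest_change(m1980, m2019))
```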
Procedia PDF Downloads 137537 Automatic Differentiation of Ultrasonic Images of Cystic and Solid Breast Lesions
Authors: Dmitry V. Pasynkov, Ivan A. Egoshin, Alexey A. Kolchev, Ivan V. Kliouchkin
Abstract:
In most cases, typical cysts are easily recognized at ultrasonography. The specificity of this method for typical cysts reaches 98%, and it is usually considered the gold standard for typical cyst diagnosis. However, all of the following features must be present to conclude that a lesion is a typical cyst: a clear margin, the absence of internal echoes, and dorsal acoustic enhancement. At the same time, not every breast cyst is typical. This is especially characteristic of protein-containing cysts, which may have significant internal echoes. On the other hand, some solid lesions (predominantly malignant) may have a cystic appearance and may be falsely accepted as cysts. Therefore, we tried to develop an automatic method for differentiating cystic and solid breast lesions. Materials and methods. The input data were digital ultrasonography images with 256 gray levels (Medison SA8000SE, Siemens X150, Esaote MyLab C). Identification of the lesion on these images was performed in two steps. In the first step, the region of interest (the lesion contour) was searched for and selected. Selection of this region was carried out using a sigmoid filter whose threshold was calculated from the empirical distribution function of the image brightness and, if necessary, corrected according to the average brightness of the image points with the highest brightness gradient. In the second step, the selected region was assigned to one of the lesion groups according to the statistical characteristics of its brightness distribution. The following characteristics were used: entropy, coefficients of linear and polynomial regression, quantiles of different orders, the average brightness gradient, etc. To determine the decisive criterion for assigning a lesion to one of the groups (cystic or solid), training sets of these brightness-distribution characteristics were obtained separately for benign and malignant lesions. To test our approach, we used a set of 217 ultrasonic images of 107 cystic lesions (including 53 atypical ones, which are difficult to differentiate by eye) and 110 solid lesions. All lesions were cytologically and/or histologically confirmed. Visual identification was performed by a trained specialist in breast ultrasonography. Results. Our system correctly distinguished all (107, 100%) typical cysts, 107 of 110 (97.3%) solid lesions, and 50 of 53 (94.3%) atypical cysts. In contrast, with the bare eye it was possible to correctly identify all (107, 100%) typical cysts, 96 of 110 (87.3%) solid lesions, and 32 of 53 (60.4%) atypical cysts. Conclusion. The automatic approach significantly surpasses the visual assessment performed by a trained specialist. The difference is especially large for atypical cysts and for hypoechoic solid lesions with a clear margin. These data may have clinical significance.Keywords: breast cyst, breast solid lesion, differentiation, ultrasonography
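For orientation only, the two ingredients named above (a sigmoid filter thresholded on the empirical brightness distribution, and brightness-distribution statistics such as entropy, quantiles, and the average gradient) can be sketched as follows; the parameter values and toy regions are assumptions, not the authors' settings.

```python
import numpy as np

def sigmoid_filter(image, quantile=0.5, gain=0.05):
    """Enhance an image with a sigmoid whose threshold comes from the empirical
    brightness distribution (the quantile and gain values here are assumptions)."""
    img = np.asarray(image, dtype=float)
    threshold = np.quantile(img, quantile)
    return 1.0 / (1.0 + np.exp(-gain * (img - threshold)))

def brightness_features(roi):
    """Simple statistics of the brightness distribution inside a selected region."""
    r = np.asarray(roi, dtype=float).ravel()
    hist, _ = np.histogram(r, bins=32, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    gy, gx = np.gradient(np.asarray(roi, dtype=float))
    return {
        "entropy": float(-(p * np.log2(p)).sum()),
        "q25": float(np.quantile(r, 0.25)),
        "q75": float(np.quantile(r, 0.75)),
        "mean_gradient": float(np.mean(np.hypot(gx, gy))),
    }

# Toy ROIs: a dark, homogeneous (cyst-like) region vs a brighter, textured one.
rng = np.random.default_rng(0)
cystic_roi = rng.normal(20, 3, (64, 64)).clip(0, 255)
solid_roi = rng.normal(90, 25, (64, 64)).clip(0, 255)
print(sigmoid_filter(solid_roi).mean())   # enhanced map used for contour selection
print(brightness_features(cystic_roi))
print(brightness_features(solid_roi))
```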
Procedia PDF Downloads 269536 Identification, Synthesis, and Biological Evaluation of the Major Human Metabolite of NLRP3 Inflammasome Inhibitor MCC950
Authors: Manohar Salla, Mark S. Butler, Ruby Pelingon, Geraldine Kaeslin, Daniel E. Croker, Janet C. Reid, Jong Min Baek, Paul V. Bernhardt, Elizabeth M. J. Gillam, Matthew A. Cooper, Avril A. B. Robertson
Abstract:
MCC950 is a potent and selective inhibitor of the NOD-like receptor pyrin domain-containing protein 3 (NLRP3) inflammasome that shows early promise for the treatment of inflammatory diseases. The identification of the major metabolites of a lead molecule is an important step in the drug development process. It provides information about the metabolically labile sites in the molecule, thereby helping medicinal chemists design metabolically stable molecules. To identify the major metabolites of MCC950, the compound was incubated with human liver microsomes, and subsequent analysis by (+)- and (−)-QTOF-ESI-MS/MS revealed a major metabolite formed by hydroxylation of the 1,2,3,5,6,7-hexahydro-s-indacene moiety of MCC950. This major metabolite can lose two water molecules, and three possible regioisomers were synthesized. Co-elution of the major metabolite with each of the synthesized compounds using HPLC-ESI-SRM-MS/MS revealed the structure of the metabolite to be (±)-N-((1-hydroxy-1,2,3,5,6,7-hexahydro-s-indacen-4-yl)carbamoyl)-4-(2-hydroxypropan-2-yl)furan-2-sulfonamide. Subsequent synthesis of the individual enantiomers and co-elution by HPLC-ESI-SRM-MS/MS using a chiral column revealed that the metabolite was R-(+)-N-((1-hydroxy-1,2,3,5,6,7-hexahydro-s-indacen-4-yl)carbamoyl)-4-(2-hydroxypropan-2-yl)furan-2-sulfonamide. To study the possible cytochrome P450 enzyme(s) responsible for the formation of the major metabolite, MCC950 was incubated with a panel of cytochrome P450 enzymes. The results indicated that CYP1A2, CYP2A6, CYP2B6, CYP2C9, CYP2C18, CYP2C19, CYP2J2 and CYP3A4 are most likely responsible for the formation of the major metabolite. The biological activity of the major metabolite and the other synthesized regioisomers was also investigated by screening for NLRP3 inflammasome inhibitory activity and cytotoxicity. The major metabolite had 170-fold lower inhibitory activity (IC50 = 1238 nM) than MCC950 (IC50 = 7.5 nM). Interestingly, one regioisomer showed nanomolar inhibitory activity (IC50 = 232 nM). However, no evidence of cytotoxicity was observed with any of the synthesized compounds when tested in human embryonic kidney 293 (HEK293) cells and human hepatocellular carcinoma (HepG2) cells. These key findings give insight into the SAR of the hexahydroindacene moiety of MCC950 and reveal a metabolic soft spot that could be blocked by chemical modification.Keywords: Cytochrome P450, inflammasome, MCC950, metabolite, microsome, NLRP3
Procedia PDF Downloads 252535 Developing Creative and Critically Reflective Digital Learning Communities
Authors: W. S. Barber, S. L. King
Abstract:
This paper is a qualitative case study analysis of the development of a fully online learning community of graduate students through arts-based community-building activities. With increasing numbers and types of online learning spaces, it is incumbent upon educators to continue to push the edge of what best practices look like in digital learning environments. In digital learning spaces, instructors can no longer be seen as purveyors of content knowledge to be examined at the end of a set course by a final test or exam. The rapid and fluid dissemination of information via Web 3.0 demands that we reshape our approach to teaching and learning, from one that is content-focused to one that is process-driven. Rather than having instructors as formal leaders, today's digital learning environments require us to share expertise, as it is the collective experiences and knowledge of all students, together with the instructors, that help to create a very different kind of learning community. This paper focuses on innovations pursued in a 36-hour, 12-week graduate course in higher education entitled "Critical and Reflective Practice". The authors chronicle their journey to developing a fully online learning community (FOLC) by emphasizing the elements of social, cognitive, emotional, and digital spaces that form a moving interplay through the community. In this way, students embrace anywhere, anytime learning and often take the learning, as well as the relationships they build and the skills they acquire, beyond the digital class into real-world situations. We argue that in order to increase student online engagement, pedagogical approaches need to stem from two primary elements, creativity and critical reflection, which are essential pillars upon which instructors can co-design learning environments with students. The theoretical framework for the paper is based on the interaction and interdependence of creativity, intuition, critical reflection, social constructivism, and FOLCs. By leveraging students' embedded familiarity with a wide variety of technologies, this case study of a graduate-level course on critical reflection in education examines how relationships, the quality of work produced, and student engagement can improve through the use of creative and imaginative pedagogical strategies. The authors examine their professional pedagogical strategies through the lens of the teacher as facilitator, guide, and co-designer. In a world where students can easily search for and organize information as self-directed processes, creativity and connection can at times be lost in the digitized course environment. The paper concludes by posing further questions as to how institutions of higher education may be challenged to restructure their credit-granting courses into more flexible modules, and how students need to be considered an important part of assessment and evaluation strategies. By introducing creativity and critical reflection as central features of digital learning spaces, notions of best practices in digital teaching and learning emerge.Keywords: online, pedagogy, learning, communities
Procedia PDF Downloads 404