Search results for: underground installations
73 Study on Accumulation of Heavy Metals in Sweet Potato, Grown in Industrially Polluted Regions
Authors: Violina Angelova, Galina Pevicharova
Abstract:
A comparative study was carried out to determine the quantities and the sites of accumulation of Pb, Cu, Zn, and Cd in the vegetative and reproductive organs of sweet potato and to ascertain the possibilities for growing it on soils polluted with heavy metals. The experiments were performed on agricultural fields contaminated by (1) the Non-Ferrous-Metal Works near Plovdiv, (2) the Lead and Zinc Complex near Kardjali, and (3) a copper smelter near Pirdop, Bulgaria. The soils used in the experiment were characterized by acid, neutral, and slightly alkaline reactions, a loamy texture, and a moderate content of organic matter. The total content of Zn, Pb, and Cd was high and exceeded the limit values for agricultural soils. Sweet potatoes were grown in a 2-year rotation scheme on three blocks in the experimental field. On reaching commercial ripeness, the sweet potatoes were gathered, and the contents of heavy metals in their different parts (root, tuber peel and core, leaves, and stems) were determined after microwave mineralization. The quantitative measurements were carried out by inductively coupled plasma atomic emission spectroscopy. The contamination of the sweet potatoes was due mainly to heavy metals in the soil, which entered the plants through the root system, as well as by diffusion through the peel. Pb, Cu, Zn, and Cd were selectively accumulated in the underground parts of the sweet potatoes, above all in the root system and the peel. Heavy metals affect the development and productivity of sweet potatoes: high anthropogenic contamination leads to increased assimilation of heavy metals, which reduces the yield and quality of the sweet potato crop and decreases the absolute dry matter and sugar content of the tubers.
Sweet potatoes can be grown on soils lightly to moderately polluted with lead, zinc, and cadmium, as they do not accumulate these elements. On heavily polluted soils (Pb – 1504 mg/kg, Zn – 3322 mg/kg, Cd – 47 mg/kg), however, sweet potatoes should not be grown, as the accumulation of Pb and Cd in the tuber core exceeds the Maximum Acceptable Concentration. Acknowledgment: The authors gratefully acknowledge the financial support of the Bulgarian National Science Fund (Project DFNI DH04/9).
Keywords: heavy metals, polluted soils, sweet potatoes, uptake
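The screening logic in the conclusion above (comparing measured tuber-core concentrations against Maximum Acceptable Concentrations, MACs) can be written as a small check. This is a minimal sketch: the MAC values and sample concentrations below are illustrative placeholders, not the study's data.

```python
# Sketch of the exceedance check behind the conclusion: compare measured
# concentrations in the tuber core against Maximum Acceptable Concentrations.
# The MAC values and core concentrations below are illustrative placeholders.
MAC_MG_PER_KG = {"Pb": 0.1, "Cd": 0.05, "Zn": 10.0, "Cu": 5.0}  # assumed limits

def exceedances(core_mg_per_kg):
    """Return the metals whose core concentration exceeds its MAC."""
    return sorted(m for m, c in core_mg_per_kg.items()
                  if m in MAC_MG_PER_KG and c > MAC_MG_PER_KG[m])

# Hypothetical measurements from a heavily polluted plot:
sample = {"Pb": 0.8, "Cd": 0.2, "Zn": 6.0, "Cu": 1.1}
print(exceedances(sample))  # Pb and Cd over their assumed limits
```

A crop would be flagged as unsafe as soon as this list is non-empty for any regulated metal.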
Procedia PDF Downloads 212
72 3-Dimensional Contamination Conceptual Site Model: A Case Study Illustrating the Multiple Applications of Developing and Maintaining a 3D Contamination Model during an Active Remediation Project on a Former Urban Gasworks Site
Authors: Duncan Fraser
Abstract:
A 3-Dimensional (3D) conceptual site model was developed in the Leapfrog Works® platform using a comprehensive historical dataset for a large former gasworks site in Fitzroy, Melbourne. The gasworks had been constructed across two fractured geological units with differing hydraulic conductivities: a Newer Volcanic (basaltic) outcrop covered approximately half of the site, overlying a fractured Melbourne Formation (siltstone) bedrock that outcrops over the remaining portion. During the investigative phase of works, a dense non-aqueous phase liquid (DNAPL) plume (coal tar) was identified in both geological units in the subsurface, originating from multiple sources, including gasholders, tar wells, condensers, and leaking pipework. The first stage of model development was undertaken to determine the horizontal and vertical extents of the coal tar in the subsurface and to assess potential causal links between sources, plume location, and site geology. Concentrations of key contaminants of interest (COIs) were also interpolated within Leapfrog to refine the distribution of contaminated soils. The model was subsequently used to develop a robust soil remediation strategy and achieve endorsement from an Environmental Auditor. A change in project scope, following the removal and validation of the three former gasholders, necessitated the additional excavation of a significant volume of residual contaminated rock to allow for the future construction of two-storey underground basements. To assess the financial liabilities associated with offsite disposal or thermal treatment of material, the 3D model was updated with three years of additional analytical data from the active remediation phase of works.
Chemical concentrations and the residual tar plume within the rock fractures were modelled to pre-classify the in-situ material and refine separation strategies, preventing unnecessary treatment of material and reducing costs.
Keywords: 3D model, contaminated land, Leapfrog, remediation
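The concentration-interpolation step described above can be illustrated with a generic inverse-distance-weighting (IDW) sketch. Leapfrog itself uses an RBF-based interpolant, so this is only a stand-in for the idea; the borehole coordinates and assay values are invented.

```python
import math

# Generic inverse-distance-weighting sketch of 3D interpolation of
# contaminant concentrations between boreholes. Leapfrog uses an RBF-based
# interpolant; IDW is a simpler stand-in. All coordinates/values are invented.
def idw(samples, query, power=2.0):
    """samples: list of ((x, y, z), value); returns estimate at query point."""
    num = den = 0.0
    for (x, y, z), value in samples:
        d2 = (x - query[0])**2 + (y - query[1])**2 + (z - query[2])**2
        if d2 == 0.0:
            return value  # query coincides with a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return num / den

boreholes = [((0, 0, -2), 120.0), ((10, 0, -2), 40.0), ((0, 10, -3), 80.0)]
print(round(idw(boreholes, (2, 2, -2)), 1))  # mg/kg, weighted toward nearest hole
```

The estimate always stays within the range of the surrounding assay values, which is one reason simple IDW is useful for a first-pass pre-classification of in-situ material.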
Procedia PDF Downloads 131
71 Influence of Gamma-Radiation Dosimetric Characteristics on the Stability of the Persistent Organic Pollutants
Authors: Tatiana V. Melnikova, Lyudmila P. Polyakova, Alla A. Oudalova
Abstract:
As a result of environmental pollution, agricultural produce and foodstuffs inevitably contain residual amounts of persistent organic pollutants (POPs). Special attention must be given to organochlorinated pesticides (OCPs), among which the priority compounds are DDT (and its metabolite DDE), alpha-HCH, and gamma-HCH (lindane). These substances are controlled on the basis of sanitary norms and regulations. At the same time, it is often overlooked that a primary product may undergo technological processing (in particular, irradiation treatment), as a result of which the initial pollutants may be transformed into other physicochemical forms. The goal of the present work was to study the radiation degradation of OCPs under various gamma-radiation dosimetric conditions. The tasks posed to achieve this goal were to evaluate the content of the priority OCPs in food and to study the character of OCP degradation in model solutions (at microconcentrations commensurate with their real content in agricultural products and foodstuffs) depending on the dosimetric characteristics of the gamma radiation. Qualitative and quantitative analysis of OCPs in food and model solutions was carried out on a Varian 3400 gas chromatograph and a Varian Saturn 4D chromatography-mass spectrometer (Varian, Inc., USA). Solutions of DDT, DDE, and the alpha- and gamma-isomers of HCH (0.01, 0.1, and 1 ppm) were irradiated on the "Issledovatel" (60Co) and "Luch-1" (60Co) installations at a dose of 10 kGy, with the dose rate varied from 0.0083 up to 2.33 kGy/sec. It was established experimentally that residual OCP concentrations in individual samples of food products (fish, milk, cereal crops, meat, butter) are 10⁻¹–10⁻⁴ mg/kg, values which depend on the territory concerned and on natural migration processes. These results were used in the preparation of the OCP model solutions.
The dependence of the degradation extent of the OCPs on the gamma-irradiation dose rate has a complex character. According to our data, at a dose of 10 kGy the degradation extent of the OCPs first increases, passes through a maximum (in the range 0.23–0.43 Gy/sec), and then decreases as the dose rate grows. This character of the dependence is preserved for the various OCPs, in both polar and nonpolar solvents, and does not change with the concentration of the initial substance. The conditions giving the maximal radiochemical yield of OCP degradation were also determined: gamma irradiation at a dose of 10 kGy in the dose-rate range 0.23–0.43 Gy/sec, an initial OCP concentration of 1 ppm, and the use of 2-propanol as the solvent after preliminary removal of oxygen. Since the study of the model solutions established that the degradation extent of the pesticides and the qualitative composition of the OCP radiolysis products depend on the dose rate, it was decided to continue the research into the radiochemical transformations of OCPs in foodstuffs at various dose rates.
Keywords: degradation extent, dosimetric characteristics, gamma-radiation, organochlorinated pesticides, persistent organic pollutants
Procedia PDF Downloads 249
70 Application of Thermal Dimensioning Tools to Consider Different Strategies for the Disposal of High-Heat-Generating Waste
Authors: David Holton, Michelle Dickinson, Giovanni Carta
Abstract:
The principle of geological disposal is to isolate higher-activity radioactive wastes deep inside a suitable rock formation to ensure that no harmful quantities of radioactivity reach the surface environment. To achieve this, wastes will be placed in an engineered underground containment facility – the geological disposal facility (GDF) – which will be designed so that natural and man-made barriers work together to minimise the escape of radioactivity. Internationally, various multi-barrier concepts have been developed for the disposal of higher-activity radioactive wastes. High-heat-generating wastes (HLW, spent fuel, and Pu) pose a number of technical challenges different from those associated with the disposal of low-heat-generating waste. Thermal management of the disposal system must be taken into consideration in GDF design; temperature constraints might apply to the wasteform, container, buffer, and host rock. Of these, the temperature limit placed on the buffer component of the engineered barrier system (EBS) can be the most constraining factor. The heat must therefore be managed such that the properties of the buffer are not compromised to the extent that it cannot deliver the required level of safety. The maximum temperature of the buffer surrounding a container at the centre of a fixed array of heat-generating sources arises from heat diffusing from neighbouring heat-generating wastes, incrementally contributing to the temperature of the EBS. A range of strategies can be employed for managing heat in a GDF, including the spatial arrangement or pattern of containers; different geometrical configurations can influence the overall thermal density in a disposal facility (or an area within a facility) and therefore the maximum buffer temperature. A semi-analytical thermal dimensioning tool and methodology have been applied at a generic stage to explore a range of strategies for managing the disposal of high-heat-generating waste.
A number of examples, including different geometrical layouts and chequer-boarding, are presented to demonstrate how these tools can be used to consider safety margins and inform strategic disposal options under uncertainty at a generic stage of the development of a GDF.
Keywords: buffer, geological disposal facility, high-heat-generating waste, spent fuel
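The superposition argument above, in which the buffer temperature at one container is the sum of contributions diffusing from every container in the array, can be sketched semi-analytically with the textbook constant-power point-source solution. Real decay-heat sources fade with time, and all parameter values here are illustrative assumptions, not design data.

```python
import math

# Semi-analytical sketch of the superposition idea: the buffer temperature
# at one canister is the sum of contributions from every canister in the
# array. Each source is approximated as a constant-power point source in an
# infinite rock mass,
#   dT(r, t) = P / (4*pi*K*r) * erfc(r / (2*sqrt(A*t))),
# a textbook simplification (real decay-heat sources fade with time).
# All parameter values below are illustrative, not design data.
K = 3.0        # rock thermal conductivity, W/(m*K)  (assumed)
A = 1.5e-6     # rock thermal diffusivity, m^2/s     (assumed)
P = 1700.0     # power per canister, W               (assumed)

def rise(r, t):
    """Temperature rise (K) at distance r (m) after time t (s)."""
    return P / (4.0 * math.pi * K * r) * math.erfc(r / (2.0 * math.sqrt(A * t)))

def buffer_rise(spacing, nx, ny, t, r_buffer=1.0):
    """Rise at the buffer of the central canister of an nx-by-ny grid."""
    total = rise(r_buffer, t)  # the canister's own contribution
    cx, cy = nx // 2, ny // 2
    for i in range(nx):
        for j in range(ny):
            if (i, j) != (cx, cy):
                d = spacing * math.hypot(i - cx, j - cy)
                total += rise(d, t)
    return total

ten_years = 10 * 365.25 * 24 * 3600.0
# Wider spacing lowers the peak buffer temperature:
print(buffer_rise(6.0, 7, 7, ten_years), buffer_rise(12.0, 7, 7, ten_years))
```

Comparing layouts in this way (tighter vs. wider spacing, or chequer-board patterns that increase the average source-to-source distance) is the essence of the thermal dimensioning exercise described in the abstract.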
Procedia PDF Downloads 285
69 3D Modeling for Frequency and Time-Domain Airborne EM Systems with Topography
Authors: C. Yin, B. Zhang, Y. Liu, J. Cai
Abstract:
Airborne EM (AEM) is an effective geophysical exploration tool, especially suitable for rugged mountain areas. In these areas, topography has serious effects on AEM system responses; however, until now, few studies have been reported on topographic effects in airborne EM systems. In this paper, an edge-based unstructured finite-element (FE) method is developed for 3D topographic modeling of both frequency- and time-domain airborne EM systems. Starting from the frequency-domain Maxwell equations, a vector Helmholtz equation is derived to obtain a stable and accurate solution. Considering that the AEM transmitter and receiver are both located in the air, the scattered-field method is used in our modeling. The Galerkin method is applied to discretize the Helmholtz equation into the final FE equations, which are solved for the frequency-domain AEM responses. To accelerate the calculation, the response of the source in free space is used as the primary field, and the PARDISO direct solver is used to handle the problem with multiple transmitting sources. After the frequency-domain AEM responses are calculated, a Hankel transform is applied to obtain the time-domain AEM responses. To check the accuracy of the present algorithm and to analyze the characteristics of the topographic effect on airborne EM systems, both the frequency- and time-domain AEM responses are simulated for three model groups: 1) a flat half-space model that has a semi-analytical EM response; 2) a valley or hill earth model; 3) a valley or hill earth with an anomalous body embedded. Numerical experiments show that, close to the node points of the topography, AEM responses exhibit sharp changes. Special attention needs to be paid to topographic effects when interpreting AEM survey data over rugged topographic areas. Besides, the profile of the AEM responses mirrors the topographic earth surface.
In contrast to the topographic effect, which mainly occurs at the high-frequency end and in early time channels, the EM responses of underground conductors mainly occur at low frequencies and in later time channels. For the same time channel, the dB/dt field reflects changes in conductivity better than the B-field. This research will serve airborne EM in the identification and correction of topographic effects.
Keywords: 3D, Airborne EM, forward modeling, topographic effect
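The frequency-to-time conversion step in the workflow above can be illustrated on a test response with a known time-domain answer. One common route (the authors use a Hankel-type transform; digital filters are another) is a cosine transform of the real part of the spectrum. Here it is checked against the single-pole response H(ω) = 1/(1 + iωτ), whose exact impulse response is exp(-t/τ)/τ; the grid parameters are illustrative, not tuned for survey data.

```python
import math

# Convert a frequency-domain response to the time domain via the cosine
# transform of its real part:
#   h(t) = (2/pi) * integral_0^inf Re[H(w)] * cos(w*t) dw.
# Validated against the single-pole test response H(w) = 1/(1 + i*w*tau),
# whose exact impulse response is h(t) = exp(-t/tau)/tau.
def freq_to_time(re_H, t, w_max=1.0e6, n=200_000):
    """Trapezoidal cosine transform of Re[H] sampled on [0, w_max]."""
    dw = w_max / n
    total = 0.5 * (re_H(0.0) + re_H(w_max) * math.cos(w_max * t))
    for k in range(1, n):
        w = k * dw
        total += re_H(w) * math.cos(w * t)
    return (2.0 / math.pi) * total * dw

tau = 1.0e-3
re_H = lambda w: 1.0 / (1.0 + (w * tau) ** 2)
t = 1.0e-3
numeric = freq_to_time(re_H, t)
exact = math.exp(-t / tau) / tau
print(numeric, exact)  # the two should agree to well under 1%
```

Validating the transform on an analytic pair like this is the same spirit as the flat half-space check in the paper: every numerical stage is first verified against a case with a known answer.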
Procedia PDF Downloads 317
68 Experimental Study and Numerical Modelling of Failure of Rocks Typical for Kuzbass Coal Basin
Authors: Mikhail O. Eremin
Abstract:
The present work is devoted to an experimental study and numerical modelling of the failure of rocks typical of the Kuzbass coal basin (Russia). The main goal was to determine the strength and deformation characteristics of the rocks on the basis of uniaxial compression and three-point bending tests and then to build a mathematical model of the failure process for both types of loading. Depending on their particular physical-mechanical characteristics, typical rocks of the Kuzbass coal basin (sandstones, siltstones, mudstones, etc. of the Kolchuginsk, Tarbagansk, and Balohonsk series) manifest brittle and quasi-brittle failure. The strength characteristics in both tension and compression are found; other characteristics are found from the experiments or taken from literature reviews. On the basis of the obtained characteristics and the structure (obtained from microscopy), the mathematical and structural models are built, and numerical modelling of failure under the different types of loading is carried out. The effective characteristics obtained from the modelling and the character of failure correspond to the experiment, and thus the mathematical model was verified. An Instron 1185 machine was used to carry out the experiments. The mathematical model includes the fundamental conservation laws of solid mechanics: mass, momentum, and energy. Each rock has a markedly anisotropic structure; however, each crystallite might be considered isotropic, so that the rock model as a whole has a quasi-isotropic structure. This idea gives an opportunity to use Hooke's law inside each crystallite, thus explicitly accounting for the anisotropy of the rock and the stress-strain state under loading. Inelastic behavior is described in the framework of two different models: the von Mises yield criterion and a modified Drucker-Prager yield criterion. Damage accumulation theory is also implemented in order to describe the failure process.
The effective characteristics obtained are then used for modelling the evolution of the rock mass when mining is carried out by either open-pit or underground openings.
Keywords: damage accumulation, Drucker-Prager yield criterion, failure, mathematical modelling, three-point bending, uniaxial compression
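The Drucker-Prager criterion mentioned above reduces to a short calculation: a stress state yields when f = sqrt(J2) + α·I1 − k ≥ 0, where I1 is the first stress invariant and J2 the second deviatoric invariant. The sketch below uses illustrative material constants, not the calibrated Kuzbass values.

```python
import math

# Minimal Drucker-Prager yield check: the stress state yields when
#   f = sqrt(J2) + ALPHA * I1 - K >= 0,
# with I1 the first stress invariant and J2 the second deviatoric invariant.
# ALPHA and K below are illustrative, not the calibrated Kuzbass values.
ALPHA, K = 0.15, 10.0e6  # assumed Drucker-Prager parameters (K in Pa)

def drucker_prager_f(sigma):
    """sigma: 3x3 stress tensor (Pa, tension positive). Returns f (Pa)."""
    i1 = sigma[0][0] + sigma[1][1] + sigma[2][2]
    mean = i1 / 3.0
    j2 = 0.0
    for i in range(3):
        for j in range(3):
            dev = sigma[i][j] - (mean if i == j else 0.0)
            j2 += 0.5 * dev * dev
    return math.sqrt(j2) + ALPHA * i1 - K

# Uniaxial compression (tension positive, so the axial stress is negative):
uniaxial = lambda s: [[-s, 0, 0], [0, 0, 0], [0, 0, 0]]
print(drucker_prager_f(uniaxial(5e6)) < 0)   # still elastic
print(drucker_prager_f(uniaxial(40e6)) >= 0) # yielded
```

Because α multiplies I1, confinement (more negative I1 in this sign convention) strengthens the material, which is the pressure sensitivity that distinguishes Drucker-Prager from von Mises for rocks.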
Procedia PDF Downloads 174
67 Assessment of Indigenous People Living Condition in Coal Mining Region: An Evidence from Dhanbad, India
Authors: Arun Kumar Yadav
Abstract:
Coal plays a significant role in India's developmental mission; ironically, on the other side, it causes large-scale population displacement and significant changes in indigenous people's livelihood mechanisms. Dhanbad is regarded as one of the oldest and largest mining areas in India and as the "Coal Capital of India"; mining exploration work started here nearly a century ago, and with the passage of time mining has brought many changes to the lives of local people. In this context, the study attempts a comparative situational analysis of changes in the living conditions of dwellers in mine-affected and non-mine-affected villages, based on a livelihood approach. Since this place has a long history of mining, it is very difficult to conduct a before-and-after comparison between mine-affected and non-mine-affected areas; consequently, the present study is based on a relative comparison approach to elucidate the actual scenario. It uses primary survey data collected by the author from September 2014 to March 2015 in Dhanbad, Jharkhand. The data were collected from eight villages, categorised broadly into mine-affected and non-mine-affected villages; at the micro level, the mine-affected villages were further categorised by open-cast and underground mines, which helps to capture a deeper understanding of the issues of the mine-affected village group. A total of 400 households were surveyed. The results show that mine-affected villages are more vulnerable in every sphere. Regarding financial capital, although mine-affected villages are engaged in mining work and earn a higher mean income, non-mine-affected villages are more occupationally diversified: they can earn money from diverse sources such as agricultural land, work in the mining area, informal coal selling, and remittances.
Non-mine-affected villages also have better physical capital, which comprises the basic infrastructure needed to support livelihoods: access to secure shelter, adequate water supply and sanitation, and affordable information and transport. Mine-affected villages are more prone to health risks. Regarding social capital, the results show that, in comparison to five years earlier, law and order have improved in mine-affected villages.
Keywords: displacement, indigenous, livelihood, mining
Procedia PDF Downloads 311
66 Modelling for Roof Failure Analysis in an Underground Cave
Authors: M. Belén Prendes-Gero, Celestino González-Nicieza, M. Inmaculada Alvarez-Fernández
Abstract:
Roof collapse remains one of the most frequent problems in mines worldwide. There are many reasons that may cause a roof to collapse, namely the stress activities of the mining process, a lack of vigilance and carelessness, or the complexity of the geological structure and irregular operations. This work is the result of the analysis of an accident in the "Mary" coal exploitation located in northern Spain, in which the roof of a crossroads of galleries excavated to exploit the "Morena" layer, 700 m deep, collapsed. The paper collects the work done by the forensic team to determine the causes of the incident, together with its conclusions and recommendations. Initially, the available documentation (geology, geotechnics, mining, etc.) and the accident area were reviewed. After that, laboratory and on-site tests were carried out to characterize the behaviour of the rock materials and of the support used (metal frames and shotcrete). With this information, different failure hypotheses were simulated to find the one that best fits reality, employing the three-dimensional finite-difference software FLAC 3D. The results of the study confirmed that the detachment originated from sliding in the layer wall, due to the large roof span present at the accident location, and was probably triggered by an insufficient protection pillar. The results allowed some corrective measures to be established to avoid future risks: for example, the dimensions of the protection zones that must remain unexploited and their interaction with the crossing areas between galleries, or the use of supports more adequate for these conditions, in which the significant deformations may discourage rigid supports such as shotcrete. Finally, a seismic monitoring grid was proposed as a predictive system.
Its efficiency was tested over the investigation period using three monitoring units, which detected new (although smaller) incidents in other, similar areas of the mine. These new incidents show that the use of explosives produces vibrations that constitute a new risk factor to be analysed in the near future.
Keywords: forensic analysis, hypothesis modelling, roof failure, seismic monitoring
Procedia PDF Downloads 115
65 Detection the Ice Formation Processes Using Multiple High Order Ultrasonic Guided Wave Modes
Authors: Regina Rekuviene, Vykintas Samaitis, Liudas Mažeika, Audrius Jankauskas, Virginija Jankauskaitė, Laura Gegeckienė, Abdolali Sadaghiani, Shaghayegh Saeidiharzand
Abstract:
Icing causes significant damage to aviation and renewable energy installations. Air-conditioning and refrigeration equipment, wind turbine blades, and airplane and helicopter blades often suffer from icing, which causes severe energy losses and impairs aerodynamic performance. The icing process is a complex phenomenon with many different causes and types, and icing mechanisms, distributions, and patterns are still active research topics. The adhesion strength between ice and surfaces differs in different icing environments, which makes the task of anti-icing very challenging. Techniques for the various icing environments must satisfy different demands and requirements (e.g., efficiency, light weight, low power consumption, low maintenance and manufacturing costs, reliable operation). Noticeably, most methods are oriented toward a particular sector, and adapting or recommending them for other areas is quite problematic; these methods often use various technologies and have different specifications, sometimes with no clear indication of their efficiency. There are two major groups of anti-icing methods: passive and active. Active techniques have high efficiency but, at the same time, quite high energy consumption, and they require intervention in the structure's design; the vast majority also require specific knowledge and personnel skills. The main effect of passive methods (ice-phobic and superhydrophobic surfaces) is to delay ice formation and growth or to reduce the adhesion strength between the ice and the surface. These methods are time-consuming, depend on forecasting, can be applied on small surfaces only for specific targets, and most are non-biodegradable (except for anti-freezing proteins). There is some quite promising information on ultrasonic ice mitigation methods that employ ultrasonic guided waves (UGW).
These methods have the advantages of low energy consumption, low cost, light weight, and easy replacement and maintenance. However, fundamental knowledge of ultrasonic de-icing methodology is still limited. The objective of this work was to identify ice formation processes and their progress by employing the ultrasonic guided wave technique. Throughout this research, a universal set-up for acoustic measurement of ice formation in real conditions (temperature range from +24 °C to -23 °C) was developed. Ultrasonic measurements were performed using high-frequency 5 MHz transducers in a pitch-catch configuration. Wave modes suitable for the detection of the ice formation phenomenon on a copper surface were selected, and the interaction between the selected wave modes and the ice formation processes was investigated. It was found that the selected wave modes are sensitive to temperature changes, and it was demonstrated that the proposed ultrasonic technique can successfully detect the formation of an ice layer on a metal surface.
Keywords: ice formation processes, ultrasonic GW, detection of ice formation, ultrasonic testing
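One simple way a pitch-catch system like the one above can flag ice build-up is to compare the current received waveform with an ice-free baseline and estimate the arrival-time shift by cross-correlation. This is a minimal sketch: the synthetic tone bursts and the 40-sample shift are invented for illustration; the study's actual mode selection and sensitivity analysis are experimental.

```python
import math

# Sketch of ice detection by waveform comparison: estimate the arrival-time
# shift between an ice-free baseline trace and the current trace via
# cross-correlation. A positive lag means the wave now arrives later.
# The synthetic tone bursts and the 40-sample shift are invented.
def tone_burst(n, center, width, freq):
    """Gaussian-windowed tone burst with the carrier locked to the envelope."""
    return [math.exp(-((i - center) / width) ** 2) * math.sin(freq * (i - center))
            for i in range(n)]

def delay_by_xcorr(baseline, current, max_lag):
    """Lag (samples) maximizing the cross-correlation of the two traces."""
    best_lag, best_val = 0, -float("inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(baseline[i] * current[i + lag]
                  for i in range(len(baseline))
                  if 0 <= i + lag < len(current))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

N = 1024
baseline = tone_burst(N, center=300, width=40.0, freq=0.8)
iced = tone_burst(N, center=340, width=40.0, freq=0.8)  # wave arrives later
print(delay_by_xcorr(baseline, iced, max_lag=100))  # estimated shift in samples
```

In practice the monitored quantities would also include amplitude and mode-velocity changes, since an accreting ice layer both delays and attenuates the selected guided-wave modes.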
Procedia PDF Downloads 64
64 Analysis of the Introduction of Carsharing in the Context of Developing Countries: A Case Study Based on On-Board Carsharing Survey in Kabul, Afghanistan
Authors: Mustafa Rezazada, Takuya Maruyama
Abstract:
Cars have been strongly integrated into human life since their introduction, and this interaction is most evident in the urban context. Shifting city residents from driving private vehicles to public transit has therefore been a big challenge. Carsharing, as an innovative, environmentally friendly transport alternative, has made a significant contribution to this transition so far: it has helped to reduce household car ownership, lower the demand for on-street parking, and cut the number of kilometers traveled by car, and it affects the future of mobility by decreasing greenhouse gas (GHG) emissions and the number of new cars that would otherwise be purchased. However, the majority of carsharing research has been conducted in highly developed cities, and less attention has been paid to the cities of developing countries. This study was conducted in Kabul, the capital of Afghanistan, to investigate the current transport pattern and user behavior and to examine the possibility of introducing a carsharing system. The study established a new survey method called the Onboard Carsharing Survey (OCS), in which carpooling passengers aboard are interviewed following the Onboard Transit Survey (OTS) guidelines with a few refinements. The survey focuses on respondents' daily travel behavior and a hypothetical stated choice of carsharing opportunities, followed by an aggregate analysis. The survey results indicate the following: almost two-thirds of the respondents (62%) have been carpooling every day for 5 years or more; more than half of the respondents are not satisfied with current modes; and, besides other attributes, traffic congestion, the environment, and insufficient public transport were ranked the most critical factors in daily transportation by the survey participants. Moreover, 68.24% of the respondents chose carsharing over carpooling under different choice-game scenarios.
Overall, the findings of this research show that Kabul City is potentially fertile ground for the introduction of carsharing in the future. Taken together, insufficient public transit, dissatisfaction with current modes, and the respondents' stated interest will affect the future of carsharing in Kabul City positively. The modal choice in this study is limited to carpooling and carsharing; more choice sets, including bus, cycling, and walking, will have to be added for further evaluation.
Keywords: carsharing, developing countries, Kabul Afghanistan, onboard carsharing survey, transportation, urban planning
Procedia PDF Downloads 135
63 Selection and Identification of Some Spontaneous Plant Species Having the Ability to Grow Naturally on Crude Oil Contaminated Soil for a Possible Approach to Decontaminate and Rehabilitate an Industrial Area
Authors: Salima Agoun-Bahar, Ouzna Abrous-Belbachir, Souad Amelal
Abstract:
Industrial areas generally contain heavy metals; thus, negative consequences can appear in the medium and long term for fauna and flora, but also for the food chain, of which man constitutes the final link. The SONATRACH Company has become aware of the importance of environmental protection and has set up a rehabilitation program for polluted sites in order to avoid major ecological disasters and to find both curative and preventive solutions. The aim of this work was to study the industrial pollution around a crude oil storage tank in the Algiers refinery of Sidi R'cine and to select the plants that accumulate the most heavy metals, for possible use in phytotechnology. Whole plants, together with their soil clods, were sampled around the pollution source at a depth of twenty centimeters and then transported to the laboratory for identification. The quantification of the heavy metals lead, zinc, copper, and nickel in the soil and in the aerial and underground parts of the plants was carried out by flame atomic absorption spectrophotometry. Ten plant species were recorded on the polluted site: three belonging to the grass family, with a dominance percentage higher than 50%; three belonging to the composite family, representing 12%; and one species from each of the families Linaceae, Plantaginaceae, Papilionaceae, and Boraginaceae. Koeleria phleoïdes L. and Avena sterilis L. of the grass family seem to be the dominant plants, although they are quite far from the pollution source. Lead pollution of the soils is the most pronounced at all stations, with values varying from 237.5 to 2682.5 µg.g⁻¹; other peaks are observed for zinc (1177 µg.g⁻¹) and copper (635 µg.g⁻¹) at station 8 and for nickel (1800 µg.g⁻¹) at station 10. Among the inventoried plants, some species accumulate a significant amount of metals: Trifolium sp. and K. phleoides for lead and zinc, P. lanceolata and G. tomentosa for nickel, and A. clavatus for zinc.
K. phleoides is a very interesting species because it accumulates an important quantity of heavy metals, especially in its aerial part. This suggests its suitability for the phytoextraction technique, which facilitates the recovery of the pollutants by the simple removal of shoots.
Keywords: heavy metals, industrial pollution, phytotechnology, rehabilitation
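The screening logic behind a choice like K. phleoides can be sketched with the two ratios commonly used in phytoremediation studies: the bioconcentration factor (BCF, plant-to-soil concentration) and the translocation factor (TF, shoot-to-root concentration), where TF > 1 indicates transfer to the harvestable aerial part. The concentrations below are invented placeholders, not the study's measurements.

```python
# Two standard screening ratios for phytoextraction candidates:
#   BCF = plant concentration / soil concentration
#   TF  = shoot concentration / root concentration (> 1: metal moves to
#         the harvestable aerial part).
# The concentrations below are invented placeholders, not the study's data.
def bcf(plant_ug_g, soil_ug_g):
    return plant_ug_g / soil_ug_g

def tf(shoot_ug_g, root_ug_g):
    return shoot_ug_g / root_ug_g

# Hypothetical Pb concentrations (ug/g) for one species at one station:
soil, root, shoot = 1200.0, 150.0, 210.0
print(round(bcf(shoot, soil), 3), round(tf(shoot, root), 2))
```

A species with TF above 1, like this hypothetical case, can be decontaminated from a site by repeated mowing of the shoots, which is the advantage attributed to K. phleoides above.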
Procedia PDF Downloads 66
62 Digital Technology Relevance in Archival and Digitising Practices in the Republic of South Africa
Authors: Tashinga Matindike
Abstract:
By definition, digital artworks encompass an array of artistic productions that are expressed in a technological form as an essential part of a creative process; examples include illustrations, photos, videos, sculptures, and installations. Within the context of the visual arts, the process of repatriation involves the return of once-appropriated goods, while archiving denotes the preservation of a commodity for storage purposes in order to nurture its continuity. The aforementioned definitions form the foundation of the academic framework and the premise of the argument outlined in this paper. The paper aims to define, discuss, and decipher the complexities involved in digitising artworks, whilst explaining the benefits of the process, particularly within the South African context, which is rich in tangible and intangible traditional cultural material, objects, and performances. With the internet having been introduced to the African continent in the early 1990s, this new form of technology initiated, in its own right, a high degree of efficiency, which also resulted in the progressive transformation of computer-generated visual output. Subsequently, this had a revolutionary influence on the manner in which technological software was developed and utilised in art-making, and digital technology and the digitisation of creative processes opened up new avenues for collating and recording information. One of the first visual artists to make use of digital technology software in his creative productions was the United States-based artist John Whitney, whose inventive work contributed greatly to the onset and development of digital animation. Comparable in technique and originality, South African contemporary visual artists who make digital artworks, both locally and internationally, include David Goldblatt, Katherine Bull, Fritha Langerman, David Masoga, Zinhle Sethebe, Alicia Mcfadzean, Ivan Van Der Walt, Siobhan Twomey, and Fhatuwani Mukheli.
In conclusion, the main objective of this paper is to address the following questions: In which ways has the South African community of visual artists made use of and benefited from technology, in its digital form, as a means to further advance creativity? What positive changes have resulted in art production in South Africa since the onset and use of digital technological software? How has digitisation changed the manner in which we record, interpret, and archive both written and visual information? What is the role of South African art institutions in the development of digital technology and its use in the field of visual art? What role does digitisation play in the process of the repatriation of artworks and artefacts? The methodology of this paper takes a multifaceted form, including data analysis of information obtained by means of both qualitative and quantitative approaches.
Keywords: digital art, digitisation, technology, archiving, transformation and repatriation
Procedia PDF Downloads 52
61 Taxonomy of Araceous Plants on Limestone Mountains in Lop Buri and Saraburi Provinces, Thailand
Authors: Duangchai Sookchaloem, Sutida Maneeanakekul
Abstract:
The aroids (Araceae) are a monocotyledonous family containing numerous potentially useful plants. Two hundred and ten species of Araceae have been reported in Thailand, of which 43 species were reported as threatened plants. Fifty percent of the endemic and rare species were recorded in limestone areas. Currently, these areas are seriously threatened by land-use changes. A study on the taxonomy of Araceous plants was carried out in the Lop Buri and Saraburi limestone mountains from February 2011 to May 2015. The purposes of this study were to document species diversity, taxonomic characters and ecological habitats. Fifty-five specimens were collected from various limestone areas, including the Pra Phut Tabat National Forest (Pra Phut Tabat Mountain, Khao Pra Phut Tabat Noi Mountains, Wat Thum Krabog Mountain) and the Tab Khwang and Muak Lek National Forest (Pha Lad Mountain and Muak Lek waterfall) in Saraburi province, and the Wang Plaeng Ta Muang and Lumnarai National Forest (Wat Thum Chang Phuk Mountain), Panead National Forest (Wat Khao Samo Khon Mountain) and Lan Ta Ridge National Forest (Khao Wong Prachan Mountain, Wat Pa Chumchon) in Lop Buri province. Twenty species of Araceous plants were identified using characteristics of the underground stem, phyllotaxis and leaf blade, spathe and spadix. The species identified are Aglaonema cochinchinense, A. simplex, Alocasia acuminata, Amorphophallus paeoniifolius, A. albispathus, A. saraburiensis, A. pseudoharmandii, Pycnospatha arietina, Hapaline kerri, Lasia spinosa, Pothos scandens, Typhonium laoticum, T. orbifolium, T. saraburiense, T. trilobatum, T. sp. 1, T. sp. 2, Cryptocoryne crispatula var. balansae, Scindapsus sp., and Rhaphidophora peepla. Five species are new locality records. One species (Typhonium sp. 1) is considered a new species. Seven species were reported as threatened plants in the Thailand Red Data Book. Taxonomic features were used to construct keys to the species.
Araceous specimens were found in mixed deciduous forests and dry evergreen forests at 50-470 m elevation. New ecological habitats of Typhonium laoticum, T. orbifolium, and T. saraburiense were reported in this study.
Keywords: ecology, limestone mountains, Lopburi and Saraburi provinces, species diversity, taxonomic character
Procedia PDF Downloads 240
60 Desulphurization of Waste Tire Pyrolytic Oil (TPO) Using Photodegradation and Adsorption Techniques
Authors: Moshe Mello, Hilary Rutto, Tumisang Seodigeng
Abstract:
The nature of tires makes them extremely challenging to recycle: the chemically cross-linked polymer is neither fusible nor soluble and, consequently, cannot be remolded into other shapes without serious degradation. Open dumping of tires pollutes the soil, contaminates underground water and provides ideal breeding grounds for disease-carrying vermin. The thermal decomposition of tires by pyrolysis produces char, gases and oil. The composition of oils derived from waste tires has properties in common with commercial diesel fuel. The problem associated with the light oil derived from pyrolysis of waste tires is that it has a high sulfur content (> 1.0 wt.%) and therefore emits harmful sulfur oxide (SOx) gases to the atmosphere when combusted in diesel engines. Desulphurization of TPO is necessary due to increasingly stringent environmental regulations worldwide. Hydrodesulphurization (HDS) is the commonly practiced technique for the removal of sulfur species in liquid hydrocarbons. However, the HDS technique fails in the presence of complex sulfur species such as dibenzothiophene (DBT) present in TPO. This study aims to investigate the viability of photodegradation (photocatalytic oxidative desulphurization) and adsorptive desulphurization technologies for efficient removal of complex and non-complex sulfur species in TPO. This study focuses on optimizing the cleaning (removal of impurities and asphaltenes) process by varying process parameters: temperature, stirring speed, acid/oil ratio and time. The treated TPO will then be sent for vacuum distillation to attain the desired diesel-like fuel. The effect of temperature, pressure and time will be determined for vacuum distillation of both raw TPO and the acid-treated oil for comparison purposes.
Polycyclic sulfides present in the distilled (diesel-like) light oil will be oxidized predominantly to the corresponding sulfoxides and sulfones via a photo-catalyzed system using TiO2 as a catalyst and hydrogen peroxide as an oxidizing agent; finally, acetonitrile will be used as an extraction solvent. Adsorptive desulphurization will be used to adsorb traces of sulfurous compounds remaining after the photocatalytic desulphurization step. This combined desulphurization approach is expected to give high desulphurization efficiency with reasonable oil recovery.
Keywords: adsorption, asphaltenes, photocatalytic oxidation, pyrolysis
Procedia PDF Downloads 272
59 Standardization of Solar Water Pumping System for Remote Areas in Indonesia
Authors: Danar Agus Susanto, Hermawan Febriansyah, Meilinda Ayundyahrini
Abstract:
The availability of spring water to meet people's demand is often a problem, especially in tropical areas with very limited surface water sources or very deep underground water. Although pumping technology and equipment are available and easy to obtain, in remote areas pumping systems are difficult to deploy due to the unavailability of fuel or the lack of electricity. The Solar Water Pumping System (SWPS) is one of the alternatives that can overcome these obstacles. In a tropical country, sunlight can be obtained throughout the year, even in remote areas. Many SWPS have already been built in Indonesia, but many encounter problems during operation, such as decreased efficiency, pump damage, damaged controllers or inverters, and inappropriate photovoltaic performance. In 2011, the International Electrotechnical Commission (IEC) issued standard IEC 62253:2011, titled Photovoltaic pumping systems - Design qualification and performance measurements. This standard establishes design qualifications and performance measurements for solar water pumping system products. The National Standardization Agency of Indonesia (BSN), the national standardization body in Indonesia, has not yet set a standard for solar water pumping systems. This research studied the operational procedures of SWPS with a view to adopting IEC 62253:2011 as an Indonesian Standard (SNI). The research used a literature study and field observations of SWPS installed in Indonesia. Based on the results for SWPS already installed in Indonesia, the IEC 62253:2011 standard can improve efficiency and reduce operational failure of SWPS. SWPS installed in Indonesia still show a 51% gap against the parameters in IEC 62253:2011. The biggest factor not being met relates to operating and maintenance handbooks for personnel, including operation and repair procedures.
This may leave operators without the knowledge needed to install, operate and maintain the system. The photovoltaic (PV) component also showed the highest non-compliance, at 71%, although there are 22 Indonesian Standards (SNI) for PV (modules, installation, testing, and construction). The research respondents (installers, manufacturers/distributors, and experts) agreed that the parameters in IEC 62253:2011 are able to improve the quality of SWPS in Indonesia. This study recommends the adoption of IEC 62253:2011 into SNI to support the development of SWPS for remote areas in Indonesia.
Keywords: efficiency, inappropriate installation, remote areas, solar water pumping system, standard
Procedia PDF Downloads 197
58 Application of Ground-Penetrating Radar in Environmental Hazards
Authors: Kambiz Teimour Najad
Abstract:
The basic methodology of GPR involves the use of a transmitting antenna to send electromagnetic waves into the subsurface, which then bounce back to the surface and are detected by a receiving antenna. The transmitter and receiver antennas are typically placed on the ground surface and moved across the area of interest to create a profile of the subsurface. The GPR system consists of a control unit that powers the antennas and records the data, as well as a display unit that shows the results of the survey. The control unit sends a pulse of electromagnetic energy into the ground, which propagates through the soil or rock until it encounters a change in material or structure. When the electromagnetic wave encounters a buried object or structure, some of the energy is reflected back to the surface and detected by the receiving antenna. The GPR data is then processed using specialized software that analyzes the amplitude and travel time of the reflected waves. By interpreting the data, GPR can provide information on the depth, location, and nature of subsurface features and structures. GPR has several advantages over other geophysical survey methods, including its ability to provide high-resolution images of the subsurface and its non-invasive nature, which minimizes disruption to the site. However, the effectiveness of GPR depends on several factors, including the type of soil or rock, the depth of the features being investigated, and the frequency of the electromagnetic waves used. In environmental hazard assessments, GPR can be used to detect buried structures, such as underground storage tanks, pipelines, or utilities, which may pose a risk of contamination to the surrounding soil or groundwater. GPR can also be used to assess soil stability by identifying areas of subsurface voids or sinkholes, which can lead to the collapse of the surface. 
Additionally, GPR can be used to map the extent and movement of groundwater contamination, which is critical in designing effective remediation strategies. In summary, the methodology of GPR in environmental hazard assessments involves the use of electromagnetic waves to create high-resolution images of the subsurface, which are then analyzed to provide information on the depth, location, and nature of subsurface features and structures. This information is critical in identifying and mitigating environmental hazards, and the non-invasive nature of GPR makes it a valuable tool in this field.
Keywords: GPR, hazard, landslide, rock fall, contamination
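The depth estimate described above follows directly from the two-way travel time of the reflected pulse and the wave velocity in the ground, which in turn depends on the medium's relative permittivity. A minimal sketch of that conversion (the permittivity value for dry sand is a textbook figure, not taken from this abstract):

```python
import math

# GPR two-way travel time to depth (a minimal sketch).
# The EM wave velocity in the ground is approximated from the relative
# permittivity (dielectric constant) of the medium: v = c / sqrt(eps_r).

C = 0.2998  # speed of light in m/ns

def wave_velocity(eps_r):
    """Approximate EM wave velocity (m/ns) in a medium of relative permittivity eps_r."""
    return C / math.sqrt(eps_r)

def reflector_depth(twt_ns, eps_r):
    """Depth (m) of a reflector from two-way travel time (ns): d = v * t / 2."""
    return wave_velocity(eps_r) * twt_ns / 2.0

# Example: a reflection at 40 ns two-way travel time in dry sand (eps_r ~ 4)
depth = reflector_depth(40.0, 4.0)  # roughly 3 m
```

The same relation explains why velocity calibration (via a known target or common-midpoint sounding) matters: an error in the assumed permittivity translates directly into a depth error.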
Procedia PDF Downloads 81
57 Test Rig Development for Up-to-Date Experimental Study of Multi-Stage Flash Distillation Process
Authors: Marek Vondra, Petr Bobák
Abstract:
Vacuum evaporation is a reliable and well-proven technology with a wide application range which is frequently used in the food, chemical and pharmaceutical industries. Recently, numerous remarkable studies have been carried out to investigate utilization of this technology in the area of wastewater treatment. One of the most successful applications of the vacuum evaporation principle is connected with seawater desalination. Since the 1950s, multi-stage flash distillation (MSF) has been the leading technology in this field and it is still irreplaceable in many respects, despite a rapid increase in cheaper reverse-osmosis-based installations in recent decades. MSF plants are conveniently operated in countries with fluctuating seawater quality and at locations where a sufficient amount of waste heat is available. Nowadays, most MSF research is connected with the utilization of alternative heat sources and with hybridization, i.e. merging of different types of desalination technologies. Some studies are concerned with basic principles of the static flash phenomenon, but only a few scientists have lately focused on the fundamentals of continuous multi-stage evaporation. Limited measurement possibilities at operating plants and insufficiently equipped experimental facilities may be the reasons. The aim of the presented study was to design, construct and test an up-to-date test rig with an advanced measurement system which will provide real-time monitoring of all the important operational parameters under various conditions. The whole system consists of a conventionally designed MSF unit with 8 evaporation chambers, a versatile heating circuit for different kinds of feed water (e.g.
seawater, waste water), a sophisticated system for acquisition and real-time visualization of all the related quantities (temperature, pressure, flow rate, weight, conductivity, pH, water level, power input), access to a wide spectrum of operational media (salt, fresh and softened water, steam, natural gas, compressed air, electrical energy) and integrated transparent features which enable direct visual control of selected physical mechanisms (water evaporation in chambers, water level right before the brine and distillate pumps). Thanks to the adjustable process parameters, it is possible to operate the test unit at desired operational conditions. This allows researchers to carry out statistical design and analysis of experiments. Valuable results obtained in this manner could be further employed in simulations and process modeling. First experimental tests confirm the correctness of the presented approach and promise interesting outputs in the future. The presented experimental apparatus enables flexible and efficient research of the whole MSF process.
Keywords: design of experiment, multi-stage flash distillation, test rig, vacuum evaporation
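A first-order check on measurements from such a rig is the per-stage flash fraction: when brine enters a chamber held at a lower saturation pressure, the mass fraction that flashes to vapor is approximately cp * dT / h_fg. A minimal sketch for an 8-chamber unit like the one described; the stage temperature drops and property values are illustrative, not measurements from this rig:

```python
# Per-stage flash fraction in an MSF unit (a minimal sketch).
# Property values are approximate textbook figures.

CP = 4.18      # specific heat of brine, kJ/(kg K) (approximate)
H_FG = 2300.0  # latent heat of vaporization, kJ/kg (approximate)

def flash_fraction(delta_t):
    """Mass fraction of brine flashed for a stage temperature drop delta_t (K)."""
    return CP * delta_t / H_FG

def distillate_per_kg_feed(stage_drops):
    """Cumulative distillate per kg of feed over a sequence of stage temperature drops."""
    remaining = 1.0
    distillate = 0.0
    for dt in stage_drops:
        flashed = remaining * flash_fraction(dt)
        distillate += flashed
        remaining -= flashed
    return distillate

# Example: 8 chambers with a 3 K temperature drop in each
d = distillate_per_kg_feed([3.0] * 8)  # a few percent of the feed flashes
```

Comparing such a hand estimate with the rig's measured distillate flow is a quick sanity check on the acquisition chain.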
Procedia PDF Downloads 386
56 New Evaluation of the Richness of Cactus (Opuntia) in Active Biomolecules and their Use in Agri-Food, Cosmetic, and Pharmaceutical
Authors: Lazhar Zourgui
Abstract:
Opuntia species are used as local medicinal interventions for chronic diseases and as food sources, mainly because they possess nutritional properties and biological activities. Opuntia ficus-indica (L.) Mill, commonly known as prickly pear or nopal cactus, is the most economically valuable plant in the Cactaceae family worldwide. It is native to tropical and subtropical America and can grow in arid and semi-arid climates. It belongs to the Cactaceae, a family of dicotyledonous angiosperms of which about 1500 species of cacti are known. The Opuntia plant is distributed throughout the world and has great economic potential. There are differences in the phytochemical composition of Opuntia species between wild and domesticated species and within the same species. It is an interesting source of plant bioactive compounds. Bioactive compounds are compounds with nutritional benefits and are generally classified into phenolic and non-phenolic compounds and pigments. Opuntia species are able to grow in almost all climates, for example, arid, temperate, and tropical climates, and their bioactive compound profiles change depending on the species, cultivar, and climatic conditions. Therefore, there is an opportunity for the discovery of new compounds from different Opuntia cultivars. The health benefits of prickly pear are widely demonstrated: there is ample evidence of the health benefits of consuming prickly pear due to its nutrients and vitamins and its antioxidant properties, owing to its content of bioactive compounds. In addition, prickly pear is used in the treatment of hyperglycemia and high cholesterol levels, and its consumption is linked to a lower incidence of coronary heart disease and certain types of cancer. It may be effective in insulin-independent type 2 diabetes mellitus. Opuntia ficus-indica seed oil has shown potent antioxidant and prophylactic effects. Industrial applications of these bioactive compounds are increasing.
In addition to their application in the pharmaceutical industries, bioactive compounds are used in the food industry for the production of nutraceuticals and new food formulations (juices, drinks, jams, sweeteners). In my lecture, I will review in a comprehensive way the phytochemical, nutritional, and bioactive compound composition of the different aerial and underground parts of Opuntia species. The biological activities and applications of Opuntia compounds are also discussed.
Keywords: medicinal plants, cactus, Opuntia, active biomolecules, biological activities
Procedia PDF Downloads 105
55 Solar Power Forecasting for the Bidding Zones of the Italian Electricity Market with an Analog Ensemble Approach
Authors: Elena Collino, Dario A. Ronzio, Goffredo Decimi, Maurizio Riva
Abstract:
The rapid increase of renewable energy in Italy is led by wind and solar installations. The 2017 Italian energy strategy foresees a further development of these sustainable technologies, especially solar. This fact has resulted in new opportunities, challenges, and different problems to deal with. The growth of renewables makes it possible to meet the European requirements regarding energy and environmental policy, but these types of sources are difficult to manage because they are intermittent and non-programmable. Operationally, these characteristics can lead to instability of the voltage profile and increasing uncertainty in energy reserve scheduling. The increasing renewable production must be considered with more and more attention, especially by the Transmission System Operator (TSO). The TSO, in fact, every day provides orders on energy dispatch, once the market outcome has been determined, over extended areas defined mainly on the basis of power transmission limitations. In Italy, six market zones are defined: Northern Italy, Central-Northern Italy, Central-Southern Italy, Southern Italy, Sardinia, and Sicily. Accurate hourly renewable power forecasting for the day-ahead over these extended areas brings an improvement both in terms of dispatching and reserve management. In this study, an operational forecasting tool for the hourly solar output of the six Italian market zones is presented, and its performance is analysed. The implementation is carried out by means of a numerical weather prediction model, coupled with statistical post-processing in order to derive the power forecast on the basis of the meteorological projection. The weather forecast is obtained from the limited area model RAMS over the Italian territory, initialized with IFS-ECMWF boundary conditions. The post-processing calculates the solar power production with the Analog Ensemble technique (AN).
This statistical approach forecasts the production using a probability distribution of the measured production registered in the past when the weather scenario looked very similar to the forecasted one. The similarity is evaluated for the components of the solar radiation: global (GHI), diffuse (DIF) and direct normal (DNI) irradiation, together with the corresponding azimuth and zenith solar angles. These are, in fact, the main factors that affect solar production. Considering that the AN performance is strictly related to the length and quality of the historical data, a training period of more than one year has been used. The training set is made up of historical Numerical Weather Prediction (NWP) forecasts at 12 UTC for the GHI, DIF and DNI variables over the Italian territory, together with the corresponding hourly measured production for each of the six zones. The AN technique makes it possible to estimate the aggregate solar production in an area without information about the technological characteristics of all the solar parks present in that area; besides, this information is often only partially available. Every day, the hourly solar power forecast for the six Italian market zones is made publicly available through a website.
Keywords: analog ensemble, electricity market, PV forecast, solar energy
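The analog search described above can be sketched in a few lines: for today's NWP forecast, find the k past forecasts with the smallest weighted distance over the predictor variables, and use the production measured on those days as the predictive ensemble. A minimal sketch (the standardization, equal weights and toy data are illustrative assumptions, not the operational tool's configuration):

```python
import numpy as np

# Analog Ensemble (AnEn) sketch: the k most similar past forecasts supply
# the measured productions that form the predictive ensemble.

def analog_ensemble(today, past_forecasts, past_production, k=20, weights=None):
    """
    today          : (n_vars,) current forecast (e.g. GHI, DIF, DNI, azimuth, zenith)
    past_forecasts : (n_days, n_vars) historical forecasts of the same variables
    past_production: (n_days,) measured solar production on those days
    Returns the k measured productions of the closest analogs.
    """
    if weights is None:
        weights = np.ones(today.shape[0])
    # standardize each variable so distances are comparable across units
    sd = past_forecasts.std(axis=0) + 1e-12
    d = np.sqrt((((past_forecasts - today) / sd) ** 2 * weights).sum(axis=1))
    idx = np.argsort(d)[:k]
    return past_production[idx]

# Usage: the ensemble mean is a point forecast; percentiles give uncertainty.
rng = np.random.default_rng(0)
hist = rng.uniform(0.0, 1.0, size=(365, 5))   # toy year of 5-variable forecasts
prod = hist[:, 0] * 100.0                      # toy production driven by variable 0
ens = analog_ensemble(hist[10], hist, prod, k=5)
```

Because the ensemble is built from measured production, it implicitly absorbs the aggregate plant characteristics of the zone, which is why no park-by-park metadata is needed.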
Procedia PDF Downloads 158
54 Ecophysiological Features of Acanthosicyos horridus (!Nara) to Survive the Namib Desert
Authors: Jacques M. Berner, Monja Gerber, Gillian L. Maggs-Kolling, Stuart J. Piketh
Abstract:
The enigmatic melon species, Acanthosicyos horridus Welw. ex Hook. f., locally known as !nara, is endemic to the hyper-arid Namib Desert, where it thrives in sandy dune areas and dry river banks. The Namib Desert is characterized by extreme weather conditions which include high temperatures, very low rainfall, and extremely dry air. Plants and animals that have made the Namib Desert their home are dependent on non-rainfall water inputs, like fog, dew and water vapor, for survival. Fog is believed to be the most important non-rainfall water input for most of the coastal Namib Desert and is a lifeline to many Namib plants and animals. It is commonly assumed that the !nara plant is adapted to and dependent upon coastal fog events. The !nara plant shares many comparable adaptive features with other organisms that are known to exploit fog as a source of moisture. These include groove-like structures on the stems and the cone-like structures of thorns. Such structures are believed to be the driving forces behind the directional water flow that allows plants to take advantage of fog events. The !nara-fog interaction was investigated in this study to determine the dependence of !nara on these fog events, as it would illustrate strategies to benefit from non-rainfall water inputs. The direct water uptake capacity of !nara shoots was investigated through absorption tests. Furthermore, the movement and behavior of fluorescent water droplets on a !nara stem were investigated through time-lapse macrophotography. The shoot water potential was measured to investigate the effect of fog on the water status of !nara stems. These tests were used to determine whether the morphology of !nara has evolved to exploit fog as a non-rainfall water input and whether the !nara plant has adapted physiologically in response to fog. Chlorophyll a fluorescence was used to compare the photochemical efficiency of !nara plants on days with fog events to that on non-foggy days.
The results indicate that !nara plants do have the ability to take advantage of fog events, as commonly believed. However, the !nara plants did not exhibit visible signs of drought stress, and this, together with the strong shoot water potential, indicates that these plants are reliant on permanent underground water sources. Chlorophyll a fluorescence data indicated that temperature stress and wind were some of the main abiotic factors influencing the plants' overall vitality.
Keywords: Acanthosicyos horridus, chlorophyll a fluorescence, fog, foliar absorption, !nara
Procedia PDF Downloads 158
53 Physical Properties Characterization of Shallow Aquifer and Groundwater Quality Using Geophysical Method Based on Electrical Resistivity Tomography in Arid Region, Northeastern Area of Tunisia: A Study Case of Smar Aquifer
Authors: Nesrine Frifita
Abstract:
In recent years, serious interest in underground sources has led to more intensive studies of the depth, thickness, geometry and properties of aquifers. Geophysical methods are the common techniques used to investigate the subsurface. However, determining the exact location of groundwater in subsurface layers is one of the problems that needs to be resolved, while the biggest problem is the quality of the groundwater, which suffers from pollution risk, especially with water shortages in arid regions under remarkable climate change. The present study was conducted using electrical resistivity tomography at the Jeffara coastal area in Southeast Tunisia to image the potential shallow aquifer and study its physical properties. The purpose of this study is to understand the characteristics and depth of the Smar aquifer. Therefore, it can be used as a reference in groundwater drilling in order to guide farmers and to improve the livelihoods of the inhabitants of nearby cities. The use of the Wenner-Schlumberger array for data acquisition is suitable to obtain a deeper profile in areas with homogeneous layers. For that purpose, six electrical resistivity profiles were carried out in the Smar watershed using 72 electrodes with 4 and 5 m spacing. The resistivity measurements were carefully interpreted by a least-squares inversion technique using the RES2DINV program. Findings show that the Smar aquifer is about 31 m thick and extends to 36.5 m depth in the downstream area of Oued Smar. The defined depth and geometry of the Smar aquifer indicate that the sedimentary cover thins toward the coast, and the Smar shallow aquifer becomes deeper toward the west. The resistivity values show a significant contrast, even reaching < 1 Ωm in ERT1; this resistivity value can be related to saline water, which foretells a risk of pollution and poor groundwater quality.
The ERT1 geoelectrical model defines an unsaturated zone, while under the ERT3 site the geoelectrical model presents a saturated zone whose low resistivity values indicate local surface water coming from the nearby Office of the National Sanitation Utility (ONAS); this water can be a source of recharge of the studied shallow aquifer and may further deteriorate the groundwater quality in this region.
Keywords: electrical resistivity tomography, groundwater, recharge, Smar aquifer, southeastern Tunisia
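Each raw measurement in such a survey is converted to an apparent resistivity before inversion: the measured voltage-to-current ratio is multiplied by a geometric factor that depends only on the electrode spacings. A minimal sketch for a Schlumberger-type spread (the spacings and readings below are illustrative, not values from this survey):

```python
import math

# Apparent resistivity for a Schlumberger (or Wenner-Schlumberger) spread,
# as used in VES/ERT surveys: rho_a = K * (dV / I), with the geometric
# factor K = pi * (L^2 - l^2) / (2 * l), L = AB/2, l = MN/2.

def schlumberger_rho_a(ab_half, mn_half, delta_v, current):
    """
    ab_half : half current-electrode spacing AB/2 (m)
    mn_half : half potential-electrode spacing MN/2 (m)
    delta_v : measured potential difference (V)
    current : injected current (A)
    Returns apparent resistivity in ohm-m.
    """
    k = math.pi * (ab_half**2 - mn_half**2) / (2.0 * mn_half)  # geometric factor
    return k * delta_v / current

# Example: AB/2 = 10 m, MN/2 = 1 m, 50 mV measured at 100 mA
rho = schlumberger_rho_a(10.0, 1.0, 0.050, 0.100)
```

Programs such as RES2DINV then invert a pseudosection of these apparent resistivities into a model of true resistivity versus depth.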
Procedia PDF Downloads 74
52 Investigation of Subsurface Structures within Bosso Local Government for Groundwater Exploration Using Magnetic and Resistivity Data
Authors: Adetona Abbassa, Aliyu Shakirat B.
Abstract:
The study area is part of Bosso Local Government, enclosed within longitude 6.25' to 6.31' and latitude 9.35' to 9.45', an area of 16 x 8 km², within the basement region of central Nigeria. The region hosts Nigerian Air Force Base 12 (NAF 12, quick response) and its staff quarters, the headquarters of Bosso Local Government, two offices of the Independent National Electoral Commission, four government secondary schools, six primary schools and Minna international airport. The area suffers an acute shortage of water from November, when the rains stop, to June, when the rains commence within North Central Nigeria. One way of addressing this problem is a reconnaissance method that delineates possible fractures and fault lines existing within the region by sampling the aeromagnetic data and using an appropriate analytical algorithm, followed by an appropriate ground-truthing method to confirm whether a fracture is connected to underground water movement. The first vertical derivative used for structural analysis reveals a set of lineaments labeled AA', BB', CC', DD', EE' and FF', all trending in the northeast-southwest direction. AA' is just below latitude 9.45', above Maikunkele village, cutting off the upper part of the field; it runs through Kangwo, Nini, Lawo and other communities. BB' is at latitude 9.43'; it is truncated at about 2 km before Maikunkele and Kuyi. CC' is around 9.40', sitting below Maikunkele, and runs down through Nanaum. DD' runs from latitude 9.38'; interestingly, no community lies within the region where this fault passes. Results from the three sites where Vertical Electrical Sounding was carried out reveal three layers comprising topsoil, an intermediate clay formation and a weathered/fractured or fresh basement.
A depth-to-basement map was also produced; the depth to the basement from the ground surface at VES A₂, B₅, D₂ and E₁ is relatively deeper, with values ranging between 25 and 35 m, while the shallower regions of the area have depths between 10 and 20 m. Hence, VES A₂, A₅, B₄, B₅, C₂, C₄, D₄, D₅, E₁, E₃, and F₄ are high-conductivity zones that are prolific for groundwater potential. The depth range of the aquifer potential zones is between 22.7 m and 50.4 m. The result from site C is quite unique: though the three layers were detected in the majority of the VES points, the maximum depth to the basement in 90% of the VES points is below 8 m; only three VES points, C₆, E₂ and F₂, show considerable viability, with depths of 35.2 m and 38 m, but the lack of connectivity will be a big challenge.
Keywords: lithology, aeromagnetic, aquifer, geoelectric, iso-resistivity, basement, vertical electrical sounding (VES)
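The first vertical derivative used to pick out the lineaments is conventionally computed in the wavenumber domain: the 2-D FFT of the gridded magnetic field is multiplied by the wavenumber magnitude |k| and transformed back, which sharpens the edges of shallow structures. A minimal sketch for a regularly gridded anomaly map (the grid and spacing are illustrative, not the study's data):

```python
import numpy as np

# First vertical derivative (FVD) of gridded magnetic data via the
# wavenumber domain: FVD = IFFT( FFT(field) * |k| ).

def first_vertical_derivative(grid, dx, dy):
    """grid: 2-D array of magnetic values; dx, dy: grid spacing (m)."""
    ny, nx = grid.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    kxx, kyy = np.meshgrid(kx, ky)            # shapes match (ny, nx)
    k = np.sqrt(kxx**2 + kyy**2)
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * k))

# Usage: lineaments show up as sharpened gradients in the FVD map.
demo = np.zeros((64, 64))
demo[:, 30:34] = 100.0                         # a toy NS-trending "structure"
fvd = first_vertical_derivative(demo, dx=100.0, dy=100.0)
```

Because |k| amplifies high wavenumbers, real data are usually low-pass filtered first so that the derivative does not amplify noise along with the geology.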
Procedia PDF Downloads 137
51 How to Reach Net Zero Emissions? On the Permissibility of Negative Emission Technologies and the Danger of Moral Hazards
Authors: Hanna Schübel, Ivo Wallimann-Helmer
Abstract:
In order to reach the goal of the Paris Agreement to not overshoot 1.5°C of warming above pre-industrial levels, various countries, including the UK and Switzerland, have committed themselves to net zero emissions by 2050. The employment of negative emission technologies (NETs) is very likely going to be necessary for meeting these national objectives as well as other internationally agreed climate targets. NETs are methods of removing carbon from the atmosphere and are thus a means of addressing climate change. They range from afforestation to technological measures such as direct air capture and carbon storage (DACCS), where CO2 is captured from the air and stored underground. Like all so-called geoengineering technologies, the development and deployment of NETs are often subject to moral hazard arguments. As these technologies could be perceived as an alternative to mitigation efforts, so the argument goes, they are potentially a dangerous distraction from the main target of mitigating emissions. We think that this is a dangerous argument to make, as it may hinder the development of NETs, which are an essential element of net zero emission targets. In this paper, we argue that the moral hazard argument is only problematic if we do not reflect upon which levels of emissions are at stake in order to meet net zero emissions. In response to the moral hazard argument, we develop an account of which levels of emissions in given societies should be mitigated and not be the target of NETs, and which levels of emissions can legitimately be a target of NETs. For this purpose, we define four different levels of emissions: the current level of individual emissions, the level individuals emit in order to appear in public without shame, the level of a fair share of individual emissions in the global budget, and finally the baseline of net zero emissions.
At each level of emissions, different subjects are to be assigned responsibilities if societies and/or individuals are committed to the target of net zero emissions. We argue that emissions within one's fair share do not demand individual mitigation efforts. The same holds with regard to individuals and the baseline level of emissions necessary to appear in public in their societies without shame. Individuals are only under a duty to reduce their emissions if they exceed this baseline level. This is different for whole societies. Societies demanding more emissions to appear in public without shame than the individual fair share are under a duty to foster emission reductions, and may not legitimately reduce them by introducing NETs. NETs are legitimate for reducing emissions only below the level of fair shares and for reaching net zero emissions. Since access to NETs to achieve net zero emissions demands technology not affordable to individuals, there are also no full individual responsibilities to achieve net zero emissions. This is mainly a responsibility of societies as a whole.
Keywords: climate change, mitigation, moral hazard, negative emission technologies, responsibility
Procedia PDF Downloads 117
50 Design, Control and Implementation of 300Wp Single Phase Photovoltaic Micro Inverter for Village Nano Grid Application
Authors: Ramesh P., Aby Joseph
Abstract:
Micro inverters provide a module-embedded solution for harvesting energy from small-scale solar photovoltaic (PV) panels. In addition to higher modularity and reliability (25 years of life), the micro inverter has inherent advantages: it avoids long DC cables, eliminates module mismatch losses, minimizes the partial shading effect, and improves safety and flexibility in installations. Due to the above-stated benefits, renewable energy technology based on solar PV micro inverters is becoming more widespread in village nano grid applications, ensuring grid independence for rural communities and areas without access to electricity. While the primary objective of this paper is to discuss the problems related to rural electrification, the concept can also be extended to urban installations with grid connectivity. This work presents a comprehensive analysis of the power circuit design, control methodologies and prototyping of a 300Wₚ single phase PV micro inverter. The paper investigates two different topologies for PV micro inverters: on the one hand, a single stage flyback/forward configuration, and on the other, a double stage configuration comprising a DC-DC converter and an H-bridge DC-AC inverter. This work covers power decoupling techniques that reduce the input filter capacitor size needed to buffer double line (100 Hz) ripple energy and eliminate the use of electrolytic capacitors. The double line oscillation propagated back to the PV module will affect the Maximum Power Point Tracking (MPPT) performance, and the grid current will be distorted. To mitigate this issue, an independent MPPT control algorithm is developed in this work to reject the propagation of this double line ripple oscillation, both to the PV side to improve MPPT performance and to the grid side to improve current quality.
Here, the power hardware accepts a wide input voltage variation and consists of suitably rated MOSFET switches, galvanically isolated gate drivers, high-frequency magnetics and long-lifespan film capacitors. The digital controller hardware platform, with its external peripheral interface, is built around the floating-point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the PV micro inverter is written in C and was developed using the Code Composer Studio Integrated Development Environment (IDE). In this work, prototype hardware for the single phase photovoltaic micro inverter in the double-stage configuration was developed, and a comparative analysis of the two configurations, with experimental results, will be presented.
Keywords: double line oscillation, micro inverter, MPPT, nano grid, power decoupling
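The capacitor-size reduction discussed above can be put in concrete terms with the standard single-phase decoupling relation C ≈ P / (2π·f_line·V_dc·ΔV). A minimal sketch, assuming a 50 Hz grid, a 400 V DC link and a 20 V peak-to-peak ripple allowance (none of these operating values are stated in the abstract):

```python
import math

def decoupling_capacitance(p_w, f_line_hz, v_dc, dv_pp):
    """Minimum DC-link capacitance to buffer the double-line-frequency
    ripple energy of a single-phase inverter.
    Energy swing per line cycle: dE = P / (2*pi*f_line),
    stored in the capacitor as dE = C * V_dc * dV (peak-to-peak)."""
    return p_w / (2 * math.pi * f_line_hz * v_dc * dv_pp)

# 300 W micro inverter on a 50 Hz grid (100 Hz ripple), assumed 400 V
# DC link with a 20 V peak-to-peak ripple allowance:
c = decoupling_capacitance(300, 50, 400, 20)
print(f"{c * 1e6:.0f} uF")  # ~119 uF
```

Tightening the ripple allowance or lowering the DC-link voltage pushes the required capacitance up, which is why active power decoupling is attractive for keeping film capacitors small.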
Procedia PDF Downloads 133
49 Geothermal Resources to Ensure Energy Security During Climate Change
Authors: Debasmita Misra, Arthur Nash
Abstract:
Energy security and sufficiency enable the economic development and welfare of a nation or a society. The global energy system is currently dominated by fossil fuels, a non-renewable resource, which leaves energy security vulnerable. Hence, many nations have begun augmenting their energy systems with renewable resources such as solar, wind, biomass and hydro. However, how sustainable some of these renewable resources will remain under climate change is a matter of concern. Geothermal energy has been underexplored and underexploited in global renewable energy production and security, although it is gaining attractiveness as a renewable resource. The question is whether geothermal resources are more sustainable than other renewable energy resources. High-temperature reservoirs (> 220 °F) can produce electricity in flash/dry steam plants as well as binary cycle production facilities. Most of the world's high-enthalpy geothermal resources lie within the seismo-tectonic belt. Exploration is therefore of great importance for conventional geothermal systems in order to improve their economic viability. In recent years, several exploration methods for geothermal resources, such as seismic and electromagnetic surveys, have seen increased use and development. The thermal infrared band of Landsat reflects differences in land surface temperature, so ETM+ data with a specific grey-stretch enhancement have been used to explore for underground hot water. Another way of exploring for potential power is fairway play analysis for sites without surface expression and in rift zones; this type of analysis can improve the success rate of project development by reducing exploration costs.
Identifying the basin-scale distribution of geologic factors that control the geothermal environment would help identify the controls on resource concentration beyond heat flow alone, thus improving the probability of success. The first step is compiling existing geophysical data. This leads to conceptual models of potential geothermal concentrations, which can then be assembled into a geodatabase to produce risk maps. Geospatial analysis and other GIS tools can be used in such efforts to produce spatial distribution maps. The goal of this paper is to discuss how climate change may impact renewable energy resources and how a synthesized analysis can be developed for geothermal resources to ensure sustainable and cost-effective exploitation of the resource.
Keywords: exploration, geothermal, renewable energy, sustainable
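As a concrete illustration of the thermal-infrared step mentioned above, at-sensor radiance from the ETM+ thermal band can be converted to brightness temperature with the published Landsat 7 Band 6 calibration constants; the radiance value below is only an assumed example, not data from the study:

```python
import math

# Published Landsat 7 ETM+ Band 6 thermal calibration constants
K1 = 666.09    # W / (m^2 * sr * um)
K2 = 1282.71   # Kelvin

def brightness_temperature(radiance):
    """Convert at-sensor spectral radiance to brightness temperature (K)
    via the inverted Planck relation used for Landsat thermal bands."""
    return K2 / math.log(K1 / radiance + 1.0)

# Illustrative radiance of 8.7 W/(m^2 sr um), roughly a 295 K surface
t = brightness_temperature(8.7)
print(f"{t:.1f} K")
```

Anomalously warm pixels flagged this way are only a screening signal; they still have to be reconciled with the geophysical and geological data described in the text.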
Procedia PDF Downloads 153
48 A Comparative Study on South-East Asian Leading Container Ports: Jawaharlal Nehru Port Trust, Chennai, Singapore, Dubai, and Colombo Ports
Authors: Jonardan Koner, Avinash Purandare
Abstract:
In today's globalized world, international business is a key driver of a country's growth, and connecting ports, the road network and the rail network are strategic to sustaining it. India's international business is booming in both exports and imports. Ports play a central part in the growth of international trade, and ensuring competitive ports is of critical importance. India's long coastline is a major asset, as it has enabled the development of a large number of major and minor ports that contribute to the development of maritime trade. The national economic development of India requires a well-functioning seaport system. To gauge the comparative strength of Indian ports against similar South-east Asian ports, the study pursues three objectives: (i) to identify the key parameters of an international mega container port, (ii) to compare the five selected container ports (JNPT, Chennai, Singapore, Dubai, and Colombo) according to the users of the ports, and (iii) to measure and compare the growth of the five ports' throughput over time. The study draws on both primary and secondary databases. A linear time trend analysis shows the trend in the quantum of exports, imports and total goods/services handled by each port over the years. Comparative trend analyses are carried out for cargo traffic handled in terms of tonnage (weight) and number of containers (TEUs), and between containerized and non-containerized cargo traffic at the five selected ports.
The primary data analysis comprises a comparative analysis of factor ratings through bar diagrams, statistical inference of factor ratings for the five ports, consolidated comparative line and bar charts of factor ratings, and the frequency distribution of ratings. A linear regression model is used to forecast the container capacities required at JNPT and Chennai by the year 2030. Multiple regression analysis measures the impact of 34 selected explanatory variables on the 'Overall Performance of the Port' for each of the five ports. The research outcome is of high significance to the stakeholders of Indian container handling ports: JNPT and Chennai are benchmarked against international ports, Singapore, Dubai, and Colombo, which are the competing ports in the neighbouring region. The study analyses feedback ratings for 35 selected factors covering physical infrastructure and services rendered to port users. This feedback provides valuable data for improving the facilities offered to port users, and such improvements would help port users carry out their work more efficiently.
Keywords: throughput, twenty-foot equivalent units (TEUs), cargo traffic, shipping lines, freight forwarders
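The linear-trend forecasting step described above can be sketched as follows; the throughput series is hypothetical, since the abstract does not reproduce the study's data:

```python
import numpy as np

# Hypothetical annual container throughput (million TEUs) for one port;
# the actual series used in the study is not given in the abstract.
years = np.array([2015, 2016, 2017, 2018, 2019, 2020, 2021])
teus  = np.array([4.49, 4.50, 4.83, 5.13, 5.10, 4.47, 5.63])

# Fit a linear time trend (degree-1 least squares), then extrapolate
# to 2030 as in the paper's capacity forecast
slope, intercept = np.polyfit(years, teus, 1)
forecast_2030 = slope * 2030 + intercept
print(f"Projected 2030 throughput: {forecast_2030:.2f} million TEUs")
# ~6.4 million TEUs with this illustrative series
```

A plain linear trend ignores saturation and capacity constraints, so such forecasts are best read as a lower-order planning baseline rather than a demand model.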
Procedia PDF Downloads 131
47 Familiarity with Flood and Engineering Solutions to Control It
Authors: Hamid Fallah
Abstract:
Undoubtedly, flood is a natural disaster, and in practice it is the most destructive one in the world, both in loss of life and in financial losses. From 1988 to 1997, about 390,000 people were killed by natural disasters worldwide, of which 58% were related to floods, 26% to earthquakes, and 16% to storms and other disasters. Total damages in those 10 years were about 700 billion dollars, of which 33%, 29% and 28% were attributable to floods, storms and earthquakes, respectively. The worrisome point is the increasing trend of flood deaths and damages worldwide in recent decades. Growth of population and assets on flood plains, changes in hydro-systems and the destructive effects of human activities have been the main reasons for this increase. During rain and snowfall, some of the water is absorbed by the soil and plants, a percentage evaporates, and the rest flows off as runoff. Floods occur when the soil and plants cannot absorb the rainfall and the natural river channel consequently lacks the capacity to pass the generated runoff. On average, almost 30% of precipitation is converted into runoff, a share that increases with snowmelt. Recurring floods create an area around the river called the flood plain. River floods are often caused by heavy rains, in some cases accompanied by snowmelt. A flood that surges down a river with little or no warning is called a flash flood; casualties from these rapid floods, which occur in small watersheds, are generally higher than those of large river floods. Coastal areas are also subject to flooding from waves generated by strong storms over the ocean surface or by undersea earthquakes. Floods not only damage property and endanger the lives of humans and animals but also leave other effects.
Runoff from heavy rains causes soil erosion upstream and sedimentation problems downstream. Habitats of fish and other animals are often destroyed by floods, and the high speed of the current increases the damage. Long-lasting floods halt traffic and prevent drainage and the economic use of land. Bridge supports, river banks, sewage outlets and other structures are damaged, and shipping and hydropower generation are disrupted. The economic losses from floods worldwide are estimated at tens of billions of dollars annually.
Keywords: flood, hydrological engineering, GIS, dam, small hydropower, suitability
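The rainfall-to-runoff relationship described above is commonly quantified with the rational method, Q = C·i·A. A minimal sketch, using the ~30% runoff fraction quoted in the abstract as the coefficient; the storm intensity and catchment area are illustrative assumptions:

```python
def peak_runoff_rational(c, intensity_mm_hr, area_km2):
    """Rational method Q = C*i*A: peak discharge (m^3/s) from runoff
    coefficient C, rainfall intensity i (mm/h) and catchment area A (km^2).
    The 1/3.6 factor converts (mm/h * km^2) to m^3/s."""
    return c * intensity_mm_hr * area_km2 / 3.6

# With the ~30% runoff fraction as C, an assumed 25 mm/h storm
# over an assumed 10 km^2 catchment:
q = peak_runoff_rational(0.30, 25, 10)
print(f"{q:.1f} m^3/s")  # ~20.8 m^3/s
```

In practice the coefficient C varies strongly with land cover and antecedent moisture, which is exactly why urbanization of flood plains raises peak discharges.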
Procedia PDF Downloads 67
46 Drying Shrinkage of Concrete: Scale Effect and Influence of Reinforcement
Authors: Qier Wu, Issam Takla, Thomas Rougelot, Nicolas Burlion
Abstract:
In the framework of the French underground disposal of intermediate-level radioactive waste, concrete is widely used as a construction material for containers and tunnels. Drying shrinkage is one of the most disadvantageous phenomena for concrete structures: cracks generated by differential shrinkage can impair the mechanical behavior, increase the permeability of concrete and act as a preferential path for aggressive species, leading to an overall decrease in durability and serviceability. It is of great interest to understand the drying shrinkage phenomenon in order to predict, and even control, the strains of concrete. One question is whether results obtained on laboratory samples accord with measurements on a real structure; another concerns the influence of reinforcement on the drying shrinkage of concrete. As part of a global project with Andra (French National Radioactive Waste Management Agency), the present study experimentally investigates the scale effect as well as the influence of reinforcement on the development of drying shrinkage of two high performance concretes (based on CEM I and CEM V cements, according to European standards). Various sample sizes are used, from ordinary laboratory specimens up to real-scale specimens: prismatic specimens with different volume-to-surface (V/S) ratios, thin slices (2 mm thick), cylinders of different sizes (37 and 160 mm in diameter), hollow cylinders, cylindrical columns (1000 mm high) and square columns (320×320×1000 mm). The square columns were manufactured with different reinforcement rates and can be considered mini-structures approximating the behavior of a real voussoir from the waste disposal facility. All samples are kept, in a first stage, at 20°C and 50% relative humidity (the initial conditions in the tunnel) in a specific climatic chamber developed by the Laboratory of Mechanics of Lille.
The mass evolution and drying shrinkage are monitored regularly. The results obtained show that specimen size has a strong impact on the water loss and drying shrinkage of concrete: specimens with a smaller V/S ratio and smaller overall size exhibit larger drying shrinkage. The correlation between mass variation and drying shrinkage follows the same tendency for all specimens despite the size difference. However, the influence of the reinforcement rate on drying shrinkage is not clear from the present results. The second conservation stage (50°C and 30% relative humidity) could yield additional results on these influences.
Keywords: concrete, drying shrinkage, mass evolution, reinforcement, scale effect
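The volume-to-surface ratio at the heart of the scale effect can be computed directly for the cylindrical specimens listed above; the heights used here are assumptions (taken as twice the diameter), since the abstract states heights only for the columns:

```python
import math

def v_over_s_cylinder(d_mm, h_mm):
    """Volume-to-drying-surface ratio (mm) of a solid cylinder assumed
    to dry through all of its faces."""
    r = d_mm / 2
    volume  = math.pi * r**2 * h_mm
    surface = 2 * math.pi * r * h_mm + 2 * math.pi * r**2
    return volume / surface

# Diameters from the abstract; heights assumed at 2x diameter
for d, h in [(37, 74), (160, 320)]:
    print(f"d={d} mm: V/S = {v_over_s_cylinder(d, h):.1f} mm")
# d=37 mm:  V/S = 7.4 mm
# d=160 mm: V/S = 32.0 mm
```

The small cylinder has roughly a quarter of the V/S of the large one, i.e. a much shorter internal moisture path, consistent with the observation that smaller specimens dry and shrink faster.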
Procedia PDF Downloads 183
45 Ultrasound Disintegration as a Potential Method for the Pre-Treatment of Virginia Fanpetals (Sida hermaphrodita) Biomass before Methane Fermentation Process
Authors: Marcin Dębowski, Marcin Zieliński, Mirosław Krzemieniewski
Abstract:
As methane fermentation is a complex series of successive biochemical transformations, its subsequent stages are determined, to a varying extent, by physical and chemical factors. A specific equilibrium becomes established in a functioning fermentation system between the environmental conditions and the rate of biochemical reactions and the products of successive transformations. Among the physical factors that influence the effectiveness of methane fermentation, key significance is ascribed to temperature and the intensity of biomass agitation. Among the chemical factors, significant are the pH value, the type and availability of the culture medium (put simply, the C/N ratio) and the presence of toxic substances. One important element influencing the effectiveness of methane fermentation is the pre-treatment of organic substrates and the mode in which the organic matter is made available to anaerobes. Of all the known and described methods of organic substrate pre-treatment before methane fermentation, ultrasound disintegration is one of the most interesting technologies. Interest in the ultrasound field, and in installations operating within existing systems, stems principally from the very wide and universal technological possibilities offered by the sonication process. This physical factor can induce deep physicochemical changes in ultrasonicated substrates that are highly beneficial from the viewpoint of methane fermentation. A special role is ascribed here to the disintegration of the biomass that is subsequently subjected to methane fermentation. Once cell walls are damaged, cytoplasm and cellular enzymes are released; the released substances, whether in dissolved or colloidal form, are immediately available to anaerobic bacteria for biodegradation.
To ensure maximal release of organic matter from dead biomass cells, disintegration processes aim for a particle size below 50 μm. Many research works, and systems operating at technical scale, have demonstrated that immediately after substrate ultrasonication the content of organic matter (characterized by the COD, BOD5 and TOC indices) increases in the dissolved phase of the sedimentation water. This phenomenon points to immediate sonolysis of the solid substances contained in the biomass and to the release of cell material, and consequently to an intensification of the hydrolytic phase of fermentation. The result is a significant reduction in fermentation time and an increased production of gaseous metabolites by the anaerobic bacteria. Because ultrasound disintegration of Virginia fanpetals biomass to intensify its conversion is a novel technique, it is often underestimated by operators of agricultural biogas plants. It has, however, many advantages with a direct impact on its technological and economic superiority over the biomass conversion methods applied thus far. At present, ultrasound disintegrators for biomass conversion are not mass-produced but are built by specialized groups in scientific or R&D centers; their quality and effectiveness are therefore determined to a large extent by their manufacturers' knowledge and skills in acoustics and electronic engineering.
Keywords: ultrasound disintegration, biomass, methane fermentation, biogas, Virginia fanpetals
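For comparing sonication doses between laboratory and technical-scale installations, a common metric (not named in the abstract, but standard in the disintegration literature) is the specific energy input per unit of total solids:

```python
def specific_energy_kj_per_kg_ts(power_w, time_s, volume_l, ts_g_per_l):
    """Specific ultrasound energy input E_s = (P * t) / (V * TS).
    Units: W*s = J delivered, divided by grams of total solids,
    gives J/g, which is numerically kJ/kg TS."""
    return (power_w * time_s) / (volume_l * ts_g_per_l)

# Illustrative (assumed) sonication run: 200 W for 10 min on 1 L of
# biomass slurry at 40 g TS/L
e_s = specific_energy_kj_per_kg_ts(200, 600, 1.0, 40)
print(f"{e_s:.0f} kJ/kg TS")  # 3000 kJ/kg TS
```

Normalizing by total solids rather than by volume is what makes doses transferable from a bench disintegrator to a full-scale unit treating a thicker slurry.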
Procedia PDF Downloads 367
44 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses
Authors: Neil Bar, Andrew Heweston
Abstract:
Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to calculate PF automatically. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure and groundwater pore pressure all influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated 'approximately', or with allowances for some variability, rather than 'exactly'. In numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others are given probabilistic inputs at the user's discretion and according to their understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit.
A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability
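A minimal sketch of the Monte-Carlo approach contrasted here with the point estimate method, using a textbook planar-sliding factor of safety; the geometry, loads and strength distributions below are invented for illustration and do not come from the case study:

```python
import numpy as np

rng = np.random.default_rng(42)

def monte_carlo_pf(n=100_000):
    """Monte-Carlo probability of failure for a simple planar sliding
    block: FS = (c*A + W*cos(a)*tan(phi)) / (W*sin(a)).
    The geometry is treated deterministically while the shear strength
    parameters (c, phi) are sampled from assumed normal distributions;
    PF is the fraction of realizations with FS < 1."""
    W, A, alpha = 2000.0, 50.0, np.radians(40)   # weight kN, plane area m^2
    c   = rng.normal(15.0, 4.0, n)               # cohesion, kPa (assumed)
    phi = np.radians(rng.normal(30.0, 3.0, n))   # friction angle (assumed)
    fs = (c * A + W * np.cos(alpha) * np.tan(phi)) / (W * np.sin(alpha))
    return np.mean(fs < 1.0)

print(f"PF = {monte_carlo_pf():.3f}")
```

Note that the mean-value FS here is about 1.3, yet the slope still carries a non-trivial PF; which inputs are randomized, and with what spread, drives the answer, which is the paper's central caution.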
Procedia PDF Downloads 208