Search results for: super elevation
39 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar
Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo
Abstract:
The future of solar energy in Qatar is evolving steadily. Hence, high-quality spatial solar radiation data is of the utmost importance for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data are derived from physical and statistical models, while ground data are collected by solar radiation measurement stations. Ground data are of high quality; however, they are limited to scattered point locations, and the installation and maintenance of ground stations are costly. On the other hand, satellite solar radiation data are spatially continuous and available for all geographical locations, but they are relatively less accurate than ground data. To exploit the advantages of both, a product has been developed here which provides spatial continuity and higher accuracy than either dataset alone. The popular National Solar Radiation Database, NSRDB (PSM V3 model, spatial resolution: 4 km), is chosen here for merging with ground-measured solar radiation in Qatar. The spatial distribution of ground solar radiation measurement stations is comprehensive in Qatar, with a network of 13 ground stations. The monthly average of the daily total Global Horizontal Irradiation (GHI) component from ground and satellite data is used for error analysis. Normalized root mean square error (NRMSE) values of 3.31%, 6.53%, and 6.63% were observed for October, November, and December 2019, respectively, when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS, ESRI. The workflow of the algorithm combines regression and kriging methods. A regression model (OLS, ordinary least squares) is fitted between the ground and NSRDB data points. A semi-variogram model is fitted to the experimental semi-variogram obtained from the residuals. The kriged residuals obtained after fitting the semi-variogram model are added to the NSRDB values predicted by the regression model to obtain the final predicted values. The NRMSE values obtained after merging are 1.84%, 1.28%, and 1.81% for October, November, and December 2019, respectively. One more explanatory variable, ground elevation, has been incorporated into the regression and kriging methods to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps have been created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% have been observed for October, November, and December 2019, respectively. The proposed merging method has proven to be highly accurate. An additional method is also proposed here to generate calibrated maps using the regression and kriging models, and then to use the calibrated model to generate solar radiation maps from the explanatory variables alone when not enough historical ground data are available for long-term analysis. The NRMSE values obtained after comparing the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019, respectively. Keywords: global horizontal irradiation, GIS, empirical bayesian kriging regression prediction, NSRDB
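For readers who want to prototype the merging step outside ArcGIS, a minimal open-source sketch of the same idea (OLS regression of ground GHI on satellite GHI, ordinary kriging of the residuals, merged field = regression trend plus kriged residuals, scored with NRMSE) might look as follows. This is not the authors' Empirical Bayesian Kriging Regression Prediction workflow; the station coordinates, GHI values, and the grid are synthetic placeholders.

```python
# Hypothetical sketch only: regression-plus-residual-kriging merge of satellite
# and ground GHI, with NRMSE as the error metric (RMSE normalized by the mean
# of the observations). All data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)

def nrmse(obs, pred):
    # Normalized root mean square error, in percent
    return 100.0 * np.sqrt(np.mean((obs - pred) ** 2)) / np.mean(obs)

# 13 hypothetical stations: coordinates, satellite (NSRDB-like) GHI, ground GHI
x = rng.uniform(0, 100, 13)
y = rng.uniform(0, 100, 13)
ghi_sat = rng.uniform(4.0, 6.5, 13)                        # kWh/m2/day
ghi_ground = 0.95 * ghi_sat + 0.2 + rng.normal(0, 0.1, 13)

# 1) OLS regression of ground GHI on satellite GHI
ols = LinearRegression().fit(ghi_sat[:, None], ghi_ground)
residuals = ghi_ground - ols.predict(ghi_sat[:, None])

# 2) Fit a semi-variogram to the residuals and krige them onto the satellite grid
ok = OrdinaryKriging(x, y, residuals, variogram_model="spherical")
grid_x = np.linspace(0, 100, 50)
grid_y = np.linspace(0, 100, 50)
res_grid, _ = ok.execute("grid", grid_x, grid_y)

# 3) Merged field = regression trend applied to the satellite grid + kriged residuals
sat_grid = rng.uniform(4.0, 6.5, (50, 50))                 # placeholder satellite grid
merged = ols.predict(sat_grid.ravel()[:, None]).reshape(50, 50) + res_grid

print("NRMSE, satellite vs ground at the stations: %.2f%%" % nrmse(ghi_ground, ghi_sat))
# In practice the merged product is scored by (leave-one-out) cross-validation
# at the stations rather than at the calibration points themselves.
```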
Procedia PDF Downloads 89
38 Geotechnical Evaluation and Sizing of the Reinforcement Layer on Soft Soil in the Construction of the North Triage Road Clover, in Brasilia Federal District, Brazil
Authors: Rideci Farias, Haroldo Paranhos, Joyce Silva, Elson Almeida, Hellen Silva, Lucas Silva
Abstract:
The constant growth of the fleet of vehicles in the big cities, makes that the Engineering is dynamic, with respect to the new solutions for traffic flow in general. In the Federal District (DF), Brazil, it is no different. The city of Brasilia, Capital of Brazil, and Cultural Heritage of Humanity by UNESCO, is projected to 500 thousand inhabitants, and today circulates more than 3 million people in the city, and with a fleet of more than one vehicle for every two inhabitants. The growth of the city to the North region, made that the urban planning presented solutions for the fleet in constant growth. In this context, a complex of viaducts, road accesses, creation of new rolling roads and duplication of the Bragueto bridge over Paranoa lake in the northern part of the city was designed, giving access to the BR-020 highway, denominated Clover of North Triage (TTN). In the geopedological context, the region is composed of hydromorphic soils, with the presence of the water level at some times of the year. From the geotechnical point of view, are soils with SPT < 4 and Resistance not drained, Su < 50 kPa. According to urban planning in Brasília, special art works can not rise in the urban landscape, contrasting with the urban characteristics of the architects Lúcio Costa and Oscar Niemeyer. Architects hired to design the new Capital of Brazil. The urban criterion then created the technical impasse, resulting in the technical need to ‘bury’ the works of art and in turn the access greenhouses at different levels, in regions of low support soil and water level Outcrossing, generally inducing the need for this study and design. For the adoption of the appropriate solution, Standard Penetration Test (SPT), Vane Test, Diagnostic peritoneal lavage (DPL) and auger boring campaigns were carried out. With the comparison of the results of these tests, the profiles of resistance of the soils and water levels were created in the studied sections. Geometric factors such as existing sidewalks and lack of elevation for the discharge of deep drainage water have inhibited traditional techniques for total removal of soft soils, thus avoiding the use of temporary drawdown and shoring of excavations. Thus, a structural layer was designed to reinforce the subgrade by means of the ‘needling’ of the soft soil, without the need for longitudinal drains. In this context, the article presents the geological and geotechnical studies carried out, but also the dimensioning of the reinforcement layer on the soft soil with a view to the main objective of this solution that is to allow the execution of the civil works without the interference in the roads in use, Execution of services in rainy periods, presentation of solution compatible with drainage characteristics and soft soil reinforcement.Keywords: layer, reinforcement, soft soil, clover of north triage
Procedia PDF Downloads 226
37 Dimethyl fumarate Alleviates Valproic Acid-Induced Autism in Wistar Rats via Activating NRF-2 and Inhibiting NF-κB Pathways
Authors: Sandy Elsayed, Aya Mohamed, Noha Nassar
Abstract:
Introduction: Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by social deficits and repetitive behavior. Multiple studies suggest that oxidative stress and neuroinflammation are key factors in the etiology of ASD and are often associated with worsening of ASD-related behaviors. Nuclear factor erythroid 2-related factor 2 (NRF-2) is a transcription factor that promotes expression of antioxidant response element genes under oxidative stress. In ASD subjects, decreased expression of NRF-2 in the frontal cortex shifted the redox homeostasis towards oxidative stress and resulted in inflammation, evidenced by elevation of nuclear factor kappa B (NF-κB) transcriptional activity. Dimethyl fumarate (DMF) is an NRF-2 activator that is used in the treatment of psoriasis and multiple sclerosis. It participates in the transcriptional control of inflammatory factors via inhibition of NF-κB and its downstream targets. This study aimed to investigate the role of DMF in alleviating the cognitive impairments and behavior deficits associated with ASD through mitigation of oxidative stress and inflammation in the prenatal valproic acid (VPA) rat model of autism. Methods: Pregnant female Wistar rats received a single intraperitoneal injection of VPA (600 mg/kg) to induce autistic-like behavioral and neurobiological alterations in their offspring. Chronic oral gavage of DMF (150 mg/kg/day) started from postnatal day (PND) 24 till PND 62 (39 days). Prenatal VPA exposure elicited autistic behaviors including decreased social interaction and stereotyped behavior. Social interaction was evaluated using the three-chamber sociability test and calculation of the sociability index (SI), while stereotyped repetitive behavior and anxiety associated with ASD were assessed using the marble burying test (MBT). Biochemical analyses were done on prefrontal cortex homogenates, including NRF-2 and NF-κB expression. Moreover, inducible nitric oxide synthase (iNOS) gene expression and tumor necrosis factor alpha (TNF-α) protein expression were evaluated as markers of inflammation. Results: Prenatal VPA elicited decreased social interaction shown by a decreased SI compared to the control group (p < 0.001), and DMF enhanced the SI (p < 0.05). In the MBT, prenatal injection of VPA manifested stereotyped behavior and an enhanced number of buried marbles compared to control (p < 0.05), and DMF reduced the anxiety-related behavior in rats exhibiting ASD-like behaviors (p < 0.05). In the prefrontal cortex, NRF-2 expression was downregulated in the prenatal VPA model (p < 0.0001) and DMF reversed this effect (p < 0.0001). The inflammatory transcription factor NF-κB was elevated in the prenatal VPA model (p < 0.0001) and reduced (p < 0.0001) upon NRF-2 activation by DMF. The prenatal VPA group expressed higher levels of the proinflammatory cytokine TNF-α compared to the control group (p < 0.0001) and DMF reduced it (p < 0.0001). Finally, the gene expression of iNOS was downregulated upon NRF-2 activation by DMF (p < 0.01). Conclusion: This study proposes that DMF is a potential agent that can be used to ameliorate autistic-like changes through NRF-2 activation along with NF-κB downregulation and, therefore, it is a promising novel therapy for ASD. Keywords: autism spectrum disorders, dimethyl fumarate, neuroinflammation, NRF-2
Procedia PDF Downloads 41
36 Reducing Flood Risk in a Megacity: Using Mobile Application and Value Capture for Flood Risk Prevention and Risk Reduction Financing
Authors: Dedjo Yao Simon, Takahiro Saito, Norikazu Inuzuka, Ikuo Sugiyama
Abstract:
The megacity of Abidjan is a coastal urban area where the number of floods reported and the associated impacts are on a rapid increase due to climate change, an uncontrolled urbanization, a rapid population increase, a lack of flood disaster mitigation and citizens’ awareness. The objective of this research is to reduce in the short and long term period, the human and socio-economic impact of the flood. Hydrological simulation is applied on free of charge global spatial data (digital elevation model, satellite-based rainfall estimate, landuse) to identify the flood-prone area and to map the risk of flood. A direct interview to a sample residents is used to validate the simulation results. Then a mobile application (Flood Locator) is prototyped to disseminate the risk information to the citizen. In addition, a value capture strategy is proposed to mobilize financial resource for disaster risk reduction (DRRf) to reduce the impact of the flood. The town of Cocody in Abidjan is selected as a case study area to implement this research. The mapping of the flood risk reveals that population living in the study area is highly vulnerable. For a 5-year flood, more than 60% of the floodplain is affected by a water depth of at least 0.5 meters; and more than 1000 ha with at least 5000 buildings are directly exposed. The risk becomes higher for a 50 and 100-year floods. Also, the interview reveals that the majority of the citizen are not aware of the risk and severity of flooding in their community. This shortage of information is overcome by the Flood Locator and by an urban flood database we prototype for accumulate flood data. Flood Locator App allows the users to view floodplain and depth on a digital map; the user can activate the GPS sensor of the mobile to visualize his location on the map. Some more important additional features allow the citizen user to capture flood events and damage information that they can send remotely to the database. Also, the disclosure of the risk information could result to a decrement (-14%) of the value of properties locate inside floodplain and an increment (+19%) of the value of property in the suburb area. The tax increment due to the higher tax increment in the safer area should be captured to constitute the DRRf. The fund should be allocated to the reduction of flood risk for the benefit of people living in flood-prone areas. The flood prevention system discusses in this research will minimize in the short and long term the direct damages in the risky area due to effective awareness of citizen and the availability of DRRf. It will also contribute to the growth of the urban area in the safer zone and reduce human settlement in the risky area in the long term. Data accumulated in the urban flood database through the warning app will contribute to regenerate Abidjan towards the more resilient city by means of risk avoidable landuse in the master plan.Keywords: abidjan, database, flood, geospatial techniques, risk communication, smartphone, value capture
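The value-capture arithmetic described in this abstract can be illustrated with a toy calculation; the property values and tax rate below are assumptions for illustration only, and only the disclosed -14% / +19% value shifts come from the study.

```python
# Toy illustration of the value-capture idea (assumed property values and tax
# rate; only the -14% / +19% shifts are taken from the abstract).
floodplain_value = 200_000_000   # assumed total property value in the floodplain (USD)
suburb_value = 500_000_000       # assumed total property value in the safer suburbs (USD)
tax_rate = 0.01                  # assumed annual property-tax rate

floodplain_after = floodplain_value * (1 - 0.14)     # value drop after risk disclosure
suburb_after = suburb_value * (1 + 0.19)             # value gain in the safer area

tax_increment = (suburb_after - suburb_value) * tax_rate   # captured for the DRR fund
print(f"Floodplain value after disclosure: {floodplain_after:,.0f} USD")
print(f"Annual tax increment for the DRR fund: {tax_increment:,.0f} USD")
```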
Procedia PDF Downloads 290
35 Beneath the Leisurely Surface: An Analysis of the Piano Lesson Frenzy among Chinese Middle-Class Parents
Authors: Yijie Wang, Tianyue Wang
Abstract:
In the past two decades, there has been a great ‘piano lesson frenzy’ among Chinese middle-class families, with a large number of parents adding piano training to children’s extra-curriculum lists. Superficially, the frenzy reflects a rather ‘leisurely’ attitude: parents typically claim that pianos lessons are ‘just for fun’ and will hopefully render children’s life more exciting. However, a closer scrutiny reveals that there is great social-status anxiety hidden beneath this ‘leisurely’ surface. Based on pre-interviews of six Chinese middle-class parents who have enthusiastically signed their children up for piano lessons, several tentative analysis are made: 1. Owing to a series of historical and social factors, the Chinese middle-class have yet to establish their cultural norms in the past few decades, resulting in great confusion concerning how to cultivate cultural tastes in their offspring. And partly due to the fact that the middle-class status of the past Chinese generation is mostly self-acquired rather than inherited, parents are much less confident about their cultural resources—which require long-time accumulation—than material ones. Both factors combine to lead to a sort of blind, overcompensating enthusiasm in culture-related education, and the piano frenzy is but a demonstration. 2. The piano has been chosen to be the object of the frenzy partly because of its inherent characteristics as well as socially-constructed ones. Costly, large in size, imported from another culture and so forth, the piano has acquired the meaning of being exclusive, high-end and exotic, which renders it a token of top-tier status among Chinese people, and piano lessons for offspring have therefore become parents’ paths towards a kind of ‘symbolic elevation’. A child playing piano is an exhibition as well as psychological assurance of the families’ middle-class status. 3. A closer look at children’s piano training process reveals that there is much more anxiety than leisurely elements involved. Despite parents’ claim that ‘piano is mainly for kids to have fun,’ the whole process is evidently of a rather ‘ascetic’ nature, with the demands of diligence and senses of time urgency throughout, and techniques rather than flair or styles are emphasized. This either means that the apparent ‘piano-for-fun’ stance is unauthentic and is only other motives in disguise, or that the Chinese middle-class parents are not yet capable of shaking off the sense of anxiety even if they sincerely intend to. 4. When viewed in relation to Chinese formal school system as well as the job market at large, it can be said that by signing children up for piano lessons, parents are consciously or unconsciously seeking to prepare for, or reduce the risks of, their children’s future social mobility. In face of possible failures in the highly-crucial, highly-competitive formal school system, piano-playing as an extra-curriculum activity may be conveniently transferred into an alternative career path. Besides, in contemporary China, as the occupational structure goes through change, and the school-related certificates decline in value, aspects such as a person’s overall deportment, which can be gained or proved by piano-learning, have been gaining in significance.Keywords: extra-curriculum activities, middle class, piano lesson frenzy, status anxiety
Procedia PDF Downloads 245
34 The Role of Uterine Artery Embolization in the Management of Postpartum Hemorrhage
Authors: Chee Wai Ku, Pui See Chin
Abstract:
As an emerging alternative to hysterectomy, uterine artery embolization (UAE) has been widely used in the management of fibroids and in controlling postpartum hemorrhage (PPH) unresponsive to other therapies. Research has shown UAE to be a safe, minimally invasive procedure with few complications and minimal effects on future fertility. We present two cases highlighting the use of UAE in preventing PPH in a patient with a large fibroid at the time of cesarean section and in the treatment of secondary PPH refractory to other therapies in another patient. We present a 36-year primiparous woman who booked at 18+6 weeks gestation with a 13.7 cm subserosal fibroid at the lower anterior wall of the uterus near the cervix and a 10.8 cm subserosal fibroid in the left wall. Prophylactic internal iliac artery occlusion balloons were placed prior to the planned classical midline cesarean section. The balloons were inflated once the baby was delivered. Bilateral uterine arteries were embolized subsequently. The estimated blood loss (EBL) was 400 mls and hemoglobin (Hb) remained stable at 10 g/DL. Ultrasound scan 2 years postnatally showed stable uterine fibroids 10.4 and 7.1 cm, which was significantly smaller than before. We present the second case of a 40-year-old G2P1 with a previous cesarean section for failure to progress. There were no antenatal problems, and the placenta was not previa. She presented with term labour and underwent an emergency cesarean section for failed vaginal birth after cesarean. Intraoperatively extensive adhesions were noted with bladder drawn high, and EBL was 300 mls. Postpartum recovery was uneventful. She presented with secondary PPH 3 weeks later complicated by hypovolemic shock. She underwent an emergency examination under anesthesia and evacuation of the uterus, with EBL 2500mls. Histology showed decidua with chronic inflammation. She was discharged well with no further PPH. She subsequently returned one week later for secondary PPH. Bedside ultrasound showed that the endometrium was thin with no evidence of retained products of conception. Uterotonics were administered, and examination under anesthesia was performed, with uterine Bakri balloon and vaginal pack insertion after. EBL was 1000 mls. There was no definite cause of PPH with no uterine atony or products of conception. To evaluate a potential cause, pelvic angiogram and super selective left uterine arteriogram was performed which showed profuse contrast extravasation and acute bleeding from the left uterine artery. Superselective embolization of the left uterine artery was performed. No gross contrast extravasation from the right uterine artery was seen. These two cases demonstrated the superior efficacy of UAE. Firstly, the prophylactic use of intra-arterial balloon catheters in pregnant patients with large fibroids, and secondly, in the diagnosis and management of secondary PPH refractory to uterotonics and uterine tamponade. In both cases, the need for laparotomy hysterectomy was avoided, resulting in the preservation of future fertility. UAE should be a consideration for hemodynamically stable patients in centres with access to interventional radiology.Keywords: fertility preservation, secondary postpartum hemorrhage, uterine embolization, uterine fibroids
Procedia PDF Downloads 187
33 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models
Authors: V. Mantey, N. Findlay, I. Maddox
Abstract:
The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to manually examine. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they are overfitting the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires fewer input data. The model developed for this study has also been fine-tuned using existing, open-source, building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.Keywords: building detection, disaster relief, mask-RCNN, satellite mapping
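A hypothetical sketch of the fine-tuning strategy described here — adapting a generalized, pre-trained Mask R-CNN to a small, locally sourced building dataset and deliberately letting it overfit — is shown below using torchvision. BuildingChipDataset is a placeholder the reader would implement from open building-footprint vectors; nothing here is the authors' production pipeline.

```python
# Hedged sketch: fine-tune a pre-trained Mask R-CNN on a small local dataset
# until it overfits that area. "BuildingChipDataset" is a placeholder class
# expected to yield (image_tensor, target_dict) pairs in torchvision format.
import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Start from a generalized pre-trained model and re-head it for 2 classes
# (background + building).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, 2)
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, 2)
model.to(device)

dataset = BuildingChipDataset("chips/", "open_footprints.geojson")   # placeholder
loader = DataLoader(dataset, batch_size=2, shuffle=True,
                    collate_fn=lambda batch: tuple(zip(*batch)))
optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                            lr=5e-3, momentum=0.9)

model.train()
for epoch in range(100):            # intentionally many epochs on few image chips
    for images, targets in loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        losses = model(images, targets)          # dict of detection/mask losses
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
# No early stopping or held-out validation here: overfitting to the local
# building style is the point for this narrowly scoped, time-critical use case.
```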
Procedia PDF Downloads 169
32 Assessing P0.1 and Occlusion Pressures in Brain-Injured Patients on Pressure Support Ventilation: A Study Protocol
Authors: S. B. R. Slagmulder
Abstract:
Monitoring inspiratory effort and dynamic lung stress in patients on pressure support ventilation in the ICU is important for protecting against self inflicted lung injury (P-SILI) and diaphragm dysfunction. Strategies to address the detrimental effects of respiratory drive and effort can lead to improved patient outcomes. Two non-invasive estimation methods, occlusion pressure (Pocc) and P0.1, have been proposed for achieving lung and diaphragm protective ventilation. However, their relationship and interpretation in neuro ICU patients is not well understood. P0.1 is the airway pressure measured during a 100-millisecond occlusion of the inspiratory port. It reflects the neural drive from the respiratory centers to the diaphragm and respiratory muscles, indicating the patient's respiratory drive during the initiation of each breath. Occlusion pressure, measured during a brief inspiratory pause against a closed airway, provides information about the inspiratory muscles' strength and the system's total resistance and compliance. Research Objective: Understanding the relationship between Pocc and P0.1 in brain-injured patients can provide insights into the interpretation of these values in pressure support ventilation. This knowledge can contribute to determining extubation readiness and optimizing ventilation strategies to improve patient outcomes. The central goal is to asses a study protocol for determining the relationship between Pocc and P0.1 in brain-injured patients on pressure support ventilation and their ability to predict successful extubation. Additionally, comparing these values between brain-damaged and non-brain-damaged patients may provide valuable insights. Key Areas of Inquiry: 1. How do Pocc and P0.1 values correlate within brain injury patients undergoing pressure support ventilation? 2. To what extent can Pocc and P0.1 values serve as predictive indicators for successful extubation in patients with brain injuries? 3. What differentiates the Pocc and P0.1 values between patients with brain injuries and those without? Methodology: P0.1 and occlusion pressures are standard measurements for pressure support ventilation patients, taken by attending doctors as per protocol. We utilize electronic patient records for existing data. Unpaired T-test will be conducted to compare P0.1 and Pocc values between both study groups. Associations between P0.1 and Pocc and other study variables, such as extubation, will be explored with simple regression and correlation analysis. Depending on how the data evolve, subgroup analysis will be performed for patients with and without extubation failure. Results: While it is anticipated that neuro patients may exhibit high respiratory drive, the linkage between such elevation, quantified by P0.1, and successful extubation remains unknown The analysis will focus on determining the ability of these values to predict successful extubation and their potential impact on ventilation strategies. Conclusion: Further research is pending to fully understand the potential of these indices and their impact on mechanical ventilation in different patient populations and clinical scenarios. Understanding these relationships can aid in determining extubation readiness and tailoring ventilation strategies to improve patient outcomes in this specific patient population. 
Additionally, it is vital to account for the influence of sedatives, neurological scores, and BMI on respiratory drive and occlusion pressures to ensure a comprehensive analysis. Keywords: brain damage, diaphragm dysfunction, occlusion pressure, p0.1, respiratory drive
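A minimal sketch of the planned statistics (unpaired t-tests between the two groups, plus correlation and simple regression between P0.1 and Pocc within the brain-injury group) could look like the following; all values are synthetic placeholders, not patient data.

```python
# Illustrative only: unpaired t-tests, Pearson correlation and simple linear
# regression for P0.1 and Pocc. All values below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
p01_brain = rng.normal(2.5, 1.0, 30)     # cmH2O, brain-injured group (synthetic)
p01_ctrl = rng.normal(1.8, 0.8, 30)      # cmH2O, non-brain-injured group (synthetic)
pocc_brain = rng.normal(-12.0, 4.0, 30)  # cmH2O (synthetic)
pocc_ctrl = rng.normal(-9.0, 3.5, 30)

# Unpaired (independent-samples) t-tests between the two study groups
t_p01, p_p01 = stats.ttest_ind(p01_brain, p01_ctrl)
t_pocc, p_pocc = stats.ttest_ind(pocc_brain, pocc_ctrl)
print(f"P0.1: t={t_p01:.2f}, p={p_p01:.3f}   Pocc: t={t_pocc:.2f}, p={p_pocc:.3f}")

# Association between P0.1 and Pocc within the brain-injury group
r, p_r = stats.pearsonr(p01_brain, pocc_brain)
slope, intercept, r_val, p_val, stderr = stats.linregress(p01_brain, pocc_brain)
print(f"Pearson r={r:.2f} (p={p_r:.3f}); Pocc ~ {slope:.2f}*P0.1 + {intercept:.2f}")
```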
Procedia PDF Downloads 68
31 Multi-Criteria Geographic Information System Analysis of the Costs and Environmental Impacts of Improved Overland Tourist Access to Kaieteur National Park, Guyana
Authors: Mark R. Leipnik, Dahlia Durga, Linda Johnson-Bhola
Abstract:
Kaieteur is the most iconic National Park in the rainforest-clad nation of Guyana in South America. However, the magnificent 226-meter-high waterfall at its center is virtually inaccessible by surface transportation, and the occasional charter flights to the small airstrip in the park are too expensive for many tourists and residents. Thus, the largest waterfall in all of Amazonia, where the Potaro River plunges over a single free drop twice as high as Victoria Falls, remains preserved in splendid isolation inside a 57,000-hectare National Park established by the British in 1929, in the deepest recesses of a remote jungle canyon. Kaieteur Falls are largely unseen firsthand, but images of the falls are depicted on the Guyanese twenty dollar note, in every Guyanese tourist promotion, and on many items in the national capital of Georgetown. Georgetown is only 223-241 kilometers away from the falls. The lack of a single mileage figure demonstrates there is no single overland route. Any journey, except by air, involves changes of vehicles, a ferry ride, and a boat ride up a jungle river. It also entails hiking for many hours to view the falls. Surface access from Georgetown (or any city) is thus a 3-5 day-long adventure; even in the dry season, during the two wet seasons, travel is a particularly sticky proposition. This journey was made overland by the paper's co-author Dahlia Durga. This paper focuses on potential ways to improve overland tourist access to Kaieteur National Park from Georgetown. This is primarily a GIS-based analysis, using multiple criteria to determine the least cost means of creating all-weather road access to the area near the base of the falls while minimizing distance and elevation changes. Critically, it also involves minimizing the number of new bridges required to be built while utilizing the one existing ferry crossings of a major river. Cost estimates are based on data from road and bridge construction engineers operating currently in the interior of Guyana. The paper contains original maps generated with ArcGIS of the potential routes for such an overland connection, including the one deemed optimal. Other factors, such as the impact on endangered species habitats and Indigenous populations, are considered. This proposed infrastructure development is taking place at a time when Guyana is undergoing the largest boom in its history due to revenues from offshore oil and gas development. Thus, better access to the most important tourist attraction in the country is likely to happen eventually in some manner. But the questions of the most environmentally sustainable and least costly alternatives for such access remain. This paper addresses those questions and others related to access to this magnificent natural treasure and the tradeoffs such access will have on the preservation of the currently pristine natural environment of Kaieteur Falls.Keywords: nature tourism, GIS, Amazonia, national parks
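As a rough illustration of the multi-criteria least-cost routing described here, the sketch below builds a cost surface from slope, river crossings, and habitat sensitivity, then traces a minimum-cost route with scikit-image. The layers, weights, and endpoints are assumptions, not the authors' ArcGIS model.

```python
# Hypothetical multi-criteria least-cost routing sketch (assumed weights and
# synthetic placeholder rasters; not the authors' ArcGIS analysis).
import numpy as np
from skimage.graph import route_through_array

rng = np.random.default_rng(6)
dem = rng.random((500, 500)) * 400                 # placeholder elevation (m), 30 m cells
rivers = np.zeros((500, 500), dtype=bool)
rivers[:, 250] = True                              # placeholder river to be bridged
habitat = rng.random((500, 500))                   # placeholder sensitivity, 0-1

gy, gx = np.gradient(dem, 30.0)                    # per-cell slope components
slope = np.hypot(gx, gy)

# Weighted cost surface: base travel cost + steepness + bridging + habitat impact
cost = 1.0 + 5.0 * slope + 50.0 * rivers + 10.0 * habitat

start = (10, 10)                                   # placeholder road-head cell
end = (480, 470)                                   # placeholder cell near the falls
path, total_cost = route_through_array(cost, start, end,
                                        fully_connected=True, geometric=True)
print(f"{len(path)} cells on the least-cost route, accumulated cost {total_cost:.1f}")
```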
Procedia PDF Downloads 166
30 Assessment of Antioxidant and Cholinergic Systems, and Liver Histopathologies in Lithobates catesbeianus Exposed to the Waters of an Urban Stream
Authors: Diego R. Boiarski, Camila M. Toigo, Thais M. Sobjak, Andrey F. P. Santos, Silvia Romao, Ana T. B. Guimaraes
Abstract:
Anthropogenic activities promote changes in the community’s structures and decrease the species abundance of amphibians. Biological communities of fluvial systems are assemblies of organisms that have adapted to regional conditions, including the physical environment and food resources, and are further refined through interactions with other species. The aim of this study was to assess neurotoxic alterations and in the antioxidant system on tadpoles of Lithobates catesbeianus exposed to waters from Cascavel River, in the south of Brazil. A total of 420 L of water was collected from the Cascavel River, 140 L from each of the three different locations: Site 1 – headwater; Site 2 – stretch of the stream that runs through an urbanized area; Site 3 – a stretch from the rural area. Twelve tadpoles were acclimated in each aquarium (100 L of water) for seven days. The water from each aquarium was replaced with the ones sampled from the river, except the one from the control aquarium. After seven days, a portion of the liver was removed and conditioned for ChE, SOD, CAT and LPO analysis; other part of the tissue was conditioned for histological analysis. The statistical analysis performed was one-way ANOVA, followed by post-hoc Tukey-HSD test, and the multivariate principal components analysis. It was not observed any neurotoxic effect, but a slight increase in SOD activity and elevation of CAT activity in both urban and rural environment. A decrease in LPO reaction was detected, mainly among the tadpoles exposed to the waters from the rural area. The results of the present study demonstrate the alteration of the antioxidant system, as well as liver histopathologies in tadpoles exposed mainly to waters collected in urban and rural environments. These alterations may cause the reduction in the velocity of the metamorphosis process from the tadpoles. Further, were observed histological alterations, highlighting necrotic areas mainly among the animals exposed to urban waters. Those damages can lead to metabolic dysfunction, interfering with survival capacity, diminishing not only individual fitness but for the whole population. In the interpretation synthesis of all biomarkers, the cellular damage gradient is perceptible, characterized by the variables related to the antioxidant system, due to the flow direction of the stream. This result is indicative that along the course of the creek occurs dumping of organic material, which promoted an acute response upon tadpoles of L. catesbeianus. and it was also observed the difference in tissue damage between the experimental groups and the control group, the latter presenting histological alterations, but to a lesser degree than the animals exposed to the waters of the Cascavel river. These damages, caused by reactive oxygen species possibly resulting from the contamination by organic compounds, can lead the animals to a series of metabolic dysfunctions, interfering with its metamorphosis capacity. Interruption of metamorphosis may affect survival, which may impair its growth, development and reproduction, diminishing not only the fitness of each individual but in a long-term, to the entire population.Keywords: American bullfrog, histopathology, oxidative stress, urban creeks pollution
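A compact sketch of the statistical workflow reported here (one-way ANOVA, Tukey HSD post-hoc comparisons, and PCA across biomarkers) is given below, with synthetic placeholder values standing in for the measured ChE, SOD, CAT, and LPO data.

```python
# Illustrative statistics only; the biomarker values are synthetic placeholders.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
groups = ["control", "site1_headwater", "site2_urban", "site3_rural"]
n = 12                                              # tadpoles per aquarium

# Synthetic CAT activity per group (replace with the measured biomarker)
cat = {g: rng.normal(loc=10 + 2 * i, scale=1.5, size=n) for i, g in enumerate(groups)}

F, p = f_oneway(*cat.values())                      # one-way ANOVA across groups
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

values = np.concatenate(list(cat.values()))
labels = np.repeat(groups, n)
print(pairwise_tukeyhsd(values, labels))            # Tukey HSD post-hoc test

# PCA over all biomarkers (rows = animals; columns = e.g. ChE, SOD, CAT, LPO)
biomarkers = rng.normal(size=(4 * n, 4))            # placeholder matrix
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(biomarkers))
```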
Procedia PDF Downloads 187
29 Phospholipid Cationic and Zwitterionic Compounds as Potential Non-Toxic Antifouling Agents: A Study of Biofilm Formation Assessed by Micro-titer Assays with Marine Bacteria and Eco-toxicological Effect on Marine Microalgae
Authors: D. Malouch, M. Berchel, C. Dreanno, S. Stachowski-Haberkorn, P-A. Jaffres
Abstract:
Biofouling is a complex natural phenomenon that involves biological, physical and chemical properties related to the environment, the submerged surface and the living organisms involved. Bio-colonization of artificial structures can cause various economic and environmental impacts. The increase in costs associated with the over-consumption of fuel from biocolonized vessels has been widely studied. Measurement drifts from submerged sensors, as well as obstructions in heat exchangers, and deterioration of offshore structures are major difficulties that industries are dealing with. Therefore, surfaces that inhibit biocolonization are required in different areas (water treatment, marine paints, etc.) and many efforts have been devoted to produce efficient and eco-compatible antifouling agents. The different steps of surface fouling are widely described in literature. Studying the biofilm and its stages provides a better understanding of how to elaborate more efficient antifouling strategies. Several approaches are currently applied, such as the use of biocide anti-fouling paint (mainly with copper derivatives) and super-hydrophobic coatings. While these two processes are proving to be the most effective, they are not entirely satisfactory, especially in a context of a changing legislation. Nowadays, the challenge is to prevent biofouling with non-biocide compounds, offering a cost effective solution, but with no toxic effects on marine organisms. Since the micro-fouling phase plays an important role in the regulation of the following steps of biofilm formation, it is desired to reduce or delate biofouling of a given surface by inhibiting the micro-fouling at its early stages. In our recent works, we reported that some amphiphilic compounds exhibited bacteriostatic or bactericidal properties at a concentration that did not affect mammalian eukaryotic cells. These remarkable properties invited us to assess this type of bio-inspired phospholipids to prevent the colonization of surfaces by marine bacteria. Of note, other studies reported that amphiphilic compounds interacted with bacteria leading to a reduction of their development. An amphiphilic compound is a molecule consisting of a hydrophobic domain and a polar head (ionic or non-ionic). These compounds appear to have interesting antifouling properties: some ionic compounds have shown antimicrobial activity, and zwitterions can reduce nonspecific adsorption of proteins. Herein, we investigate the potential of amphiphilic compounds as inhibitors of bacterial growth and marine biofilm formation. The aim of this study is to compare the efficacy of four synthetic phospholipids that features a cationic charge or a zwitterionic polar-head group to prevent microfouling with marine bacteria. Toxicity of these compounds was also studied in order to identify the most promising compounds that inhibit biofilm development and show low cytotoxicity on two links representative of coastal marine food webs: phytoplankton and oyster larvae.Keywords: amphiphilic phospholipids, biofilm, marine fouling, non-toxique assays
Procedia PDF Downloads 134
28 Michel Foucault’s Docile Bodies and The Matrix Trilogy: A Close Reading Applied to the Human Pods and Growing Fields in the Films
Authors: Julian Iliev
Abstract:
The recent release of The Matrix Resurrections persuaded many film scholars that The Matrix trilogy had lost its appeal and its concepts were largely outdated. This study examines the human pods and growing fields in the trilogy. Their functionality is compared to Michel Foucault’s concept of docile bodies: linking fictional and contemporary worlds. This paradigm is scrutinized through surveillance literature. The analogy brings to light common elements of hidden surveillance practices in technologies. The comparison illustrates the effects of body manipulation portrayed in the movies and their relevance with contemporary surveillance practices. Many scholars have utilized a close reading methodology in film studies (J.Bizzocchi, J.Tanenbaum, P.Larsen, S. Herbrechter, and Deacon et al.). The use of a particular lens through which media text is examined is an indispensable factor that needs to be incorporated into the methodology. The study spotlights both scenes from the trilogy depicting the human pods and growing fields. The functionality of the pods and the fields compare directly with Foucault’s concept of docile bodies. By utilizing Foucault’s study as a lens, the research will unearth hidden components and insights into the films. Foucault recognizes three disciplines that produce docile bodies: 1) manipulation and the interchangeability of individual bodies, 2) elimination of unnecessary movements and management of time, and 3) command system guaranteeing constant supervision and continuity protection. These disciplines can be found in the pods and growing fields. Each body occupies a single pod aiding easier manipulation and fast interchangeability. The movement of the bodies in the pods is reduced to the absolute minimum. Thus, the body is transformed into the ultimate object of control – minimum movement correlates to maximum energy generation. Supervision is exercised by wiring the body with numerous types of cables. This ultimate supervision of body activity reduces the body’s purpose to mere functioning. If a body does not function as an energy source, then it’s unplugged, ejected, and liquefied. The command system secures the constant supervision and continuity of the process. To Foucault, the disciplines are distinctly different from slavery because they stop short of a total takeover of the bodies. This is a clear difference from the slave system implemented in the films. Even though their system might lack sophistication, it makes up for it in the elevation of functionality. Further, surveillance literature illustrates the connection between the generation of body energy in The Matrix trilogy to the generation of individual data in contemporary society. This study found that the three disciplines producing docile bodies were present in the portrayal of the pods and fields in The Matrix trilogy. The above comparison combined with surveillance literature yields insights into analogous processes and contemporary surveillance practices. Thus, the constant generation of energy in The Matrix trilogy can be equated to the consistent data generation in contemporary society. This essay shows the relevance of the body manipulation concept in the Matrix films with contemporary surveillance practices.Keywords: docile bodies, film trilogies, matrix movies, michel foucault, privacy loss, surveillance
Procedia PDF Downloads 93
27 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples
Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges
Abstract:
Soils are at the crossing of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, when existing, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, there are not easy to understand and often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but that they depend on the main soil-forming factors that are climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary co-variates” that come from other available spatial products. Then the model is generalized on grids where soil parameters are unknown in order to predict them, and the prediction performances are validated using various methods. With the growing demand for soil information at a national and global scale and the increase of available spatial co-variates national and continental DSM initiatives are continuously increasing. This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and the databases that are used, the validation techniques and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing and new models and approaches are developed at an incredible speed. Most of the digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use or pedotransfer functions (PTF) in which calibration data come from soil analyses performed in labs or for existing conventional maps. However, some scientific issues remain to be solved and also political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues related to communication to end-users and education, especially on the use of uncertainty. Overall, the progress is very important and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues are still remaining, mainly due to differences in classifications or in laboratory standards between countries. However numerous initiatives are ongoing at the EU level and also at the global level. 
All this progress is scientifically stimulating and also promising to provide tools to improve and monitor soil quality in individual countries, in the EU and at the global level. Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review
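A generic, hypothetical illustration of the machine-learning DSM workflow this review refers to — soil point observations plus spatially exhaustive covariates fitted with a random forest and checked by k-fold cross-validation — is sketched below; the covariates and target are synthetic placeholders, not any national dataset.

```python
# Hedged DSM sketch with synthetic data: covariates -> random forest -> k-fold CV.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(5)
n = 300                                            # synthetic soil observations
soil = pd.DataFrame({
    "elevation": rng.uniform(0, 800, n),           # from a DEM
    "slope": rng.uniform(0, 25, n),
    "ndvi": rng.uniform(0.1, 0.9, n),              # from remote sensing
    "mean_annual_precip": rng.uniform(500, 1200, n),
})
# Synthetic target standing in for a measured property (e.g. organic carbon, g/kg)
soil["soc"] = 20 + 0.01 * soil["elevation"] + 15 * soil["ndvi"] + rng.normal(0, 2, n)

covariates = ["elevation", "slope", "ndvi", "mean_annual_precip"]
rf = RandomForestRegressor(n_estimators=500, random_state=0, n_jobs=-1)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
r2 = cross_val_score(rf, soil[covariates], soil["soc"], cv=cv, scoring="r2")
print(f"10-fold cross-validation R2: {r2.mean():.2f} +/- {r2.std():.2f}")

# The fitted model is then applied to the full covariate grids to map the soil
# property, ideally with an uncertainty layer (e.g. quantile regression forests).
rf.fit(soil[covariates], soil["soc"])
```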
Procedia PDF Downloads 184
26 Operation System for Aluminium-Air Cell: A Strategy to Harvest the Energy from Secondary Aluminium
Authors: Binbin Chen, Dennis Y. C. Leung
Abstract:
Aluminium (Al) -air cell holds a high volumetric capacity density of 8.05 Ah cm-3, benefit from the trivalence of Al ions. Additional benefits of Al-air cell are low price and environmental friendliness. Furthermore, the Al energy conversion process is characterized of 100% recyclability in theory. Along with a large base of raw material reserve, Al attracts considerable attentions as a promising material to be integrated within the global energy system. However, despite the early successful applications in military services, several problems exist that prevent the Al-air cells from widely civilian use. The most serious issue is the parasitic corrosion of Al when contacts with electrolyte. To overcome this problem, super-pure Al alloyed with various traces of metal elements are used to increase the corrosion resistance. Nevertheless, high-purity Al alloys are costly and require high energy consumption during production process. An alternative approach is to add inexpensive inhibitors directly into the electrolyte. However, such additives would increase the internal ohmic resistance and hamper the cell performance. So far these methods have not provided satisfactory solutions for the problem within Al-air cells. For the operation of alkaline Al-air cell, there are still other minor problems. One of them is the formation of aluminium hydroxide in the electrolyte. This process decreases ionic conductivity of electrolyte. Another one is the carbonation process within the gas diffusion layer of cathode, blocking the porosity of gas diffusion. Both these would hinder the performance of cells. The present work optimizes the above problems by building an Al-air cell operation system, consisting of four components. A top electrolyte tank containing fresh electrolyte is located at a high level, so that it can drive the electrolyte flow by gravity force. A mechanical rechargeable Al-air cell is fabricated with low-cost materials including low grade Al, carbon paper, and PMMA plates. An electrolyte waste tank with elaborate channel is designed to separate the hydrogen generated from the corrosion, which would be collected by gas collection device. In the first section of the research work, we investigated the performance of the mechanical rechargeable Al-air cell with a constant flow rate of electrolyte, to ensure the repeatability experiments. Then the whole system was assembled together and the feasibility of operating was demonstrated. During experiment, pure hydrogen is collected by collection device, which holds potential for various applications. By collecting this by-product, high utilization efficiency of aluminum is achieved. Considering both electricity and hydrogen generated, an overall utilization efficiency of around 90 % or even higher under different working voltages are achieved. Fluidic electrolyte could remove aluminum hydroxide precipitate and solve the electrolyte deterioration problem. This operation system provides a low-cost strategy for harvesting energy from the abundant secondary Al. The system could also be applied into other metal-air cells and is suitable for emergency power supply, power plant and other applications. The low cost feature implies great potential for commercialization. Further optimization, such as scaling up and optimization of fabrication, will help to refine the technology into practical market offerings.Keywords: aluminium-air cell, high efficiency, hydrogen, mechanical recharge
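One way to read the "overall utilization efficiency" figure of around 90% is sketched below: charge delivered as electricity plus the charge equivalent of the collected corrosion hydrogen, relative to the theoretical charge content of the aluminium consumed. The accounting and all input numbers are assumptions for illustration, not the authors' measurements.

```python
# Back-of-the-envelope sketch with assumed numbers (not the authors' data):
# overall Al utilization = (electric charge + charge equivalent of collected H2)
# / theoretical charge of the aluminium consumed (~2.98 Ah/g).
F = 96485.0                              # C/mol, Faraday constant
M_AL = 26.98                             # g/mol, molar mass of aluminium
Q_TH_PER_G = 3 * F / M_AL / 3600.0       # ~2.98 Ah per gram of Al (3 e- per atom)

m_al = 5.0                               # g of Al consumed in the test (assumed)
q_elec = 9.0                             # Ah delivered to the load (assumed)
v_h2 = 2.0                               # L of H2 collected at ~25 degC, 1 atm (assumed)

n_h2 = v_h2 / 24.45                      # mol of H2 (ideal gas, 24.45 L/mol at 25 degC)
q_h2 = n_h2 * 2 * F / 3600.0             # Ah equivalent of the corrosion hydrogen

efficiency = (q_elec + q_h2) / (m_al * Q_TH_PER_G)
print(f"Overall Al utilization efficiency: {efficiency:.0%}")
```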
Procedia PDF Downloads 283
25 Regional Hydrological Extremes Frequency Analysis Based on Statistical and Hydrological Models
Authors: Hadush Kidane Meresa
Abstract:
The hydrological extremes frequency analysis is the foundation for the hydraulic engineering design, flood protection, drought management and water resources management and planning to utilize the available water resource to meet the desired objectives of different organizations and sectors in a country. This spatial variation of the statistical characteristics of the extreme flood and drought events are key practice for regional flood and drought analysis and mitigation management. For different hydro-climate of the regions, where the data set is short, scarcity, poor quality and insufficient, the regionalization methods are applied to transfer at-site data to a region. This study aims in regional high and low flow frequency analysis for Poland River Basins. Due to high frequent occurring of hydrological extremes in the region and rapid water resources development in this basin have caused serious concerns over the flood and drought magnitude and frequencies of the river in Poland. The magnitude and frequency result of high and low flows in the basin is needed for flood and drought planning, management and protection at present and future. Hydrological homogeneous high and low flow regions are formed by the cluster analysis of site characteristics, using the hierarchical and C- mean clustering and PCA method. Statistical tests for regional homogeneity are utilized, by Discordancy and Heterogeneity measure tests. In compliance with results of the tests, the region river basin has been divided into ten homogeneous regions. In this study, frequency analysis of high and low flows using AM for high flow and 7-day minimum low flow series is conducted using six statistical distributions. The use of L-moment and LL-moment method showed a homogeneous region over entire province with Generalized logistic (GLOG), Generalized extreme value (GEV), Pearson type III (P-III), Generalized Pareto (GPAR), Weibull (WEI) and Power (PR) distributions as the regional drought and flood frequency distributions. The 95% percentile and Flow duration curves of 1, 7, 10, 30 days have been plotted for 10 stations. However, the cluster analysis performed two regions in west and east of the province where L-moment and LL-moment method demonstrated the homogeneity of the regions and GLOG and Pearson Type III (PIII) distributions as regional frequency distributions for each region, respectively. The spatial variation and regional frequency distribution of flood and drought characteristics for 10 best catchment from the whole region was selected and beside the main variable (streamflow: high and low) we used variables which are more related to physiographic and drainage characteristics for identify and delineate homogeneous pools and to derive best regression models for ungauged sites. Those are mean annual rainfall, seasonal flow, average slope, NDVI, aspect, flow length, flow direction, maximum soil moisture, elevation, and drainage order. The regional high-flow or low-flow relationship among one streamflow characteristics with (AM or 7-day mean annual low flows) some basin characteristics is developed using Generalized Linear Mixed Model (GLMM) and Generalized Least Square (GLS) regression model, providing a simple and effective method for estimation of flood and drought of desired return periods for ungauged catchments.Keywords: flood , drought, frequency, magnitude, regionalization, stochastic, ungauged, Poland
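For readers unfamiliar with the L-moment machinery behind this kind of regional frequency analysis, the short sketch below computes Hosking's unbiased sample L-moments and L-moment ratios for one (synthetic) annual-maximum series; these site ratios are what feed the discordancy and heterogeneity tests and the choice of regional distribution.

```python
# Sketch of sample L-moments (Hosking's unbiased estimators); the annual
# maximum series is a synthetic placeholder for one gauge.
import numpy as np

def sample_lmoments(x):
    """Return (l1, l2, L-CV, L-skewness, L-kurtosis) for a 1-D sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((j - 1) * (j - 2) * (j - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l2 / l1, l3 / l2, l4 / l2

am_flows = np.random.default_rng(2).gumbel(loc=120, scale=35, size=40)  # m3/s, synthetic
l1, l2, lcv, lskew, lkurt = sample_lmoments(am_flows)
print(f"L-CV={lcv:.3f}  L-skewness={lskew:.3f}  L-kurtosis={lkurt:.3f}")
# Site ratios like these feed the Hosking-Wallis discordancy/heterogeneity tests
# and the selection of a regional distribution (e.g. GEV, GLO, PE3) per pool.
```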
Procedia PDF Downloads 602
24 Tales of Two Cities: 'Motor City' Detroit and 'King Cotton' Manchester: Transatlantic Transmissions and Transformations, Flows of Communications, Commercial and Cultural Connections
Authors: Dominic Sagar
Abstract:
Manchester ‘King Cotton’, the first truly industrial city of the nineteenth century, passing on the baton to Detroit ‘Motor City’, is the first truly modern city. We are exploring the tales of the two cities, their rise and fall and subsequent post-industrial decline, their transitions and transformations, whilst alongside paralleling their corresponding, commercial, cultural, industrial and even agricultural, artistic and musical transactions and connections. The paper will briefly contextualize how technologies of the industrial age and modern age have been instrumental in the development of these cities and other similar cities including New York. However, the main focus of the study will be the present and more importantly the future, how globalisation and the advancements of digital technologies and industries have shaped the cities developments from AlanTuring and the making of the first programmable computer to the effect of digitalisation and digital initiatives. Manchester now has a thriving creative digital infrastructure of Digilabs, FabLabs, MadLabs and hubs, the study will reference the Smart Project and the Manchester Digital Development Association whilst paralleling similar digital and creative industrial initiatives now starting to happen in Detroit. The paper will explore other topics including the need to allow for zones of experimentation, areas to play, think and create in order develop and instigate new initiatives and ideas of production, carrying on the tradition of influential inventions throughout the history of these key cities. Other topics will be briefly touched on, such as urban farming, citing the Biospheric foundation in Manchester and other similar projects in Detroit. However, the main thread will focus on the music industries and how they are contributing to the regeneration of cities. Musically and artistically, Manchester and Detroit have been closely connected by the flow and transmission of information and transfer of ideas via ‘cars and trains and boats and planes’ through to the new ‘super highway’. From Detroit to Manchester often via New York and Liverpool and back again, these musical and artistic connections and flows have greatly affected and influenced both cities and the advancement of technology are still connecting the cities. In summary two hugely important industrial cities, subsequently both experienced massive decline in fortunes, having had their large industrial hearts ripped out, ravaged leaving dying industrial carcasses and car crashes of despair, dereliction, desolation and post-industrial wastelands vacated by a massive exodus of the cities’ inhabitants. To examine the affinity, similarity and differences between Manchester & Detroit, from their industrial importance to their post-industrial decline and their current transmutations, transformations, transient transgressions, cities in transition; contrasting how they have dealt with these problems and how they can learn from each other. With a view to framing these topics with regard to how various communities have shaped these cities and the creative industries and design [the new cotton/car manufacturing industries] are reinventing post-industrial cities, to speculate on future development of these themes in relation to Globalisation, digitalisation and how cities can function to develop solutions to communal living in cities of the future.Keywords: cultural capital, digital developments, musical initiatives, zones of experimentation
Procedia PDF Downloads 194
23 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models
Authors: Lucille Alonso, Florent Renard
Abstract:
The characterizations of urban heat island (UHI) and their interactions with climate change and urban climates are the main research and public health issue, due to the increasing urbanization of the population. These solutions require a better knowledge of the UHI and micro-climate in urban areas, by combining measurements and modelling. This study is part of this topic by evaluating microclimatic conditions in dense urban areas in the Lyon Metropolitan Area (France) using a combination of data traditionally used such as topography, but also from LiDAR (Light Detection And Ranging) data, Landsat 8 satellite observation and Sentinel and ground measurements by bike. These bicycle-dependent weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon’s hyper-center. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. They are of various categories such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology or proximity and density to various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple and come from the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from an open data platform in Greater Lyon. Regarding the presence of low, medium, and high vegetation, the presence of buildings and ground, several buffers close to these factors were tested (5, 10, 20, 25, 50, 100, 200 and 500m). The buffers with the best linear correlations with air temperature for ground are 5m around the measurement points, for low and medium vegetation, and for building 50m and for high vegetation is 100m. The explanatory model of the dependent variable is obtained by multiple linear regression of the remaining explanatory variables (Pearson correlation matrix with a |r| < 0.7 and VIF with < 5) by integrating a stepwise sorting algorithm. Moreover, holdout cross-validation is performed, due to its ability to detect over-fitting of multiple regression, although multiple regression provides internal validation and randomization (80% training, 20% testing). Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20°C. The impact on the model of surface temperature in the estimation of air temperature is the most important variable. Other variables are recurrent such as distance to subway stations, distance to water areas, NDVI, digital elevation model, sky view factor, average vegetation density, or building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed on a microscale to be able to consider the local impact of trees, streets, and buildings. There is currently no network of fixed weather stations sufficiently deployed in central Lyon and most major urban areas. Therefore, it is necessary to use mobile measurements, followed by modelling to characterize the city's multiple thermal environments.Keywords: air temperature, LIDAR, multiple linear regression, surface temperature, urban heat island
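A hedged sketch of the modelling step described here — multiple linear regression of the bicycle-measured air temperatures on a set of predictors, validated on an 80/20 holdout with R2, RMSE, and MAE — is given below. The predictor names and values are synthetic placeholders, and the study additionally applies correlation and VIF screening with stepwise selection before fitting.

```python
# Hedged MLR sketch with synthetic placeholder predictors; not the authors' data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(7)
n = 400
X = pd.DataFrame({
    "surface_temperature": rng.uniform(20, 45, n),   # most influential predictor
    "ndvi": rng.uniform(0.1, 0.8, n),
    "sky_view_factor": rng.uniform(0.3, 1.0, n),
    "building_density": rng.uniform(0.0, 0.9, n),
    "elevation": rng.uniform(160, 310, n),
})
air_t = (0.45 * X["surface_temperature"] - 2.0 * X["ndvi"]
         + 1.5 * X["building_density"] + 8 + rng.normal(0, 0.3, n))

# 80% training / 20% testing holdout cross-validation
X_tr, X_te, y_tr, y_te = train_test_split(X, air_t, test_size=0.2, random_state=0)
mlr = LinearRegression().fit(X_tr, y_tr)
pred = mlr.predict(X_te)
print(f"R2={r2_score(y_te, pred):.2f}  "
      f"RMSE={np.sqrt(mean_squared_error(y_te, pred)):.2f} degC  "
      f"MAE={mean_absolute_error(y_te, pred):.2f} degC")
```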
Procedia PDF Downloads 13722 An Integrated Water Resources Management Approach to Evaluate Effects of Transportation Projects in Urbanized Territories
Authors: Berna Çalışkan
Abstract:
Integrated water management is a collaborative approach to planning that brings together the institutions that influence all elements of the water cycle: waterways, watershed characteristics, wetlands, ponds, lakes, floodplain areas, and stream channel structure. It encourages collaboration where it will be beneficial and links water planning with other planning processes that contribute to improving sustainable urban development and liveability. Hydraulic considerations can influence the selection of a highway corridor and the alternative routes within the corridor, as well as works such as widening a roadway, replacing a culvert, or repairing a bridge. Because of this, the type and amount of data needed for planning studies can vary widely depending on such elements as environmental considerations, the class of the proposed highway, the state of land use development, and individual site conditions. The extraction of drainage networks provides helpful preliminary drainage data from the digital elevation model (DEM). A case study was carried out using the Arc Hydro extension within ArcGIS in the study area. It provides the means for processing and presenting a spatially referenced stream model. The study area's flow routing, stream levels, segmentation, and drainage point processing can be obtained using the DEM as the 'input surface raster'. These processes integrate hydrologic and engineering research and environmental modeling in a multi-disciplinary program designed to provide decision makers with a science-based understanding of, and innovative tools for, the development of an interdisciplinary and multi-level approach. This research helps transport project planning and construction phases by analyzing surficial water flow, high-level streams, and wetland sites, so that transportation infrastructure planning, implementation, maintenance, monitoring, and long-term evaluation can better address the challenges and solutions associated with effective management and enhancement of low, medium, and high levels of impact. Transport projects are frequently perceived as critical to the 'success' of major urban, metropolitan, regional and/or national development because of their potential to effect significant socio-economic and territorial change. In this context, sustaining and developing economic and social activities depends on sufficient water resources management. The results of our research provide a workflow to build a stream network and to classify a suitability map according to stream levels. Transportation projects can be established, developed, and delivered more effectively by selecting the best locations, reducing construction and maintenance costs, and adopting cost-effective solutions for drainage, landslide, and flood control. According to the model findings, field studies should be carried out to fill gaps and check for errors. In future research, this study can be extended to determine and prevent possible damage to sensitive areas and vulnerable zones, supported by field investigations.Keywords: water resources management, hydro tool, water protection, transportation
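As a library-free illustration of the DEM-based drainage extraction that Arc Hydro automates (flow direction, flow accumulation, stream definition), a minimal Python sketch is given below; it is not the Arc Hydro implementation, and the input raster and stream threshold are hypothetical.
```python
# D8 flow accumulation on a depression-filled DEM, followed by a simple
# stream mask; cells are visited from highest to lowest elevation so that
# upstream accumulation is always passed downslope before it is needed.
import numpy as np

def d8_flow_accumulation(dem):
    rows, cols = dem.shape
    acc = np.ones(dem.shape, dtype=float)             # each cell contributes itself
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    order = np.dstack(np.unravel_index(np.argsort(dem, axis=None)[::-1],
                                       dem.shape))[0]
    for r, c in order:
        steepest, target = 0.0, None
        for dr, dc in offsets:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                drop = (dem[r, c] - dem[nr, nc]) / np.hypot(dr, dc)
                if drop > steepest:
                    steepest, target = drop, (nr, nc)
        if target is not None:                         # pass accumulation downslope
            acc[target] += acc[r, c]
    return acc

dem = np.loadtxt("study_area_dem.txt")                 # hypothetical DEM grid
acc = d8_flow_accumulation(dem)
streams = acc > 500                                    # hypothetical stream threshold
```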
Procedia PDF Downloads 5621 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects the local-to-global climate and, in turn, suffers from global warming, with worrying impacts on human well-being, health, and social and economic activities. The physico-morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions at an interpolated point. Quantifying the local UHI over extensive areas based on weather station observations alone is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperature. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to NSAT estimation, since it accounts for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations with satellite-derived LST. The approach is structured in two main steps. First, a GWR model is set up to estimate NSAT at low resolution by combining air temperature from discrete observations retrieved from weather stations (dependent variable) and LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometer spatial resolution, are employed. Two time periods are considered according to the satellite revisit schedule, i.e., 10:30 am and 9:30 pm. Afterward, the results are downscaled to 30 meters spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable), the multispectral information provided by the Landsat mission, in particular the albedo, and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters. Albedo and the DEM are now the predictors. The area under investigation is the Metropolitan City of Milan, which covers approximately 1,575 km2 and encompasses a population of over 3 million inhabitants. Both models, low- (1 km) and high-resolution (30 meters), have been validated by cross-validation using indicators such as R2, Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milan only, has been employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
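A minimal sketch of the first (1 km) step is given below, assuming the open-source mgwr package and hypothetical station and LST inputs; the second step repeats the same scheme at 30 m with albedo and the SRTM DEM as predictors.
```python
# Geographically weighted regression of station air temperature on MODIS LST;
# file, column and variable names are illustrative, not the study's data.
import pandas as pd
from mgwr.sel_bw import Sel_BW
from mgwr.gwr import GWR

stations = pd.read_csv("milan_stations.csv")          # hypothetical station table
coords = list(zip(stations["x"], stations["y"]))       # projected coordinates
y = stations[["t_air"]].values                         # observed air temperature
X = stations[["lst_modis"]].values                     # MODIS LST at the stations

bw = Sel_BW(coords, y, X).search()                     # bandwidth selection
results = GWR(coords, y, X, bw).fit()                  # local coefficients per station

print("bandwidth:", bw)
print("local R2 range:", results.localR2.min(), results.localR2.max())
nsat_1km = results.predy                               # fitted NSAT at the stations
```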
Procedia PDF Downloads 19520 The Role of Community Activism in Promoting Social Justice around Housing Issues: A Case Study of the Western Cape
Authors: Mapule Maema
Abstract:
The paper aims to highlight the role that community activism has played in promoting social justice around housing issues in the Western Cape. The Western Cape is one of the most spatially segregated provinces in South Africa and continues to exhibit grave inequalities between cities, townships and farms. These inequalities cut across intersectional issues such as race, class, gender, and politics. The main challenges facing marginalized communities in the Western Cape include access to housing, land and basic services. This is not peculiar to the Western Cape; the entire country faces similar challenges. However, the Western Cape is seen as the fastest urbanizing province in the country due to tourism. Various social movements have been formed across the country to counter these challenges; this paper, however, focuses on the resilience communities have fostered despite the myriad housing and spatial crises they face. The paper focuses on the Legal Resources Centre's clients from an informal settlement called Imizamo Yethu, based in the Hout Bay Valley area. The 18-hectare settlement houses approximately 33,600 people. On 21 July 2017, Hout Bay experienced violent protests following an eviction order passed by the City of Cape Town. The protest was characterized by tensions within the community regarding the super-blocking initiative, which aims to establish roads in informal settlements to ensure basic services. Residents opposed to the process argued that no proper consultations had been carried out to educate them on what the process entailed. Public participation is one of the objectives municipalities aim to promote; however, it remains a great challenge. In order to highlight the experiences of the LRC clients regarding what motivated their involvement in the movement, how they felt about their participation, and their aspirations, the paper will employ qualitative research methods. Qualitative research methods enable the researcher to gain a deeper and more nuanced understanding of the social world through the eyes of those who experience it. It is a flexible methodology that also enables one to understand social processes and the significance they generate. Data will be collected using the World Café as a focus group method. The World Café is a simple, effective and flexible format for hosting group dialogue. The steps taken when setting up a World Café include the following: setting the context (why you are bringing people together and what you want to achieve), creating a hospitable space (making participants feel at home and free to discuss issues), exploring questions that matter, connecting diverse perspectives (the opportunity to actively contribute one's thinking), listening together for patterns and insights, and sharing collective discoveries and learnings. Secondary data will be used to augment the data collected. Stories of impact will be drawn from the exercises. This paper will contribute to the discourse on sustainable housing and urban development, and the research outputs will be disseminated to the public for learning.Keywords: community activism, influence, social justice, development
Procedia PDF Downloads 13719 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
Nowadays, wetlands are the subject of contradictory debates involving scientific, political and administrative perspectives. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves and, depending on their age, composition and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced since the beginning of the 1990s a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies and management experiments. The geographical and physical characteristics of the wetlands of the Centre-Val de Loire region conceal a large number of natural habitats that harbour a great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) in a certain way in order to be able to take action. Wetlands are no exception to this rule, even if it seems a difficult exercise to delimit a type of environment whose main characteristic is often to occupy the transition between aquatic and terrestrial environments. However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European Corine Land Cover database, which allows the characteristic wetland types of each place to be quantified and characterized. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (1 hectare). To address this limitation, it is important to note that these wetlands generally represent spatially complex features, so the use of very high spatial resolution images (>3m) is necessary to map small and large areas. Moreover, recent advances in artificial intelligence (AI) and deep learning methods for satellite image processing have shown much better performance than traditional processing based only on pixel structures. Our research work is also based on spectral and textural analysis of very high resolution images (SPOT and IRC orthoimages) using two object-oriented approaches, the nearest neighbour (k-NN) approach and the Support Vector Machine (SVM) approach. The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetlands, water body edges, mountain wetlands, river edges and brackish marshes), with a kappa index higher than 85%.Keywords: land development, GIS, sand dunes, segmentation, remote sensing
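A minimal sketch of how the two object-oriented classifiers named above could be compared on per-segment (object) features, using scikit-learn and the kappa index reported in the study; the feature table, class column and classifier settings are hypothetical placeholders.
```python
# k-NN vs. SVM on a table of per-object spectral/textural features.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score

segments = pd.read_csv("segment_features.csv")         # one row per image object
X = segments.drop(columns=["wetland_class"])
y = segments["wetland_class"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                          test_size=0.3, random_state=0)

models = {
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale")),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    kappa = cohen_kappa_score(y_te, clf.predict(X_te))
    print(f"{name}: kappa = {kappa:.3f}")               # study reports k-NN kappa > 85%
```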
Procedia PDF Downloads 7218 Intelligent Materials and Functional Aspects of Shape Memory Alloys
Authors: Osman Adiguzel
Abstract:
Shape-memory alloys are a new class of functional materials with a peculiar property known as the shape memory effect. These alloys return to a previously defined shape on heating after deformation in the low-temperature product phase region and belong to this class of functional materials because of this property. The origin of this phenomenon lies in the fact that the material changes its internal crystalline structure with changing temperature. The shape memory effect is based on martensitic transitions, which govern the remarkable changes in the internal crystalline structure of the material. The martensitic transformation, a solid-state phase transformation, occurs thermally in the material on cooling from the high-temperature parent phase region and is governed by changes in the crystalline structure of the material. Shape memory alloys cycle between original and deformed shapes at the bulk level on heating and cooling and can therefore be used as thermal actuators or temperature-sensitive elements. Martensitic transformations usually occur through the cooperative movement of atoms by means of lattice invariant shears. In the thermally induced case, the ordered parent phase structures turn into twinned structures through this movement in a crystallographic manner. The twinned martensite turns into detwinned or oriented martensite when the material is stressed in the low-temperature martensitic phase condition. The detwinned martensite turns into the parent phase structure on first heating (the first cycle), and the parent phase structures turn into the twinned and detwinned structures, respectively, in the irreversible and reversible memory cases. On the other hand, shape memory materials are very important and useful in many interdisciplinary fields such as medicine, pharmacy, bioengineering, metallurgy and many engineering fields. The choice of material, as well as of the actuator and sensor to be combined with the host structure, is essential for developing the main materials and structures. Copper-based alloys exhibit this property in the metastable beta-phase region, which has bcc-based structures in the high-temperature parent phase field, and these structures martensitically turn into layered complex structures with lattice twinning, following two ordered reactions, on cooling. The martensitic transition occurs as self-accommodated martensite with inhomogeneous shears, lattice invariant shears that occur in two opposite <110>-type directions on the {110}-type plane of the austenite matrix, which is the basal plane of the martensite. This kind of shear can be called the {110}<110>-type mode and gives rise to the formation of layered structures, such as 3R, 9R or 18R, depending on the stacking sequences on the close-packed planes of the ordered lattice. In the present contribution, X-ray diffraction and transmission electron microscopy (TEM) studies were carried out on two copper-based alloys with the chemical compositions (by weight) Cu-26.1%Zn-4%Al and Cu-11%Al-6%Mn. X-ray diffraction profiles and electron diffraction patterns reveal that both alloys exhibit superlattice reflections inherited from the parent phase due to the displacive character of the martensitic transformation. X-ray diffractograms taken over a long time interval show that the locations and intensities of the diffraction peaks change with aging time at room temperature. In particular, some of the successive peak pairs that provide a special relation between Miller indices come closer to each other.Keywords: Shape memory effect, martensite, twinning, detwinning, self-accommodation, layered structures
Procedia PDF Downloads 42617 Potential of Hyperion (EO-1) Hyperspectral Remote Sensing for Detection and Mapping Mine-Iron Oxide Pollution
Authors: Abderrazak Bannari
Abstract:
Acid Mine Drainage (AMD) from mine wastes and the contamination of soils and water with metals are considered a major environmental problem in mining areas. AMD is produced by interactions of water, air, and sulphidic mine wastes. This environmental problem results from a series of chemical and biochemical oxidation reactions of sulphide minerals, e.g. pyrite and pyrrhotite. These reactions lead to acidity as well as to the dissolution of toxic and heavy metals (Fe, Mn, Cu, etc.) from tailings, waste rock piles, and open pits. Soil and aquatic ecosystems can be contaminated and, consequently, human health and wildlife will be affected. Furthermore, secondary minerals, typically formed during the weathering of mine waste storage areas when the concentration of soluble constituents exceeds the corresponding solubility product, are also important. The most common secondary mineral compositions are hydrous iron oxides (goethite, etc.) and hydrated iron sulphates (jarosite, etc.). The objectives of this study focus on the detection and mapping of mine iron oxide pollution (MIOP) in the soil using Hyperion EO-1 (Earth Observing-1) hyperspectral data and a constrained linear spectral mixture analysis (CLSMA) algorithm. The abandoned Kettara mine, located approximately 35 km northwest of Marrakech city (Morocco), was chosen as the study area. For 44 years (from 1938 to 1981) this mine was exploited for iron oxide and iron sulphide minerals. Previous studies have shown that the soils surrounding Kettara are contaminated by heavy metals (Fe, Cu, etc.) as well as by secondary minerals. To achieve our objectives, several soil samples representing different MIOP classes were collected and located using accurate GPS (≤ ± 30 cm). Endmember spectra were then acquired over each sample using an Analytical Spectral Device (ASD) covering the spectral domain from 350 to 2500 nm. Considering each soil sample separately, the average of forty spectra was resampled and convolved using Gaussian response profiles to match the bandwidths and band centers of the Hyperion sensor. Moreover, the MIOP content of each sample was estimated by geochemical analyses in the laboratory, and a ground truth map was generated using simple kriging in a GIS environment for validation purposes. The Hyperion data were corrected for the spatial shift between the VNIR and SWIR detectors, striping, dead columns, noise, and gain and offset errors. They were then atmospherically corrected using the MODTRAN 4.2 radiative transfer code and transformed to surface reflectance, corrected for sensor smile (1-3 nm shift in VNIR and SWIR), and post-processed to remove residual errors. Finally, geometric distortions and relief displacement effects were corrected using a digital elevation model. The MIOP fraction map was extracted using CLSMA over the entire spectral range (427-2355 nm) and validated against the ground truth map generated by kriging. The obtained results show the promising potential of the proposed methodology for the detection and mapping of mine iron oxide pollution in the soil.Keywords: hyperion eo-1, hyperspectral, mine iron oxide pollution, environmental impact, unmixing
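The fully constrained unmixing at the core of CLSMA (abundances non-negative and summing to one) can be sketched with a standard non-negative least-squares augmentation; the endmember matrix below is synthetic, standing in for the ASD spectra resampled to the Hyperion bands.
```python
# Fully constrained linear unmixing via NNLS with a weighted sum-to-one row.
import numpy as np
from scipy.optimize import nnls

def fcls(E, x, delta=1e3):
    """E: (bands, endmembers) endmember matrix; x: (bands,) pixel spectrum."""
    E_aug = np.vstack([E, delta * np.ones((1, E.shape[1]))])   # sum-to-one row
    x_aug = np.concatenate([x, [delta]])
    abundances, _ = nnls(E_aug, x_aug)                         # enforces a >= 0
    return abundances

# Synthetic example: 150 usable Hyperion bands, 3 endmembers (MIOP among them)
rng = np.random.default_rng(0)
E = rng.uniform(0.05, 0.6, size=(150, 3))
pixel = E @ np.array([0.6, 0.3, 0.1]) + rng.normal(0, 0.002, size=150)
print(fcls(E, pixel))                                          # abundance fractions
```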
Procedia PDF Downloads 22816 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction
Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal
Abstract:
Traditionally, monsoon forecasts have encountered many difficulties stemming from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, the representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, in time, and for different variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least-squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean, and the best model among the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, and mean sea level pressure, in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble trained on similar conditions is also discussed in the present study; it is based on the assumption that training with similar conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods available in the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been experimented with in combination with the above-mentioned approaches. The comparison of these schemes against the observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall than the conventional multi-model approach and the member models.Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction
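A minimal sketch of the conventional superensemble idea described above (least-squares weights fitted to model anomalies over the training window, then applied in the forecast phase); the arrays are toy data, not the TIGGE forecasts.
```python
# Superensemble at one grid point: train weights by least squares, then forecast.
import numpy as np

def train_superensemble(forecasts, obs):
    """forecasts: (n_days, n_models); obs: (n_days,) over the training period."""
    f_clim, o_clim = forecasts.mean(axis=0), obs.mean()
    weights, *_ = np.linalg.lstsq(forecasts - f_clim, obs - o_clim, rcond=None)
    return weights, f_clim, o_clim

def superensemble_forecast(member_forecasts, weights, f_clim, o_clim):
    """member_forecasts: (n_models,) for one forecast day at this grid point."""
    return o_clim + (member_forecasts - f_clim) @ weights

# Toy example: 5 member models, 50-day training window, forecast for day 55
rng = np.random.default_rng(1)
truth = 5 + 10 * rng.random(60)
members = truth[:, None] + rng.normal(0, 2, size=(60, 5)) + [1, -2, 0.5, 3, -1]
w, f_clim, o_clim = train_superensemble(members[:50], truth[:50])
print(superensemble_forecast(members[55], w, f_clim, o_clim), truth[55])
```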
Procedia PDF Downloads 13915 Residential Building Facade Retrofit
Authors: Galit Shiff, Yael Gilad
Abstract:
The need to retrofit old buildings lies in the fact that buildings are responsible for a major share of energy use and CO₂ emissions, and existing old structures are more dominant in their effect than new energy-efficient buildings. Nevertheless, not every case of urban renewal that aims to replace old buildings with new neighbourhoods necessarily has a financial or sustainability justification. Façade design plays a vital role in a building's energy performance and in the units' comfort conditions. A residential façade retrofit methodology and feasibility study has been carried out over the past four years, with two projects already fully renovated. The intention of this study is to serve as a case study for limited-budget façade retrofit in Mediterranean-climate urban areas. The two case study buildings are in Israel, but in different local climatic conditions: one is in 'Sderot' in the south of the country, and one is in 'Migdal Hahemek' in the north of the country. The building typology is similar. The budget of the projects is around $14,000 per unit and includes interventions to the buildings' envelope while tenants continue living in them. Extensive research and analysis of the existing conditions have been done. The buildings' components, materials and envelope sections were mapped, examined and compared to relevant updated standards. Solar radiation simulations for the buildings in their surroundings during winter and summer days were carried out. The energy rating of each unit, as well as of the building as a whole, was calculated according to the Israeli Energy Code. The buildings' facades were documented with a thermal camera during different hours of the day. This information was superimposed on data about electricity use and thermal comfort collected from the residential units. Later in the process, similar tools were used to compare the effectiveness of different design options and to evaluate the chosen solutions. Both projects showed that the most problematic units were those below the roof and those on top of the elevated entrance floor (pilotis). Old buildings tend to have poor insulation on those two horizontal surfaces, which therefore require treatment. Different radiation levels and wall sections in the two projects influenced the design strategies: in the southern project, there was an extreme difference in solar radiation levels between the main façade and the back elevation. Eventually, it was decided to invest in insulating the main south-west façade and the side façades, leaving the back north-east façade almost untouched. Lower levels of radiation in the northern project led to a different tactic: a combination of basic insulation on all façades, together with intensive treatment of areas with problematic thermal behavior. While poor execution of construction details and bad installation of windows in the northern project required replacing them all, in the southern project it was found to be more essential to shade the windows than to replace them. Although the buildings and the construction typology chosen for this study are similar, the research shows that there are large differences due to the location in different climatic zones and variation in local conditions. Therefore, in order to reach a systematic and cost-effective method of work, a more extensive catalogue database is needed.
Such a catalogue will enable public housing companies in the Mediterranean climate to promote massive projects of renovating existing old buildings, drawing on minimal analysis and planning processes.Keywords: facade, low budget, residential, retrofit
Procedia PDF Downloads 20814 Extracellular Polymeric Substances (EPS) Attribute to Biofouling of Anaerobic Membrane Bioreactor: Adhesion and Viscoelastic Properties
Authors: Kbrom Mearg Haile
Abstract:
Introduction: Membrane fouling is the bottleneck for robust continuous operation of the anaerobic membrane bioreactor (AnMBR). It is primarily caused by the characteristics of the mixed liquor suspended solids (MLSS), which are formed by aggregated flocs and a scaffold of microbial self-produced extracellular polymeric substances (EPS) that dictates floc integrity. Accordingly, the adhesion of EPS to the membrane surface, versus their role in forming firm, elastic, and mechanically stable flocs under the reactor's hydraulic shear, is critical for minimizing interactions of EPS and of colloids originating from the MLSS flocs with the membrane. This study aims to gain insight into the effects of MLSS floc properties, EPS adhesion and viscoelasticity, and the viscoelastic properties of the sludge on membrane fouling propensity. Experimental: As a working hypothesis, to alter the aforementioned floc and EPS properties, either a coagulant or a surfactant was added during AnMBR operation. In the AnMBR, two flat-sheet 300 kDa (molecular weight cut-off) polyether sulfone (PES) membranes with a total filtration area of 352 cm2 were immersed in the AnMBR system treating municipal wastewater of Midreshet Ben-Gurion village in the Negev highlands, Israel. The system temperature, pH, biogas recirculation, and hydraulic retention time were regulated. Transmembrane pressure (TMP) fluctuations during a 30-day experiment were recorded under three operating conditions: baseline (without the addition of a coagulating or dispersing agent), coagulant addition (FeCl3), and surfactant addition (sodium dodecyl sulfate). At the end of each experiment, EPS were extracted from the MLSS and from the fouled membrane, characterized for their protein, polysaccharide, and dissolved organic carbon (DOC) contents, and correlated with the fouling tendency of the submerged UF membrane. EPS adherence and viscoelastic properties were revealed using quartz crystal microbalance with dissipation monitoring (QCM-D), with a PES-coated gold sensor used as a membrane-mimicking surface providing detailed real-time EPS adhesion data. The associated shifts in resonance frequency and dissipation at different overtones were further modeled using a Voigt-based viscoelastic model (Dfind software, Q-Sense, Biolin Scientific), from which the thickness, shear modulus, and shear viscosity of the EPS layers adsorbed on the PES-coated sensor were calculated. Results and discussion: The observations obtained from the QCM-D analysis indicate a greater decrease in the frequency shift for the elevated membrane fouling scenarios, likely due to an observed decrease in the calculated shear viscosity and shear modulus of the adsorbed EPS layer, coupled with an increase in the hydrated thickness and fluidity (ΔD/Δf slopes) of the EPS layer. Further analysis is being conducted for the three major operating conditions, analyzing their effects on sludge rheology, dewaterability (capillary suction time, CST) and settleability (sludge volume index, SVI). The biofouling layer is further characterized microscopically using confocal laser scanning microscopy (CLSM) and scanning electron microscopy (SEM) to check the consistency of the biofouling layer development with the sludge characteristics, i.e., a thicker biofouling layer on the membrane surface when operated with surfactant addition, due to flocs with reduced integrity and greater availability of EPS/colloids to the membrane, and, conversely, a thinner layer when operated with coagulant compared to the baseline experiment, due to increased floc integrity.Keywords: viscoelasticity, biofouling, viscoelastic, AnMBR, EPS, floc integrity
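The ΔD/Δf slope used above as a fluidity indicator can be obtained per overtone with a simple linear fit; the sketch below uses placeholder shift values rather than exported QCM-D data.
```python
# Linear ΔD vs. Δf slope per overtone; a larger |ΔD/Δf| suggests a softer,
# more hydrated (more dissipative) adsorbed EPS layer.
import numpy as np

def dd_df_slope(delta_f, delta_d):
    slope, _intercept = np.polyfit(delta_f, delta_d, 1)
    return slope

# Hypothetical frequency (Hz) and dissipation (1e-6) shifts for overtones 3, 5, 7
shifts = {
    3: ([0.0, -5.0, -12.0, -20.0], [0.0, 0.4, 1.1, 1.9]),
    5: ([0.0, -4.0, -10.0, -17.0], [0.0, 0.3, 0.8, 1.4]),
    7: ([0.0, -3.5, -9.0, -15.0], [0.0, 0.2, 0.6, 1.1]),
}
for n, (df_n, dd_n) in shifts.items():
    print(f"overtone {n}: dD/df = {dd_df_slope(np.array(df_n), np.array(dd_n)):.3f}")
```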
Procedia PDF Downloads 2213 Dose Measurement in Veterinary Radiology Using Thermoluminescent Dosimeter
Authors: E. Saeedian, M. Shakerian, A. Zarif Sanayei, Z. Rakeb, F. N. Alizadeh, S. Sarshough, S. Sina
Abstract:
Radiological protection for plants and animals is an area of regulatory importance. Acute doses of 0.1 Gy/d (10 rad/d) or below are highly unlikely to produce permanent, measurable negative effects on populations or communities of plants or animals. The advancement of radiodiagnostics for domestic animals, particularly dogs and cats, has gained popularity in veterinary medicine. As pets are considered members of the family worldwide, they are entitled to the same care and protection. It is important to have a system of radiological protection for nonhuman organisms that complies with the focus on human health as outlined in ICRP publication 19. The present study assesses surface-skin entrance doses in small pets undergoing abdominal radiodiagnostic procedures, using a direct measurement technique with thermoluminescent dosimeters. These measurements allow determination of the entrance skin dose (ESD) by calculating the amount of radiation absorbed by the skin during exposure. A group of thirty TLD-100 dosimeters produced by the Harshaw Company, each with a repeatability greater than 95% and calibrated with a ¹³⁷Cs gamma source, was used to measure doses to ten small pets, including cats and dogs, in the radiology department of a veterinary clinic in Shiraz, Iran. Radiological procedures were performed using a portable imaging unit (Philips Super M100, Philips Medical Systems, Germany) to acquire images of the abdomen; ten abdominal examinations of different pets were monitored, measuring the thickness for the two projections (lateral and ventrodorsal) and the distance of the X-ray source from the surface of each pet during the exams. A pair of dosimeters was used for each pet, placed on the skin over the abdominal region. The study involved procedures with the same kVp and mAs and nearly identical positioning for the different diagnostic X-ray procedures, executed over a period of two months. The results showed a mean ESD value of 260.34±50.06 µGy for pets of approximately the same size. Based on the results, the ESD value is associated with animal size, and larger animals have higher values. If a procedure does not require repetition, the dose can be optimized. For smaller animals, the main challenge in veterinary radiology is the dose increase caused by repetitions, which is most noticeable in the ventrodorsal position due to the difficulty of immobilizing the animal.Keywords: direct dose measuring, dosimetry, radiation protection, veterinary medicine
Procedia PDF Downloads 7012 Housing Recovery in Heavily Damaged Communities in New Jersey after Hurricane Sandy
Authors: Chenyi Ma
Abstract:
Background: The second costliest hurricane in U.S. history, Sandy made landfall in southern New Jersey on October 29, 2012, and struck the entire state with high winds and torrential rains. The disaster killed more than 100 people, left more than 8.5 million households without power, and damaged or destroyed more than 200,000 homes across the state. Immediately after the disaster, public policy support was provided in the nine coastal counties that contained 98% of the major and severely damaged housing units in NJ overall. The programs include the Individuals and Households Assistance Program, the Small Business Loan Program, the National Flood Insurance Program, and the Federal Emergency Management Agency (FEMA) Public Assistance Grant Program. In the most severely affected counties, additional funding was provided through the Community Development Block Grant Reconstruction, Rehabilitation, Elevation, and Mitigation Program and the Homeowner Resettlement Program. How these policies, individually and as a whole, affected housing recovery across communities with different socioeconomic and demographic profiles has not yet been studied, particularly in relation to damage levels. The concept of community social vulnerability has been widely used to explain many aspects of natural disasters. Nevertheless, how communities are vulnerable has been less fully examined. Community resilience has been conceptualized as a protective factor against the negative impacts of disasters; however, how community resilience buffers the effects of vulnerability is not yet known. Because housing recovery is a dynamic social and economic process that varies according to context, this study examined the path from community vulnerability and resilience to housing recovery, looking at both community characteristics and policy interventions. Sample/Methods: This retrospective longitudinal case study compared a literature-identified set of pre-disaster community characteristics, the effects of multiple public policy programs, and a set of time-variant community resilience indicators with changes in housing stock (operationally defined as the percentage of building permits relative to total occupied housing units/households) between 2010 and 2014, two years before and after Hurricane Sandy. The sample consisted of 51 municipalities in the nine counties in which between 4% and 58% of housing units suffered either major or severe damage. Structural equation modeling (SEM) was used to determine the path from vulnerability to housing recovery, via the multiple public programs separately and as a whole, and via the community resilience indicators. The spatial analytical tool ArcGIS 10.2 was used to show the spatial relations between housing recovery patterns and community vulnerability and resilience. Findings: Holding damage levels constant, communities with higher proportions of Hispanic households had significantly lower levels of housing recovery, while communities with higher proportions of households including an adult over age 65 had significantly higher levels of housing recovery. The contrast was partly due to the different levels of total public support the two types of community received. Further, while the public policy programs individually mediated the negative associations between African American and female-headed households and housing recovery, communities with larger proportions of African American, female-headed and Hispanic households were “vulnerable” to lower levels of housing recovery because they lacked sufficient public program support. 
Even so, higher employment rates and incomes buffered vulnerability to lower housing recovery. Because housing is the "wobbly pillar" of the welfare state, the housing needs of these particular groups should be more fully addressed by disaster policy.Keywords: community social vulnerability, community resilience, hurricane, public policy
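A hedged sketch of the mediation logic examined here (vulnerability → public program support → housing recovery), approximated with two ordinary least-squares equations rather than the full SEM; the municipality table and column names are hypothetical.
```python
# Recovery metric plus two OLS paths approximating the SEM mediation structure.
import pandas as pd
import statsmodels.formula.api as smf

muni = pd.read_csv("nj_municipalities.csv")            # one row per municipality

# Outcome: post-Sandy building permits as a share of occupied housing units
muni["recovery"] = muni["permits_2012_2014"] / muni["occupied_units"]

# Path a: does vulnerability predict total public program support?
path_a = smf.ols("public_support ~ pct_hispanic + pct_black + pct_female_headed"
                 " + pct_over_65 + damage_level", data=muni).fit()

# Paths b and c': does support predict recovery while holding vulnerability and
# damage constant, and how much does the direct vulnerability effect shrink?
path_bc = smf.ols("recovery ~ public_support + pct_hispanic + pct_black"
                  " + pct_female_headed + pct_over_65 + damage_level",
                  data=muni).fit()

print(path_a.params["pct_hispanic"], path_bc.params["public_support"])
```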
Procedia PDF Downloads 37211 Cellular Mechanisms Involved in the Radiosensitization of Breast- and Lung Cancer Cells by Agents Targeting Microtubule Dynamics
Authors: Elsie M. Nolte, Annie M. Joubert, Roy Lakier, Maryke Etsebeth, Jolene M. Helena, Marcel Verwey, Laurence Lafanechere, Anne E. Theron
Abstract:
Treatment regimens for breast- and lung cancers may include both radiation- and chemotherapy. Ideally, a pharmaceutical agent which selectively sensitizes cancer cells to gamma (γ)-radiation would allow administration of lower doses of each modality, yielding synergistic anti-cancer benefits and lower metastasis occurrence, in addition to decreasing the side-effect profiles. A range of 2-methoxyestradiol (2-ME) analogues, namely 2-ethyl-3-O-sulphamoyl-estra-1,3,5(10),15-tetraene-3-ol-17-one (ESE-15-one), 2-ethyl-3-O-sulphamoyl-estra-1,3,5(10),15-tetraen-17-ol (ESE-15-ol) and 2-ethyl-3-O-sulphamoyl-estra-1,3,5(10),16-tetraene (ESE-16), were designed in silico by our laboratory with the aim of improving the parent compound's bioavailability in vivo. The main effect of these compounds is the disruption of microtubule dynamics, with a resultant mitotic accumulation and induction of programmed cell death in various cancer cell lines. This in vitro study aimed to determine the cellular responses involved in the radiation sensitization effects of these analogues at low doses in breast- and lung cancer cell lines. The oestrogen receptor-positive MCF-7, oestrogen receptor-negative MDA-MB-231 and triple-negative BT-20 breast cancer cell lines, as well as the A549 lung cancer cell line, were used. The minimal compound and radiation doses able to induce apoptosis were determined using annexin V and cell cycle progression markers. These doses (cell line dependent) were used to pre-sensitize the cancer cells 24 hours prior to 6 gray (Gy) radiation. Experiments were conducted on samples exposed to the individual as well as the combination treatment conditions in order to determine whether the combination treatment yielded an additive cell death response. Morphological studies included light, fluorescence and transmission electron microscopy. Apoptosis induction was determined by flow cytometry employing annexin V, cell cycle analysis, B-cell lymphoma 2 (Bcl-2) signalling, as well as reactive oxygen species (ROS) production. Clonogenic studies were performed by allowing colony formation for 10 days post-radiation. Deoxyribonucleic acid (DNA) damage was quantified via γ-H2AX foci and micronuclei quantification. Amplification of the p53 signalling pathway was determined by western blot. Results indicated that exposing breast- and lung cancer cells to nanomolar concentrations of these analogues 24 hours prior to γ-radiation induced more cell death than the compound and radiation treatments alone. Hypercondensed chromatin, decreased cell density, a damaged cytoskeleton and an increase in apoptotic body formation were observed in cells exposed to the combination treatment condition. An increased number of cells in the sub-G1 phase, as well as increased annexin V staining, elevated ROS formation and decreased Bcl-2 signalling, confirmed the additive effect of the combination treatment. In addition, colony formation decreased significantly. p53 signalling pathways were significantly amplified in cells exposed to the analogues 24 hours prior to radiation, as was the amount of DNA damage. In conclusion, our results indicated that pre-treatment with low doses of 2-ME analogues sensitized breast- and lung cancer cells to γ-radiation and induced apoptosis more than the individual treatments alone. 
Future studies will focus on the effect of the combination treatment on non-malignant cellular counterparts.Keywords: cancer, microtubule dynamics, radiation therapy, radiosensitization
Procedia PDF Downloads 20810 Amphiphilic Compounds as Potential Non-Toxic Antifouling Agents: A Study of Biofilm Formation Assessed by Micro-titer Assays with Marine Bacteria and Eco-toxicological Effect on Marine Algae
Authors: D. Malouch, M. Berchel, C. Dreanno, S. Stachowski-Haberkorn, P-A. Jaffres
Abstract:
Biofilm is a predominant lifestyle chosen by bacteria. Whether developed on an immersed surface or as mobile aggregates known as flocs, bacteria within this form of life show properties different from their planktonic counterparts. Within the biofilm, the self-produced matrix of extracellular polymeric substances (EPS) offers hydration, resource capture and enhanced resistance to antimicrobial agents, and allows cell-to-cell communication. Biofouling is a complex natural phenomenon that involves biological, physical and chemical properties related to the environment, the submerged surface and the living organisms involved. Bio-colonization of artificial structures can cause various economic and environmental impacts. The increase in costs associated with the over-consumption of fuel by biocolonized vessels has been widely studied. Measurement drift in submerged sensors, obstructions in heat exchangers, and deterioration of offshore structures are major difficulties that industries are dealing with. Therefore, surfaces that inhibit biocolonization are required in different areas (water treatment, marine paints, etc.), and many efforts have been devoted to producing efficient and eco-compatible antifouling agents. The different steps of surface fouling are widely described in the literature. Several approaches are currently applied, such as the use of biocidal antifouling paints (mainly with copper derivatives) and super-hydrophobic coatings. While these two approaches are proving to be the most effective, they are not entirely satisfactory, especially in the context of changing legislation. Nowadays, the challenge is to prevent biofouling with non-biocidal compounds, offering a cost-effective solution with no toxic effects on marine organisms. Since the micro-fouling phase plays an important role in regulating the following steps of biofilm formation, it is desirable to reduce or delay the biofouling of a given surface by inhibiting micro-fouling at its early stages. In our recent work, we reported that some amphiphilic compounds exhibited bacteriostatic or bactericidal properties at concentrations that did not affect eukaryotic cells. These remarkable properties invited us to assess this type of bio-inspired phospholipid for preventing the colonization of surfaces by marine bacteria. Of note, other studies reported that amphiphilic compounds interact with bacteria, leading to a reduction in their development. An amphiphilic compound is a molecule consisting of a hydrophobic domain and a polar head (ionic or non-ionic). These compounds appear to have interesting antifouling properties: some ionic compounds have shown antimicrobial activity, and zwitterions can reduce the nonspecific adsorption of proteins. Herein, we investigate the potential of amphiphilic compounds as inhibitors of bacterial growth and marine biofilm formation. The aim of this study is to compare the efficacy of four synthetic phospholipids featuring either a cationic charge (BSV36, KLN47) or a zwitterionic polar-head group (SL386, MB2871) in preventing microfouling by marine bacteria. We also study the toxicity of these compounds in order to identify the most promising compound, which must combine high anti-adhesive properties with low cytotoxicity towards two representative links of coastal marine food webs: phytoplankton and oyster larvae.Keywords: amphiphilic phospholipids, bacterial biofilm, marine microfouling, non-toxic antifouling
Procedia PDF Downloads 147