211 Transformation of the Relationship between Tourism Activities and Residential Environment in the Center of a Historical Suburban City of a Tourism Metropolis: A Case Study of Naka-Uji Area, Uji City, Kyoto Prefecture
Authors: Shuailing CUI, Nakajima Naoto
Abstract:
The tourism industry has experienced significant growth worldwide since the end of World War II. Tourists are drawn to suburban areas during weekends and holidays to explore historical and cultural heritage sites. Since the 1970s, there has been a resurgence in population growth in metropolitan areas, which has fueled the demand for suburban tourism and facilitated its development. The construction of infrastructure, such as railway lines and arterial roads, has also supported the growth of tourism. Tourists engaging in various activities can have a significant impact on the destinations they visit. Tourism has not only affected the local economy but has also begun to alter the social structures, culture, and lifestyle of the destinations visited. In addition, the growing number of tourists has affected the local commercial structure and daily life of suburban residents. Therefore, there is a need to understand how tourism activities influence the residential environment of the tourist destination and how this influence changes over time. This study aims to analyze the transformation of the relationship between tourism activities and the residential environment in the Naka-Uji area of Uji City, Kyoto Prefecture. Specifically, it investigates how the growth of the tourism industry has influenced the local residential environment and how this influence has changed over time. The findings of the study indicate that the growth of tourism in the Naka-Uji area has had both positive and negative effects on the local residential environment. On the one hand, the tourism industry has created job opportunities and improved local economic conditions. On the other hand, it has also caused environmental degradation, particularly in terms of increased traffic and the construction of parking lots. The study also found that the development of the tourism industry has influenced the social structures, culture, and lifestyle of residents.
For instance, the increase in the number of tourists has led to changes in the commercial structure and daily life of suburban residents. The study highlights the importance of collaboration and shared benefits among stakeholders in tourism development, particularly in terms of preserving the cultural and natural heritage of tourist destinations while promoting sustainable development. Overall, this study contributes to the growing body of research on the impact of tourism on suburban areas. It provides insights into the complex relationships between tourism, the natural environment, the local economy, and residential life, and emphasizes the need for sustainable tourism development in suburban areas. The findings of this study have important implications for policymakers, urban planners, and other stakeholders involved in promoting regional revitalization and sustainable tourism development.
Keywords: tourism, residential environment, suburban area, metropolis
Procedia PDF Downloads 70
210 Case Study of Human Factors and Ergonomics in the Design and Use of Harness-Embedded Costumes in the Entertainment Industry
Authors: Marielle Hanley, Brandon Takahashi, Gerry Hanley, Gabriella Hancock
Abstract:
Safety harnesses and their protocols are very common within the construction industry, and the Occupational Safety and Health Administration has provided extensive guidelines, with protocols being constantly updated to ensure the highest level of safety within construction sites. There is also extensive research on harnesses that are meant to keep people in place in moving vehicles, such as seatbelts. Though this research is comprehensive in these areas, the findings and recommendations are not generally applicable to other industry sectors where harnesses are used, such as the entertainment industry. The focus of this case study is on the design and use of harnesses used by theme park employees wearing elaborate costumes in parades and performances. The key factors of posture, kinesthetic factors, and harness engineering interact in significantly different ways when the user is performing repetitive choreography with 20 to 40 lbs. of apparatus connected to harnesses that need to be hidden from the audience’s view. Human factors and ergonomic analyses take into account the required performers’ behaviors, the physical and mental preparation and posture of the performer, the design of the harness-embedded costume, and the environmental conditions during the performance (e.g., wind) that can determine the physical stresses placed on the harness and performer. The uniqueness and expense of elaborate costumes frequently result in only one or two costumes created per production, and a variety of different performers need to fit into the same costume. Consequently, the harnesses should be adjustable if they are to minimize the physical and cognitive loads on the performer, but they are frequently “one-size-fits-all”. The complexity of human and technology interactions produces a range of detrimental outcomes, from muscle strains to nerve damage, mental and physical fatigue, and reduced motivation to perform at peak levels.
Based on observations conducted over four years, this case study offers a number of recommendations to institutionalize human factors and ergonomic analyses that can significantly improve the safety, reliability, and quality of performances with harness-embedded costumes in the entertainment industry. Human factors and ergonomic analyses can be integrated into the engineering design of the performance costumes with embedded harnesses, the conditioning and training of the performers using the costumes, the choreography of the performances within the staged setting, and the maintenance of the harness-embedded costumes. By applying human factors and ergonomic methodologies in the entertainment industry, industry management and support staff can significantly reduce the risks of injury, improve the longevity of unique performers, increase the longevity of the harness-embedded costumes, and produce the desired entertainment value for audiences.
Keywords: ergonomics in entertainment industry, harness-embedded costumes, performer safety, injury prevention
Procedia PDF Downloads 90
209 Response Surface Methodology for the Optimization of Radioactive Wastewater Treatment with Chitosan-Argan Nutshell Beads
Authors: Fatima Zahra Falah, Touria El. Ghailassi, Samia Yousfi, Ahmed Moussaif, Hasna Hamdane, Mouna Latifa Bouamrani
Abstract:
The management and treatment of radioactive wastewater pose significant challenges to environmental safety and public health. This study presents an innovative approach to optimizing radioactive wastewater treatment using a novel biosorbent: chitosan-argan nutshell beads. By employing Response Surface Methodology (RSM), we aimed to determine the optimal conditions for maximum removal efficiency of radioactive contaminants. Chitosan, a biodegradable and non-toxic biopolymer, was combined with argan nutshell powder to create composite beads. The argan nutshell, a waste product from argan oil production, provides additional adsorption sites and mechanical stability to the biosorbent. The beads were characterized using Fourier Transform Infrared Spectroscopy (FTIR), Scanning Electron Microscopy (SEM), and X-ray Diffraction (XRD) to confirm their structure and composition. A three-factor, three-level Box-Behnken design was utilized to investigate the effects of pH (3-9), contact time (30-150 minutes), and adsorbent dosage (0.5-2.5 g/L) on the removal efficiency of radioactive isotopes, primarily focusing on cesium-137. Batch adsorption experiments were conducted using synthetic radioactive wastewater with known concentrations of these isotopes. The RSM analysis revealed that all three factors significantly influenced the adsorption process. A quadratic model was developed to describe the relationship between the factors and the removal efficiency. The model's adequacy was confirmed through analysis of variance (ANOVA) and various diagnostic plots. Optimal conditions for maximum removal efficiency were pH 6.8, a contact time of 120 minutes, and an adsorbent dosage of 0.8 g/L. Under these conditions, the experimental removal efficiency for cesium-137 was 94.7%, closely matching the model's predictions. Adsorption isotherms and kinetics were also investigated to elucidate the mechanism of the process. 
The Langmuir isotherm and pseudo-second-order kinetic model best described the adsorption behavior, indicating a monolayer adsorption process on a homogeneous surface. This study demonstrates the potential of chitosan-argan nutshell beads as an effective and sustainable biosorbent for radioactive wastewater treatment. The use of RSM allowed for the efficient optimization of the process parameters, potentially reducing the time and resources required for large-scale implementation. Future work will focus on testing the biosorbent's performance with real radioactive wastewater samples and investigating its regeneration and reusability for long-term applications.
Keywords: adsorption, argan nutshell, beads, chitosan, mechanism, optimization, radioactive wastewater, response surface methodology
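The Langmuir fit reported above can be illustrated with a minimal sketch using the common linearized form of the isotherm. The equilibrium data and parameter values below are invented for illustration and are not the study's measurements.

```python
# Sketch: fitting the Langmuir isotherm q_e = (q_m * K_L * C_e) / (1 + K_L * C_e)
# via its linearized form  C_e/q_e = C_e/q_m + 1/(K_L * q_m).
# All numbers below are illustrative, not data from the study.

def linear_fit(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def fit_langmuir(ce, qe):
    # Linearized Langmuir: slope = 1/qm, intercept = 1/(KL*qm)
    slope, intercept = linear_fit(ce, [c / q for c, q in zip(ce, qe)])
    qm = 1.0 / slope
    kl = 1.0 / (intercept * qm)
    return qm, kl

# Synthetic equilibrium data generated from qm = 2.5 mg/g, KL = 0.8 L/mg
qm_true, kl_true = 2.5, 0.8
ce = [0.5, 1.0, 2.0, 4.0, 8.0]
qe = [qm_true * kl_true * c / (1 + kl_true * c) for c in ce]

qm_est, kl_est = fit_langmuir(ce, qe)  # recovers the generating parameters
```

Because the synthetic data are noise-free, the fit recovers the generating parameters exactly; with real batch data, the quality of the linearized fit is what supports the monolayer-adsorption interpretation.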
Procedia PDF Downloads 35
208 Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise
Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou
Abstract:
Concern about the negative impacts of anthropogenic noise on the ocean’s ecosystems has increased over recent decades. This concern has led to a similarly increased willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires not only knowledge about the noise from individual ships, but also about how ship noise is distributed in time and space within the habitats of concern. Marine mammals, as well as fish, sea turtles, larvae, and invertebrates, depend heavily on sound to hunt, feed, avoid predators, socialize and communicate during reproduction, and defend territory. In the marine environment, sight is only useful up to a few tens of meters, whereas sound can propagate over hundreds or even thousands of kilometers. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, known as the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities in order to achieve and maintain a good environmental status of the marine environment. Ocean-Planner is a web-based platform that provides regulators, managers of protected or sensitive areas, and other stakeholders with a decision support tool that enables them to anticipate and quantify the effectiveness of management measures in reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Based on the operational sound modelling tool Quonops Online Service, Ocean-Planner allows the user, via an intuitive geographical interface, to define management measures at local (Marine Protected Area, Natura 2000 sites, harbors, etc.) or global (Particularly Sensitive Sea Area) scales, seasonal (regulation over a period of time) or permanent, and partial (focused on some maritime activities) or complete (all maritime activities).
Speed limits, exclusion areas, traffic separation schemes (TSS), and vessel sound level limitations are among the measures supported by the tool. Ocean Planner helps decide on the most effective measures to apply in order to maintain or restore the biodiversity and functioning of the ecosystems of the coastal seabed, maintain a good state of conservation of sensitive areas, and maintain or restore the populations of marine species.
Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction
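As a toy illustration of the kind of calculation that underlies such noise maps, the sketch below applies the textbook spherical-spreading law for received level. The source level used is an assumed typical value for a cargo ship, and real tools such as Quonops use full propagation modelling rather than this first-order approximation.

```python
import math

# Received level under simple spherical spreading: RL = SL - 20*log10(r).
# This is only the textbook first-order transmission-loss model; operational
# tools account for bathymetry, seabed absorption, and sound-speed profiles.

def received_level_db(source_level_db, range_m):
    """Received level (dB re 1 uPa) at range_m meters from the source."""
    return source_level_db - 20.0 * math.log10(range_m)

# Assumed (typical, not study-specific) cargo-ship source level:
# 180 dB re 1 uPa @ 1 m. At 1 km the received level drops by 60 dB.
rl_1km = received_level_db(180.0, 1_000.0)
```

Mapping such received levels over a grid of ship positions, and recomputing them after a speed limit or rerouting measure changes the source levels and tracks, is the essence of what a decision aid like this quantifies.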
Procedia PDF Downloads 122
207 The Scenario Analysis of Shale Gas Development in China by Applying Natural Gas Pipeline Optimization Model
Authors: Meng Xu, Alexis K. H. Lau, Ming Xu, Bill Barron, Narges Shahraki
Abstract:
As an emerging unconventional energy source, shale gas has been an economically viable step towards a cleaner energy future in the U.S. China also has shale resources that are estimated to be potentially the largest in the world. In addition, China has enormous unmet demand for a clean alternative to coal. Nonetheless, the geological complexity of China’s shale basins and issues of water scarcity potentially impose serious constraints on shale gas development in China. Further, even if China could replicate to a significant degree the U.S. shale gas boom, China faces the problem of transporting the gas efficiently overland with its limited pipeline network throughput capacity and coverage. The aim of this study is to identify the potential bottlenecks in China’s gas transmission network, as well as to examine how shale gas development affects particular supply locations and demand centers. We examine this through three scenarios for projected domestic shale gas supply by 2020: optimistic, medium, and conservative, taking as references the International Energy Agency’s (IEA’s) projections and China’s shale gas development plans. Separately, we project gas demand at the provincial level, since shale gas will have a more significant impact regionally than nationally. To quantitatively assess each shale gas development scenario, we formulated a gas pipeline optimization model. We used ArcGIS to generate the connectivity parameters and pipeline segment lengths. Other parameters are collected from provincial “twelfth five-year” plans and the “China Oil and Gas Pipeline Atlas”. The multi-objective optimization model is implemented in GAMS and MATLAB. It aims to minimize the demand that cannot be met, while simultaneously seeking to minimize total gas supply and transmission costs.
The results indicate that, even if the primary objective is to meet the projected gas demand rather than cost minimization, there is a shortfall of 9% in meeting total demand under the medium scenario. Comparing the results between the optimistic and medium shale gas supply scenarios, almost half of the shale gas produced in Sichuan province and Chongqing cannot be transmitted out by pipeline. On the demand side, the gas demand gaps of Henan province and Shanghai could be filled by as much as 82% and 39%, respectively, with increased shale gas supply. To conclude, the pipeline network in China is currently not sufficient to meet the projected natural gas demand in 2020 under the medium and optimistic scenarios, indicating the need for substantial capacity expansion of some of the existing network, and the importance of constructing new pipelines from particular supply to demand sites. If the pipeline constraint is overcome, the gas demand gaps of Beijing, Shanghai, Jiangsu, and Henan could potentially be filled, and China could thereby reduce its dependency on LNG imports by almost 25% under the optimistic scenario.
Keywords: energy policy, energy systematic analysis, scenario analysis, shale gas in China
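The two-stage objective described above (first minimize unmet demand, then minimize transmission cost) can be sketched on a toy network small enough to solve by brute-force enumeration. All capacities, demands, and costs below are hypothetical; the study's actual model is a provincial-scale multi-objective optimization in GAMS and MATLAB.

```python
from itertools import product

# Toy two-supply, two-demand pipeline network (all numbers hypothetical).
# Objective mirrors the abstract: minimize unmet demand first, then cost.
supply = {"S1": 6, "S2": 4}                      # units available per period
demand = {"D1": 5, "D2": 4}                      # units required per period
capacity = {("S1", "D1"): 4, ("S1", "D2"): 3,    # pipeline throughput limits
            ("S2", "D1"): 2, ("S2", "D2"): 2}
cost = {("S1", "D1"): 1.0, ("S1", "D2"): 2.0,    # unit transmission costs
        ("S2", "D1"): 1.5, ("S2", "D2"): 1.0}

arcs = list(capacity)

def evaluate(flows):
    """Return (unmet_demand, total_cost) for a flow vector, or None if infeasible."""
    sent = dict(zip(arcs, flows))
    for s in supply:  # supply limits
        if sum(f for (a, b), f in sent.items() if a == s) > supply[s]:
            return None
    delivered = {d: sum(f for (a, b), f in sent.items() if b == d) for d in demand}
    if any(delivered[d] > demand[d] for d in demand):  # no over-delivery
        return None
    unmet = sum(demand[d] - delivered[d] for d in demand)
    total_cost = sum(cost[a] * f for a, f in sent.items())
    return unmet, total_cost

# Enumerate all integer flows within arc capacities; tuples sort
# lexicographically, so unmet demand dominates cost automatically.
best = min(
    (evaluate(f), f)
    for f in product(*[range(capacity[a] + 1) for a in arcs])
    if evaluate(f) is not None
)
(best_unmet, best_cost), best_flows = best
```

In this toy instance, all demand can be met at a minimum cost of 11.5; tightening an arc capacity (e.g., the S2 arcs) immediately produces the kind of regional shortfall the abstract reports for Sichuan and Chongqing.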
Procedia PDF Downloads 287
206 Atypical Retinoid ST1926 Nanoparticle Formulation Development and Therapeutic Potential in Colorectal Cancer
Authors: Sara Assi, Berthe Hayar, Claudio Pisano, Nadine Darwiche, Walid Saad
Abstract:
Nanomedicine, the application of nanotechnology to medicine, is an emerging discipline that has gained significant attention in recent years. Current breakthroughs in nanomedicine have paved the way to develop effective drug delivery systems that can be used to target cancer. The use of nanotechnology provides effective drug delivery and enhanced stability, bioavailability, and permeability, thereby minimizing drug dosage and toxicity. As such, nanoparticle (NP) formulations have been applied to drug delivery in various cancer models and have been shown to improve the ability of drugs to reach specific targeted sites in a controlled manner. Cancer is one of the major causes of death worldwide; in particular, colorectal cancer (CRC) is the third most common type of cancer diagnosed among men and women and the second leading cause of cancer-related deaths, highlighting the need for novel therapies. Retinoids, consisting of natural and synthetic derivatives, are a class of chemical compounds that have shown promise in preclinical and clinical cancer settings. However, retinoids are limited by their toxicity and resistance to treatment. To overcome this resistance, various synthetic retinoids have been developed, including the adamantyl retinoid ST1926, which is a potent anti-cancer agent. However, due to its limited bioavailability, the development of ST1926 has been restricted to phase I clinical trials. We have previously investigated the preclinical efficacy of ST1926 in CRC models. ST1926 displayed potent inhibitory and apoptotic effects in CRC cell lines by inducing early DNA damage and apoptosis. ST1926 significantly reduced the tumor doubling time and tumor burden in a xenograft CRC model. Therefore, we developed ST1926-NPs and assessed their efficacy in CRC models. ST1926-NPs were produced using Flash NanoPrecipitation with the amphiphilic diblock copolymer polystyrene-b-ethylene oxide and cholesterol as a co-stabilizer.
ST1926 was formulated into NPs with a drug to polymer mass ratio of 1:2, providing a stable formulation for one week. The resulting ST1926-NP diameter was 100 nm, with a polydispersity index of 0.245. Using the MTT cell viability assay, ST1926-NPs exhibited anti-growth activities in HCT116 cells as potent as those of naked ST1926, at pharmacologically achievable concentrations. Future studies will be performed to study the anti-tumor activities and mechanism of action of ST1926-NPs in a xenograft mouse model and to detect the compound and its glucuroconjugated form in the plasma of mice. Ultimately, our studies will support the use of ST1926-NP formulations in enhancing the stability and bioavailability of ST1926 in CRC.
Keywords: nanoparticles, drug delivery, colorectal cancer, retinoids
Procedia PDF Downloads 100
205 Pump-as-Turbine: Testing and Characterization as an Energy Recovery Device, for Use within the Water Distribution Network
Authors: T. Lydon, A. McNabola, P. Coughlan
Abstract:
Energy consumption in the water distribution network (WDN) is a well-established problem, with the industry contributing heavily to carbon emissions: 0.9 kg of CO2 is emitted per m3 of water supplied. It is indicated that 85% of the energy wasted in the WDN can be recovered by installing turbines. Existing potential in networks is present at small-capacity sites (5-10 kW) that are numerous and dispersed across networks. However, traditional turbine technology cannot be scaled down to this size in an economically viable fashion, thus alternative approaches are needed. This research aims to enable energy recovery within the WDN by exploring the potential of pumps-as-turbines (PATs). PATs are estimated to be ten times cheaper than traditional micro-hydro turbines, presenting potential to contribute to an economically viable solution. However, a number of technical constraints currently prohibit their widespread use, including the inability of a PAT to control pressure, difficulty in the selection of PATs due to a lack of performance data, and a lack of understanding of how PATs can cater for flow fluctuations as extreme as +/- 50% of the average daily flow, which are characteristic of the WDN. A PAT prototype is undergoing testing in order to identify the capabilities of the technology. Results of preliminary testing, which involved testing the efficiency and power potential of the PAT under varying flow and pressure conditions in order to develop characteristic and efficiency curves for the PAT and a baseline understanding of the technology's capabilities, are presented here:
• The limitations of existing selection methods, which convert the BEP from pump operation to the BEP in turbine operation, were highlighted by the failure of such methods to reflect the conditions of maximum efficiency of the PAT. A generalised selection method for the WDN may need to be informed by an understanding of the impact of flow variations and pressure control on system power potential, capital cost, maintenance costs, and payback period.
• A clear relationship between flow and the efficiency rate of the PAT has been established. The rate of efficiency reduction for flows +/- 50% of the BEP is significant and more extreme for deviations in flow above the BEP than below, but not dissimilar to the efficiency behaviour of other turbines.
• A PAT alone is not sufficient to regulate pressure, yet the relationship of pressure across the PAT is foundational in exploring ways in which PAT energy recovery systems can maintain the required pressure level within the WDN. Efficiencies of PAT energy recovery systems operating under conditions of pressure regulation, which have been conceptualised in the current literature, need to be established.
Initial results guide the focus of forthcoming testing and exploration of PAT technology towards how PATs can form part of an efficient energy recovery system.
Keywords: energy recovery, pump-as-turbine, water distribution network
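The flow-efficiency behaviour described above can be caricatured with the standard hydropower relation P = ρgQHη and an asymmetric efficiency curve around the BEP. The curve shape and its coefficients below are illustrative assumptions, not the measured characteristics of the prototype.

```python
# Sketch: recoverable power and an assumed asymmetric efficiency curve
# around the best efficiency point (BEP). Coefficients are illustrative.

RHO, G = 1000.0, 9.81  # water density (kg/m3), gravity (m/s2)

def pat_power_kw(flow_m3s, head_m, efficiency):
    """Recovered electrical power in kW: P = rho*g*Q*H*eta."""
    return RHO * G * flow_m3s * head_m * efficiency / 1000.0

def efficiency(q, q_bep, eta_bep=0.65, k_above=1.2, k_below=0.8):
    """Assumed efficiency vs flow: quadratic drop-off around the BEP,
    steeper above the BEP than below (as the test results suggest)."""
    x = (q - q_bep) / q_bep
    k = k_above if x > 0 else k_below
    return max(0.0, eta_bep * (1.0 - k * x ** 2))

# A small WDN site: 20 L/s at 25 m head at an assumed 65% efficiency
# recovers roughly 3.2 kW, i.e. within the 5-10 kW site class when scaled.
p_site = pat_power_kw(0.02, 25.0, 0.65)
```

With this shape, a +50% flow deviation costs more efficiency than a -50% deviation, which is the asymmetry the preliminary testing reports; fitting the two curvature constants to measured data would turn the sketch into a usable characteristic curve.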
Procedia PDF Downloads 260
204 Selective Conversion of Biodiesel Derived Glycerol to 1,2-Propanediol over Highly Efficient γ-Al2O3 Supported Bimetallic Cu-Ni Catalyst
Authors: Smita Mondal, Dinesh Kumar Pandey, Prakash Biswas
Abstract:
During the past two decades, considerable attention has been given to the value addition of biodiesel-derived glycerol (~10 wt.%) to make the biodiesel industry economically viable. Among the various glycerol value-addition methods, hydrogenolysis of glycerol to 1,2-propanediol is one of the attractive and promising routes. In this study, a highly active and selective γ-Al₂O₃ supported bimetallic Cu-Ni catalyst was developed for selective hydrogenolysis of glycerol to 1,2-propanediol in the liquid phase. The catalytic performance was evaluated in a high-pressure autoclave reactor. Experimental results demonstrated that the bimetallic copper-nickel catalyst was more active and selective to 1,2-PDO than the monometallic catalysts due to its bifunctional behavior. To verify the effect of calcination temperature on the formation of the Cu-Ni mixed oxide phase, the calcination temperature of the 20 wt.% Cu:Ni(1:1)/Al₂O₃ catalyst was varied from 300°C to 550°C. The physicochemical properties of the catalysts were characterized by various techniques such as specific surface area (BET), X-ray diffraction (XRD), temperature programmed reduction (TPR), and temperature programmed desorption (TPD). The BET surface area and pore volume of the catalysts were in the range of 71-78 m²g⁻¹ and 0.12-0.15 cm³g⁻¹, respectively. The peaks in the 2θ ranges of 43.3°-45.5° and 50.4°-52° corresponded to the copper-nickel mixed oxide phase [JCPDS: 78-1602]. The formation of this mixed oxide indicated the strong interaction of Cu and Ni with the alumina support. The crystallite size decreased with increasing calcination temperature up to 450°C; beyond this, the crystallite size increased due to agglomeration. A small crystallite size of 16.5 nm was obtained for the catalyst calcined at 400°C.
The total acidic sites of the catalysts were determined by NH₃-TPD, and the maximum total acidity of 0.609 mmol NH₃ gcat⁻¹ was obtained for the catalyst calcined at 400°C. TPR data showed that, among all the catalysts, the one calcined at 400°C had the highest degree of reduction (75%). Further, the 20 wt.% Cu:Ni(1:1)/γ-Al₂O₃ catalyst calcined at 400°C exhibited the highest catalytic activity (>70%) and 1,2-PDO selectivity (>85%) under mild reaction conditions, owing to its highest acidity, highest degree of reduction, and smallest crystallite size. Further, a modified power-law kinetic model was developed to understand the true kinetic behaviour of glycerol hydrogenolysis over the 20 wt.% Cu:Ni(1:1)/γ-Al₂O₃ catalyst. The rate equations obtained from the model were solved with ode23 in MATLAB, coupled with a genetic algorithm. Results demonstrated that the model-predicted data fitted the experimental data very well. The activation energy of the formation of 1,2-PDO was found to be 45 kJ mol⁻¹.
Keywords: glycerol, 1,2-PDO, calcination, kinetics
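A minimal sketch of the kinetic treatment: a power-law rate integrated numerically (RK4 here, standing in for MATLAB's ode23) together with the Arrhenius law using the reported activation energy of 45 kJ/mol. The rate constant and reaction order are assumed values for illustration, not the fitted parameters of the study.

```python
import math

R = 8.314       # gas constant, J/(mol K)
EA = 45000.0    # activation energy for 1,2-PDO formation (J/mol), from the abstract

def arrhenius(k_ref, t_ref, t):
    """Rate constant at temperature t given k_ref at t_ref (Arrhenius law)."""
    return k_ref * math.exp(-EA / R * (1.0 / t - 1.0 / t_ref))

def rk4(f, y0, t0, t1, steps):
    """Classic 4th-order Runge-Kutta integration of dy/dt = f(t, y)."""
    h = (t1 - t0) / steps
    y, t = y0, t0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Power-law rate -dC/dt = k * C^n with assumed k and n (illustrative only)
k, n = 0.015, 1.0          # per-minute rate constant, reaction order
c0 = 1.0                   # normalized initial glycerol concentration
c_final = rk4(lambda t, c: -k * c ** n, c0, 0.0, 120.0, 1000)
conversion = 1 - c_final / c0
```

For n = 1 the analytic solution is C = C0·exp(-kt), which the integrator reproduces to machine precision; the genetic-algorithm step in the study then amounts to searching k and n so that such integrated profiles match the experimental concentration data.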
Procedia PDF Downloads 144
203 Glasshouse Experiment to Improve Phytomanagement Solutions for Cu-Polluted Mine Soils
Authors: Marc Romero-Estonllo, Judith Ramos-Castro, Yaiza San Miguel, Beatriz Rodríguez-Garrido, Carmela Monterroso
Abstract:
Mining activity is among the main sources of trace and heavy metal(loid) pollution worldwide, which is a hazard to human and environmental health. That is why several projects have emerged for the remediation of such polluted sites. Phytomanagement strategies perform well and provide substantial additional benefits. In this work, a glasshouse assay was set up with trace-element-polluted soils from an old Cu mine (NW Spain) that forms part of the PhytoSUDOE network of phytomanaged contaminated field sites (PhytoSUDOE Project SOE1/P5/E0189). The objective was to evaluate improvements induced by the following phytoremediation-related treatments: three increasingly complex amendments, applied alone or together with plant growth (Populus nigra L. alone or together with Trifolium repens L.), and three different rhizosphere bioinocula (Plant Growth Promoting Bacteria (PGP), mycorrhiza (MYC), or mixed (PGP+MYC)). After 110 days of growth, plants were collected, biomass was weighed, and tree length was measured. Physical-chemical analyses were carried out to determine pH, effective Cation Exchange Capacity, carbon and nitrogen contents, bioavailable phosphorus (Olsen bicarbonate method), pseudo-total element content (microwave acid-digested fraction), EDTA-extractable metals (complexed fraction), and NH4NO3-extractable metals (easily bioavailable fraction). In plant material, nitrogen content and acid-digested elements were determined. Amendment use, plant growth, and bioinoculation were demonstrated to improve soil fertility and/or plant health within the time span of this study. In particular, pH levels increased from 3 (highly acidic) to 5 (acidic) in the worst-case scenario, even reaching 7 (neutrality) in the best plots. Increases in organic matter and pH were related to decreases in the bioavailability of polluting metals. Plants grew better with both the most complex amendment and the intermediate one, with few differences due to bioinoculation.
With the least complex amendment (compost alone), the beneficial effects of bioinoculants were more observable, although plants did not thrive well. On unamended soils, plants neither sprouted nor bloomed. The scheme assayed in this study is suitable for phytomanagement of these kinds of soils affected by mining activity. These findings should now be tested on a larger scale.
Keywords: aided phytoremediation, mine pollution, phytostabilization, soil pollution, trace elements
Procedia PDF Downloads 66
202 Landslide Susceptibility Analysis in the St. Lawrence Lowlands Using High Resolution Data and Failure Plane Analysis
Authors: Kevin Potoczny, Katsuichiro Goda
Abstract:
The St. Lawrence lowlands extend from Ottawa to Quebec City and are known for large deposits of sensitive Leda clay. Leda clay deposits are responsible for many large landslides, such as the 1993 Lemieux and 2010 St. Jude (4 fatalities) landslides. Due to the large extent and sensitivity of Leda clay, regional hazard analysis for landslides is an important tool in risk management. A 2018 regional study by Farzam et al. on the susceptibility of Leda clay slopes to landslide hazard used 1 arc second topographical data. A qualitative method known as Hazus was used to estimate susceptibility by checking for various criteria at a location and to determine a susceptibility rating on a scale of 0 (no susceptibility) to 10 (very high susceptibility). These criteria are slope angle, geological group, soil wetness, and distance from waterbodies. Given the flat nature of the St. Lawrence lowlands, the current assessment fails to capture local slopes, such as the St. Jude site. Additionally, the data did not allow one to analyze failure planes accurately. This study substantially improves the analysis performed by Farzam et al. in two respects. First, regional assessment with high-resolution data allows for the identification of local sites that may previously have been classified as low susceptibility. This then provides the opportunity to conduct a more refined analysis of the failure plane of the slope. Slopes derived from 1 arc second data are relatively gentle (0-10 degrees) across the region; however, the 1- and 2-meter resolution 2022 HRDEM provided by NRCAN shows that short, steep slopes are present. At a regional level, 1 arc second data can underestimate the susceptibility of short, steep slopes, which can be dangerous as Leda clay landslides behave retrogressively and travel upwards into flatter terrain. At the location of the St. Jude landslide, slope differences are significant.
1 arc second data shows a maximum slope of 12.80 degrees and a mean slope of 4.72 degrees, while the HRDEM data shows a maximum slope of 56.67 degrees and a mean slope of 10.72 degrees. This equates to a difference of three susceptibility levels when the soil is dry and one susceptibility level when wet. GIS software is used to create a regional susceptibility map across the St. Lawrence lowlands at 1- and 2-meter resolutions. Failure planes are necessary to differentiate between small and large landslides, a distinction that has so far been ignored in regional analysis. Leda clay failures can only retrogress as far as their failure planes, so the regional analysis must be able to transition smoothly into a more robust local analysis. It is expected that slopes within the region, previously assessed with low susceptibility scores, contain local areas of high susceptibility. The goal is to create opportunities for local failure plane analysis to be undertaken, which has not been possible before. Due to the low resolution of previous regional analyses, any slope near a waterbody could be considered hazardous. However, high-resolution regional analysis would allow for more precise determination of hazard sites.
Keywords: Hazus, high-resolution DEM, Leda clay, regional analysis, susceptibility
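The resolution effect described above can be reproduced in a small sketch: the same step-like bank sampled on 2 m cells, then block-averaged to 4 m cells, loses a large part of its computed slope. The DEM values are synthetic, and the slope formula is a simplified central-difference method rather than the exact GIS algorithm used in the study.

```python
import math

def slope_degrees(dem, cell_size):
    """Slope (degrees) at each interior cell of a DEM grid using
    central differences (a simplified Horn-style method)."""
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            dzdx = (dem[i][j + 1] - dem[i][j - 1]) / (2 * cell_size)
            dzdy = (dem[i + 1][j] - dem[i - 1][j]) / (2 * cell_size)
            out[i][j] = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    return out

def downsample(dem, factor):
    """Block-average a DEM by an integer factor (coarser resolution)."""
    rows, cols = len(dem) // factor, len(dem[0]) // factor
    return [[sum(dem[i * factor + a][j * factor + b]
                 for a in range(factor) for b in range(factor)) / factor ** 2
             for j in range(cols)]
            for i in range(rows)]

# Synthetic terrain: a short, steep bank with 10 m of relief, on 2 m cells.
dem_fine = [[0, 0, 0, 10, 10, 10] for _ in range(6)]
max_fine = max(v for row in slope_degrees(dem_fine, 2.0) for v in row)

# The same terrain block-averaged to 4 m cells: the bank is smoothed out.
dem_coarse = downsample(dem_fine, 2)
max_coarse = max(v for row in slope_degrees(dem_coarse, 4.0) for v in row)
```

On this synthetic bank the fine grid reports about 68 degrees while the coarsened grid reports about 51, the same direction of underestimation that the 1 arc second versus HRDEM comparison shows at St. Jude.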
Procedia PDF Downloads 75
201 Analyzing the Construction of Collective Memories by History Movies/TV Programs: Case Study of Masters in the Forbidden City
Authors: Lulu Wang, Yongjun Xu, Xiaoyang Qiao
Abstract:
The Forbidden City is well known for being full of Chinese cultural and historical relics. However, Masters in the Forbidden City, a documentary film, does not just dwell on the stories of the past. Instead, it focuses on ordinary people, the restorers of the relics and antiquities, which has caught the attention of Chinese audiences. This popular documentary film suggests a new way of presenting relics, antiquities, and paintings with a modern humanistic character through films and TV programs. Of course, this cannot be just a simple explanation like that given by tour guides in museums. It should be a perfect combination of scenes, heritage, stories, storytellers, and background music. What we want to do is to dig out the humanity behind the heritage and then create a virtual scene through which the audience can achieve emotional resonance with that humanity. Two problems arise. One is why, compared with entertainment shows, people prefer to watch seemingly tedious restoration work. The other is what interaction exists between these history documentary films, the heritage, the audiences, and collective memory. This paper mainly used the methods of text analysis and data analysis. Audience comment texts were collected from a range of popular video sites. By analyzing these texts, a word cloud chart was produced showing which words people preferred to use when commenting on the film. Then the usage rate of all comment words was calculated. After that, a radar chart was drawn to show the ranked results. Eventually, each comment was given an emotional value classification according to its tone and content. Based on the above analysis results, an interaction model among the audience, history films/TV programs, and collective memory can be summarized. According to the word cloud chart, people prefer to use words such as moving, history, love, family, celebrity, tone...
From those emotional words, it can be seen that Chinese audiences felt proud and shared a sense of collective identity, leaving comments such as: "To our great motherland!" and "Chinese traditional culture is really profound!" It was found that, in constructing a symbology of collective memory, the films formed an imaginary system by organizing a 'personalized audience'. The audience is not just a recipient of information, but a participant in the documentary films and a co-creator of collective memory. At the same time, the traditional background music, the spectacular scenes, and the tone of the storytellers/hosts are also believed to be important, so it is suggested that museums could cooperate with producers of film and TV programs to create vivid scenes for the public. This may be a more artistic way of opening heritage to the whole world. Keywords: audience, heritages, history movies, TV programs
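The comment-mining workflow described above (collect comment texts, count word usage, rank the words, and assign each comment an emotional value) can be sketched in outline. This is an illustrative reconstruction only: the comments, the emotion lexicon, and its scores below are invented placeholders, not data or methods from the study.

```python
from collections import Counter

# Toy stand-ins for scraped audience comments (the study collected
# real comments from popular Chinese video sites).
comments = [
    "moving history love family",
    "moving celebrity tone history",
    "love history moving",
]

# Hypothetical per-word emotional values; the paper's own
# classification scheme is not specified in the abstract.
emotion_lexicon = {"moving": 1.0, "love": 1.0, "history": 0.5,
                   "family": 0.8, "celebrity": 0.2, "tone": 0.3}

# 1) Word-usage counts: the data behind a word cloud chart.
counts = Counter(word for c in comments for word in c.split())

# 2) Usage rate of each comment word.
total = sum(counts.values())
usage_rate = {w: n / total for w, n in counts.items()}

# 3) Rank results, as would feed a radar chart.
ranking = [w for w, _ in counts.most_common()]

# 4) Mean emotional value of each comment.
def emotional_value(comment):
    words = comment.split()
    return sum(emotion_lexicon.get(w, 0.0) for w in words) / len(words)

scores = [emotional_value(c) for c in comments]
```

In practice the counting and scoring would run over thousands of comments, but the pipeline shape (tokenize, count, rank, score) stays the same.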
Procedia PDF Downloads 161
200 Assessing Sydney Tar Ponds Remediation and Natural Sediment Recovery in Nova Scotia, Canada
Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer
Abstract:
Sydney Harbour, Nova Scotia has long been subject to effluent and atmospheric inputs of metals, polycyclic aromatic hydrocarbons (PAHs), and polychlorinated biphenyls (PCBs) from a large coking operation and steel plant that operated in Sydney for nearly a century until closure in 1988. Contaminated effluents from the industrial site resulted in the creation of the Sydney Tar Ponds, one of Canada’s largest contaminated sites. Since its closure, there have been several attempts to remediate this former industrial site, and finally, in 2004, the governments of Canada and Nova Scotia committed to remediating the site to reduce potential ecological and human health risks to the environment. The Sydney Tar Ponds and Coke Ovens cleanup project has become the most prominent remediation project in Canada today. As an integral part of remediation of the site (which consisted of solidification/stabilization and associated capping of the Tar Ponds), an extensive multiple-media environmental effects program was implemented to assess what effects remediation had on the surrounding environment and, in particular, harbour sediments. Additionally, longer-term natural sediment recovery rates of select contaminants predicted for the harbour sediments were compared to current conditions. During remediation, potential contributions to sediment quality, in addition to remedial efforts, were evaluated; these included a significant harbour dredging project, propeller wash from harbour traffic, storm events, adjacent loading/unloading of coal, and municipal wastewater treatment discharges. Two sediment sampling methodologies, sediment grab and gravity corer, were also compared to evaluate the detection of subtle changes in sediment quality. Results indicated that the overall spatial distribution pattern of historical contaminants remains unchanged, although at much lower concentrations than previously reported, due to natural recovery.
Measurements of sediment indicator parameter concentrations confirmed that natural recovery rates of Sydney Harbour sediments were in broad agreement with predicted concentrations, in spite of ongoing remediation activities. Overall, most measured parameters in sediments showed little temporal variability during three years of remediation compared to baseline, even when using different sampling methodologies, except for significant increases in total PAH concentrations detected during one year of remediation monitoring. The data confirmed the effectiveness of mitigation measures implemented during construction relative to harbour sediment quality, despite other anthropogenic activities and the dynamic nature of the harbour. Keywords: contaminated sediment, monitoring, recovery, remediation
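The comparison of measured concentrations against predicted natural recovery can be illustrated with a first-order decay model, a common simplification in sediment recovery studies. This is a generic sketch, not the study's model: the rate constant, baseline, and measured value below are invented for illustration.

```python
import math

def predicted_concentration(c0, k, years):
    """First-order natural recovery: C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-k * years)

# Illustrative values only (not the Sydney Harbour data).
c0 = 100.0       # hypothetical baseline contaminant concentration, mg/kg
k = 0.05         # assumed recovery rate constant, 1/year
measured = 86.0  # hypothetical measured concentration after 3 years

predicted = predicted_concentration(c0, k, 3)

# "Broad agreement" check: measured within, say, 20% of predicted.
in_agreement = abs(measured - predicted) / predicted < 0.20
```

A real assessment would fit k from the monitoring time series rather than assume it.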
Procedia PDF Downloads 236
199 Demographic Assessment and Evaluation of Degree of Lipid Control in High Risk Indian Dyslipidemia Patients
Authors: Abhijit Trailokya
Abstract:
Background: Cardiovascular diseases (CVDs) are the major cause of morbidity and mortality in both developed and developing countries. Many clinical trials have demonstrated that lowering low-density lipoprotein cholesterol (LDL-C) reduces the incidence of coronary and cerebrovascular events across a broad spectrum of patients at risk. Guidelines for the management of patients at risk have been established in Europe and North America. The guidelines have advocated progressively lower LDL-C targets and more aggressive use of statin therapy. In Indian patients, comprehensive data on dyslipidemia management and its treatment outcomes are inadequate. There is a lack of information on existing treatment patterns, the profile of patients being treated, and factors that determine treatment success or failure in achieving desired goals. Purpose: The present study was planned to determine the lipid control status in high-risk dyslipidemic patients treated with lipid-lowering therapy in India. Methods: This cross-sectional, non-interventional, single-visit program was conducted across 483 sites in India, enrolling male and female patients with high-risk dyslipidemia, aged 18 to 65 years, who had visited their respective physician at a hospital or healthcare center for a routine health check-up. The percentage of high-risk dyslipidemic patients achieving an adequate LDL-C level (< 70 mg/dL) on lipid-lowering therapy, and the association of lipid parameters with patient characteristics, comorbid conditions, and lipid-lowering drugs, were analysed. Results: 3089 patients were enrolled in the study, of which 64% were males. LDL-C data were available for 95.2% of the patients; only 7.7% of these patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which may be due to inability to follow therapeutic plans, poor compliance, or inadequate counselling by the physician.
A physician’s lack of awareness of recent treatment guidelines may also contribute to patients’ poor adherence, as may failing to explain adequately the benefits and risks of a medication and giving insufficient consideration to the patient’s lifestyle and the cost of medication. Statins were the most commonly used anti-dyslipidemic drugs across the population. A higher proportion of patients had the comorbid conditions of CVD and diabetes mellitus across all dyslipidemic patients. Conclusion: As per the European Society of Cardiology guidelines, the ideal LDL-C level in high-risk dyslipidemic patients should be less than 70 mg/dL. In the present study, only 7.7% of the patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which is very low. Most high-risk dyslipidemic patients in India are on suboptimal dosages of statins, so more aggressive, higher-dosage statin therapy may be required to achieve target LDL-C levels in high-risk Indian dyslipidemic patients. Keywords: cardiovascular disease, diabetes mellitus, dyslipidemia, LDL-C, lipid lowering drug, statins
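The headline figure reported above (7.7% of patients achieving LDL-C < 70 mg/dL) is a simple proportion over the records with LDL-C data. A minimal sketch of that computation, with invented LDL-C values rather than the study's patient data:

```python
# Hypothetical LDL-C results (mg/dL) for ten illustrative patients;
# the actual study analysed LDL-C data for 95.2% of 3089 patients.
ldl_values = [68, 95, 110, 64, 150, 88, 72, 69, 130, 101]

TARGET = 70  # high-risk LDL-C goal (mg/dL) per the guidelines cited above

# Count patients below target and express as a percentage.
at_goal = sum(1 for v in ldl_values if v < TARGET)
achievement_rate = 100 * at_goal / len(ldl_values)
```

The same proportion, stratified by comorbidity or statin dosage, would support the subgroup associations the study reports.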
Procedia PDF Downloads 201
198 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web
Authors: Aayushi Somani, Siba P. Samal
Abstract:
Three-dimensional (3D) meshes are data structures which store geometric information about an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies produce high-resolution samples, which lead to high-resolution meshes. While high-resolution meshes give better-quality rendering and hence are used often, the processing, as well as storage, of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies, such as WebGL and WebVR, has enabled high-fidelity rendering of huge meshes. However, there remains a gap in the ability to stream huge meshes to native client and browser applications due to high network latency, and there is an inherent delay in loading WebGL pages with large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into and processed on hand-held devices with their limited resources. One solution conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our approach is a two-step one: random accessible progressive compression and its parallel implementation. The first step partitions the original mesh into multiple sub-meshes; we then invoke data parallelism on these sub-meshes for their compression. Subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept can be used to revolutionize the way e-commerce and virtual reality technology work on consumer electronic devices: objects can be compressed on the server and transmitted over the network, and the progressive decompression can be performed on the client device and the result rendered.
The multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a smoother user experience. The approach can also be used in WebVR for common activities such as virtual reality shopping, watching movies, and playing games. Our experiments and comparison with existing techniques show encouraging results in terms of latency (the compressed size is ~10-15% of the original mesh), processing time (a 20-22% improvement over the serial implementation), and quality of user experience in the web browser. Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR
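The two-step scheme described above (partition the original mesh into sub-meshes, then compress the sub-meshes data-parallel, so each can later be decompressed independently for random access) can be sketched as follows. This is a simplified stand-in, not the paper's implementation: zlib replaces the actual progressive mesh codec, a thread pool replaces the browser-side threading, and the toy mesh is invented.

```python
import struct
import zlib
from concurrent.futures import ThreadPoolExecutor

def partition(vertices, n_parts):
    """Step 1: split the vertex list into roughly equal sub-meshes."""
    size = (len(vertices) + n_parts - 1) // n_parts
    return [vertices[i:i + size] for i in range(0, len(vertices), size)]

def compress_submesh(sub):
    """Serialize a sub-mesh's vertices (3 x float32 each) and compress.
    zlib stands in for a real progressive mesh codec."""
    raw = b"".join(struct.pack("<3f", *v) for v in sub)
    return zlib.compress(raw)

def decompress_submesh(blob):
    raw = zlib.decompress(blob)
    return [struct.unpack_from("<3f", raw, off)
            for off in range(0, len(raw), 12)]

# A toy mesh: vertices on a line (a real mesh would come from a scan).
mesh = [(float(i), 0.0, 0.0) for i in range(1000)]

# Step 2: data-parallel compression of the sub-meshes.
with ThreadPoolExecutor() as pool:
    blobs = list(pool.map(compress_submesh, partition(mesh, 4)))

# Client side: each sub-mesh decompresses independently, which is what
# gives random access into the progressive stream.
restored = [v for blob in blobs for v in decompress_submesh(blob)]
```

Because each blob is self-contained, a client can fetch and decode only the sub-meshes in view, which is the essence of random accessible streaming.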
Procedia PDF Downloads 170
197 Risk and Coping: Understanding Community Responses to Calls for Disaster Evacuation in Central Philippines
Authors: Soledad Natalia M. Dalisay, Mylene De Guzman
Abstract:
In archipelagic countries like the Philippines, many communities thrive along coastal areas. The sea is the community members’ main source of livelihood and the site of many cultural activities; for these communities, the sea is their life. Nevertheless, the sea also poses a hazard during the rainy season, when typhoons frequent their communities, and coastal communities often face threats from the storm surges and flooding that accompany typhoons. During such periods, disaster evacuation programs are implemented. However, in many instances, evacuation has been the bane of local government officials implementing such programs, as resistance from community members is often encountered. Program implementers often attribute such resistance to people being hard-headed and ignorant of the potential impacts of living in hazard-prone areas. This paper argues that it is not for these reasons that people refuse to evacuate. Drawing from data collected during fieldwork in three sites in Central Philippines affected by super typhoon Haiyan, this study aimed to provide a contextualized understanding of peoples’ refusal to heed disaster evacuation warnings. The study utilized a multi-sited ethnography approach, with in-depth episodic interviews, focus group discussions, participatory risk mapping, and key informant interviews used to gather data on peoples’ experiences and insights, specifically on evacuation during typhoon Haiyan. The study showed that, in refusing to leave their homes for pre-emptive evacuation, people are protecting priorities and considerations vital to their social lives. It is not that they are unaware of the risks they face from the hazard; rather, they have faith in the local knowledge and strategies that they have developed since the time of their ancestors as a result of living and engaging with hazards in their areas for as long as they can remember.
The study also revealed that risk in encounters with hazards was gendered. Furthermore, previous engagement with local government officials and the manner in which the pre-emptive evacuation programs were implemented had cast doubt on the value of such programs in saving lives. Life in the designated evacuation areas can be as dangerous as, if not more dangerous than, living in their coastal homes; there seems to be an impression that, in the government’s evacuation program, people were being moved from hazard zones to death zones. Thus, this paper ends with several recommendations that may contribute to building more responsive evacuation programs that aim to build people’s resilience while taking into consideration the local moral world of communities in identified hazard zones. Keywords: coastal communities, disaster evacuation, disaster risk perception, social and cultural responses to hazards
Procedia PDF Downloads 337
196 Evaluating Viability of Using South African Forestry Process Biomass Waste Mixtures as an Alternative Pyrolysis Feedstock in the Production of Bio Oil
Authors: Thembelihle Portia Lubisi, Malusi Ntandoyenkosi Mkhize, Jonas Kalebe Johakimu
Abstract:
Fertilizers play an important role in maintaining the productivity and quality of plants. Inorganic fertilizers (containing nitrogen, phosphorus, and potassium) are widely used in South Africa, as they are considered inexpensive and highly productive. When applied, a portion of the excess fertilizer is retained in the soil, while a portion enters water streams through surface runoff or the irrigation system adopted. Excess nutrients from fertilizers entering water streams eventually result in harmful algal blooms (HABs) in freshwater systems, which not only disrupt wildlife but can also produce toxins harmful to humans. The use of agro-chemicals such as pesticides and herbicides has been associated with increased antimicrobial resistance (AMR) in humans, as the treated plants are consumed by humans. This bacterial resistance poses a threat because it undermines the health sector’s ability to treat infectious diseases. Archaeological studies have found that pyrolysis liquids were already used in the time of the Neanderthals as a biocide and plant protection product. Pyrolysis is the thermal degradation of plant biomass or organic material under anaerobic conditions, leading to the production of char, bio-oils, and syngas. Bio-oil constituents can be categorized into a water-soluble fraction (wood vinegar) and water-insoluble fractions (tar and light oils). Wood vinegar (pyroligneous acid) is reported to contain highly oxygenated compounds, including acids, alcohols, aldehydes, ketones, phenols, esters, furans, and other multifunctional compounds, with molecular weights and compositions that vary depending on the source biomass and the pyrolysis operating conditions. Various researchers have found wood vinegar to be efficient in the eradication of termites, effective in plant protection and plant growth promotion, and antibacterial, inhibiting micro-organisms such as Candida yeast and E. coli.
This study investigated the characterisation of South African forestry product processing waste, with the intention of evaluating the potential of using the respective biomass wastes as feedstock for bio-oil production via the pyrolysis process. Using biomass waste materials in the production of wood vinegar has the advantage that it not only reduces environmental pollution and landfill requirements but also does not negatively affect food security. The biomass wastes investigated were from the popular tree types in KwaZulu-Natal (KZN): pine saw dust (PSD), pine bark (PB), eucalyptus saw dust (ESD), and eucalyptus bark (EB). Furthermore, the research investigated the possibility of mixing the different wastes, with the aim of reducing the cost of separating raw materials before feeding them into the pyrolysis process; mixing also increases the amount of biomass material available for beneficiation. Two mixtures were studied: a 50/50 mixture of PSD and ESD (EPSD), and a mixture containing pine saw dust, eucalyptus saw dust, pine bark, and eucalyptus bark (EPSDB). Characterisation of the biomass waste covers proximate analysis (volatiles, ash, fixed carbon), ultimate analysis (carbon, hydrogen, nitrogen, oxygen, sulphur), higher heating value, structural analysis (cellulose, hemicellulose, and lignin), and thermogravimetric analysis. Keywords: characterisation, biomass waste, saw dust, wood waste
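As a worked illustration of how the ultimate analysis connects to the higher heating value mentioned above, one classical correlation is Dulong's formula. The coefficients below are one commonly quoted form of that formula (variants exist), and the composition is an invented example, not measured data for these wastes.

```python
def dulong_hhv(c, h, o, s):
    """Estimate higher heating value (MJ/kg) from ultimate analysis
    (mass percentages) using one common form of Dulong's formula:
    HHV = 0.3383*C + 1.443*(H - O/8) + 0.0942*S."""
    return 0.3383 * c + 1.443 * (h - o / 8.0) + 0.0942 * s

# Invented composition, loosely typical of softwood sawdust (wt %);
# a real characterisation would use the measured C, H, O, S values.
hhv = dulong_hhv(c=50.0, h=6.0, o=43.0, s=0.1)
```

Such correlations are usually cross-checked against bomb-calorimeter measurements, since biomass's high oxygen content pushes Dulong's formula to the edge of its validity.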
Procedia PDF Downloads 68
195 The Impact of Tourism on the Intangible Cultural Heritage of Pilgrim Routes: The Case of El Camino de Santiago
Authors: Miguel Angel Calvo Salve
Abstract:
This qualitative and quantitative study will identify the impact of tourism pressure on the intangible cultural heritage of the pilgrim route of El Camino de Santiago (Saint James Way) and propose an approach to a sustainable tourism model for these cultural routes. Since 1993, the Spanish section of the pilgrim route of El Camino de Santiago has been on the World Heritage List. In 1994, the International Committee on Cultural Routes (CIIC-ICOMOS) initiated its work with the goal of studying, preserving, and promoting cultural routes and their significance as a whole. Another ICOMOS document, the 2008 Charter on Cultural Routes, pointed out the importance of both tangible and intangible heritage and the need for a holistic vision in preserving these important cultural assets. Tangible elements provide physical confirmation of the existence of these cultural routes, while the intangible elements serve to give sense and meaning to the route as a whole. The intangible assets of a cultural route are key to understanding the route's significance and its associated heritage values. Like many pilgrim routes, the route to Santiago, as the result of a long evolutionary process, exhibits and is supported by intangible assets, including hospitality, cultural and religious expressions, music, literature, and artisanal trade, among others. A large increase in pilgrims walking the route with very different aims, and the resulting tourism pressure, have shown how fragile and vulnerable the dynamic links between the intangible cultural heritage and the local inhabitants along El Camino are. The economic benefits for the communities and population along cultural routes are commonly fundamental to the micro-economies of the people living there, substituting for traditional productive activities, which in turn modifies and has an impact on the surrounding environment and the route itself.
Consumption of heritage is one of the major issues of sustainable preservation promoted with the intention of revitalizing these sites and places. The adaptation of local communities to new conditions aimed at preserving and protecting existing heritage has had a significant impact on the immaterial inheritance. Based on questionnaires given to pilgrims, tourists, and local communities along El Camino during the peak season of the year, and using official statistics from the Galician Pilgrim’s Office, this study will identify the risks and threats to El Camino de Santiago as a cultural route. The threats visible nowadays due to the impact of mass tourism include transformations of tangible heritage, consumerism of the intangible, changes in local activities, loss of authenticity of symbols and spiritual significance, and pilgrimage transformed into a tourism ‘product’, among others. The study will also approach measures and solutions to mitigate those impacts and better preserve this type of cultural heritage. This study will therefore help the route’s service providers and policymakers to better preserve the cultural route as a whole and ultimately improve the satisfying experience of pilgrims. Keywords: cultural routes, El Camino de Santiago, impact of tourism, intangible heritage
Procedia PDF Downloads 83
194 Architectural Wind Data Maps Using an Array of Wireless Connected Anemometers
Authors: D. Serero, L. Couton, J. D. Parisse, R. Leroy
Abstract:
In urban planning, an increasing number of cities require wind analysis to verify the comfort of public spaces and the areas around buildings. These studies are made using computational fluid dynamics (CFD) simulation. However, this technique is often based on wind information taken from meteorological stations located several kilometers from the spot of analysis. The approximate input data on the project's surroundings produce imprecise results for this type of analysis: they can only be used to get the general behavior of wind in a zone, not to evaluate precise wind speeds. This paper presents another approach to this problem, based on collecting wind data and generating an urban wind cartography using connected ultrasonic anemometers. These are wireless devices that send immediate wind data to a remote server. Assembled in an array, these devices generate geo-localized data on wind, such as speed, temperature, and pressure, and allow us to compare wind behavior on a specific site or building. These Netatmo-type anemometers communicate by wifi with central equipment, which shares data acquired by a wide variety of devices, such as wind speed, indoor and outdoor temperature, rainfall, and sunshine. Besides its precision, this method extracts geo-localized data on any type of site that can be fed back into the architectural design of a building or a public place. Furthermore, this method allows a precise calibration of a virtual wind tunnel using numerical aeraulic simulations (such as the STAR-CCM+ software) and hence the development of a complete volumetric model of wind behavior over a roof area or an entire city block. The paper showcases connected ultrasonic anemometers that were installed for an 18-month survey on four study sites in the Grand Paris region. This case study focuses on Paris as an urban environment with multiple historical layers, whose diversity of typologies and buildings allows considering different ways of capturing wind energy.
The objective of this approach is to categorize the different types of wind in urban areas. In particular, identifying the minimum and maximum wind spectrum helps define the choice and performance of the wind energy capturing devices that could be installed there, taking into account the location on the roof of a building, the type of wind, the height of the device relative to roof levels, and the potential nuisances generated. The method allows identifying the characteristics of wind turbines in order to maximize their performance on an urban site with turbulent wind. Keywords: computer fluid dynamic simulation in urban environment, wind energy harvesting devices, net-zero energy building, urban wind behavior simulation, advanced building skin design methodology
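The array idea described above (geo-localized devices streaming readings to a central server, from which per-site statistics such as the minimum and maximum wind spectrum are derived) can be sketched minimally. The device IDs, coordinates, and speeds below are invented; a real deployment would aggregate months of time-stamped readings.

```python
from statistics import mean

# Hypothetical readings pushed by wifi anemometers to the server:
# (device_id, latitude, longitude, wind_speed_m_s)
readings = [
    ("roof-A", 48.857, 2.352, 3.2),
    ("roof-A", 48.857, 2.352, 5.1),
    ("roof-B", 48.860, 2.340, 1.0),
    ("roof-B", 48.860, 2.340, 2.4),
    ("roof-B", 48.860, 2.340, 1.7),
]

def per_device_stats(readings):
    """Group readings by device and compute the wind spectrum
    (min, mean, max speed in m/s) observed at each site."""
    by_device = {}
    for device, lat, lon, speed in readings:
        by_device.setdefault(device, []).append(speed)
    return {d: (min(s), mean(s), max(s)) for d, s in by_device.items()}

stats = per_device_stats(readings)
```

Mapping these per-device spectra back onto their coordinates is what turns the raw stream into a wind data map.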
Procedia PDF Downloads 101
193 Air–Water Two-Phase Flow Patterns in PEMFC Microchannels
Authors: Ibrahim Rassoul, A. Serir, E-K. Si Ahmed, J. Legrand
Abstract:
The acronym PEM refers to Proton Exchange Membrane or, alternatively, Polymer Electrolyte Membrane. Due to their high efficiency, low operating temperature (30–80 °C), and rapid evolution over the past decade, PEMFCs are increasingly emerging as a viable alternative clean power source for automotive and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One such challenge is elucidating the mechanisms underlying water transport in, and removal from, PEMFCs. On one hand, sufficient water is needed in the polymer electrolyte membrane (PEM) to maintain sufficiently high proton conductivity. On the other hand, too much liquid water in the cathode can cause “flooding” (that is, pore space filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The experimental transparent fuel cell used in this work was designed to represent an actual full-scale fuel cell geometry. Depending on the operating conditions, a number of flow regimes may appear in the microchannel: droplet flow, blocking liquid water bridges/plugs (concave and convex forms), slug/plug flow, and film flow. Some of these flow patterns are new, while others have already been observed in PEMFC microchannels. An algorithm in MATLAB was developed to automatically determine the flow structure (e.g., slug, droplet, plug, film) of detected liquid water in the test microchannels and yield information on the distribution of water among the different flow structures. A video processing algorithm was developed to automatically detect dynamic and static liquid water present in the gas channels and generate relevant quantitative information. This software allows the user to obtain measurements from images of small objects in a more precise and systematic way.
The void fractions are also determined based on image analysis. The aim of this work is to provide a comprehensive characterization of two-phase flow in an operating fuel cell, which can be used towards the optimization of water management, informs design guidelines for gas delivery microchannels for fuel cells, and is essential in the design and control of diverse applications. The approach combines numerical modeling with experimental visualization and measurements. Keywords: polymer electrolyte fuel cell, air-water two phase flow, gas diffusion layer, microchannels, advancing contact angle, receding contact angle, void fraction, surface tension, image processing
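The image-based void fraction mentioned above can be illustrated with a minimal stand-in for the paper's MATLAB routine: threshold each grayscale frame into liquid and gas pixels, then take the gas-phase area fraction. The threshold value and the toy frame below are invented for illustration.

```python
def void_fraction(frame, threshold=0.5):
    """Classify each pixel as liquid (intensity below threshold) or
    gas, and return the gas-phase area fraction of the channel."""
    gas = total = 0
    for row in frame:
        for px in row:
            total += 1
            if px >= threshold:
                gas += 1
    return gas / total

# Toy 4x5 grayscale frame of a channel: low values represent a liquid
# water slug, high values the surrounding gas phase.
frame = [
    [0.9, 0.9, 0.8, 0.9, 0.9],
    [0.9, 0.2, 0.1, 0.9, 0.9],
    [0.9, 0.2, 0.1, 0.9, 0.9],
    [0.9, 0.9, 0.9, 0.9, 0.9],
]

alpha = void_fraction(frame)
```

Repeating this per video frame gives the time series of void fraction from which flow regimes (droplet, slug, plug, film) can be distinguished.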
Procedia PDF Downloads 312
192 From Social Equity to Spatial Equity in Urban Space: Precedent Study Approach
Authors: Dorsa Pourmojib, Marc J. Boutin
Abstract:
Urban space is used every day by a diverse range of urban dwellers, each with different expectations. In this space, opportunities and resources are not distributed equitably among urban dwellers, despite the importance of inclusivity, and some marginalized groups may not be considered at all. These include people with low incomes, immigrants from diverse cultures, various age groups, and those with special needs. To this end, this research aims to enhance social equity in urban space by bridging the gap between social equity and spatial equity in the urban context. This gap in the knowledge base related to urban design may exist for several reasons: a lack of studies on the relationship between social equity and spatial equity in urban open space; a lack of practical design strategies for promoting social equity in urban open space; a lack of proper site analysis, in terms of the context and users of the site, both when designing new urban open spaces and when developing existing ones; a lack of researchers who are designers; and, finally, the priorities of city policies in addressing such issues, since doing so consumes time, money, and energy. The main objective of this project is to address this gap in the knowledge by exploring the relationship between social equity and spatial equity in urban open space. Answering the main question of this research is a promising step to this end: 'What are the considerations towards providing social equity through the design of urban elements that offer spatial equity?' To answer the main question, several secondary questions should be addressed, such as: How can the characteristics of social equity be translated to spatial equity? What are the diverse users' needs, and which of their needs are not considered in a given site? What are the specific elements in the site which should be designed in order to promote social equity?
What is the current situation of social and spatial equity in the proposed site? To answer the research questions and achieve the proposed objectives, a three-step methodology has been implemented. Firstly, a comprehensive research framework based on the available literature has been presented. Afterwards, three different urban spaces have been analyzed as precedent studies in terms of specific key research questions: Naqsh-e Jahan Square (Iran), Superkilen Park (Denmark), and Campo Dei Fiori (Italy). In this regard, a gap analysis of the current situation and the proposed situation of these sites has been conducted. Finally, by combining the design considerations extracted from the precedent studies and the literature review, practical design strategies have been introduced as a result of this research. The presented guidelines enable designers to create socially equitable urban spaces. To conclude, this research proposes a spatial approach to social inclusion and equity in urban space by presenting a practical framework and criteria for translating social equity to spatial equity in urban areas. Keywords: inclusive urban design, social equity, social inclusion, spatial equity
Procedia PDF Downloads 142
191 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments present challenges which can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and limitations on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted-light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images.
Similar training sessions with improved membrane image quality (clear outlines and shapes of the membrane showing the boundaries of each cell) proportionally improved the nuclei predictions, reducing errors relative to the ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need to use multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models. Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
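The normalization step in the methods above (rescaling each image so its mean pixel intensity is 0.5) can be sketched as follows. A simple multiplicative rescaling is assumed here, which is one way to satisfy the stated mean-0.5 criterion; the abstract does not specify the exact scheme, and the tiny "image" is a toy array, not confocal data.

```python
def normalize_to_mean(image, target=0.5):
    """Rescale pixel intensities so the image mean equals `target`.
    Assumes a purely multiplicative rescaling of a 2D intensity grid."""
    flat = [px for row in image for px in row]
    current = sum(flat) / len(flat)
    scale = target / current
    return [[px * scale for px in row] for row in image]

# A toy 2x3 "image" standing in for one slice of a confocal z-stack.
image = [[0.1, 0.2, 0.3],
         [0.4, 0.5, 0.6]]

normalized = normalize_to_mean(image)
mean_after = sum(px for row in normalized for px in row) / 6
```

Applying this per slice across all 20 images of a z-stack yields the normalized input the training step expects.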
Procedia PDF Downloads 205
190 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea
Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park
Abstract:
Groundwater is a significant source of drinking and irrigation water in Miryang city, owing to the limited number of surface water reservoirs and the high seasonal variation in precipitation. Population growth, together with the expansion of agricultural land use and industrial development, may affect the quality and management of groundwater. This research utilized multiple geostatistical approaches, such as multivariate statistics, factor analysis, cluster analysis, and kriging, to identify the hydrogeochemical processes, characterize the factors controlling the distribution of groundwater geochemistry, and develop risk maps, exploiting data obtained from the chemical investigation of groundwater samples across the study area. A total of 79 samples were collected and analyzed for major and trace elements using an atomic absorption spectrometer (AAS). Chemical maps of the groundwater, produced in a 2-D spatial Geographic Information System (GIS), provided a powerful tool for detecting potential sites where groundwater is threatened by contamination. The GIS-based maps showed higher rates of contamination in the central and southern areas, with relatively lower rates in the northern and southwestern parts. This could be attributed to the effects of irrigation, residual saline water, municipal sewage, and livestock wastes. At well elevations above 85 m, scatter diagrams indicated that the groundwater of the research area was mainly influenced by saline water and NO3. pH measurements revealed slightly acidic conditions due to atmospheric CO2 dissolved in the soil, while saline water had a major impact on the higher values of TDS and EC. 
Based on the cluster analysis results, the groundwater was categorized into three groups: a CaHCO3 type of fresh water, a NaHCO3 type slightly influenced by seawater, and Ca-Cl and Na-Cl types heavily affected by saline water. The most predominant water type in the study area was CaHCO3. Contamination sources and chemical characteristics were identified from the interrelationships revealed by the factor analysis and from the cluster analysis: the chemical elements loading on factor 1 were related to the effect of seawater, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution, and location of groundwater contamination were mapped using kriging methods. Thus, the geostatistical models provided more accurate results for identifying the sources of contamination and evaluating groundwater quality, and GIS proved an effective tool to visualize and analyze the issues affecting water quality in Miryang city. Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, Kriging techniques
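The study maps contamination by spatial interpolation of point measurements at wells. A full kriging implementation requires fitting a variogram model, so the sketch below deliberately substitutes inverse-distance weighting, a simpler deterministic interpolator, to illustrate the idea; the coordinates and concentration values are hypothetical, not data from the study:

```python
def idw_interpolate(samples, x, y, power=2.0):
    """Inverse-distance-weighted estimate of a concentration at (x, y).

    `samples` is a list of (sx, sy, value) tuples, e.g. well locations
    with measured NO3 concentrations (values here are hypothetical).
    This is a deterministic stand-in for ordinary kriging, which would
    additionally require a fitted variogram and solving a linear system
    for the weights."""
    num = den = 0.0
    for sx, sy, value in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value  # exact hit on a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return num / den
```

Evaluating this estimator over a grid of (x, y) points yields the kind of 2-D chemical map the abstract describes, minus kriging's error variance surface.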
Procedia PDF Downloads 168
189 Superlyophobic Surfaces for Increased Heat Transfer during Condensation of CO₂
Authors: Ingrid Snustad, Asmund Ervik, Anders Austegard, Amy Brunsvold, Jianying He, Zhiliang Zhang
Abstract:
CO₂ capture, transport and storage (CCS) is essential to mitigate global anthropogenic CO₂ emissions. To make CCS a widely implemented technology in, e.g., the power sector, the reduction of costs is crucial, and every part of the CCS chain must contribute to a large cost reduction. By increasing the heat transfer efficiency during liquefaction of CO₂, a necessary step for, e.g., ship transportation, the costs associated with the process are reduced. Heat transfer rates during dropwise condensation are up to one order of magnitude higher than during filmwise condensation. Dropwise condensation usually occurs on a non-wetting (superlyophobic) surface: the vapour condenses in discrete droplets, and the non-wetting nature of the surface reduces the adhesion forces and results in shedding of condensed droplets. This, in turn, exposes fresh nucleation sites for further droplet condensation, effectively increasing the liquefaction efficiency. In addition, the droplets themselves have a smaller heat transfer resistance than a liquid film, resulting in increased heat transfer rates from vapour to solid. Surface tension is a crucial parameter for dropwise condensation, due to its impact on the solid-liquid contact angle. A low surface tension usually results in a low contact angle, and hence in spreading of the condensed liquid on the surface. CO₂ has a very low surface tension compared to water; at temperatures and pressures relevant for CO₂ condensation, it is comparable to that of organic compounds such as pentane. Dropwise condensation of CO₂ is therefore a completely new field of research, and knowledge of several important parameters, such as contact angle and drop size distribution, must be gained in order to understand the nature of the condensation. A new setup has been built to measure these relevant parameters. The main parts of the experimental setup are a pressure chamber, in which the condensation occurs, and a high-speed camera. 
The process of CO₂ condensation is visually monitored, and one can determine the contact angle, the contact angle hysteresis, and hence the surface adhesion of the liquid. CO₂ condensation on different surfaces, e.g. copper, aluminium and stainless steel, can be analysed. The experimental setup is built for accurate measurement of the temperature difference between the surface and the condensing vapour, with the temperature measured directly underneath the condensing surface, and for accurate pressure measurement in the vapour. The next step of the project will be to fabricate nanostructured surfaces to induce superlyophobicity. Roughness is a key feature for achieving contact angles above 150° (the limit for superlyophobicity), and controlled, periodic roughness on the nanoscale is beneficial. Surfaces that are non-wetting towards organic non-polar liquids are candidate surface structures for dropwise condensation of CO₂. Keywords: CCS, dropwise condensation, low surface tension liquid, superlyophobic surfaces
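The link between nanoscale roughness and contact angles above 150° is conventionally described by the Cassie-Baxter relation for a composite (air-trapping) surface. The abstract does not name this model, so it is offered here as an illustrative assumption, with hypothetical input values:

```python
import math

def cassie_baxter_angle(theta_young_deg, solid_fraction):
    """Apparent contact angle on a composite rough surface.

    Cassie-Baxter: cos(theta*) = f * (cos(theta_Y) + 1) - 1, where
    theta_Y is the intrinsic (Young) contact angle on the smooth solid
    and f is the fraction of the drop base resting on solid rather than
    trapped air. The values used below are illustrative, not measured
    values from this study."""
    cos_star = solid_fraction * (math.cos(math.radians(theta_young_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(cos_star))
```

For example, a smooth-surface angle of 110° combined with a solid fraction of 0.1 already pushes the apparent angle past the 150° superlyophobic threshold, which is why periodic nanoscale texturing is pursued.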
Procedia PDF Downloads 278
188 Fatigue Truck Modification Factor for Design Truck (CL-625)
Authors: Mohamad Najari, Gilbert Grondin, Marwan El-Rich
Abstract:
Design trucks in standard codes are selected to represent real traffic loads, based on the amount of damage they cause on structures (specifically bridges) and roads. A limited number of trucks are run over a bridge one at a time, and the damage to the bridge is recorded for each truck. One design truck is then run over the same bridge "n" times, where "n" is the number of trucks used previously, to calculate the damage the design truck causes to the same bridge. To make these damages equal, a reduction factor is needed for that specific design truck in the code. Because a limited number of trucks cannot exactly represent the real traffic over the life of the structure, these reduction factors are not accurately calculated and should be modified accordingly. Starting in July 2004, vehicle load data were collected at six weigh-in-motion (WIM) sites owned by Alberta Transportation for eight consecutive years. This database includes more than 200 million trucks. These data give the opportunity to compare the effect of any standard fatigue truck's weight and of the real traffic load on the fatigue life of bridges, which leads to a modification of the fatigue truck factor in the code. To calculate the damage for each truck, the truck is run over the bridge, the moment history of the detail under study is recorded, stress range cycles are counted, and the damage is then calculated using available S-N curves. A 2000-line FORTRAN code has been developed to perform the analysis and calculate the damage caused by the trucks in the database for all eight fatigue categories according to the Canadian Institute of Steel Construction standard (CSA S16). Stress cycles are counted using the rainflow counting method. The modification factors for the design truck (CL-625) are calculated for two bridge configurations, a single-span bridge and a four-span bridge, and ten span lengths varying from 1 m to 200 m. 
This was found to be sufficient and representative for a simply supported span, the positive moment in end spans of bridges with two or more spans, the positive moment in interior spans of bridges with three or more spans, and the negative moment at an interior support of multi-span bridges. The moment history of the mid-span is recorded for the single-span bridge, and the exterior positive moment, interior positive moment, and support negative moment are recorded for the four-span bridge. The influence lines are expressed by a polynomial obtained from a regression analysis of the influence lines extracted from SAP2000. It is found that for the design truck (CL-625) the fatigue truck factor varies from 0.35 to 0.55, depending on span length and bridge configuration. Detailed results will be presented in upcoming papers. This code can be used for any design truck available in standard codes. Keywords: bridge, fatigue, fatigue design truck, rain flow analysis, FORTRAN
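The damage-per-truck step described above (counted stress-range cycles combined with an S-N curve) is the Palmgren-Miner linear damage sum, which can be sketched as follows; the S-N constants A and m below are illustrative placeholders, not the CSA S16 category values used by the authors:

```python
def sn_cycles_to_failure(stress_range, A, m=3.0):
    """Allowable cycles from a basic S-N relation, N = A / S^m."""
    return A / stress_range ** m

def miner_damage(stress_cycles, A, m=3.0):
    """Palmgren-Miner damage sum D = sum(n_i / N_i) over the counted
    stress-range bins (e.g. from rainflow counting of the moment
    history); failure is predicted when D reaches 1.

    `stress_cycles` maps stress range (MPa) -> counted cycles n_i.
    A and m are illustrative S-N constants, not CSA S16 values."""
    return sum(n / sn_cycles_to_failure(s, A, m)
               for s, n in stress_cycles.items())
```

Equating the damage sum accumulated by the WIM truck population with that of the design truck run n times is what yields the modification factor reported in the abstract.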
Procedia PDF Downloads 521
187 A Kunitz-Type Serine Protease Inhibitor from Rock Bream, Oplegnathus fasciatus Involved in Immune Responses
Authors: S. D. N. K. Bathige, G. I. Godahewa, Navaneethaiyer Umasuthan, Jehee Lee
Abstract:
Kunitz-type serine protease inhibitors (KTIs) have been identified in various organisms, including animals, plants, and microbes. These proteins share single or multiple Kunitz inhibitory domains, linked together or associated with other types of domains. The characteristic Kunitz-type domain is composed of around 60 amino acid residues and contains six conserved cysteine residues, which stabilize it through three disulfide bridges. KTIs are involved in various physiological processes, such as ion channel blocking, blood coagulation, fibrinolysis, and inflammation. In this study, a protein containing two Kunitz-type domains was identified from the rock bream database and designated RbKunitz. The coding sequence of RbKunitz encodes 507 amino acids with a theoretical molecular mass of 56.2 kDa and an isoelectric point (pI) of 5.7. Several functional domains, including a MANEC superfamily domain, a PKD superfamily domain, and an LDLa domain, were predicted in addition to the two characteristic Kunitz domains. Moreover, trypsin interaction sites were identified in the Kunitz domains. Homology analysis revealed that RbKunitz shared the highest identity (77.6%) with Takifugu rubripes. Twenty-eight completely conserved cysteine residues were recognized when RbKunitz was compared with orthologs from different taxonomic groups. This structural evidence indicates the rigidity of the RbKunitz fold, required to achieve its proper function. A phylogenetic tree constructed using the neighbor-joining method showed that the KTIs from fish and non-fish taxa have evolved separately; rock bream clustered with Takifugu rubripes. SYBR Green qPCR was performed to quantify RbKunitz transcripts in different tissues, including immune-challenged tissues. RbKunitz mRNA was detected in all tissues analyzed (muscle, spleen, head kidney, blood, heart, skin, liver, intestine, kidney and gills), with the highest transcript level in gill tissues. 
The temporal transcription profile of RbKunitz in rock bream blood was analyzed upon challenge with LPS (lipopolysaccharide), poly I:C (polyinosinic:polycytidylic acid), and Edwardsiella tarda to understand the immune responses of this gene. Compared to the unchallenged control, RbKunitz exhibited strong up-regulation at 24 h post-injection (p.i.) after LPS and E. tarda injection, and comparatively robust expression at 3 h p.i. upon poly I:C challenge. Taken together, these data indicate that RbKunitz may be involved in immune responses to pathogenic stress, protecting the rock bream. Keywords: Kunitz-type, rock bream, immune response, serine protease inhibitor
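The abstract does not state how the SYBR Green qPCR data were converted into relative expression; a common choice for challenge-versus-control comparisons of this kind is the 2^-ΔΔCt (Livak) method, sketched below with hypothetical Ct values and a hypothetical housekeeping reference gene:

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method.

    All Ct values here are hypothetical; 'ref' denotes a housekeeping
    gene (e.g. beta-actin). Returns the fold change of the target gene
    (e.g. RbKunitz) in the challenged sample relative to the
    unchallenged control."""
    dct_treated = ct_target_treated - ct_ref_treated    # normalize to reference
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control                    # challenge vs. control
    return 2.0 ** (-ddct)
```

A fold change well above 1 at 24 h p.i. would correspond to the strong up-regulation reported after LPS and E. tarda challenge.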
Procedia PDF Downloads 379
186 RAD-Seq Data Reveals Evidence of Local Adaptation between Upstream and Downstream Populations of Australian Glass Shrimp
Authors: Sharmeen Rahman, Daniel Schmidt, Jane Hughes
Abstract:
Paratya australiensis Kemp (Decapoda: Atyidae) is a widely distributed indigenous freshwater shrimp, highly abundant in eastern Australia. This species has been considered a model stream organism for studying genetics, dispersal, biology, behaviour, and evolution in atyids. Paratya has a filter-feeding and scavenging habit, which plays a significant role in shaping lotic community structure: it has been shown to remove periphyton and sediment from hard substrates of coastal streams and hence acts as a strongly interacting ecosystem macroconsumer. In addition, Paratya is one of the major food sources for stream-dwelling fishes. Paratya australiensis is a cryptic species complex consisting of nine highly divergent mitochondrial DNA lineages. Among them, one lineage has been observed to favour upstream sites at higher altitudes, with cooler water temperatures. This study aims to identify local adaptation in upstream and downstream populations of this lineage in three streams in the Conondale Range, north-east of Brisbane, Queensland, Australia. Two populations (upstream and downstream) from each stream were chosen to test for local adaptation, and a parallel pattern of adaptation is expected across all streams. Six populations, each consisting of 24 individuals, were sequenced using the restriction-site-associated DNA sequencing (RAD-seq) technique. Genetic markers (SNPs) were developed using double-digest RAD sequencing (ddRAD-seq) and used for de novo assembly of the Paratya genome. De novo assembly with the Stacks program produced 56,344 loci for 47 individuals from one stream. Of these individuals, 39 shared 5,819 loci, and these markers are being used to test for local adaptation between upstream and downstream populations using Fst outlier tests (Arlequin) and Bayesian analysis (BayeScan). The Fst outlier test detected 27 loci likely to be under selection, and the Bayesian analysis also detected 27 loci as under selection. 
Among these 27 loci, three showed significant evidence of selection in the BayeScan analysis. In contrast, the upstream and downstream populations are strongly diverged at neutral loci, with an Fst of 0.37. Similar analyses will be performed on all six populations to determine whether there is a parallel pattern of adaptation across all streams. Furthermore, a multi-locus among-population covariance analysis will be carried out to identify potential markers under selection and to compare single-locus versus multi-locus approaches for detecting local adaptation. The adaptive genes identified in this study can be used in future work to design primers and test for adaptation in related crustacean species. Keywords: Paratya australiensis, rainforest streams, selection, single nucleotide polymorphism (SNPs)
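The per-SNP divergence statistic underlying the outlier scans can be illustrated with Hudson's Fst estimator; the abstract does not specify which estimator Arlequin was configured to use, and the allele frequencies below are hypothetical rather than values from the study:

```python
def hudson_fst(p1, n1, p2, n2):
    """Hudson's Fst estimator for one biallelic SNP.

    p1, p2: allele frequencies in the two populations (e.g. upstream
    and downstream); n1, n2: numbers of sampled alleles (2x the number
    of diploid individuals). The frequencies used here are illustrative.
    """
    # Numerator: squared frequency difference, corrected for sampling
    num = ((p1 - p2) ** 2
           - p1 * (1 - p1) / (n1 - 1)
           - p2 * (1 - p2) / (n2 - 1))
    # Denominator: between-population heterozygosity
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den
```

Loci whose Fst greatly exceeds the genome-wide neutral background (0.37 in this study) are the candidates flagged by the outlier tests.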
Procedia PDF Downloads 255
185 Assessment of Surface Water Quality near Landfill Sites Using a Water Pollution Index
Authors: Alejandro Cittadino, David Allende
Abstract:
Landfilling of municipal solid waste is a common waste management practice in Argentina, as in many parts of the world. There is extensive scientific literature on the potential negative effects of landfill leachates on the environment, so rigorous control and monitoring systems are necessary. Due to the specific composition of municipal solid waste in Argentina, local landfill leachates contain large amounts of organic matter (biodegradable, but also refractory to biodegradation), as well as ammonia-nitrogen, small traces of some heavy metals, and inorganic salts. To investigate the surface water quality of the Reconquista river adjacent to the Norte III landfill, water samples both upstream and downstream of the site are collected quarterly and analyzed for 43 parameters, including organic matter, heavy metals, and inorganic salts, as required by local standards. The objective of this study is to apply a water quality index that considers the leachate characteristics in order to determine the quality status of the watercourse through the landfill. The water pollution index method has been widely used in water quality assessments, particularly of rivers, and plays an increasingly important role in water resource management, since it provides a number simple enough for the public to understand that states the overall water quality at a certain location and time. The chosen water quality index (ICA) is based on the values of six parameters: dissolved oxygen (in mg/l and percent saturation), temperature, biochemical oxygen demand (BOD5), ammonia-nitrogen, and chloride (Cl-) concentration. The ICA index was determined both upstream and downstream along the Reconquista river, on a rating scale from 0 (very poor water quality) to 10 (excellent water quality). 
The monitoring results indicated that the water quality was unaffected by possible leachate runoff, since the index scores upstream and downstream were ranked in the same category, although in general most of the samples were classified as having poor water quality according to the index's scale. The annually averaged ICA scores (computed from quarterly measurements) were 4.9, 3.9, 4.4 and 5.0 upstream and 3.9, 5.0, 5.1 and 5.0 downstream during the study period between 2014 and 2017. Additionally, the water quality seemed to exhibit distinct seasonal variations, probably due to annual precipitation patterns in the study area. The ICA water quality index appears appropriate for evaluating landfill impacts, since it accounts mainly for organic pollution and inorganic salts, and heavy metals are absent from the local leachate composition; however, the inclusion of other parameters could be more decisive in discerning the stream reaches affected by landfill activities. Future work may consider adding other parameters to the index, such as total organic carbon (TOC) and total suspended solids (TSS), since they are present in the leachate in high concentrations. Keywords: landfill, leachate, surface water, water quality index
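The comparison of upstream and downstream categories can be sketched as an averaging-and-classification step. The abstract only fixes the endpoints of the 0-10 scale (0 very poor, 10 excellent), so the category band edges below are assumptions for illustration, not the study's actual breakpoints:

```python
def average_ica(scores):
    """Average a series of ICA scores (e.g. annual values over the
    2014-2017 study period)."""
    return sum(scores) / len(scores)

def classify_ica(score):
    """Map a 0-10 ICA score to a quality category.

    The band edges are hypothetical; the abstract states only that 0
    means very poor and 10 means excellent water quality."""
    if score < 3:
        return "very poor"
    if score < 6:
        return "poor"
    if score < 8:
        return "fair"
    if score < 9:
        return "good"
    return "excellent"
```

With these assumed bands, the reported scores (3.9-5.1 at both stations) all fall in the same "poor" category, matching the study's conclusion that upstream and downstream ranked identically.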
Procedia PDF Downloads 150
184 Fe3O4 Decorated ZnO Nanocomposite Particle System for Waste Water Remediation: An Absorptive-Photocatalytic Based Approach
Authors: Prateek Goyal, Archini Paruthi, Superb K. Misra
Abstract:
Contamination of water resources has been a major concern, drawing attention to the need for new material models for the treatment of effluents. Existing conventional wastewater treatment methods are sometimes ineffective, and uneconomical, at remediating contaminants such as heavy metal ions (mercury, arsenic, lead, cadmium and chromium), organic matter (dyes, chlorinated solvents), and high salt concentrations, which make water unfit for consumption. We believe that a nanotechnology-based strategy, in which nanoparticles are used as a tool to remediate a class of pollutants, would prove effective owing to their high surface-area-to-volume ratio and their higher selectivity, sensitivity, and affinity. In recent years, scientific advances have been made in the application of photocatalytic (ZnO, TiO2, etc.) and magnetic nanomaterials to remediating contaminants (such as heavy metals and organic dyes) from water and wastewater. Our study focuses on the synthesis of ZnO, Fe3O4, and Fe3O4-coated ZnO nanoparticulate systems and on monitoring their efficiency in removing heavy metals and dyes simultaneously. A multitude of ZnO nanostructures (spheres, rods, and flowers), obtained via multiple routes (microwave and hydrothermal approaches), offers a wide range of light-active photocatalytic properties. The phase purity, morphology, size distribution, zeta potential, surface area, and porosity, in addition to the magnetic susceptibility of the particles, were characterized by XRD, TEM, CPS, DLS, BET and VSM measurements, respectively. Furthermore, the introduction of crystalline defects into ZnO nanostructures can assist light activation for improved dye degradation. The band gap of a material and its absorbance are concrete indicators of its photocatalytic activity. 
Due to their high surface area, high porosity, affinity towards metal ions, and availability of active surface sites, iron oxide nanoparticles show promise for the adsorption of heavy metal ions. An additional advantage of a magnetic nanocomposite is that it offers magnetic-field-responsive separation and recovery of the catalyst. We therefore believe that the ZnO-linked Fe3O4 nanosystem will be both efficient and reusable. Combining improved photocatalytic efficiency with adsorption for environmental remediation has been a long-standing challenge, and the nanocomposite system offers the best of the features that the two individual metal oxides provide for nanoremediation. Keywords: adsorption, nanocomposite, nanoremediation, photocatalysis
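Adsorption of metal ions onto iron oxide surfaces is commonly summarized by an isotherm model. The abstract reports no fitted isotherm, so the Langmuir form below, with illustrative parameters, is only a sketch of how adsorption capacity is typically quantified:

```python
def langmuir_q(c_eq, q_max, k_l):
    """Langmuir isotherm: adsorbed amount q = q_max * K * C / (1 + K * C).

    c_eq: equilibrium metal-ion concentration (mg/L);
    q_max: monolayer capacity (mg adsorbed per g of Fe3O4);
    k_l: Langmuir affinity constant (L/mg).
    q_max and k_l here are illustrative, not values from the study."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)
```

The model captures the saturation behaviour expected of a finite population of active surface sites: q approaches q_max as the concentration grows.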
Procedia PDF Downloads 237
183 Efficiency of Different Types of Addition onto the Hydration Kinetics of Portland Cement
Authors: Marine Regnier, Pascal Bost, Matthieu Horgnies
Abstract:
Some of the problems to be solved in the concrete industry are linked to the use of low-reactivity cement, the hardening of concrete in cold weather, and the manufacture of precast concrete without a costly heating step. Developing these applications requires accelerating the hydration kinetics, in order to decrease the setting time and obtain significant compressive strengths as soon as possible. The mechanisms enhancing the hydration kinetics of alite or Portland cement (e.g. the creation of nucleation sites) have already been studied in the literature (e.g. using distinct additions such as titanium dioxide nanoparticles, calcium carbonate fillers, water-soluble polymers, C-S-H, etc.). The goal of this study, however, was to establish a clear ranking of the efficiency of several types of additions using a robust and reproducible methodology based on isothermal calorimetry (performed at 20°C). The cement was a CEM I 52.5N PM-ES (Blaine fineness of 455 m²/kg). To ensure the reproducibility of the experiments and avoid any decrease in reactivity before use, the cement was stored in waterproof, sealed bags to prevent contact with moisture and carbon dioxide. The experiments were performed on Portland cement pastes with a water-to-cement ratio of 0.45, incorporating different compounds (industrially available or laboratory-synthesized) selected according to their main composition and their specific surface area (SSA, calculated using the Brunauer-Emmett-Teller (BET) model and nitrogen adsorption isotherms performed at 77 K). The intrinsic effects of (i) dry powders (e.g. fumed silica, activated charcoal, nano-precipitates of calcium carbonate, afwillite germs, nanoparticles of iron and iron oxides, etc.) and (ii) aqueous solutions (e.g. containing calcium chloride, hydrated Portland cement or Master X-SEED 100, etc.) were investigated. 
The influence of the amount of addition, calculated relative to the dry extract of each addition compared to cement (while conserving the same water-to-cement ratio), was also studied. The results demonstrated that the X-SEED®, the hydrated calcium nitrate, and the calcium chloride (and, to a lesser extent, a solution of hydrated Portland cement) were able to accelerate the hydration kinetics of Portland cement, even at low concentration (e.g. 1 wt.% of dry extract compared to cement). At higher rates of addition, the fumed silica, the precipitated calcium carbonate and the titanium dioxide can also accelerate the hydration. In the case of the nano-precipitates of calcium carbonate, a correlation was established between the SSA and the accelerating effect. On the contrary, the nanoparticles of iron or iron oxides, the activated charcoal and the dried crystallised hydrates did not show any accelerating effect. Future experiments are scheduled to establish the ranking of these additions, in terms of accelerating effect, using low-reactivity cements and other water-to-cement ratios. Keywords: acceleration, hydration kinetics, isothermal calorimetry, Portland cement
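Isothermal calorimetry ranks additions by comparing heat-flow curves; a standard reduction is to integrate the heat flow over time into cumulative heat of hydration. The trapezoidal sketch below uses hypothetical data points, not measurements from the study:

```python
def cumulative_heat(times_h, heat_flow_mw_per_g):
    """Trapezoidal integration of an isothermal-calorimetry heat-flow
    curve (mW per g of cement vs. hours) into cumulative heat in J/g.

    The time / heat-flow pairs fed to this function are hypothetical;
    in practice they come from the calorimeter run at 20 degC."""
    total_j_per_g = 0.0
    for i in range(1, len(times_h)):
        dt_s = (times_h[i] - times_h[i - 1]) * 3600.0          # hours -> seconds
        avg_mw = 0.5 * (heat_flow_mw_per_g[i] + heat_flow_mw_per_g[i - 1])
        total_j_per_g += avg_mw * 1e-3 * dt_s                  # mW -> W, W*s = J
    return total_j_per_g
```

An accelerating addition shifts the main hydration peak earlier, so at a given age its cumulative heat exceeds that of the reference paste.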
Procedia PDF Downloads 256
182 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series
Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold
Abstract:
To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (e.g., OzFlux, AmeriFlux, ChinaFLUX) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance to validate process modelling analyses, field surveys, and remote sensing assessments, there are serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux records, avoiding the limitations of using a single algorithm and therefore reducing the error and the uncertainties associated with the gap-filling process. In this study, 2013 data from five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs, and Tumbarumba) were used to develop an ensemble machine learning model, combining five feedforward neural networks (FFNNs) with different structures and an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 being the best FFNN). The most significant improvement occurred in the estimation of the extreme diurnal values (during midday and sunrise) and the nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. 
The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Moreover, the performance difference between the ensemble model and its individual components was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher amount of plant photosynthesis, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the estimates. Ensemble machine learning models are therefore potentially capable of improving data estimation and regression outcomes when a single algorithm appears to leave no more room for improvement. Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network
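The two-layer structure described above (base learners whose predictions become the inputs of a meta-learner) is a stacking ensemble. The minimal sketch below substitutes two generic prediction series for the five FFNNs and a closed-form linear least-squares fit for XGBoost, so it illustrates only the stacking idea, not the authors' actual implementation:

```python
def fit_stack_weights(preds_a, preds_b, y):
    """Fit second-layer weights (wa, wb) by least squares so that
    wa * preds_a + wb * preds_b approximates the observed flux y.

    Stand-in for the paper's second layer: there, five FFNNs produce
    the first-layer predictions and XGBoost is the meta-learner; here
    two prediction series and a linear meta-learner suffice to show
    the data flow."""
    # Normal equations for the 2-parameter least-squares problem
    saa = sum(a * a for a in preds_a)
    sbb = sum(b * b for b in preds_b)
    sab = sum(a * b for a, b in zip(preds_a, preds_b))
    say = sum(a * t for a, t in zip(preds_a, y))
    sby = sum(b * t for b, t in zip(preds_b, y))
    det = saa * sbb - sab * sab
    wa = (say * sbb - sby * sab) / det
    wb = (saa * sby - sab * say) / det
    return wa, wb

def stacked_predict(wa, wb, a, b):
    """Second-layer (gap-filling) estimate from two base predictions."""
    return wa * a + wb * b
```

Training the meta-learner on held-out first-layer predictions, rather than on the training residuals, is what lets the ensemble down-weight whichever base learner performs worst at a given site or season.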
Procedia PDF Downloads 139