Search results for: dumping sites
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2057

197 Landslide Susceptibility Analysis in the St. Lawrence Lowlands Using High Resolution Data and Failure Plane Analysis

Authors: Kevin Potoczny, Katsuichiro Goda

Abstract:

The St. Lawrence lowlands extend from Ottawa to Quebec City and are known for large deposits of sensitive Leda clay. Leda clay deposits are responsible for many large landslides, such as the 1993 Lemieux and 2010 St. Jude (4 fatalities) landslides. Due to the large extent and sensitivity of Leda clay, regional hazard analysis for landslides is an important tool in risk management. A 2018 regional study by Farzam et al. on the susceptibility of Leda clay slopes to landslide hazard used 1 arc second topographical data. A qualitative method known as Hazus is used to estimate susceptibility by checking a location against various criteria and determining a susceptibility rating on a scale of 0 (no susceptibility) to 10 (very high susceptibility). These criteria are slope angle, geological group, soil wetness, and distance from waterbodies. Given the flat nature of the St. Lawrence lowlands, the current assessment fails to capture local slopes, such as the St. Jude site. Additionally, the data did not allow failure planes to be analyzed accurately. This study substantially improves the analysis performed by Farzam et al. in two respects. First, regional assessment with high-resolution data allows identification of localized areas that may previously have been classified as low susceptibility. This in turn provides the opportunity to conduct a more refined analysis of the failure plane of the slope. Slopes derived from 1 arc second data are relatively gentle (0-10 degrees) across the region; however, the 1- and 2-meter resolution 2022 HRDEM provided by NRCAN shows that short, steep slopes are present. At a regional level, 1 arc second data can underestimate the susceptibility of short, steep slopes, which is dangerous because Leda clay landslides behave retrogressively and travel upwards into flatter terrain. At the location of the St. Jude landslide, slope differences are significant: 1 arc second data shows a maximum slope of 12.80 degrees and a mean slope of 4.72 degrees, while the HRDEM data shows a maximum slope of 56.67 degrees and a mean slope of 10.72 degrees. This equates to a difference of three susceptibility levels when the soil is dry and one susceptibility level when wet. GIS software is used to create a regional susceptibility map across the St. Lawrence lowlands at 1- and 2-meter resolutions. Failure planes are necessary to differentiate between small and large landslides, which have so far been ignored in regional analysis. Leda clay failures can only retrogress as far as their failure planes, so the regional analysis must be able to transition smoothly into a more robust local analysis. It is expected that slopes within the region previously assessed at low susceptibility scores contain local areas of high susceptibility. The goal is to create opportunities for local failure plane analysis, which has not been possible before. Due to the low resolution of previous regional analyses, any slope near a waterbody could be considered hazardous; high-resolution regional analysis allows a more precise determination of hazard sites.
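
As a concrete illustration of this kind of criteria-based rating, the Python sketch below scores a location on the 0-10 scale; the thresholds, score increments, and criterion encoding are illustrative placeholders, not the published Hazus values.

```python
# Hazus-style susceptibility lookup. Thresholds, increments, and the
# criterion encoding are illustrative placeholders, not published values.

def susceptibility(slope_deg, geologic_group, wet, near_water):
    """Return a susceptibility rating from 0 (none) to 10 (very high)."""
    # Base score from slope angle; steep local slopes dominate the rating.
    if slope_deg < 5:
        score = 1
    elif slope_deg < 15:
        score = 4
    elif slope_deg < 30:
        score = 7
    else:
        score = 9
    if geologic_group == "sensitive_clay":  # e.g., Leda clay
        score += 1
    if wet:                                 # soil wetness criterion
        score += 1
    if near_water:                          # distance-to-waterbody criterion
        score += 1
    return min(score, 10)

# Mean slopes at the St. Jude site from 1 arc second vs. HRDEM data.
print(susceptibility(4.72, "sensitive_clay", wet=False, near_water=True))
print(susceptibility(10.72, "sensitive_clay", wet=False, near_water=True))
```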

Keywords: hazus, high-resolution DEM, leda clay, regional analysis, susceptibility

Procedia PDF Downloads 48
196 Analyzing the Construction of Collective Memories by History Movies/TV Programs: Case Study of Masters in the Forbidden City

Authors: Lulu Wang, Yongjun Xu, Xiaoyang Qiao

Abstract:

The Forbidden City is well known for being full of Chinese cultural and historical relics. However, Masters in the Forbidden City, a documentary film, does not just dwell on stories of the past. Instead, it focuses on ordinary people, the restorers of the relics and antiquities, and this focus has caught the attention of Chinese audiences. This popular documentary suggests a new approach: presenting relics, antiquities and paintings with a modern humanistic character through films and TV programs. Of course, this cannot be just a simple explanation of the kind tour guides give in museums. It should be a perfect combination of scenes, heritage, stories, storytellers and background music. The aim is to uncover the humanity behind the heritage and then create a virtual scene in which the audience can feel an emotional resonance with that humanity. Two questions arise. The first is why, compared with entertainment shows, people prefer to watch seemingly tedious restoration work. The second is what interaction exists between such history documentary films, the heritage, the audiences and collective memory. This paper mainly used the methods of text analysis and data analysis. Audience comment texts were collected from popular video sites. By analyzing those texts, a word cloud chart was produced showing which words people preferred when commenting on the film. The usage rate of each comment word was then calculated, and a radar chart was drawn to show the ranked results. Finally, each comment was assigned an emotional value classification according to its tone and content. Based on these analysis results, an interaction model among the audience, history films/TV programs and collective memory can be summarized. According to the word cloud chart, viewers preferred words such as moving, history, love, family, celebrity and tone. From those emotional words, it can be seen that Chinese audiences felt proud and shared a sense of collective identity, leaving comments such as: 'To our great motherland! Chinese traditional culture is really profound!' It is found that, in the construction of collective memory symbology, the films formed an imaginary system by organizing a 'personalized audience'. The audience is not just a recipient of information, but a participant in the documentary films and a cooperator in collective memory. At the same time, the traditional background music, the spectacular scenes and the tone of the storytellers/hosts are also believed to be important, so it is suggested that museums could cooperate with movie and TV producers to create vivid scenes for the public. This may be a more artistic way to open heritage to the world.
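
As a minimal illustration of the comment-text step described above, the Python sketch below computes the word usage rates that underlie a word cloud; the sample comments are invented for illustration.

```python
from collections import Counter
import re

# Invented sample comments standing in for texts collected from video sites.
comments = [
    "So moving! The history behind every relic comes alive.",
    "Moving and full of love for our traditional culture.",
    "The restorers treat the relics like family. Moving history!",
]

# Tokenize, lowercase, and count usage rates for the word cloud.
words = [w for c in comments for w in re.findall(r"[a-z]+", c.lower())]
counts = Counter(words)
total = sum(counts.values())
for word, n in counts.most_common(5):
    print(f"{word}: {n} ({n / total:.1%})")
```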

Keywords: audience, heritages, history movies, TV programs

Procedia PDF Downloads 123
195 Assessing Sydney Tar Ponds Remediation and Natural Sediment Recovery in Nova Scotia, Canada

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia, has long been subject to effluent and atmospheric inputs of metals, polycyclic aromatic hydrocarbons (PAHs), and polychlorinated biphenyls (PCBs) from a large coking operation and steel plant that operated in Sydney for nearly a century until closure in 1988. Contaminated effluents from the industrial site resulted in the creation of the Sydney Tar Ponds, one of Canada's largest contaminated sites. Since its closure, there have been several attempts to remediate this former industrial site, and finally, in 2004, the governments of Canada and Nova Scotia committed to remediating the site to reduce potential ecological and human health risks to the environment. The Sydney Tar Ponds and Coke Ovens cleanup project has become the most prominent remediation project in Canada today. As an integral part of remediation of the site (which consisted of solidification/stabilization and associated capping of the Tar Ponds), an extensive multiple-media environmental effects program was implemented to assess what effects remediation had on the surrounding environment and, in particular, harbour sediments. Additionally, longer-term natural sediment recovery rates of select contaminants predicted for the harbour sediments were compared to current conditions. During remediation, potential contributions to sediment quality in addition to remedial efforts were evaluated, including a significant harbour dredging project, propeller wash from harbour traffic, storm events, adjacent loading/unloading of coal, and municipal wastewater treatment discharges. Two sediment sampling methodologies, sediment grab and gravity corer, were also compared to evaluate the detection of subtle changes in sediment quality. Results indicated that the overall spatial distribution pattern of historical contaminants remains unchanged, although at much lower concentrations than previously reported, due to natural recovery. Measurements of sediment indicator parameter concentrations confirmed that natural recovery rates of Sydney Harbour sediments were in broad agreement with predicted concentrations, in spite of ongoing remediation activities. Overall, most measured parameters in sediments showed little temporal variability, even when using different sampling methodologies, during three years of remediation compared to baseline, except for significant increases in total PAH concentrations detected during one year of remediation monitoring. The data confirmed the effectiveness of mitigation measures implemented during construction relative to harbour sediment quality, despite other anthropogenic activities and the dynamic nature of the harbour.

Keywords: contaminated sediment, monitoring, recovery, remediation

Procedia PDF Downloads 211
194 Demographic Assessment and Evaluation of Degree of Lipid Control in High Risk Indian Dyslipidemia Patients

Authors: Abhijit Trailokya

Abstract:

Background: Cardiovascular diseases (CVDs) are the major cause of morbidity and mortality in both developed and developing countries. Many clinical trials have demonstrated that lowering low-density lipoprotein cholesterol (LDL-C) reduces the incidence of coronary and cerebrovascular events across a broad spectrum of patients at risk. Guidelines for the management of patients at risk have been established in Europe and North America. The guidelines have advocated progressively lower LDL-C targets and more aggressive use of statin therapy. In Indian patients, comprehensive data on dyslipidemia management and its treatment outcomes are inadequate. There is a lack of information on existing treatment patterns, the profile of patients being treated, and the factors that determine treatment success or failure in achieving desired goals. Purpose: The present study was planned to determine the lipid control status in high-risk dyslipidemic patients treated with lipid-lowering therapy in India. Methods: This cross-sectional, non-interventional, single-visit program was conducted across 483 sites in India, enrolling male and female patients with high-risk dyslipidemia aged 18 to 65 years who had visited their physicians at a hospital or healthcare center for a routine health check-up. The percentage of high-risk dyslipidemic patients achieving an adequate LDL-C level (< 70 mg/dL) on lipid-lowering therapy and the association of lipid parameters with patient characteristics, comorbid conditions, and lipid-lowering drugs were analysed. Results: 3089 patients were enrolled in the study, of whom 64% were male. LDL-C data were available for 95.2% of the patients; only 7.7% of these patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which may be due to an inability to follow therapeutic plans, poor compliance, or inadequate counselling by physicians. Physicians' lack of awareness of recent treatment guidelines might also contribute to patients' poor adherence, as might failure to adequately explain the benefits and risks of a medication or to consider the patient's lifestyle and the cost of medication. Statins were the most commonly used anti-dyslipidemic drugs across the population. A higher proportion of patients had the comorbid conditions of CVD and diabetes mellitus across all dyslipidemic patients. Conclusion: As per the European Society of Cardiology guidelines, the ideal LDL-C level in high-risk dyslipidemic patients should be less than 70 mg/dL. In the present study, only 7.7% of the patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which is very low. Most high-risk dyslipidemic patients in India are on a suboptimal dosage of statins, so more aggressive, higher-dosage statin therapy may be required to achieve target LDL-C levels in high-risk Indian dyslipidemic patients.

Keywords: cardiovascular disease, diabetes mellitus, dyslipidemia, LDL-C, lipid lowering drug, statins

Procedia PDF Downloads 178
193 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web

Authors: Aayushi Somani, Siba P. Samal

Abstract:

Three-dimensional (3D) meshes are data structures which store geometric information about an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies acquire high-resolution samples, which lead to high-resolution meshes. While high-resolution meshes render at better quality and hence are used often, the processing as well as the storage of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies, such as WebGL and WebVR, has enabled high-fidelity rendering of huge meshes. However, there exists a gap in the ability to stream huge meshes to native client and browser applications due to high network latency, and there is an inherent delay in loading WebGL pages for large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into and processed on hand-held devices with their limited resources. One solution conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our solution is a two-step approach to random-accessible progressive compression and its parallel implementation. The first step partitions the original mesh into multiple sub-meshes; we then invoke data parallelism on these sub-meshes for compression. Corresponding threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept could substantially change the way e-commerce and virtual reality technology work on consumer electronic devices: objects can be compressed on the server, transmitted over the network, and progressively decompressed and rendered on the client device. The multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a smoother user experience. The approach can also be used in WebVR for widely used activities such as virtual-reality shopping, watching movies, and playing games. Our experiments and comparison with existing techniques show encouraging results in terms of latency (compressed size is ~10-15% of the original mesh), processing time (a 20-22% increase over the serial implementation) and quality of user experience in the web browser.
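
A minimal sketch of the two-step idea (partition, then data-parallel compression) is shown below in Python; multiprocessing and zlib stand in for the paper's threaded Chromium decompression logic and progressive mesh codec, and the vertex buffer is synthetic.

```python
import multiprocessing as mp
import zlib
import numpy as np

def compress_submesh(vertices):
    # Stand-in for the progressive mesh codec: plain zlib compression.
    return zlib.compress(vertices, 9)

def partition(mesh, n_parts):
    # Step 1: partition the original mesh into sub-meshes.
    return [part.tobytes() for part in np.array_split(mesh, n_parts)]

if __name__ == "__main__":
    # Dummy vertex buffer; random data compresses far worse than a real mesh.
    mesh = np.random.rand(100_000, 3).astype(np.float32)
    submeshes = partition(mesh, n_parts=8)
    # Step 2: data parallelism over the sub-meshes for compression.
    with mp.Pool() as pool:
        compressed = pool.map(compress_submesh, submeshes)
    ratio = sum(len(c) for c in compressed) / mesh.nbytes
    print(f"compressed size: {ratio:.1%} of the original")
```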

Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR

Procedia PDF Downloads 141
192 Risk and Coping: Understanding Community Responses to Calls for Disaster Evacuation in Central Philippines

Authors: Soledad Natalia M. Dalisay, Mylene De Guzman

Abstract:

In archipelagic countries like the Philippines, many communities thrive along coastal areas. The sea is the community members' main source of livelihood and the site of many cultural activities; for these communities, the sea is their life and livelihood. Nevertheless, the sea also poses a hazard during the rainy season, when typhoons frequent their communities. Coastal communities often face threats from the storm surges and flooding that are common during typhoons. During such periods, disaster evacuation programs are implemented. However, in many instances, evacuation has been the bane of the local government officials implementing such programs, as resistance from community members is often encountered. Program implementers often attribute such resistance to people being hard-headed and ignorant of the potential impacts of living in hazard-prone areas. This paper argues that it is not for these reasons that people refuse to evacuate. Drawing on data collected from fieldwork in three sites in Central Philippines affected by super typhoon Haiyan, this study aimed to provide a contextualized understanding of people's refusal to heed disaster evacuation warnings. This study utilized a multi-sited ethnography approach, with in-depth episodic interviews, focus group discussions, participatory risk mapping and key informant interviews used to gather data on people's experiences and insights, specifically on evacuation during typhoon Haiyan. This study showed that people have priorities and considerations vital to their social lives that they are protecting when they refuse to leave their homes for pre-emptive evacuation. It is not that they are unaware of the risks they face from the hazard; rather, they have faith in the local knowledge and strategies they have developed, since the time of their ancestors, from living and engaging with hazards in their areas for as long as they can remember. The study also revealed that risk in encounters with hazards was gendered. Furthermore, previous engagement with local government officials and the manner in which pre-emptive evacuation programs were implemented had cast doubt on the value of such programs in saving lives. Life in the designated evacuation areas can be as dangerous as, if not more dangerous than, living in their coastal homes. There is an impression that, in the government's evacuation program, people were being moved from hazard zones to death zones. Thus, this paper ends with several recommendations that may contribute to building more responsive evacuation programs that aim to build people's resilience while taking into consideration the local moral worlds of communities in identified hazard zones.

Keywords: coastal communities, disaster evacuation, disaster risk perception, social and cultural responses to hazards

Procedia PDF Downloads 314
191 The Impact of Tourism on the Intangible Cultural Heritage of Pilgrim Routes: The Case of El Camino de Santiago

Authors: Miguel Angel Calvo Salve

Abstract:

This qualitative and quantitative study identifies the impact of tourism pressure on the intangible cultural heritage of the pilgrim route of El Camino de Santiago (Saint James Way) and proposes an approach to a sustainable tourism model for such cultural routes. Since 1993, the Spanish section of the pilgrim route of El Camino de Santiago has been on the World Heritage List. In 1994, the International Committee on Cultural Routes (CIIC-ICOMOS) initiated its work with the goal of studying, preserving, and promoting cultural routes and their significance as a whole. The 2008 ICOMOS Charter on Cultural Routes pointed out the importance of both tangible and intangible heritage and the need for a holistic vision in preserving these important cultural assets. Tangible elements provide physical confirmation of the existence of these cultural routes, while the intangible elements serve to give sense and meaning to the route as a whole. The intangible assets of a cultural route are key to understanding the route's significance and its associated heritage values. Like many pilgrim routes, the route to Santiago, the result of a long evolutionary process, exhibits and is supported by intangible assets, including hospitality, cultural and religious expressions, music, literature, and artisanal trade, among others. A large increase in pilgrims walking the route with very different aims, and the accompanying tourism pressure, has shown how fragile and vulnerable the dynamic links between the intangible cultural heritage and the local inhabitants along El Camino are. Economic benefits for the communities and population along cultural routes are commonly fundamental to the micro-economies of the people living there, substituting for traditional productive activities, which in turn modifies and affects the surrounding environment and the route itself. Consumption of heritage is one of the major issues for the sustainable preservation promoted with the intention of revitalizing these sites and places. The adaptation of local communities to new conditions aimed at preserving and protecting existing heritage has had a significant impact on this immaterial inheritance. Based on questionnaires given to pilgrims, tourists and local communities along El Camino during the peak season of the year, and using official statistics from the Galician Pilgrim's Office, this study identifies the risks and threats to El Camino de Santiago as a cultural route. The threats visible today due to the impact of mass tourism include transformation of tangible heritage, consumerism of the intangible, changes in local activities, loss of authenticity of symbols and spiritual significance, and the transformation of pilgrimage into a tourism 'product', among others. The study also proposes measures and solutions to mitigate those impacts and better preserve this type of cultural heritage. It will therefore help route service providers and policymakers to better preserve the cultural route as a whole and ultimately improve the experience of pilgrims.

Keywords: cultural routes, El Camino de Santiago, impact of tourism, intangible heritage

Procedia PDF Downloads 42
190 Architectural Wind Data Maps Using an Array of Wireless Connected Anemometers

Authors: D. Serero, L. Couton, J. D. Parisse, R. Leroy

Abstract:

In urban planning, an increasing number of cities require wind analyses to verify the comfort of public spaces and the areas around buildings. These studies are made using computational fluid dynamics (CFD) simulation. However, this technique is often based on wind information taken from meteorological stations located several kilometers from the spot of analysis. The approximate input data on the project surroundings produce imprecise results for this type of analysis: they can only be used to get the general behavior of wind in a zone, not to evaluate precise wind speeds. This paper presents another approach to this problem, based on collecting wind data and generating an urban wind cartography using connected ultrasonic anemometers. These are wireless devices that send immediate wind data to a remote server. Assembled in an array, they generate geo-localized data on wind, such as speed, temperature and pressure, and allow us to compare wind behavior on a specific site or building. These Netatmo-type anemometers communicate by wifi with central equipment, which shares the data acquired by a wide variety of devices, covering wind speed, indoor and outdoor temperature, rainfall, and sunshine. Besides its precision, this method extracts geo-localized data on any type of site, which can be fed back into the architectural design of a building or a public place. Furthermore, this method allows precise calibration of a virtual wind tunnel using numerical aeraulic simulations (such as the STAR-CCM+ software) and thus the development of a complete volumetric model of wind behavior over a roof area or an entire city block. The paper showcases connected ultrasonic anemometers that were deployed for an 18-month survey on four study sites in the Grand Paris region. This case study focuses on Paris as an urban environment with multiple historical layers, whose diversity of typologies and buildings allows different ways of capturing wind energy to be considered. The objective of this approach is to categorize the different types of wind in urban areas. This, particularly the identification of the minimum and maximum wind spectrum, helps define the choice and performance of the wind energy capturing devices that could be installed there: the location on the roof of a building, the type of wind, the altimetry of the device in relation to roof levels, and the potential nuisances generated. The method allows the characteristics of wind turbines to be identified in order to maximize their performance on an urban site with turbulent wind.
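
The sketch below illustrates, in Python, the kind of per-site aggregation used to extract a minimum/maximum wind spectrum from geo-localized readings; the record layout and values are invented for illustration.

```python
import statistics

# Invented records from an array of connected anemometers; the field
# layout (site id, wind speed in m/s) is assumed for illustration.
readings = [
    {"site": "roof_A", "speed": 2.1}, {"site": "roof_A", "speed": 5.4},
    {"site": "roof_A", "speed": 3.3}, {"site": "street_B", "speed": 1.2},
    {"site": "street_B", "speed": 0.8}, {"site": "street_B", "speed": 1.9},
]

# Per-site wind spectrum: the min, max and mean speeds used to match a
# wind energy capturing device to a location.
for site in sorted({r["site"] for r in readings}):
    speeds = [r["speed"] for r in readings if r["site"] == site]
    print(site, min(speeds), max(speeds), round(statistics.mean(speeds), 2))
```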

Keywords: computer fluid dynamic simulation in urban environment, wind energy harvesting devices, net-zero energy building, urban wind behavior simulation, advanced building skin design methodology

Procedia PDF Downloads 77
189 Air–Water Two-Phase Flow Patterns in PEMFC Microchannels

Authors: Ibrahim Rassoul, A. Serir, E-K. Si Ahmed, J. Legrand

Abstract:

The acronym PEM refers to Proton Exchange Membrane or, alternatively, Polymer Electrolyte Membrane. Due to their high efficiency, low operating temperature (30–80 °C), and rapid evolution over the past decade, PEMFCs are increasingly emerging as a viable alternative clean power source for automotive and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One such challenge is elucidating the mechanisms underlying water transport in, and removal from, PEMFCs. On one hand, sufficient water is needed in the polymer electrolyte membrane to maintain sufficiently high proton conductivity. On the other hand, too much liquid water in the cathode can cause 'flooding' (that is, pore space filled with excessive liquid water) and hinder transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The experimental transparent fuel cell used in this work was designed to represent actual full-scale fuel cell geometry. Depending on the operating conditions, a number of flow regimes may appear in the microchannel: droplet flow, blocking liquid-water bridges/plugs (concave and convex forms), slug/plug flow and film flow. Some of these flow patterns are new, while others have already been observed in PEMFC microchannels. An algorithm in MATLAB was developed to automatically determine the flow structure (e.g. slug, droplet, plug, or film) of detected liquid water in the test microchannels and yield information on the distribution of water among the different flow structures. A video processing algorithm was developed to automatically detect dynamic and static liquid water present in the gas channels and generate relevant quantitative information. The benefit of this software is that it gives the user a more precise and systematic way to obtain measurements from images of small objects. The void fractions are also determined from image analysis. The aim of this work is to provide a comprehensive characterization of two-phase flow in an operating fuel cell, which can be used to optimize water management, inform design guidelines for gas delivery microchannels for fuel cells, and support the design and control of diverse applications. The approach combines numerical modeling with experimental visualization and measurements.
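
The MATLAB algorithm itself is not reproduced in the abstract, but the Python sketch below illustrates the underlying idea: threshold the liquid water, label connected regions, classify them by a simple shape rule, and estimate the liquid area fraction. The threshold and shape rule are illustrative assumptions, not the authors' actual criteria.

```python
import numpy as np
from scipy import ndimage

# Synthetic grayscale frame standing in for a channel image; liquid
# water is assumed to appear darker than the background.
frame = np.ones((60, 200))
frame[20:40, 10:30] = 0.2    # compact, droplet-like blob
frame[25:35, 60:180] = 0.2   # elongated, slug/film-like region

water = frame < 0.5                   # threshold the detected liquid
labels, n = ndimage.label(water)      # connected liquid regions
liquid_fraction = water.mean()        # area fraction of liquid water

for region in ndimage.find_objects(labels):
    h = region[0].stop - region[0].start
    w = region[1].stop - region[1].start
    # Illustrative shape rule: aspect ratio separates droplets from slugs/films.
    kind = "droplet" if w / h < 2 else "slug/film"
    print(kind, f"{h}x{w} px")
print(f"void fraction ~ {1 - liquid_fraction:.2f}")  # gas area fraction
```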

Keywords: polymer electrolyte fuel cell, air-water two phase flow, gas diffusion layer, microchannels, advancing contact angle, receding contact angle, void fraction, surface tension, image processing

Procedia PDF Downloads 281
188 From Social Equity to Spatial Equity in Urban Space: Precedent Study Approach

Authors: Dorsa Pourmojib, Marc J. Boutin

Abstract:

Urban space is used every day by a diverse range of urban dwellers, each with different expectations. In this space, opportunities and resources are not distributed equitably among urban dwellers, despite the importance of inclusivity. In addition, some marginalized groups may not be considered, including people with low incomes, immigrants from diverse cultures, various age groups, and those with special needs. To this end, this research aims to enhance social equity in urban space by bridging the gap between social equity and spatial equity in the urban context. This gap in the urban design knowledge base may exist for several reasons: a lack of studies on the relationship between social equity and spatial equity in urban open space; a lack of practical design strategies for promoting social equity in urban open space; a lack of proper site analysis, in terms of context and users, both for designing new urban open spaces and for developing existing ones; a lack of researchers who are also designers; and, finally, the priorities of city policies in addressing such issues, since doing so consumes time, money and energy. The main objective of this project is to address this gap in the knowledge by exploring the relationship between social equity and spatial equity in urban open space. Answering the main question of this research is a promising step to this end: 'What are the considerations for providing social equity through the design of urban elements that offer spatial equity?' To answer it, several secondary questions should be addressed: How can the characteristics of social equity be translated into spatial equity? What are the diverse users' needs, and which of their needs are not considered on a given site? What specific elements of the site should be designed in order to promote social equity? What is the current situation of social and spatial equity in the proposed site? To answer the research questions and achieve the proposed objectives, a three-step methodology has been implemented. First, a comprehensive research framework based on the available literature is presented. Then, three different urban spaces are analyzed as precedent studies in terms of the key research questions: Naqsh-e Jahan Square (Iran), Superkilen Park (Denmark) and Campo dei Fiori (Italy). In this regard, a gap analysis of the current and proposed situations of these sites has been conducted. Finally, by combining the design considerations extracted from the precedent studies and the literature review, practical design strategies are introduced as the result of this research. The presented guidelines enable designers to create socially equitable urban spaces. To conclude, this research proposes a spatial approach to social inclusion and equity in urban space by presenting a practical framework and criteria for translating social equity into spatial equity in urban areas.

Keywords: inclusive urban design, social equity, social inclusion, spatial equity

Procedia PDF Downloads 116
187 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation

Authors: Miguel Contreras, David Long, Will Bachman

Abstract:

Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments come with challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limit on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline that predicts cellular component morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained by fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels in unlabeled transmitted-light microscopy cell images, was trained on this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired in its previous training sessions to learn the new task. Once trained, the algorithm was used to predict nuclear morphology using normalized cell membrane fluorescence images as input. Predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shapes when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality, including clear lining and shape of the membrane clearly showing the boundaries of each cell, proportionally improved nuclei predictions, reducing errors relative to the ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need for multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
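
The normalization step is straightforward to express in code; a minimal Python sketch, assuming intensities already scaled to [0, 1]:

```python
import numpy as np

def normalize_stack(stack):
    """Scale each image in a z-stack so its mean pixel intensity is 0.5."""
    out = np.empty_like(stack, dtype=np.float64)
    for i, img in enumerate(stack):
        img = img.astype(np.float64)
        # Clipping keeps values in range; it can shift the mean slightly.
        out[i] = np.clip(img * (0.5 / img.mean()), 0.0, 1.0)
    return out

# A dummy 20-image z-stack standing in for registered confocal data.
stack = np.random.rand(20, 256, 256)
norm = normalize_stack(stack)
print(norm.mean(axis=(1, 2))[:3])  # each image mean is now ~0.5
```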

Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models

Procedia PDF Downloads 177
186 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea

Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park

Abstract:

Groundwater is a significant source of drinking and irrigation water in Miryang city, owing to the limited number of surface water reservoirs and high seasonal variations in precipitation. Population growth, together with the expansion of agricultural land use and industrial development, may affect the quality and management of groundwater. This research utilized multidisciplinary geostatistical approaches, such as multivariate statistics, factor analysis, cluster analysis and kriging, to identify the hydrogeochemical processes and characterize the factors controlling the distribution of groundwater geochemistry, and to develop risk maps, using data obtained from chemical investigation of groundwater samples in the study area. A total of 79 samples were collected and analyzed for major and trace elements using an atomic absorption spectrometer (AAS). Chemical maps built in a 2-D spatial Geographic Information System (GIS) provided a powerful tool for detecting potential sites of groundwater under threat of contamination. The GIS-based maps showed a higher rate of contamination in the central and southern areas, with a relatively lesser extent in the northern and southwestern parts; this can be attributed to the effects of irrigation, residual saline water, municipal sewage and livestock wastes. For wells at elevations above 85 m, the scatter diagram indicates that the groundwater of the research area was mainly influenced by saline water and NO3. pH measurements revealed slightly acidic conditions due to atmospheric CO2 dissolved in the soil, while saline water had a major impact on the higher values of TDS and EC. Based on the cluster analysis results, the groundwater was categorized into three groups: the CaHCO3 type of fresh water, the NaHCO3 type slightly influenced by seawater, and the Ca-Cl and Na-Cl types heavily affected by saline water. The most predominant water type in the study area was CaHCO3. Contamination sources and chemical characteristics were identified from the factor analysis interrelationships and cluster analysis: the chemical elements belonging to factor 1 were related to the effect of seawater, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution, and location of groundwater contamination were mapped using kriging methods. Thus, the geostatistical model provided more accurate results for identifying the sources of contamination and evaluating groundwater quality, and GIS proved a creative tool to visualize and analyze the issues affecting water quality in Miryang city.
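
As an illustration of the kriging step, the sketch below uses the open-source PyKrige package (an assumption; the paper does not name its software) with invented well coordinates and concentrations.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Invented well coordinates (km) and NO3 concentrations (mg/L).
x = np.array([0.5, 1.2, 2.8, 3.3, 4.1])
y = np.array([1.0, 3.4, 0.7, 2.9, 1.8])
no3 = np.array([12.0, 45.0, 8.0, 30.0, 22.0])

# Ordinary kriging with a spherical variogram; the interpolated surface
# and kriging variance can then feed a GIS risk map.
ok = OrdinaryKriging(x, y, no3, variogram_model="spherical")
gridx = np.linspace(0.0, 5.0, 50)
gridy = np.linspace(0.0, 4.0, 40)
z, var = ok.execute("grid", gridx, gridy)  # estimates and kriging variance
print(z.shape, float(var.mean()))
```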

Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, Kriging techniques

Procedia PDF Downloads 145
185 Superlyophobic Surfaces for Increased Heat Transfer during Condensation of CO₂

Authors: Ingrid Snustad, Asmund Ervik, Anders Austegard, Amy Brunsvold, Jianying He, Zhiliang Zhang

Abstract:

CO₂ capture, transport and storage (CCS) is essential to mitigate global anthropogenic CO₂ emissions. To make CCS a widely implemented technology in, e.g., the power sector, the reduction of costs is crucial, and every part of the CCS chain must contribute. By increasing the heat transfer efficiency during liquefaction of CO₂, a necessary step for, e.g., ship transportation, the costs associated with the process are reduced. Heat transfer rates during dropwise condensation are up to one order of magnitude higher than during filmwise condensation. Dropwise condensation usually occurs on a non-wetting (superlyophobic) surface. The vapour condenses in discrete droplets, and the non-wetting nature of the surface reduces the adhesion forces and results in shedding of condensed droplets. This, in turn, exposes fresh nucleation sites for further droplet condensation, effectively increasing the liquefaction efficiency. In addition, the droplets themselves present a smaller heat transfer resistance than a liquid film, increasing heat transfer rates from vapour to solid. Surface tension is a crucial parameter for dropwise condensation, due to its impact on the solid-liquid contact angle: a low surface tension usually results in a low contact angle and, in turn, spreading of the condensed liquid on the surface. CO₂ has very low surface tension compared to water; however, at temperatures and pressures relevant for CO₂ condensation, its surface tension is comparable to that of organic compounds such as pentane. Dropwise condensation of CO₂ is thus a completely new field of research, and knowledge of several important parameters, such as contact angle and drop size distribution, must be gained in order to understand the nature of the condensation. A new setup has been built to measure these parameters. The main parts of the experimental setup are a pressure chamber, in which the condensation occurs, and a high-speed camera. The process of CO₂ condensation is visually monitored, and one can determine the contact angle, contact angle hysteresis and, hence, the surface adhesion of the liquid. CO₂ condensation on different surfaces can be analysed, e.g. copper, aluminium and stainless steel. The experimental setup is built for accurate measurement of the temperature difference between the surface and the condensing vapour, with the temperature measured directly underneath the condensing surface, and for accurate pressure measurements in the vapour. The next step of the project will be to fabricate nanostructured surfaces for inducing superlyophobicity. Roughness is a key feature for achieving contact angles above 150° (the limit for superlyophobicity), and controlled, periodic roughness on the nanoscale is beneficial. Surfaces that are non-wetting towards organic non-polar liquids are candidate surface structures for dropwise condensation of CO₂.

Keywords: CCS, dropwise condensation, low surface tension liquid, superlyophobic surfaces

Procedia PDF Downloads 241
184 Fatigue Truck Modification Factor for Design Truck (CL-625)

Authors: Mohamad Najari, Gilbert Grondin, Marwan El-Rich

Abstract:

Design trucks in standard codes are selected based on the amount of damage they cause to structures (specifically bridges) and roads, so as to represent real traffic loads. A limited number of trucks are run over a bridge one at a time, and the damage to the bridge is recorded for each truck. One design truck is also run over the same bridge 'n' times, where 'n' is the number of trucks used previously, to calculate the damage of the design truck on the same bridge. To make these damages equal, a reduction factor is needed for that specific design truck in the code. As a limited number of trucks cannot exactly represent the real traffic over the life of the structure, these reduction factors are not accurately calculated and should be modified accordingly. Starting in July 2004, vehicle load data were collected at six weigh-in-motion (WIM) sites owned by Alberta Transportation for eight consecutive years. This database includes more than 200 million trucks. These data provide the opportunity to compare the effect of any standard fatigue truck's weight with the real traffic load on the fatigue life of bridges, which leads to a modification of the fatigue truck factor in the code. To calculate the damage for each truck, the truck is run over the bridge, the moment history of the detail under study is recorded, stress range cycles are counted, and the damage is then calculated using available S-N curves. A 2000-line FORTRAN code has been developed to perform the analysis and calculate the damage caused by the trucks in the database for all eight fatigue categories according to the Canadian Institute of Steel Construction standard (CSA S-16). Stress cycles are counted using the rainflow counting method. The modification factors for the design truck (CL-625) are calculated for two bridge configurations and ten span lengths varying from 1 m to 200 m. The two bridge configurations considered are a single-span bridge and a four-span bridge. This was found to be sufficient and representative for a simply supported span, the positive moment in the end spans of bridges with two or more spans, the positive moment in the interior spans of bridges with three or more spans, and the negative moment at an interior support of multi-span bridges. The moment history of the mid-span is recorded for the single-span bridge; the exterior positive moment, interior positive moment, and support negative moment are recorded for the four-span bridge. The influence lines are expressed by polynomial expressions obtained from regression analysis of influence lines extracted from SAP2000. It is found that for the design truck (CL-625), the fatigue truck factor varies from 0.35 to 0.55, depending on span length and bridge configuration. Detailed results will be presented in upcoming papers. This code can be used for any design truck available in standard codes.
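
The cycle-counting and damage summation can be sketched compactly in Python; here the open-source rainflow package does the counting and Miner's rule accumulates damage against a generic S-N curve whose constants are illustrative, not the CSA S-16 category values.

```python
import rainflow  # open-source rainflow-counting package (pip install rainflow)

# Illustrative stress history (MPa) at the detail from one truck passage.
stress = [0, 40, 10, 55, 5, 60, 0, 35, 0]

# Generic S-N curve N = C / S^m; C and m are illustrative constants,
# not the CSA S-16 fatigue-category values.
C, m = 1.0e12, 3.0

damage = 0.0
for rng, count in rainflow.count_cycles(stress):
    if rng > 0:
        damage += count / (C / rng**m)  # Miner's rule summation
print(f"damage per passage: {damage:.3e}")
```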

Keywords: bridge, fatigue, fatigue design truck, rain flow analysis, FORTRAN

Procedia PDF Downloads 497
183 A Kunitz-Type Serine Protease Inhibitor from Rock Bream, Oplegnathus fasciatus Involved in Immune Responses

Authors: S. D. N. K. Bathige, G. I. Godahewa, Navaneethaiyer Umasuthan, Jehee Lee

Abstract:

Kunitz-type serine protease inhibitors (KTIs) have been identified in various organisms, including animals, plants and microbes. These proteins share single or multiple Kunitz inhibitory domains, either linked together or associated with other types of domains. The characteristic Kunitz-type domain is composed of around 60 amino acid residues and is stabilized by three disulfide bridges formed by six conserved cysteine residues. KTIs are involved in various physiological processes, such as ion channel blocking, blood coagulation, fibrinolysis and inflammation. In this study, a protein containing two Kunitz-type domains was identified from the rock bream database and designated RbKunitz. The coding sequence of RbKunitz encodes 507 amino acids with a theoretical molecular mass of 56.2 kDa and an isoelectric point (pI) of 5.7. Several functional domains, including a MANEC superfamily domain, a PKD superfamily domain, and an LDLa domain, were predicted in addition to the two characteristic Kunitz domains. Moreover, trypsin interaction sites were identified in the Kunitz domain. Homology analysis revealed that RbKunitz shares the highest identity (77.6%) with Takifugu rubripes. Twenty-eight completely conserved cysteine residues were recognized when comparing RbKunitz with other orthologs from different taxonomic groups. This structural evidence indicates the rigidity of the RbKunitz folding structure, needed to achieve proper function. A phylogenetic tree constructed using the neighbor-joining method showed that KTIs from fish and non-fish lineages have evolved separately; rock bream clustered with Takifugu rubripes. SYBR Green qPCR was performed to quantify RbKunitz transcripts in different tissues, both unchallenged and challenged. RbKunitz mRNA transcripts were detected in all tissues analyzed (muscle, spleen, head kidney, blood, heart, skin, liver, intestine, kidney and gills), with the highest transcript level detected in gill tissue. The temporal transcription profile of RbKunitz in rock bream blood was analyzed upon challenge with LPS (lipopolysaccharide), poly I:C (polyinosinic:polycytidylic acid) and Edwardsiella tarda to understand the immune responses of this gene. Compared to the unchallenged control, RbKunitz exhibited strong up-regulation at 24 h post injection (p.i.) of LPS and E. tarda. Comparatively robust expression of RbKunitz was observed at 3 h p.i. upon poly I:C challenge. Taken together, these data indicate that RbKunitz may be involved in immune responses upon pathogenic stress, in order to protect the rock bream.
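
The abstract does not detail the qPCR analysis, but a standard way to quantify such relative expression from SYBR Green data is the 2^(-ddCt) method, sketched below with invented Ct values.

```python
# Relative expression by the 2^(-ddCt) method, a standard analysis for
# SYBR Green qPCR data; all Ct values below are invented for illustration.

def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref             # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_sample - d_ct_control)    # fold change vs. control

# Hypothetical RbKunitz Ct values in blood at 24 h post E. tarda
# challenge, relative to the unchallenged control.
print(rel_expression(ct_target=22.1, ct_ref=18.0,
                     ct_target_ctrl=25.3, ct_ref_ctrl=18.2))
```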

Keywords: Kunitz-type, rock bream, immune response, serine protease inhibitor

Procedia PDF Downloads 346
182 RAD-Seq Data Reveals Evidence of Local Adaptation between Upstream and Downstream Populations of Australian Glass Shrimp

Authors: Sharmeen Rahman, Daniel Schmidt, Jane Hughes

Abstract:

Paratya australiensis Kemp (Decapoda: Atyidae) is a widely distributed indigenous freshwater shrimp, highly abundant in eastern Australia. This species has been considered a model stream organism for studying genetics, dispersal, biology, behaviour and evolution in atyids. Paratya has a filter-feeding and scavenging habit, which plays a significant role in the formation of lotic community structure. It has been shown to reduce periphyton and sediment on the hard substrates of coastal streams and hence acts as a strongly interacting ecosystem macroconsumer. Besides, Paratya is one of the major food sources for stream-dwelling fishes. Paratya australiensis is a cryptic species complex consisting of nine highly divergent mitochondrial DNA lineages; one lineage has been observed to favour upstream sites at higher altitudes, with cooler water temperatures. This study aims to identify local adaptation in upstream and downstream populations of this lineage in three streams in the Conondale Range, north-east of Brisbane, Queensland, Australia. Two populations (upstream and downstream) from each stream were chosen to test for local adaptation, and a parallel pattern of adaptation is expected across all streams. Six populations, each consisting of 24 individuals, were sequenced using the restriction-site-associated DNA sequencing (RAD-seq) technique. Genetic markers (SNPs) were developed using double-digest RAD sequencing (ddRAD-seq) and used for de novo assembly of the Paratya genome. De novo assembly was done using the Stacks program and produced 56,344 loci for 47 individuals from one stream. Among these individuals, 39 shared 5819 loci, and these markers are being used to test for local adaptation between upstream and downstream populations using Fst outlier tests (Arlequin) and Bayesian analysis (BayeScan). The Fst outlier test detected 27 loci likely to be under selection, and the Bayesian analysis also detected 27 loci as under selection; among these, 3 loci showed significant evidence of selection in the BayeScan analysis. On the other hand, upstream and downstream populations are strongly diverged at neutral loci, with an Fst of 0.37. A similar analysis will be done with all six populations to determine whether there is a parallel pattern of adaptation across all streams. Furthermore, a multi-locus among-population covariance analysis will be done to identify potential markers under selection, as well as to compare single-locus versus multi-locus approaches for detecting local adaptation. The adaptive genes identified in this study can be used in future studies to design primers and test for adaptation in related crustacean species.
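
As a minimal illustration of the Fst logic behind the outlier tests, the sketch below computes a per-locus Fst = (Ht - Hs)/Ht from allele frequencies in two populations; this is a simplification of the Weir and Cockerham estimator that Arlequin and BayeScan build on, and the frequencies are invented.

```python
# Per-locus Fst = (Ht - Hs) / Ht from allele frequencies in two
# populations; a simplification of the Weir & Cockerham estimator used
# by Arlequin and BayeScan. The frequencies below are invented.

def fst(p_up, p_down):
    hs = (2 * p_up * (1 - p_up) + 2 * p_down * (1 - p_down)) / 2
    p_bar = (p_up + p_down) / 2
    ht = 2 * p_bar * (1 - p_bar)
    return 0.0 if ht == 0 else (ht - hs) / ht

print(fst(0.48, 0.52))  # similar frequencies -> Fst near 0 (neutral-looking)
print(fst(0.10, 0.85))  # strong divergence -> high Fst (outlier candidate)
```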

Keywords: Paratya australiensis, rainforest streams, selection, single nucleotide polymorphism (SNPs)

Procedia PDF Downloads 228
181 Assessment of Surface Water Quality near Landfill Sites Using a Water Pollution Index

Authors: Alejandro Cittadino, David Allende

Abstract:

Landfilling of municipal solid waste is a common waste management practice in Argentina, as in many parts of the world. There is extensive scientific literature on the potential negative effects of landfill leachates on the environment, so it is necessary to be rigorous with control and monitoring systems. Due to the specific composition of municipal solid waste in Argentina, local landfill leachates contain large amounts of organic matter (biodegradable, but also refractory to biodegradation), as well as ammonia-nitrogen, small traces of some heavy metals, and inorganic salts. In order to investigate surface water quality in the Reconquista river adjacent to the Norte III landfill, water samples both upstream and downstream of the dumpsite are collected quarterly and analyzed for 43 parameters, including organic matter, heavy metals, and inorganic salts, as required by local standards. The objective of this study is to apply a water quality index that considers the leachate characteristics in order to determine the quality status of the watercourse through the landfill. The water pollution index method has been widely used in water quality assessments, particularly of rivers, and has played an increasingly important role in water resource management, since it provides a number simple enough for the public to understand that states the overall water quality at a certain location and time. The chosen water quality index (ICA) is based on the values of six parameters: dissolved oxygen (in mg/l and percent saturation), temperature, biochemical oxygen demand (BOD5), ammonia-nitrogen, and chloride (Cl-) concentration. The ICA index was determined both upstream and downstream in the Reconquista river, the rating scale being between 0 (very poor water quality) and 10 (excellent water quality). The monitoring results indicated that water quality was unaffected by possible leachate runoff, since the upstream and downstream index scores ranked in the same category, although in general most of the samples were classified as having poor water quality according to the index's scale. The annual averaged ICA index scores (computed quarterly) were 4.9, 3.9, 4.4 and 5.0 upstream and 3.9, 5.0, 5.1 and 5.0 downstream during the study period between 2014 and 2017. Additionally, water quality seemed to exhibit distinct seasonal variations, probably due to annual precipitation patterns in the study area. The ICA water quality index appears appropriate for evaluating landfill impacts, since it accounts mainly for organic pollution and inorganic salts, consistent with the absence of heavy metals in the local leachate composition; however, the inclusion of other parameters could be more decisive in discerning the stream reaches affected by landfill activities. Future work may consider adding other parameters to the index, such as total organic carbon (TOC) and total suspended solids (TSS), since they are present in the leachate at high concentrations.
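
The abstract does not give the ICA's subindex curves or weights, but a weighted index of this general form can be sketched as follows; the rating functions and weights below are placeholders, not the actual ICA formulation.

```python
# Weighted water-quality index on the 0 (very poor) to 10 (excellent)
# scale. The subindex curves and weights below are placeholders; the
# actual ICA formulation is not given in the abstract.

def rate_do_sat(pct):        # dissolved oxygen, % saturation
    return min(pct / 10.0, 10.0)

def rate_bod5(mg_l):         # biochemical oxygen demand
    return max(10.0 - 1.5 * mg_l, 0.0)

def rate_nh3(mg_l):          # ammonia-nitrogen
    return max(10.0 - 4.0 * mg_l, 0.0)

WEIGHTS = {"do": 0.40, "bod": 0.35, "nh3": 0.25}

def ica(do_sat_pct, bod5, nh3):
    return (WEIGHTS["do"] * rate_do_sat(do_sat_pct)
            + WEIGHTS["bod"] * rate_bod5(bod5)
            + WEIGHTS["nh3"] * rate_nh3(nh3))

print(round(ica(do_sat_pct=55, bod5=3.0, nh3=0.6), 1))  # mid-range quality
```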

Keywords: landfill, leachate, surface water, water quality index

Procedia PDF Downloads 124
180 Fe3O4 Decorated ZnO Nanocomposite Particle System for Waste Water Remediation: An Absorptive-Photocatalytic Based Approach

Authors: Prateek Goyal, Archini Paruthi, Superb K. Misra

Abstract:

Contamination of water resources has been a major concern, drawing attention to the need to develop new material models for the treatment of effluents. Existing conventional wastewater treatment methods are sometimes ineffective, and uneconomical, in remediating contaminants such as heavy metal ions (mercury, arsenic, lead, cadmium and chromium), organic matter (dyes, chlorinated solvents) and high salt concentrations, which make water unfit for consumption. We believe that a nanotechnology-based strategy, in which nanoparticles are used as a tool to remediate a class of pollutants, would prove effective due to their high surface-area-to-volume ratio and higher selectivity, sensitivity and affinity. In recent years, scientific advances have been made in the application of photocatalytic nanomaterials (ZnO, TiO2, etc.) and magnetic nanomaterials to the remediation of contaminants (such as heavy metals and organic dyes) from water and wastewater. Our study focuses on the synthesis of ZnO, Fe3O4 and Fe3O4-coated ZnO nanoparticulate systems and on monitoring their efficiency in removing heavy metals and dyes simultaneously. A multitude of ZnO nanostructures (spheres, rods and flowers), synthesized via multiple routes (microwave and hydrothermal approaches), offers a wide range of light-active photocatalytic properties. The phase purity, morphology, size distribution, zeta potential, surface area and porosity, in addition to the magnetic susceptibility of the particles, were characterized by XRD, TEM, CPS, DLS, BET and VSM measurements, respectively. Furthermore, the introduction of crystalline defects into ZnO nanostructures can assist light activation for improved dye degradation; the band gap of a material and its absorbance are concrete indicators of its photocatalytic activity. Due to their high surface area, high porosity, affinity towards metal ions and availability of active surface sites, iron oxide nanoparticles show promising application in the adsorption of heavy metal ions. An additional advantage of a magnetic nanocomposite is that it offers magnetic-field-responsive separation and recovery of the catalyst. We therefore believe that the ZnO-linked Fe3O4 nanosystem would be efficient and reusable. Combining improved photocatalytic efficiency with adsorption for environmental remediation has been a long-standing challenge, and the nanocomposite system offers the best of the features that the two individual metal oxides provide for nanoremediation.

Keywords: adsorption, nanocomposite, nanoremediation, photocatalysis

Procedia PDF Downloads 210
179 Efficiency of Different Types of Addition onto the Hydration Kinetics of Portland Cement

Authors: Marine Regnier, Pascal Bost, Matthieu Horgnies

Abstract:

Some of the problems to be solved in the concrete industry are linked to the use of low-reactivity cement, the hardening of concrete in cold weather and the manufacture of pre-cast concrete without a costly heating step. These applications require accelerated hydration kinetics in order to decrease the setting time and obtain significant compressive strengths as soon as possible. The mechanisms enhancing the hydration kinetics of alite or Portland cement (e.g. the creation of nucleation sites) have already been studied in the literature (e.g. using distinct additions such as titanium dioxide nanoparticles, calcium carbonate fillers, water-soluble polymers, C-S-H, etc.). However, the goal of this study was to establish a clear ranking of the efficiency of several types of additions by using a robust and reproducible methodology based on isothermal calorimetry (performed at 20°C). The cement was a CEM I 52.5N PM-ES (Blaine fineness of 455 m²/kg). To ensure the reproducibility of the experiments and avoid any decrease in reactivity before use, the cement was stored in waterproof, sealed bags to prevent any contact with moisture and carbon dioxide. The experiments were performed on Portland cement pastes with a water-to-cement ratio of 0.45, incorporating different compounds (industrially available or laboratory-synthesized) selected according to their main composition and their specific surface area (SSA, calculated using the Brunauer-Emmett-Teller (BET) model and nitrogen adsorption isotherms performed at 77 K). The intrinsic effects of (i) dry powders (e.g. fumed silica, activated charcoal, nano-precipitates of calcium carbonate, afwillite germs, nanoparticles of iron and iron oxides, etc.) and (ii) aqueous solutions (e.g. containing calcium chloride, hydrated Portland cement or Master X-SEED 100, etc.) were investigated. The influence of the amount of addition, calculated relative to the dry extract of each addition compared to cement (while conserving the same water-to-cement ratio), was also studied. The results demonstrated that the X-SEED®, the hydrated calcium nitrate, the calcium chloride (and, to a lesser extent, a solution of hydrated Portland cement) were able to accelerate the hydration kinetics of Portland cement, even at low concentrations (e.g. 1 wt.% of dry extract compared to cement). At higher rates of addition, the fumed silica, the precipitated calcium carbonate and the titanium dioxide can also accelerate hydration. In the case of the nano-precipitates of calcium carbonate, a correlation was established between the SSA and the accelerating effect. On the contrary, the nanoparticles of iron or iron oxides, the activated charcoal and the dried crystallised hydrates did not show any accelerating effect. Future experiments are scheduled to establish the ranking of these additions in terms of accelerating effect using low-reactivity cements and other water-to-cement ratios.

Keywords: acceleration, hydration kinetics, isothermal calorimetry, Portland cement

Procedia PDF Downloads 234
178 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (i.e., OzFlux, AmeriFlux, China Flux, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys, and remote sensing assessments, there are serious concerns regarding the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research has developed an ensemble model to fill the data gaps of CO₂ flux, avoiding the limitations of a single algorithm and therefore providing less error and reducing the uncertainties associated with the gap-filling process. In this study, the data of five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs, and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 being provided by the best FFNN). The most significant improvement occurred in the estimation of the extreme diurnal values (during midday and sunrise), as well as in nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Besides, the performance difference between the ensemble model and its individual components was more pronounced during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher photosynthetic activity of plants, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models are potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement using a single algorithm.
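
As a rough illustration of the two-layer architecture described here, the sketch below stacks five feedforward networks under an XGBoost meta-learner. The synthetic drivers, network structures, and hyperparameters are placeholders, not the authors' configuration:

```python
# Minimal sketch of the two-layer ensemble: five FFNNs produce first-layer
# estimates of CO2 flux, and XGBoost maps those estimates to the final value.
import numpy as np
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))   # stand-in meteorological drivers
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=1000)

# First layer: five FFNNs with different structures
ffnns = [MLPRegressor(hidden_layer_sizes=h, max_iter=2000, random_state=i)
         for i, h in enumerate([(16,), (32,), (16, 8), (32, 16), (64,)])]
first_layer = np.column_stack([m.fit(X, y).predict(X) for m in ffnns])

# Second layer: XGB takes the FFNN outputs as its input
stacker = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
stacker.fit(first_layer, y)
y_hat = stacker.predict(first_layer)
```

In practice the first-layer predictions fed to the meta-learner would be generated out-of-fold, so that the second layer does not learn from leaked training fits.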

Keywords: carbon flux, eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 110
177 3D Nanostructured Assembly of 2D Transition Metal Chalcogenide/Graphene as High Performance Electrocatalysts

Authors: Sunil P. Lonkar, Vishnu V. Pillai, Saeed Alhassan

Abstract:

Design and development of highly efficient, inexpensive, and long-term stable earth-abundant electrocatalysts holds tremendous promise for the hydrogen evolution reaction (HER) in water electrolysis. The 2D transition metal dichalcogenides, especially molybdenum disulfide, have attracted a great deal of interest due to their high electrocatalytic activity. However, due to poor electrical conductivity and limited exposed active sites, the performance of these catalysts is limited. In this context, a facile and scalable synthesis method for fabricating nanostructured electrocatalysts composed of 3D porous graphene aerogels supporting MoS₂ and WS₂ is highly desired. Here we developed a highly active and stable electrocatalyst for the HER by growing it as a 3D porous architecture on conducting graphene. The resulting nanohybrids were thoroughly investigated by means of several characterization techniques to understand their structure and properties. Moreover, the HER performance of these 3D catalysts is expected to improve greatly compared to other well-known catalysts, benefiting mainly from the improved electrical conductivity imparted by graphene and the porous structure of the support. This technologically scalable process can afford efficient electrocatalysts for the hydrogen evolution reaction and hydrodesulfurization catalysts for sulfur-rich petroleum fuels. Owing to their lower cost and higher performance, the resulting materials hold high potential for various energy and catalysis applications. In a typical hydrothermal method, a sonicated GO aqueous dispersion (5 mg mL⁻¹) was mixed with ammonium tetrathiomolybdate (ATTM) and tungsten molybdate and treated in a sealed Teflon autoclave at 200 °C for 4 h. After cooling, a black solid macroporous hydrogel was recovered and washed under running de-ionized water to remove any by-products and metal ions. The obtained hydrogels were then freeze-dried for 24 h and further subjected to thermal-annealing-driven crystallization at 600 °C for 2 h to ensure complete thermal reduction of RGO into graphene and the formation of highly crystalline MoS₂ and WS₂ phases. The resulting 3D nanohybrids were characterized to understand their structure and properties. SEM-EDS clearly reveals the formation of a highly porous material with a uniform distribution of MoS₂ and WS₂ phases. In conclusion, a novel strategy for the fabrication of 3D nanostructured MoS₂-WS₂/graphene is presented. The characterizations revealed that the in-situ-formed promoters are uniformly dispersed onto few-layered MoS₂-WS₂ nanosheets that are well supported on the graphene surface. The resulting 3D hybrids hold high promise as potential electrocatalysts and hydrodesulfurization catalysts.

Keywords: electrocatalysts, graphene, transition metal chalcogenide, 3D assembly

Procedia PDF Downloads 108
176 Influence of Different Boundary Conditions of Glass Plates on Human Impact Resistance

Authors: Alberto Sanchidrián, José A. Parra, Jesús Alonso, Julián Pecharromán, Antonia Pacios, Consuelo Huerta

Abstract:

Glass is a commonly used material in building; there is no unique design solution, as plates with different numbers of layers and interlayers may be used. In most façades, a safety glazing has to be used according to its performance in the pendulum impact test. The European Standard EN 12600 establishes an impact test procedure for classifying flat plates of different thicknesses from the point of view of human safety, using a pendulum of two tires and 50 kg mass that impacts the plate from different heights. However, this test does not replicate the actual dimensions and boundary conditions used in building configurations, so the real stress distribution is not determined by this test. The influence of different boundary conditions, such as the ones employed on construction sites, is not well taken into account when testing the behaviour of safety glazing, and there is no detailed procedure or criteria to determine the glass resistance against human impact. To reproduce the actual boundary conditions on site, when needed, the pendulum test is arranged to be used 'in situ', with no account taken of load control or stiffness, and without a standard procedure. Fracture stresses of small and large glass plates fit a Weibull distribution with considerable dispersion, so conservative values are adopted for the admissible fracture stress under static loads. In fact, tests performed for human impact give a fracture strength two or three times higher, and often without total fracture of the glass plate. Newer standards, such as DIN 18008-4, allow an admissible fracture stress 2.5 times higher than the values used for static and wind loads. Two working areas are now open: a) to define a standard for the 'in situ' test; b) to prepare a laboratory procedure that allows testing with a more realistic stress distribution. To work on both research lines, a laboratory setup that allows testing medium-size specimens with different boundary conditions has been developed. A special steel frame allows reproducing the stiffness of the glass support substructure, including a rigid condition used as reference. The dynamic behaviour of the glass plate and its support substructure has been characterized with finite element models updated with modal test results. In addition, a new portable impact machine is being used to obtain sufficient force and direction control during the impact test. An impact energy of 100 J is used. To avoid problems with broken glass plates, the tests have been done using an aluminium plate of 1000 mm x 700 mm size and 10 mm thickness supported on four sides; three different substructure stiffness conditions are used. A detailed control of the dynamic stiffness and the behaviour of the plate is done with modal tests. The repeatability of the test and the reproducibility of the results prove that a procedure to control both the stiffness of the plate and the impact level is necessary.
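
As a point of reference not stated in the abstract, the 100 J impact energy corresponds, for the 50 kg pendulum mass of EN 12600, to a modest equivalent drop height:

```latex
h = \frac{E}{m g} = \frac{100\ \mathrm{J}}{50\ \mathrm{kg} \times 9.81\ \mathrm{m\,s^{-2}}} \approx 0.20\ \mathrm{m}
```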

Keywords: glass plates, human impact test, modal test, plate boundary conditions

Procedia PDF Downloads 282
175 A Computational Approach to Screen Antagonist Molecules against Mycobacterium tuberculosis Lipoprotein LprG (Rv1411c)

Authors: Syed Asif Hassan, Tabrej Khan

Abstract:

Tuberculosis (TB), caused by the bacillus Mycobacterium tuberculosis (Mtb), continues to take a disturbing toll on human life and healthcare facilities worldwide. The global burden of TB remains enormous. The alarming rise of multi-drug-resistant strains of Mycobacterium tuberculosis calls for an increase in research efforts towards the development of new target-specific therapeutics against diverse strains of M. tuberculosis. Therefore, the discovery of new molecular scaffolds targeting new drug sites should be a priority in a workable plan for fighting resistance in Mycobacterium tuberculosis (Mtb). The Mtb non-acylated lipoprotein LprG (Rv1411c) has Toll-like receptor 2 (TLR2) agonist actions that depend on its association with triacylated glycolipids, which bind specifically to the hydrophobic pocket of the Mtb LprG lipoprotein. The detection of a glycolipid carrier function has important implications for the role of LprG in mycobacterial physiology and virulence. Therefore, considering the pivotal role of glycolipids in mycobacterial physiology and host-pathogen interactions, designing competitive antagonist ligands (chemotherapeutics) that bind competitively to the glycolipid-binding domain of the LprG lipoprotein would lead to inhibition of tuberculosis infection in humans. In this study, a unified approach involving the ligand-based virtual screening protocol USRCAT (Ultrafast Shape Recognition with CREDO Atom Types) and molecular docking studies using AutoDock Vina 1.1.2 with the X-ray crystal structure of the Mtb LprG protein was implemented. The docking results were further confirmed by DSX (DrugScore eXtended), a robust program to evaluate the binding energy of ligands bound to the ligand-binding domain of the Mtb LprG lipoprotein; the ligand with the higher hypothetical affinity also has the more negative score. Based on the USRCAT, Lipinski's rule values, and molecular docking results, [(2R)-2,3-di(hexadecanoyl oxy)propyl][(2S,3S,5S,6R)-3,4,5-trihydroxy-2,6-bis[[(2R,3S,4S,5R,6S)-3,4,5-trihydroxy-6 (hydroxymethyl)tetrahydropyran-2-yl]oxy]cyclohexyl] phosphate (XPX) was confirmed as a promising drug-like lead compound (antagonist) binding specifically to the hydrophobic domain of the LprG protein with an affinity greater than that of PIM2 (an agonist of the LprG protein), with a free binding energy of -9.98e+006 kcal/mol and a binding affinity of -132 kcal/mol, respectively. A further in vitro assay of this compound is required to establish its potency in inhibiting the molecular evasion mechanism of Mtb within infected host macrophages. These results will certainly be helpful in future anti-TB drug discovery efforts against multidrug-resistant tuberculosis (MDR-TB).
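
To make the docking step concrete, the sketch below shows a hypothetical AutoDock Vina 1.1.2 invocation wrapped in Python; the file names and search-box coordinates are placeholders, not taken from the study:

```python
# Hypothetical docking run: Vina is invoked on prepared PDBQT files of the
# LprG structure and one screened ligand; flags are standard Vina 1.1.2 options.
import subprocess

cmd = [
    "vina",
    "--receptor", "lprg.pdbqt",      # prepared Mtb LprG structure (placeholder)
    "--ligand", "candidate.pdbqt",   # candidate from the USRCAT hit list
    "--center_x", "10.0", "--center_y", "12.0", "--center_z", "-5.0",
    "--size_x", "24", "--size_y", "24", "--size_z", "24",
    "--exhaustiveness", "8",
    "--out", "candidate_docked.pdbqt",
]
subprocess.run(cmd, check=True)      # resulting poses would then be rescored with DSX
```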

Keywords: antagonist, agonist, binding affinity, chemotherapeutics, drug-like, multidrug-resistant tuberculosis (MDR-TB), Rv1411c protein, toll-like receptor (TLR2)

Procedia PDF Downloads 244
174 A Drop of Water for the Thirsty Ground: Implementing Drip-Irrigation System as an Alternative to the Existing System to Promote Sustainable Livelihoods in the Archipelagic Dryland East Nusa Tenggara, Indonesia

Authors: F. L. Benu, I. W. Mudita, R. L. Natonis

Abstract:

East Nusa Tenggara, together with parts of East Java, West Nusa Tenggara, and Maluku, has been included as part of the global drylands, defined according to the ratio of annual precipitation (P) to annual potential evaporation (PET) and to the major vegetation types of grassland and savannah ecosystems. These tropical drylands are unique because, whereas drylands in other countries are mostly continental, here they are archipelagic. These archipelagic drylands are also unique in that they are included more on account of their major vegetation types than of their P/PET ratio. Slash-and-burn cultivation and free-roaming animal husbandry are the two major livelihoods widely practiced, along with alternative seasonal livelihoods such as traditional fishing. Such livelihoods are vulnerable in various respects, especially to drought, which becomes more unpredictable in the face of climate change. To cope with such vulnerability, semi-intensive farming using drip irrigation was implemented as an appropriate technology with the goal of promoting a more sustainable alternative to the existing livelihoods. The implementation started in 2016 with a pilot system at the university field laboratory in Kupang, in which various installation designs were tested. The modified system, consisting of an uplifted water reservoir and a solar-powered pump, was tested in Papela, in the District of Rote-Ndao, in 2017 to convince fishermen who had been involved in illegal fishing in Australia-Indonesia transboundary waters to adopt small-scale farming as a more sustainable alternative to their existing livelihoods. The system was tested again, over a larger area, in Oesena, in the District of Kupang, in 2018 to convince slash-and-burn cultivators to adopt an environmentally friendlier cultivation system. From the implementation of the modified system at both sites, the participating fishermen in Papela were able to manage the system under a tight water supply to grow chili pepper, tomatoes, and watermelon, and the slash-and-burn cultivators in Oesena were able to grow chili pepper with more efficient water use than in a conventional irrigation system. The gross margins obtained from growing chili pepper, tomatoes, and watermelon in Papela and from growing chili pepper in Oesena showed that small-scale farming using a drip irrigation system is a promising alternative for local people to generate cash income to support their livelihoods. However, before promoting this appropriate technology as a more sustainable alternative to the existing livelihoods elsewhere in the region, a better understanding of the social contexts of the implementation is needed.

Keywords: archipelagic drylands, drip irrigation system, East Nusa Tenggara, sustainable livelihoods

Procedia PDF Downloads 92
173 In vitro Regeneration of Neural Cells Using Human Umbilical Cord Derived Mesenchymal Stem Cells

Authors: Urvi Panwar, Kanchan Mishra, Kanjaksha Ghosh, ShankerLal Kothari

Abstract:

Background: The increasing prevalence of neurodegenerative diseases has become a global challenge for the medical sciences. Adult neural stem cells are rare and require an invasive and painful procedure to obtain from the central nervous system. Mesenchymal stem cell (MSC) therapies have shown remarkable application in the treatment of various cell injuries and cell loss. MSCs can be derived from various sources such as adult tissues, human bone marrow, umbilical cord blood, and cord tissue. MSCs from these sources have similar proliferation and differentiation capabilities, but human umbilical cord-derived mesenchymal stem cells (hUCMSCs) have proved to be more beneficial with respect to cell procurement, differentiation into other cells, preservation, and transplantation. Material and method: The human umbilical cord is easily obtainable and non-controversial compared to bone marrow and other adult tissues. The umbilical cord can be collected after delivery of the baby, and its tissue can be cultured using the explant culture method. Culture media such as DMEM/F12 + 10% FBS and DMEM/F12 + neural growth factors (bFGF, human noggin, B27), with antibiotics (streptomycin/gentamicin), were used to culture MSCs and to differentiate them into neural cells, respectively. The MSCs were characterized by flow cytometry for the surface markers CD90, CD73, and CD105 and by a colony-forming unit assay. The differentiated neural cells will be characterized by fluorescence markers for neurons, astrocytes, and oligodendrocytes; by quantitative PCR for the genes Nestin and NeuroD1; and by Western blotting for the GAP43 protein. Result and discussion: MSCs of high quality and number were isolated from human umbilical cord via the explant culture method. The obtained MSCs were differentiated into neural cells such as neurons, astrocytes, and oligodendrocytes. The differentiated neural cells can be used to treat neural injuries and neural cell loss by delivering cells non-invasively via the cerebrospinal fluid (CSF) or blood. Moreover, the MSCs can also be delivered directly to injured sites, where they differentiate into neural cells. The human umbilical cord is therefore demonstrated to be an inexpensive and easily available source of MSCs. Moreover, hUCMSCs are a potential source for neural cell therapies and neural regeneration in cases of neural cell injury and loss. This line of research will be helpful in treating and managing neural cell damage and neurodegenerative diseases such as Alzheimer's and Parkinson's. The study still has a long way to go, but it is a promising approach for many neural disorders for which no satisfactory management is currently available.

Keywords: bone marrow, cell therapy, explant culture method, flow cytometer, human umbilical cord, mesenchymal stem cells, neurodegenerative diseases, neuroprotective, regeneration

Procedia PDF Downloads 178
172 Dynamic EEG Desynchronization in Response to Vicarious Pain

Authors: Justin Durham, Chanda Rooney, Robert Mather, Mickie Vanhoy

Abstract:

The psychological construct of empathy involves understanding another person's cognitive perspective and experiencing that person's emotional state. Deciphering emotional states is conducive to interpreting vicarious pain. Observing others' physical pain activates neural networks related to the actual experience of pain itself. This study addresses empathy as a nonlinear dynamic process of simulation that allows individuals to understand the mental states of others and experience vicarious pain, exhibiting self-organized criticality. Such criticality follows from a combination of neural networks with an excitatory feedback loop generating bistability to resonate permutated empathy. Cortical networks exhibit diverse patterns of activity, including oscillations, synchrony, and waves; however, the temporal dynamics of the neurophysiological activities underlying empathic processes remain poorly understood. Mu rhythms are EEG oscillations with dominant frequencies of 8-13 Hz that become synchronized when the body is relaxed with eyes open and the sensorimotor system is idle; thus, mu rhythm synchrony is expected to be highest in baseline conditions. When the sensorimotor system is activated, either by performing or by simulating action, mu rhythms become suppressed or desynchronized, and thus should be suppressed while observing video clips of painful injuries if previous research on mirror system activation holds. Twelve undergraduates contributed EEG data and survey responses to empathy and psychopathy scales, in addition to watching consecutive video clips of sports injuries. Participants watched a blank, black image on a computer monitor before and after observing a video of consecutive sports injury incidents. Each video condition lasted five minutes. A BIOPAC MP150 recorded EEG signals from sensorimotor and thalamocortical regions related to a complex neural network called the 'pain matrix'. Physical and social pain are activated in this network to resonate vicarious pain responses when processing empathy. Five single-electrode EEG locations were applied over regions measuring sensorimotor electrical activity in microvolts (μV) to monitor mu rhythms. EEG signals were sampled at a rate of 200 Hz. Mu rhythm desynchronization was measured in the 8-13 Hz band at electrode sites F3 and F4. Each participant's mu rhythm data were analyzed via the Fast Fourier Transform (FFT) and multifractal time series analysis.
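
A minimal sketch of the mu-band comparison described above is given below: EEG sampled at 200 Hz is transformed to a power spectrum, and 8-13 Hz power is contrasted between the baseline and video conditions. The signal arrays are synthetic placeholders, not the study's data:

```python
# Mu-band power via Welch's method, and a desynchronization index
# comparing the blank-screen baseline to the injury-clip condition.
import numpy as np
from scipy.signal import welch

FS = 200            # sampling rate (Hz), as in the study
MU_BAND = (8, 13)   # mu rhythm band (Hz)

def mu_power(signal_uv):
    """Integrate the PSD over the mu band for one electrode's signal (uV)."""
    freqs, psd = welch(signal_uv, fs=FS, nperseg=FS * 2)
    mask = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return np.trapz(psd[mask], freqs[mask])

rng = np.random.default_rng(1)
baseline = rng.normal(size=FS * 300)   # 5-minute blank-screen condition
video = rng.normal(size=FS * 300)      # 5-minute injury-clip condition

# Desynchronization: relative drop in mu power from baseline to video
erd = (mu_power(baseline) - mu_power(video)) / mu_power(baseline)
print(f"mu desynchronization index: {erd:.2f}")
```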

Keywords: desynchronization, dynamical systems theory, electroencephalography (EEG), empathy, multifractal time series analysis, mu waveform, neurophysiology, pain simulation, social cognition

Procedia PDF Downloads 260
171 Mannose-Functionalized Lipopolysaccharide Nanoparticles for Macrophage-Targeted Dual Delivery of Rifampicin and Isoniazid

Authors: Mumuni Sumaila, Viness Pillay, Yahya E. Choonara, Pradeep Kumar, Pierre P. Kondiah

Abstract:

Tuberculosis (TB) remains a serious challenge to public health globally, despite every effort put together to curb the disease. Currently available TB therapeutics have proven to be inefficient due to a multitude of drawbacks, ranging from serious adverse effects/drug toxicity to inconsistent bioavailability, which ultimately contributes to the emergence of drug-resistant TB. An effective 'cargo' system designed to cleverly deliver therapeutic doses of anti-TB drugs to infection sites, and in a sustained-release manner, may provide a better therapeutic choice towards winning the war against TB. In the current study, we investigated mannose-functionalized lipopolysaccharide hybrid nanoparticles for safety and efficacy towards macrophage-targeted simultaneous delivery of the two first-line anti-TB drugs, rifampicin (RF) and isoniazid (IS). RF-IS-loaded lipopolysaccharide hybrid nanoparticles were fabricated using the solvent injection technique (SIT), incorporating soy lecithin (SL) and low-molecular-weight chitosan (CS) as the lipid and polysaccharide components, respectively. Surface-functionalized nanoparticles were obtained through the reaction of the aldehyde group of mannose with the free amine functionality present at the surface of the nanoparticles. The functionalized nanocarriers were spherical, with an average particle size and surface charge of 107.83 nm and +21.77 mV, respectively, and entrapment efficiencies (EE) of 53.52% and 69.80% for RF and IS, respectively. The FTIR spectrum revealed high-intensity bands between 1663 cm⁻¹ and 1408 cm⁻¹ (absent in non-functionalized nanoparticles), which could be attributed to the C=N stretching vibration produced by the formation of a Schiff base (–N=CH–) during the mannosylation reaction. In vitro release studies showed a sustained-release profile for RF and IS, with less than half of the total payload released over a 48-hour period. The nanocarriers were biocompatible and safe, with more than 80% cell viability achieved when incubated with RAW 264.7 cells at concentrations of 30 to 500 μg/mL over a 24-hour period. Cellular uptake studies (after a 24-hour incubation period with the murine macrophage cells, RAW 264.7) revealed a 13- and a 9-fold increase in the intracellular accumulation of RF and IS, respectively, when compared with an unformulated RF+IS solution, and a 6- and a 3-fold increase, respectively, when compared with the non-functionalized nanoparticles. Furthermore, fluorescence microscopy images showed nanoparticle internalization and accumulation within the RAW 264.7 cells, which was more significant for the mannose-functionalized system than for the non-functionalized nanoparticles. The overall results suggest that the fabricated mannose-functionalized lipopolysaccharide nanoparticles are a safe and promising platform for macrophage-targeted delivery of anti-TB therapeutics. However, in vivo pharmacokinetic/pharmacodynamic studies are required to further substantiate the therapeutic efficacy of the nanosystem.
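
For readers unfamiliar with the EE figures quoted above, entrapment efficiency is conventionally computed from the free (unentrapped) drug fraction; the abstract does not state its formula, so the usual definition is given here:

```latex
\mathrm{EE}\,(\%) = \frac{W_{\mathrm{total}} - W_{\mathrm{free}}}{W_{\mathrm{total}}} \times 100
% W_total : drug initially added to the formulation
% W_free  : unentrapped drug recovered in the supernatant
```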

Keywords: anti-tuberculosis therapeutics, hybrid nanosystem, lipopolysaccharide nanoparticles, macrophage-targeted delivery

Procedia PDF Downloads 144
170 Improved Food Security and Alleviation of Cyanide Intoxication through Commercialization and Utilization of Cassava Starch by Tanzania Industries

Authors: Mariam Mtunguja, Henry Laswai, Yasinta Muzanilla, Joseph Ndunguru

Abstract:

The starchy tuberous roots of cassava provide food for people but also find application in various industries. Recently, concentrated research efforts have been made to fully exploit its potential as a sustainable multipurpose crop. High starch yield is the key trait for commercial cassava production for the starch industries. Furthermore, the cyanide present in cassava roots poses a health challenge to the use of cassava as food. Farming communities where cassava is a staple food prefer bitter (highly cyanogenic) varieties as protection from predators and thieves. As a result, food-insecure farmers prefer growing bitter cassava, which has led to cyanide intoxication in these farming communities. Cassava farmers can benefit from marketing cassava to starch producers, thereby improving their income and food security. Increased income will decrease dependency on cassava as a staple food, as other food sources become affordable. To achieve this, adequate information is required on the right cassava cultivars and the appropriate harvesting period so as to maximize cassava production and profitability. This study aimed at identifying suitable cassava cultivars and the optimum time of harvest to maximize starch production. Six commonly grown cultivars were identified and planted in a randomized complete block design, and further analysis was done to assess variation in physicochemical characteristics, starch yield, and cyanogenic potential across three environments. The analysis showed that there are differences in physicochemical characteristics between landraces (p ≤ 0.05), which can be targeted to different industrial applications. Among landraces, dry matter (30-39%), amylose (11-19%), starch (74-80%), and reducing sugar content (1-3%) varied when expressed on a dry weight basis (p ≤ 0.05); however, only one of the six genotypes differed in crystallinity and mean starch granule particle size, while glucan chain distribution and granule morphology were the same. In contrast, the starch functionality features measured (swelling power, solubility, syneresis, and digestibility) differed (p ≤ 0.05). This was supported by partial least squares discriminant analysis (PLS-DA), which highlighted the divergence among the cassavas based on starch functionality, permitting suggestions for the targeted uses of these starches in diverse industries. The study also illustrated genotypic differences in starch yield and cyanogenic potential. Among landraces, Kiroba showed the potential for maximum starch yield (12.8 t ha⁻¹), followed by Msenene (12.3 t ha⁻¹) and, third, Kilusungu (10.2 t ha⁻¹). The cyanide content of the cassava landraces was between 15 and 800 ppm across all trial sites. GGE biplot analysis further confirmed that Kiroba was a superior cultivar in terms of starch yield. Kilusungu had the highest cyanide content and an average starch yield; therefore, it can also be suitable for use in starch production.
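
As an illustration of the PLS-DA step mentioned above, the sketch below discriminates landraces from functionality measurements (swelling power, solubility, syneresis, digestibility). The data, labels, and component count are synthetic placeholders, not the study's dataset:

```python
# PLS-DA as PLS regression on one-hot class labels, with class assignment
# by the largest predicted response column.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 4))          # 4 starch functionality features
labels = rng.integers(0, 6, size=60)  # 6 landraces
Y = np.eye(6)[labels]                 # one-hot encoding of the landrace labels

Xs = StandardScaler().fit_transform(X)
pls = PLSRegression(n_components=2)
pls.fit(Xs, Y)

pred = pls.predict(Xs).argmax(axis=1)  # assign each sample to a landrace
print("training accuracy:", (pred == labels).mean())
```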

Keywords: cyanogen, cassava starch, food security, starch yield

Procedia PDF Downloads 195
169 Hydrological-Economic Modeling of Two Hydrographic Basins of the Coast of Peru

Authors: Julio Jesus Salazar, Manuel Andres Jesus De Lama

Abstract:

There are very few models that serve to analyze the use of water in the socio-economic process. On the supply side, the joint use of groundwater has been considered in addition to the simple limits on the availability of surface water. In addition, we have worked on waterlogging and its effects on water quality (mainly salinity). In this paper, a 'complex' water economy is examined: one in which demands grow differentially not only within but also between sectors, and one in which there are limited opportunities to increase consumptive use. In particular, high-value growth (the growth in production of high-value irrigated crops within the case-study basins), together with rapidly growing urban areas, provides a rich context for examining the general problem of water management at the basin level. At the same time, long-term natural aridity has made the eco-environment in the basins located on the coast of Peru very vulnerable, and the exploitation and immediate use of water resources have further deteriorated the situation. The presented methodology is optimization with embedded simulation: basin-wide simulation of flows, water balances, and crop growth is embedded within the optimization of water allocation, reservoir operation, and irrigation scheduling. The modeling framework is developed from a network of river basins that includes multiple supply nodes (reservoirs, aquifers, watercourses, etc.) and multiple demand sites along the river, including sites of consumptive use for agricultural, municipal, and industrial purposes, as well as instream water uses, on the coast of Peru. The economic benefits associated with water use are evaluated for different demand management instruments, including water rights, based on the production and benefit functions of water use in the urban, agricultural, and industrial sectors. This work represents a new effort to analyze water use at the regional level and to evaluate the modernization of integrated water resources management and socio-economic territorial development in Peru. It will also allow the establishment of policies to improve the implementation of integrated water resources management and development. Input-output analysis is essential for presenting a theory of the production process based on a particular type of production function. This work also presents the Computable General Equilibrium (CGE) version of the economic model for water resource policy analysis, which was specifically designed for analyzing large-scale water management. As the platform for CGE simulation, GEMPACK, a flexible system for solving CGE models, is used to formulate and solve the CGE model through the percentage-change approach. GEMPACK automates the process of translating the model specification into a model solution program.
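
To give a concrete flavour of the allocation core inside such an optimization-with-embedded-simulation framework, the toy sketch below allocates water from a single source among agricultural, municipal, and industrial demand nodes to maximize total benefit; all coefficients are illustrative assumptions, not values from the study:

```python
# Toy linear-programming water allocation: maximize sum of benefits subject
# to a total supply constraint and per-node demand caps.
from scipy.optimize import linprog

benefit = [0.8, 1.5, 1.2]   # net benefit per unit water (ag, mun, ind)
demand_cap = [60, 30, 25]   # maximum deliverable volume per node
supply = 80                 # available surface + groundwater volume

# linprog minimizes, so benefits are negated; one row: total use <= supply
res = linprog(c=[-b for b in benefit],
              A_ub=[[1, 1, 1]], b_ub=[supply],
              bounds=list(zip([0, 0, 0], demand_cap)))
print("allocation (ag, mun, ind):", res.x, "total benefit:", -res.fun)
```

A full basin model would replace the single constraint with a node-link network over time steps, with the simulated hydrology supplying the right-hand sides.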

Keywords: water economy, simulation, modeling, integration

Procedia PDF Downloads 128
168 Comparison of Equivalent Linear and Non-Linear Site Response Model Performance in Kathmandu Valley

Authors: Sajana Suwal, Ganesh R. Nhemafuki

Abstract:

Evaluation of the ground response under earthquake shaking is crucial in geotechnical earthquake engineering. Damage due to seismic excitation is mainly correlated with local geological and geotechnical conditions. It is evident from past earthquakes (e.g., 1906 San Francisco, USA; 1923 Kanto, Japan) that the local geology has a strong influence on the amplitude and duration of ground motions. Since then, significant studies have been conducted on ground motion amplification, revealing the importance of the influence of local geology. Observations from damaging earthquakes (e.g., Niigata and San Francisco, 1964; Irpinia, 1980; Mexico, 1985; Kobe, 1995; L’Aquila, 2009) revealed that non-uniform damage patterns, particularly in soft fluvio-lacustrine deposits, are due to the local amplification of seismic ground motion. Non-uniform damage patterns were also observed in the Kathmandu Valley during the 1934 Bihar-Nepal earthquake and the recent 2015 Gorkha earthquake, seemingly due to the modification of earthquake ground motion parameters. In this study, the site effects resulting from the amplification of soft soil in Kathmandu are presented. A large amount of subsoil data was collected and used to define an appropriate subsoil model for the Kathmandu valley. A comparative study of one-dimensional total-stress equivalent linear and non-linear site response is performed using four strong ground motions for six sites in the Kathmandu valley. In general, one-dimensional (1D) site-response analysis involves exciting a soil profile with the horizontal component of motion and calculating the response at individual soil layers. In the present study, both equivalent linear and non-linear site response analyses were conducted using the computer program DEEPSOIL. The results show that there is no significant deviation between the equivalent linear and non-linear site response models until the maximum strain reaches 0.06-0.1%. Overall, it is clearly observed from the results that the non-linear site response model performs better than the equivalent linear model. However, significant deviation between the two models results from other influencing factors, such as the assumptions made in 1D site response analysis, the lack of accurate shear wave velocity values, and the nonlinear properties of the soil deposit. The results are also presented in terms of amplification factors, which are predicted to be around four times higher for the non-linear analysis than for the equivalent linear analysis. Hence, the nonlinear behavior of the soil underscores the urgent need to study the dynamic characteristics of the soft soil deposit so that site-specific design spectra can be derived for the Kathmandu valley, enabling structures resilient to future damaging earthquakes.
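
As background for the amplification factors discussed above, the small-strain (linear) amplification of a uniform damped soil layer of thickness H and shear wave velocity Vs over rigid rock is classically approximated as follows; this is a textbook result, not a formula taken from the abstract:

```latex
|F(\omega)| \approx \frac{1}{\sqrt{\cos^{2}\!\left(\omega H / V_s\right) + \left(\xi\,\omega H / V_s\right)^{2}}},
\qquad f_0 = \frac{V_s}{4H}
% xi is the damping ratio; peaks occur near the natural frequencies
% f_n = (2n+1) V_s / (4H). Equivalent linear analysis iterates the shear
% modulus G and damping xi until they are compatible with the effective
% shear strain, whereas non-linear analysis integrates the constitutive
% response directly in the time domain.
```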

Keywords: deep soil, equivalent linear analysis, non-linear analysis, site response

Procedia PDF Downloads 271