Search results for: preprocessing techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1045


835 Critical Approach to Define the Architectural Structure of a Health Prototype in a Rural Area of Brazil

Authors: Domenico Chizzoniti, Monica Moscatelli, Letizia Cattani, Luca Preis

Abstract:

A primary healthcare facility in developing countries should be a multifunctional space able to respond to different requirements: Flexibility, modularity, aggregation and reversibility. These basic features could be better satisfied if applied to an architectural artifact that complies with the typological, figurative and constructive aspects of the context in which it is located. Therefore, the purpose of this paper is to identify a procedure that can define the figurative aspects of the architectural structure of the health prototype for the marginal areas of developing countries through a critical approach. The application context is the rural areas of the Northeast of Bahia in Brazil. The prototype should be located in the rural district of Quingoma, in the municipality of Lauro de Freitas, a particular place where there is still a cultural fusion of black and indigenous populations. Based on the historical analysis of settlement strategies and architectural structures in spaces of public interest or collective use, this paper aims to provide a procedure able to identify the categories and rules underlying typological and figurative aspects, in order to detect significant and generalizable elements, as well as materials and constructive techniques typically adopted in the rural areas of Brazil. The object of this work is therefore not only the recovery of certain constructive approaches but also the development of a procedure that integrates the requirements of the primary healthcare prototype with its surrounding economic, social, cultural, settlement and figurative conditions.

Keywords: architectural typology, developing countries, local construction techniques, primary health care.

Procedia PDF Downloads 288
834 Numerical Modelling of Immiscible Fluids Flow in Oil Reservoir Rocks during Enhanced Oil Recovery Processes

Authors: Zahreddine Hafsi, Manoranjan Mishra, Sami Elaoud

Abstract:

Ensuring the maximum recovery rate of oil from reservoir rocks is a challenging task that requires preliminary numerical analysis of the different techniques used to enhance the recovery process. After conventional oil recovery processes, and in order to retrieve the oil left behind after the primary recovery phase, water flooding is one of several techniques used for enhanced oil recovery (EOR). In this research work, EOR via water flooding is numerically modeled, and the hydrodynamic instabilities resulting from immiscible oil-water flow in reservoir rocks are investigated. An oil reservoir is a porous medium consisting of many fractures of tiny dimensions. For modeling purposes, the oil reservoir is considered as a collection of capillary tubes, which provides useful insights into how fluids behave in the reservoir pore spaces. Equations governing oil-water flow in oil reservoir rocks are developed and numerically solved following a finite element scheme. Numerical results are obtained using the COMSOL Multiphysics software. The two-phase Darcy module of COMSOL Multiphysics allows modelling the imbibition process by the injection of water (as the wetting phase) into an oil reservoir. The Van Genuchten, Brooks-Corey and Leverett models were considered as retention models, the resulting flow configurations were compared, and the governing parameters are discussed. For the considered retention models, it was found that the onset of instabilities, viz. the fingering phenomenon, is highly dependent on the capillary pressure as well as on the boundary conditions, i.e., the inlet pressure and the injection velocity.
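The retention models named in the abstract relate effective saturation to suction head; they can be sketched as follows (all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def van_genuchten_se(h, alpha, n):
    """Van Genuchten model: Se = [1 + (alpha*|h|)^n]^(-m), with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * np.abs(h)) ** n) ** (-m)

def brooks_corey_se(h, h_b, lam):
    """Brooks-Corey model: Se = (h_b/|h|)^lam for |h| > h_b, else 1."""
    h = np.abs(np.asarray(h, dtype=float))
    return np.where(h > h_b, (h_b / np.maximum(h, 1e-12)) ** lam, 1.0)

# Illustrative parameters and suction-head range (m)
heads = np.linspace(0.0, 10.0, 201)
se_vg = van_genuchten_se(heads, alpha=2.0, n=2.5)
se_bc = brooks_corey_se(heads, h_b=0.3, lam=2.0)
```

Both curves start at full saturation and decay with increasing suction; the capillary pressure entering the two-phase Darcy equations follows from inverting these relations.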

Keywords: capillary pressure, EOR process, immiscible flow, numerical modelling

Procedia PDF Downloads 110
833 Noise Mitigation Techniques to Minimize Electromagnetic Interference/Electrostatic Discharge Effects for the Lunar Mission Spacecraft

Authors: Vabya Kumar Pandit, Mudit Mittal, N. Prahlad Rao, Ramnath Babu

Abstract:

TeamIndus is the only Indian team competing for the Google Lunar XPRIZE (GLXP). The GLXP is a global competition challenging private entities to soft-land a rover on the moon, travel a minimum of 500 meters and transmit high-definition images and videos to Earth. Towards this goal, the TeamIndus strategy is to design and develop a lunar lander that will deliver a rover onto the surface of the moon to accomplish the GLXP mission objectives. This paper showcases the various system-level noise control techniques adopted by the Electrical Distribution System (EDS) team to achieve the required Electromagnetic Compatibility (EMC) of the spacecraft. The design guidelines followed to control Electromagnetic Interference (EMI) through proper electronic package design, grounding, shielding, filtering, and cable routing within the stipulated mass budget are explained. The paper also deals with the challenges of achieving electromagnetic cleanliness in the presence of various Commercial Off-The-Shelf (COTS) and in-house developed components. The methods of minimizing Electrostatic Discharge (ESD) by identifying the potential noise sources and the areas susceptible to charge accumulation, and the methodology to prevent arcing inside the spacecraft, are explained. The paper then provides the EMC requirements matrix derived from the mission requirements to meet the overall electromagnetic compatibility of the spacecraft.

Keywords: electromagnetic compatibility, electrostatic discharge, electrical distribution systems, grounding schemes, light weight harnessing

Procedia PDF Downloads 273
832 Evaluation of Newly Synthesized Steroid Derivatives Using In silico Molecular Descriptors and Chemometric Techniques

Authors: Milica Ž. Karadžić, Lidija R. Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Z. Kovačević, Anamarija I. Mandić, Katarina Penov-Gaši, Andrea R. Nikolić, Aleksandar M. Oklješa

Abstract:

This study considered the selection of in silico molecular descriptors and models for the description of newly synthesized steroid derivatives and their characterization using chemometric techniques. Multiple linear regression (MLR) models were established and yielded the best molecular descriptors for quantitative structure-retention relationship (QSRR) modeling of the retention of the investigated molecules. The MLR models were free of multicollinearity among the selected molecular descriptors according to the variance inflation factor (VIF) values. The used molecular descriptors were ranked using the generalized pair correlation method (GPCM). With this method, significant differences between independent variables can be detected even when their correlations with the dependent variable are almost equal. The generated MLR models were statistically validated and cross-validated, and the best models were kept. The models were ranked using the sum of ranking differences (SRD) method. With this method, the most consistent QSRR model can be found, and the similarity or dissimilarity between the models can be assessed. In this study, SRD was performed using the average values of the experimentally observed data as a golden standard. The chemometric analysis was conducted in order to characterize the newly synthesized steroid derivatives for further investigation regarding their potential biological activity and further synthesis. This article is based upon work from COST Action CM1105, supported by COST (European Cooperation in Science and Technology).
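The VIF check used to rule out multicollinearity can be sketched as follows: each descriptor column is regressed on the remaining ones, and VIF_j = 1/(1 - R_j^2). The random descriptor matrix is purely illustrative, not the paper's data:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of descriptor matrix X.
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j on the remaining columns (with an intercept)."""
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.dot(resid) / ((y - y.mean()) ** 2).sum()
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

# Illustrative random descriptors: independent columns give VIFs near 1
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))
print(vif(X))
```

A common rule of thumb flags VIF values above 5 or 10 as signs of multicollinearity; values near 1 indicate descriptors that are essentially uncorrelated.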

Keywords: generalized pair correlation method, molecular descriptors, regression analysis, steroids, sum of ranking differences

Procedia PDF Downloads 322
831 Comparative Evaluation of Vanishing Interfacial Tension Approach for Minimum Miscibility Pressure Determination

Authors: Waqar Ahmad Butt, Gholamreza Vakili Nezhaad, Ali Soud Al Bemani, Yahya Al Wahaibi

Abstract:

Minimum miscibility pressure (MMP) plays a great role in determining the displacement efficiency of different gas injection processes. Experimental techniques for MMP determination include the industrially recommended slim tube, vanishing interfacial tension (VIT) and the rising bubble apparatus (RBA). In this paper, an MMP measurement study using the slim tube and VIT experimental techniques for two different crude oil samples (M and N), both in live and stock tank oil forms, is presented. The VIT-measured MMP values for both 'M' and 'N' live crude oils were close to the slim tube determined MMP values, with 6.4% and 5% deviation, respectively. For both oil samples in stock tank oil form, however, the VIT-measured MMP showed a higher, unacceptable deviation from the slim tube determined MMP. This larger difference appears to be related to the high content of stabilized heavier fractions in the crude oil and the lack of multiple-contact miscibility. None of the nine deployed crude oil and CO2 MMP correlations could produce a reliable MMP close to the slim tube determined value. Since the VIT-determined MMP values for both considered live crude oils closely match the slim tube determined MMP values, VIT is confirmed as a reliable, reproducible, rapid and cheap alternative for live crude oil MMP determination. VIT MMP determination for the stock tank oil case, however, needs further investigation of the stabilization/destabilization mechanism of the oil heavier ends and of the development of multiple-contact miscibility.

Keywords: minimum miscibility pressure, interfacial tension, multiple contacts miscibility, heavier ends

Procedia PDF Downloads 250
830 Teacher Training Course: Conflict Resolution through Mediation

Authors: Csilla Marianna Szabó

Abstract:

In Hungary, society has changed a lot over the past 25 years, and these changes can be detected in educational situations as well. The number and the intensity of conflicts have increased in most fields of life, including schools. Teachers find it difficult to handle school conflicts. What is more, the new net generation, Generation Z, has values and behavioural patterns different from those of the previous one, which might generate more serious conflicts at school, especially with teachers who were mainly socialised in traditional teacher-student relationships. In Hungary, bill CCIV of 2011 declared the foundation of Institutes of Teacher Training in higher education institutions. One of the tasks of these Institutes is to survey the competences and needs of teachers working in public education and to provide further training and services for them according to their needs and requirements. This work is supported by the Social Renewal Operative Program 4.1.2.B. The Institute of Teacher Training at the College of Dunaújváros, Hungary carried out a questionnaire survey of the needs and requirements of teachers working in the Central Transdanubian region. Based on the results, the professors of the Institute of Teacher Training decided to meet the requirements of teachers and launch short courses in spring 2015. One of the courses will focus on school conflict management through mediation. The aim of the pilot course is to provide conflict management techniques for teachers by presenting different mediation techniques to them. The theoretical part of the course (5 hours) will enable participants to understand the main points and the advantages of mediation, while the practical part (10 hours) will involve teachers in role plays to learn how to cope with conflict situations by applying mediation.
We hope that if conflicts can be reduced, the school atmosphere will be influenced in a positive way and the teaching-learning process can become more successful and effective.

Keywords: conflict resolution, generation Z, mediation, teacher training

Procedia PDF Downloads 383
829 Influence of the Cooking Technique on the Iodine Content of Frozen Hake

Authors: F. Deng, R. Sanchez, A. Beltran, S. Maestre

Abstract:

The high nutritional value associated with seafood is related to the presence of essential trace elements. Moreover, seafood is considered an important source of energy, proteins, and long-chain polyunsaturated fatty acids. Generally, seafood is consumed cooked; consequently, its nutritional value could be degraded. Seafood, such as fish, shellfish, and seaweed, can be considered one of the main iodine sources. Deficient or excessive consumption of iodine can cause dysfunction and pathologies related to the thyroid gland. The main objective of this work is to evaluate iodine stability in hake (Merluccius) subjected to different culinary techniques. The culinary processes considered were: boiling, steaming, microwave cooking, baking, cooking en papillote (a twisted cover with the shape of a sweet wrapper) and coating with a batter of flour and deep-frying. The determination of iodine was carried out by Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Regarding sample handling strategies, liquid-liquid extraction has been demonstrated to be a powerful pre-concentration and clean-up approach for trace metal analysis by ICP techniques. Extraction with tetramethylammonium hydroxide (the TMAH reagent) was used as the sample preparation method in this work. Based on the results, it can be concluded that iodine is degraded by the cooking processes. The greatest degradation was observed for the boiling and microwave cooking processes, for which the iodine content of hake decreased by up to 60% and 52%, respectively. However, if the boiling cooking liquid is preserved, the loss generated during cooking is reduced. Only when the fish was cooked following the en papillote process was the iodine content preserved.

Keywords: cooking process, ICP-MS, iodine, hake

Procedia PDF Downloads 117
828 Expert System for Road Bridge Constructions

Authors: Michael Dimmer, Holger Flederer

Abstract:

The basis of realizing a construction project is a technically flawless concept which satisfies conditions regarding the environment and costs, as well as static-constructional requirements. The presented software system actively supports civil engineers during the development of optimal designs by giving advice regarding durability, life-cycle costs, sustainability and much more. A major part of the surrounding conditions of a design process is gathered and assimilated by experienced engineers subconsciously. It is a question of eligible building techniques and their practicability, considering the emerging costs. Planning engineers have acquired much of this experience during their professional lives and use it in their daily work. Occasionally, the planning engineer should step back from his experience to be open to new and better solutions which meet the functional demands as well. The developed expert system gives planning engineers recommendations for preferred design options for new constructions as well as for existing bridge constructions. It is possible to analyze construction elements and techniques regarding sustainability and life-cycle costs. This way, the software provides recommendations for future constructions. Furthermore, there is an option to assess existing road bridges specifically for heavy-duty transport. This implies a route planning tool that gives quick and reliable information as to whether the bridge support structures of a transport route have been dimensioned sufficiently for a certain heavy-duty transport. The use of this expert system in bridge planning companies and building authorities will save costs massively for new and existing bridge constructions. This is achieved by consistently considering parameters like life-cycle costs and sustainability in its planning recommendations.

Keywords: expert system, planning process, road bridges, software system

Procedia PDF Downloads 256
827 The Role of Vibro-Stone Column for Enhancing the Soft Soil Properties

Authors: Mohsen Ramezan Shirazi, Orod Zarrin, Komeil Valipourian

Abstract:

This study investigated the behavior of soft soils improved through the vibro replacement technique, considering their settlements and consolidation rates, the applicability of the technique to various types of soils, and settlement and bearing capacity calculations.

Keywords: bearing capacity, expansive clay, stone columns, vibro techniques

Procedia PDF Downloads 555
826 The Application of Lesson Study Model in Writing Review Text in Junior High School

Authors: Sulastriningsih Djumingin

Abstract:

This study has three objectives. First, it aims to describe the ability of second-grade students at SMPN 18 Makassar to write review text without applying the Lesson Study model. Second, it seeks to describe their ability to write review text when the Lesson Study model is applied. Third, it aims to test the effectiveness of the Lesson Study model for teaching the writing of review text at SMPN 18 Makassar. This research used a true experimental, posttest-only group design involving two groups: one control class and one experimental class. The research population comprised all 250 second-grade students at SMPN 18 Makassar, spread across 8 classes, and purposive sampling was used. The control class was VIII2, consisting of 30 students, while the experimental class was VIII8, also consisting of 30 students. The research instruments were observations and tests. The collected data were analyzed using descriptive and inferential statistical techniques, with t-tests processed using SPSS 21 for Windows. The results show that: (1) of the 30 students in the control class, only 14 (47%) obtained a score above 7.5, categorized as inadequate; (2) in the experimental class, 26 (87%) students obtained a score of 7.5, categorized as adequate; (3) the Lesson Study model is effective when applied to the writing of review text. The comparison of the control and experimental classes indicates that the t-count exceeds the t-table value (2.411 > 1.667), so the alternative hypothesis (H1) proposed by the researcher is accepted.
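The reported comparison is an independent-samples t-test with pooled variance; a minimal sketch of that statistic follows. The score arrays are illustrative only; the study's own data yielded t-count = 2.411 against a t-table value of 1.667:

```python
import numpy as np

def pooled_t(a, b):
    """Independent-samples t statistic with pooled variance,
    the test the study ran in SPSS to compare the two classes."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1.0 / na + 1.0 / nb))

# Illustrative scores, not the study's data
experimental = np.array([7.8, 8.1, 7.6, 8.4, 7.9, 8.0])
control = np.array([7.1, 7.4, 6.9, 7.3, 7.0, 7.2])
t_count = pooled_t(experimental, control)
print(t_count > 1.667)   # H1 accepted when t-count exceeds t-table
```

With the study's sample sizes (30 + 30), the degrees of freedom are 58, which matches a one-tailed 5% critical value of about 1.67.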

Keywords: application, lesson study, review text, writing

Procedia PDF Downloads 180
825 Homogenization of a Non-Linear Problem with a Thermal Barrier

Authors: Hassan Samadi, Mustapha El Jarroudi

Abstract:

In this work, we consider the homogenization of a non-linear problem in a periodic medium with two periodic connected media exchanging a heat flux through their common interface. The interfacial exchange coefficient λ is assumed to tend to zero or to infinity at a rate λ=λ(ε) as the size ε of the basic cell tends to zero. Three homogenized problems are determined according to critical values depending on λ and ε. Our method is based on Γ-convergence techniques.

Keywords: variational methods, epiconvergence, homogenization, convergence technique

Procedia PDF Downloads 498
824 Competing Risks Modeling Using within Node Homogeneity Classification Tree

Authors: Kazeem Adesina Dauda, Waheed Babatunde Yahya

Abstract:

To design a tree that maximizes within-node homogeneity, a homogeneity measure appropriate for event history data with multiple risks is needed. We consider the use of deviance and modified Cox-Snell residuals as impurity measures in Classification and Regression Trees (CART) and compare our results with those of Fiona (2008), in which the homogeneity measures were based on the martingale residual. A data structure approach was used to validate the performance of our proposed techniques via simulation and real-life data. The results for univariate competing risks revealed that using deviance and Cox-Snell residuals as the response in within-node homogeneity classification trees performs better than using other residuals, irrespective of the performance technique. Bone marrow transplant data and a double-blinded randomized clinical trial, conducted to compare two treatments for patients with prostate cancer, were used to demonstrate the efficiency of our proposed method vis-à-vis the existing ones. Results from the empirical studies of the bone marrow transplant data showed that the proposed model with the Cox-Snell residual (deviance = 16.6498) performs better than both the martingale residual (deviance = 160.3592) and the deviance residual (deviance = 556.8822) for both the event of interest and the competing risks. Results from the prostate cancer data likewise reveal the better performance of the proposed model over the existing one for both causes; interestingly, the Cox-Snell residual (MSE = 0.01783563) outperforms both the martingale residual (MSE = 0.1853148) and the deviance residual (MSE = 0.8043366). Moreover, these results validate those obtained from the Monte Carlo studies.
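The residual types being compared are linked by a standard transformation: assuming the usual survival-analysis definitions (not code from the paper), the deviance residual can be computed from the martingale residual m and the event indicator delta as follows:

```python
import numpy as np

def deviance_from_martingale(m, delta):
    """Deviance residuals from martingale residuals m and event
    indicators delta (1 = event, 0 = censored), via
    d_i = sign(m_i) * sqrt(-2 * [m_i + delta_i * log(delta_i - m_i)])."""
    m = np.asarray(m, dtype=float)
    delta = np.asarray(delta, dtype=float)
    # the log term contributes only for observed events (delta = 1)
    inner = m + np.where(delta > 0,
                         delta * np.log(np.where(delta > 0, delta - m, 1.0)),
                         0.0)
    return np.sign(m) * np.sqrt(-2.0 * inner)

# A censored subject with cumulative hazard 0.5 (m = -0.5) and
# an event whose martingale residual is 0 (a perfect fit)
print(deviance_from_martingale([-0.5, 0.0], [0, 1]))
```

The deviance transform symmetrizes the heavily skewed martingale residuals, which is one reason it can behave differently as an impurity measure.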

Keywords: within-node homogeneity, Martingale residual, modified Cox-Snell residual, classification and regression tree

Procedia PDF Downloads 245
823 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine learning techniques for estimating aqueous glucose concentrations.
Research Aim: The aim of this study is to compare and evaluate different machine learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep learning model for classifying NIR spectra.
Methodology: The research methodology involves machine learning and deep learning techniques. Six machine learning regression models, namely support vector machine regression (SVMR), partial least squares regression, extra trees regression (ETR), random forest regression, extreme gradient boosting, and a principal component analysis-neural network (PCA-NN), are employed to predict glucose concentration. The NIR spectra data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra.
Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy.
Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of otherwise indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features.
Data Collection and Analysis Procedures: The NIR spectra and corresponding reference glucose concentrations are measured in increments of 20 mg/dl. The data are randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure.
Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra.
Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 64
822 Experimental Study of the Behavior of Elongated Non-spherical Particles in Wall-Bounded Turbulent Flows

Authors: Manuel Alejandro Taborda Ceballos, Martin Sommerfeld

Abstract:

Transport phenomena and the dispersion of non-spherical particles in turbulent flows are found everywhere in industrial applications and processes. Powder handling, pollution control, pneumatic transport and particle separation are just some examples where the particles encountered are not only spherical. These multiphase flows are wall-bounded and mostly highly turbulent. The particles found in these processes are rarely spherical and may have various shapes (e.g., fibers and rods). Although research on the behavior of regular non-spherical particles in turbulent flows has been carried out for many years, it is still necessary to refine the models, especially near walls, where the fiber-wall interaction completely changes the particle behavior. Imaging-based experimental studies on dispersed particle-laden flows have been applied for many decades for detailed experimental analysis. These techniques have the advantage of providing field information in two or three dimensions, but they have a lower temporal resolution compared to point-wise techniques such as PDA (phase-Doppler anemometry) and its derivatives. The imaging techniques applied to dispersed two-phase flows are extensions of classical PIV (particle image velocimetry) and PTV (particle tracking velocimetry), with the main emphasis on the simultaneous measurement of the velocity fields of both phases. Such data should likewise provide adequate information for validating the proposed models. Available experimental studies on the behavior of non-spherical particles are uncommon and mostly based on planar light-sheet measurements. For elongated non-spherical particles, however, three-dimensional measurements are needed to fully describe their motion and to provide sufficient information for the validation of numerical computations.
To provide detailed experimental results allowing the validation of numerical calculations of non-spherical particle dispersion in turbulent flows, a test facility was built around a horizontal closed water channel. Into this horizontal main flow, a small cross-jet laden with fiber-like particles, driven solely by gravity, was injected. The dispersion of the fibers was measured by applying imaging techniques based on an LED array for backlighting and high-speed cameras. For obtaining the fluid velocity fields, an almost neutrally buoyant tracer was used. The discrimination between tracer and fibers was done based on image size, which was also the basis for determining the fiber orientation with respect to the inertial coordinate system. The synchronous measurement of fluid velocity and fiber properties also allows the collection of statistics on fiber orientation, the velocity fields of tracer and fibers, the angular velocity of the fibers, and the orientation between the fiber and the instantaneous relative velocity. Consequently, an experimental study of the behavior of elongated non-spherical particles in wall-bounded turbulent flows was achieved. A comprehensive analysis was accomplished, especially in the near-wall region, where hydrodynamic wall interaction effects (e.g., collision or lubrication) and abrupt changes of particle rotational velocity occur. This allows the behavior of non-spherical particles to be predicted numerically afterwards within the frame of the Euler/Lagrange approach, where the particles are treated as 'point-particles'.
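The phase discrimination and orientation steps can be sketched as follows: small blobs are treated as tracer, large elongated ones as fibers, and the in-plane orientation follows from second-order image moments. The area threshold and pixel lists are illustrative assumptions, not the paper's processing code:

```python
import numpy as np

def classify_blob(area_px, tracer_max_px=20):
    """Tracer particles image as small blobs, fibers as larger elongated
    ones, so an area threshold separates the two phases."""
    return "tracer" if area_px <= tracer_max_px else "fiber"

def blob_orientation(xs, ys):
    """In-plane orientation of an elongated blob from its centered
    second-order moments: theta = 0.5 * atan2(2*mu11, mu20 - mu02)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    xc, yc = xs.mean(), ys.mean()
    mu20 = ((xs - xc) ** 2).mean()
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

# A fiber imaged as pixels along a 45-degree line
theta = blob_orientation([0, 1, 2, 3, 4], [0, 1, 2, 3, 4])
print(np.degrees(theta))
```

Pairing the orientation angle from consecutive frames also yields the angular velocity statistics mentioned in the abstract.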

Keywords: crossflow, non-spherical particles, particle tracking velocimetry, PIV

Procedia PDF Downloads 61
821 A QoS Aware Cluster Based Routing Algorithm for Wireless Mesh Network Using LZW Lossless Compression

Authors: J. S. Saini, P. P. K. Sandhu

Abstract:

The multi-hop nature of Wireless Mesh Networks (WMNs) and the rapid growth of throughput demands result in multi-channel and multi-radio structures in mesh networks, but co-channel interference reduces the total throughput, especially in multi-hop networks. Quality of Service (QoS) refers to a broad collection of networking technologies and techniques that guarantee the ability of a network to make desired services available with predictable results. QoS can be directed at a network interface, at the performance of a specific server or router, or at specific applications. Due to interference among the various transmissions, QoS routing in multi-hop wireless networks is a formidable task; in a multi-channel wireless network, two transmissions using the same channel may interfere with each other. This paper considers the Destination-Sequenced Distance Vector (DSDV) routing protocol to locate a secure and optimized path. The proposed technique also utilizes Lempel-Ziv-Welch (LZW) lossless data compression and intra-cluster data aggregation to enhance the communication between the source and the destination. Clustering makes it possible to aggregate multiple packets and to locate a single route through the clusters, improving intra-cluster data aggregation. The LZW-based lossless data compression reduces the data packet size, and hence less energy is consumed, increasing the network QoS. The MATLAB tool has been used to evaluate the effectiveness of the proposed technique. The comparative analysis has shown that the proposed technique outperforms the existing techniques.
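The LZW compression used to shrink the packets works by emitting dictionary codes for the longest already-seen prefix; a minimal textbook implementation (not the paper's MATLAB code) looks like this:

```python
def lzw_compress(data: str) -> list[int]:
    """LZW: emit the dictionary code of the longest known prefix,
    then extend the dictionary with prefix + next character."""
    table = {chr(i): i for i in range(256)}
    w, out = "", []
    for c in data:
        wc = w + c
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)
            w = c
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> str:
    """Rebuild the dictionary on the fly; the only special case is a
    code referencing the entry currently being constructed."""
    table = {i: chr(i) for i in range(256)}
    w = chr(codes[0])
    out = [w]
    for k in codes[1:]:
        entry = table[k] if k in table else w + w[0]
        out.append(entry)
        table[len(table)] = w + entry[0]
        w = entry
    return "".join(out)
```

Because sensor-style payloads are repetitive, the code stream is typically much shorter than the input, which is the source of the energy saving claimed above.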

Keywords: WMNs, QoS, flooding, collision avoidance, LZW, congestion control

Procedia PDF Downloads 311
820 Methylene Blue Removal Using NiO nanoparticles-Sand Adsorption Packed Bed

Authors: Nedal N. Marei, Nashaat Nassar

Abstract:

Many treatment techniques have been used to remove soluble pollutants from wastewater, such as dyes and metal ions, which are found in large amounts in the effluents of the textile and tannery industries. The effluents from these industries are complex, containing a wide variety of dyes and other contaminants, such as dispersants, acids, bases, salts, detergents, humectants, oxidants, and others. These treatment techniques can be divided into physical, chemical, and biological methods. Adsorption has been developed as an efficient method for the removal of heavy metals from contaminated water and soil, and it is now recognized as an effective method for the removal of both organic and inorganic pollutants from wastewater. Nanosize materials are new functional materials which offer a high surface area and have emerged as effective adsorbents. Nano-alumina is one of the most important ceramic materials, widely used as an electrical insulator, presenting exceptionally high resistance to chemical agents, as well as giving excellent performance as a catalyst for many chemical reactions, in microelectronics, in membrane applications, and in water and wastewater treatment. In this study, methylene blue (MB) dye was used as a model dye of textile wastewater to prepare synthetic MB wastewater. NiO nanoparticles were added in small percentages to the sand of packed-bed adsorption columns to remove the MB from the synthetic textile wastewater. Moreover, different parameters were evaluated: the flow rate of the synthetic wastewater, the pH, the height of the bed, and the percentage of NiO relative to the sand in the packed material. Different mathematical models were employed to find the model that properly describes the experimental data and helps to analyze the mechanism of MB adsorption. This study will provide a good understanding of dye adsorption using metal oxide nanoparticles in a classical sand bed.
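The abstract does not name the mathematical models fitted; one common choice for packed-bed breakthrough data is the Thomas model, C_t/C_0 = 1/(1 + exp(k_Th*q0*m/Q - k_Th*C0*t)). A fitting sketch with entirely illustrative parameter values (not the study's data) might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fixed column conditions (illustrative values, not from the study)
C0 = 50.0   # inlet MB concentration, mg/L
m = 5.0     # adsorbent mass, g
Q = 10.0    # flow rate, mL/min

def thomas(t, k_th, q0):
    """Thomas breakthrough model: C_t/C_0 versus time t (min),
    with rate constant k_th and adsorption capacity q0 (mg/g)."""
    return 1.0 / (1.0 + np.exp(k_th * q0 * m / Q - k_th * C0 * t))

# Synthetic breakthrough data generated from the model plus noise
t = np.linspace(0.0, 6.0, 60)
rng = np.random.default_rng(0)
obs = np.clip(thomas(t, 0.05, 200.0) + rng.normal(0, 0.01, t.size), 0.0, 1.0)

(k_fit, q_fit), _ = curve_fit(thomas, t, obs, p0=[0.03, 150.0])
print(k_fit, q_fit)
```

Comparing the fitted q0 across bed heights, flow rates, and NiO percentages is one way to quantify how each parameter affects the column capacity.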

Keywords: adsorption, column, nanoparticles, methylene blue

Procedia PDF Downloads 240
819 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents

Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty

Abstract:

A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative method of doing something, or offers a new technical perspective or solution to a problem. A patent can be obtained by making the technical information and details about the invention publicly available. The patent owner has the exclusive right to prevent or stop anyone from using the patented invention for commercial purposes. Any commercial usage, distribution, import or export of a patented invention or product requires the patent owner’s consent. It has been observed that the central and important parts of patents are written in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to provide efficient access to this knowledge via concise and transparent summaries. However, due to these complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods should not be expected to perform remarkably well when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents that uses artificial intelligence, natural language processing and deep learning techniques to condense the knowledge and essential information of a patent document into a single summary that is easier to understand, without redundant formatting and difficult jargon.

Keywords: abstractive summarization, deep learning, natural language processing, patent documents

Procedia PDF Downloads 103
818 The Impact of Culture in Teaching English, the Case Study of Preparatory School of Sciences and Techniques

Authors: Nouzha Yasmina Soulimane-Benhabib

Abstract:

Language is a medium of communication and a means of expression, which is why learning foreign languages, especially English, has become a basic necessity for every ambitious student today. It is known that culture and language are inseparable and complementary; however, in the process of teaching a foreign language, teachers have tended to focus mainly on preparing adequate syllabi for ESP students, yet other parameters should also be considered. For instance, the culture of the target language may play an important role, since students’ attitudes towards a foreign language can enhance or hinder their learning. The aim of this study is to analyse how culture can influence the teaching of a foreign language, taking English as an example, since it is considered the second foreign language in Algeria after French. The study was conducted at the Preparatory School of Sciences and Techniques, Tlemcen, where twenty-five students participated in this research. The reasons for learning English are various: it is the most widely spoken language in the world, the language of research and education, and it is used in many other fields. One important factor to take into consideration is the social distance between the culture of the Algerian learner and the culture of the target language; this gap may lead to culture shock. Two steps were followed in this research: first, data were collected from students studying at the Preparatory School by means of a questionnaire, with an interview conducted with six of them in order to reinforce the research and obtain precise results; second, these data were analysed, taking into consideration the diversity of the learners within this institution. The results obtained show that learners’ attitudes towards the English community and culture are mixed, which may influence their curiosity and motivation to learn. Despite the considerable difference between Algerian and European cultures, some students focused mainly on the benefits of the English language, since they need it in their studies, research and future careers; others, however, manifested reluctance towards the language, mainly due to the profound impact of an English culture that differs from the Algerian one.

Keywords: Algeria, culture, English, impact

Procedia PDF Downloads 363
817 Sonodynamic Activity of Porphyrins-SWCNT

Authors: F. Bosca, F. Foglietta, F. Turci, E. Calcio Gaudino, S. Mana, F. Dosio, R. Canaparo, L. Serpe, A. Barge

Abstract:

In recent years, medical science has improved chemotherapy, radiation therapy and adjuvant therapy, developed newer targeted therapies, and refined surgical techniques for removing cancer. However, the chances of surviving the disease depend greatly on the type and location of the cancer and the extent of the disease at the start of treatment. Moreover, mainstream forms of cancer treatment have side effects that range from the unpleasant to the fatal. Therefore, continued progress in anti-cancer therapy may depend on placing emphasis on other existing but less thoroughly investigated therapeutic approaches, such as Sonodynamic Therapy (SDT). SDT is based on the local activation of a so-called 'sonosensitizer', a molecule that can be excited by ultrasound; radicals are produced as a consequence of its relaxation processes, and cell death follows through different mechanisms induced by radical production. The present work deals with the synthesis, characterization and preliminary in vitro testing of Single-Walled Carbon Nanotubes (SWCNTs) decorated with porphyrins and biological vectors. The SWCNT surface was modified by exploiting 1,3-dipolar cycloaddition or Diels-Alder reactions. For this purpose, different porphyrin scaffolds were synthesized ad hoc, also using non-conventional techniques. To increase the cellular specificity of porphyrin-conjugated SWCNTs and to improve their ability to be suspended in aqueous solution, the modified nanotubes were grafted with suitable glutamine or hyaluronic acid derivatives. These nano-sized sonosensitizers were characterized by several methodologies and tested in vitro on different cancer cell lines.

Keywords: sonodynamic therapy, porphyrin synthesis and modification, SWCNT grafting, hyaluronic acid, anti-cancer treatment

Procedia PDF Downloads 368
816 Evaluation of Mechanical Behavior of Laser Cladding in Various Tilting Pad Bearing Materials

Authors: Si-Geun Choi, Hoon-Jae Park, Jung-Woo Cho, Jin-Ho Lim, Jin-Young Park, Joo-Young Oh, Jae-Il Jeong, Seock-Sam Kim, Young Tae Cho, Chan Gyu Kim, Jong-Hyoung Kim

Abstract:

The tilting pad bearing is a kind of fluid film bearing that can deliver higher speed and higher load performance than other bearings, including the rolling element bearing. Furthermore, the tilting pad bearing has many advantages, such as high stability at high speed, long life, high damping, high impact resistance and low noise. It is therefore mostly used in mid- to large-size turbomachines, despite its higher price. Recently, with laser-based manufacturing processes advancing rapidly in the mechanical industry, dissimilar metal welding employing laser techniques has been actively studied. In particular, industry has been trying to weld the white metal to the backing metal using the laser cladding method for high durability. Preceding research has shown that the laser cladding method gives much better bond strength, toughness, abrasion resistance and environmental friendliness than the centrifugal casting method. The laser cladding method therefore offers better quality, cost reduction, eco-friendliness and technological permanence than the centrifugal or gravity casting methods. In this study, we compare the mechanical properties of different bearing materials by evaluating the behavior of the laser cladding layer on various substrates (i.e., SS400, SCM440, S20C) under the same parameters. Furthermore, we analyze the porosity of the various tilting pad bearing materials on samples treated with white metal. SEM and EDS analyses and hardness tests of the three materials are presented to understand their mechanical properties and tribological behavior. W/D ratio and surface roughness results for the various materials are also reported in this study.

Keywords: laser cladding, tilting pad bearing, white metal, mechanical properties

Procedia PDF Downloads 359
815 Synthesis and Characterization of Poly(2-[[4-(Dimethylamino)Benzylidene] Amino]Phenol) in Organic Medium: Investigation of Thermal Stability, Conductivity, and Antimicrobial Properties

Authors: Nuray Yilmaz Baran, Mehmet Saçak

Abstract:

Schiff base polymers, also called poly(azomethines), are one class of conjugated polymers. They have drawn the attention of researchers in recent years due to properties such as optoelectronic, semiconductive and photovoltaic behavior, antimicrobial activity and high thermal stability. In this study, poly(2-[[4-(dimethylamino)benzylidene]amino]phenol) (P(2-DBAP)), a Schiff base polymer, was synthesized by the oxidative polycondensation reaction of 2-[[4-(dimethylamino)benzylidene]amino]phenol (2-DBAP) with the oxidants NaOCl, H₂O₂ and O₂ in various organic media. In polymerizations carried out at various temperatures and times, the conversion of monomer to polymer reached a maximum of about 93.7%. The structures of the monomer and polymer were characterized by UV-Vis, FTIR and ¹H-NMR techniques. Thermal analysis of the polymer was performed by TG-DTG and DTA techniques, and its thermal degradation behavior was supported by thermo-IR spectra recorded in the temperature range of 25-800 °C. The weight average molecular weight (Mw), number average molecular weight (Mn) and polydispersity index (PDI) of the polymer were found to be 26337 g/mol, 9860 g/mol and 2.67, respectively. The change in the electrical conductivity of P(2-DBAP) doped with iodine vapor at different temperatures and times was investigated; conductivity increased by a factor of about 10¹⁰, reaching a maximum of 2 × 10⁻⁴ S cm⁻¹ after doping for 48 h at 60 °C. The antibacterial and antifungal activities of the 2-DBAP Schiff base and its polymer were also investigated against Sarcina lutea, Enterobacter aerogenes, Escherichia coli, Enterococcus faecalis, Klebsiella pneumoniae and Bacillus subtilis, and against Candida albicans and Saccharomyces cerevisiae, respectively.

Keywords: conductive properties, polyazomethines, polycondensation reaction, Schiff base polymers, thermal stability

Procedia PDF Downloads 265
814 Fine Characterization of Glucose-Modified Human Serum Albumin by Different Biophysical and Biochemical Techniques at a Range of Glucose Concentrations

Authors: Neelofar, Khursheed Alam, Jamal Ahmad

Abstract:

Protein modification in diabetes mellitus may lead to early glycation products (EGPs), also known as Amadori products, as well as advanced glycation end products (AGEs). Early glycation involves the reaction of glucose with the N-terminal and lysyl side chain amino groups to form a Schiff base, which undergoes rearrangements to form the more stable early glycation product known as the Amadori product. After the Amadori stage, the reactions become more complicated, leading to the formation of AGEs that interact with various AGE receptors, thereby playing an important role in the long-term complications of diabetes. The Maillard reaction, or nonenzymatic glycation, accelerates in diabetes due to hyperglycemia and alters the structure and normal functions of serum proteins, leading to micro- and macrovascular complications in diabetic patients. In this study, Human Serum Albumin (HSA) at a constant concentration was incubated with different concentrations of glucose at 37 °C for a week. By the fourth day, the Amadori product had formed, as confirmed by the colorimetric NBT and TBA assays, both of which are authentic assays for early glycation products. Conformational changes in native HSA and in all the Amadori albumin samples prepared with different glucose concentrations were investigated by various biophysical and biochemical techniques. The main biophysical techniques used were hyperchromicity, fluorescence quenching, FTIR, CD and SDS-PAGE. Further conformational changes were observed by biochemical assays, mainly HMF formation, fructosamine determination, reduction of fructosamine with NaBH₄, carbonyl content estimation, lysine and arginine residue estimation, ANS binding and thiol group estimation. This study found structural and biochemical changes in Amadori-modified HSA over the normal to chronic range of glucose concentrations with respect to native HSA. As the glucose concentration was increased from the normal to the chronic range, the biochemical and structural changes also increased.
The greatest alterations in the secondary and tertiary structure and conformation of glycated HSA were observed at the highest chronic glucose concentration (75 mM). Although Amadori-modified proteins, like AGEs, have been found to be involved in the secondary complications of diabetes, very few studies have analyzed the conformational changes in Amadori-modified proteins due to early glycation. Most previous studies examined the structural changes in Amadori proteins at a single glucose concentration; no study was found that compares the biophysical and biochemical changes in HSA due to early glycation over a range of glucose concentrations at a constant incubation time. This study therefore provides information about the biochemical and biophysical changes that occur in Amadori-modified albumin over the normal to chronic range of glucose concentrations in diabetes. Many treatments currently in use, i.e., glycaemic control, insulin treatment and other chemical therapies, can control many aspects of diabetes. However, even with intensive use of current antidiabetic agents, more than 50% of type 2 diabetic patients suffer poor glycaemic control and 18% develop serious complications within six years of diagnosis. Experimental evidence related to diabetes suggests that preventing the nonenzymatic glycation of relevant proteins, or blocking their biological effects, might beneficially influence the evolution of vascular complications in diabetic patients. Alternatively, quantitation of the Amadori adduct of HSA by authentic antibodies against HSA-EGPs could be used as a marker for early detection of the initiation and progression of the secondary complications of diabetes. This research work may be helpful for the same.

Keywords: diabetes mellitus, glycation, albumin, amadori, biophysical and biochemical techniques

Procedia PDF Downloads 246
813 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer

Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom

Abstract:

Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, although the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer. These techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches, the Multi-Layer Perceptron (MLP) and the Convolutional Neural Network (CNN), as well as five machine learning algorithms, Decision Tree (C4.5), Naïve Bayes (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN) and XGBoost (eXtreme Gradient Boosting), on the Breast Cancer Wisconsin Diagnostic dataset. We carried out the process of evaluating and comparing the classifiers, selecting appropriate metrics to evaluate classifier performance and an appropriate tool to quantify that performance. The main purpose of the study is to predict and diagnose breast cancer by applying the above algorithms, and to identify the most effective one with respect to the confusion matrix, accuracy and precision. CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment using the Python programming language.
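
The evaluation metrics named above are all derived from the confusion matrix. As a minimal sketch (the counts below are hypothetical and chosen only so that the accuracy happens to match the reported 0.982456, e.g. 112 of 114 test samples correct; they are not the study's actual confusion matrix):

```python
def metrics_from_confusion(tp, fp, fn, tn):
    """Accuracy, precision and recall from a binary confusion matrix."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)   # of predicted positives, how many are correct
    recall = tp / (tp + fn)      # of actual positives, how many are found
    return accuracy, precision, recall

# hypothetical test-split counts: 70 true positives, 42 true negatives,
# 1 false positive, 1 false negative
acc, prec, rec = metrics_from_confusion(tp=70, fp=1, fn=1, tn=42)
```

Reporting precision alongside accuracy matters for diagnosis tasks, since a classifier can reach high accuracy on an imbalanced split while still producing many false positives.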

Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN

Procedia PDF Downloads 46
812 Comprehensive Risk Analysis of Decommissioning Activities with Multifaceted Hazard Factors

Authors: Hyeon-Kyo Lim, Hyunjung Kim, Kune-Woo Lee

Abstract:

The decommissioning of nuclear facilities can be said to consist of a sequence of problem-solving activities, partly because working environments may be contaminated by radiological exposure, and partly because industrial hazards such as fire, explosions, toxic materials, and electrical and physical hazards may also be present. For individual hazard factors, risk assessment techniques are becoming familiar to industrial workers as safety technology advances, but how to integrate the individual results is not well established. Furthermore, few workers have extensive past experience of decommissioning operations. Therefore, many countries around the world have been trying to develop appropriate techniques to guarantee the safety and efficiency of the process. In spite of that, neither domestic nor international standards exist, since nuclear facilities are too diverse and unique. Consequently, it is quite inevitable to anticipate and assess the whole risk of each situation one by one. This paper aimed to find an appropriate technique for integrating individual risk assessment results from the viewpoint of experts. On one hand, the whole risk assessment activity for decommissioning operations was modeled as a sequence of individual risk assessment steps; on the other, a hierarchical risk structure was developed. Then, a risk assessment procedure that can elicit individual hazard factors one by one was introduced, with reference to the standard operating procedure (SOP) and hierarchical task analysis (HTA). Under an assumption of quantification and normalization of individual risks, a technique to estimate relative weight factors was tried using the conventional Analytic Hierarchy Process (AHP), and its result was reviewed against the judgment of experts. In addition, taking the ambiguity of human judgment into consideration, a discussion based upon fuzzy inference was added, with a mathematical case study.
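
The relative weight factors that AHP produces come from a pairwise comparison matrix. A minimal sketch of the standard normalized-column approximation of the priority vector is shown below; the 3×3 judgment matrix (radiological vs. fire vs. electrical hazard, with radiological judged dominant) is hypothetical and not taken from the study:

```python
def ahp_weights(matrix):
    """Approximate AHP priority vector: normalize each column of the
    pairwise comparison matrix, then average across each row."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    norm = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(norm[i]) / n for i in range(n)]

# hypothetical pairwise judgments on Saaty's 1-9 scale:
# radiological is 3x as important as fire, 5x as important as electrical
pairwise = [
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 0.5, 1.0],
]
w = ahp_weights(pairwise)  # weights sum to 1, largest for radiological
```

In practice the principal eigenvector and a consistency ratio check would follow, but the normalized-column method is a common first approximation.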

Keywords: decommissioning, risk assessment, analytic hierarchy process (AHP), fuzzy inference

Procedia PDF Downloads 402
811 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenge of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process for revealing natural structures and identifying interesting patterns in the underlying data. This work presents an analysis of several algorithms proposed to deal with gene expression data effectively. The existing algorithms, such as the Support Vector Machine (SVM), the K-means algorithm and evolutionary algorithms, are analyzed thoroughly to identify their advantages and limitations, and their performance is evaluated to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior and processing time, a hybrid clustering-based optimization approach is proposed.
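
Of the algorithms analyzed, K-means is the simplest to state. The sketch below is a minimal, deterministic Lloyd's K-means on toy two-dimensional "expression profiles" (the points and initial centroids are made up for illustration; the study's data and parameter choices are not reproduced):

```python
def kmeans(points, centroids, iters=20):
    """Plain Lloyd's K-means on 2-D points, with fixed initial centroids
    so that the run is deterministic."""
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            d2 = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d2.index(min(d2))].append(p)
        # update step: move each centroid to its cluster mean
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters

# toy two-gene profiles forming two obvious groups
pts = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0),
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
cents, cls = kmeans(pts, [(0.0, 0.1), (1.0, 1.0)])
```

Real gene expression vectors have thousands of dimensions, where K-means' sensitivity to initialization and its spherical-cluster assumption become exactly the limitations the abstract alludes to.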

Keywords: microarray technology, gene expression data, clustering, gene selection

Procedia PDF Downloads 295
810 Compost Bioremediation of Oil Refinery Sludge by Using Different Manures in a Laboratory Condition

Authors: O. Ubani, H. I. Atagana, M. S. Thantsha

Abstract:

This study was conducted to measure the reduction in polycyclic aromatic hydrocarbon (PAH) content in oil sludge by co-composting the sludge with pig, cow, horse and poultry manures under laboratory conditions. Four kilograms of soil spiked with 800 g of oil sludge was co-composted separately with each manure in a ratio of 2:1 (w/w) spiked soil:manure, and with wood chips in a ratio of 2:1 (w/v) spiked soil:wood chips. A control was set up in the same way but without manure. The mixtures were incubated for 10 months at room temperature. Compost piles were turned weekly and the moisture level was maintained between 50% and 70%. Moisture level, pH, temperature, CO2 evolution and oxygen consumption were measured monthly, and the ash content at the end of the experiment. Bacteria capable of utilizing PAHs were isolated, purified and characterized by molecular techniques using polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE); the 16S rDNA gene was amplified using the specific primers 16S-P1 and 16S-P2, and the amplicons were sequenced. The extent of PAH reduction was measured using an automated Soxhlet extractor with dichloromethane as the extraction solvent, coupled with gas chromatography/mass spectrometry (GC/MS). The temperature did not exceed 27.5 °C in any compost heap, the pH ranged from 5.5 to 7.8, and CO2 evolution was highest in poultry manure at 18.78 µg/dwt/day. Microbial growth and activities were enhanced. The bacteria identified were Bacillus, Arthrobacter and Staphylococcus species. PAH measurements showed reductions of between 77 and 99%. The reduction observed in the control experiment may be because it was invaded by fungi. Co-composting of spiked soils with animal manures enhanced the reduction of PAHs. Interestingly, all the bacteria isolated and identified in this study were present in all treatments, including the control.

Keywords: bioremediation, co-composting, oil refinery sludge, PAHs, bacterial species, animal manures, molecular techniques

Procedia PDF Downloads 452
809 Experimental Evaluation of Electrocoagulation for Hardness Removal of Bore Well Water

Authors: Pooja Kumbhare

Abstract:

Water is an important resource for the survival of life. The inadequate availability of surface water makes people depend on groundwater to fulfil their needs. However, groundwater is generally too hard to satisfy the requirements of domestic as well as industrial applications. Removal of hardness involves various techniques, such as the lime-soda process, ion exchange, reverse osmosis, nano-filtration, distillation and evaporation. These techniques have individual problems, such as high annual operating cost, sediment formation on membranes and sludge disposal. Electrocoagulation (EC) is being explored as a modern and cost-effective technology to cope with the growing demand for high water quality at the consumer end. In general, earlier studies on electrocoagulation for hardness removal deployed batch processes. As batch processes are inappropriate for treating large volumes of water, it is essential to develop a continuous-flow EC process. So, in the present study, an attempt is made to investigate a continuous-flow EC process for decreasing the excessive hardness of bore-well water. The experimental study was conducted using 12 aluminum electrodes (25 cm × 10 cm, 1 cm thick) in an EC reactor with a volume of 8 L. A bore-well water sample, collected from a local bore well (at Vishrambag, Sangli, Maharashtra) with an average initial hardness of 680 mg/L (range: 650-700 mg/L), was used for the study. Continuous-flow electrocoagulation experiments were carried out by varying the operating parameters, specifically reaction time (range: 10-60 min), voltage (range: 5-20 V) and current (range: 1-5 A). Based on the experimental study, it is found that hardness removal to the desired extent can be achieved even in a continuous-flow EC reactor, so its use appears promising.
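
With aluminum electrodes, the theoretical coagulant dose released by the sacrificial anode follows Faraday's law, m = I·t·M/(z·F). The sketch below uses example current and time values within the operating ranges stated above; the 100% current efficiency is an assumption (real EC cells often deviate from it):

```python
def aluminum_dose_g(current_a, time_s, efficiency=1.0):
    """Theoretical mass of Al (in grams) dissolved from a sacrificial
    anode, via Faraday's law: m = (I * t * M) / (z * F)."""
    M_AL = 26.98      # g/mol, molar mass of aluminum
    Z = 3             # electrons transferred per Al3+ ion
    F = 96485.0       # C/mol, Faraday constant
    return efficiency * current_a * time_s * M_AL / (Z * F)

# e.g. 3 A applied for 30 minutes at an assumed 100% current efficiency
dose = aluminum_dose_g(3.0, 30 * 60)   # roughly half a gram of Al
```

Comparing this theoretical dose with the measured electrode mass loss is one common way to back out the actual current efficiency of the cell.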

Keywords: hardness, continuous flow EC process, aluminum electrode, optimal operating parameters

Procedia PDF Downloads 159
808 ESP: Peculiarities of Teaching Psychology in English to Russian Students

Authors: Ekaterina A. Redkina

Abstract:

The necessity and importance of teaching professionally oriented content in English needs no proof nowadays. Consequently, the ability to share personal ESP teaching experience seems of great importance. This paper is based on eight years of ESP and EFL teaching experience at the Moscow State Linguistic University, Moscow, Russia, and presents a theoretical analysis of the specifics, possible problems and perspectives of teaching psychology in English to Russian psychology students. The paper concerns issues that are common to many ESP classrooms and familiar to many teachers, among them: designing an ESP curriculum (for psychologists in this case), finding the balance between content and language in the classroom, the main teaching principles (the 4 C’s), and the choice of assessment techniques and teaching material. The main objective of teaching psychology in English to Russian psychology students is developing the knowledge and skills essential for professional psychologists. Belonging to the international professional community presupposes high-level content-specific knowledge and skills, a high level of linguistic skill and cross-cultural linguistic ability, and finally a high level of professional etiquette. Thus, teaching psychology in English pursues three main outcomes: content, language and professional skills. The paper explains each of these outcomes, and examples are given. Particular attention is paid to the lesson structure, its objectives and the difference between a typical EFL and ESP lesson. An attempt is also made to find commonalities between teaching ESP and CLIL. One view holds that CLIL is more common in schools, while ESP is more common in higher education. The paper argues that CLIL methodology can be successfully used in ESP teaching and that many CLIL activities are also well adapted to professional purposes.
The research paper provides insights into the process of teaching psychologists in Russia, real teaching experience and teaching techniques that have proved efficient over time.

Keywords: ESP, CLIL, content, language, psychology in English, Russian students

Procedia PDF Downloads 583
807 A 0-1 Goal Programming Approach to Optimize the Layout of Hospital Units: A Case Study in an Emergency Department in Seoul

Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee

Abstract:

This paper proposes a method to optimize the layout of an emergency department (ED) based on real executions of care processes, considering several planning objectives simultaneously. Recently, demand for healthcare services has increased dramatically. As demand increases, so does the need for new healthcare buildings as well as for redesigning and renovating existing ones. The value of implementing a standard set of engineering facility planning and design techniques has already been proven in both manufacturing and service industries, with many significant functional efficiencies. However, the high complexity of care processes remains a major challenge to applying these methods in healthcare environments. Process mining techniques were applied in this study to tackle the problem of complexity and to enhance care process analysis. Process-related information, such as clinical pathways, was extracted from the information system of an ED. A 0-1 goal programming approach is then proposed to find a single layout that simultaneously satisfies several goals. The proposed model was solved with the optimization software CPLEX 12. The solution reached using the proposed method yields a 42.2% improvement in the walking distance of normal patients and a 47.6% improvement in the walking distance of critical patients, at minimum relocation cost. It was observed that many patients must unnecessarily walk long distances during their visit to the emergency department because of an inefficient design. A carefully designed layout can significantly decrease patient walking distance and related complications.
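
The walking-distance objective underlying such layout models is essentially a quadratic assignment problem: assign units to slots so that flow-weighted distance is minimized. The study's actual model is a 0-1 goal program solved in CPLEX; the sketch below only illustrates the single-objective core on a toy three-unit instance with made-up flow and distance matrices:

```python
from itertools import permutations

def best_layout(units, flow, dist):
    """Exhaustively assign units to slots, minimizing total
    flow-weighted walking distance (a QAP-style objective)."""
    n = len(units)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        # perm[i] = slot assigned to unit i
        cost = sum(flow[i][j] * dist[perm[i]][perm[j]]
                   for i in range(n) for j in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost

# hypothetical patient flows per day between ED units
units = ["triage", "imaging", "treatment"]
flow = [[0, 1, 8],
        [1, 0, 2],
        [8, 2, 0]]
# walking distances between three slots arranged along a corridor
dist = [[0, 1, 2],
        [1, 0, 1],
        [2, 1, 0]]
perm, cost = best_layout(units, flow, dist)
layout = {u: s for u, s in zip(units, perm)}  # heavy triage-treatment flow ends up adjacent
```

Brute force only works for a handful of units; a goal-programming formulation adds relocation-cost and multi-objective goals and hands the combinatorics to a solver such as CPLEX.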

Keywords: healthcare operation management, goal programming, facility layout problem, process mining, clinical processes

Procedia PDF Downloads 265
806 Mass Flux and Forensic Assessment: Informed Remediation Decision Making at One of Canada’s Most Polluted Sites

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia, Canada has long been subject to effluent and atmospheric inputs of contaminants, including thousands of tons of PAHs from a large coking and steel plant that operated in Sydney for nearly a century. The contaminants comprised coal tar residues that were discharged from coking ovens into a small tidal tributary, which became known as the Sydney Tar Ponds (STPs), and subsequently into Sydney Harbour. An Environmental Impact Statement concluded that mobilization of contaminated sediments posed unacceptable ecological risks; therefore, immobilizing contaminants in the STPs using solidification and stabilization was identified as the primary source-control remediation option to mitigate continued transport of contaminated sediments from the STPs into Sydney Harbour. Recent developments in contaminant mass flux techniques focus on distinguishing 'mobile' from 'immobile' contaminants at remediation sites. Forensic source evaluations are also increasingly used to understand the origins of PAH contaminants in soils or sediments. Flux- and forensic-informed remediation decision-making uses this information to develop remediation end-point goals aimed at reducing off-site exposure and managing potential ecological risk. This study included a review of previous flux studies, calculation of current mass flux estimates, and a forensic assessment using PAH fingerprint techniques during remediation of one of Canada's most polluted sites, the STPs. Historically, the STPs were thought to be the major source of PAH contamination in Sydney Harbour, with estimated discharges of nearly 800 kg/year of PAHs. However, during three years of remediation monitoring, only 17-97 kg/year of PAHs were discharged from the STPs, a figure corroborated by an independent PAH flux study during the first year of remediation, which estimated 119 kg/year.
The estimated mass efflux of PAHs from the STPs during remediation was in stark contrast to the ~2000 kg loading thought necessary to cause a short-term increase in harbour sediment PAH concentrations. These mass flux estimates during remediation were also three to eight times lower than the PAH discharges from the STPs a decade before remediation, a period during which government studies demonstrated ongoing reductions in PAH concentrations in harbour sediments. The flux results were corroborated by forensic source evaluations using PAH fingerprint techniques, which found a common source of PAHs in urban soils and in marine and aquatic sediments in and around Sydney. Coal combustion (from historical coking) and coal dust transshipment (from current coal transshipment facilities) are likely the principal sources of PAHs in these media, not migration of PAH-laden sediments from the STPs during a large-scale remediation project.
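
Annual load figures of the kind quoted above (kg/year) are conventionally obtained by integrating concentration times discharge over the year. The concentration and flow values in the sketch below are made up purely to show the unit conversion; they are not the study's monitoring data:

```python
def annual_load_kg(conc_mg_per_l, flow_m3_per_day):
    """Annual contaminant load from a mean concentration and discharge:
    mg/L * m3/day -> kg/year  (1 m3 = 1000 L, 1 kg = 1e6 mg)."""
    return conc_mg_per_l * flow_m3_per_day * 1000.0 * 365.0 / 1e6

# hypothetical values: 0.05 mg/L total PAHs in a 5000 m3/day discharge
load = annual_load_kg(0.05, 5000.0)   # ~91 kg/year, within the 17-97 kg/year range
```

In practice, loads are summed from many paired concentration-discharge measurements rather than a single annual mean, since both vary strongly with storm events.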

Keywords: contaminated sediment, mass flux, forensic source evaluations, remediation

Procedia PDF Downloads 218