Search results for: Martin Arguelles Perez
162 Determination of Influence Lines for Train Crossings on a Tied Arch Bridge to Optimize the Construction of the Hangers
Authors: Martin Mensinger, Marjolaine Pfaffinger, Matthias Haslbeck
Abstract:
The maintenance and expansion of the railway network represents a central task for transport planning in the future. In addition to the ultimate limit states, aspects of resource conservation and sustainability increasingly need to be included in the basic engineering. Therefore, as part of the AiF research project 'Integrated assessment of steel and composite railway bridges in accordance with sustainability criteria', the entire lifecycle of engineering structures is involved in planning and evaluation, offering a way to optimize the design of steel bridges. In order to reduce life cycle costs and increase the profitability of steel structures, it is particularly necessary to consider the demands on hanger connections resulting from fatigue. For an accurate analysis, a number of simulations were conducted as part of the research project on a finite element model of a reference bridge, which gives an indication of the internal forces of the individual structural components of a tied arch bridge, depending on the stress incurred by various types of trains. The calculations were carried out on a detailed FE model, which allows extraordinarily accurate modeling of the stiffness of all parts of the construction, as it is made up of surface elements. The results point, on the one hand, to a large impact of the detailing on fatigue-related changes in stress and, on the other, they capture construction-specific characteristics over the course of loading. Comparative calculations with varied axle-load distributions also provide information about the sensitivity of the results to the applied loading and axle distribution in the development of the stress resultants. The calculated diagrams help to achieve an optimized hanger connection design with improved durability, which helps to reduce the maintenance costs of rail networks and provides practical application notes for detailing.
Keywords: fatigue, influence line, life cycle, tied arch bridge
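As a purely illustrative aside on the influence-line concept behind these train-crossing simulations (and not a reproduction of the authors' detailed FE model), the sketch below moves a unit load along a simply supported beam and records the bending moment at a fixed section; the span, section location and units are arbitrary assumptions.

```python
import numpy as np

# Minimal illustration of the influence-line concept: for a simply supported beam
# of span L, the influence line for the bending moment at a fixed section x is
# obtained by moving a unit load along the span and recording the response at x.

def moment_influence_line(L, x, positions):
    """Bending moment at section x due to a unit load placed at each position a."""
    a = np.asarray(positions, dtype=float)
    R_left = (L - a) / L                          # left-support reaction for a unit load at a
    # Moment at x: left reaction times x, minus the load lever arm if the load lies left of x
    return R_left * x - np.where(a < x, x - a, 0.0)

L, x = 30.0, 12.0                                 # span and section of interest, in m (assumed)
pos = np.linspace(0.0, L, 7)
for a, M in zip(pos, moment_influence_line(L, x, pos)):
    print(f"unit load at {a:5.1f} m -> M({x:.0f} m) = {M:6.2f} kN·m per kN")
```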
Procedia PDF Downloads 328

161 Risk Assessment and Haloacetic Acids Exposure in Drinking Water in Tunja, Colombia
Authors: Bibiana Matilde Bernal Gómez, Manuel Salvador Rodríguez Susa, Mildred Fernanda Lemus Perez
Abstract:
In chlorinated drinking water, haloacetic acids have been identified and are classified as disinfection byproducts originating from the reaction of chlorine with natural organic matter and/or bromide ions in water sources. These byproducts can also be generated through a variety of chemical and pharmaceutical processes. The term 'Total Haloacetic Acids' (THAAs) is used to describe the cumulative concentration of dichloroacetic acid, trichloroacetic acid, monochloroacetic acid, monobromoacetic acid, and dibromoacetic acid in water samples, which are usually measured to evaluate water quality. Chronic exposure to these acids in drinking water carries a risk of cancer in humans. THAAs were detected for the first time in 15 municipalities of Boyacá in 2023. The aim is to describe the correlation between the levels of THAAs and digestive cancer in Tunja, a city in Colombia with higher rates of digestive cancer, and to compare the risk across the 15 towns, taking into account factors such as water quality. A research project was conducted with the aim of comparing water sources based on the geographical features of the towns, describing the disinfection process in the 15 municipalities, and exploring physical properties such as water temperature and pH. The project also involved a study of contact time based on habits documented through a survey, and a comparison of socioeconomic factors and lifestyle, in order to assess the personal risk of exposure. Data on the levels of THAAs were obtained after characterizing the water quality in urban sectors over eight months of 2022, based on the protocol described in the Stage 2 Disinfectants and Disinfection Byproducts Rule of the United States Environmental Protection Agency (USEPA) from 2006, which takes into account the size of the population being supplied. A cancer risk assessment was conducted to evaluate the likelihood of an individual developing cancer due to exposure to THAAs. The assessment considered exposure routes including oral ingestion, skin absorption, and inhalation. The chronic daily intake (CDI) for these exposure routes was calculated using specific equations. The lifetime cancer risk (LCR) was then determined by adding the cancer risks from the three exposure routes for each HAA. The risk assessment process involved four phases: exposure assessment, toxicity evaluation, data gathering and analysis, and risk definition and management. The results conclude that there is a cumulatively higher risk of digestive cancer due to THAAs exposure in drinking water.
Keywords: haloacetic acids, drinking water, water quality, cancer risk assessment
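The chronic daily intake and lifetime cancer risk calculation described above can be sketched as follows; the general form CDI = (C × IR × EF × ED)/(BW × AT) and the summation of route-specific risks follow standard USEPA practice, but every concentration, intake rate and slope factor in this sketch is a placeholder, not a value from the study.

```python
# Illustrative sketch (not the study's data): lifetime cancer risk from one HAA,
# combining oral, dermal and inhalation routes:
#   CDI = (C * IR * EF * ED) / (BW * AT),   risk = CDI * SF,   LCR = sum of route risks.
# All numerical values below are placeholders for illustration only.

def chronic_daily_intake(c, intake_rate, ef=350, ed=30, bw=70, at=70 * 365):
    """CDI in mg/kg-day: c in mg/L (or mg/m3), intake_rate in L/day (or m3/day)."""
    return (c * intake_rate * ef * ed) / (bw * at)

c_water = 0.040                                   # hypothetical HAA concentration, mg/L
sf = {"oral": 0.05, "dermal": 0.05, "inhalation": 0.05}   # placeholder slope factors, (mg/kg-day)^-1

cdi = {
    "oral": chronic_daily_intake(c_water, intake_rate=2.0),        # ingestion, 2 L/day
    "dermal": chronic_daily_intake(c_water, intake_rate=0.1),      # crude dermal-equivalent intake
    "inhalation": chronic_daily_intake(c_water, intake_rate=0.05), # crude shower-inhalation equivalent
}

risks = {route: cdi[route] * sf[route] for route in cdi}
lcr = sum(risks.values())
print(risks)
print(f"Lifetime cancer risk (sum of routes): {lcr:.2e}")
```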
Procedia PDF Downloads 57

160 Biodegradation of Endoxifen in Wastewater: Isolation and Identification of Bacteria Degraders, Kinetics, and By-Products
Authors: Marina Arino Martin, John McEvoy, Eakalak Khan
Abstract:
Endoxifen is an active metabolite responsible for the effectiveness of tamoxifen, a chemotherapeutic drug widely used for endocrine-responsive breast cancer and for chemo-preventive long-term treatment. Tamoxifen and endoxifen are not completely metabolized in the human body and are actively excreted. As a result, they are released to the water environment via wastewater treatment plants (WWTPs). The presence of tamoxifen in the environment produces negative effects on aquatic life due to its antiestrogenic activity. Because endoxifen is 30-100 times more potent than tamoxifen itself and also presents antiestrogenic activity, its presence in the water environment could result in even more toxic effects on aquatic life compared to tamoxifen. Data on actual concentrations of endoxifen in the environment are limited due to the recent discovery of endoxifen's pharmaceutical activity. However, endoxifen has been detected in hospital and municipal wastewater effluents. The detection of endoxifen in wastewater effluents calls into question the treatment efficiency of WWTPs. Studies reporting information about endoxifen removal in WWTPs are also scarce. One study used chlorination to eliminate endoxifen in wastewater; however, inefficient degradation of endoxifen by chlorination and the production of hazardous disinfection by-products were observed. Therefore, there is a need to remove endoxifen from wastewater prior to chlorination in order to reduce the potential release of endoxifen into the environment and its possible effects. The aim of this research is to isolate and identify bacterial strain(s) capable of degrading endoxifen into less hazardous compound(s). For this purpose, bacterial strains from WWTPs were exposed to endoxifen as the sole carbon and nitrogen source for 40 days. Bacteria presenting positive growth were isolated and tested for endoxifen biodegradation. Endoxifen concentration and by-product formation were monitored. The Monod kinetic model was used to determine the endoxifen biodegradation rate. Preliminary results of the study suggest that the isolated bacteria from WWTPs are able to grow in the presence of endoxifen as the sole carbon and nitrogen source. Ongoing work includes identification of these bacterial strains and of the by-product(s) of endoxifen biodegradation.
Keywords: biodegradation, bacterial degraders, endoxifen, wastewater
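A minimal sketch of how a Monod-type rate law can be fitted to substrate-decay data of the kind described above; the endoxifen concentrations, biomass level and yield assumed below are invented for illustration and do not come from the study.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

# Illustrative fit of a Monod-type degradation rate to substrate-decay data.
# Concentrations, biomass and yield below are placeholders, not measured values.

def monod_decay(S, t, mu_max, Ks, X, Y):
    """dS/dt for substrate S degraded by biomass X with yield Y (biomass assumed constant)."""
    return -(mu_max / Y) * X * S / (Ks + S)

def simulate(t, mu_max, Ks, S0=1.0, X=0.05, Y=0.4):
    return odeint(monod_decay, S0, t, args=(mu_max, Ks, X, Y)).ravel()

# Hypothetical observed endoxifen concentrations (mg/L) over time (days)
t_obs = np.array([0, 5, 10, 20, 30, 40], dtype=float)
S_obs = np.array([1.00, 0.78, 0.60, 0.35, 0.20, 0.11])

popt, _ = curve_fit(lambda t, mu_max, Ks: simulate(t, mu_max, Ks),
                    t_obs, S_obs, p0=[0.1, 0.5], bounds=(0, np.inf))
print(f"fitted mu_max = {popt[0]:.3f} 1/day, Ks = {popt[1]:.3f} mg/L")
```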
Procedia PDF Downloads 215

159 Research on the Conservation Strategy of Territorial Landscape Based on Characteristics: The Case of Fujian, China
Authors: Tingting Huang, Sha Li, Geoffrey Griffiths, Martin Lukac, Jianning Zhu
Abstract:
Territorial landscapes have experienced a gradual loss of their typical characteristics during long-term human activities. In order to protect the integrity of regional landscapes, it is necessary to characterize, evaluate and protect them in a graded manner. The study takes Fujian, China, as an example and classifies the landscape characters of the site at the regional, middle, and detailed scales. A multi-scale approach combining parametric and holistic methods is used to classify and partition the landscape character types (LCTs) and landscape character areas (LCAs) at different scales, and a multi-element landscape assessment approach is adopted to explore conservation strategies for the landscape character. Firstly, multiple fields and elements of geography, nature and the humanities were selected as the basis of assessment according to the scales. Secondly, the study takes a parametric approach to the classification and partitioning of landscape character, applying Principal Component Analysis and a two-stage cluster analysis (K-means and GMM) in MATLAB to obtain the LCTs, combining this with the Canny edge detection algorithm to obtain landscape character contours, and correcting the LCTs and LCAs by field survey and manual identification. Finally, the study adopts the Landscape Sensitivity Assessment method to perform landscape character conservation analysis and formulates five strategies for the different LCAs: conservation, enhancement, restoration, creation, and combination. This multi-scale identification approach can efficiently integrate multiple types of landscape character elements, reduce the difficulty of broad-scale operations in the process of landscape character conservation, and provide a basis for landscape character conservation strategies. Based on the natural background and the restoration of regional characteristics, the results of the landscape character assessment are scientific and objective and can provide a strong reference in regional and national scale territorial spatial planning.
Keywords: parameterization, multi-scale, landscape character identification, landscape character assessment
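A compact Python analogue of the parametric classification chain described above (the study itself used MATLAB): standardisation, PCA, a K-means first stage and a Gaussian-mixture second stage; the random cell-attribute matrix below stands in for the geographic, natural and cultural variables.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Two-stage parametric landscape-character classification, sketched on random data.
rng = np.random.default_rng(0)
cells = rng.normal(size=(5000, 12))          # 5000 landscape cells x 12 attributes (placeholder)

scores = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(cells))

coarse = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(scores)   # first stage
fine = GaussianMixture(n_components=8, covariance_type="full",
                       random_state=0).fit_predict(scores)                     # second stage

# Cross-tabulate the two stages to inspect candidate landscape character types (LCTs)
crosstab = np.zeros((8, 8), dtype=int)
for c, f in zip(coarse, fine):
    crosstab[c, f] += 1
print(crosstab)
```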
Procedia PDF Downloads 99

158 A Facile One Step Modification of Poly(dimethylsiloxane) via Smart Polymers for Biomicrofluidics
Authors: A. Aslihan Gokaltun, Martin L. Yarmush, Ayse Asatekin, O. Berk Usta
Abstract:
Poly(dimethylsiloxane) (PDMS) is one of the most widely used materials in the fabrication of microfluidic devices. It is easily patterned and can replicate features down to nanometers. Its flexibility, gas permeability that allows oxygenation, and low cost also drive its wide adoption. However, a major drawback of PDMS is its hydrophobicity and fast hydrophobic recovery after surface hydrophilization. This results in significant non-specific adsorption of proteins as well as of small hydrophobic molecules such as therapeutic drugs, limiting the utility of PDMS in biomedical microfluidic circuitry. While silicon, glass, and thermoplastics have been used, they come with problems of their own, such as rigidity, high cost, and special tooling needs, which limit their use to a smaller user base. Many strategies to alleviate these common problems with PDMS lack general practical applicability or have limited shelf lives in terms of the modifications they achieve. This restricts large-scale implementation and adoption by industrial and research communities. Accordingly, we aim to tailor biocompatible PDMS surfaces by developing a simple, one-step bulk modification approach with novel smart materials to reduce non-specific molecular adsorption and to stabilize long-term cell analysis with PDMS substrates. Smart polymers blended with PDMS during device manufacture spontaneously segregate to the surface when in contact with aqueous solutions and create a < 1 nm layer that reduces non-specific adsorption of organic molecules and biomolecules. Our methods are fully compatible with existing PDMS device manufacture protocols without any additional processing steps. We have demonstrated that our modified PDMS microfluidic system is effective at blocking the adsorption of proteins while retaining the viability of primary rat hepatocytes and preserving the biocompatibility, oxygen permeability, and transparency of the material. We expect this work will enable the development of fouling-resistant biomedical materials, from microfluidics to hospital surfaces and tubing.
Keywords: cell culture, microfluidics, non-specific protein adsorption, PDMS, smart polymers
Procedia PDF Downloads 294

157 Effect of Curing Temperature on the Textural and Rheological Properties of Gelatine-SDS Hydrogels
Authors: Virginia Martin Torrejon, Binjie Wu
Abstract:
Gelatine is a protein biopolymer obtained from the partial hydrolysis of animal tissues which contain collagen, the primary structural component in connective tissue. Gelatine hydrogels have attracted considerable research in recent years as an alternative to synthetic materials due to their outstanding gelling properties, biocompatibility and compostability. Surfactants, such as sodium dodecyl sulfate (SDS), are often used in hydrogel solutions as surface modifiers or solubility enhancers, and their incorporation can influence the hydrogel's viscoelastic properties and, in turn, its processing and applications. The literature usually focuses on studying the impact of formulation parameters (e.g., gelatine content, gelatine strength, additives incorporation) on gelatine hydrogel properties, but processing parameters, such as curing temperature, are commonly overlooked. For example, some authors have reported a decrease in gel strength at lower curing temperatures, but there is a lack of research on the systematic viscoelastic characterisation of high-strength gelatine and gelatine-SDS systems over a wide range of curing temperatures. This knowledge is essential to meet and adjust the technological requirements for different applications (e.g., viscosity, setting time, gel strength or melting/gelling temperature). This work investigated the effect of curing temperature (10, 15, 20, 23, 25 and 30 °C) on the elastic modulus (G') and melting temperature of high-strength gelatine-SDS hydrogels, at 10 wt% and 20 wt% gelatine contents, by small-amplitude oscillatory shear rheology coupled with Fourier Transform Infrared Spectroscopy. It also correlates the gel strength obtained by rheological measurements with the gel strength measured by texture analysis. The rheological behaviour of the gelatine and gelatine-SDS hydrogels strongly depended on the curing temperature, and their gel strength and melting temperature can be modified slightly to adjust them to given processing and application needs. Lower curing temperatures led to gelatine and gelatine-SDS hydrogels with considerably higher storage modulus; however, their melting temperature was lower than that of gels cured at higher temperatures, which had lower gel strength. This effect was more considerable at longer timescales. This behaviour is attributed to the development of thermally resistant structures in the lower-strength gels cured at higher temperatures.
Keywords: gelatine gelation kinetics, gelatine-SDS interactions, gelatine-surfactant hydrogels, melting and gelling temperature of gelatine gels, rheology of gelatine hydrogels
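As a simplified illustration of how a melting temperature can be read from small-amplitude oscillatory data of this kind, the sketch below takes the G'/G'' crossover during a temperature ramp as the melting criterion; the abstract does not state the exact criterion used in the study, and the moduli below are synthetic placeholders.

```python
import numpy as np

# Synthetic temperature ramp: G' decays on melting, G'' stays low; the melting
# temperature is estimated as the first G'/G'' crossover (illustration only).
T = np.linspace(20, 40, 201)                               # temperature, deg C
G_prime = 4000.0 / (1.0 + np.exp(T - 31.0))                # storage modulus, Pa (placeholder)
G_double_prime = 150.0 + 20.0 * np.exp(-((T - 31.0) / 5.0) ** 2)   # loss modulus, Pa (placeholder)

diff = G_prime - G_double_prime
idx = np.where(np.diff(np.sign(diff)) != 0)[0][0]          # first sign change of (G' - G'')
# Linear interpolation between the two bracketing temperatures
T_melt = T[idx] + (T[idx + 1] - T[idx]) * diff[idx] / (diff[idx] - diff[idx + 1])
print(f"Estimated melting temperature (G'/G'' crossover): {T_melt:.1f} °C")
```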
Procedia PDF Downloads 101

156 Corrosion Analysis of Brazed Copper-Based Conducts in Particle Accelerator Water Cooling Circuits
Authors: A. T. Perez Fontenla, S. Sgobba, A. Bartkowska, Y. Askar, M. Dalemir Celuch, A. Newborough, M. Karppinen, H. Haalien, S. Deleval, S. Larcher, C. Charvet, L. Bruno, R. Trant
Abstract:
The present study investigates the corrosion behavior of copper (Cu) based conducts predominantly brazed with Sil-Fos (a self-fluxing copper-based filler with silver and phosphorus) within various cooling circuits of demineralized water across different particle accelerator components at CERN. The study covers a range of sample service times, from a few months to fifty years, and includes various accelerator components such as quadrupoles, dipoles, and bending magnets. The investigation comprises the established sample extraction procedure, the examination methodology including non-destructive testing, the evaluation of the corrosion phenomena, and the identification of commonalities across the studied components, as well as an analysis of the environmental influence. The systematic analysis included computed microtomography (CT) of the joints, which revealed distributed defects across all brazing interfaces. Some defects appeared to result from areas not wetted by the filler during the brazing operation, displaying round shapes, while others exhibited irregular contours and radial alignment, indicative of a network or interconnection. The subsequent dry cutting facilitated access to the conduct's inner surface and the brazed joints for further inspection through light and electron microscopy (SEM) and chemical analysis via Energy Dispersive X-ray spectroscopy (EDS). Analysis of the brazing away from the affected areas identified the expected phases for a Sil-Fos alloy. In contrast, the affected locations displayed micrometric cavities propagating into the material, along with selective corrosion of the bulk Cu initiated at the conductor-braze interface. Corrosion product analysis highlighted the consistent presence of sulfur (up to 6% by weight), whose origin and role in corrosion initiation and extension is being further investigated. This study is of particular importance, as it plays a crucial role in understanding the underlying factors contributing to recently identified water leaks and in evaluating the extent of the issue. Its primary objective is to provide essential insights for the repair of affected brazed joints where accessibility permits. Moreover, the study seeks to contribute to the improvement of design and manufacturing practices for future components, ultimately enhancing the overall reliability and performance of magnet systems within CERN accelerator facilities.
Keywords: accelerator facilities, brazed copper conducts, demineralized water, magnets
Procedia PDF Downloads 46

155 Synthesis of Microencapsulated Phase Change Material for Adhesives with Thermoregulating Properties
Authors: Christin Koch, Andreas Winkel, Martin Kahlmeyer, Stefan Böhm
Abstract:
Due to environmental regulations on greenhouse gas emissions and the depletion of fossil fuels, there is increasing interest in electric vehicles. To maximize their driving range, batteries with high storage capacities are needed. In most electric cars, rechargeable lithium-ion batteries are used because of their high energy density. However, it has to be taken into account that these batteries generate a large amount of heat during the charge and discharge processes. This leads to a decrease in lifetime and damage to the battery cells when the temperature exceeds the defined operating range. To ensure efficient performance of the battery cells, reliable thermal management is required. Currently, cooling is achieved by heat sinks (e.g., cooling plates) bonded to the battery cells with a thermally conductive adhesive (TCA) that directs the heat away from the components. Especially when large amounts of heat have to be dissipated spontaneously due to peak loads, the principle of heat conduction is not sufficient, so attention must be paid to the mechanism of heat storage. An efficient method to store thermal energy is the use of phase change materials (PCM). Through an isothermal phase change, PCM can briefly absorb or release thermal energy at a constant temperature. If the phase change takes place in the transition from solid to liquid, heat is stored during melting and is released to the surroundings during the freezing process upon cooling. The presented work demonstrates the great potential of thermally conductive adhesives filled with microencapsulated PCM to limit peak temperatures in battery systems. The encapsulation of the PCM avoids the effects of aging (e.g., migration) and chemical reactions between the PCM and the adhesive matrix components. In this study, microencapsulation was carried out by in situ polymerization. The microencapsulated PCM was characterized by FT-IR spectroscopy, and the thermal properties were measured by DSC and the laser flash method. The mechanical properties, electrical and thermal conductivity, and adhesive toughness of the TCA/PCM composite were also investigated.
Keywords: phase change material, microencapsulation, adhesive bonding, thermal management
Procedia PDF Downloads 72

154 Analytical Study of the Structural Response to Near-Field Earthquakes
Authors: Isidro Perez, Maryam Nazari
Abstract:
Numerous earthquakes across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe, and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures, to further reduce damage and costs and ultimately to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. Earthquakes that occur near these fault lines can be categorized as near-field earthquakes, whereas a far-field earthquake occurs when the region is further away from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response when compared to a far-field earthquake ground motion. These larger responses may result in serious consequences in terms of structural damage, which can pose a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To fully examine this topic, a structure was designed following the current seismic building design specifications, e.g., ASCE 7-10 and ACI 318-14, and analytically modeled using the SAP2000 software. Next, using the FEMA P695 report, several near-field and far-field earthquake records were selected, and the near-field records were scaled to represent the design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions. A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to find and aid in properly defining the hinge formation in the structure when conducting the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to near-field ground motions.
Keywords: near-field, pulse, pushover, time-history
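A highly simplified sketch of the linear time-history step: a single-degree-of-freedom oscillator integrated with the average-acceleration Newmark method under a synthetic, scaled pulse-like record. The study analysed a full multi-storey SAP2000 model; the period, damping and scale factor below are assumptions made only to show the integration principle.

```python
import numpy as np

# Incremental Newmark (average acceleration) integration of a linear SDOF system
# under ground acceleration ag; all quantities are per unit mass.

def newmark_sdof(ag, dt, T_n=0.5, zeta=0.05, beta=0.25, gamma=0.5):
    """Relative displacement history of an SDOF system under ground acceleration ag (m/s^2)."""
    wn = 2 * np.pi / T_n
    k, c, m = wn ** 2, 2 * zeta * wn, 1.0
    u, v = np.zeros(len(ag)), np.zeros(len(ag))
    a = -ag[0] - c * v[0] - k * u[0]
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    for i in range(len(ag) - 1):
        dp = -(ag[i + 1] - ag[i]) \
             + m * (v[i] / (beta * dt) + a / (2 * beta)) \
             + c * (gamma * v[i] / beta + dt * a * (gamma / (2 * beta) - 1))
        du = dp / k_eff
        dv = gamma * du / (beta * dt) - gamma * v[i] / beta + dt * a * (1 - gamma / (2 * beta))
        u[i + 1], v[i + 1] = u[i] + du, v[i] + dv
        a = -ag[i + 1] - c * v[i + 1] - k * u[i + 1]
    return u

dt = 0.01
t = np.arange(0, 20, dt)
ag = 0.3 * 9.81 * np.exp(-0.2 * t) * np.sin(2 * np.pi * 1.5 * t)   # synthetic pulse-like record
scale = 1.25                                                        # hypothetical design-level scale factor
u = newmark_sdof(scale * ag, dt)
print(f"Peak relative displacement: {np.abs(u).max() * 1000:.1f} mm")
```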
Procedia PDF Downloads 146

153 The Accuracy of an In-House Developed Computer-Assisted Surgery Protocol for Mandibular Micro-Vascular Reconstruction
Authors: Christophe Spaas, Lies Pottel, Joke De Ceulaer, Johan Abeloos, Philippe Lamoral, Tom De Backer, Calix De Clercq
Abstract:
We aimed to evaluate the accuracy of an in-house developed low-cost computer-assisted surgery (CAS) protocol for osseous free flap mandibular reconstruction. All patients who underwent primary or secondary mandibular reconstruction with a free (solely or composite) osseous flap, either a fibula free flap or an iliac crest free flap, between January 2014 and December 2017 were evaluated. The low-cost protocol consisted of virtual surgical planning, a pre-bent custom reconstruction plate and an individualized free flap positioning guide. The accuracy of the protocol was evaluated through comparison of the postoperative outcome with the 3D virtual planning, based on measurement of the following parameters: intercondylar distance, mandibular angle (axial and sagittal), inner angular distance, anterior-posterior distance, length of the fibular/iliac crest segments and osteotomy angles. A statistical analysis of the obtained values was performed. Virtual 3D surgical planning and cutting guide design were performed with Proplan CMF® software (Materialise, Leuven, Belgium) and IPS Gate (KLS Martin, Tuttlingen, Germany). Segmentation of the DICOM data as well as outcome analysis were done with BrainLab iPlan® software (Brainlab AG, Feldkirchen, Germany). A cost analysis of the protocol was done. Twenty-two patients (11 fibula / 11 iliac crest) were included and analyzed. Based on voxel-based registration on the cranial base, the 3D virtual planning landmark parameters did not significantly differ from those measured on the actual treatment outcome (p-values > 0.05). A cost evaluation of the in-house developed CAS protocol revealed a 1750 euro cost reduction in comparison with a standard CAS protocol with a patient-specific reconstruction plate. Our results indicate that an accurate transfer of the planning with our in-house developed low-cost CAS protocol is feasible at a significantly lower cost.
Keywords: CAD/CAM, computer-assisted surgery, low-cost, mandibular reconstruction
Procedia PDF Downloads 140

152 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with the use of artificial neural networks (ANN) was successfully applied for correlating the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm was based on the well-known Levenberg-Marquardt model and was used to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter Root Mean Squared Error (RMSE), to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known, previously developed gas geothermometers, was statistically evaluated using an external database to avoid a bias problem. The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
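The ANN-geothermometer idea can be sketched as follows: a small feed-forward network maps log-transformed gas ratios to bottomhole temperature and is scored with RMSE and the linear correlation coefficient R. The study trained roughly 2,080 architectures with a Levenberg-Marquardt algorithm; scikit-learn's MLPRegressor (LBFGS solver) is only a stand-in here, and the data are synthetic rather than the compiled well database.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Synthetic stand-in data: ln(gas ratios) as inputs, bottomhole temperature as target.
rng = np.random.default_rng(1)
n = 400
ln_co2_h2 = rng.uniform(2.0, 8.0, n)                    # ln(CO2/H2), placeholder range
ln_h2s_h2 = rng.uniform(0.5, 4.0, n)                    # ln(H2S/H2), placeholder range
bht = 150 + 20 * ln_co2_h2 + 10 * ln_h2s_h2 + rng.normal(0, 8, n)   # synthetic BHT (deg C)

X = np.column_stack([ln_co2_h2, ln_h2s_h2])
X_tr, X_te, y_tr, y_te = train_test_split(X, bht, test_size=0.25, random_state=1)

ann = MLPRegressor(hidden_layer_sizes=(6,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=1).fit(X_tr, y_tr)

pred = ann.predict(X_te)
rmse = np.sqrt(mean_squared_error(y_te, pred))
r = np.corrcoef(y_te, pred)[0, 1]
print(f"RMSE = {rmse:.1f} °C, linear correlation R = {r:.3f}")
```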
Procedia PDF Downloads 351

151 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids
Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje
Abstract:
The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as capable artificial lift equipment in heavy oil fields. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed experimentally and by a Computational Fluid Dynamics (CFD) approach, using the DCAB031 model installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate, with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump's rotational speed and power input were controlled using an Invertek Optidrive E3 frequency drive. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, obtaining a value of 2.5%. Three different rotational speeds were evaluated (200, 300, 400 rpm), and a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate is observed. The maximum production rates at the different speeds for water were 3.8 GPM, 4.3 GPM, and 6.1 GPM; for the oils tested, they were 1.8 GPM, 2.5 GPM, and 3.8 GPM, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% in pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, between fluids there is a decrease due to the viscosity.
Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise
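The Grid Convergence Index mentioned above can be computed from three successively refined meshes with the standard Richardson-extrapolation procedure (Roache / Celik et al.); the cell counts and pressure-rise values in this sketch are placeholders, not the DCAB031 simulation results.

```python
import math

# GCI estimate from three meshes (fine, medium, coarse); all values are placeholders.
N = [2.0e6, 1.0e6, 0.5e6]            # cell counts: fine, medium, coarse
phi = [3.05, 3.12, 3.30]             # pressure rise (bar) on fine, medium, coarse meshes

r21 = (N[0] / N[1]) ** (1 / 3)       # effective refinement factors for a 3D mesh
r32 = (N[1] / N[2]) ** (1 / 3)
e21 = phi[1] - phi[0]
e32 = phi[2] - phi[1]

p = math.log(abs(e32 / e21)) / math.log(r21)          # apparent order (simple form, r21 ~ r32)
gci_fine = 1.25 * abs(e21 / phi[0]) / (r21 ** p - 1)  # safety factor 1.25 for three meshes
print(f"apparent order p = {p:.2f}, GCI_fine = {100 * gci_fine:.2f} %")
```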
Procedia PDF Downloads 128

150 The Medical Student Perspective on the Role of Doubt in Medical Education
Authors: Madhavi-Priya Singh, Liam Lowe, Farouk Arnaout, Ludmilla Pillay, Giordan Perez, Luke Mischker, Steve Costa
Abstract:
Introduction: An Emergency Department consultant identified the failure of medical students to complete the task of clerking a patient in its entirety. As six medical students on our first clinical placement, we recognised our own failure and endeavoured to examine why this failure was consistent among all medical students who had been given this task, despite our best motivations as adult learners. Aim: Our aim is to understand and investigate the elements which impeded our ability to learn and perform as medical students in the clinical environment, with reference to the prescribed task. We also aim to generate a discussion around the delivery of medical education, with potential solutions to these barriers. Methods: Six medical students gathered for a comprehensive reflective discussion to identify possible factors leading to the failure of the task. First, we thoroughly analysed the delivery of the instructions with reference to the literature to identify potential flaws. We then examined personal, social, ethical, and cultural factors which may have impacted our ability to complete the task in its entirety. Results: Through collation of our shared experiences, with support from discussion in the field of medical education and ethics, we identified two major areas that impacted our ability to complete the set task. First, we experienced an ethical conflict where we believed the inconvenience and potential harm inflicted on patients did not justify the positive impact the patient interaction would have on our medical learning. Second, we identified a lack of confidence stemming from multiple factors, including the conflict between preclinical and clinical learning, perceptions of perfectionism in the culture of medicine, and the influence of upward social comparison. Discussion: After these discussions, we found that the various factors we identified exacerbated the fears and doubts we already had about our own abilities and those of the medical education system. This doubt led us to avoid completing certain aspects of the prescribed tasks and further reinforced our vulnerability and perceived incompetence. Exploration of philosophical theories identified the importance of the role of doubt in education. We propose the need for further discussion around incorporating both pedagogic and andragogic teaching styles in clinical medical education and the acceptance of doubt as a driver of our learning. Conclusion: Doubt will continue to permeate our thoughts and actions no matter what. The moral or psychological distress that arises from this is the key motivating factor for our avoidance of tasks. If we accept this doubt and education embraces it, it will no longer linger in the shadows as a negative and restrictive emotion but will fuel a brighter dialogue and a positive learning experience, ultimately assisting us in achieving our full potential.
Keywords: ethics, medical student, doubt, medical education, faith
Procedia PDF Downloads 107

149 Reduction of Concrete Shrinkage without the Use of Reinforcement
Authors: Martin Tazky, Rudolf Hela, Lucia Osuska, Petr Novosad
Abstract:
Concrete's volumetric changes are a natural process caused by the hydration of silicate minerals. These changes can lead to cracking and subsequent destruction of the cementitious material's matrix. In most cases, cracks can be assessed as a negative effect of hydration, and in all cases, they lead to an acceleration of degradation processes. Preventing the formation of these cracks is, therefore, the main effort. One possibility for eliminating this natural shrinkage process of concrete is the use of different types of dispersed reinforcement; for this purpose, steel and polymer reinforcement are preferably used. Apart from the reinforcement ordinarily used in concrete to eliminate shrinkage, it is also possible to address this specific problem from the beginning, through the concrete mix composition itself. There are many secondary raw materials which help reduce the heat of hydration and also the shrinkage of concrete during curing. Recent research shows that shrinkage can also be reduced by the controlled formation of hydration products, which, through their morphology, could act like traditionally used dispersed reinforcement. This contribution deals with the possibility of the controlled formation of mono- and trisulfates, which are usually considered degradation minerals. The controlled formation of mono- and trisulfates in a cementitious composite can be classified as a self-healing ability. The growth of their crystals acts directly against the shrinkage tension, which reduces the risk of crack development. Controlled formation means that these crystals start to grow in the fresh state of the material (e.g., concrete) but stop right before they could cause any damage to the hardened material. Waste materials with a suitable chemical composition are very attractive precursors because of their added value in the form of reduced landscape pollution and, of course, low cost. In this experiment, the possibility of using fly ash from fluidized bed combustion as a mono- and trisulfate formation additive was investigated. The experiment itself was conducted on cement paste and concrete, and the specimens were subjected to a thorough analysis of physico-mechanical properties as well as microstructure from the moment of mixing up to 180 days. The processes of hydration and shrinkage were monitored in the cement composites. In the mixture containing fluidized bed combustion fly ash, possible failures were identified by electron microscopy and by the dynamic modulus of elasticity. The results of the experiments show the possibility of reducing concrete shrinkage without the use of traditional dispersed reinforcement.
Keywords: shrinkage, monosulphates, trisulphates, self-healing, fluidized fly ash
Procedia PDF Downloads 186

148 Introducing the Concept of Sustainable Learning: Redesigning the Social Studies and Citizenship Education Curriculum in the Context of Saudi Arabia
Authors: Aiydh Aljeddani, Fran Martin
Abstract:
Sustainable human development is an essential component of sustainable economic, social and environmental development. Addressing sustainable learning only through the addition of new teaching methods, or by embedding certain approaches, is not sufficient on its own to support the goals of sustainable human development. This research project seeks to explore how redesigning the current principles of the curriculum based on the concept of sustainable learning could contribute to preparing a citizen who could later contribute towards sustainable human development. Multiple qualitative methodologies were employed in order to achieve the aim of this study. The main research methods were teachers' field notes, artefacts, informal (unstructured) interviews, passive participant observation, a mini nominal group technique (NGT), a weekly diary, and weekly meetings. The study revealed that the integration of a curriculum for sustainable development, in addition to the use of innovative teaching approaches, was highly valued by students and teachers in social studies sessions. This was due to the fact that it created a positive atmosphere for interaction and aroused both teachers' and students' interest. The content of the new curriculum also contributed to increasing students' sense of shared responsibility by involving them in thinking about solutions to some global issues, which were addressed through the concept of sustainable development and the theory of Thinking Activity in a Social Context (TASC). Students engaged intellectually with the sustainable development sessions and also applied them practically by designing projects and cut-outs. Ongoing meetings and workshops to develop the work, both between the researcher and the teachers and by the teachers themselves, played a vital role in implementing the new curriculum. The participation of teachers in the development of the project, through working papers, exchanging experiences and introducing amendments to the students' environment, was also critical in the process of implementing the new curriculum. Finally, the concept of sustainable learning can contribute to the learning outcomes much better than the current curriculum, and it can better develop the learning objectives in educational institutions.
Keywords: redesigning, social studies and citizenship education curriculum, sustainable learning, thinking activity in a social context
Procedia PDF Downloads 231

147 Task Based Functional Connectivity within Reward Network in Food Image Viewing Paradigm Using Functional MRI
Authors: Preetham Shankapal, Jill King, Kori Murray, Corby Martin, Paula Giselman, Jason Hicks, Owen Carmicheal
Abstract:
Activation of reward and satiety networks in the brain while processing palatable food cues, as well as functional connectivity during rest, has been studied using functional Magnetic Resonance Imaging of the brain in various obesity phenotypes. However, functional connectivity within the reward and satiety network during food cue processing is understudied. Fourteen obese individuals underwent two fMRI scans during viewing of Macronutrient Picture System images. Each scan included two blocks of images of High Sugar/High Fat (HSHF), High Carbohydrate/High Fat (HCHF) and Low Sugar/Low Fat (LSLF) foods, as well as non-food images. Seed voxels within seven food-reward-relevant ROIs (insula, putamen, and the cingulate, precentral, parahippocampal, medial frontal and superior temporal gyri) were isolated based on a prior meta-analysis. Beta series correlation for task-related functional connectivity between these seed voxels and the rest of the brain was computed. Voxel-level differences in functional connectivity were calculated between the first and second scans; between individuals who saw novel (N=7) vs. repeated (N=7) images in the second scan; and between the HCHF and HSHF blocks vs. the LSLF and non-food blocks. The computations and analysis showed that during food image viewing, reward network ROIs showed significant functional connectivity with each other and with other regions responsible for attentional and motor control, including the inferior parietal lobe and precentral gyrus. These functional connectivity values were heightened among individuals who viewed novel HSHF images in the second scan. In the second scan session, functional connectivity was reduced within the reward network but increased within attention, memory and recognition regions, suggesting habituation to reward properties and increased recollection of previously viewed images. In conclusion, it can be inferred that functional connectivity within the reward network, and between reward and other brain regions, varies with important experimental conditions during food photograph viewing, including habituation to the foods shown.
Keywords: fMRI, functional connectivity, task-based, beta series correlation
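A minimal sketch of the beta series correlation step: once one beta estimate per trial (block) is available for each ROI, task-related connectivity is the correlation of those beta series, usually Fisher z-transformed before group statistics. The beta values below are random placeholders, not the study's estimates.

```python
import numpy as np

# Beta-series connectivity: correlate trial-wise beta estimates between ROIs.
rng = np.random.default_rng(2)
n_trials, n_rois = 40, 7
rois = ["insula", "putamen", "cingulate", "precentral",
        "parahippocampal", "medial_frontal", "superior_temporal"]
betas = rng.normal(size=(n_trials, n_rois))          # trial-wise beta series per ROI (placeholder)

r = np.corrcoef(betas, rowvar=False)                 # ROI x ROI correlation matrix
z = np.arctanh(np.clip(r, -0.999999, 0.999999))      # Fisher z-transform for group statistics
np.fill_diagonal(z, 0.0)

i, j = rois.index("insula"), rois.index("putamen")
print(f"insula-putamen: r = {r[i, j]:.2f}, z = {z[i, j]:.2f}")
```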
Procedia PDF Downloads 270

146 Effective Medium Approximations for Modeling Ellipsometric Responses from Zinc Dialkyldithiophosphates (ZDDP) Tribofilms Formed on Sliding Surfaces
Authors: Maria Miranda-Medina, Sara Salopek, Andras Vernes, Martin Jech
Abstract:
Sliding lubricated surfaces induce the formation of tribofilms that reduce friction and wear and prevent large-scale damage of the contacting parts. Engine oils and lubricants use antiwear and antioxidant additives such as zinc dialkyldithiophosphate (ZDDP), from which protective tribofilms are formed by degradation. The ZDDP tribofilms are described as a two-layer structure composed of inorganic polymer material. On the top surface, the long-chain polyphosphate is a zinc phosphate, and in the bulk, the short-chain polyphosphate is a mixed Fe/Zn phosphate with a gradient in concentration. The polyphosphate chains are partially adherent to the steel surface through a sulfide and work as anti-wear pads. In this contribution, ZDDP tribofilms formed on gray cast iron surfaces are studied. The tribofilms were generated in a reciprocating sliding tribometer with a piston ring-cylinder liner configuration. Fully formulated oil of SAE grade 5W-30 was used as the lubricant during two tests at 40 Hz and 50 Hz. For the estimation of the tribofilm thicknesses, spectroscopic ellipsometry was used due to its high accuracy and non-destructive nature. Ellipsometry works on an optical principle whereby the change in polarisation of light reflected by the surface is associated with the refractive index of the surface material or with the thickness of the layer deposited on top. The ellipsometric responses derived from the tribofilms are modelled by an effective medium approximation (EMA), which includes the refractive indices of the materials involved, the homogeneity of the film, and its thickness. The material composition was obtained from X-ray photoelectron spectroscopy studies, where the presence of ZDDP, O and C was confirmed. From the EMA models, it was concluded that the tribofilms formed at 40 Hz are thicker and more homogeneous than the ones formed at 50 Hz. In addition, the refractive indices of the individual materials are mixed to derive an effective refractive index that describes the optical composition of the tribofilm and exhibits a maximum response in the UV range, characteristic of glassy semitransparent films.
Keywords: effective medium approximation, reciprocating sliding tribometer, spectroscopic ellipsometry, zinc dialkyldithiophosphate
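As an illustration of how constituent refractive indices can be mixed into an effective one, the sketch below implements a two-phase Bruggeman EMA, a common choice in ellipsometry; the abstract does not state which EMA variant was used, and the complex refractive indices are placeholders for the tribofilm constituents.

```python
import numpy as np

# Two-phase Bruggeman effective medium approximation (illustrative values only).
def bruggeman(n_a, n_b, f_a):
    """Effective complex refractive index of a mixture of phases a and b (volume fraction f_a of a)."""
    eps_a, eps_b = n_a ** 2, n_b ** 2
    f_b = 1.0 - f_a
    # The Bruggeman condition reduces to a quadratic in eps_eff:
    #   -2*eps_eff^2 + [(3*f_a - 1)*eps_a + (3*f_b - 1)*eps_b]*eps_eff + eps_a*eps_b = 0
    b = (3 * f_a - 1) * eps_a + (3 * f_b - 1) * eps_b
    roots = np.roots([-2.0, b, eps_a * eps_b])
    eps_eff = roots[np.argmax(roots.imag)]           # physical root: non-negative absorption
    return np.sqrt(eps_eff)

n_phosphate = 1.60 + 0.00j                           # placeholder zinc-phosphate-like phase
n_oxide = 2.30 + 0.40j                               # placeholder absorbing (e.g. iron oxide) phase
for f in (0.2, 0.5, 0.8):
    n_eff = bruggeman(n_phosphate, n_oxide, f)
    print(f"f_phosphate = {f:.1f} -> n_eff = {n_eff.real:.3f} + {n_eff.imag:.3f}j")
```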
Procedia PDF Downloads 251

145 The Sociocultural, Economic, and Environmental Contestations of Agbogbloshie: A Critical Review
Authors: Khiddir Iddris, Martin Oteng – Ababio, Andreas Bürkert, Christoph Scherrer, Katharina Hemmler
Abstract:
Agbogbloshie, as an informal settlement and economy where the e-waste sector thrives, has become a global hub of complex urban contestations involving sociocultural, economic, and environmental dimensions, owing to the implications that e-waste and informal economic patterns have for livelihoods, urbanisation, development and sustainability. Multi-author collaborations have produced an ever-growing body of literature on Agbogbloshie and the informal e-waste economy. There is, however, a dearth of assessments of Agbogbloshie as an urban informal settlement's intricate nexus of socioecological contestations. We address this gap by systematising the contextual knowledge from the literature, navigating the complex terrain of Agbogbloshie's challenges, and employing a multidimensional lens to unravel the sociocultural intricacies, economic dynamics, and environmental complexities shaping its identity. A systematic critical review approach was adopted, pragmatically combining content analysis and controversy mapping grounded in the concept of 'sustainable rurbanism'; this highlighted core themes and identified contrasting viewpoints. An analytical framework is presented. Five categories (geohistorical, sociocultural, economic, environmental and future trends) are proposed as an approach to systematising the literature. The review finds that the sociocultural dimension unveils a mosaic of cultural amalgamation, communal identity, and tensions impacting community cohesion. The analysis of economic intricacies reveals the prevalence of informal economies sustaining livelihoods yet entrenching economic disparities and marginalisation. Environmental scrutiny exposes the grim realities of e-waste disposal, pollution, and land use conflicts. The findings suggest that there is high resilience within the community and potential for sustainable trajectories. Theoretical and conceptual synergy is limited. This review provides a comprehensive exploration, offering insights and directions for future research, policy formulation, and community-driven interventions aimed at fostering sustainable transformations in Agbogbloshie and analogous urban contexts.
Keywords: Agbogbloshie, economic complexities, environmental challenges, resilience, sociocultural dynamics, sustainability, urban informal settlement
Procedia PDF Downloads 71

144 Human Rights in the United States: Challenges and Lessons from the Period 1948-2018
Authors: Mary Carmen Peloche Barrera
Abstract:
Since its early years as an independent nation, the United States has been one of the main promoters of the recognition, legislation, and protection of human rights. In the matter of freedom, the founding father Thomas Jefferson envisioned the role of the U.S. as a defender of freedom and equality throughout the world. This founding ideal shaped America's domestic and foreign policy in the 19th and 20th centuries and fuelled the country's aspiration to expand its values and institutions. The history of the emergence of human rights cannot be studied without making reference to leaders such as Woodrow Wilson, Franklin and Eleanor Roosevelt, and Martin Luther King. Throughout its history, this country has proclaimed that the protection of the freedoms of men, both inside and outside its borders, is practically the reason for its existence. Although the United States was one of the first countries to recognize the existence of inalienable rights for individuals, as well as the main promoter of the Universal Declaration of Human Rights of 1948, the country has gone through critical moments that have led to questioning of its commitment to the issue. Racial segregation, international military interventions, national security strategy, as well as national legislation on immigration, are some of the most controversial issues related to decisions and actions driven by the United States, which at the same time are at odds with its role as an advocate of human rights, both in the Americas and in the rest of the world. The aim of this paper is to study the oscillation of the efforts and commitments of the United States towards human rights. The paper will analyze the history and evolution of human rights in the United States in order to study the greatest challenges for the country in this matter. The paper will focus on both domestic policy (related to demographic issues) and foreign policy (regarding its role in a post-war world). Currently, more countries are joining the multilateral efforts for the promotion and protection of human rights. At the same time, the United States is one of the least committed countries in this respect, having ratified only 5 of the 18 treaties emanating from the United Nations. The last ratification was carried out in 2002 and, since then, the country has been losing ground, in an increasingly vertiginous way, in its credibility and, even worse, in its role as leader of 'the free world'. With or without the United States, the protection of human rights should remain the main goal of the international community.
Keywords: United States, human rights, foreign policy, domestic policy
Procedia PDF Downloads 117

143 Development and Validation of an Instrument Measuring the Coping Strategies in Situations of Stress
Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin
Abstract:
Stress causes deleterious effects at the physical, psychological and organizational levels, which highlights the need to use effective coping strategies to deal with it. Several coping models exist, but they do not integrate the different strategies in a coherent way, nor do they take into account new research on emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from a review of the scientific literature on coping and from a qualitative study carried out among workers with low or high levels of stress, as well as from an analysis of clinical cases. The model allows one to understand under what circumstances the strategies are effective or ineffective and to learn how one might use them more wisely. It includes Specific Strategies in controllable situations (Modification of the Situation and Resignation-Disempowerment), Specific Strategies in non-controllable situations (Acceptance and Stubborn Relentlessness), as well as so-called General Strategies (Wellbeing and Avoidance). This study presents the process of development and validation of an instrument to measure coping strategies based on this model. An initial pool of items was generated from the conceptual definitions, and three expert judges validated the content. Of these, 18 items were selected for a short-form questionnaire. A sample of 300 students and employees from a Quebec university was used for the validation of the questionnaire. Concerning the reliability of the instrument, the indices observed for inter-rater agreement (Krippendorff's alpha) and the coefficients of internal consistency (Cronbach's alpha) are satisfactory. To evaluate construct validity, a confirmatory factor analysis conducted in Mplus supports the existence of a model with six factors. The results of this analysis also suggest that this configuration is superior to alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses carried out suggest that the instrument has good psychometric qualities and demonstrate the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess coping strategies for dealing with stress and thus prevent mental health issues.
Keywords: acceptance, coping strategies, stress, validation process
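A minimal sketch of the internal-consistency check mentioned above, computing Cronbach's alpha for one scale from its item scores; the ratings generated below are random placeholders rather than the questionnaire data of the 300 participants.

```python
import numpy as np

# Cronbach's alpha for one scale: alpha = k/(k-1) * (1 - sum(item variances)/variance of totals).
def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of scores for one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(300, 1))                       # shared trait per respondent (placeholder)
scale = np.clip(np.round(2.5 + latent + rng.normal(0, 0.8, size=(300, 3))), 0, 5)
print(f"Cronbach's alpha (3-item scale): {cronbach_alpha(scale):.2f}")
```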
Procedia PDF Downloads 339

142 Influence of a Cationic Membrane in a Double Compartment Filter-Press Reactor on the Atenolol Electro-Oxidation
Authors: Alan N. A. Heberle, Salatiel W. Da Silva, Valentin Perez-Herranz, Andrea M. Bernardes
Abstract:
Contaminants of emerging concern are widely used substances, such as pharmaceutical products. These compounds represent a risk for both wildlife and human life since they are not completely removed from wastewater by conventional wastewater treatment plants. In the environment, they can be harmful even at low concentrations (µg or ng/L), causing bacterial resistance, endocrine disruption, and cancer, among other harmful effects. One of the most commonly taken medicines to treat cardiocirculatory diseases is atenolol (ATL), a β-blocker, which is toxic to aquatic life. It is therefore necessary to implement a methodology capable of promoting the degradation of ATL, to avoid environmental detriment. A very promising technology is advanced electrochemical oxidation (AEO), whose mechanisms are based on the electrogeneration of reactive radicals (mediated oxidation) and/or on the direct discharge of the substance by electron transfer from the contaminant to the electrode surface (direct oxidation). Hydroxyl (HO•) and sulfate (SO₄•⁻) radicals can be generated, depending on the reaction medium. Besides that, under some conditions, the peroxydisulfate ion (S₂O₈²⁻) is also generated from the reaction of SO₄•⁻ radicals in pairs. Both radicals, the ion, and direct contaminant discharge can break down the molecule, resulting in degradation and/or mineralization. However, the ATL molecule and its byproducts can still remain in the treated solution. In this sense, efforts can be made to improve the AEO process, one of them being the use of a cationic membrane to separate the cathodic (reduction) from the anodic (oxidation) reactor compartment. The aim of this study is to investigate the influence of implementing a cationic membrane (Nafion®-117) to separate the cathodic and anodic compartments of an AEO reactor. The studied reactor was a filter-press cell operated in batch recirculation mode at a flow rate of 60 L/h. The anode was an Nb/BDD2500 electrode and the cathode stainless steel, both two-dimensional, with a geometric surface area of 100 cm². The solution feeding the anodic compartment was prepared with 100 mg/L ATL, using 4 g/L Na₂SO₄ as supporting electrolyte. The cathodic compartment was fed with a solution containing 71 g/L Na₂SO₄. The membrane was placed between the two solutions. Applied current densities (iₐₚₚ) of 5, 20 and 40 mA/cm² were studied over a treatment time of 240 minutes. The ATL decay was analyzed by ultraviolet-visible spectroscopy (UV/Vis). Mineralization was determined by total organic carbon (TOC) analysis on a Shimadzu TOC-L CPH. In the cases without the membrane, iₐₚₚ of 5, 20 and 40 mA/cm² resulted in 55, 87 and 98% ATL degradation at the end of the treatment time, respectively. However, with the membrane, the degradation for the same iₐₚₚ was 90, 100 and 100%, requiring 240, 120 and 40 min to reach maximum degradation, respectively. The mineralization without the membrane, for the same iₐₚₚ, was 40, 55 and 72%, respectively, at 240 min, but with the membrane, all tested iₐₚₚ reached 80% mineralization, differing only in the time needed for maximum mineralization: 240, 150 and 120 min, respectively. The membrane increased the ATL oxidation, probably because it prevents the reduction of oxidant ions (S₂O₈²⁻) at the cathode surface.
Keywords: contaminants of emerging concern, advanced electrochemical oxidation, atenolol, cationic membrane, double compartment reactor
Procedia PDF Downloads 136141 Sound Source Localisation and Augmented Reality for On-Site Inspection of Prefabricated Building Components
Authors: Jacques Cuenca, Claudio Colangeli, Agnieszka Mroz, Karl Janssens, Gunther Riexinger, Antonio D'Antuono, Giuseppe Pandarese, Milena Martarelli, Gian Marco Revel, Carlos Barcena Martin
Abstract:
This study presents an on-site acoustic inspection methodology for the quality and performance evaluation of building components. The work focuses on global and detailed sound source localisation, performed by successively applying acoustic beamforming and sound intensity measurements. A portable experimental setup is developed, consisting of an omnidirectional broadband acoustic source, a microphone array and a sound intensity probe. Three main acoustic indicators are of interest, namely the sound pressure distribution on the surface of components such as walls, windows and junctions, the three-dimensional sound intensity field in the vicinity of junctions, and the sound transmission loss of partitions. The measurement data is post-processed and converted into a three-dimensional numerical model of the acoustic indicators with the help of the simultaneously acquired geolocation information. The three-dimensional acoustic indicators are then integrated into an augmented reality platform that superimposes them onto a real-time visualisation of the spatial environment. The methodology thus enables a measurement-supported inspection process of buildings and the correction of errors during construction and refurbishment. Two experimental validation cases are shown. The first consists of a laboratory measurement on a full-scale mockup of a room featuring a prefabricated panel, which is installed with controlled defects such as missing insulation and joint sealing material. It is demonstrated that the combined acoustic and augmented reality tool is capable of identifying acoustic leakages caused by the building defects and of assisting in correcting them. The second validation case is performed on a prefabricated room at a near-completion stage in the factory. With the help of the measurement and visualisation tools, the homogeneity of the partition installation is evaluated and leakages from junctions and doors are identified. Furthermore, the integration of acoustic indicators together with thermal and geometrical indicators via the augmented reality platform is shown.Keywords: acoustic inspection, prefabricated building components, augmented reality, sound source localization
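For readers unfamiliar with the beamforming step, the following is a minimal sketch of a conventional (delay-and-sum) source map for a single frequency bin, assuming free-field propagation. The array geometry, scan grid and microphone spectra are placeholders, not the project's actual processing chain.

```python
import numpy as np

def delay_and_sum_map(spectra, mic_pos, grid, freq, c=343.0):
    """Conventional beamforming power map for one FFT bin.
    spectra: complex pressure spectra at the microphones, shape (n_mics,)
    mic_pos: microphone coordinates in metres, shape (n_mics, 3)
    grid:    candidate source positions, shape (n_points, 3)"""
    k = 2.0 * np.pi * freq / c                       # acoustic wavenumber
    powers = np.empty(len(grid))
    for i, point in enumerate(grid):
        dist = np.linalg.norm(mic_pos - point, axis=1)
        steer = np.exp(-1j * k * dist) / dist        # free-field steering vector
        steer /= np.linalg.norm(steer)               # unit-norm weights
        powers[i] = np.abs(np.vdot(steer, spectra)) ** 2
    return powers
```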
Procedia PDF Downloads 383140 Influence of Smoking on Fine and Ultrafine Air Pollution PM in Their Pulmonary Genetic and Epigenetic Toxicity
Authors: Y. Landkocz, C. Lepers, P.J. Martin, B. Fougère, F. Roy Saint-Georges, A. Verdin, F. Cazier, F. Ledoux, D. Courcot, F. Sichel, P. Gosset, P. Shirali, S. Billet
Abstract:
In 2013, the International Agency for Research on Cancer (IARC) classified air pollution and fine particles as carcinogenic to humans. Causal relationships exist between elevated ambient levels of airborne particles and increased mortality and morbidity, including pulmonary diseases such as lung cancer. However, owing to the twofold complexity of the physicochemical properties of particulate matter (PM) and of tumor mechanistic processes, the mechanisms of action remain not fully elucidated. Furthermore, because air pollution PM and tobacco smoke share several properties, such as the same route of exposure and a similar chemical composition, potential mechanisms of synergy could exist. Smoking could therefore be an aggravating factor of particle toxicity. In order to identify some mechanisms of action of particles according to their size, two PM samples were collected in the urban-industrial area of Dunkerque: PM0.03-2.5 and PM0.33-2.5. The overall cytotoxicity of the fine particles was determined on human bronchial cells (BEAS-2B). The toxicological study then focused on the metabolic activation of the organic compounds coated onto the PM and on some genetic and epigenetic changes induced in a co-culture model of BEAS-2B cells and alveolar macrophages isolated from bronchoalveolar lavages performed on smokers and non-smokers. The results showed (i) the contribution of the ultrafine fraction of atmospheric particles to genotoxic (e.g., DNA double-strand breaks) and epigenetic mechanisms (e.g., promoter methylation) involved in tumor processes, and (ii) the influence of smoking on the cellular response. Three main conclusions can be discussed. First, our results showed the ability of the particles to induce deleterious effects potentially involved in the initiation and promotion stages of carcinogenesis. Second, smoking affects the nature of the induced genotoxic effects. Finally, the cell model developed in vitro, using bronchial epithelial cells and alveolar macrophages, takes into account quite realistically some of the cell interactions existing in the lung.Keywords: air pollution, fine and ultrafine particles, genotoxic and epigenetic alterations, smoking
Procedia PDF Downloads 347139 Displaying Compostela: Literature, Tourism and Cultural Representation, a Cartographic Approach
Authors: Fernando Cabo Aseguinolaza, Víctor Bouzas Blanco, Alberto Martí Ezpeleta
Abstract:
Santiago de Compostela became a stable object of literary representation during the period between 1840 and 1915, approximately. This study offers a partial cartographical look at this process, suggesting that a cultural space like Compostela’s becoming an object of literary representation paralleled the first stages of its becoming a tourist destination. We use maps as a method of analysis to show the interaction between a corpus of novels and the emerging tradition of tourist guides on Compostela during the selected period. Often, the novels constitute ways to present a city to the outside, marking it for the gaze of others, as guidebooks do. That leads us to examine the ways of constructing and rendering communicable the local in other contexts. For that matter, we should also acknowledge the fact that a good number of the narratives in the corpus evoke the representation of the city through the figure of one who comes from elsewhere: a traveler, a student or a professor. The guidebooks coincide in this with the emerging fiction, of which the mimesis of a city is a key characteristic. The local cannot define itself except through a process of symbolic negotiation, in which recognition and self-recognition play important roles. Cartography shows some of the forms that these processes of symbolic representation take through the treatment of space. The research uses GIS to find significant models of representation. We used the program ArcGIS for the mapping, defining the databases starting from an adapted version of the methodology applied by Barbara Piatti and Lorenz Hurni’s team at the University of Zurich. First, we designed maps that emphasize the peripheral position of Compostela from a historical and institutional perspective using elements found in the texts of our corpus (novels and tourist guides). Second, other maps delve into the parallels between recurring techniques in the fictional texts and characteristic devices of the guidebooks (sketching itineraries and the selection of zones and indexicalization), like a foreigner’s visit guided by someone who knows the city or the description of one’s first entrance into the city’s premises. Last, we offer a cartography that demonstrates the connection between the best known of the novels in our corpus (Alejandro Pérez Lugín’s 1915 novel La casa de la Troya) and the first attempt to create package tourist tours with Galicia as a destination, in a joint venture of Galician and British business owners, in the years immediately preceding the Great War. Literary cartography becomes a crucial instrument for digging deeply into the methods of cultural production of places. Through maps, the interaction between discursive forms seemingly so far removed from each other as novels and tourist guides becomes obvious and suggests the need to go deeper into a complex process through which a city like Compostela becomes visible on the contemporary cultural horizon.Keywords: compostela, literary geography, literary cartography, tourism
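As a purely illustrative sketch of the kind of GIS workflow described (the study itself used ArcGIS), the snippet below builds a small point layer of place mentions with geopandas and plots it by source type. The table of places, their coordinates and the source labels are invented examples, not data from the corpus.

```python
import geopandas as gpd
from shapely.geometry import Point

# Hypothetical rows: one record per place mentioned in a novel or guidebook.
records = [
    {"place": "Catedral de Santiago", "source": "guidebook", "lon": -8.5446, "lat": 42.8806},
    {"place": "Rúa do Franco", "source": "novel", "lon": -8.5456, "lat": 42.8789},
]
gdf = gpd.GeoDataFrame(
    records,
    geometry=[Point(r["lon"], r["lat"]) for r in records],
    crs="EPSG:4326",  # WGS84 longitude/latitude
)
# Plot the mentions coloured by source type to compare fiction and guidebooks.
gdf.plot(column="source", legend=True)
```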
Procedia PDF Downloads 392138 A Comparative Study of the Tribological Behavior of Bilayer Coatings for Machine Protection
Authors: Cristina Diaz, Lucia Perez-Gandarillas, Gonzalo Garcia-Fuentes, Simone Visigalli, Roberto Canziani, Giuseppe Di Florio, Paolo Gronchi
Abstract:
During their lifetime, industrial machines are often subjected to extreme chemical, mechanical and thermal conditions. In some cases, the loss of efficiency comes from the degradation of the surface as a result of its exposure to abrasive environments that can cause wear. This is a common problem to be solved in industries of diverse nature, such as the food, paper or concrete industries, among others. For this reason, a careful selection of the material is of high importance. In the machine design context, stainless steels such as AISI 304 and 316 are widely used. However, the severity of the external conditions can require additional protection for the steel, and coating solutions are sometimes demanded in order to extend the lifespan of these materials. Therefore, the development of effective coatings with high wear resistance is of utmost technological relevance. In this research, bilayer coatings made of titanium-tantalum, titanium-niobium, titanium-hafnium and titanium-zirconium have been developed by PVD (Physical Vapor Deposition) technology in a magnetron sputtering configuration. Their tribological behavior has been measured and evaluated under different environmental conditions. Two kinds of steel were used as substrates: AISI 304 and AISI 316. For comparison with these materials, a titanium alloy substrate was also employed. Regarding the characterization, wear rate and friction coefficient were evaluated with a tribo-tester in a pin-on-ball configuration, using different lubricants such as tomato sauce, wine, olive oil, wet compost, a mix of sand and concrete with water, and NaCl, in order to approximate the results to real extreme conditions. In addition, topographical images of the wear tracks were obtained in order to gain more insight into the wear behavior, and scanning electron microscope (SEM) images were taken to evaluate the adhesion and quality of the coatings. The characterization was completed with the measurement of nanoindentation hardness and elastic modulus. Concerning the results, the thicknesses of the deposited layers varied from 100 nm (Ti-Zr layer) to 1.4 µm (Ti-Hf layer), and SEM images confirmed that the addition of the Ti layer improved the adhesion of the coatings. Moreover, the results pointed out that these coatings increased the wear resistance in comparison with the original substrates under environments of different severity. Furthermore, the nanoindentation results showed an improvement of the elastic strain to failure and a high modulus of elasticity (approximately 200 GPa). In conclusion, Ti-Ta, Ti-Zr, Ti-Nb and Ti-Hf are very promising and effective coatings in terms of tribological behavior, considerably improving the wear resistance and friction coefficient of typically used machine materials.Keywords: coating, stainless steel, tribology, wear
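Wear results from pin-on-ball tests of this kind are commonly normalized as a specific wear rate, k = V / (F·s), so that coated and uncoated samples can be compared across loads and sliding distances. The sketch below is a generic helper for that calculation; the numerical values are hypothetical and not results from this study.

```python
def specific_wear_rate(volume_loss_mm3, normal_load_n, sliding_distance_m):
    """Specific (Archard-type) wear rate k = V / (F * s), in mm^3 N^-1 m^-1."""
    return volume_loss_mm3 / (normal_load_n * sliding_distance_m)

# Hypothetical example: 0.02 mm^3 lost under a 5 N load over 100 m of sliding.
k = specific_wear_rate(0.02, 5.0, 100.0)
print(f"k = {k:.2e} mm^3/(N*m)")   # 4.00e-05 mm^3/(N*m)
```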
Procedia PDF Downloads 150137 Low Energy Technology for Leachate Valorisation
Authors: Jesús M. Martín, Francisco Corona, Dolores Hidalgo
Abstract:
Landfills present long-term threats to soil, air, groundwater and surface water due to the formation of greenhouse gases (methane and carbon dioxide) and leachate from decomposing garbage. The composition of leachate differs from site to site and also within the landfill. Leachate changes with time (from weeks to years), since the landfilled waste is biologically highly active and its composition varies. The composition of the leachate depends mainly on factors such as the characteristics of the waste, the moisture content, climatic conditions, the degree of compaction and the age of the landfill. Therefore, the leachate composition cannot be generalized, and traditional treatment models should be adapted in each case. Although leachate composition is highly variable, what different leachates have in common is hazardous constituents and their potential eco-toxicological effects on human health and on terrestrial ecosystems. Since leachates have distinct compositions, each landfill or dumping site represents a different type of risk to its environment. Nevertheless, leachates always show high organic concentrations, high conductivity, heavy metals and ammonia nitrogen. Leachate could affect the current and future quality of water bodies due to uncontrolled infiltration. Therefore, the control and treatment of leachate is one of the biggest issues in the design and management of urban solid waste treatment plants and landfills. This work presents a treatment model that will be carried out in-situ using a cost-effective novel technology that combines solar evaporation/condensation with forward osmosis. The plant is powered by renewable energies (solar energy, biomass and residual heat), which minimizes the carbon footprint of the process. The final effluent quality is very high, allowing reuse (preferred) or discharge into watercourses. In the particular case of this work, the final effluents will be reused for cleaning and gardening purposes. A minor semi-solid residual stream is also generated in the process. Due to its special composition (rich in metals and inorganic elements), this stream will be valorized in ceramic industries to improve the characteristics of the final products.Keywords: forward osmosis, landfills, leachate valorization, solar evaporation
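For the forward-osmosis stage, water transport is often estimated with the classical flux model J_w = A * (pi_draw - pi_feed), neglecting concentration polarization. The sketch below applies that textbook model with hypothetical membrane permeability and osmotic pressures; none of these values come from the project described here.

```python
def fo_water_flux(a_lmh_per_bar, pi_draw_bar, pi_feed_bar):
    """Classical forward-osmosis water flux J_w = A * (pi_draw - pi_feed),
    in L m^-2 h^-1 (LMH), ignoring concentration polarization."""
    return a_lmh_per_bar * (pi_draw_bar - pi_feed_bar)

# Hypothetical values: membrane permeability 1.2 LMH/bar, concentrated draw
# solution at 30 bar osmotic pressure, leachate feed at 5 bar.
print(fo_water_flux(1.2, 30.0, 5.0), "L/(m2*h)")
```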
Procedia PDF Downloads 202136 Ionic Liquids as Substrates for Metal-Organic Framework Synthesis
Authors: Julian Mehler, Marcus Fischer, Martin Hartmann, Peter S. Schulz
Abstract:
During the last two decades, the synthesis of metal-organic frameworks (MOFs) has gained ever-increasing attention. Based on their pore size and shape as well as host-guest interactions, they are of interest for numerous fields related to porous materials, such as catalysis and gas separation. Usually, MOF synthesis takes place in an organic solvent between room temperature and approximately 220 °C, with mixtures of polyfunctional organic linker molecules and metal precursors as substrates. Reactions above the boiling point of the solvent, i.e. solvothermal reactions, are run in autoclaves or sealed glass vessels under autogenous pressure. A relatively new approach to the synthesis of MOFs is the so-called ionothermal synthesis route. It applies an ionic liquid as solvent, which can serve as a structure-directing template and/or a charge-compensating agent in the final coordination polymer structure. Furthermore, this method often allows for less harsh reaction conditions than the solvothermal route. Here, a variation of the ionothermal approach is reported in which the ionic liquid also serves as the organic linker source. By using 1-ethyl-3-methylimidazolium terephthalates ([EMIM][Hbdc] and [EMIM]₂[bdc]), the one-step synthesis of MIL-53(Al)/boehmite composites with interesting features is possible. The resulting material is already formed at moderate temperatures (90-130 °C) and is stabilized in the usually unfavored ht-phase. Additionally, in contrast to already published procedures for MIL-53(Al) synthesis, no further activation at high temperatures is mandatory. A full characterization of this novel composite material is provided, including XRD, solid-state NMR, elemental analysis and SEM as well as sorption measurements, and its interesting features are compared to MIL-53(Al) samples produced by the classical solvothermal route. Furthermore, the syntheses of the applied ionic liquids and salts are discussed. The influence of the degree of ionicity of the linker source [EMIM]x[H(2-x)bdc] on the crystal structure and the achievable synthesis temperature is investigated and gives insight into the role of the IL during synthesis. Aside from the synthesis of MIL-53 from EMIM terephthalates, the use of the phosphonium cation in this approach is also discussed. Additionally, the employment of ILs in the preparation of other MOFs is presented briefly. This includes the ZIF-4 framework from the respective imidazolate ILs and chiral camphorate-based frameworks from their imidazolium precursors.Keywords: ionic liquids, ionothermal synthesis, material synthesis, MIL-53, MOFs
Procedia PDF Downloads 208135 Waste Management Option for Bioplastics Alongside Conventional Plastics
Authors: Dan Akesson, Gauthaman Kuzhanthaivelu, Martin Bohlen, Sunil K. Ramamoorthy
Abstract:
Bioplastics can be defined as polymers derived partly or completely from biomass. Bioplastics can be biodegradable, such as polylactic acid (PLA) and polyhydroxyalkanoates (PHA), or non-biodegradable, such as biobased polyethylene (bio-PE), polypropylene (bio-PP) and polyethylene terephthalate (bio-PET). The usage of such bioplastics is expected to increase in the future due to newfound interest in sustainable materials. At the same time, these plastics become a new type of waste in the recycling stream. Most countries do not have a separate collection for bioplastics to be recycled or composted. After a brief introduction of bioplastics such as PLA in the UK, these plastics were once again replaced by conventional plastics in many establishments due to the lack of commercial composting. Recycling companies fear the contamination of conventional plastics in the recycling stream and state that they would have to invest in expensive new equipment to separate bioplastics and recycle them separately. This project studies what happens when bioplastics contaminate conventional plastics. Three commonly used conventional plastics were selected for this study: polyethylene (PE), polypropylene (PP) and polyethylene terephthalate (PET). In order to simulate contamination, two biopolymers, either polyhydroxyalkanoate (PHA) or thermoplastic starch (TPS), were blended with the conventional polymers. The amount of bioplastics in the conventional plastics was either 1% or 5%. The blended plastics were processed again to see the effect of degradation. The results showed that the tensile strength and the modulus of PE were almost unaffected by contamination, whereas the elongation is clearly reduced, indicating an increase in brittleness of the plastic. Generally, it can be said that PP is slightly more sensitive to the contamination than PE. This can be explained by the fact that the melting point of PP is higher than that of PE and, as a consequence, the biopolymer will degrade more quickly. However, the reduction of the tensile properties for PP is relatively modest. Impact strength is generally a more sensitive test method towards contamination. Again, PE is relatively unaffected by the contamination, but for PP there is a relatively large reduction of the impact properties already at 1% contamination. PET is a polyester and is, by its very nature, more sensitive to degradation than PE and PP. PET also has a much higher melting point than PE and PP, and as a consequence, the biopolymer will quickly degrade at the processing temperature of PET. As for the tensile properties, PET can tolerate 1% contamination without any reduction of the tensile strength. However, when the impact strength is examined, it is clear that already at 1% contamination there is a strong reduction of the properties. The thermal properties show the change in crystallinity. The blends were also characterized by SEM. A biphasic morphology can be seen, as the two polymers are not truly miscible, which also contributes to the reduced mechanical properties. The study shows that PE is relatively robust against contamination, while polypropylene (PP) is sensitive and polyethylene terephthalate (PET) can be quite sensitive towards contamination.Keywords: bioplastics, contamination, recycling, waste management
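As a point of reference for the measured losses, a naive linear rule-of-mixtures estimate predicts only a roughly proportional change in a property at 1-5% contamination; the much larger drops in impact strength reported here indicate that degradation and phase separation, rather than simple dilution, dominate. The sketch below computes that baseline with hypothetical property values that are not taken from the study.

```python
def rule_of_mixtures(prop_matrix, prop_contaminant, weight_fraction):
    """Linear (Voigt-type) blend estimate: property of the mixture assuming
    ideal mixing, with weight_fraction of contaminant in the matrix polymer."""
    return (1.0 - weight_fraction) * prop_matrix + weight_fraction * prop_contaminant

# Hypothetical tensile strengths (MPa): PP matrix 35, degraded TPS phase 5.
for frac in (0.01, 0.05):
    print(frac, "->", rule_of_mixtures(35.0, 5.0, frac), "MPa")
```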
Procedia PDF Downloads 225134 ScRNA-Seq RNA Sequencing-Based Program-Polygenic Risk Scores Associated with Pancreatic Cancer Risks in the UK Biobank Cohort
Authors: Yelin Zhao, Xinxiu Li, Martin Smelik, Oleg Sysoev, Firoj Mahmud, Dina Mansour Aly, Mikael Benson
Abstract:
Background: Early diagnosis of pancreatic cancer is clinically challenging due to vague or absent symptoms and a lack of biomarkers. Polygenic risk scores (PRSs) may provide a valuable tool to assess increased or decreased risk of PC. This study aimed to develop such PRSs by filtering genetic variants identified by GWAS using transcriptional programs identified by single-cell RNA sequencing (scRNA-seq). Methods: ScRNA-seq data from 24 pancreatic ductal adenocarcinoma (PDAC) tumor samples and 11 normal pancreases were analyzed to identify differentially expressed genes (DEGs) in tumor and microenvironment cell types compared to healthy tissues. Pathway analysis showed that the DEGs were enriched for hundreds of significant pathways. These were clustered into 40 "programs" based on gene similarity, using the Jaccard index. Published genetic variants associated with PDAC were mapped to each program to generate program PRSs (pPRSs). These pPRSs, along with five previously published PRSs (PGS000083, PGS000725, PGS000663, PGS000159, and PGS002264), were evaluated in a population of European origin from the UK Biobank, consisting of 1,310 PDAC participants and 407,473 participants without pancreatic cancer. Stepwise Cox regression analysis was performed to determine associations between the pPRSs and the development of PC, with adjustment for sex and principal components of genetic ancestry. Results: The PDAC genetic variants were mapped to 23 programs, which were used to generate pPRSs. Four distinct pPRSs (P1, P6, P11, and P16) and two published PRSs (PGS000663 and PGS002264) were significantly associated with an increased risk of developing PC. Among these, P6 exhibited the greatest hazard ratio (adjusted HR [95% CI] = 1.67 [1.14-2.45], p = 0.008). In contrast, P10 and P4 were associated with a lower risk of developing PC (adjusted HR [95% CI] = 0.58 [0.42-0.81], p = 0.001, and adjusted HR [95% CI] = 0.75 [0.59-0.96], p = 0.019). By comparison, two of the five published PRSs exhibited an association with PDAC onset (PGS000663: adjusted HR [95% CI] = 1.24 [1.14-1.35], p < 0.001; PGS002264: adjusted HR [95% CI] = 1.14 [1.07-1.22], p < 0.001). Conclusion: Compared to published PRSs, scRNA-seq-based pPRSs may be used to assess not only increased but also decreased risk of PDAC.Keywords: cox regression, pancreatic cancer, polygenic risk score, scRNA-seq, UK biobank
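The following is a minimal sketch of the kind of Cox proportional-hazards fit described, using the lifelines package on simulated data. The covariate names, effect sizes and censoring scheme are invented for illustration; the actual analysis was a stepwise regression on UK Biobank data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "pPRS_P6": rng.normal(size=n),           # standardized program-PRS (illustrative)
    "sex": rng.integers(0, 2, size=n),
    "PC1": rng.normal(scale=0.02, size=n),   # genetic ancestry principal component
})
# Simulate follow-up where a higher pPRS shortens the time to diagnosis.
time_to_event = rng.exponential(scale=10.0, size=n) * np.exp(-0.5 * df["pPRS_P6"])
df["event"] = (time_to_event < 8.0).astype(int)   # diagnosed within follow-up
df["time"] = np.minimum(time_to_event, 8.0)       # administrative censoring at 8 y

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])    # exp(coef) is the hazard ratio
```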
Procedia PDF Downloads 101133 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations
Authors: Milena Nanova, Radul Shishkov, Damyan Damov, Martin Georgiev
Abstract:
This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can produce innovative residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored for urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper places emphasis on the algorithmic implementation of the logical constraints and intricacies of residential architecture, exploring the potential of generative design to create visually engaging and contextually harmonious structures. This exploration also includes an analysis of how these designs align with legal building parameters, showcasing the potential for creative solutions within the confines of urban building regulations. Concurrently, our methodology integrates functional, economic, and environmental factors. We investigate how generative design can be utilized to optimize building performance, aiming to achieve a symbiotic relationship between the built environment and its natural surroundings. Through a blend of theoretical research and practical case studies, this research highlights the multifaceted capabilities of generative design and demonstrates practical applications of our framework. Our findings illustrate the rich possibilities that arise from an algorithmic design approach in the context of a vibrant urban landscape. This study contributes an alternative perspective to residential architecture, suggesting that the future of urban development lies in embracing the complex interplay between computational design innovation, regulatory adherence, and environmental responsibility.Keywords: generative design, computational design, parametric design, algorithmic modeling
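To make the idea of generating design options and filtering them against legal parameters concrete, here is a deliberately simplified sketch: random massing candidates are checked against assumed zoning limits (height, floor-area ratio, setback) and ranked by a toy objective. The limits, variables and scoring function are illustrative placeholders, not the framework described in the paper.

```python
import random

# Assumed zoning limits for a notional urban plot (illustrative values only).
PLOT_AREA = 600.0      # m2
MAX_HEIGHT = 15.0      # m
MAX_FAR = 1.8          # maximum floor-area ratio
MIN_SETBACK = 3.0      # m from the street line
FLOOR_HEIGHT = 3.0     # m per storey

def random_candidate():
    """Sample one massing option: footprint area, storeys and street setback."""
    return {
        "footprint": random.uniform(100.0, 400.0),
        "floors": random.randint(2, 6),
        "setback": random.uniform(1.0, 6.0),
    }

def is_legal(c):
    height = c["floors"] * FLOOR_HEIGHT
    far = c["footprint"] * c["floors"] / PLOT_AREA
    return height <= MAX_HEIGHT and far <= MAX_FAR and c["setback"] >= MIN_SETBACK

def score(c):
    # Toy objective: maximise usable floor area, mildly rewarding deeper setbacks.
    return c["footprint"] * c["floors"] + 50.0 * c["setback"]

candidates = [random_candidate() for _ in range(10_000)]
legal = [c for c in candidates if is_legal(c)]
best = max(legal, key=score)
print(len(legal), "legal options; best:", best)
```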
Procedia PDF Downloads 65