Search results for: Gloria Inés Martínez Domínguez
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 373

133 Effect of Three Drying Methods on Antioxidant Efficiency and Vitamin C Content of Moringa oleifera Leaf Extract

Authors: Kenia Martínez, Geniel Talavera, Juan Alonso

Abstract:

Moringa oleifera is a plant containing many nutrients that are mostly concentrated within the leaves. Commonly, the separation process of these nutrients involves solid-liquid extraction followed by evaporation and drying to obtain a concentrated extract, which is rich in proteins, vitamins, carbohydrates, and other essential nutrients that can be used in the food industry. In this work, three drying methods involving very different temperature and pressure conditions were used to evaluate the effect of each method on the vitamin C content and the antioxidant efficiency of the extracts. Solid-liquid extractions of Moringa leaf (LE) were carried out employing an ethanol solution (35% v/v) at 50 °C for 2 hours. The resulting extracts were then dried i) in a convective oven (CO) at 100 °C and at an atmospheric pressure of 750 mbar for 8 hours, ii) in a vacuum evaporator (VE) at 50 °C and 300 mbar for 2 hours, and iii) in a freeze-drier (FD) at -40 °C and 0.050 mbar for 36 hours. The antioxidant capacity (EC50, mg solids/g DPPH) of the dry solids was determined by the free radical inhibition method employing DPPH˙ at 517 nm, resulting in values of 2902.5 ± 14.8 for LE, 3433.1 ± 85.2 for FD, 3980.1 ± 37.2 for VE, and 8123.5 ± 263.3 for CO. The calculated antioxidant efficiency (AE, g DPPH/(mg solids·min)) was 2.920 × 10⁻⁵ for LE, 2.884 × 10⁻⁵ for FD, 2.512 × 10⁻⁵ for VE, and 1.009 × 10⁻⁵ for CO. Further, the vitamin C content (mg/L) determined by HPLC was 59.0 ± 0.3 for LE, 49.7 ± 0.6 for FD, 45.0 ± 0.4 for VE, and 23.6 ± 0.7 for CO. The results indicate that convective drying preserves only 40% of the vitamin C and 34% of the antioxidant efficiency of the initial extract, while vacuum drying preserves 76% and 86%, and freeze-drying 84% and 98%, respectively.
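The retention percentages in the closing sentence follow directly from the measurements reported above. As a sanity check, a minimal sketch that reproduces them from the abstract's own figures (the published percentages appear to be truncated rather than rounded, hence differences of up to 1%):

```python
# Back-of-the-envelope reproduction of the retention figures quoted in the
# abstract (measured values copied from the text; not part of the paper itself).
vitamin_c = {"LE": 59.0, "FD": 49.7, "VE": 45.0, "CO": 23.6}           # mg/L
ae = {"LE": 2.920e-5, "FD": 2.884e-5, "VE": 2.512e-5, "CO": 1.009e-5}  # g DPPH/(mg solids*min)

def retention(values, reference="LE"):
    """Percent of the initial leaf-extract (LE) value preserved by each drying method."""
    ref = values[reference]
    return {k: round(100.0 * v / ref, 1) for k, v in values.items() if k != reference}

vitc_retention = retention(vitamin_c)  # {'FD': 84.2, 'VE': 76.3, 'CO': 40.0}
ae_retention = retention(ae)           # {'FD': 98.8, 'VE': 86.0, 'CO': 34.6}
```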

Keywords: antioxidant efficiency, convective drying, freeze-drying, Moringa oleifera, vacuum drying, vitamin C content

Procedia PDF Downloads 229
132 Environmental Consequences of Metal Concentrations in Stream Sediments of Atoyac River Basin, Central Mexico: Natural and Industrial Influences

Authors: V. C. Shruti, P. F. Rodríguez-Espinosa, D. C. Escobedo-Urías, Estefanía Martinez Tavera, M. P. Jonathan

Abstract:

The Atoyac River, a major south-central river flowing through the states of Puebla and Tlaxcala in Mexico, is significantly impacted by natural volcanic inputs in addition to wastewater discharges from urban, agricultural and industrial zones. In the present study, core samples were collected from the Atoyac River and analyzed for sediment granularity, major (Al, Fe, Ca, Mg, K, P and S) and trace elemental concentrations (Ba, Cr, Cd, Mn, Pb, Sr, V, Zn, Zr). The textural studies reveal that the sediments are mostly sand-sized particles, exceeding 99%, with very little to no presence of mud fractions. Most of the metals (avg., all values in μg g⁻¹), Ca (35,528), Mg (10,789), K (7453), S (1394), Ba (203), Cr (30), Cd (4), Pb (11), Sr (435), Zn (76) and Zr (88), are enriched throughout the sediments, mainly sourced from volcanic inputs, the source rock composition of the Atoyac River basin and industrial influences from the Puebla city region. Contamination indices, such as the anthropogenic factor (AF), enrichment factor (EF) and geoaccumulation index (Igeo), were used to investigate the level of contamination and toxicity as well as to quantitatively assess the influence of human activities on metal concentrations. The AF values (>1) for Ba, Ca, Mg, Na, K, P and S suggested volcanic inputs from the study region, whereas Cd and Zn are attributed to the impact of industrial inputs in this zone. The EF and Igeo values revealed an extreme enrichment of S and Cd. The ecological risks were evaluated using the potential ecological risk index (RI), and the results indicate that the metals Cd and V pose a major hazard for the biological community.
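The geoaccumulation index and enrichment factor used here have standard definitions; a minimal sketch, in which the background and reference (Al) concentrations are hypothetical placeholders rather than the study's values:

```python
import math

def igeo(c_sample, c_background):
    """Muller geoaccumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
    return math.log2(c_sample / (1.5 * c_background))

def enrichment_factor(c_sample, ref_sample, c_background, ref_background):
    """EF = (Cn / Ref)_sample / (Bn / Ref)_background; Ref is typically Al or Fe."""
    return (c_sample / ref_sample) / (c_background / ref_background)

# Illustration: the Cd average from the abstract (4 ug/g) against a hypothetical
# background of 0.3 ug/g, normalized to hypothetical Al concentrations.
igeo_cd = igeo(4.0, 0.3)                             # ~3.15, a heavily contaminated class
ef_cd = enrichment_factor(4.0, 50_000, 0.3, 80_000)  # ~21.3, significant enrichment
```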

Keywords: Atoyac River, contamination indices, metal concentrations, Mexico, textural studies

Procedia PDF Downloads 263
131 Failure Analysis of Fuel Pressure Supply from an Aircraft Engine

Authors: M. Pilar Valles-gonzalez, Alejandro Gonzalez Meije, Ana Pastor Muro, Maria Garcia-Martinez, Beatriz Gonzalez Caballero

Abstract:

This paper studies a failure case of a fuel pressure supply tube from an aircraft engine. Multiple fracture cases of fuel pressure control tubes from aircraft engines have been reported. The studied set was composed of the mentioned tube, a welded connecting pipe, where the fracture occurred, and a union nut. The fracture was produced in one of the most critical zones of the tube, in a region next to the supporting body of the union nut to the connector. The tube material was X6CrNiTi18-10, an austenitic stainless steel. The chemical composition was determined using an X-ray fluorescence spectrometer (XRF) and combustion equipment. Furthermore, the material was characterized mechanically, by hardness testing, and microstructurally, using a stereomicroscope and an optical microscope. The results confirmed that it is within specifications. To determine the macrofractographic features, a visual examination of the tube fracture surface was carried out with a stereomicroscope. The results revealed plastic macrodeformation of the tube, surface damage, and signs of a possible corrosion process. The fracture surface was also inspected by field emission scanning electron microscopy (FE-SEM), equipped with an X-ray dispersive energy microanalysis system (EDX), to determine the microfractographic features and find out the failure mechanism involved in the fracture. Fatigue striations, which are typical of a progressive fracture by a fatigue mechanism, were observed. The origin of the fracture was traced to defects located on the outer wall of the tube, leading to a final overload fracture.

Keywords: aircraft engine, fatigue, FE-SEM, fractography, fracture, fuel tube, microstructure, stainless steel

Procedia PDF Downloads 118
130 Dispersion Effects in Waves Reflected by Lossy Conductors: The Optics vs. Electromagnetics Approach

Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda

Abstract:

The study of dispersion phenomena in electromagnetic waves reflected by conductors at infrared and lower frequencies is a topic with a number of applications. In this work, we aim to explain which are the most relevant ones and how this phenomenon is modeled from both the optics and the electromagnetics points of view. We also explain how the amplitude of an electromagnetic wave reflected by a lossy conductor depends both on the frequency of the incident wave and on the electrical properties of the conductor, and we illustrate this phenomenon with a practical example. The mathematical analysis made by a specialist in electromagnetics or a microwave engineer is apparently very different from the one made by a specialist in optics. We show how both approaches lead to the same physical result and which key concepts enable one to understand that, despite the differences in the equations, the solution to the problem is the same. Our study starts with an analysis based on the complex refractive index and the reflectance parameter. We show how this reflectance depends on the square root of the frequency when the reflecting material is a good conductor and the frequency of the wave is low enough. We then analyze the same problem with a less known approach, based on the reflection coefficient of the electric field, a parameter that is most commonly used in electromagnetics and microwave engineering. In summary, this paper presents a mathematical study, illustrated with a worked example, which unifies the modeling of dispersion effects made by specialists in optics with the one made by specialists in electromagnetics. The main finding of this work is that it is possible to reproduce the dependence of the Fresnel reflectance on frequency from the intrinsic impedance of the reflecting media.
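The square-root dependence described above is the classical Hagen-Rubens limit of the Fresnel reflectance for a good conductor; a quick numerical check, with the conductivity of copper chosen purely as an example:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def hagen_rubens_reflectance(freq_hz, sigma):
    """Low-frequency good-conductor limit of the Fresnel reflectance:
    R ~= 1 - 2*sqrt(2*eps0*omega/sigma), valid while sigma >> eps0*omega."""
    omega = 2.0 * math.pi * freq_hz
    return 1.0 - 2.0 * math.sqrt(2.0 * EPS0 * omega / sigma)

SIGMA_CU = 5.8e7  # S/m, DC conductivity of copper (example material)
r_1 = hagen_rubens_reflectance(1e12, SIGMA_CU)
r_4 = hagen_rubens_reflectance(4e12, SIGMA_CU)
# The reflectance deficit 1 - R doubles when the frequency quadruples,
# which is exactly the sqrt(frequency) law discussed in the abstract.
ratio = (1.0 - r_4) / (1.0 - r_1)
```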

Keywords: dispersion, electromagnetic waves, microwaves, optics

Procedia PDF Downloads 97
129 Gas-Phase Nondestructive and Environmentally Friendly Covalent Functionalization of Graphene Oxide Paper with Amines

Authors: Natalia Alzate-Carvajal, Diego A. Acevedo-Guzman, Victor Meza-Laguna, Mario H. Farias, Luis A. Perez-Rey, Edgar Abarca-Morales, Victor A. Garcia-Ramirez, Vladimir A. Basiuk, Elena V. Basiuk

Abstract:

Direct covalent functionalization of prefabricated free-standing graphene oxide paper (GOP) is considered the only approach suitable for systematic tuning of the thermal, mechanical and electronic characteristics of this important class of carbon nanomaterials. At the same time, traditional liquid-phase functionalization protocols can compromise the physical integrity of the paper-like material, up to its total disintegration. To avoid such undesirable effects, we explored the possibility of employing an alternative, solvent-free strategy for facile and nondestructive functionalization of GOP with two representative aliphatic amines, 1-octadecylamine (ODA) and 1,12-diaminododecane (DAD), as well as with two aromatic amines, 1-aminopyrene (AP) and 1,5-diaminonaphthalene (DAN). The functionalization was performed under moderate heating at 150-180 °C in vacuum. Under such conditions, it proceeds through both amidation and epoxy ring-opening reactions. Comparative characterization of pristine and amine-functionalized GOP mats was carried out using Fourier-transform infrared, Raman, and X-ray photoelectron spectroscopy (XPS), thermogravimetric (TGA) and differential thermal analysis, and scanning electron and atomic force microscopy (SEM and AFM, respectively). In addition, we compared the stability in water, wettability, electrical conductivity and elastic (Young's) modulus of GOP mats before and after amine functionalization. The highest content of organic species was obtained in the case of GOP-ODA, followed by the GOP-DAD, GOP-AP and GOP-DAN samples. The covalent functionalization increased the mechanical and thermal stability of GOP, as well as its electrical conductivity. The magnitude of each effect depends on the particular chemical structure of the amine employed, which allows for tuning a given GOP property. Morphological characterization using SEM showed that, compared to pristine graphene oxide paper, amine-modified GOP mats become relatively ordered layered assemblies, in which individual GO sheets are organized in a near-parallel pattern. Financial support from the National Autonomous University of Mexico (grants DGAPA-IN101118 and IN200516) and from the National Council of Science and Technology of Mexico (CONACYT, grant 250655) is greatly appreciated. The authors also thank David A. Domínguez (CNyN of UNAM) for XPS measurements and Dr. Edgar Alvarez-Zauco (Faculty of Science of UNAM) for the opportunity to use the TGA equipment.

Keywords: amines, covalent functionalization, gas-phase, graphene oxide paper

Procedia PDF Downloads 138
128 Acerola and Orange By-Products as Sources of Bioactive Compounds for Probiotic Fermented Milks

Authors: Tatyane Lopes de Freitas, Antonio Diogo S. Vieira, Susana Marta Isay Saad, Maria Ines Genovese

Abstract:

The fruit processing industries generate a large volume of residues in the production of juices, pulps, and jams. These residues, or by-products, consisting of peels, seeds, and pulps, are routinely discarded. Fruits are rich in bioactive compounds, including polyphenols, which have positive effects on health. Dry residues from two fruits, acerola (M. emarginata D. C.) and orange (C. sinensis), were characterized with respect to their contents of ascorbic acid, minerals, total dietary fiber, moisture, ash, lipids, proteins, and carbohydrates, as well as the high-performance liquid chromatographic profile of flavonoids, total polyphenol and proanthocyanidin contents, and antioxidant capacity by three different methods (ferric reducing antioxidant power, FRAP; oxygen radical absorbance capacity, ORAC; and 1,1-diphenyl-2-picrylhydrazyl, DPPH, radical scavenging activity). Acerola by-products presented the highest ascorbic acid content (605 mg/100 g) and a better antioxidant capacity than orange by-products. The dry residues from acerola showed high contents of proanthocyanidins (617 µg CE/g) and total polyphenols (2525 mg gallic acid equivalents - GAE/100 g). Both presented high total dietary fiber (above 60%) and protein contents (acerola: 10.4%; orange: 9.9%), and reduced fat content (acerola: 1.6%; orange: 2.6%). Both residues showed high levels of potassium, calcium, and magnesium, and were considered sources of these minerals. With the acerola by-product, four formulations of probiotic fermented milks were produced: F0 (without the addition of acerola residue (AR)), F2 (2% AR), F5 (5% AR) and F10 (10% AR). The physicochemical characteristics of the fermented milks throughout storage were investigated, as well as the impact of in vitro simulated gastrointestinal conditions on flavonoids and probiotics. The microorganisms analyzed maintained their populations around 8 log CFU/g during storage. After the gastric phase of the simulated digestion, the populations decreased, and after the enteric phase, no colonies were detected. On the other hand, the flavonoids increased after the gastric phase, maintaining their levels or suffering a small decrease after the enteric phase. Acerola by-product powder is a valuable ingredient for use in functional foods because it is rich in vitamin C, fiber, and flavonoids. These flavonoids appear to be highly resistant to the acids and salts of digestion.

Keywords: acerola, orange, by-products, fermented milk

Procedia PDF Downloads 100
127 Design and Evaluation of a Fully-Automated Fluidized Bed Dryer for Complete Drying of Paddy

Authors: R. J. Pontawe, R. C. Martinez, N. T. Asuncion, R. V. Villacorte

Abstract:

Drying of high-moisture paddy remains a major problem in the Philippines, especially during inclement weather conditions. To alleviate the problem, mechanical dryers such as flat-bed and recirculating batch-type dryers have been used. However, drying to 14% (wet basis) final moisture content takes 10-12 hours, which is too long and tedious to be ideal for handling high-moisture paddy. A fully automated pilot-scale fluidized bed drying system with a capacity of 500 kilograms per hour was evaluated using high-moisture paddy. The developed fluidized bed dryer was evaluated at four drying temperatures and two variations in fluidization time at constant airflow, static pressure and tempering period. Complete drying of paddy with ≥28% (w.b.) initial MC was attained after 2 passes of fluidized bed drying at 2 minutes of exposure to a 70 °C drying temperature and a 4.9 m/s superficial air velocity, followed by a 60 min ambient-air tempering period (30 min without ventilation and 30 min with air ventilation), for a total drying time of 2.07 h. Around 82% of the normal mechanical drying time was saved at the 70 °C drying temperature. The drying cost was calculated to be P0.63 per kilogram of wet paddy. Specific heat energy consumption was only 2.84 MJ/kg of water removed. The head rice yield recovery of the dried paddy passed the Philippine Agricultural Engineering Standards. Sensory evaluation showed that the color and taste of the samples dried in the fluidized bed dryer were comparable to air-dried paddy. The optimum drying parameters for the fluidized bed dryer are a 70 °C drying temperature, 2 min fluidization time, 4.9 m/s superficial air velocity, 10.16 cm grain depth and 60 min ambient-air tempering period.
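The reported specific energy consumption can be related to the mass of water evaporated per kilogram of wet paddy by a simple dry-matter balance; a sketch using the abstract's moisture figures (28% to 14% wet basis):

```python
def water_removed_per_kg(mc_initial_wb, mc_final_wb):
    """kg of water evaporated per kg of wet paddy (moisture contents on a wet
    basis). Dry matter is conserved: final mass = dry_matter / (1 - mc_final)."""
    dry_matter = 1.0 - mc_initial_wb
    final_mass = dry_matter / (1.0 - mc_final_wb)
    return 1.0 - final_mass

water = water_removed_per_kg(0.28, 0.14)  # ~0.163 kg water per kg wet paddy
# Heat energy per kg of wet paddy, using the reported 2.84 MJ/kg water removed.
energy_mj = water * 2.84                  # ~0.46 MJ per kg of wet paddy
```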

Keywords: drying, fluidized bed dryer, head rice yield, paddy

Procedia PDF Downloads 294
126 Electronic Device Robustness against Electrostatic Discharges

Authors: Clara Oliver, Oibar Martinez

Abstract:

This paper is intended to reveal the severity of electrostatic discharge (ESD) effects in electronic and optoelectronic devices by performing sensitivity tests based on the Human Body Model (HBM) standard. We explain the HBM standard in detail, together with the typical failure modes associated with electrostatic discharges. In addition, a prototype electrostatic charge generator, featuring a compact high-voltage source, has been designed, fabricated, and verified to stress electronic devices. This prototype is inexpensive and enables one to run a battery of pre-compliance tests aimed at detecting unexpected weaknesses to static discharges at the component level. Tests with different devices were performed to illustrate the behavior of the proposed generator. A set of discharges was applied according to the HBM standard to commercially available bipolar transistors, complementary metal-oxide-semiconductor transistors and light-emitting diodes. It is observed that high current and voltage ratings in electronic devices do not necessarily guarantee that the device will withstand high levels of electrostatic discharges. We have also compared the results obtained by performing the sensitivity tests based on HBM with a real discharge generated by a human. For this purpose, the charge accumulated in the person is monitored, and a direct discharge against the devices is generated by touching them. Every test has been performed under controlled relative-humidity conditions. This paper should be of interest to research teams involved in the development of electronic and optoelectronic devices who need to verify the reliability of their devices in terms of robustness to electrostatic discharges.
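The HBM standard referenced above models the charged human body as a 100 pF capacitor discharging through a 1.5 kΩ series resistor; a first-order sketch of the ideal discharge waveform (real testers add parasitic elements that shape the rise time):

```python
import math

C_HBM = 100e-12  # farads: human-body capacitance in the HBM standard network
R_HBM = 1500.0   # ohms: series resistance in the HBM standard network

def hbm_current(v_charge, t):
    """Ideal HBM discharge current, i(t) = (V/R) * exp(-t / (R*C))."""
    return (v_charge / R_HBM) * math.exp(-t / (R_HBM * C_HBM))

i_peak = hbm_current(2000.0, 0.0)  # ~1.33 A peak for a 2 kV stress level
tau = R_HBM * C_HBM                # 150 ns decay time constant
```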

Keywords: human body model, electrostatic discharge, sensitivity tests, static charge monitoring

Procedia PDF Downloads 120
125 Modeling Breathable Particulate Matter Concentrations over Mexico City Retrieved from Landsat 8 Satellite Imagery

Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Magnolia G. Martinez-Rivera, Pablo de J. Angeles-Salto, Carlos Herrera-Ventosa

Abstract:

In order to diminish health risks, it is of major importance to monitor air quality. However, this process entails high costs in physical and human resources. In this context, this research is carried out with the main objective of developing a predictive model for concentrations of inhalable particles (PM10-2.5) using remote sensing. To develop the model, satellite images of Mexico City's Metropolitan Area, mainly from Landsat 8, were used. Using historical PM10 and PM2.5 measurements from RAMA (the Automatic Environmental Monitoring Network of Mexico City) and by processing the available satellite images, a preliminary model was generated in which it was possible to observe critical opportunity areas that will allow the generation of a robust model. Through the preliminary model applied to the scenes of Mexico City, three zones of particular interest were identified due to their presumed high concentrations of PM: those presenting high plant density, bodies of water, and bare soil without constructions or vegetation. To date, work continues on this line to improve the preliminary model that has been proposed. In addition, a brief analysis was made of six models presented in articles developed in different parts of the world, in order to identify the optimal bands for the generation of a model suitable for Mexico City. It was found that infrared bands have helped to model PM in other cities, but the effectiveness that these bands could provide under the geographic and climatic conditions of Mexico City is still being evaluated.
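A reflectance-based PM model of the kind described is ultimately a regression of ground-station measurements on band reflectances; a minimal single-band sketch on synthetic data (the band choice, coefficients, and data below are invented for demonstration and are not the paper's results):

```python
import random

# Synthetic illustration: fit PM10 ~ a + b * reflectance by ordinary least squares.
random.seed(1)
reflectance = [random.uniform(0.05, 0.30) for _ in range(100)]  # fake red-band values
pm10 = [20 + 150 * r + random.gauss(0, 2) for r in reflectance]  # fake ground truth

n = len(reflectance)
mean_x = sum(reflectance) / n
mean_y = sum(pm10) / n
# Closed-form simple linear regression: slope b, intercept a.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(reflectance, pm10)) \
    / sum((x - mean_x) ** 2 for x in reflectance)
a = mean_y - b * mean_x  # recovers approximately (20, 150) despite the noise
```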

Keywords: air quality, modeling pollution, particulate matter, remote sensing

Procedia PDF Downloads 126
124 Flirting with Ephemerality and the Daily Production of the Fleeting City

Authors: Rafael Martinez

Abstract:

Our view of cities is dominated by the built environment. Buildings, streets, avenues, bridges, flyovers, and so on virtually exclude anything not fixed, permanently alterable or indefinitely temporal. Yet, city environments can also be shaped by temporarily produced structures which, regardless of their transience, act as thresholds separating or segregating people and spaces. Academic works on cities conceptualize them, whether temporary or permanent, as tangible environments. This paper considers the idea of the ephemeral city, a city purposely produced and lived in as an impermanent, fluid and transitional environment resulting from an alignment of different forces. In particular, the paper proposes to observe how certain performative practices inform the emergence of ephemeral spaces in the city's daily life. With Singapore as its backdrop and focusing on foreign workers, the paper aims at documenting how everyday life practices, such as flirting, result in the production of a transitional space, informed by semiotic blurs and yet material, perceptible, human and tangible for some. In this paper, it is argued that flirting for Singapore's foreign workers entails a skillful understanding of what is proposed as the 'flirting cartography.' Thus, spatially, flirtation becomes not only a matter to be taken for granted but also a form of producing a fleeting space that requires the deployment of various techniques drawn upon particular knowledge. The paper is based upon a performative methodology which seeks to understand the praxis and rationale of the ephemerality of some spaces produced by foreign workers within this cosmopolitan city. Through this methodological approach, the paper aims to show how usually marginalized populations gain visibility through their ephemeral reclamation of public spaces in the city.

Keywords: ephemeral, flirting, Singapore, space

Procedia PDF Downloads 77
123 Heterogeneous Photocatalytic Degradation of Ibuprofen in Ultrapure Water, Municipal and Pharmaceutical Industry Wastewaters Using a TiO2/UV-LED System

Authors: Nabil Jallouli, Luisa M. Pastrana-Martínez, Ana R. Ribeiro, Nuno F. F. Moreira, Joaquim L. Faria, Olfa Hentati, Adrián M. T. Silva, Mohamed Ksibi

Abstract:

Degradation and mineralization of ibuprofen (IBU) were investigated using ultraviolet (UV) light-emitting diodes (LEDs) in TiO2 photocatalysis. Samples of ultrapure water (UP) and a secondary treated effluent of a municipal wastewater treatment plant (WWTP), both spiked with IBU, as well as a highly concentrated IBU (230 mg L⁻¹) pharmaceutical industry wastewater (PIWW), were tested in the TiO2/UV-LED system. Three operating parameters, namely pH, catalyst load and number of LEDs, were optimized. The process efficiency was evaluated in terms of IBU removal using high-performance liquid chromatography (HPLC) and ultra-high-performance liquid chromatography coupled to tandem mass spectrometry (UHPLC-MS/MS). Additionally, the mineralization was investigated by determining the dissolved organic carbon (DOC) content. The chemical structures of the transformation products were proposed based on data obtained using liquid chromatography with a high-resolution ion trap/time-of-flight mass spectrometer (LC-MS-IT-TOF). A possible pathway of IBU degradation was accordingly proposed. Bioassays were performed using the marine bacterium Vibrio fischeri to evaluate the potential acute toxicity of the original and treated wastewaters. TiO2 heterogeneous photocatalysis was efficient in removing IBU from UP and from PIWW, and less efficient in treating the wastewater from the municipal WWTP. The acute toxicity decreased by ca. 40% after treatment, regardless of the studied matrix.
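Photocatalytic degradation of this kind is commonly fitted to pseudo-first-order kinetics; a minimal sketch, where the apparent rate constant below is a hypothetical placeholder rather than a value measured in the study:

```python
import math

# Pseudo-first-order kinetics often assumed for TiO2 photocatalysis:
# C(t) = C0 * exp(-k_app * t).
def concentration(c0, k_app, t_min):
    """Remaining concentration (mg/L) after t_min minutes of irradiation."""
    return c0 * math.exp(-k_app * t_min)

C0 = 230.0    # mg/L, the IBU level of the pharmaceutical industry wastewater
K_APP = 0.05  # 1/min, hypothetical apparent rate constant (illustrative only)
half_life = math.log(2.0) / K_APP  # ~13.9 min to halve the concentration
```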

Keywords: acute toxicity, Ibuprofen, UV-LEDs, wastewaters

Procedia PDF Downloads 223
122 Control of Doxorubicin Release Rate from Magnetic PLGA Nanoparticles Using a Non-Permanent Magnetic Field

Authors: Inês N. Peça, A. Bicho, Rui Gardner, M. Margarida Cardoso

Abstract:

Inorganic/organic nanocomplexes offer tremendous scope for future biomedical applications, including imaging, disease diagnosis and drug delivery. The combination of Fe3O4 with biocompatible polymers to produce smart drug delivery systems for use in pharmaceutical formulations presents a powerful tool for targeting anti-cancer drugs to specific tumor sites through the application of an external magnetic field. In the present study, we focused on evaluating the effect of the magnetic field application time on the rate of drug release from iron oxide polymeric nanoparticles. Doxorubicin, an anticancer drug, was selected as the model drug loaded into the nanoparticles. Nanoparticles composed of poly(D-lactide-co-glycolide) (PLGA), a biocompatible polymer already approved by the FDA, containing iron oxide nanoparticles (MNP) for magnetic targeting and doxorubicin (DOX), were synthesized by the o/w solvent extraction/evaporation method and characterized by scanning electron microscopy (SEM), dynamic light scattering (DLS), inductively coupled plasma-atomic emission spectrometry and Fourier-transform infrared spectroscopy. The produced particles exhibited smooth surfaces and spherical shapes, with sizes between 400 and 600 nm. The effect of the magnetic doxorubicin-loaded PLGA nanoparticles on cell viability was investigated in mammalian CHO cell cultures. The results showed that unloaded magnetic PLGA nanoparticles were nontoxic, while magnetic particles without a polymeric coating showed a high level of toxicity. Concerning the therapeutic activity, doxorubicin-loaded magnetic particles caused a remarkable enhancement of the cell inhibition rates compared to their non-magnetic counterparts. In vitro drug release studies performed under a non-permanent magnetic field show that the application time and the on/off cycle duration strongly influence both the final amount and the rate of drug release. To determine the mechanism of drug release, the data obtained from the release curves were fitted to the semi-empirical equation of the Korsmeyer-Peppas model, which may be used to describe both Fickian and non-Fickian release behaviour. The doxorubicin release mechanism was shown to be governed mainly by Fickian diffusion. The results obtained show that the rate of drug release from the produced magnetic nanoparticles can be modulated through the duration of the magnetic field application.
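The Korsmeyer-Peppas fit mentioned here linearizes Mt/M∞ = k·tⁿ in log-log space; a minimal sketch on synthetic, noise-free data, with k and n invented purely for illustration:

```python
import math

# Korsmeyer-Peppas model: Mt/Minf = k * t**n (applied while Mt/Minf < 0.6).
# An exponent near the Fickian threshold (~0.43 for spheres) indicates
# diffusion-controlled release. K_TRUE and N_TRUE are illustrative values.
K_TRUE, N_TRUE = 0.12, 0.43
times = [0.5, 1.0, 2.0, 4.0, 6.0, 8.0]           # hours
release = [K_TRUE * t ** N_TRUE for t in times]  # fraction of drug released

# Linearize, log(Mt/Minf) = log(k) + n*log(t), then fit by least squares.
lx = [math.log(t) for t in times]
ly = [math.log(m) for m in release]
N = len(lx)
mx, my = sum(lx) / N, sum(ly) / N
n_fit = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)
k_fit = math.exp(my - n_fit * mx)  # recovers (0.12, 0.43) on noise-free data
```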

Keywords: drug delivery, magnetic nanoparticles, PLGA nanoparticles, controlled release rate

Procedia PDF Downloads 233
121 Surface-Enhanced Raman Spectroscopy on Gold Nanoparticles in the Kidney Disease

Authors: Leonardo C. Pacheco-Londoño, Nataly J. Galan-Freyle, Lisandro Pacheco-Lugo, Antonio Acosta-Hoyos, Elkin Navarro, Gustavo Aroca-Martinez, Karin Rondón-Payares, Alberto C. Espinosa-Garavito, Samuel P. Hernández-Rivera

Abstract:

At the Life Science Research Center at Simon Bolivar University, a primary focus is the diagnosis of various diseases, and the use of gold nanoparticles (Au-NPs) in diverse biomedical applications is continually expanding. In the present study, Au-NPs were employed as substrates for surface-enhanced Raman spectroscopy (SERS) aimed at diagnosing kidney diseases arising from lupus nephritis (LN), preeclampsia (PC), and hypertension (H). Discrimination models for distinguishing patients with and without kidney disease were developed from the SERS signals of urine samples by partial least squares-discriminant analysis (PLS-DA). A comparative study of the Raman signals across the three conditions was conducted, leading to the identification of potential metabolite signals. Additionally, a secondary analysis was performed using machine learning (ML) models, wherein different ML algorithms were evaluated for their efficiency. Model validation was carried out using cross-validation and external validation, and parameters such as sensitivity and specificity were determined; the models showed average values of 0.9 for both parameters. Finally, it is worth highlighting that this collaborative effort involved two university research centers and two healthcare institutions, ensuring ethical treatment and informed consent for patient samples.
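Sensitivity and specificity, the two parameters on which the models averaged 0.9, are computed from the validation confusion matrix; a minimal sketch with hypothetical labels (1 = kidney disease, 0 = control), not the study's data:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical external-validation outcome: 9/10 patients and 9/10 controls
# classified correctly, matching the ~0.9 averages reported.
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 9 + [0] + [0] * 9 + [1]
sens, spec = sensitivity_specificity(y_true, y_pred)  # (0.9, 0.9)
```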

Keywords: SERS, Raman, PLS-DA, kidney diseases

Procedia PDF Downloads 13
120 Fluid-Structure Interaction Analysis of a Vertical Axis Wind Turbine Blade Made with Natural Fiber Based Composite Material

Authors: Ivan D. Ortega, Juan D. Castro, Alberto Pertuz, Manuel Martinez

Abstract:

One of the problems considered when scientists discuss climate change is the necessity of utilizing renewable sources of energy. In this category there are many approaches; one of them is wind energy and wind turbines, whose designs have changed frequently over the years in pursuit of better overall performance under different conditions. From that situation we get the two main types known today: vertical and horizontal axis wind turbines, abbreviated VAWT and HAWT, respectively. This research aims to understand how well suited a composite material made with fibers of natural origin, a material still in development, is for implementation in vertical axis wind turbine blades under certain wind loads. The study consisted of acquiring the mechanical properties of the materials to be used, namely Bactris guineensis, also known as palma de lata in Colombia, and the adhesive acting as the matrix, which had not previously been studied to the degree required for this project. Then, a simplified 3D model of the airfoil was developed and tested under preliminary loads using finite element analysis (FEA); these loads were measured in the Colombian Chicamocha Canyon. Afterwards, a more realistic pressure profile was obtained using computational fluid dynamics (CFD), which took into account the 3D shape of the complete blade and its rotation. Finally, the blade model was subjected to the wind loads using one-way fluid-structure interaction (FSI), and its behavior was analyzed to draw conclusions. The observed overall results were positive, since the material behaved fairly well, as expected. The data suggest the material could be genuinely useful in this kind of application, in small to medium-sized turbines, if given more attention and time to develop.

Keywords: CFD, FEA, FSI, natural fiber, VAWT

Procedia PDF Downloads 196
119 Chromosomal Damage in Human Lymphocytes by Ultraviolet Radiation

Authors: Felipe Osorio Ospina, Maria Adelaida Mejia Arango, Esteban Onésimo Vallejo Agudelo, Victoria Lucía Dávila Osorio, Natalia Vargas Grisales, Lina María Martínez Sanchez, Camilo Andrés Agudelo Vélez, Ángela Maria Londoño García, Isabel Cristina Ortiz Trujillo

Abstract:

Excessive exposure to ultraviolet radiation has been shown to be a risk factor for photodamage, alteration of the immune mechanisms that recognize malignant cells, cutaneous pro-inflammatory states, and skin cancers. Objective: To identify the time of exposure to ultraviolet radiation required to produce chromosomal damage in human lymphocytes. Methodology: We conducted a serial in vitro study in which samples were taken from the heparinized blood of healthy people with no exposure to agents that could induce chromosomal alterations. The samples were cultured in RPMI-1640 medium containing 10% fetal bovine serum and penicillin-streptomycin antibiotics. Subsequently, they were grouped and exposed to ultraviolet light for 1 to 20 seconds. At the end of the treatments, cytology samples were prepared and stained with Giemsa (5%). Slides were read under an optical microscope, and 100 metaphases were analysed per treatment to record chromosomal alterations. Each treatment was conducted at three separate times, each with two replicates. Results: Chromosomal alterations were only observed in lymphocytes exposed to UV for 1 to 3 seconds (p<0.05). Conclusions: Exposure to ultraviolet radiation generates damage, visible by light microscopy, in chromosomes from human lymphocytes; the highest rates of injury were observed between two and three seconds of exposure, and above this value a reduction in the number of mitotic cells was evident.

Keywords: ultraviolet rays, lymphocytes, chromosome breakpoints, photodamage

Procedia PDF Downloads 398
117 Use of Corn Stover for the Production of 2G Bioethanol, Enzymes, and Xylitol Under a Biorefinery Concept

Authors: Astorga-Trejo Rebeca, Fonseca-Peralta Héctor Manuel, Beltrán-Arredondo Laura Ivonne, Castro-Martínez Claudia

Abstract:

The use of biomass as feedstock for the production of fuels and other chemicals of interest is an increasingly accepted option on the way to the development of biorefinery complexes. In the Mexican state of Sinaloa, two million tons of residues from corn crops are produced every year, most of which can be converted to bioethanol and other products through biotechnological conversion using yeast and other microorganisms. Therefore, the objective of this work was to take advantage of corn stover and evaluate its potential as a substrate for the production of second-generation (2G) bioethanol, enzymes, and xylitol. To produce 2G bioethanol, an acid-alkaline pretreatment was carried out prior to saccharification and fermentation. The microorganisms used for the production of enzymes, as well as for the production of xylitol, were isolated and characterized by our working group. Statistical analysis was performed using Design Expert version 11.0. The results showed that it is possible to obtain 2G bioethanol employing corn stover as a carbon source and Saccharomyces cerevisiae ItVer01 and Candida intermedia CBE002, with yields of 0.42 g and 0.31 g, respectively. It was also shown that C. intermedia can produce xylitol with a good yield (0.46 g/g). In addition, qualitative and quantitative studies showed that the native strains Fusarium equiseti (0.4 IU/mL, xylanase), Bacillus velezensis (1.2 IU/mL, xylanase, and 0.4 IU/mL, amylase), and Penicillium funiculosum (1.5 IU/mL, cellulases) can produce xylanases, amylases, or cellulases using corn stover as raw material. This study demonstrates that corn stover, a low-cost raw material with high availability in our country, can be used as a carbon source to obtain bioproducts of industrial interest through processes that are more environmentally friendly and sustainable. Further optimization of each bioprocess is still needed.
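The g/g yields reported above are ratios of product formed to substrate consumed. A minimal sketch of the arithmetic; the 0.511 g/g stoichiometric maximum for ethanol from glucose is standard fermentation theory, not a figure from this study, and the example masses are invented:

```python
def product_yield(product_g, substrate_g):
    """Yield coefficient Y_p/s: grams of product formed per gram of substrate consumed."""
    return product_g / substrate_g

def percent_of_theoretical(y_ps, y_max=0.511):
    """Yield as a percentage of the stoichiometric maximum
    (0.511 g ethanol per g glucose for the classic fermentation equation)."""
    return 100.0 * y_ps / y_max
```

For example, 46 g of xylitol from 100 g of xylose gives Y_p/s = 0.46 g/g, and an ethanol yield of 0.42 g/g corresponds to roughly 82% of the theoretical maximum.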

Keywords: biomass, corn stover, biorefinery, bioethanol 2G, enzymes, xylitol

Procedia PDF Downloads 133
116 Difficulties in the Emotional Processing of Intimate Partner Violence Perpetrators

Authors: Javier Comes Fayos, Isabel RodríGuez Moreno, Sara Bressanutti, Marisol Lila, Angel Romero MartíNez, Luis Moya Albiol

Abstract:

Given the great impact of gender-based violence, a comprehensive approach to it seems essential. Consequently, research has focused on risk factors for violent behaviour, linking various psychosocial variables, as well as cognitive and neuropsychological deficits, to the aggressors. However, studies on affective processing are scarce, so the present study investigates possible emotional alterations in men convicted of gender violence. The participants were 51 aggressors serving sentences of less than two years who attended the CONTEXTO program, and 47 men with no history of violence. The samples did not differ in age, socioeconomic level, education, or alcohol and other substance consumption. Anger, alexithymia, and facial recognition of other people's emotions were assessed through the State-Trait Anger Expression Inventory (STAXI-2), the Toronto Alexithymia Scale (TAS-20), and Reading the Mind in the Eyes (REM), respectively. Men convicted of gender-based violence showed higher scores on the anger trait and temperament dimensions, as well as on the anger expression index. They also scored higher on alexithymia and on the emotion identification and expression subscales. In addition, they showed greater difficulty in the facial recognition of emotions, obtaining a lower score on the REM. These results point to difficulties in several affective areas in men convicted of gender violence. The deficits are reflected in greater difficulty in identifying and expressing emotions, in processing anger, and in recognizing the emotions of others; all of these difficulties have been related to the use of violent behavior. Consequently, it is essential to include emotional regulation in intervention programs for men convicted of gender-based violence.

Keywords: alexithymia, anger, emotional processing, emotional recognition, empathy, intimate partner violence

Procedia PDF Downloads 163
115 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA models that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation, and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; in low strain rate regions where such data are scarce, this is especially challenging. Integrating faults in PSHA requires converting the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled with a truncated model, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, at a rate defined by the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, at a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected threshold may occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture.
It is therefore essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur randomly in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard, and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast France) where the fault is assumed to have a low strain rate.
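The slip-rate-to-activity-rate conversion discussed above can be sketched, in its simplest moment-balancing form, for a single characteristic rupture. This is an illustrative simplification, not the SHERIFS algorithm; the shear modulus, fault dimensions, slip rate, and characteristic magnitude below are hypothetical.

```python
def seismic_moment(mw):
    """Scalar seismic moment M0 in N*m from moment magnitude (Hanks-Kanamori relation)."""
    return 10.0 ** (1.5 * mw + 9.05)

def characteristic_rate(mu_pa, area_m2, slip_rate_m_yr, mw_char):
    """Annual rate of characteristic earthquakes that balances the fault's
    moment accumulation rate: N = mu * A * s / M0(Mw_char)."""
    moment_rate = mu_pa * area_m2 * slip_rate_m_yr  # N*m accumulated per year
    return moment_rate / seismic_moment(mw_char)

if __name__ == "__main__":
    # hypothetical fault: 30 km x 15 km, 0.5 mm/yr slip rate, Mw 6.5 characteristic event
    rate = characteristic_rate(3.0e10, 30e3 * 15e3, 0.5e-3, 6.5)
    print(f"rate = {rate:.2e} /yr, recurrence ~ {1.0 / rate:.0f} yr")
```

SHERIFS generalizes this idea by distributing each fault's slip-rate budget over all permitted single-fault and fault-to-fault ruptures instead of a single characteristic magnitude.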

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 21
114 Rescaled Range Analysis of Seismic Time-Series: Example of the Recent Seismic Crisis of Alhoceima

Authors: Marina Benito-Parejo, Raul Perez-Lopez, Miguel Herraiz, Carolina Guardiola-Albert, Cesar Martinez

Abstract:

Persistency, long-term memory, and randomness are intrinsic properties of earthquake time series. Rescaled Range Analysis (R/S analysis) was introduced by Hurst in 1956 and modified by Mandelbrot and Wallis in 1964. This simple and elegant method determines the range of variation of one natural property (here, the seismic energy released) in a time interval. Despite its simplicity, there is complexity inherent in the property measured: the cumulative curve of the energy released in time follows the well-known fractal geometry of a devil's staircase. This geometry is used to determine the maximum and minimum values of the range, which is normalized by the standard deviation. The rescaled range obtained obeys a power law in time, and the exponent is the Hurst value. Depending on this value, time series can be classified as having long-term or short-term memory. Hence, an algorithm has been developed for applying R/S analysis to daily earthquake time series; a complete time distribution and local stationarity of the series are required. The interest of this analysis lies in its application to a complex seismic crisis in which different earthquakes occur in clusters within a short period. Therefore, the Hurst exponent has been obtained for the seismic crisis of Alhoceima (Mediterranean Sea) of January-March 2016, in which at least five medium-sized earthquakes were triggered. According to the Hurst exponent values obtained for each cluster, a different mechanical origin can be detected, corroborated by the focal mechanisms calculated by the official institutions. Therefore, this type of analysis not only allows a greater understanding of a seismic series but also makes it possible to discern different types of seismic origins.
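The R/S procedure described above (range of the cumulative deviate series, normalized by the standard deviation, with the Hurst exponent as the power-law slope) can be sketched as follows. This is a textbook implementation, not the authors' exact algorithm; the window sizes and test series are illustrative.

```python
import math

def rescaled_range(series):
    """R/S of one window: range of the cumulative deviate series over its std deviation."""
    n = len(series)
    mean = sum(series) / n
    devs = [x - mean for x in series]
    cum, c = [], 0.0
    for d in devs:            # cumulative deviate series (the "devil's staircase")
        c += d
        cum.append(c)
    r = max(cum) - min(cum)   # range between maximum and minimum
    s = math.sqrt(sum(d * d for d in devs) / n)
    return r / s if s > 0 else 0.0

def hurst_exponent(series, min_window=8):
    """Estimate H as the least-squares slope of log(R/S) versus log(window size)."""
    n = len(series)
    sizes, size = [], min_window
    while size <= n // 2:     # doubling window sizes
        sizes.append(size)
        size *= 2
    log_n, log_rs = [], []
    for size in sizes:
        chunks = [series[i:i + size] for i in range(0, n - size + 1, size)]
        rs = sum(rescaled_range(c) for c in chunks) / len(chunks)
        log_n.append(math.log(size))
        log_rs.append(math.log(rs))
    m = len(log_n)
    mx, my = sum(log_n) / m, sum(log_rs) / m
    return sum((x - mx) * (y - my) for x, y in zip(log_n, log_rs)) / \
           sum((x - mx) ** 2 for x in log_n)
```

Values of H near 0.5 indicate a memoryless (random) series, while values approaching 1 indicate persistent, long-term-memory behaviour, which is the basis for distinguishing clusters of different mechanical origin.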

Keywords: Alhoceima crisis, earthquake time series, Hurst exponent, rescaled range analysis

Procedia PDF Downloads 289
113 Facilitating Written Biology Assessment in Large-Enrollment Courses Using Machine Learning

Authors: Luanna B. Prevost, Kelli Carter, Margaurete Romero, Kirsti Martinez

Abstract:

Writing is an essential scientific practice, yet in several countries increasing university science class sizes limit the use of written assessments. Written assessments allow students to demonstrate their learning in their own words and permit faculty to evaluate students' understanding. However, the time and resources required to grade written assessments prohibit their use in large-enrollment science courses. This study examined the use of machine learning algorithms to automatically analyze student writing and provide timely feedback to faculty about students' writing in biology. Written responses to questions about matter and energy transformation were collected from large-enrollment undergraduate introductory biology classrooms. Responses were analyzed using the LightSide text mining and classification software. Cohen's kappa was used to measure agreement between the LightSide models and human raters. Predictive models achieved agreement with human coding of 0.7 Cohen's kappa or greater. The models captured that, when writing about matter-energy transformation at the ecosystem level, students focused primarily on the concepts of heat loss, recycling of matter, and conservation of matter and energy. Models were also produced to capture writing about processes such as decomposition and biochemical cycling. The models created in this study can be used to provide automatic feedback about students' understanding of these concepts to biology faculty who wish to use formative written assessments in large-enrollment biology classes but do not have the time or personnel for manual grading.
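Cohen's kappa, the agreement statistic used above, can be computed directly from paired labels. A minimal sketch; the example labels are invented, not data from the study:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement.
    1.0 = perfect agreement, 0.0 = chance-level. Undefined when p_e == 1."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected agreement by chance, from each rater's marginal label frequencies
    p_e = sum((rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels)
    return (p_o - p_e) / (1.0 - p_e)
```

In studies of this kind, a kappa of 0.7 or higher between a model and human raters is commonly taken as acceptable agreement for automated coding.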

Keywords: machine learning, written assessment, biology education, text mining

Procedia PDF Downloads 248
112 Resistance of Haemonchus spp. to Albendazole, Fenbendazole and Levamisole in 4 Goat Farms of Antioquia, Colombia

Authors: Jose D. Zapata-Torres, Esteban Naranjo-Gutiérrez, Angela M. Martínez-Valencia, Jenny J. Chaparro-Gutiérrez, David Villar-Argaiz

Abstract:

Drug resistance has been reported in every livestock host and to every anthelmintic class. In some regions of the world, the extremely high prevalence of multi-drug resistance in nematodes of sheep and goats threatens the viability of small-ruminant industries. In the region of Antioquia, Colombia, no reports of nematode resistance have been documented, owing to a lack of veterinary diagnostic laboratories. The objective of this study was to evaluate the efficacy of albendazole, fenbendazole, and levamisole in controlling gastrointestinal nematodes on goat farms of Antioquia using fecal egg count reduction tests. A total of 139 crossbreed goats from four separate farms were sampled for feces prior to, and 14 days following, anthelmintic treatment. Individual fecal egg counts were performed using the modified three-chamber McMaster technique. The anthelmintics administered at day 0 were albendazole (farm 1, n=63), fenbendazole (farm 2, n=20), and levamisole (farms 3 and 4, n=37 and n=19). Larval cultures were used to identify the genus of nematodes using Baermann's technique and the morphological keys for identification of L3 in small ruminants. There was no difference in fecal egg counts between days 0 and 14, with means (±SD) of 1681.5 ± 2121.5 and 1715.12 ± 1895.4 epg (eggs per gram), respectively. The egg count reductions for each anthelmintic and farm were 25.86% for albendazole (farm 1), 0% for fenbendazole (farm 2), and 0% (farm 3) and 5.5% (farm 4) for levamisole. The genus identified was predominantly Haemonchus spp., at 70.27% and 82.81% of samples from days 0 and 14, respectively. These results provide evidence of a state of total resistance to three common anthelmintics. Further research is needed to design integrated management programs to control nematodes in small ruminants in Colombia.
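The percentage reductions reported above follow the standard fecal egg count reduction formula, 100 × (1 − post-treatment mean / pre-treatment mean), with negative values (egg counts that rose after treatment) reported as 0%. A minimal sketch; the 95% resistance cutoff is the commonly used WAAVP guideline threshold, not a value stated in this abstract:

```python
def fecrt_percent(pre_mean_epg, post_mean_epg):
    """Fecal egg count reduction (%) from group mean epg before and after treatment.
    Negative reductions (counts that increased) are reported as 0%."""
    reduction = 100.0 * (1.0 - post_mean_epg / pre_mean_epg)
    return max(reduction, 0.0)

def is_resistant(reduction_percent, cutoff=95.0):
    """A reduction below the cutoff (WAAVP guideline: 95%) suggests resistance."""
    return reduction_percent < cutoff
```

Applied to the overall study means (1681.5 epg at day 0, 1715.12 epg at day 14), the reduction is 0%, consistent with the resistance reported for each farm.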

Keywords: anthelmintics, goat, haemonchus, resistance

Procedia PDF Downloads 495
111 TutorBot+: Automatic Programming Assistant with Positive Feedback based on LLMs

Authors: Claudia Martínez-Araneda, Mariella Gutiérrez, Pedro Gómez, Diego Maldonado, Alejandra Segura, Christian Vidal-Castro

Abstract:

The purpose of this document is to showcase preliminary work on developing an EduChatbot-type tool and measuring the effects of its use in providing effective feedback to students in programming courses. This bot, hereinafter referred to as tutorBot+, was built on top of chatGPT and is tasked with assisting and delivering timely positive feedback to students in the field of computer science at the Universidad Católica de Concepción. The proposed working method consists of four stages: (1) immersion in the domain of Large Language Models (LLMs), (2) development and integration of the tutorBot+ prototype, (3) experiment design, and (4) intervention. The first stage involves a literature review on the use of artificial intelligence in education and the evaluation of intelligent tutors, as well as research on types of feedback for learning and the chatGPT domain. The second stage encompasses the development of tutorBot+, and the final stage involves a quasi-experimental study with students from the Programming and Database labs, where the learning outcome is the development of computational thinking skills, enabling the use and measurement of the tool's effects. The preliminary results of this work are promising: a functional chatbot prototype has been developed, in both conversational and non-conversational versions, integrated into an open-source online judge and programming contest platform. The possibility of generating a custom model, based on a pre-trained one and tailored to the programming domain, is also being explored, together with the integration of the tool and the design of the experiment to measure its utility.

Keywords: assessment, chatGPT, learning strategies, LLMs, timely feedback

Procedia PDF Downloads 33
110 Food Composition Tables Used as an Instrument to Estimate the Nutrient Ingest in Ecuador

Authors: Ortiz M. Rocío, Rocha G. Karina, Domenech A. Gloria

Abstract:

There are several tools to assess the nutritional status of a population. A main instrument commonly used to build those tools is the food composition table (FCT). Despite the importance of FCTs, there are many sources of error and variability factors that can arise in building those tables and can lead to an under- or overestimation of the nutrient intake of a population. This work identified the different food composition tables used as instruments to estimate nutrient intake in Ecuador. The data for choosing FCTs were collected through key informants (self-completed questionnaires), supplemented with institutional web research. A questionnaire with general variables (origin, year of edition, etc.) and methodological variables (method of elaboration, information in the table, etc.) was applied to the identified FCTs. Those variables were defined based on an extensive literature review, and a descriptive content analysis was performed. Ten printed tables and three databases were reported, all of which were indistinctly treated as food composition tables. We managed to obtain information from 69% of the references; several informants referred to printed documents that were not accessible, and internet searches were not successful. Of the 9 final tables, 8 are from Latin America, and 5 of these were constructed by the indirect method (compilation of already published data), having as their main source of information a database from the United States Department of Agriculture (USDA). One FCT was constructed by the direct method (bromatological analysis) and originates in Ecuador. All of the tables made a clear distinction between each food and its method of cooking, 88% of the FCTs expressed nutrient values per 100 g of edible portion, 77% gave precise additional information about the use of the table, and 55% presented all the macro- and micronutrients in detail. The most complete FCTs were those of INCAP (Central America) and Composition of Foods (Mexico).
The most frequently cited table was the Ecuadorian food composition table of 1965 (70%). The indirect method was used for most tables in this study. However, this method has the disadvantage of generating less reliable food composition tables, because foods show variations in composition; a database cannot accurately predict the composition of any isolated sample of a food product. In conclusion, weighing the pros and cons, and despite its being an FCT elaborated by the indirect method, it is considered appropriate to work with the FCT of INCAP Central America, given its proximity to our country and a food item list very similar to ours. It is also imperative to keep as a reference the Ecuadorian food composition table, which, although not updated, was constructed using the direct method with Ecuadorian foods. Hence, both tables will be used to elaborate a questionnaire for assessing the food consumption of the Ecuadorian population; in case of disparate values, only the INCAP values will be taken, because it is an updated table.

Keywords: Ecuadorian food composition tables, FCT elaborated by direct method, ingest of nutrients of Ecuadorians, Latin America food composition tables

Procedia PDF Downloads 398
109 Applying Resilience Engineering to improve Safety Management in a Construction Site: Design and Validation of a Questionnaire

Authors: M. C. Pardo-Ferreira, J. C. Rubio-Romero, M. Martínez-Rojas

Abstract:

Resilience Engineering is a new safety management paradigm that proposes changing the way safety is managed to focus on the things that go well instead of the things that go wrong. Many complex and high-risk sectors, such as air traffic control, health care, nuclear power plants, railways, and emergency services, have applied this new vision of safety and obtained very positive results. In the construction sector, safety management continues to be a problem, as indicated by the statistics on occupational injuries worldwide; it is therefore important to improve safety management in this sector, and for this reason we propose applying Resilience Engineering to it. The Construction Phase Health and Safety Plan emerges as a key element for the planning of safety management. One of the key tools of Resilience Engineering is the Resilience Assessment Grid, which measures the four abilities essential for resilient performance (respond, monitor, learn, and anticipate). The purpose of this paper is to develop a questionnaire based on the Resilience Assessment Grid, specifically on the ability to learn, to assess whether a Construction Phase Health and Safety Plan helps companies on a construction site to implement this ability. The research process was divided into four stages: (i) initial design of a questionnaire, (ii) validation of the content of the questionnaire, (iii) redesign of the questionnaire, and (iv) application of the Delphi method. The questionnaire obtained could be used as a tool to help construction companies evolve from Safety-I to Safety-II. In this way, companies could begin to develop the ability to learn, which will serve as a basis for the development of the other abilities necessary for resilient performance. The next steps in this research are intended to develop further questions for evaluating the remaining abilities for resilient performance: responding, monitoring, and anticipating.

Keywords: resilience engineering, construction sector, resilience assessment grid, construction phase health and safety plan

Procedia PDF Downloads 110
108 Bituminous Geomembranes: Sustainable Products for Road Construction and Maintenance

Authors: Ines Antunes, Andrea Massari, Concetta Bartucca

Abstract:

The role of greenhouse gases (GHG) in the atmosphere has been well known since the 19th century; however, researchers began to relate them to climate change only in the second half of the following century. From that moment, scientists started to correlate the presence of GHG such as CO₂ with global warming. This has raised the awareness not only of experts in the field but also of public opinion, which is becoming more and more sensitive to environmental pollution and sustainability issues. Nowadays, the reduction of GHG emissions is one of the principal objectives of EU nations: the target is an 80% reduction of emissions by 2050, on the way to the important goal of carbon neutrality. The road sector is responsible for a significant share of those emissions (about 20%). Most of this is due to traffic, but a substantial contribution also comes, directly or indirectly, from road construction and maintenance. The choice of raw materials, the reuse of post-consumer plastic, and cleverer road design can all contribute to reducing the carbon footprint. Bituminous membranes can be successfully used as reinforcement systems in asphalt layers to improve road pavement performance against cracking. Composite materials coupling membranes with grids and/or fabrics combine the improved tensile properties of the reinforcement with the stress-absorbing and waterproofing effects of membranes. Polyglass, through its brand dedicated to road construction and maintenance, Polystrada, has done more than this: the company's target was not only to focus sustainability on the final application but also to implement a greener mentality from cradle to grave. Starting from production, Polyglass has made important improvements aimed at increasing efficiency and minimizing waste.
The installation of a trigeneration plant, the use of selected production scraps inside the products, and the reduction of emissions into the environment are among the company's main efforts to reduce impact during final product build-up. Moreover, installing Polystrada products brings a significant improvement in road lifetime. This has an impact not only on the number of maintenance or renewal operations needed (build less) but also on the traffic density caused by works and road deviations during operations. At the end of a road's life, Polystrada products can be 100% recycled and milled with the classical systems in use, without changing normal maintenance procedures. In this work, all these contributions were quantified in terms of CO₂ emissions through a life cycle assessment (LCA). The data obtained were compared with a classical system and with standard membrane production. The comparison shows that the use of Polyglass products for road maintenance and construction gives a significant reduction of emissions when the membrane is installed under the road wearing course.

Keywords: CO₂ emission, LCA, maintenance, sustainability

Procedia PDF Downloads 35
107 Predictors of Survival of Therapeutic Hypothermia Based on Analysis of a Consecutive American Inner City Population over 4 Years

Authors: Jorge Martinez, Brandon Roberts, Holly Payton Toca

Abstract:

Background: Therapeutic hypothermia (TH) is the international standard of care for comatose patients after cardiac arrest, but criticism focuses on poor outcomes. We sought to develop criteria to identify American urban patients more likely to benefit from TH. Methods: A retrospective chart review of 107 consecutive adults undergoing TH in downtown New Orleans from 2010-2014 yielded records for 99 patients, with all 44 survivors or their families contacted for up to four years. Results: 69 males and 38 females with a mean age of 60.2 showed 63 dead (58%) and 44 survivors (42%). The presenting cardiac rhythm was divided into shockable (pulseless ventricular tachycardia, ventricular fibrillation) and non-shockable (pulseless electrical activity, asystole). Presenting in shockable rhythms with ROSC <20 minutes were 21 patients, with 15 (71%) survivors (p=.001). Time >20 minutes until ROSC in shockable rhythms occurred in 5 patients, with 3 survivors (78%, p=0.001). Presenting in non-shockable rhythms with ROSC <20 minutes were 54 patients, with 18 survivors (33%, p=.001). ROSC >20 minutes in non-shockable rhythms occurred in 19 patients, with 2 survivors (8%, p=.001). Survivors of shockable rhythms showed 19 (100%) living post-TH; 15 survivors (79%, n=19, p=.001) had a CPC score of 1 or 2, with 4 survivors (21%, n=19) having a CPC score of 3. A total of 25 survived a non-shockable rhythm. Acute survival of patients with non-shockable rhythms showed 18 expired <72 hours (72%, n=25), with long-term survival of 4 patients (5%, n=74) and CPC scores of 1 or 2 (p=.001). Interestingly, patients with time to ROSC <20 minutes who exhibited more than one loss of sustained ROSC showed 100% mortality (p=.001). Patients presenting with shockable rhythms and >20 minutes to ROSC had an overall survival of 70% (p=.001), but those undergoing >3 cardiac rhythm changes had 100% mortality (p=.001).
Conclusion: Patients presenting with shockable rhythms who underwent TH had an overall acute survival of 70%, followed by long-term survival of 100% after 4 years. In contrast, patients presenting with non-shockable rhythms had a long-term survival of 5%. TH is not recommended for patients presenting with a non-shockable rhythm who require more than 20 minutes to achieve ROSC.

Keywords: cardiac rhythm changes, Pulseless Electrical Activity (PEA), Therapeutic Hypothermia (TH)

Procedia PDF Downloads 184
106 Strategic Entrepreneurship: Model Proposal for Post-Troika Sustainable Cultural Organizations

Authors: Maria Inês Pinho

Abstract:

Recent literature on Cultural Management (also called strategic management for cultural organizations) systematically seeks models that allow such organizations to adapt to the constant change occurring in contemporary societies. In the last decade, the world, and Europe in particular, experienced a serious financial crisis that triggered defensive mechanisms, both towards balancing public accounts and towards the quiet loss of each nation's democratic and cultural values. In the first case the Troika emerged, leading to deep cuts in funding for Culture that profoundly affected these organizations; in the second, ordinary citizens are seen fighting to keep cultural facilities open. Despite this, cultural managers argue that there is no single formula capable of solving the need to adapt to change; rather, it is up to this agent to know the existing scientific models and adapt them as well as possible to the reality of the institution he or she coordinates. These actions, as a rule, are concerned with performance vis-à-vis external audiences or with the financial sustainability of cultural organizations. They forget, therefore, that none of this machinery can function without the internal public, the organization's human resources. The employees of a cultural organization must therefore have an entrepreneurial posture: they must be intrapreneurial. This paper intends to break with this mode of action and lead the cultural manager to understand that his or her role should be to create value for society through good organizational performance. This is only possible with a posture of strategic entrepreneurship, in other words, with a link between Cultural Management, Cultural Entrepreneurship, and Cultural Intrapreneurship.
To test this assumption, a case study methodology was used, focused on a symbol of the European Capital of Culture (Casa da Música), together with qualitative and quantitative techniques. The qualitative techniques included in-depth interviews with managers, founders, and patrons, and focus groups with members of the public with and without experience in managing cultural facilities. The quantitative techniques involved a questionnaire administered to the middle management and employees of Casa da Música. After triangulating the data, it was shown that the contemporary management of cultural organizations must implement, among its practices, the concept of Strategic Entrepreneurship and its variables. The topics that characterize the notion of Cultural Intrapreneurship (job satisfaction, quality of organizational performance, leadership, and employee engagement and autonomy) also emerged. The findings show that, to be sustainable, a cultural organization should meet the concerns of both the external and the internal forum; in other words, it should have an attitude of citizenship towards its communities, visible in social responsibility and participatory management, which is only possible with the implementation of the concept of Strategic Entrepreneurship and its variable of Cultural Intrapreneurship.

Keywords: cultural entrepreneurship, cultural intrapreneurship, cultural organizations, strategic management

Procedia PDF Downloads 151
105 Characterization of Chest Pain in Patients Presenting to the Emergency Department of a High-Complexity Health Institution during 2014-2015, Medellín, Colombia

Authors: Jorge Iván Bañol-Betancur, Lina María Martínez-Sánchez, María de los Ángeles Rodríguez-Gázquez, Estefanía Bahamonde-Olaya, Ana María Gutiérrez-Tamayo, Laura Isabel Jaramillo-Jaramillo, Camilo Ruiz-Mejía, Natalia Morales-Quintero

Abstract:

Acute chest pain is a distressing sensation between the diaphragm and the base of the neck, and it represents a diagnostic challenge for any physician in the emergency department. Objective: To establish the main clinical and epidemiological characteristics of patients who present with chest pain to the emergency department of a private clinic in the city of Medellín, during 2014-2015. Methods: Cross-sectional retrospective observational study. The population and sample were patients who consulted for chest pain in the emergency department and met the eligibility criteria. The information was analyzed in SPSS software v. 21; qualitative variables were described through relative frequencies, and quantitative variables through mean and standard deviation or medians, according to their distribution in the study population. Results: A total of 231 patients were evaluated; the mean age was 49.5 ± 19.9 years, and 56.7% were female. The most frequent pathological antecedents were hypertension 35.5%, diabetes 10.8%, dyslipidemia 10.4%, and coronary disease 5.2%. Regarding pain features, in 40.3% of the patients the pain began abruptly, in 38.2% it had a precordial location, in 20% of cases physical activity acted as a trigger, and in 60.6% it was oppressive. Costochondritis was the most common cause of chest pain among patients with an established etiologic diagnosis, representing 18.2% of cases. Conclusions: Although the clinical features of pain reported coincide with the clinical presentation of an acute coronary syndrome, the most common cause of chest pain in the study population was instead costochondritis, indicating that it is a differential diagnosis to consider in the approach to patients with acute chest pain.
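The descriptive approach outlined above (relative frequencies for qualitative variables, mean and standard deviation for quantitative ones) can be sketched in a few lines of Python. The records below are invented for illustration only; the study's actual data were analyzed in SPSS.

```python
from statistics import mean, stdev
from collections import Counter

# Illustrative records only -- not the study's dataset.
ages = [62, 45, 30, 71, 55, 48, 39, 66]
pain_onset = ["abrupt", "gradual", "abrupt", "gradual",
              "abrupt", "gradual", "gradual", "gradual"]

# Quantitative variable: mean and sample standard deviation
# (as reported for age, 49.5 +/- 19.9 years).
age_mean = mean(ages)
age_sd = stdev(ages)

# Qualitative variable: relative frequencies
# (as reported for onset, location, and trigger).
counts = Counter(pain_onset)
freqs = {category: n / len(pain_onset) for category, n in counts.items()}

print(f"age: {age_mean:.1f} \u00b1 {age_sd:.1f}")
print(freqs)
```

For skewed quantitative variables the abstract reports medians instead, which `statistics.median` would cover in the same way.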

Keywords: acute coronary syndrome, chest pain, epidemiology, costochondritis

Procedia PDF Downloads 309
104 New Bio-Strategies for Ochratoxin A Detoxification Using Lactic Acid Bacteria

Authors: José Maria, Vânia Laranjo, Luís Abrunhosa, António Inês

Abstract:

The occurrence of mycotoxigenic moulds such as Aspergillus, Penicillium and Fusarium in food and feed has an important impact on public health through the appearance of acute and chronic mycotoxicoses in humans and animals, which are more severe in developing countries due to lack of food security, poverty, and malnutrition. Mould contamination also constitutes a major economic problem due to the loss of crop production. A great variety of filamentous fungi are able to produce highly toxic secondary metabolites known as mycotoxins. Most mycotoxins are carcinogenic, mutagenic, neurotoxic, and immunosuppressive, ochratoxin A (OTA) being one of the most important. OTA is toxic to animals and humans, mainly due to its nephrotoxic properties. Several approaches have been developed for the decontamination of mycotoxins in foods, such as prevention of contamination, biodegradation of mycotoxin-containing food and feed with microorganisms or enzymes, and inhibition of absorption of the mycotoxin content of consumed food in the digestive tract. A group of Gram-positive bacteria known as lactic acid bacteria (LAB) are able to release molecules that can influence mould growth, improving the shelf life of many fermented products and reducing health risks due to exposure to mycotoxins. Some LAB are capable of mycotoxin detoxification. Recently, our group was the first to describe the ability of LAB strains to biodegrade OTA, more specifically Pediococcus parvulus strains isolated from Douro wines. The pathway of this biodegradation had been identified previously in other microorganisms: OTA can be degraded through hydrolysis of the amide bond that links the L-β-phenylalanine molecule to ochratoxin alpha (OTα), a non-toxic compound. It is known that peptidases from different origins can mediate this hydrolysis reaction, such as carboxypeptidase A, an enzyme from the bovine pancreas, a commercial lipase, and several commercial proteases.
We therefore wanted to gain a better understanding of this OTA degradation process when LAB are involved and to identify which molecules are present in it. To this end, we used several bioinformatics tools (BLAST, ClustalX2, CLC Sequence Viewer 7, FinchTV), designed specific primers, and performed gene-specific PCR. The template DNA came from the LAB strains of our previous work and from other LAB strains isolated from elderberry fruit, silage, milk, and sausages. Through the bioinformatics tools, it was possible to identify several proteins belonging to the carboxypeptidase family that participate in the process of OTA degradation, such as serine-type D-Ala-D-Ala carboxypeptidase and membrane carboxypeptidase. In conclusion, this work identified carboxypeptidases as among the molecules present in the OTA degradation process when LAB are involved.
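The primer-based screening described above can be illustrated with a toy sketch: checking where a candidate primer binds a gene fragment and estimating its melting temperature with the classic Wallace rule (Tm = 2(A+T) + 4(G+C)). All sequences below are invented for illustration; the study's actual primers and LAB gene sequences are not reproduced here.

```python
# Toy sketch of in-silico primer checks: find exact primer binding sites on
# either strand of a candidate gene fragment and estimate the primer's
# melting temperature with the Wallace rule. Sequences are made up.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def wallace_tm(primer: str) -> int:
    """Wallace-rule melting temperature estimate, in degrees C."""
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

def binding_sites(template: str, primer: str):
    """Exact-match positions of the primer on the (+) strand and on the
    reverse complement ('-' strand); minus-strand positions are indices
    in the reverse-complemented sequence."""
    hits = []
    for strand_seq, strand in ((template, "+"),
                               (reverse_complement(template), "-")):
        start = strand_seq.find(primer)
        while start != -1:
            hits.append((strand, start))
            start = strand_seq.find(primer, start + 1)
    return hits

gene_fragment = "ATGGCTAAGGATCCGTTCACTGGTGACGCTTTAAAGCCTGAAGGT"
forward_primer = "GGATCCGTTCACTGG"

print(wallace_tm(forward_primer))
print(binding_sites(gene_fragment, forward_primer))
```

In practice such checks (plus BLAST searches against sequence databases) guide primer choice before running the gene-specific PCR on LAB template DNA.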

Keywords: carboxypeptidase, lactic acid bacteria, mycotoxins, ochratoxin A

Procedia PDF Downloads 429