Search results for: Zita Lopez
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 269

59 DEMs: A Multivariate Comparison Approach

Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo

Abstract:

The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and aspect. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes and aspects. This is a multivariate approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is therefore considered more holistic. The use of the Kolmogorov-Smirnov test is proposed because of its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g. the normal distribution). In addition, its application to the multivariate case is carried out jointly, by means of a single test on the convolution of the distribution functions of the variables considered, which avoids corrections such as Bonferroni's when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02, with a resolution of 2x2 meters, and DEM05, with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is taken as the reference and DEM05 as the product to be evaluated. In addition, the derived slope and aspect models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g. 5%). Once calibrated, the same process can be applied to compare the similarity of different DEM datasets (e.g. DEM05 versus DEM02). In summary, an innovative alternative for the comparison of DEM datasets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This approach could be extended to other DEM features of interest (e.g. curvature) and to more than three variables.
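The core distribution-function comparison can be sketched with a univariate two-sample Kolmogorov-Smirnov test (the joint multivariate test on the convolution of the distribution functions described in the abstract is more elaborate); the elevation samples, sizes and random seed below are synthetic stand-ins for DEM02/DEM05 cell values, not real data:

```python
import numpy as np

def ks_2sample_stat(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the two empirical distribution functions."""
    grid = np.sort(np.concatenate([x, y]))
    cdf_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return float(np.max(np.abs(cdf_x - cdf_y)))

rng = np.random.default_rng(0)
# Synthetic elevation samples standing in for DEM02 (reference) and
# DEM05 (product) raster cells; real values would come from the DEMs.
elev_ref = rng.normal(650.0, 120.0, 4000)
elev_prod = elev_ref + rng.normal(0.0, 2.0, 4000)  # small product error

d = ks_2sample_stat(elev_ref, elev_prod)
# Large-sample 5% critical value for the two-sample KS test
n = m = 4000
d_crit = 1.358 * np.sqrt((n + m) / (n * m))
print(f"D = {d:.4f}, D_crit(5%) = {d_crit:.4f}, similar: {d < d_crit}")
```

Comparing full empirical distribution functions this way, rather than a single summary such as the RMSE, is what the abstract means by a more holistic evaluation.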

Keywords: data quality, DEM, kolmogorov-smirnov test, multivariate DEM comparison

Procedia PDF Downloads 115
58 Quercetin Nanoparticles and Their Hypoglycemic Effect in a CD1 Mouse Model with Type 2 Diabetes Induced by Streptozotocin and a High-Fat and High-Sugar Diet

Authors: Adriana Garcia-Gurrola, Carlos Adrian Peña Natividad, Ana Laura Martinez Martinez, Alberto Abraham Escobar Puentes, Estefania Ochoa Ruiz, Aracely Serrano Medina, Abraham Wall Medrano, Simon Yobanny Reyes Lopez

Abstract:

Type 2 diabetes mellitus (T2DM) is a metabolic disease characterized by elevated blood glucose levels. Quercetin is a natural flavonoid with a hypoglycemic effect, but reported data are inconsistent, mainly because of the structural instability and low solubility of quercetin. Nanoencapsulation is a distinct strategy to overcome these intrinsic limitations. Therefore, this work aims to develop a quercetin nano-formulation based on biopolymeric starch nanoparticles to enhance the release and hypoglycemic effect of quercetin in a T2DM-induced mouse model. Starch-quercetin nanoparticles were synthesized using high-intensity ultrasonication, and their structural and colloidal properties were determined by FTIR and DLS. For the in vivo studies, CD1 male mice (n=25) were divided into five groups (n=5). T2DM was induced by a high-fat, high-sugar diet for 32 weeks plus streptozotocin injection. Group 1 consisted of healthy mice fed a normal diet with water ad libitum; Group 2, diabetic mice treated with saline solution; Group 3, diabetic mice treated with glibenclamide; Group 4, diabetic mice treated with empty nanoparticles; and Group 5, diabetic mice treated with quercetin nanoparticles. Quercetin nanoparticles had a hydrodynamic size of 232 ± 88.45 nm, a PDI of 0.310 ± 0.04 and a zeta potential of -4 ± 0.85 mV. The encapsulation efficiency was 58 ± 3.33%. No significant differences (p > 0.05) were observed in biochemical parameters (lipids, insulin, and C-peptide). Groups 3 and 5 showed a similar hypoglycemic effect, but quercetin nanoparticles showed a longer-lasting effect. Histopathological studies revealed that the T2DM groups showed degenerated, fatty liver tissue; however, the group treated with quercetin nanoparticles showed liver tissue similar to that of the healthy group. These results demonstrate that quercetin nano-formulations based on starch nanoparticles are an effective alternative with hypoglycemic effects.

Keywords: quercetin, type 2 diabetes mellitus, in vivo study, nanoparticles

Procedia PDF Downloads 39
57 Counting Fishes in Aquaculture Ponds: Application of Imaging Sonars

Authors: Juan C. Gutierrez-Estrada, Inmaculada Pulido-Calvo, Ignacio De La Rosa, Antonio Peregrin, Fernando Gomez-Bravo, Samuel Lopez-Dominguez, Alejandro Garrocho-Cruz, Jairo Castro-Gutierrez

Abstract:

Semi-intensive aquaculture in traditional earthen ponds is the main rearing system in southern Spain. These rearing systems account for approximately two thirds of aquatic production in this area and have made a significant contribution to the regional economy in recent years. In this type of rearing system, a crucial aspect is the correct quantification and control of fish abundance in the ponds, because the fish farmer knows how many fish he puts into the ponds but not how many he will harvest at the end of the rearing period. This is a consequence of mortality from different causes, such as pathogenic agents (parasites, viruses and bacteria) and other factors such as predation by fish-eating birds and poaching. Tracking fish abundance in these installations is very difficult because the ponds usually occupy a large area of land and the management of the water flow is not automated. Therefore, there is a very high degree of uncertainty about fish abundance, which strongly hinders the management and planning of sales. A novel and non-invasive procedure for counting fish in the ponds is by means of imaging sonars, particularly fixed systems and/or systems mounted on aquatic vehicles such as Remotely Operated Vehicles (ROVs). In this work, a method based on census-station procedures is proposed to evaluate the accuracy of fish abundance estimation using images obtained from multibeam sonars. The results indicate that it is possible to obtain a realistic approximation of the number of fish, their sizes and, therefore, the biomass contained in the ponds. This research is part of the KTTSeaDrones Project ('Conocimiento y transferencia de tecnología sobre vehículos aéreos y acuáticos para el desarrollo transfronterizo de ciencias marinas y pesqueras 0622-KTTSEADRONES-5-E'), financed by the European Regional Development Fund (ERDF) through the Interreg V-A Spain-Portugal Programme (POCTEP) 2014-2020.
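The census-station idea can be sketched as a simple density extrapolation: average the fish counted per insonified station volume, then scale to the pond volume. All the numbers below are made up for illustration, and the paper's actual procedure is more involved:

```python
# Hypothetical counts of fish detected in each sonar census-station image
station_counts = [18, 25, 21, 30, 16]
station_volume_m3 = 40.0     # assumed insonified volume per station
pond_volume_m3 = 12_000.0    # assumed total pond volume

# Mean density across stations, extrapolated to the whole pond
density = sum(station_counts) / (len(station_counts) * station_volume_m3)
estimated_fish = density * pond_volume_m3

# Biomass follows from an assumed mean individual mass
mean_mass_kg = 0.350
estimated_biomass_kg = estimated_fish * mean_mass_kg
print(f"~{estimated_fish:.0f} fish, ~{estimated_biomass_kg:.0f} kg biomass")
# -> ~6600 fish, ~2310 kg biomass
```

The accuracy evaluation in the paper amounts to comparing such extrapolated estimates against known stocking numbers.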

Keywords: census station procedure, fish biomass, semi-intensive aquaculture, multibeam sonars

Procedia PDF Downloads 230
56 Effect of Cooking Process on the Antioxidant Activity of Different Variants of Tomato-Based Sofrito

Authors: Ana Beltran Sanahuja, A. Valdés García, Saray Lopez De Pablo Gallego, Maria Soledad Prats Moya

Abstract:

Tomato consumption has greatly increased worldwide in recent years, mostly due to a growing demand for products like sofrito. Regular consumption of tomato-based products has been consistently associated with a reduction in the incidence of chronic degenerative diseases. Sofrito is a homemade tomato sauce, typical of the Mediterranean area, whose main ingredients are tomato, onion, garlic and olive oil. There are also variations of sofrito made by adding other spices, which contribute not only color, flavor and aroma but also medicinal properties due to their antioxidant power. This protective effect has mainly been attributed to the predominant bioactive compounds present in sofrito, such as lycopene and other carotenoids, as well as more than 40 different polyphenols. Regarding the cooking process, it is known that cooking can modify the properties and the availability of nutrients in sofrito; however, there is not enough information on this issue. For this reason, the aim of the present work is to evaluate the effect of cooking on the antioxidant capacity of different variants of tomato-based sofrito combined with other spices, through the analysis of total phenolic content (TPC), and to evaluate the antioxidant capacity using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free-radical method. Based on the results obtained, it can be confirmed that the basic sofrito, composed of tomato, onion, garlic and olive oil, and the sofrito with 1 g of added rosemary have the highest phenolic content, showing greater antioxidant power than an industrial sofrito and than the other variants with added thyme or higher amounts of garlic. Moreover, it has been observed that tomato-based sofrito can be cooked for up to 60 minutes, since the cooking process increases the bioavailability of the carotenoids by breaking the cell walls, which weakens the binding forces between the carotenoids and increases the levels of antioxidants present, as confirmed by both the TPC and DPPH methods. It can be concluded that the cooking process of different variants of tomato-based sofrito, including spices, can improve the antioxidant capacity. The synergistic effects of different antioxidants may have a greater protective effect, also increasing the digestibility of proteins. In addition, antioxidants help to deactivate the free radicals involved in conditions such as atherosclerosis, aging, immune suppression, cancer, and diabetes.

Keywords: antioxidants, cooking process, phenols, sofrito

Procedia PDF Downloads 142
55 Anaerobic Co-digestion in Two-Phase TPAD System of Sewage Sludge and Fish Waste

Authors: Rocio López, Miriam Tena, Montserrat Pérez, Rosario Solera

Abstract:

The biotransformation of organic waste into biogas is considered an interesting alternative for the production of clean energy from renewable sources, reducing the volume and organic content of the waste. Anaerobic digestion is considered one of the most efficient technologies to transform waste into fertilizer and biogas in order to obtain electrical energy or biofuel within the concept of the circular economy. Currently, three types of anaerobic processes have been developed on a commercial scale: (1) single-stage processes, where sludge bioconversion is completed in a single chamber; (2) two-stage processes, where the acidogenic and methanogenic stages are separated into two chambers; and (3) temperature-phased anaerobic digestion (TPAD), which combines a thermophilic pretreatment unit with a subsequent mesophilic anaerobic digestion. Two-stage processes can provide hydrogen and methane with easier control of the first- and second-stage conditions, producing higher total energy recovery and substrate degradation than single-stage processes. Co-digestion, in turn, is the simultaneous anaerobic digestion of a mixture of two or more substrates. The technology is similar to anaerobic digestion but is a more attractive option, as it produces increased methane yields due to the positive synergism of the mixtures in the digestion medium, thus increasing the economic viability of biogas plants. The present study focuses on energy recovery by anaerobic co-digestion of sewage sludge and waste from the aquaculture-fishing sector. The valorization is approached through the application of a temperature-phased process, or TPAD (Temperature-Phased Anaerobic Digestion) technology, combined with a two-phase separation of microorganisms. The selected process thus allows the development of a thermophilic acidogenic phase followed by a mesophilic methanogenic phase, to obtain hydrogen (H₂) in the first stage and methane (CH₄) in the second. The combination of these technologies makes it possible to unite the advantages of each of these anaerobic digestion processes. To achieve these objectives, a sequential study has been carried out in which the biochemical hydrogen potential (BHP) is tested, followed by a BMP test, which makes it possible to check the feasibility of the two-stage process. The best results obtained were high total and soluble COD removal efficiencies (59.8% and 82.67%, respectively), as well as production rates of 12 L H₂/kg VS added and 28.76 L CH₄/kg VS added for the TPAD system.

Keywords: anaerobic co-digestion, TPAD, two-phase, BHP, BMP, sewage sludge, fish waste

Procedia PDF Downloads 156
54 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English

Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista

Abstract:

The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English and to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text, with word-for-word Old English-English comparison, that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available and the average amount of corpus annotation is low. Against this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of the lexical databases that are relevant for these tasks. Most of the information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information used for the annotation of the lemmas of the corpus, covering morphological and semantic aspects as well as references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of the paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented in Filemaker software and is based on a concordance of, and an index to, the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words. In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms on an automatic basis, by searching the index and the concordance for prefixes, stems and inflectional endings. The conclusions of this presentation insist on the limits of the automatisation of dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
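The affix-search step can be sketched as follows; the lemma set, ending inventory and function are illustrative toys, not data or code from Nerthus or Norna:

```python
# Toy dictionary-based lemmatiser sketch. The lemma list and the
# inventory of inflectional endings are tiny, hypothetical samples.
LEMMAS = {"cyning", "lufian", "stan"}           # known Old English lemmas
ENDINGS = ["as", "es", "e", "a", "ode", "ian"]  # sample inflectional endings

def lemmatise(form):
    """Assign a lemma by exact match, then by stripping endings
    (longest first) and matching the remaining stem against lemmas."""
    if form in LEMMAS:
        return form
    for ending in sorted(ENDINGS, key=len, reverse=True):
        if form.endswith(ending):
            stem = form[: -len(ending)]
            for lemma in LEMMAS:
                if stem and lemma.startswith(stem):
                    return lemma
    return None  # unresolved form, left for manual revision

print(lemmatise("cyningas"))  # -> cyning
print(lemmatise("lufode"))    # -> lufian
```

Forms that return `None` correspond to the roughly 20% of textual forms the abstract says still escape automatic lemma assignment.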

Keywords: corpus linguistics, historical linguistics, old English, parallel corpus

Procedia PDF Downloads 213
53 Wastewater Treatment in the Abrasives Industry via Fenton and Photo-Fenton Oxidation Processes: A Case Study from Peru

Authors: Hernan Arturo Blas López, Gustavo Henndel Lopes, Antonio Carlos Silva Costa Teixeira, Carmen Elena Flores Barreda, Patricia Araujo Pantoja

Abstract:

Phenols are toxic to life and the environment and may come from many sources. Uncured phenolic monomers present in the phenolic resins used as binders in grinding wheels and emery paper can contaminate industrial wastewaters in abrasives manufacturing plants. Furthermore, vestiges of the resol and novolac resins released by the wear of abrasives are also possible sources of water contamination by phenolics in these facilities. Fortunately, advanced oxidation by dark Fenton and photo-Fenton techniques is capable of oxidizing phenols and their degradation products up to their mineralization into H₂O and CO₂. The maximum allowable concentration of phenols in Peruvian water bodies is very low, so insufficiently treated effluents from the abrasives industry are a potential environmental noncompliance. The current case study highlights findings obtained during the lab-scale application of Fenton's and photo-assisted Fenton's chemistries to real industrial wastewater samples from an abrasives manufacturing plant in Peru. The goal was to reduce the phenolic content and sample toxicity. For this purpose, two independent variables, reaction time and the effect of ultraviolet radiation, were studied for their impact on the concentration of total phenols, total organic carbon (TOC), biological oxygen demand (BOD) and chemical oxygen demand (COD). In this study, diluted samples (1 L) of the industrial effluent were treated with Fenton's reagent (H₂O₂ and Fe²⁺ from FeSO₄.H₂O) for 10 min in a photochemical batch reactor (Alphatec RFS-500, Brazil) at pH 2.92. In the photo-Fenton tests, 9 W UV-A, UV-B and UV-C lamps were evaluated. All process conditions achieved 100% phenol degradation within 5 minutes. TOC, BOD and COD decreased by up to 49%, 52% and 86%, respectively (all processes considered together). However, unlike photo-Fenton, the Fenton treatment was not capable of reducing BOD, COD and TOC below a certain value, even after 10 minutes. It was also possible to conclude that the processes studied here degrade other compounds in addition to phenols, which is an advantage. In all cases, high effluent dilution factors and large amounts of oxidant negatively impact the overall economics of the processes investigated.

Keywords: fenton oxidation, wastewater treatment, phenols, abrasives industry

Procedia PDF Downloads 317
52 Jurisdictional Federalism and Formal Federalism: Levels of Political Centralization on American and Brazilian Models

Authors: Henrique Rangel, Alexandre Fadel, Igor De Lazari, Bianca Neri, Carlos Bolonha

Abstract:

This paper promotes a comparative analysis of the American and Brazilian models of federalism, taking their levels of political centralization as the main criterion. The central problem faced herein is the Brazilian approach to a unitary regime. Despite the hegemony of the federative form after 1989, Brazil has had a historical pattern of political centralization that remains under the 1988 constitutional regime. Meanwhile, the United States framed a federalism in which the states absorb significant authority. The hypothesis holds that the number of alternative criteria for federalization, which can generate political centralization, and the way they are upheld on judicial review are crucial to understanding the levels of political centralization achieved in each model. To test this hypothesis, the research follows a methodology temporally delimited to the 1994-2014 period. Three paradigmatic precedents of the U.S. Supreme Court were selected: United States v. Morrison (2000), on gender-motivated violence; Gonzales v. Raich (2005), on the medical use of marijuana; and United States v. Lopez (1995), on firearm possession in school zones. These most relevant federalism cases in the recent activity of the Supreme Court indicate a determinant parameter of deliberation: the commerce clause. After observing the criteria used to permit or prohibit political centralization in America, the Brazilian normative context is presented. In this sense, it is possible to identify the legal treatment these controversies would receive in that country. The decision-making reveals deliberative parameters that characterize each federative model. The research concludes that the precedents of the Rehnquist Court promoted a broad revival of the federalism debate, establishing the commerce clause as a secure criterion for upholding or rejecting the necessity of centralization, even with decisions considered conservative. Brazilian federalism, by contrast, resolves these controversies in a formalist fashion, through numerous and comprehensive (sometimes casuistic) normative devices oriented toward intense centralization. The aim of this work is to indicate how the jurisdictional federalism found in the United States can preserve a consistent model with robustly autonomous states, while Brazil gives preference to normative mechanisms that start from centralization.

Keywords: constitutional design, federalism, U.S. Supreme Court, legislative authority

Procedia PDF Downloads 516
51 Modeling the Acquisition of Expertise in a Sequential Decision-Making Task

Authors: Cristóbal Moënne-Loccoz, Rodrigo C. Vergara, Vladimir López, Domingo Mery, Diego Cosmelli

Abstract:

Our daily interaction with computational interfaces is full of situations in which we go from inexperienced users to experts through self-motivated exploration of the same task. In many of these interactions, we must learn to find our way through a sequence of decisions and actions before obtaining the desired result. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion, so that a specific sequence of actions must be performed in order to produce the expected outcome. But as they become experts in the use of such interfaces, do users adopt specific search and learning strategies? And if so, can we use this information to follow the process of expertise development and, eventually, predict future actions? This would be a critical step towards building truly adaptive interfaces that can facilitate interaction at different moments of the learning curve. Furthermore, it could provide a window into the potential mechanisms underlying decision-making behavior in real-world scenarios. Here we tackle this question using a simple game interface that instantiates a 4-level binary decision tree (BDT) sequential decision-making task. Participants have to explore the interface and discover an underlying concept-icon mapping in order to complete the game. We develop a Hidden Markov Model (HMM)-based approach whereby a set of stereotyped, hierarchically related search behaviors act as hidden states. Using this model, we are able to track the decision-making process as participants explore, learn and develop expertise in the use of the interface. Our results show that partitioning the problem space into such stereotyped strategies is sufficient to capture a host of exploratory and learning behaviors. Moreover, using the modular architecture of stereotyped strategies as a mixture of experts, we are able to simultaneously query the experts about the user's most probable future actions. We show that for those participants who learn the task, it becomes possible to predict their next decision above chance approximately halfway through the game. Our long-term goal is, on the basis of a better understanding of real-world decision-making processes, to inform the construction of interfaces that can establish dynamic conversations with their users in order to facilitate the development of expertise.
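A minimal sketch of the HMM tracking idea, with two hypothetical hidden strategies and made-up probabilities (the paper's stereotyped strategies and 4-level BDT task are richer): the forward recursion filters the strategy belief after each observed choice, and the final belief can be pushed through the model to score the next action, mixture-of-experts style.

```python
import numpy as np

# Two illustrative hidden strategies ("explore", "exploit") emitting
# observable choices: 0 = tried an unvisited icon, 1 = repeated a known one.
A = np.array([[0.7, 0.3],   # transitions: explore -> explore/exploit
              [0.1, 0.9]])  #              exploit -> explore/exploit
B = np.array([[0.8, 0.2],   # emissions: explore mostly yields new choices
              [0.3, 0.7]])  #            exploit mostly repeats known ones
pi = np.array([0.9, 0.1])   # users are assumed to start out exploring

def forward(obs):
    """Normalized forward algorithm: filtered state belief per step."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    history = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
        history.append(alpha)
    return np.array(history)

obs = [0, 0, 1, 1, 1, 1]          # a user settling into known choices
post = forward(obs)
print(post.round(3))              # belief shifts from explore to exploit

# Predicting the next observation from the current belief:
next_obs_probs = (post[-1] @ A) @ B
```

Prediction "above chance" then simply means the larger entry of `next_obs_probs` matches the user's actual next choice more than half the time.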

Keywords: behavioral modeling, expertise acquisition, hidden markov models, sequential decision-making

Procedia PDF Downloads 252
50 Potential Applications of Biosurfactants from Corn Steep Liquor in Cosmetic

Authors: J. M. Cruz, X. Vecıno, L. Rodrıguez-López, J. M. Dominguez, A. B. Moldes

Abstract:

The cosmetic and personal care industry is the field where biosurfactants could have the greatest chance of success, because in this kind of product the replacement of synthetic detergents by natural surfactants provides additional added value, while the harmful effects produced by some synthetic surfactants can be avoided or reduced. Nowadays, consumers are therefore willing to pay an additional cost if they obtain more natural products. In this work we provide data about the potential of biosurfactants in the cosmetic and personal care industry. Biosurfactants from corn steep liquor (CSL), a fermented and condensed stream, have shown good surface-active properties, substantially reducing the surface tension of water. The bacteria that usually grow in corn steep liquor comprise Lactobacillus species, generally recognized as safe. The biosurfactant extracted from CSL consists of a lipopeptide, composed of fatty acids, which can reduce the surface tension of water by more than 30 units. It is a yellow, viscous liquid with a density of 1.053 mg/mL and pH 4. Given these properties, it could be introduced into the formulation of cosmetic creams, hair conditioners or shampoos. Moreover, this biosurfactant has shown a potent antimicrobial effect on different strains of Streptococcus. Some species of Streptococcus are commonly found living innocuously in the human respiratory and genitourinary systems, while others produce several diseases in humans, including skin diseases. For instance, Streptococcus pyogenes produces many toxins and enzymes that help to establish skin infections; biosurfactants from corn steep liquor may inhibit the mechanisms of the S. pyogenes enzymes. S. pyogenes is an important cause of pharyngitis, impetigo, cellulitis and necrotizing fasciitis. In this work, it was observed that 50 mg/L of the biosurfactant extract obtained from corn steep liquor is able to inhibit the growth of S. pyogenes by more than 50%. Thus, cosmetic and personal care products formulated with biosurfactants from corn steep liquor could have prebiotic properties. The natural biosurfactant presented in this work, obtained from corn milling industry streams, has shown high potential to provide an interesting and sustainable alternative to the chemically synthesized antibacterial and surfactant ingredients used in cosmetic and personal care manufacture, which can cause irritation and often show only short-term effects.

Keywords: antimicrobial activity, biosurfactants, cosmetic, personal care

Procedia PDF Downloads 257
49 The Emergence of Memory at the Nanoscale

Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann

Abstract:

Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements in this field are devices with inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid-state systems, a clear understanding of the basic microscopic mechanisms that trigger them remains a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response, pointing to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical prediction, and general common trends can thus be listed that become the rule and not the exception, with contrasting signatures according to symmetry constraints, either built in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling very concise and accessible correlations between general intrinsic microscopic parameters, such as relaxation times, activation energies, and efficiencies (encountered throughout various fields of physics), and external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. The simplicity and practicality of the approach have also allowed a direct correlation with reported experimental observations, with the potential of pointing out optimal driving configurations. The main methodological tools combine three quantum transport approaches, a Drude-like model, the Landauer-Büttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreement with available experimental responses is provided to validate the main hypothesis. This analysis also sheds light on the basic universality of the complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation as well as its technological applications.
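A generic, minimal memristive sketch (an illustrative assumption, not the authors' model) shows how a single relaxation time already produces a memory signature: a conductance that relaxes toward a voltage-dependent equilibrium lags a periodic drive, so the same voltage yields different currents on the up and down sweeps, i.e. a pinched hysteresis loop.

```python
import numpy as np

# Hypothetical parameters: relaxation time and conductance bounds
tau = 0.5               # relaxation time (arbitrary units)
g_min, g_max = 0.1, 1.0

def g_eq(v):
    # Equilibrium conductance grows with |v| (illustrative choice)
    return g_min + (g_max - g_min) * np.tanh(abs(v))

dt, T = 1e-3, 4.0
t = np.arange(0, T, dt)
v = np.sin(2 * np.pi * t)   # periodic voltage drive
g = np.empty_like(t)
g[0] = g_min
for k in range(1, len(t)):
    # First-order relaxation: dg/dt = (g_eq(v) - g) / tau
    g[k] = g[k - 1] + dt * (g_eq(v[k - 1]) - g[k - 1]) / tau
i = g * v                   # current through the device

# The I-V curve i(v) is pinched at the origin (i = 0 when v = 0) but
# traces different branches on the rising and falling voltage sweeps.
```

When the drive period is much longer or much shorter than `tau`, the loop collapses, which mirrors the abstract's point that memory is discernible only under an adequate set of driving inputs.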

Keywords: memories, memdevices, memristors, nonequilibrium states

Procedia PDF Downloads 99
48 Terrestrial Laser Scans to Assess Aerial LiDAR Data

Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani

Abstract:

The quality of a DEM may depend on several factors, such as the data source, the capture method, the type of processing used to derive it, or the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by national cartographic agencies through punctual sampling focused on the vertical component. For this type of evaluation there are standards such as the NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation by means of a method that takes into account the surface nature of the DEM, so that the sampling is areal rather than punctual. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the point cloud product (PCpro). The present work describes the data capture on the ground and the post-processing tasks required to obtain the point cloud used as the reference (PCref) to evaluate the quality of the PCpro. Each PCref consists of a 50 x 50 m patch resulting from the registration of 4 different scan stations. The area studied was the Spanish region of Navarra, which covers 10,391 km²; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole that reached heights of up to 7 meters; the scanner was mounted in an inverted position so that the characteristic shadow circle of the direct position does not appear. To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, and its positioning accuracy was better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The kind of DEM of interest is the one corresponding to the bare earth, so it was necessary to apply a filter to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
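The cloud-to-cloud comparison can be sketched as nearest-neighbour distances from product points to reference points. The patch extent below echoes the 50 x 50 m patches, but the point counts, noise level and brute-force search are toy assumptions; a real comparison would involve millions of points and a spatial index such as a k-d tree.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins: a reference patch (PCref role) and a product
# cloud (PCpro role) built as a noisy subset of the reference.
pc_ref = rng.uniform(0.0, 50.0, (500, 3))                 # 50 x 50 m patch
pc_pro = pc_ref[:300] + rng.normal(0.0, 0.15, (300, 3))   # product + noise

# Brute-force nearest-neighbour distance for each product point
# (O(n*m) pairwise distances; fine for a toy, not for real clouds)
diff = pc_pro[:, None, :] - pc_ref[None, :, :]
d_nn = np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

rmse = float(np.sqrt((d_nn ** 2).mean()))
print(f"cloud-to-cloud RMSE = {rmse:.3f} m")
```

Summary statistics of `d_nn` (RMSE, percentiles) then play the role that punctual vertical errors play in the traditional evaluation, but over the whole surface.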

Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy

Procedia PDF Downloads 101
47 Analysis in Mexico on Workers Performing Highly Repetitive Movements with Sensory Thermography in the Surface of the Wrist and Elbows

Authors: Sandra K. Enriquez, Claudia Camargo, Jesús E. Olguín, Juan A. López, German Galindo

Abstract:

Companies are currently seeing an increase in cumulative trauma disorders (CTDs), which are growing significantly due to the highly repetitive movements (HRM) performed at workstations and cause economic losses to businesses through temporary and permanent worker disabilities. This analysis focuses on the prevention of disorders caused by repetitiveness, duration, and effort, and on reducing cumulative trauma disorders as occupational diseases using sensory thermography as a noninvasive method to evaluate the injuries workers could sustain from performing repetitive motions. Objectives: The aim is to define rest periods or job rotations before a CTD develops, using sensory thermography to analyze changes in temperature patterns on the wrists and elbows while the worker performs HRM over a period of 2 hours and 30 minutes. Information was also collected on non-work variables, such as wrist and elbow injuries, weight, gender, and age, and on work variables, such as workspace temperature, repetitiveness, and duration. Methodology: The analysis was conducted on 4 industrial designers in normal health (2 men and 2 women, to be specific) at a company over a period of 12 days, using the following time ranges: on the first day, workers were asked to rest 5 minutes for every 90 minutes of continuous work; on the second day, 10 minutes for every 90 minutes of continuous work; and likewise for 60 and 30 minutes of continuous work. Each worker was tested with 6 different ranges at least twice. The analysis was performed in a room with a controlled temperature between 20 and 25 °C, allowing more than 20 minutes at the beginning and end of the analysis for the temperature of the wrists and elbows to stabilize.
Results: The maximum temperature (Tmax) on the wrists and elbows was registered in the range of 90 minutes of continuous work with 5-minute rests: Tmax was 35.79 °C, with a difference of 2.79 °C between the initial and final temperature of the left elbow, observed in individual 4 at minute 86 of that range. Conclusions: With this alternative technology, sensory thermography, it is possible to predict rotation or rest intervals for the prevention of CTDs in HRM work activities, thereby reducing occupational disease and health-agency costs and increasing workers' quality of life, making this technology cost-effective in the future.

Keywords: sensory thermography, temperature, cumulative trauma disorder (CTD), highly repetitive movement (HRM)

Procedia PDF Downloads 431
46 Dynamic Two-Way FSI Simulation for a Blade of a Small Wind Turbine

Authors: Alberto Jiménez-Vargas, Manuel de Jesús Palacios-Gallegos, Miguel Ángel Hernández-López, Rafael Campos-Amezcua, Julio Cesar Solís-Sanchez

Abstract:

An optimal wind turbine blade design must be capable of capturing as much energy as possible from the wind resource available in the area of interest. Many times, an optimal design means the use of large quantities of material and complicated processes that make the wind turbine more expensive and, therefore, less cost-effective. In the construction and installation of a wind turbine, the blades may account for up to 20% of the overall price, and they are all the more important because they are part of the rotor system, which is in charge of transmitting the energy from the wind to the power train and where the static and dynamic design loads for the whole wind turbine are produced. The aim of this work is the development of a blade fluid-structure interaction (FSI) simulation that allows the identification of the major damage zones during normal production, so that better design and optimization decisions can be taken. The simulation is a dynamic case, since the inlet condition is a time-history wind velocity instead of a constant wind velocity. The process begins with the free-to-use software NuMAD (NREL), used to model the blade and assign its material properties; the 3D model is then exported to the ANSYS Workbench platform where, before setting up the FSI system, a modal analysis is made to identify natural frequencies and mode shapes. The FSI analysis is carried out with the two-way technique, which begins with a CFD simulation to obtain the pressure distribution on the blade surface; these results are then used as the boundary condition for the FEA simulation to obtain the deformation levels for the first time step. For the second time step, the CFD simulation is reconfigured automatically with the next time step's inlet wind velocity and the deformation results from the previous time step. The analysis continues this iterative cycle, solving time step by time step, until the entire load case is completed.
This work is part of a set of projects managed by a national consortium called “CEMIE-Eólico” (Mexican Center in Wind Energy Research), created to strengthen technological and scientific capacities, promote the training of specialized human resources, and link academia with the private sector in the national territory. The analysis belongs to the design of a rotor system for a 5 kW wind turbine intended to be installed at the Isthmus of Tehuantepec, Oaxaca, Mexico.
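The two-way, time-step-by-time-step coupling described above can be summarized in a short sketch. The `solve_cfd` and `solve_fea` functions are stand-ins for the ANSYS CFD and FEA solvers, and their formulas (and the wind series) are invented for illustration; only the structure of the loop reflects the procedure in the abstract.

```python
def solve_cfd(wind_speed, deformation):
    """Stub CFD step: a dynamic-pressure-like load that grows with wind
    speed and is slightly relaxed by the current blade deformation."""
    return 0.5 * 1.225 * wind_speed**2 * (1.0 - 0.1 * deformation)

def solve_fea(pressure, stiffness=1000.0):
    """Stub FEA step: deformation proportional to the applied pressure load."""
    return pressure / stiffness

def two_way_fsi(wind_series):
    """One coupled CFD -> FEA exchange per time step of the load case."""
    deformation = 0.0                    # blade starts undeformed
    history = []
    for wind_speed in wind_series:       # time-history inlet wind velocity
        pressure = solve_cfd(wind_speed, deformation)  # CFD with current shape
        deformation = solve_fea(pressure)              # FEA with CFD pressures
        history.append((pressure, deformation))
    return history

history = two_way_fsi([8.0, 10.0, 12.0])  # illustrative wind samples (m/s)
print(len(history))  # → 3
```

In the real workflow each "step" is a full 3D solve, and the deformation is fed back as a mesh update rather than a scalar, but the data flow between solvers is the same.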

Keywords: blade, dynamic, FSI, wind turbine

Procedia PDF Downloads 482
45 Optimized Renewable Energy Mix for Energy Saving in Waste Water Treatment Plants

Authors: J. D. García Espinel, Paula Pérez Sánchez, Carlos Egea Ruiz, Carlos Lardín Mifsut, Andrés López-Aranguren Oliver

Abstract:

This paper briefly describes three main actions on a Waste Water Treatment Plant (WWTP) for reducing its energy consumption: optimization of the biological reactor in the aeration stage by including new control algorithms and introducing new efficient equipment, the installation of an innovative hybrid system with zero grid injection (formed by 100 kW of PV energy and 5 kW of mini-wind energy generation), and an intelligent management system that controls load consumption and energy generation in the optimal way. This project, called RENEWAT and funded under the European Commission LIFE 2013 call, has the main objective of reducing energy consumption through different actions on the processes that take place in a WWTP and of introducing renewable energies into these treatment plants, with the purpose of promoting the use of treated waste water for irrigation and decreasing CO2 emissions. Waste water treatment is always required before waste water can be reused for irrigation or discharged into water bodies. However, the energy demand of the treatment process is high enough that the price of treated water exceeds that of drinking water. This makes it very difficult for any policy to encourage the reuse of treated water, with a great impact on the water cycle, particularly in those areas suffering water stress or scarcity. The cost of treating waste water involves another climate-change-related burden: the energy necessary for the process is obtained mainly from the electric grid, which, in most cases in Europe, means energy obtained from the burning of fossil fuels. The innovative part of this project is based on the implementation, adaptation, and integration of solutions to this problem, together with a new concept for integrating energy input and operative energy demand.
Moreover, there is an important qualitative leap between the technologies currently in use and those proposed for the project, which gives it an innovative character: there is no previous experience of a WWTP that intelligently discriminates among energy sources, integrating renewable ones (PV and wind) with the grid.
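A priority-dispatch rule of the kind an intelligent management system with zero grid injection might apply can be sketched as follows. This is a hypothetical illustration of the concept, not the RENEWAT control logic: the merit order (PV, then mini-wind, then grid) and the example figures are assumptions.

```python
def dispatch(load_kw, pv_kw, wind_kw):
    """Serve the load from PV first, then mini-wind, then the grid.
    Zero grid injection: any renewable surplus is curtailed, never exported."""
    pv_used = min(load_kw, pv_kw)
    wind_used = min(load_kw - pv_used, wind_kw)
    grid_kw = load_kw - pv_used - wind_used          # residual imported from the grid
    curtailed = (pv_kw - pv_used) + (wind_kw - wind_used)
    return {"pv": pv_used, "wind": wind_used, "grid": grid_kw, "curtailed": curtailed}

# Hypothetical aeration load of 60 kW met by the 100 kW PV / 5 kW mini-wind system
print(dispatch(60.0, 55.0, 5.0))
```

With 55 kW of PV and 5 kW of wind available, the 60 kW load is covered entirely by renewables and nothing is drawn from, or injected into, the grid.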

Keywords: aeration system, biological reactor, CO2 emissions, energy efficiency, hybrid systems, LIFE 2013 call, process optimization, renewable energy sources, waste water treatment plants

Procedia PDF Downloads 352
44 Integration Process and Analytic Interface of different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from Open Data bases belonging to different governments, which requires integrating data from various sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, so the number of applications based on Open Data is increasing. However, each government has its own procedures for publishing its data, which results in a variety of data set formats, because there are no international standards specifying the formats of data sets from Open Data bases. Due to this variety of formats, we must build a data integration process that is able to handle all kinds of formats. Some software tools have been developed to support the integration process, e.g. Data Tamer and Data Wrangler. The problem with these tools is that they need a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve an automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its Open Data bases on environment indicators in real time. In the same way, other governments (such as Andalucia or Bilbao) have published Open Data sets on the environment. All of those data sets have different formats, but our solution is able to integrate all of them; furthermore, it allows the user to perform and visualize analyses over the real-time data.
Once the integration task is done, all the data from any government have the same format and the analysis process can be initiated in a computationally better way. The tool presented in this work thus has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach we also developed an implementation in the R language as a mature open-source technology. R is a really powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance, and there are also R libraries, such as shiny, for building a graphic interface. A performance comparison between both implementations was made and no significant differences were found. In addition, our work provides any developer with an official real-time integrated data set of environmental data in Spain so that they can build their own applications.
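The per-source adapters at the heart of such an integration process can be sketched in a few lines. The two feeds, their field names (`fecha`, `temperatura`, `temp_c`), and the target schema below are all invented for illustration; they are not the actual Madrid or Bilbao formats.

```python
import csv
import io
import json

# Two hypothetical feeds publishing the same temperature indicator in different formats.
madrid_json = '[{"fecha": "2015-06-01", "temperatura": 28.4}]'
bilbao_csv = "date;temp_c\n2015-06-01;24.1\n"

def from_madrid(raw):
    """Adapter for a JSON feed with Spanish field names."""
    return [{"date": r["fecha"], "temp_c": r["temperatura"], "source": "madrid"}
            for r in json.loads(raw)]

def from_bilbao(raw):
    """Adapter for a semicolon-delimited CSV feed."""
    reader = csv.DictReader(io.StringIO(raw), delimiter=";")
    return [{"date": r["date"], "temp_c": float(r["temp_c"]), "source": "bilbao"}
            for r in reader]

# One adapter per government, mirroring the per-source procedures described above;
# after this step every record shares the same schema.
integrated = from_madrid(madrid_json) + from_bilbao(bilbao_csv)
print(len(integrated))  # → 2
```

In the tool itself these adapters would run as Hadoop procedures over full data sets rather than inline strings, but the normalization step is the same.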

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 315
43 Contribution of the Corn Milling Industry to a Global and Circular Economy

Authors: A. B. Moldes, X. Vecino, L. Rodriguez-López, J. M. Dominguez, J. M. Cruz

Abstract:

The concept of the circular economy focuses on the importance of providing goods and services sustainably. Thus, in the future it will be necessary to respond to environmental contamination and move toward the use of renewable substrates within a more restorative economic system that drives the utilization and revalorization of residues to obtain valuable products. Throughout its evolution, our industrial economy has hardly moved beyond one major characteristic, established in the early days of industrialization: a linear model of resource consumption. However, this industrial consumption system cannot be maintained for long. On the other hand, there are many industries, like the corn milling industry, that, although they do not consume large amounts of non-renewable substrates, produce valuable streams which, treated appropriately, could provide additional economic and environmental benefits through the extraction of commercially interesting renewable products that can replace some of the substances obtained by chemical synthesis from non-renewable substrates. From this point of view, the use of streams from the corn milling industry to obtain surface-active compounds will decrease the use of non-renewable sources for obtaining this kind of compound, contributing to a circular and global economy. However, the success of the circular economy depends on the interest of the industrial sectors in revalorizing their streams by developing relevant new business models; it is therefore necessary to invest in the research of new alternatives that reduce the consumption of non-renewable substrates. This study proposes the utilization of a corn milling industry stream to obtain an extract with surfactant capacity. Once the biosurfactant is extracted, the corn milling stream can be commercialized as a nutritional medium in biotechnological processes or as an animal feed supplement.
Usually this stream is combined with other ingredients to obtain a product known as corn gluten feed, or it may be sold separately as a liquid protein source for beef and dairy feeding or as a nutritional pellet binder. Following the production scheme proposed in this work, the corn milling industry would obtain a biosurfactant extract that could be incorporated into its production process, replacing the chemical detergents used at some points of its production chain, or commercialized as a new product of corn manufacturing. The biosurfactants obtained from the corn milling industry could replace chemical surfactants in many formulations and uses, and they are an example of the potential that many industrial streams could offer for obtaining valuable products when they are managed properly.

Keywords: biosurfactants, circular economy, corn, sustainability

Procedia PDF Downloads 263
42 Optimization of Artisanal Fishing Waste Fermentation for Volatile Fatty Acids Production

Authors: Luz Stella Cadavid-Rodriguez, Viviana E. Castro-Lopez

Abstract:

Fish waste (FW) has a high content of potentially biodegradable components, so it is amenable to anaerobic digestion. In this line, anaerobic digestion (AD) of FW has been studied for biogas production. Nevertheless, intermediate products such as volatile fatty acids (VFA), generated during the acidogenic stage, have been scarcely investigated, even though they have high potential as a renewable source of carbon. In the literature, there are few studies on the effect of the inoculum-to-substrate (I/S) ratio on acidogenesis. On the other hand, it is well known that pH is a critical factor in the production of VFA. The optimum pH for VFA production seems to change depending on the substrate and can vary in a range between 5.25 and 11. Nonetheless, the literature on VFA production from protein-rich waste, such as FW, is scarce. In this context, it is necessary to investigate more deeply the optimal operating conditions of acidogenic fermentation for VFA production from protein-rich waste. Therefore, the aim of this research was to optimize volatile fatty acid production from artisanal fishing waste by studying the effect of pH and the I/S ratio on the acidogenic process. For this research, the inoculum used was a methanogenic sludge (MS) obtained from a UASB reactor treating wastewater from a slaughterhouse plant, and the FW was collected in the port of Tumaco (Colombia) from local artisanal fishers. The acidogenic fermentation experiments were conducted in batch mode in 500 mL glass bottles used as anaerobic reactors, equipped with rubber stoppers fitted with a valve to release biogas. The effective volume was 300 mL. The experiments were carried out for 15 days at a mesophilic temperature of 37 ± 2 °C and constant agitation at 200 rpm. The effect of 3 pH levels (5, 7, and 9) coupled with five I/S ratios (0.20, 0.15, 0.10, 0.05, and 0.00) was evaluated, taking the production of VFA as the response variable.
A completely randomized block design in a 5x3 factorial arrangement was selected for the experiments, with two repetitions per treatment. At the beginning of the process and throughout it, the pH in the experimental reactors was adjusted to the corresponding values of 5, 7, and 9 using 1M NaOH or 1M H2SO4, as appropriate. In addition, once the optimum I/S ratio was determined, the process was evaluated at this condition without pH control. The results indicated that pH is the main factor in the production of VFA, with the highest concentration obtained at neutral pH. By reducing the I/S ratio to as low as 0.05, it was possible to maximize VFA production. Thus, the optimum conditions found were natural pH (6.6-7.7) and an I/S ratio of 0.05, with which it was possible to reach a maximum total VFA concentration of 70.3 g Ac/L, whose major components were acetic acid (35%) and butyric acid (32%). The findings showed that the acidogenic fermentation of FW is an efficient way of producing VFA and that the operating conditions can be simple and economical.

Keywords: acidogenesis, artisanal fishing waste, inoculum to substrate ratio, volatile fatty acids

Procedia PDF Downloads 126
41 Pelvic Floor Training in Elite Athletes: Fact or Fiction

Authors: Maria Barbano Acevedo-Gomez, Elena Sonsoles Rodriguez-Lopez, Sofia Olivia Calvo-Moreno, Angel Basas-Garcia, Cristophe Ramirez

Abstract:

Introduction: Urinary incontinence (UI) is defined as the involuntary leakage of urine. In persons who practice sport, its prevalence is 36.1% (95% CI 26.5%-46.8%) and varies, as it seems to depend on the intensity of exercise, the movements involved, and the impact on the ground. High-impact sports are likely to generate higher intra-abdominal pressures, leading to pelvic floor muscle weakness. Even though the emphasis of this research is on female athletes, all women should perform pelvic floor muscle exercises as part of their general physical exercise; pelvic floor exercises are generally considered the first-line treatment against urinary incontinence. Objective: The main objective of the present study was to determine elite athletes' knowledge of the pelvic floor and of UI, and to establish whether they incorporate pelvic floor strengthening in their training. Methods: This was an observational study conducted on 754 elite athletes. After answering questions about the pelvic floor, UI, and sport-related data, participants completed the International Consultation on Incontinence Questionnaire-UI Short Form (ICIQ-SF). Results: 57.3% of the athletes reported having no knowledge of their pelvic floor, 48.3% did not know what strengthening exercises are, and around 90% had never practiced them; 78.1% (n=589) of all elite athletes do not include pelvic floor exercises in their training. Of the elite athletes surveyed, 33% had UI according to the ICIQ-SF (mean age 23.75 ± 7.74 years). In response to the question 'Do you think you have or have had UI?', only 9% of the 754 elite athletes admitted they presently had UI, and 13.3% indicated they had had UI at some time. However, 22.7% (n=171) reported they had experienced urine leakage while training. Of the athletes who indicated they did not have UI in the ICIQ-SF, 25.7% stated they did experience urine leakage during training (χ² [1] = 265.56; p < 0.001).
Further, 12.3% of the athletes who considered they did not have UI, and 60% of those who admitted they had had UI on some occasion, stated they had suffered some urine leakage in the past 3 months (χ² [1] = 287.59; p < 0.001). Conclusions: There is a lack of knowledge about UI in sport. Through the use of validated questionnaires, we observed a UI prevalence of 33%, and 22.7% of athletes reported they experienced urine leakage while training. These figures contrast with the only 9% of athletes who reported they had, or had had in the past, UI. This discrepancy could reflect the great lack of knowledge about UI in sports, and the fact that an athlete may sometimes consider urine leakage normal and a consequence of the demands of training. These data support the idea that coaches, physiotherapists, and other professionals involved in maximizing the performance of athletes should include pelvic floor muscle exercises in their training programs. Measures such as this could help to prevent UI during training and could be a starting point for future studies designed to develop adequate prevention and treatment strategies for this embarrassing problem affecting young athletes, both male and female.
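The χ² statistics reported above compare self-reported UI status against questionnaire outcomes. As a reminder of the calculation, here is a minimal Pearson chi-square for a contingency table; the counts in the example are invented for illustration and are NOT the study's data.

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # under independence
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table: self-reported UI (rows) vs. leakage during training (columns).
table = [[10, 20],
         [20, 10]]
print(round(chi_square(table), 3))  # → 6.667
```

The statistic is then compared against the χ² distribution with (r-1)(c-1) degrees of freedom, 1 in this 2x2 case, to obtain the p-value.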

Keywords: athletes, pelvic floor, performance, prevalence, sport, training, urinary incontinence

Procedia PDF Downloads 127
40 Impact of the Dog-Technic for D1-D4 and Longitudinal Stroke Technique for Diaphragm on Peak Expiratory Flow (PEF) in Asthmatic Patients

Authors: Victoria Eugenia Garnacho-Garnacho, Elena Sonsoles Rodriguez-Lopez, Raquel Delgado-Delgado, Alvaro Otero-Campos, Jesus Guodemar-Perez, Angelo Michelle Vagali, Juan Pablo Hervas-Perez

Abstract:

Asthma is a heterogeneous disease that has traditionally been treated with drugs. The osteopathic treatment we propose aims to determine, through a dorsal manipulation (Dog Technic D1-D4) and a diaphragm technique (Longitudinal Stroke), what changes occur in forced expiratory flow in spirometry: in particular, whether there is an increase in Peak Flow volumes post-intervention and post-effort, and whether applying the two techniques together is more powerful than applying only the Longitudinal Stroke. We also assessed whether this type of treatment has repercussions on dyspnea, a very common symptom in asthma, and finally whether pain in the facilitated vertebra decreased after manipulation. Methods: Participants were recruited among students and professors of the university, aged 18-65; patients (n = 18) were randomly assigned to one of two groups, group 1 (dorsal manipulation Dog Technic plus Longitudinal Stroke) and group 2 (diaphragmatic technique, Longitudinal Stroke, only). The statistical analysis is based on the comparison of the main airway obstruction indicator, PEF (peak expiratory flow), in various situations, measured with the Datospir Peak-10 peak flow meter. The measurements were carried out in four phases: at rest, after the stress test, after the treatment, and after treatment plus the stress test. After each stress test, the level of dyspnea of each patient was evaluated through the Borg scale, regardless of the group. In group 1, in addition to these parameters, spinous process pain was measured with an algometer before and after the manipulation. All data were recorded at the minute. Results: 12 patients in group 1 (Dog Technic and Longitudinal Stroke) responded positively to treatment; the PEF increased by 5.1% post-treatment and by 6.1% post-treatment and effort. The results of the Borg scale, with which we measured the level of dyspnea, were positive: 54.95% of patients noted an improvement in breathing.
In addition, comparison of the means of both groups confirmed that group 1, in which the two techniques were applied, was 34.05% more effective than group 2, in which only one was applied. After the manipulation, pain decreased in 38% of the cases. Conclusions: The impact of the Dog-Technic for D1-D4 and the Longitudinal Stroke technique for the diaphragm on peak expiratory flow (PEF) volumes in asthmatic patients was positive: there was a change in PEF post-intervention and post-treatment and effort, and the treatment was more effective than in the group in which only one technique was applied. Furthermore, this type of treatment decreased facilitated vertebra pain and was efficient in improving dyspnea and the general well-being of the patient.

Keywords: ANS, asthma, manipulation, manual therapy, osteopathic

Procedia PDF Downloads 288
39 Secondary Prisonization and Mental Health: A Comparative Study with Elderly Parents of Prisoners Incarcerated in Remote Jails

Authors: Luixa Reizabal, Inaki Garcia, Eneko Sansinenea, Ainize Sarrionandia, Karmele Lopez De Ipina, Elsa Fernandez

Abstract:

Although the effects of incarceration in prisons close to prisoners’ and their families’ residences have been studied, little is known about the effects of remote incarceration. The present study shows the impact of secondary prisonization on the mental health of elderly parents of Basque prisoners who are incarcerated in prisons located far away from the prisoners’ and their families’ residences. Secondary prisonization refers to the effects that the imprisonment of a family member has on relatives. In the study, the psychological effects are analyzed by means of a comparative methodology. Specifically, levels of psychopathology (depression, anxiety, and stress) and positive mental health (psychological, social, and emotional well-being) are studied in a sample of parents, over 65 years old, of prisoners incarcerated in prisons located a long distance (some less than 400 km away, others farther than 400 km) from the Basque Country. The dataset consists of data collected through a questionnaire and from a spontaneous speech recording. The statistical and automatic analyses show that the levels of psychopathology and positive mental health of elderly parents of prisoners incarcerated in remote jails are affected by the incarceration of their sons or daughters. Concretely, these parents show higher levels of depression, anxiety, and stress, and lower levels of emotional (but not psychological or social) well-being, than parents with no imprisoned daughters or sons. These findings suggest that parents with imprisoned sons or daughters suffer the impact of secondary prisonization on their mental health.
When comparing parents whose sons or daughters are incarcerated within 400 kilometers from home with parents whose sons or daughters are incarcerated farther than 400 kilometers from home, the latter present higher levels of psychopathology, but also higher levels of positive mental health (although the difference between the two groups is not statistically significant). These findings might be explained by resilience: in traumatic situations, people can develop the strength to cope with the situation and even present post-traumatic growth. Bearing all these findings in mind, it can be concluded that secondary prisonization entails suffering for elderly parents with sons or daughters incarcerated in remote jails and, in consequence, that changes in the penitentiary policy applied to Basque prisoners are required in order to end this suffering.

Keywords: automatic spontaneous speech analysis, elderly parents, machine learning, positive mental health, psychopathology, remote incarceration, secondary prisonization

Procedia PDF Downloads 291
38 Urinary Incontinence and Performance in Elite Athletes

Authors: María Barbaño Acevedo Gómez, Elena Sonsoles Rodríguez López, Sofía Olivia Calvo Moreno, Ángel Basas García, Christophe Ramírez Parenteau

Abstract:

Introduction: Urinary incontinence (UI) is defined as the involuntary leakage of urine. In persons who practice sport, its prevalence is 36.1% (95% CI 26.5%-46.8%) and varies, as it seems to depend on the intensity of exercise, the movements involved, and the impact on the ground. High-impact sports are likely to generate higher intra-abdominal pressures, leading to pelvic floor muscle weakness. Although physical exercise reduces the risk of suffering from many diseases, the mentality of an elite athlete is not to optimize their health; achieving their goals can put their health at risk. Furthermore, feeling or suffering discomfort during training seems to be accepted as normal within the demands of elite sport. Objective: The main objective of the present study was to determine the effects of UI on sports performance in athletes. Methods: This was an observational study conducted on 754 elite athletes. After answering questions about the pelvic floor, UI, and sport-related data, participants completed the International Consultation on Incontinence Questionnaire-UI Short Form (ICIQ-SF) and the ISI (incontinence severity index). Results: 48.8% of the athletes declared having leakage at rest, in the preseason, and/or in competition (χ² [3] = 3.64; p = 0.302), with the competition period (29.1%) being the most frequent setting for urine leakage. Of the elite athletes surveyed, 33% had UI according to the ICIQ-SF (mean age 23.75 ± 7.74 years). Elite athletes with UI (5.31 ± 1.07 days) dedicate significantly more days per week to training [M = 0.28; 95% CI = 0.08-0.48; t (752) = 2.78; p = 0.005] than those without UI. Regarding frequency, 59.7% lose urine once a week, 25.6% more than 3 times a week, and 14.7% daily. Regarding the amount, approximately 15% claim to lose a moderate to abundant amount.
The athletes with the highest number of urine leaks during training are more affected by UI in their daily life (r = 0.259; p = 0.001), present a greater number of leaks in their day-to-day activities (r = 0.341; p < 0.001), and show greater UI severity (r = 0.341; p < 0.001). Conclusions: Athletes consider that UI affects them negatively in their daily routine: 30.9% report moderate to severe UI in daily life, and 29.1% lose urine during the competition period. An interesting fact is that more than half of the sample comprised elite athletes who compete at the highest level (Olympic Games, World and European Championships), for whom dedication to sport occupies a large part of life. The period in which athletes most frequently suffer urine leakage is competition, when they must manage many emotions to achieve their best performance; if urine leakage is added at those moments, their performance may well be affected.

Keywords: athletes, performance, prevalence, sport, training, urinary incontinence

Procedia PDF Downloads 131
37 Evaluation of the Energy Performance and Emissions of an Aircraft Engine: J69 Using Fuel Blends of Jet A1 and Biodiesel

Authors: Gabriel Fernando Talero Rojas, Vladimir Silva Leal, Camilo Bayona-Roa, Juan Pava, Mauricio Lopez Gomez

Abstract:

The substitution of conventional aviation fuels with biomass-derived alternative fuels is an emerging field of study in aviation transport, mainly due to the sector's energy consumption, its contribution to global Greenhouse Gas (GHG) emissions, and fossil fuel price fluctuations. Nevertheless, several challenges remain, such as the biofuel production cost and its degradative effect on fuel systems, which alters operating safety. Moreover, experimentation on full-scale aeronautic turbines is expensive and complex, so most research has been limited to testing small-size turbojets, with a major absence of information regarding the effects on energy performance and emissions. The main purpose of the current study is to present the results of experimentation on a full-scale military turbojet engine J69-T-25A (presented in Fig. 1) with a power rating of 640 kW, using blends of Jet A1 with oil palm biodiesel. The main findings concern the thrust-specific fuel consumption (TSFC), the engine global efficiency (η), the air/fuel ratio (AFR), and the volume fractions of O2, CO2, CO, and HC. Two fuels are used in the present study: a commercial Jet A1 and a Colombian palm oil biodiesel. The experimental plan is conducted using biodiesel volume contents w_BD from 0 % (B0) to 50 % (B50). The engine operating regimes are set to Idle, Cruise, and Take-off conditions. The turbojet engine J69 is used by the Colombian Air Force, and it is installed in a testing bench with the instrumentation specified in the technical manual of the engine. Increasing w_BD from 0 % to 50 % reduces η by about 3.3 % and the thrust force by 26.6 % at the Idle regime. These variations are related to the reduction of the higher heating value (HHV) of the fuel blend. The evolved CO and HC tend to be reduced in all operating conditions as w_BD increases. Furthermore, a reduction of the atomization angle is presented in Fig. 2, indicating poor atomization in the fuel nozzle injectors when using a higher biodiesel content, as the viscosity of the fuel blend increases. An evolution of cloudiness is also observed during the shutdown procedure, as presented in Fig. 3a, particularly above 20 % biodiesel content in the fuel blend. This promotes the contamination of some components of the combustion chamber of the J69 engine with soot and unburned matter (Fig. 3). Thus, biodiesel contents above 20 % are not recommended, in order to avoid a significant decrease of η and the thrust force. A more detailed examination of the mechanical wearing of the main components of the engine is advised for further studies.
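The reported drops in η and thrust follow from the lower heating value of the blend. A minimal sketch of the linear mixing-rule estimate and the TSFC definition, using typical literature heating values and densities as illustrative assumptions (not the measured values of this study):

```python
# Illustrative property values (assumptions, not data from this study):
HHV_JETA1 = 46.0e6   # J/kg, typical higher heating value of Jet A1
HHV_BD = 40.0e6      # J/kg, typical higher heating value of palm biodiesel
RHO_JETA1 = 800.0    # kg/m3, assumed Jet A1 density
RHO_BD = 880.0       # kg/m3, assumed biodiesel density

def blend_hhv(w_bd_vol: float) -> float:
    """HHV of a Jet A1/biodiesel blend for a biodiesel VOLUME fraction w_bd_vol,
    estimated by a linear mixing rule on mass fractions."""
    mass_bd = w_bd_vol * RHO_BD                 # kg of biodiesel per m3 of blend
    mass_jet = (1.0 - w_bd_vol) * RHO_JETA1     # kg of Jet A1 per m3 of blend
    w_bd_mass = mass_bd / (mass_bd + mass_jet)  # volume -> mass fraction
    return w_bd_mass * HHV_BD + (1.0 - w_bd_mass) * HHV_JETA1

def tsfc(fuel_mass_flow_kg_s: float, thrust_n: float) -> float:
    """Thrust-specific fuel consumption, kg/(N*s): fuel flow per unit thrust."""
    return fuel_mass_flow_kg_s / thrust_n

# B50 carries less energy per kilogram than pure Jet A1 (B0), so more fuel
# is needed for the same heat input, raising TSFC and lowering efficiency.
print(blend_hhv(0.0), blend_hhv(0.5))
```

The same blend with lower HHV also has a higher viscosity, which is what degrades the spray angle noted above; the mixing rule only captures the energy side of the effect.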

Keywords: aviation, air to fuel ratio, biodiesel, energy performance, fuel atomization, gas turbine

Procedia PDF Downloads 110
36 Structural and Biochemical Characterization of Red and Green Emitting Luciferase Enzymes

Authors: Wael M. Rabeh, Cesar Carrasco-Lopez, Juliana C. Ferreira, Pance Naumov

Abstract:

Bioluminescence, the emission of light from a biological process, is found in various living organisms, including bacteria, fireflies, beetles, fungi, and different marine organisms. Luciferase is an enzyme that catalyzes a two-step oxidation of luciferin in the presence of Mg2+ and ATP to produce oxyluciferin, releasing energy in the form of light. The luciferase assay is used in biological research and clinical applications for in vivo imaging, cell proliferation, and protein folding and secretion analysis. The luciferase enzyme consists of two domains: a large N-terminal domain (residues 1-436) that is connected to a small C-terminal domain (residues 440-544) by a flexible loop that functions as a hinge for opening and closing the active site. The two domains are separated by a large cleft housing the active site, which closes after binding the substrates, luciferin and ATP. Even though all insect luciferases catalyze the same chemical reaction and share 50% to 90% sequence homology and high structural similarity, they emit light of different colors, from green at 560 nm to red at 640 nm. Currently, the majority of structural and biochemical studies have been conducted on green-emitting firefly luciferases. To address the color-emission mechanism, we expressed and purified two luciferase enzymes with blue-shifted green and red emission from the indigenous Brazilian species Amydetes fanestratus and Phrixothrix, respectively. The two enzymes naturally emit light of different colors, and they are an excellent system for studying the color-emission mechanism of luciferases, as the currently proposed mechanisms are based on mutagenesis studies. Using a vapor-diffusion method and a high-throughput approach, we crystallized and solved the crystal structures of both enzymes, at 1.7 Å and 3.1 Å resolution respectively, using X-ray crystallography.
The free enzyme adopted two open conformations in the crystallographic unit cell that are different from the previously characterized firefly luciferase. The blue-shifted green luciferase crystallized as a monomer, similar to other luciferases reported in the literature, while the red luciferase crystallized as an octamer and was also purified as an octamer in solution. The octamer conformation is the first of its kind for any insect luciferase, which might be related to the red color emission. Structurally designed mutations confirmed the importance of the transition between the open and closed conformations in the fine-tuning of the color, and the characterization of other interesting mutants is underway.

Keywords: bioluminescence, enzymology, structural biology, x-ray crystallography

Procedia PDF Downloads 326
35 Variation of Carbon Isotope Ratio (δ13C) and Leaf-Productivity Traits in Aquilaria Species (Thymelaeceae)

Authors: Arlene López-Sampson, Tony Page, Betsy Jackes

Abstract:

The Aquilaria genus produces a highly valuable fragrant oleoresin known as agarwood. Agarwood forms in a few trees in the wild as a response to injury or pathogen attack. The resin is used in the perfume and incense industry and in medicine. Cultivation of Aquilaria species as a sustainable source of the resin is now a common strategy. Physiological traits are frequently used as a proxy for crop and tree productivity. Aquilaria species growing in Queensland, Australia were studied to investigate the relationship between leaf-productivity traits and tree growth. Specifically, 28 trees, representing 12 plus trees and 16 trees from yield plots, were selected for carbon isotope analysis (δ13C) and the monitoring of six leaf attributes. Trees were grouped into four diametric classes (diameter at 150 mm above ground level), ensuring that the variability in growth of the whole population was sampled. A model-averaging technique based on the Akaike information criterion (AIC) was computed to identify whether leaf traits could assist in diameter prediction. Carbon isotope values were correlated with height classes and leaf traits to determine any relationship. On average, four leaves per shoot were recorded. Approximately one new leaf per week is produced by a shoot. The rate of leaf expansion was estimated at 1.45 mm day-1. There were no statistical differences between diametric classes in leaf expansion rate or number of new leaves per week (p > 0.05). The range of δ13C values in leaves of Aquilaria species was from -31 ‰ to -25.5 ‰, with an average of -28.4 ‰ (± 1.5 ‰). Only 39% of the variability in height can be explained by leaf δ13C. Leaf δ13C and nitrogen content values were positively correlated. This relationship implies that leaves with higher photosynthetic capacities also had lower intercellular carbon dioxide concentrations (ci/ca) and less depleted 13C values. Most of the predictor variables have a weak correlation with diameter (D).
However, analysis of the 95% confidence set of best-ranked regression models indicated that the predictors most likely to explain growth in Aquilaria species are petiole length (PeLen), δ13C (true13C) and δ15N (true15N) values, leaf area (LA), specific leaf area (SLA), and the number of new leaves produced per week (NL.week). The model constructed with PeLen, true13C, true15N, LA, SLA and NL.week could explain 45% (R2 = 0.4573) of the variability in D. The leaf traits studied gave a better understanding of the leaf attributes that could assist in the selection of high-productivity trees in Aquilaria.
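The AIC-based model-averaging step can be illustrated with Akaike weights: each candidate model receives a weight proportional to exp(-ΔAIC/2), so that the weights sum to 1 across the candidate set. A minimal numpy-only sketch on synthetic stand-in data (the variable names and values are illustrative, not the study's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: diameter D driven mostly by petiole length.
n = 60
pelen = rng.normal(30.0, 5.0, n)   # stand-in for petiole length (PeLen)
d13c = rng.normal(-28.4, 1.5, n)   # stand-in for leaf delta-13C
D = 0.8 * pelen + rng.normal(0.0, 2.0, n)

def ols_aic(predictors, y):
    """Fit OLS by least squares; AIC = n*ln(RSS/n) + 2k for Gaussian errors,
    where k counts the fitted parameters (coefficients + error variance)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    k = X.shape[1] + 1
    return len(y) * np.log(rss / len(y)) + 2 * k

# Candidate models: PeLen only, d13C only, both predictors.
aics = np.array([ols_aic([pelen], D), ols_aic([d13c], D), ols_aic([pelen, d13c], D)])
delta = aics - aics.min()
weights = np.exp(-delta / 2.0) / np.exp(-delta / 2.0).sum()  # Akaike weights

print(dict(zip(["PeLen", "d13C", "PeLen+d13C"], np.round(weights, 3))))
```

With these synthetic data, the model containing the true driver (petiole length) dominates the weights; the study's 95% confidence set corresponds to the smallest group of models whose cumulative weight reaches 0.95.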

Keywords: 13C, petiole length, specific leaf area, tree growth

Procedia PDF Downloads 512
34 Challenge in Teaching Physics during the Pandemic: Another Way of Teaching and Learning

Authors: Edson Pierre, Gustavo de Jesus Lopez Nunez

Abstract:

The objective of this work is to analyze how physics can be taught remotely through the use of platforms and software to attract the attention of 2nd-year high school students at Colégio Cívico Militar Professor Carmelita Souza Dias, and to point out how remote teaching can be a teaching-learning strategy during a period of social distancing. Teaching physics has been a challenge for teachers and students alike, with common sense holding that the subject is very difficult to teach and to learn. The challenge increased in 2020 and 2021 with the impact of the new coronavirus (SARS-CoV-2) pandemic and its variants, which affected the entire world. With these changes, a new teaching modality emerged: remote teaching. It brought new challenges, one of which was promoting distance research experiences, especially in physics teaching, since there are learning difficulties and it is often impossible for the student to relate the theory observed in class to the reality that surrounds them. Physics teaching in schools faces difficulties that make it increasingly less attractive for young people to choose this profession. The study of physics is nevertheless very important, as it puts students in front of concrete and real situations that physical principles can explain, helping them to understand nature and nurturing a taste for science.
Among the new platforms and software is PhET Interactive Simulations from the University of Colorado Boulder, a virtual laboratory with numerous simulations of scientific experiments that improve understanding of the content taught in a practical way, facilitating student learning and absorption of content. As a simple, practical, and free simulation tool, it attracts more attention from students, leading them to acquire greater knowledge of the subject studied; quizzes can also bring a healthy competitiveness to students, generating knowledge and interest in the themes used. The present study takes the Theory of Social Representations as its theoretical reference, examining the content and process of constructing the representations of teachers, the subjects of our investigation, on the evaluation of teaching and learning processes, through a qualitative methodology. The results of this work show that remote teaching was indeed a very important strategy for the process of teaching and learning physics in the 2nd year of high school. It provided greater interaction between teacher and student. The teacher also plays a fundamental role, since technology is increasingly present in the educational environment, and the teacher is the main protagonist of this process.

Keywords: physics teaching, technologies, remote learning, pandemic

Procedia PDF Downloads 67
33 Life Cycle Assessment to Study the Acidification and Eutrophication Impacts of Sweet Cherry Production

Authors: G. Bravo, D. Lopez, A. Iriarte

Abstract:

Several organizations and governments have created a demand for information about the environmental impacts of agricultural products. Today, the export-oriented fruit sector in Chile is being challenged to quantify and reduce its environmental impacts. Chile is the largest southern-hemisphere producer and exporter of sweet cherry fruit. Chilean sweet cherry production reached a volume of 80,000 tons in 2012. The main destination market for the Chilean cherry in 2012 was Asia (including Hong Kong and China), taking in 69% of the exported volume. Another important market was the United States with 16% participation, followed by Latin America (7%) and Europe (6%). Concerning geographical distribution, Chilean conventional cherry production is concentrated in the center-south area, between the regions of Maule and O’Higgins; together, both regions represent 81% of the planted surface. Life Cycle Assessment (LCA) is widely accepted as one of the major methodologies for assessing the environmental impacts of products or services. The LCA identifies the material, energy, and waste flows of a product or service, and their impact on the environment. There are scant studies that examine the impacts of sweet cherry cultivation, such as acidification and eutrophication. Within this context, the main objective of this study is to evaluate, using LCA, the acidification and eutrophication impacts of sweet cherry production in Chile. An additional objective is to identify the agricultural inputs that contribute significantly to the impacts of this fruit. The system under study included all the life cycle stages from the cradle to the farm gate (harvested sweet cherry). The data on sweet cherry production correspond to nationwide representative practices and are based on technical-economic studies and field information obtained in several face-to-face interviews.
The study takes into account the following agricultural inputs: fertilizers, pesticides, diesel consumption for agricultural operations, machinery, and electricity for irrigation. The results indicated that mineral fertilizers are the most important contributors to the acidification and eutrophication impacts of sweet cherry cultivation. Improvement options are suggested for this hotspot in order to reduce the environmental impacts. The results allow planning and promoting low-impact procedures across fruit companies, as well as informing policymakers and other stakeholders on the subject. In this context, this study is one of the first assessments of the environmental impacts of sweet cherry production. New field data or the evaluation of other life cycle stages could further improve knowledge of the impacts of this fruit. This study may contribute environmental information to other countries with similar sweet cherry production.

Keywords: acidification, eutrophication, life cycle assessment, sweet cherry production

Procedia PDF Downloads 271
32 Effect of Fermented Orange Juice Intake on Urinary 6‑Sulfatoxymelatonin in Healthy Volunteers

Authors: I. Cerrillo, A. Carrillo-Vico, M. A. Ortega, B. Escudero-López, N. Álvarez-Sánchez, F. Martín, M. S. Fernández-Pachón

Abstract:

Melatonin is a bioactive compound involved in multiple biological activities such as glucose tolerance, circadian rhythm regulation, antioxidant defense, and immune system action. In elderly subjects, the intake of foods and drinks rich in melatonin is very important because its endogenous level decreases with age. Alcoholic fermentation is a process carried out on fruits, vegetables, and legumes to obtain new products with an improved bioactive compound profile relative to the original substrates. The alcoholic fermentation process carried out by Saccharomycetaceae var. Pichia kluyveri induces an important synthesis of melatonin in orange juice. A novel beverage derived from fermented orange juice could therefore be a promising source of this bioactive compound. The aim of the present study was to determine whether the acute intake of fermented orange juice increases the levels of urinary 6-sulfatoxymelatonin in healthy humans. Nine healthy volunteers (7 women and 2 men), aged between 20 and 25 years old and with a BMI of 21.1 ± 2.4 kg/m2, were recruited. On the study day, participants ingested 500 mL of fermented orange juice. The first urine collection was made before fermented orange juice consumption (basal). The remaining urine collections were made in the following time intervals after fermented orange juice consumption: 0-2, 2-5, 5-10, 10-15 and 15-24 hours. During the experimental period, only the consumption of water was allowed. At lunch time, a meal was provided (60 g of white bread, two slices of ham, a slice of cheese, 125 g of sweetened natural yoghurt, and water). The subjects repeated the protocol with orange juice, following a 2-week washout period between both types of beverages. The levels of 6-sulfatoxymelatonin (6-SMT) were measured in urine collected at the different time points using the Melatonin-Sulfate Urine ELISA (IBL International GMBH, Hamburg, Germany). Levels of 6-SMT were corrected to those of creatinine for each sample.
A significant (p < 0.05) increase in urinary 6-SMT levels was observed between 2-5 hours after fermented orange juice ingestion with respect to basal values (an increase of 67.8 %). The consumption of orange juice did not induce any significant change in urinary 6-SMT levels. In addition, urinary 6-SMT levels obtained between 2-5 hours after fermented orange juice ingestion (115.6 ng/mg) were significantly different (p < 0.05) from those of orange juice (42.4 ng/mg). The enhancement of urinary 6-SMT after the ingestion of 500 mL of fermented orange juice in healthy humans, compared to orange juice, could be an important advantage of this novel product as an excellent source of melatonin. Fermented orange juice could be a new functional food, and its consumption could exert a potentially positive effect on health, both in the maintenance of health status and in the prevention of chronic diseases.
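The creatinine correction used above is a simple normalization that compensates for urine dilution differences between spot samples; a minimal sketch with purely illustrative numbers (not the measured values of the study):

```python
def smt_per_creatinine(smt_ng_per_ml: float, creatinine_mg_per_ml: float) -> float:
    """Creatinine-corrected 6-SMT: ng of 6-SMT per mg of creatinine.

    Dividing by creatinine concentration removes the effect of how dilute
    each spot urine sample happens to be.
    """
    return smt_ng_per_ml / creatinine_mg_per_ml

def percent_increase(value: float, basal: float) -> float:
    """Percentage increase of a corrected 6-SMT level over the basal level."""
    return 100.0 * (value - basal) / basal

# Two hypothetical samples with the same excreted 6-SMT but different dilution
# give the same corrected value:
print(smt_per_creatinine(100.0, 1.0), smt_per_creatinine(50.0, 0.5))
```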

Keywords: fermented orange juice, functional beverage, healthy human, melatonin

Procedia PDF Downloads 408
31 Electron Bernstein Wave Heating in the Toroidally Magnetized System

Authors: Johan Buermans, Kristel Crombé, Niek Desmet, Laura Dittrich, Andrei Goriaev, Yurii Kovtun, Daniel López-Rodriguez, Sören Möller, Per Petersson, Maja Verstraeten

Abstract:

The International Thermonuclear Experimental Reactor (ITER) will rely on three sources of external heating to produce and sustain a plasma: Neutral Beam Injection (NBI), Ion Cyclotron Resonance Heating (ICRH), and Electron Cyclotron Resonance Heating (ECRH). ECRH heats the electrons in a plasma by resonant absorption of electromagnetic waves; the energy of the electrons is transferred indirectly to the ions by collisions. The electron cyclotron heating system can be directed to deposit heat in particular regions of the plasma (https://www.iter.org/mach/Heating). ECRH at the fundamental resonance in X-mode is limited by a low cut-off density. Electromagnetic waves cannot propagate in the region between this cut-off and the Upper Hybrid Resonance (UHR) and cannot reach the Electron Cyclotron Resonance (ECR) position. Higher-harmonic heating is hence preferred in present-day heating scenarios to overcome this problem. Additional power deposition mechanisms can occur above this threshold to increase the plasma density. These include collisional losses in the evanescent region, resonant power coupling at the UHR, tunneling of the X-wave with resonant coupling at the ECR, and conversion to the Electron Bernstein Wave (EBW) with resonant coupling at the ECR. A more profound knowledge of these deposition mechanisms can help determine the optimal plasma production scenarios. Several ECRH experiments are performed on the TOroidally MAgnetized System (TOMAS) to identify the conditions for Electron Bernstein Wave (EBW) heating. Density and temperature profiles are measured with movable triple Langmuir probes in the horizontal and vertical directions. Measurements of the forward and reflected power allow evaluation of the coupling efficiency. Optical emission spectroscopy and camera images also contribute to plasma characterization.
The influence of the injected power, magnetic field, gas pressure, and wave polarization on the different deposition mechanisms is studied, and the contribution of the Electron Bernstein Wave is evaluated. The TOMATOR 1D hydrogen-helium plasma simulator numerically describes the evolution of currentless magnetized radio-frequency plasmas in a tokamak based on Braginskii’s continuity and heat balance equations. This code was initially benchmarked with experimental data from TCV to determine the transport coefficients. The code is used to model the plasma parameters and the power deposition profiles. The modeling is compared with the data from the experiments.
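The cut-off and resonance densities discussed above follow from textbook cold-plasma relations: the O-mode cut-off sits where the plasma frequency equals the wave frequency, and the UHR where ω² = ω_pe² + ω_ce². A minimal sketch, assuming a 2.45 GHz heating frequency and a 0.05 T field purely for illustration (the abstract does not state the TOMAS source parameters):

```python
import math

# CODATA physical constants (SI)
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
M_E = 9.1093837015e-31   # electron mass, kg
Q_E = 1.602176634e-19    # elementary charge, C

def o_mode_cutoff_density(f_hz: float) -> float:
    """Density (m^-3) at which the plasma frequency equals the wave frequency:
    omega_pe^2 = n e^2 / (eps0 m_e) = omega^2. Above it, O-mode is evanescent."""
    omega = 2.0 * math.pi * f_hz
    return EPS0 * M_E * omega**2 / Q_E**2

def upper_hybrid_density(f_hz: float, b_tesla: float) -> float:
    """Density at which the upper hybrid resonance matches the wave frequency:
    omega^2 = omega_pe^2 + omega_ce^2, with omega_ce = e B / m_e."""
    omega = 2.0 * math.pi * f_hz
    omega_ce = Q_E * b_tesla / M_E
    omega_pe_sq = omega**2 - omega_ce**2
    if omega_pe_sq <= 0.0:
        raise ValueError("wave frequency below electron cyclotron frequency")
    return EPS0 * M_E * omega_pe_sq / Q_E**2

f = 2.45e9   # Hz, assumed heating frequency (illustrative)
b = 0.05     # T, assumed magnetic field (illustrative)
print(f"O-mode cutoff: {o_mode_cutoff_density(f):.2e} m^-3")
print(f"UHR density:   {upper_hybrid_density(f, b):.2e} m^-3")
```

Densities above the cut-off are exactly where mode conversion to the electrostatic Bernstein wave becomes interesting, since the EBW has no density cut-off.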

Keywords: electron Bernstein wave, Langmuir probe, plasma characterization, TOMAS

Procedia PDF Downloads 96
30 Genetic Diversity of Cord Blood of the National Center of Blood Transfusion, Mexico (NCBT)

Authors: J. Manuel Bello-López, Julieta Rojo-Medina

Abstract:

Introduction: The transplant of Umbilical Cord Blood Units (UCBU) is a therapeutic possibility for patients with oncohaematological disorders, especially in children. In Mexico, 48.5% of oncological diseases in children 1-4 years old are leukemias, whereas in patients 5-14 and 15-24 years old, lymphomas and leukemias represent the second and third causes of death in these groups, respectively. It is therefore necessary to have more UCBU registries in order to ensure genetic diversity in the country, because the search for an appropriate UCBU is increasingly difficult for patients of mixed ethnicity. Objective: To estimate the genetic diversity (polymorphisms) of Human Leucocyte Antigen (HLA) Class I (A, B) and Class II (DRB1) in UCBU cryopreserved for transplant at the Cord Blood Bank of the NCBT. Material and Methods: HLA typing of 533 UCBU for transplant was performed from 2003-2012 at the Histocompatibility Laboratory of the Research Department of the NCBT (evaluated by the Immunogenetics Center, Los Angeles, CA). Class I HLA-A and HLA-B and Class II HLA-DRB1 typing was performed using medium-resolution Sequence-Specific Primer (SSP) typing. In cases of an ambiguity detected by SSP, the Sequence-Specific Oligonucleotide (SSO) method was carried out. A strict analysis of population genetic parameters was done in 5 representative UCBU populations. Results: 46.5% of UCBU were collected from Mexico City, followed by the State of Mexico (30.95%), Puebla (8.06%), Morelos (6.37%) and Veracruz (3.37%). The remaining UCBU (4.75%) came from other states. The identified genotypes correspond to Amerindian (HLA-A*02, 31; HLA-B*39, 15, 48), Caucasian (HLA-A*02, 68, 01, 30, 31; HLA-B*35, 15, 40, 44, 07 and HLA-DRB1*04, 08, 07, 15, 03, 14), Oriental (HLA-A*02, 30, 01, 31; HLA-B*35, 39, 15, 40, 44, 07, 48 and HLA-DRB1*04, 07, 15, 03) and African (HLA-A*30 and HLA-DRB1*03) origins.
The genetic distances obtained by Cavalli-Sforza analysis of the five states showed significant genetic differences when comparing gene frequencies. The shortest genetic distance is between Mexico City and the state of Puebla (0.0039) and the largest between Veracruz and Morelos (0.0084). In order to identify significant differences between these states, an ANOVA test was performed. It demonstrates that UCBU differ significantly according to their origin (p < 0.05). This is shown by the divergence between arms in the Neighbor-Joining dendrogram. Conclusions: The NCBT provides UCBU to patients with oncohaematological disorders throughout the country. There is a group of patients for whom no compatible UCBU can be found due to their mixed ethnic origin; for example, the population of northern Mexico is mostly Caucasian. Most of the NCBT donors are of various ethnic origins, predominantly Amerindian and Caucasian, although some ethnic minorities such as Oriental, African, and pure Indigenous ethnicities are not represented. The NCBT is therefore establishing agreements with different states of Mexico to promote the altruistic donation of umbilical cord blood in order to enrich the genetic diversity of its files.
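The pairwise distances quoted above come from a Cavalli-Sforza analysis of allele frequencies. A minimal single-locus sketch of the Cavalli-Sforza and Edwards chord distance, with made-up allele frequencies (not the study's data):

```python
import math

def chord_distance(p: list[float], q: list[float]) -> float:
    """Cavalli-Sforza & Edwards (1967) chord distance for one locus.

    p and q are allele-frequency vectors (each summing to 1) over the same
    alleles in two populations. Identical populations give distance 0.
    """
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    # Angle between the sqrt-frequency vectors on the unit hypersphere:
    cos_theta = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    # max() guards against tiny negative values from floating-point rounding
    return (2.0 / math.pi) * math.sqrt(2.0 * max(0.0, 1.0 - cos_theta))

# Hypothetical HLA-A allele frequencies in two populations:
pop1 = [0.50, 0.30, 0.20]
pop2 = [0.45, 0.35, 0.20]
print(round(chord_distance(pop1, pop2), 4))
```

In practice the distance is averaged over many loci (here, the HLA-A, -B, and -DRB1 frequencies per state), and the resulting distance matrix feeds the Neighbor-Joining dendrogram.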

Keywords: cord blood, genetic diversity, human leucocyte antigen, transplant

Procedia PDF Downloads 382