Search results for: Enrique Javier de la Vega Bustillos
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 198


48 Hydraulic Optimization of an Adjustable Spiral-Shaped Evaporator

Authors: Matthias Feiner, Francisco Javier Fernández García, Michael Arneman, Martin Kipfmüller

Abstract:

To ensure reliability in miniaturized devices or processes with increased heat fluxes, very efficient cooling methods have to be employed in order to cope with small available cooling surfaces. To address this problem, a particular type of evaporator/heat exchanger was developed: it is called a swirl evaporator due to its flow characteristic. The swirl evaporator consists of a concentrically eroded screw geometry, with an inner diameter between one and three millimeters, in which a capillary tube is guided; the assembly is inserted into a pocket hole in components with high heat load. The liquid refrigerant R32 is sprayed through the capillary tube, which is aligned in the center of the bore hole, onto the end face of the blind hole and is sucked off against the injection direction through the screw geometry. Because the refrigerant is drawn off along a helical path (twisted flow), it is accelerated against the hot wall (centrifugal acceleration). This results in an increase in the critical heat flux of up to 40%. In this way, more heat can be dissipated on the same surface/available installation space, which enables a wide range of technical applications. To optimize the design for the needs of various fields of industry, such as internal tool cooling when machining nickel-base alloys like Inconel 718, a correlation-based model of the swirl evaporator was developed. The model is divided into three subgroups covering five regimes in total. The pressure drop and heat transfer are calculated separately, and an approach to determine the location of the phase change in the capillary and the swirl was implemented. A test stand has been developed to verify the simulation.
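
The abstract does not disclose the correlations themselves, so the sketch below is only a hedged illustration of how a regime-split, correlation-based evaporator model evaluates pressure drop and heat transfer separately for one (single-phase liquid) regime, using generic textbook correlations (Blasius friction factor, Dittus-Boelter Nusselt number). The geometry and R32 property values are assumptions, not the authors' model.

```python
import math

def pressure_drop_single_phase(m_dot, D, L, rho, mu):
    """Darcy-Weisbach pressure drop with the Blasius friction factor (turbulent, smooth tube)."""
    A = math.pi * D**2 / 4.0
    G = m_dot / A                      # mass flux, kg/(m^2 s)
    u = G / rho                        # mean velocity, m/s
    Re = G * D / mu
    f = 0.3164 * Re**-0.25             # Blasius, valid roughly for 4e3 < Re < 1e5
    return f * (L / D) * 0.5 * rho * u**2, Re

def htc_dittus_boelter(Re, Pr, k, D):
    """Dittus-Boelter heat transfer coefficient for heating (exponent 0.4 on Pr)."""
    Nu = 0.023 * Re**0.8 * Pr**0.4
    return Nu * k / D

# Illustrative, assumed properties for liquid R32 and an assumed capillary geometry
rho, mu, k, Pr = 980.0, 1.2e-4, 0.12, 2.5
m_dot, D_cap, L_cap = 2.0e-3, 1.5e-3, 0.10   # kg/s, m, m

dp, Re = pressure_drop_single_phase(m_dot, D_cap, L_cap, rho, mu)
h = htc_dittus_boelter(Re, Pr, k, D_cap)
print(f"Re = {Re:.0f}, dp = {dp/1e3:.1f} kPa, h = {h:.0f} W/m2K")
```

In a full model of this kind, each of the five regimes would swap in its own correlation pair, and the predicted phase-change location would decide which regime applies at each position.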

Keywords: helically-shaped, oil-free, R-32, swirl-evaporator, twist-flow

Procedia PDF Downloads 84
47 Different Motor Inhibition Processes in Action Selection Stage: A Study with Spatial Stroop Paradigm

Authors: German Galvez-Garcia, Javier Albayay, Javiera Peña, Marta Lavin, George A. Michael

Abstract:

The aim of this research was to investigate whether the selection of actions requires different inhibition processes during the response selection stage. In Experiment 1, we compared the magnitude of the Spatial Stroop effect, which occurs in the response selection stage, in two motor actions (lifting vs. reaching) when the participants performed both actions in the same block or in different blocks (mixed block vs. pure blocks). Within pure blocks, we obtained faster latencies when lifting actions were performed, but no differences in the magnitude of the Spatial Stroop effect were observed. Within the mixed block, we obtained faster latencies as well as a larger Spatial Stroop effect when reaching actions were performed. We concluded that when no action selection is required (the pure blocks condition), inhibition works as a unitary system, whereas in the mixed block condition, where action selection is required, different inhibitory processes take place within a common processing stage. In Experiment 2, we investigated this common processing stage in depth by limiting participants’ available resources, requiring them to engage in a concurrent auditory task within a mixed block condition. The Spatial Stroop effect interacted with Movement as it did in Experiment 1, but it did not significantly interact with available resources (Auditory task x Spatial Stroop effect x Movement interaction). Thus, we concluded that available resources are distributed equally to both inhibition processes; this reinforces the likelihood of there being a common processing stage in which the different inhibitory processes take place.

Keywords: inhibition process, motor processes, selective inhibition, dual task

Procedia PDF Downloads 359
46 A Wearable Device to Overcome Post-Stroke Learned Non-Use; The Rehabilitation Gaming System for Wearables: Methodology, Design and Usability

Authors: Javier De La Torre Costa, Belen Rubio Ballester, Martina Maier, Paul F. M. J. Verschure

Abstract:

After a stroke, a great number of patients experience persistent motor impairments such as hemiparesis or weakness in one entire side of the body. As a result, the lack of use of the paretic limb might be one of the main contributors to functional loss after clinical discharge. We aim to reverse this cycle by promoting the use of the paretic limb during activities of daily living (ADLs). To do so, we describe the key components of a system composed of a wearable bracelet (i.e., a smartwatch) and a mobile phone, designed to bring a set of neurorehabilitation principles that promote acquisition, retention and generalization of skills into the home of the patient. A fundamental question is whether the loss in motor function derived from learned non-use may emerge as a consequence of decision-making processes for motor optimization. Our system is based on well-established rehabilitation strategies that aim to reverse this behaviour by increasing the reward associated with action execution as well as implicitly reducing the expected cost associated with the use of the paretic limb, following the notion of reinforcement-induced movement therapy (RIMT). Here we validate an accelerometer-based measure of arm use and its capacity to discriminate different activities that require increasing movement of the arm. We also show how the system can act as a personalized assistant by providing specific goals and adjusting them depending on the performance of the patients. The usability and acceptance of the device as a rehabilitation tool are tested using a battery of self-reported and objective measurements obtained from acute/subacute patients and healthy controls. We believe that an extension of these technologies will allow for the deployment of unsupervised rehabilitation paradigms during and beyond the hospitalization time.
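
As a rough, hypothetical sketch of the kind of accelerometer-based arm-use measure validated in the study (not the authors' algorithm), the example below converts wrist acceleration into a proportion of "active" epochs; the sampling rate, epoch length and movement threshold are assumed values.

```python
import numpy as np

def activity_counts(acc_xyz, fs, epoch_s=5.0, threshold=0.05):
    """Crude arm-use measure: fraction of epochs whose mean |acceleration magnitude - 1 g|
    exceeds a movement threshold. acc_xyz is an (N, 3) array in units of g."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    dyn = np.abs(mag - 1.0)                      # crude removal of the static gravity component
    n_per_epoch = int(epoch_s * fs)
    n_epochs = len(dyn) // n_per_epoch
    epochs = dyn[:n_epochs * n_per_epoch].reshape(n_epochs, n_per_epoch)
    return (epochs.mean(axis=1) > threshold).mean()   # proportion of active epochs

def synth(fs, seconds, sigma, rng):
    """Synthetic wrist accelerometer trace (z carries gravity); illustrative only."""
    n = int(seconds * fs)
    return np.column_stack([rng.normal(0, sigma, n),
                            rng.normal(0, sigma, n),
                            1.0 + rng.normal(0, sigma, n)])

fs, rng = 50, np.random.default_rng(0)           # assumed 50 Hz smartwatch sampling rate
quiet, moving = synth(fs, 30, 0.01, rng), synth(fs, 30, 0.2, rng)
print("low-use arm:", activity_counts(quiet, fs))
print("high-use arm:", activity_counts(moving, fs))
```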

Keywords: stroke, wearables, learned non use, hemiparesis, ADLs

Procedia PDF Downloads 179
45 Hybrid Bimodal Magnetic Force Microscopy

Authors: Fernández-Brito David, Lopez-Medina Javier Alonso, Murillo-Bracamontes Eduardo Antonio, Palomino-Ovando Martha Alicia, Gervacio-Arciniega José Juan

Abstract:

Magnetic Force Microscopy (MFM) is an Atomic Force Microscopy (AFM) technique that characterizes, at a nanometric scale, the magnetic properties of ferromagnetic materials. Conventional MFM works by scanning in two different AFM modes. The first one is tapping mode, in which the cantilever has short-range force interactions with the sample in order to obtain the topography. Then, the lift AFM mode starts, raising the cantilever to maintain a fixed distance between the tip and the surface of the sample so that it interacts only with the long-range magnetic forces of the sample. In recent years, there have been attempts to improve the MFM technique. Bimodal MFM was first developed theoretically and later proven experimentally. In bimodal MFM, the internal AFM piezoelectric is used to excite the cantilever in two resonance modes simultaneously; the first mode detects the topography, while the second is more sensitive to the magnetic forces between the tip and the sample. However, it has been shown that the cantilever vibrations induced by the internal AFM piezoelectric ceramic are not optimal, affecting the bimodal MFM characterizations. Later, Secondary Resonance Magnetic Force Microscopy (SR-MFM) was developed. In this technique, a coil located below the sample generates an external alternating magnetic field that excites the cantilever at a second frequency to apply the bimodal MFM mode. Nonetheless, for ferromagnetic materials with a low coercive field, the external field used in the SR-MFM technique can modify the magnetic domains of the sample. In this work, a Hybrid Bimodal MFM (HB-MFM) technique is proposed. In HB-MFM, bimodal MFM is used, but the first resonance of the cantilever is excited by the magnetic field of the ferromagnetic sample itself, which is set into vibration by a piezoelectric element placed under the sample. The advantages of this new technique are demonstrated through preliminary results obtained by HB-MFM on a hard disk sample. Additionally, traditional two-pass MFM and HB-MFM measurements were compared.
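
Purely as an illustration of the bimodal principle described above (not the HB-MFM instrument itself), the sketch below simulates a cantilever deflection containing two resonance components and recovers the amplitude of each by synchronous (lock-in style) demodulation; the frequencies, amplitudes and noise level are arbitrary assumed values.

```python
import numpy as np

def lockin_amplitude(signal, t, f_ref):
    """Recover the amplitude of the component at f_ref by synchronous demodulation."""
    i = np.mean(signal * np.cos(2 * np.pi * f_ref * t))
    q = np.mean(signal * np.sin(2 * np.pi * f_ref * t))
    return 2.0 * np.hypot(i, q)

fs = 5e6                                   # assumed sampling rate, Hz
t = np.arange(0, 0.01, 1 / fs)
f1, f2 = 70e3, 440e3                       # assumed 1st/2nd cantilever eigenmode frequencies
a1, a2 = 20e-9, 1.5e-9                     # topography-driven and magnetically driven amplitudes, m
deflection = (a1 * np.sin(2 * np.pi * f1 * t)
              + a2 * np.sin(2 * np.pi * f2 * t)
              + 0.2e-9 * np.random.default_rng(1).standard_normal(t.size))

print("mode 1 amplitude ~", lockin_amplitude(deflection, t, f1))  # tracks topography
print("mode 2 amplitude ~", lockin_amplitude(deflection, t, f2))  # tracks magnetic forces
```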

Keywords: magnetic force microscopy, atomic force microscopy, magnetism, bimodal MFM

Procedia PDF Downloads 43
44 Fabrication of Superhydrophobic Galvanized Steel by Sintering Zinc Nanopowder

Authors: Francisco Javier Montes Ruiz-Cabello, Guillermo Guerrero-Vacas, Sara Bermudez-Romero, Miguel Cabrerizo Vilchez, Miguel Angel Rodriguez-Valverde

Abstract:

Galvanized steel is one of the most widespread metallic materials used in industry. It consists of an iron-based alloy (steel) coated with a zinc layer of variable thickness. The zinc is intended to protect the inner steel from corrosion and staining. Its production is cheaper than that of stainless steel, which is why it is employed in the construction of large-dimension structures in aeronautics, urban/industrial building or ski resorts. In all these applications, turning the natural hydrophilicity of the metal surface into superhydrophobicity is particularly interesting and would open up a wide variety of additional functionalities. However, producing a superhydrophobic surface on galvanized steel may be a very difficult task. Superhydrophobic surfaces are characterized by a specific surface texture, which is reached either by coating the surface with a material that incorporates such texture or by applying various roughening methods. Since galvanized steel is already a coated material, the incorporation of a second coating may be undesired. On the other hand, the methods that are recurrently used to incorporate the surface texture leading to superhydrophobicity in metals are aggressive and may damage their surface. In this work, we used a novel strategy whose goal is to produce superhydrophobic galvanized steel by a two-step, non-aggressive process. The first step is aimed at creating a hierarchical structure by sintering zinc nanoparticles on the surface at a temperature slightly below the melting point of zinc. The second is a hydrophobization by deposition of a thick fluoropolymer layer. The wettability of the samples is characterized by means of tilting plate and bouncing drop experiments, while the roughness is analyzed by confocal microscopy. The durability of the produced surfaces was also explored.

Keywords: galvanized steel, superhydrophobic surfaces, sintering nanoparticles, zinc nanopowder

Procedia PDF Downloads 119
43 Evaluation of the Heating Capability and in vitro Hemolysis of Nanosized MgxMn1-xFe2O4 (x = 0.3 and 0.4) Ferrites Prepared by Sol-gel Method

Authors: Laura Elena De León Prado, Dora Alicia Cortés Hernández, Javier Sánchez

Abstract:

Among the different cancer treatments that are currently used, hyperthermia has a promising potential due to the multiple benefits obtained with this technique. In general terms, hyperthermia is a method that takes advantage of the sensitivity of cancer cells to heat in order to damage or destroy them. Among the different ways of supplying heat to cancer cells and achieving their destruction or damage, the use of magnetic nanoparticles has attracted attention due to the capability of these particles to generate heat under the influence of an external magnetic field. In addition, these nanoparticles have a high surface area and sizes similar to or even smaller than biological entities, which allows them to approach and interact with a specific region of interest. The most commonly used magnetic nanoparticles for hyperthermia treatment are those based on iron oxides, mainly magnetite and maghemite, due to their biocompatibility, good magnetic properties and chemical stability. However, in order to fulfill more efficiently the requirements demanded by magnetic hyperthermia treatment, ferrites that incorporate different metallic ions, such as Mg, Mn, Co, Ca, Ni, Cu, Li or Gd, into their structure have been investigated. This paper reports the synthesis of nanosized MgxMn1-xFe2O4 (x = 0.3 and 0.4) ferrites by the sol-gel method and their evaluation in terms of heating capability and in vitro hemolysis, to determine the potential use of these nanoparticles as thermoseeds for the treatment of cancer by magnetic hyperthermia. It was possible to obtain ferrites with nanometric sizes, a single crystalline phase with an inverse spinel structure and a behavior close to that of superparamagnetic materials. Additionally, at a concentration of 10 mg of magnetic material per mL of water, it was possible to reach a temperature of approximately 45°C, which is within the range of temperatures used for the treatment of hyperthermia. The results of the in vitro hemolysis assay showed that, at the concentrations tested, these nanoparticles are non-hemolytic, as their percentage of hemolysis is close to zero. Therefore, these materials can be used as thermoseeds for the treatment of cancer by magnetic hyperthermia.
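
The abstract reports the temperature reached but no specific absorption rate; the following sketch simply shows how heating capability is commonly quantified from the initial heating slope (calorimetric SAR). The slope and heat-capacity values are assumptions for illustration; only the 10 mg/mL concentration comes from the abstract.

```python
# Calorimetric specific absorption rate (SAR) from the initial temperature slope:
#   SAR = (c_medium * m_medium / m_np) * (dT/dt)_initial   [W per g of nanoparticles]
c_water = 4.186          # J/(g K), specific heat of the carrier medium (water)
m_water = 1.0            # g of water per mL of suspension (approximation)
m_np = 0.010             # g of magnetic material per mL (10 mg/mL, as in the study)
dT_dt = 0.05             # K/s, ASSUMED initial heating slope under the applied AC field

sar = (c_water * m_water / m_np) * dT_dt
print(f"SAR ~ {sar:.1f} W/g of ferrite (illustrative value only)")
```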

Keywords: ferrites, heating capability, hemolysis, nanoparticles, sol-gel

Procedia PDF Downloads 313
42 Development of Technologies for the Treatment of Nutritional Problems in Primary Care

Authors: Marta Fernández Batalla, José María Santamaría García, Maria Lourdes Jiménez Rodríguez, Roberto Barchino Plata, Adriana Cercas Duque, Enrique Monsalvo San Macario

Abstract:

Background: Primary care nursing is taking on more autonomy in clinical decisions. One of the most frequent therapeutic issues to solve is related to problems in maintaining a sufficient intake of food. Nursing diagnoses related to feeding are addressed by the family and community nurse as the first responsible professional. Objectives and interventions are set according to each patient. To improve goal setting and the treatment of these care problems, a technological tool was developed to help nurses. Objective: To evaluate the computational tool developed to support clinical decisions in feeding problems. Material and methods: A cross-sectional descriptive study was carried out at the Meco Health Center, Madrid, Spain. The study population consisted of four specialist nurses in primary care. These nurses tested the tool on 30 people with ‘need for nutritional therapy’. Subsequently, the usability of the tool and the satisfaction of the professionals were assessed. Results: A simple and convenient computational tool was designed. It has three main input fields: age, size and sex. The tool returns the BMI (Body Mass Index) and the calories consumed by the person. The next step is the caloric calculation depending on activity. It is possible to propose a target BMI or weight to achieve, and from this the amount of calories to be consumed is proposed. After using the tool, it was determined that it calculated the BMI and calories correctly in 100% of the clinical cases. Satisfaction with the nutritional assessment was ‘satisfactory’ or ‘very satisfactory’, linked to the speed of the calculations. As a point for improvement, the professionals suggested adding ‘stress factor’ options linked to weekly physical activity. Conclusion: Based on the results, it is clear that computational decision-support tools are useful in the clinic. Nurses are not only consumers of computational tools but can also develop their own tools. These technological solutions improve the effectiveness of nutrition assessment and intervention. We are currently working on improvements such as the calculation of protein percentages as a function of stress parameters.
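
The abstract does not state which energy-expenditure equation the tool implements, so the sketch below is only one plausible way such a calculator could work: it assumes "size" refers to height, that weight is also captured, and uses the Mifflin-St Jeor equation with standard activity factors. None of these choices is confirmed by the study.

```python
def bmi(weight_kg, height_m):
    """Body Mass Index in kg/m^2."""
    return weight_kg / height_m ** 2

def daily_energy_requirement(weight_kg, height_cm, age, sex, activity_factor=1.2):
    """Mifflin-St Jeor resting energy expenditure scaled by an activity factor
    (assumed formula; the tool's actual equation is not specified in the abstract)."""
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + (5 if sex == "male" else -161)
    return bmr * activity_factor

w, h, age, sex = 82.0, 1.70, 67, "female"          # hypothetical patient data
print(f"BMI: {bmi(w, h):.1f} kg/m2")
print(f"Estimated requirement: {daily_energy_requirement(w, h * 100, age, sex, 1.375):.0f} kcal/day")

# Goal setting: the weight corresponding to a target BMI of 25
print(f"Target weight for BMI 25: {25 * h ** 2:.1f} kg")
```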

Keywords: feeding behavior health, nutrition therapy, primary care nursing, technology assessment

Procedia PDF Downloads 199
41 Hydro Geochemistry and Water Quality in a River Affected by Lead Mining in Southern Spain

Authors: Rosendo Mendoza, María Carmen Hidalgo, María José Campos-Suñol, Julián Martínez, Javier Rey

Abstract:

The impact of mining environmental liabilities and mine drainage on surface water quality has been investigated in the hydrographic basin of the La Carolina mining district (southern Spain). This abandoned mining district is characterized by the existence of important mineralizations of Pb-Ag sulfoantimonides and Cu-Fe sulfides. All surface waters reach the main river of this mining area, the Grande River, which ends its course in the Rumblar reservoir. This waterbody is intended to supply 89,000 inhabitants, as well as irrigation and livestock. Therefore, the analysis and control of the metal(loid) concentrations in these surface waters is an important issue because of the potential pollution derived from metallic mining. A hydrogeochemical campaign consisting of 20 water sampling points was carried out in the hydrographic network of the Grande River, together with two sampling points in the Rumblar reservoir and at the main tailings impoundment draining to the river. Although acid mine drainage (pH below 4) is discharged into the Grande River from some mine adits, the pH values in the river water are always neutral or slightly alkaline. This is mainly the result of the dilution of the small volumes of mine water by the net alkaline waters of the river. However, during the dry season, the surface waters present high mineralization due to a constant discharge from the abandoned flooded mines and a decrease in the contribution of surface runoff. The concentrations of dissolved Cd and Pb in the water reach values of 2 and 81 µg/l, respectively, exceeding the limits established by the Environmental Quality Standards for surface water. In addition, the concentrations of dissolved As, Cu and Pb in the waters of the Rumblar reservoir reached values of 10, 20 and 11 µg/l, respectively. These values are higher than the maximum allowable concentrations for human consumption, a circumstance that is especially alarming.

Keywords: environmental quality, hydrogeochemistry, metal mining, surface water

Procedia PDF Downloads 114
40 Experimental Study of Reflective Roof as a Passive Cooling Method in Homes Under the Paradigm of Appropriate Technology

Authors: Javier Ascanio Villabona, Brayan Eduardo Tarazona Romero, Camilo Leonardo Sandoval Rodriguez, Arly Dario Rincon, Omar Lengerke Perez

Abstract:

Efficient energy consumption in the housing sector in relation to refrigeration is a concern in the construction and rehabilitation of houses in tropical areas. Thermal comfort is aggravated by heat gains through the roof surface. Thus, within the group of passive cooling techniques, the practices and technologies in solar control that provide improvements in comfort conditions include thermal insulation and geometric changes of the roofs. On the other hand, methods based on reflection and radiation are used to decrease heat gain by facilitating the removal of excess heat from inside a building in order to maintain a comfortable environment. Since the potential of these techniques varies between climatic zones, their application in different zones should be examined. This research is based on the experimental study of a prototype of a roof radiator as a passive cooling method in homes, developed through an experimental research methodology in which measurements were taken on a prototype built under the paradigm of appropriate technology, with the aim of establishing the initial behavior of the internal temperature resulting from the climate of the external environment. As a starting point, a selection matrix was drawn up to identify the typologies of passive cooling systems, in order to model the system and subsequently implement it, establishing its constructive characteristics. This was followed by the measurement of the climatic variables (outside the prototype) and microclimatic variables (inside the prototype) to obtain a database for analysis. As a final result, the decrease in temperature that occurs inside the chamber with respect to the outside temperature was evidenced, as well as a linearity in its behavior in relation to the variations of the climatic variables.
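
As a hedged illustration of the reported linear relation between the temperature inside the prototype and the outdoor climate, the sketch below fits a simple linear model to paired readings; the temperature values are invented placeholders, not the experimental data.

```python
import numpy as np

# Paired hourly readings (°C): outdoor ambient vs. inside the prototype chamber.
# These numbers are illustrative placeholders, not the study's measurements.
t_out = np.array([26.1, 28.4, 30.2, 32.5, 34.0, 33.1, 31.0, 28.7])
t_in  = np.array([25.0, 26.6, 27.9, 29.5, 30.6, 30.0, 28.5, 26.9])

slope, intercept = np.polyfit(t_out, t_in, 1)     # linear model T_in = a*T_out + b
r = np.corrcoef(t_out, t_in)[0, 1]
print(f"T_in ~ {slope:.2f} * T_out + {intercept:.2f}  (r = {r:.3f})")
print(f"Mean temperature reduction: {np.mean(t_out - t_in):.1f} °C")
```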

Keywords: appropriate technology, enveloping, energy efficiency, passive cooling

Procedia PDF Downloads 69
39 Topographic Coast Monitoring Using UAV Photogrammetry: A Case Study in Port of Veracruz Expansion Project

Authors: Francisco Liaño-Carrera, Jorge Enrique Baños-Illana, Arturo Gómez-Barrero, José Isaac Ramírez-Macías, Erik Omar Paredes-JuáRez, David Salas-Monreal, Mayra Lorena Riveron-Enzastiga

Abstract:

Topographical changes in coastal areas are usually assessed with airborne LIDAR and conventional photogrammetry. In recent times, Unmanned Aerial Vehicles (UAVs) have been used in several photogrammetric applications, including coastline evolution. However, their use goes further: the associated point clouds can be used to generate beach Digital Elevation Models (DEMs). We present a methodology for monitoring coastal topographic changes along a 50 km coastline in Veracruz, Mexico, using high-resolution images (less than 10 cm ground resolution) and dense point clouds captured with a UAV. This monitoring takes place in the context of the port of Veracruz expansion project, whose construction began in 2015, and intends to characterize coastal evolution and to prevent and mitigate project impacts on coastal environments. The monitoring began with a historical coastline reconstruction from 1979 to 2015 using aerial photography and Landsat imagery. We could define some patterns: the northern part of the study area showed accretion, while the southern part showed erosion. Since the study area is located off the port of Veracruz, a touristic and economically important Mexican urban city, where coastal development structures have been built continuously since 1979, the local beaches of the touristic area are constantly being refilled. Those areas were not described as accretion, since every month sand-filled trucks refill the sand beaches located in front of the hotel area. The marinas and the commercial port of Veracruz, both the old port and the new expansion, were built in the erosion part of the area. Northward from the City of Veracruz the beaches were described as accretion areas, while southward from the city the beaches were described as erosion areas. One of the problems is the expansion of new development in the southern area of the city, using the beach view as an incentive to buy beachfront houses. We assessed coastal changes between seasons using high-resolution images and point clouds during 2016, and preliminary results confirm that UAVs can be used in permanent coast monitoring programs with excellent performance and detail.
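
A minimal sketch of how seasonal topographic change can be quantified once two co-registered UAV-derived DEMs of the same beach are available; the grids below are synthetic placeholders and the cell size is an assumption.

```python
import numpy as np

cell = 0.10                          # assumed ground resolution, m (sub-decimetre, as in the survey)
rng = np.random.default_rng(42)

# Two co-registered DEMs of the same beach strip (synthetic placeholders, metres above datum)
dem_t1 = np.linspace(0, 2, 200)[None, :] + rng.normal(0, 0.02, (150, 200))
dem_t2 = dem_t1 + rng.normal(0.05, 0.04, dem_t1.shape)   # slight accretion between surveys

dz = dem_t2 - dem_t1                 # elevation change per cell
volume_change = dz.sum() * cell**2   # net sediment volume change, m^3
accretion_area = (dz > 0.05).mean()  # fraction of cells above a 5 cm detection threshold

print(f"Mean elevation change: {dz.mean():+.3f} m")
print(f"Net volume change:     {volume_change:+.1f} m^3")
print(f"Accreting area:        {accretion_area:.0%} of cells")
```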

Keywords: digital elevation model, high-resolution images, topographic coast monitoring, unmanned aerial vehicle

Procedia PDF Downloads 244
38 Subdued Electrodermal Response to Empathic Induction Task in Intimate Partner Violence (IPV) Perpetrators

Authors: Javier Comes Fayos, Isabel Rodríguez Moreno, Sara Bressanutti, Marisol Lila, Angel Romero Martínez, Luis Moya Albiol

Abstract:

Empathy is a cognitive-affective capacity whose deterioration is associated with aggressive behaviour. Deficient affective processing is one of the predominant risk factors in men convicted of intimate partner violence (IPV perpetrators), since it makes their capacity to empathize very difficult. The objective of this study was to compare the response of electrodermal activity (EDA), as an indicator of emotionality, to an empathic induction task between IPV perpetrators and men without a history of violence. The sample was composed of 51 men who attended the CONTEXTO program, with penalties for gender violence under two years, and 47 men with no history of violence. Empathic induction was achieved through the visualization of 4 negative emotion-eliciting videos taken from a battery of emotional induction videos validated for the Spanish population. The participants were asked to actively empathize with the video characters (previously pointed out). The psychophysiological recording of the EDA was accomplished with the Vrije Universiteit Ambulatory Monitoring System (VU-AMS). An analysis of repeated measures was carried out with 10 within-subject measurements (time) and ‘group’ (IPV perpetrators and non-violent controls) as the between-subject factor. First, there were no significant differences between groups in baseline EDA levels. Yet, a significant interaction between ‘time’ and ‘group’ was found, with IPV perpetrators exhibiting a lower EDA response than controls after the empathic induction task. These findings provide evidence of a subdued EDA response after an empathic induction task in IPV perpetrators with respect to men without a history of violence. Therefore, the lower psychophysiological activation would be indicative of difficulties in emotional processing and response, functions that are necessary for the empathic function. Consequently, the importance of addressing possible empathic difficulties in psycho-educational programs for IPV perpetrators is reinforced, putting special emphasis on the affective dimension that could hinder the empathic function.

Keywords: electrodermal activity, emotional induction, empathy, intimate partner violence

Procedia PDF Downloads 166
37 The Comparative Electroencephalogram Study: Children with Autistic Spectrum Disorder and Healthy Children Evaluate Classical Music in Different Ways

Authors: Galina Portnova, Kseniya Gladun

Abstract:

Twenty-seven children with ASD (average age 6.13 years, average CARS score 32.41) and 25 healthy children (average age 6.35 years) participated in our EEG experiment. Six types of musical stimulation were presented, including Gluck, Javier-Naida, Kenny G, Chopin and other classical musical compositions. Children with autism showed an orientation reaction to the music and gave behavioral responses to the different types of music; some of them were able to assess the stimulation on rating scales. The participants were instructed to remain calm. Brain electrical activity was recorded using a 19-channel EEG recording device, 'Encephalan' (Taganrog, Russia). EEG epochs lasting 150 s were analyzed using the EEGLab plugin for MatLab (Mathwork Inc.). For the EEG analysis we used the Fast Fourier Transform (FFT) and analyzed the peak alpha frequency (PAF), the correlation dimension D2 and the stability of rhythms. To express the desynchronization dynamics of the different rhythms, we calculated the envelope of the EEG signal using the Hilbert transform, both over the whole frequency range and with a set of narrowband filters. Our data showed that healthy children showed similar EEG spectral changes during musical stimulation and described the feelings induced by the musical fragments in similar ways. The exception was the 'Chopin. Prelude' fragment (no. 6). This musical fragment induced different subjective feelings, behavioral reactions and EEG spectral changes in children with ASD and healthy children. The correlation dimension D2 was significantly lower in children with ASD compared to healthy children during musical stimulation. The Hilbert envelope frequency was reduced in all groups of subjects during musical compositions 1, 3, 5 and 6 compared to the background. During musical fragments 2 and 4 (terrible), a lower Hilbert envelope frequency was observed only in children with ASD and correlated with the severity of the disease. The alpha peak frequency was lower compared to the background during this musical composition in healthy children and, conversely, higher in children with ASD.
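
A minimal sketch of the narrowband Hilbert-envelope computation described in the analysis, applied to a synthetic alpha-band signal; the sampling rate, band edges and filter order are assumptions, not the study's processing parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_envelope(x, fs, lo, hi, order=4):
    """Band-pass the signal and return its instantaneous amplitude (Hilbert envelope)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    xf = filtfilt(b, a, x)
    return np.abs(hilbert(xf))

fs = 250                                             # Hz, assumed EEG sampling rate
t = np.arange(0, 10, 1 / fs)
eeg = (np.sin(2 * np.pi * 10 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.3 * t))
       + 0.5 * np.random.default_rng(0).standard_normal(t.size))   # synthetic alpha + noise

alpha_env = band_envelope(eeg, fs, 8, 12)
print("mean alpha envelope:", alpha_env.mean())
# The frequency content of the envelope itself can then be compared between music fragments.
```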

Keywords: electroencephalogram (EEG), emotional perception, ASD, musical perception, childhood Autism rating scale (CARS)

Procedia PDF Downloads 257
36 Protein-Enrichment of Oilseed Meals by Triboelectrostatic Separation

Authors: Javier Perez-Vaquero, Katryn Junker, Volker Lammers, Petra Foerst

Abstract:

It is increasingly important to accelerate the transition to sustainable food systems by including environmentally friendly technologies. Our work focuses on the protein enrichment and fractionation of agricultural side streams by dry triboelectrostatic separation technology. Materials are fed in particulate form into a system where they are dispersed in a highly turbulent gas stream, whereby the high collision rate of particles against surfaces and other particles greatly enhances the electrostatic charge build-up on the particle surface. In a subsequent step, the charged particles are carried to a delimited zone of the system where a highly uniform, intense electric field is applied. Because the charge polarity acquired by a particle is influenced by its chemical composition, morphology and structure, the protein-rich and fiber-rich particles of the starting material acquire opposite charge polarities and thus follow different paths as they move through the region where the electric field is present. The output is two material fractions that differ in their protein content: one is a fiber-rich, low-protein fraction, while the other has a high-protein, low-fiber composition. Prior to testing, the materials underwent a milling process, and some samples were stored under controlled humidity conditions. In this way, the influence of both particle size and moisture content was established. We used two oilseed meals: lupine and rapeseed. In addition to the lab-scale separator used to perform the experiments, the triboelectric separation process could be successfully scaled up to a mid-scale belt separator, increasing the mass feed from g/s to kg/h. Triboelectrostatic separation technology opens up huge potential for the exploitation of so far underutilized alternative protein sources. Agricultural side streams from cereal and oil production, which are generated in high volumes by these industries, can be further valorized by this process.

Keywords: bench-scale processing, dry separation, protein-enrichment, triboelectrostatic separation

Procedia PDF Downloads 158
35 Compression-Extrusion Test to Assess Texture of Thickened Liquids for Dysphagia

Authors: Jesus Salmeron, Carmen De Vega, Maria Soledad Vicente, Mireia Olabarria, Olaia Martinez

Abstract:

Dysphagia, or difficulty in swallowing, mostly affects elderly people: 56-78% of the institutionalized and 44% of the hospitalized. Thickening liquid food is a necessary measure in this situation because it reduces the risk of penetration-aspiration. Until now, and as proposed by the American Dietetic Association in 2002, possible consistencies have been categorized into three groups according to their viscosity: nectar (50-350 mPa·s), honey (350-1750 mPa·s) and pudding (>1750 mPa·s). The adequate viscosity level should be identified for every patient, according to her/his impairment. Nevertheless, a recent systematic review on dysphagia diets indicated that there is no evidence to suggest any transition of clinical relevance between the three levels proposed. It was also stated that other physical properties of the bolus (slipperiness, density or cohesiveness, among others) could influence swallowing in affected patients and could contribute to the amount of remaining residue. Texture parameters need to be evaluated as a possible alternative to viscosity. The aim of this study was to evaluate the instrumental extrusion-compression test as a possible tool to characterize changes over time in water thickened with various products at the three theoretical consistencies. Six commercial thickeners were used: NM® (NM), Multi-thick® (M), Nutilis Powder® (Nut), Resource® (R), Thick&Easy® (TE) and Vegenat® (V), all of them with a modified starch base. Only one of them, Nut, also contained 6.4% gum (guar, tara and xanthan). They were prepared as indicated in the instructions of each product, dispensing the corresponding amount for nectar, honey and pudding consistencies in 300 mL of tap water at 18ºC-20ºC. The mixture was stirred for about 30 s. Once it was homogeneously spread, it was dispensed into 30 mL plastic cups, always filled to the same height. Each of these cups was used as a measuring point. Viscosity was measured using a rotational viscometer (ST-2001, Selecta, Barcelona). The extrusion-compression test was performed using a TA.XT2i texture analyzer (Stable Micro Systems, UK) with a 25 mm diameter cylindrical probe (SMSP/25). The penetration distance was set at 10 mm and the speed at 3 mm/s. Measurements were made at 1, 5, 10, 20, 30, 40, 50 and 60 minutes from the moment the samples were mixed. From the force (g)-time (s) curves obtained in the instrumental assays, the maximum force peak (F) was chosen as the reference parameter. Viscosity (mPa·s) and F (g) proved to be highly correlated and developed similarly over time, following time-dependent quadratic models. It was possible to predict viscosity using F as an independent variable, as they were linearly correlated. In conclusion, the compression-extrusion test could be an alternative and useful tool to assess the physical characteristics of thickened liquids.
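
A minimal sketch of the two fits reported: quadratic time-dependent models for the maximum force peak and viscosity, and a linear relation predicting viscosity from F. All data points are placeholders, not the published measurements.

```python
import numpy as np

t = np.array([1, 5, 10, 20, 30, 40, 50, 60], dtype=float)       # min after mixing
F = np.array([55, 78, 96, 120, 135, 144, 150, 153], dtype=float)        # g, max force peak (placeholder)
visc = np.array([400, 620, 800, 1050, 1200, 1290, 1350, 1380], dtype=float)  # mPa·s (placeholder)

# Quadratic time-dependence of each variable
coef_F = np.polyfit(t, F, 2)
coef_v = np.polyfit(t, visc, 2)

# Linear prediction of viscosity from the instrumental parameter F
slope, intercept = np.polyfit(F, visc, 1)
r = np.corrcoef(F, visc)[0, 1]
print(f"viscosity ~ {slope:.1f} * F + {intercept:.1f}  (r = {r:.3f})")
```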

Keywords: compression-extrusion test, dysphagia, texture analyzer, thickener

Procedia PDF Downloads 341
34 Climate Change, Women's Labour Markets and Domestic Work in Mexico

Authors: Luis Enrique Escalante Ochoa

Abstract:

This paper attempts to assess the impacts of climate change (CC) on inequalities in the labour market. CC will have the most serious effects on some vulnerable economic sectors, such as agriculture, livestock or tourism, but also on the most vulnerable population groups. The objective of this research is to evaluate the impact of CC on the labour market and particularly on Mexican women. Influential documents such as the synthesis reports produced by the Intergovernmental Panel on Climate Change (IPCC) in 2007 and 2014 revived a global effort to counteract the effects of CC, calling for an analysis of the impacts on vulnerable socio-economic groups and on economic activities, and for the development of decision-making tools that enable policy and other decisions to reflect the complexity of the world in relation to climate change, taking into account socio-economic attributes. We follow up on this suggestion and determine the impact of CC on vulnerable populations in the Mexican labour market, taking into account two attributes (gender and workers' level of qualification). Most studies have focused on the effects of CC on the agricultural sector, as it is considered an economic sector highly vulnerable to the effects of climate variability. This research seeks to contribute to the existing literature by taking into account, in addition to the agricultural sector, other sectors such as tourism, water availability and energy that are of vital importance to the Mexican economy. Likewise, the effects of climate change are extended to the labour market and specifically to women, who in some cases have been left out. Studies are often sceptical about the impact of CC on the female labour market because of the perverse effects on women's domestic work, which are too often omitted from analyses. This work contributes to the literature by integrating domestic work, which in the case of Mexico is performed much more by women than by men (80.9% vs. 19.1%), according to the 2009 time use survey. This study is relevant since it allows us to analyse the impacts of climate change not only on the labour market of the formal economy but also in the non-market sphere. Likewise, we consider that including the gender dimension is valid for the Mexican economy, as it is a country with a high degree of gender inequality in the labour market. The OECD economic study for Mexico (2017) highlights the low labour participation of Mexican women. Although participation has increased substantially in recent years (from 36% in 1990 to 47% in 2017), it remains low compared to the OECD average, where women participate in around 70% of the labour market. According to Mexico's 2009 time use survey, domestic work represents about 13% of the total time available. Understanding the interdependence between the market and non-market spheres, and the gender division of labour within them, is the necessary premise for any economic analysis aimed at promoting gender equality and inclusive growth.

Keywords: climate change, labour market, domestic work, rural sector

Procedia PDF Downloads 105
33 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2

Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk

Abstract:

Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands’ characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models might be case specific. In this study, biomass production and biodiversity indices (species richness and Fishers’ α) are modeled in 150 grassland plots for three sites across Germany. These sites represent a North-South gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. Predictors used are derived from Sentinel-1 & 2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r2 0.5), predictions of biodiversity indices were poor (r2 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and ensuring spatial independence to have realistic and transferable models where plot level information can be upscaled to landscape scale.
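
As a hedged illustration of the modelling setup (not the authors' network), the sketch below regresses biomass on Sentinel-2-like surface reflectance with scikit-learn's MLPRegressor and evaluates transferability with a leave-one-site-out split; all data are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_plots, n_bands = 150, 10                       # 150 plots, 10 spectral bands (synthetic stand-in)
X = rng.uniform(0, 0.6, (n_plots, n_bands))      # surface reflectance
biomass = 800 * X[:, 7] - 300 * X[:, 3] + rng.normal(0, 40, n_plots)   # arbitrary relation
site = np.repeat([0, 1, 2], 50)                  # three sites along the North-South gradient

# Transferability test: train on two sites, validate on the held-out third
train, test = site != 2, site == 2
scaler = StandardScaler().fit(X[train])
dnn = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
dnn.fit(scaler.transform(X[train]), biomass[train])
pred = dnn.predict(scaler.transform(X[test]))
print("cross-site r2:", round(r2_score(biomass[test], pred), 2))
```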

Keywords: ecosystem services, grassland management, machine learning, remote sensing

Procedia PDF Downloads 184
32 Mapping Intertidal Changes Using Polarimetry and Interferometry Techniques

Authors: Khalid Omari, Rene Chenier, Enrique Blondel, Ryan Ahola

Abstract:

Northern Canadian coasts have vulnerable and very dynamic intertidal zones, with very high tides occurring in several areas. The impact of climate change presents challenges not only for maintaining this biodiversity but also for navigation safety adaptation, due to the high sediment mobility in these coastal areas. Thus, frequent mapping of shorelines and intertidal changes is of high importance. To help quantify the changes in these fragile ecosystems, remote sensing provides practical monitoring tools at local and regional scales. Traditional methods based on high-resolution optical sensors are often used to map intertidal areas by exploiting the contrast in the spectral response of intertidal classes in the visible, near- and mid-infrared bands. Tidal areas are highly reflective in the visible bands, mainly because of the presence of fine sand deposits. However, obtaining cloud-free optical data that coincide with low tides in intertidal zones of northern regions is very difficult. Alternatively, the all-weather capability and daylight independence of microwave remote sensing using synthetic aperture radar (SAR) can offer valuable geophysical parameters with a high revisit frequency over intertidal zones. Multi-polarization SAR parameters have been used successfully to map intertidal zones using incoherent target decomposition. Moreover, the crustal displacements caused by ocean tide loading may reach several centimeters, which can be detected and quantified with differential interferometric synthetic aperture radar (DInSAR). Soil moisture change has a significant impact on both the coherence and the backscatter. For instance, an increase in backscatter intensity associated with low coherence is an indicator of abrupt surface changes. In this research, we present preliminary results obtained from our investigation of the potential of fully polarimetric Radarsat-2 data for mapping an intertidal zone located at Tasiujaq, on the south-west shore of Ungava Bay, Quebec. Using the repeat-pass cycle of Radarsat-2, multiple seasonal fine quad (FQ14W) images were acquired over the site between 2016 and 2018. Only 8 images corresponding to low-tide conditions were selected and used to build an interferometric stack of data. The displacements observed along the line of sight, generated using HH and VV polarization, are compared with the changes noticed using the Freeman-Durden polarimetric decomposition and the Touzi degree of polarization extrema. Results show the consistency of both approaches in their ability to monitor changes in intertidal zones.
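
A minimal sketch of the windowed coherence estimate that underlies the coherence/backscatter interpretation discussed above; the two complex SAR patches are synthetic and the window size is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=5):
    """Windowed interferometric coherence between two co-registered complex SAR images."""
    def smooth(x):
        return uniform_filter(x.real, win) + 1j * uniform_filter(x.imag, win)
    num = smooth(s1 * np.conj(s2))
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win) * uniform_filter(np.abs(s2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

rng = np.random.default_rng(3)
shape = (200, 200)
base  = rng.normal(size=shape) + 1j * rng.normal(size=shape)   # scatterers common to both passes
noise = rng.normal(size=shape) + 1j * rng.normal(size=shape)
s1 = base
s2 = 0.9 * base + 0.45 * noise          # partially decorrelated repeat-pass acquisition

print(f"mean coherence: {coherence(s1, s2).mean():.2f}")   # low values flag abrupt surface change
```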

Keywords: SAR, degree of polarization, DInSAR, Freeman-Durden, polarimetry, Radarsat-2

Procedia PDF Downloads 117
31 Development of Polylactic Acid Insert with a Cinnamaldehyde-Betacyclodextrin Complex for Cape Gooseberry (Physalis Peruviana L.) Packed

Authors: Gómez S. Jennifer, Méndez V. Camila, Moncayo M. Diana, Vega M. Lizeth

Abstract:

The cape gooseberry is a climacteric fruit, and Colombia is one of the principal exporters in the world. Environmental conditions of temperature and relative humidity decrease the titratable acidity and pH. These conditions and fruit maturation result in fungal proliferation of the Botrytis cinerea disease. Plastic packaging for fresh cape gooseberries is used for protection against mechanical damage but creates an atmosphere suitable for fungal growth. Beta-cyclodextrins are currently implemented as coatings for the encapsulation of hydrophobic compounds, for example bioactive compounds from essential oils such as cinnamaldehyde, which has a high antimicrobial capacity but is a volatile substance. In this article, the casting method was used to obtain a polylactic acid (PLA) polymer film containing the beta-cyclodextrin-cinnamaldehyde inclusion complex, generating an insert that allowed the controlled release of the antifungal substance in packed cape gooseberries in order to decrease contamination by Botrytis cinerea in a latent state during storage. For the encapsulation technique, three ratios of the cinnamaldehyde:beta-cyclodextrin inclusion complex were proposed: (25:75), (40:60) and (50:50). Spectrophotometry, colorimetry in the L*a*b* coordinate space and scanning electron microscopy (SEM) were used for the characterization of the complex. Subsequently, two ratios of Tween and water, (40:60) and (50:50), were used to obtain the polylactic acid (PLA) film. To determine mechanical and physical parameters, colorimetry in the L*a*b* coordinate space, atomic force microscopy and stereoscopy were performed to assess the transparency and flexibility of the film; in both cases, Statgraphics software was used to determine the best ratio in each of the proposed phases. For encapsulation the best ratio was (50:50), with an encapsulation efficiency of 65.92%, while for casting the ratio (40:60) gave greater transparency and flexibility, which permitted its incorporation into the polymeric packaging. A release assay was also carried out under ambient temperature conditions to evaluate the concentration of cinnamaldehyde inside the packaging by gas chromatography over three weeks. It was found that the insert provided a controlled release. Nevertheless, a higher cinnamaldehyde concentration is needed to reach the minimum inhibitory concentration for the fungus Botrytis cinerea (0.2 g/L). The homogeneity of the cinnamaldehyde gas phase inside the packaging can be improved by considering other insert configurations. This development aims to contribute to emerging food preservation technologies based on the controlled release of antifungals, reducing the deterioration of the physico-chemical and sensory properties of the fruit caused by contamination with microorganisms in the postharvest stage.

Keywords: antifungal, casting, encapsulation, postharvest

Procedia PDF Downloads 46
30 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat

Authors: M. Venegas, M. De Vega, N. García-Hernando

Abstract:

Absorption cooling chillers have received growing attention over the past few decades, as they allow the use of low-grade heat to produce a cooling effect. The combination of this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy. Cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear drawbacks for the widespread use of absorption technology, limiting its benefits in the contribution to the reduction of CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as those provided by solar panels. In the present work a promising new technology is under study, consisting of the use of membrane contactors in adiabatic microchannel mass exchangers. The configuration proposed here consists of one or several modules (depending on the cooling capacity of the chiller) that contain two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels, and a plastic or synthetic wall separates the solution channels from one another. The solution entering the absorber is previously subcooled using ambient air; in this way, the need for a cooling tower is avoided. A model of the proposed configuration was developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. The concentrations and temperatures along the channels cannot be explicitly determined from the set of equations obtained. For this reason, the equations were implemented in a computer code using the Engineering Equation Solver software, EES™. With the aim of minimizing the absorber volume to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation is shown along the solution channels, allowing its optimization for selected operating conditions. For the case considered, the solution channel length is recommended to be lower than 3 cm. The maximum values of R obtained in this work are higher than those found in optimized horizontal falling-film absorbers using the same solution. The results also show the variation of R and of the chiller efficiency (COP) for different ambient temperatures and for desorption temperatures typically obtained using flat-plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology to reduce the size of absorption chillers, allowing the use of low-temperature solar heat and avoiding the need for cooling towers.
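
A rough, illustrative calculation of the cooling-power-to-volume ratio R used as the optimization criterion; the geometry, latent heat and, in particular, the exponentially decaying absorption-flux profile are assumptions, not outputs of the authors' EES model.

```python
import numpy as np

# Assumed geometry and operating values (illustrative only)
width, depth = 0.05, 5e-4            # m: solution-channel width and depth
h_fg = 2.4e6                         # J/kg, latent heat of the absorbed water vapour
j0, k = 2.0e-3, 30.0                 # inlet absorption flux, kg/(m2 s), and assumed 1/m decay rate

def ratio_R(L):
    """Cooling power per absorber volume for a channel of length L, assuming the
    absorption flux decays exponentially as the solution becomes diluted."""
    Q = 2 * width * h_fg * (j0 / k) * (1 - np.exp(-k * L))   # both membrane faces active
    V = width * depth * L
    return Q / V

for L in (0.01, 0.03, 0.06):
    print(f"L = {L*100:.0f} cm  ->  R = {ratio_R(L):.2e} W/m3")
```

Under this assumed profile, R falls as the channel gets longer, which is consistent with the recommendation of channel lengths below about 3 cm.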

Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy

Procedia PDF Downloads 255
29 Laparoscopic Resection Shows Comparable Outcomes to Open Thoracotomy for Thoracoabdominal Neuroblastomas: A Meta-Analysis and Systematic Review

Authors: Peter J. Fusco, Dave M. Mathew, Chris Mathew, Kenneth H. Levy, Kathryn S. Varghese, Stephanie Salazar-Restrepo, Serena M. Mathew, Sofia Khaja, Eamon Vega, Mia Polizzi, Alyssa Mullane, Adham Ahmed

Abstract:

Background: Laparoscopic (LS) removal of neuroblastomas in children has been reported to offer favorable outcomes compared to the conventional open thoracotomy (OT) procedure. Critical perioperative measures such as blood loss, operative time, length of stay, and time to postoperative chemotherapy have all supported laparoscopic use rather than its more invasive counterpart. Herein, a pairwise meta-analysis was performed comparing perioperative outcomes between LS and OT in thoracoabdominal neuroblastoma cases. Methods: A comprehensive literature search was performed on the PubMed, Ovid EMBASE, and Scopus databases to identify studies comparing the outcomes of pediatric patients with thoracoabdominal neuroblastomas undergoing resection via OT or LS. After deduplication, 4,227 studies were identified and subjected to initial title screening with exclusion and inclusion criteria to ensure relevance. When studies contained overlapping cohorts, only the larger series were included. Primary outcomes were estimated blood loss (EBL), hospital length of stay (LOS), and mortality, while secondary outcomes were tumor recurrence, post-operative complications, and operation length. The “meta” and “metafor” packages were used in R, version 4.0.2, to pool risk ratios (RR) or standardized mean differences (SMD), together with their 95% confidence intervals, in the random effects model via the Mantel-Haenszel method. Heterogeneity between studies was assessed using the I² test, while publication bias was assessed via funnel plot. Results: The pooled analysis included 209 patients from 5 studies (141 OT, 68 LS). Of the included studies, 2 originated from the United States, 1 from Toronto, 1 from China, and 1 was from a Japanese center. Mean age between study cohorts ranged from 2.4 to 5.3 years old, with female patients comprising between 30.8% and 50% of the study populations. No statistically significant difference was found between the two groups for LOS (SMD -1.02; p=0.083), mortality (RR 0.30; p=0.251), recurrence (RR 0.31; p=0.162), post-operative complications (RR 0.73; p=0.732), or operation length (SMD -0.07; p=0.648). Of note, LS appeared to be protective in the analysis for EBL, although it did not reach statistical significance (SMD -0.4174; p=0.051). Conclusion: Despite promising literature assessing LS removal of pediatric neuroblastomas, the results showed it was non-superior to OT for any of the explored perioperative outcomes. Given the limited comparative data on the subject, randomized trials are necessary to strengthen the conclusions reached.
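
The pooling itself was done in R with the meta/metafor packages; as a language-neutral illustration of the random-effects machinery involved, the sketch below pools log risk ratios with a DerSimonian-Laird estimator (inverse-variance weights rather than Mantel-Haenszel) on placeholder event counts, not the study data.

```python
import numpy as np
from scipy import stats

def pooled_rr_dl(e1, n1, e2, n2):
    """Random-effects pooled risk ratio via DerSimonian-Laird (inverse-variance weights)."""
    e1, n1, e2, n2 = (np.asarray(x, float) for x in (e1, n1, e2, n2))
    y = np.log((e1 / n1) / (e2 / n2))                     # per-study log risk ratio
    v = 1 / e1 - 1 / n1 + 1 / e2 - 1 / n2                 # approximate variance of log RR
    w = 1 / v
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)  # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (q - (len(y) - 1)) / q) * 100 if q > 0 else 0.0
    ci = np.exp([mu - 1.96 * se, mu + 1.96 * se])
    p = 2 * (1 - stats.norm.cdf(abs(mu / se)))
    return np.exp(mu), ci, i2, p

# Placeholder 2x2 data for five hypothetical studies (events / arm size per group)
rr, ci, i2, p = pooled_rr_dl([2, 1, 3, 2, 1], [10, 12, 20, 15, 11],
                             [5, 3, 6, 4, 2], [25, 30, 40, 28, 18])
print(f"pooled RR = {rr:.2f}  95% CI [{ci[0]:.2f}, {ci[1]:.2f}]  I2 = {i2:.0f}%  p = {p:.3f}")
```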

Keywords: laparoscopy, neuroblastoma, thoracoabdominal, thoracotomy

Procedia PDF Downloads 101
28 Characteristics of Acute Bacterial Prostatitis in Elderly Patients Attended in the Emergency Department

Authors: Carles Ferré, Ferran Llopis, Javier Jacob, Jordi Giol, Xavier Palom, Ignasi Bardés

Abstract:

Objective: To analyze the characteristics of acute bacterial prostatitis (ABP) in elderly patients attended in the emergency department (ED). Methods: Observational cohort study with prospective follow-up including patients with ABP presenting to the ED from January to December 2012. Data were collected for demographic variables, comorbidities, clinical and microbiological findings, treatment, outcome, and reconsultation at 30 days of follow-up. Findings were compared between patients ≥ 75 years (study group) and < 75 years (control group). Results: During the study period 241 episodes of ABP were included for analysis. Mean age was 62.9 ± 16 years, and 64 patients (26.5%) were ≥ 75 years old. A history of prostate adenoma was reported in 54 cases (22.4%), diabetes mellitus in 47 patients (19.5%) and prior manipulation of the lower urinary tract in 40 (17%). Mean symptom duration was 3.38 ± 4.04 days; voiding symptoms were present in 176 cases (73%) and fever in 154 (64%). Of 216 urine cultures, 128 were positive (59%), as were 24 (17.6%) of 136 blood cultures. Escherichia coli was the main pathogen, found in 58.6% of urine cultures and 64% of blood cultures (with strains resistant to fluoroquinolones in 27.7%, cotrimoxazole in 22.9% and amoxicillin/clavulanate in 27.7% of cases). Seventy patients (29%) were admitted to the hospital, and 3 died. At 30-day follow-up, 29 patients (12%) returned to the ED. In the bivariate analysis, previous manipulation of the urinary tract, history of cancer, previous antibiotic treatment, E. coli strains resistant to amoxicillin-clavulanate and ciprofloxacin, extended-spectrum beta-lactamase (ESBL) producers, renal impairment, and admission to the hospital were significantly more frequent (p < 0.05) among patients ≥ 75 years compared to those younger than 75 years. Conclusions: Ciprofloxacin and amoxicillin-clavulanate appear not to be good options for the empirical treatment of ABP in patients ≥ 75 years, given the drug-resistance pattern in our series, and the proportion of ESBL-producing strains of E. coli should be taken into account. While awaiting bacterial identification and antibiogram results from urine and/or blood cultures, treatment on an inpatient basis should be considered in older patients with ABP.

Keywords: acute bacterial prostatitis, antibiotic resistance, elderly patients, emergency

Procedia PDF Downloads 341
27 DEMs: A Multivariate Comparison Approach

Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo

Abstract:

The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes and orientations. This is a multivariate approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is therefore considered a more holistic approach. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g. the Normal distribution model). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02, with a resolution of 2x2 meters, and DEM05, with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is taken as the reference and DEM05 as the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov statistical test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g. 5%). Once the process has been calibrated, it can be applied to compare the similarity of different DEM data sets (e.g. DEM05 versus DEM02). In summary, an innovative alternative for the comparison of DEM data sets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g. curvature) and to more than three variables.
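
A minimal sketch of the two-sample Kolmogorov-Smirnov comparison applied to DEM-derived variables, with a summed standardized variable as a simple stand-in for the convolution-based joint test described above; the samples are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 5000

# Synthetic stand-ins for co-located samples from the reference DEM (2 m) and the product DEM (5 m)
elev_ref,  elev_prod  = rng.normal(850, 120, n), rng.normal(852, 125, n)
slope_ref, slope_prod = rng.gamma(2.0, 4.0, n),  rng.gamma(2.1, 4.0, n)

# Variable-by-variable two-sample KS tests
for name, a, b in [("elevation", elev_ref, elev_prod), ("slope", slope_ref, slope_prod)]:
    res = stats.ks_2samp(a, b)
    print(f"{name:9s}  D = {res.statistic:.3f}  p = {res.pvalue:.3g}")

# Joint comparison: sum the variables after standardizing with the reference statistics,
# as a simple stand-in for the convolution of the distribution functions
mu_e, sd_e = elev_ref.mean(), elev_ref.std()
mu_s, sd_s = slope_ref.mean(), slope_ref.std()
joint_ref  = (elev_ref  - mu_e) / sd_e + (slope_ref  - mu_s) / sd_s
joint_prod = (elev_prod - mu_e) / sd_e + (slope_prod - mu_s) / sd_s
res = stats.ks_2samp(joint_ref, joint_prod)
print(f"joint      D = {res.statistic:.3f}  p = {res.pvalue:.3g}")
```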

Keywords: data quality, DEM, kolmogorov-smirnov test, multivariate DEM comparison

Procedia PDF Downloads 88
26 Improving Patient Outcomes for Aspiration Pneumonia

Authors: Mary Farrell, Maria Soubra, Sandra Vega, Dorothy Kakraba, Joanne Fontanilla, Moira Kendra, Danielle Tonzola, Stephanie Chiu

Abstract:

Pneumonia is the most common infectious cause of hospitalization in the United States, with more than one million admissions annually and costs of $10 billion every year, making it the 8th leading cause of death. Aspiration pneumonia is an aggressive type of pneumonia that results from inhalation of oropharyngeal secretions and/or gastric contents, and it is preventable. The authors hypothesized that an evidence-based aspiration pneumonia clinical care pathway could reduce 30-day hospital readmissions and mortality rates while improving the overall care of patients. We conducted a retrospective chart review of 979 patients discharged with aspiration pneumonia from January 2021 to December 2022 at Overlook Medical Center (OMC). The authors identified patients who were coded with aspiration pneumonia and/or stable sepsis. Secondarily, we identified 30-day readmission rates for aspiration pneumonia among patients admitted from a skilled nursing facility (SNF). The Aspiration Pneumonia Clinical Care Pathway starts in the emergency department (ED) with the initiation of antimicrobials within 4 hours of admission and early recognition of aspiration. Once aspiration is identified, a swallow test is initiated by the bedside nurse, and if the patient demonstrates dysphagia, they are kept strictly nothing by mouth (NPO), followed by a speech and language pathologist (SLP) referral for an appropriate modified diet recommendation. Aspiration prevention techniques included the avoidance of straws, 45-degree positioning, no talking during meals, taking small bites, placement of an aspiration wrist band, and consuming meals out of bed in a chair. Nursing education was conducted with a newly created online learning module about aspiration pneumonia. The 979 patients identified, with an average age of 73.5 years, were diagnosed with aspiration pneumonia on the index hospitalization and were reviewed for 30-day readmission for aspiration pneumonia or stable sepsis and for mortality from January 2021 to December 2022. The 30-day readmission rate was significantly lower in the cohort that received the clinical care pathway (35.0% vs. 27.5%, p = 0.011). Mortality was lower in the post-intervention cohort (23.7% vs. 22.4%, p = 0.61), and mortality among non-white (self-reported) patients was lower in the post-intervention cohort (34.4% vs. 21.0%, p = 0.05). Patients who self-reported as current smokers/vapers had increased mortality rates in the pre and post cohorts (5.9% vs. 22%). There was a decrease in mortality for men but an increase in mortality for women between the pre and post cohorts (19% vs. 25%). The authors attributed this increase in mortality in the post-intervention cohort to more active smokers, more former smokers, and more patients being admitted from a SNF. This research showed that implementation of an Aspiration Pneumonia Clinical Care Pathway produced a statistically significant decrease in 30-day readmission rates and in mortality rates among non-white patients.
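
A minimal sketch of the kind of pre/post cohort comparison reported above (readmission proportions between cohorts), assuming the raw counts are available; the function, the two-proportion z-test, and the numbers shown are illustrative assumptions, not the study's analysis code.

# Hypothetical sketch: compare 30-day readmission proportions between the
# pre-intervention and post-intervention cohorts with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

def compare_readmission(readmit_pre, n_pre, readmit_post, n_post):
    """Return the p-value for the difference in readmission proportions."""
    stat, p_value = proportions_ztest(count=[readmit_pre, readmit_post],
                                      nobs=[n_pre, n_post])
    return p_value

# Illustrative counts only (not the study's raw numbers):
print(compare_readmission(175, 500, 132, 479))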

Keywords: aspiration pneumonia, mortality, quality improvement, 30-day pneumonia readmissions

Procedia PDF Downloads 26
25 Parametric Study for Obtaining the Structural Response of Segmental Tunnels in Soft Soil by Using Non-Linear Numerical Models

Authors: Arturo Galván, Jatziri Y. Moreno-Martínez, Israel Enrique Herrera Díaz, José Ramón Gasca Tirado

Abstract:

In recent years, one of the most widely used methods for the construction of tunnels in soft soil is shield-driven tunneling. The advantage of this construction technique is that it allows the tunnel to be excavated while a primary lining, consisting of precast segments, is placed at the same time. There are joints between segments, also called longitudinal joints, and joints between rings (circumferential joints). For this reason, this type of construction cannot be considered a continuous structure. These joints influence the rigidity of the segmental lining and therefore its structural response. A parametric study was performed to take into account the effect of different parameters on the structural response of typical segmental tunnels built in soft soil, using non-linear numerical models based on the Finite Element Method implemented in the software package ANSYS v. 11.0. In the first part of this study, two types of numerical models were developed. In the first one, the segments were modeled using beam elements based on Timoshenko beam theory, while the segment joints were modeled using inelastic rotational springs with the constitutive moment-rotation relation proposed by Gladwell; in this way, the mechanical behavior of the longitudinal joints was simulated. The mechanical behavior of the circumferential joints was simulated with elastic springs, and the support provided by the soil was modeled by means of linear elastic springs. In the second type of model, the segments were modeled with three-dimensional solid elements and the joints with contact elements. In these models, the joint zones are modeled as discontinuities (increasing the computational effort), so a discrete model is obtained. With these contact elements, the mechanical behavior of the joints is simulated under the assumption that when the joint is closed there is transmission of compressive and shear stresses but not of tensile stresses, and when the joint is open there is no transmission of stresses. This type of model can detect changes in the geometry caused by the relative movement of the elements that form the joints. A comparison between the numerical results of the two types of models was carried out; in this way, the hypotheses considered in the simplified models were validated. In addition, the numerical models were calibrated with laboratory-based experimental results from the literature for a typical tunnel built in Europe. In the second part of this work, a parametric study was performed using the simplified models, owing to their lower computational cost compared with the complex models. The parametric study examined the effect of material properties, the geometry of the tunnel, the arrangement of the longitudinal joints, and the coupling of the rings. It was concluded that the mechanical behavior of the segment and ring joints and the arrangement of the segment joints affect the global behavior of the lining, and that the coupling between rings modifies the structural capacity of the lining.
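
For illustration only, the sketch below shows how a longitudinal joint could be represented as a nonlinear rotational spring in a simplified frame model; the bilinear moment-rotation law used here is a generic stand-in (not Gladwell's exact expression), and the stiffness and yield values are hypothetical.

# Hypothetical sketch: a nonlinear rotational spring for a longitudinal joint.
# A bilinear moment-rotation law is used as a generic stand-in; parameters are illustrative.
import numpy as np

def joint_moment(rotation, k_initial=5.0e4, k_post=5.0e3, m_yield=150.0):
    """Moment (kN*m) transmitted by the joint for a given relative rotation (rad):
    full initial stiffness up to the 'yield' moment, reduced stiffness afterwards."""
    rot_yield = m_yield / k_initial
    rot = np.asarray(rotation, dtype=float)
    return np.where(np.abs(rot) <= rot_yield,
                    k_initial * rot,
                    np.sign(rot) * (m_yield + k_post * (np.abs(rot) - rot_yield)))

def tangent_stiffness(rotation, k_initial=5.0e4, k_post=5.0e3, m_yield=150.0):
    """Tangent stiffness used when updating the spring in a nonlinear solution step."""
    rot_yield = m_yield / k_initial
    return k_initial if abs(rotation) <= rot_yield else k_post

# Example: moment transmitted at 2 and 6 mrad of relative rotation
print(joint_moment([0.002, 0.006]))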

Keywords: numerical models, parametric study, segmental tunnels, structural response

Procedia PDF Downloads 205
24 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English

Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista

Abstract:

The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English, and to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned Old English-English parallel text with word-for-word correspondence that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available and the average amount of corpus annotation is low. With this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of the lexical databases that are relevant for these tasks. Most of the information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information that will be used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented in FileMaker. It is based on a concordance and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words. In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms automatically, by searching the index and the concordance for prefixes, stems and inflectional endings. The conclusions of this presentation stress the limits of the automatisation of dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
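
A toy sketch of the stem/ending lookup strategy described for the lemmatiser; the real Norna tool runs on FileMaker over the DOEC index and concordance, so the lemma index, prefixes, endings and matching rules below are illustrative assumptions only.

# Toy sketch of dictionary-based lemma assignment by stripping prefixes and
# inflectional endings and looking the remaining stem up in a lemma index.
# The index, prefixes and endings below are illustrative, not Norna's actual data.

LEMMA_INDEX = {"lufi": "lufian", "cyning": "cyning", "rid": "ridan"}   # stem -> lemma
PREFIXES = ("ge",)                                                     # e.g. past participles
ENDINGS = ("ende", "anne", "est", "ast", "ian", "on", "ed", "að", "an", "e", "a", "")

def lemmatise(form):
    """Return (lemma, matched_stem) or None if the form cannot be resolved."""
    candidates = [form]
    for prefix in PREFIXES:
        if form.startswith(prefix):
            candidates.append(form[len(prefix):])
    for cand in candidates:
        for ending in sorted(ENDINGS, key=len, reverse=True):
            if cand.endswith(ending):
                stem = cand[:len(cand) - len(ending)] if ending else cand
                if stem in LEMMA_INDEX:
                    return LEMMA_INDEX[stem], stem
    return None

print(lemmatise("lufiað"))   # -> ('lufian', 'lufi')
print(lemmatise("geridon"))  # -> ('ridan', 'rid')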

Keywords: corpus linguistics, historical linguistics, Old English, parallel corpus

Procedia PDF Downloads 173
23 Kinetic Evaluation of Sterically Hindered Amines under Partial Oxy-Combustion Conditions

Authors: Sara Camino, Fernando Vega, Mercedes Cano, Benito Navarrete, José A. Camino

Abstract:

Carbon capture and storage (CCS) technologies should play a relevant role in the transition towards low-carbon systems in the European Union by 2030. Partial oxy-combustion emerges as a promising CCS approach to mitigate anthropogenic CO₂ emissions. Its advantage with respect to other CCS technologies lies in the production of a flue gas with a higher CO₂ concentration than that provided by conventional air-firing processes. The presence of more CO₂ in the flue gas increases the driving force in the separation process and hence might lead to further reductions in the energy requirements of the overall CO₂ capture process. A more CO₂-concentrated flue gas should enhance CO₂ capture by chemical absorption, both in terms of solvent kinetics and CO₂ cyclic capacity. Both have an impact on the performance of the overall CO₂ absorption process by reducing the solvent flow-rate required for a given CO₂ removal efficiency. Lower solvent flow-rates decrease the reboiler duty during the regeneration stage and also reduce equipment size and pumping costs. Moreover, R&D activities in this field are focused on novel solvents and blends that provide lower CO₂ absorption enthalpies and therefore lower energy penalties associated with solvent regeneration. In this respect, sterically hindered amines are considered potential solvents for CO₂ capture: they provide a low energy requirement during the regeneration process due to their molecular structure. However, their absorption kinetics are slow and must be promoted by blending with faster solvents such as monoethanolamine (MEA) and piperazine (PZ). In this work, the kinetic behavior of two sterically hindered amines was studied under partial oxy-combustion conditions and compared with MEA, using a lab-scale semi-batch reactor. The CO₂ composition of the synthetic flue gas varied from 15%v/v, typical of conventional coal combustion, to 60%v/v, the maximum CO₂ concentration allowable for optimal partial oxy-combustion operation. The first solvent, 2-amino-2-methyl-1-propanol (AMP), showed a hybrid behavior, with fast kinetics and a low enthalpy of CO₂ absorption. The second solvent was isophorone diamine (IF), which has steric hindrance in one of its amino groups; its free amino group increases its cyclic capacity. In general, a higher CO₂ concentration in the flue gas accelerated the CO₂ absorption phenomena, producing higher CO₂ absorption rates, and the CO₂ loading also reached higher values in the experiments using the more CO₂-concentrated flue gas. The steric hindrance causes a hybrid behavior in these solvents, between fast and slow kinetic solvents. The kinetic rates observed in all the experiments carried out with AMP were higher than those of MEA but lower than those of IF. The kinetic enhancement experienced by AMP at high CO₂ concentration is slightly over 60%, compared with 70%-80% for IF. AMP also improved its CO₂ absorption capacity by 24.7% from 15%v/v to 60%v/v, almost double the improvement achieved by MEA. In the IF experiments, the CO₂ loading increased from 1.10 to 1.34 mole CO₂ per mole of solvent between 15%v/v and 60%v/v CO₂, an increase of more than 20%. This hybrid kinetic behavior makes AMP and IF promising solvents for partial oxy-combustion applications.
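
As a hedged illustration of how absorption rate and solvent loading might be extracted from semi-batch readings (not the authors' data-reduction routine), the sketch below integrates the CO₂ taken up from the gas phase; the variable names, gas-flow treatment and numbers are assumptions.

# Hypothetical sketch: estimate the CO2 absorption rate and the solvent loading
# from semi-batch readings of inlet/outlet CO2 mole fraction. Values are illustrative.
import numpy as np

def absorption_history(time_s, y_in, y_out, gas_flow_mol_s, n_amine_mol):
    """time_s: sample times (s); y_in, y_out: CO2 mole fractions in/out;
    gas_flow_mol_s: total inlet molar flow; n_amine_mol: moles of amine charged."""
    time_s, y_in, y_out = map(np.asarray, (time_s, y_in, y_out))
    rate = gas_flow_mol_s * (y_in - y_out)          # mol CO2 absorbed per second
    # trapezoidal integration of the rate to get cumulative moles absorbed
    absorbed = np.concatenate(([0.0],
                               np.cumsum(0.5 * (rate[1:] + rate[:-1]) * np.diff(time_s))))
    loading = absorbed / n_amine_mol                # mol CO2 per mol amine
    return rate, loading

t = np.array([0, 60, 120, 180])
rate, loading = absorption_history(t, y_in=np.full(4, 0.60),
                                   y_out=np.array([0.60, 0.45, 0.40, 0.38]),
                                   gas_flow_mol_s=0.01, n_amine_mol=2.0)
print(rate, loading[-1])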

Keywords: absorption, carbon capture, partial oxy-combustion, solvent

Procedia PDF Downloads 162
22 A 500 MWₑ Coal-Fired Power Plant Operated under Partial Oxy-Combustion: Methodology and Economic Evaluation

Authors: Fernando Vega, Esmeralda Portillo, Sara Camino, Benito Navarrete, Elena Montavez

Abstract:

The European Union aims to strongly reduce its CO₂ emissions from the energy and industrial sectors by 2030. The energy sector contributes more than two-thirds of the CO₂ emissions derived from anthropogenic activities. Although efforts are mainly focused on the use of renewables in the energy production sector, carbon capture and storage (CCS) remains a frontline option to reduce CO₂ emissions from industrial processes, particularly from fossil-fuel power plants and cement production. Among the most feasible and near-to-market CCS technologies, namely post-combustion and oxy-combustion, partial oxy-combustion is a novel concept that can potentially reduce the overall energy requirements of the CO₂ capture process. This technology consists of using a higher oxygen content in the oxidizer, which increases the CO₂ concentration of the flue gas once the fuel is burnt. The CO₂ is then separated from the flue gas downstream by means of a conventional CO₂ chemical absorption process. The production of a more CO₂-concentrated flue gas should enhance CO₂ absorption into the solvent, leading to further reductions in solvent flow-rate, equipment size, and the energy penalty related to solvent regeneration. This work evaluates a portfolio of CCS technologies applied to fossil-fuel power plants. For this purpose, an economic evaluation methodology was developed in detail to determine the main economic parameters of CO₂ emission removal, such as the levelized cost of electricity (LCOE) and the CO₂ captured and avoided costs. ASPEN Plus™ software was used to simulate the main units of the power plant and solve the energy and mass balances. Capital and investment costs were determined from the purchased equipment cost, engineering costs, and project and process contingencies; the annual capital cost and the operating and maintenance costs were then obtained. A complete energy balance was performed to determine the net power produced in each case. The baseline case is a supercritical 500 MWe coal-fired power plant using anthracite as fuel, without any CO₂ capture system. Four cases were proposed: conventional post-combustion capture, oxy-combustion, and partial oxy-combustion using two levels of oxygen-enriched air (40%v/v and 75%v/v). A CO₂ chemical absorption process using monoethanolamine (MEA) was used for CO₂ separation, whereas the O₂ requirement was met using a conventional air separation unit (ASU) based on Linde's cryogenic process. Results showed a 15% reduction in the total investment cost of the CO₂ separation process when partial oxy-combustion was used. Oxygen-enriched air production also reduced the investment cost required for the ASU by almost half in comparison with the oxy-combustion case. Partial oxy-combustion has a significant impact on the performance of both CO₂ separation and O₂ production technologies, and it can lead to further energy reductions with new developments in both CO₂ and O₂ separation processes.
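
For reference, a hedged sketch of the standard cost metrics used in this kind of evaluation (LCOE and cost of CO₂ avoided); the capital-recovery treatment and every numerical input below are assumptions for illustration and do not reproduce the study's inputs or results.

# Hypothetical sketch of the economic metrics: LCOE and cost of CO2 avoided.
# Standard textbook formulations; the numbers below are illustrative only.

def annuity_factor(rate, years):
    """Capital recovery factor used to annualize the investment."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex, fixed_om, variable_om_per_mwh, fuel_per_mwh,
         net_mwh_per_year, rate=0.08, years=25):
    """Levelized cost of electricity in $/MWh."""
    annual_capital = capex * annuity_factor(rate, years)
    return (annual_capital + fixed_om) / net_mwh_per_year + variable_om_per_mwh + fuel_per_mwh

def cost_of_co2_avoided(lcoe_capture, lcoe_ref, emis_capture_t_mwh, emis_ref_t_mwh):
    """(LCOE_capture - LCOE_ref) / (emissions_ref - emissions_capture), in $/tCO2."""
    return (lcoe_capture - lcoe_ref) / (emis_ref_t_mwh - emis_capture_t_mwh)

# Illustrative example (not the study's figures):
base = lcoe(capex=1.2e9, fixed_om=3.0e7, variable_om_per_mwh=4.0, fuel_per_mwh=22.0,
            net_mwh_per_year=3.5e6)
capture = lcoe(capex=2.0e9, fixed_om=5.0e7, variable_om_per_mwh=7.0, fuel_per_mwh=28.0,
               net_mwh_per_year=2.9e6)
print(round(base, 1), round(capture, 1),
      round(cost_of_co2_avoided(capture, base, 0.10, 0.80), 1))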

Keywords: carbon capture, cost methodology, economic evaluation, partial oxy-combustion

Procedia PDF Downloads 120
21 Recommendations to Improve Classification of Grade Crossings in Urban Areas of Mexico

Authors: Javier Alfonso Bonilla-Chávez, Angélica Lozano

Abstract:

In North America, more than 2,000 people die annually in accidents related to railroad tracks. In 2020, collisions at grade crossings were the main cause of deaths related to railway accidents in Mexico. Railway networks constantly interact with motor transport users, cyclists, and pedestrians, mainly at grade crossings, where vulnerability and accident risk are greatest. Usually, accidents at grade crossings are directly related to risky behavior and non-compliance with regulations by motorists, cyclists, and pedestrians, especially in developing countries. Around the world, countries classify these crossings in different ways. In Mexico, crossings are classified as types A, B, and C according to their dangerousness (high, medium, or low), and a different set of audible and visual signals, gates, and horizontal and vertical signage is recommended for each type. This classification is based on a weighting scheme, but regrettably it is not explained how the weight values were obtained. A review of the variables and of the current approach to grade crossing classification is required, since it is inadequate for some crossings. In contrast, North America (USA and Canada) and European countries consider a broader classification, so that each crossing is addressed more precisely and equipment costs are adjusted accordingly. The lack of a proper classification could lead to equipment cost overruns and deficient operation. To exemplify the lack of a good classification, six crossings are studied, three located in rural areas of Mexico and three in Mexico City. These cases show the need to improve the current regulations, improve the existing infrastructure, and implement technological systems, including informative signs bearing the nomenclature of the crossing involved and a direct telephone line for reporting emergencies. Such an implementation is unaffordable for most municipal governments. In addition, an inventory of the most dangerous grade crossings in urban and rural areas must be compiled. An approach for improving the classification of grade crossings is then suggested, based on criteria design, the characteristics of adjacent roads or intersections that can influence traffic flow through the crossing, accidents related to motorized and non-motorized vehicles, land use and land management, type of area, and the services and economic activities in the zone where the grade crossing is located. An expanded classification of grade crossings in Mexico could reduce accidents and improve the efficiency of the railroad.
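
To make the discussion of weighting concrete, the sketch below shows a generic criteria-weighted index that assigns a crossing to type A, B or C; the criteria, weights and thresholds are purely illustrative assumptions and are not the values used in the Mexican regulation or proposed by the authors.

# Hypothetical sketch of a criteria-weighted grade-crossing index.
# Criteria, weights and thresholds are illustrative assumptions only.

WEIGHTS = {
    "train_traffic": 0.25,      # trains per day (normalized 0-1)
    "road_traffic": 0.25,       # vehicles/pedestrians per day (normalized 0-1)
    "accident_history": 0.20,   # accidents in recent years (normalized 0-1)
    "visibility": 0.15,         # 1 = poor visibility, 0 = good
    "land_use_intensity": 0.15, # schools, markets, industry nearby (normalized 0-1)
}

def classify_crossing(scores, thresholds=(0.66, 0.33)):
    """scores: dict of normalized criterion values in [0, 1]. Returns 'A', 'B' or 'C'."""
    index = sum(WEIGHTS[name] * scores.get(name, 0.0) for name in WEIGHTS)
    if index >= thresholds[0]:
        return "A"   # high dangerousness: gates plus audible and visual signals
    if index >= thresholds[1]:
        return "B"   # medium dangerousness
    return "C"       # low dangerousness: passive signage

print(classify_crossing({"train_traffic": 0.8, "road_traffic": 0.9,
                         "accident_history": 0.6, "visibility": 1.0,
                         "land_use_intensity": 0.7}))  # -> 'A'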

Keywords: accidents, grade crossing, railroad, traffic safety

Procedia PDF Downloads 79
20 Validation and Fit of a Biomechanical Bipedal Walking Model for Simulation of Loads Induced by Pedestrians on Footbridges

Authors: Dianelys Vega, Carlos Magluta, Ney Roitman

Abstract:

The simulation of loads induced by walking people on civil engineering structures is still challenging. It has been the focus of considerable research worldwide in recent decades due to the increasing number of reported vibration problems in pedestrian structures. One of the key issues in the design of slender structures is Human-Structure Interaction (HSI): how moving people interact with structures, and the effect this has on their dynamic response, is still not well understood. Relying on calibrated pedestrian models that accurately estimate the structural response therefore becomes extremely important. However, because of the complexity of the pedestrian mechanisms, there are still gaps in knowledge, and more reliable models need to be investigated. Several authors have proposed biodynamic models to represent the pedestrian, but whether these models provide a consistent approximation to physical reality still needs to be studied. This work contributes to a better understanding of this phenomenon by providing an experimental validation of a pedestrian walking model and a human-structure interaction model. A two-dimensional bipedal walking model was used to represent the pedestrians, along with an interaction model that was applied to a prototype footbridge; the numerical models were implemented in MATLAB. In parallel, experimental tests were conducted in the Structures Laboratory of COPPE (LabEst), at the Federal University of Rio de Janeiro. Different test subjects were asked to walk at different speeds over instrumented force platforms to measure the walking force, while an accelerometer placed at the waist of each subject simultaneously measured the acceleration of the center of mass. By fitting the step force and the center of mass acceleration through successive numerical simulations, the model parameters were estimated. In addition, experimental data from a pedestrian walking on a flexible structure were used to validate the interaction model, through comparison of the measured and simulated structural response at mid-span. It was found that the pedestrian model adequately reproduced the ground reaction force and the center of mass acceleration for normal and slow walking speeds, but was less accurate at faster speeds. Numerical simulations showed that biomechanical parameters such as leg stiffness and damping affect the ground reaction force, and that the higher the walking speed, the greater the leg length of the model. Moreover, the interaction model was also able to estimate the structural response with good approximation, remaining in the same order of magnitude as the measured response. Some differences in the frequency spectra were observed, which are presumed to be due to the perfectly periodic loading representation, which neglects intra-subject variability. In conclusion, this work showed that the bipedal walking model can be used to represent walking pedestrians, since it efficiently reproduced the center of mass movement and the ground reaction forces produced by humans. Furthermore, although more experimental validation is required, the interaction model also seems to be a useful framework for estimating the dynamic response of structures under loads induced by walking pedestrians.
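
A schematic example of the parameter-fitting step described above (matching a simulated walking force to a measured record by least squares); the harmonic load model used here is a strong simplification of the bipedal model in the paper, and all values, including the synthetic "measured" record, are illustrative assumptions.

# Hypothetical sketch: fit a simplified walking-force model to a measured ground
# reaction force record by least squares. Not the paper's bipedal model.
import numpy as np
from scipy.optimize import curve_fit

def walking_force(t, weight, step_freq, dlf1, dlf2, phase2):
    """Vertical force: static weight plus the first two harmonics of the step rate."""
    return weight * (1.0
                     + dlf1 * np.sin(2 * np.pi * step_freq * t)
                     + dlf2 * np.sin(4 * np.pi * step_freq * t + phase2))

# Synthetic 'measured' record standing in for force-platform data:
t = np.linspace(0, 4, 800)
measured = walking_force(t, 700.0, 1.9, 0.40, 0.10, 0.6) + np.random.normal(0, 10.0, t.size)

p0 = [650.0, 2.0, 0.3, 0.05, 0.0]           # initial parameter guesses
params, _ = curve_fit(walking_force, t, measured, p0=p0)
print(dict(zip(["weight", "step_freq", "dlf1", "dlf2", "phase2"], np.round(params, 3))))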

Keywords: biodynamic models, bipedal walking models, human induced loads, human structure interaction

Procedia PDF Downloads 97
19 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles

Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo

Abstract:

Non-Cooperative Target Identification has become a key research domain in the defense industry, since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. To face this problem, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, namely the test set, with the profiles included in a pre-loaded database, namely the training set. Classification is improved by using Singular Value Decomposition, since it allows each aircraft to be modeled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, hence reducing unwanted information such as noise. Singular Value Decomposition permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. In this way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two SVD-based metrics, F1 and F2, are used in the identification process. In the case of F2, the angle is weighted, since the leading singular vectors contribute most to the formation of the target signal; F1, by contrast, simply uses the unweighted angle. In order to build a wide database of radar signatures and evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft along defined trajectories taken from an actual measurement. Given the nature of the datasets, the main drawback of using simulated profiles instead of measured profiles is that the former imply an ideal identification scenario: measured profiles suffer from noise, clutter and other unwanted information, whereas simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so to assess the feasibility of the approach, noise was added before the creation of the test set. The identification results obtained with the unweighted and weighted metrics are analysed to determine which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments on profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance improved when weighting was applied. Future experiments with larger sets are expected to be conducted, with the aim of finally using measured profiles as test sets in a real hostile situation.
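
As a hedged illustration of the subspace-angle idea (not the authors' exact F1/F2 definitions), the sketch below builds a signal subspace per target from training profiles via SVD and classifies a test profile by the smallest angle to each subspace, with an optional singular-value weighting as a stand-in for the weighted metric.

# Hypothetical sketch: SVD-based subspace classification of range profiles.
# Each target's training profiles (columns of X) define a signal subspace; a test
# profile is assigned to the target whose subspace forms the smallest angle with it.
# The weighting scheme is illustrative and not the paper's exact F2 metric.
import numpy as np

def signal_subspace(X, rank):
    """X: (n_range_bins, n_profiles). Returns the leading singular vectors and values."""
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank], s[:rank]

def subspace_angle(profile, U, weights=None):
    """Angle between a unit-normalized test profile and the subspace spanned by U."""
    p = profile / np.linalg.norm(profile)
    coeffs = U.T @ p                        # projection coefficients onto the subspace
    if weights is not None:
        coeffs = coeffs * (weights / weights.max())
    cos_theta = np.clip(np.linalg.norm(coeffs), 0.0, 1.0)
    return np.arccos(cos_theta)

def identify(profile, subspaces, weighted=False):
    """subspaces: dict target -> (U, s). Returns the target with the smallest angle."""
    angles = {t: subspace_angle(profile, U, s if weighted else None)
              for t, (U, s) in subspaces.items()}
    return min(angles, key=angles.get)

# Usage sketch, assuming train = {target: (n_bins, n_profiles) array of training HRRPs}:
# subspaces = {t: signal_subspace(X, rank=5) for t, X in train.items()}
# label = identify(test_profile, subspaces, weighted=True)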

Keywords: HRRP, NCTI, simulated/synthetic database, SVD

Procedia PDF Downloads 328